
All Tutorials

Code examples and tutorials.



You are looking at the old Evidently documentation: this API is available in versions 0.6.7 and lower. Check the newer version here.

Quick Start

Check the short Quickstart examples here.

Get Started Tutorials

Introductory tutorials that walk you through the basic functionality step by step.

| Title | Guide | Code |
| --- | --- | --- |
| LLM Evaluation | Tutorial | Jupyter notebook |
| Data & ML Monitoring | Tutorial | Jupyter notebook |
| LLM Tracing | Tutorial | Jupyter notebook |
| Intro to Reports & Test Suites (OSS) | Tutorial | Jupyter notebook |
| Self-host ML monitoring Dashboard (OSS) | Tutorial | Jupyter notebook |

Example Reports and Tests

These simple examples show different local evaluations (Metrics, Tests, and Presets) for tabular data and ML.

| Title | Code example | Contents |
| --- | --- | --- |
| Evidently Test Presets | Jupyter notebook | Pre-built Test Suites on tabular data: Data Drift, Data Stability, Data Quality, NoTargetPerformance, Regression, Classification (multi-class, binary, binary top-K). |
| Evidently Tests | Jupyter notebook | All individual Tests (50+) that one can use to create a custom Test Suite, with tabular data examples. How to set test conditions and parameters. |
| Evidently Metric Presets | Jupyter notebook | All pre-built Reports: Data Drift, Target Drift, Data Quality, Regression, Classification. |
| Evidently Metrics | Jupyter notebook | All individual Metrics (30+) that one can use to create a custom Report. How to set simple metric parameters. |
| Evidently LLM Metrics | Jupyter notebook | Evaluations for text data and LLMs. |

Tutorials - LLM

| Title | Tutorial |
| --- | --- |
| How to create LLM judge evaluator | Tutorial |
| How to run regression testing for LLM products | Tutorial |

Tutorials - ML

To better understand Evidently use cases, refer to the detailed tutorials, each accompanied by a blog post.

| Title | Code example | Blog post |
| --- | --- | --- |
| Understand ML model decay in production (regression example) | Jupyter notebook | How to break a model in 20 days. A tutorial on production model analytics. |
| Compare two ML models before deployment (classification example) | Jupyter notebook | What Is Your Model Hiding? A Tutorial on Evaluating ML Models. |
| Evaluate and visualize historical data drift | Jupyter notebook | How to detect, evaluate and visualize historical drifts in the data. |
| Monitor NLP models in production | Colab | Monitoring NLP models in production: a tutorial on detecting drift in text data. |
| Create ML model cards | Jupyter notebook | A simple way to create ML Model Cards in Python. |
| Use descriptors to monitor text data | Jupyter notebook | Monitoring unstructured data for LLM and NLP with text descriptors. |

How to examples

For code examples on specific functionality, check the How-To examples: https://github.com/evidentlyai/evidently/tree/ad71e132d59ac3a84fce6cf27bd50b12b10d9137/examples/how_to_questions

Integrations

To see how to integrate Evidently into your prediction pipelines and use it with other tools, refer to the Evidently integrations.

For LLM and text metrics, check the LLM evaluation tutorial.

You can find more examples in the Community Examples repository.
