OSS Quickstart - Data and ML monitoring

Run your first evaluation using Evidently open-source, for tabular data.


You are looking at the old Evidently documentation: this API is available with versions 0.6.7 or lower and Evidently Cloud v1. Check the newer docs version here.

It's best to run this example in Jupyter Notebook or Google Colab so that you can render HTML Reports directly in a notebook cell.

Installation

Install Evidently using the pip package manager:

!pip install evidently

Imports

Import the Evidently components and a toy “Iris” dataset:

import pandas as pd

from sklearn import datasets

from evidently.test_suite import TestSuite
from evidently.test_preset import DataStabilityTestPreset

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

iris_data = datasets.load_iris(as_frame=True)
iris_frame = iris_data.frame

Run a Test Suite

Split the data into two batches. Run a set of pre-built data quality Tests to evaluate the current_data batch:

This automatically generates Tests on the share of nulls, out-of-range values, and more, with test conditions derived from the first "reference" dataset.

Get a Report

Get a Data Drift Report to see whether the data distributions shifted between the two datasets:

What's next?

Want more details on Reports and Test Suites? See an in-depth tutorial.

Tutorial - Reports and Tests

Want to set up monitoring? Send the evaluation results to Evidently Cloud for analysis and tracking. See the Quickstart:

Quickstart - LLM evaluations

Working with LLMs? Check the Quickstart:

Quickstart - LLM evaluations

Need help? Ask in our Discord community.
