Embeddings drift parameters

How to customize data drift detection for embeddings.


You are looking at the old Evidently documentation: this API is available in versions 0.6.7 and lower. Check the newer docs version.

Prerequisites:

  • You know how to generate Reports or Test Suites with default parameters.

  • You know how to pass custom parameters for Reports or Test Suites.

  • You know how to use Column Mapping to map embeddings in the input data.

Code example

You can refer to an example how-to notebook that shows how to pass parameters for different embeddings drift detection methods: evidently/examples/how_to_questions/how_to_calculate_embeddings_drift.ipynb in the evidentlyai/evidently repository on GitHub.

Default

When you calculate embeddings drift, Evidently automatically applies the default drift detection method (“model”).

In Reports:

from evidently.report import Report
from evidently.metrics import EmbeddingsDriftMetric

report = Report(metrics=[
    EmbeddingsDriftMetric('small_subset')
])

In Test Suites:

from evidently.test_suite import TestSuite
from evidently.tests import TestEmbeddingsDrift

tests = TestSuite(tests=[
    TestEmbeddingsDrift(embeddings_name='small_subset')
])

The same default applies inside Presets, such as DataDriftPreset.
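For instance, here is a minimal sketch of running the Preset on embeddings mapped via ColumnMapping. The toy data and column names are illustrative, not from the original docs; the embeddings argument of ColumnMapping maps a set name to the list of columns that store the embedding components:

import numpy as np
import pandas as pd

from evidently import ColumnMapping
from evidently.metric_preset import DataDriftPreset
from evidently.report import Report

# Toy data: three columns that together store one embedding per row
rng = np.random.default_rng(0)
cols = ['emb_0', 'emb_1', 'emb_2']
reference = pd.DataFrame(rng.normal(size=(100, 3)), columns=cols)
current = pd.DataFrame(rng.normal(loc=0.5, size=(100, 3)), columns=cols)

# Group the embedding columns under the set name 'small_subset'
column_mapping = ColumnMapping(embeddings={'small_subset': cols})

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current,
           column_mapping=column_mapping)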

Embedding parameters - Metrics and Tests

You can override the defaults by passing a custom drift_method parameter to the relevant Metric or Test. You can define the embeddings drift detection method, the threshold, or both.

Pass the drift_method parameter:

from evidently.report import Report
from evidently.metrics import EmbeddingsDriftMetric
from evidently.metrics.data_drift.embedding_drift_methods import model

report = Report(metrics=[
    EmbeddingsDriftMetric('small_subset',
                          drift_method=model()
                         )
])
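The same parameter can be passed in a Test Suite. A sketch, assuming TestEmbeddingsDrift accepts the drift_method argument in the same way as the Metric:

from evidently.test_suite import TestSuite
from evidently.tests import TestEmbeddingsDrift
from evidently.metrics.data_drift.embedding_drift_methods import model

tests = TestSuite(tests=[
    TestEmbeddingsDrift(embeddings_name='small_subset',
                        drift_method=model())
])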

Embedding parameters - Presets

When you use NoTargetPerformanceTestPreset, DataDriftTestPreset, or DataDriftPreset, you can specify which sets of embedding columns to include using the embeddings parameter, and set the drift detection method per set using embeddings_drift_method.

By default, the Presets will include all columns mapped as containing embeddings in column_mapping.

To exclude columns with embeddings:

embeddings = []

To specify which sets of columns to include (with the default drift detection method):

embeddings = ['set1', 'set2']

To specify which sets of columns to include and set the method for each:

embeddings = ['set1', 'set2']
embeddings_drift_method = {'set1': model(), 'set2': ratio()}
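Putting it together, a sketch of a Report that passes both parameters to DataDriftPreset (the set names set1 and set2 are placeholders for your mapped embedding sets):

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset
from evidently.metrics.data_drift.embedding_drift_methods import model, ratio

report = Report(metrics=[
    DataDriftPreset(
        embeddings=['set1', 'set2'],
        embeddings_drift_method={'set1': model(), 'set2': ratio()},
    )
])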

Embedding drift detection methods

Currently, four embeddings drift detection methods are available.

drift_method=model (Default)

  • Trains a binary classifier model to distinguish between embeddings from the “current” and “reference” distributions.

  • Returns ROC AUC as a drift_score.

  • Drift detected when drift_score > threshold or when drift_score > ROC AUC of the random classifier at a set quantile_probability.

  • Default threshold: 0.55 (ROC AUC).

  • Default quantile_probability: 0.95. Applies when bootstrap is True; default True if <= 1000 objects.

drift_method=ratio

  • Computes the distribution drift between individual embedding components using any of the tabular numerical drift detection methods available in Evidently.

  • Default tabular drift detection method: Wasserstein distance, with the 0.1 threshold.

  • Returns the share of drifted embedding components as drift_score.

  • Drift detected when drift_score > threshold.

  • Default threshold: 0.2 (share of drifted embedding components).

drift_method=distance

  • Computes the distance between average embeddings in “current” and “reference” datasets using a specified distance metric (euclidean, cosine, cityblock, chebyshev). Default: euclidean.

  • Returns the distance metric value as drift_score.

  • Drift detected when drift_score > threshold or when drift_score > obtained distance in reference at a set quantile_probability.

  • Default threshold: 0.2 (relevant for Euclidean distance).

  • Default quantile_probability: 0.95. Applies when bootstrap is True; default True if <= 1000 objects.

drift_method=mmd

  • Computes the Maximum Mean Discrepancy (MMD).

  • Returns the MMD value as drift_score.

  • Drift detected when drift_score > threshold or when drift_score > obtained MMD values in reference at a set quantile_probability.

  • Default threshold: 0.015 (MMD).

  • Default quantile_probability: 0.95. Applies when bootstrap is True; default True if <= 1000 objects.

If you specify an embedding drift detection method but do not pass additional parameters, defaults will apply.

You can also specify parameters for any chosen method; since the methods differ, each has its own set of parameters. Note that you pass the parameters directly to the chosen drift detection method, not to the Metric or Test.

Model-based (“model”)

from evidently.report import Report
from evidently.metrics import EmbeddingsDriftMetric
from evidently.metrics.data_drift.embedding_drift_methods import model

report = Report(metrics=[
    EmbeddingsDriftMetric('small_subset',
                          drift_method=model(
                              threshold=0.55,
                              bootstrap=None,
                              quantile_probability=0.95,
                              pca_components=None,
                          )
                         )
])
| Parameter | Description |
| --- | --- |
| threshold | Sets the drift detection threshold (ROC AUC). Drift is detected when drift_score > threshold. Applies when bootstrap != True. Default: 0.55. |
| bootstrap (optional) | Boolean (True/False): whether to apply statistical hypothesis testing. If applied, the ROC AUC of the classifier is compared to the ROC AUC of a random classifier at a set percentile. The calculation is repeated 1000 times with randomly assigned target class probabilities, which produces a distribution of random ROC AUC scores with a mean of 0.5. We then take the 95th percentile (default) of this distribution and compare it to the ROC AUC score of the classifier. If the classifier score is higher, drift is detected. Default: True if <= 1000 objects, False if > 1000 objects. |
| quantile_probability (optional) | Sets the percentile of the possible ROC AUC values of the random classifier to compare against. Applies when bootstrap is True. Default: 0.95. |
| pca_components (optional) | The number of PCA components. If set, dimensionality reduction is applied to project the data to n-dimensional space based on pca_components. Default: None. |
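To make the bootstrap logic concrete, here is a schematic sketch of the comparison described above. This is not Evidently's internal code, and the classifier score is a made-up value:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_objects = 800              # bootstrap applies by default for <= 1000 objects
classifier_roc_auc = 0.61    # hypothetical score of the "current vs reference" classifier

# Distribution of ROC AUC scores of a random classifier:
# fixed labels, 1000 draws of random predicted probabilities
labels = np.repeat([0, 1], n_objects // 2)
random_scores = [roc_auc_score(labels, rng.random(n_objects)) for _ in range(1000)]

# Drift is detected if the classifier beats the 95th percentile
# of the random scores (quantile_probability = 0.95)
drift_detected = classifier_roc_auc > np.quantile(random_scores, 0.95)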

Maximum mean discrepancy (“mmd”)

from evidently.report import Report
from evidently.metrics import EmbeddingsDriftMetric
from evidently.metrics.data_drift.embedding_drift_methods import mmd

report = Report(metrics=[
    EmbeddingsDriftMetric('small_subset',
                          drift_method=mmd(
                              threshold=0.015,
                              bootstrap=None,
                              quantile_probability=0.95,
                              pca_components=None,
                          )
                         )
])
| Parameter | Description |
| --- | --- |
| threshold | Sets the MMD threshold for drift detection. Drift is detected when drift_score > threshold. Applies when bootstrap != True. Default: 0.015 (MMD). |
| bootstrap (optional) | Boolean (True/False): whether to apply statistical hypothesis testing. If applied, the MMD between reference and current (mmd_0) is tested against possible MMD values in reference: we randomly split the reference data into two parts and compute the MMD between them (mmd_i), repeating the calculation 100 times. This produces a distribution of MMD values for the reference dataset. We then take the 95th percentile (default) of this distribution and compare it to the MMD between the reference and current datasets. If mmd_0 is higher, drift is detected. Default: True if <= 1000 objects, False if > 1000 objects. |
| quantile_probability (optional) | Sets the percentile of the possible MMD values in reference to compare against. Applies when bootstrap == True. Default: 0.95. |
| pca_components (optional) | The number of PCA components. If set, dimensionality reduction is applied to project the data to n-dimensional space based on pca_components. Default: None. |
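For intuition, a self-contained sketch of an MMD computation on toy embeddings using an RBF kernel. Evidently's exact kernel and bandwidth choices are not documented here, so treat this as schematic:

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd_rbf(x, y, gamma=1.0):
    # Biased estimate of squared MMD: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return (rbf_kernel(x, x, gamma=gamma).mean()
            + rbf_kernel(y, y, gamma=gamma).mean()
            - 2 * rbf_kernel(x, y, gamma=gamma).mean())

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 16))           # toy reference embeddings
current = rng.normal(loc=0.3, size=(500, 16))    # toy current embeddings, shifted

drift_score = mmd_rbf(reference, current)
drift_detected = drift_score > 0.015             # default threshold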

Share of drifted embedding components (“ratio”)

from evidently.report import Report
from evidently.metrics import EmbeddingsDriftMetric
from evidently.metrics.data_drift.embedding_drift_methods import ratio

report = Report(metrics=[
    EmbeddingsDriftMetric('small_subset',
                          drift_method=ratio(
                              component_stattest='wasserstein',
                              component_stattest_threshold=0.1,
                              threshold=0.2,
                              pca_components=None,
                          )
                         )
])
| Parameter | Description |
| --- | --- |
| component_stattest (optional) | Sets the tabular drift detection method (any of the tabular drift detection methods for numerical features available in Evidently). Default: wasserstein. |
| component_stattest_threshold (optional) | Sets the drift detection threshold for individual embedding components. Drift is detected when drift_score > component_stattest_threshold for distance/divergence metrics (where the threshold is the metric value), or when drift_score < component_stattest_threshold for statistical tests (where the threshold is the p-value). Default: 0.1 (relevant for Wasserstein). |
| threshold (optional) | Sets the dataset-level drift detection threshold (share of drifted embedding components). Default: 0.2. |
| pca_components (optional) | The number of PCA components. If set, dimensionality reduction is applied to project the data to n-dimensional space based on pca_components. Default: None. |
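Schematically, the per-component logic looks like this on toy data. Note that Evidently's default “wasserstein” method uses a normalized distance, so this raw-distance sketch is illustrative only:

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 16))           # toy reference embeddings
current = rng.normal(loc=0.5, size=(500, 16))    # toy current embeddings, shifted

# Flag each embedding component whose distance exceeds the component threshold
drifted = [wasserstein_distance(reference[:, i], current[:, i]) > 0.1
           for i in range(reference.shape[1])]

drift_score = float(np.mean(drifted))            # share of drifted components
drift_detected = drift_score > 0.2               # default dataset-level threshold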

Distance-based methods (“distance”)

from evidently.report import Report
from evidently.metrics import EmbeddingsDriftMetric
from evidently.metrics.data_drift.embedding_drift_methods import distance

report = Report(metrics=[
    EmbeddingsDriftMetric('small_subset',
                          drift_method=distance(
                              dist='euclidean',  # "euclidean", "cosine", "cityblock" or "chebyshev"
                              threshold=0.2,
                              pca_components=None,
                              bootstrap=None,
                              quantile_probability=0.95
                          )
                         )
])
| Parameter | Description |
| --- | --- |
| dist (optional) | Sets the distance metric for drift detection. Available: euclidean, cosine, cityblock (Manhattan distance), chebyshev. Default: euclidean. |
| threshold (optional) | Sets the drift detection threshold. Drift is detected when drift_score > threshold. Applies when bootstrap != True. Default: 0.2 (relevant for euclidean distance). |
| bootstrap (optional) | Boolean (True/False): whether to apply statistical hypothesis testing. If applied, the distance between reference and current is tested against possible distance values in reference: we randomly split the reference data into two parts and compute the distance between them, repeating the calculation 100 times. This produces a distribution of distance values for the reference dataset. We then take the 95th percentile (default) of this distribution and compare it to the distance between the reference and current datasets. If the reference-to-current distance is higher, drift is detected. Default: True if <= 1000 objects, False if > 1000 objects. |
| quantile_probability (optional) | Sets the percentile of the possible distance values in reference to compare against. Applies when bootstrap == True. Default: 0.95. |
| pca_components (optional) | The number of PCA components. If set, dimensionality reduction is applied to project the data to n-dimensional space based on pca_components. Default: None. |
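Schematically, the method reduces to comparing mean embeddings; a toy sketch, not Evidently's internal code:

import numpy as np
from scipy.spatial import distance

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 16))           # toy reference embeddings
current = rng.normal(loc=0.5, size=(500, 16))    # toy current embeddings, shifted

# Distance between the average embeddings of the two datasets
drift_score = distance.euclidean(reference.mean(axis=0), current.mean(axis=0))
drift_detected = drift_score > 0.2               # default threshold for euclidean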
