v0.1.57

Options for Quality Metrics


Last updated 2 years ago

An example of setting custom options in the Data Drift and Probabilistic Classification Performance reports on the Wine Quality dataset is available as a Google Colab notebook.

Available Options

These options apply to different plots in the Evidently reports: Data Drift, Categorical Target Drift, Numerical Target Drift, Classification Performance, and Probabilistic Classification Performance.

You can specify the following parameters:

  • conf_interval_n_sigmas: int. Default = 1.

    • Defines the width of the confidence interval depicted on the plots. The confidence level is set in sigmas (standard deviations).

    • Applies to the feature and target distribution plots in the Data Drift and Numerical Target Drift reports.

  • classification_threshold: float. Default = 0.5.

    • Defines the classification threshold for binary probabilistic classification.

    • Applies to the Probabilistic Classification Performance report.

  • cut_quantile: tuple[str, float] or dict[str, tuple[str, float]]. Default = None.

    • Cuts the data above the given quantile from the histogram plot if the side parameter is 'right'.

    • Cuts the data below the given quantile from the histogram plot if the side parameter is 'left'.

    • Cuts the data below the given quantile and above 1 - the given quantile from the histogram plot if the side parameter is 'two-sided'.

    • The data used for metric calculation does not change.

    • Applies to all features (if passed as a tuple) or to specific features (if passed as a dictionary).

    • Applies to the Categorical Target Drift, Probabilistic Classification Performance, and Classification Performance reports; affects the tables with Target/Prediction behavior by feature and Classification Quality by Feature.
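For intuition about conf_interval_n_sigmas, an n-sigma confidence band around a sample mean can be sketched in plain Python (an illustration of the concept only, not Evidently's internal plotting code; the function name is hypothetical):

```python
import statistics

def sigma_interval(values, n_sigmas=1):
    """Return (lower, upper) bounds: mean +/- n_sigmas * standard deviation."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population standard deviation
    return (mean - n_sigmas * sd, mean + n_sigmas * sd)

# n_sigmas=1 covers roughly 68% of a normal distribution, n_sigmas=3 about 99.7%,
# so a larger value draws a wider band on the plot.
narrow = sigma_interval([1, 2, 3, 4, 5], n_sigmas=1)
wide = sigma_interval([1, 2, 3, 4, 5], n_sigmas=3)
```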
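For intuition about classification_threshold, this is how a threshold turns predicted probabilities into binary labels (an illustrative sketch with a hypothetical helper, not Evidently internals):

```python
def apply_threshold(probabilities, threshold=0.5):
    """Label 1 where the predicted probability of the positive class
    meets or exceeds the threshold, else 0."""
    return [1 if p >= threshold else 0 for p in probabilities]

probs = [0.3, 0.55, 0.8, 0.95]
apply_threshold(probs)                 # default 0.5 -> [0, 1, 1, 1]
apply_threshold(probs, threshold=0.8)  # stricter    -> [0, 0, 1, 1]
```

Raising the threshold makes the positive class rarer, which shifts the quality metrics (precision up, recall down, typically) that the report computes.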
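The effect of cut_quantile on a histogram can be pictured with plain Python (illustrative only; the names are hypothetical, and as noted above Evidently trims only the plotted data, not the metric inputs):

```python
def cut_quantile(values, side, q):
    """Drop values outside the given quantile(s), as a trimmed plot would."""
    ordered = sorted(values)

    def quantile(p):
        # simple nearest-rank quantile, for illustration
        return ordered[min(int(p * len(ordered)), len(ordered) - 1)]

    if side == 'right':
        return [v for v in values if v <= quantile(q)]
    if side == 'left':
        return [v for v in values if v >= quantile(q)]
    if side == 'two-sided':
        lo, hi = quantile(q), quantile(1 - q)
        return [v for v in values if lo <= v <= hi]
    raise ValueError("side must be 'right', 'left' or 'two-sided'")

data = list(range(100))
trimmed = cut_quantile(data, 'two-sided', 0.05)  # drops both 5% tails
```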

How to define Quality Metrics Options

1. Define a QualityMetricsOptions object.

from evidently.options import QualityMetricsOptions

options = QualityMetricsOptions(
    conf_interval_n_sigmas=3,
    classification_threshold=0.8,
    cut_quantile={
        'feature_1': ('left', 0.01),
        'feature_2': ('right', 0.95),  # assuming the 'right' side, which the original snippet omitted
        'feature_3': ('two-sided', 0.05),
    },
)

2. Pass it to the Dashboard class:

from evidently.dashboard import Dashboard
from evidently.dashboard.tabs import DataDriftTab, ProbClassificationPerformanceTab

dashboard = Dashboard(tabs=[DataDriftTab(), ProbClassificationPerformanceTab()],
                      options=[options])