
evidently.metric_preset


Last updated 2 months ago

class ClassificationPreset(columns: Optional[List[str]] = None, probas_threshold: Optional[float] = None, k: Optional[int] = None)

Bases: MetricPreset

Metrics preset for classification performance.

Contains metrics:

  • ClassificationQualityMetric

  • ClassificationClassBalance

  • ClassificationConfusionMatrix

  • ClassificationQualityByClass

Attributes:

columns : Optional[List[str]]

k : Optional[int]

probas_threshold : Optional[float]

Methods:

generate_metrics(data: InputData, columns: DatasetColumns)

class DataDriftPreset(columns: Optional[List[str]] = None, drift_share: float = 0.5, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)

Bases: MetricPreset

Metric Preset for Data Drift analysis.

Contains metrics:

  • DatasetDriftMetric

  • DataDriftTable

Attributes:

cat_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]

cat_stattest_threshold : Optional[float]

columns : Optional[List[str]]

drift_share : float

num_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]

num_stattest_threshold : Optional[float]

per_column_stattest : Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]]

per_column_stattest_threshold : Optional[Dict[str, float]]

stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]

stattest_threshold : Optional[float]

Methods:

generate_metrics(data: InputData, columns: DatasetColumns)

class DataQualityPreset(columns: Optional[List[str]] = None)

Bases: MetricPreset

Metric preset for Data Quality analysis.

Contains metrics:

  • DatasetSummaryMetric

  • ColumnSummaryMetric for each column

  • DatasetMissingValuesMetric

  • DatasetCorrelationsMetric

Parameters:

columns – list of columns for analysis.

Attributes:

columns : Optional[List[str]]

Methods:

generate_metrics(data: InputData, columns: DatasetColumns)

class RegressionPreset(columns: Optional[List[str]] = None)

Bases: MetricPreset

Metric preset for Regression performance analysis.

Contains metrics:

  • RegressionQualityMetric

  • RegressionPredictedVsActualScatter

  • RegressionPredictedVsActualPlot

  • RegressionErrorPlot

  • RegressionAbsPercentageErrorPlot

  • RegressionErrorDistribution

  • RegressionErrorNormality

  • RegressionTopErrorMetric

  • RegressionErrorBiasTable

Attributes:

columns : Optional[List[str]]

Methods:

generate_metrics(data: InputData, columns: DatasetColumns)

class TargetDriftPreset(columns: Optional[List[str]] = None, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)

Bases: MetricPreset

Metric preset for Target Drift analysis.

Contains metrics:

  • ColumnDriftMetric - for target and prediction if present in datasets.

  • ColumnValuePlot - if task is regression.

  • ColumnCorrelationsMetric - for target and prediction if present in datasets.

  • TargetByFeaturesTable

Attributes:

cat_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]

cat_stattest_threshold : Optional[float]

columns : Optional[List[str]]

num_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]

num_stattest_threshold : Optional[float]

per_column_stattest : Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]]

per_column_stattest_threshold : Optional[Dict[str, float]]

stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]

stattest_threshold : Optional[float]

Methods:

generate_metrics(data: InputData, columns: DatasetColumns)

Submodules

classification_performance module

Defines the ClassificationPreset class, documented above.

data_drift module

Defines the DataDriftPreset class, documented above.

data_quality module

Defines the DataQualityPreset class, documented above.

metric_preset module

class MetricPreset()

Bases: object

Base class for metric presets.

Methods:

abstract generate_metrics(data: InputData, columns: DatasetColumns)

regression_performance module

Defines the RegressionPreset class, documented above.

target_drift module

Defines the TargetDriftPreset class, documented above.

