evidently.metrics.classification_performance
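
This page documents the classes in the evidently.metrics.classification_performance module. Each submodule typically contains three classes: the Metric itself (computes values from the input data), a Renderer (turns the result into the HTML and JSON parts of a Report), and a Result container (a plain object holding the computed values). The short code snippets below are illustrative sketches only: they assume these metrics are imported from evidently.metrics and run inside a Report as described in the Reports and Tests user guide, and all column names, data values, and file names are made up for the examples.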


Submodules

base_classification_metric module

class ThresholdClassificationMetric(probas_threshold: Optional[float], k: Optional[Union[float, int]])

Bases: Metric[TResult], ABC

Attributes:

k : Optional[Union[float, int]]

probas_threshold : Optional[float]

Methods:

get_target_prediction_data(data: DataFrame, column_mapping: ColumnMapping)
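
ThresholdClassificationMetric is the shared base class for the threshold-aware metrics on this page: probas_threshold applies a custom decision threshold to predicted probabilities, while k labels the top-k predictions (an integer count or a float share) as the positive class. A minimal sketch of how these two parameters are passed to the subclasses listed below (the metrics are then placed into a Report as usual):

```python
from evidently.metrics import (
    ClassificationConfusionMatrix,
    ClassificationQualityByClass,
    ClassificationQualityMetric,
)

# All ThresholdClassificationMetric subclasses accept the same two options:
# probas_threshold -- custom decision threshold for predicted probabilities,
# k -- treat the top-k predictions (int count or float share) as positive.
metrics = [
    ClassificationQualityMetric(probas_threshold=0.7),  # threshold raised from 0.5
    ClassificationConfusionMatrix(k=100),               # top 100 predictions
    ClassificationQualityByClass(k=0.05),               # top 5% of predictions
]
```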

class_balance_metric module

class ClassificationClassBalance()

Bases: Metric[ClassificationClassBalanceResult]

Methods:

calculate(data: InputData)

class ClassificationClassBalanceRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationClassBalance)

render_json(obj: ClassificationClassBalance)

class ClassificationClassBalanceResult(plot_data: Dict[str, int])

Bases: object

Attributes:

plot_data : Dict[str, int]
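
ClassificationClassBalance shows the distribution of target labels. A minimal sketch of running it on current data only, without a reference dataset (toy data, assuming the Report API):

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationClassBalance
from evidently.report import Report

# Toy label predictions (illustrative data only).
cur = pd.DataFrame({
    "target":     ["cat", "dog", "dog", "cat", "dog", "cat"],
    "prediction": ["cat", "dog", "cat", "cat", "dog", "dog"],
})

report = Report(metrics=[ClassificationClassBalance()])
report.run(reference_data=None, current_data=cur,
           column_mapping=ColumnMapping(target="target", prediction="prediction"))
report.save_html("class_balance.html")  # or report.show() in a notebook
```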

class_separation_metric module

class ClassificationClassSeparationPlot()

Bases: Metric[ClassificationClassSeparationPlotResults]

Methods:

calculate(data: InputData)

class ClassificationClassSeparationPlotRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationClassSeparationPlot)

render_json(obj: ClassificationClassSeparationPlot)

class ClassificationClassSeparationPlotResults(target_name: str, current_plot: Optional[pandas.core.frame.DataFrame] = None, reference_plot: Optional[pandas.core.frame.DataFrame] = None)

Bases: object

Attributes:

current_plot : Optional[DataFrame] = None

reference_plot : Optional[DataFrame] = None

target_name : str
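
ClassificationClassSeparationPlot shows predicted probabilities split by the true label, so it needs probabilistic predictions. A sketch for a binary problem where a single column holds the probability of the positive class (column names and pos_label are assumptions for the example):

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationClassSeparationPlot
from evidently.report import Report

ref = pd.DataFrame({"target": [0, 1, 1, 0], "score": [0.2, 0.9, 0.7, 0.1]})
cur = pd.DataFrame({"target": [0, 1, 0, 1], "score": [0.4, 0.8, 0.5, 0.6]})

# "score" holds the predicted probability of the positive class (pos_label=1).
mapping = ColumnMapping(target="target", prediction="score", pos_label=1)

report = Report(metrics=[ClassificationClassSeparationPlot()])
report.run(reference_data=ref, current_data=cur, column_mapping=mapping)
report.show()  # inline rendering in a notebook
```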

classification_dummy_metric module

class ClassificationDummyMetric(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None)

Bases: ThresholdClassificationMetric[ClassificationDummyMetricResults]

Attributes:

quality_metric : ClassificationQualityMetric

Methods:

calculate(data: InputData)

correction_for_threshold(dummy_results: DatasetClassificationQuality, threshold: float, target: Series, labels: list, probas_shape: tuple)

class ClassificationDummyMetricRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationDummyMetric)

render_json(obj: ClassificationDummyMetric)

class ClassificationDummyMetricResults(dummy: DatasetClassificationQuality, by_reference_dummy: Optional[DatasetClassificationQuality], model_quality: Optional[DatasetClassificationQuality], metrics_matrix: dict)

Bases: object

Attributes:

by_reference_dummy : Optional[DatasetClassificationQuality]

dummy : DatasetClassificationQuality

metrics_matrix : dict

model_quality : Optional[DatasetClassificationQuality]
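
ClassificationDummyMetric computes the quality of a constant (dummy) baseline so you can check that the model actually beats it. A sketch that puts the dummy metric next to the regular quality metric and reads the results as a dictionary; the exact dictionary layout may differ between versions:

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationDummyMetric, ClassificationQualityMetric
from evidently.report import Report

cur = pd.DataFrame({
    "target":     [0, 1, 1, 0, 1, 0, 1, 1],
    "prediction": [0, 1, 0, 0, 1, 0, 1, 1],
})

report = Report(metrics=[ClassificationQualityMetric(), ClassificationDummyMetric()])
report.run(reference_data=None, current_data=cur,
           column_mapping=ColumnMapping(target="target", prediction="prediction"))

# Compare model quality against the dummy baseline programmatically.
for item in report.as_dict()["metrics"]:
    print(item["metric"], item["result"])
```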

classification_quality_metric module

class ClassificationQualityMetric(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None)

Bases: ThresholdClassificationMetric[ClassificationQualityMetricResult]

Attributes:

confusion_matrix_metric : ClassificationConfusionMatrix

Methods:

calculate(data: InputData)

class ClassificationQualityMetricRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationQualityMetric)

render_json(obj: ClassificationQualityMetric)

class ClassificationQualityMetricResult(current: DatasetClassificationQuality, reference: Optional[DatasetClassificationQuality], target_name: str)

Bases: object

Attributes:

current : DatasetClassificationQuality

reference : Optional[DatasetClassificationQuality]

target_name : str
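
ClassificationQualityMetric summarizes overall classification quality for the current and, if provided, the reference dataset; it reuses a nested ClassificationConfusionMatrix (the confusion_matrix_metric attribute above). A sketch with both datasets and machine-readable output:

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationQualityMetric
from evidently.report import Report

ref = pd.DataFrame({"target": [0, 1, 1, 0, 1], "prediction": [0, 1, 1, 0, 0]})
cur = pd.DataFrame({"target": [1, 0, 1, 0, 1], "prediction": [1, 0, 0, 0, 1]})
mapping = ColumnMapping(target="target", prediction="prediction")

report = Report(metrics=[ClassificationQualityMetric()])
report.run(reference_data=ref, current_data=cur, column_mapping=mapping)

print(report.json())  # JSON summary with current and reference quality
```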

confusion_matrix_metric module

class ClassificationConfusionMatrix(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None)

Bases: ThresholdClassificationMetric[ClassificationConfusionMatrixResult]

Attributes:

k : Optional[Union[float, int]]

probas_threshold : Optional[float]

Methods:

calculate(data: InputData)

class ClassificationConfusionMatrixRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationConfusionMatrix)

render_json(obj: ClassificationConfusionMatrix)

class ClassificationConfusionMatrixResult(current_matrix: ConfusionMatrix, reference_matrix: Optional[ConfusionMatrix])

Bases: object

Attributes:

current_matrix : ConfusionMatrix

reference_matrix : Optional[ConfusionMatrix]
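
ClassificationConfusionMatrix builds the confusion matrix for the current (and, when given, reference) data. With probabilistic predictions, probas_threshold or k changes which predictions count as positive before the matrix is computed. A sketch with a raised decision threshold (toy data):

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationConfusionMatrix
from evidently.report import Report

cur = pd.DataFrame({
    "target": [0, 1, 1, 0, 1, 0],
    "score":  [0.2, 0.9, 0.55, 0.4, 0.8, 0.65],
})
mapping = ColumnMapping(target="target", prediction="score", pos_label=1)

# Only predictions with probability >= 0.6 are labeled as the positive class.
report = Report(metrics=[ClassificationConfusionMatrix(probas_threshold=0.6)])
report.run(reference_data=None, current_data=cur, column_mapping=mapping)
report.save_html("confusion_matrix.html")
```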

pr_curve_metric module

class ClassificationPRCurve()

Bases: Metric[ClassificationPRCurveResults]

Methods:

calculate(data: InputData)

calculate_metrics(target_data: Series, prediction: PredictionData)

class ClassificationPRCurveRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationPRCurve)

render_json(obj: ClassificationPRCurve)

class ClassificationPRCurveResults(current_pr_curve: Optional[dict] = None, reference_pr_curve: Optional[dict] = None)

Bases: object

Attributes:

current_pr_curve : Optional[dict] = None

reference_pr_curve : Optional[dict] = None
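
ClassificationPRCurve requires predicted probabilities. For a multiclass problem, the convention used in the column mapping guide is one probability column per class, named after the class label (the column names here are examples):

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationPRCurve
from evidently.report import Report

# One probability column per class, named after the class label.
cur = pd.DataFrame({
    "target": ["cat", "dog", "cat", "dog", "cat"],
    "cat":    [0.8, 0.3, 0.6, 0.1, 0.7],
    "dog":    [0.2, 0.7, 0.4, 0.9, 0.3],
})
mapping = ColumnMapping(target="target", prediction=["cat", "dog"])

report = Report(metrics=[ClassificationPRCurve()])
report.run(reference_data=None, current_data=cur, column_mapping=mapping)
report.show()
```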

pr_table_metric module

class ClassificationPRTable()

Bases: Metric[ClassificationPRTableResults]

Methods:

calculate(data: InputData)

calculate_metrics(target_data: Series, prediction: PredictionData)

class ClassificationPRTableRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationPRTable)

render_json(obj: ClassificationPRTable)

class ClassificationPRTableResults(current_pr_table: Optional[dict] = None, reference_pr_table: Optional[dict] = None)

Bases: object

Attributes:

current_pr_table : Optional[dict] = None

reference_pr_table : Optional[dict] = None
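
ClassificationPRTable reports precision and recall at different probability cut-offs, which is often more useful as exported JSON than as a plot. A short sketch for a binary problem with a probability column (output path and column names are illustrative):

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationPRTable
from evidently.report import Report

cur = pd.DataFrame({
    "target": [0, 1, 1, 0, 1, 0, 1, 0],
    "score":  [0.1, 0.9, 0.6, 0.3, 0.8, 0.2, 0.7, 0.4],
})
mapping = ColumnMapping(target="target", prediction="score", pos_label=1)

report = Report(metrics=[ClassificationPRTable()])
report.run(reference_data=None, current_data=cur, column_mapping=mapping)

with open("pr_table.json", "w") as f:
    f.write(report.json())  # the table rows appear under this metric's result
```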

probability_distribution_metric module

class ClassificationProbDistribution()

Bases: Metric[ClassificationProbDistributionResults]

Methods:

calculate(data: InputData)

static get_distribution(dataset: DataFrame, target_name: str, prediction_labels: Iterable)

class ClassificationProbDistributionRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationProbDistribution)

render_json(obj: ClassificationProbDistribution)

class ClassificationProbDistributionResults(current_distribution: Optional[Dict[str, list]], reference_distribution: Optional[Dict[str, list]])

Bases: object

Attributes:

current_distribution : Optional[Dict[str, list]]

reference_distribution : Optional[Dict[str, list]]
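
ClassificationProbDistribution plots the distribution of predicted probabilities per class and is typically used to compare current against reference data. A sketch with toy binary data:

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationProbDistribution
from evidently.report import Report

ref = pd.DataFrame({"target": [0, 1, 1, 0], "score": [0.1, 0.9, 0.8, 0.2]})
cur = pd.DataFrame({"target": [0, 1, 0, 1], "score": [0.3, 0.7, 0.45, 0.6]})
mapping = ColumnMapping(target="target", prediction="score", pos_label=1)

report = Report(metrics=[ClassificationProbDistribution()])
report.run(reference_data=ref, current_data=cur, column_mapping=mapping)
report.save_html("prob_distribution.html")
```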

quality_by_class_metric module

class ClassificationQualityByClass(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None)

Bases: ThresholdClassificationMetric[ClassificationQualityByClassResult]

Attributes:

k : Optional[Union[float, int]]

probas_threshold : Optional[float]

Methods:

calculate(data: InputData)

class ClassificationQualityByClassRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationQualityByClass)

render_json(obj: ClassificationQualityByClass)

class ClassificationQualityByClassResult(columns: DatasetColumns, current_metrics: dict, current_roc_aucs: Optional[list], reference_metrics: Optional[dict], reference_roc_aucs: Optional[dict])

Bases: object

Attributes:

columns : DatasetColumns

current_metrics : dict

current_roc_aucs : Optional[list]

reference_metrics : Optional[dict]

reference_roc_aucs : Optional[dict]
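
ClassificationQualityByClass breaks quality down per class, which is most useful for multiclass problems. A sketch with label predictions for three classes (toy data):

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationQualityByClass
from evidently.report import Report

cur = pd.DataFrame({
    "target":     ["cat", "dog", "bird", "dog", "cat", "bird"],
    "prediction": ["cat", "dog", "dog",  "dog", "bird", "bird"],
})

report = Report(metrics=[ClassificationQualityByClass()])
report.run(reference_data=None, current_data=cur,
           column_mapping=ColumnMapping(target="target", prediction="prediction"))
report.show()
```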

quality_by_feature_table module

class ClassificationQualityByFeatureTable(columns: Optional[List[str]] = None)

Bases: Metric[ClassificationQualityByFeatureTableResults]

Attributes:

columns : Optional[List[str]]

Methods:

calculate(data: InputData)

class ClassificationQualityByFeatureTableRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationQualityByFeatureTable)

render_json(obj: ClassificationQualityByFeatureTable)

class ClassificationQualityByFeatureTableResults(current_plot_data: pandas.core.frame.DataFrame, reference_plot_data: Optional[pandas.core.frame.DataFrame], target_name: str, curr_predictions: PredictionData, ref_predictions: Optional[PredictionData], columns: List[str])

Bases: object

Attributes:

columns : List[str]

curr_predictions : PredictionData

current_plot_data : DataFrame

ref_predictions : Optional[PredictionData]

reference_plot_data : Optional[DataFrame]

target_name : str
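
ClassificationQualityByFeatureTable relates model quality to individual input features; the columns argument limits the table to the listed features. A sketch with made-up feature names:

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationQualityByFeatureTable
from evidently.report import Report

cur = pd.DataFrame({
    "age":        [23, 35, 47, 52, 29, 61],
    "income":     [30_000, 52_000, 71_000, 64_000, 41_000, 80_000],
    "target":     [0, 1, 1, 1, 0, 1],
    "prediction": [0, 1, 0, 1, 0, 1],
})
mapping = ColumnMapping(target="target", prediction="prediction",
                        numerical_features=["age", "income"])

# Only build the per-feature breakdown for the listed columns.
report = Report(metrics=[ClassificationQualityByFeatureTable(columns=["age", "income"])])
report.run(reference_data=None, current_data=cur, column_mapping=mapping)
report.save_html("quality_by_feature.html")
```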

roc_curve_metric module

class ClassificationRocCurve()

Bases: Metric[ClassificationRocCurveResults]

Methods:

calculate(data: InputData)

calculate_metrics(target_data: Series, prediction: PredictionData)

class ClassificationRocCurveRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: ClassificationRocCurve)

render_json(obj: ClassificationRocCurve)

class ClassificationRocCurveResults(current_roc_curve: Optional[dict] = None, reference_roc_curve: Optional[dict] = None)

Bases: object

Attributes:

current_roc_curve : Optional[dict] = None

reference_roc_curve : Optional[dict] = None
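
ClassificationRocCurve, like the PR metrics above, requires predicted probabilities, and several curve metrics can be combined in a single Report. A sketch for a binary problem with a probability column:

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import ClassificationPRCurve, ClassificationRocCurve
from evidently.report import Report

cur = pd.DataFrame({
    "target": [0, 1, 1, 0, 1, 0, 1, 0],
    "score":  [0.15, 0.85, 0.7, 0.3, 0.9, 0.4, 0.6, 0.2],
})
mapping = ColumnMapping(target="target", prediction="score", pos_label=1)

report = Report(metrics=[ClassificationRocCurve(), ClassificationPRCurve()])
report.run(reference_data=None, current_data=cur, column_mapping=mapping)
report.save_html("curves.html")
```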

