
evidently.metrics.regression_performance


Submodules

abs_perc_error_in_time module

class RegressionAbsPercentageErrorPlot()

Bases: Metric[RegressionAbsPercentageErrorPlotResults]

Methods:

calculate(data: InputData)

class RegressionAbsPercentageErrorPlotRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionAbsPercentageErrorPlot)

render_json(obj: RegressionAbsPercentageErrorPlot)

class RegressionAbsPercentageErrorPlotResults(current_scatter: Dict[str, pandas.core.series.Series], reference_scatter: Optional[Dict[str, pandas.core.series.Series]], x_name: str)

Bases: object

Attributes:

current_scatter : Dict[str, Series]

reference_scatter : Optional[Dict[str, Series]]

x_name : str
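Each metric in this module follows the same pattern: construct it, add it to a Report, and call run() with current and (optionally) reference data. A minimal sketch, assuming the standard Report API; the DataFrames, column names, and the extra "feature_a" column are placeholders reused in the later sketches:

```python
import numpy as np
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import RegressionAbsPercentageErrorPlot
from evidently.report import Report

# Placeholder data: a regression setup needs target and prediction columns;
# "feature_a" is an extra numeric feature reused in later sketches.
rng = np.random.default_rng(0)

def make_frame(n: int = 200) -> pd.DataFrame:
    target = rng.normal(10, 2, n)
    return pd.DataFrame({
        "target": target,
        "prediction": target + rng.normal(0, 1, n),
        "feature_a": rng.uniform(0, 1, n),
    })

reference, current = make_frame(), make_frame()
column_mapping = ColumnMapping(target="target", prediction="prediction")

# Plot absolute percentage error over time (the row index is used if no datetime column is mapped).
report = Report(metrics=[RegressionAbsPercentageErrorPlot()])
report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)
report.save_html("abs_perc_error_in_time.html")  # or report.show() in a notebook
```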

error_bias_table module

class RegressionErrorBiasTable(columns: Optional[List[str]] = None, top_error: Optional[float] = None)

Bases: Metric[RegressionErrorBiasTableResults]

Attributes:

TOP_ERROR_DEFAULT = 0.05

TOP_ERROR_MAX = 0.5

TOP_ERROR_MIN = 0

columns : Optional[List[str]]

top_error : float

Methods:

calculate(data: InputData)

class RegressionErrorBiasTableRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionErrorBiasTable)

render_json(obj: RegressionErrorBiasTable)

class RegressionErrorBiasTableResults(top_error: float, current_plot_data: pandas.core.frame.DataFrame, reference_plot_data: Optional[pandas.core.frame.DataFrame], target_name: str, prediction_name: str, num_feature_names: List[str], cat_feature_names: List[str], error_bias: Optional[dict] = None, columns: Optional[List[str]] = None)

Bases: object

Attributes:

cat_feature_names : List[str]

columns : Optional[List[str]] = None

current_plot_data : DataFrame

error_bias : Optional[dict] = None

num_feature_names : List[str]

prediction_name : str

reference_plot_data : Optional[DataFrame]

target_name : str

top_error : float
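RegressionErrorBiasTable is the only metric on this page with constructor parameters: `columns` limits the bias table to selected features, and `top_error` (default TOP_ERROR_DEFAULT = 0.05, valid between TOP_ERROR_MIN = 0 and TOP_ERROR_MAX = 0.5) sets the share of extreme errors used for the bias analysis. A sketch continuing the data setup from the first example; the feature name is a placeholder:

```python
from evidently.metrics import RegressionErrorBiasTable
from evidently.report import Report

# Limit the table to the placeholder feature and treat the top 10% of errors as extreme.
bias_report = Report(metrics=[
    RegressionErrorBiasTable(columns=["feature_a"], top_error=0.1),
])
bias_report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)
bias_report.save_html("error_bias_table.html")
```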

error_distribution module

class RegressionErrorDistribution()

Bases: Metric[RegressionErrorDistributionResults]

Methods:

calculate(data: InputData)

class RegressionErrorDistributionRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionErrorDistribution)

render_json(obj: RegressionErrorDistribution)

class RegressionErrorDistributionResults(current_bins: pandas.core.frame.DataFrame, reference_bins: Optional[pandas.core.frame.DataFrame])

Bases: object

Attributes:

current_bins : DataFrame

reference_bins : Optional[DataFrame]

error_in_time module

class RegressionErrorPlot()

Bases: Metric[RegressionErrorPlotResults]

Methods:

calculate(data: InputData)

class RegressionErrorPlotRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionErrorPlot)

render_json(obj: RegressionErrorPlot)

class RegressionErrorPlotResults(current_scatter: Dict[str, pandas.core.series.Series], reference_scatter: Optional[Dict[str, pandas.core.series.Series]], x_name: str)

Bases: object

Attributes:

current_scatter : Dict[str, Series]

reference_scatter : Optional[Dict[str, Series]]

x_name : str

error_normality module

class RegressionErrorNormality()

Bases: Metric[RegressionErrorNormalityResults]

Methods:

calculate(data: InputData)

class RegressionErrorNormalityRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionErrorNormality)

render_json(obj: RegressionErrorNormality)

class RegressionErrorNormalityResults(current_error: pandas.core.series.Series, reference_error: Optional[pandas.core.series.Series])

Bases: object

Attributes:

current_error : Series

reference_error : Optional[Series]

predicted_and_actual_in_time module

class RegressionPredictedVsActualPlot()

Bases: Metric[RegressionPredictedVsActualPlotResults]

Methods:

calculate(data: InputData)

class RegressionPredictedVsActualPlotRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionPredictedVsActualPlot)

render_json(obj: RegressionPredictedVsActualPlot)

class RegressionPredictedVsActualPlotResults(current_scatter: Dict[str, pandas.core.series.Series], reference_scatter: Optional[Dict[str, pandas.core.series.Series]], x_name: str)

Bases: object

Attributes:

current_scatter : Dict[str, Series]

reference_scatter : Optional[Dict[str, Series]]

x_name : str

predicted_vs_actual module

class RegressionPredictedVsActualScatter()

Bases: Metric[RegressionPredictedVsActualScatterResults]

Methods:

calculate(data: InputData)

class RegressionPredictedVsActualScatterRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionPredictedVsActualScatter)

render_json(obj: RegressionPredictedVsActualScatter)

class RegressionPredictedVsActualScatterResults(current_scatter: Dict[str, pandas.core.series.Series], reference_scatter: Optional[Dict[str, pandas.core.series.Series]])

Bases: object

Attributes:

current_scatter : Dict[str, Series]

reference_scatter : Optional[Dict[str, Series]]
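The five visualization metrics above (RegressionErrorDistribution, RegressionErrorPlot, RegressionErrorNormality, RegressionPredictedVsActualPlot, RegressionPredictedVsActualScatter) take no parameters and are typically combined in a single Report. A sketch continuing the same reference, current, and column_mapping from the first example:

```python
from evidently.metrics import (
    RegressionErrorDistribution,
    RegressionErrorNormality,
    RegressionErrorPlot,
    RegressionPredictedVsActualPlot,
    RegressionPredictedVsActualScatter,
)
from evidently.report import Report

# One Report with all error / predicted-vs-actual visualizations; each metric renders its own widget.
plots_report = Report(metrics=[
    RegressionErrorDistribution(),
    RegressionErrorNormality(),
    RegressionErrorPlot(),
    RegressionPredictedVsActualPlot(),
    RegressionPredictedVsActualScatter(),
])
plots_report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)
plots_report.save_html("regression_plots.html")
```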

regression_dummy_metric module

class RegressionDummyMetric()

Bases: Metric[RegressionDummyMetricResults]

Attributes:

quality_metric : RegressionQualityMetric

Methods:

calculate(data: InputData)

class RegressionDummyMetricRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionDummyMetric)

render_json(obj: RegressionDummyMetric)

class RegressionDummyMetricResults(rmse_default: float, mean_abs_error_default: float, mean_abs_perc_error_default: float, abs_error_max_default: float, mean_abs_error_by_ref: Optional[float] = None, mean_abs_error: Optional[float] = None, mean_abs_perc_error_by_ref: Optional[float] = None, mean_abs_perc_error: Optional[float] = None, rmse_by_ref: Optional[float] = None, rmse: Optional[float] = None, abs_error_max_by_ref: Optional[float] = None, abs_error_max: Optional[float] = None)

Bases: object

Attributes:

abs_error_max : Optional[float] = None

abs_error_max_by_ref : Optional[float] = None

abs_error_max_default : float

mean_abs_error : Optional[float] = None

mean_abs_error_by_ref : Optional[float] = None

mean_abs_error_default : float

mean_abs_perc_error : Optional[float] = None

mean_abs_perc_error_by_ref : Optional[float] = None

mean_abs_perc_error_default : float

rmse : Optional[float] = None

rmse_by_ref : Optional[float] = None

rmse_default : float
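RegressionDummyMetric compares the model against a dummy baseline: the `*_default` fields hold the baseline RMSE, MAE, MAPE, and max absolute error, while the optional plain fields hold the model values. A sketch on the same data; the `as_dict()` field names mirror RegressionDummyMetricResults, though the exact dictionary layout may differ between versions:

```python
from evidently.metrics import RegressionDummyMetric
from evidently.report import Report

dummy_report = Report(metrics=[RegressionDummyMetric()])
dummy_report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)

# Compare the dummy baseline against the model.
result = dummy_report.as_dict()["metrics"][0]["result"]
print("dummy RMSE:", result["rmse_default"], "model RMSE:", result.get("rmse"))
```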

regression_performance_metrics module

class RegressionPerformanceMetrics()

Bases: Metric[RegressionPerformanceMetricsResults]

Methods:

calculate(data: InputData)

get_parameters()

class RegressionPerformanceMetricsRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionPerformanceMetrics)

render_json(obj: RegressionPerformanceMetrics)

class RegressionPerformanceMetricsResults(columns: DatasetColumns, r2_score: float, rmse: float, rmse_default: float, mean_error: float, me_default_sigma: float, me_hist_for_plot: Dict[str, Union[pandas.core.series.Series, pandas.core.frame.DataFrame]], mean_abs_error: float, mean_abs_error_default: float, mean_abs_perc_error: float, mean_abs_perc_error_default: float, abs_error_max: float, abs_error_max_default: float, error_std: float, abs_error_std: float, abs_perc_error_std: float, error_normality: dict, underperformance: dict, hist_for_plot: Dict[str, pandas.core.series.Series], vals_for_plots: Dict[str, Dict[str, pandas.core.series.Series]], error_bias: Optional[dict] = None, mean_error_ref: Optional[float] = None, mean_abs_error_ref: Optional[float] = None, mean_abs_perc_error_ref: Optional[float] = None, rmse_ref: Optional[float] = None, r2_score_ref: Optional[float] = None, abs_error_max_ref: Optional[float] = None, underperformance_ref: Optional[dict] = None)

Bases: object

Attributes:

abs_error_max : float

abs_error_max_default : float

abs_error_max_ref : Optional[float] = None

abs_error_std : float

abs_perc_error_std : float

columns : DatasetColumns

error_bias : Optional[dict] = None

error_normality : dict

error_std : float

hist_for_plot : Dict[str, Series]

me_default_sigma : float

me_hist_for_plot : Dict[str, Union[Series, DataFrame]]

mean_abs_error : float

mean_abs_error_default : float

mean_abs_error_ref : Optional[float] = None

mean_abs_perc_error : float

mean_abs_perc_error_default : float

mean_abs_perc_error_ref : Optional[float] = None

mean_error : float

mean_error_ref : Optional[float] = None

r2_score : float

r2_score_ref : Optional[float] = None

rmse : float

rmse_default : float

rmse_ref : Optional[float] = None

underperformance : dict

underperformance_ref : Optional[dict] = None

vals_for_plots : Dict[str, Dict[str, Series]]
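RegressionPerformanceMetrics aggregates the standard regression quality measures (R², RMSE, MAE, MAPE, max absolute error) together with their reference (`*_ref`) counterparts into one result object. A sketch that reads a few of them back, continuing the same data setup; the `as_dict()` layout is assumed to follow the field names listed above:

```python
from evidently.metrics import RegressionPerformanceMetrics
from evidently.report import Report

perf_report = Report(metrics=[RegressionPerformanceMetrics()])
perf_report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)

# Print a few aggregate quality measures from the result dictionary.
result = perf_report.as_dict()["metrics"][0]["result"]
for key in ("r2_score", "rmse", "mean_abs_error", "mean_abs_perc_error"):
    print(key, result[key])
```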

regression_quality module

class RegressionQualityMetric()

Bases: Metric[RegressionQualityMetricResults]

Methods:

calculate(data: InputData)

class RegressionQualityMetricRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionQualityMetric)

render_json(obj: RegressionQualityMetric)

class RegressionQualityMetricResults(columns: DatasetColumns, r2_score: float, rmse: float, rmse_default: float, mean_error: float, me_default_sigma: float, me_hist_for_plot: Dict[str, pandas.core.series.Series], mean_abs_error: float, mean_abs_error_default: float, mean_abs_perc_error: float, mean_abs_perc_error_default: float, abs_error_max: float, abs_error_max_default: float, error_std: float, abs_error_std: float, abs_perc_error_std: float, error_normality: dict, underperformance: dict, hist_for_plot: Dict[str, pandas.core.series.Series], vals_for_plots: Dict[str, Dict[str, pandas.core.series.Series]], error_bias: Optional[dict] = None, mean_error_ref: Optional[float] = None, mean_abs_error_ref: Optional[float] = None, mean_abs_perc_error_ref: Optional[float] = None, rmse_ref: Optional[float] = None, r2_score_ref: Optional[float] = None, abs_error_max_ref: Optional[float] = None, underperformance_ref: Optional[dict] = None, error_std_ref: Optional[float] = None, abs_error_std_ref: Optional[float] = None, abs_perc_error_std_ref: Optional[float] = None)

Bases: object

Attributes:

abs_error_max : float

abs_error_max_default : float

abs_error_max_ref : Optional[float] = None

abs_error_std : float

abs_error_std_ref : Optional[float] = None

abs_perc_error_std : float

abs_perc_error_std_ref : Optional[float] = None

columns : DatasetColumns

error_bias : Optional[dict] = None

error_normality : dict

error_std : float

error_std_ref : Optional[float] = None

hist_for_plot : Dict[str, Series]

me_default_sigma : float

me_hist_for_plot : Dict[str, Series]

mean_abs_error : float

mean_abs_error_default : float

mean_abs_error_ref : Optional[float] = None

mean_abs_perc_error : float

mean_abs_perc_error_default : float

mean_abs_perc_error_ref : Optional[float] = None

mean_error : float

mean_error_ref : Optional[float] = None

r2_score : float

r2_score_ref : Optional[float] = None

rmse : float

rmse_default : float

rmse_ref : Optional[float] = None

underperformance : dict

underperformance_ref : Optional[dict] = None

vals_for_plots : Dict[str, Dict[str, Series]]
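RegressionQualityMetric returns the same aggregate quality measures plus the error standard deviations (`error_std`, `abs_error_std`, `abs_perc_error_std`) for both current and reference data; it is also the aggregate metric included in the regression preset. A sketch exporting the result, again reusing the earlier data:

```python
from evidently.metrics import RegressionQualityMetric
from evidently.report import Report

quality_report = Report(metrics=[RegressionQualityMetric()])
quality_report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)

print(quality_report.json())                          # machine-readable output
quality_report.save_html("regression_quality.html")   # interactive HTML report
```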

top_error module

class RegressionTopErrorMetric()

Bases: Metric[RegressionTopErrorMetricResults]

Methods:

calculate(data: InputData)

class RegressionTopErrorMetricRenderer(color_options: Optional[ColorOptions] = None)

Bases: MetricRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: RegressionTopErrorMetric)

render_json(obj: RegressionTopErrorMetric)

class RegressionTopErrorMetricResults(curr_mean_err_per_group: Dict[str, Dict[str, float]], curr_scatter: Dict[str, Dict[str, pandas.core.series.Series]], ref_mean_err_per_group: Optional[Dict[str, Dict[str, float]]], ref_scatter: Optional[Dict[str, Dict[str, pandas.core.series.Series]]])

Bases: object

Attributes:

curr_mean_err_per_group : Dict[str, Dict[str, float]]

curr_scatter : Dict[str, Dict[str, Series]]

ref_mean_err_per_group : Optional[Dict[str, Dict[str, float]]]

ref_scatter : Optional[Dict[str, Dict[str, Series]]]
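RegressionTopErrorMetric groups rows by error size and reports the mean error per group (`curr_mean_err_per_group` / `ref_mean_err_per_group`) along with the scatter data behind the plot. A final sketch on the same placeholder data:

```python
from evidently.metrics import RegressionTopErrorMetric
from evidently.report import Report

top_error_report = Report(metrics=[RegressionTopErrorMetric()])
top_error_report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)
top_error_report.save_html("regression_top_error.html")
```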

