Output formats

How to export the results of evaluations.

You are looking at the old Evidently documentation: this API is available with versions 0.6.7 or lower. Check the newer version here.

You can view or export results from Evidently Reports or Test Suites in multiple formats.
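
The examples below assume you already have a Report object named drift_report. For reference, here is a minimal sketch of how such a Report could be created with this legacy API; the sample data and column name are made up for illustration:

import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Hypothetical reference and current datasets with the same schema
reference = pd.DataFrame({"feature": [1.0, 2.0, 3.0, 4.0, 5.0]})
current = pd.DataFrame({"feature": [2.0, 3.0, 4.0, 5.0, 6.0]})

# Build and run a data drift Report (Evidently 0.6.7 or lower)
drift_report = Report(metrics=[DataDriftPreset()])
drift_report.run(reference_data=reference, current_data=current)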

View in Jupyter notebook

You can directly render the visual summary of evaluation results in interactive Python environments like Jupyter notebook or Colab.

After running the Report, simply call the resulting Python object:

drift_report

This will render the HTML object directly in the notebook cell.

HTML

You can also save this interactive visual report as an HTML file to open in a browser:

drift_report.save_html("file.html")

This option is useful for sharing Reports with others or if you're working in a Python environment that doesn’t display interactive visuals.

JSON

You can get the results of the calculation as JSON. This is useful for storing the results or exporting them elsewhere.

To view the JSON in Python:

drift_report.json()

To save the JSON as a separate file:

drift_report.save_json("file.json")
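
Since json() returns a string, you can also parse it with the standard library to work with the values programmatically. A minimal sketch:

import json

# Parse the JSON string returned by the Report
payload = json.loads(drift_report.json())
print(payload.keys())  # top-level structure of the Report output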

Python dictionary

You can get the output as a Python dictionary. This format is convenient for automated evaluations in data or ML pipelines, allowing you to transform the output or extract specific values.

To get the dictionary:

drift_report.as_dict()
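
This makes it easy to pull a single value out in a pipeline step. The sketch below assumes the Report contains the DatasetDriftMetric (included in DataDriftPreset) and that the output has a top-level "metrics" list where each entry holds the metric name and its result:

report_dict = drift_report.as_dict()

# Look up the result of one Metric by name (assumed output structure)
for item in report_dict["metrics"]:
    if item["metric"] == "DatasetDriftMetric":
        print(item["result"]["dataset_drift"])  # True if dataset drift is detected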

Scored DataFrame

If you generated text Descriptors during your evaluation, you can retrieve a DataFrame with all generated descriptors added to each row of your original input data.

text_evals_report.datasets().current

This returns the complete original dataset with new scores.
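
For context, here is a sketch of a Report that generates Descriptors; the "response" column, the sample data, and the choice of Descriptors are illustrative assumptions:

import pandas as pd

from evidently import ColumnMapping
from evidently.report import Report
from evidently.metric_preset import TextEvals
from evidently.descriptors import TextLength, Sentiment

# Hypothetical dataset with a single text column
df = pd.DataFrame({"response": ["Hello!", "Thanks, that was helpful."]})

# Score each row with two Descriptors
text_evals_report = Report(metrics=[
    TextEvals(column_name="response", descriptors=[TextLength(), Sentiment()])
])
text_evals_report.run(
    reference_data=None,
    current_data=df,
    column_mapping=ColumnMapping(text_features=["response"]),
)

# Original rows plus one new column per Descriptor
scored_df = text_evals_report.datasets().current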

Evidently snapshot

You can save the output of a Report or Test Suite as an Evidently JSON snapshot.

How is a JSON snapshot different from json()? A snapshot contains all supplementary and render data. This lets you restore the output in any Evidently format (like HTML) without accessing the initial raw data.

This is a rich JSON format used for storing evaluation results on the Evidently Platform. When you save Reports or Test Suites to the platform, a snapshot is generated automatically. However, you can also generate and save a snapshot explicitly.

To save the Report as a snapshot:

drift_report.save('snapshot.json')

To load the snapshot back, use the load() function:

loaded_report = Report.load('snapshot.json')

After loading the snapshot, you can view it in Python again or export it to other formats.
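
For example, you could restore a snapshot on another machine and render it to HTML there, without access to the original data (file names are illustrative):

loaded_report = Report.load('snapshot.json')
loaded_report.save_html('restored.html')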

DataFrame with a Report summary

Note: this export option is only supported for Reports, not Test Suites.

You can get the Report results in a tabular format as a DataFrame.

To export results for a specific Metric:

drift_report.as_dataframe("DataDriftTable")

To export results for the entire Report, which returns a dictionary of DataFrames:

drift_report.as_dataframe()

This will return all relevant values computed inside each Metric as part of the metric result.
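
For example, to quickly inspect everything the Report computed, you could iterate over the returned dictionary (a sketch; variable names are illustrative):

# One DataFrame per Metric in the Report
for metric_name, df in drift_report.as_dataframe().items():
    print(metric_name, df.shape)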

Include/exclude. Check how to manage verbosity of the json or as_dict output.

Generating snapshots. Check how to get snapshots and upload the evaluation results to the Evidently Platform.