# How-to guides



You are looking at the old Evidently documentation: this API is available in versions 0.6.7 and lower. Check the newer docs version.

These example notebooks and how-to guides show how to solve specific tasks. You can also browse them on GitHub.

| Topic | Question |
| --- | --- |
| Input data | How to load data from different sources to pandas.DataFrame? |
| Input data | How to use column mapping? |
| Tests and Reports | How to generate multiple Tests or Metrics quickly (Test and Metric generators)? |
| Tests and Reports | How to calculate drift in embeddings? |
| Tests and Reports | How to run evaluations on raw text data (Tests, Metrics, and Presets that work with text)? |
| Tests and Reports | How to use text descriptors in text-specific Metrics? |
| Tests and Reports | How to use text descriptors in tabular Metrics and Tests? |
| Tests and Reports | How to calculate metrics for ranking and recommender systems? |
| Tests and Reports | How to import text data, convert it to embeddings, and detect drift? |
| Tests and Reports | How to set Test criticality to "warning"? |
| Tests and Reports | How to save and load Test Suites and Reports as JSON "snapshots"? |
| Tests and Reports | How to export Report results to a dataframe? |
| Tests and Reports | How to run calculations on Spark? |
| Customization | How to assign a particular method for data drift detection? |
| Customization | How to define a custom list of missing values? |
| Customization | How to add a custom Metric or Test from scratch? |
| Customization | How to add a custom Metric as a Python function? |
| Visual render | How to add text comments to Reports? |
| Visual render | How to specify a color scheme in Reports and Test Suites? |
| Visual render | How to get non-aggregated visuals in Reports? |
| Outputs | How to customize JSON output? |
| Outputs | How to get Report or Test Suite output in CSV? |
