
Dashboard overview

Introduction to Dashboards.


Last updated 2 months ago

Supported in: Evidently OSS, Evidently Cloud and Evidently Enterprise.

You are looking at the old Evidently documentation: this API is available with versions 0.6.7 or lower. Check the newer version.

What is a Dashboard?

Each Project has its own Dashboard. A Dashboard lets you track evaluation results over time, providing a clear view of the quality of your AI application and data.

When you create a new Project, the Dashboard starts empty. To populate it, run evaluations or set up monitoring. Once you have data, you can configure the Dashboard to show the values you want to see.

You can use the Dashboard to monitor live data in production or to keep track of results from batch experiments and tests. The "Show in order" toggle lets you switch between two views:

  • Time series. Displays data with actual time intervals, ideal for live monitoring.

  • Sequential. Shows results in order with equal spacing, perfect for experiments.

All Panels within the same view reflect the date range set by the time range filter. You can also zoom in on any time series visualizations for deeper analysis.

What is a Panel?

A Dashboard consists of **Panels**, each visualizing specific values or test results. Panels can be counters, line or bar plots, and more.

You can customize your Dashboard by adding Panels through the Python API using dashboard-as-code.

In Evidently Cloud and Enterprise, you have additional options:

  • Add Panels directly from the UI

  • Use multiple Tabs within the same Dashboard

  • Start with pre-built Tabs as templates

What is the data source?

Panels pull data from snapshots, which are Reports or Test Suites you've generated and saved to a Project.

Each Test Suite and Report contains a wealth of information and visuals. To add a Panel to the Dashboard, you must choose a specific value you'd like to plot and select other parameters, such as the Panel type and title.

For example, if your Reports include the ColumnSummaryMetric, you can visualize values like mean, max, min, etc. within your Panels. This method works for all other Metrics. If you're running Tests, say TestColumnValueMin, you can also display the Test result (pass or fail).

You can also use Tags, which you should add to Reports or Test Suites during generation. Tags allow you to filter and visualize data from specific subsets of snapshots when creating a Panel.
