Quickstart - LLM tracing

LLM tracing "Hello world."


You are looking at the old Evidently documentation: this API is available in versions 0.6.7 and lower. Check the newer version here.

This quickstart shows how to instrument a simple LLM app to send inputs and outputs to Evidently Cloud. You will use the open-source Tracely library.

You will need an OpenAI key to create a toy LLM app.

Need help? Ask on Discord.

1. Set up Evidently Cloud

Set up your Evidently Cloud workspace:

  • Sign up. If you do not have an account yet, sign up for a free Evidently Cloud account.

  • Create an Organization. When you log in the first time, create and name your Organization.

  • Create a Project. Click the + button under Project List. Create a Project, then copy and save the Project ID (see the Projects page).

  • Get your API token. Click the Key icon in the left menu to open the Token page. Generate and save the token.

You can now go to your Python environment.
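
If you prefer not to paste credentials into code later, you can export them as environment variables first. This is an optional sketch; the variable names EVIDENTLY_API_KEY and EVIDENTLY_PROJECT_ID are illustrative choices, not names required by Evidently or Tracely.

import os

# Illustrative variable names: any names work as long as you read them back consistently
os.environ["EVIDENTLY_API_KEY"] = "YOUR_EVIDENTLY_API_TOKEN"
os.environ["EVIDENTLY_PROJECT_ID"] = "YOUR_PROJECT_ID"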

2. Installation

Install the Tracely library to instrument your app:

!pip install tracely

Install the Evidently library to interact with Evidently Cloud:

!pip install evidently

Install the OpenAI library to create a toy app:

!pip install openai

Imports:

import os
import openai
import time
from tracely import init_tracing
from tracely import trace_event

3. Initialize Tracing

Initialize the OpenAI client. Pass the OpenAI API key as an environment variable:

# os.environ["OPENAI_API_KEY"] = "YOUR_KEY"
client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
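
If you work in a notebook and prefer not to store the key in code, you can prompt for it instead. This is an optional alternative using the standard-library getpass module, not a step required by the tutorial:

import getpass

# Prompt for the OpenAI key interactively so it is not written into the notebook
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")
client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])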

Set up the tracing parameters. Use export_name to name your tracing dataset so you can identify it in Evidently Cloud.

init_tracing(
    address="https://app.evidently.cloud/",   # Evidently Cloud URL
    api_key="EVIDENTLY_API_KEY",              # your Evidently API token
    project_id="YOUR_PROJECT_ID",             # the Project ID you saved earlier
    export_name="LLM tracing example",        # name of the tracing dataset
)
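
If you exported the Evidently credentials as environment variables earlier, you can read them back instead of pasting literals. A sketch assuming the illustrative variable names from step 1:

init_tracing(
    address="https://app.evidently.cloud/",
    api_key=os.getenv("EVIDENTLY_API_KEY"),        # Evidently API token from the environment
    project_id=os.getenv("EVIDENTLY_PROJECT_ID"),  # Project ID from the environment
    export_name="LLM tracing example",
)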

4. Trace a simple function

Create a simple function that sends questions to the OpenAI API and receives a completion. Set the list of questions:

question_list = [
    "What is Evidently Python library?",
    "What is LLM observability?",
    "How is MLOps different from LLMOps?"
]

Create a function and use the trace_event() decorator to trace it:

@trace_event()
def pseudo_assistant(question):
    system_prompt = "You are a helpful assistant. Please answer the following question concisely."
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]
    # Call the OpenAI API and return only the text of the completion
    completion = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return completion.choices[0].message.content

# Iterate over the list of questions and pass each to the assistant
for question in question_list:
    response = pseudo_assistant(question=question)
    time.sleep(1)

5. View Traces

Go to Evidently Cloud, open Datasets in the left menu (Datasets page), and view your Traces.

What's next?

Want to run evaluations over this data? See the Quickstart - LLM evaluations.
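
As a rough preview, here is a sketch of pulling the traced dataset back into Python. It assumes the CloudWorkspace client from the Evidently library and a load_dataset method; treat the exact calls as an assumption and check the evaluations Quickstart for the current API.

from evidently.ui.workspace.cloud import CloudWorkspace

# Connect to Evidently Cloud with the same API token
ws = CloudWorkspace(token="YOUR_EVIDENTLY_API_TOKEN", url="https://app.evidently.cloud")

# Assumed call: load the traced dataset by the ID shown on the Datasets page
traced_data = ws.load_dataset(dataset_id="YOUR_DATASET_ID")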

Check out the more in-depth Tutorial - Tracing to learn more about tracing.

