Quick Start Guide
Get LangFuse tracing working in your application in just a few minutes. This guide assumes you have access to the Justice AI LangFuse instance.
What You’ll Learn
By the end of this tutorial, you’ll be able to:
- Set up LangFuse in your environment
- Create your first trace
- View traces in the LangFuse dashboard
- Understand the basic concepts of observability
Prerequisites
- Python 3.8+
- Access to the Justice AI LangFuse instance
- Your LangFuse API keys (contact the Justice AI team if you don’t have these)
Step 1: Install the SDK
pip install langfuse
Why this step matters: The LangFuse SDK handles all the communication with the tracing server and provides convenient decorators for your code.
Step 2: Set Up Your Environment
Retrieve these variables from your LangFuse dashboard: click on your project > Settings (cog icon) > API Keys. Note that traces are grouped by project.
Create a `.env` file in your project root:
```
# LangFuse Configuration
LANGFUSE_HOST=<YOUR_LANGFUSE_HOST>  # Replace with your actual LangFuse URL
LANGFUSE_PUBLIC_KEY=your_public_key_here
LANGFUSE_SECRET_KEY=your_secret_key_here
```
Security Note: Never commit your secret keys to version control. Use environment variables or secure configuration management.
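It also helps to fail fast at startup if any of the three variables are missing, rather than getting a confusing connection error later. A minimal stdlib-only sketch (the `load_langfuse_config` helper is ours, not part of the SDK):

```python
import os

REQUIRED_VARS = ("LANGFUSE_HOST", "LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY")

def load_langfuse_config() -> dict:
    """Read the LangFuse settings from the environment, failing fast if any are missing."""
    missing = [name for name in REQUIRED_VARS if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```

Call this once at startup and pass the resulting values to the client constructor; a clear error message here saves debugging time later.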
Step 3: Write Your First Traced Function
Let’s create a simple example to see tracing in action:
```python
import os
from langfuse import Langfuse
from langfuse.decorators import observe

# Initialize LangFuse
langfuse = Langfuse(
    public_key=os.getenv("LANGFUSE_PUBLIC_KEY"),
    secret_key=os.getenv("LANGFUSE_SECRET_KEY"),
    host=os.getenv("LANGFUSE_HOST")
)

# Simple function tracing with decorator
@observe()
def analyze_text(text: str) -> dict:
    """Analyze text and return basic metrics."""
    word_count = len(text.split())
    char_count = len(text)

    # Simulate some processing time
    import time
    time.sleep(0.1)

    return {
        "word_count": word_count,
        "char_count": char_count,
        "processed": True
    }

# Run the function
if __name__ == "__main__":
    sample_text = "Hello, this is a sample text for analysis."
    result = analyze_text(sample_text)
    print(f"Analysis result: {result}")

    # Ensure traces are sent
    langfuse.flush()
```
What’s happening: The `@observe()` decorator automatically creates a trace for your function, capturing inputs, outputs, and execution time.
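To build intuition for what the decorator does, here is a toy stand-in — not the real SDK — that captures the same three things: inputs, output, and execution time:

```python
import functools
import time

def toy_observe(func):
    """Toy version of @observe(): records inputs, output, and duration on the wrapper."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        # The real SDK ships this record to the server; we just store it locally.
        wrapper.last_trace = {
            "name": func.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "duration_s": time.time() - start,
        }
        return result
    return wrapper

@toy_observe
def analyze_text(text: str) -> dict:
    return {"word_count": len(text.split()), "char_count": len(text)}
```

The real `@observe()` does this and more: it also nests observations when decorated functions call each other, and sends everything to the LangFuse server in the background.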
Step 4: Run Your Code
Save the code above as `first_trace.py` and run it:
python first_trace.py
You should see output like:
Analysis result: {'word_count': 8, 'char_count': 42, 'processed': True}
Step 5: View Your Traces
- Navigate to your LangFuse dashboard
- Log in with your credentials
- Look for your traces in the Traces tab
- Click on the trace named `analyze_text` to see detailed execution information
You should see:
- Function name and execution time
- Input parameters
- Output results
- Any errors (if they occurred)
Step 6: Try Manual Tracing
Now let’s try creating traces manually for more control:
```python
def manual_trace_example():
    """Example of manual trace creation."""
    # Create a trace
    trace = langfuse.trace(
        name="text_processing_pipeline",
        input={"text": "Sample input text"},
        tags=["tutorial", "manual"]
    )

    # Add a span for preprocessing
    preprocessing_span = trace.span(
        name="preprocessing",
        input={"raw_text": "Sample input text"}
    )

    # Simulate preprocessing
    cleaned_text = "sample input text"
    preprocessing_span.update(output={"cleaned_text": cleaned_text})
    preprocessing_span.end()

    # Add a span for analysis
    analysis_span = trace.span(
        name="analysis",
        input={"text": cleaned_text}
    )

    # Simulate analysis
    analysis_result = {"sentiment": "neutral", "confidence": 0.8}
    analysis_span.update(output=analysis_result)
    analysis_span.end()

    # Update the main trace
    trace.update(output={"result": analysis_result})

    return analysis_result

# Run the manual example
if __name__ == "__main__":
    result = manual_trace_example()
    print(f"Manual trace result: {result}")
    langfuse.flush()
```
What’s happening: Manual tracing gives you complete control over what gets traced and when. You can create nested spans to represent different parts of your workflow.
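The trace/span relationship above forms a small tree: one trace at the root, with spans (and nested spans) as children. A toy sketch of that shape — an illustration only, not the SDK’s internal data model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ToySpan:
    """Toy model of a span: a named unit of work with input, output, and children."""
    name: str
    input: dict = field(default_factory=dict)
    output: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def span(self, name: str, input: Optional[dict] = None) -> "ToySpan":
        """Create a nested child span, mirroring trace.span(...)."""
        child = ToySpan(name=name, input=input or {})
        self.children.append(child)
        return child

# Build the same shape as the manual example above.
trace = ToySpan(name="text_processing_pipeline")
pre = trace.span("preprocessing", {"raw_text": "Sample input text"})
pre.output = {"cleaned_text": "sample input text"}
analysis = trace.span("analysis", {"text": "sample input text"})
analysis.output = {"sentiment": "neutral", "confidence": 0.8}
```

When you open the trace in the dashboard, this is essentially the tree you are looking at: the pipeline at the top, with each span’s input and output one level down.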
What You’ve Learned
Congratulations! You’ve successfully:
- ✅ Installed the LangFuse SDK
- ✅ Set up your environment variables
- ✅ Created your first traced function
- ✅ Viewed traces in the dashboard
- ✅ Tried manual tracing for more control
What’s Next?
Now that you’ve successfully traced your first request, you can:
- Python SDK Guide - Learn the full SDK capabilities
- Raw Requests Guide - Use HTTP requests directly
- OpenTelemetry Guide - Implement standardized tracing
- Azure Deployment - Deploy your own LangFuse instance
🎉 You’ve completed the LangFuse quickstart! Choose your next step based on your integration needs.
Common Issues
Connection Problems
- Check that your `LANGFUSE_HOST` is correct
- Verify your API keys are valid
- Ensure network connectivity to the LangFuse instance
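A quick way to rule out connectivity problems is to hit the instance’s public health endpoint (LangFuse serves one under `/api/public/health`; confirm the exact path for your deployment’s version). A minimal stdlib sketch:

```python
import urllib.request

def health_url(host: str) -> str:
    """Build the health-check URL for a LangFuse host."""
    return host.rstrip("/") + "/api/public/health"

def check_connection(host: str, timeout: float = 5.0) -> bool:
    """Return True if the LangFuse instance answers the health check."""
    try:
        with urllib.request.urlopen(health_url(host), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers DNS failures, refused connections, and timeouts.
        return False
```

If this returns `False` for your `LANGFUSE_HOST`, the problem is network access or the URL itself, not your code or API keys.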
Missing Traces
- Call `langfuse.flush()` before your application exits
- Check for any error messages in your application logs
- Verify your trace names don’t contain special characters
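If forgetting to flush is a recurring problem, one pattern that helps is registering the flush to run at interpreter exit. Sketched here with a stub client so the example is self-contained (in real code you would register the client constructed in Step 3):

```python
import atexit

class StubClient:
    """Stand-in for the Langfuse client, for illustration only."""
    def __init__(self):
        self.flushed = False

    def flush(self):
        self.flushed = True

client = StubClient()

# Send any buffered traces on normal interpreter shutdown.
atexit.register(client.flush)
```

Note that `atexit` handlers run only on normal shutdown, not on a hard kill, so an explicit `flush()` at the end of your main path is still good practice.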
You should now have basic tracing working! The LangFuse dashboard will show you detailed information about your application’s execution.