TL;DR
- Install SDKs + OpenTelemetry
- Configure the Brixo OTLP endpoint and API key
- Initialize Traceloop (OpenLLMetry) and OpenInference
- Run your app — traces flow to Brixo
Prerequisites
- Python 3.9+
- Brixo account + API key
- Internet access from your app to Brixo ingest (OTLP/HTTP)
Set credentials in a .env file or export them as environment variables. Brixo authenticates OTLP requests via the x-api-key header.
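A minimal .env sketch; the variable names are the ones referenced throughout this guide, and the host below is a placeholder for your real Brixo ingest URL:

```bash
# .env (placeholder values)
BRIXO_OTLP_ENDPOINT=https://ingest.brixo.example/v1/traces
BRIXO_API_KEY=your-brixo-api-key
```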
Step 1 — Install dependencies
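A sketch of the install step; the OpenInference packages are per-framework, so the two below (OpenAI and LangChain) are examples to swap for your stack:

```bash
pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http \
    traceloop-sdk \
    openinference-instrumentation-openai openinference-instrumentation-langchain \
    python-dotenv
```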
Step 2 — Configure OpenTelemetry to export to Brixo
Create telemetry.py:
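A minimal sketch, assuming the environment variables above and direct OTLP/HTTP export (no Collector, per the FAQ below):

```python
# telemetry.py: configure a tracer provider that exports spans to Brixo
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def init_telemetry(service_name: str = "my-llm-app") -> TracerProvider:
    """Set up the global provider; call this before importing LLM frameworks."""
    exporter = OTLPSpanExporter(
        endpoint=os.environ["BRIXO_OTLP_ENDPOINT"],
        headers={"x-api-key": os.environ["BRIXO_API_KEY"]},  # Brixo auth header
    )
    provider = TracerProvider(
        resource=Resource.create({"service.name": service_name})
    )
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)
    return provider
```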
Step 3 — Initialize OpenLLMetry (Traceloop)
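A sketch, assuming a Traceloop SDK version that accepts a custom exporter; passing the exporter reuses the Brixo OTLP configuration from Step 2 instead of Traceloop's default endpoint:

```python
# initialize OpenLLMetry before importing LLM frameworks
import os

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from traceloop.sdk import Traceloop

Traceloop.init(
    app_name="my-llm-app",  # example name
    exporter=OTLPSpanExporter(
        endpoint=os.environ["BRIXO_OTLP_ENDPOINT"],
        headers={"x-api-key": os.environ["BRIXO_API_KEY"]},
    ),
    disable_batch=True,  # per Troubleshooting: export spans immediately in dev
)
```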
Step 4 — Initialize OpenInference
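A sketch using the LangChain instrumentor as an example; each OpenInference instrumentor attaches to the tracer provider from Step 2:

```python
from openinference.instrumentation.langchain import LangChainInstrumentor

from telemetry import init_telemetry

# attach the instrumentor to the Brixo-exporting provider from Step 2
provider = init_telemetry()
LangChainInstrumentor().instrument(tracer_provider=provider)
```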
Step 5 — Run your app
Example app:
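A minimal runnable sketch, assuming the OpenAI Python SDK; note that telemetry is initialized before the framework imports, per Troubleshooting:

```python
# app.py: wire up Steps 2-4, then make an instrumented LLM call
import os

from dotenv import load_dotenv

load_dotenv()  # loads BRIXO_* and OPENAI_API_KEY from .env

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from traceloop.sdk import Traceloop

from telemetry import init_telemetry

init_telemetry()  # Step 2
Traceloop.init(   # Step 3
    app_name="my-llm-app",
    exporter=OTLPSpanExporter(
        endpoint=os.environ["BRIXO_OTLP_ENDPOINT"],
        headers={"x-api-key": os.environ["BRIXO_API_KEY"]},
    ),
)

from openai import OpenAI  # import frameworks after instrumentation is active

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # example model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)
```

Run it with python app.py; the chat completion should show up as a trace in Brixo.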
Advanced examples
LlamaIndex
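A hedged sketch, assuming openinference-instrumentation-llama-index is installed and a data/ directory of documents exists:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

from telemetry import init_telemetry

provider = init_telemetry()
LlamaIndexInstrumentor().instrument(tracer_provider=provider)

# a small RAG query; retrieval and LLM steps each become spans
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What is this document about?"))
```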
CrewAI
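A hedged sketch, assuming crewai and openinference-instrumentation-crewai are installed:

```python
from crewai import Agent, Crew, Task
from openinference.instrumentation.crewai import CrewAIInstrumentor

from telemetry import init_telemetry

provider = init_telemetry()
CrewAIInstrumentor().instrument(tracer_provider=provider)

# a single-agent crew; agent and task execution are captured as spans
researcher = Agent(
    role="Researcher",
    goal="Summarize a topic in two sentences",
    backstory="A concise technical writer.",
)
task = Task(
    description="Summarize OpenTelemetry in two sentences.",
    expected_output="A two-sentence summary.",
    agent=researcher,
)
print(Crew(agents=[researcher], tasks=[task]).kickoff())
```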
Verify locally (optional)
Test traces locally using Phoenix before sending to Brixo: point BRIXO_OTLP_ENDPOINT=http://localhost:6006/v1/traces at a local Phoenix instance and verify that spans appear, as shown below.
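A sketch, assuming the arize-phoenix package; Phoenix listens on port 6006 by default:

```bash
pip install arize-phoenix
phoenix serve &   # local trace UI at http://localhost:6006
BRIXO_OTLP_ENDPOINT=http://localhost:6006/v1/traces python app.py
```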
Sampling and PII
- Sampling: use ParentBased(TraceIdRatioBased(0.2)) to manage span volume (see the sketch after this list).
- PII: redact sensitive data before export. Both OpenLLMetry and OpenInference support custom attribute filtering.
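A sketch of wiring that sampler into the provider from Step 2; the 0.2 ratio keeps roughly 20% of new traces, and child spans follow their parent's decision:

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# use this provider in telemetry.py in place of the unsampled default
provider = TracerProvider(
    sampler=ParentBased(TraceIdRatioBased(0.2)),
)
```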
Troubleshooting
No traces:
- Check BRIXO_API_KEY and the endpoint
- Ensure telemetry is initialized before other imports
- Run with OTEL_LOG_LEVEL=debug
- Disable duplicate instrumentations when layering frameworks
- Use disable_batch=True with Traceloop in development
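For example, enabling the debug logging the list mentions when launching the app from Step 5:

```bash
OTEL_LOG_LEVEL=debug python app.py
```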
FAQ
Why both OpenLLMetry and OpenInference?
They capture different layers of context — OpenLLMetry focuses on LLM/agent orchestration, OpenInference on tool and framework spans. Brixo merges and normalizes both for full insight.
Do I need a Collector?
No. Brixo supports direct OTLP/HTTP export.
Which frameworks are supported?
Anything OpenLLMetry or OpenInference supports: OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, AutoGen, Bedrock, etc.
Next steps
- Add business tags like brixo.outcome=booked_meeting (see the sketch after this list)
- Add LLM evaluations for MES/quality analytics
- Explore the Brixo UI: Traces → Sessions → Dashboards
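A sketch of the business-tag idea using the standard OpenTelemetry span API; the key/value pair comes from the list above, and where you set it depends on your app:

```python
from opentelemetry import trace

# annotate the active span so the outcome is filterable in Brixo
trace.get_current_span().set_attribute("brixo.outcome", "booked_meeting")
```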
