# Trackly Full LLM Index

> This is the comprehensive resource for LLMs to understand the Trackly platform, its features, and documentation.

Base URL: https://tracklyai.in
Primary docs: https://tracklyai.in/docs
Primary resources hub: https://tracklyai.in/resources

## 🚀 Product Summary

Trackly is the AI Decision Engine for improving production AI systems. Trackly helps teams find what's wrong with their AI and fix it: automatically surface plain-English insights, detect critical paths, and optimize costs for your AI agents and chains.

## 🛠️ Core Features

### 1. AI Decision Engine

- **Auto Insights Engine**: Automatically surfaces plain-English findings from every run.
- **Critical Path Detection**: Automatically highlights the slowest, most expensive, and most failure-prone steps.
- **Run Comparison**: Side-by-side comparison of cost, latency, steps, and output diffs.
- **"What-If" Analysis**: Real-time cost simulation for model swaps.

### 2. Cost Intelligence & Optimization

- **Cost Intelligence**: Get model efficiency suggestions (e.g., switching to faster or cheaper models).
- **Feature-level Attribution**: Slice usage by functional area (e.g., chat, RAG, summary).
- **Live Model Pricing**: Ingest-time cost computation keeps figures accurate even as provider rates change.
- **Smart Alerts**: Interpreted real-time notifications for budget thresholds and usage spikes.

## 🏗️ Supported Providers

- OpenAI (GPT-4o, GPT-4-turbo)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Opus/Haiku)
- Google Gemini (1.5 Pro, 1.5 Flash)
- Groq (Llama 3.1, 3.2, 3.3)
- Ollama (local LLM tracking)
- Mistral, Together AI, Fireworks, AWS Bedrock, Cohere

## 💰 Pricing Plans

- **Starter ($0/mo)**: 1,000,000 tokens, 3 projects, 7-day retention.
- **Pro ($29/mo)**: 5,000,000 tokens, unlimited projects, 30-day retention, team collaboration.
- **Scale ($99/mo)**: 10,000,000 tokens, 90-day retention, custom feature tags, priority support.
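The cost-side features above reduce to simple per-token arithmetic. As a minimal sketch, assuming placeholder per-1M-token rates (these are illustrative numbers, not live Trackly pricing data, and the function names are hypothetical), ingest-time cost computation and a what-if model swap might look like:

```python
# Hypothetical per-1M-token rates in USD. Real provider rates change over
# time, which is why cost is best computed at ingest with live pricing.
PRICES = {
    "gpt-4o":            {"in": 2.50, "out": 10.00},
    "claude-3.5-sonnet": {"in": 3.00, "out": 15.00},
}

def run_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of one run: input and output tokens times their per-token rates."""
    p = PRICES[model]
    return (prompt_tokens * p["in"] + completion_tokens * p["out"]) / 1_000_000

def what_if(current: str, candidate: str,
            prompt_tokens: int, completion_tokens: int) -> float:
    """Projected savings (negative if more expensive) from running the
    same traffic on `candidate` instead of `current`."""
    return (run_cost(current, prompt_tokens, completion_tokens)
            - run_cost(candidate, prompt_tokens, completion_tokens))
```

A "what-if" model swap in this sketch is just the cost delta over identical observed traffic, which is why recording real token counts per run matters more than benchmarking synthetic prompts.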
## 📚 Resource Chapters

- AI Agents: https://tracklyai.in/resources/ai-agents - From zero to building production-grade AI agents.
- RAG: https://tracklyai.in/resources/rag - From basic retrieval to more adaptive RAG workflows.
- LangChain: https://tracklyai.in/resources/langchain - Practical LangChain guides for getting from idea to working agent.
- LLM Costs: https://tracklyai.in/resources/llm-costs - Understand provider pricing, token costs, and how to monitor spend.
- Ship AI Without Guesswork: https://tracklyai.in/resources/ship-ai-without-guesswork - Practical Trackly workflows for tracking tokens, running model what-if analysis, and tracing real agent flows.

## 📝 All Resource Articles

- Track Token Usage Like a Product Team: https://tracklyai.in/resources/ship-ai-without-guesswork/track-token-usage-like-a-product-team - Instrument a real Python feature with Trackly so you can see prompt tokens, completion tokens, spend, and latency by feature.
- Use Playground Before You Switch Models: https://tracklyai.in/resources/ship-ai-without-guesswork/use-playground-before-you-switch-models - Track real traffic in Python, then use Trackly Playground to compare your current model against cheaper or faster alternatives before shipping a change.
- Trace Agent Runs With Graphs: https://tracklyai.in/resources/ship-ai-without-guesswork/trace-agent-runs-with-graphs - Use Trackly traces, spans, and graph views to understand where an agent workflow spent time, tokens, and money.
- What are AI Agents?: https://tracklyai.in/resources/ai-agents/what-are-agents - Understand the agent loop, tools, memory, and why agents behave differently from a plain chatbot.
- Agent Tools and Memory Explained: https://tracklyai.in/resources/ai-agents/agent-tools-and-memory-explained - Learn how tools let agents act and how memory helps them stay coherent across steps.
- How Agent Loops Work: https://tracklyai.in/resources/ai-agents/how-agent-loops-work - Understand observe-think-act loops, stopping conditions, and the tradeoffs behind iterative agents.
- Planning and Reflection in AI Agents: https://tracklyai.in/resources/ai-agents/planning-and-reflection-in-ai-agents - Learn why deliberate planning and self-review often improve agent quality on multi-step tasks.
- Multi-Agent Systems with LangGraph: https://tracklyai.in/resources/ai-agents/multi-agent-systems-with-langgraph - Understand coordinator, specialist, and reviewer patterns for multi-agent systems built with LangGraph.
- How to Build a RAG Pipeline: https://tracklyai.in/resources/rag/how-to-build-a-rag-pipeline - Build a practical retrieval augmented generation pipeline in Python from chunking to answer generation.
- Advanced RAG Patterns: https://tracklyai.in/resources/rag/advanced-rag-patterns - Improve retrieval quality with hybrid search, reranking, better chunking, and query transformation.
- Agentic RAG vs Naive RAG: https://tracklyai.in/resources/rag/agentic-rag-vs-naive-rag - Compare single-pass retrieval pipelines with agentic systems that can search, retry, and verify.
- What is Graph RAG?: https://tracklyai.in/resources/rag/what-is-graph-rag - Learn what Graph RAG is, when it helps, and why relationships sometimes matter more than raw similarity.
- Building Your First LangChain Agent: https://tracklyai.in/resources/langchain/building-your-first-langchain-agent - Build a practical first LangChain agent with one model, one tool, and one clear task.
- Chains and Runnables in LangChain: https://tracklyai.in/resources/langchain/chains-and-runnables-in-langchain - Understand how LangChain composes prompts, models, and parsers with the Runnable interface.
- LangChain Agents Explained: https://tracklyai.in/resources/langchain/langchain-agents-explained - Understand how LangChain agents decide between tools, manage intermediate steps, and where they fit in real apps.
- Understanding Token Costs: https://tracklyai.in/resources/llm-costs/understanding-token-costs - Learn how prompt tokens, output tokens, and request shape turn into real LLM cost.
- Groq vs Together AI vs Fireworks: https://tracklyai.in/resources/llm-costs/groq-vs-together-ai-vs-fireworks - A practical framework for comparing LLM providers by speed, cost behavior, and integration fit.
- How to Track LLM API Costs in Python: https://tracklyai.in/resources/llm-costs/how-to-track-llm-api-costs-in-python - Track token usage, latency, and estimated spend in Python with Trackly and a LangChain callback.

## 📖 Documentation Focus

- Installing the Trackly SDK: https://tracklyai.in/docs#installation
- Using LangChain Callbacks: https://tracklyai.in/docs#callbacks
- Native Gemini/Anthropic Wrappers: https://tracklyai.in/docs#native-sdks
- Tracing and Agent Debugging: https://tracklyai.in/docs#tracing
- Technical Ingest API Specs: https://tracklyai.in/docs#backend-api

## 📍 Core Pages

- Home: https://tracklyai.in
- Docs: https://tracklyai.in/docs
- Resources: https://tracklyai.in/resources
- Changelog: https://tracklyai.in/changelogs
- llms.txt: https://tracklyai.in/llms.txt
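Feature-level attribution, mentioned in the product summary, means grouping usage by functional area rather than by model or endpoint. The sketch below illustrates the idea generically; it is not Trackly's SDK API, and the record shape and field names are assumptions for illustration only:

```python
from collections import defaultdict

def spend_by_feature(rows):
    """Aggregate tokens and cost per functional area (chat, RAG, summary, ...).

    Each row is a hypothetical usage record a team might log per LLM call,
    tagged with the product feature that triggered it.
    """
    totals = defaultdict(lambda: {"tokens": 0, "cost": 0.0})
    for r in rows:
        t = totals[r["feature"]]
        t["tokens"] += r["prompt_tokens"] + r["completion_tokens"]
        t["cost"] += r["cost"]
    return dict(totals)

# Illustrative records only; real values come from instrumented LLM calls.
records = [
    {"feature": "chat",    "prompt_tokens": 1200, "completion_tokens": 300, "cost": 0.012},
    {"feature": "rag",     "prompt_tokens": 4000, "completion_tokens": 500, "cost": 0.031},
    {"feature": "chat",    "prompt_tokens": 900,  "completion_tokens": 250, "cost": 0.009},
    {"feature": "summary", "prompt_tokens": 2500, "completion_tokens": 400, "cost": 0.018},
]
```

Tagging every call with a feature label at log time is what makes slicing like this possible later; the aggregation itself is trivial once the tags exist.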