Unified AI model API gateway.
Fully OpenAI-compatible. Change one line of code and connect to top-tier models worldwide with high concurrency and low-latency enterprise routing.
Seamless access, one gateway
Keep your current code. LetAI handles the routing underneath.
Keep your existing SDKs and agent tooling. The platform's dispatch layer handles route selection and sends your traffic to top-tier model capacity across regions.
OpenAI SDK
/openai/*
Anthropic SDK
/anthropic/*
Gen AI SDK
/gemini/*
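The SDK-to-prefix mapping above comes down to pointing each SDK at the matching gateway path. A minimal sketch of that lookup; the host `api.letai.example` is a placeholder for illustration, not the real endpoint.

```python
# Placeholder gateway host for illustration only.
GATEWAY = "https://api.letai.example"

# Each upstream SDK family maps to its own path prefix on the gateway.
PREFIXES = {
    "openai": "/openai",
    "anthropic": "/anthropic",
    "gemini": "/gemini",
}

def base_url(provider: str) -> str:
    """Return the base URL an SDK should be pointed at for a given provider."""
    return GATEWAY + PREFIXES[provider]

print(base_url("openai"))  # https://api.letai.example/openai
```

Switching providers then means changing only the one line that configures the SDK's base URL.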
Drop-in code compatibility
Stay compatible with the OpenAI request shape and most community clients so teams can migrate without rebuilding their application layer.
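Preserving the OpenAI request shape means the exact same JSON body an existing client sends keeps working; only the base URL changes. A stdlib-only sketch of such a request, with a placeholder host and API key (the request is built but not sent):

```python
import json
import urllib.request

# Placeholder endpoint and key for illustration only.
BASE_URL = "https://api.letai.example/openai/v1"
API_KEY = "sk-..."

# The standard OpenAI chat-completions request shape: unchanged clients
# keep sending exactly this body.
body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(body).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# `req` is ready to send with urllib.request.urlopen(req); not sent here.
```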
Fast and highly available
Deploy across global regions with route-level failover, so latency, fallback, and uptime are treated as product capabilities rather than ad-hoc scripts.
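Route-level failover, reduced to its essence: try capacity in priority order and fall back when a route fails. A toy sketch of the idea only, not the platform's actual implementation; the region names are made up.

```python
# Toy sketch of route-level failover: try upstream regions in priority
# order and fall back when one fails. Illustrative only.

def dispatch(regions, send):
    """Send a request via the first region that succeeds.

    `regions` is an ordered list of region names (lowest latency first);
    `send` is a callable that raises ConnectionError on failure.
    """
    last_error = None
    for region in regions:
        try:
            return send(region)
        except ConnectionError as exc:
            last_error = exc  # record and fall through to the next region
    raise RuntimeError("all regions failed") from last_error

# Example: the primary region is down, traffic falls back to the next one.
def fake_send(region):
    if region == "us-east":
        raise ConnectionError("us-east unavailable")
    return f"served by {region}"

print(dispatch(["us-east", "eu-west", "ap-south"], fake_send))
# served by eu-west
```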
Transparent runtime billing
Track token consumption, runtime logs and quota reports in one place so every unit of spend stays visible and explainable.
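Tracking starts with the `usage` block that every OpenAI-compatible completion response already carries. A short sketch that tallies token consumption per model from such responses; the payloads below are illustrative samples, not real traffic.

```python
from collections import defaultdict

# Illustrative sample responses; every OpenAI-compatible completion
# response carries a `usage` block like this.
responses = [
    {"model": "gpt-4o",
     "usage": {"prompt_tokens": 1200, "completion_tokens": 300, "total_tokens": 1500}},
    {"model": "gpt-4o",
     "usage": {"prompt_tokens": 800, "completion_tokens": 200, "total_tokens": 1000}},
    {"model": "claude-3-5-sonnet",
     "usage": {"prompt_tokens": 500, "completion_tokens": 100, "total_tokens": 600}},
]

# Tally token consumption per model, so spend stays visible and explainable.
totals = defaultdict(int)
for r in responses:
    totals[r["model"]] += r["usage"]["total_tokens"]

print(dict(totals))  # {'gpt-4o': 2500, 'claude-3-5-sonnet': 600}
```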
Simple and transparent pricing
No monthly fee, no hidden caps. You pay only for actual usage, priced in $ per 1M tokens.
GPT-4o: $5.00 input / $15.00 output
Claude 3.5 Sonnet: $3.00 input / $15.00 output
Gemini 1.5 Pro: $1.25 input / $5.00 output
GPT-4 Turbo: $10.00 input / $30.00 output
Claude 3 Opus: $15.00 input / $75.00 output
Llama 3 70B: $0.50 input / $0.50 output
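The $/1M-token arithmetic behind these prices fits in one helper. The rates mirror the pricing above; the model keys are illustrative identifiers.

```python
# $ per 1M tokens as (input_rate, output_rate), mirroring the pricing above.
PRICES = {
    "gpt-4o": (5.00, 15.00),
    "claude-3.5-sonnet": (3.00, 15.00),
    "gemini-1.5-pro": (1.25, 5.00),
    "gpt-4-turbo": (10.00, 30.00),
    "claude-3-opus": (15.00, 75.00),
    "llama-3-70b": (0.50, 0.50),
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Usage-based cost in dollars: tokens / 1M times the per-model rate."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens / 1_000_000 * in_rate
            + output_tokens / 1_000_000 * out_rate)

# 1M input + 1M output tokens of GPT-4o costs $5.00 + $15.00 = $20.00.
print(cost("gpt-4o", 1_000_000, 1_000_000))  # 20.0
```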
View the full supported model list and multipliers