Model Comparison
v2.0 (Price Data Last Verified: March 21, 2026)

This page compares per-token pricing and context limits for Gemini 1.5 Pro (Model A) and Llama 3.3 70B from Meta (Model B), using a standard scenario of 10,000 input tokens and 10,000 output tokens.

Model A: Gemini 1.5 Pro
- Input / 1M tokens: $1.25
- Output / 1M tokens: $5.00
- Context Window: 2M tokens
| Model | Input Cost (10K tokens) | Output Cost (10K tokens) | Total Cost |
|---|---|---|---|
| Gemini 1.5 Pro | $0.0125 | $0.05 | $0.0625 |
| Llama 3.3 70B | $0.006 | $0.006 | $0.012 |
Best pick: Llama 3.3 70B for a 10,000 in / 10,000 out request profile.
Cost estimates are generated from published input and output token rates for each provider. We apply identical token scenarios to both models in this comparison, so the result reflects price differences only. Pricing values are reviewed weekly against official API documentation and updated when changes are verified.
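The cost derivation described above can be sketched in a few lines of Python. This is a minimal illustration, not an official calculator: the Gemini rates come from the price card on this page, while the Llama 3.3 70B per-1M rates ($0.60 input and output) are back-calculated from the table's 10,000-token costs and may not match Meta's or any provider's published price sheet.

```python
# Per-1M-token rates in USD. Gemini rates are from this page's price card;
# Llama rates are inferred from the comparison table (assumption, not verified).
RATES_PER_1M = {
    "Gemini 1.5 Pro": {"input": 1.25, "output": 5.00},
    "Llama 3.3 70B": {"input": 0.60, "output": 0.60},
}

def scenario_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Total cost for one request scenario, applying identical token
    counts to each model so the result reflects price differences only."""
    rates = RATES_PER_1M[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Reproduce the standard 10,000-in / 10,000-out scenario from the table.
for model in RATES_PER_1M:
    print(f"{model}: ${scenario_cost(model, 10_000, 10_000):.4f}")
```

Running this reproduces the table's totals ($0.0625 for Gemini 1.5 Pro, $0.012 for Llama 3.3 70B); swapping in different token counts models other request profiles.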