glm-4.6
GLM · Per Token
Status: Available · Cache 82% off · Capabilities: Chat, Tool Use, Prompt Cache
Context Window: 200K · Max Output Tokens: 128K

Pricing
| | Official Price | LemonData Price |
|---|---|---|
| Input | $0.60 | $0.42 |
| Output | $2.20 | $1.54 |
| Cache Read | $0.11 | $0.11 |
| Cache Write | Free | Free |
Parameters
Context Window: 200K tokens
Max Output Tokens: 128K tokens

Best For
Chat: conversational AI, customer support, and Q&A
Cost Calculator
Example: 1M input tokens + 0.5M output tokens ≈ $1.19 estimated monthly cost
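The estimate above can be reproduced from the pricing table. A minimal sketch, assuming cache reads and writes are excluded from the estimate (the function name is illustrative, not part of any LemonData SDK):

```python
# LemonData per-1M-token rates from the pricing table above.
INPUT_PER_1M = 0.42   # USD per 1M input tokens
OUTPUT_PER_1M = 1.54  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for the given monthly token volumes."""
    return (input_tokens / 1_000_000) * INPUT_PER_1M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_1M

# 1M input + 0.5M output tokens, as in the calculator example:
print(f"${estimate_cost(1_000_000, 500_000):.2f}")  # prints $1.19
```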
API Code Example
POST /v1/chat/completions

```shell
curl https://api.lemondata.cc/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxx" \
  -d '{
    "model": "glm-4.6",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

FAQ
How much does glm-4.6 cost?
On LemonData, glm-4.6 costs $0.42 per 1M input tokens and $1.54 per 1M output tokens, 30% off the official pricing.
What is glm-4.6 best for?
glm-4.6 is best suited for chat workloads such as conversational AI, customer support, and Q&A, and also supports tool use and prompt caching. Access it through LemonData's unified API with a single API key.
How do I use the glm-4.6 API?
Get your API key from LemonData, then call https://api.lemondata.cc/v1/chat/completions using any compatible SDK. See the code examples above for detailed integration.
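The curl example above can be mirrored in Python with only the standard library. A minimal sketch: it builds the same POST request; "sk-xxx" is the placeholder key from the example, and the actual network call is left commented out so you can substitute a real key first.

```python
import json
import urllib.request

API_KEY = "sk-xxx"  # placeholder; substitute your real LemonData key
URL = "https://api.lemondata.cc/v1/chat/completions"

# Same body as the curl example above.
payload = {
    "model": "glm-4.6",
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# With a valid key, send the request and print the model's reply:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Because the endpoint follows the common chat-completions shape, most OpenAI-compatible SDKs should also work by pointing their base URL at https://api.lemondata.cc/v1.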