glm-4.7-thinking
GLM · Per Token
glm-4.7-thinking · Available · ⚡Cache 73% off · Chat · Tool Use · Vision · Context Window: 203K · Max Output Tokens: 64K
Pricing
| Per 1M Tokens | Official Price | LemonData Price |
|---|---|---|
| Input | $0.40 | $0.28 |
| Output | $1.50 | $1.05 |
| Cache Read | $0.11 | $0.11 |
| Cache Write | Free | Free |
Parameters
Context Window: 203K tokens
Max Output Tokens: 64K tokens
Best For
Chat: Conversational AI, customer support, and Q&A
Tool Use: Function calling and tool-augmented workflows
Vision: Image understanding, document analysis, and visual reasoning
Cost Calculator
Example: 1M input tokens + 0.5M output tokens ≈ $0.81 estimated monthly cost
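The calculator simply applies the discounted per-1M-token rates to your monthly volume. A minimal sketch, using the LemonData rates from the pricing table above (cache reads and writes ignored for simplicity):

```python
# Estimated monthly cost from LemonData's discounted per-1M-token rates
# (rates taken from the pricing table above; caching ignored).
INPUT_PER_M = 0.28   # USD per 1M input tokens
OUTPUT_PER_M = 1.05  # USD per 1M output tokens

def monthly_cost(input_m: float, output_m: float) -> float:
    """Estimated USD cost for monthly volumes given in millions of tokens."""
    return input_m * INPUT_PER_M + output_m * OUTPUT_PER_M

# 1M input + 0.5M output, as in the calculator example above
print(f"${monthly_cost(1.0, 0.5):.2f}")
```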
API Code Example
POST/v1/chat/completions
curl https://api.lemondata.cc/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxx" \
  -d '{
    "model": "glm-4.7-thinking",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

FAQ
How much does glm-4.7-thinking cost?
On LemonData, glm-4.7-thinking costs $0.28 per 1M input tokens and $1.05 per 1M output tokens, which is up to 30% off the official pricing.
What is glm-4.7-thinking best for?
glm-4.7-thinking excels at Chat, Tool Use, and Vision. Access it through LemonData's unified API with a single API key.
How do I use the glm-4.7-thinking API?
Get your API key from LemonData, set the base URL to https://api.lemondata.cc/v1, and use any OpenAI-compatible SDK. See the code examples above.
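For illustration, here is a stdlib-only Python sketch of the same request the curl example makes; any OpenAI-compatible SDK can be pointed at the same base URL instead. The `sk-xxx` key is a placeholder for your real LemonData key.

```python
import json
import urllib.request

# Same request as the curl example: POST /v1/chat/completions.
# "sk-xxx" is a placeholder; substitute your LemonData API key.
payload = {
    "model": "glm-4.7-thinking",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://api.lemondata.cc/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-xxx",
    },
)

# Uncomment to actually send the request (needs a valid key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```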