Migrate from OpenAI to LemonData in 5 Minutes
Switching from OpenAI's official API to LemonData takes two line changes. Your existing code, prompts, and model names all work as-is. You also get access to 300+ models from OpenAI, Anthropic, Google, DeepSeek, and more through a single API key.
The Short Version
- Sign up at lemondata.cc and grab an API key (you get $1 free credit)
- Replace your `base_url` and `api_key`
- Done. Everything else stays the same.
Python (OpenAI SDK)
```python
# Before: OpenAI official
from openai import OpenAI

client = OpenAI(api_key="sk-openai-xxx")

# After: LemonData (change 2 lines)
from openai import OpenAI

client = OpenAI(
    api_key="sk-lemon-xxx",
    base_url="https://api.lemondata.cc/v1"
)

# Everything else stays the same
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
Streaming, function calling, and vision all work identically; the OpenAI Python SDK sends requests to whatever `base_url` you point it at.
Node.js (OpenAI SDK)
```javascript
// Before: OpenAI official
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: 'sk-openai-xxx' });

// After: LemonData (change 2 lines)
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'sk-lemon-xxx',
  baseURL: 'https://api.lemondata.cc/v1',
});

// Everything else stays the same
const completion = await openai.chat.completions.create({
  model: 'gpt-4.1',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(completion.choices[0].message.content);
```
Note: it's `baseURL` (camelCase) in the Node.js SDK, not `base_url`.
curl
```shell
# Before: OpenAI official
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer sk-openai-xxx" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4.1","messages":[{"role":"user","content":"Hello"}]}'

# After: LemonData (change URL and key)
curl https://api.lemondata.cc/v1/chat/completions \
  -H "Authorization: Bearer sk-lemon-xxx" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4.1","messages":[{"role":"user","content":"Hello"}]}'
```
Same endpoint path, same request body, same response format.
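Since the curl call is just a plain HTTP POST, you can reproduce it with nothing but the Python standard library. A minimal sketch follows; the `LEMONDATA_API_KEY` environment variable name is an assumption for this example, and the request is only sent if that variable is actually set:

```python
import json
import os
import urllib.request

# Build the exact request body the curl examples send.
payload = {
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    "https://api.lemondata.cc/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('LEMONDATA_API_KEY', 'sk-lemon-xxx')}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Only send the request when a real key is configured.
if os.environ.get("LEMONDATA_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

This is exactly the request the SDKs construct for you; nothing provider-specific changes besides the host and the key.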
Environment Variable Approach
If your code reads from environment variables (which it should), you don't even need to touch code:
```shell
# Before
export OPENAI_API_KEY="sk-openai-xxx"
export OPENAI_BASE_URL="https://api.openai.com/v1"

# After
export OPENAI_API_KEY="sk-lemon-xxx"
export OPENAI_BASE_URL="https://api.lemondata.cc/v1"
```
The OpenAI SDK automatically reads `OPENAI_API_KEY` and `OPENAI_BASE_URL` from the environment. Zero code changes.
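The precedence is: explicit constructor arguments win, environment variables fill the gaps. The `resolve_config` helper below is hypothetical, written only to illustrate that fallback order; it mimics the SDK's behavior rather than reproducing its actual code:

```python
import os

def resolve_config(api_key=None, base_url=None):
    """Mimic the SDK's fallback: explicit args win, env vars fill gaps."""
    return {
        "api_key": api_key or os.environ.get("OPENAI_API_KEY"),
        "base_url": base_url
        or os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    }

os.environ["OPENAI_API_KEY"] = "sk-lemon-xxx"
os.environ["OPENAI_BASE_URL"] = "https://api.lemondata.cc/v1"

print(resolve_config())                    # both values picked up from the environment
print(resolve_config(api_key="sk-other"))  # an explicit argument overrides the env
```

This is why exporting the two variables is enough: as long as your code never passes an explicit `base_url`, the environment decides where requests go.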
What You Get After Migration
Once you're on LemonData, you keep full OpenAI compatibility and gain access to additional capabilities:
300+ Models, One API Key
Your existing OpenAI code now works with Claude, Gemini, DeepSeek, Mistral, and hundreds more; just change the `model` parameter:
```python
# GPT-4.1 (OpenAI): $2.00/$8.00 per 1M tokens
response = client.chat.completions.create(model="gpt-4.1", messages=messages)

# Claude Sonnet 4.6 (Anthropic): $3.00/$15.00 per 1M tokens
response = client.chat.completions.create(model="claude-sonnet-4-6", messages=messages)

# Gemini 2.5 Pro (Google)
response = client.chat.completions.create(model="gemini-2.5-pro", messages=messages)

# DeepSeek V3: $0.28/$0.42 per 1M tokens (use "deepseek-chat" or alias "deepseek-v3")
response = client.chat.completions.create(model="deepseek-chat", messages=messages)
```
Multi-channel redundancy means if one upstream provider has issues, the gateway automatically routes to an alternative channel. No code changes needed.
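To picture what the gateway is doing, here is a conceptual sketch of server-side channel failover. This is illustrative only; none of it runs in your client, and the channel names are invented for the example:

```python
def complete_with_failover(channels, request):
    """Try each upstream channel in order; return the first success."""
    errors = []
    for name, call in channels:
        try:
            return name, call(request)
        except RuntimeError as exc:  # stand-in for an upstream failure
            errors.append((name, str(exc)))
    raise RuntimeError(f"all channels failed: {errors}")

def failing_channel(request):
    # Simulate an upstream provider outage.
    raise RuntimeError("503 from upstream")

def healthy_channel(request):
    # Simulate a healthy alternative channel.
    return {"choices": [{"message": {"content": "Hello!"}}]}

channels = [("primary", failing_channel), ("fallback", healthy_channel)]
used, resp = complete_with_failover(channels, {"model": "gpt-4.1"})
print(used)  # the request was served by the fallback channel
```

From the client's perspective the failed attempt is invisible: the same request either succeeds or, only if every channel fails, returns an error.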
Native Protocol Access (Optional)
If you want to use Anthropic or Google models with their full native capabilities (extended thinking, prompt caching with `cache_control`, Google search grounding), LemonData supports their native protocols through the same base URL:
```python
# Anthropic native: use the Anthropic SDK.
# Extended thinking, cache_control, and Citations all work natively.
from anthropic import Anthropic

client = Anthropic(
    api_key="sk-lemon-xxx",
    base_url="https://api.lemondata.cc"  # no /v1: the Anthropic SDK appends /v1/messages itself
)

# Google Gemini native: use the Google SDK.
# Search grounding and grounding_metadata work natively.
from google import genai

client = genai.Client(
    api_key="sk-lemon-xxx",
    http_options={"base_url": "https://api.lemondata.cc"}  # no path suffix: the SDK appends /v1beta/models/...
)
```
This is entirely optional. The OpenAI-compatible endpoint works for all models. But if you need Anthropic's extended thinking or Google's grounding, native protocol access gives you those features without any format conversion loss.
Common Integration Migrations
Cursor
Settings → Models → OpenAI API Key:
- API Key: `sk-lemon-xxx`
- Base URL: `https://api.lemondata.cc/v1`
LangChain
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4.1",
    api_key="sk-lemon-xxx",
    base_url="https://api.lemondata.cc/v1"
)
```
Vercel AI SDK
```javascript
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const lemondata = createOpenAI({
  apiKey: 'sk-lemon-xxx',
  baseURL: 'https://api.lemondata.cc/v1',
});

const result = await generateText({
  model: lemondata('gpt-4.1'),
  prompt: 'Hello!',
});
```
LiteLLM
```python
import litellm

response = litellm.completion(
    model="openai/gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}],
    api_key="sk-lemon-xxx",
    api_base="https://api.lemondata.cc/v1"
)
```
Verify Your Migration
Quick sanity check after switching:
```shell
curl https://api.lemondata.cc/v1/models \
  -H "Authorization: Bearer sk-lemon-xxx" | head -c 200
```
If you see a JSON response with model objects, you're good.
FAQ
**Will my existing prompts work?** Yes. LemonData is fully OpenAI-compatible: same request format, same response format.
**Do I need to change model names?** No. `gpt-4.1`, `gpt-4o`, `gpt-4.1-mini`: all standard OpenAI model names work. LemonData also has a three-layer model resolution system: exact match → alias lookup (21 static aliases like `gpt4` → `gpt-4` and `gpt-3.5` → `gpt-3.5-turbo`) → fuzzy correction (Levenshtein distance ≤ 3). So even deprecated names like `gpt-4-turbo` or typos like `gpt4o` resolve correctly.
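Those three layers can be sketched in a few lines. The alias table and model list below are small illustrative samples, not LemonData's actual tables:

```python
KNOWN_MODELS = {"gpt-4", "gpt-4o", "gpt-4.1", "gpt-3.5-turbo", "claude-sonnet-4-6"}
ALIASES = {"gpt4": "gpt-4", "gpt-3.5": "gpt-3.5-turbo"}  # sample of the static aliases

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def resolve(name):
    if name in KNOWN_MODELS:   # layer 1: exact match
        return name
    if name in ALIASES:        # layer 2: alias lookup
        return ALIASES[name]
    # Layer 3: fuzzy correction within edit distance 3.
    best = min(KNOWN_MODELS, key=lambda m: levenshtein(name, m))
    return best if levenshtein(name, best) <= 3 else name

print(resolve("gpt-4.1"))  # exact match, returned as-is
print(resolve("gpt4"))     # alias hit
print(resolve("gpt4o"))    # fuzzy correction (distance 1 to gpt-4o)
```

Names that miss all three layers pass through unchanged, so a genuinely unknown model still produces a clear upstream error rather than a silent substitution.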
**What about streaming?** Works identically: SSE format, same chunk structure. For the native Anthropic/Gemini protocols, you get each provider's native SSE format (including thinking deltas for extended thinking).
**What about function calling / tools?** Fully supported. Same schema, same behavior.
**What about error handling?** LemonData returns OpenAI-compatible errors with additional agent-friendly fields: `retryable`, `did_you_mean`, `suggestions`, and `retry_after`. Standard OpenAI SDK error handling works unchanged; the extra fields are additive.
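As a sketch of how an agent might act on those fields; the error body below is fabricated for illustration, using only the field names listed above:

```python
import json

# Hypothetical error body with LemonData's additive fields.
error_body = json.loads("""
{
  "error": {
    "message": "Rate limit exceeded",
    "type": "rate_limit_error",
    "retryable": true,
    "retry_after": 2,
    "suggestions": ["lower request rate"],
    "did_you_mean": null
  }
}
""")

def next_action(err):
    """Decide what to do from the agent-friendly fields."""
    e = err["error"]
    if e.get("retryable"):
        return ("retry", e.get("retry_after", 1))
    if e.get("did_you_mean"):
        return ("fix_model", e["did_you_mean"])
    return ("fail", e["message"])

print(next_action(error_body))
```

Because every field is read with `.get()`, the same handler also works unchanged on a plain OpenAI error body that lacks the extra fields.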
**Can I switch back?** Yes. Change the two lines back. There's no lock-in: no proprietary format, no data migration.
Full API documentation: docs.lemondata.cc
Quickstart guide: docs.lemondata.cc/quickstart
