
Zenlayer AI Gateway

Apr 13, 2026
In the Custom Models field, add a model using the following format:

```
+<model_name>@OpenAI
```

Example:

```
+claude-3-opus-20240229@OpenAI
```

Explanation:

• <model_name>: The actual model name supported by AI Gateway.
• @OpenAI: Indicates that NextChat should call this model using the OpenAI-compatible API format, even if the underlying model is provided by a different vendor. AI Gateway translates the OpenAI-compatible request into the appropriate provider-specific API call.

This approach allows NextChat to use multiple AI providers through a single OpenAI-compatible interface exposed by AI Gateway.

LobeHub Chat

Common Configuration Parameters

To connect an application to AI Gateway, copy the URL below, replace {APPLICATION_URL} and sk-xxxxxxxxxxxxxx, and open it in your browser.

```
https://{APPLICATION_URL}/?settings={
  "keyVaults": {
    "openai": {
      "apiKey": "sk-xxxxxxxxxxxxxx",
      "baseURL": "https://gateway.theturbo.ai"
    }
  },
  "languageModel": {
    "openai": {
      "autoFetchModelLists": true,
      "enabled": true,
      "enabledModels": [
        "gpt-4o", "gpt-4o-mini", "o1-preview", "o1-mini",
        "gpt-4o-2024-08-06", "gpt-4-turbo", "chatgpt-4o-latest",
        "claude-3-5-sonnet-20240620", "claude-3-haiku-20240307",
        "claude-3-opus-20240229", "claude-3-sonnet-20240229",
        "gemini-1.5-flash-latest", "gemini-1.5-pro-latest"
      ]
    },
    "ollama": { "enabled": false }
  },
  "check_updates": false
}
```

| Parameter | Description |
| --- | --- |
| {APPLICATION_URL} | The access URL of the AI application you have deployed (for example, a chat UI or desktop web app) |
| key | The API key generated after creating an AI Gateway instance. This key is used to authenticate all requests |
| url | The base URL of your AI Gateway service endpoint, that is, https://gateway.theturbo.ai |

Note
You can view GitHub for more detailed configuration options.

Application Configuration – Language Model (OpenAI)

To manually configure the AI Gateway in the application:

1. Navigate to Application Settings > Language Models > OpenAI.
2. In the API Key field, enter your AI Gateway API key.
3. In the Base URL field, enter: https://gateway.theturbo.ai
4. Click Fetch Model List to retrieve the available models from AI Gateway.
5. From the model list, select and enable the models you want to use.

Once completed, the application will route all model requests through the AI Gateway.

ChatGPT. (uTools Plugin)

ChatGPT. is an AI plugin for uTools that allows users to access multiple AI models by connecting to the AI Gateway. By binding an AI Gateway API key, the plugin can route requests through the gateway and interact with different model providers via a unified interface. You can view the Help Guide for more details about installation and API integration.

API Endpoint Management

1. Click to manage the API endpoint.
2. Select Private API Route.
3. Enter https://gateway.theturbo.ai as the API Base URL, and provide the API Key generated after creating an AI Gateway instance.

Chatbox

Chatbox is a useful AI desktop application. You can view GitHub for more details about installation and API integration.

API Endpoint Management

1. Navigate to Settings > Model Provider > OpenAI.
2. Enter https://gateway.theturbo.ai as the API Host, and provide the API Key generated after creating an AI Gateway instance.
3. Click Check to verify availability.

Cherry Studio

Cherry Studio is a desktop client that supports multiple LLM providers, available on Windows, Mac, and Linux. You can view GitHub for more details about installation and API integration.

API Endpoint Management

1. Navigate to Settings > Model Provider > Add.
2. Enter gateway.theturbo as the Provider Name, and select New API as the Provider Type.
3. Enter https://gateway.theturbo.ai as the API Host, and provide the API Key generated after creating an AI Gateway instance.
4. Click Manage to add or remove models.
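All of the clients above take the same two inputs: the gateway base URL and an API key. For LobeHub specifically, the one-click settings URL shown earlier can also be generated programmatically so that the embedded JSON stays valid. A minimal sketch using only Python's standard library (the helper name and the shortened model list are this example's own, not an official tool):

```python
import json
from urllib.parse import quote

def build_lobehub_settings_url(app_url, api_key,
                               base_url="https://gateway.theturbo.ai",
                               models=None):
    """Build the LobeHub one-click ?settings= URL for AI Gateway."""
    settings = {
        "keyVaults": {
            "openai": {"apiKey": api_key, "baseURL": base_url}
        },
        "languageModel": {
            "openai": {
                "autoFetchModelLists": True,
                "enabled": True,
                "enabledModels": models or ["gpt-4o", "gpt-4o-mini"],
            },
            "ollama": {"enabled": False},
        },
        "check_updates": False,
    }
    # Compact JSON, percent-encoded so the URL survives copy/paste
    payload = quote(json.dumps(settings, separators=(",", ":")))
    return f"https://{app_url}/?settings={payload}"
```

Generating the URL this way avoids the most common failure mode of hand-editing it: a stray quote or brace that makes the settings JSON unparseable.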
ChatWise

ChatWise is a simple, powerful, and privacy-friendly AI chat application. You can view the Help Guide for more details about installation and API integration.

API Endpoint Management

1. Navigate to Settings > Provider. Click the add button at the bottom and select OpenAI Compatible.
2. Enter the provider name, API Base URL, and API Key.
3. Click Fetch Models to manage all the accessed models.

Cursor

Cursor is a code editor built for programming with AI. You can view the Help Guide for more details about installation and API integration.

Note
You need to upgrade to Pro to manage the API endpoints.

API Endpoint Management

1. Open the editor, click the gear icon in the top-right corner, then navigate to Cursor Settings → Models.
2. Expand API Keys, enable OpenAI API Key and enter your AI Gateway API key, then enable Override OpenAI Base URL and enter https://gateway.theturbo.ai.

Claude Code

Integrating Claude Code with AI Gateway provides a unified and flexible way to access Claude models while simplifying configuration and usage. You can view the Help Guide for more details.

Installation

Install Claude Code globally using npm:

```
npm install -g @anthropic-ai/claude-code
```

Ensure that Node.js and npm are properly installed before proceeding.

Configuration

Claude Code reads its runtime configuration from a local settings file.

1. Open (or create) the configuration file:

```
vim ~/.claude/settings.json
```

2. Add the following configuration, replacing the values with your own AI Gateway information:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://gateway.theturbo.ai",
    "ANTHROPIC_AUTH_TOKEN": "sk-xxxxxxxxxxxxxx"
  }
}
```

• ANTHROPIC_BASE_URL: The Claude-compatible endpoint exposed by the AI Gateway.
• ANTHROPIC_AUTH_TOKEN: The API key generated when creating your AI Gateway instance.
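The Claude Code settings file above can also be created non-interactively, which is handy when provisioning several machines. A minimal sketch (the helper and its merge behavior are this example's own, not part of Claude Code) that writes the `env` block while preserving any other settings already in the file:

```python
import json
from pathlib import Path

def write_claude_settings(base_url, token,
                          path=Path.home() / ".claude" / "settings.json"):
    """Merge AI Gateway values into ~/.claude/settings.json, keeping other keys."""
    settings = {}
    if path.exists():
        settings = json.loads(path.read_text())
    env = settings.setdefault("env", {})
    env["ANTHROPIC_BASE_URL"] = base_url
    env["ANTHROPIC_AUTH_TOKEN"] = token
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2))
    return settings
```

Merging rather than overwriting matters here: Claude Code stores other user preferences in the same file, and clobbering them is an easy mistake to make with a plain file write.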
Usage

Once configured, Claude Code can be used in multiple ways.

Start an interactive session:

```
claude
```

Send a single prompt directly:

```
claude "Explain the difference between HTTP and HTTPS."
```

All requests will be routed through the AI Gateway, allowing centralized access control, logging, and model management.

Continue

Continue is a development tool (VS Code extension and CLI) that uses AI models to assist with coding tasks such as autocompletion, explanation, refactoring, and more. To make AI requests, Continue needs a model provider, which can be configured to use your AI Gateway. You can view the Help Guide for more details.

API Endpoint Management

1. In VS Code, install the Continue extension if not already installed.
2. Open the Continue settings:
   • Press Ctrl/Cmd + P to open the command palette.
   • Type: Continue: Open Settings.
   • Or open the extension sidebar and click the gear / settings icon for Continue.
3. Open (or create) the Continue configuration file:

```
vim ~/.continue/config.json
```

   If the file does not exist, create it manually.
4. Paste the following example configuration and replace the apiKey with your own values:

```json
{
  "models": [
    {
      "title": "AI Gateway Chat Model",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "sk-xxxxxxxxxxxxxx",
      "apiBase": "https://gateway.theturbo.ai"
    }
  ],
  "tabAutocompleteModel": {
    "title": "AI Gateway Autocomplete",
    "provider": "openai",
    "model": "gpt-4o-mini",
    "apiKey": "sk-xxxxxxxxxxxxxx",
    "apiBase": "https://gateway.theturbo.ai"
  }
}
```

Continue uses different models for different purposes:

• models: Used for chat and general AI interactions, such as:
  ◦ Asking questions
  ◦ Explaining code
  ◦ Refactoring
  ◦ Generating code snippets
• tabAutocompleteModel: Used specifically for code auto-completion, for example:
  ◦ Suggestions that appear while typing
  ◦ Pressing Tab to complete code

You can use the same model for both, or different models depending on your needs.
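Because both entries in the Continue configuration repeat the same gateway credentials, they can be generated from a single place instead of edited by hand. A quick sketch (the helper name and default model choices are illustrative, not part of Continue):

```python
import json

def continue_config(api_key,
                    base_url="https://gateway.theturbo.ai",
                    chat_model="gpt-4o",
                    autocomplete_model="gpt-4o-mini"):
    """Render a Continue config.json routing both roles through AI Gateway."""
    def entry(title, model):
        return {
            "title": title,
            "provider": "openai",  # OpenAI-compatible wire format
            "model": model,
            "apiKey": api_key,
            "apiBase": base_url,
        }
    config = {
        "models": [entry("AI Gateway Chat Model", chat_model)],
        "tabAutocompleteModel": entry("AI Gateway Autocomplete", autocomplete_model),
    }
    return json.dumps(config, indent=2)
```

Keeping the two entries generated from one function guarantees the chat and autocomplete models always point at the same gateway and key.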
OpenAI Codex

Codex is OpenAI's coding agent that helps you write, review, and ship code faster. Use it side-by-side in your IDE or delegate larger tasks to the cloud. You can view the Help Guide for more details.

Install Codex

You can install Codex using one of the following methods.

Option 1: Install via Homebrew

```
brew install codex
```

Option 2: Install via npm

```
npm install -g @openai/codex
```

Configure Codex

Codex reads its settings from a local configuration file written in TOML format.

Step 1: Open the configuration file

```
vim ~/.codex/config.toml
```

Step 2: Define the default model and provider

```toml
# Default model used by Codex
model = "o3"

# Default model provider
model_provider = "openai-chat-completions"
```

Step 3: Configure the model provider

```toml
[model_providers.openai-chat-completions]
# Display name shown in the Codex interface
name = "gateway.theturbo"
# Base URL of the OpenAI-compatible API
base_url = "https://gateway.theturbo.ai"
# Environment variable that stores the API key
env_key = "YOUR_AI_GATEWAY_API_KEY"
# API protocol type (optional, defaults to "chat")
wire_api = "chat"
```

Set the API Key

Set your API key as an environment variable:

```
export YOUR_AI_GATEWAY_API_KEY="sk-xxxxxxxxxxxxxx"
```

This key is generated when you create an AI Gateway instance.

Use Codex

After configuration, you can start using Codex immediately.

Start an interactive session:

```
codex
```

Send a one-off request:

```
codex "Summarize the purpose of this repository."
```

All requests will be forwarded through the AI Gateway, allowing centralized authentication, routing, and usage tracking.

Notes

• Codex requires an OpenAI-compatible API. AI Gateway provides this interface.
• API keys should always be stored as environment variables, not hard-coded in configuration files.
• You may change the default model or provider at any time without reinstalling Codex.
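The three Codex settings above (default model, default provider, and the provider table) can be emitted from a small template so the provider ID in `model_provider` and in the `[model_providers.*]` table never drift apart. A sketch along those lines; the function is hypothetical, and it renders plain TOML text rather than using a TOML library:

```python
def codex_config_toml(model="o3",
                      provider_id="openai-chat-completions",
                      display_name="gateway.theturbo",
                      base_url="https://gateway.theturbo.ai",
                      env_key="YOUR_AI_GATEWAY_API_KEY"):
    """Render ~/.codex/config.toml pointing Codex at an OpenAI-compatible gateway."""
    return (
        f'model = "{model}"\n'
        f'model_provider = "{provider_id}"\n'
        f'\n'
        f'[model_providers.{provider_id}]\n'
        f'name = "{display_name}"\n'
        f'base_url = "{base_url}"\n'
        # Only the variable NAME goes in the file; the key stays in the environment
        f'env_key = "{env_key}"\n'
        f'wire_api = "chat"\n'
    )
```

Note that the config never contains the key itself, matching the advice in the Notes above: `env_key` names an environment variable, and the secret is supplied via `export` at runtime.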
OpenCode

OpenCode is an open-source AI coding agent. It's available as a terminal-based interface, desktop app, or IDE extension. You can view the Help Guide for more details.

Install OpenCode

Install OpenCode globally on your system using npm:

```
npm install -g opencode-ai
```

Once installed, the opencode command will be available in your terminal.

Edit the Configuration File

OpenCode reads its configuration from a JSON file. The location of this file depends on your operating system.

macOS / Linux:

```
~/.config/opencode/opencode.json
```

Windows:

```
C:\Users\<username>\.config\opencode\opencode.json
```

If the file does not exist, create the directories and file manually.

Add the AI Gateway provider configuration to the configuration file. An example is shown below.

```jsonc
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anthropic": {
      "name": "Anthropic",
      "options": {
        "baseURL": "https://gateway.theturbo.ai"
      }
    },
    "openai": {
      "options": {
        "baseURL": "https://gateway.theturbo.ai"
      },
      "models": {
        "gpt-5.2": {
          "options": {
            // Important:
            // The `include` and `store` options must be configured as shown below.
            // Incorrect settings may cause the model to behave unexpectedly.
            "include": ["reasoning.encrypted_content"],
            "store": false,
            // Controls the depth of reasoning performed by the model
            "reasoningEffort": "high",
            // Controls the verbosity of the generated text output
            "textVerbosity": "high",
            // Automatically generate a reasoning summary
            "reasoningSummary": "auto"
          }
        },
        "gpt-5.2-codex": {
          "options": {
            "include": ["reasoning.encrypted_content"],
            "store": false
          }
        }
      }
    },
    "gateway.theturbo": {
      // Configuration for third-party OpenAI-compatible providers
      "npm": "@ai-sdk/openai-compatible",
      // Display name shown in the UI
      "name": "gateway.theturbo",
      "options": {
        // Base URL of the AI Gateway service
        "baseURL": "https://gateway.theturbo.ai"
      },
      "models": {
        "deepseek-v3.2": {
          // Model ID. This value can be changed as needed.
          // Refer to the AI Gateway model list for available model IDs.
          "name": "DeepSeek V3.2"
        }
      }
    }
  }
}
```

Note
OpenCode's built-in OpenAI and Anthropic providers include some special features and optimizations, so the providers are split into OpenAI / Anthropic / AI Gateway to improve the overall experience.

Add Authentication Credentials

Step 1: Add OpenAI Credentials

Run the following command in your terminal:

```
opencode auth login
```

When prompted:

1. Select OpenAI.
2. Choose Manually enter API Key.
3. Enter your AI Gateway API key.

This credential will be used for OpenAI-compatible models routed through AI Gateway.

Step 2: Add Anthropic Credentials

Run the command again:

```
opencode auth login
```

When prompted:

1. Select Anthropic.
2. Choose Manually enter API Key.
3. Enter the same AI Gateway API key.

This allows OpenCode to access Claude models via AI Gateway.

Step 3: Add AI Gateway as a Custom Provider

Finally, add AI Gateway itself as a custom provider. Run:

```
opencode auth login
```

When prompted:

1. Select Other.
2. Enter the provider ID: gateway.theturbo
3. Enter your AI Gateway API key.

You may see a message similar to: "This only stores a credential for AI Gateway - you will need to configure it in opencode.json." This is expected. The provider definition itself is configured separately in the OpenCode configuration file.

Example CLI Interaction

```
opencode auth login

┌ Add credential
│
◇ Select provider
│ Anthropic
│
◇ Login method
│ Manually enter API Key
│
◇ Enter your API key
│ ▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
│
└ Done

opencode auth login

┌ Add credential
│
◇ Select provider
│ Other
│
◇ Enter provider id
│ gateway.theturbo
│
▲ This only stores a credential for AI Gateway - you will need to
  configure it in opencode.json, check the docs for examples.
│
◆ Enter your API key
│ ▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
```

Use OpenCode

Start OpenCode by running:

```
opencode
```

Initialize on First Use

If this is your first time using OpenCode, initialize the workspace inside the OpenCode interface:

```
/init
```

This sets up the required configuration for the current project.

Switch Models

You can switch between available models at any time using:

```
/models
```

Select the model you want to use from the list.

Plan and Build Modes

OpenCode supports two working modes:

• Plan mode: Used to generate and review a step-by-step plan before writing code.
• Build mode: Used to generate and modify code based on an approved plan.

You can press the TAB key to switch between Plan and Build modes.

Note
Start in Plan mode to review the implementation approach. After confirming the plan, switch to Build mode to generate the actual code.

Install Oh My OpenCode

Oh My OpenCode adds useful presets and plugins on top of OpenCode to make it easier and more efficient to use. View GitHub for more details.

1. Open a terminal and start OpenCode:

```
opencode
```

2. In the OpenCode chat interface, enter the following text and press Enter:

```
Install and configure oh-my-opencode by following the instructions here: https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/master/docs/guide/installation.md
```

3. OpenCode will automatically begin installing Oh My OpenCode.

Configure Oh My OpenCode

After the installation is complete, you will notice that additional entries have been added to the opencode.json configuration file. These entries are automatically generated by Oh My OpenCode.
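The opencode.json examples in this section warn that the `include` and `store` options on the gpt-5.2 models must keep exactly the values shown. A small sanity check can catch a bad hand-edit before launching OpenCode; this checker is the sketch's own idea, not an OpenCode feature:

```python
def check_gpt5_options(config):
    """Return a list of problems with gpt-5.2* model options in opencode.json."""
    problems = []
    models = (config.get("provider", {})
                    .get("openai", {})
                    .get("models", {}))
    for model_id, model in models.items():
        if not model_id.startswith("gpt-5.2"):
            continue
        opts = model.get("options", {})
        # Required values per the inline "Important" comment in the example
        if opts.get("include") != ["reasoning.encrypted_content"]:
            problems.append(f"{model_id}: 'include' must be "
                            "['reasoning.encrypted_content']")
        if opts.get("store") is not False:
            problems.append(f"{model_id}: 'store' must be false")
    return problems
```

An empty list means the gpt-5.2 entries match the documented requirements; anything else names the model and the option that drifted.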
```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anthropic": {
      "name": "Anthropic",
      "options": {
        "baseURL": "https://gateway.theturbo.ai"
      }
    },
    "openai": {
      "options": {
        "baseURL": "https://gateway.theturbo.ai"
      },
      "models": {
        "gpt-5.2": {
          "options": {
            "include": ["reasoning.encrypted_content"],
            "store": false,
            "reasoningEffort": "high",
            "textVerbosity": "high",
            "reasoningSummary": "auto"
          }
        },
        "gpt-5.2-codex": {
          "options": {
            "include": ["reasoning.encrypted_content"],
            "store": false
          }
        }
      }
    },
    "gateway.theturbo": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "gateway.theturbo",
      "options": {
        "baseURL": "https://gateway.theturbo.ai"
      },
      "models": {
        "deepseek-v3.2": {
          "name": "DeepSeek V3.2"
        }
      }
    },
    "google": {
      "name": "Google",
      "models": {
        "antigravity-gemini-3-pro-high": {
          "name": "Gemini 3 Pro High (Antigravity)",
          "thinking": true,
          "attachment": true,
          "limit": {
            "context": 1048576,
            "output": 65535
          },
          "modalities": {
            "input": ["text", "image", "pdf"],
            "output": ["text"]
          }
        },
        "antigravity-gemini-3-pro-low": {
          "name": "Gemini 3 Pro Low (Antigravity)",
          "thinking": true,
          "attachment": true,
          "limit": {
            "context": 1048576,
            "output": 65535
          },
          "modalities": {
            "input": ["text", "image", "pdf"],
            "output": ["text"]
          }
        },
        "antigravity-gemini-3-flash": {
          "name": "Gemini 3 Flash (Antigravity)",
          "attachment": true,
          "limit": {
            "context": 1048576,
            "output": 65536
          },
          "modalities": {
            "input": ["text", "image", "pdf"],
            "output": ["text"]
          }
        }
      }
    }
  },
  "plugin": ["oh-my-opencode", "opencode-antigravity-auth@1.3.0"]
}
```

In addition, Oh My OpenCode automatically creates a separate configuration file at:

```
~/.config/opencode/oh-my-opencode.json
```

This file is used to store configuration settings specific to Oh My OpenCode.
```jsonc
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
  // Set to false when using the Antigravity plugin
  "google_auth": false,
  // Disable OpenCode from automatically reading Claude Code configurations
  "claude_code": {
    "mcp": false,
    "commands": false,
    "skills": false,
    "agents": false,
    "hooks": false
  },
  "agents": {
    // Sisyphus: Coordinates workflows and executes simple tasks directly
    "Sisyphus": {
      "model": "openai/gpt-5.2"
    },
    // Oracle: Handles complex tasks and advanced debugging
    "oracle": {
      "model": "openai/gpt-5.2"
    },
    // Librarian: Assists with library discovery and documentation lookup
    "librarian": {
      "model": "anthropic/claude-sonnet-4-5-20250929"
    },
    // Explore: Analyzes and navigates existing code repositories
    "explore": {
      "model": "anthropic/claude-sonnet-4-5-20250929"
    },
    // Frontend UI/UX Engineer: Specializes in frontend design and user experience
    "frontend-ui-ux-engineer": {
      "model": "google/antigravity-gemini-3-pro-high"
    },
    // Document Writer: Focused on technical writing and documentation generation
    "document-writer": {
      "model": "google/antigravity-gemini-3-flash"
    },
    // Multimodal Looker: Handles multimodal analysis and recognition tasks
    "multimodal-looker": {
      "model": "google/antigravity-gemini-3-flash"
    }
  }
}
```

If you have integrated Google Gemini, you can view its model mappings and add additional models as needed.

Note
When integrating Google Gemini, you must first authenticate by running:

```
opencode auth login
```

Then select Google and complete the authorization process. Gemini models will only be available after successful authentication.
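Each agent entry in oh-my-opencode.json references its model as `provider/model-id`, and the prefix must match a provider key defined in opencode.json. A sketch that cross-checks the two files (the helper is illustrative, not part of Oh My OpenCode):

```python
def unknown_providers(oh_my_config, opencode_config):
    """Return provider prefixes used by agents but not defined under 'provider'."""
    defined = set(opencode_config.get("provider", {}))
    used = set()
    for agent in oh_my_config.get("agents", {}).values():
        model_ref = agent.get("model", "")
        if "/" in model_ref:
            # e.g. "openai/gpt-5.2" -> provider prefix "openai"
            used.add(model_ref.split("/", 1)[0])
    return used - defined
```

Running this over both configuration files before starting OpenCode flags agents whose provider was never declared, for instance a `google/...` model used before the Google provider block was added.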
Gemini CLI

Gemini CLI is a tool that lets you interact with Gemini models through a command-line interface. You can view the Help Guide for more details.

Install Gemini CLI

Install Gemini CLI globally using npm:

```
npm install -g @google/gemini-cli
```

After installation, the gemini command will be available in your terminal.

Configuration

Gemini CLI reads its configuration from environment variables.

Step 1: Open (or create) the environment file

```
vim ~/.env
```

Step 2: Add the following configuration

```
# Base URL for Gemini requests routed through AI Gateway
GOOGLE_GEMINI_BASE_URL=https://gateway.theturbo.ai

# Your AI Gateway API key
GEMINI_API_KEY=YOUR_API_KEY

# Default Gemini model
GEMINI_MODEL=gemini-2.5-flash
```

Replace YOUR_API_KEY with the API key generated after creating an AI Gateway instance.

Use Gemini CLI

Once configured, start Gemini CLI by running:

```
gemini
```

Gemini CLI will now send all requests through AI Gateway, allowing centralized access to Gemini models with unified authentication and management.

Note

• Make sure the environment variables are loaded in your shell session.
• You can change the default model by updating the GEMINI_MODEL value.
• Keep your API key secure and avoid committing it to source control.
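The first note above matters in practice: editing `~/.env` does not by itself export the variables into your shell session. As a rough illustration of what `source`-style loading does, here is a hand-rolled parser for simple `KEY=VALUE` files (an assumption of this sketch, not the Gemini CLI's own loader):

```python
import os

def load_env_file(text, env=None):
    """Parse simple KEY=VALUE lines (comments and blanks ignored) into a mapping."""
    env = os.environ if env is None else env
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines, comments, and anything that is not KEY=VALUE
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env
```

In a real shell you would typically run `set -a; source ~/.env; set +a` (or use a dotenv tool) so that `gemini` inherits the variables; the sketch just makes explicit which lines count and which are ignored.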
Zenlayer AI Gateway

Zenlayer AI Gateway provides a unified, high-performance access layer for multiple AI models. Use one API to invoke LLMs, image, video, and TTS models with global acceleration and redundancy. It streamlines integration while offering centralized management and enterprise-grade stability worldwide.