Guides

Providers

NeuronEdge supports 17+ LLM providers through a unified API. Simply change the provider slug in your endpoint URL to route to any supported provider.

Provider Routing

Replace {provider} in the endpoint URL with your desired provider:

bash
# OpenAI
POST https://api.neuronedge.ai/v1/openai/chat/completions

# Anthropic
POST https://api.neuronedge.ai/v1/anthropic/chat/completions

# Google
POST https://api.neuronedge.ai/v1/google/chat/completions

# Groq
POST https://api.neuronedge.ai/v1/groq/chat/completions
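The routing scheme above is mechanical enough to capture in a small helper that builds an endpoint URL from a provider slug. This is an illustrative sketch, not part of any official SDK; the slug names come from the provider table below.

```python
# Build a NeuronEdge endpoint URL from a provider slug (illustrative helper).
BASE_URL = "https://api.neuronedge.ai/v1"

SUPPORTED = {"openai", "anthropic", "google", "groq", "azure", "cohere",
             "perplexity", "mistral", "together", "fireworks", "anyscale",
             "deepinfra", "huggingface", "replicate", "bedrock", "baseten",
             "ai21"}

def endpoint(provider: str, path: str = "chat/completions") -> str:
    """Return the full NeuronEdge URL for a provider slug and API path."""
    if provider not in SUPPORTED:
        raise ValueError(f"unknown provider slug: {provider!r}")
    return f"{BASE_URL}/{provider}/{path}"

print(endpoint("openai"))  # https://api.neuronedge.ai/v1/openai/chat/completions
```

Validating the slug client-side catches typos before a request ever leaves your process.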

Authentication

Each provider requires its own API key passed via the X-Provider-API-Key header:

http
# Your NeuronEdge key in Authorization header
Authorization: Bearer ne_live_your_neuronedge_key

# Provider's API key in X-Provider-API-Key header
X-Provider-API-Key: sk-your-openai-key

NeuronEdge never stores your provider API keys. They are forwarded directly to the provider for each request.
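The two-header scheme can be wrapped in a small helper so both keys are always set together. A sketch with placeholder key values; the header names are the ones documented above.

```python
# Assemble the two authentication headers NeuronEdge expects.
# Key values here are placeholders -- substitute your real keys.
def auth_headers(neuronedge_key: str, provider_key: str) -> dict:
    return {
        "Authorization": f"Bearer {neuronedge_key}",   # NeuronEdge key
        "X-Provider-API-Key": provider_key,            # upstream provider key
        "Content-Type": "application/json",
    }

headers = auth_headers("ne_live_your_neuronedge_key", "sk-your-openai-key")
```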

Supported Providers

OpenAI

/v1/openai/
gpt-5.2, gpt-5.1, gpt-4.1, gpt-4o, o4-mini, o3

Key format: sk-

Anthropic

/v1/anthropic/
claude-opus-4-5, claude-sonnet-4-5, claude-haiku-4-5, claude-opus-4-1

Key format: sk-ant-

Google (Vertex AI)

/v1/google/
gemini-3-pro, gemini-2.5-pro, gemini-2.5-flash

Key format: API key or service account

Azure OpenAI

/v1/azure/
gpt-5.2, gpt-4.1, gpt-4o, o4-mini

Key format: Azure API key

Cohere

/v1/cohere/
command-r-plus, command-r, command-a

Key format: API key

Perplexity

/v1/perplexity/
sonar-pro, sonar, sonar-reasoning

Key format: pplx-

Groq

/v1/groq/
llama-3.3-70b, llama-3.1-8b, mixtral-8x7b, gemma2-9b

Key format: gsk_

Mistral

/v1/mistral/
mistral-large, mistral-medium, codestral, pixtral

Key format: API key

Together AI

/v1/together/
llama-3.3-70b, qwen-2.5-72b, deepseek-v3

Key format: API key

Fireworks

/v1/fireworks/
llama-3.3-70b, qwen-2.5-72b, deepseek-v3

Key format: API key

Anyscale

/v1/anyscale/
llama-3.3-70b, mixtral-8x22b

Key format: API key

DeepInfra

/v1/deepinfra/
llama-3.3-70b, qwen-2.5-72b, deepseek-v3

Key format: API key

HuggingFace

/v1/huggingface/
Various hosted models

Key format: hf_

Replicate

/v1/replicate/
Various hosted models

Key format: r8_

AWS Bedrock

/v1/bedrock/
claude-opus-4-5, claude-sonnet-4-5, llama-3.3, titan

Key format: AWS credentials

Baseten

/v1/baseten/
Various deployed models

Key format: API key

AI21 Labs

/v1/ai21/
jamba-2-large, jamba-2-mini

Key format: API key

Examples

OpenAI

bash
curl -X POST https://api.neuronedge.ai/v1/openai/chat/completions \
  -H "Authorization: Bearer ne_live_your_key" \
  -H "X-Provider-API-Key: sk-openai-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5.2", "messages": [{"role": "user", "content": "Hello"}]}'

Anthropic (Claude)

bash
curl -X POST https://api.neuronedge.ai/v1/anthropic/messages \
  -H "Authorization: Bearer ne_live_your_key" \
  -H "X-Provider-API-Key: sk-ant-anthropic-key" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{"model": "claude-sonnet-4-5", "max_tokens": 1024, "messages": [{"role": "user", "content": "Hello"}]}'

Groq (Fast Inference)

bash
curl -X POST https://api.neuronedge.ai/v1/groq/chat/completions \
  -H "Authorization: Bearer ne_live_your_key" \
  -H "X-Provider-API-Key: gsk_groq-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.3-70b-versatile", "messages": [{"role": "user", "content": "Hello"}]}'
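If you prefer Python over curl, the first example above can be reproduced with only the standard library. This sketch builds the request object without sending it; the keys are placeholders, and uncommenting the final line would perform the actual call.

```python
import json
import urllib.request

# Build (but don't send) the same OpenAI request as the curl example above.
payload = {
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    "https://api.neuronedge.ai/v1/openai/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer ne_live_your_key",   # placeholder
        "X-Provider-API-Key": "sk-openai-key",        # placeholder
        "Content-Type": "application/json",
    },
    method="POST",
)

# response = urllib.request.urlopen(req)  # performs the HTTP call
```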

Request Transformation

NeuronEdge automatically transforms requests to match each provider's API format. You can use OpenAI-style request bodies with any provider:

json
// This OpenAI-style request works with ALL providers:
{
  "model": "gpt-5.2",  // or "claude-sonnet-4-5", "llama-3.3-70b", etc.
  "messages": [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000,
  "stream": true
}

Note: Some provider-specific parameters may need adjustment. Check each provider's documentation for supported parameters.
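Because the body shape stays the same, switching providers reduces to changing the slug and the model name. A sketch of that pattern (the helper name is illustrative, not an SDK function):

```python
# One OpenAI-style payload reused across providers: only the provider
# slug in the URL and the model name in the body change.
def build_request(provider: str, model: str) -> tuple[str, dict]:
    url = f"https://api.neuronedge.ai/v1/{provider}/chat/completions"
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": "Hello!"},
        ],
        "temperature": 0.7,
        "max_tokens": 1000,
    }
    return url, body

url, body = build_request("anthropic", "claude-sonnet-4-5")
```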

Provider Errors

Errors from upstream providers are wrapped in NeuronEdge's error format:

json
{
  "error": {
    "code": "PROVIDER_ERROR",
    "message": "OpenAI API error: Rate limit exceeded",
    "status": 429,
    "provider": "openai",
    "provider_error": {
      "type": "rate_limit_exceeded",
      "message": "You exceeded your current quota..."
    }
  }
}
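Since every upstream failure arrives in this one envelope, error handling can be provider-agnostic. A sketch that reads the wrapped error and classifies it as retryable; the status codes chosen for retry are an assumption, not NeuronEdge policy.

```python
import json

# Parse the wrapped provider error shown above.
envelope = json.loads("""
{
  "error": {
    "code": "PROVIDER_ERROR",
    "message": "OpenAI API error: Rate limit exceeded",
    "status": 429,
    "provider": "openai",
    "provider_error": {
      "type": "rate_limit_exceeded",
      "message": "You exceeded your current quota..."
    }
  }
}
""")

err = envelope["error"]
# Assumed retry policy: rate limits and transient server errors.
retryable = err["status"] in (429, 500, 502, 503)
```

The `provider` field tells you which upstream failed, and `provider_error` preserves the provider's original error for debugging.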

Additional Providers

Need a provider that's not listed? We're continuously adding support for new providers. Contact us to request a new integration.

  • Workers AI (Cloudflare) - Coming soon
  • Ollama (Self-hosted) - Coming soon
  • vLLM (Self-hosted) - Coming soon