Providers
NeuronEdge supports 17+ LLM providers through a unified API. Change the provider slug in the endpoint URL to route a request to any supported provider.
Provider Routing
Replace {provider} in the endpoint URL with the slug of your desired provider:
POST https://api.neuronedge.ai/v1/{provider}/chat/completions
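Because provider selection is just a path segment, these URLs can be built programmatically. A minimal sketch (illustrative helper, not part of an official SDK):

```python
# Illustrative helper: builds a NeuronEdge endpoint URL for a provider slug.
BASE_URL = "https://api.neuronedge.ai/v1"

def endpoint(provider: str, path: str = "chat/completions") -> str:
    """Return the proxied endpoint URL for the given provider slug."""
    return f"{BASE_URL}/{provider}/{path}"

url = endpoint("openai")
# "https://api.neuronedge.ai/v1/openai/chat/completions"
```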
# OpenAI
POST https://api.neuronedge.ai/v1/openai/chat/completions
# Anthropic
POST https://api.neuronedge.ai/v1/anthropic/chat/completions
# Google
POST https://api.neuronedge.ai/v1/google/chat/completions
# Groq
POST https://api.neuronedge.ai/v1/groq/chat/completions
Authentication
Each provider requires its own API key passed via the X-Provider-API-Key header:
# Your NeuronEdge key in Authorization header
Authorization: Bearer ne_live_your_neuronedge_key
# Provider's API key in X-Provider-API-Key header
X-Provider-API-Key: sk-your-openai-key
NeuronEdge never stores your provider API keys. They are forwarded directly to the provider for each request.
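In code, the dual-key scheme is just two headers on every request. A minimal sketch (the key values below are placeholders, not real keys):

```python
# Sketch of the dual-key header scheme: NeuronEdge key in Authorization,
# provider key in X-Provider-API-Key (forwarded upstream, never stored).
def build_headers(neuronedge_key: str, provider_key: str) -> dict:
    """Build the headers NeuronEdge expects on every proxied request."""
    return {
        "Authorization": f"Bearer {neuronedge_key}",
        "X-Provider-API-Key": provider_key,
        "Content-Type": "application/json",
    }

headers = build_headers("ne_live_your_neuronedge_key", "sk-your-openai-key")
```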
Supported Providers
OpenAI: /v1/openai/ (key format: sk-)
Anthropic: /v1/anthropic/ (key format: sk-ant-)
Google (Vertex AI): /v1/google/ (key format: API key or service account)
Azure OpenAI: /v1/azure/ (key format: Azure API key)
Cohere: /v1/cohere/ (key format: API key)
Perplexity: /v1/perplexity/ (key format: pplx-)
Groq: /v1/groq/ (key format: gsk_)
Mistral: /v1/mistral/ (key format: API key)
Together AI: /v1/together/ (key format: API key)
Fireworks: /v1/fireworks/ (key format: API key)
Anyscale: /v1/anyscale/ (key format: API key)
DeepInfra: /v1/deepinfra/ (key format: API key)
HuggingFace: /v1/huggingface/ (key format: hf_)
Replicate: /v1/replicate/ (key format: r8_)
AWS Bedrock: /v1/bedrock/ (key format: AWS credentials)
Baseten: /v1/baseten/ (key format: API key)
AI21 Labs: /v1/ai21/ (key format: API key)
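The documented key prefixes make a cheap client-side sanity check possible before sending a request. A sketch based only on the prefixes listed above (providers without a documented prefix accept any key string):

```python
# Key-prefix sanity check derived from the provider table above.
# Only providers with a documented literal prefix are included.
KEY_PREFIXES = {
    "openai": "sk-",
    "anthropic": "sk-ant-",
    "perplexity": "pplx-",
    "groq": "gsk_",
    "huggingface": "hf_",
    "replicate": "r8_",
}

def looks_valid(provider: str, key: str) -> bool:
    """Heuristically check that a key matches the provider's documented prefix."""
    prefix = KEY_PREFIXES.get(provider)
    return prefix is None or key.startswith(prefix)
```

This only catches obvious mix-ups (e.g. sending a Groq key to the OpenAI route); the provider still performs real authentication.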
Examples
OpenAI
curl -X POST https://api.neuronedge.ai/v1/openai/chat/completions \
-H "Authorization: Bearer ne_live_your_key" \
-H "X-Provider-API-Key: sk-openai-key" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-5.2", "messages": [{"role": "user", "content": "Hello"}]}'
Anthropic (Claude)
curl -X POST https://api.neuronedge.ai/v1/anthropic/messages \
-H "Authorization: Bearer ne_live_your_key" \
-H "X-Provider-API-Key: sk-ant-anthropic-key" \
-H "Content-Type: application/json" \
-H "anthropic-version: 2023-06-01" \
-d '{"model": "claude-sonnet-4-5", "max_tokens": 1024, "messages": [{"role": "user", "content": "Hello"}]}'
Groq (Fast Inference)
curl -X POST https://api.neuronedge.ai/v1/groq/chat/completions \
-H "Authorization: Bearer ne_live_your_key" \
-H "X-Provider-API-Key: gsk_groq-key" \
-H "Content-Type: application/json" \
-d '{"model": "llama-3.3-70b-versatile", "messages": [{"role": "user", "content": "Hello"}]}'
Request Transformation
NeuronEdge automatically transforms requests to match each provider's API format. You can use OpenAI-style request bodies with any provider:
// This OpenAI-style request works with ALL providers:
{
"model": "gpt-5.2", // or "claude-sonnet-4-5", "llama-3.3-70b", etc.
"messages": [
{"role": "system", "content": "You are a helpful assistant"},
{"role": "user", "content": "Hello!"}
],
"temperature": 0.7,
"max_tokens": 1000,
"stream": true
}
Note: Some provider-specific parameters may need adjustment. Check each provider's documentation for supported parameters.
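In practice this means one request template can be reused across providers, with only the model name swapped per route. A sketch (the model names are examples from this page, not an exhaustive list):

```python
# One OpenAI-style body reused across providers; only "model" changes.
BODY_TEMPLATE = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
    "max_tokens": 1000,
}

MODELS = {
    "openai": "gpt-5.2",
    "anthropic": "claude-sonnet-4-5",
    "groq": "llama-3.3-70b-versatile",
}

def body_for(provider: str) -> dict:
    """Build the request body for a provider by merging in its model name."""
    return {"model": MODELS[provider], **BODY_TEMPLATE}
```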
Provider Errors
Errors from upstream providers are wrapped in NeuronEdge's error format:
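Because the wrapper is uniform, a client can branch on the upstream status without provider-specific parsing. A minimal sketch of this (hypothetical handling logic, field names as in the wrapped format):

```python
import json

def summarize_error(payload: str) -> str:
    """Extract a short summary from NeuronEdge's wrapped error format."""
    err = json.loads(payload)["error"]
    if err["code"] == "PROVIDER_ERROR" and err["status"] == 429:
        # Upstream rate limit: back off and retry, or fail over to another slug.
        return f"rate limited by {err['provider']}"
    return err["message"]

sample = '''{"error": {"code": "PROVIDER_ERROR",
  "message": "OpenAI API error: Rate limit exceeded",
  "status": 429, "provider": "openai",
  "provider_error": {"type": "rate_limit_exceeded",
    "message": "You exceeded your current quota..."}}}'''
```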
{
"error": {
"code": "PROVIDER_ERROR",
"message": "OpenAI API error: Rate limit exceeded",
"status": 429,
"provider": "openai",
"provider_error": {
"type": "rate_limit_exceeded",
"message": "You exceeded your current quota..."
}
}
}
Additional Providers
Need a provider that's not listed? We're continuously adding support for new providers. Contact us to request a new integration.
- Workers AI (Cloudflare): coming soon
- Ollama (self-hosted): coming soon
- vLLM (self-hosted): coming soon