API Reference
Chat Completions
The chat completions endpoint is the primary way to send LLM requests through NeuronEdge. It automatically protects PII before forwarding to your chosen provider.
POST /v1/{provider}/chat/completions

Create a chat completion with automatic PII protection.
Headers
| Header | Type | Description |
|---|---|---|
| Authorization | string (required) | Bearer token with NeuronEdge API key (ne_...) |
| X-Provider-API-Key | string (required) | Your LLM provider API key |
| Content-Type | string (required) | application/json |
| X-NeuronEdge-Policy | string | Policy ID to use for this request |
| X-NeuronEdge-Format | string | Redaction format: token (default), hash, synthetic |
| X-NeuronEdge-Mode | string | Detection mode: real-time, balanced, thorough |
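As a quick illustration of the header table above, here is a minimal Python sketch that assembles the request headers. The function name and key values are placeholders, not part of the NeuronEdge SDK.

```python
# Assemble the NeuronEdge request headers described in the table above.
# All key values here are placeholders, not real credentials.
def build_headers(neuronedge_key, provider_key, policy_id=None,
                  redaction_format="token", mode="balanced"):
    """Return the header dict for a NeuronEdge chat completion request."""
    headers = {
        "Authorization": f"Bearer {neuronedge_key}",
        "X-Provider-API-Key": provider_key,
        "Content-Type": "application/json",
        "X-NeuronEdge-Format": redaction_format,  # token | hash | synthetic
        "X-NeuronEdge-Mode": mode,                # real-time | balanced | thorough
    }
    if policy_id:
        # Optional: pin this request to a specific policy
        headers["X-NeuronEdge-Policy"] = policy_id
    return headers

headers = build_headers("ne_live_your_api_key", "sk-your-openai-key")
```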
Parameters
| Parameter | Type | Description |
|---|---|---|
| provider | string (required) | LLM provider: openai, anthropic, google, azure, etc. |
Request Body
```json
{
  "model": "gpt-5.2",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "My name is John Smith and my email is john@example.com."
    }
  ],
  "temperature": 0.7,
  "max_tokens": 1000,
  "stream": false
}
```

Response
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1702000000,
  "model": "gpt-5.2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello John! I see you've shared your email..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 15,
    "total_tokens": 40
  }
}
```

How PII Protection Works
1. **Request Received:** Your API request arrives at NeuronEdge with PII in the messages.
2. **PII Detection:** Dual-engine detection (regex + NER) identifies sensitive entities in <1ms.
3. **Redaction:** PII is replaced with tokens, hashes, or synthetic data based on your policy.
4. **Forward to Provider:** The redacted request is sent to OpenAI, Anthropic, etc.
5. **Restore & Return:** Original PII is restored in the response before returning to you.
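The redact/forward/restore round trip above can be sketched in a few lines of Python. This is only an illustration, not NeuronEdge's implementation: the gateway uses dual-engine (regex + NER) detection, while this demo matches emails with a single regex.

```python
import re

# Toy email matcher; real detection covers many entity types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text):
    """Replace each email with a numbered token; return text + mapping."""
    mapping = {}
    def _sub(match):
        token = f"[EMAIL_{len(mapping) + 1}]"
        mapping[token] = match.group(0)
        return token
    return EMAIL_RE.sub(_sub, text), mapping

def restore(text, mapping):
    """Put the original PII back into the provider's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

redacted, pii = redact("Reach me at john@example.com")
# The provider only ever sees the redacted text.
provider_reply = f"Sure, I will email {list(pii)[0]} shortly."
final = restore(provider_reply, pii)
```

Because the mapping never leaves the gateway, the provider sees only placeholder tokens while your client receives the fully restored text.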
Redaction Formats
Control how PII is replaced using the X-NeuronEdge-Format header:
Token (Default)
Replaces PII with type-based placeholders. Simple and fast.
```
Input:  "Contact John Smith at john@example.com"
Output: "Contact [PERSON] at [EMAIL]"
```

Hash (Professional+)
Deterministic hashes for consistent replacement across requests.
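The deterministic property can be sketched as follows. This is an assumption-laden illustration: NeuronEdge's actual hashing scheme is not documented here, so a truncated SHA-256 is used only to mirror the [HASH:xxxxxxxx] shape.

```python
import hashlib

def hash_token(value, salt=""):
    """Map a PII value to a stable 8-char hash token (illustrative only)."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return f"[HASH:{digest[:8]}]"

# Same input -> same token, so entities stay consistent across requests.
token = hash_token("john@example.com")
```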
```
Input:  "Contact John Smith at john@example.com"
Output: "Contact [HASH:a1b2c3d4] at [HASH:e5f6g7h8]"
```

Synthetic (Professional+)
Realistic fake data that maintains semantic meaning.
```
Input:  "Contact John Smith at john@example.com"
Output: "Contact Sarah Johnson at sarah.j@demo.org"
```

Streaming
NeuronEdge fully supports streaming responses. Set stream: true in your request body to receive Server-Sent Events (SSE).
```json
{
  "model": "gpt-5.2",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": true
}
```

PII detection runs on each streamed token to ensure protection even in real-time responses.
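Consuming the SSE stream client-side can be sketched like this. It assumes the OpenAI-style convention of `data: {...}` event lines terminated by `data: [DONE]`, which is not spelled out above; adapt it to whatever framing your client library exposes.

```python
import json

def parse_sse_lines(lines):
    """Yield the decoded JSON payload of each SSE data event."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alives and comment lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return  # end-of-stream sentinel (OpenAI-style convention)
        yield json.loads(payload)

# Simulated stream chunks, as they would arrive over the wire.
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = "".join(c["choices"][0]["delta"]["content"] for c in parse_sse_lines(sample))
```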
Complete Examples
OpenAI
```bash
curl -X POST https://api.neuronedge.ai/v1/openai/chat/completions \
  -H "Authorization: Bearer ne_live_your_api_key" \
  -H "X-Provider-API-Key: sk-your-openai-key" \
  -H "X-NeuronEdge-Format: token" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "My SSN is 123-45-6789. Is that secure?"}
    ],
    "temperature": 0.7
  }'
```

Anthropic
```bash
curl -X POST https://api.neuronedge.ai/v1/anthropic/messages \
  -H "Authorization: Bearer ne_live_your_api_key" \
  -H "X-Provider-API-Key: sk-ant-your-anthropic-key" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "My SSN is 123-45-6789. Is that secure?"}
    ]
  }'
```

Detection Metrics
Every response includes headers with detection metrics:
```
X-Request-ID: 01HXYZ123ABC456DEF
X-NeuronEdge-Entities-Detected: 1
X-NeuronEdge-Detection-Time-Ms: 0.42
X-RateLimit-Remaining: 999
```
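A small sketch of reading these metrics from a response's headers (e.g. `response.headers` from an HTTP client). A plain dict stands in for the header mapping here; real clients treat header names case-insensitively.

```python
def detection_metrics(headers):
    """Extract NeuronEdge detection metrics from a response header mapping."""
    return {
        "request_id": headers.get("X-Request-ID"),
        "entities_detected": int(headers.get("X-NeuronEdge-Entities-Detected", 0)),
        "detection_ms": float(headers.get("X-NeuronEdge-Detection-Time-Ms", 0.0)),
        "rate_limit_remaining": int(headers.get("X-RateLimit-Remaining", 0)),
    }

# Example using the header values shown above.
metrics = detection_metrics({
    "X-Request-ID": "01HXYZ123ABC456DEF",
    "X-NeuronEdge-Entities-Detected": "1",
    "X-NeuronEdge-Detection-Time-Ms": "0.42",
    "X-RateLimit-Remaining": "999",
})
```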