API Reference

Chat Completions

The chat completions endpoint is the primary way to send LLM requests through NeuronEdge. It automatically protects PII before forwarding to your chosen provider.

POST /v1/{provider}/chat/completions

Create a chat completion with automatic PII protection.

Headers

| Header | Type | Description |
| --- | --- | --- |
| Authorization | string (required) | Bearer token with NeuronEdge API key (ne_...) |
| X-Provider-API-Key | string (required) | Your LLM provider API key |
| Content-Type | string (required) | application/json |
| X-NeuronEdge-Policy | string (optional) | Policy ID to use for this request |
| X-NeuronEdge-Format | string (optional) | Redaction format: token (default), hash, synthetic |
| X-NeuronEdge-Mode | string (optional) | Detection mode: real-time, balanced, thorough |

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| provider | string (required) | LLM provider: openai, anthropic, google, azure, etc. |

Request Body

json
{
  "model": "gpt-5.2",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "My name is John Smith and my email is john@example.com."
    }
  ],
  "temperature": 0.7,
  "max_tokens": 1000,
  "stream": false
}

Response

json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1702000000,
  "model": "gpt-5.2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello John! I see you've shared your email..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 15,
    "total_tokens": 40
  }
}
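A request with the headers and body documented above can be assembled with nothing but the standard library. The sketch below only builds the request object; the endpoint URL and header names come from this reference, while the key values are placeholders:

```python
import json
import urllib.request

def build_chat_request(provider: str, ne_key: str, provider_key: str, body: dict):
    """Assemble a NeuronEdge chat completions request (not yet sent)."""
    url = f"https://api.neuronedge.ai/v1/{provider}/chat/completions"
    headers = {
        "Authorization": f"Bearer {ne_key}",
        "X-Provider-API-Key": provider_key,
        "Content-Type": "application/json",
        "X-NeuronEdge-Format": "token",  # token | hash | synthetic
    }
    data = json.dumps(body).encode()
    return urllib.request.Request(url, data=data, headers=headers, method="POST")

req = build_chat_request(
    "openai", "ne_live_example", "sk-example",
    {"model": "gpt-5.2", "messages": [{"role": "user", "content": "Hello"}]},
)
# Pass req to urllib.request.urlopen(req) to send it.
```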

How PII Protection Works

1. Request Received: Your API request arrives at NeuronEdge with PII in the messages.
2. Prompt Injection Scan: Inbound messages are scanned for prompt injection patterns targeting PII extraction (Professional+).
3. PII Detection: Dual-engine detection (regex + NER) identifies sensitive entities in under 1 ms.
4. Redaction: PII is replaced with tokens, hashes, or synthetic data based on your policy.
5. Forward to Provider: The redacted request is sent to OpenAI, Anthropic, etc.
6. Response Detection: The outbound LLM response is scanned for hallucinated or echoed PII using a sliding buffer (Professional+).
7. Restore & Return: Original PII is restored in the response before it is returned to you.
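The redact, forward, and restore steps can be illustrated with a toy token map. The patterns below are illustrative only; the real dual-engine detector and token syntax are NeuronEdge internals:

```python
import re

# Illustrative patterns; the production detector combines regex and NER engines.
PATTERNS = {"EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+", "SSN": r"\b\d{3}-\d{2}-\d{4}\b"}

def redact(text: str):
    """Replace detected entities with placeholders, keeping a map for restoration."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for match in re.findall(pattern, text):
            token = f"[{label}]"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Put the original values back into the provider's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

redacted, mapping = redact("Email john@example.com, SSN 123-45-6789")
restored = restore(redacted, mapping)
assert restored == "Email john@example.com, SSN 123-45-6789"
```

Only the redacted text ever leaves the proxy; the mapping stays local so the original values can be substituted back into the response.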

Redaction Formats

Control how PII is replaced using the X-NeuronEdge-Format header:

Token (Default)

Replaces PII with type-based placeholders. Simple and fast.

text
Input:  "Contact John Smith at john@example.com"
Output: "Contact [PERSON] at [EMAIL]"

Hash (Professional+)

Deterministic hashes for consistent replacement across requests.

text
Input:  "Contact John Smith at john@example.com"
Output: "Contact [HASH:a1b2c3d4] at [HASH:e5f6g7h8]"
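Deterministic replacement can be approximated with a truncated digest. The actual hash algorithm and any salting NeuronEdge applies are not specified here, so treat this as a sketch:

```python
import hashlib

def hash_token(value: str, salt: str = "tenant-salt") -> str:
    """Stable placeholder: the same input always yields the same token."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"[HASH:{digest}]"

# Identical values map to identical tokens across requests, so the LLM
# can still tell that two mentions refer to the same entity.
assert hash_token("john@example.com") == hash_token("john@example.com")
assert hash_token("john@example.com") != hash_token("jane@example.com")
```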

Synthetic (Professional+)

Realistic fake data that maintains semantic meaning.

text
Input:  "Contact John Smith at john@example.com"
Output: "Contact Sarah Johnson at sarah.j@demo.org"

Tool Call Redaction

New (all tiers)

NeuronEdge automatically detects and redacts PII in tool/function call arguments and tool results during agentic AI workflows.

json
// Before redaction
{
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_patient_record",
        "arguments": "{\"patient_name\": \"John Smith\", \"ssn\": \"123-45-6789\"}"
      }
    }
  ]
}

// After redaction
{
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_patient_record",
        "arguments": "{\"patient_name\": \"[PERSON]\", \"ssn\": \"[SSN]\"}"
      }
    }
  ]
}

Tool results (role: "tool" and role: "function" messages) are automatically scanned on inbound requests. No configuration needed.
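Because function arguments arrive as a JSON-encoded string, redaction has to parse, scan, and re-serialize them rather than treat the message as plain text. A sketch of that round trip; the SSN regex and the field-name heuristic are illustrative, not NeuronEdge's actual rules:

```python
import json
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_tool_call(tool_call: dict) -> dict:
    """Redact PII inside the JSON-encoded arguments of a single tool call."""
    args = json.loads(tool_call["function"]["arguments"])
    for key, value in args.items():
        if isinstance(value, str) and SSN.search(value):
            args[key] = "[SSN]"
        elif key.endswith("name"):  # naive field-name heuristic for this sketch
            args[key] = "[PERSON]"
    redacted = dict(tool_call)
    redacted["function"] = {**tool_call["function"], "arguments": json.dumps(args)}
    return redacted

call = {"id": "call_abc123", "type": "function",
        "function": {"name": "get_patient_record",
                     "arguments": '{"patient_name": "John Smith", "ssn": "123-45-6789"}'}}
out = redact_tool_call(call)
```

Note that the function name and call id pass through untouched; only the argument values are rewritten.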

Response-Side PII Detection

New (Professional+)

LLMs can hallucinate or echo PII in their responses. NeuronEdge scans the outbound response stream using a sliding buffer approach to catch and redact hallucinated data.

json
{
  "response_redaction": {
    "enabled": true,
    "method": "regex",
    "action": "redact",
    "buffer_size": 256
  }
}

Example:

text
LLM generates: "Based on the records, John Smith's appointment is..."
Client receives: "Based on the records, [PERSON]'s appointment is..."
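A sliding buffer lets the proxy redact entities that span chunk boundaries without holding back the whole stream. A simplified sketch of the idea; the buffer size and flush policy here are illustrative (the config example uses buffer_size 256):

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
BUFFER_SIZE = 32  # illustrative; small enough to demo boundary handling

def stream_redact(chunks):
    """Yield redacted text, holding back a tail in case an entity spans chunks."""
    buffer = ""
    for chunk in chunks:
        buffer = SSN.sub("[SSN]", buffer + chunk)
        if len(buffer) > BUFFER_SIZE:
            safe, buffer = buffer[:-BUFFER_SIZE], buffer[-BUFFER_SIZE:]
            yield safe
    yield SSN.sub("[SSN]", buffer)

# The SSN is split across two chunks, yet never reaches the client intact.
chunks = ["The SSN on file is 123-4", "5-6789, as requested."]
result = "".join(stream_redact(chunks))
assert "123-45-6789" not in result
```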

Prompt Injection Detection

New (Professional+)

Prompt injection attacks attempt to manipulate the LLM into extracting or leaking PII. NeuronEdge detects these attacks and responds according to one of three modes:

  • log (default) — Record detection, forward request
  • warn — Forward request with detection header
  • block — Return 400 error, don't forward to provider

json
{
  "prompt_injection_detection": {
    "enabled": true,
    "mode": "block",
    "sensitivity": "medium"
  }
}

Blocked request response:

json
{
  "error": {
    "code": "PROMPT_INJECTION_DETECTED",
    "message": "Request blocked: potential PII extraction attempt detected",
    "status": 400,
    "details": {
      "category": "extraction_command",
      "action": "block"
    }
  }
}
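Clients should distinguish this blocked-request shape from ordinary provider errors before deciding what to do next. A minimal client-side check, mirroring the error structure shown above:

```python
def is_injection_block(status: int, body: dict) -> bool:
    """True when NeuronEdge blocked the request, rather than the provider failing."""
    error = body.get("error", {})
    return status == 400 and error.get("code") == "PROMPT_INJECTION_DETECTED"

blocked = {"error": {"code": "PROMPT_INJECTION_DETECTED",
                     "message": "Request blocked: potential PII extraction attempt detected",
                     "status": 400,
                     "details": {"category": "extraction_command", "action": "block"}}}
assert is_injection_block(400, blocked)
assert not is_injection_block(400, {"error": {"code": "INVALID_REQUEST"}})
```

Such requests should not be retried as-is: the same input will be blocked again.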

Streaming

NeuronEdge fully supports streaming responses. Set stream: true in your request body to receive Server-Sent Events (SSE).

json
{
  "model": "gpt-5.2",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": true
}

PII detection runs on each streamed token to ensure protection even in real-time responses.
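SSE chunks arrive as data: lines. The sketch below accumulates streamed content assuming the OpenAI-style chunk shape (choices[0].delta.content) and the [DONE] terminator; the exact streamed schema depends on the provider you route to:

```python
import json

def collect_sse(lines):
    """Accumulate assistant text from OpenAI-style SSE 'data:' lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":  # OpenAI-style stream terminator
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
assert collect_sse(stream) == "Hello"
```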

Complete Examples

OpenAI

bash
curl -X POST https://api.neuronedge.ai/v1/openai/chat/completions \
  -H "Authorization: Bearer ne_live_your_api_key" \
  -H "X-Provider-API-Key: sk-your-openai-key" \
  -H "X-NeuronEdge-Format: token" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "My SSN is 123-45-6789. Is that secure?"}
    ],
    "temperature": 0.7
  }'

Anthropic

bash
curl -X POST https://api.neuronedge.ai/v1/anthropic/messages \
  -H "Authorization: Bearer ne_live_your_api_key" \
  -H "X-Provider-API-Key: sk-ant-your-anthropic-key" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "My SSN is 123-45-6789. Is that secure?"}
    ]
  }'

Detection Metrics

Every response includes headers with detection metrics:

http
X-Request-ID: 01HXYZ123ABC456DEF
X-NeuronEdge-Entities-Detected: 1
X-NeuronEdge-Detection-Time-Ms: 0.42
X-NeuronEdge-Response-Entities-Detected: 0
X-NeuronEdge-Prompt-Injection-Score: 0.0
X-RateLimit-Remaining: 999
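These headers make it straightforward to log or alert on unexpected detection activity. A small parser over a response-header mapping, using the header names documented above:

```python
def detection_summary(headers: dict) -> dict:
    """Pull NeuronEdge detection metrics out of response headers."""
    return {
        "request_id": headers.get("X-Request-ID"),
        "entities": int(headers.get("X-NeuronEdge-Entities-Detected", 0)),
        "detection_ms": float(headers.get("X-NeuronEdge-Detection-Time-Ms", 0.0)),
        "response_entities": int(headers.get("X-NeuronEdge-Response-Entities-Detected", 0)),
        "injection_score": float(headers.get("X-NeuronEdge-Prompt-Injection-Score", 0.0)),
    }

headers = {
    "X-Request-ID": "01HXYZ123ABC456DEF",
    "X-NeuronEdge-Entities-Detected": "1",
    "X-NeuronEdge-Detection-Time-Ms": "0.42",
    "X-NeuronEdge-Response-Entities-Detected": "0",
    "X-NeuronEdge-Prompt-Injection-Score": "0.0",
}
summary = detection_summary(headers)
```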