SDKs
Python
Integrate NeuronEdge into your Python applications using the official OpenAI Python SDK with a simple configuration change.
Installation
Use the standard OpenAI Python SDK:
pip install openai
# or
poetry add openai
# or
pipenv install openai
Configuration
Configure the OpenAI client to use NeuronEdge:
import os
from openai import OpenAI
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],  # Your OpenAI key
    base_url="https://api.neuronedge.ai/v1/openai",  # NeuronEdge endpoint
    default_headers={
        "Authorization": f"Bearer {os.environ['NEURONEDGE_API_KEY']}",  # NeuronEdge key
    },
)
Environment Variables
Set your keys in your environment:
export OPENAI_API_KEY=sk-your-openai-key
export NEURONEDGE_API_KEY=ne_live_your-neuronedge-key
Basic Usage
Use the client exactly as you would the OpenAI SDK:
# Chat completions
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "My name is John Smith and my email is john@example.com"},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
# PII is automatically redacted before sending to OpenAI
# and restored in the response you receive
Streaming
Streaming works exactly as expected:
stream = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Tell me a story about Jane Doe"}],
    stream=True,
)
for chunk in stream:
    content = chunk.choices[0].delta.content or ""
    print(content, end="", flush=True)
# PII detection runs on each streamed token
Async Usage
Use the async client for async/await patterns:
import asyncio
from openai import AsyncOpenAI
client = AsyncOpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://api.neuronedge.ai/v1/openai",
    default_headers={
        "Authorization": f"Bearer {os.environ['NEURONEDGE_API_KEY']}",
    },
)
async def main():
    response = await client.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
asyncio.run(main())
Redaction Options
Configure redaction per-request using extra headers:
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Process this SSN: 123-45-6789"}],
    extra_headers={
        "X-NeuronEdge-Format": "hash",  # token, hash, or synthetic
        "X-NeuronEdge-Mode": "balanced",  # real-time, balanced, or thorough
        "X-NeuronEdge-Policy": "pol_abc123",  # Use specific policy
    },
)
Error Handling
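Rate-limit responses, handled by the RateLimitError branch in the example that follows, are worth retrying with exponential backoff. A minimal stdlib-only sketch of a delay schedule (the base and cap values are illustrative, not NeuronEdge defaults):

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Seconds to wait before retry number `attempt` (0-based):
    exponential growth, capped, with full jitter."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

Sleep for `backoff_delay(attempt)` after each rate-limit error, preferring the server's `retry-after` header when it is present.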
from openai import APIError, APIStatusError, RateLimitError

try:
    response = client.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": "Hello"}],
    )
except RateLimitError as e:
    # Handle rate limiting
    retry_after = e.response.headers.get("retry-after", 60)
    print(f"Rate limited. Retry after {retry_after} seconds")
except APIStatusError as e:
    # Any other non-2xx response; status_code is only set on status errors
    print(f"API error: {e.status_code} - {e.message}")
except APIError as e:
    # Connection-level failures and other SDK errors
    print(f"API error: {e.message}")
LangChain Integration
from langchain_openai import ChatOpenAI
model = ChatOpenAI(
    model="gpt-5.2",
    openai_api_key=os.environ["OPENAI_API_KEY"],
    openai_api_base="https://api.neuronedge.ai/v1/openai",
    default_headers={
        "Authorization": f"Bearer {os.environ['NEURONEDGE_API_KEY']}",
    },
)
# Use with chains, agents, etc.
response = model.invoke("My email is john@example.com")
LlamaIndex Integration
from llama_index.llms.openai import OpenAI
llm = OpenAI(
    model="gpt-5.2",
    api_key=os.environ["OPENAI_API_KEY"],
    api_base="https://api.neuronedge.ai/v1/openai",
    default_headers={
        "Authorization": f"Bearer {os.environ['NEURONEDGE_API_KEY']}",
    },
)
# Use with indices, query engines, etc.
response = llm.complete("My SSN is 123-45-6789. What should I do?")
Using with Other Providers
Create clients for different providers by changing the base URL:
# OpenAI
openai_client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://api.neuronedge.ai/v1/openai",
    default_headers={"Authorization": f"Bearer {os.environ['NEURONEDGE_API_KEY']}"},
)
# Anthropic (using OpenAI SDK for compatibility)
anthropic_client = OpenAI(
    api_key=os.environ["ANTHROPIC_API_KEY"],
    base_url="https://api.neuronedge.ai/v1/anthropic",
    default_headers={"Authorization": f"Bearer {os.environ['NEURONEDGE_API_KEY']}"},
)
# Groq
groq_client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.neuronedge.ai/v1/groq",
    default_headers={"Authorization": f"Bearer {os.environ['NEURONEDGE_API_KEY']}"},
)
Best Practices
- Use environment variables for all API keys
- Create a singleton client and reuse it across your application
- Handle rate limits with exponential backoff
- Use async clients for better performance in async applications
- Monitor X-NeuronEdge headers in responses for detection metrics
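The singleton advice can be as little as a memoized factory. A stdlib-only sketch, where `get_client` is a hypothetical stand-in that in real code would build and return the configured `OpenAI(...)` client from the Configuration section:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_client() -> dict:
    # Hypothetical stand-in: real code would construct and return the
    # OpenAI(...) client configured with the NeuronEdge base URL.
    return {"base_url": "https://api.neuronedge.ai/v1/openai"}
```

Every call returns the same cached object, so HTTP connection pools are created once and reused across your application.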