SDKs
TypeScript / Node.js
Integrate NeuronEdge into your TypeScript or Node.js applications using the official OpenAI SDK with a simple configuration change.
Installation
Use your existing OpenAI SDK. No additional packages required:
npm install openai
# or
yarn add openai
# or
pnpm add openai

Configuration
Configure the OpenAI client to use NeuronEdge:
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY, // Your OpenAI key
baseURL: 'https://api.neuronedge.ai/v1/openai', // NeuronEdge endpoint
defaultHeaders: {
'Authorization': `Bearer ${process.env.NEURONEDGE_API_KEY}`, // NeuronEdge key
},
});

Environment Variables: Store your keys in .env:
OPENAI_API_KEY=sk-your-openai-key
NEURONEDGE_API_KEY=ne_live_your-neuronedge-key

Basic Usage
Use the client exactly as you would the OpenAI SDK:
// Chat completions
const response = await openai.chat.completions.create({
model: 'gpt-5.2',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'My name is John Smith and my email is john@example.com' },
],
temperature: 0.7,
});
console.log(response.choices[0].message.content);
// PII is automatically redacted before sending to OpenAI
// and restored in the response you receive

Streaming
Streaming works exactly as it does with the OpenAI SDK:
const stream = await openai.chat.completions.create({
model: 'gpt-5.2',
messages: [{ role: 'user', content: 'Tell me a story about Jane Doe' }],
stream: true,
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content || '';
process.stdout.write(content);
}
// PII detection runs on each streamed token

Redaction Options
Configure redaction per-request using extra headers:
const response = await openai.chat.completions.create(
{
model: 'gpt-5.2',
messages: [{ role: 'user', content: 'Process this SSN: 123-45-6789' }],
},
{
headers: {
'X-NeuronEdge-Format': 'hash', // token, hash, or synthetic
'X-NeuronEdge-Mode': 'balanced', // real-time, balanced, or thorough
'X-NeuronEdge-Policy': 'pol_abc123', // Use specific policy
},
}
);

Error Handling
import OpenAI from 'openai';
try {
const response = await openai.chat.completions.create({
model: 'gpt-5.2',
messages: [{ role: 'user', content: 'Hello' }],
});
} catch (error) {
if (error instanceof OpenAI.APIError) {
console.error('Status:', error.status);
console.error('Message:', error.message);
// NeuronEdge-specific errors
const body = error.error as any;
if (body?.error?.code === 'RATE_LIMIT_EXCEEDED') {
// Handle rate limiting
const retryAfter = error.headers?.['retry-after'];
console.log(`Retry after ${retryAfter} seconds`);
}
}
}

Using with Other Providers
Create clients for different providers:
// OpenAI
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
baseURL: 'https://api.neuronedge.ai/v1/openai',
defaultHeaders: { 'Authorization': `Bearer ${process.env.NEURONEDGE_API_KEY}` },
});
// Anthropic (using OpenAI SDK for compatibility)
const anthropic = new OpenAI({
apiKey: process.env.ANTHROPIC_API_KEY,
baseURL: 'https://api.neuronedge.ai/v1/anthropic',
defaultHeaders: { 'Authorization': `Bearer ${process.env.NEURONEDGE_API_KEY}` },
});
// Groq
const groq = new OpenAI({
apiKey: process.env.GROQ_API_KEY,
baseURL: 'https://api.neuronedge.ai/v1/groq',
defaultHeaders: { 'Authorization': `Bearer ${process.env.NEURONEDGE_API_KEY}` },
});

LangChain Integration
import { ChatOpenAI } from '@langchain/openai';
const model = new ChatOpenAI({
modelName: 'gpt-5.2',
openAIApiKey: process.env.OPENAI_API_KEY,
configuration: {
baseURL: 'https://api.neuronedge.ai/v1/openai',
defaultHeaders: {
'Authorization': `Bearer ${process.env.NEURONEDGE_API_KEY}`,
},
},
});
// Use with chains, agents, etc.
const response = await model.invoke('My email is john@example.com');

Vercel AI SDK
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';
const openai = createOpenAI({
apiKey: process.env.OPENAI_API_KEY,
baseURL: 'https://api.neuronedge.ai/v1/openai',
headers: {
'Authorization': `Bearer ${process.env.NEURONEDGE_API_KEY}`,
},
});
const { text } = await generateText({
model: openai('gpt-5.2'),
prompt: 'My SSN is 123-45-6789. What should I do with it?',
});

Best Practices
- Use environment variables for all API keys
- Create a singleton client and reuse it across your application
- Handle rate limits with exponential backoff
- Use streaming for better UX in chat applications
- Monitor X-NeuronEdge headers in responses for detection metrics
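The backoff practice can be sketched as a small wrapper. This is an illustrative helper, not part of the NeuronEdge SDK: the `withRetry` name and its parameters are hypothetical, and in real code you would retry only on retryable errors (e.g. a 429 status) and honor any `retry-after` header from the error-handling example above.

```typescript
// Hypothetical retry helper with exponential backoff.
// Delays double each attempt: baseDelayMs, 2x, 4x, ...
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // In production, rethrow immediately for non-retryable errors,
      // e.g. `if (error instanceof OpenAI.APIError && error.status !== 429) throw error;`
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  // All attempts failed; surface the last error to the caller.
  throw lastError;
}

// Usage with the client configured earlier:
// const response = await withRetry(() =>
//   openai.chat.completions.create({
//     model: 'gpt-5.2',
//     messages: [{ role: 'user', content: 'Hello' }],
//   })
// );
```

Adding random jitter to each delay is a common refinement to avoid synchronized retries across many clients.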