Enterprise
Agentic AI Security
AI agents execute multi-step tasks autonomously, making dozens of LLM calls per workflow. NeuronEdge ensures every call is protected, maintaining data privacy across the entire agent lifecycle.
The Agentic AI Challenge
Traditional chatbots make single LLM calls with direct user input. AI agents are fundamentally different:
- Multi-step workflows: Agents chain together dozens of LLM calls, each potentially exposing sensitive data
- Tool integrations: Agents access databases, APIs, and documents containing customer PII
- RAG systems: Retrieved context often contains PII from knowledge bases
- Autonomous decisions: Agents make decisions without human review, increasing risk
The NeuronEdge Solution
1. Single Integration Point
Route all agent LLM calls through NeuronEdge. Every call—planning, reasoning, tool use, response generation—gets automatic PII protection.
// Configure your agent framework to use NeuronEdge
const agent = new AIAgent({
  llm: {
    baseURL: 'https://api.neuronedge.ai/v1/openai',
    headers: {
      'Authorization': 'Bearer ne_live_your_key',
      'X-Provider-API-Key': 'sk-your-openai-key',
    },
  },
});
// Every LLM call the agent makes is now protected
2. Context-Aware Redaction
PII detected in one step is consistently redacted across all subsequent steps. Hash-based redaction ensures the same name always maps to the same placeholder.
// Step 1: Agent retrieves customer data
"Customer John Smith (john@acme.com) reported issue #1234"
// What the LLM sees (after NeuronEdge):
"Customer [HASH:a1b2c3] ([HASH:d4e5f6]) reported issue #1234"
// Step 2: Agent analyzes and responds
// Uses consistent placeholders, maintaining context without exposing PII
3. Audit Trail
Every agent interaction is logged with detection metrics. Trace PII protection across the entire workflow for compliance and debugging.
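A small helper (hypothetical, not part of any SDK) could pull these metrics off a proxied response's headers for logging or alerting:

```javascript
// Extract NeuronEdge protection metrics from response headers.
// Header names are taken from the documented response headers;
// this helper itself is an illustrative assumption.
function extractProtectionMetrics(headers) {
  return {
    requestId: headers['x-request-id'],
    entitiesDetected: Number(headers['x-neuronedge-entities-detected']),
    detectionTimeMs: Number(headers['x-neuronedge-detection-time-ms']),
  };
}

// Example values as they might appear on a proxied completion call:
const metrics = extractProtectionMetrics({
  'x-request-id': '01HXYZ123ABC',
  'x-neuronedge-entities-detected': '3',
  'x-neuronedge-detection-time-ms': '0.52',
});
console.log(metrics.requestId); // '01HXYZ123ABC'
```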
// X-Request-ID links all calls in a workflow
X-Request-ID: 01HXYZ123ABC
X-NeuronEdge-Entities-Detected: 3
X-NeuronEdge-Detection-Time-Ms: 0.52
// Query audit logs by request ID to trace protection
Architecture Patterns
Customer Support Agent
┌────────────────┐
│ Customer │
│ Message │
└───────┬────────┘
│
▼
┌────────────────┐ ┌─────────────────────┐
│ Retrieve │────▶│ Knowledge Base │
│ Context (RAG) │ │ (Contains PII) │
└───────┬────────┘ └─────────────────────┘
│
▼
┌────────────────────────────────────────────────┐
│ NeuronEdge │
│ • Redact PII from retrieved context │
│ • Redact customer message │
│ • Maintain conversation consistency │
└───────────────────────┬────────────────────────┘
│
▼
┌────────────────┐ ┌─────────────────────┐
│ LLM/SLM │────▶│ Response with │
│ │ │ PII Restored │
└────────────────┘ └─────────────────────┘
Document Processing Agent
Agents that process documents (contracts, medical records, financial statements) need consistent PII handling across extraction, analysis, and summarization steps.
// Document processing workflow
const result = await agent.process({
  document: uploadedPDF,
  tasks: [
    { type: 'extract', fields: ['name', 'address', 'account'] },
    { type: 'analyze', prompt: 'Summarize key terms' },
    { type: 'generate', prompt: 'Draft response letter' }
  ]
});
// Each LLM call goes through NeuronEdge:
// - Extract: PII in document is redacted
// - Analyze: Analysis uses redacted placeholders
// - Generate: Final output restores original PII for delivery
Multi-Agent Systems
When multiple agents collaborate, NeuronEdge ensures PII protection is consistent across all agent communications.
// Orchestrator agent coordinates specialists
const orchestrator = new OrchestratorAgent({
  baseURL: 'https://api.neuronedge.ai/v1/openai',
  // All sub-agents inherit this configuration
  agents: [
    new ResearchAgent(), // Queries databases
    new AnalysisAgent(), // Processes data
    new ResponseAgent(), // Generates output
  ]
});
// PII detected by ResearchAgent is consistently
// redacted when passed to AnalysisAgent and ResponseAgent
Framework Integration
NeuronEdge integrates with all major agentic frameworks. Simply configure the base URL to route LLM calls through NeuronEdge for automatic PII protection.
Recommended Frameworks
LangGraph
State machine-based agent orchestration with built-in memory, human-in-the-loop controls, and streaming. Recommended over LangChain alone for complex workflows.
Claude Agent SDK
Anthropic's official SDK for building powerful agents. Minimal overhead with sophisticated tool use, error recovery, and long-running task support.
All Supported Frameworks
Google ADK
Agent Development Kit on Vertex AI. Supports multiple LLM providers including Claude and open models via Agent Engine.
LangChain
Configure the base URL in your ChatOpenAI or ChatAnthropic client for chain-based workflows.
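A sketch of this configuration, assuming the public `@langchain/openai` package (the `configuration` object is passed through to the underlying OpenAI client; key values are placeholders from the examples above):

```javascript
import { ChatOpenAI } from '@langchain/openai';

// Route LangChain's ChatOpenAI calls through NeuronEdge.
const llm = new ChatOpenAI({
  model: 'gpt-4o-mini',
  configuration: {
    baseURL: 'https://api.neuronedge.ai/v1/openai',
    defaultHeaders: {
      'Authorization': 'Bearer ne_live_your_key',
      'X-Provider-API-Key': 'sk-your-openai-key',
    },
  },
});
```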
LlamaIndex
Set NeuronEdge as the LLM endpoint for all query engines and RAG pipelines.
CrewAI
Configure at the crew level for multi-agent collaboration with role-based agents.
AutoGPT / AgentGPT
Override the OpenAI base URL in configuration for autonomous agent loops.
Custom Agents
Any framework using OpenAI or Anthropic SDKs—just change the base URL.
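For a custom agent built directly on the OpenAI Node SDK, the change is a single constructor option (a sketch; key values are placeholders consistent with the examples above):

```javascript
import OpenAI from 'openai';

// Point the standard OpenAI client at NeuronEdge; no other code changes.
const client = new OpenAI({
  baseURL: 'https://api.neuronedge.ai/v1/openai',
  apiKey: 'ne_live_your_key',
  defaultHeaders: { 'X-Provider-API-Key': 'sk-your-openai-key' },
});
```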
Best Practices
- Use Hash redaction for multi-step agents to maintain consistent entity references across calls
- Create dedicated policies for different agent types based on their data access patterns
- Monitor audit logs to identify unexpected PII exposure in agent workflows
- Use custom patterns for industry-specific identifiers (patient IDs, account numbers)
- Test with realistic data to ensure detection coverage matches your actual workflows
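On the custom-patterns point: an industry-specific identifier is just a regular expression. The `PT-` prefix and 8-digit format below are invented for illustration; how a pattern is registered in a policy is product-specific, but the regex is what you would supply.

```javascript
// Hypothetical patient-ID format: "PT-" followed by 8 digits.
const patientIdPattern = /PT-\d{8}/g;

// Validate the pattern against realistic sample text before deploying.
const note = 'Patient PT-00412233 transferred from PT-00987654.';
const matches = note.match(patientIdPattern);
console.log(matches); // ['PT-00412233', 'PT-00987654']
```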