
Introducing NeuronEdge: Enterprise PII Protection for AI

Today we're excited to announce the private beta of NeuronEdge—the security layer that protects sensitive data in every AI API call. Learn how our dual-engine detection system works and why we built it.

December 12, 2024
8 min read
NeuronEdge Team

Every day, millions of API calls flow to large language models like GPT-5, Claude, and Gemini. These requests often contain the most sensitive information businesses handle: customer names, social security numbers, medical records, financial data, and proprietary business information.

The AI revolution is transforming how enterprises operate, but it has created a critical security gap. When you send a prompt to an LLM provider, that data leaves your infrastructure. Even with the best intentions and security practices from AI providers, this creates regulatory, compliance, and competitive risks that many organizations simply cannot accept.

Today, we're launching NeuronEdge—an API gateway that sits between your applications and LLM providers, automatically detecting and protecting personally identifiable information (PII) before it ever leaves your control.

The Problem: PII Leakage in AI Workflows

Consider a typical enterprise AI use case: a customer service chatbot that helps resolve support tickets. When a customer describes their issue, they naturally include identifying information:

// Actual customer message sent to LLM:
"Hi, my name is John Smith and I'm having trouble with my account.
My email is john.smith@acme.com and my account number is
ACC-847291. I was charged $149.99 twice on 12/10/2024."

This single message contains a name, email address, account number, financial information, and dates—all sent directly to a third-party AI provider. Multiply this by thousands of interactions daily, and you have a significant data exposure problem.

⚠️ Regulatory Reality

Under GDPR, CCPA, HIPAA, and other regulations, organizations are responsible for protecting personal data regardless of where it's processed. Sending PII to third-party AI providers without proper safeguards can result in significant fines and reputational damage.

Our Solution: Intelligent PII Protection at the Edge

NeuronEdge intercepts API requests to LLM providers, detects sensitive information using our dual-engine system, replaces PII with safe placeholders, and then restores the original values in the response—all in under 20 milliseconds.

Here's what the same customer message looks like after NeuronEdge processing:

// Message sent to LLM (after NeuronEdge):
"Hi, my name is [PERSON_1] and I'm having trouble with my account.
My email is [EMAIL_1] and my account number is
[ACCOUNT_1]. I was charged [MONEY_1] twice on [DATE_1]."

The LLM processes the request with full context intact—it understands the customer is asking about a double charge—but never sees the actual sensitive data. When the response comes back, NeuronEdge automatically restores the original values before returning it to your application.
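
To make that flow concrete, here is a minimal TypeScript sketch of the redact-and-restore pattern, assuming a simplified Detection shape produced by the detection step. The names and details are illustrative, not NeuronEdge's actual internals.

// Minimal sketch of the redact -> forward -> restore flow (illustrative only).
type Detection = { start: number; end: number; type: string }; // e.g. PERSON, EMAIL, ACCOUNT

// Replace each detected span with a numbered placeholder like [PERSON_1] and
// remember the placeholder -> original mapping for this request only.
function redact(text: string, detections: Detection[]) {
  const mapping = new Map<string, string>();
  const counters = new Map<string, number>();
  let redacted = "";
  let cursor = 0;

  for (const d of [...detections].sort((a, b) => a.start - b.start)) {
    const n = (counters.get(d.type) ?? 0) + 1;
    counters.set(d.type, n);
    const placeholder = `[${d.type}_${n}]`;
    mapping.set(placeholder, text.slice(d.start, d.end));
    redacted += text.slice(cursor, d.start) + placeholder;
    cursor = d.end;
  }
  redacted += text.slice(cursor);
  return { redacted, mapping };
}

// After the LLM responds, swap the placeholders back for the original values.
function restore(text: string, mapping: Map<string, string>): string {
  let restored = text;
  for (const [placeholder, original] of mapping) {
    restored = restored.split(placeholder).join(original);
  }
  return restored;
}

The important property is that the placeholder-to-value mapping exists only for the lifetime of a single request, which is what makes restoring the response possible without persisting anything.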

Dual-Engine Detection: How It Works

Accurate PII detection is hard. Simple pattern matching catches obvious cases but misses context. Pure machine learning models are slow and resource-intensive. We built a dual-engine system that combines the best of both approaches.

Engine 1: High-Speed Pattern Matching

Our first engine uses an optimized set of 102 regex patterns compiled into a single deterministic finite automaton (DFA). This allows us to scan text for all patterns simultaneously in a single pass, achieving sub-millisecond detection times.

  • 📧 Email Addresses: RFC-compliant patterns catching standard and edge-case formats
  • 📱 Phone Numbers: international formats including country codes and extensions
  • 💳 Financial Data: credit cards, IBANs, routing numbers with Luhn validation
  • 🆔 Government IDs: SSN, passport numbers, driver's licenses across jurisdictions
  • 🏥 Medical Identifiers: NPI numbers, DEA numbers, medical record numbers
  • 📍 Location Data: addresses, coordinates, IP addresses, postal codes
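
As a toy illustration of single-pass, multi-pattern scanning, the sketch below folds a few heavily simplified patterns into one alternation with named groups and classifies each match by the group that fired. The real engine uses a far larger pattern set compiled into a DFA inside a Rust/WASM module rather than JavaScript's backtracking regex engine.

// Toy single-pass scanner: a handful of simplified patterns combined into one
// alternation with named groups. Production patterns are far more thorough.
const PII_PATTERNS = new RegExp(
  [
    String.raw`(?<EMAIL>[\w.+-]+@[\w-]+\.[\w.-]+)`,
    String.raw`(?<SSN>\b\d{3}-\d{2}-\d{4}\b)`,
    String.raw`(?<PHONE>\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b)`,
    String.raw`(?<CARD>\b(?:\d[ -]?){13,16}\b)`,
  ].join("|"),
  "g",
);

type Detection = { start: number; end: number; type: string };

function scanWithPatterns(text: string): Detection[] {
  const detections: Detection[] = [];
  for (const match of text.matchAll(PII_PATTERNS)) {
    // Exactly one named group is defined per match; its name is the PII type.
    const entry = Object.entries(match.groups ?? {}).find(([, value]) => value !== undefined);
    if (entry && match.index !== undefined) {
      detections.push({ start: match.index, end: match.index + match[0].length, type: entry[0] });
    }
  }
  return detections;
}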

Engine 2: Contextual NER Detection

Our second engine uses named entity recognition (NER) powered by transformer models optimized for edge deployment. This catches PII that pattern matching misses—like names that don't follow common formats or context-dependent sensitive information.

The NER engine runs as a WebAssembly (WASM) module at the edge, keeping latency low while providing the accuracy of machine learning. It's particularly effective at detecting:

  • Person names in any cultural format or transliteration
  • Organization names including informal references
  • Locations that aren't in standard address format
  • Medical conditions and health-related information
  • Custom entities specific to your business domain
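
Combining the two engines then comes down to merging their span lists. Below is a rough sketch of one possible merge policy (keep the earliest span; on a tie at the same start, prefer the longer one); the data shape and the policy itself are illustrative assumptions, not NeuronEdge's actual merge logic.

// Merge span lists from both engines; the NER hits would come from the
// WASM model, the regex hits from the pattern engine described above.
type Detection = { start: number; end: number; type: string; source: "regex" | "ner" };

function mergeDetections(regexHits: Detection[], nerHits: Detection[]): Detection[] {
  const all = [...regexHits, ...nerHits].sort(
    // Earliest start first; for the same start, longer span first.
    (a, b) => a.start - b.start || (b.end - b.start) - (a.end - a.start),
  );
  const merged: Detection[] = [];
  for (const d of all) {
    const last = merged[merged.length - 1];
    if (last && d.start < last.end) continue; // overlaps a span we already kept
    merged.push(d);
  }
  return merged;
}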

Zero-Knowledge Architecture

Security isn't just about detection—it's about how you handle the data afterward. NeuronEdge is built on a zero-knowledge architecture:

🔒 Zero-Knowledge Guarantee

Original PII values are processed in-memory only and are never persisted to any storage. After each request completes, all sensitive data is immediately discarded. We maintain zero knowledge of actual PII content.

This means even if our infrastructure were compromised, there would be no PII to steal. Your sensitive data exists only for the milliseconds required to process each request.

Built for Production: Sub-20ms Latency

We obsessed over performance because we know latency matters. Every millisecond added to an API call compounds across your user experience. Our architecture delivers:

  • Regex detection: <1ms
  • NER detection: ~15ms
  • Total overhead: <20ms

This is achieved through Cloudflare Workers running at 300+ edge locations globally, Rust-based WASM modules for compute-intensive operations, intelligent caching of detection models, and streaming support for real-time AI responses.
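
To give a feel for the deployment model, here is a skeletal Cloudflare Worker in TypeScript that forwards a request upstream and streams the response straight back to the caller. The detection, redaction, and restoration steps that NeuronEdge performs around these two calls are omitted, and the upstream URL is hard-coded for brevity.

// Skeletal edge proxy: forward the request upstream and stream the response
// back. NeuronEdge's redaction of the request body and restoration of the
// response are omitted here.
export default {
  async fetch(request: Request): Promise<Response> {
    const upstreamUrl = "https://api.openai.com/v1/chat/completions"; // hard-coded for brevity

    const upstream = await fetch(upstreamUrl, {
      method: request.method,
      headers: request.headers,
      body: request.body,
    });

    // Passing the upstream body through directly streams it chunk by chunk,
    // which keeps real-time (SSE) AI responses responsive.
    return new Response(upstream.body, {
      status: upstream.status,
      headers: upstream.headers,
    });
  },
};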

Getting Started

Integrating NeuronEdge takes minutes. Simply change your API endpoint from your LLM provider to NeuronEdge, and add your provider's API key as a header:

Before (direct to OpenAI):
POST https://api.openai.com/v1/chat/completions
Authorization: Bearer sk-your-openai-key

After (through NeuronEdge):
POST https://api.neuronedge.ai/v1/openai/chat/completions
Authorization: Bearer ne_live_your_neuronedge_key
X-Provider-API-Key: sk-your-openai-key

That's it. Your existing code continues to work exactly as before, but now every request is automatically protected. No SDK changes, no code refactoring, no complex integration.
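
For example, with the official OpenAI Node SDK the change is just a base URL and one default header. This is a quick sketch, assuming the gateway mirrors the provider's path layout as the endpoint above shows; the model name and key values are placeholders.

import OpenAI from "openai";

// Point the existing OpenAI client at NeuronEdge instead of api.openai.com.
// Key values here are the placeholders from the snippet above.
const client = new OpenAI({
  apiKey: "ne_live_your_neuronedge_key",          // sent as Authorization: Bearer ...
  baseURL: "https://api.neuronedge.ai/v1/openai", // gateway path mirrors the OpenAI layout
  defaultHeaders: { "X-Provider-API-Key": "sk-your-openai-key" },
});

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hi, my name is John Smith and I was charged twice..." }],
});

console.log(completion.choices[0].message.content);

Raw HTTP clients work the same way: swap the host and add the X-Provider-API-Key header.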

What's Next

We're launching in private beta today, and we're looking for design partners who are serious about AI security. In the coming months, we'll be adding:

  • Custom entity types for industry-specific PII detection
  • Advanced policies for granular control over what gets protected
  • SOC 2 Type II certification for enterprise compliance requirements
  • On-premise deployment for organizations with strict data residency needs

If you're building AI-powered applications and take data privacy seriously, we'd love to work with you. Request access to the private beta or explore our documentation to learn more about how NeuronEdge can protect your AI workflows.

— The NeuronEdge Team
