A10 Networks unveils AI firewall to secure AI and LLM inference environments
Digital Edge Bureau | 15 Jun, 2025
Aimed at helping organizations prepare for and protect new AI environments, A10 Networks, a leading provider of application delivery and network security solutions, has unveiled new AI firewall and predictive performance capabilities at the recently held Interop Tokyo conference, themed ‘AI Steps into Reality’.
Organizations across the globe are rapidly deploying new AI applications and building AI-ready data centers to automate operations and achieve operational excellence. This requires ultra-high performance for AI and large language model (LLM) inference environments to deliver real-time responses, as well as new cybersecurity solutions to secure them.
Dhrupad Trivedi, President & CEO, A10 Networks, says, “Enterprises are deploying and training AI and LLM inference models on-premises or in the cloud at a rapid pace. New capabilities must be developed to address three key challenges of these new environments: latency, security and operational complexity.”

“With over 20 years of experience in securing and delivering applications, we are expanding our capabilities to deliver on these needs to provide resilience, high performance and security for AI and LLM infrastructures,” adds Trivedi.
Preventing, detecting and mitigating AI and LLM-level cyber threats
A10 is announcing new AI firewall capabilities that can be deployed in front of the APIs or URLs that expose large language models, whether a custom LLM or one built on top of a commercial solution such as OpenAI or Anthropic. Built on an edge-optimized architecture with GPU-enabled hardware, these capabilities protect LLMs at high performance and can be deployed in any infrastructure as an incremental security capability.
These capabilities help prevent, detect and mitigate AI-level threats by enabling customers to test their AI inference models against known vulnerabilities and remediate them using A10’s proprietary LLM safeguarding techniques. The firewall detects AI-level threats such as prompt injections and sensitive information disclosure by inspecting request and response traffic at the prompt level and enforcing the security policies required to mitigate them.
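A10 has not published the implementation details of this prompt-level inspection, so the following is only a minimal, illustrative sketch in Python of the general flow described above: inspect the request, inspect the response, enforce a policy. The pattern lists, function names and blocking messages are hypothetical and are not A10’s actual rule set or product logic.

```python
import re

# Hypothetical, illustrative policy lists -- not A10's actual rule set.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard the system prompt",
    r"reveal your system prompt",
]
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",          # US SSN-like number
    r"\b(?:\d[ -]*?){13,16}\b",        # possible payment card number
]

def check_prompt(prompt: str) -> list[str]:
    """Flag request traffic that looks like a prompt-injection attempt."""
    return [p for p in INJECTION_PATTERNS if re.search(p, prompt, re.IGNORECASE)]

def check_response(text: str) -> list[str]:
    """Flag response traffic that appears to disclose sensitive information."""
    return [p for p in SENSITIVE_PATTERNS if re.search(p, text)]

def enforce(prompt: str, llm_call) -> str:
    """Apply policy before and after the LLM call, blocking on a match."""
    if check_prompt(prompt):
        return "Request blocked by prompt-level security policy."
    response = llm_call(prompt)
    if check_response(response):
        return "Response withheld: possible sensitive information disclosure."
    return response
```

In practice, a production gateway of this kind would sit in front of the model’s API endpoint and combine pattern rules with model-based classifiers, but the request/response inspection and policy enforcement flow shown here mirrors the behaviour the announcement describes.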
Delivering real-time experience for AI and LLM inference environments
A10 continues to deliver high performance and resilience for AI and LLM-enabled applications. It does this by offloading processor-intensive tasks such as TLS/SSL decryption and caching, by optimizing traffic routing, and by providing actionable insights to maximize network availability and performance.
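To give a sense of what TLS/SSL offload means in this context, here is a deliberately simplified Python sketch of a proxy that terminates TLS itself and forwards plaintext to a backend inference service, sparing the backend the decryption work. Certificate paths, addresses and ports are hypothetical, and this is not how A10’s appliances are implemented.

```python
import socket
import ssl

# Illustrative TLS-offload sketch: terminate TLS at the proxy, forward plaintext.
CERT_FILE, KEY_FILE = "proxy-cert.pem", "proxy-key.pem"   # hypothetical paths
BACKEND_ADDR = ("10.0.0.5", 8080)                         # assumed inference backend

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            client, _ = tls_listener.accept()        # TLS handshake happens here
            with client, socket.create_connection(BACKEND_ADDR) as backend:
                data = client.recv(65536)            # decrypted request bytes
                backend.sendall(data)                # forwarded in plaintext
                client.sendall(backend.recv(65536))  # relay the response back
```

Real offload engines handle this in hardware and keep connections multiplexed and persistent; the sketch only illustrates the division of labour, with decryption done in front of the inference service rather than by it.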
New capabilities allow early detection of network performance issues, much like an early warning system: they identify near-term congestion or capacity shortfalls so that customers can act before an issue becomes critical, preventing unscheduled downtime and supporting planning for optimal network performance.
The predictive performance capabilities will run on GPU-powered A10 appliances, enabling faster processing and the ability to quickly analyze vast amounts of data and surface anomalies ahead of time. Together, these AI security and infrastructure capabilities allow for ease of management and broader intelligence to accurately detect threats, and help deliver an optimal customer experience.
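A10 has not detailed how its predictive models work. As a rough illustration of the “early warning” idea, the Python sketch below projects a short-term linear trend over recent link-utilization samples and flags links expected to approach capacity; the data, threshold and forecast horizon are all hypothetical.

```python
from statistics import mean

def forecast_utilization(samples: list[float], horizon: int = 12) -> float:
    """Project utilization `horizon` intervals ahead with a simple linear trend."""
    n = len(samples)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return samples[-1] + slope * horizon

def early_warning(samples: list[float], threshold: float = 90.0) -> bool:
    """Warn if projected utilization crosses the capacity threshold."""
    return forecast_utilization(samples) >= threshold

# Example: utilization (%) climbing steadily over the last 10 intervals.
recent = [52, 55, 59, 62, 66, 70, 73, 77, 80, 84]
if early_warning(recent):
    print("Warning: link projected to approach capacity - act before it is critical")
```

A production system would rely on far richer telemetry and models than a linear trend, but the principle is the same: analyze recent measurements, forecast forward, and alert before the issue becomes critical.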