Artificial intelligence is transforming cybersecurity — and not always for the better. From expanding attack surfaces to shifting budget priorities, here’s how AI is influencing today’s security decisions.
AI as an Evolving Attack Surface
AI isn’t just a defense tool; it’s a new target and weapon. Attackers are embedding AI into legacy malware, making it adaptive and harder to detect. Security tools like firewalls and endpoint protection must evolve to keep up.
Internally, companies using LLMs and AI agents face risks of misuse. Employees could exploit these tools through jailbreaking, evasion, or poisoning. Policies on acceptable AI use and strong visibility into internal AI systems are essential.
Shadow AI adds another layer of risk. Employees might upload sensitive data to public AI services without oversight, leading to potential leaks. Organizations must track and categorize these services to educate staff and intercept risky behavior.
Meanwhile, SaaS providers are collecting user behavior data for AI-driven analytics and upselling. Companies need visibility into third-party data collection to protect sensitive information.
Today, AI is primarily a data leak concern. But malicious AI use is expected to rise, reinforcing the need for clear AI use policies.
AI and LLM Integration: Managing Internal AI
Parsons Corporation illustrates how organizations are integrating AI securely. They’ve deployed commercial and government-specific LLMs (“Parsons GPT” and “Parsons GovGPT”) within their own cloud environments to keep data private.
These LLMs run on Microsoft Azure. While testing open-source LLMs like Llama on Nutanix, Parsons acknowledges current internal hardware limitations and is moving cautiously with AI infrastructure investments.
To secure these AI systems, Parsons uses tools like HiddenLayer to monitor user behavior for abuse (e.g., jailbreaking, poisoning) and verify that the AI itself isn’t acting maliciously.
Impact on Security Tools and Productivity
So far, AI features in security tools haven’t delivered major productivity gains. Many feel like marketing hype rather than real improvements. While machine learning has long been a part of security, AI is often a rebranding exercise at this stage.
However, there’s real promise in agentic AI that enables conversational interaction with data. The challenge is the current fragmented landscape — one AI bot per security tool isn’t sustainable.
The vision is a centralized AI analyst, akin to a SIEM for AI, capable of analyzing all security data in one place. This AI could shift security operations from reactive to predictive, spotting trends and anticipating attacks.
Practical use cases include AI bots handling phishing mailboxes: responding to users, blocking malicious domains, and automating responses through secure gateways.
Budget Priorities: Where the Money’s Going
Budgets are shifting. Spending on AI protection, detection, and response is rising as organizations seek visibility into new AI-related threats.
Conversely, traditional security awareness training, such as phishing simulations, is seeing funding cuts. The focus is turning toward new challenges like deepfakes and voice fakes, which are emerging threats in the AI era.
Bottom Line
AI is fundamentally altering the cybersecurity landscape. Organizations must balance the benefits of AI with new attack surfaces and privacy risks. Proactive policies, investment in AI protection, and a centralized approach to AI analysis will be key to staying ahead.