Artificial intelligence is no longer a future consideration for security teams. It is a present and growing operational risk. Organizations across industries are rapidly deploying AI-powered tools, custom-built applications, and autonomous AI agents, often faster than their security programs can keep pace.
The result is an expanding attack surface that most incident response (IR) frameworks were never designed to address.
A new warning from Gartner makes the stakes clear: by 2028, AI-related issues could account for at least half of all enterprise incident response efforts. For security leaders, this is not a distant forecast to monitor; it is a call to action today.
Understanding where the risks originate, how AI is reshaping the threat landscape, and what concrete steps your organization can take will be the difference between a resilient program and a reactive one.
The Infosecurity Magazine article summarizes key predictions from Gartner regarding the intersection of AI and enterprise security:
Key Prediction: AI Drives Half of IR Efforts by 2028
Gartner forecasts that by 2028, at least half of enterprise incident response efforts will be devoted to managing security issues stemming from custom-built AI applications. These systems are being deployed faster than they can be properly tested or secured, creating complexity that most security teams are ill-equipped to handle.
Custom AI Apps Are the Weak Link
Gartner highlighted that custom-built AI applications are being released before they are fully tested or secured. Because these systems are dynamic and complex, they are difficult to protect over time, and most organizations still lack defined processes for responding to AI-related incidents.
The Case for Shifting Left
Mixter advocated for security teams to “shift left,” meaning they should be embedded in AI development projects from the beginning, ensuring that adequate controls are baked in from the start rather than bolted on after deployment.
AI-Powered Security Tools on the Rise
On a more optimistic note, Gartner also predicted that within two years, half of organizations will adopt AI security platforms designed to protect their use of third-party AI services and custom AI applications. These platforms help enforce acceptable use policies, monitor AI activity, and apply guardrails against threats such as prompt injection and data misuse.
Machine Identity Risk
The article also highlighted a Sysdig report revealing that machine identities now outnumber human users by 40,000 to one and present 7.5 times more risk than their human counterparts. Over-permissioned AI agents are a particular area of concern. Gartner predicted that AI-powered identity visibility platforms will become a critical tool for managing this risk.
Data Sovereignty and Geopolitical Pressure
Beyond AI-specific threats, Gartner predicted that by 2027, nearly 30% of organizations will require comprehensive sovereignty over cloud security controls, driven by geopolitical instability and local regulatory demands. Separately, research from Arqit found that 62% of organizations cite data sovereignty and privacy risks as the top factor slowing AI adoption on public cloud infrastructure.
Key Considerations
Before taking action, security and business leaders should reflect on these broader considerations:
1. AI Risks Are Not Future Risks. They Are Here Now
AI-related security incidents are already occurring. Gartner’s 2028 prediction is a projection of scale, not the starting point. Organizations that treat AI risk as a future concern are already behind.
2. Most IR Playbooks Are Not AI-Ready
Traditional incident response frameworks were built around known attack vectors: malware, phishing, ransomware, and insider threats. AI-related incidents introduce new categories: model poisoning, prompt injection, hallucination-driven decisions, and autonomous agent behavior that falls outside normal parameters. Existing playbooks likely need meaningful updates.
3. Speed of AI Adoption Is Outpacing Security Governance
Many organizations are deploying AI tools, including custom-built applications and third-party AI services, without fully integrating security requirements into the procurement or development process. This gap creates risk accumulation that may not be visible until an incident occurs.
4. Machine Identities Represent an Underappreciated Threat
With machine identities outnumbering human users by tens of thousands to one, and AI agents frequently operating with over-permissioned access, the traditional model of identity and access management needs to evolve significantly.
5. Sovereignty and Regulatory Obligations Are Growing
Organizations operating globally, or in regulated industries, need to account for emerging data sovereignty requirements as part of their AI and cloud security posture. Failure to do so may create both operational and legal risk.
Actionable Steps to Protect Your Organization & Strengthen Your Incident Response Plan
Step 1: Conduct an AI Risk Inventory
Begin by cataloging every AI tool, model, and application in use across your organization, including third-party services, vendor-supplied tools, and any internally developed applications. Identify which systems handle sensitive data, make autonomous decisions, or interact with customers and critical infrastructure.
- Map all AI tools to the business functions they support
- Document data flows into and out of each AI system
- Flag high-risk applications for priority review
Step 2: Embed Security in AI Development (Shift Left)
Security teams should be involved in AI projects from inception, not after deployment. Establishing clear security requirements and review gates during the development lifecycle reduces risk dramatically.
- Require security sign-off before any custom AI application enters production
- Integrate AI-specific risk assessments into your Software Development Life Cycle (SDLC) or procurement process
- Define minimum security standards for AI models, including testing for adversarial inputs and data leakage
Step 3: Update Your Incident Response Plan for AI-Specific Threats
Your IR plan should include playbooks specifically designed for AI-related incidents. These are categorically different from traditional cybersecurity incidents and require tailored response procedures.
- Develop response playbooks for prompt injection attacks, model manipulation, and AI-driven data exfiltration
- Define escalation paths for incidents involving autonomous AI agents
- Establish criteria for when to take an AI system offline during an incident
- Train IR team members on AI-specific forensics and evidence collection
Step 4: Invest in AI Security Platforms
Evaluate and deploy AI security platforms that can monitor AI usage across your environment, enforce acceptable use policies, and apply consistent guardrails to both third-party and custom AI applications.
- Look for platforms that protect against prompt injection and data misuse
- Ensure coverage for both SaaS AI tools and internally hosted models
- Prioritize platforms with real-time alerting and policy enforcement capabilities
Step 5: Address Machine Identity Risk
Machine identities, including AI agents, bots, service accounts, and automated workflows, must be governed with the same rigor as human identities. Over-permissioned AI agents represent one of the fastest-growing attack surfaces in enterprise environments.
- Conduct an audit of all machine identities and their current permission levels
- Apply the principle of least privilege to all AI agents and service accounts
- Implement AI-powered identity visibility tools to detect anomalous machine behavior
- Establish a lifecycle management process for machine credentials and access tokens
Step 6: Develop an AI-Specific Threat Intelligence Program
AI introduces threats that do not map neatly to traditional threat intelligence frameworks. Build or acquire intelligence capabilities focused specifically on emerging AI attack vectors.
- Subscribe to threat intelligence feeds that cover AI-specific exploits and vulnerabilities
- Monitor for reports of prompt injection techniques relevant to your AI tools
- Participate in industry working groups focused on AI security standards and incident sharing
Step 7: Run Tabletop Exercises for AI Incident Scenarios
Test your team’s readiness with realistic AI-specific scenarios before an actual incident occurs. Tabletop exercises expose gaps in your IR plan and build cross-functional awareness.
- Simulate a prompt injection attack on a customer-facing AI assistant
- Exercise a scenario where an AI agent takes unauthorized autonomous actions
- Test your response to an AI model being poisoned by adversarial training data
Step 8: Address Data Sovereignty in Your AI and Cloud Strategy
As geopolitical pressures and regulatory requirements around data sovereignty increase, security leaders must ensure that AI workloads comply with applicable jurisdiction-specific requirements.
- Work with legal and compliance teams to map regulatory obligations, like GDPR, by geography
- Evaluate confidential computing technologies that can provide sovereignty without sacrificing performance
- Ensure your cloud contracts include appropriate data residency and sovereignty provisions
Step 9: Combat Shadow AI with Policy and Monitoring
Gartner previously warned that 40% of organizations will experience shadow AI security incidents, driven by employees using unauthorized AI tools outside of sanctioned channels. Addressing this requires both technical controls and cultural engagement.
- Publish a clear, accessible AI acceptable use policy
- Deploy monitoring tools that can detect the use of unsanctioned AI services on corporate networks
- Create sanctioned pathways for employees to request and access approved AI tools
- Train employees on the risks of using unauthorized AI tools with sensitive data
Step 10: Build Executive and Board-Level Awareness of AI Risk
AI risk is a business risk, not just an IT risk. CISOs should ensure that leadership understands the implications of Gartner’s predictions and is equipped to make informed investment decisions.
- Present AI risk in business terms, including operational disruption, regulatory exposure, and reputational damage
- Develop a roadmap for AI security maturity that can be communicated to the board
- Establish metrics that track AI-related security posture over time
Gartner’s prediction that AI-related issues will drive half of all incident response efforts by 2028 is not a warning to be filed away. It is a strategic inflection point for every organization that is adopting, or planning to adopt, AI in any meaningful way.
The organizations that will fare best are not necessarily those with the largest security budgets, but those that act now: embedding security into AI projects from the start, updating their IR frameworks before incidents force the issue, and investing in tools and talent that are purpose-built for the AI era.
AI will continue to evolve rapidly. The attack surface it creates will evolve just as fast. But security programs that treat AI risk with the same rigor they apply to traditional threats, and that build the muscle memory of AI-specific incident response through planning and practice, will be far better positioned to respond when it matters most.
The question is not whether your organization will face an AI-related security incident. Based on current trajectories, the question is whether you will be ready when it happens.
CampusGuard can help design your incident response plan or test potential scenarios through tabletop exercises. Contact us to learn more and get started.