The High Cost of AI Convenience

Article AI

March 4, 2026

AI governance and risk management

In today’s interconnected world, organizations are increasingly reliant on Artificial Intelligence (AI) to support business-critical operations. Academic, financial, healthcare, and government platforms have benefited from the convenience and functionality that these services bring.

But this convenience has created an oversight gap, which cybercriminals can exploit. As a direct result, ungoverned AI systems such as chatbots and code assistants, often referred to as “shadow AI,” are more likely to be breached and more costly when they are.

According to IBM’s 2025 Cost of a Data Breach Report, the global average cost of a data breach reached $4.4 million, which is a 9% decrease since 2024. This reduction stems from faster identification and containment policies in place at the company level. It is important to note that about 16% of total breaches were from AI-related security incidents. AI is being used to defend organizations’ perimeter, but these systems in place are becoming the new favorite target for attackers.

In 2025, the cost of an implemented shadow AI, on average, accounted for $670,000 of the global average cost for a data breach. These incidents result in more personally identifiable information (PII) and intellectual property (IP) data being compromised. The swift rise in shadow AI has marked it as one of the top three costly breach factors, displacing traditional issues like security skills shortages.

Additionally, it has been noted that 97% of organizations that reported an AI-related breach lacked proper AI access controls, meaning that almost all organizations that were victim to this attack were essentially leaving the virtual “front door” to their most sensitive models and data completely unlocked.

These security incidents highlight that AI adoption is outpacing oversight, so basic security measures go unchecked, leaving organizations at risk.

In this article, we’re going to look at the two main domains of common AI security attacks:

  • Adversarial AI (Model-Side) and
  • Supply Chain/Shadow AI (Integration-Side).

To start, let’s look at our two main attack domains and the types of application vulnerabilities that an attacker will attempt to exploit.

Attack Domains

Attack domains in 2025 have evolved from traditional web vulnerabilities to the exploitation of the “AI Surface.” This includes adversarial attacks that work to target the logic and data of large language models themselves, and supply chain/shadow AI attacks that exploit the way AI is integrated and used by employees.

Adversarial AI (Model-Side Attacks)
Adversarial attacks target the model or the backend infrastructure that supports the AI. These attacks are used to manipulate the AI’s output or extract the sensitive training data stored on the backend. Common types of adversarial attack vectors include:

  • Prompt Injection: An attack where malicious instructions are “injected” into the user input of the prompt line, aiming to bypass the AI’s safety guardrails, forcing it to leak sensitive system information or execute unauthorized commands.
  • Data Poisoning: When an attacker attempts to compromise the integrity of the AI by introducing “poisoned” data into its training set. This can result in the model having biased, incorrect, or malicious results.
  • Model Inversion and Exfiltration: These are attacks designed to “reverse engineer” the model itself. This can result in the reconstruction of sensitive training data or potentially the transfer of ownership over the model itself.

Supply Chain and Shadow AI (Integration-Side Attacks)
Integration-side attacks exploit the “shadow AI” footprint by allowing unsanctioned tools and third-party plugins used by employees. These are dangerous because they often will bypass corporate firewalls and monitoring flags. Common techniques include:

  • Insecure API and Plugin Exploits: This attack vector involves compromising third-party extensions to gain unauthorized access. By exploiting these integrations, attackers can force the AI to exfiltrate sensitive user data or interact with malicious external domains.
  • Credential Abuse (AI-IAM): An attack in which criminals use stolen, phished, or generated usernames and passwords to gain unauthorized access to accounts, systems, or data. In 2025, this security gap accounted for 97% of AI-related breaches where organizations lacked proper AI access controls.
  • AI-Generated Phishing: Attackers now have the ability to use generative AI to create hyper-realistic phishing campaigns, deployed against various employees and organizations. IBM highlights that this accounted for one out of six breaches this year.
  • AI Impersonation: Beyond text, attackers leverage AI to impersonate voices and faces in real-time to bypass basic verification. This “deepfake impersonation” was involved in 35% of AI-powered breaches this year. A more notable example was a deepfake conference call that gave an attacker access to UXLINK’s internal system and resulted in $11 million in damages.

Whether an attack is model-side or integration-side, the ultimate impact is the same: loss of sensitive information and data, unauthorized access, and potential financial harm to the organization. While the attack vector may differ, the consequences to the organization may be substantial.

Thankfully, a number of the controls that are able to minimize these security risks overlap between both attack domains and will be discussed next.

Security Controls and Solutions

When it comes to AI security, we’re going to break these controls down into two separate groups: technical and administrative. Technical controls are software or hardware-based solutions that are used to protect systems and data. Administrative controls deal with policy, processes, and procedures that focus on the human element of security.

Technical Security Controls
Technical controls are mechanisms that enforce security policies using tools, configurations, and systems that are proactive in preventing successful attacks. There are many controls that can be implemented, but here we’ll briefly discuss some methods that help prevent the types of attacks discussed above:

  • AI Identity and Access Management (AI-IAM): Implement strong operational controls for specific AI models; Ensure that only authorized users and machines can interact with specific models.
  • Model Guardrails and Firewalls: Enforce an AI application layer, such that prompts and outputs are inspected for malicious patterns or sensitive data leaks before they reach the user or the output.
  • Strong Encryption: Before deployment, ensure training datasets and model weights are encrypted at rest and in transit in order to prevent exfiltration during a breach.
  • Require Out of Band Confirmation: Use a separate communication channel to confirm any request involving system access, financial transfers, or sensitive information. This creates a secondary form of verification and an additional layer of protection for your organization.

Administrative Security Controls
Administrative controls provide a foundation of policy, oversight, and governance to support technical controls. By keeping systems updated, logs monitored, and access tightly governed, organizations can significantly reduce the likelihood and impact of AI-based attacks. Here are some actionable steps to take:

  • AI Governance Policies: Establish clear rules for which AI tools are sanctioned and how they can be used. IBM found 63% of breached organizations lacked the implementation of these policies in place.
  • Shadow AI Auditing: Enforce regular scanning on the network for unsanctioned AI usage to bring “shadow” tools under the umbrella of official security oversight and concerns.
  • Understand the Scope of AI: As AI continues to develop, organizations must stay alert to the wide variety of possible threats. It is important to simulate real attack patterns with the idea of current AI tools in mind.

Additional Resources

In this article, we’ve explored the risks associated with AI-based attacks, including common attacks such as prompt injection, credential abuse, and deepfake impersonation. We covered the methods malicious actors use to exploit current security vulnerabilities and detailed both technical and administrative control solutions, such as applying strict security oversight and having AI governance policies in place.

In this final section, we’ll look at a few additional resources to further strengthen your understanding and implementation of AI security best practices.

Next Steps: Securing Your Innovation

The 2025 IBM Data Breach Report makes one thing clear: you cannot secure what you cannot see. Placing an enormous amount of trust in AI systems is not a reliable answer and is impactful for your organization’s overall security posture.

Businesses must also stay up to date on current AI advancements. In Q1 2025 alone, deepfake fraud exceeded $200 million in damages. It is now more important than ever to verify basic authenticity and simulate real attack patterns within your organization. Traditional security assessments must now evolve to include AI prompt injections and governance audits.

Final Thoughts

AI attacks are ever-growing in frequency and sophistication, with real-world consequences for businesses and users alike. As attack surfaces expand through APIs, third-party integrations, and user-generated content, it’s more critical than ever to understand and address both model-side and integration-side risks.

Cybersecurity is not just a technical issue; it’s a business imperative. Organizations must adopt a comprehensive, layered approach to security that protects not only their network and infrastructure but also their end users, data, and applications. By empowering their security teams to have basic oversight and limiting reliance on AI, they can avoid damage to their reputation, maintain customer trust, and prevent significant financial and operational losses.

Ready to close your oversight gap? CampusGuard and its security division, RedLens InfoSec, offer comprehensive IT Security and compliance assessments and penetration testing to ensure your adoption of AI doesn’t become your biggest liability.

Contact us to learn more about how CampusGuard and RedLens InfoSec can help you strengthen your organization’s security posture.

Share

About the Author

Amanda is an intern at RedLens InfoSec and a junior at the University of Wisconsin–Madison, studying Computer Science and Data Science. Her interests lie at the intersection of cybersecurity, artificial intelligence, and quantum computing.

Related Content