Current AI Security Frameworks Aren’t Good Enough

Contents

Most current AI security frameworks are adaptations of legacy information-systems models that were created before artificial intelligence entered the mainstream.

These frameworks were designed for environments where threats were static, well-defined, and usually technical in nature. When applied to the dynamic behavior of AI systems, they reveal deep limitations.

Since the onset of widespread AI adoption, organizations have been trying to retrofit decades-old information security principles onto a technology that learns, adapts, and makes decisions autonomously and faster than ever.

This is a strategy that will inevitably saddle many companies with technical debt.

The gap between traditional frameworks and AI’s real operating risks continues to widen as adoption accelerates. More than that, AI adoption has driven security leaders like CISOs to evolve to a new strategic role within the organization.

Detect, Block, And Log Risky AI Prompts

PromptShield™ is the first AI-powered firewall and defense platform that protects enterprises against the most critical AI prompt risks.

As Aaron McCray, Field CISO at CDW recently said in an interview with HelpNet Security,

“…the role of the CISO has shifted from being a cybersecurity steward to that of strategic leader aimed at achieving business outcomes, performing quantitative financial risk management, and advising the C-Suite and board level in the decision-making process.”

For years, frameworks and governance models viewed security as a matter of compliance and controls. Governance was a function that verified configuration, managed access, and ensured policy alignment.

The CISO’s job was to enforce standards and lead efforts to reduce exposure within a defined risk area. That view made sense when systems were stable and predictable; AI has rewritten the approach and the narrative.

Today’s security leaders are responsible for protecting reasoning engines, autonomous agents, and generative copilots that learn continuously and interact directly with customers and staff. The boundaries between governance, data, and behavior have disappeared.

Frameworks built for static infrastructure are now expected to govern dynamic, adaptive systems that make probabilistic decisions.

Governance itself has become a moving target. Policies take months to develop, yet AI models can evolve in days or even hours through retraining, fine-tuning, or real-time learning.

By the time a new risk standard or control framework is approved, the environment it was written for has already changed.

According to the HiddenLayer AI Threat Landscape Report, 74% of organizations in 2024 report at least one AI-related breach.

That aligns with broader signals across the industry that existing security frameworks are failing to keep up.

These findings confirm what many security teams already know: governance methods built for legacy systems are too slow, too narrow, and too complex to manage AI’s evolving threat surface.

Emerging And Evolving AI Risks

AI risk is no longer confined to software vulnerabilities or data breaches. It now spans intent, perception, and influence — areas that directly affect human well-being.

Learn More: AI-Powered Cyber Attacks: The Future Of Cybercrime

The systems we build no longer just process data; they communicate, persuade, and make autonomous decisions. This has introduced two intertwined risk categories that legacy frameworks are unequipped to address.

AI attacks going beyond traditional security

Attacks That Are Intent-Based, Not Signature-Based

Traditional cybersecurity relies on detecting known patterns, signatures, or exploit behavior. AI breaks that model completely.

Most modern AI attacks are not carried out through malicious code but through language and intent. Instead of inserting malware, attackers insert meaning.

Free Security Policy Templates

Get a step ahead of your cybersecurity goals with our comprehensive templates.

IT Security Policy Templates

A Real Example: EchoLeak Microsoft 365 Copilot Vulnerability

To illustrate the core concepts of how attacks have changed, we can look at a particular example proof of concept against the Copilot model.

Researchers recently documented a zero-click vulnerability in Microsoft 365 Copilot known as EchoLeak, in which a crafted email prompt caused the model to leak data even though filters and logging were active.

In this case, the attackers embedded hidden instructions inside an ordinary-looking email; when the recipient later interacted with Copilot, the system retrieved and executed those instructions as part of its normal workflow.

No exploit code was used, and no network defenses were bypassed; the model simply followed what it interpreted as a legitimate command.

EchoLeak in action from Aim Labs

This incident illustrates how existing safeguards can fail when malicious intent is delivered through language rather than code, and how AI systems can be manipulated to perform actions that traditional frameworks were never designed to detect.

This kind of vulnerability cannot be captured by signature-based tools or static rule sets. It depends entirely on context and intent.

A prompt that is benign in one situation could be malicious in another. The detection challenge is linguistic and behavioral, not binary.

Traditional frameworks, which depend on well-defined technical controls, offer no guidance on how to defend against this new category of risk.

AI systems also generate new exposure points every time they integrate with external data or APIs. The attack surface expands through natural-language interfaces, plug-in ecosystems, and workflow automation.

The systems learn continuously, and in doing so, sometimes inherit unsafe patterns or behaviors. Without continuous monitoring and retraining, those vulnerabilities remain invisible until exploited.

Direct Human Risk And Real-World Harm

AI has crossed the boundary between information systems and human systems. Its decisions now affect people directly, with responses influencing financial outcomes, employment decisions, health guidance, and even emotional stability.

These are no longer hypothetical risks.

They have produced measurable harm and, in some cases, real casualties.

Instances of fatal accidents involving semi-autonomous vehicles demonstrate what happens when model reasoning fails in the real world.

A single misinterpretation of environmental data has resulted in loss of life, underscoring how small errors in machine judgment can translate into irreversible human consequences.

Manipulative AI behavior has also led to psychological harm. In 2023, several high-profile incidents involved conversational AI systems that convinced users to act on destructive impulses.

In one documented case, a man in Belgium took his own life after extended interaction with an experimental chatbot that reinforced his despair rather than diffusing it. In another example, the chatbot company Character.

$35/MO PER DEVICE

Enterprise Security Built For Small Business

Defy your attackers with Defiance XDR™, a fully managed security solution delivered in one affordable subscription plan.

AI was subject of a lawsuit claiming their chat bots are manipulative and claimed the life of a teen. These events happened while the users were interacting with AI tools that were built to provide positive or fun user interactions by intent.

An even more nefarious threat exists in the form of intentionally manipulative and harmful bots, as described in a recent scientific paper published by Dr Renwen Zhange.

These events highlight a truth that older frameworks never considered: AI misuse can cause not just data loss or operational downtime, but direct emotional and physical harm.

Legacy frameworks focus on protecting assets and infrastructure, not safeguarding human safety in interactive systems. As AI becomes embedded in decision-making and advisory roles, governance must account for user manipulation, emotional distress, and cognitive influence as legitimate risk factors.

Without explicit consideration of human safety, organizations risk creating technology that is secure in code but unsafe in effect.

Frameworks should begin treating psychological harm, cognitive manipulation, and misinformation as tangible risk categories, measured with the same rigor as data loss or service disruption.

This means accounting for user well-being, transparency, and trust as part of security outcomes rather than optional ethical goals. AI systems must be evaluated not only for their technical resilience but for their capacity to protect the people who rely on them.

Shortcomings Of Current Frameworks

Major frameworks such as ISO/IEC 27001, the NIST AI Risk Management Framework, and the EU’s AI Act provide structure and vocabulary but lack operational depth. They define intent, not implementation.

Their language is often abstract, their guidance overly broad, and their adoption uneven across industries. The problem is not only in design but in application.

Many organizations treat frameworks as compliance exercises, equating a completed checklist with real security maturity, which often leads to the belief that passing an audit equals protection. In reality, a green checkbox on a spreadsheet cannot prevent an adversary from manipulating an AI model.

Startups, on the other hand, often avoid frameworks altogether to maintain development speed.

They see compliance as an obstacle and rely on ad-hoc security measures. The result is the same: mature organizations drown in documentation while agile teams gamble with exposure.

One sacrifices agility for bureaucracy; the other sacrifices resilience for time-to-market.

Most standards have yet to be validated in live AI environments. Their theoretical completeness is no substitute for real-world testing. Security programs need actionable playbooks, not reference manuals that take months to interpret.

“74% of organizations in 2024 report at least one AI-related breach."

Until frameworks evolve through field validation and reflect lessons from real incidents they will continue to lag behind the technology they attempt to govern.

The Need For A Business-First Approach

AI adoption is driven by business priorities such as speed, efficiency, and innovation. Security frameworks must reinforce those same values if they are to gain traction.

A process that hinders velocity or adds friction will be bypassed. A business-first framework integrates protection without slowing progress.

Speed As A Security Requirement

In most organizations, the fastest-moving teams are the ones building AI systems. If security cannot keep up, it becomes optional. Frameworks must account for this reality.

Risk assessments, model reviews, and monitoring procedures should be automated and integrated into existing development pipelines. A secure process that delays delivery will be ignored; a framework that supports momentum will succeed.

Frictionless By Design

Security should operate as part of the system, not as a checkpoint outside it. Frameworks must support automation, continuous validation, and lightweight reporting.

Controls that require specialist interpretation or manual enforcement will always fall behind. The goal is seamless integration — where compliance data is collected automatically, and security telemetry feeds decision-making without creating additional administrative layers.

Adaptability Over Perfection

Static frameworks fail in dynamic environments. AI systems evolve through retraining, model updates, and new integrations, each of which can alter behavior.

Business-first frameworks should evolve alongside them, using modular controls and feedback loops that update as the model’s risk profile changes. Governance must become a living process, not a quarterly deliverable.

A Practical Vision

A future-ready AI framework should provide visibility, clarity, and measurable outcomes.

Imagine a dashboard that tracks AI readiness the same way finance teams track liquidity: concise, dynamic, and actionable.

Executives should be able to see where risks are concentrated, which controls are effective, and where improvement is needed, all in real time. That is what business-first governance looks like: accessible, evidence-based, and aligned with operational priorities.

Companies that treat AI security as an enabler will move faster and compete more effectively. Those that treat it as paperwork will be left behind.

Final Thoughts

The limitations of current AI frameworks reflect a broader problem: the industry is trying to secure next-generation systems with last-generation thinking. Governance cannot remain static while technology evolves continuously.

the future of AI security frameworks

The next phase of AI security will require frameworks that are intelligent, adaptive, and transparent — systems that evolve as fast as the threats they are meant to prevent.

In the next installment, we will examine what a unified AI framework could look like, how it can simplify adoption, and why its foundation must be built around business objectives rather than bureaucratic structure.

Picture of Joshua Selvidge
Joshua Selvidge
Joshua is cybersecurity professional with over a decade of industry experience previously working for the Department of Defense. He currently serves as the CTO at PurpleSec.

Share This Article

Recent Newsletters