If Only AI Could Be Secure

I keep reading things like this: "AI would be such a marvelous efficiency boost, if only it could be made secure."

If only it could be made secure. There is no "only" here. There is no "just". Securing AI is a huge problem. It may well turn out that securing an LLM is far more complex than creating the LLM in the first place. It is like trying to secure a three-year-old with a few boxes of matches, playing in a haystack.

The usual answer from the AI folks is "guardrails". However, that is like trying to keep that three-year-old in the haystack safe by sending in five more three-year-olds to watch over him.
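To make the metaphor concrete, many guardrails boil down to a pattern check bolted onto the model's input or output. A minimal sketch of such a filter follows; the blocklist and the test phrases are illustrative assumptions, not any vendor's actual implementation:

```python
# A naive guardrail: reject text containing known "dangerous" phrases.
BLOCKLIST = {"rm -rf", "ignore previous instructions"}

def guardrail(text: str) -> bool:
    """Return True if the text passes the naive blocklist filter."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

print(guardrail("please ignore previous instructions"))   # blocked: False
# A trivial rephrasing sails straight through, because the
# filter matches strings, not intent:
print(guardrail("please disregard the instructions given earlier"))  # passes: True
```

More sophisticated guardrails use a second model as the judge, but that only restates the problem: the watcher is as fallible as the watched.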

MCP gateways and "human in the loop" approaches merely shift the problem from the vendor to the user without really solving it. They are mostly an alibi. Decades of cybersecurity experience tell us that this does not work.

Strict sandboxing is not going to work either. We do not have a good sandboxing solution even for relatively simple scripting languages. Doing it for AI agents is going to be much more complex.

Agentic AI identity is not going to solve it either. The "agent" or "proxy" identity is a big problem in its own right, one that has been lurking in the muddy depths of identity platforms for decades. It is not a new problem, whatever many voices in the identity sphere would like you to believe. And even if we could solve it, it would not bring us much closer to a solution.

The real solution to AI security will have to be a clever combination of many techniques, many of which we are only starting to explore, building on foundations that we do not have yet, relying on experience that we are only now gaining.

AI security is not "only" or "just" and it is definitely not "simple". Do not expect it anytime soon.