Radovan's Blog

Stolen Phone

It finally happened. My phone got stolen, and it caused all kinds of trouble, even more than I had expected.

The theft was not a big surprise. I have been dealing with cybersecurity-related topics for more than 30 years. I knew that this could happen. Being in cybersecurity for such a long time, I thought that I was prepared for it reasonably well. I was not.

It was a pickpocket job. It happened on my way home from a weekend trip, shortly before boarding a shuttle to the airport. It took me some time before I realized that my phone was gone. Luckily, I was not traveling alone, therefore I could borrow a phone to quickly contact my banks and company to lock out all access. However, it turned out it was already a bit late. I had my phone set up for contactless payments, and a significant sum of money was already gone (which I did not know at the time).

Fortunately, the thief was just after the quick money. However, I realized how exposed I could have been if the thief had been more skilled or motivated. I had no access to company email or files from the phone, and I contacted my colleagues to disable my access very quickly, to be on the safe side. However, the phone had access to a lot of personal stuff: social network accounts, personal email, chat applications, transport apps, etc. There is an application for everything these days, and I had a lot of them on my phone.

I have a unique random password for each application. There is not much I could have done to secure my online presence until I got access to my password manager. In fact, I was relatively lucky that my phone was stolen on my way back home. I got home a few hours later, and I started to change my passwords and disable app access one by one. It felt like I had changed passwords all over the Internet that night. However, it scares me a bit to think what I would have done if the phone had been stolen on a longer trip, especially if I had been traveling alone. That surely made me re-think some things.

I'm safe again now, as much as I could be anyway. I thought about writing down a list for my future self (and also for others), while my experience is still fresh. Here is a list of preventive measures, as well as post-incident reactions when a phone gets stolen. I hope this helps.

Even though I'm safe now, there are still several questions that this experience raised, and I have no answers to them. One of them is the problem of the payments. My phone was locked, so how were these payments even possible? It is a bit of a mystery. Stay tuned, more on that later.

If Only AI Could Be Secure

I keep reading things like this all the time: "AI would be such a marvelous efficiency boost, if only it could be made secure."

If only it could be made secure. It is not "only". It is not "just". Securing AI is a huge problem. It may easily turn out that securing an LLM is much more complex than creating the LLM in the first place. It is like trying to secure a 3-year-old with a few boxes of matches, playing in a haystack.

The usual answer of the AI folk is "guardrails". However, that is like trying to secure that 3-year-old in a haystack by having five more 3-year-old kids watch over him.

MCP gateways and "human in the loop" approaches just shift the problem from the vendor to the user, without really solving it. They are mostly an alibi. We know from decades of cybersecurity experience that this does not work at all.

Strict sandboxing is not going to work either. We do not have a good solution for that even for relatively simple scripting languages. Doing that for AI agents is going to be much more complex.

Agentic AI identity is not going to solve it either. The "agent" or "proxy" identity is a big problem on its own, one that has been lurking in the muddy depths of identity platforms for decades. It is not a new problem, as many voices in the identity sphere would like you to believe. However, even if we could solve it, it would not bring us much closer to a solution.

The real solution to AI security needs to be a clever combination of many techniques, many of which we are just starting to explore, building up on foundations that we do not have yet, relying on experience that we are just gaining.

AI security is not "only" or "just" and it is definitely not "simple". Do not expect it anytime soon.

Central Brain of Humanity

There seems to be a lot of misunderstanding regarding GenAI. Overall, the benefits of GenAI are vastly overrated, while the limitations are not clearly understood. Let me digress a bit.

Back in the 1980s, Czechoslovak television broadcast an excellent sci-fi series "Návštěvníci" (Visitors). The series starts in the year 2484, in a utopia supported by the Central Brain of Humanity (CML - Centrální Mozek Lidstva). The Central Brain of Humanity is a supercomputer capable of superhuman intelligence. Its insights have brought peace, prosperity and safety to the humans of 25th-century Earth.

It seems to me that the general public thinks GenAI is some kind of Central Brain of Humanity. Quite surprisingly, even many people with technological backgrounds seem to think about GenAI in a similar way. However, current GenAI is lightyears away from human intelligence, let alone superhuman intelligence. GenAI does not really think. Certainly, it can talk, paint, create music, and do a lot of other impressive things. Yet, it cannot really think.

Large Language Models (LLMs), which are at the core of mainstream GenAI systems, are just sophisticated language processors. The LLMs do not understand what an "orange" is. They do not understand that it can refer to both a fruit and a color. They really understand nothing. All they do is relate the word "orange" to other words, mostly words that they have seen during training. Certainly, if you ask an LLM to explain what an "orange" is, it will (correctly) describe it as a fruit, a color and a tree. However, this answer is not based on understanding. It is based on the contents of dictionaries and encyclopedias that the LLM processed during training. It does not describe "orange" as a fruit, a color and a tree because it understands these concepts. It describes it this way because it has seen these words used together during its training.
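The word-relating idea can be illustrated with a deliberately crude sketch. Real LLMs are transformer networks over tokens, not bigram tables, but the toy below (a hypothetical example with a made-up four-sentence "corpus") shows how purely statistical word association produces a plausible-sounding statement about "orange" with zero understanding involved:

```python
from collections import defaultdict, Counter

# Tiny made-up "training corpus" (hypothetical example text).
corpus = (
    "an orange is a fruit . an orange is a color . "
    "an orange tree bears fruit . the color orange is bright ."
).split()

# Count which word follows which -- a bigram model, the crudest
# possible version of "relating a word to other words seen in training".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_text(word, length=4):
    """Extend a prompt by always picking the most frequent next word."""
    out = [word]
    for _ in range(length):
        followers = bigrams[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("orange"))  # e.g. "orange is a fruit ."
```

The model "knows" that an orange is a fruit only in the sense that "fruit" often follows "orange is a" in its training text. Scaled up by many orders of magnitude, that is still association, not understanding.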

LLMs repeat what they have seen. AI critics like to joke that an LLM is just a glorified autocorrect. That statement is not entirely wrong. LLMs are excellent at talking, which makes an impression. Unfortunately, they are much worse at doing, such as providing insights, information or knowledge. Would you rather rely on a grumpy old expert with a deep understanding of the subject matter, or a gentle smooth-talking performer who has no idea what he is talking about? I guess the answer is very clear. The general public is going to choose the dim-witted smooth operator every time. This is the danger of GenAI.

Current AI is no Central Brain of Humanity. It is a quite limited, biased, hallucinating language processor with very limited transparency and significant environmental impact. However, the LLMs can still be useful, when used correctly. The problem is that it is very difficult to use them correctly. The key is in understanding the limitations of the technology, and resisting its tendency to lead you astray from robust knowledge and facts. However, this is much harder to do than it seems. Many people are going to learn this the hard way. Even more people are not going to learn it at all, to the detriment of us all.
