Saving companies from cyber scammers: How Julius Muth is using good AI to fight bad AI
Instead of boring online trainings, Company Shield sends simulations of real-world scams to its clients on a continuous basis.
Everybody knows a victim of a scam. Thanks to AI, scams are only getting more pervasive and more efficient.
Can good AI triumph over bad AI? Company Shield, one of the newest ventures we’re working with here at Merantix, is putting that to the test.
Julius Muth and Tom Müller met while working at German software unicorn Celonis, where Julius worked in sales and Tom in engineering. Based in the company’s Madrid office, Julius and Tom began discussing an opportunity for a new kind of AI-enabled cyber security employee training. They launched Company Shield, which works with clients to continuously train their staffers with automated, AI-generated attacks that are based on real-life scams.
We at Merantix recently teamed up with the young venture on its path towards becoming the next great cybersecurity giant. I sat down with Julius to discuss how AI is changing cyber security and the corporate threat landscape.
Our edited conversation is below:
How does Company Shield work?
AI technologies allow for super sophisticated attacks against the human factor: fake phone calls, fake video conferences, fake invoices, or combinations of them. I recently saw that some scammers even used fax again, which is quite funny. Employees are the weak spot, and with new AI technologies it’s even harder for them to spot the intruder.
What we do at Company Shield is protect exactly this human layer, not with lengthy online trainings but by sending simulations of real circulating scams through all kinds of communication channels. For example, a couple of weeks ago the Ferrari CEO was imitated as part of a really good scam. Just moments before the targeted employee transferred the requested money, he got a little suspicious and asked the caller about something very personal: a book the CEO had just recommended to him. The whole scam suddenly became obvious, saving Ferrari from painful financial damage. And it’s exactly stories like these that we replicate in our training.
You showed us an amazing and scary example at a recent Merantix team lunch, where you used my “voice” to ask for an urgent invoice to be paid. I would imagine this is a compelling sales technique when you’re pitching Company Shield to potential clients: you can actually demonstrate the attack in the CEO’s own voice.
We need to be a little bit careful, because under the EU AI Act you’re not allowed to create a voice model without someone’s approval, even if the training data, such as YouTube videos, is publicly available. But of course most of the prospects we talk to give us permission on the spot, allowing us to showcase such attacks to them. Reactions to a live showcase vary, but they all confirm that a solution like ours is really needed in today’s world.
Cyber security training for big firms is kind of like “box checking”: everybody clicks through as fast as they can to get it over with. You’ve used the “hot stove” analogy around here before: that the best way to actually train someone is to fool them when they least expect it and then train them in the immediate aftermath, right?
Exactly! Click-through online trainings, designed to let every employee pass so a compliance box can be checked, are a waste of time and don’t protect companies from real cyber attacks. People have realized in the last few years that a simulation-based approach works a lot better: you test employees in more realistic situations and, most importantly, on a continuous basis. This is where we see the “hot stove” effect. Right in the moment when employees fall for one of our simulations, we have approximately 60 seconds of their heightened attention to teach them how to deal with this type of attack.
AI is making attacks more sophisticated, but how is it also enabling better defense?
That can happen in many ways. On the infrastructure level, there are things like better anomaly detection. On the human layer you also have a lot of possibilities, for example email protection that can say, “This looks 80% like a common scam that’s out there.” There are also a lot of AI detection tools to verify whether some content was generated by an AI, for example whether a voice belongs to a human being, by checking for frequencies that the human ear can’t hear but that a machine can. You can detect the same for deepfake videos, and all of these technologies will surely be used to fight scams as well. But as of now, hackers have the advantage: they can experiment with these new technologies while defense tools are not yet widely deployed.
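The ultrasonic check Julius describes can be sketched in a few lines. This is a toy illustration only, not Company Shield’s method: the 20 kHz cutoff, the function name, and the synthetic clips are all assumptions for demonstration. The idea is simply to measure how much of a clip’s spectral energy sits in bands the human ear cannot hear.

```python
import math

def ultrasonic_energy_ratio(samples, sample_rate=48_000):
    """Fraction of spectral energy above 20 kHz, via a naive DFT.

    Toy illustration: real detectors use far richer spectral and
    learned features, but the core idea is the same -- inspect
    frequency bands the human ear cannot hear.
    """
    n = len(samples)
    total = ultrasonic = 0.0
    for k in range(n // 2 + 1):  # real input: bins 0 .. n/2 suffice
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        power = re * re + im * im
        total += power
        if k * sample_rate / n > 20_000:  # bin frequency in Hz
            ultrasonic += power
    return ultrasonic / total if total else 0.0

# Two synthetic 10 ms clips at 48 kHz: a clean 1 kHz "voice" tone,
# and the same tone with an inaudible 21 kHz component mixed in.
rate, n = 48_000, 480
t = [i / rate for i in range(n)]
clean = [math.sin(2 * math.pi * 1_000 * x) for x in t]
hissy = [s + 0.5 * math.sin(2 * math.pi * 21_000 * x) for s, x in zip(clean, t)]

print(ultrasonic_energy_ratio(clean))  # ~0.0: no energy above 20 kHz
print(ultrasonic_energy_ratio(hissy))  # ~0.2: flagged band carries energy
```

In practice one would use an FFT library and trained models rather than a hand-rolled DFT; this only shows the band-energy idea behind such checks.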
Is there a particular AI-driven scam that seems to be gaining steam in the corporate world?
A very prominent example was the fake video conference scam at Arup, which cost the firm $25 million earlier this year. Other scams I see evolving are often HR-related, because non-IT staff in HR departments handle a lot of file attachments, and that’s one way hackers get their malware installed on company computers. Another AI-powered scam I have now seen multiple times is the deepfake phone call asking for payment of an outstanding invoice.
That’s the one you simulated for us at Merantix.
Correct. It’s pretty much always the same pattern. If you’re targeted, you are a person in charge of transferring money. You receive a call from your superior, in most cases the CEO, saying that there is an outstanding invoice that has not been paid. The attackers are well-informed and use very realistic invoice scenarios based on public data. Sometimes the call only reaches voicemail, but the “CEO” never has much time and always creates urgency. In some cases employees fall for it and actually wire the money to a fake bank account. The damage is not always in the range of millions, but €100,000 or €200,000 also hurts the affected companies.
What’s been the biggest challenge growing Company Shield so far?
We have a huge pile of work: new ideas, customers to follow up with, and interesting trends within cybersecurity, enough to keep us busy for the next 100 years. Unfortunately we are still a small team, so we need to constantly make sure that we focus on the right things, which is very hard to do, and we don’t always achieve it 100%.
What impact do you hope your venture will have?
We see our impact not only at the company level but also at the level of private individuals. We really hope to equip people with the necessary knowledge of how these AI technologies can be used against them, so that we can reduce the damage cybercrime does in our world.
What feedback from clients is most helpful for improving your offering?
Many things! Ultimately, we want to take as little time from our users as possible while achieving the maximum learning effect, and with it maximum protection. This makes us very dependent on user feedback about the right difficulty, the optimal frequency of our cyberattack simulations, and so on. And of course it is super helpful to hear about any cyber attack that recently happened, as many of them are still not publicly disclosed.
Thanks very much for the chat! Check out Company Shield here.