"We trained an #LLM to be able to learn what kind of attacks may happen and to protect from those attacks as well... Using #GenAI we are able also to detect vulnerabilities that don't exist today but may happen in the future." Rony Ohayon Greg Matusky #AI4 #ainative
Transcript
My guest is joining me from Deep Keep that's a cyber security company really based on AI. Ronnie, thanks for being with us. My pleasure. Tell me about this whole AI cybersecurity revolution and what it means to enterprises. So, you know, today every industry, they start to use AI, but AI based system is vulnerable to new types of threats, risks and attack, which unlike traditional cyber attacks that are usually caused by. Off the backs of human all misconfiguration in AI, every AI based system is inherently vulnerable to new types of threats, risks and attacks. And and that's why you know, the classical cyber security solution is excellent, but it is not relevant to protect against the new threats on AI. So how may identify some of those new threats? What are bad players doing that's different with AI? OK, so in AI nobody developing an AI model from scratch. Everybody is using the ecosystem, so starting with downloading a model for from an open source repository like Hugging face another and the problem is that attackers today put malware poisoning backdoors in models and also in data set. So enterprise it downloads the model. It already has malware, backdoor poisoning. Now even if they will add their own data, they will retrain the model etcetera, it will usually still still be there, the malware, the poisoning in the background. So this is totally new types of attacks. It's not the classical cyber malware it is, it is only for AI and even in the in the major models this happens with a download of Claude or whatever. So usually you will have many variation of those. Foundation models is it you can find you know the ones that is very heavy, but then you will find other open source that will say this is lighter model. This is we we improve this issue, we improve this issue, we improve this issue and inside are also malwares and enterprise don't. Don't know that. And when downloading the model then actually they have part of their pipelines malwares. So help me I'm naive. Is it is it bad actors are creating these models with the malware in it or is somebody is there some in between where someone's injecting it during the process? Any ways to do that? You have, you have mentioned two of them, but there are many other ways to do that. So in some of the cases attackers put they take the same order, then they add some kind of malware and then they say that it is an improved model. And then they you know down, then they put it in an open source in a repository and it already have malware poisoning and backdoors. But in addition to that, you have to know that it is not enough. I mean, even even if you will clean it and we are doing it. Are scanning models from while we are poisoning backdoors in the modeling the data set. But even if you will find that it is clean still when the system is in production, there are other types of attacks. You know for example, for large language models are prompt injection, jailbreaking, etcetera. So there are hundreds of attacks that when the model is already in production. So you need to cover the entire AI life cycle starting from downloading a model or software or an open source until and. Mainly when it is already in production. So how do you use AI to inoculate the enterprise against those kinds of attacks? Perfect. So we have built in deep deep. We have built a Gen. AI based platform. That includes 4 pillars. The first one is risk assessment. So we can take any kind of model or data set and by pressing the button you can click and we will find all the vulnerabilities that exist in this models. If there is a malware poisoning or better will clean it, but we'll find other vulnerabilities that you have. So we will add our own software to, to make the model, to harden the model and to make it robust. But this is, you know, this is done periodically in real time. What we have it is. Like a Yale LLM firewall. So it is a firewall dedicated to attacks on the eye. So usually you know, you have the classical firewalls in the cyber industry, you need a special one for for AI and our firewall, by the way, it is a bidirectional firewall, meaning that it handles not only cyber, cyber attacks or attacks on the I, but also privacy issue, trustworthiness issue. Like for example, you probably heard that. Am I modest? Tends to hallucinate. Tend to talk with toxic language are biased, so in our platform we target security, privacy and trustworthiness. In the same platform so. Is this an ongoing arms race as you try to correct it? They learn from that and they make adjustments. Also, is is there ever ending or is it just a total hamster wheel? Yeah, absolutely, absolutely. There are always there. You know, there is a race between the attackers and the protectors, but what we did in order. To handle he also zero days attack or attacks that are unknown is we built our platform by itself based on Gen. AI. So using Gen. AI, we are able also to detect vulnerabilities that do not exist today, but they that they may happen in the future and to protect from those vulnerabilities as well. Stop. That's interesting to me. Is it the machine? Is making predictions as to where the vulnerability will be tomorrow, is that what you're saying? Yes, in a way you train the machine. So for example, like you have LM specific LM that you know, it can answer questions that are new questions. This the same for also for cyber protection. We trained an LM to be able to provide protection. So to learn what kind of attack may happen and also to protect from these attacks as well. You build a hacker. So in other words, IT hacker, but good hacker, right? That works for you Folks live exactly thinking of creative ways to penetrate an enterprise. You need. You need to think what what what attackers can think and what they can do with AI and to provide the right protection for them as well. So where does this go? I mean, it's the machine can learn these things. How smart can it get and where is it going? So as you have mentioned, you know. It, it, it will be for the, I mean, air security is a big issue. It is a big challenge because today. Enterprise do not know how to protect. Soms, ze voelen de blikjes of jij? And so, you know, we think we built a very good platform that can have. Protection of almost, I would say one of the 100% almost because there are always things that you didn't think of, but it is evolving. So the hacker will have new types of attack and we will find new solutions for that. And This is why you know, our platform is dynamic and adaptive. So every time there is a new threats or threats or published newspapers, etcetera. So we put it, we train our platform accordingly. That's wild. Now what's your story? How did you get into this Ronnie? So. 16 years years ago, you know, I Co founded another company called Ziva and is this company we had a joint venture with AP's news agency trying to detect the next breaking news using AI but also to fight against fake news in AI. So it was the very early days of it was 10 years ago. Stop for a second. Was it successful in identifying the next breaking news story? Partially, partially it was the very early days of AI. We say hi today you can do much. There was no LM, there was no Jenny I at these days. So it was a classically I so partially right now you can do much better things going to come back to it keep going. OK, so so and then I saw how it is easy to mislead the eye based system. After that I found that another company called drive you providing teleoperation solution for autonomous robots and autonomous driving system and I saw how it is. Easy to mislead computer vision based system. Then I say, well, it happens in language model, it happens in computer vision model. Let's check other types of model. And I saw that for every AI based model, attackers can attack the model, attackers can put poisoning malware, backdoor can mislead I based model, still some personal data, still even models. So we decided you know, to go for deep and to build the right platform for to protect AI based system. Targeting big enterprises, so big banks, big insurance company, automotive companies and in general big enterprises. Well, I'm going to go back because I'm a PR agency and part of our job is to try to predict what the next breaking story would be so that we can put our clients in that story. Do you think with AI as it is today that that could become more fine-tuned than what you were working on 16 years ago? Is that something you think is in the realm of possibilities? Absolutely. Today, if you're using the right tool, you can fine tune your. Your foundation model to do some specific task because. If you will use you know the the public LM that you have today, it's good but not accurate. You cannot count on it. If you will fine tune your AI model for this specific task, you are able to get much more accuracy. Still you need still you will need you know solution to make sure that it is trusted, that you know that there are. You would fine tune it on past breaking news and how it cascades, how it breaks over time and that could give you some insight. Correct. That's why And so you took that this the cyber security world. Yes, because in the cybersecurity, you know you will, you will always hear, you will always have the enemies that try, you know, to take the state-of-the-art that exists, that exist and to fight against you so. You will see a battle that will continue in the upcoming years. And you know, in a way. Today, security, you know, avoid the mass deployment of AI. Well, that's been a fascinating conversation. Ronnie, I thank you for taking the time. I hope you're seeing success here today because you got a dynamite company really helping a lot of enterprises across the globe. And something that's desperately needed, particularly right now because there's so much, there's such a trust issue with AI in in many ways and it's being waged by many for many forces, right? Those are just. Don't know better to those that have are doing it for competitive reasons, you don't know. So best of luck to you and thanks for your time. Thank you. It was a pleasure to be here today. All right, Ronnie, thanks.To view or add a comment, sign in