Check Point's Head of Engineering for US East, Mark Ostrowski, says AI is rapidly transforming enterprise operations. He provides frameworks for thinking about AI as it relates to cyber security, delves into how to assess the accuracy of AI-based security products, explains recent advancements in AI-powered cyber security tools, and much more.
How is AI becoming an everyday tool within the cyber security corporate world?
I would answer this question in two different ways. The first way is, from a pure cyber security perspective, AI is a critical component of providing the best threat prevention. Check Point has over 70 engines that give us the ability to have the best prevention in the industry, and almost half of those involve some type of artificial intelligence. But that's not new; that's something that's been going on for many, many years. So that's one aspect.
I think the area in which it's become the most interesting, as of recently, from December of last year onwards, is how much generative AI, things like ChatGPT, has moved beyond that gimmick phase. Now it's a question of: how do we incorporate generative AI into our products, or into our customer service model? How do we take that technology and then make what we do better by leveraging it? And we're seeing it all over the place. We're seeing it, like I mentioned, in getting better customer success, and we're seeing it in relation to creating more accurate data so that we can deliver a better product. And that's really industry independent. So, those are the two things that I've noticed the most in recent years, all the way to the present day.
So, why is data such a core component of any AI technology?
I'm not a mathematician or a data scientist, but AI, from my perspective, was really born from the question, "What do we do when we have so much data?" We're talking about hundreds of thousands, millions or billions of data points on a daily basis. So, when you start to look at why AI is important, and how the math, the algorithms and all of the things that we're talking about have come about, it's because of the vast quantity of data that we have.
The amount of data that we have is really proportional to how connected we are. Let's just take internet security as an example: ten years ago, we had far fewer microphones, cameras and IoT devices. Fast forward a decade and look at how many devices are connected and how many technological advances have occurred. That's why AI is so important: the only way that you can actually process that amount of data is with an artificial intelligence or machine learning approach.
If organizations are looking at various AI-based security solutions, what kinds of engines should they look for?
Let's just look at cyber security from a pure prevention perspective first. When you look at the industry and hear all of the chatter, everybody is saying that they have AI in their product now, just because that's turned into the buzz, right?
What you need to watch for, to really break it down, is how they're actually using AI to deliver better outcomes in the cyber security field. And that kind of goes back to the first question, right? There's a difference between "I'm going to build a generative AI model that helps customers search my website to get better data" versus "How does the company that I'm looking to do business with leverage AI in a way that actually gives me better security outcomes?" And that ties back into the previous question that you asked, around data. So, you factor in the people, you take the data, and you take the math itself, the machine learning models. When you make a decision around who you're going to trust to deliver better cyber security outcomes, they really should have all three of those components, delivering something that can prevent an attack.
And we haven't even talked about how, after you have the AI and machine learning models making decisions, you have to have the infrastructure that can actually block the attack itself, whether that's on your mobile device, on your network, in your cloud instance or in your code repository. Really, when you think about this question, it's about not only having the best AI, the best data, the best people and the best math, but also about how I can take that verdict and actually create an outcome that makes my organization safer. So, I think those are critical components of any decision that anybody would make.
How do solutions providers ensure the accuracy and reliability of AI models?
This is a little bit more of a technical question. When we think about artificial intelligence, consider how it's matured over even just a short period of time. You started with basic machine learning models. Let's take the most common use-case example: Google Images. The reason why you can go to Google Images, type in whatever you want and get thousands of responses is because there was a model that was trained to (metaphorically) say, "Hey, this is what a [strawberry, alien, fill-in-the-blank] looks like. These are its characteristics."
So, every time the model looks at an image, it can predict whether that image matches what I searched for. That's the classic machine learning model: you establish what's called ground truth, and from there the model performs the work without human supervision, recognizing particular images on its own. What's happened over the years is that we've moved from that classic machine learning to deep learning, and then to multiple layers of deep learning, which is neural network capability that really tries to mimic how our brains work. It makes a lot of decisions very quickly, with very high accuracy and precision.
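To make that "ground truth, then prediction" pattern concrete, here is a minimal sketch using scikit-learn's small bundled digits dataset. It is purely illustrative: the dataset, model choice and parameters are assumptions made for the example, not a description of Google's or Check Point's actual systems.

```python
# Minimal sketch of the workflow described above: establish ground truth,
# train a model on it, then let the model label images it has never seen.
# Illustrative only; not any vendor's real image pipeline.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Ground truth: images paired with human-verified labels.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# 2. Training: the model learns what each class "looks like".
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# 3. Inference: the trained model labels unseen images with no human in the loop.
predictions = model.predict(X_test)
print(f"Accuracy on unseen images: {accuracy_score(y_test, predictions):.2%}")
```

The same pattern scales up: swap the simple linear model for a multi-layer neural network and you have the deep learning step described above.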
If you look at the maturity of artificial intelligence in the cyber world and the evolution of leveraging this technology, it gives us the ability to have better outcomes, because we're looking at more things and more layers, and we're able to arrive at more precise outcomes. And again, if you look at ChatGPT, to rewind a bit, think of how much data was fed into that model to be able to give the responses that we see. That accuracy comes from how much data was put in and from the accuracy of the model itself. So, all of these things are intertwined, and together they give you the accuracy that people are looking for.
How do research teams and data scientists contribute to the continuous performance and development of AI models?
I'm not a data scientist, but think about Check Point's approach to this: we've dedicated a lot of really smart people to our research. So, it's not just about "Hey, I have this great algorithm, and I have all of this data that I'm feeding into it, so I'm going to get the result that I'm looking for."
I think that we can look at Check Point Research and how that team has really elevated our ability to provide the best prevention. There's a human element to AI development. There needs to be constant feedback, there needs to be constant evolution, and there needs to be human research, right? Not just the artificial intelligence engines doing the research.
I think that when you tie that all together, it gives you better performance, better accuracy and more relevant data. Because, at the end of the day, we haven't reached the point where machines are taking over, right? Researchers and data scientists are looking at the algorithms, at how to process the data, at how to enrich the data, and at how to take in more and more areas of telemetry. These are decisions being made by very smart people, and that ultimately gives us the results that we're looking for. So, the human dimension of the feedback loop is super important.
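As a rough illustration of that feedback loop, here is a hypothetical human-in-the-loop sketch: the model's least confident verdicts are routed to analysts, their corrected labels are folded back into the training data, and the model is retrained. The synthetic features, confidence threshold and labeling function are all invented for the example; this is not Check Point's actual process.

```python
# Hypothetical sketch of a human-in-the-loop feedback cycle: analysts
# correct the model's least confident verdicts, and those corrections
# are folded back into the training set before retraining.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def analyst_review(samples):
    """Stand-in for human researchers assigning correct labels.
    Faked here with random labels; in practice this is expert work."""
    rng = np.random.default_rng(0)
    return rng.integers(0, 2, size=len(samples))

# Initial ground truth (synthetic telemetry: 0 = benign, 1 = malicious).
rng = np.random.default_rng(42)
X_train = rng.normal(size=(500, 8))
y_train = (X_train.sum(axis=1) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# New, unlabeled telemetry arrives.
X_new = rng.normal(size=(100, 8))
proba = model.predict_proba(X_new)[:, 1]

# Route the least confident verdicts (probabilities near 0.5) to analysts.
uncertain = np.abs(proba - 0.5) < 0.15
corrected_labels = analyst_review(X_new[uncertain])

# Fold the corrections back in and retrain: the "constant evolution" step.
X_train = np.vstack([X_train, X_new[uncertain]])
y_train = np.concatenate([y_train, corrected_labels])
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Retrained on {uncertain.sum()} analyst-corrected samples.")
```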
Is there anything else that you would like to share?
In summary, we've talked a lot about artificial intelligence, obviously. If you think about it on a very large scale, AI has really dominated a lot of the conversations in the media, in the cyber world and even outside of it. It's amazing how widespread the curiosity has become. I'll get questions from relatives that I never would have thought would utter the words "artificial intelligence," and now they're asking, "Mark, should I be doing this?" or "Is this a tool that I should be using?" And I think that's what makes it most interesting. It's become pervasive, really for everybody, in our everyday lives.
We look at things like Siri and Alexa as nice to have around the house, but the fact that AI is so deeply rooted in those types of things is something that people need to consider. The same goes for the cars that we drive: my car is able to recognize traffic patterns and make turns for me, and those things are possible because of strong artificial intelligence.
AI is not only going to become more and more pervasive as the technologies get stronger and stronger, but I also think that there should be some recognition around where the limits should be. That's in the future; that's something that will come later, and I think that we'll be able to throttle it, either negatively or positively, as things develop.
One follow-up question: It sounds like you have some concerns around household AI, like Siri and Alexa. Could you perhaps elaborate on that?
Yeah. Let's just use a very simple example. Think about how powerful generative AI has become in a very short period of time, and then think, from a pure safety perspective in the social world, about your voice, your images, and information about where you go and what you visit. All of this information is now publicly out there, more than it has ever been before.
And now we have this technology that can take a lot of that information and, in a very short period of time, create something or predict something that perhaps we don't want predicted. So, from a pure safety perspective, I think those are things that, as consumers, as fathers, as mothers, as grandparents, we should really think about: how much data do we want to put out there?
Because the reality is that if someone is looking to cause harm or to take advantage, the more data they have, the more acute and severe the attack could be. I think that's the negative side of this. And I say that because in the cyber world, we always like to consider negative outcomes, because we're always trying to prevent attacks.
It's not to say that all of it is negative. With really good AI come really good outcomes too, like safer driving or advances in the medical field. We might see advancements in pharmaceuticals that we may never have otherwise imagined.
So, there are many positive outcomes that could come from this. But I think that sometimes we have to take a step back and think about how we can protect ourselves by not distributing data and unintentionally giving threat actors, or folks who want to do harm, more data than we would like. That's the concern that I have, especially when I look at how pervasive AI has become and how much data is out there. That's where I think we should maybe throttle back a little bit until we understand the guardrails that are going to be put in place, which will ultimately advance our use of technologies like AI.
Source: AI and cyber: everyone, everywhere | Professional Security - JTC Associates Ltd