Trusting machines to defend against the humans | BCS

An Advanced Persistent Threat has successfully installed malware on one of the development servers in your network. Maybe one of your engineers clicked on a phishing link? Maybe the attackers broke in through a vulnerability in your firewall? Maybe it's an insider who snuck in a USB stick loaded with the program?

That doesn't matter now. All you can think of is your intellectual property. All of the code you have invested thousands of hours and millions of pounds into is on those servers. You scramble to put together a team to investigate. Meanwhile, the attackers start looking through all that valuable code on the server.

You desperately try to identify the compromised machine and shut it down. You struggle to find it. Should you just pull every plug now? The disruption would cost a fortune, effectively leaving all of your 135 developers unable to work. Meanwhile, the attacker silently disappears back into the internet. They achieved their objective.

There are inherent limitations when it comes to securing a network using a human security team. People are expensive. Salaries are almost certainly the largest chunk of your budget, because you pay more for skilled people who know the current state of the threat landscape and can adapt as it shifts.

They must also sleep, take holidays and sometimes fall ill. 24/7 monitoring is key to ensuring you are protected from attackers who are never off the scene, but achieving this with a human team is prohibitively expensive for many organisations. And there is still a risk of something being overlooked or an undetected insider threat.

In the world of security, defenders are at a distinct disadvantage. In our new world, we face an avalanche of increasingly sophisticated threats. The devices on our corporate networks are increasingly heterogeneous and may not even be entirely managed by us.

In the face of all that, we have to be secure all of the time. From John in Accounting, who needs to avoid clicking on that funny-looking link, to Sara in development, who has to mitigate SQL injection vulnerabilities in her code. Threat is persistent and pernicious.

Often, attackers can be inside a system without your knowledge for months. In 2020, a supply chain attack on SolarWinds Orion (dubbed Sunburst) affected at least 200 organisations worldwide. Most notably, attackers had access to the systems of the US federal government for eight to nine months.

While the idea that an attack could go unnoticed is horrifying, there are steps that can be taken to mitigate risk. In the world of cybersecurity there is a new concept emerging which aims to support organisations in their fight against existing and emerging threats: defence through machine learning.

Machine learning (ML) is already revolutionising many industries, and is starting to become more prevalent in the cybersecurity industry. The key question we hope to answer is why?

Why should you use ML-based solutions in your security management? And why should you choose them instead of or alongside more traditional solutions?

As you type, user inputs from a keyboard are transferred over a wired or wireless connection, decoded and mapped to a specific letter. All within milliseconds. Computers are astonishingly fast, and can make decisions at the speed of light.

Humans find traversing and analysing large data sets laborious, and sometimes impossible. We are great at being creative and solving problems, but computers are way better at maths. This is useful to apply in cybersecurity, because we can hand the tedium of searching our log files or network traffic over to ML.

Then, once an anomaly is detected in the data, we can hand it back to a person who can investigate it further and determine what actions need to be taken. This idea is called anomaly detection, and was originally proposed for application to Intrusion Detection Systems (IDS) in 1986 by Dorothy Denning, an American security researcher.
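The core idea is simpler than it sounds: establish a statistical baseline of normal behaviour, then flag observations that deviate sharply from it. A minimal sketch, using a z-score over hourly failed-login counts (the threshold and the data here are illustrative assumptions, not part of any real IDS):

```python
from statistics import mean, stdev

def find_anomalies(counts, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hourly counts of failed logins; the burst in hour 5 stands out from the baseline.
hourly_failed_logins = [3, 2, 4, 3, 2, 80, 3, 4, 2, 3, 3, 2]
print(find_anomalies(hourly_failed_logins))  # → [5]
```

Real systems build far richer profiles (per user, per host, per protocol), but the shape is the same: learn "normal", then surface the outliers for a human to investigate.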

In more modern applications of ML to cybersecurity, decisions can be made by the computer in order to provide an instant response to anomalies.

For example, if we detect that credentials belonging to an employee based in a London office are suddenly being used from a residential IP address in Kolkata, something fishy is probably happening. In response to this anomaly, we could automatically shut down the connection and block the user before the attacker tries to escalate privileges.
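An automated response like that can be sketched in a few lines. Everything here is a hypothetical stand-in: `lookup_country` for a real GeoIP service, the per-user baseline for a profile the system would learn over time, and the returned action for whatever your session manager actually does:

```python
# Learned baseline: the countries each user normally connects from (assumed data).
USUAL_COUNTRIES = {"alice": {"GB"}}

def lookup_country(ip):
    # Stand-in for a real GeoIP lookup; uses RFC 5737 documentation addresses.
    return {"198.51.100.2": "GB", "203.0.113.7": "IN"}.get(ip, "??")

def check_login(user, ip):
    """Return 'block' if the source country differs from the user's baseline."""
    country = lookup_country(ip)
    if country not in USUAL_COUNTRIES.get(user, set()):
        return "block"   # sever the connection and alert an analyst
    return "allow"

print(check_login("alice", "198.51.100.2"))  # → allow (familiar UK address)
print(check_login("alice", "203.0.113.7"))   # → block (unexpected location)
```

The point is not the lookup itself but the division of labour: the machine applies the learned baseline instantly, and only the blocked anomaly lands on a human analyst's desk.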

This, of course, could be done by a person looking at graphs and log files, but in a large organisation (or a small one with a large IT inventory), you're going to need a lot of people. The key thing to note here is that we aren't using the traditional approach of defining and detecting misuse; we're constantly analysing data to define what can be considered normal, and then detecting things that significantly differ from that.

This is a really great advantage to applying ML techniques to cybersecurity.

ML is designed to adapt. That's the great thing about it, and why its use is becoming so widespread: from learning about user activity to tailor content (think Netflix and YouTube), to identifying plant species and performing speech recognition.

