AI is not the solution for cyber security, it's the problem.

When Artificial Intelligence (AI) as we know it burst onto the market in the early 2000s, it seemed as though the possibilities for this new technology were limitless. From chess-playing computers to self-driving cars, AI has opened up a world of opportunity for those in the tech space. Automation has removed the need for humans to complete basic tasks, often increasing efficiency, reducing costs and freeing up time to invest in other business functions.

Soon we will be seeing (working) self-driving cars, robot handlers serving you food and AI recruiters finding you jobs...

Well, hopefully not the last one.

Within the cyber security industry, AI developers are optimistic that the technology can enhance defensive systems and reduce the margin for human error. As global cyber-attacks increase in sophistication and magnitude, many businesses see this as the perfect solution to combat cyber criminals.

It is undeniable that there are major benefits to adopting AI into business processes, yet very little is written about the downsides of the technology. Whilst the greatest strength of AI is the computers themselves, in many ways this is also its greatest weakness. At its core, AI means computers that have learnt to perform human-like tasks; but try as you might, you cannot truly teach a computer to be human.

Why should we be worried?

While proponents of AI claim that it can identify and mitigate cyber-attacks faster and more effectively than humans, the reality is that AI is not error-free.

OpenAI CEO Sam Altman believes artificial intelligence has incredible upside for society, but he also worries about how bad or lazy users will handle the technology.[1]

Look at Amazon’s attempt a few years ago to revolutionize recruitment with an AI tool. The system taught itself that male candidates were preferable…and we know where this is going! The gender gap in cyber security is already a key issue in the sector; we don’t need bots making it worse.

So, what are the actual downsides?

Firstly, AI relies on large amounts of data to learn and make decisions, and this data can be biased or incomplete, leading to inaccurate predictions and false positives. This can result in security teams wasting valuable time chasing down non-existent threats or missing real attacks that the AI failed to detect.
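
To make this concrete, here is a minimal sketch in Python – entirely made-up numbers and a deliberately naive detector, not any real product – of how an incomplete baseline produces both a false positive and a missed attack:

    import statistics

    # Baseline collected only during quiet weekday hours -- an incomplete sample.
    baseline_requests_per_min = [40, 42, 38, 45, 41, 39, 43]

    mean = statistics.mean(baseline_requests_per_min)
    stdev = statistics.stdev(baseline_requests_per_min)
    threshold = mean + 3 * stdev  # flag anything more than 3 standard deviations out

    # A legitimate seasonal traffic spike vs. a slow, stealthy attack.
    observations = {
        "black_friday_shoppers": 400,     # benign, but far outside the learned baseline
        "low_and_slow_exfiltration": 44,  # malicious, but blends into "normal"
    }

    for name, rate in observations.items():
        print(f"{name}: rate={rate}, flagged={rate > threshold}")

    # The benign spike gets flagged (a false positive), while the attack that
    # stays inside the learned "normal" band goes undetected.

The maths here is trivial, but the failure mode is the real one: the model can only ever be as good as the data it was shown.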

Secondly, AI systems can be manipulated by cyber criminals, who use adversarial attacks to fool them into classifying an attack as benign, or into misclassifying the type of threat. This can enable attackers to bypass security measures and gain access to sensitive data or systems, completely undetected.
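
Again, a toy sketch – hypothetical word weights, not any real model – showing how an attacker who can probe a naive linear phishing scorer can pad a message with “benign” words until it slips under the alert threshold:

    # Hypothetical per-word weights a naive phishing scorer might have learned.
    WEIGHTS = {
        "urgent": 2.0, "password": 2.5, "verify": 1.5, "click": 1.0,
        "meeting": -1.0, "invoice": -0.5, "regards": -1.0, "thanks": -0.5,
    }
    THRESHOLD = 3.0  # scores above this are classified as phishing

    def score(text: str) -> float:
        return sum(WEIGHTS.get(word, 0.0) for word in text.lower().split())

    malicious = "urgent verify your password click here"
    print(score(malicious) > THRESHOLD)  # score 7.0 -> True, flagged as phishing

    # Adversarial padding: append benign-weighted words until the score drops.
    evasive = malicious + " thanks regards meeting invoice thanks regards"
    print(score(evasive) > THRESHOLD)    # score 2.5 -> False, sails straight through

Real detectors are more sophisticated than a bag of word weights, but the principle scales: if attackers can query the model, they can search for inputs it misreads.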

AI can also create a false sense of security for organizations, leading them to rely too heavily on the technology and neglect other critical security measures, such as employee training, strong passwords and multi-factor authentication.

While AI has potential benefits in cyber security, it should not be viewed as an all-encompassing solution. Organizations should instead adopt a holistic approach to security that incorporates multiple layers of protection and human expertise to ensure the safety of their systems and data.

Even now there is a battle none of us can see up in the clouds. A bit like Star Wars, just with fewer lasers!

Friend or foe?

With 5 years of IT experience behind me, I catapulted myself into cyber recruitment in 2021.

Having spent an inordinate amount of time pressing random buttons and turning computers off and on, I have come across my fair share of automation, failed AI projects and, more personally, badly written PowerShell scripts.

This has given me a good grasp of what needs to be automated and what should never be. Hello, failed database scraper that nearly wiped out 3 years’ worth of retail sales data!

Now here is my tin foil hat hypothesis…

AI will be used much more effectively by threat actors to effortlessly find vulnerabilities that even the best threat hunters or “bug bounty enthusiasts” have missed.

In 2021, SMEs received 5.5x more visits from bots than from real internet users.[2] The worrying trend continues: according to a study conducted by CyberArk (2022), 68% of bots have already had access to sensitive data and assets.[2]

Since then, AI tech has become even more advanced, leaving us with some concerning questions about its evolution.

I know this all sounds very Skynet-esque, but I am sure similar conversations were being had in that universe too!

With this complexity come some big questions around people and AI:

  • how do they fit in with each other?
  • how can we build teams to complement the meteoric rise of AI technology?
  • and finally, how can we equip them for T-800s walking into the data center?

What does AI mean for recruitment?

With the rise in ChatGPT’s popularity, AI has moved into the mainstream and become a popular tool for people in all industries. Lately it has been featuring more and more in my own industry, recruitment – with some estimates suggesting that by the mid-2030s up to 30% of jobs could be automatable.[3]

This means automated job descriptions, AI-driven LinkedIn posts and ever-changing algorithms to find the most suitable candidates. A quick scan of LinkedIn will show you that this is already happening.

In a few years’ time our hypothetical recruiter – we’ll call him Kevin – will post three times a day about nonsense generated by an AI bot. He will churn out generic job descriptions with no creativity, ask candidates to fill out a questionnaire so the robot can decide whether they are “A OK” and, worst of all, send 40 identical messages.

You get my point.

But this does not have to be the case – any good recruiter should be able to hold their own against the rising tide of AI. People come to Trident Search as specialists because we provide the human touch and genuine market insight honed over years of experience.

We’ve found that the key to long-term success is having a personal relationship with every client and candidate we work with, and a detailed understanding of the needs and nuances of every job.

We’re people, and we make a few mistakes here and there, but ask yourself this: would WALL-E have found you your dream job?! I don’t think so!

If you want to discuss this with me more, get in contact; I have way more tin foil hat ideas!

[1] Fortune Magazine, March 2023, “OpenAI CEO Sam Altman warns that other A.I. developers working on ChatGPT-like tools won’t put on safety limits—and the clock is ticking”

[2] ChiefExecutive.Net, September 2022, “A New Danger: Cyber Attacks Are Increasingly Automated”

[3] Tech Jury, January 2023, “19 Statistics About Jobs Lost to Automation and The Future of Employment in 2023”

