Hype vs Reality: the offensive application of AI across the cyber kill chain. Part I.
AI generated image courtesy of www.perchance.org

In 2023, cyber-attacks are a pervasive feature of Australian society.

Over the last 12 months, institutions across almost all sectors have been impacted by cyber-attacks, including those in financial services, health insurance and telecommunications. What is concerning is that the scale, nature and sophistication of cyber-attacks are expected to increase. The advent of new technologies such as artificial intelligence (AI) will introduce new threat vectors that adversaries will employ as part of more sophisticated campaigns.

At the same time, the media have tended to over-hype the impacts of these technologies. Internet searches for terms such as “AI” or “Cyber AI” generate an array of imagery ranging from self-aware cyborgs to futuristic landscapes reminiscent of Blade Runner. While such imagery may sell stories, it presents a distorted view of reality. Research on these topics and their application to cyber offence is also limited. While some researchers have focused on the semantics of AI and how mathematical models could be used to advance cyber-attacks, few have examined the application of AI-driven techniques across the Cyber Kill Chain (CKC). Fewer still have presented a solid case that cuts through the media hype.

It has been more than six years since I first wrote about the risk of emotionally intelligent AI. What started as a curiosity became a highly successful article viewed by tens of thousands of readers. Since then, the world has moved on and so has technology. But the cyber threat environment has never been as hostile as it is today. It is timely to revisit some of these themes in a short article series.

The purpose of this series is to re-shape the media narrative and provide a realistic appraisal of the use of AI across the CKC. Over several short articles I will introduce the concept of AI, the Lockheed Martin Cyber Kill Chain, and discuss why AI-enabled cyber-attacks could pose an existential danger to victims. I will then build on this argument by demonstrating where AI-enabled offensive techniques are used by cyber attackers today. In doing so, the discussion will look through the lens of theory and practice and review these techniques against each stage of the CKC. The final article will draw insights on the application of AI-enabled cyber-attacks. The discussion will conclude that while AI-enabled attacks are here to stay, there are several limitations to overcome before AI becomes a truly effective force in offence.

I hope you enjoy the series. I welcome commentary from practitioners, theorists and enthusiasts alike. The comments often form the best part of any article.


Adam Misiewicz is an experienced cyber security consultant and the General Manager of Cyber Security at Vectiq - a Canberra-based services company.