Americas

Asia

Oceania

Jason Lau
by Jason Lau

2025 Cybersecurity and AI Predictions

Opinion
10 Jan 202511 mins

The cybersecurity and AI landscape continues to evolve at a breathtaking pace, and with it, the associated risks.

Credit: Shutterstock/NDAB Creativity

The cybersecurity and AI landscape continues to evolve at a breathtaking pace, and with it, the associated risks. Snowballing cybercrime costs are compounded by a cybersecurity workforce gap of nearly 4.8 million professionals, as reported by ISC2. Meanwhile, ISACA’s end-2024 State of Cybersecurity Report shows that nearly half of those surveyed claim no involvement in the development, onboarding, or implementation of artificial intelligence (AI) solutions.

This raises a critical question: will AI help close this gap or inadvertently amplify the cybersecurity challenges ahead?

Building on my predictions from 2024 (many of which will still continue to be risks this year), I have identified a selection of prominent threats for 2025 – focusing on operational security risks and the evolving challenges posed by AI. While many noteworthy threats may be omitted, these predictions aim to highlight what I feel are the more pressing concerns shaping the cybersecurity and AI landscape.

1. Are we ready for CrowdStrike 2.0?

Reflecting on the most impactful incident of 2024, there was considerable debate about whether it was a technical failure or a security incident. Regardless, one critical takeaway is the precarious reliance many companies—and even nations—have on single vendors or systems. This dependence heightens the risk of a cascading global denial-of-service event triggered by a single vulnerability. Managing resilience is far from simple; those working on the front lines understand the immense practical and financial challenges involved. Is the solution to invest heavily in complex backup systems and miraculously switch to alternative vendors at the click of a finger, or should we shift focus towards identifying, reacting to, and resolving issues faster? While not trying to be too controversial, perhaps agility in some situations—being able to adapt and fix swiftly—is a more practical and sustainable approach than over-engineering complex redundancy.

PREDICTION: Another large-scale event – similar to what we experienced in 2024 – is almost certain to happen. While it may not be CrowdStrike next time, the incident is likely to stem from another security vendor’s vulnerability. Hackers have likely learned from the Crowdstrike disruption – the domino effect it can cause and that these tools often need deep and broad access to an organization’s network and end-user devices. Expect significantly longer downtime and more challenging patches in 2025.

2. The silent threat of AI browser plugins

AI plugins – while enhancing productivity – often carry hidden risks by bypassing traditional security controls. These vulnerabilities arise when plugins appear to perform their intended functions but also execute covert actions in the background. For instance, in the crypto industry, fake wallet plugins have been used to scam users by capturing sensitive data during digital wallet connections or through clipboard monitoring. With the rise of AI agents, even benign-looking plugins for spellchecking, grammar correction, or generative AI writing may inadvertently expose confidential information or create a gateway for malware. Attackers can leverage these plugins to gain unauthorized access or covertly extract information over time.

Organizations must adopt proactive measures, including rigorous vetting of plugins similar to comprehensive vendor risk assessments (VRAs). From an operational perspective, a stronger defense involves enforcing corporate-managed browsers, blocking all plugins by default, and approving only verified plugins through a controlled whitelist. Additionally, organizations should exercise caution with open-source plugins.

PREDICTION: At the time of writing, it was announced that around 16 Chrome extensions were compromised, exposing over 600,000 users to potential risks. This is just the beginning and I expect this to get exponentially worse in 2025-2026, mainly stemming from the growth of AI plugins. Do you truly have full control of browser plugin risks in your organization? If you don’t, it’s best that you get started.

3. Agentic AI risks: Rogue robots

The growth of Agentic AI—systems capable of autonomous decision-making—presents significant risks as adoption scales in 2025. Companies and staff could be eager to deploy Agentic-AI bots to streamline workflows and execute tasks at scale, but the potential for these systems to go rogue is a looming threat. Adversarial attacks and misaligned optimization can turn these bots into liabilities. For example, attackers could manipulate reinforcement learning algorithms to issue unsafe instructions or hijack feedback loops, exploiting workflows for harmful purposes. In one scenario, an AI managing industrial machinery could be manipulated to overload systems or halt operations entirely, creating safety hazards and operational shutdowns. We are still at the very early stages of this, and companies need to have rigorous code reviews, regular pen-testing, and routine audits to ensure integrity of the system – if not, these vulnerabilities could cascade and cause significant business disruption. The International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST) have good frameworks to follow, as well as ISACA with its AI Audit toolkits; expect more content in 2025.

PREDICTION: Rogue Agentic AI incidents will dominate headlines in 2025, with more and more use cases demonstrating the efficiency gains of properly implemented Agentic-AI workflows. However, expect a few major headlines demonstrating where this has gone very wrong and completely rogue. Let’s hope mechanical robots will not misinterpret instructions and self-rationalize the need to injure humans.

4. The AI Hardware chip war

The mainstream discourse around AI risks often overlooks the foundational importance of hardware, particularly AI chips. These chips are integral to running advanced AI algorithms, but they come with their own set of vulnerabilities and geopolitical risks. Sanctions and supply chain restrictions can impact access to high-performance chips, with adversarial nations leveraging counterfeit or compromised components. In theory, security risks also arise from on-chip controls, where attackers could exploit design flaws to gain unauthorized access or alter computation outcomes.

Recent insights from the Federal News Network reveal how AI chips are increasingly becoming attack vectors due to inadequate firmware protections and in general, the lack of standardization in securing AI-specific hardware leaves critical gaps in security practices. Adding to these concerns, the STAIR Journal highlighted the risks of on-chip AI hardware controls, where backdoor implementations could enable unauthorized remote access, posing severe threats to operational integrity and data security.

PREDICTION: The hardware chip war will escalate in 2025, driving nations and organizations to find alternative and intuitive ways to stay competitive with the tools that they have at hand. We are already seeing this as DeepSeek challenges the big players, with chips and systems at a fraction of the cost.

5. Digital deception: Beyond deepfakes

Digital deception is evolving rapidly, far surpassing traditional deepfakes. Generative AI tools expose vulnerabilities as attackers manipulate systems to create convincing but harmful outputs. For example, AI could be exploited to generate false medical advice or fraudulent business communications, blurring the line between real and fake content. Hidden invisible text and cloaking techniques in web content further complicate detection, distorting search results and adding to the challenge for security teams.

Be careful where some vendors (and maybe your own internal tech teams) are simply bolting on public large language models (LLMs) to your systems through APIs, prioritizing speed-to-market over robust testing and private instance set-ups. Sensitive data may inadvertently flow into training pipelines or be logged in third-party LLM systems, leaving it potentially exposed. Don’t be deceived by assuming all checks and balances have been done.

Meanwhile, advances in text-to-video technology and high-quality deepfakes are making it increasingly difficult for security and compliance teams to differentiate genuine content from manipulated media during Know Your Customer (KYC) checks. While 2024 saw these tools used mostly for humor on platforms like Instagram and X, 2025 will bring significant breakthroughs in deepfake videos – escalating risks for targeted scams, reputational attacks, and fake news.

PREDICTION: The rise of AI-powered digital deception will fuel misinformation, fraud, and scams in 2025 and more in our daily lives. I encourage everyone to create challenge-response secrets with your loved ones, to truly verify the identity of the person you are talking to.  

6. AI regulation: The next compliance challenge

The European Union’s AI Act is set to transform global regulations, much like the General Data Protection Regulation (GDPR) did in 2018. While GDPR focused on data privacy, the AI Act addresses the broader challenge of governing AI systems, categorizing them by risk levels and imposing strict requirements on high-risk applications – including transparency, documentation, and human oversight.

What makes the AI Act particularly impactful is its global reach. Businesses interacting with the EU market must align their AI practices with these rules. South Korea, with its AI Basic Act, is already following suit – echoing the EU’s emphasis on transparency, accountability, and ethical AI use. This marks the start of a global shift towards unified AI regulations. Poorly governed AI go beyond fines, potentially causing systemic failures, discriminatory outcomes, and reputational harm.

PREDICTION: Businesses will face considerable challenges navigating the complexity of the AI Act, much like the early struggles with GDPR. Key issues such as AI ethics, bias mitigation, and accountability will remain ambiguous – creating operational hurdles for legal, compliance, and privacy teams as they attempt to translate regulatory requirements into technical controls. Compounding this is the rapid pace of AI adoption, which will leave many organizations grappling to balance speed with compliance.

7. Signal in the noise: No more secrets?

Hackers are increasingly targeting both synthetic data and machine learning models, exposing vulnerabilities that compromise privacy and intellectual property. Synthetic data – often heralded as a privacy-preserving alternative to real data – can inadvertently reveal underlying patterns or biases if poorly implemented. For example, adversaries might reverse-engineer synthetic datasets to infer sensitive information or inject malicious biases during creation. In parallel, surrogate models are being exploited by querying proprietary AI systems to extract sensitive training data or mimic the original model’s behavior. Research is already taking place on how monitoring characteristics of multiple streams of pseudo anonymised data (and maybe even anonymised data) could potentially allow AI to reconstruct the source PII, with examples like patient re-identification through medical chest X-ray data.

PREDICTION: Expect 2025 to be the year where AI is further used to uncover hidden data from observing characteristics of the data or system. While this may seem vague and far-fetched, there is already talk of this in IEEE’s current edition headlining, “The Race to Save Submarine Stealth in an Age of AI Surveillance”. AI’s ability to navigate the signal in the noise, could greatly accelerate the ability to uncover secrets.

Conclusion: The path ahead for 2025

2025 promises to be a transformative yet challenging year, with AI and cybersecurity set to dominate the landscape. Whether through innovative applications or the natural progression towards Artificial General Intelligence (AGI), 2025 will be marked by both groundbreaking advancements and significant risks. Siloed datasets will increasingly converge, uncovering new truths without the need for breaking encryption—from tracing transaction flows through crypto tumblers/mixers to breakthroughs in healthcare. Imagine identifying early, subtle patterns in seemingly unrelated medical symptoms, providing critical clues for early disease detection. Yet, on the flip side, this same convergence of data will empower hackers to aggregate years of breached datasets they have been harvesting as well as content from the Dark Web, creating highly detailed company profiles for exploitation.

As AI and cybersecurity evolve at an unprecedented pace, the need to experiment, learn, and adapt has never been greater. Understanding these technologies hands-on is essential to identifying both opportunities and risks. To conclude, I’ll borrow the words of Thomas Huxley, a passionate advocate for Darwin’s Theory of Evolution and scientific literacy: “Try to learn something about everything and everything about something.”

In 2025, this advice couldn’t be more relevant – we should “learn everything” about AI. Dive into it, understand its potential, and arm yourself with the knowledge and hands-on skills to stay ahead of its rapid evolution, or be left behind.

Jason Lau
by Jason Lau
Contributor

Jason is a globally respected figure in cybersecurity and data privacy and currently Chief Information Security Officer (CISO) at Crypto. com, overseeing a platform that reaches over 100 million users, and previously a Cybersecurity Advisor at Microsoft. He holds significant positions on ISACA’s global Board of Directors and its Innovation and Technology Committee, Advisory Board for BlackHat MEA, official advisory committee to the Hong Kong Privacy Commissioner’s Office, a member of the Forbes Technology Council since 2019, and World Economic Forum Expert Network. Jason's insights are sought after by leading publications like Forbes, CNN, and The Wall Street Journal and has a passion for the intersection of Artificial Intelligence and Cybersecurity. Jason is a multi-award winning cybersecurity professional and CSO 30 winner three years in a row, and inducted into CSO Online’s Global Security Council.

More from this author

  翻译: