What Are the Opportunities and Challenges of AI in Cybersecurity?

What Are the Opportunities and Challenges of AI in Cybersecurity?

Artificial Intelligence (AI) has already had a huge impact on cybersecurity, an impact that will only become greater as the technology develops. It is sometimes thought of as a double-edged sword: on one hand, AI helps cybersecurity professionals by automating tasks, analyzing data, and increasing the speed and efficiency of identifying vulnerabilities—as shown with its success in distinguishing between phishing and legitimate communications and websites. On the other hand, malicious actors use AI to design sophisticated cyberattacks and disinformation campaigns in a way that was not possible until now. Furthermore, AI systems themselves introduce new vulnerabilities, making cybersecurity an ever-more complex field.

We recently spoke to three cybersecurity experts and asked them to share their thoughts on the opportunities and challenges of the developing technology: Mihoko Matsubara, Chief Cybersecurity Strategist at NTT, Shinichi Yokohama, CEO of NTT Security and Group Chief Information Security Officer at NTT, and David Beabout, Global Chief Information Security Officer at NTT Security Holdings.

Mihoko Matsubara (Chief Cybersecurity Strategist, NTT):

As far as opportunities are concerned, AI can help cybersecurity defenders automate repetitive tasks and expedite their research. This is crucial, as multiple reports repeatedly point out burnout and retention issues. The research group Gartner recently commented that it expected nearly 50% of cybersecurity leaders to change their jobs, and 25% to switch to completely new roles due to stress by 2025.

The new technology may help with that. The 2023 “Inside the Mind of a Hacker” report found that 64% of responding cybersecurity professionals believe generative AI increased the value of their cybersecurity research. The respondents have already started to take advantage of generative AI to automate tasks, analyze data, and find vulnerabilities.

NTT Security offers encouraging news to people tackling the ever-increasing number of phishing websites. It used GPT-4 to see if it can accurately distinguish between 1,000 phishing and 1,000 legitimate websites and the results were astonishing: the accuracy ratio was over 98%. This type of AI or generative AI adaptation will rapidly empower cybersecurity defenders.

There are challenges, however. Malicious cyber actors have already been using artificial intelligence and generative AI to launch cyberattacks and disinformation campaigns. The software company SoSafe found that hackers can create phishing messages at least 40% faster with generative AI. Research appears to suggest that generative AI can lower the bar for malicious actors to create phishing websites or messages.

Furthermore, attackers are now using fake AI-generated images or voices. In a recent McAfee survey, 77% of AI voice scams caused people and companies to lose money. When a fake AI-generated image of an explosion near the Pentagon went viral on social media in May 2023, it led to a ten-minute-long drop in the stock market.

AI isn’t just being used to steal money. New Hampshire residents received AI-generated robocalls imitating U.S. President Joe Biden’s voice to try to convince them not to vote in January 2024. As this year will see multiple important presidential and national election campaigns around the globe, the technology will no doubt be used to influence campaigns in those countries.

Shinichi Yokohama (CEO, NTT Security, and Group CISO, NTT):

There are certainly opportunities and cybersecurity professionals are already using AI for smarter protection. It can automate complex processes for detecting and responding to threats faster and more efficiently than humanly possible.

AI algorithms can analyze vast amounts of data to identify patterns and anomalies that may indicate a security breach, enabling proactive threat detection. Allied to that, machine learning is able to adapt over time, learning from new threats and adjusting its detection mechanisms accordingly. This continuous learning process helps in predicting future attacks based on past behaviors.

What’s more, AI can manage and secure identities, authenticate users, and encrypt sensitive information, strengthening an organization's cybersecurity posture against an evolving landscape of cyber threats.

Along with opportunities, there are also challenges! Here’s an example. A number of governments have been warning about vulnerabilities in AI systems and pointed out that deceptive input can misguide machine learning models to extract information about the characteristics of their systems. Such adversarial machine learning includes prompt injection and training data poisoning. In December 2023, Chevrolet's AI chatbot was tricked into selling a new car for one dollar.

Attackers use AI for sophisticated attacks and this is a major challenge. Perhaps an even greater challenge, however, is the very nature of AI itself and how it relates to IT systems. With the great power of AI has also come new and increased vulnerabilities. The U.S. National Institute of Standards and Technology, and the Cybersecurity and Infrastructure Security Agency have both warned about this problem.

David Beabout (Global Chief Information Security Officer, NTT Security):

I’d like to give my answers specifically with regard to Generative AI (GenAI).

The most common use of GenAI involves generating outputs based on user inputs. However, as GenAI becomes more integrated into business operations, its role as a thought partner in supporting task orientation and decision-making will grow in importance.

Cybersecurity professionals, facing budget cuts and staff reductions, are now handling a wider array of responsibilities. In such environments, GenAI can assist individuals in mentally transitioning among various tasks, orienting towards priorities, and identifying potential issues before diving into fast-paced work. By leveraging GenAI, organizations can reduce errors and enhance productivity by supporting the "human in the loop." This, of course, presupposes that the GenAI model in use is reliable and has been verified.

Such an adoption of GenAI can offer significant competitive advantages in the development of new products and services, as well as in increasing the adaptability of cybersecurity service providers. Discussions among Chief Information Security Officers in a confidential session during the summer of 2023 highlighted that a 30-day delay in fully integrating GenAI into business operations could equate to a three-year competitive gap between two similar organizations. More critically, if the lagging company were to adopt GenAI later, they would likely struggle to match the pace of a full adopter. The potential for gaining a competitive edge is considerable.

With regard to the challenges that come with AI, as GenAI becomes more embedded in business operations, its role in sustaining ongoing activities will be increasingly critical. Ensuring the resilience of these systems and the ability to recover them swiftly will become crucial aspects of an organization's business risk management and cyber resilience. This is typically a component of the cyber professionals' business continuity and disaster recovery planning.

Current GenAI models are large and technologically heavy, making them difficult and slow to deploy. If a GenAI model were compromised or became unusable, having the capability to quickly restore a "clean" version would be a vital part of cyber planning and coordination—a consideration that is not widely acknowledged today.

Shinichi Yokohama

CEO - NTT Security, Group CISO - NTT Corporation

9mo

2024 will be marked as a year when AI goes into the center of corporate business management.

To view or add a comment, sign in

Insights from the community

Others also viewed

Explore topics