Risk-aversion: The AI roadblock

As AI advances, businesses must rethink what it means to ‘play it safe’. Inaction is now riskier than action, with AI enabling leaders to adopt a smarter approach to risk. 

This year, Australian businesses have moved from dipping their toes in the waters of artificial intelligence to diving in. We recently gathered some of Australia’s top board directors for the 2024 EY Technology Governance Program. What did we see? A major shift from ideation to acceleration, as some companies experiment with AI and others risk falling behind. 

FOMO in the fast lane 

The 2024 Technology Governance Program cohort is coalescing around a common theme: an urgency to move faster. There is a palpable fear of missing out. Leaders who haven’t yet embraced AI are starting to recognise the risk of inaction.  

Companies that hesitate risk falling behind as their competitors accelerate their technology capabilities. The longer they wait, the further behind they will fall. 

For many companies, the biggest concern isn’t the competition they know, but the single-person startup that could disrupt their entire business model. In 2024, OpenAI’s Sam Altman predicted that a billion-dollar unicorn could be run by just one person using AI. In 2025, this could be the new reality. 

Smaller, smarter, self-driving 

Here’s what had people’s jaws on the floor: agentic AI. Imagine a digital assistant on steroids – not just answering questions or completing tasks, but taking initiative, solving complex problems and adapting to new challenges. Imagine no more.  

Companies are beginning to use AI agents to autonomously manage supply chains and call centres, optimise inventory, monitor patient health in real-time, design scientific experiments, analyse data, generate new hypotheses, track market trends and make real-time decisions. 

The real surprise? Agentic AI doesn’t need massive, complex systems. In fact, smaller, focused initiatives often deliver the best results. Success lies in a laser-sharp focus on customer and business outcomes. Companies are using the trigger of AI implementation to reimagine their technology architecture to build leaner, more effective tech stacks that drive higher quality and efficiency. 

Safety first, AI second 

One of the most intriguing discussions centred on how AI governance could redefine the concept of ‘safety first’. 

For many companies, especially those in risk-averse, safety-first industries like construction, AI adoption has been slower off the mark. However, AI can be a powerful ally in improving safety, uncovering hidden risks and streamlining processes so that critical information isn’t lost or overlooked. 

For instance, EY teams recently worked with a client to assess more than 1.7 million dormant documents containing sensitive information. Traditional search methods – such as keyword and rules-based systems – couldn’t quickly identify risky files. Meanwhile, the rise of AI-powered tools means old files and repositories can suddenly be surfaced, unintentionally exposing sensitive data and posing significant financial and reputational risks. To address this, we developed a cloud-based solution that automatically scans the files, identifying 60% more at-risk documents than traditional methods could and helping the client take swift action to quarantine the most sensitive data.

In another project, we helped a client investigate and analyse accident reports from a vast repository of files, many long forgotten due to corporate amnesia. AI made it possible to sift through these records efficiently, bringing to light important patterns that human memory and traditional methods had missed. 

Trust and technology in a new social contract 

Make no mistake, the social contract surrounding AI is evolving. Australians are increasingly receptive to the idea of AI, but they have legitimate concerns about data security, privacy and the broader impact on society.  

The 2024 EY Australian AI Sentiment Report reveals a generational divide: younger Australians are embracing AI, while older generations are sceptical, expressing ethical and job security concerns.  

The clear takeaway? Don’t push people too hard, too quickly. Your pace is limited by two factors: your employees' skills and your customers' readiness to embrace AI. The response must be inclusive, human-led and safety-first. 

Resist the resistance to change 

Technology is advancing so rapidly that many are struggling to keep pace. Go master Lee Se-dol retired after his defeat by Google’s AlphaGo AI in 2016, arguing that “there is an entity that cannot be defeated”. Chess grandmaster Garry Kasparov, beaten by IBM’s Deep Blue in 1997, declared himself “the first knowledge worker whose job was threatened by a machine”.  

But this is not the time for brilliant minds to bow out. In fact, it is time to lean in. 

Change isn’t a force to endure – it’s an opportunity to evolve. When used responsibly, AI isn’t a threat but a powerful tool that can expand our cognitive capacity. Leaders who embrace and understand new technologies, like those taking part in the EY Technology Governance Program, can accelerate their organisations and the human potential within. 

This article was co-written and shared by my colleagues Lisa Bouari and Jenny Young as part of a series of short articles complementing the 2025 EY Technology Governance Program. Read the first instalment here: Hitting the reset button on tech investment, and reach out to talk to me about Tech Governance for your organisation. 

The views expressed in this article are the views of the author, not Ernst & Young. This article provides general information, does not constitute advice and should not be relied on as such. Professional advice should be sought prior to any action being taken in reliance on any of the information. Liability limited by a scheme approved under Professional Standards Legislation. 
