AI and the Future of Governance
Key Insights from this Article
The Subject
There has been much ink spilt on creating frameworks and regulation to guide the responsible use of AI. Yet, arguably, industry is no clearer on the right approach. And it's no wonder, given the mind-boggling proliferation of frameworks out there (the list below was generated with the help of ChatGPT, so it may not be complete; please treat it as illustrative).
Global/Regional Frameworks (Illustrative List):
Singapore Frameworks/Laws (Illustrative List):
AI Singapore Governance Roundtables
Even as regulators make a valiant effort to stay ahead, many issues fall within grey areas. Regulation is also only one role that government plays: governments are also enablers and users of tech.
I've been privileged to be part of a series of industry roundtables moderated by Simon Chesterman and Jungpil Hahn from AI Singapore. These roundtables have brought together big tech, legal professionals, regulators and think tanks. None of the issues discussed has easy answers (see the image below for past topics; you can access the summary reports here). Sometimes there are no clear conclusions, but simply putting everybody around a table to understand each other's perspectives has been a valuable exercise.
Human-Centered Approach to AI for Finance and Accounting
We published! At the risk of introducing yet another framework, the Center collaborated with the Institute of Singapore Chartered Accountants (ISCA) on a framework for taking a human-centered approach to AI, which includes a handy question guide (see below) for finance and accounting leaders as they embark on the journey of adopting AI. We hope this framework helps leaders consider the important questions and create a supportive environment for people in the firm to develop a constructive relationship with the new tech.
Read the full article here:
No easy answers
The interview featured this week takes a step back to consider government's role when it comes to emerging technologies, and the important issues we need to be asking about.
The Guest
Aaron Maniam is one of the leading thinkers in Singapore, with more than 20 years of experience in the Singapore government. Currently, he is a Fellow of Practice & Director of the Digital Transformation and Education Programme at Oxford University. He is also Co-Chair of the World Economic Forum's Global Future Council on the Future of Global Technology Governance.
Aaron has been a friend of the Center for the Edge since before we set up shop in Singapore in 2019. He and Duleesha (who founded the Center in Southeast Asia and co-leads it with me) struck up a friendship when Duleesha took him on an edgy tour of Silicon Valley while still based at our San Francisco Center for the Edge.
The Interview
Q1 What is the government’s role in the adoption of emerging tech?
Aaron: I think one of the most important conversations we're having is how to balance three particular aspects of how government engages with technology. One is regulatory: what governments need to do to minimise the risks and the harms involved. Another is how the government enables and stewards technology, so that the right opportunities also get used and exploited, where that's possible. And the final one is how governments can be good users of tech. That's what public goods used to be about, right? Hundreds of years ago, those were things like roads and lighthouses, and national security and defense. Today, it's the public infrastructure on which digital technology gets built.
One of the most important conversations we're having is how to balance three particular aspects of how government engages with technology: as regulators, as enablers, and as users.
Q2 How can governments balance responsibility vs control?
Aaron: It's about providing those things and maintaining the standards. From there, I think we can allow private sector entities to do some amount of self-policing and self-regulation. But equally, we need our citizens, as individuals, to be literate enough and aware enough of what's going on that they make discerning decisions as well.
Equally, we need our citizens, as individuals, to be literate enough and aware enough of what's going on that they make discerning decisions as well.
Q3 Benefits vs risks of technology?
Aaron: In a lot of ways, every technology is a double-edged sword. Or maybe not double: maybe it's a triple-, quadruple-, quintuple-edged sword.
At its worst, the tech can enable conflict: it can enable the spread of vicious misinformation that ends up creating conflict and discord amongst communities. It can also produce competition, even if not full-out conflict.
And even if it doesn't do that, it can encourage us to move into a space of mindless consumption, consuming in a way that is uncritical.
Digital tech, in terms of big data and what ChatGPT and other generative AI platforms can do, can also lead us to connect with others. It can help us to collaborate better than we've ever done before: across geographies, across platforms, maybe even across sectors and disciplines. And at its best, it helps us to create.
So you can put all of those c-words together: conflict, competition and consumption on the one hand, being the more negative side of things, and then the ideas of connection, collaboration and creation on the other.
Conflict, competition, consumption on the one hand... connection, collaboration, creation on the other.
Q4 Will income inequality worsen with AI?
Aaron: Any technology can exacerbate inequality, right? That's why we talked so much about the K-shaped effects of COVID-19, where different groups have different access to technologies, and were therefore able to reap either more benefits or more disadvantages from the use of those forms of tech.
And I think the big question we have to answer is: is that tech going to substitute for the human, or is it going to augment the humans that are out there? This is a really key question.
Because if we buy into the assumption that tech is going to replace us, that somehow or other the large-scale use of generative AI, of bots, of greater automation is going to eliminate humans, then clearly there's going to be a deeply unequal effect, because the people whose jobs get eliminated are going to be those who are less skilled. Invariably, that means those in low-income parts of the economy.
But I don't think that has to be a necessary conclusion from the use of tech. I think it's equally plausible that we say tech is going to augment each of us.
If we buy into the assumption that tech is going to replace us, that somehow or other the large-scale use of generative AI, of bots, of greater automation is going to eliminate humans, then clearly there's going to be a deeply unequal effect... But I don't think that has to be a necessary conclusion from the use of tech. I think it's equally plausible that we say tech is going to augment each of us.
Q5 Will there be enough work in 10-20 years, and will people work the same way?
Aaron: The nature of the work is changing, right? The receptacle is getting bigger and wider and longer. The problem is that the adjustments to those new forms of work will always take a huge amount of effort from all of us.
It's going to take deliberate learning, deliberate coaching, deliberate support, and we will need time to make that adjustment. We need the right incentives for individuals to take on training, and for companies to provide space within which that training can occur; that, I think, has to be there.
What I think governments also need to provide is not affirmative action in the traditional sense, but affirmative action in the sense that we need to ensure that those who lose most from technology, those whose work is hardest to augment, or those who are immediately substituted and will take time to learn new skills, are supported, so that they have time to make the transitions and to deal with the frictional unemployment they would otherwise face.
What I think governments also need to provide is not affirmative action in the traditional sense, but affirmative action in the sense that we need to ensure that those who lose most from technology are supported.
The market-smoothing effects of what a government can do, I think, are actually quite critical. If we allow for those things to happen, then we take the unequal benefits that come from technology and ensure that there's some reassignment of the overall dividends from them; it's what some economists would call redistributive transfers. And once you have those in place, then people, I think, have the room and the energy and the attention to actually try out new things and develop the new skills that they need.
Q6 What is the role of businesses in the next internet?
Aaron: So I think of two roles that businesses have. One is that they are the key way in which we innovate, the key way in which a society finds new ways of being agile and meets the outer boundaries of how economies can evolve. So I think businesses are out there at the innovative frontier of what a society can do.
The second thing that businesses do is that they are employers, and so they have a social function: they need to be part of the world, providing the support for training and learning and relearning and unlearning that we talked about earlier.
But I think both of those roles are key. Sometimes what we see is an overemphasis on the first at the expense of the second, but both are actually critical.
(Businesses) are employers, and so they have a social function in that they need to be part of the world, providing the support for training and learning and relearning and unlearning.
Q7 The ONE question we need to be discussing
Aaron: I think we need to discuss how the internet is augmenting humans rather than substituting humans.
We need to discuss how the internet is augmenting humans rather than substituting humans.
Q8 What does Being Human in a Digital World mean to you?
Aaron: For me, being human in the digital world is about constantly augmenting ourselves, constantly learning, so that we can lead thriving and flourishing lives.
Being human in the digital world is about constantly augmenting ourselves, constantly learning, so that we can lead thriving and flourishing lives.
Michelle Khoo co-leads Deloitte's futures think tank, the Center for the Edge.
This interview is part of a video series by Deloitte and the Center for the Edge, where we interview thought leaders from across different sectors and demographics on "What it means to be human in a digital world" and how they are navigating an AI-driven future.