Generative AI in City's, Councils and Local/ Regional Governments
Cities, Councils and Local and Regional Governments of all sizes have been racing to embrace Generative AI (GenAI) as a way to improve the level of services they provide to their constituencies and to reduce the quickly growing costs of government services. The adoption of GenAI has been faster than any technology before, including the internet, the mobile phone and Facebook. I myself have been asked to present to a number of cities and government organisations on the topic over the last few months, including, for example, Melbourne City Council. While it is true that GenAI can address and improve on many common local government use cases, addressing historical challenges and thereby providing local governments with opportunities for reduced cost and improved service levels. It is also true that GenAI technologies also bring inherent risks, which has led City councils, local governments, and their supporting IT organisations to quickly spin up their own policies, strategies and procedures to govern GenAI. Well-known examples include Boston City Council, New York City Council and Amsterdam City Council. Interestingly, Federal Governments, moving a bit slower, are now starting to release new regulations to govern the use of GenAI, which cities and councils will now have to update their policies to comply with. Examples of jurisdications increasing the amount of AI regulation include EMEA, The USA and Australia. It's an exciting time if you work in the local government, but there will be a lot of work to do to make sure they are using these new GenAI technologies effectively and responsibly. What follows is my assessment of the state of GenAI in Cities, Councils and Local Governments. I also provide, through example, some ideas of where Cities and Councils can look to start using GenAI and how they should structure themselves to address the inherent risks of these technologies. Hopefully anyone working in the public sector space will find this article useful.
But what is Generative AI, the Concept of the AI Foundation Model, and how will this increase the adoption of AI for Local Government
Traditionally, Artificial Intelligence (AI) has required government organisations like city councils to build specific AI models for each individual use case, whether it was machine vision, natural language processing, conversational agents or others. These models, trained on specific use case data, needed to be tuned and constantly monitored for performance, updating the models regularly when performance degrades. Cities, local and regional governments needed to pay for the people and tools necessary to build and manage these individual AI models. As a result, AI was only really affordable for a handful of the larger cities and for state and federal governments.
In 2017, the University of Toronto, in partnership with Google, wrote a research paper called "Attention is All You Need". See here if you'd like to read this paper. Without going into detail, this research paper fundamentally changed AI by giving organisations a computationally efficient way of building extremely large AI models. These extremely large models, trained on a variety of data, could be used for a variety of use cases, not just one. As a result, the concept of the "Foundation Model" was born. The term "Foundation Model" was first coined by the Stanford AI team (see here) to refer to AI models that could be applied across a variety of use cases. These Foundation Models allow organisations like cities and local/ regional governments to adopt a build once, use many times approach to AI. This radically changes the economics of AI by making even lower volume use cases economical for very small cities and councils. It also allows organisations like city councils to use models built by other organisations, reducing even further the minimum investments necessary and improving the economics of AI even further. Thereby increasing the potential for significantly improved constituency services and reducing cost bases as populations grow.
By definition, Generative AI (GenAI) is any time you use AI to generate content, whether text, images or voice. Large Language Models (LLMs) are a form of Foundational, Generative AI that is used specifically for text generation. I recently published a video on what Generative AI is, which you can watch here if you'd like further information and explanation.
ChatGPT, which was released for public consumption a little over a year ago by OpenAI (see here) and which most people have played with, is a Large Language Model, which is a form of Foundational, Generative AI. ChatGPT demonstrated the capability of these Foundational GenAI models, and ever since, cities, local and regional govenments have been racing to adopt this new technology because of the benefits it can provide.
The adoption rate of these Foundational, Generative AI solutions has been so fast that the use of ChatGPT has surpassed the adoption of Facebook, the Mobile Phone and even the Internet (see here). Hence why I have so many clients including cities experimenting with this technology, and many organisations are asking big tech companies like Microsoft, AWS, Google and IBM to help them with the deployment of Foundational, Generative AI into their organisation.
The potential Use Cases for GenAI in Local Government are many...
Gartner publishes something called GenAI Use Case Prisms for a variety of industry verticals. See here. These Use Case Prisms are useful because they identify a number of potential use cases for GenAI allowing organisations to understand the full breadth and scale of its application. They also help organisations prioritise where they should be focusing their initial GenAI investments as they prioritise the use cases based on value and feasibility. In the case of goverments (local, state and federal) Gartner breaks up these GenAI Use Case Prisms into the four categories of Contact Centre, Regulatory and Compliance, Human Services and Public Safety. The top 10 use cases under each category prioritised based on value and feasibility are...
Contact Centre
Regulatory and Compliance
Human Services
Public Safety
.... as you can see there is a wide breadth of potential use cases for GenAI in Governments, particularly cities and local and regional governments.
The Benefits of using Generative AI for Cities and Councils largely fall into three categories...
Looking across the previously listed use cases, I divide GenAI's benefits for cities, councils and local government into three categories. These are...
... all these business benefits will lead to dramatically lower costs to provide government services and an improved perception of service from local governments. Both of which are highly welcomed by local governments that are struggling to manage fast growing populations.
Recommended by LinkedIn
But the Risks of using Generative AI in Cities and Councils are also many....
The risks of GenAI are many but are manageable. There is more risk inherent in GenAI than in traditional AI principally because of the concept of the Foundation Model. Whilst the Foundational Model concept means that you improve the economics of AI by adopting a build once use many times approach and allows you to cost-effectively purchase or rent AI models that someone else has built for you, it does mean that you are assuming that the GenAI vendor you are purchasing the model from is providing safe and reliable GenAI models. Sometimes this is not the case.
The Australian Signals Directorate has published a really good guide on the risks associated with GenAI, which you can find here. These risks include but are not limited to...
But is clear that the benefits far outweigh the risks of GenAI and that the risks are controllable.
The Changing Regulatory Environment requires immediate action on the part of local governments ...
So, one of the more interesting events in the world of GenAI in the last year was when the US President issued an executive order directing a number of activities to make the use of Artificial Intelligence in the United States more trustworthy and safe. This is an important event as many countries outside of the United States, including Australia, have said that they will look to this executive order as a template for new AI regulations/ legislation that they will implement themselves. The Executive Order is worth a read and will only take 10 minutes. See: https://lnkd.in/g_4PYWNJ. Some of the more significant components of the executive order are the requirements for organisations to report Red Team testing results for their Foundation Models and the direction to Congress and various other agencies to pass additional legislation and regulations to govern the safe and ethical use of AI and to ensure an individuals privacy. For a good explanation of Red Teaming in AI see: https://lnkd.in/gYkUQ6mc Some would argue that the US is just starting to catch up to the European Union, which has had regulations governing ethical and trustworthy AI use for years because of the GDPR. See: https://lnkd.in/gQR-a_RC . Newer EU regulations on GenAI require a risk-based approach and address non-compliance with heavy fines of up to 7% of global revenue.
The Australian Government has also recently issued in January an Interim Discussion Paper on AI. I have been warning the clients and industry bodies I work with that the Australian Government was ramping up to introduce a whole bunch of regulatory changes to govern the use of Artificial intelligence (AI) and, more specifically, Generative AI (GenAI). Well, that white paper was the first salvo in what will be a wave of regulatory changes. You can find that report here: https://lnkd.in/gnkRjQmq. It is worth a read. An excerpt from that report is:
A preliminary analysis of submissions found at least 10 legislative frameworks that may require amendments to respond to applications of AI. Many AI risks outlined in submissions were well-known before recent advances in generative AI. These include: • inaccuracies in model inputs and outputs • biased or poor-quality model training data • model slippage over time • discriminatory or biased outputs • a lack of transparency about how and when AI systems are being used
For me, these recent events in the USA, EMEA and Australia highlight yet again the need for organisations such as city councils to have good governance in place to ensure they use AI responsibly because city councillors will be held personally accountable for transgressions of these regulations. For Organisations that think they already have sufficient governance in place, this will definitely need to be upgraded because of the increased risks that new Generative AI creates (see: https://lnkd.in/gGxRvgfK ) and because of these pending increases in regulation that will increasingly put new responsibilities onto governments themselves (local and regional) to use AI and GenAI responsibly. If you are interested in more detail on AI Governance you can read an article I recently published on the topic here.
How the Cities and Local/ Regional Govenments have been Responding to GenAI so far has been varied...
The reaction to AI and GenAI by Cities, Councils and Local Governments has been varied, but perhaps the plan provided by the New Your City Council in October 2023 (see here) might provide a good template for the actions that Cities and Local and Regional Governments need to take when preparing for increased use of AI, and GenAI.
That plan included the following components....
Conclusions
As I've said at the beginning, it's an exciting time if you work in Local and City Government. AI and Generative AI provide a way of improving the services you provide to your citizens cost-effectively and will be an important ingredient to managing the growth in populations over the next 3 to 5 years. But it does come with Risks and an increase in Regulation that will need to be planned for and Cities will need to consult broadly when implementing these new AI and GenAI programs.
Dr David Goad is the CTO and Head of Advisory for IBM Consulting Australia and New Zealand. He is also Microsoft Regional Director. David is frequently asked to speak at conferences on the topics of Generative AI, AI, IoT, Cloud and Robotic Process Automation. He teaches courses in Digital Strategy and Digital Transformation at a number of universities. David can be reached at david.goad@ibm.com if you have questions about this article.
Marketing & Social Media Specialist | 5+ years | Higher Education Management | Digital Strategy, Brand Awareness, Content Creation
9moWe definitely need more talk on AI tools like SpeakShift
AI Experts - Join our Network of AI Speakers, Consultants and AI Solution Providers. Message me for info.
9moImpressive insights on the rapid adoption of GenAI in local governments!