Generative AI in Cities, Councils and Local/ Regional Governments

Cities, councils and local and regional governments of all sizes have been racing to embrace Generative AI (GenAI) as a way to improve the services they provide to their constituencies and to reduce the quickly growing costs of government services. The adoption of GenAI has been faster than that of any technology before it, including the internet, the mobile phone and Facebook. I myself have been asked to present on the topic to a number of cities and government organisations over the last few months, including, for example, Melbourne City Council.

While it is true that GenAI can address and improve on many common local government use cases, tackling historical challenges and thereby giving local governments opportunities to reduce costs and improve service levels, it is also true that GenAI technologies bring inherent risks. This has led city councils, local governments and their supporting IT organisations to quickly spin up their own policies, strategies and procedures to govern GenAI. Well-known examples include Boston City Council, New York City Council and Amsterdam City Council. Interestingly, federal governments, moving a bit more slowly, are now starting to release new regulations governing the use of GenAI, with which cities and councils will now have to bring their policies into compliance. Examples of jurisdictions increasing the amount of AI regulation include the EU, the USA and Australia.

It's an exciting time if you work in local government, but there will be a lot of work to do to make sure these new GenAI technologies are used effectively and responsibly. What follows is my assessment of the state of GenAI in cities, councils and local governments. I also provide, through examples, some ideas of where cities and councils can start using GenAI and how they should structure themselves to address the inherent risks of these technologies. Hopefully anyone working in the public sector will find this article useful.


But what is Generative AI, what is an AI Foundation Model, and how will this increase the adoption of AI in Local Government?

Traditionally, Artificial Intelligence (AI) has required government organisations like city councils to build a specific AI model for each individual use case, whether machine vision, natural language processing, conversational agents or others. These models, trained on use-case-specific data, needed to be tuned and constantly monitored for performance, with regular updates whenever performance degraded. Cities, local and regional governments needed to pay for the people and tools necessary to build and manage these individual AI models. As a result, AI was only really affordable for a handful of the larger cities and for state and federal governments.

In 2017, researchers at Google, with a collaborator from the University of Toronto, published a research paper called "Attention is All You Need". See here if you'd like to read this paper. Without going into detail, this research paper fundamentally changed AI by giving organisations a computationally efficient way of building extremely large AI models. These extremely large models, trained on a wide variety of data, could be used for a variety of use cases, not just one. As a result, the concept of the "Foundation Model" was born. The term "Foundation Model" was first coined by the Stanford AI team (see here) to refer to AI models that can be applied across a variety of use cases. These Foundation Models allow organisations like cities and local/ regional governments to adopt a build-once-use-many-times approach to AI. This radically changes the economics of AI by making even lower-volume use cases economical for very small cities and councils. It also allows organisations like city councils to use models built by other organisations, reducing the minimum investment required and improving the economics of AI even further, thereby increasing the potential for significantly improved constituency services and a reduced cost base as populations grow.

By definition, Generative AI (GenAI) is the use of AI to generate content, whether text, images or voice. Large Language Models (LLMs) are a form of Foundational, Generative AI used specifically for text generation. I recently published a video on what Generative AI is, which you can watch here if you'd like further information and explanation.

ChatGPT, which was released for public consumption a little over a year ago by OpenAI (see here) and which most people have played with, is a Large Language Model, which is a form of Foundational, Generative AI. ChatGPT demonstrated the capability of these Foundational GenAI models, and ever since, cities and local and regional governments have been racing to adopt this new technology because of the benefits it can provide.

The adoption rate of these Foundational, Generative AI solutions has been so fast that the use of ChatGPT has surpassed the adoption of Facebook, the mobile phone and even the internet (see here). This is why so many of my clients, including cities, are experimenting with this technology, and why many organisations are asking big tech companies like Microsoft, AWS, Google and IBM to help them deploy Foundational, Generative AI into their organisations.


The potential Use Cases for GenAI in Local Government are many...

Gartner publishes something called GenAI Use Case Prisms for a variety of industry verticals. See here. These Use Case Prisms are useful because they identify a number of potential use cases for GenAI, allowing organisations to understand the full breadth and scale of its application. They also help organisations prioritise where they should focus their initial GenAI investments, as the use cases are ranked by value and feasibility. For governments (local, state and federal), Gartner breaks these GenAI Use Case Prisms into four categories: Contact Centre, Regulatory and Compliance, Human Services and Public Safety. The top 10 use cases under each category, prioritised by value and feasibility, are...

Contact Centre

  1. Contact Centre Virtual Assistance
  2. Contact Centre Staff Onboarding
  3. Unattended Mailbox Monitoring
  4. File Noting
  5. Step-by-Step Services
  6. Contact Center Chatbot
  7. Multilingual Contact Centre
  8. Draft Briefing Notes
  9. Sentiment Analysis
  10. Legislation Virtual Assistants

Regulatory and Compliance

  1. Tailored Explanations/ Guides
  2. Single view of Citizen/ Business
  3. Case Manager Assistant
  4. Case Manager Training
  5. Investigation Support
  6. Case Notes
  7. Communications Triaging
  8. FOI Support
  9. Synthetic Incident Generation
  10. Records Management Support

Human Services

  1. Case Manager Training
  2. Process Explanation for Service Recipients
  3. Synthetic Incidents
  4. Case Notes and Minutes
  5. Synthesizing Policy and Case Reference Materials
  6. FOI Support
  7. Case Manager Virtual Assistant
  8. Integrated View of Recipients
  9. Personalisation of Complex Interventions
  10. Natural Language Support

Public Safety

  1. Nonemergency Incident Chatbot
  2. 911/ 000 Call/ Text Prescreening
  3. Incident Response Messaging
  4. Public Safety Training
  5. FOI Support for Public Safety
  6. 911/000 Call Contextualization
  7. Regulatory and Grant Reporting
  8. Real-time Multilingual 911
  9. Special Event Planning
  10. Public Awareness Campaign Content

.... as you can see, there is a wide breadth of potential use cases for GenAI in governments, particularly cities and local and regional governments.


The Benefits of using Generative AI for Cities and Councils largely fall into three categories...

Looking across the previously listed use cases, I divide GenAI's benefits for cities, councils and local government into three categories. These are...

  1. Content Retrieval - making sense of large quantities of unstructured content, like government procedures, policies and websites, and returning well-summarised, well-referenced and fast responses to queries. This helps government officials do their jobs and gives citizens quick, easy-to-understand answers to their questions.
  2. Content Generation - this is where governments can use GenAI to generate text and images that are timely and heavily tailored to the individual's needs. This could be items such as emails, texts and reports.
  3. Decision Making - this, to me, is by far the most interesting capability of GenAI. Because LLMs are essentially models that predict the next words from the preceding context, and because they are good at choosing between options, they are very useful in decision making. They can be used for everything from forecasting the need for a particular government service (when your garbage should be picked up) to choosing where a citizen's enquiry should be routed to get the fastest response.

... all these business benefits will lead to dramatically lower costs to provide government services and an improved perception of service from local governments. Both of which are highly welcomed by local governments that are struggling to manage fast growing populations.
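To make the decision-making category concrete, here is a minimal sketch of how a council might use an LLM to route citizen enquiries to a department. Everything in it is illustrative: the department names are assumptions, and the call_llm function is a stand-in stubbed with a keyword match so the sketch runs without a vendor API. In a real deployment it would wrap whichever GenAI service the council uses, and the key idea is the same: constrain the model to a fixed list of options, then validate its answer before acting on it.

```python
# Minimal sketch of LLM-assisted enquiry routing (hypothetical department names).
DEPARTMENTS = [
    "Waste Services", "Rates and Payments",
    "Parks and Recreation", "Roads and Footpaths",
]

def build_routing_prompt(enquiry: str) -> str:
    """Constrain the model to choose exactly one department from a fixed list."""
    options = "\n".join(f"- {d}" for d in DEPARTMENTS)
    return (
        "You are a council contact-centre assistant. Classify the citizen "
        "enquiry below into exactly one of these departments and reply with "
        f"the department name only:\n{options}\n\nEnquiry: {enquiry}"
    )

def call_llm(prompt: str) -> str:
    """Stand-in for a real GenAI vendor API call; a keyword match keeps the sketch runnable offline."""
    enquiry = prompt.rsplit("Enquiry:", 1)[-1].lower()
    keywords = {
        "bin": "Waste Services", "garbage": "Waste Services",
        "rates": "Rates and Payments", "pothole": "Roads and Footpaths",
        "playground": "Parks and Recreation",
    }
    for word, dept in keywords.items():
        if word in enquiry:
            return dept
    return "UNKNOWN"

def route_enquiry(enquiry: str) -> str:
    answer = call_llm(build_routing_prompt(enquiry))
    # Validate the model's answer against the allowed list before acting on it.
    return answer if answer in DEPARTMENTS else "Needs human triage"

print(route_enquiry("My green bin was not collected this week"))
```

Note the validation step at the end: because LLM output is stochastic, anything outside the allowed list falls back to human triage rather than being routed blindly.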


But the Risks of using Generative AI in Cities and Councils are also many....

The risks of GenAI are many, but they are manageable. There is more risk inherent in GenAI than in traditional AI, principally because of the concept of the Foundation Model. While the Foundation Model concept improves the economics of AI through a build-once-use-many-times approach and lets you cost-effectively purchase or rent AI models that someone else has built, it also means you are assuming that the GenAI vendor you purchase the model from is providing safe and reliable models. Sometimes this is not the case.

The Australian Signals Directorate has published a really good guide on the risks associated with GenAI, which you can find here. These risks include but are not limited to...

  1. Fairness and Bias - like the AI systems that came before it, GenAI solutions can be heavily biased because of the data they have been trained on. This bias can be engineered out of the GenAI solution before it is built (by the vendor) or after (by the organisational user), but it has to be looked for, and governments need to make sure they put the proper governance in place to manage it.
  2. Intentional Data Poisoning - data poisoning involves intentionally manipulating an AI model's training data so that the model learns incorrect patterns and may misclassify data or produce inaccurate, biased or malicious outputs. This can happen when the model is built; here you are relying on the reputation of the GenAI model vendor, which can be problematic given the proliferation of open-source models in the market that effectively come with no warranties. Or it can happen after the model is built, when you are fine-tuning the model's outputs, which requires internal governance procedures and processes to prevent.
  3. Prompt Injection - Prompt injection is an input manipulation attack that attempts to insert malicious instructions or hidden commands into an AI system. Prompt injection can allow malicious actors outside your organisation to hijack the AI model’s output and jailbreak the AI system. In doing so, the malicious actor can evade content filters and other safeguards restricting the AI system’s functionality. Again, organisations can test for this, but it is something you need to actively look for and consider from a security perspective.
  4. Privacy and Intellectual Property - GenAI systems may also present a challenge to securing the sensitive data an organisation holds, including citizens' personal data and intellectual property. Government organisations should be cautious about what information they and their personnel provide to generative AI systems. Information given to these systems may be incorporated into the system's training data and could inform outputs to prompts from non-organisational users. It is important to include the automated masking of Personally Identifiable Information (PII) and confidential information to prevent it from being provided to the GenAI solution.
  5. Hallucination/ Accuracy - outputs generated by an AI system may not always be accurate or factually correct. Generative AI systems are known to hallucinate information that is not factually correct. Organisational functions that rely on the accuracy of generative AI outputs could be negatively impacted by hallucinations unless appropriate mitigations are implemented. The solution to the problem of hallucination is to "ground" the GenAI responses with data sources that are known to be factually correct. This grounding relies on the well-known technique of Retrieval Augmented Generation (RAG) and is well handled by most GenAI vendors, but it does require organisations to maintain a well-curated set of accurate data for the RAG-based techniques to draw on.
  6. Transparency - the challenge with GenAI models and LLMs is that they can essentially be black boxes, providing answers that sound correct but may not be. All GenAI solutions are stochastic in nature, meaning they will be wrong a percentage of the time; even well-engineered GenAI solutions may provide incorrect responses 5 to 10% of the time. In this context, it is important that GenAI solutions support their responses and decisions by providing the decision logic and references to the sources used, so that the user can validate the response or decision. This needs to be intentionally engineered into all your GenAI solutions, and it means that the use of GenAI often has to be supervised by a human user reviewing the GenAI responses.
  7. Accountability - organisations need to put controls and governance procedures in place to ensure they are governing GenAI properly. The use of GenAI is increasing exponentially: directly by end users, by council IT organisations, and by organisations providing off-the-shelf software solutions to your end users and IT organisations. The first step in ensuring the quality of all these GenAI solutions is to keep an effective and complete record of all the areas where GenAI, and AI in general, is being used, and to declare to your constituents when you are using GenAI in your services. The various departments within your organisation should be held accountable for their use of AI and GenAI, and this should be reported to senior executives to ensure monitoring and control.
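As a concrete illustration of the PII-masking mitigation mentioned under risk 4, the sketch below masks email addresses and Australian-style phone numbers with simple regular expressions before text would be sent to an external GenAI service. The patterns are assumptions for illustration only; a production system would use a dedicated PII-detection service rather than hand-rolled expressions, and would cover names, addresses and identifiers as well.

```python
import re

# Illustrative regex patterns only; real deployments should use a proper
# PII-detection service rather than hand-rolled expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}"),  # AU-style numbers
}

def mask_pii(text: str) -> str:
    """Replace detected PII with labelled placeholders before prompting a GenAI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = mask_pii("Contact Jane on 0412 345 678 or jane.doe@example.com about her rates.")
print(masked)
```

The placeholders keep the prompt useful to the model ("a phone number goes here") while ensuring the raw value never leaves the council's environment.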

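The grounding technique described under risk 5 can also be sketched in a few lines. The example below is a toy illustration of RAG over a tiny in-memory "corpus" of hypothetical council documents: it retrieves the most relevant document using a simple word-overlap score and assembles a prompt that instructs the model to answer only from the retrieved text and to cite its source. Real systems would use vector embeddings and a vendor LLM, both omitted here as assumptions.

```python
# Toy RAG sketch: word-overlap retrieval over a tiny in-memory corpus,
# followed by assembly of a grounded, source-cited prompt.
CORPUS = {
    "waste-policy.pdf": "General waste bins are collected weekly on Tuesdays. "
                        "Recycling is collected fortnightly.",
    "parking-policy.pdf": "Two-hour free parking applies in the town centre "
                          "between 9am and 5pm on weekdays.",
}

def retrieve(query: str) -> tuple[str, str]:
    """Return the (source, text) pair sharing the most words with the query."""
    q_words = set(query.lower().split())
    def overlap(item):
        return len(q_words & set(item[1].lower().split()))
    return max(CORPUS.items(), key=overlap)

def grounded_prompt(query: str) -> str:
    source, text = retrieve(query)
    # Instructing the model to answer only from the source, and to say so when
    # the source is silent, is what reduces hallucination.
    return (
        "Answer the question using ONLY the source below. If the source does "
        "not contain the answer, say so.\n"
        f"Source ({source}): {text}\n"
        f"Question: {query}"
    )

print(grounded_prompt("When are waste bins collected"))
```

Because the prompt carries the source filename, the model can be asked to cite it in its answer, which also addresses the transparency point under risk 6.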
But it is clear that the benefits of GenAI far outweigh the risks, and that the risks are controllable.


The Changing Regulatory Environment requires immediate action on the part of local governments ...

One of the more interesting events in the world of GenAI in the last year was the US President issuing an executive order directing a number of activities to make the use of Artificial Intelligence in the United States more trustworthy and safe. This is an important event because many countries outside the United States, including Australia, have said they will look to this executive order as a template for new AI regulations and legislation of their own. The Executive Order is worth a read and will only take 10 minutes. See: https://lnkd.in/g_4PYWNJ. Some of its more significant components are the requirement for organisations to report red-team testing results for their Foundation Models, and the direction to Congress and various other agencies to pass additional legislation and regulations to govern the safe and ethical use of AI and to ensure an individual's privacy. For a good explanation of red teaming in AI, see: https://lnkd.in/gYkUQ6mc. Some would argue that the US is just starting to catch up to the European Union, which has had regulations governing ethical and trustworthy AI use for years because of the GDPR. See: https://lnkd.in/gQR-a_RC. Newer EU regulations on GenAI require a risk-based approach and address non-compliance with heavy fines of up to 7% of global revenue.

The Australian Government also issued an Interim Discussion Paper on AI in January. I have been warning the clients and industry bodies I work with that the Australian Government was ramping up to introduce a whole raft of regulatory changes to govern the use of Artificial Intelligence (AI) and, more specifically, Generative AI (GenAI). Well, that discussion paper was the first salvo in what will be a wave of regulatory changes. You can find the report here: https://lnkd.in/gnkRjQmq. It is worth a read. An excerpt from that report:

A preliminary analysis of submissions found at least 10 legislative frameworks that may require amendments to respond to applications of AI. Many AI risks outlined in submissions were well-known before recent advances in generative AI. These include:
  • inaccuracies in model inputs and outputs
  • biased or poor-quality model training data
  • model slippage over time
  • discriminatory or biased outputs
  • a lack of transparency about how and when AI systems are being used

For me, these recent events in the USA, the EU and Australia highlight yet again the need for organisations such as city councils to have good governance in place to ensure they use AI responsibly, because city councillors will be held personally accountable for transgressions of these regulations. Organisations that think they already have sufficient governance in place will definitely need to upgrade it, both because of the increased risks that Generative AI creates (see: https://lnkd.in/gGxRvgfK) and because of pending increases in regulation that will put new responsibilities onto governments themselves (local and regional) to use AI and GenAI responsibly. If you are interested in more detail on AI governance, you can read an article I recently published on the topic here.


How Cities and Local/ Regional Governments have been Responding to GenAI so far has been varied...

The reaction to AI and GenAI by cities, councils and local governments has been varied, but perhaps the plan released by New York City Council in October 2023 (see here) provides a good template for the actions that cities and local and regional governments need to take when preparing for increased use of AI and GenAI.

Artificial Intelligence in Action for NYC

That plan included the following components....

  1. Design and Implement a Robust Governance Framework - including a City AI Steering Committee, Guiding Principles and Definitions
  2. Build External Relationships - including establishing an external expert Advisory Panel
  3. Foster Public Engagement - by holding listening sessions and establishing plans to get public input
  4. Build AI Skills and Knowledge in City Government - Conduct training, education and facilitate knowledge sharing across agencies and teams both within and across cities and local government
  5. Support AI Implementation - by identifying opportunities for in-house tool development and providing support for effective procurement of GenAI solutions. Scale, reuse and repurpose identified in-house projects. After all, the prime benefit of Foundation Models is their build-once-use-many-times capability.
  6. Enable Streamlined and Responsible AI Acquisition - Conduct needs assessments, foster pilots and implement GenAI specific procurement standards.
  7. Ensure Action Plan Measures are Maintained and Updated - and report annually on the City's progress.


Conclusions

As I said at the beginning, it's an exciting time if you work in local and city government. AI and Generative AI provide a way of cost-effectively improving the services you provide to your citizens and will be an important ingredient in managing population growth over the next 3 to 5 years. But they do come with risks and an increase in regulation that will need to be planned for, and cities will need to consult broadly when implementing these new AI and GenAI programs.



Dr David Goad is the CTO and Head of Advisory for IBM Consulting Australia and New Zealand. He is also a Microsoft Regional Director. David is frequently asked to speak at conferences on the topics of Generative AI, AI, IoT, Cloud and Robotic Process Automation. He teaches courses in Digital Strategy and Digital Transformation at a number of universities. David can be reached at david.goad@ibm.com if you have questions about this article.


