Another busy week in public sector AI, with significant focus on upcoming political shifts and their potential impact on AI policy. Check out this NotebookLM-generated podcast, which provides an automated walkthrough of this week's federal news, pulled directly from the headlines in an easy-to-digest format. It's remarkable how naturally the two AI speakers cover the news.
This week, reports from the UK and Poland highlighted the impact of AI on the labor market, underlining the significant ways in which AI, specifically GenAI, is set to transform the public sector. Now is the time to prepare our workforce for the shifts already underway, anticipate the changes still to come, and equip them with the skills to navigate the future of work confidently and effectively.
Enjoy the rest of this week's news!
- Trump’s 2024 win ushers in new AI policy directions in U.S. The Trump administration aims to reform AI policy with a lighter regulatory approach, prioritizing tech competition with China, promoting AI-driven government efficiency, and curtailing Biden-era regulations.
- Early insights emerge from the Air Force’s NIPRGPT chatbot experiment The Air Force’s NIPRGPT chatbot, launched as an experimental tool for personnel, reveals valuable user feedback and computing challenges, informing future investments in secure, generative AI applications.
- One year later: White House AI EO’s impact assessed Brookings evaluates the White House AI Executive Order’s one-year impact, highlighting agency progress in safety, transparency, and civil rights while stressing the need for congressional action for long-term enforcement.
- OpenAI expands GenAI use across federal agencies with ChatGPT Enterprise OpenAI’s ChatGPT Enterprise sees growing adoption in federal agencies like Treasury, NASA, and the Air Force Research Lab, targeting administrative efficiency and compliance with federal security standards.
- Trump's return could bring sweeping changes to AI regulation and innovation With Trump’s anticipated rollback of Biden-era AI safety measures, the U.S. could see rapid AI growth, but questions linger on potential risks to privacy, misinformation, and job security.
- NGA Director highlights GEOINT AI’s role in defense At the DoD IT conference, NGA Director Frank Whitworth discussed GEOINT AI’s applications in defense, including advances in NGA Maven, digital twins, and NGA’s collaboration with the Space Force and NRO.
- Antony Blinken on using data, AI to enhance US diplomacy Secretary Antony Blinken highlighted the State Department’s growing use of AI for tasks like translation, fact-checking, and media monitoring, aiming to boost diplomatic efficiency and analytical rigor.
- Trump vows to repeal Biden’s AI executive order, signaling shift in U.S. AI policy President-elect Trump’s pledge to rescind Biden’s AI executive order may lead to reduced federal AI oversight, shifting focus toward rapid AI adoption while altering current anti-bias and data governance frameworks.
- Making sure AI is used responsibly at NSF NSF’s Chief AI Officer Dorothy Aronson prioritizes establishing infrastructure and governance to ensure AI’s responsible use and long-term management, supporting internal AI adoption while empowering agency-wide innovation.
- Anthropic urges AI regulation to avoid catastrophes Anthropic advocates for adaptable AI regulation to mitigate risks in areas like cybersecurity and CBRN misuse. Its Responsible Scaling Policy emphasizes iterative safety measures and calls for proactive, balanced federal legislation.
- DOE and DOC sign MoU to collaborate on developing safe and trustworthy AI The U.S. Department of Energy and Department of Commerce have signed an MoU to bolster AI safety and trustworthiness, focusing on security risks, infrastructure protection, and privacy-enhancing technologies.
- The unintended consequences of GenAI adoption Army CIO Leonel Garciga emphasizes secure GenAI expansion across the DoD, balancing innovation with stringent security to meet mission needs and counter foreign adversaries’ rapid AI advancements.
- A view from DC: What does a second Trump presidency mean for privacy, AI governance? Under a second Trump administration, key regulatory changes on privacy, AI governance, and digital trade are likely, with a focus on limiting AI oversight and recalibrating privacy protections.
- Federal AI use cases highlight adoption opportunities, challenges Federal leaders are advancing AI projects in defense, economic growth, and public service while addressing data location, security, and ethics challenges to maximize AI’s transformative potential.
- FAA seeks artificial intelligence-powered safety tool The FAA is seeking AI-powered solutions to identify safety risks and analyze aviation data, aiming to advance safety standards and enable rapid response in high-risk scenarios.
- Alabama AI task force recommends transparency, oversight for government AI use Alabama’s AI Task Force advises strict guardrails—such as data classification, human oversight, and transparency protocols—to ensure responsible state AI use, with final recommendations due Nov. 30.
- AI can transform Hawaii’s civic engagement and government processes Thoughtfully applied AI can improve civic engagement in Hawaii by managing extensive public input, facilitating informed decision-making, and supporting meaningful, transparent policy discussions.
- AI isn’t new to cybersecurity, but some of its use cases are Public sector agencies are leveraging AI-driven tools, including custom large language models and advanced endpoint detection, to enhance threat response, streamline alerts, and improve cybersecurity efficiency.
- Aurora, Ill., considers consultant to develop AI policy The Aurora City Council is set to vote on hiring a consultant to help the city develop a policy and strategy for use of GenAI.
- China’s People’s Liberation Army weaponizing Meta’s AI China’s PLA has repurposed Meta’s Llama for battlefield intelligence, sparking concerns about open-source AI's potential for unauthorized military use despite international restrictions.
- UK launches GovGPT to streamline business support with AI The UK government has launched GovGPT, an AI chatbot in a limited trial to assist 15,000 businesses with their queries. The tool aims to simplify access to government information and reduce administrative delays.
- Balancing innovation and governance in the age of AI The World Economic Forum’s AI Governance Alliance outlines a three-pillar strategy for AI regulation, focusing on leveraging existing frameworks, fostering cross-sector collaboration, and preparing for future advancements.
- DES Ministry unveils latest GenAI guidelines Thailand’s updated GenAI guidelines aim to ensure ethical, secure AI deployment across industries, aligning digital advancements with societal and regulatory expectations, according to the DES Minister.
- The state of GenAI in the Middle East GCC nations are actively adopting generative AI, but few realize significant value. Leading organizations drive value through strategic planning, technology customization, centralized data management, and targeted talent development.
- Ireland National AI Strategy Refresh Ireland’s updated AI Strategy prioritizes responsible AI growth, focusing on EU AI Act alignment, SME awareness, talent development, regulatory sandboxes, and expanding digital upskilling to harness AI’s societal and economic benefits.
- Georgian Deputy Economy Minister: AI-driven transformation of traditional sectors “priority” Georgian Deputy Economy Minister Irakli Nadareishvili announced AI-driven transformation in sectors like healthcare, agriculture, and education as a top government focus, with new initiatives including the Kutaisi technology hub.
- PRC adapts Meta’s Llama for military and security AI applications China’s customization of Meta’s Llama model for military applications underscores the need for stronger oversight on open-source AI, as adapted models enhance intelligence, situational awareness, and mission support capabilities.
- New Singapore UK agreement to strengthen global AI safety and governance Singapore and the UK signed a Memorandum of Cooperation to advance AI safety through joint research, standards, and testing, aiming to build global trust in AI technologies.
- The impact of AI on the labor market AI could reshape the UK workforce by automating tasks, creating new jobs, and boosting productivity. Policymakers should focus on broad AI adoption, labor-market adaptation, job quality, and scenario planning.
- AI for Science report AI’s adoption is transforming science, accelerating research across sectors like agriculture, energy, and health. This report guides science leaders on harnessing AI’s potential while addressing implementation challenges.
- More Canadian organizations sign AI code of conduct Ten more organizations join Canada’s Voluntary Code of Conduct for responsible AI, reflecting a growing commitment to safe AI practices and advancing Canada’s leadership in ethical AI development.
- Cybersecurity risks of AI-generated code AI-generated code presents cybersecurity risks through insecure code outputs, vulnerabilities in AI models, and downstream impacts on future training, requiring multi-stakeholder mitigation and expanded secure coding standards.
- Google leaks AI tool “Jarvis” capable of PC control Google accidentally revealed “Jarvis,” an AI agent capable of completing tasks directly on a user’s computer, advancing competition in automated digital assistance.
- What might good AI policy look like? Four principles for a light touch approach to AI Jennifer Huddleston advocates a light-touch approach to AI regulation, suggesting policymakers focus on assessing existing laws, creating federal preemption, prioritizing education over control, and preserving civil liberties.
- Stop writing all your AI prompts from scratch Reusable AI "blueprints" are customized prompts that educators can adapt for recurring tasks like lesson planning and quiz generation, boosting efficiency without re-creating prompts from scratch.
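The blueprint idea above can be sketched in a few lines: keep a parameterized prompt template and fill it in per task instead of rewriting the whole prompt each time. This is a minimal illustration only; the template wording and field names are assumptions, not taken from the article.

```python
# A minimal sketch of a reusable prompt "blueprint": a parameterized
# template that gets filled in per task. The template text and field
# names here are illustrative assumptions.
from string import Template

QUIZ_BLUEPRINT = Template(
    "You are a $grade_level teacher. Write a $num_questions-question "
    "multiple-choice quiz on $topic, with an answer key at the end."
)

def build_prompt(topic: str, grade_level: str = "high school",
                 num_questions: int = 5) -> str:
    """Fill the blueprint with task-specific values."""
    return QUIZ_BLUEPRINT.substitute(
        topic=topic, grade_level=grade_level, num_questions=num_questions
    )

# Reuse the same blueprint for different lessons:
print(build_prompt("photosynthesis", num_questions=10))
print(build_prompt("the water cycle", grade_level="middle school"))
```

The same pattern works for lesson plans, rubrics, or parent emails: one vetted template per recurring task, with only the specifics swapped in.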
- AI brings autonomous procedures closer, but surgeons still key Johns Hopkins experts reveal how AI-enhanced robots are advancing surgical precision, addressing workforce shortages, and preserving human oversight as critical in future autonomous surgical procedures.
- Neuro-oncology experts reveal how to use AI to improve brain cancer diagnosis, monitoring, treatment A global team’s AI guidelines aim to improve brain cancer diagnosis and treatment by standardizing practices, reducing subjectivity, and ensuring models meet rigorous clinical standards for better patient outcomes.
- Why AI could eat quantum computing’s lunch Rapid AI advances in physics and chemistry simulations challenge the need for quantum computing in these fields, yet experts predict a collaborative, hybrid future for solving complex scientific problems.
- AI artwork of Alan Turing sells for $1m Ai-Da Robot’s portrait of Alan Turing sold for $1.08 million, marking a milestone as the first humanoid robot artwork auctioned, raising dialogue on AI’s role in society and art.
- Despite its impressive output, Gen AI doesn’t have a coherent understanding of the world New research shows large language models can excel at tasks like navigation and gaming without a true understanding of the world, raising concerns about their reliability in real-world applications.
- Industry’s take on the chief artificial intelligence officer role Industry leaders emphasize that effective chief AI officers need a strong grasp of organizational goals, robust governance capabilities, and an ability to balance AI innovation with security and compliance.
- AI saves ad agencies a lot of time. Should they still charge by the hour? AI is prompting ad agencies to rethink traditional hourly billing by reducing time and labor needs. New pricing models based on outputs or outcomes may better align with AI-driven efficiencies.
- The $50 million movie Here de-aged Tom Hanks with GenAI The film Here uses generative AI to de-age Tom Hanks and Robin Wright, showing their characters across 60 years without casting different actors—a first for large-scale AI-powered visual effects.
- Crossing the deepfake rubicon A CSIS study shows people struggle to identify AI-generated content, emphasizing the urgent need for regulatory and technical measures to combat escalating synthetic media threats to national security.
- How far behind are open models? Open AI models trail top closed models by about a year in performance and training compute, yet signs indicate the gap may narrow as open models approach frontier capabilities.