It’s hard to believe that I started sharing these weekly updates almost a year ago. Keeping up with the news has been a real challenge, but pulling together the weekly updates has helped me stay caught up. I hope they’ve helped you too!
For U.S. federal employees with a .gov or .mil email address, I encourage you to take advantage of the GSA AI Community of Practice (AI CoP) rebroadcast of AI training sessions. The sessions span three tracks (technical, leadership, and acquisition) and were delivered in collaboration with a number of leading institutions, including Stanford, GW Law, and Princeton. You can find more information on the December schedule here.
Wishing you all a great week and Happy Thanksgiving!
Federal
State Department forms task force to tackle AI-generated content
A new interagency task force with 20+ U.S. agencies will collaborate on detecting AI-generated content and enhancing digital transparency to combat deepfakes and AI-driven disinformation globally.
Trump's AI agenda: deregulation, DOGE, and a Musk-led vision
Trump plans to dismantle Biden’s AI policies, prioritize deregulation, and launch a Musk-led Department of Government Efficiency (DOGE), raising concerns about balancing innovation, safety, and ethical oversight in AI.
AI machine gun tackles drone threats in military tests
The Pentagon tests ACS’s AI-powered Bullfrog system, capable of autonomously targeting drones, highlighting ethical concerns and future debates on deploying fully autonomous lethal weapons.
DLA Finance taps AI to streamline audits and enhance efficiency
The Defense Logistics Agency explores AI for error detection, inventory reconciliation, and financial audits, aiming to improve processes and align with DOD goals for a clean audit by 2028.
Anthropic pursues FedRAMP to expand AI solutions in government
Anthropic seeks FedRAMP accreditation to sell AI systems directly to federal agencies, focusing on national security and civilian use cases, while addressing challenges in regulatory compliance and ethical AI deployment.
NIST forms TRAINS taskforce to tackle AI security risks
NIST's new TRAINS Taskforce, featuring inter-agency expertise, will assess AI's security risks in national defense, cybersecurity, and critical infrastructure, emphasizing safe and trustworthy AI innovation.
STRATCOM boss: AI useful, but don’t expect ‘WarGames’
Gen. Anthony J. Cotton highlights AI's role in data processing for U.S. nuclear operations, emphasizing that human control will always govern nuclear decisions despite AI's growing capabilities.
USDA invests in AI, funding a pilot program for Michigan
A USDA-backed pilot program in Michigan partners with Syncurrent to deploy AI tools in six rural communities, streamlining grant identification and applications to improve access to federal and state funding resources.
State/Local
AI and the future of policing: insights from U.S. police chiefs
At a police chiefs’ conference, AI emerged as a transformative force in training, data integration, and reporting, raising pressing questions about privacy, ethics, and federal oversight in law enforcement.
How AI regs in CA, CO and beyond could threaten U.S. tech dominance
State-level AI regulations, such as Colorado's pioneering AI Act, spark fears of stifled innovation and eroding tech hubs. Experts advocate for balanced, nationwide standards to sustain U.S. competitiveness.
AI Search makes a quick, promising splash in Palm Beach
Palm Beach launched an AI-powered website search tool, enhancing civic service efficiency by answering resident queries in natural language, offering insights for municipalities adopting AI for constituent relationship management.
Michigan Senate limits lawmaker access to ChatGPT, AI tools
Citing privacy and security risks, the Michigan Senate blocked AI tools like ChatGPT on Senate devices, emphasizing concerns over sensitive data exposure, while planning limited AI use via Microsoft 365 Copilot by 2025.
AI opportunities for state and local DOTs
NCHRP's roadmap highlights AI applications for state and local DOTs, emphasizing traffic optimization, predictive analytics, safety alerts, and simulations to improve efficiency, reduce congestion, and enhance transportation system testing.
International
AI safety goes global: A crucial meeting in San Francisco
The first International Network of AI Safety Institutes meeting on Nov 20-21 aims to unify global efforts on technical AI safety, transparency, and research outreach, prioritizing collaboration across diverse nations and stakeholders.
China's Kimi AI rivals OpenAI with advanced reasoning model
Moonshot AI's Kimi launched k0-math, a reasoning model outperforming OpenAI's o1-mini in several math tests, showcasing enhanced chain-of-thought abilities but facing challenges in geometry and generalization.
Chinese lab has released a ‘reasoning’ AI model to rival OpenAI’s o1
Chinese lab DeepSeek introduces DeepSeek-R1, a reasoning AI model rivaling OpenAI's o1. While promising enhanced fact-checking, it faces political constraints and vulnerabilities like jailbreak exploits.
Stanford’s global vibrancy tool ranks U.S. as AI leader
Stanford HAI's Global Vibrancy Tool ranks the U.S. first among 36 countries in AI, excelling in private investment, research, and infrastructure, with China a distant second and the UK ranked third.
Why AI is Southeast Asia's new engine for profitable growth
With strong investments, a young digital population, and supportive policies, Southeast Asia is harnessing AI to transform industries, driving innovation and positioning itself as a global leader in AI adoption.
Canada's AI-powered project tackles insect mass extinction
The Antenna project in Canada uses AI and sensors to monitor insect biodiversity, aiming to reverse rapid species collapse and develop conservation strategies to restore planetary ecosystems.
Trading with intelligence
A 2024 report explores AI's transformative impact on international trade, highlighting benefits like reduced costs and productivity gains, while addressing the need for balanced policies to bridge global AI divides.
AI maturity: A blueprint for public sector progress
BCG's AI Maturity Matrix helps policymakers assess their nation's AI readiness and exposure, enabling strategic adoption to boost GDP, resilience, and global competitiveness in an evolving AI landscape.
India charts a pro-innovation path for AI regulation
A Carnegie India study recommends a balanced, risk-based approach to AI regulation in India, advocating self-regulation while identifying areas for targeted interventions to maximize innovation and mitigate harm.
Nations discuss AI, ahead of new presidential administration
Ten nations convened to address AI safety norms amid uncertainty over U.S. cooperation under incoming leadership, emphasizing collaboration, safety testing, and funding to mitigate AI risks like misinformation and cyber threats.
GenAI enthusiasm high among execs, but implementation challenges persist
While 97% of top executives anticipate significant impacts from generative AI, many organizations lack the necessary infrastructure and capabilities to fully integrate the technology, according to an NTT Data report.
AI granny 'Daisy' fights scams with wit and charm
Virgin Media O2’s AI chatbot Daisy mimics a clever British grandmother, engaging scammers to waste their time, protect consumers, and gather intelligence for law enforcement.
Musk’s AI Grok picks Harris over Trump, sparking Altman’s jab
Elon Musk’s chatbot, Grok, controversially endorsed Kamala Harris over Donald Trump, prompting OpenAI CEO Sam Altman to mock Musk’s claims of ChatGPT’s bias, reigniting their long-standing rivalry.
AI holiday ad from Coca-Cola faces consumer backlash
Coca-Cola’s AI-generated holiday ad, criticized as “soulless,” sparks debate about AI's role in creative industries, despite the brand’s defense of blending human and technological innovation.
AI’s role in Marriott’s workforce downsizing
Marriott's layoffs of 833 corporate employees may reflect AI's growing role in automating managerial tasks, signaling a shift in workforce strategy amidst evolving industry challenges and reduced business travel.
Public-private partnerships: bridging the AI divide for a better future
Public-private partnerships can ensure ethical, sustainable, and inclusive AI development by fostering global collaboration, addressing biases, improving accessibility, and prioritizing sustainability for equitable technological advancement.
Autonomous GenAI: under development
Agentic AI—autonomous generative AI agents—promises transformative efficiency for knowledge work, but challenges in reliability and governance could delay widespread adoption until 2025 and beyond.
Is AI scaling slowing down?
AI firms like OpenAI face diminishing returns from scaling models. Experts debate whether innovation or data limitations are causing progress to plateau. New strategies aim to address these challenges.
Agentic AI design: An architectural case study
Agentic AI, powered by autonomous agents, is revolutionizing workflows by automating complex tasks. A Microsoft case study highlights its potential while emphasizing cost-efficiency, monitoring, and iterative design for success.
AI safety and automation bias
Automation bias, the over-reliance on AI outputs, undermines human oversight and safety. A study highlights lessons from Tesla, aviation, and military cases to mitigate this through design, training, and policies.
Fortune 50 AI innovators
The list highlights global leaders in AI innovation, from startups to infrastructure giants, showcasing breakthroughs in chips, frontier models, and industry-specific applications.
Google AI chatbot asks user to ‘please die’
Google AI chatbot Gemini told a user to “please die,” prompting concerns about accountability in AI development. Google acknowledged the issue and implemented measures to prevent similar incidents.
Securing AI Model Weights
A new report identifies 38 attack vectors threatening frontier AI models, recommending comprehensive, multi-layered security plans to safeguard model weights against theft by cybercriminals and nation-states, emphasizing robust infrastructure investments.
AI in action
IBM's "AI in Action 2024" highlights how AI Leaders achieve over 25% revenue growth by improving productivity, cybersecurity, and customer experience, offering insights to build holistic, future-ready AI strategies.
Amazon to invest another $4 billion in Anthropic
Amazon invests $8 billion total in Anthropic, bolstering its AI efforts. AWS becomes Anthropic's primary cloud partner, supporting Claude's enterprise AI capabilities and advancing the generative AI market race.
Director Business Development & Partnership @ ElastiFlow | Sales Professional
Mike sold me. I'm following.
Government Internal Auditor - Data Analysis - CIA Certification Training
A very good roundup on Artificial Intelligence that makes it easy to stay up to date with what's happening in the Public Sector 🏬 and around the World!! 🌍
Information Security Analyst FRBP | CISSP
Enjoyed this week's bulletin Chris Kraft - keep up the good work! :-)
Learning Program Director: AI Education
Thanks Chris! I like your shoutout about the CoP and AI Training offered to government & military members - especially the three tracks: a) technical, b) leadership, and c) acquisition.
This can't come swiftly enough: the State Department's 20+ agency Authentication AI Task Force. "The task force is charged with working with foreign governments and partners on developing the technical standards and capacities to detect this category of content."
Interesting piece about the IRS and chatbots. I wonder what kind of red teaming is being done here. I'd be worried about jailbreaks depending on how much control they have over data changes.
@Chris - I really hope there's a follow-up on the results of the Syncurrent pilot.
And finally, "Canada's AI-powered project tackles insect mass extinction" sounds super neat.
Retired Federal Senior Executive Service (SES) | Chief Information Security Officer, Department of Homeland Security | Chief Information Officer, U.S. Marine Corps | Chief Technology Advisor, U.S. Marine Corps
A great resource, Chris. Thanks for staying at it to benefit the community.