Week in Review (8/24/24)

Images created with generative AI (GenAI) continue to amaze me with their realism. The ability to insert someone into these images makes it even more mind-blowing. The problem? It's becoming increasingly difficult to tell what's real and what's generated. It would be so much easier if you had a label, maybe something like this:


[Image: AI-generated image of me]

So, how did I generate this image? It was created using a model fine-tuned with about twenty pictures of me. The fine-tuning process produces a small set of adjusted weights - essentially stored values that capture the characteristics of the training images - that are then used alongside a more general-purpose image model. The base model I used is called FLUX.1 [dev], and it has 12 billion parameters. Fine-tuning took less than an hour and cost under $5. Now I have a model capable of generating images of me on demand.
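
If you're curious what the generation step looks like in code, here's a minimal sketch using Hugging Face's diffusers library. The LoRA weights path and the trigger word "TOK" are placeholders, and the sampler settings are common defaults rather than my exact values:

```python
import torch
from diffusers import FluxPipeline

# Load the 12B-parameter FLUX.1 [dev] base model.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Attach the fine-tuned weights (a LoRA adapter in this sketch).
# The path and the trigger word "TOK" below are placeholders.
pipe.load_lora_weights("path/to/my-lora-weights")
pipe.to("cuda")

# Generate an image of "me" on demand.
image = pipe(
    prompt="a photo of TOK hiking a mountain trail at golden hour",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("generated_me.png")
```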

Image generation technology has evolved rapidly over the last two years. We now regularly come across generated images that are hard to distinguish from the real thing. This progress is exciting and offers incredible opportunities when used responsibly; used irresponsibly, it can cause significant harm. Deepfakes, for example, have been making headlines, and many states are implementing regulations to address their misuse. Some recent legislation includes:

  • A New York law that makes it illegal to distribute AI-generated explicit images without permission
  • Alabama's Child Protection Act, which makes it a crime to use AI to create sexually explicit depictions of children
  • A New Mexico law that requires political campaigns and candidates to disclose when an ad uses false information generated by AI

More examples of recent legislation can be found here.

So, how do we combat deepfakes? Legislation is one approach. There are also efforts to leverage watermarks to authenticate images. Another approach is AI-based deepfake detection models. I tried one of these models on some generated images. The result? It correctly identified one as fake but mistook another for the real thing.
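
Running one of these detectors takes only a few lines. This is a minimal sketch using the Hugging Face transformers pipeline; the checkpoint name is a placeholder, not the specific detector I tested:

```python
from transformers import pipeline

# Hypothetical checkpoint name; substitute any image-classification
# model trained to separate real photos from generated ones.
detector = pipeline("image-classification", model="some-org/deepfake-detector")

# Score the image produced in the earlier sketch.
for result in detector("generated_me.png"):
    print(f"{result['label']}: {result['score']:.2%}")
```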

For now, it's important to recognize that this technology is rapidly evolving and increasingly accessible. As capabilities improve, the potential for both positive applications and misuse will grow. Those of us in the public sector have a responsibility to protect the most vulnerable members of our communities.

Now, back to this week's public sector news. Let me know how you like the new layout!

Federal

▪ White House releases final AI use case guidance

▪ GSA announces free AI training for feds

▪ NASA CAIO is man on a mission

▪ USAID is the first federal agency customer for OpenAI’s ChatGPT Enterprise

▪ VA believes AI can save vets from pain and suffering

▪ OSTP, GAO still differ on informing agencies of AI implementation status

▪ State/GAO provide details on AI deployments

▪ Agency intel officials tackling complex implications of AI

▪ Defense priorities in the open-source AI debate

▪ Marines searching for AI-enabled sensors

▪ Army wraps up 100-day sprint, plots next steps for AI

▪ Zero trust and AI remain top priorities

▪ FCC settles spoofed AI robocall case

▪ DOE wants to use AI to speed up permitting


State/Local

▪ IL enacts new AI legislation, joining CO in regulating algorithmic discrimination

▪ Schools buying AI to detect guns

▪ Privacy, responsibility discussed at inaugural AI summit

▪ TX AI-powered search drives intelligent child support tool

▪ SD school district implements AI policy

▪ State lawmakers contemplate AI ahead of November elections

▪ Why should CA lead on AI regs?

▪ CA could pass AI regs, but what do they say?

▪ Is a state AI patchwork next? AI legislation at a state level in 2024


International

▪ S. Korean LG releases open-source AI model

▪ Colombian landmark ruling on AI in the court

▪ Lithuania's ready to expand AI adoption

▪ Norwegian companies need more AI-competent board members

▪ Tony Blair’s AI mania sweeps new UK gov

▪ UK preparing NHS for the AI era

▪ Don’t reinvent the wheel to govern AI

▪ AU passes new laws to combat deepfakes

▪ Militaries are waiting for the AI revolution

▪ North Jakarta launches local GenAI chatbot

▪ Africa now has a continental AI strategy

▪ UNESCO releases consultation paper on AI governance

▪ Canadian national security agencies should reveal how they’re using AI

▪ UK House of Commons AI reading list

▪ AU gov agencies to outline AI use


Other

▪ Trump posts fake AI image of Taylor Swift

▪ DeepLearning.AI Batch news

▪ Deloitte releases the state of GenAI in the enterprise – Q3 report

▪ GenAI still a solution in search of an answer

▪ Company responsible for Biden deepfake call to pay $1m fine

Comments

Abiud (AJ) Amaro Diaz, Freelance Technology Consultant:

The government is going to have an incredibly difficult time regulating AI given the Chevron decision. If I’m the Justice Department I would challenge the Supreme Court to strike down any decision from the regulators so they have an opportunity to reverse this stupid decision.
Sahaj Vaidya, Policy Expert at SuperAlign:

Thank you Chris Kraft

Maurie Beasley, M.Ed, Keynote Speaker | AI Professional Development | AI Education Professionals (AIEdPro) Founder | Author | Counselor:

Chris, I am concerned with the ability of AI to create images and videos that are “deepfakes”. I am in K12 education and believe that schools need to up their game when it comes to educating students about all they may face in the future. Thank you so much for all the information that you put into the public sector AI news!👏🏻
