How Will California's AI Bill Shape AI Safety in the US?


Story by Carrie Pallardy

Key Points:

  • The proposed legislation focuses on safety and security protocols for large AI models, those that cost $100 million or more to develop. It aims to hold the developers of these models liable for harmful outcomes. The legislation would empower the Attorney General to take civil action against AI developers if their models are linked to a catastrophic event. 
  • SB 1047 has garnered support from AI researchers Geoffrey Hinton and Yoshua Bengio, as well as billionaire Elon Musk. Hinton pointed to the bill's “sensible approach,” according to a press release from Senator Scott Wiener (D-San Francisco), who introduced the bill. Musk took to X to voice his support for regulating “…any product/technology that is a potential risk to the public.”
  • “If approved, legislation in an influential state like California could help to establish industry best practices and norms for the safe and responsible use of AI,” Ashley Casovan, managing director of the AI Governance Center at the non-profit International Association of Privacy Professionals (IAPP), says in an email interview.
  • The EU AI Act passed earlier this year. The federal government in the US released an AI Bill of Rights, though this serves as guidance rather than regulation. Colorado and Utah enacted laws applying to the use of AI systems.
  • Given how nascent the AI industry is, answering fundamental questions about safety will likely take time and money. Funding academic institutions, research labs, and think tanks could potentially help regulators answer those questions and shape their approach.
  • Regardless of how the regulatory landscape forms, AI developers and users have to think about safety now. “It's important to start to develop a strong AI governance literacy within your organization,” Casovan urges.


Welcome to InformationWeek's Big Picture!

You already know that every day at InformationWeek brings expert insights and advice to help today’s IT leaders identify the best strategies and tools to drive their organizations forward.

That means original reporting from our team of journalists and unique commentary you won’t see anywhere else! But in case you missed them, here are some of our other must-read favorites from this week:

Previewing Forrester's Technology & Innovation Summit

Forrester’s upcoming Technology and Innovation Summit North America 2024 is all about unleashing the power of tech, talent, and AI.

In the three exclusive video interviews below, we connected with Forrester analysts Jayesh Chaurasia, Julie L. Mohr, and Brandon Purcell to preview some of the topics they'll cover at the summit.

Subscribe to our official YouTube channel now for these exclusive video interviews, podcasts, and webinar recaps uploaded weekly!

Augmented Analytics Powered by GenAI

Story by Lisa Morgan, CeM, J.D.

Key Points:

  • Before generative AI (GenAI) hit the scene, analytics and business intelligence (ABI) platforms were already using natural language processing and machine learning to understand queries and explain analytical results. Since then, GenAI capabilities have been added to these platforms, though they vary in their level of AI and GenAI maturity.
  • “Augmented analytics is one of the primary ways that we use AI. While we stay away from generative AI for content creation or communication, AI can be a fantastic tool when it comes to data analysis,” says Edward Tian, CEO of GPTZero, which detects GenAI in written content.
  • According to Ryan Rosett, co-CEO and founder of Credibly, these kinds of enhanced analytics, through combining GenAI and supervised models, have led to shorter approval times, better accuracy, and a deeper understanding of the customer.
  • As a lending platform, Credibly must be able to conduct a fast and accurate risk assessment of business owners seeking financing.
  • To achieve that, the company developed a methodology to risk-adjust external data, created a proprietary search engine using GenAI that quickly ingests and summarizes metadata from external and internal sources, and paired that with automated machine learning models to provide more accurate, risk-adjusted determinations for underwriting use. 
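The pattern described above -- a GenAI step that ingests and summarizes metadata, feeding features into a supervised model for risk-adjusted underwriting decisions -- can be sketched in miniature. This is an illustrative toy, not Credibly's actual system; every function name, keyword list, and weight here is a hypothetical stand-in, and a production pipeline would call an LLM and a trained model instead of these stubs:

```python
# Toy sketch of a GenAI-plus-supervised-model underwriting pipeline.
# All names and numbers are hypothetical stand-ins.

def summarize_metadata(documents):
    """Stand-in for the GenAI summarization step: in production an LLM
    would condense external/internal metadata; here we just count
    illustrative positive and negative signal words."""
    positive = ("profitable", "growth", "repaid")
    negative = ("default", "lien", "overdue")
    pos = sum(doc.lower().count(w) for doc in documents for w in positive)
    neg = sum(doc.lower().count(w) for doc in documents for w in negative)
    return {"pos_signals": pos, "neg_signals": neg}

def risk_adjust(features, external_weight=0.5):
    """Stand-in for the 'risk-adjust external data' step: down-weight
    signals derived from third-party sources."""
    raw = features["pos_signals"] - features["neg_signals"]
    return external_weight * raw

def underwrite(documents, threshold=0.0):
    """Combine the summarized features into a (trivial) supervised-style
    score and return an approval decision."""
    features = summarize_metadata(documents)
    score = risk_adjust(features)
    return {"score": score, "approve": score >= threshold}

decision = underwrite(["Revenue growth strong, loans repaid on time."])
```

The design point the article's summary makes survives even in this toy: the GenAI layer turns unstructured text into features, while the final approve/decline decision stays with a deterministic, auditable model.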

Privacy Wake Up Call

Story by Shane Snider

Key Points:

  • The Dutch data watchdog, the Data Protection Authority (DPA), says the lack of consent from people whose images appear in the database amounted to a breach of the European Union's (EU) General Data Protection Regulation (GDPR). Clearview AI’s database is used by businesses and law enforcement agencies.
  • In a statement sent to InformationWeek, Jack Mulcaire, Clearview AI’s chief legal officer, called the decision “unlawful” and “unenforceable.” “Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU and does not undertake any activities that would otherwise mean it is subject to GDPR.”
  • This is not the first time Clearview AI has faced legal challenges over its facial recognition database practices. The company in June settled an Illinois lawsuit -- which consolidated several lawsuits from around the US -- over the firm’s massive photographic database. Plaintiffs in the case were given a share of the company’s potential value, rather than a traditional payout.
  • The legal controversies have done little to slow the success of the firm, which sells its database technology to law enforcement agencies and governments. Most recently, the technology was used in war-torn Ukraine to identify Russian soldiers.

User-Centric Design Matters

Story by Laura Shact and Greg Vert

Key Points:

  • The practical application of AI can often fall short due to a failure to integrate it thoughtfully. Leaders recognize this, but many are still trying to understand what working harmoniously with AI really looks like in their organization.
  • Deloitte research shows that 73% of leaders believe in the importance of ensuring human imagination keeps pace with tech innovation, but a mere 9% are making progress toward achieving that balance. 
  • Organizations that have successfully integrated AI demonstrate that human and business outcomes can improve when technology supports rather than replaces workers. Our latest Global Human Capital Trends report shows that leading organizations already recognize this. 
  • To shift perceptions and address concerns around AI adoption, organizations can invite employees into the process by communicating clearly with them about how AI is being implemented and offering training programs and digital playgrounds where they can experiment with the technology in a safe, controlled environment. 
  • Addressing AI’s errors and biases requires a human touch -- curious and empathetic workers who can ensure responsible, nuanced decision-making -- something AI currently cannot replicate. For example, unlike applications that either work or don’t, generative AI can produce results with varying levels of accuracy, making human oversight critical.


Commentary of the Week 

Story by Jordan Kenyon, PhD, PMP and Taylor Brady

Key Points:

  • Last month, NIST standardized the first three public key algorithms in its post-quantum cryptography (PQC) suite.
  • This marks a turning point in cybersecurity and opens the door for implementation of this next-generation cryptography. Leaders can begin deploying the new PQC standards through five key steps: 

  1. Plan for change. Cryptography exists across multiple network layers, allowing systems to encrypt and decrypt information in different ways to meet different goals from ensuring confidentiality and integrity to providing mechanisms for authentication and nonrepudiation.
  2. Understand the attack surface. Despite early warnings that all public key cryptography will be at risk in the quantum era, replacing vulnerable cryptography will not be easy.
  3. Prioritize high-value assets. The nature and timeline of the quantum threat to encryption makes prioritization paramount in any PQC transition.
  4. Prototype critical applications. NIST has now standardized three public key algorithms in its PQC suite and will soon release draft standards for a fourth.
  5. Design for cryptographic agility. NIST’s initial PQC standards are widely regarded as the best available defense against quantum attacks on public key cryptography.
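Step 5, cryptographic agility, is at heart a software design pattern: isolate the choice of algorithm behind an interface so that a classical scheme can later be swapped for a PQC one through configuration rather than code changes. A minimal sketch of that pattern, with placeholder "algorithms" standing in for real schemes such as RSA or ML-DSA (none of this is real cryptography):

```python
# Minimal sketch of cryptographic agility: callers depend on a Signer
# interface and a named registry entry, never on a concrete algorithm.
# The registered "algorithms" below are placeholders, not real crypto.
from typing import Callable, Dict, Tuple

# Registry mapping algorithm names to (keygen, sign) function pairs.
_ALGORITHMS: Dict[str, Tuple[Callable, Callable]] = {}

def register(name: str, keygen: Callable, sign: Callable) -> None:
    """Make an algorithm available under a configuration name."""
    _ALGORITHMS[name] = (keygen, sign)

class Signer:
    """Application code uses this interface; the algorithm behind it
    is selected by name, so migrating to PQC is a config change."""
    def __init__(self, algorithm: str):
        self.keygen, self.sign_fn = _ALGORITHMS[algorithm]
        self.algorithm = algorithm
        self.key = self.keygen()

    def sign(self, message: bytes) -> str:
        return self.sign_fn(self.key, message)

# Placeholder schemes standing in for e.g. RSA and ML-DSA.
register("classical-rsa", lambda: "rsa-key",
         lambda key, msg: f"{key}:{len(msg)}")
register("pqc-ml-dsa", lambda: "mldsa-key",
         lambda key, msg: f"{key}:{len(msg)}")

# Swapping to a PQC scheme is a one-line configuration change:
signer = Signer("pqc-ml-dsa")
sig = signer.sign(b"payload")
```

The same indirection applies at every layer where cryptography lives: if the algorithm name is data rather than hard-coded logic, future NIST standards can be adopted without re-architecting.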

Dive deeper into these 5 key insights in the story above.


Podcast of the Week

Podcast and Story by Joao-Pierre Ruth

Key Points:


  • At a time when organizations want to leverage AI, need more compute power, and plan for a post-quantum future, the resources that support those technologies see escalating demand.
  • There have already been hiccups with computer chip shortages that roused concerns of AI development maintaining pace. Many organizations want to balance environmental, social, and governance efforts with the real need for power and materials to thrive.
  • Are we on a path to a collision of innovation versus energy and hardware availability?
  • In this episode of DOS Won’t Hunt, Zachary Smith (bottom right in video), board member with the Sustainable & Scalable Infrastructure Alliance; Aidan Madigan-Curtis (upper left), partner with Eclipse; and Ugur Tigli (upper right), CTO with MinIO, discuss whether the limits of chips, energy, and other materials may hinder innovation and if a point of inflection is on the horizon.


Latest Major Tech Layoff Announcements

Original Story by Jessica C. Davis, Updated by Brandon Taylor

Key Points:

  • As COVID drove everyone online, tech companies hired like crazy. Now, we are hitting the COVID tech bust as tech giants shed jobs by the thousands.
  • Updated August 28, 2024 with layoff announcements from Brave, Scale AI, Apple, IBM, and Tome Biosciences.
  • Check back regularly for updates to our IT job layoffs tracker.


WATCH ON-DEMAND!

“Generative AI: From Bleeding Edge to Mainstream, How It's Shaping Enterprise IT”

An archived LIVE virtual event from August 22:

Presented by InformationWeek and ITPro Today

In this archived keynote session, Sidney Madison Prescott, MBA, founder and CEO of Mirror and Moonshot Productions, discusses the duality of generative AI, emphasizing its benefits in automating tasks and enhancing productivity, while also addressing concerns about data privacy and ethical implications.

Watch the archived “Generative AI: From Bleeding Edge to Mainstream, How It's Shaping Enterprise IT” live virtual event on-demand today.


This is just a taste of what’s going on. If you want the whole scoop, then register for one of our email newsletters, but only if you’re going to read it. We want to improve the sustainability of editorial operations, so we don’t want to send you newsletters that are just going to sit there unopened. If you're a subscriber already, please make sure Mimecast and other inbox bouncers know that we’re cool and they should let us through.

And if you’re thinking about subscribing, then maybe start with the InformationWeek in Review; it arrives only on Saturdays, in our new look.
