Thursday Thoughts on AI + Law (2/23/23)
DALL·E 2 prompt: "a disney-like, CGI-quality cartoon of snow-capped mountains behind a sun-drenched city of los angeles in california"

As ChatGPT makes the cover of Time magazine, the world is trying to sort out how to incorporate AI into work, and worrying about the implications for white-collar workers and members of the creative class. In the meantime, organizations and governments are moving full steam ahead on crafting approaches to build ethics and accountability into AI development, while researchers continue to discover exciting new ways to use generative AI. Also, is Clippy ChatGPT's grandparent?

  1. ChatGPT makes the cover of Time Magazine.
  2. As the WSJ reports, everyone everywhere is trying to figure out how best to use AI-powered tools for work. The Guardian piles on with its own take.
  3. As Sam Altman points out, integration of AI into workstreams will happen quickly (and we’ll possibly wonder what we did before it).
  4. Conversely, there are some dark clouds on the horizon as AI is being used to select workers for layoffs. The impact of AI on jobs is being felt particularly acutely (and, for workers, negatively) in call centers. And by voice actors who are training the generative AI tools that will ultimately replace their occupation. The same goes for translators.
  5. But, as always, “The first thing we do, let's [replace] all the lawyers.” Henry VI, Part 2, Act IV, Scene 2.
  6. And then: maybe the DJs? And then video game designers? And then the rest of the creative class? Yikes. 
  7. Conversely, Noema digs into the relationship between AI and blue collar work.
  8. And of course, new technologies can spur the creation of new markets, like the emerging prompt market.
  9. Evidence that fear of regulatory impact can shape behavior and technology adoption: bankers are pulling back from their use of ChatGPT.
  10. If AI for search/chat is following the Gartner Hype Cycle model, we might be somewhere around the “Peak of Inflated Expectations” and headed downward.
  11. But before you write off LLMs as the product of hype machines, you should check out the state of the AI tech landscape; it’s huge.
  12. What seems to be largely missing from that landscape, though, is Europe.
  13. If you’re using AI for financial trading and it gets something wrong, well, at least now there’s insurance for that.
  14. Here’s a very cool overview of how AI can assist in disaster recovery.
  15. Microsoft posted a long-form read on its approach to AI development, Responsible AI, and more. As part of its efforts to reduce risk in this space, Bing AI chat replies will be volume-capped.
  16. Likewise, OpenAI is engaging in public discourse on how AI systems should behave (and who should make that choice).
  17. And (shameless plug here) LinkedIn announced its approach to Responsible AI.
  18. The Carnegie Council for Ethics in International Affairs is calling for a systemic reset and improvement of AI governance practices.
  19. An article published by Princeton University Press argues that a democratic system is important for ensuring appropriate AI governance requirements.
  20. Balaji Srinivasan has a thought-provoking thread on Twitter outlining the metaphysical component of AI.
  21. Act 1: ChatGPT enters school. Act 2: Schools respond.
  22. People think that AI will replace many jobs, but most assume it will be someone else’s job, apparently.
  23. The timeline for the AI Act might be slipping a bit.
  24. Baker McKenzie has a good overview of how the AI Act requirements can fit into a broader approach to AI ethics.
  25. Brazil is preparing its own version of AI regulations (which look very similar to the EU’s AI Act proposal, much as the LGPD followed the GDPR).
  26. State lawmakers in the U.S. are gearing up to try to regulate the impact of AI tools. The Mercury News editorial board is calling for California to lead in this space.
  27. India’s Economic Times is making the case for how India should approach AI regulation. Meanwhile, the Indian government is already using an AI-powered chatbot to help rural communities.
  28. And the SCMP makes a case for how to build an ethical AI governance regime in Hong Kong.
  29. Lelapa is working to bring members of the African tech diaspora back to the continent to build AI tools.
  30. Axios dives into the battle over combatting misinformation generated through chatbots.
  31. MIT Tech Review talks about how ‘bias bounties’ can help AI platforms combat misinformation.
  32. There has been a good deal of focus on Gonzalez v. Google and CDA 230. Twitter v. Taamneh raises other issues about platform liability (this time more related to content moderation algorithms).
  33. The use of generative AI for coding presents a number of cybersecurity challenges.
  34. Axios is flagging that, yes, it’s time to think about AI as a platform.
  35. Reminder: generative AI chatbots are not alive, are non-sentient, and don’t have feelings, despite what they might say.
  36. Perhaps the media is to blame for fostering the notion that chatbots are sentient?
  37. Chatbots also, like many things, follow a “garbage in, garbage out” rule. This results in more garbage prose being pushed into the universe. Some of which can already be found in books on Amazon.
  38. Perhaps even a ‘bad’ Bing isn’t actually that bad, after all.
  39. Social Europe is calling for the “AI Arms Race” to slow down so that regulation has a chance to catch up with AI deployment. Speaking of an arms race, where are Amazon and Apple?
  40. A very interesting proposal to adapt IP law to AI governance using copyleft and the patent troll business model. (More, not paywalled, here.)
  41. Good: German courts struck down the use of predictive algorithms for policing.
  42. AI photo apps for profile photos apparently had a shelf life even shorter than the NFT craze’s.
  43. South Korea is boosting its AI industry with government spending.
  44. The tech community in China is also reacting to the U.S.-driven generative AI boom.
  45. Sam Biddle is calling ‘malarkey’ on much of what is referred to as ‘AI’ by marketers and the tech press.
  46. Some journalists are concerned that their articles are being used to train LLMs.
  47. The U.S. military announced a framework for Responsible Military Use of Artificial Intelligence and Autonomy.
  48. Researchers at the University of Chicago developed a clever tool called Glaze to help artists protect their work from being used for generative AI training. If widely adopted, it’ll be interesting to see what this means for visual generative AI tools.
  49. Twitter might be open-sourcing its relevancy/feed algorithm next week.
  50. Public figures and institutions should be careful in their use of generative AI for communications, particularly on obviously sensitive topics.
  51. Here is a good deep dive into the topic of ChatGPT and hallucinations.
  52. AdWeek engages with ChatGPT on misinformation, antitrust, data protection, and copyright concerns.
  53. A British MP has indicated that the Online Safety Bill will apply to ChatGPT and similar technologies.
  54. Other MPs asked questions of AI developers in panels earlier this week. More here.
  55. Here are some thoughts on the intersection of antitrust/competition law and the battle for AI supremacy.
  56. Chat is one thing, but AI-discovered drugs might have significant impacts on human health.
  57. Forget chatbots: AI ‘clones’ are the next thing?
  58. Is Clippy the ancestor of chatbots? Probably!

Jonathan Adams

Independent Wealth Manager

Wow, there’s a lot to take in here! LinkedIn Principles (#17) seem to be a good and responsible guide. Thanks for sharing.

Sarahlynn Nichols, CIPT

Customer Security and Privacy Assurance / Customer Trust Professional

Gosh, Jon, so many interesting links this week. Where should I start? Got a top 5? (Or is that super obvious and you'd say no. 1-5 above?)
