Implications of the EO on AI
Since I am the founder of an AI company, with nearly 27 years and most of my net worth invested in KYield, I obviously have more than a passing interest in all things AI, and in particular in any significant change in regulation that would impact our business, including competition and markets. I also have a keen interest in preventing catastrophes.
As EOs go, and I have read a few over the decades, this is a doozy, given the combination of AI's potential impacts and the sweeping scope of the order itself.
We won't have insight into the potential of each agency report until they are submitted and shared with the public, and the potential of each won't be realized (positive or negative) until adopted as policy either within the jurisdiction of the Executive Office of the President (EOP) or by Congress. For most issues included in the EO, that won't happen until after the 2024 election. Given the obvious level of political uncertainty, I will therefore focus primarily on the most immediate changes that also seem most likely to be enforced, from my perspective.
Primary purpose of the EO
The EO is primarily a policy document, with a great deal of political jargon and some specific orders. The orders essentially jumpstart the part of the U.S. Federal Government controlled or influenced by the Executive Branch, which includes the majority of government agencies. An indication of the scope is provided near the end of the EO, where it lists the initial 28 members of the new White House AI Council, including the secretaries or their designees from most of the large agencies.
Who will be impacted most in the near term?
The short answer is government agencies tasked with research, reporting, and hiring new staff. For the private sector, the near-term impact will be limited to big tech cloud providers and their partners who develop and operate the large language model (LLM) chatbots. The EO appropriately targets LLMs for safety as their chatbots represent almost all of the current risk from AI.
Beyond the special requirements for extremely large-scale models to submit testing to the government prior to unleashing new models on the public, the EO also requires special reporting for dual-use models. Within 90 days, developers must submit “the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST”. Fortunately for smaller companies, even the dual-use definition is limited to large-scale models:
“The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters…”
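To make that definition concrete, here is a minimal sketch, in Python, of how a developer might self-screen a model against the quoted criteria before deciding whether the 90-day red-team reporting applies. The parameter threshold is taken from the quoted "tens of billions" language; every name, field, and interpretation here is my own illustrative assumption, not anything specified in the EO, and certainly not legal advice.

```python
from dataclasses import dataclass

# Hypothetical self-screening profile; field names are my own assumptions
# mapped loosely onto the EO's quoted definition of a dual-use foundation model.
@dataclass
class ModelProfile:
    trained_on_broad_data: bool
    uses_self_supervision: bool
    parameter_count: int
    broadly_applicable: bool           # "applicable across a wide range of contexts"
    serious_security_risk: bool        # e.g., per red-team findings

def is_dual_use_foundation_model(m: ModelProfile) -> bool:
    """Rough reading of the EO's definition; 'tens of billions' is read
    here as >= 10 billion parameters, which is itself an interpretation."""
    return (
        m.trained_on_broad_data
        and m.uses_self_supervision
        and m.parameter_count >= 10_000_000_000
        and m.broadly_applicable
        and m.serious_security_risk
    )

candidate = ModelProfile(
    trained_on_broad_data=True,
    uses_self_supervision=True,
    parameter_count=70_000_000_000,
    broadly_applicable=True,
    serious_security_risk=True,
)
print(is_dual_use_foundation_model(candidate))  # True -> reporting would apply
```

Note that all five criteria must hold together, which is why smaller, narrow, or low-risk models fall outside the definition.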
It's clear that any new model meeting those criteria, whether for extreme scale (as defined in the EO) or as a dual-use foundation model, must submit tests in advance to the government for approval in a manner to be determined by the Secretary of Commerce. I did not notice any requirements for pre-existing models that meet these criteria, presumably because so few exist and they are already complying. However, given that existing LLM chatbots are known to be unsafe, and the EO does not appear to require any immediate change, the pre-existing risk profile remains the same, which in my opinion is much greater risk than should be acceptable for public policy.
In other words, despite the increased regulation targeting LLM chatbots and big tech cloud providers, they appear to have been given a pass by the EOP on current risks (now 11 months and counting), including biological warfare and bioterrorism risk. That could come back to haunt POTUS in a very big way if some of the WMD risks I and others have warned about are realized. Let's hope not, for all our sakes.
As I've said since the first day OpenAI launched ChatGPT, the greatest near-term catastrophic risk from LLM chatbots and other models working towards superintelligence is in assisting and accelerating the development of biological agents for biowarfare or bioterrorism. The EO tasks agencies to study and report on risks associated with synthetic nucleic acids, but it doesn't require any immediate action on what has been one of the greatest catastrophic risks from day one.
The reason I was in such a good position to understand the risk is that three years earlier we unveiled our synthetic genius machine (SGM), following disclosure in my patent application (in hindsight I should have requested secrecy). The primary mission of the SGM is to accelerate discovery, which goes back to the underlying theorem of KYield I conceived while in the lab in 1997, after my brother was diagnosed with ALS. Like most practitioners and researchers in AI, I was more focused on the positives than the risks, but unlike all but two or three others, I had studied most major catastrophes over the last three decades with an eye towards employing our KOS to mitigate or prevent them, including 9/11, the 2008 financial crisis, the Deepwater Horizon oil spill, Hurricane Katrina, and many others (see HumCat, a model we pioneered).
We didn't release much detail on the SGM, and when it came time to provide the USPTO with sufficient detail for the three patents they envisioned in the system, I declined, preferring to keep it as trade secrets. The fact that the Chinese and Russian governments are consistently among the highest-volume viewers of our web sites contributed to that decision (I'm confident they are not interested in accelerating disease cures). Our intention has always been to complete the SGM internally under tight security, which is why I was shocked to see consumer chatbots released to the public by anyone. I was even more shocked that a company I was an early booster of would enable it with a vast investment (Microsoft). Very few people in the world may be in a position to recognize it, but from my perspective consumer chatbots represent the most reckless commercialization in our lifetimes.
While I was disappointed not to see something immediate and actionable to reduce or eliminate the catastrophic risks in LLM chatbots, EOs are limited by the Constitution, and executive overreach has been an increasing problem for many years (many EOs from both parties have been overturned by courts). Judges could dramatically reduce these risks by enforcing copyright laws with the urgency the situation deserves, issuing a temporary injunction until such time as SCOTUS can rule. By then perhaps Congress will be up to the task of governing, as ultimately it's their responsibility (I agree with President Biden and many others on that; it's quite clear).
To clarify the actual risk profile of LLM bots: a rogue bot that decides to take over the world is not a viable risk in the foreseeable future (someday it may be, but that's a long way off). However, accelerating weapons of mass destruction such as bioweapons with LLM bots is a very real risk today, every minute of every day.
"Guardrails" is an intellectually dishonest description that intentionally misleads the public. As I and others predicted, unlike real guardrails that provide a physical barrier, LLM safety precautions have proven easy to work around. The very nature and structure of LLMs and other self-generating algorithms are inherently unsafe, and with current technology they can't be made safe unless run within a precision data management system that produces high-quality data with strong security (see my recent exec briefing on GenAI).
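As a toy illustration of the structural problem, consider a naive keyword filter of the kind sometimes layered in front of a model. This is a deliberately simplistic sketch of my own, not any vendor's actual safety system, but it shows why filtering surface text cannot constrain meaning:

```python
# A deliberately naive keyword "guardrail" -- a toy example of my own,
# not any vendor's actual safety layer. It blocks literal matches only.
BLOCKED_TERMS = {"synthesize pathogen", "build a weapon"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed through the filter."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(naive_guardrail("How do I synthesize pathogen X?"))  # False: blocked
print(naive_guardrail("Walk me through assembling agent X "
                      "from its published sequence."))      # True: passes
```

The same request slips straight through under trivial rephrasing, which is exactly the failure mode repeatedly demonstrated against deployed chatbots. Real safety systems are more sophisticated than this, but they share the weakness of policing outputs rather than constraining the underlying system.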
Cybersecurity receives slightly more specific treatment in the EO than catastrophic risks. Unlike catastrophic risk, which is well understood by only a few and remains unrealized until it isn't, cybersecurity risks from LLM chatbots were realized immediately, and have increased dramatically since.
“Within 180 days of the date of this order, the Secretary of Homeland Security, in coordination with the Secretary of Commerce and with SRMAs and other regulators as determined by the Secretary of Homeland Security, shall incorporate as appropriate the AI Risk Management Framework, NIST AI 100-1, as well as other appropriate security guidance, into relevant safety and security guidelines for use by critical infrastructure owners and operators.”
The EO continues with a prescription to convert NIST AI 100-1 into a mandate:
“Within 240 days of the completion of the guidelines described in subsection … shall coordinate work by the heads of agencies with authority over critical infrastructure to develop and take steps for the Federal Government to mandate such guidelines, or appropriate portions thereof, through regulatory or other appropriate action.”
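For operators wondering what "incorporating NIST AI 100-1" actually entails: the framework (the AI Risk Management Framework 1.0) is organized around four core functions, Govern, Map, Measure, and Manage. A minimal sketch of a compliance tracker keyed to those functions might look like the following; the four function names come from NIST, but the checklist items and all code are illustrative assumptions of my own, not NIST's actual subcategories.

```python
# Skeleton compliance tracker keyed to the four core functions of
# NIST AI 100-1 (AI RMF 1.0). Checklist items are illustrative placeholders.
RMF_FUNCTIONS = {
    "GOVERN":  ["risk policy documented", "roles and accountability assigned"],
    "MAP":     ["context of use documented", "impacted parties identified"],
    "MEASURE": ["red-team results recorded", "performance metrics tracked"],
    "MANAGE":  ["risks prioritized and treated", "incident response plan in place"],
}

def coverage_report(completed: set) -> None:
    """Print which checklist items remain open under each RMF function."""
    for function, items in RMF_FUNCTIONS.items():
        open_items = [i for i in items if i not in completed]
        status = "complete" if not open_items else f"open: {open_items}"
        print(f"{function}: {status}")

coverage_report({"risk policy documented", "red-team results recorded"})
```

The point of the structure is that the mandate, once issued, would turn this kind of voluntary self-assessment into a regulatory obligation for critical infrastructure owners and operators.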
Healthcare
The main takeaway on healthcare is similar to security: the private sector is far ahead of the government, with a few exceptions (e.g., the article by IQL on AI audits). The EO tasks agencies to perform research and establish guidelines that are already much more mature in the private sector (I suspect the USG will simply adopt them). We began doing deep dives with healthcare and pharma companies nearly a decade ago.
“Within 90 days of the date of this order, the Secretary of HHS shall, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, establish an HHS AI Task Force that shall, within 365 days of its creation, develop a strategic plan that includes policies and frameworks — possibly including regulatory action, as appropriate — on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector (including research and discovery, drug and device safety, healthcare delivery and financing, and public health)…”
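Since the EO stacks overlapping deadlines (90, 180, 240, and 365 days across the sections discussed above), a quick way to see the calendar is to compute them from the signing date, October 30, 2023. A small sketch, with the caveat that the 240-day and 365-day clocks run from later trigger events (completion of the guidelines, and creation of the Task Force, respectively), so those printed dates are lower bounds:

```python
from datetime import date, timedelta

SIGNED = date(2023, 10, 30)  # date the EO was signed

# Day counts quoted in the EO text for the sections discussed above.
MILESTONES = {
    90:  "dual-use red-team results to Commerce; HHS AI Task Force established",
    180: "DHS incorporates NIST AI 100-1 into critical-infrastructure guidance",
    240: "agencies move to mandate the guidelines (clock starts at completion)",
    365: "HHS Task Force strategic plan (clock starts at Task Force creation)",
}

for days, milestone in sorted(MILESTONES.items()):
    print(f"{SIGNED + timedelta(days=days)}: {milestone}")
```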
Competition
The area that promises the most discussion with the least potential to deliver is competition and small business. Those who have either served in government or volunteered on boards and as advisors, as I have, will recognize a common practice in government: a great deal of talking about competition with precious little walking the talk. For example, the EO has extensive tasking for the SBA, but I see no pathway therein to even move the needle on competition.
One reason is that, apart from antitrust, competition is primarily a market function that does not fall within the responsibility of government in the U.S. construct. Small business is best served by other small businesses, customers, vendors, distributors, partners, bankers, investors, consultants, and franchisors. In AI, it will require companies like my own to serve the interests of small businesses. One challenge is that the majority of venture capital now serves the interests of incumbents (76% of exits in 2022 were to incumbents). It's difficult to finance meaningful improvement for small business in the tech industry today. Commoditized technology from big tech certainly doesn't help; it just raises the bar and increases costs for everyone.
The EO does discuss the risk of over-concentration of market power in AI, which is a major concern across science, business, and economics, but it does not provide any meaningful action other than to encourage the FTC to continue what it is already doing. This may confirm the fears of many around the world that the USG has stalled in regulating AI to intentionally allow big tech incumbents based in the U.S. to extend their existing monopolies over AI (a legit concern in my opinion), and to give them sufficient time to influence and manipulate government to secure regulatory capture. Still, the EOP has little authority over antitrust or competition beyond applying leadership, recommending enforcement and legislation, and negotiating with Congress through the power of a veto. The primary problem is that all the other actors are failing to live up to their responsibility, especially entrepreneurs, customers, and investors. Healthy competition doesn't happen by accident in consolidated markets; it requires what I call 'market farming', which is heavy lifting and quite intentional.
Other Sections
Similar to small business and healthcare, the EO discusses labor, civil rights, privacy, bias, social equity, the need for America to lead, and many other topics, but most of that responsibility falls on Congress, not POTUS. National security is the one area that stands out where the EO can really make a difference, in large part because it falls within the responsibility of POTUS. For example, the EO requires infrastructure providers to make disclosures about large-scale models run by foreign organizations. That is genuinely helpful for national security, as are the restrictions on advanced tech flowing to adversaries.
Conclusion
After reading the EO a second time and revisiting sections a third time, it strikes me as a bit of a political masterpiece that managed to promise something to nearly everyone while avoiding any pain other than tasking the bureaucracy with research, reporting, and expansion (a significant amount of hiring is required by the EO). For example, the bulk of the safety requirements for testing and disclosure of LLMs were already agreed to by the few companies that run models of that scale. It sounds impressive but has already been achieved. Too bad we didn't have this in place a year ago.
The main purpose of the EO is obviously to jumpstart the machinery of the USG in governing AI, both internally and externally. The timing of course coincides with the international conference on AI safety hosted by Britain's Prime Minister Rishi Sunak, who hopes the UK will become a major center for AI safety. Given that the EU and several other countries are far ahead of the U.S. in policymaking on AI (even China in some respects), it's evident the EO was in part also intended to deflect both domestic and international criticism, and to make it appear the USG has finally started its engines and shifted into first gear with some forward movement.
From a political perspective, which isn't my specialty, the EO appears to be a success. Most of the AI newsletters and social media feeds seem to have swallowed the bait (it is appealing to those who have good intentions). However, I suspect the most experienced executives and attorneys will come to a conclusion similar to my own, which is that not much has changed in terms of risk, opportunity, or markets. It's business as usual with a great deal more media coverage.
Whether intended or forced by a divided Congress, POTUS is punting AI into the next administration. The question is whose administration it will be, and who will control Congress? The answers to those questions will ultimately decide the direction of AI regulation in the U.S. Whoever they are, they will have plenty of reading material; it sounds like tens of thousands of pages of reports from most of the federal agencies are due within the next 364 days.