Ctrl, Alt, Elect: How will Trump or Harris regulate AI?
The regulation of AI has emerged as a critical issue for policymakers worldwide, with divergent approaches taking shape: some heavy-handed, others more light-touch. It is an important topic from a geopolitical standpoint, as divergence between some of the world’s major economies could significantly affect tech companies.
However, the issue is not at the forefront of the American public’s mind, even though policymakers see it as a pressing subject, especially after recent incidents such as the fake image of Taylor Swift endorsing Trump underscored the growing threat of misinformation, particularly in election campaigns.
Despite its tech prowess, the US has yet to implement an overarching federal framework for AI, leading to a patchwork of state-led efforts. This fragmentation stands in contrast with the EU, which proposed the AI Act in April 2021 to avoid a disjointed approach across member states. At a time when AI is reshaping industries globally, the absence of a unified US approach leaves room for varied state regulations to emerge.
California recently made headlines with Senate Bill 1047, which aimed to establish one of the most ambitious AI regulatory frameworks in the US. The bill proposed mandatory safety tests, public disclosure of safeguards against misuse, and even "kill switches" to mitigate risks. However, Governor Newsom vetoed it, citing concerns that such measures could stifle innovation - a reflection of the ongoing tension between regulation and progress. Newsom acknowledged the bill’s good intentions but warned of a potential harmful effect on the industry.
Meanwhile, states like New York and Colorado are adopting more cautious approaches, focusing on observing and addressing specific risks rather than implementing sweeping regulations.
The direction of AI regulation in the US hinges largely on the outcome of the upcoming election. A potential Harris administration is likely to take a firm stance on AI regulation. As a Californian, Harris understands the Silicon Valley perspective and the broader need for strong investment in American technology. She has emphasised that public safety and innovation are not mutually exclusive, an argument reinforced by the Biden Administration’s executive order in October 2023, which set new federal standards for AI safety while aiming to safeguard civil rights.
The Democratic Party platform also promises a focus on safe AI development, protecting national security, and preventing the spread of misinformation, while promoting investment in American-made technologies to bolster the country’s tech leadership.
Meanwhile, Trump appears to favour a deregulatory approach. Though historically critical of big tech’s approach to censorship and bias, especially in the context of elections, Trump remains fundamentally a businessman, and his views on AI reflect his commitment to capitalist ideals. He has pledged to overturn Biden’s executive order on AI, which he and others in the Republican Party argue imposes unnecessary restrictions on innovation and free speech.
Trump’s approach, backed by figures like Elon Musk, is likely to prioritise minimal regulatory intervention, allowing the industry to innovate with few constraints. His promise to ban AI censorship "on day one" signals a vision of AI development that is industry-driven, focused on maximising economic potential with little oversight.
Despite their different regulatory standpoints, both Harris and Trump recognise the need to maintain American leadership in technology, particularly in the face of Chinese competition. However, their paths to achieving this goal diverge: one through government safeguards and investment, the other through a market-led, laissez-faire approach. Whatever path is pursued, major US tech companies will have their say one way or another.
For UK AI firms and policymakers, understanding these divergent approaches is crucial. Unlike the UK, with its presumed plans for a narrow yet structured AI regulatory framework, or the EU, with its comprehensive AI Act, the US has yet to converge on a unified path, though it would be expected to lean closer to the UK’s narrower model.
Companies looking to operate in all three major markets will need to navigate this uncertainty and prepare for various scenarios depending on the election outcome.
Will the US pursue a path similar to the EU, with strict regulations, or lean towards a lighter-touch approach that prioritises rapid innovation over safeguards? It seems the American voters will be the ones writing the prompt on this one.
One thing is certain: the next month will be crucial in determining the future of AI regulation in the world’s largest technology market, and it remains highly unlikely that AI will ever go out of style.
George Farrer, Consultant, Fourtold