What founders need to know about the regulatory landscape

The acronyms are coming in hot, creating both risks and opportunities

The regulatory landscape has never been more complex for founders. Just a few years ago the biggest headache was GDPR and all the paperwork it forced upon companies large and small. Since then, the goalposts of GDPR have kept shifting, making compliance a moving target. Thanks to the Brussels effect (no, not a beer), the rights-based approach pioneered by Europe’s privacy law has found its way into new regulation around the world. Even in the US, where politicians seem congenitally incapable of passing a federal privacy law, states like California have implemented GDPR-like laws and codes.

And there is more regulatory innovation (if you’re inclined to call it that) coming. Although the Digital Services Act (DSA) and Digital Markets Act (DMA) – which will be fully enforced from next month – are aimed at large platforms, they will have an impact downstream on many other businesses. And the UK’s enormous Online Safety Act will – for better or worse – change the way user-generated content is processed and published in the UK and likely beyond. 

But regulation is both burden and opportunity. Given the EU’s explicit intent to make them more onerous for large platforms, the new rules—including the upcoming AI Act—offer a chance for young companies to innovate within a two-tier regulatory system that will be more challenging for larger competitors. At least in theory.

So, what do founders have to think about as they get ready to scale up in this new world?

Anticipate the direction of travel and build the key principles of rights-based regulation into your products from the start. 

  • Data minimisation is now well established as an operating principle, and yet many companies still try to implement it after designing their products to collect more data than necessary. Don’t collect individual-level data unless you have both a sound legal basis[1] and a very good commercial reason. (A sketch of enforcing this at the point of collection follows this list.)
  • Both the DSA and the UK’s Online Safety Act create new operator liability for user-generated content (UGC). Consider early on how you can prevent illegal or legal-but-harmful content from spreading on your service. You will need mechanisms for content moderation, handling user complaints and for reporting.[2]
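
On the data minimisation point, here is a minimal sketch, assuming a Python backend; the field names, LegalBasis labels and justifications are illustrative assumptions, not legal advice. The idea is to make "fields we may collect" an explicit, justified allowlist, so minimisation is enforced when data arrives rather than retrofitted later:

```python
# Minimal sketch: data minimisation enforced at the point of collection.
# Field names, LegalBasis labels and justifications are illustrative only.
from enum import Enum

class LegalBasis(Enum):
    CONTRACT = "performance of a contract"
    CONSENT = "consent"

# Every field you persist must carry a legal basis AND a commercial reason.
COLLECTION_POLICY = {
    "email": (LegalBasis.CONTRACT, "needed to create and recover the account"),
    "password": (LegalBasis.CONTRACT, "needed to authenticate the user"),
    # "date_of_birth", "phone", "location" are deliberately absent:
    # if a field is not in the policy, it is never stored.
}

def minimise(raw_signup_form: dict) -> dict:
    """Drop anything the policy does not explicitly justify."""
    return {k: v for k, v in raw_signup_form.items() if k in COLLECTION_POLICY}

print(minimise({"email": "a@b.co", "password": "hunter2", "location": "Berlin"}))
# -> {'email': 'a@b.co', 'password': 'hunter2'}
```

Anything not in the policy simply never gets stored, which is far easier than trying to find and delete it later.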

The DMA's restrictions on social media and marketplaces will likely make them less effective as channels for acquiring consumers or selling product, at least in the near-term. Build some buffer into your financial models.
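
How much buffer? A toy sensitivity check makes the exercise concrete (all numbers here are invented): if the DMA blunts a paid channel and your customer acquisition cost inflates, how far does your payback period slip?

```python
# Toy sensitivity check: how payback period moves if a channel's CAC rises.
# All numbers are invented for illustration.
monthly_gross_margin_per_customer = 20.0  # EUR
baseline_cac = 120.0                      # EUR

for cac_increase in (0.0, 0.25, 0.50):    # 0%, 25%, 50% CAC inflation
    cac = baseline_cac * (1 + cac_increase)
    payback_months = cac / monthly_gross_margin_per_customer
    print(f"CAC +{cac_increase:.0%}: {cac:.0f} EUR, payback {payback_months:.1f} months")
# CAC +0%: 120 EUR, payback 6.0 months
# CAC +25%: 150 EUR, payback 7.5 months
# CAC +50%: 180 EUR, payback 9.0 months
```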

Appreciate that the line between maximising sales or engagement and being fair to consumers is shifting. The FTC is going after companies for using dark patterns and over-optimised transaction flows to treat consumers unfairly. The EU is ramping up its investigations of similar cases. Consider: 

  • Testing every sign-up and check-out flow with your grandma to make sure she understands what she is buying.
  • Making it easy to reverse a transaction or cancel an account.
  • Avoiding mechanics that are falling out of favour, like fake countdown timers.

Treat honest design as part of building a healthy, trusting relationship with consumers.

Don’t forget that children and teens are the fastest-growing digital audience, so whether or not your service is aimed at them, you need to consider their interests and apply appropriate best practices.[3] Regulators are sharpening their focus here too, as the FTC’s proposed changes to COPPA show.[4]

It’s not good enough to move fast and allow some stuff to break – data security is top of the list. Build in cybersecurity best practices from the get-go and don’t end up like 23andMe. Remember that keeping customer data secure is also a core tenet of privacy laws everywhere.
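
One concrete example of "from the get-go": the 23andMe incident was driven by credential stuffing, which a breached-password check at sign-up helps blunt. A minimal sketch using the public Have I Been Pwned range API follows; the endpoint is real, but the surrounding sign-up flow is assumed:

```python
# Sketch: reject passwords that appear in known breach corpora at sign-up.
# Uses the Have I Been Pwned k-anonymity range API: only the first five
# characters of the SHA-1 hash ever leave your server.
import hashlib
import requests

def is_breached(password: str) -> bool:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=5)
    resp.raise_for_status()
    # Each response line looks like "SUFFIX:COUNT"
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if is_breached("password123"):
    print("Choose a different password -- this one appears in known breaches.")
```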

If you’re using AI or making AI part of your product, get an early start on your policy for its use and deployment. Ask yourself:

  • Have you documented how your sourcing and use of training data complies with privacy laws[5] and copyright laws[6]?
  • Do you have internal policies and guardrails in place to govern the use of AI in the company? Are you assessing use cases based on value (of AI’s use) vs risk (given AI’s flaws)? (A sketch of such a gate follows this list.)
  • Are you able to measure that your AI-based system is working as intended? What metrics are you using to quantify its effectiveness and its safety?
  • What is the right level of abstraction at which you need to be able to explain how your AI system works? To customers? To regulators?
  • What level of legal responsibility (and liability) are you prepared to take for the output (and potential errors) of your AI-powered system? Will that sufficiently address concerns your customers might have?
  • Etc.[7]
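
On the policies-and-guardrails question above, here is a minimal sketch of a value-vs-risk gate for internal AI use cases. The scoring scale, thresholds and field names are invented placeholders; the point is that every use case passes through a documented, repeatable decision:

```python
# Minimal sketch of a value-vs-risk gate for internal AI use cases.
# The scoring scale and thresholds are invented placeholders.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    value: int           # 1 (marginal) .. 5 (transformative)
    risk: int            # 1 (benign errors) .. 5 (legal/safety exposure)
    human_in_loop: bool  # is there review before output reaches a user?

def decision(uc: AIUseCase) -> str:
    if uc.risk >= 4 and not uc.human_in_loop:
        return "rejected: high risk requires human review"
    if uc.value <= uc.risk:
        return "needs stronger justification or mitigation"
    return "approved: record metrics and review quarterly"

print(decision(AIUseCase("support-ticket triage", value=4, risk=2, human_in_loop=True)))
# -> approved: record metrics and review quarterly
```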

It all sounds a bit painful to consider, especially if you’re consumed with recruiting talent and finding product-market fit. But in this new world, compliance is pure table stakes. Advantages are created by being ahead of the pack, even finding a source of regulatory arbitrage.


This article first appeared on my Substack. If you like it and would like timely delivery of future posts directly in your inbox, please consider subscribing.


[1] And note that the pool of available legal bases is shrinking. Even before GDPR became law in 2018, regulators provided guidance which effectively restricted the use of the legitimate interest legal basis in targeted advertising. More recently, various decisions against Meta invalidated the company’s use of contractual necessity as its legal basis for profiling users. It then announced it was switching to legitimate interest as the legal basis for ads personalisation. (The CJEU has since signalled that consent is the only viable legal basis for the kind of tracking and profiling that drives content recommendation engines, including advertising.) In October 2023, Meta launched a new ‘consent or pay’ model in Europe, which is now also being challenged by activists, who question whether users (especially existing ones) can give consent that is ‘freely given’ when there is such an imbalance of power between the user and a company that holds all of their account history and content. Eric Seufert provides a very good overview of the practical issues with Meta’s approach and how regulators might respond.

[2] For an overview of the challenge facing growth and midmarket companies in the near-term under the DSA, see my previous post: Why we should all be thinking about DSA readiness.

[3] More on this in a future post on how to create a global baseline standard for handling kids and teens in your service.

[4] For a detailed review, see There’s more to the FTC’s proposed COPPA changes than meets the eye.

[5] The UK Information Commissioner’s Office (ICO) has been usefully proactive in outlining how to think about the UK GDPR (which will probably—not legal advice!—be reusable for the EU GDPR) in the context of training AIs on personal information. The only viable legal basis is legitimate interest, but this is subject to a very thorough balancing test (Is there a valid interest? Is the data necessary? Do individuals’ rights override the interest?).

[6] Although the debate on AI and copyright is very much in flux (more on this in an upcoming post), you’ll want to be able to articulate how fair use (US) or the text-and-data-mining (TDM) exceptions (EU) apply to any copyrighted training data you use (or your model uses).

[7] More on this in a future post outlining a framework for AI policy that smaller companies can use.

