KOSA 2.0: X marks the spot?

Is this a better bill or just better packaging, courtesy of Elon?

Congress's latest attempt to ‘think of the children!’ has arrived, with an updated version of the Kids Online Safety Act or KOSA (+COPPA 2.0 = KOSPA) that tries to thread an impossible needle between child safety and free expression. But in a plot twist that perfectly captures the absurdity of today's tech policy landscape, this iteration comes courtesy of the company that built its brand on being the internet's most unregulated space.

This is the latest version of a bill that has been stuck in political limbo since it overwhelmingly passed the Senate, 91-3, in July.

But like its many predecessors, it still threatens to create more problems than it solves.

Or is it better than nothing?

Let me get this out of the way upfront: how creepy is it that this draft was effectively written by Elon Musk’s team at X[1], and that Congress seems happy to position it that way? It used to be that lobbyists influenced legislation quietly and legislators happily pretended they had written the laws themselves…

Anyway.

There is a lot of pressure to pass this bill before the holidays (though House Speaker Mike Johnson cautioned Republicans not to get too excited).

On the face of it, the revisions take on board a lot of the earlier criticisms, seeking to narrow definitions and clarify scope where the original was vague. The draft also purports to rein in the powers of state attorneys general in an effort to mitigate the risk of political misuse.

Reminder: this law would apply to anyone under 17 in the US (a ‘minor’).

Let’s dive into what’s changed—and what hasn’t.

Free Speech & Censorship

One of the most glaring concerns with earlier drafts was the fear that the law would be used by overzealous regulators or attorneys general to suppress lawful but controversial speech[2]. The new KOSA tries to address this head-on with new language preventing enforcement against "the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment." But as Techdirt’s Mike Masnick points out, the very mention of the First Amendment in the text is an admission that the bill is likely to be used to infringe it.

In any case, the change seems largely theatrical because (a) a politically motivated regulator (like, say, proposed new FTC chair Andrew Ferguson[3]) can claim a breach first, requiring the operator to argue after the fact about what constitutes free speech, and (b) the real censorship risk comes not from direct viewpoint discrimination but from platforms' risk-averse responses to potential liability. When facing the choice between hosting potentially controversial content and risking fines or lawsuits, platforms will inevitably choose self-censorship.

The bill now also includes a “Rules of Construction” clause which lists all the things it will NOT force companies to do, which happen to be many of the things critics are concerned about (probably because the bill seems to require them elsewhere). Good luck figuring that out…

One of those Rules says that platforms are not required to prevent minors from accessing controversial or unpopular content if it is intended to prevent or mitigate harms (such as eating disorders). But it then puts the onus on platforms to somehow predict how different users might use, or react to, content on such topics, a virtually impossible task.

Duty of Care: gone is “best interest”!

The 'duty of care' provision has always been KOSA's philosophical core and greatest vulnerability. The new draft attempts to solve this by moving away from the mandate (imported from the UK’s Age Appropriate Design Code) to act in the “best interests” of minors. Instead, platforms are required to mitigate specific harms like mental health risks, eating disorders, online harassment, and exposure to illicit content.[4] We definitely don’t want our kids & young teens exposed to that stuff…

But then it throws in a curveball with a highly contrived "reasonable and prudent person" standard, requiring operators to prevent harms that are "reasonably foreseeable." Got that? Woe betide the platform trying to predict what a court might decide was "reasonably foreseeable" after the fact.

Take the new definition of "serious emotional disturbance"—

the presence of a diagnosable mental, behavioral, or emotional disorder in the past year, which resulted in functional impairment that substantially interferes with or limits the minor's role or functioning in family, school, or community activities.

How exactly is an operator supposed to know about users' medical diagnoses? More importantly, this vague standard could easily be weaponised against content supporting LGBTQ+ youth, with conservative AGs arguing that such content causes "emotional disturbance" in minors.

Safeguards: Defaults, Dark Patterns and “Design Features”

The list of safeguards to be made available to a minor user now includes:

  • limiting the ability of others to communicate with them (changed from the ability of others to find or contact a minor, in particular adults)
  • restricting the visibility of their personal data (unchanged)
  • limiting by default design features* that extend engagement if they are likely to result in compulsive usage (emphasis reflects changes from the prior draft)
  • enabling opt-out and customisation of recommender systems
  • restricting geolocation sharing; enabling a user option to limit time spent on the platform (unchanged)

In each case, the most restrictive available option is to be the default setting for minors, subject to parents’ ability to override them. Notably, the defaults must now be set not just for registered users but also for visitors the platform knows are minors.[5]
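
To make the operational impact concrete, here is a minimal sketch in Python of how a platform might wire this up. Everything in it (the field names, the settings schema, the override mechanism) is my assumption; the bill prescribes outcomes, not implementations.

```python
from dataclasses import dataclass, field

# Hypothetical safeguard settings; the bill prescribes no schema.
# The values here are the "most restrictive available option" for each safeguard.
RESTRICTIVE_DEFAULTS = {
    "who_can_message": "no_one",            # limit others' ability to communicate
    "personal_data_visibility": "private",  # restrict visibility of personal data
    "infinite_scroll_enabled": False,       # limit engagement-extending design features
    "push_notifications_enabled": False,
    "personalised_recommendations": False,  # recommender systems off by default
    "share_geolocation": False,
}

@dataclass
class User:
    known_minor: bool  # actual-knowledge trigger: a registered minor OR a known-minor visitor
    parental_overrides: dict = field(default_factory=dict)  # settings a verified parent has relaxed

def effective_settings(user: User, requested: dict) -> dict:
    """Settings actually applied for this session."""
    if not user.known_minor:
        return requested                        # adults keep whatever they chose
    settings = dict(RESTRICTIVE_DEFAULTS)       # most restrictive option is the default
    settings.update(user.parental_overrides)    # parents may override the defaults
    return settings
```

The one structural point the sketch does capture from the bill is that the branch keys off knowledge of minority, not registration status, since known-minor visitors now get the same defaults as account holders.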

The law retains the ban on “dark patterns”, defined here as interfaces that “obscure, subvert or impair user autonomy, decision-making, or choice with respect to safeguards or parental tools,” which is a narrower definition than what we have seen before.

*But before you breathe a sigh of relief, we are blessed with a new concept: “design features”—

(A) infinite scrolling or auto play; (B) rewards or incentives based on the frequency, time spent, or activity of minors on the covered platform; (C) notifications and push alerts; (D) badges or other visual award symbols based on the frequency, time spent, or activity of minors on the covered platform; (E) personalised design features; (F) in-game purchases; or (G) appearance altering filters.

So, now we have to avoid both (a) dark patterns, and (b) design features that extend engagement by minors, if they result in compulsive usage[6]. As written, operators would have to analyse all their features, determine which drive frequency/duration/activity, then decide which tip over from ‘better product my users like’ into ‘compulsive usage’, and then limit them by default? Eeek.
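
To see how awkward that analysis would be, here is a purely illustrative sketch. The ‘design features’ list is paraphrased from the bill; the metrics, the three-hour threshold and the very idea that ‘compulsive usage’ can be reduced to a measurable test are all my inventions, because the statute supplies none.

```python
# The bill's "design features" list, paraphrased.
ENGAGEMENT_FEATURES = [
    "infinite_scrolling", "autoplay", "engagement_rewards",
    "push_notifications", "activity_badges",
    "personalised_features", "in_game_purchases", "appearance_filters",
]

def is_likely_compulsive(feature: str, metrics: dict) -> bool:
    """Invented test: the bill offers no measurable standard for when a feature
    'is likely to result in compulsive usage', so any threshold is guesswork."""
    m = metrics.get(feature, {})
    # e.g. flag a feature if minors who use it average 3+ hours a day on it AND
    # it correlates with reported interference in sleep, school or socialising.
    return m.get("avg_daily_hours", 0) >= 3 and m.get("impairment_signal", False)

def features_to_limit_by_default(metrics: dict) -> list[str]:
    """Which engagement-extending features must be limited by default for minors?"""
    return [f for f in ENGAGEMENT_FEATURES if is_likely_compulsive(f, metrics)]
```

The sketch mostly shows what is missing: everything after the feature list is judgment that a court could second-guess after the fact, which is precisely the problem with the standard.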

Oddly, microtransactions—purchases made with virtual currency in video games, including surprise mechanics—are explicitly defined but not specifically regulated (?).

Parental Tools, Reporting and Console Integration

As before, the bill strengthens the role of parents in overseeing their kids’ digital experiences. Platforms must now provide robust parental tools to limit screen time, monitor financial transactions, restrict interactions and generally manage the safeguards above. Importantly, it requires platforms to let minors know that such monitoring is in place.

Although we can debate the effectiveness of parental controls when studies show most parents can’t or won’t make use of most of them, it’s hard to argue that these tools should not be made available. By requiring them for users up to age 16, the bill will at a minimum help trigger conversations between teens and parents about online safety, which is no bad thing.

A new clause requires that minors and parents be able to report incidents or content, but includes weirdly specific parameters around response times based on platform size. Because nothing says ‘future-proof’ like locking in today’s arbitrary platform sizes in tomorrow’s tech landscape of AI customer service bots…

Helpfully, there is an explicit pathway for platforms to integrate with third-party systems (eg, consoles, operating systems) so long as they are compliant with the safeguards and parental-tools requirements. This is clearly a nod to the industry to get on with collaborations that make whole ecosystems safer, and to explore common consent mechanisms and cross-platform parent portals (such as those k-ID is rolling out).

Advertising

The prohibition on behavioural targeting (up to and including age 16) remains. In addition, the updated draft explicitly prohibits platforms from advertising illegal products (e.g., narcotics, gambling) to minors, and includes a new requirement for clear labelling and disclosures on all advertising, particularly influencer endorsements. This effectively enshrines into law the core premise of advertising codes that have been in place for some time alongside years of FTC guidance.

Market Research

This is an odd one… the bill includes a new prohibition on “market or product-focused research” on children (under 13), and on teens (under 17) except with verifiable parental consent. No definitions are provided, so as written this would seem to prevent platforms from—for example—anonymously assessing the age of existing users, or canvassing user opinions on the effectiveness of features.

This seemingly minor provision could have major unintended consequences—potentially preventing platforms from even studying whether their safety measures are working.

Presumably this clause would also limit the only legitimate way to anonymously measure the effectiveness of contextual advertising to kids and teens—brand awareness (or ‘uplift’) surveys. This secondary consequence would be regressive, impacting ad-funded publishers of kids’ content disproportionately.

The Age Verification Paradox

KOSA itself relies on actual knowledge, ie it only applies where the platform knows the user is a minor (creating a perverse incentive to avoid knowing). But the inclusion of the proposed COPPA 2.0 (or CTOPPA) in Title II would update the children’s privacy law, COPPA, to be enforceable wherever the operator has “actual knowledge or knowledge fairly implied on the basis of objective circumstances.” This basically moves the standard to constructive knowledge, which the industry has been fighting for decades but which various state youth privacy laws have already adopted.

The FTC is expected to come up with guidance on what that standard actually means.
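
Pending that guidance, here is a rough illustration in Python of why the shift matters. The ‘objective circumstances’ signals below are invented purely for illustration; nobody yet knows which ones will count.

```python
from datetime import date

class User:
    """Hypothetical user record; real platforms obviously hold different data."""
    def __init__(self, declared_birthdate=None, signals=None):
        self.declared_birthdate = declared_birthdate
        self.signals = signals or {}

def _age(birthdate: date) -> int:
    today = date.today()
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )

def actual_knowledge_of_minor(user: User) -> bool:
    """KOSA's trigger: the platform affirmatively knows the user is under 17."""
    return user.declared_birthdate is not None and _age(user.declared_birthdate) < 17

def constructive_knowledge_of_minor(user: User) -> bool:
    """Title II's trigger: knowledge 'fairly implied on the basis of objective
    circumstances'. Which circumstances count awaits FTC guidance; the signals
    here are placeholders."""
    s = user.signals
    return actual_knowledge_of_minor(user) or (
        s.get("mostly_watches_childrens_content", False)
        and s.get("active_mainly_after_school_hours", False)
    )
```

The practical difference sits in the second function: constructive knowledge obliges the platform to draw inferences from behaviour it could previously choose not to analyse, which is why the incentive to avoid knowing loses much of its force.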

The main proposed change to COPPA[7] is extending the protections of the privacy law to teens (under 17) such that where consent is required, it means the consent of the teen rather than the parent.

Regarding the hot-potato topic of age verification (AV), the law calls for a study by the National Institute of Standards and Technology (NIST)—in coordination with the Federal Communications Commission (FCC) and FTC—to evaluate the feasibility of AV systems at the device or operating system level (which I touched on here). Now this study will have to specifically consider: accuracy and privacy, data minimisation, and impact on competition (ie, how new market entrants will be affected by any AV mandate).

Adding to the whiplash, the draft law explicitly states that platforms are not required to collect additional personal data or use intrusive verification methods. So now we will require platforms to protect users they "know" (or in some cases “should know”) are minors, while simultaneously claiming not to require age verification?

The debate on AV has been coming to a head for some time. Australia accelerated it recently with its social media ban for under-16s. Canada is debating an extreme (or at least terribly written) law that would require AV at every internet touch point, probably break encryption, and possibly shut young people out of the digital realm. And activists continue to double down on their absolutism that nothing should be done about mediating access to adult spaces unless any AV is 100% accurate and guaranteed to be privacy-protective, which is a ludicrous standard. We may as well ban seatbelts since they don't save every life.

The factions need to lower the volume and sit down together to find real-world solutions that are good enough and can be implemented in a privacy-protective way.

They exist, and will have to include some combination of:

Each of the above can be implemented in a way that assures the user’s privacy. That’s what regulations and industry codes and certification bodies and technical standards are for.

The Enforcement Shell Game

While there is no real change to the preemption provisions (state laws offering greater protections are preserved), the new bill limits the jurisdiction of AGs. They can now only bring civil actions in relation to sections 103, 104, 105 (eg, safeguards for minors including default settings and parental tools, disclosure & transparency requirements, advertising).

Note that this list excludes section 102 (duty of care), so AGs can no longer directly enforce the provision requiring mitigation of “foreseeable harms.” This would seem to address fears on both sides that state officials of the opposing party would use the law to censor content one side deems harmful and the other wants to protect.

I’m not sure the 70+ AGs that signed letters of support for KOSA in September and November will be very happy, but—for better or worse—it seems like only the FTC can now proactively decide what is more harmful, anti-vaxx content or LGBTQ+ resources…

Finally, to limit conflicts with the FTC, AGs are now required to inform the agency concurrently of any actions they take. In turn, the FTC can remove the case to federal court or intervene in it. AGs can’t act on cases where the FTC has already initiated proceedings.

So then…

This bill is a Frankenstein's monster of competing priorities—stitched together from good intentions, political compromises, and Elon Musk's apparent bid to kneecap his competitors. The changes may look responsive to critics on paper, but they create as many problems as they solve. Without a comprehensive federal privacy law as foundation, we're having to invent new legal constructs on the fly, which does not bode well for the durability of the law.

And yet, sitting on our hands isn't an option either. Real kids face real online harms every day. So perhaps the question isn't whether this is a good bill (it's not), but whether it's better than nothing at all. That likely depends on whether you share my quixotic belief that legislators should strive to get laws right, or accept the cynical reality that in today's political climate, passing almost any law counts as progress.


This article first appeared on my Substack. If you like it and would like timely delivery of future posts directly in your inbox, please consider subscribing.


[1] Obviously someone had the idea to pull Musk in so as to curry favour with Republican holdouts against the bill. And yet, we can’t escape the irony that the platform holding itself out as the ultimate defender of free, unfettered (and unmoderated!) speech, owned by the guy now charged with making a bonfire of government regulations, backed by a party that wants to defenestrate regulatory agencies, is helping draft a law that adds a significant regulatory burden to online platforms. Musk seems to be betting that this will hurt other platforms more than X/Twitter, kind of like eliminating the EV tax credit to disadvantage auto giants more than Tesla… This takes regulatory capture to a whole new level of sophistication.

[2] The bill’s author, Marsha Blackburn (R-Tenn), did not help the cause when she said in a video last year that her objective is “protecting minor children from the transgender in this culture” [sic].

[3] If you want to know what a politicised FTC looks like, check out Ferguson's bizarre concurring statement in the FTC's run-of-the-mill case against GOAT for cheating consumers.

[4] The full list is: (1) Eating disorders, substance use disorders, and suicidal behaviors. (2) Depressive disorders and anxiety disorders when such conditions have objectively verifiable and clinically diagnosable symptoms and are related to compulsive usage. (3) Patterns of use that indicate compulsive usage. (4) Physical violence or online harassment activity that is so severe, pervasive, or objectively offensive that it impacts a major life activity of a minor. (5) Sexual exploitation and abuse of minors. (6) Distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol. (7) Financial harms caused by unfair or deceptive acts or practices (as defined in section 5(a)(4) of the Federal Trade Commission Act (15 U.S.C. 45(a)(4)).

[5] Though, for existing accounts, platforms are not required to apply defaults if parents had previously opted out of such tools.

[6] Defined as “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities of an individual, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.”

[7] There is a very useful redline between COPPA, COPPA 2.0 and Title II of KOSPA here.
