KOSA is back. Or is it?
Competing posters celebrating / opposing KOSA.

Momentum for digital safety legislation in the US grows. Could this be the one?

Since the recent roasting of social media CEOs on Capitol Hill, senators are trying to harness momentum behind their new child & teen safety bills, including ‘COPPA 2.0’ and ‘KOSA’. These are among the half dozen or so draft laws that have passed out of committee but remain in the queue to be considered by the polarised/paralyzed Congress…

A handful of tech platforms—X, Snap and Microsoft—have announced their support for the Kids Online Safety Act, or KOSA. At some point in the next few months, KOSA might actually come to a vote, at least in the Senate. That would leave the bill in the hands of the dysfunctional House.

OK, so I’m not holding my breath, but… just for fun: what would KOSA actually do if passed, and what would it mean for operators? Read on.

This week KOSA’s authors (Sens Richard Blumenthal and Marsha Blackburn) announced 62 co-sponsors for the bill, including Senate Majority Leader Chuck Schumer (D-NY), theoretically giving it a filibuster-proof majority in the Senate. They got there by making changes, including:

  • shifting the power to enforce the law mostly away from state attorneys general and to the FTC. This appears to have allayed the concern of some longstanding opponents—that AGs could use the ‘best interest of the child’ provision to suppress content they don’t like, such as LGBTQ+ resources.[1]
  • reducing the scope of pre-emption, eg KOSA will only trump state laws that are less restrictive, allowing stricter laws (like California’s CCPA) to stand.

This version also contains an oddity that was added in 2023—it incorporates as Title II the Filter Bubble Transparency Act (first introduced in 2019!), which tries to banish the evil algorithms senators blame for all kinds of society’s ills. It would require social media platforms to offer users a non-algorithmic feed option, eg chronological rather than based on user data. Although this bill is now sitting inside KOSA (which applies to under-17s), Title II applies to all ages.
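
In operator terms, that requirement comes down to offering a feed mode that does not rank on data collected about the user. Here is a minimal sketch of what that might look like, with all names hypothetical rather than drawn from the bill:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    post_id: str
    created_at: datetime
    relevance_score: float  # output of a ranking model trained on user data

def build_feed(posts: list[Post], algorithmic_feed_enabled: bool) -> list[Post]:
    """Return the user's feed in the order it should be displayed.

    If the user opts out of the personalised feed, fall back to an ordering
    that does not rely on user-specific data -- here, reverse chronological.
    """
    if algorithmic_feed_enabled:
        return sorted(posts, key=lambda p: p.relevance_score, reverse=True)
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```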

Note that under a key exception, age information may still be used to personalise feed content. Another provision pre-emptively says that platforms may not charge users for choosing one feed over another, nor condition any service on the user’s choice. Will social media make less money if feeds are less personalised? You would think so… But, according to an internal Facebook experiment:

Its findings showed that engagement dropped precipitously, users hid 50 percent more posts (meaning they found these posts to be irrelevant or uninteresting), use of Facebook Groups—where some of the most extreme and concerning content resides—skyrocketed, and Facebook actually made more money on advertising because users had to scroll longer to find the content they were looking for, and therefore, were exposed to more ads.[2]

Well, maybe.

But Title II is not the main event. At the heart of KOSA is the concept that will cause operators the most pain: the duty of care[3] to “prevent and mitigate” risks faced by under-17s, including anxiety, depression, eating disorders, substance use disorders and suicidal behaviors; “patterns of use that indicate or encourage addiction-like behaviors;” physical violence, online bullying, harassment; sexual exploitation and abuse; promotion of narcotics; predatory, unfair or deceptive marketing practices and other financial harms.

That’s a lot of subjective concepts for product and legal teams to get their heads around!

Note that KOSA explicitly does not prevent minors from deliberately searching for content, nor does it prevent platforms from providing resources to prevent or mitigate the above risks. This addresses the concerns of free speech advocates, but sounds operationally complex in practice. If implemented apolitically, it would mean, say, hosting informational content on eating disorders that is accessible via search, but not promoting it to teenagers based on inferred interests (or any other personal information). Differentiating between informational and harmful content will be very difficult indeed, and critics have a point that, for some operators at least, it will be more expedient to remove all content relating to potential harms.[4]

And there is more. In addition to the duty of care, the law proposes a series of safeguards for minors:

  • KOSA restricts (but does not ban, unlike COPPA 2.0) behavioral advertising. It requires that users be informed why a particular ad was targeted to them and what personal data was used. In practice this likely means shifting to contextual ads for all under-17s, which is significantly simpler. (The FTC is to provide further guidance.)
  • It specifically limits design features that are believed to cause or exacerbate compulsive usage,[5] like infinite scroll, autoplay, and rewards for time spent on the platform (I guess that includes streaks?). The list of examples also includes notifications, personalised recommendations, in-game purchases and ‘appearance altering filters’ (really). It will be very challenging for operators to unpick which design features are core to the service vs ones that cause compulsive usage. What if the user requested notifications?
  • Easy wins: The most restrictive privacy and safety settings must be on by default (which is similar to the UK and California Age Appropriate Design Code requirements); a minimal sketch of what that could look like follows this list. The law also requires making it easy for users to limit who can contact them and who can see their personal info. And it proposes enhanced parental controls as well as a clear notification to the user if parental controls are in use.
  • KOSA bans dark patterns—albeit only with respect to subverting child safeguards or parental controls—by making it unlawful: “to design, modify, or manipulate a user interface with the purpose of subverting or impairing user autonomy, decision-making, or choice.” Again, much to argue about here—how big can a button to make your profile more public be without qualifying as a dark pattern?
  • Finally, it imposes annual reporting requirements on platforms with more than 10m MAU in the US—describing the risks of material harm and the mitigation measures taken—based on an independent third-party audit. They must also report the number of minor users, the median and mean time those users spend on the service, the number of harm reports received, and so on.
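
On the ‘easy wins’ point above, here is a minimal sketch of what most-restrictive-by-default settings for minors could look like (field names and values are hypothetical, not taken from the bill):

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    profile_visibility: str = "public"        # "public" | "friends" | "private"
    allow_messages_from: str = "anyone"       # "anyone" | "friends" | "nobody"
    discoverable_in_search: bool = True
    personalised_recommendations: bool = True

def default_settings(user_is_minor: bool) -> PrivacySettings:
    """Return the defaults applied at account creation.

    Minors start from the most restrictive values; the user (or a parent via
    parental controls) can still relax them later -- the requirement is about
    the default, not the final choice.
    """
    if not user_is_minor:
        return PrivacySettings()
    return PrivacySettings(
        profile_visibility="private",
        allow_messages_from="friends",
        discoverable_in_search=False,
        personalised_recommendations=False,
    )
```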

Notably absent from the bill is an age verification requirement, though it does ask the National Institute of Standards and Technology (NIST) to lead a study on methods to verify age at the device or operating system level—which is very welcome. Critics argue that the bill’s expansion of the knowledge standard (see below) implicitly creates a new obligation to verify age.

Also, the bill does not include a private right of action, so it can only be enforced by the FTC and via civil actions brought by state attorneys general. Some would argue that without a private right of action the law has no teeth—after all, private lawsuits are the most popular way to hold corporate America accountable… In the context of privacy law, the debate on that is rather more nuanced.

The new draft of KOSA includes some odd carve-outs that read like a collaboration between lobbyists and legislative aides over a bad Zoom connection. For example, the definition of video games in scope is now limited to those featuring: user-generated content (excluding character or level designs created by the user, or chat based on preselected phrases or ‘short interactions’); transactions in virtual currency that has cash value; user-to-user communications; or behavioral advertising. I can’t quite work out which kinds of games are excluded by this definition—thoughts welcome!

The other key concept that is being stretched in KOSA is the definition of knowledge. COPPA operates under the actual knowledge standard, ie an operator either knows that a user is under 13, or they do not. This standard has been criticised for allowing many platforms to remain out of scope of COPPA by wilfully ignoring their child audiences. In KOSA, the knowledge standard is broadened to include “knowledge fairly implied on the basis of objective circumstances.” So far, so vague, but the FTC is to provide guidance on this within a deadline set by the law.[6]

Meanwhile, KOSA is not the only new bill in town. Sens Ed Markey (D-MA) and Bill Cassidy (R-LA) announced new sponsors for their bill (which also passed out of committee in 2023) colloquially known as COPPA 2.0. This bill would require opt-in consent for collecting personal data from teens aged 13-16; ban behavioural advertising to kids and teens; expand COPPA’s ‘actual knowledge’ standard to cover platforms that are “reasonably likely to be used” by minors; and create an ‘eraser button’ to make it easy for minors or parents to delete personal data held by a platform. No word as yet on the Senate’s appetite to put this one to a vote.

We haven’t seen this much momentum behind child safety bills in a long time. Then again, we haven’t seen this much paralysis in Congress either, so nothing may happen. The debates on wording and definitions (knowledge, dark patterns, design features, harms, compulsive use, etc) remain a key battleground. How they fare through the next set of drafts, as well as court challenges in relation to recent efforts by states[7] to pass their own laws, will likely determine the shape of legislation to come.


This article first appeared on my Substack. If you like it and would like timely delivery of future posts directly in your inbox, please consider subscribing.


[1] In response, seven LGBTQ+ groups, including GLAAD and the Human Rights Campaign, wrote to Sen. Blumenthal to say they no longer oppose the bill.

[2] Thank you Klon Kitchen, via Alex Reinhauer, Ten Terrible Tech Bills from the 117th Congress: Filter Bubble Transparency Act.

[3] What ‘duty of care’ means and how it can withstand legal challenges in the US has become rather pressing (and complicated). The intention is to protect kids against certain product designs, but the means is a privacy law that regulates personal data, whereas the net effect is restricting what content is displayed. And content, it could be argued, is protected speech, any restriction of which might be subject to First Amendment scrutiny. One way to distance these bills from the polarised free speech debate is to recast them in terms of product liability rather than design or content restrictions. Ben Sperry describes such an approach in What Does NetChoice v. Bonta Mean for KOSA and Other Attempts to Protect Children Online?

[4] According to the Electronic Frontier Foundation (EFF), “A huge number of online services would thus be forced to make a choice: overfilter to ensure no one encounters content that could be construed as ambiguously harmful, or raise the age limit for users to 17. Many platforms may even do both.” 

[5] Defined as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety or depression.”

[6] The FTC itself has wrestled with the knowledge standard for years. In its ongoing (3+ years now!) review of COPPA, the agency explains why it does not have the statutory authority to shift from ‘actual knowledge’ to a ‘constructive knowledge’ standard: "the legislative history indicates that Congress originally drafted COPPA to apply to operators that 'knowingly' collect personal information from children, a standard which would include actual, implied, or constructive knowledge. After consideration of witness testimony, however, Congress modified the knowledge standard in the final legislation to require 'actual knowledge.'"

[7] The Brookings Institution has a really good comparison of the online child safety laws passed (and challenged) in various states, including comparisons on key terms to KOSA and COPPA 2.0: The fragmentation of online child safety regulations.

