Law in the Internet Society

Corrupting the Youth: KOSA and Greek Philosophy

-- By MichaelMacKay - 11 Jan 2025

One Poet, Two Greeks


In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.

Put differently, rooting out all the “harms” under KOSA's duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, filling a 144-page paperback English edition (7.92 x 5.04″),[3] yet quantification confounds inquiry when words themselves contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Applying statistical models top-down tends to affix meaning rather than infer what a text means in context. By that measure, KOSA’s requirement that platforms monitor patterns of children’s usage and publicly disclose such information treats online expression as univocal, forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]

One Flaw, Two Bills


Introduced in 2022, KOSA infantilizes online expression as something to be aggregated and averaged, overburdening the “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both the House and Senate bills:

“A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors” [emphasis added]

Consider how the emphasized language can be read (1) narrowly, placing a duty on the sorts of design features that prevent and mitigate harms, or (2) broadly, imposing a duty on the creation and implementation of any design feature whatsoever, so that the covered platform may prevent and mitigate harms. The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g., a new default setting that automatically shifts an app’s color temperature by the hour to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) would read “mitigate” out of the statute, since requiring both prevention and mitigation would exempt design features that merely mitigate such harms from KOSA's “duty of care.” Traditional rules of statutory construction disfavor such uncharitable interpretations,[7] but KOSA’s proposed apparatus for collecting and crunching data is equally prone to miss nuance in minors' speech.[8] Nowhere is this mechanical approach to online speech more pernicious than in the Senate's amendment on “compulsive usage.”

One Amendment, Two Compulsions


In December 2024, the Senate hardened kids’ virtual cages. Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” Now it is “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities of an individual” [emphasis added]. Yet how exactly is a “covered platform” to know what really impacts the lives of children under 13? Apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now specifies what “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”

Ascertaining such “patterns” implies averaging across millions of minors’ online communications and footprints, so no real knowledge is gained about any one particular minor’s use of Discord or Reddit. Still, firms must blindly intrude to establish what is “compulsive”: although Sec. 102(a)(II) suggests that health care professionals will play some role in guiding FTC enforcement (“clinically diagnosable symptoms”), “covered platforms” must ensure their own compliance,[9] and so the breach of minors’ privacy is the only foreseeable harm genuinely within the risk.[10] Notably, Meta cannot even automatically flag disturbing adult content for removal,[11] so heightening platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign grown-ups watching American kids. Developers can build tighter nets for these smaller fish, but some ordinary brain development will be mistaken for “brainrot” whenever adults are not in on the joke. It was problematic enough when “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion; it is worse now that those criteria have given way to a set of factors that “significantly impacts” kids. The shift from probable to actual knowledge underscores how “covered platforms” will ultimately incur KYC obligations like mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]

First Amendment, Second Act


In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2 million settlement with Vizio in 2017, after it was discovered that the company's TVs were watching families at home.[13] Given those devices’ “smart interactivity,”[14] it is unclear whether Vizio would be liable under KOSA as a “covered platform,” and the ever-expanding IoT further complicates KOSA’s paternalistic goals (e.g., if Mattel were to sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is constitutional under the First Amendment,[16] the next step the 119th Congress should take is to reconsider KOSA’s policy goals. Social media companies have recently begun deploying AI personas that communicate with in-app users,[17] and such platforms' use of large language models may be a worthier target for proponents of supporting kids’ online presence without squelching their self-expression. After all, statistical models are poor proxies for communicative genius,[18] and among the roughly 550 million posts G2 estimates were made on Reddit last year alone,[19] there was probably at least one philosophical haiku written by a kid.

Endnotes:

  1. Aristotle, Poet. 1447b.
  2. Ibid. Technically, a “physiologist,” as Aristotle says “φυσιόλογος,” which often differentiates the pre-Socratic from the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
  3. Word count was parsed programmatically from Perseus; page count comes from Penguin’s reprint (1997).
  4. Aristotle, Poetics, tr. S. H. Butcher, Pennsylvania Press (2000), p. 28: “there is at times no word in existence; still the metaphor may be used.”
  5. Ibid., p. 38.
  6. Ibid.
  7. Robert Katzmann, Judging Statutes. Oxford University Press, 2014.
  8. Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al. (where the AI algorithm developed by nH Predict—now defunct—and employed by United Healthcare allegedly carried a 90% error rate in judging insurance claims).
  9. Cecilia Kang, “F.T.C. Study Finds ‘Vast Surveillance’ of Social Media Users,” New York Times, September 19, 2024.
  10. Nate Anderson, "Anonymized" data really isn't—and here's why not, Ars Technica, September 8, 2009.
  11. Betsy Reed, “More than 140 Kenya Facebook Moderators Diagnosed with Severe PTSD,” The Guardian, Guardian News and Media, December 18, 2024.
  12. Jason Kelley, "Kids Online Safety Act continues to threaten our rights online: 2024 in Review," Electronic Frontier Foundation, January 1, 2025 (n.b. Sec. 107(a) also authorizes a joint report between the FTC and Commerce Department on age verification).
  13. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019), p. 170.
  14. Richard Lawler, “Vizio Makes Nearly as Much Money from Ads and Data as It Does from TVs,” Engadget, May 12, 2021.
  15. Zuboff, p. 171.
  16. NetChoice v. Bonta (where the Ninth Circuit upheld a preliminary injunction against California’s Age Appropriate Design Code Act’s Data Protection Impact Assessment (DPIA) requirement resembling KOSA’s “duty of care”).
  17. Miles Klee, “Facebook and Instagram to Unleash AI-Generated ‘users’ No One Asked For,” Rolling Stone, December 31, 2024.
  18. Noam Chomsky, Ian Roberts and Jeffrey Watumull, “The False Promise of ChatGPT,” New York Times, March 8, 2023.
  19. Sagaar Joshi, “51 Reddit Statistics to Analyze The Internet’s Front Page,” G2, October 4, 2024.

