Law in the Internet Society

Corrupting the Youth: KOSA and Greek Philosophy


One Poet, Two Greeks

In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone. Rather, Empedocles is a philosopher,[1] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.
 
Put differently, rooting out all the “harms” under KOSA's duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not always count the most. Aristotle says “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[2] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context,[3] and by that measure, KOSA’s requirement that platforms monitor patterns of children’s usage and publicly disclose the results treats online expression as univocal.
 

One Flaw, Two Bills

Introduced in 2022, KOSA infantilizes online expression as something that can be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
 "A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors [emphasis added]”
But consider how the bolded portion can be read (1) narrowly, placing the duty only on those design features meant to prevent and mitigate harms, or (2) broadly, imposing the duty on the creation and implementation of any design feature so that the covered platform may prevent and mitigate harms. The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour so as to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) reads out “mitigate,” since a feature that merely mitigates a harm without preventing it would fall outside the duty. Traditional rules of statutory construction would disfavor such uncharitable interpretations, but KOSA’s proposed apparatus for collecting and crunching data is equally prone to miss such nuance (elsewhere, the AI model UnitedHealthcare applied to insurance claims purportedly suffered a 90% error rate). Critically, nowhere is this mechanical approach to online expression more befuddled than in the Senate's latest amendment on "compulsive usage."
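The conjunction problem can be made concrete with a minimal sketch (a hypothetical illustration only; nothing in the bill is expressed this way). Reading “and” as a strict logical operator exempts a feature that merely mitigates, which is exactly what the charitable reading avoids:

```python
# Hypothetical illustration of two readings of "prevent and mitigate."
# A design feature is modeled only by whether it prevents and/or
# mitigates a harm; these functions are not from any real statute.

def duty_attaches_strict_and(prevents: bool, mitigates: bool) -> bool:
    """Strict reading: "and" as logical conjunction, so the duty
    reaches only features that BOTH prevent and mitigate."""
    return prevents and mitigates

def duty_attaches_inclusive(prevents: bool, mitigates: bool) -> bool:
    """Charitable reading: the duty reaches features that prevent
    OR mitigate, so "mitigate" is not read out of the statute."""
    return prevents or mitigates

# A feature that merely mitigates (e.g. a nighttime color-temperature
# default) escapes the duty under the strict-AND reading:
print(duty_attaches_strict_and(False, True))  # False: "mitigate" read out
print(duty_attaches_inclusive(False, True))   # True
```

The sketch only shows why treating the statutory “and” as a boolean operator produces the exemption the essay describes.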
 

One Amendment, Two Compulsions


In December 2024, the Senate hardened kids’ virtual cages while softening some "harms" for covered platforms by striking language like "predatory... marketing." Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes [sic] an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis added] one or more major life activities of an individual." Yet, how exactly is a “covered platform” to know what really impacts the lives of children?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says: “(III) Patterns of use that indicate compulsive usage.”
 
Ascertaining such “patterns” implies averaging across millions of online communications, so nothing would really be learned about any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) may suggest that some health care professionals will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), minors’ privacy breach is probably the only foreseeable harm within the risk, as "anonymized" data can often be re-identified and are inherently valuable. Notably, Meta cannot even automatically flag disturbing adult content for removal, so increasing firms' vigilance over kids will likely only result in more foreign grown-ups watching Americans. Developers can build stronger nets for smaller fish, but some brain development will be misapprehended as “brainrot” whenever adults are not in on the joke. Before, it was problematic that “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now, it is worse that these criteria have given way to a set of factors that “significantly impacts” kids. Overall, the shift from probable to actual knowledge underscores how “covered platforms” will probably incur KYC obligations like mandatory age verification, as the EFF has predicted.[4]
 

First Amendment, Second Act


In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were watching the family at home.[5] Today, Vizio's business model depends on customer data, which could make it liable as a “covered platform” under KOSA, and generally, the ever-expanding IoT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” regulated under Sec. 101(11)?).[6] Assuming arguendo that KOSA is constitutional under the First Amendment,[7] the 119th Congress should seriously reconsider KOSA’s policy goals. Recently, social media companies like Meta have publicly announced "AI agents" communicating with in-app users, and guarding against such use of large language models may better support kids’ online engagement without suffocating their self-expression. After all, statistical models are poor proxies for communicative genius, and where G2 estimated that Reddit users made some 550 million posts last year alone, there was probably at least one philosophical haiku written by a kid.
 

Endnotes:

  1. Aristotle, Poet. 1447b. Literally, a “physiologist,” as Aristotle says “φυσιόλογος,” versus the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
  2. Aristotle, Poetics, tr. S. H. Butcher, Pennsylvania Press (2000), p. 38.
  3. Ibid, p. 28: “there is at times no word in existence; still the metaphor may be used.”
  4. Jason Kelley, "Kids Online Safety Act Continues to Threaten Our Rights Online: 2024 in Review," Electronic Frontier Foundation, January 1, 2025 (n.b. Sec. 107(a) also authorizes a joint report between the FTC and Commerce Department on age verification).
  5. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019), p. 170.
  6. Ibid, p. 171.
  7. See NetChoice v. Bonta (where the Ninth Circuit upheld a preliminary injunction against California’s Age Appropriate Design Code Act’s Data Protection Impact Assessment (DPIA) requirement resembling KOSA’s “duty of care”).
 



Revision 9r9 - 15 Feb 2025 - 23:50:09 - MichaelMacKay