Law in the Internet Society

MichaelMacKaySecondEssay 12 - 16 Feb 2025 - Main.MichaelMacKay

Corrupting the Youth: KOSA and Greek Philosophy

One Poet, Two Greeks

In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone. Rather, Empedocles is a philosopher,[1] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.
Put differently, rooting out all the “harms” under KOSA's duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not always count the most. Aristotle says “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[3] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context,[4] and by that measure, KOSA’s requirement that platforms monitor children’s patterns of usage and publicly disclose the results wrongly treats online expression as univocal.
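The univocality problem can be made concrete. The sketch below is a toy, invented keyword filter, a stand-in for the kind of top-down statistical model KOSA would encourage, not any platform's actual system: it assigns one fixed sense per word, so figurative teen slang trips the same wire as genuine distress, while real distress phrased without the listed words slips through.

```python
# Hypothetical sketch: a one-sense-per-word filter, the kind of "top-down"
# model that affixes meaning instead of inferring it from context.
# HARM_LEXICON is an invented toy word list, not any real platform's.
HARM_LEXICON = {"dead", "killing", "destroyed"}

def flags_harm(post: str) -> bool:
    """Flag a post if any token matches the harm lexicon, ignoring context."""
    tokens = {word.strip(".,!?").lower() for word in post.split()}
    return not tokens.isdisjoint(HARM_LEXICON)

# Aristotle's point: the same word bears different senses in different passages.
assert flags_harm("I'm literally dead, that meme killed me")  # hyperbole, flagged anyway
assert flags_harm("killing it at practice today")             # praise, flagged anyway
assert not flags_harm("feeling hopeless and alone")           # real distress, missed
```

The filter counts what is easy to count (tokens) and misses what counts (sense in context), which is the essay's objection in miniature.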
 

One Flaw, Two Bills

Introduced in 2022, KOSA infantilizes online expression as something that can be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
 "A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors [emphasis]”
But consider how the emphasized portion can be read either narrowly or broadly. The broad reading likely implicates most changes to UI/UX, whereas the narrow one imposes liability only on the subset of features intended to prevent and mitigate such harms (e.g. a new default setting that automatically changes an app’s color temperature by the hour so as to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) reads out “mitigate” (since a duty covering only features that both prevent and mitigate exempts design features that merely mitigate). Traditional rules of statutory construction would disfavor such uncharitable interpretations, but KOSA’s proposed apparatus for collecting and crunching data is prone to miss such nuance (elsewhere, United Healthcare's application of AI to insurance claims purportedly suffered a 90% error rate). Critically, nowhere is the mechanical approach to online expression more befuddled than in the Senate's latest amendment on "compulsive usage."
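The interpretive point about "prevent and mitigate" can be sketched as a pair of toy predicates (an illustration of the two readings, not the statute's actual operation): treating "and" as a strict boolean conjunction exempts any design feature that merely mitigates.

```python
# Toy model of the two readings of "prevent and mitigate" (illustrative only).
def duty_attaches_logical_and(prevents: bool, mitigates: bool) -> bool:
    """Strict boolean reading: the duty covers only features that do both."""
    return prevents and mitigates

def duty_attaches_ordinary(prevents: bool, mitigates: bool) -> bool:
    """Ordinary-usage reading: preventing or mitigating alone suffices."""
    return prevents or mitigates

# A night-mode color-temperature default mitigates nighttime usage
# without preventing it outright:
mitigate_only = dict(prevents=False, mitigates=True)
assert not duty_attaches_logical_and(**mitigate_only)  # "mitigate" read out
assert duty_attaches_ordinary(**mitigate_only)         # duty still attaches
```

On the strict reading, the mitigate-only feature falls outside the duty entirely, which is exactly the surplusage problem the essay identifies.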
 

One Amendment, Two Compulsions

In December 2024, the Senate hardened kids’ virtual cages while softening some "harms" for covered platforms by striking language like "predatory... marketing." Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes [sic] an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual." Yet, how exactly is a “covered platform” to know what really impacts the lives of children?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says: “(III) Patterns of use that indicate compulsive usage.”
 
Ascertaining such “patterns” implies averaging across millions of online communications, so nothing would really be learned as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) may suggest that some health care professionals will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), minors’ privacy breach is probably the only foreseeable harm within the risk, as "anonymized" data can often be re-identified and remain valuable to firms. Notably, Meta cannot even effectively flag disturbing adult content for removal, so increasing corporate vigilance will simply result in a greater number of overseas adults surveilling American kids. Developers can build stronger nets for smaller fish, but some brain development will be misapprehended as “brainrot” whenever adults are not in on the joke. Before, it was problematic that “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now, it is worse that these criteria have given way to a set of factors that “significantly impacts” kids. Overall, the shift from probable to actual knowledge underscores how “covered platforms” will probably incur KYC obligations like mandatory age verification, as the EFF has predicted.[4]
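The claim that "anonymized" usage data remain valuable is well documented: records stripped of names can often be re-identified by joining quasi-identifiers (ZIP code, birth year, and the like) against a public dataset. A minimal linkage-attack sketch, with all names and records invented for illustration:

```python
# Illustrative linkage attack on "anonymized" usage logs (all data invented).
anonymized_usage = [
    {"zip": "10027", "birth_year": 2010, "hours_online": 6.5},
    {"zip": "10027", "birth_year": 2008, "hours_online": 1.2},
]
public_roster = [  # e.g., a school roster or voter-file-style public record
    {"name": "Student A", "zip": "10027", "birth_year": 2010},
]

def reidentify(usage, roster):
    """Join on quasi-identifiers; no names are needed in the 'anonymous' data."""
    matches = []
    for u in usage:
        for r in roster:
            if (u["zip"], u["birth_year"]) == (r["zip"], r["birth_year"]):
                matches.append((r["name"], u["hours_online"]))
    return matches

assert reidentify(anonymized_usage, public_roster) == [("Student A", 6.5)]
```

No field in the "anonymized" log identifies anyone on its own; the join does, which is why stripping names does little to blunt the privacy harm the essay foresees.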
 

First Amendment, Second Act

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that Vizio's TVs were secretly watching their owners at home.[5] Today, Vizio's business model depends on customer data, which could make it liable as a “covered platform” under KOSA, and generally, the ever-expanding IoT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” regulated under Sec. 101(11)?).[6] Assuming arguendo that KOSA is constitutional under the First Amendment,[7] the 119th Congress should seriously reconsider KOSA’s policy goals. Recently, social media companies like Meta have demoed their "AI agents" communicating with in-app users, and guarding against such use of large language models may better support kids’ online engagement without suffocating their self-expression. After all, statistical models are poor proxies for communicative genius, and where G2 estimated Reddit users made some 550 million posts last year alone, there was probably at least one philosophical haiku written by a kid.
 

Endnotes:


MichaelMacKaySecondEssay 11 - 16 Feb 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

Corrupting the Youth: KOSA and Greek Philosophy

Line: 37 to 37
 
  1. Ibid, p. 28.
  2. n.b. Sec. 107(a) authorizes a joint report between the FTC and Commerce Department on age verification.
  3. Zuboff, p. 170.
Changed:
<
<
  1. Zuboff, p. 171.
  2. See NetChoice v. Bonta (where the Ninth Circuit upheld a preliminary injunction against California’s Age Appropriate Design Code Act DPIA requirement resembling KOSA’s “duty of care”).
>
>
  1. Ibid, p. 171.
  2. See NetChoice v. Bonta (where the Ninth Circuit upheld a preliminary injunction against the California Age-Appropriate Design Code Act's DPIA requirement resembling KOSA’s “duty of care”).
 



MichaelMacKaySecondEssay 10 - 16 Feb 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

Corrupting the Youth: KOSA and Greek Philosophy

Line: 23 to 23
 
In December 2024, the Senate hardened kids’ virtual cages while softening some "harms" for covered platforms by striking language like "predatory... marketing." Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes [sic] an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual." Yet, how exactly is a “covered platform” to know what really impacts the lives of children?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says: “(III) Patterns of use that indicate compulsive usage.”
Changed:
<
<
Ascertaining such “patterns” implies averaging across millions of online communications, so nothing would really be learned as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) may suggest that some health care professionals will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), minors’ privacy breach is probably the only foreseeable harm within the risk, as "anonymized" data can be utilized and are inherently valuable. Notably, Meta cannot even automatically flag disturbing adult content for removal, so increasing firms' vigilance against kids will likely only result in more foreign grown-ups watching Americans. Developers can build stronger nets for smaller fish, but some brain development will be misapprehended as “brainrot” whenever adults are not in on the joke. Before, it was problematic that “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now, it is worse that these criteria have given way to a set of factors that “significantly impacts” kids. Overall, the shift from probable to actual knowledge underscores how “covered platforms” will probably incur KYC obligations like mandatory age verification, as the EFF has predicted.[4]
>
>
Ascertaining such “patterns” implies averaging across millions of online communications, so nothing would really be learned as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) may suggest that some health care professionals will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), minors’ privacy breach is probably the only foreseeable harm within the risk, as "anonymized" data can be utilized and are inherently valuable. Notably, Meta cannot even effectively flag disturbing adult content for removal, so increasing corporate vigilance will likely only result in a greater number of overseas adults surveilling American kids. Developers can build stronger nets for smaller fish, but some brain development will be misapprehended as “brainrot” whenever adults are not in on the joke. Before, it was problematic that “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now, it is worse that these criteria have given way to a set of factors that “significantly impacts” kids. Overall, the shift from probable to actual knowledge underscores how “covered platforms” will probably incur KYC obligations like mandatory age verification, as the EFF has predicted.[4]
 

First Amendment, Second Act

Changed:
<
<

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were watching the family at home.[5] Today, Vizio's business model depends on customer data which could make it liable as a “covered platform” under KOSA, and generally, the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” regulated under Sec. 101(11)?).[6] Assuming arguendo that KOSA is constitutional under the First Amendment,[7] the 119th Congress should seriously reconsider KOSA’s policy goals. Recently, social media companies like Meta have publicly announced "AI agents" communicating with in-app users, and guarding against such use of large language models may better support kids’ online engagement without suffocating their self-expression. After all, statistical models are poor proxies for communicative genius, and where G2 estimated Reddit users made some 550 million posts last year alone, there was probably at least one philosophical haiku written by a kid.
>
>

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that Vizio's TVs were secretly watching their owners at home.[5] Today, Vizio's business model depends on customer data which could make it liable as a “covered platform” under KOSA, and generally, the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” regulated under Sec. 101(11)?).[6] Assuming arguendo that KOSA is constitutional under the First Amendment,[7] the 119th Congress should seriously reconsider KOSA’s policy goals. Recently, social media companies like Meta have publicly announced "AI agents" communicating with in-app users, and guarding against such use of large language models may better support kids’ online engagement without suffocating their self-expression. After all, statistical models are poor proxies for communicative genius, and where G2 estimated Reddit users made some 550 million posts last year alone, there was probably at least one philosophical haiku written by a kid.
 

Endnotes:

Line: 38 to 38
 
  1. n.b. Sec. 107(a) authorizes a joint report between the FTC and Commerce Department on age verification.
  2. Zuboff, p. 170.
  3. Zuboff, p. 171.
Changed:
<
<
  1. See NetChoice v. Bonta (where the Ninth Circuit upheld a preliminary injunction against California’s Age Appropriate Design Code Act’s Data Protection Impact Assessment (DPIA) requirement resembling KOSA’s “duty of care”).
>
>
  1. See NetChoice v. Bonta (where the Ninth Circuit upheld a preliminary injunction against California’s Age Appropriate Design Code Act DPIA requirement resembling KOSA’s “duty of care”).
 



MichaelMacKaySecondEssay 9 - 15 Feb 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

Corrupting the Youth: KOSA and Greek Philosophy

Line: 7 to 7
 

One Poet, Two Greeks

Changed:
<
<

In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.

Put differently, rooting out all the “harms” under KOSA's duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds inquiry when words themselves contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context, and by that measure, KOSA’s requirement that platforms should monitor patterns of children’s usage and publicly disclose such information treats online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]

>
>

In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone. Rather, Empedocles is a philosopher,[1] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.
 
Added:
>
>
Put differently, rooting out all the “harms” under KOSA's duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not always count the most. Aristotle says “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[3] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context,[4] and by that measure, KOSA’s requirement that platforms monitor patterns of children’s usage and publicly disclose the results treats online expression as univocal.
 

One Flaw, Two Bills

Changed:
<
<

Introduced in 2022, KOSA infantilizes online expression as something that can be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
>
>

Introduced in 2022, KOSA infantilizes online expression as something that can be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
 "A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors [emphasis]”
Changed:
<
<
Consider how the bolded portion can be read (1) narrowly, placing a duty on just the design features that prevent and mitigate harms or (2) broadly, imposing a duty on the creation and implementation of any design feature, in order that the covered platform may prevent and mitigate harms). The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour so as to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) reads out “mitigate” (as both preventing and mitigating harms means exempting some design features that merely mitigate). Traditional rules of statutory construction would disfavor such interpretations,[7] but KOSA’s proposed apparatus for collecting and crunching data is equally uncharitable and prone to miss the nuances in minors' speech.[8] Critically, nowhere is that mechanical approach to online expression more pernicious than in the Senate's latest amendment.
>
>
But consider how the bolded portion can be read either narrowly or broadly. The latter interpretation likely implicates most changes to UI/UX, whereas the former imposes liability on a subset of features intended to prevent and mitigate such harms (e.g. a new default setting that automatically changes an app’s color temperature by the hour so as to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) reads out “mitigate” (as both preventing and mitigating harms means exempting some design features that merely mitigate). Traditional rules of statutory construction would disfavor such uncharitable interpretations, but KOSA’s proposed apparatus for collecting and crunching data is prone to miss such nuance (elsewhere, United Healthcare's application of AI to insurance claims purportedly suffered a 90% error rate). Critically, nowhere is the mechanical approach to online expression more befuddled than the Senate's latest amendment on "compulsive usage."
 

One Amendment, Two Compulsions

Changed:
<
<

In December 2024, the Senate hardened kids’ virtual cages while softening some "harms" for covered platforms, as language like "predatory" was scrubbed. Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes [sic] an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual." Yet, how exactly is a “covered platform” to know what really impacts the lives of kids under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says: “(III) Patterns of use that indicate compulsive usage.”
>
>

In December 2024, the Senate hardened kids’ virtual cages while softening some "harms" for covered platforms by striking language like "predatory... marketing." Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes [sic] an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual." Yet, how exactly is a “covered platform” to know what really impacts the lives of children?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says: “(III) Patterns of use that indicate compulsive usage.”
 
Changed:
<
<
Ascertaining such “patterns” implies averaging across millions of minors’ online communications, so there is no real knowledge as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) may suggest that some health care professionals will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), there is still a grave need for firms' compliance,[9] so minors’ privacy breach is probably the only foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing adult content for removal,[11] so increasing vigilance against kids will likely result in more foreign grown-ups watching Americans. Developers can build bigger nets for these smaller fish, but some brain development will be lost for content flagged as “brainrot” whenever adults are not in on the joke. Beforehand, it was problematic that “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now, it is even worse that these criteria have given way to a set of factors that “significantly impacts” kids. Overall, the shift from probable to actual knowledge underscores how “covered platforms” will incur KYC obligations like mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]
>
>
Ascertaining such “patterns” implies averaging across millions of online communications, so nothing would really be learned as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) may suggest that some health care professionals will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), minors’ privacy breach is probably the only foreseeable harm within the risk, as "anonymized" data can be utilized and are inherently valuable. Notably, Meta cannot even automatically flag disturbing adult content for removal, so increasing firms' vigilance against kids will likely only result in more foreign grown-ups watching Americans. Developers can build stronger nets for smaller fish, but some brain development will be misapprehended as “brainrot” whenever adults are not in on the joke. Before, it was problematic that “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now, it is worse that these criteria have given way to a set of factors that “significantly impacts” kids. Overall, the shift from probable to actual knowledge underscores how “covered platforms” will probably incur KYC obligations like mandatory age verification, as the EFF has predicted.[4]
 

First Amendment, Second Act

Changed:
<
<

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were watching the family at home.[13] Given those devices’ “smart interactivity,”[14], it is quite possible Vizio could be liable as a “covered platform” under KOSA, and generally, the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is constitutional under the First Amendment,[16] the next step the 119th Congress should take is to reconsider KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology communicating with in-app users,[17] and regulating such platforms' use of large language models may prove a worthier way to support kids’ online presence without squelching their self-expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated Reddit users made some 550 million posts last year alone,[19] there was probably at least one philosophical haiku written by a kid.
>
>

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were watching the family at home.[5] Today, Vizio's business model depends on customer data which could make it liable as a “covered platform” under KOSA, and generally, the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” regulated under Sec. 101(11)?).[6] Assuming arguendo that KOSA is constitutional under the First Amendment,[7] the 119th Congress should seriously reconsider KOSA’s policy goals. Recently, social media companies like Meta have publicly announced "AI agents" communicating with in-app users, and guarding against such use of large language models may better support kids’ online engagement without suffocating their self-expression. After all, statistical models are poor proxies for communicative genius, and where G2 estimated Reddit users made some 550 million posts last year alone, there was probably at least one philosophical haiku written by a kid.
 

Endnotes:

Changed:
<
<
  1. Aristotle, Poet. 1447b.
  2. Ibid. Literally, a “physiologist,” as Aristotle says “φυσιόλογος,” which often differentiates the pre-Socratic from the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
  3. Word count was parsed programmatically from Perseus; page count comes from Penguin’s reprint (1997).
  4. Aristotle, Poetics, tr. S. H. Butcher, Pennsylvania Press (2000), p. 28: “there is at times no word in existence; still the metaphor may be used.”
  5. Ibid, p. 38.
  6. Ibid.
  7. Robert Katzmann, Judging Statutes. Oxford University Press, 2014.
  8. Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al. (where an AI algorithm developed by nH Predict—now defunct—and employed by United Healthcare allegedly carried a 90% error rate in judging insurance claims).
  9. Cecilia Kang, “F.T.C. Study Finds ‘Vast Surveillance’ of Social Media Users,” New York Times, September 19, 2024.
  10. Nate Anderson, "Anonymized" data really isn't—and here's why not, Ars Technica, September 8, 2009.
  11. Betsy Reed, “More than 140 Kenya Facebook Moderators Diagnosed with Severe PTSD,” The Guardian, Guardian News and Media, December 18, 2024.
  12. Jason Kelley, "Kids Online Safety Act continues to threaten our rights online: 2024 in Review," Electronic Frontier Foundation, January 1, 2025 (n.b. Sec. 107(a) also authorizes a joint report between the FTC and Commerce Department on age verification).
  13. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019), p. 170.
  14. Richard Lawler, “Vizio Makes Nearly as Much Money from Ads and Data as It Does from TVs,” Engadget, May 12, 2021.
>
>
  1. Literally, a “physiologist,” as Aristotle says “φυσιόλογος,” versus the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
  2. Aristotle, Poetics, p. 38: “there is at times no word in existence; still the metaphor may be used.”
  3. Ibid, p. 28.
  4. n.b. Sec. 107(a) authorizes a joint report between the FTC and Commerce Department on age verification.
  5. Zuboff, p. 170.
 
  1. Zuboff, p. 171.
Changed:
<
<
  1. NetChoice? v. Bonta (where the Ninth Circuit upheld a preliminary injunction against California’s Age Appropriate Design Code Act’s Data Protection Impact Assessment (DPIA) requirement resembling KOSA’s “duty of care”).
  2. Miles Klee, “Facebook and Instagram to Unleash AI-Generated ‘users’ No One Asked For,” Rolling Stone, December 31, 2024.
  3. Noam Chomsky, Ian Roberts and Jeffrey Watumull, “The False Promise of ChatGPT,” New York Times, March 8, 2023.
  4. Sagaar Joshi, “51 Reddit Statistics to Analyze The Internet’s Front Page,” G2, October 4, 2024.
>
>
  1. See NetChoice v. Bonta (where the Ninth Circuit upheld a preliminary injunction against California’s Age Appropriate Design Code Act’s Data Protection Impact Assessment (DPIA) requirement resembling KOSA’s “duty of care”).
 



MichaelMacKaySecondEssay 8 - 15 Feb 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

Corrupting the Youth: KOSA and Greek Philosophy

Line: 9 to 9
 
In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.
Changed:
<
<
Put differently, rooting out all the “harms” by KOSA's duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds inquiry when words themselves contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context, and by that measure, KOSA’s requirement that platforms should monitor patterns of children’s usage and publicly disclose such information treats online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
>
>
Put differently, rooting out all the “harms” under KOSA's duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds inquiry when words themselves contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context, and by that measure, KOSA’s requirement that platforms should monitor patterns of children’s usage and publicly disclose such information treats online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
 

One Flaw, Two Bills

Changed:
<
<

Introduced in 2022, KOSA infantilizes online expression as something to be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
>
>

Introduced in 2022, KOSA infantilizes online expression as something that can be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
 "A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors [emphasis]”
Changed:
<
<
Consider how the bolded portion can be read (1) narrowly, placing a duty on the sorts of design features that prevent and mitigate harms or (2) broadly, imposing a duty on the creation and implementation of any design feature, in order that the covered platform may prevent and mitigate harms). The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour meant to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) would read “mitigate” out of the statute (as both preventing and mitigating harms would exempt design features that merely mitigate such harms from KOSA's “duty of care”). Traditional rules of statutory construction would disfavor uncharitable interpretations,[7] but KOSA’s proposed apparatus for collecting and crunching data is equally prone to miss such nuance in minors' speech.[8] Critically, nowhere is the mechanical approach to online speech more pernicious than in the Senate's amendment on “compulsive usage.”
>
>
Consider how the bolded portion can be read (1) narrowly, placing a duty on just the design features that prevent and mitigate harms, or (2) broadly, imposing a duty on the creation and implementation of any design feature, in order that the covered platform may prevent and mitigate harms. The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour so as to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) reads “mitigate” out of the statute (since a duty that attaches only to features that both prevent and mitigate exempts design features that merely mitigate). Traditional rules of statutory construction would disfavor such interpretations,[7] but KOSA’s proposed apparatus for collecting and crunching data is equally uncharitable and prone to miss the nuances in minors' speech.[8] Critically, nowhere is that mechanical approach to online expression more pernicious than in the Senate's latest amendment.
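The logical-operator reading can be made concrete with a toy predicate (a hypothetical sketch for illustration only; `prevents` and `mitigates` stand in for whatever factual showings the statute would actually require):

```python
# Toy model of Sec. 101(2)(a)'s "prevent and mitigate" read as logical AND.
# Under this reading, the duty of care attaches only to a design feature
# that does BOTH -- so "mitigate" drops out as an independent trigger.
def duty_attaches(prevents: bool, mitigates: bool) -> bool:
    return prevents and mitigates

# A feature that both prevents and mitigates a harm triggers the duty...
assert duty_attaches(prevents=True, mitigates=True) is True
# ...but a feature that merely mitigates (without preventing) is exempt.
assert duty_attaches(prevents=False, mitigates=True) is False
```

The toy shows why the conjunctive reading is uncharitable: any feature short of full prevention escapes the duty entirely.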
 

One Amendment, Two Compulsions

Changed:
<
<

In December 2024, the Senate hardened kids’ virtual cages. Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual." Yet, how exactly is a “covered platform” to know what really impacts the lives of children under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says what “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”
>
>

In December 2024, the Senate hardened kids’ virtual cages while softening some "harms" for covered platforms, as language like "predatory" was scrubbed. Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes [sic] an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual." Yet, how exactly is a “covered platform” to know what really impacts the lives of kids under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says: “(III) Patterns of use that indicate compulsive usage.”
 
Changed:
<
<
Ascertaining such “patterns” implies averaging across millions of minors’ online communications and footprints, so there is no real knowledge gained as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must still be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) suggests that some health care professional will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), there is a need for “covered platforms” to ensure compliance,[9] so minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing adult content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign grown-ups watching American kids. Developers can build tighter nets for these smaller fish, but some brain development will likely be confused for “brainrot” whenever adults are not in on the joke. It was problematic when “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now, it is even worse as those criteria have given way to a set of factors that “significantly impacts” kids. Thus, the change from a probable to actual knowledge underscores how “covered platforms” will ultimately incur KYC obligations like mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]
>
>
Ascertaining such “patterns” implies averaging across millions of minors’ online communications, so there is no real knowledge as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) may suggest that health care professionals will play some role in guiding FTC enforcement (“clinically diagnosable symptoms”), covered platforms must still police compliance themselves,[9] so a breach of minors’ privacy is probably the only foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing adult content for removal,[11] so increasing vigilance against kids will likely result in more foreign grown-ups watching American children. Developers can build bigger nets for these smaller fish, but some brain development will be lost to content flagged as “brainrot” whenever adults are not in on the joke. It was problematic enough that “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now it is even worse that those criteria have given way to a set of factors that “significantly impacts” kids. Overall, the shift from probable to actual knowledge underscores how “covered platforms” will incur KYC obligations like mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]
 

First Amendment, Second Act

Changed:
<
<

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that the TVs were watching the family at home.[13] Given those devices’ “smart interactivity,”[14], it is unclear whether Vizio would be liable under KOSA as a “covered platform”, but the ever-expanding IOT tends to complicate KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is constitutional under the First Amendment,[16] the next step the 119th Congress should take is to reconsider KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology communicating with in-app users,[17] and such platforms' use of large language models may be a worthier goal to proponents of supporting kids’ online presence without squelching their self-expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated users made some 550 million posts on Reddit last year alone,[19] there was probably at least one philosophical haiku written by a kid.
>
>

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were watching the family at home.[13] Given those devices’ “smart interactivity,”[14], it is quite possible Vizio could be liable as a “covered platform” under KOSA, and generally, the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is constitutional under the First Amendment,[16] the next step the 119th Congress should take is to reconsider KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology communicating with in-app users,[17] and regulating such platforms' use of large language models may prove a worthier way to support kids’ online presence without squelching their self-expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated Reddit users made some 550 million posts last year alone,[19] there was probably at least one philosophical haiku written by a kid.
 

Endnotes:

  1. Aristotle, Poet. 1447b.
Changed:
<
<
  1. Ibid. Technically, a “physiologist,” as Aristotle says “φυσιόλογος,” which often differentiates the pre-Socratic from the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
>
>
  1. Ibid. Literally, a “physiologist,” as Aristotle says “φυσιόλογος,” which often differentiates the pre-Socratic from the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
 
  1. Word count was parsed programmatically from Perseus; page count comes from Penguin’s reprint (1997).
  2. Aristotle, Poetics, tr. S. H. Butcher, Pennsylvania Press (2000), p. 28: “there is at times no word in existence; still the metaphor may be used.”
  3. Ibid, p. 38.

MichaelMacKaySecondEssay 7 - 15 Feb 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

Corrupting the Youth: KOSA and Greek Philosophy

Line: 9 to 9
 
In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.
Changed:
<
<
Put differently, rooting out all the “harms” under KOSA by its duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds inquiry when words themselves contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context, and by that measure, KOSA’s requirement that platforms should monitor patterns of children’s usage and publicly disclose such information treats online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
>
>
Put differently, rooting out all the “harms” by KOSA's duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds inquiry when words themselves contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context, and by that measure, KOSA’s requirement that platforms should monitor patterns of children’s usage and publicly disclose such information treats online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
 

One Flaw, Two Bills


MichaelMacKaySecondEssay 6 - 15 Feb 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

Corrupting the Youth: KOSA and Greek Philosophy

Line: 7 to 7
 

One Poet, Two Greeks

Changed:
<
<

In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are composed entirely of hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction—despite apparent similarities—is increasingly relevant facing a new regime of online censorship, as the Kids Online Safety Act (KOSA) threatens to ratchet up minors' surveillance and mistake measurement for meaning.
>
>

In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.
 
Changed:
<
<
Put differently, rooting out all the “harms” under KOSA by its duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds real inquiry when words contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Applying statistical models in a top-down manner tends to affix meaning rather than infer meaning from text in context, and by that measure, KOSA’s requirement that platforms monitor patterns of children’s usage and publicly disclose such information appears to treat online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
>
>
Put differently, rooting out all the “harms” under KOSA by its duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds inquiry when words themselves contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context, and by that measure, KOSA’s requirement that platforms should monitor patterns of children’s usage and publicly disclose such information treats online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
 

One Flaw, Two Bills

Changed:
<
<

Introduced in 2022, KOSA ironically infantilizes online expression as something to be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
>
>

Introduced in 2022, KOSA infantilizes online expression as something to be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
 "A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors [emphasis]”
Changed:
<
<
Consider how the bolded portion of KOSA can be read as (1) a single objective genitive noun phrase (effectively, imposing a duty on any design feature that prevents and mitigates harms) or (2) a shorter such phrase followed by an underlined purpose clause (ergo, placing a duty on creating and implementing any design feature, in order that the covered platform may prevent and mitigate harms). The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) tends to read “mitigate” out of the statute, since prevention exceeds mitigation (thus, both preventing and mitigating harms would exempt some design features that merely mitigate such harms from the “duty of care”). Traditional rules of statutory construction say that such uncharitable interpretations should be avoided,[7] but KOSA’s proposed apparatus for collecting and crunching data is prone to miss such nuance in minors' speech.[8] Critically, nowhere is the mechanical approach to online activity more pernicious than in the Senate amendment on “compulsive usage.”
>
>
Consider how the bolded portion can be read (1) narrowly, placing a duty on the sorts of design features that prevent and mitigate harms or (2) broadly, imposing a duty on the creation and implementation of any design feature, in order that the covered platform may prevent and mitigate harms). The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour meant to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) would read “mitigate” out of the statute (as both preventing and mitigating harms would exempt design features that merely mitigate such harms from KOSA's “duty of care”). Traditional rules of statutory construction would disfavor uncharitable interpretations,[7] but KOSA’s proposed apparatus for collecting and crunching data is equally prone to miss such nuance in minors' speech.[8] Critically, nowhere is the mechanical approach to online speech more pernicious than in the Senate's amendment on “compulsive usage.”
 

One Amendment, Two Compulsions

Changed:
<
<

In December 2024, the Senate amended KOSA to soften some “harms” for platforms (e.g. striking “predatory… marketing practices” from Sec. 103) but at the expense of hardening kids’ virtual cages. Previously, in Sec. 101, the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” How exactly is a “covered platform” to know what genuinely impacts the lives of children under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says that “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”
>
>

In December 2024, the Senate hardened kids’ virtual cages. Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual." Yet, how exactly is a “covered platform” to know what really impacts the lives of children under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says what “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”
 
Changed:
<
<
Again, ascertaining such “patterns” implies averaging across millions of minors’ uploaded content and online footprints, so there is no real knowledge as to any one minor’s particular use of "covered platforms" like Discord or Reddit. Blindly, though, firms are required to be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) (“clinically diagnosable symptoms”) suggests that some doctors may play a role in guiding FTC enforcement, the need for “covered platforms” to ensure compliance under vague parameters of "compulsive" means data collection will likely be exhaustive, as firms err on the side of caution,[9] meaning that minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign adults watching American kids. Surely, developers can build bigger nets for smaller fish, but some brain development will inevitably be confused for “brainrot” when adults are not in on the joke. Before the amendment, “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but they have been replaced by a set of factors that “significantly impacts” kids, where the change from probable to actual knowledge underscores that “covered platforms” will ultimately incur KYC obligations like mandatory age verification to determine who is using their apps and how, as the Electronic Frontier Foundation has predicted.[12]
>
>
Ascertaining such “patterns” implies averaging across millions of minors’ online communications and footprints, so there is no real knowledge gained as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must still be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) suggests that some health care professional will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), there is a need for “covered platforms” to ensure compliance,[9] so minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing adult content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign grown-ups watching American kids. Developers can build tighter nets for these smaller fish, but some brain development will likely be confused for “brainrot” whenever adults are not in on the joke. It was problematic when “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now, it is even worse as those criteria have given way to a set of factors that “significantly impacts” kids. Thus, the change from a probable to actual knowledge underscores how “covered platforms” will ultimately incur KYC obligations like mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]
 

First Amendment, Second Act

Changed:
<
<

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were essentially watching the family at home.[13] Despite its devices’ “smart interactivity,”[14], it is unclear whether such a company would be liable as a “covered platform” under KOSA, and the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is even constitutional under the First Amendment,[16] the next step that the 119th Congress should take is to reconsider KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology communicating with in-app users,[17] so restricting such platforms' use of large language models may be a worthier goal in promoting kids’ well-being without harming their self-expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated that users made some 550 million posts on Reddit last year alone, there was probably at least one philosophical haiku written by a kid.[19]
>
>

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that the TVs were watching the family at home.[13] Given those devices’ “smart interactivity,”[14], it is unclear whether Vizio would be liable under KOSA as a “covered platform”, but the ever-expanding IOT tends to complicate KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is constitutional under the First Amendment,[16] the next step the 119th Congress should take is to reconsider KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology communicating with in-app users,[17] and such platforms' use of large language models may be a worthier goal to proponents of supporting kids’ online presence without squelching their self-expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated users made some 550 million posts on Reddit last year alone,[19] there was probably at least one philosophical haiku written by a kid.
 

Endnotes:


MichaelMacKaySecondEssay 5 - 22 Jan 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

Corrupting the Youth: KOSA and Greek Philosophy

-- By MichaelMacKay - 11 Jan 2025

Changed:
<
<

One Poet, Two Greeks

>
>

One Poet, Two Greeks

 
In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are composed entirely of hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction—despite apparent similarities—is increasingly relevant in the face of a new regime of online censorship, as the Kids Online Safety Act (KOSA) threatens to ratchet up surveillance of minors and mistake measurement for meaning.
Changed:
<
<
Put differently, rooting out all the “harms” under KOSA by reasonable care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but such quantification confounds real inquiry when words themselves contain multitudes.[4] Thus, “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Applying statistical models in a top-down manner tends to affix meaning rather than infer meaning from the text in context, and by that measure, KOSA’s requirement that platforms monitor patterns of children’s usage and publicly disclose such information appears to treat online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
>
>
Put differently, rooting out all the “harms” under KOSA by its duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds real inquiry when words contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Applying statistical models in a top-down manner tends to affix meaning rather than infer meaning from text in context, and by that measure, KOSA’s requirement that platforms monitor patterns of children’s usage and publicly disclose such information appears to treat online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
 

One Flaw, Two Bills

Line: 24 to 24
 
In December 2024, the Senate amended KOSA to soften some “harms” for platforms (e.g. striking “predatory… marketing practices” from Sec. 103) but at the expense of hardening kids’ virtual cages. Previously, in Sec. 101, the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” How exactly is a “covered platform” to know what genuinely impacts the lives of children under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says that “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”
Changed:
<
<
Again, ascertaining such “patterns” implies averaging across millions of minors’ uploaded content and online footprints, so there is no real knowledge as to any one minor’s particular use of "covered platforms" like Discord or Reddit. Blindly, though, firms are required to be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) (“clinically diagnosable symptoms”) suggests that some doctors may play a role in guiding FTC enforcement, the need for “covered platforms” to ensure compliance under vague parameters of "compulsive" means data collection will likely be exhaustive, as firms err on the side of caution,[9] meaning that minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign adults watching American kids. Surely, developers can build bigger nets for smaller fish, but some brain development will inevitably be confused for “brainrot” when adults are not in on the joke. Before the amendment, “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but they have been replaced by a set of factors that “significantly impacts” kids, where the change from probable to actual knowledge underscores that “covered platforms” will ultimately incur KYC obligations like mandatory age verification to know who is using their apps and how, as the Electronic Frontier Foundation has predicted.[12]
>
>
Again, ascertaining such “patterns” implies averaging across millions of minors’ uploaded content and online footprints, so there is no real knowledge as to any one minor’s particular use of "covered platforms" like Discord or Reddit. Blindly, though, firms are required to be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) (“clinically diagnosable symptoms”) suggests that some doctors may play a role in guiding FTC enforcement, the need for “covered platforms” to ensure compliance under vague parameters of "compulsive" means data collection will likely be exhaustive, as firms err on the side of caution,[9] meaning that minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign adults watching American kids. Surely, developers can build bigger nets for smaller fish, but some brain development will inevitably be confused for “brainrot” when adults are not in on the joke. Before the amendment, “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but they have been replaced by a set of factors that “significantly impacts” kids, where the change from probable to actual knowledge underscores that “covered platforms” will ultimately incur KYC obligations like mandatory age verification to determine who is using their apps and how, as the Electronic Frontier Foundation has predicted.[12]
 

First Amendment, Second Act

Changed:
<
<

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were essentially watching the family at home.[13] Despite its devices’ “smart interactivity,”[14] it is unclear whether such a company would be liable as a “covered platform” under KOSA, and the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is even constitutional under the First Amendment,[16] the next step that the 119th Congress should take would be reconsidering KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology interacting with in-app users,[17] so restricting such platforms' use of large language models may be a worthier goal in promoting kids’ well-being and online expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated that some 550 million posts were made on Reddit last year alone, there was probably at least one philosophical haiku written by a kid.[19]
>
>

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were essentially watching the family at home.[13] Despite its devices’ “smart interactivity,”[14] it is unclear whether such a company would be liable as a “covered platform” under KOSA, and the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is even constitutional under the First Amendment,[16] the next step that the 119th Congress should take is to reconsider KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology communicating with in-app users,[17] so restricting such platforms' use of large language models may be a worthier goal in promoting kids’ well-being without harming their self-expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated that users made some 550 million posts on Reddit last year alone, there was probably at least one philosophical haiku written by a kid.[19]
 

Endnotes:

  1. Aristotle, Poet. 1447b.
  2. Ibid. Technically, a “physiologist,” as Aristotle says “φυσιόλογος,” which often differentiates the pre-Socratic from the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
Changed:
<
<
  3. Word count was determined programmatically from Perseus; page count is Penguin’s reprint (1997).
>
>
  3. Word count was parsed programmatically from Perseus; page count comes from Penguin’s reprint (1997).
 
  4. Aristotle, Poetics, tr. S. H. Butcher, Pennsylvania Press (2000), p. 28: “there is at times no word in existence; still the metaphor may be used.”
  5. Ibid, p. 38.
  6. Ibid.

MichaelMacKaySecondEssay 4 - 17 Jan 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

Corrupting the Youth: KOSA and Greek Philosophy

Line: 16 to 16
 
Introduced in 2022, KOSA ironically infantilizes online expression as something to be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
Changed:
<
<
"A covered platform shall exercise reasonable care in the creation and implementation *of any design feature to prevent and mitigate the following harms to minors*… [emphasis]”
>
>
"A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors [emphasis]”
 Consider how the bolded portion of KOSA can be read as (1) a single objective genitive noun phrase (effectively, imposing a duty on any design feature that prevents and mitigates harms) or (2) a shorter such phrase followed by an underlined purpose clause (ergo, placing a duty on creating and implementing any design feature, in order that the covered platform may prevent and mitigate harms). The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) tends to read “mitigate” out of the statute, since prevention exceeds mitigation (thus, both preventing and mitigating harms would exempt some design features that merely mitigate such harms from the “duty of care”). Traditional rules of statutory construction say that such uncharitable interpretations should be avoided,[7] but KOSA’s proposed apparatus for collecting and crunching data is prone to miss such nuance in minors' speech.[8] Critically, nowhere is the mechanical approach to online activity more pernicious than in the Senate amendment on “compulsive usage.”
Line: 24 to 24
 
In December 2024, the Senate amended KOSA to soften some “harms” for platforms (e.g. striking “predatory… marketing practices” from Sec. 103) but at the expense of hardening kids’ virtual cages. Previously, in Sec. 101, the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” How exactly is a “covered platform” to know what genuinely impacts the lives of children under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says that “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”
Changed:
<
<
Again, ascertaining such “patterns” implies averaging across millions of minors’ uploaded content and online footprints, so there is no real knowledge as to any one minor’s particular use of "covered platforms" like Discord or Reddit. Blindly, though, firms are required to be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) (“clinically diagnosable symptoms”) suggests that some doctors may play a role in guiding FTC enforcement, the need for “covered platforms” to ensure compliance under vague parameters of "compulsive" means data collection will likely be exhaustive, as firms err on the side of caution,[9] meaning that minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign adults watching American kids. Surely, developers can build bigger nets for smaller fish, but some brain development will inevitably be confused for “brainrot” when adults are not in on the joke. Before the amendment, “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but they have been replaced by a set of factors that “significantly impacts” kids, where the change from probable to actual knowledge underscores that “covered platforms” will ultimately incur KYC obligations like mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]
>
>
Again, ascertaining such “patterns” implies averaging across millions of minors’ uploaded content and online footprints, so there is no real knowledge as to any one minor’s particular use of "covered platforms" like Discord or Reddit. Blindly, though, firms are required to be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) (“clinically diagnosable symptoms”) suggests that some doctors may play a role in guiding FTC enforcement, the need for “covered platforms” to ensure compliance under vague parameters of "compulsive" means data collection will likely be exhaustive, as firms err on the side of caution,[9] meaning that minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign adults watching American kids. Surely, developers can build bigger nets for smaller fish, but some brain development will inevitably be confused for “brainrot” when adults are not in on the joke. Before the amendment, “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but they have been replaced by a set of factors that “significantly impacts” kids, where the change from probable to actual knowledge underscores that “covered platforms” will ultimately incur KYC obligations like mandatory age verification to know who is using their apps and how, as the Electronic Frontier Foundation has predicted.[12]
 

First Amendment, Second Act

Changed:
<
<

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were essentially watching the family at home.[13] Despite its devices’ “smart interactivity,”[14] it is unclear whether such a company would be liable as a “covered platform” under KOSA, and the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is even constitutional under the First Amendment,[16] the next step that the 119th Congress should take would be reconsidering KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology interacting with in-app users,[17] so restricting such platforms use of large language models may be a worthier goal in promoting kids’ well-being and healthy online speech. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated that some 550 million posts were made on Reddit last year alone, there was probably at least one philosophical haiku written by a kid.[19]
>
>

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were essentially watching the family at home.[13] Despite its devices’ “smart interactivity,”[14] it is unclear whether such a company would be liable as a “covered platform” under KOSA, and the ever-expanding IOT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is even constitutional under the First Amendment,[16] the next step that the 119th Congress should take would be reconsidering KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology interacting with in-app users,[17] so restricting such platforms' use of large language models may be a worthier goal in promoting kids’ well-being and online expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated that some 550 million posts were made on Reddit last year alone, there was probably at least one philosophical haiku written by a kid.[19]
 

Endnotes:


MichaelMacKaySecondEssay 3 - 15 Jan 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Deleted:
<
<
It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted.
 

Corrupting the Youth: KOSA and Greek Philosophy

-- By MichaelMacKay - 11 Jan 2025


MichaelMacKaySecondEssay 2 - 15 Jan 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted.

Line: 25 to 25
 

One Amendment, Two Compulsions

Changed:
<
<

In December 2024, the Senate amended KOSA to soften some “harms” for platforms (e.g. striking “predatory… marketing practices” from Sec. 103) but at the expense of hardening kids’ virtual cages. Previously, in Sec. 101, the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” How exactly should a “covered platform” to know what genuinely impacts the lives of children under 13?—apparently, though commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says that “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”
>
>

In December 2024, the Senate amended KOSA to soften some “harms” for platforms (e.g. striking “predatory… marketing practices” from Sec. 103) but at the expense of hardening kids’ virtual cages. Previously, in Sec. 101, the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” How exactly is a “covered platform” to know what genuinely impacts the lives of children under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says that “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”
 
Changed:
<
<
Again, ascertaining such “patterns” implies averaging across millions of minors’ uploaded content and online footprints, so there is no real knowledge as to any one minor’s particular use of Discord or Reddit. Blindly, though, firms are required to be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) (“clinically diagnosable symptoms”) suggests that some doctors may play a role in guiding FTC enforcement, the need for “covered platforms” to ensure compliance also suggests that data collection will be exhaustive to err on the side of caution,[9] meaning that minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign adults watching American kids. Surely, developers can build bigger nets for smaller fish, but some brain development will inevitably be confused for “brainrot” when adults are not in on the joke. Before the amendment, “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” but has been replaced by a system that “significantly impacts” kids, which also suggests that “covered platforms” will ultimately incur certain KYC obligations such as mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]
>
>
Again, ascertaining such “patterns” implies averaging across millions of minors’ uploaded content and online footprints, so there is no real knowledge as to any one minor’s particular use of "covered platforms" like Discord or Reddit. Blindly, though, firms are required to be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) (“clinically diagnosable symptoms”) suggests that some doctors may play a role in guiding FTC enforcement, the need for “covered platforms” to ensure compliance under vague parameters of "compulsive" means data collection will likely be exhaustive, as firms err on the side of caution,[9] meaning that minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign adults watching American kids. Surely, developers can build bigger nets for smaller fish, but some brain development will inevitably be confused for “brainrot” when adults are not in on the joke. Before the amendment, “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but they have been replaced by a set of factors that “significantly impacts” kids, where the change from probable to actual knowledge underscores that “covered platforms” will ultimately incur KYC obligations like mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]
 

First Amendment, Second Act

Line: 47 to 47
 
  9. Cecilia Kang, “F.T.C. Study Finds ‘Vast Surveillance’ of Social Media Users,” New York Times, September 19, 2024.
  10. Nate Anderson, "Anonymized" data really isn't—and here's why not, Ars Technica, September 8, 2009.
  11. Betsy Reed, “More than 140 Kenya Facebook Moderators Diagnosed with Severe PTSD,” The Guardian, Guardian News and Media, December 18, 2024.
Changed:
<
<
  12. Sec. 107(a) also authorizes a joint report between the FTC and Commerce Department on age verification.
>
>
  12. Jason Kelley, "Kids Online Safety Act continues to threaten our rights online: 2024 in Review," Electronic Frontier Foundation, January 1, 2025 (n.b. Sec. 107(a) also authorizes a joint report between the FTC and Commerce Department on age verification).
 
  13. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019), p. 170.
  14. Richard Lawler, “Vizio Makes Nearly as Much Money from Ads and Data as It Does from TVs,” Engadget, May 12, 2021.
  15. Zuboff, p. 171.

MichaelMacKaySecondEssay 1 - 11 Jan 2025 - Main.MichaelMacKay
Line: 1 to 1
Added:
>
>
META TOPICPARENT name="SecondEssay"
It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted.

Corrupting the Youth: KOSA and Greek Philosophy

-- By MichaelMacKay - 11 Jan 2025

One Poet, Two Greeks


In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are composed entirely of hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction—despite apparent similarities—is increasingly relevant facing a new regime of online censorship, as the Kids Online Safety Act (KOSA) threatens to ratchet up minors' surveillance and mistake measurement for meaning.

Put differently, rooting out all the “harms” under KOSA by reasonable care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but such quantification confounds real inquiry when words themselves contain multitudes.[4] Thus, “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Applying statistical models in a top-down manner tends to affix meaning rather than infer meaning from the text in context, and by that measure, KOSA’s requirement that platforms monitor patterns of children’s usage and publicly disclose such information appears to treat online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]

One Flaw, Two Bills


Introduced in 2022, KOSA ironically infantilizes online expression as something to be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:

"A covered platform shall exercise reasonable care in the creation and implementation *of any design feature to prevent and mitigate the following harms to minors*… [emphasis]”

Consider how the bolded portion of KOSA can be read as (1) a single objective genitive noun phrase (effectively, imposing a duty on any design feature that prevents and mitigates harms) or (2) a shorter such phrase followed by an underlined purpose clause (ergo, placing a duty on creating and implementing any design feature, in order that the covered platform may prevent and mitigate harms). The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) tends to read “mitigate” out of the statute, since prevention exceeds mitigation (thus, both preventing and mitigating harms would exempt some design features that merely mitigate such harms from the “duty of care”). Traditional rules of statutory construction say that such uncharitable interpretations should be avoided,[7] but KOSA’s proposed apparatus for collecting and crunching data is prone to miss such nuance in minors' speech.[8] Critically, nowhere is the mechanical approach to online activity more pernicious than in the Senate amendment on “compulsive usage.”

One Amendment, Two Compulsions


In December 2024, the Senate amended KOSA to soften some “harms” for platforms (e.g. striking “predatory… marketing practices” from Sec. 103) but at the expense of hardening kids’ virtual cages. Previously, in Sec. 101, the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” How exactly should a “covered platform” to know what genuinely impacts the lives of children under 13?—apparently, though commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says that “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”

Again, ascertaining such “patterns” implies averaging across millions of minors’ uploaded content and online footprints, so there is no real knowledge as to any one minor’s particular use of Discord or Reddit. Blindly, though, firms are required to be intrusive to establish what is “compulsive,” so while Sec. 102(a)(II) (“clinically diagnosable symptoms”) suggests that some doctors may play a role in guiding FTC enforcement, the need for “covered platforms” to ensure compliance also suggests that data collection will be exhaustive to err on the side of caution,[9] meaning that minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign adults watching American kids. Surely, developers can build bigger nets for smaller fish, but some brain development will inevitably be confused for “brainrot” when adults are not in on the joke. Before the amendment, “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” but has been replaced by a system that “significantly impacts” kids, which also suggests that “covered platforms” will ultimately incur certain KYC obligations such as mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]

First Amendment, Second Act


In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were essentially watching the family at home.[13] Despite its devices’ “smart interactivity,”[14] it is unclear whether such a company would be liable as a “covered platform” under KOSA, and the ever-expanding IoT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is even constitutional under the First Amendment,[16] the next step for the 119th Congress should be reconsidering KOSA’s policy goals. Recently, social media companies have publicly deployed AI technology that interacts with in-app users,[17] so restricting such platforms’ use of large language models may be a worthier goal in promoting kids’ well-being and healthy online speech. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated that some 550 million posts were made on Reddit last year alone, there was probably at least one philosophical haiku written by a kid.[19]

Endnotes:

  1. Aristotle, Poet. 1447b.
  2. Ibid. Technically, a “physiologist,” as Aristotle says “φυσιόλογος,” which often differentiates the pre-Socratic from the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
  3. Word count was determined programmatically from Perseus; page count is Penguin’s reprint (1997).
  4. Aristotle, Poetics, tr. S. H. Butcher, Pennsylvania Press (2000), p. 28: “there is at times no word in existence; still the metaphor may be used.”
  5. Ibid, p. 38.
  6. Ibid.
  7. Robert Katzmann, Judging Statutes, Oxford University Press (2014).
  8. Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al. (where AI algorithm developed by nH Predict—now defunct—and employed by United Healthcare allegedly carried a 90% error rate in judging insurance claims).
  9. Cecilia Kang, “F.T.C. Study Finds ‘Vast Surveillance’ of Social Media Users,” New York Times, September 19, 2024.
  10. Nate Anderson, “‘Anonymized’ Data Really Isn’t—and Here’s Why Not,” Ars Technica, September 8, 2009.
  11. Betsy Reed, “More than 140 Kenya Facebook Moderators Diagnosed with Severe PTSD,” The Guardian, December 18, 2024.
  12. Sec. 107(a) also authorizes a joint report between the FTC and Commerce Department on age verification.
  13. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019), p. 170.
  14. Richard Lawler, “Vizio Makes Nearly as Much Money from Ads and Data as It Does from TVs,” Engadget, May 12, 2021.
  15. Zuboff, p. 171.
  16. NetChoice v. Bonta (where the Ninth Circuit upheld a preliminary injunction against California’s Age Appropriate Design Code Act’s Data Protection Impact Assessment (DPIA) requirement resembling KOSA’s “duty of care”).
  17. Miles Klee, “Facebook and Instagram to Unleash AI-Generated ‘users’ No One Asked For,” Rolling Stone, December 31, 2024.
  18. Noam Chomsky, Ian Roberts and Jeffrey Watumull, “The False Promise of ChatGPT,” New York Times, March 8, 2023.
  19. Sagaar Joshi, “51 Reddit Statistics to Analyze The Internet’s Front Page,” G2, October 4, 2024.

