Law in the Internet Society

MichaelMacKaySecondEssay 7 - 15 Feb 2025 - Main.MichaelMacKay

Corrupting the Youth: KOSA and Greek Philosophy

-- By MichaelMacKay - 11 Jan 2025


One Poet, Two Greeks

In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.
Put differently, rooting out all the “harms” under KOSA’s duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds inquiry when words themselves contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what a text means in context, and by that measure, KOSA’s requirement that platforms monitor patterns of children’s usage and publicly disclose such information treats online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
 

One Flaw, Two Bills



Introduced in 2022, KOSA infantilizes online expression as something to be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
 "A covered platform shall exercise reasonable care in the creation and implementation *of any design feature to prevent and mitigate the following harms to minors*… [emphasis]”
Consider how the bolded portion can be read (1) narrowly, placing a duty only on the sorts of design features that prevent and mitigate harms, or (2) broadly, imposing a duty on the creation and implementation of any design feature, in order that the covered platform may prevent and mitigate harms. The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) would read “mitigate” out of the statute: since prevention exceeds mitigation, requiring both would exempt design features that merely mitigate such harms from KOSA’s “duty of care.” Traditional rules of statutory construction would disfavor such uncharitable interpretations,[7] but KOSA’s proposed apparatus for collecting and crunching data is equally prone to miss such nuance in minors’ speech.[8] Critically, nowhere is the mechanical approach to online speech more pernicious than in the Senate’s amendment on “compulsive usage.”
 

One Amendment, Two Compulsions


In December 2024, the Senate amended KOSA and hardened kids’ virtual cages. Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual.” Yet how exactly is a “covered platform” to know what really impacts the lives of children under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says what “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”
 
Ascertaining such “patterns” implies averaging across millions of minors’ online communications and footprints, so there is no real knowledge gained as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must still be intrusive to establish what is “compulsive”: while Sec. 102(a)(II) suggests that health care professionals will play some role in guiding FTC enforcement (“clinically diagnosable symptoms”), the need for “covered platforms” to ensure compliance under so vague a standard means data collection will likely be exhaustive as firms err on the side of caution,[9] so minors’ privacy breach is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing adult content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign grown-ups watching American kids. Developers can build tighter nets for these smaller fish, but some brain development will likely be confused for “brainrot” whenever adults are not in on the joke. It was problematic enough when “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but it is worse now that those criteria have given way to a set of factors that “significantly impacts” kids. The change from probable to actual knowledge underscores how “covered platforms” will ultimately incur KYC obligations like mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]
 

First Amendment, Second Act

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were essentially watching the family at home.[13] Given those devices’ “smart interactivity,”[14] it is unclear whether Vizio would be liable under KOSA as a “covered platform,” and the ever-expanding IOT further complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is constitutional under the First Amendment,[16] the next step the 119th Congress should take is to reconsider KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology communicating with in-app users,[17] and restricting such platforms’ use of large language models may be a worthier goal for proponents of supporting kids’ online presence without squelching their self-expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated users made some 550 million posts on Reddit last year alone,[19] there was probably at least one philosophical haiku written by a kid.
 

Endnotes:


  1. Aristotle, Poet. 1447b.
  2. Ibid. Technically, a “physiologist,” as Aristotle says “φυσιόλογος,” which often differentiates the pre-Socratic from the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
  3. Word count was parsed programmatically from Perseus; page count comes from Penguin’s reprint (1997).
 
  4. Aristotle, Poetics, tr. S. H. Butcher, Pennsylvania Press (2000), p. 28: “there is at times no word in existence; still the metaphor may be used.”
  5. Ibid, p. 38.
  6. Ibid.

MichaelMacKaySecondEssay 4 - 17 Jan 2025 - Main.MichaelMacKay
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"

Corrupting the Youth: KOSA and Greek Philosophy

Line: 16 to 16
 
Introduced in 2022, KOSA ironically infantilizes online expression as something to be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:
Changed:
<
<
"A covered platform shall exercise reasonable care in the creation and implementation *of any design feature to prevent and mitigate the following harms to minors*… [emphasis]”
>
>
"A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors [emphasis]”
 Consider how the bolded portion of KOSA can be read as (1) a single objective genitive noun phrase (effectively, imposing a duty on any design feature that prevents and mitigates harms) or (2) a shorter such phrase followed by an underlined purpose clause (ergo, placing a duty on creating and implementing any design feature, in order that the covered platform may prevent and mitigate harms). The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) tends to read “mitigate” out of the statute, since prevention exceeds mitigation (thus, both preventing and mitigating harms would exempt some design features that merely mitigate such harms from the “duty of care”). Traditional rules of statutory construction say that such uncharitable interpretations should be avoided,[7] but KOSA’s proposed apparatus for collecting and crunching data is prone to miss such nuance in minors' speech.[8] Critically, nowhere is the mechanical approach to online activity more pernicious than in the Senate amendment on “compulsive usage.”
Line: 24 to 24
 
In December 2024, the Senate amended KOSA to soften some “harms” for platforms (e.g. striking “predatory… marketing practices” from Sec. 103) but at the expense of hardening kids’ virtual cages. Previously, in Sec. 101, the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis] one or more major life activities of an individual, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” How exactly is a “covered platform” to know what genuinely impacts the lives of children under 13?—apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says that “covered platforms” must know: “(III) Patterns of use that indicate compulsive usage.”
Again, ascertaining such “patterns” implies averaging across millions of minors’ uploaded content and online footprints, so there is no real knowledge of any one minor’s particular use of “covered platforms” like Discord or Reddit. Firms are nonetheless required to be intrusive to establish what is “compulsive.” While Sec. 102(a)(II) (“clinically diagnosable symptoms”) suggests that doctors may play some role in guiding FTC enforcement, the need for “covered platforms” to ensure compliance under vague parameters of “compulsive” means data collection will likely be exhaustive, as firms err on the side of caution,[9] so the breach of minors’ privacy is the only real foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing content for removal,[11] so increasing platforms’ vigilance against kids’ “compulsive usage” through proprietary algorithms that prove too much will probably lead to more foreign adults watching American kids. Surely, developers can build bigger nets for smaller fish, but some brain development will inevitably be confused for “brainrot” when adults are not in on the joke. Before the amendment, “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion; those factors have been replaced by a set that “significantly impacts” kids, and the shift from probable to actual knowledge underscores that “covered platforms” will ultimately incur KYC obligations like mandatory age verification to know who is using their apps and how, as the Electronic Frontier Foundation has predicted.[12]
 

First Amendment, Second Act


In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were essentially watching the family at home.[13] Despite its devices’ “smart interactivity,”[14] it is unclear whether such a company would be liable as a “covered platform” under KOSA, and the ever-expanding IoT complicates KOSA’s paternalistic goals (e.g., if Mattel were to sell at least 10 million “smart” Barbie dream homes that children play with, why would those not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is even constitutional under the First Amendment,[16] the next step for the 119th Congress should be to reconsider KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology interacting with in-app users,[17] so restricting such platforms’ use of large language models may be a worthier goal in promoting kids’ well-being and online expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated that some 550 million posts were made on Reddit last year alone, there was probably at least one philosophical haiku written by a kid.[19]
 

Endnotes:


Corrupting the Youth: KOSA and Greek Philosophy

-- By MichaelMacKay - 11 Jan 2025

