Corrupting the Youth: KOSA and Greek Philosophy

In Aristotle’s Poetics, why is Homer a poet but not Empedocles? Both Greeks’ works are written in hexameter verse, but for Aristotle, poetry does not turn on prosody alone.[1] Rather, Empedocles is a philosopher,[2] and today, that distinction is increasingly relevant, as the Kids Online Safety Act (KOSA) threatens to mistake measurement for meaning.
Put differently, rooting out all the “harms” under KOSA’s duty of care is like rounding up all the poets in Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not necessarily count the most. What survives of Poetics is approximately 8,933 Attic Greek words, resulting in a paperback English edition of 144 pages (7.92 x 5.04″),[3] but quantification confounds inquiry when words themselves contain multitudes.[4] Aristotle cautions that “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[5] Hence, applying statistical models in a top-down manner tends to affix meaning rather than infer what the text means in context, and by that measure, KOSA’s requirement that platforms monitor patterns of children’s usage and publicly disclose such information treats online expression as univocal—forgetting that “when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage.”[6]
One Flaw, Two Bills
Introduced in 2022, KOSA infantilizes online expression as something that can be aggregated and averaged, which overburdens the law’s “duty of care” under Sec. 101(2)(a) (“Prevention of Harm to Minors”) in both House and Senate bills:

“A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors… [emphasis added]”
Consider how the bolded portion can be read (1) narrowly, placing a duty on just the design features that prevent and mitigate harms, or (2) broadly, imposing a duty on the creation and implementation of any design feature, in order that the covered platform may prevent and mitigate harms. The second interpretation likely implicates most changes to UI/UX, whereas the first imposes liability on a smaller subset of features (e.g. a new default setting that automatically changes an app’s color temperature by the hour so as to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) reads “mitigate” out of the statute, as requiring that a feature both prevent and mitigate harms exempts design features that merely mitigate them. Traditional rules of statutory construction would disfavor such interpretations,[7] but KOSA’s proposed apparatus for collecting and crunching data is equally uncharitable and prone to miss the nuances in minors’ speech.[8] Critically, nowhere is that mechanical approach to online expression more pernicious than in the Senate’s latest amendment.
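For illustration only, the two readings can be modeled as predicates over a hypothetical design feature; this is a minimal sketch, and nothing in it reflects the bill’s text beyond the quoted phrase:

```python
# Hypothetical model of the two readings of "prevent and mitigate" in
# Sec. 101(2)(a). Everything here is illustrative, not statutory text.

from dataclasses import dataclass

@dataclass
class DesignFeature:
    prevents_harm: bool
    mitigates_harm: bool

def duty_attaches_conjunctive(f: DesignFeature) -> bool:
    # Reading "and" as a logical operator: the duty reaches only
    # features that BOTH prevent AND mitigate a harm.
    return f.prevents_harm and f.mitigates_harm

def duty_attaches_distributive(f: DesignFeature) -> bool:
    # The charitable reading: the duty reaches features that prevent
    # OR mitigate, so "mitigate" is not read out of the statute.
    return f.prevents_harm or f.mitigates_harm

# The color-temperature default from above: it curbs (mitigates)
# nighttime usage without preventing it outright.
night_mode = DesignFeature(prevents_harm=False, mitigates_harm=True)

assert not duty_attaches_conjunctive(night_mode)  # mitigate-only feature escapes
assert duty_attaches_distributive(night_mode)     # mitigate-only feature is covered
```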
One Amendment, Two Compulsions

In December 2024, the Senate hardened kids’ virtual cages while softening some “harms” for covered platforms, as language like “predatory” was scrubbed. Previously, Sec. 101 of the bipartisan bill had defined “compulsive usage” as “any response stimulated by external factors that causes [sic] an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression.” But now, it is “a persistent and repetitive use of a covered platform that significantly impacts [emphasis added] one or more major life activities of an individual.” Yet how exactly is a “covered platform” to know what really impacts the lives of kids under 13? Apparently, through commercial surveillance, because Sec. 102(a) (“Duty of Care”) now says: “(III) Patterns of use that indicate compulsive usage.”
Ascertaining such “patterns” implies averaging across millions of minors’ online communications, so there is no real knowledge as to any one particular minor’s use of Discord or Reddit. Blindly, though, such firms must be intrusive to establish what is “compulsive,” and while Sec. 102(a)(II) may suggest that some health care professionals will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), the platforms themselves still bear a grave compliance burden,[9] so the breach of minors’ privacy is probably the only foreseeable harm within the risk.[10] Notably, Meta cannot even automatically flag disturbing adult content for removal,[11] so increasing vigilance against kids will likely result in more foreign grown-ups watching Americans. Developers can build bigger nets for these smaller fish, but some brain development will be lost for content flagged as “brainrot” whenever adults are not in on the joke. Beforehand, it was problematic that “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion, but now, it is even worse that these criteria have given way to a set of factors that “significantly impacts” kids. Overall, the shift from probable to actual knowledge underscores how “covered platforms” will incur KYC obligations like mandatory age verification, as the Electronic Frontier Foundation has predicted.[12]
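To see how little an averaged “pattern of use” says about any one child, consider this minimal sketch; all names, numbers, and thresholds are invented for illustration, and nothing here reflects the bill’s actual mechanics:

```python
# Hypothetical sketch of flagging "patterns of use that indicate
# compulsive usage." All data and thresholds are invented.

from statistics import mean

# Daily session counts for three hypothetical minors over one week.
usage = {
    "minor_a": [2, 3, 2, 4, 3, 2, 3],    # light, steady browsing
    "minor_b": [0, 0, 0, 0, 0, 40, 42],  # weekend-only gaming bursts
    "minor_c": [8, 8, 8, 8, 8, 8, 8],    # nightly homework-forum visits
}

THRESHOLD = 5.0  # arbitrary cutoff in mean sessions per day

def flag_compulsive(week: list[int]) -> bool:
    # The averaged "pattern" carries no information about why the minor
    # was online, so meaning is affixed rather than inferred in context.
    return mean(week) > THRESHOLD

for name, week in usage.items():
    print(name, flag_compulsive(week))
# minor_b (gaming) and minor_c (homework) are both flagged; the metric
# cannot tell a compulsion from a study routine.
```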
First Amendment, Second Act

In The Age of Surveillance Capitalism, Zuboff recounts the FTC’s $2.2M settlement with Vizio in 2017 after it was discovered that its TVs were watching the family at home.[13] Given those devices’ “smart interactivity,”[14] it is quite possible Vizio could be liable as a “covered platform” under KOSA, and generally, the ever-expanding IoT complicates KOSA’s paternalistic goals (e.g. should Mattel sell at least 10 million “smart” Barbie dream homes that children play with, why would that not be an “online video game” under Sec. 101(11)?).[15] Assuming arguendo that KOSA is constitutional under the First Amendment,[16] the next step the 119th Congress should take is to reconsider KOSA’s policy goals. Recently, social media companies have publicly displayed AI technology communicating with in-app users,[17] and regulating such platforms’ use of large language models may prove a worthier way to support kids’ online presence without squelching their self-expression. After all, statistical models are poor proxies for communicative genius,[18] and where G2 estimated Reddit users made some 550 million posts last year alone,[19] there was probably at least one philosophical haiku written by a kid.
Endnotes:
- Aristotle, Poet. 1447b.
- Ibid. Literally, a “physiologist,” as Aristotle says “φυσιόλογος,” which often differentiates the pre-Socratic from the kind of philosopher of Aristotle’s day (“φῐλόσοφος”).
- Word count was parsed programmatically from Perseus (a sketch of such a count follows these endnotes); page count comes from Penguin’s reprint (1997).
- Aristotle, Poetics, tr. S. H. Butcher, Pennsylvania Press (2000), p. 28: “there is at times no word in existence; still the metaphor may be used.”
- Ibid., p. 38.
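As to endnote 3, the count in question can be sketched minimally as follows; the filename and tokenization rule are illustrative assumptions, not the method actually used:

```python
# Minimal sketch: counting Attic Greek words in a plain-text export of
# the Poetics. The filename and tokenization are illustrative assumptions.

import re

# \w matches Unicode letters by default in Python 3, so polytonic Greek
# tokens like "ποιητής" count as single words.
GREEK_WORD = re.compile(r"\w+")

with open("poetics_greek.txt", encoding="utf-8") as f:
    text = f.read()

words = GREEK_WORD.findall(text)
print(len(words))  # ~8,933 for the surviving text, per endnote 3
```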