-- By MichaelMacKay - 11 Jan 2025
Put differently, rooting out all the “harms” under KOSA's duty of care is like sorting all the poets in the Poetics by dactyl. Meter is easy to measure, but what can be counted most easily does not always count the most. Aristotle says “[w]e should therefore solve the question [of what something means] by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.”[3] Applying statistical models in a top-down manner thus tends to affix meaning rather than infer what a text means in context,[4] and by that measure, KOSA’s requirement that platforms monitor children’s patterns of usage and publicly disclose the results wrongly treats online expression as univocal.
"A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors… [emphasis]”
But consider how the underlined portion can be read either narrowly or broadly. The broad reading likely implicates most changes to UI/UX, whereas the narrow reading imposes liability only on the subset of features intended to prevent and mitigate such harms (e.g., a new default setting that automatically shifts an app’s color temperature by the hour so as to curb nighttime usage). Similarly, interpreting the conjunction “and” in “prevent and mitigate” as the logical operator found elsewhere in Sec. 101(3)(IV)(aa) reads “mitigate” out of the statute (requiring that a feature both prevent and mitigate would exempt design features that merely mitigate). Traditional rules of statutory construction would disfavor such uncharitable interpretations, but KOSA’s proposed apparatus for collecting and crunching data is prone to miss such nuance (elsewhere, UnitedHealthcare's application of AI to insurance claims purportedly suffered a 90% error rate). Critically, nowhere is the mechanical approach to online expression more befuddled than in the Senate's latest amendment on "compulsive usage."
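The conjunctive-reading objection is, at bottom, a point of boolean logic, and can be sketched as such. (A toy illustration only: the feature names and flags below are invented for the example, not drawn from the bill.)

```python
# Hypothetical design features (invented), each tagged by whether it
# prevents or mitigates a listed harm.
features = [
    {"name": "bedtime color shift", "prevents": True,  "mitigates": True},
    {"name": "usage-limit nudge",   "prevents": False, "mitigates": True},
    {"name": "infinite scroll",     "prevents": False, "mitigates": False},
]

# Conjunctive reading: duty attaches only to features that BOTH prevent and mitigate.
conjunctive = [f["name"] for f in features if f["prevents"] and f["mitigates"]]

# Disjunctive reading: duty attaches to features that prevent OR mitigate.
disjunctive = [f["name"] for f in features if f["prevents"] or f["mitigates"]]

print(conjunctive)  # ['bedtime color shift']
print(disjunctive)  # ['bedtime color shift', 'usage-limit nudge']
```

Under the conjunctive reading, the mitigate-only feature drops out entirely, which is exactly how that reading leaves “mitigate” with no independent work to do.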
Ascertaining such “patterns” implies averaging across millions of online communications, so nothing would really be learned about any one particular minor’s use of Discord or Reddit. Yet to establish what counts as “compulsive,” such firms must be intrusive: while Sec. 102(a)(II) may suggest that some health care professionals will play a role in guiding FTC enforcement (“clinically diagnosable symptoms”), a breach of minors’ privacy is probably the only foreseeable harm within the risk, since even "anonymized" data remain usable by, and valuable to, firms. Notably, Meta cannot even effectively flag disturbing adult content for removal, so increasing corporate vigilance will simply result in a greater number of overseas adults surveilling American kids. Developers can build stronger nets for smaller fish, but some brain development will be misapprehended as “brainrot” whenever adults are not in on the joke. Before, it was problematic that “compulsive usage” under Sec. 101(3) was predicated on external factors “reasonably likely to cause” such compulsion; now, it is worse that those criteria have given way to a set of factors that “significantly impacts” kids. Overall, the shift from probable to actual knowledge underscores how “covered platforms” will likely incur KYC obligations like mandatory age verification, as the EFF has predicted.[4]
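The averaging point can be made concrete with a toy sketch (all figures invented): a platform-wide mean of usage minutes can sit nowhere near the actual behavior of any individual minor it purports to describe.

```python
import statistics

# Invented daily usage minutes for two hypothetical minors.
daily_minutes = {
    "user_a": [20, 25, 30],     # light user
    "user_b": [300, 310, 290],  # heavy user
}

# Pool every observation into one platform-wide "pattern of usage."
pooled = [m for mins in daily_minutes.values() for m in mins]
platform_average = statistics.mean(pooled)

print(platform_average)                           # 162.5
print(statistics.mean(daily_minutes["user_a"]))   # 25
print(statistics.mean(daily_minutes["user_b"]))   # 300
```

The pooled figure (162.5) describes neither user, which is why aggregate disclosures say little about whether any particular minor's usage is “compulsive.”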
Endnotes: