PulawitWanichsetakulSecondEssay 3 - 17 Jan 2022 - Main.PulawitWanichsetakul
Introduction
As the internet is now a major communications channel, it has become another means for the dissemination of hate speech. The problem of hate speech on the internet thus raises the issue of intermediary liability for user-generated content.

ECtHR on Intermediary Liability
In 2015 and 2016, the European Court of Human Rights (“ECtHR”) rendered two major decisions on intermediary liability for hate speech, Delfi AS v. Estonia and Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (“MTE”), both concerning whether a domestic court’s decision holding an internet intermediary liable for user-generated comments breached the intermediary’s right to freedom of expression. The ECtHR upheld Delfi’s liability for the comments but found that MTE’s and Index’s rights had been violated. The key differences were that most of the comments in Delfi were clearly hate speech, and that Delfi is a large news portal that earns revenue from visitor numbers and had therefore integrated a commenting environment into its website. Most importantly, the comments could be modified or removed only by Delfi. It therefore had a substantial degree of control over them and was required to remove them without delay after publication to escape liability. Delfi had employed several measures, including word-filtering and a notice-and-takedown system, but its word-filtering system failed to detect the speech in question, even though it had a direct meaning.

Collateral Censorship: Who should be liable?
Per Professor Jack Balkin, the shift from traditional publishers to digital platforms where any user can publish has made it difficult for States to regulate speech online. Intermediary liability then became the States’ method for controlling user-generated content on the internet. Consequently, intermediaries are incentivised to overcensor in order to reduce the risk of liability and enjoy immunity. Imposing intermediary liability thus leads to prior restraint and shifts the burden of error costs to speakers. The issue therefore involves a triangular relationship between the interests of intermediaries, victims of hate speech, and speakers, and a confrontation between freedom of expression and the right to privacy and personality rights of users.
As regards the question of who should be liable for defamatory statements made online by anonymous or pseudonymous users, the liability of the speaker is unquestionable. For the intermediary, the Delfi Court observed “the ease, scope and speed of the dissemination of information on the Internet, and the persistence of the information once disclosed, which may considerably aggravate the effects of unlawful speech on the Internet compared to traditional media,” and considered platforms more capable than potential victims of continuously monitoring hate speech on the internet in order to prevent or rapidly remove such content. Intermediary liability can thus be justified when user comments are clearly unlawful and have been posted anonymously or pseudonymously, at least where no domestic mechanisms are in place to give the injured party a real and effective opportunity to pursue the speaker.

Accordingly, even though the speaker should be exclusively liable, where he or she is not reasonably reachable, the intermediary should become liable. This regime is proposed and referred to by Professor Ronen Perry and Professor Tal Zarsky as ‘residual indirect liability’.

Testing proportionality as a technical problem?

The question is: when a court finds that a comment clearly constitutes hate speech and orders the intermediary to filter content, to what extent can it order the platform to act?

CJEU on Filtering
In Glawischnig-Piesczek v Facebook Ireland, the CJEU decided whether the Austrian Supreme Court’s orders to block ‘identical’ or ‘equivalent’ content were permissible under Article 15 of the E-Commerce Directive (ECD). It concluded that although courts cannot require platforms to independently assess whether content violates the law, as that would contradict the ECD’s blanket immunity for intermediaries from general monitoring obligations, courts can still issue more specific injunctions to block particular content identified by them. However, the CJEU did not clearly address how filters should work.

Filtering and risks for internet users
The only way that big entities like Facebook and Delfi can proactively block specific content is by using filters. The question is how well courts understand the technology. Filters are designed around their purpose. In the case of speech, spam filtering is a basic example. A Bayesian spam filter estimates, using Bayes’ theorem, the probability that a given email is spam: it uses the words in the title and message to identify spam, learning from messages that have previously been labelled as spam or not spam. To avoid false positives, simple scoring filters are used, and a message is treated as spam only if the specific words it contains exceed a certain score. Spam filtering is then refined further by learning from individual human input about what is and is not spam. Still, text filters are prone to error because specific words or phrases can be unlawful in one situation but innocuous in another, and filters, including image and video filters, cannot assess the context in which information appears.
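To make the mechanism concrete, the following is a minimal sketch of the Bayesian scoring idea described above, written in Python. It is not any platform’s actual system; the training messages, class labels, and decision rule are invented for illustration only.

from collections import Counter
import math

# Minimal naive-Bayes text scorer illustrating the idea above: the filter
# learns word frequencies from messages already labelled spam / not spam
# and scores a new message by combining per-word probabilities with
# Bayes' theorem. All training messages here are invented.

def tokenize(text):
    return text.lower().split()

class NaiveBayesFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.message_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def spam_score(self, text):
        # Log-odds of "spam" versus "ham", with add-one smoothing so a word
        # never seen in one class does not zero out that class entirely.
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        total = sum(self.message_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            prior = math.log(self.message_counts[label] / total)
            n_words = sum(self.word_counts[label].values())
            likelihood = sum(
                math.log((self.word_counts[label][w] + 1) / (n_words + len(vocab)))
                for w in tokenize(text)
            )
            scores[label] = prior + likelihood
        return scores["spam"] - scores["ham"]  # > 0 means "more likely spam"

f = NaiveBayesFilter()
f.train("win money now", "spam")
f.train("cheap money fast", "spam")
f.train("meeting agenda for monday", "ham")
f.train("draft essay attached", "ham")
print(f.spam_score("win cheap money") > 0)       # True: flagged as spam
print(f.spam_score("agenda for the essay") > 0)  # False: kept

The same molecular structure, scaled up and retrained on human moderation decisions, is what the large platforms build their content filters from; the sketch is only meant to show why such filters score words rather than understand context.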
As the CJEU has noted in previous decisions, the risk that an automated filter might not distinguish adequately between lawful and unlawful content could lead to the blocking of lawful speech and prevent old material that was unlawful in a certain context from being re-used in new contexts. Platforms with human review of filter decisions are incentivized to remove ‘gray area’ content to avoid the risk of liability, and thus to err on the side of taking down content flagged by filters. In the end, all human review could become a ‘rubber-stamping mechanism.’ Various studies have identified high rates of over-removal and error even in purely human-operated notice-and-takedown systems. Other means of correcting errors include counter-notice systems, but their efficacy is still questionable, and they provide a remedy only for speakers, not for users unknowingly deprived of access to information. A study has also shown that insensitivity to differences in dialect when labelling toxic language can create bias in the datasets used to train automated hate speech detection tools.
The EU is currently working on the proposed Digital Services Act (DSA), which is expected to update the ECD. The proposal introduces additional measures to the ECD, including notice-and-action procedures for illegal content and the possibility to challenge platforms’ content moderation decisions. As the DSA is still a proposal, it remains to be seen whether the framework can work with every type of speech, to what extent the introduced functions will solve the current problems, and how far it takes account of the current state of technology.
PulawitWanichsetakulSecondEssay 2 - 04 Jan 2022 - Main.EbenMoglen
A few substantive points:
1. You assume that you know how platform content filtering technically works. It's not clear why, and it certainly isn't demonstrated here. "The most used filters today are designed to find duplicates of known, specific content such as images, audio, or videos." The sort of filters used for that purpose are designed for that purpose. But perhaps you should think about the Bayesian probability model of the typical lowly spam filter, trained on an infinitesimal number of examples of spam-like messages. That's a much better illustration of the molecular structure from which the immense machines of the platforms are constructed. Learning about them will help you in other areas related to so-called "AI" too.
2. You assume that "the Internet" is the platforms. That prevents you from considering any forms of intervention that involve using the rest of the actual Net to limit the platforms' power.
3. You don't actually answer your question. You don't even consider the simplest of possible answers, which is to treat the platforms exactly like all other corporate publishers with respect to their legal liabilities and regulatory responsibilities. If Rupert Murdoch owns both MySpace and Sky News (assuming those, counterfactually, to be viable meaningful entities), the case for treating their legal and regulatory beings in strictly equal fashion is strong. That might bother the platforms too large to own mere media companies, but that is not in itself any form of counterargument.
As usual, the European Union wishes to pretend an immensity of importance it simply doesn't have, but its efforts to think through its policy preferences in the DMA and DSA veins are nonetheless immensely valuable. But neither the regulatory past nor the possible legislative futures in one small corner of the world, no matter how rich and self-satisfied, can fully account for the present of the subject about which you are writing.
PulawitWanichsetakulSecondEssay 1 - 10 Dec 2021 - Main.PulawitWanichsetakul
Intermediary Liability: What is the best model?
-- By PulawitWanichsetakul - 10 Dec 2021
Introduction
As the internet is now a major communications channel, it has become another means for the dissemination of hate speech generated by internet users. The problem of hate speech on the internet thus raises the issue of intermediary liability for third-party content.
ECtHR on Intermediary Liability
In 2015 and 2016, the European Court of Human Rights (“ECtHR”) rendered two major decisions concerning intermediary liability for hate speech, Delfi AS v. Estonia and Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (“MTE”). In both cases, the question was whether a domestic court’s decision holding an internet intermediary liable for user-generated comments breached the intermediary’s right to freedom of expression. The ECtHR reached different results in the two cases: Delfi was found liable for the comments, but the MTE Court found that MTE’s and Index’s rights had been violated. The key differences were that most of the comments in Delfi were clearly hate speech, and that Delfi is a large news website that earns revenue from the number of visitors and had therefore integrated a commenting environment into its website. Most importantly, the comments could be modified or removed only by Delfi. It therefore had a substantial degree of control over them and was required to remove them without delay after publication to escape liability. Delfi had employed several measures, including word-filtering and a notice-and-takedown system. However, its word-filtering system failed to detect the speech in question, which had a direct meaning. As a result, the comments remained on the website for six weeks.
Collateral Censorship: Who should be liable?
Per Professor Jack Balkin, in the digital age the shift from traditional publishers to platforms where any user can be a publisher has made it difficult for States to regulate speech online. The intermediary liability regime then became the States’ method for controlling user-generated content on the internet. Intermediary liability therefore incentivizes intermediaries to overcensor in order to reduce the risk of liability and enjoy immunity. Imposing intermediary liability thus leads to prior restraint and shifts the burden of error costs to speakers. The issue of intermediary liability therefore concerns a confrontation between freedom of expression and the right to privacy and personality rights of users.
As regards the question of who should be liable for defamatory statements made online by anonymous or pseudonymous users, Professor Ronen Perry and Professor Tal Zarsky proposed a legal regime called ‘residual indirect liability’, which combines direct and indirect liability. In this regime, the speaker is exclusively liable, but where he or she is not reasonably reachable, the content provider becomes liable. One example of this regime is the United Kingdom’s Defamation Act 2013. The question of liability thus turns on the capability to identify the speaker.
Testing proportionality as a technical problem?
The question is: when a court finds that a comment clearly constitutes hate speech and orders the intermediary to filter content, to what extent can it order the platform to act?
CJEU on Filtering
In Glawischnig-Piesczek v Facebook Ireland, the Court of Justice of the European Union (CJEU) decided whether the Austrian Supreme Court’s orders to block ‘identical’ or ‘equivalent’ content were permissible under Article 15 of the EU’s E-Commerce Directive (ECD). It concluded that although courts cannot require the platform to independently assess whether content violates the law, as that would contradict the ECD’s blanket immunity for intermediaries from general monitoring obligations, courts can still issue more specific injunctions to block particular content identified by them. However, the CJEU did not address how measures like filters might work.
Filtering and risks for internet users
It is clear that the only way a big entity like Facebook or Delfi can proactively block specific content is by using filters. The question is how well courts understand the functioning and shortcomings of the technology. The most widely used filters today are designed to find duplicates of known, specific content such as images, audio, or videos; for example, PhotoDNA is used to find child sexual abuse content. Sophisticated filters may also find near-duplicates, such as cropped images. Duplicate-detection filters for written text are technically simpler but more prone to error, because specific words or phrases can be unlawful in one situation but innocuous in another, and filters cannot assess the context in which information appears.
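As a rough illustration of the two kinds of filter just described, the Python sketch below pairs exact hash matching against a list of known items with a naive word blocklist. The stored hash, blocklist term, and sample inputs are invented, and real duplicate detectors such as PhotoDNA use robust perceptual hashing rather than the exact hash shown here.

import hashlib

# 1. Duplicate detection: known unlawful items are stored as hashes and an
#    upload is blocked if its hash matches. (Real systems use perceptual
#    hashes so that near-duplicates such as cropped images still match.)
known_hashes = {hashlib.sha256(b"bytes of a known unlawful image").hexdigest()}

def is_known_duplicate(upload: bytes) -> bool:
    return hashlib.sha256(upload).hexdigest() in known_hashes

# 2. Word filtering: a blocklist matched with no notion of context, which is
#    why it both over-blocks and under-blocks.
blocklist = {"slur"}

def is_flagged(text: str) -> bool:
    return any(word in blocklist for word in text.lower().split())

print(is_known_duplicate(b"bytes of a known unlawful image"))      # True: exact copy caught
print(is_known_duplicate(b"slightly edited copy of the image"))    # False: near-duplicate missed
print(is_flagged("a news report quoting the slur to condemn it"))  # True: lawful reporting blocked
print(is_flagged("an insult phrased without any listed word"))     # False: unlawful speech missed

The last two lines show the context problem in miniature: the same word is blocked whether it is used to attack or to report, while unlawful speech that avoids the listed terms passes through.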
In previous decisions, the CJEU noted the risk that an automated filter might not distinguish adequately between lawful and unlawful content, which could lead to the blocking of lawful speech and prevent old material that was unlawful in a certain context from being re-used in new contexts. Platforms with human review of filter decisions are incentivized to remove ‘gray area’ content to avoid losing the status of intermediary and the immunity that comes with it, and thus to err on the side of taking down content flagged by filters. In the end, all human review could become a ‘rubber-stamping mechanism.’ Other means of correcting errors include counter-notice systems, but their efficacy is still questionable, and they provide a remedy only for speakers, not for users unknowingly deprived of access to information.
How to design a framework that includes mechanisms to protect the rights of both intermediaries and internet users remains problematic. The EU is currently working on the proposed Digital Services Act (DSA), which is expected to update the ECD. While the intermediary liability regime is expected to remain generally the same, the proposal introduces additional measures, including notice-and-action procedures for illegal content and the possibility to challenge platforms’ content moderation decisions. As the DSA is still a proposal, it remains to be seen whether the framework can work with every type of speech, to what extent the introduced functions will solve the current problems, and how far it takes account of the current state of technology.