The Tragedy of the ML Commons
draft two

-- By KoljaVerhage - 5 May 2021
Principles of AI Governance
Over the past ten years we've seen an acceleration in the impact of machine learning algorithms on human civilization. This has led both companies and governments across the world to think about how to govern these algorithms and control their impact on society. But the motivations of these actors, and the ways in which they want to use the algorithms, vary greatly. We can draw three major distinctions. First, there is the Chinese way, which I'll call "monarchism with machine learning". Put simply, the Chinese state's understanding of control, when it comes to machine learning, is to perfect its ability to reinforce state power and its control over society. The second group of actors, the multinational companies, falls into the category of "surveillance capitalists". Their aim is primarily to maximize shareholder value by collecting as much behavioral data as they can and commodifying it in order to sell either physical or software products. Finally, there is the group I'll call the digital democracies. These actors have a genuine interest in protecting human values like freedom of thought and expression, privacy, and autonomy.

The Tragedy of the ML Commons
The first step towards "AI Governance" has been for organizations within these three groups to present abstract, high-level principles of what they consider ethical or trustworthy artificial intelligence. Over the past five years, hundreds of organizations have published such principles. In an act of irony, even the Beijing Academy of Artificial Intelligence (BAAI) has released a set of principles in support of privacy, freedom, and the like. Yet despite all these publicized principles, there has been little progress, between or within groups, towards any agreement on how to operationalize them into actual policies or technical standards. The fact that countries that generally disagree on just about everything have strikingly similar principles should be evidence enough of their vacuity.
This shallowness largely persists because diverging interests create strong incentives for non-cooperation. This is the case between the digital democracies and the Chinese state, but also among the digital democracies themselves. The proposals to operationalize the principles often lack any effective mechanism to enforce their normative claims. The situation constitutes a social dilemma: one in which no actor has an individual incentive to cooperate, even though mutual cooperation would lead to the best outcome for all. If the current misuse of machine learning technologies continues and becomes the long-term status quo, it will destroy public trust (thereby erasing the technology's potential to improve the human condition) and, more importantly, impact civil liberties across the world for generations to come. Given the divergence of interests, it is no surprise that agreement between the digital democracies and the Chinese state is difficult to reach. The important point is that even digital democracies with similar interests have been unable to come to any comprehensive agreement. All the while, the Chinese state and the MNCs have been plundering the world's pastures. This failure should come as no surprise to game theorists. It constitutes a tragedy of the ML commons.
| | Cooperate | Defect |
| Cooperate | 4,4 | -2,6 |
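To make the dilemma concrete, here is a minimal sketch in Python of the payoff structure above. Since the table only specifies the cooperation row, the symmetric defect-against-cooperate payoff of 6,-2 and the mutual-defection payoff of 0,0 are illustrative assumptions, not part of the essay's argument.

```python
# A minimal sketch of the social dilemma described above, using the
# payoffs from the table. payoffs[(row_move, col_move)] gives the
# (row_payoff, col_payoff) pair; "C" = cooperate, "D" = defect.
payoffs = {
    ("C", "C"): (4, 4),    # mutual cooperation: best joint outcome
    ("C", "D"): (-2, 6),   # the cooperator is exploited by the defector
    ("D", "C"): (6, -2),   # assumed by symmetry with the row above
    ("D", "D"): (0, 0),    # assumed mutual-defection payoff
}

# Whatever the other player does, defecting pays more for you:
for their_move in ("C", "D"):
    cooperate = payoffs[("C", their_move)][0]
    defect = payoffs[("D", their_move)][0]
    print(f"Opponent plays {their_move}: cooperate={cooperate}, defect={defect}")

# Output:
#   Opponent plays C: cooperate=4, defect=6
#   Opponent plays D: cooperate=-2, defect=0
# Defection strictly dominates, yet mutual defection (0,0) leaves both
# players worse off than mutual cooperation (4,4): a tragedy of the commons.
```

Under these assumed payoffs, each digital democracy does better by defecting regardless of what the others do, and the equilibrium of mutual defection is worse for everyone than mutual cooperation. That gap between individual incentive and collective outcome is precisely what the abstract principles have failed to close.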
As we carefully proceed towards thinking about solutions, the history of climate cooperation offers a few general lessons about getting out of collective-action problems. First, it is important that we accurately define both the risks and the consequences of non-cooperation; at the very least, this will help conceptualize our tipping point to catastrophe. Second, combining many abstract proposals into one may undermine their prospects for success. Before getting to a proposal, we must map the dimensions along which disagreement exists and work towards a better understanding of the interests of the individual digital democracies. Finally, lowering the cost of cooperation may increase the likelihood of cooperative success. Creating small, decentralized groups made up of representatives from the individual countries may help provide insight into the conditions under which proposals could be expected to succeed.
The Brussels Effect?
The recent proposal by the European Commission on "AI governance" shows that we still have a long way to go. Its lack of operable proposals, its vague wording, and its weak enforcement show that European countries are not willing to cede sovereignty on this matter. The proposal underscores its own limitations by failing to define "artificial intelligence" before attempting to regulate it. Unlike climate change, where efforts to turn the problem into one of coordination are well underway (for instance, by placing a value on environmental externalities through a carbon tax), we have hardly begun teaching citizens what the serious risks of machine learning are. Without this education, any hope of transforming the problem of algorithms into one of coordination seems ever more distant. Alas, Brussels clings to the hope that the "Brussels Effect" will bridge the gap and bring the digital democracies together. But while the EU had a first-mover advantage on data protection with the GDPR, many more actors are working on "AI Governance", leading us back to the tragedy and making the wholesale adoption of EU rules far less likely.