|
META TOPICPARENT | name="FirstPaper" |
It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted. |
| 1. Introduction |
|
< < | After Trump won the 2016 US presidential election, many pointed out that Facebook had helped him win. They claimed that unverified misinformation spread and was reproduced through Facebook, a gigantic service, and consequently contributed to Trump’s victory.
As of February 2017, Google accounts for 80 percent or more of the search market and Facebook for 40 percent or more of the SNS market. The users of those services receive only the information selected by the algorithms the services adopt. |
> > | When we use Google or Facebook, we are sometimes impressed by how personalized the information provided by those services is. But is it fine to simply enjoy personalized services that deliver exactly the information users are looking for, without serious contemplation of the cost we might be paying in return?
As of February 2017, Google accounts for 80 percent or more of the search market and Facebook for 40 percent or more of the SNS market. Given that these services have enormous influence on people’s lives and that people rely heavily on them, it is critical to understand carefully the method and the potential cost of the personalized service. |
| |
|
< < | No. They receive,
preferentially, information provided by the services, on the
basis of inferences drawn by pattern-matching algorithms from
the capture of their own past behavior. The environment is
learning to reinforce their attention patterns, by selling the
incremental changes in attention. To treat this as a monopoly on
information is incorrect, and the resulting analysis will be
ineffective. |
| |
|
< < | Is it possible that service providers abuse such algorithms, or that users misuse them in bad faith? Even when there is no abuse or misuse, can individuals who acquire information selected by other entities retain intellectual autonomy? |
| |
|
< < |
Newspapers, magazines, and broadcast news were also edited in the 20th century, often by quite crude algorithms. People read multiple newspapers as they presently interact with multiple platforms. What's the problem?
|
> > | 2. Methodology, cost and intellectual autonomy |
|
|
|
< < | |
| |
|
> > | A. Methodology |
| |
|
> > | These giant services provide users with customized information as follows. First, the services collect data on users’ behavior on the internet. Then, based on the collected historical data, the services infer users’ topics of attention and use pattern-matching algorithms to provide the information those users would seek.
For example, Facebook records user A’s behavior pattern on its platform, gathering data on reads, replies, likes, and shares of postings. Facebook then matches user A with another user B who demonstrates a similar behavior pattern, exposes user A to the postings to which user B responded, and again records user A’s response pattern. This process repeats indefinitely. |
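The matching process described above can be sketched as a simple neighbor-based recommender. This is a toy illustration, not Facebook’s actual system: the engagement weights (read=1, like=2, share=3), the user names, and the similarity measure are all assumptions made for the sketch.

```python
# Toy sketch of the pattern-matching idea: find the user whose engagement
# pattern is most similar, then surface the postings that user engaged with.
from math import sqrt

# Hypothetical engagement records: user -> {posting_id: engagement score}
# (weights read=1, like=2, share=3 are assumptions, not any real system's).
engagement = {
    "A": {"p1": 3, "p2": 1, "p3": 2},
    "B": {"p1": 3, "p2": 1, "p4": 2, "p5": 3},
    "C": {"p6": 3, "p7": 2},
}

def cosine_similarity(u, v):
    """Cosine similarity between two sparse engagement vectors."""
    common = set(u) & set(v)
    dot = sum(u[p] * v[p] for p in common)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def recommend(user):
    """Surface unseen postings from the most behaviorally similar user."""
    others = [u for u in engagement if u != user]
    nearest = max(others, key=lambda o: cosine_similarity(engagement[user], engagement[o]))
    unseen = {p: s for p, s in engagement[nearest].items() if p not in engagement[user]}
    # Rank by the neighbor's engagement strength, strongest first.
    return sorted(unseen, key=unseen.get, reverse=True)

print(recommend("A"))  # -> ['p5', 'p4']: B is A's nearest neighbor
```

Each time user A responds to one of these recommendations, her engagement record grows, the neighbor matching is recomputed, and the cycle repeats, which is the indefinite loop the paragraph describes.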
| |
|
< < | A. Fairness of algorithm? |
| |
|
> > | B. The cost |
| |
|
< < | Regarding the US presidential election, one media outlet raised the allegation that Facebook manipulated its algorithm to prevent Trump’s win. Facebook was investigated by the US Senate Commerce Committee over the allegation that it deliberately suppressed news from conservative outlets.
Are the algorithms adopted by monopolistic services fair? As those services refuse to disclose their algorithms fully, citing trade secrets, we can never know but only guess how they select the information they provide. It is easy to assume that machines are fair, but machines in the end are made by humans. It is technically possible for service providers to select the information given to users and to manipulate public opinion by tampering with algorithms. |
| |
|
> > | First of all, the giant services record every movement users make on the internet. A user hands over comprehensive information about herself, including information she is not even aware of, as payment for using the services. Ironically, the user can exercise no rights over the accumulated data, although the data is about the user herself, and she has no idea what purposes the data is used for.
Second, a user can fall victim to her own biases. Because the algorithms adopted by giant services operate so as to intensify the user’s attention pattern by giving her the information she prefers, a user might simply regard receiving the information she wants as a benefit. In fact, however, it is rather a cost of the services. |
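The intensification of an attention pattern described above is a feedback loop: exposure produces engagement, and engagement produces more exposure. The following toy simulation, with made-up topics and starting weights (an assumption, not any real service’s algorithm), shows how even a mild initial preference comes to dominate the feed.

```python
# Toy feedback-loop sketch: the feed shows the user's strongest topic,
# and each showing strengthens that topic further, so the feed narrows.

def next_feed(topic_weights, shown_topic):
    """After showing a topic, boost it: engagement begets more exposure."""
    updated = dict(topic_weights)
    updated[shown_topic] += 1  # each exposure raises future exposure
    return updated

# Start with only a mild preference for politics over sports and science.
weights = {"politics": 2, "sports": 1, "science": 1}
for _ in range(10):
    top = max(weights, key=weights.get)  # the feed picks the strongest topic
    weights = next_feed(weights, top)

share = weights["politics"] / sum(weights.values())
print(weights, round(share, 2))  # -> {'politics': 12, 'sports': 1, 'science': 1} 0.86
```

After ten rounds, a 2-to-1 initial preference has become roughly 86 percent of the feed: the user receives exactly what she prefers, which is why it feels like a benefit while operating as the cost the paragraph identifies.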
| |
|
< < | B. Manipulation of public opinion through fair algorithms
Although we do not have access to the secrets of Facebook’s algorithm, it is well known that the more Likes a posting has, the more likely it is to appear in others’ news feeds. It is in fact a common marketing technique to expose certain postings more frequently in others’ news feeds by manipulating Like counts, and in this way inaccurate conspiracy theories can spread widely.
In South Korea, there was a case in which the National Intelligence Service manipulated public opinion during the 2012 presidential election by inflating the for-or-against counts on reports and comments about political issues, posting tweets on Twitter, using automatic retweet programs, and so on.
But there is no indication of effectiveness. The wrong done is
the involvement of the intelligence services in politics, not the
effectiveness of retweeting bullshit.
|
| C. Intellectual autonomy |
|
< < | Even without considering situations like those above, we find that service providers adopt algorithms so as to acquire competitive advantages. The giant services tend to select and provide the information users want to hear, to prevent user attrition. As a result, users are repeatedly exposed to information they already agree with, instead of balanced information, and become more convinced of it. People think their choices and convictions are independent, but are they? Under such circumstances, is intellectual autonomy feasible?
Intellectual autonomy can be defined as thinking for oneself, independently of others’ direction and control. If the thinking process is regarded as making rational decisions about information through reasoning, then in a world dominated by giant algorithmic services, people begin that process from defective information.
Moreover, as people do not want to become unpopular by thinking differently from those around them, they tend to agree with them, and the algorithms adopted by giant services, particularly SNSs, suppress intellectual autonomy by continuously stressing how the people around their users think.
Furthermore, what makes this even more dangerous is that people in societies dominated by such services believe their thoughts and decisions arise from autonomous thinking, while they actually think and decide as the algorithms decide.
No proof whatever
that this is true, compared to whatever baseline you like, let
alone the baseline of government control of broadcast, common to
places such as Russia, and the DPRK, where TV is much more
important than the Internet, and in this respect approximating
late 20th-century conditions in all the advanced societies.
It's easy to retain intellectual autonomy in countries with an
uncensored Net, if one structures how one reads and uses the Net
appropriately. Far easier than it was in 20th century, because
the power of the Net is great if it is free. It's the Chinese
paradigm that matters, not the social networks I don't have to
use. |
> > | Intellectual autonomy can be defined as thinking for oneself, independently of others’ direction and control. Staying anonymous during the thinking process is an essential element in protecting substantive intellectual autonomy. It is impossible for individuals to enjoy full intellectual autonomy in an environment where every behavior of the individual, such as reading, collecting information, and expressing her own thoughts, is traced. The giant services’ operating mechanism, which records every behavior of users, puts users’ intellectual autonomy in danger.
If the thinking process is regarded as making rational decisions about information through reasoning, then biased information damages intellectual autonomy. The giant services’ operating mechanism thus endangers users’ intellectual autonomy in a second way, by making users more biased.
Furthermore, what makes the matter even worse is that people in societies dominated by such services are indifferent to these risks and often misunderstand their thoughts and decisions as products of autonomous thinking. |
| |
| Moreover, considering that many experts are striving to improve consumers’ rights, even if it is difficult for an individual consumer to understand at a glance what input generates what output through what process, it should be possible to understand those algorithms through experts’ interpretation and education.
Algorithmic transparency is essential for intellectual autonomy, given the market monopoly of the giant service providers and an environment in which using such services is hard to avoid. |
|
> > | C. Using the Net incognito
If we can neither leave the giant services nor prevent them from recording our behavior, we should at least be able to use those services anonymously. Users can use the Net incognito through Tor, the “onion routing” system, which enables anonymous communication by keeping a user’s location and usage from being monitored. Tor can ensure anonymous use of the services when combined with a prudently secured operating system. |
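Concretely, a Tor client exposes a local SOCKS proxy that applications can be pointed at. The fragment below is a minimal sketch of a `torrc` configuration file using Tor’s standard option names; the port is Tor’s conventional default, and this alone does not secure the surrounding operating system, which the paragraph notes is also required.

```text
# Minimal torrc sketch: run a local SOCKS proxy on port 9050 so that
# applications configured to use it route their traffic through Tor.
SocksPort 9050
# Keep logging terse; verbose local logs would themselves be a usage record.
Log notice stderr
```

A browser or other application configured to use `localhost:9050` as a SOCKS proxy would then reach the giant services through the Tor network rather than directly.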
| |
|
< < | C. Education |
> > | D. Education |
|
The most important remedy is the education of the people. Users need to understand the tendencies and defects of the information they find on the Internet. Moreover, users should recognize, and stay vigilant against, the fact that they do not think independently; rather, they tend to follow blindly what others think. While it may not be possible for users to leave such algorithmic services entirely, they should at least pursue a compromised intellectual autonomy based on an understanding of the risks of those services. |
|
< < |
A good set of observations and thoughts about a problem that wasn't defined quite right. The route to revision lies in the first section, where I have tried to raise the relevant challenges for improvement.
|
> > | |
|
|