The Origins of AI
|
The first conference dedicated to the study of “artificial intelligence” was held at Dartmouth College in the summer of 1956. It was a small conference, attended by just eleven people, but it proposed an enormous undertaking:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves [1].
In Dreyfus’ view, the AI researchers fundamentally misunderstood the phenomenon they were attempting to emulate. According to Dreyfus, the AI researchers tended to think of the human mind in much the same way as they thought of computers: as “general-purpose symbol manipulators.” On this view, the human mind was continuous with a simple digital calculator. Both worked by processing information, in the form of binary bits (via neurons or transistors), according to formal rules. This view contemplated a world organized neatly into a set of independent, determinate facts and governed by strict rules—the perfect substrate for a computer-like mind [2].
|
Drawing from phenomenology, Dreyfus highlighted some crucial differences between the ways in which humans and computers functioned. Dreyfus stressed that humans, unlike computers, are embodied beings that participate in a world of relevance, meaning, and goals. On Dreyfus’ view, these characteristics of human existence were essential to human-like intelligence. He doubted that a disembodied machine detachedly manipulating symbols and following instructions could exhibit genuinely intelligent behavior [2].
|
Dreyfus emphasized that human judgements are informed by context-dependent factors whose nuances and indeterminacy are difficult to account for in even a very comprehensive set of instructions. Dreyfus contended that a person’s very useful sense of what is “relevant” to a particular situation could not be reduced to a system of formal rules. Likewise, common sense, social practices, and tacit skills, which involved more than mere calculation, were extremely difficult, if not impossible, to commit to rigid rules. Dreyfus predicted that the AI research program would soon face insurmountable obstacles if it continued on its course [2].
Tree Climbing with One's Eyes on the Moon
|
By the mid-1970s, after enduring a decade of ridicule, Dreyfus seemed to have been vindicated. The early successes in areas like machine language translation and pattern recognition were followed by significant stagnation. In the realm of machine translation, the fuzzy line between semantics and syntax proved to be a serious challenge for computers. When it came to improving chess programs, sorting through the vast number of possible move sequences was difficult without a chess player’s learned intuition for which sequences are relevant. Formal, explicit instructions for computers faltered in the face of ambiguity, vagueness, and complexity.
|
Promising initial results had buoyed unrealistically high expectations about the progress of AI research. While Dreyfus acknowledged the ingenuity of the AI researchers' work, he suggested that their efforts had brought them no closer to artificial intelligence than climbing a tree brought one closer to the moon. The quixotic quest to formalize all of human understanding and knowledge—a pursuit stretching back to Plato—had reached a dead end.
|
From Symbol Manipulation to Behavior Manipulation |
The Rise of Big Data
|
The preceding history is useful for putting contemporary AI into perspective. Amid calls to take precautions against super-intelligent AI, it is worth bearing in mind the history of overzealousness about the capabilities of AI as well as AI's proven limitations. The real threat posed by “AI” is not the fantasy of super-intelligence, but rather the use of the technology by surveillance capitalists to monetize the data it extracts from its users and influence their behavior.
|
The ultimate disillusionment with the grand AI ambitions hatched at the Dartmouth conference led the field of AI in a different direction. Interest turned from symbolic AI to perceptrons—loosely modeled on neurons—an approach that ultimately gave rise to machine learning models like artificial neural networks, which define the contemporary paradigm of AI.
| |
|
If symbolic AI relied upon the cleverness of its programmers, machine learning relies equally upon the quantity of its training data: machine learning models require vast volumes of training data to work well. This fact plays a critical role in incentivizing internet companies to extract data from users. Facebook and Google need data to train the algorithms whose services they sell to advertisers.
Thus, in a certain sense, the failures of symbolic AI, by leading to the alternative approaches offered by machine learning, fueled the demand for data and ushered in a new chapter of surveillance capitalism. The quest to build computers that emulated human thought ended not with intelligent computers, but with computers that preyed on human intelligence by monitoring and influencing it.