LauraBaneSecondEssay 4 - 25 Dec 2024 - Main.LauraBane
Classrooms in the Digital Age: The False Equivalence of AI and the Internet’s Effects on Learning

-- By LauraBane - 29 Nov 2024

Proponents of AI use in classrooms and academia at large tend to argue that the pushback against AI in such spaces fails to account for the remedial effects of AI literacy. Some thinkers in this camp claim that allowing children to interact with AI chatbots offers “benefits . . . [that are] similar to [those they gain] from interacting with other people.” These benefits, however, are not perfectly symmetrical: interpersonal interaction alone breeds the kind of “deep engagement and relationship-building” that is crucial for “language and social development.” Nevertheless, AI proponents allege that as long as users understand these limitations and set “healthy boundaries” when using AI learning tools, they will not fall prey to the allure of using said tools to replace independent, critical thought.
Potential Refutations (Why Equating AI and the Internet in Terms of Utility and Danger Is Intellectually Dishonest)
In my view, AI poses a far greater threat to academia and the learning process than the Internet. Although both tools allow users to access misinformation and take intellectual ‘shortcuts,’ there is something deeply uncanny and dystopian about AI’s ability to churn out facades of moral, ethical, ideological, scientific, and other inquiries that require critical thought. Even if one were morally scrupulous enough not to use AI tools to cheat, and instead vowed only to use them as virtual debate partners, the results would still be disastrous.

This is because AI is incapable of forming opinions or making persuasive arguments that are not thinly veiled amalgamations of random factoids. Linguist and political theorist Noam Chomsky writes at length about this issue in his New York Times article “The False Promise of ChatGPT.” There, Chomsky explains that ChatGPT functions as a “lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer,” whereas the human mind uses “small amounts of information” to create broad, novel explanations. Thus, AI exists as something quasi-machine and quasi-human: it can draw conclusions, unlike a basic Internet search tool, yet it cannot produce novel thought (something which even the dumbest people can do).

If you were to Google the phrase “Is utilitarianism the superior political philosophy,” you would be met with two types of sources: (i) purely factual sources defining utilitarianism and listing its opposing political philosophies, or (ii) opinion-based sources written by real people (e.g., John Stuart Mill). Discerning the two is fairly easy: a source stating “utilitarianism is a political philosophy stating that the collective good should be prioritized above all else” is a factual one, whereas a source arguing that utilitarianism is immoral because the government should not knowingly allow anyone to suffer is an opinion-based one. What’s more, each opinion-based source will be the product of someone’s original thought process. With ChatGPT and other AI tools, the results for the phrase “Is utilitarianism the superior political philosophy” are likely a blend of fact and opinion, with every opinion being a regurgitation of someone else’s opinion, and no independent thought to be found.
My Proposition
This site is powered by the TWiki collaboration platform. All material on this collaboration platform is the property of the contributing authors. All material marked as authored by Eben Moglen is available under the license terms CC-BY-SA version 4.