META TOPICPARENT name="SecondEssay"
Learning Management Systems, Privacy, and Freedom of Thought
-- By SapirAzur - 08 Dec 2021 (revised 9 Jan 2022)
In the past few years, we have witnessed exponential growth in distance-learning applications, including learning management and analytics systems. With the social-distancing restrictions introduced by the COVID-19 pandemic, professionals worldwide advocated even more passionately for a hybrid, analytics-oriented approach to academic learning. This paper presents the data collection practices of the Canvas learning management and analytics system and discusses the relevant privacy concerns as well as the implications for freedom of thought.
Numerous studies have indicated that learning management and analytics applications can support higher education and improve students' learning by providing data about learning activities and engagement. Although only limited evidence supports those findings (Ifenthaler et al. 2020; Suchithra et al. 2015), in recent years various educational institutions have partnered with learning-analytics companies to collect and utilize data to assess students' behavior and formulate predictive analyses of performance, enabling faculty to personalize learning.
Whether those systems are effective or not, I believe that the privacy risks they pose, and the potential harm to students' freedom of thought, are substantial, particularly given the nature of the relationship between universities and students, and hence call for a thorough public discussion.
Canvas
Privacy

Canvas claims it does not tie the information gathered through third-party analytics to identifiable information. However, there are still apparent privacy risks concerning the data aggregation mechanism, the aggregator's identity, its security measures, and the implications of AI-targeted security attacks such as model inversion and membership inference attacks. Though looking at a single data point in a database usually reveals no meaningful information, aggregating multiple points may lead to non-trivial insights. Even unidentifiable attributes may be used to narrow down an individual in a dataset with an easy three-step query: women > age 25-30 > Israel. The field of statistical learning, and specifically data mining, systematically leverages computational methods to infer corollaries from aggregated datasets.
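The re-identification mechanism described above can be illustrated with a short, self-contained sketch. The records below are entirely hypothetical; the point is only that combining a few quasi-identifiers, as in the three-step query in the text, can shrink an "anonymous" dataset to a single person:

```python
# Toy illustration of re-identification by quasi-identifiers.
# The dataset is hypothetical; no field alone identifies anyone,
# yet the combination of gender, age range, and country can.
records = [
    {"gender": "woman", "age": 27, "country": "Israel", "grade": "A"},
    {"gender": "woman", "age": 41, "country": "Israel", "grade": "B"},
    {"gender": "man",   "age": 28, "country": "Israel", "grade": "C"},
    {"gender": "woman", "age": 26, "country": "France", "grade": "B"},
]

# The three-step query from the text: women > age 25-30 > Israel.
matches = [r for r in records
           if r["gender"] == "woman"
           and 25 <= r["age"] <= 30
           and r["country"] == "Israel"]

# Exactly one record survives the query, so the supposedly
# anonymous grade is now tied to an identifiable individual.
print(len(matches), matches[0]["grade"])  # → 1 A
```

The smaller and more diverse the dataset (a small seminar rather than a large lecture course, for instance), the fewer attributes an attacker needs before the candidate set collapses to one.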
Whether the data is kept adequately by the alleged data owners or is compromised, and whether it is identifiable or aggregated, there is no way of knowing how, and to what extent, it will be used in the future by those who gain access to it.
Is learning management software the antithesis of assisting learning?

Should the process of thinking, writing, or engaging in learning activities be surveilled or managed?
Privacy risks are not the only problem with learning management software. Though technology is an inseparable part of modern civilization, I would dare argue that, at their core, technological management systems not only go against our nature but also threaten our freedom of thought.
I doubt that the greatest minds can grow out of technologically surveilled, managed education; the process of learning, at its core, is meant to generate creativity and critical thought. A student should feel safe asking questions and cultivating human relationships in an educational environment, and I believe a surveilled environment genuinely harms that.
There is nothing wrong, in my mind, with personalized education software. In fact, concentrated work in many fields of science and technology, including software, hardware, agriculture, and medicine, has introduced precision and personalization through big data and machine learning. Learning software should aim at the opposite of managed learning: to personalize the educational experience, not in order to surveil or manage it, but to offer a richness of learning opportunities the student would not otherwise have access to.
Using new technologies is tempting: it is easy, profitable, and innovative. But their benefits come at a risk to freedom of thought and privacy that, as a society, we need to be conscious of and proactively take measures against.
Conclusion
Weighing the advantages of acquiring analytical insights to "enhance" learning against the privacy risks, ethical concerns, and threats to freedom of thought, it is becoming increasingly clear that the cost of managed learning is substantial. That does not mean that an innovative or engaging approach should be forsaken, nor that the benefits of learning programs are small, but rather that an interactive system in which students can provide feedback must be: (1) sufficiently protective of their data and privacy; and (2) aligned with the true essence of the learning process.

This is particularly important given the special relationship between universities and their students, where students may be under the misconception that universities, as places of higher education, know best and would not jeopardize their privacy or freedom of thought. From my point of view, the most desirable solution is an independent system designed to align with our core values and needs: an interactive educational platform that has already proven to provide the benefits of technology without the potential harm. A platform such as the one these very words are written in.