Voice Activation and Its Fundamental Disagreements with Privacy

-- By MattDial - 29 Nov 2019
It’s a trope of many science fiction movies, and seemingly necessary for any futuristic society: the voice-activated digital assistant. The utopian vision of technology seems to move our interaction with it from our hands to our voices. The modern realization of this idea, however, has come with myriad privacy concerns- namely, in order to activate such an assistant with your voice, it has to be listening. Herein I will look at whether a voice-activated digital assistant is possible within a legal framework prioritizing privacy.
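To make concrete why the assistant “has to be listening,” consider a minimal sketch of the always-on loop such a device runs. The code below is hypothetical and heavily simplified: the wake word, the function names, and the buffer size are my illustrative assumptions, not any vendor’s actual implementation. The structure is the point: every utterance in the room passes through the device, whether or not it was meant for the device.

```python
# Hypothetical sketch of an always-on wake-word loop. Audio is simulated
# with strings; real devices run a small on-device detection model, but
# the control flow is the same in outline.
import collections

WAKE_WORD = "computer"    # illustrative trigger word, not any vendor's
BUFFER_FRAMES = 100       # rolling pre-roll so the wake word itself is kept

def capture_frame(source):
    """Stand-in for reading one chunk of audio from the microphone."""
    return next(source, None)

def heard_wake_word(frame):
    """Stand-in for the on-device wake-word detector."""
    return frame is not None and WAKE_WORD in frame

def send_to_server(frames):
    """Stand-in for shipping captured audio to the developer's servers."""
    print("uploaded:", list(frames))

def assistant_loop(room_audio):
    buffer = collections.deque(maxlen=BUFFER_FRAMES)
    while (frame := capture_frame(room_audio)) is not None:
        buffer.append(frame)        # private conversation lands here too
        if heard_wake_word(frame):
            send_to_server(buffer)  # only now does audio leave the device
            buffer.clear()

# Everything said in the room flows through the loop; only the wake word
# changes what happens next.
assistant_loop(iter(["private chat", "more private chat", "computer, weather?"]))
```

Even in this charitable sketch, where audio leaves the device only after the wake word, the device had to inspect everything said beforehand in order to make that decision.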
The Current (Broad) Legal Framework

Assuming operation within a legal system prioritizing privacy is perhaps the first hurdle to overcome. We do not currently operate under an environmental-law model for data privacy, at least here in the United States. While the Supreme Court has recently held, in the most analogous case, that Fourth Amendment search-and-seizure protections can apply to data collection such as cell-site location information, police can still access that data through a “probable cause” warrant. Carpenter v. United States, 138 S. Ct. 2206 (2018). Never mind that the digital assistants’ developers are collecting this data either way.
Outside the Fourth Amendment context, there is the more general concern that our voice data is being used or sold by the developers themselves. Apple and Google have denied taking part in this, and both offer settings to turn off voice activation of their assistants. While there has been some legal action against the companies to stop automatic audio recording, there has been no definitive ruling. The centralization of the tech industry around the few companies that produce digital assistants, combined with their hesitance to address the underlying privacy concerns, raises the more central question- is there any way to make this technology work?
Potential Solutions
Opting In
As more of the tech giants’ eavesdropping habits have been exposed, they have begun to respond. But their baby steps exist in a field in need of huge leaps. Amazon has said that allowing human employees to review your Alexa voice data will be opt-in rather than opt-out, but this does not follow an ecosystem legal model of privacy; it allows a waiver of rights affecting more people than the speaker’s owner. If a guest in your home doesn’t want their words reviewed by an Amazon employee but you’ve opted in, too bad for them. Furthermore, it is unclear whether declining to opt into the speaker or smartphone’s listening system would prevent third-party applications running on the device from discreetly recording voice data on their own. What could help is a form of specifically enforced oversight under which the tech giants must disallow third-party apps from accessing the microphones and cameras, going beyond the current system of “we can terminate a developer’s access if we think they have impermissibly used user data.” They have this power now, but there is no incentive for them to investigate violations by third-party apps or to limit connections to their own servers.
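What that oversight could look like in practice can be sketched as a platform-level permission broker: the operating system, not the app, decides whether a microphone handle is ever granted. The class and method names below are hypothetical, not any current platform’s API; the sketch only illustrates the design choice of refusing third-party access outright instead of relying on after-the-fact termination.

```python
# Hypothetical sketch of platform-level microphone gating. Names are
# illustrative only; no existing platform exposes exactly this interface.
class PermissionBroker:
    def __init__(self, owner_opted_in: bool):
        self.owner_opted_in = owner_opted_in

    def request_microphone(self, app_id: str, first_party: bool) -> str:
        # Third-party apps are refused outright, independent of the owner's
        # opt-in: the owner cannot waive a houseguest's privacy to a
        # third-party developer.
        if not first_party:
            raise PermissionError(f"{app_id}: third-party mic access disallowed")
        if not self.owner_opted_in:
            raise PermissionError(f"{app_id}: owner has not opted in")
        return "mic-handle"  # stand-in for an actual audio stream

broker = PermissionBroker(owner_opted_in=True)
print(broker.request_microphone("vendor.assistant", first_party=True))
try:
    broker.request_microphone("some.third.party.skill", first_party=False)
except PermissionError as refusal:
    print(refusal)  # refused no matter what the owner opted into
```

The design choice worth noticing is that refusal happens before any audio exists, rather than punishing misuse after recordings have already reached a third party’s servers.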
The Kill-Switch

Another proposed solution is a hardware “kill-switch,” by which the microphone, camera, or internet connection within the device can be physically disconnected via a switch built into the product. But there are several issues with implementing this type of feature. First, so far it has been offered mostly in smart speakers, with some overtures from HP and Apple to include a version of it in future laptops. Yet where such a feature is arguably even more needed is smartphones, which follow users around and are therefore a better target for assembling an all-inclusive picture of a user’s data. Second, these features borrow from the hardware developer Purism, which began building kill-switches into its laptops in 2014. Purism’s implementation was a last line of defense against hackers and malware accessing these features, and ostensibly the switches in mainstream devices serve the same purpose. But most of the trust issues come from the developers of the devices themselves. Just look at Google’s Nest home security device, which contained a microphone that Google failed to disclose to consumers for two years. Consumers thus have to trust less-than-trustworthy companies on the efficacy of their kill-switches. Last, and most important, while the kill-switch acts as a physical line of defense against unwanted listening and recording, it undermines the entire purpose of a device that takes commands from your voice. If you want to control the device with your voice, you must reconnect the microphone and send your voice data to the developer’s servers. The whole purpose of the device’s “functionality” is undone by this protection. A kill-switch is only a physical embodiment of the opt-in method: once you opt back into the system, you are right back in the underlying privacy mess.
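The tradeoff can be reduced to a toy model in which the kill-switch is a flag that physically gates the microphone. The names are illustrative, not any vendor’s API; the two states of the model simply exhaust the possibilities.

```python
# Toy model of the kill-switch tradeoff: either the device cannot hear
# you, or your voice data flows to the vendor exactly as it would on a
# device with no switch at all. Names are illustrative.
class Device:
    def __init__(self):
        self.mic_connected = True  # switch disengaged by default

    def toggle_kill_switch(self):
        self.mic_connected = not self.mic_connected

    def voice_command(self, utterance: str) -> str:
        if not self.mic_connected:
            return "(silence: microphone is physically disconnected)"
        # Reconnected, the utterance reaches the vendor's servers as usual.
        return f"uploaded and processed: {utterance!r}"

d = Device()
d.toggle_kill_switch()                          # privacy, no functionality
print(d.voice_command("computer, play music"))
d.toggle_kill_switch()                          # functionality, no privacy
print(d.voice_command("computer, play music"))
```

There is no third state in which the device both hears the command and keeps the audio from the developer; that is the sense in which the switch merely embodies opting in and out.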
Conclusion

Attempts to balance privacy concerns against the basic functions of a smart speaker or smartphone’s digital assistant fail with or without “protections.” As currently implemented, the products function as intended and track your voice data, but little to no privacy is afforded. Activate a physical barrier like the kill-switch, however, and the product ceases to provide its stated function. The smart speaker or digital assistant is a design that cannot be compatible with the basic secrecy or anonymity a private existence requires. These concerns are baked into the very design of all these digital assistants, and there is no current defense against them other than simply not using devices that contain them. Even if science fiction predicted a future in which technological commands move out of our hands and into our voice-boxes, it did not fully predict what would follow along with that future. The basic idea of a digital assistant with which a human user can interact is perhaps still promising, but as for the initial activation method- tried and true mechanical activation is the best bet.