Employees in the post-pandemic era: how does the rise in using algorithmic evaluation methods violate the employees' right to equality?
-- By LiorSokol - 11 Jan 2022

In 2020, the COVID-19 pandemic spread around the world, and millions of employees were required to move to working from home as part of the effort to mitigate the pandemic's risks. This homeworking breakthrough will plausibly not be limited to the pandemic period but marks a new era in the labor market.

One of the immediate consequences of homeworking is the increased use of algorithmic evaluation methods, for two main reasons. First, the lack of in-person interaction between employer and employee creates a sense of lost control for the employer, who consequently seeks alternative ways of assessment. Second, homeworking is characterized by increased use of technology, making algorithmic evaluation tools simpler and more accessible.
What is an algorithmic evaluation method?
Algorithmic code is software that, for each external input, produces a specific output. The programmer "trains" the software by exposing it to big data, which adjusts the algorithmic code accordingly. In the context of the labor market, employers use algorithmic code, trained on big data about past employees, to predict employees' success, to promote existing employees, or to recruit new ones. The data fed into the code includes both information directly related to the job, such as salaries, working hours, and other productivity metrics tailored to the workplace, and personal data, such as the number of children, personal status, and health status. In this way, each existing or potential employee is given a data-based profile that evaluates their future chances of success.
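To make this concrete, the following is a minimal sketch of such an evaluation pipeline in Python, assuming an entirely hypothetical dataset, illustrative column names, and scikit-learn as the modeling library; it shows only the general shape of the process described above, not any actual employer's system.

    # Toy "algorithmic evaluation": train on records of past employees,
    # then score a new candidate. All data and column names are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Historical records of past employees (the "big data" used for training).
    past_employees = pd.DataFrame({
        "weekly_hours":   [45, 38, 50, 40, 42, 36],
        "sales_per_week": [30, 22, 35, 25, 28, 20],
        "num_children":   [0, 2, 0, 1, 0, 3],
        "promoted":       [1, 0, 1, 0, 1, 0],   # the outcome the model learns to predict
    })

    features = ["weekly_hours", "sales_per_week", "num_children"]
    model = LogisticRegression().fit(past_employees[features], past_employees["promoted"])

    # A new candidate is reduced to the same feature vector and given a score.
    candidate = pd.DataFrame({"weekly_hours": [40], "sales_per_week": [26], "num_children": [2]})
    print("Predicted chance of success:", model.predict_proba(candidate)[0, 1])

Note that the model treats personal data such as the number of children as just another predictive signal, which is precisely where the equality concerns discussed below begin.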
How do algorithmic evaluation methods discriminate against employees?
Many studies suggest that algorithmic evaluation methods tend to discriminate against underprivileged groups through several mechanisms. First, the historical data used to train the algorithm reflects existing biases. For instance, when assessing potential for promotion to management positions, the reality in which women were discriminated against in such promotions produced a status quo in which management positions are characterized by significant male dominance. The algorithm therefore tends to assess women as less appropriate for management positions. The problem becomes even more severe when the algorithm's biases rest on seemingly "neutral" criteria, such as height and weight, that are correlated with sex, making the sources of bias hard to detect and much harder to fix. For example, a study examining Google's custom advertising algorithm found that a user searching for names associated with Black people received advertisements related to criminal-record information about 25% above average, without any programmer being able to detect the cause of the bias.
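A small synthetic simulation can illustrate how a bias of this kind survives even when the protected attribute itself is withheld from the model; the numbers and feature names below are invented purely for illustration.

    # Sketch: a "neutral" proxy feature carries a historical bias even though
    # the protected attribute is never given to the model. Data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    is_woman = rng.integers(0, 2, n)                      # protected attribute (not a model feature)
    height = rng.normal(165, 6, n) + 12 * (1 - is_woman)  # "neutral" feature correlated with sex
    skill = rng.normal(0, 1, n)                           # genuinely job-relevant signal

    # Historical promotion decisions: driven by skill, but biased against women.
    promoted = (skill + 1.0 * (1 - is_woman) + rng.normal(0, 1, n)) > 0.5

    model = LogisticRegression().fit(np.column_stack([height, skill]), promoted)
    scores = model.predict_proba(np.column_stack([height, skill]))[:, 1]
    print("mean predicted score, men:  ", scores[is_woman == 0].mean())
    print("mean predicted score, women:", scores[is_woman == 1].mean())
    # The gap persists because height stands in for sex in the biased historical labels.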
Second, algorithmic evaluation leads to indirect discrimination based on generalization. In many cases, the average value of a particular characteristic differs between population groups. Although the characteristic itself may be relevant to the employee's evaluation, the algorithm assumes the group average holds for every member of the group and thus indirectly discriminates against individuals who do not match it.

For instance, Uber's software predetermines fares for each driver. To make the fare-setting process more efficient, Uber decided to use algorithmic evaluation software that predicts the actions of the specific driver and sets the fare in a customized manner. A study that tracked the results of Uber's algorithmic evaluation found that women receive lower rates than men because, on average, women drive more slowly. Thus, although driving speed does predict productivity, the generalization resulting from the algorithmic evaluation meant that even a woman who drove at the same speed as a man received a lower wage.
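A stylized example of this generalization effect, with entirely made-up numbers and no connection to Uber's actual pricing code, might look like this:

    # Sketch: scoring a driver by her group's average speed instead of her own.
    # All numbers are invented for illustration.
    group_avg_speed = {"men": 30.0, "women": 27.0}   # hypothetical average km/h per group

    def predicted_hourly_fare(group, base_rate_per_km=1.0):
        # The algorithm uses the group average, not the speed this
        # particular driver actually drives.
        return base_rate_per_km * group_avg_speed[group]

    # Two drivers who in fact drive at the identical speed of 30 km/h:
    print(predicted_hourly_fare("men"))    # 30.0
    print(predicted_hourly_fare("women"))  # 27.0 -> lower pay for identical behavior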
The biased outcomes of algorithmic decision-making are uniquely disturbing in two ways. First, algorithmic methods seem "objective," so employers are much less aware of their biases than if they were making the decisions themselves. Moreover, it is much harder to subject these decisions to judicial review: algorithmic evaluations that rely on big data rest on ever-changing, dynamic mechanisms whose core logic is difficult to follow, they rely on particularly complicated technologies, and the mechanisms by which they operate are inherently non-transparent. Second, algorithmic evaluation can serve as a tool for employers to cover up discriminatory decisions and strengthen their defense in court. Humans have enough biases without adding external ones or giving them tools to cover them up.
Suggested approach
The internal biases of algorithms call for an approach that both increases employers' awareness of the biases of algorithmic decision-making, so as to encourage caution, and gives employees tools for criticism. The framework should be implemented through guidelines for employers on the execution and auditing of algorithmic decision-making. Such guidelines may be provided by federal bodies, such as the EEOC (the US Equal Employment Opportunity Commission), thereby giving employers the tools to avoid unintentional discrimination.
The key factors that such guidelines should include, in my opinion, are as follows. First, the algorithm should be operated by people with sufficient expertise and a sophisticated understanding of the tools. Second, transparency: providing an explanation of how the algorithm operates and disclosing the conditions behind each algorithmic decision. Third, employers should define external fairness standards against which the algorithmic decision will be reviewed; in this way, the algorithms' biases are more likely to be identified, so that mistakes can be corrected and the algorithms improved. Fourth, an instruction to verify and audit the whole process regularly: employers should implement a data-quality control process to develop quality metrics, collect new data, evaluate data quality, and remove inaccurate data from the training set.
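As one illustration of the third and fourth factors, an auditing guideline could ask employers to compare the algorithm's selection rates across groups; the sketch below assumes a hypothetical audit table and uses the familiar four-fifths (80%) rule of thumb for adverse impact as the external standard.

    # Sketch of a simple fairness audit: compare the algorithm's selection
    # rates across groups. The data and column names are hypothetical.
    import pandas as pd

    audit = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "selected": [1,    1,   1,   0,   1,   0,   0,   0],  # the algorithm's decisions
    })

    rates = audit.groupby("group")["selected"].mean()
    ratio = rates.min() / rates.max()
    print(rates)
    print(f"Selection-rate ratio: {ratio:.2f}")
    if ratio < 0.8:   # four-fifths rule of thumb for adverse impact
        print("Potential adverse impact -- review the model and its training data.")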
The COVID-19 pandemic created new opportunities in the labor market, but these were accompanied by equality risks deriving from the extensive use of algorithmic evaluation methods. As preventing or banning the use of such tools is not plausible, federal bodies should provide employers with the necessary guidelines on how to mitigate the algorithms' biased outcomes.