Americans Are Worried About Losing Their Jobs to Artificial Intelligence
Artificial intelligence has made great strides over the last few years. For many people, the technology is already part of everyday life, but there are growing concerns that it could put many out of a job.
The Pew Research Center has released a new study showing that a majority of Americans are concerned about the growing use of artificial intelligence in hiring and employee evaluations at various companies.
The survey of 11,004 American adults was conducted by the Pew Research Center in mid-December last year and focused on respondents' views on the impact of artificial intelligence on the workforce. Although the vast majority of respondents acknowledged that AI is effective in recruiting new employees, many expressed concerns that the technology could invade their privacy and negatively affect their evaluations, which could subsequently lead to job loss.
The study, released on April 20, showed that 32% of Americans believe artificial intelligence is more likely to harm employees in hiring or performance reviews than to help them find or keep jobs.
Seventy-one percent of U.S. citizens strongly oppose the use of AI in hiring or firing decisions. However, the study found that 40% of Americans think AI could help job applicants and employees by speeding up the hiring process, reducing the number of mistakes people make, and removing potential biases that might prevent an applicant from getting a job. In addition, some respondents said that AI has the potential to evaluate their performance more objectively and consistently than a human.
The survey also found that 32% of respondents thought AI would do more harm than good to employees over the next 20 years, with only 13% taking an optimistic view, and nearly two-thirds of respondents said they would not apply for a job if they knew AI would be evaluating them.
The concerns relate to several aspects of the hiring process, from reviewing resumes and evaluating job candidates to monitoring job performance and making staffing decisions. The survey also highlights that most respondents worry about privacy breaches resulting from AI collecting too much personal information, including browsing history or social media activity. The study goes on to say that 90% of senior workers, 84% of mid-level workers and 70% of rank-and-file workers fear being "inappropriately tracked if AI were used to collect and analyze information."
Addressing these concerns
With AI becoming more pervasive in the workplace, tech industry leaders are asking policymakers, companies and developers to start addressing these public concerns. In the European Union, for example, regulators are trying to protect users against potential abuse by requiring that AI systems be transparent and by training workers in this area so that they can adapt to the changing workplace environment. Some leading figures in the AI industry are also calling for the development of more advanced models to be paused or slowed so that these issues can be addressed before it is too late.
In the meantime, regulators have begun to examine how these AI models are trained and how they could affect citizens' rights. Italy was one of the first countries to take action, imposing an outright ban on the AI chatbot ChatGPT on the grounds that it could illegally collect users' personal data and expose children to inappropriate interactions. Other European countries and Canada have raised similar concerns, largely because AI models require huge amounts of data to be trained effectively.
As AI evolves and permeates our everyday lives, including the workplace, it brings many benefits, but it also raises concerns about privacy, fairness and discrimination. Policymakers are trying to ensure that AI is used for the good of people through a range of regulations, along with transparency requirements and education. Only time will tell whether this will be achieved, or whether it is even possible.
Source: decrypt.co