As AI technologies become more prevalent, more and more employers and organizations are integrating AI into their operations. One way of doing so is by using AI decision-support.
AI Decision-Support: "Runs algorithms and makes decisions based off of specific criteria asked for by the relevant organization" (Feldkamp et al., 2023)
Example: After the necessary criteria are entered, AI decision-support can sort through applications and resumes to recommend which individuals receive interviews and which are rejected.
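To make this concrete, here is a minimal sketch of what rule-based screening might look like. It is an illustration only: the criteria names, weights, and cutoff are invented for this example, not taken from Feldkamp et al. (2023) or any real hiring system.

```python
# Hypothetical sketch of rule-based applicant screening.
# Criteria, weights, and the cutoff are illustrative assumptions only.

applicants = [
    {"name": "A", "years_experience": 5, "has_degree": True,  "available_weekends": True},
    {"name": "B", "years_experience": 1, "has_degree": False, "available_weekends": True},
    {"name": "C", "years_experience": 3, "has_degree": True,  "available_weekends": False},
]

def score(applicant):
    """Weighted sum over the criteria the organization asked for."""
    return (2.0 * applicant["years_experience"]
            + 3.0 * applicant["has_degree"]
            + 1.0 * applicant["available_weekends"])

CUTOFF = 8.0  # applicants scoring at or above this are recommended for interview

for a in sorted(applicants, key=score, reverse=True):
    decision = "interview" if score(a) >= CUTOFF else "reject"
    print(f"{a['name']}: score={score(a):.1f} -> {decision}")
```

Notice that only easily quantifiable criteria appear here; qualities like soft skills and motivation have no obvious column, a limitation that comes up again below.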
Given this example, it may not be surprising that AI decision-support has been implemented in human resources management (HRM). HRM utilizes this technology to help recruit employees, sort through and select potential employees, and/or improve retention rates (Feldkamp et al., 2023). Decision-support is reportedly used in these departments to make better decisions more quickly.
Using AI decision-support in HRM has raised some concerns, initially about the fairness of decision-support. To fully consider an applicant, many factors need to be taken into account, such as work history, experience, and availability, all of which could be handled by algorithms. However, other factors that matter, such as reliability, soft skills, and motivation, might be harder for decision-support to account for. In addition, it is possible for AI systems to have bias coded into them. The idea of biased AI systems in the workplace raises the question of whether the technology is just. For instance, an AI system could be coded with bias that results in the exclusion of certain groups of people based on gender, sexual orientation, race, and so on. With this in mind, researchers were curious about how individuals using decision-support would react to its outcomes. For instance, would individuals receiving recommendations from AI decision-support trust what the AI proposes?
Justice: A just process meets several requirements: it is consistent, free of unfair bias, based on accurate information, correctable, and ethical (Feldkamp et al., 2023). When selecting potential employees, organizations and employers are legally required to ensure potential employees are not picked based on job-irrelevant characteristics and that there is no discrimination of any kind (Feldkamp et al., 2023).
"An individual's prior experience and knowledge on the given topic or situation, and it includes how an individual feels and perceives the topic and situation" (Feldkamp et al, 2023).
Fairness: Whether or not applicants perceive the selection process as fair (Feldkamp et al., 2023). For example, if an employer allows one applicant to reschedule an interview but denies another applicant's request to reschedule, the second applicant may perceive the process as unfair.
Feldkamp et al. (2023) conducted a study to explore individuals' reactions to recommendations formed by AI. One group of participants received a recommendation from AI that included an equal number of men and women, while another group received a recommendation that included many more men than women. This discrepancy was created so that participants would question the justice, fairness, and trustworthiness of the recommendation. Participants were then asked whether they would accept or reject the recommendation from the AI system and whether the recommendation was fair, trustworthy, and just. The results showed that participants perceived decisions made by AI to be less fair, less trustworthy, and less likely to meet moral standards, and they were more likely to reject these recommendations (Feldkamp et al., 2023). However, AI decision-support was also considered less biased and more consistent (Feldkamp et al., 2023).
These results are important because they show that people tend to be skeptical of decisions made by AI technologies.
Imagine this…
You work in HRM and your company purchases an AI-based decision-support system. Now, instead of spending your day sifting through applications and resumes, you are doing paperwork or another simple task. Referring back to the Bankins & Formosa (2023) article, your skill cultivation and use, task significance, autonomy, and sense of belongingness may be affected. You are not learning or using new skills, you feel replaced by technology, you don't have the freedom to determine your tasks, and you don't feel connected to the applicant pool. However, you do still decide whether to accept or reject the recommendation suggested by the AI. What happens now?
Bankins & Formosa (2023) would suggest that your work is less meaningful, DeKay (2022) would suggest that your workplace psychological wellbeing may decline, and Feldkamp et al. (2023) would suggest that you are more likely to reject what the AI recommends anyway. What this shows is that employers, and maybe even some HRM employees, are eager to purchase expensive software to assist with personnel selection, but that software has the potential to either harm employees or be of no use at all. If individuals tend to reject useful and applicable recommendations from AI because they view them as less fair and less trustworthy, the technology may cause more problems than it provides benefits.
Another important idea to consider is that although individuals rejected the AI recommendations more often, these recommendations were still considered to be less biased. Why did this occur? Feldkamp et al. (2023) suggest this may be due to an assumption that algorithms are able to filter out biases better than humans. Yet, as previously discussed, it is possible for AI technologies to have bias built into their code. If decision-support recommendations are assumed to be less biased, that opens the door to potential discrimination in personnel selection in HRM.
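As a hedged illustration of how bias can end up "coded in" without anyone writing an explicitly discriminatory rule, consider a scoring function that penalizes a seemingly neutral feature that correlates with a protected group. Every name and number below is invented for the sketch; it is not how any particular system works.

```python
# Invented example: a seemingly neutral criterion acting as a proxy for gender.
# "Career gap" looks job-related, but gaps are more common among caregivers,
# who are disproportionately women, so penalizing gaps heavily can exclude
# one group without any rule ever mentioning gender.

applicants = [
    {"gender": "M", "skill": 7, "career_gap_years": 0},
    {"gender": "M", "skill": 6, "career_gap_years": 0},
    {"gender": "F", "skill": 8, "career_gap_years": 2},  # e.g., parental leave
    {"gender": "F", "skill": 7, "career_gap_years": 3},
]

def score(a):
    # The heavy gap penalty is the hidden source of bias in this sketch.
    return a["skill"] - 2 * a["career_gap_years"]

recommended = [a for a in applicants if score(a) >= 5]
print([a["gender"] for a in recommended])  # -> ['M', 'M']: only men clear the bar
```

Here the two women have skill scores as high as or higher than the men, yet neither is recommended, which is exactly the kind of outcome a user who "assumes algorithms filter out bias" might never think to check.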
Imagine this…
An employee working in HRM has just been shown an AI system's recommendation for who should be invited to interview for a dental assistant position. 10 men and 10 women applied; 4 of the men and 8 of the women identified as Black. The recommendation from the AI system included 2 white women and 6 white men, meaning every white applicant was recommended and every Black applicant was excluded. The AI is clearly exhibiting a racial bias. Feldkamp et al.'s (2023) findings suggest that the HRM employee may overlook this bias because of the assumption that it is impossible for AI to be biased. If you were one of the 12 Black applicants, how would you feel about this?
The answer should be: "Not very happy!" In fact, if a biased AI system were implemented in a workplace, it could very well lead to applicant discrimination. Discrimination involves excluding people based on factors irrelevant to the job, such as race, gender, sexual orientation, religion, etc. Thinking back to the definition Feldkamp et al. (2023) provide for justice, discriminating against applicants is illegal and a direct violation of justice. This possibility should encourage employers and organizations to ensure employees are properly educated on the capabilities of AI decision-support recommendations.
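One concrete check an educated HRM employee could apply to the scenario above is a simple selection-rate comparison. The "four-fifths rule" from U.S. employment-selection guidelines flags a group whose selection rate falls below 80% of the highest group's rate as showing possible adverse impact. The sketch below (my own illustration, not part of Feldkamp et al.'s study) runs that arithmetic on the numbers from the dental assistant scenario.

```python
# Selection rates from the scenario: all 8 white applicants were recommended,
# none of the 12 Black applicants were.
groups = {
    "White": {"applied": 8,  "recommended": 8},
    "Black": {"applied": 12, "recommended": 0},
}

rates = {g: d["recommended"] / d["applied"] for g, d in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    # Four-fifths rule: flag if a group's rate is under 80% of the highest rate.
    flagged = rate < 0.8 * highest
    print(f"{group}: selection rate {rate:.0%}"
          + (" -> possible adverse impact" if flagged else ""))
```

Running this prints a 100% selection rate for white applicants and 0% for Black applicants, an immediate red flag that no one should wave away just because an algorithm produced the list.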
All in all, the results Feldkamp et al. (2023) found were not very encouraging for the implementation of AI decision-support in HRM workplaces. The researchers found that participants perceived decisions made by AI to be less fair, less trustworthy, and less likely to meet moral standards, and that individuals were more likely to reject these recommendations (Feldkamp et al., 2023). However, decision-support was also considered less biased and more consistent. Together, these perceptions may lead to exaggerated denial of AI decision-support recommendations while also opening the door to overlooked discrimination. With these findings in mind, to ensure AI decision-support recommendations are fair, trustworthy, and just, employers need to take precautions to educate their employees. One example would be to host informational sessions that explain AI processes and capabilities. Employees should be made aware of the possible biases of AI systems; at the same time, employers and developers should take all necessary steps to avoid biased software. With proper education, acknowledgement of limitations, and proper implementation (in regard to psychological wellbeing and meaningful work), AI decision-support could be a beneficial tool.
Although Feldkamp et al. (2023) looked at the implementation of AI in HRM, others have been considering AI in numerous other workplaces. For example, although it has yet to be used in clinical settings, there is major discussion of using AI in psychotherapy. Check out the next tab to learn more about the potential roles AI may someday play in the mental health care field.