AI and Ethics

Science Fact, Not Fiction

One longstanding common fear among workers, largely yet to be realized, is that robots are coming for their jobs. A far less recognized concern is that the robots are going to decide whether someone gets a job in the first place. These ‘robots,’ more accurately referred to as artificial intelligence (AI)-based systems, are making their way into organizations in ways that affect employees at every step from hiring to promotion and at every level from the mailroom to the board room.

The desire to adapt workplace practices to a COVID-safe environment may have even accelerated the adoption of artificial intelligence systems, in part because the lack of face-to-face interaction can lead organizations to think of workers more like data points and less like people. While artificial intelligence can be used in a number of ways that benefit both workers and employers, it is crucial to understand enough about how the ‘robots’ work in order to assess the ethical implications of AI-based systems and the effects they have on workers’ quality of life.

From an ethical standpoint, the most discussed concern regarding integrating artificial intelligence into the workplace is that, rather than being objective tools that mitigate various biases in hiring and promotion processes, many AI-based systems that have been implemented so far have actually served to reinforce existing biases instead. This is particularly problematic because it enables companies to retain their biases while allowing managers to feel like they aren’t directly involved in or responsible for their expression. It’s also important to consider the effects that artificial intelligence tools have on the morale and wellbeing of workers more generally. Because many AI-based systems used in the workplace focus on increasing worker productivity, however that is measured, it can often feel like such systems are an attempt to get more output from workers while reducing autonomy and not sharing the rewards of the greater production.

What Is Artificial Intelligence Anyway?

In order to put the notion of artificial intelligence in the workplace into proper perspective, it’s important to have an understanding of what artificial intelligence is in and of itself. While much of popular culture portrays the use of artificial intelligence in various spectacular ‘the robots are taking over’ sorts of ways, artificial intelligence systems are frequently far more mundane in the real world. The concept of artificial intelligence is often taken to mean that a computer is replicating human reasoning, but artificial intelligence also refers to cases where a computer is learning how to best achieve a specified goal. In this sense, a large part of the artificial intelligence universe consists of a set of solutions for optimization problems, sometimes referred to as machine learning.
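
To make the optimization framing concrete, here is a minimal sketch, assuming invented data and a simple squared-error goal, of what 'learning' looks like when reduced to its bare bones: searching for the number that makes predictions best match observed outcomes. It is an illustration of the general idea, not a description of any particular product.

    # A minimal illustration of 'learning as optimization': find the weight that
    # makes predictions best match observed outcomes. All data are hypothetical.
    hours_trained = [1, 2, 3, 4]      # invented input feature
    tasks_completed = [2, 4, 6, 8]    # invented outcome to predict

    def error(weight):
        # Mean squared difference between predictions and actual outcomes.
        return sum((weight * x - y) ** 2
                   for x, y in zip(hours_trained, tasks_completed)) / len(hours_trained)

    # 'Learning' here is just searching for the weight with the smallest error.
    best_weight = min((w / 10 for w in range(0, 51)), key=error)
    print(best_weight)  # 2.0 -- the system has 'learned' the pattern in the data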

Many people encounter examples of artificial intelligence on a daily basis. Things like customer service chatbots are powered by artificial intelligence, as are the systems that provide product recommendations on websites such as Amazon. In both of these systems, the user provides feedback that helps the system learn, whether in the form of an answer to ‘did this information answer your question?’ or a product purchase (or lack thereof). In the case of product recommendation systems, the algorithm behind the artificial intelligence uses the purchase feedback as its target for optimization. In other words, it chooses the products to show each new user that maximize the expected purchase amount, based on pattern matching against the data provided by existing users.
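
As a rough sketch of that idea (the products, purchase histories and scoring rule below are invented, not Amazon's actual algorithm), such a recommender can be as simple as scoring each unseen item by how often it was bought by users whose purchase histories overlap with the current user's.

    # Toy purchase-based recommender: score items by how often they were bought by
    # users with overlapping purchase histories. All data here are invented.
    purchases = {
        "ana":   {"tent", "stove", "sleeping bag"},
        "ben":   {"tent", "stove", "lantern"},
        "chris": {"novel", "desk lamp"},
    }

    def recommend(basket, k=2):
        scores = {}
        for other_basket in purchases.values():
            overlap = len(basket & other_basket)   # similarity: shared purchases
            for item in other_basket - basket:     # items the user hasn't bought yet
                scores[item] = scores.get(item, 0) + overlap
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend({"tent", "stove"}))  # ['sleeping bag', 'lantern']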

Artificial Intelligence in the Hiring Process

As mentioned earlier, one type of artificial intelligence involves the ability of computer systems to learn from inputs and take actions that are most likely to achieve a particular objective. Not surprisingly, the computer is told what its objectives are by its human programmers, and this has a number of important implications. To illustrate one of these implications, let’s think about what often happens when a company uses an artificial intelligence system to screen potential candidates for employment. Generally speaking, the company will show the AI system the resumes of people who have been hired by the company and were ultimately successful workers within the organization. The system then has the implicit directive of ‘find me more people like this,’ and it bases the matching on all available information that can be gleaned from the resumes and whatever supplemental information is provided to the system, most likely with some restrictions in order to (theoretically) steer clear of violating anti-discrimination laws.

AI systems are quite good at pattern matching, which in the candidate screening process means that artificial intelligence tends to reinforce existing biases within an organization. For example, if an organization has historically decided to hire mostly high-status white men, an artificial intelligence algorithm trained on existing employees is likely to select on characteristics, relevant or not, that are associated with being in that demographic group – say an interest in sailing or golf. This is obviously an exercise in stereotyping, and it should be noted that what AI algorithms do in this sense can be thought of as stereotyping in mathematical form. It is also important to note that, despite the fact that these systems are generally explicitly barred from choosing based on race, gender and so on, the algorithms by design are very good at identifying characteristics that are proxies for these characteristics, resulting in de facto discriminatory practices.
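
The mechanics of that ‘mathematical stereotyping’ are easy to see in a stripped-down sketch. In the hypothetical example below, keyword weights are derived solely from how often terms appear on past hires' resumes, so hobby terms that merely correlate with the historical hiring pattern count just as much as job-relevant skills. This is an illustration of the principle, not a reconstruction of any vendor's system.

    # Hypothetical 'find me more people like this' scorer: keyword weights come only
    # from frequency on past hires' resumes, whether or not they relate to the job.
    past_hires = [
        {"python", "sailing", "golf"},
        {"java", "golf", "rowing"},
        {"python", "sailing"},
    ]

    weights = {}
    for resume in past_hires:
        for term in resume:
            weights[term] = weights.get(term, 0) + 1

    def score(resume_terms):
        return sum(weights.get(term, 0) for term in resume_terms)

    print(score({"python", "sailing"}))        # 4 -- the hobby boosts the score
    print(score({"python", "salsa dancing"}))  # 2 -- same skill, 'wrong' hobby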

Unfortunately, the scenario given above is far from hypothetical. In recent years, Amazon developed an AI-based resume screening tool, which it then quickly scrapped due to ethical and legal concerns:

“In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’ And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.” 1

Some companies may want to have AI-based systems make hiring decisions in order to try to insulate themselves from accusations of discrimination and bias, but many legal jurisdictions have decided that companies are still responsible for conducting audits of the results of their processes in order to ensure that the companies are not engaging in de facto discrimination. In addition, companies often also have a responsibility to be able to defend the design of their hiring systems ex ante, i.e. before the algorithm is run and results are viewed. Furthermore, states such as Illinois are enacting legislation that requires employers to be transparent regarding their use of artificial intelligence systems to evaluate candidate interviews 2.

Even beyond the issue of perpetuating bias, using artificial intelligence systems in the hiring process poses significant analytical challenges. If a company only trains the artificial intelligence system on resumes or related information from successful workers, it is engaging in what is known as survivorship bias, which results from ignoring data from candidates who were not hired or who were unsuccessful in the organization (i.e. the ‘non-survivors’). While it is true that the AI system is able to identify features that are common in successful resumes even when survivorship bias is present, there is no guarantee that these factors actually differentiate successful candidates from unsuccessful ones. Taking this concept to the extreme, consider a situation where the AI system decides that all successful candidates have two eyes and high-school diplomas. While this is likely true, it is not helpful from a screening standpoint since most people in the underlying population also have these characteristics.

One obvious solution to this problem is to also feed information about unsuccessful candidates to the AI system. In this context, ‘unsuccessful’ can refer either to a bad hire or to a candidate who wasn’t hired at all, depending on the specific task that the system is supposed to perform. This mitigates the problem caused by survivorship bias but can also be an imperfect solution since companies often have limited information about workers who don’t get hired and generally don’t know for sure that people they don’t hire wouldn’t have been successful employees. Another important consideration is that, while artificial intelligence can automate the process of finding more of what a company views as good, it can’t guarantee that it is optimizing against ‘good’ in a ground truth sense because the assessments used to generate the outcomes themselves may be biased or flawed.
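
One way to see why the negative examples matter is to judge each resume feature by how well it separates the two groups rather than by how common it is among the successes alone. The resumes, features and labels below are invented purely to illustrate the point.

    # Why negative examples matter: judge a feature by how well it separates
    # successful from unsuccessful candidates, not by how common it is overall.
    successful   = [{"diploma", "internship", "python"}, {"diploma", "python"}]
    unsuccessful = [{"diploma"}, {"diploma", "internship"}]

    def separation(feature):
        pos = sum(feature in r for r in successful) / len(successful)
        neg = sum(feature in r for r in unsuccessful) / len(unsuccessful)
        return pos - neg  # 0 means the feature doesn't distinguish the groups

    print(separation("diploma"))     # 0.0 -- common to everyone, useless for screening
    print(separation("python"))      # 1.0 -- actually differentiates the groups
    print(separation("internship"))  # 0.0 -- looked predictive until negatives were added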

AI-based systems are also often utilized once a candidate gets past the resume stage and into the interview process. In one somewhat extreme example, an employer has job candidates record interviews, which are then given to a computer that parses each candidate’s speech into ‘tokens’ of words and phrases and analyzes those tokens using what are known as natural language processing (NLP) algorithms. This system, used frequently in the hospitality and finance industries, even takes the concept a step further and assesses job candidates’ mannerisms in addition to their speech patterns:

“Designed by the recruiting-technology firm HireVue, the system uses candidates’ computer or cellphone cameras to analyze their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated ‘employability’ score.” 3

Another system claims to assess job candidates’ cognitive and personality traits via a number of somewhat irrelevant-seeming tasks:

“One company championing this application is Pymetrics, which sells neuroscience computer games for candidates to play (one such game involves hitting the spacebar whenever a red circle, but not a green circle, flashes on the screen).” 4

While such systems are ramping up quickly in terms of their sophistication, they are still at a pretty early stage. For instance, the first program mentioned evaluates a job candidate’s use of ‘I’ versus ‘we’ in response to a question about teamwork. It is therefore somewhat difficult to believe that these systems can properly evaluate technical competence as opposed to merely performing a very basic analysis of personality traits (and even this capability can be viewed with suspicion). It is also the case that such algorithms are inherently biased against applicants who are not native speakers of the language assumed by the NLP algorithm, have unusual speech patterns, exhibit various forms of neurodiversity or are otherwise outside the norms that are prevalent in the dataset used to train the algorithms. Even if exceptions are made for such individuals, perhaps by having a current employee assess such recordings, having different processes for different types of people presents its own set of ethical challenges.
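
To give a sense of how shallow this kind of signal can be, here is a toy version of an ‘I’ versus ‘we’ check: crude tokenization followed by a pronoun count. The word lists and scoring rule are assumptions made for illustration, not any vendor’s actual formula. A candidate who describes the same contribution with different phrasing, or in a second language, would receive a different score for reasons that have nothing to do with competence.

    import re

    # Toy 'teamwork' signal: tokenize a transcript and compute the share of
    # first-person-plural pronouns. Pronoun lists and scoring rule are invented.
    def team_language_share(transcript):
        tokens = re.findall(r"[a-z']+", transcript.lower())  # crude tokenization
        i_count = sum(t in {"i", "i'm", "me", "my"} for t in tokens)
        we_count = sum(t in {"we", "we're", "us", "our"} for t in tokens)
        return we_count / max(i_count + we_count, 1)

    print(team_language_share("We split the work and I handled the testing."))  # 0.5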

One feature of algorithms is that they are, by definition, formulaic, and this feature can be exploited by potential employees.

“As AI-based hiring systems become more commonplace, we should expect that sources of information on how to game the systems will also grow in popularity, thereby defeating much of the value of the systems.”

From the current employees’ perspective, automated screening systems take away their agency and discretion in hiring in a number of ways, since this is one situation in which the judgment and authority of human decision makers is literally replaced by that of the AI robots. This replacement likely leaves current workers feeling like a less-valued part of the organization and less able to influence their future working environment. This consideration should be weighed alongside the other benefits and costs of automation. Furthermore, such formulaic approaches take away the ability to identify and capitalize on candidates with unconventional backgrounds.

Artificial Intelligence in the Workplace

Employee Assistance Technology

Many introductions of artificial intelligence systems into the workplace are viewed with the concern that the ‘AI robots’ are going to take people’s jobs. To counter this notion, many such systems are sold to employees, investors and the general public as productivity enhancers – in other words, ways to shield workers from having to perform boring or repetitive tasks so that they can focus on their real, important work. It is likely true, however, that unless the amount of real work to be done increases, fewer workers will be needed to complete it, so workers’ concerns are not entirely baseless. Layoffs aside, a brief survey of the human psychology literature suggests that real, important work is mentally taxing, and most people would probably prefer a mix of more and less challenging tasks as opposed to being expected to do only the hard stuff all day every day. Furthermore, workers generally prefer to have a degree of autonomy in their work, which is fairly antithetical to letting AI systems handle various aspects of decision making. Even if workers aren’t put off by the AI systems introduced into the workplace, it is often the corporation that benefits most from them, particularly if it can use AI technology to increase worker productivity without having to increase employee compensation, in which case the company gains at the expense of its harder-working employees.

To understand the effects of such systems, it’s helpful to take a look at a concrete example. Here is an excerpt of the value proposition for an AI-based customer-service support tool called Amelia:

“… last year an insurance services provider hired Amelia as a ‘whisper agent’ for its call center. Now, when live human agents are on call with customers, they are simultaneously interacting with Amelia, who elevates relevant information and dynamically leads agents through services processes, step-by-step.” 5

The company then claims, not surprisingly, that the Amelia tool increased productivity by reducing average call time and the need for follow-up calls. What the pitch ignores is that workers now have more things to focus on at once (both the customer and the tool, which appears to push information to the worker in real time) and are therefore doing a more complicated and taxing job than they were before the introduction of the AI tool.

In contrast, there is significant potential to use artificial intelligence to make workers’ lives easier through the use of agent-based support tools. For example, many workers would likely prefer to interact with an AI-based chatbot to get help with administrative questions than sit on hold waiting for an HR representative or try to navigate a sea of poorly organized web pages on the corporate intranet. In such cases, workers should be thought of as the customers for such products, and the same frustrations that customers have regarding automated support tools should be noted and addressed in this context. Most notably, AI-based service solutions generally don’t solve 100% of customers’ issues, so companies still need to staff an appropriately sized support department in order to provide backup to the automated system rather than leave employees stranded and frustrated. In addition, some of these customer support technologies are designed with the explicit goal of making it prohibitively difficult to reach a human representative, and the ethics of implementing such systems to ‘help’ with essential worker functions should be seriously questioned.

Employee Monitoring Technology

The desire to monitor employees is often strong within organizations (likely even stronger than is optimal), and for some companies this desire has only grown stronger with the COVID-driven shift to remote work. Since it is quite cumbersome, particularly in a remote environment, for managers to make sure that their reports are being productive and not slacking off, the idea of AI-driven monitoring solutions is very appealing to a lot of employers. How do these systems work exactly? At a general level, the systems are given data on what productive (and, ideally, unproductive) workers look like in terms of behaviors such as keyboard and mouse usage, presence in front of the webcam and so on, and they then flag workers as good or bad based on these same behaviors. While these systems likely work as intended for a while, three important caveats should be noted. First, workers will likely learn over time how to trick the systems, particularly if there isn’t a strong feedback loop where the underlying algorithm gets tweaked as gaming is attempted. Second, as with some other AI technologies, these types of systems generally have negative effects on employee morale and feelings of agency and autonomy. Lastly, it’s not even entirely clear whether the relationship between observable behaviors and productivity is strong enough to warrant this type of system in the first place, particularly when the systems replace a thorough review of a worker’s output.
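
In simplified form, such a monitoring system boils down to scoring observable activity and flagging anyone who falls below a cutoff. The behaviors, weights and threshold below are invented for illustration; the point is that none of these inputs says anything directly about the quality of the work produced.

    # Simplified activity-based monitoring: score observable behaviors and flag
    # workers below a cutoff. Feature names, weights and threshold are invented.
    activity_log = [
        {"name": "worker_a", "keys_per_hr": 900, "mouse_per_hr": 300, "webcam_pct": 95},
        {"name": "worker_b", "keys_per_hr": 150, "mouse_per_hr": 40,  "webcam_pct": 60},
    ]

    def activity_score(row):
        # A weighted sum of proxies -- none of these inputs measures actual output.
        return (0.5 * row["keys_per_hr"] / 1000
                + 0.3 * row["mouse_per_hr"] / 400
                + 0.2 * row["webcam_pct"] / 100)

    flagged = [row["name"] for row in activity_log if activity_score(row) < 0.5]
    print(flagged)  # ['worker_b'] -- flagged on proxies, not on work produced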

Some organizations use AI-based systems to implement various forms of data security. At a basic level, there might be an artificial intelligence component behind the systems that decide what web sites to allow a worker to access from their workstation or laptop, perhaps an algorithm that evaluates sites for similarity to other sites that have previously been flagged as problematic. Similarly, companies can use AI-based systems to retroactively flag sites visited, files uploaded or downloaded, emails sent or other actions as being breaches of corporate data security policies as opposed to having to have people go through and examine each individual information transfer event.
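
A bare-bones version of that similarity idea might compare a site's descriptive tokens against sites that reviewers have already flagged and block anything that is 'close enough' to a known-bad example. The sites, tokens and threshold below are hypothetical, chosen only to show the mechanism.

    # Hypothetical similarity-based flagging: block a site if its descriptive tokens
    # are close enough to a site that reviewers have already flagged.
    previously_flagged = [
        {"file", "sharing", "upload"},
        {"anonymous", "paste", "text"},
    ]

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def should_flag(site_tokens, threshold=0.5):
        return any(jaccard(site_tokens, bad) >= threshold for bad in previously_flagged)

    print(should_flag({"file", "upload", "storage"}))  # True: resembles a flagged site
    print(should_flag({"weather", "forecast"}))        # False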

While such systems can be far more efficient than humans at identifying potential security risks, it is crucial to maintain human oversight over the processes that take action against workers in order to mitigate the negative consequences of false positives. For example, some systems will literally threaten to terminate an employee if a proper explanation for the flagged action is not provided within a specified time frame, and this can result in logistical disaster if the message is sent while the worker is on vacation or otherwise unavailable. In addition, such automated systems are likely to create a high level of anxiety for workers who receive the messages if they are not accompanied by a sufficient level of detail regarding the nature of the offense and why it is viewed as problematic, which is difficult for many AI-based systems to provide. When considering these types of technologies, it is important to note that the decision regarding what action to take when a flag is raised is an organizational choice and not an inherent feature of the technology, though the systems may provide default settings that nudge organizations toward more draconian policies, perhaps to make the technologies appear more impactful.

Artificial Intelligence and Employee Promotions

In a lot of ways, promoting an employee can be viewed as hiring that employee into a new position within the organization. As such, most of the concerns that relate to the use of artificial intelligence in the hiring process also apply in the context of employee promotions. Furthermore, it’s unclear whether the task of determining who is ready for promotion can be performed by an algorithm, and attempts to construct the inputs to such a process are likely to distort the incentives of employees. For example, if workers know that the number of meetings they hold is an input into the promotion function, their bosses are probably going to see a larger number of meetings on the calendar than they would otherwise, even if such a change is not good for the organization.

“It’s helpful to recall Goodhart’s Law in this context – i.e. when a measure becomes a target, it ceases to be a good measure.”

Even aside from gaming concerns, AI-based promotion systems have a major weakness: much of what organizations should consider in promotion decisions is not easily quantifiable.

Artificial Intelligence and C-Suite Management

Given the extensive and increasing use of artificial intelligence in the workplace, it’s reasonable to ponder how high up in an organization artificial intelligence can, well, get promoted. As it turns out, experiments with artificial intelligence have reached all the way up to the board of directors in some organizations. In one instance, venture capital firm Deep Knowledge Ventures explicitly placed an AI-based system on its board in order to give it a vote on investment decisions. The system takes the same information under consideration that a regular board member would – financial information, clinical trial pipeline data, intellectual property ownership, previous funding and so on – but evaluates it in a theoretically more rigorous and objective manner than its human counterparts, all with the goal of finding patterns that explain future productivity in order to select the most lucrative investments 6. One interesting feature of this system is that it has the human backup that has been pointed out as crucial in other contexts, largely because the algorithm is only one vote in a group of otherwise traditional human voters.

Is AI at Odds with Employee Happiness?

Overall, AI-based technologies should be viewed with a bit of skepticism, and care should be taken to ensure that such systems do not reinforce existing biases or flawed practices. In addition, the impact on workers’ morale and quality of life should be taken into account when deciding whether a particular artificial intelligence system is worthwhile, and companies should not kid themselves regarding what technologies workers actually view as valuable and helpful. That said, AI-based systems that seem to harness the upsides of the technology without significant downsides do exist.

In addition to the systems already mentioned, there are a few other uses of artificial intelligence in organizations that are worth noting. First, Skype Translator is a great tool that does exactly what its name implies: it uses NLP technology to enable coworkers who aren’t fluent in the same language to almost seamlessly communicate with one another. As another example, systems such as Textio use reinforcement learning to help companies create job postings that attract a larger (and hopefully more diverse) pool of applicants 7. Lastly, given that much of data science is at its core a form of artificial intelligence, many employees harness artificial intelligence themselves in order to provide important business insights to their employers and other stakeholders. In this sense, artificial intelligence technologies lead to well-paying and meaningful employment opportunities in a wide variety of industries.

Jodi N. Beggs

Jodi is a behavioral economist who specializes in how human psychology affects organizational and market dynamics. She is the founder of Economists Do It With Models, a company that focuses on producing educational content for use in and out of the classroom. In addition, Jodi works as an economist in the tech sector, writes for various publications, and is a competitive figure skater.
