Automated Decision-Making Systems

Decision automation frees managers and professionals to focus on tasks that require more creativity, nuance, or ethical judgment. It can increase productivity and reduce risk and error.

But automated decision systems raise questions about disparate impact and discrimination. In response, lawmakers are introducing legislation in record numbers to address these concerns.

Artificial Intelligence

AI is used in a variety of business processes to improve efficiency and accuracy. It analyzes large data sets and finds patterns that humans may not easily see. For example, it can identify trends in customer behavior and offer more personalized services. It can also detect inventory shortages and make supply chain decisions to prevent stockouts.
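To make the stockout example concrete, the decision such a system automates can be as simple as a reorder-point rule. The sketch below is purely illustrative, with invented numbers; real systems typically forecast demand from historical data rather than rely on a fixed daily average.

```python
# A minimal sketch of one automated supply-chain decision: flag a product for
# reorder when stock on hand will not cover expected demand over the supplier's
# lead time plus a safety buffer. All numbers here are invented for illustration.
def should_reorder(on_hand: int, avg_daily_demand: float,
                   lead_time_days: int, safety_stock: int) -> bool:
    reorder_point = avg_daily_demand * lead_time_days + safety_stock
    return on_hand <= reorder_point

# Example: 120 units on hand, ~15 sold per day, 7-day lead time, 30-unit buffer.
print(should_reorder(on_hand=120, avg_daily_demand=15, lead_time_days=7,
                     safety_stock=30))  # True -> trigger a reorder
```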

The most common form of AI is machine learning, which involves building mathematical models that are trained to find patterns in data. For example, an HR software program might train a model that predicts whether job applicants will be successful by analyzing data such as their past jobs, college major, school attended, and zip code.
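As a rough illustration of what such a model can look like in practice, here is a minimal sketch: a logistic regression fitted to hypothetical historical applicant records and then used to score a new applicant. The column names, data, and "success" label are invented for illustration and are not drawn from any real HR product.

```python
# A minimal sketch of the kind of model described above: a classifier fitted to
# hypothetical historical applicant records and used to score a new applicant.
# Column names, data, and the "success" label are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "years_experience": [1, 5, 3, 8, 2, 6],
    "college_major":    ["cs", "math", "cs", "biology", "math", "cs"],
    "zip_code":         ["10001", "94105", "10001", "60601", "94105", "60601"],
    "was_successful":   [0, 1, 0, 1, 0, 1],   # label assigned by the employer
})

# One-hot encode the categorical columns so the model can use them numerically.
X = pd.get_dummies(history.drop(columns="was_successful"))
y = history["was_successful"]
model = LogisticRegression().fit(X, y)

# Score a new applicant described with the same columns.
applicant = pd.DataFrame({"years_experience": [4],
                          "college_major": ["math"],
                          "zip_code": ["94105"]})
applicant_X = pd.get_dummies(applicant).reindex(columns=X.columns, fill_value=0)
print(model.predict_proba(applicant_X)[0, 1])  # predicted probability of "success"
```

Note that an input like zip code can act as a proxy for race or income, which is one reason models of this kind attract the legal scrutiny discussed below.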

Companies that use AI and automated decision-making systems (ADS) must understand their legal landscape and compliance obligations, including those imposed by privacy law, such as PIPEDA and Bill 64. They should conduct technical due diligence on AI and ADS to ensure the systems are developed according to appropriate standards or codes, be able to explain how a particular automated system reaches its decisions, and put contracts in place with AI and ADS vendors that address indemnification and liability in case of damages or disputes.

Machine Learning

Machine learning is a method that allows computer systems to learn from data rather than follow only explicitly programmed rules. As algorithms adjust based on feedback, they can develop behaviors that weren't programmed in advance and offer new capabilities, such as reading context.
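As a toy illustration of "adjusting based on feedback", the sketch below fits a single weight by repeatedly nudging it in the direction that reduces prediction error. The data and learning rate are made up; real systems fit many parameters over far larger datasets, but the feedback loop is the same idea.

```python
# A toy illustration of learning from feedback: a single weight is nudged after
# each example in the direction that reduces prediction error. Data are made up.
samples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, observed output)

weight = 0.0          # the behavior is not hand-coded; it is learned
learning_rate = 0.01

for _ in range(200):
    for x, y in samples:
        error = weight * x - y                 # feedback: how far off was the prediction?
        weight -= learning_rate * error * x    # adjust to reduce that error

print(f"learned weight: {weight:.2f}")  # ends up near 2.0 for this data
```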

However, this comes with a host of new challenges. Many companies struggle to find a way to effectively integrate machine learning into their decision-making processes and ensure transparency and accountability. The opaque nature of many AI systems can exacerbate societal disparities and make it hard for businesses to know whether they are using data responsibly.

It is also difficult for humans to spot the biases inherent in ML models. Hard evidence of algorithmic discrimination often surfaces only through fortuitous revelations, such as Latanya Sweeney's discovery that searching her name returned ads suggesting an arrest record while searches for traditionally white names did not, or through time-consuming forensic analysis. Until these issues are resolved, it is crucial to keep a human in the loop.
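One check that can be automated as part of such an analysis is comparing a system's selection rates across groups, a heuristic known in US employment practice as the four-fifths rule. The sketch below applies it to hypothetical decision records; the 0.8 threshold follows that heuristic, and the group labels and outcomes are invented.

```python
# A minimal sketch of one automatable check: compare a system's selection rates
# across groups, a heuristic known in US employment practice as the four-fifths
# rule. Group labels and outcomes are hypothetical.
from collections import defaultdict

decisions = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "A", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]

totals, selected = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    selected[d["group"]] += d["selected"]

rates = {g: selected[g] / totals[g] for g in totals}
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

Passing a check like this does not show a system is unbiased; it only flags one measurable disparity for human review.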

Predictive Models

Predictive models ingest massive data sets and produce clear, actionable outputs in support of specific business goals. They can forecast demand and provide more accurate estimates of churn, for example. Or they can reduce operational expenses and save time by predicting when equipment is likely to break down or need repair.

For example, a predictive model may assess whether a student is at risk of failing to graduate, using digital and campus transaction data (e.g., dining hall usage and bookstore purchases). These models are not without controversy, however, as they can result in statistical discrimination based on race or ethnicity.

This has been a concern in some areas, leading to regulatory restrictions on their use by health care systems, for example. To address this, some researchers are developing predictive analytics that take equity into consideration. This is an important step, but not enough on its own. To be effective, these models must also be able to explain their decisions.
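One simple form of explanation is available when the underlying model is linear: each feature's contribution to a score is its coefficient times its value, so the system can report which inputs pushed a prediction up or down. The sketch below applies this to a hypothetical student-risk model like the one described above; the features, data, and labels are invented, and real systems would need richer explanations than this.

```python
# A minimal sketch of one form of explanation: with a linear model, each feature's
# contribution to a score is its coefficient times its value, so the system can
# report which inputs pushed a prediction up or down. Features, data, and labels
# here are invented; they are not from any real student-risk system.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["gpa", "credits_completed", "dining_hall_visits"]
X = np.array([[3.1, 60.0, 40.0], [2.0, 20.0, 5.0],
              [3.6, 90.0, 55.0], [2.4, 30.0, 10.0]])
y = np.array([0, 1, 0, 1])  # 1 = flagged as at risk in historical records

model = LogisticRegression().fit(X, y)

student = np.array([2.2, 25.0, 8.0])
print("risk probability:", model.predict_proba([student])[0, 1])
for name, contribution in sorted(zip(feature_names, model.coef_[0] * student),
                                 key=lambda item: -abs(item[1])):
    print(f"  {name}: contribution {contribution:+.2f}")
```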

Human Input

Automated decision-making systems are used in a variety of contexts, including policing, pretrial release, employment, and credit. These systems are often based on openly published risk assessment tools that use criminal history as a major input. The decisions they make perpetuate bias rooted in America's history of racism, and the tools themselves are flawed.

Policies calling for human overrides (allowing humans to disagree with an algorithm) seem like an attractive way to counter bias and ensure a quality decision. However, they fail to address the underlying issues that cause harm. People defer to automated systems and can be blind to their own biases, leading them to omit important considerations or place excessive weight on factors that the system highlights (Parasuraman & Manzey, 2010; Skitka et al., 1999).

Anti-discrimination laws should cover the use of automated decision systems that could result in adverse impacts on protected classes. They should require operators of these systems to conduct disparate impact assessments and publicly disclose the results.
