Tracing back to bias: Ohio State joins $1M NSF fairness in AI program

From arcade games and search engines to computer vision and facial recognition, advances in machine learning and artificial intelligence (AI) algorithms have driven countless scientific innovations.

However, AI is often built upon decades-old algorithms – and bias hidden in the past can very well affect future technologies.

The question now is: are these algorithms trustworthy?

To help find out, The Ohio State University has joined a collaborative $1M award from the National Science Foundation’s Program on Fairness in Artificial Intelligence (FAI). The FAI program was created with funding from NSF and Amazon, and the project specifically explores fairness in automated machine learning and artificial intelligence.

Parinaz Naghizadeh, an assistant professor in Ohio State's Integrated Systems Engineering and Electrical and Computer Engineering departments, is leading Ohio State's involvement in the program and explains the issue.

“Can we endorse algorithms for facial recognition, speech, ratings, or ranking, if they don’t apply to all segments of society when used in decision support systems?” she said.

Naghizadeh is co-leading a team on the FAI project, “Fairness in Machine Learning with Human in the Loop,” alongside Professor Yang Liu from the University of California Santa Cruz (lead), Professor Mingyan Liu from the University of Michigan, and Professor Ming Yin from Purdue University. Read more about the program, courtesy of UM.

The goal, Naghizadeh said, is to advance understanding of the long-term implications of automating decision-making using machine learning algorithms.

While research has examined the fairness of AI systems in the short term, the long-term consequences and impacts of automated decision-making remain unclear.

“For instance,” Naghizadeh said, “some of the existing algorithms used for predicting recidivism in U.S. courts have exhibited racial biases, and those used for job advertising have exhibited gender biases. In the long term, biased algorithms can reinforce pre-existing social injustices and increase the bias in the datasets that will be used for training future algorithms. Preventing these feedback loops and guaranteeing fairness is a legal and ethical imperative.”
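To get an intuition for the data feedback loop Naghizadeh describes, consider a minimal sketch (not drawn from the project itself) in which a decision rule is only retrained on the outcomes of the people it approves. The two groups, their rates, and the threshold below are all hypothetical and chosen purely for illustration.

```python
# Illustrative sketch of a data feedback loop: a model retrained only on the
# outcomes of approved applicants never corrects its biased estimate of the
# group it keeps rejecting. Groups, rates, and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Both hypothetical groups are equally qualified in truth...
true_rate = {"A": 0.6, "B": 0.6}
# ...but the initial training data understates group B (assumed historical bias).
estimated_rate = {"A": 0.6, "B": 0.45}

threshold = 0.5  # approve a group only if its estimated rate clears this bar

for round_ in range(5):
    new_estimates = {}
    for g, est in estimated_rate.items():
        if est >= threshold:
            # Approved group: fresh outcomes are observed, so the estimate improves.
            outcomes = rng.random(200) < true_rate[g]
            new_estimates[g] = 0.5 * est + 0.5 * outcomes.mean()
        else:
            # Rejected group: no new labels are collected, so the biased
            # estimate is carried unchanged into the next round of training.
            new_estimates[g] = est
    estimated_rate = new_estimates
    print(round_, {g: round(v, 3) for g, v in estimated_rate.items()})
```

In this toy run, the estimate for group A stays anchored to its true rate, while group B's understated estimate never gets the data it would need to recover, which is the self-reinforcing loop the quote warns about.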

Central to the project is the focus on “human in the loop.”

Naghizadeh said automated decision-making involves human participation throughout its life cycle: algorithms are trained using data collected from humans; they also make decisions that impact humans.

“We are specifically interested in accounting for human subjects whose behavior, participation incentives, and qualification states will evolve over time when facing these algorithms. This creates a decision-action feedback loop that informs and complicates the design of fair AI,” she said.
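A second minimal sketch, again purely illustrative rather than the team's model, shows the decision-action side of that loop: people's qualification levels respond to the decisions they receive, so the population the algorithm later evaluates is shaped by its own policy. The groups, starting scores, threshold, and update rule are all assumptions.

```python
# Illustrative decision-action feedback loop: acceptance builds qualification
# (e.g., a loan or job lets people improve), rejection slowly erodes it, so a
# fixed threshold drives the two hypothetical groups apart over time.
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical groups with slightly different starting qualification scores.
scores = {"A": rng.normal(0.55, 0.1, 1000), "B": rng.normal(0.50, 0.1, 1000)}
threshold = 0.55  # one fixed decision threshold applied to both groups

for t in range(5):
    for g, s in scores.items():
        accepted = s >= threshold
        s[accepted] += 0.02   # assumed dynamics: accepted individuals improve
        s[~accepted] -= 0.02  # rejected individuals lose ground
        scores[g] = np.clip(s, 0.0, 1.0)
    print(t, {g: round(float(s.mean()), 3) for g, s in scores.items()})
```

Even this crude update rule shows the group means diverging round after round, which is why the project argues that fair AI has to be designed with these evolving human responses in the loop.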

What must happen, according to their proposal, is to “drive the design of algorithms with an eye toward the welfare of both the makers and the users of these algorithms, with an ultimate goal of achieving more equitable outcomes.”

Story by Ryan Horns | Communications Specialist | @OhioStateECE | Horns.1@osu.edu