Future of Ohio State Computer Vision Research So Bright it Needs Shades

In the world of computer vision research, The Ohio State University is rising to Hollywood-level proportions.

Out of more than 6,500 research papers submitted from around the world, just 26 were nominated for the CVPR Best Paper Award and highlighted at the virtual international awards event in June.

The event is considered the premier annual international computer vision research conference.

Among the finalists this year were Ohio State Electrical and Computer Engineering Professor Aleix Martinez and his team for their research, “Computing the Testing Error without a Testing Set.”

The paper is co-authored with University of Barcelona colleagues Ciprian Corneanu and Sergio Escalera.


“These are the Oscars of computer vision,” Martinez said. “We did not win the award, but being nominated is already a great accomplishment.” 

According to the team’s abstract, Deep Neural Networks (DNNs) have revolutionized computer vision, advancing object recognition, facial expression analysis, and semantic segmentation, to name but a few.

To help explain this: artificial intelligence refers to a machine being programmed to make informed decisions on its own. This technology helps decide how much water goes into a clothes washer, or dispenses the right amount of cash at an ATM. Machine learning, in turn, involves programming a machine to predict the right outcomes by giving it access to relevant data.

Deep learning is a field within machine learning and artificial intelligence that uses algorithms inspired by the human brain to give machines intelligence without explicit programming. It has led to technology like Siri and Alexa, which offer real-time responses to questions, to autonomous vehicles, and even to apps that change your face to look like an elderly man or a cat.
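
To make that distinction concrete, here is a minimal, illustrative sketch, not code from the paper, of the machine-learning idea: rather than hand-coding a rule, a small model fits its parameters to example data. The data and model below are toy choices made purely for illustration.

    # Illustrative only: a tiny "learned" classifier, not code from the CVPR paper.
    # Instead of hard-coding a rule, the model fits its parameters to example data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data (hypothetical): 2D points, labeled by which side of a line they fall on.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    # Logistic-regression parameters, learned by gradient descent.
    w = np.zeros(2)
    b = 0.0
    lr = 0.1

    for _ in range(500):
        z = X @ w + b
        p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
        grad_w = X.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b

    # The "rule" was never written by hand; it was inferred from the examples.
    pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
    print("training accuracy:", np.mean(pred == y))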

According to the research, the current top DNN designs are mostly created by trial and error and lack consistency.

“Using a sequestered testing dataset may address this problem, but this requires a constant update of the dataset, a very expensive venture,” the abstract states.
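
For context, the conventional way to measure how well a model generalizes is to sequester a held-out test set and score the trained model on it; that is exactly the expensive ingredient the paper shows how to do without. The following is a minimal sketch of that standard practice, using toy data and an arbitrary 80/20 split chosen only for illustration.

    # Illustrative only: the standard held-out-test-set evaluation the paper
    # seeks to avoid. Toy data; split sizes are arbitrary choices for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] - X[:, 1] > 0).astype(float)

    # Sequester 20% of the data as a test set the model never trains on.
    split = int(0.8 * len(X))
    X_train, y_train = X[:split], y[:split]
    X_test, y_test = X[split:], y[split:]

    # Fit a simple linear classifier on the training portion only.
    w, *_ = np.linalg.lstsq(X_train, y_train - 0.5, rcond=None)
    test_pred = (X_test @ w > 0).astype(float)

    # Testing error: the quantity the paper estimates *without* such a held-out set.
    print("testing error:", np.mean(test_pred != y_test))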

In their paper, the team derived an algorithm that estimates the performance gap between training and testing without requiring any testing dataset.

“Specifically, we derive a number of persistent topology measures that identify when a DNN is learning to generalize to unseen samples,” the team explains. “This allows us to compute the DNN’s testing error on unseen samples, even when we do not have access to them. We provide extensive experimental validation on multiple networks and datasets to demonstrate the feasibility of the proposed approach.”
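
As a rough illustration of the flavor of such measures, and not the authors’ actual algorithm, the sketch below computes a zero-dimensional persistent-homology summary over correlations between hypothetical unit activations: clusters of functionally similar units merge as a distance threshold grows, and the merge distances summarize the network’s functional topology. All data and parameter choices here are assumptions made for illustration.

    # Illustrative only: a zero-dimensional persistent-homology summary over
    # correlations between hypothetical unit activations. A simplified stand-in
    # for the kind of "persistent topology measure" the paper describes,
    # not the authors' actual algorithm.
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical activations: rows = input samples, columns = network units.
    acts = rng.normal(size=(500, 32))

    # Functional distance between units: 1 - |correlation| of their activations.
    corr = np.corrcoef(acts.T)
    dist = 1.0 - np.abs(corr)

    # Union-find over edges sorted by distance (Kruskal-style). Each merge ends
    # the life of a connected component; the merge distances are the 0-dim
    # persistence death times of those components.
    parent = list(range(dist.shape[0]))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(
        (dist[i, j], i, j)
        for i in range(dist.shape[0])
        for j in range(i + 1, dist.shape[0])
    )

    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)

    # A scalar summary of the diagram; the paper relates such topological
    # summaries to the gap between training and testing performance.
    print("mean 0-dim death time:", float(np.mean(deaths)))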

Story by Ryan Horns ECE/IMR Communications Specialist | Horns.1@osu.edu | @OhioStateECE