Oh, Dokmanić win Google Faculty Research Awards

Written by Kim Gudeman, Coordinated Science Lab

Two CSL faculty members, Sewoong Oh and Ivan Dokmanić, have received 2016 Google Faculty Research Awards.  

Oh, an assistant professor of industrial and enterprise systems engineering at Illinois, won the award for his project, “Optimal Mechanism Design for Private Data Sharing.” As consumers use their phones in more ways – from browsing to ordering to paying for goods – they allow phone service providers, web browsers, and other companies growing access to sensitive data.

Oh’s work will focus on protecting private data by introducing “noise,” or randomness, into the data. By taking a mathematical approach to the problem, Oh is working to introduce smart noise that allows companies to make high-level inferences about the kind of content a user might want to see while withholding private data, such as a password or credit card number, that could identify an individual. A search engine such as Google, for example, might want to know the top 10 websites you visit so it can better target advertising to your interests. Oh’s method could inject noise into those logs at random, preventing the search engine from linking the results back to you personally.
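To make the idea concrete, here is a minimal sketch of one standard way such noise can be injected locally: a randomized-response scheme over a small set of sites. This is only an illustration of the general technique, not Oh’s mechanism; the site names, the privacy parameter, and the simulated users are all made up.

```python
import math
import random

# Illustrative randomized-response sketch (not Oh's actual mechanism):
# each user's report is randomized on-device, so the aggregator can still
# estimate how popular each site is without learning any individual's log.

SITES = ["news.example", "shop.example", "mail.example", "video.example"]
EPSILON = 1.0  # privacy budget: smaller means more noise and more privacy


def randomize_visit(true_site, epsilon=EPSILON):
    """Report the true site with probability p, otherwise a random other site."""
    k = len(SITES)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return true_site
    return random.choice([s for s in SITES if s != true_site])


def estimate_counts(reports, epsilon=EPSILON):
    """Unbiased estimate of the true visit counts from the noisy reports."""
    k, n = len(SITES), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)
    return {s: (reports.count(s) - n * q) / (p - q) for s in SITES}


if __name__ == "__main__":
    # Simulate 10,000 users whose true favorite site skews toward news.example.
    truth = random.choices(SITES, weights=[5, 2, 2, 1], k=10_000)
    noisy = [randomize_visit(t) for t in truth]
    print(estimate_counts(noisy))
```

Because the randomization happens before any report leaves the device, the aggregator can recover accurate totals without ever seeing a truthful individual log.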

“Consumers benefit a lot from sharing our data, but we also risk a lot too,” Oh said. “It’s our goal to introduce noise that’s smart and appropriate, so that the utility of the service is preserved and privacy is protected.”

Ivan Dokmanić, an assistant professor of electrical and computer engineering at Illinois, received the award for “Echonomy in Auditory Scene Analysis.”

Echoes are, more or less, copies of real sources; you can think of them as virtual sources that provide spatial diversity. In teleconferencing, for example, you may want to listen to someone talking two meters away, but a third person sitting in between is speaking on the phone and obscuring the desired speech signal. Dokmanić’s team is working on new signal processing methods for microphone arrays that listen to an echo of the person you want to hear, since that echo is not occluded by the interfering talker.
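One classical way to exploit such a virtual source is to steer a microphone-array beamformer at the talker’s mirror image across a reflecting wall instead of at the blocked direct path. The sketch below shows that idea with a simple delay-and-sum beamformer; the room geometry, array layout, and signals are hypothetical, and this is not a description of Dokmanić’s actual method.

```python
import numpy as np

# Minimal sketch (hypothetical geometry): steer a delay-and-sum beamformer at
# the *echo* of the desired talker, i.e. the talker's mirror image across a
# reflecting wall, instead of at the occluded direct path.

FS = 16_000          # sample rate (Hz)
C = 343.0            # speed of sound (m/s)

mics = np.array([[0.00, 0.0], [0.05, 0.0], [0.10, 0.0], [0.15, 0.0]])  # 4-mic line array
talker = np.array([2.0, 1.0])        # desired talker; direct path assumed blocked
wall_y = 3.0                          # reflecting wall at y = 3 m
virtual = np.array([talker[0], 2 * wall_y - talker[1]])  # mirror image = echo source


def steer_delays(source):
    """Per-microphone propagation delays (in samples) from a point source."""
    dists = np.linalg.norm(mics - source, axis=1)
    return (dists - dists.min()) / C * FS


def delay_and_sum(signals, delays):
    """Align each channel by its steering delay (integer samples here) and average."""
    out = np.zeros(signals.shape[1])
    for ch, d in zip(signals, delays):
        out += np.roll(ch, -int(round(d)))
    return out / len(delays)


if __name__ == "__main__":
    # Placeholder multichannel recording; in practice this comes from the array.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((len(mics), FS))
    enhanced = delay_and_sum(x, steer_delays(virtual))  # listen "through" the echo
    print(enhanced.shape)
```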

The work can also be applied to echo-aided source separation. Say you record two people talking in a room with several microphones and want to separate the recording into what each talker was saying.

“You can think about this situation as having not only two sources but actually having many ‘virtual’ sources, or echoes,” Dokmanić said. “So instead of looking at the problem as that of separating two sound streams, we actually aim to separate one group of sound streams, which are all identical since they correspond to echoes of a single stream, from another group of sound streams, which are also identical, but they all come from different points in space.”
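The observation behind this grouping can be seen in a toy experiment: streams that are delayed, attenuated copies of the same talker have a strong cross-correlation peak, while streams from different talkers do not. The sketch below is a made-up illustration of that observation, not the team’s separation algorithm.

```python
import numpy as np

# Toy illustration of the grouping idea (hypothetical setup): echoes of one
# talker are delayed, attenuated copies of the same stream, so their
# cross-correlation has a strong peak; streams from different talkers do not.

N = 2_000  # samples per stream
rng = np.random.default_rng(1)


def delayed(sig, delay):
    """Return sig delayed by `delay` samples, zero-padded at the front."""
    return np.concatenate([np.zeros(delay), sig[: len(sig) - delay]])


def peak_correlation(a, b):
    """Maximum normalized cross-correlation between two streams."""
    xc = np.correlate(a, b, mode="full")
    return float(np.max(np.abs(xc)) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Two independent talkers, each observed directly and through one echo.
talker_a = rng.standard_normal(N)
talker_b = rng.standard_normal(N)
streams = {
    "a_direct": talker_a,
    "a_echo": 0.6 * delayed(talker_a, 120),  # same content, delayed and attenuated
    "b_direct": talker_b,
    "b_echo": 0.5 * delayed(talker_b, 200),
}

# Pairs from the same talker correlate strongly (near 1.0);
# pairs from different talkers correlate weakly (near 0.0).
names = list(streams)
for i, m in enumerate(names):
    for n in names[i + 1:]:
        print(f"{m} vs {n}: {peak_correlation(streams[m], streams[n]):.2f}")
```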

Learn more about the Google Faculty Research Awards here.


This story was published May 23, 2017.