Q&A: Meet Hao Li, Computer Scientist with a Flair for Cinematic Effects

USC Viterbi School of Engineering’s Hao Li wants to create your future.

March 10, 2016 | Alicia Di Rado

USC computer scientist Hao Li blends math with creative flair, and you can see the results on screen. He credits the visual effects of Jurassic Park with inspiring him to get into programming, and during graduate school he worked at Industrial Light & Magic on an algorithm that helped filmmakers capture and reproduce facial expressions for Star Wars: Episode VII. Fresh off his inclusion in MIT Technology Review’s list of the world’s top innovators under 35, Li talked with USC Trojan Family Magazine about the challenges of mathematizing the world, his penchant for baking bread and more.

What’s your niche?

There’s a huge field in computer science called computer vision. A camera can take a picture of someone and the computer can recognize who it is, for example. What we’re doing differently is we’re using sensors that aren’t regular cameras—they’re cameras that can see in 3-D. And I create software that helps the computer understand what it’s seeing.
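
To make that concrete, here is a minimal sketch, not Li’s actual software, of the basic step a depth camera enables: turning a depth image into one 3-D point per pixel. The camera parameters below are made-up illustrative values.

```python
import numpy as np

# Hypothetical camera intrinsics for illustration only.
fx = fy = 525.0          # focal lengths in pixels
cx, cy = 319.5, 239.5    # principal point

depth = np.full((480, 640), 2.0)   # fake depth image: everything 2 m away

v, u = np.indices(depth.shape)     # per-pixel row (v) and column (u) indices
z = depth
x = (u - cx) * z / fx              # pinhole back-projection of each pixel
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1)
print(points.shape)                # (480, 640, 3): one 3-D point per pixel
```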

So you help computers transform data into moving pictures.

One of my projects is about facial capture. Traditionally in the film industry, people place markers on actors’ faces and translate those facial motions to computer-generated creatures. Like in Avatar—you can see the character behaving exactly like Zoe Saldana. Then people started asking, can we use machine perception techniques to do this in a smarter way? What we’re trying to do is build software that lets the computer figure out what it’s seeing and translate that facial animation to a character automatically, in real time. We’re working on that for bodies and for all kinds of things related to humans, too.
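
One common way to do this kind of transfer, offered here as a rough sketch rather than Li’s exact method, represents a face as a neutral shape plus weighted expression “blendshapes,” solves for the weights that explain a captured frame, and reuses those weights on another character. All data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vertices, n_shapes = 1000, 20

# Synthetic actor model: a neutral face plus "blendshape" expression deltas.
actor_neutral = rng.normal(size=(n_vertices, 3))
actor_shapes = rng.normal(size=(n_shapes, n_vertices, 3))

# Pretend this frame came from a 3-D sensor: neutral + two expressions.
true_w = np.zeros(n_shapes)
true_w[[3, 7]] = [0.8, 0.4]
captured = actor_neutral + np.tensordot(true_w, actor_shapes, axes=1)

# Solve least squares for the weights that best explain the captured frame.
B = actor_shapes.reshape(n_shapes, -1).T
w, *_ = np.linalg.lstsq(B, (captured - actor_neutral).ravel(), rcond=None)

# Reuse the same weights on a different character to transfer the expression.
creature_neutral = rng.normal(size=(n_vertices, 3))
creature_shapes = rng.normal(size=(n_shapes, n_vertices, 3))
creature_frame = creature_neutral + np.tensordot(w, creature_shapes, axes=1)
print(np.round(w[[3, 7]], 2))      # recovers roughly [0.8, 0.4]
```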

You recently had an unusual gig with Snickers that’s gone viral.

Yeah, I flew some students over to Korea and we set up in a university for this project. Snickers has this ad campaign where they say, “You’re not you when you’re hungry.” So we set up an HD television on the wall and one minute people passing by saw their reflection and the next they saw something very different. We had power outages, our code stopped working, our students had to hack things, and we worked on it for two days straight. It didn’t work until the last second. It was a great introduction for them.

Computer-generated characters today still look funny.

When we talk about human faces or emotions, everyone says we’re still in the “uncanny valley”—where things just don’t look right. Getting out of that valley is a question of accuracy. And it’s not just on the hardware side; it’s also on the algorithmic side. Our computer models need to be more advanced.

We try to re-create reality by defining models that simplify it. We have to find the closest formula that describes something. For example, if I want to re-create this flat table, that’s easy—it’s a plane. That’s just one equation. But if there’s a bump in it, you won’t see the bump with a simple equation. It gets a lot more complicated when you’re re-creating someone’s face.
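
Here is a toy numerical version of that point, with made-up data: fitting the single plane equation z = ax + by + c captures the flat table almost exactly but misses the bump entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 500))
z_flat = 0.1 * x - 0.2 * y + 1.0                          # a true plane
z_bump = z_flat + 0.3 * np.exp(-(x**2 + y**2) / 0.05)     # plane plus a bump

# Fit the single plane equation z = a*x + b*y + c by least squares.
A = np.column_stack([x, y, np.ones_like(x)])
for name, z in [("flat table", z_flat), ("table with a bump", z_bump)]:
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    print(name, "max error:", np.abs(A @ coeffs - z).max().round(3))
```

The flat table’s error is essentially zero; the bump leaves a residual no single plane can explain, which is the sense in which a face needs a far richer model.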

You said one of the next steps is to get computers to learn what makes a human, right?

We want to build a mechanism that can collect a lot of information from how humans behave and act, and then using this data we can improve our models—but also go beyond that. The premise is that if the computer can understand so much detail about what it sees, it potentially could tell you things about people, like detecting and reflecting really deep emotions.

A really good lie detector?

Obviously, right? The first thing to do is to see whether an expert can tell something—like if a person is lying—from whatever the camera is capturing. If so, the information must be there, and an algorithm could potentially classify whether a person is lying.

We’re also talking about cool, noninvasive ways to quantify someone’s health. We’re actually starting a project with Dr. [David] Agus, a cancer researcher at USC, on some pilot studies in this area. He has a lot of patients, and one thing that he can always tell as a doctor is when patients don’t look well. There must be a way for a computer to do this. Can we train the computer to see cues, like weight loss or sweating, for example?
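
A hedged sketch of that premise (if an expert can see the cue, a model may be able to learn it), using synthetic data and hypothetical stand-in features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
# Hypothetical per-clip measurements, e.g. blink rate, gaze shifts, fidgeting.
features = rng.normal(size=(n, 3))
# Synthetic expert labels that depend (noisily) on those measurements.
labels = (features @ np.array([1.0, -0.5, 0.8]) + rng.normal(0, 1, n)) > 0

clf = LogisticRegression()
scores = cross_val_score(clf, features, labels.astype(int), cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```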

Medicine is already using some of this 3-D camera technology.

There are applications in medicine for the 3-D reconstruction of surfaces. The idea is that we can locate interior features, like tumors, based on the deformation of your skin. So while someone is getting radiation treatment, you can actually track the position of their tumor, in real time, using a depth sensor—a sensor that can see in 3-D. It helps them get radiation where they need it.
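
One generic way such tracking can work, sketched here with synthetic data rather than any specific clinical system: rigidly align today’s depth-sensed skin surface to a reference scan and apply the recovered motion to the planned tumor position. Real systems also model non-rigid deformation; this sketch assumes pure rigid motion.

```python
import numpy as np

def kabsch(P, Q):
    """Best rigid rotation R and translation t mapping points P onto Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

rng = np.random.default_rng(2)
reference = rng.normal(size=(200, 3))        # skin surface at planning time
true_shift = np.array([0.0, 0.010, 0.004])   # patient moved ~1 cm (synthetic)
current = reference + true_shift             # today's depth-sensor frame

R, t = kabsch(reference, current)
tumor_ref = np.array([0.0, 0.0, 0.1])        # tumor position in reference scan
print("estimated tumor position now:", np.round(R @ tumor_ref + t, 4))
```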

On the visual effects side, will technology get so good that it’ll be impossible to tell fake video from real video?

We’re pretty close to that. There’s a demand for it. And if there’s demand, people will put money into it. It’s solvable. I don’t think it’s something that will take another 10 years.

You can already fake things. Sometimes you can’t really tell. Even I can’t tell. Take The Curious Case of Benjamin Button, which was very good. I think there are many effects you cannot see. At some point it’s going to be an app on your iPhone, right? [Laughs.]

Do you ever worry this technology can be used for evil?

Actually, I’m pretty sure it will be. But society adapts. Once you know the technology is out there, like Photoshop with photographs, you know that pictures can be altered.

Are you a math guy who’s into art, or an artist who’s into math?

Actually, when I was a kid I was more into art, even though I had a computer at a very early age. But then at some point it just switched. During my undergrad studies I wanted to focus on math and computer science-related topics and more theoretical things. I almost went into cryptography. [Laughs.] I think I got easily inspired by people who were really good in their field. I get addicted to things and excited about them really quickly.

But then the more artsy thing came back. I still want to do something that looks great.

Maybe you’re looking to solve problems.

Mmm. I’m not looking to solve problems, but I like to solve problems. I like to create things, like imagining how the future will look. I like the idea of defining how people are going to live in the future.

There must be something low-tech about you.

Oh, I cook all the time. I don’t even like to use any machines or anything that’s already finished. I like to start from the ground up. Well, I’m not hunting or anything like that. [Laughs.] But I made my own bread. I looked into making a baguette. I spent about eight hours doing it, just to get two baguettes. People just think, what’s wrong with you? You can just buy it. But really, it tastes different.

I don’t know, maybe some people do meditation, but I do this.

It’s also a creative process, right?

Exactly.

Being at USC now must be very different from being in the industry.

What’s nice about being in academia, I think, is that you’re sort of defining what’s coming in the next five years. I think this is maybe the biggest difference between academia and industry. In industry you need to sell something, but in academia you get to look farther ahead.