ROBOTS CAN PULL off a lot of righteous tricks. Hopping on one leg with ease, for instance. Or teaching themselves to play children’s games. Or even rolling through one of San Francisco’s most chaotic neighborhoods to deliver you falafel. One thing they definitely can’t do, though: see around corners.

But they just might soon. Because engineers at the MIT Computer Science and Artificial Intelligence Laboratory have developed a clever and surprisingly simple way to see around corners. And it’s all thanks to the hidden wonders of light.

Let’s pretend you’re standing in an L-shaped hallway, looking at the spot where the inside corner of the hallway meets the floor. You can’t see what’s around the corner, but you can see light from the other side splashed onto the floor at that right angle. So long as what’s over there isn’t a single point of light, like a flashlight, you won’t see one hard line of shadow. You’ll see a sort of gradient of not-quite-shadow—kind of a blurry shadow. This is known as the penumbra. (If you have a corner with suitable light, go take a look. It’ll be there.)

Your eyes can’t see it, but there’s a lot going on in this penumbra: It’s a reflection—a real-time, low-res view of the scene around the corner. This happens outdoors, too, thanks to light from the sun. Train a camera on this spot and magnify the color, and you can start to pick out different-colored pixels that correspond to objects otherwise obscured by the wall.
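That “magnify the color” step boils down to simple arithmetic: average many frames of the penumbra, subtract that average from the current frame, and exaggerate whatever is left. Here’s a minimal sketch of the idea in Python—the function name, the `gain` value, and the NumPy setup are all illustrative assumptions, not the MIT team’s actual code:

```python
import numpy as np

def magnify_penumbra(frame, mean_frame, gain=50.0):
    """Exaggerate tiny color deviations in the penumbra.

    frame, mean_frame: float arrays of shape (H, W, 3), values in [0, 1].
    mean_frame is the penumbra's average appearance over many frames.
    gain is a hypothetical tuning value, not from the study.
    """
    # The hidden scene shows up as minuscule color shifts from the average.
    deviation = frame - mean_frame
    # Scale those shifts up until they're visible, then clamp to valid range.
    amplified = mean_frame + gain * deviation
    return np.clip(amplified, 0.0, 1.0)
```

With a gain of 50, a color shift of one part in a thousand becomes a visible 5 percent change—enough to start picking out those differently colored pixels.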

If a person walks through wearing a bright red shirt, they reflect red light into the penumbra. “But more often, they block light from sunlight, so you’ll get this dark path because they’re blocking the bright light,” says imaging engineer Katie Bouman, lead author of a study detailing the tech.

You cannot see these movements with the naked eye, because the changes Bouman is tracking amount to just 0.1 percent of the light reflected into the penumbra. But what her recordings capture are the movements of people out of view. She’s seeing around corners.

It’s so simple, you can capture the image with a cheap webcam. “Because of this, it’s very computationally inexpensive, since you’re basically just doing a derivative,” Bouman says. “You’re just doing pixel differences, and so it works in real time.”
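The “derivative” Bouman describes is just the change between consecutive frames. A sketch of that pixel-differencing, again with illustrative names and a made-up `threshold` value rather than anything from the study:

```python
import numpy as np

def penumbra_motion(prev_frame, next_frame, threshold=0.002):
    """Per-pixel difference between consecutive penumbra frames.

    prev_frame, next_frame: float arrays of shape (H, W, 3) in [0, 1].
    threshold is a hypothetical tuning value. Returns the raw difference
    and a boolean mask of pixels that changed enough to suggest motion
    in the hidden scene (a dark streak means someone blocked the light).
    """
    diff = next_frame.astype(np.float64) - prev_frame.astype(np.float64)
    # Flag pixels where any color channel moved more than the threshold.
    moving = np.abs(diff).max(axis=-1) > threshold
    return diff, moving
```

One subtraction per frame is why the approach runs in real time on cheap hardware: there’s no heavy reconstruction, just watching which pixels twitch.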

The downside: The camera has to be stationary to catch these subtle changes in light. Getting the system to work from a moving camera is the technology’s real promise, and it’s what Bouman and her colleagues are working on now.

That could make self-driving cars even more powerful, for one. The lasers they use are great at building detailed maps of the world, but not so hot at seeing around obstacles. Autonomous wheelchairs, too, could benefit from seeing around corners in office buildings and on city sidewalks. Same with health care robots, which are already roaming the halls of hospitals. The power to see around corners could mean everything from fewer auto accidents to fewer crushed toes.

Most importantly, though, it could help get you that falafel without incident. Doesn’t hurt to dream.