Micro-cameras flex their way into the future of imaging

September 20, 2013

Imagine sticking a thin sheet of microscopic cameras to the surface of a car to provide a rear-view image, or wrapping that sheet around a pole to provide 360-degree surveillance of an intersection under construction.

A thin sheet of micro-cameras could fit where bulkier cameras cannot — and many small cameras working together could even rival high-end cameras’ image quality, according to two University of Wisconsin–Madison researchers.

Photo: Hongrui Jiang

Hongrui Jiang, a Vilas Distinguished Achievement Professor of electrical and computer engineering, and Li Zhang, an assistant professor of computer science, have received a $1 million National Science Foundation grant to develop smart micro-camera arrays mounted on thin, flexible polymer sheets.

Jiang and Zhang will focus not simply on making these cameras smaller and higher-quality, but also on developing algorithms that allow the cameras to change direction and focus both individually and collectively.

Like so many complex technological problems, this one comes down to making different disciplines work together.

“You can develop cameras and take whatever algorithm is available, or develop algorithms and try to optimize whatever camera is available,” Jiang says. “We want to tackle the problem from both ends and optimize not just one component, but the whole system.”

The polymer sheets, combined with the micro-cameras, will measure less than a centimeter thick. Whereas a traditional camera design must be bigger and bulkier to capture more light and increase its image quality, Jiang and Zhang propose to improve their image quality through a sort of “collective aperture” of many micro-cameras.
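One intuition behind a "collective aperture" is that many small sensors looking at the same scene can be fused to beat the noise floor of any one of them. The sketch below is only an illustration of that idea under simplifying assumptions (perfectly aligned frames, independent sensor noise), not the researchers' actual fusion algorithm; all names and numbers in it are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_micro_camera(scene, noise_sigma=0.05):
    """One micro-camera frame: the true scene plus independent sensor noise."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

scene = rng.random((64, 64))      # stand-in for the true scene radiance
n_cameras = 16                    # assumed array size for this sketch

# Naive fusion: average the aligned frames pixel by pixel. Independent noise
# shrinks by roughly sqrt(n_cameras), so the fused image approaches the
# signal-to-noise ratio of a single larger-aperture camera.
frames = [simulate_micro_camera(scene) for _ in range(n_cameras)]
fused = np.mean(frames, axis=0)

single_err = np.std(frames[0] - scene)
fused_err = np.std(fused - scene)
print(f"single-camera noise: {single_err:.4f}, fused noise: {fused_err:.4f}")
```

In practice the frames from a flexible sheet would not be perfectly aligned, so a real system would need registration before fusion; the point here is only the noise-averaging benefit of many small apertures working together.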

“The cameras can also coordinate to capture the whole scene,” Jiang says. “The algorithm will decide what the cameras look at and the cameras’ focus plane. We’re not just talking about the image processing itself. We’re also talking about the control of the camera array.”

Zhang’s focus is on figuring out how to control the orientation of the cameras. By manipulating them through computation, he hopes to maximize the collective potential of small cameras that would be rather weak on their own.

“These small cameras can’t have components like zoom lenses, but if I control them to aim at a specific thing, we can use computation plus the individual camera movements in a way that emulates zooming,” Zhang says.
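To make the "aim many cameras at one thing" idea concrete, the following sketch computes the pan and tilt angles each micro-camera on a flat sheet would need in order to point at a shared target. The layout, spacing, and function names are assumptions for illustration, not the project's API; aiming the array at one region is only the first step toward emulating zoom by combining the overlapping views.

```python
import numpy as np

def pan_tilt_to_target(camera_pos, target):
    """Return (pan, tilt) in radians that point a camera at `target`.

    pan  = rotation about the vertical (y) axis, measured in the x-z plane
    tilt = elevation above the x-z plane
    """
    d = np.asarray(target, float) - np.asarray(camera_pos, float)
    pan = np.arctan2(d[0], d[2])                   # left/right
    tilt = np.arctan2(d[1], np.hypot(d[0], d[2]))  # up/down
    return pan, tilt

# Assumed layout: a 3x3 patch of micro-cameras spaced 5 mm apart on a flat sheet.
spacing = 0.005
camera_positions = [(i * spacing, j * spacing, 0.0)
                    for i in range(3) for j in range(3)]
target = (0.05, 0.02, 1.0)  # a point roughly 1 m in front of the sheet

for pos in camera_positions:
    pan, tilt = pan_tilt_to_target(pos, target)
    print(f"camera at {pos}: pan={np.degrees(pan):6.2f} deg, "
          f"tilt={np.degrees(tilt):6.2f} deg")
```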

He says the arrays ultimately could do things that conventional cameras can’t do at all — for example, focusing simultaneously on different objects at different distances. That’s because the image data the camera array captures contains 3-D depth information, which can make otherwise fragile image-recognition algorithms more powerful.
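The depth information comes from parallax: two cameras a known distance apart see the same object at slightly different image positions, and that disparity encodes distance. The snippet below shows the standard rectified-stereo relation Z = f·B/d with made-up example numbers; it is a textbook illustration of where the 3-D information comes from, not the project's reconstruction code.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair (pinhole camera model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px

# Assumed example: 500 px focal length, 1 cm baseline between two micro-cameras,
# 4 px disparity -> the object is about 1.25 m away.
print(depth_from_disparity(focal_px=500, baseline_m=0.01, disparity_px=4))
```

With many cameras rather than two, the same principle yields disparities across the whole array, which is why the captured data can support refocusing and more robust image recognition.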

The cameras could work well in cramped environments, serving, for instance, as thin video cameras attached to a vehicle or installed in a medical treatment setting.

“Space is always going to be an issue,” Jiang says. “You can’t always afford to mount a huge camera in a given room.”

The researchers aim to make the design more cost-effective than existing camera technology, because it is cheap to mass-produce small cameras. And by making high-quality imaging possible in tight spaces, Jiang and Zhang may create a whole host of new uses for cameras.

Scott Gordon
