Event camera based on human eye's motion improves robot vision

09 Jul 2024

University of Maryland device mimics involuntary movements to stabilize images.

A project at the University of Maryland (UMD) has taken inspiration from the natural world for a camera that could improve robot vision.

Bioinspired vision systems have already taken many forms, including cameras mimicking a fly's multiview vision and sensors designed to match the polarization sensitivity of the mantis shrimp.

The UMD project has instead based its camera on the behavior of the human eye, in particular the small involuntary eye movements called microsaccades.

These movements can be a hurdle in some medical imaging procedures, such as close examination of the retina, but UMD has turned them to its advantage. The results were published in Science Robotics.

UMD's breakthrough is built around an event camera, a form of vision system designed to capture per-pixel changes in intensity rather than full frames at fixed intervals. The principle behind event cameras was itself inspired by the way the human eye perceives images, so the UMD project is building on connected lines of research.
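As a rough illustration of that principle (not UMD's hardware or code), the short Python sketch below emulates an event sensor in software: it compares two frames and reports an event only at pixels whose log-intensity changed by more than a contrast threshold. The function name and threshold value are invented for the example.

# Minimal sketch of the event-camera principle: per-pixel change
# detection instead of full frames. Illustrative only.
import numpy as np

def events_from_frames(prev_frame, new_frame, threshold=0.2):
    """Return (y, x, polarity) for pixels whose log-intensity changed
    by more than `threshold` between two frames."""
    eps = 1e-6  # avoid log(0) on dark pixels
    delta = np.log(new_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(delta) > threshold)
    pols = np.sign(delta[ys, xs])  # +1 = brighter, -1 = darker
    return [(int(y), int(x), int(p)) for y, x, p in zip(ys, xs, pols)]

# Example: one pixel brightening produces a single positive event.
a = np.full((4, 4), 0.5)
b = a.copy()
b[2, 1] = 0.9
print(events_from_frames(a, b))  # one event at row 2, column 1

Because only changing pixels report, the output is sparse and can be timestamped with very fine resolution, which is what makes event cameras well suited to tracking fast motion.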

"Event cameras are a relatively new technology better at tracking moving objects than traditional cameras, but today's event cameras struggle to capture sharp, blur-free images when there's a lot of motion involved," said UMD's Botao He.

"It's a big problem, because robots and many other technologies, such as self-driving cars, rely on accurate and timely images to react correctly to a changing environment. So we asked ourselves: How do humans and animals make sure their vision stays focused on a moving object?"

Microsaccade movements turn out to be part of the solution. These small, quick eye movements occur involuntarily when a person fixes their gaze; they counteract the phenomenon of "perceptual fading" and continually refresh the visual textures being perceived. This keeps parameters such as image depth, color and shadowing more accurate over time.

Smoother and more realistic motion

UMD modelled this approach in a camera system christened the Artificial Microsaccade-Enhanced Event Camera (AMI-EV), in which a rotating wedge prism mounted in front of the aperture redirects light beams captured by the device's lens.

The continuous rotation of the prism simulates the movements naturally occurring within a human eye, allowing the camera to stabilize the textures of a recorded object just as a human would, according to UMD. The team then developed software to compensate for the prism's movement within the AMI-EV, consolidating stable images from the shifting light.
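The Science Robotics paper details the actual compensation algorithm; the sketch below only illustrates the idea under an assumed geometry (the function name, rotation rate and offset radius are invented for the example). If the spinning prism shifts the image along a circle of known radius and phase, each event's timestamp tells you the prism angle at that instant, so the induced offset can simply be subtracted.

# Schematic sketch of prism-motion compensation; assumed geometry,
# not UMD's published code. A wedge prism spinning at angular speed
# `omega` (rad/s) displaces the image by `r_px` pixels along a circle.
import math

def stabilize_event(x, y, t, omega=2 * math.pi * 50.0, r_px=12.0, phase0=0.0):
    """Map an event at pixel (x, y) and time t (s) back to
    stabilized coordinates by undoing the prism-induced shift."""
    theta = omega * t + phase0   # prism rotation angle at event time
    dx = r_px * math.cos(theta)  # image offset caused by the prism
    dy = r_px * math.sin(theta)
    return x - dx, y - dy

# Example: events from the same scene point, half a revolution apart,
# map back to (nearly) the same stabilized coordinate.
print(stabilize_event(100.0, 60.0, t=0.000))  # -> (88.0, 60.0)
print(stabilize_event(76.0, 60.0, t=0.010))   # -> (~88.0, ~60.0)

The design choice this highlights: because the prism's motion is deliberate and precisely known, it adds texture-refreshing "microsaccades" without adding the unpredictability of real scene motion, so the software correction is exact in principle.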

In trials, AMI-EV was said to capture and display movement accurately in a variety of contexts, including human pulse detection and identification of rapidly moving shapes. AMI-EV captured motion at tens of thousands of frames per second, outperforming most commercially available cameras, which typically capture 30 to 1,000 frames per second, said the team.

This smoother and more realistic depiction of motion could prove to be pivotal in multiple applications, from creating more immersive augmented reality experiences and better security monitoring to improving how astronomers capture images in space.

"Our novel camera system can solve many specific problems, like helping a self-driving car figure out what on the road is a human and what isn't," said Yiannis Aloimonos of UMD.

"As a result, it has many applications that much of the general public already interacts with, like autonomous driving systems or even smartphone cameras. We believe that our novel camera system is paving the way for more advanced and capable systems to come."
