Upcoming:
First Student Fellowship Seminar
with Botao He (UMD, College Park) and Friedhelm Hamann (TU Berlin)
Thursday, April 25th, 2024, 11:00 am – 12:00 pm ET, 17:00 – 18:00 CEST
Microsaccade-inspired Event Camera for Robotics
Abstract: Neuromorphic vision sensors, or event cameras, enable visual perception with extremely low reaction time, opening new avenues for high-dynamic robotics applications. An event camera’s output depends on both motion and texture. However, the camera fails to capture object edges that are parallel to the camera motion. This problem is intrinsic to the sensor and therefore challenging to solve algorithmically. Human vision deals with perceptual fading through an active mechanism of small involuntary eye movements, the most prominent of which are called microsaccades. By constantly and slightly moving the eyes during fixation, microsaccades maintain texture stability and persistence. Inspired by microsaccades, we designed an event-based perception system capable of simultaneously maintaining low reaction time and stable texture. In this design, a rotating wedge prism is mounted in front of the aperture of an event camera to redirect light and trigger events. The geometrical optics of the rotating wedge prism allows for algorithmic compensation of the additional rotational motion, resulting in a stable texture appearance and high informational output independent of external motion. The hardware device and software solution are integrated into a system, which we call the Artificial MIcrosaccade-enhanced EVent camera (AMI-EV). Benchmark comparisons validate the superior data quality of AMI-EV recordings in scenarios where both traditional RGB and event cameras fail to deliver. Various real-world experiments demonstrate the system’s potential to facilitate robotics perception for both low-level and high-level vision tasks.
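The compensation step can be pictured with a simple first-order model: a rotating wedge prism displaces the image along a circle of fixed radius, so knowing the prism angle at each event timestamp lets one subtract that displacement. The sketch below illustrates this idea only; the deviation radius, rotation rate, and function name are illustrative assumptions, not the actual AMI-EV implementation.

    import numpy as np

    # Illustrative parameters (not from the talk): we assume the prism shifts
    # the image along a circle of this radius, at this rotation rate.
    DEVIATION_PX = 12.0        # assumed radius of the prism-induced image shift
    OMEGA = 2 * np.pi * 50.0   # assumed prism rotation rate in rad/s (50 Hz)

    def compensate_events(x, y, t, phase0=0.0):
        """Map event coordinates back to the stabilized (prism-free) frame.

        x, y : arrays of event pixel coordinates
        t    : array of event timestamps in seconds
        """
        theta = OMEGA * t + phase0            # prism angle at each event time
        x_c = x - DEVIATION_PX * np.cos(theta)
        y_c = y - DEVIATION_PX * np.sin(theta)
        return x_c, y_c

Because the prism motion is known exactly, the subtraction is deterministic; under this simplified model, the compensated events then encode scene texture regardless of how the camera itself moves.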
Bio: Botao He is a Ph.D. student in the Perception and Robotics Group (PRG) at the University of Maryland, College Park, advised by Prof. Yiannis Aloimonos and Dr. Cornelia Fermüller. His research interests are event-based robot perception, visual perception, and motion planning for mobile robots. He is especially interested in systematic solutions that make robotic systems work reliably in real-world field scenarios.
Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation
Abstract: Current optical flow and point-tracking methods rely heavily on synthetic datasets. Event cameras are novel vision sensors that offer advantages in challenging visual conditions, yet state-of-the-art frame-based methods cannot be easily adapted to event data because of the limitations of current event simulators. This talk investigates how to estimate a scene’s complex, dense motion from event data without relying on synthetic training data. To this end, we introduce a novel self-supervised loss that combines the Contrast Maximization framework with a non-linear motion prior in the form of pixel-level trajectories, and we propose an efficient solution for the high-dimensional assignment problem between non-linear trajectories and events. The effectiveness of the approach is demonstrated in two scenarios: (i) in dense continuous-time motion estimation, our method improves the zero-shot performance of a synthetically trained model on the real-world dataset EVIMO2 by 29%; and (ii) in optical flow estimation, our method elevates a simple UNet to state-of-the-art performance among self-supervised methods on the DSEC optical flow benchmark.
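As a rough illustration of the Contrast Maximization idea underlying this loss, the sketch below warps events along a candidate motion and scores the sharpness (variance) of the resulting image of warped events: well-aligned events pile up on edges and raise the contrast. It uses the classic linear-motion variant with assumed image dimensions and nearest-pixel accumulation, not the non-linear pixel-level trajectories or the assignment scheme presented in the talk.

    import torch

    def contrast_loss(events, flow, height=480, width=640):
        """Negative variance of the image of warped events (IWE).

        events : (N, 3) tensor of (x, y, t) per event
        flow   : (N, 2) per-event motion in pixels/second, e.g. sampled
                 from a dense flow field predicted by a network
        """
        x, y, t = events[:, 0], events[:, 1], events[:, 2]
        dt = t - t.min()
        # Warp each event back to the reference time along its motion vector.
        xw = (x - flow[:, 0] * dt).round().long().clamp(0, width - 1)
        yw = (y - flow[:, 1] * dt).round().long().clamp(0, height - 1)
        # Accumulate warped events into an image (nearest pixel for brevity;
        # a differentiable version would use bilinear voting instead).
        iwe = torch.zeros(height * width)
        iwe.index_add_(0, yw * width + xw, torch.ones_like(dt))
        return -iwe.var()  # lower loss = sharper IWE = better-aligned motion

Because the loss is computed from the events themselves, no ground-truth flow or synthetic training data is required, which is what makes the approach self-supervised.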
Bio: Friedhelm Hamann has been a PhD student at Technical University Berlin and the “Science of Intelligence” research cluster since 2022, working with Prof. Guillermo Gallego. He also spent part of his PhD at the University of Pennsylvania with Prof. Kostas Daniilidis. Previously, he obtained a Master’s degree in Computer Engineering and Information Technology from RWTH Aachen University. His research interests include computer vision, machine learning, and signal processing, with a focus on algorithmic advances for event-based cameras.
Previous:
Forum on Neuromorphic Control
Tuesday, May 16, 2023, 15:00 – 16:30 ET, 19:00 – 20:30 GMT
with Rodolphe Sepulchre (University of Cambridge and KU Leuven), Maurice Heemels (Eindhoven University of Technology), and Jorge Cortés (University of California, San Diego)
Forum on Neuromorphic Navigation
Tuesday, April 18, 2023, 19:00 – 20:30 GMT
with Andrew Davison (Imperial College London), Kostas Daniilidis (University of Pennsylvania), and Michael Milford (Queensland University of Technology)