Every agent, whether animal or robotic, needs to process its visual sensory input efficiently to allow understanding of, and interaction with, the environment. The process of filtering relevant information out of the continuous bombardment of complex sensory data is called selective attention. Visual attention is the result of a complex interplay between bottom-up and top-down mechanisms that perceptually organise and understand the scene. Giulia will describe how to approach visual attention using bio-inspired models that emulate the human visual system, allowing robots to interact with their surroundings.
Giulia D’Angelo is a postdoctoral researcher in neuroengineering in the EDPR laboratory at the Italian Institute of Technology. She obtained a B.Sc. in biomedical engineering and an M.Sc. in neuroengineering, developing a neuromorphic visual system at King’s College London. She successfully defended her Ph.D. viva in 2022 at the University of Manchester, proposing a biologically plausible model for event-driven, saliency-based visual attention. She is currently working on bio-inspired visual algorithms exploiting neuromorphic platforms.