BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.16//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.neuropac.info
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20220101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230627T080000
DTEND;TZID=America/Los_Angeles:20230627T090000
DTSTAMP:20260415T194732Z
CREATED:20230626T220627Z
LAST-MODIFIED:20230626T220627Z
UID:10000238-1687852800-1687856400@www.neuropac.info
SUMMARY:INRC Forum: Robert Legenstein
DESCRIPTION:Memory-enriched computation and learning through synaptic and non-synaptic plasticity\nAbstract: Virtually any task faced by humans has a temporal component and therefore demands some form of memory. Consequently\, a variety of memory systems and mechanisms have been shown to exist in the brain of humans and other animals. These memory systems operate on a multitude of time scales\, from seconds to years. Yet\, it is still not well understood how memory is implemented in the brain and how cortical neuronal networks utilize these systems for computation. In this talk\, I will present some recent models that extend (spiking and non-spiking) neural network models with memory using Hebbian and non-Hebbian types of plasticity. I will discuss the similarities between these models and transformers\, arguably the most powerful models for sequence processing in the area of machine learning. I will show that Hebbian plasticity can significantly increase the computational and learning capabilities of spiking neural networks. Further\, I will show how neurons with non-synaptic plasticity can be utilized for memory and how networks of such neurons can be trained without the need to backpropagate errors through time. \nBio: Dr. Robert Legenstein received his PhD in computer science from the Graz University of Technology\, Graz\, Austria\, in 2002. He is a full professor at the Department of Computer Science\, TU Graz\, head of the Institute for Theoretical Computer Science\, and leads the Graz Center for Machine Learning. Dr. Legenstein has served as associate editor of IEEE Transactions on Neural Networks and Learning Systems (2012-2016). He is an action editor for Transactions on Machine Learning Research\, and he was on the program committee for NeurIPS and ICLR several times. His primary research interests are learning in models for biological networks of neurons and neuromorphic hardware\, probabilistic neural computation\, novel brain-inspired architectures for computation and learning\, and memristor-based computing concepts. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-robert-legenstein/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230626
DTEND;VALUE=DATE:20230629
DTSTAMP:20260415T194732Z
CREATED:20230129T163038Z
LAST-MODIFIED:20230129T163038Z
UID:10000018-1687737600-1687996799@www.neuropac.info
SUMMARY:tinyML EMEA Innovation Forum 2023
DESCRIPTION:The tinyML EMEA Innovation Forum is accelerating the adoption of tiny machine learning across the region by connecting the efforts of the private sector with those of academia in pushing the boundaries of machine learning and artificial intelligence on ultra-low powered devices.
URL:https://www.neuropac.info/event/tinyml-emea-innovation-forum-2023/
LOCATION:Marriott Hotel\, Amsterdam\, Netherlands
CATEGORIES:Conference,Discussion,Workshop
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230625
DTEND;VALUE=DATE:20230715
DTSTAMP:20260415T194732Z
CREATED:20230129T155215Z
LAST-MODIFIED:20230129T155215Z
UID:10000012-1687651200-1689379199@www.neuropac.info
SUMMARY:Telluride Neuromorphic Cognition Engineering Workshop
DESCRIPTION:The workshop is a 3-week project-based meeting organized around specific topic areas to bring the organizing principles of neural cognition into machine intelligence\, and to use lessons and technology from machine intelligence to understand how brains work.
URL:https://www.neuropac.info/event/telluride-neuromorphic-cognition-engineering-workshop/
LOCATION:TBA
CATEGORIES:Workshop
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230620T080000
DTEND;TZID=America/Los_Angeles:20230620T090000
DTSTAMP:20260415T194732Z
CREATED:20230618T010420Z
LAST-MODIFIED:20230618T010420Z
UID:10000237-1687248000-1687251600@www.neuropac.info
SUMMARY:INRC Forum: Wolfgang Maass\, Christoph Stoeckl & Yukun Yang
DESCRIPTION:Local prediction-learning in high-dimensional spaces enables neural networks to plan\nAbstract: Being able to plan a sequence of actions in order to reach a goal\, or more generally to solve a problem\, is a cornerstone of higher brain function. But compelling models which explain how the brain can achieve that are missing. We show that local synaptic plasticity enables a neural network to create high-dimensional representations of actions and sensory inputs so that they encode salient information about their relationship. In fact\, it can create a cognitive map that reduces planning to a simple geometric problem in a high-dimensional space that can easily be solved by a neural network. This method also explains how self-supervised learning enables a neural network to control a complex muscle system so that it can handle locomotion challenges that never occurred during learning. The underlying learning strategy bears some similarity to self-attention networks (Transformers). But it does not require non-local learning rules or very large datasets. Hence it is suitable for implementation in highly energy-efficient neuromorphic hardware\, in particular for on-chip learning on Loihi 2.\nOne goal of our presentation will be to initiate discussions about the relation of this learning-based use of large vectors to other VSA approaches\, its relation to Transformers\, and possible applications in robotics. \nBio: Wolfgang Maass is a Professor of Computer Science at Technische Universität Graz. He received his PhD (1974) and Habilitation (1978) in Mathematics from Ludwig-Maximilians-Universität in Munich. He conducted research at MIT\, the University of Chicago\, and UC Berkeley\, as a Heisenberg Fellow of the Deutsche Forschungsgemeinschaft. He has been the Editor of Machine Learning (1995-1997)\, Archive for Mathematical Logic (1987-2000)\, and Biological Cybernetics (2006-present). He was also a Sloan Fellow at the Computational Neurobiology Lab of the Salk Institute in La Jolla\, California from 1997-1998. Since 2005\, he has been an Adjunct Fellow of the Frankfurt Institute of Advanced Studies (FIAS).\nChristoph Stoeckl is a postdoctoral researcher at Technische Universität Graz working at the intersection of computational neuroscience and AI. His research interests include neuromorphic hardware as well as exploring connections between Transformers and neural networks. Before joining the research lab of Prof. Maass\, he obtained a Master’s degree in Computer Science\, also at TU Graz.\nYukun Yang is a 1st-year Doctoral Student at Technische Universität Graz\, supervised by Prof. Wolfgang Maass. His primary research interest is at the intersection of AI and neuroscience\, with a focus on discovering the learning principles of the brain and its neuromorphic applications. Before joining TU Graz\, he earned an M.S. in the ECE Department at Duke University in 2020. Earlier\, he received a B.E. in Information Engineering from Xi’an Jiaotong University in 2018. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-tu-graz/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230619
DTEND;VALUE=DATE:20230620
DTSTAMP:20260415T194732Z
CREATED:20230129T163545Z
LAST-MODIFIED:20230129T163545Z
UID:10000019-1687132800-1687219199@www.neuropac.info
SUMMARY:Workshop on Event-based Vision @ CVPR 2023
DESCRIPTION:4th International Workshop on Event-Based Vision. \nHeld in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition 2023\, as part of the track: CV for non-traditional modalities\n\nThis workshop is dedicated to event-based cameras\, smart cameras\, and algorithms processing data from these sensors. Event-based cameras are bio-inspired sensors with the key advantages of microsecond temporal resolution\, low latency\, very high dynamic range\, and low power consumption. Because of these advantages\, event-based cameras open frontiers that are unthinkable with standard frame-based cameras (which have been the main sensing technology for the past 60 years). These revolutionary sensors enable the design of a new class of algorithms to track a baseball in the moonlight\, build a flying robot with the agility of a bee\, and perform structure from motion in challenging lighting conditions and at remarkable speeds. These sensors became commercially available in 2008 and are slowly being adopted in computer vision and robotics. In recent years they have received attention from large companies\, e.g.\, the event-sensor company Prophesee collaborated with Intel and Bosch on a high spatial resolution sensor\, Samsung announced mass production of a sensor to be used on hand-held devices\, and they have been used in various applications on neuromorphic chips such as IBM’s TrueNorth and Intel’s Loihi. The workshop also considers novel vision sensors\, such as pixel processor arrays (PPAs)\, which perform massively parallel processing near the image plane. Because early vision computations are carried out on-sensor\, the resulting systems have high speed and low-power consumption\, enabling new embedded vision applications in areas such as robotics\, AR/VR\, automotive\, gaming\, surveillance\, etc. This workshop will cover the sensing hardware\, as well as the processing and learning methods needed to take advantage of the above-mentioned novel cameras.
URL:https://www.neuropac.info/event/workshop-on-event-based-vision-cvpr-2023/
LOCATION:Vancouver\, Canada
CATEGORIES:Conference,Workshop
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230618
DTEND;VALUE=DATE:20230624
DTSTAMP:20260415T194732Z
CREATED:20230129T221059Z
LAST-MODIFIED:20230129T221059Z
UID:10000021-1687046400-1687564799@www.neuropac.info
SUMMARY:International Joint Conference on Neural Networks (IJCNN)
DESCRIPTION:The International Joint Conference on Neural Networks is organized jointly by the International Neural Network Society and the IEEE Computational Intelligence Society\, and is the premier international meeting for researchers and other professionals in neural networks and related areas. \nEach year\, the conference features invited plenary talks by world-renowned speakers in the areas of neural network theory and applications\, computational neuroscience\, robotics\, and distributed intelligence. In addition to regular technical sessions with oral and poster presentations\, the conference program will include special sessions\, competitions\, tutorials\, and workshops on topics of current interest.
URL:https://www.neuropac.info/event/international-joint-conference-on-neural-networks-ijcnn/
LOCATION:Gold Coast Convention and Exhibition Centre\, Queensland\, Australia
CATEGORIES:Conference
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230618
DTEND;VALUE=DATE:20230623
DTSTAMP:20260415T194732Z
CREATED:20230129T222319Z
LAST-MODIFIED:20230129T222319Z
UID:10000023-1687046400-1687478399@www.neuropac.info
SUMMARY:Computer Vision and Pattern Recognition Conference (CVPR) 2023
DESCRIPTION:The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR) is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost\, it provides an exceptional value for students\, academics and industry researchers.
URL:https://www.neuropac.info/event/computer-vision-and-pattern-recognition-conference-cvpr-2023/
LOCATION:Vancouver Convention Center\, Vancouver\, Canada
CATEGORIES:Conference
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230615T163000
DTEND;TZID=UTC:20230615T173000
DTSTAMP:20260415T194732Z
CREATED:20230127T222256Z
LAST-MODIFIED:20230828T170621Z
UID:10000045-1686846600-1686850200@www.neuropac.info
SUMMARY:Theory of Neuromorphic Computing
DESCRIPTION:Recurring discussion meeting by researchers interested in the theory of neuromorphic computing. \nHosted by Arne Diehl and Johan Kwisthout of Radboud University. To join the meetings\, please contact Arne Diehl: arne.diehl@donders.ru.nl.
URL:https://www.neuropac.info/event/theory-of-neuromorphic-computing/2023-06-15/
LOCATION:Online
CATEGORIES:Discussion
ORGANIZER;CN="Arne Diehl":MAILTO:arne.diehl@donders.ru.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230611
DTEND;VALUE=DATE:20230614
DTSTAMP:20260415T194732Z
CREATED:20230129T222826Z
LAST-MODIFIED:20230129T222826Z
UID:10000025-1686441600-1686700799@www.neuropac.info
SUMMARY:International Conference on Artificial Intelligence Circuits and Systems (AICAS) 2023
DESCRIPTION:The entire world\, and China in particular\, is massively investing in AI. China is hosting large ecosystems in AI\, as well as numerous conferences. Most of these activities are software-oriented. Top universities\, academies\, and institutes are bringing support to motivate scientists to contribute. IEEE AICAS 2023 is intended to fill the large hardware gap. \nAICAS 2023 is currently planned as a hybrid event with in-person presentations along with an option for remote attendees. Speakers should plan to present in person at AICAS 2023. The safety of our speakers and audience remains a priority concern. We will monitor global pandemic conditions and update and adjust the conference format if needed. \nThe venue is in Hangzhou\, an ancient city with a history of 2200 years and one of the seven ancient capitals of China\, located 200 km from Shanghai. Hangzhou is the center of science\, education\, and culture of Zhejiang Province\, and is a key national tourism city. Hangzhou is also renowned as “A Paradise on Earth”\, with its widely known West Lake scenic area\, one of the most attractive tourism regions in China. \nThe AICAS’23 conference will be held in one of the best 5-star hotels in the center of the city\, within walking distance of the subway. The surrounding area also offers cinemas\, supermarkets\, restaurants\, and entertainment.
URL:https://www.neuropac.info/event/international-conference-on-artificial-intelligence-circuits-and-systems-aicas-2023/
LOCATION:Hangzhou\, China
CATEGORIES:Conference
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230606T080000
DTEND;TZID=America/Los_Angeles:20230606T090000
DTSTAMP:20260415T194732Z
CREATED:20230606T211025Z
LAST-MODIFIED:20230606T211025Z
UID:10000236-1686038400-1686042000@www.neuropac.info
SUMMARY:INRC Forum: Kenneth Stewart
DESCRIPTION:Emulating Brain-like Rapid Learning in Neuromorphic Edge Computing\nAbstract: Achieving real-time\, personalized intelligence at the edge with learning capabilities holds enormous promise to enhance our daily experiences and assist in decision-making\, planning\, and sensing. Yet\, today’s technology encounters difficulties with efficient and reliable learning at the edge\, due to a lack of personalized data\, insufficient hardware\, and the inherent challenges posed by online learning. Over time and across multiple developmental phases\, the brain has evolved to incorporate new knowledge by efficiently building on previous knowledge. We seek to emulate this remarkable process in digital neuromorphic technology through two interconnected stages of learning.\nInitially\, a meta-training phase fine-tunes the learning hardware’s hyperparameters for few-shot learning by deploying a differentiable simulation of three-factor learning in a neuromorphic chip. This meta-training process refines the synaptic plasticity and related hyperparameters to align with the specific dynamics inherent in the hardware and the given task domain. During the subsequent deployment stage\, these optimized hyperparameters enable accurate learning of new classes using the local three-factor synaptic plasticity updates.\nWe demonstrate our approach using event-driven vision sensor data and the Intel Loihi neuromorphic processor and the associated plasticity dynamics\, achieving state-of-the-art accuracy in learning new categories in one-shot in real-time among three task domains. Our methodology is versatile and can be applied to situations demanding quick learning and adaptation at the edge\, such as navigating unfamiliar environments or learning unexpected categories of data through user engagement. \nBio: Kenneth Stewart is a final-year Ph.D. candidate in Computer Science at the University of California\, Irvine\, advised by professors Emre Neftci\, Nikil Dutt\, and Jeffery Krichmar. Throughout his Ph.D.\, Kenneth has investigated adaptive learning algorithms with Spiking Neural Networks that can be applied in Neuromorphic hardware for online\, on-chip learning. During his Ph.D.\, Kenneth has published several papers in the area and was a candidate for the IEEE AICAS’20 best paper award. In addition to papers\, Kenneth co-authored patents regarding adaptive edge learning for gesture and speech recognition applications with the Accenture Future Tech Lab. Kenneth is one of the leading members of Neurobench’s Few-shot Online Learning initiative\, which aims to motivate further research into the area. After earning his degree at the end of the summer\, Kenneth hopes to scale up his research to apply it to real-world problems. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-kenneth-stewart/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230601T163000
DTEND;TZID=UTC:20230601T173000
DTSTAMP:20260415T194732Z
CREATED:20230127T222256Z
LAST-MODIFIED:20230828T170621Z
UID:10000044-1685637000-1685640600@www.neuropac.info
SUMMARY:Theory of Neuromorphic Computing
DESCRIPTION:Recurring discussion meeting by researchers interested in the theory of neuromorphic computing. \nHosted by Arne Diehl and Johan Kwisthout of Radboud University. To join the meetings\, please contact Arne Diehl: arne.diehl@donders.ru.nl.
URL:https://www.neuropac.info/event/theory-of-neuromorphic-computing/2023-06-01/
LOCATION:Online
CATEGORIES:Discussion
ORGANIZER;CN="Arne Diehl":MAILTO:arne.diehl@donders.ru.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230530T080000
DTEND;TZID=America/Los_Angeles:20230530T090000
DTSTAMP:20260415T194732Z
CREATED:20230527T011911Z
LAST-MODIFIED:20230527T011911Z
UID:10000235-1685433600-1685437200@www.neuropac.info
SUMMARY:INRC Forum: Jason Eshraghian & Ruijie Zhu
DESCRIPTION:Scaling up SNNs with SpikeGPT\nAbstract: If we had a dollar for every time we heard “It will never scale!”\, then neuromorphic engineers would be billionaires. This presentation will be centered on SpikeGPT\, the first large-scale language model (LLM) using spiking neural nets (SNNs)\, and possibly the largest SNN that has been trained using error backpropagation.\nThe need for lightweight language models is more pressing than ever\, especially now that we are becoming increasingly reliant on them\, from word processors and search engines to code troubleshooting and academic grant writing. Our dependence on a single LLM means that every user is potentially pooling sensitive data into a singular database\, which leads to significant security risks if breached.\nSpikeGPT was built to move towards addressing the privacy and energy consumption challenges we presently run into using Transformer blocks. Our approach decomposes self-attention down into a recurrent form that is compatible with spiking neurons\, along with dynamical weight matrices where the dynamics are learnable\, rather than the parameters as with conventional deep learning.\nWe will provide an overview of what SpikeGPT does\, how it works\, and what it took to train it successfully. We will also provide a demo on how users can download pre-trained models available on HuggingFace so that listeners are able to experiment with them. \nBio: Dr. Jason Eshraghian is an assistant professor of Electrical and Computer Engineering at UC Santa Cruz. He is the developer of snnTorch\, a widely adopted Python library used to train and model brain-inspired spiking neural networks. He was awarded the IEEE TCAS-I Darlington’23\, IEEE TVLSI’19\, and IEEE AICAS’19 best paper awards\, and the best live demonstration award at IEEE ICECS’20. He was the recipient of the Fulbright Future Fellowship (Australian-America Fulbright Commission)\, the Forrest Research Fellowship (Forrest Research Foundation)\, and the Endeavour Fellowship (Australian Government). He leads the UCSC Neuromorphic Computing Group\, which focuses on porting principles from neuroscience into building more effective learning algorithms in software and hardware. Dr. Eshraghian is the Secretary of the IEEE Neural Systems and Applications Committee and an Associate Editor with APL Machine Learning.\nRuijie Zhu is commencing his Ph.D. in Electrical and Computer Engineering at UC Santa Cruz in the Fall of 2023. He recently completed his Bachelor’s degree in Computer Science at the University of Electronic Science and Technology of China\, where he became a regular contributor to open-source neuromorphic projects including snnTorch and SpikingJelly\, and led the development of SpikeGPT\, the first spiking neural network generative language model. He was elected as the chair of the 2020 Students Open-Source Conference (SOSConf)\, which attracted over 3\,000 online participants. His research focus is on enabling the development of large-scale spiking neural networks. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-eshraghian-zhu/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230521
DTEND;VALUE=DATE:20230526
DTSTAMP:20260415T194732Z
CREATED:20230129T222609Z
LAST-MODIFIED:20230129T222609Z
UID:10000024-1684627200-1685059199@www.neuropac.info
SUMMARY:International Symposium on Circuits and Systems (ISCAS) 2023
DESCRIPTION:The IEEE International Symposium on Circuits and Systems (ISCAS) is the flagship conference of the IEEE Circuits and Systems (CAS) Society and the world’s premiere forum for researchers in the active fields of theory\, design and implementation of circuits and systems.
URL:https://www.neuropac.info/event/international-symposium-on-circuits-and-systems-iscas-2023/
LOCATION:Monterey\, CA\, United States
CATEGORIES:Conference
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230518T163000
DTEND;TZID=UTC:20230518T173000
DTSTAMP:20260415T194732Z
CREATED:20230127T222256Z
LAST-MODIFIED:20230828T170621Z
UID:10000043-1684427400-1684431000@www.neuropac.info
SUMMARY:Theory of Neuromorphic Computing
DESCRIPTION:Recurring discussion meeting by researchers interested in the theory of neuromorphic computing. \nHosted by Arne Diehl and Johan Kwisthout of Radboud University. To join the meetings\, please contact Arne Diehl: arne.diehl@donders.ru.nl.
URL:https://www.neuropac.info/event/theory-of-neuromorphic-computing/2023-05-18/
LOCATION:Online
CATEGORIES:Discussion
ORGANIZER;CN="Arne Diehl":MAILTO:arne.diehl@donders.ru.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230509T080000
DTEND;TZID=America/Los_Angeles:20230509T090000
DTSTAMP:20260415T194732Z
CREATED:20230509T065534Z
LAST-MODIFIED:20230509T065534Z
UID:10000234-1683619200-1683622800@www.neuropac.info
SUMMARY:INRC Forum: Bradley Theilman
DESCRIPTION:Stochastic Neuromorphic Circuits for Solving MAXCUT\nAbstract: Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. In this talk\, I will present two neuromorphic circuits that transform a source of randomness into computationally useful correlations for approximating solutions to graph MAXCUT. Neuromorphic computing has been successfully applied to various graph algorithms by exploiting the analogy between a graph and the connectivity of a neural circuit. However\, the physical constraints of neuromorphic hardware make translating an arbitrary graph into the neuromorphic domain challenging. Neuromorphic computing is also beginning to explore stochastic devices as efficient sources of randomness for large-scale stochastic algorithms. Graph MAXCUT is a well-known NP-complete discrete optimization problem with the best-known approximate solutions being stochastic algorithms\, such as the Goemans-Williamson algorithm. I will show how to combine large-scale sources of intrinsic randomness with neuromorphic principles to implement two classes of stochastic approximations to graph MAXCUT in neuromorphic hardware. These approaches have architectural advantages over other neuromorphic graph algorithms and benefit from the theoretical performance guarantees of their algorithmic inspirations. I will show results from simulations of these circuits as well as results from an implementation of one of these circuits on Intel’s Loihi neuromorphic system. This work opens a new direction for stochastic neuromorphic circuits applied to discrete optimization. \nBio: Bradley Theilman is a postdoctoral appointee at Sandia National Laboratories. His research focuses on applying neuroscientific principles to neuromorphic computing. He earned a Ph.D. in computational neuroscience in 2021 from UC San Diego\, where he worked on topological approaches to understanding neural population activity in the auditory regions of songbird brains in the laboratory of Dr. Tim Gentner. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bradley-theilman/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230505T110000
DTEND;TZID=America/New_York:20230505T120000
DTSTAMP:20260415T194732Z
CREATED:20230423T184025Z
LAST-MODIFIED:20230423T185121Z
UID:10000232-1683284400-1683288000@www.neuropac.info
SUMMARY:Frances Chance - Modeling Coordinate Transformations in Neural and Neuromorphic Systems
DESCRIPTION:Hosted by the Perception and Robotics Group Seminar Series on Robotics and Computer Vision at the University of Maryland. \nAbstract. Animals excel at a wide range of behaviors\, many of which are essential for survival. For example\, dragonflies are aerial predators\, known for both their speed and high success rate\, that must perform fast\, accurate\, and efficient calculations to survive. I will present a neural network model\, inspired by the dragonfly nervous system\, that calculates turning for successful prey interception. The model relies upon a coordinate transformation from eye-coordinates to body-coordinates\, an operation that must be performed by almost any animal nervous system relying upon sensory information to interact with the external world. I will discuss how I and collaborators are combining neuroscience experiments\, modeling studies\, and exploration of neuromorphic architectures to understand how the biological dragonfly nervous system performs coordinate transformations and to develop novel approaches for efficient neural-inspired computation. \nBio. As a computational neuroscientist\, Frances Chance has always been fascinated by how neural circuits compute information. Her current research focuses on applying knowledge of how neural systems operate towards the development of novel neuro-inspired algorithms and brain-based architectures. Frances Chance received her PhD and MS from Brandeis University and her BS from the California Institute of Technology. Currently she is a Principal Member of the Technical Staff at Sandia National Laboratories.
URL:https://www.neuropac.info/event/frances-chance-modeling-coordinate-transformations-in-neural-and-neuromorphic-systems/
LOCATION:University of Maryland\, 8125 Paint Branch Dr (Room IRB 4105)\, College Park\, MD\, 20740\, United States
CATEGORIES:Talk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230504T163000
DTEND;TZID=UTC:20230504T173000
DTSTAMP:20260415T194732Z
CREATED:20230127T222256Z
LAST-MODIFIED:20230828T170621Z
UID:10000042-1683217800-1683221400@www.neuropac.info
SUMMARY:Theory of Neuromorphic Computing
DESCRIPTION:Recurring discussion meeting by researchers interested in the theory of neuromorphic computing. \nHosted by Arne Diehl and Johan Kwisthout of Radboud University. To join the meetings\, please contact Arne Diehl: arne.diehl@donders.ru.nl.
URL:https://www.neuropac.info/event/theory-of-neuromorphic-computing/2023-05-04/
LOCATION:Online
CATEGORIES:Discussion
ORGANIZER;CN="Arne Diehl":MAILTO:arne.diehl@donders.ru.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230502T080000
DTEND;TZID=America/Los_Angeles:20230502T090000
DTSTAMP:20260415T194732Z
CREATED:20230430T102520Z
LAST-MODIFIED:20230430T102520Z
UID:10000233-1683014400-1683018000@www.neuropac.info
SUMMARY:INRC Forum: Jeff Orchard
DESCRIPTION:Hyperdimensional Algorithms using Spiking Phasors\nAbstract: Hyperdimensional (HD) computing offers a powerful framework for representing compositional reasoning. Such algorithms lend themselves to neural-network implementations\, allowing us to create neural networks that can perform cognitive functions\, like spatial reasoning\, arithmetic\, and symbolic logic. But the vectors involved can be quite large. Advances in neuromorphic hardware hold the promise of reducing the running time and energy footprint of neural networks by orders of magnitude. In this talk\, I will extend some pioneering work to run HD algorithms on a substrate of spiking neurons\, implementing examples in spatial memory\, function representation\, and temporal memory. \nBio: Jeff Orchard received degrees in applied mathematics from the University of Waterloo (BMath) and the University of British Columbia (MSc)\, and received his PhD in Computing Science from Simon Fraser University in 2003. Since then\, he has been a faculty member at the Cheriton School of Computer Science at the University of Waterloo in Canada. Prof. Orchard’s research focuses on computational neuroscience\, using mathematical models and computer simulations of neural networks in an effort to understand how the brain works. Guided by both theory and anatomy\, he is building neural networks based on computational theories of the brain — such as predictive coding — to uncover the way we perceive the world. His research also includes Vector Symbolic Architectures and Algebras\, spatial navigation\, and population coding. He is a core member of the Centre for Theoretical Neuroscience. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-jeff-orchard/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230501
DTEND;VALUE=DATE:20230506
DTSTAMP:20260415T194732Z
CREATED:20230129T222127Z
LAST-MODIFIED:20230129T222127Z
UID:10000022-1682899200-1683331199@www.neuropac.info
SUMMARY:International Conference on Learning Representations (ICLR) 2023
DESCRIPTION:
URL:https://www.neuropac.info/event/international-conference-on-learning-representations-iclr-2023/
LOCATION:Kigali Convention Center\, Kigali\, Rwanda
CATEGORIES:Conference
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230430
DTEND;VALUE=DATE:20230515
DTSTAMP:20260415T194732Z
CREATED:20230127T225007Z
LAST-MODIFIED:20230127T225007Z
UID:10000011-1682812800-1684108799@www.neuropac.info
SUMMARY:CapoCaccia Workshop 2023
DESCRIPTION:Workshop theme for 2023: “Lessons from machine learning and neuroscience for building efficient intelligent systems” \nIt is an exciting era of significant progress in the quest for implementing intelligence in artificial systems. This stems from major breakthroughs in our understanding of natural intelligence\, thanks to new tools for better data collection and analysis from the brain; in the development of machine learning algorithms for solving real-world problems; and in the availability of scalable computing substrates that are smaller\, denser\, faster and feature parallel processing capabilities. \nIn this workshop\, we combine all the above towards a more efficient and powerful implementation of intelligent systems. Specifically\, our objective is to pinpoint what current ideas from machine learning and neuroscience can lead to practical designs for implementing low-power and miniaturized neuromorphic intelligent systems. To do so\, in a setting that fosters brainstorming and cross-fertilization\, we stimulate exchange of ideas on topics in which biology\, modeling\, and engineering are dealt with simultaneously. These topics will range from fundamental principles such as learning\, memory and the neurobiology of time; to high-level functions such as navigation\, embodiment and active sensing. \nThe mission of the CapoCaccia Workshops for Neuromorphic Intelligence is to understand the principles of biological intelligence and apply this knowledge in technologies\, for the good of all mankind. \nThe workshop features open and highly interactive discussion sessions in the morning; hands-on projects\, tutorials\, and hardware and software jamming sessions during the day; and free-form discussions in the evenings. \nThe workshop is open to everyone\, but since resources are limited\, we can accept only a limited number of registrations. Due to the limited number of hotel rooms\, Ph.D. students are expected to pair up and share rooms. 
 All participants are encouraged to stay for the full two-week period\, but can stay for less if necessary.
URL:https://www.neuropac.info/event/capocaccia-workshop-2023/
LOCATION:Alghero\, Sardinia\, Italy
CATEGORIES:Workshop
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230426T180000
DTEND;TZID=UTC:20230426T193000
DTSTAMP:20260415T194732Z
CREATED:20230127T223705Z
LAST-MODIFIED:20230127T223705Z
UID:10000009-1682532000-1682537400@www.neuropac.info
SUMMARY:Hands-on session with Xylo and Rockpool
DESCRIPTION:Speaker bio: Dylan Muir is the Vice President for Global Research Operations; Director for Algorithms and Applications; and Director for Global Business Development at SynSense. Dr. Muir is a specialist in architectures for neural computation. He has published extensively in computational and experimental neuroscience. At SynSense he is responsible for the company's research vision and for directing the development of neural architectures for signal processing. Dr. Muir holds a Doctor of Science (PhD) from ETH Zurich\, and undergraduate degrees (Masters) in Electronic Engineering and in Computer Science from QUT\, Australia.
URL:https://www.neuropac.info/event/hands-on-session-with-xylo-and-rockpool/
LOCATION:Online
CATEGORIES:Tutorial
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230425T080000
DTEND;TZID=America/Los_Angeles:20230425T090000
DTSTAMP:20260415T194732Z
CREATED:20230425T072512Z
LAST-MODIFIED:20230425T072512Z
UID:10000178-1682409600-1682413200@www.neuropac.info
SUMMARY:INRC Forum: James Knight
DESCRIPTION:Efficient training of sparse SNN classifiers using GeNN\nAbstract:Intuitive and easy to use application programming interfaces such as Keras have played a large part in the rapid acceleration of ANN-based machine learning. We want to unlock the potential of spike-based machine learning in the same way\, so here we present mlGeNN\, an easy way to define\, train and test spiking neural networks using GeNN — our efficient GPU-accelerated SNN simulator. Using GeNN\, we demonstrate that we can use e-prop to train recurrent SNN classifiers on datasets including the Spiking Heidelberg Digits (SHD) and DVS gesture. We show that these classifiers can not only offer comparable performance to LSTMs but are up to 7× faster when performing inference on the same GPU hardware. As GeNN is designed to exploit sparse connectivity\, by replacing the dense recurrent connectivity in classifier models with random sparse connectivity\, we can reduce the time taken to train such models by almost 10× — although this results in some reduction in classification accuracy. However\, in biological brains\, alongside the changes to the strength of existing synapses driven by synaptic plasticity\, structural plasticity prunes unused synapses and forms new ones. The Deep-R learning rule provides a framework for combining gradient-based learning with structural plasticity and by combining Deep-R with e-prop\, we demonstrate that the aforementioned reduction in classification accuracy can be eliminated\, even in very sparsely connected models. \nBio: Jamie Knight received his BEng degree in Electronic Engineering from the University of Warwick in 2006. After working as a games developer for several years\, he received an MPhil in Advanced Computer Science from the University of Cambridge in 2013 and a PhD in Computer Science from the University of Manchester in 2016. 
 His doctoral work focussed on using the SpiNNaker neuromorphic supercomputer to simulate large-scale computational neuroscience models with synaptic plasticity. Since 2017 Jamie has worked at the University of Sussex\, first as a Research Fellow focussing on using GPU hardware to accelerate spiking-neural-network-based robot controllers and\, since 2022\, as an EPSRC Research Software Engineering Fellow focussing on spike-based machine learning and the software to enable it. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bruno-olshausen-2-4/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230420T163000
DTEND;TZID=UTC:20230420T173000
DTSTAMP:20260415T194732Z
CREATED:20230127T222256Z
LAST-MODIFIED:20230828T170621Z
UID:10000041-1682008200-1682011800@www.neuropac.info
SUMMARY:Theory of Neuromorphic Computing
DESCRIPTION:A recurring discussion meeting for researchers interested in the theory of neuromorphic computing. \nHosted by Arne Diehl and Johan Kwisthout of Radboud University. To join the meetings\, please contact Arne Diehl: arne.diehl@donders.ru.nl.
URL:https://www.neuropac.info/event/theory-of-neuromorphic-computing/2023-04-20/
LOCATION:Online
CATEGORIES:Discussion
ORGANIZER;CN="Arne Diehl":MAILTO:arne.diehl@donders.ru.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20230418T210000
DTEND;TZID=Europe/Amsterdam:20230418T223000
DTSTAMP:20260415T194732Z
CREATED:20230320T142706Z
LAST-MODIFIED:20230320T142706Z
UID:10000033-1681851600-1681857000@www.neuropac.info
SUMMARY:NeuroPAC Seminar: Forum on Neuromorphic Navigation
DESCRIPTION:Panelists: \n\nAndrew Davison\, Imperial College London\nKostas Daniilidis\, University of Pennsylvania\nMichael Milford\, Queensland University of Technology\n\nJoin the seminar: https://umd.zoom.us/j/93344217202 \nMore information: https://www.neuropac.info/seminars/
URL:https://www.neuropac.info/event/neuropac-seminar-forum-on-neuromorphic-navigation/
LOCATION:Online
CATEGORIES:Symposium,Talk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230418T080000
DTEND;TZID=America/Los_Angeles:20230418T090000
DTSTAMP:20260415T194732Z
CREATED:20230416T091939Z
LAST-MODIFIED:20230416T091939Z
UID:10000177-1681804800-1681808400@www.neuropac.info
SUMMARY:INRC Forum: Akshit Saradagi
DESCRIPTION:Neuromorphic sensing in sub-terranean environments and neuromorphic solvers for model predictive control\nAbstract: In this talk\, I will be presenting some recent results in Neuromorphic Engineering from the Robotics and AI group at Luleå University of Technology\, Sweden.\nIn the first half of my talk\, I will be presenting a novel LiDAR and event camera fusion framework for fast and precise object and human detection in subterranean (SubT) environments. The fusion framework caters to the wide variety of adverse lighting conditions found in SubT environments\, such as low or no light\, high-contrast zones and in the presence of blinding light sources. The proposed fusion uses intensity filtering and K-means clustering on the LiDAR point cloud and frequency filtering and connectivity clustering on the events induced in an event camera by the returning LiDAR beams. The fusion framework was experimentally validated in a real SubT environment (a mine) with a Pioneer 3AT mobile robot. The experimental results show real-time performance for human detection and the NMPC-based controller allows for reactive tracking of a human or object of interest\, even in complete darkness.\nIn the second half of the talk\, I will be presenting our preliminary results on using neuromorphic solvers for solving quadratic programs arising in Model Predictive Control (MPC). More specifically\, we employed the floating-point LAVA QP solver\, which emulates the Proportional-Integral Projected Gradient (PIPG) Method for solving QP problems\, to solve terminally constrained MPC problems. The objective function in linear MPC problems being strongly convex\, the LAVA QP solver ensures that the distance to optimum and the constraint violation converge to zero at the rate of O(1/k^2) and O(1/k^3) respectively\, with ‘k’ being the number of solver iterates. 
 Given this peculiar convergence property of the solver\, I will present a sketch of our proof of asymptotic stability of the closed-loop system\, along with the simulation-based validation. \nBio: Akshit Saradagi is a Postdoctoral researcher in the Robotics and AI group at Luleå University of Technology\, Sweden. He received his M.S. and Ph.D. dual degree from the Indian Institute of Technology Madras (IITM)\, Chennai\, India. His current research focusses on distributed control of multi-agent systems\, control-barrier-function-based safety guarantees in Robotics\, applications of Neuromorphic Computing in Robotics\, and control under resource constraints. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bruno-olshausen-2-3/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230411T080000
DTEND;TZID=America/Los_Angeles:20230411T090000
DTSTAMP:20260415T194732Z
CREATED:20230416T091845Z
LAST-MODIFIED:20230416T091845Z
UID:10000176-1681200000-1681203600@www.neuropac.info
SUMMARY:INRC Forum: Guido de Croon
DESCRIPTION:Neuromorphic sensing and processing for small\, autonomous drones\n\n\n\nAbstract: Small drones are promising for many applications\, such as search-and-rescue\, greenhouse monitoring\, or keeping track of stock in warehouses. Since they are small\, they can fly in narrow areas. Moreover\, their light weight makes them very safe for flight around humans. However\, making such small drones fly completely by themselves is an enormous challenge due to the extreme resource restrictions in terms of sensing and processing. In my talk\, I will discuss the promises of novel neuromorphic sensing and processing technologies for autonomous flight of small drones\, illustrating this with recent experiments from our lab. Specifically\, I will delve into our multi-year effort to create a fully neuromorphic vision-to-control pipeline\, going from raw events to low-level control commands. Recently\, we have achieved this feat for optical-flow-based ego-motion estimation and control\, implementing the spiking neural network on the Loihi Kapoho Bay on board a free-flying drone. \n\n\n\nBio: Guido de Croon received his M.Sc. and Ph.D. in the field of Artificial Intelligence (AI) at Maastricht University\, the Netherlands. His research interest lies in computationally efficient\, bio-inspired algorithms for robot autonomy\, with an emphasis on computer vision. Since 2008 he has worked on algorithms for achieving autonomous flight with small and lightweight flying robots\, such as the DelFly flapping-wing MAV. In 2011-2012\, he was a research fellow in the Advanced Concepts Team of the European Space Agency\, where he studied topics such as optical-flow-based control algorithms for extraterrestrial landing scenarios. After his return to TU Delft\, his work has included fully autonomous flight of a 20-gram DelFly\, a new theory on active distance perception with optical flow\, and a swarm of tiny drones able to explore unknown environments. 
 Currently\, he is a Full Professor at TU Delft and scientific lead of the Micro Air Vehicle lab (MAVLab) of Delft University of Technology. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bruno-olshausen-2-2/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230411
DTEND;VALUE=DATE:20230415
DTSTAMP:20260415T194732Z
CREATED:20230114T164825Z
LAST-MODIFIED:20230114T164825Z
UID:10000001-1681171200-1681516799@www.neuropac.info
SUMMARY:NICE
DESCRIPTION:The 2023 Neuro-Inspired Computing Elements (NICE) Conference is the 10th annual meeting of researchers in the neural computing field. Like previous editions\, NICE 2023 will focus on the interplay between neural theory\, neural algorithms\, neuromorphic architectures and hardware\, and applications for neural computing technology. \nNICE aims to involve diverse participation from all over the world and bring together research communities with universities\, government\, and industry.
URL:https://www.neuropac.info/event/nice/
LOCATION:University of Texas at San Antonio\, San Antonio\, TX\, United States
CATEGORIES:Conference
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230406T163000
DTEND;TZID=UTC:20230406T173000
DTSTAMP:20260415T194732Z
CREATED:20230127T222256Z
LAST-MODIFIED:20230828T170621Z
UID:10000040-1680798600-1680802200@www.neuropac.info
SUMMARY:Theory of Neuromorphic Computing
DESCRIPTION:A recurring discussion meeting for researchers interested in the theory of neuromorphic computing. \nHosted by Arne Diehl and Johan Kwisthout of Radboud University. To join the meetings\, please contact Arne Diehl: arne.diehl@donders.ru.nl.
URL:https://www.neuropac.info/event/theory-of-neuromorphic-computing/2023-04-06/
LOCATION:Online
CATEGORIES:Discussion
ORGANIZER;CN="Arne Diehl":MAILTO:arne.diehl@donders.ru.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230404T180000
DTEND;TZID=UTC:20230404T193000
DTSTAMP:20260415T194732Z
CREATED:20230127T223552Z
LAST-MODIFIED:20230127T223552Z
UID:10000008-1680631200-1680636600@www.neuropac.info
SUMMARY:Hands-on session with Sinabs and Speck
DESCRIPTION:Speaker bio: Gregor Lenz graduated with a Ph.D. in neuromorphic engineering from Sorbonne University. He thinks that technology can learn a thing or two from how biological systems process information. \nHis main interests are event cameras\, which are inspired by the human retina\, and spiking neural networks\, which mimic the human brain\, in an effort to teach machines to compute a bit more like humans do. At the very least there are some power efficiency gains to be made\, but hopefully more! He also loves to build open-source software for spike-based machine learning. You can find more information on his personal website. \nHe is the maintainer of two open-source projects in the field of neuromorphic computing\, Tonic and expelliarmus.
URL:https://www.neuropac.info/event/hands-on-session-with-sinabs-and-speck/
LOCATION:Online
CATEGORIES:Tutorial
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230404T080000
DTEND;TZID=America/Los_Angeles:20230404T090000
DTSTAMP:20260415T194732Z
CREATED:20230331T122014Z
LAST-MODIFIED:20230331T122032Z
UID:10000175-1680595200-1680598800@www.neuropac.info
SUMMARY:INRC Forum: Arto Nurmikko
DESCRIPTION:Efficient Decoding of Multipoint Spiking Events Recorded by a Network of Wireless Biosensors\n\n\nAbstract: Our lab is developing tools for brain-machine interfaces using a concept of spatially distributed wireless microsensors\, “neurograins”\, implanted in a functional cortical area of interest (motor\, auditory\, visual). When a given sensor detects a spiking event\, the signal is immediately sent to an external radio-frequency receiver as a binary “1”. Thus\, for a network of a thousand neurograins\, one goal of an ongoing research project\, the received data at the external detector is a stream of spikes in which the cortical computations of interest are embedded. Based on our work on smaller ensembles (a hundred neurograins)\, we have discovered a major computational bottleneck in detecting and decoding signals from large ensembles of neurograins for real-time (wearable/portable) brain-interface systems. In this work\, we explore and apply the Loihi platform to integrate the demodulation (time-series correlation) and neural population decoding (spike-timing-based model) steps into one parallel process. \nBio: Prof. Arto Nurmikko is the L. Herbert Ballou University Professor of Engineering and Physics at Brown. He received his degrees from the University of California\, Berkeley\, and did postdoctoral work at the Hebrew University (Jerusalem) and MIT. Prof. Nurmikko’s research spans the areas of neuroengineering\, photonics\, microelectronics\, nanosciences\, and the translation of device research to new technologies in physical and life science applications. Currently\, his research interests are focused on implantable neural interfaces. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-arto-nurmikko/
LOCATION:Online
END:VEVENT
END:VCALENDAR