BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.16//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.neuropac.info
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T090000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231205T080000
DTEND;TZID=America/Los_Angeles:20231205T090000
DTSTAMP:20260422T091325Z
CREATED:20231130T122725Z
LAST-MODIFIED:20231130T122725Z
UID:10000270-1701763200-1701766800@www.neuropac.info
SUMMARY:Michael Jurado @ INRC - Enhancing Performance and Efficiency of SNNs
DESCRIPTION:Title:\nEnhancing Performance and Efficiency of SNNs: From Spike-Based Loss Improvements to Synaptic Sparsification Techniques. \nAbstract:\nThe introduction of offline training capabilities like Spike Layer Error Reassignment in Time (SLAYER) and advancements in the probabilistic interpretations of Spiking Neural Network (SNN) output reinforce SNNs as a viable alternative to Artificial Neural Networks (ANNs). However\, special care must be taken during Surrogate Gradient (SG) training to achieve desired performance and efficiency. This talk will cover our recent work in improving spike-based loss functions for SNNs as well as sparsifying SNNs for low-cost\, high-performance neuromorphic computing. \nSpikemax was previously introduced as a family of differentiable loss methods which use windowed spike counts to form classification probabilities. We modify the Spikemax loss method to use rates and a scaling parameter instead of counts to form Scaled-Spikemax. Our mathematical analysis shows that an appropriate scaling term can yield less coarse probability outputs from the SNN and help smooth the gradient of the loss during training. Experimentally\, we show that Scaled-Spikemax achieves faster training convergence than Spikemax and results in relative improvements of 4.2% and 9.9% in accuracy for NMNIST and N-TIDIGITS18\, respectively. We then extend Scaled-Spikemax to construct a spike-based loss function for multi-label classification called Spikemoid. The viability of Spikemoid is shown via the first known multi-label classification results on N-TIDIGITS18 and 2NMNIST\, a novel variation of NMNIST that superimposes event-driven sensory data. \nHowever\, SNNs trained through SG methods oftentimes use dense or convolutional connections which are not always suitable for Loihi 2. In order to minimize core usage and power consumption on chip\, we employ synaptic pruning techniques as part of our SNN training pipelines. 
We demonstrate the effectiveness of synaptic pruning techniques for ANN to SNN conversion of VGG16 on Loihi 1 as well as for a lava-dl trained SNN for the Intel DNS Challenge. This latter approach involved the use of Gradual Magnitude Pruning (GMP) applied during SLAYER training\, which reduced the memory footprint of the baseline SDNN by 50-75%. We highlight infrastructure changes to netX which enable conversion of lava-dl trained SNNs into sparsity-aware lava processes. \nMeeting link to join is available to INRC members and affiliates on the INRC Forum Schedule (click here). \nIf you are not yet a member of the INRC\, please see the “Joining the INRC link” below. \nBio: Michael Jurado is a research engineer at the Georgia Tech Research Institute. He studied computer science at Georgia Tech and received his master’s degree in Machine Learning in 2022. Lately\, Michael has been studying and developing neuromorphic algorithms for edge computing and is a regular contributor to the lava code base. In his free time\, he likes to read and study languages. \nFor the recording and slides\, see the full INRC Forum 2023 Schedule (accessible only to INRC Affiliates and Engaged Members). \nIf you are interested in becoming a member\, here is the information about ”Joining the INRC.”
URL:https://www.neuropac.info/event/michael-jurado-inrc-enhancing-performance-and-efficiency-of-snns/
LOCATION:Online
CATEGORIES:Talk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231128T080000
DTEND;TZID=America/Los_Angeles:20231128T090000
DTSTAMP:20260422T091325Z
CREATED:20231130T122249Z
LAST-MODIFIED:20231130T122249Z
UID:10000269-1701158400-1701162000@www.neuropac.info
SUMMARY:Jannik Luboeinski @ INRC - Brian2Lava: connecting the Brian 2 simulator to neuromorphic hardware
DESCRIPTION:Abstract:\nNeuromorphic hardware allows for fast and energy-efficient simulation of spiking neural networks. However\, the usage of such devices is still a challenge\, as it requires detailed knowledge about the neuromorphic hardware as well as the used software interface\, e.g.\, the Lava framework for neuromorphic computing spearheaded by Intel. This stands in contrast to the relative ease of simulating spiking neural networks on conventional CPU or GPU architectures\, for which user-friendly simulation environments exist. The Brian 2 simulator\, for instance\, allows users to readily define a spiking neural network with a set of equations\, handling all subsequent hardware interactions. \nTo link the best of both worlds\, we are developing Brian2Lava. Brian2Lava combines the intuitive user interface of Brian 2 with the functionality of Lava. By means of a so-called device for Brian 2\, Brian2Lava seamlessly generates and executes the desired simulations in Lava without the need for users to write additional code. At the current stage\, Brian2Lava supports most Brian 2 features when executing Lava on CPU\, and a selection of essential features for the execution on Intel’s Loihi 2 chip. We are constantly working to expand the number of features supported on the chip\, aiming to enable users to flexibly execute simulations on different hardware platforms. \nIn summary\, by bridging the gap between user-friendly model definition and neuromorphic implementation\, Brian2Lava empowers engineers and neuroscientists alike to leverage the potential of neuromorphic hardware with greater ease and efficiency. \nBio: Jannik Luboeinski is currently a postdoctoral researcher at the University of Göttingen. He received his B.Sc. and M.Sc. degrees in Physics from Technical University of Darmstadt and Goethe University Frankfurt\, respectively. From 2017 to 2021\, he did his Ph.D. 
with Christian Tetzlaff at the University of Göttingen\, investigating the role of two-phase synaptic plasticity in recurrent spiking neural networks\, which resulted in the publication of several journal papers. In 2021\, Dr. Luboeinski continued to work in the group of Professor Tetzlaff (now the Computational Synaptic Physiology Group) as a postdoctoral researcher. A major aim of his research is to identify properties that enable efficient memory processes in biological and artificial neural systems. His work currently focuses on neuromorphic computing and the development of simulation software for recurrent spiking neural networks. \nMeeting link to join is available to INRC members and affiliates on the INRC Forum Schedule (click here). \nIf you are not yet a member of the INRC\, please see the “Joining the INRC link” below. \nFor the recording and slides\, see the full INRC Forum 2023 Schedule (accessible only to INRC Affiliates and Engaged Members). \nIf you are interested in becoming a member\, here is the information about ”Joining the INRC.”
URL:https://www.neuropac.info/event/jannik-lubeoinski-inrc-brian2lava-connecting-the-brian-2-simulator-to-neuromorphic-hardware/
LOCATION:Online
CATEGORIES:Talk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230627T080000
DTEND;TZID=America/Los_Angeles:20230627T090000
DTSTAMP:20260422T091325Z
CREATED:20230626T220627Z
LAST-MODIFIED:20230626T220627Z
UID:10000238-1687852800-1687856400@www.neuropac.info
SUMMARY:INRC Forum: Robert Legenstein
DESCRIPTION:Memory-enriched computation and learning through synaptic and non-synaptic plasticity\nAbstract: Virtually any task faced by humans has a temporal component and therefore demands some form of memory. Consequently\, a variety of memory systems and mechanisms have been shown to exist in the brains of humans and other animals. These memory systems operate on a multitude of time scales\, from seconds to years. Yet\, it is still not well understood how memory is implemented in the brain and how cortical neuronal networks utilize these systems for computation. In this talk\, I will present some recent models that extend (spiking and non-spiking) neural network models with memory using Hebbian and non-Hebbian types of plasticity. I will discuss the similarities between these models and transformers\, arguably the most powerful models for sequence processing in the area of machine learning. I will show that Hebbian plasticity can significantly increase the computational and learning capabilities of spiking neural networks. Further\, I will show how neurons with non-synaptic plasticity can be utilized for memory and how networks of such neurons can be trained without the need to backpropagate errors through time. \nBio: Dr. Robert Legenstein received his PhD in computer science from the Graz University of Technology\, Graz\, Austria\, in 2002. He is a full professor at the Department of Computer Science\, TU Graz\, head of the Institute for Theoretical Computer Science\, and head of the Graz Center for Machine Learning. Dr. Legenstein has served as associate editor of IEEE Transactions on Neural Networks and Learning Systems (2012-2016). He is an action editor for Transactions on Machine Learning Research\, and he was on the program committee for NeurIPS and ICLR several times. 
His primary research interests are learning in models for biological networks of neurons and neuromorphic hardware\, probabilistic neural computation\, novel brain-inspired architectures for computation and learning\, and memristor-based computing concepts. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-robert-legenstein/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230620T080000
DTEND;TZID=America/Los_Angeles:20230620T090000
DTSTAMP:20260422T091325Z
CREATED:20230618T010420Z
LAST-MODIFIED:20230618T010420Z
UID:10000237-1687248000-1687251600@www.neuropac.info
SUMMARY:INRC Forum: Wolfgang Maass\, Christoph Stoeckl & Yukun Yang
DESCRIPTION:Local prediction-learning in high-dimensional spaces enables neural networks to plan\nAbstract: Being able to plan a sequence of actions in order to reach a goal\, or more generally to solve a problem\, is a cornerstone of higher brain function. But compelling models which explain how the brain can achieve that are missing. We show that local synaptic plasticity enables a neural network to create high-dimensional representations of actions and sensory inputs so that they encode salient information about their relationship. In fact\, it can create a cognitive map that reduces planning to a simple geometric problem in a high-dimensional space that can easily be solved by a neural network. This method also explains how self-supervised learning enables a neural network to control a complex muscle system so that it can handle locomotion challenges that never occurred during learning. The underlying learning strategy bears some similarity to self-attention networks (Transformers). But it does not require non-local learning rules or very large datasets. Hence it is suitable for implementation in highly energy-efficient neuromorphic hardware\, in particular for on-chip learning on Loihi 2.\nOne goal of our presentation will be to initiate discussions about the relation of this learning-based use of large vectors to other VSA approaches\, its relation to Transformers\, and possible applications in robotics. \nBio: Wolfgang Maass is a Professor of Computer Science at Technische Universität Graz. He received his PhD (1974) and Habilitation (1978) in Mathematics from Ludwig-Maximilians-Universität in Munich. He conducted research at MIT\, the University of Chicago\, and UC Berkeley\, as a Heisenberg Fellow of the Deutsche Forschungsgemeinschaft. He has been the Editor of Machine Learning (1995-1997)\, Archive for Mathematical Logic (1987-2000)\, and Biological Cybernetics (2006-present). 
He was also a Sloan Fellow at the Computational Neurobiology Lab of the Salk Institute in La Jolla\, California\, from 1997-1998. Since 2005\, he has been an Adjunct Fellow of the Frankfurt Institute for Advanced Studies (FIAS).\nChristoph Stoeckl is a postdoctoral researcher at Technische Universität Graz working at the intersection of computational neuroscience and AI. His research interests include neuromorphic hardware as well as exploring connections between Transformers and neural networks. Before joining the research lab of Prof. Maass\, he obtained a Master’s degree in Computer Science\, also at TU Graz.\nYukun Yang is a first-year doctoral student at Technische Universität Graz\, supervised by Prof. Wolfgang Maass. His primary research interest is at the intersection of AI and neuroscience\, with a focus on discovering the learning principles of the brain and their neuromorphic applications. Before joining TU Graz\, he earned an M.S. in the ECE Department at Duke University in 2020. Earlier\, he received a B.E. in Information Engineering from Xi’an Jiaotong University in 2018. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-tu-graz/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230606T080000
DTEND;TZID=America/Los_Angeles:20230606T090000
DTSTAMP:20260422T091325Z
CREATED:20230606T211025Z
LAST-MODIFIED:20230606T211025Z
UID:10000236-1686038400-1686042000@www.neuropac.info
SUMMARY:INRC Forum: Kenneth Stewart
DESCRIPTION:Emulating Brain-like Rapid Learning in Neuromorphic Edge Computing\nAbstract: Achieving real-time\, personalized intelligence at the edge with learning capabilities holds enormous promise to enhance our daily experiences and assist in decision-making\, planning\, and sensing. Yet\, today’s technology encounters difficulties with efficient and reliable learning at the edge\, due to a lack of personalized data\, insufficient hardware\, and the inherent challenges posed by online learning. Over time and across multiple developmental phases\, the brain has evolved to incorporate new knowledge by efficiently building on previous knowledge. We seek to emulate this remarkable process in digital neuromorphic technology through two interconnected stages of learning.\nInitially\, a meta-training phase fine-tunes the learning hardware’s hyperparameters for few-shot learning by deploying a differentiable simulation of three-factor learning in a neuromorphic chip. This meta-training process refines the synaptic plasticity and related hyperparameters to align with the specific dynamics inherent in the hardware and the given task domain. During the subsequent deployment stage\, these optimized hyperparameters enable accurate learning of new classes using the local three-factor synaptic plasticity updates.\nWe demonstrate our approach using event-driven vision sensor data and the Intel Loihi neuromorphic processor with its associated plasticity dynamics\, achieving state-of-the-art accuracy in learning new categories in one shot\, in real time\, across three task domains. Our methodology is versatile and can be applied to situations demanding quick learning and adaptation at the edge\, such as navigating unfamiliar environments or learning unexpected categories of data through user engagement. \nBio: Kenneth Stewart is a final-year Ph.D. 
candidate in Computer Science at the University of California\, Irvine\, advised by professors Emre Neftci\, Nikil Dutt\, and Jeffery Krichmar. Throughout his Ph.D.\, Kenneth has investigated adaptive learning algorithms with Spiking Neural Networks that can be applied in neuromorphic hardware for online\, on-chip learning. During his Ph.D.\, Kenneth has published several papers in the area and was a candidate for the IEEE AICAS’20 best paper award. In addition to papers\, Kenneth co-authored patents regarding adaptive edge learning for gesture and speech recognition applications with the Accenture Future Tech Lab. Kenneth is one of the leading members of Neurobench’s Few-shot Online Learning initiative\, which aims to motivate further research into the area. After earning his degree at the end of the summer\, Kenneth hopes to scale up his research to apply it to real-world problems. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-kenneth-stewart/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230530T080000
DTEND;TZID=America/Los_Angeles:20230530T090000
DTSTAMP:20260422T091325Z
CREATED:20230527T011911Z
LAST-MODIFIED:20230527T011911Z
UID:10000235-1685433600-1685437200@www.neuropac.info
SUMMARY:INRC Forum: Jason Eshraghian & Ruijie Zhu
DESCRIPTION:Scaling up SNNs with SpikeGPT\nAbstract: If we had a dollar for every time we heard “It will never scale!”\, then neuromorphic engineers would be billionaires. This presentation will be centered on SpikeGPT\, the first large-scale language model (LLM) using spiking neural nets (SNNs)\, and possibly the largest SNN that has been trained using error backpropagation.\nThe need for lightweight language models is more pressing than ever\, especially now that we are becoming increasingly reliant on them from word processors and search engines\, to code troubleshooting and academic grant writing. Our dependence on a single LLM means that every user is potentially pooling sensitive data into a singular database\, which leads to significant security risks if breached.\nSpikeGPT was built to move towards addressing the privacy and energy consumption challenges we presently run into using Transformer blocks. Our approach decomposes self-attention down into a recurrent form that is compatible with spiking neurons\, along with dynamical weight matrices where the dynamics are learnable\, rather than the parameters as with conventional deep learning.\nWe will provide an overview of what SpikeGPT does\, how it works\, and what it took to train it successfully. We will also provide a demo on how users can download pre-trained models available on HuggingFace so that listeners are able to experiment with them. \nBio: Dr. Jason Eshraghian is an assistant professor of Electrical and Computer Engineering at UC Santa Cruz. He is the developer of snnTorch\, a widely adopted Python library used to train and model brain-inspired spiking neural networks. He was awarded the IEEE TCAS-I Darlington’23\, IEEE TVLSI’19\, and IEEE AICAS’19 best paper awards\, and the best live demonstration award at IEEE ICECS’20. 
He was the recipient of the Fulbright Future Fellowship (Australian-American Fulbright Commission)\, the Forrest Research Fellowship (Forrest Research Foundation)\, and the Endeavour Fellowship (Australian Government). He leads the UCSC Neuromorphic Computing Group\, which focuses on porting principles from neuroscience into building more effective learning algorithms in software and hardware. Dr. Eshraghian is the Secretary of the IEEE Neural Systems and Applications Committee and an Associate Editor with APL Machine Learning.\nRuijie Zhu is commencing his Ph.D. in Electrical and Computer Engineering at UC Santa Cruz in the Fall of 2023. He recently completed his Bachelor’s degree in Computer Science at the University of Electronic Science and Technology of China\, where he became a regular contributor to open-source neuromorphic projects\, including snnTorch and SpikingJelly\, and led the development of SpikeGPT\, the first spiking neural network generative language model. He was elected as the chair of the 2020 Students Open-Source Conference (SOSConf)\, which attracted over 3\,000 online participants. His research focus is on enabling the development of large-scale spiking neural networks. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-eshraghian-zhu/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230509T080000
DTEND;TZID=America/Los_Angeles:20230509T090000
DTSTAMP:20260422T091325Z
CREATED:20230509T065534Z
LAST-MODIFIED:20230509T065534Z
UID:10000234-1683619200-1683622800@www.neuropac.info
SUMMARY:INRC Forum: Bradley Theilman
DESCRIPTION:Stochastic Neuromorphic Circuits for Solving MAXCUT\nAbstract: Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. In this talk\, I will present two neuromorphic circuits that transform a source of randomness into computationally useful correlations for approximating solutions to graph MAXCUT. Neuromorphic computing has been successfully applied to various graph algorithms\, by exploiting the analogy between a graph and the connectivity of a neural circuit. However\, the physical constraints of neuromorphic hardware make translating an arbitrary graph into the neuromorphic domain challenging. Neuromorphic computing is also beginning to explore stochastic devices as efficient sources of randomness for large-scale stochastic algorithms. Graph MAXCUT is a well-known NP-complete discrete optimization problem with the best-known approximate solutions being stochastic algorithms\, such as the Goemans-Williamson algorithm. I will show how to combine large-scale sources of intrinsic randomness with neuromorphic principles to implement two classes of stochastic approximations to graph MAXCUT in neuromorphic hardware. These approaches have architectural advantages over other neuromorphic graph algorithms and benefit from the theoretical performance guarantees of their algorithmic inspirations. I will show results from simulations of these circuits as well as results from an implementation of one of these circuits on Intel’s Loihi neuromorphic system. This work opens a new direction for stochastic neuromorphic circuits applied to discrete optimization. \nBio: Bradley Theilman is a postdoctoral appointee at Sandia National Laboratories. His research focuses on applying neuroscientific principles to neuromorphic computing. He earned a Ph.D. 
in computational neuroscience in 2021 from UC San Diego\, where he worked on topological approaches to understanding neural population activity in the auditory regions of songbird brains in the laboratory of Dr. Tim Gentner. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bradley-theilman/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230502T080000
DTEND;TZID=America/Los_Angeles:20230502T090000
DTSTAMP:20260422T091325Z
CREATED:20230430T102520Z
LAST-MODIFIED:20230430T102520Z
UID:10000233-1683014400-1683018000@www.neuropac.info
SUMMARY:INRC Forum: Jeff Orchard
DESCRIPTION:Hyperdimensional Algorithms using Spiking Phasors\nAbstract: Hyperdimensional (HD) computing offers a powerful framework for representing compositional reasoning. Such algorithms lend themselves to neural-network implementations\, allowing us to create neural networks that can perform cognitive functions\, like spatial reasoning\, arithmetic\, and symbolic logic. But the vectors involved can be quite large. Advances in neuromorphic hardware hold the promise of reducing the running time and energy footprint of neural networks by orders of magnitude. In this talk\, I will extend some pioneering work to run HD algorithms on a substrate of spiking neurons\, implementing examples in spatial memory\, function representation\, and temporal memory. \nBio: Jeff Orchard received degrees in applied mathematics from the University of Waterloo (BMath) and the University of British Columbia (MSc)\, and received his PhD in Computing Science from Simon Fraser University in 2003. Since then\, he has been a faculty member at the Cheriton School of Computer Science at the University of Waterloo in Canada. Prof. Orchard’s research focuses on computational neuroscience\, using mathematical models and computer simulations of neural networks in an effort to understand how the brain works. Guided by both theory and anatomy\, he is building neural networks based on computational theories of the brain — such as predictive coding — to uncover the way we perceive the world. His research also includes Vector Symbolic Architectures and Algebras\, spatial navigation\, and population coding. He is a core member of the Centre for Theoretical Neuroscience. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-jeff-orchard/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230425T080000
DTEND;TZID=America/Los_Angeles:20230425T090000
DTSTAMP:20260422T091325Z
CREATED:20230425T072512Z
LAST-MODIFIED:20230425T072512Z
UID:10000178-1682409600-1682413200@www.neuropac.info
SUMMARY:INRC Forum: James Knight
DESCRIPTION:Efficient training of sparse SNN classifiers using GeNN\nAbstract: Intuitive and easy-to-use application programming interfaces such as Keras have played a large part in the rapid acceleration of ANN-based machine learning. We want to unlock the potential of spike-based machine learning in the same way\, so here we present mlGeNN\, an easy way to define\, train and test spiking neural networks using GeNN — our efficient GPU-accelerated SNN simulator. Using GeNN\, we demonstrate that we can use e-prop to train recurrent SNN classifiers on datasets including the Spiking Heidelberg Digits (SHD) and DVS Gesture. We show that these classifiers can not only offer comparable performance to LSTMs but are up to 7× faster when performing inference on the same GPU hardware. As GeNN is designed to exploit sparse connectivity\, by replacing the dense recurrent connectivity in classifier models with random sparse connectivity\, we can reduce the time taken to train such models by almost 10× — although this results in some reduction in classification accuracy. However\, in biological brains\, alongside the changes to the strength of existing synapses driven by synaptic plasticity\, structural plasticity prunes unused synapses and forms new ones. The Deep-R learning rule provides a framework for combining gradient-based learning with structural plasticity\, and by combining Deep-R with e-prop\, we demonstrate that the aforementioned reduction in classification accuracy can be eliminated\, even in very sparsely connected models. \nBio: Jamie Knight received his BEng degree in Electronic Engineering from the University of Warwick in 2006. After working as a games developer for several years\, he received an MPhil in Advanced Computer Science from the University of Cambridge in 2013 and a PhD in Computer Science from the University of Manchester in 2016. 
His doctoral work focussed on using the SpiNNaker neuromorphic supercomputer to simulate large-scale computational neuroscience models with synaptic plasticity. Since 2017\, Jamie has worked at the University of Sussex\, first as a Research Fellow focussing on using GPU hardware to accelerate spiking neural network-based robot controllers and\, since 2022\, as an EPSRC Research Software Engineering Fellow focussing on spike-based machine learning and the software to enable it. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bruno-olshausen-2-4/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230418T080000
DTEND;TZID=America/Los_Angeles:20230418T090000
DTSTAMP:20260422T091325Z
CREATED:20230416T091939Z
LAST-MODIFIED:20230416T091939Z
UID:10000177-1681804800-1681808400@www.neuropac.info
SUMMARY:INRC Forum: Akshit Saradagi
DESCRIPTION:Neuromorphic sensing in subterranean environments and neuromorphic solvers for model predictive control\nAbstract: In this talk\, I will be presenting some recent results in Neuromorphic Engineering from the Robotics and AI group at Luleå University of Technology\, Sweden.\nIn the first half of my talk\, I will be presenting a novel LiDAR and event camera fusion framework for fast and precise object and human detection in subterranean (SubT) environments. The fusion framework caters to the wide variety of adverse lighting conditions found in SubT environments\, such as low or no light\, high-contrast zones\, and blinding light sources. The proposed fusion uses intensity filtering and K-means clustering on the LiDAR point cloud\, and frequency filtering and connectivity clustering on the events induced in an event camera by the returning LiDAR beams. The fusion framework was experimentally validated in a real SubT environment (a mine) with a Pioneer 3AT mobile robot. The experimental results show real-time performance for human detection\, and the NMPC-based controller allows for reactive tracking of a human or object of interest\, even in complete darkness.\nIn the second half of the talk\, I will be presenting our preliminary results on using neuromorphic solvers for solving quadratic programs arising in Model Predictive Control (MPC). More specifically\, we employed the floating-point LAVA QP solver\, which emulates the Proportional-Integral Projected Gradient (PIPG) method for solving QP problems\, to solve terminally constrained MPC problems. Since the objective function in linear MPC problems is strongly convex\, the LAVA QP solver ensures that the distance to the optimum and the constraint violation converge to zero at the rates of O(1/k^2) and O(1/k^3)\, respectively\, with ‘k’ being the number of solver iterates. 
Given this convergence property of the solver\, I will present a sketch of our proof of asymptotic stability of the closed-loop system\, along with simulation-based validation. \nBio: Akshit Saradagi is a postdoctoral researcher in the Robotics and AI group at Luleå University of Technology\, Sweden. He received his M.S. and Ph.D. dual degree from the Indian Institute of Technology Madras (IITM)\, Chennai\, India. His current research focusses on distributed control of multi-agent systems\, control barrier function-based safety guarantees in robotics\, applications of neuromorphic computing in robotics\, and control under resource constraints. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bruno-olshausen-2-3/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230411T080000
DTEND;TZID=America/Los_Angeles:20230411T090000
DTSTAMP:20260422T091325Z
CREATED:20230416T091845Z
LAST-MODIFIED:20230416T091845Z
UID:10000176-1681200000-1681203600@www.neuropac.info
SUMMARY:INRC Forum: Guido de Croon
DESCRIPTION:Neuromorphic sensing and processing for small\, autonomous drones\n\nAbstract: Small drones are promising for many applications\, such as search-and-rescue\, greenhouse monitoring\, or keeping track of stock in warehouses. Since they are small\, they can fly in narrow areas. Moreover\, their light weight makes them very safe for flight around humans. However\, making such small drones fly completely by themselves is an enormous challenge due to the extreme resource restrictions in terms of sensing and processing. In my talk\, I will discuss the promises of novel neuromorphic sensing and processing technologies for autonomous flight of small drones\, illustrating this with recent experiments from our lab. Specifically\, I will delve into our multi-year effort to create a fully neuromorphic vision-to-control pipeline\, going from raw events to low-level control commands. Recently\, we have achieved this feat for optical-flow-based ego-motion estimation and control\, implementing the spiking neural network on the Loihi Kapoho Bay onboard a free-flying drone. \n\nBio: Guido de Croon received his M.Sc. and Ph.D. in the field of Artificial Intelligence (AI) at Maastricht University\, the Netherlands. His research interests lie in computationally efficient\, bio-inspired algorithms for robot autonomy\, with an emphasis on computer vision. Since 2008 he has worked on algorithms for achieving autonomous flight with small and lightweight flying robots\, such as the DelFly flapping-wing MAV. In 2011-2012\, he was a research fellow in the Advanced Concepts Team of the European Space Agency\, where he studied topics such as optical-flow-based control algorithms for extraterrestrial landing scenarios. After his return to TU Delft\, his work has included fully autonomous flight of a 20-gram DelFly\, a new theory on active distance perception with optical flow\, and a swarm of tiny drones able to explore unknown environments. 
Currently\, he is a Full Professor at TU Delft and scientific lead of its Micro Air Vehicle Lab (MAVLab). \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bruno-olshausen-2-2/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230404T080000
DTEND;TZID=America/Los_Angeles:20230404T090000
DTSTAMP:20260422T091325
CREATED:20230331T122014Z
LAST-MODIFIED:20230331T122032Z
UID:10000175-1680595200-1680598800@www.neuropac.info
SUMMARY:INRC Forum: Arto Nurmikko
DESCRIPTION:Efficient Decoding of Multipoint Spiking Events Recorded by a Network of Wireless Biosensors\n\nAbstract: Our lab is developing tools for brain-machine interfaces using a concept of spatially distributed wireless microsensors\, “neurograins”\, implanted in a functional cortical area of interest (motor\, auditory\, visual). When a given sensor detects a spiking event\, the signal is immediately sent to an external radio-frequency receiver as a binary “1”. Thus\, for a network of a thousand neurograins\, one goal of an ongoing research project\, the received data at the external receiver is a stream of spikes in which the cortical computations of interest are embedded. Based on our work on smaller ensembles (a hundred neurograins)\, we have discovered a major computational bottleneck in detecting and decoding signals from large ensembles of neurograins in real-time (wearable/portable) brain-interface systems. In this work\, we explore and apply the Loihi platform to integrate the demodulation (time-series correlation) and neural population decoding (spike-timing-based model) steps into one parallel process. \nBio: Prof. Arto Nurmikko is the L. Herbert Ballou University Professor of Engineering and Physics at Brown. He received his degrees from the University of California\, Berkeley\, and did postdoctoral work at the Hebrew University (Jerusalem) and MIT. Prof. Nurmikko’s research spans the areas of neuroengineering\, photonics\, microelectronics\, nanosciences\, and the translation of device research to new technologies in physical and life science applications. Currently\, his research interests are focused on implantable neural interfaces. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-arto-nurmikko/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230328T080000
DTEND;TZID=America/Los_Angeles:20230328T090000
DTSTAMP:20260422T091325
CREATED:20230331T121912Z
LAST-MODIFIED:20230331T121924Z
UID:10000174-1679990400-1679994000@www.neuropac.info
SUMMARY:INRC Forum: Garrett Kenyon
DESCRIPTION:Sparse Coding with Locally Competitive Algorithm on Loihi 2\nBio: Garrett T. Kenyon received the BA degree in physics from the University of California at Santa Cruz in 1984 and the MS and PhD degrees in physics from the University of Washington in Seattle in 1986 and 1990\, respectively. He received further postdoctoral training at the Baylor College of Medicine\, Division of Neuroscience\, and at the University of Texas Medical School\, Houston\, Department of Neurobiology and Anatomy. He has been a staff member in the Biological and Quantum Physics group at the Los Alamos National Laboratory since 2001. His research interests involve the application of computer simulations and theoretical techniques to the analysis of computation in biological neural networks. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-garrett-kenyon/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230321T080000
DTEND;TZID=America/Los_Angeles:20230321T090000
DTSTAMP:20260422T091325
CREATED:20230331T121734Z
LAST-MODIFIED:20230331T121734Z
UID:10000173-1679385600-1679389200@www.neuropac.info
SUMMARY:INRC Event: Neuromorphic Deep Noise Suppression (N-DNS) Challenge
DESCRIPTION:The Intel Neuromorphic DNS Challenge is a unique opportunity to advance state-of-the-art neuromorphic algorithms research and win up to $55\,000 in prize money.\nYou need not be an INRC member to participate in this challenge. However\, you will need to join in order to develop solutions for Track 2 (see below). The kick-off presentation and recording from the March 21\, 2023 INRC Forum session are available via this link.
URL:https://www.neuropac.info/event/inrc-forum-bruno-olshausen-3/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230314T080000
DTEND;TZID=America/Los_Angeles:20230314T090000
DTSTAMP:20260422T091325
CREATED:20230331T121426Z
LAST-MODIFIED:20230331T121701Z
UID:10000172-1678780800-1678784400@www.neuropac.info
SUMMARY:INRC Forum: Thomas Nowotny
DESCRIPTION:Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks\nAbstract: In a recent paper\, Wunderlich and Pehle (2021) introduced the EventProp algorithm\, which enables training spiking neural networks by gradient descent on exact gradients. In this talk I will present extensions of EventProp to support a wider class of loss functions\, as well as an implementation in the GPU-enhanced Neuronal Networks framework (GeNN) which exploits sparsity. The GPU acceleration allows us to test EventProp extensively on more challenging learning benchmarks. We find that EventProp performs well on some tasks\, but on others learning is slow or fails entirely. We have discovered that these problems arise because the exact gradient of the loss function provides no information about loss changes due to spike creation or spike deletion. Depending on the details of the task and loss function\, descending the exact gradient with EventProp can lead to the deletion of important spikes\, and so to an inadvertent increase of the loss\, a decrease of classification accuracy\, and hence a failure to learn. In other situations\, the lack of knowledge about the benefits of creating additional spikes can lead to a lack of gradient flow into earlier layers\, slowing down learning. We aim to overcome these problems through ‘loss shaping’\, where we introduce a suitable weighting function into an integral loss to increase gradient flow from the output layer towards earlier layers. I will show example results for the Spiking Heidelberg Digits and sequential spiking MNIST\, where we achieve (close to) state-of-the-art performance. \nBio: Prof. Thomas Nowotny has a background in theoretical physics. After his PhD from Leipzig University in 2001\, he started working in Computational Neuroscience and bio-inspired AI at the Institute for Nonlinear Science at UCSD. 
He is now a Professor in Informatics at the University of Sussex and the head of the AI research group. His interests include olfaction\, hybrid systems\, spiking neural networks and their efficient simulation\, bio-inspired AI and algorithms for neuromorphic computing. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-thomas-nowotny-2/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230307T080000
DTEND;TZID=America/Los_Angeles:20230307T090000
DTSTAMP:20260422T091325
CREATED:20230331T120956Z
LAST-MODIFIED:20230331T121020Z
UID:10000171-1678176000-1678179600@www.neuropac.info
SUMMARY:INRC Forum: Bruno Olshausen
DESCRIPTION:Computing with Dynamics\nAbstract: Is the brain a computer? Or is it a dynamical system? While computation serves as a useful metaphor for cognitive processes\, as we delve into the neuroanatomical circuits and physiological properties of brains we encounter structures and phenomena that seem foreign and unfamiliar in terms of standard computing models. Highly recurrent circuits with massive interconnectivity\, attractor dynamics\, oscillations\, traveling waves\, and active sensing are all hallmarks of biological neural systems. How do we make sense of these things in terms of “computation”? Or are we working with the wrong metaphor? Here I shall present a number of recent findings from neuroscience that challenge us to think in new ways about the underlying physical processes governing perception and cognition. \nBio: Bruno Olshausen is Professor of Neuroscience and Optometry at the University of California\, Berkeley. He also serves as Director of the Redwood Center for Theoretical Neuroscience\, an interdisciplinary research group focusing on mathematical and computational models of brain function. He received B.S. and M.S. degrees in Electrical Engineering from Stanford University\, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. Prior to Berkeley\, he was a member of the Departments of Psychology and Neurobiology\, Physiology & Behavior at UC Davis. During postdoctoral work with David Field at Cornell\, he developed the sparse coding model of visual cortex\, which provides a linking principle between natural scene statistics and the response properties of visual neurons. Olshausen’s current research aims to understand the information processing strategies employed by the brain for tasks such as object recognition and scene analysis. This work seeks not only to advance our understanding of the brain\, but also to discover new algorithms for scene analysis based on how brains work. 
\nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bruno-olshausen/2023-03-07/
LOCATION:Online
END:VEVENT
END:VCALENDAR