BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.16//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.neuropac.info
X-WR-CALDESC:Events for NeuroPAC
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231205T080000
DTEND;TZID=America/Los_Angeles:20231205T090000
DTSTAMP:20260502T022035Z
CREATED:20231130T122725Z
LAST-MODIFIED:20231130T122725Z
UID:10000270-1701763200-1701766800@www.neuropac.info
SUMMARY:Michael Jurado @ INRC - Enhancing Performance and Efficiency of SNNs
DESCRIPTION:Title:\nEnhancing Performance and Efficiency of SNNs: From Spike-Based Loss Improvements to Synaptic Sparsification Techniques. \nAbstract:\nThe introduction of offline training capabilities like Spike Layer Error Reassignment in Time (SLAYER) and advancements in the probabilistic interpretations of Spiking Neural Network (SNN) output reinforce SNNs as a viable alternative to Artificial Neural Networks (ANNs). However\, special care must be taken during Surrogate Gradient (SG) training to achieve the desired performance and efficiency. This talk will cover our recent work on improving spike-based loss functions for SNNs as well as sparsifying SNNs for low-cost\, high-performance neuromorphic computing. \nSpikemax was previously introduced as a family of differentiable loss methods that use windowed spike counts to form classification probabilities. We modify the Spikemax loss method to use rates and a scaling parameter instead of counts to form Scaled-Spikemax. Our mathematical analysis shows that an appropriate scaling term can yield less coarse probability outputs from the SNN and help smooth the gradient of the loss during training. Experimentally\, we show that Scaled-Spikemax achieves faster training convergence than Spikemax and yields relative accuracy improvements of 4.2% and 9.9% on NMNIST and N-TIDIGITS18\, respectively. We then extend Scaled-Spikemax to construct a spike-based loss function for multi-label classification called Spikemoid. The viability of Spikemoid is shown via the first known multi-label classification results on N-TIDIGITS18 and 2NMNIST\, a novel variation of NMNIST that superimposes event-driven sensory data. \nHowever\, SNNs trained through SG methods often use dense or convolutional connections that are not always suitable for Loihi 2. To minimize core usage and power consumption on chip\, we employ synaptic pruning techniques as part of our SNN training pipelines. We demonstrate the effectiveness of synaptic pruning for ANN-to-SNN conversion of VGG16 on Loihi 1 as well as for a lava-dl trained SNN for the Intel DNS Challenge. The latter approach uses Gradual Magnitude Pruning (GMP) applied during SLAYER training\, which reduced the memory footprint of the baseline SDNN by 50–75%. We highlight infrastructure changes to netX which enable conversion of lava-dl trained SNNs into sparsity-aware lava processes. \nThe meeting link is available to INRC members and affiliates on the INRC Forum Schedule. \nIf you are not yet a member of the INRC\, please see the “Joining the INRC” link below. \nBio: Michael Jurado is a research engineer at the Georgia Tech Research Institute. He studied computer science at Georgia Tech and received his master’s degree in Machine Learning in 2022. Lately\, Michael has been studying and developing neuromorphic algorithms for edge computing and is a regular contributor to the lava code base. In his free time\, he likes to read and study languages. \nFor the recording and slides\, see the full INRC Forum 2023 Schedule (accessible only to INRC Affiliates and Engaged Members). \nIf you are interested in becoming a member\, here is the information about “Joining the INRC”.
URL:https://www.neuropac.info/event/michael-jurado-inrc-enhancing-performance-and-efficiency-of-snns/
LOCATION:Online
CATEGORIES:Talk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20231213T060000
DTEND;TZID=Europe/Berlin:20231213T080000
DTSTAMP:20260502T022035Z
CREATED:20231130T123001Z
LAST-MODIFIED:20231130T123015Z
UID:10000271-1702447200-1702454400@www.neuropac.info
SUMMARY:Kade Heckel @ ONM - Neuromorphic Hackathon with Spyx
DESCRIPTION:From the open-neuromorphic.org website: \nJoin us on December 13th for an exciting Spyx hackathon and ONM talk! Learn how to use and contribute to Spyx\, a high-performance spiking neural network library\, and gain insights into the latest developments in neuromorphic frameworks. The session will cover how Spyx uses memory and the GPU to maximize training throughput\, along with discussions on the evolving landscape of neuromorphic computing. \nDon’t miss this opportunity to engage with experts\, collaborate on cutting-edge projects\, and explore the potential of Spyx in shaping the future of neuromorphic computing. Whether you’re a seasoned developer or just curious about the field\, this event promises valuable insights and hands-on experience. \nAgenda: \n\n18:00 – 19:00: Spyx Introduction\n\nDive into Spyx\, its features\, and how to contribute\nHands-on session: Explore Spyx functionalities and tackle real-world challenges\nQ&A and collaborative discussions\n\n\n19:00 – 20:00: Hackathon\n\nCollaborate on cutting-edge projects and explore the potential of Spyx\nQ&A and collaborative discussions\n\n\n\nSpeakers: \n\nKade Heckel\n\nNote: The event will be hosted virtually. Stay tuned for the video link and further updates. Let’s come together to push the boundaries of neuromorphic computing!
URL:https://www.neuropac.info/event/kade-heckel-onm-neuromorphic-hackathon-with-spyx/
LOCATION:Online
CATEGORIES:Talk,Tutorial,Workshop
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20231219T180000
DTEND;TZID=Europe/Berlin:20231219T193000
DTSTAMP:20260502T022035Z
CREATED:20231103T152925Z
LAST-MODIFIED:20231103T152925Z
UID:10000265-1703008800-1703014200@www.neuropac.info
SUMMARY:Brad Aimone @ ONM - Programming Scalable Neuromorphic Algorithms With Fugu
DESCRIPTION:From the Open Neuromorphic website: \nExplore neural-inspired computing with Brad Aimone\, a leading neuroscientist at Sandia Labs. Join us for insights into next-gen technology and neuroscience.
URL:https://www.neuropac.info/event/brad-aimone-onm-programming-scalable-neuromorphic-algorithms-with-fugu/
LOCATION:Online
CATEGORIES:Talk
END:VEVENT
END:VCALENDAR