BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.16//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.neuropac.info
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T090000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T090000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20231205
DTEND;VALUE=DATE:20231207
DTSTAMP:20260415T091436Z
CREATED:20231103T150634Z
LAST-MODIFIED:20231103T150634Z
UID:10000259-1701734400-1701907199@www.neuropac.info
SUMMARY:IEEE ICRC 2023
DESCRIPTION:The IEEE International Conference on Rebooting Computing is the premier venue for novel computing approaches\, including algorithms and languages\, system software\, system and network architectures\, new devices and circuits\, and applications of new materials and physics. This is an interdisciplinary conference that has participation from a broad technical community\, with emphasis on all aspects of the computing stack. \nIEEE ICRC 2023 is an in-person event with an option for virtual attendance. While all speakers will deliver their talks in-person\, attendees will have the option of attending the conference virtually. Check that option when you REGISTER! \nThe International Roadmap on Devices and Systems (IRDS) will also be featured at ICRC 2023 with talks from academia\, industry\, and government research centers spanning materials\, devices\, circuits\, and systems for computing.
URL:https://www.neuropac.info/event/ieee-icrc-2023/
LOCATION:San Diego\, San Diego\, CA\, United States
CATEGORIES:Conference
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20231205T080000
DTEND;TZID=America/Los_Angeles:20231205T090000
DTSTAMP:20260415T091436Z
CREATED:20231130T122725Z
LAST-MODIFIED:20231130T122725Z
UID:10000270-1701763200-1701766800@www.neuropac.info
SUMMARY:Michael Jurado @ INRC - Enhancing Performance and Efficiency of SNNs
DESCRIPTION:Title:\nEnhancing Performance and Efficiency of SNNs: From Spike-Based Loss Improvements to Synaptic Sparsification Techniques. \nAbstract:\nThe introduction of offline training capabilities like Spike Layer Error Reassignment in Time (SLAYER) and advancements in the probabilistic interpretations of Spiking Neural Network (SNN) output reinforce SNNs as a viable alternative to Artificial Neural Networks (ANNs). However\, special care must be taken during Surrogate Gradient (SG) training to achieve the desired performance and efficiency. This talk will cover our recent work in improving spike-based loss functions for SNNs as well as sparsifying SNNs for low-cost\, high-performance neuromorphic computing. \nSpikemax was previously introduced as a family of differentiable loss methods which use windowed spike counts to form classification probabilities. We modify the Spikemax loss method to use rates and a scaling parameter instead of counts to form Scaled-Spikemax. Our mathematical analysis shows that an appropriate scaling term can yield less coarse probability outputs from the SNN and help smooth the gradient of the loss during training. Experimentally\, we show that Scaled-Spikemax achieves faster training convergence than Spikemax and results in relative improvements of 4.2% and 9.9% in accuracy for NMNIST and N-TIDIGITS18\, respectively. We then extend Scaled-Spikemax to construct a spike-based loss function for multi-label classification called Spikemoid. The viability of Spikemoid is shown via the first known multi-label classification results on N-TIDIGITS18 and 2NMNIST\, a novel variation of NMNIST that superimposes event-driven sensory data. \nHowever\, SNNs trained through SG methods often use dense or convolutional connections which are not always suitable for Loihi 2. In order to minimize core usage and power consumption on chip\, we employ synaptic pruning techniques as part of our SNN training pipelines. We demonstrate the effectiveness of synaptic pruning techniques for ANN-to-SNN conversion of VGG16 on Loihi 1 as well as for a lava-dl trained SNN for the Intel DNS Challenge. This latter approach involved the use of Gradual Magnitude Pruning (GMP) applied during SLAYER training\, which reduced the memory footprint of the baseline SDNN by 50-75%. We highlight infrastructure changes to netX which enable conversion of lava-dl trained SNNs into sparsity-aware lava processes. \nThe meeting link is available to INRC members and affiliates on the INRC Forum Schedule. \nIf you are not yet a member of the INRC\, please see the “Joining the INRC” link below. \nBio: Michael Jurado is a research engineer at the Georgia Tech Research Institute. He studied computer science at Georgia Tech and received his master’s degree in Machine Learning in 2022. Lately\, Michael has been studying and developing neuromorphic algorithms for edge computing and is a regular contributor to the lava code base. In his free time\, he likes to read and study languages. \n\nFor the recording and slides\, see the full INRC Forum 2023 Schedule (accessible only to INRC Affiliates and Engaged Members). \nIf you are interested in becoming a member\, see the information about “Joining the INRC”.
URL:https://www.neuropac.info/event/michael-jurado-inrc-enhancing-performance-and-efficiency-of-snns/
LOCATION:Online
CATEGORIES:Talk
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20231210
DTEND;VALUE=DATE:20231217
DTSTAMP:20260415T091436Z
CREATED:20231103T143805Z
LAST-MODIFIED:20231103T143805Z
UID:10000253-1702166400-1702771199@www.neuropac.info
SUMMARY:NeurIPS 2023
DESCRIPTION:Conference on Neural Information Processing Systems (NeurIPS) 2023\nNeurIPS 2023 will again be held at the New Orleans Ernest N. Morial Convention Center.\n\nThe conference was founded in 1987 and is now a multi-track interdisciplinary annual meeting that includes invited talks\, demonstrations\, symposia\, and oral and poster presentations of refereed papers. Along with the conference is a professional exposition focusing on machine learning in practice\, a series of tutorials\, and topical workshops that provide a less formal setting for the exchange of ideas.
URL:https://www.neuropac.info/event/neurips-2023/
LOCATION:New Orleans\, New Orleans\, LA\, United States
CATEGORIES:Conference
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20231211
DTEND;VALUE=DATE:20231214
DTSTAMP:20260415T091436Z
CREATED:20231006T163033Z
LAST-MODIFIED:20231006T163033Z
UID:10000247-1702252800-1702511999@www.neuropac.info
SUMMARY:NeuroEng Workshop @ Western Sydney University
DESCRIPTION:The International Centre for Neuromorphic Systems welcomes NeuroEng Association members and guests to join a 3-day workshop focusing on computational neuroscience and neuromorphic engineering\, as well as providing a valuable platform for networking and collaboration. \nHosted at the newly refurbished St George Sailing Club\, attendees will be treated to sensational waterfront views of the Georges River and the pristine beaches of the Sans Souci and Botany Bay region – a hidden gem known to locals. Furthermore\, guests can enjoy a number of local accommodation options as well as an array of activities in the area. \nPrepare for a workshop that combines cutting-edge knowledge sharing with the serenity of a beach setting. We look forward to welcoming you to the 2023 ICNS NeuroEng Workshop! \nCall for abstracts open until October 31st\, 2023.
URL:https://www.neuropac.info/event/neuroeng-workshop-western-sydney-university/
LOCATION:St George Sailing Club\, 2 Riverside Drive\, Sans Souci\, Sydney\, NSW\, Australia
CATEGORIES:Workshop
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20231213T060000
DTEND;TZID=Europe/Berlin:20231213T080000
DTSTAMP:20260415T091436Z
CREATED:20231130T123001Z
LAST-MODIFIED:20231130T123015Z
UID:10000271-1702447200-1702454400@www.neuropac.info
SUMMARY:Kade Heckel @ ONM - Neuromorphic Hackathon with Spyx
DESCRIPTION:From the open-neuromorphic.org website: \nJoin us on December 13th for an exciting Spyx hackathon and ONM talk! Learn how to use and contribute to Spyx\, a high-performance spiking neural network library\, and gain insights into the latest developments in neuromorphic frameworks. The session will cover Spyx’s utilization of memory and GPU to maximize training throughput\, along with discussions on the evolving landscape of neuromorphic computing. \nDon’t miss this opportunity to engage with experts\, collaborate on cutting-edge projects\, and explore the potential of Spyx in shaping the future of neuromorphic computing. Whether you’re a seasoned developer or just curious about the field\, this event promises valuable insights and hands-on experience. \nAgenda: \n\n18:00 – 19:00: Spyx Introduction\n\nDive into Spyx\, its features\, and how to contribute\nHands-on session: Explore Spyx functionalities and tackle real-world challenges\nQ&A and collaborative discussions\n\n19:00 – 20:00: Hackathon\n\nCollaborate on cutting-edge projects and explore the potential of Spyx\nQ&A and collaborative discussions\n\nSpeakers: \n\nKade Heckel\n\nNote: The event will be hosted virtually. Stay tuned for the video link and further updates. Let’s come together to push the boundaries of neuromorphic computing!
URL:https://www.neuropac.info/event/kade-heckel-onm-neuromorphic-hackathon-with-spyx/
LOCATION:Online
CATEGORIES:Talk,Tutorial,Workshop
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20231219T180000
DTEND;TZID=Europe/Berlin:20231219T193000
DTSTAMP:20260415T091436Z
CREATED:20231103T152925Z
LAST-MODIFIED:20231103T152925Z
UID:10000265-1703008800-1703014200@www.neuropac.info
SUMMARY:Brad Aimone @ ONM - Programming Scalable Neuromorphic Algorithms With Fugu
DESCRIPTION:From the Open Neuromorphic website: \nExplore neural-inspired computing with Brad Aimone\, a leading neuroscientist at Sandia Labs. Join us for insights into next-gen technology and neuroscience.
URL:https://www.neuropac.info/event/brad-aimone-onm-programming-scalable-neuromorphic-algorithms-with-fugu/
LOCATION:Online
CATEGORIES:Talk
END:VEVENT
END:VCALENDAR