BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.15.16//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.neuropac.info
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20220101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20230425T080000
DTEND;TZID=America/Los_Angeles:20230425T090000
DTSTAMP:20260418T060441Z
CREATED:20230425T072512Z
LAST-MODIFIED:20230425T072512Z
UID:10000178-1682409600-1682413200@www.neuropac.info
SUMMARY:INRC Forum: James Knight
DESCRIPTION:Efficient training of sparse SNN classifiers using GeNN\nAbstract: Intuitive and easy-to-use application programming interfaces such as Keras have played a large part in the rapid acceleration of ANN-based machine learning. We want to unlock the potential of spike-based machine learning in the same way\, so here we present mlGeNN\, an easy way to define\, train and test spiking neural networks using GeNN\, our efficient GPU-accelerated SNN simulator. Using GeNN\, we demonstrate that we can use e-prop to train recurrent SNN classifiers on datasets including the Spiking Heidelberg Digits (SHD) and DVS Gesture. We show that these classifiers not only offer comparable performance to LSTMs but also run up to 7× faster when performing inference on the same GPU hardware. As GeNN is designed to exploit sparse connectivity\, by replacing the dense recurrent connectivity in classifier models with random sparse connectivity\, we can reduce the time taken to train such models by almost 10×\, although this results in some reduction in classification accuracy. However\, in biological brains\, alongside the changes to the strength of existing synapses driven by synaptic plasticity\, structural plasticity prunes unused synapses and forms new ones. The Deep-R learning rule provides a framework for combining gradient-based learning with structural plasticity\, and by combining Deep-R with e-prop\, we demonstrate that the aforementioned reduction in classification accuracy can be eliminated\, even in very sparsely connected models. \nBio: Jamie Knight received his BEng degree in Electronic Engineering from the University of Warwick in 2006. After working as a games developer for several years\, he received an MPhil in Advanced Computer Science from the University of Cambridge in 2013 and a PhD in Computer Science from the University of Manchester in 2016. 
 His doctoral work focussed on using the SpiNNaker neuromorphic supercomputer to simulate large-scale computational neuroscience models with synaptic plasticity. Since 2017 Jamie has worked at the University of Sussex\, first as a Research Fellow focussing on using GPU hardware to accelerate spiking neural network-based robot controllers and\, since 2022\, as an EPSRC Research Software Engineering Fellow focussing on spike-based machine learning and the software to enable it. \nFor the meeting link\, see the full INRC Forum Spring 2023 Schedule (accessible only to INRC Affiliates and Fully Engaged Members).
URL:https://www.neuropac.info/event/inrc-forum-bruno-olshausen-2-4/
LOCATION:Online
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230426T180000
DTEND;TZID=UTC:20230426T193000
DTSTAMP:20260418T060441Z
CREATED:20230127T223705Z
LAST-MODIFIED:20230127T223705Z
UID:10000009-1682532000-1682537400@www.neuropac.info
SUMMARY:Hands-on session with Xylo and Rockpool
DESCRIPTION:Speaker bio: Dylan Muir is the Vice President for Global Research Operations; Director for Algorithms and Applications; and Director for Global Business Development at SynSense. Dr. Muir is a specialist in architectures for neural computation. He has published extensively in computational and experimental neuroscience. At SynSense he is responsible for the company research vision\, and directing development of neural architectures for signal processing. Dr. Muir holds a Doctor of Science (PhD) from ETH Zurich\, and undergraduate degrees (Masters) in Electronic Engineering and in Computer Science from QUT\, Australia.
URL:https://www.neuropac.info/event/hands-on-session-with-xylo-and-rockpool/
LOCATION:Online
CATEGORIES:Tutorial
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230430
DTEND;VALUE=DATE:20230515
DTSTAMP:20260418T060441Z
CREATED:20230127T225007Z
LAST-MODIFIED:20230127T225007Z
UID:10000011-1682812800-1684108799@www.neuropac.info
SUMMARY:CapoCaccia Workshop 2023
DESCRIPTION:Workshop theme for 2023: “Lessons from machine learning and neuroscience for building efficient intelligent systems” \nIt is an exciting era of significant progress in the quest for implementing intelligence in artificial systems. This stems from major breakthroughs in our understanding of natural intelligence\, thanks to new tools for better data collection and analysis from the brain; in the development of machine learning algorithms for solving real-world problems; and in the availability of scalable computing substrates that are smaller\, denser\, faster and feature parallel processing capabilities. \nIn this workshop\, we combine all the above towards a more efficient and powerful implementation of intelligent systems. Specifically\, our objective is to pinpoint what current ideas from machine learning and neuroscience can lead to practical designs for implementing low-power and miniaturized neuromorphic intelligent systems. To do so\, in a setting that fosters brainstorming and cross-fertilization\, we stimulate exchange of ideas on topics in which biology\, modeling\, and engineering are dealt with simultaneously. These topics will range from fundamental principles such as learning\, memory and the neurobiology of time; to high-level functions such as navigation\, embodiment and active sensing. \nThe mission of the CapoCaccia Workshops for Neuromorphic Intelligence is to understand the principles of biological intelligence and apply this knowledge in technologies\, for the good of all mankind. \nThe workshop features open and highly interactive discussion sessions in the morning; hands-on projects\, tutorials\, and hardware and software jamming sessions during the day; and free-form discussions in the evenings. \nThe workshop is open to everyone\, but since resources are limited\, we can accept only a limited number of registrations. Due to the limited number of hotel rooms\, Ph.D. students are expected to pair up and share rooms. 
 All participants are encouraged to stay for the full two-week period\, but can stay for less if necessary.
URL:https://www.neuropac.info/event/capocaccia-workshop-2023/
LOCATION:Alghero\, Sardinia\, Italy
CATEGORIES:Workshop
END:VEVENT
END:VCALENDAR