
All roads lead to JAST, page-39

    Barrel,

    This is Thomas Pfeil:

    He is also behind this research:


    https://www.frontiersin.org/articles/10.3389/fnins.2018.00774/full

    Deep Learning With Spiking Neurons: Opportunities and Challenges

    Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that deep spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware. A wide range of training methods for SNNs is presented, ranging from the conversion of conventional deep networks into SNNs and constrained training before conversion to spiking variants of backpropagation and biologically motivated variants of STDP. The goal of our review is to define a categorization of SNN training methods and summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementation. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years, and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply two solutions to the same classes of problems; instead, it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors, exploit temporal codes and local on-chip learning, and we have so far just scratched the surface of realizing these advantages in practical applications.
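
    To make the conversion idea concrete, here is a minimal toy sketch of the rate-based ANN-to-SNN approach the review covers: keep the trained weights, swap each ReLU for an integrate-and-fire neuron, and read out firing rates. It is my own illustration under assumed shapes and constants, not code from the paper.

```python
import numpy as np

# Toy rate-based ANN-to-SNN conversion. All shapes, weights, and
# constants are illustrative assumptions, not the paper's code.
rng = np.random.default_rng(0)

W = rng.normal(0.0, 0.5, size=(4, 8))  # hypothetical trained ANN weights
x = rng.random(4)                      # one input sample

ann_out = np.maximum(W.T @ x, 0.0)     # conventional ReLU activations

# Data-based weight normalization, a standard conversion trick: scale
# the weights so no unit is driven past one spike per time step.
scale = ann_out.max() if ann_out.max() > 0 else 1.0
W_snn = W / scale

# Swap each ReLU unit for a non-leaky integrate-and-fire neuron and
# present the same input for T time steps.
T = 1000
v_thresh = 1.0
v = np.zeros(8)        # membrane potentials
spikes = np.zeros(8)   # spike counts per unit

for _ in range(T):
    v += W_snn.T @ x           # integrate the (constant) input current
    fired = v >= v_thresh      # threshold crossings emit spikes
    spikes += fired
    v[fired] -= v_thresh       # reset-by-subtraction keeps rates linear

# Firing rates approximate the original ReLU activations.
snn_estimate = spikes / T * v_thresh * scale
print(np.round(ann_out, 3))
print(np.round(snn_estimate, 3))
```

    The longer the network runs, the closer the rate estimate gets to the ANN activation, which is exactly the accuracy-versus-latency trade-off these conversion methods wrestle with.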



    https://www.frontiersin.org/articles/10.3389/fnins.2020.00439/full

    Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks

    Spiking neural networks (SNNs) are potentially highly efficient models for inference on fully parallel neuromorphic hardware, but existing training methods that convert conventional artificial neural networks (ANNs) into SNNs are unable to exploit these advantages. Although ANN-to-SNN conversion has achieved state-of-the-art accuracy for static image classification tasks, the following subtle but important difference in the way SNNs and ANNs integrate information over time makes the direct application of conversion techniques for sequence processing tasks challenging. Whereas all connections in SNNs have a certain propagation delay larger than zero, ANNs assign different roles to feed-forward connections, which immediately update all neurons within the same time step, and recurrent connections, which have to be rolled out in time and are typically assigned a delay of one time step. Here, we present a novel method to obtain highly accurate SNNs for sequence processing by modifying the ANN training before conversion, such that delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation. Our method builds on the recently introduced framework of streaming rollouts, which aims for fully parallel model execution of ANNs and inherently allows for temporal integration by merging paths of different delays between input and output of the network. The resulting networks achieve state-of-the-art accuracy for multiple event-based benchmark datasets, including N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture, and through the use of spatio-temporal shortcut connections yield low-latency approximate network responses that improve over time as more of the input sequence is processed. In addition, our converted SNNs are consistently more energy-efficient than their corresponding ANNs.
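
    The key point in that second abstract is the timing mismatch, and a toy rollout makes it easy to see. This sketch is my own illustration with assumed dynamics, not the paper's code: a plain feed-forward ANN answers within the same step, while a delay-matched rollout (which is what a spiking implementation actually computes) only sees inputs from earlier steps.

```python
import numpy as np

# Toy comparison of a conventional ANN rollout and a delay-matched
# rollout. All names, shapes, and weights are illustrative assumptions.
rng = np.random.default_rng(1)
T, d_in, d_hid, d_out = 6, 3, 5, 2
xs = rng.random((T, d_in))                     # an input stream
W1 = rng.normal(0.0, 0.4, size=(d_in, d_hid))
W2 = rng.normal(0.0, 0.4, size=(d_hid, d_out))

def relu(a):
    return np.maximum(a, 0.0)

# Conventional rollout: feed-forward connections are instantaneous, so
# the output at step t already depends on the input at step t.
out_ann = [relu(relu(x @ W1) @ W2) for x in xs]

# Delay-matched rollout: every connection carries a one-step propagation
# delay, so the output at step t depends on the input from step t-2
# (one step per layer), as it would on spiking hardware.
x_prev = np.zeros(d_in)
h = np.zeros(d_hid)
out_delayed = []
for x in xs:
    out_delayed.append(relu(h @ W2))  # hidden state from the previous step
    h = relu(x_prev @ W1)             # input from the previous step
    x_prev = x

# The two rollouts disagree step by step, which is why naive conversion
# breaks on sequence-processing tasks.
print(np.round(out_ann[2], 3))
print(np.round(out_delayed[2], 3))
```

    Training the ANN under the delayed rollout (their streaming-rollouts framework, with shortcut connections of different delays merging at the output) is how the paper closes that gap before conversion.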



    ---

    I think this patent is Bosch's method for converting their networks to run on spiking hardware.
 