BRN 2.38% 21.5¢ brainchip holdings ltd

Neuromorphic Research & News to watch

    Again, thought I would share some more research - this time on neuromorphic research and developments to keep an eye on in the future. There is so much happening and developing that this obviously isn't an all-inclusive list, just highlights I am watching. Very exciting to see where this is going.

    Enjoy ~~


    Valeo


    Exploring Deep Spiking Neural Networks for Automated Driving Applications


    https://arxiv.org/abs/1903.02080


    Abstract

    Neural networks have become the standard model for various computer vision tasks in automated driving including semantic segmentation, moving object detection, depth estimation, visual odometry, etc. The main flavors of neural networks in common use are convolutional (CNN) and recurrent (RNN). In spite of rapid progress in embedded processors, power consumption and cost are still a bottleneck. Spiking Neural Networks (SNNs) are gradually progressing to achieve a low-power event-driven hardware architecture which has a potential for high efficiency. In this paper, we explore the role of deep spiking neural networks (SNN) for automated driving applications. We provide an overview of progress on SNNs and argue how they can be a good fit for automated driving applications.



    Bosch


    Deep Learning With Spiking Neurons: Opportunities and Challenges


    https://www.frontiersin.org/articles/10.3389/fnins.2018.00774/full


    Abstract

    Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that deep spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware. A wide range of training methods for SNNs is presented, ranging from the conversion of conventional deep networks into SNNs, constrained training before conversion, spiking variants of backpropagation, and biologically motivated variants of STDP. The goal of our review is to define a categorization of SNN training methods, and summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementation. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years, and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply two solutions to the same classes of problems, instead it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors, exploit temporal codes and local on-chip learning, and we have so far just scratched the surface of realizing these advantages in practical applications.
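    The ANN-to-SNN conversion idea the review surveys can be sketched very simply: a trained ReLU activation is approximated by the firing rate of an integrate-and-fire neuron driven by the ANN's pre-activation. This is a minimal illustration of the principle only; the function names and parameters are mine, not from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def if_spike_rate(current, threshold=1.0, steps=1000):
    """Simulate a non-leaky integrate-and-fire neuron driven by a
    constant input current and return its average firing rate."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += current
        if v >= threshold:
            spikes += 1
            v -= threshold      # reset-by-subtraction preserves the rate code
    return spikes / steps

# For inputs in [0, 1], the spike rate tracks the ReLU output:
for x in [0.0, 0.25, 0.5, -0.3]:
    print(x, relu(x), if_spike_rate(x))
```

    The reset-by-subtraction (rather than reset-to-zero) is what makes the rate approximation accurate, which is one of the conversion constraints this kind of review discusses.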



    Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks


    https://www.frontiersin.org/articles/10.3389/fnins.2020.00439/full?report=reader


    Spiking neural networks (SNNs) are potentially highly efficient models for inference on fully parallel neuromorphic hardware, but existing training methods that convert conventional artificial neural networks (ANNs) into SNNs are unable to exploit these advantages. Although ANN-to-SNN conversion has achieved state-of-the-art accuracy for static image classification tasks, the following subtle but important difference in the way SNNs and ANNs integrate information over time makes the direct application of conversion techniques for sequence processing tasks challenging. Whereas all connections in SNNs have a certain propagation delay larger than zero, ANNs assign different roles to feed-forward connections, which immediately update all neurons within the same time step, and recurrent connections, which have to be rolled out in time and are typically assigned a delay of one time step. Here, we present a novel method to obtain highly accurate SNNs for sequence processing by modifying the ANN training before conversion, such that delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation. Our method builds on the recently introduced framework of streaming rollouts, which aims for fully parallel model execution of ANNs and inherently allows for temporal integration by merging paths of different delays between input and output of the network. The resulting networks achieve state-of-the-art accuracy for multiple event-based benchmark datasets, including N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture, and through the use of spatio-temporal shortcut connections yield low-latency approximate network responses that improve over time as more of the input sequence is processed. In addition, our converted SNNs are consistently more energy-efficient than their corresponding ANNs.


    Ultrasonic sensor system and method for detecting objects in the surroundings of a vehicle, and vehicle comprising an ultrasonic sensor system



    https://patents.google.com/patent/WO2019215028A1/en


    Abstract

    The invention relates to an ultrasonic sensor system for detecting objects in the surroundings of a vehicle, said system comprising a first group of ultrasonic sensors and a second group of ultrasonic sensors. The ultrasonic sensors of the first group each have a first installation height on the vehicle and the ultrasonic sensors of the second group of ultrasonic sensors each have a second installation height on the vehicle, the first installation height being greater than the second installation height. The ultrasonic sensors of the first group have a higher sensitivity for the detection of objects than the ultrasonic sensors of the second group.



    Fast detection of secondary objects that may intersect the trajectory of a moving primary object


    https://patents.google.com/patent/EP3543898A1/


    A system (1) for detecting dynamic secondary objects (55) that have a potential to intersect the trajectory (51) of a moving primary object (50), comprising a vision sensor (2) with a light-sensitive area (20) that comprises event-based pixels (21), so that a relative change in the light intensity impinging onto an event-based pixel (21) of the vision sensor (2) by at least a predetermined percentage causes the vision sensor (2) to emit an event (21a) associated with this event-based pixel (21), wherein the system (1) further comprises a discriminator module (3) that gets both the stream of events (21a) from the vision sensor (2) and information (52) about the heading and/or speed of the motion of the primary object (50) as inputs, and is configured to identify, from said stream of events (21a), based at least in part on said information (52), events (21b) that are likely to be caused by the motion of a secondary object (55), rather than by the motion of the primary object (50). Vision sensors (2) for use in the system (1). A corresponding computer program.
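    The event-generation rule in the claim — emit an event when the relative intensity change at a pixel exceeds a predetermined percentage — can be sketched as follows. The function name and the 15% threshold are illustrative assumptions, not from the patent.

```python
def pixel_events(intensities, threshold=0.15):
    """Emit (index, polarity) events whenever the light intensity at a
    pixel changes by at least `threshold` relative to the last
    intensity that triggered an event -- the event-based pixel model
    described in the patent claim."""
    events = []
    ref = intensities[0]
    for i, lum in enumerate(intensities[1:], start=1):
        change = (lum - ref) / ref
        if abs(change) >= threshold:
            events.append((i, +1 if change > 0 else -1))
            ref = lum          # reset the reference intensity level
    return events

# Small changes produce nothing; a 20% rise and a 25% fall each fire:
print(pixel_events([100, 101, 120, 118, 90]))   # -> [(2, 1), (4, -1)]
```

    The discriminator module in the claim would then filter this sparse event stream using the primary object's own motion, rather than processing full frames.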




    Google


    Temporal coding in spiking neural networks with alpha synaptic function


    Abstract

    The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically-plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very fast but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neurons. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically-inspired neural architectures.
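    Two ingredients of this paper can be sketched directly: an alpha-shaped synaptic kernel (a common form that rises, peaks, and decays) and the first-to-spike readout, where the predicted class is the earliest-firing output neuron. This is a hedged illustration; the exact kernel parameterisation in the paper may differ.

```python
import numpy as np

def alpha_kernel(t, tau=1.0):
    """Alpha-shaped postsynaptic potential: rises after the presynaptic
    spike, peaks at t = tau with value 1, then decays. Zero for t <= 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, (t / tau) * np.exp(1 - t / tau), 0.0)

def first_to_spike(spike_times):
    """Temporal-code readout: the predicted class is the index of the
    output neuron that fires earliest."""
    return int(np.argmin(spike_times))

print(float(alpha_kernel(1.0)))            # peak value -> 1.0
print(first_to_spike([2.7, 1.3, 4.0]))     # neuron 1 fires first -> 1
```

    Because the kernel is differentiable in the spike times, the postsynaptic spike time has exact local derivatives with respect to presynaptic spike times, which is what makes backpropagation through this temporal code possible.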



    Other



    Temporal Pulses Driven Spiking Neural Network for Fast Object Recognition in Autonomous Driving

    https://arxiv.org/abs/2001.09220


    Abstract

    Accurate real-time object recognition from sensory data has long been a crucial and challenging task for autonomous driving. Even though deep neural networks (DNNs) have been successfully applied in this area, most existing methods still rely heavily on the pre-processing of the pulse signals derived from LiDAR sensors, and therefore introduce additional computational overhead and considerable latency. In this paper, we propose an approach to address the object recognition problem directly with raw temporal pulses, utilizing a spiking neural network (SNN). Evaluated on various datasets (including Sim LiDAR, KITTI and DVS-barrel) derived from LiDAR and dynamic vision sensors (DVS), our proposed method has shown performance comparable to state-of-the-art methods while achieving remarkable time efficiency. This highlights the SNN's great potential in autonomous driving and related applications. To the best of our knowledge, this is the first attempt to use an SNN to directly perform object recognition on raw temporal pulses.



    Deep SCNN-Based Real-Time Object Detection for Self-Driving Vehicles Using LiDAR Temporal Data


    https://arxiv.org/abs/2001.09220


    Abstract

    Real-time accurate detection of three-dimensional (3D) objects is a fundamental necessity for self-driving vehicles. Most existing computer vision approaches are based on convolutional neural networks (CNNs). Although CNN-based approaches can achieve high detection accuracy, their high energy consumption is a severe drawback. To resolve this problem, novel energy-efficient approaches should be explored. The spiking neural network (SNN) is a promising candidate because it has orders-of-magnitude lower energy consumption than a CNN. Unfortunately, the study of SNNs has been limited to small networks only, and the application of SNNs to large 3D object detection networks has remained largely open. In this paper, we integrate a spiking convolutional neural network (SCNN) with temporal coding into the YOLOv2 architecture for real-time object detection. To take advantage of spiking signals, we develop a novel data preprocessing layer that translates 3D point-cloud data into spike time data. We propose an analog circuit to implement the non-leaky integrate-and-fire neuron used in our SCNN, from which the energy consumption of each spike is estimated. Moreover, we present a method to calculate the network sparsity and the energy consumption of the overall network. Extensive experiments have been conducted on the KITTI dataset, which show that the proposed network can reach detection accuracy competitive with existing approaches, yet with much lower average energy consumption. If implemented in dedicated hardware, our network could have a mean sparsity of 56.24% and an extremely low total energy consumption of only 0.247 mJ. Implemented on an NVIDIA GTX 1080 Ti GPU, we achieve a frame rate of 35.7 fps, high enough for real-time object detection.
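    The "translate point-cloud data into spike time data" step is a form of latency coding: a stronger feature value produces an earlier spike. Here is a minimal sketch of that idea; the linear mapping and ranges are my assumptions, not the paper's actual preprocessing layer.

```python
def value_to_spike_time(value, v_min, v_max, t_max=10.0):
    """Latency coding: map a feature value to a spike time so that
    stronger inputs fire earlier. A value at v_max spikes at t = 0;
    a value at v_min spikes at t = t_max."""
    frac = (value - v_min) / (v_max - v_min)
    return (1.0 - frac) * t_max

# Point-cloud intensities in [0, 255] become spike times in [0, 10] ms:
for v in (255, 128, 0):
    print(v, value_to_spike_time(v, 0, 255))
```

    Downstream non-leaky integrate-and-fire neurons then operate on these spike times directly, which is where the sparsity and energy savings the paper measures come from.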



    A self-driving robot using deep convolutional neural networks on neuromorphic hardware


    https://ieeexplore.ieee.org/abstract/document/7965912


    Abstract

    Neuromorphic computing is a promising solution for reducing the size, weight and power of mobile embedded systems. In this paper, we introduce a realization of such a system by creating the first closed-loop battery-powered communication system between an IBM Neurosynaptic System (IBM TrueNorth chip) and an autonomous Android-Based Robotics platform. Using this system, we constructed a dataset of path following behavior by manually driving the Android-Based robot along steep mountain trails and recording video frames from the camera mounted on the robot along with the corresponding motor commands. We used this dataset to train a deep convolutional neural network implemented on the IBM NS1e board containing a TrueNorth chip of 4096 cores. The NS1e, which was mounted on the robot and powered by the robot's battery, resulted in a self-driving robot that could successfully traverse a steep mountain path in real time. To our knowledge, this represents the first time the IBM TrueNorth has been embedded on a mobile platform under closed-loop control.



    Tactile sensory coding and learning with bio-inspired optoelectronic spiking afferent nerves


    https://www.nature.com/articles/s41467-020-15105-2


    The integration and cooperation of mechanoreceptors, neurons and synapses in somatosensory systems enable humans to efficiently sense and process tactile information. Inspired by biological somatosensory systems, we report an optoelectronic spiking afferent nerve with neural coding, perceptual learning and memorizing capabilities to mimic tactile sensing and processing. Our system senses pressure by MXene-based sensors, converts pressure information to light pulses by coupling light-emitting diodes to analog-to-digital circuits, then integrates light pulses using a synaptic photomemristor. With neural coding, our spiking nerve is capable of not only detecting simultaneous pressure inputs, but also recognizing Morse code, braille, and object movement. Furthermore, with dimensionality-reduced feature extraction and learning, our system can recognize and memorize handwritten alphabets and words, providing a promising approach towards e-skin, neurorobotics and human-machine interaction technologies.
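    The pressure-to-pulse coding step can be illustrated with simple rate coding: a pressure reading becomes a number of light pulses per window, saturating at some maximum rate. The function, gain, and numbers below are illustrative assumptions only, not taken from the paper's hardware.

```python
def pressure_to_pulses(pressure_kpa, window_ms=100, max_rate_hz=200):
    """Rate-code a pressure reading as the number of light pulses
    emitted in a fixed time window, saturating at max_rate_hz.
    The 10 Hz-per-kPa gain is an illustrative assumption."""
    rate = min(pressure_kpa * 10.0, max_rate_hz)
    return int(rate * window_ms / 1000.0)

print(pressure_to_pulses(5))    # 5 kPa -> 50 Hz -> 5 pulses per 100 ms
print(pressure_to_pulses(50))   # saturates at 200 Hz -> 20 pulses
```

    In the paper this conversion is done physically (MXene sensor, LED, analog-to-digital circuit) and the pulse trains are integrated by a synaptic photomemristor rather than in software.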



    Memristive synapses connect brain and silicon spiking neurons


    https://www.nature.com/articles/s41598-020-58831-9


    Abstract

    Brain function relies on circuits of spiking neurons with synapses playing the key role of merging transmission with memory storage and processing. Electronics has made important advances to emulate neurons and synapses and brain-computer interfacing concepts that interlink brain and brain-inspired devices are beginning to materialise. We report on memristive links between brain and silicon spiking neurons that emulate transmission and plasticity properties of real synapses. A memristor paired with a metal-thin film titanium oxide microelectrode connects a silicon neuron to a neuron of the rat hippocampus. Memristive plasticity accounts for modulation of connection strength, while transmission is mediated by weighted stimuli through the thin film oxide leading to responses that resemble excitatory postsynaptic potentials. The reverse brain-to-silicon link is established through a microelectrode-memristor pair. On these bases, we demonstrate a three-neuron brain-silicon network where memristive synapses undergo long-term potentiation or depression driven by neuronal firing rates.
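    The potentiation/depression behaviour driven by relative firing times can be sketched with a standard pair-based STDP rule; in the paper, the memristive device implements this physically, and the amplitudes and time constant below are illustrative assumptions.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: if the presynaptic spike precedes the
    postsynaptic spike, the weight grows (long-term potentiation);
    otherwise it shrinks (long-term depression), with exponential
    dependence on the spike-time gap (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)     # LTP
    else:
        return -a_minus * math.exp(dt / tau)    # LTD

print(stdp_dw(10.0, 15.0))   # pre before post -> positive (potentiation)
print(stdp_dw(15.0, 10.0))   # post before pre -> negative (depression)
```

    The three-neuron demonstration in the paper is essentially this rule running across a biological-silicon link, with the memristor's conductance playing the role of the weight.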



    Epileptic Seizure Detection Using a Neuromorphic-Compatible Deep Spiking Neural Network


    https://link.springer.com/chapter/10.1007/978-3-030-45385-5_34


    Abstract

    Monitoring the brain activity of Drug-Resistant Epileptic (DRE) patients is crucial for the effective management of chronic epilepsy. Implementing machine learning tools to analyze electrical signals acquired from the cerebral cortex of DRE patients can lead to the detection of a seizure prior to its development. The objective of this work was therefore to develop a deep Spiking Neural Network (SNN) for epileptic seizure detection. Energy- and computation-efficient SNNs are well suited to neuromorphic systems, making them an adequate model for edge-computing devices such as healthcare wearables. In addition, the integration of SNNs with neuromorphic chips enables the secure analysis of sensitive medical data without cloud computation.



    Improved robustness of reinforcement learning policies upon conversion to spiking neuronal network platforms applied to ATARI games


    https://arxiv.org/pdf/1903.11012.pdf


    Abstract

    Deep Reinforcement Learning (RL) demonstrates excellent performance on tasks that can be solved by a trained policy. It plays a dominant role among cutting-edge machine learning approaches using multi-layer Neural Networks (NNs). At the same time, Deep RL suffers from high sensitivity to noisy, incomplete, and misleading input data. Following biological intuition, we involve Spiking Neural Networks (SNNs) to address some deficiencies of deep RL solutions. Previous studies in the image classification domain demonstrated that standard NNs (with ReLU nonlinearity) trained using supervised learning can be converted to SNNs with negligible deterioration in performance. In this paper, we extend those conversion results to the domain of Q-Learning NNs trained using RL. We provide a proof of principle of the conversion of a standard NN to an SNN. In addition, we show that the SNN has improved robustness to occlusion in the input image. Finally, we present results on converting a full-scale Deep Q-network to an SNN, paving the way for future research on robust Deep RL applications.



    High-speed particle detection and tracking in microfluidic devices using event-based sensing


    https://pubs.rsc.org/en/content/articlepdf/2020/LC/D0LC00556H


    Abstract

    Visualising fluids and particles within channels is a key element of microfluidic work. Current imaging methods for particle image velocimetry often require expensive high-speed cameras with powerful illuminating sources, thus potentially limiting accessibility. This study explores for the first time the potential of an event-based camera for particle and fluid behaviour characterisation in a microfluidic system. Event-based cameras have the unique capacity to detect light intensity changes asynchronously and to record spatial and temporal information with low latency, low power and high dynamic range. Event-based cameras could consequently be relevant for detecting light intensity changes due to moving particles, chemical reactions or the intake of fluorescent dyes by cells, to mention a few. As a proof of principle, event-based sensing was tested in this work to detect 1 μm and 10 μm diameter particles flowing in a microfluidic channel at average fluid velocities of up to 1.54 m s−1. Importantly, experiments were performed by directly connecting the camera to a standard fluorescence microscope, relying only on the microscope arc lamp for illumination. We present a data processing strategy that allows particle detection and tracking in both bright-field and fluorescence imaging. Detection was achieved up to a fluid velocity of 1.54 m s−1 and tracking up to 0.4 m s−1, suggesting that event-based cameras could represent a paradigm shift in microscopic imaging.



    Related News


    With Hopes of Helping Paralyzed Patients Regain Movement, Intel and Brown University Deploy AI

    https://newsroom.intel.com/news/hopes-helping-paralyzed-patients-regain-movement-intel-brown-university-deploy-ai/#gs.b6x9ua


    Singapore Researchers Look to Intel Neuromorphic Computing to Help Enable Robots That ‘Feel’

    https://newsroom.intel.com/news/singapore-researchers-neuromorphic-computing-robots-feel/#gs.b6xips

    Is Samsung Electronics approaching automobiles as a semiconductor?

    http://m.econovill.com/news/articleView.html?idxno=288651


    Toyota Joins the Race for Self-Driving Cars with an Invisible Copilot

    https://www.technologyreview.com/2016/04/07/161135/toyota-joins-the-race-for-self-driving-cars-with-an-invisible-copilot/


 