BRN 7.32% 19.0¢ brainchip holdings ltd

2020 BRN Discussion, page-2683

    A couple of weeks ago Rayz posted a lead and I agreed to do some research into Shibo (Bob) Zhou; the following is the result.

    Most posters will recall that uiux posted some weeks ago that a new patent had been granted to BrainChip for an improved spiking neural network. The improved spiking neural network, as can be seen from the extract below, is AKIDA, and the AKIDA which last year's delay produced can now best be described as a digital Spiking Convolutional Neural Network, or SCNN. On reading this patent it was clear that the extra 12 months (or 18 months) had been well worthwhile given the claimed improvements.

    In following up the research on Shibo (Bob) Zhou I came across the paper below and have included the extracts which for me are significant for the following reason. In the paper, Zhou and others show that an SCNN-based YOLOv2 architecture for real-time object detection achieves state-of-the-art performance when processing LiDAR data for self-driving vehicles.

    Interestingly, the paper compares AKIDA, but it is clear from the statements made that the authors did not have access to the new improved SCNN version for which the patent has been granted, and so they dismiss it as not being an SCNN architecture, even though the earlier version of AKIDA achieved close to state-of-the-art performance.

    The value of their research for me is that they have independently proven that the new improved, patented SCNN AKIDA architecture is state of the art for real-time object detection when processing LiDAR for self-driving vehicles. It is therefore no wonder that Ford and Valeo have jumped on board.

    Anyway as usual please do your own research and form your own opinions.

    ****************************************************************************
    SPIKING NEURAL NETWORK
    Document Type and Number: United States Patent Application 20200143229
    Kind Code: A1
    Inventors: Van Der Made, Peter AJ (Laguna Woods, CA, US); Mankar, Anil Shamrao (Mission Viejo, CA, US)
    Abstract: Disclosed herein are system, method, and computer program product embodiments for an improved spiking neural network (SNN) configured to learn and perform unsupervised extraction of features from an input stream. An embodiment operates by receiving a set of spike bits corresponding to a set of synapses associated with a spiking neuron circuit. The embodiment applies a first logical AND function to a first spike bit in the set of spike bits and a first synaptic weight of a first synapse in the set of synapses. The embodiment increments a membrane potential value associated with the spiking neuron circuit based on the applying. The embodiment determines that the membrane potential value associated with the spiking neuron circuit reached a learning threshold value. The embodiment then performs a Spike Time Dependent Plasticity (STDP) learning function based on the determination that the membrane potential value of the spiking neuron circuit reached the learning threshold value.
    ...the SCNN can be allowed to use full convolution, same convolution, or valid convolution...
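To make the abstract's mechanism concrete, here is a minimal sketch of one update step of such a digital spiking neuron: a logical AND of each spike bit with its binary synaptic weight, a membrane-potential increment per match, and a simple STDP-style weight update when the learning threshold is reached. All names and the exact learning rule are illustrative assumptions, not the actual AKIDA hardware implementation.

```python
# Sketch of the patent's logical-AND spiking neuron update.
# neuron_step, learn_threshold, etc. are hypothetical names; the
# STDP rule shown (potentiate synapses whose inputs just spiked,
# then reset the membrane) is a simplified stand-in.

def neuron_step(spike_bits, weights, membrane, learn_threshold):
    """One update of a digital spiking neuron.

    spike_bits -- list of 0/1 input spikes, one per synapse
    weights    -- list of 0/1 binary synaptic weights
    membrane   -- current membrane potential (integer counter)
    """
    # Logical AND of each spike bit with its synaptic weight;
    # each match increments the membrane potential by one.
    for s, w in zip(spike_bits, weights):
        if s & w:
            membrane += 1

    fired = membrane >= learn_threshold
    if fired:
        # STDP-style update: strengthen synapses whose inputs spiked,
        # leave the rest unchanged, then reset the membrane potential.
        weights = [1 if s else w for s, w in zip(spike_bits, weights)]
        membrane = 0
    return membrane, weights, fired
```

Because everything is integer counters and bitwise ANDs rather than multiply-accumulates, an update like this maps naturally onto low-power digital logic, which is the efficiency argument the patent is making.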

    The SCNN can also be allowed to use a custom convolution type referred to as 'padding.' A programmer can indicate a padding convolution type by specifying the padding around each side of the original input 1001.

    ****************************************************************************

    Received March 22, 2020, accepted April 19, 2020, date of publication April 27, 2020, date of current version May 7, 2020. Digital Object Identifier 10.1109/ACCESS.2020.2990416

    Deep SCNN-Based Real-Time Object Detection for Self-Driving Vehicles Using LiDAR Temporal Data

    SHIBO ZHOU 1, YING CHEN 2, XIAOHUA LI 1 (Senior Member, IEEE), AND ARINDAM SANYAL 3 (Member, IEEE)
    1 Department of Electrical and Computer Engineering, The State University of New York at Binghamton, Binghamton, NY 13902, USA
    2 Department of Management Science and Engineering, School of Management, Harbin Institute of Technology, Harbin 150000, China
    3 Department of Electrical Engineering, The State University of New York at Buffalo, Buffalo, NY 14260, USA
    Corresponding author: Ying Chen ([email protected])
    This work was supported in part by the National Natural Science Foundation of China under Grant 91846301.

    ABSTRACT

    Real-time accurate detection of three-dimensional (3D) objects is a fundamental necessity for self-driving vehicles. Most existing computer vision approaches are based on convolutional neural networks (CNNs). Although the CNN-based approaches can achieve high detection accuracy, their high energy consumption is a severe drawback. To resolve this problem, novel energy-efficient approaches should be explored. The spiking neural network (SNN) is a promising candidate because it has orders-of-magnitude lower energy consumption than a CNN. Unfortunately, the study of SNNs has been limited to small networks only. The application of SNNs to large 3D object detection networks has remained largely open. In this paper, we integrate a spiking convolutional neural network (SCNN) with temporal coding into the YOLOv2 architecture for real-time object detection. To take advantage of spiking signals, we develop a novel data preprocessing layer that translates 3D point-cloud data into spike time data. We propose an analog circuit to implement the non-leaky integrate-and-fire neuron used in our SCNN, from which the energy consumption of each spike is estimated. Moreover, we present a method to calculate the network sparsity and the energy consumption of the overall network. Extensive experiments conducted on the KITTI dataset show that the proposed network can reach detection accuracy competitive with existing approaches, yet with much lower average energy consumption. If implemented in dedicated hardware, our network could have a mean sparsity of 56.24% and an extremely low total energy consumption of only 0.247 mJ. Implemented on an NVIDIA GTX 1080i GPU, we can achieve a 35.7 fps frame rate, high enough for real-time object detection.
    V. CONCLUSION
    Existing LiDAR-based 3D real-time object detection methods use CNNs. Although they can achieve high detection accuracy, their high energy consumption is a great concern for practical vehicular applications. This paper is the first to report the development of an SCNN-based YOLOv2 architecture for real-time object detection over the KITTI 3D point-cloud dataset considering the energy consumption. We designed a novel data preprocessing layer to translate the 3D point clouds directly into spike times. To better show the energy efficiency of the proposed network in real-time object detection, we built an analog neuron circuit to obtain the energy cost of each spike. We also proposed an energy consumption and network sparsity estimation method. Our proposed network had a mean spiking sparsity of 56.24% and consumed an average of only 0.247 mJ, indicating higher energy efficiency. Experimental results over the KITTI dataset demonstrated that our proposed network reached state-of-the-art accuracy in the bird's-eye view and full 3D detection. In some cases, our proposed network performed better than other typical models reported in the literature.
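The sparsity and energy figures the conclusion quotes are simple arithmetic over spike counts: sparsity is the fraction of neurons that stay silent, and total energy is spike count times per-spike cost. A minimal sketch of that bookkeeping (the layer counts and per-spike energy below are illustrative placeholders, not the paper's measured values):

```python
# Illustrative sparsity/energy bookkeeping for a spiking network.
# The numbers fed in are made up; only the arithmetic mirrors the
# paper's described estimation method.

def mean_sparsity(silent_total_per_layer):
    """Mean fraction of silent (never-spiking) neurons across layers.

    silent_total_per_layer -- list of (num_silent, num_total) tuples.
    """
    fractions = [silent / total for silent, total in silent_total_per_layer]
    return sum(fractions) / len(fractions)

def total_energy_mJ(total_spikes, energy_per_spike_nJ):
    """Total network energy in millijoules: spikes x per-spike cost (nJ)."""
    return total_spikes * energy_per_spike_nJ * 1e-6
```

This is why sparsity matters so much for SNN hardware: every silent neuron contributes zero spikes and therefore (to first order) zero dynamic energy, so a 56% mean sparsity roughly halves the spike-driven energy bill.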
    ACKNOWLEDGMENT
    The authors would like to thank Prof. Wenfeng Zhao of Binghamton University for the valuable suggestions in the revision process.
 