
On-chip advantage


    I wanted to put some thoughts down about the last few patents we have received and how important they are in setting us apart from the other neuromorphic chips currently in the research phase. The two notable ones are Intel's Loihi and IBM's TrueNorth. As you have probably read, I recently did a post on BrainChip's first-mover advantage, highlighting the different applications where we are advancing ahead of our competitors, e.g.:
    https://hotcopper.com.au/threads/first-mover-advantage.6526947/


    Specifically, I will discuss the two unique features of Akida: on-chip learning and on-chip convolution. You can see in the table below that our biggest competitors can't do either. This is why the company says it “is producing a groundbreaking neuromorphic processor that brings artificial intelligence to the edge in a way that is beyond the capabilities of other products.”

    [Image: comparison table of neuromorphic chips showing which support on-chip learning and on-chip convolution]


    On-chip learning


    This is our 'killer feature': something that no other product can currently do.


    https://patents.google.com/patent/US11157800B2/en


    Neural processor based accelerator system and method


    Abstract

    A configurable spiking neural network based accelerator system is provided. The accelerator system may be executed on an expansion card which may be a printed circuit board. The system includes one or more application specific integrated circuits comprising at least one spiking neural processing unit and a programmable logic device mounted on the printed circuit board. The spiking neural processing unit includes digital neuron circuits and digital, dynamic synaptic circuits. The programmable logic device is compatible with a local system bus. The spiking neural processing units contain digital circuits comprises a Spiking Neural Network that handles all of the neural processing. The Spiking Neural Network requires no software programming, but can be configured to perform a specific task via the Signal Coupling device and software executing on the host computer. Configuration parameters include the connections between synapses and neurons, neuron types, neurotransmitter types, and neuromodulation sensitivities of specific neurons.
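
    To make that abstract a little more concrete, below is a rough sketch of the kind of configuration parameters it describes (connections between synapses and neurons, neuron types, neurotransmitter types, neuromodulation sensitivities). This is plain Python of my own, purely illustrative; the names and fields are hypothetical and are not BrainChip's actual SDK or hardware interface.

    # Hypothetical illustration of the configuration the abstract describes.
    # Not BrainChip's API; the field names are my own guesses.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SynapseConfig:
        pre_neuron: int          # index of the presynaptic neuron
        post_neuron: int         # index of the postsynaptic neuron
        weight: int              # digital, dynamic synaptic weight
        neurotransmitter: str    # e.g. "excitatory" or "inhibitory"

    @dataclass
    class NeuronConfig:
        neuron_type: str                    # e.g. "integrate_and_fire"
        threshold: int                      # firing threshold of the digital neuron circuit
        neuromodulation_gain: float = 1.0   # sensitivity to neuromodulation

    @dataclass
    class NetworkConfig:
        # Parameters the host software would push down to the SNN hardware
        neurons: List[NeuronConfig] = field(default_factory=list)
        synapses: List[SynapseConfig] = field(default_factory=list)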



    During the patent application process for the above patent, the examiner went through the claims and recognised the capability of BrainChip's innovation to learn via STDP, in contrast to the other innovations that were referenced, including those of SpiNNaker, IBM and CEA-Leti:



    https://globaldossier.uspto.gov/#/details/US/15218075/A/86747





    [Images: screenshots of the examiner's citations, listing Merolla (IBM), Bamford (SpiNNaker) and Querlioz (CNRS/CEA) as references]




    Merolla


    IBM Research Almaden, USA - TrueNorth


    https://ieeexplore.ieee.org/document/6055294


    A digital neurosynaptic core using embedded crossbar memory with 45pJ per spike in 45nm



    Paul Merolla is a co-founder of Neuralink:

    http://paulmerolla.com/





    Bamford


    https://ieeexplore.ieee.org/document/4633990


    Large developing axonal arbors using a distributed and locally-reprogrammable address-event receiver



    He appears to be associated with SpiNNaker:


    https://www.frontiersin.org/articles/10.3389/fnins.2018.00434/full


    Structural Plasticity on the SpiNNaker Many-Core Neuromorphic System


    "We would also like to thank Simeon Bamford for aiding in understanding and resolving incorrect assumptions regarding the model and analysis."




    Querlioz


    Bioinspired networks with nanoscale memristive devices that combine the unsupervised and supervised learning approaches


    https://ieeexplore.ieee.org/document/6464164/authors


    The author list is a mix of CNRS and CEA researchers.




    Here is another example of BrainChip beating a competitor to STDP-related functionality. Chris Eliasmith is the PhD from Applied Brain Research who was instrumental in the early development of applications on the Loihi platform, such as keyword spotting (LOL):


    https://dl.acm.org/doi/10.1145/3320288.3320304

    Benchmarking Keyword Spotting Efficiency on Neuromorphic Hardware



    In Applied Brain Research's patent application, they were called out over controllable dendritic delays, with the examiner directly referencing Peter van der Made's patent:


    https://globaldossier.uspto.gov/#/result/publication/EP/3287957/1


    Methods And Systems For Implementing Dynamic Neural Networks


    1) VOELKER, Aaron Russell

    2) ELIASMITH, Christopher David



    [Image: screenshot of the examiner's citation referencing Peter van der Made's patent]



    As you can read, this is evidence that puts BrainChip ahead of:

    1) IBM TrueNorth

    2) SpiNNaker

    3) CEA

    4) Applied Brain Research


    In each case the patent examiner has recognised the teachings of Peter van der Made in the various BrainChip patents related to STDP (spike-timing-dependent plasticity), either as a capability missing from earlier inventions or as prior art directly blocking a feature of a pending application.
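
    For anyone unfamiliar with STDP, the sketch below shows the textbook pair-based weight update: a synapse is strengthened when the presynaptic spike arrives just before the postsynaptic spike, and weakened when it arrives just after. This is plain Python/NumPy of my own for illustration only; the learning rates and time constants are made up, and it is not BrainChip's implementation.

    # Textbook pair-based STDP rule; illustrative only, not BrainChip's method.
    import numpy as np

    def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                     tau_plus=20.0, tau_minus=20.0):
        """Weight change for one pre/post spike pair (spike times in ms)."""
        dt = t_post - t_pre
        if dt > 0:
            return a_plus * np.exp(-dt / tau_plus)    # pre before post: potentiate
        if dt < 0:
            return -a_minus * np.exp(dt / tau_minus)  # post before pre: depress
        return 0.0

    # Example: a pre spike at 10 ms followed by a post spike at 15 ms
    # strengthens the synapse; the reverse ordering weakens it.
    w = 0.5
    w += stdp_delta_w(t_pre=10.0, t_post=15.0)   # w increases
    w += stdp_delta_w(t_pre=15.0, t_post=10.0)   # w decreases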


    On-chip convolution


    Yesterday we were officially granted the new patent for on-chip convolutional spiking neural networks with learning:



    https://patents.google.com/patent/US20210027152A1/en


    Event-based classification of features in a reconfigurable and temporally coded convolutional spiking neural network


    Abstract

    Embodiments of the present invention provides a system and method of learning and classifying features to identify objects in images using a temporally coded deep spiking neural network, a classifying method by using a reconfigurable spiking neural network device or software comprising configuration logic, a plurality of reconfigurable spiking neurons and a second plurality of synapses. The spiking neural network device or software further comprises a plurality of user-selectable convolution and pooling engines. Each fully connected and convolution engine is capable of learning features, thus producing a plurality of feature map layers corresponding to a plurality of regions respectively, each of the convolution engines being used for obtaining a response of a neuron in the corresponding region. The neurons are modeled as Integrate and Fire neurons with a non-linear time constant, forming individual integrating threshold units with a spike output, eliminating the need for multiplication and addition of floating-point numbers.
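
    As a rough intuition for what event-based convolution with integrate-and-fire neurons means, the sketch below processes one input spike at a time: each spike simply adds integer kernel weights into the membrane potentials of the neurons it touches, and a neuron emits an output spike when its potential crosses a threshold, so no floating-point multiply-accumulate is needed. This is plain Python/NumPy and my own simplification, not the patented method.

    # Simplified event-driven convolution with integrate-and-fire neurons.
    # My own illustration of the general idea, not BrainChip's patented method.
    import numpy as np

    H, W = 8, 8                      # feature map size
    kernel = np.array([[0, 1, 0],
                       [1, 2, 1],
                       [0, 1, 0]])   # small integer kernel (no floating point)
    threshold = 4
    membrane = np.zeros((H, W), dtype=int)   # integrate-and-fire potentials

    def process_spike(y, x):
        """Accumulate kernel weights around an input spike at (y, x); fire and
        reset any neuron whose potential crosses the threshold."""
        out_spikes = []
        for dy in range(-1, 2):
            for dx in range(-1, 2):
                yy, xx = y + dy, x + dx
                if 0 <= yy < H and 0 <= xx < W:
                    membrane[yy, xx] += kernel[dy + 1, dx + 1]   # add, no multiply
                    if membrane[yy, xx] >= threshold:
                        membrane[yy, xx] = 0                     # reset after firing
                        out_spikes.append((yy, xx))
        return out_spikes

    # Feed a short burst of input spikes at the same location; once enough
    # weight has accumulated, the neuron at (3, 3) fires an output spike.
    for y, x in [(3, 3), (3, 3), (3, 3)]:
        print(process_spike(y, x))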




    To understand the importance of this innovation, there is research that has sought to achieve similar functionality with the IBM chip, TrueNorth:



    https://www.pnas.org/content/113/41/11441.short


    Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing


    4. Esser, S., Merolla, P., Arthur, J., Cassidy, A., Appuswamy, R., Andreopoulos, A., . . . Modha, D. “Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing.” IBM Research: Almaden, May 24, 2016.



    This research was funded by DARPA:


    ACKNOWLEDGMENTS. This research was sponsored by the Defense Advanced Research Projects Agency



    And they explain the significance of being able to run convolutional neural networks on neuromorphic hardware:


    Significance

    Brain-inspired computing seeks to develop new technologies that solve real-world problems while remaining grounded in the physical requirements of energy, speed, and size. Meeting these challenges requires high-performing algorithms that are capable of running on efficient hardware. Here, we adapt deep convolutional neural networks, which are today’s state-of-the-art approach for machine perception in many domains, to perform classification tasks on neuromorphic hardware, which is today’s most efficient platform for running neural networks. Using our approach, we demonstrate near state-of-the-art accuracy on eight datasets, while running at between 1,200 and 2,600 frames/s and using between 25 and 275 mW.



    This is important because BrainChip is currently the only company with a commercial neuromorphic chip capable of both on-chip learning and on-chip convolution, and it's obvious from the sheer number of references to this research, across a wide variety of applications, that there is high demand for this functionality. This gives you an indication of how far-reaching the technology will be.


    Personally, when I read of an organisation or company referencing the above DARPA/IBM research, I immediately recognise that as demand that Akida can fulfil today. For example, here is a Navy SBIR proposal seeking exactly that functionality:



    https://www.navysbir.com/n20_2/N202-099.htm


    Implementing Neural Network Algorithms on Neuromorphic Processors


    OBJECTIVE: Deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware.


    Hardware based on Spiking Neural Networks (SNN) are currently under development at various stages of maturity. Two prominent examples are the IBM True North and the INTEL Loihi Chips, respectively. The IBM approach uses conventional CMOS technology and the INTEL approach uses a less mature memrisistor architecture. Estimated efficiency performance increase is greater than 3 orders of magnitude better than state of the art Graphic Processing Unit (GPUs) or Field-programmable gate array (FPGAs). More advanced architectures based on an all-optical or photonic based SNN show even more promise. Nano-Photonic based systems are estimated to achieve 6 orders of magnitude increase in efficiency and computational density; approaching the performance of a Human Neural Cortex. The primary goal of this effort is to deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware. Benchmark the performance gains and validate the suitability to warfighter application.



    The SBIR specifically references the research:

    4. Esser, S., Merolla, P., Arthur, J., Cassidy, A., Appuswamy, R., Andreopoulos, A., . . . Modha, D. “Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing.” IBM Research: Almaden, May 24, 2016. https://arxiv.org/pdf/1603.08270.pdf



    Hopefully this gives you a clearer picture of the unique advantages Akida offers over its competitors. The above is my opinion, and I am still learning the technology after reading about it for a few years, but I hope this explanation helps you understand it a bit more. Happy to be corrected if there are inaccuracies or my understanding is wrong.

 