BRN +13.7% 29.0¢ BrainChip Holdings Ltd

Our new Scientific Advisory Board member Mr. Eshraghian stated...

    Our new Scientific Advisory Board member Mr. Eshraghian stated that AKIDA TENNS was a winning combination. Could he have had in mind this recently published research, which confirms that pretrained large language models can be run efficiently on SNNs?

    My opinion only DYOR

    Fact Finder

    arXiv:2302.13939v5 [cs.CL] 11 Jul 2024
    SpikeGPT: Generative Pretrained Language Model with Spiking Neural Networks
    License: CC BY-NC-SA 4.0
    Rui-Jie Zhu [email protected]
    Department of Electrical and Computer Engineering
    University of California, Santa Cruz
    Qihang Zhao [email protected]
    Kuaishou
    Guoqi Li [email protected]
    Institute of Automation
    Chinese Academy of Sciences
    Jason K. Eshraghian [email protected]
    Department of Electrical and Computer Engineering
    University of California, Santa Cruz
    Abstract

    As the size of large language models continues to scale, so do the computational resources required to run them. Spiking Neural Networks (SNNs) have emerged as an energy-efficient approach to deep learning that leverages sparse and event-driven activations to reduce the computational overhead associated with model inference. While they have become competitive with non-spiking models on many computer vision tasks, SNNs have proven more challenging to train. As a result, their performance lags behind modern deep learning, and until now, SNNs have yet to succeed at language generation on large-scale datasets. In this paper, inspired by the Receptance Weighted Key Value (RWKV) language model, we successfully implement 'SpikeGPT', a generative language model with binary, event-driven spiking activation units. We train two variants of the proposed model, with 46M and 216M parameters. To the best of our knowledge, SpikeGPT was the largest backpropagation-trained SNN model when released, rendering it suitable for both the generation and comprehension of natural language. We achieve this by modifying the transformer block to replace multi-head self-attention, reducing quadratic computational complexity O(T^2) to linear complexity O(T) and requiring fewer operations when processed on neuromorphic hardware that can leverage sparse, event-driven activations. Our code implementation is available at https://github.com/ridgerchu/SpikeGPT.
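    The "binary, event-driven spiking activation units" the abstract describes can be made concrete with a minimal sketch of a leaky integrate-and-fire neuron. The decay rate beta, the threshold, and the soft-reset rule below are illustrative assumptions, not parameters taken from the paper:

    import numpy as np

    def lif_forward(inputs, beta=0.9, threshold=1.0):
        """Leaky integrate-and-fire unit: the membrane potential decays by
        `beta` each step, accumulates the input current, and emits a binary
        spike whenever it crosses `threshold`, after which it is reset."""
        mem = 0.0
        spikes = []
        for x in inputs:
            mem = beta * mem + x                     # leaky integration
            spk = 1.0 if mem >= threshold else 0.0   # binary, event-driven output
            mem -= spk * threshold                   # soft reset after a spike
            spikes.append(spk)
        return np.array(spikes)

    # A weak constant input yields mostly zeros: the sparsity that lets
    # neuromorphic hardware skip computation entirely on silent timesteps.
    print(lif_forward(np.full(10, 0.4)))

    Because most outputs are 0, downstream matrix multiplications reduce to sparse accumulations, which is where the claimed efficiency on neuromorphic hardware comes from.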



    5 Conclusion

    Our results demonstrate that event-driven spiking activations are not only capable of language generation, but can do so with fewer high-cost operations. We develop techniques that promote lightweight models for the NLP community, and make large-scale models more effective for the neuromorphic and SNN community. We demonstrate how large SNNs can be trained in a way that harnesses advances in transformers and our own serialized version of the attention mechanism. We expect this research to open new directions for large-scale SNNs.
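    The "serialized version of the attention mechanism" mentioned in the conclusion can be illustrated with a simplified RWKV-style recurrence: a decaying running numerator and denominator summarize the whole past, so each token costs O(1) and a length-T sequence costs O(T) instead of the O(T^2) of full self-attention. This is only a sketch; it omits the per-channel decays and the "bonus" term for the current token used in the real RWKV/SpikeGPT formulation, and the decay `w` and array shapes are illustrative assumptions:

    import numpy as np

    def serialized_wkv(k, v, w=0.5):
        """Simplified RWKV-style token mixing, processed one token at a time.
        A decaying running numerator/denominator carries the entire history,
        so each step is O(1) and the full sequence is O(T)."""
        T, d = v.shape
        num = np.zeros(d)      # decayed, weighted sum of past values
        den = np.zeros(d)      # matching normalizer
        decay = np.exp(-w)     # exponential forgetting of older tokens
        out = np.zeros_like(v)
        for t in range(T):
            weight = np.exp(k[t])
            num = decay * num + weight * v[t]
            den = decay * den + weight
            out[t] = num / (den + 1e-8)
        return out

    rng = np.random.default_rng(0)
    k, v = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
    print(serialized_wkv(k, v).shape)   # (8, 4)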

 