
Keith Johnson - BrainChip | LinkedIn: "I have a passion for machine..."

    This was something I posted in early March on Mamba, only because I had picked up the link with Rudy at BRN playing with it.


    Saw a couple of our employees "liking" Mamba and wondered what it was.

    A couple of snips below. Curious whether it was just a general "like" of a great development, or whether it is something they are now working on too.

    It's possible the PeaBrane/mamba-tiny repo on GitHub that was liked is something Rudy created himself, forked from the full Mamba, but maybe I'm reading it wrong?

    Interesting nonetheless, I think, given the "AI in 24" piece links Mamba, Transformers and neuromorphic computing all together.



    Mamba: This refers to the emergence of new, groundbreaking AI architectures like transformers and neuromorphic computing. These architectures mimic the human brain’s structure and function, allowing for significantly faster processing and deeper learning capabilities. Mamba-based models will revolutionize areas like natural language processing, image recognition, and robotics.

    • Mamba Architecture: Revolutionizing Sequence Modelling
      1. Mamba, a groundbreaking architecture, represents a leap forward from Transformer models.
      2. It addresses the computational challenges of large-scale sequence processing.
      3. Albert Gu’s work on structured state spaces inspired Mamba’s development (a toy sketch of the state-space recurrence follows this list).
      4. The architecture’s potential lies in its ability to handle extensive sequences, as demonstrated in DNA classification tasks.
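    For anyone wondering what "structured state spaces" actually look like, here is a toy PyTorch sketch of the selective state-space recurrence Mamba builds on. The names, shapes and the simplified (Euler-style) discretisation are my own illustrative assumptions, not the paper's exact formulation; the point is just that a hidden state is carried along the sequence and the transition at each step depends on the input (that is the "selective" part).

```python
import torch

def selective_ssm_toy(x, delta, A, B, C):
    """Toy selective state-space recurrence (illustrative only).

    x:     (L,)   input sequence
    delta: (L,)   input-dependent step sizes -- the "selective" part
    A:     (N,)   diagonal state matrix (negative entries give stable, decaying states)
    B, C:  (L, N) input-dependent input/output projections
    Returns y of shape (L,).
    """
    L, N = B.shape
    h = torch.zeros(N)
    ys = []
    for t in range(L):
        a_bar = torch.exp(delta[t] * A)   # discretised transition for this step
        b_bar = delta[t] * B[t]           # simplified (Euler) discretisation of B
        h = a_bar * h + b_bar * x[t]      # h_t = A_bar_t * h_{t-1} + B_bar_t * x_t
        ys.append(torch.dot(C[t], h))     # y_t = C_t . h_t
    return torch.stack(ys)

# quick smoke test with random data
L, N = 16, 4
y = selective_ssm_toy(torch.randn(L), torch.rand(L), -torch.rand(N),
                      torch.randn(L, N), torch.randn(L, N))
print(y.shape)  # torch.Size([16])
```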

    [Submitted on 1 Dec 2023]

    Mamba: Linear-Time Sequence Modeling with Selective State Spaces

    Albert Gu, Tri Dao




    [screenshot attachments]






    Rudy Pei

    Physicist | ML researcher | quantum & neuromorphic computing | behavioral economics | composer
    3w Edited

    Mamba is a new state-space model out-performing transformers "everywhere tried". Originally, it was trained with an associative scan, which PyTorch does not support natively, hence the need for custom CUDA kernels. However, there is a simple math trick to express the associative scans used in Mamba as a ratio of two cumulative sums. This makes an efficient native PyTorch implementation of Mamba possible. How? Check out my simple repo with a one-file implementation of this idea, forked from the mamba-minimal repo https://lnkd.in/g5QR7yHC #mamba #llm
    GitHub - PeaBrane/mamba-tiny: Simple, minimal implementation of the Mamba SSM in one file of PyTorch. More efficient than the minimalist version but less efficient than the original mamba implementation.


    github.com
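
To make the trick Rudy describes above concrete: the scan Mamba needs is just the linear recurrence h_t = a_t * h_{t-1} + b_t, and when every a_t is positive (in Mamba they come from exp(delta * A)) the whole recurrence can be written with cumulative products and cumulative sums, so no custom CUDA scan kernel is needed. Below is a rough sketch in plain PyTorch; the function names are mine, and a real implementation (mamba-tiny included) does the equivalent in log space with logcumsumexp for numerical stability.

```python
import torch

def scan_sequential(a, b):
    """Reference scan: h_t = a_t * h_{t-1} + b_t, computed step by step."""
    h = torch.zeros_like(b[..., 0])
    out = []
    for t in range(b.shape[-1]):
        h = a[..., t] * h + b[..., t]
        out.append(h)
    return torch.stack(out, dim=-1)

def scan_cumsum(a, b):
    """Same recurrence using only cumulative ops (assumes a > 0).

    With A_t = prod_{i<=t} a_i, the closed form is
        h_t = A_t * sum_{j<=t} b_j / A_j
    i.e. a running product times a running sum.
    """
    A = torch.exp(torch.cumsum(torch.log(a), dim=-1))  # running products A_t
    return A * torch.cumsum(b / A, dim=-1)             # A_t * running sum of b_j / A_j

# sanity check on random data with a in (0, 1)
a = torch.rand(2, 8) * 0.9 + 0.05
b = torch.randn(2, 8)
print(torch.allclose(scan_sequential(a, b), scan_cumsum(a, b), atol=1e-5))  # expect: True
```

This naive version can underflow or overflow on long sequences (the running products shrink towards zero), which is exactly why the repo linked above does the same computation in log space instead.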





 