
2024 BrainChip Discussion, page-5371

    Every now and then I like to remind myself of what our research has unearthed.

    Particularly when there has been a catalyst, which in this case was the presentation by Dr. Anthony Lewis, CTO, at the BrainChip AGM regarding AKIDA 2.0 and TENNs.

    The following are extracts from a paper that comprehensively reviewed the neuromorphic offerings, including BrainChip's AKD1000, and their suitability for inclusion/adoption in data centres:

    “ 3) BrainChip: Akida is an event-based processor developed by BrainChip [27]. Akida supports a wide variety of neural networks and can execute complex networks. It also supports the AXI bus for connection to CPUs, allowing custom networks not supported by Akida to be executed on the CPU. It comprises a data processing unit to preprocess input data, converting it into events, and uses an LPDDR4 interface for storing programs and parameters. Additionally, the PCIe interface can be used to connect to other Akida chips. Akida aims to support a broad range of applications, including robotics and automation in industry, real-time sensing in automotive, vital-signs prediction in on-device health monitoring, and intelligent automation in homes”

    “ B. Interim conclusions
    From the above results, we draw some interim conclusions:
    • Tasks and models: Neuromorphic solutions are available for image processing with CNN, natural language processing with recurrent neural networks (spiking LSTM or event-based GRU), and spatiotemporal pattern recognition.
    • Energy: Neuromorphic hardware is between 3 to 100 times more energy efficient per inference at batch size 1.
    • Speed: For some tasks, neuromorphic hardware shows faster inference compared to conventional systems. This advantage diminishes for larger batch sizes.”

    “ A. Hardware integration
    1) Status quo: So far, the following neuromorphic systems have been integrated at large scale into data centers: SpiNNaker 1 [20], TrueNorth (NS16e-4) [83], Loihi (Pohoiki Springs) [84], BrainScaleS-1 [85] and Tianjic [32]. All systems use slide-in modules with custom printed circuit boards (PCBs) for integration into standard 19” server racks. Typically, the neuromorphic chips are accessed via Ethernet, only the TrueNorth NS16e-4 uses PCIe for communication with the host chip. Baseboard Management Controller (BMC) or similar controllers are used for booting and monitoring the boards. All platforms also include field-programmable gate arrays (FPGAs) or system-on-chips (SoCs), most often as middleware between host computers and neuromorphic systems. Some systems have already integrated a host CPU, e.g., Pohoiki Springs or the Tianjic server, while the other systems require external host CPU servers for the configuration and control of the neuromorphic systems and for preprocessing.
    As an exception, BrainChip offers PCIe boards for integrating their Akida chips with CPU servers. This represents another option for integrating neuromorphic computing systems into data centers, similar to normal GPUs. Note, however, that this might limit the size of neuromorphic models that can be implemented compared to the larger systems discussed above.
    2) Conclusion: The above examples show that a variety of neuromorphic systems have been successfully integrated into standard data center server racks. Thus, technically, the hardware integration does not pose a problem. Yet, we observe a diversity in how large neuromorphic systems are assembled into server boards, e.g., many of them leverage FPGAs or SoCs as middleware. These extra devices and the host CPU add a power overhead to the very energy-efficient neuromorphic systems. Optimizing for system-level efficiency of AI compute servers, these components need to be included when performing benchmarking on AI workloads. Another requirement for the industry-level deployment of neuromorphic chips is high reliability and robustness. The chips and boards need to be designed for a 24/7 operation, e.g., the server board should keep working if a single chip or processor fails. Replacement parts should be available for a long period”

    “ VI. CONCLUSION
    In this article, we reviewed neuromorphic hardware platforms and algorithms for their suitability in reducing energy consumption in AI data centers. We also discussed the current challenges that neuromorphic computing faces in becoming a mainstream technology used by the industry. In particular, we analyzed that the current AI model types supported by neuromorphic computing only partially match the AI models commonly run in AI data centers. We conclude that the neuromorphic computing community should focus on state-of-the-art ML technologies, such as transformers, and needs to establish standardized software frameworks that ensure interoperability among hardware vendors.
    Data center sustainability is not only about saving energy during operations but also about saving water and materials while keeping social and governance issues in mind. These latter issues are becoming increasingly important as AI models use vast amounts of personal data. When considering the carbon footprint, one will eventually face the question of whether or not to integrate specialized hardware: the embodied footprint of an additional device may be greater than the operational footprint savings due to specialized solutions [102]. Neuromorphic engineers should therefore focus on the high utilization of their platforms”

    This paper was published on 4 February 2024 by an impressive array of Europe-based academics, and at the point of publication the narrowly focused AKD1000 was standing up against the best the rest of the world had to offer in the neuromorphic space.

    Enter, stage left, AKIDA 2.0 and TENNs, making AKD1000 look like the disappointing older brother: very good, bordering on brilliant, but not the next Beethoven his parents predicted.

    In outshining AKD1000, AKIDA 2.0 with TENNs simultaneously eclipses all the other neuromorphic players reviewed by these academics.

    With the release of AKIDA 2.0, Peter van der Made and Sean Hehir suggested that the lead of about three years AKD1000 had held had moved out to five.

    It stands to reason that the addition of TENNs must have at least cemented that lead and likely extended it further.

    My opinion only DYOR

    Fact Finder
 