BRN brainchip holdings ltd

Ann: BrainChip Evaluates Redomiciling to US, page-481



    This 2025 paper does not mention BrainChip or AKIDA, but it does mention, and dismiss, a few notable (non-)competitors that some posters suggest have closed BrainChip’s five-plus-year lead. Be sure to read the final paragraph, marked #*#*#*#:

    State of the Art in Parallel and Distributed Systems: Emerging Trends and Challenges

    by Fei Dai 1,*, Md Akbar Hossain 1 and Yi Wang 2


    1 School of Computing, Eastern Institute of Technology, Napier 4104, New Zealand
    2 School of Mathematical and Computational Sciences, Massey University, Palmerston North 4410, New Zealand
    * Author to whom correspondence should be addressed.
    Electronics 2025, 14(4), 677; https://doi.org/10.3390/electronics14040677
    Submission received: 5 December 2024 / Revised: 25 January 2025 / Accepted: 6 February 2025 / Published: 10 February 2025
    (This article belongs to the Special Issue Emerging Distributed/Parallel Computing Systems)

    “3.3. Neuromorphic Computing

    Neuromorphic computing is a class of brain-inspired computing architectures which, at a certain level of abstraction, simulate the biological computations of the brain. This approach enhances the efficiency of compatible computational tasks and achieves computational delays and energy consumption comparable to those of biological computation. The term “neuromorphic” was introduced by Carver Mead in the late 1980s [53,54], referring to mixed analogue–digital implementations of brain-inspired computing. Over time, as technology evolved, it came to encompass a wider range of brain-inspired hardware implementations. Specifically, unlike the von Neumann architecture’s CPU–memory separation and synchronous clocking, neuromorphic computing uses neurons and synapses as its fundamental components, integrating computation and memory. It employs an event-driven approach based on asynchronous spikes, which is more efficient for brain-like sparse and massively parallel computing, significantly reducing energy consumption. At the algorithmic level, the brain-inspired Spiking Neural Network (SNN) serves as the essential algorithm deployed on neuromorphic hardware, efficiently completing ML tasks [55,56] and other operations [57,58]. Recent advancements in VLSI technology and AI have propelled neuromorphic computing towards large-scale development [59]. This section introduces developments in neuromorphic computing from both hardware and algorithmic perspectives and discusses future trends.
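    The event-driven, spike-based behaviour described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron. This is a generic textbook sketch, not code from the paper, and all parameter values are illustrative:

```python
def lif_simulate(input_current, v_thresh=1.0, v_reset=0.0, tau=20.0, dt=1.0):
    """Leaky integrate-and-fire neuron: integrate input, emit spike events."""
    v = v_reset
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input
        v += dt * (-(v - v_reset) / tau + i_in)
        if v >= v_thresh:
            spike_times.append(t)  # asynchronous spike event
            v = v_reset            # reset after firing
    return spike_times

# Constant supra-threshold drive yields a regular, sparse spike train
spikes = lif_simulate([0.1] * 100)
```

    The point of the sketch is sparsity: the neuron only produces an event when its threshold is crossed, so downstream computation happens on a handful of spikes rather than on every time step.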
    IBM TrueNorth is based on distributed digital neural models designed to address cognitive tasks in real time [60]. Its chip contains 4096 neurosynaptic cores, each core featuring 256 neurons, with each neuron having 256 synaptic connections. On the one hand, the intra-chip network integrates 1 million programmable neurons and 256 million trainable synapses; on the other hand, the inter-chip interface supports seamless multi-chip communication of arbitrary size, facilitating parallel computation. By using offline learning, various common algorithms such as convolutional networks, restricted Boltzmann machines, hidden Markov models, and multi-modal classification have been mapped to TrueNorth, achieving good results in real-time multi-object detection and classification tasks with milliwatt-level energy consumption.
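    The TrueNorth figures quoted above compose multiplicatively; a quick sanity check of the stated totals:

```python
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

total_neurons = cores * neurons_per_core              # "1 million programmable neurons"
total_synapses = total_neurons * synapses_per_neuron  # "256 million trainable synapses"

print(total_neurons, total_synapses)  # 1048576 268435456
```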
    Neurogrid, a tree-structured neuromorphic computing architecture, fully considers neural features such as the axonal arbor, synapse, dendritic tree, and ion channels to maximise synaptic connections [61]. Neurogrid uses analogue signals to save energy and a tree structure to maximise throughput, allowing it to simulate 1 million neurons and billions of synaptic connections with only 16 neurocores and a power consumption of only 3 watts. Neurogrid’s hardware is suitable for real-time simulation, while its software can be used for interactive visualisation.
    As one of the neuromorphic computing platforms contributing to the European Union Flagship Human Brain Project (HBP), SpiNNaker is a parallel computation architecture with a million cores [62]. Each SpiNNaker node has 18 cores, connected by a system network-on-chip. Nodes select 1 neural core to act as the monitor processor, assigned an operating system support role, while the other 16 cores support application roles, with the 18th core reserved as a fault-tolerance spare. Nodes communicate through a router to complete parallel data exchange. SpiNNaker can be used as an interface with AER sensors and for integration with robotic platforms.
    Intel’s Loihi is a neuromorphic research processor supporting multi-scale SNNs, achieving performance comparable to mainstream computing architectures [63,64]. Loihi features a maximum of 128,000 neurons per chip with 128 million synapses. Its unique capabilities include a highly configurable synaptic memory with variable weight precision, support for a wide range of plasticity rules, and graded reward spikes that facilitate learning. Loihi has been evaluated in various applications, such as adaptive robot arm control, visual–tactile sensory perception, modelling diffusion processes for scientific computing applications, and solving hard optimisation problems like railway scheduling. Loihi2 [65], as a new generation of neuromorphic computing and an upgrade of Loihi, is equipped with generalised event-based messaging, greater neuron model programmability, enhanced learning capabilities, numerous capacity optimisations to improve resource density, and faster circuit speeds. Importantly, besides the features from Loihi1, Loihi2 has shared synapses for convolution, which is ideal for deep convolutional neural networks.
    SNNs are an essential algorithmic component of neuromorphic computing. To accomplish a task, we must consider how to define a tailored SNN and deploy it on hardware [54]. From a training perspective, algorithms can be categorised into online learning and offline learning. The online-learning approach first deploys the SNN on neuromorphic hardware and then uses its plasticity features to approximate backpropagation; this is a real-time method that optimises plasticity directly in hardware. Offline learning involves training an Artificial Neural Network (ANN) on a CPU or GPU for a specific task and dataset, then converting the ANN to an equivalent SNN and deploying it on neuromorphic hardware. Because backpropagation is key to these training algorithms, various studies have analysed how to realise it on neuromorphic platforms.
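    The offline route (train an ANN, then convert it to an equivalent SNN) is commonly realised by mapping ReLU activations onto firing rates of integrate-and-fire neurons. A toy rate-coding sketch, assuming a soft reset; the weights and inputs are random stand-ins, not a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "trained" single-layer ANN with ReLU (weights are random stand-ins)
W = rng.normal(size=(4, 3))
x = rng.random(3)
ann_out = np.maximum(W @ x, 0.0)          # ReLU activations

# SNN conversion: present x as constant input current to IF neurons;
# over T steps the spike rate approximates the ReLU activation.
T, v_thresh = 1000, 1.0
v = np.zeros(4)
spike_counts = np.zeros(4)
for _ in range(T):
    v += W @ x                            # integrate input current
    fired = v >= v_thresh
    spike_counts += fired
    v[fired] -= v_thresh                  # soft reset preserves residual charge

snn_rate = spike_counts / T               # firing rate per time step
```

    Negative pre-activations never reach threshold (rate 0, matching ReLU), and positive ones fire at a rate proportional to their magnitude, capped at one spike per step.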
    An Energy-Efficient Backpropagation approach successfully implemented backpropagation on TrueNorth hardware [56]. Importantly, this method treats spikes and discrete synapses as continuous probabilities, allowing the trained network to map to neuromorphic hardware through probability sampling. This training method achieved 99.42% accuracy on the MNIST dataset with only 0.268 mJ per image. Furthermore, backpropagation through time (BPTT) has been implemented on neuromorphic datasets, providing a training method for recurrent structures on neuromorphic platforms [66]. Benefiting from these training optimisations, SNNs in neuromorphic computing have been applied in various ML tasks such as Simultaneous Velocity and Texture Classification [67], Real-time Facial Expression Recognition [68], and EMG Gesture Classification [69]. Similarly, they have been used in neuroscience research [70,71]. SNN-based neuromorphic computing is also utilised in non-ML tasks. Benefiting from the neuromorphic vertex–edge structure, graph theory problems can be mapped onto the hardware [58,72,73]. Additionally, it has been applied to solving NP-complete problems [74].
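    The "spikes and discrete synapses as continuous probabilities" idea in [56] can be illustrated abstractly: train with continuous probabilities, then deploy by sampling discrete binary synapses whose ensemble average recovers the trained values. A toy numerical sketch; shapes and values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for trained synapse probabilities: continuous values in [0, 1]
p = rng.random((4, 3))

# Deployment: sample many discrete binary networks from those probabilities
n_samples = 10_000
samples = rng.random((n_samples, 4, 3)) < p   # broadcast comparison
empirical = samples.mean(axis=0)              # average sampled connectivity

# The ensemble of discrete networks tracks the trained probabilities
max_err = np.abs(empirical - p).max()
```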
    Neuromorphic computing often aims to replicate aspects of biological neural processing in hardware, but there is an ongoing debate over how strictly such systems must adhere to biophysical plausibility versus employing more abstract ML methods. On the one hand, SNN models, such as the Izhikevich formulation [75], focus on capturing the temporal dynamics of real neurons, which can yield insights into how biological brains encode and process information. Research has shown that such models can replicate a variety of neuronal firing patterns with computational efficiency, providing a bridge between computational neuroscience and neuromorphic engineering [76]. On the other hand, more traditional ML algorithms, such as Bayesian inference [77], support vector machines [78], or the large language models [79] dominating modern AI, tend to trade some fidelity to biological detail for mathematical tractability, scalability, and often better empirical performance on a range of industrial tasks.
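    The Izhikevich formulation [75] referenced above reduces to two coupled differential equations. A minimal Euler-integration sketch using the standard regular-spiking parameter set (a = 0.02, b = 0.2, c = -65, d = 8); the step size and input current are illustrative choices:

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, steps=2000):
    """Izhikevich neuron model; returns spike times (in integration steps)."""
    v, u = c, b * c            # membrane potential and recovery variable
    spike_times = []
    for t in range(steps):
        # dv/dt = 0.04 v^2 + 5 v + 140 - u + I ;  du/dt = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike cutoff, then reset
            spike_times.append(t)
            v, u = c, u + d
    return spike_times

# A constant current of 10 drives tonic (regular) spiking
spikes = izhikevich(I=10.0)
```

    Changing only the four parameters reproduces other firing patterns (bursting, chattering, fast spiking), which is why the model is cited as a bridge between biological fidelity and computational efficiency.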

    #*#*#*# Despite the proven feasibility of neuromorphic computing in many tasks, it remains largely experimental. In today’s landscape of energy-hungry AI driven by GPU clusters, bringing neuromorphic computing out of the lab and achieving performance equal to or better than GPU-based AI at low energy consumption is a significant trend [80,81,82]. Standardised hardware protocols and community-maintained software will be crucial. From a neuroscience research perspective, neuromorphic computing simulates brain structures to varying degrees; leveraging these simulations could provide new insights into neural mechanisms and brain function. Neuromorphic computing has a closed-loop relationship with both AI and neuroscience, drawing inspiration from and serving both fields, tightly linking their development and advancing our understanding of intelligence.”

    My opinion only. DYOR.

    Fact Finder
 