
Akida Advantages


    For those who conveniently forget the facts, and who take great pleasure in rubbishing, and in some cases bad-mouthing, our brilliant staff from the founders down to the toilet cleaners.

    They never contribute a constructive alternative, or indeed post anything that comes close to mature, educated research to balance the debate; they only attack other posters whom they have never actually met, because face to face they would melt with cowardly embarrassment.

    Please enjoy reading the following....Tech.


    Here are a few things that set us apart:

    Neuromorphic processor products from Intel (Loihi) and IBM (TrueNorth) are hideously difficult to use. The IBM chip is programmed in ‘corelets’, and the user is required to learn this language as well as the architecture of the chip. The Intel chip is configured using Nengo, which was written by a computational neuroscientist, and users need an understanding of neuroscience to connect neurons up to synapses. Both approaches are too difficult, represent a huge hurdle, and are therefore prone to fail. We have chosen to be compatible with TensorFlow and the Keras library. That means you simply specify your layer dimensions, how many layers there are and of what type, and Akida does the rest.
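
    To make that concrete, here is a minimal sketch of the workflow, using the standard Keras API; the final conversion step is purely illustrative, since the exact BrainChip tooling and its API are not described above.

    from tensorflow import keras

    # Define the network exactly as for any Keras project: layer
    # dimensions, how many layers there are, and what type.
    model = keras.Sequential([
        keras.layers.Conv2D(32, (3, 3), activation="relu",
                            input_shape=(28, 28, 1)),
        keras.layers.MaxPooling2D((2, 2)),
        keras.layers.Flatten(),
        keras.layers.Dense(10, activation="softmax"),
    ])

    # Hypothetical step: hand the Keras model to the vendor toolchain,
    # which maps it onto the event-based hardware behind the scenes.
    # akida_model = convert(model)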

    The current deep learning / CNN / AI market understands arrays of multipliers. These are math co-processors that implement only the multiplication and addition functions of neural networks. The networks are defined in TensorFlow / Keras using the Python programming language, and the actual neural network code runs as a Keras library function on the CPU (hidden from the user). These are ‘static’ neural networks: they have no ability to learn once deployed, but are trained beforehand using an optimizing technique based on successive approximation and back-propagation of errors through the network. Akida, by contrast, is a neuromorphic chip. That means it contains complete neuron and synapse circuits, not just multipliers and adders. It is ‘neuromorphic’: it imitates the brain and learns in a similar manner, in seconds.
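
    For comparison, the ‘static’ training described above looks roughly like this in standard Keras, reusing the model from the sketch above; the random arrays are placeholders just so the sketch runs end-to-end.

    import numpy as np

    # Placeholder labelled training data (64 fake 28x28 images).
    x_train = np.random.rand(64, 28, 28, 1).astype("float32")
    y_train = np.random.randint(0, 10, size=64)

    # Back-propagation training: weights are found by successive
    # approximation, pushing errors backwards through the network.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5)

    # After fit() the weights are frozen: the deployed network cannot
    # learn a new object without being retrained offline.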

    Akida was engineered to have the look and feel of existing networks. A library sits between the user and the hardware functions, making the hardware invisible. The Akida neuromorphic processor chip is in many respects similar to the IBM TrueNorth and Intel Loihi neuromorphic processors, but with the ease of use of the TensorFlow and Keras libraries that data engineers are used to. We use the same library functions, but process the network in a neuromorphic, event-based way in hardware.

    We implemented the entire neural function in hardware, while others only implement the multiplier/addition functions and do everything else in software.

    Advantage 1: less impact on the host CPU memory and processing bandwidth. There is also evidence that sparse networks are more robust to adversarial attacks.
    Our design is, like the brain, event-based. That means data is processed directly in dedicated hardware, without executing instruction code to create neurons (the algorithm is embedded in the dedicated hardware itself).

    Advantage 2: sparse processing. The circuits only draw power when data is generated, giving low power consumption and roughly 50% fewer neural operations (see the sketch below).

    Advantage 3: low power. Operating in the range of microwatts to milliwatts means almost no heat is generated.
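
    Here is a toy illustration of that event-based idea in plain Python/NumPy; all names and sizes are illustrative, not the actual Akida internals. Work is done only for the inputs that actually fired, so compute (and hence power) scales with activity rather than with input size.

    import numpy as np

    weights = np.random.rand(100, 10)   # synapses: 100 inputs -> 10 neurons
    events = [3, 17, 42]                # indices of inputs that fired

    # Conventional dense approach: every input is processed, active or not.
    dense_input = np.zeros(100)
    dense_input[events] = 1.0
    dense_out = dense_input @ weights   # 100 x 10 multiply-accumulates

    # Event-based approach: accumulate only the rows for inputs that fired.
    event_out = np.zeros(10)
    for i in events:                    # 3 x 10 accumulates: work scales
        event_out += weights[i]         # with the number of events
    assert np.allclose(dense_out, event_out)
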
    Due to event-based processing, it is possible to learn in the same way the brain learns. The brain learns patterns through spike-timing-dependent plasticity (STDP): if a data input event contributed to an output event, the weights of that pattern are increased; otherwise they are decreased. Thus a neuron becomes more selective to specific repeating patterns.

    Advantage 4: rapid on-chip learning, enabling on-chip training in seconds and incremental learning of new objects.
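
    Here is a minimal sketch of that STDP-style rule for a single neuron, assuming a simplified binary-spike model; the learning rates, clipping range, and function name are illustrative, not BrainChip's actual rule.

    import numpy as np

    def stdp_update(weights, input_events, output_fired,
                    lr_plus=0.01, lr_minus=0.005):
        """Strengthen synapses whose inputs contributed to an output
        event; weaken the synapses that stayed silent."""
        if output_fired:
            weights[input_events] += lr_plus    # potentiate contributors
            weights[~input_events] -= lr_minus  # depress silent inputs
        return np.clip(weights, 0.0, 1.0)

    w = np.full(8, 0.5)
    spikes = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=bool)
    w = stdp_update(w, spikes, output_fired=True)
    # Repeating this pattern makes the neuron increasingly selective to it.
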
    In the Akida design, neurons communicate with one another through synapses over a bus structure. The IBM TrueNorth processor instead uses a crossbar to connect neurons and synapses, where each junction consists of a neuron axon, an enabling bit stored in memory, and a synapse input. Three wires per junction, multiplied by 256 million synapses, is a lot of wiring. Hence their chip is huge (2.5 cm x 2.5 cm) and therefore very expensive.

    Advantage 5: our chip is small and hence cheap.
    The Akida design combines spiking neural network technology with convolution and pooling. Other neuromorphic chips have no pooling or convolution functions and must perform them in software.

    Advantage 6: far fewer neurons for image-based processing. For example, we did odor classification with a fraction of the neural fabric of one Akida chip, while Intel did similar odor classification with a system containing 80 chips. IBM did gesture recognition using 97% of the neurons in TrueNorth; we did the same gesture recognition, with the same data set and accuracy, using only 6% of the Akida neural fabric (leaving 94% available for other tasks).

    I LOVE OUR COMPANY AND HOW WE ARE TRAVELLING.....


    Last edited by Mr Tech Laden: 31/01/21
 