
DMOR on The only Autonomous Learning Device on Earth

    While I was put in the naughty corner, my eyes turned square trying to find clues about the claims made by BrainChip, namely that it is the only autonomous learning device on the planet. It would take at least a bachelor's degree in computer science to really understand the science behind the claims made here, but we don't have that luxury, so all we are left with is our capacity to search for bits of info bearing on the credibility of the claim. And I don't mean simply Googling a word, coming up with something like "IBM has created a chip that mimics the human brain with 1 million neurons and 256 million synapses" and going Bingo! Haha, BrainChip's claim is a fraud! Our friend Bris Bogus and a few others should know that well. I mean going much further than that and trying to really understand why BrainChip has made those claims. A lot of things look similar on the surface until you really get technical, and in the field of AI the concepts are hard to grasp, above all the concept of a chip that can 'learn'.

    I find this video by Henry Markram sums up the current advances in AI very nicely:



    Notice that none of the different approaches have been swept under the carpet by BrainChip; they have all been openly addressed:

    The Neural Cognitive Computing Science Sector
    BrainChip also participates in the neural cognitive computing science sector. The sector is made up of a significant number of well-known companies including Cisco, IBM, Intel, Google, Microsoft, nVidia, Qualcomm and Samsung. The companies operating in this sector are all evolving their own versions of neural architecture designed across different platforms and utilising various techniques in order to achieve their desired results. BrainChip is uniquely positioned within this sector as a developer of a “hardware only” solution with significantly higher performance and low power consumption as opposed to the software solution that is available to the industry today.

    Interestingly, in that same video above it says that the IBM TrueNorth chip has 1-bit binary synapses. I am not a computer scientist, but it doesn't take one to know that this is a huge difference from the BrainChip synapse, which contains 223 logic gates! 1 versus 223! TrueNorth may contain a lot more neurons and synapses per neuron, but something about quantity versus quality springs to mind:

    All the processes of a biological synapse are realistically represented in 223 gates, including feedback, the synaptic cleft and neurotransmitter receptors on the neuron membrane.
    http://www.academia.edu/5425886/A_Platform_Technology_For_Brain_Emulation_Updated_9-05-2013
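
    To make that quantity-versus-quality point concrete, here is a toy Python sketch of my own (nothing like either company's actual circuits, just an illustration of the concept): a 1-bit synapse can only be wired on or off at configuration time, whereas a graded synapse carries a weight that a learning rule can keep adjusting while the chip runs.

```python
# Toy illustration only -- not IBM's or BrainChip's actual designs.
# A 1-bit synapse is either "connected" or "not connected" and stays that
# way; a graded synapse carries state that learning can modify in-place.

class BinarySynapse:
    """TrueNorth-style 1-bit synapse: fixed on/off, set at configuration time."""
    def __init__(self, connected: bool):
        self.connected = connected

    def transmit(self, spike: bool) -> float:
        return 1.0 if (spike and self.connected) else 0.0


class GradedSynapse:
    """Multi-state synapse: its weight can strengthen or weaken during operation."""
    def __init__(self, weight: float = 0.0):
        self.weight = weight

    def transmit(self, spike: bool) -> float:
        return self.weight if spike else 0.0

    def adapt(self, delta: float):
        # A learning rule (e.g. STDP) nudges the weight up or down over time.
        self.weight = min(1.0, max(0.0, self.weight + delta))
```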

    When I spoke to Robert Mitro, he said that the TrueNorth chip does not learn, and below is some evidence of that:

    The neurosynaptic cores are chips each containing 256 neurons with a connection network of 1024 axons - making a total of 1024x256 synapses. The neuronal model is based on a leaky integrate-and-fire action, which is a relatively commonly used neural network. Unlike real neurons which emit a series of pulses when they fire, these artificial neurons simply fire when the activity on their inputs summed over time exceed a threshold. This is the "integrate-and-fire" part of the description. The "leaky" simply means that the effect of an input dies away over time so that the neuron doesn't have an unlimited memory of past activity. It is generally agreed that this is a reasonable baseline model of the behavior of a biological neuron but notice that it doesn't contain any element of learning in terms of modifying the strengths of the connections - so it only models the behaviour of a fully trained brain.
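
    For anyone curious, the leaky integrate-and-fire behaviour described above is simple enough to sketch in a few lines of Python. This is the generic textbook model with made-up constants, not TrueNorth's implementation; notice that nothing in it modifies connection strengths, which is exactly the "no learning" limitation the article points out.

```python
# A minimal leaky integrate-and-fire (LIF) neuron, per the quoted description:
# inputs are summed over time, the sum "leaks" away, and the neuron fires
# when a threshold is crossed. Constants are illustrative, not TrueNorth's.

def lif_neuron(input_currents, leak=0.9, threshold=1.0):
    """Yield True at each time step where the neuron fires a spike."""
    potential = 0.0
    for current in input_currents:
        potential = potential * leak + current   # integrate with leak
        if potential >= threshold:
            yield True
            potential = 0.0                      # reset after the spike
        else:
            yield False

# A burst of input drives a spike; in isolation the potential just decays.
# Nothing here ever changes a connection strength -- no learning.
print(list(lif_neuron([0.4, 0.4, 0.4, 0.0, 0.0, 0.4])))
# [False, False, True, False, False, False]
```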

    The overall architecture of the simulation was based on a model of the brain of a monkey with the 2 billion neurosynaptic cores divided into 77 brain-inspired regions each with their own probabilistic white matter or grey matter connectivity. Again this is not an unreasonable way to organize things but it is a fairly crude approximation to the structure of a real brain. The truth of the matter is that we really have no idea how accurately we have to reproduce brain structure to reproduce the behavior. Perhaps random connections with the same statistics as the real brain is good enough - but as the neurons in the simulation have no learning capacity it is even more limited than it seems at first.
    http://www.i-programmer.info/news/1...-truenorth-simulates-530-billion-neurons.html

    So the fundamental difference with TrueNorth, for all its spectacular publicity stunts, is that it doesn't even learn. Everything is simulated on a supercomputer before being copied into hardware for a specific use. BrainChip, by contrast, learns in hardware just like a biological brain: training will be done on the hardware, and the hardware will continue to learn even after training. All knowledge acquired during training will be copied to a Knowledge Library to be loaded onto a different, untrained chip. My understanding is that the BrainChip platform is not application specific, in that you can have the same chip doing many different tasks depending on the training models used. Just as your smartphone can do many different things depending on what apps you have downloaded, I think the BrainChip neural platform will be the same, except that it will perform neural functions instead.
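
    If my understanding of the Knowledge Library is right, in software terms it would work roughly like the sketch below. This is purely hypothetical code of my own; none of these function or file names are real BrainChip APIs.

```python
# Hypothetical sketch of the "Knowledge Library" idea as I understand it:
# synaptic weights learned on one chip are exported to a portable file and
# loaded onto a blank, untrained chip. Not a real BrainChip API.
import json

def export_knowledge(trained_weights: dict, path: str) -> None:
    """Dump weights learned in hardware to a portable 'knowledge' file."""
    with open(path, "w") as f:
        json.dump(trained_weights, f)

def load_knowledge(path: str) -> dict:
    """Load previously learned weights onto an untrained device."""
    with open(path) as f:
        return json.load(f)

# Train chip A on a task, save its knowledge, then boot chip B with it:
# export_knowledge(chip_a_weights, "tone_recognition.knowledge")
# chip_b_weights = load_knowledge("tone_recognition.knowledge")
```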

    A new device has been created that mimics the learning, association and processing techniques of the brain. This approach to Artificial Intelligence is to create an electronic information carrier that has structure allowing it to be trained, and is capable of acquiring complexity through learning rather than attempting to recreate the complexity of a fully developed brain and mind by programming
    http://www.academia.edu/5425886/A_Platform_Technology_For_Brain_Emulation_Updated_9-05-2013

    Even in the actual papers by Dharmendra Modha, we can see that the TrueNorth chip suspiciously acts just like a programmed device when recognizing patterns:
    http://www.modha.org/papers/012.CICC1.pdf

    (Should be a diagram here but couldn’t cut and paste so refer to the link above)

    Fig. 6. (left) Pixels represent visible units, which drive spike activity on excitatory (+) and inhibitory (-) axons. (middle) 16 × 16 grid of neurons spike in response to the digit stimulus. Spikes are indicated as black squares, and encode the digit as a set of features. (right) An off-chip linear classifier trained on the features, and the resulting activation. Here, the classifier predicts that 3 is the most likely digit, whereas 6 is the least likely

    So my interpretation of the above is: for TrueNorth to recognize something, in this case the number '3', the image is digitised into pixels and presented to the chip, and the neurons then spike in response and produce a set of features. THEN an algorithm that is not on the chip, trained separately, predicts the most likely answer based on those features. So my conclusion is: nothing is actually learned on the chip containing the neurons; it just encodes the image and relies on an external program to predict what the feature is likely to be! If anyone interprets this differently than I have, please feel free to elaborate, as I think TrueNorth is nothing more than clever programming.
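
    Here is roughly what that two-stage pipeline looks like in code, as I read Fig. 6: a fixed, non-learning encoder standing in for the chip, and an ordinary off-chip linear classifier doing all the actual learning. The shapes and the random projection are my own stand-ins; the real chip's spike encoding is obviously far more sophisticated.

```python
# Sketch of the Fig. 6 pipeline as I read it: the "chip" stage is fixed and
# does not learn; a separate, conventionally trained linear classifier makes
# the prediction. Shapes and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the chip: a fixed projection from 28x28 pixels to 256 "spikes".
projection = rng.standard_normal((256, 784))

def chip_encode(pixels: np.ndarray) -> np.ndarray:
    """Map a flattened digit image to 256 binary spike features (no learning)."""
    return (projection @ pixels > 0).astype(float)

def train_linear_classifier(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Least-squares fit of a 10-way linear readout -- this runs off-chip."""
    onehot = np.eye(10)[labels]
    return np.linalg.lstsq(features, onehot, rcond=None)[0]

def predict(weights: np.ndarray, pixels: np.ndarray) -> int:
    """The chip encodes; the external classifier decides which digit it was."""
    return int(np.argmax(chip_encode(pixels) @ weights))
```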

    The groin-itching question now is: how would BrainChip do it any differently? It would probably take the knowledge of Peter van der Made to answer this question correctly, IMO. But let's go back to the prototype that was tested as part of the patent application here:

    Here the test results of a prototype Synthetic Neuro-Anatomy chip, containing  a hardware emulation of ten biological neurons are discussed. This 'proof of concept' chip was constructed using an Actel ProASIC3 programmable gate array (FPGA) which consists of generic functional 'tiles' that can be configured and interconnected using a fuse diagram. The ProAsic3 FPGA contains 64K tiles.
    The FPGA fuse diagram was developed using Actel's Libero 7 IDE PC software, an Actel experiment board and an Actel FlashPro3 programmer. The design consists of custom VHDL code, plus VHDL code generated from our schematic diagrams which contain Actel library components. Each neuron was configured to have 24 synapses, and comprises the equivalent of just over 6000 gates. Hence, in 64K tiles only ten neurons could be constructed (without optimization). This was considered sufficient for a proof of concept test. The first tests were completed at the end of 2007. This design was used as the foundation for a US patent application. The patent has been granted (uspto.gov patent no. 8,250,011, granted in 2012). Since then additional functions have been added to improve the accuracy of the emulation, including long and short persistence neurotransmitters and increased precision in STDP - BCM learning.
    After programming the FPGA fuse matrix the chip was tested in a sound recognition trial. The test setup consisted of a signal generator, an artificial cochlear (spectrum analyzer), an Actel experiment board containing the programmed ProAsic3 FPGA and a PC to monitor the process. The PC was connected to the experiment board's JTAG interface, enabling it to monitor all aspects of the design.
    The objective of this test was to prove the learning ability of the Synthetic Neuro-Anatomy design. Ten frequencies were selected, from 220 Hz up to 587 Hz, at intervals that represented whole notes A, B, C, D, E, F, G, A', B', C'. These were applied one by one to the spectrum analyzer. The output of the spectrum analyzer was applied to the inputs of the synthetic neural matrix contained in the FPGA.
    At first none of the synthetic neurons responded to input signals. After exposing the synthetic neural matrix for several seconds to input pulses from the spectrum analyzer, changes in the synaptic registers were observed that indicated that the synthetic cells were learning. This process was continued for a total of nearly 6 minutes on all frequencies (30 seconds per frequency, plus set-up time). At this time the registers had settled, with synapses and neurons responding to the frequencies that they had learned. Next, the signal generator was disconnected and an audio signal of a recorded human voice was applied to the input of the spectrum analyzer. The synthetic neural matrix responded to the previously learned frequencies whenever they occurred in human speech. This indicates that the device is capable of autonomous learning from sensory input, and that the synthetic neurons perform the same function as the neurons in the auditory channel of the human midbrain.
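
    My lay reading of that trial, boiled down to a toy Hebbian/STDP-flavoured sketch (my own drastic simplification, not the patented circuit): each input channel is one frequency bin from the "cochlea", and a synapse that repeatedly carries spikes strengthens until its neuron responds to that frequency on its own.

```python
# Toy sketch of the tone-learning trial -- a drastic simplification of
# STDP-style learning, not the actual Synthetic Neuro-Anatomy design.

NUM_CHANNELS = 10        # whole notes A..C' from the spectrum analyzer
LEARN_RATE = 0.05
THRESHOLD = 0.8

weights = [0.0] * NUM_CHANNELS   # at first nothing responds, as in the test

def expose(channel: int, repetitions: int) -> None:
    """Repeatedly present one frequency; its synapse strengthens each time."""
    for _ in range(repetitions):
        weights[channel] = min(1.0, weights[channel] + LEARN_RATE)

def responds(channel: int) -> bool:
    """After learning, that synapse alone pushes the neuron over threshold."""
    return weights[channel] >= THRESHOLD

for ch in range(NUM_CHANNELS):
    expose(ch, repetitions=30)   # roughly 30 s of pulses per frequency

# Play back speech: only the previously learned frequencies trigger a response.
print(all(responds(ch) for ch in range(NUM_CHANNELS)))   # True
```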


    So, going by the prototype test above, BrainChip's recognition capability is learned on the chip itself, without any programming intervention or simulating the training in software first. That has to be the fundamental difference that sets BrainChip apart, IMO. If the same test were done on TrueNorth, the chip couldn't produce any result, because it simply doesn't learn: nothing happens at the synaptic level, because its synapse contains only 1 bit. If we take the same image recognition example of the figure '3' that was done on TrueNorth, the way BrainChip would do it is to present the image to the chip in the same digitised form; the chip then learns by remembering the input pulse streams associated with that image, so that the next time the same stimulus is presented it will respond. How we interpret what BrainChip has learnt is where the API comes in, I guess.
    The reason why several top neuroscientists, and even a computational neuroscientist like Prof Jeff Krichmar, endorse it is BrainChip's capacity to resemble a real biological neuron. Whatever doubts those clueless trolls try to instil about the endorsements, the scientists' immense credentials stand tall.

    Disclaimer: All of the above is my own interpretation based on basic research only; I have no qualifications in computer science. For unbiased and fully qualified opinions, please seek advice from the likes of Bris Vegas, Eshmun and qqqqqqqqq!

    To be continued...............
 