BRN 1.28% 19.8¢ brainchip holdings ltd

2021 BRN Discussion, page-1488

    This very recent interview, all about where Intel is up to in the neuromorphic space, should remove any doubts that Brainchip has an actual current competitor. The interview occurred on 1 March, 2021. It covers a lot of territory and is well worth a read in full, but I have extracted the part specifically relating to Intel's current failures and problems getting their neuromorphic program to work at all, and, when it does work, to not be so expensive as to make it unattractive to the market - but apart from that it is a winner (not). Not sure, looking at it before posting, whether the formatting will work; if not I will do some pruning and put it up without the bells and whistles. Should say my opinion only so DYOR.:

    https://www.anandtech.com/show/16515/the-intel-moonshot-division-an-interview-with-dr-richard-uhlig-of-intel-labs

    The Intel Moonshot Division: An Interview with Dr. Richard Uhlig of Intel Labs

    by Dr. Ian Cutress on March 2, 2021, 9:00 AM EST

    Intel Labs Research Today: Neuromorphic

    IC: On the neuromorphic computing side, we are also starting to see product come to market. The Loihi chip, built on 14nm with 128,000 neurons, scales up to Pohoiki Springs with 768 chips and 100 million neurons, all for 300 watts – Intel's slide deck says this is the equivalent to a hamster! Intel recently promoted an agreement with Sandia National Laboratories, starting with a 50 million neuron machine, scaling up to a billion neurons, or 11 hamsters' worth of neurons, as needed when research progresses.
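The scaling figures quoted there can be sanity-checked with some quick arithmetic. This is just a rough sketch; the per-chip figure of 131,072 neurons is Loihi's commonly published spec, consistent with the "128,000" rounded in the interview:

```python
# Rough sanity check of the Pohoiki Springs scaling figures quoted above.
neurons_per_chip = 131_072   # published Loihi spec, ~128k as quoted
chips = 768                  # Pohoiki Springs chip count

total_neurons = neurons_per_chip * chips
print(f"{total_neurons:,} neurons")   # ~100.7 million, matching the "100 million" claim

# Energy budget per neuron at the quoted 300 W system power
watts = 300
print(f"{watts / total_neurons * 1e6:.2f} microwatts per neuron")
```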

    IC: Can neuromorphic computing simply be scaled in this way? Much like interconnect performance is the ultimate scale-out limiter for traditional compute, where is neuromorphic computing headed?

    RU: We're exploring two broad application areas. The first is small configurations, maybe just a single Loihi chip in an energy-constrained environment, where you may want to do some on-the-fly learning close to the sensor with the data. You might want to, for example, build a neuromorphic vision sensor. The other fork is to look at these bigger configurations where we're clustering together lots of Loihi chips. There you might be trying to solve a different problem, like a constraint satisfaction problem or similarity search across a large dataset. Those are the kinds of things that we would like to solve with [large amounts of Loihi]. Incidentally, we have a neuromorphic research community, the INRC, that we collaborate with, as an example of where we work with academic researchers to enable them with these platforms to look at different areas.

    But to answer your question: what are the limiters to building the larger configurations? It's not so much the interconnect, it's a matter of fabric design, and we can figure that out. Probably the biggest issue right now is that if you look inside a Loihi chip, there's the logic that helps you build the neuron model and run it efficiently as an event processing engine, but [there's also] a lot of SRAM. The SRAM can be low power, but it's also expensive. So as you get [to] really large clusters of networked-together SRAM, it's an expensive system. We have to figure out that memory cost problem in order to really be able to justify these larger Loihi configurations.
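The "neuron model run as an event processing engine" that Uhlig describes can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron. This is a toy sketch with made-up parameters, not Intel's actual Loihi neuron model:

```python
# Minimal leaky integrate-and-fire (LIF) neuron - an illustrative sketch of
# the kind of neuron model a neuromorphic chip runs as an event processor.
# Parameters (weight, leak, threshold) are arbitrary, not Loihi's.
def lif_run(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Return the output spike train for a binary input spike train."""
    v = 0.0                         # membrane potential
    out = []
    for s in input_spikes:
        v = v * leak + weight * s   # decay, then integrate the incoming event
        if v >= threshold:          # fire and reset when threshold is crossed
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

print(lif_run([1, 1, 0, 1, 1, 1, 0, 0]))  # [0, 1, 0, 0, 1, 0, 0, 0]
```

Note that the neuron only does work when an event (spike) arrives; between events the potential just decays, which is where the energy efficiency argument comes from.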

    IC: So the cost is more dollars than die area, so stacking technologies are too expensive?

    RU: It is expensive, and however you slice it, it's going to be costly from a cost-per-bit perspective. We have to overcome that in some way.

    IC: You mentioned that for smaller applications, a vision engine processing model is applicable. So to put that into perspective, does that mean Loihi could be used for, say, autonomous driving?

    RU: It might be a complement to the other kinds of sensors that we have in autonomous vehicles - we've got regular RGB cameras that are capturing visual input, but LIDAR [is also] useful in autonomous vehicles, [which] could be another sensor type. The basic argument for having more than one is redundancy and resiliency against possible failure, or [avoiding] misperception of things, which just makes the system safer overall. So the short answer is yes.

    IC: One of the things with neuromorphic computing, because it's likened so much to brain processing, is the ability to detect smells. But what I want to ask you is: what is the weirdest thing you've seen partners and developers do with neuromorphic hardware that couldn't necessarily easily be done with conventional computing?

    RU: Well, that's one of them! I think that's a favorite, teaching a computer how to smell - so you already took my favorite example!

    But I think the results that we're getting around problems like similarity search are quite interesting. If you imagine you've got a massive database of visual information and you want to find similar images - things that look like a couch, or [have] certain dimensions, or whatever - being able to do that in a very energy-efficient way is kind of interesting. [It] can also be done with classical methods, but that's a good one [for neuromorphic]. Using it in control systems, like a robotic arm controller - those are interesting applications. We really are still at that exploratory stage to understand what the best ways are that you could do stuff - sometimes for control systems you can solve them with classical methods, but it's just really energy consuming, and the methods for training the system make it less applicable in dynamically changing environments. We're trying to explore ways that neuromorphic might be able to tackle those problems.
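The similarity-search task Uhlig mentions can be sketched with the classical method he contrasts against: compare a query embedding to every stored embedding and return the closest match. The vectors and labels here are made up to stand in for image embeddings:

```python
import math

# Classical sketch of the similarity-search task: find the stored vector
# most similar to a query, using cosine similarity. Vectors stand in for
# image embeddings; all values are made up for illustration.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

database = {
    "couch":  [0.9, 0.1, 0.3],
    "chair":  [0.5, 0.5, 0.5],
    "banana": [0.1, 0.9, 0.0],
}
query = [0.85, 0.15, 0.35]   # an embedding of an image that looks like a couch

best = max(database, key=lambda k: cosine(query, database[k]))
print(best)  # couch
```

The neuromorphic pitch is that the brute-force comparison above scales linearly in database size and energy, which is exactly the cost Loihi-style hardware aims to cut.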

    IC: One of the examples you've mentioned is kind of like an image tag search - something that typical machine learning might do. If we take YouTube, when it's looking for copyrighted audio and clips, is neuromorphic still applicable at that scale?

    RU: One straightforward application for neuromorphic is that we were looking at artificial neural networks, like a DNN or CNN, that would be trained with a large dataset. Once it's been trained, we transfer it over into a spiking neural network (or SNN), which is what Loihi runs, and then see that, once trained, we can run the inference part of the task more efficiently.

    That's a straightforward application, but one of the things that we're trying to explore from a research point of view with Loihi is: how can it learn with less data? How can it adapt more quickly, without having to go back to the extensive training process where you run a large labelled dataset against the network?
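The ANN-to-SNN conversion Uhlig describes is commonly done with rate coding: a trained ReLU unit's analog activation is approximated by the firing rate of a spiking unit driven through the same weights. A toy sketch of that idea, with made-up weights and inputs (nothing here is from Loihi itself):

```python
# Sketch of ANN-to-SNN conversion via rate coding. A trained ReLU unit's
# activation is approximated by the firing rate of a spiking unit that
# integrates the same weighted input. Weights/inputs are hypothetical.
w = [0.4, -0.2, 0.3, 0.1]          # "trained" weights (made up)
x = [0.9, 0.5, 0.7, 0.2]           # input features (made up)

current = sum(wi * xi for wi, xi in zip(w, x))
ann_out = max(0.0, current)        # ReLU activation of the source ANN

# Rate-coded spiking unit: integrate the same current each timestep and
# spike when the membrane potential crosses threshold ("soft reset").
T, threshold, v, spikes = 1000, 1.0, 0.0, 0
for _ in range(T):
    v += current
    if v >= threshold:
        spikes += 1
        v -= threshold             # soft reset preserves residual charge

rate = spikes * threshold / T      # firing rate approximates the activation
print(round(ann_out, 2), round(rate, 2))  # both ~0.49
```

This is why the conversion only speeds up inference: the expensive gradient-based training still happens in the conventional ANN, which is exactly the limitation the "learn with less data" research question targets.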

    IC: Brains take years to train using a spiking neural net method - can Intel afford years to train a neuromorphic spiking neural network?

    RU: That's one of the big unanswered questions in AI. Biological brains experience the environment and they learn continuously, and it can take years. But even then, in the early stages they can do remarkable things - a child can see a real cat, and then see a cartoon of a cat, and generalize from those two examples with very little training. So there's something that happens in natural biological brains that we aren't quite able to replicate. That's one of the things that we're trying to explore - I should be really clear, we've not solved this yet, but that's one of the interesting questions we're trying to understand.

    IC: The Loihi chip is still a 2016 design - are there future hardware development plans here, or is the work today primarily focused on software?

    RU: We are doing another design, and you'll be hearing more about that in the future. We haven't stopped on the hardware side - we've learned a lot from the current design, and [we're] trying to incorporate [what we've learned] into another one. At the same time, I would say that we really are trying to focus on what it is good for and what applications make the most sense, and that's why we have this methodology of getting working Loihi systems out into the hands of researchers in the field. I think that's a really important aspect of the work - it is more of that workload [and] exploration software development.
