I have posted below a link to an interview with the former CEO of Brainchip, Mr Dinardo. I suspect that many newer investors will never have heard or read this interview. It is old, from 2020, but my reason for keeping track of these old interviews, like many of the long-term holders, is that it allows you to look for consistent or inconsistent narratives from the company you are invested in, by comparing what was said with what has happened and with what the company is now stating.
Newer investors will no doubt have read that longer-term investors hold a strong belief in the truth of statements made by Brainchip. This is not, as some would have you believe, based on emotional attachment; rather, it is based upon an ongoing, deep forensic examination of the narrative and the fact that it has proven itself to be entirely consistent. That does not mean the company has not made mistakes, but it has always, on the evidence of this forensic examination, been truthful and up front about how, why and what it is doing to correct the error or errors.
Not that I would recommend it, but from probably mid 2019, perhaps a little earlier, my alter ego Fact Finder became well known for not accepting anything the company or its people said until I could find some form of independent corroboration. As it transpired, I always managed to do so, and by the end of 2020 I was convinced that I had found one of those rare offerings on a stock market: a company staffed by entirely honest and trustworthy individuals.
That said, I would encourage you to listen to the full interview using the link, or just read the extract, and compare it to what you must now know has come to pass since this presentation was delivered, and what is likely still sitting in the wings.
My opinion only DYOR
FF
AKIDA BALLISTA
https://www.finnewsnetwork.com.au/archives/finance_news_network274977.html
All the interfaces at the top were industry standard: 3.0, PCIe 2.1, I2S for audio, I3C for sensor inputs. That could be pressure, temperature, flow, vibration, any real-world phenomenon that you want to acquire and provide analytics at the edge. That's what the device looks like.
This will give you some sense of really what is impressing customers or potential customers, I should say. To the left, you've got your standard... I mean, these are very sophisticated, but I'll call them standard data-based convolutional neural networks. CNNs. They've been around a long time and they kind of dominate the landscape now. They tend to be big players in the hyperscale or data centre arena. These are very, very difficult to do at the edge. If you look down the list, you'll see you need an external CPU, you need external memory, and you probably need a math accelerator to keep up with all the matrix multiplication that's necessary. These things can be 20 layers deep, 50 layers deep. Some are very, very complex networks. I'll show you some benchmarks in just a moment. It's very math-intensive: MACs, or multiplier-accumulators. It's basically very, very high-speed math, millions and millions and millions of calculations. They tend to be relatively inefficient. If you're using a GPU, you could be in a category of 40 to 100 watts. That's far too much power to put in an edge device.
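To give a rough feel for why standard CNNs are so math-intensive, here is a minimal sketch of my own (not from the interview) that counts the multiply-accumulate (MAC) operations in a single convolutional layer; the layer dimensions are purely hypothetical.

```python
# Rough MAC count for one convolutional layer (illustrative figures only).
# MACs = output_height * output_width * output_channels
#        * kernel_height * kernel_width * input_channels

def conv_macs(out_h, out_w, out_c, k_h, k_w, in_c):
    """Multiply-accumulate operations for a single conv layer."""
    return out_h * out_w * out_c * k_h * k_w * in_c

# A modest 3x3 convolution on a 112x112 feature map, 64 -> 128 channels:
macs = conv_macs(112, 112, 128, 3, 3, 64)
print(f"{macs:,} MACs for one layer")   # ~925 million MACs

# A 20-50 layer network repeats work of this order at every layer, which is
# why data-centre CNNs lean on GPUs or dedicated math accelerators.
```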
The middle section is really where we're seeing a lot of activity right now. This is event-based convolution. We can do convolution, convolutional networks, on Akida. We do turn the data into spikes. We operate in the event-based domain, which gives us that benefit of playing off of sparsity and all the other things that we can accomplish in, eventually, a native spiking neural network, but you can see it's fully integrated. No CPU, no external memory, nor does it require an external accelerator, because we're not doing all of that matrix multiplication. Again, we're implementing the same convolutional neural networks here, but in the event domain, so these could be 20 to 50 layers deep, but we do get to play off the sparsity of the data, which is fewer operations and therefore less power.
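To make the sparsity argument concrete, here is a minimal sketch (my own illustration, not BrainChip code) of an event-based view of a single dot product: only the non-zero activations generate "events", so the work scales with the number of events rather than with the full activation size.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse activation vector: most entries are zero, as is typical after ReLU.
activations = rng.random(10_000) * (rng.random(10_000) < 0.1)  # ~10% non-zero
weights = rng.random(10_000)

# Dense (frame-based) accumulation touches every element.
dense_result = np.dot(activations, weights)
dense_ops = activations.size

# Event-based accumulation only processes the non-zero "events".
events = np.flatnonzero(activations)
event_result = np.dot(activations[events], weights[events])
event_ops = events.size

print(f"results match: {np.isclose(dense_result, event_result)}")
print(f"dense ops: {dense_ops}, event ops: {event_ops} "
      f"({event_ops / dense_ops:.0%} of the work)")
```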
You can see the last bullet is maybe the most impressive: efficient power. That's 50 microwatts, that's 50 millionths of a watt, up to maybe 4 watts. Compared to what you see in the data-based convolutional neural network, this can go in an edge device. This can go in a battery-operated device.
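To put those figures side by side (the power numbers come from the talk; the ratios are my own arithmetic):

```python
# Power comparison, using the figures quoted in the talk.
gpu_low_w, gpu_high_w = 40.0, 100.0       # data-centre style CNN on a GPU
akida_low_w, akida_high_w = 50e-6, 4.0    # event-based CNN on Akida

print(f"best case:  {gpu_low_w / akida_low_w:,.0f}x less power "
      "(50 uW vs 40 W)")                   # 800,000x
print(f"worst case: {gpu_low_w / akida_high_w:,.0f}x less power "
      "(4 W vs 40 W)")                     # 10x
```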
Then the third category is when you are truly native spiking neural network oriented. Similar attributes: no CPU, no external memory or accelerator, but the networks are shallow. They're two to five layers deep, so you get better latency. You don't have to go through all of the layers to get your answer. We do play off of sparsity as well. And you can see 50 microwatts to 2 watts. That's 50 millionths of a watt to 2 watts.
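A back-of-the-envelope way to see that latency benefit, assuming (purely for illustration, my assumption not the speaker's) a fixed per-layer processing time:

```python
# Illustrative only: assume each layer adds a fixed processing delay.
per_layer_us = 100        # hypothetical 100 microseconds per layer

deep_cnn_layers = 50      # event-based CNN, 20-50 layers deep
native_snn_layers = 5     # native SNN, 2-5 layers deep

print(f"deep CNN latency:   ~{deep_cnn_layers * per_layer_us} us")
print(f"native SNN latency: ~{native_snn_layers * per_layer_us} us")
# Fewer layers to traverse means the answer comes back sooner.
```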
The diagrams below just show you what it would take to do a standard CNN. You've got several devices that you need; it takes up space and sucks up a lot of power. Then you can see with Akida, once we get the pre-processed data, the device stands alone and needs no external support.
These are some benchmarks that we share with potential customers, and then potential customers running the ADE are doing their own validation. These are ranked from lowest power application to maybe some of the larger networks. If you look at the network configuration itself: keyword spotting, on, off, up, down, the ability to provide personalised keywording, keyword spotting, so that you can personalise the device. It's 38,000 parameters. It's the Google dataset of commands. We can do... We call it frames per second, because that's been an industry-standard term, but a frame implies that you're looking at an image, which is actually how keyword spotting is done, but nonetheless, that's also inferences per second, so frames per second or inferences per second. You can identify seven inferences, seven keyword classifications, per second.
The centre block, and this is for the more technical guys that also sent in questions, this is the input data size. You'll have 10 by 10. The last is really what would be colour. In this case, of course, it's one. Then as you move down, you can see that's... When you move over, you can see that's 150 microwatts to do keyword spotting. That's a very impressive number. Object detection: this is not classification, just detecting that an object is there. On a proprietary dataset, we're running five inferences per second. You can see what the input data size is, the number of classes that you're trying to identify, and you're running accuracy at 90% at 200 microwatts.
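One useful way to read these benchmark numbers is energy per inference, which is simply power divided by inference rate. A quick sanity check on the keyword-spotting row, using only the figures quoted above:

```python
# Energy per inference = power / inferences per second.
power_w = 150e-6          # 150 microwatts for keyword spotting
inferences_per_s = 7      # seven keyword classifications per second

energy_per_inference = power_w / inferences_per_s
print(f"{energy_per_inference * 1e6:.1f} microjoules per keyword inference")
# ~21.4 uJ per inference at these figures.
```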
I'm not going to go through the whole chart here, but if you look at the last row, this is a very, very large network. It's called Complex-YOLO; YOLO stands for "you only look once". It's 50 million parameters. Compared to the first line, which was 38,000 parameters, we can accomplish 50 million parameters with inferences or frames per second at 133, on a relatively large-size input data. Accuracy, at 65%, is about the best you're going to get with any convolutional neural network that's been implemented. We can do that with 4 watts, not 40 watts and not 100 watts.
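The same arithmetic applied to the Complex-YOLO row puts the two benchmarks in perspective (the figures are from the transcript; the comparison itself is my own framing):

```python
# Complex-YOLO row: 50 million parameters, 133 frames per second, 4 watts.
yolo_params, yolo_fps, yolo_power_w = 50_000_000, 133, 4.0
# Keyword-spotting row: 38,000 parameters.
kws_params = 38_000

energy_per_frame_mj = yolo_power_w / yolo_fps * 1e3
print(f"~{energy_per_frame_mj:.0f} mJ per Complex-YOLO frame")   # ~30 mJ
print(f"~{yolo_params / kws_params:.0f}x more parameters than "
      "the keyword-spotting network")                             # ~1316x
```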
These are the things that are exciting for us to be introducing to customers and editors. Customers are responding very well, potential customers are responding very well.
A little about licensing intellectual property, and again, this is the device on the left; the neural fabric and the data-to-spike converters are really what gets licensed. Builders of SoCs don't need our CPU complex, so they're going to have to handle all of their own housekeeping. They'll pick what interfaces their device has. Really, what they license are the cores, and they license the data-to-spike converter or converters. They can take all 80 cores, which is 20 nodes, or, for keyword spotting or other similar applications with fewer parameters, they might only want 4 nodes, which is going to be 16 cores, but that is up to the customer, and we will work with them to determine the size of their neural fabric to complete whatever task they have.
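The core/node arithmetic in that paragraph works out to four cores per node. A minimal sizing sketch (the helper names here are mine, not BrainChip's):

```python
CORES_PER_NODE = 80 // 20   # 80 cores across 20 nodes -> 4 cores per node

def fabric_cores(nodes):
    """Cores licensed for a given number of nodes in the neural fabric."""
    return nodes * CORES_PER_NODE

print(fabric_cores(20))  # full fabric: 80 cores
print(fabric_cores(4))   # smaller keyword-spotting fabric: 16 cores
```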
Additionally, what customers, potential customers, see as valuable is that you can run multiple networks. If you took all 80 cores, you could dedicate 10 of those cores, or, doing them as nodes, take 20 nodes, then you can take 3 or 4 or 5 of them and you could run one network to do object detection. You could take the other cores and have them do keyword spotting or some other network, but again, it's complete, it's on the chip. You're not running the network on a host CPU, so you can basically run multiple networks on a device simultaneously.
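A toy sketch of that partitioning idea, where one fabric runs several networks at once by dedicating groups of nodes to each. The allocation scheme and names are assumptions of mine for illustration, not the actual Akida runtime API:

```python
TOTAL_NODES = 20  # full Akida fabric in this example (80 cores / 4 per node)

# Hypothetical allocation: dedicate groups of nodes to independent networks.
allocation = {
    "object_detection": 12,
    "keyword_spotting": 4,
    "vibration_monitor": 4,
}

assert sum(allocation.values()) <= TOTAL_NODES, "over-allocated the fabric"

for network, nodes in allocation.items():
    print(f"{network}: {nodes} nodes ({nodes * 4} cores)")
# All networks run on-chip simultaneously; no host CPU is needed for inference.
```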
Talk about intellectual property licensing a bit more. There were a lot of questions about it, and I think in part that's because we've voiced a strong opinion, coming in advance of actual device sales. There's no manufacturing process involved. There's no inventory. There's no loan package qualification by the customer. We released that in 2019. We have received strong response from prospective customers. The ADE, in the hands of one major South Korean company, is being exercised almost as much as we exercise it. They really have dug in, validated some of the benchmark results that we've provided, and now they're moving on to some of their own proprietary networks to do validation.
We've targeted specifically vision and acoustic systems. Those are two places where, at the edge, there's a dominance of requirements. We also have cybersecurity working in the background as well. That is a native SNN, not an event-based CNN. In vision and acoustic we can do event-based CNN and, in collaboration with customers and potential customers, work toward moving them into a native SNN environment.
Certainly there's a lot of activity in the automotive industry. I got asked questions about what's going on with companies like Volkswagen and Bosch. Really, I think the shortest path to success is certainly working with the major automobile manufacturers so they put some top-down pressure, but when you look at companies like Bosch out of Germany, Valeo out of France, Continental, and here in the US Aptiv, which was formerly Delphi, ZF and several others, they build modules that they sell themselves to the automotive market.
Some of these are big companies. Valeo is a 20 billion euro a year revenue company. Continental and Aptiv have to be in the same kind of category. These are primarily going to be radar, LIDAR and cameras, maybe some ultrasound and cameras, for ADAS. At this juncture, levels one, two and three, which is ADAS, and certainly with the target being autonomous vehicles, levels four and five.
In other vision applications, vision and acoustic: smart cameras, smart home systems, a plethora of edge use cases. Here again you have the OEM manufacturers, the guys that build the cameras themselves and sometimes build entire systems, or you can work with tier one sensor manufacturers which want to incorporate incremental intellectual property into their device so they get more of the gross margin dollars. Here the leaders in the space are Sony, ON Semiconductor and OmniVision.
Amongst those you probably have all the cell phones in the world, at least the vast majority of them, as well as smart cameras. Partnering with the image sensor guys as well as module manufacturers in the automotive industry, we've taken a very, I think, diligent course in identifying in each of these marketplaces who are the most likely to be successful, who has market share, what do their future roadmaps look like, how are our existing relationships with those customers. As important is intersecting their design cycle at the right time. For us to generate near-term revenue, it can't be a company that has just released its last-generation module and is on a two- or three-year path to identifying and building their next. So intersecting at the right time. I think we've been fortunate in that regard as well.
Acoustic applications for smart homes include a lot of tier one suppliers in the US, in Europe and in China. I'll touch on China a bit more here. In China, for cameras, hubs of all kinds and peripherals. I just came back from Shanghai, Roger and I were in Shanghai, I guess it was a week and a half ago. It's an incredible amount of energy and an incredible amount of financing going into AI generally, and more specifically, AI at the edge. We'll touch on what our plans are in China in just a few moments.
In order to really attack the IP sale, it's a different selling process than selling chips. We have retained a group called SurroundHD, three guys we know very well -- very, very seasoned executives in the IP sales process, relationships with all of the tier one suppliers. They, with myself, Roger and Anil, and, when we can have his attention, Peter, are really the face to the customer for IP sales at this point. As we release the device itself, we'll have a more traditional semiconductor sales force, with manufacturers' reps in the US, distributors overseas, and in some cases there'll be global distributors as well.
In most of the markets in Europe, there are regional distributors that are very technically competent. Someone asked a question about our relationship with a company in Israel called Eastronics. Eastronics is a very, very technical... They call themselves a distributor because they do inventory product, but they really act as your manufacturer's rep on the ground. Now, we're just reducing to practice a contract to get it done. They've already started to work with us in the field and introduce customers. Somewhere along the line, one of you folks that have all this great diligence stumbled upon the fact that they've already started to promote our product. But that's moving well. That's in Roger's hands.
In Europe, with respect to IP, we've got an existing relationship with T2M. They're bringing us great opportunities. They are a major independent supplier of IP. They don't build SoCs; they basically market blocks of IP to their account base, which tend to be people that are building ASICs or systems on chips.
We're also in discussions in Japan. There was a question about our relationship with Socionext. We'll touch a little bit more on... The manufacturing or the development process has gone exceptionally well -- very, very strong team, both in San Jose here in California as well as in Shin-Yokohama in Japan. We couldn't be happier. Our team works very, very well with them. But we have had significant discussions about how to broaden that relationship, now that we're going to go away for fab. They build ASICs. They're the second largest ASIC supplier in the world, only behind Broadcom. They would be a phenomenal channel for us to have our IP block in their menu of alternatives that they can present to their customers for neural networks embedded in SoCs or ASICs.