Interview with Nandan Nayampally.
https://digitalcxo.com/video/leadership-insights-ai-at-the-edge/

Transcript
Mike Vizard: Hello, and welcome to the latest edition of the Digital CxO Leadership Insights series. I’m your host Mike Vizard. Today we’re with Nandan Nayampally, CMO for BrainChip, and they’ve created a processor that mimics the way the brain works. And it’s going to be used in a lot of interesting use cases that we’re going to jump into. Dan, welcome to the show.
Nandan Nayampally: Thanks, Michael.
Mike Vizard: A lot of people are a little dubious about how the brain works. So why is it a good thing to mimic the way our brain works, and what went into building this processor?
Nandan Nayampally: Well, firstly, the brain is probably the most efficient cognitive processor known to man, right? So naturally, there are a lot of good things that come with studying the brain, especially how to learn more efficiently, which is the critical part of artificial intelligence. What’s generally done today is a lot of very parallel compute, and that’s why GPUs and new accelerators have been created that do a lot of things in parallel. Now, the problem with that is that there are often computations that aren’t fully used and get thrown away. So it becomes very, very inefficient as you keep getting more and more complex models, right? Things like GPT-3, for example: just to train it on the cloud takes four weeks and $6 million. There are better ways to achieve those kinds of things, and that comes from the study of the brain.
Mike Vizard: So what exactly did you guys do to create this? I mean, how does that processor architecture work? And how long have you been building this thing?
Nandan Nayampally: It’s a very good question. So obviously, there’s a lot of study going on about how neurons work, and how they compute only when needed, right? And trigger forward computation only when needed. The founders of BrainChip, Peter van der Made and Anil Mankar, have been doing this research over the last 15 years. They actually built a lot of neuron models, and then realized that with pure neuron models (there are a number of other companies, like IBM and Intel, also doing neuromorphic computing, as it’s called), applying it to real-world problems is still far away if you truly build it exactly like the brain functions. So what BrainChip did, about five years ago, was start applying it to today’s problems. We have a hybrid approach: a traditional neuromorphic, neuron-driven model with a layer that does very well with today’s conventional neural networks, such as convolutional networks, deep learning networks and transformers. Applying the principles of only executing what is needed, and only executing when it’s needed, improves the efficiency substantially, while still delivering the kind of performance that you need. And when you think about AI in general, everybody thinks that AI is in the cloud. It’s only going to scale when you actually have more intelligent computation at the edge. Otherwise, you’re just going to clog up the network, and you’re just going to explode the compute on the cloud.
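To make the “only execute what is needed, only when it’s needed” idea concrete, here is a minimal, purely illustrative Python sketch contrasting dense computation with event-driven computation. It is not BrainChip’s architecture; the function names and the sparsity level are assumptions chosen for the example.

```python
import numpy as np

def dense_layer(activations, weights):
    # Conventional approach: every input contributes a multiply-accumulate,
    # even when the activation is zero and adds nothing to the result.
    return activations @ weights

def event_driven_layer(activations, weights):
    # Event-driven approach: only non-zero ("spiking") activations trigger
    # downstream work, so sparse inputs cost proportionally less compute.
    out = np.zeros(weights.shape[1])
    for i in np.flatnonzero(activations):
        out += activations[i] * weights[i]
    return out

rng = np.random.default_rng(0)
acts = rng.random(256) * (rng.random(256) < 0.1)   # roughly 90% of activations are zero
w = rng.standard_normal((256, 64))
assert np.allclose(dense_layer(acts, w), event_driven_layer(acts, w))
```

Both functions produce the same answer; the event-driven version simply skips the work for inputs that carry no signal, which is where the efficiency claim comes from.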
Mike Vizard: To that end, am I replacing certain classes of processors that exist today? Or is this an entirely new use case, and this processor will be used alongside other processors, and we’ll have more of this kind of hybrid processor architecture?
Nandan Nayampally: So yes, AI is a computational problem, right? You can do it on CPUs, you can do it on GPUs, you can do it on different kinds of accelerators. If you think about it, the AI computation use cases are all growing. So what we’ll see is more and more use cases at the edge that are smarter, that can learn. For example, today you have a Ring doorbell that recognizes faces, or at least recognizes there is a face, but it keeps reminding you that somebody showed up that you knew already. You don’t want to be disturbed if your neighbor walks past it and it says, “Somebody’s at the door.” They are naturally going to walk past. Now if you can train it to say, okay, this is my neighbor, don’t bother me if they’re not, you know, showing up at the door, that’s a use case that is new, right? You could do it on the CPU. You could do it on the GPU. But I think a lot of these use cases become more and more cost effective and efficient if they’re done with specialized computation. So I believe that we will have strong growth in the types of use cases that this enables. The PricewaterhouseCoopers view is that by 2030, the annual GDP impact from AI is about $15 trillion. And out of that, they estimate the AIoT, or artificial intelligence of things, industry (that is, hardware, software and services) is going to be over a trillion dollars. So there’s a huge market that’s developing, whether it is healthcare, monitoring vital signs and predicting, right? You can’t do that today, just because the computation, or the technology, is not there to do it in a portable, cost-effective way. You can start doing that on devices that you can embed, or maybe hearing devices that could be a lot more efficient, that can help you filter noise automatically and learn from your environment. There are customizations that you could do on your device, saying, “Hey, your car learns how you drive and helps you drive better,” for example. So there are lots of new use cases that will emerge that drive new computation paradigms like what we’re proposing.
Mike Vizard: How did you solve the training problem at the edge? Because, at least in my understanding, it takes a lot of compute power to train these models. So how did you get that down to a footprint that’s acceptable from an energy and heat perspective?
Nandan Nayampally: That’s a great question. So I want to be very clear: we’re not training on the edge, okay? At this point, the benefit of neuromorphics is being able to learn at the edge, but it still starts from a trained model. Right? So what we do is we take the trained model, and it’s already got features extracted; we use that to learn and extend the classes. So for example, if there’s a model that is recognizing faces, it’s on the device, but you can then teach it to recognize Mike’s face. Okay, so it’s still a face, but now, you know, it’s Mike’s face, and you can add that to the similar things. There are applications like pet doors, where they have cameras to allow the pet door to open or not depending on the type of pet. Today it recognizes between cats and dogs and other pets; you can now customize it to say, “Okay, this is my cat, and don’t let in the neighbor’s cat,” for example.
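As a rough illustration of what “extending the classes” of an already-trained model can look like, here is a small Python sketch of a prototype-based last layer: the feature extractor stays frozen, and new classes are added on-device from a handful of examples. The prototype approach and every name in it are assumptions made for the example, not BrainChip’s actual implementation.

```python
import numpy as np

class EdgeClassHead:
    """Illustrative on-device last layer: the pre-trained backbone stays frozen,
    and new classes are added as prototype vectors in its feature space."""

    def __init__(self):
        self.prototypes = {}  # class label -> mean feature vector

    def learn_class(self, label, feature_vectors):
        # "Teach it Mike's face": average a few embeddings from the frozen
        # feature extractor to form a prototype for the new class.
        self.prototypes[label] = np.mean(feature_vectors, axis=0)

    def classify(self, feature_vector):
        # Nearest-prototype lookup: no gradient descent, no cloud round-trip.
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(feature_vector - self.prototypes[c]))

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
head = EdgeClassHead()
head.learn_class("my_cat", rng.normal(0.0, 1.0, size=(5, 64)))
head.learn_class("neighbors_cat", rng.normal(3.0, 1.0, size=(5, 64)))
print(head.classify(rng.normal(3.0, 1.0, size=64)))   # likely "neighbors_cat"
```

The point of the sketch is that personalization only touches the final classification step; the expensive, cloud-trained part of the model is reused as-is.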
Mike Vizard: So to that point, will this narrow the amount of drift that we see in AI models over time? When somebody deploys these and we start to collect new data, there seems to be a need to update those models more regularly. So can we narrow that a little bit and kind of get more life out of the model before we need to replace it?
Nandan Nayampally: Yeah, I think you’ve hit the nail on the head. Every time you create a model, it’s expensive. Sending it to the cloud to retrain or customize is expensive. So this gives you a path for it. To some extent there’ll be more drift, but then you can actually pull it back together in the next generation. And the reality is, some of these drifts you don’t even want to go back to the cloud. Because if I’m training it to recognize my pet, or my face, I don’t want that to go to the cloud. That’s my privacy; that’s my security associated with it. So there’ll be some things that are relevant that need to go back to the cloud, and some things that are personalized that may not.
Mike Vizard: As we go along, how will you expose this to developers and to the data scientists that are out there? Is there some sort of SDK that they’ll invoke, or a set of APIs? Or how will we build the software stack for this?
Nandan Nayampally: Yeah, this is an excellent question, right? We can have the most elegant hardware, but if it’s not usable in a developer-friendly fashion, it doesn’t mean anything. I’ll make one comment as well on our learning, which is that because it’s on-device and last-layer only, we don’t actually save even the data on the device. It’s only stored as adjusted weights in the network. So it adds to the security, because even if the device is compromised, they only have weights, and that doesn’t really give the data away. So there’s a security and privacy layer that goes with it. We do have a very intelligent runtime that goes on top of our hardware, and that has an API for developers to utilize. We also plug into a lot of the frameworks that we have today, and partners like Edge Impulse provide a developer environment that can help people tune what they need to do for our platform.
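As an illustration only, a developer-facing API for a runtime of this kind might expose roughly the surface below. The names are hypothetical and are not BrainChip’s published SDK; the sketch just shows how inference and on-device class learning could be presented to developers in Python.

```python
from typing import Protocol, Sequence
import numpy as np

class EdgeAIRuntime(Protocol):
    """Hypothetical shape of a developer-facing edge-AI runtime API."""

    def load(self, model_path: str) -> None:
        """Load a model already trained and converted from a standard framework."""
        ...

    def infer(self, inputs: np.ndarray) -> np.ndarray:
        """Run on-device inference and return class scores."""
        ...

    def learn_class(self, label: str, samples: Sequence[np.ndarray]) -> None:
        """Extend the final layer with a new class; only network weights are
        updated and stored, the raw samples are never persisted on the device."""
        ...
```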
Mike Vizard: So how long before we start to see these use cases? You guys just launched the processors; it usually takes some time for all this to come together and for it to manifest itself somewhere. What’s your kind of timeline?
Nandan Nayampally: So I think the way to think about it is, the real growth in more radical, innovative use cases is probably, you know, a few months out, a year out. But what we’re saying is there are use cases that exist on more high-powered devices today that can now migrate to much more efficient edge devices, right? And I do want to make sure people understand that when we talk about the edge, it’s not the brick that’s sitting next to your network and still driven by a fan. It’s smaller than the bigger bricks, but it is still a brick. What we’re talking about is literally at-sensor, always-on intelligence, whether it’s a heart rate monitor, for example, or, you know, a respiratory rate monitor. You could actually have a very, very compact device of that kind. One of the big benefits that we see is, let’s say, video object detection: today it needs quite a bit of high-powered compute to do HD video object detection and target tracking. Now imagine you could do that in a battery-operated, very low form factor, cost-effective device, right? So suddenly your dash cam, with additional capabilities built in, could become much more cost effective or more capable. So we see a lot of the use cases that exist today coming in. And then we see a number of use cases, like vital signs prediction or remote healthcare, now getting cheaper, because you don’t have to send everything to the cloud. You can get a really good idea before you have to send anything to the cloud, so you’re sending less data, and it’s already pre-qualified before you send it, rather than finding out through the cycle that it’s taken a lot more time. Does that make sense?
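To illustrate the “pre-qualified before you send it” point, here is a small, purely illustrative Python sketch of edge pre-qualification: a cheap on-device detector runs on every frame, and only frames above a confidence threshold are escalated to the cloud. The function names, threshold and toy data are assumptions for the example.

```python
import numpy as np

def prequalify_and_send(frames, detect, send_to_cloud, threshold=0.8):
    """Run the cheap on-device detector first and only forward frames that
    look interesting, so far less data ever leaves the device."""
    sent = 0
    for frame in frames:
        confidence = detect(frame)          # on-device, low-power inference
        if confidence >= threshold:         # pre-qualified: worth escalating
            send_to_cloud(frame)
            sent += 1
    return sent

# Toy usage with stand-in detector and uplink functions.
rng = np.random.default_rng(1)
frames = [rng.random((32, 32)) for _ in range(100)]
n = prequalify_and_send(frames,
                        detect=lambda f: float(rng.random()),   # stand-in confidence score
                        send_to_cloud=lambda f: None)           # stand-in uplink
print(f"{n} of {len(frames)} frames escalated to the cloud")
```

With a 0.8 threshold only a small fraction of frames gets sent on, which is the bandwidth and cost saving being described.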
Mike Vizard: Sure. Are you at all concerned that the Intels and the Nvidias of the world will go build something similar? You mentioned IBM, but what ultimately makes your approach unique and, you know, something that is sustainable as a platform that people should build on today?
Nandan Nayampally: That’s an excellent question. The Intels and the IBMs are building their platforms, but more often than not they are building their platforms for their needs, right? Nvidia is selling platforms that are much more scalable, but again, they tend to go toward a much higher end of the market, rather than the very sensor level, which is a different cost structure, a different set of requirements. So we are geared towards building for embedded solutions. Both our business model and our design are geared from the ground up for very, very low resource requirements, whether it’s memory, whether it’s power, whether it’s, you know, silicon, right? So we are focused on building cost-effective solutions and enabling others. And because we are an IP model (we license our technology to customers), customers can actually build their own specialized systems on chip, or ASICs as they’re called, application-specific ICs that are tuned to their requirements. We’re not trying to sell chips into that market. We’re licensing technology that enables people to build their own specialized solutions. So a washing machine manufacturer that knows what it needs to do intelligently may use microcontrollers today and say, “Okay, I’ve got this done. But in a year’s time or two years’ time, when I’ve perfected this, I’m actually going to build my own chip, because the volumes and the scale require it.” Same thing with camera manufacturers; they may choose to have their own specialized IC design because it cuts their overall costs when they strive to scale.
Mike Vizard: Alright, folks, you heard it here. AI is coming to the edge, but you should not assume it’s going to be running on a processor that looks like anything we have today. Hey, Dan, thanks for being on the show.
Nandan Nayampally: Thanks, Michael. Thanks for having us.

Mike Vizard: All right, and thank you all for watching the latest edition of the Digital CxO Leadership Insights series. I’m your host Mike Vizard. You can find this episode and others on the digitalcxo.com website, and we invite you to check them all out. Once again, thanks for watching.