
    Below is an interview with Sally Ward-Foxton. Link here:
    https://www.eetimes.com/podcasts/brainchip-second-gen-architecture-transformers-tenns-and-stdp/

    SWF: Great. Nandan, welcome to the show.

    NN: Thanks, Sally. Thanks for having us on.

    BrainChip’s Akida chip. (Source: BrainChip)

    SWF: So maybe you could just start by telling us a bit about BrainChip and BrainChip’s technology. I’ve been working with you, or with BrainChip, for a while. I know you have this IP called Akida. Maybe you could start to tell us a little bit about how this works. It’s a kind of neuromorphic brain inspired design. Tell us a little bit about it.

    NN: Sure. So just on myself, I’ve been in BrainChip for about a year, but BrainChip is a company that’s been around for a while. It was originally founded by Peter van der Made, a serial entrepreneur out of Australia, who wanted to explore brain function and how it could be applied in terms of technology. And then he collaborated with Anil Mankar, who has a long history in semiconductor design, about ten years ago, to form what you see at BrainChip today. And what they did was, they actually explored a lot of neural models. There are various ways to go about that. They actually built neurons, trying to see the function, and realized quickly that brain function, if you try to do it exactly as the brain does, is pretty difficult or heavily compute intensive. So you need to take the essence of brain function and try to make it practical. So in the industry, you’ve seen things. Neuromorphic is a very cool term, which means inspired by the brain, by the neurons. It means a lot of things to a lot of people. And what you see are leaky neuron models. You see all kinds of interesting functions. And what BrainChip finally did was pick the key essences of it. There is analog-ish computation that always happens in the brain, but then it fires and it sends events across. That’s the communication. So what BrainChip does is distill it into the essence of what neuromorphic computing is, which is simplified computation and event based transmission. So we kind of call it event based neural nets. It’s effectively a simplified view of what brain function does, but gets the benefits of brain function, which is— I mean, as you know, the brain is probably the most efficient compute engine known. Billions of neurons operating, and the estimate is that it takes less than 20 watts to do what would otherwise need a supercomputer. If you can take those things and build it into technology, that’s what BrainChip is trying to do.

    SWF: It’s a very powerful inspiration from the brain, and I know there are lots of others that are taking inspiration from the brain. But I’m glad that you said that neuromorphic means a lot of different things to different people, because I definitely see that as well.

    NN: If you were to actually think about what we do, you would distill it and say this is kind of an integrate and fire type neuron, which is kind of the basic essence of neuromorphic.
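
    To make the integrate-and-fire idea concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron, the textbook model Nandan is referring to. The threshold, leak and weights are illustrative assumptions, not Akida parameters.

```python
import numpy as np

def leaky_integrate_and_fire(inputs, weights, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron.

    inputs:  (timesteps, n_inputs) array of incoming events (0/1)
    weights: (n_inputs,) synaptic weights
    Returns the timesteps at which the neuron fired.
    """
    potential = 0.0
    spikes = []
    for t, x in enumerate(inputs):
        potential = leak * potential + np.dot(weights, x)  # integrate with leak
        if potential >= threshold:                         # fire...
            spikes.append(t)
            potential = 0.0                                # ...and reset
    return spikes

# Illustrative usage: 3 input synapses over 10 timesteps.
rng = np.random.default_rng(0)
events = (rng.random((10, 3)) > 0.7).astype(float)
print(leaky_integrate_and_fire(events, weights=np.array([0.4, 0.3, 0.5])))
```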

    SWF: I know you said the computation is simplified compared to what we find in the brain. Can you give us— So I know that when you have neurons in the brain, it’s all based on timing, right? It’s about the timing of these spikes as they arrive in the next neuron or the nearest neighbor neuron. Tell us about how you’ve simplified that for BrainChip’s hardware.

    NN: So the key concept of what makes neuromorphic computing work and event based computing efficient is that, unlike traditional deep learning architectures, which work on a layer by layer approach and need activation maps and …, event based works on smaller granules, with smaller computation, and only the events within those get forwarded. So yes, there’s a concept of timing that is there. However, if you take a spiking neural net model, for example, you would have that timing built in; spiking neural net models already have that notion. But if you take today’s convolutional models, for example, you don’t have to have that timing, because once you compile them, you have a notion of how things are going to propagate, and then the events happen based on inputs to those models. So in a lot of ways, it’s different from the traditional approach of how the brain works, unless you’re taking purely spiking neural nets, in which case the models understand what they need to do. If it’s today’s models, then you’re converting those, which BrainChip does as well through its software stack, into how those events are going to be generated and how they are going to be propagated, and hence time is essentially built in.
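
    A toy NumPy sketch of the difference being described: a conventional layer computes and stores every activation, while an event-based view forwards only the neurons that actually fired, as (index, value) events. This is purely illustrative and not BrainChip’s implementation.

```python
import numpy as np

def dense_layer(x, W):
    # Conventional layer-by-layer compute: every activation is produced and stored.
    return np.maximum(W @ x, 0.0)  # ReLU

def to_events(activations):
    # Event-based view: only non-zero activations are transmitted as
    # (neuron index, value) pairs; silent neurons cost nothing downstream.
    idx = np.nonzero(activations)[0]
    return list(zip(idx.tolist(), activations[idx].tolist()))

def event_layer(events, W):
    # The next layer accumulates contributions only from neurons that fired.
    out = np.zeros(W.shape[0])
    for i, v in events:
        out += W[:, i] * v
    return np.maximum(out, 0.0)

rng = np.random.default_rng(1)
x = np.maximum(rng.normal(size=64), 0.0)                      # sparse-ish input
W1, W2 = rng.normal(size=(32, 64)), rng.normal(size=(16, 32))
events = to_events(dense_layer(x, W1))
print(f"{len(events)} of 32 activations forwarded as events")
print(event_layer(events, W2)[:4])
```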

    SWF: Okay. So simplifying in this way means maybe we don’t need the asynchronous, complicated asynchronous designs that we see with other neuromorphic chips like Intel Loihi, right?

    NN: Correct. I think I could sense the quietness there, because, yes, traditionally you’re absolutely right: the brain does work on an asynchronous level, which is fire when needed and pass that forward, whereas BrainChip has taken a much more simplified, single clock design. So it uses clock gating, rather than wake up and fire, from an execution standpoint, even though the communication between the various neural processing engines and nodes is wake up and fire, in effect. So yes, it does not need the asynchronous approaches that would be natural in a true bio-neuromorphic approach, or for companies like Intel that have actually taken that to heart and tried to build it from that standpoint. I mean, they’re great, but with today’s technology they’re also very compute intensive and weirdly non-intuitive.

    SWF: I think the biggest benefit here is the power consumption that we’re talking about. And this is across all of neuromorphic computing, but for BrainChip specifically, we’re talking about microwatts to milliwatts for inference, right?

    NN: Yes. I think there are two things that we need to consider. So for example, you’ve seen IBM talk about the evolution of TrueNorth into NorthPole. You can create computation at various levels. Yes, the data center could benefit from neuromorphic techniques, because it can reduce power consumption, reduce the effective computation needed for the same types of models, etc. But what BrainChip did at the beginning itself is to say, hey, we can apply these. If I take it to a larger extent, it’s going to be much more complex. We’re going to focus on the edge, and try to bring AI compute to where you wouldn’t expect it. And obviously the motivations are very clear. As AI gets more proliferated, you can’t have intelligence in the cloud and a dumb device that’s just a delivery vehicle or a sensor vehicle. You need distributed intelligence where you can improve the experience of the user. You can address the privacy and security concerns of the user, because it’s literally personal data; sensor data is sensitive data, critical data. You don’t want that going back and forth, ideally. So that’s there, plus, as you can see, the models get more and more complex, and especially if you go to the cloud, you need the context, or without context it becomes much more brute force and general purpose. The compute starts getting much heavier. And so the idea of actually having distributed intelligence is fundamental to why BrainChip’s doing what it’s doing. And so our focus has been at the edge, and now we come to your question about the power. Sorry for the long interlude.

    SWF: It’s okay.

    NN: But obviously given the motivation, right? And so our focus is on microwatts and milliwatts of power. But to give you a sense, if you’re doing things that are keyword related or phrase related, those kinds of simple functions, even in 20 nanometer type technologies, you would be in microjoules per inference, or even sub-microjoules at times. Moving up to our big configurations, you would be talking tens, maybe hundreds of milliwatts for very high-end vision or video type applications. But what you see here are things that are very portable. They lend themselves to portable applications. They lend themselves to fanless applications, and hence when you think about it, you could think about medical devices that are wearable, or even embeddable at times. You can think about audio or other sensory devices that don’t have any heat problems, don’t have any form factor problems. And then of course you can start seeing what today gets called the network edge type applications, which are still, for all practical purposes, small servers, going to more portable form factors, and hence proliferating wider.
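
    As a back-of-envelope check on those figures, here is the arithmetic that links energy per inference to average power and battery life. The inputs are assumptions chosen for illustration (2 microjoules per inference, 10 inferences per second, a CR2032-class coin cell), not measured BrainChip specs.

```python
# Illustrative energy arithmetic only; the inputs are assumptions, not measurements.
energy_per_inference_j = 2e-6   # assume ~2 microjoules per keyword-spotting inference
inferences_per_second = 10      # assume an always-on keyword-spotting rate

average_power_w = energy_per_inference_j * inferences_per_second
print(f"average inference power: {average_power_w * 1e6:.1f} uW")  # 20.0 uW

coin_cell_wh = 0.675            # ~CR2032: 225 mAh at 3.0 V
hours = coin_cell_wh / average_power_w
print(f"coin-cell budget (inference only): {hours / 24 / 365:.1f} years")
```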

    SWF: Yeah, I see. So it seems like you apply everywhere across the edge, from sensor edge to kind of small server, I guess. I know the latest generation of Akida is the second gen, right? Maybe we should talk a little bit more about that, and the differences from the first gen are particularly what I’m interested in.

    NN: Great. So Akida kind of launched in 2020-ish, the first generation. We demonstrated our technology to prove not only that it works, but how good it can be. And you’ve probably seen a lot of the demos in your past few years.

    SWF: I have, yeah.

    NN: And they show you, one, how it’s sensor independent, so it can be used in various types of applications. So the hardware-software stack, which is fundamental to any productizable AI, is working, and actually delivering on the promise of taking milliwatts per inference or microwatts per inference. So that has been proven. Our first generation was the proving out of this digital technology, with 4-, 2- and 1-bit support, which is expected for these small form factors. In the second generation, we went a little bit further. There are a few things that came back from the market as well, which is, hey, great. You can do these traditional feed forward networks well. If you start doing more complex models, one of the big advantages Akida has in the first and the second gen is that it can operate independently for the most part. So it has the functions and the capabilities to manage itself, and not use the host CPU much, which is really a benefit for edge devices, which are usually constrained on system performance, on memory, etc. That allows at-memory compute, and that comes from the neuromorphic or event based aspect, which is, hey, we don’t need more compute because we don’t have to store all those intermediate layers.

    We compute. We get an event. We pass it on. We don’t need all of that storage. Two, it’s managed separately from the host CPU, so there’s very little pressure on the system and on the CPU compute needed for it, which is traditionally what host CPUs do. That’s a big benefit. But then we kind of started saying, hey, we need to do more complex models, and we want to avoid CPU intervention there as well. We need to start getting smarter about recurrent layers, which, again, can be intensive. We are seeing that these edge models are getting more complex, which means training them, constantly training them, is also expensive. So what can we do to move that forward? Also, in order for you to do a lot of edge AI at source on device, you need to be able to handle a lot more streaming data, whether it’s 1D like vital signs, or audio, or multidimensional 2D like vision or video, right? And so the second generation takes multiple steps. One, and you’re going to ask me this, so I’ll jump to it anyway: we’ve gone towards an 8-bit, a very efficient way to do 8-bit compute. It’s not like doubling everything. We’re being pretty smart about how we do that. And the reason for that is, one, it’s not necessarily the case that we need 8-bit. We can actually encode the payloads in 4-bit, 2-bit, 1-bit, just fine. But it gives us a bit more capacity, because when it comes to neuromorphic or event based, we can actually simplify our life by not sending every event, but encoding a firing rate. So instead of sending 20 events, we could encode it into this 8-bit value and say, this is firing at this rate. So one event can encode that payload. So that gives us more efficiency there.
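
    The rate-coding trick described there, collapsing a burst of events into a single 8-bit payload, can be sketched in a few lines. This is a simplified illustration of the idea, not Akida’s actual encoding.

```python
import numpy as np

def rate_encode(spike_train, max_rate=255):
    """Collapse a binary spike train into one 8-bit 'firing rate' payload.

    Instead of transmitting every individual event, a single event carries
    the clipped spike count, as described in the interview.
    """
    count = int(spike_train.sum())
    return np.uint8(min(count, max_rate))

spikes = np.array([0, 1, 1, 0, 1] * 4)   # 20 timesteps, 12 events
payload = rate_encode(spikes)
print(f"{int(spikes.sum())} individual events -> one 8-bit payload: {payload}")
```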

    But a lot of the models today, the weights and activations are 8-bit. The market is comfortable with 8-bit. So this gives us the flexibility to support a lot more models with a lot less chagrin from our customers. The second thing we added is long range skip connections. So when you do larger, complex models, skip connections help you tide over some of this. In deep learning accelerators, some of these are actually done in the host CPU managing it, or you’re adding a lot more compute to do that. We’re kind of building it into our mesh itself. So you can start doing more advanced, complex models with the skip connections. But the two big new things that we’ve added are, firstly, I’ll say, vision transformers, which are not new, but an efficient encoding that can be put in small footprints is new. And the other one that has been garnering us a lot of attention is the temporal, event based neural nets. And the idea of the temporal event based neural nets is the ability to do recurrent layers more efficiently, and time series or sequential data analysis or analytics much more capably. And so that really changes the way Akida can support much wider ranges of applications, from high-end video object detection in real time on a portable device, to potentially healthcare monitoring on the patient, which is managed and secure and personalized. So the thing we haven’t talked about yet, which is common across both generations, is our ability to learn on device.

    Now this is not re-training, because that’s a pretty complex and expensive process. But here, we can take the learning that has been done, or the features extracted from training, and we can extend it on device with a more flexible final layer. So if you know that the model can recognize faces, with one-shot or multi-shot learning we can now say, hey, this is Sally’s face, or this is Nandan’s face. You don’t want to train two thousand new faces, but for most of these devices, it’s really the owner and maybe the family. And similarly in healthcare, it is fundamentally interesting, because today’s healthcare deals with statistics, and if you are on the edge of that statistic, are you normal, or are you not? What does that mean for me, right? My blood pressure is X. It could fall in the normal range, and I get treated like everybody in that range, but actually for me it may mean something different. And I have personal experience with that, especially with my wife’s health, which was misunderstood, because she was on the edge of a range that was considered normal, and hence treatment took a lot longer to come, because they didn’t feel she was out of bounds, when, if it were personalized, they would have known that whatever was happening to her was out of bounds, and we would have moved to more of a preventative treatment rather than a post facto, and hence much more painful, one.
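
    One way to picture that “flexible final layer” for on-device enrollment is a prototype classifier sitting on top of frozen, pre-trained features: enrolling “Sally” just stores an averaged feature vector, with no backpropagation or retraining of the backbone. This is a hedged stand-in for what is described, not BrainChip’s actual mechanism.

```python
import numpy as np

class PrototypeHead:
    """One-shot / few-shot final layer on top of frozen, pre-trained features.

    A stand-in for the flexible final layer described above: enrolling a new
    class stores (or averages in) a feature vector; classification is nearest
    prototype by cosine similarity.
    """
    def __init__(self):
        self.prototypes = {}  # label -> mean feature vector

    def enroll(self, label, feature_vec):
        proto = self.prototypes.get(label)
        self.prototypes[label] = feature_vec if proto is None else 0.5 * (proto + feature_vec)

    def classify(self, feature_vec):
        best, best_sim = None, -np.inf
        for label, proto in self.prototypes.items():
            sim = proto @ feature_vec / (np.linalg.norm(proto) * np.linalg.norm(feature_vec))
            if sim > best_sim:
                best, best_sim = label, sim
        return best

# Usage with made-up 128-d embeddings standing in for a frozen backbone's output.
rng = np.random.default_rng(2)
head = PrototypeHead()
sally, nandan = rng.normal(size=128), rng.normal(size=128)
head.enroll("Sally", sally)
head.enroll("Nandan", nandan)
print(head.classify(sally + 0.1 * rng.normal(size=128)))  # -> "Sally"
```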

    SWF: Yeah. Goodness. There’s a couple of things here I want to dive into. I think first I want to ask you about the learning on device. Is this a property of being in a spiking domain, like being in the event domain? Or where does this ability come from?

    NN: Yes, it does come from the event domain, and I think the term STDP, you can look it up, it’s not rocket science, no pun intended. So it has been done before. I think what we have designed, you do need to architect for it to make it efficient, and that’s what Akida has done.

    SWF: Okay. To be clear though, STDP is not like how you would fine tune or train a normal deep learning network. It’s a completely different kind of learning paradigm, right?

    NN: It is a different learning paradigm, and we naturally invest a lot of our patent work in that. So we do have some original work there that we believe is valuable. And this has been something that originally was kind of, what I call, a vitamin, which you’re supposed to have. But in some cases like this especially, it starts becoming an aspirin that solves headaches.
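
    For readers looking up STDP: the textbook pair-based rule strengthens a synapse when the presynaptic spike precedes the postsynaptic one and weakens it otherwise, with an exponential dependence on the timing gap. The sketch below is that standard rule with illustrative constants, not BrainChip’s patented variant.

```python
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based spike-timing-dependent plasticity (textbook form).

    dt > 0 means the presynaptic spike came first: potentiate.
    dt < 0 means it came after the postsynaptic spike: depress.
    The magnitude decays exponentially with |dt| (time constant tau, in ms).
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # potentiation
    return -a_minus * np.exp(dt / tau)       # depression

for dt in (1, 5, 20, -5):
    print(dt, round(stdp_delta_w(t_pre=0, t_post=dt), 5))
```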

    SWF: One of the other things I wanted to dive into a little more is the hardware support for ViT. Can you tell me anything about how that looks in the hardware?

    NN: Like most things, we’re taking the edge approach, which means minimalist at best. And so what we’ve done is take not just vision transformers but, let’s be honest, tiny vision transformers, right, because that’s what you need to make it fit. We also recognize that decoders are very complex, because decoding gets done differently in different places. So even if you built one, you’d be building something in hardware that has to be capable of doing everything, whereas encoding is a bit better understood, and more scalable. So we tried to get the essence of what encoding we need to do, and we’ve demonstrated that, or built it, into our ViT node. So the ViT node is not in the event domain, but it will sit on the mesh that we have. And that gives us flexibility. In fact, what we’ve seen is, for things like vision itself, you know, CNNs do well at some aspects of it. Transformers do well at some aspects of it. If you want to go more accurate, you go one way. If you want to go faster, you go another way. But you can also start then looking at a picture as a paragraph, or a set of words and sentences, etc. And that’s what this kind of hybrid mesh effectively helps us do. So you can actually build out very optimized, unique solutions with the benefits of the vision transformer, which it gives you on some of the accuracy aspects, and the flexibility of, in this case I would call them event based neural nets rather than just convolutional neural nets, and get similar accuracy, but much more efficiently. So the big step for us has been the movement to production-grade accuracy, whatever that means at the edge, which has been— You don’t want to be a toy. This needs to be real, right? And so we’re taking the steps towards that.
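
    For a sense of what an encoder-only “tiny ViT” looks like in code, here is a minimal patch-embedding plus transformer-encoder sketch in PyTorch. All the dimensions (96x96 input, 16x16 patches, 64-wide, two layers) are illustrative assumptions, not the ViT node’s actual configuration.

```python
import torch
import torch.nn as nn

class TinyViTEncoder(nn.Module):
    """Minimal encoder-only vision transformer sketch (illustrative sizes)."""
    def __init__(self, img_size=96, patch=16, dim=64, heads=4, depth=2, classes=10):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=2 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                                     # x: (B, 3, 96, 96)
        tokens = self.patchify(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = self.encoder(tokens + self.pos)
        return self.head(tokens.mean(dim=1))                  # mean-pool, no CLS token

print(TinyViTEncoder()(torch.randn(2, 3, 96, 96)).shape)      # torch.Size([2, 10])
```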

    SWF: Sure. Can you see a day where we’re going to get LLMs, language models, on this kind of, into this kind of power envelope? Because that would be like the holy grail, right?

    NN: It is. Well, today, I heard somebody say that, oh, LLMs, pretty much, we’re done after that, but that sounds like TJ Watson’s claim that five computers would be the total market. So I may be wrong. But I do think that especially the economics of LLMs and foundational models will drive both efficiency on the cloud and more activity at the edge. So if you just consider the fact that the cost to search with gen AI has gone up almost 10X, it’s just not sustainable. And naturally, there’s a lot of money going into this. You can see what companies like Qualcomm and Meta are doing with Llama. And in fact, I was told that even in the smartphone space, there are over a thousand gen AI apps, whatever that means. So I think there are a few things happening. You have probably been to Edge Impulse’s Imagine.

    SWF: Yeah.

    NN: And they were also working closely on how we can actually do foundational models. And there are various ways this is being attacked, which is, hey, can we use it to optimize the data sets that are needed to generate models, which can help with the shrinking and the compactness. And then there are smarter things you could do in the same way to build hardware [SLS] models that support that. So you can see many initiatives. So my quick answer to your question is, yes, it will. Now the question is, how, and what do you do about it? For example, there’s a lot of research now that has become an open source initiative, SpikeGPT, which naturally fits into, how do you actually make lightweight LLMs? So would that be L cubed M then?

    SWF: Light weight language models, yeah.

    NN: Maybe I should trademark that. But really, it’s about how you reduce the transformer aspects of it to some extent, and how you start handling phraseology that is easier, rather than brute force or general purpose in every context. So I think some of the same context that you bring to edge AI will apply to edge LLMs. It’s not trying to solve everything for everybody, but for its scope, it does what it’s supposed to do. And I will give you an analogy that may be weird, but I have to say it anyway. Where do you think the biggest innovation was when the first iPhone came out?

    SWF: The keyboard, on the touchscreen.

    NN: Well, sure. So I shouldn’t have made it such an open-ended question. I apologize. But you’re right. The touchscreen, the capacitive screen, etc., came in. But I thought the biggest innovation was the notion of the app, which packaged the internet. You were probably used to WAP at that point, which was sort of a “let me use a desktop model of surfing the web” approach, with webpages that were not designed for it, and you didn’t have the compute or the bandwidth to do that.

    SWF: I remember it well. Yes, it was painful.

    NN: Exactly. The app itself was the killer app, because it packaged everything to say, okay, I’m going to use it like this. I need to see three things. I need to be quick. I need it to do it for me, and not worry about it. Yes, so instead of one browser, you had a thousand apps, but at least you knew what you were trying to get to, and you could do that, right? And I think the same analogy works for edge AI and LLMs as well, because these things will be focused: is it for transcription? Is it for speech? Is it for scene generation? All of these things, if you have a focus, it solves multiple problems, and that’s actually the whole notion of distributed AI. A slightly longer route, but hopefully that answers your question. It’s a question of when and how, not if.

    SWF: Exactly. I agree with you. I think it’s definitely a question of when. Coming back to today though, I want to pick up on another thing you mentioned, which is the TENNs, the concept of TENNs, which is something I think we’re not hearing anywhere else. TENNs sound really cool. It sounds like you can get away with much smaller models to do the same job as other neural networks. So I was hoping you could tell us a little bit about how that works and then how you get away with it, basically.

    NN: Yeah. I think this is very interesting. So I think this comes from the idea of basically more structured ways to do models. And there’s been a lot of research, a lot of industry research, on it. We have taken a slightly different approach. So think about the notion of temporal, right. We have event based neural nets already. Now you bring the time element into it. And so that basically determines how attention is built in, how state is built in, and in a very different way. And so the recurrent aspects of the models are built in in a nontraditional way. So can you do what LSTMs and GRUs do, but much more efficiently, and also improve the training side, because LSTMs and GRUs are a pain to train? I should trademark that as well. Because CNNs tend to be easier to train, right, feedforward networks at least. And the more recurrent ones are more painful to train. TENNs kind of says, you can train it like a CNN, but you can use it in recurrent mode.

    SWF: Cool. Okay.

    NN: And so what that does is multiple things. We can now do it on multidimensional time series data, which is, hey, I could do audio. I could do denoising based on what it is. I could do different types of sensor signals, like vital signs, heart rates, etc. But I can also do video and vision, because now I can use the spatial aspects as 2D, and the time aspect as the third dimension. So the temporal event based neural net model approach is applicable across any hardware. So it will help anywhere, but what we’ve done is say, if you run it on our hardware, we have tuned it to be able to do those convolutions much, much more efficiently, where you’ll get orders of magnitude benefit in terms of power and performance.
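
    The “train like a CNN, use it in recurrent mode” idea can be illustrated with a plain causal temporal convolution: applied to a whole buffered sequence it behaves like a convolution (training-friendly), while a tiny rolling state buffer reproduces the same outputs one sample at a time (streaming-friendly). This is only the general principle, not the actual TENN formulation, which BrainChip has not fully published.

```python
import numpy as np

kernel = np.array([0.5, 0.3, 0.2])                 # 3-tap causal temporal filter
signal = np.sin(np.linspace(0, 4 * np.pi, 32))

# 1) Batch / "CNN" mode: convolve the whole sequence at once.
padded = np.concatenate([np.zeros(len(kernel) - 1), signal])
batch_out = np.array([padded[t:t + len(kernel)] @ kernel[::-1]
                      for t in range(len(signal))])

# 2) Streaming / "recurrent" mode: keep only a small rolling state.
state = np.zeros(len(kernel))
stream_out = []
for x in signal:
    state = np.roll(state, -1)
    state[-1] = x
    stream_out.append(state @ kernel[::-1])

print(np.allclose(batch_out, np.array(stream_out)))  # True: identical outputs
```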

    SWF: So one of the things I picked up on when I was reading up on TENNs was that it seems like you can accept signals without this initial DSP stage, or without this initial filtering. Tell me about how you can do that.

    NN: That’s fair, right? So I think the big reason, again, is the temporal aspect being part of the model itself, and understanding the time aspect. Traditionally, what you’ve done, especially in things like audio or denoising, is have a front end that separates, that brings in the time element to some extent, right? So the MFCC phase, that is, mel-frequency cepstral coefficients, or a short-time Fourier transform, an STFT type thing. You do that to kind of filter it and create a signal that is better understood, which the network, traditionally your DS-CNN, can then work on. Because we’re taking the time element as a part of it, you can actually get a lot more separation of signal and noise from it, and you don’t need to do that front end filtering. As for the details, we should probably do a webinar or something like that going forward, because beyond a couple of layers here, the way to convince you or explain it may take some time, even for myself. And some of it is still kind of secret sauce, if you will. So not all of it is published. But right now we’ve been demonstrating this consistently. You don’t need an MFCC. You can take the signal right in, and you get this out with better accuracy than MFCC plus DS-CNN, for example. And we’ve seen the same thing on the front ends for some of the vital signs. There’s a different type of filtering for each; we don’t need to do that, and we can still distill the signal correctly.
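
    For context on what that conventional front end does, here is a small NumPy sketch of an STFT feature extractor (MFCCs add mel filtering and a DCT on top) next to simply framing the raw waveform, which is what a temporal model would consume directly according to the claim above. The frame sizes and test tone are arbitrary assumptions.

```python
import numpy as np

def stft_frontend(wave, frame=256, hop=128):
    # Conventional DSP front end: window the signal and take magnitude spectra.
    frames = [wave[i:i + frame] * np.hanning(frame)
              for i in range(0, len(wave) - frame, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

sr = 16000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(3).normal(size=sr)

spectro = stft_frontend(wave)                                   # what a DS-CNN usually consumes
raw_frames = wave[: (len(wave) // 256) * 256].reshape(-1, 256)  # no DSP stage at all

print("STFT front end:", spectro.shape)    # (123, 129) frames x frequency bins
print("raw frames    :", raw_frames.shape) # (62, 256) plain samples
```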

    SWF: That sounds very cool. It sounds like a bit of a cheat. It’s like the best innovations. It does sound like a bit of a cheat, but it’s very cool.

    NN: Well yeah, thank you. I’ll take that as a compliment.

    SWF: Yeah, it was a compliment.

    NN: On behalf of all the researchers at BrainChip.

    SWF: I think one of the last BrainChip demos that I saw was with a DVS sensor, with an event based camera. We’re talking about event based in the spiking domain here as well. I think you had it hooked up with a Prophesee DVS camera. Tell us about the synergy there. They’re kind of both event based, both neuromorphic. How well do they work together?

    NN: So I think this is certainly, I wouldn’t call it the holy grail, but the natural evolutionary path, which would be, hey, if you can connect event based sensors to event based processing, then that is the way to go forward, because it simplifies a lot. And we are working with partners like Prophesee, who are great partners, to kind of go through that. The challenge in the short term is that everybody also has to be able to support frame based: even though these sensors generate event based data, a lot of the back end tends to be more frame based. So you have some overheads, as you have to build that into your products in the short to medium term, but even with that, we see a lot of benefit there. And in fact, we demonstrated, as you say, you saw some of the demos. But in terms of the research as well, what happens is, if you start doing that, you can not just build models but let those models learn alongside as they go. You can do more with less data in the beginning, and then build it out, which I guess is the essence of neuromorphic, right?

    SWF: Exactly. Yeah, exactly. It does sound pretty cool to have a human retina-inspired camera, and then a brain-inspired processor would be super cool. We’re talking about edge, edge ML, TinyML kind of applications here for this. With the rise of the TinyML movement, which has been coming up over the last three or four years, how do you see competition from ML on regular, run-of-the-mill microcontrollers? They’re cheap. They’re ubiquitous, and deep learning models are getting so efficient now. You can do a lot more with a microcontroller now than you could three or four years ago. How do neuromorphic chips like yours fit in alongside regular microcontrollers?

    NN: So I think that’s a very fair question. One of the realities is, AI does not mean specialized hardware. It may need specialized hardware at scale, but AI can be done on a CPU, on a GPU, on neural processors, etc. So the fact that the TinyML movement is drawing more attention is a great sign. And so what that means is there’s more interest in doing more AI at the edge. There’s a lot of pressure on software to start making models more compact and processing more efficient, because there are hundreds of billions of microcontrollers out there. It’s a great market if you can actually serve it. But as you see that go through, what you’ll always see is that there are specialized things that you can’t do in the form factors that you’re looking at, and that’s where specialized acceleration comes in. So we kind of ride that same TinyML tide, because if you think about the spectrum from the sensor edge, which is at the sensor, like a Cortex-M0 doing it, to the network edge, which is for all practical purposes a server outside the data center, there’s a huge spectrum, and what you always see is, okay, now I can take what I could do on a network edge processor, but I can do it in a form factor that makes it a lot more portable and a lot more cost effective. That creates a potential for viral solutions if you have new applications. And you’re seeing that especially in the emerging markets. Think about portable healthcare. You can’t necessarily connect to the cloud in some of the remote places where medical transcription is required, or you went for an MRI, which is itself an expensive thing to go do, and you didn’t get the right answer. So if you had a device that was portable and reasonably cost effective that could tell you, okay, you’re within the bounds of doing this right, because I know what you’re looking for, and then the deeper analysis can be done once it gets back to the main hospital, that saves a huge amount of cost, human and device, because those things are expensive. And if you think about it, a lot of this is going to come from markets that are not as well connected, where compute is not an easy resource, or it’s a very expensive resource. So edge AI is truly about helping the broader community, if you will, rather than just sexy new apps.

    SWF: Yeah. If I’m the TinyML engineer listening to this, I’m probably thinking, how on earth am I going to convert what I’m already working on into this spiking domain, which sounds incredibly complicated, and I don’t know anything about it? Tell us about, can you reassure us that your software stack is easy to use, and it works, and it can handle what we need it to do?

    NN: Thank you for the platform to talk about it. So one of the fundamental things that any AI solution has to have is simplicity in actually integrating models and building applications. Without that, the most elegant hardware is useless. And one of the things that BrainChip did recognize early, and in fact some of the software acquisitions that we made were initially for tools, but then we said, no, we’re actually selling the platform, and hence the tools are built in. And so we have a tool chain that plugs into any of the traditional frameworks. MetaTF is what it’s called. It can plug into TensorFlow and Keras, and now in fact with this generation we’re doing ONNX and PyTorch, which seem to be the four main ones. And we will continue to make it easily portable. So if you’re a model geek that wants that level of flexibility, it’ll plug into your framework. You can work with it. It will generate what it needs to. And in fact, our hardware understands how to convert those into events. So you can take today’s models and compile them. You have to do that for any hardware platform anyway, and then the quantization, etc., is all done through the framework. They go into the model to be executed. But we’re also working with development players like Edge Impulse. So BrainChip was the first real IP platform that Edge Impulse supported, and we’re working closely with them to make no code, low code development much, much easier. So our goal is to make model development and application development much, much easier, rather than you having to go back to the drawing board to understand a new paradigm of neural modeling.
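
    MetaTF’s own API isn’t reproduced in this post, but the general “build in a familiar framework, quantize, compile for the target” flow looks roughly like the generic Keras/TensorFlow Lite example below, used here purely as a stand-in; BrainChip’s actual toolchain and conversion calls will differ.

```python
import tensorflow as tf

# Generic build-then-quantize flow, with TensorFlow Lite standing in for the
# vendor-specific converter. Model shape and layers are arbitrary examples.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()
print(f"quantized model: {len(tflite_model)} bytes")
```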

    SWF: You certainly make it sound easy. I think that’s the perfect place to finish. Thank you very much, Nandan. It’s been great talking to you today.

    NN: Thanks, Sally. Thanks for a very engaging chat. I appreciate it.

    SWF: Thank you so much to BrainChip’s Nandan Nayampally for the insight into BrainChip’s technology.

    That brings us to the

 