Thought it might be interesting to revisit some of what the CEO said to shareholders in his June 2018 Quarter Update, and to reflect on what has been announced and what is still to come from these early AKIDA technology engagements. It may give some hint as to which household-name customers are among the early access and proof-of-concept customers:
"Intellectual property is an interesting arena as well for the purpose of time to money and some very large opportunities. When you think about going to the edge, IoT devices, think about surveillance cameras, even cell phones. Cell phones I would consider an edge device. You're not going to sell an Akida chip, in all likelihood, into a cell phone. If you rip your cell phone apart you're going to find three or four ICs in there, very large scale integration. Most of the major cell phone manufacturers, think about Apple, Samsung, Huawei, all of them do some in-house development of their own chips or they use outside resources to develop custom chips. They would like to put IP such as Akida into one of their custom chips. So we have early discussions going on with a major cell phone manufacturer. It's actually been quite a number of meetings, a meeting at their facility, a meeting at our facility, developing a statement of work to define what it is that they would like out of Akida as an IP block integrated into one of their very large scale system-wide chips.
Then of course you've got the Akida neural system on a chip, which is the chip that you see to the right. That is really the end game for us, to be a supplier of that chip in volume, both at the edge and, as on the next bullet, in a PCIe accelerator. So at the edge you buy the chip, and in the enterprise, in the server room or a data centre or in the Cloud, you plug in a PCIe card that will have multiple Akida chips on it, ganged so you get a multiple of the number of synapses or neurons that you'd get on a single Akida chip; you can put two, four, six or eight on a PCIe card. When we do the architectural announcement we'll talk about the low-power nature of Akida. We'll talk about its processing power; that will all come to the fore as the architectural announcement comes out, but we've really targeted some outstanding specs which I think will put us in the leadership position in spiking neural networks, or neuromorphic ICs generally.
So this is just a little bit more of what I just mentioned. The edge applications are very small, whether it's a small IC or an IP block that gets integrated, and extremely low power. Again, you can think about cameras, you can think about IoT sensors, you can think about cell phones. Cell phones would likely be an IP block, IoT sensors would probably be a chip, and cameras would likely be an IP block, although for more high-end cameras or civil surveillance the IC is also a potential opportunity.
When you look at enterprise applications, you're talking about data centres with servers, you're talking about Cloud and SaaS based systems, multiple devices on a PCIe card, and you really eke out the greatest performance at the lowest power. This will be orders of magnitude lower power than a GPU installation of a CNN; running an SNN on the Akida products is just going to be significantly lower power, and it really serves those end use cases with much better performance.
There was a question (I'm going to try to make sure I get to questions as I go through this) about something Peter answered at the AGM: in an autonomous vehicle, would the Akida chip sit side by side with the GPU which is doing the computation, or could Akida replace the GPU altogether? Of course Peter thinks big and he thinks for the future, and he said, "Yes, we could potentially replace the GPU." That's far in the future. The spiking neural network will likely first find its way into transducers and sensor interfaces, and then maybe as a card-level solution or multiple chips offloading certain tasks as a kind of co-processor to the GPU, with the end game in the future, as spiking neural networks mature and customers get more familiar with the architecture and what they can accomplish, of eating more and more into the GPU and CPU landscape.
I think I touched on this a little bit already; the automotive manufacturers and third party vendor work continues very nicely. We're getting architectural insight, we're seeing development of specific application learning rules. What we really benefit from in these close engagements, whether it's in the automotive space, Fintech, Agtech or cyber, is that we can develop the learning rules that are necessary to look at specific data flows: how we take that data, how we convert it to spikes, and what learning rules are most applicable for the use cases. If you want to take data in agriculture, or you want to take data in Fintech, you need to understand the nature of the data, the patterns that you're looking for, and the correlations that you want to make to other patterns. So it's really defining the learning rules and then doing the correlation via other learning rules in order to come up with actionable results. I'd like to think of it as: we take data, we turn it into information, we take that information and we build a knowledge base, and that becomes actionable by the end customer.
And that could be in agriculture, for optimising yield in produce or plants. In Fintech, I think you all understand, the learning rules would be about algorithms, mean regression, candlesticks, comparing and correlating so that you can make trades more quickly and improve your number of wins versus losses in any given period: it used to be a week, then it went to a day, then a minute, and then a fraction of a second.
In Fintech we've got an initial dialogue. I think it's moving along nicely. I think we're coming to some common ground on what we think we could accomplish in Fintech, and it's a solid team; I've talked to a couple of the engineering guys there myself. Cyber security is moving along nicely, actually. We've found a bit of intellectual property that we think we can pick up on the cheap. It's already done in a spiking neural network and does deep packet inspection and other cyber methods, and I think we'll close on that very quickly, which will give us a jump start on the learning rules in cyber security.
At this point the Akida development system would be used in these particular cases, at least the first two where we have dialogue going on with live customers, to develop custom learning rules, or generic learning rules with custom applications, for Agtech and for Fintech. Cyber security, because it's already in a spiking domain and the learning rules exist, will be a general-purpose development environment, IP that we can distribute widely to as many cyber customers as we think is necessary to hone our skills."
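The transcript repeatedly talks about taking a customer's data flow and "converting it to spikes" before learning rules are applied. As a purely illustrative sketch, not BrainChip's actual encoder, rate coding is one common way spiking systems do this: each normalised input value becomes a per-timestep firing probability, so larger values produce denser spike trains. The function name and parameters below are hypothetical, chosen just to show the idea.

```python
import random

def rate_encode(values, n_steps=100, seed=0):
    """Rate-code values in [0, 1] into binary spike trains.

    Each value is treated as the probability of a spike at each of
    n_steps time steps, so a larger value yields a denser train.
    Returns a list of spike trains, one per input value.
    """
    rng = random.Random(seed)
    trains = []
    for v in values:
        v = min(max(float(v), 0.0), 1.0)  # clamp to [0, 1]
        trains.append([1 if rng.random() < v else 0 for _ in range(n_steps)])
    return trains

# A strong signal (0.9) should spike far more often than a weak one (0.1).
spikes = rate_encode([0.1, 0.9], n_steps=1000)
print([sum(t) / len(t) for t in spikes])  # roughly [0.1, 0.9]
```

Other encodings (temporal or population coding) trade density for timing precision; which one a real deployment uses would depend on the data flow, exactly the per-customer question the CEO describes.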
We have had the cyber security Democritus University announcement and NASA link. We have had the automotive Tier 1 Ford and Valeo links.
We have possible links to phones, fintech, agtech, and cameras still to reveal themselves.
I have not found it yet, but I know that on one occasion the CEO referred to the fact that Peter van der Made had made a special trip to Europe to discuss the fintech applications of AKIDA technology and get a proper understanding of precisely what the particular potential customer wanted. I am sure Rayz, Cyberworks or unix will be able to assist.
My opinion only DYOR.