
M3 and deployment on servers.

    I emailed management earlier and, via a triage between Neil and probably Peter (I think), I got an understanding of why and how M3 was developed.

    Most investors would probably not be interested in the depth of the tech or the vision of where the company is headed. Anyway, to me this was important, and I received a response that I will share by joining some dots for the less tech-inclined.


    I think readers will grasp the bigger picture if I tell it as a narrative, so here goes.
    ___________________________

    ILSVRC2015 just finished this week, and the results were announced: Microsoft was victorious. Here is some nice coverage in the NYTimes.

    The Problem.

    When you go to Google or Bing and search for cat pictures, you are treated to something like the following.

    [image: cats.PNG, search results for cat pictures]

    So one wonders: how does Google know which pictures are cat pictures? Does someone manually mark these pictures, or does Google employ something more sophisticated?

    We call this process machine learning. Google employs machine learning (an algorithm, or complex programming of sorts, for the less tech-inclined) to detect which pictures are cats and which are not. They employ neural networks to do the machine learning and use it to drive further automation.

    The way they can do this is by first training the system: feeding it thousands of pictures of cats that Google engineers have compiled manually. Once the system has seen enough of these "fed" examples, it gains the experience to detect cat pictures automatically via various techniques like edge detection, photo alpha/gamma detection and various other methods.

    Now if we start feeding the system mixed images, Google's algorithm will detect which are cat pictures and which are not. Since this is still a machine sorting the images, it will make some mistakes, and the fraction of images it gets wrong is what we call the error rate. The toy sketch below walks through both steps.
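    To make this concrete, here is a toy sketch in Python. Everything in it is invented for illustration: synthetic feature vectors stand in for real images, and a simple logistic-regression model stands in for Google's neural networks, but the shape of the process (train on labelled examples, then measure the error rate on unseen mixed images) is the same.

```python
# Toy sketch: "train on labelled cat examples, then measure the error
# rate on a mixed batch". Synthetic feature vectors stand in for images;
# logistic regression stands in for Google's neural networks.
import numpy as np

rng = np.random.default_rng(0)
n_features = 64  # pretend each image has been reduced to 64 numbers

# Training set: manually labelled examples (1 = cat, 0 = not cat).
cats = rng.normal(loc=0.5, size=(1000, n_features))
others = rng.normal(loc=-0.5, size=(1000, n_features))
X = np.vstack([cats, others])
y = np.array([1] * 1000 + [0] * 1000)

# "Feed" the system the examples: gradient descent on the logistic loss.
w, b, lr = np.zeros(n_features), 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(cat)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Now feed it mixed, unseen images and measure the error rate.
X_test = np.vstack([rng.normal(loc=0.5, size=(200, n_features)),
                    rng.normal(loc=-0.5, size=(200, n_features))])
y_test = np.array([1] * 200 + [0] * 200)
pred = 1.0 / (1.0 + np.exp(-(X_test @ w + b))) > 0.5
print(f"error rate: {np.mean(pred != y_test):.1%}")
```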

    We call this problem "object detection", and every year the top tech giants and universities participate in a competition to show their capabilities. The papers are then published on arXiv, and various academics end up making very lucrative opportunities out of these events.

    The solution.

    Some of the world's most talented people participate in this competition, where they show how to go about object detection. Our friends at Qualcomm and MS, and even the University of Adelaide, participated. You can also find Google there, competing under the name ReCeption.

    Ironically, it is also the first time that Baidu got caught cheating, creating the first scandal in machine learning; here is a nice write-up on that one. Interesting times! Here is another, simpler write-up.

    Although ImageNet is getting dated, human-level performance was surpassed this year. This means the systems got better and better and reached a point where they did as good a job at classification as a human would, or a better one. Yes, that's right.

    Anyway, the solution worked and it looks like there is progress.

    The bottleneck.

    Anyway, even though the object-detection algorithms have gotten better (at detecting cats or whatever), the work is still being done on CPUs/GPUs.

    Now, since the processing happens on GPUs/CPUs, a company like Google would probably end up frustrated, because they need to process billions of images. They hence need something that can process a large number of these images in parallel. They can, for example, stack a large number of GPUs in the cloud and use them in parallel. This is still a clunky way of doing things, but they are working with what's currently available.
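    As a rough illustration of that fan-out pattern, here is a minimal Python sketch. A process pool stands in for a rack of datacenter GPUs, and classify_batch is a placeholder I have invented for a real GPU inference call.

```python
# Minimal sketch of parallel batch processing: split the image queue
# into batches and fan them out across workers. The process pool stands
# in for a stack of GPUs; classify_batch is a made-up placeholder.
from multiprocessing import Pool

def classify_batch(batch):
    # Placeholder: a real worker would run each image through a
    # GPU-resident neural network and return its labels.
    return ["cat" if img_id % 2 == 0 else "not cat" for img_id in batch]

if __name__ == "__main__":
    # Billions of images in reality; 32 fake image IDs here.
    batches = [list(range(i, i + 4)) for i in range(0, 32, 4)]
    with Pool(processes=8) as pool:                 # eight "GPUs"
        labels = pool.map(classify_batch, batches)  # batches run in parallel
    print([label for batch in labels for label in batch])
```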

    Now if you read the full article, you will see the following at the end.

    [quoted statement from the article]

    To dissect the above statement a bit further:

    The SoftLayer public cloud (datacenter) consists of these GPUs, and via the API the various teams used those GPUs to perform the object detection. So the teams may be sitting in any part of the world, but by simply using some API calls they were able to use GPUs sitting in a datacenter maybe thousands of miles away.
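    Here is a sketch of what "using a GPU thousands of miles away via an API" looks like from a team's side. The endpoint, auth scheme and response fields are all hypothetical; the pattern (POST an image over HTTP, get labels back from the datacenter) is the point.

```python
# Hypothetical sketch: drive a remote datacenter GPU service over HTTP.
# The URL, auth scheme and JSON fields are invented for illustration;
# only the requests library calls are real.
import requests

API_URL = "https://api.example-cloud.com/v1/classify"  # hypothetical endpoint
API_KEY = "..."                                        # issued by the provider

with open("cat.jpg", "rb") as f:
    resp = requests.post(API_URL,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         files={"image": f})
resp.raise_for_status()
print(resp.json())  # e.g. {"label": "cat", "confidence": 0.97}
```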

    Some of you already know that Nvidia's Tesla GPUs are one of BRN's competitors, and BRN will compete with them as a hardware-only solution. You can find my qualitative analysis here.

    So, in light of the above, BRN has pointed out to me that M3 deals with creating an API interface to a server. This, for example, allows employees of Fortune 500 and other companies to remotely access BrainChip from thousands of miles away and see whether they can integrate the chip as part of an existing product line (a hypothetical sketch of such a workflow follows below).

    This helps deployment to larger developer networks at larger organisations, and it will help bring revenue to the table, as opposed to releasing the BDK or EDB.
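    Since the company only described this at a concept level, the sketch below is entirely hypothetical: a made-up host and endpoints showing the kind of remote workflow an M3-style API interface could enable (upload a sample from your own product data, get the chip's output back, decide whether it fits).

```python
# Entirely hypothetical sketch of remote, API-driven access to a
# BrainChip device hosted on a server. Host, endpoints and fields are
# invented; the post describes the concept, not a published API.
import requests

HOST = "https://devcloud.example.com/brainchip"  # hypothetical host

# A remote developer uploads a sample from their own product data...
with open("sample_input.bin", "rb") as f:
    job = requests.post(f"{HOST}/jobs", files={"input": f}).json()

# ...then fetches the chip's output to judge whether it would fit
# their existing product line.
result = requests.get(f"{HOST}/jobs/{job['id']}").json()
print(result["status"], result.get("output"))
```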

    I hope this helps those who may be curious about the milestone design.
    Last edited by neutralopinions: 14/12/15
 