BRN 2.94% 16.5¢ brainchip holdings ltd

AKIDA benchmarking?, page-21


    Judging by the replies to the original post, it seems clear that nobody has seen any hard performance numbers for the chip - despite the fact that BrainChip received the chips months ago. Now, I can understand that anyone who has been testing the chip under NDA could be prohibited from publishing what they found. But that doesn't mean that BrainChip could not publish the information themselves if they wanted to.

    Test problems that could be used to provide such numbers are easy to find. For example, BrainChip's developer site provides many examples of how to generate code for standard problems.

    GXNOR/MNIST inference https://doc.brainchipinc.com/examples/general/plot_0_gxnor_mnist.html
    DS-CNN CIFAR10 inference https://doc.brainchipinc.com/examples/general/plot_1_ds_cnn_cifar10.html
    MobileNet/ImageNet inference https://doc.brainchipinc.com/examples/general/plot_2_mobilenet_imagenet.html
    DS-CNN/KWS inference https://doc.brainchipinc.com/examples/general/plot_3_ds_cnn_kws.html
    YOLO/PASCAL-VOC detection https://doc.brainchipinc.com/examples/general/plot_6_voc_yolo_detection.html

    All these illustrations are standard problems used for benchmarking, and you can find the state of the art for each.
    CIFAR10 https://paperswithcode.com/dataset/cifar-10
    MNIST https://paperswithcode.com/dataset/mnist
    ImageNet https://paperswithcode.com/dataset/imagenet
    KWS https://paperswithcode.com/dataset/auto-kws
    PASCAL-VOC https://paperswithcode.com/dataset/pascal-voc

    But this is just scratching the surface - there are over 4000 such datasets that can be used for benchmarking https://paperswithcode.com/datasets across many different domains - images, text, video, audio, medical, 3D...

    I asked for comparisons with NVIDIA's Jetson chips, because such benchmarks are easy to find https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks

    I've looked through all the examples provided on BrainChip's website, in the hope that there would be some figures that referred not just to accuracy, but to the time an AKIDA chip takes to perform the task.

    But all I can see are numbers for the software simulator. For example, in the CIFAR10 example, you can see the following output:
    Accuracy: 94.00%
    Akida inference on 1000 images took 23.52 s.



    In the MobileNet example, you can find this
    Inference on 10 images took 1.24 s.
    Accuracy: 100.00 %

    In the YOLO example, you can read
    car 0.2552
    person 0.2852
    mAP: 0.2702
    Akida inference on 100 images took 22.95 s.
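    The timings quoted above translate directly into per-image latency and throughput - the numbers any benchmark comparison would need. A quick sketch (the image counts and total times come straight from the documented outputs above; nothing else is assumed):

    ```python
    # Per-image latency and throughput implied by the simulator timings
    # quoted in BrainChip's documentation examples.

    examples = {
        "CIFAR10":   (1000, 23.52),  # (images, total seconds)
        "MobileNet": (10,   1.24),
        "YOLO":      (100,  22.95),
    }

    results = {}
    for name, (n_images, total_s) in examples.items():
        latency_ms = total_s / n_images * 1000.0   # ms per image
        throughput = n_images / total_s            # images per second
        results[name] = (latency_ms, throughput)
        print(f"{name}: {latency_ms:.2f} ms/image, {throughput:.1f} images/s")
    ```

    So the simulator is doing roughly 24 ms per CIFAR10 image on whatever machine was used. That is exactly the kind of figure I'd like to see for the silicon.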

    But my understanding is that all these performance figures were obtained with the Akida Development Environment, which does a software emulation of the chip. Now, I have seen BrainChip's demos where you can see a chip on a test board generating a number to say that it has effectively recognized a keyword. But those demos certainly do not allow you to say how long it took to recognize the stimulus. Was it 500 ms? Or 1 ms? Or 10 microseconds? I have no idea. But it should be easy for BrainChip's hardware team to provide real numbers for such tasks.

    I would have thought that the first thing that BrainChip's hardware team would have done after receiving the chip would have been to test how long the chip took to process 1000 CIFAR10 images (and the other problems that are illustrated in the company's documentation). It takes 23.52 seconds on the computer they were using for the ADE demo. If they could say what machine was used for that, and how long the chip takes to do the same task, they would instantly have some hard numbers to convince developers and investors. It is quite possible that the chip is 1000 times faster than software. But who knows?
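    The measurement itself is trivial to script. A minimal harness might look like this - note that `run_inference` here is a hypothetical stand-in for whatever predict call BrainChip's SDK exposes, not an actual API:

    ```python
    import time

    def benchmark(run_inference, images, warmup=10):
        """Time single-image inferences; return (mean latency in ms,
        throughput in images/s).

        run_inference: hypothetical callable wrapping the actual
        hardware (or simulator) predict call for one image.
        """
        # Warm-up runs so one-off setup cost doesn't skew the timing.
        for img in images[:warmup]:
            run_inference(img)

        start = time.perf_counter()
        for img in images:
            run_inference(img)
        elapsed = time.perf_counter() - start

        n = len(images)
        return (elapsed / n) * 1000.0, n / elapsed
    ```

    Run the same harness once against the ADE simulator and once against the silicon, divide the two latencies, and you have the hardware speedup figure in a single afternoon.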

    The company first announced that it had "Validated" the hardware design on the 24th September 2020 https://brainchipinc.com/brainchip-confirms-validation-of-the-akida-neural-processor/

    The fact that the company has yet to provide any such information is, frankly, weird. But it's never too late....

    If people at the company read Hotcopper, maybe someone could suggest that providing some actual numbers for throughput and power consumption would be much appreciated - by me at least!

    And if this thread could be used for actually talking about benchmarking data, rather than discussions of other chips, then that would be very much appreciated too.
 