

    Hi Acca

    Thank you for those kind words. It is true that one can become frustrated with some posters who state they have done their own research yet make statements which are completely incorrect.

    There has been some criticism of AKD1000 for incorrectly identifying a tiger and a table together.

    AKD1000 did not misidentify the tiger on the table, any more than a child who has been taught that a brown block of wood is an elephant, and that an elephant is a brown block of wood, is in error when they pick up the elephant after later being asked to pick up the brown block of wood.

    To anyone who knows anything about the problem that one-shot learning with AKIDA is solving, the fact that AKIDA correctly identified what it was initially told was a tiger, a label that included the table as part of the single image it was trained on, makes clear that AKIDA did not make a mistake, any more than the incorrectly taught child has made a mistake.

    In fact, if Anil had said 'we will now train AKIDA with one photograph to remember and be able to identify a tiger on a table', then AKIDA would have had a perfect score. AKIDA was simply asked the wrong question, or rather given the wrong label.
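    To make the labelling point concrete, here is a minimal sketch of a generic prototype-based one-shot classifier. This is my own illustration, not BrainChip's MetaTF API or AKIDA's internals, and the embed() function is a hypothetical stand-in for a feature extractor. The point it shows is that a one-shot learner can only hand back the label it was enrolled with: if the single enrolment image contains a tiger and a table and is labelled 'tiger', then matching a later tiger-on-table image to 'tiger' is correct behaviour.

```python
import numpy as np

# Hypothetical embedding function standing in for a pretrained feature
# extractor placed in front of a one-shot classifier head.
def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder: in practice this would be a learned feature vector.
    return image.reshape(-1).astype(float) / 255.0

class OneShotClassifier:
    """Nearest-prototype classifier: one enrolment image defines a class."""

    def __init__(self):
        self.prototypes = {}  # label -> embedding vector

    def enrol(self, image: np.ndarray, label: str) -> None:
        # Whatever is in this image -- tiger, table, or both -- is now
        # what this label means to the classifier.
        self.prototypes[label] = embed(image)

    def classify(self, image: np.ndarray) -> str:
        query = embed(image)
        # Return the enrolled label whose prototype is closest to the query.
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(self.prototypes[lbl] - query))
```

    In other words, the classifier can only ever answer with the labels it was given at enrolment, so the quality of the answer is bounded by the quality of the label, which is exactly the point being made about the demo.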

    So clearly those who were watching it live and were impressed understood the above.

    Further, those who were watching and who have tweeted about the experience with the tiger have correctly identified how revolutionary AKIDA is. They were properly impressed by the fact that, having been shown the correct image of the tiger minus the table, it learned with one shot to identify the tiger from multiple angles, extrapolating from that single learned image what the tiger would logically look like from different angles.

    To further explain how revolutionary this is, and why those in remote attendance were blown away, it is clear they have knowledge of what is involved in training deep learning with CNNs. As those who have posted criticism are clearly not aware of what is involved, I have included below three items from a blog on deep learning with CNNs, taken at random from 2016, 2018 and 2020, regarding how many images are required to do something like what AKD1000 presented. They are simple to read and understand, and each addresses how many images you need to train a CNN to do what AKD1000 does in one shot.

    Before you do that, however, I have one final comment: as AKD1000 is shown the tiger from different angles and correctly identifies it each time, it is actually incrementally learning and reinforcing what it learned from the first shot. This makes its capacity to identify tigers better and better into the future, without human intervention.
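    For illustration only, here is one way such incremental reinforcement could be sketched in a prototype-based scheme. This is an assumption on my part about the general technique, not a description of how AKIDA actually does it: each view that is correctly matched is folded back into the stored class embedding as a running mean, so recognition is refined with every new angle seen.

```python
import numpy as np

def update_prototype(prototype: np.ndarray, new_embedding: np.ndarray,
                     count: int) -> np.ndarray:
    """Fold a newly matched view into the class prototype as a running mean.

    prototype     : current class embedding (mean of `count` views seen so far)
    new_embedding : embedding of the view just classified as this class
    count         : number of views already folded into the prototype
    """
    return prototype + (new_embedding - prototype) / (count + 1)

# Example: start from the single one-shot enrolment embedding, then refine
# it as each new angle of the tiger is correctly recognised.
proto = np.array([0.9, 0.1, 0.4])          # embedding from the first shot
views = [np.array([0.8, 0.2, 0.5]),        # tiger seen from the side
         np.array([1.0, 0.0, 0.3])]        # tiger seen from the front
for seen, view in enumerate(views, start=1):
    proto = update_prototype(proto, view, seen)
print(proto)  # prototype drifts toward the mean of all views seen so far
```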

    Anyway enough said by me:

    8th Feb, 2016

    Jack Kelly

    Imperial College London

    It really is pretty hard (if not impossible) to give a 'one size fits all' answer. The amount of training data you require is dependent on many different aspects of your experiment:

    · how different are the classes that you're trying to separate? e.g. if you were just trying to classify black versus white images then you'd need very few training examples! But if you're trying to solve ImageNet then you need training data on the order of 1000 examples per class.

    19th Jun, 2018

    Ian Kirwan

    Jaguar Land Rover Limited

    Apparently you're not going to get a rule of thumb as there are too many variables involved. However, I have been training several CNNs over the last few days for the purpose of steering from camera input. 'Behavioural cloning'. The models are approximately 5 million parameters in size and have a single regressed output. I want to use the minimum amount of samples I can for the purpose as collecting appropriate data is tedious. I trained models with about 40, 60, and 80 thousand samples (16 epochs). Each exhibiting marked improvement on the last. At 80 thousand samples the models look like they are just starting to do their job as intended. I'm about to start training on 140 thousand samples and expect to see significant improvement. I suspect 1 million samples would do very nicely.

    I should however say that if I collected a few thousand samples from every country in the world and it added up to a million samples, then trained my model with that data, what would it be good for? It would not be particularly good in any country. If I train my model on a million samples collected from London, it would probably be quite good in London and numerous other British cities. The sample data and application relationship matter, despite generalisation capabilities.

    Obviously we want good generalisation, but many real world problems are just too complex for a single model.

    2nd Jul, 2020

    Roman Gurbiel

    Stratpole

    The question is old, but the problem is up-to-date. My case is 1000-5000 data set, so probably small. However there are strategies to attack the problems. Interesting articles on small sizes: https://www.nist.gov/system/files/documents/2019/08/27/workshopslides-small_data_convnet

    https://papers.nips.cc/paper/7620-modern-neural-networks-generalize-on-small-data-sets.pdf


    My opinion only DYOR.

 