BRN 26.5¢ (+1.92%) BrainChip Holdings Ltd

Peter's second patent relies on the storage of the "training...

    Peter's second patent relies on the storage of the "training model" on conventional computers (copy of the abstract of patent application US 13/461,800 below).

    http://www.google.com/patents/US20130297537

    "The current invention comprises a function library and relates to Artificial Intelligence systems and devices. Within a Dynamic Neural Network (the “Intelligent Target Device”) training model values are autonomously generated in during learning and stored in synaptic registers. One instance of an Intelligent Target Device is the “Autonomous Learning Dynamic Artificial Neural Computing Device and Brain Inspired System”, described in patent application number 20100076916 and referenced in whole in this text. A collection of values that has been generated in synaptic registers comprises a training model, which is an abstract model of a task or a process that has been learned by the intelligent target device. A means is provided within the Intelligent Target Device to copy the training model to computer memory. A collection of such training model sets are stored within a function library on a computer storage facility, such as a disk, CD, DVD or other means."

    The way I see it, this is a fairly dumb learning machine that doesn't rely on logic, just a series of weighted values produced by feedback loops, and those values together constitute the training model.
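    A minimal sketch of what "weighted values from feedback loops" can mean (my own illustration, not BrainChip's actual mechanism): a perceptron-style unit nudges its weights whenever its output disagrees with a feedback signal. The final weights are the entire "model"; no logic or reasoning is stored anywhere.

```python
# Perceptron-style learning from an error feedback signal.
# Illustrative only; not the patented device's algorithm.

def learn(samples, lr=0.1, epochs=20):
    """samples: list of (inputs, target) pairs with targets 0 or 1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output  # the feedback signal
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias  # this flat list of values IS the "training model"
```

    Trained on examples of a simple task (e.g. the logical AND of two inputs), the returned weights reproduce the behaviour, but the weights themselves encode no explicit rule.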

    Mankind spent 2000 years trying to prove Euclid's fifth postulate (the parallel postulate) from his four other postulates. The postulate states that

    "If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles. "

    It wasn't until the early 19th century that it was discovered that logically consistent geometries ("curved space") could be created by negating Euclid's fifth postulate. So in 1829 mankind discovered that curved space was just as valid as flat, rectilinear space, and this allowed scientists to view the universe in a totally different way and construct non-Euclidean geometries based purely on logic.

    https://en.wikipedia.org/wiki/Parallel_postulate#History

    I mention this only to stimulate some thought on the race track milestones. Consider a track that has some straight sections and some short and very long curved sections. A machine driven by conventional software (algorithmic, logic-based learning and memory storage) can create a map-based picture of the track after completing it once, and at the same time avoid crashing into the walls using some method of triangulation with its sensors. A human does more or less the same thing. The more familiar a human driver is with a track, the more likely they are to put in a good lap time, but at the same time a good driver's brain also operates on a more instinctive level, in combination with their senses, to steer the car around the track, avoid collisions and avoid leaving the track altogether.

    The only thing we know about the reaching of milestone 2 (from slide 10 of the milestone 2 presentation) is that the car "learns" the track in 0.89 seconds, which, given the slide and the short amount of time, one would presume is even before it has completed a full lap. This is extremely impressive, but it implies a highly constrained and dumb learning model. With conventional programming, where a mappable element can be added, the car would know whether it is following a straight part of the track or a long curved section through the application of logic in its programming. The BRN car appears to learn very quickly to triangulate itself off each side of the track. But do the sides of the track represent solid barriers (walls) or not? What if the walls are removed? Will the car's learning model be rendered useless? A software-driven car that can triangulate off the walls, plus learn the track after one lap using a mapping algorithm, would still be able to navigate the track after the walls have been removed. A human brain, likewise, doesn't need a solid wall to triangulate off: we use all the visual cues, the edges of the track, the distance between the track and the barriers, the grandstands, the lines on the road.
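    The wall-dependence worry can be made concrete with a toy controller (again my own sketch, not BrainChip's system): steer to stay centred between two wall-distance sensors. The moment both sensors stop returning a wall, the learned behaviour has nothing to act on.

```python
# Toy wall-following controller, purely illustrative.
WALL_SENSOR_MAX = 100.0  # reading returned when no wall is detected

def steering_command(left_dist, right_dist, gain=0.5):
    """Steer toward the side with more room; positive = steer right."""
    if left_dist >= WALL_SENSOR_MAX and right_dist >= WALL_SENSOR_MAX:
        return None  # no walls to triangulate off: the model is blind
    return gain * (right_dist - left_dist)
```

    A map-based system, by contrast, could keep driving from its stored model of the track even with both sensors saturated.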

    What I can see from the BRN milestones is very efficient dumb learning at best: something more akin to a very good sensor than to a brain capable of logical thought combined with sensing ability. For me, software-driven AI will be smarter any day of the week, as it is programmed by the smartest thing mankind is aware of in the universe, the human brain.

    Eshmun
 