BRN 11.7% 26.3¢ BrainChip Holdings Ltd

Amit Mate
Founder and CEO | Multi-AI, Multi-stream Connected-Intelligent Applications
3h
Very interesting to see this precise articulation of why vision is central to unlocking more intelligence in machines. GMAC Intelligence remains committed to enabling Intelligent Digital Assistants that can see, hear, understand, talk, move around, and operate at the fastest possible speed and lowest possible energy. #edgeai #artificialintelligence #visionai #llm #northstar


Yann LeCun
VP & Chief AI Scientist at Meta
1d · Edited
* Language is low bandwidth: less than 12 bytes/second. A person can read 270 words/minute, or 4.5 words/second, which is 12 bytes/s (assuming 2 bytes per token and 0.75 words per token). A modern LLM is typically trained with 1x10^13 two-byte tokens, which is 2x10^13 bytes. This would take about 100,000 years for a person to read (at 12 hours a day).

* Vision is much higher bandwidth: about 20 MB/s. Each of the two optic nerves has 1 million nerve fibers, each carrying about 10 bytes per second. A 4-year-old child has been awake a total of 16,000 hours, which translates into 1x10^15 bytes.

In other words:
- The data bandwidth of visual perception is roughly 1.6 million times higher than the data bandwidth of written (or spoken) language.
- In a mere 4 years, a child has seen 50 times more data than the biggest LLMs trained on all the text publicly available on the internet.

This tells us three things:
1. Yes, text is redundant, and visual signals in the optic nerves are even more redundant (despite being 100x compressed versions of the photoreceptor outputs in the retina). But redundancy in data is *precisely* what we need for Self-Supervised Learning to capture the structure of the data. The more redundancy, the better for SSL.
2. Most of human knowledge (and almost all of animal knowledge) comes from our sensory experience of the physical world. Language is the icing on the cake. We need the cake to support the icing.
3. There is *absolutely no way in hell* we will ever reach human-level AI without getting machines to learn from high-bandwidth sensory inputs, such as vision. Yes, humans can get smart without vision, even pretty smart without vision and audition. But not without touch. Touch is pretty high bandwidth, too.
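For anyone who wants to verify, the arithmetic in the post checks out. Here is a minimal Python sketch of the same back-of-envelope numbers (the constants are the post's own; the variable names are mine):

```python
# Sanity check of the figures in LeCun's post, using only the numbers
# the post itself gives.

SECONDS_PER_HOUR = 3600

# Language: 270 words/minute, 0.75 words per token, 2 bytes per token.
words_per_s = 270 / 60              # 4.5 words/s
tokens_per_s = words_per_s / 0.75   # 6 tokens/s
lang_bytes_per_s = tokens_per_s * 2 # 12 bytes/s

# An LLM's training set: 1e13 tokens at 2 bytes each.
llm_bytes = 1e13 * 2                # 2e13 bytes
reading_s = llm_bytes / lang_bytes_per_s
reading_years = reading_s / (12 * SECONDS_PER_HOUR * 365)  # reading 12 h/day
print(f"Reading time: ~{reading_years:,.0f} years")        # about 100,000

# Vision: 2 optic nerves x 1e6 fibers x 10 bytes/s per fiber.
vision_bytes_per_s = 2 * 1e6 * 10   # 2e7 bytes/s = 20 MB/s
child_bytes = vision_bytes_per_s * 16_000 * SECONDS_PER_HOUR  # ~1.15e15 bytes

# ~1,666,667x, the post's "roughly 1.6 million times higher"
print(f"Bandwidth ratio: ~{vision_bytes_per_s / lang_bytes_per_s:,.0f}x")

# The post rounds 1.15e15 down to 1e15 bytes, which yields its 50x figure;
# the unrounded ratio is ~58x.
print(f"Child visual data vs LLM text: ~{child_bytes / llm_bytes:.0f}x")
```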
 
(20 min delay)
Last: 26.3¢
Change: +0.028 (11.7%)
Mkt cap: $522.7M

Open   High   Low    Value    Volume
24.5¢  27.0¢  24.5¢  $3.098M  11.91M

Buyers (Bids)
No.  Vol.     Price($)
12   551,403  26.0¢

Sellers (Offers)
Price($)  Vol.       No.
26.5¢     1,689,535  38
Last trade: 12.44pm 06/11/2024 (20 minute delay)
BRN (ASX) Chart