
2024 BrainChip Discussion, page-3477

    Amit Mate
    Founder and CEO | Multi-AI, Multi-stream Connected-Intelligent Applications
    Very interesting to see this precise articulation of why vision is central to unlocking more intelligence in machines. GMAC Intelligence remains committed to enabling Intelligent Digital Assistants that can see, hear, understand, talk, move around, and operate at the fastest possible speed and lowest possible energy. #edgeai #artificialintelligence #visionai #llm #northstar


    Yann LeCun
    VP & Chief AI Scientist at Meta
    * Language is low bandwidth: less than 12 bytes/second. A person can read 270 words/minute, or 4.5 words/second, which is 12 bytes/s (assuming 2 bytes per token and 0.75 words per token). A modern LLM is typically trained with 1x10^13 two-byte tokens, which is 2x10^13 bytes. This would take a person about 100,000 years to read (at 12 hours a day).

    * Vision is much higher bandwidth: about 20 MB/s. Each of the two optic nerves has 1 million nerve fibers, each carrying about 10 bytes per second. A 4-year-old child has been awake a total of 16,000 hours, which translates into 1x10^15 bytes.

    In other words:
    - The data bandwidth of visual perception is roughly 1.6 million times higher than the data bandwidth of written (or spoken) language.
    - In a mere 4 years, a child has seen 50 times more data than the biggest LLMs trained on all the text publicly available on the internet.

    This tells us three things:
    1. Yes, text is redundant, and visual signals in the optic nerves are even more redundant (despite being 100x compressed versions of the photoreceptor outputs in the retina). But redundancy in data is *precisely* what we need for Self-Supervised Learning to capture the structure of the data. The more redundancy, the better for SSL.
    2. Most of human knowledge (and almost all of animal knowledge) comes from our sensory experience of the physical world. Language is the icing on the cake. We need the cake to support the icing.
    3. There is *absolutely no way in hell* we will ever reach human-level AI without getting machines to learn from high-bandwidth sensory inputs, such as vision. Yes, humans can get smart without vision, even pretty smart without vision and audition. But not without touch. Touch is pretty high bandwidth, too.
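
    For anyone who wants to check LeCun's arithmetic, here is a minimal Python sketch. All constants come straight from the post; the only divergence is the final ratio, where the post rounds the child's 1.152x10^15 bytes down to 1x10^15 before dividing.

        SECONDS_PER_HOUR = 3600

        # --- Language bandwidth ---
        words_per_second = 270 / 60                    # 270 words/minute
        tokens_per_second = words_per_second / 0.75    # 0.75 words per token
        lang_bytes_per_second = tokens_per_second * 2  # 2 bytes per token
        print(f"language bandwidth: {lang_bytes_per_second:.0f} B/s")  # 12 B/s

        # Time for a person to read an LLM's training set, at 12 h/day
        llm_training_bytes = 1e13 * 2                  # 1e13 tokens * 2 bytes
        reading_seconds = llm_training_bytes / lang_bytes_per_second
        reading_years = reading_seconds / (12 * SECONDS_PER_HOUR * 365)
        print(f"years to read it: {reading_years:,.0f}")  # ~105,700, i.e. ~100,000

        # --- Vision bandwidth ---
        vision_bytes_per_second = 2 * 1e6 * 10  # 2 optic nerves * 1e6 fibers * 10 B/s
        print(f"vision bandwidth: {vision_bytes_per_second / 1e6:.0f} MB/s")  # 20 MB/s

        # Total visual data seen by a 4-year-old awake 16,000 hours
        child_bytes = vision_bytes_per_second * 16_000 * SECONDS_PER_HOUR
        print(f"child's visual data: {child_bytes:.2e} bytes")  # 1.15e+15

        # --- The two ratios quoted in the post ---
        print(f"vision/language: {vision_bytes_per_second / lang_bytes_per_second:,.0f}x")
        # 1,666,667x, i.e. "roughly 1.6 million times"
        print(f"child data / LLM data: {child_bytes / llm_training_bytes:.0f}x")
        # 58x; the post rounds 1e15 / 2e13 = 50x

    The numbers reproduce: ~12 B/s for reading, ~100,000 years to read an LLM's training set, 20 MB/s for vision, ~1.6 million times the bandwidth, and roughly 50x the data in 4 years.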
 