BRN 2.50% 19.5¢ brainchip holdings ltd

2021 BRN Discussion, page-27885

  1. 4,411 Posts. 1,614 lightbulbs
    And I'm not done yet!

    COULD THIS BE GOOGLE unofficially introducing AKIDA to the world? You read... YOU DECIDE

    https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/

    Today's models mostly focus on one sense. Pathways will enable multiple senses.

    People rely on multiple senses to perceive the world. That’s very different from how contemporary AI systems digest information. Most of today’s models process just one modality of information at a time. They can take in text, or images or speech — but typically not all three at once.

    Pathways could enable multimodal models that encompass vision, auditory, and language understanding simultaneously. So whether the model is processing the word “leopard,” the sound of someone saying “leopard,” or a video of a leopard running, the same response is activated internally: the concept of a leopard. The result is a model that’s more insightful and less prone to mistakes and biases.

    And of course an AI model needn’t be restricted to these familiar senses; Pathways could handle more abstract forms of data, helping find useful patterns that have eluded human scientists in complex systems such as climate dynamics.
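    The "same concept, any modality" idea above can be illustrated with a toy sketch. This is not Google's Pathways code; the vectors below are hand-made stand-ins for what learned encoders would produce, chosen so that every modality's "leopard" input lands near the same point in a shared embedding space.

    ```python
    import numpy as np

    def cosine(u, v):
        """Cosine similarity: 1.0 means the vectors point the same way."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hand-made embeddings standing in for learned per-modality encoders.
    text_leopard  = np.array([0.90, 0.10, 0.80, 0.00])   # the word "leopard"
    audio_leopard = np.array([0.85, 0.15, 0.75, 0.05])   # someone saying it
    video_leopard = np.array([0.88, 0.05, 0.82, 0.02])   # a leopard running
    text_zebra    = np.array([0.10, 0.90, 0.00, 0.80])   # an unrelated concept

    # All three leopard inputs activate (nearly) the same internal concept,
    # while the unrelated concept lands far away in the shared space.
    print(cosine(text_leopard, audio_leopard))  # ~0.998
    print(cosine(text_leopard, video_leopard))  # ~0.999
    print(cosine(text_leopard, text_zebra))     # ~0.12
    ```

    In a real multimodal model the encoders are trained so that matching inputs from different modalities are pulled together like this, which is what lets one internal "leopard" representation serve text, audio, and video at once.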

    Today's models are dense and inefficient. Pathways will make them sparse and efficient.

    A third problem is that most of today’s models are “dense,” which means the whole neural network activates to accomplish a task, regardless of whether it’s very simple or really complicated.

    This, too, is very unlike the way people approach problems. We have many different parts of our brain that are specialized for different tasks, yet we only call upon the relevant pieces for a given situation. There are close to a hundred billion neurons in your brain, but you rely on a small fraction of them to interpret this sentence.

    AI can work the same way. We can build a single model that is “sparsely” activated, which means only small pathways through the network are called into action as needed. In fact, the model dynamically learns which parts of the network are good at which tasks -- it learns how to route tasks through the most relevant parts of the model. A big benefit to this kind of architecture is that it not only has a larger capacity to learn a variety of tasks, but it’s also faster and much more energy efficient, because we don’t activate the entire network for every task.

    For example, GShard and Switch Transformer are two of the largest machine learning models we’ve ever created, but because both use sparse activation, they consume less than 1/10th the energy that you’d expect of similarly sized dense models — while being as accurate as dense models.
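    The routing behind this kind of sparsely activated model (a mixture-of-experts, as in GShard and Switch Transformer) can be sketched in a few lines of numpy. This is a toy under simplifying assumptions, not Google's implementation; the names W_gate, experts, and sparse_forward are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    D, N_EXPERTS, TOP_K = 8, 4, 2

    # A gating network plus four small "expert" sub-networks (one weight
    # matrix each), standing in for the specialised pathways.
    W_gate = rng.normal(size=(D, N_EXPERTS))
    experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]

    def sparse_forward(x):
        """Run x through only the TOP_K best-matching experts."""
        scores = x @ W_gate                    # gate: score every expert for x
        chosen = np.argsort(scores)[-TOP_K:]   # keep only the top-k experts
        w = np.exp(scores[chosen])
        w /= w.sum()                           # softmax over the chosen experts
        # Only TOP_K of the N_EXPERTS matrices are ever multiplied; the
        # remaining experts stay idle, which is where the compute and
        # energy saving comes from.
        y = sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen))
        return y, chosen

    x = rng.normal(size=D)
    y, used = sparse_forward(x)
    print(sorted(used.tolist()))   # which 2 of the 4 experts fired
    ```

    The gate itself is learned in a real model, so the network discovers which experts are good at which inputs; scaling up the expert count grows capacity while the per-input cost stays fixed at TOP_K experts.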

 