WBT -3.33% $2.32 Weebit Nano Ltd


    There was a previous posting on this topic, but here is some more detail on Weebit's contribution:

    https://semiwiki.com/ip/weebit-nano/346523-weebit-nano-at-the-2024-design-automation-conference-2/


    Weebit Nano at the 2024 Design Automation Conference
    by Daniel Nenni on 06-21-2024 at 6:00 am
    Categories: Events, IP, Weebit Nano

    We all know that the proliferation of AI applications is happening at an unprecedented rate while at the same time, memories aren’t scaling along with logic. This is one of many reasons that the industry is exploring new memory technologies and architectures.
    When it comes to embedded non-volatile memory (NVM), the incumbent technology, flash, has reached its limits in terms of power consumption, speed, endurance and cost. It is also not scalable below 28nm, so it’s not possible to integrate flash and an AI inference engine together in a single SoC at 28nm and below for edge AI applications.
    Embedded ReRAM is the logical alternative. Embedding ReRAM into an AI SoC would replace off-chip flash devices, and it can also be used to replace the large on-chip SRAM to store the AI weights and CPU firmware. Because the technology is non-volatile, there is no need to wait at boot time to load the AI model from external NVM.
    ReRAM is also much denser than SRAM, which makes it less expensive than SRAM per bit, so more memory can be integrated on-chip to support larger neural networks for the same die size and cost. While on-chip SRAM will still be needed for data storage, the array will be smaller and the total solution more cost-effective. With ReRAM, designers can implement advanced AI in a single IC while saving die size and cost.
    ReRAM will also be a building block for the future of edge AI: neuromorphic computing. In this paradigm, also called in-memory analog processing, compute resources and memory reside in the same location, so there is no need to ever move the weights. The neural network matrices become arrays of ReRAM cells, and the synaptic weights become the conductance of the NVM cells that drive the multiply operations.
    Because ReRAM cells have physical and functional similarities to the synapses in the human brain, it will be possible to emulate the behavior of the human brain with ReRAM for fast real-time processing on massive amounts of data. Such a solution will be orders of magnitude more power-efficient than today’s neural network simulations on traditional processors.
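    The in-memory multiply described above can be sketched numerically. In a ReRAM crossbar, each cell's conductance G encodes a synaptic weight; applying input voltages V to the rows makes each cell pass current I = G x V (Ohm's law), and each bit-line sums its cells' currents (Kirchhoff's current law), producing one dot product per column without ever moving the weights. The sketch below is a simplified ideal model (no cell non-linearity or wire resistance), and all names in it are illustrative, not anything from Weebit's implementation:

```python
def analog_mvm(conductances, voltages):
    """Ideal model of a ReRAM crossbar matrix-vector multiply.

    conductances: rows x cols grid of cell conductances (the stored weights)
    voltages: one input voltage per row
    Returns the current summed on each column's bit-line, i.e. one
    dot product per column, computed where the weights are stored.
    """
    rows = len(conductances)
    cols = len(conductances[0])
    return [
        sum(conductances[r][c] * voltages[r] for r in range(rows))
        for c in range(cols)
    ]

# A 2x3 weight array (conductances, arbitrary units) and a 2-element input:
G = [[0.5, 1.0, 0.0],
     [2.0, 0.0, 1.5]]
V = [1.0, 2.0]
print(analog_mvm(G, V))  # column currents: [4.5, 1.0, 3.0]
```

    In a physical array all column currents settle concurrently, so the whole matrix-vector product takes effectively one analog step; the loop here only emulates that parallelism in software.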
    At the Design Automation Conference (DAC) 2024, Gideon Intrater from Weebit Nano will go in depth on this topic during his presentation, ‘ReRAM: Enabling New Low-power AI Architectures in Advanced Nodes.’
    Gideon’s presentation will be part of the session, ‘Cherished Memories: Exploring the Power of Innovative Memory Architectures for AI Applications,’ which will explore cutting-edge technologies transforming the landscape of memory design. The session is organized by Moshe Zalcberg of Veriest Solutions and moderated by Raul Camposano of Silicon Catalyst, and the other presenters include experts from RAAM Technologies and Veevx Inc.
    Don’t miss this DAC session!
    • Cherished Memories: Exploring the Power of Innovative Memory Architectures for AI Applications
    • Time: 10:30 AM – 12:00 PM
    • Location: IP Room: 2012, 2nd Floor
 