4DS Memory Limited (ASX: 4DS)

Trying to understand the technology, I found this article explaining some of the competition for memory development...

3 emerging memory technologies that will change how you handle big data
By Chris Angelini (@chris_angelini), November 13, 2019, 10:20 AM

This article is part of the Technology Insight series, made possible with funding from Intel.

A couple of years back, IDC predicted that by 2025 the average person will interact with connected devices 4,800 times per day. Information pouring in from those sensors will fuel machine learning, language processing, and artificial intelligence, all requiring fast storage and more compute horsepower. The next generation of memory technologies will address gaps in today's storage hierarchy, delivering data where it's needed for real-time processing.

Emerging memory technologies promise to keep voluminous data closer to processors without the high cost or power consumption of SRAM and DRAM. Most are non-volatile, like the NAND flash inside of SSDs, and dramatically faster than NVMe-attached solid-state drives.

In this first of a two-part series, we'll look at three technologies with answers to the impending big-data bottleneck: Intel's Optane, two types of magneto-resistive RAM (MRAM), and resistive random-access memory (ReRAM). Part two will cover nanotube RAM, ferroelectric RAM, and phase-change memory.

Key benefits of new memory technology

- Intel Optane DC persistent memory: Non-volatile, high-capacity memory tuned for data center workloads. Can be accessed through memory operations or as block storage.
- MRAM: Non-volatile memory that can be powered down completely, then awakened quickly for fast writes in an IoT application.
- ReRAM: Promises to bridge the gap between DRAM and flash in the datacenter. Storing entire databases in fast, non-volatile ReRAM would revolutionize in-memory computing.

Setting the stage for big data

Here's the problem: computational performance is increasing at a pace unmatched by data-access technologies. When massively parallel CPUs or purpose-built accelerators run out of ultra-fast cache or speedy system memory, they're forced to dip into slow, disk-based storage for bytes to crunch on, and grind to a (relative) halt. Larger SRAM caches help keep hot data close at hand, and copious DRAM works wonders for in-memory computing. However, both types of storage are expensive to procure. They're also volatile by nature, requiring constant power to retain data. Adding more of either just isn't an economical way to address the sheer volume of data awaiting real-time analysis.

Rob Crooke, senior vice president and general manager of Intel's non-volatile memory solutions group, sums up the basic challenge this way: "DRAM is not big enough to solve today's problem of real-time data analysis, and traditional storage isn't fast enough."

[Image: Emerging memory technologies help close the gap between flash, which is capacious but relatively slow, and DRAM, which is fast but much more limited in capacity. Credit: SNIA]

The company's Optane technology fits into a growing gap between system memory and flash-based solid-state drives, potentially supercharging analytics, artificial intelligence, and content delivery networks. DRAM is great for in-memory processing, but it's also limited in capacity. SSDs cost a lot less per gigabyte as they scale into massive deployments; they just don't have the performance for real-time transactional operations. Optane was designed to bridge those two worlds.

Optane employs a unique architecture made up of individually addressable memory cells stacked in a dense, three-dimensional matrix. Intel doesn't get specific about the technology at play in its Optane-based devices. However, we do know that Optane can act like either DRAM or an SSD, depending on its configuration.

[Image: Intel's Optane DC persistent memory module drops into a motherboard's DIMM slots, adding anywhere from 128GB to 512GB of high-speed, non-volatile storage. Credit: Intel]

Intel's Optane DC persistent memory drops into a standard DIMM slot connected to a CPU's memory controller. Available in capacities of up to 512GB, it can hold several times more data than the largest DDR4 module.
The information on an Optane DC persistent memory DIMM operating in App Direct Mode is retained when the power goes out. In contrast, volatile memory technologies like DRAM lose data quickly if they aren't constantly refreshed. Software does need to be optimized for Intel's technology; however, the right tweaks allow performance-bound applications to access Optane DC persistent memory with low-latency memory operations.

Alternatively, the DIMMs can be used in Memory Mode, where they coexist with volatile memory to expand capacity. Software doesn't need to be rewritten to deploy Optane DC persistent memory in Memory Mode.

The technology can also be used in what Intel calls Storage Over App Direct Mode, where persistent memory address space becomes accessible through standard file APIs. Applications expecting block storage can access the App Direct region of Optane DC persistent memory modules without any special optimizations. The benefit is higher performance compared to moving data over the I/O bus.

Regardless of how applications use Optane DC persistent memory, the technology's strengths remain the same: capacity, performance, and persistence. Datacenter apps with large memory footprints (think cloud and infrastructure-as-a-service) are direct beneficiaries. The same goes for in-memory databases, storage caching layers, and Network Function Virtualization.
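To make the App Direct programming model a little more concrete, here is a minimal sketch in C using the open-source PMDK libpmem library, one common way to issue the load/store-style persistent writes described above. The article doesn't mention PMDK, and the file path (a DAX-mounted filesystem backed by persistent memory) is a placeholder, so treat this as an illustration rather than Intel's prescribed method.

#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on a DAX-mounted filesystem backed by persistent memory.
     * The path below is hypothetical; adjust it for your system. */
    char *addr = pmem_map_file("/mnt/pmem0/example", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Write with ordinary CPU store instructions (no I/O stack)... */
    strcpy(addr, "hello, persistent memory");

    /* ...then flush the CPU caches so the data survives power loss. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);  /* fallback if the mapping isn't real pmem */

    pmem_unmap(addr, mapped_len);
    return 0;
}

Build with cc example.c -lpmem on a machine with PMDK installed. In Memory Mode, by contrast, no such changes are needed, because the modules simply appear to software as ordinary (volatile) system memory.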
MRAM shows promise at the edge

Whereas Optane is mostly being aimed at the datacenter, magneto-resistive RAM, or MRAM, shows promise across a range of IoT devices: the very sensors that IDC says we'll soon be touching thousands of times a day.

Consider this example from a blog post by Dr. Mahendra Pakala, managing director of Applied Materials' memory group. It uses a security camera with voice and facial recognition as an example of where MRAM works well. You want that camera to process as much data as possible at the edge, and only upload information that matters to the cloud. Power consumption is paramount, however. According to Dr. Pakala, today's edge devices primarily employ SRAM, which uses up to six transistors per cell and can suffer high active leakage power, hurting their efficiency. "As an alternative, MRAM promises several times more transistor density, enabling higher storage densities or smaller die sizes." Greater capacity, more compact chips, and lower power consumption sound like a win for anyone processing at the edge.

Data in MRAM is stored by magnetic elements formed from a pair of ferromagnetic plates separated by a thin dielectric tunneling insulator. One plate's polarity is set permanently, while the other's magnetization changes to store zeroes and ones. Together, the plates form a magnetic tunnel junction (MTJ). These become the memory device's building blocks.

Like Optane DC persistent memory, MRAM is non-volatile. Everspin Technologies, one of the leaders in MRAM technology, says data stored in its Toggle MRAM lasts for 20 years at rated temperature. MRAM is incredibly fast, too: Everspin claims simultaneous read/write latency in the 35ns range. That's close to the vaunted performance of SRAM, making MRAM an attractive substitute for almost any of today's volatile memories.

Density is where classic MRAMs fall short of DRAM and flash memory. Everspin recently announced a 32Mb device; in comparison, the largest four-bit-per-cell NAND parts offer 4Tb densities. All the more reason for MRAM to excel in IoT and industrial applications, where its performance, persistence, and unlimited endurance more than make up for a lack of capacity.

[Image: Everspin's newest 1Gb spin-transfer torque MRAM device targets enterprise and computing applications that need high capacity, low latency, and persistence. Credit: Everspin]

Spin-transfer torque MRAM (STT-MRAM) is a variation of the magneto-resistive technology that works by manipulating electron spin with a polarizing current. Its mechanism requires less switching energy than Toggle MRAM, bringing power consumption down. STT-MRAM is also more scalable: Everspin's standalone devices are available in 256Mb and 1Gb densities. A company like Phison can drop one of them next to its flash controller and get excellent caching performance with the added benefit of power-loss protection. You wouldn't need to worry about buying SSDs with built-in battery backup; data in flight would always be safe, even in the event of an unexpected shutdown.

Foundries like Intel, TSMC, and UMC are interested in STT-MRAM for another purpose: they want to embed it in their microcontrollers. The NOR flash currently used in those designs has a hard time scaling to smaller manufacturing nodes, while MRAM is more economical to integrate. In fact, Intel has already presented a paper showing off a production-ready 7.2Mb MRAM array integrated with its 22nm FinFET Low Power process. The company says that MRAM as embedded non-volatile memory is a potential solution for IoT, FPGAs, and chipsets with on-chip boot data requirements.

ReRAM may be the answer for in-memory computing

A few months after announcing its success integrating MRAM with 22FFL manufacturing, Intel gave a presentation at the International Solid-State Circuits Conference describing a 3.6Mb resistive random-access memory (ReRAM) macro embedded with the same process node.

ReRAM is another type of non-volatile memory touting low power, high density, and a performance profile that puts it between DRAM and flash-based storage. But whereas MRAM's characteristics foretell a life among IoT devices, ReRAM is being groomed for a datacenter career, bridging the gap between server memory and SSDs.

[Image: Crossbar's ReRAM technology: nanofilaments in the dielectric between two electrodes are formed and reset by different voltage levels, creating low- and high-resistance paths. Credit: Crossbar]

Several companies are developing ReRAM using a variety of materials. Crossbar's ReRAM technology, for example, employs a silicon-based switching material sandwiched between top and bottom electrodes. When voltage is applied between the electrodes, a nanofilament forms in the dielectric, creating a low-resistance path. The filament can then be reset by another voltage. Intel uses a tantalum oxide high-κ dielectric under an oxygen exchange layer, creating vacancies between its electrodes. The two cells differ in composition, but perform the same function, delivering many-times-faster read and write performance compared to NAND flash.

Applied Materials' Dr. Pakala said ReRAM appears to be the most viable memory technology for in-memory computing, where data is held in RAM rather than in databases on disk. "Matrix multiplication can be done within the arrays by utilizing Ohm's Law and Kirchhoff's Law, without moving weights in and out of the chip. The multilevel cell architectures promise new levels of memory density that can allow much larger models to be designed and used." It's prohibitively expensive to work on those models in DRAM, which is why the cost benefits of ReRAM look so promising here.
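To unpack that quote: in a ReRAM crossbar, each cell's conductance stores a weight, the input vector is applied as voltages on the rows, and Ohm's Law plus Kirchhoff's Current Law mean that the current collected on each column is a dot product. The rough C sketch below only emulates that arithmetic in software; the conductance and voltage values are made up and don't come from the article.

#include <stdio.h>

#define ROWS 3  /* word lines: one per input element */
#define COLS 2  /* bit lines: one per output element */

int main(void)
{
    /* Hypothetical conductances (siemens) programmed into the ReRAM cells;
     * each one encodes a single matrix weight. */
    double G[ROWS][COLS] = {
        {1.0e-6, 2.0e-6},
        {3.0e-6, 0.5e-6},
        {2.0e-6, 1.0e-6},
    };

    /* Input vector applied as row voltages (volts). */
    double V[ROWS] = {0.2, 0.5, 0.1};

    /* Ohm's Law gives each cell's current, I = G * V; Kirchhoff's Current
     * Law sums those currents on every column, so the column currents are
     * the matrix-vector product. */
    for (int j = 0; j < COLS; j++) {
        double I = 0.0;
        for (int i = 0; i < ROWS; i++)
            I += G[i][j] * V[i];
        printf("column %d current: %.3e A\n", j, I);
    }
    return 0;
}

In real hardware that multiply-accumulate happens in the analog domain inside the array, which is why no weights need to move in or out of the chip.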
[Image: CrossBar's ReRAM can be embedded into SoCs for fast, non-volatile on-board storage. Credit: Crossbar]

The best is yet to come

From the factory floor to the datacenter, fully utilizing compute resources without breaking the bank requires a fresh approach to storage. Energias Market Research expects the market for MRAM to grow rapidly between now and 2025, reaching $1.2 billion after a compound annual growth rate of 49.6%. Coughlin Associates predicts that 3D XPoint memory, the technology at the heart of Optane, will drive revenues to over $16 billion by 2028. Clearly, there's demand for new memories that address the impending limits of flash memory, DRAM, and SRAM.

There doesn't have to be just one winner, either. It's possible that all three of these emerging memory types will coexist at various levels of the storage hierarchy, with a common goal: to make sure the impending deluge of data doesn't overwhelm existing access technologies.

Intel's Optane DC persistent memory is already prolific in servers with second-gen Xeon Scalable processors. MRAM is being used alongside SSD controllers for write caching in place of DRAM. And ReRAM is more viable than ever thanks to Applied Materials' Endura Impulse PVD high-volume manufacturing system. If you're serious about processing massive amounts of data, the next five years are going to be critical. Now's the time to start weighing your options.
 