
Ann: 4DS receives imec wafers


    I don't know, Hash... Maybe the rest of the world is still asleep, but not everybody... Some may even have the systems in place already?

    Plus some play a much smarter game than others... hehe

    Now Musk, as we know, is a bit of a forward thinker... So it was no wonder he jumped at the chance of being the poster boy for Bitchcoin...

    But then he supposedly found out it used a lot of power.... ?? Really, Elon, you didn't know that?


    https://hotcopper.com.au/data/attachments/3291/3291653-f8e02ca3401a4b25fbf285fa4d88e189.jpg


    Or did you just pump it up to pull a couple billion out as play money to pump into other things?? :-)

    https://hotcopper.com.au/data/attachments/3291/3291659-404cc1769b76484e2990578eeba6c2dd.jpg
    https://hotcopper.com.au/data/attachments/3291/3291661-924cd9452ee4d96342886dc943b7bb48.jpg
    Like chip-hungry Rockets, Rubber Dildoes, Cars & Computers... ??

    See, Tesla's been hanging around Geeksville since, like, forever... Cause let's face it, right???

    So he, or they, would know all about power usage in computing... But cause that's to benefit the planet, we'll just rob from the rich and give unto Moore.. haha

    "Sustainable development" is one of the major issues in the 21st century. Thus the notions of green computing, green development and so on show up one after another. As the large-scale parallel computing systems develop rapidly, energy consumption of such systems is becoming very huge, especially system performance reaches Petascale (10^15 Flops) or even Exascale (10^18 Flops). The huge energy consumption increases the system temperature, which seriously undermines the stability and reliability, and limits the growth of system size. The effects of energy consumption on scalability become a growing concern.


    Now this is what Tesla dropped today... 1.8 exaflops, c/w 10 PB of hot-tier NVMe storage @ 1.6 TB/s

    And you're going to tell me this is the low-power model, right?? How'd you do that??

    https://hotcopper.com.au/data/attachments/3291/3291668-1fda973ac434c0eb65d2a1ee08f03ef7.jpg

    So how much power does it take to run this thing?????? Seeing as how we just hated on Bitcoin for being power whores?

    The answer is, or was: lots..... Mainly because of the typical layout.
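    Here's a rough sketch of how much, in Python. The efficiency figures are my illustrative assumptions (the ~2 GFLOPS/W GPU-class number turns up again further down), not Tesla's specs:

    ```python
    # Megawatts needed to sustain 1.8 exaflops at assumed efficiencies.

    target_flops = 1.8e18

    for label, gflops_per_watt in [
        ("2011-era GPU node (~2 GFLOPS/W)", 2),
        ("modern accelerator (~20 GFLOPS/W)", 20),
        ("exascale target (~50 GFLOPS/W)", 50),
    ]:
        megawatts = target_flops / (gflops_per_watt * 1e9) / 1e6
        print(f"{label}: {megawatts:,.0f} MW")
    # -> 900 MW, 90 MW, 36 MW: efficiency and layout are the whole game.
    ```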

    https://hotcopper.com.au/data/attachments/3291/3291674-0c160af39261263d9610d858ecee2293.jpg

    Ten years ago the issues were clear to see: new supercomputers would demand ever more power, or a radical approach to changing architecture.. Hash's fabric or network...

    https://hotcopper.com.au/data/attachments/3291/3291680-db232a44160591d0326bdb88e926f8e8.jpg
    And there's only one place to go for really expensive project funding... Enter the US Department of Defense.. Note Peter Kogge's review, from 10 years ago, of what it would take to build an exaflop computer..

    So just what can we expect? That’s a question with no easy answer. Even so, in 2007 the U.S. Defense Advanced Research Projects Agency (DARPA) decided to ask an even harder one: What sort of technologies would engineers need by 2015 to build a supercomputer capable of executing a quintillion (10^18) mathematical operations per second? (The technical term is floating-point operations per second, or flops. A quintillion of them per second is an exaflops.)

    The U.S. Department of Energy and the National Science Foundation are funding similar investigations, aimed at creating supercomputers for solving basic science problems.

    Supercomputers need long-term storage that’s dense enough and fast enough to hold what are called checkpoint files. These are copies of main memory made periodically so that if a fault is discovered, a long-running application need not be started over again from the beginning. The panel came to the conclusion that writing checkpoint files for exaflops-size systems may very well require a new kind of memory entirely, something between DRAM and rotating disks. And we saw very limited promise in any variation of today’s flash memory or in emerging nanotechnology memories, such as carbon nanotubes or holographic memory.

    So don’t expect to see a supercomputer capable of a quintillion operations per second appear anytime soon. But don’t give up hope, either. So are exaflop computers forever out of reach? I don’t think so. Success in assembling such a machine will demand a coordinated cross-disciplinary effort carried out over a decade or more, during which time device engineers and computer designers will have to work together to find the right combination of processing circuitry, memory structures, and communications conduits—something that can beat what are normally voracious power requirements down to manageable levels.
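    Kogge's checkpoint point is easy to put in numbers. A minimal sketch, assuming a hypothetical exascale-class machine with ~5 PB of aggregate main memory (my assumption, not a figure from the article):

    ```python
    # Dumping all of main memory has to finish fast, or the machine spends
    # its life checkpointing instead of computing.

    def checkpoint_minutes(memory_bytes, storage_bw_bytes_per_s):
        return memory_bytes / storage_bw_bytes_per_s / 60

    mem = 5e15  # ~5 PB aggregate main memory (assumed)

    for tier, bw in [
        ("rotating disk array (~0.1 TB/s)", 0.1e12),
        ("NVMe flash tier (~1.6 TB/s)", 1.6e12),
        ("DRAM/disk middle tier (~50 TB/s)", 50e12),
    ]:
        print(f"{tier}: {checkpoint_minutes(mem, bw):,.0f} min per checkpoint")
    # -> ~833, ~52 and ~2 minutes: hence "a new kind of memory entirely,
    #    something between DRAM and rotating disks".
    ```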

    Because, as Hash rightly points out now, Kogge called it then:

    Also, computer architects will have to figure out how to put the right kinds of memory at the right places to allow applications to run on these systems efficiently and without having to be restarted constantly because of transient glitches. And hardware and software specialists will have to collaborate closely to find ways to ensure that the code running on tomorrow’s supercomputers uses a far greater proportion of the available computing cores than is typical for supercomputers today.

    Remembering the power equation, and Musk's dislike of wasting (power), which is the petrol for new-model Teslas...
    https://hotcopper.com.au/data/attachments/3291/3291696-75ed9da5a15ee89a197ac44505f4f6e6.jpg
    Personally, considering the prominence a Super Large Playstation PS5ex has on the world stage, I'd say the old US DoD may have tipped in a little cash to fund research, but things really started to move around 2015-2016, when research projects went hard at the task

    https://hotcopper.com.au/data/attachments/3291/3291699-4a10937e2f0e2d0075dbd0f14f6c821c.jpg

    Against this background, this paper proposes the concept of the "Energy Wall" to highlight the significance of achieving scalable performance in peta/exascale supercomputing by taking energy consumption into account. We quantify the effect of energy consumption on scalability by building an energy-efficiency speedup model, which integrates computing performance and system energy. We define the energy wall quantitatively, provide a theorem on the existence of the energy wall, and categorize large-scale parallel computers according to their energy consumption. In the context of several representative types of HPC applications, we analyze and extrapolate the existence of the energy wall considering three kinds of topologies, 3D-Torus, binary n-cube and fat tree, which provides insights on how to mitigate the energy-wall effect in system design and through hardware/software optimization in peta/exascale supercomputing.
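    The paper's exact model isn't reproduced here, so take this as a hedged toy sketch of what an "energy wall" looks like under one plausible reading of that abstract: energy-efficiency speedup as parallel speedup divided by the growth in total energy, with the wall at the point where it stops improving:

    ```python
    # Toy energy-wall model (my assumptions: Amdahl workload, constant
    # power per node). Not the paper's formulation.

    def speedup(n, serial_fraction=0.01):
        """Plain Amdahl speedup on n nodes."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

    def energy(n, watts_per_node=300.0):
        """Runtime shrinks with speedup, but every node burns power
        for the whole run (network, DRAM refresh, idle cores)."""
        return n * watts_per_node / speedup(n)

    def ee_speedup(n):
        return speedup(n) / (energy(n) / energy(1))

    best = max(range(1, 10_001), key=ee_speedup)
    print(f"energy-efficiency speedup peaks around n = {best} nodes")
    # Past the peak, extra nodes still add FLOPS but cost more energy per
    # unit of delivered performance -- that's the wall.
    ```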

    Then around 2016 something changed.... European funding grants being one of them.. That led to new prototype architectures, and the beginning of the Nvidia-ARM alliance... The same ARM that Nvidia is trying to buy for $40B :-)

    This project and the research leading to these results has received funding from the European Community’s Seventh Framework Programme [FP7/2007-2013] under grant agreement no 288777. Part of this work receives support by the PRACE project (European Community funding under grants RI-261557 and RI-283493).

    https://hotcopper.com.au/data/attachments/3291/3291707-0dd382eb50ac3a8895c5872f81e9ffff.jpg

    If we account for an upcoming quad-core ARM Cortex-A15 SoC, it could achieve an energy efficiency similar to current GPU systems (1.8 GFLOPS/W). This would provide the same energy efficiency, but using a homogeneous multicore architecture, which appears as a very competitive solution for the next generation of high performance computing systems.
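    Quick sanity check on that 1.8 GFLOPS/W figure (the arithmetic is mine): competitive with GPU systems of the day, but still a long way short of an exascale power budget:

    ```python
    exa = 1e18   # FLOPS
    eff = 1.8e9  # FLOPS per watt, as quoted above
    print(f"1 exaflop at 1.8 GFLOPS/W: {exa / eff / 1e6:.0f} MW")  # ~556 MW
    ```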

    And around the same time, a poor broke-ass research and development company started getting some coverage....

    TMT Analytics, 1 Sep 2016:
    Given 4DS’ limited financial resources, we believe the JDA with HGST in the past two years has been instrumental in driving the technology to where it is today. Under the terms of the JDA, HGST has the option to purchase a non-exclusive license to use 4DS's technology for up to 20 years. Additionally, HGST has a right of notification of any third party acquisition or financing proposal.


    And we all know where the Netherlands is right?
    https://hotcopper.com.au/data/attachments/3291/3291723-c14ec8384dd6716bca669913f2247dc4.jpg
    Brian Wang | October 22, 2016
    Any licensee will need to further develop the technology in the next several years to the point where high density, Interface Switching ReRAM memory chips can be manufactured in existing fabs, which we would expect around 2019-2020.
    This development process will require tens of millions of dollars, possibly up to US$100M, in our view, which is why 4DS will likely not embark on this journey, at least not by itself.

    Anyway, back to the power equation, cause by now supercomputing is becoming the in thing...
    https://hotcopper.com.au/data/attachments/3291/3291731-71c7b6d98be4b40dfe19e164bda8984f.jpg
    https://hotcopper.com.au/data/attachments/3291/3291737-e96c0a251ab29ff9eafe013231d7930a.jpg

    Tesla supercomputer performance targets 2016..
    That is a total of 512 GB of main memory (with 120 GB/sec of bandwidth, according to specs provided by IBM earlier this year), and if Nvidia can reach its original goal of 32 GB of HBM memory per GPU accelerator that it hoped to hit with Pascal on the Voltas, that works out to 192 GB of HBM memory with 6 TB/sec of bandwidth. That is a lot more than the 64 GB of HBM memory and aggregate 2.8 TB/sec of GPU memory bandwidth in the current “Minsky” Power Systems LC precursor to the Summit’s Witherspoon node. There is another 800 GB of non-volatile memory in the Summit node, and we are pretty sure it is not Intel’s 3D XPoint memory and we would guess it is flash capacity (probably NVM-Express drives) from Seagate Technologies, but Oak Ridge has not said. The math works with this scenario: with 512 GB of DDR4 main memory, a total of 192 GB of HBM memory on the GPUs and 800 GB of flash??, across 4,600 nodes that is a total of 6.9 PB of aggregate memory. (By the way, that chart has an error. The “Titan” supercomputer has 32 GB of DDR3 memory plus 6 GB of GDDR5 memory per node to reach a total of 693 TB of aggregate memory.)
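    The quoted memory math checks out once the units are straightened; here's a quick sketch (the 18,688 Titan node count is from public specs, not from the quote):

    ```python
    # Per-node and aggregate memory for the quoted Summit and Titan configs.

    summit_node_gb = 512 + 192 + 800      # DDR4 + HBM + non-volatile, per node
    summit_total_pb = summit_node_gb * 4600 / 1e6
    print(f"Summit: {summit_node_gb} GB/node -> {summit_total_pb:.1f} PB aggregate")
    # -> 1504 GB/node, ~6.9 PB (petabytes, not terabytes)

    titan_node_gb = 32 + 6                # DDR3 + GDDR5, per node
    print(f"Titan: {titan_node_gb * 18688 / 1024:.1f} TiB aggregate")  # ~693.5
    ```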

    Nope, nothin to see here best move along now.... Que?

    At that 50 teraflops of performance per node, which we think is doable if the feeds and speeds for Volta work out, that is a 230 petaflops cluster peak, and if the performance of the Volta GPUs can be pushed to an aggregate of 54.5 teraflops, then we are talking about crossing through 250 petaflops – a quarter of the way to exascale. And this is also a massive machine that could, in theory, run 4,600 neural network training runs side-by-side for machine learning workloads (we are not saying it will), but at the half precision math used in machine learning, that is above an exaflops of aggregate compute capacity.
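    Same arithmetic for the compute side (mine, just confirming the quote):

    ```python
    nodes = 4600
    for tf_per_node in (50.0, 54.5):
        pf = nodes * tf_per_node * 1e12 / 1e15
        print(f"{tf_per_node} TF/node x {nodes} nodes = {pf:.0f} PF peak")
    # -> 230 PF and ~251 PF: "a quarter of the way to exascale"
    ```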

    In 2011 the problem was visible... So was the solution

    https://hotcopper.com.au/data/attachments/3291/3291746-0b88bae09ae7b9753c5d7911265d5b34.jpg

    A 10^8 power saving is on offer to anyone who can stack?? And we are pretty sure that ain't 3DXP.. haha
    https://hotcopper.com.au/data/attachments/3291/3291749-1c17b11014b83e760375f848e8c895ad.jpg

    Yeah yeah 8tey we get it, ya love melodrama and talking with pictures and crap... What's this all about dude???

    Shaw & Partners, 28-10-2016, wrote:

    Competitive next generation data centre storage technology hasn’t yet been commercialised. Interface Switching ReRAM is an emerging, non-volatile memory technology, unique to 4DS, that can potentially be scaled down to even smaller sizes, essential for data centres and cloud storage. We believe that 4DS is the only non-filamentary ReRAM company with a global partner. The long lead time to develop such technology (10+ years) suggests that it is unlikely that there will be a short term challenge to the 4DS technology, which has been in development over the past decade, as the next generation non-volatile memory solution forecast to take US$6-7b share of the US$40b Flash memory market by 2020.

    As perhaps the only non-filamentary product capable of fulfilling the needs of the next-gen supercomputers for the next 10 years, which way do you think any Global Partner is going to lay their bet...??

    Especially if you're planning on flying rubber dildoes to the moon any time soon..

    Nvidia Market Cap History... ho hum, ho hum, then 2016: WTF just happened... ?? Now worth $459B

    https://hotcopper.com.au/data/attachments/3291/3291785-f799534f370fcdfd7b20874e0dab3272.jpg

    Tesla Market Cap History: the 2020 meteoric rise coincides with the Nvidia uplift.
    https://hotcopper.com.au/data/attachments/3291/3291777-ebe058067a54143f6e4e5ca9ac4c16c7.jpg
    So does the new Supercomputer range????
    https://hotcopper.com.au/data/attachments/3291/3291794-df027fa03ca9dad8f6d51482066b5b94.jpg

    Now this would all just be a crock of horse shite were it not for two things...

    1. Elon Musk looks further forward, and sees fewer hurdles, than nearly anyone else on the planet, and he has deep pockets and a real big need for neural networks and storage capacity..

    2. 4DS, at the Nov presentation last year announcing the Dr had come on board and everything was getting ready to rumble, posted this on page 18 of that release.
    https://hotcopper.com.au/data/attachments/3291/3291806-f92d7796a9fad109c0df47282760a99e.jpg

    TESLA AND NVIDIA WE KNOW LOVE AI, CARS AND SHORT WALKS UP CAPITAL HILL $$$$$
    https://hotcopper.com.au/data/attachments/3291/3291810-fe5b5a300adfcfc90e3a9fd4cc3c5978.jpg
    THE OTHER MOB SEEMS TOTALLY RANDOM, DOESN'T FIT THE PICTURE AT ALL & ADDS LITTLE ($30B) TO THE OTHER COMBINED ($1.1T) REASONS TO THINK THAT IT DOES??

    Actually yeah, ya right... Waste of time, I'll shut up... I'm off fishing with Nutbag, somethin's nibblin' I heard.. Latr, 8T