DDR Dicker Data Limited

The hardware may be different on a token level, but LLMs and diffusion models are almost entirely run in hosted services. This is where the demand for NVIDIA is coming from - data centres, not office PCs. Right now and for the near future, running useful AI models locally is either prohibitively expensive or annoyingly slow. I agree that this will probably drive demand for higher-spec machines that can run LLMs locally, but that's going to happen gradually through many cycles... the way hardware has always iteratively improved. As far as I can tell, Microsoft and Intel's definition of an "AI PC" is a machine with an NPU (which it doesn't need to use), a copy of Copilot installed (which runs remotely), and a Copilot key on the keyboard next to the Windows key.


 
(20 min delay)
Last: $7.88    Change: 0.020 (0.25%)    Mkt cap: $1.425B
Open: $7.88    High: $7.93    Low: $7.84    Value: $1.837M    Volume: 232.9K

Buyers (Bids)
No.   Vol.   Price($)
19    2548   7.87

Sellers (Offers)
Price($)   Vol.   No.
7.89       2253   14
Last trade: 15:40 24/06/2025 (20 minute delay)
DDR (ASX) Chart