Dicker Data Limited (ASX: DDR)

The reason is that while a single prediction from a large neural network takes roughly the same amount of compute regardless of whether it's for training or end use, the difference is that you need to run squidillions of predictions for training and only one or a handful for end use. So while massively parallel banks of processors (i.e. GPUs, i.e. NVidia) are critical for training models, they make no discernible difference when using the model after training is complete.

So yes, "we could just use Intel chips to run AI libraries locally".
 