The term 'hallucinations' refers to large language models confidently producing plausible but incorrect answers. Yes, large language models will need time to become more reliable in the answers they come up with.

However, if you train a digital human on a specific data set, e.g. all of a company's employee training manuals, or every piece of published peer-reviewed research on a particular illness, then you're much more likely to get correct answers, because the model draws on a vetted corpus rather than the open internet. Plus a digital human can retain far more information than a person and therefore potentially make far fewer mistakes. A rough sketch of the idea is below.
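Here's a minimal sketch, in Python, of what answering 'grounded' on a fixed data set could look like. Everything in it is illustrative: the tiny corpus, the keyword-overlap retrieval (a real system would use embedding search), and the refusal message are my assumptions, not how UNITH's digital humans actually work.

```python
# Grounded Q&A sketch: the assistant may only answer from a fixed, trusted
# corpus (e.g. employee training manuals) and refuses when nothing matches.

CORPUS = {
    "leave-policy": "Employees accrue 20 days of annual leave per year.",
    "expenses": "Expense claims must be lodged within 30 days with receipts.",
}

def retrieve(question: str, corpus: dict, min_overlap: int = 2) -> list:
    """Return passages sharing at least min_overlap words with the question.
    Keyword overlap stands in for the embedding search a real system would use."""
    q_words = set(question.lower().split())
    return [
        text for text in corpus.values()
        if len(q_words & set(text.lower().split())) >= min_overlap
    ]

def answer_from_corpus(question: str) -> str:
    passages = retrieve(question, CORPUS)
    if not passages:
        # Refusing beats hallucinating: no source passage, no answer.
        return "I don't have that in my training material."
    # A production system would hand the retrieved passages to the language
    # model as the only context it may answer from; here we just return the
    # supporting passage itself.
    return passages[0]

print(answer_from_corpus("How many days of annual leave do employees accrue?"))
print(answer_from_corpus("What is the share price today?"))
```

The refusal branch is the whole trick: a model that is only allowed to answer from the vetted documents, and says so when it can't, has far less room to hallucinate.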



 