The most interesting thing I've seen in a while is how the new AI Microsoft developed for materials research understands chemistry and applies it accurately -
as opposed to large language models (LLMs), which make things up and state false beliefs as fact (so-called hallucinations).
So it seems that when machines are trained on human opinions, stories and the like, they end up as stuffed up as humans are -
but when models are fed facts and proven theory, they get things right.
So, is there proof of this? Well, yes - the totally new material discovered by the Microsoft AI is proof that the approach works.
I relate LLMs to the internet - take the religion forum, absolutely chokka with hallucinations, as are the politics forum and social media generally.
I seriously doubt it's possible to have a human-interaction 'thing', for want of a better word, that can be controlled or trained for accuracy.
Gates thinks LLMs can be taught to stick to fact only. I suspect they can, but we're nowhere near that yet - and even if it can be done,
will that be a tool we can employ to rein in our own hallucination-spouting?
It will be a big day out if we can and we do employ it - because religion is a goner that day, as are many of the posts on HC.