APX (Appen Limited) $1.87 +3.89%

Top 10 AI predictions and breakthroughs for 2024

    Going back to the headline topic of this thread, I see a lot of speculation about breakthroughs in AI and their possible implications for the world in general and Appen in particular. But the bulk of the discussion seems to be based on the assumption that AI research has actually produced something that you could call "intelligent". Let me explain by summarising the process as it stands for all forms of AI:
    1. Human(s) decide on a desirable result.
    2. Human(s) choose the data that will be used to train a model.
    3. Human(s) prepare the training and test data sets (collate, annotate, categorise, etc).
    4. Human(s) train and test model(s).
    5. Human(s) decide whether the results satisfy their goal (point 1), whether to refine their work (return to point 2), or whether to give up.
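    As a toy illustration of the five human-driven steps above (all data and the "model" here are invented for the example; no real AI system is implied), the whole loop fits in a few lines:

```python
def train(samples):
    """Step 4: "train" by picking the threshold that best splits the labels."""
    best_t, best_acc = 0.0, -1.0
    for t in [s[0] for s in samples]:
        acc = sum((x >= t) == y for x, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Step 1: humans decide the desired result -- label values True ("high") or False ("low").
# Steps 2-3: humans choose, collate, and annotate the training and test data.
train_data = [(0.1, False), (0.3, False), (0.7, True), (0.9, True)]
test_data = [(0.2, False), (0.8, True)]

threshold = train(train_data)  # Step 4: train and test the model.
accuracy = sum((x >= threshold) == y for x, y in test_data) / len(test_data)

# Step 5: humans judge whether the result satisfies the goal (here, test
# accuracy), refine the data and retrain, or give up.
print("threshold:", threshold, "test accuracy:", accuracy)
```

    Every decision point in that loop is a human one; the "model" is just the number the humans' procedure settled on.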

    So where in this process is there anything resembling any kind of intelligence other than human intelligence? Nowhere. It's just a different kind of programming, and just as prone to "bugs" as programming in a computer language. The saying from the 1960s, "garbage in, garbage out", applies just as much to AI today as it did to COBOL back then. We've seen thousands of instances of AI "getting it wrong". But in all these cases it's not the AI at fault, it's the humans: they chose unsuitable training data, they made invalid decisions about how to use the predictions obtained from their trained models, they didn't consult the right stakeholders, they were unethical or irresponsible, they lied or obfuscated about what their technology actually does.

    One of the more insidious aspects of AI models is that they give the appearance of intelligence while obscuring the process by which results are obtained from inputs. A generative text model (like ChatGPT) will give you answers that could credibly be attributed to a person (i.e. an intelligent agent) in response to a prompt that you might give to a person. If you were talking to a person, you would be justified in concluding that the person "understood" your prompt, "thought about" the issue, and responded with a "solution". And because we are accustomed to interacting with humans, we project this picture onto the model we are interacting with. But the truth is that none of these things is actually happening: no "understanding", no "thinking", no "solving". Data is extracted from the prompt, an algorithm is executed based on that data, and other data is produced and returned.
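    To make that last sentence concrete, here is a toy bigram text generator. It is vastly simpler than any real LLM, and not how ChatGPT works internally, but it has the same mechanical shape: counting, lookup, and sampling, with no "understanding" anywhere:

```python
import random

# "Training" data: a tiny made-up corpus (illustrative only).
corpus = "the model predicts the next word the model emits the next word".split()

# Build a table: for each word, every word that followed it in the corpus.
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def generate(seed, n, rng):
    """Emit n more words by repeatedly sampling a recorded successor of the last word."""
    out = [seed]
    for _ in range(n):
        out.append(rng.choice(table.get(out[-1], corpus)))
    return " ".join(out)

print(generate("the", 6, random.Random(0)))
```

    The output can look sentence-like, but every step is data extraction and algorithm execution, exactly as described above.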

    This appearance of intelligence is producing some seriously exaggerated expectations of where AI research actually is, what is coming, and how soon. One is that AI systems will automatically learn from experience or expand their knowledge over time. Another is that AI will suddenly develop independent aims and decide to pursue them. Neither of these things can occur with current technology.

    From my perspective, the one big breakthrough would be if somebody were to develop a system that is demonstrably actually intelligent!

    Until then, I feel it would be a mistake to pin a business's future on something that hasn't been developed yet (AGI), built on a technology that hasn't really delivered on its core promise (the I in AI).
 
APX (ASX) market snapshot (20 minute delay), last trade 16:10 26/09/2024:
Last $1.87 · Change +$0.070 (+3.89%) · Mkt cap $417.1M
Open $1.84 · High $1.94 · Low $1.82 · Value $23.27M · Volume 12.34M
Buyers (bids): 5 orders, 35,375 shares at $1.87
Sellers (offers): 102,996 shares at $1.87, 5 orders