
    ...Since 2000, I believe humanity has been on a course toward self-destruction.
    ...While the internet made sharing information globally useful, social media took sharing over the top and proved it can be as divisive and distracting as it is helpful for individuals trying to stand out.
    ...AI, on the other hand, is likely to be even more of a net negative, as there is a good chance we do not truly know the extent of its adverse outcomes for humanity, in terms of employment, concentration of power, and security interests.


    At an exclusive event of world leaders, Paul Tudor Jones says a top AI leader warned everyone:

    “It's going to take an accident where 50 to 100 million people die to make the world take the threat of this really seriously… I'm buying 100 acres in the Midwest, cattle, chickens."

    He says ALL of the top AI leaders present agreed there was a 10% chance of more than 50% of humanity being exterminated by AI.

    (SIDEBAR: These are INSANE levels of risk, especially coming from insiders *in* the field, which is totally unprecedented - industry always downplays risks to avoid oversight - and we would NEVER accept risk levels like this in other fields. Don't become numb to this insanity. Civil engineers build mere *pedestrian bridges* capable of supporting "one in a million" type events. Imagine a bridge with all 8 billion people on it - and the engineers themselves say it has a 1 in 10 chance of collapsing!)

    --------- Andrew Ross Sorkin: I was going to ask you about the stock market itself, but then you just said something to me which makes me a little bit nervous, which is your focus is less on that right this moment than you are about artificial intelligence. What do you mean?

    Paul Tudor Jones: Well, let me just say: I was minding my business—minding my business. I went to this tech conference about two weeks ago out West, and I just want to share with you what I learned there. Chatham House Rules, so we can talk about the content. It was a small one—forty notables, but real notables, like household names that you would recognize: the leaders in finance, politics, science, tech. The panel that disturbed me the most—is that AI clearly poses an imminent threat, a security threat—imminent in our lifetimes—to humanity. And that—that was the one that really, really got me.

    Sorkin: When you say “imminent threat,” what do you mean?

    Jones: So I’ll get to it. They had a panel of, again, four of the leading tech experts, and about halfway through someone asked them on AI security, “Well, what are you doing on AI security?” And they said, “The competitive dynamic is so intense among the companies, and then geopolitically between Russia and China, that there’s no agency—no ability to stop and say, ‘Maybe we should think about what we’re actually creating and building here.’” And so then the follow-up question is, “Well, what are you doing about it?” He said, “Well, I’m buying a hundred acres in the Midwest, I’m getting cattle and chickens, and I’m laying in provisions.”

    Sorkin: For real?

    Jones: For real, for real. And that was obviously a little disconcerting. And then he went on to say, “I think it’s going to take an accident where fifty to a hundred million people die to make the world take the threat of this really seriously.” Well, that was—that was freaky-deaky to me.

    And no one pushed back on him on that panel. And then afterwards we had a breakout session, which was really interesting. All forty people got up in a room like this, and they had a series of propositions and you had to either agree with or disagree with the proposition. And one of the propositions was: “There’s a ten-percent chance in the next twenty years that AI will kill fifty percent of humanity.”

    So there’s a 10% chance that AI will kill 50% of humanity in the next twenty years—agree or disagree. So I’d say the vast majority of the room moved to the disagree side. Elon Musk said there’s a 20% chance that AI will annihilate humanity. Now I know why he wants to go to Mars, right? And so about six or seven of us went to the agree side. And I’d gone there because of what I’d heard Elon Musk say, who’s maybe the most brilliant engineer of our time. All four modelers were on the agree side of that—all four of the leading developers of the AI models were on that side. And then we debated—then the two sides got to debate—and one of the modelers says to the disagree side, “If you don’t think there’s a 10% chance, as fast as these models are growing and how quickly they’re commoditizing knowledge, how easily they’re making it accessible, that someone couldn’t make a bioweapon that could take out half of humanity, I don’t know, 10% seems… seems reasonable to me.”

    Sorkin: Okay, so thank you for bringing us great, great news over breakfast.

    Jones: I’m not a tech expert, but I’ve spent my whole life managing risk. And we just have to realize, to their credit, all these folks in AI are telling us we’re creating something that’s really dangerous. It’s going to be really great, too, but we’re helpless to do anything about it. That’s, to their credit, what they’re telling us, and yet we’re doing nothing right now, and it’s really disturbing.

    https://x.com/AISafetyMemes/status/1919835820187357569
 