The latest AI – known as generative pre-trained transformers (GPT) – promises to utterly transform the geopolitics of war and deterrence. It will do so in ways that are not necessarily comforting, and which may even prove existential.
On the one hand, this technology could make war less lethal and possibly strengthen deterrence. Dramatically expanding the role of AI-directed drones in air forces, navies and armies could spare human lives.
Already, the Pentagon is experimenting with AI bots that can fly a modified F-16 fighter jet, and Russia has been testing autonomous tank-like vehicles. China is rushing to roll out its own AI-run systems, and the effectiveness of armed drones will only grow in coming years. One of the largest, though still nascent, efforts to advance military AI is a secretive US Air Force program, Next Generation Air Dominance, under which some 1000 drone “wingmen”, called collaborative combat aircraft, would operate alongside 200 piloted planes.
“I can easily imagine a future in which drones outnumber people in the armed forces pretty considerably,” says Douglas Shaw, senior advisor at the Nuclear Threat Initiative.
According to retired US Air Force general Charles Wald, “That’ll be a force multiplier. One of the biggest problems right now is recruiting.”
On the other hand, AI-driven software could lead the major powers to compress their decision-making window from hours or days to minutes. They could come to depend far too much on AI for strategic and tactical assessments, even when it comes to nuclear war. The danger, says Herbert Lin of Stanford University, is that decision-makers could gradually come to rely on AI in the command and control of weaponry, because it operates at speeds far beyond human capacity.
Establish limits
In a book published this year, AI and the Bomb, James Johnson of the University of Aberdeen imagines an accidental nuclear war in the East China Sea in 2025 precipitated by AI-driven intelligence on both the US and Chinese sides, and “turbo-charged by AI-enabled bots, deepfakes and false-flag operations”.
“The real problem is how little it takes to convince people that something is sentient, when all GPT amounts to is a sophisticated auto-complete,” says Lin, a cybersecurity expert who serves on the Science and Security Board of the Bulletin of the Atomic Scientists. Given the hype surrounding AI, Lin says, “when people start to believe that machines are thinking, they’re more likely to do crazy things”.
In a report published in early February, the Arms Control Association says that AI and other new technologies, such as hypersonic missiles, could result in “blurring the distinction between a conventional and nuclear attack”. The report says that the scramble to “exploit emerging technologies for military use has accelerated at a much faster pace than efforts to assess the dangers they pose and to establish limits on their use. It is essential, then, to slow the pace of weaponising these technologies, to carefully weigh the risks in doing so, and to adopt meaningful restraints on their military use.”
US officials have said they are doing so, but they may be navigating a slippery slope. This January, the Defence Department updated its directive on weapons systems involving the use of artificial intelligence, saying that at least some human judgment must be used in developing and deploying autonomous weapon systems. At the same time, however, the Pentagon is experimenting with AI to integrate decision-making from all service branches and multiple combatant commands. And with the Biden administration cracking down on high-tech exports to China (especially advanced semiconductors) in order to maintain the current US lead in AI, the Pentagon is likely to accelerate those efforts.
Wald says, “I do think that AI will help with target prioritisation. This could prove useful in the strategy against China, which has a home-field advantage over the US, given the vast distances in the Pacific that could interfere with a co-ordinated response to an attack [on Taiwan].”
In a 2019 speech, Lieutenant General Jack Shanahan, the former director of the Pentagon’s Joint Artificial Intelligence Centre, said that while the Defence Department was eagerly pursuing “integration of AI capabilities”, this would definitely not include nuclear command and control. Shanahan added that he could imagine a role for AI in determining how to use lethal force – once a human decision is made.
“I’m not going to go straight to ‘lethal autonomous weapons systems’,” he said, “but I do want to say we will use artificial intelligence in our weapons systems … to give us a competitive advantage. It’s to save lives and help deter war from happening in the first place.”
The question is whether China and Russia, along with other actors, will follow the same rules as Washington.
“I don’t believe the US is going to go down the path of allowing things … where you don’t have human control,” Wald says. “But I’m not sure somebody else might not do that. In the wrong hands, I think the biggest concern would be allowing this machine or entity too much latitude.”
Another concern is that advanced AI technology could allow rogue actors such as terrorists to gain the knowledge needed to build dirty bombs or other lethal devices. And AI is now in the hands of far more actors than comparable technology was during the Cold War, meaning it could be used to detect nuclear arms sites, reducing the deterrent effect of keeping their locations secret.
“AI will change the dynamic of hiding and finding things,” says Shaw, who notes that much of the data today is held by private companies that might be vulnerable to AI-driven espionage and probing of weapons systems.
Ruler of the world
The open letter, which called for a pause on the development of the most powerful AI systems, was the latest evidence of what can only be called a widespread panic since ChatGPT appeared on the scene last year and major tech companies scrambled to introduce their own AI systems with so-called human-competitive intelligence.
The issues at stake, the letter said, were fundamental to human civilisation: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?”
But the key line was this one: if the companies that fund AI labs don’t agree to such a pause, then “governments should step in and institute a moratorium”.
As far back as 2017, though, Russian President Vladimir Putin declared that “the one who becomes the leader in this sphere [AI] will be the ruler of the world”, and that future wars will be decided “when one party’s drones are destroyed by drones of another”.