Drone News

    The latest AI – known as generative pre-trained transformers (GPT) – promises to utterly transform the geopolitics of war and deterrence. It will do so in ways that are not necessarily comforting, and which may even turn existential.

    On one hand, this technology could make war less lethal and possibly strengthen deterrence. By dramatically expanding the role of AI-directed drones in air forces, navies and armies, human lives could be spared.

    Already, the Pentagon is experimenting with AI bots that can fly a modified F-16 fighter jet, and Russia has been testing autonomous tank-like vehicles. China is rushing to roll out its own AI-run systems, and the effectiveness of armed drones will also take off in coming years. One of the largest, although still nascent, efforts to advance military AI is a secretive US Air Force program, Next Generation Air Dominance, under which some 1000 drone “wingmen”, called collaborative combat aircraft, would operate alongside 200 piloted planes.

    “I can easily imagine a future in which drones outnumber people in the armed forces pretty considerably,” says Douglas Shaw, senior advisor at the Nuclear Threat Initiative.

    According to retired US Air Force general Charles Wald, “That’ll be a force multiplier. One of the biggest problems right now is recruiting.”

    On the other hand, AI-driven software could lead the major powers to cut down their decision-making window to minutes instead of hours or days. They could come to depend far too much on AI strategic and tactical assessments, even when it comes to nuclear war. The danger, says Herbert Lin of Stanford University, is that decision-makers could gradually rely on the new AI as part of command and control of weaponry, since it operates at vastly greater speeds than people can.

    Establish limits

    In a book published this year, AI and the Bomb, James Johnson of the University of Aberdeen imagines an accidental nuclear war in the East China Sea in 2025 precipitated by AI-driven intelligence on both the US and Chinese sides, and “turbo-charged by AI-enabled bots, deepfakes and false-flag operations”.

    “The real problem is how little it takes to convince people that something is sentient, when all GPT amounts to is a sophisticated auto-complete,” says Lin, a cybersecurity expert who serves on the Science and Security Board of the Bulletin of the Atomic Scientists. Given AI’s propensity to hyperbole, Lin says, “when people start to believe that machines are thinking, they’re more likely to do crazy things”.
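    To make the “auto-complete” point concrete: at bottom, a GPT-style model repeatedly predicts a likely next token given the text so far. The toy sketch below (a hypothetical bigram model in Python, not a real GPT) shows that greedy extend-by-one-token loop; nothing in it understands anything, which is Lin’s point about mistaking fluency for thought.

        # Illustrative toy only: a bigram "auto-complete" that greedily picks the
        # most frequent next word. A real GPT learns next-token probabilities
        # over a huge vocabulary, but the generation loop is conceptually similar.
        from collections import Counter, defaultdict

        corpus = "the drone flies the drone flies the pilot lands".split()

        # Count which word follows which (hand-rolled bigram statistics).
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def autocomplete(prompt_word, steps=4):
            """Greedily extend the prompt one most-likely token at a time."""
            out = [prompt_word]
            for _ in range(steps):
                options = following.get(out[-1])
                if not options:  # dead end: no observed continuation
                    break
                out.append(options.most_common(1)[0][0])
            return " ".join(out)

        print(autocomplete("the"))  # -> "the drone flies the drone"

    However fluent the output of a large model looks, it is produced by the same kind of ranking of continuations, not by deliberation.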

    In a report published in early February, the Arms Control Association says that AI and other new technologies, such as hypersonic missiles, could result in “blurring the distinction between a conventional and nuclear attack.” The report says that the scramble to “exploit emerging technologies for military use has accelerated at a much faster pace than efforts to assess the dangers they pose and to establish limits on their use. It is essential, then, to slow the pace of weaponising these technologies, to carefully weigh the risks in doing so, and to adopt meaningful restraints on their military use.”

    US officials have said they are doing so, but they may be navigating a slippery slope. This January, the Defence Department updated its directive on weapons systems involving the use of artificial intelligence, saying that at least some human judgment must be used in developing and deploying autonomous weapon systems. At the same time, however, the Pentagon is experimenting with AI to integrate decision-making from all service branches and multiple combatant commands. And with the Biden administration cracking down on high-tech exports to China (especially advanced semiconductors) in order to maintain the current US lead in AI, the Pentagon is likely to accelerate those efforts.

    Wald says, “I do think that AI will help with target prioritisation. This could prove useful in the strategy against China, which holds a home-field advantage over the US, given the vast distances in the Pacific that could interfere with a co-ordinated response to an attack [on Taiwan].”

    In a 2019 speech, Lieutenant General Jack Shanahan, the former director of the Pentagon’s Joint Artificial Intelligence Centre, said that while the Defence Department was eagerly pursuing “integration of AI capabilities”, this would definitely not include nuclear command and control. Shanahan added that he could imagine a role for AI in determining how to use lethal force – once a human decision is made.

    “I’m not going to go straight to ‘lethal autonomous weapons systems’,” he said, “but I do want to say we will use artificial intelligence in our weapons systems … to give us a competitive advantage. It’s to save lives and help deter war from happening in the first place.”

    The question is whether the Chinese and Russians, along with third parties, will follow the same rules as Washington.

    “I don’t believe the US is going to go down the path of allowing things … where you don’t have human control,” Wald says. “But I’m not sure somebody else might not do that. In the wrong hands, I think the biggest concern would be allowing this machine or entity too much latitude.”

    Another concern is that advanced AI technology could allow rogue actors such as terrorists to gain the knowledge to build dirty bombs or other lethal devices. And AI is now shared by far more actors than nuclear technology was during the Cold War, meaning it could be used to detect nuclear arms sites, reducing the deterrent effect of keeping their locations secret.

    “AI will change the dynamic of hiding and finding things,” says Shaw, who notes that much of the data today is held by private companies that might be vulnerable to AI-driven espionage and probing of weapons systems.


    Ruler of the world

    An open letter calling for a pause on training the most powerful AI systems was the latest evidence of what can only be called a widespread panic since ChatGPT appeared on the scene last year and major tech companies scrambled to introduce their own AI systems with so-called human-competitive intelligence.

    The issues at stake, the letter said, were fundamental to human civilisation: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?”

    But the key line was this one: if the companies that fund AI labs don’t agree to such a pause, then “governments should step in and institute a moratorium”.

    As far back as 2017, though, Russian President Vladimir Putin declared that “the one who becomes the leader in this sphere [AI] will be the ruler of the world”, and that future wars will be decided “when one party’s drones are destroyed by drones of another”.

 