Suicide drones which can identify and attack targets on their own. Robot dogs with guns. Fighter jets that can autonomously fly and fight.
If you thought military drones, controlled by soldiers sitting in dark rooms, were the future of warfare, think again.
Lethal autonomous weapons, more commonly called killer robots, or more sinisterly, slaughterbots, are on the rise. With artificial intelligence improving and growing rapidly, its use in warfare is increasing. We have already reached a point where multiple weapons are capable of attacking without human guidance or intervention.
One of these next-gen weapons is the KARGU rotary-wing strike drone made by Turkey's STM. It can reportedly fire at targets without a human operator's command, and according to a UN report, it has already been used on the battlefield in an autonomous attack on Khalifa Haftar's forces in Libya. The KARGU is a small, portable, rotary-wing kamikaze drone programmed with artificial intelligence and machine learning algorithms.
A similar UAV has been deployed by Russia in Ukraine - the Zala Lancet-3. It has autonomous target-seeking capability, using cameras to find targets without human guidance. It is equipped with a 3-kg warhead to destroy targets. These suicide drones fly into their target, exploding upon impact.
Going a step further, an American company is developing fighter jets with artificial intelligence. Calspan is fitting L-39 Albatros jets with AI systems to create aircraft that can even engage in aerial combat without human pilots. There is a plan to hold a live dogfight between four such jets in 2024.
Then there's the weaponised robot dog developed by SWORD International and Ghost Robotics. It's named SPUR, or Special Purpose Unmanned Rifle. It has an on-board sighting system and can be controlled via an app. There's a rifle on its back, and SPUR can remotely load and unload the first round of ammunition. It was displayed at the 2021 US Army trade show, and while reports suggest it is not yet autonomous, the robot dog seems to be one of the prime platforms for weaponised AI in the near future.
So how exactly do these autonomous weapons work?
These so-called 'slaughterbots' use artificial intelligence to identify and attack targets. The decision is made by algorithms, not human operators. The weapons are pre-programmed to attack specific 'target profiles'. The artificial intelligence system searches for the target profile using techniques such as facial recognition or object classification. If an object matches the target profile, an attack is launched.
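In rough pseudocode terms, the match-then-attack logic described above can be sketched as follows. This is a deliberately simplified illustration: every name, attribute, and threshold here is hypothetical, and it is not code from any actual weapon system.

```python
from dataclasses import dataclass

@dataclass
class TargetProfile:
    # Hypothetical pre-programmed profile: attributes the recogniser
    # must detect, plus a minimum recognition-confidence threshold.
    required_attributes: set
    min_confidence: float

def matches_profile(detected_attributes: set,
                    confidence: float,
                    profile: TargetProfile) -> bool:
    """Return True only if every required attribute was detected
    and the recogniser's confidence clears the threshold."""
    return (profile.required_attributes <= detected_attributes
            and confidence >= profile.min_confidence)

# Illustrative use: a detection of an armed vehicle at 0.92 confidence
profile = TargetProfile({"vehicle", "armed"}, min_confidence=0.9)
print(matches_profile({"vehicle", "armed", "moving"}, 0.92, profile))  # True
print(matches_profile({"vehicle"}, 0.95, profile))  # False: missing attribute
```

The key point the sketch makes is that the engage/no-engage decision reduces to an algorithmic test, with no human in the loop at the moment the match is found.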
Lethal autonomous weapons provide many advantages on the battlefield.
There is lower risk for human soldiers in conflict zones. Attack accuracy increases, and the risk of collateral damage is reduced. Some of these weapons do not depend on GPS, allowing them to keep operating even if a country's satellites are shot down by enemies.
However, the debate is on whether the risks outweigh the advantages.
Autonomous weapons can lead to large-scale violence, since the scale of an attack is no longer limited by the number of human soldiers available. A technical malfunction could also cause serious civilian casualties. During wars, soldiers on the ground can be held accountable, but machines cannot. These weapons also make it easier for aggressors to hide their identities. Another threat is that killer robots, armed with tools such as facial recognition, could make ethnic cleansing easier.
These threats have led many human rights and peace organisations to call for a complete ban on autonomous weapons. Some international discussions have been held on the issue.
In 2014, the United Nations organised an informal meeting of experts on the issue. In 2016, a decision was taken to establish a Group of Governmental Experts, or GGE. The next year, the GGE held its first meeting to assess questions on artificially intelligent weapons. In 2019, the UN Secretary-General called for a ban on autonomous arms. The same year, 11 guiding principles on AI weapons were adopted. In 2021, a UN meeting took place to set the agenda for the regulation of such arms.
However, an international treaty to prohibit or impose strict controls on autonomous weapons is unlikely. This is because major powers like the United States of America, Russia, Australia, Israel, and South Korea are reportedly against a new pact. But it is not impossible. After all, countries have come together in the past to ban or control widely-used weapons.
In 1970, the Non-Proliferation Treaty came into effect to prevent the spread of nuclear arms. A 2017 treaty bans nuclear weapons outright, but it is considered toothless since the US and other nuclear powers are not party to it. Cluster munitions are banned under the 2008 Oslo Convention. These are bombs that disperse submunitions such as grenades and mines over a large area. The 1997 Ottawa Convention bans anti-personnel landmines, prohibiting their use, stockpiling, production, and transfer.
For now, the 11 guiding principles adopted at the 2019 UN meeting seem to govern the use of autonomous weapons.
The principles say that international humanitarian law applies fully to all weapons, including autonomous systems. Human responsibility must be retained for decisions on weapon usage. Human-machine interaction must ensure compliance with the law. Accountability must be ensured, including through a human command-and-control chain. Countries must determine whether new weapon systems violate international law. Countries must consider the risk of weapon acquisition by terrorists and the threat of proliferation of such systems. Risk assessment and mitigation measures must be part of the design and use of these arms. Consideration must be given to compliance with all legal obligations. In crafting policies, autonomous arms should not be anthropomorphised. Meanwhile, UN discussions should not hamper the peaceful use of such technology. The final principle says that the UN Convention on Certain Conventional Weapons, or CCW, provides an appropriate framework for AI weapons.
Given human nature, our propensity for conflict, and the destructive potential of technology, countries are painting a grim future if they prioritise short-term military gains and fail to control the rise of killer robots.