What once belonged to the imagination of science fiction films is now rapidly becoming a reality on modern battlefields. So-called “killer robots”, formally known as Lethal Autonomous Weapon Systems (LAWS), are emerging as a transformative and controversial development in warfare. These weapons are capable of identifying, selecting and attacking targets without direct human control, triggering an intense international debate over both their technological implications and ethical consequences.
At the centre of the controversy is the idea that machines may soon be making life-and-death decisions. More than 30 countries have called for a complete ban on such systems, arguing that the risks posed by autonomous weapons go far beyond technological concerns and raise profound moral questions.
Autonomous combat drones represent one of the most prominent forms of these systems. Designed to function independently once activated, they perform three key battlefield tasks on their own: locating targets, identifying them, and launching an attack. While humans may initiate their deployment, subsequent decisions are made by onboard algorithms powered by artificial intelligence.
These weapons are generally classified into three categories based on the level of human control involved. In the first category, the system requires explicit human approval before it can engage a target. In the second, the machine can make the decision to attack, though a human operator retains the ability to intervene and halt the strike. The third category, considered the most controversial, operates entirely without human oversight. These fully autonomous systems represent the true concept of LAWS.
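In the policy debate, these three categories are often labelled "human in the loop", "human on the loop" and "human out of the loop". A minimal sketch of the distinction, with names invented for illustration:

```python
from enum import Enum

class AutonomyLevel(Enum):
    """The three levels of human control described above (illustrative labels)."""
    HUMAN_IN_THE_LOOP = 1      # explicit human approval required before engagement
    HUMAN_ON_THE_LOOP = 2      # machine decides; a human can intervene and halt
    HUMAN_OUT_OF_THE_LOOP = 3  # fully autonomous, no human oversight

def requires_human_approval(level: AutonomyLevel) -> bool:
    """Only the first category waits for an explicit go-ahead."""
    return level is AutonomyLevel.HUMAN_IN_THE_LOOP
```

Only systems in the third category meet the strict definition of LAWS that the ban proposals target.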
The effectiveness of such weapons lies in their advanced artificial intelligence and sensor technologies. Autonomous drones are equipped with multiple sensors that continuously monitor their surroundings. LiDAR systems use laser pulses to generate detailed 3D maps of terrain, while thermal cameras can detect human body heat even in darkness. Radar systems allow the drone to detect movement and activity over long distances. By combining data from these sensors, the drone builds a comprehensive real-time understanding of its environment.
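The core idea behind combining these sensors can be illustrated with a toy fusion routine: readings from different sensors that point at the same location are merged into a single contact. The clustering radius and data types here are invented for illustration; real fusion pipelines are far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str       # e.g. "lidar", "thermal", "radar" (hypothetical labels)
    position: tuple   # (x, y) in metres, in an assumed local frame
    confidence: float # detection confidence in [0, 1]

def fuse_readings(readings, radius=5.0):
    """Merge readings that fall within `radius` metres of each other,
    averaging confidences -- a crude stand-in for real sensor fusion."""
    clusters = []
    for r in readings:
        for c in clusters:
            cx, cy = c["position"]
            if (r.position[0] - cx) ** 2 + (r.position[1] - cy) ** 2 <= radius ** 2:
                c["sources"].add(r.source)
                c["confidence"] = (c["confidence"] + r.confidence) / 2
                break
        else:  # no nearby cluster found: start a new one
            clusters.append({"position": r.position,
                             "sources": {r.source},
                             "confidence": r.confidence})
    return clusters
```

A contact confirmed by several independent sensors (here, appearing in the same cluster) is what gives the system its "comprehensive real-time understanding" of the scene.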
This information is then processed by AI-driven deep learning systems trained on vast datasets containing millions of images and scenarios. Within milliseconds, the system analyses visual data to determine whether a detected object is a soldier, a military vehicle or a civilian presence.
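The final step of any such image classifier, whatever the network architecture, is converting raw scores into class probabilities and picking the most likely label. A minimal sketch, with the class labels taken from the article's description and the scores assumed to come from a trained network:

```python
import math

# Hypothetical class labels, following the article's description.
LABELS = ["soldier", "military_vehicle", "civilian"]

def softmax(scores):
    """Convert raw network outputs into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(raw_scores):
    """Return the highest-probability label and its probability."""
    probs = softmax(raw_scores)
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    return LABELS[best], probs[best]
```

The controversy arises precisely because a probability, however high, is still a statistical guess rather than a verified identification.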
The most contentious stage follows: the decision to attack. Known as the threat-assessment algorithm, this process evaluates factors such as the perceived danger posed by the target, the military value of the objective and the presence of nearby individuals. Critics argue that, unlike human decision-making, such calculations do not incorporate ethical judgement.
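In its simplest conceivable form, such an assessment is just a weighted sum over the factors the article lists. The weights and penalty below are invented purely for illustration; the point critics make is that whatever the numbers, the result is arithmetic, not judgement.

```python
def threat_score(danger, military_value, civilians_nearby,
                 w_danger=0.5, w_value=0.3, penalty=0.4):
    """Toy weighted score over the three factors named in the text.
    All inputs are assumed to lie in [0, 1]; weights are illustrative.
    Note: nothing in this calculation encodes ethical judgement."""
    score = w_danger * danger + w_value * military_value
    score -= penalty * civilians_nearby  # nearby people reduce the score
    return max(0.0, min(1.0, score))     # clamp to [0, 1]
```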
Autonomous drones also rely on sophisticated navigation technologies. Using GPS and systems such as Simultaneous Localisation and Mapping (SLAM), they can map unfamiliar environments and chart their own paths. Even if GPS signals are jammed during electronic warfare, these drones can continue navigating using onboard sensors.
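The fallback behaviour under GPS jamming can be sketched as simple dead reckoning: when no satellite fix arrives, the drone extrapolates its position from onboard velocity estimates. This is a minimal stand-in for SLAM, which additionally builds and matches a map of the surroundings.

```python
def navigate(position, velocity, dt, gps_fix=None):
    """If a GPS fix is available, trust it; otherwise dead-reckon
    from onboard odometry -- a crude stand-in for SLAM navigation."""
    if gps_fix is not None:
        return gps_fix
    return (position[0] + velocity[0] * dt,
            position[1] + velocity[1] * dt)
```

Dead reckoning drifts over time, which is exactly why real systems correct it continuously against the map SLAM maintains.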
In some cases, autonomous drones are deployed collectively through swarm technology. Multiple drones operate as a network, sharing data instantly with one another. Information detected by one drone can be transmitted to the entire swarm, allowing coordinated actions across dozens or even hundreds of machines.
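The data-sharing behaviour described above amounts to a broadcast: one member's detection is propagated to every other member. A minimal sketch, with class names invented for illustration:

```python
class Drone:
    def __init__(self, name):
        self.name = name
        self.known_contacts = []  # everything this drone has learned of

class Swarm:
    """Toy mesh network: any report is rebroadcast to every member."""
    def __init__(self, drones):
        self.drones = drones

    def report(self, contact):
        """Share a newly detected contact with the whole swarm."""
        for d in self.drones:
            if contact not in d.known_contacts:
                d.known_contacts.append(contact)
```

In a real swarm the mesh is decentralised and lossy; the sketch simply shows why one detection can coordinate the actions of the entire group.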
Many of these systems employ loitering munitions, drones that can hover over an area for extended periods before diving into their target and detonating. Others are designed to launch missiles or bombs, while some are capable of electronic warfare operations such as jamming enemy communication or radar systems.
Source: World News, news18.com