The WildCat robot, manufactured by U.S. robotics company Boston Dynamics, is the world's fastest free-running robot with four legs, capable of running at 32 kilometres per hour. Photo: Boston Dynamics

Project launched at Korean institute to develop AI weapons

Controversy remains on whether autonomous arms are really necessary

By Jun Ji-hye

Hanwha Systems, South Korea’s leading defence company, and the state-run science research university Korea Advanced Institute of Science and Technology (KAIST) have launched a project to co-develop artificial intelligence (AI) technologies for military weapons, joining the global competition to develop autonomous arms.

The two parties recently opened a joint research centre at KAIST, where researchers from the university and Hanwha will carry out various studies into how technologies of the Fourth Industrial Revolution can be utilised on future battlefields.

Twenty-five KAIST researchers will participate in the centre, while the defence arm of Hanwha Group will dispatch its own researchers depending on the subject of research, according to a company official.

AI arms, which would search for and eliminate targets without human control, have been called the third revolution in warfare, after gunpowder and nuclear weapons.

Such weapons would include an AI-based missile that can control its speed and altitude on its own and detect an enemy radar fence in real time while in flight. AI-equipped unmanned submarines and armed quadcopters would also be among autonomous arms.


The Hanwha official said the joint research centre will focus on four priority tasks: developing an AI-based command system, an AI algorithm for an unmanned submarine’s navigation, an AI-based aviation training system and an AI-based object-tracking technique.

“Our goal is to complete such developments by the end of the year,” the official said.

Chang Si-kweon, CEO of Hanwha Systems, said his company is well prepared to lead the development of Fourth Industrial Revolution defence technologies, based on the advanced capabilities and achievements it has built up in defence electronics.

“We will make our full efforts and keenly cooperate with KAIST to provide innovative AI technologies to our customers,” he said. “We will also work to secure technology competitiveness in global markets.”


The Korean firm’s recent achievements include the development of an active electronically scanned array (AESA) radar. It will be supplied to the nation’s KF-X project, which aims to build 4.5-generation indigenous fighters by 2026 to replace the Air Force’s ageing fleet of F-4s and F-5s. The firm unveiled its first prototype in July.

Major countries such as the United States and Russia are already competing to develop AI weapons.


Washington has reportedly applied AI features to its new, long-range anti-ship missiles that will replace the Harpoon. For its part, the Pentagon is increasingly testing experimental AI technologies from drone swarms and ground robots to naval ships.

Russia’s Tass news agency reported the country is developing an AI-equipped guided missile. The CEO of Russia’s tech firm Kronstadt Group, Armen Isaakyan, also said last year Moscow was working to develop AI technologies to be applied to unmanned aerial vehicles that could one day form “swarms of drones.”

Still, whether autonomous arms are really necessary remains a controversial question.


Advocates for AI weapons say such arms can help reduce defence costs and casualties in warfare.

On the other hand, a growing number of scientists and technologists, including theoretical physicist Stephen Hawking and Tesla founder Elon Musk, are raising their voices against the development of AI weapons.

The objectors say AI weapons will eventually result in large-scale armed clashes between countries or cultures. They also warn that humans could lose control of AI arms at some point.


AI researchers attending the International Joint Conference on Artificial Intelligence (IJCAI), held in Buenos Aires in 2015, released an open letter at the opening of the conference, calling for a ban on offensive autonomous weapons beyond meaningful human control.

Noting that autonomous weapons could be developed relatively easily, unlike nuclear arms, which require a great deal of money and hard-to-obtain raw materials, the researchers said that once developed, such weapons “will become ubiquitous and cheap for all significant military powers to mass-produce.”

This would allow AI weapons to be traded on the black market and fall into the hands of terrorists and dictators, they said, adding that such weapons would be used for tasks such as “assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group.”

Mindful of such controversy, the United States maintains that human control must not be abandoned even as AI arms are developed: AI weapons may monitor and assess battlefield conditions, but only a human may press the button to attack.

Washington’s development of robots for military use is based on the concept of intelligence augmentation (IA), rather than AI. This also reflects the position that humans should make the final decision, while robots are used to conduct reconnaissance missions or dispose of explosives.

Fred Kennedy, deputy director of the Defense Advanced Research Projects Agency (DARPA) Tactical Technology Office, the Pentagon’s research arm, has made clear the Pentagon’s goal, saying, “Autonomy is going to be our asymmetric approach to how we fight,” according to