Who would have thought machines would become intelligent enough to decide who to kill in a military zone and who to spare? Well, this is now closer to reality thanks to advances in artificial intelligence in the world of smart machines. Yes, that’s right! The US Army recently announced that it is developing a drone that can identify people and vehicles in a war zone and then decide which to attack using advanced artificial intelligence.
The Militarization of AI
Once fully developed and deployed, this technology could spell the future of military battles: they may be fought with autonomous machines. Of course, drones have been used in the military for some time, but until now even top drones like the MQ-9 Reaper have been only semi-automated, meaning that human intervention is still needed to control what the drones do on the battlefield.
The emergence of artificial intelligence in warfare is part of what is better known as the Internet of Intelligent Battle Things: a combination of networked intelligent systems operating with some degree of autonomy. Such autonomy is growing as machine learning and AI capabilities improve, a trend expected to accelerate in the coming years as computers become more capable. In fact, the military artificial intelligence market is expected to grow to USD 18.82 billion by 2025, with more money being pumped into R&D: https://www.researchandmarkets.com/research/j3cs86/global_artificial?w=5.
What People Have To Say
Drone experts believe that AI-powered drones will only be feasible in military zones if they can behave like humans. In other words, only if these drones can match the intelligence of human beings. Hugo Lopez, a drone expert and COO of Global Precision Surveys, is one of those who believe it would be difficult to know whether an autonomous military drone sticks to a set of parameters if there is no feedback loop to an operator: http://gps-uav.com/news.html. Moreover, he thinks there is a danger that a stray decision by an AI drone could spark unnecessary conflict, particularly in the political arena.
Now, he isn’t the only one who has expressed doubts about this type of innovation. There are other critics, especially in the corporate and academic worlds. Remember the boycott by AI researchers of a South Korean university over allegations that it was developing an AI weapons lab: https://www.reuters.com/article/us-tech-korea-boycott/researchers-to-boycott-south-korean-university-over-ai-weapons-work-idUSKCN1HB392
There was also the protest by some 3,100 Google employees over the company’s involvement in the Pentagon’s Project Maven. The project entails the use of computer algorithms in war zones, and the employees believed it was unethical for Google to go down this path: https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html.
However, those in support of developing AI in drones argue that it will not only reduce casualties and improve military efficiency but also help reduce psychological problems among military crews. Issues like mental trauma and post-traumatic stress disorder are common ailments among those who serve in the military, and autonomous drone strikes could reduce them by taking over the bombing runs and other tasks that human minds find hardest to bear. The flaw in this line of thinking, though, is that intelligence analysts would still be responsible for reviewing video footage, which can still take a toll on their minds.
That said, the application of AI in the military is very much in its infancy, and it remains to be seen how it will evolve in the coming years. Only time will tell whether the future of the military is artificial intelligence!