
Ambitious plans for Artificial Intelligence in Defence

Image Source: MOD Science and Technology Portfolio Policy Paper.
Written 05.11.2022. Updated 01.05.2024.

The ascendancy of Artificial Intelligence (AI) is sparking new strategies, wider adoption, and fresh regulation, all of which are propelling its use in defence. From the release of large language models to developments in machine learning, AI’s applications are vast. 


What can AI do for defence? 

AI is set to transform the productivity and GDP potential of the global economy and, with its increasingly sophisticated learning abilities, will add $15.7 trillion (14% growth) to the global economy by 2030, according to PwC’s “Sizing the prize” paper. Research from the UK Government suggests that more than 1.3 million UK businesses will use AI by 2040, and that spending on AI will exceed £200 billion by the same date. 

The MOD’s Defence Artificial Intelligence Strategy (DAIS) suggests that the Russia-Ukraine war has accelerated the need to keep exploiting innovative concepts and cutting-edge technological advances, with AI at the forefront of these. As a result, AI is one of 25 technology programmes earmarked for a total of £6.6 billion in R&D investment over the next four years. 


How have AI initiatives evolved in the last few years? 


2021 - AI Rising

In 2021, the Royal Navy participated in a NATO exercise where AI was utilised on board ship for the first time. With aid from Dstl and industry, a Type 45 Destroyer and a Type 23 Frigate benefitted from the latest AI tech in their command centres. The test of AI in this scenario involved a supersonic missile threat, requiring a rapid threat assessment and recommended actions to be taken by the crew. 

Also in 2021, the British Army utilised AI during Exercise Spring Storm (Estonia) to provide environmental and terrain support, via automated smart analytics which cut through vast amounts of data. This form of AI was developed with Army training in mind, to support seamless integration into the battlefield environment. The Army went on to test AI tech with Adarga at the start of 2022 to supercharge the rapid development of key military digital tools fit for the new information age. 


2022 - AI Emerges

In June 2022, the MOD released the Defence AI Strategy, based on the 2021 Integrated Review, the National AI Strategy of 2021, and the Defence Command Paper 2021. The strategy outlined the types, uses, and ambitions it envisages for the technology, citing a need to ‘transform into an AI ready organisation.’ Its main aims were to assist decision-making, improve efficiencies, develop new capabilities, and empower our forces. 

In an MOD corporate report titled “The Science Inside 2022”, published in November of that year, AI was said to already be changing the battlefield, with the potential to make radical technical enhancements. The report breaks down the advantages of AI as: 

  • Machine scale analysis – removing the constraints of human-centric approaches to data analyses. 
  • Machine speed analysis – outpacing adversaries with faster decision-making in different environments. 
  • Fighting the information war – counteracting fake news by detecting and defeating conventional and algorithmically generated misinformation in real time. 
  • Increasing everyone’s impact – by enabling each human operator to safely control multiple autonomous platforms. 

Contract awards relating to AI increased in 2022, for various purposes. For example, Accrete won a contract from the Pentagon for AI threat detection software, Scale AI was awarded $250 million to give US Federal agencies access to the data that powers its AI, and Shield AI clinched a $60 million contract to give the US Air Force access to its full AI tech stack.

In November 2022, OpenAI, in partnership with Microsoft, released ChatGPT. This was the first viral large language model, and within one week it had attracted over a million users. Since then, the chatbot has seen updates and improvements, alongside wider support for commercial applications.  


2023 - Actioning AI

In March 2023, the UK unveiled its AI White Paper, focused on five key AI technology areas, and how to regulate them. This followed meetings Prime Minister Rishi Sunak held with industry, concerning the responsible governance, opportunities, and threats which AI presents. 

Two months later, the UK and US flagged their concerns over the rapid advances in the technology, with the UK’s Competition and Markets Authority looking into the potential for the spread of misinformation, and the White House advising industry of their responsibilities to safety. 

In October that year, UKRI (UK Research and Innovation) invested in funding for doctoral students focusing on AI at 16 universities. The aim was to help cultivate the next generation of AI researchers, crucial in an intense job market with increasing skills shortages. Also that month, the British Army released its approach to AI, seeking to reap the benefits of a competitive advantage and increased operational efficiency. The critical enablers of this approach are ‘people’, ‘process’, ‘technology’ and the ‘ecosystem’. 

In November 2023, the UK held the first AI Safety Summit, drawing major international attention. US Vice President Kamala Harris, Chinese Vice Technology Minister Wu Zhaohui, X and SpaceX CEO Elon Musk, UN Secretary General Antonio Guterres, OpenAI’s Sam Altman, and EU Commission President Ursula von der Leyen were all in attendance alongside other world and industry leaders. The summit saw consensus among many as to the potential dangers of AI, with the UK, US, Australia, EU, and China signing a joint declaration.

Palantir was a stand-out winner from AI contracts in 2023, winning $463 million to provide enterprise AI capabilities to the US Special Operations Command (USSOCOM), and another contract for $250 million for AI experimentation and research.


2024 - AI Advances

In kicking off what has already been a busy year for AI, the UK Government unveiled its ‘AI Safety Institute’ in mid-January. A Government publication notes that ‘Advanced AI systems have the potential to drive economic growth and productivity, boost health and wellbeing, improve public services, and increase security’. It went further to invite global best-practice and collaboration on the topic of AI, suggesting that the new organisation ‘will make its work available to the world, enabling an effective global response to the opportunities and risks of advanced AI’. 

In February, the much-anticipated Defence AI Playbook was released, following on from the 2022 Defence AI Strategy. The document reveals some major use cases for military AI, alongside efforts to collaborate with industry to cultivate a strategic advantage. This came as AI saw increasing use across the forces, with the Navy using AI to help keep ships at sea longer, the MOD testing AI-enabled assets, and the Army using AI to help combat skills and recruitment problems.  

In April, US forces conducted the first known dogfight in which a human pilot flew against an AI-controlled aircraft. The AI aircraft relied upon machine learning to respond during the test, which is said to have been an excellent use case for such an adaptive system. 

Responsible use of AI 

AI offers many advantages but also presents ethical challenges, many of which are heightened when applied to the high stakes defence context. These include, but are not limited to: the relative unpredictability of AI, responsibility gaps when delegating to autonomous systems, and potential reductions of human control. There is also a concern that AI could enable weapons to operate with no human involvement. 

Given the potential risks associated with the use of AI in defence, it is essential that the technology is developed and used responsibly. In adherence with UK, UN and international law, there must be effective, ethically considerate governance of AI technology. This part of the MOD’s AI strategy maintains that whilst the development and implementation of AI has broad potential, it must have effective oversight. For example, the MOD has stated that “we do not rule out incorporating AI within weapon systems”, but it is “very clear that there must be context-appropriate human involvement in weapons which identify, select and attack targets.” 

The strategy is intended to make the MOD the world’s most effective, efficient, trusted and influential defence organisation of its size in the AI field. The formation of the Defence AI Centre came a month after the publication of the MOD’s Science and Technology Portfolio, which declared the MOD’s vision to ‘exploit emerging AI technologies to drive solutions to defence’s key challenges’. Furthermore, the UK is at the forefront of an initiative to shape global regulatory and oversight standards for AI: the Alan Turing Institute and the National Physical Laboratory are together piloting this approach in an effort to increase the UK’s contribution to the development of global AI technical standards. 


The use of data-driven technologies in defence can facilitate better decision-making at pace, enhance the organisation of large and complex operations, and enable humans to be removed from dangerous roles through autonomy.
