Gatehouse

Gatehouse Commentary

Artificial Intelligence (AI): the Geopolitical Implications

March 2023

AI has arrived. Applications of it are proliferating, and seminars on it abound. The opportunities and potential benefits of this powerful instrument are huge. But the wider consequences for a competitive world and its societies need careful examination. This Comment raises some of them.

1) Governments and regulators have no hope of keeping up with the mind-boggling progress being made by the developers of AI. In the West, the companies involved are professing high levels of responsibility and accountability, but their commercial interests lead. Self-regulation, without industry-wide accountability and a strong civil society, will not be sufficient. The public sector cannot compete in salary terms for the best engineers and practitioners.

2) There is little sign of international coordination or cooperation in setting guidelines and safeguards on the application of AI. Localised efforts are highly divergent: the EU’s proposed ‘AI Act’ takes a precautionary stance, while the US favours a more laissez-faire model. Competition between different standards regimes is bound to develop. If the international community is woefully behind on regulating a) Space and b) the Internet, how can it hope to catch up on AI?

3) Oversight is already posing problems for the advanced democracies, but it will be absent, or closely linked to political control, in the autocracies. Computing power (often overlooked in favour of algorithmic sophistication and data availability) will increasingly become a factor in global rivalries. The evolution of the US-China relationship will be indicative here. At present, neither country has a clear lead in the technologies involved, but the ingredients of computing power, especially superchips, involve supply chains which may have to be fought over, not least as regards Taiwan.

4) The use of AI in weapons development will already have generated massive programmes in all the major military powers. Even a small technological advantage in a vital area can have an immediate battlefield impact. The war in Ukraine has touched the edges of this, but the scope for escalation, if left uncontrolled, appears unfathomable.

5) The prospect of AI in the hands of non-state actors raises obvious worries, probably shared even by governments otherwise in competition with each other. Will this and other mutual interests stimulate progress towards the collective establishment of some systematic constraints?

6) If a vacuum of order develops over such a powerful tool of free action as AI, malign actors can be expected to take advantage of it ahead of benign ones, because their motivation is stronger. This could be at the national level, or in the fields of organised crime, extortion, embezzlement, money-laundering, cyber attacks – all the way down to individual criminal opportunism. The increasing digitalisation of our business and personal lives, combined with the power of AI, vastly increases the ‘surface area’ for attacks.

7) There is anyway a larger problem for open societies in that AI appears to have no inbuilt attachment to the truth. Given the varied quality of information to be found across the internet, which is AI’s main catchment area, its capacity to propagate inaccurate and misleading data is unlimited. ‘Deep Fakes’ portend a new, dystopian era of disinformation.

The seminars I and my colleagues have so far attended on AI development have failed to come close to answering some of these questions. Some observers feel that Western nations have shown complacency in their assumption that their R&D and industrial strength will keep them ahead in the AI offence-defence race. But China and Russia have already moved at a sharper pace into hybrid and non-kinetic warfare precisely because they sense their disadvantages in conventional warfare. They are constantly raising their capacity to disrupt democratic processes, scatter lies, gum up systems and feed prejudices. They will want to apply AI in this area. See 4) above on small advantages, which can apply as much to shadowy as to open military conflict, and which can potentially change the facts of a situation in a flash.

As the control of nuclear arsenals grows more problematic, with the apparent termination of the START treaties, so the prospect of international action to forestall the application of AI to the largest weapons systems becomes dimmer. I sense a blindness to the lessons of history. Compromises, even with your adversary, cost less than the unimaginable damage to be caused by unrestrained conflict.

There may be no need for governments to be as proficient at the technology as the state-of-the-art developers. They need to focus, nationally and internationally, on the application of the law, and set high bars of accountability for the practitioners. Complex, yes; but what is required now is urgent action. Otherwise there will be a highly dangerous free-for-all.

PS: I did not ask ChatGPT to write this piece – but I would say that, wouldn’t I.
