
Assessing the UK Government’s Approach to Artificial Intelligence

2017 was a year of geopolitical manoeuvring over the future of artificial intelligence (AI). Voicing what is likely a common sentiment amongst political leaders, the Russian president Vladimir Putin stated that “whoever becomes the leader in [AI] will become the ruler of the world.” China had its Sputnik moment with DeepMind’s AlphaGo, and went on to unveil an ambitious plan to gain global AI dominance by 2030. The United States had already launched its own strategic plan for AI in 2016, announced a working group, and commissioned a number of reports. Similar initiatives have been appearing in countries such as Canada, Singapore, Japan, and France. Across the international community, AI is being taken seriously as a key area of investment. How do the Conservative government’s Budget commitments compare to these efforts?

First, it must be made clear that the United Kingdom stands little chance of competing for world leadership in AI research. While its tradition of strong universities will likely enable it to continue to punch above its weight (as, for instance, Canada does), Britain simply does not have the resources to compete against China or the US. The city of Beijing alone is spending £1.6 billion to build an AI development park, while another Chinese city, Tianjin, has set up a £3.4 billion fund to foster AI research. By contrast, Philip Hammond earmarked a paltry £75 million specifically for investment in AI in his Autumn Budget – roughly 2% of Tianjin’s fund alone. The disparity here is remarkable, and only deepens when one considers China’s £110 billion national plan for AI. Given the Tories’ ideological resistance to supporting research and development, the possibility of improving the UK’s position appears woefully unlikely. Productivity growth – and the wage growth that typically accompanies it – looks set to remain stagnant.

What can be done then? If the UK cannot financially keep pace with the two world leaders in AI research, it can instead focus on areas that don’t fit into the narrow, short-term, and profit-oriented range of contemporary commercial AI. Despite its seemingly world-eating encroachment, modern AI is in fact exploring a relatively limited range of possibilities, determined more by matters of convenience than by principle. The emergence of big data, the development of graphics processing units (GPUs), and new research into algorithms have together revived machine learning (and especially deep learning) and turned it into the dominant paradigm. But this research remains limited in terms of the tasks it can accomplish (broadly, pattern recognition) and the applications it is used for (for instance, advertising). Rather than follow down this presently fruitful, but ultimately narrow, path, the UK would do better to chart its own course towards an AI that works for the common good.

What might this mean? First, it could mean funding an institution focused on analysing the societal impacts of AI. How can we support workers in light of AI’s impact on the changing landscape of work? How can bias in machine learning systems be avoided and addressed, particularly when they are used in sensitive areas such as policing, court judgments, and employment decisions? How can we ensure that companies which deploy AI do so in a way that doesn’t efface their responsibility and liability for potential accidents? And what is the best way to ensure that the productivity improvements of AI are spread widely and not funnelled up to the owners of capital? The all-too-likely outcome of unregulated AI is that it simply exacerbates existing inequalities and leaves more and more workers surplus to the economy. The UK has in fact proposed an advisory body to look into these issues – but an institution left in a toothless advisory role will do little to affect how firms act.

Second, the UK could seek to establish a world-leading legal framework for privacy and collective ownership of data, ensuring that the resource that fuels contemporary machine learning is controlled and owned by those who produce it. With current AI dominated by data-intensive approaches, the imperative for companies is to seek out more and more data in increasingly intrusive ways. We are building a surveillance infrastructure that funnels data into the hands of a few powerful platforms. The UK could lead in building resistance to these oppressive approaches.

Third, the UK could pioneer the development of AI for socially useful tasks, rather than privately profitable ones. For now, the vast majority of AI research is on tasks that companies can turn into profitable (and proprietary) functions. It is not difficult to see that the time, money, and intellectual resources spent on improving digital advertising would be much better spent on things like AI for macroeconomic planning, public health, and scientific research. If commercial AI inevitably focuses on short-term, profit-oriented tasks, the UK government might instead fund AI for long-term, socially useful ones.

Lastly, the government could establish a research funding body oriented explicitly towards research that is not immediately practical, but instead explores the possibilities of AI far beyond what the current machine learning paradigm allows. This would involve research into alternative modes of learning and into data-sparse forms of AI. It would involve attempts to develop more ‘general’ AI, with broader capabilities than systems pre-programmed for and trained on single, specific tasks. And, echoing Mariana Mazzucato’s arguments, the public should share in the eventual returns of this government-funded research. While capitalist firms chase short-term profits in narrow AI fields, the UK government could be building the foundations of future transformations.
