Demystifying the AI policy landscape

We’re going to begin with a series of posts on hot topics in AI policy, to provide some colour on developments that could have immense implications for businesses seeking to use AI as new rules come into force. First, here is a quick primer on the AI policy landscape for those new to the field.

 

Key players in AI policy

 

Governments and regulators have the most influence, because they ultimately set the rules. While there are some efforts (like the EU’s AI Act) to set AI-specific rules, it’s important to remember that AI is just a technology, so it is also bound by existing sector regulations - like those for finance or medicine. One thing sector regulators are therefore having to do is interpret and expand existing rules for AI applications in their domain.

Standards bodies have been working on AI-related topics for years, but now, linked to the emergence of regulation, the pace is picking up and more practical guidance is appearing, such as risk management frameworks. Intergovernmental bodies are also important forums for AI policy discussion when it comes to sharing best practices and trying to get alignment between different countries.

Finally, there is a wider group of influencers spanning academia, industry and civil society. While they have no direct power, they play an important role in catalysing the AI policy debate and injecting different perspectives and expert views into it.

 
 

For all involved, the overall goal is to maximise the benefits of AI while suitably mitigating the downsides. The main areas of focus are typically:

  • Boosting economic growth by supporting businesses in using AI to improve their productivity and quality

  • Ensuring that potential harms are sufficiently mitigated so that the use of AI is safe and doesn’t violate human rights (like the rights to privacy and equal treatment)

  • Harnessing AI to tackle the most difficult problems facing society - like climate change or improving healthcare

 

AI policy levers

 

Given these broad goals, the scope of AI policy extends well beyond regulation. It is also about creating an ecosystem that supports responsible innovation, research and development in the field.

Creating an environment that supports AI innovation and encourages AI uptake by businesses was the first focus for governments, with a myriad of national AI strategies emerging in 2018. Later, some governments also sought to impose requirements on public sector use of AI, in particular mandating risk and impact assessments (e.g., Canada, New Zealand) or issuing guidelines for public sector procurement and use of AI systems (e.g., UK, US). Right now, though, much of the focus has shifted to putting in place rules and frameworks to guide the private sector in the responsible use of AI.

 

Key areas of concern

 

In practice, “responsible AI” is really just a catch-all term for addressing the myriad concerns that people have. It’s worth delving into those concerns to understand the many aspects of the debate, and why AI governance has become such an important topic.

Fairness was the first concern to attract wide attention. The worry is that if the data used to train an AI model is not sufficiently diverse or is wrong (for example, based on inaccurate stereotypes), the model may have baked-in discrimination against certain demographics. Addressing this is not simply a matter of changing the data: downstream mitigations (e.g., filtering out bad responses) can be equally effective. Moreover, excluding data that contains toxic language or stereotyping from training means the resulting model cannot then be used as a moderation tool to detect such content.



There is also growing unease about the unfettered use of online data for model training, even when it is legal to do so (e.g., because the data was published under a CC licence, or the use is allowed under the fair use doctrine or some other text and data mining exemption). In terms of copyright, publishers and artists are concerned that their copyrighted work is being used to train AI without their permission or any compensation. In terms of privacy, one fear is that data hoovered up from the open internet might include sensitive private information, which could later be regurgitated in model output. There are also concerns from Data Protection Authorities about whether existing privacy rules are being violated. For example, in Italy ChatGPT was initially banned for violating the GDPR (the EU’s privacy regulation) by failing to notify users of its data collection practices, not doing enough to prevent children from using it, and having no legal justification for processing personal data.

From an environmental impact perspective, the concern is the scale of resources that AI systems consume, particularly energy, and the associated carbon emissions. This covers both the computation needed to create the model (training) and the computation needed to use the finished model in a product (in AI lingo this is called ‘inference’: running live data through the model to calculate an output). While this is improving with the advent of more powerful and energy-efficient chips, it remains a concern.

Concerns about transparency are not really about transparency for its own sake, but about having a helpful input for gauging how much trust to place in an AI system. Having some degree of explainability is helpful, but what is appropriate and meaningful in practice varies by context. The same applies to human oversight and mechanisms for redress. This is an area that needs much more research into both what is technically possible and what is useful.

There are also concerns relating to the wider impact of AI on society. Worrying about AI’s impact on jobs is not new, but we’re seeing a new wave of it driven by the arrival of unexpectedly capable generative AI. The fear is not so much about the need to adapt as about the rapid pace of job displacement, which could lead to massive disruption that society is not prepared for. Concerns about how to ensure that the goals of AI systems remain aligned with what society and humanity want are a key focus of the current debates around AI safety and existential risk. The worry is that AI capabilities may advance so far and so fast that they outstrip our mechanisms for controlling them.

Concerns about the potential for misuse by bad actors are the same as for any multipurpose technology: AI can be used for a myriad of malicious purposes just as easily as it can be used for good. As with the wider digital industry, there are also concerns about the balance of power. Some worry that AI development is controlled by just a handful of big technology firms, and that they may prioritise their capitalist self-interest over societal well-being. Others are more concerned about the wider stakes for geopolitical stability, seeing AI development as akin to a new space race.


Finally, as AI’s capabilities advance - and businesses become more savvy about using them - there are fears about the potential for manipulation. In particular, there is concern that AI could be used by marketers to profile and micro-target people, as well as to design “dark patterns” in user interfaces that manipulate people into acting against their best interests. These fears are amplified in the context of election campaigns, where AI-driven manipulation could increase societal polarisation and sway results - undermining the democratic process.
