Pathways & Ideas

Delivering the benefits of AI responsibly and equitably

While risks were naturally a prominent theme, the roundtables also surfaced many ideas for seizing and sharing the benefits of AI. Many of the individual roundtable reports include detailed, context-specific ideas. The following themes recurred throughout the discussions and serve as examples of practical insights for further exploration and action.

Equitable data is a prerequisite for equitable AI

Data is crucial to the training, testing and deployment of AI models. While more data is being generated than ever before, the global data landscape remains highly unequal, and this inequality shapes the impact AI will have on society when it is deployed. The governance of data - its collection, quality, robustness, representativeness and readiness - must be recognised as fundamental to the effective governance of AI.

Participants emphasised the need for all stakeholders to understand the complexities surrounding the equitable use and collection of data, including concepts of data access and ownership, data hygiene, data quality and robustness, and bias, fairness and representation in datasets. To meet the opportunities and needs of societies as AI advances, frameworks for governing data and enabling transparency and access may need more than updating - they may need to be reimagined at a fundamental level. 

Specifically, opportunities were identified to: 

AI-related expertise and skills are needed across all sectors of society 

The need to support, develop and evenly distribute AI-related expertise and skills featured prominently throughout the roundtables. AI literacy will be an essential factor in realising the potential of AI, mitigating the risks associated with its use and building public confidence in the technology. The prevailing sentiment was that the centre of gravity of AI expertise currently rests too heavily in the private sector, and that it is critical for this expertise to be distributed more widely.

AI expertise and skills are not limited to coding and computer science. Increasingly, the development and responsible deployment of AI systems will require skills in social science, business, humanities, and data analysis, as well as the ability to collaborate across disciplines. What and how much people need to understand about AI will differ across sectors and roles, but shared definitions and language will be needed to facilitate collaboration and build equitable systems. For example: 

The widely recognised need to expand the diversity of the global talent pool, including by supporting universities and companies in the Global South to attract and retain local talent, was also highlighted throughout the conversations. It is important to attract people from a diversity of backgrounds to work in AI, including in the building of datasets, and to embed practitioners within communities to facilitate equitable data collection and use.

Broader participation in the development of AI systems is crucial, albeit hard to do at scale

To understand how AI might deliver benefits across sectors, it’s essential to look beyond AI companies and government and engage with a wide range of stakeholders across the private sector, civil society and academia. One participant quoted the disability rights motto, “Nothing about us, without us.”

The challenge is how to do this effectively at scale in a way that ensures people are recognised and compensated for their contributions. One roundtable participant reflected that an essential question is: “How might we create the right market to scale solutions at lower costs for participating in AI?”⁸ Throughout the roundtables, several design principles emerged for guiding inclusive engagement: 

The challenges and opportunities of participatory AI have been experienced and discussed among civil society and community organisations for several years, and explored in scholarship including the 2022 paper, ‘Power to the People? Opportunities and Challenges for Participatory AI’ by Abeba Birhane and colleagues.

8 Gro Intelligence, ‘AI and the Future of Food Security’

“The changing landscape necessitates identifying and developing the complementary skills and values essential for effective interaction with AI”

RSA, AI and Future of Learning Roundtable Report

“The AI industry has an opportunity to bring individuals from these communities (and those who may have multiple, or intersectional identities) together to acknowledge the various aspects and needs of the human experience. By creating a more inclusive design process, AI tools can become more inclusive and resonant for the users they serve”

Claypool Consulting, Using AI to Improve Employment Outcomes for People with Disabilities Roundtable Report

Building public trust in AI is essential for delivering benefits at scale

At several of the roundtables, participants from various national governments explicitly asked for ideas and frameworks to address the governance challenges, and economic and societal opportunities, of AI. While there were no definitive answers, participants made valuable suggestions: 

9 Centre for Security and Emerging Technology, ‘Skating to Where the Puck is Going: Anticipating and Managing Risks from Frontier AI Systems’.

Agile governance is needed for long-term ecosystem alignment and accountability

Participants from across sectors and groups called for alignment on governance and AI accountability measures, including evaluation, access and disclosure processes. They strongly emphasised that accountability measures must be dynamic and iterative, able to respond to emergent risks and opportunities. To this end, it will be essential to understand the policy levers available at different stages of the AI development lifecycle, as shown in the table overleaf from the CSET roundtable. It is equally important to consider the roles that industry, academia, civil society and the public sector can play at each of these stages. Levers include:

DEFINING FRONTIER MODELS

A common definition for frontier models and an understanding of the associated risks are still being established. Absent a precise definition, the ‘AI research frontier’ or ‘frontier models’ may be thought of as referring to AI models that are comparable to or slightly beyond the current cutting edge.¹¹

11 Centre for Security and Emerging Technology, ‘Skating to Where the Puck is Going: Anticipating and Managing Risks from Frontier AI Systems’.