PATHWAYS
& IDEAS

Delivering the benefits of AI responsibly and equitably

While risks naturally featured prominently, the roundtables also surfaced many ideas for seizing and sharing the benefits of AI. Many of the individual roundtable reports include detailed, context-specific ideas. The following themes recurred throughout the discussions and serve as examples of practical insights for further exploration and action.

Equitable data is a prerequisite for equitable AI

Data is crucial to the training, testing and deployment of AI models. While more data is being generated than ever before, the global data landscape is highly unequal, and this inequality will shape the impact AI has on society when deployed. The governance of data - its collection, quality, robustness, representativeness and readiness - must be recognised as fundamental to the effective governance of AI.

Participants emphasised the need for all stakeholders to understand the complexities surrounding the equitable use and collection of data, including concepts of data access and ownership, data hygiene, data quality and robustness, and bias, fairness and representation in datasets. To meet the opportunities and needs of societies as AI advances, frameworks for governing data and enabling transparency and access may need more than updating - they may need to be reimagined at a fundamental level. 

Specifically, opportunities were identified to: 

  • Develop standards and institutions to organise, document and share data effectively (see the sketch after this list for one illustration of what such documentation could look like).

  • Incentivise transparency and reporting requirements to build public trust. 

  • Invest in AI literacy and skills to develop a new generation of data practitioners, especially in the Global South – and support subsidies for lower-income communities to enter the field of AI.

  • Incentivise talent into the less glamorous “service layer” of data architecture and management.
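To make “documenting and sharing data effectively” concrete, the sketch below shows what a minimal, machine-readable datasheet for a dataset could look like, alongside a simple check for the representation gaps raised in the discussions. It is a minimal illustration loosely inspired by the ‘Datasheets for Datasets’ proposal; the field names, the representation_gaps helper and all example values are illustrative assumptions, not a standard endorsed by the roundtables.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Minimal, machine-readable documentation for a dataset.
    Illustrative only; real documentation standards define richer schemas."""
    name: str
    purpose: str                # why the data was collected
    collection_method: str      # how, and from whom, it was gathered
    licence: str                # terms of access and reuse
    steward_contact: str        # who answers questions about the data
    known_gaps: list[str] = field(default_factory=list)  # documented blind spots

def representation_gaps(counts: dict[str, int],
                        reference_shares: dict[str, float],
                        tolerance: float = 0.05) -> list[str]:
    """Flag groups whose share of the dataset deviates from a reference
    population share by more than `tolerance` (a hypothetical helper)."""
    total = sum(counts.values())
    flags = []
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            flags.append(f"{group}: {observed:.0%} of dataset vs {expected:.0%} of population")
    return flags

# Illustrative usage with made-up numbers.
sheet = Datasheet(
    name="crop-yield-survey-2023",
    purpose="Train yield-forecasting models for smallholder farms",
    collection_method="Field surveys, collected with informed consent",
    licence="CC BY-NC 4.0",
    steward_contact="data-steward@example.org",
)
sheet.known_gaps = representation_gaps(
    counts={"region_north": 800, "region_south": 200},
    reference_shares={"region_north": 0.5, "region_south": 0.5},
)
print(sheet.known_gaps)
# ['region_north: 80% of dataset vs 50% of population',
#  'region_south: 20% of dataset vs 50% of population']
```

The design choice worth noting is that gaps are recorded on the datasheet rather than silently corrected: documentation of this kind makes a dataset’s limitations visible to downstream users instead of hiding them.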

AI-related expertise and skills are needed across all sectors of society 

The need to support, develop and evenly distribute AI-related expertise and skills featured prominently throughout the roundtables. AI literacy will be an essential factor in realising the potential of AI, mitigating the risks associated with its use and building public confidence in the technology. The prevailing sentiment was that the centre of gravity of AI expertise currently rests too heavily in the private sector, and that it is critical to distribute this expertise more widely.

AI expertise and skills are not limited to coding and computer science. Increasingly, the development and responsible deployment of AI systems will require skills in social science, business, humanities and data analysis, as well as the ability to collaborate across disciplines. What and how much people need to understand about AI will differ across sectors and roles, but shared definitions and language will be needed to facilitate collaboration and build equitable systems.

The widely recognised need to expand the diversity of the global talent pool, including by supporting universities and companies in the Global South to attract and retain local talent, was also highlighted throughout the conversations. It is important to attract people from a diversity of backgrounds to work in AI, including in the building of datasets, and to embed practitioners within communities to facilitate equitable data collection and use.

Broader participation in the development of AI systems is crucial, albeit hard to do at scale

To understand how AI might deliver benefits across sectors, it’s essential to look beyond AI companies and government and engage with a wide range of stakeholders across the private sector, civil society and academia. One participant quoted the disability rights motto, “Nothing about us, without us.”

The challenge is how to do this effectively at scale in a way that ensures people are recognised and compensated for their contributions. One roundtable participant reflected that an essential question is: “How might we create the right market to scale solutions at lower costs for participating in AI?”⁸ Throughout the roundtables, several design principles emerged for guiding inclusive engagement: 

  • Communities who will be most impacted by the development and deployment of AI must have a voice in the conversation. For example, if an AI tool for supporting students in classrooms is rolled out, students, parents and teachers should all be consulted as part of the development process, as each has a unique perspective on learning outcomes. Institutions that collect data must explain to the people whose data they collect how the resulting services will benefit them.

  • It will be increasingly important to include thinkers and designers who are skilled in imagining and analysing future implications, in order to mitigate short-term thinking.

  • Policies that support the adoption of a ‘sandbox’ approach will encourage swift and thoughtful experimentation with the use of AI in different sectors.

The challenges and opportunities of participatory AI have been experienced and discussed among civil society and community organisations for several years, and explored in scholarship including the 2022 paper ‘Power to the People? Opportunities and Challenges for Participatory AI’ by Abeba Birhane et al.

8 Gro Intelligence, ‘AI and the Future of Food Security’.

“The changing landscape necessitates identifying and developing the complementary skills and values essential for effective interaction with AI”

RSA, AI and Future of Learning Roundtable Report

“The AI industry has an opportunity to bring individuals from these communities (and those who may have multiple, or intersectional identities) together to acknowledge the various aspects and needs of the human experience. By creating a more inclusive design process, AI tools can become more inclusive and resonant for the users they serve”. 

Claypool Consulting, Using AI to Improve Employment Outcomes for People with Disabilities Roundtable Report

Building public trust in AI is essential for delivering benefits at scale

At several of the roundtables, participants from various national governments explicitly asked for ideas and frameworks to address the governance challenges, and economic and societal opportunities, of AI. While there were no definitive answers, participants made valuable suggestions: 

  • There is scope to lay out a clear vision of how AI could be used, providing the public with a sense of where we are now, where we are headed, how AI could be a tool to help us get there and how risks are being managed. During the CSET roundtable, participants noted that “most debates about the future of AI are anchored in current technologies—such as today’s LLM-based chatbots—but lack a clear sense of which tools or capabilities might bridge the gap between the present and the future.”⁹

  • Information asymmetries must be addressed through accountability and transparency measures. As the Institute for Advanced Study AI Policy and Governance Working Group (AIPGWG) shared, “designers and deployers of AI must demonstrate that their products are safe and effective—and therefore merit the public’s trust—through iterative accountability mechanisms that span the full development and deployment lifecycle and address risks related to both highly specialised and more general purpose AI systems.”

  • There was a perceived opportunity to corral efforts and energy into the development of best practices and standards in multiple areas, from the responsible collection and governance of data to effective red-teaming and evaluation. Rather than aiming for homogeneity of systems and standards, interoperability should be the priority.

  • Evidence-based stories about the benefits of AI in society are needed to build public trust. There is significant hope that AI could help solve some of the world’s greatest challenges, for example around climate and public health, but participants shared the need to know and hear about a) the positive impacts already being realised, and b) what can realistically be expected in the future. There is a role for all stakeholders to play in commissioning evidence-based research, including scientometric, economic and social impact analyses of how AI is being used in various fields and what impacts it is having.

9 Centre for Security and Emerging Technology, ‘Skating to Where the Puck is Going: Anticipating and Managing Risks from Frontier AI Systems’.

Agile governance is needed for long-term ecosystem alignment and accountability

Alignment on governance and AI accountability measures, including evaluation, access and disclosure processes, was called for by participants from across sectors and groups. The need for accountability measures to be dynamic and iterative, to respond to emergent risks and opportunities, was emphasised strongly. To this end, it will be essential to understand the policy levers at different stages of the AI development lifecycle, as shown in the table overleaf from the CSET roundtable. It is equally important to consider the roles that industry, academia, civil society and the public sector can play at each of these stages. Levers include:

  • Iterative and sociotechnical accountability mechanisms: As highlighted by the AIPGWG, “responsibility for accountability in the design and deployment of AI systems and tools must begin with technology developers, but industry, academia, civil society, and the public sector each have a key role to play in the development of an effective AI accountability system.”

  • Incentive structures to optimise for safety and benefits. AI labs can be incentivised to test and evaluate their frontier AI systems and report dangerous capabilities to oversight bodies. Likewise, incentives can promote the testing of education tools in a sandbox before they are used in schools. In the case of AI labs, as discussed in the CSET roundtable, incentives could be explored via procurement requirements, establishing industry certifications for frontier AI systems and loosening liability in exchange for transparency.

  • Keeping pace with developments in a fast-moving field will continue to be a challenge. Regular updates of mechanisms, models, evaluations and audits will be needed to anticipate and meet new risks. In this regard, there may be lessons to be taken from the field of cybersecurity and the mechanisms that the sector has in place globally to keep pace with the development of new capabilities. 

  • Advocating for an interoperable governance framework would allow countries, regions and regulatory bodies to work effectively together. As recommended by the AIPGWG, “Policy interoperability also enables jurisdictions to set their own policy priorities – in line with local needs and the specific context relevant for determining thresholds for fairness, responsibility, and safety – while still aligning with a globally recognized set of core commitments and accountability and safety mechanisms.”

  • International institutions may have an important role to play in enabling effective global governance and ensuring advanced AI systems benefit humanity, as highlighted in the paper on International Institutions for Advanced AI that was presented at the Brookings Institution roundtable. When it comes to the possible establishment of a new institution for governing AI, a range of governance functions could be performed at an international level to address key governance challenges, from supporting access to frontier AI systems to setting international safety standards.

  • Greater access to and distribution of infrastructure, such as compute, data centres and cloud infrastructure, will help ensure that power is not overly concentrated within industry, and that more businesses and communities can use and benefit from AI-enabled technologies. An example of this is the UK’s Future of Compute Review.

  • Listening as well as sharing expertise. While demonstrating the capabilities, opportunities and risks of emergent systems will continue to be a core responsibility of AI companies and labs, the obligation to listen to and collaborate with outside experts and affected communities should be prioritised too.

DEFINING FRONTIER MODELS

A common definition for frontier models and an understanding of the associated risks are still being established. Absent a precise definition, the ‘AI research frontier’ or ‘frontier models’ may be thought of as referring to AI models that are comparable to or slightly beyond the current cutting edge.¹¹

11 Centre for Security and Emerging Technology, ‘Skating to Where the Puck is Going: Anticipating and Managing Risks from Frontier AI Systems’.