CONCLUSION

If the development and deployment of AI systems and frontier models are steered responsibly and in positive directions, the collective benefits for society will be huge.

Hearing the perspectives of diverse practitioners and communities proved extremely valuable - sometimes in surprising ways - as this report has aimed to show. Policy debates can easily be dominated by the priorities and perspectives of those with the strongest voices and biggest platforms. But clearly, the hopes and fears of different communities relating to AI are not homogeneous. In some parts of the world, labour market displacement is a more pressing concern than safety risks. Among populations that tend to be seen - justifiably - as at risk of exclusion from the benefits of AI in Global North-centric debates, there is actually a great deal of hope and excitement about AI. And while representation is rightly a key focus of most debates about data, there are also valid reasons why certain groups and individuals may want to exercise their right not to have their data used. These are just a few examples of perspectives that provide food for thought and demonstrate that questioning assumptions is a crucial foundational step in inclusive policy making.

The pace of development in AI technologies means that policy discovery and development must evolve quickly, too. This set of roundtables was the beginning of a process, with the next phase to be informed by the insights generated and the nuanced discussions sparked by these exploratory conversations. We are committed to continuing to work with civil society and community-led organisations to ensure they have a voice in fast-moving conversations that will impact them and the people they represent.

Some of the most important outcomes from these roundtables will stem from the connections they facilitated. A number of participants are now exploring collaborations, which we look forward to seeing develop. Creating alignment and a shared understanding of certain key terms and processes will help strengthen the collective capacity of the ecosystem to operationalise and honour requirements like the White House Commitments. To this end, we plan to convene further conversations, as well as targeted working groups, to establish the definitions and standards needed to enable progress towards safe and beneficial AI. We further plan to develop and support initiatives that address some of the key opportunities and challenges that emerged from these conversations, in partnership with civil society, the academic community, policymakers and our industry peers.

We hope that by sharing this report publicly, we can spark continued conversations and catalyse collective action towards inclusive policies that support equitable AI.

Authors

Eimear Nolan is Policy & Public Engagement Manager at Google DeepMind

Rachel Foley is Policy & Public Engagement Manager at Google DeepMind

Acknowledgements

We would like to acknowledge Lucy Lim, Lewis Ho and Dorothy Chou from Google DeepMind and Jo Sparber, Melissa Hinkley, Aidan Peppin and Malcolm Glenn for their editorial support. Thank you to Studio La Plage for their design support. Finally, we would like to thank all of the organisations and roundtable participants for their contributions to the discussions and individual roundtable reports that this summary has drawn from.