Measuring progress on the responsible use of AI in over 120 countries around the world
How do we measure the evolution of commitment and progress on the implementation of responsible AI principles and practice?
Regulation of Artificial Intelligence is the need of the hour
Regulation of Artificial Intelligence is the need of the hour. The release of the Blueprint for an AI Bill of Rights (AIBoR) by the US White House Office of Science and Technology Policy is therefore fitting. The Blueprint adds to global debates on AI governance by seeking to guide the design, use, and deployment of automated systems to protect the American people in the age of AI. Although it lacks legal enforcement, the Blueprint signals a deliberate move towards a US model for the governance of AI, one which acknowledges the role played by automated systems in widening existing patterns of discrimination and inequality. Like the proposed AI Act of the European Union, the AIBoR adopts a rights-based framework and sets out guidance to ensure the protection of rights through practices such as data minimization, privacy by design, and seeking consent from data subjects, along with an express deterrence of continuous surveillance in spaces such as work and housing, where its use limits rights, opportunities, and access. By arguing for policy guardrails to limit the perpetuation of such harms, the Blueprint is a major step forward by the US towards a rights-based regulation of AI.
So what are the implications of the Blueprint for Africans? If implemented, it could bring considerable benefit to the African continent. The embrace of AI in Africa has not been without its challenges, and the consequences of this technology have been felt through human rights violations, systemic discrimination, and deepened inequalities. This piece discusses the potential implications of the Blueprint for AI governance in Africa and for Africans, and highlights some of the challenges posed by AI and the ways the Blueprint could help address them.
Highlights of the AI Bill of Rights
The Blueprint promotes five principles for automated systems:
1. Automated systems should be safe and effective. Diverse communities should be consulted during their development, and systems should undergo testing, risk identification, and mitigation before deployment;
2. Automated systems should be designed and used equitably. Developers should protect users from algorithmic discrimination by implementing equity assessments, using representative data, and ensuring ongoing disparity testing and mitigation;
3. Users should be protected from abusive data practices and should have agency over how their data is used;
4. Users should be notified in plain and clear language when an automated system is being used, and how and why an outcome impacting them was determined by an automated system; and
5. Users should be able to opt out of an automated system and have access to an accessible, equitable, and effective human alternative and fallback.
Effects of the use of automated systems on Africans
Arguably, the African continent has often been neglected in conversations about AI and automated systems. The effect of this neglect has been the importation of these technologies without adequately ensuring that they fit local circumstances and conditions. By pointing to the need to consider social impact before deployment, and to open testing, the US Blueprint provides the language and principles that others can leverage to push for similarly safe and effective systems in their own circumstances. Similar provisions in African jurisdictions could limit Africans' exposure to automated systems that may not advance their interests.
Discriminatory Practices
Africans have experienced systemic discriminatory practices the world over. These may now be further entrenched with the rollout of automated systems. For example, Africans have faced high visa rejection rates when seeking to travel outside the continent as a result of bias in decision-making technologies used in various immigration sectors.
The use of CCTVs with embedded facial recognition technologies has also become more prevalent in Africa. The danger of these systems lies in their indiscriminate collection of footage of people. In Johannesburg, CCTVs provide a powerful tool to monitor and segregate historically disadvantaged individuals under the guise of providing neutral security. The Blueprint serves as a guide to addressing discriminatory AI practices by providing principles for social protections against algorithmic discrimination. Additionally, the Blueprint provides an opt-out option for automated systems intended for use within sensitive domains, highlighted as ‘those in which activities being conducted can cause material harms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil rights’. This option enables individual resistance to severe discriminatory practices and protects access to essential opportunities and services through human alternatives, consideration, and fallback.
Violation of human rights
The Blueprint recognises the impact of AI technologies on the enjoyment of human rights. For example, AI typically uses data that was acquired in an ethically dubious manner, whether through breaches of privacy or routine surveillance that compromises basic freedoms of movement, association, and expression. A notable provision of the AIBoR is its concern not just for individual rights, but also for the protection of communities against group harms. The Blueprint notes that while AI and other data-driven automated systems cause harm to individuals, the greater magnitude of their impacts may be most readily visible at the community level. The Blueprint broadly defines communities as neighbourhoods, social network connections, families, and people connected by identity, among others.
This provision advances the rights of African communities that have been subjected to surveillance practices by foreign and multinational companies.
Implications of the AI Bill of Rights on AI governance in Africa
Policy frameworks such as the AIBoR seek to strike a balance between guarding against potential harms and encouraging innovation in AI technologies. The Blueprint can inspire the African continent to consider the principles it lays out when developing regulatory responses for responsible AI. However, it is vital that we, as a continent, do not blindly adopt such principles. Rather, we should adapt them to the development priorities and lived experiences of Africans, while keenly noting that Africa is not homogenous and a collective policy would not be an effective rule-making mechanism. Africa is a continent of different cultures, ethnicities, and religions. This means that each African country, with its peculiarities, is tasked with policy-making centred on its specific values. We should also avoid the danger of a blanket regulation that fails to contextualise the continent's needs and problems, and that reduces African AI policy to the adoption of policies developed elsewhere that are inapplicable to Africa's unique circumstances. Instead, countries should enact sector-specific ethical and responsible AI principles, especially in strategic sectors of the African continent such as agriculture, fintech, and healthcare. Though the Blueprint is an appreciated step, its non-binding nature provides little assurance with regard to implementation or sanctions for non-compliance. What will be most interesting to see is the action taken as a result of the Blueprint: instances of recall, redress, and the ability to opt out of these systems in practice.
Research ICT Africa is working with partners at D4D.net to develop a rights-based Global Index on Responsible AI, which will measure commitments and progress towards responsible AI in countries around the world. The Blueprint will be an important instrument in assessing the activities of the US in supporting rights-based AI governance and in setting standards that can be considered and reproduced in other parts of the world.
The Global Index on Responsible AI receives an award at the Paris Peace Forum.
We are delighted to announce that the Global Index on Responsible Artificial Intelligence (AI) was selected for a year of support through the Paris Peace Forum Scale-up Program at the 2022 Paris Peace Forum! The program is designed to accelerate the profile, reach and impact of selected projects, and entails working closely with expert, world-recognised mentors who will support our team throughout the year. We are honoured to have been selected and look forward to working closely with our mentors to take the Global Index on Responsible AI to new heights.
Global Index on Responsible AI: A Tool to Support the Implementation of the UNESCO Recommendations on Ethics in AI
Toward Responsible AI
“The Global AI Index is a wonderful tool that, on top of its goals, is supporting positively the implementation of the UNESCO Recommendations on Ethics in AI, the only global standard of its kind, by generating key data on responsible AI commitments and progress in 120 countries.” - Gabriela Ramos, Assistant Director-General, UNESCO
Toward the responsible global governance of AI
As digital technologies proliferate across all spheres, from global markets and civic spaces to everyday life, inequality within and between countries is also on the rise. National and international governance arrangements and tools are urgently needed to direct digitalisation in ways that enhance the flourishing of human and planetary life. If we do not act, we risk the world dividing along deeper socio-economic cleavages, and the embrittlement of democracy. This is one of the most important policy questions of our time, to which a growing and diverse group of actors are dedicating resources and making commitments. Artificial Intelligence (AI) is broadly considered to be the most influential of the digital goods at play in the world today. While a precise definition of what systems and practices constitute AI proper is debatable, the scale and magnitude of what AI is and can do poses profound questions about how this remarkable suite of technologies should be developed and used to address grave global challenges and to support the equitable and inclusive advancement of human societies. But responding to the questions and challenges AI gives rise to requires a richness of insight and knowledge surfaced from all corners of the world. A major advancement in the development of globally relevant tools to support the ethical and responsible use and development of AI was the adoption of the UNESCO Recommendation on Ethics in AI (the UNESCO Recommendations) by all 193 member states of UNESCO in November 2021.
The UNESCO Recommendations were developed through a conscientious programme of consultations with stakeholders at all levels from all world regions, and build on an international movement to advance rights-respecting AI use and governance, including the widely adopted OECD Principles on AI Ethics. Around the same time as the UNESCO Recommendations were adopted, an ambitious research project was conceived to track progress and commitments to responsible AI at the national level through the creation of a Global Index on Responsible AI (Global Index). Led by Research ICT Africa, an African think-tank based in South Africa, the Global Index is a project of the Data for Development (D4D.net) global network, supported by the International Development Research Centre (IDRC) of Canada and its affiliated AI for Development (AI4D) Africa funding programme (co-led with the Swedish International Development Agency (SIDA)). One of the primary functions of the Global Index is to serve as a tool to support the implementation of the UNESCO Recommendations by developing an instrument which can be used to monitor country-level adoption of their provisions. In this short blog, we set out how the Global Index is being designed to support the implementation of the UNESCO Recommendations and to build on the global movement to advance rights-respecting AI adoption and governance with actors such as the OECD. We begin with a short overview of the Global Index project, before discussing its relationship to the UNESCO Recommendations in setting benchmarks for responsible AI, building capacity around the world, and developing a repository of AI innovations and governance that support the advancement of substantive equality.
Setting Benchmarks for Responsible AI
In establishing the Global Index, a series of indicators is being developed to benchmark standards for responsible AI, against which all participating countries can be assessed.
Broadly, we use the term “responsible AI” to mean the development, use, and governance of AI in ways that fully uphold human rights and democratic values throughout the AI lifecycle and value chain (development, deployment, and maintenance). The conceptual framework for establishing the indicators of the Global Index is adopted from the UNESCO Recommendations, the most globally comprehensive instrument on ethical AI, which the overwhelming majority of countries around the world have committed to implementing. A series of indicators on responsible AI is being developed out of this conceptual framework. These indicators stand as a set of core and more elaborate country-level benchmarks for the responsible use and governance of AI, and fall across three key dimensions: preconditions for responsible AI; responsible AI governance; and responsible AI capacities. Our indicators will be aligned with the tools being developed by UNESCO to support implementation of the UNESCO Recommendations, specifically an Ethical Impact Assessment tool and a self-assessment national readiness tool. In designing the Global Index, we have conducted a parallel process of open consultations, particularly with stakeholders from the Global South, to better understand how best to develop a tool that can be used by different groups - from policy-makers, to investors in AI technologies, to human rights defenders - in an effort to advance responsible AI. For more information about the consultations undertaken on the design of the Global Index, see our website.
Dimension 1: Preconditions for Responsible AI
To advance responsible AI governance that protects and promotes human rights, countries need to have certain legal, institutional, infrastructural and socioeconomic preconditions in place.
The benchmarks established against these criteria fall within the preconditions dimension of the Global Index because they do not relate directly to AI governance and capacities, and are likely to have longer historical precedents.
Dimension 2: Responsible AI Governance
The role of governments is to protect people, their communities and their environments against the risks and harms of AI, and to promote the use of AI that serves the public interest and advances the realisation of, or access to, human rights. This is encapsulated in the responsible AI governance dimension of the Global Index, under which a set of indicators representing the regulatory frameworks required for fully implementing the UNESCO Recommendations has been developed. This recognises that policies and legal frameworks are needed to govern AI systems in ways that advance the protection and promotion of human rights. While upholding universal human rights principles, these policies and legal frameworks also need to be contextually grounded - reflecting local, cultural and indigenous priorities and values, and cognisant of local historical dimensions of exclusion, marginalisation and injustice. The UNESCO Recommendations are singularly significant in promoting the centrality of diversity in AI, both in terms of diversity of governance approaches and in ensuring that AI itself does not erode cultural diversity. UNESCO's success in bringing together a diverse range of experts to highlight the key conundrums facing the responsible use of AI, such as in its work on Artificial Intelligence and Gender Equality, has been instrumental in shaping the conceptual framework of the Global Index.
Indeed, a particular focus of the Global Index’s work will be measuring the efforts countries are taking to promote gender equality in AI, such as increasing the representation of women, non-binary people and other sexual minorities in the design and production of AI systems and tools, as well as their efforts to reduce gender-related bias and discrimination in the use and impact of these systems. This will involve collating data from different countries and showcasing examples that advance gender equality, paying particular attention to innovations originating in the Global South.
Dimension 3: Responsible AI Capacities
Responsible governance requires a range of institutional, social and technical capacities to ensure the sustainability and agility of national AI governance capabilities. This includes independent institutions to oversee and enforce standards of responsible AI, programmes and centres to advance AI knowledge and skills, and social awareness and literacy around AI. For more information on the assessment framework being adopted to measure country-level progress in responsible AI across the three dimensions outlined above, and how we are working to ensure that it offers contextually relevant, fair and useful evaluations of countries, see “Developing a Rights-Based Global Index on Responsible AI”.
Building Capacity and a Global Network on Responsible AI
A key objective of the project is to develop research capacity in responsible AI and to identify country-level regulatory, institutional and knowledge gaps in order to advance the responsible development, use and governance of AI. The Global Index hopes to draw on UNESCO’s global standing to help establish an international network of responsible AI researchers in the countries to be included in the project.
A capacity building programme is currently being developed in advance of data collection to support responsible AI researchers to engage with the full spectrum of issue areas for responsible AI covered under the Global Index, as briefly outlined above. This network of researchers will be supported by regional research hubs in Africa, the Middle East, Latin America, Asia, North America, and Europe, with a particular emphasis on building research capacity and tools to advocate for responsible AI in the Global South. Building this research capacity will bring more voices to the table and bolster the diversity of global debates on how best to govern AI. UNESCO’s emphasis on building capacity and literacy on AI and digital issues in the African region, in particular, is especially aligned with the Global Index project, which is led from Africa and involves many partners and collaborators from the region. Indeed, this will be the first time that a global tool for advancing responsible AI has been developed from Africa.
Establishing a Repository for AI and Substantive Equality
While the scoring and ranking of countries for efforts made to guarantee and promote responsible AI practices is a central part of the Global Index, it is not its most significant aspect. In addition to identifying where capacity development is required and building a global network of responsible AI researchers, the Global Index will function as a repository of good governance and progressive government programmes on responsible AI. In this way, the Global Index will provide key data to support the fulfilment of the commitments under the UNESCO Recommendations for UNESCO to establish repositories of best practice in responsible AI. The ways in which countries are facing the challenge of governing AI practices are diverse, with legal and policy frameworks as well as institutional arrangements unfolding in varied ways in different country contexts.
As shown in other policy areas related to digitalisation, such as data protection, countries are exploring differing approaches that may be influenced by factors such as the level of development of their digital economies, a country’s specific role in digital value chains, regional integration, institutional structures and traditions, and human rights commitments, among many others. In this way, as we collect data about responsible AI around the world, the Global Index will stand not only as a measuring tool, but also as a collection of experiences and best practices that can support the inclusive advancement of diverse conversations and debates on responsible AI. In particular, the UNESCO Recommendations provide that UNESCO will ‘form a repository of best practices for incentivizing the participation of girls, women and under-represented groups in all stages of the AI system life cycle’. Accordingly, the Global Index will specifically collate examples and use cases of country-level programmes and practices that use AI to advance the realisation of substantive equality, such as innovations that use AI to advance access to justice for persons with disabilities. In all, the Global Index team is delighted to be supporting the groundbreaking work of UNESCO in advancing inclusive, sustainable and diverse approaches to the governance of AI.
Perspectives and experiences of feminist AI through a Global South lens
Experts from Latin America, Africa and India came together to discuss what feminist AI means from a Global South perspective. Their key insights were used to inform and refine the gender indicator developed for the Global Index on Responsible AI. The workshop was interactive, drawing participants from around the world.
Measuring progress toward the responsible use of artificial intelligence in over 120 countries around the world
Artificial Intelligence (AI) is a wicked problem facing society globally. It is wicked because it is complex and hard to define as a policy concern. How is it being used, and who must - who can? - take responsibility for ensuring it is used to better society? As AI increasingly becomes a general-purpose technology, it cannot be isolated from the social and economic conditions in which it is produced and used.
The use of AI is raising critical issues around human rights
Artificial intelligence and machine learning have the potential to contribute to the resolution of some of the most intractable problems of our time, such as climate change and pandemics. But they also have the capacity to cause harm and, if not used responsibly, can perpetuate historical injustices and structural inequalities.
About the Global Index on Responsible AI
The Global Index on Responsible AI is a new tool being developed to support the implementation of responsible AI principles by countries around the world. The Global Index is a project of Research ICT Africa and the Data for Development Network (D4D.net), and is supported by the International Development Research Centre (IDRC).
The Global Index will equip governments, civil society, and stakeholders with new evidence to support the efforts of countries to meet their human rights obligations and uphold principles for responsible use in the development and implementation of AI systems.
The Global Index on Responsible AI will establish an international research network and a core team which will operate under the guidance of an Expert Advisory Committee composed of experts on the responsible and ethical use of AI from around the world. The research methodology is also being co-created and validated with other AI-related research organisations under the guidance of the Expert Advisory Committee.
The Global Index is a participatory project by design with many opportunities for stakeholders to participate in the development of the indicator framework and the assessment methodology. Many consultation opportunities will be open to the public as we engage the broader community on approaches to data collection and analysis.
In the near future, we will also be recruiting country-researchers around the world to support data collection during the second half of 2022.