How do we measure progress on the implementation of responsible AI principles and practice?
The Global Index on Responsible AI is a new tool being developed to support the implementation of responsible AI principles by countries around the world. The Global Index will equip governments, civil society, and other stakeholders with the critical evidence needed to help countries meet their human rights obligations and uphold principles for the responsible use of AI.
The Global Index is a project of Research ICT Africa and the Data for Development Network (D4D.net). The project is being carried out with the aid of a grant from the International Development Research Centre (IDRC) and is funded by the Government of Canada.
IRCAI recognises the Global Index
In early May 2023, the Global Index was selected as one of the top 100 solutions for the Sustainable Development Goals (SDGs) by the International Research Centre on Artificial Intelligence (IRCAI). The list of 100 projects solving global problems connects the application and regulation of AI to the UN SDGs.
The Global Index receives award at the Paris Peace Forum
We are delighted to announce that the Global Index on Responsible Artificial Intelligence (AI) was selected for a year of support through the Paris Peace Forum Scale-up Program at the 2022 Paris Peace Forum! The program is designed to accelerate the profile, reach, and impact of selected projects, and entails working closely with world-recognised expert mentors who will support our team throughout the year. We are honoured to have been selected and look forward to working closely with our mentors to take the Global Index on Responsible AI to new heights.
Toward the responsible global governance of AI
As digital technologies proliferate across all spheres, from global markets and civic spaces to everyday life, inequality within and between countries is also on the rise. New tools are urgently needed to direct digitalisation in ways that enhance democracy and human rights.
Regulating AI is the need of the hour
Regulation of Artificial Intelligence is the need of the hour, and the release of the Blueprint for an AI Bill of Rights (AIBoR) by the US White House Office of Science and Technology Policy is a timely contribution. The Blueprint adds to global debates on AI governance by seeking to guide the design, use, and deployment of automated systems to protect the American people in the age of AI. Although it lacks legal enforcement, the Blueprint signals a deliberate move towards a US model of AI governance that acknowledges the role played by automated systems in widening existing patterns of discrimination and inequality. Like the proposed AI Act of the European Union, the AIBoR adopts a rights-based framework and sets out guidance to ensure the protection of rights through practices such as data minimisation, privacy by design, seeking consent from data subjects, and an express deterrence of continuous surveillance in spaces such as work and housing, where its use limits rights, opportunities, and access. By arguing for policy guardrails to limit the perpetuation of such harms, the Blueprint is a significant step by the US towards rights-based regulation of AI.

So what are the implications of the Blueprint for Africans? If implemented, it could bring considerable benefit to the African continent. The embrace of AI in Africa has not been without its challenges, and the consequences of the technology have been felt through human rights violations, systemic discrimination, and deepened inequalities. This piece discusses the potential implications of the Blueprint for AI governance in Africa and for Africans, highlighting some of the challenges posed by AI and the ways the Blueprint could help address them.

Highlights of the AI Bill of Rights

The Blueprint promotes five principles for automated systems:

1. Automated systems should be safe and effective. Diverse communities should be consulted during development, and systems should undergo testing, risk identification, and mitigation before deployment.
2. Automated systems should be designed and used equitably. Developers should protect users from algorithmic discrimination by implementing equity assessments, using representative data, and ensuring ongoing disparity testing and mitigation.
3. Users should be protected from abusive data practices and should have agency over how their data is used.
4. Users should be notified in plain and clear language when an automated system is being used, and how and why an outcome affecting them was determined by such a system.
5. Users should be able to opt out of an automated system and have access to an equitable and effective human alternative and fallback.

Effects of the use of automated systems on Africans

Arguably, the African continent has often been neglected in conversations about AI and automated systems. The effect of this neglect has been the importation of these technologies without adequate assurance that they fit local circumstances and conditions. By pointing to the need to consider social impact before deployment, alongside open testing, the US Blueprint provides language and principles that others can leverage to push for similarly safe and effective systems in their own contexts. Similar provisions in African jurisdictions could limit Africans' exposure to automated systems that may not advance their interests.

Discriminatory Practices

Africans have experienced systemic discriminatory practices the world over, and these may now be further entrenched with the rollout of automated systems. For example, Africans have faced high visa rejection rates when seeking to travel outside the continent as a result of bias in decision-making technologies used in various immigration systems.
The use of CCTV cameras with embedded facial recognition technologies has also become more prevalent in Africa. The danger of these systems lies in the indiscriminate collection of footage of people. In Johannesburg, CCTV provides a powerful tool to monitor and segregate historically disadvantaged individuals under the guise of neutral security provision. The Blueprint serves as a guide for addressing discriminatory AI practices by providing principles for social protection against algorithmic discrimination. In addition, the Blueprint's opt-out option for automated systems intended for use within sensitive domains, highlighted as 'those in which activities being conducted can cause material harms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil rights', enables individual resistance to severe discriminatory practices and protects access to essential opportunities and services through human alternatives, consideration, and fallback.

Violation of human rights

The Blueprint recognises the impact of AI technologies on the enjoyment of human rights. For example, AI typically uses data acquired in ethically dubious ways, whether through breaches of privacy or routine surveillance that compromises basic freedoms of movement, association, and expression. A notable feature of the AIBoR is its concern not just for individual rights but also for the protection of communities against group harms. The Blueprint notes that while AI and other data-driven automated systems cause harm to individuals, the greater magnitude of their impacts is most readily visible at the community level. The Blueprint broadly defines communities to include neighbourhoods, social network connections, families, and people connected by identity, among others.
This provision advances the rights of African communities that have been subjected to surveillance practices by foreign and multinational companies.

Implications of the AI Bill of Rights on AI governance in Africa

The outcome that policy frameworks such as the AIBoR seek to achieve is a balance between protecting against potential harms and encouraging innovation in AI technologies. The Blueprint can inspire the African continent to consider the principles it lays out when developing regulatory responses for responsible AI. However, it is vital that we, as a continent, do not adopt such principles uncritically. Rather, we should adapt them to the development priorities and lived experiences of Africans, keenly noting that Africa is not homogenous and that a single collective policy would not be an effective rule-making mechanism. Africa is a continent of different cultures, ethnicities, and religions, which means that each African country, with its peculiarities, is tasked with policy-making centred on its specific values. We should also avoid the danger of blanket regulation that fails to contextualise the continent's needs and problems, and that limits African AI policy to the adoption of externally developed policies inapplicable to Africa's unique circumstances. Instead, sector-specific ethical and responsible AI principles should be enacted, especially in strategic sectors of the African continent such as agriculture, fintech, and healthcare. Though the Blueprint is a welcome step, its non-binding nature provides little assurance with regard to implementation or sanctions for non-compliance. What will be most interesting to see is the action taken as a result of the Blueprint: instances of recall, redress, and the ability to opt out of these systems in practice.
Research ICT Africa is working with partners at D4D.net to develop a rights-based Global Index on Responsible AI, which will measure commitments to, and progress on, responsible AI in countries around the world. The Blueprint will be an important instrument in assessing the activities of the US in supporting rights-based AI governance and in setting standards that can be considered and reproduced in other parts of the world.
Feminist AI through a Global South Lens
Experts from Latin America, Africa, and India came together to discuss what feminist AI means from a Global South perspective. Their key insights were used to inform the gender indicator developed for the Global Index on Responsible AI. The interactive workshop drew participants from around the world.
Measuring the responsible use of AI
Artificial Intelligence (AI) is a wicked problem facing society globally. It is wicked because it is complex and hard to define as a policy concern. How is it being used, and who must, and indeed who can, take responsibility for ensuring it is used to better society? As AI increasingly becomes a general-purpose technology, it cannot be isolated from the social and economic conditions in which it is produced and used.
AI raising issues around human rights
Artificial intelligence and machine learning have the potential to contribute to the resolution of some of the most intractable problems of our time, such as climate change and pandemics. But they also have the capacity to cause harm and, if not used properly, can perpetuate historical injustices and structural inequalities.
Join Dr Rachel Adams at the Seminar Series!
Join our Principal Investigator, Dr Rachel Adams, at an upcoming seminar hosted by the Information Law & Policy Centre at the Institute of Advanced Legal Studies, where she will discuss the Global Index on Responsible AI, a tool being developed to measure the current state of responsible AI around the globe.
Join the Global Index at the Paris Peace Forum
Mark your calendars! The 2023 Paris Peace Forum will take place on 10-11 November, gathering once again under one roof the most important actors in global governance. We look forward to participating in this meaningful event for the second year in a row!
Promoting Responsible AI Around the World
The Regional Research Hubs of the Global Index on Responsible AI are recruiting country researchers to conduct research for the project. Researchers from all over the world are welcome to apply! We look forward to receiving your application and reviewing your qualifications.
Highlights from Ibrahim Governance Weekend
Team members of the Global Index on Responsible AI had the privilege of participating in the prestigious Mo Ibrahim Governance Weekend, an event that brought together influential leaders, policymakers, and experts from various fields to discuss pressing global challenges and explore innovative solutions.
The Global Index on Responsible AI has established an international research network as well as a core project team that operates under the guidance of an Expert Advisory Committee (EAC). The EAC is composed of global experts on the responsible use of AI. The research methodology has been co-created and validated with the members of the EAC as well as other AI-related research organisations around the world.
The Global Index is a participatory project by design with many opportunities for stakeholders to engage in the development of the assessment methodology, data collection, as well as the dissemination of results.
In the near future, we will also be recruiting country-researchers around the world to support data collection in late 2023. Stay tuned for more information.