The first-ever global agreement on the Ethics of Artificial Intelligence
Artificial intelligence is present in everyday life, from booking flights and applying for loans to steering driverless cars. It is also used in specialized fields such as cancer screening and in creating inclusive environments for persons with disabilities. AI supports the decision-making of governments and the private sector, and it helps combat global problems such as climate change and world hunger. However, the technology ‘is bringing unprecedented challenges’. “We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable AI technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues,” UNESCO explains in a statement.
Against this backdrop, on November 26, member states of the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted the first global agreement on the ethics of artificial intelligence (AI). This historic text defines the common values and principles that will guide the construction of the legal infrastructure needed to ensure the healthy and ethical development of AI.
It aims to highlight the advantages of AI while reducing the risks it entails. According to UNESCO, it provides a guide to ensure that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy.
The content of the Recommendation
The member states of UNESCO have agreed on a text named “The Recommendation on the Ethics of Artificial Intelligence”.
1. Protecting data
The Recommendation calls for action beyond what tech firms and governments are already doing, guaranteeing individuals greater protection by ensuring transparency, agency and control over their personal data. It states that individuals should be able to access and even erase records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data, and it strengthens the ability of regulatory bodies around the world to enforce this. The document says: “Unwanted harms (safety risks), as well as vulnerabilities to attack (security risks) should be avoided and should be addressed, prevented and eliminated throughout the life cycle of AI systems to ensure human, environmental and ecosystem safety and security. Safe and secure AI will be enabled by the development of sustainable, privacy-protective data access frameworks that foster better training and validation of AI models utilizing quality data.”
2. Banning social scoring and mass surveillance
The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These types of technologies are highly invasive, they infringe on human rights and fundamental freedoms, and they are used on a broad scale. The Recommendation stresses that, when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not be given legal personality themselves.
Paragraph 26 of the document states: “In scenarios where decisions are understood to have an impact that is irreversible or difficult to reverse or may involve life and death decisions, final human determination should apply. In particular, AI systems should not be used for social scoring or mass surveillance purposes.”
3. Helping to monitor and evaluate
The Recommendation also sets the ground for tools that will assist in its implementation. The Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems assess the impact of those systems on individuals, on society and on the environment. The Readiness Assessment Methodology helps Member States assess how ready they are in terms of legal and technical infrastructure. This tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts.
Paragraph 50 of the Recommendation states: “Member States should introduce frameworks for impact assessments, such as ethical impact assessment, to identify and assess benefits, concerns and risks of AI systems, as well as appropriate risk-prevention, mitigation and monitoring measures, among other assurance mechanisms. Such impact assessments should identify impacts on human rights and fundamental freedoms, in particular but not limited to the rights of marginalized and vulnerable people or people in vulnerable situations, labour rights, the environment and ecosystems and ethical and social implications, and facilitate citizen participation in line with the values and principles set forth in this Recommendation.”
4. Protecting the environment
The Recommendation emphasises that AI actors should favour data-, energy- and resource-efficient AI methods, which will help ensure that AI becomes a more prominent tool in the fight against climate change and in tackling environmental issues. The Recommendation asks governments to assess the direct and indirect environmental impact throughout the AI system life cycle, including its carbon footprint, energy consumption and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies. It also aims at reducing the environmental impact of AI systems and data infrastructures. It incentivizes governments to invest in green tech, and it instructs that AI systems with disproportionate negative impacts on the environment should not be used.
Paragraph 84 of the document states: “Member States and business enterprises should assess the direct and indirect environmental impact throughout the AI system life cycle, including, but not limited to, its carbon footprint, energy consumption and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies, and reduce the environmental impact of AI systems and data infrastructures. Member States should ensure compliance of all AI actors with environmental law, policies and practices.”
Conclusion
Emerging technologies such as AI have proven their immense capacity to deliver for good. However, their negative impacts, which are exacerbating an already divided and unequal world, must be controlled. AI development should abide by the rule of law, avoid harm, and ensure that, when harm does occur, accountability and redressal mechanisms are at hand for those affected.
According to the document, Member States are recommended to apply the provisions of this Recommendation on a voluntary basis by taking appropriate steps, including whatever legislative or other measures may be required, in conformity with the constitutional practice and governing structures of each State, to give effect within their jurisdictions to its principles and norms in conformity with international law, including international human rights law. It also recommends that Member States engage all stakeholders, including business enterprises, to ensure that they play their respective roles in implementing the Recommendation, and that they bring it to the attention of the authorities, bodies, research and academic organizations, and institutions in the public, private and civil society sectors involved in AI technologies, so that the development and use of AI is guided by both sound scientific research and ethical analysis and evaluation.
The writer is a student at Punjab University.