Towards an international regulation of algorithms?
By disrupting organisations, artificial intelligence (AI) algorithms have revealed previously unknown vulnerabilities. For this reason, there is a growing demand for a legislative framework governing their use.
What is an algorithm? The French CNIL defines it as "the description of a sequence of steps that allows a result to be obtained from input elements." It is therefore a list of operations needed, for example, to calculate the annual repayment of a loan or to recognise an object in a photograph.
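As a minimal illustration (not taken from the CNIL), the loan example can be written as a short program: the algorithm transforms the input elements (principal, interest rate, duration) into the result via the standard annuity formula.

```python
def annual_repayment(principal: float, annual_rate: float, years: int) -> float:
    """Constant annual repayment of a loan (standard annuity formula)."""
    if annual_rate == 0:
        return principal / years
    return principal * annual_rate / (1 - (1 + annual_rate) ** -years)

# Example: 100,000 borrowed over 20 years at 3% -> about 6,721.57 per year.
print(round(annual_repayment(100_000, 0.03, 20), 2))
```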
Threats from algorithms
Algorithms have a dual use, civilian and military. Their civilian use, however, has the particularity of generating threats designed to gain an advantage in a situation of (economic) warfare. This "dual" use is visible on the social network TikTok. While the video-sharing app has opened up new modes of expression for the Chinese population, for example during the "Tang Ping" movement (young people's rejection of consumer society), it has also allowed its moderation service to monitor what is said online, use distribution algorithms to hide accounts discussing the effects of the Covid-19 epidemic on the civilian population, broadcast videos promoting social ideology, and create fake accounts to learn more about how people think.[1]
These threats fall into three categories. The first is "bias": an item is processed differently once one of its attributes has changed. One such phenomenon is the subject of a study[2] recently published by Harvard Business School and Accenture, which reveals how application-screening algorithms shut out older jobseekers or those with a period of inactivity due to incarceration or medical care (a rule of this kind is sketched below). The second is manipulation: in 2020, the Korean search engine Naver was fined €20 million for tampering with its algorithm for its own benefit. The third is espionage: the NSA was able to automatically collect the data of American citizens suspected of terrorism using the Skynet algorithm.
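A hypothetical sketch of the kind of screening rule the Harvard Business School/Accenture study describes (the actual vendors' code is not public): a single hard filter on employment gaps is enough to exclude qualified candidates, whatever the reason for the gap.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    months_gap: int   # longest period of inactivity on the CV
    qualified: bool   # actually meets the job requirements

# Hypothetical rule: any gap over 6 months is rejected outright,
# regardless of its cause (incarceration, medical care, caring duties...).
def passes_screen(a: Applicant) -> bool:
    return a.months_gap <= 6

applicants = [
    Applicant("A", months_gap=2, qualified=True),
    Applicant("B", months_gap=14, qualified=True),   # gap due to medical care
    Applicant("C", months_gap=0, qualified=False),
]

shortlist = [a.name for a in applicants if passes_screen(a)]
print(shortlist)  # ['A', 'C'] -- qualified candidate B never reaches a recruiter
```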
In addition to deliberate malicious use, threats arise from the autonomous operation of algorithmic programs, which do not have to justify their logic as any human worker would. When it automatically identified a category of users described as "Jew haters" in 2017, Facebook's algorithm was following its programming instructions to the letter: find the common interests of accounts in order to connect them. Solutions are currently being studied, such as "algorithmic audits", in which independent inspectors explain the calculations, and the publication of ethical standards such as Trustworthy AI.
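A simplified sketch (not Facebook's actual code) of the instruction "find the common interests of accounts in order to connect them": grouping users on raw declared-interest strings, with no check on what those strings mean, mechanically turns any sufficiently shared interest into a targetable category.

```python
from collections import defaultdict

# Hypothetical profiles: each user declares free-text interests.
profiles = {
    "user1": {"cycling", "jazz"},
    "user2": {"jazz", "chess"},
    "user3": {"jazz"},
}

# Group users by the raw strings they typed, without interpreting them.
groups: dict[str, set[str]] = defaultdict(set)
for user, interests in profiles.items():
    for interest in interests:
        groups[interest].add(user)

# Any interest shared by enough accounts becomes a category,
# whatever its content. (Set ordering in the output may vary.)
categories = {i: users for i, users in groups.items() if len(users) >= 2}
print(categories)  # {'jazz': {'user1', 'user2', 'user3'}}
```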
A first regulation supported by the European Union
These solutions must be underpinned by legislation safeguarding online freedom and security. The European Union has been a pioneer on this issue. Article 22 of the General Data Protection Regulation (GDPR) grants the right not to be subject to a purely automated decision without being notified. The "Platform to Business" Regulation, in force since July 2020, obliges "online intermediation services" (such as Amazon or Google) to document the main parameters used to rank and recommend products.
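What "documenting the main ranking parameters" can look like in practice, as a minimal sketch with hypothetical parameters and weights (not any platform's real formula): the disclosed structure is the dictionary of parameters and their relative importance.

```python
# Hypothetical disclosure of the main ranking parameters and their weights,
# in the spirit of the "Platform to Business" Regulation.
RANKING_PARAMETERS = {
    "relevance_to_query": 0.5,
    "seller_rating": 0.3,
    "delivery_speed": 0.2,
}

def rank_score(product: dict) -> float:
    """Weighted sum over the documented parameters."""
    return sum(w * product[p] for p, w in RANKING_PARAMETERS.items())

catalogue = [
    {"name": "A", "relevance_to_query": 0.9, "seller_rating": 0.6, "delivery_speed": 0.4},
    {"name": "B", "relevance_to_query": 0.7, "seller_rating": 0.9, "delivery_speed": 0.9},
]
for product in sorted(catalogue, key=rank_score, reverse=True):
    print(product["name"], round(rank_score(product), 2))  # B 0.8, then A 0.71
```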
Adopted by the European Council on 25 November 2021, the Digital Services Act and the Digital Markets Act oblige players using algorithms to inform their users about the processing of their data, and the operation of algorithms will be supervised by a European body. This is already happening in France: the Act of 24 August 2021 mandates the CSA (Conseil Supérieur de l'Audiovisuel) to inspect the algorithms used to disseminate fake news. Finally, in April 2021, the European Commission announced a proposed regulation governing AI, similar in approach to the GDPR.
For a global regulation
To be effective, the governance of algorithms must be international. A European Parliament resolution of January 2021 noted that algorithms in cyberspace operate "across borders." The Facebook documents disclosed by Frances Haugen reveal that its moderation algorithms were trained to different levels depending on the language: fake news from Yemen or Myanmar was identified far less often than fake news from English-speaking accounts.
The global governance of algorithms nevertheless faces two obstacles. The first is the lack of international agreements limiting the behaviour of states in cyberspace. The second is the weight of private firms (American, Chinese, or of other nationalities), whose resources for both control of the Net and research and development far exceed those of sovereign states. However powerful, these companies remain subject to the laws of their home country: Chinese firms, for instance, are required to provide intelligence to the People's Republic of China.
Transnational institutions are beginning to set out rules of use. In May 2019, the OECD issued recommendations to regulate the operation of algorithms. In May 2020, the Council of Europe presented its guidelines on human rights-compliant algorithmic systems. On 25 November 2021, UNESCO announced the signing by its 193 Member States of an agreement establishing the ethical use of AI. It sets out four fundamental principles, including the rejection of mass surveillance and social scoring, and the possibility of monitoring AI-enabled tools and intervening where necessary. Governance was further facilitated by the launch, on 14 September 2021, of Globalpolicy.AI, which brings together eight organisations, including the European Commission, the OECD, and UNESCO; its aim is to promote research and best practices in AI control so that they can be applied more widely.
The limitation of the use of algorithms was also addressed by the members of the United Nations Open-ended Working Group (OEWG), during discussion sessions that took place in September 2019, February 2020, and March 2021. This is the first attempt at global cooperation to make the use of information and communication technologies (ICTs)—to which algorithms belong—as peaceful as possible. At the end of these meetings, members recognised the seriousness of cyber threats and the importance of international cooperation to protect against them.
Signatories committed to adopting non-binding norms and to implementing confidence-building measures against the identified cyber threats. To this end, they plan to set up structures to defend against malicious uses of ICTs and to engage in dialogue, notably at the Internet Governance Forum (IGF). One example of such a structure is France's PEReN (Pôle d'Expertise de la Régulation Numérique, the public digital-regulation expertise centre). This research centre, attached to the Direction Générale des Entreprises (DGE, Directorate General for Enterprise), studies how private companies use digital technologies and takes a particular interest in algorithms (in collaboration with Inria).
The governance of algorithms on a global scale is therefore still in its infancy. It raises many questions about the responsibility of private companies, international cooperation in cyberspace, and human autonomy with respect to algorithms. For now, the only certainty is that the answers will emerge from collaboration between public authorities, researchers and engineers from public and private laboratories, and civil society.