In the 17th century, Thomas Hobbes imagined an all-powerful sovereign who would subdue people to guarantee their security. This sovereign was Leviathan, the Old Testament monster that the philosopher likened to a “mortal god.” Those who deify artificial intelligence (AI) say it could become the new Leviathan. Should we really believe them?

One of the most important books of 1948 was Norbert Wiener’s Cybernetics, or Control and Communication in the Animal and the Machine. In it, Wiener popularized the cybernetic theory behind AI, revealing its inner workings: humans and machines adapt to their environment, defining their “future conduct by past performance” through the “learning” made possible by the “information” they exchange. He called this learning “feedback.”
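To see what Wiener meant in practice, here is a minimal sketch (our illustration, not code from the book) of such a feedback loop: a thermostat whose heating decisions, its “future conduct,” are corrected at every step by the temperature error it measures, its “past performance.”

```python
# A minimal, hypothetical illustration of Wiener's feedback loop (our
# example, not code from the book): a thermostat whose heating decisions
# ("future conduct") are corrected by the measured error ("past performance").

def run_thermostat(target=20.0, steps=8, gain=1.0):
    temperature = 15.0  # initial room temperature, in degrees Celsius
    for step in range(steps):
        error = target - temperature  # the "information" fed back
        heating = gain * error        # adjust conduct from past performance
        # toy room physics: heating warms the room, the walls leak heat
        temperature += 0.3 * heating - 0.1 * (temperature - 15.0)
        print(f"step {step}: temperature = {temperature:.2f}")

run_thermostat()  # the loop steers the room toward the 20-degree target
```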

Since the book’s publication, each decade has produced its “oracles” prophesying the advent of machines capable of prescribing decisions adapted to their environment through feedback. Among them was the Dominican philosopher Dominique Dubarle, who wrote a remarkable article in Le Monde on December 28, 1948, entitled “A New Science: Cybernetics—Towards the Governing Machine.”

Dubarle explained, with a critical eye, that a “prodigious political Leviathan” could emerge in the form of a cybernetic “governing machine.” By gathering information, such a machine would be able to “determine, according to […] the measures that can be taken at a given moment […] the most probable developments in the situation.” He continued: “Could we not design a state apparatus covering the whole system of political decisions […] in the regime […] of a single government of the planet? Nothing prevents us from thinking about this today.”

Today, those predicting the advent of such an “apparatus” explain that AI is what will make it possible. But for the more clear-sighted, this is neither feasible nor desirable: in their view, AI can and must be nothing more than a tool for more effective and democratic governance.

From Cybersyn to the AI Nanny

After General Pinochet overthrew the Chilean government of Salvador Allende on September 11, 1973, the coup plotters discovered a strange, futuristic room straight out of a science-fiction novel: the “op room.” It was equipped with screens and seven armchairs with buttons to control a cybernetic governance tool called Cybersyn.

The Cybersyn project (a contraction of “cybernetics” and “synergy”) was designed to use feedback to make decisions based on data sent in real time by factories through software called Cyberstride, with the aim of improving planning in President Allende’s socialist economy. The system was designed by the cybernetics theorist Stafford Beer but was never completed.
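To give a sense of how such a system might work, here is a toy sketch loosely inspired by Cybersyn’s principle, not by Beer’s actual Cyberstride algorithms: factories stream daily production figures, an exponentially weighted moving average models their expected output, and large deviations are flagged for the planners in the operations room.

```python
# Toy sketch of the Cybersyn idea (hypothetical, not Beer's actual code):
# factories stream production figures; an exponentially weighted moving
# average models "normal" output, and deviations beyond a threshold are
# flagged for the planners.

def monitor(readings, alpha=0.3, threshold=0.15):
    forecast = readings[0]  # start from the first observed value
    for day, value in enumerate(readings):
        deviation = abs(value - forecast) / forecast
        if deviation > threshold:
            print(f"day {day}: ALERT, output {value} deviates "
                  f"{deviation:.0%} from forecast {forecast:.0f}")
        # feedback: update the forecast from past performance
        forecast = alpha * value + (1 - alpha) * forecast

monitor([100, 103, 98, 101, 72, 99, 104])  # day 4 simulates a disruption
```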

Since Cybersyn, the idea of a governing machine has continued to capture the imagination of those keen to make what at first sight looks like science fiction a reality. Among them is Ben Goertzel, who advocates creating what he calls an “AI Nanny” that can “protect” and “monitor” us, as he explains in his article “Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood?”, published in 2012 in the Journal of Consciousness Studies.

In his view, this AI should be tasked with “slow[ing] down” the advent of the “Singularity,” which refers to the hypothetical emergence of a superintelligence far surpassing the intelligence of humans, who would then no longer be masters of their own destiny. The aim of this “mildly superhuman supertechnology” is to guide humanity toward a “friendly Singularity,” to avoid the sudden and unforeseen advent of a destructive superintelligence.

The AI Nanny, which would be only transitory and would be connected to “worldwide surveillance systems” and “robots,” must be equipped with a “cognitive architecture featuring an explicit set of goals.” These would include a “strong inhibition against rapidly modifying its general intelligence” and against carrying out actions of which the majority of humans would disapprove. It would also have a “mandate” to “cede control of the world to a more intelligent AI within N years,” “abolish human disease,” “prevent the development of technologies that would threaten its ability to carry out its […] goals,” and remain “open-minded toward suggestions by intelligent […] humans.”

AI, a simple tool for public decision-making?

In a 2022 study entitled Artificial Intelligence and Public Action: Building Trust, Serving Performance, the French Council of State dismantles the “fantasy of the ‘singularity.’” In its view, just as “the singularity remains a myth to this day,” the massive replacement of government officials by AI and the advent of an “AIcracy,” acting on its own instructions or indeed on those of human leaders, “does not pass the acid test” of an analysis of the AISs (artificial intelligence systems) actually deployed in government entities today.

And for good reason: as the European Commission’s AI Watch report published in 2020 explains, of the 230 cases of AI use in government bodies analyzed, more than half (127) had only weak, “incremental” effects and only three produced “radical change.” In the words of the Council of State, “the government must not rush into dreaming of an overnight AI revolution, but instead wake up to the reality of machine learning.”

This wake-up call is being driven by the growing use of “public decision support” AISs. Public authorities use these systems to “extract information, knowledge, and analysis from the data they hold, which can then be used by their staff to improve decision-making,” explains the Council of State.

Deployed across a wide range of sectors (including transport, emergency services, healthcare, and education), these decision-support systems can simulate and evaluate public policies. One example is the digital twin, which provides local authorities with a virtual reproduction of their territory based on available data; they can then use it to test the effects of a given decision and adopt the most appropriate one.
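As an illustration of the principle, here is a deliberately simplified sketch (a hypothetical toy model, not a real digital-twin product): two transport policies are simulated on a crude model of rush-hour congestion, and the option with the better outcome is selected.

```python
# Deliberately simplified "digital twin" sketch (hypothetical toy model,
# not a real product): simulate two transport policies on a crude model
# of rush-hour congestion, then pick the option with the lower mean level.

def simulate(car_share, bus_lanes, hours=4):
    congestion, level = [], 1.0
    for _ in range(hours):
        inflow = 0.8 * car_share          # demand pressure from cars
        relief = 0.1 + 0.05 * bus_lanes   # capacity freed by bus lanes
        level = max(0.0, level + inflow - relief)
        congestion.append(level)
    return sum(congestion) / hours

options = {
    "status quo":        simulate(car_share=0.6, bus_lanes=0),
    "two new bus lanes": simulate(car_share=0.5, bus_lanes=2),
}
for name, score in options.items():
    print(f"{name}: mean congestion {score:.2f}")
print("best option:", min(options, key=options.get))
```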

AI, a tool for greater democracy?

These AISs can also end up making automated decisions in disguise: their growing performance and use create the potential for “a strong ‘automation bias,’ in other words, the tendency to trust the machine’s analyses and recommendations unduly and […] more than our own human judgment,” observes the Council of State. Yet because these systems are formally only an aid to decision-making, their use falls outside the strict legal framework governing automated decision-making, which is designed to protect citizens’ rights and freedoms. This is why this cognitive bias must be countered, notably through training, to prevent an “AIcracy” from quietly taking hold.

In a 2019 report on artificial intelligence and its use in the public sector, the OECD praises the Belgian digital platform CitizenLab for giving citizens the opportunity to take part in shaping public policy through written contributions. Using natural language processing and machine learning techniques, the platform helps elected representatives analyze those contributions, categorizing them by theme and highlighting the key trends influencing policy choices (a sketch of this kind of processing follows below). At a time when there is much talk of a “crisis” of representative democracy, such initiatives are surely welcome. Does AI hold the key to saving our democracies?
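Here is a deliberately small sketch of this kind of processing (our illustration, not CitizenLab’s actual pipeline), assuming scikit-learn is available: contributions are turned into TF-IDF vectors and grouped into themes with k-means, so staff can review trends per topic.

```python
# Hedged sketch of the kind of processing described (our illustration,
# not CitizenLab's actual pipeline): cluster citizen contributions by
# theme with TF-IDF vectors and k-means, for review topic by topic.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

contributions = [
    "We need more bike lanes downtown",
    "Cycling to work is dangerous without protected lanes",
    "The library should open on Sundays",
    "Extend library evening hours for students",
    "Bike parking near the station is always full",
    "More study rooms in the public library please",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(contributions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for theme in sorted(set(labels)):
    print(f"Theme {theme}:")
    for text, label in zip(contributions, labels):
        if label == theme:
            print("  -", text)
```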
