Why is a deep reform of democracy so urgent?
There are several organizations focused on identifying and reducing global catastrophic and existential risks, such as the Future of Humanity Institute at the University of Oxford. Some of the risks in Table 1, like nuclear terrorism, are not existential on their own, but only when combined with other risks.
The table shows that by the end of this century there is at least a 19% chance that one or several existential risks will materialize with the worst impact scenario – the end of the human species[1]. These are only the risks over which we have some control, mainly in the political, military, and social domains, such as nuclear wars or artificial pandemics. Note that climate change is not on that list, because it is not viewed as existential by the end of this century (it might become existential in the 22nd century). However, some scientists, such as the Astronomer Royal, Prof. Martin Rees, put the odds of our civilisation surviving this century at no better than 50%[2]. Two of the most significant risks are:
- Superintelligence (Artificial General Intelligence). The risk stems from two possibilities: that Superintelligence may emerge as a malicious entity, potentially wiping out the human species, and that humans may lose control over AI long before it fully matures. Four polls conducted in 2012 and 2013 showed that top AI specialists’ median estimate gave a 50% chance of Superintelligence emerging between 2040 and 2050.[3][4] In May 2017, AI scientists from the Future of Humanity Institute, Oxford University, and Yale University published a report, “When Will AI Exceed Human Performance? Evidence from AI Experts”, reviewing the opinions of 352 AI experts. Overall, those experts believe there is a 50% chance that Superintelligence (AGI) will occur by 2060.[5] However, Ray Kurzweil, perhaps one of the best-known futurists, believes that Superintelligence will emerge by 2045. Furthermore, he warns that humans may lose control over a maturing AI as early as 2030, when it reaches human-level intelligence[6]. Such an AI may be thousands of times more intelligent than humans in some domains and absolutely ignorant in others. It may be able to clandestinely plot actions such as firing nuclear weapons or creating fake war plans that might start a real world war. Other experts in AI-related technologies, such as Elon Musk, also hold the view that we may lose control over AI by the end of this decade. Some AI specialists indicate that 2030 may be a tipping point in AI development, beyond which it may be impossible for humans to control their own future. Yet of all eight existential risks listed in Table 1, it is the only one that can also help us mitigate all the others.
- Climate Change. We have already had a glimpse of the impact of a changing climate in catastrophic events such as the widespread fires and floods of the last few years. That impact may be severely felt by the middle of this century, although it may not yet be existential. Conventional modelling of climate change has focused on the most likely outcome: global warming of up to 4°C. But there is a risk that feedback loops, such as the release of methane from Arctic permafrost, could produce an increase in temperature of about 6°C or more by the end of this century. Mass deaths through starvation and social unrest could then lead to the collapse of our civilisation. Some climatologists warn that the climate change tipping point will arrive by 2030[7], at about the same time as the AI tipping point.
At the same time, the probability of all these risks will increase significantly because of a factor that is rarely discussed: the exponential pace of change. It is driven, of course, by technology, and especially by the rapid acceleration in the capabilities of AI, but it already affects all domains of human life. It is enough to point to the unforeseen consequences that the COVID-19 pandemic produced through something as simple as Zoom conferencing: in a super-fast and drastic way it changed education, health services, and working from home, brought significant changes to transportation, and created ghost office districts in cities across the world.
Therefore, to survive as a species we may have only a decade to start acting as a planetary civilisation in many areas of human activity. That means we need to abandon any inclination towards isolationism or nationalism. Instead, the world must act together effectively as a federation. Perhaps one of the reasons we do not have such a federation yet is that many of us still hope the UN will quickly be transformed into such an organisation. After all, this is the organisation that should deal with existential risks in the first place. Unfortunately, it is also the organisation that indirectly increases humanity’s overall existential risk by being almost totally incapable of, or ineffective at, solving existing grave problems (e.g. in Syria, Libya, Iraq, and very recently in Afghanistan), mainly because of the way in which the UN makes decisions – the unanimity required among the permanent members of the Security Council.
Since existential risks can materialize at any time – whether natural pandemics like the coronavirus pandemic of 2020, or laboratory-created pathogens maliciously released into the open – we should have an organization that could act as the World Government right now. However, there is no hope that all the countries of the world would give up a significant part of their sovereignty in the foreseeable future.
Therefore, creating a true World Government from scratch by the end of this decade is not feasible. We can only achieve such an objective partially, by transforming an existing organisation, or by empowering a single large country with supranational powers, to become a de facto World Government[8]. Who could play such a role? This is an important and complex issue that extends beyond the subject matter of this paper. However, Table 2 summarises the results of a detailed analysis, carried out in the author’s book “Democracy for a Human Federation”, of the candidate countries and organizations that might play such a role.
The most recent events in Afghanistan add further arguments that it is probably a federated European Union, rather than the United States, which is our best hope, notwithstanding the EU’s own grave problems. Such a possibility will be tested by the end of 2022, after the Conference on the Future of Europe and the elections in France. In any case, if such an organisation emerges by default, it should co-operate with the UN as much as practically possible, gradually taking over its role.
[1] Future of Humanity Institute: “Global Catastrophic Risk Survey”, 2008, https://www.fhi.ox.ac.uk/reports/2008-1.pdf
[2] Martin Rees: Our Final Hour: A Scientist’s Warning, Basic Books, 2003
[3] Raffi Khatchadourian: “The Doomsday Invention”, The New Yorker, 16/11/2015
[4] Vincent C. Müller & Nick Bostrom: “Future progress in artificial intelligence: A survey of expert opinion”, in V. C. Müller (ed.): Fundamental Issues of Artificial Intelligence, pp. 555–572, Springer, Berlin, 2016
[5] Future of Humanity Institute, 13/6/2017: “When Will AI Exceed Human Performance? Evidence from AI Experts”
[6] Ray Kurzweil, one of the most famous futurists, said in an interview with Futurism on 10/5/2017: “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’, which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”
[7] Stephen Leahy: “Climate change driving entire planet to dangerous ‘tipping point’”, National Geographic, 27/11/2019
[8] The subject of creating a de facto World Government to minimize the risk of human extinction is covered extensively in the book “Federate to survive!”, Vol. 1 of “Posthumans” by Tony Czarnecki, July 2020