Concerns about Suffering on a Gigantic Scale

Last update: 2024-11-07

One organization in the Effective Altruism movement describes what it calls s-risks, or the possibility of suffering occurring on a gigantic scale, as follows.

https://80000hours.org/problem-profiles/s-risks/:

Why might s-risks be an especially pressing problem? We’re concerned about impacts on future generations, such as from existential threats from pandemics or artificial intelligence. But these are primarily risks of extinction or of humanity’s potential being permanently curtailed — they don’t put special emphasis on avoiding the chance of extreme amounts of suffering, in particular. Research into suffering risks or s-risks attempts to fill this gap.

New technology, for example the development of artificial intelligence or improved surveillance technology, but also new nuclear or biological weapons, may well concentrate power in the hands of those that develop and control the technology. As a result, one possible outcome worse than extinction could be a perpetual totalitarian dictatorship, where people suffer indefinitely. But researchers on s-risks are often concerned with outcomes even worse than this. For example, what would happen if such a dictatorship developed the technology to settle space? And if we care about nonhuman animals or even digital minds, the possible scale of future suffering seems astronomical. After all, right now humanity is almost completely insensitive to the welfare of nonhuman animals, let alone potential future digital consciousness.

We don’t know how likely s-risks are. In large part this depends on how we define the term (we’ve seen various possible definitions). We think it’s very likely that there will be at least some suffering in the future, and potentially on very large scales — potentially vastly more suffering than has existed on Earth so far, especially if there are many, many more individuals in the future and they live a variety of lives. But often when people talk about s-risks, they are talking about the risk of outcomes so bad that they are worse than the extinction of humanity.

Our guess is that the likelihood of such risks is very low, much lower than risks of human extinction — which is part of why we focus more on the latter. However, research on s-risks is so neglected that it’s hard to know. We think there are fewer than 50 people worldwide working explicitly on reducing s-risks. (…)

Key organisations in this space — The Center for Reducing Suffering researches the ethical views that might put more weight on s-risks, and considers practical approaches to reducing s-risks. The Center on Long-Term Risk focuses specifically on reducing s-risks that could arise from the development of AI, alongside community-building and grantmaking to support work on the reduction of s-risks.

The Center for Reducing Suffering is featured in our chapter on Suffering-focused Ethics. Its co-founder, Tobias Baumann, offers this website: https://s-risks.org/. See also S-risks: An introduction.

The Center on Long-Term Risk provides the following articles that might be useful to us: Beginner’s guide to reducing s-risks and Reducing Risks of Astronomical Suffering: A Neglected Priority.

Wikipedia’s article on Global catastrophic risk has this list of possible sources of suffering on a gigantic scale:

Potential global catastrophic risks are conventionally classified as anthropogenic or non-anthropogenic hazards. Examples of non-anthropogenic risks are an asteroid or comet impact event, a supervolcanic eruption, a natural pandemic, a lethal gamma-ray burst, a geomagnetic storm from a coronal mass ejection destroying electronic equipment, natural long-term climate change, hostile extraterrestrial life, or the Sun transforming into a red giant star and engulfing the Earth billions of years in the future. Anthropogenic risks are those caused by humans and include those related to technology, governance, and climate change. Technological risks include the creation of artificial intelligence misaligned with human goals, biotechnology, and nanotechnology. Insufficient or malign global governance creates risks in the social and political domain, such as global war and nuclear holocaust, biological warfare and bioterrorism using genetically modified organisms, cyberwarfare and cyberterrorism destroying critical infrastructure like the electrical grid, or radiological warfare using weapons such as large cobalt bombs. Global catastrophic risks in the domain of earth system governance include global warming, environmental degradation, extinction of species, famine as a result of non-equitable resource distribution, human overpopulation, crop failures, and non-sustainable agriculture.

As long as we have not definitively established what consciousness is, other potential sources of gigantic suffering might be found in plants, rocks, atoms, or other entities, as evoked in articles like Plant perception (paranormal) or Is There Suffering in Fundamental Physics?.

Let’s keep an eye on what the Center might do about suffering on a gigantic scale. (to be continued)

License

World Center for the Control of Excessive Suffering Copyright © by A.C.E.S. (Eds). All Rights Reserved.
