The concept originates in Robin Hanson's argument that the failure to find any extraterrestrial civilizations in the observable universe implies that something is wrong with one or more of the arguments (from various scientific disciplines) that the appearance of advanced intelligent life is probable. This observation is conceptualized in terms of a "Great Filter", which acts to reduce the great number of sites where intelligent life might arise to the tiny number of intelligent species with advanced civilizations actually observed (currently just one: humans). This probability threshold, which could lie in humanity's past or still ahead of it (culminating in human extinction), might work as a barrier to the evolution of intelligent life, or as a high probability of self-destruction. The main conclusion of the argument is that the easier it was for life to evolve to the present stage, the bleaker humanity's future chances probably are.
The idea was first proposed in an online essay titled "The Great Filter – Are We Almost Past It?", written by economist Robin Hanson. The first version was written in August 1996 and the article was last updated on September 15, 1998. Hanson's formulation has received recognition in several published sources discussing the Fermi paradox and its implications. (...)
Fermi paradox
There is no reliable evidence that aliens have visited Earth; we have observed no intelligent extraterrestrial life with current technology, nor has SETI found any transmissions from other civilizations. The Universe, apart from the Earth, seems "dead"; Hanson states:
Our planet and solar system, however, don't look substantially colonized by advanced competitive life from the stars, and neither does anything else we see. To the contrary, we have had great success at explaining the behavior of our planet and solar system, nearby stars, our galaxy, and even other galaxies, via simple "dead" physical processes, rather than the complex purposeful processes of advanced life.

Life is expected to expand to fill all available niches. With technology such as self-replicating spacecraft, these niches would include neighboring star systems and even, on longer time scales which are still small compared to the age of the universe, other galaxies. Hanson notes, "If such advanced life had substantially colonized our planet, we would know it by now." (...)
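To make those time scales concrete, here is a rough back-of-the-envelope sketch in Python. The 1%-of-light-speed expansion rate and the simple wave-front model are assumptions chosen for illustration, not figures from Hanson's essay:

```python
# Back-of-the-envelope: how long would a colonization wave take to cross
# the Milky Way, and how does that compare with the age of the universe?
# All inputs are illustrative assumptions, not figures from Hanson's essay.

GALAXY_DIAMETER_LY = 100_000    # rough diameter of the Milky Way, in light-years
EXPANSION_SPEED_C = 0.01        # assumed colonization wave speed: 1% of light speed
UNIVERSE_AGE_YEARS = 13.8e9     # approximate age of the universe, in years

# A wave moving at a fraction f of light speed crosses D light-years in D / f years.
crossing_time_years = GALAXY_DIAMETER_LY / EXPANSION_SPEED_C

print(f"Galaxy crossing time: {crossing_time_years:.1e} years")                        # ~1e7 years
print(f"Fraction of the universe's age: {crossing_time_years / UNIVERSE_AGE_YEARS:.4%}")  # well under 1%
```

Even at a leisurely 1% of light speed, crossing the whole galaxy takes on the order of ten million years, less than a tenth of a percent of the universe's age, which is why the absence of any visible colonization wave is the puzzle.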
The Great Filter
With no evidence of intelligent life anywhere other than Earth, it appears that the process of starting with a star and ending with "advanced explosive lasting life" must be unlikely. This implies that at least one step in this process must be improbable; a toy numeric sketch of this arithmetic follows the list below.
Hanson's list, while incomplete, describes the following nine steps in an "evolutionary path" that results in the colonization of the observable universe:
- The right star system (including organics and potentially habitable planets)
- Reproductive molecules (e.g. RNA)
- Simple (prokaryotic) single-cell life
- Complex (eukaryotic) single-cell life
- Sexual reproduction
- Multi-cell life
- Tool-using animals with intelligence
- A civilization advancing toward the potential for a colonization explosion (where we are now)
- Colonization explosion
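The filter logic itself reduces to multiplying per-step probabilities. Below is a minimal sketch of that arithmetic; the individual probabilities are invented purely for illustration (Hanson's essay does not assign numbers to the steps):

```python
# The Great Filter argument as arithmetic: the chance that a given star
# system yields a visible colonizing civilization is the product of the
# per-step probabilities. If observation forces that product to be tiny,
# at least one factor must be tiny. All numbers below are invented.

steps = {
    "right star system": 0.1,
    "reproductive molecules": 0.01,
    "prokaryotic life": 0.1,
    "eukaryotic life": 0.01,
    "sexual reproduction": 0.1,
    "multicellular life": 0.1,
    "tool-using intelligence": 0.001,
    "advancing civilization": 0.1,
    "colonization explosion": 0.1,
}

p_total = 1.0
for p in steps.values():
    p_total *= p

print(f"P(star -> visible colonizer) = {p_total:.1e}")  # 1.0e-13 with these toy numbers

# With roughly 1e11 star systems in the galaxy and zero visible colonizers,
# p_total must be far below 1e-11, so at least one factor has to be very
# small. The least probable step is the filter:
hardest = min(steps, key=steps.get)
print(f"Least probable step in this toy example: {hardest}")
```

The point of the sketch is not the specific values but the structure: observing no colonizers constrains only the product, so the open question is which factor is the small one, and whether it lies behind us or ahead of us.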
***
Global Catastrophic Risk

A global catastrophic risk or a doomsday scenario is a hypothetical future event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's potential is known as an "existential risk." (...)
Defining global catastrophic risks
The term global catastrophic risk "lacks a sharp definition", and generally refers (loosely) to a risk that could inflict "serious damage to human well-being on a global scale".
Humanity has suffered large catastrophes before. Some of these caused serious damage but were only local in scope; the Black Death, for example, may have killed a third of Europe's population, roughly 10% of the global population at the time. Some were global but not as severe; the 1918 influenza pandemic killed an estimated 3–6% of the world's population. Most global catastrophic risks would not be intense enough to kill the majority of life on Earth, but even if one did, the ecosystem and humanity would eventually recover (in contrast to existential risks).
Potential sources of risk
Main article: Global catastrophe scenarios
Potential global catastrophic risks are conventionally classified as anthropogenic or non-anthropogenic hazards. Non-anthropogenic risks include an asteroid impact event, a supervolcanic eruption, a natural pandemic, a lethal gamma-ray burst, a geomagnetic storm from a coronal mass ejection destroying electronic equipment, natural long-term climate change, hostile extraterrestrial life, and the Sun's predictable transformation into a red giant star that engulfs the Earth.
Anthropogenic risks are those caused by humans, including risks related to technology, governance, and climate change. Technological risks include the creation of artificial intelligence misaligned with human goals, biotechnology, and nanotechnology. Insufficient or malign global governance creates risks in the social and political domain, such as global war and nuclear holocaust, biological warfare and bioterrorism using genetically modified organisms, cyberwarfare and cyberterrorism destroying critical infrastructure like the electrical grid, and radiological warfare using weapons such as large cobalt bombs. Global catastrophic risks in the domain of Earth system governance include global warming, environmental degradation, species extinction, famine resulting from inequitable resource distribution, human overpopulation, crop failures, and unsustainable agriculture.
[ed. Several links removed to enhance readability. Eventually somebody is going to figure out how to use AI to mine Wikipedia for interesting stuff that nobody is aware of. Maybe they already have and I'm just not aware of it. Bonus link from this topic: Self-replicating spacecraft.]