Talk:Fermi paradox
This is the talk page for discussing improvements to the Fermi paradox article. This is not a forum for general discussion of the article's subject.
Archives: Index, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Auto-archiving period: 180 days
Discussions on this page often lead to previous arguments being restated. Please read recent comments and look in the archives before commenting.
This page is not a forum for general discussion about the Fermi paradox. Any such comments may be removed or refactored. Please limit discussion to improvement of this article. You may wish to ask factual questions about the Fermi paradox at the Reference desk.
Fermi paradox is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
This article appeared on Wikipedia's Main Page as Today's featured article on January 13, 2005.
This level-5 vital article is rated B-class on Wikipedia's content assessment scale. It is of interest to multiple WikiProjects.
Untitled
TODO: Add the critical key element of the actual Fermi paradox solution (besides it being a compound solution made of observation-compatible components), the deduction behind it, and the resulting dynamic of the solution:
Ethical explanation
It is possible that the ethical assessment of general forms of the evolution of life in the universe is the central issue around which intelligent alien species' large-scale decision-making revolves, on topics such as natural panspermia, directed panspermia, space colonization, megastructures, or self-replicating spacecraft. If the utility evaluation of sufficiently long initial or lasting portions of expected or prospective cases of evolution is, among all other ethically relevant factors, the dominant ethical concern of intelligent alien species, and if furthermore a large enough negative expected utility is assigned to sufficiently common forms of expected or prospective evolution, then it may follow that such species forgo directed panspermia, space colonization, the construction of megastructures, and the sending out of self-replicating spacecraft, and even actively attempt to mitigate the consequences of interplanetary and interstellar forms of natural panspermia. In the case of space colonization, it might ultimately remain too uncontrollable to ensure, by technical or educational means, that settlers or emerging space colonies consistently keep acting in accordance with what the colonizers consider the major ethical dangers accompanying physical interstellar space exploration. In the case of interstellar self-replicating spacecraft, because of potential prebiotic substances in interstellar clouds and in exoplanets' atmospheres and soils, it may forever remain impossible to guarantee their sterility, so that contamination of celestial bodies might kick-start uncontrollable evolutionary processes. Reasons to forgo the creation of a megastructure, even one beneficial to an intelligent alien species and to imitators among other such species, may by contrast have a mainly psychological origin.
Since certain megastructures may be identifiable as being of unnatural origin, requiring intelligent design by a foreign alien species, the rather uncontrollable spectrum of influences such a discovery could have on other intelligent alien species (especially less experienced or less far developed ones) capable of identifying the megastructure as such might constitute too strong an ethical deterrent against creating megastructures identifiable from outer space, for as long as the expected number of such observers remains large enough, until eventually a lasting state of cosmic privacy is attained by natural or technological means.
Consequently, the steady-state solution of the Fermi paradox would consist of naturally emerging civilizations that stay on their home world, hide, and, by their location of emergence, are assigned a surrounding region of space in which they can carry out local cosmic interventions for the macro-ethical good, until a galaxy is covered by the regions of civilizations' local influence. This resembles a mathematical covering problem: covering a galaxy with the least required number of civilizations so as to keep as much of it as sterile as possible for as long as possible. In a spiral galaxy, chances are that most if not all such civilizations would inhabit star systems moving with the main stream of stars around the galactic center, since for "wrong-way-driver" star systems the severely increased rate of interaction with different galactic regions puts both the emergence of a civilization and its continued long-term presence at far higher risk. The only exception to this general behavior might arise near the very end of the universe's development, when galaxies have run out of material with which to keep stars burning, have darkened severely with "the lights having gone out", and planets have cooled sufficiently, so that the risk of civilizations accidentally or intentionally, directly or indirectly, causing lasting uncontrolled evolutions of wildlife has, through astronomically slow gradual decay, finally diminished to a level low enough to conceivably provide macro-ethical allowance, or even justification, for civilizations to stop hiding and being silent.
based on the following reasoning:
1. Axiom of Importance: The ethical importance of an issue increases with the number of sentient lifeforms involved, the duration over which they are affected, and the vastness of the affected space insofar as changes to it affect those lifeforms. More directly, it increases with the absolute difference in the resulting time-integrals over all pleasure and pain receptor signals (weighted by receptor-specific intensities) of all sentient beings.
2. Extreme case: By the abstract, general standard defined above, and according to the current body of humanity's knowledge, general forms of the evolution of life (whether on Earth or on exoplanets) constitute the most ethically important issue in the universe: with billions of species, each with numerous individual lifeforms, durations on the scale of billions of years, and a spatial extension of at least a whole planet, it dwarfs the importance of any other conceivable ethical issue.
3. Valuation axiom for the extreme case: According to several authors, such as Richard Dawkins, Brian Tomasik, Alejandro Villamor Iglesias, and Oscar Horta, pain and suffering dominate over joy for wild animals in general forms of Darwinian evolution, due to the global war-like situation commonly framed as survival of the fittest (rather than the demise of all the unfit). Therefore, when accumulated across all logically entangled parameters such as duration and the number of individuals involved, instances of such forms of the evolution of life have to be kept at a minimum in the universe, as there never was and never will be anything more important that could change the conclusion of this anti-panspermia-implying directive.
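Read formally, the "axiom of importance" above amounts to a weighted time-integral over receptor signals. The following is an illustrative formalization only; the symbols are not from the original comment:

```latex
% Illustrative formalization of the "Axiom of Importance" (assumed notation):
%   B      = set of affected sentient beings
%   R_b    = set of pleasure/pain receptors of being b
%   w_r    = receptor-specific intensity weight
%   s_r(t) = signed signal of receptor r at time t (pleasure > 0, pain < 0)
I \;=\; \Bigl|\, \sum_{b \in B} \sum_{r \in R_b} \int_{t_0}^{t_1} w_r \, s_r(t) \, \mathrm{d}t \,\Bigr|
```

Under this reading, the "extreme case" claim is simply that instances of evolution of life maximize $I$ through the sizes of $B$ and of the interval $[t_0, t_1]$.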
1. Macro-ethical scale of the evolution of life: Whether evolution of life happens somewhere or not is certainly a very big deal in macro-ethical terms, since easily millions of species can be subjected to and involved in it, for several hundreds of millions, possibly even billions, of years.
2. Macro-ethical importance of the evolution of life: It is also surely very agreeable that evolution can play out in extremely many different ways, with extremely large variety in its short- and long-term dynamics, depending on which events (of various qualitatively different types) happen during it at all, and whether they happen sooner or later. The distance, in terms of all joy and suffering aggregated in the process, between the worst kinds of an instance of the evolution of life and the best kinds is therefore astronomically huge, giving the subject matter monumental importance due to its sheer scale. This holds regardless of where such an interval, spanning the whole spectrum of cases from worst to best, lies on a continuous axis (from minus infinity to plus infinity) meant to account for the ethical evaluation of the whole once everything of ethical relevance has finished happening: wholly on the negative side, straddling the negative and positive sides, or entirely on the positive side.
3. Nearly guaranteed expectable improvement in decision-making or design, rapidly and in short time: Any randomly, intentionally, accidentally, or perhaps even unnoticedly initiated instance of evolution elsewhere would not, with any sufficiently high likelihood, result in a form of evolution of life anywhere close to the better among the actually plausible cases. At the same time, science and technology progress rapidly and can surely keep progressing speedily for millennia, if not hundreds of thousands of years, eventually putting humanity in a position of far greater holistic overview and comprehension of the matter. Given how gargantuan a macro-ethically important matter this is, even if in the future we could only bring about a version of evolution of life that is, say, 5% better (relative to the window width) than any version that might now irreversibly begin, the absolute difference would be unimaginably titanic.
4. Humanity's abysmal historical track record, relevant here as an empirical reference frame: As our history repeatedly shows, humanity does not manage complex large-scale matters anywhere near perfectly right the first time around, partly because of unaccounted-for side effects; huge accompanying problems are the norm rather than the exception. On top of this, several factors likely make it harder for contemporary people to care about this topic: the crises we had and still have here on Earth; that it concerns a huge risk for others rather than for ourselves; that those affected would not be humans (though it could eventually also lead to species of human-level intelligence being subjected to it) but wild animals, which people generally judge to have a lower priority of care than other humans; that the disaster would unfold far in the future, long past the lifetime of anyone currently alive, and far away; and that the means by which it would happen are very subtle, with comprehension made less accessible by the interdisciplinary complexity of the subject, which must be explained in rather little time, since it will not be long before future space missions and activities carry these grave risks. Just about all the odds therefore stand in opposition to, rather than in favor of, people taking it seriously with the right mindset.
5. There is no urgency or need for near-future final decision-making that would lock humanity out of otherwise currently still available, significant alternatives.
Conclusion: It unquestionably makes sense, and is far safer, for humanity to exercise discipline and patience and to hold itself back from all outer-space activities that carry even the slightest forward-contamination risk.
Beyond all of this, the same general line of reasoning would apply analogously to all intelligent aliens with exo-biospheres of different biological constitution. Moreover, all alien civilizations would have to account for every biologically distinct kind of evolution of life possible in the universe, if distinct kinds do exist, depending on the general distribution of habitable candidate worlds specific to each of them. In particular, intelligent aliens would have to account for our DNA-based kind of biosphere, and, vice versa, humanity would have to account for the possible emergence of biologically distinct cases of the evolution of life.
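The "minimal covering" analogy in the steady-state solution above (covering a galaxy with the fewest civilizations' regions of influence) is essentially a set-cover problem. A minimal greedy sketch, with a purely hypothetical toy star grid and influence radius:

```python
from itertools import product

# Toy model: a galaxy as a grid of star sites; each potential "home world"
# can monitor all sites within a fixed influence radius. Greedy set cover
# picks few home worlds whose influence regions jointly cover the grid,
# illustrating the covering problem the comment alludes to.

def influence(site, radius, galaxy):
    """All galaxy sites within `radius` of `site`."""
    x0, y0 = site
    return {(x, y) for (x, y) in galaxy
            if (x - x0) ** 2 + (y - y0) ** 2 <= radius ** 2}

def greedy_cover(galaxy, candidates, radius):
    """Repeatedly pick the candidate covering the most uncovered sites."""
    uncovered = set(galaxy)
    chosen = []
    while uncovered:
        best = max(candidates,
                   key=lambda c: len(influence(c, radius, galaxy) & uncovered))
        gained = influence(best, radius, galaxy) & uncovered
        if not gained:
            break  # remaining sites unreachable by any candidate
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

galaxy = set(product(range(10), range(10)))
chosen, left = greedy_cover(galaxy, sorted(galaxy), radius=3)
print(len(chosen), len(left))  # a handful of home worlds, nothing uncovered
```

Greedy set cover is not optimal in general, but it achieves a logarithmic approximation factor, which is in the spirit of the "least required amount of civilizations" framing.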
--- — Preceding unsigned comment added by 195.192.195.234 (talk) 03:14, 22 May 2023 (UTC)
Refutation of the dark forest hypothesis as well as Dyson swarms (informally worded):
In regard to this topic, imagine a single civilization separated so much from all others, and in such a manner, that it could know for certain this is the case. It would then have no one to worry about in terms of "intergalactic threats". Or take a civilization that simply has nothing to fear from any other civilization anymore, because any response to whatever it did (if it had not already been detected throughout its existence anyway) would take far too long to matter to it; it could safely be aware of that and then unhide however it liked, if it wanted to at that old age. Stars get separated and ejected from galaxies not that rarely, stars can form elsewhere in isolation, and there are small galaxies, and especially galaxies made up of 90% dark matter with, relatively speaking, barely any stars in them. How long would it take to search those and confirm "no, there are no aliens over there either": one million, ten million, a hundred million years? Even in small galaxies, aliens could, within their lifespan, make sure they are alone and then, again if they wanted, do whatever they wanted. So the dark forest hypothesis has this problem that it cannot resolve: it is close but flawed. It cannot be what deters aliens from building Dyson swarms or the like (assuming those are possible in the first place). At that point, even the dark forest hypothesis would rely on statistics or chance, namely that none of these exception cases (and perhaps more, given a few more minutes of thought) have happened so far.
Also, if one were to work within the (il)logical framework of "civilizations generally expand when possible, but also hide as long as possible, yet at some point have more reasons to stop hiding than to continue, if other threats are too high and hiding would limit their options for addressing those risks too much", then, when checking which exoplanets may host aliens, one could look for all the kinds of cosmic threats we can predict that would happen "soon" elsewhere and would threaten other places, and then see whether nothing happens at those places, or whether instead processes resembling preparations to address such a threat are observable.
Also, since the potential "complete unfathomability" of aliens' psyche comes up often: as long as such aliens do genetic research and attempt animal uplifting in cognitive, mental terms, they may well end up alongside other species that think just like us, contributing the same kinds of ideas, considerations, and intentions we are familiar with; it may easily be a mistake to think aliens would still live as the only humanly (or beyond humanly) intelligent species on their planet after one, ten, or a hundred million years. So yes, they may additionally have species among them that think very differently, but unless that were part of, or the reason for, why they would not create other intelligent species like us, they could just as well have "aliens thinking just like us" on their planet, too.
Surely, if a civilization (within the dark forest framework) were forced by predictable nearby cosmic threats, up to absolutely certain ones, to react by expanding or fleeing in ways that uncover it, or else die out, there would be situations in which it would rather enter the speculative risk that nearby civilizations exploit its visibility to take it out, than face the physically certain threat. It might even consider trading information to nearby civilizations in return for being spared from aggression: it could be useful to potentially dangerous neighbors and use that fact for its own survival if it needed to become visible. And yet we do not see them, or at least we do not see Dyson swarms and the like, which, if it indicates anything, indicates that staying hidden is apparently a very high priority "in current times around this region of the cosmos". So the fact that civilizations pressed toward potential extinction by cosmic forces could leverage their usefulness to other civilizations in trade for being spared would be another problem that advocates of the dark forest hypothesis would have to address. And once such trades had happened, even collaborative, friendly relationships could be established as a result, within (or despite) the dark forest framework. With other civilizations one can negotiate; with the forces of the cosmos, one cannot.
So it seems that at least one of the dark forest hypothesis and the Dyson swarm concept is busted. A Dyson swarm would not be the only expectable way in which a civilization's expansion might be visible, but variations of collecting a star's energy should be covered by the same general rationale used in the refutation. And while one would normally think that civilizations in different star systems would already need to be aware of and have located each other in order to negotiate, this is not necessarily so: a civilization (e.g. pressed to flee or expand, again within the dark forest framework) could either broadcast indiscriminately in all directions, if signals can be set up that way, or pick all plausible nearby exoplanets and send messages there without knowing whether another civilization is present, asking for a response if there were willingness for trades or deals of any kind.
Another differentiation worth pointing out: the energy required to reach other star systems via hypothetical spaceships (or forms of attack) is not the same everywhere, at least not across vast enough distances. For example, a civilization living in a globular cluster "high up the galactic gravitational well" may find it far easier to "drop something down into the pond that is the galactic disc" than a civilization within the disc would find sending something the other way. Such asymmetries in minimum energy requirements for attack or travel can mean that some civilizations would only become dangerous to specific nearby civilizations if they, say, built a Dyson swarm. Since building one can take very long, and its construction may already be observable to a nearby civilization that would only be endangered once the project were completed, small enough or incomplete enough stages of hypothetical Dyson swarms are compatible with (or allowable within) the dark forest hypothesis: the neighbors could estimate the minimum further time the builders would need to become a threat, and the latest point at which to take them out beforehand. And yet we see none of them. However, a civilization could also store a lot of energy invisibly by other means, though there would be limits to that, and potential conflicts with other priorities for the use of such stored energy.
Take, for example, the chains of swing-by maneuvers accessible to a civilization for travel or attack: here there can be major differences, and ones that are very accessible to assess and estimate from afar. That is another strong example of asymmetries in where the developmental threshold lies for one civilization, given its environmental circumstances in space, to become a threat to another, whether by aggression or simply by being able to reach other exoplanets at all, or first. Specifically for hypothetical interstellar travel (or attack) via chains of swing-by events, the orientation of the plane in which a system's planets (more or less) sit also becomes important: one civilization's planetary plane, extended far enough outward, may contain (or come very close to containing) another civilization's system, but not vice versa. Considering the set of all star systems with their relative locations and their planetary planes (or only those systems with especially many or heavy planets usable for swing-by chains), one could obtain a kind of travel graph: a directed graph of connections along which travel is, relatively speaking and within the realm of the already nearly impossible, much easier than traveling in directions roughly orthogonal to the plane orientation (which would, however, be the fewer directions).
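The directed "travel graph" described above can be sketched concretely: an edge from system A to system B exists only if B lies close to A's planetary plane, which need not hold in reverse. The positions, plane normals, and angular tolerance below are hypothetical toy values:

```python
import math

# Each star system: a position in space plus the unit normal of its
# planetary plane. A directed edge A -> B exists if the direction to B
# lies within a small angle of A's planetary plane (swing-by chains in
# A's plane point roughly toward B). Note the asymmetry: A -> B does
# not imply B -> A.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def in_plane(a_pos, a_normal, b_pos, tol_deg=5.0):
    """True if B lies within tol_deg of A's planetary plane."""
    d = [q - p for p, q in zip(a_pos, b_pos)]
    if norm(d) == 0:
        return False
    # Angle between the direction to B and A's planetary plane.
    sin_angle = abs(dot(d, a_normal)) / (norm(d) * norm(a_normal))
    return math.degrees(math.asin(min(1.0, sin_angle))) <= tol_deg

systems = {
    "A": {"pos": (0, 0, 0),    "normal": (0, 0, 1)},  # plane = xy-plane
    "B": {"pos": (10, 0, 0.2), "normal": (1, 0, 0)},  # plane = yz-plane
    "C": {"pos": (0, 0, 10),   "normal": (0, 0, 1)},
}

edges = [(a, b) for a in systems for b in systems if a != b
         and in_plane(systems[a]["pos"], systems[a]["normal"], systems[b]["pos"])]
print(edges)  # B sits near A's plane, but A is far from B's plane
```

Here B is reachable from A but not vice versa, which is exactly the kind of one-way connection the comment argues would shape which civilizations can threaten which.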
A qualitatively different approach to refuting the dark forest hypothesis would be to argue for scenarios in which some civilization ambitious for expansion or galactic dominance "just goes for it" and tries to outpace the containment efforts of any other civilizations possibly present in the galaxy. Given the common claim that a civilization could allegedly "take over a galaxy" within millions or hundreds of millions of years, yet even isolated galaxies do not appear full of Dyson swarms or the like, this indicates that no dominant civilization holds a monopoly over any dwarf galaxy or globular star cluster either. As for "relativistic objects" sent after a civilization that is supposed to hide out of fear of them: if a supernova occurs, or some dense space dust moves within some millions of years between a civilization and any other potential civilizations (plausible from just one or a few supernovae within a small time period, especially since supernovae can occur "rather simultaneously" if the stars formed in the same starburst phase; and a civilization can also sit at the edge of a spiral galaxy's disc and could, in wise foresight, have first scanned all star systems that would not, or not as early, be hidden behind a dense dust or gas cloud, so as to rule out threats from the remaining places), then being blocked off from such threats would give a civilization the opportunity to be safe, expand, and afterward no longer be easily taken out (staying within this framework, including the hypothetical possibility of interstellar travel). Such cases apparently do not happen either. At most, intense focused-laser threats at wavelengths that can pass through space dust and gas could conceivably prevent that.
Another case would be civilizations "playing the waiting game" (unless civilizations could expand while hiding at all times, but then at some point there would be no need to hide anymore): checking, based on the development of the respective stars, whether another civilization's natural lifespan may end sooner than one's own, and similarly for other civilizations. More and more young civilizations could emerge and become threats before the old threatening ones were gone, but one could also try to prevent that via sterilization of places (to the extent possible) or contamination prevention, though one could not necessarily keep long-term control over all planets even in a small galaxy this way. And if all current civilizations in a galaxy tried the same in order to have a chance to eventually expand, the ones that would be gone sooner could hypothetically counteract the sterilization plans of the longest-lived civilization (set to then take over the galaxy) by contaminating other planets, possibly allowing new civilizations to arise as new threats.
Very interesting: the smallest known dwarf galaxies have only a few thousand stars (and can nonetheless be very old), yet there are no Dyson swarms or any technosignatures, even though a civilization emerging in one of them could check "in no time" whether it is "the only civilization in town" and (presumably, within the fictional dark forest framework) dominate such a dwarf galaxy.
Two more arguments against the dark forest hypothesis: 1. Civilizations make mistakes by which they can become visible to other civilizations (assuming not having been found yet is the only thing saving them from being attacked, rather than other reasons keeping them safe even while visible; in the latter case, once visible, they might as well stop caring about hiding, unless growth in energy availability, rather than visibility in and of itself, is what instigates aggression). They might not realize this kind of risk, or not think in this game-theoretic mindset (and people have pointed out that aliens may have a different psyche after all); or they might contaminate nearby planets in ways, or with microbes, that could not naturally have gotten there, changing the other planets' atmospheres in a manner detectable from afar as a technosignature that was not there before. For example, a star's wind might be so strong that microbes from a biosphere on an outer planet could not naturally reach a planet closer to the star, since only the heavier (and rarer) asteroids could eject space rocks carrying protected microbes for ballistic lithopanspermia against such stellar winds. So if no other civilization were present (or none that had not already been taken out after becoming visible through a mistake) within a small galaxy, or within the whole region around a civilization from which aggression toward it would be feasible even in theory, then the remaining civilization could start expanding and allow itself to be visible. 2. Even if civilizations make no mistake per se by which they become visible, under the right circumstances, e.g. stellar gravitational lensing for viewing exoplanets and stars precisely enough at high resolution, they could be found and presumably taken out (possibly making Dyson swarms irrelevant in this context). — Preceding unsigned comment added by 195.192.195.234 (talk) 11:06, 25 March 2024 (UTC)
Original conversation
This section is long and not that interesting. Can it be shortened? 88.212.128.82 (talk) 13:27, 29 January 2023 (UTC)
- Done. I agree and took a crack at shortening it. –CWenger (^ • @) 21:25, 29 January 2023 (UTC)
Alistair Reynolds "Inhibitor" hypothesis probably needs a mention
In the Revelation Space universe of several novels, he suggests that early in the life of our galaxy one of the first space-faring civilizations came to the conclusion that it is harmful for a society to expand beyond its home star system, and so they set up a way of detecting and destroying space-faring cultures whenever they arose. Steve77moss (talk) 05:47, 11 February 2023 (UTC)
- Meta comment: start including pop culture (Dark Forest) in a science article, and it'll attract more.... Geogene (talk) 15:28, 11 February 2023 (UTC)
- I think what we actually need is to demand WP:SECONDARY WP:DUE-establishing coverage of stuff like this. Our own interpretations of these novels is not enough. We need secondary reliable sources to establish these connections for us. That is also how we, through WP:RSUW, prevent over-proliferation of these pop culture one-off mentions. — Shibbolethink (♔ ♕) 16:51, 11 February 2023 (UTC)
This article is about real life alien intelligences, not about fictional ones. Works of fiction are not valid references. Cambalachero (talk) 22:41, 11 February 2023 (UTC)
Okay, but what if a fictional scenario brings our attention to a real-universe possibility? You needn't mention the books; it could just say "Explanation xxx: an aggressively anti-spacefaring culture or other entity may be snuffing out interstellar travel whenever it arises".
To me that's a real non-fictional hypothesis. I do though agree that this article isn't the place for sharing about our favourite stories... Steve77moss (talk) 03:27, 12 February 2023 (UTC)
Is this relevant?
Harvard physicist plans expedition to find 'alien artefact' that fell from space. Doug Weller talk 15:36, 26 April 2023 (UTC)
- The only real news is that they will search for that object. It is of scientific interest because it is a meteor that came from beyond the Solar System, and that's a thing even if no aliens were involved. But as for using it on Wikipedia, I think that right now it's only relevant for the biography of Avi Loeb. The meteor itself may have an article, if it's retrieved, studied and there's something to say about it. I don't think it is relevant for this article. Cambalachero (talk) 16:09, 26 April 2023 (UTC)
It is worthwhile to mention the origin of this term
According to articles published in the peer-reviewed journal Astrobiology, the term "Fermi paradox", though widely known, inaccurately reflects Fermi's views regarding the feasibility of interstellar travel and the potential existence of intelligent extraterrestrial life. In addition, it incorrectly attributes ideas primarily from Hart and Tipler to Fermi, using his reputation for endorsement. Furthermore, it suggests a logical inconsistency where there isn't one.
1. The Fermi Paradox Is Neither Fermi's Nor a Paradox
2. Fermi's Paradox Is a Daunting Problem—Under Whatever Label
3. The So-Called Fermi Paradox Is Misleading, Flawed, and Harmful
I think this is worth mentioning. I added this, but it was removed immediately with the reason "unattributed opinions".
On the contrary, all the sources cited in the very first paragraph introducing the "Fermi paradox" are online media websites. I don't understand how online media websites are more reliable than a serious peer-reviewed scientific journal (except for the most prestigious multidisciplinary journals like Nature and Science, Astrobiology is likely the best in this area).
Ortsaxu (talk) 22:48, 3 October 2023 (UTC)
- Those seem like extravagant claims to source from two WP:PRIMARY papers in a single journal. What's more, they were stated in Wikivoice and added directly to the lead. Geogene (talk) 23:30, 3 October 2023 (UTC)
"An article says..."
The second sentence of this article says: "As a 2015 article put it, 'If life is so easy, someone from somewhere must have come calling by now.'" It is referenced to a New York Times article explaining the Fermi paradox for the layman. Valid as a reference, but quoted that way it seems more noteworthy than it really is. It may be better to summarize the article in wikivoice. Cambalachero (talk) 04:44, 22 October 2024 (UTC)
- How's that? Remsense ‥ 论 07:07, 22 October 2024 (UTC)