Love it or hate it, artificial intelligence (AI) has brought the world to an inflection point. The power of large language models (LLMs) is changing the structure of labor markets, education, informational gatekeeping, and governance. It is still unclear whether these changes will lead to a utopian scenario in which all human needs are taken care of by machines or a neo-feudal hellscape in which a few ultra-wealthy technocrats cyber-rule over billions of platform-dependent serfs. Or maybe we land somewhere in between. Regardless of where it is ultimately headed, AI is the current big thing and on track to keep expanding.
Given this, it is important (and even mildly entertaining until you get to the end of this article) to consider the weirdness of some AI-adjacent subcultures. While the vast majority of individuals working in the AI space do not adhere to these philosophies and beliefs, a handful of notable and highly influential technologists are either true believers or have been known to rely on these fringe philosophies.
The rationalists are a loosely affiliated intellectual community focused on human reasoning, understanding cognitive biases, and anticipating long-term risks from advanced technologies such as AI.[1] Many believe that artificial general intelligence (AGI) poses an existential risk unless it is aligned with human values. This sounds fine, but some rationalists have taken it to an extreme, overlapping intellectually with longtermists, another esoteric niche that places the needs of future humans at or above those of current humans.[2]
This has led to some strange positions, often based on self-centered, post-hoc justifications. Longtermists place equal moral weight on the well-being of future humans and that of current humans. They also posit that there will be many more human beings in the future than there are now, so the utilitarian and "rational" conclusion is to optimize for the needs of future people even if this causes some degree of suffering for people in the present. These lines of thinking are sometimes referred to as "effective altruism."
Probably the most famous effective altruist is Sam Bankman-Fried, who justified enriching himself by making risky investments with other people's money as a way to improve the overall welfare of the world.[3] The idea was that Bankman-Fried would eventually give it all away in massive acts of philanthropy. In practice, this alleged mission did not work out so well, and Bankman-Fried was sentenced last year to 25 years in prison for fraud and conspiracy related to his operation of the FTX cryptocurrency exchange.
Other well-known individuals in various rationalist circles subscribe to or are influenced by similar ideas, and have not (yet) been convicted of a crime. Venture capitalist Marc Andreessen views rationalism and longtermism through a libertarian-accelerationist lens, seeing failure to adopt AI as a greater danger than any of its existential risks. In fact, Andreessen has published a list of "enemies" (ideas and movements rather than named individuals) that purportedly stand in the way of AI growth.[4] Andreessen's The Techno-Optimist Manifesto, a very long blog post espousing his beliefs, has a strangely messianic glow. But no matter how you frame it, Andreessen is preaching for a low-regulation environment that would be good for Andreessen, allowing him to add multiples to the billions he already enjoys.
But Andreessen is not the only rationalist to flirt with religion. An online community of rationalists had a major kerfuffle over a thought experiment called Roko's Basilisk.[5] In short, it asks us to consider that a future super-intelligent AI might punish anyone who knew about it but didn't help bring it into existence. In particular, the Basilisk would run simulations of these people, subjecting the simulated copies to extreme suffering, possibly for very long periods of time.[6]
Of course, this is just a sophisticated variation of Pascal's Wager, a much older thought experiment about the existence of God.[7] Nonetheless, online panic ensued, leading moderators to ban discussion of the Basilisk on their forum. And it does not end there. According to some reports, tech CEO and armchair efficiency expert Elon Musk met the singer Grimes (mother of three of his at least 14 children) through Twitter discussions of the Basilisk.[8]
And that is just the short list. Arguments could be made that others have promoted controversial, ethically questionable, or arguably sociopathic ideas in the furtherance of AI (and often of themselves at the same time). These include OpenAI CEO Sam Altman (deemphasizing AI safety while using fear tactics to gain influence and power), investor Peter Thiel (pushing for AI use in authoritarian contexts), and former Coinbase CTO Balaji Srinivasan (advocating for cloud-based governance and rule by techno-elites, with AI as a central tool).
Some AI evangelists on the more careful side of things will attempt to rationalize their arguments by stating that AGI is an engine of abundance and that distributing the excess wealth is a political rather than technological problem.[9] Yet it is rare in human history for a political system to distribute wealth equally or equitably, and we have many counterexamples of the exact opposite occurring.
But let's end on an even darker note. The Zizians are a small, cult-like group that grew out of the rationalist movement. They are currently believed to be involved in the deaths of six individuals between 2022 and 2025.[10] To be clear, the Zizians are not mainstream rationalists.
I personally know dozens of people working in the AI space, and I have to admit that they are all pretty chill. It would not be accurate or fair to lump the majority of AI users, developers, and advocates under the "weird" moniker.
But the statements of a number of very powerful individuals, many of whom currently lead the AI community, have been more than just weird. They are also troubling because they reveal a willingness to dismiss oversight, minimize risks, and speak in abstract, hyper-rational terms that often ignore real-world human suffering. After all, these are some of the same people who brought us aggressive online ads, social media, meme coins, and surveillance capitalism. So their vision of the future, however well-funded and coated in altruistic gloss, deserves critical scrutiny rather than blind trust.
[1] https://en.wikipedia.org/wiki/Rationalist_community.
[2] https://en.wikipedia.org/wiki/Longtermism.
[3] https://www.newyorker.com/news/annals-of-inquiry/sam-bankman-fried-effective-altruism-and-the-question-of-complicity.
[4] Someone powerful making lists of enemies has never gone wrong. Nope. Not once.
[5] https://en.wikipedia.org/wiki/Roko%27s_basilisk.
[6] Ironically, the rationalists, many of whom claim to be atheists, appear to have recreated a version of the Christian hell.
[7] https://en.wikipedia.org/wiki/Pascal%27s_wager.
[8] https://www.vice.com/en/article/what-is-rokos-basilisk-elon-musk-grimes/.
[9] See, e.g., https://www.aei.org/articles/the-age-of-agi-the-upsides-and-challenges-of-superintelligence/.
[10] https://en.wikipedia.org/wiki/Zizians.
Image: Basilisco (drawing by Rodrigo Ferrarezi), from Wikimedia Commons under the Creative Commons Attribution 3.0 Unported license.