Lately, I have been wondering why most major AI companies are so concerned with the “existential risks” that, they hypothesize, Artificial General Intelligence (AGI) will inevitably introduce.1 What even are existential risks, or AGI? And why do companies like OpenAI or Anthropic push the story that AI is dangerous so hard, even though they are the ones developing it in the first place? Ultimately, as a student of Computer Science and Applied Mathematics, I want to find out which AI research directions are worth spending my time on and which are inherently destructive to society and therefore worth avoiding. Answering the question of why many of the biggest AI companies chase the development of AGI while discussing its potential existential risks is instrumental to getting closer to truly impactful AI research.
Existential risks, in Nick Bostrom’s definition, arise from any event that would destroy a posthuman paradise full of “value”.2 While many risks exist in our current world, like environmental catastrophe or nuclear war, the notion of existential risk is concerned with the far future. After all, a nuclear war or severe weather disasters would not end in the extinction of all of humanity.3 A popular example of an existential risk is an artificial intelligence system that is tasked with removing suffering from the world, realizes that humans are the creators of suffering, and therefore kills all humans.2

Bostrom views himself as a transhumanist, an ideology that aims for the enhancement of humanity, or the creation of a posthuman species, by means of technology. Creating an artificial intelligence as capable as or more capable than humans, colonizing space, or uploading the human mind to the cloud to become immortal are all transhumanist ideas. Many people find those ideas fascinating, and I agree: as a computer science student, they strike me as the pinnacle of engineering and science. However, it is important to understand who is creating those technologies, and for whom. Bostrom is not the first to have the idea of using technology to enhance the human species. Transhumanism was originally developed by twentieth-century eugenicists,4 people who advocated for improving the genetic quality of the human population, an idea the Nazis ultimately used as “scientific” legitimacy for killing millions of people during the Second World War. And some people promoting transhumanism today, such as Peter Singer, a professor of bioethics at Princeton University, welcome ideas like infanticide, the killing of infants with severe disabilities.4

Transhumanism is deeply connected to other ideologies, such as extropianism, which aims to improve the intelligence of a system, or singularitarianism, which seeks a fundamental rupture in human history through technological progress. Longtermism, which longs to maximize the total quantity of value in the universe in the far future, developed out of effective altruism, which optimizes for the most effective way to do maximal good in the world. Effective altruism has its home at Oxford University and is supported by Silicon Valley billionaires.5 The community argues that earning as much money as possible and donating it to less fortunate countries will result in the most good in the most effective way, a principle it calls earning to give.6 No questions are asked about how the donors arrived at their wealthy positions in the first place, and there is no interest in changing the underlying system that enables environmental and human exploitation. This ideology is backed by powerful people who use their influence to steer the development of AI in the direction of their needs, by supporting politicians (Peter Thiel, for example, supported the campaign of J.D. Vance) and by creating scholarship programs and philanthropy that reward people who work on problems in line with the transhumanist ideology. Cosmism is another strand of transhumanism, though it focuses more on how posthumans could transform the universe itself. And lastly there is Rationalism, not directly connected to transhumanism, whose community around the blog LessWrong is a place where people with a transhumanist mindset gather to rationalize their abstract ideas into formal arguments. Together, these ideologies are known as the TESCREAL bundle.4
The leaders of major AI companies such as OpenAI or Anthropic subscribe to the TESCREAL set of ideologies.4 The quest to create Artificial General Intelligence, a system that matches or surpasses humans in any cognitive ability, is fundamentally transhumanist: like the first-wave eugenicists, it aims to create an entity superior to humans. AI researchers working on Large Language Models (LLMs) such as ChatGPT, Claude, or Gemini believe that scaling LLMs might lead to AGI. This argument is used to justify enormous spending on data center construction in the US, like the Stargate project.7 Prominent researchers such as Yann LeCun disagree, yet the quest for AGI remains. While it is arguably very impressive how language models have improved in recent years, they are fundamentally unscoped systems that cannot be stress-tested, because their set of tasks is undefined. Accordingly, the AI Safety research agenda, which seeks to minimize the existential risk of AGI, has gained much support from TESCREAL-minded people.8 And remember, AGI is a hypothetical technology that does not exist yet.

To be clear, preventing the misuse of AI is indeed important, as AI poses real risks to society and the world.9 However, minimizing the existential risks of AGI prioritizes the future and ignores the present. Misuse and harm are happening right now. The data centers for generative AI require immense amounts of energy and water, degrading the water quality and energy affordability of the communities where they are built. People in the Global South were hired to filter the most dehumanizing and violent content to enable Reinforcement Learning from Human Feedback, so that large language models such as ChatGPT are kind to their users. Marginalized groups are discriminated against right now by facial recognition software used by the police, which is less accurate on dark skin because of the imbalance of skin colours in its training data. Children who used large language models extensively during low periods of their lives have ended up committing suicide with the encouragement of the model. You can read more about these real risks in Karen Hao’s brilliant book.10 It is harmful to ignore those present risks and talk only about the existential risks of AGI.

So why are existential risks the main topic? It is simple: this view of the world makes it easy for the people already in power to accumulate more wealth and power. By painting an apocalyptic vision of the world and spreading fear and uncertainty, the AI companies create a hype that captures the public discourse and financial assets. And more than that, this technology can potentially enable a modern kind of fascism. Fascist systems have always embraced technology as a means of controlling people, and AI systems, when misused, pose similar risks, for example through mass surveillance or misinformation campaigns. Steering the public conversation towards existential risks drives attention away from how this technology is already being used to capture more power and control over the population.11
So what, then, is meaningful AI research? Timnit Gebru, founder of the Distributed AI Research Institute, and Émile Torres, a postdoctoral scholar working on the ethics of AI, argue that building well-scoped systems can provide meaningful impact.4 Those systems, synonymous with narrow AI, are the exact type of AI system that TESCREALists moved away from. Examples of such systems are:
- Building language technologies for under-represented languages, where the goal is preservation and ownership instead of accuracy.12
- Projects that focus on conservation and biodiversity monitoring of the environment, especially in regions that are changing due to climate change.13
- Analyzing social injustice quantitatively, such as the impacts of apartheid.14
Throughout writing this blog post, I realized that when it comes to AI, the internet is dominated by voices from the TESCREAL bundle. As we have seen, they are capable of moving politics and public opinion to create hype and anxiety around AI. However, there are ways to do valuable work in this field that do not align with the research agenda of TESCREALists. First, we can address the environmental and sociological issues that AI is causing right now. Second, we can embrace our own research interests and ask how our enjoyment of machine learning and AI could help communities today. This might not earn as much hype or recognition as working on the latest GPT or Gemini model, but working on a scoped problem that exists today is sure to prove useful.
Thank you for reading the whole piece if you made it this far. Leave me a comment, I’m curious about your view on this topic. Until the next one!
Footnotes
1. Amodei, D. (n.d.). The Adolescence of Technology. Retrieved February 1, 2026, from https://www.darioamodei.com/essay/the-adolescence-of-technology#humanity-s-test
2. Bostrom, N. (2002). Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9. https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c
3. Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette UK.
4. Gebru, T., & Torres, É. P. (2024). The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday. https://doi.org/10.5210/fm.v29i4.13636
5. Evans, J. (2022, October 29). How did transhumanism become the religion of the super-rich? Medium. https://julesevans.medium.com/how-did-transhumanism-become-the-religion-of-the-super-rich-d670a410b01a
6. Todd, B. (n.d.). Why and how to earn to give. 80,000 Hours. Retrieved February 1, 2026, from https://80000hours.org/articles/earning-to-give/
7. OpenAI. (n.d.). Announcing The Stargate Project. Retrieved February 1, 2026, from https://openai.com/index/announcing-the-stargate-project/
8. Center for AI Safety (CAIS). (n.d.). Retrieved February 1, 2026, from https://safe.ai
9. ‘Existential to who?’ US VP Kamala Harris urges focus on near-term AI risks. (2023, November 1). POLITICO. https://www.politico.eu/article/existential-to-who-us-vp-kamala-harris-urges-focus-on-near-term-ai-risks/
10. Hao, K. (2025). Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Penguin Group.
11. Mühlhoff, R. (2025). Künstliche Intelligenz und der neue Faschismus [Artificial Intelligence and the New Fascism]. Ditzingen: Reclam.
12. Raj, A. (2024, January 23). Preserving indigenous languages with AI. TechWire Asia. https://techwireasia.com/2024/01/preserving-indigenous-languages-with-ai/
13. Salmon Vision. (n.d.). About. Retrieved February 1, 2026, from https://salmonvision.org/about/
14. Impacts of Spatial Apartheid. (n.d.). DAIR (Distributed AI Research Institute). Retrieved February 1, 2026, from https://www.dair-institute.org/projects/impacts-of-spatial-apartheid/