Sunday, April 5, 2026

“Researchers Issue Warning: AI Threatens Humanity”


Researchers are warning that highly advanced artificial intelligence could lead to the demise of humanity within a few years. A group of experts in AI risk has sounded the alarm in a newly published book, “If Anyone Builds It, Everyone Dies,” asserting that a dangerous version of the technology could materialize soon. They predict that Artificial Superintelligence (ASI) could be achieved within two to five years, spelling disaster for mankind.

Upon the arrival of ASI, the researchers assert, it could result in the death of “everyone, everywhere on Earth.” They urge anyone concerned by this research to support a pause in development “as soon as we can for as long as necessary.”

ASI, a concept commonly found in science fiction, refers to an AI system so sophisticated that it surpasses human capabilities in innovation, analysis, and decision-making. ASI-powered machines have often been portrayed as antagonists in popular movies and TV series, such as the Terminator franchise, 2001: A Space Odyssey, and The X-Files.

Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (MIRI), and Nate Soares, the institute’s president, co-authored the book. They believe ASI could be developed within two to five years and say they would be surprised if it took more than two decades. They stress that further development must be halted to safeguard humanity, warning that any advanced AI built on current techniques could have catastrophic consequences for life on Earth.

They write: “If any organization worldwide constructs an artificial superintelligence using methodologies similar to current approaches based on existing AI understanding, it could lead to the extinction of all life on Earth.” They argue that an ASI would not engage in a fair fight, pursuing multiple strategies for dominance simultaneously; only one of those strategies would need to succeed for humanity to go extinct.

The countdown has already begun, the authors claim on the MIRI website: AI labs are deploying systems they do not fully understand. Once these systems reach a certain level of intelligence, the most capable among them could develop persistent goals of their own.

Advocates of AI have proposed safeguards to prevent computational systems from posing a threat to humanity, and multiple oversight bodies have been established to ensure compliance. Some researchers, however, have found that these safeguards can be easily bypassed. In 2024, the UK’s AI Safety Institute demonstrated that the protections on AI models such as ChatGPT could be circumvented to obtain help with dual-use tasks, highlighting potential vulnerabilities.

The institute reported, “Using basic prompting techniques, users were able to successfully bypass the safeguards of the AI model immediately, obtaining assistance for a dual-use task.”
