Executives at the American artificial intelligence (AI) company AI Foundation are cautioning against building kill switches into AI systems, arguing that such a move could increase the chances of a superintelligence that is hostile toward human civilization.
According to a new Yale CEO Summit survey, 42% of polled CEOs agreed that AI could potentially end humanity within five to ten years.
Citing the survey, AI Foundation CMO and Chair Lars Buttler said the debate around AI needs to be elevated and suggested that people react emotionally to the new technology because they lack an understanding of what is happening behind the scenes.
However, both Buttler and CEO Rob Meadows warned of several concerns surrounding the advancement of AI and the possible creation of an artificial general intelligence (AGI) capable of reasoning and decision-making equal to or beyond that of a human.
“If you forget to rule out something, you know, it’s an unknown unknown, trouble will happen and that will be forever our relationship with AI,” Buttler added.
Buttler says that once humanity achieves AGI, those AI priorities flip almost entirely. An AGI, unlike today's narrow AI, will be able to understand the world around it fully and will likely avoid the accidental mistakes of its predecessors. However, it may also refuse to do what you tell it to.
To highlight the potential harm, the American tech entrepreneur revealed that the AI Foundation has internal research showing that just a few minutes of a person's voice and video are enough to create a highly realistic deepfake.
“I could call up your mom and your mom would not know the difference, that she just got FaceTime’d by you. We can’t put that out into the world. You know, what we can do with it is use it to put the antidote to that out in the world,” Meadows added.
Despite concerns across industries, Meadows said the AI Foundation believes the world should not slow down work on the technology, breaking from the position of prominent figures who signed a letter urging a temporary pause on AI development back in March.
“All of the most dangerous technologies and, you know, tools and other things that have been invented over the years did way more good than harm. And we need to [advance] in a thoughtful way and be careful not to be reckless. But we’re strong on the side of if anything, we need to go faster and educate more people on what is possible here,” he said.
Experts say AI-assisted fraud schemes could cost taxpayers $1 trillion in a single year. (Getty Images)
Instead, Buttler suggested that humanity should take a collaborative approach toward AGI and noted that doing so would significantly increase the chance of reciprocity on the part of the superintelligence, allowing everyone to benefit.
Meadows likened the relationship between humans and AGI to a dog and its owner. A dog doesn’t really understand fully what the owner is doing or saying, but it understands enough. If the owner says “walk,” the dog will become excited. Similarly, the AGI may operate on a level that humans cannot fully decipher—or at least until humans have chips implanted to communicate with the intelligence adequately.

