
Technology companies are rushing to develop human-level artificial intelligence, a development that poses one of the greatest risks to humanity. Last week, John Carmack, a software engineer and video game developer, announced that he has raised $20 million to start Keen Technologies, a company dedicated to building human-level AI. He’s not alone: there are currently 72 projects around the world aimed at developing human-level AI, also known as AGI – an AI that can perform any cognitive task at least as well as humans.

Many have expressed concern about the effects that even today’s artificial intelligence, which is far from human-level, is already having on our society. The rise of populism and the attack on the United States Capitol, the Tigray War in Ethiopia, increased violence against Kashmiri Muslims in India and the genocide targeting the Rohingya in Myanmar have all been linked to the use of AI algorithms in social media. The platforms using these technologies tended to amplify hateful content because their algorithms identified such posts as engaging, and thus profitable for social media companies; this in turn caused massive damage. Even for today’s AI, then, serious attention to safety and ethics is crucial.

But the plan of these tech entrepreneurs is now to build far more powerful human-level AI, which will have far greater effects on society. In theory, these effects could be very positive: automating intelligence, for example, could free us from work we’d rather not do. But the negative effects could be just as large, or larger.

Oxford academic Toby Ord spent nearly a decade quantifying the risks of human extinction from a variety of causes, summarizing the results in a book aptly titled “The Precipice.” According to this rigorous academic work, supervolcanoes, asteroids and other natural causes have only a small chance of driving humanity to complete extinction. Nuclear war, pandemics and climate change score somewhat higher. But what tops this apocalyptic ranking? You guessed it: human-level artificial intelligence.

And it’s not just Ord who believes that full human-level AI, unlike today’s relatively impotent vanilla version, could have extremely dire consequences. The late Stephen Hawking, tech CEOs like Elon Musk and Bill Gates, and AI academics like Stuart Russell of the University of California, Berkeley, have all publicly warned that human-level AI could lead to nothing short of disaster, especially if it is developed without extreme caution and deep consideration of safety and ethics.

And who is going to build this extremely dangerous technology now? People like John Carmack, a “hacker ethics” advocate who previously programmed video games such as “Commander Keen.” Will Keen Technologies build human-level AI with that same focus on safety? Asked on Twitter about the company’s mission, Carmack replied: “AGI or Bust, Through Mad Science!”


Carmack’s lack of concern about these kinds of risks is nothing new. Before starting Keen Technologies, Carmack worked side by side with Mark Zuckerberg at Facebook, the company responsible for many of the harmful effects of AI described above. Facebook applied technology to society without regard for the consequences, in line with its motto “Move fast and break things.” But if we build human-level AI that way, humanity itself could be broken.

In the interview with computer scientist Lex Fridman in which Carmack announced his new AGI company, he shows outright disregard for anything that stands in the way of rampant technology development and profit maximization. According to Carmack, “Most people with vision are a little less effective.” On the “AI ethical stuff,” he says, “I really stay away from all those discussions or even really think about it.” People like Carmack and Zuckerberg may be good programmers, but they’re just not wired to consider the big picture.

If they can’t, we must. A democratic society should not let tech CEOs determine the future of humanity without regard to ethics or safety. That’s why we all need to educate ourselves about human-level AI, especially those of us who are not technologists. We need to agree on whether human-level AI does indeed pose an existential threat to humanity, as most academics studying AI safety and existential risk say. And we need to figure out what to do about it; some form of regulation seems inevitable. The fact that we do not yet know which form of regulation would effectively reduce the risk should not be a reason for regulators to ignore the problem, but rather a reason to develop effective regulation as a top priority. Non-profit organizations and academics can help with this. Doing nothing, and thus letting people like Carmack and Zuckerberg decide the future for all of us, could very well lead to disaster.
