Vitalik Buterin, co-founder of Ethereum, has raised alarms regarding the swift evolution of artificial intelligence (AI), suggesting that superintelligent AI could emerge sooner than anticipated.

In a blog post dated January 5, he advocates for “defensive acceleration” (d/acc) strategies to mitigate potential risks associated with AI, particularly in military contexts. Buterin emphasizes decentralized AI systems that remain closely tied to human decision-making as a safeguard against misuse, a concern sharpened by the growing use of AI in warfare, as seen in the conflicts in Ukraine and Gaza.

He predicts that artificial general intelligence (AGI) could be just three years away, with superintelligence potentially following shortly after. Buterin argues that humanity must not only promote beneficial AI advancements but also actively hinder harmful ones to avert catastrophic scenarios, including the risk of human extinction.

To tackle these challenges, Buterin proposes several measures, including liability rules that hold users accountable for how their AI systems are applied. He also suggests “soft pause” mechanisms that would sharply reduce globally available computing capacity for one to two years, buying time for society to adapt to emerging AI threats.

Additionally, he recommends building authorization requirements into AI hardware, so that it would need approval from three international bodies, at least one of them a non-military entity, in order to keep operating.
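Buterin describes this at the policy level, but the rule reads like a multi-party authorization check of the kind familiar from multi-signature schemes in crypto. The toy sketch below is an illustration only, not drawn from the blog post: the authority names are made up, and HMAC tokens stand in for real cryptographic signatures, but the “three valid approvals, at least one from a non-military body” logic matches the rule as described.

```python
# Hypothetical sketch: a toy model of "approval from three international
# bodies, at least one non-military" as a per-device, per-week check.
# HMAC over (device_id, week) stands in for real signatures; every name
# and key here is an illustrative assumption, not part of the proposal.
import hmac
import hashlib

# Illustrative registry of approving bodies and their shared keys.
AUTHORITIES = {
    "body_a": {"key": b"secret-a", "military_affiliated": True},
    "body_b": {"key": b"secret-b", "military_affiliated": True},
    "body_c": {"key": b"secret-c", "military_affiliated": False},
}

REQUIRED_APPROVALS = 3  # approvals needed for the hardware to keep running


def sign(authority: str, device_id: str, week: int) -> bytes:
    """An authority issues its approval token for one device and week."""
    key = AUTHORITIES[authority]["key"]
    msg = f"{device_id}:{week}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()


def hardware_may_run(device_id: str, week: int, approvals: dict[str, bytes]) -> bool:
    """True only if >= 3 valid approvals exist, one from a non-military body."""
    valid = []
    for authority, token in approvals.items():
        if authority not in AUTHORITIES:
            continue
        expected = sign(authority, device_id, week)
        if hmac.compare_digest(expected, token):
            valid.append(authority)
    has_non_military = any(
        not AUTHORITIES[a]["military_affiliated"] for a in valid
    )
    return len(valid) >= REQUIRED_APPROVALS and has_non_military


# Usage: all three bodies sign off for week 12, so the device may run.
approvals = {a: sign(a, "gpu-cluster-01", 12) for a in AUTHORITIES}
print(hardware_may_run("gpu-cluster-01", 12, approvals))  # True
```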

While acknowledging that his proposals are not foolproof, Buterin stresses the urgency of taking action to manage the risks posed by rapidly advancing AI technologies. His insights underscore the need for global cooperation to ensure AI remains under human control and to minimize the potential for disastrous outcomes.
