
AI ‘Pause’ Promoted by Musk Comes With Risks Of Its Own, Expert Says


More than 1,000 tech leaders and experts, including Elon Musk and Steve Wozniak, have signed an open letter calling on AI developers to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the latest large language model created by OpenAI.

The letter, organized by the nonprofit Future of Life Institute, which receives most of its funding from the Musk Foundation, warns of risks ranging from "propaganda and untruth" to the "loss of control of our civilization" due to "nonhuman minds" outsmarting and replacing humans.

But critics say the risks of pausing the development of more powerful AI systems may outweigh the benefits. Lisa Palmer, Chief AI Strategist at aiLeaders, says that smaller firms could lose ground to Big Tech companies, and the U.S. could lose ground to other countries — with geopolitical implications.

"It could lead to brain drain, where top AI researchers and developers move to countries or companies that are still actively developing AI, causing a loss of talent and expertise in the paused countries or companies," Palmer said.

While the pause could allow time for the development of safety protocols, or encourage the development of ethical and responsible AI systems, Palmer said those benefits would disproportionately go to Big Tech.

"Increasing the cost of developing AI, as companies may need to invest more in safety protocols and regulatory compliance, stifles innovation in smaller or less well-funded companies," Palmer said.

And while such a pause might increase public trust in AI technology, it could also slow the discoveries that would benefit the public most.

"It would put a hold on potential breakthroughs in areas such as healthcare, climate change, and energy efficiency, which would have significant implications for society," Palmer said.

TMX contributed to this story.