London, March 31 (Reuters/GNA) – Four artificial intelligence experts have expressed concern after their work was cited in an open letter – co-signed by Elon Musk – demanding an urgent pause in research.
The letter, dated March 22, had gathered more than 1,800 signatures by Friday. It called for a six-month circuit-breaker in the development of systems “more powerful” than GPT-4, the new model from Microsoft-backed OpenAI that can hold human-like conversations, compose songs and summarise lengthy documents.
Since GPT-4’s predecessor ChatGPT was released last year, rival companies have rushed to launch similar products.
The open letter says AI systems with “human-competitive intelligence” pose profound risks to humanity, citing 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind.
Civil society groups in the U.S. and EU have since pressed lawmakers to rein in OpenAI’s research. OpenAI did not immediately respond to requests for comment.
The Future of Life Institute, the organisation behind the letter, is primarily funded by the Musk Foundation. Critics have accused it of prioritising imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.
Among the research cited was “On the Dangers of Stochastic Parrots”, a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.
GNA/Credit: Reuters