
Photograph: Edward Olive/Getty Images
ChatGPT may be the best-known, and potentially most valuable, algorithm of the moment, but the artificial intelligence techniques OpenAI uses to provide its smarts are neither unique nor secret. Competing projects and open source clones may soon make ChatGPT-style bots available for anyone to copy and reuse.
Stability AI, a startup that has already developed and open-sourced advanced image generation technology, is working on an open competitor to ChatGPT. "We are a few months from release," says Emad Mostaque, Stability's CEO. A number of competing startups, including Anthropic, Cohere, and AI21, are working on proprietary chatbots similar to OpenAI's bot.
The imminent influx of sophisticated chatbots will make the technology more plentiful and visible to consumers, as well as more accessible to AI companies, developers, and researchers. This can accelerate the rush to make money with AI tools that generate images, code, and text.
Established companies like Microsoft and Slack are integrating ChatGPT into their products, and many startups are scrambling to build on top of a new ChatGPT API for developers. But the technology's wider availability may also complicate efforts to anticipate and mitigate the risks that come with it.
ChatGPT's beguiling ability to provide coherent answers to a wide range of queries also leads it, at times, to fabricate facts or adopt problematic personas. And it can help with malicious tasks such as producing malware code or spam and disinformation campaigns.
As a result, some researchers have advocated slowing the deployment of ChatGPT-like systems while their risks are assessed. "There's no need to stop research, but we could certainly regulate widespread deployment," says Gary Marcus, an AI expert who has sought to draw attention to risks such as AI-generated disinformation. "We might, for example, require studies on 100,000 people before releasing these technologies to 100 million people."
Wider availability of ChatGPT-style systems, and the release of open source versions, would make it harder to restrict research or broader deployment. And the competition among companies large and small to adopt or match ChatGPT suggests little appetite for slowing down; instead, it appears to spur the technology's proliferation.
Last week, LLaMA, an artificial intelligence model developed by Meta that is similar to the one at the core of ChatGPT, was leaked online after being shared with some academic researchers. The system could serve as a key building block in creating a chatbot, and its release raised concern among those who fear that the AI systems known as large language models, and chatbots built on them like ChatGPT, will be used to generate misinformation or to automate cybersecurity breaches. Some experts argue that these risks may be exaggerated, while others suggest that making the technology more transparent will actually help others protect against misuse.