According to the head of OpenAI, the company behind the artificial intelligence chatbot ChatGPT, a bug allowed some users to see the titles of other users' conversations.
Users of the social networking sites Reddit and Twitter had posted screenshots of conversation histories, claiming that the records did not belong to them.
OpenAI CEO Sam Altman said the company feels “terrible,” but that the “significant” issue has since been fixed.
Many users, though, remain concerned about privacy on the platform.
Millions of people have used ChatGPT to draft letters, write music, and even write code since it launched in November of last year.
Every interaction that a user has with the chatbot is recorded and saved in the chat history bar, where it may be accessed at a later time.
But as early as Monday, users began seeing conversations in their history that they said they had never had with the chatbot, and the reports continued throughout the week.
One user on Reddit posted a picture of their discussion history, which included chats in Mandarin with names like “China Socialist Growth.”
On Tuesday, the company told Bloomberg that it had briefly taken the chatbot offline late on Monday to fix the error.
It added that users had not been able to see the contents of the affected chats, only their titles.
OpenAI's chief executive said the company would soon conduct a “technical postmortem.” Nevertheless, the error has left users worried that private information they have entered into the tool could be exposed.
The incident underscored the fact that OpenAI has access to users' conversations.
The company says, however, that this data is used only after any information that could identify an individual has been removed.
The bug was discovered just a day after Google showed its new chatbot, Bard, to a group of beta testers and journalists, making the gaffe particularly ill-timed.
Google and Microsoft, which is also a major investor in OpenAI, have been racing to dominate the rapidly expanding market for artificial intelligence products.
But the pace at which new products and updates are being released has left many people concerned that errors like this one could have harmful or unintended consequences.