- OpenAI allows users to deactivate their chat history, stating that these chats will be “permanently deleted.”
- Microsoft has also taken steps to inform users about how to view and delete search history.
- AI tools can improve in part based on user feedback and conversations.
OpenAI says that user interaction is one of the things that will improve ChatGPT. But since the chatbot launched last year, this large-scale experiment has moved beyond novelty, putting pressure on the company to show it takes security and trust seriously.
ChatGPT users have already seen a pop-up warning that their conversations may be reviewed by an “AI trainer,” and they are cautioned not to enter “sensitive information.” In April, OpenAI announced that it would also give users the ability to disable the recording of their ChatGPT conversations, for greater transparency and control over their data. (Insider’s Sarah Jackson has an informative explanation of how to do this.)
According to OpenAI’s website, with conversation history turned off, those chats won’t be used to help train the tool, and the company will delete conversations held in this more private mode after 30 days. Duane Pozza, a partner at Wiley Rein LLP who advises on data protection and privacy, said the stakes can be high for both everyday users and companies handling sensitive information.
“When you look at AI chatbots, it’s possible that these tools collect a lot of personal information from consumers, including conversation history and more,” he told Insider, speaking generally about such tools rather than about any specific company.
“The average consumer or business using these tools should definitely understand their privacy policies,” he added. “They need to understand what options or settings exist and how data is collected by these tools.”
A representative for OpenAI declined to comment other than to point to the company’s resources on its website.
The privacy of user data on popular websites has come under consumer scrutiny over the past decade with the rise of social media. For example, Meta agreed to pay Facebook users $725 million in a settlement over data issues related to Cambridge Analytica.
Rudina Seseri, founder and managing partner of Glasswing Ventures, a firm that invests in AI, said the popularity of AI sites could raise similar privacy concerns for the people who use them.
“The best practice here bears repeating: don’t tell ChatGPT anything you don’t want the world to know,” she said.
“And this isn’t about any malice on OpenAI’s part. Let’s not forget that ChatGPT is a large language model,” she said. “It also has to do with the fact that if we look at the digital world as a space, the more space and reach we have, the more opportunities there are for exploitation.”
Microsoft’s new Bing search bot, which launched in February, is also catching on quickly, surpassing 100 million daily users in March. The company offers a “privacy dashboard” where users can see how their search history is being used and explore options to clear it. Microsoft also recently updated its document “The new Bing: Our approach to responsible AI.”
A note on this page states that Bing “uses your web search history to improve your search experience by showing suggestions as you type and providing personalized results.”
Microsoft also typically uses privacy measures such as encryption and retains customer data “for as long as necessary,” company officials said in a statement.
“Microsoft is providing users with transparency and control over their search data through its privacy dashboard,” a representative said.