ChatGPT for Mac app flaw left users’ chat history exposed
- by nlqip
Is it only a few weeks since OpenAI announced its new app for macOS computers?
To much fanfare, the makers of ChatGPT revealed a desktop version that allowed Mac users to ask questions directly rather than via the web.
“ChatGPT seamlessly integrates with how you work, write, and create,” bragged OpenAI.
What could possibly go wrong?
Well, anyone rushing to try out the software may now be ruing their impatience, because – as software engineer Pedro José Pereira Vieito posted on Threads – OpenAI’s ever-so-clever ChatGPT software was doing something really rather stupid.
It was storing users’ chats with ChatGPT for Mac in plaintext on their computer. In short, anyone who gained unauthorised access to your computer – whether a malicious remote hacker, a jealous partner, or a rival in the office – would be able to easily read your conversations with ChatGPT and the data associated with them.
As Pereira Vieito described, OpenAI’s app was not sandboxed, and it stored all conversations unencrypted in a folder accessible to any other process running on the computer (including malware).
“macOS has blocked access to any user private data since macOS Mojave 10.14 (6 years ago!). Any app accessing private user data (Calendar, Contacts, Mail, Photos, any third-party app sandbox, etc.) now requires explicit user access,” explained Pereira Vieito. “OpenAI chose to opt-out of the sandbox and store the conversations in plain text in a non-protected location, disabling all of these built-in defenses.”
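To picture what that means in practice, here is a minimal sketch in Swift, with a hypothetical file path (the article does not name the actual location OpenAI used), of how any process running under the same user account could have read those unprotected plaintext files:

```swift
import Foundation

// Illustrative only: the exact path the ChatGPT app used is not given in the
// article, so this location is a made-up stand-in.
let conversationsDir = FileManager.default
    .homeDirectoryForCurrentUser
    .appendingPathComponent("Library/Application Support/com.example.chatgpt/conversations")

// Because the app opted out of the sandbox and the files were plaintext,
// any process running as the same user -- including malware -- could simply
// enumerate the folder and read every chat, with no prompt and no entitlement.
if let files = try? FileManager.default.contentsOfDirectory(
    at: conversationsDir,
    includingPropertiesForKeys: nil
) {
    for file in files {
        if let chat = try? String(contentsOf: file, encoding: .utf8) {
            print("Readable by anyone on this Mac:\n\(chat)")
        }
    }
}
```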
Thankfully, the security goof has now been fixed. The Verge reports that after it contacted OpenAI about the issue raised by Pereira Vieito, a new version of the ChatGPT macOS app was shipped, properly encrypting conversations.
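The report does not detail exactly how the updated app protects chats on disk. As a rough illustration only (not OpenAI’s actual implementation), the general approach – encrypting each conversation before it is written, with the key held somewhere like the Keychain rather than alongside the data – might look something like this with CryptoKit:

```swift
import Foundation
import CryptoKit

// Illustrative sketch, not OpenAI's actual fix: in a real app the key would
// be stored in the Keychain (or protected by the Secure Enclave) rather than
// generated ad hoc like this.
let key = SymmetricKey(size: .bits256)

// Seal the conversation with AES-GCM before it ever touches the disk.
func writeEncrypted(_ conversation: String, to url: URL) throws {
    let sealed = try AES.GCM.seal(Data(conversation.utf8), using: key)
    // `combined` packs nonce + ciphertext + authentication tag into one blob.
    try sealed.combined!.write(to: url)
}

// Reverse the process when the app needs to display the chat again.
func readEncrypted(from url: URL) throws -> String {
    let box = try AES.GCM.SealedBox(combined: Data(contentsOf: url))
    let plaintext = try AES.GCM.open(box, using: key)
    return String(decoding: plaintext, as: UTF8.self)
}
```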
But the incident acts as a salutary reminder. Right now there is a “gold rush” mentality when it comes to artificial intelligence. Firms are racing ahead with their AI developments, desperate to stay ahead of their competitors. Inevitably, that can mean less care is taken over security and privacy as shortcuts are made to push out new features at an ever-faster pace.
My advice to users is not to make the mistake of jumping onto every new development on the day of release. Let others be the first to investigate new AI features and developments: they can be the beta testers who try out AI software when it is most likely to contain bugs and vulnerabilities. Only when you are confident that the creases have been ironed out should you try it for yourself.