When two years of academic work vanished with a single click
Within a couple of years of ChatGPT coming out, I had come to rely on the artificial-intelligence tool for my work as a professor of plant sciences at the University of Cologne in Germany. Having signed up for OpenAI’s subscription plan, ChatGPT Plus, I used it as an assistant every day — to write e-mails, draft course descriptions, structure grant applications, revise publications, prepare lectures, create exams and analyse student responses, and even as an interactive tool as part of my teaching.
It was fast and flexible, and I found it reliable in a specific sense: it was always available, remembered the context of ongoing conversations and allowed me to retrieve and refine previous drafts. I was well aware that large language models such as those that power ChatGPT can produce seemingly confident but sometimes incorrect statements, so I never equated its reliability with factual accuracy, but instead relied on the continuity and apparent stability of the workspace.
But in August, I temporarily disabled the ‘data consent’ option because I wanted to see whether I would still have access to all of the model’s functions if I did not provide OpenAI with my data. At that moment, all of my chats were permanently deleted and the project folders were emptied — two years of carefully structured academic work disappeared. No warning appeared. There was no undo option. Just a blank page. Fortunately, I had saved partial copies of some conversations and materials, but large parts of my work were lost forever.
At first, I thought it was a mistake. I tried different browsers, devices and networks. I cleared the cache, reinstalled the app and even changed the settings back and forth. Nothing helped.
When I contacted OpenAI’s support, the first responses came from an AI agent. Only after repeated enquiries did a human employee respond, but the answer remained the same: the data were permanently lost and could not be recovered.
Accountability gap
This was not a case of losing random notes or idle chats. Among my discussions with ChatGPT were project folders containing multiple conversations that I had used to develop grant applications, prepare teaching materials, refine publication drafts and design exam analyses. This was intellectual scaffolding that had been built up over a two-year period.
We are increasingly being encouraged to integrate generative AI into research and teaching. Individuals use it for writing, planning and teaching; universities are experimenting with embedding it into curricula. However, my case reveals a fundamental weakness: these tools were not developed with academic standards of reliability and accountability in mind.
If a single click can irrevocably delete years of work, ChatGPT cannot, in my opinion and on the basis of my experience, be considered completely safe for professional use. As a paying subscriber (€20 per month, or US$23), I assumed basic protective measures would be in place: a warning about irreversible deletion, a recovery option (albeit time-limited) and backups or redundancy.
OpenAI, in its responses to me, referred to ‘privacy by design’, meaning that everything is deleted without a trace when users deactivate data sharing. The company was clear: once deleted, chats cannot be recovered, and there is no redundancy or backup that would allow such a thing (see ‘No going back’). Ultimately, OpenAI fulfilled what it saw as a commitment to my privacy as a user by deleting my information the second I asked it to.