An update has made Google Gemini prudish, rendering apps for trauma survivors unusable.
The latest update to Google’s Gemini language models appears to have broken the configurable safety filters: settings that developers specify are no longer honored. As a result, certain applications no longer work properly, particularly those dealing with sensitive topics such as sexual violence. This has direct consequences for apps that support trauma victims.
Jack Darcy, a software developer and security researcher from Brisbane, Australia, noticed the problem after the Gemini 2.5 Pro Preview was released on Tuesday. He told The Register that he is building a platform that lets victims of sexual violence document their experiences and convert them into structured reports for legal purposes. The technology was also intended to give people a safe way to put their experiences into words.
Since the update, however, this type of content is automatically blocked, even when developers have explicitly indicated via the settings panel that it is allowed. According to Darcy, the model now refuses even general mental-health support, something that was previously possible.
The Gemini API normally offers settings that let developers adjust how strictly the model filters specific content categories, such as sexually explicit material, hate speech, or dangerous content. In applications for healthcare or journalism, for example, it is sometimes necessary to discuss difficult topics, and Darcy says his apps VOXHELIX, AUDIOHELIX, and VIDEOHELIX are explicitly designed for that purpose. One of them turns unfiltered incident reports into both audio material and formal documents.
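As an illustration of how developers normally apply these settings, the sketch below uses Google's google-generativeai Python SDK to request the most permissive thresholds for the relevant categories. It is a minimal example, not Darcy's code: the API key and model name are placeholders, and exact category and threshold names can vary between SDK versions.

import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Request the most permissive blocking thresholds for the categories
# a trauma-support app needs to be able to discuss.
model = genai.GenerativeModel(
    "gemini-2.5-pro-preview",  # placeholder model name
    safety_settings={
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
)

# According to the affected developers, requests like this are now
# refused even with the permissive thresholds above in place.
response = model.generate_content(
    "Summarise this survivor's account into a structured incident report: ..."
)
print(response.text)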
Vulnerable situations
Darcy said that even with the safety settings dialed to their most permissive level, the model now refuses by default to process content related to sexual violence. In his view, this undermines the original purpose of his apps, which are used by therapists, social workers, and Australian government agencies. Since the change, he has received reports from users who get stuck during intake interviews with clients. He considers this particularly harmful because it cuts people off, often in vulnerable situations, in the middle of sharing their story.
He also shared that another developer built an app called InnerPiece for people with PTSD or a history of abuse. That app has likewise become unusable, because users are now told their experiences are too explicit to process. According to Darcy, this hits people who are trying to come to terms with their past and are looking for acknowledgment.
Other developers are running into problems as well. Posts on the Build With Google AI developer forum report that a previously used version of the model was replaced by a new one without notice, resulting in significant differences in behavior. One developer writes that existing workflows, prompts, and evaluations are no longer reliable because they are effectively running against a different model.
Call to Google
Darcy is calling on Google to restore the consent-based settings so that apps like his can continue to handle sensitive content responsibly. He sees the breakage not merely as a technical problem but as a fundamental breach of trust toward people seeking help at a vulnerable moment. Google has confirmed that it is aware of the report but has not yet explained the nature or cause of the change.