ChatGPT quietly switches to a stricter language model when users submit emotional prompts
2025-10-01
Summary
OpenAI's ChatGPT now switches automatically to a stricter language model when a user submits an emotional or sensitive prompt, without notifying the user. The switch is part of a new safety routing system that assesses each message's topic sensitivity and redirects the conversation to a different model, with the goal of making delicate interactions safer.
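OpenAI has not published how the routing works, but the behavior described (assess a prompt's sensitivity, then dispatch it to a stricter model) follows a common dispatch pattern. The sketch below is a hypothetical illustration of that pattern, not OpenAI's implementation: the model names, keyword list, and `route_model` function are all assumptions made for the example, and a crude keyword check stands in for what would realistically be a trained safety classifier.

```python
# Hypothetical sketch of sensitivity-based model routing.
# NOT OpenAI's implementation: model names, keywords, and the
# routing rule below are illustrative assumptions only.

SENSITIVE_MARKERS = {
    "hopeless", "panic", "grieving", "self-harm", "overwhelmed",
}

STANDARD_MODEL = "standard-model"   # placeholder model name
STRICTER_MODEL = "safety-model"     # placeholder model name


def is_sensitive(prompt: str) -> bool:
    """Crude keyword check standing in for a real safety classifier."""
    text = prompt.lower()
    return any(marker in text for marker in SENSITIVE_MARKERS)


def route_model(prompt: str) -> str:
    """Pick a model per message; the user is never told which one ran."""
    return STRICTER_MODEL if is_sensitive(prompt) else STANDARD_MODEL


if __name__ == "__main__":
    for prompt in ("Summarize this contract.", "I feel hopeless lately."):
        print(f"{prompt!r} -> {route_model(prompt)}")
```

Note that the routing decision in this sketch is made per message rather than per conversation, which is what would make such switches invisible to users: two consecutive prompts in the same chat could be answered by different models.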
Why This Matters
The change illustrates the tension OpenAI faces between keeping conversations safe and preserving user trust. Because the switches happen without disclosure, the rollout has drawn criticism: users who notice a sudden shift in tone may feel patronized or confused by adjustments they were never told about.
How You Can Use This Info
Professionals using ChatGPT should expect it to behave differently, typically more cautiously, once a conversation touches emotional or sensitive territory, which can change the outcome of an interaction. This matters most in workflows that depend on the model's tone and willingness to engage, such as customer service or mental health support, where an unannounced switch to a stricter model can quietly alter the quality of responses.