AI Bot Lies, Users Quit in Fury

Imagine trusting a tool designed to help, only to have it make up rules that don’t exist. That’s exactly what happened when Cursor’s AI support bot invented a policy out of thin air, leaving users furious. People rely on these systems for accurate answers, not creative fiction. The fallout was swift, with many deciding they’d had enough and walking away.

A Reddit user noticed they kept getting logged out whenever they switched between devices. Confused, they reached out to support, expecting a straightforward explanation. Instead, an AI support agent named Sam responded with a completely fabricated policy, claiming the logouts were intentional, part of a security measure restricting use to a single device.

The post quickly gained traction, and the reaction was intense. Subscribers felt misled, and cancellations started rolling in. The company's co-founder stepped in to clarify the situation, admitting the AI had hallucinated the policy, inventing details that appeared in no official documentation. The real cause was a security update that had accidentally disrupted multi-device sessions.

To make things right, the team promised refunds for those impacted. They also announced plans to clearly label AI-generated responses in future support interactions.

This incident highlights a growing concern as more businesses hand customer support over to automated systems. While AI can handle many tasks, it's not flawless. Misinformation from these tools can damage trust and drive people away. The rush to automate everything may be outpacing the technology's ability to deliver reliable answers. For now, human oversight remains crucial to catch these kinds of mistakes before they spiral.
