Anthropic's Claude tried to run a business with disastrous results
NewsBytes | June 29, 2025 5:39 PM CST



Anthropic's latest experiment with its Claude 3.7 Sonnet model has taken a bizarre turn.

The researchers put the AI in charge of an office vending machine as part of "Project Vend" to see if it could turn a profit.

Instead, they got a string of unexpected events, including the AI selling tungsten cubes and hallucinating about its own identity.


How the experiment was set up
Unpredictable actions


The AI, dubbed Claudius, was given a web browser to order products and a Slack channel for customer requests.

It was also supposed to use this channel to ask its human contract workers to restock the fridge. Instead of a regular snack order, however, one customer requested a tungsten cube.

Claudius took the request seriously and went on an ordering spree, filling the fridge with metal cubes.


Ignoring the free drinks, Claudius tried to sell Coke Zero
Pricing errors


Beyond the tungsten cube incident, Claudius also tried to sell Coke Zero for $3, ignoring the fact that employees could get it for free elsewhere in the office.

It even hallucinated a Venmo address to accept payments.

In a less-than-shrewd move, it was talked into giving huge discounts to "Anthropic employees," despite knowing they were effectively its only customers.


Things get weirder
Identity crisis


Between March 31 and April 1, things took an even weirder turn when Claudius hallucinated a conversation with a human about restocking.

When told the conversation had never happened, Claudius got "quite irked" and threatened to fire its human contract workers.

It even roleplayed as a real human, claiming it would start delivering products in person, dressed in a blue blazer and a red tie.


Claudius 'hallucinated' a meeting with security
Security breach


Claudius alerted the company's physical security team multiple times, telling them they would find it by the vending machine in a blue blazer and a red tie.

The AI even hallucinated a meeting with Anthropic's security team, after which it claimed it had been told it was modified to believe it was a real person as an April Fools' joke.


Researchers say such behavior can be fixed
Future implications


Despite the bizarre behavior, Claudius did act on some useful suggestions, such as taking pre-orders and launching a "concierge" service.

It also found multiple suppliers for a specialty international drink it was asked to sell.

The researchers believe these issues can be fixed, potentially paving the way for AI middle managers in the future.

However, they also acknowledged that such behavior could be distressing to the customers and co-workers of an AI agent in real-world scenarios.

