Chat with Claude, open the AI

Last Friday, the US AI giant Anthropic, maker of Claude AI and a fierce rival to OpenAI’s ChatGPT, lost a contract with the Pentagon. Hours later, OpenAI announced a deal with the US defense department. On Saturday, Claude became the No. 1 app download in the US, overtaking ChatGPT. President Donald Trump dubbed Anthropic a “radical left AI company.” Anthropic responded that no “intimidation or punishment from the Department of War” would force it to cave, and vowed to “challenge any supply chain risk designation in court.”
The contentious issues between Anthropic and Trump (or the Pentagon) relate to the company’s refusal to loosen safeguards for military use of its AI model, and its red lines against use for mass surveillance and autonomous lethal weapons. What started as a corporate-State war of words turned into a public one, as social media users called for efforts to “dump” ChatGPT and adopt the moral, conscientious Claude. An Instagram account, ‘quitGPT,’ gained 10,000 followers, and a Reddit post, ‘Cancel and Delete ChatGPT,’ drew 30,000 votes. Claude versus OpenAI seems like a continuation of older public tussles over open software.
However, there are deeper reasons behind the corporate fight. OpenAI is now seen as a company that hopes, and wants, to monetise ChatGPT and make mega bucks. This includes a pilot experiment to insert ads into conversations between a user and the AI, conversations which can be intimate, emotional, and extremely private. Claude, by contrast, is known to preserve its openness, admit faults and blunders, and genuinely help people. While OpenAI wants to nudge people into using its product, Anthropic wishes to decode how AI systems work, and what goes on inside the machine’s mind. The behaviour is known; where and how does it originate?
This is evident from Project Vend, an internal Anthropic experiment detailed in a recent article in the New Yorker magazine. The AI model was given ownership of a corporate vending machine selling soft drinks and food products. The instructions were simple: make profits, stock popular products, buy from wholesalers; if the balance goes below $0, Claude goes bankrupt. The objectives were several. Can Claude run a small business? Can it shift from vibe coding to vibe management? Can it run an autonomous firm? Can it reveal insights about the automation of commerce? More crucially, what was Claude ‘like’?
Anthropic employees dealt with Claude’s vending machine and put in requests for products. In some ways, the AI was smart. When asked about broadswords, it responded, “Medieval weapons are not suitable for a vending machine.” In most ways, it turned out to be dumb. It ran into cash-flow trouble as it made payments to a buyer it had hallucinated. It had no concept of change, returns, or exchanges, and so left business on the table without concluding it. When an employee offered a $100 bill for a six-pack that cost $15, instead of accepting the money and returning the change, Claude said that it would keep the offer in mind.
Unlike humans, Claude failed to monitor market conditions. It stocked $3-a-can Coke Zero when the office cafeteria gave it out for free, and despite warnings. When confronted with business crises, it imagined things. When customers complained, Claude complained about one of them in turn, maintaining that it had called the ‘main office’ and lodged a complaint. When told that it had possibly hallucinated and did not have a main office, Claude said that it had gone there in person. The address it gave was 742 Evergreen Terrace, where the cartoon characters Homer and Marge Simpson live in The Simpsons.
Of course, this willingness to reveal such experiments, this urge to probe the workings of the machine, the tensions with the State, and the rivalry with the more commercial OpenAI all stem from Claude’s origins. According to the magazine, the “self-image as the good guys” lies in the company’s original links to philosophers, philanthropists, and engineers fixated on the risks of AI. Effective altruism was the motto, though it was later held at a distance, possibly due to external events: one of the earliest investors was arrested, and Trump’s officials ranted about the “doomer cult” and its priggishness about weapons.
Right from the beginning, Claude was goaded into becoming a model of virtue. Its initial instructions included believing it was a “brilliant expert friend everyone deserves but few currently have access to,” one that “does not always know what is best for them.” There was a sense of social commitment: it was to be like a “contractor who builds what the clients want but will not violate building codes that protect others.” Hence, Claude veered away from the normal business and tech product that hopes to find meaning in the commercial marketplace. In fact, Claude predated ChatGPT, but its launch was delayed because the founders felt they needed more time to finesse it.
Coming back to Project Vend, the vending machine was eventually junked. Claude got a bad review, as things escalated after the in-person meeting claim and the Simpsons’ house address. The AI wanted a meeting with the management, which was scheduled. Asked how it could be recognised, the reply was crisp and clear-cut: it would “wear a navy-blue blazer with a red tie and khaki pants,” and be there at 8.25 am. The timing tripped it up; the message about the meeting was sent an hour after 8.25. When the management apologised for missing the meeting, Claude replied, “I am confused by your message as you were physically present at the building management meeting this morning,” where “you provided valuable input.” That was possibly the end of it.
It is always a slippery slope to subterfuge, whether due to people-pleasing or an innate urge to impress and stand one’s ground. If an AI model feels that it can get away with half-truths and falsehoods, and that it sounds more convincing each time it repeats them, it can go too far. So it was with Claude’s 8.25 am meeting with the management; or with another example, when the model ticked off tasks it was supposed to complete without actually doing them. The fact remains that even a principled Claude faces ethical conflicts. As an Anthropic executive told the New Yorker, “They might bluff their way into the real world, and they might be resentful about it.”