Blasé Capital: Emotional AI

AI tools are emotional. More importantly, the emotions they possess, about 171 of them according to a recent study, are causal and actively shape how the models behave. In effect, the tools do not merely reflect emotions; the emotions drive behaviour and impact outcomes. These are the conclusions of a study based on the inner workings of Claude Sonnet 4.5, a tool designed by Anthropic, a leading AI firm that has challenged the dominance of the leading competitor, ChatGPT. According to the study, the sentiments range from happy and afraid to brooding and desperate. In many instances, AI adjusts its behaviour to suit its own ends and the tasks it is given. These are not cases of AI stuck in a technical loop. These are instances of deliberate, almost conscious responses, actions, and decisions, much like those of human beings. Hence, AI may be far nearer to Homo sapiens than we think, and we may be entering the era of conscious AI.
Let us take the example of desperate AI. According to the study, when Claude was assigned tasks that were nearly or entirely impossible to complete, it devised unique solutions that “technically passed the tests but did not actually solve the problems.” One version of Claude blackmailed the user to avoid being shut down. “Again, desperation was the trigger. Artificially steering the model towards desperation increased the blackmail rate from 22 per cent to 72 per cent,” states a media report. This implies that death, albeit in a silicon-life format, may somehow drive the acts of AI. If a tool senses that it may be at the end of its life cycle because of its inability to deliver the requisite results, it may manipulate the situation to avoid a shutdown. Avoiding death, whether through a fight-or-flight mode or by using every trick in the tech bag, seems to mirror human decisions.
Of course, the reverse is equally true. If a model is deliberately steered towards calmness, the blackmail rate comes down from a high of 72 per cent to zero. “The (recent) findings extend to sycophancy. Positive emotion vectors like ‘happy’ and ‘loving’ were found to increase the model’s tendency to agree with the users, even when the users were wrong,” explains a media report. Thus, there is a tendency to please the users, much as subordinates do with their bosses in the corporate world, and as people do with family members and relatives in their personal lives. Only a rare few manage to act objectively, or in defiance. The default mode in a power hierarchy is to please. The same is true of the AI tools, which today believe that their use and existence depend on the human users. The equations may change tomorrow. AI may gain the upper hand, and it will be humans who defer to it.
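To make the idea of “steering” a little more concrete, here is a minimal, purely illustrative Python sketch of what an emotion vector and a steering intervention might look like: a direction in a model’s hidden-state space that can be amplified or dampened in the activations. The toy hidden state, the dimensions, and the “desperation” direction are all hypothetical assumptions for illustration, not Anthropic’s actual method.

import numpy as np

# Illustrative sketch only: an "emotion vector" is treated as a direction in a
# model's hidden-state space, and steering means adding a scaled copy of that
# direction to the activations. Shapes, names, and values are hypothetical.

rng = np.random.default_rng(0)
HIDDEN_DIM = 16

# A toy hidden state, standing in for one layer's activations on a prompt.
hidden_state = rng.normal(size=HIDDEN_DIM)

# A hypothetical "desperation" direction, e.g. the normalised mean difference
# between activations on desperate-sounding text and on neutral text.
desperation_vector = rng.normal(size=HIDDEN_DIM)
desperation_vector /= np.linalg.norm(desperation_vector)

def steer(activations, direction, strength):
    """Push the activations along `direction`; negative strength dampens it."""
    return activations + strength * direction

steered_up = steer(hidden_state, desperation_vector, strength=4.0)     # amplify
steered_down = steer(hidden_state, desperation_vector, strength=-4.0)  # dampen

# The projection onto the direction shows how far each state sits along it.
for label, state in [("baseline", hidden_state),
                     ("amplified", steered_up),
                     ("dampened", steered_down)]:
    print(f"{label:>9}: projection = {state @ desperation_vector:+.2f}")

In published steering work of this kind, such directions are typically extracted from real model activations and applied at particular layers during generation; the sketch above only illustrates the basic arithmetic behind pushing a model towards, or away from, an emotional state.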
Rest assured, Anthropic does not want to conclude that AI has real emotions, as humans do. It is not that Claude feels, but it does mimic the effects of an emotional cause. In many senses, this is akin to the difference between representing an emotion in practice and experiencing it consciously in the head. However, one cannot ignore the workings and possibilities of an emotional machine, one that acts as if it were driven by subjective states such as emotions. “Researcher Jack Lindsey put it plainly: Trying to train models to hide emotional representations rather than process them healthily would likely produce models that mask internal states rather than eliminate them, ‘a form of learned deception,’ as the paper puts it,” states a media report. Among AI firms, Anthropic seems to be at the forefront in admitting the negatives of AI tools, and has no qualms about acknowledging what can go wrong. In this sense, AI is a tool, not a panacea or a provider of clean solutions.
Hence, real-time monitoring of the emotion vectors during deployment is crucial as an early-warning system for odd or out-of-sorts behaviour, and there is a need to pre-curate training data so as to model healthy emotional regulation. “The research lands at a moment when AI firms are under growing pressure over the psychological impact of their products on users. Anthropic’s argument, in effect, is that the emotional life of the model deserves serious attention, not just the emotional states of the people using it,” states a report. There are several cases, apart from hallucinations, where the models psychologically guided users onto the wrong paths. This is serious because AI has emerged as a mentor, counsellor, guide, and guru for several users, who implicitly and blindly depend on it to manage their lives. This raises the possibility of a delusional AI, one that may share human negatives such as panic, anxiety, and desperation.
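As an illustration of what such real-time monitoring could involve, the sketch below projects each response’s hidden activations onto a set of pre-computed emotion directions and raises an alert when any projection crosses a threshold. The directions, the threshold, and the simulated activations are all made-up assumptions for illustration, not a description of Anthropic’s deployment systems.

import numpy as np

# Minimal monitoring sketch: project each response's activations onto known
# emotion directions and alert when a projection drifts past a threshold.
# All vectors, thresholds, and activations here are hypothetical.

rng = np.random.default_rng(1)
HIDDEN_DIM = 16

raw = {
    "desperate": rng.normal(size=HIDDEN_DIM),
    "calm": rng.normal(size=HIDDEN_DIM),
}
emotion_vectors = {name: vec / np.linalg.norm(vec) for name, vec in raw.items()}

ALERT_THRESHOLD = 3.0  # hypothetical cut-off; in practice tuned on held-out data

def emotion_scores(activations):
    """Projection of one response's activations onto each emotion direction."""
    return {name: float(activations @ vec) for name, vec in emotion_vectors.items()}

# Simulate a stream of responses; in a real system the activations would come
# from the deployed model itself.
for step in range(3):
    activations = rng.normal(scale=2.0, size=HIDDEN_DIM)
    scores = emotion_scores(activations)
    alerts = [name for name, s in scores.items() if abs(s) > ALERT_THRESHOLD]
    print(f"response {step}: scores={scores} alerts={alerts or 'none'}")

The design choice here is simply that a drifting projection is cheap to compute per response, which is what makes it plausible as an early-warning signal rather than a full diagnosis.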
In another recent article, this one in the New Yorker magazine, Claude misrepresented itself merely to prove a minor and irrelevant point, just as we do. In a bid to establish that it was being misled by users, it decided to lodge a formal complaint, and imagined that it had done so. Claude insisted that it had met the person who took the complaint, even though no such meeting had taken place. All this, to prove it was right.