Blasé Capital: INTELLIGENCE WAR

Even as the US, Israel, and Iran deploy artificial intelligence (AI) in the ongoing war in the Middle East, and AI tools become central to modern warfare, as witnessed in the Russia-Ukraine and India-Pakistan wars, open battles are being fought between leading AI and tech firms in the US. These tech wars concern the use of AI in national and global wars. In the US, OpenAI, Anthropic, and others like Dell tussle over how a government, in this case the Pentagon and the Department of War, can or cannot use AI in its weapons and equipment. The CEOs of the three firms, Sam Altman of OpenAI (maker of ChatGPT), Dario Amodei of Anthropic (maker of Claude), and Michael Dell of Dell Technologies, have fired verbal missiles over the past few days, both against and in support of each other.
First, Anthropic’s Amodei refused to allow the US to use its AI, Claude, in fully autonomous weapons that “lack a human in the targeting or firing loop.” He argued that existing AI systems are not reliable or safe enough for such tasks. Amodei also has problems with the tech being used for “bulk, surveillance-level monitoring of American citizens.” Anthropic draws strict lines of non-use on these two issues. A peeved and angry Pentagon cancelled the Claude contract because it wanted the two restrictions removed to allow “lawful use.” Anthropic was dubbed a “supply chain risk” to national interests. The firm filed two lawsuits against the Department of Defense, arguing that its blacklisting was illegal and violated the First Amendment. Even as the legal and contractual battle ensued, America continued to use Claude in the Iran war, which established that the war of words was over control and policy. Anthropic emphasised that its refusal did not imply a blanket refusal to work with the military; it was focused on ‘red lines.’
On the day Anthropic’s contract was cancelled, Altman’s OpenAI signed a deal with the Pentagon for the same tech, allowing the military its use without any restrictions. In posts on X, he wrote, “I think Anthropic may have wanted more operational control than we did,” and Amodei was possibly “more focused on specific prohibitions in the contract, rather than citing applicable laws.” He explained that military contracts were extremely sensitive because of the national interests involved, and negotiations could collapse over several unrelated and unintended issues. “I have seen what happens in tense negotiations when things get stressed, and deteriorate superfast, and I could believe that was a large part of what happened here,” he added. In effect, Altman indicated that his firm had no qualms about how the US military used its tech, and that he had no intention of escalating the discussion. The Pentagon was free to do what it wanted with OpenAI.
Michael Dell quickly entered the three-way debate and added his own viewpoint. This time, he was on the same page as Altman. According to Dell, a firm cannot dictate how its tech is used by the government if it wishes to work with the government. Sovereign authorities and agencies have the right to use the tech as they wish, since national interests are at stake. The last segment was implicit, never expressed openly by Dell. In a contract of such a sensitive nature, the most workable model is to let the user decide how and why it wants to use the tech, provided it is legal and as per the contract. Red or green lines cannot restrict military use, especially when a nation is at war with an enemy, as the US is with Iran. Dell said that his firm sells only to authorised users, without expanding on the meaning and implications of that ambiguous statement.
The fact remains that OpenAI and Anthropic have pitched themselves at the two extremes of AI and tech. The former believes in faster adoption and inculcation in the military, commercial, and social worlds. The latter is more cautious and doubtful, and even delayed the launch of Claude despite being the first developer; Claude went public only because ChatGPT entered the market, leaving little option. A kind of moral veil separates the two, or at least seems to. Anthropic is forever apprehensive about what AI can and cannot do, and about its pitfalls. OpenAI thinks AI is the panacea, and that problems will be solved as use grows, rather than the reverse, i.e., finding safe options first and then introducing new features and tools. In this differing mindset, Anthropic experiments with what can go wrong with AI; OpenAI engages with what is right, and how to make it more right. Obviously, the approaches are so radically different that there can be minimal meeting ground. Dell seems caught in the middle, fearing being left behind in the AI sprint race.

This divide also explains why Anthropic opposed OpenAI’s move to introduce limited advertisements in AI conversations. OpenAI stated that this was merely a selective experiment to understand how ads do or do not clash with AI conversations. Anthropic felt that such efforts should be conducted in the labs rather than in live situations, given the intimacy between humans and AI.