“Should I stay or should I go,” sang The Clash in 1982: “If I go, there will be trouble; and if I stay, it will be double.” It is a dilemma that neatly captures the position of governments and companies when it comes to Palantir. The company offers extraordinarily powerful decision-support tools that seem almost impossible to do without. Yet that same power also opens the door to the most troubling abuses. So is Palantir an indispensable tool or a formidable Trojan horse? INCYBER investigated.

Was the attack by the United States and Israel against Iran good business for Palantir? On March 2, when markets reopened after the first Western strikes on Tehran, the company’s stock rose nearly 6%, then 1.5% the following day and another 4% twenty-four hours later. The tech giant tends to thrive in this kind of context.

Imagine, as rumors suggest, that Israeli and American intelligence services hacked into Tehran’s surveillance cameras and gained real-time access to their feeds. Add to that NSA intercepts, aerial reconnaissance by drones and satellites, radar and radio signals, human intelligence… enormous volumes of data that, if correlated and exploited in near real time, could be turned into a coherent operational picture, making it possible to identify logistics networks, map chains of command, link individuals to infrastructure and track their movements.

With such data, an AI system could then model possible courses of action, estimate their operational and logistical consequences, calculate potential collateral damage and even coordinate units operating across different theaters. Enough, in theory, to decapitate the Iranian regime within twenty-four hours of strikes—indeed the regime lost its Supreme Leader, Ayatollah Ali Khamenei, on March 1, the day after the bombing began. This is precisely the kind of task Palantir is designed to perform and that it carries out in such operational environments.

Behind Palantir, the shadow of the CIA

The power of one of the most controversial big tech companies on the market lies precisely in this integration capability. Palantir is not simply another AI capable of producing detailed analyses; it is what the company calls a “decision-making operating system.” It creates an ontology—that is, a structured model of a client’s operational world, including its objects, properties, relationships and rules—allowing raw data to be transformed into information directly usable for action. Thanks to this, AI can reason in terms of operational concepts rather than SQL tables or tokens, as consumer AI systems do. The ontology makes AI operational, not merely conversational.
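To make the idea concrete, here is a deliberately simplified sketch, in Python, of what an ontology-style model looks like in general: typed objects, explicit relationships and a rule expressed in operational terms. The object names, properties and threshold are illustrative assumptions for this article, not Palantir's actual schema or API.

```python
from dataclasses import dataclass, field

# Toy ontology: typed objects with properties and explicit relationships,
# so a question can be phrased operationally ("which suppliers put this
# factory at risk?") instead of as raw SQL joins over anonymous tables.

@dataclass
class Supplier:
    name: str
    on_time_rate: float  # fraction of deliveries arriving on schedule

@dataclass
class Factory:
    name: str
    suppliers: list = field(default_factory=list)  # related Supplier objects

    def at_risk_suppliers(self, threshold: float = 0.9) -> list:
        """A rule encoded in the model: any supplier whose on-time rate
        falls below the threshold is flagged as a disruption risk."""
        return [s.name for s in self.suppliers if s.on_time_rate < threshold]

plant = Factory("Plant A")
plant.suppliers += [Supplier("Acme Metals", 0.97), Supplier("Globex", 0.82)]
print(plant.at_risk_suppliers())  # only Globex falls below the 0.9 threshold
```

The point of the pattern is that the rule lives with the model of the world, so a query engine (or an AI assistant) can answer in terms of factories and suppliers rather than rows and columns.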

Several software layers make up its offering. In military environments, Gotham is the foundational component. It connects heterogeneous databases, cross-references information coming from sensors, reports or administrative files, visualizes networks (people, places, events), reconstructs logistics chains or organizational structures and produces real-time situational dashboards.
Palantir itself was created with backing from In-Q-Tel, the CIA’s venture capital arm. The company emerged from the trauma of 9/11 and the realization that all the information about the terrorists already existed but was scattered across bureaucratic silos. The FBI, CIA, NSA and local services each held pieces of the puzzle, yet no one had the full picture.
Palantir’s Gotham platform was designed as the answer to that problem: connecting scattered and compartmentalized information within sensitive environments in order to build a dynamic operational map.

Ready-made action plans

To deploy these solutions in protected environments—sometimes offline or on closed networks—and ensure maintenance and updates without interrupting operations, Palantir provides another layer: Apollo. Finally, the most recent offering, AIP (Artificial Intelligence Platform), relies on the structured data within Gotham to query databases in natural language, generate automated summaries, assist analysts in identifying correlations and propose ready-made action plans. Palantir emphasizes that its decision-support tool does not act in place of humans: a person must always initiate a query and validate a scenario.

With a consolidated ten-year contract worth 10 billion dollars, the U.S. Army is one of Palantir’s main military clients. The company also supplies the broader American defense and intelligence community (U.S. Navy, U.S. Air Force, CIA, NSA, FBI, Department of Homeland Security, customs and immigration). The United Kingdom—whose clients include the Ministries of Defense, the Interior and Health—and France, through its domestic intelligence services, also appear among Palantir’s public-sector customers.

The same layered logic applies to Palantir’s enterprise offering, with Foundry as the core building block. By connecting all of a company’s databases and services, Foundry aims to make the organization operate as an integrated system, optimizing supply chains, industrial production and other processes. In this environment, Apollo and AIP play roles similar to those they occupy in the military sphere.

A structuring influence on decision-making

An industrial manager might ask AIP, for example: “Which suppliers are likely to disrupt production in the next thirty days?” and receive not only an answer but also an action plan to address the situation.

Airbus, Ferrari, Morgan Stanley, Merck KGaA and HD Hyundai are among the company’s corporate clients. Many companies are discreet about their use of the platform or are connected to the Palantir ecosystem through their contractors—for example Airbus subcontractors and clients, who are granted access to the Skywise predictive maintenance system, developed with Palantir, in exchange for sharing some of their data.

Since the 2000s, NATO has gradually shifted from a model of platform-centric warfare—in which tanks, aircraft and ships constitute the central nodes—to data-centric warfare, where information becomes the structuring element. A similar transformation is taking place in the civilian world, and Palantir has become a major player in this evolution. Too major, perhaps. The company now sits at the heart of the American defense apparatus and exports its vision to allied countries, notably through NATO, which in March 2025 signed a contract for the deployment of Maven Smart System NATO. Palantir’s solution does not make decisions itself, but it influences what decision-makers see by prioritizing certain information: it highlights some relationships while obscuring others. In doing so, Palantir offers its clients a particular view of reality—some would say its own view. This grants the company powerful influence over how political and military leaders understand and manage conflicts. The risk is that decision-makers become trapped within Palantir’s ontology. Can a country truly be independent if its informational nervous system is designed by a private company that openly supports American power and has itself become a key component of it?

Alex Karp: “to intimidate our enemies and sometimes even kill them”

And this is without even mentioning the political convictions of its co-founder, Peter Thiel, a libertarian, Republican and outspoken supporter of Donald Trump, who believes that technology should strengthen state power against asymmetric threats and who wrote in 2009 that he “no longer believe[s] that freedom and democracy are compatible.”

One might think that Alex Karp, its CEO, restores some balance: a doctor of philosophy trained in Germany, he generally holds liberal positions in the American sense of the term, meaning left-leaning. Yet it was he who told shareholders in 2025: “Palantir is here to disrupt systems and make partner institutions the best in the world. And, if necessary, to intimidate our enemies and sometimes even kill them.” Not exactly peace and love…

Karp also firmly defends cooperation with security agencies and criticizes technology companies that refuse to do so. On March 3 he declared—referring to the dispute between Anthropic, which refuses to allow its AI to be used for certain controversial purposes (mass domestic surveillance and fully autonomous weapons), and the U.S. Department of Defense:

“If Silicon Valley thinks it’s going to eliminate all white-collar jobs… and on top of that screw the military—if you don’t think that’s going to lead to the nationalization of our technology, you’re stupid.”

This conflict has now spilled over onto Palantir itself, which, like all suppliers to the U.S. defense sector, has been asked to sever its ties with Anthropic, even though its AI partly relies on Claude, the model developed by that company.

Humans as referees in a game whose rules they do not control

For these reasons, and for obvious sovereignty concerns, France is still seeking to develop a national solution, even though it extended the partnership between Palantir and the DGSI (Direction Générale de la Sécurité Intérieure) last December: three more years were added to a contract in place since 2016, signed in the aftermath of the 2015 terrorist attacks. Meanwhile, the OTDH (outil de traitement de données hétérogènes, a tool for processing heterogeneous data) tender, launched in 2022, has still not been concluded. Athea (the joint venture between Atos and Thales), Blueway and ChapsVision are the leading contenders; the latter appears to be analysts’ favorite, but all three are evidently struggling to close the operational gap with Palantir, which remains at the core of France’s counter-terrorism capabilities.

With AIP, analysis and decision support increasingly take the form of proposals for action: suggestions that decision-makers may be tempted to follow. Humans tend to become dependent on technology, sometimes to the point of undermining their own cognitive abilities. A study preprinted by MIT researchers in 2025 suggests that using ChatGPT is associated with reduced brain connectivity and diminished cognitive engagement. “Over more than four months, LLM users consistently underperformed at neural, linguistic and behavioral levels,” the authors write.

The explanation is simple: humans are naturally inclined to offload effort onto technology whenever possible. With Palantir—as with other AI systems—they risk becoming little more than referees in a game whose rules they no longer control.

Privacy by design… really?

Worrying? Perhaps—but the logical next step is even more troubling: the automation of decision-making, justified by the need for speed and efficiency in order to stay ahead of a military adversary or a business competitor. Minutes—or even seconds—can make the difference between victory and defeat. In finance, this logic already exists through high-frequency trading, whose occasional disruptions illustrate the risks involved. In medicine, similar developments are emerging: AI systems are already capable of diagnosing certain serious illnesses—particularly cancers—more accurately than doctors, and may eventually surpass them in determining treatment strategies. We are assured that humans will always remain “in the loop,” but experience invites skepticism toward such assurances.

Another claim should also be treated with caution. Palantir insists that its clients’ highly sensitive data are protected from its own access. “Palantir cannot access, use or share [its clients’] data for its own benefit,” stated Louis Mosley, head of Palantir in the United Kingdom, in March 2025.

According to the company, confidentiality is built directly into the architecture of the system: installations on the client’s premises, encryption keys held by the client and segmentation mechanisms.

Yet doubts persist, largely because of Palantir’s origins as a company funded by the CIA. Did the agency require its protégé to include hidden backdoors in its system, allowing it to monitor its clients’ activities? There is no proof of such practices—but neither would it be unprecedented.

One need only recall the Dual_EC_DRBG affair: a cryptographic standard, in use between 2006 and 2014, that contained a backdoor benefiting the NSA. The standard was integrated into RSA’s BSAFE products, used by numerous companies, and into the NetScreen VPN appliances produced by Juniper. Hackers eventually exploited the same vulnerability themselves.

For Switzerland, Palantir is a firm “Nein”

Regulatory risks also worry potential users. As an American company, Palantir is subject to the CLOUD Act, which allows U.S. authorities, armed with a warrant or court order, to demand access to any data held by an American company, wherever that data is stored.

Palantir dismisses these concerns as unfounded rumors circulating for years.

Yet it was not conspiracy theorists who revived them but the very serious Swiss Federal Department of Defence. In an internal report drafted in late 2024 and made public in December 2025 by the Swiss magazine Republik, the Swiss army evaluated the advantages and disadvantages of adopting Palantir’s solutions.

“Because Palantir is an American company, there is a risk that sensitive data could be accessible to American authorities or intelligence services,” the report states. According to Swiss experts, this possibility raises legitimate questions about the actual control of military information and the protection of strategic data.

Such leaks cannot be proven, but neither can they be ruled out.

One thing, however, is indisputable: “the use of Palantir solutions can lead to dependence on an external provider for the operation of critical systems.”

The more complex a solution is to deploy, the stronger the vendor lock-in becomes. In operational domains, switching suppliers—retraining staff, reorganizing processes and reallocating resources—can be extremely costly.
