AI promises to accelerate the deployment of a Zero Trust strategy: faster correlation, noise reduction, and partial automation of responses. In production, the mechanics are simple: as automation increases, so does the need for justification.

In collaboration with HPE Aruba Networking and TD SYNNEX

This justification relies on a clean information system: usable identities, consistent telemetry, maintainable segmentation, and a traceable decision chain. Without this foundation, AI does not strengthen the security posture. It accelerates decision-making based on fragile signals.

The network becomes a sensor: useful, until a decision must be made

Aina Rampanana, cybersecurity pre-sales, offers a clear perspective: the network no longer serves only to transport data—it observes, contextualizes, and potentially enforces policies as close as possible to traffic flows.¹ In a hybrid, multi-site, multi-cloud information system, with east-west traffic everywhere, the argument holds: the network remains one of the few continuous observation points.

The problem arises in the next step. The network detects a deviation. It does not indicate whether the deviation is malicious or simply new. Migration, scaling, application changes, onboarding a new provider—everything initially appears as an anomaly. The Zero Trust strategy is therefore determined before the “sensor” technology: asset inventory, dependency mapping, clean identities, documented exceptions. Without this, the signal exists, but the decision remains a matter of debate.

UEBA: the score exists, IAM makes the difference at decision time

This is where AI comes into play, often through UEBA (User and Entity Behavior Analytics). The goal: detect unusual connections, atypical access sequences, privilege escalations, or potential exfiltration. In practice, UEBA filters more than it decides. Modern environments generate noise: remote work, subcontracting, technical accounts, reorganizations, SaaS shadow IT. The use of AI assistants introduces rapidly evolving application patterns.

SOC teams ultimately face the same obstacle: a score without context does not guide action. UEBA gains value when the context is solid: a clean IAM (Identity and Access Management), consistent MFA (Multi-Factor Authentication), strict account hygiene, a clear separation between human and service accounts, and a usable device posture. Gartner recalls this principle in its public definition of SASE (Secure Access Service Edge): “zero trust” access relies on identity and real-time context to enforce security and compliance policies.²

Without this layer, UEBA produces scores that teams hesitate to convert into policy. With it, UEBA becomes actionable, and the question shifts: how to enforce without disruption.
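The shift from score to action can be sketched in a few lines. The thresholds, action names, and context fields below are illustrative assumptions, not values from any vendor's UEBA product; the point is that identity context, not the score alone, determines what the pipeline does.

```python
from dataclasses import dataclass

@dataclass
class IdentityContext:
    """Identity signals that turn a raw UEBA score into something actionable."""
    mfa_verified: bool
    is_service_account: bool
    device_posture_ok: bool

def triage(ueba_score: float, ctx: IdentityContext) -> str:
    """Map an anomaly score to an action, gated by identity context.

    Thresholds (0.7 / 0.9) and action names are illustrative only.
    """
    if ueba_score < 0.7:
        return "log"                    # below threshold: record, don't act
    if ctx.is_service_account:
        return "escalate_to_analyst"    # service accounts: never auto-block
    if ueba_score >= 0.9 and not (ctx.mfa_verified and ctx.device_posture_ok):
        return "step_up_auth"           # high score, weak context: challenge, don't cut
    return "escalate_to_analyst"

# High score on a human account without verified MFA: challenge rather than block.
print(triage(0.95, IdentityContext(mfa_verified=False,
                                   is_service_account=False,
                                   device_posture_ok=True)))  # step_up_auth
```

Note the deliberate asymmetry: even at the highest score, the sketch never returns a hard block, because converting a score into an outage is exactly the step the text argues must stay under human control.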

Automating a policy: where cybersecurity meets production

Vendors are adopting “dynamic” policies: contextual hardening, conditional restrictions, adaptive segmentation, quarantines. The objective is clear: reduce exposure time and limit propagation.

The constraint can be summarized in one sentence: automated actions break things quickly. A block interrupts a flow, disrupts dependencies, triggers cascading effects. CIOs therefore proceed in stages. They first accept what remains reversible and controlled: time-limited measures, traceability, rollback capabilities, reintegration paths. Recommendation and optimization progress faster than large-scale autonomous execution.
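The "reversible and controlled" pattern described above can be expressed as a small abstraction: every automated measure carries a time limit and an explicit rollback path. The `apply` and `rollback` callables stand in for real enforcement APIs (firewall rules, NAC quarantine) and are purely hypothetical here.

```python
import time
from typing import Callable, Optional

class ReversibleAction:
    """A time-boxed containment measure with an explicit rollback path.

    An action with no rollback path cannot be constructed, mirroring the
    rule that only reversible measures are accepted for automation.
    """
    def __init__(self, apply: Callable[[], None],
                 rollback: Callable[[], None], ttl_s: float):
        self.apply_fn = apply
        self.rollback_fn = rollback
        self.ttl_s = ttl_s
        self.applied_at: Optional[float] = None

    def enforce(self) -> None:
        self.apply_fn()
        self.applied_at = time.monotonic()

    def expired(self) -> bool:
        return (self.applied_at is not None
                and time.monotonic() - self.applied_at >= self.ttl_s)

    def revert_if_expired(self) -> None:
        """Reintegration path: the measure lifts itself unless renewed."""
        if self.expired():
            self.rollback_fn()
            self.applied_at = None

# Usage sketch: a quarantine that auto-expires.
state = {"quarantined": False}
action = ReversibleAction(apply=lambda: state.update(quarantined=True),
                          rollback=lambda: state.update(quarantined=False),
                          ttl_s=900.0)  # 15-minute containment window
action.enforce()
```

The design choice worth noting: expiry is the default and renewal is the exception, so a forgotten automated block degrades into a logged event rather than a permanent outage.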

This choice reflects a technical requirement: to automate safely in production, flows must be understood. This brings the discussion back to telemetry quality.

NDR and telemetry: AI does not work on approximate logs

AI-assisted security depends on one fundamental element: data collection quality. Lateral movement, east-west traffic, application dependencies, identity + posture + traffic correlation—all require a consistent foundation. Reliable timestamps, minimal normalization, up-to-date inventory, homogeneous coverage.
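Two of these requirements, reliable timestamps and an up-to-date inventory, lend themselves to a simple gate before events reach any correlation engine. The event schema below (`ts`, `host` fields) is an assumption for illustration, not a standard format.

```python
from datetime import datetime, timezone, timedelta

def telemetry_issues(events, inventory, max_skew=timedelta(minutes=5)):
    """Flag records an AI pipeline should not learn from or correlate on.

    Each event is a dict with 'ts' (timezone-aware datetime) and 'host';
    'inventory' is the set of known assets. Field names are illustrative.
    """
    now = datetime.now(timezone.utc)
    issues = []
    for event in events:
        if event["ts"] > now + max_skew:
            # A future timestamp means clock skew: ordering-based
            # correlation on this source is unreliable.
            issues.append((event, "timestamp beyond skew tolerance"))
        if event["host"] not in inventory:
            # An unknown asset means an inventory gap: the event cannot
            # be tied to an identity or a posture.
            issues.append((event, "host absent from inventory"))
    return issues
```

Checks this cheap catch exactly the "approximate logs" the next section describes: signals that look plausible but cannot support an operational decision.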

NDR (Network Detection and Response) illustrates the gap between promise and reality: without stable signals, correlation resembles a sophisticated lottery. Tools produce plausible hypotheses, but SOC teams lack the substance to make operational decisions.

The same applies to micro-segmentation. On paper, it reduces the attack surface. In real-world systems, it exposes exceptions, implicit flows, and historical dependencies. AI can map, suggest, and identify toxic exceptions. It does not eliminate business trade-offs or integration debt. And the more automation increases, the more this debt becomes a source of incidents: an automated policy does not tolerate unidentified flows.
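The "unidentified flows" problem reduces to set arithmetic: anything observed on the wire that no segmentation rule accounts for. The zone/port tuples below are an illustrative flow model, not any product's policy format.

```python
def unidentified_flows(observed, allowed):
    """Return observed flows that no segmentation rule accounts for.

    Flows are (src_zone, dst_zone, port) tuples. Before a policy is
    automated, each of these must be triaged: legitimate historical
    dependency to document, or toxic exception to remove.
    """
    return sorted(set(observed) - set(allowed))

observed = [("app", "db", 5432), ("app", "legacy-erp", 8080)]
allowed = [("app", "db", 5432)]
print(unidentified_flows(observed, allowed))  # [('app', 'legacy-erp', 8080)]
```

The arithmetic is trivial; the work is not. Every tuple in the output is a business trade-off that AI can surface but not settle, which is the integration debt the paragraph above describes.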

Edge vs cloud: inference at the edge is an operational challenge

When automation meets latency, sovereignty, or local operational constraints, the discussion shifts to the edge: bringing inference closer, acting faster, transmitting less data. On paper, the approach is clean.

In a real information system, it is operational complexity: hardware heterogeneity, patch management, distributed monitoring, policy consistency across sites, agent attack surfaces. The edge does not resolve complexity—it relocates it. A Zero Trust strategy does not oppose edge and cloud on principle; it arbitrates based on operational criteria: observability, patchability, auditability, policy consistency.

This arbitration inevitably leads to a recurring issue as soon as actions are automated: proof.

AI governance: traceability or nothing

As soon as a security decision relies on AI, performance is no longer the only concern. An explainable chain is required: traceability, input data control, drift measurement, testing, lifecycle security, delegation rules, autonomy boundaries. Without usable proof, an action becomes indefensible—especially when it blocks access.

This requirement challenges stacked architectures. The more the “signal → decision → action” chain crosses multiple layers, the more fragmented the explanation becomes. Forrester describes SASE as a structured landscape (historically approached through “Zero Trust Edge”) and highlights the need for rationalization: fewer layers, more consistent policies, less fragmented telemetry.³ In this context, governance is not about reassurance—it enables reconstruction of a sequence of events. Forrester is explicit: Zero Trust remains a strategy, not a product.⁵

The same applies to AI governance. Gartner positions AI TRiSM (AI Trust, Risk and Security Management) among its strategic trends: controls, guardrails, and the ability to govern AI as a critical component.⁶ Gartner also warns about risks linked to cross-border generative AI usage, reinforcing the need for traceability and control.⁷

The loop is closed: automating without governance leads to incidents. Governing without rationalization leads to unmanageable audits. And without identity and telemetry, nothing can be governed.

Three contexts, three implementations: automation follows constraints

A Zero Trust strategy does not deploy the same way across sectors. Availability, legacy systems, and audit requirements determine the acceptable level of automation.

Industry 4.0 (OT/IT).

In OT/IT (Operational Technology / Information Technology), the objective is not maximum hardening. It is to harden without disruption. Long-term observation, understanding legitimate flows, progressive segmentation, locking down human and technical access, bounded automation. Actions that “cut” remain rare. AI mainly stabilizes baselines and detects deviations.

Healthcare.

Availability is critical. A poorly triggered quarantine impacts care delivery. Targeted containment, temporary isolation, conditional access, auditable break-glass procedures: automation progresses, but under strict control. AI correlates and prioritizes; it does not decide alone.

Finance.

Real-time operations and auditability go hand in hand. Detecting quickly is not enough—it must be justified. An AI-influenced decision without usable proof becomes a compliance risk in addition to a cyber risk. Automation progresses when traceability keeps pace: signal, applied rule, confidence level, proportionate measure, rollback capability.
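The five elements listed above can be captured as a structured record emitted alongside every automated measure. The field names and example values below are illustrative, not a compliance schema.

```python
import json
from datetime import datetime, timezone

def decision_record(signal: str, rule: str, confidence: float,
                    measure: str, rollback_ref: str) -> str:
    """Serialize the five elements auditability requires: the signal,
    the applied rule, the confidence level, the proportionate measure,
    and the rollback capability. Field names are illustrative.
    """
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "signal": signal,
        "rule": rule,
        "confidence": confidence,
        "measure": measure,
        "rollback": rollback_ref,
    })

# Hypothetical example: a rate limit applied on an exfiltration signal.
record = decision_record(signal="ueba:possible-exfiltration",
                         rule="policy-rate-limit-v3",
                         confidence=0.83,
                         measure="rate_limit_egress",
                         rollback_ref="change-ticket-4821")
```

The point is not the serialization format but the contract: if any of the five fields cannot be filled at decision time, the decision is not yet automatable in this sector.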

What AI actually accelerates

AI delivers tangible gains: cross-domain correlation, prioritization, policy recommendations, gradual hardening, reversible automation. Large-scale autonomous execution remains gated by known prerequisites: usable identity, reliable telemetry, maintainable segmentation, operational governance.

The promises hold when the foundations exist. Without them, AI does not implement a Zero Trust strategy—it accelerates the speed of an information system that is already uncertain in its decision-making.
