OVHcloud Secret Manager meets External Secrets Operator: the new ESO OVHcloud provider brings Kubernetes secrets under control

Kubernetes has many talents: scheduling, self-healing, service discovery, turning YAML into existential dread. But it has never been particularly great at keeping secrets secret. Sure, it has Secret objects — but “base64-encoded” is not a synonym for “secure,” and anyone who has ever accidentally committed an imagePullSecret to Git can confirm that entropy is not a strategy.

That’s why external secret stores (and the tooling that connects them to Kubernetes) have become a default part of modern platform engineering. This week, OVHcloud added a new piece to that puzzle: an OVHcloud provider for External Secrets Operator (ESO) that integrates directly with OVHcloud Secret Manager — and it’s not just a lab experiment anymore.

In an OVHcloud Blog post published on April 14, 2026, Aurélie Vache (Developer Advocate at OVHcloud) introduces the new integration and walks through syncing a secret from OVHcloud Secret Manager into a Kubernetes cluster using ESO. The original article is here: Discover the External Secret Operator (ESO) OVHcloud Provider to manage your Kubernetes secrets.

This dorland.org deep dive expands on that announcement: what exactly changed (and why it’s important), how ESO fits into the wider ecosystem (including Secrets Store CSI Driver), what to watch out for in production, and how to design a sane secrets workflow that won’t bite you during your next on-call rotation.

What OVHcloud shipped: an official ESO provider for Secret Manager

OVHcloud’s headline is straightforward: OVHcloud Secret Manager is now in General Availability, and OVHcloud teams have developed an OVHcloud ESO provider that’s available starting with ESO v2.3.0.

That last detail matters: for a long time, users who wanted to integrate OVHcloud’s managed secrets service with ESO were effectively “pretending it was Vault” by using the HashiCorp Vault KV2-compatible API exposed by OVHcloud Secret Manager. That approach worked, but it was a bit like driving a screw with a butter knife: possible, but not what you’d recommend to colleagues you want to keep. The new provider gives a more native integration path, and OVHcloud’s own guide explicitly calls out ESO + OVHcloud Secret Manager as a supported pairing.

At the code level, too, the provider is now a first-class citizen in the external-secrets codebase: there is an ovh provider package that “implements a provider that enables synchronization with OVHcloud’s Secret Manager.”

Quick refresher: what External Secrets Operator actually does (and doesn’t)

The core ESO idea is simple: you store secrets in an external system (AWS Secrets Manager, Vault, Google Secret Manager, Azure Key Vault… and now OVHcloud Secret Manager), and ESO synchronizes those values into Kubernetes Secret objects so workloads can consume them the usual Kubernetes way.

ESO does this using Kubernetes Custom Resource Definitions (CRDs). The important ones are:

  • SecretStore / ClusterSecretStore: defines how to talk to the external backend and how to authenticate.
  • ExternalSecret: defines what to fetch (keys, properties, templates) and what Kubernetes object to write.
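In YAML, the pairing looks roughly like this. This is a generic sketch, not OVHcloud-specific: the names, namespace, and remote path are placeholders, and the CRD API version varies across ESO releases (check the CRDs installed in your cluster).

```yaml
# How the backend is reached and authenticated. The provider block is
# backend-specific (aws, vault, ovh, ...) and left empty here on purpose.
apiVersion: external-secrets.io/v1   # may differ on older ESO releases
kind: SecretStore
metadata:
  name: app-backend
  namespace: my-app
spec:
  provider: {}   # backend-specific configuration goes here
---
# What to fetch, and which Kubernetes Secret to write it into.
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: my-app
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: app-backend
    kind: SecretStore
  target:
    name: db-credentials        # the Kubernetes Secret ESO creates/updates
  data:
    - secretKey: password       # key inside the resulting Secret
      remoteRef:
        key: prod/my-app/db     # path in the external store (placeholder)
```

The split is deliberate: the store (how to connect) can be owned by a platform team, while the ExternalSecret (what to fetch) stays with the application team.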

The CNCF description of the project captures the “sync into Kubernetes Secrets” mission well, and also confirms ESO’s CNCF status: it was accepted to the CNCF Sandbox on July 26, 2022.

Now for the fine print: ESO is not a secret store. It doesn’t magically make Kubernetes a secure vault, and it doesn’t eliminate the need to harden Kubernetes itself. ESO is the bridge. Your security posture still depends heavily on:

  • How secure your external secret store is
  • How you authenticate ESO to that store
  • How you lock down Kubernetes secret access (RBAC, namespace boundaries, admission control)
  • Whether your cluster encrypts secret data at rest in etcd

Kubernetes Secrets: why “base64” is a problem (and what Kubernetes recommends)

Kubernetes Secrets are convenient, but Kubernetes documentation is blunt about the risks:

  • By default, Secret objects are stored unencrypted in etcd. Kubernetes recommends configuring encryption at rest.
  • Base64 encoding in YAML manifests is not encryption. If you share the manifest (or commit it), you’ve effectively leaked the secret.
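The second point is trivially demonstrable: base64 decoding requires no key at all. The “secret” value below is obviously a stand-in.

```shell
# base64 is an encoding, not encryption: anyone can reverse it instantly.
encoded=$(printf 'hunter2' | base64)
echo "$encoded"                      # prints aHVudGVyMg==
printf '%s' "$encoded" | base64 -d   # prints hunter2 -- no key involved
```

Anyone who can read the manifest can read the secret; that is the whole problem.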

So even if you use ESO, you should still consider a “belt-and-suspenders” setup:

  • External store as the source of truth (with audit logs, access control, versioning)
  • ESO as the sync mechanism
  • Kubernetes encryption at rest to reduce blast radius if etcd leaks
  • Least-privilege RBAC to reduce blast radius if someone’s kubeconfig leaks
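For the encryption-at-rest piece, the upstream mechanism is an EncryptionConfiguration file passed to the kube-apiserver via --encryption-provider-config. A sketch follows; on managed Kubernetes the provider typically controls this for you, and the key material shown is a placeholder.

```yaml
# kube-apiserver encryption-at-rest configuration (sketch).
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                # encrypt Secret objects in etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <32-byte key, base64-encoded>   # placeholder
      - identity: {}           # fallback so pre-existing plaintext stays readable
```

Provider order matters: new writes use the first provider, while later entries only allow reading data written before encryption was enabled.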

What is OVHcloud Secret Manager, and why tie it to Kubernetes?

OVHcloud positions Secret Manager as a managed service for storing secrets (API keys, credentials, etc.) with key features like versioning, access management (via IAM), and audit logging via OVHcloud Logs Data Platform.

From a platform engineering perspective, what you want from a managed secret store is boring reliability and clear control:

  • Centralization: one place to manage secrets across clusters and environments
  • Rotation and versioning: safe updates without “who changed prod?” archaeology
  • Auditability: who accessed what and when
  • Regionalization: keeping data in a region for compliance / sovereignty needs

But Kubernetes workloads still need secrets delivered somehow. You can:

  • Mount secrets directly from an external store using a CSI driver (more on that later)
  • Have apps call the external secret API at runtime (which forces app changes and adds failure modes)
  • Sync secrets into Kubernetes Secret objects (ESO’s bread and butter)

OVHcloud’s new provider strongly signals that they’re endorsing the third approach: keep the “source of truth” outside Kubernetes, but let workloads consume secrets using standard Kubernetes patterns.

The OVHcloud ESO provider: authentication and prerequisites

In the OVHcloud blog walkthrough, Aurélie Vache lists prerequisites including an OVHcloud account, an OKMS domain, an IAM local user, OVHcloud CLI, and a Kubernetes cluster.

The new OVH provider supports token and mTLS authentication, and the post demonstrates token auth by generating a personal access token (PAT) with the OVHcloud CLI.
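Whatever that token ends up being called in your setup, it has to land in a Kubernetes Secret that the store configuration can reference. A minimal sketch, where the name, namespace, and key are assumptions; match whatever your (Cluster)SecretStore expects.

```yaml
# Kubernetes Secret holding the PAT token that the store will reference.
# Name, namespace, and key below are illustrative, not prescribed.
apiVersion: v1
kind: Secret
metadata:
  name: ovh-secret-manager-token
  namespace: external-secrets
type: Opaque
stringData:
  token: <PAT token generated with the OVHcloud CLI>   # do not commit this
```

This Secret is the “secret zero” discussed later in this article: treat it accordingly.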

Why the auth method matters (a practical take)

Platform teams always end up debating authentication like it’s a philosophy seminar. Here’s the practical view:

  • Token-based auth is usually easier to bootstrap and automate, but now you’ve created “secret zero” (the token) that has to live somewhere.
  • mTLS can be great for strong client identity, but certificate lifecycle management can become a second job if you don’t already have PKI automation.

If you’re operating multiple clusters, the right choice is often: whichever option integrates best with your existing identity model and doesn’t require manual renewal rituals at 2 AM.

How it works in practice: ClusterSecretStore + ExternalSecret

The OVHcloud post shows a concrete, minimal pattern:

  • Create a secret in OVHcloud Secret Manager (example path: prod/eu-west-par/dockerconfigjson)
  • Install/upgrade ESO using Helm; you need ESO app version v2.3.0+ to use the OVHcloud provider
  • Create a ClusterSecretStore that points to the OKMS endpoint and references a Kubernetes Secret containing the PAT token
  • Create an ExternalSecret with a refreshInterval and a target template that outputs a kubernetes.io/dockerconfigjson Secret
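Put together, the two resources look roughly like this. Note the hedge: the exact schema of the ovh provider block (field names, auth stanza) is an assumption based on how ESO providers are typically structured; consult the provider documentation for the authoritative spec.

```yaml
# Sketch only -- verify field names against the ESO ovh provider docs.
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: ovhcloud-secret-manager
spec:
  provider:
    ovh:
      endpoint: <your OKMS endpoint URL>
      auth:
        token:
          secretRef:
            name: ovh-secret-manager-token   # Secret holding the PAT
            namespace: external-secrets
            key: token
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: registry-credentials
  namespace: external-secrets
spec:
  refreshInterval: 30m
  secretStoreRef:
    name: ovhcloud-secret-manager
    kind: ClusterSecretStore
  target:
    name: registry-credentials
    template:
      type: kubernetes.io/dockerconfigjson
      data:
        .dockerconfigjson: "{{ .config }}"
  data:
    - secretKey: config
      remoteRef:
        key: prod/eu-west-par/dockerconfigjson   # path from the walkthrough
```

The target template is what turns a plain fetched value into a typed imagePullSecret that existing tooling understands.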

That last bit is more significant than it looks. Container registries are where secrets go to misbehave: expirations, renamed robots, and “why did CI stop pulling images?” mysteries. Using ESO to produce a standard imagePullSecret (the Docker config JSON Secret type) is an immediately useful, real-world use case. It also shows the provider isn’t just a toy “hello world” integration.

Production note: watch your namespace boundaries

The OVHcloud example writes the token Secret and the ExternalSecret into the external-secrets namespace. That’s fine for demos. In production, you’ll want a pattern like:

  • ClusterSecretStore managed by the platform team (cluster-scoped)
  • ExternalSecret objects created by app teams in their own namespaces
  • RBAC so teams can’t read the platform’s auth token Secret
  • Policies so app teams can only reference allowed secret paths/prefixes in OVHcloud Secret Manager
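As a concrete RBAC building block for the third point, a namespaced Role can grant get on named Secrets only, rather than a blanket list/watch on everything. The names below are illustrative.

```yaml
# Read access to specific Secrets only -- no list, no watch, no wildcards.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-secrets-reader
  namespace: team-payments
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-credentials", "registry-credentials"]
    verbs: ["get"]
```

One caveat worth knowing: resourceNames cannot restrict list or watch, which is exactly why those verbs are omitted here.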

ESO supports a lot of these controls, but the defaults won’t save you. “It works” is not the same as “it’s safe.”

Why a native provider beats “Vault compatibility mode”

OVHcloud Secret Manager offers both a REST API and a HashiCorp Vault KV2-compatible API.

Compatibility APIs are fantastic for migrations and portability — but they often lag behind in vendor-specific capabilities, and they can hide important semantics behind a lowest-common-denominator interface.

A native provider can bring advantages such as:

  • Clearer configuration: using provider-specific fields rather than bending a generic interface
  • Better error messages: fewer “it failed somewhere in a compatibility shim” incidents
  • Faster access to new features: when Secret Manager adds capabilities, the provider can expose them directly
  • More predictable support: OVHcloud can document and validate the integration end-to-end

In other words: you still get portability from the service’s API choices, but you also get an integration path designed for the platform you’re actually using.

ESO vs Secrets Store CSI Driver: the “sync” vs “mount” decision

In Kubernetes secret management, there are two dominant patterns:

  • Sync into Kubernetes Secrets (ESO style)
  • Mount from external store at runtime (Secrets Store CSI Driver style)

Kubernetes itself highlights the CSI option in its secrets good practices: the Secrets Store CSI Driver “lets the kubelet retrieve Secrets from external stores, and mount the Secrets as a volume into specific Pods that you authorize.”

So which is better? It depends on what problem you’re trying to solve:

When ESO-style sync is a great fit

  • You have existing workloads that expect Kubernetes Secrets (environment variables, imagePullSecrets, ingress TLS Secrets, etc.)
  • You want to keep application code ignorant of the secret store API
  • You want GitOps-friendly declarations of “which secret goes where” without storing values
  • You’re okay with secrets existing inside the cluster (with encryption at rest and RBAC in place)

When CSI mounting can win

  • You want to reduce (or avoid) storing secret material in etcd at all
  • You want secrets to be mounted as files and potentially updated without rewriting Kubernetes Secret objects
  • You’re already comfortable with node-level components and CSI driver operational complexity

In reality, many organizations use both patterns depending on workload needs. ESO syncing an imagePullSecret is almost always simpler than teaching your entire deployment stack to pull images without Kubernetes Secrets. Meanwhile, runtime-only secrets for certain apps can be a strong CSI use case.
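For contrast, here is roughly what the CSI mounting pattern looks like from the workload side. The SecretProviderClass parameters are entirely provider-specific, so the vault value and the empty parameters block below are placeholders.

```yaml
# CSI pattern: no Kubernetes Secret object is required for the mount itself.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: vault      # whichever CSI provider you run (placeholder)
  parameters: {}       # provider-specific settings go here
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: registry.example/app:1.0
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets
```

Note that the app must read files instead of environment variables, which is one reason the sync pattern often wins for legacy workloads.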

Security and operational implications you should plan for

Let’s talk about what can go wrong — because it will — and how to avoid turning “secrets management” into “secrets incident response.”

1) Secret sprawl and access sprawl

External secret stores are supposed to centralize sensitive data, but integration tools can accidentally create sprawl in Kubernetes: lots of namespaces, lots of copied secrets, lots of RBAC rules that were “temporary” six months ago.

Mitigations:

  • Standardize secret naming conventions (and enforce them)
  • Limit who can create ExternalSecret objects (or require review)
  • Use multiple Secret Stores (dev/stage/prod) so accidental cross-environment reads are harder

2) Refresh intervals are not free

The example uses refreshInterval: 30m.

Refresh is a tradeoff:

  • Short intervals reduce “stale secret” windows
  • Long intervals reduce load on your secret backend and reduce churn in Kubernetes

If you have hundreds or thousands of ExternalSecrets, refresh can become a real operational concern (API rate limits, reconciliation storms, and the occasional “why is the control plane hot?” moment). Set refresh intervals intentionally — not by copy/pasting the first example you see.
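A back-of-envelope way to reason about it: the average number of backend reads per minute is roughly the number of ExternalSecrets divided by the refresh interval in minutes (ignoring jitter, caching, and batching, which vary by setup).

```shell
# Average backend reads per minute = secrets / interval_in_minutes.
secrets=2000
echo $(( secrets / 30 ))   # 30m interval -> 66 reads/min on average
echo $(( secrets / 1 ))    # 1m interval  -> 2000 reads/min
```

Going from 30 minutes to 1 minute multiplies backend load thirtyfold; that is the kind of number worth knowing before an API rate limit teaches it to you.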

3) Kubernetes still needs hardening (ESO is not a magical cloak)

Kubernetes recommends encryption at rest for Secrets, and also stresses least-privilege access patterns.

If you are syncing secrets into Kubernetes Secrets, do the basics:

  • Enable encryption at rest for Secrets in etcd
  • Avoid granting list / watch on Secrets broadly
  • Restrict who can exec into pods that mount secrets
  • Audit who can create pods: someone who can create a pod that references a Secret can often indirectly get the secret value (a classic foot-gun)

4) “Secret zero” is real

If you authenticate ESO to OVHcloud Secret Manager using a long-lived token stored as a Kubernetes Secret, congratulations: you now have a “secret that unlocks all secrets.” This isn’t unique to OVHcloud or ESO; it’s a universal bootstrap problem.

Mitigations include:

  • Scope the token’s permissions as narrowly as possible (least privilege)
  • Use separate IAM users/tokens per environment (and ideally per cluster)
  • Rotate tokens and validate your rotation process regularly (don’t wait for an incident)
  • Prefer stronger identity mechanisms when available (mTLS, or workload identity models where possible)

Why this is a meaningful move for OVHcloud (industry context)

Cloud providers have learned the hard way that “we have a secret manager” is not enough; the market expects integrations. Developers and platform teams don’t want to write glue code in every workload to talk to every vendor API. They want standard tools, standard CRDs, standard workflows.

ESO has become one of the standard pieces of that ecosystem. It’s a CNCF Sandbox project, widely used, and built around the Kubernetes extension model rather than bespoke sidecars.

By shipping an official provider:

  • OVHcloud makes Secret Manager more immediately usable for Kubernetes teams
  • It reduces friction for migrations (especially for teams standardizing on ESO across multiple clouds)
  • It reinforces OVHcloud’s story around IAM integration, audit logs, and centralized control (things secret managers are judged on)

And it’s happening at a good time: Kubernetes security posture is increasingly scrutinized, and secrets are still the most common “oops” vector. If your platform reduces the number of ways engineers can accidentally leak credentials, you’ve already improved your odds.

A practical reference architecture (for teams that want fewer surprises)

Here’s a reference model I’d recommend for a mid-sized organization running multiple clusters and multiple teams. It’s vendor-neutral in spirit, but maps cleanly to OVHcloud Secret Manager + ESO:

1) Define secret ownership clearly

  • Platform team owns the SecretStore/ClusterSecretStore configuration and auth method
  • App teams own which secrets their workloads need (ExternalSecret resources), within policy limits

2) Use path prefixes as policy boundaries

  • Example: prod/payments/* vs prod/marketing/*
  • Bind OVHcloud IAM permissions to those prefixes, not to “everything”

3) Enforce Kubernetes hardening as a prerequisite

  • Encryption at rest enabled for Secrets (etcd)
  • RBAC policies restricting Secret read access
  • Audit logs (Kubernetes + secret store)

4) Decide on rotation strategy per secret type

  • Human-managed secrets (rare): manual rotation with clear runbooks
  • Service credentials: rotate using external store versioning and update consumers via ESO refresh
  • Registry credentials: rotate frequently; verify that deployment tools re-pull images appropriately

Mini case study: image pull secrets (why this example is smart)

OVHcloud chose a kubernetes.io/dockerconfigjson target as its example, and that’s a telling choice.

Registry credentials are a high-leverage secret because they sit in the critical path for deployments:

  • If they expire, pods fail to pull images
  • If they leak, attackers can pull private images (and potentially reverse engineer your stack)
  • Teams often need them across many namespaces

By storing the Docker config JSON in Secret Manager and letting ESO hydrate it into the cluster, you get:

  • Central management and auditing in Secret Manager
  • A standard Kubernetes Secret type that works with existing tooling
  • A defined refresh cadence so updates propagate predictably
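On the consuming side, nothing changes for workloads: the synced Secret is referenced like any manually created pull secret. The name registry-credentials below is an assumption carried over from the earlier example.

```yaml
# Workloads consume the ESO-synced credential as a normal imagePullSecret.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: team-payments
spec:
  imagePullSecrets:
    - name: registry-credentials   # the Secret ESO keeps up to date
  containers:
    - name: app
      image: private-registry.example/app:1.0
```

That transparency is the whole point: the deployment stack never learns that Secret Manager exists.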

The remaining challenge is still operational: if you rotate registry credentials, make sure your deployment systems (and node caches) behave the way you expect. “Rotated successfully” is not the same as “every node is now able to pull.”

What to watch next

OVHcloud Secret Manager itself has signaled continued roadmap items (as seen on OVHcloud Labs), such as the ability to choose the encryption key, rotation notifications, and multi-region secret support.

On the ESO side, the provider ecosystem moves quickly. That’s great for innovation, but it also means you should treat upgrades as real changes (read release notes, test in staging, validate CRD versions). The OVHcloud blog is explicit that the OVH provider requires an ESO version at or above 2.3.0.

If you’re adopting OVHcloud Secret Manager today, the best long-term move is to treat secrets management as a product inside your org: versioned configs, tested upgrade paths, and clear responsibilities. The “just store it in Kubernetes” era is ending — not because it never worked, but because it fails too loudly when it fails.

Conclusion

OVHcloud’s new External Secrets Operator provider is the kind of integration that looks small in a changelog and huge in day-to-day operations. It turns OVHcloud Secret Manager from “a place to store secrets” into “a place to store secrets that Kubernetes can actually use without bespoke glue code.”

And if it helps even one team avoid hardcoding credentials into Helm charts or copying base64 strings into Slack, I’d call that a feature worth shipping.

Bas Dorland, Technology Journalist & Founder of dorland.org