Secrets Management: Vault, Dynamic Credentials, Rotation

Metasphere Engineering · 10 min read

Go to your company’s Slack right now and search for “here’s the password” or “temp key” or “use this token.” Do it. You will find something. A database password shared in a DM two years ago. An AWS access key pasted into a channel “just temporarily” that is still valid today. A contractor’s personal Slack account that still has a message containing your production API credentials from when they onboarded 18 months ago. Every organization that runs this search finds at least one live credential.

This is the most common secrets management “architecture” at companies that have not yet had a serious incident. Database passwords in .env files on developer laptops. The same credentials duplicated as CI/CD environment variables in GitHub Actions or CircleCI. A shared 1Password vault that six people who no longer work at the company still have access to. None of these are rotated. All of them grant full access to production. It works right up until it doesn’t.

When it breaks, because a developer’s laptop is stolen, a CI/CD system is compromised, or a contractor walks out with access they should have lost on their last day, it breaks badly. Breach investigations consistently show the credential had been exposed for an average of 197 days before anyone noticed. That is over six months of open access. Think about what an attacker can do with six months.

[Figure: animated timeline comparing credential exposure, static vs dynamic. A static credential created on day 1 is accidentally committed to git on day 7 and remains valid and exploitable through day 197: 190 more days of open access. A dynamic Vault credential with a 1-hour TTL, leaked the same way on day 7, had already expired six days earlier and is useless to the attacker. Net exposure: 197 days vs under 1 hour, a 4,728x reduction.]

Why Static Credentials Are the Root Problem

Static credentials create compounding risk over time. Every service that uses them, every developer who has seen them, every CI job that might have logged them to stdout expands the blast radius. And the blast radius only grows. Never shrinks. A database password created two years ago might have been seen by 30 developers, logged by 12 CI pipelines, and backed up to 4 different systems. You cannot track that exposure retroactively. The honest answer to “who has access to this credential?” is “we don’t know.”

The deeper problem is scope. A static database password grants full access to the database. Period. You cannot say “this password is only valid from the payment service, only during business hours, only for SELECT operations on the billing schema.” That level of control simply does not exist with static credentials. Dynamic, scoped credentials can express exactly that kind of policy. Vault’s database secrets engine generates a user with GRANT SELECT ON billing.* and a 1-hour TTL. If that credential leaks, an attacker gets read-only access to one schema for one hour. Compare that to a static password that grants full access forever. The difference is not incremental. It is a different security model entirely.

Secrets hygiene is not a one-time cleanup project. You cannot audit your way out of a bad architecture. It requires proper infrastructure: a central store, a distribution mechanism, and a rotation process that does not cause service outages. A mature cloud security posture addresses these patterns as foundational infrastructure.

So what does that infrastructure actually look like?

The Vault Architecture

HashiCorp Vault is the standard answer for enterprise secrets management, and for good reason. The core model is elegant: applications authenticate to Vault using a trusted identity (Kubernetes service account, AWS IAM role, GCP workload identity) and exchange that identity for a short-lived secret or token. No service ever holds a long-lived credential.

The architectural choice that matters most is where secrets get injected into applications. Three options, in order of increasing complexity:

Sidecar injection (Vault Agent runs as a Kubernetes sidecar, writes secrets to a shared tmpfs volume the application reads as a file). Operationally simplest. Keeps Vault logic out of application code. The application reads /vault/secrets/db-password like any config file. This is the right starting point for 80% of teams adopting Vault.

CSI driver (Vault CSI provider mounts secrets as a Kubernetes volume). Similar to sidecar but uses native Kubernetes volume semantics. Less resource overhead than running an agent sidecar per pod.

Direct SDK integration gives the finest control over lease renewal and secret caching but requires code changes in every service. Use this for applications that need to handle credential rotation gracefully in long-running connections, such as database connection pools that must re-authenticate mid-session.

For most teams adopting Vault for the first time, start with the sidecar approach. Do not overthink this. You can migrate individual services to SDK integration later when specific requirements demand it.
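With the sidecar approach, the application's only job is to read a file. A minimal sketch of that read path, assuming the Vault Agent template renders the password to `/vault/secrets/db-password` (the exact path and filename are whatever your agent config specifies):

```python
from pathlib import Path

# Path written by the Vault Agent sidecar template into the shared
# tmpfs volume. This path is an assumption; match it to your config.
SECRET_PATH = Path("/vault/secrets/db-password")

def read_db_password(path: Path = SECRET_PATH) -> str:
    """Read the secret fresh on each call so rotations are picked up.

    The sidecar rewrites this file before the lease expires; caching
    the value in a module-level variable would defeat that.
    """
    return path.read_text().strip()
```

The key design point is re-reading at connection time rather than at process startup, so a rotated credential takes effect without a restart.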

With the injection model sorted, the real power of Vault shows up in dynamic credential generation.

Dynamic Database Credentials

Database credentials are the highest-value attack target in most organizations and the best candidate for dynamic secrets.

With Vault’s database secrets engine, when a service needs database access it requests a credential from Vault. Vault connects to the database, creates a new user with a scoped role, and returns the username and password with a TTL. When the TTL expires, Vault revokes the user from the database. The application never holds a long-lived credential.

Connection pool management changes as a result, and this is where teams get burned. Since credentials rotate, applications need to handle credential expiry gracefully. The most common failure mode we see: a connection pool holds 20 connections using credentials that expired, and the next request fails with an authentication error that propagates as a 500 to the user. The fix is either catching authentication errors and transparently re-requesting credentials (retry once with fresh creds before failing), or using Vault Agent to handle renewal and file rotation before the TTL expires. This is a solvable problem. But solve it before you go to production, not during the outage that teaches you it exists. Solid application security practice addresses this at the application layer.

The complexity of dynamic credentials scales with the number of services consuming them.

Secret Sprawl in Microservice Architectures

A monolith might have 10 secrets. A microservice system with 50 services might have 500, each needing to be distributed, rotated, and audited independently. Without centralized management, those credentials scatter across environment variables, Kubernetes secrets (base64-encoded, not encrypted by default), config maps, CI/CD variables, and developer laptops. That is five places to check during an incident and five places where a credential can leak. Good luck auditing that during a breach response at 2 AM.

The naming convention for secrets in your vault matters far more than teams acknowledge. A consistent hierarchy like secret/{environment}/{service}/{secret-type} (e.g., secret/production/payment-svc/postgres) makes policy management tractable. You can write a Vault policy that says “the payment service in production can read secret/production/payment-svc/* and nothing else” in three lines. Inconsistent naming makes writing least-privilege policies a manual process that nobody maintains after the first six months. This pattern plays out repeatedly: teams start with good intentions, skip the naming convention, and end up with a vault full of secrets that no one can write coherent policies for.

Equally important: audit logging. Every secret access must produce a queryable audit log entry. When an incident occurs, you need to answer “which services read this database password, when, and from which IP?” If your secrets manager does not produce queryable audit logs, that question goes unanswered during the incident response. And unanswered questions during a breach turn into worst-case assumptions in the disclosure. This is a core requirement of environment automation done properly, and it directly supports the evidence collection requirements for SOC 2 compliance.

Before you can improve your secrets architecture, you need to know what is already exposed.

Finding What Is Already Exposed

Scan for what is already exposed in your existing repositories. You almost certainly have credentials in git history that you do not know about.

Run trufflehog git file://. --only-verified to scan git history for credentials that were committed and then deleted. Deleting a file from the working tree does not remove it from git history; that credential you “removed” three months ago is still reachable from every commit that contained it. gitleaks detect --source . provides similar coverage with different detection patterns. High-entropy string detection surfaces most API keys and tokens, and pattern matching handles known formats: AWS access keys (prefix AKIA), GitHub tokens (ghp_), Slack tokens (xoxb-), and dozens of other providers.

The result of this scan is almost always uncomfortable. Nobody runs this against a repository older than six months without finding at least one valid credential. Not once. Treat every exposed credential as compromised regardless of whether you have evidence of exploitation. Rotate first. Investigate second. That order is non-negotiable.

Pre-commit hooks running gitleaks before commits reach the repository are worth the minor developer friction. Configure an allowlist for known false positives (high-entropy test fixtures, example UUIDs) to keep friction low. Yes, a false positive is annoying. But an exposed production database password is a breach investigation, a credential rotation under pressure, and potentially a regulatory notification. Pick which kind of annoying you prefer. The shift-left security guide covers how to layer pre-commit hooks with CI-side scanning for 95%+ detection coverage.

The difference between static credential management and dynamic secrets infrastructure is the difference between a blast radius of “full database access, forever” and “one schema, one hour.” Organizations that make this architectural shift reduce their credential exposure window from months to hours. That is not a marginal improvement. That is a structural change in your security posture.

Secrets management is not a security checkbox. It is infrastructure that determines the blast radius of every credential compromise you will ever face. Teams that invest in centralized vault architecture with dynamic secrets and automated rotation eliminate an entire class of breach scenarios that static credentials make inevitable. The question is not whether a credential will leak. It is how much damage that leak can do when it happens.

Stop Gambling With Production Credentials

Leaked secrets cause breaches, and most organizations have credentials scattered across CI/CD environment variables, developer laptops, and Slack threads from years ago. Metasphere designs and implements enterprise secrets management architectures that eliminate static credentials, automate rotation, and give you full audit trails across your entire environment.

Secure Your Secrets

Frequently Asked Questions

What is the difference between secrets management and key management?

Key management (KMS) handles cryptographic keys for encryption and signing with hardware-backed storage. Secrets management is broader, covering database passwords, API tokens, TLS certificates, SSH keys, and all sensitive credentials. Most organizations need both: KMS for encryption key material and a secrets manager like Vault for the wider credential landscape. They complement each other and serve distinct use cases.

What are dynamic secrets and why are they better than static credentials?

Dynamic secrets are generated on demand with a fixed TTL (typically 1-24 hours) and expire automatically. Vault creates a unique database username per requesting service rather than sharing one static credential. A stolen dynamic credential expires within hours. A stolen static credential remains valid until detected, which averages 197 days (4,728 hours) according to breach investigation data. Against a 1-hour TTL, that is a 4,728x reduction in exposure window.

How do you rotate secrets without causing service outages?

Support two valid credential versions simultaneously. Generate the new credential, update the store with both active, deploy services to load the new one, verify via access logs, then revoke the old. Vault’s database engine handles this automatically. For third-party API keys, build dual-credential support into the application’s credential loading path before rotation is needed, not as an emergency retrofit during an incident.

What should we do if we find a secret in our git history?

Rotate the credential immediately, treating it as compromised regardless of evidence. Then remove it from history using git filter-repo (not the deprecated git filter-branch). Review audit logs for unauthorized access during the exposure window. Add gitleaks to pre-commit hooks and CI to prevent recurrence. Rotation must come before investigation since exploitation may already be in progress.

Should we use our cloud provider's native secrets service or HashiCorp Vault?

AWS Secrets Manager, GCP Secret Manager, and Azure Key Vault are solid for single-cloud workloads with tight IAM integration and auto-rotation for common databases. Vault fits better for multi-cloud or hybrid environments, dynamic secrets across diverse backends (databases, SSH, PKI, cloud IAM), or regulatory requirements mandating self-hosted infrastructure. Most teams start with cloud-native and move to Vault when they hit 3+ credential backends.