Secrets Management: Kill the Static Credential
Go to your company’s Slack right now and search for “here’s the password” or “temp key” or “use this token.” Do it. You’ll find something. A database password shared in a DM two years ago. An AWS access key pasted into a channel “just temporarily” that’s still valid today. A contractor’s personal account with a message containing your production API credentials from when they onboarded 18 months ago.
A master key copied 30 times, left in 30 drawers. Nobody changed the locks.
Every organization that runs this search finds at least one live credential sitting in plaintext where it shouldn’t be.
- Search your company’s Slack for “here’s the password.” You’ll find live credentials. Every organization does.
- Dynamic secrets with short TTLs collapse the exposure window from months to minutes. A temporary badge that expires in an hour. The credential dies before anyone could exploit a leak.
- Vault (or equivalent) as the single source of truth means applications request credentials at runtime. The badge office. No .env files. No CI/CD environment variables. No shared password managers.
- Pre-commit hooks catch secrets before they enter version control. The metal detector at the door. Once a secret hits Git, it lives in the history forever. Prevention beats cleanup.
- Automated rotation runs whether anyone remembers or not. Manual rotation schedules slip. Automated 90-day rotation just happens. Locks that change themselves.
Passwords in .env files. Credentials duplicated across CI/CD pipelines. A shared vault six former employees still have access to. Breach investigations keep finding credentials that sat exposed for months before anyone noticed. The root problem isn’t careless developers. It’s an architecture that makes static credentials the default and dynamic credentials the exception. Master keys everywhere. Temporary badges nowhere.
Why Static Credentials Are the Root Problem
A database password created two years ago. Seen by 30 developers. Logged by 12 CI pipelines. Backed up to 4 systems. Currently lives in a .env file on a laptop that might be in a coffee shop right now. The blast radius only grows. It never shrinks. Ask “who has access to this credential?” and the honest answer is: nobody knows.
Static credentials also can’t express policy. A password is all-or-nothing. You can’t scope it to one service, one schema, one hour. Dynamic credentials can. Vault generates a database user with GRANT SELECT ON billing.* and a 1-hour TTL. A temporary badge that opens one door for one hour. If that credential leaks, an attacker gets read-only access to one schema for 60 minutes. The static password? Full access, forever, for everyone who’s ever seen it.
You can’t audit your way out of this architecture. Scanning for exposed credentials is necessary, but it’s reactive. You’re looking for damage that already happened. Cloud security done properly treats secrets management as foundation infrastructure: a central store, a distribution mechanism, and a rotation process that doesn’t cause outages.
The Vault Architecture
Vault is the badge office. Applications prove their identity (Kubernetes service account, AWS IAM role, GCP workload identity) and receive a short-lived secret in return. No service ever holds a long-lived credential. The credential lifecycle flips: instead of “created once, forgotten, leaked eventually,” it becomes “issued on demand, scoped tightly, dead in an hour.”
How secrets reach your application matters as much as where they’re stored. Three injection models, escalating in complexity:
Sidecar injection. Vault Agent runs alongside your pod, writes secrets to tmpfs. Your app reads /vault/secrets/db-password like any config file. The badge slipped under the door. No code changes. No SDK dependency. The simplest path, and the right starting point for most teams. If you’re agonizing over which injection model to pick, this is the answer.
CSI driver. Same concept using native Kubernetes volume semantics. Lighter footprint than running an agent per pod. Feels more idiomatic to platform teams who care about Kubernetes-native patterns.
Direct SDK integration. Maximum control over lease renewal, caching, and credential lifecycle. Requires code changes in every service. Only justified for long-running database connections where the credential expires mid-session and the connection pool needs to handle the rotation gracefully.
Don’t: Spend three weeks debating injection models before writing a single Vault policy. Debating which door to use while the windows are open. The plumbing doesn’t matter if credentials are still in .env files.
Do: Start with sidecar injection. Migrate to SDK integration only when a specific service proves it needs the control. Most services never will.
Dynamic Database Credentials
Database credentials are what attackers actually hunt for, and they’re the best candidate for dynamic issuance.
A service requests database access. Vault creates a fresh database user with scoped permissions and a 1-hour TTL, hands back the credentials. When the TTL expires, Vault revokes the user from the database directly. The badge self-destructs whether anyone remembers to rotate it or not.
Connection pools trip up every team that deploys dynamic credentials without planning for renewal. The pool holds 20 connections authenticated with the old credential. Next request gets an authentication error. That cascades to a 500. The on-call engineer spends 45 minutes debugging a “database outage” before someone realizes the credentials expired and the pool didn’t notice.
Two fixes: catch authentication errors at the pool layer and retry once with fresh credentials, or let Vault Agent renew the lease before TTL expiry so the transition is invisible. Application security practices handle this at the code layer. Get renewal working in staging first. Discovering it during a production outage is a much worse learning environment.
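The first fix can be sketched as a thin wrapper around the pool's connect path. `connect` and `fetch_credentials` are stand-ins for your database driver and your Vault client; the point is the shape of the retry, not the specific libraries:

```python
from typing import Callable

class AuthError(Exception):
    """Raised when the database rejects the pool's cached credentials."""

class RetryingPool:
    """Catch auth failures at the pool layer; retry once with fresh creds."""

    def __init__(self, connect: Callable, fetch_credentials: Callable):
        self._connect = connect
        self._fetch = fetch_credentials
        self._creds = fetch_credentials()

    def connection(self):
        try:
            return self._connect(self._creds)
        except AuthError:
            # Credentials expired mid-session: refresh once and retry.
            # A second failure propagates -- that's a real outage, not
            # a rotation, and retrying forever would mask it.
            self._creds = self._fetch()
            return self._connect(self._creds)
```

Retrying exactly once is deliberate: it absorbs rotation without hiding genuine authentication failures behind a retry loop.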
Secret Sprawl in Microservice Architectures
A monolith has maybe 10 secrets. Fifty microservices have 500. Each one needs distribution, rotation, and audit. Without centralized management, credentials end up spread across environment variables, Kubernetes secrets (base64-encoded, not encrypted by default), config maps, CI/CD variables, and developer laptops. Five storage locations. Five leak vectors. Five places to frantically audit while the breach response clock ticks.
Naming conventions are the unglamorous decision that determines whether your Vault policies work or collapse into chaos. secret/{environment}/{service}/{secret-type} makes least-privilege policies tractable. Three lines of Vault policy: “payment service reads secret/production/payment-svc/* and nothing else.” Skip the convention because you’re “moving fast” and you end up with a vault nobody can write coherent policies for.
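With the convention in place, policies become mechanical. A hypothetical helper that renders the three-line policy from the article's path scheme (note: a KV v2 mount would insert data/ into the path; the layout itself is this convention, not a Vault default):

```python
def read_policy(environment: str, service: str) -> str:
    """Render a least-privilege Vault policy (HCL) from the
    secret/{environment}/{service}/{secret-type} convention."""
    return (
        f'path "secret/{environment}/{service}/*" {{\n'
        f'  capabilities = ["read"]\n'
        f'}}\n'
    )
```

Because every service's secrets live under one predictable prefix, "payment service reads its own secrets and nothing else" is a template, not a negotiation.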
Audit logging is non-negotiable. Every secret access produces a queryable log entry. During an incident, the question “which services read this credential, when, from which IP?” must have an answer. If it doesn’t, the breach disclosure defaults to worst-case assumptions. Environment automation bakes audit logging into the infrastructure layer.
| Dimension | Static Credentials (sprawled) | Dynamic Credentials (vault-managed) |
|---|---|---|
| Distribution | Scattered across 50 services in env vars, config files, CI/CD variables | Centralized in vault. Services authenticate to vault at runtime |
| Rotation | Manual. Average rotation: never (or annually under audit pressure) | Automatic. TTL-based: hours to days. Rotation is the default |
| Blast radius of leak | Permanent access until someone notices and rotates. Often months | Access expires with TTL. Leaked credential is useless within hours |
| Audit trail | Scattered. Which service used which credential when? Nobody knows | Centralized. Every access logged with service identity and timestamp |
| Revocation speed | Find every copy, rotate each one, redeploy affected services | Revoke in vault. All services get denied on next auth attempt |
| Onboarding a new service | Copy credentials from another service’s config. Hope they still work | Service authenticates to vault with its own identity. Zero credential copying |
Finding What Is Already Exposed
trufflehog git file://. --only-verified. Run it today. Not next sprint. Not after the current release. Today.
Deleting a file from the working tree doesn’t touch git history. That credential you “removed” three months ago is still in the commit history. The key you threw away. Git kept a copy. Git keeps everything. So does anyone who cloned the repo before your cleanup.
gitleaks detect --source . provides complementary coverage. High-entropy detection catches most API keys. Pattern matching handles known formats: AKIA for AWS, ghp_ for GitHub, xoxb- for Slack, dozens more. Between the two tools, very little hides.
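A toy version of the pattern-matching layer, covering only the three prefixes just mentioned. Real scanners ship hundreds of rules plus entropy analysis; these regexes are illustrative, not a substitute for trufflehog or gitleaks:

```python
import re

# Illustrative patterns only -- production scanners maintain far larger
# rule sets and add high-entropy detection for unprefixed keys.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),     # GitHub personal access token
    re.compile(r"xoxb-[0-9A-Za-z-]{10,}"),  # Slack bot token
]

def find_secrets(text: str) -> list[str]:
    """Return every substring matching a known credential format."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Wire something like this into a pre-commit hook (fed the staged diff, exiting non-zero on any hit) and the metal detector at the door is in place.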
The results are always uncomfortable. No repo older than six months comes back clean. (If yours does, your scanner is misconfigured.) Treat every exposed credential as compromised. Rotate first. Investigate second. Always in that order. The investigation can wait. The rotation cannot. Change the locks. Then look at the security footage.
- Secret scanning tools (trufflehog + gitleaks) installed and configured for your repository structure
- Pre-commit hooks blocking credentials from entering version control
- CI pipeline scanning as a second layer for anything pre-commit misses
- Credential rotation runbook with documented steps for each secret type
- Vault (or cloud-native equivalent) deployed with at least one auth backend configured
Pre-commit hooks add friction, and developers will grumble about false positives. Compare that friction to the alternative: a breach investigation, forced credential rotation under pressure, and possibly a regulatory notification letter. Shift-left security covers layering pre-commit with CI-side scanning for near-complete detection coverage.
Choosing Between Vault and Cloud-Native Secrets Services
| | HashiCorp Vault | AWS Secrets Manager / GCP Secret Manager / Azure Key Vault |
|---|---|---|
| Best for | Multi-cloud, hybrid, diverse credential backends | Single-cloud workloads with tight IAM integration |
| Dynamic secrets | Native. Database, SSH, PKI, cloud IAM, custom backends. | Limited. Supports common databases with auto-rotation. |
| Auth methods | Kubernetes, AWS IAM, GCP, Azure, LDAP, OIDC, AppRole | Cloud-native IAM only |
| Operational overhead | Significant. HA cluster, unsealing, upgrades, backups. | Managed. No infrastructure to run. |
| Cost model | Self-hosted infrastructure + licensing | Per-secret, per-API-call pricing |
| When to migrate | When you run three or more credential backends or need cross-cloud consistency | When cloud-native covers all backends and IAM integration is enough |
Most teams start with their cloud provider’s native service and move to Vault when they outgrow it. Running both during the switch is common, and that’s fine. The goal is centralized credential management, not tool purity.
What the Industry Gets Wrong About Secrets Management
“Encrypted .env files are secure enough.” Encrypted at rest, decrypted at runtime, never rotated, shared across every environment, and accessible to everyone who’s ever had access to the repository. A locked drawer with the key taped to the frame. Encryption without rotation and access control is security theater.
“Annual credential rotation meets the compliance requirement.” Annual rotation assumes you find every leak within 12 months. Most credential leaks are found during incident investigations, not scheduled audits. Changing the locks once a year. Assuming nobody copied the key in the meantime. A 90-day maximum TTL with automated rotation bounds the exposure window regardless of when (or whether) the leak is found.
A credential can sit in a .env file for three years without anyone noticing. Fix the credential lifecycle first. Everything else in the security roadmap is less impactful per engineering hour spent.

Run that Slack search again. “Here’s the password.” “Temp key.” “Use this token.” The messages are still there, but the credentials are dead. Temporary badges that expired hours ago, issued per-service and per-request, logged and auto-revoked. The contractor’s 18-month-old DM is an artifact now, not a vulnerability. The master key is gone. Every badge expires. Every door has its own lock.