Monorepo Strategy: Nx, Turborepo, and Bazel Compared
Your team migrated six repositories into a monorepo because someone in leadership came from a company that used one. Three months later, CI on main takes 45 minutes. Developers push to feature branches and merge late to avoid triggering the full build. The teams that had been most autonomous before the migration are the loudest about reverting.
One big shared workshop. All the tools in one place. But nobody organized the workbenches. Finding your tools takes longer than the actual work. Everybody’s tripping over everybody else’s projects.
No one set up affected-only task execution. No one configured remote caching. The monorepo was not the problem. The missing infrastructure was.
- Monorepos fail because of missing infrastructure, not because of the pattern. Affected-only task execution and remote caching are prerequisites, not optimizations you add later.
- 45-minute CI on main means developers stop merging. They push to branches and merge late. The coordination problem monorepos are supposed to fix gets worse, not better.
- Atomic cross-package changes are the killer feature. A type change in a shared library, updated in every consumer, validated together, in a single PR.
- CODEOWNERS enforces team boundaries within the shared repo. Without explicit ownership rules, the monorepo becomes everyone’s responsibility and therefore nobody’s.
- Repository structure doesn’t determine success. Team topology and tooling discipline do. The repo is a container.
What Monorepos Actually Solve
The core advantage is atomic commits across project boundaries. When your authentication library ships a breaking change, that change happens in a single PR alongside updates to every consuming service. Reviewers see the full impact. CI validates it together. Library and consumers are always in a consistent state.
In a polyrepo, the same change requires publishing the library, creating individual PRs in each consumer repository, then waiting for each team to review and merge on their own schedule. During that coordination window, some services run the old version and some run the new one. Parts don’t fit together until everyone catches up. If the change breaks in an unexpected way, it surfaces during an individual team’s upgrade, long after the library PR merged, with no easy path back.
Second advantage: code discoverability. When everything is in one place, engineers search the entire codebase before building something new. That shared utility someone wrote last quarter? In libs/utils, found in seconds. In a polyrepo, it lives in a repository a new team member has never heard of, on a wiki page last touched the day it was written. Duplicate implementations of the same logic quietly pile up.
Tooling That Makes or Breaks the Pattern
A monorepo without task orchestration doesn’t scale past a few dozen packages. Full stop. The naive approach of running all builds and tests on every commit produces CI times measured in hours. Migrate to a monorepo without investing in the tooling and you’ll spend the next year explaining why the migration made everything slower.
Nx, Turborepo, and Bazel all provide affected-only execution: PR modifies libs/payments? Only packages depending on it get rebuilt and retested. Nothing else runs. Remote caching takes it further: same inputs, same outputs, cached across machines. A build that passed on a colleague’s laptop doesn’t run again in CI. Remote caching alone delivers the biggest CI speedup most teams see.
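As a concrete sketch, here is a minimal Turborepo configuration that enables both behaviors. Task names and output paths are illustrative; note that newer Turborepo releases use a `tasks` key where older ones used `pipeline`:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"]
    }
  }
}
```

With this in place, `turbo run test --filter="...[origin/main]"` runs tasks only for packages changed since `origin/main` plus their dependents, and linking a remote cache (for example `turbo login` and `turbo link` against Vercel's hosted cache) lets CI reuse builds already produced on another machine.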
| Tool | Setup time | Language support | Remote caching | Best for |
|---|---|---|---|---|
| Turborepo | 2-4 hours | JS/TS primarily | Vercel Remote Cache | JS/TS monorepos up to ~100 packages |
| Nx | 1-2 days | JS/TS primarily, plugins for Go, Rust, Java | Nx Cloud | JS/TS monorepos up to ~200 packages |
| Bazel | 2-4 weeks | Any language with build rules | Remote execution | Large polyglot repos, 500+ packages |
Bazel is the most powerful option: hermetic builds, complete dependency declaration, true reproducibility. Also the most complex. Budget weeks of platform engineering setup. Nx and Turborepo are much easier for JavaScript/TypeScript projects. Start there. Reserve Bazel for when build reproducibility is a compliance requirement or you’ve passed 500+ packages across multiple languages.
- Affected-only task execution configured and validated (Nx, Turborepo, or Bazel)
- Remote caching operational and shared across CI agents and developer machines
- Module boundary enforcement rules defined and integrated into lint
- CODEOWNERS file mapping every directory to a responsible team
- CI pipeline tested with a representative PR to confirm times stay under 15 minutes for typical changes
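A hedged sketch of what the CI side of this checklist can look like with Nx on GitHub Actions. The workflow details are assumptions rather than a prescription; `nrwl/nx-set-shas` computes the base commit that `affected` diffs against:

```yaml
name: ci
on: [pull_request]
jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0        # full history so affected can diff against the base branch
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - uses: nrwl/nx-set-shas@v4       # sets NX_BASE / NX_HEAD for affected
      - run: npx nx affected -t lint test build   # only packages the PR touches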
Module Boundaries and Ownership
An organizational risk sneaks up on monorepo teams: having everything in one place makes it easy for everything to depend on everything else. A service quietly imports a utility from another service’s internal module. A shared library grows without bounds as teams add domain-specific helpers. Over 6-12 months, hidden coupling piles up until changes in one area trigger unexpected test failures across unrelated packages.
Nx’s enforce-module-boundaries ESLint rule defines which package types can import from which others based on tags. A type:app can import type:feature and type:lib. A type:lib cannot import type:feature. Domain packages only import from their own domain or shared utility libraries. Tape on the floor. Violations surface in the editor while typing, not during a production incident six weeks later.
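The tag rules described above can be sketched in an `.eslintrc.json` like this. The tags themselves are illustrative; the rule ships with Nx's ESLint plugin, and tags are assigned in each project's `project.json`:

```json
{
  "rules": {
    "@nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          { "sourceTag": "type:app", "onlyDependOnLibsWithTags": ["type:feature", "type:lib"] },
          { "sourceTag": "type:lib", "onlyDependOnLibsWithTags": ["type:lib"] },
          { "sourceTag": "domain:payments", "onlyDependOnLibsWithTags": ["domain:payments", "domain:shared"] }
        ]
      }
    ]
  }
}
```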
CODEOWNERS files handle review ownership. Teams own their directories. Changes to shared libraries require review from the platform team. Design ownership to mirror your actual organizational structure, not as an afterthought once teams have been stepping on each other’s code for months.
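A minimal CODEOWNERS sketch of that layout. Paths and team handles are hypothetical:

```
# Teams own their app and domain directories
/apps/checkout/    @acme/payments-team
/libs/payments/    @acme/payments-team
/apps/dashboard/   @acme/analytics-team

# Shared libraries require platform review
/libs/utils/       @acme/platform-team
/tools/            @acme/platform-team
```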
Don’t: Set up boundary enforcement after the third repository migration, once coupling patterns are already established. Retrofitting constraints means breaking existing imports across every team, all at once. Putting up walls in a room people are already using.
Do: Configure boundary enforcement before merging the second repository. The first migration is too small to feel the need. By the third, it’s already too late to introduce constraints painlessly. Constraints first, then merges.
When Polyrepo Is the Right Answer
| Dimension | Monorepo | Polyrepo |
|---|---|---|
| Cross-service changes | One PR, atomic | Multi-repo coordination |
| CI complexity | High (affected-only required) | Low per repo, high across fleet |
| Team autonomy | Lower (shared CI, shared rules) | Higher (independent pipelines) |
| Dependency management | Always latest (trunk-based) | Versioned, explicit upgrades |
| Access control | Repo-wide (or complex path-based rules) | Per-repo, straightforward |
| Best when | Frequent cross-service changes, shared libraries | Stable APIs, independent release schedules |
| Breaks when | No affected-only CI | Cross-repo changes happen weekly |
Polyrepo fits well when services have stable, versioned APIs and teams release on genuinely independent schedules. A financial services company where the payments team deploys on a strict change control window and the analytics team ships multiple times per day runs on a completely different clock. Coupling them in a shared repository creates coordination overhead that didn’t exist before. Tooling can’t fix a mismatch in tempo.
The microservice architecture principle of independent deployability maps naturally to independent repositories when interfaces are genuinely stable. The qualifier is “genuinely.” Many teams believe their services are more independent than their actual frequency of cross-service coordination suggests. If you’re making coordinated changes across 3+ repos more than twice a month, your polyrepo structure is fighting your actual coupling patterns.
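One way to measure this instead of guessing: extract ticket IDs from a month of commit subjects per repo (for example via `git log --since="1 month ago" --format=%s`) and count tickets that touched several repos. The sketch below is a rough heuristic with hypothetical repo and ticket names, not a standard tool:

```typescript
// Which tickets appear in commits across 3+ repos? A crude coupling signal.
function crossRepoTickets(commitsByRepo: Record<string, string[]>): string[] {
  const reposPerTicket = new Map<string, Set<string>>();
  for (const [repo, tickets] of Object.entries(commitsByRepo)) {
    for (const ticket of tickets) {
      if (!reposPerTicket.has(ticket)) reposPerTicket.set(ticket, new Set());
      reposPerTicket.get(ticket)!.add(repo);
    }
  }
  return [...reposPerTicket.entries()]
    .filter(([, repos]) => repos.size >= 3)
    .map(([ticket]) => ticket);
}

// Hypothetical month of commit data across four service repos
const commits = {
  "auth-service":      ["PAY-101", "AUTH-7"],
  "billing-service":   ["PAY-101", "BILL-3"],
  "gateway":           ["PAY-101", "GW-12"],
  "analytics-service": ["AN-44"],
};
console.log(crossRepoTickets(commits)); // PAY-101 touched three repos in one change
```

If tickets like PAY-101 show up every sprint, the coupling is real regardless of how the repositories are drawn.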
The polyrepo tooling investment most teams underestimate
Polyrepo coordination requires its own tooling investment that roughly equals the monorepo alternative. Renovate or Dependabot for automated dependency updates. Internal package registries with clear semantic versioning. Cross-repo search so engineers can find existing implementations. Dependency dashboards showing which services run which library versions. Release coordination tooling for breaking changes. The DevOps engineering investment at polyrepo scale is different from the monorepo investment, not necessarily smaller. Pick based on which coordination problems your teams actually face, not which repo structure a company with 10x your headcount chose.
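For the dependency-update piece, a minimal Renovate configuration might look like the sketch below. The preset and grouping choices are illustrative, and the `@acme/` scope is a hypothetical internal package namespace:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchPackagePatterns": ["^@acme/"],
      "groupName": "internal packages",
      "matchUpdateTypes": ["patch", "minor"],
      "automerge": true
    }
  ]
}
```

Grouping internal-package bumps into one automerged PR per repo keeps the fleet close to the latest library versions without manual review of every patch release.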
| Signal | Monorepo | Polyrepo | Hybrid (monorepo per domain) |
|---|---|---|---|
| Cross-repo changes | 2+ times/month across 3+ projects | Rare. Stable, versioned APIs | Mixed. Frequent within domain, rare across |
| Release cadence | Coordinated releases benefit from atomic commits | Independent deploy cycles per service | Domain-level coordination, cross-domain independence |
| Shared code | First-class. Internal packages, one upgrade | Via published packages + Renovate | Shared within domain monorepo, published across |
| CI investment | Nx/Turborepo + remote caching | Per-repo pipelines, simpler | Domain-scoped build graphs |
| Team coupling | Strong coupling. Teams touch same code | Loose coupling. Clear ownership boundaries | Domain teams coupled, cross-domain decoupled |
| Best when | Strong coupling, shared libraries, frequent cross-cutting changes | Stable interfaces, autonomous teams, independent deploy | Large orgs with domain boundaries but intra-domain sharing |
What the Industry Gets Wrong About Monorepos
“Google uses a monorepo, so should we.” Google’s monorepo works because Google invested thousands of engineering hours in Bazel, remote caching, and affected-only execution. Copying the repository structure without the tooling investment produces the worst of both worlds: the coordination overhead of a shared repo with none of the velocity benefits. Copying Google’s workshop layout without Google’s tool budget. Your workshop, their floor plan, no power tools.
“Monorepos prevent dependency conflicts.” Monorepos make dependency conflicts visible. They don’t prevent them. Without CODEOWNERS and boundary enforcement, a shared utility library gets modified by 8 teams with different needs, and conflict resolution moves from the package registry to the PR queue. Different venue, same problem.
“Polyrepos solve team autonomy.” Polyrepos provide autonomy only if interfaces are genuinely stable. When cross-repo changes happen weekly, polyrepos create coordination overhead that wouldn’t exist in a monorepo. Autonomy is a function of interface stability, not repository boundaries. Separate garages don’t help if you keep driving to each other’s garage for tools.
Six repos merged, CI at 45 minutes, developers dodging the build. With affected-only execution and remote caching, that same monorepo runs CI in minutes for a typical change. The repo was never the problem. The missing infrastructure was. Get the tooling right first, and the monorepo delivers exactly what leadership hoped for when they merged those repos in the first place.