
Monorepo Strategy: Nx, Turborepo, and Bazel Compared

Metasphere Engineering · 12 min read

Your team migrated six repositories into a monorepo because someone in leadership came from a company that used one. Three months later, CI on main takes 45 minutes. Developers push to feature branches and merge late to avoid triggering the full build. The teams that had been most autonomous before the migration are the loudest about reverting.

One big shared workshop. All the tools in one place. But nobody organized the workbenches. Finding your tools takes longer than the actual work. Everybody’s tripping over everybody else’s projects.

No one set up affected-only task execution. No one configured remote caching. The monorepo was not the problem. The missing infrastructure was.

Key takeaways
  • Monorepos fail because of missing infrastructure, not because of the pattern. Affected-only task execution and remote caching are prerequisites, not optimizations you add later.
  • 45-minute CI on main means developers stop merging. They push to branches and merge late. The coordination problem monorepos are supposed to fix gets worse, not better.
  • Atomic cross-package changes are the killer feature. A type change in a shared library, updated in every consumer, validated together, in a single PR.
  • CODEOWNERS enforces team boundaries within the shared repo. Without explicit ownership rules, the monorepo becomes everyone’s responsibility and therefore nobody’s.
  • Repository structure doesn’t determine success. Team topology and tooling discipline do. The repo is a container.
[Figure: Atomic commit vs. multi-day coordination. In the monorepo, a single PR modifies libs/auth (a breaking change) plus apps/billing, apps/orders, and apps/users; CI runs affected tests across all four packages and merges atomically, with zero version mismatches and about two hours elapsed. In the polyrepo, the same change spans four repos over four-plus days: auth-lib publishes v2.0 on day 1, billing-service upgrades on day 2, orders-service is still in review on day 4, users-service hasn't started, and the version mismatch window produces a runtime error in orders-service. The coordination cost scales with the number of repositories and teams involved.]

What Monorepos Actually Solve

The core advantage is atomic commits across project boundaries. When your authentication library ships a breaking change, that change happens in a single PR alongside updates to every consuming service. Reviewers see the full impact. CI validates it together. Library and consumers are always in a consistent state.

In a polyrepo, the same change requires publishing the library, creating individual PRs in each consumer repository, then waiting for each team to review and merge on their own schedule. During that coordination window, some services run the old version and some run the new one. Parts don’t fit together until everyone catches up. If the change breaks in an unexpected way, it surfaces during an individual team’s upgrade, long after the library PR merged, with no easy path back.

[Figure: Three problems monorepos solve. Atomic changes: API, client, and tests land in one PR, one review, validated atomically by CI, versus three polyrepo PRs that must merge in order. Shared tooling: one CI config, one linter, security scanning once, upgrades everywhere at once, versus 50 repos with 50 configs. Dependency visibility: "who uses this?" answered with one grep, impact visible pre-merge, versus checking every repo manually. Monorepos solve coordination; they don't solve organizational problems.]

Second advantage: code discoverability. When everything is in one place, engineers search the entire codebase before building something new. That shared utility someone wrote last quarter? In libs/utils, found in seconds. In a polyrepo, it lives in a repository a new team member has never heard of, on a wiki page last touched the day it was written. Duplicate implementations of the same logic quietly pile up.

Tooling That Makes or Breaks the Pattern

A monorepo without task orchestration doesn’t scale past a few dozen packages. Full stop. The naive approach of running all builds and tests on every commit produces CI times measured in hours. Migrate to a monorepo without investing in the tooling and you’ll spend the next year explaining why the migration made everything slower.

The Build Time Cliff: the CI duration at which developers stop merging to main and start working on long-lived feature branches. For most teams, the threshold sits around 15-20 minutes. Beyond that, the feedback loop is too slow and engineers change their workflow to avoid it. A 45-minute pipeline doesn’t just slow deployment. It changes how the team works.

Nx, Turborepo, and Bazel all provide affected-only execution: PR modifies libs/payments? Only packages depending on it get rebuilt and retested. Nothing else runs. Remote caching takes it further: same inputs, same outputs, cached across machines. A build that passed on a colleague’s laptop doesn’t run again in CI. Remote caching alone delivers the biggest CI speedup most teams see.
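The core of affected-only execution is a reverse walk of the package dependency graph. The sketch below shows the idea in miniature; the real tools combine this graph walk with file-level hashing, and the package names here are illustrative:

```typescript
// Minimal sketch of affected-only detection: given a package dependency
// graph and the set of changed packages, walk reverse dependencies to find
// everything that must be rebuilt.

type Graph = Record<string, string[]>; // package -> its dependencies

function affectedPackages(graph: Graph, changed: string[]): Set<string> {
  // Invert the graph: package -> packages that depend on it.
  const dependents: Record<string, string[]> = {};
  for (const [pkg, deps] of Object.entries(graph)) {
    for (const dep of deps) {
      (dependents[dep] ??= []).push(pkg);
    }
  }
  // Breadth-first walk from each changed package through its dependents.
  const affected = new Set(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const pkg = queue.shift()!;
    for (const dependent of dependents[pkg] ?? []) {
      if (!affected.has(dependent)) {
        affected.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return affected;
}

// Example: a change to libs/payments affects only its consumers.
const graph: Graph = {
  "libs/payments": [],
  "apps/billing": ["libs/payments"],
  "apps/orders": ["libs/payments"],
  "apps/docs": [],
};
console.log([...affectedPackages(graph, ["libs/payments"])].sort());
// ["apps/billing", "apps/orders", "libs/payments"] — apps/docs never builds
```

Everything outside the affected set is skipped or served from cache, which is why CI time tracks the size of the change rather than the size of the repo.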

[Figure: Monorepo CI builds only what changed. A merged PR touches packages/auth; affected detection (an Nx/Turborepo graph walk) finds auth, api, and web affected; those three build in parallel while docs, infra, and mobile are skipped; api is restored from the remote cache in 2 seconds instead of a 4-minute build. Out of 50 projects, 1 changed, 3 are affected, 1 is cached: 2 builds instead of 50.]
| Tool | Setup time | Language support | Remote caching | Best for |
|---|---|---|---|---|
| Turborepo | 2-4 hours | JS/TS primarily | Vercel Remote Cache | JS/TS monorepos up to ~100 packages |
| Nx | 1-2 days | JS/TS primary, plugins for Go, Rust, Java | Nx Cloud | JS/TS monorepos up to ~200 packages |
| Bazel | 2-4 weeks | Any language with build rules | Remote execution | Large polyglot repos, 500+ packages |

Bazel is the most powerful option: hermetic builds, complete dependency declaration, true reproducibility. Also the most complex. Budget weeks of platform engineering setup. Nx and Turborepo are much easier for JavaScript/TypeScript projects. Start there. Reserve Bazel for when build reproducibility is a compliance requirement or you’ve passed 500+ packages across multiple languages.

Prerequisites
  1. Affected-only task execution configured and validated (Nx, Turborepo, or Bazel)
  2. Remote caching operational and shared across CI agents and developer machines
  3. Module boundary enforcement rules defined and integrated into lint
  4. CODEOWNERS file mapping every directory to a responsible team
  5. CI pipeline tested with a representative PR to confirm times stay under 15 minutes for typical changes
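Prerequisites 1 and 2 typically come together in the CI pipeline itself. A hedged sketch of what that looks like with Nx on GitHub Actions (action versions and task names are illustrative):

```yaml
name: ci
on:
  pull_request:
jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0        # full history so Nx can diff against the base branch
      - uses: nrwl/nx-set-shas@v4  # sets NX_BASE / NX_HEAD for affected detection
      - run: npm ci
      # Only packages affected by this PR are linted, tested, and built;
      # remote caching (Nx Cloud) skips anything already built elsewhere.
      - run: npx nx affected -t lint test build
```

The representative-PR test in prerequisite 5 is then just opening a typical single-package change against this pipeline and checking the wall-clock time.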

Module Boundaries and Ownership

An organizational risk sneaks up on monorepo teams: having everything in one place makes it easy for everything to depend on everything else. A service quietly imports a utility from another service’s internal module. A shared library grows without bounds as teams add domain-specific helpers. Over 6-12 months, hidden coupling piles up until changes in one area trigger unexpected test failures across unrelated packages.

Nx’s enforce-module-boundaries ESLint rule defines which package types can import from which others based on tags. A type:app can import type:feature and type:lib. A type:lib cannot import type:feature. Domain packages only import from their own domain or shared utility libraries. Tape on the floor. Violations surface in the editor while typing, not during a production incident six weeks later.
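A minimal `.eslintrc.json` sketch of the tag rules described above (projects are tagged in their respective `project.json` files; the tag names mirror the prose and are otherwise illustrative):

```json
{
  "overrides": [
    {
      "files": ["*.ts", "*.tsx"],
      "rules": {
        "@nx/enforce-module-boundaries": [
          "error",
          {
            "depConstraints": [
              {
                "sourceTag": "type:app",
                "onlyDependOnLibsWithTags": ["type:feature", "type:lib"]
              },
              {
                "sourceTag": "type:lib",
                "onlyDependOnLibsWithTags": ["type:lib"]
              }
            ]
          }
        ]
      }
    }
  ]
}
```

An import from a `type:feature` package inside a `type:lib` package now fails lint in the editor and in CI, before it ever becomes load-bearing coupling.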

CODEOWNERS files handle review ownership. Teams own their directories. Changes to shared libraries require review from the platform team. Design ownership to mirror your actual organizational structure, not as an afterthought once teams have been stepping on each other’s code for months.
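A sketch of what that looks like in a `.github/CODEOWNERS` file (team handles and paths are illustrative; GitHub applies the last matching rule):

```
# Directory paths map to owning teams. Review from the owning
# team is required for any change under these paths.
/apps/billing/   @acme/billing-team
/apps/orders/    @acme/orders-team

# Shared libraries require platform-team review.
/libs/auth/      @acme/platform-team
/libs/utils/     @acme/platform-team

# CI and tooling config is platform-owned too.
/tools/          @acme/platform-team
/.github/        @acme/platform-team
```

Note the deliberate asymmetry: app directories belong to product teams, while anything shared routes through the platform team, mirroring the org chart rather than fighting it.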

Anti-pattern

Don’t: Set up boundary enforcement after the third repository migration, once coupling patterns are already established. Retrofitting constraints means breaking existing imports across every team, all at once. Putting up walls in a room people are already using.

Do: Configure boundary enforcement before merging the second repository. The first migration is too small to feel the need. By the third, it’s already too late to introduce constraints painlessly. Constraints first, then merges.

When Polyrepo Is the Right Answer

| | Monorepo | Polyrepo |
|---|---|---|
| Cross-service changes | One PR, atomic | Multi-repo coordination |
| CI complexity | High (affected-only required) | Low per repo, high across fleet |
| Team autonomy | Lower (shared CI, shared rules) | Higher (independent pipelines) |
| Dependency management | Always latest (trunk-based) | Versioned, explicit upgrades |
| Access control | Repo-wide (or complex path-based rules) | Per-repo, straightforward |
| Best when | Frequent cross-service changes, shared libraries | Stable APIs, independent release schedules |
| Breaks when | No affected-only CI | Cross-repo changes happen weekly |

Polyrepo fits well when services have stable, versioned APIs and teams release on genuinely independent schedules. A financial services company where the payments team deploys on a strict change control window and the analytics team ships multiple times per day runs on a completely different clock. Coupling them in a shared repository creates coordination overhead that didn’t exist before. Tooling can’t fix a mismatch in tempo.

The microservice architecture principle of independent deployability maps naturally to independent repositories when interfaces are genuinely stable. The qualifier is “genuinely.” Many teams believe their services are more independent than the actual frequency of cross-service coordination reveals. If you’re making coordinated changes across 3+ repos more than twice a month, your polyrepo structure is fighting your actual coupling patterns.

The polyrepo tooling investment most teams underestimate

Polyrepo coordination requires its own tooling investment that roughly equals the monorepo alternative. Renovate or Dependabot for automated dependency updates. Internal package registries with clear semantic versioning. Cross-repo search so engineers can find existing implementations. Dependency dashboards showing which services run which library versions. Release coordination tooling for breaking changes. DevOps engineering investment for polyrepo scale is different from monorepo scale, not necessarily lower. Pick based on which coordination problems your teams actually face, not which repo structure a company with 10x your headcount chose.
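As one concrete piece of that investment, automated dependency updates are usually a Renovate config checked into each repo (or shared from a central preset). A hedged sketch — the `@acme/` scope and grouping rules are illustrative:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchPackagePatterns": ["^@acme/"],
      "groupName": "internal packages",
      "matchUpdateTypes": ["patch", "minor"],
      "automerge": true
    }
  ]
}
```

Grouping and auto-merging non-breaking internal updates keeps the fleet close to the latest library versions, which is the polyrepo approximation of what a monorepo gets for free.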

| Signal | Monorepo | Polyrepo | Hybrid (monorepo per domain) |
|---|---|---|---|
| Cross-repo changes | 2+ times/month across 3+ projects | Rare; stable, versioned APIs | Mixed: frequent within domain, rare across |
| Release cadence | Coordinated releases benefit from atomic commits | Independent deploy cycles per service | Domain-level coordination, cross-domain independence |
| Shared code | First-class: internal packages, one upgrade | Via published packages + Renovate | Shared within domain monorepo, published across |
| CI investment | Nx/Turborepo + remote caching | Per-repo pipelines, simpler | Domain-scoped build graphs |
| Team coupling | Strong: teams touch same code | Loose: clear ownership boundaries | Domain teams coupled, cross-domain decoupled |
| Best when | Strong coupling, shared libraries, frequent cross-cutting changes | Stable interfaces, autonomous teams, independent deploy | Large orgs with domain boundaries but intra-domain sharing |

What the Industry Gets Wrong About Monorepos

“Google uses a monorepo, so should we.” Google’s monorepo works because Google invested thousands of engineering hours in Bazel, remote caching, and affected-only execution. Copying the repository structure without the tooling investment produces the worst of both worlds: the coordination overhead of a shared repo with none of the velocity benefits. Copying Google’s workshop layout without Google’s tool budget. Your workshop, their floor plan, no power tools.

“Monorepos prevent dependency conflicts.” Monorepos make dependency conflicts visible. They don’t prevent them. Without CODEOWNERS and boundary enforcement, a shared utility library gets modified by 8 teams with different needs, and conflict resolution moves from the package registry to the PR queue. Different venue, same problem.

“Polyrepos solve team autonomy.” Polyrepos provide autonomy only if interfaces are genuinely stable. When cross-repo changes happen weekly, polyrepos create coordination overhead that wouldn’t exist in a monorepo. Autonomy is a function of interface stability, not repository boundaries. Separate garages don’t help if you keep driving to each other’s garage for tools.

Our take: Invest in affected-only task execution before merging repos. Not after. Nx, Turborepo, or Bazel with remote caching should be configured and validated before the first polyrepo is merged in. The window between “repos merged” and “tooling working” is the window where you lose the teams that were happiest in polyrepo. Close that window before it opens.

Six repos merged, CI at 45 minutes, developers dodging the build. With affected-only execution and remote caching, that same monorepo runs CI in minutes for a typical change. The repo was never the problem. The missing infrastructure was. Get the tooling right first, and the monorepo delivers exactly what leadership hoped for when they merged those repos in the first place. Same repo. The tooling made the difference.

Your CI Takes 45 Minutes and Nobody Merges

Affected-only execution, remote caching, and module boundary enforcement turn a monorepo from a bottleneck into the coordination layer it was supposed to be. The difference between a 45-minute pipeline and a 3-minute one is tooling discipline, not repository structure.


Frequently Asked Questions

What is the main advantage of a monorepo for a software company?


Atomic cross-project changes. When a shared library has a breaking change, you update all consumers in a single commit and CI validates everything together before merge. In a polyrepo, the same change needs coordinated PRs across multiple repositories and a window where some services run v1 and others run v2. Monorepos also make code discoverability trivial, and organizations that measure it consistently find fewer duplicate implementations of common logic across teams.

What tooling is required to make a monorepo work at scale?


Affected-only task execution and remote caching are non-negotiable. Without them, CI times grow linearly with repository size, routinely passing tolerable thresholds once you pass a few dozen packages. Nx, Turborepo, and Bazel compute the dependency graph and run only affected tasks. Remote caching cuts CI time sharply by sharing build artifacts across machines and developers. Module boundary enforcement prevents the circular dependencies that make large monorepos brittle at the 100+ package scale.

How do you maintain code ownership in a monorepo?


CODEOWNERS files map directory paths to responsible teams. In a monorepo, apps/billing belongs to the billing team, libs/auth belongs to the platform team. Changes to those paths need review from the designated owner. Design the ownership model deliberately upfront. Without it, teams modify each other’s code without proper review, or informal social rules emerge that break down as the team scales.

What does a polyrepo do better than a monorepo?


Polyrepos provide harder isolation boundaries. Teams can use different stacks, deploy independently, and manage access control per repository. A contractor can access one service repository without seeing others. If your teams genuinely operate with stable interfaces and independent release cadences, the coordination overhead of a monorepo often exceeds the benefit. The key word is “genuinely”: many teams believe their services are more independent than they actually are.

Can you migrate from polyrepo to monorepo without losing git history?


Yes. Tools like git filter-repo and git subtree merge repositories while preserving full commit history. The practical challenge is migrating CI pipelines, deployment automation, and developer workflows. A typical migration of 10-20 repositories takes 4-8 weeks including tooling setup. Start with the 3-5 repositories that have the most cross-repo coordination overhead. They give the highest payoff and prove the approach before you migrate the rest.