
SPA Architecture: Rendering Strategies for Scale

Metasphere Engineering · 16 min read

You click a link in your SPA and nothing happens. Two full seconds of nothing. The URL changed. The previous page disappeared. A spinner materialized. Meanwhile, your JavaScript bundle is downloading, parsing, and executing before the browser can paint a single pixel of the content you asked for. An empty restaurant where the waiter hands you the recipe and expects you to cook. Blank white screen. Lighthouse scores crater. Core Web Vitals fail. And the team starts debating whether to “just add SSR” as if flipping a toggle on a quiet afternoon will fix an architectural decision baked into the foundation.

Key takeaways
  • CSR adds seconds to LCP on median mobile connections. The browser downloads, parses, and runs JS before painting anything.
  • SSR with streaming sharply cuts TTFB by sending HTML chunks as they resolve. The single biggest improvement for most SPAs.
  • RSCs (React Server Components) remove most client JS for data-heavy pages. Components that only render on the server never ship to the browser.
  • SSG wins for content changing less than once per minute with manageable page counts. Pre-plated dishes. Beyond tens of thousands of pages, build times become prohibitive. ISR fills the gap.
  • Route-based code splitting is the right default. Cuts the majority of the initial bundle. Component-based splitting adds value only for heavy widgets like chart libraries.

You cannot bundle-split your way out of a bad rendering decision.

The Four Rendering Strategies

Each one trades off build-time cost, server-time cost, and client-time cost. Two things decide which one fits: how often the content changes, and how personalized it is per request.

| Strategy | Content profile / best for | FCP | TTI | SEO | Server cost | Trade-off |
| --- | --- | --- | --- | --- | --- | --- |
| CSR (Client-Side Rendering) | Fully personalized, real-time: authenticated dashboards, admin panels | Slow (1.5-3s blank screen) | Slow (JS must download, parse, execute, fetch data) | Poor without prerendering | None (client bears the cost) | Simplest architecture; worst initial performance |
| SSR (Server-Side Rendering) | Personalized per request: e-commerce PDPs, search results, social feeds, user profiles | Fast (~200ms server render) | Delayed (hydration gap: 200ms-1.2s) | Good | Per-request compute | Server cost on every request; hydration gap frustrates users |
| SSG (Static Site Generation) | Same for all users: marketing pages, docs, blogs | Fastest (~50ms from CDN) | Fastest (FCP ≈ TTI; ~100ms when hydrated) | Excellent | Build-time only | Content is static until rebuild; not for per-user content |
| ISR (Incremental Static Regeneration) | Semi-static, periodic updates: product catalogs, listings, CMS-driven content | Fast (CDN-served, stale-while-revalidate) | Fast | Excellent | On-demand regeneration | Stale-content window between regenerations; cache-invalidation complexity |

CSR makes sense for authenticated, interactive apps where search indexing doesn’t matter. Need crawlability? CSR is the wrong pick.

SSR wins when content is personalized per request but needs to be crawlable. The cost is server compute on every request. Plan for scaling from the start, or a traffic spike becomes a rendering outage.

SSG is the performance ceiling. Pre-built HTML from a CDN. The fastest LCP achievable. The constraint: any content change requires a rebuild. For content changing less often than once per minute with manageable page counts, SSG should be the default.

ISR bridges the gap. Statically generated, regenerated in the background when stale. A 60-second revalidation window is fine for catalogs. Unacceptable for a stock ticker. Know your freshness requirements before choosing.
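The serve-stale, regenerate-in-background behavior is easy to model. The sketch below is an illustrative in-memory version, not any framework's real implementation; the `renderPage` callback and the 60-second window are assumptions for the example.

```javascript
// Minimal stale-while-revalidate model of ISR (illustrative, not a framework API)
const REVALIDATE_MS = 60_000;     // 60-second revalidation window
const cache = new Map();          // path -> { html, generatedAt }

function getPage(path, renderPage, now = Date.now()) {
  const entry = cache.get(path);
  if (!entry) {
    // First request: render and cache (a real framework pre-builds this)
    const fresh = { html: renderPage(path), generatedAt: now };
    cache.set(path, fresh);
    return { html: fresh.html, stale: false };
  }
  const stale = now - entry.generatedAt > REVALIDATE_MS;
  if (stale) {
    // Serve the stale copy immediately; regenerate in the background
    queueMicrotask(() => {
      cache.set(path, { html: renderPage(path), generatedAt: now });
    });
  }
  return { html: entry.html, stale };
}
```

The visitor who triggers regeneration still gets the stale copy instantly; only a later visitor sees the fresh one. The stale-content window therefore equals the revalidation interval.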

The Hydration Problem

The server sends fully rendered HTML. The browser paints it. Then the page goes dead while React “hydrates,” re-attaching event listeners to the already-visible markup. The user sees a fully rendered page, clicks the checkout button, and nothing happens. They click again. And again. Rage-click five times and leave.

On a mid-range Android device over 4G, hydration can take over a second. Your LCP looks great because the HTML painted fast. Your INP tells the real story: the page looks ready but cannot respond to interaction.

[Figure: The Hydration Gap: Looks Ready, Isn't. Timeline of an SSR page load: HTML arrives at 200ms (FCP) and the page looks complete; the JS bundle downloads until 800ms; hydration runs until 1200ms (TTI). During the 200-1200ms window users see buttons, but clicks do nothing and forms don't submit. FCP is a lie. TTI is the truth. The gap between them is user frustration.]
The Hydration Dead Zone: the period between First Contentful Paint and Time to Interactive where the page appears fully rendered but cannot respond to user input. Clicks during this window are silently dropped. The page looks ready. It isn't. On content-heavy pages with mid-range phones, this dead zone easily passes one second.

Three approaches attack this from different angles:

Selective hydration (React 18+) wraps non-critical components in <Suspense>. React hydrates critical interactive elements first and defers the rest. The page feels interactive sooner because the browser prioritizes what the user can see and click.
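In code, this is just Suspense boundaries on a server-rendered page. A hedged sketch, where `BuyButton`, `Reviews`, `Recommendations`, and `Skeleton` are invented component names:

```javascript
import { Suspense } from 'react';

// React 18 hydrates the tree outside Suspense boundaries first, then each
// boundary independently as its code and data arrive.
function ProductPage() {
  return (
    <main>
      <BuyButton /> {/* critical: hydrated first, interactive sooner */}
      <Suspense fallback={<Skeleton />}>
        <Reviews /> {/* deferred: hydrates after critical content */}
      </Suspense>
      <Suspense fallback={<Skeleton />}>
        <Recommendations /> {/* deferred independently */}
      </Suspense>
    </main>
  );
}
```

If the user clicks inside a boundary that has not yet hydrated, React bumps that boundary's priority, which is the "selective" part of selective hydration.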

Resumability (Qwik’s approach) serializes component state into HTML so the client resumes without re-executing JavaScript. Near-zero hydration cost in theory. In practice, the ecosystem is a fraction of React’s. Worth watching. Not production-ready for most teams today.

React Server Components skip the client entirely for non-interactive components. A product description that just renders text stays server-only. Zero client JavaScript for that component. Interactive components still hydrate, but the total JS needing hydration shrinks a lot on content-heavy pages.
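The split looks like this in an RSC-enabled setup. A hedged sketch: the file names and `getProduct` helper are invented; the `'use client'` directive is the real mechanism.

```javascript
// ProductDescription.jsx: a Server Component (the default). Renders to HTML
// on the server and ships zero JavaScript to the client.
export default async function ProductDescription({ id }) {
  const product = await getProduct(id); // direct server-side data access
  return <section>{product.description}</section>;
}

// --- AddToCart.jsx (separate file): opts into the client bundle ---
'use client';
import { useState } from 'react';

export default function AddToCart() {
  const [added, setAdded] = useState(false);
  return (
    <button onClick={() => setAdded(true)}>
      {added ? 'Added' : 'Add to cart'}
    </button>
  );
}
```

Only the second component hydrates; the first contributes HTML and nothing else.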

[Figure: Rendering Strategy Timelines, showing time to first meaningful content on mid-range mobile over 4G. CSR: blank screen while JS downloads and executes; LCP ~1800ms. SSR: server render then hydration penalty; FCP ~400ms, TTI ~1100ms. SSG: FCP ~80ms, TTI ~350ms. Streaming SSR with selective hydration: shell TTFB ~80ms, TTI ~600ms, combining the TTFB of SSG with the personalization of SSR and shrinking the hydration penalty from 700ms to 200ms.]

Streaming SSR: Why TTFB Is Hostage to Your Slowest API

Traditional SSR waits for the entire page to render before sending any HTML. Three API calls at 50ms, 200ms, and 800ms? The browser waits 800ms before seeing a single byte. TTFB is hostage to the slowest dependency.

Streaming SSR sends HTML as each section resolves. The browser gets the shell and header while the recommendations API is still processing. TTFB drops from “slowest API call” to “first resolved section,” because the browser starts painting without waiting for every data source to finish.

| Phase | Traditional SSR | Streaming SSR |
| --- | --- | --- |
| Request received | Server starts fetching ALL data sources | Server sends HTTP headers + shell HTML immediately |
| First data ready | Waits; nothing sent until all data resolves | Streams first chunk (header, nav, above-fold content); browser starts painting |
| Second data ready | Still waiting for remaining sources | Streams second chunk; browser renders piece by piece |
| Slowest data ready | NOW sends complete HTML; user sees everything at once | Streams final chunk; page has been interactive for seconds already |
| TTFB | Hostage to slowest data source; 800ms+ common | <100ms (shell sent immediately) |
| User perception | Blank screen for 800ms, then full page appears | Content appears in stages; feels faster even if total time is the same |

Streaming breaks the dependency between TTFB and data fetching. Your slowest API call no longer holds the page hostage.
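The arithmetic behind this can be sketched in a few lines. A toy model, with the example latencies (50ms, 200ms, 800ms) taken from above and all network and render overhead ignored:

```javascript
// Toy model: when does the browser receive its first byte?
// sections maps each data source to how long it takes to resolve (ms).
function firstByte(sections, { streaming }) {
  const latencies = Object.values(sections);
  if (!streaming) {
    // Traditional SSR: nothing is sent until every section has resolved.
    return Math.max(...latencies);
  }
  // Streaming SSR: the static shell flushes before any data resolves.
  return 0;
}

function flushOrder(sections) {
  // Streaming flushes each section the moment its data is ready.
  return Object.entries(sections)
    .sort(([, a], [, b]) => a - b)
    .map(([name]) => name);
}

const sections = { header: 50, results: 200, recommendations: 800 };
```

Traditional SSR's first byte waits 800ms for the slowest source; streaming flushes the shell at once and each section as its data lands.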

Remix’s loader pattern and Next.js App Router both handle streaming natively. In Remix, each route module exports a loader and the framework streams as data resolves; in the App Router, Suspense boundaries around async Server Components do the same job. No manual chunk management.
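On the Remix side, the pattern looks roughly like this. A hedged sketch: the route path and the `getProduct`/`getRecommendations` helpers are invented, while `defer` and `Await` are Remix's actual streaming primitives.

```javascript
// routes/product.$id.jsx (hypothetical route)
import { defer } from '@remix-run/node';
import { useLoaderData, Await } from '@remix-run/react';
import { Suspense } from 'react';

export async function loader({ params }) {
  return defer({
    product: await getProduct(params.id),           // critical: block on it
    recommendations: getRecommendations(params.id), // slow: no await, streams later
  });
}

export default function Product() {
  const { product, recommendations } = useLoaderData();
  return (
    <main>
      <h1>{product.name}</h1>
      <Suspense fallback={<p>Loading recommendations…</p>}>
        <Await resolve={recommendations}>
          {(recs) => <ul>{recs.map((r) => <li key={r.id}>{r.name}</li>)}</ul>}
        </Await>
      </Suspense>
    </main>
  );
}
```

The awaited field blocks the initial shell; the un-awaited promise streams in under the Suspense fallback once it resolves.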

Anti-pattern

Don’t: Assume CDN caching works with streaming SSR. Most CDNs buffer the full response before caching, negating the streaming benefit entirely. Your most expensive pages to render are exactly the ones that benefit most from streaming and least from caching.

Do: Configure per-route cache behavior. Cloudflare and Fastly support streaming responses, but need explicit configuration. Static pages get CDN caching. Personalized pages get streaming without caching. Trying to do both on the same route is a trap.

Bundle Splitting That Works at Scale

The default output for a production React app is a single bundle. A mature application easily grows to hundreds of kilobytes gzipped. Your homepage visitor downloads the entire admin panel’s JavaScript. Route-based splitting is the highest-impact fix:

// Before: everything in one bundle
import Dashboard from './pages/Dashboard';
import Settings from './pages/Settings';

// After: each route is a separate chunk, loaded on navigation
import { lazy, Suspense } from 'react';

const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));

// Lazy routes must render inside a Suspense boundary, e.g.
// <Suspense fallback={<Spinner />}><Dashboard /></Suspense>

This alone typically cuts the majority of initial bundle size. The browser downloads only the JavaScript for the current route. Subsequent navigation loads additional chunks on demand.

Component-based splitting adds value for heavy widgets that appear on some routes but not others. A rich text editor (200KB+), a charting library (150KB+), a map component (300KB+). The threshold: if a component adds more than 20KB gzipped to a chunk and is not visible above the fold, split it.

Below 20KB the HTTP request overhead outweighs the bundle savings. Twenty 5KB chunks perform worse than one 100KB bundle. Over-splitting is a real trap.
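A back-of-envelope model shows why. The overhead and throughput numbers below are assumptions for illustration (roughly a slow 4G connection with a parallelism limit of six), not measurements:

```javascript
// Rough cost model for chunking: total time = request-overhead waves + transfer.
const OVERHEAD_MS_PER_REQUEST = 100; // RTT + queueing on a slow connection
const KB_PER_MS = 0.5;               // ~4 Mbps effective throughput

function loadTimeMs(chunkSizesKb, parallelism = 6) {
  // Simplification: requests beyond the parallelism limit queue in waves.
  const waves = Math.ceil(chunkSizesKb.length / parallelism);
  const transfer = chunkSizesKb.reduce((a, b) => a + b, 0) / KB_PER_MS;
  return waves * OVERHEAD_MS_PER_REQUEST + transfer;
}

const oneBundle = loadTimeMs([100]);                // 1 wave of overhead
const twentyChunks = loadTimeMs(Array(20).fill(5)); // 4 waves of overhead
```

Under these assumptions the twenty-chunk build spends four waves of request overhead moving the same 100KB. That is the over-splitting trap in numbers.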

| Splitting strategy | When to use | Typical savings | Implementation |
| --- | --- | --- | --- |
| Route-level splitting | Default for every SPA; each route loads only its own code | 40-60% reduction in initial bundle | React.lazy + Suspense, Next.js automatic, Vite dynamic imports |
| Component-level splitting | Heavy widgets above 20KB (rich text editors, charts, maps) | 15-30% additional savings on routes that contain heavy components | Dynamic import at component boundary; load on user interaction, not on route entry |
| Vendor chunking | Separate stable third-party code from volatile app code | Cache-hit-rate improvement (vendor chunk changes rarely) | Webpack splitChunks or Vite manualChunks for node_modules |
| Shared module dedup | Common utilities imported by multiple routes | Prevents duplicate code across route chunks | Build tool handles automatically; verify with a bundle analyzer |
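Vendor chunking in Vite is a small config change. A sketch of a `vite.config.js` using Rollup's `manualChunks` hook; the single-vendor-chunk policy here is a simplification, and large apps often split vendors further:

```javascript
// vite.config.js: keep stable third-party code in its own long-cacheable
// chunk, separate from volatile application code.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          // Everything from node_modules goes into a shared "vendor" chunk;
          // it only changes when dependencies change, so it stays cached.
          if (id.includes('node_modules')) return 'vendor';
        },
      },
    },
  },
});
```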

Core Web Vitals: Where Each Rendering Strategy Pays Its Tax

LCP suffers under CSR because JavaScript must execute before rendering. SSR and SSG remove the fetch-parse-execute chain from the critical path entirely. The improvement on mobile is measured in full seconds, often the difference between passing and failing Core Web Vitals thresholds.

INP takes hits from hydration and client-side routing transitions. A 300ms page transition blocks all interactions during that window. React 18’s startTransition marks navigation as non-urgent, keeping the UI responsive, but you have to opt in explicitly. It’s not the default.
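Opting in is a one-line wrapper. A hedged sketch where `router.push` stands in for whatever client-side router the app uses:

```javascript
import { startTransition } from 'react';

// Mark the route change as non-urgent: React keeps the current UI responsive
// to typing and clicks while the next route renders in the background.
function navigate(to) {
  startTransition(() => {
    router.push(to); // `router` is a stand-in, not a specific library API
  });
}
```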

CLS adds up from skeleton-to-content transitions, images without dimensions, and dynamic ads. Four layout shifts at 0.05 CLS each sum to 0.2. Fails the threshold. Death by a thousand small shifts, each one innocent in isolation.

| Vital | CSR impact | SSR impact | SSG impact | Primary fix |
| --- | --- | --- | --- | --- |
| LCP | Severe (JS must execute) | Good (HTML immediate) | Best (CDN-served) | Move to SSR/SSG for public pages |
| INP | Hydration blocks input | Dead zone during hydration | Minimal (little JS) | Selective hydration, startTransition |
| CLS | Skeleton-to-content shifts | Server HTML reduces shifts | Stable from first paint | Dimension attributes, CSS containment |

Teams building single-page applications should default to SSR or SSG for public-facing pages and reserve CSR for authenticated sections where search indexing is irrelevant.

Server State vs Client State: Stop Caching API Responses in Redux

Server state (data from APIs) makes up most of the state in a typical application. Redux stores all of it, then you manually wire up caching, invalidation, loading states, and optimistic updates. For every endpoint.

React Query, SWR, and Apollo Client handle server state with caching, refetching, and optimistic updates built in. Switching from Redux for server state eliminates a large chunk of the store and the boilerplate surrounding it.

Redux still earns its place for genuinely client-side state: cart contents, form wizard progress, UI preferences that persist across navigation. If the Redux store is mostly API response caches, you are using a global state manager as a request library. Wrong tool.
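What that migration looks like with React Query, as a hedged sketch (the `/api/products` endpoint and the `staleTime` choice are assumptions for the example):

```javascript
import { useQuery } from '@tanstack/react-query';

// Server state handled by the cache layer: deduplication, background
// refetching, and loading/error states come for free per query key.
function useProducts() {
  return useQuery({
    queryKey: ['products'],
    queryFn: () => fetch('/api/products').then((r) => r.json()),
    staleTime: 60_000, // serve cached data for a minute before refetching
  });
}

// In a component: const { data, isLoading, error } = useProducts();
```

The reducers, action types, and per-endpoint loading flags that Redux would need for the same behavior simply disappear.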

Edge Rendering: The Real Performance Envelope

Edge functions have hard limits: tight memory ceilings, strict CPU time budgets, no persistent database connections. Running a full SSR pipeline with database queries at the edge is not feasible.

What works at the edge: personalized headers, A/B test assignment, geo-based content selection, auth checks, lightweight HTML assembly from cached fragments. What does not: database joins, heavy computation, large dependency trees.

Edge-origin hybrid rendering is where practical edge architecture lands. The edge handles personalization and routing decisions. The origin pre-renders content. The edge assembles the final response from cached content plus personalized fragments. Consistently low TTFB regardless of geography, without pushing compute to a platform that cannot handle it.
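The assembly step itself is plain string work. A sketch of the edge's job, where the `<!--slot:name-->` placeholder convention is invented for the example: the origin renders and caches the shell, and the edge splices in small per-request fragments.

```javascript
// Edge-origin hybrid: splice personalized fragments into a cached shell.
// The <!--slot:name--> placeholder convention is an assumption of this sketch.
function assemble(shellHtml, fragments) {
  return shellHtml.replace(/<!--slot:(\w+)-->/g, (match, name) =>
    name in fragments ? fragments[name] : match
  );
}

// Origin-rendered, CDN-cached shell with one personalization slot:
const cachedShell = '<header><!--slot:greeting--></header><main>Products</main>';
```

Unfilled slots pass through untouched, so the same shell serves anonymous and signed-in users.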

Edge rendering measurably reduces TTFB for users geographically distant from origin servers, but only for personalized pages where CDN caching is ineffective. For static or lightly personalized content, a CDN with regional SSR origins achieves similar TTFB with less complexity.

Choosing a Framework by Workload

Next.js App Router: content-heavy sites needing ISR, middleware-based personalization, and mature React Server Component integration. The caching layers (router cache, data cache, full route cache) interact in non-obvious ways. Teams that do not understand all three will fight unexpected stale content.

Remix: data-heavy interactive applications. The loader/action model maps cleanly to CRUD operations. Streaming SSR is first-class. Fewer abstractions mean fewer escape hatches, but also fewer sharp edges to discover six months in.

Vite + React Router: full control, zero opinions. Works for teams with strong frontend infrastructure engineers who want to own every architectural decision. A trap for teams that will rebuild half of Next.js ad hoc over 18 months.

For most product teams, Next.js or Remix gets you to production faster. Effective frontend UX engineering means choosing the framework whose opinions match your application’s actual requirements, not the one with the best conference talk.

Putting It Together: Per-Route Rendering

A mature SPA uses multiple strategies on the same site. Marketing pages: SSG. Product pages: ISR with a 60-second revalidation window. Search results: streaming SSR. Dashboard: CSR behind authentication. Both Next.js App Router and Remix support per-route strategy selection natively.

The W3C Paint Timing and Largest Contentful Paint APIs measure the impact precisely. Use them to validate that each route's rendering strategy actually delivers the LCP and INP numbers you expect. The gap between "should work" and "measured in production" is where performance regressions hide.
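A minimal browser-side sketch of that measurement (this runs in the page, not in Node; `sendToAnalytics` is a stand-in for whatever RUM endpoint you use):

```javascript
// Report the latest LCP candidate per route. Candidates keep arriving until
// user input freezes the value, so report the last entry observed.
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const last = entries[entries.length - 1];
  sendToAnalytics({
    metric: 'LCP',
    value: last.startTime,
    route: location.pathname,
  });
}).observe({ type: 'largest-contentful-paint', buffered: true });
```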

What the Industry Gets Wrong About SPA Rendering

“Just add SSR.” SSR is not a toggle. It requires server infrastructure, hydration logic, data fetching restructuring, and state management changes. A CSR-first app retrofitted with SSR piles on complexity at every layer. The rendering strategy should be chosen at architecture time, not bolted on after Lighthouse scores crater.

“React Server Components solve everything.” RSCs eliminate client JS for server-rendered components. You still need to know which components need client interactivity and which don’t. Move an interactive component to RSC and it breaks. Move a static one and its client JS disappears. Per-component decision, not per-page. Get it wrong and you’re chasing bugs that are subtle and hard to trace.

Our take: Match rendering strategy to content type per route, not per application. Marketing pages: SSG. Product pages with dynamic pricing: ISR. Dashboards with real-time data: CSR with a streaming SSR shell. Choosing one strategy for the entire application is the most common SPA architecture mistake. Different routes have different content, different freshness needs, and different performance constraints. Treat them all the same and some of them are paying the wrong tax.

That two-second blank screen? With a per-route rendering strategy, the same click paints content in under 300ms. Not faster code. The right rendering strategy stopped sending JavaScript to do HTML's job.

Stop Shipping Blank Screens to Your Users

A bad rendering strategy taxes every Core Web Vital on every page load. Matching CSR, SSR, SSG, and RSC to content type improves LCP sharply and eliminates the hydration penalty killing Time to Interactive.


Frequently Asked Questions

How much does client-side rendering hurt Largest Contentful Paint?


CSR adds multiple seconds to LCP on median mobile connections because the browser must download, parse, and execute JavaScript before rendering any meaningful content. SSR with streaming closes this gap by sending HTML chunks as they resolve. The gap widens on slower devices where JavaScript parse time alone dominates the critical path.

When does Static Site Generation outperform SSR?


SSG wins when content changes less frequently than once per minute and the total page count stays in the low tens of thousands. Build times scale linearly with page count, and large sites can face rebuild times measured in tens of minutes on typical CI runners. ISR solves this by regenerating pages on demand, but introduces stale content windows equal to the revalidation interval.

What do React Server Components actually change about bundle size?


RSCs remove component JavaScript from the client bundle entirely for components that only render on the server. A typical dashboard page with heavy data-formatting libraries sheds the majority of its client JS when those libraries stay server-side. Total Blocking Time on mid-range mobile devices drops hard, often landing well within Core Web Vitals thresholds.

Is route-based or component-based code splitting more effective?


Route-based splitting is the correct default. It cuts the majority of initial bundle size with minimal effort since bundlers like webpack and Vite handle it automatically with dynamic imports. Component-based splitting adds value only for heavy components like rich text editors or chart libraries that appear on some routes but not others, especially when they render below the fold. Over-splitting into very small chunks increases HTTP overhead and negates the benefit.

When does edge rendering actually improve performance over regional SSR?


Edge rendering reduces TTFB for users geographically distant from the nearest origin server. The benefit is most visible on personalized pages where CDN caching is ineffective. For static or lightly personalized content, a CDN with regional SSR origins achieves similar TTFB at lower complexity. Edge functions also have memory and execution time limits that make complex data fetching impractical.