SPA Architecture: Rendering Strategies for Scale
You click a link in your SPA and nothing happens. Two full seconds of nothing. The URL changed. The previous page disappeared. A spinner materialized. Meanwhile, your JavaScript bundle is downloading, parsing, and executing before the browser can paint a single pixel of the content you asked for. It's an empty restaurant where the waiter hands you the recipe and expects you to cook. Blank white screen. Lighthouse scores crater. Core Web Vitals fail. And the team starts debating whether to “just add SSR” as if flipping a toggle on a quiet afternoon will fix an architectural decision baked into the foundation.
- CSR adds seconds to LCP on median mobile connections. The browser downloads, parses, and runs JS before painting anything.
- SSR with streaming sharply cuts TTFB by sending HTML chunks as they resolve. The single biggest improvement for most SPAs.
- RSCs (React Server Components) remove most client JS for data-heavy pages. Components that only render on the server never ship to the browser.
- SSG wins for content changing less than once per hour with manageable page counts. Pre-plated dishes. Beyond tens of thousands of pages, build times become prohibitive. ISR fills the gap.
- Route-based code splitting is the right default. Cuts the majority of the initial bundle. Component-based splitting adds value only for heavy widgets like chart libraries.
You cannot bundle-split your way out of a bad rendering decision.
The Four Rendering Strategies
Each one trades off build-time cost, server-time cost, and client-time cost. Two things decide which one fits: how often the content changes, and how personalized it is per request.
| Strategy | Best For | FCP / LCP | TTI | SEO | Server Cost | Trade-off |
|---|---|---|---|---|---|---|
| CSR (Client-Side Rendering) | Fully personalized, real-time: authenticated dashboards, admin panels | Worst (1.5-3s blank screen while JS executes) | Slow (JS must download, parse, execute, fetch data) | Poor without prerendering | None (client bears cost) | Simplest architecture, worst initial performance |
| SSR (Server-Side Rendering) | Personalized per request, needs SEO: e-commerce PDPs, search results, social feeds, user profiles | Fast (~200ms server render, HTML sent immediately) | Delayed (hydration gap: 200ms-1.2s) | Good | Per-request compute | Server cost on every request. Hydration gap frustrates users |
| SSG (Static Site Generation) | Same for all users: marketing pages, docs, blogs | Best (~50ms, pre-built and CDN-served) | Fastest (FCP ≈ TTI for static, ~100ms for hydrated) | Excellent | Build-time only | Content is static until rebuild. Not for per-user content |
| ISR (Incremental Static Regeneration) | Semi-static, periodic updates: product catalogs, listings, CMS-driven pages | Near-best (CDN-served, stale-while-revalidate) | Fast | Excellent | On-demand regeneration | Stale content window between regenerations. Complexity in cache invalidation |
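The decision reduces to two questions: how often does the content change, and is it personalized per request? A minimal sketch of that logic — the function name and the once-per-hour threshold are illustrative assumptions, not rules from any framework:

```javascript
// Illustrative strategy picker from the two axes above. Thresholds are
// assumptions for the sketch, not framework-mandated values.
function pickStrategy({ personalized, needsSeo, changesPerHour }) {
  if (personalized) {
    // Per-request content: render on the client unless crawlers need it.
    return needsSeo ? 'SSR' : 'CSR';
  }
  if (changesPerHour < 1) {
    // Same for all users and rarely changes: pre-build it.
    return 'SSG';
  }
  // Shared content that updates too often to rebuild on every change.
  return 'ISR';
}
```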
CSR makes sense for authenticated, interactive apps where search indexing doesn’t matter. Need crawlability? CSR is the wrong pick.
SSR wins when content is personalized per request but needs to be crawlable. The cost is server compute on every request. Plan for scaling from the start, or a traffic spike becomes a rendering outage.
SSG is the performance ceiling. Pre-built HTML from a CDN. The fastest LCP achievable. The constraint: any content change requires a rebuild. For content changing less than hourly with manageable page counts, SSG should be the default.
ISR bridges the gap. Statically generated, regenerated in the background when stale. A 60-second revalidation window is fine for catalogs. Unacceptable for a stock ticker. Know your freshness requirements before choosing.
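The stale-while-revalidate behavior behind ISR can be sketched in a few lines. The class and method names here are illustrative, and real frameworks regenerate in the background; this sketch regenerates inline after capturing the stale copy, with an injectable clock so the window is observable:

```javascript
// Minimal sketch of ISR-style stale-while-revalidate caching.
class IsrCache {
  constructor(revalidateMs, render, now = Date.now) {
    this.revalidateMs = revalidateMs; // freshness window, e.g. 60_000
    this.render = render;             // (path) => html string
    this.now = now;                   // injectable clock for testing
    this.entries = new Map();
  }

  get(path) {
    const entry = this.entries.get(path);
    if (!entry) {
      // First request: render and cache.
      const html = this.render(path);
      this.entries.set(path, { html, renderedAt: this.now() });
      return { html, stale: false };
    }
    if (this.now() - entry.renderedAt <= this.revalidateMs) {
      return { html: entry.html, stale: false };
    }
    // Past the window: serve the stale copy immediately, then regenerate
    // so the next request gets fresh HTML.
    const served = entry.html;
    this.entries.set(path, { html: this.render(path), renderedAt: this.now() });
    return { html: served, stale: true };
  }
}
```

Note the key property: no request ever waits on a render once the page exists. Staleness is the price, and the revalidation window is how you cap it.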
The Hydration Problem
The server sends fully rendered HTML. The browser paints it. Then the page goes dead while React “hydrates,” re-attaching event listeners to the already-visible markup. The user sees a fully rendered page, clicks the checkout button, and nothing happens. They click again. And again. Rage-click five times and leave.
On a mid-range Android device over 4G, hydration can take over a second. Your LCP looks great because the HTML painted fast. Your INP tells the real story: the page looks ready but cannot respond to interaction.
Three approaches attack this from different angles:
Selective hydration (React 18+) wraps non-critical components in <Suspense>. React hydrates critical interactive elements first and defers the rest. The page feels interactive sooner because the browser prioritizes what the user can see and click.
Resumability (Qwik’s approach) serializes component state into HTML so the client resumes without re-executing JavaScript. Near-zero hydration cost in theory. In practice, the ecosystem is a fraction of React’s. Worth watching. Not production-ready for most teams today.
React Server Components skip the client entirely for non-interactive components. A product description that just renders text stays server-only. Zero client JavaScript for that component. Interactive components still hydrate, but the total JS needing hydration shrinks a lot on content-heavy pages.
Streaming SSR: Why TTFB Is Hostage to Your Slowest API
Traditional SSR waits for the entire page to render before sending any HTML. Three API calls at 50ms, 200ms, and 800ms? The browser waits 800ms before seeing a single byte. TTFB is hostage to the slowest dependency.
Streaming SSR sends HTML as each section resolves. The browser gets the shell and header while the recommendations API is still processing. TTFB drops from “slowest API call” to “first resolved section,” because the browser starts painting without waiting for every data source to finish.
| Phase | Traditional SSR | Streaming SSR |
|---|---|---|
| Request received | Server starts fetching ALL data sources | Server sends HTTP headers + shell HTML immediately |
| First data ready | Waits. Nothing sent until all data resolves | Streams first chunk (header, nav, above-fold content). Browser starts painting |
| Second data ready | Still waiting for remaining sources | Streams second chunk. Browser renders piece by piece |
| Slowest data ready | NOW sends complete HTML. User sees everything at once | Streams final chunk. Page has been interactive for seconds already |
| TTFB | Hostage to slowest data source. 800ms+ common | <100ms (shell sent immediately) |
| User perception | Blank screen for 800ms, then full page appears | Content appears in stages. Feels faster even if total time is the same |
Streaming breaks the dependency between TTFB and data fetching. Your slowest API call no longer holds the page hostage.
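The flush-as-resolved behavior can be sketched without any framework. This is in-order streaming: each section is a promise, and `res` only needs `write`/`end`. All names are illustrative:

```javascript
// Sketch of in-order streaming SSR: the shell is flushed before any data
// resolves, then each section is flushed as soon as it (and everything
// above it in the layout) is ready.
async function streamPage(res, shell, sections) {
  res.write(shell); // TTFB is paid here, before any API call finishes
  for (const section of sections) {
    // A slow section delays only the markup below it, never the shell
    // or the sections above it.
    res.write(await section);
  }
  res.end();
}
```

A traditional SSR handler would instead `await Promise.all(sections)` before the first `res.write` — which is exactly what pins TTFB to the slowest call.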
Remix’s loader pattern and Next.js App Router both handle streaming natively. In Remix, each route module exports a loader and the framework streams as deferred data resolves; in Next.js, Suspense boundaries mark the streaming points. No manual chunk management.
Don’t: Assume CDN caching works with streaming SSR. Most CDNs buffer the full response before caching, negating the streaming benefit entirely. Your most expensive pages to render are exactly the ones that benefit most from streaming and least from caching.
Do: Configure per-route cache behavior. Cloudflare and Fastly support streaming responses, but need explicit configuration. Static pages get CDN caching. Personalized pages get streaming without caching. Trying to do both on the same route is a trap.
Bundle Splitting That Works at Scale
The default output for a production React app is a single bundle. A mature application easily grows to hundreds of kilobytes gzipped. Your homepage visitor downloads the entire admin panel’s JavaScript. Route-based splitting is the highest-impact fix:
```javascript
// Before: everything in one bundle
import Dashboard from './pages/Dashboard';
import Settings from './pages/Settings';
```

```javascript
// After: each route is a separate chunk
import { lazy } from 'react';

const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));
```
This alone typically cuts the majority of initial bundle size. The browser downloads only the JavaScript for the current route. Subsequent navigation loads additional chunks on demand.
Component-based splitting adds value for heavy widgets that appear on some routes but not others. A rich text editor (200KB+), a charting library (150KB+), a map component (300KB+). The threshold: if a component adds more than 20KB gzipped to a chunk and is not visible above the fold, split it.
Below 20KB the HTTP request overhead outweighs the bundle savings. Twenty 5KB chunks perform worse than one 100KB bundle. Over-splitting is a real trap.
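For widgets that clear the threshold, defer the chunk fetch until the user actually reaches for the widget. A sketch of the pattern — `loadWidget` stands in for a dynamic import like `() => import('./ChartPanel')`, a hypothetical module:

```javascript
// Sketch: fetch a heavy widget's chunk on first interaction instead of
// on route entry. Hover and click both trigger it; the chunk loads once.
function mountOnFirstInteraction(element, loadWidget) {
  let loading = null;
  const handler = () => {
    // Idempotent: pointerenter may fire before click; fetch once.
    if (!loading) {
      loading = loadWidget().then((widget) => widget.mount(element));
    }
    return loading;
  };
  element.addEventListener('pointerenter', handler, { once: true });
  element.addEventListener('click', handler, { once: true });
  return handler;
}
```

Triggering on hover as well as click hides most of the chunk's download latency behind the user's own reaction time.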
| Splitting Strategy | When to Use | Typical Savings | Implementation |
|---|---|---|---|
| Route-level splitting | Default for every SPA. Each route loads only its own code | 40-60% reduction in initial bundle | React.lazy + Suspense, Next.js automatic, Vite dynamic imports |
| Component-level splitting | Heavy widgets above 20KB (rich text editors, charts, maps) | 15-30% additional savings on routes that contain heavy components | Dynamic import at component boundary. Load on user interaction, not on route entry |
| Vendor chunking | Separate stable third-party code from volatile app code | Cache hit rate improvement (vendor chunk changes rarely) | Webpack splitChunks or Vite manualChunks for node_modules |
| Shared module dedup | Common utilities imported by multiple routes | Prevents duplicate code across route chunks | Build tool handles automatically. Verify with bundle analyzer |
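The vendor-chunking row can be expressed in Vite via Rollup's `manualChunks` hook. A sketch, not a drop-in config:

```javascript
// vite.config.js — split node_modules into a separate, rarely-changing
// vendor chunk so app-code deploys don't invalidate cached libraries.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          // Everything resolved from node_modules lands in one chunk.
          if (id.includes('node_modules')) return 'vendor';
        },
      },
    },
  },
});
```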
Core Web Vitals: Where Each Rendering Strategy Pays Its Tax
LCP suffers under CSR because JavaScript must execute before rendering. SSR and SSG remove the fetch-parse-execute chain from the critical path entirely. The improvement on mobile is measured in full seconds, often the difference between passing and failing Core Web Vitals thresholds.
INP takes hits from hydration and client-side routing transitions. A 300ms page transition blocks all interactions during that window. React 18’s startTransition marks navigation as non-urgent, keeping the UI responsive, but you have to opt in explicitly. It’s not the default.
CLS adds up from skeleton-to-content transitions, images without dimensions, and dynamic ads. Four layout shifts at 0.05 CLS each sum to 0.2. Fails the threshold. Death by a thousand small shifts, each one innocent in isolation.
| Vital | CSR Impact | SSR Impact | SSG Impact | Primary Fix |
|---|---|---|---|---|
| LCP | Severe (JS must execute) | Good (HTML immediate) | Best (CDN-served) | Move to SSR/SSG for public pages |
| INP | Hydration blocks input | Dead zone during hydration | Minimal (little JS) | Selective hydration, startTransition |
| CLS | Skeleton-to-content shifts | Server HTML reduces shifts | Stable from first paint | Dimension attributes, CSS containment |
Teams building single-page applications should default to SSR or SSG for public-facing pages and reserve CSR for authenticated sections where search indexing is irrelevant.
Server State vs Client State: Stop Caching API Responses in Redux
Server state (data from APIs) makes up most of the state in a typical application. Redux stores all of it, then you manually wire up caching, invalidation, loading states, and optimistic updates. For every endpoint.
React Query, SWR, and Apollo Client handle server state with caching, refetching, and optimistic updates built in. Switching from Redux for server state eliminates a large chunk of the store and the boilerplate surrounding it.
Redux still earns its place for genuinely client-side state: cart contents, form wizard progress, UI preferences that persist across navigation. If the Redux store is mostly API response caches, you are using a global state manager as a request library. Wrong tool.
Edge rendering: the real performance envelope
Edge functions have hard limits: tight memory ceilings, strict CPU time budgets, no persistent database connections. Running a full SSR pipeline with database queries at the edge is not feasible.
What works at the edge: personalized headers, A/B test assignment, geo-based content selection, auth checks, lightweight HTML assembly from cached fragments. What does not: database joins, heavy computation, large dependency trees.
Edge-origin hybrid rendering is where practical edge architecture lands. The edge handles personalization and routing decisions. The origin pre-renders content. The edge assembles the final response from cached content plus personalized fragments. Consistently low TTFB regardless of geography, without pushing compute to a platform that cannot handle it.
Edge rendering measurably reduces TTFB for users geographically distant from origin servers, but only for personalized pages where CDN caching is ineffective. For static or lightly personalized content, a CDN with regional SSR origins achieves similar TTFB with less complexity.
Choosing a Framework by Workload
Next.js App Router: content-heavy sites needing ISR, middleware-based personalization, and mature React Server Component integration. The caching layers (router cache, data cache, full route cache) interact in non-obvious ways. Teams that do not understand all three will fight unexpected stale content.
Remix: data-heavy interactive applications. The loader/action model maps cleanly to CRUD operations. Streaming SSR is first-class. Fewer abstractions means fewer escape hatches, but also fewer sharp edges to discover six months in.
Vite + React Router: full control, zero opinions. Works for teams with strong frontend infrastructure engineers who want to own every architectural decision. A trap for teams that will rebuild half of Next.js ad hoc over 18 months.
For most product teams, Next.js or Remix gets you to production faster. Effective frontend UX engineering means choosing the framework whose opinions match your application’s actual requirements, not the one with the best conference talk.
Putting It Together: Per-Route Rendering
A mature SPA uses multiple strategies on the same site. Marketing pages: SSG. Product pages: ISR with a 60-second revalidation window. Search results: streaming SSR. Dashboard: CSR behind authentication. Both Next.js App Router and Remix support per-route strategy selection natively.
The browser's performance APIs measure the impact precisely: Paint Timing for FCP, the Largest Contentful Paint API for LCP, and the Event Timing API for INP. Use them to validate that each route’s rendering strategy actually delivers the numbers you expect. The gap between “should work” and “measured in production” is where performance regressions hide.
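Wiring this up in the browser with `PerformanceObserver` looks roughly like this. The entry types come from the Largest Contentful Paint and Event Timing specs; `report` is a hypothetical callback for your analytics pipeline:

```javascript
// Observe a performance entry type and forward each entry as it arrives.
// buffered: true replays entries that fired before the observer attached.
function observeEntries(entryType, onEntry) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) onEntry(entry);
  });
  observer.observe({ type: entryType, buffered: true });
  return observer;
}

// Usage in the browser (illustrative):
// observeEntries('largest-contentful-paint', (e) => report('LCP', e.startTime));
// observeEntries('event', (e) => report('interaction', e.duration));
```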
What the Industry Gets Wrong About SPA Rendering
“Just add SSR.” SSR is not a toggle. It requires server infrastructure, hydration logic, data fetching restructuring, and state management changes. A CSR-first app retrofitted with SSR piles on complexity at every layer. The rendering strategy should be chosen at architecture time, not bolted on after Lighthouse scores crater.
“React Server Components solve everything.” RSCs eliminate client JS for server-rendered components. You still need to know which components need client interactivity and which don’t. Move an interactive component to RSC and it breaks. Move a static one and its client JS disappears. Per-component decision, not per-page. Get it wrong and you’re chasing bugs that are subtle and hard to trace.
That two-second blank screen? With a per-route rendering strategy, the same click paints content in under 300ms. Not faster code. The right rendering strategy stopped sending JavaScript to do HTML’s job.