Edge Computing: CDN Functions, Local Nodes, Data Residency
Your application runs in us-east-1. Your users in London need sub-5ms response times. The network round-trip alone is 75ms. No amount of code optimization, no caching strategy, no framework swap closes that gap. Physics doesn’t care about your sprint velocity. The speed of light through fiber optic cable is the constraint, and the only solution is moving the compute to where the users are.
That is the edge computing use case most teams actually hit first. Not factory sensors. Not autonomous vehicles. Not the IoT scenarios that dominate conference talks. Just the straightforward physics problem of centralized compute being too far from the users who need fast responses. Once you see what 70ms of unnecessary latency costs on a latency-sensitive workflow, the ROI calculation for edge deployment becomes very simple.
Data residency regulations create a structurally similar problem. GDPR, Schrems II, and sector-specific regulations may require that certain data never leaves a geographic boundary. You cannot process it in your central us-east-1 region even if latency were acceptable. The edge becomes a compliance mechanism, not just a performance optimization.
The question is which kind of edge you actually need.
CDN Edge Functions: Your First Edge Layer
Modern CDN platforms are not just content caches anymore. Cloudflare Workers, Lambda@Edge, and Fastly Compute execute JavaScript or WebAssembly at CDN points of presence within 5-20ms of most internet users, often with sub-millisecond cold starts. This is the lowest-friction entry point to edge computing. For most teams, start here.
The use cases where CDN edge logic beats origin processing convincingly: authentication token validation (reject unauthorized requests before they consume origin resources, cutting origin traffic by 15-30% in typical deployments), A/B testing with request routing (split traffic without touching application code), response personalization based on geolocation or user attributes, and geographic request routing to regional service instances.
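Two of those use cases can be sketched in a few lines. This is a minimal, Workers-style sketch in plain JavaScript, not a real deployment: the region map, hostnames, and function names are all illustrative, and a production Worker would read the country from `request.cf.country` and verify a signed JWT rather than just checking token presence.

```javascript
// Illustrative region map: country code -> nearest regional origin.
// These hostnames are hypothetical, not real endpoints.
const REGION_ORIGINS = {
  GB: "https://eu-west.example.internal",
  DE: "https://eu-west.example.internal",
  JP: "https://ap-northeast.example.internal",
};
const DEFAULT_ORIGIN = "https://us-east.example.internal";

// Geographic request routing: pick the regional origin for a request,
// falling back to the central region for unmapped countries.
function pickOrigin(countryCode) {
  return REGION_ORIGINS[countryCode] ?? DEFAULT_ORIGIN;
}

// Token validation at the edge: reject requests with no bearer token
// before they ever consume origin resources. A real check would verify
// a signature, not just presence.
function authorize(headers) {
  const auth = headers["authorization"] ?? "";
  return auth.startsWith("Bearer ") && auth.length > "Bearer ".length;
}
```

The point is the shape, not the code: both decisions need only the request itself, no database and no origin round trip, which is exactly what makes them cheap to run at the edge.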
But you will hit the constraint fast. CDN edge functions execute in sandboxed V8 isolates with limited CPU time (typically 10-50ms of CPU time, not wall clock), no filesystem access, and stateless execution per request. Complex business logic requiring database queries, multi-step computation, or file I/O does not belong at the CDN edge. Teams try to run entire API backends on Cloudflare Workers and discover that the CPU limit kills their computation-heavy request paths under load. Don’t try to put everything at the edge. Identifying which logic provides a genuine latency benefit at the edge versus which should stay at origin is a key cloud-native architecture decision.
Retail Offline Resilience: When the Internet Goes Down
CDN edge functions solve the latency problem. But some problems aren’t about latency. They’re about connectivity.
Here’s a scenario that large retailers face regularly. Black Friday afternoon. 2,000 customers in a flagship store. The ISP link goes down. If your POS system depends entirely on internet connectivity to process transactions, you just stopped selling. On your biggest revenue day of the year.
Local edge computing in retail moves transaction processing to the store’s own hardware. A POS terminal or in-store server processes sales normally during internet outages and queues transactions for synchronization when connectivity resumes. The system keeps selling. Revenue keeps flowing. The outage becomes an IT ticket instead of a business-critical incident.
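The queue-and-sync pattern is simple enough to sketch. This is an illustrative offline-first queue, not a real POS API: the `OfflineQueue` name and the in-memory `pending` array are assumptions, and a production system would persist the queue to local disk or SQLite so a power cycle doesn't lose sales.

```javascript
// Minimal sketch of an offline-first transaction queue for an
// in-store POS node. All names here are illustrative.
class OfflineQueue {
  constructor(sendFn) {
    this.sendFn = sendFn;  // uploads one transaction; throws when offline
    this.pending = [];     // durable storage (disk/SQLite) in production
  }

  // Record a sale locally first, then try to sync opportunistically.
  // The sale succeeds from the customer's perspective either way.
  async record(txn) {
    this.pending.push({ ...txn, queuedAt: Date.now() });
    await this.flush(); // best-effort; failure just leaves it queued
  }

  // Drain the queue in order when connectivity returns. Stops at the
  // first failure so ordering is preserved for the next attempt.
  async flush() {
    while (this.pending.length > 0) {
      try {
        await this.sendFn(this.pending[0]);
        this.pending.shift();
      } catch {
        return this.pending.length; // still offline; report backlog size
      }
    }
    return 0;
  }
}
```

The key design choice is that `record` never fails on a network error: the local write is the source of truth, and the central system is just a subscriber that catches up later.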
The consistency challenge hits immediately. Two POS terminals in different stores both sell the “last” unit of a product from a shared inventory pool while offline. When both reconnect, you have an oversell. There are two architectural responses, and the right one depends on your business:
Eventual consistency with reconciliation: Accept the oversell and handle fulfillment exceptions in the order management system. Ship from another warehouse, offer a substitute, or apologize and refund. If oversells happen on 0.1% of offline transactions and each one costs less to resolve than a lost sale, the math usually favors this approach over the alternative.
Pessimistic inventory reservation: Allocate a fixed stock quota to each store’s local edge. The store can only sell its allocated units offline. No oversells possible, but you sacrifice availability. If one store’s allocation runs out while another has excess, customers are turned away unnecessarily. This approach works best for high-value items (electronics, luxury goods) where an oversell is a real customer experience problem.
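The pessimistic approach reduces to two small functions. This sketch assumes an even split of stock across stores; real systems weight allocations by per-store demand forecasts, and the function names are hypothetical.

```javascript
// Pessimistic per-store allocation: the central system splits stock
// into quotas before stores go offline. Even split here; production
// systems weight by forecast demand per store.
function allocateQuotas(totalStock, storeIds) {
  const base = Math.floor(totalStock / storeIds.length);
  let remainder = totalStock % storeIds.length;
  const quotas = {};
  for (const id of storeIds) {
    quotas[id] = base + (remainder-- > 0 ? 1 : 0);
  }
  return quotas;
}

// Offline sale against the local quota only. No oversell is possible,
// but a store with an exhausted quota turns customers away even if
// another store has excess.
function trySellOffline(quotas, storeId, qty = 1) {
  if ((quotas[storeId] ?? 0) < qty) return false;
  quotas[storeId] -= qty;
  return true;
}
```

Note the trade-off is visible in the code: `trySellOffline` never consults any other store, which is exactly why it is both safe and wasteful.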
The State Synchronization Decision
Retail is one example, but every edge deployment eventually faces the same fundamental question: how stale can edge-cached data be before it causes a business problem?
For personalization data, content preferences, and feature flags, eventual consistency with 30-60 second propagation delays is fine. Nobody notices if their recommended products are 45 seconds out of date. Use edge-local KV stores like Cloudflare Workers KV or DynamoDB Global Tables with last-writer-wins conflict resolution.
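Last-writer-wins is simple enough to write down in full. This is a generic sketch of the merge rule, not the actual conflict-resolution code inside Workers KV or DynamoDB Global Tables; the entry shape (`value`, `updatedAt`, `node`) is an assumption.

```javascript
// Last-writer-wins merge for eventually consistent edge KV entries.
// Each write carries a timestamp and the writing node's ID; the node
// ID breaks timestamp ties so every replica converges to the same
// value regardless of merge order.
function lwwMerge(a, b) {
  if (a == null) return b;
  if (b == null) return a;
  if (a.updatedAt !== b.updatedAt) {
    return a.updatedAt > b.updatedAt ? a : b;
  }
  return a.node > b.node ? a : b; // deterministic tie-break
}
```

The tie-break matters more than it looks: without it, two replicas merging the same pair in opposite orders can disagree forever.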
For inventory counts, pricing, and access control, you often need strong consistency. This means a round trip to the central store, adding 50-200ms depending on geography. At that point, the latency advantage of edge processing shrinks or disappears for that specific operation. The real architectural skill is decomposing requests so that the latency-sensitive parts (authentication, personalization, static content) resolve at the edge while the consistency-sensitive parts (inventory check, payment authorization) make the round trip to origin. Split the request, not the consistency model.
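The decomposition can be sketched as one handler that fans out both kinds of work in parallel. Everything here is hypothetical: the `edge` and `origin` interfaces stand in for an edge-local KV store and a strongly consistent origin service.

```javascript
// Split one request: latency-sensitive parts resolve from edge-local
// state while the consistency-sensitive part makes the origin round
// trip. Running them in parallel means the origin hop sets the floor,
// rather than adding to it. All interfaces are illustrative.
async function handleCheckoutPage(userId, sku, edge, origin) {
  const [profile, recommendations, stock] = await Promise.all([
    edge.getProfile(userId),         // edge KV, eventually consistent
    edge.getRecommendations(userId), // 45-seconds-stale is acceptable
    origin.checkInventory(sku),      // strong consistency: origin hop
  ]);
  return { profile, recommendations, inStock: stock > 0 };
}
```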
You’ve solved for latency and consistency. Now for the part that actually keeps you up at night.
Edge Observability at Scale
The hardest operational problem with edge infrastructure is not deployment. It is knowing what is happening across 50 or 500 geographically distributed nodes when something goes wrong. Centralized architectures give you one place to look. Edge gives you dozens or hundreds.
CDN edge nodes are vendor-managed infrastructure where your visibility is limited to what the provider exposes through APIs and dashboards. Local edge nodes at retail locations or industrial sites are your hardware, but they are spread across geography that makes physical access impractical for debugging.
Effective edge observability requires three things. Structured logging with correlation IDs that trace requests across edge-to-origin hops in a single query. Metrics collection from all edge nodes aggregated into your central observability stack with geographic dimensions so you can see per-region health. And health monitoring that detects degraded edge nodes before users feel the impact. Not after. If your users are your monitoring system, you’ve already failed.
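The logging piece is the easiest to get concrete about. This is a minimal sketch of a structured JSON logger; the field names (`node`, `region`, `correlationId`) are illustrative, chosen to match the three requirements above, not a real logging library's schema.

```javascript
// Structured edge logging: every line is JSON and carries a
// correlation ID, so one query in the central stack reconstructs a
// request's path across edge-to-origin hops. Field names are
// illustrative.
function makeLogger(node, region) {
  return (correlationId, event, fields = {}) =>
    JSON.stringify({
      ts: new Date().toISOString(),
      node,           // which of the N edge nodes emitted this line
      region,         // geographic dimension for per-region health
      correlationId,  // same ID at the edge hop and the origin hop
      event,
      ...fields,
    });
}
```

The discipline that matters is propagation: the edge function generates the correlation ID once and forwards it in a header, so the origin's logs carry the same ID without coordination.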
Infrastructure platform teams managing large edge fleets need GitOps-driven deployment with staged regional rollouts. Deploy to 5% of nodes first. Verify health metrics for 15 minutes. Expand to 25%, then 100%. Automated rollback triggers if error rates exceed 1% during any stage. The organization that treats edge nodes as cattle (declaratively configured, automatically reconciled) manages 10 nodes and 1,000 nodes with the same team size. The organization that treats them as pets hits a scaling wall around 20-30 nodes where operational burden outpaces the team’s capacity. Edge computing solves the physics problem of distance. Whether it creates more operational problems than it solves depends entirely on how you run it.
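The staged rollout logic described above can be sketched in one function. This is a simplified illustration, not a GitOps controller: `deploy` and `checkErrorRate` are stand-ins for your delivery pipeline and metrics backend, and the real version waits on health metrics between stages rather than checking immediately.

```javascript
// Staged regional rollout: 5% -> 25% -> 100% of nodes, with an
// automated rollback trigger if the error rate at any stage exceeds
// 1%. deploy() and checkErrorRate() are stand-ins for your pipeline
// and metrics backend.
async function stagedRollout(nodes, deploy, checkErrorRate) {
  const stages = [0.05, 0.25, 1.0];
  let deployed = 0;
  for (const fraction of stages) {
    const target = Math.ceil(nodes.length * fraction);
    for (; deployed < target; deployed++) {
      await deploy(nodes[deployed]);
    }
    // In production, observe health metrics for ~15 minutes here.
    const errorRate = await checkErrorRate(nodes.slice(0, deployed));
    if (errorRate > 0.01) {
      return { ok: false, rolledBackAt: fraction, deployed };
    }
  }
  return { ok: true, deployed: nodes.length };
}
```

A rollout that fails at the 5% stage touched a handful of nodes, not the fleet. That containment, not the percentages themselves, is what makes staged rollouts worth the extra wall-clock time.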