Cloud & infrastructure

Cloud repatriation is a strategy, not retreat

As much as one in three cloud dollars is wasted. That is why 83% of enterprise CIOs plan to pull workloads off public cloud.
Written by
Patrick Jamal
Published on
March 31, 2026
Last updated on
April 1, 2026

83% of enterprise CIOs plan to move at least some workloads off public cloud, according to a Barclays CIO survey.

Headlines call it retreat, but they are wrong.

Cloud repatriation is the most rational infrastructure decision many organizations will make in 2026. The shift does not mean the cloud failed.

It means the "move everything to one hyperscaler" strategy did.

The $180 billion cloud waste problem

Global public cloud spending hit roughly $723 billion in 2025, growing at 21.3% annually, per Gartner's projections. But a massive slice of that spend produces zero business value.

Gartner estimates that more than 25% of global cloud spend is waste. IDC places the figure at 20–30%. A 2024 cloud efficiency study referenced by Stacklet found that 78% of organizations estimate between 21% and 50% of their cloud spend is wasted. At the conservative end, that is $180 billion in annual waste globally.
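The cited range is easy to sanity-check against the spend figure above. The script below is just that arithmetic, using the $723 billion figure and the 20–30% waste band from the text:

```python
# Sanity check of the waste figures above: $723B spend at a 20-30% waste rate.
GLOBAL_CLOUD_SPEND_B = 723  # 2025 global public cloud spend in $B (Gartner projection)

def waste_range(spend_b, low_rate=0.20, high_rate=0.30):
    """Annual waste in $B implied by a waste-rate band."""
    return spend_b * low_rate, spend_b * high_rate

low_b, high_b = waste_range(GLOBAL_CLOUD_SPEND_B)
# 20% of $723B is about $145B, and Gartner's "more than 25%" works out to
# roughly $181B, which is where the conservative $180B figure comes from.
```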

Organizations without cloud optimization processes overspend by 40%, according to Gartner. The Flexera 2025 State of the Cloud Report confirms that managing cloud spend remains the top challenge for 84% of organizations.

For the fourth year running.

These “inefficiencies” are structural failures in how organizations plan cloud infrastructure.

How "Cloud First" became "Cloud Trapped"

The pandemic compressed years of cloud migration into months. Organizations moved workloads without analyzing them first. Speed mattered more than architecture. The result is infrastructure debt that now compounds with every billing cycle.

Organizations are now trapped by three consequences of that rush.

Vendor lock-in through managed services.

Kubernetes was supposed to guarantee portability. In practice, Amazon EKS commands roughly 30% market share, Azure AKS holds 20%, and each embeds proprietary integrations for networking, identity, and storage. Moving a production cluster between providers often requires re-engineering, not redeployment.

Egress fees as an exit tax.

Public cloud providers charge steep fees for data leaving their networks. For data-heavy workloads like analytics pipelines, media delivery, or AI training, transfer charges can exceed the compute cost itself. This creates a financial moat around your data that grows deeper over time.
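To see how transfer can outgrow compute, consider a toy monthly bill. The per-GB egress rate and hourly instance rate below are illustrative assumptions for this sketch, not any provider's published pricing:

```python
# Toy monthly bill. The rates below are illustrative assumptions for this
# sketch, not any provider's published pricing.
EGRESS_PER_GB = 0.09     # $/GB internet egress (assumed)
COMPUTE_PER_HOUR = 0.10  # $/hour for one small general-purpose instance (assumed)

def monthly_bill(egress_gb, instance_hours):
    egress = egress_gb * EGRESS_PER_GB
    compute = instance_hours * COMPUTE_PER_HOUR
    return {"egress": egress, "compute": compute, "egress_dominates": egress > compute}

# A media-delivery workload pushing 50 TB/month from one always-on instance:
bill = monthly_bill(egress_gb=50_000, instance_hours=730)
# Transfer (~$4,500) dwarfs compute (~$73) on this profile.
```

On this profile, the instance could be free and the bill would barely move — which is exactly the moat the text describes.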

Cost opacity at scale.

A single hyperscaler bill can contain thousands of line items across dozens of services. Without dedicated FinOps teams, organizations cannot tell which workloads deliver value and which burn cash. Flexera reports that 59% of organizations now have FinOps teams (up from 51% in 2024), but most are still catching up to their own spending.

Public cloud spending can weigh down a software company's gross margins by 50% or more. For ISVs and SaaS companies, infrastructure cost is a margin decision.

Repatriation is a workload decision, not a cloud decision

The loudest repatriation story belongs to 37signals. The company behind Basecamp and HEY pulled seven apps off AWS in 2023 and saved $2 million in the first clean year, on track for $7 million over five years. They cut infrastructure costs by roughly two-thirds without adding a single staff member.

It is a compelling narrative. It is also incomplete.

37signals runs stable, predictable workloads with well-understood resource patterns, ideal for dedicated infrastructure. But many organizations run workloads that genuinely benefit from hyperscaler capabilities: burst compute, global edge distribution, managed AI services, or compliance frameworks baked into the platform.

The correct question is not "should we leave the cloud?"

It is: "where does each workload actually belong?"

This is the workload-first approach in practice. Every workload gets evaluated across four dimensions:

Compute pattern. Steady-state workloads with predictable resource needs cost less on dedicated or alternative cloud infrastructure. Burst workloads with unpredictable spikes benefit from hyperscaler elasticity.

Latency and data gravity. Where does the data live? Moving compute closer to data eliminates egress costs and reduces latency.

Compliance requirements. Regulated industries may need data residency guarantees that favour specific providers or on-premises deployment. Others need the certifications hyperscalers offer out of the box.

Total cost of ownership. Not just the monthly bill. Factor in egress fees, support contracts, engineering time for provider-specific tooling, and the opportunity cost of lock-in.
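The four dimensions above can be sketched as a rough triage function. The workload fields, thresholds, and placement labels here are hypothetical illustrations, not a prescriptive scoring model:

```python
# Rough triage over the four dimensions above. The fields, thresholds, and
# placement labels are hypothetical, not a prescriptive scoring model.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    burst: bool                # compute pattern: True = spiky, False = steady-state
    egress_heavy: bool         # data gravity: large outbound transfer
    needs_residency: bool      # compliance: strict data-residency requirement
    managed_service_deps: int  # count of provider-specific managed services used

def suggest_placement(w: Workload) -> str:
    if w.needs_residency:
        return "on-premises or residency-guaranteed provider"
    if w.burst or w.managed_service_deps >= 3:
        return "hyperscaler"
    if w.egress_heavy:
        return "alternative cloud (low/no egress fees)"
    return "dedicated or alternative cloud infrastructure"

pipeline = Workload("analytics-pipeline", burst=False, egress_heavy=True,
                    needs_residency=False, managed_service_deps=1)
```

A steady, egress-heavy analytics pipeline lands on alternative cloud; a bursty, managed-service-dependent workload stays on a hyperscaler.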

Flexera reports that 42% of workloads are already moving from public cloud to private cloud or on-premises. But the destination matters less than the decision framework. Repatriation done without workload analysis just repeats the original mistake in reverse.

The missing middle: Beyond hyperscalers and on-prem

The repatriation conversation often collapses to: stay with AWS/Azure/GCP, or return to your own data centres.

That framing misses an entire category of infrastructure.

Alternative cloud providers now offer enterprise-grade compute without hyperscaler cost structures.

Akamai Connected Cloud, built on the world's most distributed network, delivers compute, storage, and Kubernetes at significantly lower price points, with no egress fees. Kubernetes workloads running on Akamai's Linode Kubernetes Engine (LKE) use standard Kubernetes APIs, no proprietary networking or identity hooks. Portability is real, not theoretical.

For ISVs in particular, the infrastructure layer directly impacts unit economics. An AI analytics platform migrated to Akamai Connected Cloud achieved 35%+ monthly savings while maintaining identical performance. The workload did not need hyperscaler-specific services. It needed compute, storage, and networking at a fair price.

Cloud-neutral architecture does not pick winners.

It matches each workload to the environment where it runs best at the lowest total cost. Sometimes that is AWS. Sometimes it is Akamai. Sometimes it is your own rack.

How to start repatriation: A workload placement framework

1. Audit current cloud spend by workload, not by service.

Most cloud bills organize costs by service (EC2, S3, RDS). Reorganize by business workload. This step alone often reveals that 10–15% of workloads drive 60–70% of costs.
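A minimal sketch of the regrouping, assuming each billing line item carries a workload tag (in practice this usually comes from a cost-allocation tagging policy). The line items below are invented examples:

```python
# Minimal regrouping sketch: roll service-level line items up by workload tag
# and rank workloads by spend. Line items are invented examples.
from collections import defaultdict

line_items = [
    {"service": "EC2", "workload": "checkout",  "cost": 42_000},
    {"service": "RDS", "workload": "checkout",  "cost": 18_000},
    {"service": "S3",  "workload": "reporting", "cost": 3_000},
    {"service": "EC2", "workload": "batch-etl", "cost": 9_000},
]

by_workload = defaultdict(float)
for item in line_items:
    by_workload[item["workload"]] += item["cost"]

total = sum(by_workload.values())
ranked = sorted(by_workload.items(), key=lambda kv: kv[1], reverse=True)
# ranked[0] is the workload driving the most spend; in this toy data a single
# workload accounts for roughly 83% of the bill.
```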

2. Classify workloads by compute pattern.

Separate steady-state from burst. Identify data-heavy workloads paying significant egress. Flag workloads tied to managed services that create provider dependency versus those running on portable infrastructure.
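One simple way to separate steady-state from burst profiles is a peak-to-mean ratio over usage history. The 2× threshold below is an assumption for illustration, not an industry standard:

```python
# Peak-to-mean classifier for usage history. The 2x threshold is an
# assumption for illustration, not an industry standard.
def compute_pattern(hourly_cpu, burst_ratio=2.0):
    mean = sum(hourly_cpu) / len(hourly_cpu)
    peak = max(hourly_cpu)
    return "burst" if peak > burst_ratio * mean else "steady-state"

flat = compute_pattern([10, 11, 9, 10, 12])    # even profile -> steady-state
spiky = compute_pattern([5, 4, 6, 90, 5, 4])   # one large spike -> burst
```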

3. Map each workload to its best-fit environment.

For each workload, evaluate hyperscaler, alternative cloud, and dedicated infrastructure options. Consider not just the compute cost but also egress, support, migration effort, and operational complexity.

4. Calculate true TCO with a 3-year horizon.

Cloud pricing changes. Reserved instances expire. Egress grows with data volume. Model costs over three years, including hidden charges: support tiers, data transfer, compliance tooling, and engineering hours spent on provider-specific integrations.
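The model described above can be sketched in a few lines. Every input here is a placeholder to replace with real numbers, and the flat growth and cost structure are simplifying assumptions:

```python
# Three-year TCO sketch. Inputs are placeholders; the flat cost structure and
# yearly egress growth are simplifying assumptions.
def three_year_tco(monthly_compute, monthly_egress, egress_growth_yoy,
                   annual_support, annual_eng_hours, eng_hourly_rate):
    total = 0.0
    egress = monthly_egress
    for _ in range(3):
        total += 12 * (monthly_compute + egress)     # compute + transfer
        total += annual_support                      # support tier
        total += annual_eng_hours * eng_hourly_rate  # provider-specific eng work
        egress *= 1 + egress_growth_yoy              # egress grows with data volume
    return total

# $10k/mo compute, $2k/mo egress growing 30%/yr, $15k/yr support,
# 200 engineering hours/yr at $120/hr:
tco = three_year_tco(10_000, 2_000, 0.30, 15_000, 200, 120)
```

Even in this toy model, the "hidden" lines (egress growth, support, engineering hours) add well over $100k to what the monthly compute figure alone would suggest.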

5. Execute in phases, not all at once.

Start with workloads where the cost-performance gap is widest. Prove the model. Build internal confidence. Migration momentum builds when early wins fund later phases.

The real cloud repatriation strategy is placement, not provider

Cloud repatriation signals maturity, not failure. It means organizations now have enough operational data to make informed infrastructure decisions, something that was impossible during the first wave of cloud adoption.

The companies that win over the next five years will not be "cloud-first" or "cloud-exit." They will be workload-first. Every workload is placed where it performs best and costs least.

Every provider justified by business outcomes, not by default.

As Gartner projects that roughly half of all cloud compute will serve AI workloads by 2029, placement decisions will only grow in importance. AI inference at the edge, training in the core, data gravity pulling compute toward storage: each pattern demands a different infrastructure answer.

The question has changed. It is no longer "which cloud provider?" It is "where does this workload belong?"

Organizations that answer it workload by workload will spend less, perform better, and avoid the lock-in that made repatriation necessary in the first place.
