
Observability needs its own FinOps strategy

Observability has quietly become one of the largest line items in cloud budgets, yet it's one of the least managed.
Written by Prasad Durgaoli
Published on March 7, 2026
Last updated on April 3, 2026

Most enterprises can tell you what they spend on compute. They can tell you what they spend on storage. Ask what they spend on observability, and the room goes quiet.

That silence is expensive.

Honeycomb's 2025 analysis found that organizations routinely spend 15–25% of their infrastructure bill on monitoring and observability tools.

For a company with a $2 million annual cloud budget, that is $300,000–$500,000 a year flowing into dashboards, metrics pipelines, log storage, and tracing platforms, often across multiple overlapping vendors.

Yet unlike compute or storage, almost nobody treats observability spend as a cost category worth managing.

Observability became a budget category without a budget owner

The observability market is now one of technology's fastest-growing segments. Gartner projects it will reach $14.2 billion by 2028, driven by the growing complexity of hybrid and cloud-native architectures.

Datadog alone reported $953 million in quarterly revenue for Q4 2025, a 29% year-over-year surge fueled by adoption among AI-native companies. Annual bookings hit $1.63 billion, up 37% from the prior year.

These numbers represent real enterprise budgets.

But observability costs grow organically across teams. One team adopts Datadog for APM. Another chooses Splunk for logs. Infrastructure teams add Prometheus and Grafana. Security brings in its own SIEM. Nobody owns the total spend because nobody planned for it to become this large.

LogicMonitor's 2026 Observability Trends Report confirms the pattern: 66% of organizations now run two to three observability platforms simultaneously, while 18% run four or five. Each platform has its own data pipeline, its own licensing model, and its own support overhead.

The overlap is massive.

The waste is invisible.

Three forces driving observability costs higher

Data volume is exploding

Global data generation is expected to reach 221 zettabytes in 2026, up from 181 zettabytes in 2025. Cloud-native architectures with microservices, containers, and serverless functions generate more data than monolithic systems.

Every API call, every pod lifecycle event, every distributed trace adds to the volume. And because most observability platforms are priced by ingest volume, the bill grows with the data.

AI workloads create new telemetry demands

IBM's 2026 observability outlook identifies a recursive challenge: organizations now need AI to observe AI. AI workloads introduce non-deterministic behavior, model drift, and inference latency patterns that traditional monitoring wasn't designed to capture.

Dynatrace predicts that agentic AI will drive an "exponential leap" in system complexity, requiring end-to-end observability across autonomous agent interactions. Each new AI agent brings its own logic, its own behavior, and its own failure modes.

Vendor pricing models reward growth, not efficiency

Most observability vendors use consumption-based pricing: pay per host, per GB ingested, per custom metric, per user seat. These models reward data volume growth.

When your infrastructure scales, your observability bill scales faster, because each new service generates telemetry across multiple dimensions (logs, metrics, traces, events). Gartner notes that "cost-fatigue is reaching a breaking point" across the 40+ vendor market.

Consolidation alone won't solve the observability cost problem

Faced with rising costs, 84% of organizations are pursuing or considering tool consolidation, according to LogicMonitor. On the surface, consolidation looks like the answer: fewer platforms, less overlap, lower licensing costs.

But consolidation without strategy creates a different problem.

Migrating from three observability tools to one proprietary platform reduces vendor count. It also deepens lock-in. Organizations that consolidate to a single vendor's ecosystem will find themselves in the same position they were in with cloud: operationally dependent on a single vendor's pricing decisions.

The better approach is what we call workload-first observability: match the tool to the requirement, not the other way around. Production-critical transaction traces need real-time APM. Infrastructure metrics need time-series storage. Security logs need compliance-grade retention. These are different workloads with different cost profiles.

Treating them as one problem leads to one expensive solution.

Applying FinOps discipline to observability

The FinOps Foundation's 2026 State of FinOps report identifies a shift in how mature organizations manage technology costs. The best teams are no longer just optimizing cloud compute. They're expanding FinOps practices to every significant cost category.

The FinOps framework (Inform → Optimize → Operate) translates directly to observability spend management.

Inform: Know what you're spending

Most organizations cannot produce a single number for total observability spend. Costs are scattered across engineering budgets, platform team allocations, and departmental SaaS subscriptions. Step one is consolidating visibility: map every observability tool, every license, and every data pipeline into a single cost view. Tag data by team, service, and environment. Without attribution, there is no accountability.
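
Tagging is where attribution starts in practice. As a minimal sketch, assuming the OpenTelemetry Python SDK, the resource attributes below stamp every span a service emits with the team, service, and environment it belongs to; the names are placeholders, and the same idea applies to whatever SDK or agent your stack uses.

```python
# Minimal sketch: attach attribution metadata to all telemetry from a service
# using the OpenTelemetry Python SDK. The service/team/environment values are
# illustrative placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

resource = Resource.create({
    "service.name": "checkout-api",          # which service emitted the data
    "deployment.environment": "production",  # environment drives retention tiers
    "team": "payments",                      # hypothetical attribution tag
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Every span created from here on carries the attribution attributes,
# so a cost report can group ingest volume by team and environment.
tracer = trace.get_tracer("checkout")
with tracer.start_as_current_span("charge-card"):
    pass
```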

Optimize: Cut what doesn't earn its place

Not all telemetry is equal. Debug-level logs from development environments don't need the same retention as production transaction traces. Honeycomb's analysis shows that disciplined enterprises can push observability costs to around 10% of infrastructure spend, roughly half what many organizations currently pay.

The path there involves data tiering: route high-value telemetry to real-time analysis platforms and lower-value data to cost-effective archival storage. Set retention policies by data type, not by platform default. Sample intelligently rather than ingesting everything.
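
To make the sampling piece concrete, here is a short sketch, again assuming the OpenTelemetry Python SDK: a ParentBased sampler keeps decisions consistent within a trace (child spans follow the root's decision), while the ratio controls how much gets ingested. The environment variable name and the ratios are illustrative assumptions, not recommendations.

```python
# Sketch: sample traces instead of ingesting everything.
import os

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Keep every production trace; keep 5% of traces from other environments.
ratio = 1.0 if os.getenv("DEPLOY_ENV") == "production" else 0.05
provider = TracerProvider(sampler=ParentBased(TraceIdRatioBased(ratio)))
```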

Operate: Build governance into the pipeline

Treat telemetry like a product with a cost of goods. Establish budgets per team and per service. Create alerts not just for system health, but for telemetry cost anomalies as well. When a new deployment doubles log volume, that should trigger a cost review as quickly as it triggers a performance review.
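
What such a cost guardrail might look like, sketched in Python: the helper below is a hypothetical stand-in for whatever usage API or billing metric your platform exposes, and the threshold is illustrative.

```python
# Hypothetical sketch of a telemetry cost guardrail: flag a service whose
# daily log ingest jumps well above its trailing baseline, the same way an
# alert would flag a latency regression.
from statistics import mean

def check_ingest_anomaly(service: str, history_gb: list[float],
                         today_gb: float, threshold: float = 2.0) -> None:
    baseline = mean(history_gb)  # e.g., a trailing 14-day average
    if baseline > 0 and today_gb / baseline >= threshold:
        # In a real pipeline this would page or open a ticket, not print.
        print(f"COST ALERT: {service} ingested {today_gb:.1f} GB today vs. "
              f"a {baseline:.1f} GB baseline ({today_gb / baseline:.1f}x)")

# A deployment that more than doubles log volume trips the review immediately.
check_ingest_anomaly("checkout-api", history_gb=[40.0] * 14, today_gb=95.0)
```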

OpenTelemetry changes the economics

The most significant development in observability cost optimization is OpenTelemetry (OTel), the CNCF's fastest-growing project, backed by contributions from 10,000 individuals across 1,200 companies.

OpenTelemetry provides a vendor-neutral framework for collecting, processing, and exporting telemetry data. It separates instrumentation from analysis, meaning teams can collect traces, metrics, and logs in a standard format and route them to any backend. This decoupling fundamentally changes the cost equation.

Research from APM Digest found that 57% of observability leaders who adopted OpenTelemetry reduced costs by gaining control over what telemetry is collected, how it is routed, and where it goes.

Instead of every tool ingesting everything, teams define intelligent routing: payment transactions go to both archival storage and real-time APM, development environment metrics go only to low-cost time-series storage.
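
In practice this routing logic usually lives in an OpenTelemetry Collector pipeline, but the idea can be sketched in-process with the Python SDK: a custom span processor that forwards high-value spans to the expensive real-time backend and everything else to cheap storage. The endpoints and the txn.type attribute below are assumptions for illustration.

```python
# Sketch: attribute-based routing of spans to different backends. A real
# deployment would do this in an OpenTelemetry Collector and batch exports;
# this in-process version just shows the routing decision.
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import ReadableSpan, SpanProcessor

class RoutingSpanProcessor(SpanProcessor):
    def __init__(self) -> None:
        self.apm = OTLPSpanExporter(endpoint="apm-backend:4317")          # real-time APM
        self.archive = OTLPSpanExporter(endpoint="archive-gateway:4317")  # low-cost tier

    def on_end(self, span: ReadableSpan) -> None:
        # Payment transactions go to both real-time APM and archival storage;
        # everything else goes to archival storage only.
        if (span.attributes or {}).get("txn.type") == "payment":
            self.apm.export([span])
        self.archive.export([span])

# Registered like any other processor:
#   provider.add_span_processor(RoutingSpanProcessor())
```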

This mirrors the cloud-neutral architecture principle that drives effective infrastructure decisions.

Just as workloads should run on the platform that best serves their requirements, telemetry data should flow to the tool that best matches its cost and value profile.

For organizations running on Akamai Connected Cloud, this approach is even more relevant.

Akamai's distributed architecture, combined with LKE-Enterprise for Kubernetes workloads, already delivers favorable economics for data-intensive operations. Adding vendor-neutral observability on top preserves cost advantages rather than eroding them through proprietary monitoring lock-in.

The observability budget conversation is overdue

96% of IT leaders expect observability budgets to hold or grow over the next two years. That growth is justified: monitoring and observability deliver measurable ROI when done right.

But justified growth is not the same as unmanaged growth.

Observability is too important to leave unmanaged. It deserves the same financial rigor as compute, storage, and networking. The question isn't whether your organization needs observability. It's whether anyone is managing what you spend on it.
