Earth Observation in 2025: Acceleration Without Direction
What 2025 revealed about priorities, trust, and direction in EO
2025 was defined by geopolitical tension, rapid AI adoption, and economic pressure driven by both. Persistent conflict monitoring, sovereign capability concerns, and budgetary volatility increasingly shaped how Earth Observation was discussed, funded, and procured. Climate urgency did not disappear, but it faded into the background under the weight of these forces.
Looking back, Earth Observation did not lack progress in 2025. Instead, the year exposed a growing mismatch between a sector that continued to describe itself as broadly commercial and a market increasingly shaped by defence-led demand.
Those trends had been visible for years: defence alignment, AI everywhere, and commercial consolidation. In 2025, most major announcements revolved around them.

Budgets made priorities explicit
In Europe, Earth Observation was treated unambiguously as strategic infrastructure. The ESA Ministerial Council committed a record €22.3 billion, including €1.2 billion allocated to European Resilience from Space. While environmental monitoring sits at the core of Europe’s EO identity, this programme was framed primarily around surveillance, security, and system resilience, explicitly positioning Earth Observation as a sovereign capability.
Across the Atlantic, the picture was far less stable. The proposed FY2026 U.S. budget introduced deep uncertainty for civilian Earth Observation, with substantial cuts targeting NASA Earth Science, NOAA, and the USGS. Landsat Next, long assumed to be institutionally inevitable, became politically fragile.
This was not a universal retreat. Defence-oriented geospatial procurement in the United States remained aggressively funded. The contrast lay in how Earth Observation outside national security was positioned. Europe treated Earth Observation as a core component of strategic autonomy. The United States appeared undecided about Earth Observation as a civilian public good.
Earth Observation organised around defence contracts
Over the course of 2025, it became difficult to ignore how deeply the sector was entrenching itself into defence and intelligence procurement. This shift was not simply about defence becoming a larger customer, but about defence contracts shaping how systems were designed, funded, evaluated, and prioritised.
As defence procurement logic became more influential, expectations around revisit, latency, automation, and operational responsiveness moved from being use-case specific to baseline design assumptions.
In parallel, purely commercial, non-defence B2B offerings became harder to justify and sustain, not because they lacked value, but because they operated under very different procurement and investment rationales.
Defence demand was no longer simply a growing segment of the market. It had become the reference point around which much of the sector organised itself. According to industry analysts, defence-related demand now accounts for close to half of the Earth Observation market, particularly when data services and value-added analytics are included. Critically, this growth concentrated in precisely the segments that influence constellation design, tasking strategy, and downstream operational workflows.
This orientation was visible across the year’s major contracts: large commercial data awards by the U.S. National Geospatial-Intelligence Agency, long-term sovereign agreements across the Middle East and Asia, and a steady expansion of EO offerings explicitly aligned with national security requirements.
It was also reflected in the year’s most visible commercial announcements. Planet’s Pelican agreement with SKY Perfect JSAT reinforced where high-resolution capacity is being positioned and financed. BlackSky’s Gen-3 contracts and product direction tied high-resolution monitoring closely to automated analytics and operational delivery. Maxar’s reorganisation and rebranding, emphasising sovereign defence capabilities and long-term government relationships, carried the same signal. Individually, none of these moves were surprising. Collectively, they made the market’s centre of gravity harder to deny.
Here, the increasing prominence of “dual-use” framing deserves careful interpretation. In practice, dual-use rarely functioned as a balanced operating model between civil and military objectives. Instead, it served as a framing mechanism, allowing systems and missions to be presented as broadly applicable while being justified, evaluated, and funded primarily through defence-driven requirements.
That distinction matters because it shapes what is funded first. Speed, responsiveness, and actionability were consistently prioritised ahead of calibration, interoperability, and long-term continuity. These latter qualities were not removed from Earth Observation, but they were more often deferred, positioned as secondary concerns to be addressed once operational demands were met.
The practical consequence is that non-defence, non-intelligence B2B Earth Observation now occupies a narrower space. That space still exists, but operating within it requires far more deliberate positioning than before.
Some non-defence segments shifted further downstream, with EO operating as one component inside broader analytical or operational systems rather than as a standalone product; value was realised at the workflow level rather than at the data layer.
Launches told the same story from orbit
From a technical standpoint, 2025 was a strong year for Earth Observation.
Institutionally, continuity and precision were central. Copernicus extended its backbone with the launches of Sentinel-1D, Sentinel-5A, and Sentinel-6B, while Sentinel-1C completed commissioning. These missions reinforced long-term measurement priorities and the value of sustained, calibrated observation.
Commercially, however, the emphasis was different. BlackSky’s Gen-3 satellites pushed optical resolution to 35 cm and were tightly integrated with automated analytics. Maxar completed deployment of the WorldView Legion constellation. Planet accelerated its Pelican launches and, notably, flew them with onboard AI compute. Pixxel placed high-resolution hyperspectral capability into orbit with the first satellites of its Fireflies constellation.
These systems are not primarily designed around comprehensive, long-term archiving of the Earth. They are designed to prioritise, filter, and act on observations quickly, in some cases before data ever reaches the ground.
The year also included a reminder of spaceflight’s inherent fragility. The loss of contact with MethaneSAT underscored both the technical difficulty of operating in orbit and the relative vulnerability of climate-first missions when compared with defence-backed programmes.
This contrast reveals how different parts of the sector absorb risk and define what counts as acceptable performance.
Looking ahead, Europe’s next generation of Sentinel missions points toward a higher reference level for institutional Earth Observation. Planned increases in spatial resolution and instrument capability do more than expand what can be observed; they implicitly redefine what is considered acceptable in terms of data quality, consistency, and continuity, particularly as public and commercial datasets are used together within the same analytical and operational workflows.
For commercial providers, this has practical consequences. Resolution alone becomes a weaker differentiator when institutional missions function as quality benchmarks. As spatial detail increases, tolerance for artefacts, misalignment, and inconsistency narrows, and attention shifts toward how data are calibrated, validated, and maintained over time. Higher resolution does not automatically translate into greater confidence; in many cases, it simply makes limitations easier to see.
As resolution increases, institutional datasets also become harder to dismiss as “coarse but free”, and in some domains they begin to function as credible reference layers, against which commercial providers must justify not only higher spatial detail, but also calibration quality, consistency, and reliability over time.
AI crossed into operations, while scrutiny thinned
The most consequential AI development in Earth Observation in 2025 was not a startup launch or a product demo. It was the decision by the European Centre for Medium-Range Weather Forecasts to take its Artificial Intelligence Forecasting System (AIFS) into full operational use, running alongside established numerical weather prediction models.
This was not an abrupt shift. AIFS entered active service after years of benchmarking, validation, and institutional review. It was framed as complementary to physics-based models, with parallel execution enabling continuous assessment and trust-building. Reported gains, including improved tropical cyclone tracking, faster inference, and substantially lower energy consumption, were treated as empirical evidence, not marketing claims.
That distinction matters. It demonstrates how AI can be integrated responsibly into Earth Observation when governance, evaluation, and domain expertise are treated as central.
Across much of the commercial EO sector, conditions were different. AI adoption accelerated faster than the frameworks needed to govern its use. Speed, demonstrability, and time-to-market outweighed long-term validation. Foundation models proliferated, natural-language interfaces promised simplified access to geospatial data, and super-resolution approaches were promoted as delivering apparent spatial detail beyond native sensor resolution.
In this environment, validation served to support product narratives rather than to challenge them. Evaluation focused on a narrow set of reassuring metrics, while questions of physical limits, robustness, uncertainty, and interoperability were deferred. Systems that produced outputs which looked plausible and could be demonstrated convincingly were often treated as sufficiently validated for use.
This dynamic was reinforced by a rapid influx of machine-learning practitioners into the field. Their contributions accelerated experimentation and development. At the same time, the domain-specific understanding required to interrogate inputs, recognise established patterns, and interpret outputs within physical constraints was not always equally prioritised. Attention concentrated on model performance and visual outputs, while data provenance, consistency, and continuity were postponed.
AI did not fail Earth Observation in 2025. It succeeded well enough, particularly in commercial settings, that scrutiny was treated as optional rather than essential.
This is where a different form of technical debt begins to accumulate, not in code, but in shared understanding, interpretability, and trust.
Better infrastructure, quieter progress
Some of the most consequential progress in 2025 occurred outside headline narratives of AI and defence.
The SpatioTemporal Asset Catalog (STAC) became an official OGC Community Standard. Zarr-based workflows matured. Cloud-native access grew faster and more interoperable. Time series access at scale improved dramatically. From a systems perspective, Earth Observation data became easier to discover, access, and process than at any point before.
These advances were incremental rather than spectacular. They did not produce the kinds of demos or major announcements that attract headlines, but they strengthened the foundations upon which both scientific continuity and commercial reliability depend.
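To make this concrete, here is a minimal sketch of what that cloud-native pattern looks like in practice. It assumes the public Earth Search STAC endpoint and the pystac-client and stackstac libraries; the collection, bounding box, and thresholds are illustrative choices, not recommendations.

```python
# A minimal cloud-native EO sketch: discover scenes through a STAC
# API, then build a lazy analysis-ready stack. All parameters below
# are illustrative.
from pystac_client import Client
import stackstac

catalog = Client.open("https://earth-search.aws.element84.com/v1")

# Search a small area and a one-month window for Sentinel-2 L2A scenes.
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[11.2, 46.4, 11.5, 46.6],            # lon/lat bounding box
    datetime="2025-06-01/2025-06-30",
    query={"eo:cloud_cover": {"lt": 20}},     # low-cloud scenes only
)
items = search.item_collection()

# Stack the matching scenes into a single lazy (time, band, y, x)
# xarray DataArray backed by cloud-optimised GeoTIFFs; no pixels are
# transferred until a computation actually needs them.
stack = stackstac.stack(items, assets=["red", "nir"], resolution=20)
ndvi = (stack.sel(band="nir") - stack.sel(band="red")) / (
    stack.sel(band="nir") + stack.sel(band="red")
)
print(ndvi)  # an on-demand NDVI time series, no bulk download involved
```

The specific libraries matter less than the pattern they enable: standardised discovery, lazy loading, and computation that scales without bulk downloads.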

Foundation models, and the return of old mistakes
By 2025, investment in large, generic foundation models for Earth Observation reached unprecedented levels. The question was no longer whether such models could be built, but whether their investment logic made sense outside a narrow set of institutional environments.
Training and maintaining these models requires sustained capital, repeated large-scale iteration, and continuous feedback loops. Even when labour costs are excluded, as when the work is carried out by PhD students or publicly funded researchers, the direct compute cost alone for models comparable in scale to recent flagship efforts can approach €1–2 million, largely in cloud credits. This does not account for downstream fine-tuning, evaluation cycles, or live deployment, nor for the associated energy use and carbon footprint. These costs are real, recurring, and unevenly distributed.
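As a rough illustration of why, consider a deliberately simplified cost sketch. Every figure in it is a hypothetical assumption chosen for illustration, not a reported number from any specific model, team, or cloud provider.

```python
# Back-of-envelope only: every value here is a hypothetical
# assumption, not a reported figure from any model or provider.
gpus = 256               # assumed training cluster size
days = 30                # assumed duration of one pre-training run
eur_per_gpu_hour = 2.5   # assumed on-demand cloud GPU rate

run_cost = gpus * days * 24 * eur_per_gpu_hour
print(f"one pre-training run: ~EUR {run_cost:,.0f}")     # ~EUR 460,800

# Models are rarely trained once: ablations, failed runs, and a
# scaled-up final run easily multiply the bill.
print(f"with ~3x iteration: ~EUR {run_cost * 3:,.0f}")   # ~EUR 1,382,400
```

The exact numbers matter far less than the structure: the cost recurs with every serious revision, which is exactly what makes sustained iteration, rather than a single training run, the expensive part.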
Outside organisations with stable public funding, long-term compute commitments, and mature evaluation pipelines, it is unclear who absorbs these costs, how failure is surfaced, or how iteration is prioritised. In practice, many actors can afford to train once, but far fewer can afford to revise, maintain, and meaningfully govern these models over time.
In this sense, foundation models risk repeating a familiar pattern in Earth Observation: the promise of a single, universal system capable of supporting a wide range of use cases, markets, and revenue streams. Historically, such “one platform that does everything” approaches have struggled. They centralise complexity, push evaluation downstream, and assume a level of user capacity that rarely exists in practice. The result is often a powerful core system that few users are equipped to interrogate, adapt, or meaningfully influence.
There is genuine value in foundation models. Their representation spaces are rich, and they capture complex spatial and temporal structure across large volumes of data. But representation alone does not justify investment. The critical question is whether downstream ecosystems are prepared to use these models responsibly, to evaluate performance against real operational decisions, to integrate feedback from failure cases, and to sustain iteration when models underperform in specific regions or conditions.
This readiness is uneven in structural, rather than technical, ways. In many organisations, benchmarks are still tied to pre-training objectives rather than decision outcomes, and processes for uncertainty characterisation, regional stress-testing, or post-deployment evaluation remain limited. There are teams that do invest in these practices, but they are the exception rather than the norm. As a result, model performance is often assessed using metrics that are convenient to compute and broadly reassuring, while value is claimed in downstream contexts such as risk assessment, monitoring, or planning, where those metrics are only weakly informative.
The focus on global scale reinforces this misalignment. Most Earth Observation users do not buy “global performance.” They pay for coverage, still priced per square kilometre in many cases, and they care about reliability in specific regions, for specific phenomena, under known conditions. Models optimised to maximise average global performance can appear strong overall while underperforming precisely where commercial or policy decisions are made.
In an industry that continues to monetise value primarily per km², scale can become a liability rather than an advantage. The cost of training and maintaining globally optimised models is absorbed upfront, while value is realised locally and unevenly. This creates a structural mismatch between investment logic and how Earth Observation products are priced, sold, and trusted.
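A toy calculation makes this mismatch concrete. The regions, areas, and scores below are entirely hypothetical; the point is only how an area-weighted global average can mask failure exactly where the paying use case lives.

```python
# Hypothetical per-region accuracies for a globally trained model.
scores = {
    "region_a": 0.94,   # well represented in training data
    "region_b": 0.92,
    "region_c": 0.61,   # the region a customer actually pays for
}
area_km2 = {"region_a": 9e6, "region_b": 7e6, "region_c": 5e5}

global_avg = sum(scores[r] * area_km2[r] for r in scores) / sum(area_km2.values())
print(f"area-weighted global score: {global_avg:.2f}")         # ~0.92
print(f"score where revenue is earned: {scores['region_c']}")  # 0.61
```

Reported globally, this model looks strong; priced per km² in region_c, it is the 0.61 that determines whether anyone trusts, or renews, the product.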
Taken together, these dynamics suggest that foundation models are advancing faster than the institutional, economic, and governance structures required to support them responsibly.
Lower barriers, louder noise
AI-assisted development significantly lowered the barrier to producing tools across the Earth Observation ecosystem. This enabled experimentation, learning, and rapid prototyping, and in many cases accelerated individual understanding. It also changed what contribution looked like, and what gained visibility.
There is real value in combining existing tools, libraries, and services. Composition has always been part of effective software development in Earth Observation, and AI-assisted workflows have made it easier to explore ideas, test assumptions, and connect components that were previously difficult to integrate. In many cases, this has genuinely broadened participation and lowered entry costs.
At the same time, the ease of producing convincing demos outpaced the mechanisms needed to assess durability, relevance, and long-term value. Many tools were built and shared primarily to showcase individual capability or technical fluency. They served their immediate purpose, but rarely moved beyond that stage into operational use.
This matters because Earth Observation depends on a relatively small set of foundational open-source libraries, standards, and data-access tools. These projects handle interoperability, numerical robustness, and long-term compatibility. They underpin both commercial platforms and institutional missions, including systems supported by billion-euro investments but maintained with minimal dedicated funding. When they work well, they tend to disappear into the background.
Lowering the barrier to assembling new workflows and demonstrations did not automatically strengthen this shared foundation on which the sector depends. In practice, it often increased dependence on it without increasing care for its maintenance. Contribution took the form of combining existing components rather than sustaining the libraries, formats, and interfaces that make such composition possible. Output increased, but the condition of the underlying infrastructure became harder to see.
As development practices evolve, the imbalance becomes harder to ignore. The industry is beginning to move away from informal, throwaway prototypes toward systems expected to run in production, interoperate reliably, and withstand audit and review over time. Open source plays a different role in that world: specifications can define interfaces and expected behaviour, but they do not ensure numerical correctness, performance, or operational reliability. That work still resides in code, tests, documentation, and long-term stewardship.
The issue is not experimentation, nor the use of AI to accelerate development. It is the absence of conditions that reward maintenance, validation, and long-term ownership at the same pace that they reward visible output. As reliance on shared open-source infrastructure continues to grow, this imbalance accumulates a form of technical and organisational debt that is difficult to measure, but costly to ignore.
Climate stayed central, but not structurally
Climate signals continued to intensify. Data confirmed that 2024 was the first calendar year to exceed the 1.5 °C threshold above pre-industrial levels, and January 2025 became the warmest January on record.
For an industry built to observe planetary change, these should have been defining reference points. Instead, they functioned as background context. They were noted, acknowledged, and then absorbed into a year dominated by geopolitics and AI.
Earth Observation is better equipped than ever to measure climate change. The tools are more capable, the data more accessible, and the analytical capacity more advanced than at any point before. Yet the alignment between measurement and response has weakened. Acting on climate signals competes with other priorities in a world shaped by geopolitical tension, economic uncertainty, and short-term risk management.
Climate response is not only a scientific challenge; it is also an economic one. In an environment where public budgets are under pressure and procurement favours immediate operational value, sustained investment in climate resilience and mitigation becomes harder to justify alongside other competing priorities.
Climate is central to Earth Observation in principle, but less often in how systems are structured, funded, and justified.
Where this leaves Earth Observation
By the end of 2025, Earth Observation was faster, more operational, and more confident in its capabilities. Budgets expanded. AI systems entered production. Standards stabilised. Constellations matured.
At the same time, the year narrowed what the sector is effectively organised around. Defence increasingly sets the pace. AI compresses complexity into outputs. Visibility is rewarded more readily than understanding. Climate urgency remains present, but no longer consistently shapes structure or strategy.
None of this means that Earth Observation has lost its relevance or potential. The opposite is true: the tools, platforms, and analytical frameworks are stronger than ever. What has become less certain is not capability but intent: the purposes toward which these systems are directed, and the kinds of outcomes they are ultimately designed to serve.
Acceleration, on its own, is not direction. Neither is scale. Nor automation. Direction emerges from the choices embedded in funding models, evaluation practices, and the kinds of work that are rewarded and sustained over time.
Looking ahead, the question is not whether Earth Observation will continue to advance. It will. The more consequential question is whether the disciplines of stewardship, governance, and institutional memory will advance with it, or whether they will quietly erode.
2026 will not resolve that by default. But it will make the consequences of those choices more visible, and harder for the sector to ignore.
Alongside this article, I am publishing a NotebookLM notebook containing all Spectral Reflectance newsletter issues from 2025. Throughout the year, the newsletter functioned as a running record of industry developments as they emerged, including launches, funding decisions, policy signals, and technical developments.
The notebook is not intended as a comprehensive account of everything that occurred across the sector, and some developments were inevitably missed. Rather, it reflects the material I drew on to form and test the arguments presented here. Readers can explore it directly, query it, and trace how specific observations relate to the broader themes discussed in this piece.

