The Assumptions That Need Revisiting

Introduction

There is an assumption embedded in the resilience strategies of most organisations that almost nobody states explicitly, because it is so widely held it has ceased to feel like an assumption at all. That assumption is this: the cloud will be there when you need it.

It is worth sitting with that for a moment. Not the question of whether a specific service might experience an outage, or whether a configuration error might cause an incident. Those risks are well understood and regularly rehearsed. The deeper question is whether the infrastructure your organisation has come to depend upon for its operational continuity is truly within your control, and whether the threat landscape has shifted to a point where that dependency requires urgent reassessment.

The answer to that second question, increasingly, is yes. And the implications for business continuity and operational resilience planning are significant.

When Infrastructure Becomes a Target

For years, those of us who counselled caution about wholesale migration to cloud infrastructure were characterised as resistant to change, wedded to legacy thinking, or simply failing to grasp the economic and operational benefits on offer. The resistance, in most cases, was nothing of the sort. It was the application of basic risk assessment: identifying a genuine need, weighing the benefits against the real drawbacks, and asking the questions that the migration roadmap had not accounted for.

One of those questions was physical. The consolidation of vast quantities of critical data and operational infrastructure into a relatively small number of hyperscale datacentre facilities creates, from a certain angle, high-value targets. A practitioner thinking about physical attack rather than purely cyber threat could see that concentration as a vulnerability in its own right. The cloud providers could invest enormously in cyber defence; they could not make a building invisible.

This was, at the time, treated as an extreme and rather theoretical concern. It is no longer theoretical. The kinetic targeting of cloud datacentre infrastructure in conflict zones has demonstrated that physical attack on digital infrastructure is not only conceivable but has occurred. Unlike cyber threats, where providers have significant capability and resource to respond, a kinetic strike is not a domain in which technology companies hold any particular advantage. The threat is real, and the response options are limited.

More broadly, the geopolitical environment has changed in ways that should prompt a fundamental review of assumptions about technology neutrality and provider reliability.

The Political Dimension Nobody Is Mapping

The major cloud providers are, first and foremost, corporations subject to the laws and political pressures of the jurisdictions in which they operate. That was always true. What has changed is the willingness of state actors to exert that pressure in ways that would once have seemed unthinkable in the western democratic context.

We have seen technology companies face sustained governmental pressure for decisions that were, by any reasonable measure, in the public interest. We have seen regulatory and legislative mechanisms used as instruments of economic coercion. The question that business continuity professionals need to be asking is not whether a major cloud provider might be subject to state pressure to restrict, degrade, or disrupt services to organisations in a particular country or sector. The question is whether their organisation has mapped that as a risk and planned accordingly.

Most have not. The risk register may note dependency on third-party providers. It is unlikely to contain a scenario in which that provider is instructed, by a government with leverage over it, to withdraw or degrade services as part of an economic or political pressure campaign. Yet that scenario, in the current environment, is not paranoid. It is a logical extension of tools that are already being deployed.

The AWS us-east-1 outage of 2024 should have been a wake-up call on concentration risk alone: the sheer number of organisations whose operations were disrupted by a single regional failure illustrated how deeply the assumption of availability had been built into continuity plans that were supposed to account for exactly that kind of disruption. The more recent developments in the geopolitical landscape should be sounding alarms of a different register entirely. If your business continuity programme has not been reviewed in light of both, it is overdue.

The Sovereignty Problem: Your Data, Their Terms

There is a way of expressing the core problem of cloud dependency that tends to cut through: when you lose the connection, your data is no longer yours.

On-premises infrastructure, for all its costs and limitations, has an important property: in almost any recovery scenario short of complete physical destruction, you retain access to your data. You own the hardware. You control the connection path to it. The recovery may be complex, but the data is yours to recover.

Cloud infrastructure inverts this. Your data may be perfectly intact, replicated across multiple availability zones, subject to rigorous backup processes. But if the service is unavailable, whether through technical failure, deliberate restriction, or the withdrawal of access, you cannot reach it. A backup strategy that lives entirely within the same provider ecosystem as the primary system does not constitute resilience. It constitutes redundancy within a single point of failure.

My recommendation for any organisation with material cloud dependency is to maintain at minimum a current backup of critical data to on-premises storage, with a defined and tested fallback mechanism for accessing that data independently of the cloud environment. This need not mean abandoning cloud infrastructure. It means ensuring that you retain sovereignty over your own data in a genuine worst-case scenario. Even a direct database access route, available to ICT professionals under controlled conditions, is preferable to no route at all.
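The "defined and tested" part of that recommendation matters as much as the backup itself: an on-premises copy that has silently stopped refreshing offers false comfort. As a minimal sketch of what automated verification might look like, the check below assumes the sync tooling touches a marker file on each successful run; the path and the 24-hour threshold are hypothetical placeholders, not a prescription for any particular recovery point objective.

```python
# Illustrative sketch only: verify that the last successful on-premises
# sync of critical data is recent enough to serve as a genuine fallback.
# Assumes (hypothetically) that the sync job updates a marker file on
# each successful run.
import os
import time

MAX_BACKUP_AGE_SECONDS = 24 * 60 * 60  # assumed RPO of one day


def backup_is_current(marker_path: str,
                      max_age: int = MAX_BACKUP_AGE_SECONDS) -> bool:
    """True if the last successful on-prem sync is within the allowed age."""
    if not os.path.exists(marker_path):
        return False  # no evidence of any successful sync at all
    age = time.time() - os.path.getmtime(marker_path)
    return age <= max_age
```

In practice a check of this kind would run as a scheduled alert owned by the resilience function, rather than an ad-hoc script, so that a stale backup surfaces long before the worst-case scenario does.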

The Forced Change Problem: Provider Decisions, Your Risk

There is a further dimension of cloud dependency that receives less attention than outage risk but is, in practice, encountered far more frequently. Providers make decisions. Those decisions affect your systems. You had no part in making them.

The deprecation of TLS 1.0 and 1.1 by AWS is a useful illustration. The decision was, on security grounds, entirely defensible. The effect on organisations running software built on older framework versions that defaulted to those protocols was, in some cases, immediate and severe. Software developed using older versions of the .NET framework, for example, defaulted to TLS 1.0 or 1.1 as a matter of convention: when the provider restricted support to TLS 1.2, that software was broken at a stroke. For organisations that owned their software and had access to development resource, the fix was achievable. For organisations running third-party legacy applications with no available update path, there was no straightforward remedy. The provider made a decision; the customer carried the operational risk.

This is not an argument against security improvements. It is an argument that business continuity planning in cloud environments must account for the risk of provider-driven change imposed on a timeline and at a pace not of the organisation's choosing. That risk needs to be on the register, with a corresponding assessment of which systems would be affected and what the response would be.

A Balanced Assessment, Not a Rejection

None of this is a case against cloud infrastructure. The pandemic demonstrated, comprehensively, the value of distributed, cloud-accessible systems for maintaining operational continuity when physical access to workplaces is lost. Productivity was maintained at scale, and rapidly, because organisations had cloud infrastructure in place. That is a genuine resilience benefit, and it should be recognised as such.

The same distributed model that enables remote working also supports geographic redundancy, rapid scaling during demand spikes, and access to security and monitoring capabilities that most organisations could not replicate on-premises at equivalent cost. These are real advantages.

The problem is not the cloud. The problem is the unexamined assumption that cloud adoption is itself a resilience strategy, rather than an operational choice with its own resilience implications that need to be mapped, planned for, and tested. The industry absorbed the benefits of cloud migration and, in many cases, stopped asking the critical questions. The people who kept asking them were characterised as resistant to change. In retrospect, they were performing proper risk assessment.

The Question Every Organisation Should Be Able to Answer

Every organisation with material cloud dependency should be able to answer the following question clearly: what is our resilience posture if our primary cloud provider is unavailable for 72 hours, not due to a technical fault on their part, but due to a decision made in a boardroom or a government building that we have no relationship with and no influence over?

If the honest answer is that the organisation would be unable to function, then the business continuity programme has a gap. If the answer is that the backup strategy would cover it, the follow-up question is whether that backup strategy is hosted on the same provider. If it is, the gap remains.
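That follow-up question lends itself to a simple mechanical check across a dependency register. The systems and provider names below are invented for illustration; the point is the shape of the test, not the data.

```python
# Illustrative sketch: flag any system whose backup lives with the same
# provider as its primary — redundancy within a single point of failure,
# not resilience. All names here are hypothetical.
def continuity_gaps(register):
    """Return the systems whose backup shares a provider with the primary."""
    return [entry["system"] for entry in register
            if entry["primary_provider"] == entry["backup_provider"]]


register = [
    {"system": "CRM",     "primary_provider": "cloud-a", "backup_provider": "cloud-a"},
    {"system": "Finance", "primary_provider": "cloud-a", "backup_provider": "on-prem"},
    {"system": "HR",      "primary_provider": "cloud-b", "backup_provider": "cloud-b"},
]

print(continuity_gaps(register))  # ['CRM', 'HR'] — for these, the gap remains
```

A real register would carry far more detail, but even this crude pass forces the question into the open: for each flagged system, what is the independent recovery route?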

Closing that gap does not require abandoning cloud infrastructure. It requires treating cloud providers as what they are: third-party dependencies with their own risk profiles, political exposures, and operational vulnerabilities, subject to the same rigorous dependency analysis that would be applied to any other critical supplier. The fact that they are large, well-resourced, and widely used does not make them immune to the threats that the current environment is generating.

The Role of the Resilience Professional

Resilience professionals have a responsibility that goes beyond maintaining the BCP document and scheduling an annual test. It includes being present in the rooms where infrastructure and migration decisions are made, and asking the questions that those conversations often do not naturally generate.

The value of that presence is not always visible until it is absent. Decisions made without a resilience lens carry risks that do not announce themselves immediately. They accumulate quietly, and they tend to surface at the worst possible moment: when a provider changes its terms without warning, when a regional outage cascades across dependencies that were mapped in isolation, when a geopolitical event materialises a scenario that was on nobody's radar because it had always seemed implausible.

Resilience is not a parallel process, activated when something goes wrong. It is a discipline embedded in ordinary decisions, including the decision of where and how you store the data your organisation depends upon. The cloud is a powerful tool. It is not, by itself, a resilience strategy.

At Tapping Frog, we help organisations identify and address the gaps in their continuity and resilience planning, including the assumptions that have gone unexamined for too long. If this article raises questions you would like to explore, we would welcome the conversation.

Get in touch