Cloud Strategy in the Age of Multi-Cloud

Today HashiCorp published the "State of Cloud Strategy Survey". It's a fantastic resource for anyone thinking about how their cloud strategy impacts key initiatives, whether that's cost reduction, digital transformation, a desire for greater scale, or access to emerging technology.

HashiCorp states:

Our first-ever State of Cloud Strategy Survey uncovers some very clear results: a common multi-cloud operating model has become the de facto standard [...]

I'm not surprised to hear it. I am here to tell you that one of the key statistics that HashiCorp provides only tells part of the story, and as they say, here be dragons.

76% Are Already Multi-Cloud

Multi-cloud adoption is unequal. On one hand, you have organizations pairing their own data center (private cloud) with a single public cloud, or teams running the vast majority of workloads in one cloud while a single, isolated service runs in another. On the other hand, you have organizations strategically pursuing multi-cloud at scale for the full benefits: savings, scaling, access to innovation, and fuel for digital transformation, among others.

I see a certain naiveté in the responses, because the second stated factor driving multi-cloud adoption is "avoid single cloud vendor lock-in". That makes sense coming from HashiCorp customers, since a product like HashiCorp Terraform offers cloud-agnostic provisioning and management for workloads. But avoiding lock-in this way is more complex than many would like. Trading a "single public cloud" strategy for a multi-cloud strategy just to avoid lock-in is unwise, as you likely double (or more) your lock-in in the process. It can also mean losing out on the benefits of cloud: if you opt not to use AWS Transcribe in your application stack because there isn't a compatible API at another CSP, you're re-inventing a wheel, and there's a good chance your wheel will be worse. Compare cloud image recognition services and you'll discover they are not created equal. It's not just "X is better than Y"; there is complexity across many dimensions that may make each service better for specific use cases.
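To make that concrete, here is a minimal sketch of the kind of adapter layer you end up writing just to do image labeling in a "cloud-agnostic" way. It assumes boto3 and google-cloud-vision are installed and credentials are already configured; the file name, provider strings, and score normalization are mine for illustration, and real workloads quickly need far more than this.

# A minimal sketch of the "abstraction layer" you end up writing to stay
# cloud-agnostic for image labeling. Assumes boto3 and google-cloud-vision
# are installed and credentials are already configured.
import boto3
from google.cloud import vision


def detect_labels(image_bytes: bytes, provider: str) -> list[tuple[str, float]]:
    """Return (label, confidence 0-100) pairs from the chosen provider."""
    if provider == "aws":
        rekognition = boto3.client("rekognition")
        response = rekognition.detect_labels(
            Image={"Bytes": image_bytes}, MaxLabels=10, MinConfidence=50
        )
        return [(lbl["Name"], lbl["Confidence"]) for lbl in response["Labels"]]

    if provider == "gcp":
        client = vision.ImageAnnotatorClient()
        response = client.label_detection(image=vision.Image(content=image_bytes))
        # Google reports a 0-1 score, so normalize to match Rekognition's 0-100 scale.
        return [(a.description, a.score * 100) for a in response.label_annotations]

    raise ValueError(f"unknown provider: {provider}")


if __name__ == "__main__":
    with open("photo.jpg", "rb") as f:  # hypothetical input image
        data = f.read()
    print(detect_labels(data, "aws"))
    print(detect_labels(data, "gcp"))

Even in this toy version, the request shapes, label vocabularies, and confidence scales differ between the two services, and that divergence only grows as you use more of each service's features.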

Reaching the nirvana-like state of "enlightened multi-cloud", where your enterprise can seamlessly tap into multiple clouds and multiple cloud services, certainly involves challenges that HashiCorp helps with (for example, giving your teams a single automation stack that works across clouds), but your efforts will eventually run up against challenges with data. Here are some we've seen writ large in the real world:

  • Data Movement – Data has gravity. Applications want to tap into it, applications create more data, and distance creates latency, which slows access to data. There's a reason why having your data on-prem and all your applications in the cloud, or vice versa, is a critical anti-pattern: it doesn't work. (Setting aside very specific use cases friendly to this, such as backup-to-cloud.) In a multi-cloud environment, this means applications in one cloud eventually want access to data in another cloud. Then you have to worry about copying, moving, replicating, or some other solution, but the cure can be worse than the disease, as you more than double your costs: two copies instead of one, plus transit costs (a rough back-of-the-envelope cost sketch follows this list). Never mind the operational complexity of keeping the copies in sync.
  • Access Management – Your teams need access to the data, but now you potentially have two sets of security policies, two methods for updating them, and two systems of record to stitch into everything from your SIEM tools to your ServiceNow instance. Meanwhile, the mechanisms from one cloud, like carefully crafted AWS IAM rules that use STS AssumeRole to seamlessly grant instances in certain VPCs access to data, simply don't work in the other cloud and have to be rebuilt.
  • Data Protection – There's a great chance you're not just creating a copy, but that each copy of data gets its own "lineage"; since teams on each side of the multi-cloud divide want to create and modify data, you likely have "original" data on each side that now needs to be protected: against malware, operator error, malicious ex-employees, and of course, the exploding threat of ransomware.
  • Governance – Extra copies of data, movement across clouds, and the workarounds for the challenges above can all create additional challenges around data governance. Understanding source, owner, classification, access rights, and the many other dimensions of governance is a burden multiplied by multi-cloud.
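Here is the back-of-the-envelope cost sketch promised above. The prices and change rate are placeholder assumptions, not published rates; the point is only that a second cross-cloud copy more than doubles the bill once transit is included.

# Rough sketch of why a second cross-cloud copy more than doubles cost.
# All numbers below are placeholder assumptions for illustration, not quotes.

dataset_tb = 100                   # size of the data set, in TB
storage_per_tb_month = 23.0        # assumed $/TB-month for a managed storage tier
egress_per_tb = 90.0               # assumed $/TB for cross-cloud data transfer
monthly_change_rate = 0.10         # assume 10% of the data changes and re-syncs each month

single_copy = dataset_tb * storage_per_tb_month
second_copy_storage = dataset_tb * storage_per_tb_month
ongoing_sync_egress = dataset_tb * monthly_change_rate * egress_per_tb
one_time_seed_egress = dataset_tb * egress_per_tb

print(f"Single copy, single cloud:  ${single_copy:,.0f}/month")
print(f"Two copies, storage alone:  ${single_copy + second_copy_storage:,.0f}/month")
print(f"Plus ongoing sync egress:   ${single_copy + second_copy_storage + ongoing_sync_egress:,.0f}/month")
print(f"One-time seeding transfer:  ${one_time_seed_egress:,.0f}")

Plug in your own numbers; with almost any realistic rates, the second copy and the transit to keep it current push you well past double the single-cloud cost before you've accounted for the people keeping it all in sync.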

As a response to this, in the same way that HashiCorp proposes to solve multi-cloud automation with a unified platform for defining and automating infrastructure with Terraform, I propose everyone consider this not-so-radical solution to multi-cloud data challenges: don't deal with multi-cloud data at all. Have a single repository of data and use it in multiple clouds. Here's a picture:

[Diagram: a single Cloud Control Volume hosting data "between the clouds," with low-latency connections into each public cloud.]

This simple architecture uses our Cloud Control Volume data service to host data "between the clouds," with high-speed, low-latency access that makes it look equivalent to cloud-native storage in each region, and it solves the multi-cloud data challenge. By keeping a single copy, we reduce multi-cloud data complexity back to single-cloud levels: there is no replication, no extra copies adding cost, a single method for accessing the data (and indeed, unified addressing across clouds), and governance can be tied to a single copy.
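In application terms, the idea is that code in every cloud sees the same data in the same place, so there is nothing to copy or keep in sync. The sketch below is purely illustrative and assumes the shared volume is already presented (for example, over NFS) at a hypothetical mount point on instances in each cloud; the path and file names are not real product details.

# Illustrative only: assumes a single shared volume is already mounted
# (e.g., over NFS) at the same hypothetical path on instances in each cloud.
from pathlib import Path

SHARED = Path("/mnt/shared-data")   # one copy of the data, visible from every cloud

def write_result(name: str, payload: bytes) -> None:
    # An application in cloud A writes once...
    path = SHARED / "results" / name
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(payload)

def read_result(name: str) -> bytes:
    # ...and an application in cloud B reads the same bytes: no replication job,
    # no second copy, and one set of permissions on the volume.
    return (SHARED / "results" / name).read_bytes()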

This is only the beginning. We've been adding features and products like CyberVault, which provides air-gapped protection for multi-cloud data sets, or lets data centers protect data with multi-cloud recoverability. We're adding more capabilities to further enable data transformation and to make the multi-cloud networking we provide more flexible. Yet it is, in some ways, early days for multi-cloud.

As the CTO of Faction, I find neither the enthusiasm for multi-cloud nor its benefits surprising. I've been presenting on the challenges and benefits of multi-cloud for over four years now, as Faction's first multi-cloud data service launched in late 2016. Today, we have numerous Fortune 500 customers leveraging our platform to streamline multi-cloud adoption, and our partnership with Dell to power Dell's Multi-Cloud Data Services has further extended the reach of our capabilities. I'm privileged to have had many conversations with organizations in all stages of their cloud strategy.

One clear signal supporting future growth of "true multi-cloud" is a factor sitting in the middle of the HashiCorp pack for "factors driving your multi-cloud adoption": emerging technologies. If you recall the state of cloud for the first decade following the launch of AWS S3, a lot of the large cloud players were in a bit of a "me too" game. There was a baseline of infrastructure services (compute instances, private networking, object and block storage), one step up into database services (relational, NoSQL, data warehousing), and then other services based on popular open-source packages, such as Hadoop-based elastic map-reduce offerings, and so on.

As we highlighted with the image recognition services, we are entering a new era of "public cloud diversity". An ever-growing stable of services that are truly differentiated from other clouds will drive a new wave of multi-cloud. Some of this innovation is happening at the hardware level, with things like AWS Graviton processors or Google's TPU chips. Even when the hardware innovation isn't happening inside the CSP, things like NVIDIA GPUs are unevenly distributed. As ESG highlighted in their validation paper on the Dell-Faction multi-cloud data service, "CSPs do not offer the same GPU instance type in all regions"; and even when they do, they may not offer it at the same scale.
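You can see that unevenness for yourself on a single cloud with a few lines of code. The sketch below checks which AWS regions currently offer a given GPU instance type; it assumes boto3 is installed and AWS credentials are configured, and the instance type is just an example of a GPU-heavy SKU.

# Check which AWS regions offer a particular GPU instance type.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

GPU_INSTANCE_TYPE = "p4d.24xlarge"  # example NVIDIA A100-based instance type

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

available = []
for region in regions:
    regional = boto3.client("ec2", region_name=region)
    offerings = regional.describe_instance_type_offerings(
        LocationType="region",
        Filters=[{"Name": "instance-type", "Values": [GPU_INSTANCE_TYPE]}],
    )["InstanceTypeOfferings"]
    if offerings:
        available.append(region)

print(f"{GPU_INSTANCE_TYPE} offered in {len(available)} of {len(regions)} regions:")
print(", ".join(sorted(available)))

Run the same exercise across CSPs and the picture gets even patchier, which is exactly why access to specific hardware becomes a multi-cloud driver.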

Many organizations increasingly rely on the seemingly infinite scale of public cloud for workloads, but at High Performance Computing (HPC) levels of scale, they may come up short. Moreover, if your organization is looking to the cloud for flexibility by using cloud resources only when needed, how do you ensure you have access to those resources when you do need them, without committing to big contracts for reserved instances? Certainly multi-cloud can be an answer, but not if switching clouds requires a cumbersome movement of data.

I'm very jazzed to see the results from the HashiCorp survey. It validates something I've worked on for years, but it also confirms for me that the best is yet to come, because I know how much our platform can smooth out the rough road to multi-cloud adoption. We have a lot to do in the world, so we can use all the speed of innovation we can get.

Let's get to work.


