Cloud Complexity

Why cloud is a kludge of complexity

The cloud model was designed to be simple and nimble. Simple and nimble doesn’t necessarily mean fit for purpose. Over the past decade, new layers of capability have been added to cloud to address its shortcomings. While this has created more options and greater functionality, it has also meant greater complexity in its management.

Today, it is possible to create a virtual server on a public cloud and deploy an IT application within minutes. This simplicity is a significant value driver of cloud uptake. But building applications that are resilient, performant and compliant requires far greater consideration by cloud users.

Public cloud providers make few guarantees regarding the performance and resiliency of their services. Instead, they advise users to design their applications across availability zones, which are networked groups of data centers within a region, so that an outage in a single zone does not take the application down. The onus is on the cloud user to build an IT application that works across multiple availability zones. This can be a complex task, especially for existing applications that were never designed for a multi-availability-zone architecture. In other words, availability zones were introduced to make cloud more resilient, but they are only an enabler; the user must architect their use.
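As a rough illustration of what that architecting involves, the sketch below (assuming AWS and the boto3 Python SDK, with a placeholder machine image, instance type and fleet size) spreads a handful of virtual servers across whatever zones a region offers. The zone discovery and the placement decisions are the user's work, not the provider's.

# A rough sketch, not a production pattern: spread a small fleet of
# virtual servers across the availability zones of one AWS region.
# The AMI ID, instance type and fleet size below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask the provider which zones are currently available in this region.
zones = [
    zone["ZoneName"]
    for zone in ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )["AvailabilityZones"]
]

# Round-robin the instances across zones. The provider does not do this
# automatically; resilience to a zone outage is the user's responsibility.
fleet_size = 6
for i in range(fleet_size):
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zones[i % len(zones)]},
    )

Even this only places compute. Keeping data replicated between zones, routing traffic around a failed zone and testing the failover are further design work that the user owns.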

One of the original core tenets of cloud was the centralization of computing for convenience and outsourced management. The reality is that many cloud buyers aren’t comfortable giving full control of all their workloads to a third party. Many are also bound by regulations requiring them to keep data in certain jurisdictions or under their own management. Private clouds were created to provide governance and control where public cloud failed, albeit with less scalability and flexibility than a public cloud.

Hybrid cloud, the combination of public cloud with private cloud or on-premises infrastructure, makes cloud more scalable by making it more distributed, which also makes it more flexible in terms of compliance and control. But it means cloud buyers must wrestle with designing and managing IT applications that work across different venues, each with its own capabilities and characteristics.

Public cloud providers now offer appliances and software that provide the same services found on their public clouds but located in an on-premises environment. These appliances and software are designed to work “out of the box” with the public cloud, allowing hybrid cloud to be implemented more quickly than through a bespoke design. Hardware manufacturers, seeing the cloud providers enter their traditional territory of the on-premises data center, have responded with servers billed according to usage, in the pay-as-you-go style of the cloud.

Cloud management platforms provide a common interface for managing hybrid cloud, adding yet another consideration for cloud buyers. Managing applications effectively across venues also requires new application architectures. Software containers, which abstract application code from the underlying operating system, provide the basis of microservices, where applications are broken down into small pieces of code that can be deployed and scaled independently, across venues if needed.
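A minimal sketch of that idea, using only the Python standard library (the service name, port and payload are arbitrary examples, not a reference design): each microservice is a small process with one narrow responsibility, exposed over the network, that can be packaged into a container image and run on whichever venue needs it.

# Minimal sketch of a single microservice: one small, independently
# deployable and scalable piece of a larger application.
# The endpoint, port and response below are illustrative placeholders.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class PricingHandler(BaseHTTPRequestHandler):
    """Serves one narrow responsibility: price lookups."""

    def do_GET(self):
        if self.path == "/price":
            body = json.dumps({"sku": "demo-item", "price_usd": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # Packaged in a container image, each replica of this process runs the
    # same way on a public cloud, a private cloud or an on-premises server,
    # and can be scaled independently of the rest of the application.
    HTTPServer(("0.0.0.0", 8080), PricingHandler).serve_forever()

The flexibility comes at a price: every such service needs its own build, deployment, monitoring and scaling decisions, which is where much of the new complexity lives.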

Applications that can scale effectively on the cloud are referred to as “cloud native.” Containers, microservices and cloud-native architectures were all introduced to help cloud scale effectively, but each brings new complexity of its own. The Cloud Native Computing Foundation (CNCF) tracks over 1,200 projects, products and companies associated with cloud-native practices. The CNCF aims to reduce technical complexity in these practices, but they are all nascent and there is no clear standard approach to implementing cloud-native concepts.

To the uninitiated, cloud might appear a simple and nimble means of accessing capacity and cloud-enabling technologies (such as cloud software tools, libraries of application programming interfaces for integrations, etc.). This can still be the case for simple use cases, such as non-mission-critical websites. However, running many workloads in a public cloud in line with their business needs (such as resiliency and cost) places complex and often onerous requirements on users. The original cloud promised much, but the additional capabilities that have made it arguably more scalable and resilient have come at the cost of simplicity.

Today, there is no standard architecture for a particular application, no “best” approach or “right” combination of tools, venues, providers or services. Cloud users face a wall of options to consider. Amazon Web Services, the largest cloud provider, alone has over 200 products, with over five million variations. Most cloud deployments today are kludged: improvised or put together from an ill-assorted collection of parts. Different venues, different management interfaces and different frameworks, working together as best they can. Functional, but not integrated.

The big threat of complexity is that more things can go wrong. When they do, the cause can be challenging to trace. The cloud sector has exploded with new capabilities to address mission-critical requirements — but choosing and assembling these capabilities to satisfactorily support a mission-critical application is a work in progress.
