This is the third in a series of four articles discussing infrastructure as a service (IaaS) clouds. The series started at a basic level and will now dive progressively deeper. The topics for the series are:
1. Cloud 101
– What is cloud
– What value should cloud provide
– Public, private, and hybrid cloud
– Starting on a cloud project
2. Application taxonomy, what belongs in the cloud, and why
3. What you should look for in cloud infrastructure software
4. Evaluating different approaches to cloud infrastructure software
The concepts section describes architectural design points you should ask vendors about to make sure that they are thinking like a true cloud provider and not simply cloud-washing older technology to try to stay relevant in a new world. Keep in mind that these concepts are about infrastructure management in general, not just compute. You should think about the storage, network, and power aspects of your cloud in the same way.
The specific functionality section lists cloud features to check for. If too many of these are missing, the cloud value proposition will not be delivered.
Core Concepts and Philosophy
In a cloud, scale is the key to long-term success. The number of nodes and instances, simultaneous connections to the management system, the networking and security features, etc. all need to scale. For each and every exciting and valuable feature a cloud vendor touts, you need to ask, “Can I have tens of thousands of those? What is the experience at that scale? When, if ever, does the scale impact the end user and how they do their work?”
While one can deploy a small-scale cloud, a successful cloud will become a single pool of capacity for an entire organization or even multiple organizations. In fact, the larger a cloud scales, the more cost savings and value it generates, since you start to see the benefits of “the law of large numbers”. If you do not build for scale from the very beginning, you will hit a wall and need to create separately managed clouds. This forces end users to decide which workloads go on which clouds, breaking the frictionless self-service model. Furthermore, capex benefits are lost as you are forced to overprovision each cloud fragment rather than benefiting from a single pool hosting many applications with offsetting resource consumption curves.
For example, in a non-cloud deployment, the datacenter management system is used by datacenter admins only. If it can only deal with tens of simultaneous connections and is limited to one or two nodes, there is no problem since the administrative team is relatively small. However, in a cloud, since a large number of end users drive the management system directly via self-service workflows, the management system requires a whole new level of scale.
Automation is the key for allowing end users to do their own work and also for lowering datacenter operation costs. Make sure there is a proper degree of automation for both application lifecycle operations and infrastructure operations.
The core principle for end user operations is that no end user task should ever trigger work on the datacenter administrator side, not even a single mouse click approval. There is still a high degree of control and protection required, but these controls must be implemented as up front policies where the right groups of people delegate the right privileges to the right consumers. Furthermore, there need to be audit trails so that one can show that the policies constrained people to the proper activities. However, none of this changes the fact that manual approval processes by central admins on regular daily end user operations cannot work in a cloud model.
The core principle for datacenter operations is that the cloud should be self-discovering, self-organizing, self-monitoring, and self-healing. Anyone who sells you a complete zero-touch datacenter today is certainly exaggerating, but you should check the features they have to make sure this philosophy is followed where technically feasible. Where manual intervention is needed, make sure it is required only for infrequent up-front tasks and never for routine operations that happen on a regular basis.
As an example, initial cloud configuration and network setup may be items that require significant up front planning and hours to days of setup, but regularly growing the cloud deployment by adding nodes must require no more time and effort than it takes to rack the systems and plug them in.
Identity, Permissions and Delegation
Clouds need to understand who each user is, what groups they belong to and what customer or tenant their work is billed to. Each operation on each object needs to check the identity of the actor against the permissions system to make sure that the operation is allowed. Delegation then needs to be possible – from cloud admin to customer admin to end users and groups, and possibly between separate end users and groups. Without a strong concept of identity, permissions and delegation, your cloud will only scale to a single tenant and will never fully interoperate well with other clouds, thereby limiting the long-term benefit you derive from the system. Like scale and automation, this is a core design choice.
If cloud vendors do not have proper permissions systems for their objects or lack a way to delegate permissions through multiple levels, they are not thinking like a cloud vendor. The result will be trouble down the road as end users wind up filing tickets to acquire permissions, driving a heavyweight approval process in which the resource owner and the end user’s management team must be consulted.
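To make this concrete, here is a minimal sketch of what a tenant-scoped permission check and multi-level delegation could look like. All names and structures are illustrative assumptions, not any particular vendor's model; the point is that every operation is checked against delegated grants, and that delegation itself is a privilege that can be handed down without a central admin in the loop.

```python
# Minimal sketch (all names hypothetical) of tenant-scoped permission checks and
# multi-level delegation; every operation runs through is_allowed().
from dataclasses import dataclass, field

@dataclass
class Grant:
    grantee: str      # user or group receiving the privilege
    privilege: str    # e.g. "deploy", "publish_image", "delegate"
    scope: str        # the object or resource pool the grant applies to

@dataclass
class Tenant:
    name: str
    grants: list = field(default_factory=list)

def is_allowed(tenant, actor, groups, privilege, scope):
    """Check the actor (or any group they belong to) against the tenant's grants."""
    return any(
        g.privilege == privilege and g.scope == scope
        and (g.grantee == actor or g.grantee in groups)
        for g in tenant.grants
    )

def delegate(tenant, delegator, groups, grant):
    """A customer admin hands a narrower privilege down to an end-user group,
    with no cloud-admin involvement; the change is captured in the audit trail."""
    if not is_allowed(tenant, delegator, groups, "delegate", grant.scope):
        raise PermissionError(f"{delegator} may not delegate on {grant.scope}")
    tenant.grants.append(grant)
    print("AUDIT", {"actor": delegator, "action": "delegate", "grant": grant})

# Cloud admin -> customer admin -> end-user group
acme = Tenant("acme", [Grant("acme-admins", "delegate", "pool:acme-dev")])
delegate(acme, "alice", {"acme-admins"}, Grant("dev-team", "deploy", "pool:acme-dev"))
assert is_allowed(acme, "bob", {"dev-team"}, "deploy", "pool:acme-dev")
```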
Openness and Choice
Openness and choice mean that you have:
- Independence at each layer: Your different cloud components are not locked in from end to end. A choice at one layer does not dictate a choice at another, unrelated layer.
- Your choice of end-user self-service workflow management should never dictate your hypervisor or other core infrastructure component.
- Equally importantly, your private cloud software should never dictate the choice of public clouds to which you can federate. Your end-user provisioning interface should work on your private cloud infrastructure, on any public cloud using the same cloud software, and even on any public cloud that uses competing or homegrown cloud software. Having to present different interfaces to your end users for clouds built on different cloud infrastructure components is not open.
- Complete and open APIs: Your cloud vendor should have extensive APIs. At the very least, the APIs should cover everything provided in the UI. This allows customized workflows at both the infrastructure level and the end-user level (see the sketch after this list).
- Extensible components: Your cloud vendor should use open and extensible components where possible. Open source, where anyone can insert code at any point, is the extreme example of this principle. In non-open-source systems, there are ways to introduce more controlled, but still extremely flexible, extensibility models. For example, major components can be general-purpose enough that customers can readily add other ecosystem products, as with the Linux domain 0 model for hypervisors. Alternatively, APIs can be robust and complete enough that most conceivable useful integrations are possible, as is the case with the Windows APIs. This makes a big difference as you try to augment your cloud with best-of-breed third-party cloud management products.
- Standards: Your cloud vendor should take advantage of open standards where possible, provided those standards do not unduly constrain innovation.
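As an illustration of the "complete and open APIs" point, here is a minimal sketch assuming a purely hypothetical REST endpoint; the URL, paths, and fields are invented for the example. The idea is simply that anything the UI can do – deploying instances, tagging them for chargeback – is also scriptable, so customized workflows can be built at both the infrastructure and end-user levels.

```python
# Hypothetical example only: the endpoint, paths, and fields below are invented
# to show that UI operations should also be available programmatically.
import json
import urllib.request

API = "https://cloud.example.com/api/v1"     # hypothetical endpoint
TOKEN = "REPLACE_WITH_TOKEN"                 # credential obtained out of band

def api_post(path, body):
    req = urllib.request.Request(
        API + path,
        data=json.dumps(body).encode(),
        headers={"Authorization": "Bearer " + TOKEN,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A custom end-user workflow built entirely on the API: deploy, then tag the
# instances so usage rolls up to the right cost center.
instances = api_post("/instances", {"image": "web-frontend-v7", "count": 3})
api_post("/instances/%s/tags" % instances["id"], {"cost_center": "dept-42"})
```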
Without openness and choice you risk vendor lock-in and the high cost that comes from having no meaningful option to replace an infrastructure component. Technology lock-in slows down the rate at which you get the new features you request from your vendor. A limited ecosystem and an inability to augment your cloud with the latest and greatest offerings from companies both new and established, or from the open source community, further limit your ability to improve your cloud over time. Lastly, limited choice in the public clouds to which you can federate may force you into a cloud with the wrong feature set or one that is too expensive.
Specific Functionality
When evaluating the specific functionality described below, make sure to bring the philosophies above to bear on each item. Make sure that every feature below is implemented with scale, automation, permissions and delegation, and openness in mind.
In the spirit of openness, it is key to recognize that it is not required, nor even desirable, for all the features here to come from the cloud software vendor. Cloud software vendors should be able to present you with an ecosystem that helps fulfill the requirements below.
Self-Service Developer and Deployment Workflows
This is the core of the concept of cloud. End users need a way to do the following on their own (a simple deployment sketch follows this list):
- Manage, update, and version their images.
- Publish images to a selected community for use in deployment workflows.
- Deploy images and configure the following runtime parameters:
- The number of instances
- The images used
- The placement policy
- The network connectivity
- The application configuration
- The resource allocation
- The storage to mount
- Scale applications up and down and retire them when their usefulness has ended.
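As a rough illustration, a self-service deployment request covering the runtime parameters above might look something like the following sketch. The field names and the submit call are purely illustrative assumptions, not any particular product's interface.

```python
# Illustrative only: field names and submit() are hypothetical.
deployment_request = {
    "image": "billing-service:2.4",                  # the image used
    "instances": 8,                                  # the number of instances
    "placement_policy": "spread-across-racks",       # the placement policy
    "network": {"segment": "dept42-app-net",         # the network connectivity
                "public_ip": False},
    "app_config": {"DB_HOST": "db.dept42.internal",  # the application configuration
                   "LOG_LEVEL": "info"},
    "resources": {"vcpus": 4, "memory_gb": 16},      # the resource allocation
    "storage": [{"volume": "dept42-billing-data",    # the storage to mount
                 "mount": "/data"}],
}

def submit(request):
    """Hand the request to the cloud's self-service API (hypothetical).
    No administrator is involved; policy and permission checks run automatically."""
    print("submitting", request["image"], "x", request["instances"])

submit(deployment_request)
```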
Reliability and Scale of the Management System
With cloud, when we think about the management system, we’re not just talking about basic monitoring. We’re talking about the whole datacenter control system – how workloads are deployed, managed, and retired. As noted above, a management system that handles only tens of simultaneous connections and runs on one or two nodes – a single point of failure that compromises reliability – is fine when only a small administrative team uses it. In a true cloud, with end users driving the management system directly through self-service workflows, a whole new level of scale is required.
The management system of a cloud needs to be able to scale to handle thousands of simultaneous connections. Furthermore, it can never be down. It needs to be self-monitoring and self-healing. When and if a management node is lost, the remaining management nodes need to continue operation, and the lost management node needs to be replaced from the remaining equipment in the cloud so that the degraded state is resolved. All of this needs to happen automatically without impact to the end user or intervention on the part of the administrator.
Multi-Tenancy and Networking
For clouds to be of use to anything more than the smallest organizations, robust and secure multi-tenancy separation is required. This involves a great deal of network functionality.
Some customers will require traditional layer 2 separation, such as VLANs. Those networks need to be managed flexibly and securely:
- The cloud needs to allow for the layer 2 network to be exposed to large sets of nodes without difficult configuration.
- The cloud needs to make sure that there is a security system governing access to each layer 2 network so that only the right workloads from the right customers are placed on a given network.
- The layer 2 network should provide the equivalent of a broadcast domain and support all traffic types (unicast, multicast, and broadcast). It should also support IPv6 and any other layer 3 protocol, not just IPv4.
Other customers, who do not want to be limited by the scale and manageability of layer 2 networks and who do not need a broadcast domain, may choose a more modern and flexible large flat cloud network with an integrated distributed firewall providing the isolation between customers’ workloads. This distributed firewall service (sketched after the list below) needs:
- To have a central configuration repository that ultimately informs the separation between workloads created by different customers as well as between workloads within individual customers.
- To be configurable by the infrastructure administrator, the individual customer administrators, and even the end users who need to control access to their own work product within their organizations.
- To have its configuration be independent of workloads and IPs – adding and removing workloads must not cause reconfiguration of the distributed firewall service.
- To execute in a distributed manner that avoids network bottlenecks.
- To be independent of server, building, network vendor, network topology, and even geography.
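A minimal sketch of what such IP-independent rules could look like, assuming a tag- or group-based model; the tags, tenants, and first-match semantics are illustrative assumptions. Rules reference workload groups rather than addresses, so adding or removing workloads never touches the firewall configuration.

```python
# Illustrative tag-based rules: enforcement would run distributed on every host,
# pulling from the central configuration repository; no rule mentions an IP.
rules = [
    {"src": "group:acme/web", "dst": "group:acme/db", "port": 5432, "action": "allow"},
    {"src": "tenant:acme",    "dst": "tenant:acme",                 "action": "deny"},
    {"src": "any",            "dst": "any",                         "action": "deny"},
]

def evaluate(src_tags, dst_tags, port=None):
    """First matching rule wins; the final catch-all denies cross-tenant traffic."""
    for r in rules:
        if (r["src"] in src_tags or r["src"] == "any") and \
           (r["dst"] in dst_tags or r["dst"] == "any") and \
           r.get("port") in (None, port):
            return r["action"]
    return "deny"

# A web instance may reach the database tier; an arbitrary instance may not.
print(evaluate({"tenant:acme", "group:acme/web"},   {"tenant:acme", "group:acme/db"}, 5432))  # allow
print(evaluate({"tenant:acme", "group:acme/batch"}, {"tenant:acme", "group:acme/db"}, 5432))  # deny
```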
Storage
Like compute, storage needs to be aggregated into large pools for access by end users, so that end users are shielded from the details of the different storage devices and of which objects are placed on which device.
Unlike compute, storage devices vary widely in capability and price, so some aggregation system is needed to create pools with different service levels, where customers can decide the capabilities they need and are willing to pay for when storing their storage objects. This customer decision then drives automated pool selection and, ultimately, device selection.
As with all other cloud resources, end-user-created storage objects need to be created through self-service workflows without administrator interaction, but they must also be subject to a robust permission and delegation system governing which storage can be used by which users.
The storage objects created by end-users in those pools need to be managed independently of the instances that mount and access them. This way, creating, updating or deleting workloads does not affect the core information the customer needs to preserve over time. Workloads that create data can be killed, redeployed from an updated template and reattached to storage without impact to the storage object itself. Also, storage objects should be able to be cloned or snapshotted for use by future instances or for rollback processes without any interaction with the running instance accessing it.
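A minimal sketch of the service-level idea, with invented pool definitions and prices: the customer picks a capacity and a service level through self-service, and the cloud picks the pool (and ultimately the device) automatically. The resulting volume exists independently of any instance that later mounts it.

```python
# Pool definitions, prices, and fields are illustrative assumptions.
pools = [
    {"name": "gold",   "media": "ssd", "replicas": 3, "price_gb_month": 0.30, "free_gb": 40_000},
    {"name": "silver", "media": "ssd", "replicas": 2, "price_gb_month": 0.15, "free_gb": 120_000},
    {"name": "bronze", "media": "hdd", "replicas": 2, "price_gb_month": 0.05, "free_gb": 900_000},
]

def create_volume(tenant, size_gb, service_level):
    """Self-service volume creation: pool and device selection are automatic,
    and the volume's lifecycle is independent of any instance that mounts it."""
    pool = next(p for p in pools
                if p["name"] == service_level and p["free_gb"] >= size_gb)
    pool["free_gb"] -= size_gb
    return {"tenant": tenant, "pool": pool["name"], "size_gb": size_gb,
            "monthly_cost": size_gb * pool["price_gb_month"]}

print(create_volume("dept42", size_gb=500, service_level="silver"))
```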
Billing and Chargeback
Core to the economic model of cloud is the ability to have end customers either pay for their usage or, at the very least, understand their impact on datacenter costs. To that end, there need to be complete metering APIs and a chargeback or showback system for the cloud.
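As a rough sketch of the idea (rates, record formats, and tenants are invented for illustration), usage records pulled from the cloud's metering APIs can be rolled up into a per-tenant showback report:

```python
# Illustrative metering records feeding a simple showback report.
from collections import defaultdict

rates = {"vcpu_hours": 0.04, "memory_gb_hours": 0.01, "storage_gb_months": 0.05}

usage_records = [
    {"tenant": "dept42", "metric": "vcpu_hours",        "quantity": 1200},
    {"tenant": "dept42", "metric": "storage_gb_months", "quantity": 500},
    {"tenant": "dept7",  "metric": "vcpu_hours",        "quantity": 300},
]

def showback(records):
    """Aggregate metered usage per tenant into a cost estimate."""
    report = defaultdict(float)
    for r in records:
        report[r["tenant"]] += r["quantity"] * rates[r["metric"]]
    return dict(report)

print(showback(usage_records))   # {'dept42': 73.0, 'dept7': 12.0}
```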
Hands-Off Infrastructure Management
Management of the physical infrastructure should be as low touch as possible. This includes many aspects:
- Installation of the nodes: Node installation becomes a frequent operation in a big, fast-growing, and/or mature cloud where parts need to be replaced regularly. Manually installing or configuring servers will be too expensive and error-prone in this world. The only proper experience is for the servers to be racked and connected, then powered on – and nothing else. The cloud needs to auto-discover the server, install it, and make it ready to accept workloads.
- Intelligent workload placement: Workloads should be placed automatically, without administrator involvement, such that they are (see the placement sketch after this list):
- Loosely packed enough that bottlenecks and performance problems are not generated, since dealing with those problems reactively is problematic at scale.
- Tightly packed enough that hardware, power, and cooling are not wasted.
- Strategically placed so that related workloads cohabitate for enhanced inter-workload communication and that redundant workloads are separated to eliminate single points of failure for the service.
- Placed based on constraints such as the requirement to be on a node with GPUs or a node that is certified PCI compliant.
- Capacity tracking: There needs to be cloud-wide tracking of resources so that the datacenter operators are aware of cloud capacity and when they need to acquire more hardware.
- Isolating and retiring equipment: All systems should have a lifetime and a health status associated with them (due to length of maintenance contract, expected lifetime of component parts, and/or length of lease). When that lifetime is exceeded or when the part is failing or has failed, it is automatically isolated from the cloud and flagged for replacement. The cloud should be aware of datacenter layout so that administrators never have a problem locating the equipment at replacement time.
- Managing planned and unplanned downtimes within the datacenter: If you generally deploy cloud-ready applications (see the previous article in this series), most datacenter events should be transparent to the end users of the services. Scale-out applications can be scaled up to repopulate lost instances, and chunks of huge compute jobs can be automatically respun. However, downtimes associated with persistent data need to be managed, as do compute or network downtimes that affect any of your more monolithic applications. The datacenter should recover whatever it can on its own, and for what it can’t, end users need to be alerted to upcoming planned downtime or recent unplanned downtime and made capable of adjusting their workload deployments accordingly.
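Pulling the placement requirements together, here is a minimal sketch of constraint-aware placement. The node attributes, the 80% headroom threshold, and the scoring are illustrative assumptions; the point is that hard constraints, anti-affinity for redundant workloads, and sensible packing are all applied automatically, with no administrator in the loop.

```python
# Illustrative constraint-aware placement: hard constraints (e.g. GPU, PCI
# compliance), anti-affinity between replicas, and headroom-aware packing.
def place(workload, nodes, existing):
    candidates = []
    for node in nodes:
        # hard constraints: required capabilities must be present on the node
        if not set(workload["requires"]).issubset(node["capabilities"]):
            continue
        # anti-affinity: redundant replicas of the same service go on different nodes
        if any(w["service"] == workload["service"] for w in existing.get(node["name"], [])):
            continue
        # headroom: leave room so reactive firefighting is not needed at scale
        if node["used_vcpus"] + workload["vcpus"] > 0.8 * node["total_vcpus"]:
            continue
        candidates.append(node)
    # pack tightly among the remaining candidates to avoid wasting hardware and power
    return max(candidates, key=lambda n: n["used_vcpus"], default=None)

nodes = [
    {"name": "n1", "capabilities": {"gpu"},           "total_vcpus": 64, "used_vcpus": 40},
    {"name": "n2", "capabilities": {"pci-compliant"}, "total_vcpus": 64, "used_vcpus": 10},
]
print(place({"service": "render", "requires": ["gpu"], "vcpus": 8}, nodes, {}))
```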
Federation Across Clouds
To provide a single interface for all end users, a cloud must hide distinctions between different datacenters, geographies, and providers. There should be one end-user experience for deploying anywhere in the cloud – public, private, or hybrid. While end users may, for compliance reasons, need to dictate placement policy in terms of location or provider, that should be a policy component of their work within a single experience – it is not acceptable for there to be a private cloud experience and a completely separate public cloud experience.
It is critical that the choice of public clouds to federate to is not forced by the cloud software provider; that would be too limiting. If the customer cannot pick the best possible provider at the right cost, cloud deployment will become unnecessarily expensive.
Federation features need to be as follows (a minimal sketch follows the list):
- From a single user interface, resources can be deployed and managed across multiple sites and providers.
- Each site and provider can be allowed or denied on a per customer or per user/group basis.
- When accessing public clouds that are tied to shared credential and billing information, such as private keys and credit card numbers, that information must be hidden from the end user. They don’t need to know it, and they must not be able to take it with them when they change jobs or roles.
- Identity is preserved across sites and providers so that:
- Users are permitted access to resources at each site according to their specific permissions.
- Bills from public cloud providers can be itemized by department, user, and project.
- There needs to be a single audit trail showing who did what activity in what clouds.
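Here is a minimal sketch of a federation layer that honors these rules, with invented site names, policy tables, and driver functions: the end user sees one deploy workflow, per-tenant site policy decides where they may deploy, provider credentials stay inside the federation layer, and every action lands in a single audit trail.

```python
# Illustrative federation layer: one workflow across sites, per-tenant site
# policy, hidden provider credentials, and a single audit trail.
PROVIDER_CREDENTIALS = {          # held by the federation layer, never shown to end users
    "public-east": {"api_key": "…"},
    "private-dc1": {"cert": "…"},
}

SITE_POLICY = {                   # which tenants may use which sites/providers
    "dept42": {"private-dc1", "public-east"},
    "dept7":  {"private-dc1"},
}

def deploy(user, tenant, site, request):
    if site not in SITE_POLICY.get(tenant, set()):
        raise PermissionError(f"{tenant} is not allowed to deploy to {site}")
    creds = PROVIDER_CREDENTIALS[site]            # injected here, invisible to the user
    record_audit(user=user, tenant=tenant, site=site, action="deploy", request=request)
    return submit_to_site(site, creds, request)   # same workflow regardless of provider

def record_audit(**event):
    print("AUDIT", event)                         # one audit trail across all clouds

def submit_to_site(site, creds, request):
    return {"site": site, "status": "submitted"}  # stand-in for a provider driver

print(deploy("alice", "dept42", "public-east", {"image": "web-frontend-v7", "count": 2}))
```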
When a cloud management system follows these rules, multiple sites within an organization can be managed as one. Furthermore, hybrid cloud becomes a reality with public cloud becoming a viable part of the IT toolkit, not a bootleg process hidden from the visibility of those most trained and responsible for keeping services safe and secure.
Enterprise IT departments and service providers have no shortage of choices today for cloud infrastructure software. But for an organization planning a significant deployment, the list of requirements above can help separate the serious contenders from less mature or less well-thought-through products.
When writing RFPs, don’t just fall back on the same enterprise management or virtualization platform requirements, or you will wind up with the same old infrastructure. Start with your traditional requirements, but make sure to add critical new requirements around what is really needed to take the next step and have a real cloud today!