Doron Grinstein

What is Cloud Deployment? | Complete Guide

Deploying to the cloud would be simple if there weren’t so many clouds and so many options.

The application you’ve been working on in your local development environment is nearing completion – whether it consists of a virtual machine image, a workload consisting of multiple container images, or merely a collection of custom code and libraries in a folder – and now you’re ready to deploy.

As tempting as it might be to deploy your code to your Ubuntu server running behind your desk, you’re probably looking for a more resilient and scalable solution: to deploy to the cloud.

Problem solved! But the decision to deploy to “the cloud” is like the decision to move “to Europe”: fine, but are you talking Tromsø, Norway or downtown Madrid? There are a lot of choices.

In this article, I’ll lay out the options for cloud infrastructure, cloud providers, and cloud deployment models. There’s a lot of technical detail behind each topic, but in order to create a sound decision-making structure, I’ll stick to the foundational elements of these terms.

First off, let’s be clear about what we’re talking about. By “cloud deployment” I mean the process of running your application’s code on infrastructure that functions as a public or private cloud. Of course, servers of various sorts and sizes undergird all cloud infrastructure. Still, the chief characteristic of cloud infrastructure is that the server – the hardware device and the physical infrastructure supporting it, cooling it, and connecting it – has been abstracted away by some interface, e.g. an API, allowing you to provision your app without the limitations of a particular machine or machines.
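
To make that abstraction concrete, here is a minimal sketch of what “provisioning through an interface” looks like: instead of targeting a specific machine, you describe the resources your app needs and let the provider decide where to run it. The request schema below is entirely hypothetical – real providers each define their own (instance types, machine families, and so on).

```python
import json

def build_provision_request(cpus, memory_gb, region, image):
    """Describe the resources an app needs, not the machine that provides them.

    Every field name here is hypothetical; real cloud APIs each define
    their own request schema.
    """
    return {
        "compute": {"cpus": cpus, "memory_gb": memory_gb},
        "placement": {"region": region},
        "image": image,
    }

# The request is declarative: the provider decides which physical
# host (or hosts) actually ends up running the workload.
request = build_provision_request(cpus=2, memory_gb=4,
                                  region="us-east", image="my-app:1.0")
print(json.dumps(request, indent=2))
```

The point of the sketch is the shape of the interaction, not the schema: you state requirements, and the hardware behind them stays invisible.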

Benefits of Cloud Deployment

There is still the option to deploy to your own server at your location, or to a set of servers that you install, configure, and network in a data center colocation. In that case, you’re responsible for the physical tasks of getting hardware infrastructure operational: 

  • Mounting servers into racks
  • Installing operating systems and supporting applications and files 
  • Establishing secure remote access for administration 
  • Ensuring sufficient CPU, RAM, and hard drive space
  • Connecting to the internet
  • Opening internet-facing ports like 80 and 443
  • Configuring a DMZ
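
As a small illustration of the “opening internet-facing ports” step, here is a sketch of the kind of reachability check you’d run after configuring a firewall. To keep it self-contained, it probes a local listener rather than a real server.

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Sanity-check locally: listen on an ephemeral port, then probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
_, open_port = listener.getsockname()
result = port_is_open("127.0.0.1", open_port)
print(result)  # True: something is listening on that port
listener.close()
```

In a real deployment you’d point the same check at your server’s public address and ports 80 and 443 to confirm the firewall and DMZ rules behave as intended.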

You’re also responsible for most of the security. Physical security, however, is usually covered by the data center operator and might be augmented by a cage, surrounding the set of servers you own, that only you can access. This model is capital-intensive – you must purchase all the equipment upfront – and carries the ongoing expense of power, cooling, and networking.

You might also choose to go with a data center that does some of the hardware heavy lifting for you, giving you an API or a UI that allows you to choose the number of servers and various system resource parameters virtually. In that case, you’ll be left with higher-level server administration like setting up SSH, installing software, and updating and patching the operating system.

In certain instances, these non-cloud deployment models may offer performance, control, and cost benefits over their cloud alternatives. However, in most instances, the benefits of the additional layers of abstraction that the cloud offers are significant.

Here are just a few of the benefits of deploying to the cloud:

  • Cost Effectiveness – One of the key benefits of cloud computing architecture and deployment is that it reduces the need for in-house resources and capital expenses. Rather than purchasing hardware for computing, data storage, and management, these costs can be left to the vendor, and your business can focus on your customers. In addition, having hardware off-site can free up working space and reduce power costs. Not to mention the fact that you’d likely need specialized in-house engineers or contractors to handle setting up hardware and performing server administration. In a few edge cases, a colocation may be more cost-effective, and you may have business reasons for capitalizing versus amortizing computing costs. Still, in most cases, cloud deployment makes better financial sense.
  • Scalability – The computing resources of the cloud are nearly infinite. This is handy even if your app doesn’t need unlimited cloud resources because it allows you to provision elastically rather than having to predict and prepare for tomorrow’s load. This is particularly true if your application (like most applications) experiences non-linear growth in usage and peak load. If you’ve been working on your app in the garage with beta customers and are finally ready to buy a Super Bowl ad, the cloud allows you to provision your application with the resources it needs on the fly. Then, if the usage drops off after the initial rush, your cloud infrastructure can scale down to the new equilibrium. Trying to accomplish the same before the cloud is the source of scores of “the server crashed” stories. In a colocation, it might mean buying up another floor of data center space and sending a team of network engineers to install and wire racks, servers, and ethernet – which, by the time they’re done, might no longer be needed.
  • Customization – The cloud is efficient and elastic, but it’s also almost infinitely mutable. You can adapt your cloud infrastructure to meet nearly any set of requirements, no matter how specialized. This feature is important because the technology landscape changes quickly, and if you want your app to keep up with it, the cloud you’re deploying to must be able to change quickly, too. This is true not only in scale (how much computing capacity is at your app’s disposal) but also in regard to the backing services (like database, queuing, caching, and object storage) with which you equip your app.
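
The elasticity described above ultimately reduces to a feedback loop: measure load, then adjust the number of running instances within configured bounds. The sketch below shows the proportional rule used by systems like Kubernetes’ Horizontal Pod Autoscaler; the target and limits here are illustrative, not prescriptive.

```python
import math

def desired_replicas(current, cpu_utilization, target=0.6,
                     min_replicas=2, max_replicas=100):
    """Proportional autoscaling rule: scale the replica count so that
    average CPU utilization moves toward the target fraction.
    Thresholds and bounds are illustrative.
    """
    if cpu_utilization <= 0:
        return min_replicas
    desired = math.ceil(current * cpu_utilization / target)
    return max(min_replicas, min(max_replicas, desired))

# A traffic spike pushes utilization to 95% across 10 replicas...
print(desired_replicas(10, 0.95))   # scales up to 16
# ...then traffic settles to 15% utilization.
print(desired_replicas(16, 0.15))   # scales back down to 4
```

The cloud’s advantage is that the “adjust” step is an API call rather than a purchase order for new racks.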

Which Level of Abstraction? PaaS, IaaS, or FaaS?

The biggest cloud providers (AWS, Google Cloud, and Azure) offer many different models to deploy your application. Besides the big clouds, there are now dozens of smaller clouds and cloud vendors offering various deployment alternatives.

It’s sometimes difficult to know how to categorize all of these cloud deployment models, but here’s a general grouping:

  • Infrastructure-as-a-Service (IaaS) – The three hyperscalers (Amazon AWS, Google Cloud Platform, and Microsoft Azure) are the largest players in IaaS. AWS is by far the largest of these providers, but all three are enormous and provide a one-stop shop for all your cloud infrastructure needs, with computing regions in every corner of the world. Likewise, all three offer a complete range of services like RDS (Relational Database Service), AD (Active Directory), BigQuery, and many others. Nevertheless, there are important differences between the big three public clouds, each involving tradeoffs, and each cloud constantly adds services and capabilities to gain an edge over the other two. They are all, however, extraordinarily complex, often requiring hours of work to accomplish simple tasks like deploying your app. Deploying to AWS, for instance, might mean choosing among five or six mutually incompatible paths, and each path might require spinning up several auxiliary services – load balancers, NAT gateways, VPCs, and block storage devices – before deploying the first public version of your app.
  • Platform-as-a-Service (PaaS) – Application development platforms like Google App Engine, Heroku, or DigitalOcean package up everything you need to run your app so that you don’t have to assemble it yourself using IaaS. The platform infrastructure manages storage, memory, and processing and gives you pre-configured services that your app can plug into for caching, database, and other common requirements. Usually, these platforms are built on AWS under the covers. They offer only a subset of the hyperscaler’s capabilities. Still, for many companies, this is a price worth paying to avoid the expensive, time-consuming task of building a custom platform out of the raw materials of the big clouds. It’s common for startups to deploy the first versions of their application using a platform and then, as their user base and requirements become more sophisticated, switch to IaaS after a year or two. This transition can take time, but it keeps small teams from getting bogged down in infrastructure before their product has seen the light of day.
  • Function as a Service (FaaS) – Also known as “serverless,” FaaS refers to a collection of services from AWS, Google, and Azure that allow you to run your code without provisioning any infrastructure. FaaS handles scaling beneath the surface, ensuring that your code has the resources it needs when it runs. Serverless is a good solution for many companies, but it’s not a panacea. It is often confined to a single region, limits your choice of programming languages and language versions, and makes testing more difficult. In addition, its interface is proprietary to each hyperscaler, and it often poses issues with caching, connection pooling, and other common requirements.
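
To make the FaaS model concrete, here is what a minimal serverless function looks like in Python, following AWS Lambda’s (event, context) handler convention with an API Gateway-style event. The event payload below is a simplified stand-in; in production, the platform constructs the event and invokes the handler for you.

```python
import json

def handler(event, context):
    """Lambda-style entry point: the platform provisions compute, runs
    this function per request, and scales it automatically. The event
    shape mimics an API Gateway proxy request (simplified)."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, we can invoke the handler directly with a fake event.
response = handler({"queryStringParameters": {"name": "cloud"}}, None)
print(response["statusCode"], response["body"])
```

Notice what is absent: no server, no port binding, no process management. That is exactly the tradeoff described above – less to operate, but on the platform’s terms.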

The categories above are exemplary but not exhaustive. Innovation happens quickly in every corner of the cloud. Nearly every time a company comes up with a new technology that makes deployment simpler, another company invents a new feature or capability that is compelling but more complicated.

Different Cloud Computing Deployment Models

As I’ve alluded to earlier, the cloud is a computing model in which computing resources and cloud services can be provisioned efficiently, elastically, and flexibly to meet the particular needs of your application. This model doesn’t necessarily live in one location or another. It has been implemented by the largest cloud service providers – AWS, Google Cloud (GCP), and Azure – as well as by smaller cloud infrastructure providers and cloud platforms, and within many organizations inside private, wholly-owned data centers. In fact, in its purest conceptual form, the end goal of the cloud is to make lower-level infrastructure decisions irrelevant.

That’s the concept. Getting there is not always so simple; along the way, you’ll need to evaluate whether it makes the most sense for your application to deploy to a public cloud, a private cloud, or a hybrid cloud that combines the two.

Private vs. Public vs. Hybrid Cloud Computing

There are several different models of cloud computing that are important to understand. Let’s look at the three deployment models one by one.


Public Cloud

Most of the time, when we hear the term “cloud,” it refers to a public cloud – usually AWS, Google Cloud, or perhaps Azure, Linode, or even a PaaS like DigitalOcean.

The key attribute of a public cloud deployment model is that it’s a shared infrastructure made available to any company willing to pay for the use of a portion of it. The public cloud provider constructs and manages the data center locations and installs and maintains the hardware, cooling systems, security, networking, and more.

The public cloud is among the most popular choices because it alleviates the pressure to maintain, repair, and expand hardware resources. Plus, this model allows for quick scalability and reliability. Third-party service providers often have a large network of servers available publicly, so your business can access storage and computing resources as needed. Unless your application’s resource needs are extremely large, it’s unlikely that you’d ever stretch the underlying hardware infrastructure of a public cloud. This makes scaling up and down much more manageable.


Private Cloud

The private cloud computing model is designed to be accessible by only one business. Sometimes, this model is called an internal cloud or corporate cloud. The infrastructure can be hosted on the company’s property or by a third party at a colocation.

It might be more accurate to say that private clouds are cloud-like. Usually, they lack the enormous scale and options and capabilities of their larger public contemporaries, but, like the public clouds, they have been constructed in such a way that their underlying hardware infrastructure is abstracted. Someone, of course, has to maintain and update the hardware. Still, software engineers deploying apps to a private cloud can provision resources to run their code without considering the hardware’s characteristics and limitations.

When a private cloud needs to be scaled up or down, it requires installing and configuring new hardware – new servers, networking equipment, cooling systems, racks, and more.

The reason most companies choose to build private clouds comes down to security. By building private hardware infrastructure not shared with other companies, you can control more closely how your application’s data is processed and stored. For some companies, keeping data and data processing inside a private cloud is required to maintain compliance with their regulatory requirements. For organizations like cruise lines and defense contractors, the applications they deploy may not be able to access the public internet, making a private cloud model the only option.


Hybrid Cloud

The hybrid cloud model is essentially a combination of the public and private cloud deployment models. Businesses using the hybrid model combine the two and customize the mix to fit their needs. Some parts of your application may benefit from the elastic scalability of a public cloud, while other parts may require the security and in-house control of a private cloud. A hybrid model allows you to create the combination that works best for you, and it enables you to use the rich and expanding ecosystem of cloud services and capabilities in the public cloud wherever your requirements allow.

Sometimes a company’s accounting preferences play into the decision to build a hybrid cloud. It may make sense to capitalize some cloud expenses by building a private cloud while amortizing other costs through a public provider.

Challenges of Deploying to the Cloud

In many ways, the main challenge of deploying to the cloud is determining which path to take given the multitude of options, especially because many of these choices are consequential and not easily reversed.

Understanding the vertical and horizontal cloud deployment options can help triangulate the decision-making process to find the right model for your organization.

Deploying to a Virtual Cloud

The complexity of the cloud and the wide variety of deployment options was part of the reason we developed Control Plane. Control Plane is a virtual cloud that enables cloud architects to combine the services, regions, and computing power of Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and any other public or private cloud to give developers a flexible, secure global environment. You don’t have to combine the whole cloud with Control Plane (although you could); most of our customers simply mix and match the clouds and cloud services they need for their application.

The Control Plane platform allows microservices to consume any combination of cloud services without requiring embedded credentials using a technology called Universal Cloud Identity™. Plus, because the platform gives you precise control over scaling computing resources to your app’s requirements, most companies can save a substantial amount compared to using individual clouds directly. Control Plane enables you to deploy your application to the cloud without having to navigate the matrix of options and tradeoffs inherent in other cloud deployments.

Frequently Asked Questions

What are the three types of cloud deployment?

The three types of cloud deployment are software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS). Depending on a business’s goals and objectives, these models can be customized to fit its needs.

What is AWS cloud deployment?

Amazon Web Services (AWS) offers many different options for cloud deployment, ranging from EC2, to Elastic Beanstalk, to Elastic Kubernetes Service (EKS), to Lambda – and many options in between. Each of these deployment methods offers advantages and disadvantages. Elastic Beanstalk, for instance, makes it easier to deploy an application, but doesn’t give developers the same flexibility and power as EKS.

Why should you compute on a cloud?

Many organizations favor cloud computing because it does not require them to buy and maintain hardware and infrastructure. This model saves resources and makes scaling faster and easier.