In my previous posts, I wrote about Public cloud infrastructure and Virtual private clouds. Compute, storage, and network are the three pillars of any infrastructure solution. Let's look at the fundamentals of compute (a.k.a. VMs).
Before getting into cloud compute instances like EC2, it is helpful to understand the terminology used to describe compute instances. As with any story, it is always good to start from the beginning.
Back to basics
If we roll back two decades, dedicated servers were set up for a specific purpose. These servers had resources like CPU, RAM, and hard disk. If more CPU power was needed, the only options were to upgrade (if the server motherboard supported the upgrade) or to scrap the server and buy a new one. Similarly, the number of memory slots determined the maximum RAM that could be installed. Hard disk drives were either IDE or SATA, and a server's storage capacity depended on the number of slots on the motherboard/RAID controller and the largest HDD available.
Ironically, server resources were underutilized because IT teams "played safe" and purchased the biggest, meanest server they could afford. Server resource utilization often hovered around 30-40%, yet the operational cost kept increasing: even a server running at 30% load consumed close to its full power draw.
A data center in those days looked like the image below.
Age of Virtualization
VMware started the virtualization revolution with VMware Workstation back in 1999. VMware Workstation allowed users to run multiple operating systems on the same physical hardware. A few years later, VMware introduced ESX Server (the predecessor of today's ESXi), the first software solution to virtualize server hardware resources and enable optimal resource utilization. VMware's company and product timeline is here:
In addition to VMware, other virtualization/hypervisor solutions, including open-source ones, exist today. My personal favorite on Linux is QEMU (typically paired with KVM):
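For illustration, here is a minimal sketch of launching a guest VM with QEMU/KVM from Python; the disk image path, vCPU count, and memory size are placeholders you would replace with your own:

```python
import subprocess

# Minimal QEMU/KVM invocation (paths and sizes are placeholders).
# -enable-kvm uses hardware virtualization, -smp sets the guest vCPU count,
# -m sets guest RAM in MiB, -drive attaches the guest disk image.
qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-smp", "2",                                     # 2 vCPUs for the guest
    "-m", "2048",                                    # 2 GiB of guest RAM
    "-drive", "file=guest-disk.qcow2,format=qcow2",  # placeholder disk image
]

subprocess.run(qemu_cmd, check=True)
```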
A virtualized data center looked like the image below.
Logically, the hypervisor "slices" the physical server's CPU, memory, and storage and presents them to guest virtual machines. The guest VMs "think" they are running on actual hardware.
While all this worked, early CPUs were not built for virtualization, and hypervisor software had to work around this limitation. Things have changed a lot since then: most CPUs shipping today (even lower-end desktop ones) include hardware extensions that assist virtualization, such as Intel VT-x and AMD-V.
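On Linux, you can quickly check whether your CPU advertises these extensions (Intel VT-x shows up as the vmx flag in /proc/cpuinfo, AMD-V as svm). A small sketch:

```python
# Check /proc/cpuinfo for hardware virtualization support on Linux.
# Intel VT-x is reported as the "vmx" flag, AMD-V as "svm".
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x supported")
elif "svm" in flags:
    print("AMD-V supported")
else:
    print("No hardware virtualization extensions found")
```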
To really understand the different flavors of compute instances offered by AWS and other cloud providers, a basic understanding of the CPU features that enable virtualization is essential, so here goes.
CPU Socket
A CPU socket is the set of "pins" on the server's motherboard that holds the processor in place.
A motherboard can have 1 or more sockets of the same type.
CPU Cores and clock speed
Each motherboard socket takes a single CPU. However, the CPU itself can be made up of multiple "cores", with each core capable of independent processing. The CPU clock speed determines how fast these cores execute instructions: a higher clock speed generally means a faster CPU.
image source: https://en.wikipedia.org/wiki/Multi-core_processor
Each CPU core can handle multiple "threads".
Threads
At a high level, an operating system runs multiple processes, each occupying its own memory space. A process is in turn made up of one or more threads; a thread is a path of execution within a process.
A multithreaded CPU can handle multiple threads at the same time. On multi-core processors, each core can execute multiple threads concurrently.
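As a quick illustration, the sketch below starts two threads inside a single Python process; both threads share the process's memory space:

```python
import threading

def worker(name: str) -> None:
    # Each thread is an independent path of execution within the same process.
    print(f"{name} running inside the same process")

t1 = threading.Thread(target=worker, args=("thread-1",))
t2 = threading.Thread(target=worker, args=("thread-2",))
t1.start()
t2.start()
t1.join()
t2.join()
```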
Hyperthreading
Hyperthreading is Intel's implementation of simultaneous multithreading. It allows a single physical CPU core to appear as two logical cores, each of which maintains its own state and can therefore execute a thread independently.
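You can see the effect of hyperthreading by comparing physical and logical core counts. A small sketch using the third-party psutil package (assuming it is installed):

```python
import psutil

physical = psutil.cpu_count(logical=False)  # physical cores
logical = psutil.cpu_count(logical=True)    # logical cores (hardware threads)

print(f"Physical cores: {physical}")
print(f"Logical cores:  {logical}")
if physical and logical and logical > physical:
    print("Hyperthreading/SMT appears to be enabled")
```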
vCPUs
As discussed before, the hypervisor carves out a portion of the physical CPU and presents it to the guest operating system; this virtual CPU allocation is a vCPU. It is possible to allocate multiple vCPUs per VM so that the VM can take advantage of multithreaded applications.
Each guest virtual machine requires at least 1 vCPU.
This means the number of vCPUs determines how many VMs can run on a physical server.
So, how is the maximum number of vCPUs calculated?
A good approximation of the number of vCPUs possible for given server hardware (assuming one CPU per socket) is:
number of sockets on motherboard * number of cores per CPU * number of threads per core
Assuming:
2 CPU sockets
8 cores per CPU
2 threads per core (hyperthreading)
Total number of vCPUs = 2 * 8 * 2 = 32 vCPUs
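The same calculation in a few lines of Python, using the numbers from the example above:

```python
def max_vcpus(sockets: int, cores_per_cpu: int, threads_per_core: int) -> int:
    # Theoretical upper bound on vCPUs for a server, assuming one CPU per socket.
    return sockets * cores_per_cpu * threads_per_core

print(max_vcpus(sockets=2, cores_per_cpu=8, threads_per_core=2))  # 32
```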
While this is the theoretical maximum, the practical number of vCPUs depends on a variety of other factors. The following KB article from Intel is a good starting point for understanding practical limits:
Specifications for most server-class Intel CPUs can be found here:
Cloud compute instances (AWS EC2)
Cloud providers run hypervisor software (AWS, for example, uses Xen and its KVM-based Nitro hypervisor), which enables them to share the same server hardware resources among multiple customers/users. They allow users to deploy compute instances (VMs) with different capabilities. For compute instances requiring higher performance, vCPU count and memory are the two deciding factors: the higher the vCPU count (on a processor with a higher clock speed), the faster the instance, and the more users it can serve.
AWS offers EC2 instances optimized for compute power; these instance types generally start with the prefix "c". The following URL lists the various compute-optimized instances:
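Programmatically, the AWS SDK for Python (boto3) can look up the vCPU and memory specs of a given instance type; the instance type name below is just an example:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up the hardware specs of a compute-optimized instance type.
# "c5.2xlarge" is only an example; any valid instance type name works.
resp = ec2.describe_instance_types(InstanceTypes=["c5.2xlarge"])

for itype in resp["InstanceTypes"]:
    vcpus = itype["VCpuInfo"]["DefaultVCpus"]
    threads = itype["VCpuInfo"]["DefaultThreadsPerCore"]
    mem_mib = itype["MemoryInfo"]["SizeInMiB"]
    print(f'{itype["InstanceType"]}: {vcpus} vCPUs '
          f'({threads} threads per core), {mem_mib} MiB RAM')
```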
The choice of instance type and flavor will depend on many factors, like application performance requirements and business needs. As with anything, budget plays a very important role too. AWS's default pricing model is "on-demand" billing, which means you are billed only for what you use. The following tool will help you estimate the cost of running a particular instance:
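Independent of the calculator, a rough back-of-the-envelope estimate is simply hourly rate times hours run. In the sketch below, the hourly rate is a placeholder, not a real price:

```python
# Back-of-the-envelope on-demand cost estimate.
# hourly_rate is a placeholder; look up the real price for your
# instance type and region in the AWS pricing calculator.
hourly_rate = 0.34          # USD per hour (placeholder)
hours_per_month = 24 * 30   # running continuously for a month

monthly_cost = hourly_rate * hours_per_month
print(f"Estimated monthly cost: ${monthly_cost:.2f}")
```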
I do hope this article helped you understand a few nuances of choosing a compute instance in the cloud. Thanks for your time, likes, and feedback. Have a great weekend and stay safe.