AZURE

Azure datacenters are built from racks of modular blade servers; each blade serves in either a compute or a storage role, with 40-50 blades per rack. Each rack has a top-of-rack switch that connects to aggregation switches to provide connectivity between racks. Some servers run software called the Fabric Controller.

Fabric Controller - responsible for provisioning VMs, healing failed VMs, rehydrating VMs, and managing the health and lifecycle of VMs.

Stamp/Cluster - roughly 20 racks are grouped together to form a stamp, also termed a cluster. All hardware in a stamp uses the same processor generation. Resources bound to an affinity group are guaranteed to be placed in the same stamp.

To learn more about Microsoft's infrastructure services, read the book "Mastering Microsoft Azure Infrastructure Services" by John Savill.

Physical Security: Datacenters are physically secured, and people entering a datacenter undergo extensive background checks.

Regional Availability and High Availability:

A geographic location often contains multiple regions, and a region can contain multiple datacenters located in close proximity to each other.

Not all services are available in all regions. Some regions have restrictions such as the billing address of the customer (e.g., the Australian regions can only be used by customers with a billing address in Australia or New Zealand).

Regional Datacenters - datacenters are divided into clusters/stamps of roughly 20 racks each. Each rack functions as a fault domain.

High Availability - When planning for high availability, we should specify an availability set so that the servers are placed in different fault domains. If no availability set is specified, the instances may end up in the same rack (fault domain), and a single failure could take our service down. An availability set should be defined for each tier of the application, as in the sketch below.
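
A minimal sketch of creating one availability set per application tier with the Azure Python SDK (azure-mgmt-compute). The subscription ID, resource group, region, and tier names are hypothetical placeholders, and the fault/update domain counts are illustrative, not prescribed values:

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import AvailabilitySet, Sku

subscription_id = "<subscription-id>"        # placeholder
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# One availability set per application tier, so that each tier's instances
# are spread across separate fault domains (racks) and update domains.
for tier in ("web", "app", "db"):            # hypothetical tiers
    client.availability_sets.create_or_update(
        resource_group_name="demo-rg",       # hypothetical resource group
        availability_set_name=f"{tier}-avset",
        parameters=AvailabilitySet(
            location="eastus",               # hypothetical region
            platform_fault_domain_count=2,   # spread VMs over 2 racks
            platform_update_domain_count=5,
            sku=Sku(name="Aligned"),         # required for managed disks
        ),
    )

Note that a VM joins an availability set only at creation time (for example, via az vm create --availability-set web-avset); it cannot be moved into one afterwards.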