Datacenter traffic has changed its characteristics over the past years. Formerly, it was a good assumption that most datacenter traffic was exchanged with the rest of the internet. This has shifted: today most of the traffic in a cloud datacenter flows between the hosts in the dc itself, and this share is growing at a fast pace. The reason is that applications are spread across many hosts to scale to the required number of users; a single request to a frontend like Facebook hits many servers in the dc and forces them to exchange data in order to render a response.
The traditional way of designing a network was a fat tree with three layers (access, aggregation and core). These layers also provided different functionality, something the internal dc network does not need. The fat tree in conjunction with spanning tree also made sure no loops were present, a basic requirement for Ethernet L2 networks. A typical fat tree network is shown in the graph below. We can see that traffic to 50% of the endpoints needs to cross the core, and traffic to 75% of the endpoints needs to cross the aggregation layer.
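To make the 50% / 75% figures concrete, here is a minimal Python sketch that counts, for one source host, how many destination endpoints force traffic up to the aggregation or core layer. The dimensions (two aggregation blocks, two access switches per block, a hypothetical host count per access switch) are assumptions chosen to match the figures above, not measurements from a specific deployment.

```python
# Minimal sketch: fraction of destination endpoints whose traffic must cross
# the aggregation or core layer in a three-tier fat tree.
# Assumed (hypothetical) dimensions: 2 aggregation blocks, 2 access switches
# per block, 20 hosts per access switch.

def crossing_fractions(agg_blocks=2, access_per_block=2, hosts_per_access=20):
    total_hosts = agg_blocks * access_per_block * hosts_per_access
    # Destinations reachable without leaving the local access switch.
    same_access = hosts_per_access
    # Destinations inside the same aggregation block (all its access switches).
    same_block = access_per_block * hosts_per_access
    # Everything outside the block must cross the core.
    cross_core = total_hosts - same_block
    # Everything outside the local access switch at least crosses aggregation.
    cross_agg = total_hosts - same_access
    return cross_agg / total_hosts, cross_core / total_hosts

agg, core = crossing_fractions()
print(f"crosses aggregation: {agg:.0%}, crosses core: {core:.0%}")
# -> crosses aggregation: 75%, crosses core: 50%
```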
In contrast to the fat tree approach, a fabric approach has only two types of devices, leaf and spine switches. Each server is connected to multiple leaf switches, one for each spine plane. Each leaf switch is connected to all switches in one spine plane. This architecture creates many different, equal-cost paths between servers. If you need to scale this design out, you can add new pods for more compute / servers, or additional spine planes if you need more intra dc bandwidth. In this model, outbound connectivity is attached the same way a pod is: the outbound systems connect to each spine plane and provide the gateways needed for external connections. As this network is by design not loop free, we need to run an L3 routing protocol to distribute routes, and enable equal-cost multipath routing (ECMP) to utilize all the available links in a consistent manner. Alternatives to this L3 based solution are Transparent Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB) as L2 solutions.
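The following Python sketch illustrates the path diversity this creates and how per-flow ECMP keeps a flow on a single path. The plane and spine counts are hypothetical, and the 5-tuple hashing is a generic stand-in for what a switch would do in hardware, not a specific vendor implementation.

```python
import hashlib

# Hypothetical fabric dimensions: each server attaches to one leaf per spine
# plane, and each leaf connects to every spine switch in its plane.
SPINE_PLANES = 4
SPINES_PER_PLANE = 8

def equal_cost_paths():
    """Every (plane, spine) pair is one distinct leaf -> spine -> leaf path
    between two servers in different pods."""
    return [(plane, spine)
            for plane in range(SPINE_PLANES)
            for spine in range(SPINES_PER_PLANE)]

def pick_path(flow_5tuple, paths):
    """Per-flow ECMP: hash the 5-tuple so all packets of one flow take the
    same path, while different flows spread over all available links."""
    digest = hashlib.sha256(repr(flow_5tuple).encode()).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

paths = equal_cost_paths()
print(len(paths))  # 32 equal-cost paths with 4 planes of 8 spines each
print(pick_path(("10.0.0.1", "10.0.1.9", 6, 49152, 443), paths))
```

Because the path choice is a deterministic function of the flow's 5-tuple, packets of a flow stay in order on one path, while the aggregate of many flows uses all links in the fabric.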
A diagram of a fabric network is shown below: