Architecture Series: 10 Gig Pod

In an effort to transition to the Infrastructure side of the house, I decided to hit the whiteboard a bit and explain the architecture of the current environment I am in. This is part of my Design Theory study (VCAP-Design) and is as much for reader benefit as it is for my own learning. I hope this brings forth questions and discussions. As a preemptive note: I am not the Principal Architect of this specific design. I merely inherited it, and I am learning it while taking over. This is not a post that I am going to whittle down to perfection as if I were submitting it for VCDX. I will try my best to keep it clear, concise, and in a proper order from the top down.

So let’s get this party started. The environment that I support now has several vCenter servers, spread across several geographic locations. We do have one “Primary” location, which has 2 different buildings. The “main” building houses our primary vCenter. This vCenter houses a couple of legacy 1-Gig clusters and our primary 10-Gig environment.

Our 10-Gig environment is currently split into 2 pods, built to be scaled out as needed. As a vEUC guy, I equate this design to the Horizon View “pod and block” style of architecture: scalable pods that can be built out as demand grows. It’s a popular concept these days; maybe not in this exact form, but scalability is important.

Our pods are built in sets of 3 racks. Unlike our 1-Gig environment, where we run all cabling to the distribution switches, our 10-Gig pods utilize 2 x Force10 Z9000s in a top-of-rack (ToR) setup for each 3-rack pod. Each ToR switch and server in the pod is dual-homed with A/B power to separate PDUs. The building has multiple street-power providers and is rated to withstand an F5 tornado. Here is a visual representation of the pod:

[IMG_0993: visual representation of the pod]
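If it helps to see the building block written down, here is a minimal sketch of one pod as plain Python data. The rack, host, and PDU names are my own placeholders, not anything from our actual configuration; only the counts (3 racks, 2 ToR switches, A/B power on everything) come from the layout above.

```python
# Hypothetical sketch of one 3-rack pod; all names are illustrative placeholders.
POD = {
    "racks": ["rack-1", "rack-2", "rack-3"],
    "tor_switches": {
        "z9000-a": {"rack": "rack-2", "power": ["pdu-a", "pdu-b"]},
        "z9000-b": {"rack": "rack-2", "power": ["pdu-a", "pdu-b"]},
    },
    "hosts": {
        # A couple of illustrative hosts; every host is likewise dual-homed.
        "esx-01": {"rack": "rack-1", "power": ["pdu-a", "pdu-b"]},
        "esx-02": {"rack": "rack-3", "power": ["pdu-a", "pdu-b"]},
    },
}

def validate_pod(pod: dict) -> None:
    """Check the basic pod invariants described in the post."""
    assert len(pod["racks"]) == 3, "a pod is built in sets of 3 racks"
    assert len(pod["tor_switches"]) == 2, "each pod has an A and a B ToR switch"
    for name, device in {**pod["tor_switches"], **pod["hosts"]}.items():
        assert len(set(device["power"])) == 2, f"{name} must be dual-homed to separate PDUs"

validate_pod(POD)
```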

The switches are set up in an A/B configuration, cross-connected to each other. They reside in the center rack of each pod, servicing the cabinet they live in as well as the neighbors to the left and right:

[IMG_0994: switch placement in the center rack of the pod]

The switch ports are all configured as trunks; we handle all of our tagging at the vSwitch. Each ESX host (R710s or R720s) houses 2 x 8Gb HBAs for storage and 2 x dual-port 10-Gig NICs for networking. For network connectivity we use 1:4 fan-out 40-Gig breakout cables. Each 40-Gig cable has 4 ends (A, B, C, D), and each cable services the “A” or “B” switch side of 2 hosts, with 2 connections each.

[IMG_0998: 40-Gig fan-out cabling to the hosts]
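To make the breakout math concrete, here is a small sketch, in Python, of how one pair of hosts gets cabled with two of these breakout cables (one run from each ToR switch). The host and switch names are placeholders I made up for the example; the A/B/C/D labels are the four 10-Gig ends of each cable.

```python
from collections import defaultdict

BREAKOUT_ENDS = ["A", "B", "C", "D"]

def cable_runs(switch: str, host_pair: tuple[str, str]) -> list[tuple[str, str, str]]:
    """One 40-Gig cable from `switch`: ends A+B go to the first host,
    ends C+D go to the second host (2 x 10-Gig connections each)."""
    first, second = host_pair
    return [(switch, end, first if end in ("A", "B") else second)
            for end in BREAKOUT_ENDS]

links = defaultdict(list)
for switch in ("switch-a", "switch-b"):       # one breakout cable per ToR switch
    for sw, end, host in cable_runs(switch, ("esx-01", "esx-02")):
        links[host].append((sw, end))

# Each host ends up with 4 x 10-Gig uplinks: two on the A switch, two on the B switch.
for host, uplinks in links.items():
    print(host, uplinks)
```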

This leaves each host with an A + B (or C + D) pair from the “A” switch and an A + B (or C + D) pair from the “B” switch. These four links are split out on each host across 2 x Virtual Distributed Switches, used consistently across the environment:

10GB-VM-Network

10GB-vKernel-Network

[Image: vCenter Networking]

Two of the links (1 x A-side, 1 x B-side) go to the VM-Network Virtual Distributed Switch; the other 2 links go to the vKernel-Network Virtual Distributed Switch. The vKernel switch on each host has 1 x Management and 1 x vMotion VMkernel adapter configured. The VM-Network switch contains the tagged port groups for all of the VLANs needed for our virtual machine traffic.
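As a rough per-host summary of that layout, here is the uplink-to-vDS assignment sketched as plain Python data. The vmnic numbering and the VLAN IDs are assumptions for illustration only; the two switch names are the ones described in the post.

```python
# Hypothetical per-host view of the uplink assignment; vmnic names and VLAN IDs
# are illustrative assumptions, not our actual configuration.
HOST_NETWORKING = {
    "10GB-VM-Network": {
        # 1 x A-side and 1 x B-side uplink carry all VM traffic.
        "uplinks": {"vmnic0": "switch-a", "vmnic2": "switch-b"},
        # Tagged port groups, one per VM VLAN (IDs are made up for the example).
        "portgroups": {"vm-vlan-10": 10, "vm-vlan-20": 20},
    },
    "10GB-vKernel-Network": {
        "uplinks": {"vmnic1": "switch-a", "vmnic3": "switch-b"},
        # One VMkernel adapter each for Management and vMotion.
        "vmkernel": ["Management", "vMotion"],
    },
}

# Sanity check: each vDS on the host uses exactly one A-side and one B-side uplink.
for vds, cfg in HOST_NETWORKING.items():
    assert sorted(cfg["uplinks"].values()) == ["switch-a", "switch-b"], \
        f"{vds} must span both ToR switches"
```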


This concludes the first of what I hope to be many Design Theory / Architectural posts. Thanks for playing along!

-vTimD