In my continuing efforts to grow in design, I am writing my next installment of the Architecture Series. This next bit is going to focus on storage.
Let’s start from the server and move through the physical fabric. As with the networking for this design, which uses 2 x dual-port NICs, storage uses 2 x 8Gb dual-port FC HBAs. In our environment, we go with Emulex LPe1200s. They are placed in the host in alternating order: HBA, NIC on top and HBA, NIC on bottom:
The dual ports on each HBA are split so that every HBA has one cable to the A-side fabric and one cable to the B-side.
As shown in the diagram below, the fabric provides redundant paths to each side of the storage fabric. Each side of the storage fabric has a link to one of the controllers in the storage frame. This gives each server 2 links to each fabric side, and each side in turn has 2 links into the storage controllers. The result is tolerance for the failure of a storage controller, or of an entire side of the FC fabric.
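To make the redundancy concrete, here is a small sketch of the path math. It assumes each host-side port on a fabric side is zoned to see both storage-controller links on that same side - an assumption about the zoning, not something stated above - and all of the port names are hypothetical:

```python
from itertools import product

# Host side: 2 dual-port HBAs, one port from each HBA cabled to each fabric side.
hba_ports = {"A": ["hba1-p0", "hba2-p0"], "B": ["hba1-p1", "hba2-p1"]}
# Array side: each fabric side has 2 links into the storage controllers.
target_ports = {"A": ["ctrl1-pA", "ctrl2-pA"], "B": ["ctrl1-pB", "ctrl2-pB"]}

# A path is an (initiator port, target port) pair within one fabric side.
paths = [
    (init, tgt)
    for side in ("A", "B")
    for init, tgt in product(hba_ports[side], target_ports[side])
]
print(len(paths))  # 8 paths per LUN; losing one whole fabric side still leaves 4
```

Losing a controller or an entire fabric side removes half the paths, and the host keeps running on the remainder.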
The storage frames all offer multiple tiers of storage for the customer: Tier 2a, Tier 2, Tier 3, and Tier 4. We have not had a use case that requires an all-flash array at this point, so one is not currently available in our environment.
We cut LUNs from the storage frame at 2TB and present them to the hosts, well under the 64TB maximum allowed. We name the LUNs based on the frame brand, frame ID, cluster, tier, and LUN number.
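The exact naming format isn't spelled out here, so purely as an illustration, a helper that stitches those five fields together might look like this (the delimiter, zero-padding, and sample values are all assumptions):

```python
def lun_name(brand: str, frame_id: str, cluster: str, tier: str, lun_num: int) -> str:
    """Build a LUN name from frame brand, frame ID, cluster, tier, and LUN number.

    The hyphen delimiter and three-digit zero-padding are illustrative only;
    the real convention in the environment may differ.
    """
    return f"{brand}-{frame_id}-{cluster}-{tier}-{lun_num:03d}"

print(lun_name("VMAX", "0042", "CL01", "T2", 7))  # VMAX-0042-CL01-T2-007
```

Encoding all five fields in the name means a LUN can be traced back to its frame and tier from the vSphere side without consulting the array.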
These LUNs are then grouped into datastore clusters. The datastore clusters are broken down by frame brand, cluster, tier, and then a cluster ID. We do allow mixed frame IDs within a cluster. We limit our datastore clusters to 32 LUNs, even though the supported maximum is 64, simply to keep things easier to manage.
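A minimal sketch of that grouping rule: fill each datastore cluster up to the 32-LUN cap, then start a new cluster ID. The cluster-name format here is hypothetical; note the frame ID is deliberately absent from the key, since mixed frame IDs are allowed:

```python
def group_into_clusters(luns, brand, cluster, tier, max_luns=32):
    """Chunk a list of LUNs into datastore clusters of at most max_luns each.

    Cluster names combine frame brand, cluster, tier, and a running cluster ID.
    """
    groups = {}
    for i in range(0, len(luns), max_luns):
        cluster_id = i // max_luns + 1
        name = f"{brand}-{cluster}-{tier}-DSC{cluster_id:02d}"  # hypothetical format
        groups[name] = luns[i : i + max_luns]
    return groups

dsc = group_into_clusters([f"lun{n}" for n in range(70)], "VMAX", "CL01", "T2")
print({name: len(members) for name, members in dsc.items()})  # sizes: 32, 32, 6
```

So 70 LUNs of one brand/cluster/tier would land in three datastore clusters: two full ones and one partially filled.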
This has been another installment of the Architecture Series. Thanks for playing along!