Monthly Archives: October 2015

Home Lab: Hardware – v2

I have been meaning to refresh this post for some time. My original plan was to run one host with a nested environment. Due to my own impatience and failure to read the fine print, I bought a loaded-up workstation with dual Xeon 5160 processors. This is all well and good, but those processors are so old they don’t support EPT, so I couldn’t properly nest 64-bit VMs inside 64-bit VMs. So, what did I do? The next logical choice: I went and bought a second loaded-up workstation.

IMG_0133

My hardware for the lab is as follows:

2 x Dell Precision 690 Workstations (Hosts)

1 x Custom PC (FreeNAS Storage)

1 x Cisco 2821 Router

1 x Cisco 2960-24TT-L Switch

1 x F5 BigIP 1000


 

Hosts:

Each of the two hosts is a Dell Precision 690 Workstation with the following hardware specs:

2 x Intel Xeon 5160

32GB of RAM

250GB HDD

2 x 4-Port Gigabit NICs

IMG_0011

IMG_0012


 

The storage server is a custom-built PC that I had lying around. It has the following specs:

ASUS P5Q-Deluxe Motherboard

Intel Core2Duo E6850

8GB of RAM

250GB HDD

500GB HDD

250GB SSD

1 x 4-Port Gigabit NIC

The server runs FreeNAS 9.3. The OS boots off of an 8GB thumb drive so that all disks are available for storage.


 

Please check out the rest of the home lab series for the rest of the lab configurations.

-vTimD

Home Lab: Storage

I went through a few iterations of figuring out how I wanted my storage, and it’s not done yet. Each of my ESX hosts has a 250GB HDD in it. ESXi is installed there, and the rest is VMFS space available if needed. The shared storage for the ESX hosts is a third physical host that I have had for a while. The hardware specs are as follows:

Asus P5Q-Deluxe Motherboard

Intel Core2Duo E6850 3.0GHz

8GB RAM

4GB Sandisk Extreme USB (FreeNAS OS Boot)

750GB 7200 RPM HDD

500GB 7200 RPM HDD

250GB SSD

4-Port Intel PCIe NIC

FreeNAS Home

I have this in an extremely simple setup so far. I have taken two of the NICs and created an LACP LAGG in FreeNAS:

LAGG

The two ports in the LAGG are connected to ports fa0/23 and fa0/24 on the switch. These two ports are configured into an LACP EtherChannel.

sh etherchannel

sh-int-po1
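
For reference, the switch side of that EtherChannel boils down to something like this (a sketch; the port-channel number and trunk settings are my assumptions, not pulled from the running config):

```
interface FastEthernet0/23
 switchport mode trunk
 channel-group 1 mode active
!
interface FastEthernet0/24
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
```

The `mode active` keyword is what makes the channel negotiate with LACP (as opposed to `on` for a static channel or `desirable` for PAgP), which matches the LACP LAGG on the FreeNAS side.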

For the sake of getting things going, I am currently only serving one volume. I created a 500GB volume on one of the HDDs.

FreeNAS Volumes

Until I set up iSCSI at a later point, the volume is shared via NFS.

FreeNAS NFS

The NFS share is mounted on both hosts.

VMWARE NFS
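
For the CLI-inclined, the same mount can be done per host with esxcli (the NFS server address, export path, and datastore name here are made up for illustration):

```
esxcli storage nfs add --host=172.16.30.10 --share=/mnt/vol1/nfs01 --volume-name=freenas-nfs01
esxcli storage nfs list
```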

Stay tuned for the update, including iSCSI, the SSD, and some performance testing.

-vTimD

Home Lab: Windows Infrastructure

Continuing in no particular order, we have made it to Windows infrastructure. Now, a lot of my services will run on Windows, such as the Horizon lab. I am not going to get into that one here. I am just going to go over infrastructure services in general. Let’s get this show on the road!

I decided to build my Windows infrastructure on 2008 R2 out of the box. Why, you say? So that I can go through a full infrastructure upgrade to 2012 at a later point. This was a strategic choice on my part, that I may or may not regret at a later date. We’ll see.

I have created an Active Directory domain (lab.local) and set it at the 2008 R2 domain functional level. My PDC is vtimdDC-01.lab.local. I have set up some basic OUs for Servers and Workstations, with sub-OUs for different functions.

AD

I will use the virtual desktop and Mirage desktop OUs with separate GPOs, once those labs are finished building out, to control their respective features, such as PCoIP performance tuning.

The domain controller is also running my DNS services for the lab. I have forward and reverse zones set up for all network segments. I am serving DHCP from the Cisco 2821 on the VDI DHCP sub-interface, so it is not hosted on the domain.

DNS
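
For reference, a router-side pool for that VDI DHCP sub-interface (VLAN 90, gateway 172.16.90.1 in the networking post) might look something like this. This is a sketch: the excluded range and the DNS server address (which I am assuming points at the DC on the Windows infrastructure VLAN) are placeholders, not my actual config.

```
ip dhcp excluded-address 172.16.90.1 172.16.90.10
!
ip dhcp pool VDI-DHCP
 network 172.16.90.0 255.255.255.0
 default-router 172.16.90.1
 dns-server 172.16.50.10
 domain-name lab.local
```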

Really, the only other Windows infrastructure component I have at this time is the SQL server. It also runs on Windows Server 2008 R2, with SQL Server 2014 Standard. So far it hosts databases for Horizon Composer, the Horizon events DB, and PernixData FVP. I’m sure it will expand as things grow.

SQL

 

Home Lab: Networking

In the continuing series of “Tim vs. the Home Lab” we tackle my biggest flaw… networking! Thanks to the networking crew at work, and some senior-manager approvals, I was graced with some killer hardware that was about to be thrown in a dumpster. My networking now consists of the following:

1 x Linksys E2500 (Home WiFi Router)

1 x Cisco 2821 Router

1 x Cisco 2960-24TT-L Switch

1 x F5 BigIP

IMG_0133

I have not gotten to the F5 yet. I also have the F5 BigIP Virtual Edition, fully licensed. This should let me test out GTM for Horizon, once I get it all going. We’ll see.

Ultra huge shout out to @jshiplett and /u/kweevus for helping me get all this going. I have a basic setup. The home Linksys is doing what it always does: serving internet and WiFi for the house at 192.168.1.1. I set a static route to the new IP space that I set up: the Linksys routes 172.16.0.0 255.255.0.0 to 192.168.1.2 (the Cisco router). The Cisco 2821 is very basic, interface-wise. It has Gi0/0 and Gi0/1. Gi0/0 is the handoff, set to 192.168.1.2. Gi0/1 is set with no IP address. All of my layer 3 VLANs are set up as sub-interfaces of Gi0/1:

!
interface GigabitEthernet0/1.1
encapsulation dot1Q 1 native
ip address 172.16.1.1 255.255.255.0
ip flow ingress
!
interface GigabitEthernet0/1.10
description Network Services
encapsulation dot1Q 10
ip address 172.16.10.1 255.255.255.0
ip flow ingress
!
interface GigabitEthernet0/1.20
description ESX Hosts
encapsulation dot1Q 20
ip address 172.16.20.1 255.255.255.0
ip flow ingress
!
interface GigabitEthernet0/1.30
description Storage Infrastructure
encapsulation dot1Q 30
ip address 172.16.30.1 255.255.255.0
ip flow ingress
!
interface GigabitEthernet0/1.40
description vCenter Infrastructure
encapsulation dot1Q 40
ip address 172.16.40.1 255.255.255.0
ip flow ingress
!
interface GigabitEthernet0/1.50
description Windows Infrastructure
encapsulation dot1Q 50
ip address 172.16.50.1 255.255.255.0
ip flow ingress
!
interface GigabitEthernet0/1.60
description Horizon Infrastructure
encapsulation dot1Q 60
ip address 172.16.60.1 255.255.255.0
ip flow ingress
!
interface GigabitEthernet0/1.70
description vCloud Infrastructure
encapsulation dot1Q 70
ip address 172.16.70.1 255.255.255.0
ip flow ingress
!
interface GigabitEthernet0/1.80
description Utility Infrastructure
encapsulation dot1Q 80
ip address 172.16.80.1 255.255.255.0
ip flow ingress
!
interface GigabitEthernet0/1.90
description VDI DHCP
encapsulation dot1Q 90
ip address 172.16.90.1 255.255.255.0
ip flow ingress
!

At this point, I am trying to keep the config as minimal as I can. The switch ports are VERY minimal: all of them will be trunks set to allow all VLANs, and I will do all my VLAN tagging from the Distributed Switches in ESX. My next step is to bring the F5 into the loop. I want my Horizon lab to be load balanced, and to use iApps for external access. Updates to come!
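
As a quick sanity check on the addressing plan above, a few lines with Python’s ipaddress module can confirm that every sub-interface /24 falls inside the single 172.16.0.0/16 static route on the Linksys, and that none of the VLAN subnets overlap (a throwaway sketch, not part of the lab itself):

```python
import ipaddress
from itertools import combinations

# Summary route the Linksys points at the 2821
summary = ipaddress.ip_network("172.16.0.0/16")

# One /24 per dot1Q sub-interface on Gi0/1
vlans = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90]
subnets = [ipaddress.ip_network(f"172.16.{v}.0/24") for v in vlans]

# Every VLAN subnet must fall inside the summary route...
assert all(net.subnet_of(summary) for net in subnets)

# ...and no two VLAN subnets may overlap each other.
assert not any(a.overlaps(b) for a, b in combinations(subnets, 2))

# The gateway on each sub-interface is the .1 address
gateways = [str(next(net.hosts())) for net in subnets]
print(gateways[1])  # 172.16.10.1
```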

-vTimD

Networking: An exercise in failure (Solved)

Shout out to @jshiplett and /u/kweevus for the resolutions on my switch and routing configs. At the top of my routing config was ‘no ip routing’, which was causing the router to, well, not route anything. Removed that, and the routing table populated. Done.
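
If you hit the same wall, the fix from config mode is just the following (re-enabling CEF at the same time is my suggestion, since it was switched off alongside routing in the config below, not part of the original fix):

```
configure terminal
 ip routing
 ip cef
end
```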

——————————————–

As I will post about in more detail later, I have acquired some new networking hardware: a Cisco 2821 router and a 2960 switch. I have it set up as best as possible, all hanging off one of the ports on my Linksys home router. I have a static route set up so I can access the lab network (172.16.0.0) from my home network (192.168.1.0). I have gi0/0 set up as the handoff network at 192.168.1.2. I then have gi0/1 set up with no IP. All of my sub-interfaces (gi0/1.10, gi0/1.20, etc.) are set up for my VLANs. I have gi0/1 on the switch set up as a trunk to the router’s gi0/1. I can ping from my local laptop to the router’s gi0/0 and even the sub-interface VLAN IPs. I cannot, however, ping through to any VLAN hosts. I also cannot ping the router sub-interfaces from the switch. I am posting my configs here. Any and all input would be helpful. This post will be updated as I work out the kinks. I may have some configs on the switch that are shooting me in the foot; I have just been trying things, and some things don’t get removed or overwritten.

Update: I set switchport 0/24 back to access VLAN 20 (switchport mode access). I can ping my ESX host from the router (COOL), and I can ping the router from the ESX host. What is not working now is getting from 192.168.1.x to 172.16.20.15 (the ESX host). I can ping the router sub-interface 172.16.20.1 from the laptop, but not the host on that network, which I can ping from the router.

 

CURRENT ROUTER CONFIG:

Building configuration…
Current configuration : 4735 bytes
!
version 12.4
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname vtimd2821-01
!
boot-start-marker
boot-end-marker
!
logging message-counter syslog
logging buffered 51200 warnings
enable secret 5 XXXXXXXX
enable password XXXXXXXX
!
no aaa new-model
!
dot11 syslog
ip source-route
no ip routing
!
!
no ip cef
!
!
no ipv6 cef
multilink bundle-name authenticated
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
voice-card 0
!
!
crypto pki trustpoint TP-self-signed-2715857335
enrollment selfsigned
subject-name cn=IOS-Self-Signed-Certificate-2715857335
revocation-check none
rsakeypair TP-self-signed-2715857335
!
!
crypto pki certificate chain TP-self-signed-2715857335
certificate self-signed 01
30820244 308201AD A0030201 02020101 300D0609 2A864886 F70D0101 04050030
31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274
69666963 6174652D 32373135 38353733 3335301E 170D3135 31303130 32303432
30315A17 0D323030 31303130 30303030 305A3031 312F302D 06035504 03132649
4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D32 37313538
35373333 3530819F 300D0609 2A864886 F70D0101 01050003 818D0030 81890281
8100BEF9 48CDAED0 AA3944F8 3F8BEE45 088458AD 36D2B01B CFB967E0 86CC674D
FF0A08FE C1A3EDB9 1A3F9A82 8D0F4E99 1F736364 341214B3 C9E0AE01 0D1FC3AD
B28BEC9A DA3935A6 BACB3DE6 352511FC BCFB8364 1188210D E7BEFAC7 A7915771
49959B9D CA7C71D6 571C2B34 39E9A663 157F6EBF 022E4B04 2D2855B8 62E6AC7F
46990203 010001A3 6C306A30 0F060355 1D130101 FF040530 030101FF 30170603
551D1104 10300E82 0C767469 6D643238 32312D30 31301F06 03551D23 04183016
8014992A A4080938 E29A03DF 65AA45DD EF6D4C49 CF9C301D 0603551D 0E041604
14992AA4 080938E2 9A03DF65 AA45DDEF 6D4C49CF 9C300D06 092A8648 86F70D01
01040500 03818100 0659999C 827F3666 0DD3E061 94AD5DAC B703217F 85DCE6B8
633950DF 741F5260 1160EB20 E164D466 402EF739 6A12404B 23529116 AD39552F
7BE4401A 3581E834 F95C3C5C 169374D3 3C51ACAB B3BF32CE 42B4F134 01FDC766
20969D70 830CA142 74ED127A FCCD3D60 03CDD789 427AE42A 73BBB9B1 435A9EF9
12642360 D0DFDFC7
quit
!
!
username vtimd privilege 15 password 0 XXXXXXXX
archive
log config
hidekeys
!
!
!
!
!
!
interface GigabitEthernet0/0
ip address 192.168.1.2 255.255.255.252
no ip route-cache
duplex auto
speed auto
no mop enabled
!
interface GigabitEthernet0/1
description Davis Switch Access
no ip address
no ip redirects
no ip unreachables
ip flow ingress
no ip route-cache
duplex auto
speed auto
!
interface GigabitEthernet0/1.1
encapsulation dot1Q 1 native
ip address 172.16.1.1 255.255.255.0
ip flow ingress
no ip route-cache
!
interface GigabitEthernet0/1.10
description Network Services
encapsulation dot1Q 10
ip address 172.16.10.1 255.255.255.0
ip flow ingress
no ip route-cache
!
interface GigabitEthernet0/1.20
description ESX Hosts
encapsulation dot1Q 20
ip address 172.16.20.1 255.255.255.0
ip flow ingress
no ip route-cache
!
interface GigabitEthernet0/1.30
description Storage Infrastructure
encapsulation dot1Q 30
ip address 172.16.30.1 255.255.255.0
ip flow ingress
no ip route-cache
!
interface GigabitEthernet0/1.40
description Storage Infrastructure
encapsulation dot1Q 40
ip address 172.16.40.1 255.255.255.0
ip flow ingress
no ip route-cache
!
interface GigabitEthernet0/1.50
description Storage Infrastructure
encapsulation dot1Q 50
ip address 172.16.50.1 255.255.255.0
ip flow ingress
no ip route-cache
!
interface GigabitEthernet0/1.60
description Storage Infrastructure
encapsulation dot1Q 60
ip address 172.16.60.1 255.255.255.0
ip flow ingress
no ip route-cache
!
interface GigabitEthernet0/1.70
description Storage Infrastructure
encapsulation dot1Q 70
ip address 172.16.70.1 255.255.255.0
ip flow ingress
no ip route-cache
!
interface GigabitEthernet0/1.80
description Storage Infrastructure
encapsulation dot1Q 80
ip address 172.16.80.1 255.255.255.0
ip flow ingress
no ip route-cache
!
interface GigabitEthernet0/1.90
description Storage Infrastructure
encapsulation dot1Q 90
ip address 172.16.90.1 255.255.255.0
ip flow ingress
no ip route-cache
!
ip forward-protocol nd
ip route 0.0.0.0 0.0.0.0 192.168.1.1
ip http server
ip http authentication local
ip http secure-server
!
!
!
!
!
snmp-server community public RO
!
control-plane
!
!
!
!
mgcp fax t38 ecm
mgcp behavior g729-variants static-pt
!
!
!
!
!
line con 0
password XXXXXXXX
logging synchronous
line aux 0
line vty 0 4
privilege level 15
password XXXXXXXX
login local
transport input telnet ssh
line vty 5 15
password XXXXXXXX
logging synchronous
login
transport input telnet ssh
!
scheduler allocate 20000 1000
end

CURRENT SWITCH CONFIG:

Building Configuration…

Current configuration : 1711 bytes
!
version 12.2
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname vtimd2960-01
!
boot-start-marker
boot-end-marker
!
enable secret 5 XXXXXXX
enable password XXXXXXX
!
username vtimd privilege 15 password 0 XXXXXXX
no aaa new-model
system mtu routing 1500
ip subnet-zero
!
!
!
!
!
!
!
!
!
!
!
spanning-tree mode pvst
spanning-tree loopguard default
spanning-tree portfast bpduguard default
spanning-tree extend system-id
!
vlan internal allocation policy ascending
!
!
!
interface FastEthernet0/1
!
interface FastEthernet0/2
!
interface FastEthernet0/3
!
interface FastEthernet0/4
!
interface FastEthernet0/5
!
interface FastEthernet0/6
!
interface FastEthernet0/7
!
interface FastEthernet0/8
!
interface FastEthernet0/9
!
interface FastEthernet0/10
!
interface FastEthernet0/11
!
interface FastEthernet0/12
!
interface FastEthernet0/13
!
interface FastEthernet0/14
!
interface FastEthernet0/15
!
interface FastEthernet0/16
!
interface FastEthernet0/17
!
interface FastEthernet0/18
!
interface FastEthernet0/19
!
interface FastEthernet0/20
!
interface FastEthernet0/21
!
interface FastEthernet0/22
!
interface FastEthernet0/23
!
interface FastEthernet0/24
switchport mode trunk
!
interface GigabitEthernet0/1
description Trunk-to-Router
switchport mode trunk
spanning-tree portfast trunk
!
interface GigabitEthernet0/2
!
interface Vlan1
ip address 172.16.1.2 255.255.255.0
no ip route-cache
!
ip default-gateway 172.16.1.1
ip http server
ip http authentication local
ip http secure-server
!
control-plane
!
!
line con 0
line vty 0 4
privilege level 15
login local
transport input telnet ssh
line vty 5 15
login
!
end

-vTimD

Home Lab: PernixData FVP Freedom

Hey there, kids! I am doing the home lab posts way out of order. I still need to post updates on the vSphere infrastructure, the Horizon lab, the vCloud lab, and even the Windows infrastructure. More on that later. I really wanted to get PernixData FVP Freedom set up since I just set up my shared storage. This process couldn’t have been easier. Let’s give it a rundown:

Hardware: 

2 x Physical ESX hosts for VMs

1 x Physical FreeNAS

RAM for Acceleration

Software:

vCenter Server

PernixData FVP Freedom

 

Install:

I don’t have VUM up and running yet, so I went the good old-fashioned manual route for getting the VIBs onto the hosts. I uploaded the VIB package to a local datastore on each host, then used SSH to get into each host and used esxcli to install the VIBs.

VIB_datastore

SSH_VIB_Install
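
The esxcli invocation looks roughly like this, run per host over SSH (the datastore path and bundle filename are placeholders for wherever you uploaded the package):

```
# put the host in maintenance mode first
esxcli system maintenanceMode set --enable true
# install the offline bundle from the local datastore
esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-offline-bundle.zip
# take the host back out of maintenance mode
esxcli system maintenanceMode set --enable false
```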

Doing the VIB install requires each host to be in maintenance mode. This is key. The next thing I needed was a service account. I gave this account admin permissions in vCenter, and local admin on the FVP server.

AD_Service_Account

Let’s get into the meat of the install. This is on a standalone Windows server with 4 vCPU and 8GB of RAM (the recommended sizing). I didn’t get screenshots, but I created a database in SQL called prnxdata and a user called pernix (db_owner). This is needed for the FVP management server. First, you can do a full or custom install. Full is best, but here is what Custom can get you:
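
If you’d rather script that database prep than click through SSMS, it amounts to something like this T-SQL (the names match mine; the password is obviously a placeholder):

```
CREATE DATABASE prnxdata;
GO
CREATE LOGIN pernix WITH PASSWORD = 'ChangeMe!123';
GO
USE prnxdata;
GO
CREATE USER pernix FOR LOGIN pernix;
ALTER ROLE db_owner ADD MEMBER pernix;
GO
```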

01-custom-setup

Next, we’ll need to point it at vCenter. Since we set the service account as an admin in vCenter and on the box, the permissions will be just what we need.

02-vcenter-user

The next thing we need to do is point the installer at SQL. You can run SQL Express locally, but I have a SQL 2014 instance for all my lab DBs.

03-SQL

The next step is setting up the management server. I opted for the FQDN over the IP, and the default management ports. After a few more clicks of “Next” I clicked the final “Install” button. It did all the rest for me.

04-Network-Settings

Yay, we’re done! (Kinda.) Now let’s pull up the console. The first thing you want to do is drop in the license:

05-License

Now that we’re licensed, we need to create an FVP cluster. I selected my Production cluster, as the Storage cluster simply has the one host that holds my NAS VM.

06-Cluster-Settings

As soon as we have the cluster, we have to give it an acceleration device. Normally you’d be able to use an SSD or a PCIe flash device, but since this is Freedom (the free edition) it is RAM only (up to 128GB! That is a LOT for a free lab). I don’t have a ton of RAM, so I allocated 4GB per host (8GB of acceleration total).

07-Resource

Now that we have the cluster built and the acceleration device set, we need to tell it what storage to accelerate. I selected the 3 NAS datastores that I have set up into a datastore cluster in vCenter. I just set up the shared storage, so I don’t have a lot on it: the SQL server and one other VM.

08-Datastores

And we have…a giraffe!… I mean, a functional FVP Console!

09-Console

And a reporting console, too!

10-Reporting

This was really a very painless process. It did take me two days, but only because I was doing it in my free time. Overall, it took longer to enable SSH on the hosts and set up the service account than it did to do the rest. Now it’s time to move all my VMs from the local stores to the NAS datastore cluster. I’ll have a follow-up post soon with some updated stats and my day-to-day experience. Now, go out and get your copy!

-vTimD

Note: My series on PernixData software is in no way sponsored by PernixData. The posts are all written by me, for knowledge, not compensation.