Author Archives: Tim Davis

Achievement Unlocked: VCAP5-DCD

My day has come! I am finally part of the VCAP club. Today, I sat, and passed, the VCAP5-DCD exam. I have to say, it was the most brutal 3 hours of my life. So many ups and downs. I had no idea how things were going to go when I clicked "End Exam". When I saw the score, it took a few seconds of staring for it to sink in. I sat in the chair looking at the screen for a couple of minutes. I even forgot to finalize the exam and walked out to the proctor to get the score sheet. When it didn't come out, we realized that I hadn't clicked to close out. Then there it was. My score sheet. Pass.

The score was, in all honesty, better than I expected. I got a 337 out of 500. While you only need 300 to pass, making 337 not that impressive, I had truly only hoped for a 301. I am thrilled with what I got.

Let's talk a little exam prep / experience. My exam prep was not what I recommend for someone wanting to sit down and crank out the DCD. I've been working toward this for a couple of years now, doing vSphere / VDI design the whole time, so a lot of it was practical knowledge. I also went over the VCAP5-DCD Simulator until I felt I was only answering from memory. Then I didn't go back for months, then I did it again. It's great. 100% use the blueprint and go through some of the PDFs on consulting (risk, constraint, assumption, requirement type stuff). There is a Google Plus group you can join with a ton of great info / people / stories.

I really never sat back and said "Alright, let's do this!" and put my nose to the grindstone. I only went for it because I had been kicking it along for a while and decided today was the day. I don't recommend this.

If you've never done a Design exam, or even a VCAP, I highly recommend you mentally prepare. Sleep well, have a good breakfast, take the exam early (unless you're an afternoon kind of person). I did not spend the evening before studying, as that bums me out, and I figured I wouldn't get any better in 1 last night. The exam is 22 questions. Most are drag-and-drop style questions. There are 9 Visio-style design tool questions, 1 of which is a "Master Design." My master design item came about halfway through the exam. I immediately marked it and moved on. I highly recommend saving this for the end. They say to allow 30 minutes for it; I ended up starting it with 1 hour flat left. I truly can't remember how long it took, but I know I finished early.

As far as tips, I really have 1 big one. READ THE QUESTION. Did you read that? Did you get it? Read it again. Then read it again. Who knows, the question could give you part of the answer. You’d never know if you didn’t read it.

So that's it. Not a lot of info, but enough. I just needed to tell anyone else that would listen that I passed. Best of luck to everyone else in your endeavors. Feel free to contact me if you have any questions about it. I'll be glad to tell you what I can.

vcap5-dcd

-vTimD

Time For A Change: Career Edition

It is with mixed emotions that I am writing this today. I will be leaving my current role as Sr. Virtualization Engineer for Dell Services. Over the past 3 years, I have had the privilege of working for one of the largest accounts for Dell Services. I was hired on as a Wintel engineer with a strong knowledge of VMware. From there, I quickly became the Principal Architect and SME for the VDI solution. I have also become the Infrastructure Lead. I owe a huge debt to my original hiring manager, as well as the Cloud Services manager that helped build me up to where I am today. I will be leaving one of the greatest teams of people I have had the opportunity to work with.

As of April 18th, 2016, I will shift directions slightly. I will be joining the VMware NSBU as Senior Systems Engineer, NSX Enterprise. While a lot of the existing NSBU folks have a heavy background in networking, I will be bringing my heavy virtualization background to the table. I am greatly looking forward to the opportunity to dive head first into one of the coolest pieces of technology to hit the streets. I will be working alongside and under some of the most gifted minds in the industry. This is extremely exciting for me, and I cannot wait to start.

One of the biggest goals I have for my life and career is to learn all I possibly can. While I have a grasp of basic networking, this job will give me the opportunity to jump out of my comfort zone and on to the bleeding edge of a rapidly growing technology. I can’t wait to see what the future has in store for NSX, and I’ll be sure to share it with all of you. Stay tuned!

-vTimD

OSPF, BGP, and STP: OH MY!


Welcome to another installment of the “Tim needs to learn networking better” series. This episode is not really anything specifically NSX related. I want to implement OSPF on my lab network, but before I can do that, I really should understand what it is and what it does. I decided to add a couple other protocols used in the lab. This write-up is simply for my own sanity-checking. If someone else finds it useful, cool!

Note: If you’re a network expert and have any corrections, or anything to add, please do! 🙂


 

Open Shortest Path First (OSPF)

OSPF_message

OSPF is the most widely used of all the Interior Gateway Protocols (IGP – protocols used inside an organization / network). OSPF is generally implemented when a network grows too big for RIP to be effective. RIP is a distance-vector protocol: each router only knows the distances its neighbors report, and it tops out at 15 hops, so it doesn't scale. OSPF is a link-state protocol: every router floods its link information to the others, so each router stores the complete topology and can calculate the fastest route from point to point based on it. The protocol organizes routers into "areas". Think of departments in an office building: the building backbone is "area 0", and each department is another area number. This sets up logical groupings of routers, with area 0 being the backbone communication area that the other areas connect through.
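Under the hood, each OSPF router runs Dijkstra's shortest-path-first calculation over its copy of the full topology, which is what lets it pick a cheap multi-hop route over an expensive direct one. Here's a minimal Python sketch of that SPF calculation; the routers and link costs are made-up examples, not anything from my lab:

```python
import heapq

def shortest_paths(topology, source):
    """Dijkstra's SPF: cheapest-path cost from source to every router."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry, already found a better path
        for neighbor, link_cost in topology[node]:
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

# Hypothetical four-router topology: lists of (neighbor, link cost)
topology = {
    "R1": [("R2", 10), ("R3", 5)],
    "R2": [("R1", 10), ("R4", 1)],
    "R3": [("R1", 5), ("R4", 20)],
    "R4": [("R2", 1), ("R3", 20)],
}
print(shortest_paths(topology, "R1"))
# {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11} — R4 is cheaper via R2 than via R3
```

Note how R1 reaches R4 through R2 at cost 11 rather than through the "shorter-looking" R3 link at cost 25. That decision is only possible because the router holds the whole topology, which is exactly what RIP lacks.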


Border Gateway Protocol (BGP)

26634-bgp-toc2

Border Gateway Protocol is considered "the protocol of the internet". It is the most widely used Exterior Gateway Protocol (EGP – protocol used between organizations / networks). BGP lets routers exchange reachability information between autonomous systems (networks outside of your own, under someone else's administrative control). The protocol is used to ensure your traffic makes it out of your network, through the vast internet, and into the correct destination network. Since IP blocks are not logically assigned by geographic region, or anything like that, routers need another way of knowing how to get packets from your network to the destination network. With BGP, a router advertises to its peers which prefixes it can reach, and the chain of autonomous systems the traffic would pass through, so its peers know to send traffic for those destinations its way to be passed on.
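To make the "advertise paths to peers" idea concrete, here's a toy Python sketch of the simplest piece of BGP best-path selection: all else being equal, the advertisement with the shortest AS path wins. Real BGP compares a whole list of attributes before this (weight, local preference, origin, MED, and more), and the prefixes and AS numbers here are made up:

```python
def best_route(advertisements):
    """Pick the advertisement with the shortest AS path.
    This is only one tie-breaker from the real BGP best-path algorithm."""
    return min(advertisements, key=lambda ad: len(ad["as_path"]))

# Hypothetical advertisements for the same prefix from two different peers
ads = [
    {"prefix": "203.0.113.0/24", "next_hop": "198.51.100.1",
     "as_path": [64500, 64510, 64520]},
    {"prefix": "203.0.113.0/24", "next_hop": "198.51.100.9",
     "as_path": [64600, 64520]},
]
print(best_route(ads)["next_hop"])  # 198.51.100.9 — the two-AS path wins
```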


Spanning Tree Protocol (STP)

2011-09-01-STP-Loopguard1

Spanning Tree Protocol was created back in the era of bridges, before switches existed, though it is still widely implemented today on switched networks. STP is used to ensure a loop-free topology in bridged networks. It mitigates bridging loops (and the broadcast storms they cause) by logically blocking redundant links. The protocol is also used to manage purposely-planned redundant links. These sit in an active-standby arrangement, so that when the active link goes down, STP can work around the dead path by unblocking the second one.
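A rough way to picture what STP ends up with: elect a root bridge, keep the links that form a tree from the root, and block everything else. The Python sketch below fakes this with a lowest-ID root election and a BFS tree, which is a big simplification of the real protocol (no BPDUs, port costs, or timers), using made-up switch names:

```python
from collections import deque

def spanning_tree(links, bridges):
    """Simplified STP outcome: lowest-ID bridge is root, links on the BFS
    tree from the root stay active, every other link is 'blocked'."""
    root = min(bridges)
    adjacency = {b: [] for b in bridges}
    for a, b in links:
        adjacency[a].append(b)
        adjacency[b].append(a)
    active, visited, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                active.add(frozenset((node, neighbor)))
                queue.append(neighbor)
    blocked = {frozenset(link) for link in links} - active
    return active, blocked

# Hypothetical three-switch triangle: a deliberate redundant loop
links = [("SW1", "SW2"), ("SW2", "SW3"), ("SW1", "SW3")]
active, blocked = spanning_tree(links, ["SW1", "SW2", "SW3"])
print(len(active), len(blocked))  # 2 active links, 1 blocked
```

The blocked SW2–SW3 link is the standby path: if an active link failed, a recalculation would bring it into the tree.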


Thanks for playing along, as always!

-vTimD

I…have made fire!: An NSX Story

In my last post I went through my initial deployment of the NSX Manager appliance. I have, since then, done so much more. As I told you in that post, networking is not my strong suit. I am really trying to learn as much as possible to try and fill in the holes. My big feat thus far? I have completely deployed a new network segment in my lab, using NSX. While in the grand scheme of things, this isn’t huge, it is to me.

I have 3 IP spaces in my house.

192.168.1.0/24 – Physical – Home Network

172.16.0.0/16 – Physical – Lab Network

10.0.60.0/24 – NSX – Horizon

The 10.0.60.0/24 network will soon be expanded to 10.0.0.0/16; I just wanted to get it working for now. I have generated a quick and dirty Visio of my current setup. With the magic of static routes in strategic places, I am able to communicate from my laptop on the 192 segment all the way through to my Horizon View servers, which are physically in the 172 segment, but logically (NSX) in the 10 segment.
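Each hop along that path is just a router doing a longest-prefix-match lookup against its route table. Here's a toy sketch in Python using my three segments; the next-hop values are invented for illustration, not my actual routes:

```python
import ipaddress

def next_hop(routes, destination):
    """Longest-prefix match: the most specific route covering the destination wins."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Hypothetical static route table on the home-network router
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "ISP gateway"),        # everything else
    (ipaddress.ip_network("172.16.0.0/16"), "lab router"),     # physical lab
    (ipaddress.ip_network("10.0.60.0/24"), "lab router"),      # NSX / Horizon
]
print(next_hop(routes, "10.0.60.10"))  # lab router
print(next_hop(routes, "8.8.8.8"))     # ISP gateway
```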

2016-03-19 08_37_11-C__Windows_system32_cmd.exe

This is really cool for me. It was a struggle to configure the original handoff from the 192 segment to the 172 segment. I have routes all over the place. Check out the Visio below:

Physical_Logical_Network

I’ll be doing some more posts on my overall NSX config, as well as some blogs on setting up Horizon View for Load Balancing and Distributed Firewall on NSX. Keep checking back for more fun!

-vTimD

VMware NSX: Appliance Deployment

NSX

Hello, and welcome to what will probably be a series of posts about a topic that is way over my head. Hopefully this exercise will make me a bit better at what seems to be a weak point for me: networking and security.

VMware NSX is the network virtualization platform for the Software-Defined Data Center (SDDC).

Today, we'll simply be deploying the appliance and registering vCenter in it. This is a very non-technical procedure, but who knows. Someone may find it useful. Let's get started.

The first thing I did was launch vCenter and start Deploy OVF Template. I pointed the wizard at the NSX Manager file I downloaded from my.vmware.com.

01 - OVF

From there, you’ll see the details of the OVF. Including the verified VMware publisher.

02 - OVF

Next, you’ll need to accept the EULA. Make sure you read and understand all of it. It’s a binding contract.

03 - OVF

Name the VM and select the folder within your vCenter structure.

04 - OVF

Select the cluster.

05 - OVF

Select which datastore or DS cluster you wish to put the NSX Manager VM on.

06 - OVF

I selected Thin Provisioning by default.

07 - OVF

Tell the wizard which portgroup you wish to put the NSX Manager VM on. I have mine on Network Services.

08 - OVF

On the next page, you will set up your passwords for the appliance. You'll also set up the IP / DNS info.

09 - OVF

Verify on the next page that all of your information is correct, before you deploy.

10 - OVF

You should now see the deployment task in vCenter.

11 - vCenter Task

Voila! We have now deployed the NSX Manager appliance. Now let’s tie it to vCenter.

12 - OVFDONE

Log in with the admin user and the default password that you specified during the OVF deployment.

13 - NSX Login

Once inside the NSX Manager console, we’ll want to go to Manage vCenter Registration.

14 - vCenter Reg

From here, we have a pair of settings we need to configure: the Lookup Service for SSO registration, and the vCenter connection. The Lookup Service will be the IP of vCenter (or external PSC / SSO), the default port of 7444 (unless changed), and your SSO admin credentials.

15 - vCenter Reg 2

The vCenter Server info is the DNS name, admin user, and password used to access vCenter.

16 - vCenter Reg 3

Accept any certificate warnings.

17 - vCenter Reg 4

Now we're all set up! We have two green LEDs on the sections we need. Perfect!

18 - vCenter Reg Done

This is a shot of the home screen of the NSX Manager appliance portal. It shows resource usage and service status.

19 - NSX Appliance Home

Now, if we login to the vCenter Web Client (Can’t access NSX from C# client) we see the NSX Networking & Security icon.

20 - NSX vCenter


There you have it. It’s extremely straight-forward to deploy and link the NSX Manager to vCenter.

Stay tuned for more on VMware NSX. The next blog should be on basic host-prep and service deployment. I’ll be deploying it for use in my vSphere home lab, as well as integration into VMware Horizon View.

-vTimD


Architecture Series: Disaster Recovery

Welcome back to my continuing series on Architecture. In this next installment, we will be going over a disaster recovery design.

Disaster Recovery (DR) is a set of policies and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster. There are two big pieces to this planning: how much downtime you can stand to incur, and how much data (measured in time's worth of changes) you can stand to lose in the recovery. These are known as the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO). We'll go over these more later.
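As a quick worked example of how the two objectives get used, here's a toy check of whether a DR test met its targets. The 60-minute RTO and 15-minute RPO are this design's targets (they come up again later in the post); the measured numbers are invented for the example:

```python
def dr_test_passed(rto_min, rpo_min, recovery_time_min, data_loss_min):
    """A DR test meets its objectives when downtime stays within the RTO
    and lost data (in minutes' worth of changes) stays within the RPO."""
    return recovery_time_min <= rto_min and data_loss_min <= rpo_min

# Targets: 60-minute RTO, 15-minute RPO; measured values are hypothetical
print(dr_test_passed(rto_min=60, rpo_min=15,
                     recovery_time_min=42, data_loss_min=11))  # True
print(dr_test_passed(rto_min=60, rpo_min=15,
                     recovery_time_min=75, data_loss_min=11))  # False: blew the RTO
```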

Now, what does this mean when you are designing an environment? Everything. What good is your design to the customer when it is wiped off the face of the earth by an F5 tornado?

IMG_0001

The answer is: it's no good. So what do we do? We implement a Disaster Recovery solution into the design. For this current design, our DR plan gives us the ability to "fail over" all of the protected critical workloads, in the event of a catastrophe, from one physical data center to another physical data center in a completely different geographic location. This is a good thing.

IMG_0002

So let's start from the racks in our Primary Data Center (PDC). DR doesn't just mean continuity from our PDC to our DR site. We take steps to ensure that uptime requirements are met at the PDC by dual-homing all of our devices. This goes for power, network, and storage. All of our infrastructure is set up with an A-side and a B-side. This allows for a point of failure just about anywhere in the physical hardware design, with the opposite side able to withstand the outage without downtime. This also makes maintenance on any of these services easy, as we can simply use the opposite side while one side is being worked on.

We also utilize some vCenter-level recovery options which help us withstand points of failure. For example, we have vSphere HA enabled on our clusters. In a nutshell, if an ESX host suddenly fails, HA automatically restarts all of that host's VM's on other hosts in the cluster. While there is a bit of downtime for the restart, it is an automated process to bring VM's back online as quickly as possible in the event of hardware failure.

IMG_0010

Duncan Epping has written the gold standard in books on HA that you can read up on here.


Now let’s move on to the big stuff. Large-scale natural or human disaster. What do you do when your PDC is destroyed or completely loses power for X period of time?

IMG_0011

 

You're starting to ask yourself now, "Ok, I need to plan for emergencies, but how do I do it?" This is where our DR solution comes into play. For this design, we will be using two main products: vSphere Replication (vR) and VMware Site Recovery Manager (SRM). These are two different products that, when run in tandem, give you a solid means to recover in the event of a disaster.

vR enables the continuous replication of a virtual machine from one site to another. The decision to use vR instead of Array-Based Replication was made so that the choice of what to replicate could be made on a granular per-VM basis, as opposed to an entire datastore / LUN. vR is where we specify our RPO. You can choose how often you want to replicate a VM after the initial full seed. Our RPO for this design is 15 minutes, so we set the replication interval in vR to 15 minutes.

IMG_0013

The next piece of our design is the actual failover component: SRM. In SRM there are two major pieces that you need to configure in order to be ready to go: Protection Groups and Recovery Plans.

Protection Groups are simply logical groupings of VM's that you are trying to protect. In a 3-tier application stack, you'd want to protect the web servers, app servers, and database servers. As the DR site is not 1:1 hardware, the design decision was made to only protect 1 of the DB clusters, 1 set of App servers, and 2 Web servers. The bare necessities to run. If we had chosen Array-Based Replication, we wouldn't need to specify which VM's; it would simply replicate and protect all VM's on the chosen volumes.

IMG_0015

The second piece is the Recovery Plan. This is where you configure SRM's logic. Where is the primary site? Which VM's am I failing over? Where am I failing them over to? Should I start them in a particular order? Now, the second metric we need to meet is RTO: how long does it take you to recover? As long as vR and SRM are set up right, failing over is a fairly quick process. One of the biggest constraints here is how long it takes your recovery VM's to power on, validate, and move on. Meeting your RTO is not just a software goal; monitoring and engineering response, on top of the SRM Recovery Plan, are what meet the total goal of a 60-minute Recovery Time Objective.

IMG_0009

The recovery plan is configured exactly as the failover needs to go. There is a step-by-step logic here, from finalizing replication (if the PDC is still available), to bringing down the original VM's, to bringing up the recovery VM's. Here is where VM prerequisites (priority) are set. Our apps are 3-tier designs. Our DB servers start first. App servers are second. Then the Web servers come up once all other prerequisites are met.
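The DB → App → Web ordering is really just a dependency graph, and the priority logic can be sketched as a topological sort. This is an illustration of the ordering concept, not how SRM implements it internally:

```python
def boot_order(dependencies):
    """Order VM tiers so each tier powers on only after its prerequisites."""
    ordered, resolved = [], set()
    remaining = dict(dependencies)
    while remaining:
        # A tier is ready once everything it depends on has started
        ready = [tier for tier, deps in remaining.items() if set(deps) <= resolved]
        if not ready:
            raise ValueError("circular dependency in recovery plan")
        for tier in sorted(ready):
            ordered.append(tier)
            resolved.add(tier)
            del remaining[tier]
    return ordered

# The 3-tier priority described above: DB first, then App, then Web
dependencies = {"db": [], "app": ["db"], "web": ["db", "app"]}
print(boot_order(dependencies))  # ['db', 'app', 'web']
```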

recovery_plan

SRM allows you to run “test” failover scenarios that will validate all the replication, recovery VM’s, etc. It is a great way to validate your Disaster Recovery plan, without actually failing over. Though, doing live failover tests to DR is very important to test all the external variables such as monitoring and engineering response. I have an article about a particular test scenario with SRM and some duct tape here.


 

Thanks for reading!

-vTimD

 

Architecture Series: Storage

In my continuing efforts to grow in design, I am writing my next installment of the Architecture Series. This next bit is going to focus on storage.

Let’s start from the server and move through the physical fabric. As with the networking for this design which uses 2 x dual-port NIC’s, storage uses 2 x 8g dual-port FC HBA’s. In our environment, we go with Emulex LPe1200’s. They are placed in the host as HBA, NIC on top and HBA, NIC on bottom:

IMG_1088

The dual-ports on the HBA’s are split so that each HBA has a cable to the A-side and a cable to the B-side.

IMG_1086

As shown in the diagram below, the fabric provides redundant paths to each side of the storage fabric. Each side of the storage fabric has a link to one of the controllers in the storage frame. This gives each server 2 links to each fabric side, each of which has 2 links into the storage controllers. This provides tolerance of the failure of a storage controller, or of an entire side of the FC fabric.

IMG_1074

The storage frames all offer multiple tiers of storage for the customer. Tier 2a, Tier 2, Tier 3, and Tier 4. We have not had any use-case that requires an all-flash array at this point, so it is not currently available in our environment.

We cut LUN's at 2TB from the storage frame and present them to the hosts. This is much smaller than the 64TB maximum allowed. We name the LUN's based on the frame brand, frame ID, cluster, tier, and LUN #.
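As a quick illustration, that naming convention could be scripted like this; the separator and exact field layout here are invented for the example, since the post doesn't spell out the real format:

```python
def lun_name(brand, frame_id, cluster, tier, lun_number):
    """Build a LUN name from brand, frame ID, cluster, tier, and LUN #.
    Hypothetical layout — the actual in-house format is not documented here."""
    return f"{brand}-{frame_id}-{cluster}-T{tier}-{lun_number:03d}"

# Hypothetical frame and cluster identifiers
print(lun_name("EMC", "1234", "CLU01", 2, 7))  # EMC-1234-CLU01-T2-007
```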

Datastores

These are then grouped into their datastore clusters. The datastore clusters are broken down by frame brand, cluster, tier, and then a cluster ID. We do allow mixed frame ID’s in the clusters. We limit our clusters to 32 LUN’s even though the maximum supported is 64. This is simply to make things easier to manage from our perspective.

Datastore_Cluster


 

This has been another installment of the Architecture Series. Thanks for playing along!

-vTimD

Achievement Unlocked: VMware vExpert 2016!

The day has come! I have been working hard for the past year to try and get myself on the list. I really stepped up my blogging, community involvement, etc. And it paid off. I am officially a vExpert for 2016. Proud to have the moniker. I am among a list of great people in the community, and will do my part to wear it well. Congrats to all the other new vExperts, as well as the returning champions!

VMW-LOGO-vEXPERT-2016-k

-vTimD

Architecture Series: 10 Gig Pod

In my efforts to transition to the Infrastructure side of the house, I decided to hit the whiteboard a bit and explain the architecture of the current environment I am in. This is part of my Design Theory study (VCAP-Design) and is as much for reader benefit as it is for my own learning. I hope this brings forth questions and discussions. As a preemptive note: I am not the Principal Architect of this specific design. I merely inherited it, and am learning it while taking over. This is not a post that I am going to whittle down to be perfect as if I were submitting for VCDX. I will try my best to keep it clear, concise, and in a proper order from the top down.

So let’s get this party started. The environment that I support now has several vCenter servers. These are spread across several geographic locations. We do have one “Primary” location, that has 2 different buildings. The “main” building houses our primary vCenter. This vCenter houses a couple legacy 1-Gig clusters, and our primary 10-Gig environment.

Our 10-Gig environment is currently split into 2 pods. These pods were built to be scalable, as needed. As a vEUC guy, I equate this design to the Horizon View “pod and block” type architecture. Scalable Pods that can be built out as needed. It’s a popular concept these days. Maybe not in this exact design, but scalability is important.

Our Pods are built in sets of 3 racks. Unlike our 1-Gig environment where we run all cabling to the distribution switches, our 10-Gig pods utilize 2 x Force10 Z9000’s in a Top-of-Rack or TOR setup for each 3-rack pod. Each TOR switch and Server in the pod are dual-homed with A/B power to separate PDU’s. The building has multiple street-power providers, and is rated to withstand an F5 tornado. Here is a visual representation of the pod:

IMG_0993

The switches are setup in an A/B setup cross-connected to each other. The switches reside in the center rack in each pod, as it services the cabinet it resides in, as well as the neighbors to the left and right:

IMG_0994

The switch ports are all configured as trunks. We handle all of our tagging at the vSwitch. Each ESX host (R710's or R720's) houses 2 x 8Gb HBA's for storage, and 2 x dual-port 10-Gig NIC's for networking. We use 1:4 fan-out 40-Gig cables for network connectivity, like the ones here. Each 40-Gig cable has 4 ends (A, B, C, D). Each 40-Gig cable services the "A" or "B" switch side of 2 hosts, with 2 connections each.

IMG_0998

This leaves each host with an A + B (or C & D) from the “A Switch” and an A + B (or C & D) from the “B Switch”.  These are split out on the host to 2 x Virtual Distributed Switches across the environment:
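The fan-out layout can be sketched as a quick cable-plan generator: each 1:4 breakout's A+B ends go to one host and its C+D ends to the next, with one breakout per switch side. This is just my reading of the layout turned into Python, with hypothetical host names:

```python
def breakout_map(hosts):
    """Assign each 1:4 fan-out cable's four ends (A, B, C, D) to a pair of
    hosts, one cable per switch side. Layout assumed from the description."""
    plan = []
    for i in range(0, len(hosts), 2):
        pair = hosts[i:i + 2]  # two hosts share each breakout cable
        for side in ("A-switch", "B-switch"):
            ends = iter("ABCD")
            plan.append({
                "side": side,
                "ends": {host: [next(ends), next(ends)] for host in pair},
            })
    return plan

# Hypothetical pod hosts
plan = breakout_map(["esx01", "esx02"])
print(plan[0]["ends"])  # {'esx01': ['A', 'B'], 'esx02': ['C', 'D']}
```

Each host ends up with two ends from the A-side cable and two from the B-side cable, matching the A + B / C & D split described above.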

10GB-VM-Network

10GB-vKernel-Network

vCenter Networking

Two of the links (1 x A-side, 1 x B-side) go to the VM-Network Virtual Distributed Switch. The other 2 links go to the vKernel-Network Virtual Distributed Switch. The vKernel Switch for each host has 1 x Management and 1 x vMotion virtual adapter configured. The VM-Network Switch contains the tagged port-groups for all of the VLAN’s needed for our virtual machine traffic.


 

This concludes the first of what I hope to be many Design Theory / Architectural posts. Thanks for playing along!

-vTimD

SRM: Fail Back With No Reprotect

As promised in my last "Job Role" related post, I have taken over Infrastructure Management responsibilities and transitioned the vEUC Management role to my 2nd in command. So here is some new content related to one of my new roles: Disaster Recovery! While I was not the SME for Disaster Recovery operations until now, I am doing my best to take over, as my former colleague was the man in charge when it came to DR stuff.

Let me get this out of the way now. I do not like waking up early on a Saturday to participate in a critical application DR test. The part that I really don’t like? How the team running the application wants to do the test. They do not simply want to run the fail-over, let the app run for a while, then fail-back. They don’t want to test or touch any production data at the primary site. Now, you may be asking yourself why this is a problem. Let’s bring SRM into the picture now. Note that this blog, technically, is going to assume you’re already familiar with vR and SRM to a degree.

Site Recovery Manager, or SRM, is VMware's tool for Disaster Recovery in the vCenter environment. Paired with vSphere Replication, or vR, the software set allows you to continuously replicate active VM's over to your DR site, then, when prompted, fail over to that DR site. SRM allows you to set up what is called a Recovery Plan. This is essentially a script telling vCenter what to do in the event of a fail-over. The Recovery Plan is built on top of a Protection Group. A Protection Group is a logical grouping of servers that make up an application stack. In our case today, this is a database server, some app servers, and some web servers. The basic setup is pretty easy.

01 - Basic SRM

Now, in a normal SRM fail-over and fail-back, the process would be pretty straightforward.

Fail-Over:

  1. Replicate last bit of data
  2. Power off Primary Site VM’s
  3. Synchronize Storage
  4. Power on DR VM’s
  5. Verify

Fail Back:

  1. Start “reverse replication” back to Primary Site
  2. Power off DR Site VM’s
  3. Synchronize Storage
  4. Power on Primary VM’s
  5. Verify

This is a very straightforward process. The process of reverse replicating data back from DR to the Primary site is called “Re-Protect” in SRM. This way, data is sent from Primary to DR, then from DR back to Primary to ensure no data was lost in the process of failing over and back. This is not what my application team wants to do. They want to fail-over to the DR site with all the replicated data, then do the testing, then fail back without replicating any of the test data back to the Primary Site.

02 - No Reprotect

This is all fine and dandy, except for 1 major issue: SRM doesn't support failing back without re-protecting, which means reverse-replicating back to the primary site. This is where my problem is. Is there a solution? Of course there is. Is it as easy as hitting the play button for SRM? Not a chance. Let's run through exactly how we're accomplishing this DR test, the hard way.

06 - frown

The first phase is incredibly easy. SRM and vR are fantastic pieces of software magic. It takes me longer to confirm on the conference call that everyone is ready to fail-over than it does to start the fail-over. To begin, we go into SRM. Find the Protection Group you want to fail-over, and hit the big red play button to Run Recovery Plan:

03 - SRM

04 - SRM

05 - SRM

As you can see in the Recovery Plan progress area, step by step is listed out with status, times, and progress bars. Very automated. Very cool. Now, a change is made to our external DNS, and the application is now running at the DR site. Great.

Now, we’re ready to fail back. This is where the “unofficial procedure” comes into place. This is my first time completing this procedure, and I am not going to dive in to every single “click next” on this procedure. As stated before, I assume you have a general working knowledge of vR and SRM.

First things first: let's shut down the VM's at the DR site. This leaves the VM's offline at both the Primary site and the DR site. At this time, you can also make the DNS change back to the Primary site, as it will need time to propagate.

07 - Power off VM's

Once that is done, let’s delete the protection group from SRM. It will ask you to confirm, just go for it, as long as you know the settings that you will need to re-create it.

08 - Delete Protection Group

Now, since we pushed a "fail-over" of the Primary site VM's in SRM, the old Primary site VM's were flagged so that we cannot simply power them back on. The way to get around this is to remove the VM from inventory. Make a note of its datastore and cluster / resource group, as you will need to browse the datastore to manually re-add the VM to inventory. Once it is re-added, you'll have full control of power. Go ahead and flip it back on now.

09 - Remove Inventory

Once you confirm all the VM's are back online, let's get replication going again. Go into the Web Client. Yes, I could have done everything in the Web Client… but I don't want to. You do, however, need it to do anything with vR or SRM.

Make sure when you go to Configure Replication, that you select all the VM’s at once (or at least, more than 1). When you do it 1 by 1 the option to select an existing replication seed isn’t there. If you’re ok with completely replicating the seed over again, then do whatever you’d like.

10 - Configure Replication

Once that is setup, and all the configuration tasks are done, go into SRM and re-create the Protection Group.  Use the same settings you had before.

11 - Configure Protection Group

Now that your Protection Group is set up, if you didn't remove the DR site VM's from inventory, you'll get an error that the placeholder VM name already exists. I went into the DR vCenter and removed those VM's from inventory. Once that was complete, I went through the Protection Group 1 by 1 and selected Re-Create Placeholder. I used the defaults.

12 - ReCreate Placeholder

Once that is all complete, the only thing left to do is re-associate the Recovery Plan with the Protection Group. Go into the Recovery Plans and edit the one you want. During the wizard, just re-select the Protection Group you want to use. Make sure you noted down the test networks that you'll need for the Recovery Plan.

13 - Reconfigure Recovery Plan

Once the Recovery Plan is associated back with the Protection Group, SRM should show your Recovery Plan back at Ready status.

14 - All Done

We’re done! I hope this was good enough info to help someone else out in the future. If you want more detailed info, please feel free to reach out. Thanks!

-vTimD