Moving a VMware Horizon View virtual desktop between separate Horizon View environments
Requirements:
Sometimes you may build two distinct VMware Horizon View environments for separate business units, for Disaster Recovery, or for testing purposes.
In that case, a need may arise to move a virtual desktop between the independent Horizon View infrastructures.
Assumptions:
There are many ways Horizon View may be configured. However, this article assumes the following settings in both environments:
- Manual, non-automated, dedicated pools for virtual desktops
- Full clone virtual desktops
- All user data is contained inside the virtual desktop, most likely on drive C
- All virtual desktop disks (the VMDK for drive C and any others) are contained within the same VM directory on the storage
- Storage is presented to ESXi through the NFSv3 protocol
- Microsoft Active Directory domain is the same across both sites
- VLANs/subnets may be the same or different between the two sites
- DHCP is configured for the desktop VMs at both sites
- Virtual desktops run the Windows 7 or Windows 10 operating system
- Connection Servers do not replicate between environments
- No Cloud Pod federation
- Horizon View v7.4
- vCenter VCSA 6.5 Update 1e
- ESXi 6.5 for some hosts and 6.0 Update 3 for other hosts
There are other ways to move a virtual desktop when Horizon View is set up with automated pools and Linked Clones, but those are a subject for a future article.
The first Horizon View infrastructure will be called “Source” in this article. The second Horizon View infrastructure, where the virtual desktop needs to be moved, will be called “Destination.”
Instructions:
- Record which virtual desktop names are assigned to which Active Directory users on the Source side. You can do that by exporting a CSV file from the pool’s Inventory tab (a short script for parsing that CSV is sketched after these steps).
- If the Source Horizon View infrastructure is still available (not destroyed by a disaster event), continue with the following steps on the Source environment. If the Source Horizon View infrastructure has been destroyed by a disaster, skip ahead to Step 9 (the step that begins “At this point, how the datastore gets from the Source to the Destination…”).
- Power off the virtual desktop. Ensure that the pool in Horizon View Manager does not have a power policy that keeps powering the virtual desktop back on.
- In Horizon View Manager, click the pool name and select the Inventory tab.
- Right-click the desktop name and select Remove.
- Choose “Remove VMs from View Manager only.”
- In vSphere Web Client, right-click the desktop VM and select “Remove from Inventory.”
- Unmount the NFSv3 datastore that contains the virtual desktop from the Source ESXi hosts (the vSphere portions of this Source-side cleanup can also be scripted; see the Source-side sketch after these steps).
- At this point, how the datastore gets from the Source to the Destination will vary based on your conditions.
- For example, for testing purposes, the NFSv3 datastore can be mounted on the Destination hosts.
- In case of disaster, there could be storage array technologies in place that replicate the datastore to the Destination side. If the Source storage array is destroyed, go to the Destination storage array and press the Failover button. Failover will usually make the Destination datastore copy Read/Write.
- Add the NFSv3 datastore that contains the virtual desktop to the Destination ESXi hosts by going through the “New Datastore” wizard in vSphere Web Client (a scripted alternative using pyVmomi is sketched after these steps).
- Browse the datastore’s file structure, go to the virtual desktop VM’s directory, and find the .vmx file.
- Right-click the .vmx file and select “Register VM…”
- Keep the same name for the desktop VM as offered by the wizard.
- Put the desktop VM in the correct VM folder and cluster/resource pool, one that is visible to the Destination’s Horizon View infrastructure.
- Edit the desktop VM’s settings and select the new Port Group that exists on the Destination side (if required).
- Power on the desktop VM from the vSphere Web Client.
- You might get the “This virtual machine might have been moved or copied.” question.
- vSphere asks this question when it sees that the VM’s storage path does not match what was originally recorded in the .vmx file.
- Answering “Moved” keeps the UUID of the virtual machine, and therefore the MAC address of the network adapter and a few other things.
- Answering “Copied” changes the UUID of the virtual machine, and therefore the MAC address of the network adapter and a few other things.
- In the majority of cases (testing, disaster recovery), you will be moving the desktop virtual machine from one environment to another. Therefore, answer “I Moved It” to keep the UUID, and thus the MAC address, the same (a scripted way to answer this question and wait for the guest IP address is sketched after these steps).
- Wait until the desktop virtual machine obtains the IP address from the Destination’s DHCP server, and registers itself with the DNS server and Active Directory.
- Remember, we are assuming the same Active Directory domain across both sites. As a result, the desktop VM’s AD computer name and object will remain the same.
- Monitor the IP address and DNS assignment from the vSphere Web Client’s Summary tab for the desktop VM.
- In the Destination’s Horizon View Manager, click on the Manual, Full Clone, Non-automated, Dedicated pool that you have already created.
- If you did not create the pool yet, create a new pool and put any available VM at the Destination in the pool. That VM will just be a placeholder used to create the pool. Once the pool is created, you can remove the placeholder VM and keep only your moved virtual desktops.
- Go to the Entitlements tab and add any user group or users to be entitled to get desktops from the pool. Most likely, it will be the same user group or user that was entitled to the pool on the Source side.
- Select the Inventory tab and click the Add button.
- Add the desktop VM that you just moved.
- Check the status of the desktop VM. First, the status will say “Waiting for agent,” then “In progress,” then “Available.”
- Right-click the desktop VM and select Assign User.
- Select the correct Active Directory user for the desktop.
- Ask the user to log in to the virtual desktop using the Horizon View Client, or log in on behalf of the user.
- For the first login after the move, Windows may ask the user to “Restart Now” or “Restart Later.” Please direct the user to “Restart Now.”
- After the restart, the user can use the Horizon View Client to log in to the moved desktop at the Destination normally.
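
A quick way to keep the desktop-to-user mapping from the first step handy during the move is to parse the exported CSV with a short script. Below is a minimal Python sketch; the file name and the “Machine”/“User” column names are assumptions about the export format, so adjust them to match the header row of your CSV.

import csv

# Hypothetical file name; use the path of the CSV you exported
# from the pool's Inventory tab in Horizon View Manager.
CSV_PATH = "source-pool-inventory.csv"

def load_assignments(path):
    """Return a dict mapping desktop VM name to its assigned AD user."""
    assignments = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Column names are assumptions; match them to your export.
            machine = (row.get("Machine") or "").strip()
            user = (row.get("User") or "").strip()
            if machine:
                assignments[machine] = user
    return assignments

if __name__ == "__main__":
    for machine, user in sorted(load_assignments(CSV_PATH).items()):
        print(f"{machine} -> {user or '(unassigned)'}")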
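
The vSphere portions of the Source-side cleanup (power off, remove from inventory, unmount the NFSv3 datastore) can also be scripted with the open-source pyVmomi library; the removal from Horizon View Manager itself still has to be done in the console. Treat this as a rough sketch under stated assumptions: the vCenter address, credentials, VM name, datastore name and cluster name are placeholders, and it assumes the datastore holds only the desktop(s) you are moving.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details and object names for the Source environment.
VCENTER, USER, PASSWORD = "src-vcsa.example.com", "administrator@vsphere.local", "********"
VM_NAME, DATASTORE, CLUSTER = "Win10-Desk01", "desktops-nfs", "Desktops-Cluster"

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find_obj(vim.VirtualMachine, VM_NAME)
cluster = find_obj(vim.ClusterComputeResource, CLUSTER)

# Power off the desktop (do this only after Horizon View Manager has been
# told to stop powering it back on and the VM has been removed with
# "Remove VMs from View Manager only").
if vm.runtime.powerState == "poweredOn":
    WaitForTask(vm.PowerOffVM_Task())

# Remove the VM from the vSphere inventory without deleting its files
# (the equivalent of "Remove from Inventory" in the vSphere Web Client).
vm.UnregisterVM()

# Unmount the NFSv3 datastore from every host in the Source cluster.
for host in cluster.host:
    for ds in host.datastore:
        if ds.name == DATASTORE:
            host.configManager.datastoreSystem.RemoveDatastore(ds)

Disconnect(si)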
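
If you prefer to script the Destination-side datastore mount and VM registration instead of clicking through the wizards, here is a rough pyVmomi sketch. It is a starting point only: the vCenter address, credentials, NFS export, datastore name, .vmx path, cluster and folder names are placeholders, and the object lookup is deliberately simplistic (it takes the first name match).

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details and object names for the Destination environment.
VCENTER, USER, PASSWORD = "dst-vcsa.example.com", "administrator@vsphere.local", "********"
NFS_SERVER, NFS_PATH, DATASTORE = "nas01.example.com", "/vol/desktops", "desktops-nfs"
VMX_PATH = "[desktops-nfs] Win10-Desk01/Win10-Desk01.vmx"
CLUSTER, VM_FOLDER = "Desktops-Cluster", "Horizon-Desktops"

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

cluster = find_obj(vim.ClusterComputeResource, CLUSTER)
folder = find_obj(vim.Folder, VM_FOLDER)

# Mount the replicated NFSv3 export on every host in the Destination cluster
# that does not already have it.
spec = vim.host.NasVolume.Specification(remoteHost=NFS_SERVER, remotePath=NFS_PATH,
                                        localPath=DATASTORE, accessMode="readWrite",
                                        type="NFS")
for host in cluster.host:
    if DATASTORE not in {ds.name for ds in host.datastore}:
        host.configManager.datastoreSystem.CreateNasDatastore(spec)

# Register the desktop's .vmx under the Destination folder and cluster,
# keeping the name the wizard would offer (name=None).
WaitForTask(folder.RegisterVM_Task(path=VMX_PATH, name=None, asTemplate=False,
                                   pool=cluster.resourcePool))

Disconnect(si)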
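
The power-on, the “I Moved It” answer, and the wait for a DHCP address can be scripted as well. The following pyVmomi sketch is one possible approach, again with placeholder connection details and VM name; it matches the question and the answer choice by their display text, which is a heuristic rather than a documented contract.

import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for the Destination vCenter and the VM
# registered in the previous sketch.
VCENTER, USER, PASSWORD = "dst-vcsa.example.com", "administrator@vsphere.local", "********"
VM_NAME = "Win10-Desk01"

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == VM_NAME)
view.DestroyView()

vm.PowerOnVM_Task()

# The power-on task pauses if the "This virtual machine might have been moved
# or copied." question appears. Poll for it and answer with the "I Moved It"
# choice so the UUID, and therefore the MAC address, stays the same.
for _ in range(60):
    q = vm.runtime.question
    if q and "moved or copied" in (q.text or "").lower():
        moved = next(c.key for c in q.choice.choiceInfo if "moved" in c.label.lower())
        vm.AnswerVM(q.id, moved)
        break
    time.sleep(2)

# Wait for VMware Tools to report the DHCP-assigned address and host name.
while not vm.guest.ipAddress:
    time.sleep(5)
print("Guest is up:", vm.guest.hostName, vm.guest.ipAddress)

Disconnect(si)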
Virtual Desktops (VDI) on an Airplane
Recently, while flying on United Airlines, I noticed the WiFi sign on the seat in front of me. I had never used WiFi on a plane before, so I assumed it would be expensive. Imagine my surprise when it was cheap. It was probably cheap to compensate for the absence of TV displays.
I immediately thought of our CDI Virtual Desktop (VDI) lab in Teterboro, NJ (USA). Would the Virtual Desktop even be usable? How would video run? I connected immediately, started recording my screen and opened my Virtual Desktop. It worked! Everything except video worked well.
The idea came from Michael Webster, who had already tried this and written about it. I also wanted to do it in the Gunnar Berger style of protocol comparison. So, for your viewing pleasure — Virtual Desktops (VDI) on an Airplane.
——
Description:
This video is a demonstration of the Virtual Desktop (VDI) technology, located at CDI in Teterboro, NJ (USA) being accessed from an airplane 34,000 feet (10 km) high. Virtual Desktops allow you to use your Windows desktop from anywhere — even on satellite based WiFi. You will see PCoIP and HTML5 tests, Microsoft Word, HD video, YouTube video and vSphere client utilization.
Demonstration: Yury Magalif.
Lab Build: Chris Ruotolo.
Connecting From: Random clouds above Missouri, USA
Equipment and Software used:
VMware View
VMware vSphere
Cisco C-series servers
EMC XtremIO all-flash storage array
10Zig Apex 2800 PCoIP acceleration card with a Teradici chip
Inspired by:
Michael Webster’s blog article:
http://longwhiteclouds.com/2014/06/06/the-vmware-view-from-the-horizon-at-38000-feet-and-8000-miles-away/
Gunnar Berger’s low-latency VDI comparison video:
Is FCoE winning the war vs. Native FC?
Note: this article was first published in 2013 on my work blog Cloud-Giraffe. I am republishing it here because the article is no longer available from my work blog. The ideas here are still relevant.
I had a customer recently buy 2 Cisco Nexus 5548UP switches to be used only for their Fibre Channel (FC) storage functionality, which is 50% of what the switches can do. For Ethernet features, the customer bought separate Nexus switches. I did not advise on the design of this solution. However, it is a telling sign that Cisco’s effort to play down native FC functionality in favor of Fibre Channel over Ethernet (FCoE) is weakening. Unfortunately, promoting FCoE is proving to be harder than it looked, due to two main problems: confusion about the management of FCoE, and Cisco’s main storage switch competitor, Brocade, wielding formidable muscle to keep native FC alive.
In 2003, Cisco entered the FC native switch market with a splash. Cisco did this by perfecting one of their first spin-ins. A spin-in is when a company tells a few hundred of their engineers, “You will become a separate company and receive shares. Then, you will build us the best new technology on the market. If the tech is successful, Cisco will buy the company back, and all who received shares will profit handsomely.” Thus, Andiamo Systems was born and built the Cisco MDS FC storage switch.
At the time of their MDS FC switch release, Cisco’s primary competitor Brocade had 1/3 of the software features of the new MDS switch. I remember talking to my colleagues, old school FC guys, about the Cisco MDS. “Cisco does not know the FC storage market — they are network specialists. They will never release a switch as easy and robust as a Brocade or McData,” forecasted my bearded comrades. I believed them — they had years of experience.
Soon, Cisco was sending me to MDS courses, and the new CCIE certification in Storage was born. After I took the courses and realized the MDS’s feature dominance, I was hooked. Any geek worth his salt likes an abundance of gadgets — Cisco had Virtual Storage Area Networks (VSANs), FC pings, Inter-VSAN routing and Internet Small Computer System Interface (iSCSI) built in. This was when iSCSI storage was a novel, branch-only solution. But with Cisco, you could do iSCSI with ANY storage.
The only Cisco problem was cost. Cisco attacked the large enterprise market first. As a result, an entry-level MDS 9216 switch with all the wizardry cost $52,000 list. The price was prohibitive for smaller markets. However, Cisco’s access to large enterprise decision makers allowed the company to quickly dispatch the McData switch company. McData was gobbled up by Brocade. Brocade was not passive — it quickly ramped up development of software features. Further, Brocade’s main counter-bet was a raw hardware speed strategy: improve native FC chips faster than Cisco. As a result, Brocade was faster to market with 4 Gbit and then 8 Gbit FC.
At the time of Brocade’s 4 and 8 Gbit increases, most critics said that no client required such speed and that it was purely a marketing gimmick. Similar marketing-gimmick charges were also leveled against Cisco for the advanced software gadgets in their switches. The FC customer had a choice: raw FC speed vs. advanced software. While I enjoyed the advanced software, I admit that few clients used switch-based iSCSI and the extra Cisco features. Both Brocade’s FC speed dominance and Cisco’s software advantage were pure marketing. But with time, it became evident that faster chips prevail in the marketing argument. Software was faster to develop than custom Application-Specific Integrated Circuits (ASICs). Consequently, Brocade matched Cisco’s features and was winning on FC speeds. Pundits who had shot down the speed were silenced, as throngs of customers wanted faster Input/Output (I/O) for their growing VMware virtual server farms.
Still, Cisco had another card up its sleeve. Andiamo’s success was spectacular. The team behind the MDS was itching to disrupt another market, and Cisco was happy to oblige. Nuova Systems put together the same cast of characters in another spin-in. The spin-in secretly burrowed inside Cisco, building something unique. Speculation abounded, but when Cisco finally announced Nuova’s work, it was revolutionary.
Cisco/Nuova was the first to release the Nexus, a unified switch which supported native FC and Ethernet/Internet Protocol (IP) in the same box. Further, Cisco was the first to release the 10 Gbit FCoE protocol for their unified switches. Also, Cisco entered the server market with the Unified Computing System (UCS), a blade server platform with a network oriented architecture. Brocade was again caught off-guard, and went to its tried and true strategy — win with native FC, while countering Cisco’s feature superiority with “me toos.” Thus, Brocade was the first on the market with 16 Gbit FC. Meanwhile, Cisco touted the benefits of unified switching at 10 Gbit.
The promise of unified switching made complete sense. Why have 2 separate networks — storage and IP, which have similar concepts? Yet, in the 1990s, the storage market diverged onto a separate network path. Storage required a switching infrastructure, and the industry delivered the Storage Area Network (SAN). Developed in part by the SCSI guys mixed in with IP pros, FC was a robust protocol, perfect for storage. However, when FC expanded into multiple sites and routing, its management became similar to IP network management. Still, storage was managed by the storage team and IP by the network team. Separate, sometimes warring Network vs. Storage silos increased hardware and staff costs at large enterprises.
Cisco argued it could reintegrate the silos into one job role — an omniscient super Data Center guru. To that end, Cisco Certified Internetwork Expert (CCIE) certification in Storage was recently discontinued in favor of CCIE in Data Center. The Data Center CCIE requires knowledge of the server in Cisco UCS, the storage of Cisco MDS and Nexus FC, the Cisco Nexus Layer 2/3 network, and the Nexus 1000V virtual switching functionality. In Cisco’s view, the Data Center CCIE is the James Bond of the Data Center world, able to handle any problem thrown at him. I have no doubt that such people will exist, and they will assemble multi-disciplinary departments.
However, the current realities in the field seem to tell a different story. Over time, FC and storage developed into a whole separate area with its own deep expertise. The storage professional was required to know the management of multiple storage arrays from different manufacturers — EMC, NetApp, HP, Hitachi, Dell, and others. The number of storage protocols increased from just FC and occasional Fibre Channel over IP (FCIP) for routing across distances to iSCSI, Network File System (NFS), Common Internet File System (CIFS) and FCoE. Brocade absorbed McData, but Qlogic began to make FC switches.
On the network side, the amount of technology keeps increasing. The Routing and Switching CCIE of today has to know 3-5 times more than CCIE #1. Moreover, we have virtual and Software Defined Networking (SDN) disrupting the field. The future network guy has to know FabricPath, Overlay Transport Virtualization (OTV), Cisco’s new Cloud Services Router 1000V (CSR), VMware’s vCloud Director, Nicira, and Brocade-acquired Vyatta.
The FCoE protocol was meant to unify storage and network management through the Nexus switch. Unfortunately, what I see is that the network guys are weak in the Nexus FC functionality. Yes, they may have gone to a class to learn the MDS or the Nexus’s FC side, and they know the transport side. However, they lack the knowledge and control of the day-to-day management of the endpoints: server, virtualization and storage arrays. As a result, whoever controls the Nexus FCoE cannot control and troubleshoot the native NetApp FCoE card, and also cannot control LUN presentation in a UCS blade. But complete control of the FC stack is essential in FC troubleshooting. On the other hand, storage guys rarely want to delve into the network features of the Nexus switch. Consequently, because the Nexus is first and foremost a network switch, the storage guys never have daily administrative control.
When a shop is moving from one manufacturer to another, toward a converged network, it does not want to keep multiple manufacturers. Another customer asked me to design a Nexus-only solution, because they wanted to move away from Brocades. They had money for the Nexuses, but not for enough Nexuses to accommodate all the Brocade FC ports currently in existence. When I mentioned introducing Cisco MDSs, they said, “Why are you introducing native FC into the mix when the Nexus FCoE mantra is supposed to solve our storage needs?” The death of native FC, and especially of the Cisco MDS, has been predicted before. When Cisco retired the CCIE in Storage, that was another sign that the MDS native FC switch is on its way out. Yet, Brocade released 16 Gbit native FC 2 years ago.
Today, the Data Center has many management headaches due to convergence.
- Who will manage the server and its network devices (Converged Network Adapter cards, end-host I/O modules like Hewlett-Packard Virtual Connect or Cisco UCS Fabric Interconnects) — VMware guys, old school server guys, storage or network gals?
- Who will manage the network going from the server out to the first access device (end-host I/O modules, Cisco UCS Fabric Interconnects)?
- Who will manage storage and network transports (Cisco Nexus, Brocade switches)?
- Who will manage the network in a virtual network world (Cisco Nexus 1000V, CSR, Vyatta, Nicira, hardware Nexus, OpenStack)?
These management questions have not been settled. In fact, what I see is that the expertise of IT staff, like water, flows back and forth between departments. No one is quite sure where her responsibility ends and the colleague’s begins.
In this world of management confusion, the tried and true resonates. As a result, Brocade was successful in convincing the Data Center to wait on the death of native FC. And, just like before with 4 and 8 Gbit FC, Cisco had to answer. Therefore, the just-announced 16 Gbit FC Cisco MDS 9710 Multilayer Director and the upcoming MDS 9250i Multiservice Fabric Switch continue the time-honored tradition of the Brocade vs. Cisco FC war. In response, Brocade immediately pointed on LinkedIn to a press release touting added software features in their current code base. In addition, I am sure Brocade R&D is already well on the way to releasing 32 Gbit FC ASICs for the next round of battle.
Meanwhile, many IT departments, like my original customer, will continue to follow a dual path: native FC separate from the network, avoiding FCoE. That avoidance happens even on switches from the same manufacturer, Cisco, and even on the switch that fully supports FCoE, the Nexus. The native Fibre Channel protocol is alive and well.
Collateral for my session at the HP Discover 2014 conference
Thank you to the 260 people who attended my session and filled out the survey!
I am very grateful that you keep coming to hear what I have to say and hope to be back next year.
My presentation is called “TB3306 – Tips and tricks on building VMware vSphere 5.5 with BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage”
Returning for the sixth year in a row, this tips-and-techniques session is for administrators and consultants who want to implement VMware ESXi 5.5 (vSphere) on HP c-Class BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage. New topics will include the auto-deployment of domain configurations and Single Root I/O Virtualization (SR-IOV) for bypassing vSwitches. The session will focus on real-world examples of VMware and HP best practices. For example, you will learn how to load-balance SAN paths; make Virtual Connect really “connect” to Cisco IP switches in a true active/active fashion; configure VLANs for the Virtual Connect modules and virtual switches; solve firmware and driver problems. In addition, you will receive tips on how to make sound design decisions for iSCSI vs. Fibre Channel, and boot from SAN vs. local boot. To get the most from this session, we recommend attendees have a basic understanding of VMware ESX, HP c-Class BladeSystem, and Virtual Connect.
Here are the collateral files for the session:
Slides:
Use the #HPtrick hashtag to chat with me on Twitter:
June 16, 2014 — Monday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).
Speaking at the HP Discover 2014 conference.
This is the 6th year I have had the honor of speaking at the HP Discover conference.
Thank you to the past attendees who rated my sessions highly, and to the HP staff who picked my session.
Here is the official HP link to the session:
https://h30496.www3.hp.com/connect/sessionDetail.ww?SESSION_ID=3306
Thank you to Morgan O’Leary of VMware for highlighting my session on the official VMware company blog:
http://blogs.vmware.com/vmware/2014/06/register-now-hp-discover-las-vegas-2014.html
Session time:
Wednesday, June 11, 2014
9 am – 10 am Pacific Standard Time (12 pm – 1 pm Eastern Standard Time).
Room: Lando 4202.
My presentation number is TB3306 and it is called “Tips and tricks on building VMware vSphere 5.5 with BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage”
This tips-and-techniques session is for administrators and consultants who want to implement VMware ESXi 5.5 (vSphere) on HP c-Class BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage. New topics will include the auto-deployment of domain configurations and Single Root I/O Virtualization (SR-IOV) for bypassing vSwitches. The session will focus on real-world examples of VMware and HP best practices. For example, you will learn how to load-balance SAN paths; make Virtual Connect really “connect” to Cisco IP switches in a true active/active fashion; configure VLANs for the Virtual Connect modules and virtual switches; solve firmware and driver problems. In addition, you will receive tips on how to make sound design decisions for iSCSI vs. Fibre Channel, and boot from SAN vs. local boot. To get the most from this session, we recommend attendees have a basic understanding of VMware ESX, HP c-Class BladeSystem, and Virtual Connect.
You will be able to download the slides from my session the evening of June 12 on this blog.
Please live Tweet points you find interesting during the session, using the following hashtag:
#HPtrick
Look for suggested tricks in the slides.
In addition, use the #HPtrick hashtag to chat with me on Twitter:
June 16, 2014 — Monday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).
If you would like to attend the HP conference in person, please register:
https://h30496.www3.hp.com/portal/newreg.ww
Then, choose session TB3306 in the Session Scheduler:
Twitter Chat about HP blades, VMware, Virtual Connect & Storage
Use the #HPtrick hashtag to chat with me on Twitter:
June 18, 2013 — Tuesday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).
The chat will be about HP blades, VMware, Virtual Connect & HP Storage, to answer new and remaining questions from my HP Discover 2013 session “TB2603 – Building VMware vSphere 5.1 with blades, Virtual Connect and EVA.”
How to Twitter Chat:
Go to Twitter.com or the Twitter mobile app and search for #HPtrick using the universal search field at the top.
Then, tweet a question and make sure to include the #HPtrick expression (hashtag) in the question.
I will be monitoring the #HPtrick feed for questions and will respond with an answer, also including the #HPtrick hashtag.
Refresh your browser page or mobile app, and you will see the answer to your question.
See you there!
Collateral for my session at the HP Discover 2013 conference
Thank you to those who attended my session and filled out the survey! I hope to be back next year.
My presentation is called “TB2603 – Building VMware vSphere 5.1 with blades, Virtual Connect and EVA.”
The session will discuss how VMware ESXi 5.1 (vSphere) can be implemented on HP c-Class blades, Virtual Connect and EVA, as well as the Flex-10/10D module and WS460c Gen8 blade with eight GPUs. The presentation will focus on real-world VMware and HP best practices, including how to load balance storage area network paths, how to make Virtual Connect really “connect” to Cisco Internet protocol switches, how to configure virtual local area networks for the Virtual Connect modules and VMware virtual switches and how to solve firmware and driver headaches with Virtual Connect and ESXi 5.1. Attendees will also receive tips on design decisions. A basic understanding of VMware, c-Class blades and Virtual Connect is recommended.
Here are the collateral files for the session:
Slides:
HP Documents:
DISCOVER_2013_HOL2653_Student_LAB_Guide_Rev.1.1
Use the #HPtrick hashtag to chat with me on Twitter:
June 18, 2013 — Tuesday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).
Speaking at the HP Discover 2013 conference
This is the 5th year I have had the honor of speaking at the HP Discover conference.
Thank you to the past attendees who rated my sessions highly, and to the HP staff who picked my session.
Session time:
Wednesday, June 12, 2013
4:30 PM – 5:30 PM Pacific Standard Time (7:30 PM – 8:30 PM Eastern Standard Time).
My presentation is called “TB2603 – Building VMware vSphere 5.1 with blades, Virtual Connect and EVA.”
The session will discuss how VMware ESXi 5.1 (vSphere) can be implemented on HP c-Class blades, Virtual Connect and EVA, as well as the Flex-10/10D module and WS460c Gen8 blade with eight GPUs. The presentation will focus on real-world VMware and HP best practices, including how to load balance storage area network paths, how to make Virtual Connect really “connect” to Cisco Internet protocol switches, how to configure virtual local area networks for the Virtual Connect modules and VMware virtual switches and how to solve firmware and driver headaches with Virtual Connect and ESXi 5.1. Attendees will also receive tips on design decisions. A basic understanding of VMware, c-Class blades and Virtual Connect is recommended.
Here is the official HP link to the session:
https://h30496.www3.hp.com/connect/sessionDetail.ww?SESSION_ID=2603&tclass=popup
You will be able to download the session slides starting the evening of June 12 on this blog.
This year, I decided to do some Social Media experiments.
So, please live Tweet points you find interesting during the session, using the following hashtag:
#HPtrick
Look for suggested tricks in the slides.
In addition, use the #HPtrick hashtag to chat with me on Twitter:
June 18, 2013 — Tuesday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).
If you would like to attend the HP conference in person, please register:
https://h30496.www3.hp.com/portal/newreg.ww
Then, choose session TB2603 in the Session Scheduler: