9 Essential Tips for Virtual Desktop Security


  • #1 Do Not Use Persistent Virtual Desktops


Always use non-persistent virtual desktops. They are more secure because they are refreshed from their original image. Persistent virtual desktops behave like physical desktop PCs and are more susceptible to malware, virus infections, and corruption. They may be more difficult to implement and manage, with more requirements, but they are the safer bet in the long run.


Some users may be inconvenienced when their personal files such as Microsoft Word documents that they saved may no longer appear after a desktop refresh. However, as an administrator, you can address this problem by configuring the environment to save personal files and other auxiliary settings and restore them from the user’s network profile after they log in again.


Even though more time is required for managing a non-persistent, refresh-ready virtual desktop environment, this investment is well worth the effort. As a case in point, a public school made the smart decision to virtualize about half of their nearly 1,000 desktops. When a virus attack was detected, they simply advised their users to log off. That action alone was all that was required to eradicate the virus from all user-accessible VDI desktops, in only about five minutes. Half the network was spared, with only the physical desktops and a few servers needing attention. Those non-virtualized PCs required considerable time for remediation. Therefore, it is advisable to virtualize the vast majority of your computing resources. For example, imagine the security you would enjoy if fully 90% of your desktops were virtual and only 10% of resources (typically servers) remained as physical hardware devices.


  • #2 Maintain Agentless Anti-Virus


Most PCs are running a standard anti-virus package. Don’t scale back on dedicated anti-virus. But if you want to optimize performance, you’ll need an agentless anti-virus solution. In tests, typical anti-virus software decreased storage IOPS performance by as much as 30 percent.


Consider an agentless option at the hypervisor level, where a light agent is built into VMware Tools on every virtual machine. Because the agent is so small, the solution is considered agentless. VMware NSX or vShield also provides a framework for agentless anti-virus, and you can run a product like Trend Micro Deep Security or McAfee MOVE on your infrastructure servers. The result is fully agentless anti-virus scanning on your virtual desktops.


When a user logs on, they get a fresh virtual machine with no virus. While using the desktop, real-time scans prevent a virus. And when the user logs off, the desktop is refreshed from a clean image. Again, no viruses.


Some customers (schools, municipalities, or small businesses looking to save money) might skip agentless anti-virus, or even skip licensing a standard anti-virus package on virtualized machines entirely. This is a poor decision. In these environments, a virus, once introduced, will persist and spread. Even refreshing a virtual desktop won't help if the virus survives elsewhere on the network; the clean desktops will simply be reinfected, and the recurrence will continue. Having all users log off reduces infection risk dramatically, but the underlying threat continues to exist. You must maintain real-time anti-virus protection, and agentless options are preferred because they eliminate the 30% performance hit.


  • #3 Disable Multiple Virtual Desktop Logins


Do not allow the same user to log on to multiple virtual desktops at the same time. As an administrator, you need to disable that setting.


The following example illustrates a potential problem scenario that you want to avoid:


A user logs on to their PC. Later that day, that user logs into a virtual machine (VM1). Without logging off of either machine, they go home and decide to use a remote connection to the same machine (VM1) or even a different one (VM2). The security concern is that the session on VM1 is still open and vulnerable while that user is not present. Anyone walking by the PC can assume control of that virtual session.


As a precaution, institute the following network security policy:


Whenever the same user logs into another virtual desktop, automatically log them off the previous machine or virtual desktop.
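
The policy above can be sketched in a few lines. This is a minimal, hypothetical session tracker to illustrate the behavior; real VDI brokers expose single-session enforcement as a built-in setting rather than requiring custom code.

```python
# Sketch of the "one active session per user" policy.
# The SessionBroker class and its methods are illustrative, not a real API.

class SessionBroker:
    def __init__(self):
        self.active = {}  # username -> desktop currently holding the session

    def login(self, user, desktop):
        previous = self.active.get(user)
        if previous is not None and previous != desktop:
            self.logoff(user, previous)  # force-close the stale session
        self.active[user] = desktop
        return previous  # the desktop that was logged off, if any

    def logoff(self, user, desktop):
        if self.active.get(user) == desktop:
            del self.active[user]

broker = SessionBroker()
broker.login("alice", "VM1")
kicked = broker.login("alice", "VM2")  # VM1 session is closed automatically
```

The key design point is that the new login, not a timeout, is what closes the stale session, so the open and unattended VM1 desktop is never left vulnerable.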


  • #4 Use Two-Factor Authentication


Strength and options depend on the vendor technology, but generally speaking, we’re talking about a strong password plus a second form of physical or biometric authentication. Authentication providers include Okta, Imprivata, RSA, Duo, Yubico and others.


You want to enable and maintain an effective two-factor authentication arrangement to prevent unwanted cyber-attacks, data breaches, security intrusions, viruses, malware, and hacks from home or remote PCs.


  • #5 Use Single-Sign-On (SSO) Tools


Network policies typically enforce strong passwords and force users to change their main desktop password (the one used to establish SSO to network applications) every 90 days. Strong SSO password policies typically enforce a minimum length and a required mix of special characters, letters, and numbers, and prevent common strings or recycled passwords as a precaution.
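
A password policy like the one just described can be expressed as a short checklist. The specific rules below (12-character minimum, required character classes, a small banned-string list) are illustrative examples, not a recommendation from any particular vendor.

```python
import re

# Illustrative SSO password-policy check. The thresholds and banned
# strings are example values only.
def meets_policy(password, min_length=12, banned=("password", "123456")):
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),           # at least one uppercase letter
        re.search(r"[a-z]", password),           # at least one lowercase letter
        re.search(r"\d", password),              # at least one digit
        re.search(r"[^A-Za-z0-9]", password),    # at least one special character
        not any(b in password.lower() for b in banned),  # no common strings
    ]
    return all(bool(c) for c in checks)

assert not meets_policy("Password123!")   # rejected: contains a banned string
assert meets_policy("Tr1cky!Mongoose#9")  # passes every rule
```

In production you would enforce this in the directory service itself (for example, via Active Directory password policy) rather than in application code.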


With SSO, instead of multiple passwords, users only have to remember one. They are automatically logged into their network applications based on their desktop ID in the corporate LDAP directory, Active Directory, or user store. Even remote cloud-hosted applications such as Salesforce.com and Office 365 can authenticate users with SSO. That one SSO password is more convenient for both backend administrators and users. Administrators don't have to maintain separate user stores with their own password policies, and users can typically remember their password without writing it down or copying it from an unprotected Excel file.


Security is also improved because there is a 1:1 ratio of unique, identifiable usernames to real human employees, as opposed to an environment without SSO where a single person might have 10, 20, or more different usernames that obscure the very notion of an authentic identity. The trade-off is that SSO concentrates risk: hacking or phishing a single set of SSO credentials gives the attacker access to far more data, whereas separate passwords would limit the damage of any one breach.


Biometrics, once confined to science fiction, Hollywood, and television, are common today, including fingerprint, thumbprint, and retina pattern scanning. Thus, you can combine biometric two-factor authentication with SSO for an easy-to-use yet secure solution. For the next 10-15 years, dual authentication consisting of a thumbprint or retina scan paired with a traditional memorized password seems likely to remain the de facto two-factor gold standard for government security. Financial institutions are likely to continue one tier below that with a silver standard consisting of a password and a dynamically generated temporary code.


  • #6 Restrict Access by Device Type


You can and should restrict access by device type. This involves establishing policies on Windows or Mac servers that restrict access by device type. These restrictions help you respond to the bring-your-own-device (BYOD) mania that took over corporate wireless networks over the past 10-15 years. More secure variations on this theme include restricting access to pre-configured Windows-based thin clients (good), Linux-based thin clients (better), and even more secure zero-clients (best).


Thin clients are typically Windows or Linux workstations. As such, they can contain viruses. For example, malware on a thin client could include keystroke-capture spyware that compromises the virtual desktop credentials. Linux and Mac clients are considered more secure than Windows devices because of Windows' large market share, and thus the larger number of existing Windows viruses.


A more secure alternative is to procure zero-client hardware right from the start. These are dedicated hardware devices with no OS and only a standard BIOS architecture. Zero-clients are available from 10zig, Dell, HP, Samsung, and other popular vendors. A zero-client has no other function but to provide a secure connection to the virtual desktop. For that reason, since they have no OS or other local apps, they are very secure. Windows-based thin clients are not as secure and still remain susceptible to viruses.


For example, an innovative hospital recently began wheeling out patient care carts with diagnostic equipment, each cart carrying its own Apple iPad used to bring up a virtual patient chart. The administrators established a policy allowing exclusive access to a patient care app on a virtual desktop infrastructure locked down beyond the reach of other devices.


The following access restriction strategies are common:


You can prohibit connections from certain unwanted devices. For example, you can allow or deny access to users with a PC, Mac, a specific OS, a specific set of login credentials to a virtual desktop, an iPad, an iPhone, a tablet, an Android device, a Windows phone, a Chromebook, or a specific mobile OS. (Hint: Based on recent history, Apple iOS devices are more secure than Android devices.)
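
An allow/deny policy of this kind reduces to a simple membership check at connection time. The device types and OS names below are hypothetical placeholders for whatever your broker reports.

```python
# Hypothetical device-type access policy: allow only known client types,
# and deny specific OS versions considered too old to trust.
ALLOWED_TYPES = {"zero-client", "linux-thin-client", "ipad"}
DENIED_OS = {"windows-xp", "android-4"}

def access_allowed(device_type, device_os):
    # Both conditions must hold: recognized device AND acceptable OS.
    return device_type in ALLOWED_TYPES and device_os not in DENIED_OS

assert access_allowed("zero-client", "none")            # zero-client: permitted
assert not access_allowed("byod-laptop", "windows-11")  # unknown type: denied
```

Note the default-deny posture: any device type not explicitly listed is rejected, which is the safer stance for BYOD environments.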


You can use management tools to establish policies that secure your own preset thin-clients or zero-clients. For example, Apple utilities and third-party management tools can turn the iPad into a zero-client.


For maximum security, you can reduce the number of access points to your network by enabling client security certificates. Essentially, you enable a tool for handling certificates and then use management software to push a policy to all approved thin or zero-clients to verify a certificate before allowing login.


  • #7 Configure VDI Servers, Desktops, and Devices on Separate VLANs


Do not use the same VLAN for all network components. For optimum performance and security, you want your virtual desktops, access devices, and infrastructure servers on their own separate VLANs. When on the same VLAN, a weak access point such as a PC with an older OS might become infected with a virus that would easily spread to other virtual desktop clients on the same VLAN. Even servers are not immune when on the same shared VLAN.


Separate VLANs with discrete gateways also add variation to IP addresses, which makes device hacking more difficult. Another benefit is that more DHCP IP addresses are available because you are splitting access across VLANs. On a single Class C (/24) VLAN, you would be limited to 254 usable host addresses behind one gateway.
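
The address math is easy to verify with Python's standard `ipaddress` module. The subnet numbers below are illustrative.

```python
import ipaddress

# A traditional Class C network (a /24) contains 256 addresses, of which
# 254 are usable hosts once the network and broadcast addresses are excluded.
vlan = ipaddress.ip_network("192.168.10.0/24")
print(vlan.num_addresses)        # 256
print(len(list(vlan.hosts())))   # 254

# Splitting desktops, access devices, and servers onto three /24 VLANs
# triples the usable pool and keeps each segment behind its own gateway.
vlans = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in (10, 20, 30)]
print(sum(len(list(v.hosts())) for v in vlans))  # 762
```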


  • #8 Use Network Micro-Segmentation


Gaining in popularity, especially among big government, banking, finance, and pharmaceutical organizations, a micro-segmentation security strategy integrates directly into the VDI without a hardware firewall. Your network policies are synchronized with a virtual network, virtual machine, OS, or other virtual security target to create a security bubble. Access control capabilities in virtual switches replace existing firewall functions for segregation and controlled access across data center tenants.


Micro-segmentation is ideal for today's software-defined networks with virtual desktops and pools of users on multiple smaller devices. For example, let's say you want to protect a pool of desktops for the accounting business unit. That department stores very sensitive information and you must maintain a secure environment. With micro-segmentation, you allocate virtual desktops in that specific zone so they can only communicate with the Internet and the VDI servers, and are blocked from seeing any other desktops. Restricting IP traffic to sibling desktops is extremely effective at neutralizing the spread of malware or viruses.
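
The accounting-pool example can be sketched as a tiny rule table. The zone names and rule format here are illustrative, not taken from any specific micro-segmentation product.

```python
# Sketch of a micro-segmentation policy for an accounting desktop pool:
# desktops may reach the Internet and the VDI infrastructure servers,
# but all desktop-to-desktop ("east-west") traffic is dropped.
RULES = [
    ("accounting-desktops", "internet",            "allow"),
    ("accounting-desktops", "vdi-servers",         "allow"),
    ("accounting-desktops", "accounting-desktops", "deny"),
]

def evaluate(src_zone, dst_zone, default="deny"):
    for src, dst, action in RULES:
        if (src, dst) == (src_zone, dst_zone):
            return action
    return default  # default-deny keeps any unlisted path closed

assert evaluate("accounting-desktops", "vdi-servers") == "allow"
assert evaluate("accounting-desktops", "accounting-desktops") == "deny"
assert evaluate("accounting-desktops", "hr-desktops") == "deny"  # default
```

In a real deployment these rules live in the virtual switch or the hypervisor's distributed firewall, so they follow the virtual machine wherever it runs.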

  • #9 Ensure Regular Windows Updates and Other Patches


Make sure to update your desktops with the latest Windows Updates and other patches. Even though you may use non-persistent desktops, you should regularly update your parent image and recompose your desktop pool so every desktop gets the updates. Don't forget about Adobe Flash, Java, and other updates for your applications. For non-persistent desktops, this may be a single manual update to the parent image. For persistent desktops, it may require Microsoft WSUS or software deployment tools like Microsoft SCCM, LANDesk, or Kaseya.

10 Security Best Practices for Mobile Device Owners

Don’t be alarmed by these statistics. More importantly, don’t become a statistic yourself. I’m sharing a few factoids here to help protect you, as one of the nearly 4.6 billion mobile device users out there (Gartner).

  • Cybercrimes including hacking and theft cost American businesses over $55 million per year (Ponemon Institute)
  • Every month, one in four mobile devices succumbs to some type of cyber threat (Skycure)
  • Last year in the United States alone, over five million smartphones were stolen or lost (Consumer Reports)

Who is responsible for such mayhem? Hackers, of course, and online thieves all over the world.

But who is responsible for protecting your device? You are.

As IT and Networking professionals, we can manage mobile device security around the clock, seven days a week, 365 days a year, but it is you, the mobile device owner or user, who ultimately determines the relative health of your smartphone or tablet and the level of security you want to experience.

To protect your mobile device, follow these recommended best practices:

  1. Lock your device with a passcode: One of the most common ways your identity can be stolen is when your phone is stolen. Lock your device with a password, but do not use common combinations like 1234 or 1111. On Android phones you can establish a swipe security pattern instead. Always set the device to auto-lock when not in use.
  2. Choose the Right Mobile OS for Your Risk Tolerance: Open source integrations, price, and app selection might guide you toward Android or Windows phones; however, Apple devices running iOS are generally more secure. A recent NBC Cybersecurity News article revealed that Google’s Android operating system has become a primary target for hackers because “app marketplaces for Android tend to be less regulated.” Hackers can more easily deploy malicious apps that can be downloaded by anyone. As an example, the article reported that over 180 different types of ransomware were designed to attack Android devices in 2015. If you’re an Android owner, fear not. Consumers who choose Android can still remain safe by being aware of the vulnerabilities and actively applying the other tips in this article.
  3. Monitor Links and Websites Carefully: Take a moment to monitor the links you tap and the websites you open. Links in emails, tweets, and ads are often how cybercriminals compromise your device. If it looks suspicious, it's best to delete it, especially if you are not familiar with the source of the link. When in doubt, throw it out. If you have Android and your friend has an iOS device, and you both have a link you are not sure about opening, open the link on iOS first. This practice allows you to check out the link while lowering your exposure to risks including malware.
  4. Regularly Update Your Mobile OS: Take advantage of fixes in the latest OS patches and versions of apps. These updates include fixes for known vulnerabilities. (To avoid data plan charges, download these updates when connected to a trusted wireless network.) Every few days, and especially whenever you hear news about a new virus, take the time to check for OS updates or app patches. In 2016, an iOS 9.x flaw resulted in a vulnerability for iPhone users where simply receiving a certain image could leave the device susceptible to infection. Apple pushed out a patch. A year ago a similar flaw was detected on Android devices; however, the risk to users was significantly greater, impacting 95 percent of nearly one billion Android devices. An expected 90-day patch was late. Meanwhile, the flaw allowed hacking to the maximum extent possible including gaining complete control of the phone, wiping the device, and even accessing apps or secretly turning on the camera. Don't ignore those prompts to update! At this point you may be asking, "Do I need a separate anti-virus app, especially if I use an Android device?" To answer that question, balance your need for security against how much risk you plan on taking with your device. Do you often use public wireless networks and make poor choices with the links you open? For now, you may not need an anti-virus app; however, some early industry trends are showing more anti-virus apps on the horizon.
  5. Do Not Jailbreak Your Smartphone: Reverse engineering and unauthorized modification of your phone (jailbreaking) leaves your phone vulnerable to malware. Even jailbreaking an iOS device leaves it open to infections. If your cousin already customized your device for you, it’s not too late. Restore the OS through the update process or check with an authorized reseller.

For the rest of the tips please read my work blog:


The Importance of Being Earnest in Monitoring Your Virtual Desktop User Experience

Did you recently complete a long-awaited project to upgrade your network and virtualize your PCs, data centers, and infrastructure?

I’m guessing you might be facing some challenges with monitoring how it went and how users are enjoying (or NOT enjoying) their virtual desktop experience.

While your IT Director does glow a bit more radiantly walking down the hallway and whistle a bit more frequently in the elevator now that bulky physical desktops are gone, you still need to troubleshoot problems and optimize performance.

Plus, the new CIO wants a report that validates your infrastructure changes were and will continue to be a sound investment and the executive team wants to know in advance about any performance bottlenecks.

They ultimately want to snapshot, quantify, and track changes in the user experience for all users, on all devices, 24/7!


Monitoring the Virtual Desktop User Experience

In the past, physical machines offered IT shops the opportunity to customize the user experience (UX). Christine in marketing had more RAM than Bill in accounting, and Ramesh in services had access to more network storage than either of them.

But with virtual machines, many shops do not monitor user experience and use a policy where all 20,000 employees get exactly the same virtual desktop: same processor, same RAM, same configuration, and same access to resources.

As you might imagine, Ramesh would be cursing your IT staff through a support chat app, and Bill would have far more resources than he needs.

Christine just walked out.

In other words, without monitoring the user experience, this failed policy would:

  • Upgrade low-demand users who did not require access to advanced resources.
  • Downgrade high-demand power users who previously enjoyed a superior level of service.

Therefore, remember this important rule of virtualization—because you can dynamically allocate and throttle resources, monitoring the user experience is even more important than it was in the physical environment.

Four Reasons Why You Should Monitor the User Experience

  • Constant adjustments require usage data for maximum optimization. Monitoring helps you discover areas of improvement.
  • Users would otherwise experience issues and wrongly assign blame to virtualization.
  • Opportunities for automation, enhanced collection, and dynamic real-time reallocation of resources.
  • You can now monitor far more easily than you ever could in a physical environment.


How to Measure the Virtual User Experience

A rule-of-thumb in this business is that the virtual user experience must be at least the same or better than the physical experience. We can’t declare victory until that assertion is shared by a clear majority of users.

Naturally, you may be wondering, how do we measure that objectively?

Let me address three primary methods below.

1. Delays and Crashes

First, establish a rubric or benchmark based on a standard set of factors. Track the following three parameters and chart trends over time:

  • App Load Delay
  • Login Delay
  • App Not Responding (ANR) and Similar Crashes


Delays and crashes are strong indicators of user frustration level. In any given range of time (3 weeks, 3 days, or 3 hours), these numbers are going to point to the issue. Remember, lower numbers are better when it comes to measuring load times and crashes. Four crashes are better than 40 and a three-second load time is better than 30 seconds.

Like the indicators used by economists to describe trends in the business market, these are lagging indicators, akin to existing home sales, jobless claims, and new jobs for the past month. Lagging indicators reliably report on events that have already occurred.
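
Charting these lagging indicators over time is straightforward. The sketch below, with made-up load-time samples and an assumed 25% regression threshold, shows the kind of comparison a monitoring rubric would automate.

```python
from statistics import mean

# Sketch: track app-load delay (seconds) over time and flag a regression
# when the recent average exceeds the baseline by more than a threshold.
# Sample values and the 25% threshold are illustrative assumptions.
baseline = [2.1, 2.3, 2.0, 2.2, 2.4]   # load times before a change
recent   = [3.0, 3.4, 2.9, 3.2, 3.1]   # load times after the change

def regressed(baseline, recent, threshold=0.25):
    """True if the recent mean is more than `threshold` worse than baseline."""
    return mean(recent) > mean(baseline) * (1 + threshold)

assert regressed(baseline, recent)       # ~42% slower: worth investigating
assert not regressed(baseline, baseline) # comparing a period to itself: fine
```

Remember the rule from above: lower is better, so the comparison only triggers when the recent mean climbs.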


2. Technical Metrics

Second, track the following four major technical metrics:

  • RAM
  • CPU
  • Disk Storage
  • Network Traffic

Trending data from those four metrics add up and empirically point to general environment issues that contribute to user frustration.

To continue our economic metaphor, these are leading indicators, such as bond yields or new housing starts. They are based on conditions that offer insight into what might occur, if we can quickly assess the data and make accurate predictions. For example, don't cut over to a new enterprise app that uses a lot of RAM if two-thirds of desktops are already reporting out-of-memory issues.
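
The cutover example reduces to a fraction check. The per-desktop memory figures, the 85% pressure threshold, and the two-thirds cutoff below are all illustrative assumptions.

```python
# Sketch of the leading-indicator check described above: before cutting
# over to a RAM-hungry app, measure what fraction of desktops already
# report memory pressure.
desktop_mem_used_pct = [91, 95, 88, 45, 97, 93, 52, 96, 90, 94]

def safe_to_cutover(used_pct, pressure_at=85, max_fraction=2 / 3):
    # Count desktops at or above the pressure threshold.
    pressured = sum(1 for p in used_pct if p >= pressure_at)
    return pressured / len(used_pct) < max_fraction

print(safe_to_cutover(desktop_mem_used_pct))  # False: 8 of 10 desktops pressured
```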

3. User Experience Feedback Surveys

Third, conduct user experience feedback surveys. Because individual results will be swayed by the current mood of each user in a highly subjective manner, you'll need participation and feedback from many users before the aggregate reliably reflects the population as a whole.
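
How many responses is "many"? For a proportion-style survey question, the standard sample-size formula with a finite-population correction gives a concrete target. The 95% confidence level and 5% margin of error below are conventional assumptions, not requirements.

```python
import math

# Sample-size estimate for a proportion-style survey question:
#   n0 = z^2 * p * (1 - p) / e^2   (infinite population)
# then apply a finite-population correction.
def sample_size(population, z=1.96, p=0.5, margin=0.05):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(20_000))  # ~377 responses for 20,000 employees
```

In other words, even for a 20,000-employee firm, a few hundred well-sampled responses per survey cycle is enough to track the population trend.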

You might include the following survey questions:

  • How would you rate the speed of your virtual desktop?
  • Would you consider any of the applications you use to be slow?
  • If YES, please list which apps are slow and the time of day when they are slow.
  • List any applications that you have used in the past three months that crashed.
  • How often did each application crash?


Consult with your data scientist or marketing team to carefully construct the questions in your survey. For best results, you want to invest up front in getting the first survey as accurate as possible, and consistently track future results.

Monitoring Software

Skip the attempt to build a custom solution in-house. A few commercial tools are available to help you collect user experience data. Most solutions provide views with metrics that track architecture specs, infrastructure changes, desktops, laptops, workstations, kiosks, terminals, other devices, users, and apps.

Market tools include:

  • Liquidware Stratusphere UX: The reliable established market leader in this segment.
  • Lakeside Systrack: A good tool for automated reports and dashboards.
  • ControlUp: Their real-time product includes a responsive dashboard that helps you resolve issues quickly.
  • Nexthink: Another real-time product with historical usage and IT service performance records, visualizations, actionable dashboards, reporting, and feedback surveys integrated.

These solutions also include built-in root cause analysis and problem identification.

They all tend to be strong at monitoring crashes, delays, and metrics; however, they typically lack an end-user survey feedback function. Nexthink is an exception. It delivers on all three points I made in the previous section, including surveys, but has some other disadvantages such as configuration requirements and cost.

When it comes to evaluating the costs and features of these competitors, I invite you to compare and decide for yourself. I will suggest that you can likely conduct the surveys yourself using SurveyMonkey, SurveyGizmo, GetFeedback, or another popular online survey tool.

Data Collection Tips:

  • Collect metrics and feedback data for as large a user pool as possible with a consistent number of users. For example, if you cannot survey all 15,000 employees, poll 1,000 every quarter. If you can do it every 60 days or monthly, that’s even better. You also want to have data before a change to serve as a baseline, and after a change to make comparisons. For example, immediately before and immediately after a shift from physical to virtual desktops.
  • Run the delay, crash, and technical metrics tools as often as possible. You want them capturing data almost constantly. Compare the data every month, examine reports, and look for trends.
  • It’s also important to note that all the tools I mention are strictly for monitoring. They don’t perform any corrective actions. You could script your own, but most organizations today are cautious about building yet another in-house custom solution when the cloud promises so much including everything from automation to updates.
  • Corrective automation tools on servers are available; however, not for virtual desktops. Some server real-time resource allocation features exist in Turbonomics and VMware vRealize Operations Manager/Automation.

Evaluate the Trends

After collecting the data, examine any trends. If you see an increase in crashes, delays, helpdesk tickets, and other common issues, the overall user experience at your organization is in trouble. Like a crime drama or forensics TV show, go into analysis mode to determine why.

Use the feedback surveys to substantiate the trends. It works both ways; you can also use the metrics to support a trend in user feedback results.

For users who report poor performance, your survey should also ask them to specify when it occurs. If you can, try to pinpoint a two-hour window. Then focus on that time and try to determine a root cause. You also have names and machine IDs to go on.

Other forensic analysis tips:

  • Analyze just two or three users: They will reveal findings representative of a larger audience. Troubleshooting forensics for dozens of users will yield too many data points and too much variability.
  • Focus on snippets of user experience feedback: For example, three users reported crashes while using the same streaming app at the same time.
  • Look for patterns: For example, every 30 days you notice a block of days with high disk utilization metrics. Run another report for just that week and look for trends and sustained peaks. Within those peaks focus on just three hours, then one hour.
  • Filter out false positives: For example, when you upgraded to a new application, everyone's RAM suddenly looked insufficient in the metrics; however, a patch the following week fixed a known memory leak.
  • Memory is critical: The most common issues center around insufficient resources. Users often need more RAM. It’s typically more important than processor speed or flash storage.


Next Steps

After running monthly reports and tracking the trends, narrow your analysis window and draw your conclusions. It’s typical to prioritize the corrective actions that you want to make.

For example, after identifying a storage bottleneck or memory issue that impacts 500 users, you might choose to allocate more memory to the top 50 and monitor that change for a few days.

A perception issue also plays a role. Studies show that users do not notice an improvement unless it signifies at least a 20 percent increase over the previous state. In other words, don’t spread resource allocation adjustments so thin that each user is given a two percent incremental bump-up every six months. They won’t even notice the change. Better to boldly introduce a 20 percent increase today. Your users will definitely notice the improvement.

Monitor changes and look for new patterns for at least two full weeks after a significant change. Compare data before, during, and after the change. Look at variances expressed in units and as percentages. Make sure your audience, staff, and customers are aware of the changes. User engagement is helpful.

Finally, quantify the cost of slow performance in terms of its financial and political impacts:

Financial Impact:

When 500 users experience slow applications every day for a week, the lost productivity is significant. On a recent CDI engagement, we found an anti-virus process that dragged performance down during peak work hours. There was no need to impact users like this when the process could run after midnight.

Another financial example involves a hospital billing department. The accounts receivable team would face a severe challenge if slow network speeds prevented new billings from going out on time.

A critical medical procedure might require MRI images in the next 20 minutes while the patient remains under anesthesia. Now is not the time for performance delays.

Slow physical or virtualized environments also carry legal risks. A firm might be sued for losses involving delays for thousands of users.

Political Impact

Slow performance and a poor user experience do not reflect well on the brand. Company executives and account managers want to look their best when showcasing new product demos. In these situations, some of your IT staff may receive phone calls from frustrated callers demanding a fix, or your resignation.

Performance is no joke, especially when you factor in contractual service agreements and the competitive dynamics of the cloud economy. A sub-standard user experience impacts your bottom line and perception in the news and social media.

In the long run, prevention pays for itself, so fund your performance fixes and attack the next set of bugs early and often. Equipping your staff with faster performance is essential for business.

The Final Word

People expect robust, fast, responsive computing devices. They want to leverage powerful networks, platforms, and applications to increase their productivity. When a weak link in the system arises, it can snowball and user productivity can dramatically decline or drop-off altogether.

In the physical realm, you can still go buy a better laptop.

But in the virtual realm, monitoring the user experience is essential to identify pain points and make the right adjustments.

Assessing your Infrastructure for VDI with real data – Part 2 of 2 – Analysis

For VSI, we established that using analysis tools was a necessity, and VMware provided a wonderful Capacity Planner tool. It soon became evident, however, that analysis tools are even more important for VDI. That is because for VDI, the hardware and software investment is generally higher: you need much more, and faster, storage, plus many servers and a fast network. So the margin of error is smaller.

Consequently, using Liquidware FIT or Lakeside SysTrack is essential. There are now a few more tools on the market, like ControlUp or Login PI. However, the new entrants have not been battle tested yet.

So how do you analyze your physical desktops for VDI?

First, buy a license for the Liquidware FIT tool (per user, inexpensive), or buy an engagement from your friendly Value Added Reseller or integrator who is a Liquidware partner. If you buy the service from a partner, a license for up to 250 desktops is usually included.

Here, I will talk about the partner's services, because that is what I do. If you are doing this yourself, just apply the same steps.

You will need to provide your partner’s engineer with space for 2 small Liquidware virtual appliances. The only gotcha is that you want them on the fastest storage you have (SSD preferable). That is because on slower storage, it takes much longer to process any analysis or reports.

The engineer will come and install the 2 appliances into your vSphere. Then, the engineer will give you an EXE or MSI with an agent. Usually, you can use the same mechanism you already use to install software on your desktops to distribute the agent. For example, distribution tools like Microsoft SCCM, Symantec Altiris, LANDesk, and even Microsoft Group Policy will all be good. If you don’t have a mechanism for software distribution, then your engineer can use a script to install the agents on all PCs.

Make sure to choose a subset of your PCs, with at least some from each group of similar users (Accounting, Sales, IT, etc.). Your sample size could be about 10-25% of the total user count. Obviously, the higher the percentage, the more accuracy you get. But the goal here is not 100% accuracy – that is impossible to achieve. Assessment and performance analysis is as much an art as a science. You need just enough users to get a ballpark estimate of what hardware to buy. Also, run the assessment preferably for 1 month, or at a bare minimum 2 weeks, and start counting the collection period only from the moment the last user gets the Liquidware agent.
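The per-group sample counts are a quick calculation. Here is a minimal Python sketch; the department names, headcounts, and the 15% rate are illustrative assumptions, not from any Liquidware tooling:

```python
def sample_sizes(groups, percent=15, minimum=1):
    """Pick how many agents to deploy per user group: a flat
    percentage of each group's headcount, rounded up, and never
    fewer than `minimum` machines per group."""
    # -(-a // b) is integer ceiling division
    return {name: max(minimum, -(-count * percent // 100))
            for name, count in groups.items()}

# Hypothetical departments and headcounts
departments = {"Accounting": 40, "Sales": 120, "IT": 15}
print(sample_sizes(departments))
# → {'Accounting': 6, 'Sales': 18, 'IT': 3}
```

The `minimum` floor matters: even a three-person group should contribute at least one machine, or its workload profile disappears from the assessment entirely.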

Your partner engineer will need remote access, if possible, to check on the progress of the installation. First, the engineer will check that the agents are reporting successfully back to the Liquidware appliances. During the month, the engineer will make sure the agents keep reporting and that data can be extracted from the appliance.

In the middle of the assessment, the engineer will perform a so-called “normalization” of the data, to make sure the results are compatible with the rules of thumb used in the analysis. If necessary, the engineer will readjust thresholds and recalculate the data back to the beginning.

At the end of 30 days, the engineer will generate an automated report on the overall performance metrics and present it to you.

Some partners, for an additional service price, will go further: the engineer will analyze the report to determine the amount and performance parameters of the hardware you need, then create a written report and present all the data to you.

In either case, you will know which desktops have the best score for virtualization, and which ones you should not virtualize. If you go with more advanced report services from your partner, then you will also understand how to map the results to hardware and further insights.

One way of mitigating bad VDI sizing is to also use a load simulation tool like LoginVSI. However, LoginVSI is only useful for clients who can afford to buy lab equipment similar to what they will buy for production. Using LoginVSI, you can test robotic (fake) users doing the tasks that normal users will do in VDI. LoginVSI gives you a good ballpark hardware number. However, that number contains no real user experience data. For that, you need tools like Liquidware FIT and the associated work to determine a proper VDI strategy.

Understanding what your current user experience is, and how that experience can be accommodated on virtual desktops, is essential to VDI. Do this assessment before buying your hardware. An assessment ensures that your users get the same experience or better on the virtual desktop as they had on the physical desktop (the holy grail of VDI).

Moving a VMware Horizon View virtual desktop between separate Horizon View environments


Sometimes you may build two distinct VMware Horizon View environments for separate business units, for Disaster Recovery, or for testing purposes.

In that case, a need may arise to move a virtual desktop between the independent Horizon View infrastructures.


There are many ways Horizon View may be configured. However, this article assumes the following settings in both environments:

  • Manual, non-automated, dedicated pools for virtual desktops
  • Full clone virtual desktops
  • All user data is contained inside the virtual desktop, most likely on drive C
  • All virtual desktop disks (vmdks, C and others) are contained within the same VM directory on the storage
  • Storage is presented to ESXi through the NFSv3 protocol
  • Microsoft Active Directory domain is the same across both sites
  • VLANs/subnets the same or different between the two sites
  • DHCP is configured for the desktop VM in both sites
  • Virtual desktop has Windows 7 or Windows 10 operating system
  • Connection Servers do not replicate between environments
  • No Cloud Pod federation
  • Horizon View v7.4
  • vCenter VCSA 6.5 Update 1e
  • ESXi 6.5 for some hosts and 6.0 Update 3 for other hosts

There are other ways to move a virtual desktop when Horizon View is set up with automation and Linked Clones, but those are a subject for a future article.

The first Horizon View infrastructure will be called “Source” in this article. The second Horizon View infrastructure, where the virtual desktop needs to be moved, will be called “Destination” in this article.


  1. Record which virtual desktop names are assigned to which Active Directory users on the Source side. You can do that by exporting a CSV file from the Pool’s Inventory tab.
  2. If the Source Horizon View infrastructure is still available (not destroyed due to a disaster event), then continue with the following steps on the Source environment. If the Source Horizon View infrastructure has been destroyed due to a disaster, go to Step 9.
  3. Power off the virtual desktop. Ensure that in Horizon View Manager you don’t have a policy on your pool to keep powering the virtual desktop on.
  4. In Horizon View Manager, click on the pool name, select the Inventory tab.
  5. Right click the desktop name and select Remove.
  6. Choose “Remove VMs from View Manager only.”
  7. In vSphere Web Client, right click the desktop VM and select “Remove from Inventory.”
  8. Unmount the NFSv3 datastore that contains the virtual desktop from Source ESXi hosts.
  9. At this point how the datastore gets from Source to the Destination will vary based on your conditions.
    • For example, for testing purposes, the NFSv3 datastore can be mounted on the Destination hosts.
    • In case of disaster, there could be storage array technologies in place that replicate the datastore to the Destination side. If the Source storage array is destroyed, go to the Destination storage array and press the Failover button. Failover will usually make the Destination datastore copy Read/Write.
  10. Add the NFSv3 datastore that contains the virtual desktop to the Destination ESXi hosts, by going through the “New Datastore” wizard in vSphere Web Client.
  11. Browse the datastore File structure. Go to the directory of the virtual desktop’s VM, find the .vmx file.
  12. Right click on the .vmx file and select “Register VM…”
  13. Leave the same name for the desktop VM as offered by the wizard.
  14. Put the desktop VM in the correct VM folder and cluster/resource pool, that is visible by the Destination’s Horizon View infrastructure.
  15. Edit the desktop VM’s settings and select the new Port Group that exists on the Destination side (if required).
  16. Power on the desktop VM from the vSphere Web Client.
  17. You might get the “This virtual machine might have been moved or copied.” question.
    • When vSphere sees that the storage path of the VM does not match what was originally in the .vmx file, you might get this question.
    • Answering “Moved” keeps the UUID of the virtual machine, and therefore the MAC address of the network adapter and a few other things.
    • Answering “Copied” changes the UUID of the virtual machine, and therefore the MAC address of the network adapter and a few other things.
  18. In the majority of cases (testing, disaster recovery), you will be moving the desktop virtual machine from one environment to another. Therefore, answer “I Moved It,” to keep the UUID and thus the MAC address the same.
  19. Wait until the desktop virtual machine obtains the IP address from the Destination’s DHCP server, and registers itself with the DNS server and Active Directory.
    • Remember, we are assuming the same Active Directory domain across both sites. As a result, the desktop VM’s AD computer name and object will remain the same.
    • Monitor the IP address and DNS assignment from the vSphere Web Client’s Summary tab for the desktop VM.
  20. In Destination’s Horizon View Manager, click on the Manual, Full Clone, Non-automated, Dedicated pool that you have created already.
    • If you did not create the pool yet, create a new pool and put any available VM at the Destination in the pool. That VM is just a placeholder to create the pool. Once the pool is created, you can remove the placeholder VM and keep only your moved virtual desktops.
  21. Go to the Entitlements tab and add any user group or users to be entitled to get desktops from the pool. Most likely, it will be the same user group or user that was entitled to the pool on the Source side.
  22. Select the Inventory tab and click the Add button.
  23. Add the desktop VM that you just moved.
  24. Check the status of the desktop VM. First, the status will say “Waiting for agent,” then “In progress,” then “Available.”
  25. Right click on the desktop VM and select Assign User.
  26. Select the correct Active Directory user for the desktop.
  27. Ask the user to log in to the virtual desktop using the Horizon View Client, or log in on behalf of the user.
  28. For the first login after the move, the user may be asked by Windows to “Restart Now” or “Restart Later.” Please direct the user to “Restart Now.”
  29. After the restart, the user can use the Horizon View Client to log in to the Destination’s moved desktop normally.
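The desktop-to-user mapping exported in Step 1 can also be loaded programmatically, so you have it handy when re-assigning users on the Destination side (Steps 25-26). A minimal Python sketch; the “Machine” and “Assigned User” column names are assumptions, since the exact headers vary by Horizon View version:

```python
import csv

def desktop_assignments(csv_path, machine_col="Machine",
                        user_col="Assigned User"):
    """Read a Horizon View pool inventory CSV export and return a
    machine-name -> AD-user mapping. Pass the column names your
    export actually uses; they differ between Horizon versions."""
    # utf-8-sig tolerates the BOM some CSV exports include
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        return {row[machine_col]: row[user_col]
                for row in csv.DictReader(f)}
```

For example, `desktop_assignments("pool-inventory.csv")` might return `{"VDI-001": "CORP\\alice", "VDI-002": "CORP\\bob"}`, which you can then walk through while performing the Assign User step for each moved desktop.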


Assessing your Infrastructure for VDI with real data – Part 1 of 2 – History

It is now a common rule of thumb that when you are building Virtual Server Infrastructure (VSI), you must assess your physical environment with analysis tools. The analysis tools show you how to fit your physical workloads onto virtual machines and hosts.

The gold standard in analysis tools is VMware’s Capacity Planner. Capacity Planner was originally made by a company called AOG. AOG did not just analyze physical-to-virtual migrations; it performed overall performance analysis of many aspects of a system. AOG made one of the first agentless data collection tools. Agentless was better because you did not have to touch each system in a drastic way, so there was less chance of a driver going bad or of a performance impact on the target system.

Thus, AOG partnered with HP and other manufacturers, doing free assessments for their customers while getting paid by the manufacturers on the back end. AOG tried to sell itself to HP, but HP, stupidly, did not buy AOG. Suddenly, VMware came out of nowhere and snapped up AOG. VMware at the time needed an analysis tool to help customers migrate to virtual infrastructure faster.

When VMware bought AOG, it dropped AOG’s other analysis business and made the tool, renamed Capacity Planner, free for partners to analyze migrations to virtual infrastructure. It was a shame, because the tool was really good. Capacity Planner relies solely on Windows Management Instrumentation (WMI), which is already built into Windows and collects information all the time. Normally, WMI discards information such as performance counters unless something chooses to collect it. Capacity Planner simply made that choice, collecting WMI performance and configuration data from each physical machine.

When VMware entered the Virtual Desktop Infrastructure (VDI) business with Horizon View, it lacked major pieces of the VDI ecosystem. One piece was profile management, another was planning and analysis, another was monitoring. Immediately, numerous companies sprang to life to fill the need. Liquidware Labs (whose founder had worked for VMware) was the first to come up with a robust planning and analysis tool, Stratusphere FIT, and then a monitoring tool, Stratusphere UX. Lakeside SysTrack also came on the scene. VMware used both internally, although the preference was for Liquidware.

Finally, VMware realized that the lack of a VMware-made analysis tool for VDI was hindering them. What they failed to realize was that such a tool had already existed at VMware for years – Capacity Planner. The Capacity Planner team was neglected, so updates to the tool were rare. However, since Capacity Planner could already analyze physical machines for performance, it was easy to modify the code to collect information for virtualizing physical desktops in addition to servers.

Capacity Planner’s code was eventually updated with desktop analysis. All VMware partners were jumping with joy – we now had a great tool and did not have to learn any new software. I remember eagerly collecting my first data set and beginning the analysis. The tool told me I needed something like twenty physical servers to hold 400 virtual desktops. Twenty desktops per server? That sounded wasteful. I was a beginner VDI specialist then, so I trusted the tool but still had doubts. Then I did a few more passes at the analysis and kept getting wildly different numbers. Trusting my gut instinct, I decided to redo one analysis with Liquidware FIT.

Liquidware FIT uses agents, and while I used it, I always thought it would be nice not to need them – which is why VMware’s addition of desktop analysis to the agentless Capacity Planner was so welcome. Back to my analysis: after running Liquidware FIT, I came up with completely different numbers. I don’t remember exactly what they were – perhaps 60 desktops per physical server. But what I do remember is that Liquidware’s analysis made sense, where Capacity Planner’s did not. My suspicions about Capacity Planner were confirmed by VMware’s own VDI staff, who, when asked if they use Capacity Planner to size VDI, said, “For VDI, avoid Cap Planner like the plague, and keep using Liquidware FIT.”

As a result, I have kept using Liquidware FIT ever since, and never looked back. While FIT does have agents, I now understand that getting metrics like application load times and user login delay is not possible without agents, because Windows does not expose such metrics through WMI. A rich agent can pick up many more user experience items, and thus enables much better modeling.

The lure of Hyper-Converged for VDI

So you decided to implement Virtual Desktop Infrastructure (VDI). Virtual desktops and app delivery sound sexy, but once you start delving into the nitty-gritty, you quickly realize that VDI has many variables. In fact, so many that you start to feel overwhelmed.

At this point you have a couple of options. First, you can keep doing this yourself, but that will take valuable time. You can hire a VDI engineer onto your team, but it also takes time and money to find a great engineer.

Another option is to hire a Value Added Reseller that has done VDI a hundred times. Great idea – I will love you forever, and will do great VDI for you. But I can be expensive.

One particular sticking point in VDI is the sizing of the hardware for the environment. If you undershoot the amount of compute, storage, memory or networking, you risk having unhappy users with underpowered virtual desktops. If you overshoot, you may be chastised for overspending.

Too often I have seen the user profile not properly examined or sized. The result is a virtual desktop that is low on memory or CPU. The user immediately blames the new technology, without considering that the real performance culprit may lie somewhere else – perhaps even something the user did. The user just had his shiny physical machine taken away and replaced with something intangible, so all problems, whether related to VDI or not, will be blamed on VDI, and possibly on the VDI sizing. The bad buzz spreads through the company, and such buzz kills your VDI project faster than the performance problems themselves.

So, what is one way to avoid thinking about sizing? Hyper-Converged.

Hyper-Converged means each node in a cluster has a little bit of everything – compute, storage, memory, network. Nodes are generally identical, although there can be different node types – for example, Simplivity has some nodes with everything and some nodes doing only compute.

Since most nodes are the same, once you have figured out how many average virtual desktops of a specific profile fit on a node, you can just keep adding nodes for scalability.

In fact, Nutanix capitalized on that brilliantly when they announced their famous guarantee: once the customer says how many users they want to put on Nutanix, the vendor will provide enough Hyper-Converged nodes for a great user experience. The guarantee was hard to enforce on both the customer end and the Nutanix end, but it sure had lots of marketing power. Time and time again I heard about it from customers and other VARs. The guarantee was a placebo that made VDI feel easier.

Consequently, you should not rely on a guarantee alone for VDI sizing. Sizing should be verified with load simulation tools like LoginVSI and View Planner. Then, the profile of your actual users should be evaluated by collecting user experience data with a tool like Liquidware FIT or Lakeside SysTrack.

Once the data is collected and analyzed, you can decide what number of Hyper-Converged nodes to buy. Hyper-Converged makes the sizing easy because you always deal with uniform nodes.

Once you are in production, you should monitor user experience constantly with a tool like Liquidware UX. UX gives you a solid, ongoing picture of what your user profiles do. As a result, you can confidently say, “On my Hyper-Converged node I can host up to 50 users.” Thus, if you grow to 100 users, you need 2 Hyper-Converged nodes.
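The node arithmetic here is just ceiling division. A minimal Python sketch: the 50-users-per-node figure is the hypothetical one from the example above (yours comes from your own UX data), and the optional N+1 spare node is my own common-practice assumption, not something the text prescribes:

```python
import math

def nodes_needed(total_users, users_per_node=50, n_plus_one=False):
    """Uniform Hyper-Converged sizing: round the user count up to
    whole nodes, optionally adding one spare node (N+1) so a host
    failure does not leave users without capacity."""
    nodes = math.ceil(total_users / users_per_node)
    return nodes + 1 if n_plus_one else nodes

print(nodes_needed(100))                  # → 2, as in the text
print(nodes_needed(101))                  # → 3: one extra user, one extra node
print(nodes_needed(100, n_plus_one=True)) # → 3 with an N+1 spare
```

The 101-user case is the whole point of uniform nodes: capacity planning stops being a spreadsheet exercise and becomes a single division.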

Making that statement is the holy grail of scalability. And therein lies the lure of Hyper-Converged – as a basic VDI building block. That is why Hyper-Converged companies started with strong VDI stories, and only later began marketing for Virtual Server Infrastructure.

And any technology that makes VDI easier, even by one iota, makes VDI more popular. Hail Hyper-Converged for VDI!

What’s next for Virtual Desktop Infrastructure?

Greetings CIOs, IT Managers, VM-ers, Cisco-ites, Microsoftians, and all other End-Users out there… Yury here. Yury Magalif. Inviting you now to take another virtual trip with me to the cloud, or at least to your data center. As Practice Manager at CDI, your company is depending on my team of seven (plus or minus a consultant or two) to manage the implementation of virtualized computing including hardware, software, equipment, service optimization, monitoring, provisioning, etc. And you thought we were sitting behind the helpdesk and concerned only with front-end connectivity. Haha (still laughing) that’s a good one!

Allow me to paint a simple picture and add a splash of math to illustrate why your CIO expects so much from me and my team. Your company posted double-digit revenue growth for three years running and somehow, now, in Q2 of year four, finds itself in a long fourth down and 20 situation. (What? You don’t understand American football analogies? Okay, in the international language of auto-racing, we are 20 laps behind and just lost a wheel.) One thousand employees need new laptops, docking stations, flat panel displays, and related hardware. Complicating the matter are annual software licensing fees for a group of 200 but with only five simultaneous concurrent users worldwide. At $1,500 per user times 1,000, plus the $100 fee, your CIO has to decide how to explain to the board a plan to spend another 1.5 million dollars on IT just after Q1 closed down 40 percent and Q2 is looking even worse.

To read the rest of this blog, where I try something different, please go to my work blog page:


How to use Adobe Flash Player after its End of Life — absolutely free

***NOW UPDATED with Apple MacOS instructions, in addition to Microsoft Windows*** 

***Also updated with the solution to the mms.cfg file not working due to the UTF-8 bug***

You may have seen plenty of announcements over the past few years about Adobe Flash coming to the end of life. Various browser manufacturers announced they will disable Flash. Microsoft announced they will uninstall Flash from Windows using a Windows Update (although only the Flash that came automatically with Windows, NOT user-installed Flash). Apple completely disabled Flash in Safari. Below is the dreaded Flash End of Life logo that you will see once Flash is finally turned off:

Yes, I agree with Steve Jobs — Flash is buggy and not secure. However, there are many IT manufacturers out there that used Flash to build their management software interfaces. Some common examples are VMware vSphere, Horizon, and HPE CommandView. That management software is not going away, even though most of it is older. In fact, some of these Flash-managed devices will be there for the next 10 years. So, what can the desperate IT administrator do to manage his or her devices?

Adobe directs users to a company called Harman for extended Flash support. HPE charges money for older CommandView support. Do not pay any money to these companies to use Flash.


I am not recommending Chrome or Edge browsers for the below solution because they will auto-update and newer versions will not support Flash at all. Further, turning off auto-update in Chrome and Edge is difficult.

Here are 3 methods to get Flash running on your favorite website. The Windows methods assume a 64-bit operating system. If you want to try 32-bit Windows, the files are available, but that functionality has not been tested (although it will probably work). All the files mentioned in these methods are downloadable below:

firefox-flash-end-solution-versions.zip_.pdf — Right click on the link and choose “Save Link As” or “Download Linked File As”. Save the file to your computer. Unhide file extensions. Remove _.pdf from the end of the name and Unzip/ExtractAll the file.

The file contains:

Firefox Setup 78.6.0esr-64bit.exe
Firefox Setup 78.6.0esr-32bit.exe
Firefox 78.6.0esr.dmg

flash-eol-versions.zip_-1.pdf — Right click on the link and choose “Save Link As” or “Download Linked File As”. Save the file to your computer. Unhide file extensions. Remove _-1.pdf from the end of the name and Unzip/ExtractAll the file.

The file contains:

Flash player for Firefox and Win7 – use this for Solution: install_flash_player.exe
Flash for Safari and Firefox – Mac: install_flash_player_osx.dmg
Flash for Opera and Chromium – Mac: install_flash_player_osx_ppapi.dmg
Flash Player for Chromium and Opera browsers: install_flash_player_ppapi.exe
Flash Player for IE active x: install_flash_player_ax.exe
Flash Player Beta 32 bit – May 14-2020: flashplayer_32_sa.exe flashplayer_32_sa.dmg

Method 1 — Microsoft Windows, if you have Internet Explorer browser and Flash already installed

This method applies to many older Windows Operating systems like Server 2008, 2012, 2016 and Windows 7 and even older Windows 10. It assumes a 64-bit operating system.

  1. Do NOT upgrade Internet Explorer to the Microsoft Edge browser.
  2. Set Internet Explorer to be the default browser in Default Programs.
  3. Download the mms.cfg file.
  4. Open the mms.cfg file with Notepad.
  5. Edit the URL on the right of the Equals sign with an address of the Flash website or file that you need.
    1. Ex. AllowListUrlPattern=https://localhost/admin/
  6. If you need additional websites, place them on the next lines, like in this example.
    1. AllowListUrlPattern=https://localhost/admin/
    2. AllowListUrlPattern=http://testwebsite.com/
    3. AllowListUrlPattern=*://*.finallystopflash.com/
  7. Save the mms.cfg file on the desktop.
    1. Important: if you did not use my file but are creating the file yourself, make sure that in the Notepad Save As dialog you select “All Files” as the type and “UTF-8” as the Encoding.
  8. Copy the mms.cfg file into the following directory: C:\Windows\SysWOW64\Macromed\Flash\
    1. That disables Flash updates and allows Flash to be used on the specified websites.
    2. If you don’t see this directory, Flash is not installed and you need to use Method 2 instead.
  9. Restart the Internet Explorer browser.
  10. Go to your website.
  11. Internet Explorer will open it with Flash functional.
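If you prefer to generate mms.cfg programmatically rather than in Notepad, here is a minimal Python sketch that writes the allow-list with explicit UTF-8 encoding. The URL patterns are the examples from the steps above; adjust the output path for your system (on 64-bit Windows the target directory is C:\Windows\SysWOW64\Macromed\Flash\):

```python
from pathlib import Path

# Example patterns from the steps above; replace with your own sites.
patterns = [
    "https://localhost/admin/",
    "http://testwebsite.com/",
]

lines = [f"AllowListUrlPattern={p}" for p in patterns]

# Writing with encoding="utf-8" produces plain UTF-8 without a
# byte-order mark, sidestepping the Notepad encoding pitfall the
# article's UTF-8 bug note warns about.
Path("mms.cfg").write_text("\n".join(lines) + "\n", encoding="utf-8")
```

Copy the resulting mms.cfg into the Flash directory exactly as in step 8.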

Method 2 — Microsoft Windows, if you don’t have Internet Explorer and/or Flash installed

This method applies to almost any Windows machine. It assumes a 64-bit operating system.

  1. If you already have another version of Firefox installed, uninstall it.
  2. Download the “Firefox Setup 78.6.0esr-64bit.exe” and “policies.json” files. This Firefox installer is the Enterprise version (what you need).
  3. Install Firefox ESR, but do NOT open it, or if it opens, close right away.
  4. In the “C:\Program Files\Mozilla Firefox\” directory, create a folder called “distribution”
  5. Put “policies.json” file into the folder “distribution” — this disables automatic Firefox updates.
  6. Start Firefox ESR.
  7. Go to URL: about:policies
  8. Check that “DisableAppUpdate” policy is there and it says “True”.
  9. Set Firefox to be the default browser in Default Programs.
  10. Download “Flash player for Firefox and Win7 – use this for Solution: install_flash_player.exe” and “mms.cfg”.
  11. Double click on the install_flash_player.exe to install Flash for Firefox. Click all Next prompts.
    1. If you are prompted to choose “Update Flash Player Preferences”, select “Never Check for Updates”.
  12. Open mms.cfg file with Notepad
  13. Edit the URL on the right of the Equals sign with an address of the Flash website or file that you need.
    1. Ex. AllowListUrlPattern=https://localhost/admin/
  14. If you need additional websites, place them on the next lines, like in these examples:
    1. AllowListUrlPattern=https://localhost/admin/
    2. AllowListUrlPattern=http://testwebsite.com/
    3. AllowListUrlPattern=*://*.finallystopflash.com/
  15. Save mms.cfg file on the desktop.
    1. Important: if you did not use my file but are creating the file yourself, make sure that in the Notepad Save As dialog you select “All Files” as the type and “UTF-8” as the Encoding.
  16. Copy the “mms.cfg” file into the following directory: C:\Windows\SysWOW64\Macromed\Flash\
    1. That disables Flash updates and allows Flash to be used on the specified websites.
  17. Restart Firefox ESR.
  18. When going to the flash website you specified, click on the big logo in the middle, then “Allow”.
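For reference, the policies.json used in steps 2-8 only needs a single policy to stop Firefox ESR from auto-updating. A minimal Python sketch that generates it (writing to the current directory; copy the file into the “distribution” folder yourself):

```python
import json
from pathlib import Path

# DisableAppUpdate is the only policy this solution requires; it is
# the policy checked on the about:policies page in step 8.
policy = {"policies": {"DisableAppUpdate": True}}

Path("policies.json").write_text(json.dumps(policy, indent=2) + "\n")
```

Verify the result on the about:policies page as described above; it should list DisableAppUpdate as True.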

Method 3 — Apple MacOS

This method applies to almost any MacOS version.

  1. If you already have another version of Firefox installed, uninstall it.
  2. Download the “Firefox 78.6.0esr.dmg” and “policies.json” files. This Firefox ESR for Mac installer is the Enterprise version (what you need).
  3. Open the DMG file. Drag the Firefox ESR icon to the Applications folder, which installs it on the Mac. Do NOT open Firefox ESR yet.
  4. Open the Terminal application.
  5. Type the following and press Enter (start typing from xattr).
    1. xattr -r -d com.apple.quarantine /Applications/Firefox.app
      1. This allows Firefox customization without corrupting the application.
  6. Go to the Applications folder.
  7. Right click on the Firefox.app application and select “Show Package Contents”.
  8. Go to the Contents>Resources folder and there create a folder called “distribution”.
  9. Put the “policies.json” file into the “distribution” folder — this disables automatic Firefox updates.
  10. Start Firefox ESR.
  11. Go to URL: about:policies
  12. Check that the “DisableAppUpdate” policy is there and it says “True”.
  13. Download “Flash for Safari and Firefox – Mac: install_flash_player_osx.dmg” and “mms.cfg”.
  14. Double click on install_flash_player_osx.dmg to mount the disk. Double click the installer to install Flash for Firefox.
  15. When asked to choose “Update Flash Player Preferences”, select “Never Check for Updates (not recommended)”.
  16. Place the mms.cfg file on the Desktop. Open the mms.cfg file with TextEdit.
  17. Edit the URL on the right of the Equals sign with the address of the Flash website or file that you need.
    1. Ex. AllowListUrlPattern=https://localhost/admin/
  18. If you need additional websites, place them on the next lines, like in these examples:
    1. AllowListUrlPattern=https://localhost/admin/
    2. AllowListUrlPattern=http://testwebsite.com/
    3. AllowListUrlPattern=*://*.finallystopflash.com/
  19. Save the mms.cfg file to the Desktop. Copy the mms.cfg file.
  20. Paste the “mms.cfg” file into the following directory:
    1. /Library/Application Support/Macromedia/     (Mac Hard Drive>Library>Application Support>Macromedia)
  21. If there is already an existing mms.cfg file in there, replace it.
    1. That disables Flash updates and allows Flash to be used on the specified websites.
  22. Restart Firefox ESR for Mac.
  23. When going to the Flash website you specified, click on the big logo in the middle, then “Allow”.


15 takeaways from the End-User Computing 2020 survey


Download survey below.

  1. Surprised that the average memory/vCPU allocation for a single-user virtual desktop is 12 GB and 3 vCPUs (more than my current sweet spot of 8 GB, 2 vCPUs).
  2. #3 challenge in VDI is user experience with Unified communications (voice, webcam, VoIP, conferencing) — no surprise there.
  3. Dell and Nutanix are increasing server hardware market share, while HPE is decreasing and Cisco is all over the place.
  4. Centralized storage arrays are still winning vs. Hyperconverged storage.
  5. New storage entrants — Pure, VMware VSAN, and Nutanix — are growing while traditional storage vendors are declining, with Nutanix leading at 14.92% market share.
  6. Usage of GPUs (55%) surpassed non-GPU (CPU-only) environments.
  7. Most people do not use application layering solutions.
  8. Most people deliver desktops, not published applications.
  9. Most people manually install applications and updates into the master image.
  10. ControlUp is the leading VDI monitoring solution. Nexthink is not included in the list.
  11. Windows’ built-in Defender antivirus is the most popular one for virtual desktops.
  12. The majority of VDI is still on-premises, with the control plane/broker delivered by the internal IT department. Desktop as a Service (DaaS) is most popular for flexible/temporary workers. However, DaaS is End-User Computing IT’s most important initiative for 2021.
  13. Biggest challenge in Desktop as a Service in Public Cloud is cost.
  14. The hottest Desktop as a Service is Microsoft’s Windows Virtual Desktop (WVD). However, VMware Cloud on Amazon AWS/Google Cloud Platform, and Itopia, another Google Cloud Platform-based DaaS provider, are not in the list.
  15. Ransomware is the biggest overall worry of EUC professionals.

Thank you to Christiaan Brinkhoff, Mark Plettenberg and Ruben Spruijt from the VDILIKEAPRO team for creating this wonderful resource.

Download survey here: