Collateral for my presentation at the Workshop of the Association of Environmental Authorities of NJ (AEANJ)

I was glad for a chance to present at the Workshop of the Association of Environmental Authorities of NJ (AEANJ). There were great questions from the audience.

Thank you to attendees, Leon McBride for the invitation, Peggy Gallos, Karen Burris, and to my colleague Lucy Valle for videotaping.

My presentation is called “Data Portability, Data Security, and Data Availability in Cloud Services”

Here are the collateral files for the session:

Slides:

AEANJ Workshop 2016-slides-YuryMagalif

Video:

AEANJ Workshop 2016 Video – Yury Magalif

The lure of Hyper-Converged for VDI

So you decided to implement Virtual Desktop Infrastructure (VDI). Virtual desktops and app delivery sound sexy, but once you've started delving into the nitty-gritty, you quickly realize that VDI has many variables. In fact, so many that you start to feel overwhelmed.

At this point you have a couple of options. First, you can keep doing this yourself, but that will take valuable time. Second, you can hire a VDI engineer for your team, but it also takes time and money to find a great engineer.

Another option is to hire a Value Added Reseller that has done VDI a hundred times. Great idea – I will love you forever, and will do great VDI for you. But I can be expensive.

One particular sticking point in VDI is the sizing of the hardware for the environment. If you undershoot the amount of compute, storage, memory or networking, you risk having unhappy users with underpowered virtual desktops. If you overshoot, you may be chastised for overspending.

Too often I have seen the user profile not properly examined or sized. The result is a virtual desktop that is low on memory or CPU. The user immediately blames the new technology, never considering that they may have caused the problem themselves, or that the real performance culprit may lie somewhere else entirely. After all, the user just had his shiny physical machine taken away, and it was replaced with something intangible. Of course, all the problems, whether related to VDI or not, will be blamed on VDI, and possibly on the VDI sizing. The bad buzz spreads through the company. Such buzz kills your VDI project faster than the performance problems themselves.

So, what is one way to avoid thinking about sizing? Hyper-Converged.

Hyper-Converged means a node in a cluster has a little bit of everything – compute, storage, memory, network. Each node is generally the same, but there can be different types of nodes – for example, SimpliVity has some nodes with everything and some nodes doing only compute.

Since most nodes are the same, once you have figured out how many average Virtual Desktops with a specific profile fit on a node, you can just keep adding nodes for scalability.

In fact, Nutanix capitalized on that brilliantly when they announced the famous guarantee – once the customer says how many users they want to put on Nutanix, the vendor will provide enough Hyper-Converged nodes to have a great user experience. The guarantee was hard to enforce on both the customer end and Nutanix end. But the guarantee sure had lots of marketing power. Time and time again I heard it from customers and other VARs. The guarantee was a placebo for making VDI easier.

Consequently, you should not just rely on a guarantee for VDI sizing. Sizing should be verified with load simulation tools like LoginVSI and View Planner. Then, the profile of your actual user should be evaluated by collecting user experience data with a tool like Liquidware FIT or Lakeside SysTrack.

Once the data is collected and analyzed, you can decide what number of Hyper-Converged nodes to buy. Hyper-Converged makes the sizing easy because you always deal with uniform nodes.

Once you are in production, you should be monitoring user experience constantly with a tool like Liquidware UX. UX will allow you to always have a solid picture of what your user profiles look like. As a result, you can confidently say, “On my Hyper-Converged node I can host up to 50 users.” Thus, if you grow to 100 users, you need 2 Hyper-Converged nodes.
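
For illustration, here is a minimal sizing sketch in PowerShell. The 50-users-per-node figure is hypothetical and should come from your own UX/FIT assessment data; add N+1 headroom if a node must be able to fail or go down for maintenance:

# Hypothetical node math: plug in numbers from your own assessment data
$usersPerNode  = 50    # measured capacity of one Hyper-Converged node for YOUR user profile
$totalUsers    = 100   # users you plan to host
$spareNodes    = 0     # set to 1 for N+1 failover/maintenance headroom

$requiredNodes = [math]::Ceiling([double]$totalUsers / $usersPerNode) + $spareNodes
Write-Output "Buy $requiredNodes Hyper-Converged nodes for $totalUsers users."

With the numbers above, the sketch prints 2 nodes, matching the back-of-the-envelope math in this post.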

Being able to say that is the holy grail of scalability. And therein lies the lure of Hyper-Converged – as a basic VDI building block. That is why Hyper-Converged companies started with strong VDI stories, and only later began marketing Virtual Server Infrastructure.

And any technology that makes VDI easier, even by one iota, makes VDI more popular. Hail Hyper-Converged for VDI!

Is FCoE winning the war vs. Native FC?

Note: this article was first published in 2013 on my work blog Cloud-Giraffe. I am republishing it here because the article is no longer available from my work blog. The ideas here are still relevant.

I recently had a customer buy 2 Cisco Nexus 5548UP switches to be used only for their Fibre Channel (FC) storage functionality, which is 50% of the total switch functionality. For Ethernet features, the customer bought separate Nexus switches. I did not advise on the design of this solution. However, it is a telling sign that Cisco's efforts to play down native FC functionality in favor of Fibre Channel over Ethernet (FCoE) are faltering. Unfortunately, promoting FCoE is proving to be harder than it looked. This is due to two main problems — confusion about the management of FCoE, and Cisco's main storage switch competitor, Brocade, wielding formidable muscle to keep native FC alive.

In 2003, Cisco entered the native FC switch market with a splash. Cisco did this by perfecting one of their first spin-ins. A spin-in is when a company tells a few hundred of their engineers, “You will become a separate company and receive shares. Then, you will build us the best new technology on the market. If the tech is successful, Cisco will buy the company back, and all who received shares will profit handsomely.” Thus, Andiamo Systems was born and built the Cisco MDS FC storage switch.
At the time of their MDS FC switch release, Cisco’s primary competitor Brocade had 1/3 of the software features of the new MDS switch. I remember talking to my colleagues, old school FC guys, about the Cisco MDS. “Cisco does not know the FC storage market — they are network specialists. They will never release a switch as easy and robust as a Brocade or McData,” forecasted my bearded comrades. I believed them — they had years of experience.
Soon, Cisco was sending me to MDS courses, and the new CCIE certification in Storage was born. After I took the courses and realized the MDS's feature dominance, I was hooked. Any geek worth his salt likes an abundance of gadgets — Cisco had Virtual Storage Area Networks (VSANs), FC pings, Inter-VSAN routing and Internet Small Computer System Interface (iSCSI) built in. This was when iSCSI storage was a novel, branch-only solution. But with Cisco, you could do iSCSI with ANY storage.
The only Cisco problem was cost. Cisco attacked the large enterprise market first. As a result, an entry-level MDS 9216 switch with all the wizardry cost $52 thousand list. The price was prohibitive for smaller markets. However, Cisco's access to large enterprise decision makers allowed the company to quickly dispatch the McData switch company. McData was gobbled up by Brocade. Brocade was not passive — it quickly ramped up development of software features. Further, Brocade's main counter-bet was the raw hardware speed strategy. Brocade decided to improve native FC chips faster than Cisco. As a result, Brocade was faster to market with 4 Gbit and then 8 Gbit FC.
At the time of Brocade’s 4 and 8 Gbit increases, most critics said that no client requires such speed, that it is purely a marketing gimmick. Such marketing gimmick charges were also leveled against Cisco for advanced software gadgets in their switches. The FC customer had a choice — raw FC speed vs. advanced software. While I enjoyed advanced software, I admit — few clients used switch based iSCSI and extra Cisco features. Both Brocade’s FC speed dominance and Cisco software advantage were pure marketing. But with time, it became evident that faster chips prevail in the marketing argument. Software was faster to develop than custom Application-specific Integrated Circuits (ASICs). Consequently, Brocade matched Cisco’s features, and was winning in FC speeds. Pundits that shot down the speed were silenced, as throngs of customers wanted faster Input/Output (I/O) for their growing VMware virtual server farms.
Still, Cisco had another card up its sleeve. Andiamo’s success was spectacular. The team behind the MDS was itching to disrupt another market, and Cisco was happy to oblige. Nuova Systems put together the same cast of characters in another spin-in. The spin-in secretly burrowed inside Cisco, building something unique. Speculation abounded, but when Cisco finally announced Nuova’s work, it was revolutionary.
Cisco/Nuova was the first to release the Nexus, a unified switch which supported native FC and Ethernet/Internet Protocol (IP) in the same box. Further, Cisco was the first to release the 10 Gbit FCoE protocol for their unified switches. Also, Cisco entered the server market with the Unified Computing System (UCS), a blade server platform with a network oriented architecture. Brocade was again caught off-guard, and went to its tried and true strategy — win with native FC, while countering Cisco’s feature superiority with “me toos.” Thus, Brocade was the first on the market with 16 Gbit FC. Meanwhile, Cisco touted the benefits of unified switching at 10 Gbit.
The promise of unified switching made complete sense. Why have 2 separate networks — storage and IP, which have similar concepts? Yet, in the 1990s, the storage market diverged onto a separate network path. Storage required a switching infrastructure, and the industry delivered the Storage Area Network (SAN). Developed in part by the SCSI guys mixed in with IP pros, FC was a robust protocol, perfect for storage. However, when FC expanded into multiple sites and routing, its management became similar to IP network management. Still, storage was managed by the storage team and IP by the network team. Separate, sometimes warring Network vs. Storage silos increased hardware and staff costs at large enterprises.
Cisco argued it could reintegrate the silos into one job role — an omniscient super Data Center guru. To that end, Cisco Certified Internetwork Expert (CCIE) certification in Storage was recently discontinued in favor of CCIE in Data Center. The Data Center CCIE requires knowledge of the server in Cisco UCS, the storage of Cisco MDS and Nexus FC, the Cisco Nexus Layer 2/3 network, and the Nexus 1000V virtual switching functionality. In Cisco’s view, the Data Center CCIE is the James Bond of the Data Center world, able to handle any problem thrown at him. I have no doubt that such people will exist, and they will assemble multi-disciplinary departments.
However, the current realities in the field seem to tell a different story. Over time, FC and storage developed into a whole separate area with its own deep expertise. The storage professional was required to know the management of multiple storage arrays from different manufacturers — EMC, NetApp, HP, Hitachi, Dell, and others. The number of storage protocols increased from just FC and the occasional Fibre Channel over IP (FCIP) for routing across distances to iSCSI, Network File System (NFS), Common Internet File System (CIFS) and FCoE. Brocade absorbed McData, but QLogic began to make FC switches.
On the network side, the amount of technology keeps increasing. The Routing and Switching CCIE of today has to know 3-5 times more than CCIE #1. Moreover, we have virtual and Software Defined Networking (SDN) disrupting the field. The future network guy has to know FabricPath, Overlay Transport Virtualization (OTV), Cisco's new Cloud Services Router 1000V (CSR), VMware's vCloud Director, Nicira, and Brocade-acquired Vyatta.
The FCoE protocol was meant to unify storage and network management through the Nexus switch. Unfortunately, what I see is that the network guys are weak in the Nexus FC functionality. Yes, they may have gone to a class to learn the MDS or the Nexus's FC side, and they know the transport side. However, they lack the knowledge and control of the day-to-day management of the endpoints — server, virtualization and storage arrays. As a result, whoever controls the Nexus FCoE cannot control and troubleshoot the native NetApp FCoE card, and also cannot control LUN presentation on a UCS blade. But complete control of the FC stack is essential in FC troubleshooting. On the other hand, storage guys rarely want to delve into the network features of the Nexus switch. Consequently, because the Nexus is first and foremost a network switch, the storage guys never have daily administrative control.
When a shop is moving from one manufacturer to another, toward a converged network, it does not want to have multiple manufacturers. Another customer asked me to design a Nexus-only solution, because they wanted to move away from Brocades. They had money for the Nexuses, but not enough Nexuses to accommodate all the Brocade FC ports currently in existence. When I mentioned introducing Cisco MDSs, they said, “Why are you introducing native FC into the mix when the Nexus FCoE mantra is supposed to solve our storage needs?” The death of native FC, and especially of the Cisco MDS, has been predicted before. When Cisco retired the CCIE in Storage, that was another sign that the MDS native FC switch was on its way out. Yet, Brocade released 16 Gbit native FC 2 years ago.
Today, the Data Center has many management headaches due to convergence.
  • Who will manage the server and its network devices (Converged Network Adapter cards, end-host I/O modules like Hewlett-Packard Virtual Connect or Cisco UCS Fabric Interconnects) — VMware guys, old school server guys, storage or network gals?
  • Who will manage the network going from the server out to the first access device (end-host I/O modules, Cisco UCS Fabric Interconnects)?
  • Who will manage storage and network transports (Cisco Nexus, Brocade switches)?
  • Who will manage the network in a virtual network world (Cisco Nexus 1000V, CSR, Vyatta, Nicira, hardware Nexus, OpenStack)?
These management questions have not been settled. In fact, what I see is that the expertise of IT staff, like water, flows back and forth between departments. No one is quite sure where her responsibility ends and the colleague's begins.
In this world of management confusion, the tried and true resonates. As a result, Brocade was successful in convincing the Data Center to wait on the death of native FC. And, just like before with 4 and 8 Gbit FC, Cisco had to answer. Therefore, the just-announced Cisco 16 Gbit FC MDS 9710 Multilayer Director and the upcoming MDS 9250i Multiservice Fabric Switch continue the time-honored tradition of the Brocade vs. Cisco FC war. In response, Brocade immediately pointed on LinkedIn to a press release touting added software features in their current code base. In addition, I am sure Brocade R&D is already well on the way to releasing 32 Gbit FC ASICs for the next round of battle.
Meanwhile, many IT departments like my original customer will continue to follow a dual path — native FC separate from the network, avoiding FCoE. That holds even when the avoidance happens on switches from the same manufacturer — Cisco — and on a switch that fully supports FCoE — the Nexus. The native Fibre Channel protocol is alive and well.
References:
Cisco Helps Customers Address Rising Cloud, Big Data Requirements By Raising the Bar for Storage Networking.
Brocade Advances Fibre Channel Innovation With New Fabric Vision Technology.

How to disable Cluster Shared Volumes (CSV) on Windows Server 2008 R2

Description:

Cluster Shared Volumes (CSV) are shared drives holding an NTFS volume that can be written to by all cluster nodes in a Windows Server Failover Cluster.

They are especially important for Microsoft Hyper-V virtualization, because all host servers can see the same volume and can store multiple Virtual Machines on that volume. Using CSVs allows Windows to perform Live Migration with fewer timeouts.

Unfortunately, many third party applications do not support CSVs. In addition, CSVs can be much harder to troubleshoot.

Instructions:

If you need to disable CSVs, use the following command in PowerShell:

Get-Cluster | %{$_.EnableSharedVolumes = "Disabled"}
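
To confirm the change, or to turn CSVs back on later, here is a short sketch under the same assumptions (run from an elevated PowerShell session on a cluster node that has the failover clustering tools installed):

Import-Module FailoverClusters       # load the clustering cmdlets if they are not loaded already

(Get-Cluster).EnableSharedVolumes    # should now return "Disabled"

Get-Cluster | %{$_.EnableSharedVolumes = "Enabled"}   # re-enable Cluster Shared Volumes later if needed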

References:

http://windowsitpro.com/windows/q-how-can-i-disable-cluster-shared-volumes-windows-server-2008-r2

Collateral for my presentation at the NJ CTO Study Council

This was my first time presenting at the new NJ CTO Study Council event, and it was a wonderful experience. We did a Virtual Desktop demo which worked flawlessly.

Thank you to attendees and my speaking partners Dr. Richard O’Malley, Ralph Barca, Stan Bednarz, Dan Riordan, and to my colleagues Jeff Jackson and Ian Erikson for help with the presentation.

My presentation is called “Virtualization Roadmap through K-12”

Here are the collateral files for the session:

Slides:

NJ CTO Study Council – VIRTUALIZATION – ROADMAP THROUGH K12 – November 2014

My tips published in VMware “vSphere Design Pocketbook” 2nd edition

For the 2nd year in a row, Frank Denneman and PernixData published me in “vSphere Design Pocketbook – 2.0.”

The book has “no fluff” guidance on building VMware vSphere.

Get a free copy here:

http://info.pernixdata.com/vsphere-pocketbook-2.0?MS=T

Those at VMworld 2014 can get a printed copy at PernixData booth #1017.

Thank you to the panel: Frank Denneman, Duncan Epping, Brad Hedlund, Cormac Hogan, William Lam, Michael Webster, and Josh Odgers, who picked my contribution. Here is how the process transpired:

http://frankdenneman.nl/2014/08/07/pre-order-vsphere-pocketbook-blog-edition/

I don’t yet know which one of my entries was published. Once the book is released at VMworld, I will update.

Unfortunately, this year I cannot make it to VMworld 2014, but I hope my friends will bring me a printed copy.

Slides from my session at the BriForum 2014 conference

Thank you to those who attended my session at BriForum 2014 in Boston and filled out the survey!

This is my 2nd year speaking. I hope to be back next year.

Here is the session presentation slide deck:

AgentlessAntivirusTips&Tricks_YuryMagalif_July2014_BriForum_v3

Here is the link to the session description on the BriForum website:

http://briforum.com/US/sessions.html#tipstricks

This year, the conference in Boston was excellent. I got a chance to meet Brian Madden (pictured at left), Gabe Knuth, Jack Madden and the TechTarget crew. In addition, I met many amazing people who are the top experts in End-User Computing – Benny Tritsch, Shawn Bass, and the ProjectVRC team: Jeroen van de Kamp, Ryan Bijkerk & Ruben Spruijt.

In particular, Benny and Shawn’s HTML5 comparison session and ProjectVRC comparative testing session were the highlights of the conference for me.

In my own session, I was successful with a demo of McAfee and had a good number of questions from the audience.  Stay tuned for the video, coming in August of 2014.

My presentation is called “Tips and Tricks on Building Agentless Antivirus Scanners for VMware View Virtual Desktops”

This tips and techniques session is best for administrators and consultants looking to implement an Antivirus solution for their VMware Virtual Desktop Infrastructure (VDI). The goal is to minimize I/O impact to VDI. We will discuss the two most developed scanners taking advantage of VMware vShield Endpoint application programming interfaces (APIs), Trend Micro Deep Security Antivirus 9.0 and McAfee Agentless MOVE AntiVirus 3.0. New this year is the discussion of VM-based scan policies. Overall, we will focus on real-world examples of VMware, Trend Micro and McAfee best practices. For example, the participants will learn whether to use their current Antivirus for VDI versus VDI agentless antivirus, why the VM Communication Interface (VMCI) driver is important, how to deploy the Security Virtual Appliances (SVAs), why you should disable vMotion for SVAs, how to test your solution using EICAR test files and how to shut down your VDI agentless antivirus VMs properly if doing maintenance. A basic understanding of VMware vSphere, VMware View and Enterprise Antivirus solutions is recommended.

Attendees will learn:
• How to minimize AntiVirus scanning I/O impact to VDI
• Whether to use your current AntiVirus versus VDI agentless Antivirus
• How to pick the best AntiVirus vendor for your environment
• How to test your agentless AntiVirus for effectiveness using EICAR files (see the example script after this list)
• How to deploy and maintain your Trend Micro or McAfee infrastructure
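
As a quick illustration of the EICAR step, here is a minimal PowerShell sketch that drops the industry-standard (and harmless) EICAR test string inside a virtual desktop; the file path is just an example. A working agentless SVA from Trend Micro or McAfee should detect or quarantine the file almost immediately:

# Build the standard EICAR test string (split in two so this script itself is not flagged at rest)
$eicar = 'X5O!P%@AP[4\PZX54(P^)7CC)7}$' + 'EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'

# Write it as plain ASCII to an example location inside the protected virtual desktop
New-Item -ItemType Directory -Path 'C:\Temp' -Force | Out-Null
Set-Content -Path 'C:\Temp\eicar.com' -Value $eicar -Encoding ASCII

# If agentless scanning works, the file should disappear or be quarantined within moments
Test-Path 'C:\Temp\eicar.com'

If Test-Path still returns True after a minute or so, check the SVA status and the VMCI/vShield Endpoint driver inside the desktop before blaming the antivirus vendor.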

Please send me any remaining questions that come up.

Virtual Desktops (VDI) on an Airplane

Recently, while flying on United Airlines, I noticed the WiFi sign on the seat in front of me. I had never used WiFi on planes before, so I assumed it would be expensive. Imagine my surprise when it was only $8.99. It was probably cheap to compensate for the absence of TV displays.

I immediately thought of our CDI Virtual Desktop (VDI) lab in Teterboro, NJ (USA). Would the Virtual Desktop even be usable? How would video run? I connected immediately, started recording my screen and opened my Virtual Desktop. It worked! Everything except video worked well.

The idea came from Michael Webster, who had already tried doing this and written about it. I also wanted to do it in the Gunnar Berger style of protocol comparison. So, for your viewing pleasure — Virtual Desktops (VDI) on an Airplane.

——

Description:

This video is a demonstration of the Virtual Desktop (VDI) technology, located at CDI in Teterboro, NJ (USA) being accessed from an airplane 34,000 feet (10 km) high. Virtual Desktops allow you to use your Windows desktop from anywhere — even on satellite based WiFi. You will see PCoIP and HTML5 tests, Microsoft Word, HD video, YouTube video and vSphere client utilization.

Demonstration: Yury Magalif.
Lab Build: Chris Ruotolo.
Date: June 7, 2014
Connecting From: Random clouds above Missouri, USA

Equipment and Software used:

VMware View 5.3.
VMware vSphere 5.5.
Cisco C-series servers.
EMC XtremIO all flash storage array.
10Zig Apex 2800 PCoIP acceleration card with a Teradici chip.

Inspired by:

Michael Webster’s blog article:
http://longwhiteclouds.com/2014/06/06/the-vmware-view-from-the-horizon-at-38000-feet-and-8000-miles-away/

Gunnar Berger’s low-latency VDI comparison video:

 

Collateral for my session at the HP Discover 2014 conference

Yury Magalif - HP Discover 2014 presentation 01

Thank you to the 260 people who attended my session and filled out the survey!

I am very grateful that you keep coming to hear what I have to say and hope to be back next year.

My presentation is called “TB3306 – Tips and tricks on building VMware vSphere 5.5 with BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage”

Returning for the sixth year in a row, this tips-and-techniques session is for administrators and consultants who want to implement VMware ESXi 5.5 (vSphere) on HP c-Class BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage. New topics will include the auto-deployment of domain configurations and Single Root I/O Virtualization (SR-IOV) for bypassing vSwitches. The session will focus on real-world examples of VMware and HP best practices. For example, you will learn how to load-balance SAN paths; make Virtual Connect really “connect” to Cisco IP switches in a true active/active fashion; configure VLANs for the Virtual Connect modules and virtual switches; solve firmware and driver problems. In addition, you will receive tips on how to make sound design decisions for iSCSI vs. Fibre Channel, and boot from SAN vs. local boot. To get the most from this session, we recommend attendees have a basic understanding of VMware ESX, HP c-Class BladeSystem, and Virtual Connect.
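
As one concrete taste of the SAN path load-balancing tip, here is a hedged PowerCLI sketch that switches every Fibre Channel disk LUN on each ESXi 5.5 host to the Round Robin path selection policy. The vCenter address is hypothetical, and you should confirm the Round Robin choice against the current HP 3PAR/VMware best-practice guide and narrow the filter if non-3PAR LUNs share these hosts:

# Connect to vCenter first (example address; replace with yours)
Connect-VIServer -Server vcenter.example.local

# Set Round Robin on every disk LUN of every host that is not already using it
Get-VMHost |
    Get-ScsiLun -LunType disk |
    Where-Object { $_.MultipathPolicy -ne 'RoundRobin' } |
    Set-ScsiLun -MultipathPolicy RoundRobin

The same effect can be made automatic for newly presented LUNs by changing each host's default path selection policy for the array's SATP, if you prefer that approach.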

Here are the collateral files for the session:

Slides:

Yury Magalif- VMware 5.5 w BladeSystem, Virtual Connect, HP 3PAR StoreServ – TB3306 – HP Discover 2014

Use #HPtrick hashtag to chat with me on Twitter:

June 16, 2014 — Monday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).

Speaking at the HP Discover 2014 conference.

HP Discover projections

 

This is the 6th year I have had the honor of speaking at the HP Discover conference.

Thank you to the past attendees who rated my sessions highly, and to the HP staff who picked my session.

Here is the official HP link to the session:

https://h30496.www3.hp.com/connect/sessionDetail.ww?SESSION_ID=3306

Thank you to Morgan O’Leary of VMware for highlighting my session on the official VMware company blog:

http://blogs.vmware.com/vmware/2014/06/register-now-hp-discover-las-vegas-2014.html

Session time:

Wednesday, June 11, 2014
9 am – 10 am Pacific Standard Time (12:00 PM – 1 PM Eastern Standard Time).

Room: Lando 4202.

My presentation number is TB3306 and it is called “Tips and tricks on building VMware vSphere 5.5 with BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage”

This tips-and-techniques session is for administrators and consultants who want to implement VMware ESXi 5.5 (vSphere) on HP c-Class BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage. New topics will include the auto-deployment of domain configurations and Single Root I/O Virtualization (SR-IOV) for bypassing vSwitches. The session will focus on real-world examples of VMware and HP best practices. For example, you will learn how to load-balance SAN paths; make Virtual Connect really “connect” to Cisco IP switches in a true active/active fashion; configure VLANs for the Virtual Connect modules and virtual switches; solve firmware and driver problems. In addition, you will receive tips on how to make sound design decisions for iSCSI vs. Fibre Channel, and boot from SAN vs. local boot. To get the most from this session, we recommend attendees have a basic understanding of VMware ESX, HP c-Class BladeSystem, and Virtual Connect.

You will be able to download the slides from my session the evening of June 12 on this blog.

Please live Tweet points you find interesting during the session, using the following hashtag:

#HPtrick

Look for suggested tricks in the slides.

In addition, use #HPtrick hashtag to chat with me on Twitter:

June 16, 2014 — Monday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).

If you would like to attend the HP conference in person, please register:

https://h30496.www3.hp.com/portal/newreg.ww

Then, choose session TB3306 in the Session Scheduler:

https://h30496.www3.hp.com/connect/login.ww