I was glad for a chance to present at the Workshop of the Association of Environmental Authorities of NJ (AEANJ). There were great questions from the audience.
Thank you to attendees, Leon McBride for the invitation, Peggy Gallos, Karen Burris, and to my colleague Lucy Valle for videotaping.
My presentation is called “Data Portability, Data Security, and Data Availability in Cloud Services.”
Here are the collateral files for the session:
This was my first time presenting at the new NJ CTO Study Council event, and it was a wonderful experience. We did a Virtual Desktop demo which worked flawlessly.
Thank you to attendees and my speaking partners Dr. Richard O’Malley, Ralph Barca, Stan Bednarz, Dan Riordan, and to my colleagues Jeff Jackson and Ian Erikson for help with the presentation.
My presentation is called “Virtualization Roadmap through K-12.”
Here are the collateral files for the session:
Recently, while flying on United Airlines, I noticed the WiFi sign on the seat in front of me. I had never used WiFi on a plane before, so I assumed it would be expensive. Imagine my surprise when it was only $8.99. It was probably cheap to compensate for the absence of TV displays.
I immediately thought of our CDI Virtual Desktop (VDI) lab in Teterboro, NJ (USA). Would the Virtual Desktop even be usable? How would video run? I connected, started recording my screen, and opened my Virtual Desktop. It worked! Everything except video ran well.
I got the idea from Michael Webster, who had already tried this and written about it. I also wanted to do it in the Gunnar Berger style of protocol comparison. So, for your viewing pleasure: Virtual Desktops (VDI) on an Airplane.
This video is a demonstration of Virtual Desktop (VDI) technology, located at CDI in Teterboro, NJ (USA), being accessed from an airplane at 34,000 feet (10 km). Virtual Desktops allow you to use your Windows desktop from anywhere, even over satellite-based WiFi. You will see PCoIP and HTML5 tests, Microsoft Word, HD video, YouTube video, and vSphere client usage.
Demonstration: Yury Magalif.
Lab Build: Chris Ruotolo.
Date: June 7, 2014
Connecting From: Random clouds above Missouri, USA
Equipment and Software used:
VMware View 5.3.
VMware vSphere 5.5.
Cisco C-series servers.
EMC XtremIO all flash storage array.
10Zig Apex 2800 PCoIP acceleration card with a Teradici chip.
Michael Webster’s blog article:
Gunnar Berger’s low-latency VDI comparison video:
Here is the session slide deck:
Pictured here is Alexis St. Clair, who won the Nest programmable thermostat raffle. The Nest was offered by my company CDI, the sponsor of the event, along with VMware and EMC.
Twitter Chat for remaining questions:
March 12, 2014, Wednesday
2 pm to 3 pm EDT
Use Hashtag #CDIVplex in your questions.
My presentation is called “Stretching VMware clusters across distances with EMC’s Vplex – the ultimate in High Availability.”
This session is for administrators and consultants looking to stretch their VMware clusters across two geographical sites for enhanced High Availability and Disaster Recovery. Attendees will learn:
- Differences between High Availability and Disaster Recovery approaches.
- When to use VMware Stretched Clusters vs. VMware Site Recovery Manager.
- How to decrease your Recovery Time Objective across sites to under 5 minutes.
- Minimum storage, network and compute requirements for VMware Stretched Clusters.
- What distributed storage is and how it helps with VMware Stretched Clusters.
- What is EMC’s Vplex?
- How Vplex allows you to configure VMware Stretched Clusters.
- Best practices for VMware Stretched Clusters with EMC’s Vplex.
VSAN is a storage technology that pools the local disks of multiple servers into one large distributed volume. Caching is handled by an SSD drive.
Unfortunately, licensing and pricing details will only be released at VSAN General Availability, around March 10th.
Out of the gate, VSAN will have the following features:
- Full support for VMware Horizon / View (no VSAN inside View — yet)
- Up to 32 nodes.
- Up to 2 million IOPS.
- 4.5 PB of space.
- 13 VSAN Ready Node configurations at launch using Cisco, IBM, Fujitsu or Dell servers.
- Build-your-own configurations will also be supported.
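Some quick back-of-the-envelope math on the launch limits above puts those maximums in per-node terms. All inputs come straight from the feature list; the derived per-node figures are my rough estimates, not VMware specifications.

```python
# Launch-time VSAN maximums quoted in the feature list above.
max_nodes = 32
max_capacity_pb = 4.5
max_iops = 2_000_000

# Rough per-node averages at full scale (estimates, not VMware specs).
capacity_per_node_tb = max_capacity_pb * 1024 / max_nodes  # PB -> TB
iops_per_node = max_iops / max_nodes

print(f"~{capacity_per_node_tb:.0f} TB per node")   # ~144 TB
print(f"~{iops_per_node:,.0f} IOPS per node")       # ~62,500
```

In other words, hitting the 4.5 PB ceiling would require roughly 144 TB of raw capacity in every one of the 32 nodes.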
However, VSAN will also have the following requirements:
- At least 1 SSD drive.
- Up to 7 mechanical drives.
- Cannot use all SSDs or SAN storage.
- The SSD must be at least 10% of the total space.
- Requires ESXi 5.5 Update 1.
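The requirements above can be sketched as a simple sanity check for a proposed node disk layout. The numbers (at least 1 SSD, up to 7 mechanical drives, SSD at least 10% of capacity) are taken from the list; the function name and the interpretation of "10% of space" as 10% of the magnetic capacity are my assumptions, not part of any VMware tool.

```python
def vsan_node_ok(ssd_count: int, ssd_gb: float,
                 hdd_count: int, hdd_gb_each: float) -> bool:
    """Check a hypothetical node layout against the launch requirements."""
    if ssd_count < 1:                   # at least 1 SSD drive
        return False
    if hdd_count < 1 or hdd_count > 7:  # up to 7 mechanical drives
        return False
    total_hdd_gb = hdd_count * hdd_gb_each
    # Assumed reading: SSD must be at least 10% of the magnetic capacity.
    return ssd_gb >= 0.10 * total_hdd_gb

print(vsan_node_ok(1, 400, 7, 2000))   # prints False (400 GB < 10% of 14 TB)
print(vsan_node_ok(1, 1600, 7, 2000))  # prints True
```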
VSAN's competition includes:
- EMC’s ScaleIO — can build distributed storage on many operating systems (Windows and Linux, plus VMware) and with more nodes (per Duncan Epping).
- Nutanix — server, storage, VMware in a customized box.
- Simplivity — same concept as Nutanix.
- Pivot3 — same concept as Nutanix.
- Virtual Storage Appliance (VSA) solutions (VMware’s own VSA, Atlantis, HP Lefthand VSA, etc.).
- Regular storage arrays.
- Flash-only storage arrays (XtremIO, EMC VNX-F, Cisco’s Whiptail/Invicta).
There is a lot of interest in VSAN. More than 10,000 people signed up for the beta, and some VMware partners around the country are already preparing solutions to sell to eager customers.
However, everything depends on how it is licensed and priced. The price has to be lower than traditional storage and even VSA solutions (except maybe VMware’s VSA). Only then will it make sense for smaller customers.
Price aside, especially for lower-end Virtual Desktop Infrastructure (VDI), VSAN is a great fit: easy to set up (one checkbox), a minimum of only 3 servers, and enough IOPS thanks to SSD caching. We are planning to use it for VDI.
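To see why a minimum 3-node cluster is attractive for lower-end VDI, here is a rough sizing sketch. The per-node and per-desktop IOPS figures are common planning assumptions of mine, not numbers from this article or from VMware; adjust them for your own workload profile.

```python
# Rough VDI sizing for a minimum-size VSAN cluster (illustrative only).
NODES = 3                 # VSAN minimum cluster size
IOPS_PER_NODE = 20_000    # assumed SSD-cached capability per node
IOPS_PER_DESKTOP = 15     # assumed steady-state load per knowledge worker

cluster_iops = NODES * IOPS_PER_NODE
desktops = cluster_iops // IOPS_PER_DESKTOP

print(f"{cluster_iops:,} cluster IOPS -> ~{desktops:,} desktops")
```

Even with conservative assumptions, three SSD-cached nodes can plausibly serve thousands of light desktops, which is why the entry point matters more than the 32-node ceiling for this use case.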