Category: SAN

VMware vSphere misidentifies local or SAN-attached SSD drives as non-SSD

Symptom:

You are trying to configure the Host Cache feature in VMware vSphere. When vSphere encounters memory constraints, Host Cache swaps memory to a local SSD drive, similar to Windows ReadyBoost.

Host Cache requires an SSD drive, and ESXi must detect the drive type as SSD. If the drive type is NOT SSD, Host Cache Configuration is not allowed.

However, even though you installed local SSD drives in the ESXi host, and also presented an SSD-backed LUN from your storage array, ESXi refuses to recognize the drives as SSD type, and thus will not let you use Host Cache.

Solution:

Use ESXCLI commands to tag the drives as SSD so that ESXi recognizes them correctly. Then reconfigure your Host Cache.

Instructions:

Look up the name of each disk and its naa.xxxxxx identifier in the vSphere GUI. In our example, the disks that are not properly showing as SSD are:

  • Dell Serial Attached SCSI Disk (naa.600508e0000000002edc6d0e4e3bae0e)  — local SSD
  • DGC Fibre Channel Disk (naa.60060160a89128005a6304b3d121e111) — SAN-attached SSD

Check in the GUI that both show up as non-SSD type.
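The same check can be done from the CLI. Below is a sketch of the filter, shown against a captured sample of "esxcli storage core device list" output (the command itself exists only on an ESXi host; on the host, you would pipe the live command straight into grep):

```shell
# Pair each device's NAA identifier with its detected SSD flag.
# Sample output from `esxcli storage core device list` is inlined here;
# on an ESXi host, run:  esxcli storage core device list | grep -E '^naa\.|Is SSD:'
cat > /tmp/devlist.txt <<'EOF'
naa.600508e0000000002edc6d0e4e3bae0e
   Display Name: Dell Serial Attached SCSI Disk (naa.600508e0000000002edc6d0e4e3bae0e)
   Is SSD: false
naa.60060160a89128005a6304b3d121e111
   Display Name: DGC Fibre Channel Disk (naa.60060160a89128005a6304b3d121e111)
   Is SSD: false
EOF
grep -E '^naa\.|Is SSD:' /tmp/devlist.txt
```

Any device listed with "Is SSD: false" is a candidate for the tagging procedure below.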

SSH to the ESXi host. Disk names are unique per host, so you must look them up and run the commands below separately on each ESXi host.

Type the following commands, and find the NAA numbers of your disks.

In the examples below, the commands you type follow the "~ #" prompt; everything else is command output. Note the NAA identifiers of your disks.

———————————————————————————————-

~ # esxcli storage nmp device list

naa.600508e0000000002edc6d0e4e3bae0e

Device Display Name: Dell Serial Attached SCSI Disk (naa.600508e0000000002edc6d0e4e3bae0e)

Storage Array Type: VMW_SATP_LOCAL

Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.

Path Selection Policy: VMW_PSP_FIXED

Path Selection Policy Device Config: {preferred=vmhba0:C1:T0:L0;current=vmhba0:C1:T0:L0}

Path Selection Policy Device Custom Config:

Working Paths: vmhba0:C1:T0:L0

naa.60060160a89128005a6304b3d121e111

Device Display Name: DGC Fibre Channel Disk (naa.60060160a89128005a6304b3d121e111)

Storage Array Type: VMW_SATP_ALUA_CX

Storage Array Type Device Config: {navireg=on, ipfilter=on}{implicit_support=on;explicit_support=on; explicit_allow=on;alua_followover=on;{TPG_id=1,TPG_state=ANO}{TPG_id=2,TPG_state=AO}}

Path Selection Policy: VMW_PSP_RR

Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=1: NumIOsPending=0,numBytesPending=0}

Path Selection Policy Device Custom Config:

Working Paths: vmhba2:C0:T1:L0

naa.60060160a891280066fa0275d221e111

Device Display Name: DGC Fibre Channel Disk (naa.60060160a891280066fa0275d221e111)

Storage Array Type: VMW_SATP_ALUA_CX

Storage Array Type Device Config: {navireg=on, ipfilter=on}{implicit_support=on;explicit_support=on; explicit_allow=on;alua_followover=on;{TPG_id=1,TPG_state=ANO}{TPG_id=2,TPG_state=AO}}

Path Selection Policy: VMW_PSP_RR

Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=1: NumIOsPending=0,numBytesPending=0}

Path Selection Policy Device Custom Config:

Working Paths: vmhba2:C0:T1:L3

———————————————————————————————-

Note that the Storage Array Type is VMW_SATP_LOCAL for the local SSD drive and VMW_SATP_ALUA_CX for the SAN-attached SSD drive.
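You will need each device's SATP name again when adding the rules later, so it helps to pull the device/SATP pairs out of the listing. A small awk filter can do this (sample data is inlined below so the filter can be demonstrated; on the host, pipe "esxcli storage nmp device list" into the awk command instead):

```shell
# Extract NAA/SATP pairs from `esxcli storage nmp device list` output.
cat > /tmp/nmplist.txt <<'EOF'
naa.600508e0000000002edc6d0e4e3bae0e
   Device Display Name: Dell Serial Attached SCSI Disk (naa.600508e0000000002edc6d0e4e3bae0e)
   Storage Array Type: VMW_SATP_LOCAL
naa.60060160a89128005a6304b3d121e111
   Device Display Name: DGC Fibre Channel Disk (naa.60060160a89128005a6304b3d121e111)
   Storage Array Type: VMW_SATP_ALUA_CX
EOF
# Matching "Storage Array Type:" (with the colon) catches only the SATP line,
# not the "Storage Array Type Device Config" line present in the full output.
awk '/^naa\./ {dev=$1} /Storage Array Type:/ {print dev, $4}' /tmp/nmplist.txt
```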

Now we will check whether the CLI reports each of the two disks as SSD or non-SSD. Make sure to specify your own NAA number when typing the command.

———————————————————————————————-

~ # esxcli storage core device list --device=naa.600508e0000000002edc6d0e4e3bae0e

naa.600508e0000000002edc6d0e4e3bae0e

Display Name: Dell Serial Attached SCSI Disk (naa.600508e0000000002edc6d0e4e3bae0e)

Has Settable Display Name: true

Size: 94848

Device Type: Direct-Access

Multipath Plugin: NMP

Devfs Path: /vmfs/devices/disks/naa.600508e0000000002edc6d0e4e3bae0e

Vendor: Dell

Model: Virtual Disk

Revision: 1028

SCSI Level: 6

Is Pseudo: false

Status: degraded

Is RDM Capable: true

Is Local: false

Is Removable: false

Is SSD: false

Is Offline: false

Is Perennially Reserved: false

Thin Provisioning Status: unknown

Attached Filters:

VAAI Status: unknown

Other UIDs: vml.0200000000600508e0000000002edc6d0e4e3bae0e566972747561

~ # esxcli storage core device list --device=naa.60060160a89128005a6304b3d121e111

naa.60060160a89128005a6304b3d121e111

Display Name: DGC Fibre Channel Disk (naa.60060160a89128005a6304b3d121e111)

Has Settable Display Name: true

Size: 435200

Device Type: Direct-Access

Multipath Plugin: NMP

Devfs Path: /vmfs/devices/disks/naa.60060160a89128005a6304b3d121e111

Vendor: DGC

Model: VRAID

Revision: 0430

SCSI Level: 4

Is Pseudo: false

Status: on

Is RDM Capable: true

Is Local: false

Is Removable: false

Is SSD: false

Is Offline: false

Is Perennially Reserved: false

Thin Provisioning Status: yes

Attached Filters: VAAI_FILTER

VAAI Status: supported

Other UIDs: vml.020000000060060160a89128005a6304b3d121e111565241494420

———————————————————————————————-

Now we will add a rule to enable SSD on those 2 disks. Make sure to specify your own NAA number when typing the commands.

———————————————————————————————-

~ # esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.600508e0000000002edc6d0e4e3bae0e --option=enable_ssd

~ # esxcli storage nmp satp rule add --satp VMW_SATP_ALUA_CX --device naa.60060160a89128005a6304b3d121e111 --option=enable_ssd

———————————————————————————————-
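If you have more than a couple of devices (and the tagging must be repeated on every host), you can generate the rule-add commands from a small SATP/NAA table and review them before pasting into the ESXi shell. A sketch (the table below uses this example's devices; substitute your own):

```shell
# Generate one `satp rule add` command per SATP/NAA pair, for review.
while read -r satp naa; do
  printf 'esxcli storage nmp satp rule add --satp %s --device %s --option=enable_ssd\n' \
    "$satp" "$naa"
done > /tmp/ssd_rules.sh <<'EOF'
VMW_SATP_LOCAL naa.600508e0000000002edc6d0e4e3bae0e
VMW_SATP_ALUA_CX naa.60060160a89128005a6304b3d121e111
EOF
cat /tmp/ssd_rules.sh
```

Printing the commands first, instead of running them blindly, lets you double-check each NAA number against the GUI before committing the change.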

Next, we will check to see that the commands took effect for the 2 disks.

———————————————————————————————-

~ # esxcli storage nmp satp rule list | grep enable_ssd

VMW_SATP_ALUA_CX     naa.60060160a89128005a6304b3d121e111                                                enable_ssd                  user

VMW_SATP_LOCAL       naa.600508e0000000002edc6d0e4e3bae0e                                                enable_ssd                  user

———————————————————————————————-

Then, we will run storage reclaim commands on those 2 disks. Make sure to specify your own NAA number when typing the commands.

———————————————————————————————-

~ # esxcli storage core claiming reclaim -d naa.60060160a89128005a6304b3d121e111

~ # esxcli storage core claiming reclaim -d naa.600508e0000000002edc6d0e4e3bae0e

Unable to unclaim path vmhba0:C1:T0:L0 on device naa.600508e0000000002edc6d0e4e3bae0e. Some paths may be left in an unclaimed state. You will need to claim them manually using the appropriate commands or wait for periodic path claiming to reclaim them automatically.

———————————————————————————————-

If you get the error message above, that is OK. The path cannot be unclaimed while it is in use; as the message says, periodic path claiming will apply the change automatically after some time.

You can check in the CLI by running the command below and looking at the "Is SSD" field. In the example output below, the local disk still shows "Is SSD: false", which means the change has not taken effect yet.

———————————————————————————————-

~ # esxcli storage core device list --device=naa.600508e0000000002edc6d0e4e3bae0e

naa.600508e0000000002edc6d0e4e3bae0e

Display Name: Dell Serial Attached SCSI Disk (naa.600508e0000000002edc6d0e4e3bae0e)

Has Settable Display Name: true

Size: 94848

Device Type: Direct-Access

Multipath Plugin: NMP

Devfs Path: /vmfs/devices/disks/naa.600508e0000000002edc6d0e4e3bae0e

Vendor: Dell

Model: Virtual Disk

Revision: 1028

SCSI Level: 6

Is Pseudo: false

Status: degraded

Is RDM Capable: true

Is Local: false

Is Removable: false

Is SSD: false

Is Offline: false

Is Perennially Reserved: false

Thin Provisioning Status: unknown

Attached Filters:

VAAI Status: unknown

Other UIDs: vml.0200000000600508e0000000002edc6d0e4e3bae0e566972747561

———————————————————————————————-
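Rather than re-running the check by hand, you can poll until the flag flips. The sketch below uses a stub function in place of esxcli so the loop can be illustrated outside a host; on the real ESXi host, delete the stub so the actual command runs:

```shell
# Poll the device until "Is SSD: true" appears (or give up after 10 tries).
esxcli() {                  # stub for illustration only; remove on a real host
  echo "   Is SSD: true"
}

NAA=naa.600508e0000000002edc6d0e4e3bae0e
tries=0
until esxcli storage core device list --device="$NAA" | grep -q 'Is SSD: true'; do
  tries=$((tries + 1))
  if [ "$tries" -ge 10 ]; then
    echo "Device still not flagged as SSD after 10 checks"
    break
  fi
  sleep 30                  # periodic path claiming runs in the background
done
echo "Checked device $NAA"
```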

Check in the vSphere Client GUI. Rescan storage.

If it still does NOT say SSD, reboot the ESXi host. 

Then look in the GUI and rerun the command below.

———————————————————————————————-

~ # esxcli storage core device list --device=naa.60060160a89128005a6304b3d121e111

naa.60060160a89128005a6304b3d121e111

Display Name: DGC Fibre Channel Disk (naa.60060160a89128005a6304b3d121e111)

Has Settable Display Name: true

Size: 435200

Device Type: Direct-Access

Multipath Plugin: NMP

Devfs Path: /vmfs/devices/disks/naa.60060160a89128005a6304b3d121e111

Vendor: DGC

Model: VRAID

Revision: 0430

SCSI Level: 4

Is Pseudo: false

Status: on

Is RDM Capable: true

Is Local: false

Is Removable: false

Is SSD: true

Is Offline: false

Is Perennially Reserved: false

Thin Provisioning Status: yes

Attached Filters: VAAI_FILTER

VAAI Status: supported

Other UIDs: vml.020000000060060160a89128005a6304b3d121e111565241494420

———————————————————————————————-

If it still does NOT say SSD, wait; the change eventually takes effect and the drive displays as SSD in both the CLI and the GUI.

More Information:

See the article below:

Swap to host cache aka swap to SSD?

Collateral for my session at the HP Discover 2014 conference

Yury Magalif - HP Discover 2014 presentation 01

Thank you to the 260 people who attended my session and filled out the survey!

I am very grateful that you keep coming to hear what I have to say and hope to be back next year.

My presentation is called “TB3306 – Tips and tricks on building VMware vSphere 5.5 with BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage”

Returning for the sixth year in a row, this tips-and-techniques session is for administrators and consultants who want to implement VMware ESXi 5.5 (vSphere) on HP c-Class BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage. New topics will include the auto-deployment of domain configurations and Single Root I/O Virtualization (SR-IOV) for bypassing vSwitches. The session will focus on real-world examples of VMware and HP best practices. For example, you will learn how to load-balance SAN paths; make Virtual Connect really “connect” to Cisco IP switches in a true active/active fashion; configure VLANs for the Virtual Connect modules and virtual switches; solve firmware and driver problems. In addition, you will receive tips on how to make sound design decisions for iSCSI vs. Fibre Channel, and boot from SAN vs. local boot. To get the most from this session, we recommend attendees have a basic understanding of VMware ESX, HP c-Class BladeSystem, and Virtual Connect.

Here are the collateral files for the session:

Slides:

Yury Magalif- VMware 5.5 w BladeSystem, Virtual Connect, HP 3PAR StoreServ – TB3306 – HP Discover 2014

Use #HPtrick hashtag to chat with me on Twitter:

June 16, 2014 — Monday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).

Speaking at the HP Discover 2014 conference.

HP Discover projections

 

This is my 6th year to have the honor of speaking at the HP Discover conference.

Thank you to the past attendees who rated my session highly, and to the HP staff who picked my session.

Here is the official HP link to the session:

https://h30496.www3.hp.com/connect/sessionDetail.ww?SESSION_ID=3306

Thank you to Morgan O’Leary of VMware for highlighting my session on the official VMware company blog:

http://blogs.vmware.com/vmware/2014/06/register-now-hp-discover-las-vegas-2014.html

Session time:

Wednesday, June 11, 2014
9 am – 10 am Pacific Standard Time (12:00 PM – 1 PM Eastern Standard Time).

Room: Lando 4202.

My presentation number is TB3306 and it is called “Tips and tricks on building VMware vSphere 5.5 with BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage”

This tips-and-techniques session is for administrators and consultants who want to implement VMware ESXi 5.5 (vSphere) on HP c-Class BladeSystem, Virtual Connect, and HP 3PAR StoreServ storage. New topics will include the auto-deployment of domain configurations and Single Root I/O Virtualization (SR-IOV) for bypassing vSwitches. The session will focus on real-world examples of VMware and HP best practices. For example, you will learn how to load-balance SAN paths; make Virtual Connect really “connect” to Cisco IP switches in a true active/active fashion; configure VLANs for the Virtual Connect modules and virtual switches; solve firmware and driver problems. In addition, you will receive tips on how to make sound design decisions for iSCSI vs. Fibre Channel, and boot from SAN vs. local boot. To get the most from this session, we recommend attendees have a basic understanding of VMware ESX, HP c-Class BladeSystem, and Virtual Connect.

You will be able to download the slides from my session the evening of June 12 on this blog.

Please live Tweet points you find interesting during the session, using the following hashtag:

#HPtrick

Look for suggested tricks in the slides.

In addition, use #HPtrick hashtag to chat with me on Twitter:

June 16, 2014 — Monday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).

If you would like to attend the HP conference in person, please register:

https://h30496.www3.hp.com/portal/newreg.ww

Then, choose session TB3306 in the Session Scheduler:

https://h30496.www3.hp.com/connect/login.ww

VMware announces VSAN to be released around March 10th.

Ben Fathi, the CTO of VMware, announced the Virtual Storage Area Network (VSAN) feature in vSphere ESX on March 6, 2014.

VSAN is a storage technology that pools all local disks on multiple servers into one large distributed volume. Caching is done via an SSD drive.

Unfortunately, licensing and pricing details will only be released at VSAN General Availability, around March 10th.

Out of the gate, VSAN will have the following features:

  1. Full support for VMware Horizon / View (no VSAN inside View — yet)
  2. Up to 32 nodes.
  3. Up to 2 million IOPS.
  4. 4.5 PB of space.
  5. 13 VSAN Ready Node configurations at launch using Cisco, IBM, Fujitsu or Dell servers.
  6. Build your own supported.

However, VSAN will also have the following requirements:

  1. At least 1 SSD drive.
  2. Up to 7 mechanical drives.
  3. Cannot use all SSDs or SAN storage.
  4. SSD must be at least 10% of space.
  5. Need ESXi 5.5 Update 1.
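As a quick ballpark for the 10% rule, here is a back-of-the-envelope sizing sketch. The drive count and drive size below are illustrative assumptions, not VMware guidance; VMware's actual sizing advice is based on anticipated consumed capacity, so treat the numbers as a rough floor:

```shell
# Ballpark the minimum SSD size for one VSAN node under the 10% rule.
DRIVES=7                 # mechanical drives per node (the maximum allowed)
DRIVE_GB=1000            # assumed size of each mechanical drive, in GB
HDD_GB=$((DRIVES * DRIVE_GB))
MIN_SSD_GB=$((HDD_GB / 10))
echo "Raw HDD capacity: ${HDD_GB} GB; minimum SSD: ${MIN_SSD_GB} GB"
```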

VSAN competitors:

  1. EMC’s ScaleIO — can build distributed storage on any OS out there (Windows, Linux plus VMware) and more nodes (per Duncan Epping).
  2. Nutanix — server, storage, VMware in a customized box.
  3. Simplivity — same concept as Nutanix.
  4. Pivot3 — same concept as Nutanix.
  5. Virtual Storage Appliance (VSA) solutions (VMware own VSA, Atlantis, HP Lefthand VSA, etc.).
  6. Regular storage arrays.
  7. Flash only storage arrays (XtremIO, EMC VNX-F, Cisco’s Whiptail/Invicta)

Analysis:

There is a lot of interest in VSAN. More than 10,000 people signed up for the beta. Some VMware partners around the country are already preparing solutions to sell to eager customers.

However, everything depends on how VSAN is licensed and priced. The price has to be lower than traditional storage and even VSA solutions (except maybe VMware's own VSA). Only then will it make sense for smaller customers.

Pricing aside, VSAN is perfect for lower-end Virtual Desktop Infrastructure (VDI): it is easy to set up (one checkbox), needs a minimum of only 3 servers, and provides enough IOPS with SSD caching. We are planning to use it for VDI.

VSAN nodes

Twitter Chat about HP blades, VMware, Virtual Connect & Storage

 

Use the #HPtrick hashtag to chat with me on Twitter:

June 18, 2013 — Tuesday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).

The chat will be about HP blades, VMware, Virtual Connect & HP Storage, to answer new and remaining questions for my session at HP Discover 2013 “TB2603 – Building VMware vSphere 5.1 with blades, Virtual Connect and EVA.”

How to Twitter Chat:

Go to Twitter.com or the Twitter mobile app and search using the universal search field on top for #HPtrick

Then, tweet a question and make sure to include the #HPtrick hashtag in it.

I will be monitoring the #HPtrick feed for questions, and will respond with an answer that also includes the #HPtrick hashtag.

Refresh your browser page or mobile app to see the answer to your question.

See you there!

Collateral for my session at the HP Discover 2013 conference

Thank you to those who attended my session and filled out the survey! I hope to be back next year.

My presentation is called “TB2603 – Building VMware vSphere 5.1 with blades, Virtual Connect and EVA.”

The session will discuss how VMware ESXi 5.1 (vSphere) can be implemented on HP c-Class blades, Virtual Connect and EVA, as well as the Flex-10/10D module and WS460c Gen8 blade with eight GPUs. The presentation will focus on real-world VMware and HP best practices, including how to load balance storage area network paths, how to make Virtual Connect really “connect” to Cisco Internet protocol switches, how to configure virtual local area networks for the Virtual Connect modules and VMware virtual switches and how to solve firmware and driver headaches with Virtual Connect and ESXi 5.1. Attendees will also receive tips on design decisions. A basic understanding of VMware, c-Class blades and Virtual Connect is recommended.

Here are the collateral files for the session:

Slides:

TB2603_Magalif_v5

HP Documents:

DISCOVER_2013_HOL2653_Student_LAB_Guide_Rev.1.1

HOL2653-VC4

Use #HPtrick hashtag to chat with me on Twitter:

June 18, 2013 — Tuesday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).

Speaking at the HP Discover 2013 conference

HP Discover screenshot

This is my 5th year to have the honor of speaking at the HP Discover conference.

Thank you to the past attendees who rated my session highly, and to the HP staff who picked my session.

Session time:

Wednesday, June 12, 2013
4:30 PM – 5:30 PM Pacific Standard Time (7:30 PM – 8:30 PM Eastern Standard Time).

My presentation is called “TB2603 – Building VMware vSphere 5.1 with blades, Virtual Connect and EVA.”

The session will discuss how VMware ESXi 5.1 (vSphere) can be implemented on HP c-Class blades, Virtual Connect and EVA, as well as the Flex-10/10D module and WS460c Gen8 blade with eight GPUs. The presentation will focus on real-world VMware and HP best practices, including how to load balance storage area network paths, how to make Virtual Connect really “connect” to Cisco Internet protocol switches, how to configure virtual local area networks for the Virtual Connect modules and VMware virtual switches and how to solve firmware and driver headaches with Virtual Connect and ESXi 5.1. Attendees will also receive tips on design decisions. A basic understanding of VMware, c-Class blades and Virtual Connect is recommended.

Here is the official HP link to the session:

https://h30496.www3.hp.com/connect/sessionDetail.ww?SESSION_ID=2603&tclass=popup

You will be able to download the session slides starting the evening of June 12 on this blog.

This year, I decided to do some Social Media experiments.

So, please live Tweet points you find interesting during the session, using the following hashtag:

#HPtrick

Look for suggested tricks in the slides.

In addition, use #HPtrick hashtag to chat with me on Twitter:

June 18, 2013 — Tuesday, 2-3 pm Eastern Standard Time (11 am – 12 pm Pacific Standard Time).

If you would like to attend the HP conference in person, please register:

https://h30496.www3.hp.com/portal/newreg.ww

Then, choose session TB2603 in the Session Scheduler:

https://h30496.www3.hp.com/connect/login.ww