XenDesktop Storage design Series: Part 1


As a solution architect, I have been working with customers, NetApp sales engineers, and partners to design and deploy virtual desktops for the last four years. I hope this blog series helps you deploy virtual desktops successfully.

I plan to discuss provisioning methods, storage protocol design, storage IO anatomy, and storage sizing. If you have any questions, I am happy to add more topics.

Citrix XenDesktop has two provisioning methods: Provisioning Services (PVS) and Machine Creation Services (MCS). You can also use NetApp FlexClone.

Design decision from Citrix:

[Diagram: Citrix provisioning design decision chart]

First, let's dive into the MCS cloning method. MCS uses hypervisor snapshots, similar to VMware's linked-clone technology; the snapshots can come from vSphere, XenServer, or Hyper-V. The diagram below shows why MCS generates roughly three times the IO of PVS or a NetApp clone: one write from the VM requires a metadata read, a metadata write, and then the data write. Hypervisor snapshots are a costly approach with limited scalability and performance. The takeaway: use MCS only for small-scale (fewer than 1,000 desktops) VDI-only deployments.

[Diagram: MCS write IO anatomy — one guest write triggers a metadata read, a metadata write, and a data write]
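To see what this write amplification means for sizing, here is a rough back-of-the-envelope model. The per-VM steady-state IOPS figure and read/write ratio are illustrative assumptions for this sketch, not vendor-published numbers:

```python
# Rough sketch of how MCS's ~3x write amplification inflates backend IOPS.
# The per-VM IOPS (10) and write ratio (80%) are illustrative assumptions;
# measure your own workload before sizing real storage.

def backend_iops(vms, iops_per_vm=10, write_ratio=0.8, write_amplification=3):
    """Estimate backend IOPS for a pool of VDI VMs.

    With MCS snapshots, each guest write costs `write_amplification`
    backend operations (metadata read + metadata write + data write).
    PVS or a NetApp clone would use write_amplification=1.
    """
    writes = vms * iops_per_vm * write_ratio
    reads = vms * iops_per_vm * (1 - write_ratio)
    return reads + writes * write_amplification

# 1,000 VMs: MCS snapshots vs. PVS/NetApp clone
mcs = backend_iops(1000, write_amplification=3)
pvs = backend_iops(1000, write_amplification=1)
print(mcs, pvs)  # under these assumptions, MCS needs ~2.6x the backend IOPS
```

Under these assumed numbers the array must deliver 26,000 IOPS for MCS versus 10,000 for PVS, which is why MCS stops scaling well past roughly 1,000 desktops.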

Now let's look at PVS. The diagram below shows the two main components of a PVS-based XenDesktop deployment: the vDisk and the write cache. In an enterprise deployment, both should live on shared storage to ensure data integrity, backup, scalability, and easy management.

[Diagram: PVS architecture — vDisk and write cache]

PVS streams the vDisk (the master image) over the network, so you need at least two 1 GbE NICs on your PVS servers.

vDisk attributes:

  • Read-only while streaming
  • Shared by many VMs, which makes updates easy
  • Recommended storage protocol: CIFS/SMB

Before PVS 6.1, I recommended an iSCSI LUN for the PVS vDisk, because with PVS 6.0 and earlier CIFS could not use Windows file caching: the PVS server generated constant CIFS traffic to the storage just to read the vDisk. PVS 6.1 changed the Windows registry settings for opportunistic locking (oplocks) and SMB caching, so you can now comfortably use CIFS/SMB for the vDisk. CIFS/SMB brings good performance and easy vDisk updates.

One PVS server normally supports 500-600 VMs. For a 5,000-seat VDI deployment you need N+1 PVS servers — in this case 11. Pushing the same vDisk change to 11 servers with local storage is no fun. CIFS behaves like a Windows file share: you update the vDisk once, and every PVS server sees the change.
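The N+1 arithmetic above can be sketched as a tiny sizing helper. Using the conservative end of the 500-600 VMs-per-server range:

```python
import math

def pvs_servers_needed(seats, vms_per_server=500):
    """N+1 PVS server count: enough servers to stream to every target
    device, plus one spare for failover. 500 VMs per server is the
    conservative end of the 500-600 range cited above."""
    return math.ceil(seats / vms_per_server) + 1

print(pvs_servers_needed(5000))   # 5000 / 500 = 10, plus 1 spare = 11
```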

Now you might ask: where does the unique per-session VDI data live? The answer is the write cache.

Write cache attributes:

  • One write-cache file per VM
  • The write-cache file is emptied when the VM reboots
  • Recommended storage protocol: NFS

NFS is a space-efficient, easily managed protocol. NFS volumes are thin provisioned by default, so you can give each user a 5 GB write cache. Not every user consumes 5 GB at the same time, and thin provisioning only takes up space for data actually written.
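A quick sketch of the thin-provisioning savings. The average actual usage per user is an illustrative assumption (real numbers depend on your workload and reboot frequency):

```python
def writecache_space_gb(users, provisioned_gb=5, avg_used_gb=1.0):
    """Compare thick vs. thin provisioning for PVS write cache on NFS.

    `avg_used_gb` — the average data each user actually writes before
    the cache empties on reboot — is an illustrative assumption;
    measure your own environment.
    """
    thick = users * provisioned_gb   # space reserved up front
    thin = users * avg_used_gb       # space actually consumed
    return thick, thin

thick, thin = writecache_space_gb(5000)
print(thick, thin)  # 25000 GB reserved thick vs ~5000 GB consumed thin
```

With thin provisioning, the same 5,000-seat deployment carries a 25 TB logical commitment while consuming only what users actually write.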

I recommend PVS for non-persistent desktops and NetApp clones for persistent desktops.

In the next blog post I would like to discuss how to size a virtual desktop solution and share some performance data.

About zhurachel

I am a solution architect focused on virtualization and storage.