CVD: FlexPod Reference Architecture for a 2000 Seat Virtual Desktop Infrastructure with Citrix XenDesktop 7.1 on VMware vSphere 5.1: Storage

Two FAS3240 nodes with four DS2246 shelves were used in this deployment to support 1,450 hosted shared desktop (HSD) users and 550 hosted VDI (HVD) users. The cluster runs clustered Data ONTAP 8.2P4.

To support the differing security, backup, performance, and data-sharing needs of users, the physical data storage resources on the storage system are grouped into one or more aggregates. You can design and configure your aggregates to provide the appropriate level of performance and redundancy for your storage requirements. For aggregate best practices, see Technical Report 3437: Storage Subsystem Resiliency Guide. Because this is a single deployment with two shelves per node, this design uses one data aggregate per node.

You create an aggregate to provide storage to one or more volumes. Aggregates are physical storage objects; each is associated with a specific node in the cluster.
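As a sketch, a data aggregate like the ones in this design could be created with the clustered Data ONTAP CLI. The node name, aggregate name, and disk count below are illustrative, not the exact values from the CVD:

```shell
# Create a RAID-DP data aggregate on node 1
# (names and disk count are hypothetical examples)
storage aggregate create -aggregate aggr1_node1 -node fas3240-01 \
    -diskcount 23 -raidtype raid_dp
```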

The following table contains the aggregate configuration. By default, NetApp uses three disks for the root aggregate.

[Screenshot: aggregate configuration table]

Volumes are data containers that enable you to partition and manage your data. Volumes are the highest-level logical storage objects. Unlike aggregates, which are composed of physical storage resources, volumes are completely logical objects. Understanding the types of volumes and their associated capabilities enables you to design your storage architecture for maximum storage efficiency and ease of administration.

A FlexVol volume is a data container associated with a storage virtual machine. It gets its storage from a single associated aggregate, which it might share with other FlexVol volumes or Infinite Volumes. It can contain files in a NAS environment or LUNs in a SAN environment.
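A minimal sketch of creating a FlexVol volume from the CLI; the virtual storage server, volume, and aggregate names are hypothetical:

```shell
# Create a FlexVol volume on the hypothetical storage server "infra_svm",
# drawing its space from aggregate "aggr1_node2" and mounting it at /infra_ds1
volume create -vserver infra_svm -volume infra_ds1 -aggregate aggr1_node2 \
    -size 500g -state online -junction-path /infra_ds1
```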

The following table lists the FlexVol configuration.

[Screenshot: FlexVol configuration table]

The following diagram explains the storage layout.

The write cache for 725 RDS users is on node 1; the write cache for the other 725 RDS users and the 550 HVD users is on node 2. Two CIFS virtual storage servers are created for the HSD and HVD users, one on each storage node. The VMware ESXi 5.1 SAN boot volume is on node 1, and the infrastructure virtual storage server is on node 2.

I recommend using the NetApp SPM tool to work out the storage data layout. NetApp partners and sales engineers should all have access to this tool.

For hosted shared desktops and hosted VDI, the storage best practices are similar.

PVS vDisk. CIFS/SMB 3 is used to host the PVS vDisk. SMB 3 allows the same vDisk to be shared among multiple PVS servers while remaining resilient during a storage node failover. This results in significant operational savings and architectural simplicity.
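Sharing the vDisk volume over SMB could look like the following sketch; the virtual storage server, share, and path names are assumptions, not values from the CVD:

```shell
# Publish the vDisk volume as an SMB share so multiple PVS servers can mount it
# (vserver, share, and path names are illustrative)
vserver cifs share create -vserver hsd_svm -share-name pvs_vdisk -path /pvs_vdisk
```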

PVS write cache file. The PVS write cache file is hosted on NFS datastores for simplicity and scalability. Deduplication should not be enabled on this volume.
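To make sure storage efficiency stays disabled on a write-cache volume, something like the following could be used; the storage server and volume names are hypothetical:

```shell
# Ensure deduplication is turned off on the PVS write-cache volume
# (vserver and volume names are illustrative)
volume efficiency off -vserver hsd_svm -volume pvs_wc1
```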

Profile management. To make sure that user profiles and settings are preserved, we leverage the profile management software Citrix UPM to redirect user profiles to the CIFS home directories.

User data management. NetApp recommends hosting the user data on CIFS home directories to preserve the data when a VM is rebooted or redeployed.

Monitoring and management. NetApp recommends using OnCommand Balance and Citrix Desktop Director to provide end-to-end monitoring and management of the solution.


About zhurachel

I am a solution architect focused on virtualization and storage.
This entry was posted in flexpod, NetApp Clustered ONTAP, virtual desktop.
