CVD: FlexPod Reference Architecture for a 2000 Seat Virtual Desktop Infrastructure with Citrix XenDesktop 7.1 on VMware vSphere 5.1: Architecture

This CVD provides a 2000-seat virtual desktop infrastructure using Citrix XenDesktop 7.1, built on Cisco UCS B200 M3 blades with NetApp FAS3200-series storage and the VMware vSphere ESXi 5.1 hypervisor platform.

The architecture deployed is highly modular. While each customer’s environment might vary in its exact configuration, once the reference architecture contained in this document is built, it can easily be scaled as requirements and demands change. This includes scaling both up (adding additional resources within a UCS Domain) and out (adding additional UCS Domains and NetApp FAS Storage arrays).

The 2000-user XenDesktop 7 solution includes Cisco networking, Cisco UCS, and NetApp FAS storage, and fits into a single data center rack, including the access-layer network switches.

[Figure: CVD solution rack]

The workload contains the following hardware:

  • Two Cisco Nexus 5548UP Layer 2 Access Switches
  • Two Cisco UCS 6248UP Series Fabric Interconnects
  • Two Cisco UCS 5108 Blade Server Chassis with two 2204XP IO Modules per chassis
  • Four Cisco UCS B200 M3 Blade servers with Intel E5-2680v2 processors, 384GB RAM, and VIC1240 mezzanine cards for the 550 hosted Windows 7 virtual desktop workloads with N+1 server fault tolerance.
  • Eight Cisco UCS B200 M3 Blade servers with Intel E5-2680v2 processors, 256 GB RAM, and VIC1240 mezzanine cards for the 1450 hosted shared Windows Server 2012 server desktop workloads with N+1 server fault tolerance.
  • Two Cisco UCS B200 M3 Blade servers with Intel E5-2650 processors, 128 GB RAM, and VIC1240 mezzanine cards for the infrastructure virtualized workloads
  • Two-node NetApp FAS3240 dual-controller storage system running clustered Data ONTAP, with 4 disk shelves, and converged and 10GbE ports for FCoE and NFS/CIFS connectivity, respectively.
  • (Not Shown) One Cisco UCS 5108 Blade Server Chassis with 3 UCS B200 M3 Blade servers with Intel E5-2650 processors, 128 GB RAM, and VIC1240 mezzanine cards for the Login VSI launcher infrastructure

Our design goal is a highly available, high-performance, and highly efficient end-to-end solution. Below, I explain how we achieve this goal at the server, network, and storage layers.

Server

The logical architecture of the validated design supports 2000 users within two chassis and fourteen blades, which provides physical redundancy for the chassis and blade servers for each workload.

In vCenter, we created three resource pools and followed N+1 high availability to achieve our end-to-end server, network, and storage resilience goal:

  • 2 UCS servers for infrastructure VMs
  • 6 UCS servers for 550 hosted VDI VMs
  • 8 UCS servers for 64 hosted shared VMs serving 1450 users

Network

We configured a fully redundant and highly available network. The configuration guidelines indicate which redundant component, A or B, is being configured with each step. For example, Nexus A and Nexus B identify the pair of Cisco Nexus switches being configured. The Cisco UCS Fabric Interconnects are configured similarly.

For best performance, we use a 10GbE network with jumbo frames between UCS and storage.

[Figure: CVD network topology]

Five VLANs are configured to ensure QoS on UCS and the switches.
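On the NetApp side, the VLANs and jumbo frames map to VLAN ports and an MTU setting on the interface groups. A minimal sketch, assuming clustered Data ONTAP 8.1/8.2 syntax; the node, ifgrp, and VLAN names below are examples taken from later posts in this blog:

  • Create a multimode LACP interface group on each node:
    network port ifgrp create -node ccr-cmode-01-01 -ifgrp a0a -distr-func ip -mode multimode_lacp
  • Create a VLAN port for the storage (NFS) network:
    network port vlan create -node ccr-cmode-01-01 -vlan-name a0a-3074
  • Enable jumbo frames on the VLAN port:
    network port modify -node ccr-cmode-01-01 -port a0a-3074 -mtu 9000

The same MTU must be configured end to end on the UCS vNICs and Nexus switches for jumbo frames to work.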


Storage


This is the first Clustered Data ONTAP CVD.

With the release of NetApp clustered Data ONTAP (clustered ONTAP), NetApp was the first to market with enterprise-ready, unified scale-out storage. Developed from a solid foundation of proven Data ONTAP technology and innovation, clustered ONTAP is the basis for virtualized shared storage infrastructures that are architected for nondisruptive operations over the lifetime of the system. For details on how to configure clustered Data ONTAP with VMware® vSphere™, refer to TR-4068: VMware vSphere 5 on NetApp Data ONTAP 8.x Operating in Cluster-Mode.
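As a very rough sketch of the ONTAP side of NFS access for vSphere (clustered Data ONTAP CLI; the Vserver name, subnet, and export policy details are placeholders and will vary per environment):

  • Enable NFSv3 on the Vserver:
    vserver nfs create -vserver infra_svm -v3 enabled
  • Allow the ESXi hosts' subnet to mount the exported volumes:
    vserver export-policy rule create -vserver infra_svm -policyname default -clientmatch 172.20.74.0/24 -rorule sys -rwrule sys -superuser sys -protocol nfs

Refer to TR-4068 for the complete and authoritative procedure.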

All clustering technologies follow a common set of guiding principles. These principles include the following:

  • Nondisruptive operation. The key to efficiency and the basis of clustering is the ability to make sure that the cluster does not fail—ever.
  • Virtualized access is the managed entity. Direct interaction with the nodes that make up the cluster is in and of itself a violation of the term cluster. During the initial configuration of the cluster, direct node access is a necessity; however, steady-state operations are abstracted from the nodes as the user interacts with the cluster as a single entity.
  • Data mobility and container transparency. The end result of clustering—that is, the nondisruptive collection of independent nodes working together and presented as one holistic solution—is the ability of data to move freely within the boundaries of the cluster.
  • Delegated management and ubiquitous access. In large complex clusters, the ability to delegate or segment features and functions into containers that can be acted upon independently of the cluster means the workload can be isolated; it is important to note that the cluster architecture itself must not place these isolations. This should not be confused with security concerns around the content being accessed.

I will discuss the details of the storage architecture, solution best practices, and test results for this solution in my future blogs.


XenDesktop 7.1 with vSphere 5.1 on NetApp CVD is published

Finally, after many late nights, we finished our CVD, XenDesktop 7.1 with vSphere 5.1 on NetApp. I had a great time working with the best engineers at Citrix and Cisco. I will start my blog series on this CVD shortly. Happy reading!



How to deploy Citrix ShareFile on NetApp?

I introduced Citrix ShareFile and why NetApp is the best storage for ShareFile. Now we are ready to set up a production ShareFile environment on NetApp. To deploy Citrix ShareFile, you need the following components:

Any type of hypervisor

  • Two VMs: Windows Server 2008 R2 with IIS and the Citrix StorageZones Controller software.

  • Two VMs: Windows Server 2008 R2 with Active Directory® and Domain Name Service (DNS) functions. NetApp recommends using the existing AD and DNS infrastructure.

Note that physical servers can also be used to run the above-mentioned server roles.

NetApp Storage

Note that Data ONTAP systems running 7-mode are also supported for the ShareFile solution.

  • Licenses: CIFS protocol and deduplication
  • Additional licenses: SnapRestore, SnapMirror, and SnapVault (based on the NRM-CS configuration)

Recommended management software: OnCommand System Manager 2.0 and Virtual Storage Console (VSC) 2.0 for Citrix XenServer

[Figure: ShareFile deployment on NetApp]

A CIFS share needs to be created for ShareFile:

  • A New CIFS Share. Citrix ShareFile requires a CIFS share with no data in it since the system will create its own folder structure. The object files created by users are stored in a folder called persistentstorage (see Figure 9). Follow the NetApp best practices to set up a virtual storage server with a dedicated CIFS share.
  • CIFS Share Permissions. Once the setup is completed, configure the new CIFS share using System Manager. It is important to remove the default account permissions (everyone) and enter a dedicated administrative account for the StorageZones Controller to access the CIFS share through SMB. Specifying an administrative account to access the users’ files will prevent untrusted users from accessing this UNC path.
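A minimal CLI sketch of those two steps, assuming clustered Data ONTAP 8.2-style commands; the Vserver, share, and account names below are examples only:

  • Create the empty CIFS share for ShareFile:
    vserver cifs share create -vserver sharefile_svm -share-name sharefile_data -path /sharefile
  • Remove the default Everyone permission:
    vserver cifs share access-control delete -vserver sharefile_svm -share sharefile_data -user-or-group Everyone
  • Grant the dedicated StorageZones Controller service account full control:
    vserver cifs share access-control create -vserver sharefile_svm -share sharefile_data -user-or-group DEMO\sz-svc -permission Full_Control

System Manager can, of course, do the same thing through the GUI, as described above.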

For detailed information about clustered Data ONTAP, refer to TR-3982 and TR-3967.

NetApp recommends creating a redundant infrastructure for the platform. Depending on the customer's infrastructure, different approaches are available to configure redundant network access.


Why is NetApp the best storage for ShareFile?

Citrix ShareFile StorageZones and NetApp offer a cost-effective and scalable file sharing solution with the following key benefits:

  • Storage efficiency
  • On-demand flexibility
  • Nondisruptive operations
  • Unified storage architecture
  • Data protection

Storage Efficiency

With the ShareFile object storage–based architecture, every update to the user’s files results in the creation of a new file on the back-end storage. Therefore, NetApp storage efficiency technologies help deduplicate the storage required for this highly duplicated data, thereby allowing storage cost savings. Additional savings are also obtained with the use of cross-file deduplication. Customers simultaneously achieve mobility and transparent data access with Citrix ShareFile.
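As a sketch of turning on deduplication for the ShareFile CIFS volume (clustered Data ONTAP CLI; the Vserver and volume names are examples):

  • Enable storage efficiency on the volume:
    volume efficiency on -vserver sharefile_svm -volume sharefile_data
  • Run deduplication nightly at 1 a.m.:
    volume efficiency modify -vserver sharefile_svm -volume sharefile_data -schedule sun-sat@1
  • Verify the efficiency state and schedule:
    volume efficiency show -vserver sharefile_svm -volume sharefile_data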

On-Demand Flexibility

Up to 24 nodes can be added nondisruptively to a clustered Data ONTAP deployment, and a single CIFS share can span multiple nodes. As a result, customers can scale out transparently without downtime.

Nondisruptive Operation

NetApp nondisruptive operations (NDO) allow seamless storage operations without downtime. Storage upgrades and maintenance can easily be achieved without interrupting the user’s access to files.

File sharing is critical to business users; any downtime results in loss of productivity. More importantly, it can result in poor customer satisfaction. NDO in Data ONTAP provides the following benefits.

  • Refresh hardware and software transparently without losing access to the customer’s data. When it is time for an update, administrators can simply move the CIFS volume to another node within the cluster nondisruptively to retire the old hardware from the cluster.
  • Move data to a different node to redistribute the workload across a cluster, as illustrated in the sketch after this list. This task can be accomplished during normal business hours, allowing for a more dynamic platform, without waiting for the next maintenance window.
  • Maintenance operations on specific hardware or software components can also be accomplished transparently. For example, adding a Flash Cache acceleration card or redistributing data across controllers can be done nondisruptively.
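The data mobility piece is a single command in clustered Data ONTAP. A hedged example, with placeholder Vserver, volume, and aggregate names:

  • Move the CIFS volume to an aggregate on another node, nondisruptively:
    volume move start -vserver sharefile_svm -volume sharefile_data -destination-aggregate aggr_node02
  • Monitor the move and cutover:
    volume move show -vserver sharefile_svm -volume sharefile_data

Clients keep accessing the share through the same LIF while the volume moves.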

Unified Storage Architecture

The NetApp storage array can be shared between Citrix ShareFile, Citrix XenApp, and Citrix XenDesktop deployments. Several protocols can be leveraged on the same physical array. For example, numerous volumes are required when building a XenDesktop environment with Citrix Provisioning Server (PVS). It is possible to host the write-back cache files on an NFS volume while the desktop images (vDisks) are set up on an iSCSI LUN or CIFS share. Additionally, the user profiles and ShareFile folders can remain on two separate CIFS shares.

NetApp Snapshot Based Backups

NetApp Snapshot technology provides great value when backing up the CIFS share object repository for ShareFile StorageZones, since Snapshot copies consume very little additional storage. By implementing Snapshot, administrators benefit from quick backups, flexible schedules, and custom retention policies. Remote replication of the backups can be achieved by leveraging NetApp SnapMirror® technology, and secondary backups can be achieved using NetApp SnapVault® technology.

NetApp OnCommand System Manager should be leveraged to configure backup and replication policies for the CIFS share.
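A rough sketch of what those policies look like from the CLI (clustered Data ONTAP; the policy, Vserver, volume, and aggregate names are examples, and this assumes the clusters and Vservers are already peered):

  • Create a Snapshot policy with a custom schedule and retention:
    volume snapshot policy create -vserver sharefile_svm -policy sharefile_backup -enabled true -schedule1 hourly -count1 6 -schedule2 daily -count2 14
  • Attach the policy to the ShareFile CIFS volume:
    volume modify -vserver sharefile_svm -volume sharefile_data -snapshot-policy sharefile_backup
  • Create a data-protection destination volume and replicate it with SnapMirror:
    volume create -vserver dr_svm -volume sharefile_dr -aggregate aggr_dr -size 2TB -type DP
    snapmirror create -source-path sharefile_svm:sharefile_data -destination-path dr_svm:sharefile_dr -type DP -schedule daily
    snapmirror initialize -destination-path dr_svm:sharefile_dr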

On top of all these benefits, NetApp has developed NetApp Recovery Manager for Citrix ShareFile (NRM-CS). NRM-CS is a Citrix Ready certified product that provides an administrator-driven user file recovery solution for Citrix ShareFile StorageZones deployments using on-premises NetApp storage. I will discuss ShareFile deployments and NRM-CS in my next two blogs.


Citrix ShareFile Series – Overview

I am working on my Citrix ShareFile presentation for NetApp Insight. NetApp Insight will be in Las Vegas from Oct 7 to 9. If you have a chance to attend, please come to my sessions, XenDesktop 7 Best Practice and Design and Citrix ShareFile for Enterprise Mobility, or find me at the virtualization ask-the-expert booth.

Citrix ShareFile Enterprise is a follow-me data solution that meets the mobility and collaboration needs of all users while allowing IT to manage and store data wherever they want.

The ShareFile product architecture consists of two key components: the ShareFile.com Control Plane and Citrix-managed or customer-managed StorageZones. The client device requests access to the follow-me data service.

The Control Plane performs functions such as user authentication, access control, reporting, and all other brokering functions. The Control Plane is hosted in Citrix datacenters and managed by Citrix as a service. Customers can choose between the US Control Plane and the European Control Plane to address performance and compliance requirements.


Citrix ShareFile StorageZones allows customers to manage their data on the premises or in the cloud. Previously, with a similar architecture, Citrix ShareFile supported only Citrix-managed StorageZones, providing a pure cloud offering hosted on Amazon EC2 or Microsoft Windows Azure. These cloud offerings can also be combined with customer-managed StorageZones to provide a hybrid architecture.

This hybrid model, illustrated in the figure below, provides customers with the flexibility to leverage cloud or on-premises deployments depending on their compliance and performance requirements.

[Figure: hybrid StorageZones deployment model]

A ShareFile StorageZone has two options: native ShareFile data or an existing storage repository. The existing storage repository requires ShareFile StorageZones Connectors. StorageZones Connectors are available with ShareFile Enterprise and are also provided with the XenMobile Enterprise, MDM, and App Editions.

StorageZones Connectors allow ShareFile remote users to securely access documents and folders stored in SharePoint document libraries and on network file shares. A StorageZones Connector is embedded on a StorageZones Controller and integrates with the on-premises StorageZones.


In my next blog, I will talk about the ShareFile solution architecture, including proof-of-concept and production designs.


Where should I put Citrix PVS vDisk?

There is often a debate about whether we should put the Citrix XenDesktop PVS vDisk on a CIFS share or a SAN LUN. I like CIFS's simplicity and a SAN LUN's resilience. Can we have both? The answer is SMB 3.

To verify this, I set up a lab with a four-node NetApp cluster and XenDesktop 7. I placed the vDisk in three different locations and tested LIF failover and storage node failover. During the LIF failover and the storage failover, IOMeter continued to generate workload.

vDisk Location:

  • vDisk hosted on an SMB 2 CIFS share on NetApp ONTAP 8.1
  • vDisk hosted on an SMB 3 CIFS share on NetApp ONTAP 8.2
  • vDisk hosted on an iSCSI multipathing LUN on NetApp ONTAP 8.1.2

Test scenarios:

  • CIFS LIF move
  • Storage node failover

CIFS LIF migration test procedure:

You must have a failover group set up properly to do this procedure. Please see my blog on how to set up a failover group.

Steps:

  1. Use the network interface modify command to change the Status Admin of the LIF to down (offline):
    network interface modify -vserver vs1 -lif lif1 -status-admin down
  2. Use the network interface modify command to change the Status Admin of the LIF to up (online):
    network interface modify -vserver vs1 -lif lif1 -status-admin up
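Instead of bouncing the LIF administratively, you can also move it directly. A hedged alternative, with placeholder Vserver, LIF, node, and port names:

  • Migrate the LIF to another node/port in its failover group:
    network interface migrate -vserver vs1 -lif lif1 -destination-node node2 -destination-port a0a-3048
  • Send it back to its home port afterwards:
    network interface revert -vserver vs1 -lif lif1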
When thousands of desktop users share one infrastructure, the resilience of the virtual desktop solution is critical. You should verify that each clustered ONTAP node can successfully fail over to its partner. This helps ensure that the system is configured correctly and that you can maintain access to data if a real failure occurs.

Storage node failover procedure:

Steps:

  1. Check the failover status by entering the following command:
    storage failover show
  2. Have the partner node take over the node:
    storage failover takeover -ofnode nodename
  3. Verify that the takeover was completed by using the storage failover show command.
  4. Give back the storage to the original node:
    storage failover giveback -ofnode nodename
  5. Verify that the giveback was completed by using the storage failover show-giveback command.
  6. Revert all LIFs back to their home nodes by entering the following command:
    network interface revert *

Test findings:

The table below shows the test results:

[Table: impact of CIFS LIF migration and storage node failover for each vDisk location]

During the LIF migration, there is no impact on the virtual desktops. During the node failover, the vDisk on the SMB 2 CIFS share causes the CIFS share and the VMs to hang for 1 to 3 minutes, depending on how fast the failover finishes.

The SMB 3 share and the iSCSI LUN show no impact on the VMs during storage node failover.

Summary:

CIFS is much easier to set up than multipathing SAN LUNs. Once set up, you can mount one CIFS share to multiple PVS servers, and when you update the vDisk on the CIFS share, all the PVS servers get the update. CIFS with SMB 3 brings both simplicity and resilience to the solution.


LIF Failover group – Clustered ONTAP best practice series

When you set up a XenDesktop deployment on NetApp clustered ONTAP, it is important to configure user-defined LIF failover groups. LIF failover refers to the automatic migration of a LIF in response to a link failure on the LIF's current network port. When such a port failure is detected, the LIF is migrated to a working port.

It is a best practice to use 10GbE network ports for data LIFs and 1GbE network ports for management LIFs, so that when a failover occurs, a 10GbE data LIF does not land on a 1GbE management port. It is also a best practice to create a LIF for each volume. You therefore need to assign each LIF to the proper failover group.

We have four networks in the 2000-seat XenDesktop 7 virtual desktop environment and five failover groups.

Failover groups:

  • fg-cifs-3048 – CIFS for user data/profiles
  • fg-cifs-3073 – server data share (optional)
  • fg-nfs-3074 – storage network, NFS volumes or iSCSI LUNs
  • fg-cluster_mgmt – cluster management
  • fg-node_mgmt – node management
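Creating these groups is one command per node and port. A sketch assuming the clustered Data ONTAP 8.1/8.2 failover-group syntax (repeat for each node and each group; the failover-group syntax changed in later ONTAP releases):

    network interface failover-groups create -failover-group fg-nfs-3074 -node ccr-cmode-01-01 -port a0a-3074
    network interface failover-groups create -failover-group fg-nfs-3074 -node ccr-cmode-01-02 -port a0a-3074
    network interface failover-groups create -failover-group fg-cifs-3048 -node ccr-cmode-01-01 -port a0a-3048
    network interface failover-groups create -failover-group fg-cifs-3048 -node ccr-cmode-01-02 -port a0a-3048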


Here is the output of the failover groups and ports configured on a four-node cluster.

Failover
Group               Node              Port
------------------- ----------------- ----------
fg-cifs-3048
                    ccr-cmode-01-01   a0a-3048
                    ccr-cmode-01-02   a0a-3048
                    ccr-cmode-01-04   a0a-3048
                    ccr-cmode-01-03   a0a-3048
fg-cifs-3073
                    ccr-cmode-01-01   a0a-3073
                    ccr-cmode-01-02   a0a-3073
                    ccr-cmode-01-04   a0a-3073
                    ccr-cmode-01-03   a0a-3073
fg-nfs-3074
                    ccr-cmode-01-01   a0a-3074
                    ccr-cmode-01-02   a0a-3074
                    ccr-cmode-01-04   a0a-3074
                    ccr-cmode-01-03   a0a-3074
fg-cluster_mgmt
                    ccr-cmode-01-01   e0a
                    ccr-cmode-01-02   e0a
                    ccr-cmode-01-04   e0a
                    ccr-cmode-01-03   e0a
fg-node-mgmt-01
                    ccr-cmode-01-01   e0M
                    ccr-cmode-01-01   e0b
fg-node-mgmt-02
                    ccr-cmode-01-02   e0M
                    ccr-cmode-01-02   e0b
fg-node-mgmt-03
                    ccr-cmode-01-03   e0M
                    ccr-cmode-01-03   e0b
fg-node-mgmt-04
                    ccr-cmode-01-04   e0M
                    ccr-cmode-01-04   e0b

Once you create the failover groups, you are ready to create LIFs and assign each LIF to the proper failover group.

Make sure the following values are set properly:

  1. Auto Revert set to true reverts the LIF back to its home port when the partner node gives control back to the primary node.
  2. Home Port and Current Port differ when the LIF has failed over to the other storage node.
  3. Use Failover Group should be enabled and the Failover Group Name should be set (a sketch of the command that sets these values follows the output below).

ccr-cmode-01::> net int show -vserver hsvdi -lif hswc
  (network interface show)
                 Vserver Name: hsvdi
       Logical Interface Name: hswc
                         Role: data
                Data Protocol: nfs
                    Home Node: ccr-cmode-01-01
                    Home Port: a0a-3074
                 Current Node: ccr-cmode-01-01
                 Current Port: a0a-3074
           Operational Status: up
              Extended Status: -
                      Is Home: true
              Network Address: 172.20.74.107
                      Netmask: 255.255.255.0
              IPv4 Link Local: -
          Bits in the Netmask: 24
           Routing Group Name: d172.20.74.0/24
        Administrative Status: up
              Failover Policy: nextavail
              Firewall Policy: data
                  Auto Revert: true
           Use Failover Group: enabled
Fully Qualified DNS Zone Name: none
          Failover Group Name: fg-nfs-3074
                     FCP WWPN: -
                      Comment:
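These values can be set when the LIF is created or afterwards with network interface modify. A minimal sketch, using the same Vserver, LIF, and failover group shown above and assuming 8.1/8.2-style parameters:

  • Assign the failover group, enable its use, and turn on auto-revert:
    network interface modify -vserver hsvdi -lif hswc -failover-group fg-nfs-3074 -use-failover-group enabled -auto-revert true -failover-policy nextavail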

Proper configuration ensures that the production systems continue to function with no performance impact.


Load-Sharing Mirrors for Virtual Storage Server Root Volumes – Clustered ONTAP best practice series

I have not posted blogs for a while. For the last few weeks, I have been heads-down configuring a NetApp FlexPod with XenDesktop. I would like to share some best practices on NetApp clustered ONTAP storage configuration in my next few blogs.

One best practice with NetApp clustered ONTAP is using load-sharing SnapMirror mirrors for the virtual storage servers' root volumes.

In Cluster-Mode, NAS clients can use a single NFS mount point or CIFS share to access a namespace of potentially thousands of volumes. The root volume for a Vserver namespace contains the paths where the data volumes are junctioned into the namespace. NAS clients cannot access data if the root volume is unavailable.

Every Vserver has a root volume that serves as the entry point to the namespace provided by that Vserver. The root volume of a Vserver is a FlexVol volume that resides at the top level of the namespace hierarchy and contains the directories that are used as mount points, the paths where data volumes are junctioned into the namespace.

In the unlikely event that the root volume of a Vserver namespace is unavailable, NAS clients cannot access the namespace hierarchy and therefore cannot access data in the namespace. For this reason, it is a NetApp best practice to create a load-sharing mirror for the root volume on each node of the cluster so that the namespace directory information remains available in the event of a node outage or failover.

The procedure is also stated in NetApp FlexPod Animal CVD.
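A condensed sketch of that procedure for one Vserver root volume, using the hsvdi Vserver from my environment as an example (the root volume, mirror, and aggregate names are placeholders, and 8.2-style path syntax is assumed):

  • Create a destination (DP) volume on each node of the cluster:
    volume create -vserver hsvdi -volume rootvol_m01 -aggregate aggr01 -size 1GB -type DP
    volume create -vserver hsvdi -volume rootvol_m02 -aggregate aggr02 -size 1GB -type DP
  • Create a load-sharing mirror relationship to each destination:
    snapmirror create -source-path hsvdi:rootvol -destination-path hsvdi:rootvol_m01 -type LS -schedule hourly
    snapmirror create -source-path hsvdi:rootvol -destination-path hsvdi:rootvol_m02 -type LS -schedule hourly
  • Initialize the whole load-sharing mirror set:
    snapmirror initialize-ls-set -source-path hsvdi:rootvol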



2000 Seat XenDesktop 7 Storage Sizing and Configuration Example

I am configuring a 2000-seat XenDesktop 7 deployment these days: 1000 Hosted Shared Desktops, 500 Hosted VDI (pooled) desktops, and 500 Hosted VDI desktops with Personal vDisk. I used the NetApp VDI Sizer to spread the workloads across a two-node NetApp FAS3240 cluster with four shelves of 600GB 15K RPM SAS disks.

I also listed the virtual storage servers and volumes below. You can put all of these volumes in one virtual storage server if you want, but separate virtual storage servers isolate the different workloads and give users a more secure and flexible environment.

vSERVER         VOLUME            SIZE    PROTOCOL  Thin provisioning  Dedupe      Backup  Node
--------------  ----------------  ------  --------  -----------------  ----------  ------  ----
Infrastructure  Infrastructure    500GB   NFS       Y                  Y, nightly  Yes     2
Infrastructure  vdisk             200GB   iSCSI     Y                  Y, nightly  Yes     2
Hosted Shared   Write Cache 1     2TB     NFS       Y                  No          No      1
Hosted Shared   Write Cache 2     2TB     NFS       Y                  No          No      1
Hosted VDI      Write Cache 1     2TB     NFS       Y                  No          No      2
Hosted VDI      Write Cache 2     2TB     NFS       Y                  No          No      2
Hosted VDI      PvDisk            6TB     NFS       Y                  Y, nightly  Yes     2
CIFS HomeDIR    Hosted Shared     2TB     CIFS      Y                  Y, nightly  Yes     1
CIFS HomeDIR    HostedVDI         1TB     CIFS      Y                  Y, nightly  Yes     1
CIFS HomeDIR    HostedVDIPvDisk   1TB     CIFS      Y                  Y, nightly  Yes     1

SAN booting the hypervisor is optional in any deployment.
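As a hedged example of how a couple of rows in the table translate to the clustered Data ONTAP CLI (thin provisioning corresponds to -space-guarantee none; the Vserver, aggregate, and junction names are placeholders):

  • 2TB thin-provisioned NFS write-cache volume in the Hosted VDI Vserver on node 2:
    volume create -vserver hostedvdi -volume writecache1 -aggregate aggr_node2 -size 2TB -space-guarantee none -junction-path /writecache1
  • 6TB PvDisk volume with nightly deduplication:
    volume create -vserver hostedvdi -volume pvdisk -aggregate aggr_node2 -size 6TB -space-guarantee none -junction-path /pvdisk
    volume efficiency on -vserver hostedvdi -volume pvdisk
    volume efficiency modify -vserver hostedvdi -volume pvdisk -schedule sun-sat@1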


CIFS share with the NetApp clustered ONTAP setup wizard

I am working on a vSphere 5.1 with Citrix XenDesktop 7 on Cisco UCS project these days. A CIFS share is created in a virtual storage server as the desktop home directory or as a share between the infrastructure VMs.

I was setting up CIFS with ONTAP 8.1.2 using my old notes from my last project (create root user, create PC user/group). I have since found a much easier way to create a Vserver with CIFS and would love to share it with you. The Vserver setup wizard will save you a lot of time and headache. Don't manually configure CIFS in the CLI or System Manager unless you want to spend your whole morning troubleshooting like I did yesterday. The wizard takes you five minutes.

Here are the four key points:

  • Volume language has to be "C", not English or Japanese
  • Root volume needs to be NTFS
  • DNS must work
  • Server and storage time difference cannot be more than 5 minutes

Below is the output from my system:

ccr-cmode-01::> vserver setup
Welcome to the Vserver Setup Wizard, which will lead you through
the steps to create a virtual storage server that serves data to clients.

Step 1. Create a Vserver.
Enter the Vserver name: serverdata

Choose the Vserver data protocols to be configured {nfs, cifs, fcp, iscsi}:
cifs
Choose the Vserver client services to be configured {ldap, nis, dns}:
dns
Enter the Vserver’s root volume aggregate {aggr01, aggr02, aggr03, aggr04}
[aggr03]: aggr02
Enter the Vserver language setting, or “help” to see all languages [C]:        <--- Notice: it has to be C (not US-English)

Enter the Vserver root volume’s security style {unix, ntfs, mixed} [unix]:
ntfs
Vserver creation might take some time to finish….Vserver serverdata with language set to C created. The permitted protocols are cifs.

Step 2: Create a data volume
You can type “back”, “exit”, or “help” at any question.

Do you want to create a data volume? {yes, no} [yes]: yes
Enter the volume name [vol1]: serverdata
Enter the name of the aggregate to contain this volume {aggr01, aggr02, aggr03,
aggr04} [aggr03]: aggr02
Enter the volume size: 1TB
Enter the volume junction path [/vol/serverdata]:
It can take up to a minute to create a volume…Volume serverdata of size 1TB created on aggregate aggr02 successfully.

Step 3: Create a logical interface.
You can type “back”, “exit”, or “help” at any question.

Do you want to create a logical interface? {yes, no} [yes]: yes
Enter the LIF name [lif1]: serverdata
Which protocols can use this interface [cifs]:
Enter the home node {ccr-cmode-01-01, ccr-cmode-01-02, ccr-cmode-01-04,
ccr-cmode-01-03} [ccr-cmode-01-03]: ccr-cmode-01-02
Enter the home port {a0a, a0a-3048, a0a-3073, a0a-3074, e0a, e0b} [a0a]:
a0a-3073
Enter the IP address: 172.20.73.60
Enter the network mask: 255.255.255.0
Enter the default gateway IP address: 172.20.73.1

LIF serverdata on node ccr-cmode-01-02, on port a0a-3073 with IP address
172.20.73.60 was created.
Do you want to create an additional LIF now? {yes, no} [no]: no
Step 4: Configure DNS (Domain Name Service).
You can type “back”, “exit”, or “help” at any question.

Do you want to configure DNS? {yes, no} [yes]:
Enter the comma separated DNS domain names: ccr.rtp.netapp.com
Enter the comma separated DNS server IP addresses: 172.20.73.41

DNS for Vserver serverdata is configured.

Step 5: Configure CIFS.
You can type “back”, “exit”, or “help” at any question.

Do you want to configure CIFS? {yes, no} [yes]:
Enter the CIFS server name [SERVERDATA-CCR-]: cifsdata
Enter the Active Directory domain name: ccr.rtp.netapp.com

In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the “CN=Computers” container within the
“ccr.rtp.netapp.com” domain.

Enter the user name [administrato]: administrator
Enter the password:

CIFS server “CIFSDATA” created and successfully joined the domain.
Do you want to share a data volume with CIFS clients? {yes, no} [yes]:
yes
Enter the CIFS share name [serverdata]:
Enter the CIFS share path [/vol/serverdata]:
Select the initial level of access that the group “Everyone” has to the share
{No_access, Read, Change, Full_Control} [No_access]: Full_Control

The CIFS share “serverdata” created successfully.
Default UNIX users and groups created successfully.
UNIX user “pcuser” set as the default UNIX user for unmapped CIFS users.
Default export policy rule created successfully.

Vserver serverdata, with protocol(s) cifs, and service(s) dns has been
configured successfully.

ccr-cmode-01::>
