NetApp has introduced clustered Data ONTAP, which supports up to 24 nodes in a single cluster. Clustering places new requirements on the cluster interconnect network. I would like to share the concepts and best practices for clustered Data ONTAP networking.
The physical interfaces on a node are referred to as ports. IP addresses are assigned to logical interfaces (LIFs). LIFs are logically connected to a port in much the same way that VM virtual network adapters and VMkernel ports connect to physical adapters, except without the constructs of virtual switches and port groups. Physical ports can be grouped into interface groups. VLANs can be created on top of physical ports or interface groups. LIFs can be associated with a port, an interface group, or a VLAN.
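As a minimal sketch of how these layers stack, the clustered Data ONTAP CLI can build an interface group, a VLAN on top of it, and a LIF on top of the VLAN. The node, Vserver, port, and address values below are illustrative assumptions, not values from this document:

```shell
# Group two physical ports into an LACP interface group (names are examples)
network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node1 -ifgrp a0a -port e0c
network port ifgrp add-port -node node1 -ifgrp a0a -port e0d

# Create a VLAN on top of the interface group
network port vlan create -node node1 -vlan-name a0a-100

# Associate a data LIF with the VLAN port
network interface create -vserver vs1 -lif nfs_lif1 -role data \
    -home-node node1 -home-port a0a-100 \
    -address 192.168.100.10 -netmask 255.255.255.0
```

The same `network interface create` command works directly against a physical port or an interface group if no VLAN tagging is needed.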
NetApp OnCommand System Manager can be used to set up volumes and LIFs. Although LIFs can be created and managed through the command line, this document focuses on the NetApp OnCommand System Manager GUI. Note that System Manager 2.0R1 or later is required to perform these steps.
For good housekeeping purposes, NetApp recommends creating a new LIF whenever a new volume is created. A key feature in clustered Data ONTAP is its ability to move volumes in the same Vserver from one node to another. When you move a volume, make sure that you move the associated LIF as well. This will help keep the virtual cabling neat and prevent the indirect I/O that would occur if the migrated volume does not have an associated LIF to use. It is also best practice to use the same port on each physical node for the same purpose. Due to the increased functionality in clustered Data ONTAP, more physical cables are necessary, and they can quickly become an administrative problem if care is not taken when labeling and placing cables. By using the same port on each node for the same purpose, you will always know what each port does.
LIFs for NFS and CIFS can be migrated, but iSCSI LIFs cannot. If you use Flash Cache, you may need an I/O expansion module (IOXM) to add another 10GbE card; if you use an IOXM, the HA pair must be connected with a fiber cable.
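A sketch of the volume move plus LIF migration described above, using the clustered Data ONTAP CLI. The Vserver, volume, node, aggregate, and port names are hypothetical:

```shell
# Move a volume to an aggregate owned by another node in the same Vserver
volume move start -vserver vs1 -volume vol1 -destination-aggregate node2_aggr1

# Migrate the associated NAS LIF to the destination node to avoid indirect I/O
network interface migrate -vserver vs1 -lif nfs_lif1 -dest-node node2 -dest-port e0c

# Make the new location the LIF's home so a revert does not send it back
network interface modify -vserver vs1 -lif nfs_lif1 -home-node node2 -home-port e0c
```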
|Use 10GbE for the cluster interconnect and data networks; 1GbE is recommended for the management network. Make sure you have enough 10GbE cards and ports.|
10 Gigabit Ethernet
Clustered Data ONTAP requires more cables because of its enhanced functionality. A minimum of three networks are required when creating a new clustered Data ONTAP system: cluster at 10GbE, management at 1GbE, and data at 10GbE. The physical space-saving design of the 32XX series can be a potential issue if Flash Cache cards are present in the controllers, because 10GbE network cards are required and 32XX series controllers do not have onboard 10GbE NICs. Check with NetApp Sales Engineers if the system will be migrated from 7-Mode to clustered Data ONTAP to make sure enough open slots are available on your existing controllers.
|10GbE is not required for data, but is best practice for optimal data traffic.|
Enable jumbo frames on the data network, and use LACP if the switch supports it.
NetApp recommends using jumbo frames, or MTU 9000, for the data network. Enabling jumbo frames can speed up data traffic and reduce CPU usage, since fewer frames are sent over the network. Jumbo frames must be enabled on all physical devices and logical entities end to end to avoid truncation or fragmentation of maximum-size packets. The link aggregation type will depend largely on the switching technology used in the environment; not all link aggregation implementations are alike or offer the same features. Some only offer failover from one link to another in the event of a link failure, while more complete solutions offer true aggregation, in which traffic can flow over two or more links at the same time. The Link Aggregation Control Protocol (LACP) additionally allows devices to negotiate the configuration of ports into bundles. For more information on setting up link aggregation with VMware ESXi and clustered Data ONTAP, refer to TR-4068: VMware vSphere 5 on NetApp Data ONTAP 8.1 Operating in Cluster-Mode.
The following figure shows you how to set jumbo frames on each network component.
|NetApp recommends using jumbo frames or MTU 9000 for the data network.|
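As a sketch of the end-to-end jumbo frame configuration, the commands below set MTU 9000 on a NetApp data port and on the ESXi side. The node, port, vSwitch, and VMkernel interface names are assumed examples; the switch in between must also be configured for jumbo frames, with syntax that varies by vendor:

```shell
# NetApp side: set MTU 9000 on the data port (clustered Data ONTAP CLI)
network port modify -node node1 -port e0d -mtu 9000

# ESXi side: set MTU 9000 on the vSwitch, then on the VMkernel interface
esxcfg-vswitch -m 9000 vSwitch1
esxcli network ip interface set -i vmk1 -m 9000
```

A quick way to verify the path is a ping with a large, non-fragmentable payload, for example `vmkping -d -s 8972 <storage LIF address>` from the ESXi host.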
Flow Control Overview
Modern network equipment and protocols generally handle port congestion better than in the past. Although NetApp had previously recommended flow control “send” on ESX hosts and NetApp storage controllers, the current recommendation, especially with 10GbE equipment, is to disable flow control on ESXi, NetApp FAS, and the switches in between.
With ESXi 5, flow control is not exposed in the vSphere Client GUI. The ethtool command sets flow control on a per-interface basis. There are three options for flow control: autoneg, tx, and rx. The tx option is equivalent to “send” on other devices.
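Following the current recommendation above, flow control can be disabled on an ESXi physical NIC with ethtool. The NIC name `vmnic2` is an example; note that settings applied this way may not persist across reboots:

```shell
# Show the current pause (flow control) settings for the physical NIC
ethtool -a vmnic2

# Disable flow control: pause autonegotiation, receive, and transmit
ethtool -A vmnic2 autoneg off rx off tx off
```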