XenServer Networking Overview

This section describes the general concepts of networking in the XenServer environment.

During XenServer installation, one network is created for each physical network interface card (NIC). When you add a server to a resource pool, these default networks are merged so that all physical NICs with the same device name are attached to the same network.

Typically, you would only add a new network if you wanted to create an internal network, set up a new VLAN using an existing NIC, or create a NIC bond.

You can configure four different types of networks in XenServer:

  • External networks have an association with a physical network interface and provide a bridge between a virtual machine and the physical network interface connected to the network, enabling a virtual machine to connect to resources available through the server's physical network interface card.

  • Bonded networks create a bond between two NICs to create a single, high-performing channel between the virtual machine and the network.

  • Single-Server Private networks have no association to a physical network interface and can be used to provide connectivity between the virtual machines on a given host, with no connection to the outside world.

  • Cross-Server Private networks extend the single server private network concept to allow VMs on different hosts to communicate with each other by using the vSwitch.
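
For example, a minimal xe CLI sketch of adding an internal (single-server private) network; the name-label value is illustrative:

xe network-create name-label=private-net-0

The command returns the UUID of the new network. Because no PIF is associated with the network, it is internal: VIFs attached to it can reach only other VMs on the same host.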

Note

Some networking options have different behaviors when used with standalone XenServer hosts compared to resource pools. This chapter contains sections on general information that applies to both standalone hosts and pools, followed by specific information and procedures for each.

This chapter uses three types of server-side software objects to represent networking entities. These objects are:

  • PIFs, which represent physical NICs on a host.

  • VIFs, which represent virtual NICs on virtual machines.

  • Networks, which are virtual Ethernet switches on a host that connect VIFs and PIFs.

Both XenCenter and the xe CLI allow configuration of networking options, control over which NIC is used for management operations, and creation of advanced networking features such as virtual local area networks (VLANs) and NIC bonds.

Each XenServer host has one or more networks, which are virtual Ethernet switches. Networks that are not associated with a PIF are considered internal and can be used to provide connectivity only between VMs on a given XenServer host, with no connection to the outside world. Networks associated with a PIF are considered external and provide a bridge between VIFs and the PIF connected to the network, enabling connectivity to resources available through the PIF's NIC.

Virtual Local Area Networks (VLANs), as defined by the IEEE 802.1Q standard, allow a single physical network to support multiple logical networks. XenServer hosts can work with VLANs in multiple ways.

Note

All supported VLAN configurations are equally applicable to pools and standalone hosts, and bonded and non-bonded configurations.

Switch ports configured to perform 802.1Q VLAN tagging/untagging, commonly referred to as ports with a native VLAN or as access mode ports, can be used with management interfaces to place management traffic on a desired VLAN. In this case the XenServer host is unaware of any VLAN configuration.

Management interfaces cannot be assigned to a XenServer VLAN via a trunk port.

Switch ports configured as 802.1Q VLAN trunk ports can be used in combination with the XenServer VLAN features to connect guest virtual network interfaces (VIFs) to specific VLANs. In this case, the XenServer host performs the VLAN tagging/untagging functions for the guest, which is unaware of any VLAN configuration.

XenServer VLANs are represented by additional PIF objects representing VLAN interfaces corresponding to a specified VLAN tag. XenServer networks can then be connected to the PIF representing the physical NIC to see all traffic on the NIC, or to a PIF representing a VLAN to see only the traffic with the specified VLAN tag.
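
As an illustrative sketch of how these objects fit together, the following xe CLI commands create a network and a VLAN PIF on an existing physical PIF; the UUID placeholders and the VLAN tag of 50 are illustrative:

xe network-create name-label=vlan-50-net
xe vlan-create network-uuid=<network_uuid> pif-uuid=<physical_pif_uuid> vlan=50

VIFs attached to this network then see only traffic carried on VLAN 50; the host performs the tagging and untagging.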

For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource pool, see the section called “Creating VLANs”.

Dedicated storage NICs (also known as IP-enabled NICs or simply management interfaces) can be configured to use native VLAN / access mode ports as described above for management interfaces, or with trunk ports and XenServer VLANs as described above for virtual machines. To configure dedicated storage NICs, see the section called “Configuring a Dedicated Storage NIC”.
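
As a sketch of the kind of command the referenced procedure uses (the UUID placeholder and addresses are illustrative), a dedicated storage NIC is given its own IP configuration with xe pif-reconfigure-ip:

xe pif-reconfigure-ip uuid=<storage_pif_uuid> mode=static IP=192.168.10.5 netmask=255.255.255.0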

A single switch port can be configured with both trunk and native VLANs, allowing one host NIC to be used for a management interface (on the native VLAN) and for connecting guest VIFs to specific VLAN IDs.

Jumbo frames can be used to optimize the performance of storage traffic. Jumbo frames are Ethernet frames containing more than 1500 bytes of payload. They are typically used to achieve better throughput, reduce the load on system bus memory, and reduce CPU overhead.

Note

XenServer supports jumbo frames only when using vSwitch as the network stack on all hosts in the pool.

Requirements for using jumbo frames

Customers should note the following when using jumbo frames:

  • Jumbo frames are configured at a pool level

  • vSwitch must be configured as the network backend on all hosts in the pool

  • Every device on the subnet must be configured to use jumbo frames

  • It is recommended that customers only enable jumbo frames on a dedicated storage network

  • Enabling jumbo frames on the Management network is not a supported configuration

  • Jumbo frames are not supported for use on VMs

To use jumbo frames, set the Maximum Transmission Unit (MTU) to a value between 1500 and 9216, using either XenCenter or the xe CLI. For more information about configuring networks with jumbo frames, see Designing XenServer Network Configurations in the Citrix Knowledge Center.
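
For instance, a minimal sketch of setting a 9000-byte MTU on a storage network from the xe CLI (the UUID placeholder and MTU value are illustrative):

xe network-param-set uuid=<storage_network_uuid> MTU=9000

Attached PIFs and VIFs typically pick up the new MTU only after being re-plugged (or, for a VM, after it is restarted).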

NIC bonds, sometimes also known as NIC teaming, improve XenServer host resiliency and/or bandwidth by enabling administrators to configure two or more NICs together so they logically function as one network card. All bonded NICs share the same MAC address.

If one NIC in the bond fails, the host's network traffic is automatically redirected through the second NIC. XenServer supports up to eight bonded networks.

XenServer provides support for active-active, active-passive, and LACP bonding modes. The number of NICs supported and the bonding mode supported varies according to network stack:

  • LACP bonding is only available for the vSwitch whereas active-active and active-passive are available for both the vSwitch and Linux bridge.

  • When the vSwitch is the network stack, you can bond either two, three, or four NICs.

  • When the Linux bridge is the network stack, you can only bond two NICs.

In the illustration that follows, the management interface is on a bonded pair of NICs. XenServer will use this bond for management traffic.

This illustration shows a host with a management interface on a bond and two pairs of NICs bonded for guest traffic. Excluding the management interface bond, XenServer uses the other two NIC bonds and the two non-bonded NICs for VM traffic.

All bonding modes support failover; however, not all modes allow all links to be active for all traffic types. XenServer supports bonding the following types of NICs together:

  • NICs (non-management). You can bond NICs that XenServer is using solely for VM traffic. Bonding these NICs not only provides resiliency, but doing so also balances the traffic from multiple VMs between the NICs.

  • Management interfaces. You can bond a management interface to another NIC so that the second NIC provides failover for management traffic. Configuring a LACP link aggregation bond provides load balancing for management traffic; active-active NIC bonding does not.

  • Secondary interfaces. You can bond NICs that you have configured as secondary interfaces (for example, for storage). However, for most iSCSI software initiator storage, Citrix recommends configuring multipathing instead of NIC bonding as described in the Designing XenServer Network Configurations.

    Throughout this section, the term IP-based storage traffic is used to refer to iSCSI and NFS traffic collectively.

You can create a bond if a VIF is already using one of the interfaces that will be bonded: the VM traffic will be automatically migrated to the new bonded interface.

In XenServer, the NIC bond is represented by an additional PIF. XenServer NIC bonds completely subsume the underlying physical devices (PIFs).
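
As an illustration, a bond is typically created by making a new network and then bonding the member NICs' PIFs into it; the UUID placeholders are illustrative:

xe network-create name-label=bond0-net
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif1_uuid>,<pif2_uuid>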

Note

Creating a bond that contains only one NIC is not supported.

Key points about IP addressing

Bonded NICs will either have one IP address or no IP addresses, as follows:

  • Management and storage networks.

    • If you bond a management interface or secondary interface, a single IP address is assigned to the bond. That is, neither NIC has its own IP address; XenServer treats the two NICs as one logical connection.

    • When bonds are used for non-VM traffic (to connect to shared network storage or XenCenter for management), you must configure an IP address for the bond. However, if you have already assigned an IP address to one of the NICs (that is, created a management interface or secondary interface), that IP address is assigned to the entire bond automatically.

    • If you bond a management interface or secondary interface to a NIC without an IP address, as of XenServer 6.0, the bond assumes the IP address of the respective interface automatically.

  • VM networks. When bonded NICs are used for VM (guest) traffic, you do not need to configure an IP address for the bond. This is because the bond operates at Layer 2 of the OSI model, the data link layer, and no IP addressing is used at this layer. IP addresses for virtual machines are associated with VIFs.

Bonding types

XenServer provides three different types of bonds, all of which can be configured using either the CLI or XenCenter:

  • Active-active mode, with VM traffic balanced between the bonded NICs.

  • Active-passive mode, where only one NIC actively carries traffic.

  • LACP Link Aggregation, in which active and stand-by NICs are negotiated between the switch and the host.

Note

Bonding is set up with an Up Delay of 31000ms and a Down Delay of 200ms. The seemingly long Up Delay is deliberate because of the time some switches take to actually enable the port. Without a delay, when a link comes back after failing, the bond could rebalance traffic onto it before the switch is ready to pass traffic. If you want to move both connections to a different switch, move one, then wait 31 seconds for it to be used again before moving the other. For information about changing the delay, see the section called “Changing the Up Delay for Bonds”.

Bond Status

XenServer provides status for bonds in the event logs for each host. If one or more links in a bond fail or are restored, the change is noted in the event log. Likewise, you can query the status of a bond's links by using the links-up parameter as shown in the following example:

xe bond-param-get uuid=bond_uuid param-name=links-up 

XenServer checks the status of links in bonds approximately every 5 seconds. Consequently, if additional links in the bond fail in the five-second window, the failure is not logged until the next status check.

Bonding event logs appear in the XenCenter Logs tab. For users not running XenCenter, event logs also appear in /var/log/xensource.log on each host.

Active-active is an active/active configuration for guest traffic: both NICs can route VM traffic simultaneously. When bonds are used for management traffic, only one NIC in the bond can route traffic: the other NIC remains unused and provides fail-over support. Active-active mode is the default bonding mode when either the Linux bridge or vSwitch network stack is enabled.

When active-active bonding is used with the Linux bridge, you can only bond two NICs. When using the vSwitch as the network stack, you can bond either two, three, or four NICs in active-active mode. However, in active-active mode, bonding three or four NICs is generally only beneficial for VM traffic, as shown in the illustration that follows.

This illustration shows how bonding four NICs may only benefit guest traffic. In the top picture of a management network, NIC 2 is active but NICs 1, 3, and 4 are passive. For the VM traffic, all four NICs in the bond are active; however, this assumes a minimum of four VMs. For the storage traffic, only NIC 1 is active.

XenServer can only send traffic over two or more NICs when there is more than one MAC address associated with the bond. XenServer can use the virtual MAC addresses in the VIF to send traffic across multiple links. Specifically:

  • VM traffic. Provided you enable bonding on NICs carrying only VM (guest) traffic, all links are active and NIC bonding can spread VM traffic across the NICs. An individual VIF's traffic is never split between NICs.

  • Management or storage traffic. Only one of the links (NICs) in the bond is active and the other NICs remain unused unless traffic fails over to them. Configuring a management interface or secondary interface on a bonded network provides resilience.

  • Mixed traffic. If the bonded NIC carries a mixture of IP-based storage traffic and guest traffic, only the guest and control domain traffic are load balanced. The control domain is essentially a virtual machine so it uses a NIC like the other guests. XenServer balances the control domain's traffic the same way as it balances VM traffic.

Traffic Balancing

XenServer balances the traffic between NICs by using the source MAC address of the packet. Because, for management traffic, only one source MAC address is present, active-active mode can only use one NIC and traffic is not balanced. Traffic balancing is based on two factors:

  • The virtual machine and its associated VIF sending or receiving the traffic

  • The quantity of data (in kilobytes) being sent.

XenServer evaluates the quantity of data (in kilobytes) each NIC is sending and receiving. If the quantity of data sent across one NIC exceeds the quantity of data sent across the other NIC, XenServer rebalances which VIFs use which NICs. The VIF's entire load is transferred; one VIF's load is never split between two NICs.

While active-active NIC bonding can provide load balancing for traffic from multiple VMs, it cannot provide a single VM with the throughput of two NICs. Any given VIF only uses one of the links in a bond at a time. As XenServer periodically rebalances traffic, VIFs are not permanently assigned to a specific NIC in the bond.

Active-active mode is sometimes referred to as Source Load Balancing (SLB) bonding as XenServer uses SLB to share load across bonded network interfaces. SLB is derived from the open-source Adaptive Load Balancing (ALB) mode and reuses the ALB capability to dynamically re-balance load across NICs.

When rebalancing, the number of bytes going over each slave (interface) is tracked over a given period. When a packet to be sent contains a new source MAC address, it is assigned to the slave interface with the lowest utilization. Traffic is rebalanced at regular intervals.

Each MAC address has a corresponding load and XenServer can shift entire loads between NICs depending on the quantity of data a VM sends and receives. For active-active traffic, all the traffic from one VM can be sent on only one NIC.

Note

Active-active bonding does not require switch support for EtherChannel or 802.3ad (LACP).

An active-passive bond routes traffic over only one of the NICs: traffic fails over to the passive NIC in the bond only if the active NIC loses network connectivity.

Active-passive bonding is available in the Linux bridge and the vSwitch network stack. When used with the Linux bridge, you can bond two NICs together. When used with the vSwitch, you can bond two, three, or four NICs together. However, regardless of the traffic type, when you bond NICs in active-passive mode, only one link is active and there is no load balancing between links.

The illustration that follows shows two bonded NICs configured in active-passive mode.

This illustration shows two NICs bonded in active-passive mode. NIC 1 is active. The bond includes a NIC for failover that is connected to a second switch. This NIC will be used only if NIC 1 fails.

Since active-active mode is the default bonding configuration in XenServer, if you are configuring bonds using the CLI, you must specify a parameter for the active-passive mode or the bond is created as active-active. However, you do not need to configure active-passive mode just because a network is carrying management traffic or storage traffic.
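
For example, a sketch of creating an active-passive bond from the CLI; the mode value active-backup selects active-passive behavior, and the UUID placeholders are illustrative:

xe bond-create network-uuid=<network_uuid> pif-uuids=<pif1_uuid>,<pif2_uuid> mode=active-backup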

Active-passive can be a good choice for resiliency since it offers several benefits. With active-passive bonds, traffic does not move between NICs. Likewise, active-passive bonding lets you configure two switches for redundancy but does not require stacking. (If the management switch goes down, stacked switches can be a single point of failure.)

Active-passive mode does not require switch support for EtherChannel or 802.3ad (LACP).

Consider configuring active-passive mode in situations when you do not need load balancing or when you only intend to send traffic on one NIC.

Important

After you have created VIFs or your pool is in production, be extremely careful about making changes to bonds or creating new bonds.

Link Aggregation Control Protocol (LACP) is a type of bonding that bundles a group of ports together and treats them like a single logical channel. LACP bonding provides failover and can increase the total amount of bandwidth available.

Unlike other bonding modes, LACP bonding requires configuring both sides of the links: creating a bond on the host and, on the switch, creating a Link Aggregation Group (LAG) for each bond, as described in the section called “Switch Configuration for LACP Bonds”. To use LACP bonding, you must configure the vSwitch as the network stack. Also, your switches must support the IEEE 802.3ad standard.
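
A corresponding sketch for creating an LACP bond from the CLI (the matching LAG must also be created on the switch; UUID placeholders are illustrative):

xe bond-create network-uuid=<network_uuid> pif-uuids=<pif1_uuid>,<pif2_uuid> mode=lacp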

The following table compares active-active SLB bonding and LACP bonding:

Active-active SLB bonding

  Benefits:

  • Can be used with any switch on the XenServer Hardware Compatibility List.

  • Does not require switches that support stacking.

  • Supports four NICs.

  Considerations:

  • Optimal load balancing requires at least one NIC per VIF.

  • Storage or management traffic cannot be split on multiple NICs.

  • Load balancing occurs only if multiple MAC addresses are present.

LACP bonding

  Benefits:

  • All links can be active regardless of traffic type.

  • Traffic balancing does not depend on source MAC addresses, so all traffic types can be balanced.

  Considerations:

  • Switches must support the IEEE 802.3ad standard.

  • Requires switch-side configuration.

  • Supported only for the vSwitch.

  • Requires a single switch or stacked switch.

Traffic Balancing

XenServer supports two LACP bonding hashing types (the term hashing refers to the way in which the NICs and the switch distribute traffic): (1) load balancing based on IP and port of source and destination addresses, and (2) load balancing based on source MAC address.

Depending on the hashing type and traffic pattern, LACP bonding can potentially distribute traffic more evenly than active-active NIC bonding.
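
As a sketch, the hashing type is selected per bond through the bond's properties; the parameter and value names below are as commonly documented for the vSwitch, so verify them against your release's CLI reference:

xe bond-param-set uuid=<bond_uuid> properties:hashing_algorithm=tcpudp_ports

or, for source-MAC hashing:

xe bond-param-set uuid=<bond_uuid> properties:hashing_algorithm=src_mac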

Note

You configure settings for outgoing and incoming traffic separately on the host and the switch: the configuration does not have to match on both sides.

Load balancing based on IP and port of source and destination addresses.

This hashing type is the default LACP bonding hashing algorithm. Traffic coming from one guest can be distributed over two links provided that there is a variation in the source or destination IP or port numbers.

If one virtual machine is running several applications that use different IP or port numbers, this hashing type distributes traffic over several links, allowing the guest to use the aggregate throughput of multiple NICs.

Likewise, as shown in the illustration that follows, this hashing type can distribute the traffic of two different applications on a virtual machine to two different NICs.

This illustration shows how, if you use LACP bonding and enable LACP with load balancing based on IP and port of source and destination as the hashing type, the traffic coming from two different applications on VM1 can be distributed to two NICs.

Configuring LACP bonding based on IP and port of source and destination address is beneficial when you want to balance the traffic of two different applications on the same VM (for example, when only one virtual machine is configured to use a bond of three NICs).

This illustration shows how, if you use LACP bonding and enable LACP with load balancing based on IP and port of source and destination as the hashing type, XenServer can send the traffic of each application in the virtual machine through one of the three NICs in the bond even though the number of NICs exceeds the number of VIFs.

The balancing algorithm for this hashing type uses five factors to spread traffic across the NICs: the source IP address, source port number, destination IP address, destination port number, and source MAC address.

Load balancing based on source MAC address.

This type of load balancing works well when there are multiple virtual machines on the same host. Traffic is balanced based on the virtual MAC address of the VM from which the traffic originated. XenServer sends outgoing traffic using the same algorithm as it does in the case of active-active bonding. Traffic coming from the same guest is not split over multiple NICs. As a result, this hashing type is not suitable if there are fewer VIFs than NICs: load balancing is not optimal because the traffic cannot be split across NICs.

This illustration shows how, if you use LACP bonding and enable LACP based on source MAC address as the hashing type, if the number of NICs exceeds the number of VIFs, not all NICs will be used. Because there are three NICs and only two VMs, only two NICs can be used at the same time and the maximum bond throughput cannot be achieved. The packets from one VM cannot be split across multiple NICs.

Depending on your redundancy requirements, you can connect the NICs in the bond to either the same or separate stacked switches. If you connect one of the NICs to a second, redundant switch and a NIC or switch fails, traffic fails over to the other NIC. Adding a second switch prevents a single point-of-failure in your configuration in the following ways:

  • When you connect one of the links in a bonded management interface to a second switch, if the switch fails, the management network still remains online and the hosts can still communicate with each other.

  • If you connect a link (for any traffic type) to a second switch and the NIC or switch fails, the virtual machines remain on the network since their traffic fails over to the other NIC/switch.

When you want to connect bonded NICs to multiple switches and you configured the LACP bonding mode, you must use stacked switches. The term stacked switches refers to configuring multiple physical switches to function as a single logical switch. You must join the switches together physically and through the switch-management software so the switches function as a single logical switching unit, as per the switch manufacturer's guidelines. Typically, switch stacking is only available through proprietary extensions and switch vendors may market this functionality under different terms.

Note

If you experience issues with active-active bonds, the use of stacked switches might be necessary. Active-passive bonds do not require stacked switches.

The illustration that follows shows how the cables and network configuration for the bonded NICs have to match.

This illustration shows how two NICs in a bonded pair use the same network settings, as represented by the networks in each host. The NICs in the bonds connect to different switches for redundancy.

While the specific details of switch configuration vary by manufacturer, there are a few key points to remember when configuring switches for use with LACP bonds:

  • The switch must support LACP and the IEEE 802.3ad standard.

  • When you create the LAG group on the switch, you must create one LAG group for each LACP bond on the host. This means if you have a five-host pool and you created a LACP bond on NICs 4 and 5 on each host, you must create five LAG groups on the switch, one group for each set of ports corresponding with the NICs on the host.

    You may also need to add your VLAN ID to your LAG group.

  • XenServer LACP bonds require the Static Mode setting in the LAG group to be set to Disabled.

As previously mentioned in the section called “Switch Configuration”, stacked switches are required to connect LACP bonds to multiple switches.

The XenServer host networking configuration is specified during initial host installation. Options such as IP address configuration (DHCP/static), the NIC used as the management interface, and hostname are set based on the values provided during installation.

When a host has multiple NICs, the configuration present after installation depends on which NIC is selected for management operations during installation:

  • PIFs are created for each NIC in the host

  • the PIF of the NIC selected for use as the management interface is configured with the IP addressing options specified during installation

  • a network is created for each PIF ("network 0", "network 1", etc.)

  • each network is connected to one PIF

  • the IP addressing options of all other PIFs are left unconfigured

When a XenServer host has a single NIC, the following configuration is present after installation:

  • a single PIF is created corresponding to the host's single NIC

  • the PIF is configured with the IP addressing options specified during installation and to enable management of the host

  • the PIF is set for use in host management operations

  • a single network, network 0, is created

  • network 0 is connected to the PIF to enable external connectivity to VMs

In both cases the resulting networking configuration allows connection to the XenServer host by XenCenter, the xe CLI, and any other management software running on separate machines via the IP address of the management interface. The configuration also provides external networking for VMs created on the host.
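
To inspect the objects this configuration produces, the following xe CLI sketch lists PIFs and networks with a selection of fields (the params values shown are standard list parameters):

xe pif-list params=uuid,device,management,IP
xe network-list params=uuid,name-label,bridge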

The PIF used for management operations is the only PIF ever configured with an IP address during XenServer installation. External networking for VMs is achieved by bridging PIFs to VIFs using the network object which acts as a virtual Ethernet switch.

The steps required for networking features such as VLANs, NIC bonds, and dedicating a NIC to storage traffic are covered in the sections that follow.

You can change your networking configuration by modifying the network object. To do so, you run a command that affects either the network object or the VIF.

You can modify aspects of a network, such as the frame size (MTU), name-label, name-description, and other values by using the xe network-param-set command and its associated parameters.

When you run the xe network-param-set command, the only required parameter is uuid.

Optional parameters include name-label, name-description, MTU, and map parameters such as other-config.

If a value for a parameter is not given, the parameter is set to a null value. To set a (key,value) pair in a map parameter, use the syntax 'map-param:key=value'.
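
For example (the UUID placeholder and values are illustrative; the other-config key shows the map-parameter syntax):

xe network-param-set uuid=<network_uuid> name-description="Dedicated storage network" MTU=9000
xe network-param-set uuid=<network_uuid> other-config:automatic=false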

As described in the section called “NIC Bonds”, by default, bonding is set up with an Up Delay of 31000ms to prevent traffic from being rebalanced onto a NIC immediately after it recovers from a failure. While seemingly long, the up delay is important for all bonding modes and not just active-active.

However, if you understand the appropriate settings to select for your environment, you can change the up delay for bonds by using the procedure that follows.
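
As a sketch of the kind of commands that procedure uses (the delay is in milliseconds; the UUID placeholder is illustrative, and the bond's master PIF must be re-plugged for the change to take effect):

xe pif-param-set uuid=<bond_master_pif_uuid> other-config:bond-updelay=5000
xe pif-unplug uuid=<bond_master_pif_uuid>
xe pif-plug uuid=<bond_master_pif_uuid>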

