Managing Networking Configuration

Some of the network configuration procedures in this section differ depending on whether you are configuring a stand-alone server or a server that is part of a resource pool.

Previous versions of XenServer allowed you to create single-server private networks that allowed VMs running on the same host to communicate with each other. The cross-server private network feature extends this concept, allowing VMs on different hosts to communicate with each other. Cross-server private networks combine the isolation properties of a single-server private network with the ability to span hosts across a resource pool. This combination enables use of VM agility features such as XenMotion live migration for VMs with connections to cross-server private networks.

Cross-server private networks are completely isolated. VMs that are not connected to the private network cannot sniff or inject traffic into the network, even when they are located on the same physical host with VIFs connected to a network on the same underlying physical network device (PIF). VLANs provide similar functionality; however, unlike VLANs, cross-server private networks provide isolation without requiring configuration of the physical switch fabric, by using the Generic Routing Encapsulation (GRE) IP tunnelling protocol.

Private networks provide the following benefits without requiring a physical switch:

  • the isolation properties of single-server private networks

  • the ability to span a resource pool, enabling VMs connected to a private network to live on multiple hosts within the same pool

  • compatibility with features such as XenMotion

Cross-server private networks must be created on a management interface or a secondary interface, because they require an IP-addressable NIC. Any IP-enabled NIC can be used as the underlying network transport. If you choose to put cross-server private network traffic on a secondary interface, this secondary interface must be on a separate subnet.

If any management or secondary interfaces are on the same subnet, traffic will be routed incorrectly.

Note

To create a cross-server private network, the following conditions must be met:

  • All of the hosts in the pool must be using XenServer 6.0 or greater.

  • All of the hosts in the pool must be using the vSwitch for the networking stack.

  • The vSwitch Controller must be running, and you must have added the pool to it; the vSwitch Controller handles the initialization and configuration tasks required for the vSwitch connection.

  • The cross-server private network must be created on a NIC configured as a management interface. This can be the management interface or a secondary interface (IP-enabled PIF) you configure specifically for this purpose, provided it is on a separate subnet.

For more information on configuring the vSwitch, see the XenServer vSwitch Controller User Guide. For UI-based procedures for configuring private networks, see the XenCenter Help.

Because external networks are created for each PIF during host installation, creating additional networks is typically only required to:

  • use a private network

  • support advanced operations such as VLANs or NIC bonding

To add or remove networks using XenCenter, refer to the XenCenter online Help.
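You can also create a network using the xe CLI. As a minimal sketch, the network-create command creates a new network and returns its UUID (the name label shown is an example):

xe network-create name-label=mynetwork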

At this point the network is not connected to a PIF and therefore is internal.

All XenServer hosts in a resource pool should have the same number of physical network interface cards (NICs), although this requirement is not strictly enforced when a XenServer host is joined to a pool.

Having the same physical networking configuration for XenServer hosts within a pool is important because all hosts in a pool share a common set of XenServer networks. PIFs on the individual hosts are connected to pool-wide networks based on device name. For example, all XenServer hosts in a pool with an eth0 NIC will have a corresponding PIF plugged into the pool-wide Network 0 network. The same will be true for hosts with eth1 NICs and Network 1, as well as other NICs present in at least one XenServer host in the pool.

If one XenServer host has a different number of NICs than other hosts in the pool, complications can arise because not all pool networks will be valid for all pool hosts. For example, if hosts host1 and host2 are in the same pool and host1 has four NICs while host2 has only two, only the networks connected to PIFs corresponding to eth0 and eth1 will be valid on host2. VMs on host1 with VIFs connected to networks corresponding to eth2 and eth3 will not be able to migrate to host2.

Citrix recommends using XenCenter to create NIC bonds. For instructions, see the XenCenter help.

This section describes how to use the xe CLI to bond NIC interfaces on a XenServer host that is not in a pool. See the section called “Creating NIC Bonds in Resource Pools” for details on using the xe CLI to create NIC bonds on XenServer hosts that comprise a resource pool.
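As a minimal sketch of this procedure (assuming the PIF UUIDs have been obtained with xe pif-list, and using an example network name), first create a network for the bond and then create the bond itself:

xe network-create name-label=bond0
xe bond-create network-uuid=network_uuid pif-uuids=pif_uuid_1,pif_uuid_2 mode=balance-slb

The mode parameter accepts balance-slb, active-backup, or lacp; LACP requires the vSwitch as the networking stack.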

When the management interface is one of the NICs you bond, the bond absorbs the PIF/NIC currently in use as the management interface. From XenServer 6.0 onwards, the management interface is automatically moved to the bond PIF.

Note

In previous releases, you specified other-config:bond-mode to change the bond mode. While this key still works, it may not be supported in future releases, and it is not as efficient as the mode parameter. other-config:bond-mode requires running pif-unplug and pif-plug for the mode change to take effect.

When you bond the management interface, the PIF/NIC currently in use as the management interface is subsumed by the bond. If the host uses DHCP, in most cases the bond's MAC address is the same as the PIF/NIC currently in use, and the management interface's IP address can remain unchanged.

You can change the bond's MAC address so that it is different from the MAC address for the (current) management-interface NIC. However, as the bond is enabled and the MAC/IP address in use changes, existing network sessions to the host will be dropped.

You can control the MAC address for a bond in two ways:

  • An optional mac parameter can be specified in the bond-create command. You can use this parameter to set the bond MAC address to any arbitrary address (see the sketch after this list).

  • If the mac parameter is not specified, from XenServer 6.5 Service Pack 1 onwards, XenServer uses the MAC address of the management interface if it is one of the interfaces in the bond. If the management interface is not part of the bond, but another interface with an IP address is, the bond uses the MAC address (and also the IP address) of that interface. If none of the NICs in the bond are management interfaces, the bond uses the MAC of the first named NIC.
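For example, a sketch of setting an arbitrary MAC address at bond creation time (the MAC value and UUIDs are placeholders):

xe bond-create network-uuid=network_uuid pif-uuids=pif_uuid_1,pif_uuid_2 mac=aa:bb:cc:dd:ee:ff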

If reverting a XenServer host to a non-bonded configuration, be aware that the bond-destroy command automatically configures the primary-slave as the interface to be used for the management interface. Consequently, all VIFs will be moved to the management interface.

The term primary-slave refers to the PIF that the MAC and IP configuration was copied from when creating the bond. When bonding two NICs, the primary slave is:

  1. The management interface NIC (if the management interface is one of the bonded NICs).

  2. Any other NIC with an IP address (if the management interface was not part of the bond).

  3. The first named NIC. You can find out which one it is by running the following:

    xe bond-list params=all

Whenever possible, create NIC bonds as part of initial resource pool creation prior to joining additional hosts to the pool or creating VMs. Doing so allows the bond configuration to be automatically replicated to hosts as they are joined to the pool and reduces the number of steps required. Adding a NIC bond to an existing pool requires one of the following:

  • Using the CLI to configure the bonds on the master and then each member of the pool.

  • Using the CLI to configure the bonds on the master and then restarting each member of the pool so that it inherits its settings from the pool master.

  • Using XenCenter to configure the bonds on the master. XenCenter automatically synchronizes the networking settings on the member servers with the master, so you do not need to reboot the member servers.

For simplicity and to prevent misconfiguration, Citrix recommends using XenCenter to create NIC bonds. For details, refer to the XenCenter Help.

This section describes using the xe CLI to create bonded NIC interfaces on XenServer hosts that comprise a resource pool. See the section called “Creating a NIC Bond” for details on using the xe CLI to create NIC bonds on a standalone XenServer host.

Warning

Do not attempt to create network bonds while HA is enabled. The process of bond creation will disturb the in-progress HA heartbeating and cause hosts to self-fence (shut themselves down); subsequently they will likely fail to reboot properly and will need the host-emergency-ha-disable command to recover.

  1. Select the host you want to be the master. The master host belongs to an unnamed pool by default. To create a resource pool with the CLI, rename the existing nameless pool:

    xe pool-param-set name-label="New Pool" uuid=pool_uuid
  2. Create the NIC bond as described in the section called “Creating a NIC Bond”.

  3. Open a console on a host that you want to join to the pool and run the command:

    xe pool-join master-address=host1 master-username=root master-password=password

    The network and bond information is automatically replicated to the new host. The management interface is automatically moved from the host NIC where it was originally configured to the bonded PIF (that is, the management interface is now absorbed into the bond so that the entire bond functions as the management interface).

    1. Use the host-list command to find the UUID of the host being configured:

      xe host-list

Note

If you are not using XenCenter for NIC bonding, the quickest way to create pool-wide NIC bonds is to create the bond on the master, and then restart the other pool members. Alternatively, you can use the service xapi restart command. This causes the bond and VLAN settings on the master to be inherited by each host. The management interface of each host must, however, be manually reconfigured.

To create a NIC bond, follow the procedure described previously in the section called “Adding NIC Bonds to New Resource Pools”.

You can use either XenCenter or the xe CLI to assign a NIC an IP address and dedicate it to a specific function, such as storage traffic. When you configure a NIC with an IP address, you do so by creating a secondary interface. (The IP-enabled NIC that XenServer uses for management is known as the management interface.)

When you want to dedicate a secondary interface for a specific purpose, you must put the appropriate network configuration in place to ensure the NIC is used only for the desired traffic. For example, to dedicate a NIC to storage traffic, the NIC, storage target, switch, and/or VLAN must be configured so that the target is only accessible over the assigned NIC. If your physical and IP configuration does not limit the traffic that can be sent across the storage NIC, it is possible to send other traffic, such as management traffic, across the secondary interface.

When you create a new secondary interface for storage traffic, you must assign it an IP address that (a) is on the same subnet as the storage controller, if applicable, and (b) is not on the same subnet as any other secondary interfaces or the management interface.

When you are configuring secondary interfaces, each secondary interface must be on a separate subnet. For example, if you want to configure two additional secondary interfaces for storage, you will require IP addresses on three different subnets – one subnet for the management interface, one subnet for Secondary Interface 1, and one subnet for Secondary Interface 2.
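As a sketch of configuring a secondary interface for storage from the CLI (the PIF UUID comes from xe pif-list; the addresses are placeholders on their own subnet), assign a static IP address and then mark the interface's purpose:

xe pif-reconfigure-ip uuid=pif_uuid mode=static IP=192.168.1.10 netmask=255.255.255.0
xe pif-param-set uuid=pif_uuid disallow-unplug=true
xe pif-param-set uuid=pif_uuid other-config:management_purpose="Storage"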

If you are using bonding for resiliency for your storage traffic, you may want to consider using LACP instead of the Linux bridge bonding. To use LACP bonding, you must configure the vSwitch as your networking stack. For more information, see the section called “vSwitch Networks”.

Note

When selecting a NIC to configure as a secondary interface for use with iSCSI or NFS SRs, ensure that the dedicated NIC uses a separate IP subnet that is not routable from the management interface. If this is not enforced, then storage traffic may be directed over the main management interface after a host reboot, due to the order in which network interfaces are initialized.

If you want to use a secondary interface for storage that can be routed from the management interface also (bearing in mind that this configuration is not the best practice), you have two options:

  • After a host reboot, ensure that the secondary interface is correctly configured, and use the xe pbd-unplug and xe pbd-plug commands to reinitialize the storage connections on the host (see the sketch after this list). This restarts the storage connection and routes it over the correct interface.

  • Alternatively, you can use xe pif-forget to remove the interface from the XenServer database and manually configure it in the control domain. This is an advanced option and requires you to be familiar with how to manually configure Linux networking.
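A minimal sketch of the first option, assuming the UUID of the storage PBD has been found with xe pbd-list:

xe pbd-unplug uuid=pbd_uuid
xe pbd-plug uuid=pbd_uuid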

Single Root I/O Virtualization (SR-IOV) is a PCI device virtualization technology that allows a single PCI device to appear as multiple PCI devices on the physical PCI bus. The actual physical device is known as a Physical Function (PF), while the others are known as Virtual Functions (VFs). This allows the hypervisor to directly assign one or more VFs to a Virtual Machine (VM): the guest can then use the VF as it would any other directly assigned PCI device.

Assigning one or more VFs to a VM allows the VM to directly exploit the hardware. When configured, each VM behaves as though it is using the NIC directly, reducing processing overhead and improving performance.

Warning

If your VM has an SR-IOV VF, functions that require VM mobility, for example, Live Migration, Rolling Pool Upgrade, High Availability and Disaster Recovery, are not possible. This is because the VM is directly tied to the physical SR-IOV enabled NIC VF. In addition, VM network traffic sent via an SR-IOV VF bypasses the vSwitch, so it is not possible to create ACLs or view QoS.

Procedure 4.6. Assigning a SR-IOV NIC VF to a VM

Note

SR-IOV is supported only with SR-IOV enabled NICs listed on the XenServer Hardware Compatibility List and only when used in conjunction with a Windows Server 2008 guest operating system.

  1. Open a local command shell on your XenServer host.

  2. Run the lspci command to display a list of the Virtual Functions (VFs). For example:

    07:10.0 Ethernet controller: Intel Corporation 82559 \ 
      Ethernet Controller Virtual Function (rev 01)

    In the example above, 07:10.0 is the bus:device.function address of the VF.

  3. Assign the required VF to the target VM by running the following command:

    xe vm-param-set other-config:pci=0/0000:bus:device.function uuid=vm-uuid
  4. Start the VM, and install the appropriate VF driver for your specific hardware.
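For example, using the VF at the bus:device.function address 07:10.0 shown in step 2, the command in step 3 becomes (the VM UUID is a placeholder):

xe vm-param-set other-config:pci=0/0000:07:10.0 uuid=vm_uuid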

Note

You can assign multiple VFs to a single VM; however, the same VF cannot be shared across multiple VMs.

To limit the amount of outgoing data a VM can send per second, you can set an optional Quality of Service (QoS) value on VM virtual interfaces (VIFs). The setting lets you specify a maximum transmit rate for outgoing packets in kilobytes per second.

The QoS value limits the rate of transmission from the VM. The QoS setting does not limit the amount of data the VM can receive. If such a limit is desired, Citrix recommends limiting the rate of incoming packets higher up in the network (for example, at the switch level).

Depending on the networking stack configured in the pool, you can set the Quality of Service (QoS) value on VM virtual interfaces (VIFs) in one of two places, either a) on the vSwitch Controller or b) in XenServer (using the CLI or XenCenter), as described in the following table:

Networking Stack      Configuration Methods Available

vSwitch:

  • vSwitch Controller. This is the preferred method of setting the maximum transmission rate on a VIF when the vSwitch is the networking stack. When using the vSwitch stack, the XenCenter QoS option is not available.

  • xe commands. You can set the QoS transmit rate using the commands in the example that follows. However, the preferred method is the vSwitch Controller UI, which provides finer-grained control.

Linux bridge:

  • XenCenter. You can set the QoS transmit rate limit value in the properties dialog for the virtual interface.

  • xe commands. You can set the QoS transmit rate using the commands in the example that follows.

Important

When the vSwitch is configured as the networking stack, it is possible to inadvertently configure a QoS value both on the vSwitch Controller and in the XenServer host. In this case, XenServer limits the outgoing traffic using the lowest rate that you set.

Example of CLI command for QoS:

To limit a VIF to a maximum transmit rate of 100 kilobytes per second using the CLI, use the vif-param-set command:

xe vif-param-set uuid=vif_uuid qos_algorithm_type=ratelimit 
xe vif-param-set uuid=vif_uuid qos_algorithm_params:kbps=100

Note

If you are using the vSwitch Controller, Citrix recommends setting the transmission rate limit in the vSwitch Controller instead of this CLI command. For directions on setting the QoS rate limit in the vSwitch Controller, see the vSwitch Controller User Guide.

This section discusses how to change the networking configuration of a XenServer host. This includes:

  • changing the hostname (that is, the Domain Name System (DNS) name)

  • adding or removing DNS servers

  • changing IP addresses

  • changing which NIC is used as the management interface

  • adding a new physical NIC to the server

  • enabling ARP filtering (switch-port locking)

The system hostname, also known as the domain or DNS name, is defined in the pool-wide database and modified using the xe host-set-hostname-live CLI command as follows:

xe host-set-hostname-live host-uuid=host_uuid host-name=host-name

The underlying control domain hostname changes dynamically to reflect the new hostname.

To add or remove DNS servers in the IP addressing configuration of a XenServer host, use the pif-reconfigure-ip command. For example, for a PIF with a static IP:

xe pif-reconfigure-ip uuid=pif_uuid mode=static DNS=new_dns_ip

Network interface configuration can be changed using the xe CLI. The underlying network configuration scripts should not be modified directly.

To modify the IP address configuration of a PIF, use the pif-reconfigure-ip CLI command. See the section called “pif-reconfigure-ip” for details on the parameters of the pif-reconfigure-ip command.

Note

See the section called “Changing IP Address Configuration in Resource Pools” for details on changing host IP addresses in resource pools.

XenServer hosts in resource pools have a single management IP address used for management and communication to and from other hosts in the pool. The steps required to change the IP address of a host's management interface are different for master and other hosts.

Note

Use caution when changing the IP address of a server or other networking parameters. Depending on the network topology and the change being made, connections to network storage may be lost. If this happens, the storage must be replugged using the Repair Storage function in XenCenter or the pbd-plug CLI command. For this reason, it may be advisable to migrate VMs away from the server before changing its IP configuration.

Procedure 4.7. To change the IP address of a member host (not pool master)

  1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Appendix A, Command Line Interface for details on the parameters of the pif-reconfigure-ip command:

    xe pif-reconfigure-ip uuid=pif_uuid mode=DHCP
  2. Use the host-list CLI command to confirm that the member host has successfully reconnected to the master host by checking that all the other XenServer hosts in the pool are visible:

    xe host-list

Changing the IP address of the master XenServer host requires additional steps because each of the member hosts uses the advertised IP address of the pool master for communication and will not know how to contact the master when its IP address changes.

Whenever possible, use a dedicated IP address for the pool master that is not likely to change for the lifetime of the pool.

Procedure 4.8. To change the IP address of the pool master

  1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Appendix A, Command Line Interface for details on the parameters of the pif-reconfigure-ip command:

    xe pif-reconfigure-ip uuid=pif_uuid mode=DHCP
  2. When the IP address of the pool master host is changed, all member hosts enter emergency mode when they fail to contact the master host.

  3. On the master XenServer host, use the pool-recover-slaves command to force the master to contact each of the member hosts and inform them of the new master IP address:

    xe pool-recover-slaves

When XenServer is installed on a host with multiple NICs, one NIC is selected for use as the management interface. The management interface is used for XenCenter connections to the host and for host-to-host communication.

Procedure 4.9. To change the NIC used for the management interface

  1. Use the pif-list command to determine which PIF corresponds to the NIC to be used as the management interface. The UUID of each PIF is returned.

    xe pif-list
  2. Use the pif-param-list command to verify the IP addressing configuration for the PIF that will be used for the management interface. If necessary, use the pif-reconfigure-ip command to configure IP addressing for the PIF to be used. See Appendix A, Command Line Interface for more detail on the options available for the pif-reconfigure-ip command.

    xe pif-param-list uuid=pif_uuid
  3. Use the host-management-reconfigure CLI command to change the PIF used for the management interface. If this host is part of a resource pool, this command must be issued on the member host console:

    xe host-management-reconfigure pif-uuid=pif_uuid

Warning

Putting the management interface on a VLAN network is not supported.

To disable remote access to the management console entirely, use the host-management-disable CLI command.

Warning

Once the management interface is disabled, you must log in on the physical host console to perform management tasks, and external interfaces such as XenCenter will no longer work.

Install a new physical NIC on a XenServer host in the usual manner. Then, after restarting the server, run the xe CLI command pif-scan to cause a new PIF object to be created for the new NIC.
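A sketch of the scan step, assuming the host UUID has been found with xe host-list:

xe pif-scan host-uuid=host_uuid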

The XenServer switch-port locking feature lets you control traffic being sent from unknown, untrusted, or potentially hostile VMs by limiting their ability to pretend they have a MAC or IP address that was not assigned to them. You can use the port-locking commands in this feature to block all traffic on a network by default or define specific IP addresses from which an individual VM is allowed to send traffic.

Switch-port locking is a feature designed for public cloud-service providers in environments concerned about internal threats. This functionality may help public cloud-service providers who have a network architecture in which each VM has a public, Internet-connected IP address. Because cloud tenants are always untrusted, it may be desirable to use security measures, such as spoofing protection, to ensure tenants cannot attack other virtual machines in the cloud.

Using switch-port locking lets you simplify your network configuration by enabling all of your tenants or guests to use the same Layer 2 network.

One of the most important functions of the port-locking commands is that they can restrict the traffic that an untrusted guest can send. This, in turn, restricts the guest's ability to pretend it has a MAC or IP address it does not actually possess. Specifically, you can use these commands to prevent a guest from:

  • Claiming an IP or MAC address other than the ones the XenServer administrator has specified it can use

  • Intercepting, spoofing, or disrupting the traffic of other VMs

Switch-port locking requirements and notes:

  • The XenServer switch-port locking feature is supported on the Linux bridge and vSwitch networking stacks.

  • When Role Based Access Control (RBAC) is enabled in your environment, the user configuring switch-port locking must be logged in with an account that has at least a Pool Operator or Pool Admin role. When RBAC is not enabled in your environment, the user must be logged in with the root account for the pool master.

  • When you run the switch-port locking commands, networks can be online or offline.

  • In Windows guests, the disconnected Network icon only appears when XenServer Tools are installed in the guest.

Without any switch-port locking configuration, VIFs are set to "network_default" and networks are set to "unlocked."

Configuring switch-port locking is not supported when the vSwitch controller and other third-party controllers are in use in the environment.

Switch port locking does not prevent cloud tenants from:

  • Performing an IP-level attack on another tenant/user. However, if switch-port locking is configured, it prevents the tenant from performing the IP-level attack by a) impersonating another tenant or user in the cloud or b) intercepting traffic intended for another user.

  • Exhausting network resources.

  • Receiving some traffic intended for other virtual machines through normal switch flooding behaviors (for broadcast MAC addresses or unknown destination MAC addresses).

Likewise, switch-port locking does not restrict where a VM can send traffic to.

You can implement the switch-port locking functionality either by using the command line or the XenServer API. However, in large environments, where automation is a primary concern, the most typical implementation method might be by using the API.

This section provides examples of how switch-port locking can prevent certain types of attacks. In these examples, VM-c is a virtual machine that a hostile tenant (Tenant C) is leasing and using for attacks. VM-a and VM-b are virtual machines leased by non-attacking tenants.

Example 1: How Switch-Port Locking Can Prevent ARP Spoofing

ARP spoofing refers to an attacker's attempts to associate his or her MAC address with the IP address of another node, which could potentially result in that node's traffic being sent to the attacker instead. To achieve this goal, the attacker sends fake (spoofed) ARP messages to an Ethernet LAN.

Scenario:

Virtual Machine A (VM-a) wants to send IP traffic from VM-a to Virtual Machine B (VM-b) by addressing it to VM-b's IP address. The owner of Virtual Machine C wants to use ARP spoofing to pretend his VM, VM-c, is actually VM-b.

  1. VM-c sends a speculative stream of ARP replies to VM-a. These ARP replies claim that the MAC address in the reply (c_MAC) is associated with the IP address, b_IP.

    Result: Because the administrator enabled switch-port locking, these packets are all dropped; switch-port locking prevents impersonation.

  2. VM-b sends an ARP reply to VM-a, claiming that the MAC address in the reply (b_MAC) is associated with the IP address, b_IP.

    Result: VM-a receives VM-b's ARP response.

Example 2: IP Spoofing Prevention

IP address spoofing is a process that conceals the identity of packets by creating Internet Protocol (IP) packets with a forged source IP address.

Scenario:

Tenant C is attempting to perform a Denial of Service attack using his host, Host-C, on a remote system to disguise his identity.

Attempt 1

Tenant C sets Host-C's IP address and MAC address to VM-a's IP and MAC addresses (a_IP and a_MAC). Tenant C instructs Host-C to send IP traffic to a remote system.

Result: The Host-C packets are dropped because the administrator enabled switch-port locking, which prevents impersonation.

Attempt 2

Tenant C sets Host-C's IP address to VM-a's IP address (a_IP) and keeps his original c_MAC.

Tenant C instructs Host-C to send IP traffic to a remote system.

Result: The Host-C packets are dropped. This is because the administrator enabled switch-port locking, which prevents impersonation.

Example 3: Web Hosting

Scenario:

Alice is an infrastructure administrator.

One of her tenants, Tenant B, is hosting multiple websites from his VM, VM-b. Each website needs a distinct IP address hosted on the same virtual network interface (VIF).

Alice reconfigures VM-b's VIF to be locked to a single MAC address but many IP addresses.

The switch-port locking feature lets you control packet filtering at either or both of two levels:

  • VIF level. Settings you configure on the VIF determine how packets are filtered. You can set the VIF to prevent the VM from sending any traffic, restrict the VIF so it can only send traffic using its assigned IP address, or allow the VM to send traffic to any IP address on the network connected to the VIF.

  • Network level. The XenServer network determines how packets are filtered. When a VIF’s locking mode is set to network_default, it refers to the network-level locking setting to determine what traffic to allow.

Regardless of which networking stack you use, the feature operates the same way. However, as described in more detail in the sections that follow, the Linux bridge does not fully support switch-port locking in IPv6.

The XenServer switch-port locking feature provides a locking mode that lets you configure VIFs in four different states. These states only apply when the VIF is plugged into a running virtual machine.

This illustration shows how three different VIF locking mode states behave when the network locking mode is set to unlocked and the VIF state is configured. In the first image, the VIF state is set to default so no traffic from the VM is filtered. In the second image, the VIF does not send or receive any packets because the locking mode is set to disabled. In the third image, the VIF state is set to locked, so the VIF can only send packets if those packets contain the correct MAC and IP address.

  • Network_default. When the VIF's state is set to network_default, XenServer uses the network's default-locking-mode parameter to determine if and how to filter packets travelling through the VIF. The behavior varies according to whether the associated network has its default-locking-mode parameter set to disabled or unlocked:

    • When default-locking-mode=disabled, XenServer applies a filtering rule so that the VIF drops all traffic.

    • When default-locking-mode=unlocked, XenServer removes all the filtering rules associated with the VIF. By default, the default-locking-mode parameter is set to unlocked.

    For information about the default-locking-mode parameter, see the section called “Network Commands”.

    The default locking mode of the network has no effect on attached VIFs whose locking state is anything other than network_default.

    Note

    You cannot change the default-locking-mode of a network that has active VIFs attached to it.

  • Locked. XenServer applies filtering rules so that only traffic sent to or sent from the specified MAC and IP addresses is allowed to be sent out through the VIF. In this mode, if no IP addresses are specified, then the VM cannot send any traffic through that VIF (on that network).

    To specify the IP addresses from which the VIF will accept traffic, specify the allowed IPv4 and/or IPv6 addresses using the ipv4-allowed or ipv6-allowed parameters. However, if you have the Linux bridge configured, do not enter IPv6 addresses.

    XenServer lets you enter IPv6 addresses when the Linux bridge is active; however, XenServer cannot filter based on the IPv6 addresses entered. (The reason is that the Linux bridge does not have modules to filter Neighbor Discovery Protocol (NDP) packets, so complete protection cannot be implemented, and guests would be able to impersonate another guest by forging NDP packets.) As a result, if you specify even one IPv6 address, XenServer lets all IPv6 traffic pass through the VIF. If you do not specify any IPv6 addresses, XenServer does not let any IPv6 traffic pass through to the VIF.

  • Unlocked. All network traffic can pass through the VIF. That is, no filters are applied to any traffic going to or from the VIF.

  • Disabled. No traffic is allowed to pass through the VIF. (That is, XenServer applies a filtering rule so that the VIF drops all traffic.)

This section provides three different procedures:

  • Restrict VIFs to use a specific IP address

  • Add an IP address to an existing restricted list (for example, if you need to add an IP address to a VIF while the VM is still running and connected to the network, such as when you are taking a network offline temporarily)

  • Remove an IP address from an existing restricted list

If a VIF's locking-mode is set to locked, it can only use the addresses specified in the ipv4-allowed or ipv6-allowed parameters.

Because, in some relatively rare cases, VIFs may have more than one IP address, it is possible to specify multiple IP addresses for a VIF.

You can perform these procedures before or after the VIF is plugged in (or the VM is started).
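As a sketch of these procedures (the VIF UUID comes from xe vif-list; the addresses are placeholders), lock the VIF to specific IP addresses, then add or remove an address from the restricted list:

xe vif-param-set uuid=vif_uuid locking-mode=locked ipv4-allowed=10.0.0.1,10.0.0.2
xe vif-param-add uuid=vif_uuid ipv4-allowed=10.0.0.3
xe vif-param-remove uuid=vif_uuid ipv4-allowed=10.0.0.3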

The following procedure prevents a virtual machine from communicating through a specific VIF. Since a VIF connects to a specific XenServer network, you can use this procedure to prevent a virtual machine from sending or receiving any traffic from a specific network. This provides a more granular level of control than disabling an entire network.

If you use the CLI command, you do not need to unplug the VIF to set the VIF's locking mode; the command changes the filtering rules while the VIF is running (live). In this case the network connection still appears to be present; however, the VIF drops any packets the VM attempts to send.
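For example, a minimal sketch of disabling a VIF live from the CLI:

xe vif-param-set uuid=vif_uuid locking-mode=disabled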

Tip

To find the UUID of a VIF, run the xe vif-list command on the host. The device ID indicates the device number of the VIF.

To revert a VIF to the default (original) locking mode state, use the following procedure. By default, when you create a VIF, XenServer configures it so that it is not restricted to using a specific IP address.
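A minimal sketch of reverting a VIF to the default state:

xe vif-param-set uuid=vif_uuid locking-mode=network_default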

Rather than running the VIF locking mode commands for each VIF, you can ensure all VIFs are disabled by default. To do so, you must modify the packet filtering at the network level, which causes the XenServer network to determine how packets are filtered, as described in the section called “How Switch-port Locking Works”.

Specifically, a network's default-locking-mode setting determines how new VIFs with default settings behave. Whenever a VIF's locking-mode is set to default, the VIF refers to the network-locking mode (default-locking-mode) to determine if and how to filter packets travelling through the VIF:

  • Unlocked. When the network default-locking-mode parameter is set to unlocked, XenServer lets the VM send traffic to any IP address on the network the VIF connects to.

  • Disabled. When the default-locking-mode parameter is set to disabled, XenServer applies a filtering rule so that the VIF drops all traffic.

By default, the default-locking-mode for all networks created in XenCenter or using the CLI is set to unlocked.
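A sketch of checking and changing a network's default locking mode from the CLI, assuming the network UUID has been found with xe network-list:

xe network-param-get uuid=network_uuid param-name=default-locking-mode
xe network-param-set uuid=network_uuid default-locking-mode=disabled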

By setting the VIF's locking mode to its default (network_default), you can use this setting to create a basic default configuration (at the network level) for all newly created VIFs that connect to a specific network.

This illustration shows how, when a VIF's locking-mode is set to its default setting (network_default), the VIF uses the network's default-locking-mode to determine its behavior. In this illustration, the network is set to default-locking-mode=disabled, so no traffic can pass through the VIF.

For example, because VIFs are created with their locking-mode set to network_default by default, setting a network's default-locking-mode=disabled disables any new VIFs for which you have not configured the locking mode. They remain disabled until you either (a) change the individual VIF's locking-mode parameter or (b) explicitly set the VIF's locking-mode to unlocked (for example, if you trust a specific VM enough not to want to filter its traffic at all).

Note

To get the UUID for a network, run the xe network-list command. This command displays the UUIDs for all the networks on the host on which you ran the command.

