Prerequisites for installing a NetScaler VPX instance on Linux-KVM platform
Check the minimum system requirements for a Linux-KVM server running a NetScaler VPX instance.
CPU requirement:
- 64-bit x86 processors with the hardware virtualization features provided by the AMD-V and Intel VT-x extensions.
To test whether your CPU supports hardware virtualization, enter the following command at the host Linux shell prompt (see the verification sketch after this list):
egrep '^flags.*(vmx|svm)' /proc/cpuinfo
If the BIOS settings for these extensions are disabled, enable them in the BIOS.
- Provide at least 2 CPU cores to the host Linux.
- There is no specific recommendation for processor speed, but the higher the speed, the better the performance of the VM application.
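As a quick sanity check of the host, the following sketch (standard Linux tools; output varies by CPU vendor and distribution) counts the virtualization flags and verifies that the KVM kernel modules are loaded:
egrep -c '^flags.*(vmx|svm)' /proc/cpuinfo
lsmod | grep kvm
A nonzero count from the first command indicates hardware virtualization support; the second command lists kvm and kvm_intel (or kvm_amd) if the modules are loaded.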
Memory (RAM) requirement:
Minimum 4 GB for the host Linux kernel. Add additional memory as required by the VMs.
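To check the memory available on the host before provisioning VMs, you can use the standard free utility (the -h flag is available in recent procps versions; use free -m on older hosts):
free -h
Subtract the memory reserved for the host Linux kernel (at least 4 GB) from the total when sizing the VPX instances.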
Hard disk requirement:
Calculate the space required for the host Linux kernel and for the VMs. A single NetScaler VPX VM requires 20 GB of disk space.
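To confirm that the file system holding the VM images has enough free space, check it with df; the path below assumes the default libvirt image directory and might differ on your host:
df -h /var/lib/libvirt/images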
Software requirements
The host kernel must be a 64-bit Linux kernel, release 2.6.20 or later, with all virtualization tools installed. Citrix recommends newer kernels, such as 3.6.11-4 and later.
Many Linux distributions, such as Red Hat, CentOS, and Fedora, have tested kernel versions and associated virtualization tools.
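To verify the kernel release and architecture of the host, the standard uname utility can be used:
uname -r
uname -m
The first command prints the kernel release, which must be 2.6.20 or later; the second prints x86_64 on a 64-bit kernel.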
Guest VM hardware requirements
NetScaler VPX supports the IDE and virtIO hard disk types. The hard disk type is configured in the XML file, which is part of the NetScaler package.
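After the instance is provisioned, you can confirm the disk type defined in the XML file by inspecting the libvirt domain definition. The domain name NetScaler-VPX below is only an example; substitute the name of your instance:
virsh dumpxml NetScaler-VPX | grep -A 5 '<disk'
The target element in the output shows bus='ide' or bus='virtio', matching the hard disk type configured in the XML file.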
Networking requirements
NetScaler VPX supports virtIO para-virtualized, SR-IOV, and PCI Passthrough network interfaces.
For more information about the supported network interfaces, see:
- Provision the NetScaler VPX instance by using the Virtual Machine Manager
- Configure a NetScaler VPX instance to use SR-IOV network interfaces
- Configure a NetScaler VPX instance to use PCI passthrough network interfaces
Source Interface and Modes
The source device type can be either Bridge or MacVTap. For MacVTap, four modes are possible: VEPA, Bridge, Private, and Pass-through. Check the types of interfaces that you can use and the supported traffic types, as given below; a brief command sketch for creating a MacVTap interface follows the mode descriptions.
Bridge:
- Linux Bridge.
- ebtables and iptables settings on the host Linux might filter the traffic on the bridge if you do not choose the correct settings or disable the iptables services.
MacVTap (VEPA mode):
- Better performance than a bridge.
- Interfaces from the same lower device can be shared across the VMs.
- Inter-VM communication using the same lower device is possible only if the upstream or downstream switch supports VEPA mode.
MacVTap (private mode):
- Better performance than a bridge.
- Interfaces from the same lower device can be shared across the VMs.
- Inter-VM communication using the same lower device is not possible.
MacVTap (bridge mode):
- Better performance than a bridge.
- Interfaces from the same lower device can be shared across the VMs.
- Inter-VM communication using the same lower device is possible if the lower device link is UP.
MacVTap (Pass-through mode):
- Better performance than a bridge.
- Interfaces from the same lower device cannot be shared across the VMs.
- Only one VM can use the lower device.
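The following sketch shows how a MacVTap interface in bridge mode can be created on top of a lower device by using iproute2; eth6 and macvtap0 are example names only, and management tools such as the Virtual Machine Manager normally create these interfaces for you:
ip link add link eth6 name macvtap0 type macvtap mode bridge
ip link show macvtap0
To use VEPA, private, or pass-through mode instead, replace mode bridge with mode vepa, mode private, or mode passthru.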
Note: For the best performance of the VPX instance, ensure that the gro and lro capabilities are switched off on the source interfaces.
Properties of source interfaces
Make sure that you switch off the generic-receive-offload (gro) and large-receive-offload (lro) capabilities of the source interfaces. To switch off the gro and lro capabilities, run the following commands at the host Linux shell prompt.
ethtool -K eth6 gro off
ethtool -K eth6 lro off
Example:
[root@localhost ~]# ethtool -k eth6
Offload parameters for eth6:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
[root@localhost ~]#
Example:
If the host Linux bridge is used as a source device, as in the following example, gro and lro capabilities must be switched off on the vnet interfaces, which are the virtual interfaces connecting the host to the guest VMs.
[root@localhost ~]# brctl show eth6_br
bridge name bridge id STP enabled interfaces
eth6_br 8000.00e0ed1861ae no eth6
vnet0
vnet2
[root@localhost ~]#
In the above example, the two virtual interfaces are derived from eth6_br and are represented as vnet0 and vnet2. Run the following commands to switch off the gro and lro capabilities on these interfaces.
ethtool -K vnet0 gro off
ethtool -K vnet2 gro off
ethtool -K vnet0 lro off
ethtool -K vnet2 lro off
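If the bridge has many vnet interfaces, the following sketch switches off gro and lro on every vnet port of the bridge in one pass; it assumes the bridge is named eth6_br, as in the example above:
for vif in /sys/class/net/eth6_br/brif/vnet*; do
ethtool -K "$(basename "$vif")" gro off
ethtool -K "$(basename "$vif")" lro off
done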
Promiscuous mode
Promiscuous mode must be enabled for the following features to work:
- L2 mode
- Multicast traffic processing
- Broadcast
- IPv6 traffic
- Virtual MAC
- Dynamic routing
Use the following command to enable promiscuous mode.
[root@localhost ~]# ifconfig eth6 promisc
[root@localhost ~]# ifconfig eth6
eth6 Link encap:Ethernet HWaddr 78:2b:cb:51:54:a3
inet6 addr: fe80::7a2b:cbff:fe51:54a3/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:9000 Metric:1
RX packets:142961 errors:0 dropped:0 overruns:0 frame:0
TX packets:2895843 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:14330008 (14.3 MB) TX bytes:1019416071 (1.0 GB)
[root@localhost ~]#
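If the host uses iproute2 rather than the legacy net-tools, an equivalent way to enable and verify promiscuous mode is shown below; note that neither method persists across a reboot, so add the command to your network startup scripts if needed:
ip link set eth6 promisc on
ip -d link show eth6
The detailed output reports promiscuity 1 when the mode is enabled.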
Module required
For better network performance, make sure that the vhost_net module is present on the Linux host. To check whether the vhost_net module is loaded, run the following command on the Linux host:
lsmod | grep vhost_net
If the vhost_net module is not loaded, enter the following command to load it:
modprobe vhost_net
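On systemd-based distributions, one way to load the module automatically at boot (a hedged sketch; the file name is arbitrary) is:
echo vhost_net > /etc/modules-load.d/vhost_net.conf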