Support matrix and usage guidelines
This document lists the different hypervisors and features supported on a NetScaler VPX instance, and their usage guidelines and limitations.
- Tables 1–6 list the supported hypervisors.
- Table 7 lists the VPX features and their limitations on the different supported hypervisors.
- Table 8 lists the supported web browsers that you can use to access the GUI and Dashboard.
Table 1. VPX instance on XenServer
| XenServer version | SysID | VPX models |
|---|---|---|
| 6.2, 6.5 | 450000 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G |
Table 2. VPX instance on VMware ESX server
| VMware ESX version | SysID | VPX models |
|---|---|---|
| 5.5 (build number: 3568722); 6.0 (build number: 3620759) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
Table 3. VPX on Microsoft Hyper-V
| Hyper-V version | SysID | VPX models |
|---|---|---|
| 2012, 2012R2 | 450020 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000 |
Table 4. VPX instance on generic KVM
| Generic KVM version | SysID | VPX models |
|---|---|---|
| RHEL 7.2, Ubuntu 15 | 450070 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
Note: The VPX instance is qualified for the hypervisor release versions listed in Tables 1–4, and not for patch releases within a version. However, the VPX instance is expected to work seamlessly with patch releases of a supported version. If it does not, log a support case for troubleshooting and debugging.
Table 5. VPX instance on AWS
| AWS version | SysID | VPX models |
|---|---|---|
| N/A | 450040 | VPX 10, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 15G, VPX BYOL |
Table 6. VPX instance on Azure
| Azure version | SysID | VPX models |
|---|---|---|
| N/A | 450020 | VPX 10, VPX 200, VPX 1000, VPX 3000, VPX BYOL |
Table 7. VPX feature matrix
*Clustering support is available on SR-IOV for client- and server-facing interfaces, but not for the backplane.
**Interface DOWN events are not recorded in NetScaler VPX instances.
For static LA, traffic might still be sent on an interface whose physical status is DOWN.
For LACP, the peer device learns of an interface DOWN event through the LACP timeout mechanism:
Short timeout: 3 seconds
Long timeout: 90 seconds
For LACP, interfaces must not be shared across VMs.
For dynamic routing, the convergence time depends on the routing protocol, because link events are not detected.
Monitored static route functionality fails if monitors are not bound to the static routes, because the route state depends on the VLAN status, and the VLAN status depends on the link status.
Partial failure detection does not happen in a high availability setup if there is a link failure. A high availability split-brain condition might occur if there is a link failure.
***When any link event (disable/enable, reset) is generated from a VPX instance, the physical status of the link does not change. For static LA, any traffic initiated by the peer is dropped on the instance.
For LACP, the peer device learns of an interface DOWN event through the LACP timeout mechanism:
Short timeout: 3 seconds
Long timeout: 90 seconds
For LACP, interfaces must not be shared across VMs.
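For reference, the LACP mode and timeout are configured per interface on the NetScaler appliance. The following is a hedged CLI sketch, not a procedure from this document; the interface name 1/1, the key value, and the exact parameter names are assumptions based on the interface-level LACP options.

```
> set interface 1/1 -lacpMode ACTIVE -lacpKey 1 -lacptimeout SHORT
> show interface 1/1
```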
- For the VLAN tagging feature to work on VMware ESX, set the port group’s VLAN ID to 1 - 4095 on the vSwitch of the VMware ESX server. For more information about setting a VLAN ID on the vSwitch, see the VMware vSphere documentation.
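On a standard vSwitch, one way to apply the port group VLAN ID is with the esxcli utility on the ESX host. This is a minimal sketch; the port group name VPX-Data and the VLAN ID 4095 are assumptions for illustration, not values from this document.

```
# Set the VLAN ID on the assumed standard-vSwitch port group "VPX-Data" (4095 shown as an example value)
esxcli network vswitch standard portgroup set --portgroup-name "VPX-Data" --vlan-id 4095

# List the port groups to verify the VLAN ID setting
esxcli network vswitch standard portgroup list
```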
Table 8. Supported browsers
| Operating system | Browser and versions |
|---|---|
| Windows 7 | Internet Explorer 8, 9, 10, and 11; Mozilla Firefox 3.6.25 and later; Google Chrome 15 and later |
| Windows 64-bit | Internet Explorer 8, 9; Google Chrome 15 and later |
| MAC | Mozilla Firefox 12 and later; Safari 5.1.3; Google Chrome 15 and later |
Usage guidelines
Follow these usage guidelines:
- See the VMware ESXi CPU Considerations section in the document Performance Best Practices for VMware vSphere 6.5. Here’s an extract:

  It is not recommended that virtual machines with high CPU/Memory demand sit on a Host/Cluster that is overcommitted. In most environments ESXi allows significant levels of CPU overcommitment (that is, running more vCPUs on a host than the total number of physical processor cores in that host) without impacting virtual machine performance. If an ESXi host becomes CPU saturated (that is, the virtual machines and other loads on the host demand all the CPU resources the host has), latency-sensitive workloads might not perform well. In this case you might want to reduce the CPU load, for example by powering off some virtual machines or migrating them to a different host (or allowing DRS to migrate them automatically).
- The Citrix ADC VPX is a latency-sensitive, high-performance virtual appliance. To deliver its expected performance, the appliance requires vCPU reservation, memory reservation, and vCPU pinning on the host (a hedged vCPU pinning sketch appears after this list). Also, hyper-threading must be disabled on the host. If the host does not meet these requirements, issues such as high availability failover, CPU spikes within the VPX instance, sluggishness in accessing the VPX CLI, pitboss daemon crashes, packet drops, and low throughput can occur.
- A hypervisor is considered over-provisioned if either of the following conditions is met:

  - The total number of virtual cores (vCPUs) provisioned on the host is greater than the total number of physical cores (pCPUs).
  - The provisioned VMs together consume more vCPUs than the total number of pCPUs.

  If an instance is over-provisioned, the hypervisor might not be able to guarantee the resources reserved for the instance (such as CPU and memory) because of hypervisor scheduling overheads, bugs, or limitations. This can starve the Citrix ADC of CPU resources and lead to the issues mentioned earlier under Usage guidelines. Administrators are advised to reduce the tenancy on the host so that the total number of vCPUs provisioned on the host is less than or equal to the total number of pCPUs.
  Example: For the ESX hypervisor, if the `%RDY%` parameter of a VPX vCPU is greater than 0 in the `esxtop` command output, the ESX host is said to have scheduling overheads, which can cause latency-related issues for the VPX instance. In such a situation, reduce the tenancy on the host so that `%RDY%` always returns to 0. Alternatively, contact the hypervisor vendor to triage the reason for the resource reservation not being honored.
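One hedged way to collect the `%RDY%` counter for offline review is to run `esxtop` in batch mode on the ESX host. The sample interval, iteration count, and output file name below are illustrative assumptions, not values from this document.

```
# Capture two 5-second samples of esxtop counters in batch (CSV) mode on the ESX host
esxtop -b -d 5 -n 2 > esxtop-sample.csv
```

In the captured CSV, the "% Ready" columns for the VPX virtual machine’s world group correspond to the `%RDY%` counter shown in the interactive `esxtop` view; values consistently above 0 for the VPX vCPUs indicate CPU scheduling contention on the host.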
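The vCPU reservation, memory reservation, and vCPU pinning called for earlier in this list are configured on the hypervisor, not inside the VPX instance, and the procedure differs per hypervisor. As a minimal sketch for a Linux-KVM host, the guest’s vCPUs can be pinned to dedicated physical CPUs with `virsh`; the domain name netscaler-vpx and the CPU numbers are assumptions for illustration.

```
# Pin guest vCPUs 0 and 1 of the assumed domain "netscaler-vpx" to dedicated host CPUs 2 and 3
virsh vcpupin netscaler-vpx 0 2
virsh vcpupin netscaler-vpx 1 3

# Display the current vCPU-to-pCPU pinning for the guest to verify
virsh vcpupin netscaler-vpx
```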