Citrix ADC appliance networking and VLAN best practices
A Citrix ADC appliance uses VLANs to determine which interface must be used for which traffic. In addition, the Citrix ADC appliance does not participate in Spanning Tree. Without the proper VLAN configuration, the Citrix ADC appliance cannot determine which interface to use, and it can function more like a hub than a switch or a router. In other words, the Citrix ADC appliance can use all interfaces for each conversation.
Symptoms of VLAN misconfiguration
A VLAN misconfiguration can manifest itself in many forms, including performance issues, inability to establish connections, randomly disconnected sessions, and, in severe situations, network disruptions seemingly unrelated to the Citrix ADC appliance itself. The Citrix ADC appliance can also report MAC moves, muted interfaces, and management interface transmit or receive buffer overflows, depending on the exact nature of the interaction with your network.
MAC Moves (counter nic_tot_bdg_mac_moved): This issue indicates that the Citrix ADC appliance is using more than one interface to communicate with the same device (MAC address), because it could not properly determine which interface to use.
Muted interfaces (counter nic_err_bdg_muted): This issue indicates that the Citrix ADC appliance has detected that it is creating a routing loop due to VLAN configuration issues, and as such, it has shut down one or more of the offending interfaces in order to prevent a network outage.
Interface buffer overflows, typically referring to management interfaces (counter nic_err_tx_overflow): This issue can be caused if too much traffic is transmitted over a management interface. Management interfaces on the Citrix ADC appliance are not designed to handle large volumes of traffic, which can result from network and VLAN misconfigurations that trigger the Citrix ADC appliance to use a management interface for production data traffic. This often occurs because the Citrix ADC appliance has no way to differentiate traffic on the VLAN/subnet of the NSIP (NSVLAN) from regular production traffic. It is highly recommended that the NSIP be on a separate VLAN and subnet from any production devices such as workstations and servers.
Orphan ACKs (counter tcp_err_orphan_ack): This issue indicates that the Citrix ADC appliance received an ACK packet that it was not expecting, typically on a different interface than the one on which the acknowledged traffic originated. This situation can be caused by VLAN misconfigurations where the Citrix ADC appliance transmits on a different interface than the target device would typically use to communicate with the Citrix ADC appliance (often seen in conjunction with MAC moves).
High rates of retransmissions or retransmit give ups (counters: tcp_err_retransmit_giveups, tcp_err_7th_retransmit, various other retransmit counters): The Citrix ADC appliance attempts to retransmit a TCP packet a total of 7 times before it gives up and terminates the connection. While this situation can be caused by network conditions, it often occurs as a result of VLAN and interface misconfiguration.
High Availability Split Brain: Split Brain is a condition where both high availability nodes believe they are Primary, leading to duplicate IP addresses and loss of Citrix ADC appliance functionality. It is caused when the two high availability nodes cannot communicate with each other using high availability heartbeats on UDP port 3003, sent from the NSIP, across any interface. This is typically caused by VLAN misconfigurations where the native VLAN on the Citrix ADC appliance interfaces does not have connectivity between the Citrix ADC appliances.
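On an appliance, the counters named above can be inspected directly. The following is a minimal diagnostic sketch, assuming recent firmware; the exact counter names available to `nsconmsg` can vary by version, so verify against your build:

```shell
# From the Citrix ADC CLI: per-interface statistics, including MAC moves
# and muted-interface state
stat interface

# From the BSD shell (type "shell" at the CLI): watch specific counters live
nsconmsg -d current -g nic_tot_bdg_mac_moved
nsconmsg -d current -g nic_err_bdg_muted
nsconmsg -d current -g tcp_err_orphan_ack

# From the CLI on each node: high availability state and heartbeat health
show ha node
```

If `show ha node` reports the peer as UNKNOWN or both nodes claim Primary, heartbeat traffic on UDP port 3003 is not passing between the native VLANs, which points back to the VLAN issues described above.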
Best practices for VLAN and network configurations
-
Each subnet must be associated with a VLAN.
-
More than one subnet can be associated with the same VLAN (depending on your network design).
-
Each VLAN should be associated to only one interface (for purposes of this discussion, a LA channel counts as a single interface).
-
If you require more than one subnet to be associated with an interface, the subnets must be tagged.
-
Contrary to popular belief, the Mac-Based Forwarding (MBF) feature on the Citrix ADC appliance is not designed to mitigate this type of issue. MBF is designed primarily for the DSR (Direct Server Return) mode of the Citrix ADC appliance, which is rarely used in most environments (it is designed to allow traffic to purposely bypass the Citrix ADC appliance on the return path from the back-end servers). MBF may hide VLAN issues in some instances, but it should not be relied upon to resolve this type of problem.
-
Every interface on Citrix ADC appliance requires a native VLAN (unlike Cisco, where native VLANs are optional), although the TagAll setting on an interface can be used so that no untagged traffic leaves the interface in question.
-
The native VLAN can be tagged if necessary for your network design (this is the TagAll option for the interface).
-
The VLAN for the subnet of your Citrix ADC appliance’s NSIP is a special case, called the NSVLAN. The concepts are the same, but the commands to configure it are different, and changes to the NSVLAN require a reboot of the Citrix ADC appliance to take effect. If you attempt to bind a VLAN to a SNIP that shares the same subnet as the NSIP, you get “Operation not permitted,” because you must use the NSVLAN commands instead. Also, on some firmware versions, you cannot set an NSVLAN if that VLAN number was already created with the add VLAN command. Simply remove the VLAN and then set the NSVLAN again.
-
High availability Heartbeats always use the Native VLAN of the respective interface (optionally tagged if the TagAll option is set on the interface).
-
There must be communication between at least one set of native VLANs on the two nodes of a high availability pair (this can be direct or through a router). The native VLANs are used for high availability heartbeats. If the Citrix ADC appliances cannot communicate between native VLANs on any interface, this leads to high availability failovers and possibly a split-brain situation where both Citrix ADC appliances think they are primary (leading to duplicate IP addresses, among other things).
-
The Citrix ADC appliance does not participate in spanning tree. As such, it is not possible to use spanning tree to provide for interface redundancy when using a Citrix ADC appliance. Instead, use a form of Link Aggregation (LACP or manual LAG) for this purpose.
Note: If you want to have link aggregation between multiple physical switches, you must have the switches configured as a virtual switch, using a feature such as Cisco’s Switch Stack.
-
The high availability synchronization and command propagation, by default, use the NSIP/NSVLAN. To separate these out to a different VLAN, you can use the SyncVLAN option of the set HA node command.
-
There is nothing built-in to the Citrix ADC appliance default configuration that denotes that a management interface (0/1 or 0/2) is restricted to management traffic only. This restriction must be enforced by the end user through VLAN configuration. The management interfaces are not designed to handle data traffic, so your network design must take this point into account. Management interfaces, contained on the Citrix ADC appliance motherboard, lack various offloading features such as CRC offload, larger packet buffers, and other optimizations, making them much less efficient in handling large amounts of traffic. To separate production data and management traffic, the NSIP must not be on the same subnet/VLAN as your data traffic.
-
If you want to use a management interface to carry management traffic, it is best practice that the default route be on a subnet other than the subnet of the NSIP (NSVLAN).
In many configurations, the default route is relied upon for workstation communication (in an internet scenario). If the default route is on the same subnet as the NSIP, then the Citrix ADC appliance can use the management interface to send and receive data traffic, which can overload the management interface.
-
On an SDX appliance, the SVM, XenServer, and all Citrix ADC instance NSIPs must be on the same VLAN and subnet. There is no backplane in the SDX appliance that allows for communication between the SVM, XenServer, and instances. If they are not on the same VLAN/subnet/interface, traffic between them must leave the physical hardware, be routed on your network, and return.
This configuration can lead to obvious connectivity issues between the instances and SVM and as such, is not recommended. A common symptom of this is a Yellow Instance State indicator in the SVM for the VPX instance in question, and the inability to use the SVM to reconfigure a VPX instance.
-
If some VLANs are bound to subnets and some are not, then during a high availability failover, GARP packets are not sent for any IP addresses on the subnets that are not bound to a VLAN. This configuration can cause dropped connections and connectivity issues during high availability failovers, because the Citrix ADC appliance cannot notify the network of the change in MAC ownership of the IP addresses on non-VMAC-configured Citrix ADC appliances.
A symptom of this is that, during or after a high availability failover, the ip_tot_floating_ip_err counter increments on the former primary Citrix ADC appliance for more than a few seconds, indicating that the network did not receive or process GARP packets and is continuing to transmit data to the new secondary Citrix ADC appliance.
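The practices above map to a small set of CLI commands. The following is a minimal configuration sketch; the interface numbers, VLAN IDs, and addresses are placeholders for illustration, and NSVLAN syntax can vary slightly by firmware version:

```shell
# Bind a tagged VLAN to a LA channel and associate a subnet (via a SNIP) with it
add vlan 100
bind vlan 100 -ifnum LA/1 -tagged
add ns ip 10.0.100.10 255.255.255.0 -type SNIP
bind vlan 100 -IPAddress 10.0.100.10 255.255.255.0

# The NSVLAN is configured differently, and the change requires a reboot
set ns config -nsvlan 300 -ifnum 0/1
save ns config

# Move high availability synchronization and propagation off the NSIP/NSVLAN
set ha node -syncvlan 100
```

Attempting `bind vlan` for a SNIP on the NSIP's subnet fails with "Operation not permitted," which is the cue to use `set ns config -nsvlan` instead.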