TCP optimization

The Citrix ADC appliance uses the following TCP optimization techniques and congestion control strategies (or algorithms) to avoid network congestion during data transmission.

Congestion Control Strategies

The Transmission Control Protocol (TCP) has long been used to establish and manage Internet connections, handle transmission errors, and smoothly connect web applications with client devices. But network traffic has become more difficult to control, because packet loss does not depend only on network congestion, and congestion does not necessarily cause packet loss. Therefore, to measure congestion, a TCP algorithm should focus on both packet loss and bandwidth.

Proportional Rate Recovery (PRR) algorithm

TCP Fast Recovery mechanisms reduce web latency caused by packet losses. The new Proportional Rate Recovery (PRR) algorithm is a fast recovery algorithm that paces the amount of data sent during loss recovery. It is patterned after Rate-Halving, but uses the fraction that is appropriate for the target window chosen by the congestion control algorithm. It minimizes window adjustments, so the actual window size at the end of recovery is close to the slow-start threshold (ssthresh).
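To make the pacing rule concrete, the following minimal Python sketch follows the sndcnt computation from RFC 6937. The variable names (prr_delivered, prr_out, pipe, recover_fs) come from the RFC; the sketch works in bytes, simplifies several details, and is illustrative only, not Citrix ADC code.

    import math

    def prr_sndcnt(prr_delivered, prr_out, delivered_now, pipe,
                   ssthresh, recover_fs, mss=1460):
        """Bytes PRR allows the sender to transmit on this ACK (RFC 6937).

        prr_delivered: bytes delivered to the receiver since recovery began
        prr_out:       bytes sent by the sender since recovery began
        delivered_now: bytes newly delivered by this ACK
        pipe:          current estimate of bytes in flight
        recover_fs:    flight size when recovery started
        """
        if pipe > ssthresh:
            # Proportional part: pace sending at a ssthresh/recover_fs
            # fraction of the delivery rate, so the window glides down
            # to ssthresh instead of collapsing and then bursting.
            sndcnt = math.ceil(prr_delivered * ssthresh / recover_fs) - prr_out
        else:
            # Slow-start reduction bound: grow back toward ssthresh,
            # but no faster than slow start would.
            limit = max(prr_delivered - prr_out, delivered_now) + mss
            sndcnt = min(ssthresh - pipe, limit)
        return max(sndcnt, 0)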

TCP Fast Open (TFO)

TCP Fast Open (TFO) is a TCP mechanism that enables speedy and safe data exchange between a client and a server during TCP’s initial handshake. This feature is available as a TCP option in the TCP profile bound to a virtual server of a Citrix ADC appliance. TFO uses a TCP Fast Open Cookie (a security cookie) that the Citrix ADC appliance generates to validate and authenticate the client initiating a TFO connection to the virtual server. By using the TFO mechanism, you can reduce an application’s network latency by the time required for one full round trip, which significantly reduces the delay experienced in short TCP transfers.

How TFO works

When a client tries to establish a TFO connection, it includes a TCP Fast Open Cookie with the initial SYN segment to authenticate itself. If authentication is successful, the virtual server on the Citrix ADC appliance can include data in the SYN-ACK segment even though it has not received the final ACK segment of the three-way handshake. This saves up to one full round-trip compared to a normal TCP connection, which requires a three-way handshake before any data can be exchanged.

A client and a backend server perform the following steps to establish a TFO connection and exchange data securely during the initial TCP handshake.

  1. If the client does not have a TCP Fast Open Cookie to authenticate itself, it sends a Fast Open Cookie request in the SYN packet to the virtual server on the Citrix ADC appliance.
  2. If the TFO option is enabled in the TCP profile bound to the virtual server, the appliance generates a cookie (by encrypting the client’s IP address under a secret key) and responds to the client with a SYN-ACK that includes the generated Fast Open Cookie in a TCP option field.
  3. The client caches the cookie for future TFO connections to the same virtual server on the appliance.
  4. When the client tries to establish a TFO connection to the same virtual server, it sends a SYN that includes the cached Fast Open Cookie (as a TCP option) along with the HTTP data.
  5. The Citrix ADC appliance validates the cookie, and if authentication succeeds, the virtual server accepts the data in the SYN packet and acknowledges the event with a SYN-ACK, the TFO cookie, and the HTTP response.

Note:

If the client authentication fails, the server drops the data and acknowledges the event only with a SYN indicating a session timeout.

  1. On the server side, if the TFO option is enabled in the TCP profile bound to a service, the Citrix ADC appliance checks whether a TCP Fast Open Cookie is present for the service to which it is trying to connect.
  2. If the TCP Fast Open Cookie is not present, the appliance sends a cookie request in the SYN packet.
  3. When the backend server sends the Cookie, the appliance stores the cookie in the server information cache.
  4. If the appliance already has a cookie for the given destination IP pair, it replaces the old cookie with the new one.
  5. If the cookie is available in the server information cache when the virtual server tries to reconnect to the same backend server by using the same SNIP address, the appliance combines the data in the SYN packet with the cookie and sends it to the backend server.
  6. The backend server acknowledges the event with both data and a SYN.

Note: If the server acknowledges the event with only a SYN segment, the Citrix ADC appliance immediately resends the data packet after removing the SYN segment and the TCP options from the original packet.
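For illustration only, the following Python sketch shows what a TFO exchange looks like at the socket level on a Linux host; both sides are shown together for brevity. This is not Citrix ADC code: it assumes a Linux kernel with TFO enabled (the net.ipv4.tcp_fastopen sysctl) and uses the standard TCP_FASTOPEN and MSG_FASTOPEN socket options.

    import socket

    # Server side: enable TFO on the listening socket. The option value
    # is the maximum queue length for pending TFO requests.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
    srv.bind(("0.0.0.0", 8080))
    srv.listen(5)

    # Client side: MSG_FASTOPEN combines connect() with the first send().
    # On the first connection the kernel performs a normal handshake and
    # caches the Fast Open cookie; later connections to the same server
    # carry this payload in the SYN segment itself.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.sendto(b"GET / HTTP/1.1\r\nHost: example.test\r\n\r\n",
               socket.MSG_FASTOPEN, ("192.0.2.10", 8080))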

Configuring TCP fast open

To use the TCP Fast Open (TFO) feature, enable the TCP Fast Open option in the relevant TCP profile and set the TFO Cookie Timeout parameter to a value that suits the security requirement for that profile.

To enable or disable TFO by using the command line

At the command prompt, type one of the following commands to enable or disable TFO in a new or existing profile.

Note: The default value is DISABLED.

    add tcpprofile <TCP Profile Name> -tcpFastOpen ENABLED | DISABLED
    set tcpprofile <TCP Profile Name> -tcpFastOpen ENABLED | DISABLED
    unset tcpprofile <TCP Profile Name> -tcpFastOpen
    Examples:
    add tcpprofile Profile1 -tcpFastOpen ENABLED
    set tcpprofile Profile1 -tcpFastOpen ENABLED
    unset tcpprofile Profile1 -tcpFastOpen
<!--NeedCopy-->

To set the TFO cookie timeout value by using the command line interface

At the command prompt, type:

    set tcpparam -tcpFastOpenCookieTimeout <Timeout Value>
    Example:
    set tcpparam -tcpFastOpenCookieTimeout 30
<!--NeedCopy-->

To configure TCP Fast Open by using the GUI

  1. Navigate to Configuration > System > Profiles > and then click Edit to modify a TCP profile.
  2. On the Configure TCP Profile page, select the TCP Fast Open checkbox.
  3. Click OK and then Done.

To set the TCP Fast Open cookie timeout value, navigate to Configuration > System > Settings > Change TCP Parameters, and then set the value on the Configure TCP Parameters page.

TCP Hystart

A new TCP profile parameter, hystart, enables the Hystart algorithm, a slow-start algorithm that dynamically determines a safe point at which to terminate slow start (ssthresh). It enables a transition to congestion avoidance without heavy packet losses. This new parameter is disabled by default.

If congestion is detected, Hystart enters a congestion avoidance phase. Enabling it improves throughput in high-speed networks with high packet loss, because the algorithm keeps a connection close to the maximum bandwidth while processing transactions.
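The following Python sketch illustrates one of Hystart's two heuristics, the RTT-delay check used by CUBIC-style implementations. The threshold values are the commonly used ones, and the function name and clamping bounds are illustrative assumptions, not the appliance's implementation.

    def hystart_delay_check(cwnd, ssthresh, base_rtt_ms, round_min_rtt_ms,
                            low_window=16):
        """Return the new ssthresh after Hystart's RTT-delay heuristic.

        During slow start, if the smallest RTT observed in the current
        round exceeds the baseline RTT by a clamped threshold, queues are
        probably starting to build: exit slow start by setting
        ssthresh = cwnd, before heavy packet loss occurs.
        """
        if cwnd < low_window:
            return ssthresh                      # too early to judge
        # Threshold scaled from the base RTT and clamped to [4 ms, 16 ms].
        eta = min(max(base_rtt_ms / 8.0, 4.0), 16.0)
        if round_min_rtt_ms >= base_rtt_ms + eta:
            return cwnd                          # safe exit point found
        return ssthresh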

Configuring TCP Hystart

To use the Hystart feature, enable the Cubic Hystart option in the relevant TCP profile.

To configure Hystart by using the command line interface (CLI)

At the command prompt, type one of the following commands to enable or disable Hystart in a new or existing TCP profile.

add tcpprofile <profileName> -hystart ENABLED
set tcpprofile <profileName> -hystart ENABLED
unset tcpprofile <profileName> -hystart
<!--NeedCopy-->

Examples:

    add tcpprofile profile1 -hystart ENABLED
    set tcpprofile profile1 -hystart ENABLED
    unset tcpprofile profile1 -hystart
<!--NeedCopy-->

To configure Hystart support by using the GUI

  1. Navigate to Configuration > System > Profiles > and click Edit to modify a TCP profile.
  2. On the Configure TCP Profile page, select the Cubic Hystart check box.
  3. Click OK and then Done.

TCP burst rate control

TCP control mechanisms can lead to bursty traffic flows on high-speed mobile networks, with a negative impact on overall network efficiency. Because of mobile network conditions such as congestion or Layer 2 retransmission of data, TCP acknowledgements arrive clumped at the sender, triggering a burst of transmissions. Such a group of consecutive packets sent with short inter-packet gaps is called a TCP packet burst. To overcome traffic bursts, the Citrix ADC appliance uses a TCP Burst Rate Control technique. This technique evenly spaces data into the network for an entire round-trip time, so that the data is not sent in a burst. By using this burst rate control technique, you can achieve better throughput and lower packet drop rates.

How TCP burst rate control works

In a Citrix ADC appliance, this technique evenly spreads the transmission of packets across the entire duration of the round-trip time (RTT). It does so by using a TCP stack and network packet scheduler that identify the various network conditions and pace the output of packets for ongoing TCP sessions, reducing bursts.

At the sender, instead of transmitting packets immediately upon receipt of an acknowledgment, the sender delays transmission to spread the packets out at the rate defined by the scheduler (Dynamic configuration) or by the TCP profile (Fixed configuration), as the sketch below illustrates.
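The following Python sketch shows the pacing idea in its simplest form: instead of releasing every eligible packet at once, each packet is given a departure time so that one window of data is spread over one RTT. The function and its simple schedule are illustrative assumptions, not the appliance's packet scheduler.

    def pace_departures(packet_sizes, cwnd_bytes, rtt_s, now_s=0.0):
        """Spread one congestion window evenly across one RTT.

        Returns (departure_time, size) pairs; the inter-packet gap is
        size / rate, where rate = cwnd / RTT.
        """
        rate_bps = cwnd_bytes / rtt_s            # bytes per second
        t, schedule = now_s, []
        for size in packet_sizes:                # sizes in bytes
            schedule.append((round(t, 6), size))
            t += size / rate_bps                 # gap proportional to size
        return schedule

    # Example: ten 1460-byte segments, a 64 KB window, and a 50 ms RTT ->
    # packets leave about 1.1 ms apart instead of back to back.
    print(pace_departures([1460] * 10, 64 * 1024, 0.05))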

Configuring TCP burst rate control

To use TCP burst rate control, enable the TCP Burst Rate Control option in the relevant TCP profile and set the burst rate control parameters.

To set TCP burst rate control by using the command line

At the command prompt, type one of the following commands to configure TCP Burst Rate Control in a new or existing profile.

Note: The default value is DISABLED.

add tcpprofile <TCP Profile Name> -burstRateControl Disabled | Dynamic | Fixed

set tcpprofile <TCP Profile Name> -burstRateControl Disabled | Dynamic | Fixed

unset tcpprofile <TCP Profile Name> -burstRateControl
<!--NeedCopy-->

where,

Disabled – If burst rate control is disabled, a Citrix ADC appliance does not perform burst management other than the maxBurst setting.

Fixed – If the burst rate control is Fixed, the appliance uses the TCP connection payload send rate specified in the TCP profile.

Dynamic – If the burst rate control is Dynamic, the connection is regulated based on various network conditions to reduce TCP bursts. This mode works only when the TCP connection is in ENDPOINT mode. When Dynamic burst rate control is enabled, the maxBurst parameter of the TCP profile is not in effect.

add tcpProfile  profile1 -burstRateControl Disabled

set tcpProfile profile1 -burstRateControl Dynamic

unset tcpProfile profile1 -burstRateControl
<!--NeedCopy-->

To set TCP Burst Rate Control parameters by using the command line interface

At the command prompt, type:

    set ns tcpprofile nstcp_default_profile -burstRateControl <type of burst rate control> -tcprate <TCP rate> -rateqmax <maximum bytes in queue>

    T1300-10-2> show ns tcpprofile nstcp_default_profile
            Name: nstcp_default_profile
            Window Scaling status:  ENABLED
            Window Scaling factor: 8
            SACK status: ENABLED
            MSS: 1460
            MaxBurst setting: 30 MSS
            Initial cwnd setting: 16 MSS
            TCP Delayed-ACK Timer: 100 millisec
            Nagle's Algorithm: DISABLED
            Maximum out-of-order packets to queue: 15000
            Immediate ACK on PUSH packet: ENABLED
            Maximum packets per MSS: 0
            Maximum packets per retransmission: 1
            TCP minimum RTO in millisec: 1000
            TCP Slow start increment: 1
            TCP Buffer Size: 8000000 bytes
            TCP Send Buffer Size: 8000000 bytes
            TCP Syncookie: ENABLED
            Update Last activity on KA Probes: ENABLED
            TCP flavor: BIC
            TCP Dynamic Receive Buffering: DISABLED
            Keep-alive probes: ENABLED
            Connection idle time before starting keep-alive probes: 900 seconds
            Keep-alive probe interval: 75 seconds
            Maximum keep-alive probes to be missed before dropping connection: 3
            Establishing Client Connection: AUTOMATIC
            TCP Segmentation Offload: AUTOMATIC
            TCP Timestamp Option: DISABLED
            RST window attenuation (spoof protection): ENABLED
            Accept RST with last acknowledged sequence number: ENABLED
            SYN spoof protection: ENABLED
            TCP Explicit Congestion Notification: DISABLED
            Multipath TCP: DISABLED
            Multipath TCP drop data on pre-established subflow: DISABLED
            Multipath TCP fastopen: DISABLED
            Multipath TCP session timeout: 0 seconds
            DSACK: ENABLED
            ACK Aggregation: DISABLED
            FRTO: ENABLED
            TCP Max CWND : 4000000 bytes
            FACK: ENABLED
            TCP Optimization mode: ENDPOINT
            TCP Fastopen: DISABLED
            HYSTART: DISABLED
            TCP dupack threshold: 3
            Burst Rate Control: Dynamic
            TCP Rate: 0
            TCP Rate Maximum Queue: 0
<!--NeedCopy-->

To configure TCP Burst Rate Control by using the GUI

  1. Navigate to Configuration > System > Profiles > and then click Edit to modify a TCP profile.
  2. On the Configure TCP Profile page, select the TCP Burst Control option from the drop-down list:
    1. BurstRateCntrl
    2. CreditBytePrms
    3. RateBytePerms
    4. RateSchedulerQ
  3. Click OK and then Done.

Protection against wrapped sequence (PAWS) algorithm

If you enable the TCP timestamp option in the default TCP profile, the Citrix ADC appliance uses the Protection Against Wrapped Sequence (PAWS) algorithm to identify and reject old packets whose sequence numbers are within the current TCP connection’s receive window because the sequence has “wrapped” (reached its maximum value and restarted from 0).

If network congestion delays a non-SYN data packet and you open a new connection before the packet arrives, sequence-number wrapping might cause the new connection to accept the packet as valid, leading to data corruption. But if the TCP timestamp option is enabled, the packet is discarded.

By default, the TCP timestamp option is disabled. If you enable it, the appliance compares the TCP timestamp (SEG.TSval) in a packet’s header with the recent timestamp (Ts.recent) value. If SEG.TSval is equal to or greater than Ts.recent, the packet is processed. Otherwise, the appliance drops the packet and sends a corrective acknowledgement.

How PAWS works

The PAWS algorithm processes all the incoming TCP packets of a synchronized connection as follows (a simplified sketch follows the list):

  1. If SEG.TSval < Ts.recent: The incoming packet is not acceptable. PAWS sends an acknowledgement (as specified in RFC-793) and drops the packet. Note: Sending an ACK segment is necessary in order to retain TCP’s mechanisms for detecting and recovering from half-open connections.
  2. If packet is outside the window: PAWS rejects the packet, as in normal TCP processing.
  3. If SEG.TSval > Ts.recent: PAWS accepts the packet and processes it.
  4. If SEG.SEQ <= Last.ACK.sent (the arriving segment is in sequence up to the last ACK sent): PAWS copies the SEG.TSval value to Ts.recent.
  5. If packet is in sequence: PAWS accepts the packet.
  6. If packet is not in sequence: The packet is treated as a normal in-window, out-of-sequence TCP segment. For example, it might be queued for later delivery.
  7. If Ts.recent value is idle for more than 24 days: The validity of Ts.recent is checked if the PAWS timestamp check fails. If the Ts.recent value is found to be invalid, the segment is accepted and the PAWS rule updates Ts.recent with the TSval value from the new segment.
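The following Python sketch condenses these steps into a single acceptance test, loosely after RFC 7323. The wraparound-aware comparison and the 24-day staleness escape are the essential points; the parameter names are illustrative, and this is not the appliance's code.

    def paws_check(seg_tsval, ts_recent, seg_seq, last_ack_sent,
                   ts_recent_age_days):
        """Simplified PAWS test. Returns (accept, updated_ts_recent)."""
        def ts_older(a, b):
            # 32-bit wraparound-aware "a is older than b"
            return a != b and ((b - a) & 0xFFFFFFFF) < 0x80000000

        if ts_older(seg_tsval, ts_recent) and ts_recent_age_days <= 24:
            # Stale timestamp while ts_recent is still trustworthy:
            # drop the segment and send a corrective (duplicate) ACK.
            return False, ts_recent
        if seg_seq <= last_ack_sent:
            # In-sequence segment: remember its timestamp for future checks.
            ts_recent = seg_tsval
        return True, ts_recent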

To enable or disable TCP timestamp by using the command line interface

At the command prompt, type:

set ns tcpprofile nstcp_default_profile -timeStamp (ENABLED | DISABLED)
<!--NeedCopy-->

To enable or disable TCP timestamp by using the GUI

Navigate to System > Profile > TCP Profile, select the default TCP profile, click Edit, and select or clear the TCP timestamp check box.

Optimization Techniques

TCP uses the following optimization techniques and methods for optimized flow control.

Policy based TCP Profile Selection

Network traffic today is more diverse and bandwidth-intensive than ever before. With the increased traffic, the effect that Quality of Service (QoS) has on TCP performance is significant. To enhance QoS, you can now configure AppQoE policies with different TCP profiles for different classes of network traffic. The AppQoE policy classifies a virtual server’s traffic to associate a TCP profile optimized for a particular type of traffic, such as 3G, 4G, LAN, or WAN.

To use this feature, create a policy action for each TCP profile, associate an action with AppQoE policies, and bind the policies to the load balancing virtual servers.

For information about using subscriber attributes to perform TCP optimization, see Policy-based TCP Profile.

Configuring policy based TCP profile selection

Configuring policy based TCP profile selection consists of the following tasks:

  • Enabling AppQoE. Before configuring the TCP profile feature, you must enable the AppQoE feature.
  • Adding AppQoE Action. After enabling the AppQoE feature, configure an AppQoE action with a TCP profile.
  • Configuring AppQoE based TCP Profile Selection. To implement TCP profile selection for different classes of traffic, you must configure AppQoE policies with which your Citrix ADC appliance can distinguish the connections and bind the correct AppQoE action to each policy.
  • Binding AppQoE Policy to Virtual Server. Once you have configured the AppQoE policies, you must bind them to one or more load balancing, content switching, or cache redirection virtual servers.

Configuring using the command line interface

To enable AppQoE by using the command line interface

At the command prompt, type the following commands to enable the feature and verify that it is enabled:

  • enable ns feature appqoe
  • show ns feature

To bind a TCP profile while creating an AppQoE action using the command line interface

At the command prompt, type the following AppQoE action command with the tcpprofiletobind option.

add appqoe action <name> [-priority <priority>] [-respondWith ( ACS | NS ) [<CustomFile>] [-altContentSvcName <string>] [-altContentPath <string>] [-maxConn <positive_integer>] [-delay <usecs>]] [-polqDepth <positive_integer>] [-priqDepth <positive_integer>] [-dosTrigExpression <expression>] [-dosAction ( SimpleResponse | HICResponse )] [-tcpprofiletobind <string>]

show appqoe action

To configure an AppQoE policy by using the command line interface

At the command prompt, type:

add appqoe policy <name> -rule <expression> -action <string>

To bind an AppQoE policy to load balancing, cache redirection or content switching virtual servers by using the command line interface

At the command prompt, type:

bind lb vserver <name> -policyName <appqoe_policy_name> -priority <priority>

bind cs vserver <name> -policyName <appqoe_policy_name> -priority <priority>

bind cr vserver <name> -policyName <appqoe_policy_name> -priority <priority>

Example

    add ns tcpProfile tcp1 -WS ENABLED -SACK ENABLED -WSVal 8 -nagle ENABLED -maxBurst 30 -initialCwnd 16 -oooQSize 15000 -minRTO 500 -slowStartIncr 1 -bufferSize 4194304 -flavor BIC -KA ENABLED -sendBuffsize 4194304 -rstWindowAttenuate ENABLED -spoofSynDrop ENABLED -dsack enabled -frto ENABLED -maxcwnd 4000000 -fack ENABLED -tcpmode ENDPOINT
    add appqoe action appact1 -priority HIGH -tcpprofile tcp1
    add appqoe policy apppol1 -rule "client.ip.src.eq(10.102.71.31)" -action appact1
    bind lb vserver lb2 -policyName apppol1 -priority 1 -gotoPriorityExpression END -type REQUEST
    bind cs vserver cs1 -policyName apppol1 -priority 1 -gotoPriorityExpression END -type REQUEST
<!--NeedCopy-->

Configuring policy based TCP profiling using the GUI

To enable AppQoE by using the GUI

  1. Navigate to System > Settings.
  2. In the details pane, click Configure Advanced Features.
  3. In the Configure Advanced Features dialog box, select the AppQoE check box.
  4. Click OK.

To configure AppQoE policy by using the GUI

  1. Navigate to AppExpert > AppQoE > Actions.
  2. In the details pane, do one of the following:
    1. To create an action, click Add.
    2. To modify an existing action, select the action, and then click Edit.
  3. In the Create AppQoE Action or the Configure AppQoE Action screen, type or select values for the parameters. The contents of the dialog box correspond to the parameters described in “Parameters for configuring the AppQoE Action” (an asterisk indicates a required parameter):
    1. Name—name
    2. Action type—respondWith
    3. Priority—priority
    4. Policy Queue Depth—polqDepth
    5. Queue Depth—priqDepth
    6. DOS Action—dosAction
  4. Click Create.

To bind AppQoE policy by using the GUI

  1. Navigate to Traffic Management > Load Balancing > Virtual Servers, select a server and then click Edit.
  2. In the Policies section, click (+) to bind an AppQoE policy.
  3. In the Policies slider, do the following:
    1. Select AppQoE as the policy type from the drop-down list.
    2. Select a traffic type from the drop-down list.
  4. In the Policy Binding section, do one of the following:
    1. Click New to create a new AppQoE policy.
    2. Click Existing Policy to select an AppQoE policy from the drop-down list.
  5. Set the binding priority, and then click Bind to bind the policy to the virtual server.
  6. Click Done.

SACK block generation

TCP performance slows down when multiple packets are lost in one window of data. In such a scenario, a Selective Acknowledgement (SACK) mechanism combined with a selective repeat retransmission policy overcomes this limitation. For every incoming out-of-order packet, the appliance generates a SACK block.

If the out-of-order packet fits in the reassembly queue block, the packet information is inserted in the block, and the complete block information is set as SACK-0. If the out-of-order packet does not fit into the reassembly block, it is sent as SACK-0 and the earlier SACK blocks are repeated. If an out-of-order packet is a duplicate and its packet information is set as SACK-0, the block is D-SACKed.

Note: A packet is considered a D-SACK if it is an acknowledged packet, or an out-of-order packet that has already been received.
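As an illustration of the block-selection rules above, here is a minimal Python sketch of a SACK option builder in the spirit of RFC 2018 and RFC 2883 (D-SACK). The data structures and ordering policy are simplified assumptions, not the appliance's implementation.

    def build_sack_blocks(rcv_nxt, ooo_blocks, dup_block=None, max_blocks=3):
        """Choose SACK blocks for an outgoing ACK.

        ooo_blocks: (start, end) byte ranges queued out of order, newest
        first. The newest block leads ("SACK-0"); earlier blocks are
        repeated while option space remains. A D-SACK block reporting
        duplicate data, if any, goes first of all.
        """
        sack = []
        if dup_block is not None:
            sack.append(dup_block)          # D-SACK: duplicate already received
        for start, end in ooo_blocks:
            if end <= rcv_nxt:
                continue                    # already delivered in order
            if (start, end) not in sack:
                sack.append((start, end))
            if len(sack) == max_blocks:
                break                       # TCP option space is limited
        return sack

    # Bytes 0-2999 delivered in order; 3000-3999 and 5000-5999 out of order.
    print(build_sack_blocks(3000, [(5000, 6000), (3000, 4000)]))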

Client reneging

A Citrix ADC appliance can handle client reneging during SACK-based recovery.

Memory checks for marking end_point on PCB do not consider total available memory

In a Citrix ADC appliance, the memory usage threshold is set to 75 percent instead of being based on the total available memory, which causes new TCP connections to bypass TCP optimization.

Unnecessary retransmissions due to missing SACK blocks

In non-endpoint mode, when the appliance sends DUPACKs, missing SACK blocks for a few out-of-order packets trigger additional retransmissions from the server.

SNMP for the number of connections that bypassed optimization because of overload

The following SNMP IDs have been added to a Citrix ADC appliance to track the number of connections that bypassed TCP optimization because of overload.

  1. 1.3.6.1.4.1.5951.4.1.1.46.131 (tcpOptimizationEnabled). Tracks the total number of connections enabled with TCP optimization.
  2. 1.3.6.1.4.1.5951.4.1.1.46.132 (tcpOptimizationBypassed). Tracks the total number of connections that bypassed TCP optimization.

Dynamic receive buffer

To maximize TCP performance, a Citrix ADC appliance can now dynamically adjust the TCP receive buffer size.
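The sizing rule behind dynamic receive buffering is the bandwidth-delay product: the buffer must hold at least bandwidth x RTT bytes to keep the pipe full. The following Python sketch shows that calculation applied to an ordinary socket; the function and its bounds are illustrative assumptions, not the appliance's algorithm.

    import socket

    def tune_receive_buffer(sock, bandwidth_bps, rtt_s, cap=8_000_000):
        """Grow the receive buffer toward the measured bandwidth-delay
        product (BDP) instead of reserving the maximum for every connection.

        Example: a 100 Mbit/s path with a 60 ms RTT needs about
        100e6 / 8 * 0.06 = 750,000 bytes.
        """
        bdp_bytes = int(bandwidth_bps / 8 * rtt_s)   # bits/s -> bytes over one RTT
        new_size = min(max(bdp_bytes, 65536), cap)   # clamp to sane bounds
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, new_size)
        return new_size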