The enhancements and changes released in Build 49.16.
You can now change the default credential behavior by defining the loginschema so that the desired credentials (user name and password) are used for SSO. To use the first factor for SSO, configure the loginschema to store the first-factor credentials at specified indexes, and use attribute expressions in the traffic policies.
Previously, multiple sets of login credentials were required for nFactor authentication, and by default the credentials used for the final factor became the single sign-on (SSO) user name and password. For example, if the first factor was LDAP (Lightweight Directory Access Protocol) but the second factor was a one-time password (OTP) validated against a non-Active Directory source, the OTP became the default SSO credential. This behavior was complex and affected usability.
> set authentication loginSchema ls1 -SSOCredentials YES
Done
> set authentication loginSchema ls1 -SSOCredentials NO
Done
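As a sketch of the first-factor approach described above, the loginschema can store the first-factor credentials at chosen indexes, and a traffic action can reference them for SSO. The schema file, entity names, and index values below are illustrative, and the parameter names are assumptions based on this description:

```
> add authentication loginSchema ls_first -authenticationSchema "LoginSchema/SingleAuth.xml" -userCredentialIndex 1 -passwordCredentialIndex 2
> add vpn trafficAction tf_sso HTTP -SSO ON -userExpression "AAA.USER.ATTRIBUTE(1)" -passwdExpression "AAA.USER.ATTRIBUTE(2)"
> add vpn trafficPolicy tf_pol true tf_sso
```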
SNMP MIB Support for Cluster Nodes
In a cluster setup, you can now configure the SNMP MIB in any node by including the ownerNode parameter in the set snmp mib command. Without this parameter, the set snmp mib command applies only to the cluster coordinator node.
To display the MIB configuration for an individual node other than the cluster coordinator node, include the ownerNode parameter in the show snmp mib command.
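For example, the following commands set and then display the MIB configuration on node 2 (the node ID and contact value are illustrative):

```
> set snmp mib -contact "netadmin@example.com" -ownerNode 2
Done
> show snmp mib -ownerNode 2
```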
To know more about this feature, see http://docs.citrix.com/en-us/netscaler/11-1/clustering/cluster-managing/monitoring-cluster-setup-using-snmp-mib-with-snmp-link.html.
Support for Reverse TCP Monitors
The NetScaler appliance now supports reverse TCP monitors. A reverse monitor marks the service as DOWN if the probe criteria are satisfied and UP if they are not satisfied.
A direct TCP monitor marks the service as DOWN if it receives a RESET in response to the monitor probe. However, a reverse TCP monitor treats RESET as a successful response and marks the service as UP.
To configure a reverse TCP monitor by using the NetScaler command line
At the command prompt, type:
add lb monitor <monitor-name> tcp -reverse yes -destip <primary-service ip> -destport <primary-service port>
bind service <svc-name> -monitorname <monitor-name>
To configure a reverse TCP monitor by using the NetScaler GUI
1. Navigate to Traffic Management > Load Balancing > Monitors.
2. Create a TCP monitor and select Reverse.
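Putting the CLI steps above together, a concrete example looks like this (the monitor name, service name, and address are illustrative):

```
> add lb monitor rev_tcp_mon TCP -reverse YES -destip 10.102.1.10 -destport 80
> bind service web_svc -monitorName rev_tcp_mon
```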
Support for ping and traceroute commands
You can now direct ping and traceroute operations to any host by using the NITRO API through the NetScaler appliance.
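A minimal sketch of such a request follows; the exact NITRO resource path and payload field names are assumptions, and the credentials, address, and host are placeholders:

```
curl -s -u nsroot:password \
  -H "Content-Type: application/json" \
  -X POST "https://<netscaler-ip>/nitro/v1/config/ping" \
  -d '{"ping": {"hostName": "www.example.com"}}'
```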
Open Source packages are now available in NetScaler CPX
All the open source packages that are used in NetScaler CPX are available in the contrib/cpx/ folder.
The Unified Gateway wizard now provides an option to retrieve LDAP attributes when creating an LDAP Action. Also, you can choose an extracted LDAP attribute as an LDAP Login name, and you can evaluate LDAP bind credentials.
You can now perform an EPA scan by using OPSWAT. The OPSWAT scan verifies that the drive is enabled and encrypted, and also provides version information.
In the XenApp/XenDesktop configuration wizard, the StoreFront Settings Download option is now hidden if StoreFront is not deployed.
CCU packaging and pricing have changed in the following ways:
1. MaxAAA is automatically set to the maximum licensed number.
2. Licenses now use the following scheme:
a. Platinum: Unlimited (formerly 100)
b. Enterprise: 1000 (formerly 5)
c. Standard: 500 (formerly 5)
d. Any other license: 5
e. If additional CCU licenses are present on the system, those are added to the above values (for example, Standard is 500 plus any additional CCU licenses).
f. Disregard additional CCUs for the platinum case, since platinum is already unlimited.
You can now extract attributes from Access Control Server (ACS) and Terminal Access Controller Access-Control System (TACACS). The extracted attributes allow administrators to use the NetScaler group attribute to authorize usage.
The NetScaler appliance now allows you to choose an existing domain-server configuration, if available, instead of having to configure a new domain server when the appliance is updated.
Previously, if you configured a domain server, you would not be able to use the existing configuration when the NetScaler appliance was updated, because there was no provision to choose an existing configuration for the XA-XD Wizard.
You can now terminate specified RDP proxy connections.
kill rdpConnection [-userName <string>] [-all]
- userName: Terminates the RDP Proxy connections that belong to the specified user.
- all: Terminates all active RDP Proxy connections.
The global AAA parameter "set aaa param -maxAAAUser <value>" has been enhanced to automatically increase or decrease when concurrent user (CCU) licenses are added or removed. Previously, the MaxAAAUser count had to be adjusted manually after extra licenses were added. This value represents the maximum number of global AAA sessions that can exist. If you want to restrict the number of AAA sessions to a value lower than the licensed limit, you can set the maxAAAUser parameter on the gateway virtual server.
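For example, to cap AAA sessions on a specific gateway virtual server below the licensed limit (the virtual server name and limit are illustrative, and the exact parameter spelling is an assumption based on this note):

```
> set vpn vserver gw_vs1 -maxAAAUsers 200
```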
New license for NetScaler VPX on XenServer and KVM platforms
The following licenses are now available for NetScaler VPX appliances on a XenServer platform:
Also, a 100G license is now available for NetScaler VPX appliances on a KVM platform.
For more information about recommended interfaces and performance details, see the latest VPX datasheet.
The number of unique IPv6 addresses that you can add to a NetScaler virtual appliance configured with SR-IOV interfaces is limited to 30 on the following platforms:
* VMware ESX
Support for Sending Response Traffic Through an IP-IP tunnel
You can now configure a NetScaler appliance to send response traffic through an IP-IP tunnel instead of routing it back to the source. Previously, when the appliance received a request from another NetScaler or a third-party device through an IP-IP tunnel, it had to route the response traffic instead of sending it through the tunnel. You can now use policy based routes (PBRs) or enable MAC-Based Forwarding (MBF) to send the response through the tunnel.
In a PBR rule, specify the subnets at both end points whose traffic is to traverse the tunnel. Also set the next hop as the tunnel name. When response traffic matches the PBR rule, the NetScaler appliance sends the traffic through the tunnel.
Alternatively, you can enable MBF to meet this requirement, but the functionality is limited to traffic for which the NetScaler appliance stores session information (for example, traffic related to load balancing or RNAT configurations). The appliance uses the session information to send the response traffic through the tunnel.
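As a sketch of both options: enabling MBF is a single mode switch, and the PBR rule sets the tunnel name as the next hop as described above. The subnets, rule name, and tunnel name are illustrative, and the exact PBR flag spellings are assumptions:

```
> enable ns mode MBF
> add ns pbr pbr_tunnel ALLOW -srcIP 10.1.1.1-10.1.1.254 -destIP 192.168.1.1-192.168.1.254 -nextHop ipip_tun1
> apply ns pbrs
```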
Wildcard TOS Monitors
In a load balancing configuration in DSR mode using the TOS field, monitoring the services requires a TOS monitor to be created and bound to those services. A separate TOS monitor is required for each such configuration, because a TOS monitor requires the VIP address and the TOS ID to create an encoded value of the VIP address. The monitor creates probe packets in which the TOS field is set to the encoded value of the VIP address, and then sends the probe packets to the servers represented by the services of the load balancing configuration. With a large number of load balancing configurations, creating and managing a separate custom TOS monitor for each configuration is cumbersome. Now, you can instead create a single wildcard TOS monitor for all load balancing configurations that use the same protocol (for example, TCP or UDP).
A wildcard TOS monitor has the following mandatory settings:
-Type = <protocol>
-TOS = Yes
The following parameters can be set to a value or can be left blank:
-Destination IP
-Destination port
-TOS ID
A wildcard TOS monitor (with destination IP, Destination port, and TOS ID not set) bound to a DSR service automatically learns the TOS ID and the VIP address of the load balancing virtual server. The monitor creates probe packets with TOS field set to the encoded VIP address and then sends the probe packets to the server represented by the DSR service.
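Following the mandatory settings listed above, a wildcard TOS monitor can be created and bound in two commands. The monitor and service names are illustrative, and the flag spellings mirror the settings above:

```
> add lb monitor wildcard_tos_mon TCP -tos YES
> bind service dsr_svc -monitorName wildcard_tos_mon
```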
Support of Automatic ARP Resolution to Special MAC address
In a cluster deployment, when the client-side or server-side link to a node goes down, traffic is steered to this node through the peer nodes for processing. Previously, the steering of traffic was implemented on all nodes by configuring dynamic routing and adding static ARP entries pointing to the special MAC address of each node. If there are a large number of nodes in a cluster deployment, adding and managing static ARP entries with special MAC addresses on all the nodes is a cumbersome task. Now, nodes implicitly use special MAC addresses for steering packets. Therefore, static ARP entries pointing to special MAC addresses no longer have to be added to the cluster nodes.
To know more about this feature, see http://docs.citrix.com/en-us/netscaler/11-1/clustering/cluster-managing/Route_Monitoring_Dynamic_Routes_Cluster.html.
Monitoring Command Propagation Failures in a Cluster Deployment
In a cluster deployment of NetScaler appliances, you can use the new command "show prop status" for faster monitoring and troubleshooting of issues related to command-propagation failure on non-CCO nodes. This command displays up to 20 of the most recent command propagation failures on all non-CCO nodes. You can perform this operation from either the NetScaler command line or the NetScaler GUI, after accessing the cluster through the CLIP address or through the NSIP address of any node in the cluster deployment.
To know more about this feature, see http://docs.citrix.com/en-us/netscaler/11-1/clustering/cluster-managing/Monitoring-Command-Propagation-Failures-in-cluster-deployment.html.
Automatic TCP-Connection Reset for Inactive Nodes
Previously, a cluster node did not reset its existing TCP connections (to clients and servers) when its state became Inactive. As a result, the states of the client and server connections became undefined. Now, a node resets all its TCP connections before entering the Inactive state.
To know more about this feature, see http://docs.citrix.com/en-us/netscaler/11-1/clustering/cluster-overview/cluster-node-states.html
Support for SR-IOV interfaces on NetScaler VPX Appliance in XenServer
You can now configure a NetScaler VPX appliance deployed on XenServer to use SR-IOV network interfaces.
For performance information about SR-IOV interfaces on XenServer, see the latest VPX datasheet.
For information about how to configure SR-IOV interfaces on a NetScaler VPX instance in XenServer, see http://docs.citrix.com/en-us/netscaler/11-1/deploying-vpx/install-vpx-on-xenserver/configure-SR-IOV-on-xenserver.html.
Support for PCI Passthrough interfaces on NetScaler VPX Appliance in Linux-KVM
You can now configure a NetScaler VPX instance deployed on Linux-KVM to use PCI passthrough interfaces.
For performance information about PCI passthrough interfaces on KVM, see the latest VPX datasheet.
For information about how to configure PCI passthrough interfaces on a NetScaler VPX instance in Linux-KVM, see http://docs.citrix.com/en-us/netscaler/11-1/deploying-vpx/install-vpx-on-kvm/configure-PCI-passthrough-KVM.html.
Specifying a domain name for a logging server
When configuring an auditlog action, you can specify the domain name of a syslog or nslog server instead of its IP address. Then, if the server's IP address changes, you do not have to change it on the NetScaler appliance.
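For example, an auditlog action can now reference the logging server by name rather than by IP address (the action name and host name are illustrative):

```
> add audit syslogAction remote_syslog_act syslog.example.com -logLevel ALL
```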
Policy Infrastructure (PI) for Auditlog Framework
Audit log actions now support advanced policies and expressions. Advanced policy expressions are more powerful and flexible than classic expressions, which were previously the only type that the audit module supported. You can now bind advanced audit-log policies to the syslog and nslog global entities.
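A sketch of an advanced audit-log policy bound globally follows; the entity names and server address are hypothetical, and the exact global-binding syntax is an assumption:

```
> add audit syslogAction sys_act1 192.0.2.10 -logLevel ALL
> add audit syslogPolicy sys_pol1 true sys_act1
> bind audit syslogGlobal -policyName sys_pol1 -priority 100
```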
To know more about this feature, see http://docs.citrix.com/en-us/netscaler/11-1/system/audit-logging/configuring-audit-logging.html.
TCP Burst Rate Control
A NetScaler appliance now uses a technique called "TCP Burst Rate Control" for burst management in a high speed mobile network. This technique evenly spaces the flow of data into the network, avoiding bursts by waiting for a period of time before sending the next group of packets. By using this technique, you can achieve better throughput and lower packet drop rates. This feature is available as a TCP option in the TCP profile bound to a virtual server on a NetScaler appliance.
To know more about this feature, see http://docs.citrix.com/en-us/netscaler/11-1/system/TCP_Congestion_Control_and_Optimization_General.html.
On a NetScaler appliance, if the ring receive buffer is full, the appliance starts to discard data packets at the Network Interface Card (NIC). The resulting packet drops can lead to a probe failure.