Protocol extensions - use cases

Protocol extensions can be used for the following use cases.

  • Message based load balancing (MBLB)
  • Streaming
  • Token based load balancing
  • Load balancing persistence
  • TCP connection based load balancing
  • SSL
  • Server connection multiplexing

Message based load balancing

Protocol extensions support Message Based Load Balancing (MBLB), which can parse any protocol on a Citrix ADC appliance and load balance the protocol messages arriving on one client connection, that is, distribute the messages over multiple server connections. MBLB is achieved by user code that parses the client TCP data stream.

The TCP data stream is passed to the on_data callbacks for the client and server behaviors. The TCP data stream is available to the extension functions through a Lua string-like interface. You can use an API similar to the Lua string API to parse the TCP data stream.

Useful APIs include:

  • data:len()
  • data:find()
  • data:byte()
  • data:sub()
  • data:split()
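
For example, a client on_data handler for a hypothetical newline-delimited protocol might locate the delimiter and extract one complete message. This is a minimal sketch for illustration only; the ns.send call is explained below.

    function client.on_data(ctxt, payload)
        local data = payload.data
        -- look for the newline that terminates a message
        local pos = data:find("\n")
        if pos then
            -- extract one complete message, delimiter included
            local message = data:sub(1, pos)
            ns.send(ctxt.output, "EOM", {data = message})
        end
    end -- client.on_data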

Once the TCP data stream has been parsed into a protocol message, the user code achieves load balancing by sending the protocol message to the next context, available as ctxt.output from the context passed to the on_data callback for the client.

The ns.send() API is used to send messages to other processing modules. In addition to the destination context, the send API takes the event name and an optional payload as arguments. There is a one-to-one correspondence between event names and the callback function names for the behaviors: the callback for an event is named on_<event_name>, in lowercase.

For example, the TCP client and server on_data callbacks are essentially user-defined handlers for events named “DATA.” To send a whole protocol message in one send call, the EOM event is used. EOM, which stands for end of message, signifies the end of a protocol message to the LB context downstream, so a new load balancing decision is made for data that follows this message.

The extension code might sometimes not receive the whole protocol message in the on_data event. In such a case, the data can be held by using the ctxt:hold() API. The hold API is available for both the TCP client and server callback contexts. When “hold with data” is called, the data is stored in the context. When more data is subsequently received in the same context, the newly received data is appended to the previously stored data and the on_data callback function is called again with the combined data.
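
For example, the following sketch holds data until a complete message is available. The two-byte length prefix is a hypothetical framing convention used only for illustration:

    function client.on_data(ctxt, payload)
        local data = payload.data
        -- hypothetical framing: each message starts with a 2-byte length prefix
        if data:len() < 2 then
            ctxt:hold(data) -- too short to read the length; wait for more data
            return
        end
        local msg_len = data:byte(1) * 256 + data:byte(2)
        if data:len() < msg_len + 2 then
            ctxt:hold(data) -- incomplete; on_data is called again with combined data
            return
        end
        -- a complete message is available; forward it for load balancing
        local message = data:sub(1, msg_len + 2)
        ns.send(ctxt.output, "EOM", {data = message})
    end -- client.on_data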

Note: The load balancing method used depends on the configuration of the load balancing virtual server corresponding to the load balancing context.

The following code snippet shows the use of the send API to send the parsed protocol message.

Example:

    function client.on_data(ctxt, payload)
        --
        -- code to parse payload.data into protocol message comes here
        --
        -- sending the message to lb
        ns.send(ctxt.output, "EOM", {data = message})
    end -- client.on_data

    function server.on_data(ctxt, payload)
        --
        -- code to parse payload.data into protocol message comes here
        --
        -- sending the message to client
        ns.send(ctxt.output, "EOM", {data = message})
    end -- server.on_data

Streaming

In some scenarios, holding the TCP data stream until the whole protocol message is collected might not be necessary. In fact, it is not advised unless required. Holding the data increases memory usage on the Citrix ADC appliance and can make the appliance susceptible to DDoS attacks, because an attacker can exhaust the appliance's memory with incomplete protocol messages on many connections.

Users can achieve streaming of TCP data in the extension callback handlers by using the send API. Instead of holding the data until the whole message is collected, data can be sent in chunks. Sending data to ctxt.output by using the DATA event sends a partial protocol message. It can be followed by more DATA events. An EOM event must be sent to mark the end of the protocol message. The load balancing context downstream makes the load balancing decision on the first data received. A new load balancing decision is made after the receipt of the EOM message.

To stream protocol message data, send multiple DATA events followed by an EOM event. The contiguous DATA events and the following EOM event are sent to the same server connection, selected by the load balancing decision for the first DATA event in the sequence.

For a send to the client context, EOM and DATA events are effectively the same, because the client context downstream does no special handling for EOM events.
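
For example, the following sketch streams chunks downstream as they arrive; message_complete() is a hypothetical helper that detects the end of a protocol message:

    function client.on_data(ctxt, payload)
        local data = payload.data
        if message_complete(data) then
            -- last chunk: EOM ends the message, so the next message
            -- gets a fresh load balancing decision
            ns.send(ctxt.output, "EOM", {data = data})
        else
            -- partial message: stream the chunk instead of holding it
            ns.send(ctxt.output, "DATA", {data = data})
        end
    end -- client.on_data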

Token based load balancing

For natively supported protocols, a Citrix ADC appliance supports a token based load balancing method that uses PI expressions to create the token. For extensions, the protocol is not known in advance, so PI expressions cannot be used. For token based load balancing, you have to set the default load balancing virtual server to use the USER_TOKEN load balancing method, and provide the token value from the extension code by calling the send API with a user_token field. If a token value is sent from the send API and the USER_TOKEN load balancing method is configured on the default load balancing virtual server, the load balancing decision is made by calculating a hash of the token value. The maximum length of the token value is 64 bytes.

add lb vserver v_mqttlb USER_TCP -lbMethod USER_TOKEN

The following code snippet uses the send API to send an LB token value.

Example:

        -- send the message to lb
        -- user_token is set to do LB based on clientID
        ns.send(ctxt.output, "EOM", {data = message,
                                     user_token = token_info})

Load balancing persistence

Load balancing persistence is closely related to token based load balancing. Users have to be able to programmatically calculate the persistence session value and use it for load balancing persistence. The send API is used to send persistence parameters. To use load balancing persistence, you have to set the USERSESSION persistence type on the default load balancing virtual server and provide a persistence parameter from the extension code by calling the send API with a user_session field. The maximum length of the persistence parameter value is 64 bytes.

If you need multiple types of persistence for a custom protocol, you have to define user persistence types and configure them. The names of the parameters used to configure the virtual servers are decided by the protocol implementer. A parameter’s configured value is also available to the extension code.

The following CLI command and code snippet show the use of the send API to support load balancing persistence. The code listing in the section Code Listing for mqtt.lua also illustrates the use of the user_session field.

For persistence, you have to specify the USERSESSION persistence type on the load balancing virtual server and pass the user_session value from the ns.send API.

add lb vserver v_mqttlb USER_TCP -persistencetype USERSESSION

Send the MQTT message to the load balancer, with the user_session field set to clientID in the payload.

Example:

        -- send the data so far to lb
        -- user_session is set to clientID as well (it will be used to persist session)
        ns.send(ctxt.output, "DATA", {data = data, user_session = clientID})

TCP connection based load balancing

For some protocols, MBLB might not be needed. Instead, you might need TCP connection based load balancing. For example, for the MQTT protocol, the extension must parse the initial part of the TCP stream to determine the token for load balancing, and all the MQTT messages on the same TCP connection must be sent to the same server connection.

TCP connection based load balancing can be achieved by using the send API with only DATA events and not sending any EOM. That way the downstream load balancing context bases the load balancing decision on the data received first, and sends all the subsequent data to the same server connection selected by the load balancing decision.

Additionally, some use cases might require the ability to bypass extension handling after the load balancing decision has been made. Bypassing the extension calls results in better performance, because the traffic is processed purely by native code. Bypass can be done by using the ns.pipe() API. A call to the pipe() API from extension code connects an input context to an output context. After the call to pipe(), all the events coming from the input context go directly to the output context. Effectively, the module from which the pipe() call is made is removed from the pipeline.

The following code snippet shows streaming and the use of the pipe() API to bypass a module. The code listing in the section Code Listing for mqtt.lua also illustrates how to stream data and how to use the pipe() API to bypass the module for the rest of the traffic on the connection.

Example:

        -- send the data so far to lb
        ns.send(ctxt.output, "DATA", {data = data,
                                       user_token = clientID})
        -- pipe the subsequent traffic to the lb - to bypass the client on_data handler
        ns.pipe(ctxt.input, ctxt.output)

SSL

SSL for protocols using extensions is supported in ways similar to how SSL is supported for native protocols. Using the same parsing code for creating custom protocols, you can create a protocol instance over TCP or over SSL, which can then be used to configure the virtual servers. Similarly, you can add user services over TCP or SSL.
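
For example, the same extension code can back a protocol instance over SSL. The names, address, and port below are illustrative only; see the referenced topics for the complete steps:

    add user protocol mqtt_ssl -transport SSL -extension mqtt-code
    add lb vserver v_mqttlb USER_TCP
    add user vserver v_mqtt_ssl mqtt_ssl 192.0.2.10 8883 -defaultLb v_mqttlb
    bind ssl vserver v_mqtt_ssl -certkeyName mqtt_cert_key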

For more information, see Configuring SSL Offloading for MQTT and Configuring SSL Offloading for MQTT With End-To-End Encryption.

Server connection multiplexing

In some cases, the client sends one request at a time and sends the next request only after the response for the first request is received from the server. In such a case, the server connection can be reused for other client connections, and for the next message on the same connection, after the response has been sent to the client. To allow reuse of the server connection by other client connections, you must use the ctxt:reuse_server_connection() API in the server-side context.

Note: This API is available in Citrix ADC 12.1 build 49.xx and later.
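
A minimal sketch of a server-side on_data handler using this API (the response parsing is elided for brevity):

    function server.on_data(ctxt, payload)
        --
        -- code to parse payload.data into a complete response comes here
        --
        -- allow this server connection to be reused by other
        -- client connections once the response is forwarded
        ctxt:reuse_server_connection()
        ns.send(ctxt.output, "EOM", {data = message})
    end -- server.on_data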