The NetScaler SD-WAN WANOP accelerator uses a variety of zero-config optimizations to speed up HTTP traffic. This in turn accelerates Web pages and any other applications using the HTTP protocol (file downloads, video streaming, automatic updates, and so on).
Optimizations that accelerate HTTP include compression, traffic shaping, flow control, and caching.
HTTP is an ideal application for NetScaler SD-WAN WANOP multi-level compression.
Static content, including standard HTML pages, images, video, and binary files, receives variable amounts of first-pass compression, typically 1:1 on pre-compressed binary content, and 2:1 or more on text-based content. Starting with the second time the object is seen, the two largest compression engines (memory-based compression and disk-based compression) deliver extremely high compression ratios, with larger objects receiving compression ratios of 1,000:1 or more. With such high compression ratios, the WAN link stops being the limiting factor, and the server, the client, or the LAN becomes the bottleneck.
The appliance switches between compressors dynamically to give maximum performance. For example, the appliance uses a smaller compressor on the HTTP header and a larger one on the HTTP body.
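The two-level behavior described above can be illustrated with a minimal sketch. This is not WANOP's actual implementation; the names (HISTORY, CHUNK, send) and the fixed-size chunking are simplifying assumptions. It uses generic zlib compression for the first pass and a content-hash history to stand in for the memory- and disk-based engines, so a repeated object crosses the link as short references only.

```python
import hashlib
import zlib

# Illustrative stand-ins, not WANOP internals:
HISTORY = {}   # hash -> chunk, stands in for the compression history
CHUNK = 4096   # fixed-size chunking, a simplification

def send(data: bytes) -> int:
    """Return the number of bytes that would cross the WAN link."""
    wire_bytes = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).digest()
        if key in HISTORY:
            # Repeat traffic: only a short reference crosses the link.
            wire_bytes += len(key)
        else:
            # First pass: generic compression, roughly 2:1 or better on text.
            HISTORY[key] = chunk
            wire_bytes += len(zlib.compress(chunk))
    return wire_bytes

page = b"<html>" + b"lorem ipsum dolor sit amet " * 400 + b"</html>"
first = send(page)    # first transfer: modest, text-like compression
second = send(page)   # second transfer: only 32-byte hash references cross the link
```

On the second transfer, the wire cost is a few dozen bytes regardless of object size, which is how ratios of 1,000:1 or more arise for large repeated objects.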
Dynamic content, including HTTP headers and dynamically generated pages (pages that are never the same twice but have similarities to each other), is compressed by the three compression engines that handle smaller matches. The first time a page is seen, compression is good. When a variant of a previous page is seen, compression is better.
HTTP consists of a mix of interactive and bulk traffic. Every user’s traffic is a mix of both, and sometimes the same connection contains a mix of both. The traffic shaper seamlessly and dynamically ensures that each HTTP connection gets its fair share of the link bandwidth, preventing bulk transfers from monopolizing the link at the expense of interactive users, while also ensuring that bulk transfers get any bandwidth that interactive connections do not use.
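The fair-share behavior described above is in the spirit of max-min fair allocation. The sketch below is a simplified model of that idea, not the traffic shaper's actual algorithm: each connection is granted up to its fair share, and bandwidth that interactive connections leave unused is redistributed to bulk transfers.

```python
# Hypothetical sketch of max-min fair bandwidth allocation.
# fair_share() and its parameters are illustrative names, not WANOP APIs.
def fair_share(link_kbps: float, demands: dict) -> dict:
    """Allocate link_kbps across connections; demands maps name -> desired kbps."""
    alloc = {}
    remaining = link_kbps
    pending = dict(demands)
    while pending:
        share = remaining / len(pending)
        # Connections wanting less than an equal share are fully satisfied;
        # their unused bandwidth is redistributed to the remaining connections.
        satisfied = {c: d for c, d in pending.items() if d <= share}
        if not satisfied:
            for c in pending:
                alloc[c] = share
            break
        for c, d in satisfied.items():
            alloc[c] = d
            remaining -= d
            del pending[c]
    return alloc

demands = {"interactive": 100, "bulk1": 5000, "bulk2": 5000}
alloc = fair_share(1000, demands)
# The interactive connection gets its full 100 kbps; the two bulk
# transfers split the remaining 900 kbps evenly (450 kbps each).
```

Note how the bulk transfers cannot monopolize the link, yet they absorb all the bandwidth the interactive connection does not use.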
Advanced retransmission algorithms and other TCP-level optimizations retain responsiveness and maintain transfer rates in the face of latency and loss.
HTTP caching for video files was introduced in release 7.0. Caching involves saving HTTP objects to local storage and serving them to local clients without reloading them from the server.
What is the difference between caching and compression? While caching provides speedup that is similar to compression, the two methods are different, making them complementary.
Compression speeds up transfers from the remote server, and this higher data rate can place a higher load on the server than if compression were not present. Caching eliminates transfers from the server altogether, reducing the load on the server.
Compression works on any data stream that is similar to a previous transfer: if you change the name of a file on the remote server and transfer it again, compression still works perfectly. Caching works only when the object the client requests and the object on disk are known to be identical: if you change the name of a file on the remote server and transfer it again, the cached copy is not used.
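The renamed-file contrast above can be modeled in a few lines. The sketch below uses a content-based set for the compression history and a name-keyed dictionary for the cache; both are illustrative assumptions, not WANOP data structures.

```python
# Sketch: a rename defeats a name-keyed cache but not content-based compression.
seen_content = set()   # stands in for the compression history (keyed by content)
cache = {}             # stands in for the object cache (keyed by name)

def transfer(name, data):
    """Return (cache_hit, compression_match) for this transfer."""
    cache_hit = name in cache
    compression_match = data in seen_content
    cache[name] = data
    seen_content.add(data)
    return cache_hit, compression_match

data = b"same bytes"
r1 = transfer("a.bin", data)   # first sight: no cache hit, no history match
r2 = transfer("b.bin", data)   # renamed: cache misses, but the
                               # compression history still matches
```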
Compressed data cannot be delivered faster than the server can send it. Cached data depends only on the speed of the client-side appliance.
Compression is CPU-intensive; caching is not.