What is the benefit of NetScaler SD-WAN WANOP Compression?
While the basic mechanism of compression is to make data streams smaller, the real benefit is speed: a smaller file (or a smaller transaction) takes less time to transfer. Size in itself does not matter; the point of compression is speed.
How is compression benefit measured?
There are two ways of measuring compression benefit: time and compression ratio. The two are related when the WAN link is the dominant bottleneck. Because the NetScaler SD-WAN WANOP compressor is very fast, compressing data in real time, a file that compresses by 5:1 transfers in one-fifth the time. This holds true until a secondary bottleneck is encountered. For example, if the client is too slow to handle a full-speed transfer, a 5:1 compression ratio delivers less than a 5:1 speedup.
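The relationship between compression ratio and speedup, and the way a secondary bottleneck caps the benefit, can be sketched as follows (the function name and the link speeds are illustrative assumptions, not measured values):

```python
# Illustrative sketch: effective speedup from compression when a
# secondary bottleneck (for example, a slow client) caps throughput.
# All names and numbers here are assumptions for the example.

def effective_speedup(wan_mbps, compression_ratio, bottleneck_mbps):
    """Effective throughput is the WAN rate times the compression ratio,
    capped by the slowest other link in the path."""
    uncapped = wan_mbps * compression_ratio
    effective = min(uncapped, bottleneck_mbps)
    return effective / wan_mbps

# 5:1 compression on a 10 Mbit/s WAN with a fast client: the full 5x speedup.
print(effective_speedup(10, 5, 1000))  # -> 5.0

# Same link, but the client handles only 30 Mbit/s: speedup drops to 3x.
print(effective_speedup(10, 5, 30))    # -> 3.0
```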
How does compression work?
The compression engine retains data previously transferred over the link, with the more recent data retained in memory and a much larger amount on disk. When a string that was transferred before is encountered again, it is replaced with a reference to the previous copy. This reference is sent over the WAN instead of the actual string, and the appliance on the other end looks up the reference and copies it into the output stream.
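The mechanism described above can be sketched as a toy history-based compressor. This is an illustration only, under simplified assumptions: the real WANOP engine uses variable-length matches and a large disk-backed history, not fixed 8-byte chunks.

```python
# Toy model of history-based compression: chunks seen before are replaced
# by short references into a history shared by both ends of the link.
# (Simplified assumption: fixed-size chunks; the real engine matches
# variable-length strings.)

CHUNK = 8  # bytes per chunk in this toy model

def compress(data, history):
    """Replace chunks seen before with ('ref', index); send new chunks raw."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        if chunk in history:
            out.append(('ref', history[chunk]))   # tiny reference crosses the WAN
        else:
            history[chunk] = len(history)         # remember for next time
            out.append(('raw', chunk))            # literal bytes, first sighting
    return out

def decompress(tokens, history):
    """The far-side appliance resolves references against its own history."""
    out = b''
    for kind, value in tokens:
        if kind == 'raw':
            history.append(value)                 # same order as the sender
            out += value
        else:
            out += history[value]
    return out

sender, receiver = {}, []
data = b'the quick brown fox jumps over the lazy dog'
first = compress(data, sender)            # first transfer: all ('raw', ...) tokens
second = compress(data, sender)           # repeat transfer: all ('ref', ...) tokens
print(decompress(first, receiver) == data)   # -> True
print(decompress(second, receiver) == data)  # -> True
```

The second transfer sends only small references, which is why repeat transfers compress so dramatically.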
What is the maximum achievable compression ratio?
The maximum achievable compression ratio on a NetScaler SD-WAN WANOP appliance is approximately 10,000:1.
What is the expected compression ratio?
Overall compression ratio is the average of all attempts to compress the data streams on the link. Some data compresses better than other data, and some never compresses at all. The appliance uses service classes to prevent sending obviously uncompressible streams to the compressor. The effect of compression on different types of data varies as follows:
One-time compressed or encrypted data – streams that will never be seen again and have already been compressed or encrypted, such as encrypted SSH tunnels and real-time video camera monitoring – does not compress, since its data streams are never the same twice.
Compressed binary data or encrypted data that is seen more than once compresses extremely well on the second and subsequent transfers, with compression ratios in the range of hundreds to thousands to one on these later transfers. On the first transfer, it does not compress. The average compression ratio for such data depends on how frequently data is seen more than once. While individual transfers sometimes show compression ratios over 1,000:1, the average for compressed binary data is between 1.5:1 and 5:1 on most links, with averages over 10:1 on some links, depending on the nature of the traffic.
Text streams and uncompressed/unencrypted binary data compress even on the first pass. Text streams compress well because even unrelated texts have many substrings in common. This is true of documents, source code, HTML pages, and so on. First-pass compression ratios on the order of 1.5:1 to 4:1 are common. On the second and subsequent passes, they compress almost as well as compressed binary data (100:1 or more). Uncompressed binary data is variable, but often compresses better than text. Examples of uncompressed binary data include CD images, executable files, and uncompressed image, audio, and video formats. On the second and subsequent passes, they compress about as well as compressed binary data.
XenApp and XenDesktop data compresses especially well with file transfers, printer output, and video, provided that the same data streams have traversed the link before. Because of protocol overhead, peak compression is approximately 40:1, and average compression is likely to be in the neighborhood of 3:1. Interactive data streams, such as screen updates, give compression results on the order of 2:1.
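Since the overall ratio is an average across all streams, the link-wide figure is driven by total bytes in versus total bytes out, not by the best individual streams. A small sketch (the stream mix and byte counts below are assumptions for illustration):

```python
# Illustrative sketch: link-wide compression ratio is total input bytes
# over total output bytes across all streams. The stream names and byte
# counts are hypothetical examples, not measurements.

streams = {
    'encrypted SSH tunnel (one-time data)': (100, 100),  # 1:1, uncompressible
    'text documents (first pass)':          (300, 100),  # 3:1
    'binary file (repeat transfer)':        (500, 5),    # 100:1
}

total_in = sum(in_bytes for in_bytes, _ in streams.values())
total_out = sum(out_bytes for _, out_bytes in streams.values())
print(f'link compression ratio: {total_in / total_out:.1f}:1')  # -> 4.4:1
```

Note that one 100:1 stream does not pull the link average anywhere near 100:1; the uncompressible and first-pass traffic dominates the output byte count.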
What is the difference between caching and compression?
Caching saves entire, named objects on the client-side appliance. The name may be a path and filename in the case of Filesystem caching, or a URL in the case of Web caching. If you transfer an identical object with a different name, the cache provides no benefit. If you transfer an object with the same name as a cached object, but with slight differences in content, the cache provides no benefit. If the object can be served from the cache, it is not fetched from the server.
Compression, on the other hand, has no concept of object names, and provides benefit whenever a string in the transfer matches one already in the compression history. This means that if you download a file, change 1% of its content, and upload the new file, you might achieve 99:1 compression on the upload. If you download a file and then upload it to a different directory on the remote site, you might achieve a high compression ratio as well. Compression does not require file locking and does not suffer from “staleness.” The object is always fetched from the server and is thus always byte-for-byte correct.
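The contrast can be sketched in a few lines: a cache is keyed by object name, while compression history matches content regardless of name. Everything here (function names, the 4-byte chunking, the file names) is a simplified assumption for illustration.

```python
# Sketch contrasting a name-keyed object cache with name-agnostic
# compression history. Simplified toy model, not the WANOP internals.

cache = {}     # object name -> content (cache semantics)
history = set()  # content chunks seen on the link (compression semantics)

def cache_lookup(name, content):
    """A cache helps only when the name matches AND the content is unchanged."""
    hit = cache.get(name) == content
    cache[name] = content
    return hit

def compressible_fraction(content, chunk=4):
    """Fraction of chunks already in history, regardless of object name."""
    chunks = [content[i:i + chunk] for i in range(0, len(content), chunk)]
    seen = sum(1 for c in chunks if c in history)
    history.update(chunks)
    return seen / len(chunks)

doc = b'the same bytes under two different names!'
print(cache_lookup('reportA.doc', doc))    # first upload: False (miss)
print(cache_lookup('reportB.doc', doc))    # same bytes, new name: still False
print(compressible_fraction(doc))          # first pass over the link: 0.0
print(compressible_fraction(doc))          # renamed re-upload: 1.0
```

Renaming the object defeats the cache but not the compressor, which is exactly the distinction drawn above.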