People are creating, sharing, and storing data at a faster rate than at any other time in history. When it comes to innovating on storing and transmitting that data, at Facebook we're making advancements not only in hardware — such as larger hard drives and faster networking equipment — but in software as well. Software helps with data processing through compression, which encodes information, like text, pictures, and other forms of digital data, using fewer bits than the original. These smaller files take up less space on hard drives and are transmitted faster to other systems. There's a trade-off to compressing and decompressing information, though: time. The more time spent compressing to a smaller file, the slower the data is to process.
Today, the reigning data compression standard is Deflate, the core algorithm inside Zip, gzip, and zlib [2]. For two decades, it has provided an impressive balance between speed and space, and, as a result, it is used in almost every modern electronic device (and, not coincidentally, used to transmit every byte of the very blog post you are reading). Over the years, other algorithms have offered either better compression or faster compression, but rarely both. We believe we've changed this.
We're thrilled to announce Zstandard 1.0, a new compression algorithm and implementation designed to scale with modern hardware and compress smaller and faster. Zstandard combines recent compression breakthroughs, like Finite State Entropy, with a performance-first design — and then optimizes the implementation for the unique properties of modern CPUs. As a result, it improves upon the trade-offs made by other compression algorithms and has a wide range of applicability with very high decompression speed. Zstandard, available now under the BSD license, is designed to be used in nearly every lossless compression [1] scenario, including many where current algorithms aren't applicable.
Comparing compression
There are three standard metrics for comparing compression algorithms and implementations:
- Compression ratio: The original size (numerator) divided by the compressed size (denominator), expressed as a unitless ratio of 1.0 or greater.
- Compression speed: How quickly we can make the data smaller, measured in MB/s of input data consumed.
- Decompression speed: How quickly we can reconstruct the original data from the compressed data, measured in MB/s for the rate at which data is produced from compressed data.
The type of data being compressed can affect these metrics, so many algorithms are tuned for specific types of data, such as English text, genetic sequences, or rasterized images. However, Zstandard, like zlib, is meant for general-purpose compression for a variety of data types. To represent the algorithms that Zstandard is expected to work on, in this post we'll use the Silesia corpus, a data set of files that represent the typical data types used every day.
Some algorithms and implementations commonly used today are zlib, lz4, and xz. Each of these algorithms offers different trade-offs: lz4 aims for speed, xz aims for higher compression ratios, and zlib aims for a good balance of speed and size. The table below indicates the rough trade-offs of the algorithms' default compression ratio and speed for the Silesia corpus by comparing the algorithms per lzbench, a pure in-memory benchmark meant to model raw algorithm performance.
As outlined, there are often drastic compromises between speed and size. The fastest algorithm, lz4, results in lower compression ratios; xz, which has the highest compression ratio, suffers from a slow compression speed. However, Zstandard, at the default setting, shows substantial improvements in both compression speed and decompression speed, while compressing at the same ratio as zlib.
While pure algorithm performance is important when compression is embedded within a larger application, it is extremely common to also use command line tools for compression — say, for compressing log files, tarballs, or other similar data meant for storage or transfer. In these cases, performance is often affected by overhead, such as checksumming. This chart shows the comparison of the gzip and zstd command line tools on Centos 7 built with the system's default compiler.
Each test was run 10 times, with the minimum time taken, and was conducted on a ramdisk to avoid filesystem overhead. These were the commands (which use the default compression levels for both tools):
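The invocations were along the lines of the following, with silesia.tar being the Silesia corpus packed into a single tar archive; decompression output goes to /dev/null so that only the codec itself is measured:

```
zstd -c silesia.tar > silesia.tar.zst    # compress at the default level (3)
zstd -d -c silesia.tar.zst > /dev/null   # decompress
gzip -c silesia.tar > silesia.tar.gz     # compress at the default level (6)
gzip -d -c silesia.tar.gz > /dev/null    # decompress
```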
Scalability
If an algorithm is scalable, it has the ability to adapt to a wide variety of requirements, and Zstandard is designed to excel in today's landscape and to scale into the future. Most algorithms have “levels” based on time/space trade-offs: The higher the level, the greater the compression achieved at a loss of compression speed. Zlib offers nine compression levels; Zstandard currently offers 22, which enables flexible, granular trade-offs between compression speed and ratios for future data. For example, we can use level 1 if speed is most important and level 22 if size is most important.
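On the command line, the level is simply a numeric flag (the file name below is hypothetical; in recent zstd builds, levels above 19 additionally require --ultra):

```
zstd -1 data.bin            # fastest setting
zstd -19 data.bin           # near-maximum compression
zstd --ultra -22 data.bin   # highest level
```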
Below is a chart of the compression speed and ratio achieved for all levels of Zstandard and zlib. The x-axis is a decreasing logarithmic scale in megabytes per second; the y-axis is the compression ratio achieved. To compare the algorithms, you can pick a speed to see the various ratios the algorithms achieve at that speed. Likewise, you can pick a ratio and see how fast the algorithms are when they achieve that level.
For any vertical line (i.e., compression speed), Zstandard achieves a higher compression ratio. For the Silesia corpus, decompression speed — regardless of ratio — was approximately 550 MB/s for Zstandard and 270 MB/s for zlib. The chart shows another difference between Zstandard and the alternatives: By using one algorithm and implementation, Zstandard allows for much finer-grained tuning for each use case. This means Zstandard can compete with some of the fastest and highest compression algorithms while maintaining a substantial decompression speed advantage. These improvements translate directly to faster data transfer and smaller storage requirements.
In other words, compared with zlib, Zstandard scales:
- At the same compression ratio, it compresses substantially faster: ~3-5x.
- At the same compression speed, it is substantially smaller: 10-15 percent smaller.
- It is almost 2x faster at decompression, regardless of compression ratio; the command line tooling numbers show an even bigger difference: more than 3x faster.
- It scales to much higher compression ratios, while sustaining lightning-fast decompression speeds.
Under the hood
Zstandard improves upon zlib by combining several recent innovations and targeting modern hardware:
Memory
By design, zlib is limited to a 32 KB window, which was a sensible choice in the early '90s. But today's computing environment can access much more memory — even in mobile and embedded environments.
Zstandard has no inherent limit and can address terabytes of memory (although it rarely does). For example, the lower levels of its 22 use 1 MB or less. For compatibility with a broad range of receiving systems, where memory may be limited, it is recommended to limit memory usage to 8 MB. This is a tuning recommendation, though, not a compression format limitation.
A format designed for parallel execution
Today's CPUs are very powerful and can issue several instructions per cycle, thanks to multiple ALUs (arithmetic logic units) and increasingly advanced out-of-order execution design.
In essence, it means that if:

```
a = b1 + b2
c = d1 + d2
```

then both a and c will be calculated in parallel.

This is possible only if there is no relation between them. Therefore, in this example:

```
a = b1 + b2
c = d1 + a
```

c must wait for a to be calculated first, and only then will the calculation of c start.

This means that, to take advantage of modern CPUs, one has to design a flow of operations with few or no data dependencies.
This is achieved in Zstandard by separating data into multiple parallel streams. A new-generation Huffman decoder, Huff0, can decode multiple symbols in parallel on a single core. This gain is cumulative with multi-threading, which uses multiple cores.
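To make the idea concrete, here is a deliberately simplified sketch (not the actual Huff0 code) of why interleaving several independent streams helps: each update in the inner loop depends only on its own stream's state, so an out-of-order CPU can execute them side by side.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NB_STREAMS 4

/* Toy "decode one step" function: each stream depends only on its own state. */
static uint32_t step(uint32_t *state, const uint8_t *src, size_t pos)
{
    *state = (*state << 5) + (*state >> 2) + src[pos];  /* stand-in for a table lookup */
    return *state & 0xFF;                               /* stand-in for a decoded symbol */
}

int main(void)
{
    uint8_t input[NB_STREAMS][64] = {{0}};  /* four independent input streams */
    uint32_t state[NB_STREAMS] = {1, 2, 3, 4};
    uint32_t check = 0;

    for (size_t pos = 0; pos < 64; pos++) {
        /* No data dependency between these four lines: the CPU can keep
           several ALUs busy instead of waiting on a single chain. */
        check += step(&state[0], input[0], pos);
        check += step(&state[1], input[1], pos);
        check += step(&state[2], input[2], pos);
        check += step(&state[3], input[3], pos);
    }
    printf("checksum: %u\n", check);
    return 0;
}
```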
Branchless design
New CPUs are more powerful and reach very high frequencies, but this is only possible thanks to a multi-stage approach, where an instruction is split into a pipeline of multiple steps. At each clock cycle, the CPU is able to issue the result of multiple operations, depending on available ALUs. The more ALUs that are being used, the more work the CPU is doing, and hence the faster compression is occurring. Keeping the ALUs fed with work is crucial for modern CPU performance.
This turns out to be difficult. Consider the following simple situation:
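A minimal stand-in for that situation looks like this (doA and doB are hypothetical placeholders for whatever work each branch does):

```c
if (condition)
    result = doA();   /* the CPU would like to start this work early ... */
else
    result = doB();   /* ... or this, but condition isn't known yet      */
```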
When it encounters this, the CPU does not know what to do, since it depends on the value of condition. A cautious CPU would wait for the result of condition before working on either branch, which would be extremely wasteful.

Today's CPUs gamble. They do so intelligently, thanks to a branch predictor, which tells them in essence the most probable result of evaluating condition. When the bet is right, the pipeline remains full and instructions are issued continuously. When the bet is wrong (a misprediction), the CPU has to stop all operations started speculatively, come back to the branch, and take the other direction. This is called a pipeline flush, and is extremely costly in modern CPUs.
Twenty-five years ago, pipeline flush was a non-issue. Today, it is so important that it's essential to design formats compatible with branchless algorithms. As an example, let's look at a bit-stream update:
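The snippet below is a sketch of the idea rather than the exact Zstandard source: both versions make sure the bit accumulator holds fresh bits after nbBitsUsed bits have been consumed, but only the first one contains a data-dependent branch.

```c
/* classic version: the amount of work depends on nbBitsUsed,
   and each loop test is a branch the CPU must predict */
while (nbBitsUsed >= 8) {
    accumulator = (accumulator << 8) + *bytePtr++;
    nbBitsUsed -= 8;
}

/* branchless version: a fixed amount of work, whatever the value of nbBitsUsed */
bytePtr     += nbBitsUsed >> 3;   /* skip the whole bytes already consumed      */
nbBitsUsed  &= 7;                 /* keep only the bit offset inside that byte  */
accumulator  = read64(bytePtr);   /* hypothetical unaligned 64-bit reload       */
```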
As you can see, the branchless version has a predictable workload, without any condition. The CPU will always do the same work, and that work is never thrown away due to a misprediction. In contrast, the classic version does less work when (nbBitsUsed < 8). But the test itself is not free, and whenever the test is guessed incorrectly, it results in a full pipeline flush, which costs more than the work done by the branchless version.
As you can guess, this side effect has impacts on the way data is packed, read, and decoded. Zstandard has been created to be friendly to branchless algorithms, especially within critical loops.
Finite State Entropy: A next-generation probability compressor
In compression, data is first transformed into a set of symbols (the modeling stage), and then these symbols are encoded using a minimum number of bits. This second stage is called the entropy stage, in memory of Claude Shannon, whose work accurately established the compression limit of a set of symbols with given probabilities (called the “Shannon limit”). The goal is to get close to this limit while using as few CPU resources as possible.
A very common algorithm is Huffman coding, in use within Deflate. It gives the best possible prefix code, assuming each symbol is described with a natural number of bits (1 bit, 2 bits …). This works great in practice, but the limit of natural numbers means it's impossible to reach high compression ratios, because a symbol necessarily consumes at least 1 bit.
A better method is called arithmetic coding, which can come arbitrarily close to the Shannon limit of -log2(P) bits per symbol, hence consuming fractional bits. It translates into a better compression ratio when probabilities are high, but it also uses more CPU power. In practice, even optimized arithmetic coders struggle for speed, especially on the decompression side, which requires divisions with a predictable result (i.e., not floating point) and which proves to be slow.
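To put numbers on that difference: a symbol with probability 0.9 carries only -log2(0.9) ≈ 0.15 bits of information, yet a Huffman code must still spend a full bit on it, while an arithmetic coder can charge it roughly 0.15 bits.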
Finite State Entropy is based on a new theory called ANS (Asymmetric Numeral Systems) by Jarek Duda. Finite State Entropy is a variant that precomputes many coding steps into tables, resulting in an entropy codec as precise as arithmetic coding, using only additions, table lookups, and shifts, which is about the same level of complexity as Huffman. It also reduces the latency of accessing the next symbol, as it is immediately available from the state value, while Huffman requires a prior bit-stream decoding operation. Explaining how it works is outside the scope of this post, but if you're interested, there is a series of articles detailing its inner workings.
Repcode modeling
Repcode modeling efficiently compresses structured data, which features sequences of almost equivalent content, differing by just one or a few bytes. This method isn't new but was first used after Deflate's publication, so it doesn't exist within zlib/gzip.
The efficiency of repcode modeling highly depends on the type of data being compressed, ranging anywhere from a single to a double-digit compression improvement. These combined improvements add up to a better and faster compression experience, offered within the Zstandard library.
Zstandard in practice
As mentioned before, there are several typical use cases of compression. For an algorithm to be compelling, it either needs to be extraordinarily good at one specific use case, such as compressing human readable text, or very good at many diverse use cases. Zstandard takes the latter approach. One way to think about use cases is how many times a specific piece of data might be decompressed. Zstandard has advantages in all of these cases.
Many times. For data processed many times, decompression speed and the ability to opt into a very high compression ratio without compromising decompression speed is advantageous. The storage of the social graph on Facebook, for instance, is repeatedly read as you and your friends interact with the site. Outside of Facebook, examples of when data needs to be decompressed many times include files downloaded from a server, such as the source code to the Linux kernel or the RPMs installed on servers, the JavaScript and CSS used by a webpage, or running thousands of MapReduces over data in a data warehouse.
Just once. For data compressed just once, especially for transmission over a network, compression is a fleeting moment in the flow of data. The less overhead it imposes on the server, the more requests per second the server can handle; the less overhead on the client, the more quickly the data can be acted upon. Typically this comes up in client/server situations where the data is unique to the client, such as a custom web server response — say, the data used to render the page when you receive a note from a friend on Messenger. The net result is that your mobile device loads pages faster, uses less battery, and consumes less of your data plan. Zstandard in particular suits mobile scenarios much better than other algorithms because of how it handles small data.
Possibly never. While seemingly counterintuitive, it is often the case that a piece of data — such as backups or log files — will never be decompressed but can be read if needed. For this type of data, compression typically needs to be fast, make the data small (with a time/space trade-off suitable for the situation), and perhaps store a checksum, but otherwise be invisible. On the rare occasion it does need to be decompressed, you don't want the compression to slow down the operational use case. Fast decompression is beneficial because it is often a small part of the data (such as a specific file in the backup or message in a log file) that needs to be found quickly.
In all of these cases, Zstandard brings the ability to compress and decompress many times faster than gzip, with the resulting compressed data being smaller.
Small data
There is another use case for compression that gets less attention but can be quite important: small data. These are use patterns where data is produced and consumed in small quantities, such as JSON messages between a web server and browser (typically hundreds of bytes) or pages of data in a database (a few kilobytes).
Databases provide an interesting use case. Systems such as MySQL, PostgreSQL, and MongoDB all store data intended for real-time access. Recent hardware advances, particularly the proliferation of flash (SSD) devices, have fundamentally changed the balance between size and throughput — we now live in a world where IOPS (IO operations per second) are quite high, but the capacity of our storage devices is lower than it was when hard drives ruled the data center.
In addition, flash has an interesting property regarding write endurance — after thousands of writes to the same section of the device, that section can no longer accept writes, often leading to the device being removed from service. It is therefore natural to seek out ways to reduce the quantity of data being written, because less writing can mean more data per server and burning out the device at a slower rate. Data compression is a strategy for this, and databases are also often optimized for performance, meaning read and write performance are equally important.
There is a complication for using data compression with databases, though. Databases like to randomly access data, whereas most typical use cases for compression read an entire file in linear order. This is a problem because data compression essentially works by predicting the future based on the past — the algorithms look at your data sequentially and predict what they might see in the future. The more accurate the predictions, the smaller they can make the data.
When you are compressing small data, such as pages in a database or tiny JSON documents being sent to your mobile device, there simply isn't much “past” to use to predict the future. Compression algorithms have attempted to address this by using pre-shared dictionaries to effectively jump-start compression. This is done by pre-sharing a static set of “past” data as a seed for the compression.
Zstandard builds on this approach with highly optimized algorithms and APIs for dictionary compression. In addition, Zstandard includes tooling (zstd --train) for easily making dictionaries for custom applications, and provisions for registering standard dictionaries for sharing with larger communities. While compression varies based on the data samples, small data compression can range anywhere from 2x to 5x better than compression without dictionaries.
Dictionaries in action
While it can be hard to play with a dictionary in the context of a running database (it requires significant modifications to the database, after all), you can see dictionaries in action with other types of small data. JSON, the lingua franca of small data in the modern world, tends to be small, repetitive records. There are countless public data sets available; for the purpose of this demonstration, we will use the “user” data set from GitHub, available via HTTP. Here is a sample entry from this data set:
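A typical entry is a flat JSON object along these lines (abridged and illustrative, following the field layout of the GitHub users API rather than reproducing the exact record):

```json
{
  "login": "octocat",
  "id": 583231,
  "avatar_url": "https://avatars.githubusercontent.com/u/583231",
  "url": "https://api.github.com/users/octocat",
  "html_url": "https://github.com/octocat",
  "repos_url": "https://api.github.com/users/octocat/repos",
  "type": "User",
  "site_admin": false
}
```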
As you can see, there is quite a bit of repetition here — we can compress these nicely! But each user is a bit under 1 KB, and most compression algorithms really need more data to stretch their legs. A set of 1,000 users takes roughly 850 KB to store uncompressed. Naively applying either gzip or zstd individually to each file cuts this down to just over 300 KB; not bad! But if we create a one-time, pre-shared dictionary, with zstd the size drops to 122 KB — taking the original compression ratio from 2.8x to 6.9x. This is a significant improvement, available out-of-box with zstd:
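Assuming the 1,000 records live in a directory called users/ (a hypothetical layout), building and using the dictionary takes a few commands:

```
# train a dictionary from the sample records
zstd --train users/*.json -o users.dict

# compress each record with the shared dictionary ...
zstd -D users.dict users/*.json

# ... and decompress with the same dictionary
zstd -D users.dict -d users/user1.json.zst
```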
Picking a compression level
As shown above, Zstandard provides a substantial number of levels. This customization is powerful but leads to tough choices. The best way to decide is to review your data and measure, deciding what trade-offs you want to make. At Facebook, we find the default level 3 suitable for many use cases, but from time to time, we will adjust this slightly depending upon what our bottleneck is (often we are trying to saturate a network connection or disk spindle); other times, we care more about the stored size and will use a higher level.
Ultimately, for the results most tailored to your needs, you will need to consider both the hardware you use and the data you care about — there are no hard and fast prescriptions that can be made without context. When in doubt, though, either stick with the default level of 3 or something from the 6 to 9 range for a nice trade-off of speed versus space; save level 20+ for cases where you truly care only about the size and not about the compression speed.
Try it out
Zstandard is both a command line tool (zstd) and a library. It is written in highly portable C, making it suitable for practically every platform used today — be it the servers that run your business, your laptop, or even the phone in your pocket. You can grab it from our GitHub repository, compile it with a simple make install, and begin using it like you would use gzip:
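For example (file names hypothetical), compressing and decompressing work much as they do with gzip, except that zstd keeps the original file by default:

```
zstd access.log               # produces access.log.zst
zstd -d access.log.zst        # decompresses it again
zstd -c access.log > out.zst  # or stream to stdout, gzip-style
```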
As you might expect, you can use it as part of a command pipeline, for example, to back up your critical MySQL database:
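A sketch of such a pipeline, with a hypothetical database name, might look like this:

```
mysqldump --single-transaction mydatabase | zstd > backup.sql.zst
```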
The tar command supports different compression implementations out-of-box, so once you install Zstandard, you can immediately work with tarballs compressed with Zstandard. Here's a simple example that shows it in use with tar and the speed difference compared with gzip:
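One way to do this (directory name hypothetical; the timings themselves will depend on your hardware and data) is to hand tar an external compressor with -I, GNU tar's --use-compress-program option:

```
time tar -I zstd -cf archive.tar.zst somedirectory/   # Zstandard-compressed tarball
time tar -czf archive.tar.gz somedirectory/           # gzip equivalent, for comparison
```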
Beyond command line use, there are the APIs, documented in the header files in the repository (start here for an overview of the APIs). We also include a zlib-compatible wrapper API (libWrapper) for easier integration with tools that already have zlib interfaces. Finally, we include a number of examples, both of basic use and of more advanced use such as dictionaries and streaming, also in the GitHub repository.
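As a taste of the simplest, one-shot API, here is a minimal sketch assuming libzstd is installed and the program is linked with -lzstd:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>

int main(void)
{
    const char src[] = "An example payload, repeated: example payload, example payload.";
    size_t const srcSize = strlen(src) + 1;

    /* Worst-case compressed size, used to size the destination buffer. */
    size_t const bound = ZSTD_compressBound(srcSize);
    void *dst = malloc(bound);

    /* One-shot compression at level 3. */
    size_t const cSize = ZSTD_compress(dst, bound, src, srcSize, 3);
    if (ZSTD_isError(cSize)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(cSize)); return 1; }

    /* One-shot decompression back into a buffer of the original size. */
    char *back = malloc(srcSize);
    size_t const dSize = ZSTD_decompress(back, srcSize, dst, cSize);
    if (ZSTD_isError(dSize)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(dSize)); return 1; }

    printf("%zu bytes -> %zu bytes -> \"%s\"\n", srcSize, cSize, back);
    free(dst); free(back);
    return 0;
}
```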
More to come
While we have hit 1.0 and consider Zstandard ready for every kind of production use, we're not done. Coming in future versions:
- Multi-threaded command line compression for even faster throughput on large data sets, similar to the pigz tool for zlib.
- New compression levels, in both directions, allowing for even faster compression and higher ratios.
- A community-maintained predefined set of compression dictionaries for common data sets such as JSON, HTML, and common network protocols.
We would like to thank all contributors, both of code and of feedback, who helped us get to 1.0. This is just the beginning. We know that for Zstandard to live up to its potential, we need your help. As mentioned above, you can try Zstandard today by grabbing the source or pre-built binaries from our GitHub project, or, for Mac users, installing via Homebrew (brew install zstd). We'd love any feedback and interesting use cases you have, as well as additional language bindings and help integrating it with your favorite open source projects.
Footnotes
1. While lossless data compression is the focus of this post, there exists a related but very different field of lossy data compression, used primarily for images, audio, and video.
2. Deflate, zlib, gzip — three names intertwined. Deflate is the algorithm used by the zlib and gzip implementations. Zlib is a library providing Deflate, and gzip is a command line tool that uses zlib for Deflating data as well as checksumming. This checksumming can have significant overhead.
3. All benchmarks were performed on an Intel E5-2678 v3 running at 2.5 GHz on a Centos 7 machine. Command line tools (zstd and gzip) were built with the system GCC, 4.8.5. Algorithm benchmarks performed by lzbench were built with GCC 6.