Yeah, everybody puts nonsense they consider important in a place that's hard to miss, even though they know that readers are skilled at skipping it.

This is an application meant to ease benchmarking of in-memory data transformations. It started as a tool to simulate the way filesystems use compression, but its flexibility improved over time so much that testing other things works fine too. Aside from compression algorithms it features ciphers and hashes, and could be repurposed for nearly anything.

Main features:

* testing in any level of cache or in RAM
* overhead as low as 7 ticks/block on AMD64 and 11 on ARM

How it works:

1. It warms the CPU up with several iterations (-w) of memory shuffling.
2. If job_size (-j) is not defined, it tries to come up with a good one.
3. It splits the data into blocks (see the -b switch).
5. Each thread gets a number of blocks with a total size of at least job_size (-j). Note: blocks are not split into smaller entities. When the entire file comes in just one block (which happens often with the defaults), it is assigned to a single thread as a whole.
6. Encoding of each block is repeated overhead_iterations number of times (-o). This is meant to reduce the overhead introduced by the benchmark itself; it's needed only for testing data that fits in L1 cache with the fastest transforms.
7. Each thread encodes its assigned blocks and asks for another job. If the encoder is a compressor, a block has to be reduced by at least sector_size bytes (-m), otherwise it's left uncompressed. Please note that these may be different for different codecs.
8. If a block failed to compress by at least sector_size, it doesn't get to decode it: decoding speed takes into account only blocks that were encoded successfully.
9. Points 6 and 7 are repeated for at least small_iter_time ms. This is to compensate for clock inaccuracies.
10. Point 8 is repeated iters (-i) number of times and the minimum of the runtimes is taken. This is meant to reduce testing variability.

Testing in-cache is performed by running the encoder/decoder repeatedly on a piece of data that fits completely in your cache. To do so, you need to specify a *small enough* block (with the -b switch) and a fair number of overhead iterations (-o). What is small enough? For ciphers / hashes, take slightly less than the size of your cache. For compressors, enough to contain both the compressed and the uncompressed data.

Some codecs are in the default installation; some you have to enable with a compile-time option.

Name | author | version | default? | source
Blake224 | multiple | SHA3 Final, 64bit opt | yes |
Blake256 | multiple | SHA3 Final, 64bit opt | yes |
Blake384 | multiple | SHA3 Final, 64bit opt | yes |
Blake512 | multiple | SHA3 Final, 64bit opt | yes |
BlueMidnightWish224 | Danilo Gligoroski | SHA3 rnd 2, 64bit opt | no |
BlueMidnightWish256 | Danilo Gligoroski | SHA3 rnd 2, 64bit opt | no |
BlueMidnightWish384 | Danilo Gligoroski | SHA3 rnd 1, 64bit opt | no |
BlueMidnightWish512 | Danilo Gligoroski | SHA3 rnd 1, 64bit opt | no |
CityHash32 | Geoff Pike, Jyrki Alakuijala | 1.1.0 | no |
CityHash64 | Geoff Pike, Jyrki Alakuijala | 1.1.0 | no |
CityHash128 | Geoff Pike, Jyrki Alakuijala | 1.1.0 | no |
CrapWow | | | no |
crypto++ - adler32 | Wei Dai | 5.6.1 | no |
crypto++ - crc32 | Wei Dai | 5.6.1 | no |
crypto++ - md5 | Colin Plumb, Wei Dai | 5.6.1 | no |
crypto++ - sha224 | Steve Reid, Wei Dai | 5.6.1 | no |
crypto++ - sha256 | Steve Reid, Wei Dai | 5.6.1 | no |
crypto++ - sha384 | Steve Reid, Wei Dai | 5.6.1 | no |
crypto++ - sha512 | Steve Reid, Wei Dai | 5.6.1 | no |
CubeHash224 | Daniel J. Bernstein | SHA3 rnd 2, 64bit opt | no |
CubeHash256 | Daniel J. Bernstein | SHA3 rnd 2, 64bit opt | no |
CubeHash384 | Daniel J. Bernstein | SHA3 rnd 2, 64bit opt | no |
CubeHash512 | Daniel J. Bernstein | SHA3 rnd 2, 64bit opt | no |
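The measurement scheme in points 6-10 — run the transform many times per timing so the clock-read overhead is amortized, keep looping until a minimum wall time has passed, and take the minimum over several independent measurements — can be sketched as follows. This is an illustrative sketch, not the tool's actual code: the function and parameter names (`benchmark`, `overhead_iters`, `min_time_ms`) are made up, and `zlib` stands in for a real codec.

```python
import time
import zlib


def benchmark(transform, block, overhead_iters=64, iters=3, min_time_ms=100):
    """Return the best observed time per call of `transform(block)`.

    Mirrors the scheme above: each timing loop runs the transform
    `overhead_iters` times per clock read (point 6), loops until at
    least `min_time_ms` ms have elapsed (point 9), and the whole
    measurement is repeated `iters` times with the minimum runtime
    taken (point 10).
    """
    best = float("inf")
    for _ in range(iters):
        runs = 0
        start = time.perf_counter()
        while True:
            for _ in range(overhead_iters):
                transform(block)
            runs += overhead_iters
            elapsed = time.perf_counter() - start
            if elapsed * 1000.0 >= min_time_ms:
                break
        best = min(best, elapsed / runs)
    return best  # seconds per call, minimum over `iters` measurements


# Usage: time a stand-in "codec" on a block small enough to stay in cache.
block = bytes(range(256)) * 256  # 64 KiB test block
t = benchmark(lambda b: zlib.compress(b), block, min_time_ms=20)
print(f"{len(block) / t / 1e6:.1f} MB/s")
```

Taking the minimum (rather than the mean) of the repeated runs is the usual way to suppress one-sided noise from interrupts and scheduling, which can only make a run slower, never faster.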