Note that if you try to reproduce the "contrib" CSVs from scratch, the missing decompressor for cudpp-compress will break the plot script, so you have to fake the decompressor timings with something like "sed s/;;/;99999;/" (these fake timings are, of course, not reported in the paper)
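A minimal sketch of the workaround above, assuming the empty decompressor field shows up as ";;" in the CSV (the input/output file names here are just examples):

```shell
# Substitute a dummy decompressor timing (99999) for the empty ";;"
# field left by the missing cudpp-compress decompressor, so the plot
# script can parse the file. The dummy values must not be reported.
sed 's/;;/;99999;/' bench-2070-contrib.csv > bench-2070-contrib-fixed.csv
```

Run the plot script on the patched file afterwards instead of the original.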
For MPC, GFC, nvCOMP LZ4, nvCOMP Cascaded, and cudppCompress:
For ndzip-gpu (the second number in the file name is the thread block size; we select 256 for single precision and 512 for double precision on all systems):
For CPU benchmarks on the Ryzen 3900X (RTX 2070) system:
Invoke the plot script separately for each system and CSV file like so:
python3 src/benchmark/plot_benchmark.py bench-2070-ndzip-gpu-256.csv  # For single-precision on RTX 2070
python3 src/benchmark/plot_benchmark.py bench-2070-ndzip-gpu-512.csv  # For double-precision on RTX 2070
python3 src/benchmark/plot_benchmark.py bench-2070-contrib.csv        # For all GPU 3rd-party compressors
python3 src/benchmark/plot_benchmark.py bench-2070-cpu.csv            # For CPU reference compressors
\documentclass[a4paper,8pt]{scrartcl}
\usepackage{booktabs}
\usepackage{graphicx}
\usepackage{tabularx}
\newcommand\algohead[1]{\rotatebox[origin=l]{90}{\hspace{-0.8ex}\ttfamily\small#1\hspace{0.5ex}}}
\begin{document}
\hspace{-20em}
\renewcommand\arraystretch{1.1}
\setlength\tabcolsep{0.45ex}
...
\end{document}