On Sun, 20 Feb 2022 19:14:48 +0100
Sebastian Andrzej Siewior <sebastian@xxxxxxxxxxxxx> wrote:

> > One benefit is to just have a test case for multiple versions. And who
> > knows, if it is an easy implementation, perhaps there's something that
> > needs it in a very limited embedded environment? Does it really hurt
> > keeping it, even though it may never be used?
>
> Especially in embedded environments I would prefer zstd over zlib because
> of its performance. These days, if you can afford to compile trace-cmd,
> you should be able to include zstd, too. One downside is probably that
> the zstd library is larger than libz.

Maybe I'll make it a compile time option. Just for history sake ;-)

> > > It might make sense to use zstd while dumping the per-CPU data files
> > > to disk before the trace.dat is written. It probably makes sense to
> > > use zstd by default instead of storing the .dat file uncompressed.
> >
> > I have a patch that does the compression by default. I'm holding off
> > pushing it until I finish my testing of the compressed versions.
>
> Awesome. I don't know how you make trace.dat in the end, but I saw that
> there is one trace file per CPU first. Would it make sense to compress
> these before they are written to disk?

We could do that, but it would definitely not be by default. The reason is
that the per-CPU files are created with the splice system call. That is,
the data *never* goes to user space from the time it leaves the internal
kernel ring buffer to the time it is added to the file (with zero copy, as
the data page is taken directly from the ring buffer). Having compression
would require passing the data to user space and back, which would mean a
higher probability of dropped events.

Perhaps we could add the compression algorithm into the kernel? :-/

-- Steve