What data do you have to compress? Did you benchmark compression efficiency?

k

Sent from my iPhone

> On 21 Oct 2021, at 17:43, Elias Abacioglu <elias.abacioglu@xxxxxxxxxxxxxxxxx> wrote:
>
> Hi
>
> I've been trying to Google for information about Bluestore compression, but most
> articles I find are quite old, from Ceph versions where the zstd compression
> level was hardcoded to 5.
> I've been thinking about enabling zstd compression with a
> `compressor_zstd_level` of "-1" in most of my pools. Any thoughts?
> Does anyone have any recent benchmarks of this?
>
> zstd compression on a normal server CPU should be faster than HDD writes,
> and decompression should also be faster than any HDD reads.
> I don't know about NVMe drives; many of those disks can write faster than the
> CPU can compress the data. But if the compression ratio is good, you also have
> less data to write. When it comes to reads, I guess it depends on the NVMe
> disk and the CPU you have.
>
> I've also been wondering about the pros and cons of compression.
> I guess some pros would be:
> - Less data to scrub (since less data is stored on the drives)
> - Less network traffic (for replicas and such; I guess it depends on where the
> compression takes place)?
> - Less wear and tear on the drives (since less data is written and read)
> I also wonder: where does the compression take place?
>
> A con would be if it is slower, but I guess that depends on which CPU, drives
> and storage controller you use, and also on what data you write/read.
>
> But a fresh benchmark would be nice. This would be especially interesting
> since this PR was merged: https://github.com/ceph/ceph/pull/33790
> which changed the default compression level to 1 and allows you to set your
> own compression level for zstd.
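For anyone following along, per-pool compression can be enabled from the CLI. A rough sketch below; the pool name `mypool` is a placeholder, and I haven't verified how negative ("fast") zstd levels are parsed by `ceph config set` on every release, so treat that part as an assumption to test on a non-production pool first:

```shell
# Enable zstd compression on one pool ("mypool" is a placeholder name).
ceph osd pool set mypool compression_algorithm zstd
ceph osd pool set mypool compression_mode aggressive

# Set the level used by the zstd compressor plugin; 1 is the default
# after https://github.com/ceph/ceph/pull/33790 was merged.
ceph config set osd compressor_zstd_level 1
```

`compression_mode` accepts none/passive/aggressive/force; `aggressive` compresses all writes unless a client hints that data is incompressible, which is probably what you want for a benchmark.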
>
> /Elias
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx