Re: bluestore zstd compression questions

I don't have any data yet.
I set up a k8s cluster with CephFS, RGW and RBD for k8s, so it's hard to
tell beforehand what we will store or what compression ratios to expect.
That also makes it hard to know how to benchmark, but I guess it will be
a mix of everything from very compressible to incompressible stuff.

What happens if you turn compression on or off for a pool? Is it possible
to change it after the pool has been created and already contains data?
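
From the docs I gather the pool-level knobs look something like the
following; I haven't tried flipping them on a pool that already holds
data, hence the question ("mypool" is just a placeholder):

    # enable zstd on an existing pool (sketch, untested on my side)
    ceph osd pool set mypool compression_algorithm zstd
    ceph osd pool set mypool compression_mode aggressive
    # optional: only keep compressed blobs that save at least 12.5%
    ceph osd pool set mypool compression_required_ratio 0.875

My understanding is that this only affects data written after the change;
existing objects stay uncompressed until they are rewritten.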

/Elias

On Thu, Oct 21, 2021 at 4:50 PM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:

> What data do you have to compress? Have you benchmarked compression
> efficiency?
>
>
> k
>
> Sent from my iPhone
>
> > On 21 Oct 2021, at 17:43, Elias Abacioglu <elias.abacioglu@xxxxxxxxxxxxxxxxx> wrote:
> >
> > Hi
> >
> > I've been trying to Google for information about BlueStore compression,
> > but most of the articles I find are quite old, from Ceph versions where
> > the zstd compression level was hardcoded to 5.
> > I've been thinking about enabling zstd compression with a
> > `compressor_zstd_level` of "-1" in most of my pools. Any thoughts?
> > Does anyone have recent benchmarks of this?
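> >
> > Concretely, what I have in mind is something along these lines (an
> > untested sketch; I'm assuming these keys are accepted by `ceph config
> > set`):
> >
> >     # cluster-wide defaults for newly written data (sketch, untested)
> >     ceph config set osd bluestore_compression_algorithm zstd
> >     ceph config set osd bluestore_compression_mode aggressive
> >     ceph config set osd compressor_zstd_level -1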
> >
> > With a normal server CPU, zstd compression should be faster than HDD
> > writes, and decompression should also be faster than any HDD reads.
> > I'm less sure about NVMe drives: many of them can write faster than the
> > CPU can compress the data, but if the compression ratio is good you also
> > have less data to write. For reads I guess it depends on the NVMe disk
> > and the CPU you've got.
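> >
> > One way to sanity-check the CPU side would be zstd's built-in benchmark
> > mode on a sample of our own data ("sample.bin" is just a placeholder):
> >
> >     # benchmark compression levels 1 through 3 on a representative file
> >     zstd -b1 -e3 sample.bin
> >
> > It prints compression and decompression speed per level, which you can
> > compare against the drives' sequential throughput.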
> >
> > I've also been wondering about the pros and cons of compression.
> > I guess some pros would be:
> > - Less data to scrub (since less data is stored on the drives)
> > - Less network traffic (for replicas and such; though I guess that
> >   depends on where the compression takes place)?
> > - Less wear and tear on the drives (since less data is written and read)
> > Also, where does the compression actually take place? (A sketch of how
> > I'd check the compression stats follows after this list.)
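> >
> > To check how much actually gets compressed, I believe the per-OSD perf
> > counters expose it. Something like this would be my starting point,
> > though I haven't verified which counters are the relevant ones:
> >
> >     # dump perf counters on one OSD and filter for compression stats
> >     ceph daemon osd.0 perf dump | grep -i compress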
> >
> > A con would be if it is slower, but I guess that depends on which CPU,
> > drives and storage controller you use, and on what data you write and
> > read.
> >
> > A fresh benchmark would be nice, though. It would be especially
> > interesting now that this PR has been merged:
> > https://github.com/ceph/ceph/pull/33790
> > It changed the default zstd compression level to 1 and lets you set
> > your own compression level.
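> >
> > By "fresh benchmark" I mean something simple, e.g. rados bench against
> > a compression-enabled pool versus an identical pool without it
> > ("testpool" is just a placeholder name):
> >
> >     # 60s write benchmark, keeping the objects for a read pass
> >     rados bench -p testpool 60 write --no-cleanup
> >     # 60s sequential read benchmark over the objects just written
> >     rados bench -p testpool 60 seq
> >     # remove the benchmark objects afterwards
> >     rados -p testpool cleanup
> >
> > Though I'm not sure how compressible the rados bench payload is, so it
> > might take real data to get meaningful compression ratios.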
> >
> > /Elias
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



