Re: Compression of data in existing cephfs EC pool

Hi,

On 1/4/21 5:27 PM, Paul Mezzanini wrote:
Hey everyone,

I've got an EC pool as part of our cephfs for colder data.  When we started using it, compression was still marked experimental.  Since then it has become stable, so I turned compression on and set it to "aggressive".  Using 'ceph df detail' I can see that new data is getting compressed.  We have a good chunk of data (~1P) that hasn't been through the compression side of things that I would love to get squished.
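
For reference, the knobs I mean are roughly these (the pool name below is just a placeholder for ours):

    # placeholder pool name -- substitute the actual cephfs data pool
    ceph osd pool set cephfs_data_ec compression_mode aggressive
    # per-pool compression shows up in the USED COMPR / UNDER COMPR columns
    ceph df detail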

I know there currently isn't a way to do this in place; it requires a new copy to be made, so I wrote a simple script to "wiggle" the data.  If the file is older than the date I turned compression on, and the file size is >64k: rsync the file to a new file sitting right next to the old one, delete the old one, then move the new file into its place.  New inode = new file = compressed data!
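
A minimal sketch of that wiggle, assuming bash and GNU find, with the path and cutoff date as placeholders (the real script has more safety checks):

    #!/bin/bash
    # Rewrite files that predate compression so they land on new
    # inodes and get compressed on write.
    CUTOFF="2020-11-01"   # placeholder: date compression was enabled
    find /cephfs/cold -type f -size +64k ! -newermt "$CUTOFF" -print0 |
    while IFS= read -r -d '' f; do
        rsync -a "$f" "$f.wiggle" &&   # copy right next to the original
        rm -f "$f" &&                  # drop the old inode
        mv "$f.wiggle" "$f"            # move the copy into place
    done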

In theory, this works awesome.  In practice, however, I'm not seeing the needle move as my script runs.  Does anyone have any ideas about what I may have missed for this dance?


Just my two cents:

Compression is an OSD-level operation, and the OSDs involved in a PG do not know about each other's compression settings. They probably do not need to, either, since each OSD is treated as a black box.


I would propose draining OSDs (one by one, or host by host, by setting OSD weights) to move the uncompressed data off. Afterwards, reset the weights to their former values to move the data back; when the data is rewritten, it should be compressed.
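
Something like this, with the OSD id and weight as placeholders (note the current crush weight first, since reweighting to 0 does not remember it):

    # note the current crush weight so it can be restored later
    ceph osd tree | grep osd.12
    # drain: PGs move to other OSDs and are compressed as they are written
    ceph osd crush reweight osd.12 0
    # wait until backfill/recovery has finished
    ceph -s
    # restore the former weight; the data moving back is compressed on write
    ceph osd crush reweight osd.12 1.81940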

Compression should also happen when the data is written to the other OSDs as it moves off the drained one, but those OSDs will then hold a mix of compressed and uncompressed data, so you will have to process all OSDs.
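
To see how much of a given OSD is actually compressed, the BlueStore perf counters can help (again a placeholder OSD id; run this on the OSD's host):

    # bluestore_compressed_original vs. bluestore_compressed shows the
    # on-disk savings for this OSD
    ceph daemon osd.12 perf dump | grep bluestore_compressed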


If this works as expected, you do not have to touch the data at the filesystem level at all; the operation happens solely on the underlying storage.


Regards,

Burkhard

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


