Re: Compression of data in existing cephfs EC pool

That does make sense, and I wish it were true; however, what I'm seeing doesn't support your hypothesis. I've had several drives die and be replaced since the go-live date, and I'm in the home stretch of reducing pg_num on that pool, so pretty much every PG has already been moved several times over.

It's also possible that my method for checking compression is flawed. Spot checks against an OSD stat dump and ceph df detail seem to line up, so I don't believe that's the case.
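
For the curious, the spot check is roughly the following (untested sketch; the pool name and OSD id are placeholders, and the JSON field names may differ slightly between releases):

#!/usr/bin/env python3
# Rough spot check: compare pool-level compression stats from "ceph df detail"
# with the BlueStore compression counters in one OSD's perf dump.
# POOL and OSD_ID are placeholders; JSON field names may vary by release.
import json
import subprocess

POOL = "cephfs_data_ec"   # placeholder pool name
OSD_ID = 12               # placeholder OSD id

# Pool-level view: the USED COMPR / UNDER COMPR columns of "ceph df detail".
df = json.loads(subprocess.check_output(["ceph", "df", "detail", "-f", "json"]))
pool = next(p for p in df["pools"] if p["name"] == POOL)
print("pool compress_bytes_used  :", pool["stats"].get("compress_bytes_used"))
print("pool compress_under_bytes :", pool["stats"].get("compress_under_bytes"))

# Per-OSD view: BlueStore compression counters from the perf dump (already JSON).
perf = json.loads(subprocess.check_output(["ceph", "tell", f"osd.{OSD_ID}", "perf", "dump"]))
bs = perf.get("bluestore", {})
for counter in ("bluestore_compressed",
                "bluestore_compressed_original",
                "bluestore_compressed_allocated"):
    print(f"osd.{OSD_ID} {counter}:", bs.get(counter))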

The only time I see the counters move is when someone puts new data in via Globus or via a migration from a cluster job.

I will test what you proposed, though, by draining an OSD, refilling it, and then checking the stat dump to see what lives under compression and what does not.

-paul 

--
Paul Mezzanini
Sr Systems Administrator / Engineer, Research Computing
Information & Technology Services
Finance & Administration
Rochester Institute of Technology
o:(585) 475-3245 | pfmeec@xxxxxxx

CONFIDENTIALITY NOTE: The information transmitted, including attachments, is
intended only for the person(s) or entity to which it is addressed and may
contain confidential and/or privileged material. Any review, retransmission,
dissemination or other use of, or taking of any action in reliance upon this
information by persons or entities other than the intended recipient is
prohibited. If you received this in error, please contact the sender and
destroy any copies of this information.
------------------------

________________________________________



Just my two cents:

Compression is an OSD-level operation, and the OSDs involved in a PG do not
know about each other's compression settings. They probably also do not
care, since each OSD is treated as a black box.


I would propose draining OSDs (one by one, or host by host, by setting OSD
weights) to move the uncompressed data off. Later, reset the weights to
their former values to move the data back; as the data is rewritten, it
should be compressed.
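
A minimal, untested sketch of that drain-and-refill cycle, assuming you drain by CRUSH weight (the OSD id and original weight are placeholders; in a real run you would watch "ceph -s" and the PG states rather than relying on a simple loop):

#!/usr/bin/env python3
# Untested sketch: drain one OSD by setting its CRUSH weight to 0, wait for
# the data to move off, then restore the original weight so the data moves
# back and is rewritten -- and therefore compressed -- on the way in.
# OSD_ID and ORIG_WEIGHT are placeholders; the "wait" below is deliberately
# crude -- in practice watch "ceph -s" and the PG states yourself.
import subprocess
import time

OSD_ID = 12            # placeholder OSD id
ORIG_WEIGHT = 7.27739  # placeholder: record the original CRUSH weight first

def ceph(*args):
    subprocess.check_call(["ceph", *args])

def cluster_status():
    return subprocess.check_output(["ceph", "-s"], text=True)

# Step 1: drain -- CRUSH weight 0 remaps all PGs off this OSD.
ceph("osd", "crush", "reweight", f"osd.{OSD_ID}", "0")

# Step 2: crude wait until no backfill/recovery is reported any more.
time.sleep(60)  # give the remapping a moment to start
while "backfill" in cluster_status() or "recovering" in cluster_status():
    time.sleep(60)

# Step 3: refill -- restore the weight; data written back to this OSD should
# now be compressed according to the pool/OSD compression settings.
ceph("osd", "crush", "reweight", f"osd.{OSD_ID}", str(ORIG_WEIGHT))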

Compression should also happen when the data is written to other OSDs as it
is moved off an OSD, but those OSDs will then hold a mix of compressed and
uncompressed data, so you will have to process all OSDs.


If this works as expected, you do not have to touch the data at the
filesystem level at all. The operation happens solely on the underlying
storage.


Regards,

Burkhard

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx