Re: Compression on existing RGW buckets

Hi Bryan,

Luminous docs about pool placement and compression can be found at https://docs.ceph.com/docs/luminous/radosgw/placement/. You're correct that a bucket's placement target is set on creation and can't be changed. But the placement target itself can be modified to enable compression after the fact, and (once the gateways restart) the compression would take effect on new objects uploaded to buckets with that placement rule.
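
For example, a minimal sketch assuming the common single-zone setup (the zone name 'default' and placement target 'default-placement' are just the usual defaults; substitute your own, and pick whichever compression plugin you prefer, e.g. zlib, snappy or zstd):

    # enable compression on the existing placement target in the zone config
    radosgw-admin zone placement modify \
        --rgw-zone default \
        --placement-id default-placement \
        --compression zlib

    # in a multisite setup, also commit the change with 'radosgw-admin period update --commit',
    # then restart the gateways so they pick up the new placement config
    systemctl restart ceph-radosgw.target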

In Nautilus, the compression setting is per storage class. See the updated docs at https://docs.ceph.com/docs/nautilus/radosgw/placement/ for details. So you could either add a new storage class that enables compression to your existing placement target, then use S3 APIs like COPY Object or lifecycle transitions to rewrite (and compress) existing object data; or you could modify the default STANDARD storage class to enable compression, which would again apply only to new object uploads.
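
A rough sketch of both options (the storage class name COMPRESSED, the data pool, bucket, key and endpoint below are placeholders; the copy-object call also assumes your S3 client will pass a non-AWS storage class value through, since some clients validate against Amazon's own list):

    # option 1: add a new storage class to the existing placement target,
    # pointing it at a data pool and a compression type
    radosgw-admin zonegroup placement add \
        --rgw-zonegroup default \
        --placement-id default-placement \
        --storage-class COMPRESSED
    radosgw-admin zone placement add \
        --rgw-zone default \
        --placement-id default-placement \
        --storage-class COMPRESSED \
        --data-pool default.rgw.buckets.data \
        --compression zlib

    # ...then rewrite an existing object into that class with COPY Object
    aws --endpoint-url http://rgw.example.com:8080 s3api copy-object \
        --bucket mybucket --key images/vm1.qcow2 \
        --copy-source mybucket/images/vm1.qcow2 \
        --storage-class COMPRESSED

    # option 2: enable compression on the default STANDARD class instead
    # (affects new uploads only)
    radosgw-admin zone placement modify \
        --rgw-zone default \
        --placement-id default-placement \
        --storage-class STANDARD \
        --compression zlib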

For per-user compression, you can specify a default placement target that applies when the user creates new buckets. And as of Nautilus you can specify a default storage class to be used for new object uploads. Just note that some 'helpful' S3 clients will add an 'x-amz-storage-class: STANDARD' header to requests that don't specify one, and the presence of this header overrides the user's default storage class.
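
Roughly, as a sketch (the uid 'bryan' and storage class COMPRESSED are placeholders, and it's worth double-checking the exact flag names against radosgw-admin help on your version; the same default_placement / default_storage_class fields can also be edited with 'radosgw-admin metadata get/put user:<uid>'):

    # default placement target for buckets this user creates from now on
    radosgw-admin user modify --uid bryan --placement-id default-placement

    # default storage class for the user's new object uploads (Nautilus)
    radosgw-admin user modify --uid bryan --storage-class COMPRESSED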

On 10/29/19 12:20 PM, Bryan Stillwell wrote:
I'm wondering if it's possible to enable compression on existing RGW buckets?  The cluster is running Luminous 12.2.12 with FileStore as the backend (no BlueStore compression then).

We have a cluster that recently started to rapidly fill up with compressible content (qcow2 images) and I would like to enable compression for new uploads to slow the growth.  The documentation seems to imply that changing zone placement rules can only be done at creation time.  Is there something I'm missing that would allow me to enable compression on a per-bucket or even a per-user basis after a cluster has been used for quite a while?

Thanks,
Bryan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx