On Tue, Oct 29, 2019 at 7:26 PM Bryan Stillwell <bstillwell@xxxxxxxxxxx> wrote:
>
> Thanks Casey,
>
> If I'm understanding this correctly, the only way to turn on RGW compression in Luminous is to do it essentially cluster-wide, since all our existing buckets use the same placement rule? That's not going to work for what I want to do, since it's a shared cluster and other buckets need the performance.

Luminous does support placement rules (just not compression at the per-bucket level), but you can create two placement rules (or, in Nautilus, storage classes) that go to different pools, and then enable compression at the pool level. Rough commands are sketched below.

Regarding enabling compression after the fact: could the ancient "bucket rewrite" command be helpful here? I guess it detects that a rewrite isn't necessary and does nothing, but it should be rather simple to add a --force-rewrite flag or something.
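For the placement split, something like the following should work on Luminous. Untested, and the placement ID and pool names here are made up; note also that pool-level compression is a BlueStore feature, so it only helps once the OSDs are converted:

  # New placement target in the zonegroup, mapped in the zone to its
  # own data pool (example names):
  radosgw-admin zonegroup placement add --rgw-zonegroup=default \
      --placement-id=compressed-placement
  radosgw-admin zone placement add --rgw-zone=default \
      --placement-id=compressed-placement \
      --data-pool=default.rgw.compressed.data \
      --index-pool=default.rgw.buckets.index

  # Compress that data pool (BlueStore OSDs only):
  ceph osd pool set default.rgw.compressed.data compression_algorithm snappy
  ceph osd pool set default.rgw.compressed.data compression_mode aggressive

Alternatively, RGW-side compression can be switched on for an existing placement target, as Casey describes below; that works with FileStore too and applies to new uploads once the gateways restart:

  radosgw-admin zone placement modify --rgw-zone=default \
      --placement-id=default-placement --compression=zlib

(In a multisite setup, follow up with "radosgw-admin period update --commit".)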
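As for bucket rewrite, the current invocation is just the following; to be clear, the --force-rewrite flag mentioned above is hypothetical and doesn't exist today, so objects that don't need a rewrite get skipped:

  radosgw-admin bucket rewrite --bucket=<bucket-name>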
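And for the Nautilus route Casey describes below, adding a compressed storage class to the existing placement target would look roughly like this (the COMPRESSED class name and the pool name are examples):

  radosgw-admin zonegroup placement add --rgw-zonegroup=default \
      --placement-id=default-placement --storage-class=COMPRESSED
  radosgw-admin zone placement add --rgw-zone=default \
      --placement-id=default-placement --storage-class=COMPRESSED \
      --data-pool=default.rgw.compressed.data --compression=zlib

Or compress the default class in place (again, new uploads only):

  radosgw-admin zone placement modify --rgw-zone=default \
      --placement-id=default-placement --storage-class=STANDARD \
      --compression=zlib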
Paul

>
> We're in the process of upgrading to Nautilus and switching to BlueStore; unfortunately this cluster hasn't been converted yet. I do appreciate the details of what Nautilus added, though!
>
> Thanks again,
> Bryan
>
> On Oct 29, 2019, at 11:12 AM, Casey Bodley <cbodley@xxxxxxxxxx> wrote:
> > Luminous docs about pool placement and compression can be found at
> > https://docs.ceph.com/docs/luminous/radosgw/placement/. You're correct
> > that a bucket's placement target is set on creation and can't be
> > changed. But the placement target itself can be modified to enable
> > compression after the fact, and (once the gateways restart) the
> > compression would take effect on new objects uploaded to buckets with
> > that placement rule.
> >
> > In Nautilus, the compression setting is per storage class. See the
> > updated docs at https://docs.ceph.com/docs/nautilus/radosgw/placement/
> > for details. So you could either add a new storage class to your
> > existing placement target that enables compression, and use the S3 APIs
> > like COPY Object or lifecycle transitions to compress existing object
> > data. Or you could modify the default STANDARD storage class to enable
> > compression, which would again apply only to new object uploads.
> >
> > For per-user compression, you can specify a default placement target
> > that applies when the user creates new buckets. And as of Nautilus you
> > can specify a default storage class to be used for new object uploads -
> > just note that some 'helpful' S3 clients will insert an
> > 'x-amz-storage-class: STANDARD' header to requests that don't specify
> > one, and the presence of this header will override the user's default
> > storage class.
> >
> > On 10/29/19 12:20 PM, Bryan Stillwell wrote:
> >> I'm wondering if it's possible to enable compression on existing RGW buckets? The cluster is running Luminous 12.2.12 with FileStore as the backend (no BlueStore compression then).
> >>
> >> We have a cluster that recently started to rapidly fill up with compressible content (qcow2 images), and I would like to enable compression for new uploads to slow the growth. The documentation seems to imply that changing zone placement rules can only be done at creation time. Is there something I'm missing that would allow me to enable compression on a per-bucket or even a per-user basis after a cluster has been used for quite a while?
> >>
> >> Thanks,
> >> Bryan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx