Re: rgw: streaming interfaces for object read/write

On Fri, Oct 29, 2021 at 9:57 AM Daniel Gryniewicz <dang@xxxxxxxxxx> wrote:
>
>
>
> On 10/28/21 12:42 PM, Casey Bodley wrote:
> <snip>
> >
> > right. for RGWGetObj, all the data eventually streams through
> > RGWGetObj_CB to be written out to the client via the frontend. the
> > decompression and decryption filters are wrapping that. so if we
> > wanted to let zipper stores transform the
> > unencrypted/uncompressed stream, we'd want to give them a chance to
> > wrap the stream before those other filters are added
> >
> > in rgw::sal, this could look something like:
> >
> >    virtual RGWGetDataCB* add_filter(RGWGetDataCB* cb) { return cb; }
> >
> > which by default doesn't add any filters, but a store could override
> > this to return some other filter that wraps 'cb'
> >
>
> It occurs to me that we probably want encryption and compression to be
> per-store anyway, since different stores may have different built-in
> capabilities for those features.  For example, a PostgreSQL database
> with its own encryption enabled.  This means we probably want to move
> the encryption/compression into the zipper stack, rather than above it,
> regardless.
>
> Daniel
>

as a feature of S3, server-side encryption has some specific
requirements that a backend's default encryption probably won't
satisfy. the backend itself shouldn't be able to decrypt this data
without assistance either from the client (sse-c) or the KMS (sse-kms,
sse-s3). so i think it makes sense for encryption to stay above zipper

i think compression probably makes more sense above zipper too, mainly
because we already have a generic implementation that should work
against any backend. the backends themselves may also support
compression (bluestore does!). but there will be tradeoffs to consider
(like cpu usage vs. bandwidth) when deciding which layer should apply
this compression, and i think it makes sense to support both and let
each deployment decide

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


