On Wed, Jul 5, 2023 at 10:38 AM Matt Benjamin <mbenjami@xxxxxxxxxx> wrote:
>
> The "active" strategy outlined here seems to be pretty strange. The attacker's data needs to be combined into the unencrypted data being sent via S3, so I guess it has compromised the S3 client environment or dataset already in some way? I'm not sure if that qualifies as a direct attack on compression+encryption in the S3 service.

the active version does sound a bit contrived. but the passive version does leak some information about the data at rest, and server-side encryption is meant to protect against storage admins. any rados client with read access to the data pool can calculate these compression ratios to test whether two same-sized s3 objects might contain different data

> In general, it seems like whether to compress+encrypt should be a policy decision, not something arbitrarily chosen by the S3 implementation (as this article claims minio does).

agreed, but the default should be off, and we'd need to understand and clearly document the tradeoffs. the current 'zonegroup feature' mechanism defaults to on for new zones, so https://github.com/ceph/ceph/pull/52300 isn't sufficient as-is

>
> Matt
>
> On Wed, Jul 5, 2023 at 10:24 AM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>>
>> thanks Josh, i wasn't aware of attacks in this space. a naive search
>> came up with a blog post from minio about a 'compression-ratio side
>> channel' at https://blog.min.io/c-e-compression-encryption/. is that
>> the same kind of issue you're concerned about here?
>>
>> without more time to evaluate this, i'm tempted to disable the feature for reef
>>
>> On Tue, Jul 4, 2023 at 3:13 AM Josh Salomon <jsalomon@xxxxxxxxxx> wrote:
>> >
>> > Please note that compression before encryption is considered a security breach. I would not implement this without a clear warning and specific user approval.
>> >
>> > Regards,
>> >
>> > Josh
>> >
>> >
>> > On Mon, Jul 3, 2023 at 10:54 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>> >>
>> >> i opened https://github.com/ceph/ceph/pull/52300 to require a
>> >> 'compress-encrypted' zonegroup feature for this, and updated the
>> >> documentation and release note accordingly
>> >>
>> >>
>> >> On Mon, Jul 3, 2023 at 2:11 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>> >> >
>> >> > hey Shilpa and team,
>> >> >
>> >> > early in the reef cycle, https://github.com/ceph/ceph/pull/46188 was
>> >> > contributed to support the combination of server-side compression and
>> >> > encryption on the same object data. only recently did we catch a
>> >> > regression in multisite, where such objects fail to replicate and can
>> >> > cause crashes. this bug, tracked in
>> >> > https://tracker.ceph.com/issues/57905, was just fixed and backported
>> >> > for reef in https://github.com/ceph/ceph/pull/52297. this was a
>> >> > regression in reef, so i was planning to treat it as a blocker
>> >> >
>> >> > in that backport, i added a warning to the original release note:
>> >> >
>> >> > RGW: Compression is now supported for objects uploaded with
>> >> > Server-Side Encryption. When both are enabled, compression is applied
>> >> > before encryption.
>> >> > WARNING: In a multisite configuration, objects that are both
>> >> > compressed and encrypted will not replicate correctly to Pacific or
>> >> > Quincy. Upgrade all zones to Reef before enabling compression.
>> >> >
>> >> > it occurs to me that we might add a new 'compress-encrypted' feature
>> >> > flag to the zonegroup (similar to the 'resharding' flag in reef) to
>> >> > prevent this combination of compression+encryption until all zones
>> >> > upgrade and enable it. do you think that's worth doing, or is a
>> >> > release note sufficient?
>> >> _______________________________________________
>> >> Dev mailing list -- dev@xxxxxxx
>> >> To unsubscribe send an email to dev-leave@xxxxxxx
>> >
>
> --
>
> Matt Benjamin
> Red Hat, Inc.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
>
> http://www.redhat.com/en/technologies/storage
>
> tel. 734-821-5101
> fax. 734-769-8938
> cel. 734-216-5309

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
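
For illustration, here is a minimal sketch (not from the thread) of the passive check Casey describes above, using the python-rados bindings: a client with read access to the RGW data pool compares the stored sizes of the RADOS objects behind two S3 objects of equal logical size. Because compression runs before encryption, a difference in stored size reveals a difference in compressibility, and therefore likely a difference in plaintext, without decrypting anything. The pool name 'default.rgw.buckets.data', the script name, and the assumption that each S3 object maps to a single known RADOS object are simplifications for the example; in practice larger S3 objects are striped across several RADOS objects, so a real comparison would sum the sizes of all of them.

    # compare_stored_sizes.py -- hypothetical illustration only.
    # Requires the python-rados bindings and read access to the RGW data pool.
    import sys
    import rados

    POOL = 'default.rgw.buckets.data'  # common default data pool name; adjust per zone

    def stored_size(ioctx, rados_obj):
        # Ioctx.stat() returns (size, mtime); size is the size as stored in
        # RADOS, which reflects compression applied before encryption.
        size, _mtime = ioctx.stat(rados_obj)
        return size

    def main(obj_a, obj_b):
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(POOL)
            try:
                size_a = stored_size(ioctx, obj_a)
                size_b = stored_size(ioctx, obj_b)
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()
        print('%s: %d bytes stored' % (obj_a, size_a))
        print('%s: %d bytes stored' % (obj_b, size_b))
        if size_a != size_b:
            print('different stored sizes -> the plaintexts almost certainly differ')
        else:
            print('same stored size -> no conclusion either way')

    if __name__ == '__main__':
        main(sys.argv[1], sys.argv[2])

Note that this needs no key material and no S3 credentials, only rados read access to the data pool, which is why the thread treats it as a leak against the threat model server-side encryption is meant to address (protecting object data from storage admins).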