Re: Copying big objects (>5GB) doesn't work after upgrade to Quincy on S3

Hi Arvydas,

It looks like this change corresponds to
https://tracker.ceph.com/issues/48322 and
https://github.com/ceph/ceph/pull/38234. The intent was to enforce the
same limitation as AWS S3 and force clients to use multipart copy
instead. This limit is controlled by the config option
rgw_max_put_size, which defaults to 5 GiB. The same option controls
other operations like Put/PostObject, so I wouldn't recommend raising
it as a workaround for copy.
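
For what it's worth, the aws-sdk-s3 gem can do the part splitting for
you via the multipart_copy option on Aws::S3::Object#copy_from. A
minimal sketch, assuming made-up bucket names and a hypothetical RGW
endpoint (adjust for your own setup and credentials):

  require 'aws-sdk-s3'

  # Hypothetical RGW endpoint; point this at your gateway.
  s3 = Aws::S3::Resource.new(
    endpoint: 'https://rgw.example.com',
    force_path_style: true
  )

  source = s3.bucket('src-bucket').object('big-object')
  target = s3.bucket('dst-bucket').object('big-object')

  # multipart_copy: true tells the SDK to split the copy into
  # UploadPartCopy requests instead of issuing a single CopyObject,
  # which is what RGW now requires for objects over 5 GiB.
  target.copy_from(source, multipart_copy: true)

The AWS CLI's "cp" command does this splitting automatically, which
matches what you saw in the RGW logs.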

This change really should have been mentioned in the release notes -
apologies for that omission.

On Tue, Oct 10, 2023 at 10:58 AM Arvydas Opulskis <zebediejus@xxxxxxxxx> wrote:
>
> Hi all,
>
> After upgrading our cluster from Nautilus -> Pacific -> Quincy, we noticed
> we can't copy bigger objects via S3 anymore.
>
> An error we get:
> "Aws::S3::Errors::EntityTooLarge (Aws::S3::Errors::EntityTooLarge)"
>
> After some tests, we have the following findings:
> * Problems start for objects bigger than 5 GB (the multipart limit)
> * The issue starts after upgrading to Quincy (17.2.6). In the latest
> Pacific (16.2.13) it works fine.
> * For Quincy it works OK with the AWS S3 CLI "cp" command, but doesn't work
> when using the AWS Ruby3 SDK client with the copy_object call.
> * For the Pacific setup, both clients work OK
> * From the RGW logs it seems the AWS S3 CLI client handles multipart copying
> "under the hood", so it is successful.
>
> The AWS documentation states that for uploads (and copies) bigger than
> 5 GB we should use the multipart API for AWS S3. For some reason it
> worked for years in Ceph and stopped working after the Quincy release,
> even though I couldn't find anything in the release notes addressing this change.
>
> So, is this change permanent, and should it be considered a bug fix?
>
> Both the Pacific and Quincy clusters were running on Rocky Linux 8.6, using
> the Beast frontend.
>
> Arvydas
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



