Re: Max object size GB or TB in a bucket

They will upload from the same network segment as the one where the cluster is located.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Janne Johansson <icepic.dz@xxxxxxxxx> 
Sent: Friday, August 20, 2021 3:52 PM
To: Marc <Marc@xxxxxxxxxxxxxxxxx>
Cc: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>; Ceph Users <ceph-users@xxxxxxx>
Subject: Re:  Re: Max object size GB or TB in a bucket


Den fre 20 aug. 2021 kl 10:45 skrev Marc <Marc@xxxxxxxxxxxxxxxxx>:
>
> > > S3cmd chunks 15MB.
>
> There seems to be an s5cmd, which should be much much faster than s3cmd.

There are s4cmd, s5cmd, minio-mc and rclone, all of which have features that make them "better" than s3cmd in various ways, at the expense of lacking other options that s3cmd has, which you may or may not use.
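As one hypothetical illustration of the parallelism point (not taken from this thread): rclone runs several object transfers at once via its --transfers flag. The remote name, bucket and paths below are placeholders.

  # assumes an rclone remote named "cephs3" already configured against the RGW endpoint
  # uploads a local directory with 16 object transfers in parallel
  rclone copy ./dump cephs3:mybucket/prefix --transfers 16 --s3-upload-concurrency 4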

One can tune s3cmd a bit with multipart_chunk_size_mb (I use 256 MB if I am close, network-wise, to the rgws) and send_chunk / recv_chunk, which I have set to 262144, but if you need parallelism at the network layer, other S3 clients are probably better.
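For reference (not part of the original message), those knobs live in ~/.s3cfg, and the chunk size can also be given per run as --multipart-chunk-size-mb. A minimal excerpt with the values mentioned above would look roughly like this:

  [default]
  # multipart chunk size in MB (s3cmd's default is 15, as quoted above)
  multipart_chunk_size_mb = 256
  # size of the chunks s3cmd reads/writes over the network, in bytes
  send_chunk = 262144
  recv_chunk = 262144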

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



