Re: RadosGW - Performance Expectations

For reference, with parallel writes using the S3 Go API (via hsbench: https://github.com/markhpc/hsbench), I was recently doing about 600ish MB/s to a single RGW instance from one client. RadosGW used around 3ish HW threads from a 2016-era Xeon to do that. I didn't try single-file tests in that case, though; those would likely have been slower.
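If you want to reproduce that kind of parallel upload without hsbench, the AWS SDK for Go's s3manager uploader splits an object into parts and uploads them concurrently. A minimal sketch, assuming SDK v1; the endpoint, credentials, bucket and file name below are placeholders, not anything from this thread:

package main

import (
    "log"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
    // Talk to the RGW endpoint directly; host, port and keys are placeholders.
    sess := session.Must(session.NewSession(&aws.Config{
        Region:           aws.String("default"),
        Endpoint:         aws.String("http://rgw-host:7480"),
        S3ForcePathStyle: aws.Bool(true),
        Credentials:      credentials.NewStaticCredentials("ACCESSKEY", "SECRETKEY", ""),
    }))

    // Multipart upload with 256 MiB parts and 8 parts in flight at a time.
    up := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
        u.PartSize = 256 * 1024 * 1024
        u.Concurrency = 8
    })

    f, err := os.Open("testfile-16g")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    if _, err := up.Upload(&s3manager.UploadInput{
        Bucket: aws.String("testbucket"),
        Key:    aws.String("testfile-16g"),
        Body:   f,
    }); err != nil {
        log.Fatal(err)
    }
}

The same idea applies to any client: a single PUT stream rarely saturates the gateway, while several parts in flight usually get much closer.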

Mark

On 2/10/23 09:59, Shawn Weeks wrote:
With these options I still see around 38-40 MB/s for my 16 GB test file. So far my testing is mostly synthetic; I'm going to be using programs like GitLab and Sonatype Nexus that store their data in object storage. At work I deal with real S3 and regularly see upload speeds in the hundreds of MB/s, so I was kinda surprised that the aws cli was only doing 25 or so.
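One knob worth checking on the aws cli side is its multipart tuning: the defaults are 8 MB parts and 10 concurrent requests, and raising both can make a noticeable difference against a single RGW. A rough sketch, with the host, port and bucket name as placeholders:

# bigger parts and more of them in flight (illustrative values)
aws configure set default.s3.multipart_chunksize 64MB
aws configure set default.s3.max_concurrent_requests 20

# upload straight at the RGW endpoint
aws --endpoint-url http://rgw-host:7480 s3 cp testfile-16g s3://testbucket/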

Thanks
Shawn

On Feb 10, 2023, at 8:46 AM, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

The problem I’m seeing is that after setting up RadosGW I can only upload to “S3” at around 25 MB/s with the official AWS CLI. Using s3cmd is slightly better at around 45 MB/s. I’m going directly to the RadosGW instance with no load balancers in between and no SSL enabled. Just trying to figure out if this is normal. I’m not expecting it to be as fast as writing directly to an RBD, but I was kinda hoping for more than this.

So what should I expect in performance from the RadosGW?

For s3cmd, I have some perf options I use:

multipart_chunk_size_mb = 256
send_chunk = 262144
recv_chunk = 262144
and I frequently see 100-150 MB/s for well-connected client runs,
especially if you repeat uploads and use s3cmd's --cache-file=FILE
option so that you don't benchmark your local computer's ability to
checksum the object(s).
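For completeness, a repeat-upload run with that checksum cache might look like this (bucket name and cache path are placeholders):

s3cmd put --cache-file=$HOME/.s3cmd-md5cache testfile-16g s3://testbucket/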

But I would also consider using rclone and/or something that actually
makes sure to split up large files/objects and upload the parts in
parallel. We have HDD+NVMe clusters on 25GbE networks that ingest some
1.5-2 GB/s using lots of threads and many clients, but the totals are
in that vicinity. Several load balancers and some 6-9 RGWs to share
the load help there.
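If you try rclone, the knobs that matter for a single big object are the S3 chunk size and the per-object upload concurrency; --transfers only helps once multiple objects are moving at the same time. A sketch, assuming an S3-type remote named cephs3 already configured to point at the RGW:

rclone copy testfile-16g cephs3:testbucket \
    --s3-chunk-size 64M --s3-upload-concurrency 8 --transfers 4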

--
May the most significant bit of your life be positive.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx