Re: RadosGW - Performance Expectations


With s5cmd and its defaults I got around 127 MB/s for a single 16 GB test file. Is there any way to make s5cmd give feedback while it's running? At first I didn't think it was working because it just sat there for a while.
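For anyone else wondering about feedback: s5cmd is silent until each operation finishes, but it has flags that make runs more visible. A hedged sketch based on s5cmd's documented options (`--log` and `--stat` are global flags; `--show-progress` on `cp` was added in newer releases, so check your version; file and bucket names below are placeholders):

```shell
# Debug-level logging prints more detail per operation (global flag)
s5cmd --log debug cp testfile.bin s3://test-bucket/testfile.bin

# Newer s5cmd builds can show a progress bar during cp (cp-level flag)
s5cmd cp --show-progress testfile.bin s3://test-bucket/testfile.bin

# Print a throughput/operation summary after the run completes
s5cmd --stat cp testfile.bin s3://test-bucket/testfile.bin
```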

Thanks
Shawn

On Feb 10, 2023, at 8:45 AM, Matt Benjamin <mbenjami@xxxxxxxxxx> wrote:

Hi Shawn,

To get another S3 upload baseline, I'd recommend doing some upload testing with s5cmd [1], which uploads in parallel by default.

1. https://github.com/peak/s5cmd

Matt


On Fri, Feb 10, 2023 at 9:38 AM Shawn Weeks <sweeks@xxxxxxxxxxxxxxxxxx> wrote:
Good morning everyone. I've been running a small Ceph cluster with Proxmox for a while now, and I've finally run across an issue I can't find any information on. I have a 3-node cluster with 9 Samsung PM983 960GB NVMe drives running on a dedicated 10Gb network. RBD and CephFS performance have been great: most of the time I see over 500 MB/s writes, and a rados benchmark shows 951 MB/s write and 1140 MB/s read bandwidth.

The problem I'm seeing is that after setting up RadosGW I can only upload to "S3" at around 25 MB/s with the official AWS CLI. Using s3cmd is slightly better at around 45 MB/s. I'm going directly to the RadosGW instance, with no load balancers in between and no SSL enabled. I'm just trying to figure out if this is normal. I'm not expecting it to be as fast as writing directly to RBD, but I was hoping for more than this.
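One thing worth checking with the AWS CLI is its multipart transfer configuration, since the defaults are fairly conservative for a fast local network. These are documented AWS CLI S3 configuration options; the values and the endpoint/bucket names below are only illustrative:

```shell
# Raise parallelism and part size for aws s3 cp (example values)
aws configure set default.s3.max_concurrent_requests 20
aws configure set default.s3.multipart_threshold 64MB
aws configure set default.s3.multipart_chunksize 64MB

# Re-test the upload against the RGW endpoint (placeholder host/port)
aws --endpoint-url http://rgw.example.com:8080 \
    s3 cp testfile.bin s3://test-bucket/testfile.bin
```

With a single stream and small parts, per-request latency dominates; more concurrent requests and larger parts usually move the bottleneck back to the network or the OSDs.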

So what should I expect in performance from the RadosGW?

Here are some rados bench results and my ceph report:

https://gist.github.com/shawnweeks/f6ef028284b5cdb10d80b8dc0654eec5

https://gist.github.com/shawnweeks/7cfe94c08adbc24f2a3d8077688df438

Thanks
Shawn
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


--

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309




