Re: radosrgw performance problems

Hello,

I've added my answers below.

Thanks

Regards

Philipp

-----Original Message-----
From: Mark Nelson [mailto:mark.nelson@xxxxxxxxxxx]
Sent: Tuesday, June 11, 2013 16:38
To: Jäger, Philipp
Cc: ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: radosrgw performance problems

On 06/11/2013 08:27 AM, Jäger, Philipp wrote:
> Hello,
>
> we have a performance problem with radosrgw.
> We only see 8-9 MB/s per upload, also tested with s3cmd on the rgw host itself.
> (2 uploads at the same time: combined 15 MB/s; 3 uploads at the same
> time: combined 21 MB/s.) But when putting a file via rados/rbd we get a 40 MB/s upload, so there is no network or other general problem.

One thing to check is to make sure that the rgw pool you are writing to has enough placement groups for your cluster. The default may be extremely low.

[Philipp] We don't use the standard pool; we created a new pool with 1500 PGs (the cluster has 30 OSDs), same problem.
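For reference, one way to verify the PG count on the pool rgw actually writes into (the pool name below is the default rgw data pool; substitute your own):

```shell
# Show the placement group count of the rgw data pool
ceph osd pool get .rgw.buckets pg_num

# pg_num can only be increased, never decreased, e.g.:
# ceph osd pool set .rgw.buckets pg_num 1500
```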

>
> Same speed with the Inktank apache/fastcgi and the original one. The hardware is also fast enough. We use Ubuntu 12.04 LTS, ceph 0.61.2.
>
> So have you any idea why the rgw is so slow? How can we identify where the problem is?

RBD is pretty streamlined so you can get good performance with it.  On 
my test setup I'm seeing 80-90% of the performance of raw rados object 
writes/reads (and in some cases much faster with RBD cache enabled!). 
RGW, Apache, fastcgi, and simply the requirements of supporting the S3 
protocol itself add a lot of overhead.  MD5 calculations by themselves 
start chewing up a ton of CPU once you try to support high throughput 
scenarios and there is a non-trivial amount of extra latency added as 
well.  You may be able to improve things with some tweaks, but I 
wouldn't be surprised if RBD is always going to be faster to an extent.

[Philipp] We are talking about 9 MB/s per rgw, which is less than 1/4 of rbd (rados put: 40 MB/s); with rados bench we actually get: Bandwidth (MB/sec): 171.744.
So I think we are not talking about tweaking, but rather a general problem?
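(For comparison, a rados bench figure like the one above typically comes from a raw object write benchmark along these lines; the pool name and duration here are just examples:)

```shell
# 30-second raw object write benchmark, default 16 concurrent ops
rados bench -p testpool 30 write
```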


For folks who want really fast object storage I think directly utilizing 
rados is probably the way to go, but that requires modifying the app and 
it's not for everyone.

>
> (I've heard something about the rgw admin socket to check perf counters, but it seems that this is deprecated? When I type ceph --admin-daemon ... it says unknown command, and I cannot find it in the ceph documentation. I then wanted to benchmark via rest-bench, but it says "ERROR: failed to create bucket: XmlParseFailure -failed initializing benchmark", so I could not benchmark the speed.)

Connecting with the admin daemon should still be supported.
Documentation is here:

http://ceph.com/docs/next/radosgw/troubleshooting/

If this doesn't work please let me know!

[Philipp] How do you activate an rgw admin socket? I think we have to add an entry to ceph.conf? The admin socket is not the same as "rgw socket path", I think?
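Correct, the admin socket is separate from "rgw socket path" (which is the FastCGI socket Apache talks to), and it is enabled via ceph.conf. A minimal sketch, with the socket path chosen here being an assumption:

```ini
[client.radosgw.connect2]
    ; admin socket for perf counters, distinct from rgw socket path
    admin socket = /var/run/ceph/radosgw.connect2.asok
```

After restarting radosgw, the counters can then be read with `ceph --admin-daemon /var/run/ceph/radosgw.connect2.asok perf dump`.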


Also, I've created a bug for the rest-bench issue:

http://tracker.ceph.com/issues/5302

Personally I've been using swift-bench for most of my recent rgw testing.
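For reference, swift-bench drives rgw through its Swift-compatible API. A minimal config sketch follows; the auth endpoint, user, and key values are placeholders for your own setup (a Swift subuser created with radosgw-admin):

```ini
[bench]
auth = http://rgw-host/auth/v1.0
user = test:tester
key = secret
concurrency = 10
object_size = 4194304
num_objects = 100
num_gets = 100
```

It would then be run as `swift-bench <conf_file>`.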

Mark

>
> Ceph.conf- rgw part:
>
> [client.radosgw.connect2]
> host = hcrgwko2
> rgw socket path = /tmp/connect2.sock
> log file = /var/log/ceph/connect2.log
> rgw dns name =  FQDN
>
> Thank you very much.
>
>
> Regards
>
> Philipp
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

