Re: poor radosgw performance

Hi All,

We're trying to compare the Swift API performance of Swift itself (1.9.0) and Ceph's radosgw (0.67.3) using the following hardware configuration:

Shared servers:

* 1 server running keystone for authentication
* 1 server running swift-proxy, a single MON, and radosgw + Apache / FastCGI

Ceph:

* 4 storage servers, 5 storage disks / 5 OSDs on each (no separate disk(s) for journal)

Swift:

* 4 storage servers, 5 storage disks on each

All 10 machines have identical hardware configurations (including drive type & speed).

We deployed Ceph w/ ceph-deploy, and both Swift and Ceph have default configurations w/ the exception of the following (a rough ceph.conf sketch follows this list):

* custom Inktank packages for apache2 / libapache2-mod-fastcgi
* rgw_print_continue enabled
* rgw_enable_ops_log disabled
* rgw_ops_log_rados disabled
* debug_rgw disabled

(actually, Swift was deployed w/ a Chef cookbook, so its configuration may be slightly non-standard)
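
For reference, the rgw overrides amount to something like the following in ceph.conf (the [client.radosgw.gateway] section name is just the default from the docs, not copied from our actual file):

    [client.radosgw.gateway]
        # section name assumed; adjust to whatever the gateway's client name actually is
        rgw print continue = true
        rgw enable ops log = false
        rgw ops log rados = false
        debug rgw = 0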

On the Ceph storage servers, the filesystem type (XFS), filesystem mount options, pg_num values on the pools, etc. have all been left at the defaults (8 on the radosgw-related pools IIRC).
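
(If raising those turns out to be worthwhile, we'd expect to check and bump them with something like the commands below; the pool name and target value are just illustrative:)

    # pool name (.rgw.buckets) and 128 are illustrative, not what we've actually run
    ceph osd pool get .rgw.buckets pg_num
    ceph osd pool set .rgw.buckets pg_num 128
    ceph osd pool set .rgw.buckets pgp_num 128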

Doing a preliminary test w/ swift-bench (concurrency = 10, object_size = 1), we're seeing the following:

Ceph:

1000 PUTS **FINAL** [0 failures], 14.8/s
10000 GETS **FINAL** [0 failures], 40.9/s
1000 DEL **FINAL** [0 failures], 34.6/s

Swift:

1000 PUTS **FINAL** [0 failures], 21.7/s
10000 GETS **FINAL** [0 failures], 139.5/s
1000 DEL **FINAL** [0 failures], 85.5/s
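
(For completeness, the swift-bench config for those runs looks roughly like the one below; everything other than concurrency, object_size, num_objects, and num_gets is illustrative or anonymized:)

    [bench]
    # auth endpoint and credentials below are placeholders, not our real values
    auth = http://keystone-host:5000/v2.0/
    auth_version = 2.0
    user = test:tester
    key = secret
    concurrency = 10
    object_size = 1
    num_objects = 1000
    num_gets = 10000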

That's a relatively significant difference.  Would we see any real difference from moving the journals either to an SSD per server or to a separate partition on each OSD disk?  These machines are not seeing any load beyond what's being generated by swift-bench.  Alternatively, would we see any quick wins from standing up more MONs or from moving the MON off the server running radosgw + Apache / FastCGI?
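
(For the journal question, what we have in mind is re-creating the OSDs with ceph-deploy and pointing each journal at a partition on a per-server SSD, along these lines; host and device names are hypothetical:)

    # hypothetical names: sdb/sdc are OSD data disks, sdf is the shared SSD
    ceph-deploy osd create ceph-store1:sdb:/dev/sdf1 ceph-store1:sdc:/dev/sdf2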

Thanks in advance for the assistance.

Regards,
Matt
