RGW: Get object ops performance problem

Hi, everyone!

I am testing RGW GET object ops. When I use 100 threads to GET one and the same object, performance is very good: the mean response time is 0.1s.
But when I use 150 threads to GET the same object, performance is very bad: the mean response time is 1s.
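
For reference, a minimal sketch of the kind of load test described (plain Python stdlib; the endpoint, bucket, and object URL are placeholders of my own, not the ones from the original test, and it assumes the object is anonymously readable through RGW's S3 endpoint):

import threading
import time
import urllib.request

URL = "http://rgw.example.com/mybucket/ws1411.jpg"  # hypothetical RGW endpoint and object
N_THREADS = 150                                     # 100 vs. 150 in the test above
REQUESTS_PER_THREAD = 20

latencies = []
lock = threading.Lock()

def worker():
    # Each thread repeatedly GETs the same object and records wall-clock latency.
    for _ in range(REQUESTS_PER_THREAD):
        start = time.monotonic()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        elapsed = time.monotonic() - start
        with lock:
            latencies.append(elapsed)

threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("mean response time: %.3fs" % (sum(latencies) / len(latencies)))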

I also observed the OSD log and the RGW log.
RGW log:
2014-07-15 10:36:42.999719 7f45596fb700  1 -- 10.0.1.61:0/1022376 --> 10.0.0.21:6835/24201 -- osd_op(client.6167.0:22721 default.5632.8_ws1411.jpg [getxattrs,stat,read 0~524288] 4.5210f70b ack+read e657) 
2014-07-15 10:36:44.064720 7f467efdd700  1 -- 10.0.1.61:0/1022376 <== osd.7 10.0.0.21:6835/24201 22210 ==== osd_op_reply(22721 

OSD log:
2014-07-15 10:36:43.001895 7f6cdb24c700  1 -- 10.0.0.21:6835/24201 <== client.6167 10.0.1.61:0/1022376 22436 ==== osd_op(client.6167.0:22721 default.5632.8_ws1411.jpg 
2014-07-15 10:36:43.031762 7f6cbf01f700  1 -- 10.0.0.21:6835/24201 --> 10.0.1.61:0/1022376 -- osd_op_reply(22721 default.5632.8_ws1411.jpg 

So I think the problem does not happen in the OSD. Why did the OSD send the op reply at 10:36:43.031762, but RGW did not receive it until 10:36:44.064720?
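
For reference, the gap between those two timestamps (copied from the logs above) works out to about one second, which matches the ~1s mean response time at 150 threads:

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps copied from the OSD and RGW log lines above.
osd_sent = datetime.strptime("2014-07-15 10:36:43.031762", fmt)
rgw_recv = datetime.strptime("2014-07-15 10:36:44.064720", fmt)
print((rgw_recv - osd_sent).total_seconds())  # -> 1.032958, i.e. ~1s between OSD send and RGW receive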





baijiaruo at 126.com

