Re: RGW Blocking on 1-2 PG's - argonaut

Great, thanks. Now I understand everything.

Best Regards
SS

On 6 Mar 2013, at 15:04, Yehuda Sadeh <yehuda@xxxxxxxxxxx> wrote:

> On Wed, Mar 6, 2013 at 5:06 AM, Sławomir Skowron <szibis@xxxxxxxxx> wrote:
>> Hi, I ran some tests to reproduce this problem.
>>
>> As you can see, only one drive (each drive in the same PG) is much
>> more utilized than the others, and there are some ops queued on this
>> slow OSD. The test fetches the heads of S3 objects, sorted
>> alphabetically. This is strange: why do these requests go mostly to
>> this one triple of OSDs?
>>
>> Checking which OSDs are in this PG:
>>
>> ceph pg map 7.35b
>> osdmap e117008 pg 7.35b (7.35b) -> up [18,61,133] acting [18,61,133]
>>
>> On osd.61
>>
>> { "num_ops": 13,
>>  "ops": [
>>        { "description": "osd_sub_op(client.10376104.0:961532 7.35b
>> 2b11a75b\/2013-03-06-13-8700.1-ocdn\/head\/\/7 [] v 117008'1370134
>
> The ops log is slowing you down. Unless you really need it, set 'rgw
> enable ops log = false'. This is off by default in bobtail.
>
>
> Yehuda
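
[Editor's note: a minimal sketch of how the suggested setting might be applied. The section name `client.radosgw.gateway` is illustrative; use whichever client section your radosgw instance actually runs under, and restart radosgw afterwards.]

```ini
; ceph.conf — disable the RGW ops log (on by default in argonaut,
; off by default in bobtail). Section name is an assumption.
[client.radosgw.gateway]
    rgw enable ops log = false
```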

