Re: deep scrubbing causes osd down

Sorry, I am not sure whether those values will work well in your production environment.

Maybe you could use the command: ceph tell osd.0 injectargs
"--osd_scrub_sleep 0.5". This command affects only one OSD.

If it works fine for some days, you could set it for all OSDs.
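
For example, something like this should apply it to every OSD at once
(just a sketch, please double-check the option syntax on your release):

    ceph tell osd.* injectargs "--osd_scrub_sleep 0.5"

You can confirm the running value on an OSD's host via the admin socket:

    ceph daemon osd.0 config show | grep osd_scrub_sleep

Note that injectargs changes do not survive an OSD restart, so keep the
setting in ceph.conf (as you already do) for a permanent change.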

This is just a suggestion.

2015-04-13 14:34 GMT+08:00 Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx>:
>
> On 13 April 2015 at 16:00, Christian Balzer <chibi@xxxxxxx> wrote:
>>
>> However the vast majority of people with production clusters will be
>> running something "stable", mostly Firefly at this moment.
>>
>> > Sorry, 0.87 is Giant.
>> >
>> > BTW, you could also set osd_scrub_sleep on your cluster. Ceph would
>> > sleep for however long you define after it has scrubbed some objects.
>> > But I am not sure whether it would work well for you.
>> >
>> Yeah, that bit is backported to Firefly and can definitely help;
>> however, the suggested initial value is too small for most people who
>> have scrub issues. Starting with 0.5 seconds and seeing how it goes
>> seems to work better.
>
>
>
> Thanks xinze, Christian.
>
> Yah, I'm on 0.87 in production - I can wait for the next release :)
>
> In the meantime, from the prior msgs I've set this:
>
> [osd]
> osd_scrub_chunk_min = 1
> osd_scrub_chunk_max = 5
> osd_scrub_sleep = 0.5
>
>
> Do the values look OK? Is the [osd] section the right spot?
>
> Thanks - Lindsay
>
>
>
> --
> Lindsay



-- 
Regards,
xinze