Re: Hammer reduce recovery impact

On 11/09/2015 01:24, Lincoln Bryant wrote:
> On 9/10/2015 5:39 PM, Lionel Bouton wrote:
>> For example deep-scrubs were a problem on our installation when at
>> times there were several going on. We implemented a scheduler that
>> enforces limits on simultaneous deep-scrubs and these problems are gone.
>
> Hi Lionel,
>
> Out of curiosity, how many was "several" in your case?

I had to issue ceph osd set nodeep-scrub several times when 3 or 4
deep-scrubs were running concurrently, to avoid processes getting blocked
in D state on the VMs, and I could see VM load start rising with only 2.
At the time I had only 3 or 4 servers with 18 or 24 OSDs in total, on
Firefly. Obviously the more servers and OSDs you have, the more
simultaneous deep-scrubs you can handle.
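
For reference, the kind of logic our scheduler enforces looks roughly
like the sketch below. This is a minimal illustration, not our actual
scheduler; it assumes "ceph pg dump pgs --format json" exposes a per-PG
"state" field, and the exact JSON layout varies between Ceph releases:

#!/usr/bin/env python
# Minimal sketch: keep the number of PGs being deep-scrubbed under a
# limit by toggling the cluster-wide nodeep-scrub flag. Assumes
# "ceph pg dump pgs --format json" returns PG records with a "state"
# field (JSON layout differs between releases, adjust parsing as needed).
import json
import subprocess

MAX_CONCURRENT_DEEP_SCRUBS = 1  # what a small cluster like ours tolerated

def deep_scrubbing_pgs():
    out = subprocess.check_output(
        ["ceph", "pg", "dump", "pgs", "--format", "json"])
    pgs = json.loads(out)
    # On some releases the PG list is nested, e.g. under "pg_stats".
    if isinstance(pgs, dict):
        pgs = pgs.get("pg_stats", [])
    return [pg for pg in pgs if "scrubbing+deep" in pg.get("state", "")]

def main():
    if len(deep_scrubbing_pgs()) >= MAX_CONCURRENT_DEEP_SCRUBS:
        # At or above the limit: stop new deep-scrubs from being scheduled.
        subprocess.check_call(["ceph", "osd", "set", "nodeep-scrub"])
    else:
        # Below the limit: let the OSDs schedule deep-scrubs again.
        subprocess.check_call(["ceph", "osd", "unset", "nodeep-scrub"])

if __name__ == "__main__":
    main()

Run from cron every minute or so; the nodeep-scrub flag only prevents new
deep-scrubs from starting, it does not interrupt ones already running.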

One PG is ~5GB on our installation, and it was probably ~4GB at the time.
As deep scrubs must read the data on all replicas, with size=3, having 3
or 4 of them running concurrently on only 3 or 4 servers means reading
anywhere from 10 to 20GB from the disks of each server (and I don't think
the OSDs try to bypass the kernel cache).
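
To make the back-of-the-envelope arithmetic explicit, here is the
computation with the approximate figures above (estimates, not
measurements):

# Rough estimate of deep-scrub read load per server, using the
# approximate figures above.
pg_size_gb = 4          # ~4GB per PG at the time (now ~5GB)
replicas = 3            # pool size=3: a deep scrub reads every replica
for concurrent in (3, 4):
    for servers in (3, 4):
        total_read = concurrent * replicas * pg_size_gb
        per_server = total_read / float(servers)
        print("%d deep-scrubs on %d servers: ~%.0fGB read per server"
              % (concurrent, servers, per_server))
# Prints values in the ~9-16GB range with 4GB PGs; with ~5GB PGs the
# worst case reaches ~20GB, hence the 10-20GB estimate.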

Lionel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



