Re: ceph recovering results in offline VMs

On 12.04.2013 18:48, Erdem Agaoglu wrote:
We are also seeing a similar problem, which we believe is #3737. Our
VMs (running MongoDB) were completely frozen for 2-3 minutes
(sometimes longer) while we were adding a new OSD. We have reduced
recovery max active and the backfill settings and ensured that RBD
caching is enabled, and now things seem better. We still see some
increase in iowait, but the VMs continue to function.

I lowered
        osd recovery max active = 1
        osd max backfills = 3
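
For reference, a minimal ceph.conf sketch with these throttles plus RBD caching (the values are only the ones mentioned in this thread, not a recommendation; whether they are right depends on the cluster):

```ini
; ceph.conf -- throttle recovery/backfill I/O per OSD
[osd]
    osd recovery max active = 1   ; parallel recovery ops per OSD
    osd max backfills = 3         ; concurrent backfill operations per OSD

; enable RBD writeback caching on the clients
[client]
    rbd cache = true
```

These can also be changed at runtime on a live cluster, e.g. with `ceph tell osd.* injectargs '--osd-recovery-max-active 1 --osd-max-backfills 3'`.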

but it still happens with spinning disks. It doesn't happen with SSDs, and it also doesn't happen with disks backed by a controller cache, but I ran mine directly on the SATA ports.

But that, I guess, depends on what the VM is actually doing at that moment.

BTW Stefan, I'm in no way experienced with Ceph and I don't know about
your OSDs, but 8128 PGs for an 8 TB cluster seems too much. Or is that
OK when the disks are SSDs?

Not sure if it matters, but yes, it is too high. Since PG splitting was / is not possible, I tend to use a higher value to be safe for future expansion.
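
For what it's worth, the rule of thumb commonly cited in the Ceph docs of that era was roughly (number of OSDs × 100) / replica count, rounded up to the next power of two. A small sketch of that arithmetic (the OSD and replica numbers below are hypothetical, not taken from Stefan's cluster):

```python
import math

def suggested_pg_count(num_osds, replicas, target_pgs_per_osd=100):
    """Rule-of-thumb PG count: (OSDs * target PGs per OSD) / replicas,
    rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    return 2 ** math.ceil(math.log2(raw))

# e.g. a hypothetical 9-OSD cluster with 3x replication:
print(suggested_pg_count(9, 3))  # -> 512
```

By that yardstick, a value in the low thousands only starts to make sense for clusters with dozens of OSDs, which is why over-provisioning PGs "to be safe" was a common workaround while splitting was unavailable.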

Stefan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
