Re: 50% performance drop after disk failure

Salut Eichi,

Michael Eichenberger <michael.eichenberger@xxxxxxxxxxxxxxxxx> writes:
> Symptoms:
> - After taking a failed disk out of our ceph cluster ('ceph osd out X')
>   the canary VM measures a 50% performance degradation.

we have suffered from similar symptoms in the past with Kraken and Mimic;
our solution was to heavily throttle rebalancing and recovery using these
settings:

--------------------------------------------------------------------------------
[osd]
# allow at most one PG backfill per OSD at a time
osd max backfills = 1
# allow at most one active recovery op per OSD at a time
osd recovery max active = 1
# lower the priority of recovery ops relative to client I/O
osd recovery op priority = 2
--------------------------------------------------------------------------------

In case you haven't tried them, might be worth a shot.
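
If you want to verify what a running OSD actually uses, you can query its
admin socket on the host the OSD runs on (osd.0 below is just an example id):

--------------------------------------------------------------------------------
# run on the host where the OSD lives; substitute a real OSD id
ceph daemon osd.0 config get osd_max_backfills
ceph daemon osd.0 config get osd_recovery_max_active
--------------------------------------------------------------------------------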

You can even inject them into the running daemons using:

--------------------------------------------------------------------------------
ceph tell 'osd.*' injectargs '--osd-max-backfills X --osd-recovery-max-active Y'
--------------------------------------------------------------------------------
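
Keep in mind that injected values only last until the daemons restart, so
keep them in ceph.conf as well if they should stick. Once backfill has
finished and your canary VM looks healthy again, you can relax the throttle
the same way; if I remember correctly, the Mimic defaults are
osd-max-backfills 1 and osd-recovery-max-active 3:

--------------------------------------------------------------------------------
# restore the defaults after recovery has settled (values assume Mimic)
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 3'
--------------------------------------------------------------------------------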

Best regards from the other side of the mountains,

Nico


--
Sustainable and modern Infrastructures by ungleich.ch