Re: disk failure

No, I mean Ceph sees it as a failure and marks the OSD out for a while.
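For what it's worth, that auto-out behaviour is governed by the monitors' down-out interval, so one option (a sketch, assuming a reasonably recent Ceph release; the value below is illustrative, not a recommendation) is to lengthen that interval so a briefly flapping drive never gets marked out, instead of setting the cluster-wide noout flag:

```ini
# ceph.conf on the monitor nodes -- illustrative value, tune to taste
[mon]
# Seconds a down OSD may stay down before it is automatically
# marked out and data starts rebalancing (the default is 600)
mon osd down out interval = 3600
```

If I remember right, newer releases can also flag individual OSDs (e.g. `ceph osd add-noout osd.3`, cleared with `ceph osd rm-noout osd.3`), which keeps one flapping drive in without raising the cluster-wide noout flag.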

On Thu, Sep 5, 2019 at 11:00 AM Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:
Is your HDD actually failing and vanishing from the OS, then coming back shortly after?

Or do you just mean your OSD is crashing and then restarting itself shortly later?


---- On Fri, 06 Sep 2019 01:55:25 +0800 solarflow99@xxxxxxxxx wrote ----

One of the things I've noticed is that when HDDs fail, they often recover after a short time and get added back to the cluster.  This causes the data to rebalance back and forth, and if I set the noout flag I get a health warning.  Is there a better way to avoid this?


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

