Degraded: OSD failure vs crushmap change

Is there a difference between the degraded states triggered by an OSD
failure vs a crushmap change?

When an OSD fails, the cluster is obviously degraded in the sense that
you have fewer copies of your data than the pool size mandates.
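
In case a concrete check helps frame the question, here is a rough,
untested Python sketch of how I've been watching this (it just shells
out to the ceph CLI; the pgmap field names are what I see on my
release, so treat them as assumptions):

    import json
    import subprocess

    # 'ceph status --format json' includes a 'pgmap' section; the degraded
    # and misplaced object counters only appear while recovery/backfill is
    # still pending, hence the defaults of 0.
    raw = subprocess.check_output(['ceph', 'status', '--format', 'json'])
    pgmap = json.loads(raw.decode('utf-8')).get('pgmap', {})
    print('degraded objects: ', pgmap.get('degraded_objects', 0))
    print('misplaced objects:', pgmap.get('misplaced_objects', 0))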

But when you change the crush map, say by adding an OSD, Ceph also
reports HEALTH_WARN and degraded PGs. This makes sense in that data
isn't where it's supposed to be, but you do still have sufficient copies
of your data in the previous locations.
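
To see which PGs that actually covers after a crush change, I've been
listing them with their new ('up') and current ('acting') OSD sets.
Again an untested sketch, and the JSON shape of 'ceph pg ls' seems to
differ between releases, so take the unwrapping step as an assumption:

    import json
    import subprocess

    # 'ceph pg ls degraded --format json' lists pg_stat entries; on some
    # releases the list is wrapped in a {'pg_stats': [...]} object.
    raw = subprocess.check_output(
        ['ceph', 'pg', 'ls', 'degraded', '--format', 'json'])
    out = json.loads(raw.decode('utf-8'))
    pgs = out.get('pg_stats', []) if isinstance(out, dict) else out
    for pg in pgs:
        print(pg['pgid'], pg['state'],
              'up:', pg['up'], 'acting:', pg['acting'])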

So what happens if you add an OSD and some multi-OSD failure occurs
such that you have zero copies of the data in the target location (i.e.
the crush map including the new OSD), but you still have a copy of the
data in the old location (i.e. the crush map before adding the new OSD)?
Is Ceph smart enough to pull the data from the old location, or do you
lose data?
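
For context, this is how I've been inspecting a single PG's two
mappings while wondering about this: 'ceph pg <pgid> query' reports
both the new crush mapping ('up') and the set of OSDs currently
serving the PG ('acting'). A sketch, with a made-up pgid:

    import json
    import subprocess

    pgid = '0.1a'  # hypothetical; substitute a real pgid from 'ceph pg dump'
    # Compare where crush now wants the data ('up') with where the
    # copies currently live ('acting').
    raw = subprocess.check_output(
        ['ceph', 'pg', pgid, 'query', '--format', 'json'])
    q = json.loads(raw.decode('utf-8'))
    print('up:    ', q.get('up'))
    print('acting:', q.get('acting'))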

-- 
Adam Carheden
