Re: Degraded: OSD failure vs crushmap change

The PG counts as degraded in both scenarios, but if you look at the objects in the degraded PGs (visible in ceph status), some of them are degraded objects and others are misplaced objects. Degraded objects have fewer than your replica size of copies, which is what happens when you lose an OSD. When you change the CRUSH map you get misplaced objects instead: each PG has an "up" set of OSDs it will end up on and an "acting" set of OSDs it currently sits on. The acting OSDs remain the authority for the PG until the new OSDs have a full, up-to-date copy, at which point the old OSDs clean up their copy. So when you're adding storage or making CRUSH map changes you should still be serving from your full replica size of OSDs the whole time.
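As a rough illustration (the PG id 1.2f below is just a placeholder, substitute one of your own), you can see the degraded vs. misplaced split and the up vs. acting sets with commands along these lines:

    # object counts broken out into degraded and misplaced
    ceph status
    # list PGs that are currently degraded or remapped
    ceph pg ls degraded
    ceph pg ls remapped
    # show the up set (where the PG will end up) and the acting set
    # (where it is currently being served from) for a single PG
    ceph pg map 1.2f
    ceph pg 1.2f query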

On Fri, Apr 14, 2017 at 11:20 AM Adam Carheden <carheden@xxxxxxxx> wrote:
Is there a difference between the degraded states triggered by an OSD
failure vs. a CRUSH map change?

When an OSD fails the cluster is obviously degraded in the sense that
you have fewer copies of your data than the pool size mandates.

But when you change the CRUSH map, say by adding an OSD, Ceph also
reports HEALTH_WARN and degraded PGs. This makes sense in that data
isn't where it's supposed to be, but you do still have sufficient copies
of your data in the previous locations.

So what happens if you add an OSD and some multi-OSD failure occurs
such that you have zero copies of the data in the target location (i.e.
the CRUSH map including the new OSD), but you still have a copy of the
data in the old location (i.e. the CRUSH map before adding the new OSD)?
Is Ceph smart enough to pull the data from the old location, or do you
lose data?

--
Adam Carheden

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
