Is there a difference between the degraded states triggered by an OSD failure and those triggered by a CRUSH map change? When an OSD fails, the cluster is obviously degraded in the sense that you have fewer copies of your data than the pool size mandates. But when you change the CRUSH map, say by adding an OSD, Ceph also reports HEALTH_WARN and degraded PGs. That makes sense in that the data isn't where it's supposed to be, but you still have sufficient copies of your data in the previous locations.

So what happens if you add an OSD and some multi-OSD failure occurs such that you have zero copies of the data at the target location (i.e. the CRUSH map including the new OSD), but you still have a copy of the data at the old location (i.e. the CRUSH map before adding the new OSD)? Is Ceph smart enough to pull the data from the old location, or do you lose data?

--
Adam Carheden
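
P.S. To make the scenario concrete, here is roughly the sequence I have in mind (a size-3 replicated pool; the OSD ids, weight, and host bucket name are just illustrative):

    # Healthy cluster: a PG in a size-3 pool has its copies on osd.0, osd.1, osd.2
    ceph osd crush add osd.3 1.0 host=node4   # CRUSH change: the PG now maps to, say, osd.1 osd.2 osd.3
    # Before backfill to osd.3 completes, osd.1 and osd.2 both fail,
    # so there are zero complete copies at the new target location,
    # but osd.0 still holds the copy from the old mapping.
    ceph health detail        # what does Ceph report at this point?
    ceph pg <pgid> query      # will recovery pull the data back from osd.0?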