Hi,
what happens when size = 2 and some objects are in a degraded state?
This sounds like an easy path to data loss if the old but still active OSD fails while recovery is in progress.
It would make more sense to replicate the PG first and only then remove it from the old OSD.
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
-------- Original Message --------
Subject: Re: degraded objects after osd add (17-Nov-2016 9:14)
From: Burkhard Linke <Burkhard.Linke@computational.bio.uni-giessen.de>
To: ceph@xxxxxxxxxxxxx
Hi,
On 11/17/2016 08:07 AM, Steffen Weißgerber wrote:
> Hello,
>
> just for understanding:
>
> When starting to fill OSDs with data by setting the weight from 0 to its normal value,
> the ceph status displays degraded objects (>0.05%).
>
> I don't understand the reason for this, because no storage was revoked from the cluster,
> only added. Therefore only the displayed object displacement makes sense to me.
If you just added a new OSD, a number of PGs will be backfilling or waiting for backfill (the remapped ones). I/O to these PGs is not blocked, so objects may be modified in the meantime. AFAIK these modified objects show up as degraded.
I'm not sure how Ceph handles these objects internally, e.g. whether the writes go to the old OSDs assigned to the PG, or whether they are already placed on the new OSD even while the PG is still waiting for backfill.
Nonetheless, the degraded objects will be cleaned up during backfilling.
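For anyone who wants to watch this while it happens: the degraded-object count is reported in the pgmap section of `ceph status --format json`. Here is a minimal sketch of computing the degraded percentage from that output — the field names (`degraded_objects`, `degraded_total`) match typical pgmap output of that era, but treat them as assumptions and check against your own cluster's JSON.

```python
import json

def degraded_pct(status_json):
    """Compute the degraded-object percentage from `ceph status --format json`
    output. Field names are assumed from typical pgmap output; they may be
    absent when nothing is degraded, hence the .get() defaults."""
    pgmap = json.loads(status_json)["pgmap"]
    degraded = pgmap.get("degraded_objects", 0)
    total = pgmap.get("degraded_total", 0)
    return 100.0 * degraded / total if total else 0.0

# Hypothetical sample standing in for real `ceph status --format json` output:
sample = json.dumps({"pgmap": {"degraded_objects": 1520,
                               "degraded_total": 3000000}})
print(round(degraded_pct(sample), 3))
```

With the sample numbers above this prints 0.051, i.e. the ~0.05% range Steffen observed.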
Regards,
Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com