Re: EC pool degrades when adding device-class to crush rule

Hey Eugen!

Quoting Eugen Block (eblock@xxxxxx):

> >When a client writes an object to the primary OSD, the primary OSD
> >is responsible for writing the replicas to the replica OSDs. After
> >the primary OSD writes the object to storage, the PG will remain
> >in a degraded state until the primary OSD has received an
> >acknowledgement from the replica OSDs that Ceph created the
> >replica objects successfully.
> 
> Applying that to your situation, where PGs are moved across nodes
> (well, they're not moved but recreated), it can take quite some time
> until they become fully available, depending on the PG size and the
> number of objects in them. So as long as you don't have "inactive
> PGs" you're fine being in a degraded state, provided it resolves
> eventually. Having degraded PGs is nothing unusual, e.g. during
> maintenance when a server is rebooted.

Thank you for your reply and reassurance.  I've changed the rule on my
production cluster now and there aren't any degraded PGs at all, only
misplaced objects, as I had originally hoped.
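
For the archives, the kind of change I'm describing looks roughly like the
following (not the exact commands I ran; the pool, profile and rule names
and the k/m values are placeholders, and the k/m in the profile has to match
the pool's existing EC profile since the rule is derived from it):

    # EC profile that pins the device class (placeholder names and values)
    ceph osd erasure-code-profile set ec-profile-hdd k=4 m=2 crush-device-class=hdd
    # derive a new crush rule from that profile
    ceph osd crush rule create-erasure ec-hdd-rule ec-profile-hdd
    # point the existing EC pool at the new rule
    ceph osd pool set ecpool crush_rule ec-hdd-rule
    # then watch: ideally only misplaced objects, no degraded or inactive PGs
    ceph -s
    ceph health detail
    ceph pg dump_stuck inactive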

The main difference seems to be that Ceph on the production cluster decided
to backfill all of the PGs, while on my test cluster Ceph started recovery
operations instead.  I think this might have something to do with the number
of OSDs, which differs between test and production (14 versus 42), or with
the number of objects per PG.  Probably the latter.
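
As far as I understand it, backfill copies the full contents of a PG to OSDs
that don't yet have it, while recovery only replays the recent changes
recorded in the PG log.  A rough way to watch which of the two is happening
(just the commands, output will vary):

    # summary counts of PG states, including recovering/backfilling
    ceph pg stat
    # list the PGs currently in a backfill or recovery state
    ceph pg dump pgs_brief | grep -E 'backfill|recover'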

Kind regards,
LF.
-- 
Lars Fenneberg, lf@xxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


