Re: how to fix X is an unexpected clone

Hi Stefan,
Hi Everyone,

I am in a similar situation to the one you were in a year ago. During
some backfilling we removed an old snapshot, and with the next
deep-scrub we ended up with the same log message as you did:

> deep-scrub 2.61b
> 2:d8736536:::rbd_data.e22260238e1f29.000000000046d527:177f6 : is an
> unexpected clone

We run Luminous 12.2.10 and the snapshot 177f6 does not exist any more.
The unexpected clone is replicated correctly across the three OSDs and
its files are still present in the file system.
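
For completeness, this is roughly how I verified that (pool name and
OSD id replaced with placeholders; the find path assumes FileStore,
which is what we run):

  # which OSDs are in the acting set for the object
  ceph osd map <pool> rbd_data.e22260238e1f29.000000000046d527

  # the clones rados still reports for the object
  rados -p <pool> listsnaps rbd_data.e22260238e1f29.000000000046d527

  # the on-disk files on one of the three OSDs
  find /var/lib/ceph/osd/ceph-<id>/current/2.61b_head/ \
      -name '*e22260238e1f29.000000000046d527*'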

Thanh Tran wrote [1] that moving the objects away fixes the problem.
But you wrote that deleting the objects in the file system crashed
Ceph.
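
If I understood his post correctly, "moving the objects away" would
look roughly like this on a FileStore OSD (the backup directory and the
filename pattern are my guesses, and Thanh Tran apparently did this
with the OSDs still running, which is exactly the part I am unsure
about):

  # on each of the three OSDs holding PG 2.61b; the clone file should
  # carry the snap id 177f6 in its name instead of "head"
  mkdir -p /root/unexpected-clone-177f6
  find /var/lib/ceph/osd/ceph-<id>/current/2.61b_head/ \
      -name '*e22260238e1f29.000000000046d527*177f6*' \
      -exec mv {} /root/unexpected-clone-177f6/ \;

  # afterwards, check whether the error clears
  ceph pg deep-scrub 2.61b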

What exactly does "crashing" mean in your case? Was the PG, the RBD
image, or the whole cluster unavailable to the clients? Or was there no
impact at all?

I am not sure what the best way to solve the problem is:

1. Should I delete (or move away) the objects in the file system while
the OSDs are running and hope that Ceph fixes the rest, like Thanh Tran
did? Maybe afterwards I would have to run remove-clone-metadata with
the ceph-objectstore-tool? Will the PG and the RBD image stay online,
or should I plan for some unavailability?

2. Should I use the ceph-objectstore-tool with remove and/or
remove-clone-metadata to delete the objects on each OSD, one after
another, so the PG can stay online? (A rough sketch of what I have in
mind follows below the options.)

3. Should I use the ceph-objectstore-tool with remove or
remove-clone-metadata to delete the objects with all OSDs belonging to
the PG down?
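
For options 2 and 3 I have roughly the following sequence in mind, per
OSD and with noout set. This is only my sketch: add --journal-path if
your journal is separate, and I am not sure whether
remove-clone-metadata expects the clone id as hex (177f6) or decimal
(96246) -- corrections welcome.

  ceph osd set noout
  systemctl stop ceph-osd@<id>

  # get the exact JSON description of the stale clone object
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
      --pgid 2.61b --op list rbd_data.e22260238e1f29.000000000046d527

  # either remove the clone object itself (paste the JSON from above)...
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
      --pgid 2.61b '<JSON from --op list>' remove

  # ...and/or drop the stale clone from the head's snapset
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
      --pgid 2.61b rbd_data.e22260238e1f29.000000000046d527 \
      remove-clone-metadata 177f6

  systemctl start ceph-osd@<id>

  # repeat for the next OSD (option 2), or run it on all three while
  # they are down (option 3), then
  ceph osd unset noout
  ceph pg deep-scrub 2.61b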

Do you have any advice? Was your PG, the RBD image, or the cluster
unavailable during the fix?

Thanks,
Achim


[1]
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023199.html




-- 
Achim Ledermüller, M. Sc.
Lead Senior Systems Engineer

NETWAYS Managed Services GmbH | Deutschherrnstr. 15-19 | D-90429 Nuernberg
Tel: +49 911 92885-0 | Fax: +49 911 92885-77
CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207
http://www.netways.de | Achim.Ledermueller@xxxxxxxxxx

** Icinga Camp Berlin 2019 - March - icinga.com **
** OSDC 2019 - May - osdc.de **
** Icinga as a Service - nws.netways.de **
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



