Re: Deleting incomplete PGs from an erasure coded pool

No need to delete it; that situation should be mostly salvageable by
setting osd_find_best_info_ignore_history_les temporarily on the
affected OSDs.
That should cause you to "just" lose some writes, resulting in inconsistent data.
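
A rough sketch of what that could look like (from memory, so double-check
against your release; osd.12 is a placeholder for each affected OSD):

    # in ceph.conf on the host of each affected OSD:
    [osd.12]
    osd_find_best_info_ignore_history_les = true

    # restart the OSD so it re-peers with the option in effect:
    systemctl restart ceph-osd@12

    # once the PG has peered and gone active, remove the option again
    # and restart the OSD once more -- it must only be set temporarily

(On recent releases you may also be able to inject it at runtime with
"ceph tell osd.12 injectargs", but whether a restart is still needed
depends on the version.)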


Paul

2018-08-28 11:08 GMT+02:00 Maks Kowalik <maks_kowalik@xxxxxxxxx>:
> What is the correct procedure for re-creating an incomplete placement group
> that belongs to an erasure coded pool?
> I'm facing a situation where too many shards of 3 PGs were lost during OSD
> crashes. Accepting the data loss was decided, but I can't force Ceph to
> recreate those PGs. The query output shows:
> "peering_blocked_by_detail": [
>                 {"detail": "peering_blocked_by_history_les_bound"}
> What was tried:
> 1. manual deletion of all shards appearing in the "peers" section of PG query
> output
> 2. marking all shards as complete using ceph-objectstore-tool
> 3. deleting peering history from OSDs keeping the shards
>
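
For anyone hitting the same wall, the commands behind steps 1 and 2 above
would look roughly like this (a sketch only; the OSD id, data path, pgid and
shard suffix are placeholders, and ceph-objectstore-tool must only be run
while the OSD is stopped):

    systemctl stop ceph-osd@12

    # step 1: delete one shard of the broken PG from this OSD
    # (recent releases require --force for a non-empty PG)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid 2.1fs0 --op remove --force

    # step 2: alternatively, mark the shard as complete
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid 2.1fs0 --op mark-complete

    systemctl start ceph-osd@12

Recreating a PG as empty is normally done with "ceph osd force-create-pg
<pgid>" (Luminous and later), though as the post above suggests, it may not
take effect while peering is still blocked.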



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



