Re: upgraded cluster to 16.2.6 PACIFIC

On Tue, Nov 9, 2021 at 11:08 AM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
> Hi Ansgar,
>
> To clarify the messaging or docs, could you say where you learned that
> you should enable the bluestore_fsck_quick_fix_on_mount setting? Is
> that documented somewhere, or did you have it enabled from previously?
> The default is false so the corruption only occurs when users actively
> choose to fsck.

I had upgraded another cluster in the past, with no issues as of
today, so I just followed my own instructions for this cluster.
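
For what it's worth, the setting can be checked and pinned back to its
safe default cluster-wide before any further OSD restarts (a minimal
sketch):

# show what the current value is (default: false)
ceph config get osd bluestore_fsck_quick_fix_on_mount

# pin it explicitly so no OSD runs the quick-fix fsck on its next restart
ceph config set osd bluestore_fsck_quick_fix_on_mount false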

> As to recovery, Igor wrote the low level details here:
> https://www.spinics.net/lists/ceph-users/msg69338.html
> How did you resolve the omap issues in your rgw.index pool? What type
> of issues remain in meta and log?

For the index pool we ran this script:
https://paste.openstack.org/show/810861/ . It adds an omap key and
triggers a repair, but it does not work for the meta pool.
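
In outline, the approach is roughly the following (a sketch of what
the paste does, not the exact script; "temporary-key" is an arbitrary
throwaway name, the pool name may differ in your setup, and the PG
loop needs jq):

pool=default.rgw.buckets.index   # adjust to your index pool name

# write and immediately remove a throwaway omap key on every index
# object, so each object's omap gets rewritten
for obj in $(rados -p $pool ls); do
    rados -p $pool setomapval $obj temporary-key anything
    rados -p $pool rmomapkey $obj temporary-key
done

# then ask the affected PGs to repair themselves
for pg in $(ceph pg ls-by-pool $pool -f json | jq -r '.pg_stats[].pgid'); do
    ceph pg repair $pg
done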
My next best option for the meta pool is to stop the radosgw and
create a new pool with the same data, like:

pool=default.rgw.meta
ceph osd pool create $pool.new 64 64
ceph osd pool application enable $pool.new rgw

# copy data (radosgw is stopped at this point)
rados -p $pool export /tmp/$pool.img
rados -p $pool.new import /tmp/$pool.img

# swap pools
ceph osd pool rename $pool $pool.old
ceph osd pool rename $pool.new $pool

rm -f /tmp/$pool.img
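
And before removing $pool.old for good, a quick sanity check that the
copy is complete (sketch):

# object counts of the old and new pool should match
rados -p $pool ls | wc -l
rados -p $pool.old ls | wc -l

# once verified, radosgw can be started again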