Hi,
If the OSDs are deployed as LVs (by ceph-volume), you could try a
pvmove to a healthy disk. There was a thread here a couple of weeks
ago explaining the steps; I don’t have it at hand right now, but it
should be easy to find.
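A rough sketch of the LVM side of it, in case it helps (names below
are placeholders; the actual VG/LV names come from ceph-volume, and
the affected OSD should be stopped first):

  systemctl stop ceph-osd@<id>                 # stop the affected OSD
  pvcreate /dev/<new_disk>                     # prepare the replacement disk as a PV
  vgextend <osd_vg> /dev/<new_disk>            # add it to the VG holding the OSD LV
  pvmove /dev/<failing_disk> /dev/<new_disk>   # move all extents off the failing PV
  vgreduce <osd_vg> /dev/<failing_disk>        # remove the old PV from the VG
  pvremove /dev/<failing_disk>                 # clear the LVM label on the old disk

Note that pvmove has to read every allocated extent, so it may abort
on unreadable sectors of the failing disk.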
Of course, there’s no guarantee that this will be successful. I also
can’t tell if Igor’s approach is more promising.
Quoting Igor Fedotov <igor.fedotov@xxxxxxxx>:
Hi Carl,
you might want to use ceph-objectstore-tool to export PGs from
faulty OSDs and import them into healthy ones.
The process can be quite tricky, though.
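Roughly, with both the source and destination OSDs stopped, the
export/import looks something like this (OSD ids, the PG id and the
scratch path are placeholders):

  # list the PGs present on the faulty OSD
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<faulty_id> --op list-pgs

  # export a PG from the faulty OSD to a file
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<faulty_id> \
      --pgid <pg_id> --op export --file /some/scratch/<pg_id>.export

  # import that file into a healthy OSD
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<healthy_id> \
      --op import --file /some/scratch/<pg_id>.export

This has to be repeated per PG, and the cluster will only see the
imported PGs once the target OSDs are started again.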
There is also a pending PR (https://github.com/ceph/ceph/pull/54991)
to make the tool more tolerant of disk errors.
The patch is worth trying in some cases, though it is not a silver bullet.
And generally, whether recovery is doable depends greatly on the
actual error(s).
Thanks,
Igor
On 02/02/2024 19:03, Carl J Taylor wrote:
Hi,
I have a small cluster with some faulty disks in it, and I want to clone
the data from the faulty disks onto new ones.
The cluster is currently down and I am unable to run things like
ceph-bluestore-tool fsck, but ceph-bluestore-tool bluefs-export does appear
to be working.
Any help would be appreciated
Many thanks
Carl
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx