Re: Remove orphaned ceph volumes

This is now resolved. I simply found the old systemd files inside
/etc/systemd/system/multi-user.target.wants and disabled them, which
cleaned them up automatically.
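
For anyone who runs into the same thing, this is roughly what it looked
like on my node. The unit name below is taken from the log in my original
mail and assumes the usual ceph-volume@lvm-<osd id>-<osd fsid> naming, so
treat it as a sketch rather than the exact commands:

  # list the leftover ceph-volume units still wanted by multi-user.target
  ls /etc/systemd/system/multi-user.target.wants/ | grep ceph-volume

  # disabling an instance removes its symlink from multi-user.target.wants,
  # e.g. for the orphaned osd.1 entry from the log
  systemctl disable ceph-volume@lvm-1-f5f2a63b-540d-4277-ba18-a7db63ce5359.service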

Thanks!

On Wed, 16 Mar 2022 at 09:30, Chris Page <sirhc.page@xxxxxxxxx> wrote:

> Hi,
>
> We had to recreate our Ceph cluster and it seems some legacy data was left
> over. I think this is causing our valid OSDs to hang for 15-20 minutes
> before starting up on a machine reboot.
>
> When checking /var/log/ceph/ceph-volume.log, I can see the following -
>
> [2022-03-08 09:32:10,581][ceph_volume.process][INFO  ] Running command:
> /usr/sbin/ceph-volume lvm trigger 1-f5f2a63b-540d-4277-ba18-a7db63ce5359
> [2022-03-08 09:32:10,592][ceph_volume.process][INFO  ] Running command:
> /usr/sbin/ceph-volume lvm trigger 3-eb671fc9-6db3-444e-b939-ae37ecaa1446
> [2022-03-08 09:32:10,825][ceph_volume.process][INFO  ] stderr -->
>  RuntimeError: could not find osd.2 with osd_fsid
> e45faa5d-f0af-45a9-8f6f-dac037d69569
> [2022-03-08 09:32:10,837][ceph_volume.process][INFO  ] stderr -->
>  RuntimeError: could not find osd.0 with osd_fsid
> 16d1d2ad-37c1-420a-bc18-ce89ea9654f9
> [2022-03-08 09:32:10,844][systemd][WARNING] command returned non-zero exit
> status: 1
> [2022-03-08 09:32:10,844][systemd][WARNING] failed activating OSD, retries
> left: 25
> [2022-03-08 09:32:10,853][ceph_volume.process][INFO  ] stderr -->
>  RuntimeError: could not find osd.1 with osd_fsid
> f5f2a63b-540d-4277-ba18-a7db63ce5359
> [2022-03-08 09:32:10,853][ceph_volume.process][INFO  ] stderr -->
>  RuntimeError: could not find osd.0 with osd_fsid
> 59992b5f-806b-4bed-9951-bca0ef4e6f0a
> [2022-03-08 09:32:10,855][systemd][WARNING] command returned non-zero exit
> status: 1
> [2022-03-08 09:32:10,855][systemd][WARNING] failed activating OSD, retries
> left: 25
> [2022-03-08 09:32:10,865][ceph_volume.process][INFO  ] stderr -->
>  RuntimeError: could not find osd.3 with osd_fsid
> eb671fc9-6db3-444e-b939-ae37ecaa1446
>
> When running ceph-volume lvm list | grep "osd fsid" we only have four
> OSDs, and none match the fsids mentioned above -
>
> osd fsid                  3038f5ae-c579-410b-bb6d-b3590c2834ff
> osd fsid                  b693f0d5-68de-462e-a1a8-fbdc137f4da4
> osd fsid                  4639ef09-a958-40f9-86ff-608ac651ca58
> osd fsid                  c4531f50-b192-494d-8e47-533fe780bfa3
>
> How can I tell Ceph to stop looking for these orphaned OSDs / volumes?
>
> Thanks,
> Chris.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


