Re: OSD fail to start - fsid problem with KVM

On Sun, Nov 3, 2019 at 6:25 PM Anton Aleksandrov <anton@xxxxxxxxxxxxxx> wrote:
>
> Hello community.
>
> We run Ceph on quite old hardware with quite low traffic. Yesterday we
> had to reboot one of the OSDs and after the reboot it did not come up.
> The error message is:
>
> [2019-11-02 15:05:07,317][ceph_volume.process][INFO  ] Running command:
> /usr/sbin/ceph-volume lvm trigger 22-1c0b3fd7-7d80-4de9-9594-17ac5b2bf92f
> [2019-11-02 15:05:07,473][ceph_volume.process][INFO  ] stderr -->
> RuntimeError: could not find osd.22 with fsid
> 1c0b3fd7-7d80-4de9-9594-17ac5b2bf92f
>
> This OSD has two disks, which are combined into one logical volume
> (basically RAID 0) and then used for OSD storage.

Please don't do that (unless you have a very good reason to).

> We are quite new to Ceph and this error has us stuck. What should we
> do? Change the fsid (where?)? Right now the cluster is in a repair
> state. As a last resort we would drop the OSD and rebuild it, but it
> would be very important for us to understand what happened and why.
> Is it a faulty config, or did something bad happen to the disks?

Is the LV online? Do the LV tags look correct? Check lvs -o lv_tags
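
For example (the OSD id and fsid below are taken from your error; the
exact tag values and VG/LV names will differ on your system):

  # tags that ceph-volume wrote on the LV when the OSD was prepared
  lvs -o lv_name,vg_name,lv_tags

  # what ceph-volume can currently discover on its own
  ceph-volume lvm list

The LV backing osd.22 should carry tags like ceph.osd_id=22 and
ceph.osd_fsid=1c0b3fd7-7d80-4de9-9594-17ac5b2bf92f. If the LV is there
and the tags match, activating it again usually works:

  ceph-volume lvm activate 22 1c0b3fd7-7d80-4de9-9594-17ac5b2bf92f

If the LV is missing or the tags point at a different fsid, that is
what you need to figure out before dropping and rebuilding the OSD.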



Paul


>
> Regards,
> Anton.
>


