Re: OSD fail to start - fsid problem with KVM

That's a different OSD: the tags show osd.23 with fsid a0aa881c-aa2d-4462-9c2f-cd289810e9e7, but the boot error is about osd.22 with fsid 1c0b3fd7-7d80-4de9-9594-17ac5b2bf92f.

Try running ceph-volume lvm activate --all
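
That should re-activate every OSD that ceph-volume can discover from the
LV tags. If you want to activate just this one OSD instead, you can pass
the id and fsid explicitly (untested here; values copied from the
lv_tags output in your mail):

    ceph-volume lvm activate 23 a0aa881c-aa2d-4462-9c2f-cd289810e9e7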


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Nov 4, 2019 at 7:44 AM Anton Aleksandrov <anton@xxxxxxxxxxxxxx> wrote:
>
> I think so, yes - LVM is online: both drives are present and LVM operates.
>
> lvs -o lv_tags returns:
>
> ceph.block_device=/dev/osd_vg/osd_lv,ceph.block_uuid=EyUggo-Ja06-MptY-Rt4Q-l6iV-0pse-7Y8rPh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31c48e79-7724-49db-a20f-dd86e195972f,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=a0aa881c-aa2d-4462-9c2f-cd289810e9e7,ceph.osd_id=23,ceph.type=block,ceph.vdo=0
>
> We deployed everything using the ceph-deploy tool, pretty much
> automatically and without doing anything special.
>
> The reason for this setup on this server is that all the other OSDs have
> 1x8TB drives, but this host has 2x4TB and just 8GB of RAM, so we did not
> want to make two separate OSDs.
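>
> If it helps, we created the volume roughly like this (from memory; the
> device names are just examples, the VG/LV names match the tags above):
>
>     vgcreate osd_vg /dev/sdb /dev/sdc
>     lvcreate -l 100%FREE -n osd_lv osd_vg
>     ceph-deploy osd create --data osd_vg/osd_lv <hostname>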
>
> Anton
>
> On 04.11.2019 00:49, Paul Emmerich wrote:
> > On Sun, Nov 3, 2019 at 6:25 PM Anton Aleksandrov <anton@xxxxxxxxxxxxxx> wrote:
> >> Hello community.
> >>
> >> We run Ceph on quite old hardware with quite low traffic. Yesterday we
> >> had to reboot one of the OSD hosts, and after the reboot the OSD did not
> >> come up. The error message is:
> >>
> >> [2019-11-02 15:05:07,317][ceph_volume.process][INFO  ] Running command:
> >> /usr/sbin/ceph-volume lvm trigger 22-1c0b3fd7-7d80-4de9-9594-17ac5b2bf92f
> >> [2019-11-02 15:05:07,473][ceph_volume.process][INFO  ] stderr -->
> >> RuntimeError: could not find osd.22 with fsid
> >> 1c0b3fd7-7d80-4de9-9594-17ac5b2bf92f
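> >>
> >> As far as we understand, this is triggered at boot by the corresponding
> >> systemd unit (name reconstructed from the id/fsid in the log):
> >>
> >>     systemctl status ceph-volume@lvm-22-1c0b3fd7-7d80-4de9-9594-17ac5b2bf92f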
> >>
> >> This OSD has 2 disks, which are put into one logical volume (basically
> >> RAID0) and then used for OSD storage.
> > Please don't do that (unless you have a very good reason to).
> >
> >> We are quite new to Ceph, and this error has us stuck. What should we
> >> do? Change the fsid (where?)? Right now the cluster is in a repair
> >> state. As a last resort we would drop the OSD and rebuild it, but it
> >> would be very important for us to understand what happened and why. Is
> >> it a faulty config, or did something bad happen to the disks?
> > Is the LV online? Do the LV tags look correct? Check lvs -o lv_tags
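> >
> > ceph-volume's own view of the host can also be handy to compare against
> > the raw tags; it should list the osd id and fsid it knows about per
> > device:
> >
> >     ceph-volume lvm list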
> >
> >
> >
> > Paul
> >
> >
> >> Regards,
> >> Anton.
> >>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



