Re: ceph-volume quite buggy compared to ceph-disk

Hi Marc,

 Did you have any success with `ceph-volume` for activating your OSD?

 I am having a similar problem: `ceph-bluestore-tool` fails to read the
label of a previously created OSD on an LVM partition. The OSD had been
working without issues, but after a reboot it fails to come up.

 1. I initially created the OSD with Ceph Octopus 15.x using `ceph
orch daemon add osd <my hostname>:boot/cephfs_meta`, which created an
OSD on the LVM partition and brought it up.
 2. After a reboot, the OSD fails to come up; the error from
`ceph-bluestore-tool` inside the container is specifically that it is
unable to read the label of the device.
 3. When I query the symlinked device /dev/boot/cephfs_meta ->
/dev/dm-3 with `dmsetup info /dev/dm-3`, I can see that the state is
ACTIVE and that it has a UUID, etc.
 4. I installed the CentOS `ceph-osd` package, which provides
ceph-bluestore-tool, and tested manually: `sudo ceph-bluestore-tool
show-label --dev /dev/dm-3` fails to read the label. When I run the
same command against other OSDs that were created on whole disks, it
reads the label and prints its information.
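For reference, the diagnostic sequence from steps 3 and 4 can be sketched as a short shell session. The LV name `boot/cephfs_meta` and the `/dev/dm-3` mapping are specific to my host; substitute your own:

```shell
# Hypothetical paths from my host -- substitute your own VG/LV names.
LV=/dev/boot/cephfs_meta          # symlink created by LVM
DM=$(readlink -f "$LV")           # resolves to /dev/dm-3 here

# 1. Confirm the device-mapper entry is active and has a UUID:
sudo dmsetup info "$DM"

# 2. Try to read the BlueStore label directly:
sudo ceph-bluestore-tool show-label --dev "$DM"

# 3. Compare against a known-good whole-disk OSD device (example path):
sudo ceph-bluestore-tool show-label --dev /dev/sdb
```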

 I am considering filing a ticket in the Ceph issue tracker, as I
cannot figure out why ceph-bluestore-tool is unable to read the label;
it seems that either the OSD was created incorrectly in the first
place, or there is a bug in ceph-bluestore-tool.

 One possibility is that the lvm2 package was not installed on this
host before the `ceph orch daemon add ..` command ran, and that this
caused a problem specific to the LVM-partition OSD.
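That possibility can be checked after the fact by verifying the lvm2 userspace tooling and whether LVM itself sees the logical volume. A sketch, assuming a CentOS/RHEL host (hence `rpm -q`):

```shell
# Is the lvm2 userspace package installed?
rpm -q lvm2 || echo "lvm2 is NOT installed"

# If it is, does LVM see the logical volume and its dm node?
# lv_dm_path shows the /dev/dm-N device backing each LV.
sudo lvs -o lv_name,vg_name,lv_path,lv_dm_path
```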

 -Matt

On Sat, Sep 19, 2020 at 9:11 AM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
>
>
>
> [@]# ceph-volume lvm activate 36 82b94115-4dfb-4ed0-8801-def59a432b0a
> Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-36
> Running command: /usr/bin/ceph-authtool
> /var/lib/ceph/osd/ceph-36/lockbox.keyring --create-keyring --name
> client.osd-lockbox.82b94115-4dfb-4ed0-8801-def59a432b0a --add-key
> AQBxA2Zfj6avOBAAIIHqNNY2J22EnOZV+dNzFQ==
>  stdout: creating /var/lib/ceph/osd/ceph-36/lockbox.keyring
> added entity client.osd-lockbox.82b94115-4dfb-4ed0-8801-def59a432b0a
> auth(key=AQBxA2Zfj6avOBAAIIHqNNY2J22EnOZV+dNzFQ==)
> Running command: /usr/bin/chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-36/lockbox.keyring
> Running command: /usr/bin/ceph --cluster ceph --name
> client.osd-lockbox.82b94115-4dfb-4ed0-8801-def59a432b0a --keyring
> /var/lib/ceph/osd/ceph-36/lockbox.keyring config-key get
> dm-crypt/osd/82b94115-4dfb-4ed0-8801-def59a432b0a/luks
> Running command: /usr/sbin/cryptsetup --key-file - --allow-discards
> luksOpen
> /dev/ceph-9263e83b-7660-4f5b-843a-2111e882a17e/osd-block-82b94115-4dfb-4
> ed0-8801-def59a432b0a I8MyTZ-RQjx-gGmd-XSRw-kfa1-L60n-fgQpCb
>  stderr: Device I8MyTZ-RQjx-gGmd-XSRw-kfa1-L60n-fgQpCb already exists.
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-36
> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
> prime-osd-dir --dev /dev/mapper/I8MyTZ-RQjx-gGmd-XSRw-kfa1-L60n-fgQpCb
> --path /var/lib/ceph/osd/ceph-36 --no-mon-config
>  stderr: failed to read label for
> /dev/mapper/I8MyTZ-RQjx-gGmd-XSRw-kfa1-L60n-fgQpCb: (2) No such file or
> directory
> -->  RuntimeError: command returned non-zero exit status: 1
>
> Yet `dmsetup ls` lists this device?
>
> Where is the option to set the weight? As far as I can see, you can
> only set it after peering has started?
>
> How can I mount this tmpfs manually to inspect this? Maybe this could
> be added to the manual [1]?
>
>
> [1]
> https://docs.ceph.com/en/latest/ceph-volume/lvm/activate/
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



-- 
Matt Larson, PhD
Madison, WI  53705 U.S.A.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
