Re: Unable to add osds with ceph-volume

Thanks Eugen. I will try that.

Cheers


________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Wednesday, April 28, 2021 8:42:39 PM
To: Andrei Mikhailovsky <andrei@xxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re: Unable to add osds with ceph-volume

Hi,

when specifying the db device you should use --block.db VG/LV, not /dev/VG/LV.
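
For example, with the VG/LV names from the lvdisplay output quoted below (and the same notation for --block.wal), the prepare call would presumably look something like this:

# untested sketch based on the hint above: reference the LVs as VG/LV
# so ceph-volume resolves them as logical volumes instead of falling
# back to the blkid/PARTUUID lookup that fails in the log below
ceph-volume lvm prepare --bluestore \
    --data /dev/sds \
    --block.db ssd3/db5 \
    --block.wal ssd3/wal5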

Zitat von Andrei Mikhailovsky <andrei@xxxxxxxxxx>:

> Hello everyone,
>
> I am running Ceph version 15.2.8 on Ubuntu servers. I am using
> BlueStore OSDs with data on HDDs and db and wal on SSD drives. Each
> SSD has been partitioned such that it holds 5 dbs and 5 wals. The
> SSDs were prepared a while back, probably when I was running Ceph
> 13.x. I have been gradually adding new OSD drives as needed.
> Recently I tried to add more OSDs, which to my surprise failed.
> Previously I had no issues adding drives, but it seems that I can
> no longer do that with version 15.2.x.
>
> Here is what I get:
>
>
> root@arh-ibstorage4-ib  /home/andrei  ceph-volume lvm prepare
> --bluestore --data /dev/sds --block.db /dev/ssd3/db5 --block.wal
> /dev/ssd3/wal5
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> 6aeef34b-0724-4d20-a10b-197cab23e24d
> Running command: /usr/sbin/vgcreate --force --yes
> ceph-1c7cef26-327a-4785-96b3-dcb1b97e8e2f /dev/sds
> stderr: WARNING: PV /dev/sdp in VG
> ceph-bc7587b5-0112-4097-8c9f-4442e8ea5645 is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sdo in VG
> ceph-33eda27c-53ed-493e-87a8-39e1862da809 is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sdn in VG ssd2 is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sdm in VG ssd1 is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sdj in VG
> ceph-9d8da00c-f6b9-473f-b499-fa60d74b46c5 is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sdi in VG
> ceph-1603149e-1e50-4b86-a360-1372f4243603 is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sdh in VG
> ceph-a5f4416c-8e69-4a66-a884-1d1229785acb is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sde in VG
> ceph-aac71121-e308-4e25-ae95-ca51bca7aaff is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sdd in VG
> ceph-1e216580-c01b-42c5-a10f-293674a55c4c is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sdc in VG
> ceph-630f7716-3d05-41bb-92c9-25402e9bb264 is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sdb in VG
> ceph-a549c28d-9b06-46d5-8ba3-3bd99ff54f57 is using an old PV header,
> modify the VG to update.
> stderr: WARNING: PV /dev/sda in VG
> ceph-70943bd0-de71-4651-a73d-c61bc624755f is using an old PV header,
> modify the VG to update.
> stdout: Physical volume "/dev/sds" successfully created.
> stdout: Volume group "ceph-1c7cef26-327a-4785-96b3-dcb1b97e8e2f"
> successfully created
> Running command: /usr/sbin/lvcreate --yes -l 3814911 -n
> osd-block-6aeef34b-0724-4d20-a10b-197cab23e24d
> ceph-1c7cef26-327a-4785-96b3-dcb1b97e8e2f
> stdout: Logical volume
> "osd-block-6aeef34b-0724-4d20-a10b-197cab23e24d" created.
> --> blkid could not detect a PARTUUID for device: /dev/ssd3/wal5
> --> Was unable to complete a new OSD, will rollback changes
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.15
> --yes-i-really-mean-it
> stderr: 2021-04-28T20:05:52.290+0100 7f76bbfa9700 -1 auth: unable to
> find a keyring on
> /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc
> /ceph/keyring.bin,: (2) No such file or directory
> 2021-04-28T20:05:52.290+0100 7f76bbfa9700 -1
> AuthRegistry(0x7f76b4058e60) no keyring found at
> /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyrin
> g,/etc/ceph/keyring.bin,, disabling cephx
> stderr: purged osd.15
> --> RuntimeError: unable to use device
>
> I have tried to find a solution but wasn't able to resolve the
> problem. I am sure that I've previously added new volumes using the
> above command.
>
> lvdisplay shows:
>
> --- Logical volume ---
> LV Path /dev/ssd3/wal5
> LV Name wal5
> VG Name ssd3
> LV UUID WPQJs9-olAj-ACbU-qnEM-6ytu-aLMv-hAABYy
> LV Write Access read/write
> LV Creation host, time arh-ibstorage4-ib, 2020-07-29 23:45:17 +0100
> LV Status available
> # open 0
> LV Size 1.00 GiB
> Current LE 256
> Segments 1
> Allocation inherit
> Read ahead sectors auto
> - currently set to 256
> Block device 253:6
>
>
> --- Logical volume ---
> LV Path /dev/ssd3/db5
> LV Name db5
> VG Name ssd3
> LV UUID FVT2Mm-a00P-eCoQ-FZAf-AulX-4q9r-PaDTC6
> LV Write Access read/write
> LV Creation host, time arh-ibstorage4-ib, 2020-07-29 23:46:01 +0100
> LV Status available
> # open 0
> LV Size 177.00 GiB
> Current LE 45312
> Segments 1
> Allocation inherit
> Read ahead sectors auto
> - currently set to 256
> Block device 253:11
>
>
>
> How do I resolve these errors and create the new OSD?
>
> Cheers
>
> Andrei
>
>
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



