After a reboot the OSD turned out to be corrupt. I'm not sure whether
ceph-volume lvm new-db caused the corruption or simply failed because of a
pre-existing problem. Rebuilding the OSD from scratch (with a separate
block.db) works, so my problem is solved.
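
For completeness, a rebuild with a separate block.db looks roughly like the
sequence below (LV names taken from the listing further down); treat it as a
sketch rather than a verified recipe:

  ceph osd out 0
  systemctl stop ceph-osd@0
  ceph osd purge 0 --yes-i-really-mean-it
  ceph-volume lvm zap vg-cephdisk0/ceph-block-0
  ceph-volume lvm zap vg-cephdisk0/blockdb
  ceph-volume lvm create --osd-id 0 --data vg-cephdisk0/ceph-block-0 --block.db vg-cephdisk0/blockdb

Once the new OSD comes up, the cluster backfills it as usual.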
On 11/05/2022 20:47, Joost Nieuwenhuijse wrote:
Hi,
After upgrading to ceph 17.1.0 I'm unable to move block.db to a separate
SSD:
root@miles:~# ceph-volume lvm new-db --osd-id 0 --osd-fsid bc016bac-a0cc-4909-89dd-6db05193ddbc --target vg-cephdisk0/blockdb
--> Making new volume at /dev/vg-cephdisk0/blockdb for OSD: 0 (/var/lib/ceph/osd/ceph-0)
 stdout: inferring bluefs devices from bluestore path
 stderr: Might need DB size specification, please set Ceph bluestore-block-db-size config parameter
--> failed to attach new volume, error code:1
--> Undoing lv tag set
Failed to attach new volume: vg-cephdisk0/blockdb
root@miles:~#
I've tried to find out where to set bluestore-block-db-size, but I suspect
this might be an internal problem in ceph-volume.
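
In case the message really does just want a size hint: my guess would be the
bluestore_block_db_size option (with underscores), e.g. in ceph.conf on the
OSD node:

  [osd]
  bluestore_block_db_size = 171794497536

(that value is the size of the blockdb LV, roughly 160 GiB), or via the
cluster config database:

  ceph config set osd bluestore_block_db_size 171794497536

I haven't been able to confirm which of these, if either, ceph-volume honours
for new-db, so treat both as guesses.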
Before upgrading I was using ceph 16.2.7 and the above worked fine. Does
anyone have an idea?
Some more info below:
root@miles:~# ceph --version
ceph version 17.1.0 (c675060073a05d40ef404d5921c81178a52af6e0) quincy (dev)
root@miles:~# blockdev --getsize64 /dev/vg-cephdisk0/blockdb
171794497536
root@miles:~# ceph-volume lvm list
====== osd.0 =======

  [block]       /dev/vg-cephdisk0/ceph-block-0

      block device              /dev/vg-cephdisk0/ceph-block-0
      block uuid                g77x8j-gokx-CTXC-2cQ3-Fz6h-XyHZ-0CE7el
      cephx lockbox secret
      cluster fsid              660010bf-080c-48c0-9d68-a48019168206
      cluster name              ceph
      crush device class        None
      db device                 /dev/vg-cephdisk0/blockdb
      db uuid                   wrKZUc-67us-6Vna-IUd9-Iidl-3TgL-d71x6X
      encrypted                 0
      osd fsid                  bc016bac-a0cc-4909-89dd-6db05193ddbc
      osd id                    0
      osdspec affinity
      type                      block
      vdo                       0
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx