On 4/5/2021 3:49 PM, Philip Brown wrote:
I would file this as a potential bug.. but it takes too long to get approved, and tracker.ceph.com doesn't have straightforward Google sign-in enabled :-/
I believe that with the new LVM mandate, ceph-volume should not be complaining about "missing PARTUUID".
This is stopping me from using my system.
Details on how to recreate:
1. have a system with 1 SSD and multiple HDDs
2. create a bunch of OSDs with your preferred frontend, which will eventually come down to
ceph-volume lvm batch --bluestore /dev/ssddevice /dev/sdA ... /dev/sdX
THIS will work great. Batch mode will appropriately carve up the SSD device into multiple LVs and allocate one of them to be a DB device for each of the HDDs.
3. try to repair/replace an HDD
As soon as you have an HDD fail... you will need to recreate the OSD, and you are then stuck, because you can't use batch mode for it...
and you can't do it more granularly, with
ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg --block.db /dev/ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd here
This isn't a bug. You're specifying the LV incorrectly. Just use
--block.db ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd
without the /dev at the front. A /dev path gets treated like a normal
block device, which is why ceph-volume goes looking for a PARTUUID.
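Something along these lines should work (the VG/LV names are taken from your example; substitute whatever 'lvs' or 'ceph-volume lvm list' reports on your box):

# wrong: with the /dev/ prefix, ceph-volume treats the path as a plain
# block device and goes looking for a PARTUUID on it
ceph-volume lvm create --bluestore --data /dev/sdg --block.db /dev/ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd

# right: pass the LV as vg_name/lv_name so it is recognized as an LV
ceph-volume lvm create --bluestore --data /dev/sdg --block.db ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd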
because ceph-volume will complain that,
blkid could not detect a PARTUUID for device: /dev/ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd here
but the LV IS NOT SUPPOSED TO HAVE A PARTUUID.
Which is provable, first of all, by the fact that it isn't a partition, and secondly by the fact that none of the other block-db LVs that batch mode created on the SSD have a PARTUUID either!!
So kindly quit checking for something that isn't supposed to be there in the first place?!
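You can check this for yourself with blkid (placeholder names again, and assuming /dev/sda1 is a GPT partition on the box):

# a GPT partition does have a PARTUUID...
blkid -s PARTUUID -o value /dev/sda1
# ...but a logical volume does not, including the DB LVs that batch mode
# created; this prints nothing at all
blkid -s PARTUUID -o value /dev/ceph-xx-xx-xx/ceph-osd-db-made-by-batch-mode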
(This bug goes all the way back to Nautilus, through the latest release, I believe.)
--
Philip Brown | Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310 | Fax 714.918.1325
pbrown@xxxxxxxxxx | www.medata.com
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx