Re: bug in ceph-volume create

Ohhhh....
sigh.
Thank you very much.
That actually makes sense, and isn't so bad after all.
It does make me wonder why I got no answers to my related question a couple of weeks ago, about the proper way to replace a failed HDD in a hybrid OSD.

At least I know now.

You guys might consider a feature request: have the tool check device paths passed in, and if a full /dev/ path is given for an LV, complain to the user with something like "hey, use the other syntax".





----- Original Message -----
From: "Jeff Bailey" <bailey@xxxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxx>
Sent: Monday, April 5, 2021 1:00:18 PM
Subject:  Re: bug in ceph-volume create

On 4/5/2021 3:49 PM, Philip Brown wrote:
>
> As soon as you have an HDD fail, you will need to recreate the OSD... and you are then stuck, because you can't use batch mode for it,
> and you can't do it more granularly with
>
   ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg --block.db /dev/ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd


This isn't a bug.  You're specifying the LV incorrectly.  Just use


--block.db ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd


without the /dev at the front. 
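In other words, ceph-volume expects the LV as a VG/LV pair rather than a device path. A quick shell sketch of the difference (the VG and LV names below are the placeholders from this thread, and the ceph-volume invocation is left commented out since it modifies the host):

```shell
# What was passed in (a /dev/ path to the LV) -- this is what fails:
dev_path=/dev/ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd

# Strip the leading /dev/ to get the VG/LV form ceph-volume wants:
lv_spec=${dev_path#/dev/}
echo "$lv_spec"
# -> ceph-xx-xx-xx/ceph-osd-db-this-is-the-old-lvm-for-ssd

# Then the create command would look like:
# ceph-volume --cluster ceph lvm create --bluestore \
#     --data /dev/sdg --block.db "$lv_spec"
```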
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
