It seems you are running into this:
https://github.com/rook/rook/issues/11474#issuecomment-1365523469

Check the output of the command below to see whether the disks are detected by ceph-volume:

    ceph-volume inventory --format json-pretty

If a device is not being detected, add its specific device path to the command for more detailed output (for example, ceph-volume inventory /dev/sdb --format json-pretty, substituting one of your device paths). Note that this is not a Rook-specific issue.

Cheers,
Dongdong

P Wagner-Beccard <wagner-kerschbaumer@xxxxxxxxxxxxx> wrote on Mon, Dec 4, 2023 at 09:00:
> Hey Cephers,
>
> Hope you're all doing well! I'm in a bit of a pickle and could really use
> some of your expertise.
>
> Here's the scoop:
>
> I have a setup with around 10 HDDs and 2 NVMes (plus uninteresting boot
> disks).
> My initial goal was to configure part of the NVMes (6 of the 7 TB) into
> an md0 or similar device to be used as a DB device (the rest would be
> nice to use as an NVMe OSD).
> I made some clumsy attempts to set them up "right".
>
> The OSDs are getting deployed, but they are not showing up in the
> dashboard. The specific error when running `ceph orch device ls` is:
> 'Insufficient space (<10 extents) on vgs, LVM detected, locked.'
>
> Given this, I have a few questions:
>
> Are there specific configurations or steps that I might be missing when
> setting up the DB device with multiple HDDs?
> (Related: I am currently trying things like this:
> https://paste.openstack.org/show/bdPHXQ0BMypWnZTYosT2/ )
> Could the error message indicate a particular issue with my current
> setup or approach?
> If anyone has successfully configured a similar setup, could you please
> share your insights or steps taken?
>
> Thanks a bunch!
>
> Cheers,
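
Regarding the drive-group attempts: for "HDD data devices plus one shared DB device", a spec along these lines is one way to express it with cephadm. This is an untested sketch; the service_id, the host_pattern, the rotational filter, and /dev/md0 are assumptions based on your description, and I have not verified that ceph-volume will accept an md device as a DB device:

    cat <<'EOF' > osd-spec.yaml
    service_type: osd
    service_id: hdd-osds-shared-db    # hypothetical name
    placement:
      host_pattern: '*'               # adjust to the target host(s)
    spec:
      data_devices:
        rotational: 1                 # every spinning disk becomes a data OSD
      db_devices:
        paths:
          - /dev/md0                  # assumed shared DB device from your md setup
    EOF
    # --dry-run previews the OSDs cephadm would create, without applying anything
    ceph orch apply -i osd-spec.yaml --dry-run

A dry run is worth doing first here, since leftover LVM metadata on the devices (as in your 'LVM detected, locked' message) can keep them from being listed as available.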