Octopus 15.2.2 unable to make drives available (reject reason locked)...

Hello,

Hitting an issue with a new 15.2.2 deployment using cephadm.  I am having a
problem creating encrypted OSDs with two OSDs per device (the drives are NVMe).
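
For reference, the service spec I am applying looks roughly like the one below
(the file name, service_id, and host pattern are placeholders rather than my
exact spec); the relevant drive group options are osds_per_device and encrypted:

cat <<'EOF' > osd_spec.yml
service_type: osd
service_id: nvme_encrypted        # placeholder service id
placement:
  host_pattern: 'prdhcistonode*'  # placeholder host pattern
data_devices:
  rotational: 0                   # select only the NVMe/SSD devices
osds_per_device: 2                # split each NVMe into two OSDs
encrypted: true                   # dmcrypt-encrypted OSDs
EOF
ceph orch apply osd -i osd_spec.yml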

After removing the cluster and bootstrapping it again, I am unable to create
OSDs because the drives are reported as locked.  sgdisk, wipefs, and zap all
fail to leave the drives as available.
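
Roughly the sequence I have been running against each device to try to free it
up (using /dev/nvme0n1 as an example here):

sgdisk --zap-all /dev/nvme0n1                              # wipe GPT structures
wipefs --all /dev/nvme0n1                                  # clear filesystem/LVM signatures
ceph orch device zap prdhcistonode01 /dev/nvme0n1 --force  # zap via the orchestrator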

Any help would be appreciated.
Comments on performance experience with Ceph in containers (cephadm deployed)
vs bare metal (ceph-deploy) would be welcome as well.

Thanks,
Marco

ceph orch device ls
HOST             PATH          TYPE   SIZE  DEVICE                                   AVAIL  REJECT REASONS
prdhcistonode01  /dev/nvme0n1  ssd   11.6T  Micron_9300_MTFDHAL12T8TDR_2006266528D1  False  locked
prdhcistonode01  /dev/nvme1n1  ssd   11.6T  Micron_9300_MTFDHAL12T8TDR_2006266534D9  False  locked
prdhcistonode01  /dev/nvme2n1  ssd    953G  INTEL SSDPEKKF010T8_BTHH850215GA1P0E     False  locked
prdhcistonode01  /dev/nvme3n1  ssd   11.6T  Micron_9300_MTFDHAL12T8TDR_200626651473  False  locked
prdhcistonode01  /dev/nvme4n1  ssd   11.6T  Micron_9300_MTFDHAL12T8TDR_2006266508FB  False  locked
prdhcistonode01  /dev/nvme5n1  ssd   11.6T  Micron_9300_MTFDHAL12T8TDR_20062664E6E8  False  locked
prdhcistonode01  /dev/nvme6n1  ssd   11.6T  Micron_9300_MTFDHAL12T8TDR_200626653CC0  False  locked
prdhcistonode01  /dev/nvme7n1  ssd   11.6T  Micron_9300_MTFDHAL12T8TDR_1939243B797E  False  locked
prdhcistonode01  /dev/nvme8n1  ssd   11.6T  Micron_9300_MTFDHAL12T8TDR_200626652441  False  locked


lsblk

NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme2n1                                                                                               259:0    0 953.9G  0 disk
├─nvme2n1p1                                                                                           259:1    0   512M  0 part /boot/efi
└─nvme2n1p2                                                                                           259:2    0 953.4G  0 part /
nvme3n1                                                                                               259:3    0  11.7T  0 disk
└─ceph--5bd47cae--97b3--4cad--b010--215fd982497b-osd--data--e6045acd--a56d--41d2--a016--b8647b9a717a  253:1    0  11.7T  0 lvm
nvme4n1                                                                                               259:4    0  11.7T  0 disk
└─ceph--bf7dbfb4--afe3--4391--9847--08e461bf6247-osd--data--12faafac--b695--4c30--b6d7--7046d8275d9f  253:0    0  11.7T  0 lvm
nvme0n1                                                                                               259:5    0  11.7T  0 disk
└─ceph--1a5d8e23--ff7d--44c3--b6d2--de143fed2b7d-osd--block--b6593547--e99a--4add--8edd--5d0fb53254cd 253:2    0  11.7T  0 lvm
nvme5n1                                                                                               259:6    0  11.7T  0 disk
└─ceph--7d85ff24--79c8--4792--a2c8--bb4908f77ff0-osd--data--fc4e9dbd--920f--41b8--8467--74e9dcbd57ca  253:3    0  11.7T  0 lvm
nvme6n1                                                                                               259:7    0  11.7T  0 disk
└─ceph--d8c8652a--1cd8--4e10--a333--4ea10f3b5004-osd--data--9a70a549--3cba--4f0d--a13a--8465781a10e9  253:5    0  11.7T  0 lvm
nvme8n1                                                                                               259:8    0  11.7T  0 disk
└─ceph--e1914f1c--2385--4c0c--9951--d4b9200b7164-osd--data--8876559c--6393--4fbc--821b--7ac74cfb5a54  253:7    0  11.7T  0 lvm
nvme7n1                                                                                               259:9    0  11.7T  0 disk
└─ceph--3765b53a--75eb--489e--97e1--d6b03bc25532-osd--data--777638e0--a325--401d--a01d--459676871003  253:4    0  11.7T  0 lvm
nvme1n1                                                                                               259:10   0  11.7T  0 disk
└─ceph--2124f206--2b50--41a1--8a3c--d47c1a909a3b-osd--block--88e4f1eb--73f4--4c83--b978--fe7cabc0c3e6 253:6    0  11.7T  0 lvm
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



