Hello,
Recently I deployed a small Ceph cluster using cephadm. In this cluster, I have 3 OSD nodes, each with 8 Hitachi HDDs (9.1 TiB), 4 Micron_9300 NVMes (2.9 TiB), and 2 Intel Optane P4800X NVMes (375 GiB). I want to use the spinning disks for the data block, the 2.9 TiB NVMes for block.db, and the Intel Optanes for block.wal.
I tried with a spec file and also via the Ceph dashboard, but I ran into the same problem both ways.
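For reference, the spec I tried looks roughly like this (a sketch: the service_id is arbitrary and the model filters are just what matches my hardware):

  service_type: osd
  service_id: osd_hdd_with_nvme_db_wal
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      model: Micron_9300
    wal_devices:
      model: P4800X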
I would expect 1 LV on every data disk, 4 LVs on each WAL disk, and 2 LVs on each DB disk (8 HDDs shared across 4 DB devices and 2 WAL devices). The problem arises on the DB disks, where only 1 LV gets created.
After some debugging, I think the problem arises when the VG gets divided in two. The VG has 763089 total PEs, and the first LV was created with 381545 PEs (763089/2 = 381544.5, rounded up to 381545). That leaves only 763089 - 381545 = 381544 free extents, so the creation of the second LV fails: Volume group "ceph-c7078851-d3c1-4745-96b6-f98a45d3da93" has insufficient free space (381544 extents): 381545 required.
Is this expected behavior or not? Should I create the LVs myself?
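If creating them by hand is the way to go, I assume it would look something like this (flooring the extent count so both LVs fit; the VG name is the one from the error above, and the data device and WAL VG/LV names are placeholders):

  # floor(763089 / 2) = 381544 extents per DB LV, so two of them fit
  lvcreate -l 381544 -n db-0 ceph-c7078851-d3c1-4745-96b6-f98a45d3da93
  lvcreate -l 381544 -n db-1 ceph-c7078851-d3c1-4745-96b6-f98a45d3da93

  # then point ceph-volume at the pre-created LVs, e.g.:
  ceph-volume lvm prepare --data /dev/sdb \
      --block.db ceph-c7078851-d3c1-4745-96b6-f98a45d3da93/db-0 \
      --block.wal <wal-vg>/<wal-lv>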
Gheorghita BUTNARU,
Gheorghe Asachi Technical University of Iasi