Hi,

to have more meaningful names, I also prefer LVM for the DB drive. Create
the volume on the OSD data drive by hand too:

vgcreate ceph-block-0 /dev/sda
lvcreate -l 100%FREE -n block-0 ceph-block-0

DB drive:

vgcreate ceph-db-0 /dev/nvme0n1
lvcreate -L 50G -n db-0 ceph-db-0
lvcreate -L 50G -n db-1 ceph-db-0
...

ceph-volume lvm create --bluestore --data ceph-block-0/block-0 --block.db ceph-db-0/db-0

Unfortunately this is not ceph-disk, as requested in the original question.

so long
Thomas

On 22/10/2019 at 08:17, Ingo Schmidt wrote:
> Hi Frank,
>
> We use such a setup on our Nautilus cluster. I manually partitioned the
> NVMe drive into 8 equally sized partitions with fdisk (and saved the
> partition layout to a file for later reference). You can then create
> OSDs with
>
>> ceph-volume lvm create --bluestore --data /dev/sd<x> --block.db /dev/nvme0n1p<partition-nr>
>
> This will create an OSD on /dev/sdx and put the block.db and the WAL on
> the corresponding partition of the NVMe.
>
> greetings
> Ingo
>
> ------------------------------------------------------------------------
> *From: *"Frank R" <frankaritchie@xxxxxxxxx>
> *To: *"ceph-users" <ceph-users@xxxxxxxx>
> *Sent: *Tuesday, 22 October 2019 03:06:55
> *Subject: *multiple nvme per osd
>
> Hi all,
>
> Has anyone successfully created multiple partitions on an NVMe device
> using ceph-disk?
>
> If so, which commands were used?
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
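P.S. The repetitive lvcreate series above (one db-N volume per OSD) can be
scripted. A minimal sketch, using the example names from above (ceph-db-0,
50G per volume, 8 OSDs per NVMe as in Ingo's layout); shown as a dry run
with echo printing each command -- drop the echo to actually create the LVs:

```shell
# Dry-run sketch: print the lvcreate command for each of 8 DB volumes
# on one NVMe volume group. Names and sizes are examples; adjust to
# your drive. Remove "echo" to execute for real (requires root + LVM).
vg=ceph-db-0
for i in $(seq 0 7); do
  echo lvcreate -L 50G -n "db-$i" "$vg"
done
```

Each printed line corresponds to one DB volume that can then be passed to
ceph-volume as --block.db ceph-db-0/db-N.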