Hi, Sage

Although we have ceph-volume now, ceph-volume cannot work as expected in most scenarios. For example, when we use:

ceph-volume lvm prepare --filestore --data /dev/sda --journal /dev/sdl1
or
ceph-volume lvm prepare --bluestore --data /dev/sda --block.db /dev/sdl1

1. We need to create a Physical Volume for /dev/sda before it can be used, and ceph-volume does not help us do that (see the first sketch in the P.S. below).

2. The argument to --journal or --block.db must be an existing partition such as /dev/sdl1; it cannot be a raw, unpartitioned device such as /dev/sdl. ceph-disk, by contrast, accepts a raw device and creates the partition automatically with sgdisk (see the second sketch in the P.S. below).

3. We must explicitly use --journal, or --block.db and --block.wal, to tell ceph-volume where to put the data. It cannot automatically partition the device or create multiple logical volumes for the different usages, such as:

lvcreate --size 100M VG -n osd-block-meta-{uuid}
lvcreate --size {bluestore_block_db_size} VG -n osd-block-db-{uuid}
lvcreate --size {bluestore_block_wal_size} VG -n osd-block-wal-{uuid}
lvcreate -l 100%FREE VG -n osd-block-{uuid}    (use the remaining space for data)

So it now seems hard to switch from ceph-disk to ceph-volume, and even if we do, we need to do a lot of extra work in ceph-ansible to prepare all of the above.

Is there any plan to enhance this functionality, or a design spec for ceph-volume? We would be glad to help with this.

Regards
Ning Yao
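
P.S. To make the extra work concrete, here is roughly what we have to script by hand today before ceph-volume can run. This is only a sketch: the VG name ceph-vg and the DB/WAL sizes are examples of our own choosing:

# Create the PV/VG that ceph-volume expects to already exist
pvcreate /dev/sda
vgcreate ceph-vg /dev/sda

# Carve out the LVs for the different usages
lvcreate --size 1G ceph-vg -n osd-block-db-{uuid}
lvcreate --size 1G ceph-vg -n osd-block-wal-{uuid}
lvcreate -l 100%FREE ceph-vg -n osd-block-{uuid}

# Only now can ceph-volume be invoked, with every LV spelled out
ceph-volume lvm prepare --bluestore \
    --data ceph-vg/osd-block-{uuid} \
    --block.db ceph-vg/osd-block-db-{uuid} \
    --block.wal ceph-vg/osd-block-wal-{uuid}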
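
And for comparison, the automatic partitioning that ceph-disk performs on a raw device boils down to something like the following. Again only a sketch: the journal size is an example, and ceph-disk additionally tags the partition with Ceph-specific typecode GUIDs, which I omit here:

# Create partition 1 (journal) on the raw device, then re-read the partition table
sgdisk --new=1:0:+10G --change-name=1:'ceph journal' /dev/sdl
partprobe /dev/sdl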