cephadm will handle the LVM for you when you deploy using an OSD specification. For example, we have NVMe and rotational drives, and cephadm automatically deploys servers with the DB/WAL on NVMe and the data on the rotational drives, with a limit of 12 rotational drives per NVMe device - it handles all the LVM magic as long as we feed it bare drives. If you're happy with how it works, a well-written OSD specification makes management and expansion fairly easy compared to doing it all manually (prior to cephadm I had scripts I'd written to bootstrap the storage on each Ceph node before deployment). A rough sketch of the kind of spec I mean is at the bottom of this message.

On Tue, Jul 28, 2020 at 11:25 PM Jason Borden <jason@xxxxxxxxxxxxxxxxx> wrote:
> Hi Robert!
>
> Thanks for answering my question. I take it you're working a lot with Ceph these days! On my pre-Octopus clusters I did use LVM backed by partitions, but I always kind of wondered whether it was good practice, since it added an additional layer and obscured the underlying disk topology. Then on this new Octopus cluster I wanted to use the new cephadm approach for management, and it seems to steer you away from using partitions or LVM directly, hence my question. I don't really have the option of not using partitions in this particular instance; I was merely curious whether there was a particular reason that cephadm doesn't consider partitions (or LVM) to be "available" devices. All the storage in this cluster is the same, so there's no need to split metadata onto faster storage in my case. Anyway, it's good to hear from you. Hope you and your family are doing well.
>
> Thanks,
> Jason
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
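
For reference, here's a rough sketch of the kind of OSD spec described above - treat it as an illustration, not a drop-in file. The service_id, the host_pattern, and the db_slots line are my own placeholders/assumptions (in particular, how the "12 rotational per NVMe" ratio is expressed, and whether db_slots is honoured, varies by Ceph release), so check the drive group documentation for your version:

# osd_spec.yml - illustrative only; names and values are placeholders
service_type: osd
service_id: hdd_with_nvme_db      # hypothetical name
placement:
  host_pattern: '*'               # apply to every host cephadm manages; narrow as needed
data_devices:
  rotational: 1                   # OSD data goes on the spinning drives
db_devices:
  rotational: 0                   # DB/WAL goes on the non-rotational (NVMe) drives
db_slots: 12                      # assumption: carve each NVMe into ~12 DB slots

Something like "ceph orch apply -i osd_spec.yml" applies it, and newer releases can preview the resulting OSD layout with a dry run before committing, which is worth doing the first time.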