> However the meta-data is saved in a different LV, isn't it? I.e. isn't
> it practically the same as if you'd have used GPT partitions?

No, it's not; it is saved in LVM tags. LVM takes care of storing these in a
transparent way, somewhere other than in a separate partition.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>
Sent: 25 March 2021 13:29:10
To: Frank Schilder
Cc: Marc; Nico Schottelius; ceph-users@xxxxxxx
Subject: Re: Re: LVM vs. direct disk access

Frank Schilder <frans@xxxxxx> writes:

> I think there are a couple of reasons for LVM OSDs:
>
> - bluestore cannot handle multi-path devices, you need LVM here
> - the OSD meta-data does not require a separate partition

However, the meta-data is saved in a different LV, isn't it? I.e. isn't
it practically the same as if you'd have used GPT partitions?

> - it is easy to provision 2 or more OSDs per disk
> - LVM's dm_cache is an alternative to separate block/db devices, with
>   the advantage that it can be dynamically re-sized at run-time and
>   also allows deviating from the 3/30/300 rule without wasting fast
>   storage capacity; for example, we plan to have 1 TB of dm_cache per
>   spinning disk on NVMe in the future; this would not only fit WAL/DB,
>   it would also cache hot data; in addition, one can configure it not
>   to promote on first hit, to prevent cache wiping by backup software

The last one is actually quite interesting, and I've added it to our
ceph todo list for the future.

> I find it much easier to administrate LVM OSDs. I'm also using
> customized scripts, and the ceph-volume lvm command suite simplifies
> things a lot.

Thanks a lot for the feedback; this is really helpful for us to see
real use cases of LVM.

Best regards from sunny Switzerland,

Nico

--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
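
For anyone curious where that metadata ends up: the tags Frank mentions can
be inspected with the stock LVM tools or decoded by ceph-volume itself. A
minimal sketch, assuming a host with ceph-volume-created OSDs (the actual
LV/VG names and tag values will differ per host):

    # Show the raw tags ceph-volume attaches to each OSD logical volume
    lvs -o lv_name,vg_name,lv_tags
    # Have ceph-volume decode the same tags into readable per-OSD metadata
    ceph-volume lvm list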
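
On the "2 or more OSDs per disk" point, ceph-volume can split a device in
one go; a sketch, with /dev/sdb standing in as a placeholder for whatever
blank device is being provisioned:

    # Carve two OSDs out of a single device (device name is a placeholder)
    ceph-volume lvm batch --osds-per-device 2 /dev/sdb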
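
And for the dm_cache idea, the standard lvmcache workflow below is roughly
what attaching a 1 TB NVMe cache to an HDD-backed OSD LV would look like;
the device, VG and LV names (/dev/nvme0n1, vg_osd0, osd0) are illustrative
placeholders, not taken from the thread:

    # Add the NVMe to the HDD's volume group, create a cache pool on it,
    # then attach that pool to the OSD's logical volume as dm-cache
    pvcreate /dev/nvme0n1
    vgextend vg_osd0 /dev/nvme0n1
    lvcreate --type cache-pool -L 1T -n osd0_cache vg_osd0 /dev/nvme0n1
    lvconvert --type cache --cachepool vg_osd0/osd0_cache vg_osd0/osd0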