Re: LVM vs. direct disk access

Frank Schilder <frans@xxxxxx> writes:

> I think there are a couple of reasons for LVM OSDs:
>
> - bluestore cannot handle multi-path devices, you need LVM here
> - the OSD meta-data does not require a separate partition

However, the meta-data is saved in a different LV, isn't it? I.e. isn't it
practically the same as if you had used GPT partitions?
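
For reference, as far as I can see ceph-volume keeps the OSD meta-data as
LVM tags on the block LV itself, which is roughly how it avoids the extra
partition. A minimal way to inspect this (output abbreviated, tag values
are only placeholders):

    lvs -o lv_name,vg_name,lv_tags
    # ... ceph.osd_id=0,ceph.type=block,ceph.osd_fsid=... ...
    ceph-volume lvm list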

> - it is easy to provision 2 or more OSDs per disk
> - LVM's dm_cache is an alternative to separate block/db devices with
> the features that it can be dynamically re-sized at run-time and also
> allows to deviate from the 3/30/300 without wasting fast storage
> capacity; for example, we plan to have 1TB dm_cache per spinning disk
> on NVMe in the future; this would not only fit WAL/DB, it would also
> cache hot data; in addition one can configure it not to promote on
> first hit to prevent cache wiping by backup software

The last one is actually quite interesting, and I've added it to our Ceph
todo list for the future.
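
For the archives, a minimal sketch of what such a dm_cache setup could look
like (device, VG and LV names are made up; the 1T cache size follows
Frank's example):

    # one HDD-backed origin LV per OSD plus a 1T cache LV on NVMe
    pvcreate /dev/sdb /dev/nvme0n1
    vgcreate vg_osd0 /dev/sdb /dev/nvme0n1
    lvcreate -n osd0 -l 100%PVS vg_osd0 /dev/sdb
    lvcreate -n cache0 -L 1T vg_osd0 /dev/nvme0n1
    lvcreate -n cache0_meta -L 1G vg_osd0 /dev/nvme0n1
    lvconvert --type cache-pool --poolmetadata vg_osd0/cache0_meta vg_osd0/cache0
    lvconvert --type cache --cachepool vg_osd0/cache0 vg_osd0/osd0
    # the cache can later be detached and re-attached with a different size
    # without touching the origin LV:
    lvconvert --splitcache vg_osd0/osd0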

> I find it much easier to administer LVM OSDs; I'm also using
> customized scripts, and the ceph-volume lvm command suite simplifies
> things a lot.

Thanks a lot for the feedback; it is really helpful for us to see real
use cases of LVM.
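
For anyone else following the thread, this is roughly the kind of
ceph-volume usage we have in mind (a sketch only; device and LV names are
made up):

    # single BlueStore OSD on a pre-created LV
    ceph-volume lvm create --bluestore --data vg_osd0/osd0
    # or provision two OSDs per fast device in one go
    ceph-volume lvm batch --bluestore --osds-per-device 2 /dev/nvme1n1
    # show the resulting OSDs together with their LVM tags
    ceph-volume lvm list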

Best regards from sunny Switzerland,

Nico

--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


