Re: Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access)

> What is the reasoning behind switching to LVM? Does it make sense to go
> through (yet) another layer to access the disk? Why create this
> dependency and added complexity? Isn't it fine as it is?

Indeed, the question is why one tool is being replaced by another without preserving its functionality.
And why LVM, why not bcache?

It seems to me that someone on the dev team has pushed the idea that LVM solves all problems.
But it also adds overhead, and since it is a kernel module, an update can bring a performance drop, changes in module settings, and so on.
I understand that this is a solution for Red Hat Storage, but for a community running different distributions and hardware it may be superfluous.
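To make the "another layer" point concrete, here is roughly what lsblk shows in each case (the device, VG and LV names below are invented for the example):

    # ceph-disk style: the OSD data partition sits directly on the disk
    sdb          8:16   0  1.8T  0 disk
    └─sdb1       8:17   0  1.8T  0 part  /var/lib/ceph/osd/ceph-0

    # ceph-volume lvm style: a device-mapper/LVM layer sits in between
    sdb                      8:16   0  1.8T  0 disk
    └─ceph--vg-osd--block  253:0    0  1.8T  0 lvm

Every read and write to the OSD now passes through that extra device-mapper mapping.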

I would like the possibility of preparing OSDs with direct disk access to be restored, even if it is not the default.
This would also preserve existing ceph-ansible configurations. Honestly, before this deprecation I didn't even know whether my OSDs had been created by ceph-disk, ceph-volume, or something else.
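For reference, the change at the command level is something like this (the disk path is only an example):

    # old way, now deprecated: ceph-disk partitions the device directly
    ceph-disk prepare /dev/sdb

    # new way: ceph-volume builds a VG/LV on the device and deploys the OSD on it
    ceph-volume lvm create --data /dev/sdb

The commands look similar, but the second one first creates a PV/VG/LV on the device, which is exactly the extra layer I am talking about.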

k
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
