Re: Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access)

> Answers:
> - unify setup, support for crypto & more
Unify setup by adding a dependency? And isn't there already support for 
crypto as it is?
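(For what it is worth, both tools expose dmcrypt; a minimal sketch, where the 
device name is only an example and either command would wipe that disk:

    # old, now-deprecated ceph-disk way, already doing encryption
    ceph-disk prepare --bluestore --dmcrypt /dev/sdb

    # new ceph-volume way, same encryption but via lvm
    ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdb
)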

> - none
The costs of lvm can be argued about. Having an extra layer to go through is 
worse than having nothing to go through.
https://www.researchgate.net/publication/284897601_LVM_in_the_Linux_environment_Performance_examination
https://hrcak.srce.hr/index.php?show=clanak&id_clanak_jezik=216661
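
For numbers on your own hardware rather than those papers, a rough sketch with 
fio (device, VG and LV names are placeholders, and this destroys any data on 
the disk):

    # baseline: random 4k writes straight to the raw block device
    fio --name=raw --filename=/dev/sdX --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

    # same disk, but behind a single LV
    pvcreate /dev/sdX
    vgcreate testvg /dev/sdX
    lvcreate -l 100%FREE -n testlv testvg
    fio --name=lvm --filename=/dev/testvg/testlv --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

Comparing the IOPS and latency of the two runs gives a per-cluster answer to 
"what does lvm cost here".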

If there were no cost, there would be no discussion. But I cannot believe 
there is no cost. And if there is a cost, the reason for adding it should not 
be something like unifying setup or crypto. Ceph has been around for a long 
time already, and it does not look like its users have had many problems.
What about the clusters that have thousands of disks: should they migrate to 
lvm just for fun? Direct disk access should stay.
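
(For existing non-lvm OSDs, the "simple" mode that Nick links to further down 
in this thread is supposed to keep ceph-disk style, partition-based OSDs 
running without migrating them to lvm; a rough sketch, where the partition is 
only an example:

    # capture the layout of an existing ceph-disk OSD into /etc/ceph/osd/*.json
    ceph-volume simple scan /dev/sdb1

    # activate all scanned OSDs from those json files at boot, instead of
    # relying on the old udev/ceph-disk triggers
    ceph-volume simple activate --all
)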




-----Original Message-----
From: ceph@xxxxxxxxxxxxxx [mailto:ceph@xxxxxxxxxxxxxx] 
Sent: Friday, 8 June 2018 12:47
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Why the change from ceph-disk to ceph-volume 
and lvm? (and just not stick with direct disk access)

Meh ...

I have other questions:
- why not skip LVM and just stick with direct disk access?
- what are the costs of LVM (performance, latency, etc.)?


Answers:
- unify setup, support for crypto & more
- none

Tldr: that technical choice is fine, nothing to argue about.


On 06/08/2018 07:15 AM, Marc Roos wrote:
> 
> I am getting the impression that not everyone understands the subject 
> that has been raised here.
> 
> Why do OSDs need to be set up via lvm, and why not stick with direct disk 
> access as it is now?
> 
> - Bluestore was created to cut out some filesystem overhead,
> - 10Gb networking is recommended everywhere because of better latency. (I 
> even posted here something to make ceph perform better on 1Gb ethernet; it 
> was disregarded because it would add complexity. Fine, I can understand 
> that.)
> 
> And then, because of some start-up/automation issues (the only thing being 
> mentioned here so far), let's add the lvm tier? Introducing a layer that is 
> constantly there and adds some overhead (maybe not that much) to every read 
> and write operation?
> 
> 
> 
> 
> 
> -----Original Message-----
> From: Nick Fisk [mailto:nick@xxxxxxxxxx]
> Sent: Friday, 8 June 2018 12:14
> To: 'Konstantin Shalygin'; ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Why the change from ceph-disk to ceph-volume 
> and lvm? (and just not stick with direct disk access)
> 
> http://docs.ceph.com/docs/master/ceph-volume/simple/
> 
> ?
> 
>  
> 
> From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of 
> Konstantin Shalygin
> Sent: 08 June 2018 11:11
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Why the change from ceph-disk to ceph-volume 
> and lvm? (and just not stick with direct disk access)
> 
>  
> 
> 	What is the reasoning behind switching to lvm? Does it make sense to go
> 	through (yet) another layer to access the disk? Why create this
> 	dependency and added complexity? It is fine as it is, or not?
> 
> In fact, the question is why one tool is being replaced by another without 
> preserving its functionality.
> Why lvm, why not bcache?
> 
> It seems to me that someone on the dev team has pushed the idea that lvm 
> solves all problems.
> But it also adds overhead, and since it is a kernel module, an update can 
> bring a performance drop, changes in module settings, etc.
> I understand that for Red Hat Storage this is a solution, but for a 
> community with different distributions and hardware it may be superfluous.
> I would like the possibility of preparing OSDs with direct access to be 
> restored, even if it is not the default.
> This would also preserve existing ceph-ansible configurations. Actually, 
> before this deprecation I did not even know whether my OSDs were created by 
> ceph-disk, ceph-volume or something else.
> 
> 
> 
> 
> 
> k
> 
> 
> 
> 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


