Re: Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access)


 



I actually tried searching the ML before bringing up this topic, because 
I do not follow the logic behind choosing this direction.

- Bluestore was created to cut out some filesystem overhead; 
- 10Gb networking is recommended everywhere because of its lower latency. 
(I even posted here something to make Ceph perform better on 1Gb 
ethernet; it was disregarded because it would add complexity. Fine, I 
can understand that.)

And then, because of some start-up/automation issues, let's add an LVM 
tier? Introducing a layer that is constantly there and adds some 
overhead (maybe not that much) to every read and write operation? 

I see ceph-disk as a tool to prepare the OSD, and I do the rest 
myself, without ceph-deploy or Ansible, because I trust what I see 
myself type more than what someone else scripted. I don't have any 
startup problems.
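
For reference, what I mean is roughly the following (device names are 
just placeholders for this example):

    # prepare and activate an OSD the ceph-disk way
    ceph-disk prepare /dev/sdb
    ceph-disk activate /dev/sdb1

    # the rough ceph-volume equivalent, which puts an LV underneath
    ceph-volume lvm create --data /dev/sdb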

Do assume I am not an expert in any field, but it is understandable 
that putting something (LVM) between Ceph and the disk, instead of 
nothing, should carry a performance penalty. 
I know you can do nice tricks with disks and LVM, but those pros fall 
into the same category as the suggestions people keep making about 
putting disks in RAID.

Let alone the risk you are taking if there turns out to be a 
significant performance penalty:
https://www.researchgate.net/publication/284897601_LVM_in_the_Linux_environment_Performance_examination
https://hrcak.srce.hr/index.php?show=clanak&id_clanak_jezik=216661
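
If someone wants to measure that overhead on their own hardware, a 
rough check would be running the same fio job against a spare raw 
device and against an LV carved out of that same device (destructive 
test; device and VG names are placeholders):

    fio --name=raw --filename=/dev/sdx --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

    pvcreate /dev/sdx && vgcreate vgtest /dev/sdx
    lvcreate -l 100%FREE -n lvtest vgtest
    fio --name=lvm --filename=/dev/vgtest/lvtest --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based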



-----Original Message-----
From: David Turner [mailto:drakonstein@xxxxxxxxx] 
Sent: donderdag 31 mei 2018 23:48
To: Marc Roos
Cc: ceph-users
Subject: Re:  Why the change from ceph-disk to ceph-volume 
and lvm? (and just not stick with direct disk access)

Your question assumes that ceph-disk was a good piece of software.  It 
had a bug list a mile long and nobody working on it.  A common example 
was how easy it was to mess up any part of the dozens of components 
that allowed an OSD to autostart on boot.  One of the biggest problems 
was when ceph-disk was doing its thing, an OSD would take longer than 
3 minutes to start, and ceph-disk would give up on it.
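
(For anyone still hitting that today, the usual workaround is raising 
that activation timeout with a systemd drop-in. This is only a sketch, 
assuming your ceph-disk@.service reads the CEPH_DISK_TIMEOUT 
environment variable; check your own unit file first.)

    mkdir -p /etc/systemd/system/ceph-disk@.service.d
    printf '[Service]\nEnvironment=CEPH_DISK_TIMEOUT=10000\n' \
        > /etc/systemd/system/ceph-disk@.service.d/timeout.conf
    systemctl daemon-reload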

That is a little bit of the background on why a new solution was sought 
and why ceph-disk is being removed entirely.  LVM was chosen as a way to 
implement something other than partitions and udev magic, while still 
carrying over the information that was needed from all of that, in a 
better solution.  There has been a lot of talk about this on the ML.
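
As I understand the design, the metadata that used to be scattered 
across GPT partition GUIDs and udev rules now lives as plain LVM tags 
on the OSD's logical volume, so you can inspect it directly (output 
shape will vary by version):

    # show the ceph.* tags ceph-volume attaches to each LV
    lvs -o lv_name,vg_name,lv_tags

    # or let ceph-volume report what it knows about the OSDs on this host
    ceph-volume lvm list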

On Thu, May 31, 2018 at 5:23 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> 
wrote:



	What is the reasoning behind switching to LVM? Does it make sense to
	go through (yet) another layer to access the disk? Why create this
	dependency and added complexity? It is fine as it is, is it not?
	
	
	
	


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



