Re: Running Jewel and Luminous mixed for a longer period

Hello,

On Tue, 2 Jan 2018 01:23:45 +0100 Ronny Aasen wrote:

> On 30.12.2017 15:41, Milanov, Radoslav Nikiforov wrote:
> > Performance as well - in my testing FileStore was much quicker than BlueStore.  
> 
> 
> with filestore you often have an SSD journal in front, which will often 
> mask/hide slow spinning-disk write performance until the journal size 
> becomes the bottleneck.
> 
The journal size basically never becomes the bottleneck: with default
settings Ceph starts flushing very quickly and sustained writes then hit
the effective HDD speed anyway.
The journal does deal nicely with short/small bursts, though.
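
Rough back-of-the-envelope in Python (all numbers below are made-up
examples, not measurements from my clusters):

# Toy model: how long an SSD journal can absorb writes faster than the
# backing HDD before sustained throughput drops to HDD speed.
journal_size_gb = 10       # assumed journal partition size
hdd_speed_mb_s = 120       # assumed sustained HDD write speed
client_ingest_mb_s = 400   # assumed incoming write rate during a burst

journal_size_mb = journal_size_gb * 1024
fill_rate_mb_s = client_ingest_mb_s - hdd_speed_mb_s  # net journal fill rate

burst_seconds = journal_size_mb / fill_rate_mb_s
print(f"Journal absorbs ~{burst_seconds:.0f}s of a {client_ingest_mb_s} MB/s burst")
print(f"Anything longer is capped at ~{hdd_speed_mb_s} MB/s by the HDD")

Even a generously sized journal only buys you half a minute or so at
those rates, and with the default sync intervals flushing starts long
before it fills.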

> with bluestore only the metadata DB and WAL are on SSD, so there is no 
> double write and no journal bottleneck. but write latency will be the 
> speed of the disk, not the speed of the SSD journal. this will feel 
> like a write performance regression.
> 
Small writes with BlueStore will also go to the DB (SSD), since the Ceph
developers found that latencies were rather bad otherwise.
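
For reference, a quick Python sketch of how I understand that small-write
path (not actual BlueStore code; the 32 KiB figure is what I believe the
bluestore_prefer_deferred_size_hdd default to be, so check your version):

# Writes up to the deferred-size threshold go through the RocksDB WAL on
# the SSD first, larger writes go straight to the HDD data device.
PREFER_DEFERRED_SIZE_HDD = 32 * 1024   # assumed Luminous default, verify locally

def write_path(io_size_bytes: int) -> str:
    if io_size_bytes <= PREFER_DEFERRED_SIZE_HDD:
        return "deferred: WAL/DB on SSD first, flushed to HDD later"
    return "direct: written to the HDD data device"

for size in (4 * 1024, 16 * 1024, 64 * 1024, 1024 * 1024):
    print(f"{size // 1024:>5} KiB -> {write_path(size)}")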

From where I'm standing, BlueStore is still rather wet behind the ears,
with probably some bugs lurking (a file system, even one as simple as
this, isn't trivial) and, more importantly, room for performance
improvements.

> you can use bcache in front of bluestore to regain the "journal + 
> double write" write characteristics of filestore+journal.
> 
I'm using bcache and have tested LVM cache in a non-Ceph (DRBD) setup.
LVM cache is hilariously complex, poorly (and outdatedly) documented, and
performs far worse than bcache under normal/typical workloads.
OTOH, while bcache will give you nice improvements (on reads as well), it
isn't bug-free (and I'm not even thinking about the 4.14 data corruption
issue), and when pushed hard (when it wants/needs to flush to the HDD) it
will overload things and doesn't honor I/O priorities, as others have
mentioned here.

I'm using bcache for now because in my use case the issues above won't
show up, but I'd be wary of using it with Ceph in a cluster where I
don't control/know the I/O patterns.
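
For what it's worth, these are the knobs I look at when bcache starts
flushing hard; a quick Python sketch assuming the cached device shows up
as bcache0 (the paths are the usual bcache sysfs attributes, the values
and suggested tuning are from my setup, so verify on yours):

# Read (and optionally tune) the bcache writeback knobs via sysfs.
# Assumes the backing device is exposed as /dev/bcache0; root is needed
# to change anything.
from pathlib import Path

BCACHE = Path("/sys/block/bcache0/bcache")

def show(attr: str) -> None:
    print(f"{attr}: {(BCACHE / attr).read_text().strip()}")

for attr in ("cache_mode", "writeback_percent", "sequential_cutoff", "dirty_data"):
    show(attr)

# A lower writeback_percent keeps less dirty data in the cache, so there
# is less to flush in one go when the HDD finally has to catch up.
# Uncomment to apply (example value, tune to taste):
# (BCACHE / "writeback_percent").write_text("5")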

Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Rakuten Communications
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


