Re: bcache vs flashcache vs cache tiering

Hello Sage,

On Wed, 15 Feb 2017 22:10:26 +0000 (UTC) Sage Weil wrote:

> On Wed, 15 Feb 2017, Nick Fisk wrote:
> > Just an update. I spoke to Sage today and the general consensus is that
> > something like bcache or dmcache is probably the long term goal, but work
> > needs to be done before it's ready for prime time. The current tiering
> > functionality won't be going away in the short term and not until there is a
> > solid replacement with bcache/dmcache/whatever. But from the sounds of it,
> > there won't be any core dev time allocated to it.  
> 
> Quick clarification: I meant the cache tiering isn't going anywhere until 
> there is another solid *rados* tiering replacement.  The rados tiering 
> plans will keep metadata in the base pool but link to data in one or more 
> other (presumably colder) tiers (vs a sparse cache pool in front of the 
> base pool).
>
I see where you're coming from (basically EC pools, as you mention below),
but for "burst buffer" users this looks rather glum.

In my use case (mentioned a dozen times by now, I'm sure ^o^), we have
hundreds of VMs where the main application pretty much constantly writes
to (small, cyclic) logs, locks and state files, creating a steady stream
of (currently) about 8-10 MB/s and 1000 IOPS (as Ceph sees them).

This results in a comparatively small but quite hot working set; on quiet
days (no VM reboots, upgrades, etc.) we see almost no tier promotions.
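
For reference, the tier this workload sits on is just the standard
writeback overlay, tuned so the hot set stays in the cache instead of
bouncing in and out. A rough sketch (pool names and values are
illustrative, not our production settings):

  # put a replicated cache pool in front of the base pool
  ceph osd tier add base-pool cache-pool
  ceph osd tier cache-mode cache-pool writeback
  ceph osd tier set-overlay base-pool cache-pool

  # hit-set tracking drives the agent's promote/flush/evict decisions
  ceph osd pool set cache-pool hit_set_type bloom
  ceph osd pool set cache-pool hit_set_count 8
  ceph osd pool set cache-pool hit_set_period 3600

  # cache size plus dirty/full thresholds for flushing and eviction
  ceph osd pool set cache-pool target_max_bytes 1099511627776
  ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
  ceph osd pool set cache-pool cache_target_full_ratio 0.8

  # require hits in recent hit sets before promoting an object
  ceph osd pool set cache-pool min_read_recency_for_promote 1
  ceph osd pool set cache-pool min_write_recency_for_promote 1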

While fast metadata is nice in the scenario you describe above, I would
basically have to go all-SSD, or at least deploy bcache everywhere, to
get the performance needed without the current cache-tier design.
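
To spell out what "deploy bcache everywhere" would mean in practice: an
SSD/NVMe cache set per node, fronting the spinners each OSD sits on. A
minimal sketch, assuming bcache-tools is installed (device names are
examples only, not from our setup):

  # format the cache (SSD) and backing (HDD) devices and attach them
  make-bcache -C /dev/nvme0n1 -B /dev/sdb

  # register both with the kernel (normally udev handles this at boot)
  echo /dev/nvme0n1 > /sys/fs/bcache/register
  echo /dev/sdb > /sys/fs/bcache/register

  # the default mode is writethrough; writeback gives the latency win
  echo writeback > /sys/block/bcache0/bcache/cache_mode

The OSD would then be created on /dev/bcache0 instead of /dev/sdb, which
is exactly the rebuild-all-your-OSDs cost Nick mentions below.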

My 2 yen,

Christian
 
> That is, you should consider rados tiering as totally orthogonal to 
> tiering devices beneath a single OSD with dm-cache/bcache/flashcache.
>  
> > I'm not really too bothered what the solution ends up being, but as we have
> > discussed the flexibility to shrink/grow the cache without having to
> > rebuild all your nodes/OSDs is a major, almost essential, benefit to me.
> 
> Exactly.  The new rados tiering approach would still provide this.
>  
> > I've still got some ideas which I think can improve performance of the
> > tiering functionality, but I'm unsure whether I have the coding skills to
> > pull it off. This might motivate me though to try and improve it in its
> > current form.  
> 
> FWIW the effectiveness of the existing rados cache tiering will also 
> improve significantly with the EC overwrite support.  Whether it is 
> removed as part of a new/different rados tiering function is really a 
> matter of how the code refactor works out and how difficult it 
> is to support vs the use cases it covers that the new tiering does not.
> 
> sage
> 
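
P.S. On the EC overwrite support mentioned above: Kraken already ships an
experimental version of it as a per-pool flag (needs BlueStore OSDs
underneath; the pool name below is a placeholder):

  # allow partial overwrites on an erasure-coded pool (experimental in Kraken)
  ceph osd pool set ec-pool allow_ec_overwrites true

which is what should eventually let RBD and CephFS put data directly on
an EC pool rather than requiring a cache tier in front of it.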


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


