> On Wed, 15 Feb 2017, Nick Fisk wrote:
> > Just an update. I spoke to Sage today and the general consensus is
> > that something like bcache or dmcache is probably the long-term goal,
> > but work needs to be done before it's ready for prime time. The current
> > tiering functionality won't be going away in the short term, and not
> > until there is a solid replacement with bcache/dmcache/whatever. But
> > from the sounds of it, there won't be any core dev time allocated to it.
>
> Quick clarification: I meant the cache tiering isn't going anywhere until
> there is another solid *rados* tiering replacement. The rados tiering plans
> will keep metadata in the base pool but link to data in one or more other
> (presumably colder) tiers (vs a sparse cache pool in front of the base pool).
>
> That is, you should consider rados tiering as totally orthogonal to tiering
> devices beneath a single OSD with dm-cache/bcache/flashcache.
>
> > I'm not really too bothered what the solution ends up being, but as we
> > have discussed, the flexibility to shrink/grow the cache without having
> > to rebuild all your nodes/OSDs is a major, almost essential, benefit to me.
>
> Exactly. The new rados tiering approach would still provide this.
>
> > I've still got some ideas which I think can improve performance of the
> > tiering functionality, but I'm unsure as to whether I have the coding
> > skills to pull it off. This might motivate me, though, to try and
> > improve it in its current form.
>
> FWIW the effectiveness of the existing rados cache tiering will also improve
> significantly with the EC overwrite support. Whether it is removed as part
> of a new/different rados tiering function in rados is really a function of
> how the code refactor works out and how difficult it is to support vs the
> use cases it covers that the new tiering does not.

Oh ok, awesome then. Keep me in the loop, I will be there with my trusty old
testing rig :-)

> sage

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
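[For anyone finding this thread in the archives: the existing rados cache
tiering being discussed is configured with the `ceph osd tier` commands.
A rough sketch follows, assuming hypothetical pool names `base-pool` and
`cache-pool`; thresholds are illustrative, not recommendations.]

```shell
# Attach cache-pool as a writeback cache tier in front of base-pool
# (pool names here are placeholders; both pools must already exist).
ceph osd tier add base-pool cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay base-pool cache-pool

# Basic tuning: track object hits with a bloom filter hit set,
# cap the cache size, and set flush/evict thresholds.
ceph osd pool set cache-pool hit_set_type bloom
ceph osd pool set cache-pool target_max_bytes 1099511627776   # 1 TiB cap
ceph osd pool set cache-pool cache_target_dirty_ratio 0.4     # start flushing
ceph osd pool set cache-pool cache_target_full_ratio 0.8      # start evicting

# This is also what makes shrinking/removing the cache non-disruptive:
# flush and evict everything, then detach the tier.
rados -p cache-pool cache-flush-evict-all
ceph osd tier remove-overlay base-pool
ceph osd tier remove base-pool cache-pool
```

This flush-then-detach sequence is the grow/shrink flexibility mentioned
above: the cache tier can be added or removed at the pool level without
rebuilding nodes or OSDs, unlike a dm-cache/bcache device configured
beneath each OSD.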