I've been testing flashcache, bcache, dm-cache and even dm-writeboost in production Ceph clusters.
The only one that works reliably and gives the speed we need is bcache; all the others failed with slow speeds or high latencies.

Stefan

Excuse my typo, sent from my mobile phone.
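For anyone wanting to try the same setup, a minimal bcache sketch looks roughly like this (device names are placeholders, and it assumes bcache-tools is installed):

    # format the HDD as a backing device and the SSD as a cache device
    make-bcache -B /dev/sdb
    make-bcache -C /dev/nvme0n1

    # attach the cache set to the backing device; the cset UUID comes
    # from 'bcache-super-show /dev/nvme0n1'
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach

    # writeback gives the biggest gain (the default is writethrough)
    echo writeback > /sys/block/bcache0/bcache/cache_mode

The OSD is then built on /dev/bcache0 instead of the raw disk.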
On Tue, 14 Feb 2017 22:42:21 -0000 Nick Fisk wrote:

-----Original Message-----
From: Gregory Farnum [mailto:gfarnum@xxxxxxxxxx]
Sent: 14 February 2017 21:05
To: Wido den Hollander <wido@xxxxxxxx>
Cc: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>; Nick Fisk
<nick@xxxxxxxxxx>; Ceph Users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: bcache vs flashcache vs cache tiering
On Tue, Feb 14, 2017 at 8:25 AM, Wido den Hollander <wido@xxxxxxxx>
wrote:
On 14 February 2017 at 11:14, Nick Fisk <nick@xxxxxxxxxx> wrote:
-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
Behalf Of Dongsheng Yang
Sent: 14 February 2017 09:01
To: Sage Weil <notifications@xxxxxxxxxx>
Cc: ceph-devel@xxxxxxxxxxxxxxx; ceph-users@xxxxxxxxxxxxxx
Subject: bcache vs flashcache vs cache tiering
Hi Sage and all,

We are going to use SSDs for cache in Ceph, but I am not sure
which one is the best solution: bcache, flashcache, or a cache
tier?
I would vote for cache tier. Being able to manage it from within
Ceph, instead of having to manage X number of bcache/flashcache
instances, appeals to me more. Also, the last time I looked, Flashcache
seemed unmaintained, and bcache may be heading the same way given the
talk of the new bcachefs. Another point to consider is that a lot of
work has gone into Ceph to ensure data consistency; I never want to be in a
position where I'm trying to diagnose problems that might be caused
by another layer sitting between Ceph and the disk.
However, I know several people on here are using bcache and
potentially getting better performance than with cache tiering, so
hopefully someone will give their views.
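For reference, the cache tier route is managed entirely with ceph CLI commands; a minimal sketch, assuming an existing base pool 'cold-storage' and an SSD-backed pool 'hot-storage' (names are placeholders, values illustrative):

    # stack the hot pool on top of the base pool in writeback mode
    ceph osd tier add cold-storage hot-storage
    ceph osd tier cache-mode hot-storage writeback
    ceph osd tier set-overlay cold-storage hot-storage

    # a hit set is required so the tier can track object access
    ceph osd pool set hot-storage hit_set_type bloom
    ceph osd pool set hot-storage hit_set_count 4
    ceph osd pool set hot-storage hit_set_period 1200

    # sizing targets that drive flushing and eviction
    ceph osd pool set hot-storage target_max_bytes 500000000000
    ceph osd pool set hot-storage cache_target_dirty_ratio 0.4
    ceph osd pool set hot-storage cache_target_full_ratio 0.8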
I am using bcache on various systems and it performs really well. The
cache tiering layer in Ceph is slow: promoting objects is slow, and it also
involves additional RADOS lookups.
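The promotion cost can at least be reined in with the Jewel-era pool and OSD knobs; a sketch (values are illustrative, not recommendations):

    # only promote objects seen in recent hit sets instead of on first touch
    ceph osd pool set hot-storage min_read_recency_for_promote 2
    ceph osd pool set hot-storage min_write_recency_for_promote 2

    # rate-limit promotions cluster-wide (ceph.conf, [osd] section)
    # osd_tier_promote_max_objects_sec = 200
    # osd_tier_promote_max_bytes_sec = 5242880

    # readforward serves cold reads from the base tier rather than
    # promoting them (some releases require --yes-i-really-mean-it)
    ceph osd tier cache-mode hot-storage readforward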
Yeah. Cache tiers have gotten a lot more usable in Ceph, but the use cases
where they're effective are still pretty limited, and I think in-node caching
has a brighter future. We just don't like to maintain the global state that
makes separate caching locations viable, and unless you're doing something
analogous to the supercomputing "burst buffers" (which some people are!),
it's going to be hard to beat something that doesn't have to pay the cost of
extra network hops/bandwidth.
Cache tiering is also not a feature that all the vendors support in their
downstream products, so it will probably see less ongoing investment than
you'd expect from such a system.
Should that be taken as an unofficial sign that the tiering support is likely to fade away?
Nick, you also posted back in October in the "cache tiering deprecated in RHCS 2.0" thread and should remember the deafening silence when I asked that question.

I'm actually surprised that Greg said as much as he did now; unfortunately it doesn't really cover all the questions I had back then, in particular long-term support and bug fixes, not necessarily more features.

We're literally about to order our next cluster, and cache tiering works like a charm for us, even in Hammer. With the (still undocumented) knobs in Jewel and read-forward it will be even more effective.

So, given the lack of any statements, that next cluster will still use the same design as the previous one: BlueStore isn't ready, bcache and the others haven't been tested here to my satisfaction, and we know very well what works and what doesn't.

So: 3 regular (HDD OSD, journal SSD) nodes and 3 cache-tier ones. Dedicated cache-tier nodes allow high-end CPUs to be deployed only in those nodes.

Another point in favor of cache tiering is that it can be added at a later stage, while in-node caching requires an initial design with large local SSDs/NVMes, or at least the space for them, because the journal SSDs most people deploy initially don't tend to be large enough to be effective when used with bcache or similar.
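A design like that keeps the hot pool on the dedicated nodes via CRUSH; a rough sketch with hypothetical host/bucket names (pre-Luminous 'crush_ruleset' syntax):

    # put the three cache-tier hosts under their own CRUSH root
    ceph osd crush add-bucket cache root
    ceph osd crush move cache-node1 root=cache
    ceph osd crush move cache-node2 root=cache
    ceph osd crush move cache-node3 root=cache

    # a rule that only selects OSDs under that root, one replica per host
    ceph osd crush rule create-simple cache_rule cache host

    # point the hot pool at the new rule (id via 'ceph osd crush rule dump')
    ceph osd pool set hot-storage crush_ruleset 1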
I think both approaches have different strengths, and probably the difference between a tiering system and a caching one is what causes some of the problems.

If something like bcache is going to be the preferred approach, then I think more work needs to be done around certifying it for use with Ceph and allowing its behavior to be more controlled by Ceph as well. I assume there are issues around backfilling and scrubbing polluting the cache? Maybe you would want to be able to pass hints down from Ceph, which could also allow per-pool cache behavior?
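On the pollution point, bcache does already expose one relevant knob: backfill and deep-scrub are largely sequential, and bcache bypasses the SSD for IO it detects as sequential. A sketch:

    # sequential IO beyond this threshold goes straight to the backing
    # device (the default is 4M; 0 disables the bypass entirely)
    echo 4M > /sys/block/bcache0/bcache/sequential_cutoff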
According to the RHCS release notes back then, their idea for achieving rainbows and pink ponies was dm-cache.

Christian

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/