Re: bcache vs flashcache vs cache tiering

Hi all,

I'd also prefer not to see cache tiering in its current form go away.
We've explored using it in situations where we have a data pool with
replicas spread across WAN sites, which we then overlay with a fast
cache tier local to the site where most clients will be using the
pool.  This significantly speeds up operations for that set of
clients, as long as the operations don't involve waiting for the cache
to flush data.  In theory we could move the cache tier around (flush,
delete, recreate) as needed.  The drawback, of course, is that clients
not near the cache tier pool would still be forced by Ceph to use it.
What would be really useful is multiple cache pools with rulesets that
let us direct clients by IP, GeoIP proximity lookups, or something
similar.
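
For anyone who hasn't tried it, the overlay and the flush/delete/
recreate cycle look roughly like the following.  This is a sketch
only: the pool names ("base-pool", "site-cache") are placeholders and
the tuning values are illustrative, not recommendations.

    # Attach a site-local cache pool as a writeback tier over the base pool
    ceph osd tier add base-pool site-cache
    ceph osd tier cache-mode site-cache writeback
    ceph osd tier set-overlay base-pool site-cache

    # Writeback tiers need a hit set; cap the cache size so it flushes
    ceph osd pool set site-cache hit_set_type bloom
    ceph osd pool set site-cache target_max_bytes 1099511627776

    # To relocate the tier: flush/evict everything, detach, recreate elsewhere
    rados -p site-cache cache-flush-evict-all
    ceph osd tier remove-overlay base-pool
    ceph osd tier remove base-pool site-cache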

As I see it, the alternative configuration for this case is simply to
map replicas at the appropriate site.  Which approach makes more sense
for us may depend on different factors for different project users.
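
For example, under Jewel-era CRUSH syntax, a rule along these lines
would pin a pool's replicas to hosts under one site (the "site1"
bucket and the rule name are hypothetical, assuming a site level
exists in the CRUSH hierarchy):

    rule site1-local {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take site1
            step chooseleaf firstn 0 type host
            step emit
    }
    # then: ceph osd pool set <pool> crush_ruleset 1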

thanks,
Ben
---
OSiRIS http://www.osris.org

On Thu, Feb 16, 2017 at 3:40 AM, Dongsheng Yang
<dongsheng.yang@xxxxxxxxxxxx> wrote:
> BTW, is there anybody using EnhanceIO?
>
> On 02/15/2017 05:51 PM, Dongsheng Yang wrote:
>>
>> Thanks Nick, Gregory and Wido,
>>     So at least we can say that cache tiering in Jewel is stable enough, I
>> think.
>> I like cache tiering more than the others, but yes, cache tiering has a
>> problem with flushing data between different nodes, which is not a
>> problem for local caching solutions.
>>
>> guys:
>>     Is there any plan to enhance cache tiering to solve this problem? Or,
>> as Nick asked, is cache tiering fading away?
>>
>> Yang
>>
>>
>> On 15/02/2017, 06:42, Nick Fisk wrote:
>>>>
>>>> -----Original Message-----
>>>> From: Gregory Farnum [mailto:gfarnum@xxxxxxxxxx]
>>>> Sent: 14 February 2017 21:05
>>>> To: Wido den Hollander <wido@xxxxxxxx>
>>>> Cc: Dongsheng Yang <dongsheng.yang@xxxxxxxxxxxx>; Nick Fisk
>>>> <nick@xxxxxxxxxx>; Ceph Users <ceph-users@xxxxxxxxxxxxxx>
>>>> Subject: Re:  bcache vs flashcache vs cache tiering
>>>>
>>>> On Tue, Feb 14, 2017 at 8:25 AM, Wido den Hollander <wido@xxxxxxxx>
>>>> wrote:
>>>>>>
>>>>>> On 14 February 2017 at 11:14, Nick Fisk <nick@xxxxxxxxxx> wrote:
>>>>>>
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
>>>>>>> Behalf Of Dongsheng Yang
>>>>>>> Sent: 14 February 2017 09:01
>>>>>>> To: Sage Weil <notifications@xxxxxxxxxx>
>>>>>>> Cc: ceph-devel@xxxxxxxxxxxxxxx; ceph-users@xxxxxxxxxxxxxx
>>>>>>> Subject:  bcache vs flashcache vs cache tiering
>>>>>>>
>>>>>>> Hi Sage and all,
>>>>>>>       We are going to use SSDs for cache in Ceph, but I am not sure
>>>>>>> which is the best solution: bcache, flashcache, or cache tier?
>>>>>>
>>>>>> I would vote for cache tiering. Being able to manage it from within
>>>>>> Ceph, instead of having to manage X number of bcache/flashcache
>>>>>> instances, appeals to me more. Also, the last time I looked,
>>>>>> flashcache seemed unmaintained, and bcache might be going that way
>>>>>> with the talk of the new bcachefs. Another point to consider is that
>>>>>> Ceph has had a lot of work done on it to ensure data consistency; I
>>>>>> don't ever want to be in a position where I'm trying to diagnose
>>>>>> problems that might be caused by another layer sitting between Ceph
>>>>>> and the disk.
>>>>>>
>>>>>> However, I know several people on here are using bcache and
>>>>>> potentially getting better performance than with cache tiering, so
>>>>>> hopefully someone will give their views.
>>>>>
>>>>> I am using bcache on various systems and it performs really well. The
>>>>> caching layer in Ceph is slow. Promoting objects is slow and also
>>>>> involves additional RADOS lookups.
>>>>
>>>> Yeah. Cache tiers have gotten a lot more usable in Ceph, but the use
>>>> cases where they're effective are still pretty limited, and I think
>>>> in-node caching has a brighter future. We just don't like maintaining
>>>> the global state that makes separate caching locations viable, and
>>>> unless you're doing something analogous to the supercomputing "burst
>>>> buffers" (which some people are!), it's going to be hard to beat
>>>> something that doesn't have to pay the cost of extra network
>>>> hops/bandwidth.
>>>> Cache tiers are also not a feature that all the vendors support in
>>>> their downstream products, so they will probably see less ongoing
>>>> investment than you'd expect from such a system.
>>>
>>> Should that be taken as an unofficial sign that tiering support is
>>> likely to fade away?
>>>
>>> I think both approaches have different strengths, and the difference
>>> between a tiering system and a caching one is probably what causes
>>> some of the problems.
>>>
>>> If something like bcache is going to be the preferred approach, then I
>>> think more work needs to be done around certifying it for use with
>>> Ceph and allowing its behavior to be controlled more by Ceph as well.
>>> I assume there are issues around backfilling and scrubbing polluting
>>> the cache? Maybe you would want to be able to pass hints down from
>>> Ceph, which could also allow per-pool cache behavior?
>>>
>>>> -Greg
>>>
>>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


