Cache tiering

On 05/07/2014 10:38 AM, Gregory Farnum wrote:
> On Wed, May 7, 2014 at 8:13 AM, Dan van der Ster
> <daniel.vanderster at cern.ch> wrote:
>> Hi,
>>
>>
>> Gregory Farnum wrote:
>>
>> 3) The cost of a cache miss is pretty high, so they should only be
>> used when the active set fits within the cache and doesn't change too
>> frequently.
>>
>>
>> Can you roughly quantify how long a cache miss would take? Naively I'd
>> assume it would turn one read into a read from the backing pool, a write
>> into the cache pool, then the read from the cache. Is that right?
>
> Yes, that's roughly it. The part you're leaving out is that a write
> may also require promotion, and if it does and the cache is full then
> it requires an eviction, and that requires writes to the backing
> pool...
> Also, doubling the latency on a read can cross a lot of "I don't
> notice it" boundaries.
>
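To put rough numbers on the miss path described above, here is a small
Python sketch; every latency in it is an illustrative placeholder, not a
measurement of any real cluster.

    # Back-of-envelope model of the read paths discussed above.
    # All latencies are illustrative placeholders, not measurements.
    def effective_read_ms(hit_ratio, cache_read_ms=0.5,
                          backing_read_ms=10.0, promote_write_ms=10.0):
        # Hit: one read served from the cache tier.
        # Miss: read from the backing pool, write the promoted copy
        # into the cache pool, then read it back from the cache.
        miss_ms = backing_read_ms + promote_write_ms + cache_read_ms
        return hit_ratio * cache_read_ms + (1.0 - hit_ratio) * miss_ms

    for h in (0.99, 0.90, 0.50):
        print("hit ratio %.2f -> avg read %.2f ms" % (h, effective_read_ms(h)))

With these placeholder numbers, even a 90% hit ratio makes the average
read roughly five times slower than a pure cache hit, which is the
"I don't notice it" boundary problem in miniature.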
>> So, Ceph will not automatically redirect to the base pool in case of
>> failures; in the general case it *can't*, but you could set up
>> monitoring to remove a read-only pool if that happens. But in general,
>> I would only explore cache pools if you expect to periodically pull in
>> working data sets out of much larger sets of cold data (e.g., jobs run
>> against a particular bit of scientific data out of your entire
>> archive).
>>
>>
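A minimal sketch of the monitoring idea above, assuming a read-only
cache pool (so there is no dirty data to flush); pool_is_healthy() is a
hypothetical placeholder, while the tier subcommands are the standard
ceph CLI ones.

    # Sketch: detach a misbehaving read-only cache tier from its base pool.
    import subprocess

    def pool_is_healthy(pool):
        # Hypothetical placeholder; in practice you might parse the
        # output of `ceph health detail` or `ceph osd pool stats <pool>`.
        return True

    def drop_cache_tier(base_pool, cache_pool, overlay_set=False):
        run = lambda *args: subprocess.check_call(
            ["ceph", "osd", "tier"] + list(args))
        run("cache-mode", cache_pool, "none")   # stop caching reads
        if overlay_set:
            run("remove-overlay", base_pool)    # only if an overlay was set
        run("remove", base_pool, cache_pool)    # detach tier from base pool

    if not pool_is_healthy("hot-cache"):        # pool names are examples
        drop_cache_tier("cold-data", "hot-cache")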
>> That's a pity. What would be your hesitation about using WB caching with RBD
>> images, assuming the cache pool is sized large enough to match the working
>> set?
>
> Just a general lack of data indicating it performs well. It will
> certainly function, and if you have, e.g., 1/4 of your RBD volumes in
> use at any given time (varying with time of day), I would expect it to
> do just fine.
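Rough arithmetic for that "sized large enough" condition, with every
number invented for illustration:

    # How much raw cache capacity does a working set like that need?
    total_rbd_tb = 100.0   # total provisioned RBD data
    active_frac  = 0.25    # ~1/4 of volumes hot at any given time
    replicas     = 3       # cache pool replication factor
    full_ratio   = 0.8     # headroom so promotions don't force evictions

    raw_cache_tb = total_rbd_tb * active_frac * replicas / full_ratio
    print("raw cache capacity needed: ~%.0f TB" % raw_cache_tb)  # ~94 TB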

From what we've seen so far, there are definitely some tradeoffs with
tiering.  Using it in the wrong way (i.e. for pools that have little hot
data) can actually decrease overall performance.  We're working on some
RBD tests with different skewed distributions now.
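One way to see the "little hot data" effect before running real RBD
tests is a toy simulation: draw object reads from distributions of
varying skew and measure the hit ratio an LRU-style cache would get.
The parameters below are made up.

    # Toy model: LRU cache over reads drawn from Zipf-like distributions.
    # Flat access (low skew) means little hot data and a poor hit ratio.
    import random
    from collections import OrderedDict

    def hit_ratio(skew, objects=100000, cache_size=10000, reads=200000):
        weights = [1.0 / (i + 1) ** skew for i in range(objects)]
        cache, hits = OrderedDict(), 0
        for obj in random.choices(range(objects), weights=weights, k=reads):
            if obj in cache:
                hits += 1
                cache.move_to_end(obj)         # mark as most recently used
            else:
                cache[obj] = True
                if len(cache) > cache_size:
                    cache.popitem(last=False)  # evict least recently used
        return hits / reads

    for s in (0.1, 0.8, 1.2):
        print("skew %.1f -> hit ratio %.2f" % (s, hit_ratio(s)))

With a nearly flat distribution the hit ratio collapses toward
cache_size / objects, and almost every read pays the full miss penalty
modeled earlier in the thread.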

> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com


