Cache tiering

On Wed, May 7, 2014 at 5:05 AM, Gandalf Corvotempesta
<gandalf.corvotempesta at gmail.com> wrote:
> Very simple question: what happens if the server backing the cache pool goes down?
> For example, a read-only cache could be achieved by using a single
> server with no redundancy.
> Is Ceph smart enough to detect that the cache is unavailable and
> transparently redirect all requests to the main pool as usual?
>
> This would allow the use of one very big server as cache-only, with no
> need for redundancy.
>
> Second question: how can I set the cache pool to be on a defined list
> of OSDs rather than distributed across all OSDs? Should I change the
> CRUSH map? I would like to dedicate a single server (and all of its
> OSDs) to the cache pool.

At present, the cache pools are fairly limited in their real-world usefulness.
1) When used in writeback mode, they are the authoritative source for
data, so they must be redundant.
2) When used in readonly mode, they aren't consistent if the
underlying data gets modified.
3) The cost of a cache miss is pretty high, so they should only be
used when the active set fits within the cache and doesn't change too
frequently.
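For reference, the two modes above are chosen when the cache tier is attached to the base pool. A minimal sketch with the standard tiering commands (the pool names "base-pool" and "cache-pool" are hypothetical):

```shell
# Attach an existing pool as a cache tier for the base pool
ceph osd tier add base-pool cache-pool

# Writeback mode: the cache is the authoritative copy of the data,
# so the cache pool itself must be redundant
ceph osd tier cache-mode cache-pool writeback
# Route client I/O for the base pool through the cache tier
ceph osd tier set-overlay base-pool cache-pool

# Alternatively, read-only mode: reads are served from the cache, but
# may return stale data if the base pool is modified underneath it
# ceph osd tier cache-mode cache-pool readonly
```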

So, Ceph will not automatically redirect to the base pool in case of
failures; in the general case it *can't*, but you could set up
monitoring to remove a read-only pool if that happens. But in general,
I would only explore cache pools if you expect to periodically pull in
working data sets out of much larger sets of cold data (e.g., jobs run
against a particular bit of scientific data out of your entire
archive).
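On the second question: confining a pool to one host's OSDs is done in the CRUSH map, by giving the cache host its own root and pointing the pool at a rule that only chooses from it. A rough sketch (the bucket, host, and pool names here are hypothetical, and the rule id must be taken from your own map):

```shell
# Create a separate CRUSH root and move the cache host under it
ceph osd crush add-bucket cache root
ceph osd crush move cachehost1 root=cache

# Create a rule that selects OSDs only from that root
ceph osd crush rule create-simple cache-rule cache osd

# Point the cache pool at the new rule
# (look up the rule's id with `ceph osd crush rule dump`)
ceph osd pool set cachepool crush_ruleset 1
```

Note that with a single host and no replication across hosts, losing that host means losing the pool, which is why this only makes sense for a read-only cache as discussed above.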
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
