SSD journal deployment experiences

> On 05 Sep 2014, at 10:30, Nigel Williams <nigel.d.williams at gmail.com> wrote:
> 
> On Fri, Sep 5, 2014 at 5:46 PM, Dan Van Der Ster
> <daniel.vanderster at cern.ch> wrote:
>>> On 05 Sep 2014, at 03:09, Christian Balzer <chibi at gol.com> wrote:
>>> You might want to look into cache pools (and dedicated SSD servers with
>>> fast controllers and CPUs) in your test cluster and for the future.
>>> Right now my impression is that there is quite a bit more polishing to be
>>> done (retention of hot objects, etc) and there have been stability concerns
>>> raised here.
>> 
>> Right, Greg already said publicly not to use the cache tiers for RBD.
> 
> I lost the context for this statement you reference from Greg
> (presumably Greg Farnum?) - was it a reference to bcache or Ceph cache
> tiering? Could you point me to where it was stated please.

Cache tiering. I was referring to this thread back around when firefly was released.

   http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-May/039504.html

> At present, the cache pools are fairly limited in their real-world usefulness.
> ...
> 3) The cost of a cache miss is pretty high, so they should only be
> used when the active set fits within the cache and doesn't change too
> frequently.
> ... in general,
> I would only explore cache pools if you expect to periodically pull in
> working data sets out of much larger sets of cold data (e.g., jobs run
> against a particular bit of scientific data out of your entire
> archive).
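For context on what "cache tiering" means operationally here: a cache pool is attached in front of a backing pool and client I/O is routed through it. A rough sketch of the firefly-era setup follows; the pool names are placeholders, and the tuning values are illustrative assumptions, not recommendations from this thread:

```shell
# Hypothetical pool names; assumes an existing backing pool 'cold-storage'
# (e.g. holding RBD images) and an SSD-backed pool 'hot-cache'.
ceph osd tier add cold-storage hot-cache          # attach the cache pool to the backing pool
ceph osd tier cache-mode hot-cache writeback      # cache absorbs both reads and writes
ceph osd tier set-overlay cold-storage hot-cache  # route client I/O through the cache

# The tiering agent needs hit-set tracking and a size bound to decide
# what to flush/evict; values below are placeholders.
ceph osd pool set hot-cache hit_set_type bloom
ceph osd pool set hot-cache hit_set_count 1
ceph osd pool set hot-cache hit_set_period 3600
ceph osd pool set hot-cache target_max_bytes 100000000000
```

As Greg's quoted points suggest, a miss in this configuration costs a round trip plus a possible promotion into the cache pool, which is why it only pays off when the active set fits in the cache and stays fairly stable.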


There should be a wealth of real-world experience with RBD and cache tiering by now, and I admit that I haven't followed that line of development. Is anyone running RBD through a cache tier and getting good performance with it?

Cheers, Dan

