On 03/29/2016 04:53 PM, Nick Fisk wrote:
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Ric Wheeler
> Sent: 29 March 2016 14:40
> To: Nick Fisk <nick@xxxxxxxxxx>; 'Sage Weil' <sage@xxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx; device-mapper development <dm-devel@xxxxxxxxxx>
> Subject: Re: Local SSD cache for ceph on each compute node.
>> On 03/29/2016 04:35 PM, Nick Fisk wrote:
>>> One thing I picked up on when looking at dm-cache for caching RBDs is
>>> that it wasn't really designed to act as a writeback cache for new
>>> writes, in the way you would expect a traditional writeback cache to
>>> work. All the policies seem designed around the idea that writes go to
>>> cache only if the block is already in the cache (through reads) or is
>>> hot enough to promote. Although there did seem to be some tunables to
>>> alter this behaviour, posts on the mailing list suggested this wasn't
>>> how it was designed to be used. I'm not sure whether this has been
>>> addressed since I last looked, though. Depending on whether you are
>>> trying to accelerate all writes or just your "hot" blocks, this may or
>>> may not matter. Even <1GB local caches can make a huge difference to
>>> sync writes.
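[For context, a writeback dm-cache device layered over an RBD is assembled with a dmsetup table along these lines. This is a sketch only: the device paths and names are hypothetical, and the SSD metadata/cache volumes must be sized and prepared beforehand.]

```shell
# Hypothetical layout: /dev/rbd0 is the origin (the Ceph RBD image),
# /dev/ssd/meta and /dev/ssd/cache are volumes on the local SSD.
ORIGIN=/dev/rbd0
SIZE=$(blockdev --getsz "$ORIGIN")   # origin length in 512-byte sectors

# Table format: start len cache <metadata dev> <cache dev> <origin dev>
#               <block size> <#feature args> <feature args>
#               <policy> <#policy args>
# "writeback" lets writes complete on the SSD before reaching the RBD;
# "default" selects the kernel's default cache policy.
dmsetup create rbd-cached --table \
  "0 $SIZE cache /dev/ssd/meta /dev/ssd/cache $ORIGIN 512 1 writeback default 0"
```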
>> Hi Nick,
>>
>> Some of the caching policies have changed recently as the team has
>> looked at different workloads.
>>
>> Happy to introduce you to them if you want to discuss offline or post
>> comments over on their list: device-mapper development
>> <dm-devel@xxxxxxxxxx>
>>
>> thanks!
>>
>> Ric
> Hi Ric,
>
> Thanks for the heads-up. Just from a quick flick through I can see there
> are now separate read and write promotion thresholds, so it looks a lot
> more suitable for what I intended. I might try to find some time to give
> it another test.
>
> Nick
Let us know how it works out for you; I know they are very interested in
making sure things are useful :)

Ric
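[For reference, the separate read and write promotion thresholds discussed above are the mq policy's promote-adjustment tunables, which can be changed at runtime with `dmsetup message`. The device name is hypothetical and the values are illustrative, not recommendations.]

```shell
# Assuming a dm-cache device named "rbd-cached" using the mq policy.
# Lower adjustment values promote blocks sooner; 0 promotes on the
# first hit. The read and write thresholds are tuned independently.
dmsetup message rbd-cached 0 read_promote_adjustment 1
dmsetup message rbd-cached 0 write_promote_adjustment 0
```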
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com