What about the following use case (please forgive some of my Ceph
architecture ignorance):
If it were possible to set up an OSD caching tier at the host (assuming
the host had a dedicated SSD for accelerating I/O), then caching pools
could be created to cache VM RBDs, since those are inherently exclusive
to a single host. Using a writethrough (or readonly, depending on the
workload) policy would yield a major increase in VM IOPS, and would also
ensure that any writes are first committed to the back-end storage tier.
Letting hosts service most of their VM read I/O locally would also
increase the overall IOPS available from the back-end storage tier.
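Just to make the idea concrete, here is a rough sketch of how a cache
tier can be attached to an RBD pool today via librados mon commands.
The pool names, the exact JSON keys, and the choice of "readonly" mode
(standing in for the write-through behaviour I described) are my
assumptions, and today's tiering is per-pool rather than per-host, so
treat this as illustrative only:

// Illustrative only: attach an SSD-backed pool as a cache tier in front
// of an RBD base pool and pick a read-biased cache mode. Pool names and
// mon command JSON keys are assumptions; the authoritative schemas are
// in src/mon/MonCommands.h.
#include <rados/librados.hpp>
#include <iostream>
#include <string>

// Helper: send one JSON-formatted command to the monitors.
static int mon_cmd(librados::Rados &cluster, const std::string &cmd) {
  librados::bufferlist inbl, outbl;
  std::string outs;
  int r = cluster.mon_command(cmd, inbl, &outbl, &outs);
  std::cout << cmd << " -> " << r << " " << outs << std::endl;
  return r;
}

int main() {
  librados::Rados cluster;
  if (cluster.init("admin") < 0 ||
      cluster.conf_read_file(NULL) < 0 ||
      cluster.connect() < 0) {
    std::cerr << "could not connect to cluster" << std::endl;
    return 1;
  }

  // Assumed sequence: add the tier, set its cache mode, then route
  // client I/O for the base pool through the cache pool.
  mon_cmd(cluster, "{\"prefix\": \"osd tier add\", \"pool\": \"rbd\", \"tierpool\": \"rbd-cache\"}");
  mon_cmd(cluster, "{\"prefix\": \"osd tier cache-mode\", \"pool\": \"rbd-cache\", \"mode\": \"readonly\"}");
  mon_cmd(cluster, "{\"prefix\": \"osd tier set-overlay\", \"pool\": \"rbd\", \"overlaypool\": \"rbd-cache\"}");

  cluster.shutdown();
  return 0;
}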
Does this make sense?
- Luis
On 07/07/2014 03:29 PM, Sage Weil wrote:
On Mon, 7 Jul 2014, Luis Pabon wrote:
Hi all,
I am working on OSDMonitor.cc:5325 and wanted to confirm the following
read_forward cache tier transition:
readforward -> forward || writeback || (any && num_objects_dirty == 0)
forward -> writeback || readforward || (any && num_objects_dirty == 0)
writeback -> readforward || forward
Is this the correct cache tier state transition?
That looks right to me.
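For reference, a minimal sketch of how that table could be expressed as
a validity check (not the actual OSDMonitor.cc code; the enum and helper
names below are placeholders for whatever the monitor really uses):

// Illustrative transition check only; the mode names mirror the list
// above, but the types and helper names are placeholders.
#include <cstdint>

enum class cache_mode_t { NONE, WRITEBACK, FORWARD, READFORWARD };

// Returns true if switching the cache pool from 'from' to 'to' is
// allowed. 'dirty' is the pool's current num_objects_dirty.
bool cache_mode_transition_ok(cache_mode_t from, cache_mode_t to,
                              uint64_t dirty) {
  if (from == to)
    return true;
  switch (from) {
  case cache_mode_t::READFORWARD:
    // readforward -> forward || writeback || (any && num_objects_dirty == 0)
    return to == cache_mode_t::FORWARD || to == cache_mode_t::WRITEBACK ||
           dirty == 0;
  case cache_mode_t::FORWARD:
    // forward -> writeback || readforward || (any && num_objects_dirty == 0)
    return to == cache_mode_t::WRITEBACK || to == cache_mode_t::READFORWARD ||
           dirty == 0;
  case cache_mode_t::WRITEBACK:
    // writeback -> readforward || forward
    return to == cache_mode_t::READFORWARD || to == cache_mode_t::FORWARD;
  default:
    // Other/unset modes: no extra restriction in this sketch.
    return true;
  }
}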
By the way, I had a thought after we spoke that we probably want something
that is somewhere in between the current writeback behavior (promote on
first read) and the read_forward behavior (never promote on read). I
suspect a good all-around policy is something like promote on second read?
This should probably be rolled into the writeback mode as a tunable...
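To make that concrete, here is a toy sketch of a promote-on-second-read
check; the per-object counter below is just an in-memory map, not the
OSD's HitSet machinery, and the tunable name is made up for
illustration:

// Toy sketch of a "promote on second read" policy: only promote an
// object into the cache tier after it has been read more than once
// recently.
#include <map>
#include <string>

struct PromotePolicy {
  bool promote_on_second_read;                   // hypothetical tunable
  std::map<std::string, unsigned> recent_reads;  // object -> recent read count

  PromotePolicy() : promote_on_second_read(true) {}

  // Called on a read that misses the cache tier; returns true if the
  // object should be promoted rather than proxied/forwarded to the
  // base tier.
  bool should_promote_on_read(const std::string &oid) {
    unsigned &count = recent_reads[oid];
    ++count;
    if (!promote_on_second_read)
      return true;     // current writeback behavior: promote on first read
    return count >= 2; // only promote once the object has been read again
  }

  // A real implementation would decay or age out these counts (e.g. via
  // HitSets) so a single read long ago does not trigger promotion later.
  void clear() { recent_reads.clear(); }
};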
sage
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html