I found the stochastic multi-queue (smq) policy and writeboost to be
insufficient for this purpose. I'm wondering if anything exists
that fits this description:
Device mapper creates a "cache" device on a fast device
(SSD/NVMe, etc.), and writes to that device *always* hit the
fast device first. The writes are later flushed to the slower
device as fast as the slow device can handle them.
The problem with the device-mapper caching solutions I have
found is that they require a block to be identified as "hot"
before it is promoted, even when the cache itself is not 100%
utilized. Read performance is largely irrelevant.
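
For example, even a dm-cache table with the writeback feature,
roughly like the sketch below (the device names and the 512-sector
block size are just placeholders), only absorbs writes to blocks
the policy has already decided to promote:

  # /dev/slow = large, slow origin device; /dev/fast-meta and
  # /dev/fast-cache are two volumes carved out of the SSD
  # (cache metadata + cache data).
  SECTORS=$(blockdev --getsz /dev/slow)
  dmsetup create cached --table \
      "0 $SECTORS cache /dev/fast-meta /dev/fast-cache /dev/slow 512 1 writeback smq 0"

Writes to blocks that have not been promoted still go straight to
/dev/slow.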
Is there a device mapper implementation that works in this
fashion, where 100% of writes happen to the fast device? I was
thinking a COW snapshot effectively does this, but the problem is
that it does not automatically/periodically flush the writes back
to the origin; you have to do them all at once.
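
To illustrate what I mean, something along these lines (device
names are placeholders) does send every write to the fast device,
but the only way I see to get the data back onto the slow device
is to stop and replay the whole COW store in one shot:

  # All writes are redirected into the persistent COW store on the SSD.
  SECTORS=$(blockdev --getsz /dev/slow)
  dmsetup create writes --table \
      "0 $SECTORS snapshot /dev/slow /dev/fast-cow P 8"

  # Later the snapshot has to be torn down and the entire COW store
  # merged back into /dev/slow at once:
  dmsetup remove writes
  dmsetup create merged --table \
      "0 $SECTORS snapshot-merge /dev/slow /dev/fast-cow P 8"

There is no knob to trickle the COW store back to the origin in
the background while the snapshot stays in use.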
I have seen some conversations about trying to get the
multiqueue (mq, not smq) policy to do this with zero values for
its thresholds/promote adjustments, but none of them seemed to
succeed in ensuring that the first write would hit the fast
device.
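
As far as I can tell, those threads were suggesting tables roughly
like this one, with the mq promote adjustments zeroed (device
names are again placeholders):

  # mq policy with its write/discard promote adjustments set to zero,
  # which was supposed to make blocks promote on the first miss.
  SECTORS=$(blockdev --getsz /dev/slow)
  TABLE="0 $SECTORS cache /dev/fast-meta /dev/fast-cache /dev/slow 512 1 writeback"
  TABLE="$TABLE mq 4 write_promote_adjustment 0 discard_promote_adjustment 0"
  dmsetup create cached --table "$TABLE"

If anyone has actually gotten that to work, I would be glad to be
proven wrong.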
Thanks!