I've tried just about everything over the last few weeks, and my findings are:
- The problem with LVM cache does NOT seem to be caused by KVM/qemu, but it is noticeably worse inside a KVM guest. The cache slowdown also happens on the hardware node, but you have to stress it seriously before you notice.
- Not once did I succeed in creating a well-working cache
using LVM2 cache (dm-cache). In artificial / KVM setups with small
devices it works (dm-testsuite etc.), but in a real-life scenario,
with the PVs fully populated on 2 TB HDDs and 250 GB
SSDs (both RAID 1), the cache stopped working after 20-50 GB of
writes, even though the cache is over 150 GB large. Please use the fio examples
below, and always use new filenames so the same blocks are not hit again.
The poor performance persisted most of the time, even after all blocks
were flushed. Very unpredictable cache performance / behavior.
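Since the fio examples referred to above did not survive in this copy, here is a minimal sketch of the kind of job meant; the filename, size, and iodepth are placeholder assumptions, so adjust them to your hardware. The key point is generating a fresh filename on every run, so a repeat run cannot be served from blocks the cache has already promoted.

```shell
# Hypothetical fio run (adjust --size/--iodepth to your setup).
# A timestamped filename ensures every run writes to new blocks,
# so the result reflects how the cache absorbs *new* writes.
TESTFILE="fio-cachetest-$(date +%s).bin"
fio --name=cachetest \
    --filename="$TESTFILE" \
    --rw=randwrite --bs=4k --size=4G \
    --direct=1 --ioengine=libaio --iodepth=32 \
    --group_reporting
```

With --direct=1 the page cache is bypassed, so throughput numbers show the block-layer cache behavior rather than RAM caching.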
I finally decided to go with dm-writeboost instead of LVM2 cache
(dm-cache). This was the only way to create a well-working cache,
one that keeps working until it is about 95% filled. But of course it
would be nicer to have something more generally stable, like LVM2.
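For reference, a dm-writeboost device can be assembled with a plain dmsetup table. The device paths and target name below are placeholders, and the two-argument table (backing device, then cache device) is the basic form as I understand it from the dm-writeboost documentation; check your version's docs before relying on it.

```shell
# Placeholder devices -- replace with your actual HDD (backing) and SSD (cache).
BACKING=/dev/mapper/backing_hdd
CACHE=/dev/mapper/cache_ssd

# Assumed table format: <start> <length> writeboost <backing_dev> <cache_dev>
# blockdev --getsz reports the size in 512-byte sectors, which is what
# dmsetup expects for the length field.
SZ=$(blockdev --getsz "$BACKING")
echo "0 $SZ writeboost $BACKING $CACHE" | dmsetup create wbdev
```

After this, /dev/mapper/wbdev is the cached device you put the filesystem on; the SSD absorbs the writes and flushes to the HDD in the background.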
I guess that, as far as this mailing list is concerned, this issue is resolved,
because it does not seem to belong here.
--
Kind regards,

Richard Landsman
http://rimote.nl
T: +31 (0)50 - 763 04 07 (Mon-Fri 9:00 to 18:00)
24/7 for outages: +31 (0)6 - 4388 7949
@RimoteSaS (Twitter service notices/security updates)
On 04/20/2017 04:23 PM, Sandro Bonazzola wrote:
_______________________________________________
CentOS-virt mailing list
CentOS-virt@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos-virt