sequential_threshold=0 turns lvmcache into write-around cache

Hey folks,

We're currently looking at DM Cache (and LVM Cache over it) as a way of dealing with write latencies on AWS, basically by using the ephemeral storage on an AWS VM as the cache, and EBS as the origin / long-term device.

For this to work, we need to disable sequential I/O detection, since sequential writes will never be more efficient going straight to the slower EBS backing device.

However, in our testing, setting sequential_threshold to zero appears to turn the cache into a write-around cache (i.e. writes never touch the cache and go straight to the origin device). That suggests that either we've misunderstood the documentation, or there is a bug in how the code works.

The documentation states:

"If sequential threshold is set to 0 the sequential I/O detection is disabled and sequential I/O will no longer implicitly bypass the cache."

I took that to mean it will always write to the cache and will never bypass it, but perhaps it was intended to mean that it now explicitly bypasses the cache all the time?

A quick sample of what I mean:

root@ip-0.0.0.0:~# lvchange --cachesettings 'sequential_threshold=512' vg/OriginLV
  Logical volume "OriginLV" changed.
root@ip-0.0.0.0:~# dd if=/dev/zero of=/mnt/cache/out29 oflag=direct bs=8M count=200
200+0 records in
200+0 records out
1677721600 bytes (1.7 GB) copied, 5.88239 s, 285 MB/s
root@ip-0.0.0.0:~# lvchange --cachesettings 'sequential_threshold=0' vg/OriginLV
  Logical volume "OriginLV" changed.
root@ip-0.0.0.0:~# dd if=/dev/zero of=/mnt/cache/out30 oflag=direct bs=8M count=200
200+0 records in
200+0 records out
1677721600 bytes (1.7 GB) copied, 59.3204 s, 28.3 MB/s

While this was running, I kept an eye on the "Cpy%Sync" field from the lvs command. With sequential_threshold=0, it *never* moved - it stayed at 0.00 the entire time, suggesting the writes never touched the cache.

In addition, the speed of the first write is consistent with the speed we saw when writing directly to the cache device (formatted and mounted without LVM), while the speed of the second write is consistent with what we saw when writing directly to the backing device (formatted and mounted without LVM).
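For anyone trying to reproduce this, a more direct check than Cpy%Sync might be the per-target hit/miss counters from `dmsetup status`. Per the kernel's Documentation/device-mapper/cache.txt, the status line for a cache target includes read hits/misses, write hits/misses, demotions and promotions. A rough sketch to pull out the write counters - the device-mapper name and the field offsets are my reading of that doc, and any sample line is made up:

```python
import subprocess

def parse_cache_status(line):
    """Extract (write_hits, write_misses) from one `dmsetup status` line
    for a dm-cache target. Field order, per
    Documentation/device-mapper/cache.txt (with dmsetup's own
    <start> <len> <target> prefix):
      0:start 1:len 2:"cache" 3:metadata blk size
      4:used/total metadata blks 5:cache blk size 6:used/total cache blks
      7:read hits 8:read misses 9:write hits 10:write misses
      11:demotions 12:promotions 13:dirty ...
    """
    fields = line.split()
    assert fields[2] == "cache", "not a dm-cache target"
    return int(fields[9]), int(fields[10])

def cache_write_stats(dm_name):
    # dm_name would be the mapped device, e.g. "vg-OriginLV" (assumed name)
    out = subprocess.check_output(["dmsetup", "status", dm_name], text=True)
    return parse_cache_status(out)
```

If sequential_threshold=0 really does force everything around the cache, I'd expect write misses to climb during the dd run while write hits stay flat.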

Thanks,

- Andrew


-- 
Andrew Thorburn
Senior Software Engineer
Pivotal Labs
London
United Kingdom
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
