Fast thin volume preallocation?

Hi all,
while doing some tests on a 4-bay, entry-level NAS/SAN system, I discovered that it is entirely based on LVM thin volumes.

When configuring what it calls "thick volumes", it creates a new thin logical volume and pre-allocates all space inside it.

What surprised me is the speed at which this allocation happens: a 2 TB volume was allocated (i.e., all data chunks were touched) in about 2 minutes. That works out to roughly 17 GB/s (2 TB / 120 s), which immediately excludes any simple zeroing of the volume: actually writing that much data to four disks would require much more time.

I tried the same on a regular CentOS box with lvmthin and zeroing disabled and, indeed, allocating all blocks inside a thin volume takes much more time. I tried both a very simple "dd if=/dev/zero of=/dev/test/thinvol bs=1M oflag=direct" and "blkdiscard -z /dev/test/thinvol".
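
For reference, this is more or less how I created the test volume (a rough sketch; the VG name "test" and the sizes are just what I used, adapt as needed):

  # thin pool with zeroing of newly provisioned chunks disabled (-Zn)
  lvcreate --type thin-pool -L 2.5T -Zn -n pool test
  # 2 TB thin volume inside that pool
  lvcreate --type thin -V 2T -n thinvol --thinpool pool test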

Being curious, I found that the NAS uses a binary [1] that seems to issue a stream of null writes at extremely high queue depth [2].

So, my questions:
- As far as you know, do commercial NAS units use a patched/custom lvmthin version that enables fast volume preallocation, or early zero rejection?
- Does standard lvmthin support something similar? If not, how do you see a zero coalesce/compression/trim/whatever feature?
- Can I obtain something similar by simply touching (maybe with a single 512 B write) each thin chunk once? (see the sketch below)
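
To illustrate the last question, here is a rough sketch of the kind of loop I have in mind (the 64 KiB chunk size is only an assumption; the real one can be read with "lvs -o +chunk_size test/pool"):

  CHUNK=65536                                     # thin chunk size in bytes (assumed)
  SIZE=$(blockdev --getsize64 /dev/test/thinvol)  # volume size in bytes
  # write 512 B at the start of each chunk, forcing its allocation
  for ((off = 0; off < SIZE; off += CHUNK)); do
      dd if=/dev/zero of=/dev/test/thinvol bs=512 count=1 \
         seek=$((off / 512)) conv=notrunc oflag=direct status=none
  done

Clearly this would still be slow if each 512 B write is issued synchronously, which is why I am asking whether there is a smarter way.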

Thanks.

[1] I am not naming it because I don't know whether identifying the binary, and hence the NAS vendor, would be contrary to this mailing list's policy.

[2] "iostat -x 1" produces the following example output. Note how *no* writes are passed to the backing devices:

   extended device statistics
device     mgr/s mgw/s    r/s   w/s    kr/s  kw/s size     queue  wait svc_t  %b
sda            0     0  105.2   0.0   841.3   0.0  8.0       0.0   0.2   0.2   2
sdb            0     0   62.9   0.0   503.2   0.0  8.0       0.0   0.2   0.2   1
sdd            0     0  124.8   0.0   998.5   0.0  8.0       0.0   0.0   0.0   0
sdc            0     0  119.9   0.0   959.2   0.0  8.0       0.0   0.2   0.2   2
mtdblock0      0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
mtdblock1      0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
mtdblock2      0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
mtdblock3      0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
mtdblock4      0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
mtdblock5      0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
mtdblock6      0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
md9            0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
md13           0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
md256          0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
md322          0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
md1            0     0  412.8   0.0  3302.2   0.0  8.0       0.0   0.0   0.0   0
dm-1           0     0  412.8   0.0  3302.2   0.0  8.0       0.0   0.1   0.1   5
dm-2           0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
dm-3           0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
dm-4           0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
dm-5           0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
dm-6           0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
dm-7           0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0
dm-8           0     0    0.0   0.0     0.0   0.0  0.0       1.0   0.0   0.0  99
dm-0           0     0    0.0   0.0     0.0   0.0  0.0       1.0   0.0   0.0  99
dm-9           0     0    0.0   0.0     0.0   0.0  0.0  345966.5   0.0   0.0  99
dm-10          0     0    0.0   0.0     0.0   0.0  0.0       0.0   0.0   0.0   0

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


