On 13. 9. 2017 at 17:33, Dale Stephenson wrote:
Distribution: centos-release-7-3.1611.el7.centos.x86_64
Kernel: Linux 3.10.0-514.26.2.el7.x86_64
LVM: 2.02.166(2)-RHEL7 (2016-11-16)
The volume group consisted of an 8-drive SSD array (500G drives), plus an additional SSD of the same size. The array used a 64k stripe size.
The thin pool had the -Zn option and a 512k chunk size (one full stripe); it was 3T in size with a 16G metadata volume. Data was entirely on the 8-drive RAID; metadata was entirely on the 9th drive.
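For reference, a pool like that could be built along these lines, assuming the striping is done in LVM itself; the VG name, LV names and device paths here are made up for illustration:

  # striped data LV across the 8 SSDs, 64k stripe size
  lvcreate -L 3T -i8 -I64k -n pooldata vg /dev/sd[a-h]
  # metadata LV on the 9th SSD
  lvcreate -L 16G -n poolmeta vg /dev/sdi
  # combine them into a thin pool: zeroing off, 512k chunks
  lvconvert --type thin-pool -Zn --chunksize 512k --poolmetadata vg/poolmeta vg/pooldata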
Virtual volume “thin” was 300 GB. I also filled it with dd so that it would be fully provisioned before the test.
Volume “thick” was also 300 GB, an ordinary (non-thin) volume entirely on the 8-drive array.
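The two test volumes and the pre-provisioning pass could then look roughly like this (same made-up names as above):

  # 300G virtual volume in the thin pool
  lvcreate -V 300G -T vg/pooldata -n thin
  # 300G ordinary striped volume on the same 8 drives
  lvcreate -L 300G -i8 -I64k -n thick vg /dev/sd[a-h]
  # fill the thin volume once so every chunk is provisioned
  dd if=/dev/zero of=/dev/vg/thin bs=1M oflag=direct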
Four tests were run directly against each volume using fio-2.2.8: random read, random write, sequential read, and sequential write. Single thread, 4k block size, 90 s run time.
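For concreteness, one of the four runs might have been invoked roughly like this; the blocksize, runtime and thread count come from the description above, while the remaining parameters are guesses:

  fio --name=randwrite --filename=/dev/vg/thin --rw=randwrite \
      --bs=4k --direct=1 --ioengine=libaio --iodepth=1 \
      --numjobs=1 --runtime=90 --time_based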
Hi
Can you please provide output of:
lvs -a -o+stripes,stripesize,seg_pe_ranges
so we can see how your stripes are placed on the devices?
SSDs typically prefer writes in 512K chunks.
(something like 'lvcreate -LXXX -i8 -I512k vgname')
Wouldn't it be 'faster' to just concatenate the 8 disks together instead of striping
- or stripe across only 2 disks, and then concatenate 4 such striped areas?
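Purely as an illustration of those two layouts (names and devices made up again):

  # plain concatenation of 8 disks (linear is the default)
  lvcreate -L 3T -n data vg /dev/sd[a-h]

  # or: 2-disk stripes, concatenated in 4 steps
  lvcreate -L 750G -i2 -I512k -n data vg /dev/sda /dev/sdb
  lvextend -L +750G -i2 -I512k vg/data /dev/sdc /dev/sdd
  lvextend -L +750G -i2 -I512k vg/data /dev/sde /dev/sdf
  lvextend -L +750G -i2 -I512k vg/data /dev/sdg /dev/sdh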
64k stripes do not seem like an ideal match in this case of 8 disks with 512K chunks - each 512K chunk gets split into eight 64k writes, one per SSD.
Regards
Zdenek