Re: poor thin performance, relative to thick

* Zdenek Kabelac <zkabelac@xxxxxxxxxx> wrote:
> On 11.7.2016 at 22:44, Jon Bernard wrote:
> > Greetings,
> > 
> > I have recently noticed a large difference in performance between thick
> > and thin LVM volumes, and I'm trying to understand why that is the case.
> > 
> > In summary, for the same FIO test (attached), I'm seeing 560k iops on a
> > thick volume vs. 200k iops for a thin volume and these results are
> > pretty consistent across different runs.
> > 
> > I noticed that if I run two FIO tests simultaneously on 2 separate thin
> > pools, I net nearly double the performance of a single pool.  And two
> > tests on thin volumes within the same pool will split the maximum iops
> > of the single pool (essentially half).  And I see similar results from
> > linux 3.10 and 4.6.
> > 
> > I understand that thin must track metadata as part of its design and so
> > some additional overhead is to be expected, but I'm wondering if we can
> > narrow the gap a bit.
> > 
> > In case it helps, I also enabled LOCK_STAT and gathered locking
> > statistics for both thick and thin runs (attached).
> > 
> > I'm curious to know whether this is a known issue, and if I can do
> > anything to help improve the situation.  I wonder if the use of the
> > primary spinlock in the pool structure could be improved - the lock
> > statistics appear to indicate a significant amount of time contending
> > with that one.  Or maybe it's something else entirely, and in that case
> > please enlighten me.
> > 
> > If there are any specific questions or tests I can run, I'm happy to do
> > so.  Let me know how I can help.
> 
> 
> Have you tried different 'chunk-sizes'?
> 
> The smaller the chunk/block-size is, the better the snapshot utilization,
> but the more contention there is (e.g. try 512K).

That's a good thought; I'm re-running my tests now with some adjustments
(including writes instead of reads) and I will include varied chunk
sizes as well.  I did run a couple of random write tests with a 64k
chunk size and it does give slightly better performance, but the
discrepancy between thick and thin is still present.  I'll post my
numbers once I've got everything collected and prepared.
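
For reference, the kind of run I have in mind looks roughly like this
(the VG and LV names below are placeholders rather than my exact setup,
and the fio options are illustrative - the real runs use the attached
job file):

  # create a thin pool with an explicit chunk size, plus a thin volume in it
  lvcreate --type thin-pool -L 1T --chunksize 512K -n pool1 vg0
  lvcreate --thin -V 100G -n thindisk1 vg0/pool1

  # random-write fio run against the thin volume
  fio --name=randwrite --filename=/dev/vg0/thindisk1 \
      --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
      --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting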

> Also there is a big difference between performing the initial block
> provisioning and using an already provisioned block - so a more realistic
> measurement should be taken on an already provisioned thin device.

That's helpful to know.  You're suggesting that I first write to each of
the blocks to trigger provisioning, and then run the fio test?
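
If so, a full sequential write pass over the device beforehand should be
enough to provision every block, something along these lines (device
path taken from my setup below; fio options again just illustrative):

  # touch every block of the thin volume once to force allocation
  dd if=/dev/zero of=/dev/mapper/thin-thindisk1 bs=1M oflag=direct

  # then run the measured random-IO test on the now-provisioned device
  fio --name=randwrite-provisioned --filename=/dev/mapper/thin-thindisk1 \
      --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32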

> And finally - thin devices from a single thin-pool are not meant to be
> heavily used in parallel (I'd not recommend using more than 16 devs) -
> there is still a lot of room for improvement, but correctness has priority.

My current testing setup looks like:

sda                      8:0    0   477G  0 disk  
└─md0                    9:0    0   3.7T  0 raid0 
  ├─thin-pool1_tmeta   253:0    0  15.8G  0 lvm   
  │ └─thin-pool1-tpool 253:2    0     1T  0 lvm   
  │   ├─thin-pool1     253:3    0     1T  0 lvm   
  │   └─thin-thindisk1 253:9    0   100G  0 lvm   
  ├─thin-pool1_tdata   253:1    0     1T  0 lvm   
  │ └─thin-pool1-tpool 253:2    0     1T  0 lvm   
  │   ├─thin-pool1     253:3    0     1T  0 lvm   
  │   └─thin-thindisk1 253:9    0   100G  0 lvm   
  ├─thin-pool2_tmeta   253:4    0  15.8G  0 lvm   
  │ └─thin-pool2-tpool 253:6    0     1T  0 lvm   
  │   ├─thin-pool2     253:7    0     1T  0 lvm   
  │   └─thin-thindisk2 253:10   0   100G  0 lvm   
  ├─thin-pool2_tdata   253:5    0     1T  0 lvm   
  │ └─thin-pool2-tpool 253:6    0     1T  0 lvm   
  │   ├─thin-pool2     253:7    0     1T  0 lvm   
  │   └─thin-thindisk2 253:10   0   100G  0 lvm   
  └─thin-thick         253:8    0   100G  0 lvm   

I'm running fio on either the thin volume (253:9) or the thick volume
(253:8), but only one volume at a time, so I don't think pressure from
parallel use would be a factor for me.  It would be interesting to see
what kind of falloff occurs as the number of devices increases.
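
For completeness, before each run I also plan to sanity-check the pool
parameters with something like:

  # confirm chunk size and current allocation of both pools
  lvs -a -o lv_name,chunk_size,data_percent,metadata_percent thin

The data_percent column should also make it easy to confirm a volume is
fully provisioned before the "already provisioned" runs.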

-- 
Jon
