Re: higher agcount on LVM2 thinp volumes

On Thu, Aug 29, 2013 at 09:21:15PM -0600, Chris Murphy wrote:
> 
> On Aug 29, 2013, at 8:58 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > 
> > Check the contents of
> > /sys/block/<dev>/queue/{minimum,optimal}_io_size for the single
> > device, the standard LV and the thinp device.
> 
> physical device:
> 
> [root@f19s ~]# cat /sys/block/sda/queue/minimum_io_size 
> 512
> [root@f19s ~]# cat /sys/block/sda/queue/optimal_io_size 
> 0
> 
> conventional LV on that physical device:
>      
> [root@f19s ~]# cat /sys/block/dm-0/queue/minimum_io_size 
> 512
> [root@f19s ~]# cat /sys/block/dm-0/queue/optimal_io_size 
> 0
> 
> 
> thinp pool and LV:
> 
> lrwxrwxrwx. 1 root root       7 Aug 29 20:46 vg1-thinp -> ../dm-3
> 
> [root@f19s ~]# cat /sys/block/dm-3/queue/minimum_io_size 
> 512
> [root@f19s ~]# cat /sys/block/dm-3/queue/optimal_io_size 
> 262144
> [root@f19s ~]# 
> 
> lrwxrwxrwx. 1 root root       7 Aug 29 20:47 vg1-data -> ../dm-4
> 
> [root@f19s ~]# cat /sys/block/dm-4/queue/minimum_io_size 
> 512
> [root@f19s ~]# cat /sys/block/dm-4/queue/optimal_io_size 
> 262144

Yup, there's the problem - minimum_io_size is 512 bytes, which is
too small to be used as a stripe unit. Hence sunit/swidth get set
to zero.
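
You can double-check what geometry mkfs.xfs is deciding on without
writing anything, via a dry run (using your vg1-data LV as the
example path):

$ mkfs.xfs -N /dev/vg1/data

The data section of the output should show sunit=0 blks, swidth=0
blks, matching the zeroed-out geometry above.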

The problem here is that minimum_io_size is not the minimum IO size
that can be done, but the minimum IO size that is *efficient*. For
example, my workstation has an MD RAID0 device with a 512k chunk size
and two drives:

$ cat /sys/block/md0/queue/minimum_io_size 
524288
$ cat /sys/block/md0/queue/optimal_io_size 
1048576

Here we see the minimum *efficient* IO size is the stripe chunk size
(i.e. what gets written to a single disk) and the optimal is an IO
that hits all disks at once.
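
To put numbers on that, mkfs.xfs translates those values directly
into stripe geometry. With the default 4k filesystem block size,
the md0 device above works out to (rough arithmetic, not verbatim
mkfs output):

sunit  = minimum_io_size / blocksize = 524288 / 4096  = 128 blocks
swidth = optimal_io_size / blocksize = 1048576 / 4096 = 256 blocks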

So, what dm-thinp is trying to tell us is that the minimum
*physical* IO size is 512 bytes (i.e. /sys/.../physical_block_size)
but the efficient IO size is 256k. Hence dm-thinp is exposing the
information incorrectly. What it should be doing is setting both
the minimum_io_size and the optimal_io_size to the same value of
256k...
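
In the meantime you can override the probed geometry by hand at
mkfs time. Something like this should work, assuming the 256k thinp
chunk size and no striping underneath the pool (the sw=1 is an
assumption - increase it to match the real stripe width if the pool
sits on striped PVs):

# mkfs.xfs -d su=256k,sw=1 /dev/vg1/data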

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
