Re: higher agcount on LVM2 thinp volumes

On Thu, Aug 29, 2013 at 08:08:25PM -0600, Chris Murphy wrote:
> 
> On Aug 29, 2013, at 7:44 PM, Stan Hoeppner
> <stan@xxxxxxxxxxxxxxxxx> wrote:
> > 
> > More information would be helpful, specifically WRT the device
> > stack underlying mkfs.xfs.  I.e. we need to know more about the
> > LVM configuration.
> > 
> > See:
> > 
> > http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> 
> Summary: laptop, one HDD, one 402GB partition is made into a PV,
> one VG is created with that PV and is the only VG on the system,
> one 400GB logical volume pool is created, one 100GB virtual sized
> logical volume is created from the thin pool.
....
> meta-data=/dev/vg1/data          isize=256    agcount=16, agsize=1638400 blks
>          =                       sectsz=512   attr=2, projid32bit=0
> data     =                       bsize=4096   blocks=26214400, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal log           bsize=4096   blocks=12800, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> Whereas if I mkfs.xfs on /dev/sda7, or if I create a regular LV
> rather than a thinp volume, agcount is 4. It doesn't matter
> whether I create the thinp with the chunk option set to default
> (as above) or 1MB or 4MB.

Which means that the thinp device is telling mkfs.xfs something
different about its configuration, and that difference makes mkfs.xfs
think it is a RAID volume rather than a single disk.
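
A quick way to compare what mkfs.xfs decides for each device, without
writing anything, is a dry run with -N (device paths below are the
ones from your summary, adjust to your layout; you may need -f if an
existing signature is detected, -N itself never writes):

    # -N prints the geometry mkfs.xfs would use without creating
    # the filesystem
    mkfs.xfs -N /dev/sda7        # plain partition
    mkfs.xfs -N /dev/vg1/data    # thinp LV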

Basically, I think you'll find that the thinp device is emitting an
optimal IO size that is not aligned to the filesystem block size, so
the AG count is being calculated as though it is a ~1TB "multidisk"
device (which gives 16 AGs), while sunit/swidth are set to zero
because they aren't filesystem block aligned...
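
FWIW, the numbers in your mkfs output are consistent with that: 16
AGs of 1638400 4k blocks each covers exactly the 100GB volume:

    # 16 AGs x 1638400 blocks/AG x 4096 bytes/block
    echo $((16 * 1638400 * 4096))    # 107374182400 bytes = 100GB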

Check the contents of
/sys/block/<dev>/queue/{minimum,optimal}_io_size for the single
device, the standard LV and the thinp device. I think that you'll
find only the thinp device has a non-zero value. If the value from
the thinp code is 512 (i.e. single sector) then that's a bug in
the thinp device code as it should be zero...
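
Something like this should show all three (untested sketch; sda and
vg1/data are the names from your summary, substitute your own for the
standard LV):

    # plain partition - queue limits live under the whole-disk node
    cat /sys/block/sda/queue/minimum_io_size
    cat /sys/block/sda/queue/optimal_io_size

    # LVs are device-mapper nodes, so resolve the dm-N name first
    dm=$(basename "$(readlink -f /dev/vg1/data)")    # e.g. dm-3
    cat /sys/block/$dm/queue/minimum_io_size
    cat /sys/block/$dm/queue/optimal_io_size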

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



