Re: [PATCH 5/9] generic/031: Fix the test case for 64k blocksize config

On 21/06/30 01:18PM, Theodore Ts'o wrote:
> On Wed, Jun 30, 2021 at 08:50:01AM -0700, Darrick J. Wong wrote:
> > > +# fcollapse need offset and len to be multiple of blocksize for filesystems
> > > +# hence make this test work with 64k blocksize as well.
> > ...
> >
> > What if the blocksize is 32k?
>
> ... or 8k?  or 16k?  (which might be more likely)
>
> How about if we change the offsets and lengths in the test so they are
> all multiples of 64k, and adjusting 31.out appropriately?  That will
> allow the test to work for block sizes up to 64k without needing to
> have a special case for 031.out.64k.
>
> I don't know of architectures with a page size > 64k (yet), so this
> should hold us for a while.
>

Yes, I had already made the changes in a way that adapts to any blocksize.
I will change it so that we take fact=65536/4096 by default; since 65536 is a
multiple of every block size up to 64k, scaling all offsets and lengths by
fact covers all those blocksizes and we don't have to change the 031.out file
for different blocksizes.

The test is meant to exercise non-aligned writes; since I still add a few
extra unaligned bytes (the "+ 12") and keep the layout of the writes the same
as before, I think the test still covers the regression it was written for
(see the quick check after the snippet below).


# Scale the original 4k-based offsets and lengths so that the fcollapse
# arguments stay multiples of the block size for block sizes up to 64k.
fact=$((65536 / 4096))
    $XFS_IO_PROG -f \
        -c "pwrite $((185332 * fact + 12)) $((55756 * fact + 12))" \
        -c "fcollapse $((28672 * fact)) $((40960 * fact))" \
        -c "pwrite $((133228 * fact + 12)) $((63394 * fact + 12))" \
        -c "fcollapse 0 $((4096 * fact))" \
    $testfile | _filter_xfs_io
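
As a quick sanity check (standalone, not part of the test; 65536 here is just
the largest block size we care about), with fact=16 the fcollapse
offsets/lengths all land on 64k boundaries while the pwrite offsets stay
unaligned:

fact=$((65536 / 4096))
# fcollapse arguments must be multiples of the block size (<= 64k)
for val in $((28672 * fact)) $((40960 * fact)) $((4096 * fact)); do
    echo "fcollapse arg $val: remainder $((val % 65536))"    # prints 0
done
# the extra "+ 12" keeps the pwrite offsets unaligned on purpose
for val in $((185332 * fact + 12)) $((133228 * fact + 12)); do
    echo "pwrite offset $val: remainder $((val % 65536))"    # non-zero
done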

-ritesh


