Re: xfstests, bad generic tests 009 and 308


 



Hi Dave,

many thanks for the support. Sorry for the double mail: my messages were not
accepted after the first registration, so I re-registered with a company
mail address.


On 19/09/2015 00:44, Dave Chinner wrote:
On Fri, Sep 18, 2015 at 06:38:38PM +0200, Angelo Dureghello wrote:
Hi all,

I am working on arm (32bit arch), kernel 4.1.6.
Is this a new platform?

Also, we need to know what compiler you are using, because we know
that certain versions of gcc miscompile XFS kernel code on arm
(4.6, 4.7 and certain versions of 4.8 are suspect) due to a
combination of compiler mis-optimisations and kernel bugs in the
arm 64 bit division asm implementation.

As such, it would be worthwhile trying gcc-4.9 and a 4.3-rc1 kernel
to see if the problems still occur.

I am actually using gcc-linaro-4.9-2015.05-x86_64_arm-linux-gnueabihf.

I am trying to find the reason for some bad results from xfstests:

-tests/generic/009
------------------
I get several "all holes" messages:

generic/009    [  842.949643] run fstests generic/009 at 2015-09-18 15:29:36
  - output mismatch (see /home/angelo/xfstests/results//generic/009.out.bad)
     --- tests/generic/009.out    2015-09-17 10:54:06.689071257 +0000
     +++ /home/angelo/xfstests/results//generic/009.out.bad    2015-09-18 15:29:41.412784177 +0000
     @@ -1,79 +1,45 @@
      QA output created by 009
          1. into a hole
     -0: [0..7]: hole
     -1: [8..23]: unwritten
     -2: [24..39]: hole
     +0: [0..39]: hole
      daa100df6e6711906b61c9ab5aa16032

Some other tests are giving the same kind of bad output as well.
Can you attach the entire
/home/angelo/xfstests/results//generic/009.out.bad file? I'm not
sure which of the tests this output comes from, so I need to
confirm which specific operations are resulting in errors.
Sure, I completed the whole generic + shared + xfs test run.
In total I have 38 failures, and I am now going through them one by one to understand the reasons.
I attached the 009 output.
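
For reference, the failing "1. into a hole" case seems to boil down to a
zero-range call in the middle of a sparse file; a rough way to reproduce it by
hand with xfs_io (just a sketch, assuming fzero/fiemap support and a
hypothetical file path on the test partition) would be:

  f=/media/p5/zero-range-check        # hypothetical path
  xfs_io -f -c "truncate 20k" \
            -c "fzero 4k 8k" \
            -c "fiemap -v" "$f"

The expected layout is a hole, an unwritten extent over the zeroed 8k, and a
hole again (the "[0..7]: hole / [8..23]: unwritten / [24..39]: hole" lines in
the .out file); on this kernel the whole range comes back as a single hole
instead.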

-tests/generic/308
------------------

I now have CONFIG_LBDAF=y.

On my target device this test creates a 16 terabyte file, testfile.308:

-rw------- 1 root root  17592186044415 Sep 18 09:40 testfile.308

while "df" shows nothing unusual for the partition:

/dev/mmcblk0p5   8378368   45252   8333116   1% /media/p5

and the subsequent rm -f on it pegs the CPU at 95%, forever.

This issue seems to have been known for a long time, as it was discussed in
this thread:

http://oss.sgi.com/archives/xfs/2013-04/msg00273.html

I was wondering whether there was any particular reason why Jeff's patch was
never applied in the end.
MAX_LFS_FILESIZE on 32 bits is 8TB, whereas xfs supports a 16TB file
size on 32 bit systems. The specific issue this test exposes was fixed
by commit 8695d27 ("xfs: fix infinite loop at xfs_vm_writepage on 32bit
system"):

http://oss.sgi.com/archives/xfs/2014-05/msg00447.html

And, as you may notice now, generic/308 is the test case for the
exact problem the above commit fixed.
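
For reference, the two limits above work out as follows with the usual 4KiB
page size (back-of-the-envelope shell arithmetic, not the actual kernel
definitions):

  echo $(( (4096 << 31) - 1 ))   # 8796093022207, ~8TB: MAX_LFS_FILESIZE on
                                 # 32 bit (page size << (BITS_PER_LONG - 1), minus 1)
  echo $(( (1 << 44) - 1 ))      # 17592186044415, ~16TB: matches the
                                 # testfile.308 size shown above

so testfile.308 sits well beyond what the 32-bit page cache limit can address.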

I have a recent git version of xfstests, but generic/308 shows:

#! /bin/bash
# FS QA Test No. 308
#
# Regression test for commit:
# f17722f ext4: Fix max file size and logical block counting of extent format file

Can you find out exactly where the CPU is looping? sysrq-l will
help, as will running 'perf top -U -g' to show you the hot code
paths, and so on.

Strangely, the patch from http://oss.sgi.com/archives/xfs/2014-05/msg00447.html is already included
in the XFS code that ships with this 4.1.6 kernel, while only after also applying Jeff's earlier patch

http://oss.sgi.com/archives/xfs/2013-04/msg00273.html

does the issue go away and test 308 pass.


I have a 16MB partition, and I am wondering why XFS allows test 308 to create a 16TB file.

-rw------- 1 root root  17592186044415 Sep 18 09:40 testfile.308


When test 308 exits and rm is invoked, the system gets stuck in an infinite loop:

root 5445 0.7 0.2 3760 3180 ttyS0 S+ 10:53 0:00 /bin/bash /home/angelo/xfstests/tests/generic/308
root 5674 100 0.0 1388 848 ttyS0 R+ 10:53 0:27 rm -f /media/p5/testfile.308
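
In case it helps, the hang seems reproducible outside the harness with
something like the following (a sketch only; the offset is inferred from the
file size above, while the real test derives its numbers from the filesystem
block size):

  f=/media/p5/testfile.308
  xfs_io -f -c "pwrite $((2**44 - 2)) 1" "$f"   # last byte just below the 16TB boundary
  ls -l "$f"                                    # 17592186044415 bytes, as above
  rm -f "$f"                                    # this is where the CPU spins on 4.1.6 here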

I can't install the perf tools at the moment due to some Debian repository
issue, but let me know and I will enable sysrq if needed.
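
(Once sysrq is enabled, the backtrace of the looping CPU can be captured from
a shell roughly like this:

  echo 1 > /proc/sys/kernel/sysrq    # allow all sysrq functions
  echo l > /proc/sysrq-trigger       # dump backtraces of all active CPUs
  dmesg | tail -n 60                 # the looping code path should show up here
)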

Best regards
Angelo



Cheers,

Dave.

--
Best regards,
Angelo Dureghello

QA output created by 009
	1. into a hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	2. into allocated space
cc58a7417c2d7763adc45b6fcd3fa024
	3. into unwritten space
daa100df6e6711906b61c9ab5aa16032
	4. hole -> data
0: [0..39]: hole
cc63069677939f69a6e8f68cae6a6dac
	5. hole -> unwritten
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	6. data -> hole
0: [24..39]: hole
1b3779878366498b28c702ef88c4a773
	7. data -> unwritten
0: [32..39]: hole
1b3779878366498b28c702ef88c4a773
	8. unwritten -> hole
0: [24..39]: hole
daa100df6e6711906b61c9ab5aa16032
	9. unwritten -> data
0: [32..39]: hole
cc63069677939f69a6e8f68cae6a6dac
	10. hole -> data -> hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	11. data -> hole -> data
f6aeca13ec49e5b266cd1c913cd726e3
	12. unwritten -> data -> unwritten
daa100df6e6711906b61c9ab5aa16032
	13. data -> unwritten -> data
f6aeca13ec49e5b266cd1c913cd726e3
	14. data -> hole @ EOF
e1f024eedd27ea6b1c3e9b841c850404
	15. data -> hole @ 0
eecb7aa303d121835de05028751d301c
	16. data -> cache cold ->hole
eecb7aa303d121835de05028751d301c
	17. data -> hole in single block file
0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
0000200 0000 0000 0000 0000 0000 0000 0000 0000
*
0000400 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
	1. into a hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	2. into allocated space
cc58a7417c2d7763adc45b6fcd3fa024
	3. into unwritten space
daa100df6e6711906b61c9ab5aa16032
	4. hole -> data
0: [0..39]: hole
cc63069677939f69a6e8f68cae6a6dac
	5. hole -> unwritten
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	6. data -> hole
0: [24..39]: hole
1b3779878366498b28c702ef88c4a773
	7. data -> unwritten
0: [32..39]: hole
1b3779878366498b28c702ef88c4a773
	8. unwritten -> hole
0: [24..39]: hole
daa100df6e6711906b61c9ab5aa16032
	9. unwritten -> data
0: [32..39]: hole
cc63069677939f69a6e8f68cae6a6dac
	10. hole -> data -> hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	11. data -> hole -> data
f6aeca13ec49e5b266cd1c913cd726e3
	12. unwritten -> data -> unwritten
daa100df6e6711906b61c9ab5aa16032
	13. data -> unwritten -> data
f6aeca13ec49e5b266cd1c913cd726e3
	14. data -> hole @ EOF
e1f024eedd27ea6b1c3e9b841c850404
	15. data -> hole @ 0
eecb7aa303d121835de05028751d301c
	16. data -> cache cold ->hole
eecb7aa303d121835de05028751d301c
	17. data -> hole in single block file
0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
0000200 0000 0000 0000 0000 0000 0000 0000 0000
*
0000400 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
	1. into a hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	2. into allocated space
cc58a7417c2d7763adc45b6fcd3fa024
	3. into unwritten space
cc58a7417c2d7763adc45b6fcd3fa024
	4. hole -> data
cc58a7417c2d7763adc45b6fcd3fa024
	5. hole -> unwritten
cc58a7417c2d7763adc45b6fcd3fa024
	6. data -> hole
cc58a7417c2d7763adc45b6fcd3fa024
	7. data -> unwritten
cc58a7417c2d7763adc45b6fcd3fa024
	8. unwritten -> hole
cc58a7417c2d7763adc45b6fcd3fa024
	9. unwritten -> data
cc58a7417c2d7763adc45b6fcd3fa024
	10. hole -> data -> hole
f6aeca13ec49e5b266cd1c913cd726e3
	11. data -> hole -> data
f6aeca13ec49e5b266cd1c913cd726e3
	12. unwritten -> data -> unwritten
f6aeca13ec49e5b266cd1c913cd726e3
	13. data -> unwritten -> data
f6aeca13ec49e5b266cd1c913cd726e3
	14. data -> hole @ EOF
e1f024eedd27ea6b1c3e9b841c850404
	15. data -> hole @ 0
eecb7aa303d121835de05028751d301c
	16. data -> cache cold ->hole
eecb7aa303d121835de05028751d301c
	17. data -> hole in single block file
0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
0000200 0000 0000 0000 0000 0000 0000 0000 0000
*
0000400 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
	1. into a hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	2. into allocated space
cc58a7417c2d7763adc45b6fcd3fa024
	3. into unwritten space
cc58a7417c2d7763adc45b6fcd3fa024
	4. hole -> data
cc58a7417c2d7763adc45b6fcd3fa024
	5. hole -> unwritten
cc58a7417c2d7763adc45b6fcd3fa024
	6. data -> hole
cc58a7417c2d7763adc45b6fcd3fa024
	7. data -> unwritten
cc58a7417c2d7763adc45b6fcd3fa024
	8. unwritten -> hole
cc58a7417c2d7763adc45b6fcd3fa024
	9. unwritten -> data
cc58a7417c2d7763adc45b6fcd3fa024
	10. hole -> data -> hole
f6aeca13ec49e5b266cd1c913cd726e3
	11. data -> hole -> data
f6aeca13ec49e5b266cd1c913cd726e3
	12. unwritten -> data -> unwritten
f6aeca13ec49e5b266cd1c913cd726e3
	13. data -> unwritten -> data
f6aeca13ec49e5b266cd1c913cd726e3
	14. data -> hole @ EOF
e1f024eedd27ea6b1c3e9b841c850404
	15. data -> hole @ 0
eecb7aa303d121835de05028751d301c
	16. data -> cache cold ->hole
eecb7aa303d121835de05028751d301c
	17. data -> hole in single block file
0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
0000200 0000 0000 0000 0000 0000 0000 0000 0000
*
0000400 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
