On Thu, Aug 20, 2015 at 02:12:24PM +0800, Eryu Guan wrote:
> On Wed, Aug 19, 2015 at 07:56:11AM +1000, Dave Chinner wrote:
> > On Tue, Aug 18, 2015 at 12:54:39PM -0700, Tejun Heo wrote:
> > > Hello,
> > >
> > > On Tue, Aug 18, 2015 at 10:47:18AM -0700, Tejun Heo wrote:
> > > > Hmm... the only possibility I can think of is tot_write_bandwidth
> > > > being zero when it shouldn't be.  I've been staring at the code
> > > > for a while now but nothing rings a bell.  Time for another debug
> > > > patch, I guess.
> > >
> > > So, I can now reproduce the bug (it takes a lot of trials but
> > > lowering the number of tested files helps quite a bit) and
> > > instrumented all the early exit paths w/o the fix patch.
> > > bdi_has_dirty_io() and wb_has_dirty_io() are never out of sync with
> > > the actual dirty / io lists even when test 048 fails, so the bug at
> > > least is not caused by writeback skipping due to a buggy
> > > bdi/wb_has_dirty_io() result.  Whenever it skips, all the lists are
> > > actually empty (verified while holding list_lock).
> > >
> > > One suspicion I have is that this could be a subtle timing issue
> > > which is being exposed by the new short-cut path.  Anything which
> > > adds delay seems to make the issue go away.  Dave, does anything
> > > ring a bell?
> >
> > No, it doesn't.  The data writeback mechanisms XFS uses are all
> > generic.  It marks inodes I_DIRTY_PAGES and lets the generic code
> > take care of everything else.  Yes, we do delayed allocation during
> > writeback, and we log the inode size updates during IO completion,
> > so if inode sizes are not getting updated, then Occam's Razor
> > suggests that writeback is not happening.
> >
> > I'd suggest looking at some of the XFS tracepoints during the test:
> >
> >     tracepoint                  trigger
> >     xfs_file_buffered_write     once per write syscall
> >     xfs_file_fsync              once per fsync per inode
> >     xfs_vm_writepage            every ->writepage call
> >     xfs_setfilesize             every IO completion that updates inode size
>
> I gave the tracepoints a try, but my root fs is xfs so I got a lot of
> noise.  I'll try to install a new vm with ext4 as root fs.  But I'm
> not sure the new vm can reproduce the failure; we'll see.

I installed a new vm with ext4 as the root fs and got some trace info.

On the new vm only generic/048 is reproducible; generic/049 always
passes.  And I can only reproduce generic/048 when just the xfs
tracepoints are enabled; if the writeback tracepoints are enabled too,
I can no longer reproduce the failure.  All tests were done on a
4.2-rc7 kernel.

This is the trace-cmd I'm using:

    cd /mnt/ext4
    trace-cmd record -e xfs_file_buffered_write \
                     -e xfs_file_fsync \
                     -e xfs_writepage \
                     -e xfs_setfilesize &
    pushd /path/to/xfstests
    ./check generic/048
    popd
    kill -s 2 $!
    trace-cmd report >trace_report.txt

I attached three files:

1) xfs-trace-generic-048.txt.bz2
   The trace report result.

2) xfs-trace-generic-048.diff
   The generic/048 failure diff output, which shows which files have
   incorrect sizes.

3) xfs-trace-generic-048.metadump.bz2
   A metadump of SCRATCH_DEV, which contains the test files.

If more info is needed please let me know.

Thanks,
Eryu
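By the way, the root-fs noise should be avoidable without reinstalling:
trace-cmd can attach a per-event filter at record time.  The following
is only a rough, untested sketch.  It assumes the xfs events expose a
'dev' field (check the event's 'format' file under
/sys/kernel/debug/tracing/events/xfs/), that SCRATCH_DEV is /dev/sdb5
(a placeholder), and that the filter compares against the kernel's
internal dev_t encoding:

    # major:minor of SCRATCH_DEV, in hex; /dev/sdb5 is a placeholder
    MAJ=$(stat -c '%t' /dev/sdb5)
    MIN=$(stat -c '%T' /dev/sdb5)
    # kernel-internal dev_t encoding: major << 20 | minor
    DEV=$(( (0x$MAJ << 20) | 0x$MIN ))

    trace-cmd record \
        -e xfs_file_buffered_write -f "dev == $DEV" \
        -e xfs_file_fsync          -f "dev == $DEV" \
        -e xfs_writepage           -f "dev == $DEV" \
        -e xfs_setfilesize         -f "dev == $DEV" &

Each -f applies to the event named by the preceding -e, so only events
from the scratch device should be recorded.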
--- tests/generic/048.out       2015-08-20 15:00:06.210000000 +0800
+++ /root/xfstests/results//generic/048.out.bad 2015-08-20 20:52:58.847000000 +0800
@@ -1 +1,9 @@
 QA output created by 048
+file /mnt/testarea/scratch/982 has incorrect size - sync failed
+file /mnt/testarea/scratch/983 has incorrect size - sync failed
+file /mnt/testarea/scratch/984 has incorrect size - sync failed
+file /mnt/testarea/scratch/985 has incorrect size - sync failed
+file /mnt/testarea/scratch/987 has incorrect size - sync failed
+file /mnt/testarea/scratch/989 has incorrect size - sync failed
+file /mnt/testarea/scratch/991 has incorrect size - sync failed
+file /mnt/testarea/scratch/993 has incorrect size - sync failed
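For anyone who wants to poke at the attached metadump, a typical
restore-and-inspect flow is sketched below.  It is untested and makes
assumptions: that the dump was taken without file name obfuscation
(xfs_metadump -o), so the numeric names from the diff above survive,
and that a read-only loop mount succeeds.  xfs_mdrestore restores
metadata only, so file contents read back as zeroes, but the inode
sizes that generic/048 complains about are preserved:

    bunzip2 -k xfs-trace-generic-048.metadump.bz2
    xfs_mdrestore xfs-trace-generic-048.metadump scratch.img
    mkdir -p /mnt/restore
    # add norecovery if log replay is refused on a read-only mount
    mount -o loop,ro scratch.img /mnt/restore
    stat -c '%n %s' /mnt/restore/982 /mnt/restore/993
    umount /mnt/restore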
Attachment: xfs-trace-generic-048.metadump.bz2 (BZip2 compressed data)
Attachment: xfs-trace-generic-048.txt.bz2 (BZip2 compressed data)