On 10/01/12 15:14, Brian Foster wrote:
<deletes by mt>
Heads up... I was doing some testing with my eofblocks set rebased on
top of this patchset, and I'm reproducing a new 273 failure. The failure
bisects down to this patch.
With the bisection, I'm running xfs top of tree plus the following patch:
xfs: only update the last_sync_lsn when a transaction completes
... and patches 1-6 of this set on top of that. i.e.:
xfs: xfs_sync_data is redundant.
xfs: Bring some sanity to log unmounting
xfs: sync work is now only periodic log work
xfs: don't run the sync work if the filesystem is read-only
xfs: rationalise xfs_mount_wq users
xfs: xfs_syncd_stop must die
xfs: only update the last_sync_lsn when a transaction completes
xfs: Make inode32 a remountable option
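For anyone repeating the bisection, it can be automated with git bisect
run. This is only a sketch: the good commit, build flags, and xfstests
path are placeholders, not details from the report above.

```shell
# Sketch of automating the bisection with git bisect run.
# <good-commit> and /path/to/xfstests are placeholders.
git bisect start
git bisect bad HEAD                  # tree with the patches applied fails 273
git bisect good <good-commit>        # last known-good baseline
# git bisect run treats a nonzero exit as "bad" at each step:
git bisect run sh -c 'make -j16 && cd /path/to/xfstests && ./check 273'
git bisect reset                     # return to the original HEAD when done
```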
This is on a 16p (according to /proc/cpuinfo) x86-64 system with 32GB
RAM. The test and scratch volumes are both 500GB LVM volumes on top of a
hardware RAID. I haven't looked into this at all yet, but I wanted to
drop it on the list for now. The 273 output is attached.
Brian
<deletes by mt>
273.out.bad
QA output created by 273
------------------------------
start the workload
------------------------------
_porter 31 not complete
_porter 79 not complete
_porter 149 not complete
_porter 74 not complete
_porter 161 not complete
_porter 54 not complete
_porter 98 not complete
_porter 99 not complete
_porter 167 not complete
_porter 76 not complete
_porter 45 not complete
_porter 152 not complete
_porter 173 not complete
_porter 24 not complete
<deletes by mt>
I see it too on a single machine. It looks like an interaction between
patch 06 and the "...update the last_sync_lsn..." patch.
I like the "...update the last_sync_lsn..." patch because it fixes the
"xlog_verify_tail_lsn: tail wrapped" and "xlog_verify_tail_lsn: ran out
of log space" messages that I have been getting on that machine.
--Mark.
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs