On Sat, Jul 17, 2010 at 09:35:33PM -0400, Ilia Mirkin wrote:
> On Sat, Jul 17, 2010 at 9:20 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > On Sat, Jul 17, 2010 at 12:01:11AM -0400, Ilia Mirkin wrote:
> > I can't find a thread that holds the XFS inode lock that everything
> > is waiting on. I think it is the ILOCK, but none of the threads in
> > this trace should be holding it where they are blocked. IOWs, the
> > output does not give me enough information to get to the root cause.
>
> In case this happens again, was there something more useful I could
> have collected? Should I have grabbed all task states?

All the task states, including the running tasks, are probably a good
start.

Also, if the kernel you are running has tracing events enabled and has
the necessary XFS tracepoints (I can't remember off the top of my head
whether they are in 2.6.33), you might want to enable tracing of:

	xfs_ilock
	xfs_ilock_nowait
	xfs_ilock_demote
	xfs_iunlock

via:

# echo 1 > /sys/kernel/debug/tracing/events/xfs/<trace_point>/enable

and when the problem is hit, dump the trace via:

# cat /sys/kernel/debug/tracing/trace > trace.log

You may also want to bump up the trace buffer size to capture more
events:

# echo 32768 > /sys/kernel/debug/tracing/buffer_size_kb

Though I suspect the only way to get to the bottom of it will be to
work out a reproducible test case....

> It's pretty obvious that allowing userspace to hang the FS is really
> bad, but I appreciate that the app is doing something that the kernel
> didn't expect.

Yeah, we need to fix the hang - it's the bigger issue of mixed
direct/buffered IO that I was referring to...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
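As a sketch of one way to grab all the task states Dave asks for,
assuming the kernel was built with CONFIG_MAGIC_SYSRQ, the sysrq-t
trigger dumps the stack of every task to the kernel log:

# echo 1 > /proc/sys/kernel/sysrq
# echo t > /proc/sysrq-trigger
# dmesg > tasks.log

With many tasks the output can overflow the default kernel log buffer,
so it may be necessary to boot with a larger log_buf_len= to capture
everything.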
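And, assuming debugfs is mounted at /sys/kernel/debug, a quick way to
both check whether a 2.6.33 kernel has the XFS lock tracepoints and
enable all four in one go:

# grep xfs_ilock /sys/kernel/debug/tracing/available_events
# cd /sys/kernel/debug/tracing
# for ev in xfs_ilock xfs_ilock_nowait xfs_ilock_demote xfs_iunlock; do
>     echo 1 > events/xfs/$ev/enable
> done

If the grep returns nothing, the tracepoints are not in that kernel and
the tracing suggestion above does not apply.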