> One thing that can cause this sort of behaviour is if the filesystem is in
> the middle of a sync and has to complete it before the create can
> complete, and the sync is writing out many megabytes of data.
>
> You can see if this is happening by running
>
>     watch 'grep Dirty /proc/meminfo'

OK, I did this.

> if that is large when the hang starts, and drops down to zero, and the
> hang lets go when it hits (close to) zero, then this is the problem.

No, not really.  The value rises and falls erratically during normal
operation (anything from a few dozen K to 200 MB), but it is not
necessarily very high at the onset of the event.  When the halt occurs,
it drops from whatever value it had (perhaps 256K or so) to 16K, and
then slowly rises to several hundred K until the event ends.

> If that doesn't turn out to be the problem, then knowing how the
> "Dirty" count is behaving might still be useful, and I would probably
> look at what processes are in 'D' state, (ps axgu)

Well, nothing surprising there.  The process(es) involved with the
transfer(s) are in D+ state, as is the trigger process (for testing, I
simply copy /etc/hosts over to a directory on the RAID array), and
pdflush is in D state (no plus), but that's all.

> and look at their stack (/proc/$PID/stack)..

Um, I thought I knew what you meant by this, but apparently not.  I
tried to `cat /proc/<PID of the process with a D status>/stack`, but the
system returned "cat: /proc/8005/stack: No such file or directory".
What did I do wrong?
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
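[For what it's worth, the D-state scan above does not require eyeballing
the full `ps axgu` listing; a small sketch using standard ps(1) output
format keywords (pid, stat, wchan, comm):]

```shell
#!/bin/sh
# List processes in uninterruptible sleep (state "D"), i.e. the ones
# stuck waiting on I/O.  The wchan column shows the kernel function the
# process is sleeping in, which is often enough of a hint on its own.
# awk keeps the header line (NR == 1) plus any line whose STAT field
# starts with "D" (D, D+, Ds, ...).
ps ax -o pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /^D/'
```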
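[Regarding the missing /proc/8005/stack above: that file only exists on
kernels built with CONFIG_STACKTRACE (it was merged around 2.6.29), and
even where it exists it is normally readable only by root.  A rough
diagnostic sketch, assuming a stock /proc layout; check_stack is just an
illustrative helper name, and 8005 is the PID from the session above:]

```shell
#!/bin/sh
# Sketch: explain why cat /proc/$PID/stack failed.
# check_stack takes the PID of a D-state process found via ps.
check_stack() {
    pid=$1
    if [ ! -d "/proc/$pid" ]; then
        echo "no such process: $pid (it may have exited already)"
    elif [ ! -e "/proc/$pid/stack" ]; then
        # /proc/<pid>/stack only exists when the kernel was built
        # with CONFIG_STACKTRACE (merged around 2.6.29).
        echo "no /proc/$pid/stack: kernel likely built without CONFIG_STACKTRACE"
        echo "fallback (as root): echo w > /proc/sysrq-trigger; dmesg | tail"
    elif [ ! -r "/proc/$pid/stack" ]; then
        echo "/proc/$pid/stack exists but is root-only; retry with sudo"
    else
        cat "/proc/$pid/stack"
    fi
}

check_stack "${1:-$$}"   # default to our own PID as a quick sanity check
```

[The SysRq fallback (`echo w > /proc/sysrq-trigger`) asks the kernel to
dump the stacks of all blocked (D-state) tasks into the kernel log, so
it gives the same information even on kernels without /proc/<pid>/stack.]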