On Wed, Oct 19, 2011 at 11:42:06AM +1100, Dave Chinner wrote:
> > +void
> > +__xfs_iflock(
> > +	struct xfs_inode	*ip)
> > +{
> > +	wait_queue_head_t *wq = bit_waitqueue(&ip->i_flags, __XFS_IFLOCK);
> > +	DEFINE_WAIT_BIT(wait, &ip->i_flags, __XFS_IFLOCK);
> > +
> > +	do {
> > +		prepare_to_wait_exclusive(wq, &wait.wait, TASK_UNINTERRUPTIBLE);
> > +		if (xfs_isiflocked(ip))
> > +			schedule();
> > +	} while (!xfs_iflock_nowait(ip));
> > +
> > +	finish_wait(wq, &wait.wait);
> > +}
> 
> Given that the only way that the inode will become unlocked is for
> IO to complete, that makes this an IO wait, right?  Perhaps this
> should call io_schedule() in that case?

It probably should, and would help a bit with our %iowait accounting.

The biggie for that is the buffer lock, though.  Either we'll need a
variant of the semaphore that does io_schedule(), which we're unlikely
to get given that the greater gods want struct semaphore to die, or
we'll need to do the same bitlock trick there, even if we're not too
worried about the size of struct xfs_buf - in fact that is how
fs/buffer.c gets the iowait accounting right.

> > @@ -380,6 +372,8 @@ static inline void xfs_ifunlock(xfs_inod
> >  #define XFS_IFILESTREAM	0x0010 /* inode is in a filestream directory */
> >  #define XFS_ITRUNCATED	0x0020 /* truncated down so flush-on-close */
> >  #define XFS_IDIRTY_RELEASE	0x0040 /* dirty release already seen */
> > +#define __XFS_IFLOCK		8      /* inode is being flushed right now */
> > +#define XFS_IFLOCK		(1 << __XFS_IFLOCK)
> 
> Any reason for leaving a gap in the flag space here?

I can't remember.  But looking at it again it might be a good idea to
convert the other flags to the (1 << bit) scheme to make the numbering
more obvious.
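To illustrate, converting just the flags quoted above would look
something like this (a sketch only - the values stay the same, only
the notation changes, and the gap before __XFS_IFLOCK is left in
place for now):

#define XFS_IFILESTREAM		(1 << 4)	/* inode is in a filestream directory */
#define XFS_ITRUNCATED		(1 << 5)	/* truncated down so flush-on-close */
#define XFS_IDIRTY_RELEASE	(1 << 6)	/* dirty release already seen */
#define __XFS_IFLOCK		8		/* inode is being flushed right now */
#define XFS_IFLOCK		(1 << __XFS_IFLOCK)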
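Back to the io_schedule() point: the change would just be swapping
the scheduling call in the wait loop, along the lines of this
(untested) sketch:

void
__xfs_iflock(
	struct xfs_inode	*ip)
{
	wait_queue_head_t *wq = bit_waitqueue(&ip->i_flags, __XFS_IFLOCK);
	DEFINE_WAIT_BIT(wait, &ip->i_flags, __XFS_IFLOCK);

	do {
		prepare_to_wait_exclusive(wq, &wait.wait, TASK_UNINTERRUPTIBLE);
		if (xfs_isiflocked(ip))
			io_schedule();	/* counted as I/O wait for %iowait */
	} while (!xfs_iflock_nowait(ip));

	finish_wait(wq, &wait.wait);
}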
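And for reference, the fs/buffer.c trick I mentioned is (roughly, from
memory) a bit lock with an action callback that does the io_schedule():

static int sleep_on_buffer(void *word)
{
	io_schedule();
	return 0;
}

void __lock_buffer(struct buffer_head *bh)
{
	wait_on_bit_lock(&bh->b_state, BH_Lock, sleep_on_buffer,
							TASK_UNINTERRUPTIBLE);
}

Doing the same for our buffer lock would mean moving it from the
semaphore to a flag bit in a state word, which is the part that needs
more thought.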