On Wed, Oct 09, 2019 at 02:21:14PM +1100, Dave Chinner wrote:
> Factor the common AIL deletion code that does all the wakeups into a
> helper so we only have one copy of this somewhat tricky code to
> interface with all the wakeups necessary when the LSN of the log
> tail changes.
>
> xfs_ail_push_sync() is temporary infrastructure to facilitate
> non-blocking, IO-less inode reclaim throttling that allows further
> structural changes to be made. Once those structural changes are
> made, the need for this function goes away and it is removed,
> leaving us with only the xfs_ail_update_finish() factoring when this
> is all done.

The xfs_ail_update_finish work here is in an earlier patch, so the
changelog will need some updates.

> +	spin_lock(&ailp->ail_lock);
> +	while ((lip = xfs_ail_min(ailp)) != NULL) {
> +		prepare_to_wait(&ailp->ail_push, &wait, TASK_UNINTERRUPTIBLE);
> +		if (XFS_FORCED_SHUTDOWN(ailp->ail_mount) ||
> +		    XFS_LSN_CMP(threshold_lsn, lip->li_lsn) <= 0)
> +			break;
> +		/* XXX: cmpxchg? */
> +		while (XFS_LSN_CMP(threshold_lsn, ailp->ail_target) > 0)
> +			xfs_trans_ail_copy_lsn(ailp, &ailp->ail_target, &threshold_lsn);

This code looks broken on 32-bit, given that xfs_trans_ail_copy_lsn
takes the ail_lock there.  Just replacing the xfs_trans_ail_copy_lsn
call with a direct assignment would fix that; no need for cmpxchg
either as far as I can tell (and it would fix the overly long line as
well).  But a:

	while (XFS_LSN_CMP(threshold_lsn, ailp->ail_target) > 0)
		ailp->ail_target = threshold_lsn;

still looks odd; I think this should simply be an if.

> +		wake_up_process(ailp->ail_task);
> +		spin_unlock(&ailp->ail_lock);

xfsaild will take the ail_lock pretty quickly.  I think we should drop
the lock before waking it.
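
Putting those suggestions together (plain assignment, an if instead of
the inner while, and dropping the lock before the wakeup), the loop
could look something like the sketch below.  This is untested and only
meant to show the ordering I have in mind; the tail of the loop (going
to sleep and re-taking the lock before the next iteration) is assumed
to stay as in your patch:

	spin_lock(&ailp->ail_lock);
	while ((lip = xfs_ail_min(ailp)) != NULL) {
		prepare_to_wait(&ailp->ail_push, &wait, TASK_UNINTERRUPTIBLE);
		if (XFS_FORCED_SHUTDOWN(ailp->ail_mount) ||
		    XFS_LSN_CMP(threshold_lsn, lip->li_lsn) <= 0)
			break;

		/* plain store, we already hold the ail_lock here */
		if (XFS_LSN_CMP(threshold_lsn, ailp->ail_target) > 0)
			ailp->ail_target = threshold_lsn;

		/*
		 * Drop the ail_lock before the wakeup so that xfsaild
		 * doesn't wake up only to block on it straight away.
		 */
		spin_unlock(&ailp->ail_lock);
		wake_up_process(ailp->ail_task);

		/* sleep and re-take the lock as in your version */
		schedule();
		spin_lock(&ailp->ail_lock);
	}
	spin_unlock(&ailp->ail_lock);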