From: Tao Ma <boyu.mt@xxxxxxxxxx>

When we finish the end-io work in ext4_flush_completed_IO, we take the
io entry off the list, but don't free it.  The workqueue can then check
the list state and skip the extra work if it has already been done.
That is fine, but we check the list state in ext4_end_io_nolock with
only i_mutex held, while everywhere else it is protected by the
spin_lock.  This is wrong.  So check the state under the spin_lock
instead; as a side effect, the heavy extra mutex_lock can also be
avoided when the work is already done.

Cc: "Theodore Ts'o" <tytso@xxxxxxx>
Signed-off-by: Tao Ma <boyu.mt@xxxxxxxxxx>
---
 fs/ext4/page-io.c |   11 ++++++++---
 1 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 92f38ee..f6b40f1 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -100,9 +100,6 @@ int ext4_end_io_nolock(ext4_io_end_t *io)
 		   "list->prev 0x%p\n",
 		   io, inode->i_ino, io->list.next, io->list.prev);
 
-	if (list_empty(&io->list))
-		return ret;
-
 	if (!(io->flag & EXT4_IO_END_UNWRITTEN))
 		return ret;
 
@@ -142,6 +139,13 @@ static void ext4_end_io_work(struct work_struct *work)
 	unsigned long flags;
 	int ret;
 
+	spin_lock_irqsave(&ei->i_completed_io_lock, flags);
+	if (list_empty(&io->list)) {
+		spin_unlock_irqrestore(&ei->i_completed_io_lock, flags);
+		goto free;
+	}
+	spin_unlock_irqrestore(&ei->i_completed_io_lock, flags);
+
 	if (!mutex_trylock(&inode->i_mutex)) {
 		/*
 		 * Requeue the work instead of waiting so that the work
@@ -170,6 +174,7 @@ static void ext4_end_io_work(struct work_struct *work)
 	list_del_init(&io->list);
 	spin_unlock_irqrestore(&ei->i_completed_io_lock, flags);
 	mutex_unlock(&inode->i_mutex);
+free:
 	ext4_free_io_end(io);
 }
-- 
1.7.0.4

--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
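
For reference, a rough sketch of how ext4_end_io_work reads with this
patch applied; the requeue path and the work between the trylock and
the final unlock are elided as comments, and the declarations follow
the existing page-io.c code, so this is illustrative rather than the
literal post-patch function:

static void ext4_end_io_work(struct work_struct *work)
{
	ext4_io_end_t *io = container_of(work, ext4_io_end_t, work);
	struct inode *inode = io->inode;
	struct ext4_inode_info *ei = EXT4_I(inode);
	unsigned long flags;
	int ret;

	/*
	 * Check under the spinlock whether ext4_flush_completed_IO has
	 * already taken this io_end off the list and completed the work.
	 * If so, just free it -- no need to take i_mutex at all.
	 */
	spin_lock_irqsave(&ei->i_completed_io_lock, flags);
	if (list_empty(&io->list)) {
		spin_unlock_irqrestore(&ei->i_completed_io_lock, flags);
		goto free;
	}
	spin_unlock_irqrestore(&ei->i_completed_io_lock, flags);

	if (!mutex_trylock(&inode->i_mutex)) {
		/* requeue the work and return (unchanged) */
		return;
	}

	ret = ext4_end_io_nolock(io);
	/* ... error handling and list_del_init() under the spinlock ... */

	mutex_unlock(&inode->i_mutex);
free:
	ext4_free_io_end(io);
}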