On Thu, Jun 20, 2019 at 01:04:54PM +0200, Jan Kara wrote:
> On Wed 19-06-19 11:21:55, Ross Zwisler wrote:
> > Currently both journal_submit_inode_data_buffers() and
> > journal_finish_inode_data_buffers() operate on the entire address space
> > of each of the inodes associated with a given journal entry.  The
> > consequence of this is that if we have an inode where we are constantly
> > appending dirty pages we can end up waiting for an indefinite amount of
> > time in journal_finish_inode_data_buffers() while we wait for all the
> > pages under writeback to be written out.
> >
> > The easiest way to cause this type of workload is to just dd from
> > /dev/zero to a file until it fills the entire filesystem.  This can
> > cause journal_finish_inode_data_buffers() to wait for the duration of
> > the entire dd operation.
> >
> > We can improve this situation by scoping each of the inode dirty ranges
> > associated with a given transaction.  We do this via the jbd2_inode
> > structure so that the scoping is contained within jbd2 and so that it
> > follows the lifetime and locking rules for that structure.
> >
> > This allows us to limit the writeback & wait in
> > journal_submit_inode_data_buffers() and
> > journal_finish_inode_data_buffers() respectively to the dirty range for
> > a given struct jbd2_inode, keeping us from waiting forever if the inode
> > in question is still being appended to.
> >
> > Signed-off-by: Ross Zwisler <zwisler@xxxxxxxxxx>
>
> The patch looks good to me. I was thinking whether we should not have
> separate ranges for current and the next transaction but I guess it is
> not worth it at least for now. So just one nit below. With that applied
> feel free to add:
>
> Reviewed-by: Jan Kara <jack@xxxxxxx>

We could definitely keep separate dirty ranges for each of the current and
next transaction.  I think the case where you would see a difference would
be if you had multiple transactions in a row which grew the dirty range for
a given jbd2_inode, and then had a random I/O workload which kept dirtying
pages inside that enlarged dirty range.

I'm not sure how often that type of workload would be a problem.  For the
workloads I've been testing, which purely append to the inode, having a
single dirty range per jbd2_inode is sufficient.

I guess for now the single range is simpler, but if we later find a
workload that would benefit from separate tracking for the current and
next transactions, I'll take a shot at adding it.

Thank you for the review!

> > @@ -257,15 +262,24 @@ static int journal_finish_inode_data_buffers(journal_t *journal,
> >  	/* For locking, see the comment in journal_submit_data_buffers() */
> >  	spin_lock(&journal->j_list_lock);
> >  	list_for_each_entry(jinode, &commit_transaction->t_inode_list, i_list) {
> > +		loff_t dirty_start = jinode->i_dirty_start;
> > +		loff_t dirty_end = jinode->i_dirty_end;
> > +
> >  		if (!(jinode->i_flags & JI_WAIT_DATA))
> >  			continue;
> >  		jinode->i_flags |= JI_COMMIT_RUNNING;
> >  		spin_unlock(&journal->j_list_lock);
> > -		err = filemap_fdatawait_keep_errors(
> > -				jinode->i_vfs_inode->i_mapping);
> > +		err = filemap_fdatawait_range_keep_errors(
> > +				jinode->i_vfs_inode->i_mapping, dirty_start,
> > +				dirty_end);
> >  		if (!ret)
> >  			ret = err;
> >  		spin_lock(&journal->j_list_lock);
> > +
> > +		if (!jinode->i_next_transaction) {
> > +			jinode->i_dirty_start = 0;
> > +			jinode->i_dirty_end = 0;
> > +		}
>
> This would be more logical in the next loop that moves jinode into the next
> transaction.

Yep, agreed, this is much better.  Fixed in v2.
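
For reference, the v2 change is roughly the following; this is just a
sketch against the "refile inode to proper lists" loop at the end of
journal_finish_inode_data_buffers(), so the exact context lines may differ
in the posted patch:

	/* Now refile inode to proper lists */
	list_for_each_entry_safe(jinode, next_i,
				 &commit_transaction->t_inode_list, i_list) {
		list_del(&jinode->i_list);
		if (jinode->i_next_transaction) {
			/*
			 * The inode also has dirty data in the next
			 * transaction, so carry the jinode (and its
			 * accumulated dirty range) over to it.
			 */
			jinode->i_transaction = jinode->i_next_transaction;
			jinode->i_next_transaction = NULL;
			list_add(&jinode->i_list,
				 &jinode->i_transaction->t_inode_list);
		} else {
			/*
			 * No follow-on transaction is tracking this inode,
			 * so reset the dirty range here rather than in the
			 * wait loop above.
			 */
			jinode->i_transaction = NULL;
			jinode->i_dirty_start = 0;
			jinode->i_dirty_end = 0;
		}
	}
	spin_unlock(&journal->j_list_lock);

This keeps the reset next to the point where the jinode actually leaves the
committing transaction, which I agree reads more logically.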