Jaegeuk wondered whether callers of write_inode_now() should hold
i_rwsem, and whether that would also prevent this problem. Some
existing callers of write_inode_now() do, e.g. ntfs and hfs:

hfs_file_fsync()
	inode_lock(inode);
	/* sync the inode to buffers */
	ret = write_inode_now(inode, 0);

but there are also some that don't (e.g. fat, fuse, orangefs).

Thanks,
Martijn

On Fri, May 22, 2020 at 5:36 PM Jan Kara <jack@xxxxxxx> wrote:
>
> On Fri 22-05-20 17:23:30, Martijn Coenen wrote:
> > [ dropped android-storage-core@xxxxxxxxxx from CC: since that list
> > can't receive emails from outside google.com - sorry about that ]
> >
> > Hi Jan,
> >
> > On Fri, May 22, 2020 at 4:41 PM Jan Kara <jack@xxxxxxx> wrote:
> > > > The easiest way to fix this, I think, is to call requeue_inode()
> > > > at the end of writeback_single_inode(), much like it is called
> > > > from writeback_sb_inodes(). However, requeue_inode() has the
> > > > following ominous warning:
> > > >
> > > > /*
> > > >  * Find proper writeback list for the inode depending on its current state and
> > > >  * possibly also change of its state while we were doing writeback. Here we
> > > >  * handle things such as livelock prevention or fairness of writeback among
> > > >  * inodes. This function can be called only by flusher thread - noone else
> > > >  * processes all inodes in writeback lists and requeueing inodes behind flusher
> > > >  * thread's back can have unexpected consequences.
> > > >  */
> > > >
> > > > Obviously this is very critical code, both from a correctness and
> > > > a performance point of view, so I wanted to run this by the
> > > > maintainers and the folks who have contributed to this code first.
> > >
> > > Sadly, the fix won't be so easy. The main problem with calling
> > > requeue_inode() from writeback_single_inode() is that if there's a
> > > parallel sync(2) call, inode->i_io_list is used to track all inodes
> > > that need writing before sync(2) can complete. So requeueing inodes
> > > in parallel while sync(2) runs can break its data integrity
> > > guarantees.
> >
> > Ah, makes sense.
> >
> > > But I agree we need to find some mechanism to safely move an inode
> > > to the appropriate dirty list reasonably quickly.
> > >
> > > Probably I'd add an inode state flag telling that the inode is
> > > queued for writeback by the flush worker, and we won't touch the
> > > dirty lists in that case; otherwise we are safe to update the
> > > current writeback list as needed. I'll work on fixing this, as
> > > while reading the code I've noticed there are other quirks in it
> > > as well. Thanks for the report!
> >
> > Thanks! While looking at the code I also saw some other paths that
> > appeared to be racy, though I haven't worked them out in detail to
> > confirm that - the locking around the inode and writeback lists is
> > tricky. What's the best way to follow up on those? Happy to post them
> > to this same thread after I spend a bit more time looking at the code.
>
> Sure, if you are aware of some other problems, just write them to this
> thread. FWIW, the stuff I've found so far:
>
> 1) The __I_DIRTY_TIME_EXPIRED setting in move_expired_inodes() can get
> lost, as there are other places doing RMW modifications of
> inode->i_state.
>
> 2) sync(2) is prone to livelocks: when we queue inodes from the
> b_dirty_time list, we don't take dirtied_when into account (and that's
> the only thing that makes sure an aggressive dirtier cannot livelock
> sync).
>
> 								Honza
> --
> Jan Kara <jack@xxxxxxxx>
> SUSE Labs, CR
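
To make the shape of the flag-based approach concrete: the idea is that
writeback_single_inode() may requeue the inode itself, but only when no
flusher thread or sync(2) currently tracks the inode through its
writeback list. Below is an illustrative sketch, not the actual patch;
the flag name I_SYNC_QUEUED and the helper itself are assumptions, while
inode_to_wb_and_lock_list() and requeue_inode() are existing
fs/fs-writeback.c internals:

/*
 * Illustrative sketch only, not a real patch. Assumes a new i_state
 * flag (hypothetically named I_SYNC_QUEUED) that the flusher thread
 * sets while an inode sits on one of its work lists, so that other
 * writeback_single_inode() callers know when requeueing is safe.
 * Lock ordering follows fs/fs-writeback.c: wb->list_lock, then i_lock.
 */
static void requeue_inode_if_safe(struct inode *inode,
				  struct writeback_control *wbc)
{
	struct bdi_writeback *wb = inode_to_wb_and_lock_list(inode);

	spin_lock(&inode->i_lock);
	if (!(inode->i_state & I_SYNC_QUEUED)) {
		/*
		 * Nobody is using inode->i_io_list to track this inode
		 * for data integrity purposes, so moving it between
		 * dirty lists cannot break sync(2)'s guarantees.
		 */
		requeue_inode(inode, wb, wbc);
	}
	spin_unlock(&inode->i_lock);
	spin_unlock(&wb->list_lock);
}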
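
Point (1) above is a classic lost-update race: move_expired_inodes()
read-modify-writes inode->i_state under wb->list_lock, while other paths
read-modify-write it under inode->i_lock, so the two updates can
interleave and one bit silently disappears. The standalone userspace
program below reproduces the pattern with pthreads; every name in it
(FLAG_A, lock_a, and so on) is a stand-in rather than a kernel
identifier:

#include <pthread.h>
#include <stdio.h>

#define FLAG_A	(1UL << 0)	/* stands in for __I_DIRTY_TIME_EXPIRED */
#define FLAG_B	(1UL << 1)	/* stands in for any other i_state bit  */
#define ROUNDS	100000

static unsigned long state;	/* stands in for inode->i_state */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* "wb->list_lock" */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* "inode->i_lock" */
static pthread_barrier_t barrier;

static void *setter(void *arg)
{
	unsigned long flag = (unsigned long)arg;
	pthread_mutex_t *lock = (flag == FLAG_A) ? &lock_a : &lock_b;

	for (int i = 0; i < ROUNDS; i++) {
		pthread_barrier_wait(&barrier);	/* state was reset to 0 */
		pthread_mutex_lock(lock);
		state |= flag;			/* RMW under the wrong lock */
		pthread_mutex_unlock(lock);
		pthread_barrier_wait(&barrier);	/* both writers are done */
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	long lost = 0;

	pthread_barrier_init(&barrier, NULL, 3);
	pthread_create(&a, NULL, setter, (void *)FLAG_A);
	pthread_create(&b, NULL, setter, (void *)FLAG_B);

	for (int i = 0; i < ROUNDS; i++) {
		state = 0;
		pthread_barrier_wait(&barrier);	/* let both threads RMW */
		pthread_barrier_wait(&barrier);	/* wait for them */
		if (state != (FLAG_A | FLAG_B))
			lost++;	/* one RMW overwrote the other */
	}
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("lost updates in %d rounds: %ld\n", ROUNDS, lost);
	return 0;
}

Built with gcc -pthread, this usually reports a nonzero count (how often
depends on timing and core count), which is why every inode->i_state
update has to happen under one and the same lock.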
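
And point (2) concerns the guard that normally prevents exactly that
livelock: when sync(2) starts it records a timestamp, and only inodes
whose dirtied_when lies before that cutoff get queued, so a task
redirtying inodes in a loop cannot keep extending sync's workload. A
minimal sketch of that cutoff logic follows; it is a simplification with
a made-up function name, not the mainline move_expired_inodes():

/*
 * Simplified sketch of the livelock guard, not mainline code. The
 * dirty list is kept sorted by ->dirtied_when with the oldest inode
 * at the tail, so the scan can stop at the first inode dirtied after
 * the cutoff taken when sync(2) started. Jan's point (2) is that the
 * b_dirty_time queueing path lacked an equivalent check.
 */
static void queue_expired_inodes(struct list_head *dirty,
				 struct list_head *dispatch,
				 unsigned long cutoff)
{
	struct inode *inode;

	while (!list_empty(dirty)) {
		inode = list_last_entry(dirty, struct inode, i_io_list);
		if (time_after(inode->dirtied_when, cutoff))
			break;	/* dirtied after sync started, leave it */
		list_move(&inode->i_io_list, dispatch);
	}
}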