Re: [PATCH] jbd jbd2: fix dio write returning EIO when try_to_release_page fails

On Tue, 2008-08-12 at 09:28 -0400, Chris Mason wrote:
> On Mon, 2008-08-11 at 15:25 +0900, Hisashi Hifumi wrote:
> > >> >> >I am wondering why we need stronger invalidate guarantees for DIO ->
> > >> >> >invalidate_inode_pages2_range(), which forces the page to be removed from
> > >> >> >the page cache. If a bh is busy due to ext3 writeout,
> > >> >> >journal_try_to_free_buffers() could return a different error number (EBUSY)
> > >> >> >to try_to_releasepage() (instead of EIO).  In that case, could we just
> > >> >> >leave the page in the cache, clear PageUptodate() (to force a later buffered
> > >> >> >read to go to disk), and have invalidate_complete_page2() return
> > >> >> >successfully? Any issue with this approach?
> > >> >> 
> > >> >> My idea is that journal_try_to_free_buffers returns EBUSY if it fails due to
> > >> >> a busy bh, and dio write falls back to a buffered write. This is easy to fix.
> > >> >> 
> > >> >> 
> > >> >
> > >> >What about the invalidates done after the DIO has already run
> > >> >non-buffered?
> > >> 
> > >> Dio write falls back to buffered IO when writing to a hole on ext3, I
> > >> think. I want to apply this mechanism to fix this issue. When
> > >> try_to_release_page fails on a page due to a busy bh, dio write does a
> > >> buffered write, sync_page_range, and wait_on_page_writeback, then
> > >> invalidates the page cache to preserve dio semantics. Even if the page
> > >> invalidation that is carried out after wait_on_page_writeback fails,
> > >> there is no inconsistency between the disk and the page cache.
> > >> 
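For reference, the fallback being described would look roughly like the sketch below. It is illustrative only: the helper name is made up, the signatures are from memory, and the -EBUSY return from the pre-write invalidate is the proposed change, not current behaviour.

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/uio.h>

/*
 * Sketch of the proposed control flow: if the pre-write invalidate fails
 * because journal_try_to_free_buffers() could not drop a busy bh
 * (reported as -EBUSY instead of -EIO), redo the write through the page
 * cache, sync it, and invalidate once more, so O_DIRECT semantics are
 * preserved.
 */
static ssize_t dio_write_or_fallback(struct kiocb *iocb,
				     const struct iovec *iov,
				     unsigned long nr_segs, loff_t pos)
{
	struct file *file = iocb->ki_filp;
	struct address_space *mapping = file->f_mapping;
	struct inode *inode = mapping->host;
	size_t count = iov_length(iov, nr_segs);
	ssize_t written;

	written = generic_file_direct_write(iocb, iov, &nr_segs, pos,
					    &iocb->ki_pos, count, count);
	if (written != -EBUSY)
		return written;		/* success, or a real error */

	/* Fall back to buffered IO, as dio already does for holes on ext3. */
	written = generic_file_buffered_write(iocb, iov, nr_segs, pos,
					      &iocb->ki_pos, count, 0);
	if (written <= 0)
		return written;

	/* Flush, wait, then try once more to drop the pages.  Even if this
	 * invalidate fails, disk and page cache now agree, so there is no
	 * inconsistency. */
	sync_page_range(inode, mapping, pos, written);
	invalidate_inode_pages2_range(mapping, pos >> PAGE_CACHE_SHIFT,
				      (pos + written - 1) >> PAGE_CACHE_SHIFT);
	return written;
}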
> > >
> > >Sorry, I'm sure I wasn't very clear, I was referencing this code from
> > >mm/filemap.c:
> > >
> > >        written = mapping->a_ops->direct_IO(WRITE, iocb, iov, pos, *nr_segs);
> > >
> > >        /*
> > >         * Finally, try again to invalidate clean pages which might have been
> > >         * cached by non-direct readahead, or faulted in by get_user_pages()
> > >         * if the source of the write was an mmap'ed region of the file
> > >         * we're writing.  Either one is a pretty crazy thing to do,
> > >         * so we don't support it 100%.  If this invalidation
> > >         * fails, tough, the write still worked...
> > >         */
> > >        if (mapping->nrpages) {
> > >                invalidate_inode_pages2_range(mapping,
> > >                                              pos >> PAGE_CACHE_SHIFT, end);
> > >        }
> > >
> > >If this second invalidate fails during a DIO write, we'll have up to
> > >date pages in cache that don't match the data on disk.  It is unlikely
> > >to fail because the conditions that make jbd unable to free a buffer are
> > >rare, but it can still happen with the right combination of mmap usage.
> > >
> > >The good news is the second invalidate doesn't make O_DIRECT return
> > >-EIO.  But, it sounds like fixing do_launder_page to always call into
> > >the FS can fix all of these problems.  Am I missing something?
> > >
> > 
> > My approach does not implement do_launder_page for ext3.
> > That would require modifying the VFS.
> > 
> > My patch is as follows:
> 
> Sorry, I'm still not sure why the do_launder_page implementation is a
> bad idea.  Clearly Mingming spent quite some time on it in the past, but
> given that it could provide a hook for the FS to do expensive operations
> to make the page really go away, why not do it?
> 

> As far as I can tell, the only current users are afs, nfs and fuse.  Pushing
> down the PageDirty check to those filesystems should be trivial.
> 
> 

I thought about your suggestion before; there should be no problem with
pushing the PageDirty check down to the underlying filesystems, roughly
along the lines of the sketch below.
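(Untested, just to show the shape of it; ext3_launder_page is a made-up name and the ext3 side is only a guess.)

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/buffer_head.h>

/* mm/truncate.c: always give the filesystem a chance, and let the few
 * existing ->launder_page users (afs, nfs, fuse) do their own PageDirty
 * test. */
static int do_launder_page(struct address_space *mapping, struct page *page)
{
	if (page->mapping != mapping || mapping->a_ops->launder_page == NULL)
		return 0;
	return mapping->a_ops->launder_page(page);
}

/* A possible ext3 hook (hypothetical): wait for the ordered-mode
 * writeback to finish, then try to drop the buffers via
 * ext3_releasepage() -> journal_try_to_free_buffers(), so the later
 * removal from the page cache has a chance to succeed. */
static int ext3_launder_page(struct page *page)
{
	wait_on_page_writeback(page);
	if (page_has_buffers(page) && !try_to_release_page(page, GFP_KERNEL))
		return -EBUSY;
	return 0;
}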

My concern is that even if we wait in launder_page() for page writeback
(started by ext3_ordered_writepage()) to complete, a wait that has in
fact already been done by the earlier DIO ->filemap_write_and_wait(),
ext3_ordered_writepage() may still hold a reference to the bh, so the
later journal_try_to_free_buffers() could still fail because of that.

>        ->ext3_ordered_writepage()
>          walk_page_buffers() <- take a bh ref
>          block_write_full_page() <- unlock_page
>               : <- end_page_writeback
>                 : <- race! (dio write -> try_to_release_page fails)

Here is the window.

>                  walk_page_buffers() <- release the bh ref


And we would need some way for ext3_ordered_writepage() to notify the DIO
code that it is done with those buffers. That is the hard way, as Jan
mentioned.

> With that said, I don't have strong feelings against falling back to
> buffered IO when the invalidate fails.  
> 
> 
It seems a little odd that we have to fall back to buffered IO in this
case. The pages are all flushed; DIO just wants to make sure that the
buffers are removed from the lists of the journal transactions that still
hold them. That has been done; the only remaining reason for DIO to fail
is that someone else has not released the bh yet.

The current code insists that all the buffers be freed and the pages be
removed from the page cache, in order to force later reads to come from
disk. I am not sure why we can't just leave the page in the cache and
clear its uptodate flag, without dropping the page's reference count,
something like the sketch below. I think DIO should be able to proceed
with its IO in this case...
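(Untested sketch, just to make the question concrete; locking is omitted and the helper name is made up. It follows what invalidate_complete_page2() does today, except for the busy-buffer branch.)

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

/* If the buffers cannot be freed but the page is already clean and
 * written back, keep the page in the cache, clear PG_uptodate so a later
 * buffered read goes to disk, and report success to
 * invalidate_inode_pages2_range().  The real invalidate_complete_page2()
 * also holds the page lock and the mapping tree lock, omitted here. */
static int invalidate_page_keep_in_cache(struct address_space *mapping,
					 struct page *page)
{
	if (page->mapping != mapping || PageDirty(page))
		return 0;			/* still a hard failure */

	if (PagePrivate(page) && !try_to_release_page(page, GFP_KERNEL)) {
		/* A bh is still referenced (e.g. by ext3_ordered_writepage()),
		 * but the data is on disk: keep the page, just make sure
		 * nobody trusts its contents any more. */
		ClearPageUptodate(page);
		return 1;
	}

	/* Buffers were freed: remove the page as the current code does. */
	remove_from_page_cache(page);
	ClearPageUptodate(page);
	page_cache_release(page);
	return 1;
}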


