Re: Possible deadlock in fuse write path (Was: Re: [PATCH 0/4] Some more lock_page work..)

On Fri, Oct 16, 2020 at 12:02:21PM +0200, Miklos Szeredi wrote:
> On Thu, Oct 15, 2020 at 11:22 PM Linus Torvalds
> <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > On Thu, Oct 15, 2020 at 12:55 PM Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
> > >
> > > I am wondering how should I fix this issue. Is it enough that I drop
> > > the page lock (but keep the reference) inside the loop. And once copying
> > > from user space is done, acquire page locks for all pages (Attached
> > > a patch below).
> >
> > What is the page lock supposed to protect?
> >
> > Because whatever it protects, dropping the lock drops, and you'd need
> > to re-check whatever the page lock was there for.
> >
> > > Or dropping page lock means that there are no guarantees that this
> > > page did not get written back and removed from address space and
> > > a new page has been placed at same offset. Does that mean I should
> > > instead be looking up page cache again after copying from user
> > > space is done.
> >
> > I don't know why fuse does multiple pages to begin with. Why can't it
> > do whatever it does just one page at a time?
> >
> > But yes, you probably should look the page up again whenever you've
> > unlocked it, because it might have been truncated or whatever.
> >
> > Note that this is purely about unlocking the page, not about "after
> > copying from user space". The iov_iter_copy_from_user_atomic() part is
> > safe - if that takes a page fault, it will just do a partial copy, it
> > won't deadlock.
> >
> > So you can potentially do multiple pages, and keep them all locked,
> > but only as long as the copies are all done with that
> > "from_user_atomic()" case. Which normally works fine, since normal
> > users will write stuff that they just generated, so it will all be
> > there.
> >
> > It's only when that returns zero, and you do the fallback to pre-fault
> > in any data with iov_iter_fault_in_readable() that you need to unlock
> > _all_ pages (and once you do that, I don't see what possible advantage
> > the multi-page array can have).
> >
> > Of course, the way that code is written, it always does the
> > iov_iter_fault_in_readable() for each page - it's not written like
> > some kind of "special case fallback thing".
> 
> This was added by commit ea9b9907b82a ("fuse: implement
> perform_write") in v2.6.26 and remains essentially unchanged, AFAICS.
> So this is an old bug indeed.
> 
> So what is the page lock protecting?   I think not truncation, because
> inode_lock should be sufficient protection.
> 
> What it does after sending a synchronous WRITE and before unlocking
> the pages is set the PG_uptodate flag, but only if the complete page
> was really written, which is what the uptodate flag really says:  this
> page is in sync with the underlying fs.
> 
> So I think the page lock here is trying to protect against concurrent
> reads/faults on not uptodate pages.  I.e. until the WRITE request
> completes it is unknown whether the page was really written or not, so
> any reads must block until this state becomes known.  This logic falls
> down on already cached pages, since they start out uptodate and the
> write does not clear this flag.
> 
> So keeping the pages locked has dubious value: short writes don't seem
> to work correctly anyway. Which means that we can probably just set
> the page uptodate right after filling it from the user buffer, and
> unlock the page immediately.

Hi Miklos,

As you said, for a full page WRITE we can probably mark the page
uptodate right away and drop the page lock (keeping the reference
and sending the WRITE request to the fuse server); a rough sketch
of that is at the end of this mail. For a partial page write this
will not work, and there seem to be at least two options.

A. Either we read the page back from disk first and mark it uptodate
   (rough sketch just after this list).

B. Or we keep track of such partial writes and block any further
   reads/readpage/direct_IO on these pages until the partial write is
   complete. After that it looks like the page will be left not
   uptodate in the page cache and the reader will read it from disk.
   We are doing something similar for tracking writeback requests,
   though that is much more complicated, and we can probably design
   something simpler for these writethrough/synchronous writes.
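
For A, that would basically be what fuse_write_begin() already does for
the writeback cache case: read the page in before a partial write so it
starts out uptodate. A completely untested sketch of what that could
look like in the copy loop, assuming the struct file gets plumbed down
to it (fuse_do_readpage() is the helper fuse_write_begin() uses; error
handling trimmed):

	/*
	 * Option A (sketch only): make a not uptodate page uptodate
	 * before doing a partial copy into it, by reading it from the
	 * server first, the way fuse_write_begin() does for the
	 * writeback cache case.
	 */
	if (!PageUptodate(page) && (offset != 0 || bytes != PAGE_SIZE)) {
		err = fuse_do_readpage(file, page);	/* synchronous READ */
		if (err) {
			unlock_page(page);
			put_page(page);
			break;
		}
		/*
		 * Page now matches the server, so the partial copy
		 * below cannot expose stale data outside the copied
		 * range.
		 */
	}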

I am assuming that A will lead to a performance penalty for short
random writes, so B might be better from a performance point of
view.

Is it worth giving option B a try?
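
For reference, the full page case I mentioned at the top would be
roughly the below (again untested; based on the copy loop in
fuse_fill_write_pages(), with the surrounding fuse structures left
out):

	tmp = iov_iter_copy_from_user_atomic(page, ii, offset, bytes);
	flush_dcache_page(page);
	iov_iter_advance(ii, tmp);

	if (offset == 0 && tmp == PAGE_SIZE) {
		/*
		 * The whole page was filled from the user buffer, so it
		 * is already in sync with what the WRITE request will
		 * send: mark it uptodate and unlock right away.  The
		 * page reference from grab_cache_page_write_begin() is
		 * kept and dropped only after the WRITE request
		 * completes.
		 */
		SetPageUptodate(page);
		unlock_page(page);
	}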

Thanks
Vivek




