Re: [PATCH v2 05/18] xfs: Add xfs_break_layouts() to the inode eviction path

On Thu 29-09-22 16:33:27, Dan Williams wrote:
> Jan Kara wrote:
> > On Mon 26-09-22 09:54:07, Dave Chinner wrote:
> > > > I'd be more worried about stuff like vmsplice() that can add file pages
> > > > into a pipe without holding the inode alive in any way and keeping them
> > > > there for an arbitrarily long time. Didn't we want to add FOLL_LONGTERM
> > > > to gup executed from vmsplice() to avoid issues like this?
> > > 
> > > Yes, ISTR that was part of the plan - use FOLL_LONGTERM to ensure
> > > FSDAX can't run operations that pin pages but don't take fs
> > > references. I think that's how we prevented RDMA users from pinning
> > > FSDAX direct mapped storage media in this way. It does not, however,
> > > prevent the above "short term" GUP UAF situation from occurring.
> > 
> > If what I wrote above is correct, then I understand and agree.
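
(For reference, the FOLL_LONGTERM enforcement we're talking about has
roughly the shape below. This is a from-memory sketch of the mm/gup.c
behavior in kernels of this era, not a verbatim quote of the tree, and
gup_check_fsdax() is a made-up wrapper name; FOLL_LONGTERM, vma_is_fsdax()
and -EOPNOTSUPP are the real identifiers.)

    /*
     * Illustrative sketch, not a verbatim quote of mm/gup.c: long-term
     * pins are refused on fsdax VMAs, which is why RDMA-style pins
     * cannot target fsdax-backed mappings.  gup_check_fsdax() is a
     * made-up name; FOLL_LONGTERM and vma_is_fsdax() are real.
     */
    static int gup_check_fsdax(struct vm_area_struct *vma,
                               unsigned int gup_flags)
    {
            if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
                    return -EOPNOTSUPP;     /* refuse the long-term pin */
            return 0;
    }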
> > 
> > > > I agree that freeing a VMA while there are pinned pages is ...
> > > > inconvenient. But that is just how gup has worked since the beginning -
> > > > the moment you have a struct page reference, you completely forget about
> > > > the mapping you've used to get to the page. So anything can happen with
> > > > the mapping after that moment. And in the case of pages mapped by
> > > > multiple processes I can easily see that one of the processes decides to
> > > > unmap the page (and it may well be the one that initially acquired the
> > > > page references) while others still keep accessing the page using page
> > > > references stored in some internal structure (RDMA anyone?).
> > > 
> > > Yup, and this is why RDMA on FSDAX using this method of pinning pages
> > > will end up corrupting data and filesystems, hence FOLL_LONGTERM
> > > preventing most of these situations from even arising. But that's a
> > > workaround, not a long term solution that allows RDMA to be run on
> > > FSDAX managed storage media.
> > > 
> > > I said on #xfs a few days ago:
> > > 
> > > [23/9/22 10:23] * dchinner is getting deja vu over this latest round
> > > of "dax mappings don't pin the filesystem objects that own the
> > > storage media being mapped"
> > > 
> > > And I'm getting that feeling again right now...
> > > 
> > > > I think it will be rather difficult to come up
> > > > with some scheme keeping the VMA alive while there are pages pinned
> > > > without regressing userspace, which over the years has become very much
> > > > tailored to the peculiar gup behavior.
> > > 
> > > Perhaps all we should do is add a page flag for fsdax mapped pages
> > > that says GUP must pin the VMA, so only mapped pages that fall into
> > > this category take the perf penalty of VMA management.
> > 
> > Possibly. But my concern with VMA pinning was not only about performance
> > but also about applications relying on being able to unmap pages that are
> > currently pinned. At least from some processes, one of which may be the one
> > doing the original pinning. But yeah, the fact that FOLL_LONGTERM is
> > forbidden with DAX somewhat restricts the insanity we have to deal with. So
> > maybe pinning the VMA for DAX mappings might actually be a workable
> > solution.
> 
> As far as I can see, VMAs are not currently reference counted; they are
> just added to / deleted from an mm_struct, and nothing guarantees
> mapping_mapped() stays true while a page is pinned.

I agree this solution requires quite some work. But I wanted to point out
that, in principle, it would be a logically consistent and technically not
that difficult solution.
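
To make the idea concrete, a minimal sketch of what the gup side could do
(purely hypothetical - vma_pincount and gup_pin_vma() don't exist today;
vma_is_fsdax(), get_file() and refcount_inc_not_zero() are real kernel
helpers):

    /*
     * Hypothetical sketch only - no such API exists today.  The idea:
     * when gup pins an fsdax-backed page, also pin the VMA and its
     * file so the inode cannot go away while page references are live.
     * vma_pincount is an assumed new field.
     */
    static int gup_pin_vma(struct vm_area_struct *vma)
    {
            if (!vma_is_fsdax(vma))
                    return 0;               /* only fsdax pays the cost */

            /* assumed new per-VMA pin count */
            if (!refcount_inc_not_zero(&vma->vm_pincount))
                    return -EBUSY;          /* VMA already being torn down */

            get_file(vma->vm_file);         /* keep file -> inode alive */
            return 0;
    }

The unpin side would then drop the file reference with fput() once the last
page reference goes away.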
 
> I like Dave's mental model that the inode is the arbiter for the page,
> and the arbiter is not allowed to go out of scope before asserting that
> everything it granted previously has been returned.
> 
> write_inode_now() unconditionally invokes dax_writeback_mapping_range()
> when the inode is committed to going out of scope. write_inode_now() is
> allowed to sleep until all dirty mapping entries are written back. I see
> nothing wrong with additionally checking for entries with elevated page
> reference counts and doing a:
> 
>     __wait_var_event(page, dax_page_idle(page));
> 
> Since the inode is out of scope there should be no concerns with racing
> new 0 -> 1 page->_refcount transitions. Just wait for transient page
> pins to finally drain to zero, which should already be on the order of
> the wait time to complete synchronous writeback in the dirty inode
> case.

I agree this is doable but there's the nasty side effect that inode reclaim
may block for an arbitrary time waiting for the page pins to be released. If
the application that has pinned the page requires a __GFP_FS memory
allocation to get to the point where it releases the page, we even have a
deadlock possibility. So it's better than the UAF issue but still not ideal.
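
For concreteness, the proposed eviction-time wait could look roughly like
the helper below. It's a sketch built around the one-liner quoted above;
dax_page_idle() is from the series under discussion, and the walk over the
mapping's DAX entries that would find each page is elided:

    /*
     * Sketch of the proposed wait, built around the quoted one-liner.
     * Called once the inode is committed to eviction, so no new
     * 0 -> 1 page->_refcount transitions can race in.
     */
    static void dax_wait_page_idle(struct page *page)
    {
            /* sleeps until the pin holder calls wake_up_var(page) */
            __wait_var_event(page, dax_page_idle(page));
    }

The pin holder would need a matching wake_up_var(page) when the page
reference count drops back to the idle value.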

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


