Re: [RFC] mm: gup: add helper page_try_gup_pin(page)

On Sun, 3 Nov 2019 22:09:03 -0800 John Hubbard wrote:
> On 11/3/19 8:34 PM, Hillf Danton wrote:
> ...
> >>
> >> Well, as long as we're counting bits, I've taken 21 bits (!) to track
> >> "gupers". :)  More accurately, I'm sharing 31 bits with get_page()...please
> > 
> > Would you please specify the reasoning of tracking multiple gupers
> > for a dirty page? Do you mean that it is all fine for guper-A to add
> > changes to guper-B's data without warning and vice versa?
> 
> It's generally OK to call get_user_pages() on a page more than once.

Does that mean it is generally OK to gup-pin a page that is under
writeback, and then start DMA to it behind the flusher's back, without
any warning?

> And even though we are seeing some work to reduce the number of places
> in the kernel that call get_user_pages(), there are still lots of call sites.
> That means lots of combinations and situations that could result in more
> than one gup call per page.
> 
> Furthermore, there is no mechanism, convention, documentation, nor anything
> at all that attempts to enforce "for each page, get_user_pages() may only
> be called once."

How does that bear on the data corruption that results specifically
from multiple gup references?

> 
> ...
> >>
> >> I think you must have missed the many contentious debates about the
> >> tension between gup-pinned pages, and writeback. File systems can't
> >> just ignore writeback in all cases. This patch leads to either
> >> system hangs or filesystem corruption, in the presence of long-lasting
> >> gup pins.
> > 
> > The current risk of data corruption due to writeback with long-lived
> > gup references all ignored is zeroed out by detecting gup-pinned dirty
> > pages and skipping them; that may lead to problems you mention above.
> > 
> 
> Here, I believe you're pointing out that the current situation in the
> kernel is already broken, with respect to fs interactions (especially
> writeback) with gup. Yes, you are correct, there is a problem.
> 
> > Though I doubt anything helpful about it can be expected from fs in near
> 
> Actually, fs and mm folks are working together to solve this.
> 
> > future, we have options for instance that gupers periodically release
> > their references and re-pin pages after data sync the same way as the
> > current flusher does.
> > 
> 
> That's one idea. I don't see it as viable, given the behavior of, say,
> a compute process running OpenCL jobs on a GPU that is connected via
> a network or Infiniband card--the idea of "pause" really looks more like
> "tear down the complicated multi-driver connection, writeback, then set it
> all up again", I suspect. (And if we could easily interrupt the job, we'd
> probably really be running with a page-fault-capable GPU plus an IB card
> that does ODP, plus HMM, and we wouldn't need to gup-pin anyway...)

Well, is it OK to summarize the behavior above as "data corruption in
writeback is tolerable in practice because data sync is too
expensive"?

What then is the point of writeback? Why can the writeback of
long-lived gup-pinned pages not be skipped, while data sync can be
ignored entirely?

> Anyway, this is not amenable to quick fixes, because the problem is
> a couple of missing design pieces. Which we're working on putting in.
> But meanwhile, smaller changes such as this one are just going to move
> the problems to different places, rather than solving them. So it's best
> not to do that.
> 
> thanks,
> -- 
> John Hubbard
> NVIDIA




