On Fri, Feb 22, 2019 at 12:25:44PM +0800, Peter Xu wrote:
> On Thu, Feb 21, 2019 at 10:53:11AM -0500, Jerome Glisse wrote:
> > On Thu, Feb 21, 2019 at 04:56:56PM +0800, Peter Xu wrote:
> > > The idea comes from a discussion between Linus and Andrea [1].

[...]

> > > diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> > > index 248ff0a28ecd..d842c3e02a50 100644
> > > --- a/arch/x86/mm/fault.c
> > > +++ b/arch/x86/mm/fault.c
> > > @@ -1483,9 +1483,7 @@ void do_user_addr_fault(struct pt_regs *regs,
> > >  	if (unlikely(fault & VM_FAULT_RETRY)) {
> > >  		bool is_user = flags & FAULT_FLAG_USER;
> > >  
> > > -		/* Retry at most once */
> > >  		if (flags & FAULT_FLAG_ALLOW_RETRY) {
> > > -			flags &= ~FAULT_FLAG_ALLOW_RETRY;
> > >  			flags |= FAULT_FLAG_TRIED;
> > >  			if (is_user && signal_pending(tsk))
> > >  				return;
> > 
> > So here you have a change in behavior: it can retry indefinitely for as
> > long as there is no signal. Don't you want to test for FAULT_FLAG_TRIED?
> 
> These first five patches do want to allow the page fault to retry as
> much as needed. "Indefinitely" seems to be a scary word, but IMHO
> this is fine for page faults, since otherwise we'll simply crash the
> program or even crash the system depending on the fault context, so
> it seems to be no worse.
> 
> For userspace programs, if anything really goes wrong (so far I
> still cannot think of a valid scenario in a bug-free system, but just
> assuming...) and it loops indefinitely, IMHO it'll just hang the buggy
> process itself rather than coredump, and the admin can simply kill
> the process to reclaim the resources, since we'll still detect signals.
> 
> Or did I misunderstand the question?

No, I think you are right, it is fine to keep retrying while there is no
signal. Maybe just add a comment that says so in so many words :) so that
people do not see it as a potential issue.

> 
> [...]
> 
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index 80bb6408fe73..4e11c9639f1b 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -341,11 +341,21 @@ extern pgprot_t protection_map[16];
> > >  #define FAULT_FLAG_ALLOW_RETRY	0x04	/* Retry fault if blocking */
> > >  #define FAULT_FLAG_RETRY_NOWAIT	0x08	/* Don't drop mmap_sem and wait when retrying */
> > >  #define FAULT_FLAG_KILLABLE	0x10	/* The fault task is in SIGKILL killable region */
> > > -#define FAULT_FLAG_TRIED	0x20	/* Second try */
> > > +#define FAULT_FLAG_TRIED	0x20	/* We've tried once */
> > >  #define FAULT_FLAG_USER		0x40	/* The fault originated in userspace */
> > >  #define FAULT_FLAG_REMOTE	0x80	/* faulting for non current tsk/mm */
> > >  #define FAULT_FLAG_INSTRUCTION  0x100	/* The fault was during an instruction fetch */
> > > 
> > > +/*
> > > + * Returns true if the page fault allows retry and this is the first
> > > + * attempt of the fault handling; false otherwise.
> > > + */
> > 
> > You should add why it returns false if it is not the first try, i.e. to
> > avoid starvation.
> 
> How about:
> 
>         Returns true if the page fault allows retry and this is the
>         first attempt of the fault handling; false otherwise.  This is
>         mostly used for places where we want to try to avoid taking
>         the mmap_sem for too long a time when waiting for another
>         condition to change, in which case we can try to be polite to
>         release the mmap_sem in the first round to avoid potential
>         starvation of other processes that would also want the
>         mmap_sem.
> 
> ?

Looks perfect to me.

Cheers,
Jérôme
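
For readers following the thread: below is a minimal sketch of the kind of
helper the quoted comment in include/linux/mm.h is documenting, reconstructed
from the quoted comment and flag definitions. The function name
fault_flag_allow_retry_first() and its body are an assumption here, not text
quoted from the patch.

	/*
	 * Sketch only: true when retry is allowed and this is the first
	 * attempt. Callers can use this to drop mmap_sem once on the first
	 * round (to avoid starving other mmap_sem waiters) and keep holding
	 * it on later retries, which is why it returns false after the
	 * first try.
	 */
	static inline bool fault_flag_allow_retry_first(unsigned int flags)
	{
		return (flags & FAULT_FLAG_ALLOW_RETRY) &&
		       !(flags & FAULT_FLAG_TRIED);
	}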