On Wed, 2019-05-22 at 10:40 -0700, David Miller wrote:
> From: "Edgecombe, Rick P" <rick.p.edgecombe@xxxxxxxxx>
> Date: Tue, 21 May 2019 01:59:54 +0000
> 
> > On Mon, 2019-05-20 at 18:43 -0700, David Miller wrote:
> > > From: "Edgecombe, Rick P" <rick.p.edgecombe@xxxxxxxxx>
> > > Date: Tue, 21 May 2019 01:20:33 +0000
> > > 
> > > > Should it handle executing an unmapped page gracefully? Because
> > > > this change is causing that to happen much earlier. If something
> > > > was relying on a cached translation to execute something, it
> > > > could find the mapping disappear.
> > > 
> > > Does this work by not mapping any kernel mappings at the
> > > beginning, and then filling in the BPF mappings in response to
> > > faults?
> > No, nothing too fancy. It just flushes the vm mapping immediately in
> > vfree() for execute (and RO) mappings. The only thing that happens
> > around allocation time is setting a new flag to tell vmalloc to do
> > the flush.
> > 
> > The problem before was that the pages would be freed before the
> > execute mapping was flushed. So when the pages got recycled, random
> > data, sometimes coming from userspace, would be mapped as executable
> > in the kernel by the un-flushed TLB entries.
> 
> If I am to understand things correctly, there was a case where 'end'
> could be smaller than 'start' when doing a range flush. That would
> definitely kill some of the sparc64 TLB flush routines.

Ok, thanks. The patch at the beginning of this thread doesn't have that
behavior, though, and it apparently still hung.

I asked whether Meelis could test with this feature disabled and
DEBUG_PAGEALLOC on, since that flushes on every vfree() and is not new
logic, and also with a patch that logs the exact TLB flush ranges and
fault addresses on top of a kernel having the issue. Hopefully that
will shed some light.

Sorry for all the noise and speculation on this. It has been difficult
to debug remotely with the tester and developer in different time
zones.
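
For anyone following along, here is a minimal sketch of the scheme
described above from the caller's side. It assumes the
set_vm_flush_reset_perms() helper and the three-argument __vmalloc()
from the series under discussion; it is illustrative, not the exact
patch code:

    #include <linux/vmalloc.h>

    static void *alloc_exec_region(unsigned long size)
    {
            /* Allocate kernel memory mapped with execute permission. */
            void *addr = __vmalloc(size, GFP_KERNEL, PAGE_KERNEL_EXEC);

            if (!addr)
                    return NULL;

            /*
             * Flag the area so that vfree() flushes the TLB for this
             * mapping (and resets direct map permissions) before the
             * pages go back to the page allocator. This closes the
             * window where a stale TLB entry could map a recycled
             * page as executable.
             */
            set_vm_flush_reset_perms(addr);
            return addr;
    }

The point of doing it this way is that nothing special happens at
allocation time beyond setting the flag; the flush cost is paid once,
at free time, exactly where the stale-mapping hazard is.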