On Tue, 15 Jul 2014, Vlastimil Babka wrote:
> On 07/15/2014 12:31 PM, Hugh Dickins wrote:
> > f00cdc6df7d7 ("shmem: fix faulting into a hole while it's punched") was
> > buggy: Sasha sent a lockdep report to remind us that grabbing i_mutex in
> > the fault path is a no-no (write syscall may already hold i_mutex while
> > faulting user buffer).
> >
> > We tried a completely different approach (see following patch) but that
> > proved inadequate: good enough for a rational workload, but not good
> > enough against trinity - which forks off so many mappings of the object
> > that contention on i_mmap_mutex while hole-puncher holds i_mutex builds
> > into serious starvation when concurrent faults force the puncher to fall
> > back to single-page unmap_mapping_range() searches of the i_mmap tree.
> >
> > So return to the original umbrella approach, but keep away from i_mutex
> > this time. We really don't want to bloat every shmem inode with a new
> > mutex or completion, just to protect this unlikely case from trinity.
> > So extend the original with wait_queue_head on stack at the hole-punch
> > end, and wait_queue item on the stack at the fault end.
>
> Hi, thanks a lot, I will definitely test it soon, although my reproducer
> is rather limited - it already works fine with the current kernel.
> Trinity will be more useful here.

Yes, 2/2 (minus the page->swap addition) already proved good enough for
your (more realistic than trinity) testcase, and for mine. And 1/2 (minus
the new waiting) already proved good enough for you too, just more awkward
to backport way back.

I agree that it's trinity we most need, to check that I didn't mess up
1/2 - though your testing is welcome too, thanks.

> But there's something that caught my eye so I thought I would raise the
> concern now.

Thank you.

> > @@ -760,7 +760,7 @@ static int shmem_writepage(struct page *
> >  		spin_lock(&inode->i_lock);
> >  		shmem_falloc = inode->i_private;
>
> Without ACCESS_ONCE, can shmem_falloc potentially become an alias on
> inode->i_private and later become re-read outside of the lock?

No, it could be re-read inside the locked section (which is okay since the
locking ensures the same value would be re-read each time), but it cannot
be re-read after the unlock. The unlock guarantees that (whereas an
assignment after the unlock might be moved up before the unlock).

I searched for a simple example (preferably not in code written by me!)
to convince you. I thought it would be easy to find an example of

	spin_lock(&lock);
	thing_to_free = whatever;
	spin_unlock(&lock);
	if (thing_to_free)
		free(thing_to_free);

but everything I hit upon was actually a little more complicated than
that (e.g. involving whatever(), or setting whatever = NULL after), and
therefore less convincing. Please hunt around to convince yourself.
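To make that concrete, here is a condensed, self-contained sketch of the
shmem_writepage() pattern under discussion (reassembled for illustration,
with an assumed struct shmem_falloc layout - not a verbatim copy of
mm/shmem.c): the pointer loaded from inode->i_private is only dereferenced
between the lock and the unlock, and the unlock's barrier stops the
compiler from re-deriving it from inode->i_private afterwards, which is
why no ACCESS_ONCE is needed on the load.

	#include <linux/fs.h>
	#include <linux/wait.h>
	#include <linux/spinlock.h>

	/* Assumed shape of what hangs off inode->i_private during fallocate */
	struct shmem_falloc {
		wait_queue_head_t *waitq;	/* non-NULL only for hole-punch */
		pgoff_t start;
		pgoff_t next;
		pgoff_t nr_unswapped;
	};

	static void writepage_check_sketch(struct inode *inode, pgoff_t index)
	{
		struct shmem_falloc *shmem_falloc;

		spin_lock(&inode->i_lock);
		shmem_falloc = inode->i_private;
		/*
		 * The compiler may re-read inode->i_private here rather than
		 * cache it, but only while i_lock is held, where its value is
		 * stable because the hole-puncher also takes i_lock to set
		 * and clear it.
		 */
		if (shmem_falloc &&
		    !shmem_falloc->waitq &&
		    index >= shmem_falloc->start &&
		    index < shmem_falloc->next)
			shmem_falloc->nr_unswapped++;
		spin_unlock(&inode->i_lock);
		/*
		 * spin_unlock() has release semantics and acts as a compiler
		 * barrier: none of the accesses above may be deferred past
		 * it, so shmem_falloc cannot be recomputed from
		 * inode->i_private once the lock is dropped.
		 */
	}

The same reasoning covers the fault-side read of inode->i_private quoted
below.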
> >  		if (shmem_falloc &&
> > -		    !shmem_falloc->mode &&
> > +		    !shmem_falloc->waitq &&
> >  		    index >= shmem_falloc->start &&
> >  		    index < shmem_falloc->next)
> >  			shmem_falloc->nr_unswapped++;
...
> >  	if (unlikely(inode->i_private)) {
> >  		struct shmem_falloc *shmem_falloc;
> >
> >  		spin_lock(&inode->i_lock);
> >  		shmem_falloc = inode->i_private;
>
> Same here.

Same here :)

> > -		if (!shmem_falloc ||
> > -		    shmem_falloc->mode != FALLOC_FL_PUNCH_HOLE ||
> > -		    vmf->pgoff < shmem_falloc->start ||
> > -		    vmf->pgoff >= shmem_falloc->next)
> > -			shmem_falloc = NULL;
> > -		spin_unlock(&inode->i_lock);
> > -		/*
> > -		 * i_lock has protected us from taking shmem_falloc seriously
> > -		 * once return from shmem_fallocate() went back up that stack.
> > -		 * i_lock does not serialize with i_mutex at all, but it does
> > -		 * not matter if sometimes we wait unnecessarily, or sometimes
> > -		 * miss out on waiting: we just need to make those cases rare.
> > -		 */
> > -		if (shmem_falloc) {
> > +		if (shmem_falloc &&
> > +		    shmem_falloc->waitq &&
>
> Here it's operating outside of lock.

No, it's inside the lock: just easier to see from the patched source than
from the patch itself.

Hugh
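For readers without the patched tree at hand, here is a rough sketch of
the fault-side flow after this patch (assumed names and simplified
handling - not a verbatim copy of mm/shmem.c), showing that both the waitq
test and the range test sit inside inode->i_lock, with the wait_queue
entry on the faulting task's stack while the wait_queue_head lives on the
hole-puncher's stack:

	#include <linux/fs.h>
	#include <linux/mm.h>
	#include <linux/wait.h>
	#include <linux/sched.h>

	/* Assumed shape, as in the previous sketch */
	struct shmem_falloc {
		wait_queue_head_t *waitq;	/* non-NULL only for hole-punch */
		pgoff_t start;
		pgoff_t next;
		pgoff_t nr_unswapped;
	};

	static int fault_wait_sketch(struct vm_area_struct *vma,
				     struct vm_fault *vmf)
	{
		struct inode *inode = file_inode(vma->vm_file);

		if (unlikely(inode->i_private)) {
			struct shmem_falloc *shmem_falloc;

			spin_lock(&inode->i_lock);
			shmem_falloc = inode->i_private;
			/* every test below is made while i_lock is held */
			if (shmem_falloc &&
			    shmem_falloc->waitq &&
			    vmf->pgoff >= shmem_falloc->start &&
			    vmf->pgoff < shmem_falloc->next) {
				wait_queue_head_t *waitq = shmem_falloc->waitq;
				DEFINE_WAIT(shmem_fault_wait);	/* on our stack */

				prepare_to_wait(waitq, &shmem_fault_wait,
						TASK_UNINTERRUPTIBLE);
				spin_unlock(&inode->i_lock);
				schedule();	/* hole-puncher wakes all waiters */

				/*
				 * waitq points into the puncher's stack: retake
				 * i_lock so that finish_wait() cannot race with
				 * the puncher's final wake_up_all().
				 */
				spin_lock(&inode->i_lock);
				finish_wait(waitq, &shmem_fault_wait);
				spin_unlock(&inode->i_lock);
				/* no page installed: the access faults again */
				return VM_FAULT_NOPAGE;
			}
			spin_unlock(&inode->i_lock);
		}
		return 0;	/* carry on with the normal fault path */
	}

As the comment removed by the hunk above says, occasionally waiting
unnecessarily or missing a wait does not matter; the point is only to keep
those cases rare while never dereferencing shmem_falloc outside i_lock.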