On Fri, Apr 19, 2019 at 05:45:57PM +0200, Laurent Dufour wrote:
> Hi Jerome,
>
> Thanks a lot for reviewing this series.
>
> Le 19/04/2019 à 00:48, Jerome Glisse a écrit :
> > On Tue, Apr 16, 2019 at 03:45:00PM +0200, Laurent Dufour wrote:
> > > From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> > >
> > > Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
> > > counts such that we can easily test if a VMA is changed.
> > >
> > > The calls to vm_write_begin/end() in unmap_page_range() are
> > > used to detect when a VMA is being unmapped and thus that new page
> > > faults should not be satisfied for this VMA. If the seqcount hasn't
> > > changed when the page tables are locked, this means we are safe to
> > > satisfy the page fault.
> > >
> > > The flip side is that we cannot distinguish between a vma_adjust() and
> > > the unmap_page_range() -- where with the former we could have
> > > re-checked the vma bounds against the address.
> > >
> > > The VMA's sequence counter is also used to detect changes to various
> > > VMA fields used during the page fault handling, such as:
> > >  - vm_start, vm_end
> > >  - vm_pgoff
> > >  - vm_flags, vm_page_prot
> > >  - vm_policy
> >
> > ^ All above are under mmap write lock ?
>
> Yes, changes are still made under the protection of the mmap_sem.
>
> > >  - anon_vma
> >
> > ^ This is either under mmap write lock or under page table lock
> >
> > So my question is do we need the complexity of seqcount_t for this ?
>
> The sequence counter is used to detect write operations done while a
> reader (the SPF handler) is running.
>
> The implementation is quite simple (here without the lockdep checks):
>
> static inline void raw_write_seqcount_begin(seqcount_t *s)
> {
> 	s->sequence++;
> 	smp_wmb();
> }
>
> I can't see why this is too complex here, would you elaborate on this ?
>
> > It seems that using a regular int as counter and also relying on vm_flags
> > when the vma is unmapped should do the trick.
>
> vm_flags is not enough I guess, as some operations are not impacting the
> vm_flags at all (resizing, for instance).
> Am I missing something ?
>
> > vma_delete(struct vm_area_struct *vma)
> > {
> >     ...
> >     /*
> >      * Make sure the vma is marked as invalid, i.e. neither read nor write,
> >      * so that speculative faults back off. A racing speculative fault
> >      * will either see the flags as 0 or the new seqcount.
> >      */
> >     vma->vm_flags = 0;
> >     smp_wmb();
> >     vma->seqcount++;
> >     ...
> > }
>
> Well I don't think we can safely clear the vm_flags this way when the VMA
> is unmapped, I think they are used later when cleaning is done.
>
> Later in this series, the VMA deletion is managed when the VMA is unlinked
> from the RB Tree. That is checked using the vm_rb field's value, and
> managed using RCU.
>
> > Then:
> > speculative_fault_begin(struct vm_area_struct *vma,
> >                         struct spec_vmf *spvmf)
> > {
> >     ...
> >     spvmf->seqcount = vma->seqcount;
> >     smp_rmb();
> >     spvmf->vm_flags = vma->vm_flags;
> >     if (!spvmf->vm_flags) {
> >         // Back off, the vma is dying ...
> >         ...
> >     }
> > }
> >
> > bool speculative_fault_commit(struct vm_area_struct *vma,
> >                               struct spec_vmf *spvmf)
> > {
> >     ...
> >     seqcount = vma->seqcount;
> >     smp_rmb();
> >     vm_flags = vma->vm_flags;
> >
> >     if (spvmf->vm_flags != vm_flags || seqcount != spvmf->seqcount) {
> >         // Something did change for the vma
> >         return false;
> >     }
> >     return true;
> > }
> >
> > This would also avoid the lockdep issue described below. But maybe what
> > I propose is stupid and I will see it after further reviewing things.
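[ As an aside, the write/read pairing being discussed above can be modelled
  outside the kernel. The sketch below is only a minimal userspace C analogue
  of the protocol, not the series' actual vm_write_begin()/vm_write_end() or
  SPF handler code: every name in it (spf_vma, spf_vma_write_begin/end,
  spf_fault_begin/retry) is made up for illustration, and C11 atomics/fences
  stand in for smp_wmb()/smp_rmb(). ]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct spf_vma {
	atomic_uint seqcount;	/* odd while a writer is mid-update */
	atomic_ulong vm_start;	/* fields a speculative reader may sample */
	atomic_ulong vm_end;
};

/* Writer (~ vm_write_begin): make the counter odd before touching fields. */
static void spf_vma_write_begin(struct spf_vma *v)
{
	atomic_fetch_add_explicit(&v->seqcount, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);	/* stands in for smp_wmb() */
}

/* Writer (~ vm_write_end): publish field updates before the counter goes even. */
static void spf_vma_write_end(struct spf_vma *v)
{
	atomic_fetch_add_explicit(&v->seqcount, 1, memory_order_release);
}

/* Speculative reader: snapshot the counter, back off if a write is in flight. */
static bool spf_fault_begin(struct spf_vma *v, unsigned int *seq)
{
	*seq = atomic_load_explicit(&v->seqcount, memory_order_acquire);
	return (*seq & 1) == 0;
}

/* Reader, once the "page table lock" is held: retry if the counter moved. */
static bool spf_fault_retry(struct spf_vma *v, unsigned int seq)
{
	atomic_thread_fence(memory_order_acquire);	/* stands in for smp_rmb() */
	return atomic_load_explicit(&v->seqcount, memory_order_relaxed) != seq;
}

int main(void)
{
	struct spf_vma vma = { .seqcount = 0, .vm_start = 0x1000, .vm_end = 0x9000 };
	unsigned int seq;

	/* Speculative path: sample the counter, read the fields, re-check. */
	if (spf_fault_begin(&vma, &seq)) {
		unsigned long end = atomic_load_explicit(&vma.vm_end, memory_order_relaxed);

		if (!spf_fault_retry(&vma, seq))
			printf("fault handled speculatively, vm_end=%#lx seq=%u\n", end, seq);
	}

	/* Writer path, e.g. a vma_adjust() shrinking the mapping. */
	spf_vma_write_begin(&vma);
	atomic_store_explicit(&vma.vm_end, 0x5000, memory_order_relaxed);
	spf_vma_write_end(&vma);
	return 0;
}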
>
> It's true that lockdep is quite annoying here. But it is still interesting
> to keep it in the loop to catch two subsequent write_seqcount_begin() calls
> being made in the same context (which would lead to an even sequence
> counter value while a write operation is in progress). So I think it is
> still a good thing to have lockdep available here.

Ok so I had to read everything, and I should have read everything before
asking all of the above.

It does look good in fact; what worried me in this patch is all the lockdep
avoidance, as that is usually a red flag. But after thinking long and hard
I do not see how to easily solve that one, as unmap_page_range() is in so
many different paths... So what is done in this patch is the most sane thing.

Sorry for the noise. So for this patch:

Reviewed-by: Jérôme Glisse <jglisse@xxxxxxxxxx>
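[ To make the failure mode Laurent describes above concrete (the nested
  write_seqcount_begin() case that lockdep is kept around to catch), a tiny
  standalone sketch follows; it is plain illustrative C, not anything from
  the series. ]

#include <stdio.h>

int main(void)
{
	unsigned int seq = 0;	/* even: readers assume no writer is active */

	seq++;		/* outer  write_seqcount_begin()  -> 1 (odd)  */
	seq++;		/* nested write_seqcount_begin()  -> 2 (even) */

	/*
	 * A speculative reader sampling here sees an even value, and a
	 * re-check done before the matching write_seqcount_end() calls
	 * still sees the same value, so the in-flight VMA update goes
	 * unnoticed. Lockdep's seqcount annotations flag such nesting.
	 */
	printf("counter=%u, looks like a writer is active: %s\n",
	       seq, (seq & 1) ? "yes" : "no");
	return 0;
}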