On Thu, Jan 9, 2025 at 3:51 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Wed, Jan 08, 2025 at 06:30:09PM -0800, Suren Baghdasaryan wrote:
> > Back when per-vma locks were introduced, vm_lock was moved out of
> > vm_area_struct in [1] because of the performance regression caused by
> > false cacheline sharing. Recent investigation [2] revealed that the
> > regression is limited to a rather old Broadwell microarchitecture and
> > even there it can be mitigated by disabling adjacent cacheline
> > prefetching, see [3].
> > Splitting a single logical structure into multiple ones leads to more
> > complicated management, extra pointer dereferences and overall less
> > maintainable code. When that split-away part is a lock, it complicates
> > things even further. With no performance benefits, there are no reasons
> > for this split. Merging the vm_lock back into vm_area_struct also allows
> > vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
> > This patchset:
> > 1. moves vm_lock back into vm_area_struct, aligning it at the cacheline
> > boundary and changing the cache to be cacheline-aligned to minimize
> > cacheline sharing;
> > 2. changes vm_area_struct initialization to mark a new vma as detached
> > until it is inserted into the vma tree;
> > 3. replaces vm_lock and the vma->detached flag with a reference counter;
> > 4. changes the vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow for
> > their reuse and to minimize call_rcu() calls.
>
> Does not clean up that reattach nonsense :-(

Oh, no. I think it does. That's why in [1] I introduce
vma_iter_store_attached() to be used on already attached vmas and to
avoid marking them attached again. I also added assertions in
vma_mark_attached()/vma_mark_detached() to avoid re-attaching or
re-detaching. Unless I misunderstood your comment?

[1] https://lore.kernel.org/all/20250109023025.2242447-5-surenb@xxxxxxxxxx/
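
To illustrate point 1 of the cover letter, the layout change is
roughly of this shape (a sketch only, not the actual patch code; the
exact field placement, cache name, and slab flags are assumptions):

	struct vm_area_struct {
		/* ... hot, read-mostly fields: vm_start, vm_end, vm_mm ... */

		/*
		 * The lock is written on every lock/unlock, so keep it on
		 * its own cacheline to avoid false sharing with the
		 * read-mostly fields above.
		 */
		struct vma_lock vm_lock ____cacheline_aligned_in_smp;

		/* ... remaining fields ... */
	};

with the slab cache created hardware-cacheline-aligned so two vmas
never share a cacheline, along the lines of:

	vm_area_cachep = kmem_cache_create("vm_area_struct",
					   sizeof(struct vm_area_struct),
					   0, /* align: let SLAB_HWCACHE_ALIGN decide */
					   SLAB_HWCACHE_ALIGN | SLAB_PANIC,
					   NULL);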
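
For the attach/detach tracking, the helpers in [1] look roughly like
this (a simplified sketch reconstructed from the description above;
see the patch for the real code):

	static inline void vma_assert_attached(struct vm_area_struct *vma)
	{
		VM_BUG_ON_VMA(vma->detached, vma);
	}

	static inline void vma_assert_detached(struct vm_area_struct *vma)
	{
		VM_BUG_ON_VMA(!vma->detached, vma);
	}

	static inline void vma_mark_attached(struct vm_area_struct *vma)
	{
		vma_assert_detached(vma);	/* catches re-attach */
		vma->detached = false;
	}

	static inline void vma_mark_detached(struct vm_area_struct *vma)
	{
		vma_assert_attached(vma);	/* catches re-detach */
		vma->detached = true;
	}

	/* Overwrite an already-attached vma without re-attaching it. */
	static inline void
	vma_iter_store_attached(struct vma_iterator *vmi,
				struct vm_area_struct *vma)
	{
		vma_assert_attached(vma);
		mas_store_prealloc(&vmi->mas, vma);
	}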