[PATCH 0/3, v2] mprotect() and working set sampling optimizations

Ok, people suggested splitting the change_protection() modification
out into a third patch.

This series implements an mprotect() optimization that also
helps improve the quality of working set scanning:

  - working set scanning gets faster

  - we can scan at a rate proportional to the number of touched
    (present) pages, instead of proportional to the size of the
    virtual memory range (within limits).

This is already part of numa/core, but I wanted to send it out
separately as well, to get specific feedback on the mprotect()
bits.

Thanks,

	Ingo

---
Ingo Molnar (1):
  mm: Optimize the TLB flush of sys_mprotect() and change_protection()
    users

Peter Zijlstra (2):
  mm: Count the number of pages affected in change_protection()
  sched, numa, mm: Count WS scanning against present PTEs, not virtual
    memory ranges

 include/linux/hugetlb.h |  8 ++++++--
 include/linux/mm.h      |  6 +++---
 kernel/sched/fair.c     | 37 +++++++++++++++++++++----------------
 mm/hugetlb.c            | 10 ++++++++--
 mm/mprotect.c           | 46 ++++++++++++++++++++++++++++++++++------------
 5 files changed, 72 insertions(+), 35 deletions(-)

-- 
1.7.11.7

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .