On 2017/10/10 2:26, Michal Hocko wrote:
> On Wed 27-09-17 13:51:09, Xishi Qiu wrote:
>> On 2017/9/26 19:00, Michal Hocko wrote:
>>
>>> On Tue 26-09-17 11:45:16, Vlastimil Babka wrote:
>>>> On 09/26/2017 11:22 AM, Xishi Qiu wrote:
>>>>> On 2017/9/26 17:13, Xishi Qiu wrote:
>>>>>>> This is still very fuzzy. What are you actually trying to achieve?
>>>>>>
>>>>>> I don't expect page faults any more after mlock.
>>>>>>
>>>>>
>>>>> Our app is something like an RT application, and a page fault may cost
>>>>> a lot of time, e.g. locking, memory reclaim, ..., so I use mlock and
>>>>> don't want page faults any more.
>>>>
>>>> Why does your app then have restricted protections when calling
>>>> mlockall(), and only later adjust them with mprotect()?
>>>
>>> Ahh, OK, I see what is going on. So you have a PROT_NONE vma at the time
>>> of mlockall and then later mprotect it to something else, and you want
>>> all that memory faulted in at mprotect time?
>>>
>>> So basically to do
>>> ---
>>> diff --git a/mm/mprotect.c b/mm/mprotect.c
>>> index 6d3e2f082290..b665b5d1c544 100644
>>> --- a/mm/mprotect.c
>>> +++ b/mm/mprotect.c
>>> @@ -369,7 +369,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
>>>  	 * Private VM_LOCKED VMA becoming writable: trigger COW to avoid major
>>>  	 * fault on access.
>>>  	 */
>>> -	if ((oldflags & (VM_WRITE | VM_SHARED | VM_LOCKED)) == VM_LOCKED &&
>>> +	if ((oldflags & (VM_WRITE | VM_LOCKED)) == VM_LOCKED &&
>>>  			(newflags & VM_WRITE)) {
>>>  		populate_vma_page_range(vma, start, end, NULL);
>>>  	}
>>>
>>
>> Hi Michal,
>>
>> My kernel is v3.10, and I missed this code. Thank you for reminding me.
>
> I guess I didn't get your answer. Does the above diff resolve your
> problem?

Hi Michal,

The upstream patch 36f881883c57941bb32d25cea6524f9612ab5a2c has already
resolved my problem. Thank you for your attention.

Thanks,
Xishi Qiu
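
To make the scenario above concrete, here is a minimal userspace sketch of
the pattern being discussed; it is illustrative only and not part of the
original thread. It reserves a PROT_NONE mapping, locks it with mlockall(),
and then makes it writable with mprotect(). On kernels carrying the
upstream commit referenced above, the mprotect() call is expected to
populate the locked pages, so the subsequent writes should not fault.

/*
 * Illustrative sketch (not from the thread): a PROT_NONE region is
 * reserved, locked via mlockall(), and later opened up with mprotect().
 * The buffer size is arbitrary.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4 * 1024 * 1024;

	/* Reserve address space with no permissions yet. */
	char *buf = mmap(NULL, len, PROT_NONE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/*
	 * Lock current and future mappings. The PROT_NONE pages cannot
	 * be touched here, so nothing is faulted in yet.
	 */
	if (mlockall(MCL_CURRENT | MCL_FUTURE)) {
		perror("mlockall");
		return EXIT_FAILURE;
	}

	/*
	 * Making the locked VMA writable is the point where the fixed
	 * kernel pre-populates the pages.
	 */
	if (mprotect(buf, len, PROT_READ | PROT_WRITE)) {
		perror("mprotect");
		return EXIT_FAILURE;
	}

	memset(buf, 0, len);	/* accesses should already be populated */
	return 0;
}

One way to check the effect is to compare the process's fault counters
(e.g. ru_minflt/ru_majflt from getrusage(RUSAGE_SELF)) before and after
the memset: with population happening at mprotect time, the counters
should stay essentially flat across the writes.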