On Thu 27-09-18 23:21:23, Souptick Joarder wrote:
> vm_insert_kmem_page is similar to vm_insert_page and will
> be used by drivers to map kernel (kmalloc/vmalloc/pages)
> allocated memory to a user vma.
>
> Previously, vm_insert_page was used both in page fault
> handler context and outside it. When vm_insert_page is used
> in page fault handler context, each driver has to map errno
> to a VM_FAULT_CODE in its own way. But as part of the
> vm_fault_t migration, all the page fault handlers were
> cleaned up to use the new vmf_insert_page. Going forward,
> vm_insert_page will be removed by converting it to
> vmf_insert_page.
>
> But there are places where vm_insert_page is used outside
> page fault handler context, and converting those to
> vmf_insert_page is not a good approach, as those drivers
> would end up with new VM_FAULT_CODE-to-errno conversion
> code, making each user more complex.
>
> So the new vm_insert_kmem_page can be used to map kernel
> memory to a user vma outside page fault handler context.
>
> In short, vmf_insert_page will be used in page fault handler
> context and vm_insert_kmem_page will be used to map kernel
> memory to a user vma outside page fault handler context.
>
> We will slowly convert all the users of vm_insert_page to
> vm_insert_kmem_page after this API is available in Linus's tree.

In general I do not like patches adding new exports/functionality
without any user added at the same time. I am not going to look at the
implementation right now, but the above opens more questions than it
gives answers. Why do we have to distinguish #PF from other paths?
-- 
Michal Hocko
SUSE Labs