On Wed, Dec 2, 2015 at 1:55 PM, Toshi Kani <toshi.kani@xxxxxxx> wrote:
> On Wed, 2015-12-02 at 12:54 -0800, Dan Williams wrote:
>> On Wed, Dec 2, 2015 at 1:37 PM, Toshi Kani <toshi.kani@xxxxxxx> wrote:
>> > On Wed, 2015-12-02 at 11:57 -0800, Dan Williams wrote:
>> [..]
>> > > The whole point of __get_user_pages_fast() is to avoid the overhead of
>> > > taking the mm semaphore to access the vma. _PAGE_SPECIAL simply tells
>> > > __get_user_pages_fast() that it needs to fall back to the
>> > > __get_user_pages() slow path.
>> >
>> > I see. Then, I think gup_huge_pmd() can simply return 0 when !pfn_valid(),
>> > instead of VM_BUG_ON.
>>
>> Is pfn_valid() a reliable check? It seems to be based on a max_pfn
>> per node... what happens when pmem is located below that point? I
>> haven't been able to convince myself that we won't get false
>> positives, but maybe I'm missing something.
>
> I believe we use the version of pfn_valid() in linux/mmzone.h.

Talking this over with Dave, we came to the conclusion that it would be
safer to be explicit about the pmd not being mapped. He points out that
unless a platform can guarantee that persistent memory is always section
aligned, we might get false positive pfn_valid() indications.

Given that the get_user_pages_fast() path is arch specific, we can simply
add an arch-specific pmd bit and not worry about generically enabling a
"pmd special" bit for now.
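For illustration only, here is a rough sketch of what that check might
look like in the x86 fast gup path. It assumes a hypothetical
pmd_special() helper (mirroring pte_special() and the existing
_PAGE_SPECIAL handling in gup_pte_range()); the actual bit name and
plumbing are not settled by this thread:

	/*
	 * Hypothetical sketch, not a real patch: test an arch-specific
	 * "pmd special" software bit on the huge pmd.  Returning 0 makes
	 * __get_user_pages_fast() stop and lets the caller fall back to
	 * the __get_user_pages() slow path, which takes the mm semaphore
	 * and can inspect the vma.
	 */
	static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
			unsigned long end, int write,
			struct page **pages, int *nr)
	{
		if (pmd_special(pmd))	/* assumed arch helper */
			return 0;	/* punt to the slow path */

		/* ... existing permission checks and head/tail page
		 * refcounting logic unchanged ... */
		return 1;
	}

This keeps the lockless path honest without leaning on pfn_valid(),
which, as above, may give false positives for non-section-aligned pmem.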