On Mon, Nov 29, 2010 at 10:23:11AM +0000, Mel Gorman wrote:
> > > > @@ -353,7 +353,7 @@ static inline unsigned long pmd_page_vad
> > > >  * Currently stuck as a macro due to indirect forward reference to
> > > >  * linux/mmzone.h's __section_mem_map_addr() definition:
> > > >  */
> > > > -#define pmd_page(pmd) pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT)
> > > > +#define pmd_page(pmd) pfn_to_page((pmd_val(pmd) & PTE_PFN_MASK) >> PAGE_SHIFT)
> > > >
> > >
> > > Why is it now necessary to use PTE_PFN_MASK?
> >
> > Just for the NX bit, which couldn't be set before the pmd could be
> > marked PSE.
> >
>
> Sorry, I'm still missing something. PTE_PFN_MASK is this:
>
> #define PTE_PFN_MASK ((pteval_t)PHYSICAL_PAGE_MASK)
> #define PHYSICAL_PAGE_MASK (((signed long)PAGE_MASK) & __PHYSICAL_MASK)
>
> I'm not seeing how PTE_PFN_MASK affects the NX bit (bit 63).

It simply clears it by doing & 0000...; otherwise bit 51 would remain
erroneously set on the pfn passed to pfn_to_page. Clearing bit 63 wasn't
needed before because bit 63 couldn't be set on a non-huge pmd.
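
For what it's worth, here is a standalone userspace sketch (not kernel
code) of why the mask has to be applied before the shift. The constants
mirror the x86_64 layout discussed above as I understand it (PAGE_SHIFT
12, a 46-bit __PHYSICAL_MASK, PSE at bit 7, NX at bit 63); the pfn
0x12345 is made up, and the (signed long) cast from PHYSICAL_PAGE_MASK
is dropped since everything here is already 64-bit:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT      12
#define PAGE_MASK       (~((1ULL << PAGE_SHIFT) - 1))
#define __PHYSICAL_MASK ((1ULL << 46) - 1)   /* assumed 46-bit physical space */
#define PTE_PFN_MASK    ((uint64_t)(PAGE_MASK & __PHYSICAL_MASK))
#define _PAGE_PSE       (1ULL << 7)
#define _PAGE_NX        (1ULL << 63)

int main(void)
{
	/* hypothetical huge pmd: pfn 0x12345 with PSE and NX set */
	uint64_t pmd = (0x12345ULL << PAGE_SHIFT) | _PAGE_PSE | _PAGE_NX;

	uint64_t old_pfn = pmd >> PAGE_SHIFT;                  /* old pmd_page() argument */
	uint64_t new_pfn = (pmd & PTE_PFN_MASK) >> PAGE_SHIFT; /* patched pmd_page() argument */

	/* NX at bit 63 of the pmd lands on bit 51 of the pfn after >> PAGE_SHIFT */
	printf("unmasked pfn %#llx, bit 51 %s\n", (unsigned long long)old_pfn,
	       (old_pfn >> 51) & 1 ? "set" : "clear");
	printf("masked pfn   %#llx, bit 51 %s\n", (unsigned long long)new_pfn,
	       (new_pfn >> 51) & 1 ? "set" : "clear");
	return 0;
}

With this made-up pmd the unmasked shift yields 0x8000000012345 (bit 51
set), while masking with PTE_PFN_MASK first yields the intended 0x12345.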