[Hotplug_sig] Re: [Lhms-devel] 2.6.11-rc2-mm2-mhp1

On Tue, 2005-02-08 at 19:41 +0900, KAMEZAWA Hiroyuki wrote: 
> Hi Dave
> 
> I can't work out what the code below does.
> --
> static unsigned long sparse_encode_mem_map(struct page *mem_map, int pnum)
> {
> return (unsigned long)(mem_map - (pnum << PFN_SECTION_SHIFT));
> }

It basically saves you doing the old __section_offset() calculation.
Let's take an example pfn: 0x00040005.  The section mask is 0xffff0000,
so the section number is 4.  The page's offset within the section is 5. 

So, if we store the mem_map normally, we'll access the pfn's page like this:

	&mem_section[4]->mem_map[5]
or
	mem_section[4]->mem_map + 5


But, that requires arithmetic to obtain both the section number (the 4,
via a shift), *and* the page's offset within the section (the 5, via a
mask).

So, with Andy's patch, what we effectively do is pretend that the
mem_map is stored starting from page 0, just like it is for a normal,
contiguous system.  We just assume that we'll never access the "virtual"
array outside the area that is actually present.  In the end, it saves
one bitmask in the calculation (the one that would obtain the 5 above).

#define pfn_to_page(pfn)					\
({	unsigned long __pfn = (pfn);				\
	__pfn_to_section(__pfn)->section_mem_map + __pfn;	\
})


-- Dave

