Re: [PATCH 05/22] lmb: Add lmb_reserve_area/lmb_free_area


 



So promote them to __meminit...

"Yinghai" <yinghai.lu@xxxxxxxxxx> wrote:

>On 05/10/2010 12:10 AM, Benjamin Herrenschmidt wrote:
>> On Sat, 2010-05-08 at 08:17 -0700, Yinghai Lu wrote:
>>> They will check if the region array is big enough.
>>>
>>> __check_and_double_region_array will try to double the region array if the
>>> array does not have enough spare slots.  The old array is copied to the new one.
>>>
>>> Arch code should set lmb.default_alloc_limit accordingly, so the new array is at
>>> an accessible address.
>> 
>> More issues...
>> 
>>> +static void __init __check_and_double_region_array(struct lmb_region *type,
>>> +			 struct lmb_property *static_region)
>>> +{
>>> +	u64 size, mem;
>>> +	struct lmb_property *new, *old;
>>> +	unsigned long rgnsz = type->nr_regions;
>>> +
>>> +	/* Do we have enough slots left ? */
>>> +	if ((rgnsz - type->cnt) > 2)
>>> +		return;
>>> +
>>> +	old = type->region;
>>> +	/* Double the array size */
>>> +	size = sizeof(struct lmb_property) * rgnsz * 2;
>>> +
>>> +	mem = __lmb_alloc_base(size, sizeof(struct lmb_property), lmb.default_alloc_limit);
>>> +	if (mem == 0)
>>> +		panic("can not find more space for lmb.reserved.region array");
>> 
>> Now, that is not right because we do memory hotplug. Thus lmb_add() must
>> be able to deal with things running past LMB init.
>> 
>> slab_is_available() will do the job for now, unless somebody has bootmem
>> and tries to lmb_add() memory while bootmem is active, but screw that
>> for now. See the code I'll post tonight.
>> 
>>> +	new = __va(mem);
>>> +	/* Copy old to new */
>>> +	memcpy(&new[0], &old[0], sizeof(struct lmb_property) * rgnsz);
>>> +	memset(&new[rgnsz], 0, sizeof(struct lmb_property) * rgnsz);
>>> +
>>> +	memset(&old[0], 0, sizeof(struct lmb_property) * rgnsz);
>>> +	type->region = new;
>>> +	type->nr_regions = rgnsz * 2;
>>> +	printk(KERN_DEBUG "lmb.reserved.region array is doubled to %ld at [%llx - %llx]\n",
>>> +		type->nr_regions, mem, mem + size - 1);
>>> +
>>> +	/* Free old one ?*/
>>> +	if (old != static_region)
>>> +		lmb_free(__pa(old), sizeof(struct lmb_property) * rgnsz);
>>> +}
>> 
>> Similar comment, don't bother if slab is available.
>> 
>>> +void __init lmb_add_memory(u64 start, u64 end)
>>> +{
>>> +	lmb_add_region(&lmb.memory, start, end - start);
>>> +	__check_and_double_region_array(&lmb.memory, &lmb_memory_region[0]);
>>> +}
>> 
>> So you duplicate lmb_add() gratuitously?
>> 
>>> +void __init lmb_reserve_area(u64 start, u64 end, char *name)
>>> +{
>>> +	if (start == end)
>>> +		return;
>>> +
>>> +	if (WARN_ONCE(start > end, "lmb_reserve_area: wrong range [%#llx, %#llx]\n", start, end))
>>> +		return;
>>> +
>>> +	lmb_add_region(&lmb.reserved, start, end - start);
>>> +	__check_and_double_region_array(&lmb.reserved, &lmb_reserved_region[0]);
>>> +}
>> 
>> And lmb_reserve() ?
>> 
>> Do we want to end up with 5 copies of the same API with subtle
>> differences just for fun ?
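One way to avoid that duplication, sketched as a userspace toy (names and fixed-size storage are invented for illustration, not the patch): keep a single add path, and make the [start, end) variant a thin wrapper that only validates the range and converts it to (base, size) before delegating.

```c
#include <stdio.h>

typedef unsigned long long u64;

#define MAX_REGIONS 8
static struct { u64 base, size; } reserved[MAX_REGIONS];
static unsigned long reserved_cnt;

/* The one real add path -- playing the role of lmb_reserve(). */
static void reserve(u64 base, u64 size)
{
	if (reserved_cnt < MAX_REGIONS) {
		reserved[reserved_cnt].base = base;
		reserved[reserved_cnt].size = size;
		reserved_cnt++;
	}
}

/*
 * The [start, end) wrapper adds only range validation and the unit
 * conversion; the insertion itself is delegated, so there is a single
 * copy of the add logic to maintain.
 */
static void reserve_area(u64 start, u64 end)
{
	if (start == end)
		return;
	if (start > end) {
		fprintf(stderr, "reserve_area: wrong range [%#llx, %#llx]\n",
			start, end);
		return;
	}
	reserve(start, end - start);
	/* the kernel version would run the array-doubling check here */
}
```

With this shape, lmb_reserve_area() cannot drift out of sync with lmb_reserve(), which is the maintenance concern being raised.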
>
>those functions have __init markers and can only be used during the boot stage, so no need to worry about hotplug mem.
>
>what I do is: use the current lmb code for x86, and keep the effect on the original lmb users to a minimum. (should be near 0)
>
>YH

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
