Re: [RFC PATCH v2 0/4] mm: reclaim zbud pages on migration and compaction


 



On 08/11/2013 07:25 PM, Minchan Kim wrote:
> +int set_pinned_page(struct pin_page_owner *owner,
> +			struct page *page, void *private)
> +{
> +	struct pin_page_info *pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
> +
> +	if (!pinfo)
> +		return -ENOMEM;
> +
> +	INIT_HLIST_NODE(&pinfo->hlist);
> +	pinfo->owner = owner;
> +
> +	pinfo->pfn = page_to_pfn(page);
> +	pinfo->private = private;
> +
> +	spin_lock(&hash_lock);
> +	hash_add(pin_page_hash, &pinfo->hlist, pinfo->pfn);
> +	spin_unlock(&hash_lock);
> +
> +	SetPinnedPage(page);
> +	return 0;
> +}

I definitely agree that we're getting to the point where we need to look
at this more generically.  We've got at least four use cases that need
to relocate memory deterministically:

1. CMA (many sub use cases)
2. Memory hot-remove
3. Memory power management
4. Runtime hugetlb-GB page allocations

Whatever we do, it _should_ be general enough to let us largely replace
PG_slab with this new bit.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .



