Re: [patch v2 4/4] mm, mempool: poison elements backed by page allocator

On 03/27/2015 01:50 AM, David Rientjes wrote:
> On Thu, 26 Mar 2015, Andrey Ryabinin wrote:
> 
>>> +static void check_element(mempool_t *pool, void *element)
>>> +{
>>> +       /* Mempools backed by slab allocator */
>>> +       if (pool->free == mempool_free_slab || pool->free == mempool_kfree)
>>> +               __check_element(pool, element, ksize(element));
>>> +
>>> +       /* Mempools backed by page allocator */
>>> +       if (pool->free == mempool_free_pages) {
>>> +               int order = (int)(long)pool->pool_data;
>>> +               void *addr = page_address(element);
>>> +
>>> +               __check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
>>>         }
>>>  }
>>>
>>> -static void poison_slab_element(mempool_t *pool, void *element)
>>> +static void __poison_element(void *element, size_t size)
>>>  {
>>> -       if (pool->alloc == mempool_alloc_slab ||
>>> -           pool->alloc == mempool_kmalloc) {
>>> -               size_t size = ksize(element);
>>> -               u8 *obj = element;
>>> +       u8 *obj = element;
>>> +
>>> +       memset(obj, POISON_FREE, size - 1);
>>> +       obj[size - 1] = POISON_END;
>>> +}
>>> +
>>> +static void poison_element(mempool_t *pool, void *element)
>>> +{
>>> +       /* Mempools backed by slab allocator */
>>> +       if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
>>> +               __poison_element(element, ksize(element));
>>> +
>>> +       /* Mempools backed by page allocator */
>>> +       if (pool->alloc == mempool_alloc_pages) {
>>> +               int order = (int)(long)pool->pool_data;
>>> +               void *addr = page_address(element);
>>>
>>> -               memset(obj, POISON_FREE, size - 1);
>>> -               obj[size - 1] = POISON_END;
>>> +               __poison_element(addr, 1UL << (PAGE_SHIFT + order));
>>
>> I think it would be better to use kernel_map_pages() here and in
>> check_element().
> 
> Hmm, interesting suggestion.
> 
>> This implies that poison_element()/check_element() have to be moved out of
>> the CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON ifdef (keeping only the slab
>> poisoning under that ifdef).
> 
> The mempool poisoning introduced here is really its own poisoning built on
> top of whatever allocator backs the mempool.  Otherwise, it would have
> called into the slab subsystem to do the poisoning and would include any
> allocated space beyond the object size itself.

Perhaps that would be a good thing to do. I mean, it makes sense to check the
redzone for corruption.
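
For comparison, the mempool-level check only verifies the poison pattern over
ksize() bytes, so slab redzones stay out of its reach. Roughly, the
__check_element() side looks like this (a sketch; the poison_error() reporting
helper is assumed):

static void __check_element(mempool_t *pool, void *element, size_t size)
{
        u8 *obj = element;
        size_t i;

        for (i = 0; i < size; i++) {
                u8 exp = (i < size - 1) ? POISON_FREE : POISON_END;

                if (obj[i] != exp) {
                        /* assumed reporting helper */
                        poison_error(pool, element, size, i);
                        return;
                }
        }
        /* re-mark the element as in use before it is handed back out */
        memset(obj, POISON_INUSE, size);
}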

> Mempool poisoning is agnostic to the
> underlying memory, just like the chain of elements is; mempools don't even
> store the size.
> 
> We have no need to set PAGE_EXT_DEBUG_POISON on these pages sitting in the
> reserved pool, nor to do kmap_atomic(), since the memory is already mapped
> and must be mapped to be in the reserved pool, which is handled by
> mempool_free().
> 

Well, yes, but that applies only to architectures that don't have
ARCH_SUPPORTS_DEBUG_PAGEALLOC. The rest of the architectures can only benefit
from this, since kernel_map_pages() could potentially catch more bugs.
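
Something like the following is what I have in mind. This is an untested
sketch on top of your patch, assuming CONFIG_DEBUG_PAGEALLOC; the slab
branches stay as they are, and only the page-allocator branches change:

static void poison_element(mempool_t *pool, void *element)
{
        /* Mempools backed by slab allocator */
        if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
                __poison_element(element, ksize(element));

        /* Mempools backed by page allocator */
        if (pool->alloc == mempool_alloc_pages) {
                int order = (int)(long)pool->pool_data;

                /*
                 * With ARCH_SUPPORTS_DEBUG_PAGEALLOC the pages get unmapped,
                 * so any access while the element sits in the pool faults
                 * immediately; other architectures fall back to poisoning
                 * the page contents.
                 */
                kernel_map_pages((struct page *)element, 1 << order, 0);
        }
}

static void check_element(mempool_t *pool, void *element)
{
        /* Mempools backed by slab allocator */
        if (pool->free == mempool_free_slab || pool->free == mempool_kfree)
                __check_element(pool, element, ksize(element));

        /* Mempools backed by page allocator: map the pages back in */
        if (pool->free == mempool_free_pages) {
                int order = (int)(long)pool->pool_data;

                kernel_map_pages((struct page *)element, 1 << order, 1);
        }
}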

