Re: [PATCH v11 19/19] kasan: enable instrumentation of global variables


 



Can a module be freed in an interrupt?


On Mon, Feb 16, 2015 at 5:44 PM, Andrey Ryabinin <a.ryabinin@xxxxxxxxxxx> wrote:
> On 02/16/2015 05:58 AM, Rusty Russell wrote:
>> Andrey Ryabinin <a.ryabinin@xxxxxxxxxxx> writes:
>>> This feature lets us detect out-of-bounds accesses to
>>> global variables. It works both for globals in the kernel
>>> image and for globals in modules. Currently it won't work
>>> for symbols in user-specified sections (e.g. __init, __read_mostly, ...).
>>>
>>> The idea is simple. The compiler pads each global variable
>>> with a redzone and adds constructors that invoke the
>>> __asan_register_globals() function. Information about each global
>>> variable (address, size, size with redzone, ...) is passed to
>>> __asan_register_globals() so we can poison the variable's redzone.
>>>
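For reference, the descriptor the compiler hands to __asan_register_globals()
looks roughly like this (a sketch based on the GCC ASan ABI; field names are
approximate and not quoted from the patch itself):

    /* One record per global, emitted by the compiler and passed to
     * __asan_register_globals() from a generated constructor. */
    struct kasan_global {
            const void *beg;                /* address of the variable itself */
            size_t size;                    /* size of the variable */
            size_t size_with_redzone;       /* size including the trailing redzone */
            const void *name;
            const void *module_name;        /* module the global is declared in */
            unsigned long has_dynamic_init;
    };

    /* KASAN walks the array and poisons each global's redzone, i.e. the
     * bytes in [beg + size, beg + size_with_redzone), in shadow memory. */
    void __asan_register_globals(struct kasan_global *globals, size_t size);
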
>>> This patch also forces module_alloc() to return an 8*PAGE_SIZE-aligned
>>> address, making shadow memory handling (kasan_module_alloc()/kasan_module_free())
>>> simpler. Such alignment guarantees that each shadow page backing the
>>> modules' address space corresponds to only one module_alloc() allocation.
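The arithmetic behind that alignment, sketched with the values the patch
assumes (KASAN_SHADOW_SCALE_SHIFT == 3, i.e. one shadow byte per 8 bytes):

    #define KASAN_SHADOW_SCALE_SHIFT  3
    #define MODULE_ALIGN  (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)  /* 8 * PAGE_SIZE */

    /* One shadow page covers PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT bytes of
     * module space -- exactly MODULE_ALIGN bytes.  If every module_alloc()
     * region starts on a MODULE_ALIGN boundary, a given MODULE_ALIGN-sized
     * chunk of module space (and therefore its shadow page) can intersect
     * at most one allocation, so kasan_module_free() can release the
     * region's shadow pages without checking whether they are shared. */
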
>>
>> Hmm, I understand why you only fixed x86, but it's weird.
>>
>> I think MODULE_ALIGN belongs in linux/moduleloader.h, and every arch
>> should be fixed up to use it (though you could leave that for later).
>>
>> Might as well fix the default implementation at least.
>>
>>> @@ -49,8 +49,15 @@ void kasan_krealloc(const void *object, size_t new_size);
>>>  void kasan_slab_alloc(struct kmem_cache *s, void *object);
>>>  void kasan_slab_free(struct kmem_cache *s, void *object);
>>>
>>> +#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
>>> +
>>> +int kasan_module_alloc(void *addr, size_t size);
>>> +void kasan_module_free(void *addr);
>>> +
>>>  #else /* CONFIG_KASAN */
>>>
>>> +#define MODULE_ALIGN 1
>>
>> Hmm, that should be PAGE_SIZE (we assume that in several places).
>>
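A minimal sketch of the non-KASAN fallback Rusty is suggesting (illustrative
only, not the actual follow-up change):

    #else /* CONFIG_KASAN */

    /* Several callers assume module mappings are at least page-aligned. */
    #define MODULE_ALIGN  PAGE_SIZE
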
>>> @@ -1807,6 +1808,7 @@ static void unset_module_init_ro_nx(struct module *mod) { }
>>>  void __weak module_memfree(void *module_region)
>>>  {
>>>      vfree(module_region);
>>> +    kasan_module_free(module_region);
>>>  }
>>
>> This looks racy (memory reuse?).  Perhaps try other order?
>>
>
> You are right, it's racy. A concurrent kasan_module_alloc() could fail because
> kasan_module_free() hasn't been called/finished yet, so the whole module_alloc()
> will fail and module loading will fail.
> However, I just found out that this race is not the worst problem here.
> When vfree(addr) is called in interrupt context, the memory at addr will be reused
> for storing a 'struct llist_node':
>
> void vfree(const void *addr)
> {
> ...
>         if (unlikely(in_interrupt())) {
>                 struct vfree_deferred *p = this_cpu_ptr(&vfree_deferred);
>                 if (llist_add((struct llist_node *)addr, &p->list))
>                         schedule_work(&p->wq);
>
>
> In this case we have to free the shadow *after* freeing 'module_region', because
> 'module_region' is still used in llist_add() and in free_work() later.
> free_work() (in mm/vmalloc.c) processes the list in LIFO order, so to free the shadow
> after 'module_region' is freed, kasan_module_free(module_region) should be called
> before vfree(module_region).
>
> It will still be racy, but not as bad as the potential crash we have now.
> Honestly, I have no idea how to fix this race nicely. Any suggestions?
>
>
>
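Regarding the ordering Andrey describes above, a sketch of the reordered hook
(illustrative only; the remaining race is still unresolved here):

    void __weak module_memfree(void *module_region)
    {
            /*
             * Queue the shadow's vfree() first: in interrupt context both
             * calls only add an address to the per-cpu vfree_deferred llist,
             * which free_work() pops in LIFO order.  module_region, added
             * second, is therefore unmapped first, while its shadow is still
             * present for the llist_add()/free_work() accesses that reuse
             * module_region as a struct llist_node.
             */
            kasan_module_free(module_region);
            vfree(module_region);
    }
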
