Hello, Andrew.

> An earlier version of this patch was accused of crashing the kernel:
>
> https://lists.01.org/pipermail/lkp/2019-April/010004.html
>
> does the v4 series address this?
>
I tried earlier to narrow down that crash but did not succeed, and I have
never seen it on my test environment, including while running lkp-tests
with the trinity test case:

test-url: http://codemonkey.org.uk/projects/trinity/

But after analyzing the Call-trace and slob_alloc():

<snip>
[    0.395722] Call Trace:
[    0.395722]  slob_alloc+0x1c9/0x240
[    0.395722]  kmem_cache_alloc+0x70/0x80
[    0.395722]  acpi_ps_alloc_op+0xc0/0xca
[    0.395722]  acpi_ps_get_next_arg+0x3fa/0x6ed
<snip>

<snip>
		/* Attempt to alloc */
		prev = sp->lru.prev;
		b = slob_page_alloc(sp, size, align);
		if (!b)
			continue;

		/* Improve fragment distribution and reduce our average
		 * search time by starting our next search here. (see
		 * Knuth vol 1, sec 2.5, pg 449) */
		if (prev != slob_list->prev &&
				slob_list->next != prev->next)
			list_move_tail(slob_list, prev->next); <- Crash is here in __list_add_valid()
		break;
	}
<snip>

I see that it manipulates the "prev" node, which may already have been
removed from the list by slob_page_alloc() earlier if the whole page got
used (a simplified sketch of this failure mode is appended below).

I think that crash should be fixed by the commit below:

https://www.spinics.net/lists/mm-commits/msg137923.html

It went into the 5.1-rc3 kernel.

The reason ("mm/vmalloc.c: keep track of free blocks for vmap allocation")
was accused is probably that it uses "kmem cache allocations with struct
alignment" instead of kmalloc()/kzalloc(). Maybe the bigger size requests
made the BUG easier to trigger. But that is only a theory.

--
Vlad Rezki
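
For illustration only, here is a minimal userspace sketch of that class of
bug. It is not the kernel's slob.c and not the exact slob_alloc() sequence;
the helper list_add_valid() and the page_a/page_sp variables are made up
for the example. The idea is that a pointer taken into the slob list before
the allocation keeps referencing a page that slob_page_alloc() has since
unlinked and poisoned, so the next list primitive run through that pointer
fails the CONFIG_DEBUG_LIST sanity check. The real __list_add_valid()
dereferences the poisoned pointers, which is why the trace crashes inside
it; the sketch just detects the poison and reports it so it stays runnable.

/*
 * Minimal, simplified userspace model of the kernel's list_head --
 * only what is needed to show the stale-pointer scenario.
 */
#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x122)

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h->prev = h;
}

/*
 * Stand-in for the CONFIG_DEBUG_LIST check. The kernel's __list_add_valid()
 * dereferences prev/next, so poisoned pointers fault right there; here we
 * detect the poison values and report them instead.
 */
static int list_add_valid(struct list_head *new,
			  struct list_head *prev, struct list_head *next)
{
	if (prev == LIST_POISON2 || next == LIST_POISON1 ||
	    next->prev != prev || prev->next != next ||
	    new == prev || new == next) {
		fprintf(stderr, "list_add corruption: stale/poisoned neighbours\n");
		return 0;
	}
	return 1;
}

static void __list_add(struct list_head *new,
		       struct list_head *prev, struct list_head *next)
{
	if (!list_add_valid(new, prev, next))
		return;
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	__list_add(new, head->prev, head);
}

static void __list_del_entry(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

/* Kernel-style list_del(): unlink the entry and poison its pointers. */
static void list_del(struct list_head *entry)
{
	__list_del_entry(entry);
	entry->next = LIST_POISON1;
	entry->prev = LIST_POISON2;
}

static void list_move_tail(struct list_head *list, struct list_head *head)
{
	__list_del_entry(list);
	list_add_tail(list, head);
}

int main(void)
{
	struct list_head slob_list, page_a, page_sp;

	INIT_LIST_HEAD(&slob_list);
	list_add_tail(&page_a, &slob_list);	/* some partially used page */
	list_add_tail(&page_sp, &slob_list);	/* page we allocate from    */

	/* A pointer into the list is cached before the allocation... */
	struct list_head *stale = &page_sp;

	/* ...the allocation uses up the whole page, so it gets unlinked... */
	list_del(&page_sp);

	/*
	 * ...and the list is then rotated through the cached pointer.
	 * stale->prev is LIST_POISON2 at this point, so the debug check
	 * fires (in the kernel it faults inside __list_add_valid()).
	 */
	list_move_tail(&slob_list, stale);

	return 0;
}

Built with a plain "gcc demo.c", this prints the corruption message instead
of dereferencing the poisoned pointers the way the in-kernel check does.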