Hello, Adrian!

> Hello Uladzislau,
>
> On Fri, Sep 27, 2024 at 12:16 AM Uladzislau Rezki <urezki@xxxxxxxxx> wrote:
> >
> > Hello, Adrian!
> >
> > >
> > > > > From: Adrian Huang <ahuang12@xxxxxxxxxx>
> > > > >
> > > > > After re-visiting the code path for setting the kasan ptep (pte
> > > > > pointer), it's unlikely that a kasan ptep is set and cleared
> > > > > simultaneously by different CPUs. So, use ptep_get_and_clear() to
> > > > > get rid of the spinlock operation.
> > > >
> > > > "unlikely" isn't particularly comforting. We'd prefer to never corrupt
> > > > pte's!
> > > >
> > > > I'm suspecting we need a more thorough solution here.
> > > >
> > > > btw, for a lame fix, did you try moving the spin_lock() into
> > > > kasan_release_vmalloc(), around the apply_to_existing_page_range()
> > > > call? That would at least reduce locking frequency a lot. Some
> > > > mitigation might be needed to avoid excessive hold times.
> > >
> > > I did try it before. That didn't help. In this case, each iteration in
> > > kasan_release_vmalloc_node() only needs to clear one pte. However,
> > > vn->purge_list is a long list under heavy load: 128 cores (128
> > > vmap_nodes) execute kasan_release_vmalloc_node() to clear the
> > > corresponding pte(s) while other cores allocate vmalloc space (populate
> > > the page table of the vmalloc address) and populate the vmalloc shadow
> > > page table. Lots of cores contend for init_mm.page_table_lock.
> > >
> > > For a lame fix, adding cond_resched() in the loop of
> > > kasan_release_vmalloc_node() is an option.
> > >
> > > Any suggestions and comments about this issue?
> > >
> > One question. Do you think that running a KASAN kernel and stressing
> > the vmalloc allocator is an issue here? It is a debug kernel, which
> > implies it is slow. Also, please note, the synthetic stress test is
> > not a real workload; it runs in a tight loop to stress the allocator
> > as much as we can.
>
> Totally agree.
>
> > Can you trigger such a splat using a real workload, for example by
> > running stress-ng --fork XXX or a different workload?
>
> No, the issue could not be reproduced with stress-ng (over-weekend stress).
>
> So, please ignore it. Sorry for the noise.
>
No problem. This is a regular workflow, which is normal, IMO :)

--
Uladzislau Rezki
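
For context, the change being debated above targets the pte-clearing
callback in mm/kasan/shadow.c. Below is a minimal sketch of the proposed
shape, assuming the mainline kasan_depopulate_vmalloc_pte() helper; the
exact patch may differ in detail.

/*
 * Sketch of the proposed change (illustrative, not the exact patch).
 *
 * The existing callback takes init_mm.page_table_lock around a
 * pte_none()/pte_clear() pair. The proposal replaces that pair with an
 * atomic read-and-clear, which is only safe if no other CPU can set the
 * same shadow pte concurrently -- the "unlikely" assumption questioned
 * above.
 */
static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
					void *unused)
{
	/* Atomically fetch the old pte and clear the entry, lock-free. */
	pte_t pte = ptep_get_and_clear(&init_mm, addr, ptep);

	/* Only free the shadow page if the entry was actually populated. */
	if (likely(!pte_none(pte)))
		free_page((unsigned long)__va(pte_pfn(pte) << PAGE_SHIFT));

	return 0;
}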
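
The "lame fix" mentioned in the thread would look roughly like the
following in the purge path of mm/vmalloc.c. The function and field names
follow mm/vmalloc.c, but the body is a simplified sketch under the
assumption that the purge-list walk runs in a context that may sleep.

/*
 * Rough sketch of the cond_resched() mitigation in the purge-list walk
 * (illustrative; the real function carries additional bookkeeping).
 */
static void kasan_release_vmalloc_node(struct vmap_node *vn)
{
	struct vmap_area *va;

	list_for_each_entry(va, &vn->purge_list, list) {
		if (is_vmalloc_or_module_addr((void *)va->va_start))
			kasan_release_vmalloc(va->va_start, va->va_end,
					      va->va_start, va->va_end);

		/*
		 * Yield between entries: under heavy load purge_list is
		 * long, and every entry contends for
		 * init_mm.page_table_lock.
		 */
		cond_resched();
	}
}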