On 2020-12-07 12:43:47 [+0000], Colin Ian King wrote:
> Hi,

Hi,

> Questions:
>
> 1. Are these issues expected?

I haven't seen this, and it is not expected. It hasn't been reported so
far.

> 2. Is there an official way to report my bug findings?

Report it to the list and keep me in Cc:, please.

> 3. I am keen to debug and fix these issues, have RT folk got some advice
> on how to start debugging these kind of issues?

Based on the backtrace (and please tell your mail client not to break
lines while pasting backtraces / logs, since that makes them hard to
read):

> BUG: scheduling while atomic: stress-ng-fstat/47271/0x00000002
> CPU: 4 PID: 47271 Comm: stress-ng-fstat Not tainted 5.10.0-6-realtime #7
…
> Call Trace:
>  __schedule_bug.cold+0x4a/0x5b
>  __schedule+0x50d/0x6b0
>  ? task_blocks_on_rt_mutex+0x29a/0x390
>  preempt_schedule_lock+0x24/0x50
>  rt_spin_lock_slowlock_locked+0x11b/0x2c0
>  rt_spin_lock_slowlock+0x57/0x90
>  rt_spin_lock+0x30/0x40
>  alloc_pid+0x1bc/0x400

alloc_pid() acquired a spinlock_t somewhere while the context was
"atomic". The output also seems to lack details: the "scheduling while
atomic" report should contain (somewhere) "atomic: x irqs disabled: x",
which is missing here, and I would expect to see the preemption level
in your backtrace as well.

Anyway. Something made the context atomic (preempt_disable(), for
instance) and then you attempted to acquire a lock at
alloc_pid+0x1bc/0x400. Looking at the code, alloc_pid() should be
preemptible: on PREEMPT_RT spin_lock() does not disable preemption, so
you should remain preemptible.

Sebastian
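
P.S.: In case it helps with the debugging, below is a minimal sketch
(my own construction, not code from your kernel) of the pattern that
produces such a splat on PREEMPT_RT. The module and lock names are made
up for illustration; the point is only the preempt_disable() +
spin_lock() combination:

/*
 * demo-rt-atomic.c: illustration only.  On PREEMPT_RT, spinlock_t is
 * backed by an rt_mutex, i.e. it is a sleeping lock.  Taking it from
 * atomic context is a bug; if the lock happens to be contended, the
 * slow path (rt_spin_lock_slowlock()) blocks and __schedule_bug()
 * fires the "scheduling while atomic" splat seen in the backtrace.
 */
#include <linux/module.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static int __init demo_init(void)
{
	preempt_disable();	/* context becomes atomic */
	spin_lock(&demo_lock);	/* sleeping lock on RT: bug here */
	spin_unlock(&demo_lock);
	preempt_enable();
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The real question for your report is what disabled preemption before
alloc_pid() took the lock. Enabling CONFIG_DEBUG_PREEMPT and posting
the full, unwrapped splat should help narrow that down.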