Hi All,

I wonder if anyone has thoughts on this. I have spent some time setting up syzkaller for a new subsystem, and I've noticed that nth fault injection does not reliably cause things like xa_store() to fail.

The basic reason seems to be that xarray usually does two allocations: one in an atomic context, which fault injection does reliably fail, but then it almost always follows up with a second allocation in a non-atomic context that doesn't fail, because nth has already reached 0 by then. This reduces the coverage syzkaller can achieve by randomizing fault injections.

It does very rarely provoke a failure, which I guess happens when the atomic allocation fails naturally (with low probability) and the nth injection then takes out the non-atomic fallback. But that is rare and very annoying to reproduce.

Does anyone have thoughts on an appropriate way to cope with this? It seems like a general problem with these sorts of fallback allocations, so perhaps a GFP_ flag of some sort that allows fault injection to fail the allocation without decrementing nth?

Thanks,
Jason