xarray, fault injection and syzkaller

Hi All,

I wonder if anyone has thoughts on this. I have spent some time
setting up syzkaller for a new subsystem and noticed that fail-nth
fault injection does not reliably cause things like xa_store() to
fail.

It seems the basic reason is that xarray will usually make two
allocation attempts: one in an atomic context, which fault injection
does reliably fail, but it almost always follows up with a second
attempt in a non-atomic context that does not fail because nth has
already become 0 by that point.
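
To make the mechanics concrete, below is a minimal userspace sketch of
that interaction as I understand it. All names are made up for
illustration; this is not the actual lib/xarray.c or fault-inject.c
code. The injected failure is consumed by the first, atomic attempt,
so the fallback attempt sees a counter of 0 and succeeds:

/*
 * Toy model of how fail-nth interacts with xarray's
 * attempt-then-fallback allocation. Userspace C, all names
 * hypothetical; it only illustrates why the injected failure lands on
 * the first (atomic) attempt while the second (sleepable) attempt
 * then succeeds.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static int fail_nth;		/* like /proc/self/task/<tid>/fail-nth */

/* Decrement on every covered allocation, inject one failure at zero. */
static bool should_fail(void)
{
	return fail_nth > 0 && --fail_nth == 0;
}

static void *try_alloc(void)
{
	return should_fail() ? NULL : malloc(64);
}

/*
 * Roughly what xa_store() does when it needs a new node: try an
 * atomic (GFP_NOWAIT-style) allocation under the lock, and if that
 * fails, retry with the caller's sleepable gfp flags.
 */
static int xa_store_sim(void)
{
	void *node = try_alloc();	/* atomic attempt under the lock */

	if (!node)
		node = try_alloc();	/* non-atomic fallback */
	if (!node)
		return -1;		/* would be -ENOMEM */
	free(node);
	return 0;
}

int main(void)
{
	/* Whichever nth is picked, the fallback absorbs it: always 0. */
	for (int nth = 1; nth <= 3; nth++) {
		fail_nth = nth;
		printf("fail-nth=%d -> xa_store_sim() = %d\n",
		       nth, xa_store_sim());
	}
	return 0;
}

Running this prints 0 (success) for every nth, which is the same
behaviour I am seeing from the real xa_store() under fail-nth.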

This reduces the coverage that syzkaller can achieve by randomizing
fault injection. It does very rarely provoke a failure, which I guess
happens when the atomic allocation fails naturally (with low
probability) and the nth then takes out the non-atomic allocation.
But that is rare and very annoying to reproduce.

Does anyone have thoughts on what an appropriate way to cope with
this would be? It seems like a general problem with this kind of
fallback allocation, so perhaps a GFP_ flag of some sort that allows
the fault injection to fail the allocation but not decrement nth?
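
To make that concrete, one way such a flag could behave, expressed in
terms of the toy injector above (the flag name and its semantics are
entirely hypothetical, nothing that exists today): when the nth
counter fires on an allocation marked as having a fallback, fail it
but leave the counter armed so the immediately following fallback
attempt is failed as well.

/*
 * Hypothetical semantics, continuing the sketch above: the atomic
 * attempt would pass something like a made-up __GFP_FAULT_NODEC, and
 * the injector would fail it without consuming the nth counter.
 */
static bool should_fail_nodec(bool has_fallback)
{
	if (fail_nth == 0)
		return false;
	if (fail_nth > 1) {
		fail_nth--;
		return false;
	}
	/* The counter fires here (fail_nth == 1). */
	if (!has_fallback)
		fail_nth = 0;	/* normal allocation: consume the shot */
	return true;		/* flagged allocation: fail but stay armed */
}

With that, the nth that lands on the atomic attempt is still armed for
the fallback right behind it, both attempts fail, and xa_store() would
actually return -ENOMEM under injection.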

Thanks,
Jason


