On Fri, Jan 26, 2024 at 5:04 PM Stephen Smalley
<stephen.smalley.work@xxxxxxxxx> wrote:
>
> On Fri, Jan 26, 2024 at 10:03 AM Stephen Smalley
> <stephen.smalley.work@xxxxxxxxx> wrote:
> >
> > On Fri, Jan 26, 2024 at 5:44 AM Ondrej Mosnacek <omosnace@xxxxxxxxxx> wrote:
> > >
> > > The inode_getsecctx LSM hook has previously been corrected to have
> > > -EOPNOTSUPP instead of 0 as the default return value to fix BPF LSM
> > > behavior. However, the call_int_hook()-generated loop in
> > > security_inode_getsecctx() was left treating 0 as the neutral value, so
> > > after an LSM returns 0, the loop continues to try other LSMs, and if one
> > > of them returns a non-zero value, the function immediately returns with
> > > said value. So in a situation where both SELinux and the BPF LSM have
> > > registered this hook, -EOPNOTSUPP would be incorrectly returned whenever
> > > SELinux returned 0.
> > >
> > > Fix this by open-coding the call_int_hook() loop and making it use the
> > > correct LSM_RET_DEFAULT() value as the neutral one, similar to what
> > > other hooks do.
> > >
> > > Reported-by: Stephen Smalley <stephen.smalley.work@xxxxxxxxx>
> > > Link: https://lore.kernel.org/selinux/CAEjxPJ4ev-pasUwGx48fDhnmjBnq_Wh90jYPwRQRAqXxmOKD4Q@xxxxxxxxxxxxxx/
> > > Fixes: b36995b8609a ("lsm: fix default return value for inode_getsecctx")
> > > Signed-off-by: Ondrej Mosnacek <omosnace@xxxxxxxxxx>
> > > ---
> > >
> > > I ran 'tools/nfs.sh' on the patch and even though it fixes the most
> > > serious issue that Stephen reported, some of the tests are still
> > > failing under NFS (but I presume that these are pre-existing issues
> > > not caused by the patch).
> >
> > Do you have a list of the failing tests? For me, it was hanging on
> > unix_socket and thus not getting to many of the tests. I would like to
> > triage the still-failing ones to confirm that they are in fact
> > known/expected failures for NFS.
>
> Applying your patch and removing unix_socket from the tests to be run
> (since it hangs), I get the following failures:
> mac_admin/test        (Wstat: 0 Tests: 8 Failed: 2)
>   Failed tests: 5-6
> filesystem/ext4/test  (Wstat: 512 (exited 2) Tests: 76 Failed: 2)
>   Failed tests: 1, 64
>   Non-zero exit status: 2
> filesystem/xfs/test   (Wstat: 512 (exited 2) Tests: 76 Failed: 2)
>   Failed tests: 1, 64
>   Non-zero exit status: 2
> filesystem/jfs/test   (Wstat: 512 (exited 2) Tests: 83 Failed: 2)
>   Failed tests: 1, 71
>   Non-zero exit status: 2
> filesystem/vfat/test  (Wstat: 512 (exited 2) Tests: 52 Failed: 2)
>   Failed tests: 1, 46
>   Non-zero exit status: 2
> fs_filesystem/ext4/test (Wstat: 512 (exited 2) Tests: 75 Failed: 2)
>   Failed tests: 1, 63
>   Non-zero exit status: 2
> fs_filesystem/xfs/test  (Wstat: 512 (exited 2) Tests: 75 Failed: 2)
>   Failed tests: 1, 63
>   Non-zero exit status: 2
> fs_filesystem/jfs/test  (Wstat: 512 (exited 2) Tests: 82 Failed: 2)
>   Failed tests: 1, 70
>   Non-zero exit status: 2
> fs_filesystem/vfat/test (Wstat: 512 (exited 2) Tests: 51 Failed: 2)
>   Failed tests: 1, 45
>   Non-zero exit status: 2
> Files=77, Tests=1256, 308 wallclock secs ( 0.30 usr 0.10 sys + 6.84
> cusr 21.78 csys = 29.02 CPU)

I got the same ones (I, too, removed unix_socket to allow the rest to
run).

-- 
Ondrej Mosnacek
Senior Software Engineer, Linux Security - SELinux kernel
Red Hat, Inc.