On Thu, Oct 10, 2019 at 12:50 AM Paul Moore <paul@xxxxxxxxxxxxxx> wrote:
> On Wed, Oct 9, 2019 at 10:53 AM Ondrej Mosnacek <omosnace@xxxxxxxxxx> wrote:
> > On Wed, Oct 9, 2019 at 4:01 PM Paul Moore <paul@xxxxxxxxxxxxxx> wrote:
> > > On Wed, Oct 9, 2019 at 9:53 AM Stephen Smalley <sds@xxxxxxxxxxxxx> wrote:
> > > > On 10/8/19 5:30 PM, Paul Moore wrote:
> > > > > On Mon, Sep 30, 2019 at 10:07 AM Stephen Smalley <sds@xxxxxxxxxxxxx> wrote:
> > > > >> On 9/30/19 9:16 AM, Ondrej Mosnacek wrote:
> > > > >>> Add a test that verifies that SELinux permissions are not checked when
> > > > >>> mounting submounts. The test sets up a simple local NFS export on a
> > > > >>> directory which has another filesystem mounted on its subdirectory.
> > > > >>> Since the export is set up with the crossmnt option enabled, any client
> > > > >>> mount will try to transparently mount any subdirectory that has a
> > > > >>> filesystem mounted on it on the server, triggering an internal mount.
> > > > >>> The test tries to access the automounted part of this export via a
> > > > >>> client mount without having permission to mount filesystems, expecting
> > > > >>> it to succeed.
> > > > >>>
> > > > >>> The original bug this test is checking for has been fixed in kernel
> > > > >>> commit 892620bb3454 ("selinux: always allow mounting submounts"), which
> > > > >>> has been backported to 4.9+ stable kernels.
> > > > >>>
> > > > >>> The test first checks whether it is able to export and mount directories
> > > > >>> via NFS and skips the actual tests if e.g. the NFS daemon is not running.
> > > > >>> This means that the testsuite can still be run without having the NFS
> > > > >>> server installed and running.
> > > > >>
> > > > >> 1) We have to manually start nfs-server in order for the test to run;
> > > > >> else it will be skipped automatically. Do we want to start/stop the
> > > > >> nfs-server as part of the test script?
> > > > >
> > > > > My two cents are that I'm not sure we want to automatically start/stop
> > > > > the NFS server with the usual "make test"; perhaps we have a dedicated
> > > > > NFS test target that does the setup-test-shutdown? Other ideas are
> > > > > welcome.
> > > >
> > > > I guess my concern is that anything that doesn't run with the default
> > > > make test probably won't get run at all with any regularity.
> > >
> > > FWIW, I think I'm the only one regularly running the tests on upstream
> > > kernels and reporting the results. RH was running the tests at one
> > > point, and may still be doing so, but I have no idea what kernels they
> > > are testing (maybe just RHEL, stable Fedora, etc.) and what their
> > > process is when they find failures.
> >
> > We do still run the selinux-testsuite nightly on Fedora Rawhide with
> > your kernel-secnext kernel builds (I suppose we fetch them from COPR).
> > I can't really describe what we do when they fail, because that hardly
> > ever happens now :)
>
> I'm happy to hear that the tests are still running, but we must be
> looking at different test results ;)

Well, we pin the testsuite to a fixed commit and bump it manually as
needed/wanted, so we generally don't see failures that are fixed quickly.
But I don't recall many (non-false-positive) failures appearing on
kernel-secnext in the past few months either.

> > But if we came across a failure that would suggest a bug, we would
> > certainly investigate and report it.
>
> Great, thank you.
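For reference, the server-side setup the quoted commit message describes
boils down to roughly the following (the paths, export options and mount
options are illustrative guesses, not the testsuite's actual script, and
the SELinux part of the test, i.e. accessing the export from a domain that
lacks permission to mount filesystems, is omitted):

  # assumed paths/options; illustration only, not the real test script
  mkdir -p /tmp/nfs_export/sub /tmp/nfs_mnt
  mount -t tmpfs tmpfs /tmp/nfs_export/sub   # second filesystem on a subdirectory
  exportfs -o rw,no_root_squash,crossmnt localhost:/tmp/nfs_export
  mount -t nfs -o vers=4.2 localhost:/tmp/nfs_export /tmp/nfs_mnt
  ls /tmp/nfs_mnt/sub   # crossing into the subdirectory triggers the
                        # transparent client-side submount the test exercises
  # cleanup
  umount /tmp/nfs_mnt
  exportfs -u localhost:/tmp/nfs_export
  umount /tmp/nfs_export/sub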
> > The testsuite is now also being run as part of CKI
> > (https://github.com/cki-project), which AFAIK currently runs regularly
> > on linux-stable kernels (the results are posted publicly to
> > stable@xxxxxxxxxxxxxxx). I don't follow these reports closely, so I'm
> > not sure if there were any non-false-positive failures there...
>
> That's good news. I assume CKI has some provision for emailing people
> when there are test failures? I don't really need to see every
> -stable kernel test, but it might be nice to see the failures.
> Alternatively, now that I think about it, this shouldn't be that hard
> to set up with the secnext stuff ...

I don't know if it is possible to subscribe to test failures (I'd guess
not). There is a notion of test maintainers (defined internally) who
receive failure reports for review before they are sent to the relevant
recipients, but this is an internal-only facility... Maybe you can try
filing a feature request at [1]?

[1] https://github.com/CKI-project/meta/issues

> > > > For something that requires specialized hardware (e.g. InfiniBand),
> > > > this is reasonable but that isn't true of NFS. For the more analogous
> > > > cases of e.g. labeled IPSEC, NetLabel, SECMARK, we already load and
> > > > unload network configurations for the testsuite during testing.
> > >
> > > That's a good point about the other networking tests. My gut feeling
> > > tells me that NFS should be "different", but I guess I can't really
> > > justify that statement in an objectively meaningful way.
> >
> > I think the main reason why I didn't include NFS server starting was
> > that I don't know how to do it robustly across distros... Already on
> > RHEL the service name varies ("nfs-server" vs. just "nfs") and then
> > there is "service xyz start" vs. "systemctl start xyz"...
>
> That's another good point. At this point in time I think it is
> relatively safe to stick with systemd/systemctl (and skip if systemctl
> is not found) as systemd appears to be eating the world; although this
> doesn't help with the service name problem.

It seems that wherever the service can be started by systemctl, it is
called "nfs-server" (on RHEL-7 there is also an alias "nfs"), so I think
we can stick to just 'systemctl start nfs-server' and it should work most
(if not all) of the time; a rough sketch of the skip-or-start logic is
below.

--
Ondrej Mosnacek <omosnace at redhat dot com>
Software Engineer, Security Technologies
Red Hat, Inc.
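The skip-or-start check could look something along these lines (a sketch
only; how a skip is reported to the test harness and the cleanup handling
are assumptions, not the testsuite's actual code):

  # Sketch: skip the NFS tests unless nfs-server is running or can be started.
  if ! command -v systemctl >/dev/null 2>&1; then
      echo "systemctl not available, skipping NFS tests"
      exit 0
  fi
  started_nfs=0
  if ! systemctl is-active --quiet nfs-server; then
      if systemctl start nfs-server; then
          started_nfs=1   # we started it, so stop it again after the tests
      else
          echo "could not start nfs-server, skipping NFS tests"
          exit 0
      fi
  fi
  # ... run the NFS tests here ...
  [ "$started_nfs" -eq 1 ] && systemctl stop nfs-server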