On Thu, Feb 19, 2015 at 08:19:05AM PST, Stephen Smalley spake thusly:
> As Dominick pointed out, Fedora and RHEL migrated away from trying to
> use MCS on users to using it for specific use cases, e.g. sandbox,
> sVirt (KVM+SELinux), openshift, etc. So the MCS constraints may not be
> applied to anything in that policy except for the domains used for
> those specific applications.

We intend to use it to sandbox web apps. This sounds like what RHEL is
trying to use it for, right? Will it simply not work at all for users in
RHEL6 the way it used to in RHEL5? That seemed a very simple way to set
it up and would work perfectly for our needs. If it won't work for
users, do we now have to assign a specific type/domain to our app? The
app always runs under a specific user, so we could actually associate
that user with a confined domain instead of unconfined, correct?

Here is our current setup, which is all messed up. I'm not sure how we
arrived at this:

# semanage login -l

Login Name      SELinux User    MLS/MCS Range
__default__     unconfined_u    SystemLow-SystemHigh
p16001          p16001_u        p16001
p16002          appuser_u       AppAdmin-p16002
p16003          appuser_u       AppAdmin-p16003
p16004          unconfined_u    s0-s0:c0.c1023,c4
p16005          unconfined_u    s0-s0:c0.c1023,c4,c5
p16006          unconfined_u    s0-s0:c0.c1023,c6
p16007          unconfined_u    s0-s0:c0.c1023,c7
p16008          unconfined_u    s0-s0:c0.c1023,c8
p16009          unconfined_u    s0-s0:c0.c1023,c9
root            unconfined_u    SystemLow-SystemHigh
system_u        system_u        SystemLow-SystemHigh

So the first problem I see is that the login names p16004-p16009 are
assigned to unconfined_u, so they will never be denied anything except
by DAC, and MCS will not be enforced, correct? Is the user p16001 set up
correctly, in that it has its own assigned SELinux user and one specific
category assigned to it?

Then we need to fix the MLS/MCS ranges for the other users. Currently
unconfined_u has s0-s0:c0.c1023 plus a seemingly redundant ,c4,c5 etc.
Just as a test I tried:

chcat -l -- -c4 p16005

to remove the c4 category from p16005, but that didn't work for some
reason (I sketch what I think the corrected commands should be at the
end of this mail). We need to remove all of the categories except one,
which should be unique to each user, since each instance of our web app
runs under its own user (p16001, p16002, etc.).

Currently I have the above setup and can log in as p16001 and see files
like this:

-bash-4.1$ id
uid=16001(p16001) gid=16001(p16001) groups=16001(p16001) context=p16001_u:user_r:user_t:p16001
-bash-4.1$ ls -laZ
drwxr-xr-x. root   root   system_u:object_r:default_t:SystemLow .
drwxrwxr-x. root   root   system_u:object_r:default_t:SystemLow ..
drwxr-xr-x. p16001 p16001 unconfined_u:object_r:default_t:p16001 p16001
drwxr-xr-x. p16002 p16002 unconfined_u:object_r:default_t:p16002 p16002
drwxr-xr-x. p16003 p16003 unconfined_u:object_r:default_t:p16003 p16003
-bash-4.1$ id
uid=16001(p16001) gid=16001(p16001) groups=16001(p16001) context=p16001_u:user_r:user_t:p16001
-bash-4.1$ cd p16002/
-bash-4.1$ ls -laZ
drwxr-xr-x. p16002 p16002 unconfined_u:object_r:default_t:p16002 .
drwxr-xr-x. root   root   system_u:object_r:default_t:SystemLow ..
-rw-r--r--. p16002 p16002 unconfined_u:object_r:default_t:p16002 testfile
-bash-4.1$ cat testfile
I am 16002

Why can I cat that file? User p16001 has category p16001 and the file I
cat'd is category p16002. It seems like enforcement is not working here.
Is this what Dominick was referring to, in that I need to do something
else to "opt in" to the enforcement? What are the best resources for
learning how to use MCS in RHEL6?
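For what it's worth, here is a rough sketch of the fix I have in mind,
assuming the pNNNNN range names are just setrans.conf translations for
single categories (e.g. p16004 = s0:c4) and that appuser_u is a suitable
confined SELinux user for this; please correct me if semanage is the
wrong tool here:

# give each app login a confined SELinux user and exactly one category,
# setting the whole range at once rather than subtracting with chcat
# (use the raw s0:c4 form if the translated name isn't accepted)
semanage login -m -s appuser_u -r p16004 p16004
semanage login -m -s appuser_u -r p16005 p16005
# ...and likewise for p16006-p16009

# stamp each user's existing files with that user's single category
chcon -R -l p16004 /home/p16004
chcon -R -l p16005 /home/p16005

Does that look right, or is there more to it given that MCS enforcement
is now opt-in per domain?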
> The -mls policy might be a better fit if you want to apply it
> system-wide.

Isn't MLS even less used/supported than MCS? Given my description of our
use case, would you say that MCS is the right fit as opposed to MLS? It
seems like the standard targeted policy for most stuff on the box, plus
MCS to confine/sandbox our apps, would be the way to go.

Thanks!

--
Tracy Reed