On Wed, 05 May 2004 08:02:41 -0400 Stephen Smalley wrote:

>On Wed, 2004-05-05 at 02:12, Bob Gustafson wrote:

On 'fixfiles relabel':

>> A typical bunch of diagnostics looked like this:
>> snip
>>
>> /usr/sbin/setfiles: conflicting specifications for
>> /usr/src/redhat/BUILD/ooo-build-1.1.53pre/build/OOO_1_1_1/setup2/unxlngi4.pro/bin/tplx64590.res
>> and /var/tmp/openoffice.org-1.1.1-root/usr/lib/ooo-1.1/program/resource/tplx64590.res,
>> using system_u:object_r:src_t.
>>
>> There is a pattern here, but I can't express it in fixable terms.
>
>A "conflicting specifications" warning means that setfiles thinks that
>the two pathnames are multiple hard links to the same inode, which can
>only have a single security context, but the two pathnames match entries
>in file_contexts that have different security contexts, so there is a
>conflict.  setfiles will still label the inode (based on the ordering in
>file_contexts, with later entries taking precedence, unless there is an
>explicit entry for the complete pathname), but it is warning that there
>is an ambiguity in the specification.  Repeatedly relabeling won't help,
>as the conflict will remain until:
>- the conflicting hard link is removed, or
>- the file_contexts configuration is altered to explicitly indicate that
>  both pathnames should have the same context, typically by adding
>  explicit entries for the conflicting files.

Guessing a bit here (I still need to read some of those FAQs, etc.):

The idea of Security-Enhanced Linux is to come up with a way for the
computer to check the safety of an operation against a list of 'safe'
operations that has been drawn up by humans. This is a daunting task,
because the computer is very fast and is doing new operations all the
time. Necessarily, there is a performance hit (no free lunch). Skillful
design will hopefully minimize the performance hit without throwing out
the baby (that is, while still enforcing secure policy on all operations).

It is daunting from another angle too: if the list of safe operations is
created by humans, there could (will) be an error somewhere in the list.
Hence the tools that have been created, which will (hopefully)
programmatically [any 'Formal Methods' here?] prove, or at least check,
that the list is 'correct'. This in turn requires some sort of
'correctness' criteria. Is that what the policy files are? Or do the
policy files sit somewhere between the list of safe operations and the
correctness criteria?

In any case, a diagnostic-free run of 'fixfiles relabel' using a
programmatically checked set of policy files should result in a pretty
secure system. Yes?

--

OK, all of this is prologue to your last sentence, where the fixfiles
conflict warnings can be fixed by "adding explicit entries for the
conflicting files". That seems a bit tedious, and never-ending. Could
some 'security-aware' guidelines be created for other developers, so
that other components of Linux can be written without somebody having
to write (and check) potentially erroneous entries? Kind of a Surgeon
General's poster.

In the meantime, maybe there is a way to solve the conflicts by just
whacking the unneeded files. [In my case, one of the conflicting pair
seems to be redundant, the one with 'tmp' in the path.] A rough sketch
of what I mean is in the P.S. below. Maybe this could even become an
enhancement to the fixfiles algorithm.

snip

>Make sure that you are working with the latest policy, i.e. yum update
>policy.

Yes. I did

  yum update policy
  yum update
  yum update \*

and all of them said that I was up to date.

BobG
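P.S. Guessing at the mechanics here: the two paths below are just the
pair from the diagnostic above, and the file_contexts lines at the end
are only my own sketch of what an "explicit entry" might look like, not
something I have verified.

  # If both names report the same inode number, they really are hard
  # links to a single file, which is what setfiles is complaining about:
  ls -i /usr/src/redhat/BUILD/ooo-build-1.1.53pre/build/OOO_1_1_1/setup2/unxlngi4.pro/bin/tplx64590.res \
        /var/tmp/openoffice.org-1.1.1-root/usr/lib/ooo-1.1/program/resource/tplx64590.res

  # If the copy under /var/tmp really is just leftover build cruft,
  # removing that name removes the ambiguity (option 1 in Stephen's list):
  rm /var/tmp/openoffice.org-1.1.1-root/usr/lib/ooo-1.1/program/resource/tplx64590.res

  # Otherwise (option 2), explicit file_contexts entries giving both
  # names the same context (regular-expression syntax, so the dots are
  # escaped; the context is copied from the diagnostic):
  #
  # /usr/src/redhat/BUILD/ooo-build-1\.1\.53pre/build/OOO_1_1_1/setup2/unxlngi4\.pro/bin/tplx64590\.res  system_u:object_r:src_t
  # /var/tmp/openoffice\.org-1\.1\.1-root/usr/lib/ooo-1\.1/program/resource/tplx64590\.res  system_u:object_r:src_t

If I have understood your explanation, the next 'fixfiles relabel'
should then come back clean, at least for this pair.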