Alan Cox <alan <at> lxorguk.ukuu.org.uk> writes:

>> ...
>> Alan Cox <alan <at> lxorguk.ukuu.org.uk> writes:
>> ...
>> Security models are complex for a complex system. That would appear to be
>> unavoidable given the law of necessary variety.
>> ...
>> On 1/20/2011 5:04 PM, JB wrote:
>> ...
>> With regard to ... Law of Requisite Variety.
>> ...
>> "If a system is to be stable the number of states of its control mechanism
>> must be greater than or equal to the number of states in the system being
>> controlled."
>> ...
> ...
>> It uses the term "control" in the context of interactions between system's
>> components, not security of the system.
>
> Security *is* a part of a set of interactions between system components.
> It has to be able to mediate all sorts of complex interactions between
> components and decide which are permissible. All those components have
> state and all that state has to be managed.
> ...

I think the Law of Requisite Variety does not apply here.

"A scientific law or scientific principle is a concise verbal or mathematical
statement of a relation that expresses a fundamental principle of science."

If that law applied here, the following would have to be true at *all* times
(a compact restatement is appended at the end of this message):

"If a security system is to be stable the number of states of its control
mechanism must be greater than or equal to the number of states in the
security system being controlled."

What you are probably doing is instinctively applying the "control system"
model and its law to a corresponding "security" model.

Admittedly, there is a similarity between the models. The model of a
"control system" could be used in that system's "security" model.

For example, the components of a PC system (hardware: CPU, hard disk,
keyboard, etc.; software: corresponding kernel subsystems, other subsystems
like networking, etc.; at least a fixed minimum number of them is required
to constitute a working PC) and their states could be considered a complex
control system, for which the Law's "... greater than or equal ..." statement
would apply.

You could take that complex "control system" model and consider each of the
hardware and software components worthy of a corresponding security
component, and that would make the security model complex as well.

But, once again, "security" is a state and any measures taken to achieve it.
The same "control"/"security" model, however useful for analyzing security,
would not be subject to the Law's statement if we decided that only one of
the components, namely networking, is *required* to have a corresponding
security component, namely iptables (a sketch of such a minimal ruleset is
appended below).

From a "control system" point of view the system would be complex, but from
a "security" point of view it would not. Most importantly, our perception of
security (according to its definition) would be satisfied, perhaps because
we are on an internal network that we consider secure.

So why would we need SELinux on that machine? We would like not to have it,
but we are not allowed to. We could disable it (the commands are sketched
below as well), but suddenly perhaps not! What if SELinux itself becomes the
object of a hacker attack? We know that in order to remove SELinux and
disinfect the system, you would have to remove everything else along with
it. How about that for a security concept?

> ...
> Failure is a necessary part of progress. It's called learning. Without
> failure you have stasis.
>
> Alan
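For reference, here is the compact restatement promised above. Writing V(X)
for the variety of X, i.e. the number of distinguishable states (the
notation is mine, not anything from the thread), the Law reduces to:

  $V(\text{control mechanism}) \ge V(\text{system being controlled})$

My argument is simply that the right-hand side is whatever *we* decide the
"system being controlled" is from the security point of view, not the whole
PC.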
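To make the iptables point concrete, a minimal "internal network only"
ruleset might look like the sketch below. The interface name (eth0) and the
trusted subnet (192.168.1.0/24) are placeholders for illustration, not a
recommendation:

  # Default-deny inbound; nothing is forwarded; outbound is allowed.
  iptables -P INPUT DROP
  iptables -P FORWARD DROP
  iptables -P OUTPUT ACCEPT
  # Allow loopback, replies to our own connections, and the trusted LAN.
  iptables -A INPUT -i lo -j ACCEPT
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A INPUT -i eth0 -s 192.168.1.0/24 -j ACCEPT

A single security component like this can fully satisfy our perception of
security on a trusted internal network, which is exactly the point.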
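And for completeness, this is roughly what disabling SELinux looks like on a
stock Fedora system (assuming the standard selinux-policy setup):

  # Report the current mode (Enforcing/Permissive/Disabled)
  getenforce
  # Switch to permissive mode until the next reboot
  setenforce 0
  # To make it permanent, set SELINUX=permissive (or SELINUX=disabled)
  # in /etc/selinux/config and reboot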
JB

--
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines