> I agree wholeheartedly. It is interesting to see OpenBSD transition
> from a stance of "audit is the only way" to actually employing access
> control [...]

I persist in my belief that policy-based mechanisms do not improve
security.  If you cannot make a default policy that everyone can live
under, you are creating a trap: 90-99% of people use the default
policy, because they do not change it.

  - if that policy is restrictive, you have made a decision that
    security is more important than usability
  - if that policy is not restrictive, you have made a decision that
    usability is more important than security

(then there is also the issue of "restrictive towards what")

The trap I talk of is the one where you believe that security
technologies which make such a tradeoff (security vs usability) are
useful.  They're useless!  These things are buttons for experts, for
tweakers, for only the security-concerned, to be used only by those
who feel threatened, and at best they can be used only in SPECIFIC
instances.  I do not know why there is so much emphasis on such crap
tech from some parts of the community (research?) when it is clear
that the best technologies "just work" (translation: why do
researchers research crap "smart" tech instead of dumb tech that just
works?).  I still believe very strongly that efforts directed at
"security technologies that only experts can use" matter far less than
"security technologies that invisibly improve everything".

> It is tough to change your mind on big issues when you have a big
> public record to live down, so I don't particularly want to abuse
> Theo for making this policy change.

Hah, there is no public record to live down.  Nor a policy change,
since we still audit code (with more emphasis on "audit means to
improve wholesale").  We also modify a lot of software for
priv-separation or priv-revocation these days, to internally improve
specific applications' resistance against successful exploitation (for
the situation of: it has a bug, but if you can exploit it, you gain
much less; a sketch of the pattern appears further down).  Since we
have more people interested in other areas, we can expend effort in
other directions as well.

> and intrusion prevention technologies.

I translate this to mean that when some random bug does exist, system
features exist which decrease the ease with which it can be exploited.
ProPolice, StackGuard, non-executable objects, random object addresses
-- these kinds of things fall into that area.  Such mechanisms must be
automatic and always on.  They must be very cheap.  They must not
break any part of POSIX or another de facto standard.  I repeat --
they must NOT break any part of POSIX; when developers already have so
much trouble understanding the interactions within POSIX, please do
not create interactions that are even MORE magical.

One of these days someone is going to use the magic of a system call
interposition mechanism such as systrace, and for their application
accidentally create an operating system behaviour that is un-POSIX;
some application is going to misbehave as a result of that change, and
inadvertently this will result in the CREATION of a hole.  Be very
careful when people tell you that their magic solutions are right.
The programs we run expect the system to act in a POSIX way -- and
consistently, too; but those who are writing policies do not
understand the details of POSIX.  Details matter more than anything
else.  Like a gun, these things create a process environment which is
"POSIX, maybe".
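To make the priv-revocation point above concrete, here is a minimal
sketch of the pattern, assuming a conventional unprivileged "nobody"
user and an empty /var/empty to chroot into -- it is not lifted from
any actual OpenBSD daemon:

#include <sys/types.h>

#include <err.h>
#include <grp.h>
#include <pwd.h>
#include <unistd.h>

int
main(void)
{
    struct passwd *pw;

    /* Privileged phase: bind low ports, open log files, etc. */

    if ((pw = getpwnam("nobody")) == NULL)
        errx(1, "no such user");

    /* Confine the filesystem view, then drop groups and uid. */
    if (chroot("/var/empty") == -1 || chdir("/") == -1)
        err(1, "chroot");
    if (setgroups(1, &pw->pw_gid) == -1 ||
        setgid(pw->pw_gid) == -1 ||
        setuid(pw->pw_uid) == -1)
        err(1, "revoke privileges");

    /*
     * From here on a bug is still a bug, but an attacker who
     * exploits it holds an unprivileged uid inside an empty
     * chroot instead of root.
     */
    return 0;
}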
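And since "non-executable objects" can sound abstract, here is a toy
write-xor-execute fragment using plain mmap/mprotect -- a sketch of
the discipline only, not how the kernel or ld.so actually implements
it (MAP_ANON is the BSD spelling of anonymous memory):

#include <sys/mman.h>

#include <err.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    long pagesz;
    unsigned char *page;

    if ((pagesz = sysconf(_SC_PAGESIZE)) == -1)
        err(1, "sysconf");

    page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
        MAP_ANON | MAP_PRIVATE, -1, 0);
    if (page == MAP_FAILED)
        err(1, "mmap");

    /* Write phase: code or relocations would be emitted here. */
    memset(page, 0xc3, pagesz);     /* 0xc3 = i386 "ret" */

    /* The flip: writing ends before executing ever begins. */
    if (mprotect(page, pagesz, PROT_READ | PROT_EXEC) == -1)
        err(1, "mprotect");

    /* Any later write to this page now faults instead. */
    return 0;
}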
There are things that can be done inside libc too, such as the atexit,
stdio, and malloc modifications we have.  Or inside ld.so, where W^X
is done for the GOT and PLT.  Some of this is fun; other stuff is
really difficult.

> I just want to tease him for choosing ProPolice instead of
> StackGuard without so much as talking to me :)

Hah, on the contrary, I chose ProPolice because I had talked to you.
At least three times over the last five years I asked you if
StackGuard had ever found a bug in software.  Not a security hole,
no...  I was asking if a system compiled entirely with StackGuard had
resulted in someone finding bugs -- something inane like a buffer
overflow in cat or ls or nroff, something minor.  It is clear these
bugs do exist.  Come on... programs dump core all the time!  Bugs
which had not been found another way yet, but bugs that were found
because a user or programmer suddenly had StackGuard abort their
program due to such a bug.  You replied "no" twice, and if I recall
correctly the third time you just ignored my question (CHATS in Napa).

Since we incorporated ProPolice into OpenBSD, we have found many bugs
of this ilk.  We've even found 2 buffer overflows inside our kernel.
These were not security holes per se, but just bugs.  This means the
technology is working.  If a security technology shows no success at
finding other related but minor bugs, I really just don't see the
point.  (We've found a few other such "minor bugs" with W^X too.)  At
some point, any serious technology must show side effects that also
help improve quality, or there is something wrong.

I've been told by a few people (who understand this area better than
I do) that StackGuard puts the canary in the wrong place (maybe that
has been fixed in the meantime); I do not know if that is related.  We
chose ProPolice for three other reasons:

1) it also re-organizes the local stack frame to put buffers closer to
   the canary; non-buffer objects therefore cannot be overflowed
   without a hit on the canary.  I think this is the other fantastic
   part of ProPolice, rarely mentioned; the consequences of this
   change are far-reaching.  (a contrived illustration follows at the
   end of this message)

2) ProPolice is multi-architecture -- mostly portable code.  Support
   exists for at least i386, vax, m68k, sparc, sparc64, alpha, mips,
   and powerpc; the author is apparently working on the difficult task
   of supporting hppa.

3) the author is very eager.  Since ProPolice was not yet bug-free at
   the time we integrated it, we needed direct interaction with the
   author to get some of the last problem areas fixed.

ProPolice impresses me.  StackGuard did not, because it has not found
bugs.

>> Again, ISTM that the only way to get close to a reasonably secure
>> system is to only rely on the smallest, most audited codebase
>> possible to enforce security policy. [...]

whenever I see the words "security policy" everything after them
starts to sound a lot like "blah blah blah maybe NSA or DOD will give
me money" and my brain fades out...  (sorry, perhaps that is a little
bit strong)
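Back to ProPolice reason 1: a contrived fragment (my own, not compiler
output, and the names are made up) showing what the frame
re-organization protects.  In a naive layout the function pointer fp
may sit between buf and the return address, so an overflow of buf can
redirect fp without ever touching a canary; ProPolice sorts character
arrays to the top of the frame, next to the canary, so the overflow
must hit the canary before it can reach anything like fp.

#include <stdio.h>
#include <string.h>

static void
greet(void)
{
    puts("hello");
}

/* One frame holding both a character buffer and a function pointer. */
static void
handler(const char *input)
{
    void (*fp)(void) = greet;
    char buf[16];

    strcpy(buf, input);     /* the classic unchecked copy */
    fp();
}

int
main(void)
{
    handler("harmless");    /* anything longer than 15 bytes is not */
    return 0;
}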