David,

One thing you have to keep in mind is that a lot of things are
incredibly variable when dealing with this subject. For instance,
suppose you want to ensure that the URI handling in a web server is
not overflowable, so you test with something like:

GET /[AAAAAAAAA x 4096] HTTP/1.1
Host: foobar.com
Connection: close

This is all fine and well, unless the overflow takes place at 8192
instead, or I can stick in as many characters as I want so long as I
am using HTTP/1.1 and not HTTP/0.9, or the bug only triggers when the
Host header contains 36 backslashes, et cetera. This is generally why
fuzzing is mostly inconclusive: it often misses a lot of conditions,
and you have essentially assured nothing. Without in-depth analysis of
each software package you are basically pushing snake oil. There are
just far too many variables to really standardize such a thing. (A
rough sketch of the kind of length/variant sweep I mean appears after
the quoted message below.)

Best Regards,

Justin Ferguson
Reverse Engineer
NNSA IARC
702.942.2539

"It is a capital mistake to theorize before one has data. Insensibly
one begins to twist facts to suit theories, instead of theories to
suit facts." -- Sir Arthur Conan Doyle

> -----Original Message-----
> From: Adam Shostack [mailto:adam@xxxxxxxxxxxx]
> Sent: Friday, May 12, 2006 11:35 AM
> To: David Litchfield
> Cc: bugtraq@xxxxxxxxxxxxxxxxx; full-disclosure@xxxxxxxxxxxxxxxxx;
> ntbugtraq@xxxxxxxxxxxxxxxxxxxxxx; dbsec@xxxxxxxxxxxxx
> Subject: Re: How secure is software X?
>
> Hi David,
>
> Very briefly because I'm swamped today: please consider bringing some
> of this to Metricon
> (https://securitymetrics.org/content/Wiki.jsp?page=Welcome).
>
> Also, there's a project of US DHS/NIST and probably others called
> SAMATE (Software Assurance Metrics and Tool Evaluation),
> http://samate.nist.gov/index.php/Main_Page, which might be of
> interest.
>
> Adam
>
> On Fri, May 12, 2006 at 02:59:17AM +0100, David Litchfield wrote:
> | How secure is software X?
> |
> | At least as secure as Vulnerability Assessment Assurance Level P;
> | or Q or R. Well, that's what I think we should be able to say. What
> | we need is an open standard, agreed upon by recognized experts,
> | against which the absence of software security vulnerabilities can
> | be measured - something which improves upon the failings of the
> | Common Criteria. Let's choose web server software as an example.
> | When looking for flaws in a new piece of web server software there
> | is a bunch of well-known checks that one would throw at it first.
> | Try directory traversal attacks and their several variations. Try
> | overflowing the request method, the URI, the query string, the Host
> | header field and so on. Try cross-site scripting attacks in server
> | error pages and file-not-found messages. As I said, there's a bunch
> | of checks and I've mentioned but a few. If these were all written
> | down and labelled as a "standard" then one could say that web
> | server software X is at least as secure as the standard -
> | providing, of course, the server stands up.
> |
> | For products that are based upon RFCs it would be trivial to write
> | simple criteria that test every aspect of the software as per the
> | RFCs. This would be called Vulnerability Assessment Assurance
> | Level: Protocol. If a bit of software was accredited at
> | VAAL:Protocol then it would be given a level of assurance that it
> | at least stood up to those attacks.
> |
> | Not all products are RFC compliant, however.
> | Sticking with web servers, one bit of software might have a bespoke
> | request method of "FOOBAR". This opens up a whole new attack
> | surface that's not covered by the VAAL:Protocol standard. There are
> | two aspects to this. Anyone with a firewall capable of blocking
> | non-RFC-compliant requests could configure it to do so - thus
> | closing off that attack surface, from the outside at least. As far
> | as the standards go, however, you'd have to introduce criteria to
> | cover that specific functionality. And what about different
> | application environments running on top of the web server? And what
> | about more complex products such as database servers? I suppose at
> | a minimum for DB software you could at least have a standard that
> | simply checks whether the server falls to a long username or
> | password buffer overflow attempt and then fuzzes SQL-92 language
> | elements. It certainly makes standardization much more difficult,
> | but I think it is by no means impossible.
> |
> | Clearly, what is _easy_ is writing and agreeing upon a
> | VAAL:Protocol standard for many different types of servers. You
> | could then be assured that any server that passes is at least as
> | secure as VAAL:Protocol, and those looking for more "comfort" can
> | at least block non-RFC-compliant traffic.
> |
> | Having had a chat with Steve Christey about this earlier today, I
> | know there are other people thinking along the same lines, and I
> | bet there are more projects out there attempting to achieve the
> | same thing. If anyone is currently working on this stuff or would
> | like to get involved in thrashing out some ideas then please mail
> | me - I'd love to hear from you.
> |
> | Cheers,
> | David Litchfield
> | http://www.databasesecurity.com/
> | http://www.ngssoftware.com/
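
To make the variability point concrete, here is a minimal sketch of
the kind of length/variant sweep I mean, covering just one of the
checks David lists (an overflowable URI). The target host, port and
length boundaries are placeholders, not values from the mails above,
and a dropped connection is only a hint that something is wrong, not
proof of an overflow.

#!/usr/bin/env python
# Rough sketch only: sweep several request lengths and protocol
# variants for a single check ("is the URI overflowable?").
import socket

TARGET_HOST = "foobar.com"   # placeholder - point this at a test box you own
TARGET_PORT = 80

# Don't stop at 4096: the overflow may only trigger at 8192 or beyond.
LENGTHS = [256, 1024, 4096, 8192, 16384, 65536]

def build_requests(length):
    """Yield (label, raw request) pairs for a couple of protocol variants."""
    uri = "/" + "A" * length
    yield ("HTTP/0.9", "GET %s\r\n" % uri)
    yield ("HTTP/1.1",
           "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n"
           % (uri, TARGET_HOST))

def probe(raw):
    """Send one raw request; return True if the server answered at all."""
    try:
        s = socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=5)
        s.sendall(raw.encode("latin-1"))
        data = s.recv(1024)
        s.close()
        return bool(data)
    except (socket.error, socket.timeout):
        return False

if __name__ == "__main__":
    for length in LENGTHS:
        for label, raw in build_requests(length):
            status = "response" if probe(raw) else "no response / error"
            print("len=%-6d %-8s -> %s" % (length, label, status))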
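
The database-side idea in David's last paragraph could be sketched in
the same spirit: generate long username/password values and crudely
fuzzed SQL-92 language elements, and leave delivery to whatever client
library fits the product under test. The lengths and keyword list
below are illustrative assumptions only, not part of any proposed
standard.

# Rough sketch: payload generation only; nothing here is tied to a
# specific database or wire protocol.
OVERFLOW_LENGTHS = [256, 1024, 4096, 8192]

SQL92_ELEMENTS = ["SELECT", "FROM", "WHERE", "GROUP BY", "ORDER BY",
                  "UNION", "INSERT", "UPDATE", "DELETE", "CAST"]

def credential_payloads():
    """Long username and password strings for the login overflow check."""
    for n in OVERFLOW_LENGTHS:
        yield ("username", "A" * n)
        yield ("password", "A" * n)

def sql_payloads():
    """Oversized identifiers, unterminated strings and embedded NULs
    hung off each SQL-92 keyword."""
    for kw in SQL92_ELEMENTS:
        yield "%s %s" % (kw, "A" * 4096)
        yield "%s 'unterminated" % kw
        yield "%s \x00" % kw

if __name__ == "__main__":
    for field, value in credential_payloads():
        print("%s: %d bytes" % (field, len(value)))
    for stmt in sql_payloads():
        print(repr(stmt[:48]))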