In the recent and not-so-recent past I've seen claims of non-exploitability of several discovered vulnerabilities without actual facts to support them. A recent private email discussion of this matter as well as the public post to this list motivated me to reply with some thoughts on the issue.
While I understand the attempt to mitigate the spread of fear, uncertainty and doubt about newly disclosed vulnerabilities, and the cost of massive deployment of patches across an organization's network, I also believe that as security practitioners it is our job and our responsibility to exercise extreme caution before making blunt statements about the exploitability of any given vulnerability. History has demonstrated that bugs thought not to be exploitable turned out to be demonstrably so after a period of time ranging from a few hours to a few months. This applies to all past and current popular operating systems, including Windows, Linux, Solaris, the BSD variants (including OpenBSD) and all other UNIX flavors.
It is easier to demonstrate that a vulnerability is exploitable than to demonstrate the opposite. For the former, the first proof of concept exploit code will suffice. For the latter, an in-depth analysis and public scrutiny of the results is necessary, as well as complete awareness of the state of the art in exploit writing techniques, and *even then* there will be no hard proof that a given bug cannot be exploited...ever.
It is my intent to steer away from the full/partial/none disclosure discussion often seen on this and other lists, and not to debate the benefits and drawbacks of publishing proof of concept code, but I would simply point out that in the absence of PoC you either:
1. Assume that a complex vulnerability is exploitable ONLY if you see proof of concept code, and therefore when in doubt always ask or wait for PoC to be publicly available, or

2. Trust the analysis and report of a third party (vulnerability researcher, vendor, reporter, colleague, friend, barman or favourite pet).

In the end this is a matter of where you place your trust.
You can't have it both ways.
A third option would be to actually do the complete research yourself and come to verifiable and reviewable conclusions before offering an opinion. Mind you, this is *also* costly and time consuming, sometimes even more so than just deploying patches, depending on your organization's infrastructure and willingness to assume risk. As obvious as it is, many people seem to forget that vulnerability research and exploit writing done in a professional manner is not free; someone is actually paying for it.
Personally, I tend to think that ALL vulnerabilities are exploitable
unless sufficient proof of the opposite has been given and carefully
scrutinized. I find this more in-line with my bias towards
adopting practices similar to what I learned about modern science during
my years at school. Even then, I will STILL have doubts about my judgement.
Call me paranoid if you wish, or call me humble, but I do think of the
possibility of someone smarter than me finding a way to exploit a vulnerability that I could not exploit myself.
I would rather err on the safe side than provoke a disturbingly false
sense of security in the people that value my opinion.
With regard to the specifics of the ASN.1 vulnerability and the statements in eEye's advisory, I have no reason to believe that the claims of exploitability are false, let alone purposely false or part of a deliberate misinformation campaign.
> I work for a start-up security-intelligence vendor,
> and we warned our customers that this bug was only
> exploitable as a denial of service, yet many of them
> were not willing to take the risk that the next
> Blaster might appear over the weekend, despite our
> in-depth explanation of why this bug is not
> exploitable. Why wouldn't they believe us?
As I said, in the end, when no actual proof either way is available, it becomes a matter of whom you trust most. Apparently some of your customers chose to trust publicly disclosed information and the analysis of eEye, or at least, facing doubt, decided to take the safe option: more expensive than sitting on the bug and doing nothing, but definitely less expensive than having to fix things if and when the next worm/virus/etc. hit them.
> Sometimes misinformation in security advisories is unintentional,
> however in this case it appears to be intentionally misleading and I
> think it's time that someone spoke openly about it. I'm trying to promote
Erhm, note also that you are attributing intent to the alleged misinformation in eEye's advisory. I sincerely hope you understand the implications of such a statement.
I have no affiliation whatsoever with eEye, but I must admit that I am biased in this debate: I work for a company that has been doing vulnerability research and exploit development as part of its core business for several years, and I am privy to the vulnerability researcher end of this discussion.
-ivan
--- To strive, to seek, to find, and not to yield. - Alfred, Lord Tennyson, Ulysses, 1842
Ivan Arce CTO CORE SECURITY TECHNOLOGIES
46 Farnsworth Street Boston, MA 02210 Ph: 617-399-6980 Fax: 617-399-6987 ivan.arce@coresecurity.com www.coresecurity.com
PGP Fingerprint: C7A8 ED85 8D7B 9ADC 6836 B25D 207B E78E 2AD1 F65A