> What exactly constitutes a 0day? From my perspective naming a
> vulnerability 0day have absolutely no value whatsoever, it just doesn't
> make any sense. 0day for who? The person who release it, sure, but for
> the security community as a whole... nah.

I consider a "0day" to be a vulnerability for which there is an exploit
in the wild before there is a vendor patch for the problem. If this
convention is followed, the label has value to the community, because it
tells us that having that software on our systems presents a significant
risk.

> I'm also personally starting to question the whole idea behind public
> disclosure and advisories. Do they actually mean anything these days?
> What good is it to know about a vulnerability that was "discovered" 6
> months ago? The important thing is to know what can be done BEFORE the
> patch has been released.

I presume you're talking about situations where the researcher has
coordinated their advisory release with the vendor so that vulnerability
details are not disclosed before a patch is available. Yes, I still
think these advisories are valuable, because they often provide more
detail about what was broken than the vendor advisories do (which often
read, "There's a security problem in product X. Install this patch.").
Such details let us learn what goes wrong when building software systems
and help us avoid similar problems in the future. Occasionally the
advisories from researchers also provide some insight into how they
found the vulnerabilities (although I've seen this decrease in recent
years), which helps others learn how to find vulnerabilities.

> Also a big portion of "advisories" seem to be related to the most
> obscure softwares and home made PHP applications that most of us never
> even care about anyway. These advisories clutter the ones that have even
> the slightest validity.

To some extent, I agree.
At the same time, though, I think these tend to come from people who are
learning how to do vulnerability analysis, starting with the low-hanging
fruit. To that end, I think the ability for these people to get
peer-reviewed feedback on their work is immensely useful to the
community as a whole.

> One more thing about "advisories". I think it would be better to release
> them immediately and let people know what they are facing. With public
> dissemination of a vulnerability perhaps someone will release a 3rd
> party patch or another inventive way of protecting oneself. Holding it
> "secret" really doesn't help anyone. If anything it prevents people from
> trying to find a way to fix the vulnerability.

First off, I don't think anyone can seriously say it doesn't help
_anybody_ -- it certainly helps the vendor. And if an IDS/IPS company
holds the research and has a signature for it out on their system, it
certainly helps them too.

Here we find a variation on the ancient (in Internet terms) argument
about full disclosure: if bugs are public knowledge, will vendors be
more responsive about fixing them? I don't think you're going to see
publicly developed patches for any but the most extreme cases. At the
same time, I see some advisories where the vendor was notified more than
six months ago and is only now getting a patch out. That's a pretty
large window of vulnerability if anyone malicious knows about the
problem (and if we're finding these bugs in the open community, there's
no reason they wouldn't). I think security researchers need to continue
to think about exerting due pressure on vendors to get bugs patched.

My two bits,
Terry

#include <stddisclaimer.h>