In the following, I assume that we want to minimize the probability that a user falls victim to an attack based on a preexisting vulnerability in a piece of software, one that you, the security researcher, have just discovered. I'll call this probability P.

D. J. Bernstein wrote:

> It's not _my_ bug. It's also not my student's bug. It's the _program's_
> bug. Sorry to have to break the news to you, but the attacker has had a
> _year_ to exploit the bug if the program was released a year ago.

Only attackers who know about the bug, though. P is proportional to the number of people, N, who know about the bug before it is fixed.

Observation 1: When you disclose the existence of the bug, you increase N, which tends to increase P.

But P is also proportional to the time, T, that elapses before the bug is fixed. By disclosing the existence of the bug, you greatly increase the likelihood that the bug is fixed in the near term.

Observation 2: When you disclose the existence of the bug, you decrease T, which tends to decrease P.

There is a trade-off.

> Same delusion: you think that users are protected from security holes if
> the security holes are patched before they're announced. Sorry, but
> that's not nearly fast enough. Protecting the users means making the
> programs secure before they're deployed in the first place.

But the question of vendor notification is one of software that *has* been deployed with security bugs. In that case, notifying the vendor before disclosure (with the understanding that public disclosure will follow after a reasonable period of time) provides the benefits of Observation 2 without the costs of Observation 1.

You would argue, I think, that by forgoing vendor notification, you put the fear of God into programmers, and this will ultimately lead to the eradication of bugs in the first place. I'm skeptical.

-David Eisner
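
To make the trade-off concrete, here is a minimal back-of-the-envelope sketch, assuming P grows roughly in proportion to both N and T (so P ~ k*N*T for some constant k). The scenarios, the constant k, and every number below are hypothetical, chosen only to illustrate why vendor notification with a disclosure deadline can keep both factors small:

    # Toy model of the trade-off above: assume P grows with both N (people
    # who know about the bug) and T (days until a fix ships), P ~ k*N*T.
    # All constants and numbers below are hypothetical, for illustration only.

    def risk(n_who_know, days_until_fix, k=1e-6):
        """Relative chance that some user is attacked via the bug."""
        return k * n_who_know * days_until_fix

    scenarios = {
        "immediate full disclosure":      risk(10_000, 14),    # huge N, small T
        "vendor notification + deadline": risk(10, 30),        # small N, modest T
        "tell no one":                    risk(2, 1_000),      # tiny N, huge T
    }

    for name, p in scenarios.items():
        print(f"{name:32s} relative risk = {p:.4f}")

Under these made-up numbers the middle strategy comes out lowest, which is the point of the argument above; real values of N, T, and the scaling are of course unknown.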