Ned Freed wrote:

>> Granted that it's always dangerous to extrapolate from a small sample.
>
>> But is anybody's experience valid, then?
>
>> From my perspective, the guys who run these large email systems
>> generally seem to believe that they have to do whatever they're doing,
>
> Keith, with all due respect, I haven't exactly seen a flood of well-designed
> proposals for viable alternatives. Perhaps instead of simply reiterating over
> and over that these beliefs are false you should instead try coming up with an
> alternative that demonstrates their falseness.

Well, at the risk of over-extrapolating from scant data again, there does
seem to be some variation in both practice and the quality of user
experience among mail system operators. So maybe they don't all "have" to
be doing exactly what they're doing now.

>> regardless of how much the filtering criteria that they're using have
>> anything to do with the desirability of the mail to the recipient,
>
> Schemes that attempt to assess the desirability of the email to the recipient
> have been tried - personal whitelists, personal Bayesian filters, etc. etc. In
> practice they haven't worked all that well, perhaps due to the average user's
> inability to capably and consistently perform such assessments.

I think we're saying slightly different things. It's one thing to say
that attempts to use recipient feedback to tune spam filters on an
individual basis have so far not worked well (which is what I take you to
be saying, and which also corresponds to my understanding). It's quite
another thing to say that the recipient's experience of existing spam
filtering systems (however they are tuned) is irrelevant.

>> and
>> regardless of any particular sender's or recipient's actual experience
>> with having their mail filtered.
>
> Well, sure. When you have a million users it's not only difficult to focus on
> an individual user's needs, it's also totally inappropriate.

Mumble. It's those millions of individual users' experiences that
fundamentally matter. If the test cases don't accurately predict their
experiences, then the test cases are wrong.

A more defensible statement is that it's not economically feasible to pay
staff to deal with each of those individual users on an individual basis,
understand their individual problems, and try to fix them on an
individual basis.

>> IOW, it's very easy for both the individual and the mail system operator
>> to find reasons to disregard the other's experience. Who is to say who
>> is right?
>
> Absent a working crystal ball there is of course no way to *know* who's right.
> But consider this: If you have cancer, would you be more comfortable taking
> that quack nostrum that one guy says cured him or the medication with proven
> efficacy in a bunch of double blind clinical trials? That one guy *could* be
> right. But is this a chance you want to take?

It's an interesting analogy, because the medical profession (at least in
the US) also seems to have lost its ability to consider the individual
situation of each patient - assuming that individual patients' success
rates will closely correspond to those predicted by aggregate statistics,
for which typically only a small number of variables were considered.

>> Once again, the crucial issues seem to be transparency, accountability,
>> and granularity rather than the reputation reporting mechanism. Which is
>> not to say that the mechanism doesn't also warrant improvement.
>
> On this we agree, more or less. But it seems to me that these goals are far
> more likely to be met with a set of standardized mechanisms than without.

That's my assumption also.

Keith