Yakov Shafranovich wrote:

> Go ahead - I am looking for any kind of solutions that the IETF can
> take on in order to reduce the problem. Many solutions have been
> revolving around trust - but in the world where a computer can be
> easily hijacked, trust becomes harder to maintain.

Trust is the problem [1]. What you mention below is a valid way to
induce trust, namely by relying on trusted introducers (for trusted
*and* distrusted MTAs). The question of qualifying the trusted
introducers themselves is also qualitatively handled in the model you
summarize. One thing that is missing is what I call the trusted
witnesses, which are also necessary to induce trust [2]. Trusted
introducers and trusted witnesses let you build two open-ended trust
chains for every action: the witness chain ("how did we get here?")
provides the assurances that led to the action, including the action
itself, while the introducer chain ("where do we go from here?")
provides the assurances both for a continuation of that action and for
other actions that may need assurances stemming from it.

> One example of what the ASRG has been looking at is a distributed web
> of reputation. Each MTA or domain can publish a list of MTAs that it
> knows, including basic statistics on how long the MTA has been sending
> mail, average volume, etc. In addition to that basic information, you
> can also publish additional information such as "I think this is a
> spammer because SpamAssassin detects 99% of all email from that MTA as
> spam", etc. The basic statistical information can be used to detect
> zombies and the extended information can be used to allow
> like-thinking domains to make joint decisions. The question of how
> much difference this would make is up for debate, and there are
> questions of how a new MTA can be introduced into the system, "rule of
> the mob", etc.

Seeing this as a web of trust would seem to clarify the issues you
mention and point out what is missing.
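To make the reputation-record idea above concrete, here is a minimal
sketch in Python. All field names, thresholds, and the voting rule are
my own illustrative assumptions, not anything the ASRG has specified;
it only shows the shape of "basic statistics" plus a naive joint
decision among like-thinking domains.

```python
# Hypothetical sketch: a per-MTA reputation record a domain might
# publish, a zombie heuristic, and a simple pooled spam verdict.
from dataclasses import dataclass

@dataclass
class ReputationRecord:
    mta: str                 # identity of the observed MTA (hostname or IP)
    days_seen: int           # how long this MTA has been sending mail
    avg_daily_volume: float  # average message volume observed
    spam_ratio: float        # fraction of its mail the publisher's filter flags

def looks_like_zombie(rec: ReputationRecord) -> bool:
    # A host that appeared recently and is already sending high volume
    # is the classic hijacked-machine signature (thresholds are made up).
    return rec.days_seen < 7 and rec.avg_daily_volume > 10_000

def joint_spam_verdict(records: list[ReputationRecord],
                       threshold: float = 0.9) -> bool:
    # Like-thinking domains pool their observations: flag the MTA if a
    # majority of publishers report a spam ratio above the threshold.
    votes = [rec.spam_ratio > threshold for rec in records]
    return sum(votes) > len(votes) / 2

# Example: three domains publish records about the same MTA.
reports = [
    ReputationRecord("mail.example.net", 3, 50_000, 0.99),
    ReputationRecord("mail.example.net", 3, 48_000, 0.97),
    ReputationRecord("mail.example.net", 3, 52_000, 0.40),
]
print(looks_like_zombie(reports[0]))  # True: new host, high volume
print(joint_spam_verdict(reports))    # True: 2 of 3 report >90% spam
```

Even this toy version makes the open questions visible: a brand-new MTA
has no record at all, and a simple majority vote is exactly where "rule
of the mob" enters.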
Relying on reputation alone (and not also on current behavior, etc.)
can lead to race conditions, bait-and-switch, spoofing, laggard
reactions, and a host of other easily exploitable threats. BTW, if
reputation alone were a solution, eBay customers would not have so many
problems with online auctions -- the Federal Trade Commission says it
receives more complaints about Internet auction fraud than any other
online scam, and Internet auctions accounted for 48 percent of all
Internet fraud complaints filed with the commission in 2003 (1/26/04
article by Brian Krebs on eBay feedback forgers).

Comments?

Cheers,
Ed Gerck

[1] Understanding human trust is exactly what brought me to that great
IT question in 1997: how can I trust a set of bytes? My answer provided
a framework that has been useful in the field of information security.
The answer also provides a framework for understanding human trust (as
expected fulfillment of behavior) and for bridging trust between humans
and machines (as qualified information based on factors independent of
that information). The original reference is
http://nma.com/mcg-mirror/trustdef.htm -- please google for "gerck
trust" to find applications and comments by others.

[2] An example is described in http://nma.com/papers/e2e-security.htm#TR
"...under the principle that every action needs both a trusted
introducer and a trusted witness. We call this principle the Trust
Induction Principle."