Re: Preventing spammers from infiltrating the Red Hat mailing lists

On Wed, 13 Apr 2005 12:52:22 -0500, Jeffrey C. Ollie <jeff@xxxxxxxxxx> wrote:

> Yes they do have spiders.  And I'd bet that most of those spiders know
> how to turn "user at example.com" and many of the other common
> obfuscations into "user@xxxxxxxxxxx".  And if that doesn't work, they'll
> just subscribe to all of the mailing lists that they can find to harvest
> email addresses directly from the emails.  And if that doesn't work
> they'll just try dictionary attacks against your SMTP server.  And once
> one spammer has your email address they'll quickly sell it to every
> other spammer.
>
> This all just goes to show that you can't hide your email address.
> They'll find it one way or the other, sooner or later.  So I wouldn't
> waste a lot of time trying.
>
> Instead, investigate one of the many spam filtering systems out there.
> Since I see that you are using Mozilla, take a look at:
>
> <http://www.mozilla.org/mailnews/spam.html>


This is nuts.

The best defense against spam is a defense in depth. It makes sense to filter spam at the mail server and the mail client, but it also makes sense to prevent one's address from being exposed to spammers.

I had one address that was so popular with viruses and spam that the mail server was regularly failing in one way or another under the load, and I ultimately abandoned that account.

Yes, "user at example.com" is lame, but you can do a lot better by requiring:

(i) that a user have to perform a complicated task (register and log in, for instance) in order to harvest an address, and
(ii) wrapping the address in javascript + funny HTML tricks (for instance, using numeric entities for characters, inserting comments into the text) to make sure anything less than a complete HTML parser won't get the address.
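
As a rough sketch of (ii) -- the variable names and the user@example.com address here are made up for illustration, and any real page would shuffle things differently -- the raw HTML never carries the address in one piece, and a few lines of JavaScript assemble it at view time:

    <script type="text/javascript">
    // Build the address client-side: a crawler that doesn't run
    // JavaScript never sees it assembled.
    var name = "user";                      // local part
    var host = "example" + "." + "com";     // domain, concatenated
    // &#64; is the numeric entity for "@"; the browser decodes it
    // when it parses the markup written below.
    document.write('<a href="mailto:' + name + '&#64;' + host + '">'
                   + name + ' at ' + host + '</a>');
    </script>

A regexp run over the page source finds no "@" and no complete domain; only something that interprets the script and decodes the entity gets the address.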


These two actions will sideline general-purpose spamcrawlers that are trying to crawl the whole web. A defense against specialized webcrawlers involves keeping an eye on crawler behavior -- prohibit legitimate crawlers from attempting to get e-mail addresses, and firewall any site that requests too many of them.
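
A sketch of that policy (the /email/ path and the blocked address are hypothetical): declare the address pages off limits in robots.txt, so any crawler that still fetches them is misbehaving by definition, and drop the worst offenders at the packet filter:

    # robots.txt -- well-behaved crawlers must stay away from
    # the pages that reveal addresses
    User-agent: *
    Disallow: /email/

    # firewall a host that keeps hammering those pages anyway
    # (192.0.2.1 is a documentation address standing in for it)
    iptables -A INPUT -s 192.0.2.1 -j DROP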

(And yes, I work on a site that periodically does get attacked by specialized webcrawlers trying to do just that.)

The point of (ii) is to protect against worms/viruses that scan the browser caches -- the harder the address is for a worm to parse, the better off you are...
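
To make that concrete (the pattern below is the classic naive one, not any particular worm's), a scanner grepping cached files for a bare address comes up empty against the obfuscated form:

    // a naive harvesting regexp, of the sort a worm might carry
    var emailRe = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/;
    // what actually sits in the cache after trick (ii):
    var cached = '&#117;&#115;&#101;&#114;<!-- x -->&#64;example.com';
    emailRe.test(cached);   // false: there is no literal "@" to find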

