There are a number of services and software systems that mail sites and users can use to reduce the load of spam on their systems and mailboxes. Some of these depend upon rejecting email from Internet sites known or likely to send spam. Others rely on automatically analyzing the content of email messages and weeding out those which resemble spam. These two approaches are sometimes termed blocking and filtering.
Blocking and filtering each have their advocates and advantages. While both reduce the amount of spam delivered to users’ mailboxes, blocking does much more to alleviate the bandwidth cost of spam, since spam can be rejected before the message is transmitted to the recipient’s mail server. Filtering tends to be more thorough, since it can examine all the details of a message. Many modern spam filtering systems take advantage of machine learning techniques, which vastly improve their accuracy over manual methods. However, some people find filtering intrusive to privacy, and many mail administrators prefer blocking to deny access to their systems from sites tolerant of spammers.
DNS-based Blackhole Lists, or DNSBLs, are used for heuristic filtering and blocking. A site publishes lists (typically of IP addresses) via the DNS, in such a way that mail servers can easily be set to reject mail from those sources. There are literally scores of DNSBLs, each of which reflects different policies: some list sites known to emit spam; others list open mail relays or proxies; others list ISPs known to support spam. Other DNS-based anti-spam systems list known good (“white”) or bad (“black”) IPs, domains, or URLs, including RHSBLs and URIBLs. For history, details, and examples of DNSBLs, see DNSBL.
Until recently, content filtering techniques relied on mail administrators specifying lists of words or regular expressions disallowed in mail messages. Thus, if a site receives spam advertising “herbal Viagra”, the administrator might place these words in the filter configuration. The mail server would then reject any message containing the phrase.
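Such a static filter can be sketched in a few lines; the blocked-phrase list below is purely illustrative of what an administrator might maintain by hand:

```python
import re

# Illustrative blocked-phrase list, maintained manually by an administrator.
BLOCKED_PATTERNS = [
    re.compile(r"herbal\s+viagra", re.IGNORECASE),
    re.compile(r"mortgage\s+refinanc", re.IGNORECASE),
]

def is_rejected(body: str) -> bool:
    """Reject any message whose body matches a configured pattern."""
    return any(p.search(body) for p in BLOCKED_PATTERNS)
```

The disadvantages of this approach, discussed below, follow directly from the fact that every pattern must be written and maintained by hand.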
Content-based filtering can also act on content other than the words and phrases that make up the body of the message. Primarily, this means examining the header of the email, the part of the message that carries information about the message rather than its body text. Spammers will often spoof fields in the header in order to hide their identities, or to try to make the email look more legitimate than it is; many of these spoofing methods can be detected. Also, spam-sending software often produces a header that violates the RFC 2822 standard on how the email header is supposed to be formed.
Disadvantages of this static filtering are threefold: First, it is time-consuming to maintain. Second, it is prone to false positives. Third, these false positives are not equally distributed: manual content filtering is prone to reject legitimate messages on topics related to products advertised in spam. A system administrator who attempts to reject spam messages which advertise mortgage refinancing may easily inadvertently block legitimate mail on the same subject.
Finally, spammers can change the phrases and spellings they use, or employ methods to try to trip up phrase detectors. This means more work for the administrator. However, it also has some advantages for the spam fighter. If the spammer starts spelling “Viagra” as “V1agra” or “Via_gra”, it makes it harder for the spammer’s intended audience to read their messages. If they try to trip up the phrase detector, by, for example, inserting an invisible-to-the-user HTML comment in the middle of a word (“Via<!-- -->gra”), this sleight of hand is itself easily detectable, and is a good indication that the message is spam. And if they send spam that consists entirely of images, so that anti-spam software can’t analyze the words and phrases in the message, the fact that there is no readable text in the body can be detected.
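The comment-in-a-word trick in particular is easy to detect, precisely because an HTML comment has no legitimate reason to appear between the letters of a word. A minimal sketch of such a check:

```python
import re

# An HTML comment wedged between word characters ("Via<!-- -->gra")
# is itself a strong spam indicator.
COMMENT_IN_WORD = re.compile(r"\w<!--.*?-->\w", re.DOTALL)

def has_comment_obfuscation(html_body: str) -> bool:
    """Flag HTML comments embedded inside words."""
    return bool(COMMENT_IN_WORD.search(html_body))
```

A comment in an ordinary position (between tags, or at the start of the document) does not trigger the check, so legitimate HTML mail is unaffected.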
However, content filtering can also be implemented by examining the URLs present (i.e. spamvertised) in an email message. This form of content filtering is much harder to disguise as the URLs must resolve to a valid domain name. Extracting a list of such links and comparing them to published sources of spamvertised domains is a simple and reliable way to eliminate a large percentage of spam via content analysis.
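The check can be sketched as follows; here a small in-memory set of placeholder domain names stands in for a published source of spamvertised domains such as a URIBL:

```python
import re
from urllib.parse import urlparse

# Placeholder blocklist; a real deployment would consult a published
# source of spamvertised domains (e.g. a URIBL) instead.
SPAMVERTISED = {"example-spam.test", "cheappills.test"}

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def links_to_spamvertised(body: str) -> bool:
    """Extract URLs from the message body and match their hosts
    against the list of known spamvertised domains."""
    for url in URL_RE.findall(body):
        host = urlparse(url).hostname or ""
        if host.lower() in SPAMVERTISED:
            return True
    return False
```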
Statistical filtering was first proposed in 1998 by Mehran Sahami et al. at the AAAI-98 Workshop on Learning for Text Categorization. A statistical filter is a kind of document classification system, and a number of machine learning researchers have turned their attention to the problem. Statistical filtering was popularized by Paul Graham’s influential 2002 article A Plan for Spam, which proposed the use of naive Bayes classifiers to predict whether messages are spam or not, based on collections of spam and nonspam (“ham”) email submitted by users.
Statistical filtering, once set up, requires no maintenance per se: instead, users mark messages as spam or nonspam and the filtering software learns from these judgements. Thus, a statistical filter does not reflect the software author’s or administrator’s biases as to content, but it does reflect the user’s biases as to content; a biochemist who is researching Viagra won’t have messages containing the word “Viagra” flagged as spam, because “Viagra” will show up often in his or her legitimate messages. A statistical filter can also respond quickly to changes in spam content, without administrative intervention.
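The core of such a filter fits in a short sketch. The following minimal word-level naive Bayes classifier learns from per-message judgements; the toy training corpus stands in for real user-submitted spam and ham:

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Toy naive Bayes spam filter trained from user judgements.
    Assumes both classes have received at least one training message."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.messages = {"spam": 0, "ham": 0}

    def train(self, label, text):
        # Called whenever a user marks a message as spam or ham.
        self.messages[label] += 1
        self.counts[label].update(text.lower().split())

    def is_spam(self, text):
        scores = {}
        total = sum(self.messages.values())
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"]))
        for label in ("spam", "ham"):
            # Log prior plus Laplace-smoothed log likelihoods.
            score = math.log(self.messages[label] / total)
            n = sum(self.counts[label].values())
            for word in text.lower().split():
                score += math.log((self.counts[label][word] + 1) / (n + vocab))
            scores[label] = score
        return scores["spam"] > scores["ham"]
```

Because the word statistics come entirely from the user’s own mail, the filter automatically adapts to that user’s vocabulary, as in the biochemist example above.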
Spammers have attempted to fight statistical filtering by inserting many random but valid “noise” words or sentences into their messages while attempting to hide them from view, making it more likely that the filter will classify the message as neutral. Attempts to hide the noise words include setting them in tiny font or the same colour as the background. However, these noise countermeasures seem to have been largely ineffective.
Software programs that implement statistical filtering include Bogofilter, the e-mail programs Mozilla and Mozilla Thunderbird, and later revisions of SpamAssassin. Another notable project is CRM114, which hashes phrases and performs Bayesian classification on the phrases.
There is also the free mail filter POPFile, which sorts mail into as many categories as the user wants (family, friends, co-workers, spam, and so on) using Bayesian filtering.
Checksum-based filtering takes advantage of the fact that, for any individual spammer, the messages sent out are usually mostly identical, differing only in details such as web bugs or the inclusion of the recipient’s name or email address. Checksum-based filters strip out everything that might vary between messages, reduce what remains to a checksum, and look that checksum up in a database that collects the checksums of messages email recipients have reported as spam (some email clients provide a button for nominating a message as spam); if the checksum is in the database, the message is likely to be spam.
The advantage of this type of filtering is that it lets ordinary users help identify spam, not just administrators, thus vastly increasing the pool of spam fighters. The disadvantage is that spammers can insert unique invisible gibberish, known as hashbusters, into the middle of each of their messages, making each message unique and giving it a different checksum. This leads to an arms race between the developers of the checksum software and the developers of the spam-generating software.
Checksum-based filtering methods include:
- Distributed Checksum Clearinghouse
- Vipul’s Razor
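The mechanism these services share can be sketched as follows; an in-memory set stands in for the distributed checksum database, and the normalization rules shown are illustrative of the kind of varying content that must be stripped:

```python
import hashlib
import re

# In-memory stand-in for a shared database of checksums that users
# have reported as spam (e.g. via a "report spam" button).
reported_spam = set()

def fuzzy_checksum(body: str) -> str:
    """Strip the parts that vary per recipient, then hash what remains."""
    text = re.sub(r"\S+@\S+", "", body)        # drop email addresses
    text = re.sub(r"https?://\S+", "", text)   # drop per-recipient URLs / web bugs
    text = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(text.encode()).hexdigest()

def report_spam(body: str):
    reported_spam.add(fuzzy_checksum(body))

def is_known_spam(body: str) -> bool:
    return fuzzy_checksum(body) in reported_spam
```

Two copies of the same spam run, personalized with different addresses and tracking URLs, reduce to the same checksum; the hashbuster arms race described above is a fight over exactly which varying content the normalization step can and cannot strip.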
Authentication and Reputation (A&R)
A number of systems have been proposed to allow acceptance of email from servers which have authenticated in some fashion as senders of only legitimate email. Many of these systems use the DNS, as do DNSBLs; but rather than being used to list nonconformant sites, the DNS is used to list sites authorized to send email, and (sometimes) to determine the reputation of those sites. Other methods of identifying ham and spam are still used alongside them. A&R allows much ham to be identified more reliably, which allows spam detectors to be made more sensitive without causing more false positives. The increased sensitivity allows more spam to be identified as such. A&R methods also tend to be less resource-intensive than other filtering methods, which can be skipped entirely for messages that A&R identifies as ham.
Sender-supported whitelists and tags
There are a small number of organizations which offer IP whitelisting and/or licensed tags that can be placed in email (for a fee) to assure recipients’ systems that the messages thus tagged are not spam. This system relies on legal enforcement of the tag. The intent is for email administrators to whitelist messages bearing the licensed tag.
A potential difficulty with such systems is that the licensing organization makes its money by licensing more senders to use the tag, not by strictly enforcing the rules upon licensees. A particular concern is that it is the senders whose messages are most likely to be considered spam who would accrue the greatest benefit from using such a tag. Together, these factors form a perverse incentive for licensing organizations to be lenient with licensees who have offended. However, the value of a license would drop if it were not strictly enforced, and the financial gains from enforcement can themselves provide an additional incentive for strict enforcement. The Habeas mail classing system attempts to address this issue further by classing email according to origin, purpose, and permission, the purpose being to describe why the email is likely not spam but permission-based email.
Another approach for countering spam is to use a “ham password”. Systems that use ham passwords ask unrecognised senders to include in their email a password that demonstrates that the email message is a “ham” (not spam) message. Typically the email address and ham password would be described on a web page, and the ham password would be included in the “subject” line of an email message. Ham passwords are often combined with filtering systems, to counter the risk that a filtering system will accidentally identify a ham message as spam.
The “plus addressing” technique appends a password to the “username” part of the email address.
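Parsing such an address is straightforward; a minimal sketch (the convention shown, splitting the local part at the first “+”, is the one commonly used by mail systems that support plus addressing, though support varies):

```python
def split_plus_address(address: str):
    """Split user+password@host into (user, password, host).
    Returns password as None when no '+' tag is present."""
    local, _, host = address.partition("@")
    user, _, password = local.partition("+")
    return user, password or None, host
```

The receiving system delivers to the base mailbox either way, but can treat the presence or absence of the expected password as a ham signal.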
Since spam occurs primarily because it is so cheap to send, a proposed set of solutions require that senders pay some cost in order to send spam, making it uneconomic.
In one variant, some gatekeeper such as Microsoft would sell electronic stamps and keep the proceeds. In another, a micropayment of electronic money would be paid by the sender to the recipient, the recipient’s ISP, or some other gatekeeper.
Hashcash and similar systems require that a sender pay a computational cost by performing a calculation that the receiver can later verify. Verification must be much faster than performing the calculation, so that the computation slows down a sender but does not significantly burden a receiver. The point is to slow down the machines that send most of the spam, which often number in the millions. While a user who wants to send email to a moderate number of recipients suffers a delay of only a few seconds, sending millions of emails would take an unaffordable amount of time.
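The asymmetry between minting and verifying can be sketched as follows. This is a simplified proof-of-work in the spirit of Hashcash, not the actual Hashcash stamp format (which includes a version, date, and random fields); the stamp layout here is illustrative:

```python
import hashlib
from itertools import count

def mint(recipient: str, bits: int = 20) -> str:
    """Search for a stamp whose SHA-1 hash has `bits` leading zero bits.
    Expected cost: about 2**bits hash operations."""
    for nonce in count():
        stamp = f"{recipient}:{nonce}"
        digest = hashlib.sha1(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp

def verify(stamp: str, recipient: str, bits: int = 20) -> bool:
    """Verification is a single hash, far cheaper than minting."""
    if not stamp.startswith(recipient + ":"):
        return False
    digest = hashlib.sha1(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0
```

Binding the stamp to the recipient address prevents a spammer from minting one stamp and reusing it across millions of recipients.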
As a refinement to stamp systems was the idea of requiring that the micropayment only be retained if the recipient considered the email to be abusive. This addressed the principal objection to stamp systems: popular free legitimate mailing list hosts would be unable to continue to provide their services if they had to pay postage for every message they sent out.
A difficulty that must be dealt with by most anti-spam methods, including DNSBLs, Authentication and Reputation (A&R), sender-supported whitelists and tags, ham passwords, cost-based systems, heuristic filtering, and challenge/response systems, is that spammers already (illegally) use other people’s computers to send spam. The computers in question are infected with viruses and spyware operated by the spam senders, in some cases seriously damaging the computer’s responsiveness for the legitimate user. Spam from the legitimate user’s computer can be sent using the user’s and/or system’s identity, list of correspondents, reputation, credentials, stamps, hashcash, and/or bonds. The added motivation to steal from such systems in order to abuse these things may simply impel spammers to infect more computers and cause greater damage. On the other hand, this could compel computer users to finally secure their systems, reducing the number of botnets, which would have myriad other benefits, as botnets are used for extortion, phishing, and terrorism, as well as spam. Ultimately, any system that holds senders responsible for the mail they send needs to deal with the situation of irresponsible senders that may send both spam and ham.
Heuristic filtering, such as is implemented in the program SpamAssassin, uses some or all of the various tests for spam mentioned above, and assigns a numerical score to each test. Each message is scanned for these patterns, and the applicable scores tallied up. If the total is above a fixed value, the message is rejected or flagged as spam. By ensuring that no single spam test by itself can flag a message as spam, the false positive rate can be greatly reduced. 
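A stripped-down version of this scoring approach follows; the rules, scores, and threshold are illustrative, whereas real systems such as SpamAssassin ship hundreds of individually tuned tests:

```python
import re

# Illustrative rules with hand-assigned scores. Note that no single
# rule reaches the threshold on its own.
RULES = [
    (re.compile(r"viagra", re.IGNORECASE), 2.5),
    (re.compile(r"100% free", re.IGNORECASE), 1.5),
    (re.compile(r"^Subject:\s*$", re.MULTILINE), 1.0),  # empty subject line
]
THRESHOLD = 5.0

def spam_score(message: str) -> float:
    """Tally the scores of every rule that matches the raw message."""
    return sum(score for pattern, score in RULES if pattern.search(message))

def is_spam(message: str) -> bool:
    return spam_score(message) >= THRESHOLD
```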
Tarpits and Honeypots
A tarpit is any server software which intentionally responds pathologically slowly to client commands. A honeypot is a server which attempts to attract attacks. Some mail administrators operate tarpits to impede spammers’ attempts at sending messages, and honeypots to detect the activity of spammers. By running a tarpit which appears to be an open mail relay, or which treats acceptable mail normally and known spam slowly, a site can slow down the rate at which spammers can inject messages into the mail facility.
One tarpit design is the teergrube, whose name is simply German for “tarpit.” This is an ordinary SMTP server which intentionally responds very slowly to commands. Such a system will bog down SMTP client software, as further commands cannot be sent until the server acknowledges the earlier ones. Several SMTP MTAs, including Postfix and Exim, have a teergrube capability built in: when confronted with a client session that causes errors such as spam rejections, they will slow down their responses. A similar approach is taken by TarProxy.
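The error-triggered slowdown can be sketched as a simple policy function; the base delay, step, and cap below are illustrative parameters, not the values used by any particular MTA:

```python
def response_delay(error_count: int, base: float = 0.0,
                   step: float = 1.0, cap: float = 30.0) -> float:
    """Seconds to wait before the next SMTP reply in this session:
    grow with each client error (e.g. a rejected spam recipient),
    up to a cap. A well-behaved client never sees a delay."""
    return min(base + step * error_count, cap)
```

The MTA sleeps for this long before each reply; since SMTP is lock-step, a client racking up rejections is served ever more slowly while normal sessions are unaffected.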
Another design for tarpits directly controls the TCP/IP protocol stack, holding the spammer’s network socket open without allowing any traffic over it. By reducing the TCP window size to zero, but continuing to acknowledge packets, the spammer’s process may be tied up indefinitely. This design is more difficult to implement than the former. Aside from anti-spam purposes, it has also been used to absorb attacks from network worms. 
As of late 2005 much of the spam sent is through so-called “zombie” systems, of which there are potentially a very large number. This makes the actual effectiveness of tarpits questionable, as there are so many spam sources that slowing just a few has little real effect on the volume of spam received.
Another approach is simply an imitation MTA (open relay honeypot) which gives the appearance of being an open mail relay. Spammers who probe systems for open relays will find such a host and attempt to send mail through it, wasting their time and potentially revealing information about themselves and the source of their spam to the unexpectedly alert operator of the honeypot (as opposed to the careless or unskilled operator typically in charge of an open relay). Such a system may simply discard the spam attempts, submit them to DNSBLs, or store them for analysis. It may be possible to examine or analyze the intercepted spam to find information that allows other countermeasures. (One honeypot operator was able to alert a freemail provider to a large number of accounts that had been created as dropboxes for the receipt of responses to spam. Disabling these dropbox accounts made the entire spam run, including the spam messages relayed through actual open relays, useless to the spammer: he could not receive any of the responses to the spam sent by gullible customers.) The SMTP honeypot may also selectively deliver relay test messages to give a stronger appearance of an open relay (though care is needed here, as this means the honeypot itself and the network it is on could end up on spam blacklists). SMTP honeypots of this sort have been suggested as a way for end-users to interfere with spammers’ activities, and implementations exist in Java and Python.
As of late 2005 open relay abuse to send spam has greatly declined, resulting in a lowered active effectiveness of open relay honeypots. (Passively, the honeypots or threat of same create an inducement for spammers to not abuse open relays.) Other types of honeypot (below) may still have great effectiveness.
Spammers also abuse open proxies, and open proxy honeypots (proxypots) have had substantial success. Ron Guilmette reported in 2003 that he succeeded in getting over 100 spammer accounts terminated in under three months using his network (of unspecified size) of proxypots. At that time spammers were so careless that they sent spam directly from their own servers to the abused open proxy, making it trivial to determine the spammer’s IP address, so that it was easy to report the spammer to the ISP controlling that address and easy for that ISP to terminate the spammer’s account.
Unlike most other anti-spam techniques, tarpits and honeypots work at the relay, proxy, or zombie (collectively, “abuse”) level. They work by targeting spammer behavior rather than spam content. One beneficial consequence is that these tools need no means of distinguishing spam from non-spam: because they capture spam at the abuse level, they are not part of any legitimate email pathway, and it can be confidently assumed that what they capture is 100% spam or spam-related (e.g., test messages). Anti-spam measures at (or after) the destination server level protect specific email addresses but must include code to distinguish spam from non-spam. Anti-spam measures at the abuse level protect whichever email addresses are targeted by the spam directed through them, and are hence non-specific, but need no code to distinguish spam from non-spam. The main purpose of abuse-level tools is to target spam and spammers themselves, while the main purpose of server-level tools is to protect specific email addresses. What abuse-level tools lose in specificity may be more than made up for by the inherent simplicity that results from not having to separate valid email from invalid email.
In late 2005 Microsoft announced that it had converted an actual zombie system to a zombie honeypot. One result of this was a lawsuit by Microsoft against about 20 defendants, based on evidence collected by the zombie honeypot.
Note that there is some terminological confusion. Some people refer to “spamtraps” as “honeypots.” In this context a “spamtrap” is an email address created specifically to attract spam. These run at the destination level rather than at the relay, proxy or “spam zombie” level.
Another method which may be used by internet service providers (or by specialized services) to combat spam is to require unknown senders to pass various tests before their messages are delivered. These strategies, termed challenge/response systems or C/R, are currently controversial among email programmers and system administrators.
For a discussion of the advantages and disadvantages of these systems, see Challenge-response spam filtering.