Spam Filters

Let's face it: Antispam laws haven't made a dent in the flood of junk e-mail. We put six Exchange-compatible technological spam killers to the test. Our pick for Editor's Choice: MailFrontier's Gateway Server.

November 18, 2005


Of the 16 vendors we invited, Cloudmark, Commtouch Software, MailFrontier, Red Earth Software, Sunbelt Software and Symantec delivered products that met our testing criteria. Unfortunately, our tests of Sunbelt's iHateSpam yielded unusable results; our Sunbelt cache file became corrupt during the test, which left its spam engines without the data needed to determine if a message was spam. Sunbelt support worked with us on the problem, something the company said it had not seen in the past, but we ran out of time and had to regretfully exclude iHateSpam from this review. Clearswift, GFI Software, Hexamail, Lyris Technologies, McAfee, NetIQ, SurfControl, Sybari Software, TrendMicro and TumbleWeed Communications declined our invitation.

Accuracy Remains King

We designed our accuracy test to assess antispam technologies out of the box, without reliance on customer tuning (see "How We Tested Spam Filters"). In fact, our methodology highlights the two most critical criteria for selecting a commercial antispam product: low administrative overhead and content-analysis accuracy. Unless you're a spammer, junk e-mail doesn't add a cent to your bottom line--it just eats into productivity--so limiting the cost of controlling it is important.

On the other hand, a misidentified customer e-mail could cost your company, so we weighted accuracy, at 30 percent, as the most important measurement of an antispam product's performance. It's important to understand our definition of accuracy--it differs from vendor marketecture that pegs accuracy based on how well engines and signatures identify spam messages, without reporting how often they classify legitimate mail as junk. We've found that spam engines are all too likely to identify as spam legit bulk e-mail, such as mailing-list traffic, newsletters and user-requested product-offer e-mail.

For our purposes, we called spam that made it to our inbox a false negative, while legitimate e-mail that was identified as spam and stopped was considered a false positive. Because false positives are much more detrimental to organizations, for scoring purposes we considered them five times more costly than false negatives (one false positive = five false negatives). We included our nonweighted numbers for comparison, but we stuck with the weighted information for scoring (see "Spam Filter Accuracy Results").
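For the curious, the arithmetic behind our weighted score fits in a few lines. Here's a minimal Python sketch; the function name is our own, but the 5-to-1 weighting matches the scoring just described:

```python
def weighted_accuracy(false_positives, false_negatives, total_messages, fp_weight=5):
    """Score a filter's accuracy, counting each false positive
    as fp_weight false negatives (5, per our scoring rules)."""
    weighted_errors = false_positives * fp_weight + false_negatives
    return 1 - weighted_errors / total_messages

# Example: 28 false positives and 69 false negatives across the
# 3,214-message test stream yields the top score in this review.
print(f"{weighted_accuracy(28, 69, 3214):.1%}")  # 93.5%
```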

Vendors report accuracy numbers that can exceed 99 percent; some even tout a zero false-positive rate. Our testing shows these claims are mostly marketing hype based on results of "tests" with custom-tuned software--something we avoided to see how accurate these products were out of the box. Not all shops have the wherewithal to tweak spam rules before mail is tagged incorrectly. As always, caveat emptor; we recommend test-driving any product before entering into a software agreement.


Symantec's Mail Security for Exchange misidentified only 12 messages we were supposed to receive, whereas Red Earth Software's Policy Patrol misclassified a whopping 643 legitimate messages as spam. Using our weighted accuracy, Cloudmark Server Edition and Symantec's Mail Security for Exchange scored 93.5 percent and 92.4 percent effectiveness, respectively, while Red Earth Policy Patrol posted a -10.7 percent score--so skewed it made us question our testing methodology, briefly. Red Earth relies heavily on outbound e-mail to tune its policies in real time, but we sent no outbound e-mail, so its offering was severely hampered. In our invitation, we specifically stated that no training of Bayesian filters would be done; besides, many organizations route far less outbound mail through their spam filters than inbound. Spam filters should be able to identify spam without relying on outbound e-mail--that's what we pay for. All the other products weathered our test bed without such trouble.

Our test bed presented the products with another difficult challenge because we used real e-mail directed to Network Computing editors, including press releases, HTML-formatted industry newsletters, expletive-laden notes to deadline scofflaws and other spammy-looking legitimate missives that were tough to analyze correctly. Remember, too, that we emphasized out-of-the-box performance and defined accuracy in a certain way--your environment may not be as demanding.

Ease of administration and configuration are important with antispam software. Moving the burden to determine what is and isn't spam from users to IT administrators is a recipe for disaster--one person's spam is another's ham, so end users are best qualified to make this decision. Antispam software should assist users in doing so and give them full control over their message stores.

All the software suites we tested can deliver spam to a folder in the user's Exchange account. Red Earth's Policy Patrol is the only product configured out of the box to deny end users access to messages identified as spam; there, the burden falls on the administrator, who can change the configuration, but only through a tedious process.

Spam Filter Accuracy Results

Web-based quarantines are becoming a popular selling point; only the Commtouch and MailFrontier offerings have Web quarantine interfaces. Both of these and Symantec's Mail Security also can notify users of spam with a summary e-mail. Even if you currently just tag messages and allow users to write rules, or automatically place the e-mail in their inboxes, that configuration choice may change over time.



Reporting is critical to justify the purchase of antispam software and can give the administrator an overview of how well the product is working. Well-polished trend reports help justify the yearly expenses associated with the care and feeding of your antispam strategy. Reports also give those pesky users who complain of a spam or two in their inboxes a reality check. We found MailFrontier's Gateway Server the only product to offer a full range of reports we would feel confident sending to upper management.

Virus scanning is a critical component of every e-mail and desktop security strategy. Although not all viruses slither in through e-mail, stopping any that do should be a basic function of e-mail security software. Exercising the products' virus-scanning engines was beyond the scope of these tests, but we expect them to work as advertised. Commtouch's Anti-Spam Enterprise Solution is the only package that doesn't include a virus-scanning component--a serious deficiency in a mail gateway scanner.




If you use a simplistic approach to filtering, as in Red Earth's Policy Patrol, you may as well just delete every other message and hope for the best. RBLs (real-time blackhole lists) are becoming increasingly useless because spammers now use infected computers connected to high-speed Internet connections to do their dirty work. And just scanning e-mail for common words or phrases found in spam will cause more false positives than any business should be willing to accept. Just because an e-mail has "click here" or "unsubscribe" in it does not mean it's spam--avoid at all costs products that classify based simply on words or phrases. Red Earth uses both of these concepts in its product, which resulted in a poor showing in our tests.

Rather, look for an engine that computes hashes of body and header contents to make decisions. Most of these engines deliver updates locally through an active update process; Commtouch and Cloudmark, however, take a different approach, sending hashes to their servers for analysis. The verdicts that come back are stored in local caches, and the round trip generally adds only seconds to the delivery process. Either approach will work. Because e-mail can't be delivered without an Internet connection, don't shy away from a service that relies on that same connection to process spam fingerprints.

Many spammers use semi-random word generation to evade filters that rely on hashing, but Commtouch and Cloudmark don't hash the entire message. Because spammers get paid for clicks and image views, hashing those contact points and ignoring the rest keeps both vendors' approaches effective.
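To see why hashing only the contact points defeats word-salad padding, consider this hypothetical sketch; it's our own simplification, not either vendor's algorithm, and the regex and hash choice are illustrative only:

```python
import hashlib
import re

def contact_fingerprints(message_body):
    """Hash only the parts a spammer can't randomize away:
    the URLs and mailto addresses that monetize the message."""
    contacts = re.findall(r'(?:https?://|mailto:)\S+', message_body, re.IGNORECASE)
    return {hashlib.sha1(c.lower().encode()).hexdigest() for c in contacts}

# Two spam variants padded with different random words still share
# the same payload URL, so their fingerprints intersect.
a = contact_fingerprints("xylophone quartz buy now http://example.net/pills")
b = contact_fingerprints("random filler words here http://example.net/pills")
assert a & b  # the common fingerprint survives the word salad
```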

If your data center includes clustered Exchange servers, make sure the product you select is cluster-compatible. Be careful of lingo like "cluster-aware"--that usually means the software can detect a cluster but can't work out problems that arise. And if your company has multiple entry points for incoming e-mail, be sure the software package can replicate its policies; otherwise, your administration efforts will increase and may be unacceptably error-prone.

What'll It Cost

The average price for the antispam software in this review is about $12.50 per user, per year. Our as-tested price reflects a 1,000-user license on a two-year agreement, extrapolated to per-user, per-year figures. Red Earth's Policy Patrol was the only product that did not follow the common per-user, per-year pricing structure. It has a one-time price of $3,423 for a 1,000-user license with a two-year commitment, which we normalized to $1.71 per user, per year ($3,423 ÷ 1,000 users ÷ 2 years).

Commtouch, Cloudmark and MailFrontier all hovered near the average while Symantec's pricing for its Mail Security for Exchange was almost triple the average. Red Earth's price of $1.71 is by far the least expensive, beating its nearest rival by nearly $7 per user. Note that Commtouch's price of $8.50 per user, while second lowest, does not include antivirus support--that's why we scored it fourth in our price comparison; we figured another software add-on would raise the overall price by at least $3 per user. Cloudmark Server Edition costs $9 per user, and MailFrontier's package is $11.60 per user over two years.

Symantec does offer a more aggressive discount if you purchase more products, but it's still an inflated price. Meanwhile, Red Earth's low-ball pricing teaches you to look at features and accuracy as well as cost when evaluating antispam software. Sometimes, you do get what you pay for.

After all the numbers were tallied, MailFrontier's Gateway Server ran away with our Editor's Choice award. This easy-to-configure software provided the best reporting features, turned in a weighted accuracy rate just less than 90 percent and offered a very competitive per-user price. Cloudmark made up for its lack of reporting capabilities with the second-best price per user. Symantec and Commtouch were not far behind, turning in scores close to our runner-up.


MailFrontier Gateway Server

MailFrontier installed in a snap using a wizard installation routine common to Windows-based software. Gateway Server includes copies of Sun's Java Runtime Environment and Apache's Tomcat for the administration console. The installation wizard warned us about Gateway Server's capacious space needs--it requested 40 GB of free space for use with its Web-based quarantine, dubbed "Junk Box."

Once installed, we logged in to the Web-based administration console using the default user name/password. Here we entered the license keys provided by MailFrontier to switch on various parts of the software. We changed the Microsoft SMTP service to run on a different port than its default, and enabled MailFrontier's Gateway Server to bind to the standard SMTP Port 25 and forward messages to the Microsoft SMTP service. Some simple configuration was left, including tagging e-mail subject lines and writing Outlook rules for our tests. In all, installation and setup took about 20 minutes, including one reboot.

MailFrontier's spam engine design is simple but highly effective. The software looks at three parts of each message: senders and their reputations; the content of the message headers and bodies; and contact points (URLs, phone numbers, mail-tos) and their reputations based on their e-mail traffic. For each e-mail, the software determines the spamminess of each part. If two or more parts report it's spam, the message is tagged as spam; if one does, it's tagged as likely spam. If no classification can be performed, the software considers the e-mail legitimate.
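That two-of-three vote reduces to a simple tally. Here's a minimal sketch of the logic as MailFrontier describes it; the three judge functions are stand-ins, since the actual reputation lookups are proprietary:

```python
def classify(message, judges):
    """judges: three callables (sender, content and contact-point checks),
    each returning True if its part of the message looks like spam."""
    votes = sum(1 for judge in judges if judge(message))
    if votes >= 2:
        return "spam"
    if votes == 1:
        return "likely spam"
    return "legitimate"
```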

Once Gateway Server determines that a message is spam or likely spam, it can take various actions. We set the software to tag the subject lines for both spam and likely spam, but its default configuration will put these messages into a Web-based quarantine. We also could set the software to delete the message, return it to the sender, send it to another person or deliver it to the end user's inbox. This is the only software we tested that specifically works to detect phishing attacks by classifying their reputations in a different category. Because phishing attacks are fraudulent in nature, MailFrontier warns users prominently so they don't accidentally act on a legitimate-looking e-mail in the spam folder--a useful safety feature.

MailFrontier says administration of Gateway Server takes at most 10 minutes per week and that most customers spend closer to five minutes. We consider these numbers realistic. Bottom line, long-term administration of this product will not require you to hire more people, but expect an initial time investment to maximize accuracy through custom tuning. This is true of any antispam system.

In our tests, MailFrontier Gateway Server turned in the third-best weighted accuracy score, 89.6 percent. The software identified only 37 false positives and 150 false negatives, and had the second-best unweighted score.

Gateway Server's reporting features were the best in class. All reports were driven directly from its Web interface and were polished enough to pop directly into reports for upper-level management. The initial dashboard view provided a quick overview of how the software had performed over the past 24 hours. We also liked the canned reports that showed top senders and receivers and the domains from which they originated, plus bandwidth savings, ROI projections, spam/not spam graphs and more. If none of the more than two-dozen canned reports fits your needs, try the custom reports wizard.

MailFrontier Gateway Server 4.1, $11.60 per user (as tested). MailFrontier, (866) 3NO-SPAM, (650) 461-7500. www.mailfrontier.com


Cloudmark Server Edition

Another easy-to-install piece of software, Cloudmark Server Edition required almost no effort on our part. We simply created a user account and granted Enterprise Administrator and Delegated Exchange rights. Running the installation program required us to enter our activation code, the server IP address, and domain and user credentials, then check a box to configure the software to automatically place e-mail into each Exchange user's spam folder--a nice touch. Installation kept pace with the other products tested, taking less than 30 minutes.

Cloudmark Server Edition uses a plug-in to the Microsoft Management Console, so Windows administrators will get up to speed quickly. The MMC application is incredibly simple. It integrated with Exchange easily, and Cloudmark says most administrators set options once and forget about it--seems likely, since there aren't many options to play with. This will please some, but we would have liked more opportunities to customize. Limited features and very limited reporting cost this otherwise accurate spam filter the top spot in our tests, but shops that do not require advanced graphical reporting or have very limited IT resources should give Cloudmark Server a test drive. According to the vendor, the software scales to around 2,000 users per install, and its sweet spot is customers with 25 to 500 Exchange mailboxes.


We found Cloudmark Server's filtering service similar to Commtouch's in that it has no locally updated content-filter definitions. In fact, Cloudmark takes an interesting approach to identifying spam: It relies on more than 1 million users of its Cloudmark Desktop antispam product to send spam that hits their desktops back to the company. Using a custom fingerprinting approach, the Cloudmark software computes hashes for various headers, body content and MIME attachments and asks the Cloudmark spam catalog servers to judge the spamminess of this e-mail. These servers report back to the Server Edition software, which takes appropriate action. Cloudmark Desktop users who report spam back to the company's servers gain credibility; the more times they're right, the more they're trusted. Users of Server Edition cannot participate in this trusted-user feedback model yet, but should be able to in the next release, which will report back to Cloudmark when a user moves spam inside Exchange.
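The trusted-user feedback model can be sketched roughly as follows. This is our own toy illustration, not Cloudmark's implementation; the threshold and credibility increments are invented for the example:

```python
class SpamCatalog:
    """Toy model of a community spam catalog: a fingerprint is
    confirmed as spam once enough reporter trust accumulates."""
    def __init__(self, confirm_threshold=3.0):
        self.trust = {}        # reporter id -> credibility score
        self.reports = {}      # fingerprint -> accumulated trust
        self.threshold = confirm_threshold

    def report(self, reporter, fingerprint):
        # A report counts for more when it comes from a proven reporter.
        weight = self.trust.get(reporter, 1.0)
        self.reports[fingerprint] = self.reports.get(fingerprint, 0.0) + weight

    def is_spam(self, fingerprint):
        return self.reports.get(fingerprint, 0.0) >= self.threshold

    def corroborate(self, reporter):
        """Raise a reporter's weight when later evidence proves them right."""
        self.trust[reporter] = self.trust.get(reporter, 1.0) + 0.5
```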

We guess 1 million end users can't be wrong: Cloudmark turned in the best weighted accuracy performance in our tests, at 93.5 percent, as well as the best unweighted accuracy, at 97 percent. Its software was second best in false-positive and false-negative scoring, with 28 and 69 messages respectively. There's no doubt its unique approach to identifying spam is effective. Reporting, however, was a weakness; Cloudmark Server offered us only the ability to see how much time and money we saved with the product. A Web-based quarantine was also missing. Cloudmark says it believes this confuses users more than it helps. We disagree but see where the company is coming from, given that it aims its software at very small shops.

Cloudmark Server Edition 1.6, $9 per user. Cloudmark, (415) 543-1220. www.cloudmark.com


Symantec Mail Security for Exchange

Symantec leverages its recent purchase of Brightmail's antispam technologies in the product it sent us for testing: Mail Security for Exchange with Premium Anti-Spam Service. The Premium Anti-Spam Service is essentially the Brightmail service integrated into Symantec's Exchange product. Mail Security can also integrate with Microsoft Intelligent Message Filter and Spam Confidence Level settings, a feature missing from all the other competitors.

Installation of the software was simple. We answered a few questions, chose not to restart IIS, set the IP and port for the Web-based administration console, configured an administrator e-mail address and installed the two licenses we received from Symantec--one for Mail Security and one to enable the premium module. Mail Security for Exchange includes a Spam Folder Agent, which can automatically direct spam messages identified by the premium antispam module to a user's spam folder using server-side rules, though we did not test this. Don't be tempted to cheap out on the premium service--you'll save money, but you'll end up with a basic heuristics engine that is updated infrequently, and accuracy will be drastically lower than our test results suggest.

We did basic configuration of Mail Security for Exchange using Symantec's admin console: We set Live Update to fetch virus and premium antispam updates every hour, told the Premium Anti-Spam Service not to tag suspected spam, and set Premium Anti-Spam to tag the subject lines of spam messages for delivery into our spam folders, all based on recommendations from Symantec. We also configured the software to save spam report data so it would be available later and told the virus scan engine not to notify senders or receivers. The entire setup process took less than 30 minutes.

Symantec uses 2.5 million probe accounts with major ISPs to identify spam outbreaks soon after they start; its Brightmail operations center constantly monitors these probe accounts to create and test definitions and update its spam filter database, which is kept in sync with customer installations through Symantec's Live Update software. The company supplements that with more than 20 filtering techniques, including URL analysis, signatures and its own RBLs. In Symantec's own testing, the Brightmail software boasts an accuracy rate of 99.99 percent and a false-positive rate of one per 1 million filtered messages.

In our testing, Symantec tagged the fewest messages incorrectly as false positives, identifying only 12--short of the vendor's own numbers, but twice as good as the next-closest competitor. Overall, Symantec posted a weighted accuracy rate of 92.4 percent, second overall. Mail Security for Exchange did let through 184 false negatives, indicating that it errs on the side of caution.

We found reporting in the Mail Security product weak, offering only very high-level reports. We had to export data to CSV for more advanced queries. The company says the next release will include the ability to e-mail reports on a scheduled basis.

All in all, the as-tested price of $32.25 per user, per year for two years seems pretty steep. Companies already doing business with Symantec will receive additional discounts.

Symantec Mail Security 4.6 for Exchange, $32.25 per user. Symantec, (408) 517-8000. www.symantec.com


Commtouch Anti-Spam Enterprise Solution

The Commtouch product was one of the easiest antispam suites to set up--a plus for small shops. It took just 20 minutes. Using the wizard, we accepted all defaults, including an MSDE engine used to store spam statistics. Commtouch also can automatically configure an existing MS SQL installation, if chosen during initial installation. Customer sites with a large volume of mail will want to budget for MS SQL licenses to overcome MSDE's limitations (a 2-GB database size limit). After installation, we had to restart our computer.

Commtouch Anti-Spam Enterprise uses a polished Web interface for administration. We were impressed with its layout and think most IT administrators will like its intuitive interface. Commtouch recommended that we set up our Anti-Spam Enterprise software to ignore the Received From header lines from our CommuniGate Pro mail server, integrate via LDAP with our Active Directory install and adjust some report data-retention times. We also set the software to tag the subject line for spam and bulk messages, then wrote Outlook server rules to process all messages with those tags into our spam folder. Commtouch differentiates between spam and bulk messages, allowing administrators to set different actions for those types of messages, including redirecting them to a Web-based quarantine that users can manage.

The Commtouch suite uses a unique spam engine design that relies on proprietary RPD (Recurrent Pattern Detection) algorithms to identify the key characteristic of spam: messages sent in bulk. Commtouch does not update typical content-filter definitions found in many spam engines; it searches out patterns instead.

When the Anti-Spam Enterprise gateway receives an e-mail, it attempts to match it to local policies that can be set enterprisewide or by individual users. If the e-mail does not match a policy, Commtouch will search the local cache of answers it previously received from the Anti-Spam Detection Center. If it still can't match a rule, the gateway software queries the Anti-Spam Detection Center housed at Commtouch's facilities; if the center is not available, the message will be placed in the user's inbox. Once a message is categorized as spam or bulk, the gateway takes the appropriate action, as configured by the administrator. Legitimate e-mail is delivered to the end user's inbox. Using its RPD technology, Anti-Spam Enterprise can classify e-mail as suspected spam and hold it for a predetermined time while it checks in again with the Anti-Spam Detection Center to see if the message has been classified.
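That lookup order amounts to a chain of fallbacks. Here's a minimal sketch of the cascade as Commtouch describes it; the function and object names are our placeholders, and the fingerprint stub stands in for the proprietary RPD pattern extraction:

```python
import hashlib

def fingerprint(msg):
    # Stand-in for Commtouch's proprietary pattern extraction.
    return hashlib.md5(msg.encode()).hexdigest()

def classify_message(msg, local_policies, cache, detection_center):
    """Cascade: local policy, then cached verdicts, then the remote
    Anti-Spam Detection Center; fail open to the inbox if all miss."""
    for policy in local_policies:          # enterprisewide or per-user rules
        verdict = policy(msg)
        if verdict is not None:
            return verdict
    fp = fingerprint(msg)
    if fp in cache:                        # previously received answer
        return cache[fp]
    try:
        verdict = detection_center.query(fp)   # remote lookup
        cache[fp] = verdict                    # remember the answer locally
        return verdict
    except ConnectionError:
        return "deliver"                   # center unreachable: deliver to inbox
```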

In our tests, the Commtouch software allowed the fewest false negatives--only 48 messages reached our inbox that we would consider spam. Overall, however, Commtouch posted a weighted accuracy of just 74.5 percent, fourth in our tests, because of its 154 false positives. This accuracy rate is abnormally low, according to Commtouch, and was likely a result of the unique e-mail flow of bulk messages, like press releases, newsletter subscriptions and product announcements, our editors receive. The company claims a spam detection rate higher than 97 percent with custom tuning, which we specifically disallowed.

Reporting in the Commtouch Web interface was adequate; we could produce reports of top spammers and top recipients, with detailed views of each. We also could set reports to generate daily and be saved for later perusal. We missed having a custom reporting interface, which would round out the product nicely. More than anything, however, Commtouch was hurt by its lack of an antivirus engine; this brought down its features and price scores, as we would have had to purchase a separate e-mail AV system.

Commtouch Anti-Spam Enterprise Solution 4.1, $8.50 per user (as tested). Commtouch, (650) 864-2000. www.commtouch.com


Red Earth Policy Patrol

Red Earth's Policy Patrol was a little more difficult to configure than its rivals. The problem was that we wanted all spam messages delivered to the user, but by default the software is configured to place most spam into a folder on the server, where an administrator must choose whether to delete or deliver the messages. Configuring the software to deliver messages to users' junk mail folders was tedious--we had to do it for each filter. Red Earth should make this process easier for the administrator. Beyond that difficulty, a standard Windows-based wizard installation asked only a few questions to set up the software.

Policy Patrol uses a wide variety of scanning techniques to determine the spamminess of an e-mail. Among the methods: DNS blacklists, including the Spamhaus Block List; URL blacklisting; Sender Policy Framework; Bayesian filters; and word/phrase blacklists. For each technique, we could configure an action, including placing the e-mail into the user's junk mailbox, dropping the SMTP connection, redirecting the e-mail to another user, placing it into a local folder on the Exchange server and setting a Spam Confidence Level.

Red Earth's Policy Patrol learns your company's behavior by automatically updating its Bayesian filters and whitelists based on outgoing mail from your Exchange server. Because our test systems sent no outgoing e-mail, the product really struggled, turning in a dismal weighted accuracy of -10.7 percent. The spam engine misidentified a total of 987 messages, 643 of them false positives. Because we weighted the false positives by a factor of five, our numbers showed Policy Patrol classified 3,559 messages incorrectly--345 more messages than we received. As we mentioned, although relying on outbound messages to create whitelists is a legitimate strategy, we disallowed training of Bayesian filters for these tests.
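The auto-whitelisting idea itself is simple, which is what makes the dependence on outbound traffic so stark. A generic sketch of how sent mail seeds a whitelist; this is our own illustration, not Red Earth's code:

```python
whitelist = set()

def on_outbound(message):
    """Every recipient of outgoing mail is someone we want to hear from."""
    for addr in message["recipients"]:
        whitelist.add(addr.lower())

def on_inbound(message):
    if message["sender"].lower() in whitelist:
        return "ham"          # trusted correspondent, skip spam scoring
    return None               # fall through to the content filters

# With zero outbound mail, as in our test bed, the whitelist stays
# empty and every inbound message faces the full battery of filters.
```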

It seems Policy Patrol's engine tagged a lot of our e-mail based on simple rules, like "to" and "from" being the same. Some of the monitoring software we use sends e-mail from the same user it's received by, which is legitimate by RFC standards, but not common. Policy Patrol also picked up some editor-to-editor communications that had spam-like words or phrases in them--our editors sometimes have rather interesting conversations--and custom tuning would have helped in these cases.

The software includes a history of every message it processed, explaining exactly which rule flagged each one as spam--a nice touch that enabled us to assist Red Earth in figuring out why its software was so inaccurate in our tests. Although we could review each message individually, all other reporting options were conspicuously missing from this product. Its pricing was the only flat-rate setup in this review.

Policy Patrol Spam Filter 4, $1.71 per user. Red Earth Software, (800) 921-8215, (603) 436-1319. www.redearthsoftware.com


Christopher T. Beers is a contributing editor and manager of systems operations for a large broadband ISP, where he oversees daily operations of high-speed data and VoIP for the Northeast U.S., including Solaris and Linux administration. Write to him at [email protected].



Executive Summary

With spam as pervasive as ever, we decided to take another look at the spambusting market. In May 2004, we tested a range of complex antispam appliances and firewalls, hosted spam services and products that could operate at the Internet boundary or perimeter as mail gateways (see "Filters Take a Bite Out of Spam").

This time, we asked 16 vendors for products that run on Microsoft Exchange servers and can stop spam right out of the box. We disallowed tweaking or training of filters and looked closely at ease of setup. Cloudmark, Commtouch Software, MailFrontier, Red Earth Software, Sunbelt Software and Symantec delivered products that met our testing criteria (unfortunately, a testing glitch forced us to exclude Sunbelt's iHateSpam).

We deployed the suites on our test bed, which was the equivalent of antispam boot camp: No outgoing mail meant the products could not depend on training of Bayesian filters, and our Network Computing mail stream is full of spammy-looking ham. After streaming the same 3,214 messages through each test Exchange server and evaluating ease of setup, architecture, reporting and pricing for 1,000 users over two years, we awarded our Editor's Choice to MailFrontier's elegant and effective Gateway Server.

How We Tested Spam Filters

All testing was performed in our Syracuse University Real-World Labs®. Each product was required to integrate with Microsoft Exchange 2000 or 2003 and run on Microsoft Windows 2000 or 2003. Antispam appliances/firewalls, hosted offerings and products that did not integrate with Exchange were excluded from our tests because we wanted to focus on Exchange installs aimed at small and midsize businesses with few or no IT personnel.


We tested each product using Microsoft Windows 2003 Server running Microsoft Exchange 2003. Each product had a dedicated server; each server was configured as an Active Directory controller, had its own primary DNS servers for MX mail routing and received a real e-mail stream mirrored from our Stalker Software CommuniGate Pro mail server.

For 21 days, we pumped 3,214 pieces of Network Computing editors' production e-mail through the systems simultaneously. Using a CommuniGate Pro mirror rule, we ensured that identical copies of every e-mail message went through each antispam product and to one control Exchange server. We worked with each vendor to set the spam thresholds and configure the software to place spam messages into a spam folder, leaving ham messages in our inbox.

The mirror process added two X headers, X-Autogenerated: Mirror and X-Mirrored-by: <>, and two "Received from:" header entries to each mirrored message. Existing header entries and message bodies were preserved as exact copies of the original messages, including any existing From, To, Subject, Message-ID, Return-Path or Reply-To headers. The strength of this test methodology was that each vendor saw identical mail in real time for the duration of the test.

Although we used live, production e-mail for the test, we had to compromise in ways that made our accuracy test tougher than what these products would face in the real world: Because every mail message was delivered to each system under test from a single host--our production mail server--the last two "Received from:" headers were always identical and always pointed to a host with a good reputation on the Internet. Antispam products that used header-analysis techniques to identify spam had to dig two levels deeper in the "Received from:" chain to see the IP address of the real inbound e-mail server. In addition, the SMTP conversation between the inbound server and the system under test was always well-behaved because our production mail server follows the SMTP standards faithfully. Some of the SMTP servers used by spammers do a poor job implementing SMTP standards, thereby providing the antispam engine a good clue that the message being sent is suspicious.

These departures from the real world had a positive impact on our accuracy test because they helped to isolate the antispam products' ability in two important areas: First, the antispam engine had to perform a deep header analysis to make use of source data that might hold a clue about a message's spamminess. Because spammers are cranking up their use of PCs infected with spam trojans to relay their spam, the last IP address identified in the "Received from:" header may very well be some unsuspecting home user's or college student's machine. A deep header analysis may reveal a known spammer's IP address, while reliance on the last one, two or even three "Received from:" header entries would come up empty. Second, if the SMTP conversation between the inbound and receiving mail server and header analysis fail to identify possible spam, the only thing left is the spam engine's ability to analyze the message content. We can't rely on spammers to use known IP addresses or noncompliant SMTP servers, so the content-analysis phase of the spam ID process must be the strongest. We tried to isolate the content-analysis component in our accuracy test.
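A deep header analysis of the kind we describe walks the "Received from:" chain newest-first, skipping hops that belong to known, trusted relays. A simplified sketch; the regex, host names and trusted-relay set are our assumptions:

```python
import re

def originating_ip(received_headers, trusted_relays):
    """Walk Received: headers newest-first and return the first hop
    that isn't one of our own (or our mirror's) relays."""
    for header in received_headers:            # newest hop appears first
        match = re.search(r'\[(\d{1,3}(?:\.\d{1,3}){3})\]', header)
        if match and match.group(1) not in trusted_relays:
            return match.group(1)              # the real inbound source
    return None

# In our test bed the two newest hops always named the mirror host,
# so an engine had to dig at least two levels deeper than usual.
headers = [
    "from mirror.example.edu [192.0.2.10] by exchange1 ...",
    "from mirror.example.edu [192.0.2.10] by mirror ...",
    "from dsl-pool-host [203.0.113.77] by communigate ...",
]
print(originating_ip(headers, {"192.0.2.10"}))  # 203.0.113.77
```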


Because we were looking for products with good content-analysis engines and low administrative overhead, we didn't supply any custom blacklist or whitelist information to any of the systems, and we permitted vendor-supplied blacklists, whitelists and third-party RBLs (real-time blacklists) only if they were used as one factor among many to determine a message's spamminess. As a customer, you would never implement one of these products without supplying whitelists to reduce false positives, but we had that luxury in the lab and wanted to see how well each product worked without relying on a custom whitelist. We told vendors that Bayesian filters would not be trained as part of this test, and our Exchange servers did not send any e-mail out to the Internet, leaving products that automatically create whitelists based on sent mail at a disadvantage.

At the end of the test run, we examined each message in the control account. Each message we identified as spam was moved from the control account's inbox to a spam folder, leaving only ham in the inbox. Since one person's spam may be another person's ham, we were careful to define spam specifically as any unsolicited e-mail that advertised products and was sent by companies or individuals with which we had no previous business relationship. E-mail that advertised products or services from vendors with which we have relationships was considered ham. Using a mailbox comparison application developed by Stalker Software for our past antispam review, we compared the contents of the control inbox to the contents of each antispam vendor's inbox, looking for common messages. The number of common messages for each vendor was subtracted from the number of messages in the control inbox to determine the number of false positives--messages incorrectly tagged as spam. False negatives were calculated in a similar fashion using common messages in the spam mailbox.
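That comparison boils down to set arithmetic over message identifiers. A sketch of the bookkeeping, assuming Message-IDs as keys (Stalker's actual comparison tool may key on other fields):

```python
def score_vendor(control_inbox, control_spam, vendor_inbox, vendor_spam):
    """Each argument is a set of Message-IDs. The control account holds
    our hand-sorted ground truth; the vendor account holds the filter's."""
    # Ham the filter wrongly routed to spam:
    false_positives = len(control_inbox) - len(control_inbox & vendor_inbox)
    # Spam the filter wrongly let into the inbox:
    false_negatives = len(control_spam) - len(control_spam & vendor_spam)
    return false_positives, false_negatives
```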

All Network Computing product reviews are conducted by current or former IT professionals in our Real-World Labs® or partner labs, according to our own test criteria. Vendor involvement is limited to assistance in configuration and troubleshooting. Network Computing schedules reviews based solely on our editorial judgment of reader needs, and we conduct tests and publish results without vendor influence.














