Ory over at IBM/Watchfire does a good job of sorting the wheat from the chaff in Larry Suto's comparison report of web scanners. Couple it with HP/SPI's Jeff Forristal's report and you have a good idea of how difficult a true apples-to-apples comparison of any security product is, not just web scanners. If only WASC or OWASP or somebody had some guidelines for evaluating web scanner results :-). The Web Application Security Scanner Evaluation Criteria is a set of guidelines for evaluating web application security scanners on their identification of web application vulnerabilities and the completeness of that coverage. It will cover things like crawling, parsing, session handling, types of vulnerabilities, and the information reported about those vulnerabilities.
Hopefully this will raise awareness about how confusing accurate product comparisons in the security space must be to product reviewers, prospective customers, academics, and even lay people, and foster more participation in this WASC project. But back to Ory:

"In addition, I am concerned by the web application security industry - an industry filled with gifted security experts and practitioners, who embraced Suto's whitepaper warmly, without questioning its results or the methodology by which it was conducted for a single moment. Suto, having good intentions, published what he thought was in the best interest of the industry, and my biggest complaint to him was that his experiment methodology was never fully disclosed to the public, and therefore could never be confirmed nor rebutted. On the other hand, one would expect security experts to use a little more judgment when reading technical whitepapers, and be skeptical of results from experiments that are not well documented. Putting numbers into a table doesn't make them meaningful."
Ory, bravo for calling us all out for accepting things without fact checking. It seems even web professionals suffer from improper input validation from time to time! :-)