Some thoughts on url scanning

URL scanning seems to be an emerging trend. Detecting malware distribution channels and preventing infections is easier than cleaning up the mess they make. The basis of the idea is good, but the current implementations fall short. I have been mulling this over for a while, ever since I read Russ McRae's post (rant?) on URL shorteners needing to detect malware.

The initial problems that URL scanners face are simple evasion techniques, such as the click-to-get-infected method you can see in my previous post. That blogspot URL scores quite cleanly.
And why shouldn't it? It contains nothing directly malicious, so it should score cleanly until reputation or reactive defenses catch up with it. "Who cares about the herding page," you say, "it doesn't do anything; it's the delivery page we care about. If a user visits a 'benign' page that redirects them to malware, they will still be stopped at the malicious page!"
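To make the point concrete, here is a minimal sketch of why a herding page scores clean. The page, the marker list, and the scoring function are all hypothetical illustrations (not any real scanner's logic): a content scanner looking for known-bad markers finds nothing in a page whose only job is to coax a click.

```python
# Hypothetical "herding" page: it contains no exploit code itself,
# only a lure that asks the user to click through to the delivery URL.
HERDING_PAGE = """
<html><body>
<p>Your video is ready. Click play to watch.</p>
<a href="http://example.invalid/delivery">Play</a>
</body></html>
"""

# Illustrative markers a naive content scanner might look for.
BAD_MARKERS = ['eval(unescape', 'document.write(unescape', '<iframe src="http://']

def naive_score(html: str) -> str:
    """Score a page 'malicious' only if it contains a known-bad marker."""
    lowered = html.lower()
    return "malicious" if any(m in lowered for m in BAD_MARKERS) else "clean"

print(naive_score(HERDING_PAGE))  # prints "clean" -- nothing bad to find
```

The scanner is not wrong about the bytes it sees; the malicious behaviour simply is not in them.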

Alas, dear friend, a simple server-side block is all it takes to stop the scanner from accessing the offending page.
Other documented techniques seen in the wild include delivering the malicious payload on only 1 of x requests, user-agent filtering, JavaScript obfuscation that breaks automated deobfuscation, and more. I have seen an alert box break browser automation, so there is no shortage of options for the bad guys. However, considering how simple it is to shut down today's URL scanners, I doubt we will see many advanced techniques yet. URL scanning might overcome these simple bypasses in the future, but it should not be considered a defense, and certainly not a replacement for your desktop AV.
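The server-side tricks above are trivially simple. The following sketch combines two of them, user-agent filtering and 1-of-n payload delivery; every name in it is an illustrative assumption, not code from any real malware kit.

```python
# Hypothetical server-side evasion sketch: serve a benign page to
# suspected scanners, and the payload on only every n-th other request.
import itertools

# Illustrative user-agent fragments a malware server might treat as scanners.
SCANNER_AGENTS = ("curl", "wget", "python-requests", "urlscanner")

BENIGN_PAGE = "<html><body>Nothing to see here.</body></html>"
PAYLOAD_PAGE = "<html><body><script>/* exploit goes here */</script></body></html>"

_requests = itertools.count(1)  # running count of non-scanner requests

def serve(user_agent: str, n: int = 5) -> str:
    """Return the page a visitor with this User-Agent would receive."""
    if any(tag in user_agent.lower() for tag in SCANNER_AGENTS):
        return BENIGN_PAGE          # user-agent filtering
    if next(_requests) % n != 0:
        return BENIGN_PAGE          # 1-of-n delivery: most visitors see nothing
    return PAYLOAD_PAGE

# A scanner fetching the URL repeatedly sees only the benign page,
# while an ordinary browser eventually receives the payload.
```

A scanner that fetches the page once, with an identifiable client string, has essentially no chance of observing the payload.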


I thought I should also mention that the final delivery URL currently sabotages the scanner into an internal error.

This weblog is licensed under a Creative Commons License.