One twist to the story is that the Dutch police worked with Kaspersky Lab on a way to force the botnet to “commit suicide”.
Reaching out to infected users and notifying them that they are, in fact, infected is a new twist, and it uses the spammers’ own work against them.
The idea of using malware against itself has been around for a while, although this case may be the most public confirmed example. The wisdom of such “White Worms”, which use viral propagation of software to clean up and repair infected systems, has been debated for some time.
Bruce Schneier wrote about Benevolent Worms later that year, beginning with an assertion from a 2003 essay on the same subject:
A good software distribution mechanism has the following characteristics:
People can choose the options they want.
Installation is adapted to the host it’s running on.
It’s easy to stop an installation in progress, or uninstall the software.
It’s easy to know what has been installed where.
A successful worm, on the other hand, runs without the consent of the user. It consists of a small amount of code, and once it starts to spread it is self-propagating, continuing automatically until it’s halted.
He concludes that, “Patching systems is fundamentally a human problem, and beneficial worms are a technical solution that doesn’t work.”
I have some small experience with this question, from an incident back when I was running MIT’s security team. We had discovered a compromised server, which was listening for connections from machines newly-compromised by a particular strain of malware, and handing out a configuration file to all comers.
We went looking for the malware itself, and got to know its design better, particularly the part of its own installation process which downloaded that configuration file.
At this point, one of the clever people on the security team had the idea to replace the file the server was giving out with something of our own design. We decided to try doing something useful…
When an infected machine downloaded the config file from the server at MIT (now acting as our double agent), the newly-compromised machine would happily send a mail message to a public-facing cyber-crime reporting address at the FBI…
Each message would contain information about the location and identity of the victim machine, explaining that it had been compromised by bad guys, and needed some help. We left the server running, gleefully handing out our new Trojan to newly compromised hosts, as a service to the community. (Of course, we watched for any local machines that were requesting the file, and visited them as they turned themselves in.)
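The self-reporting behavior described above can be sketched roughly as follows — a minimal illustration, assuming a Python-style payload. The reporting address, field names, and message wording here are invented for the example; the post doesn’t describe the original format.

```python
import socket
from email.message import EmailMessage

def build_report(reporting_address: str) -> EmailMessage:
    """Compose a self-report message from a (hypothetically) compromised host.

    The address and wording are illustrative, not the original payload's.
    """
    hostname = socket.gethostname()
    try:
        ip = socket.gethostbyname(hostname)
    except socket.gaierror:
        ip = "unknown"

    msg = EmailMessage()
    msg["To"] = reporting_address
    msg["Subject"] = f"Compromised host report: {hostname}"
    msg.set_content(
        f"This machine ({hostname}, {ip}) appears to be infected "
        "with malware and needs attention."
    )
    return msg

# A real payload would hand msg off to an SMTP client;
# here we only build the message.
report = build_report("cyber-tips@example.gov")
```

The interesting design point is that the payload needs no new capability: it reuses the malware’s own install step (fetching and acting on a config file) to carry the report out.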
Some months later, in a conversation with a Boston FBI agent on another topic, I explained what we did, and asked if they’d seen any reports from the net due to our little trick.
The agent was quiet for a moment, and then said, “That was you guys??”
Apparently, the idea had been (very) successful, generating far more traffic to the reporting address than they had ever expected to see… After agreeing it had been a clever and interesting thing to try, they asked that we take it down.
So yes: clever idea, and possibly of use in some edge cases. But I think that as a widespread strategy, it’s hard to get this sort of thing right. Picking a useful “white worm” behavior, while anticipating any dangerous or sub-optimal side effects, is just hard. As a “civilian” tool, it’s probably not the approach I’d use.
I do wonder about the cyberwar utility of this sort of thing. When there is already significant smoke in the room, it may be easier to consider viral responses, with hopefully beneficial effects. (and, perhaps, a higher tolerance for collateral damage)