Hacking and Unhacking
August 22, 2008. Posted by ficial in techy.
Williams got a bit hacked recently, but we’re doing OK for the moment. Aside from the stress of the situation, it was actually fairly interesting detective work.
The attackers exploited a known PHP weakness, which we should have dealt with earlier. They used two main approaches to hindering detection and analysis. First, they implemented a randomizer so the effect could not be reliably reproduced – it was only because of anomalous Internet Explorer behavior that the effect was consistent. Second, they used a variety of code-obfuscation tricks: extra white space, compacted white space, encoded code, unusual code location, and dynamically generated code. The code location was particularly nasty – the hook was in a CSS file as expression(eval(unescape('%69%66%20%28%21%64%6F….%7D%3B%20'))). There were also some ID-like strings hidden in there, which you might search for in your own file system to check whether your machine is compromised: a0b4df006e02184c60dbf503e71c87ad and a995d2cc661fa72452472e9554b5520c.
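As a quick illustration of how that payload decodes, here's a short Python sketch. Only the visible ends of the string are decoded, since the middle is elided above; for simple %XX sequences like these, JavaScript's unescape and Python's urllib.parse.unquote behave the same way.

```python
from urllib.parse import unquote

# The visible fragments of the encoded payload from the CSS hook.
# The middle of the payload is elided in the post, so only the two
# ends are decoded here.
head = "%69%66%20%28%21%64%6F"
tail = "%7D%3B%20"

print(unquote(head))  # -> if (!do
print(unquote(tail))  # -> };
```

So the hook is just ordinary JavaScript (starting with an `if (!do...` test and ending with `}; `) hidden behind percent-encoding so it doesn't stand out when you skim the stylesheet.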
This sort of attack has a fairly standard high-level approach. First, the target system is scanned for vulnerabilities by automated software. Second, when a vulnerability is found a different set of automated software puts the malicious code in place. Third, the software tries to hide its tracks and lie low, letting the emplaced code do its thing. These days the tools are so automated that it’s unlikely a human is involved at all – the target system is chosen by automated software, and the fruits of the emplaced code are harvested by automated software. Furthermore, the software that does all this is distributed across many computers rather than residing on a single one. This intrusion was an almost picture-perfect example of that process.
On August 7 at 2:00 AM the automated software started checking likely files for vulnerabilities. It probably got a list of all potential files from a search engine or web spider. It looked through that list for files that might have a particular vulnerability – in this case it was hoping to use a PHP vulnerability that allows remote code to be executed on the target server via an include. So, it filtered that list looking for PHP files to which it could pass data, then it sent specially constructed requests to each one until it found an opening. These tests all came from the same machine, because a single machine requesting different pages wouldn’t arouse the suspicion of any automated log-file watchers.
By about 10:50 AM the software had found a vulnerable file, and over the next ten minutes or so it set to work exploiting it. It sent a series of special requests that caused code to run that put particular files in place on our system. This probably started with a series of exploratory files which reported back information about our system: our file structure, folder permissions, etc. These were repeated, similar-looking requests, all to the same file. To avoid log watchers, these commands (and all subsequent ones) each came from a different machine.
At 12:19 the software knew enough about our system to put its permanent code in place. From 12:20 to 12:25 the software was able to run its own emplaced scripts rather than acting through our code. It used this to arrange all its files and to put into place the hook that would cause its code to be executed when people arrived at our site from a search engine.
At about 12:40, 1:15, and 3:15 the software called a series of scripts that hid its tracks, removing most of the intermediate files it used to explore our system and set things up. It’s also pretty likely that the emplaced code was activated at this time.
There was a bit more vulnerability testing later in the day, but it stopped quickly. My best guess is that it took a while for the fact that our system was now owned to propagate across the distributed software. Once that data was widely known, the software no longer bothered with us. From then on the emplaced code just stays unobtrusive and does whatever it does – e.g. in our case it used the reputation of oit.williams.edu to direct web users to particular URLs.
For a bit more information about the technology and methods behind these kinds of attacks, check out
Some take-away lessons from all this:
- We are lucky it wasn’t worse
- We need a good way of alerting all relevant parties and escalating intrusion issues
- We need a process for propagating vulnerability fixes across all our systems
- We need better (not just more) security review processes and tools for all our machines
- A more proactive method of detecting intrusions and intrusion attempts would be good
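As a small step toward that last point, a minimal file-system sweep for the indicators quoted above might look like the following Python sketch. The patterns and the web-root path are illustrative; a real detection tool would watch logs and file modification times too, not just grep for known strings.

```python
import os

# Indicators seen in this intrusion: the eval(unescape( hook and the
# two ID-like strings quoted earlier in the post.
SUSPICIOUS = [
    "eval(unescape(",
    "a0b4df006e02184c60dbf503e71c87ad",
    "a995d2cc661fa72452472e9554b5520c",
]

def scan(root):
    """Walk a directory tree and return (path, pattern) pairs for any
    file whose contents contain one of the suspicious patterns."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file; skip it
            for pattern in SUSPICIOUS:
                if pattern in text:
                    hits.append((path, pattern))
    return hits

# Example usage: run scan("/var/www") from cron and alert on any hits.
```

Even a crude sweep like this, run nightly, would have flagged the CSS hook within hours instead of leaving it for a human to stumble across.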