Canonicalization, much like life, is a bitch. Yet another way higher character encodings get downgraded into lower character encodings, bypassing IDS/IPS signatures. Of course, this is just another example of the fundamental problem: the IDS isn't looking at the same bytes the destination service is looking at. Arian Evans does a good job scoping this:

Somewhere along the path from HTTP protocol --> to app untrusted entry point --> to parser, there are several possible layers of decoding. These could include:

- Web Server itself
- Web Server plugin
- Canonicalization in the framework (e.g. some .NET modules)
- Canonicalization steps in web app code
- Decoding and interpretation by shell scripts and the like
- Decoding certain encoding types for normalization (you see this a lot in PHP, or with cookies base64 file-system encoded, etc.)
- etc.

This means it is possible for an app to have one or more layers of canonicalization/conversion, allowing for even crazy things like double and triple encoding, which IDS/IPS do not handle at all over HTTP.
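A quick sketch of why stacked decoding layers hurt signature matching — this is just an illustrative toy using Python's stdlib URL quoting, not any particular vendor's decoder:

```python
from urllib.parse import quote, unquote

payload = "../etc/passwd"

# Each encoding pass hides the previous one from a single-pass decoder.
once = quote(payload, safe="")    # "..%2Fetc%2Fpasswd"
twice = quote(once, safe="")      # "..%252Fetc%252Fpasswd"

# An IDS that decodes exactly once still sees an encoded blob,
# so a signature on "../" never fires...
assert unquote(twice) == once

# ...while an app stack with two decode layers (server + framework,
# say) recovers the raw traversal payload just fine.
assert unquote(unquote(twice)) == payload
```

The mismatch is the whole game: unless the IDS knows exactly how many decode passes every layer behind it will apply, the attacker just adds one more layer of encoding than the IDS is willing to peel.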
My homies in X-Force are going to have a shitty day tomorrow... but not as shitty as the one Bob Auger is going to have. I remember him starting to do this about six months ago, but he wasn't the one who broke the news. Bummer. Web hackers 9999, IDS 0