MemeStreams Discussion

This page contains all of the posts and discussion on MemeStreams referencing the following web page: HTTP: The Application Transport Layer?. You can find discussions on MemeStreams as you surf the web, even if you aren't a MemeStreams member, using the Threads Bookmarklet.

HTTP: The Application Transport Layer?
by Acidus at 2:00 pm EDT, May 22, 2008

In the early days of the web, HTTP sat at the application layer (layer 7) and rode atop TCP, its transport layer.

An interesting thing happened on the way to the 21st century; HTTP became an application transport layer. Many web applications today use HTTP to transport other application protocols such as JSON and SOAP and RSS.
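As a concrete sketch of that idea (in Python, with a made-up `addComment` message; the endpoint and field names are illustrative, not taken from any real API), here is an HTTP request whose body is really another protocol. The HTTP headers describe the transport, while the JSON inside carries the application's actual semantics:

```python
import json

# Hypothetical inner-application message: the "real" protocol is JSON,
# and HTTP merely transports it.
message = {"method": "addComment", "params": {"post_id": 42, "text": "hi"}}
body = json.dumps(message)

# The HTTP envelope that carries it. Nothing in these headers reveals
# what the application protocol inside the body means.
request = (
    "POST /api HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    f"{body}"
)
```

To the network, this is just a POST; only the application on each end knows what `addComment` means.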

This is not the same as tunneling a different application through port 80 simply because almost all HTTP traffic flows through that port and it is therefore likely to be open on the corporate firewall. Those tunneled applications are essentially just pretending to be HTTP, borrowing the port to fool firewalls into letting their traffic pass unhindered.

No, this is different.

This is the use of HTTP to wrap other application protocols and transport them. The web server interprets the HTTP and handles sessions and cookies and parameters, but another application is required to interpret the messages contained within because they represent the protocol of yet another application.
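A toy illustration of that split, assuming a hand-rolled parser and a made-up `vote` message: one function plays the web server and understands only the HTTP envelope, while a second function is required to interpret the inner protocol.

```python
import json

def http_layer(raw_request: bytes) -> dict:
    """The web server's job: parse the HTTP envelope (headers, body)."""
    head, _, body = raw_request.partition(b"\r\n\r\n")
    headers = dict(
        line.split(": ", 1) for line in head.decode().split("\r\n")[1:]
    )
    return {"headers": headers, "body": body}

def application_layer(body: bytes) -> dict:
    """A second interpreter: only this code knows the inner protocol."""
    return json.loads(body)

raw = (b"POST /api HTTP/1.1\r\nHost: example.com\r\n"
       b"Content-Type: application/json\r\n\r\n"
       b'{"action": "vote", "id": 7}')
envelope = http_layer(raw)
inner = application_layer(envelope["body"])
```

The HTTP layer above could be reused for any web application; the application layer is specific to this one protocol and useless for any other.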

The problem is, of course, that there are no standards beyond HTTP. My JSON-based Web 2.0 application looks nothing like your SOAP-based Web 2.0 application. And yet a single solution must be able to adapt to those differences and provide the same level of scalability and reliability for me as it does you. It has to be extensible. It has to provide some mechanism for adding custom behavior and addressing the specific needs of application protocols that are unknown at the time the solution is created.

Applications aren't about HTTP anymore, they're about undefined and unknowable protocols.

There's a lot of traffic out there that's just HTTP, as it was conceived of and implemented years ago. But there's a growing amount of traffic out there that's more than HTTP, that's relegated this ubiquitous protocol to an application transport layer protocol and uses it as such to deliver custom applications that use protocols without RFCs, without standards bodies, without the W3C.

This is why Layer 4 IDS/IPS will not win. There's an RFC that defines IPv4, IPv6, TCP, SSL, etc. You can easily test structure and detect malformed IP packets. You can use stateful packet inspection to check FTP. There is no RFC that defines JSON. There is no RFC that defines what the data inside the JSON literals is going to look like. There is no RFC about the character encodings that I'm applying. I've seen web applications using pipe (|) separated quoted strings that are Base64-encoded to transfer data back and forth. How do you deep inspect something when you don't know the format?
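As a hedged reconstruction of that last format (the exact quoting rules are guesswork; this is just one plausible reading of "pipe-separated quoted strings, Base64-encoded"):

```python
import base64

def encode(fields):
    """Join quoted fields with pipes, then Base64 the result.
    Assumes fields contain no embedded pipes or quotes."""
    joined = "|".join(f'"{f}"' for f in fields)
    return base64.b64encode(joined.encode()).decode()

def decode(blob):
    """Reverse the encoding: un-Base64, split on pipes, strip quotes."""
    joined = base64.b64decode(blob).decode()
    return [f.strip('"') for f in joined.split("|")]

blob = encode(["alice", "vote", "7"])
# Nothing on the wire tells an inspecting appliance that this opaque
# blob hides three fields, or what any of them mean.
```

An appliance that only understands HTTP sees `blob` as an undifferentiated parameter value.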

(actually, this reminds me of an awesome presentation I saw at Toorcon back in 2004, Protocol Analysis using Bioinformatics Algorithms)

HTTP has become the long-haul, reliable application transport protocol of web applications, and we have no idea what the traffic traveling over it is supposed to look like. So how is an appliance in your DMZ supposed to validate it?


 
RE: HTTP: The Application Transport Layer?
by Decius at 2:20 pm EDT, May 22, 2008

Acidus wrote:
This is why Layer 4 IDS/IPS will not win. There's an RFC that defines IPv4, IPv6, TCP, SSL, etc. You can easily test structure and detect malformed IP packets. You can use stateful packet inspection to check FTP. There is no RFC that defines JSON. There is no RFC that defines what the data inside the JSON literals is going to look like. There is no RFC about the character encodings that I'm applying. I've seen web applications using pipe (|) separated quoted strings that are Base64-encoded to transfer data back and forth. How do you deep inspect something when you don't know the format?

The simple answer is that the RFC isn't how you knew what the format was in the first place. Most software does not obey RFCs and IPS systems aren't designed to enforce them. IPS's are built by people who study implementations. IPS systems parse all kinds of proprietary protocols and they are built with an understanding of how proprietary implementations actually work. Blindly blocking all malformed traffic per some paper standard would simply break operational applications.

The biggest differences involved in some of these new web applications are interface complexity and customization. In the case of the former, there are simply far more ways to permute JavaScript code than an SMTP session. In the case of the latter, if everyone develops his own web application, then they are all different and each has different vulnerabilities. You don't get the same economies of scale that you do when nearly everyone has the same software with the same problems.

These changes are pushing IPS toward more heuristic detection methodologies that can look for certain attack classes without having a complete knowledge of message formats, and they are pushing some security into the client -- closer to where the vulnerable interfaces are.
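A minimal sketch of what such a heuristic inspector might do (the signatures and decoding steps here are illustrative, not from any real IPS): since it cannot parse the unknown inner protocol, it unwraps obvious encodings and scans every resulting view of the payload for known attack-class patterns.

```python
import base64
import re

# Toy attack-class signatures: script injection and SQL injection.
SIGNATURES = [re.compile(p, re.I) for p in (r"<script", r"union\s+select")]

def suspicious(payload: str) -> bool:
    """Heuristic check: scan the raw payload and, if it happens to be
    valid Base64, its decoded form as well, without ever needing to
    know the application's message format."""
    candidates = [payload]
    try:
        candidates.append(base64.b64decode(payload, validate=True).decode())
    except Exception:
        pass  # not Base64 (or not text); inspect the raw payload only
    return any(sig.search(c) for sig in SIGNATURES for c in candidates)
```

The trade-off is inherent to the approach: heuristics catch attack classes, not protocol violations, so they can miss novel encodings and flag benign traffic.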


 
 