Worthersee's MemeStream

Current Topic: Technology

Bypassing Web Authentication and Authorization with HTTP Verb Tampering
Topic: Technology 2:06 pm EDT, May 29, 2008

This is a cool paper and all of you should read it for many reasons.

First, because it’s a perfect example of hacking. Hacking is just critical thinking and understanding how a system works. In this paper, by understanding the nuances of web technologies, the researchers found a very trivial way to bypass the authentication systems of many popular web frameworks!

Second, it’s a classic example of how programmers with even a little security knowledge can make big mistakes.

Here is the paper in a nutshell:

Various web frameworks like Java EE, ASP.NET, etc., allow you to configure the website so certain directories are only accessible to certain users with certain HTTP methods. So anyone can do a GET or POST to /public/, but only an admin can do a GET or POST to /admin/.

Enter the HTTP HEAD method. This is usually used for diagnostics and caching. If you send an HTTP HEAD instead of an HTTP GET to a URL, the website is supposed to do everything it would normally do when processing a GET, except that the HTTP response should contain only headers and no body. To make sure the same response (sans body) is sent for a HEAD as for a GET, web servers simply handle the request as if it were a GET and suppress the body when sending the response.
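
You can see this for yourself with a few lines of Python (a minimal sketch; example.com is just a stand-in for any reachable host):

import http.client

def probe(method):
    # One connection per request so the example doesn't depend on keep-alive.
    conn = http.client.HTTPConnection("example.com")
    conn.request(method, "/")
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    print(method, resp.status, "body bytes:", len(body))

probe("GET")   # same status and headers...
probe("HEAD")  # ...but no body comes back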

Do you see the trick yet?

The HTTP HEAD method can be used to side-step authentication systems in many web applications. An attacker simply sends a HEAD to /admin/deleteUser?user=billy instead of a GET. The authentication framework checks, sees that anyone can send HEADs to /admin/, and does not stop the processing of the request. The web server then runs all the back-end code it normally runs for a GET, which deletes Billy as a user. The attacker does not see the body of the response, so it’s a blind attack. However, the attacker can see the HTTP status code returned with the response to the HEAD, and based on its value (200 vs. 500) the attacker can tell whether it worked.
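
Here is a minimal sketch of that failure mode in plain Python (not any particular framework; the rule table, the /admin/deleteUser path, and the is_admin flag are hypothetical stand-ins for what the paper describes):

# Hypothetical default-allow rule: it only names GET and POST, so any other
# verb falls through to "allowed".
PROTECTED = {"/admin/": {"GET", "POST"}}

def is_allowed(method, path, is_admin):
    for prefix, covered in PROTECTED.items():
        if path.startswith(prefix) and method in covered:
            return is_admin      # the rule matched: admins only
    return True                  # the rule never matched: default allow

def handle(method, path, is_admin):
    if not is_allowed(method, path, is_admin):
        return 401
    # Servers typically route HEAD through the GET handler and just strip
    # the body, so this side effect runs for HEAD too.
    if path.startswith("/admin/deleteUser"):
        print("deleting user billy")
        return 200
    return 404

print(handle("GET", "/admin/deleteUser?user=billy", False))   # 401: blocked
print(handle("HEAD", "/admin/deleteUser?user=billy", False))  # 200: Billy is gone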

This is exactly the reason why HTTP GET should be idempotent. In other words, GETs and HEADs should not modify the state of the web server, so you can send multiple GETs to the exact same URL and it should not cause problems. POSTs, on the other hand, are not idempotent. This is why e-commerce sites say things like “don’t click checkout again!” and your browser will say things like “You have already submitted POST data, are you sure you want to refresh and send this again?” (AMP, we aren’t doing this in our web frontend, right?)

We even have an idea about how widespread this problem could be. In 2005 Google launched Google Web Accelerator. This was a browser plug-in that pre-fetched links on the page you were looking at, to make better use of your bandwidth. Unfortunately, thousands of sites started breaking, because developers all over the world were using simple hyperlinks (which issue a GET) to modify the state of the web app. There was lots of kicking and screaming, and I acquired a healthy dislike for Ruby on Rails developers who kept insisting that the rest of the world was wrong and they were right, but I digress.

In short, by knowing HTTP and understanding that developers had implemented a default “Allow All” behavior, the researchers discovered this very cool attack.

Bypassing Web Authentication and Authorization with HTTP Verb Tampering


Memristors, they exist!
Topic: Technology 9:58 am EDT, May  1, 2008

This is very cool. Way back in 1971, professor Leon Chua at the University of California, Berkeley wrote a paper describing four basic passive electrical components: resistors, capacitors, inductors, and memristors. Until this year, the last of these was only theoretical, but some bright folks have finally cracked it.

This is likely to crack open a whole boatload of new types of circuits and electronic applications. Very, very cool.

Memristors, they exist!


Mac Book Air vs. Lenovo X300
Topic: Technology 4:29 pm EDT, Apr 30, 2008

My friend who works at Lenovo sent me this video.

Mac Book Air vs. Lenovo X300


DoD Cybercrime Recovering Executables from memory
Topic: Technology 5:12 pm EDT, Apr 29, 2008

Found while searching for info on Msft COFEE. Review later...

DoD Cybercrime Recovering Executables from memory


Micro GPS Mail Logger
Topic: Technology 11:22 am EDT, Apr 25, 2008

The Mail Logger is the only GPS Tracking Device specifically designed for tracking your mail. Simply mail it in an envelope and later review where your mail has been. Save time and money by evaluating your delivery service’s reliability and efficiency.

Discover Delays And Inefficiencies In Delivery With GPS Location
The U.S. Postal Service delivers more than 212 billion pieces of mail annually and they are constantly looking for new ways to improve efficiency and eliminate delays. The answer? The GPS Mail Logger. At half of an inch thin, this GPS Mail Logger is the world’s only GPS Tracking Device that can be mailed in an envelope. Evaluate the effectiveness of your delivery service by pinpointing inefficiencies or delays in the delivery process.

Watch How The Micro GPS Mail Logger Works!

GPS Tracking Records Your Mail’s Exact Location, Speed, And Altitude
The GPS Mail Logger records the global position of your mail throughout the delivery process. Once you receive your mail, plug in the GPS Mail Logger’s MicroSD card and find out where your mail has been in seconds. With time stamps and recorded downtime you can find out where your mail has been and if it was delayed or misrouted. With GPS you get your mail’s exact satellite location, how fast it was traveling, and even its altitude throughout the delivery process. With this data, the GPS Mail Logger can help you evaluate your delivery company’s effectiveness.

Never Lose Track of Your Mail Again – Test Your Mail Delivery’s Efficiency!
With The GPS Mail Logger you will never wonder what took your mail so long to arrive. Why? Because this product tracks the GPS Location of your mail throughout the delivery process and stores the information on a removable MicroSD card.

Wonder what’s taking your mail so long to arrive? Test your delivery company’s efficiency by mailing out one hundred packages equipped with the GPS Mail Logger. Upon arrival, check the GPS record of where they have been. Maybe your delivery company misrouted 50% of these packages and that’s why they were late. Or maybe they sat in the warehouse for two days. Find out with the GPS Mail Logger!

1. Send out multiple packages equipped with the GPS Mail Logger.
2. Once the mail arrives, review where your mail has been.
3. Evaluate your delivery effectiveness.

Don't leave it up to the USPS to track your most sensitive mail properly. At just a fourth of an inch thin, the GPS Mail Logger ($695) is a GPS tracking device that can be mailed in an envelope, tracking the exact location, speed and altitude of your envelope, from the time it leaves you to the time it reaches Publishers Clearing House.

Micro GPS Mail Logger


RE: Automatic Patch-Based Exploit Generation is Possible: Techniques and Implications
Topic: Technology 3:00 pm EDT, Apr 19, 2008

Key Design Points
The most important design question for constructing the constraint formula is to figure out what instructions to include in the formula. We need to include all the instructions for an exploitable path for the solver to generate a candidate exploit. However, the number of exploitable paths is usually only a fraction of all paths to the new check. Should the formula cover all such execution paths, some of them, or just one? We consider three approaches to answering this question: a dynamic approach which considers only a single path at a time, a static approach which considers multiple paths in the CFG without enumerating them, and a combined dynamic and static approach.

This is a really good example of combining Static Analysis and Dynamic Analysis to find and verify security vulnerabilities. Come see my Summercon presentation for more on this topic.
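
If you want a feel for the constraint-solving step, here is a drastically simplified sketch using the z3 solver (my choice for illustration, not the paper's tooling): pretend a patch adds a bounds check such as if (len > 128) reject(), so the constraint formula for the vulnerable, unpatched code is just the condition the new check rejects.

# A toy version of "turn the patched-in check into a constraint formula and
# solve for a concrete input" (requires the z3-solver package).
from z3 import Int, Solver, sat

length = Int("length")

s = Solver()
s.add(length > 128)   # the condition the new check rejects
s.add(length >= 0)    # the input must still be a plausible length

if s.check() == sat:
    print("candidate exploit input length:", s.model()[length])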

RE: Automatic Patch-Based Exploit Generation is Possible: Techniques and Implications


Oklahoma isn't the only one
Topic: Technology 2:49 pm EDT, Apr 15, 2008

[sigh]. Now I have a good answer to the statement "Surely no one is stupid enough to put raw SQL into a URL!"

The best part is that the "blurring" of the email addresses is horrible and you can easily see many of the email addresses of registered sex offenders.

Want to see who else is an idiot?

...

allinurl:?= SELECT FROM WHERE AND (sql|q|query)

... and watch the silliness.

Please don't INSERT me into the table as a registered sex offender.
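
For anyone who hasn't seen why this is so bad, here is a minimal sketch of the anti-pattern next to a parameterized query, using sqlite3 purely for illustration (the URLs and the offenders table are made up):

import sqlite3
from urllib.parse import urlparse, parse_qs

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE offenders (id INTEGER, name TEXT)")
conn.execute("INSERT INTO offenders VALUES (1, 'example')")

# The anti-pattern: the URL carries the whole SQL statement, so anyone can
# rewrite it into an INSERT, UPDATE, DROP, or worse.
url = "http://example.com/lookup?sql=SELECT name FROM offenders WHERE id = 1"
raw_sql = parse_qs(urlparse(url).query)["sql"][0]
print("raw SQL from URL:", conn.execute(raw_sql).fetchall())

# The sane version: the URL carries only data and the query is parameterized.
safe_url = "http://example.com/lookup?id=1"
offender_id = parse_qs(urlparse(safe_url).query)["id"][0]
print("parameterized:", conn.execute(
    "SELECT name FROM offenders WHERE id = ?", (offender_id,)).fetchall())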

Oklahoma isn't the only one


SANS Internet Storm Center - Advanced obfuscated JavaScript analysis
Topic: Technology 7:08 pm EDT, Apr  9, 2008

When we got contacted by ISC reader Greg in Hungary, whose web server had been hacked and adorned with a couple of obfuscated JavaScript files, we expected a variant of the "nmidahena" injection and a closed case. JavaScript is an interpreted language, and while the obfuscation attempts we see are getting more creative, the scripts can usually still be coerced quite easily into divulging their secrets. ISC handler Lenny Zeltser teaches the SANS course on malware analysis, and ISC handler Bojan Zdrnja wrote the portion on JavaScript analysis for that course, so we are usually able to make short work of bad stuff.

Cool example of self-defending JavaScript malware.

SANS Internet Storm Center - Advanced obfuscated JavaScript analysis


Muxtape
Topic: Technology 10:11 am EDT, Mar 26, 2008

Now you can give your sweetheart a URL instead of one of those yucky magnetic cassette tapes.

Muxtape


What Software Development can learn from Formula One
Topic: Technology 10:52 am EDT, Mar 23, 2008

The F1 World Championship started in 1950. Early years were notoriously dangerous, including spectators who stood on the kerb of whichever road it was they were racing down. Essentially it was all-or-nothing: if everyone kept on the track, no-one got hurt; if someone went off then lack-of-limbs was the best case scenario.
...
So what has this got to do with software? Well, I think there are direct parallels; the history of software is very similar. Thankfully, very little software is responsible for life-and-death issues, although there have been many instances of deaths being directly attributed to faulty software; the failures of software are mostly financial. Money is lost due to either: a) projects being abandoned or significantly over-running; or b) faulty or inefficient software in production.
...

So, how can the software development process improve? How did Formula 1 get to its current very good safety record? The answer isn't technical. Technology changes like HANS devices, etc., definitely have worked; but most of the shocking incidents of the 1970s and early 1980s were avoidable. They were caused by lack of preparation (e.g. fire-extinguishers around the track), or lack of communications (e.g. between cars and marshals).

1. Lack of complacency; no-one goes round saying "but why would two cars crash? You need to make a case before I can approve that new helmet".

2. Lack of buck-passing: Race control, from a safety point-of-view, is the ultimate responsibility of one man. He doesn't need to get approval from Bernie Ecclestone before calling out a Safety Car; he has the power. Whilst sporting decisions are made by a panel of stewards, safety issues are dealt with immediately.
...

The moral of the story: responsibility is meaningless without power. If you genuinely want quality, you have to have in your team a single "quality manager" - a person who will be rewarded or sacked based on the fitness-for-purpose of the end deliverable. And, which is even more important, this person must not be in a position where he can be over-ruled on quality issues by other managers.

We recently started taking that approach to quality assurance at work. A single QA person can stop a release. They no longer report to the development manager of the product they're responsible for assuring quality in. Hopefully this will mean fewer fiery, fiery deaths.

What Software Development can learn from Formula One

