Wednesday, November 26, 2008

Preventing SQL Injection Attacks The Right Way

SQL injection attacks are everywhere these days. They have become ubiquitous and have been exploited by hackers of all grades across different applications, database engines and operating systems. Try to find a solution to the problem on the web and you will be blessed with a wealth of solutions and recommendations.

All these solutions have one thing in common though: they go to great lengths to place the blame for sql injections squarely at the feet of the web application developer. Granted, there are many defensive techniques which a developer should employ in order to foil sql injection attacks among other things, but in my opinion the fundamental security holes that make these attacks possible can only be fixed by sql database vendors themselves and not by web application developers. Let me explain.

There was a time when operating systems opened all ports on a computer by default, when smtp servers allowed anyone to relay mail through them, and when dns servers allowed anyone to submit recursive queries and perform zone transfers. By and by, all these features were recognized as exploitable holes and blocked by all modern software. However, it is not just that these holes are now commonly blocked; what is vitally important is that they are now commonly blocked by default.

This attitude of being secure by default has proven time and time again to be the correct one to take whenever an exploitable feature is identified or recognized. It has prevented countless unanticipated attacks against misconfigured computers while at the same time allowing experts to open these holes under controlled conditions when necessary. The problem with sql injection attacks today is that the insecure features that make these attacks possible have not been recognized as such even though they have been identified for quite some time.

What are these features, you ask. There are two of them and for the purposes of this article I will refer to them as multistatement query execution (MQEXEC) and union query execution (UQEXEC). The first feature allows an sql database engine to execute multiple statements in one query. For example, when faced with a query such as
SELECT a FROM table1; DROP TABLE table2;

the database engine will happily execute both statements in the query one after the other. The second feature (UQEXEC) allows the database engine to combine the results of multiple substatements into one dataset. For example, when faced with a query such as
SELECT a,b FROM table1 UNION SELECT c,d FROM table2;

the database engine will happily combine the results of each select substatement into one dataset (subject to certain restrictions which are not relevant here).

The problem with these features, and the reason they are such a huge security hole in today's computing environment, is that they allow an attacker to modify queries in web applications in unforeseeable ways with devastating consequences for all parties concerned. For example, it is quite natural for a web application developer to write a query such as
"SELECT a,b FROM table1 WHERE c = '" + ParameterValue + "'";

along with a text box that allows the web application user to enter the parameter value. But an attacker can instead enter a parameter value that will turn this simple query into a multistatement query with malicious side-effects. Many pundits will no doubt blame the developer for writing a query like this in the first place. Yet that is not really the problem here. The problem here, really, is that when the developer wrote this query, he or she wrote it under a very specific assumption and most database engines do not provide a way for him or her to require that this assumption be enforced.
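
To make the danger concrete, here is a minimal sketch of that string-building pattern, written in Python purely for illustration; the table names and ParameterValue come from the example above, and the injected text shown is just one of many possible payloads.

# Hypothetical attacker input typed into the text box
parameter_value = "x'; DROP TABLE table2; --"

# The developer's concatenation from the example query above
query = "SELECT a,b FROM table1 WHERE c = '" + parameter_value + "'"

print(query)
# SELECT a,b FROM table1 WHERE c = 'x'; DROP TABLE table2; --'
# With MQEXEC enabled, the engine runs the SELECT and then the DROP.

The trailing comment marker in the payload neutralizes the developer's closing quote, so the injected statement parses cleanly and the single-statement query silently becomes a multistatement one.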

The specific assumption made by the developer here is that this query will not ever be a multistatement query. Unfortunately for the developer, however, the sql dialect of most database engines does not provide any way for the developer to make this assumption explicit or to require its enforcement. Even worse for the developer is the fact that these database engines operate in a mode that enables the MQEXEC and UQEXEC features by default — thereby making the attacker's task tremendously easier than the developer's, with the expectation that the developer should work around this sorry state of affairs in some way.

But suppose for a moment that sql database engines offered a way to selectively enable the MQEXEC and UQEXEC features. In Microsoft SQL Server, for example, one might have the statements
SET {MQEXEC|UQEXEC} {ON|OFF}

similar in many ways to the familiar NOEXEC option. Suppose further that the default mode for these options were OFF. Then the vast majority of sql injection attacks would be thwarted by default at the database engine level.

This recommendation has an additional benefit for security auditing in that by simply flipping these options to OFF, all areas in an application where multistatement and union queries are needed or assumed will fail unconditionally. Security experts can then review the code in these areas carefully and selectively enable the options as necessary.

With the MQEXEC and UQEXEC options OFF by default, the only sql injection attacks possible are those involving single statement, non-union queries. These generally tend to have less devastating consequences but can in any case be guarded against using application-level techniques such as query parametrization. But relying exclusively on application-level techniques to fight sql injection attacks is a recipe for disaster as real-world experiences continue to show.
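
As a point of reference, here is a minimal sketch of query parameterization, using Python and its built-in sqlite3 module purely for illustration; the table and column names echo the earlier examples and the code is not tied to any particular engine or driver.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (a TEXT, b TEXT, c TEXT)")

hostile_input = "x'; DROP TABLE table1; --"

# The placeholder keeps the input as pure data: it is bound to the query as a value,
# never parsed as sql text, so the injected DROP never reaches the engine as a statement.
rows = conn.execute("SELECT a, b FROM table1 WHERE c = ?", (hostile_input,)).fetchall()
print(rows)   # [] -- the hostile string simply matches no row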

Saturday, September 6, 2008

How NOT To Verify Googlebot

Maintaining a blog takes a lot more effort than one would imagine. Between family, lifestyle and work, it's hard to see how people manage to find the time to keep their blogs up to date. I suppose the trick is to find a way to fold blogging into work or lifestyle so that it becomes something you do without much ado. Definitely something I need to work on.

Type "verify googlebot" into a major search engine and you will come across many articles telling you how to verify googlebot and other search engines using what I will call the reverse lookup verification method. This method is recommended by all the major search engines bar none. Google, Yahoo!, Microsoft, Ask and many internet experts recommend it. So it may come as a suprise to some people to read my article where I strongly recommended that this method not be used.

I feel that webmasters who use this verification method may not fully understand the implications of their actions, so I'm going to try to explain the issue once again. You see, when a suspected search engine hits your website, you come into possession of an ip address that you know absolutely nothing about. Using this ip address, you are supposed to do a reverse dns lookup to get a list of hostnames to verify. If you are lucky and the ip address happens to belong to Google or Yahoo!, their server will respond to your dns request with a nice list of hostnames for you to verify. However, if that ip address happens to belong to the bad guys, you would be naive to expect them to play by the rules.
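
For readers who have not seen it spelled out, here is a minimal sketch of the reverse lookup verification method under discussion, written in Python purely for illustration; the hostname suffixes are the ones Google documents for googlebot, but the point to notice is the very first call, which sends a PTR query to dns servers chosen by whoever controls the ip address.

import socket

def reverse_lookup_verify(remote_ip):
    # Step 1: reverse (PTR) lookup -- this query goes to name servers picked by
    # whoever owns the ip block, which is exactly the problem described above.
    try:
        hostname, _, _ = socket.gethostbyaddr(remote_ip)
    except socket.herror:
        return False
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    # Step 2: forward-confirm that the claimed hostname really maps back to the ip.
    try:
        return remote_ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False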

Now, in a perfect world the bad guys may respond to your dns requests with a nice list of names for you to verify. But in the real world they are more likely to dig out their playbook and launch every dirty trick they possess against your web server. If you have been following internet security closely, you must have heard of recent dns attacks where the bad guys needed to find a way to make you initiate dns requests. You know what? When you send a dns query to their servers on your own initiative, you have saved them the trouble of needing to trick you in the first place!! All they have to do now is jump straight into their attacks against your server.

In fact, if this method of verification becomes hugely popular, you can expect the bad guys to make good use of it to attempt to compromise even more web servers. It really is remarkably simple: all they need to do is send a request to your web server and wait for your dns requests to come in to launch their attacks. We all love to rave and rant about how insecure dns is, but using that insecure dns to send automated requests to random servers from your production web servers takes the insecurity to a whole new level. JUST DON'T DO IT no matter what the big search engines tell you. Try using one of the following solutions instead.

SOLUTION #1: USE ISOLATED VERIFICATION SERVER


In this solution, any suspected search engine that hits your website is checked against a locally maintained database on your network, without sending any reverse dns queries from your web server. If the ip address is not already in the database, it is submitted to a separate and isolated verification server along with any other information from the request headers that you care about. The verification server does the reverse lookup verification and updates the shared database on your network.

This server should also do other kinds of verification, just in case the ip address does belong to a major search engine but the search engine company failed to set up reverse dns for the ip address. The idea here is to isolate the verification server as much as you can from your production web servers so that even if the verification server is attacked or compromised, the attacks will not immediately affect your production servers.
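
Here is a minimal sketch of the web-server side of that split, in Python with sqlite3 standing in for the shared database; the file path, table layout and function name are hypothetical, and the exact division of labour will depend on your setup.

import sqlite3

DB_PATH = "/var/lib/botcheck/bots.db"   # hypothetical shared database

def check_visitor(ip):
    # Web-server side: consult the local database only; no dns queries are sent from here.
    conn = sqlite3.connect(DB_PATH)
    row = conn.execute("SELECT verdict FROM verified_bots WHERE ip = ?", (ip,)).fetchone()
    if row is None:
        # Unknown ip: queue it for the isolated verification server and treat it
        # as unverified for now.
        conn.execute("INSERT OR IGNORE INTO pending (ip) VALUES (?)", (ip,))
        conn.commit()
    conn.close()
    return row[0] if row else "unverified"

The verification server, running on its own machine, drains the pending table, performs the reverse lookups and any additional checks, and writes its verdicts back to verified_bots.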

Of course, if you are not in a position to maintain a verification server yourself, you can do the smart thing by signing up for a regularly updated free database like the one I offer at botslist.com.

SOLUTION #2: USE FORWARD VERIFICATION EXCLUSIVELY


If you insist on doing the verification on your production web servers instead of using an isolated server or signing up for a third-party database, then consider using the forward lookup verification method exclusively. I have described this verification method elsewhere. I will only say here that this method is much faster and more secure than the reverse lookup method that everyone else is promoting.
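
As a rough idea of what forward-only checking can look like, the sketch below, written in Python purely for illustration, resolves a locally maintained list of crawler hostnames and matches the visitor's ip against the results. The hostname list is hypothetical and the method I describe in the earlier article differs in its details, but the key property is the same: every dns query goes to a name you chose, never to servers picked by the ip address that just hit you.

import socket

# Hypothetical, locally maintained list of crawler hostnames you have chosen to trust.
KNOWN_CRAWLER_HOSTS = ["crawler.small-engine.example"]

def forward_verify(remote_ip):
    # Return True if remote_ip is among the addresses of a known crawler hostname.
    for host in KNOWN_CRAWLER_HOSTS:
        try:
            _, _, addresses = socket.gethostbyname_ex(host)   # forward lookup only
        except socket.gaierror:
            continue
        if remote_ip in addresses:
            return True
    return False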

Also, when I first described this method over a year ago, the botslist.com database contained only one or two records of servers that passed the forward lookup test. Today, however, I'm very pleased to count twelve matching records in the database, as you can check from this link: http://www.botslist.com/search?name=x-verifiedmeth:%20uahome. The interesting thing here is that because the small search engines pass the forward lookup test, verifying them is much faster and safer than verifying the big search engines!

I think we can expect many smaller search engines to continue to take the initiative on this issue while the big search engines continue to do their thing, perhaps until a few high-profile websites using the reverse lookup method are compromised with that method as the attack vector. To be forewarned is to be forearmed, as they say.