Wednesday, February 29, 2012

How to publish database changes – SQL Tools

Here is a scenario that database developers face on a regular basis: the implementation of application changes is often accompanied by changes in the database(s) that support the application. Some of the most common database changes required are:
  • Changing table column types to accommodate bigger values;
  • Adding new columns to tables;
  • Adding whole new tables;
  • Creating new views and / or modifying existing views;
  • Creating new stored procedures and / or modifying existing stored procedures.
You have completed your work and are now ready to publish the new build of your application. The challenge is that those database changes must be published together with the new build of the application; otherwise, your application will not work the way it is supposed to.
Here is how we managed those database changes prior to building xSQL Object: each developer working on the project had to script out every change they made on the database side (using the GUI tools to change the database schema was prohibited, other than for executing those change scripts), and diligently organize those change scripts, which were then submitted to the DBA responsible for reviewing and deploying them. The issues we faced with this approach were:
  • Missed changes – some change scripts would not make it to the script repository. Those missed database changes would only be discovered when the application errored out because an object was missing or a type didn't match, etc. Those missed changes would result in many wasted hours and much frustration. 
  • Failure to execute – some scripts would fail to execute successfully because of dependencies. Ordering the scripts is a painful, time consuming process especially when multiple developers are involved. 
  • Significant amount of time consumed in creating rollback scripts that would make it possible to reverse the changes if something went wrong.
Overall, this was a very expensive and error-prone process, and that was the motivation for us to build a tool that would completely automate it and eliminate those problems. The result was the release of version 1 of xSQL Object some 8 years ago. Here is how xSQL Object solves this problem:
  1. The developers can make the changes they need to make on the database without having to worry about keeping track of those changes and creating change scripts and rollback scripts. 
  2. When it is time to publish changes to staging or production environments you do the following:
    1. Launch xSQL Object and instruct it to compare the schema of the development database where those changes were made with the target database where you want to publish those changes. 
    2. xSQL Object will compare the schemas and show you exactly what changes have been made. It will then generate a synchronization script that, when executed, will apply all those changes to the target database in the proper order. It will also generate the “rollback” script that will allow you to reverse those changes if necessary. 
    3. Review the script and execute it against the target. 
    4. Done!
What used to be many hours of tedious work with questionable results is now an automated process: a few minutes of easy work with guaranteed results.
Download your copy of xSQL Object today and see for yourself what you have been missing. There is no better way to spend the next 5 minutes than downloading and installing a tool that will save you many valuable hours, and for which you will likely not have to pay anything, since the free lite edition is probably all you need.

Friday, February 24, 2012

Just enough free software

No one writes code for altruism. No one wakes up one beautiful day and decides to spend the next few thousand hours of his life staring and cursing at a computer monitor for the sole purpose of helping others. Whether explicit or not, the profit motive is the real driving force. So the question we as software publishers face is not whether we should give something out for free for the sake of helping the world or charge for it for the sake of profit; the question, and the challenge, is determining how much free is just enough free.

If you give out more than necessary you are sure to face a slow and painful "death" while, somewhat paradoxically, your software's popularity grows. If, for example, what you give out for free takes care of 90% of potential customers' needs, you will hardly sell anything and will likely be "in the red" from start to finish. Your fall is made even more painful by the fact that all those users who have benefited from your work gradually grow to feel entitled: they demand that you continue improving your software, yet they will throw a fit if you dare to suggest that they should pay.

On the other hand, if you don't give out enough free there is a good chance that a low cost startup imitator or even a more powerful competitor will undercut you and potentially put you in a "death" spiral – the return on the investment you made starts to diminish rapidly, you are forced to make cuts on product development and marketing, the product gets outdated, sales start dropping and you, or at least that product, will soon be history.
How do you go about finding the right balance? I think the only safe way is to start by giving out little to nothing for free. Give it sufficient time so that you have enough data to establish a reliable benchmark. Next, take another small step by increasing the value of what you give out for free and monitor the numbers very carefully. Keep repeating the process, a little bit at a time. Chances are you will find that as you increase the value of what you give out for free, up to a point, the following will happen:
  • The number of people using your product keeps increasing;
  • The percentage of people buying your product vs. people using your product keeps decreasing;
  • The number of people who buy your product keeps increasing.
The moment you see the number of people buying your product stagnate or start to go down, you know that you have reached the "magic" balance, or that you have gone a bit too far, and it is time to stop.
How about going the other way? I think that is very hard, if not impossible, to do: if you start by giving everything out for free, the moment you decide to start charging is the moment you will see your entire user base "evaporate" rapidly.
Here is the complete list of our products where you can see what we have decided to give out for free:
Redundant disclaimer: please note that this is a very simplistic view – in reality, this process is significantly harder to manage as those numbers mentioned above are affected by a multitude of factors that I have "conveniently" left out of the discussion.

Tuesday, February 21, 2012

Instead of a LIKE search

Here is a scenario you might face: let's say you have a table called "customers" that, among other things, has a column called "AccountNumber" defined as varchar(20). Since searching by account number is a very common query, to improve its performance and guarantee a faster response time for your users you have decided to create an index on the AccountNumber column. The situation is complicated a bit by the fact that those account numbers come from multiple sources, and in some cases the number may be padded with zeroes in front depending on the source. You know that the max length of the account number, including padding zeroes, is 20 characters, but the number of padding zeroes varies. Meanwhile, when your users do a search they may or may not include the padding zeroes; either way your query should find the row(s) the user is looking for. Easy enough, you can write something like:
  SELECT * FROM customers
  WHERE udf_trimZeroes(accountnumber) = udf_trimZeroes(@accountnumber)

or something like:
  SELECT * FROM customers
  WHERE accountnumber LIKE '%' + udf_trimZeroes(@accountnumber)

Either one of those queries will produce the results you are seeking; however, in both cases, albeit for different reasons, the optimizer will choose to do a clustered index scan, which is very expensive and results in a slow response time. You have already added an index on AccountNumber, but the optimizer is ignoring it! So, what can you do? Here is a simple solution that will significantly improve this query:

First: we take the @accountnumber parameter that is passed to our stored procedure and build a small table variable containing all the possible values of this account number by padding the number with leading zeros up to the max length of 20 characters.
  DECLARE @AccountNumber_Table TABLE (
    [AccountNumber] [varchar](20) NOT NULL
  )
  -- strip out the leading zeroes first
  SET @AccountNumber = SUBSTRING(@AccountNumber, PATINDEX('%[^0]%', @AccountNumber), LEN(@AccountNumber))
  DECLARE @counter int
  SET @counter = LEN(@AccountNumber)
  WHILE @counter <= 20
  BEGIN
    INSERT INTO @AccountNumber_Table VALUES (REPLICATE('0', @counter - LEN(@AccountNumber)) + @AccountNumber)
    SET @counter = @counter + 1
  END

So, if the account number the user is searching for is, let's say, 12345678 then the above table variable will contain the values 12345678, 012345678, 0012345678 and so on. Now, instead of the WHERE clause, we simply join our "customers" table with the table variable we created:
   SELECT customers.*
   FROM customers INNER JOIN @AccountNumber_Table AS ant ON
         customers.AccountNumber = ant.AccountNumber

The optimizer will now do an index seek on your AccountNumber index and return the results in a fraction of the time, completely eliminating the performance issues that a clustered index scan on a large table would cause.
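Putting the pieces together, the whole lookup can live in one stored procedure along these lines (a sketch; the procedure name usp_CustomerByAccountNumber is made up for illustration, while the customers table and the 20-character maximum come from the example above):

```sql
CREATE PROCEDURE usp_CustomerByAccountNumber
    @AccountNumber varchar(20)
AS
BEGIN
    -- strip any leading zeroes the user may have typed
    SET @AccountNumber = SUBSTRING(@AccountNumber,
        PATINDEX('%[^0]%', @AccountNumber), LEN(@AccountNumber))

    -- build every zero-padded variant of the number, up to 20 characters
    DECLARE @AccountNumber_Table TABLE ([AccountNumber] varchar(20) NOT NULL)
    DECLARE @counter int
    SET @counter = LEN(@AccountNumber)
    WHILE @counter <= 20
    BEGIN
        INSERT INTO @AccountNumber_Table
        VALUES (REPLICATE('0', @counter - LEN(@AccountNumber)) + @AccountNumber)
        SET @counter = @counter + 1
    END

    -- the join lets the optimizer seek on the AccountNumber index
    SELECT c.*
    FROM customers c
        INNER JOIN @AccountNumber_Table ant ON c.AccountNumber = ant.AccountNumber
END
```

A call like EXEC usp_CustomerByAccountNumber '0012345678' and a call like EXEC usp_CustomerByAccountNumber '12345678' should then return the same rows.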

PS: check out our xSQL Profiler – it allows you to schedule and run traces and automatically collect trace data from multiple servers. It is free for one SQL Server instance.

Monday, February 20, 2012

SQL Data Compare mapping rules explained

By default xSQL Data Compare will automatically "map" tables from the two databases being compared based on the full object name, that is [schemaname].[objectname]. For example, table [database1].[dbo].[Customer] will be mapped to table [database2].[dbo].[customer]. For most real-life scenarios this automatic mapping is sufficient; however, there are many cases when the tables on the two sides cannot be mapped using this approach because:
  1. The same tables may be owned by different schemas on different databases;
  2. Table names might not be exactly the same – for example, on one database the customers table might be named dev_customer whereas on another it might be named prod_customer.
One utility that xSQL Data Compare provides to let you map tables regardless of their names is the "compare tables" functionality. It allows you to manually map a table from database1 to any table from database2 regardless of schema name or table name. This is perfect when you only have a few tables to compare, but when you have hundreds or even thousands of tables this manual mapping is not practical. This is where the xSQL Data Compare "mapping rules" come in. There are two name-based mapping rules that you can control:
  1. Schema name mapping rules. Using the schema mapping rules you can instruct xSQL Data Compare to:
    1.  Do an exact match on schema names (default behavior). In this case [database1].[dbo].[customer] will be mapped to [database2].[dbo].[customer] but [database1].[dev].[customer] will not be mapped to [database2].[prod].[customer]
    2. Completely ignore the schema name. In this case [database1].[dev].[customer] will be mapped to [database2].[prod].[customer]. 
    3. Manually map schema names. You could, for example, map the schema "dev" to the schema "prod", in which case xSQL Data Compare will automatically map [database1].[dev].[customer] to [database2].[prod].[customer]
  2. Table name mapping rules. Using this rule you can instruct xSQL Data Compare to:
    1. Ignore a certain name prefix on the left database and a certain name prefix on the right database when performing the object mapping. For example, you could say ignore the prefix “dev” on the left and the prefix “prod” on the right. In this case, the table [database1].[dbo].[dev_customer] will be mapped to [database2].[dbo].[prod_customer]  
    2. Ignore a certain name postfix on the left database and a certain name postfix on the right database.
Download your copy of xSQL Data Compare and see for yourself why this is the best SQL data compare tool on the market; on top of that, you might not have to pay anything for it.

Wednesday, February 15, 2012

Script Executor now supports SQL 2012

A new build of Script Executor with support for SQL Server 2012 is now available for download. Script Executor allows you to execute multiple T-SQL scripts, in the order you desire, against multiple SQL Server, MySQL and DB2 databases. It also allows you to package a set of scripts into an executable that can be easily deployed to remote clients. A command line utility is also included.

Tuesday, February 14, 2012

How to improve the performance of your .NET site

Following are some suggestions on how you can improve the performance of your .NET site. 
  • Use SqlDataReader as much as possible. It’s the lightest and the fastest way to read data from the database. Avoid DataSet unless you have no choice. 
  • Do not use exceptions for flow control in your code. Catching exceptions is expensive in terms of resources. 
  • Use HTML server-side controls instead of ASP.NET web controls when it is possible to do so. HTML server-side controls are HTML elements with the attribute runat=”server” and they map to a specific HTML element. Web server controls, on the other hand, which start with asp: (asp:label, asp:panel, asp:button), are function-oriented. There is no HTML element called “label”, for example; ASP.NET might render it as a “span” on the client. This is easily overlooked because ASP.NET controls are tempting and rich in functionality, but they generate more code on the client. In many cases HTML server-side controls are sufficient. 
  • Use the ASP.NET grid, list box and other data-bound controls carefully. They are great controls, but they tend to generate a lot of code on the client. If you don't need the extra functionality provided by the grid, for example, use asp:Repeater, which is the lightest and fastest ASP.NET data-bound control.
  • Determine whether ASP.NET view state is needed for a page. If not, turn it off so that the page is not loaded with unnecessary extra stuff.
In addition to these .NET-specific suggestions there are many generic guidelines that apply to all websites regardless of the platform on which they are built. You can "scout" the web and try to assemble a comprehensive list, OR you can simply run your site through the PageSpeed tool. PageSpeed will provide specific, targeted recommendations on how you can improve your site and also point out what you have done right. Those recommendations constitute, in fact, a fairly comprehensive list of guidelines. Of course, as is always the case, you must use your judgment to determine whether a certain suggestion would be helpful for your actual scenario, but those recommendations are a good starting point.

Lastly, if you found this article interesting, chances are you develop with .NET / SQL Server, in which case you will find our database comparison and synchronization tools, xSQL Object and xSQL Data Compare, very helpful in your day-to-day work. The tools are free for SQL Server Express with no limitations and also free for other editions of SQL Server depending on the number of objects in the database. Download your copy from:

Monday, February 13, 2012

xSQL Documenter v4.5 released

We just released xSQL Documenter v4.5 which includes a number of enhancements and bug fixes. Here is a complete list of changes:
  • Support for SSRS in SharePoint Integrated mode.
  • Support for Informix 9.
  • Support for exclusion lists in SSRS (Microsoft Report Server)
  • Support for fetching permissions objects in Oracle.
  • Support for fetching dependencies between objects in different schemas in Oracle.
  • Support for drawing data model diagrams with GraphViz.
  • Improved performance when excluding a large number of objects.
  • Added textboxes for delimiters in the GUI.
  • Bug fix: exclude indexes if parent table is also excluded.
  • Bug fix for DB2 if can't get pagesize.
  • Bug fix: fixed a bug for Oracle where functions weren't being returned unless DocumentProcedure was set TRUE.
  • SSIS fix: take server name from file path for DTSX files if the path is UNC.
xSQL Documenter supports virtually every database management platform including SQL Server, Oracle, DB2, MySQL, Informix, Sybase, Teradata, VistaDB, SQLite, PostgreSQL etc. You can download the free trial from:

Friday, February 10, 2012

Can't create an account in RSS Reporter

This question comes up every so often: I am creating a new user in the RSS Reporter interface, I have entered all the required information, but when I click on the "Add and Continue" button I get the error: "Failed to save the user information due to insufficient permissions. Write access to the path C:\inetpub\wwwroot\RssReporter\App_Data is denied".

RSS Reporter stores the user and profile information in the App_Data subfolder located under the main installation folder. You should grant write NTFS permission on this subfolder to the IIS worker process account. If RSS Reporter is installed on Windows XP or Windows 2000, grant write permission to the machinename\ASPNET account (where machinename is the name of the computer on which RSS Reporter is installed); if you are using Windows Server 2003, grant write permission to the NETWORK SERVICE account.

RSS Reporter generates standard RSS feeds from SQL Server and can be used to monitor SQL jobs from multiple servers as well as to generate feeds containing the results of ad-hoc queries. RSS Reporter is free for one SQL Server instance. You can download your copy from:

Find SQL Server Database Objects quickly with the free xSQL Object Search

As a database administrator or developer you often need to search for database objects such as tables, stored procedures and views in one or more databases. You might be trying to find a stored procedure whose name contains a certain string or, say, a function that contains a certain string in its definition. You can, of course, find the objects you are looking for by querying the system tables that contain the names and definitions of those objects, but why waste your time when you can do this better and faster with our free xSQL Object Search?
With xSQL Object Search you can quickly search one or all of the databases on a SQL Server instance at once. xSQL Object Search allows you to search SQL Server 2005 and SQL Server 2008 databases for objects that match a search pattern. The pattern you specify is matched against two properties of the database objects: name and definition. The name of an object is the identifier used when the object is created; it is the same name that appears in the Management Studio object tree. The definition, for the purpose of the search, is the T-SQL that defines the object. For a stored procedure, for example, the definition is the statement that was used to create the stored procedure. The definition of an object is not the same as the T-SQL script that can be used to re-create that object: a table, like all database objects, can be created via a T-SQL script, but it has no definition in the database. The following are the database objects that have a definition:
  • Views
  • Stored Procedures (excluding CLR procedures, which have no definition in the database)
  • User-Defined Functions (excluding CLR functions)
  • Defaults (both constraints and stand-alone defaults)
  • Rules
  • DML Triggers (excluding CLR triggers)
  • Database DDL Triggers
  • Server DDL Triggers
xSQL Object Search can search the names of the objects, their definitions (when the object has one) or both. The results of the search against each property are combined with AND or OR depending on the preferences to produce the final result set.
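For comparison, the "query the system tables yourself" route mentioned at the top looks roughly like this on SQL Server 2005/2008 (a sketch; the sys.sql_modules catalog view only has rows for the object types listed above, hence the LEFT JOIN, and the search string is made up):

```sql
DECLARE @pattern nvarchar(128)
SET @pattern = N'%customer%'   -- the string you are searching for

SELECT o.name, o.type_desc
FROM sys.objects o
    LEFT JOIN sys.sql_modules m ON m.object_id = o.object_id
WHERE o.name LIKE @pattern          -- match on the name...
   OR m.definition LIKE @pattern    -- ...OR on the definition, when one exists
```

This is essentially the OR combination of the two searches; changing OR to AND gives the other combination mentioned above.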

xSQL Object Search is free and does not require any installation or licensing, simply download and run - no hassle and no strings attached.

Thursday, February 9, 2012

LinkedIn ads - puzzled

From a software vendor's point of view, or any vendor's point of view for that matter, LinkedIn appears to be an ideal platform for promoting products. There are not many places where the audience is, by definition, exactly the audience you are looking for and exactly the audience that would benefit from your products. For example: if you target a SQL Server group like "SQL Server Developers" or "SQL Server Professionals" you know that, other than the ever-present recruiters, everybody else is involved with SQL Server in one capacity or another. So, if the product you are promoting through your ad is directly related to SQL Server, chances are that a number of those group members might be interested.
So we created a couple of simple ads that clearly state what the products we are promoting are about. The ad for xSQL Object reads: "SQL Schema Compare", "Compare and synchronize SQL Server Schema. Free lite edition. Supports 2008/2005/2000"; and the ad for Oracle Data Compare reads "Oracle Data Compare", "Compare and synchronize data between two databases – Oracle 9i/10g/11g". We carefully chose the groups we wanted to target with those ads. 
The minimum bid that LinkedIn enforces is $2 per click: pretty steep, but if the people who are clicking are the ones we are looking for it may still be feasible, so up to this point all is good. Now we are live, and that is where the puzzle begins:
  1. First we see that the number of impressions quickly goes into the thousands, but the click-through rate is really, really low. This part of the puzzle turns out to be easy to solve: it seems that since we bid the minimum allowed, that is $2 instead of the LinkedIn-recommended bid of $3.27 to $4.05, one needs to scroll about a mile (a bit of exaggeration here) to reach the bottom of the page for a given group, where one might finally see the ad in question. So, of course, most of those impressions (maybe over 99%) are worthless since virtually nobody ever sees the ad. Aren't we glad we did not choose to pay per impression! 
  2. Now we start looking at the few clicks that, at $2 each, are eating that budget up very fast, and herein lies the biggest puzzle: almost everyone who clicked arrived at the one page the ad linked to but left as soon as they got there! Here is why I find this puzzling:
    • I cannot imagine anyone in this world that would click on an ad just to pass the time;
    • The ad is not trying to lure people to click; quite the opposite. Unless comparing and synchronizing SQL Server database schemas sounds like something you might be interested in, there is no reason for you to click on that ad;
    • The audience is as close to the perfect audience as you can get;
    • The pages they see (xSQL Object and Oracle Data Compare respectively) provide exactly the information you must be looking for since you clicked on that advertisement.
So, who is clicking on our ads!? We don’t know yet but what we know is that those people are certainly not looking for our products!
If you have had a positive experience with LinkedIn ads please do write to us or just comment on this blog. We would love to hear from you, as we still believe that LinkedIn can be very valuable to us, but maybe we are just approaching this the wrong way.

Free alternative for comparing and synchronizing SQL database schema

Is there a free alternative to SQL schema compare tools? Yes there is: it is called xSQL Schema Compare and it is one of the best SQL compare tools you can find. Is it completely free for everybody? Based on our statistics, the free lite edition of xSQL Schema Compare is all that 95% of potential users need, and here is why:
  • As far as functionality is concerned the free lite edition is exactly the same as the $499 professional edition;
  • If the databases you are comparing and synchronizing are SQL Express databases there are no restrictions – the experience is exactly the same as if you had purchased a professional license;
  • Only when at least one of the databases in the comparison is on a SQL Server edition other than the Express edition do we count the number of objects in that database; if the number is greater than certain limits we have set, you will be kindly asked to purchase a license.
The companion tool, xSQL Data Compare (you download both tools in one package) is also free for SQL Server Express with no limitations and free for other editions of SQL Server with some restrictions.

Wednesday, February 8, 2012

Download SQL Server 2000

Surprisingly, there are still a lot of people asking "where can I download SQL Server 2000 from?" so here are a couple of helpful links:
Note that SQL Server 2000 is reaching the end of its life (support for it ends in April 2013), so consider downloading SQL Server 2008 instead unless you have no choice.
Also, here are some of our tools that support SQL Server 2000:
  • xSQL Object allows you to compare and synchronize database schemas and it is free for MSDE 2000;
  • xSQL Data Compare allows you to compare and synchronize data between two databases and it is also free for MSDE 2000;
  • xSQL Profiler allows you to trace multiple SQL Server instances, including MSDE instances, and it is free for one SQL Server instance;
  • Script Executor allows you to execute SQL scripts against multiple databases;
  • xSQL Documenter generates CHM and HTML documentation for any database, including SQL Server 2000 and MSDE 2000;
  • RSS Reporter allows you to generate RSS feeds from SQL Server and it is free for one SQL Server instance.
Applies to:
  • download SQL Server 2000
  • descargar SQL Server 2000
  • download MSDE
  • download SQL Server tools

Tuesday, February 7, 2012

Listen to your customers, or not!

The most vocal customers you have, those that download the latest builds of your products, those that discover the bugs in your code, and take the time to make their opinion of your products known to you and sometimes to the rest of the world as well, are a great asset to your product development effort. However, if you are not careful those customers can lead you down the wrong path and put the future of your product in jeopardy. 
There are two particular characteristics that endear those customers to you:
  1. They are usually testing the limits of your product and are looking to do more with it; 
  2. They understand and appreciate the effort that goes into making those products.
Those two characteristics make you vulnerable: subconsciously you place those customers on a totally different level, and you have the desire to please and impress them. Consequently, suggestions and feature requests coming from this small subset of customers tend to get "promoted" at a much faster pace and approved at a much higher rate, often overlooking financial viability. 
From the marketing point of view, those vocal customers have the power to "make" or "break" your product: a positive, unsolicited comment carries the potential to catapult your product to the top whereas a negative comment can quickly sink your product to the bottom where no one will ever see it again. 
So, what shall you do: listen to your customers or not? The key to answering this is clarifying what "listening" means. Listening to your customers does not mean rushing to implement whatever they tell you, but rather:
  • Thoroughly understanding not just what the customer is suggesting or requesting, but why the customer is making that suggestion: what exactly is the situation that this, let's say, new feature will help the customer with? If you dig deep enough you may discover that you have already addressed that situation somewhere. For example: we often get inquiries about our database comparison API (xSQL SDK) when in fact the customer is really looking for something like xSQL Builder.  
  • Evaluating the request from the business and financial perspective on its own merit, completely separating the request from the source.  This is important not only for the customer requests but for the internal suggestions as well. 
  • Very important: communicating your decision back to the customer. Whether you have decided to adopt your customer’s suggestion or not is irrelevant, what matters is telling the customer what you have decided and candidly explaining your decision.
In conclusion: yes, do listen to your customers, their feedback is truly invaluable, but don’t let yourself be steered in the wrong direction. 
PS if you work with SQL Server consider spending the next 5 minutes reviewing our SQL Server Comparison tools, xSQL Object and xSQL Data Compare, or if you work with Oracle take a look at our Oracle Data Compare. Those tools are worth your time – you will agree.

Friday, February 3, 2012

DELETE vs. TRUNCATE TABLE

DELETE FROM [table name] without a WHERE clause and TRUNCATE TABLE [table name] will have the same effect on the database: all the rows will be deleted from your table (assuming that the deletion of those rows does not violate any constraints). So which one should you use? The answer lies in the differences between the two statements:
  • The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by de-allocating the data pages used to store the table's data, and only the page de-allocations are recorded in the transaction log. 
  • TRUNCATE resets the counter used by an identity column for new rows back to the seed for the column, whereas DELETE does not reset the counter.
  • TRUNCATE TABLE cannot be used on a table referenced by a FOREIGN KEY constraint even if no violation of the constraint would occur; in fact, a TRUNCATE on a table referenced by a FOREIGN KEY fails even if the table is empty. DELETE, on the other hand, will succeed as long as deleting the rows does not violate the constraint.
  • TRUNCATE does not fire delete triggers, whereas a DELETE statement does (inside the trigger, the removed rows are available in the deleted pseudo-table).
  • TRUNCATE TABLE may not be used on tables participating in an indexed view.
TRUNCATE TABLE is a lot faster and consumes significantly fewer resources than DELETE, but you have to choose the right statement based on your particular requirements, not simply on which one runs faster.
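The identity counter difference in particular is easy to see with a throwaway table (a sketch; the table and values are made up for illustration):

```sql
CREATE TABLE demo (id int IDENTITY(1, 1), name varchar(10))
INSERT INTO demo (name) VALUES ('a')
INSERT INTO demo (name) VALUES ('b')

DELETE FROM demo                      -- rows removed one at a time, counter untouched
INSERT INTO demo (name) VALUES ('c')
SELECT id FROM demo                   -- returns 3

TRUNCATE TABLE demo                   -- pages de-allocated, counter reset to the seed
INSERT INTO demo (name) VALUES ('d')
SELECT id FROM demo                   -- returns 1
```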

If you read this article chances are that our SQL Data Compare tool would be very helpful to you - it is free for SQL Server Express with no limitations. Download your copy from:

Thursday, February 2, 2012

The danger of subqueries on a T-SQL delete statement

How can a sub-query wreak havoc on your data? Consider this: you are doing some cleanup on your database and need to delete from table [t1] all rows whose id refers to rows in a table [t2] that match a certain criteria; let's say rows created before a given date. Without thinking twice you go ahead and write:

      delete from [t1]
        where id in (select id from [t2] where "some criteria")

You expect a few records will be deleted from table [t1] - you click on execute... you can't believe your eyes, SQL Server Management Studio is reporting that 200 thousand rows were deleted, oh no, the whole content of table [t1] was wiped out! How could this be? You are panicking... you realize you broke your own rules but first you want to understand how could this innocent query cause this. You examine it closer and you realize that there is no column named id on table [t2] - but shouldn't that have caused the sub-query, and consequently the whole query to fail?!

Well, it didn't, did it? Here is why: if you run the sub-query by itself it will indeed fail (assuming there is no column named id in table [t2]), but when that sub-query is part of the bigger query things behave differently. Under standard SQL scoping rules, an unqualified column that cannot be resolved in the sub-query is resolved against the outer query - so the reference id binds to table [t1]. Regardless of the sub-query's criteria, it therefore returns the id of the very row being processed from [t1], the IN condition is true for every row, and all rows in [t1] are deleted.
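You can reproduce the behavior with a throwaway pair of tables (the table and column names here are hypothetical):

```sql
-- [t1] has an id column; [t2] deliberately does not.
CREATE TABLE t1 (id INT);
CREATE TABLE t2 (name VARCHAR(50), created DATETIME);

INSERT INTO t1 VALUES (1), (2), (3);
INSERT INTO t2 VALUES ('x', '2011-06-01');

-- Run on its own, this fails with "Invalid column name 'id'":
--   SELECT id FROM t2 WHERE created < '2012-01-01';

-- As a sub-query, id silently binds to t1.id, so every row of t1
-- matches and all three rows are deleted:
DELETE FROM t1
 WHERE id IN (SELECT id FROM t2 WHERE created < '2012-01-01');

-- Qualifying the column restores the compile-time error and
-- protects your data:
--   DELETE FROM t1
--    WHERE id IN (SELECT t2.id FROM t2 WHERE t2.created < '2012-01-01');
```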

Here are a few simple rules that anyone working with data should follow religiously:
  1. Always qualify column names - had you done that, the query would have failed to compile and your data would have been safe;
  2. Never execute a delete statement without wrapping it in a transaction, so that you can roll back if you realize you made a mistake;
  3. Write the query as a SELECT first - execute it, see how many and which rows will be affected, and only when you are certain the query returns the rows you want, replace the "select * from" with "delete from".
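The three rules above combine into one simple habit. The sketch below illustrates the pattern (the tables and the [t1_id] column are hypothetical):

```sql
-- Step 1: preview the rows as a SELECT, with every column qualified.
SELECT t1.*
  FROM t1
 WHERE t1.id IN (SELECT t2.t1_id
                   FROM t2
                  WHERE t2.created < '2012-01-01');

-- Step 2: once the preview looks right, run the DELETE inside an
-- explicit transaction so a mistake can still be undone.
BEGIN TRANSACTION;

DELETE FROM t1
 WHERE t1.id IN (SELECT t2.t1_id
                   FROM t2
                  WHERE t2.created < '2012-01-01');

-- Check @@ROWCOUNT and spot-check the data, then:
COMMIT TRANSACTION;   -- or ROLLBACK TRANSACTION if anything looks wrong
```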
Lastly, whenever you mess around with production data, make sure your backups are good and keep xSQL Data Compare handy, as it will enable you to selectively restore only what you need without overwriting everything else.

Wednesday, February 1, 2012

SQL Server trigger security - granting privileges you are not authorized to

Both DML and DDL triggers execute under the context of the user that caused the trigger to fire – in other words, if for example we have a DML trigger that fires whenever a row is deleted from table T then the trigger will fire under the context of the user that executes the delete statement. Does this tell you anything about the potential inherent risk with triggers? Unlikely, until you consider this: a rogue developer writes a DDL trigger that looks something like this:

         CREATE TRIGGER DDL_RogueDev
            ON DATABASE
            FOR ALTER_TABLE
         AS
            GRANT CONTROL SERVER TO RogueDev;

Now, if RogueDev tries to get the trigger fired so that he can grab server control, his attempt will fail, since he does not have permission to grant himself CONTROL SERVER. But remember what we said above: what is going to happen when the DBA with full control goes and alters a table in the database? You guessed it - he will inadvertently grant server control to RogueDev! Oops!

There are trigger security best practices the DBA can follow, like maintaining a strict inventory of DML and DDL triggers in the database and on the server instance, disabling triggers, etc. However, a proactive DBA can do more - he can use triggers themselves to protect his servers and databases against people like RogueDev. How? Here is one simple example - a DBA could write a trigger that looks something like this:

       CREATE TRIGGER no_grant_server
          ON ALL SERVER
          FOR GRANT_SERVER
       AS
          PRINT 'What do you think you are doing!'
          ROLLBACK;

What does this do? Any time DDL_RogueDev fires and attempts to grant server control to RogueDev, the no_grant_server trigger will fire and prevent it from happening, no matter under what authority DDL_RogueDev may be running. Instead of printing a silly message you could log the attempt, send notifications, etc.

Another quick and easy measure would be to utilize a tool like xSQL Object to take regular snapshots of the schema and compare those snapshots with each other and with the live database to determine what database objects may have been added or altered.

Such security countermeasures are not hard to implement, yet few DBAs bother until they have been burned once.

One-click SQL Data Compare

Comparing and synchronizing data in two SQL Server databases is, most of the time, a fairly straightforward process: you select the databases you want to compare; xSQL Data Compare maps the tables and views automatically, identifies and selects the comparison keys, and performs the comparison, at the end of which it shows you the results on the screen. At that point you can click a button and generate the synchronization script which, if executed on the target, will make the target the same as the source. So, the interaction required is minimal.
However, there are often times when the process is not very simple. Here are some possible complications:
  • Two identical tables might belong to different schemas in the two databases, so with the default settings xSQL Data Compare will not map them to each other. You either have to change the mapping rules to ignore the schema, or manually map the two tables to each other;
  • You might have tables with no primary key defined and no unique constraint that can serve as the comparison key. In these cases you will need to drill down to those object pairs and manually select a comparison key, which can be a combination of the table's columns;
  • You might want to completely exclude some tables from the comparison;
  • You might want to tweak the behavior of the comparison engine by adjusting the options to your needs;
For a relatively large database you might spend hours preparing the comparison. That is where xSQL Data Compare's "comparison session" saves the day - once you go through the comparison configuration process, every "bit" of the configuration is stored in a session. The next time you launch xSQL Data Compare you will see a box on the main window for that session with a "Launch" action link; one click on that link and the comparison runs with all your saved settings - many hours saved.

xSQL Data Compare supports SQL Server 2008 R2/2008/2005/2000 and it is free for SQL Server Express – download your copy from: