Monday, December 17, 2012

SQL Schema Compare with Snapshot Utility

We just published a new build of xSQL Schema Compare for SQL Server. Here are the highlights of the new build:
  1. Snapshot Utility - allows you to take database schema snapshots via the command line. You can now, for example, automatically take a daily snapshot of the database schema and then, whenever you need to, go back and see what schema changes were made from one snapshot to another or from one snapshot to the live database.
  2. Snapshot Converter - a standalone tool that allows you to convert, in bulk, database snapshots created with previous versions of xSQL Schema Compare.
  3. UI support for old schema snapshots - adds UI support for snapshots created with previous versions of xSQL Schema Compare.
  4. New command line features - object exclusion by type or name, a new element for complex schema filters, new elements for comparison options, etc.
  5. Bug fixes - fixes some issues with the Command Line utility.
You can download the new build of xSQL Schema Compare for SQL Server from our site: http://www.xsql.com/download/package.aspx?packageid=10

Wednesday, December 12, 2012

A deal a day for the rest of 2012

If you visit our site today you will notice that the rolling product slides at the top have been replaced by a single, more "festive" slide - click on the "LEARN MORE" button to see what the deal of the day is.

For the next 19 days of 2012 we will be running many different exciting promotions - visit the site daily to see what's on for a given day.

Note: the deals are distributed randomly - they will NOT get progressively better or worse as we approach the end of the year so if you see a deal that appeals to you don't wait for a better one as it may never come.


Today's deal is buy one license and get a second one free.

Tuesday, November 27, 2012

SQL Schema Compare – include dependent objects

The ultimate goal of a database schema comparison and synchronization tool, like xSQL Schema Compare for SQL Server (aka xSQL Object), is to allow database administrators and database developers to safely and efficiently propagate database schema changes from one environment to another, most commonly from development to staging to production.

By default, xSQL Schema Compare for SQL Server includes all database objects in the comparison and synchronization and arranges the synchronization script in such a way as to respect the dependencies between the database objects, so that the script can be successfully executed on the target.

However, the user often needs to run a partial comparison and synchronization that includes only certain objects. Let's say the user chooses to synchronize the database view vw_01 but neglects to include table tbl_01 on which this view depends. In a scenario like this there is a good chance that the synchronization script will fail to execute successfully on the target (table tbl_01 might not exist in the target or might be different). The user will then have to identify the cause of the failure and go back and include table tbl_01 in the synchronization. But that is not the end of the story: there may be many similar dependencies involved, and the script will keep failing. Going through this painful and expensive process of identifying the object dependencies through trial and error would defeat the purpose of a schema comparison and synchronization tool.
xSQL Schema Compare for SQL Server provides a critical comparison option, "Include dependent objects", which is ON by default. After the user has selected the database objects to include in the synchronization, xSQL Schema Compare will automatically identify all the additional objects that must be included for the script to execute successfully on the target, and will include those objects in the proper order regardless of the user selection.
We often get support requests from users with the following question: "I have specifically excluded those database objects from the synchronization – why am I still seeing them in the synchronization script?!" The answer is the "Include dependent objects" option, whose purpose was explained above. You can easily turn the option off by un-checking the corresponding checkbox under comparison options (see the screenshot below).
xSQL Schema Compare for SQL Server is the best database schema compare tool on the market, available to you at an unbeatable price – free for SQL Server Express with no limitations, and also free for other editions of SQL Server with limitations based on the number of objects in the database (see editions here: http://www.xsql.com/products/sql_server_schema_compare/editions.aspx). You can download your copy today from: http://www.xsql.com/download/package.aspx?packageid=10

Monday, November 26, 2012

Schema Compare - now it's different, now it's not

When comparing two SQL Server database objects – two tables, two stored procedures, two triggers, two functions, etc. – the result of the comparison is not a clear-cut "equal" or "different". The result depends on what you care to compare. The simplest example that demonstrates this point is the issue of spaces in the object definition: let's say you are comparing [DB1].[SP1] with [DB2].[SP1], and further let's assume that both stored procedures are identical except for the fact that [DB2].[SP1] contains an extra blank line at the end of the definition. The question is: are the stored procedures [DB1].[SP1] and [DB2].[SP1] equal or are they different? In this particular case, if you only care about the functionality you would say that they are equal, but if you also want those stored procedures to appear visually identical then you would say they are different.
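To make the blank-line example concrete, here is a minimal sketch you can run yourself, assuming two databases named DB1 and DB2 that each contain dbo.SP1 (the names are illustrative):

 -- Pull the two definitions from the catalog views.
 DECLARE @def1 nvarchar(max), @def2 nvarchar(max);

 SELECT @def1 = m.definition
 FROM DB1.sys.sql_modules m
 WHERE m.object_id = OBJECT_ID(N'DB1.dbo.SP1');

 SELECT @def2 = m.definition
 FROM DB2.sys.sql_modules m
 WHERE m.object_id = OBJECT_ID(N'DB2.dbo.SP1');

 -- The byte counts differ because of the trailing blank line, so a strict,
 -- whitespace-sensitive comparison reports "different" even though the
 -- procedures are functionally identical.
 SELECT DATALENGTH(@def1) AS def1_bytes,
        DATALENGTH(@def2) AS def2_bytes,
        CASE WHEN @def1 = @def2 THEN 'equal' ELSE 'different' END AS strict_result;

A comparison option that ignores whitespace would treat the same pair as equal – which is exactly the kind of choice the tool leaves up to you.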

Like the "spaces in the definition" example above, there are many other elements that affect the result of the comparison for different types of database objects. xSQL Schema Compare for SQL Server exposes over 40 comparison options that drive the comparison engine and affect the results of the comparison between two SQL Server database schemas, giving you total control over the comparison and synchronization process.

While this level of flexibility and control is very helpful for many users, it is also often a cause of confusion for others. You compare table1 to table2 and they show as equal, but a minute later when you run a re-compare they show as different even though you did not make any changes to the structure of those two tables! After a closer look you realize that in the first compare you did not consider the "lock escalation property" whereas on the second run you did, and that caused those two tables to appear as different.
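For reference, here is what that scenario looks like in plain T-SQL – a sketch with illustrative table names, where the two tables are structurally identical and differ only in the lock escalation table option:

 -- Identical structure...
 CREATE TABLE dbo.table1 (id int NOT NULL PRIMARY KEY);
 CREATE TABLE dbo.table2 (id int NOT NULL PRIMARY KEY);

 -- ...but a different LOCK_ESCALATION setting on one of them.
 ALTER TABLE dbo.table2 SET (LOCK_ESCALATION = DISABLE);

 -- The property that makes them "different" only when the
 -- corresponding comparison option is considered:
 SELECT name, lock_escalation_desc
 FROM sys.tables
 WHERE name IN (N'table1', N'table2');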

So, does the user need to spend time understanding all those options in the new xSQL Schema Compare for SQL Server? No, absolutely not! In over 95% of database schema comparison and synchronization cases the user does not need to look at, or even be aware of, the existence of those comparison options – just compare, generate the synchronization script, and execute it on the target.

xSQL Schema Compare for SQL Server is free for SQL Server Express – no limitations. It is also free for other editions of SQL Server but with some limitations on the number of objects in the database.
Download your copy of xSQL Schema Compare for SQL Server here…

Wednesday, November 14, 2012

xSQL Schema Compare for SQL Server v4 released

Faster, safer, free (lite edition only) – publishing database schema changes to production has never been easier! The just released version 4 of xSQL Schema Compare for SQL Server (previously known as xSQL Object) is built on top of a completely redesigned, faster, scalable database schema compare engine, and brings a new streamlined, easier to use and much more efficient interface. Here are the highlights:
  • Full support for SQL Server 2012.
  • Enhanced support for SQL Server 2008 and SQL Server 2005 (some objects that were partially supported or not supported in previous versions are now fully supported).
  • Support for SQL Server 2000 has been dropped. If you still need to compare and synchronize SQL Server 2000 databases you can contact us to obtain the older version of the schema compare tool.
  • Over 100 options (object type inclusion/exclusion, comparison options, scripting options, and execution options) give you total control over the comparison and synchronization process.
More details on the new release can be found here: http://www.xsql.com/products/sql_server_schema_compare/release_notes/v4/
 
The new version will continue to be free for SQL Server Express with no limitations and also free for other editions of SQL Server with some limitations - details here:  http://www.xsql.com/products/sql_server_schema_compare/editions.aspx
 
Download your copy now and see what you have been missing: http://www.xsql.com/download/package.aspx?packageid=10

Monday, October 22, 2012

Comparing large databases – xSQL Data Compare

The task of comparing the data in two databases seems fairly simple and straightforward until you start considering the size of the databases involved. To understand why, let's walk through a common scenario and see what it takes to handle this seemingly simple task.
Let's say you have database DB1 on server S1, hosted in a data center somewhere on the west coast, and a version of the same database DB1 on server S2, hosted in a data center somewhere on the east coast. Let's further say that DB1 contains approximately 100 GB of data and you need to:
  • Determine what's different in those two databases
  • Selectively push changes from the west coast database to the east coast one.
Lastly, let's say that you, the database master, reside somewhere in Atlanta and have a nice setup with a powerful machine and a high bandwidth connection to both servers.
 
In a default scenario here is what will happen:
  • You will read about 100GB of data from server S1 and another 100GB of data from server S2 
  • You compare the data and record the differences
  • You need a way to efficiently display those differences so that you can review them
  • You need to generate a change script that depending on the number of differences may get to be very large
  • Finally you need to execute the script against the target database
So what's the problem you say? 
  • Even with a blazing fast connection it will take a long time to transfer 200GB worth of data from those servers to your machine
  • While you probably have sufficient disk space, you can't bring much of that data into memory – so how do you compare two tables with millions of rows each when only a small portion of those rows will fit in memory?
  • What do you do with, let's say, a 10 GB change script? How do you execute that against the target database?
There are many data compare tools, especially SQL Server data compare tools, in the market today but, unfortunately (or maybe fortunately for us), most of them will simply fail at this task – some of them will fail very quickly while others may take a while to crash.
Our SQL Server Data Compare and its "twin sister" Oracle Data Compare are the very rare exception: if you have sufficient disk space those tools will not "choke" no matter how large the database is. Those tools will:
  • efficiently process the large data sets while tightly controlling the memory usage
  • Display the data differences on an easy-to-read interface that will seamlessly page through the large datasets
  • Successfully execute the script in configurable “chunks” no matter how large the script may be
While SQL Server Data Compare and Oracle Data Compare will handle a database of any size, the process will still take a very long time if the databases are large. Therefore, we have introduced two extremely useful features to make the process more efficient:
  • Row filters: if you know, as is often the case, that on a given table any rows with a modifieddate earlier than a given date have not changed and do not need to be compared, then you add a row filter on that table pair and, instead of comparing for example 10 million rows, you only compare 100K rows – now that is a huge reduction in the workload! (See the sketch after this list.)
  • Column exclusions: SQL Server Data Compare and Oracle Data Compare allow the user to choose which columns on a given table to exclude from the comparison and synchronization. For example, a table that contains a text column might have 5GB worth of data in it, but if you exclude that one text column the table only has 100MB of data – again a huge reduction in the workload.
  • Lastly, you only need to go through this configuration once – SQL Server Data Compare and Oracle Data Compare will remember every single comparison configuration detail so next time you can launch the comparison with a single click.
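Since a row filter is simply a WHERE-clause predicate evaluated on both sides, you can gauge the reduction before configuring it with a couple of quick queries like these (the table and column names are hypothetical):

 -- How many rows would actually be left to compare with the filter applied?
 SELECT COUNT(*) AS rows_to_compare
 FROM dbo.Orders
 WHERE ModifiedDate >= '20121101';   -- the candidate row filter

 -- How much of the table's bulk does a candidate column exclusion carry?
 SELECT SUM(CAST(DATALENGTH(Notes) AS bigint)) / 1048576.0 AS notes_megabytes
 FROM dbo.Orders;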
Download the free, fully functional trial today and see for yourself.

Friday, October 12, 2012

t-sql list of countries

On February 15, 2022 we released easy lookups by xSQL Software https://lookups.xsql.com/ - you can now get the countries list and other commonly used lists for free. You can consume those lists directly as JSON/XML services in your JavaScript apps, or download them in one of the following ready-to-use formats:
  • JSON
  • XML
  • CSV
  • SQL Server insert statements
  • Oracle insert statements
>>> original article below - links replaced with the new lookup link

The script, which you can download from the link below, creates a "countries" table and populates it with the complete ISO list, including the Numerical Code of the country, the name of the country in both lower case and upper case letters, the ISO ALPHA-3 code, and the ISO 3166-1-alpha-2 code.
A couple of notes:
  1. the Countries table created by the script does not have a Primary Key defined on it - you will need to define one.
  2. The following territories are listed on the ISO site but not on the UN site, so I did not include them in the script:
UNITED STATES MINOR OUTLYING ISLANDS, UM
TAIWAN, PROVINCE OF CHINA, TW
HEARD ISLAND AND MCDONALD ISLANDS, HM
SOUTH GEORGIA AND THE SOUTH SANDWICH ISLANDS, GS
VIRGIN ISLANDS, BRITISH, VG
FRENCH SOUTHERN TERRITORIES, TF
CHRISTMAS ISLAND, CX
COCOS (KEELING) ISLANDS, CC
BOUVET ISLAND, BV
ANTARCTICA, AQ
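For reference, here is a minimal sketch of what the resulting table might look like, with a primary key added per note 1 above – the column names and types are assumptions, not the script's exact definitions:

 CREATE TABLE dbo.Countries (
     NumericalCode  smallint     NOT NULL,  -- numerical code
     CountryNameLC  varchar(100) NOT NULL,  -- name in lower case letters
     CountryNameUC  varchar(100) NOT NULL,  -- name in upper case letters
     Alpha3Code     char(3)      NOT NULL,  -- ISO ALPHA-3
     Alpha2Code     char(2)      NOT NULL   -- ISO 3166-1-alpha-2
 );
 ALTER TABLE dbo.Countries
     ADD CONSTRAINT PK_Countries PRIMARY KEY (Alpha2Code);

 -- Sample row:
 INSERT INTO dbo.Countries VALUES (8, 'albania', 'ALBANIA', 'ALB', 'AL');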

You can download the script from here: https://lookups.xsql.com/

What are Sequence Numbers in SQL Server 2012

If you have worked with SQL Server for a while you have either built yourself, or at least seen somewhere, some sort of a GetNextValue(MyTable) function that returns the next available sequential ID for the given table. The way it worked was: you would have a small table with two columns (TableName, NextID), and the above function would read the corresponding ID from this table and increment the NextID column. Why was this necessary when you could use the auto-incremented identity column? A couple of reasons: you needed the ID before you inserted the row in the table; you wanted to be able to re-use those IDs; and you wanted to use the ID as a foreign key on dependent tables. You made this solution work and it was OK; however, you had to handle concurrency (multiple processes requesting new IDs for the same table), and this process often became a bottleneck causing unexpected performance issues.

The uniqueidentifier type (with the NEWSEQUENTIALID() function added in SQL Server 2005) was great, but often that is not what you want: instead of a 36-character GUID you want a sequential number.

SQL Server 2012 addresses all those problems with a new user-defined schema-bound object called sequence that generates a sequence of numeric values according to a set of specifications. Here is how it works:
  • first, you create a sequence that you intend to use for a certain purpose as
         CREATE SEQUENCE MySchema.MyFirstSequence
            AS int  /* can be any built in integer type or a user defined integer type */
            START WITH 1
            INCREMENT BY 1 ;
  • Whenever we need to grab the next number in the sequence we simply refer to this object as NEXT VALUE FOR MySchema.MyFirstSequence
Here are a couple of examples of using the sequence object we created above:
 
Example 1: INSERT INTO MyTable (ID, SomeValue) VALUES (NEXT VALUE FOR MySchema.MyFirstSequence, 'some value')

Example 2: Let's say we need to create a record in MyTable and also a couple of dependent records in MyDependentTable where MyTable.ID participates as a foreign key. In that case we need to first grab the next ID and then use it in our insert statements, so we would do something like this:
DECLARE @nextID int
SET @nextID = NEXT VALUE FOR MySchema.MyFirstSequence

 INSERT INTO MyTable (ID, SomeValue) VALUES (@nextID, 'some value')
 INSERT INTO MyDependentTable (MyTableID, OtherColumn) VALUES (@nextID, 'other value 1')
 INSERT INTO MyDependentTable (MyTableID, OtherColumn) VALUES (@nextID, 'other value 2')


Here are a few things to note:
  • you don't have to worry about concurrency, SQL Server takes care of that
  • SQL Server takes care of the performance as well - unlike your home-built "next sequential value" mechanism, the sequence object is unlikely to ever cause any issues for you.
  • you can set min and max values for the sequence and you can cycle through the values in that range (see the sketch after this list)
  • important: the sequence numbers are generated outside the scope of the current transaction and they are "consumed" even if the transaction that requested the next value is rolled back
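A minimal sketch of such a min/max/cycling sequence (the name is illustrative):

          CREATE SEQUENCE MySchema.MyCyclingSequence
             AS int
             START WITH 1
             INCREMENT BY 1
             MINVALUE 1
             MAXVALUE 1000
             CYCLE      /* restart at MINVALUE once MAXVALUE is reached */
             CACHE 50 ; /* pre-allocate numbers to reduce disk I/O */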
For a thorough explanation of the sequence object, the syntax for creating, dropping and altering a sequence, as well as examples of using the sequence object, you can check out these two MSDN articles: http://msdn.microsoft.com/en-us/library/ff878058.aspx and http://msdn.microsoft.com/en-us/library/ff878091.aspx

Note: the next version of our schema compare tool, xSQL Object, which will be released next month, will include support for the sequence object as well.

Thursday, October 11, 2012

What is the SQL Server Database Engine Tuning Advisor?

SQL Server Database Engine Tuning Advisor helps you select and create an optimal set of indexes, indexed views, and partitions without requiring an expert understanding of the structure of the database or the internals of SQL Server.
 
SQL Server Database Engine Tuning Advisor analyzes a workload and the physical implementation of one or more SQL Server databases. A workload is a set of Transact-SQL statements that execute against a database or databases that you want to tune. Database Engine Tuning Advisor uses trace files, trace tables, or Transact-SQL scripts as workload input when tuning databases. You can create Transact-SQL script workloads with the Query Editor in SQL Server Management Studio. You can create trace file and trace table workloads by using the Tuning Template in SQL Server Profiler.
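As an illustration, a Transact-SQL workload can be as simple as a script file containing statements representative of your production activity, for example (the tables here are hypothetical):

 -- workload.sql
 SELECT o.OrderID, o.OrderDate, c.CompanyName
 FROM dbo.Orders AS o
 JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
 WHERE o.OrderDate >= '20120101';

 SELECT ProductID, SUM(Quantity) AS TotalQuantity
 FROM dbo.OrderDetails
 GROUP BY ProductID;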
 
SQL Server Database Engine Tuning Advisor analyzes the workload and can recommend that you add, remove, or modify physical design structures in your databases. The advisor can also recommend what statistics should be collected to back up physical design structures. The physical design structures include clustered indexes, nonclustered indexes, indexed views, and partitioning. SQL Server Database Engine Tuning Advisor recommends a set of physical design structures that reduces the query optimizer's estimated cost of the workload.

SQL Server Database Engine Tuning Advisor Capabilities

Following is a summary of what the SQL Server Database Engine Tuning Advisor can do for you:
  • Recommend the best mix of indexes for databases by using the query optimizer to analyze queries in a workload.
  • Recommend aligned or non-aligned partitions for databases referenced in a workload.
  • Recommend indexed views for databases referenced in a workload.
  • Analyze the effects of the proposed changes, including index usage, query distribution among tables, and query performance in the workload.
  • Recommend ways to tune the database for a small set of problem queries.
  • Allow you to customize the recommendation by specifying advanced options such as disk space constraints.
  • Provide reports that summarize the effects of implementing the recommendations for a given workload.
  • Consider alternatives in which you supply possible design choices in the form of hypothetical configurations for Database Engine Tuning Advisor to evaluate.
SQL Server Database Engine Tuning Advisor Features

SQL Server Database Engine Tuning Advisor provides new features that enable both novice and experienced SQL Server database administrators to tune databases for better query performance. The following sections list and describe the Database Engine Tuning Advisor new features:

  • Improved Workload Parsing - handles batches that reference transient tables such as temp tables; parses and tunes queries referencing user-defined functions; tunes statements in triggers and a lot more.
  • Enhanced Scalability - uses workload compression, which allows SQL Server Database Engine Tuning Advisor to reduce the amount of time spent tuning while producing high quality tuning recommendation results; uses an algorithm to avoid redundant statistics creation, which reduces the number of I/Os incurred during tuning.
  • Integrated Tuning - SQL Server Database Engine Tuning Advisor can consider the trade-offs in performance that are offered by various types of physical design structures (indexes, indexed views, partitioning).
  • Multiple Database Tuning - applications often access multiple databases to complete their work, so workloads frequently refer to objects from multiple SQL Server databases. SQL Server Database Engine Tuning Advisor can simultaneously tune multiple databases.
  • Offload of Tuning Overhead to Test Server - tuning a large workload can create significant overhead on the server that is being tuned. This occurs because SQL Server Database Engine Tuning Advisor often needs to make several calls to the query optimizer during the tuning process. Using a test server in addition to your production server eliminates this problem.
  • Command-Prompt Utility and Graphical User Interface Tool - SQL Server Database Engine Tuning Advisor provides the dta command-prompt utility to make it easy to incorporate Database Engine Tuning Advisor functionality into scripts, and also provides a graphical user interface (GUI) tool, which makes it easy to view tuning sessions and results.
  • Drop-Only Tuning - physical design structures may accumulate over time in a SQL Server database, and database administrators need a way to drop those structures that are no longer useful. The drop-only tuning option is useful for identifying existing physical design structures that are either not used at all or have very little benefit for a given workload.
  • XML Support - all of the tuning operations that you can perform with the SQL Server Database Engine Tuning Advisor GUI and the dta command-prompt utility can be defined in a Database Engine Tuning Advisor XML input file. 
  • User-specified Configuration and "What-if" Analysis Support - SQL Server Database Engine Tuning Advisor enables users to provide a hypothetical configuration of physical design structures (indexes, indexed views, and partitioning strategies) as tuning input. You can then tune a database or databases as if the configuration were already implemented.
  • Analysis Reports - SQL Server Database Engine Tuning Advisor generates several analysis reports in text or in XML after a tuning session completes. These reports provide information such as the cost of queries occurring in the workload, the frequency of events that occur in the workload, or the relationships between queries and the indexes they reference.
  • Tuning Sessions - SQL Server Database Engine Tuning Advisor prompts you to create a unique name or number to identify the tuning session before the analysis can begin. After analyzing a workload or when analysis is stopped, SQL Server Database Engine Tuning Advisor saves the tuning session information and associates it with the identifier you supplied before tuning began.
 Related Tools: If you need to trace multiple SQL Servers on a schedule, or if you wish to run a profiler trace on SQL Server Express or MSDE, you can download the free (free for one SQL Server instance only) xSQL Profiler. xSQL Profiler allows you to configure granular traces and schedule those traces to run on multiple servers at certain time intervals, and it automatically aggregates all the trace data from those servers into a central repository database. Download your free copy here

Wednesday, October 10, 2012

To automate or not to automate

For every routine task we do there is a tool out there that would help reduce the time and effort required to complete the task, as well as ensure a more consistent and accurate outcome. So why do we so often choose to forgo the opportunity for improvement? Here are a few reasons that don't need much analysis:
  • Job security – the innate fear that automation will render our position obsolete.
  • Resistance to change – we do things this way because we have always done them this way; or, why try to fix something that is not broken.
  • Being always under pressure and adopting short-term thinking – right now I need to take care of this task, I don't have time to spend on researching how to do it faster and better.
  • Underestimating the cumulative effect of manual tasks – a task that only takes, let's say, 15 minutes a day to complete does not "scream" I need automation, BUT in one year you will end up spending about 10 full days of work on that task.
  • Overestimating the cost of automation – it will take too much time and money to acquire and implement a solution.
None of those reasons constitutes a good, rational argument to forgo automation. On the other hand there is the other extreme: automating for the sake of automating. The decision on whether to automate a process or task should rest on a rigorous, objective method grounded in numbers, not feelings.
 
To devise such a method we need to first answer the question: what will the decision be based on? While there are various scenarios, the most common one is that in which the decision is based purely on financial considerations; that is, will the automation pay off? Will we get a return on the investment? Here is the list of factors that you will need to consider in order to answer this question:
  • Estimated value of "labor" -> VT ($/hour)
  • Estimated time to complete task -> TT (hours)
  • Estimated frequency of task -> TF (#times / month)
  • Estimated time required to identify the right tool for the job -> RT (hours)
  • Estimated time required for implementing the automated solution -> AT (hours)
  • Estimated initial cost for acquiring the tool -> TIC ($)
  • Estimated cost for maintaining the tool -> TMC ($ / year)
  • Estimated time required to complete the task after automation -> TTAA (hours)
So the question is: how much time will it take to break even? Or, stated differently, how long will it take for this automation to pay for itself?

Let t denote the number of months it will take to reach this "magic" break-even point, after which we start making a positive return on our investment. Note that the task still occurs TF times per month after automation, so the break-even point will be reached when:
VT*TT*TF*t = RT*VT + TIC + TMC/12 * t + AT*VT + TTAA*VT*TF*t
 
So, consequently, the break-even time can be calculated as:
 t = (RT*VT + TIC + AT*VT) / (VT*TT*TF - TMC/12 – TTAA*VT*TF)
 
Let's use this on a real-life example. Assume you are a DBA and your weekly routine includes deploying tens of scripts into your large server farm, and let's say that this takes one full hour of your time. Let's further assume that based on your compensation, benefits, infrastructure etc. your cost to the company is approximately $70 / hour. You estimate that it will take you about 2 hours to identify a solution and one more hour to implement the automated solution in your environment. After the automation the one hour task will be reduced down to 6 minutes of monitoring and reviewing the results. As for the estimated costs of the solution let's use some real numbers – the best solution in the market for this scenario is Script Executor which costs $249 for the license plus $50 / year for the upgrade subscription. So, we have:
  • VT = $70 / hour
  • TT = 1 hour
  • TF = 4 times / month
  • RT = 2 hours
  • AT = 1 hour
  • TIC = $249
  • TMC = $50 / year
  • TTAA = 6 minutes = 0.1 hours
Break even: t = (2 * 70 + 249 + 1 * 70) / (70 * 1 * 4 – 50 / 12 – 0.1 * 70 * 4) => t ~ 1.9 months
 Note that if TMC/12 + TTAA*VT*TF >= VT*TT*TF then the investment will never pay off in pure financial terms so, unless there are other important considerations, this will be a no-go.
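If you want to sanity-check the arithmetic, here is the same calculation as a quick T-SQL snippet – the variable names simply mirror the factors above:

 DECLARE @VT   decimal(9,2) = 70,    -- value of labor, $/hour
         @TT   decimal(9,2) = 1,     -- hours per task
         @TF   decimal(9,2) = 4,     -- task frequency, times/month
         @RT   decimal(9,2) = 2,     -- research hours
         @AT   decimal(9,2) = 1,     -- implementation hours
         @TIC  decimal(9,2) = 249,   -- initial tool cost, $
         @TMC  decimal(9,2) = 50,    -- tool maintenance, $/year
         @TTAA decimal(9,2) = 0.1;   -- hours per task after automation

 SELECT (@RT*@VT + @TIC + @AT*@VT)
      / (@VT*@TT*@TF - @TMC/12 - @TTAA*@VT*@TF) AS break_even_months;
 -- returns ~1.85, i.e. about 1.9 months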
 
So, what is a good t? That depends, but generally, anything under 6 months is usually a no-brainer. However, as there are always other potential investments competing for the same budget, you will need to compare the parameters of this investment with those of the other candidates to make the right decision.
 
You can download a fully functional free trial of Script Executor from:  http://www.xsql.com/download/package.aspx?packageid=40 
 

Monday, October 8, 2012

Bidirectional synchronization using xSQL Data Compare

xSQL Data Compare is designed to compare the data in two SQL Server databases and generate the change script to make database A the same as database B or vice versa - in other words, it is designed to go in only one direction at a time. However, while this one-directional synchronization is very helpful in various scenarios, it does not address the needs of users who manage distributed databases and who, for one reason or another, are not able to implement a SQL replication solution. While not an ideal solution, with a bit of effort those users can use xSQL Data Compare to address this issue.
So, let's say you have a copy of database DB1 on server S1 and a copy of it on server S2. Both servers/databases are active, so they are both being modified simultaneously. Now, let's say that you are in charge of synchronizing those two copies of the database every night. Before you can synchronize, let's say, table T1 on S1.DB1 with table T1 on S2.DB1 you will need to make a couple of structural changes:
  1. First let's assume that table T1 has a TimeStamp column that will be used to decide which record overrides which.
  2. Now, add column S2_TimeStamp on table S1.T1 and add column S1_TimeStamp on table S2.T1. You will need to make sure that the new timestamp columns are automatically updated anytime the row is modified.
  3. Create a view for S1.T1 and a view for S2.T1, aliasing the timestamp columns in such a way that S1.T1.TimeStamp pairs up with S2.T1.S1_TimeStamp and S2.T1.TimeStamp pairs up with S1.T1.S2_TimeStamp. The view on S1 will contain the following columns: T1.ID, T1.TimeStamp as TimeStamp1, T1.S2_TimeStamp as TimeStamp2, whereas the view on S2 will contain: T1.ID, T1.TimeStamp as TimeStamp2, T1.S1_TimeStamp as TimeStamp1. This will ensure that the columns of S1.T1 are automatically mapped to the columns of S2.T1 (see the sketch after this list).
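To illustrate step 3, here is a sketch of the two views – illustrative names, with ID standing in for T1's key columns:

 -- On server S1:
 CREATE VIEW dbo.wT1 AS
     SELECT ID,
            [TimeStamp]  AS TimeStamp1,  -- S1's own timestamp
            S2_TimeStamp AS TimeStamp2   -- mirror of S2's timestamp
     FROM dbo.T1;

 -- On server S2 the aliases are swapped, so the columns map automatically:
 CREATE VIEW dbo.wT1 AS
     SELECT ID,
            [TimeStamp]  AS TimeStamp2,  -- S2's own timestamp
            S1_TimeStamp AS TimeStamp1   -- mirror of S1's timestamp
     FROM dbo.T1;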
You will only need to do the above once for each table you wish to do a bidirectional data synchronization for. At this point everything is ready to set up the comparison and synchronization. Here are the steps:
  1. First, run a compare of the two views you created above and then:
    1. Exclude wT1.TimeStamp2 column and generate the script that makes S2.wT1 the same as S1.wT1. Execute the script – this will update the S2.T1 table with the timestamp from S1 without touching the S2 timestamp. 
    2. Now include wT1.TimeStamp2 and exclude wT1.TimeStamp1 and generate the script that makes S1.wT1 the same as S2.wT1. Execute the script – this will update the S1.T1 table with the timestamp from S2 without touching the S1 timestamp.
  2. Now you are ready for the final step – compare table S1.T1 with S2.T1 adding a WHERE clause for each side to exclude the rows where the timestamps are the same. You will need to repeat this process twice: first, you only include rows where S1.T1.TimeStamp > S1.T1.S2_TimeStamp – you will need to "push" those rows from S1 to S2; then, you include only the rows where S2.T1.TimeStamp > S2.T1.S1_TimeStamp – you will need to push those rows from S2 to S1.
 You can use the SQL Data Compare command line utility to automate all those comparison steps.
 
xSQL Software's Data Compare is free for SQL Server Express Edition.

Download Data Compare for SQL Server now. 

Thursday, October 4, 2012

Query Multiple Tables that may not exist using TRY CATCH to avoid errors

I had the privilege of working on a document management system recently.  Unfortunately, for whatever reason, new tables were created for each year.  For example, documents imported into the database in 2011 were in table Documents_2011.  There were over 200 databases and, to make matters worse, not all databases contained every year.  If the client started in 2008, there were tables starting from 2008.  If they started in 2011, the first table would be 2011.  After running a few thoughts through my head, I decided to try the TRY...CATCH block.

I first found the earliest year that was used.  In this case it was 2002.  Start by creating a temp table.  Next, use a WHILE loop and start at the first year possible.  Dynamically create an INSERT INTO SELECT statement.  Execute the statement.  Add 1 to the year and continue looping until the current year.

Since this database required scripts to be run at the beginning of each year, this stored procedure would not have to change since it dynamically queries all tables from START year to CURRENT year.

CREATE PROCEDURE SelectDocumentsToBeArchived
 @RetentionPeriod INT
AS

DECLARE @YEAR INT = 2002
DECLARE @SQL VARCHAR(8000) = ''

DECLARE @CUTOFFDATE DATE
SET NOCOUNT ON
SET @CUTOFFDATE = DATEADD(MONTH, -@RetentionPeriod, GETDATE())
CREATE TABLE  #tmp_ToBeArchived (DocID BIGINT,[FileName] VARCHAR(256))

WHILE @YEAR <= YEAR(GETDATE())
BEGIN
 BEGIN TRY
  -- If DOCUMENTS_<year> does not exist, EXEC(@SQL) fails and control
  -- passes to the CATCH block below; that year is simply skipped.

  SET @SQL = 'INSERT INTO #tmp_ToBeArchived SELECT DocID, [FileName]'
  SET @SQL = @SQL + ' FROM DOCUMENTS_' + CAST(@YEAR AS VARCHAR(4)) + ' DOCUMENTS'
  SET @SQL = @SQL + ' WHERE DOCUMENTS.DateImported < ''' + CAST(@CUTOFFDATE AS VARCHAR(10)) + ''''
  SET @SQL = @SQL + ' AND ISNULL(DOCUMENTS.Deleted,0) = 0'

  --PRINT @SQL
  EXEC(@SQL)

 END TRY
 BEGIN CATCH
  PRINT @SQL
  PRINT ERROR_MESSAGE()
 END CATCH

 SET @YEAR = @YEAR + 1
END
;

SET NOCOUNT OFF
SELECT #tmp_ToBeArchived.* FROM #tmp_ToBeArchived
 ORDER BY DocID

DROP TABLE #tmp_ToBeArchived
GO

Wednesday, June 6, 2012

SQL Merge Replication – beware of identity columns

You have set up SQL Server merge replication between your primary site and your DR (disaster recovery) site – this allows you to use the DR site as an overflow site when you have volume spikes on the primary site and the bottleneck is not SQL Server. It is a pretty cool setup indeed with SQL Server seamlessly taking care of keeping both databases in sync with each other. You have further tested the failover to the DR site multiple times and everything worked flawlessly, so you assure the management that you have implemented a “fool-proof” system and as far as SQL Server is concerned any failure on the primary site will be transparent to the end users.

Sooner or later, as it inevitably does, disaster strikes and your primary site is completely out of commission. As planned and expected, the DR site takes over and everything works fine – at first. A couple of hours into the disaster a mission-critical application starts failing. You start digging into your logs and see a "unique constraint violation" error which does not seem to make sense, and the first thing that comes to mind is that the developers must have screwed up somewhere and you might need to revert to an older version of that application. In the meantime operations have come to a complete halt and you don't know where to hide.

So what caused this? No, it wasn't the developers that messed up; the root cause is the way SQL Server Merge Replication handles identity columns. You can read this Microsoft article for a more complete overview http://msdn.microsoft.com/en-us/library/ms152543.aspx but, in short, to avoid "clashing" of the IDs between the primary and secondary servers, the primary server is in charge of assigning blocks of IDs to the secondary server that do not overlap with the IDs the primary is using, and of making sure that the secondary always has IDs available. But if the primary server is not available for an extended period of time, the secondary might exhaust all the available IDs, after which any inserts into that table will fail.

There are multiple ways you can deal with this, like managing the ID blocks so that sufficiently large blocks are assigned to the secondary; monitoring the usage of those IDs on the secondary and stopping the replication altogether if necessary; or avoiding the use of identity columns as primary keys for tables. However, the purpose of this brief article is to make you aware of this potential problem that you might face when using SQL Server merge replication for disaster recovery.
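As a starting point for that monitoring, something along these lines shows how close the secondary is to the end of its assigned range – merge replication typically enforces the assigned range with a check constraint on the identity column (the table name here is hypothetical):

 -- Current identity value on the subscriber table:
 DBCC CHECKIDENT (N'dbo.MyReplicatedTable', NORESEED);

 -- The range ceiling is visible in the replication-generated
 -- check constraint on the identity column:
 SELECT cc.name, cc.definition
 FROM sys.check_constraints cc
 WHERE cc.parent_object_id = OBJECT_ID(N'dbo.MyReplicatedTable');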

Don't forget to check out our SQL Server Data Compare and Synchronization tool: http://www.xsql.com/products/sql_server_data_compare/ - it is one of the most useful sql server tools and it also comes in a free lite edition.

Monday, May 7, 2012

xSQL Comparison Bundle – new build available

A new build of xSQL Comparison bundle containing a few minor fixes is available for download: http://www.xsql.com/download/package.aspx?packageid=10
Issues addressed by this build include:
  1. Schema Compare tool (xSQL Object)  - fixes an issue with check constraints that depend on user-defined functions. Symptom:  database schema comparison fails with the error “Failed to generate check constraints synchronization script. Object reference not set to an instance of an object”; 
  2. Schema Compare tool (xSQL Object) - fixes an issue with default constraints that depend on user-defined functions. Symptom: database schema comparison fails with the error “Failed to generate default constraints synchronization script. Object reference not set to an instance of an object”;
  3. SQL Data Compare tool - fixes a problem that occurs when the first column of the chosen data comparison key is created as a descending column. Symptom: SQL Data Compare tool fails to identify the rows that exist in both tables being compared. 
  4. SQL Data Compare tool - fixes a problem with the data candidate index defined on columns that allow NULL values. Symptom: Data Compare prepares update queries with where clause as “and fieldname = null” in cases when the key used includes a nullable column.

Thursday, April 19, 2012

Today's free product code

Today's free license code will appear on one of the product pages at 12 noon (valid for that product only). As explained in the email the code will appear at the end of the product description at the top of the product page.  Time zone: (UTC-05:00) Eastern Time (US & Canada)

Wednesday, April 18, 2012

My database is a jungle, I need an intelligent search engine

Thousands of stored procedures and no good way to find the one you are looking for – so what do you do? You write your own stored procedure, not only wasting valuable hours but also contributing to the "density of the jungle", making it even harder to find anything in the future. Multiply this by 260 days a year and 10 other people like you, and you get the picture. My guess is that if one were to do a complete inventory of the stored procedures in a database, one would likely find a lot of repetition – multiple stored procedures created by different people at different times doing exactly the same thing.

What if a Google-type search engine could take queries like "get all north-west region customers that purchased product X during a given period of time", search through the database schema, and return all the stored procedures that potentially answer this query, as well as the list of tables that would potentially be involved in answering it? That would be beautiful, wouldn't it?

However, the search engine would need some help to be able to do this. Adding extended properties was a great step in the right direction - at least someone who is diligent enough has a way to tag objects with keywords and phrases that can later be used to efficiently locate those objects. I would argue that it would be beneficial if there were at least two predefined mandatory extended properties: "description" and "keywords". I know that many DBAs and developers might find such enforcement a bit annoying, but in the long run I think most of them would be thankful.
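For example, tagging a stored procedure with those two properties takes just a couple of calls to sp_addextendedproperty (the object name and property values here are illustrative):

 EXEC sys.sp_addextendedproperty
      @name = N'description',
      @value = N'Returns north-west region customers that purchased a given product in a given period',
      @level0type = N'SCHEMA',    @level0name = N'dbo',
      @level1type = N'PROCEDURE', @level1name = N'usp_GetNorthWestCustomersByProduct';

 EXEC sys.sp_addextendedproperty
      @name = N'keywords',
      @value = N'customers, region, north-west, purchases, product',
      @level0type = N'SCHEMA',    @level0name = N'dbo',
      @level1type = N'PROCEDURE', @level1name = N'usp_GetNorthWestCustomersByProduct';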

Related products/tools:

SSMS - lets you filter objects based on name, schema, owner and creation date, but it does not provide for filtering based on the extended properties or the definition of the object. However, you can always query the system tables - not ideal, but better than nothing.
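For instance, a query along these lines searches every module's definition for a term:

 SELECT s.name AS [schema], o.name AS [object], o.type_desc
 FROM sys.sql_modules m
 JOIN sys.objects o ON o.object_id = m.object_id
 JOIN sys.schemas s ON s.schema_id = o.schema_id
 WHERE m.definition LIKE N'%customer%';   -- the search term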

xSQL Documenter: will document the extended properties and the object definitions, so you can technically go into the documentation and search for an object. The drawback is that the documentation becomes outdated soon after it is generated. Of course you can refresh it, but that means more time.

Object Search:  a free tool that allows you to search in the definition of the object using a regular expression as the matching criteria. Current version does not look into the extended properties.

Monday, March 26, 2012

10 days, 10 free Oracle Data Compare licenses each day

Starting today (March 26, 2012) we will be giving away 10 Oracle Data Compare licenses (a $349 value) for free, every day for 10 days. The promo code ODC10DAYS will be active for a few minutes each day. The starting time and the estimated time interval will be announced at least 2 hours in advance on twitter (www.twitter.com/xsqlsoftware) and facebook (www.facebook.com/xsqlsoftware) every day. Once the first 10 licenses are claimed the promo code will be de-activated and you will have to wait until the following day to get your free license.

In all, during the next 10 days we will give away 100 Oracle Data Compare licenses, or $34,900 in free software.

Thursday, March 22, 2012

xSQL Builder – the tool for software publishers

If you publish software that uses SQL Server on the backend then you are familiar with the pain of upgrading your clients from one version of your software to another. Some of the most common challenges you face include:
  • The customers don't know, and don't want to know, anything about SQL Server or t-sql, so sending them a set of scripts to execute on the target server is not going to help;
  • Different customers may be on different versions of your software so you need to provide multiple upgrade paths;
  • You may have customized your solution for certain clients so the "standard" upgrade won't work for those cases;
  • You do not have direct access to the customers' databases;
  • You have hundreds of customers you need to distribute your software to.
Those were the challenges we set out to tackle when we decided to build the xSQL Builder. Here is how xSQL Builder works:
  1. A simple wizard guides you through a few steps that lead to the creation of an executable package that contains all the necessary information required to upgrade your customers’ databases. 
  2. When the customer executes the package the following happens:
    1. If applicable, a set of pre-synchronization scripts is executed against the target database;
    2. The schema of the target database is compared with the schema of the master database which has been embedded in the executable package;
    3. Synchronization script is generated and executed against the target;
    4. If applicable, a set of post-synchronization scripts is executed against the target;
    5. A detailed log is emailed back to you so that you know exactly what happened at the customer's site.
The beauty of this is that the deployment package "does not care" what version of your database the customer is on.
 
Of course the wizard is quick and easy, but what if you need to tweak something that the wizard does not provide for? We thought of that too – xSQL Builder comes with a set of customizable C# templates (C# classes) that contain the compare and synchronization code that runs on the client machine. xSQL Builder ships with a VS.NET 2005 C# project that you can run or modify as needed.
 
Download now and see for yourself what you have been missing all this time!

Monday, March 19, 2012

Paypal payment solutions - need improvement

Since Paypal (its parent at the time, eBay) acquired Verisign's payment gateway in 2005 we have been a Paypal customer, and aside from a complete screw-up a couple of years ago that left us without service for 3 days, everything has gone smoothly. However, the experience of a few days ago, when we decided to switch from Payflow Link to Payflow Pro, prompted me to write this quick review.
Just to be fair, let me say that the service we received from Paypal tech support this time was outstanding; the reps we spoke with were knowledgeable and courteous, so Paypal is doing something right. However, as is often the case, satisfaction does not often motivate one to write about it; it is the unpleasant part that provides that motivation. So, here are my grudges:
  1. Since the acquisition of Verisign's gateway services Paypal has made no improvements whatsoever to the Payflow Link solution, which is what we were using. It was embarrassing to "hand the customer over" to Paypal to finalize the payment – the customer was presented with an ugly form (the branding/customization capabilities were a joke) and an unnecessary two-step process that left customers with an uneasy feeling. I think Paypal should either put some serious effort into improving the solution or drop it altogether; it is just not serious to keep it the way it is right now.
  2. We made the switch to Payflow Pro, or so we thought. A call to Paypal tech support helped us realize that the developer that worked on this instead of implementing Payflow Pro had implemented Website Payments Pro Payflow Edition! How did that happen? The answer is not hard to guess:
    1. Confusing solution names make it hard to tell which one is which
    2. Confusing documentation makes it even harder to distinguish one solution from the other;
  3. Lastly, while implementing all three solutions (we had Payflow Link, mistakenly implemented Website Payments Pro, and finally Payflow Pro) we were surprised by the inconsistent implementation on the paypal side, three solutions – three different sets of parameters even though we are passing basically the same set of data.
Of course we understand that Paypal has every incentive to push vendors like us away from those inherited “gateway only” solutions towards their “full service” solutions but frustrating vendors is not the way to do it.

Monday, March 12, 2012

xsqlsoftware.com is now xsql.com

We are very excited to announce that as of March 10, 2012 the primary domain for xSQL Software is http://www.xsql.com. Here are a few housekeeping items:
  • Email addresses: all email addresses have been changed to the new domain so sales@xsqlsoftware.com is now sales@xsql.com and so on. The old addresses have been forwarded to the corresponding new addresses. 
  • License activation: if you have an older build of any of our tools and you have acquired a license, you might not be able to activate the license as the license activation services that were running under xsqlsoftware.com are no longer available. In that case you can either download a new build of the tool or generate a license request file and email it to us (instructions are provided on the license activation window).
  • Links to our site: if you have linked to our website from your blog or your site we would be grateful if you would make the effort of updating the url with the new domain. The old url should work fine too but it would be very helpful to us if you did update the url.
Chronology for the curious: when we launched our first web site in 2004 our desire was to get the xsql.com domain, but unfortunately we were not able to secure it then, so we decided to go instead with x-sql.com. In 2007, after a complete revamp of our site and our key tools, we decided that we did not like the dash in the url anymore and thought that it was better to have a longer url without dashes than a short one with dashes. Hence, our main domain became xsqlsoftware.com. Now, 8 years later, we have what we wanted in the first place – the easy to remember, easy to type, elegant url: http://www.xsql.com. And yes, this is the end of our quest for the perfect domain - no more changes.
If you have any questions or concerns don't hesitate to contact us and don't forget to check out our tools: http://www.xsql.com/download/

Wednesday, March 7, 2012

RSS Reporter fails to install on machine with IIS 7

Installation of RSS Reporter V3 may fail with a generic message: “The installer was interrupted before RSS Reporter could be installed. You need to restart the installer and try again.” But, as you can easily guess, restarting the installer is not going to do any good. If you check the event viewer you will see an MsiInstaller event 1033 that misleadingly states that “Windows Installer installed the product…Installation success or error status: 1603”.

The cause of the problem is a missing IIS component (the IIS Metabase) that by default is not installed in IIS 7, but which the Visual Studio installer that we use for RSS Reporter requires. In IIS 7 this feature can be found in setup under Internet Information Services -> Web Management Tools -> IIS 6 Management Compatibility. Check "IIS Metabase and IIS 6 configuration compatibility".

Once the installation of the metabase component is finished re-run the RSS Reporter installation package and it should install successfully.
RSS Reporter is a simple tool that generates standard RSS feeds from SQL Server. Predefined SQL job feeds allow you to monitor SQL jobs on multiple servers from a single RSS feed. Ad-hoc query feeds allow you to generate RSS feeds containing the data that a query returns.

RSS Reporter is free for a single SQL Server instance and very inexpensive for multiple SQL Server instances. You can download your copy here.

Tuesday, March 6, 2012

Lookup table changes

Whenever you publish a new version of your application, chances are that in addition to the database schema changes you also need to publish changes you have made to your lookup tables. xSQL Object makes publishing the schema changes a breeze, but how about the values in those lookup tables? The answer is xSQL Data Compare, which allows you to:
  • Select the tables you need to compare and sync
  • Filter out the rows that you might not want to sync
  • Review the differences 
  • Generate the sync script 
  • Execute the sync script (this step in the process will publish all the data changes you made on development into the production database).
In addition to the granular control over the process, there are three features that make xSQL Data Compare the ideal sql tool for the job:
  1. "Configure once": xSQL Data Compare saves all your comparison configuration settings in a comparison "session" so that next time you can launch the comparison with one click and complete the whole data synchronization process with two more clicks (one to generate the sync script and one to execute it).
  2. Command line utility: allows you to run the comparison and synchronization in one step from the command line. 
  3. It is completely free for SQL Server Express without any limitations and it is also free for other editions of SQL Server as long as the number of objects in your database does not exceed certain limits.
Download your copy today and discover for yourself why thousands of users around the world swear by this simple but extremely helpful sql tool.

Wednesday, February 29, 2012

How to publish database changes – SQL Tools

Here is a scenario that database developers face on a regular basis: the implementation of application changes is often accompanied by changes in the database(s) that support the application. Some of the most common database changes required are:
  • Changing table column types to accommodate bigger values;
  • Adding new columns to tables;
  • Adding whole new tables;
  • Creating new views and / or modifying existing views;
  • Creating new stored procedures and / or modifying existing stored procedures (a sketch of such change scripts follows this list).
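Each of those changes ultimately boils down to a change script; under the manual process described below, fragments like these (hypothetical objects) had to be captured and ordered by hand:

 ALTER TABLE dbo.Orders ALTER COLUMN Notes varchar(2000) NULL;  -- widen a column
 ALTER TABLE dbo.Orders ADD TrackingNumber varchar(50) NULL;    -- add a column
 GO
 CREATE TABLE dbo.OrderAudit (
     AuditID   int IDENTITY(1,1) PRIMARY KEY,
     OrderID   int NOT NULL,
     ChangedAt datetime NOT NULL DEFAULT GETDATE()
 );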
You have completed your work and are now ready to publish the new build of your application. The challenge is that those database changes must be published together with the new build of the application otherwise your application will not work the way it is supposed to.
Here is how we managed those database changes prior to building xSQL Object: the developers who were working on the project would have to script out every change they made on the database side (using the GUI tools to make changes to the database schema was prohibited, other than for executing those change scripts), and diligently organize those change scripts, which would then be submitted to the DBA who was responsible for reviewing and deploying them. The issues we faced with this approach were:
  • Missed changes – some change scripts would not make it to the script repository. Those missed database changes would only be discovered when the application would error out because an object was missing or a type didn't match, etc. Those missed changes would result in many wasted hours and frustration.
  • Failure to execute – some scripts would fail to execute successfully because of dependencies. Ordering the scripts is a painful, time consuming process especially when multiple developers are involved. 
  • Significant amount of time consumed in creating rollback scripts that would make it possible to reverse the changes if something went wrong.
Overall, this was a very expensive and error-prone process, and that was the motivation for us to build a tool that would completely automate it and eliminate the problems. The result was the release of version 1 of xSQL Object some 8 years ago. Here is how xSQL Object solves this problem:
  1. The developers can make the changes they need to make on the database without having to worry about keeping track of those changes and creating change scripts and rollback scripts. 
  2. When it is time to publish changes to staging or production environments you do the following:
    1. Launch xSQL Object and instruct it to compare the schema of the development database where those changes were made with the target database where you want to publish those changes. 
    2. xSQL Object will compare the schemas and show you exactly what changes have been made. It will then generate a synchronization script that, when executed, will make all those changes in the target database in the proper order. It will also generate the "rollback" script that will allow you to reverse those changes if necessary.
    3. Review the script and execute against the target. 
    4. Done!
From many hours of tedious work with questionable results we have automated the process and brought the time required to complete the task down to a few minutes of easy work with guaranteed results.
Download your copy of xSQL Object today and see for yourself what you have been missing. There is no better way to spend the next 5 minutes of your time than downloading and installing a tool that will save you many valuable hours, and for which you will likely not have to pay anything since the free lite edition is probably all you need.

Friday, February 24, 2012

Just enough free software

No one writes code out of altruism. No one wakes up one beautiful day and decides to spend the next few thousand hours of his life staring and cursing at a computer monitor for the sole purpose of helping others. Whether explicit or not, the profit motive is the real driving force, so the question we as software publishers face is not whether we should give something out for free for the sake of helping the world or charge for it for the sake of profit; rather, the question and the challenge we face is determining how much free is just enough free.

If you give out more than necessary you are sure to face a slow and painful "death" while somewhat paradoxically your software's popularity grows. If for example what you give out for free takes care of 90% of potential customers' needs you will hardly sell anything and will likely be "on the red" from start to finish. Your fall will become even more painful by the fact that all those users that have benefited from your work gradually grow to feel entitled – they demand that you continue improving your software while they will throw a fit if you dare to suggest that they should pay.

On the other hand, if you don't give away enough for free, there is a good chance that a low-cost startup imitator, or even a more powerful competitor, will undercut you and potentially put you in a "death spiral" – the return on your investment starts to diminish rapidly, you are forced to cut product development and marketing, the product gets outdated, sales start dropping, and you, or at least that product, will soon be history.

How do you go about finding the right balance? I think the only safe way is to start by giving away little to nothing for free. Give it sufficient time so that you have enough data to establish a reliable benchmark. Next, take another small step by increasing the value of what you give away for free and monitor the numbers very carefully. Keep repeating the process, a little bit at a time – chances are you will find that as you increase the value of what you give away for free, up to a point, the following will happen:
  • The number of people using your product keeps increasing;
  • The percentage of people buying your product vs. people using it keeps decreasing;
  • The number of people buying your product keeps increasing.
The moment you see the number of people buying your product stagnate or start to go down, you know that you have reached the "magic" balance, or have gone a bit too far, and it is time to stop.
How about going the other way? I think that is very hard, if not impossible, to do – if you start by giving everything away for free, the moment you decide to start charging is the moment you will see your entire user base "evaporate" rapidly.
Here is the complete list of our products where you can see what we have decided to give out for free: http://www.xsql.com/download/
Redundant disclaimer: please note that this is a very simplistic view – in reality this process is significantly harder to manage, as the numbers mentioned above are affected by a multitude of factors that I have "conveniently" left out of the discussion.

Tuesday, February 21, 2012

Instead of a LIKE search

Here is a scenario you might face: let's say you have a table called "customers" that, among other things, has a column called "AccountNumber" defined as varchar(20). Since searching by account number is a very common query, you have decided to create an index on the AccountNumber column to improve performance and guarantee a faster response time for your users. The situation is complicated a bit by the fact that the account numbers come from multiple sources, and depending on the source the number may be padded with leading zeroes. You know that the maximum length of an account number, including padding zeroes, is 20 characters, but the number of padding zeroes varies. Meanwhile, when your users do a search they may or may not include the padding zeroes - either way your query should find the row(s) the user is looking for. Easy enough, you can write something like:
 SELECT * FROM customers
 WHERE udf_trimZeroes(accountnumber) = udf_trimZeroes(@accountnumber)

or something like:
 SELECT * FROM customers
 WHERE accountnumber LIKE '%' + udf_trimZeroes(@accountnumber)

Either one of those queries will produce the results you are seeking; however, in both cases, albeit for different reasons, the optimizer will choose to do a clustered index scan, which is very expensive and results in a slow response time. You have already added an index on AccountNumber but the optimizer is ignoring it! The reasons differ: in the first query, wrapping the indexed column in a function makes the predicate non-sargable, and in the second, the leading wildcard rules out an index seek. So, what can you do? Here is a simple solution that will significantly improve this query:
First, we take the @accountnumber parameter that is passed to our stored procedure and build a small table variable containing all the possible representations of this account number, padding it with leading zeroes up to the maximum length of 20 characters.
  DECLARE @AccountNumber_Table TABLE
   (
    [AccountNumber] [varchar](20) NOT NULL
   )

  -- strip out the leading zeroes first
  SET @AccountNumber = SUBSTRING(@AccountNumber, PATINDEX('%[^0]%', @AccountNumber), LEN(@AccountNumber))

  -- insert the trimmed number plus every zero-padded variant up to 20 characters
  DECLARE @counter int
  SET @counter = LEN(@AccountNumber)
  WHILE @counter <= 20
   BEGIN
    INSERT INTO @AccountNumber_Table
     VALUES (REPLICATE('0', @counter - LEN(@AccountNumber)) + @AccountNumber)
    SET @counter = @counter + 1
   END


So, if the account number the user is searching for is, let's say, 12345678, then the table variable will contain the values 12345678, 012345678, 0012345678, and so on, up to the full 20-character padded form. Now, instead of the WHERE clause, we simply join our "customers" table with the table variable we created:
   -- the table variable must be aliased so its columns can be qualified in the join
   SELECT c.*
   FROM customers c INNER JOIN @AccountNumber_Table ant
        ON c.AccountNumber = ant.AccountNumber

The optimizer will now do an index seek on your AccountNumber index and return the results in a fraction of the time, completely eliminating the performance issues that a clustered index scan on a large table would cause.
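For reference, here is a minimal sketch of the whole technique wrapped in a stored procedure; the procedure name is hypothetical and the customers/AccountNumber schema is assumed to be as described above:

   -- hypothetical wrapper procedure; assumes the account number contains at
   -- least one non-zero digit and that an index exists on AccountNumber
   CREATE PROCEDURE usp_FindCustomerByAccountNumber
    @AccountNumber varchar(20)
   AS
   BEGIN
    SET NOCOUNT ON

    DECLARE @AccountNumber_Table TABLE
     (
      [AccountNumber] [varchar](20) NOT NULL
     )

    -- strip the leading zeroes, then generate every zero-padded variant
    SET @AccountNumber = SUBSTRING(@AccountNumber, PATINDEX('%[^0]%', @AccountNumber), LEN(@AccountNumber))

    DECLARE @counter int
    SET @counter = LEN(@AccountNumber)
    WHILE @counter <= 20
     BEGIN
      INSERT INTO @AccountNumber_Table
       VALUES (REPLICATE('0', @counter - LEN(@AccountNumber)) + @AccountNumber)
      SET @counter = @counter + 1
     END

    -- the equality join allows an index seek on AccountNumber
    SELECT c.*
    FROM customers c INNER JOIN @AccountNumber_Table ant
         ON c.AccountNumber = ant.AccountNumber
   END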

PS check out our xSQL Profiler - it allows you to schedule and run traces and automatically collect trace data from multiple servers. It is free for one SQL Server instance.

Monday, February 20, 2012

SQL Data Compare mapping rules explained

By default, xSQL Data Compare automatically "maps" tables from the two databases being compared based on the full object name, that is, [schemaname].[objectname]. For example, table [database1].[dbo].[Customer] will be mapped to table [database2].[dbo].[customer]. For most real-life scenarios this automatic mapping is sufficient; however, there are many cases where the tables on the two sides cannot be mapped this way because:
  1. The same tables may be owned by different schemas in different databases;
  2. Table names might not be exactly the same – for example, in one database the customers table might be named dev_customer whereas in another it might be named prod_customer.
One facility that xSQL Data Compare provides for mapping tables regardless of name is the "compare tables" functionality, which allows you to manually map a table from database1 to any table from database2 regardless of schema name or table name. This is perfect when you only have a few tables to compare, but when you have hundreds or even thousands of tables such manual mapping is not practical. This is where the xSQL Data Compare "mapping rules" come in. There are two name-based mapping rules that you can control:
  1. Schema name mapping rules. Using the schema mapping rules you can instruct xSQL Data Compare to:
    1. Do an exact match on schema names (default behavior). In this case [database1].[dbo].[customer] will be mapped to [database2].[dbo].[customer], but [database1].[dev].[customer] will not be mapped to [database2].[prod].[customer];
    2. Completely ignore the schema name. In this case [database1].[dev].[customer] will be mapped to [database2].[prod].[customer]; 
    3. Manually map schema names. You could, for example, map the schema "dev" to the schema "prod", in which case xSQL Data Compare will automatically map [database1].[dev].[customer] to [database2].[prod].[customer].
  2. Table name mapping rules. Using this rule you can instruct xSQL Data Compare to:
    1. Ignore a certain name prefix on the left database and a certain name prefix on the right database when performing the object mapping. For example, you could ignore the prefix "dev" on the left and the prefix "prod" on the right, in which case the table [database1].[dbo].[dev_customer] will be mapped to [database2].[dbo].[prod_customer];
    2. Ignore a certain name postfix on the left database and a certain name postfix on the right database.
Download your copy of xSQL Data Compare and see for yourself why it is the best SQL data compare tool on the market; on top of that, you might not have to pay anything for it.

Wednesday, February 15, 2012

Script Executor now supports SQL 2012

A new build of Script Executor with support for SQL Server 2012 is now available for download. Script Executor allows you to execute multiple T-SQL scripts, in the order you desire, against multiple SQL Server, MySQL and DB2 databases. It also allows you to package a set of scripts into an executable that can be easily deployed to remote clients. A command line utility is also included.

Tuesday, February 14, 2012

How to improve the performance of your .NET site

Following are some suggestions on how you can improve the performance of your .NET site. 
  • Use SqlDataReader as much as possible. It is the lightest and fastest way to read data from the database. Avoid DataSet unless you have no choice. 
  • Do not use exceptions in the code for flow control. Catching exceptions is expensive in terms of resources. 
  • Use HTML server-side controls instead of ASP.NET web controls when it is possible to do so. HTML server-side controls are HTML elements with the attribute runat="server", and each maps to a specific HTML element. Web server controls, on the other hand, which start with asp: (asp:Label, asp:Panel, asp:Button), are function-oriented; there is no HTML element called "label", for example, so ASP.NET might render it as a "span" on the client. This is easily overlooked because ASP.NET controls are tempting and rich in functionality, but they generate more code on the client. In many cases HTML server-side controls are sufficient. 
  • Use the ASP.NET grid, list box and other data-bound controls carefully. They are great controls, but they tend to generate a lot of code on the client. If you don't need the extra functionality provided by the grid, for example, use asp:Repeater, which is the lightest and fastest ASP.NET data-bound control.
  • Determine whether ASP.NET view state is needed for a page. If not, turn it off so that the page is not loaded with unnecessary extra payload, as shown below.
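For example, view state can be switched off for a whole page via the EnableViewState attribute of the @ Page directive (individual controls also expose an EnableViewState property for finer-grained control):

 <%@ Page Language="C#" EnableViewState="false" %>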
In addition to these .NET-specific suggestions, there are many generic guidelines that apply to all websites regardless of the platform on which they are built. You can "scout" the web and try to assemble a comprehensive list, OR you can simply go to https://developers.google.com/pagespeed and test your site. The PageSpeed tool will provide specific, targeted recommendations on how you can improve your site and also point out what you have done right. Those recommendations constitute, in fact, a fairly comprehensive list of guidelines. Of course, as is always the case, you must use your judgment to determine whether a certain suggestion is helpful for your actual scenario, but the recommendations are a good starting point.

Lastly, if you were interested in this article, chances are you develop with .NET/SQL Server, in which case you will find our database comparison and synchronization tools, xSQL Object and xSQL Data Compare, very helpful in your day-to-day work. The tools are free for SQL Server Express with no limitations and also free for other editions of SQL Server depending on the number of objects in the database. Download your copy from: http://www.xsql.com/download/package.aspx?packageid=10

Monday, February 13, 2012

xSQL Documenter v4.5 released

We just released xSQL Documenter v4.5 which includes a number of enhancements and bug fixes. Here is a complete list of changes:
  • Support for SSRS in SharePoint Integrated mode.
  • Support for Informix 9.
  • Support for exclusion lists in SSRS (Microsoft Report Server).
  • Support for fetching permissions objects in Oracle.
  • Support for fetching dependencies between objects in different schemas in Oracle.
  • Support for drawing data model diagrams with GraphViz.
  • Improved performance when excluding a large number of objects.
  • Added textboxes for delimiters in the GUI.
  • Bug fix: exclude indexes if parent table is also excluded.
  • Bug fix for DB2 when the page size cannot be retrieved.
  • Bug fix: functions in Oracle were not being returned unless DocumentProcedure was set to TRUE.
  • SSIS fix: take the server name from the file path for DTSX files if the path is UNC.
xSQL Documenter supports virtually every database management platform, including SQL Server, Oracle, DB2, MySQL, Informix, Sybase, Teradata, VistaDB, SQLite, PostgreSQL, etc. You can download the free trial from: http://www.xsql.com/download/package.aspx?packageid=15

Friday, February 10, 2012

Can't create account rss reporter xsql

This question comes up every so often: "I am creating a new user in the RSS Reporter interface; I entered all the required information, but when I click on the 'Add and Continue' button I get the error: 'Failed to save the user information due to insufficient permissions. Write access to the path C:\inetpub\wwwroot\RssReporter\App_Data is denied'."

RSS Reporter stores the user and profile information in the App_Data subfolder located under the main installation folder, so the IIS worker process account needs write NTFS permission on this subfolder. If RSS Reporter is installed on Windows XP or Windows 2000, assign write permission to the machinename\ASPNET account (where machinename is the computer on which RSS Reporter is installed); if you are using Windows Server 2003, assign write permission to the NETWORK SERVICE account.
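On Windows Server 2003, for example, the permission could be granted from the command line with something like the following sketch (assuming the default installation path shown in the error message):

 cacls "C:\inetpub\wwwroot\RssReporter\App_Data" /E /G "NETWORK SERVICE":W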

RSS Reporter generates standard RSS feeds from SQL Server and can be used to monitor SQL jobs from multiple servers as well as to generate RSS feeds containing the results of ad-hoc queries. RSS Reporter is free for one SQL Server instance. You can download your copy from: http://www.xsql.com/download/package.aspx?packageid=20

Find SQL Server Database Objects quickly with the free xSQL Object Search


As a database administrator or developer you often need to search for database objects like tables, stored procedures, views, etc. in one or more databases. You might be trying to find a stored procedure whose name contains a certain string, or, say, a function that contains a certain string in its definition. You can, of course, find the objects you are looking for by querying the system tables that contain the names and definitions of those objects, but why waste your time when you can do this better and faster with our free xSQL Object Search?
With xSQL Object Search you can quickly search one or all of the databases on a SQL Server instance at once. xSQL Object Search allows you to search SQL Server 2005 and SQL Server 2008 databases for objects that match a search pattern. The pattern you specify is matched against two properties of the database objects: name and definition. The name of an object is the identifier used when the object is created; it is the same name that appears in the Management Studio object tree. The definition, for the purpose of the search, is the T-SQL that defines the object. For a stored procedure, for example, the definition is the statement that was used to create the stored procedure. Note that the definition of an object is not the same as the T-SQL script that can be used to re-create that object: a table, like all database objects, can be created via a T-SQL script, but it has no definition in the database. The following are the database objects that have a definition:
  • Views
  • Stored Procedures (excluding CLR procedures, which have no definition in the database)
  • User-Defined Functions (excluding CLR functions)
  • Defaults (both constraints and stand-alone defaults)
  • Rules
  • DML Triggers (excluding CLR triggers)
  • Database DDL Triggers
  • Server DDL Triggers
xSQL Object Search can search the names of the objects, their definitions (when the object has one), or both. The results of the search against each property are combined with AND or OR, depending on your preferences, to produce the final result set.
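For comparison, here is roughly what the manual alternative through the system catalog looks like; a minimal sketch that searches both names and definitions in the current database, where @pattern is a hypothetical search string:

 -- a rough sketch of the manual catalog query the tool saves you from;
 -- note that server-level DDL triggers live in separate catalog views
 DECLARE @pattern nvarchar(128)
 SET @pattern = N'%customer%' -- hypothetical search string

 SELECT o.name, o.type_desc
 FROM sys.objects o
      LEFT JOIN sys.sql_modules m ON m.object_id = o.object_id
 WHERE o.name LIKE @pattern         -- match on the object name
    OR m.definition LIKE @pattern   -- match on the definition (OR semantics)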

xSQL Object Search is free and requires no installation or licensing: simply download and run - no hassle and no strings attached.

Thursday, February 9, 2012

LinkedIn ads - puzzled

From a software vendor's point of view, or any vendor's point of view for that matter, LinkedIn appears to be an ideal platform for promoting products. There are not many places where the audience, by definition, is exactly the one you are looking for and exactly the audience that would benefit from your products. For example, if you target a SQL Server group like "SQL Server Developers" or "SQL Server Professionals", you know that, other than the ever-present recruiters, everybody else is involved with SQL Server in one capacity or another. So, if the product you are promoting through your ad is directly related to SQL Server, chances are that a number of those group members might be interested.
So we created a couple of simple ads that clearly state what the products we are promoting are about. The ad for xSQL Object reads: "SQL Schema Compare", "Compare and synchronize SQL Server Schema. Free lite edition. Supports 2008/2005/2000"; and the ad for Oracle Data Compare reads "Oracle Data Compare", "Compare and synchronize data between two databases – Oracle 9i/10g/11g". We carefully chose the groups we wanted to target with those ads. 
The minimum bid that LinkedIn enforces is $2 per click – pretty steep, but if the people who are clicking are the ones we are looking for, it may still be feasible; so up to this point all is good. Now we are live, and that is where the puzzle begins:
  1. First, we see that the number of impressions quickly climbs into the thousands, but the click-through ratio is really, really low. This part of the puzzle turns out to be easy to solve – since we bid the minimum allowed, that is $2 instead of the LinkedIn-recommended bid of $3.27 to $4.05, one needs to scroll about a mile (a bit of exaggeration here) to reach the bottom of the page for a given group, where the ad in question might appear. So, of course, most of those impressions (maybe over 99%) are worthless since virtually nobody ever sees the ad – aren't we glad we did not choose to pay per impression! 
  2. Now we start looking at the few clicks that, at $2 each, are eating the budget up very fast, and herein lies the biggest puzzle: almost everyone who clicked arrived at the one page linked to the ad but left as soon as they got there! Here is why I find this puzzling:
    • I cannot imagine anyone in this world who would click on an ad just to pass the time;
    • The ad is not trying to lure people into clicking – quite the opposite. Unless comparing and synchronizing SQL Server database schemas sounds like something you might be interested in, there is no reason for you to click on that ad;
    • The audience is as close to the perfect audience as you can get;
    • The pages they land on (xSQL Object and Oracle Data Compare respectively) provide exactly the information someone who clicked on that advertisement should be looking for.
So, who is clicking on our ads!? We don't know yet, but what we do know is that those people are certainly not looking for our products!
If you have had a positive experience with LinkedIn ads, please write to us or comment on this blog. We would love to hear from you, as we still believe that LinkedIn can be very valuable to us – maybe we are just approaching this the wrong way.

Free alternative for comparing and synchronizing SQL database schema

Is there a free alternative to SQL schema compare tools? Yes, there is: it is called xSQL Schema Compare, and it is one of the best SQL compare tools you can find. Is it completely free for everybody? Based on our statistics, the free lite edition of xSQL Schema Compare is all that 95% of potential users need, and here is why:
  • As far as functionality is concerned the free lite edition is exactly the same as the $499 professional edition;
  • If the databases you are comparing and synchronizing are SQL Express databases there are no restrictions – the experience is exactly the same as if you had purchased a professional license;
  • Only when at least one of the databases in the comparison is on a SQL Server edition other than Express do we count the number of objects in that database; if the count exceeds certain limits we have set, you will be kindly asked to purchase a license.
The companion tool, xSQL Data Compare (both tools come in one download package), is also free for SQL Server Express with no limitations, and free for other editions of SQL Server with some restrictions.

Wednesday, February 8, 2012

Download SQL Server 2000

Surprisingly, there are still a lot of people asking "where can I download SQL Server 2000 from?" so here are a couple of helpful links:
Note that SQL Server 2000 is reaching the end of its life (support for it ends in April 2013), so consider downloading SQL Server 2008 instead unless you have no choice.
Also, here are some of our tools that support SQL Server 2000:
  • xSQL Object allows you to compare and synchronize database schemas and is free for MSDE 2000;
  • xSQL Data Compare allows you to compare and synchronize data between two databases and is also free for MSDE 2000;
  • xSQL Profiler allows you to trace multiple SQL Server instances, including MSDE instances, and is free for one SQL Server instance;
  • Script Executor allows you to execute SQL scripts against multiple databases;
  • xSQL Documenter generates CHM and HTML documentation for any database, including SQL Server 2000 and MSDE 2000;
  • RSS Reporter allows you to generate RSS feeds from SQL Server and is free for one SQL Server instance.
Applies to:
  • download SQL Server 2000
  • descargar SQL Server 2000
  • download MSDE
  • download SQL Server tools

Tuesday, February 7, 2012

Listen to your customers, or not!

The most vocal customers you have – those who download the latest builds of your products, discover the bugs in your code, and take the time to make their opinion of your products known to you, and sometimes to the rest of the world as well – are a great asset to your product development effort. However, if you are not careful, those customers can lead you down the wrong path and put the future of your product in jeopardy. 
There are two particular characteristics that make those customers likeable to you:
  1. They are usually testing the limits of your product and are looking to do more with it; 
  2. They understand and appreciate the effort that goes into making those products.
Those two characteristics make you vulnerable – subconsciously you place those customers on a totally different level, and you want to please and impress them. Consequently, suggestions and feature requests coming from this small subset of customers tend to get "promoted" at a much faster pace and approved at a much higher rate, often overlooking financial viability. 
From the marketing point of view, those vocal customers have the power to "make" or "break" your product: a positive, unsolicited comment carries the potential to catapult your product to the top whereas a negative comment can quickly sink your product to the bottom where no one will ever see it again. 
So, what should you do – listen or not listen to your customers? The key to answering this is clarifying what "listening" means. Listening to your customers does not mean rushing to implement whatever they tell you, but rather:
  • Thoroughly understanding not just what the customer is suggesting or requesting, but why the customer is making that suggestion – what exactly is the situation that this, let's say, new feature will help the customer with. If you dig deep enough you may discover that you have already addressed that situation somewhere. For example, we often get inquiries about our database comparison API (xSQL SDK) when in fact the customer is really looking for something like xSQL Builder. 
  • Evaluating the request from the business and financial perspective on its own merit, completely separating the request from its source. This is important not only for customer requests but for internal suggestions as well. 
  • Very important: communicating your decision back to the customer. Whether you have decided to adopt the customer's suggestion or not is irrelevant; what matters is telling the customer what you have decided and candidly explaining your decision.
In conclusion: yes, do listen to your customers, as their feedback is truly invaluable, but don't let yourself be steered in the wrong direction. 
PS: if you work with SQL Server, consider spending the next 5 minutes reviewing our SQL Server comparison tools, xSQL Object and xSQL Data Compare; or, if you work with Oracle, take a look at our Oracle Data Compare. Those tools are worth your time – you will agree.