Wednesday, July 5, 2017

Oracle Data Compare 3.0 released

The best Oracle Data Compare tool on the market is now better. We just released version 3.0 with full support for Oracle 9i, Oracle 10g, Oracle 11g, and Oracle 12c.

The new version removes the dependency on the unmanaged ODP.NET provider and replaces it with the compact, easily deployable Oracle Managed Data Access Provider for .NET. Furthermore, the new version is faster, more robust, has a better GUI, and a much-improved command line utility.

Download your free trial now and see for yourself.

Monday, July 3, 2017

A license for your opinion

We want to hear your thoughts regarding our xSQL Profiler tool. In exchange we will grant you a "5 SQL Server Instances" license with 3 years of unlimited upgrades, a total value of over $1.5K.

What are the conditions?

  • you must have used xSQL Profiler in the last 3 months and, preferably, be using it currently. 
  • you agree to email us your candid answers to the following questions:
    • how long have you used xSQL Profiler?
    • what exactly do you use it for?
    • how often do you use the tool?
    • what do you like about the tool?
    • what new features would be helpful to you (be specific)?
    • how would you change / improve the existing features to make them more useful?
  • you agree to reply to at least 2 follow-up emails for the purpose of clarifying your suggestions. 
If you are interested please email us at support@xsql.com and reference xSQL Profiler in the subject line. 

Only the first 10 people to email us will get the license. If you are not among the first 10, we will decide on a case-by-case basis, based on the quality of your feedback, whether to grant you a license. 

Friday, June 30, 2017

xSQL Profiler Use case: Custom event definitions

SQL Traces can be very effective for gathering information about the operation of a SQL Server, provided you have a good utility to manage them, and a tool like xSQL Profiler can prove essential for this purpose. Among the many problems xSQL Profiler solves, it can drastically reduce the amount of expert manpower needed to monitor SQL Servers. How does it achieve this? Three words: custom event definitions.
One might think that choosing which events and which columns to include is relatively easy. However, if you look at the MSDN documentation for traces you'll see that there are hundreds of different SQL events and even more event columns to choose from. Keeping in mind that a trace needs to be as efficient as possible, you can see how this task becomes very complicated, very quickly. This is why anyone handling the monitoring of SQL Servers needs extensive knowledge of SQL Traces. If there are only a few SQL Servers to monitor, one person should be enough to handle the task; with more than that, you need more than one person, and that gets expensive quickly, because experts don't come cheap.
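If you're curious just how long that menu of options is, SQL Server exposes the full catalog of trace events and columns through two system catalog views; here is a quick sketch (these views are available on SQL Server 2005 and later):

    -- every traceable SQL Server event
    SELECT trace_event_id, name
    FROM sys.trace_events
    ORDER BY name;

    -- every event column a trace can capture
    SELECT trace_column_id, name, type_name
    FROM sys.trace_columns
    ORDER BY name;

Run these on any instance and the row counts alone make the point: matching events to columns by hand for every trace is not a task you want to repeat.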
By using custom event definitions you only need one expert to create the custom trace events; other people can then take care of starting/stopping traces and preparing reports on the data these traces collect. If you have the right knowledge and experience with SQL Traces, creating custom event definitions in xSQL Profiler is fairly easy.
You can either click Trace > Add event definition or right-click on the list of event definitions and click 'Add Event Definition'. The following dialog shows up.


First, you choose the list of SQL events to include in the event definition, then click 'Get Columns' to get all the columns supported by these SQL events.
After this, you can select only the columns you want included in the trace, keeping it as efficient as possible while still collecting all the information needed. Once the event definition is created, anyone can create a trace with it and have all of its columns selected, without having to worry about which columns are needed and which are not.


Wednesday, June 28, 2017

Performance auditing with xSQL Profiler: Identifying slow queries

Most large-scale systems, at some point in time, are bound to display a database-related performance issue. As a DBA, if you haven't already, you will undoubtedly get a phone call or an email from a developer telling you that, somewhere in the midst of all the database's T-SQL code, you need to find the queries that are disrupting the system's performance. When this happens, you usually have two choices: spend hours running and testing queries to see which ones could be dragging the system down, or employ a tool like xSQL Profiler that collects information while the database is at work, filters it, and presents only the execution-related information needed to identify slow-running queries. Needless to say, the second option is much better.

xSQL Profiler offers an abstraction on top of SQL Server's native tracing mechanism, SQL Traces. Let's go over the process of creating traces in xSQL Profiler to identify slow-running queries.

Event Definitions

SQL Traces register information provided by SQL events when they are triggered during SQL Server's operation. xSQL Profiler supports all the native SQL Server event definitions and, in addition, comes with extra built-in event definitions which group together SQL events that naturally belong together and jointly provide the full picture behind an action in SQL Server. To identify slow-running queries you need to create traces that collect T-SQL execution-related information such as duration, I/O operations, records affected, estimated execution plans, etc.
I am going to use two xSQL Profiler event definitions from the "Performance" section of the Event Selection: 'Statement performance' and 'Analyze performance issues', and one event from the native SQL Server events: 'Performance - Showplan XML'.

Together, these events register data related to:

  • Query compilation and execution plans. This information is very useful in finding out why a query is running slowly. For example, if the estimated number of records in an execution plan differs from the actual number, SQL Server's statistics are likely out of date, which is one of the common reasons queries perform poorly. These data are collected by the 'Performance - Showplan XML' event.
  • Query execution. This is the most useful information for identifying problematic queries. These data show when a piece of T-SQL code started executing, how long it took, how many I/O operations were performed, how much CPU time was used, who executed the query, etc. Information regarding query execution is mostly recorded by the 'Statement performance' and 'Analyze performance issues' events. 

Event Columns

Another very important aspect of building traces to identify slow queries is specifying the correct event columns. This affects both the effectiveness and the efficiency of the traces. Choosing the right columns is essential to gathering the information you need; beyond that, keep in mind that every column you select adds extra data to be collected, which means extra overhead for the server. Knowing exactly which columns to choose will help you avoid any unnecessary overhead. Let's go over the necessary columns for each event definition:

Statement Performance

  • Duration - this one is a no-brainer. If you want to identify slow-running queries you will most definitely need to know how long a query took to execute. This column reports the duration of the event in microseconds.
  • ObjectName - this column shows the SQL object that executed the query. If it is a stored procedure or a function, it shows that object's name. If it is a T-SQL statement that is not in a stored procedure, it has the value 'Dynamic SQL'.
  • TextData - the value of this column depends on the SQL event being captured. For this event, it shows the T-SQL code for each statement being executed, both inside and outside stored procedures.
  • CPU - This column shows the amount of CPU time (in milliseconds) used by the query.
  • Reads - this column shows the number of logical reads performed by the query. The smaller this number, the fewer I/O operations are performed.
  • Writes - this column displays the number of physical writes a query performs. Again, the smaller this number, the fewer I/O operations are performed.
  • IntegerData - for this event definition, it shows the actual number of rows returned by the query. If the query is being executed by an application and returns a high number of rows, consider making the WHERE clause more restrictive or applying paging to return fewer records, because chances are the client application does not need that many.
  • ApplicationName, LoginName - these columns are self-descriptive, and while they say nothing about whether a query is slow, they are very useful filtering columns when your database is queried by more than one application or user. If more than one application communicates with the database but you know that only one of them has performance issues, you can place a filter on ApplicationName or LoginName to record only the events raised by that specific application or user.
  • EventSequence - the value in this column shows the sequence in which the events were raised. It will be used to link each individual query with its execution plan.

Analyze performance issues

The columns for this event definition are the same as for the previous one, because the SQL events it gathers data for complement the events from 'Statement performance'.

Performance - Showplan XML

  • TextData - the value in this column is the XML text of the query's estimated execution plan. To view the execution plan, simply save this value to a .sqlplan file and open it in SQL Server Management Studio.
  • IntegerData - for this event, the value in this column displays the estimated number of returned rows.
  • EventSequence - the value of this column will always be one less than the value of the same column for the execution-related events of the same query. For example, if the value of EventSequence for an SP:StmtCompleted event is 1001, the execution plan for that query is in the row with EventSequence 1000, in XML format. So if you order the data by EventSequence, the Showplan XML row always appears directly above the rest of the data for a given query, letting you tell which execution plan belongs to which query, as sketched below.
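
Here is a minimal sketch of that pairing done by hand, reading a trace file with the built-in sys.fn_trace_gettable function (the file path is hypothetical; inside xSQL Profiler the repository and reports do this bookkeeping for you):

    -- pair each statement row with the Showplan XML row that precedes it
    SELECT s.EventSequence,
           s.TextData AS statement_text,
           s.Duration,
           p.TextData AS estimated_plan_xml
    FROM sys.fn_trace_gettable('C:\Traces\slow_queries.trc', DEFAULT) AS s
    JOIN sys.fn_trace_gettable('C:\Traces\slow_queries.trc', DEFAULT) AS p
      ON p.EventSequence = s.EventSequence - 1
    WHERE p.EventClass = (SELECT trace_event_id
                          FROM sys.trace_events
                          WHERE name = 'Showplan XML')
    ORDER BY s.EventSequence;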

Filters

The most important filter here goes on the Duration column. This is where you specify the threshold for a query to be considered "slow". There is no real formula for this value, because it depends entirely on what a DBA considers a slow query, but a good starting point is a Duration greater than or equal to 5,000,000 microseconds (5 seconds). You can then lower or raise this threshold depending on the data the trace returns. If running the trace again is not an option, you can always use xSQL Profiler's reporting capabilities to filter the collected data however you see fit.
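
For reference, the same threshold is easy to express in plain T-SQL against a trace file (hypothetical path again); within xSQL Profiler you would set it as a trace filter or a report filter instead:

    SELECT TextData, Duration, CPU, Reads, Writes
    FROM sys.fn_trace_gettable('C:\Traces\slow_queries.trc', DEFAULT)
    WHERE Duration >= 5000000   -- 5 seconds, expressed in microseconds
    ORDER BY Duration DESC;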

Results


xSQL Profiler v2.2 is now available - 50% OFF!

New and improved version 2.2 of the xSQL Profiler is now available.

Download the new version today and take advantage of a 50% discount if you decide to purchase a license before the end of this week, July 1, 2017 (US Eastern Time) - use code PROFILER2 on checkout.

Any suggestions or comments on how we can make this tool more helpful for you would be greatly appreciated! Please complete the simple feedback form on our site at: http://www.xsql.com/comments/ and we will consider including your suggestions in the next build.

Wednesday, June 21, 2017

Using xSQL Profiler with a limited access SQL user

Tracing events on a SQL Server and collecting those event data is not among the usual tasks a user performs on SQL Server. Because of this, the T-SQL commands executed to start and stop traces and to collect the data demand a very specific set of permissions for database users that are not system administrators. I've struggled a bit with this in the past, which is why I'm writing this article: a step-by-step guide to making tracing with xSQL Profiler work with non-sysadmin users. To demonstrate, I've created a user called 'xsql_profiler_user'; of course, you can use whatever name you like. Let's go over the requirements and how to fulfill them:

  • First of all, the SQL user needs access to the master database and to all the databases you want to trace. Granting this is quite easy: in SSMS, go to [SQL Server name] > Security > Logins and double-click the name of your login, in this case 'xsql_profiler_user'. Then go to the User Mapping section and select all the databases you want.
  • To be able to start / stop traces, xsql_profiler_user needs the ALTER TRACE permission on the server. To grant it, again go to [SQL Server name] > Security > Logins > [name of the login], double-click it, go to the 'Securables' section and grant ALTER TRACE from the 'Explicit' tab (or grant it with T-SQL, as in the consolidated script after this list).
xSQL Profiler configures the traces to write their data into trace files, which are then read, their data transferred to the repository, and finally deleted. As you might have imagined, the proper permissions need to be in place to perform all these operations. 
  • xSQL Profiler uses SQL Server's xp_cmdshell to delete the trace files after it has used them, which means the user you provide for the database connection (in this case xsql_profiler_user) needs EXECUTE permission on xp_cmdshell. To grant it, simply run (in the master database):
    GRANT EXECUTE ON xp_cmdshell TO xsql_profiler_user
    
  • Even though we granted EXECUTE permission on xp_cmdshell, we're still not done. Because the user is not a system administrator, SQL Server will not allow it to execute xp_cmdshell under the SQL Server service account, so we need to create a new Windows account to use as a proxy for xp_cmdshell. Go to Computer Management > Local Users and Groups > Users and create a new user. I've named mine 'xp_cmdshellproxy'.
  • After this, you need to give this user the 'Log on as batch job' right. SQL Server will use this account to run batch commands without actually logging on with it, and Windows will not allow that unless the account has the 'Log on as batch job' right. To assign it, search for 'Local Security Policy', go to Local Policies > User Rights Assignment, double-click 'Log on as batch job' and add the user created in the previous step.
  • Next, assign the user full rights on the 'Load temp trace data' folder.
  • The last thing we need to do is configure SQL Server to use the Windows account we just created as the proxy account for xp_cmdshell. To do that, run:
    EXEC sp_xp_cmdshell_proxy_account 'domain\accountname','password'
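
Here is a consolidated sketch of the SQL-side steps above (the login and account names mirror the examples in this article; note that xp_cmdshell itself is disabled by default and has to be enabled via sp_configure before it can be granted or executed):

    USE master;
    GO
    -- server-level permission to start / stop traces
    GRANT ALTER TRACE TO xsql_profiler_user;
    GO
    -- xp_cmdshell is disabled by default; enable it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1;
    RECONFIGURE;
    GO
    GRANT EXECUTE ON xp_cmdshell TO xsql_profiler_user;
    GO
    -- point xp_cmdshell at the proxy account created above
    EXEC sp_xp_cmdshell_proxy_account 'domain\accountname', 'password';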
    
After you have completed all the above steps, you should be good to go.

Wednesday, May 10, 2017

SQL Server Expressers out there, we got you!

One of the most powerful features SQL Server has is its ability to monitor just about every aspect of its operation through mechanisms such as SQL Traces. A SQL Trace is an internal SQL Server mechanism for capturing events, which are raised basically every time something "happens" on SQL Server. All versions of SQL Server support traces, which can be created using a set of system stored procedures designed specifically for this task. However, anyone who has worked with these stored procedures can tell you that they are pretty confusing, because you have to specify every low-level detail. For example, to start a trace you have to specify the events it will trace and, for every event, the properties or columns for which the trace will register data. These data are saved in a trace file which, again, needs to be specified as a parameter to the stored procedure, and then you use another function to read the data from the file. To make things even more complicated, events and event columns are not specified by name (even though names are unique) but by their IDs. Considering that there are hundreds of different events with tens of columns each, you can easily see how using T-SQL code to manage traces can be very difficult and time-consuming. Microsoft has seen this as well and built SQL Server Profiler, which provides a UI that makes managing traces quite easy. Unfortunately, it is not available for Express editions of SQL Server. These editions do support traces; you just have to do everything by hand.
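To see what "by hand" means, here is a minimal sketch of starting a trace with the system stored procedures; the event and column IDs are the documented ones (event 12 = SQL:BatchCompleted, column 1 = TextData, column 13 = Duration), and the file path is hypothetical:

    DECLARE @traceid INT, @maxfilesize BIGINT = 5, @on BIT = 1;
    -- create a trace that writes to a file (a .trc extension is appended automatically)
    EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\demo', @maxfilesize, NULL;
    -- event 12 = SQL:BatchCompleted; capture column 1 (TextData) and column 13 (Duration)
    EXEC sp_trace_setevent @traceid, 12, 1, @on;
    EXEC sp_trace_setevent @traceid, 12, 13, @on;
    -- 1 = start the trace; later, 0 stops it and 2 closes and deletes the definition
    EXEC sp_trace_setstatus @traceid, 1;

Every event/column pair is a separate sp_trace_setevent call, by ID. Multiply that by a few dozen events and you see the problem.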
This is where we come in. Our xSQL Profiler provides a user interface that hides the low-level details of starting traces and analyzing trace data on SQL Server and, best of all, you can use it to monitor SQL Server Express instances free of charge. You can get the setup from this link, and for a detailed explanation of the installation process check out our previous article on "Installing the new xSQL Profiler 2.1". Apart from offering all the functionality of Microsoft's SQL Server Profiler, xSQL Profiler also offers a few higher-level features that simplify monitoring SQL Server Express instances even more. Let's have a look at some of the most important ones:

Automatically persist trace data

xSQL Profiler automatically saves all trace data in a structured format in the database created during the installation process. This means the DBA can view trace information at any time through xSQL Profiler's UI. And because this data is persisted in a SQL Server database, you can easily extend xSQL Profiler's monitoring capabilities. For example, you can place xSQL Profiler's database on a web server and build a web application on top of it, allowing users who don't have access to the machine where xSQL Profiler is installed to access and analyze real-time trace data from the monitored servers.

Advanced filtering and reporting

xSQL Profiler offers the ability to add filters directly to your traces, making them register only a portion of the data and reducing the load that running traces might impose on your server. Apart from this, xSQL Profiler also lets you further filter the data saved in the database when you generate reports. And if the wide range of built-in filters still does not meet your requirements, you can write your own T-SQL queries against the underlying database to get exactly the data you need.

Scheduling

One of the main benefits of xSQL Profiler is the ability to schedule the running time of traces. This is something that cannot be done through Microsoft's SQL Server Profiler and would be way too complicated to achieve with queries. With xSQL Profiler you can add multiple schedules to each trace to make the monitoring process as efficient and effective as possible.

High level event definition

xSQL Profiler can start traces with the same event definitions as SQL Server Profiler. However, we noticed that these event definitions were too low-level: to audit particular aspects of SQL Server, such as user logins, query execution and so on, you have to create traces based on groups of multiple event definitions. So we created xSQL Profiler events, built-in event definitions that let you audit those aspects of SQL Server without manually picking every SQL Server event. And if the built-in event definitions don't fit your needs, you can always create your own. 

So, there you have it, a guide to what xSQL Profiler can do to simplify monitoring of SQL Server Express instances.

Thursday, April 27, 2017

Installing the new xSQL Profiler 2.1

We just released the newest version of our xSQL Profiler. Besides the increased range of supported SQL Server versions and traceable events, as well as other bug fixes and improvements, there are some changes in the setup and installation. Let's go through the new installation, shall we?


A new feature of the xSQL Profiler installation is the ability to choose a "Standard Installation" and have all the configuration done behind the scenes, as opposed to the "Advanced Installation" where you enter each configuration option manually.



There are a few things to keep in mind with the Standard Installation though.

  • First of all, you need SQL Server 2008 or later installed as the default instance. By default, xSQL Profiler uses "(local)" as the server name, which means it will look for the default SQL Server instance on the machine. One thing that might confuse you: having SQL Server installed on your machine does not necessarily make it the default instance, even if it is the only SQL Server installed. You need to explicitly tell the SQL Server installer to register the instance as the default instance. If you haven't done this during installation, and are reading all those forums that say it cannot be done after the instance is installed, there is no need to panic, because they are simply not correct. Here is an article on how to do that.
  • The next thing to note is that the xSQL Profiler service will run under the "Local System" account. This is necessary because it is the only built-in service account that is sure to have access to SQL Server (neither "Local Service" nor "Network Service" can access SQL Server by default). However, on SQL Server 2012 and up even the Local System account doesn't have access, so you will need to configure this account's access to SQL Server manually. Keep in mind that this is a very highly privileged account, so be careful: you can expose your system to security risks. You can always change the account the service runs under after it is installed and manually provide the necessary permissions.
Like all previous versions of xSQL Profiler, this version supports an advanced installation if you'd like to specify all of xSQL Profiler's configuration manually. The first thing you do is specify the account that xSQL Profiler's service will run under.



Your options here are one of the three built-in service accounts (for more information on those, check out this MSDN page) or a separate Windows account. If you choose one of the built-in accounts then, as mentioned above, for versions up to SQL Server 2008 R2 only Local System can connect to the server instance by default, and for later versions none of the service accounts can, so you will need to manually configure the account to be able to connect to the server. Keeping with the principle of least privilege, the best thing to do is create a new user specifically for the profiler's service and give it the necessary permissions. The minimum set of permissions it needs is "Log on as a service" and permission to connect to the SQL Server instances.
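
If you go down that road, the SQL Server side of it is a short script; a minimal sketch, assuming a hypothetical dedicated account named MYDOMAIN\xsql_profiler_svc:

    -- register the dedicated Windows account as a SQL Server login
    CREATE LOGIN [MYDOMAIN\xsql_profiler_svc] FROM WINDOWS;
    -- CONNECT SQL is normally granted implicitly when the login is created;
    -- shown explicitly here for clarity
    GRANT CONNECT SQL TO [MYDOMAIN\xsql_profiler_svc];

The "Log on as a service" right itself is assigned on the Windows side, under Local Security Policy > Local Policies > User Rights Assignment.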

The next thing you need to do is choose the SQL Server where the xSQL Repository will be created and provide the authentication details for that server.



This is fairly straightforward, much like providing authentication details in SQL Server Management Studio. You provide the server name and authentication details, as well as the new database's name. The credentials you provide here are used only by the setup to create the database and are then discarded. You can choose a SQL Server on the local machine or on another machine to hold xSQL Profiler's repository. If you choose the Windows Authentication option, the currently logged-in user will be used to connect to the server, so make sure that user is registered as a login on SQL Server and has permission to create a database.

The last thing you need to do is provide the authentication details that will be used by xSQL Profiler's service to connect to the repository.



This is where you should be a little careful. You can choose either Windows or SQL Server Authentication, and each option has benefits and drawbacks. From a security standpoint, Windows Authentication is better, because the SQL Server Authentication option saves your login details in the service's XML configuration file. However, if you do choose Windows Authentication, keep in mind that:
  • The service's user needs to be registered as a login on the repository's SQL server
  • The service's user needs to have the necessary permissions to access the database.
The security risk of SQL Server Authentication can be mitigated by choosing a SQL Server user with the minimum set of required permissions, though this, again, requires some work. You can do it after xSQL Profiler has been installed, because the authentication details in this step are not used during the setup; they are used when you run the application.
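
For example, a minimal sketch for such a dedicated SQL Server user might look like the following (the login and database names are hypothetical, and the exact rights the service needs on the repository may be broader than this):

    CREATE LOGIN xsql_profiler_repo WITH PASSWORD = 'a strong password here';
    GO
    USE xSQLProfilerRepository;  -- whatever you named the repository database
    CREATE USER xsql_profiler_repo FOR LOGIN xsql_profiler_repo;
    -- read/write access to the repository tables
    EXEC sp_addrolemember 'db_datareader', 'xsql_profiler_repo';
    EXEC sp_addrolemember 'db_datawriter', 'xsql_profiler_repo';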

So there you have it. All you have to do now is click "Next", review the installation details and if you have provided the correct information, the setup should complete without errors.

xSQL Profiler 2.1 now available

We just published a new version (2.1) of xSQL Profiler with built-in support for tracing low level SQL Server events. Now, if you want to create a trace for a specific SQL Server event, you don't have to create a new event definition for that event. All low level SQL Server events are included. This is especially useful when you are using xSQL Profiler with SQL Server Express.

A few minor UI improvements such as better context menu and tooltips, updated grid etc. are also included in the new version.

xSQL Profiler is FREE for one SQL Server Express instance. You can download the new version from our website: http://www.xsql.com/download/sql_server_profiler/

Wednesday, April 19, 2017

SQL Server: Functions vs. Stored Procedures to return result sets

A while back, I was building the database schema for a web application that had some reporting functionality, and among other things I had to implement logic in the database to prepare the data for the application's reports. The queries I constructed were relatively complex, which meant I needed database objects to encapsulate them. So it came down to a choice between table-valued functions and stored procedures. If you do a little research you'll notice there is no clear-cut recommendation on choosing between functions and stored procedures when you need to retrieve a result set from the database. So here is a comparison of the two, concluded with a suggestion for those of you who can't make up your minds.

T-SQL statements

With regard to the types of T-SQL statements each object can contain, stored procedures are much more versatile, because almost all T-SQL statements can be included in a stored procedure. The only exceptions are the following:
  • USE <database>
  • CREATE AGGREGATE, RULE, DEFAULT, FUNCTION, TRIGGER, PROCEDURE, or VIEW
Table-valued functions, however, are an entirely different story: the T-SQL statements that can be used in them are quite limited. Namely, they cannot:
  • Apply schema or data changes in the database
  • Change the state of the database or a SQL Server instance
  • Create or access temporary tables
  • Call stored procedures
  • Execute dynamic SQL
  • Produce side effects such as relying on the information from a previous invocation.
So, basically, only SELECT statements are allowed in table-valued functions. The only exception is multi-statement table-valued functions, whose bodies use INSERT ... SELECT statements to populate the table variable that the function returns. 
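To make the contrast concrete, here is the same (hypothetical) result set written both ways; the inline form is a single SELECT, while the multi-statement form populates and returns a table variable:

    -- inline table-valued function: a single SELECT, no BEGIN/END
    CREATE FUNCTION dbo.fn_OrdersByCustomer_Inline (@CustomerId INT)
    RETURNS TABLE
    AS
    RETURN (SELECT OrderId, OrderDate, Total
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId);
    GO
    -- multi-statement table-valued function: fill the table variable, then RETURN
    CREATE FUNCTION dbo.fn_OrdersByCustomer_Multi (@CustomerId INT)
    RETURNS @result TABLE (OrderId INT, OrderDate DATETIME, Total MONEY)
    AS
    BEGIN
        INSERT INTO @result (OrderId, OrderDate, Total)
        SELECT OrderId, OrderDate, Total
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId;
        RETURN;
    END;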

Parameters and return types

Both stored procedures and table-valued functions accept parameters of all data types; however, there are a few differences.
The first, and most important, is that unlike stored procedures, table-valued functions do not accept output parameters. In fact, table-valued functions return data to the client in only one way: through the RETURN statement. Stored procedures, on the other hand, do accept output parameters and have three ways to return data to the client: through output parameters, by executing a SELECT statement in the procedure's body, or by using a RETURN statement (which returns an integer status code).
Another, more subtle difference is how parameters with default values are handled. While both stored procedures and table-valued functions support default values for parameters, such parameters are optional only in stored procedures. Weird as it is, if you want the default value for a function parameter, you have to write the DEFAULT keyword in place of the value. With stored procedures you can simply omit the value and SQL Server will supply the default. 
The last difference is that when you call a stored procedure, you can specify parameter values by name, using syntax like <parameter_name> = <value>, which greatly improves the code's readability. You can't do this with functions, which can become an issue when a function has many parameters, because you constantly need to check the documentation just to find the order of the parameters in the definition.
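Both quirks in one quick sketch, using hypothetical objects:

    -- stored procedure: parameters can be passed by name, and defaults simply omitted
    EXEC dbo.usp_GetOrders @CustomerId = 42;

    -- table-valued function: positional only, and a default must be requested explicitly
    SELECT * FROM dbo.fn_GetOrders(42, DEFAULT);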
So, if you need to return multiple result sets, or if you are worried about the readability of your code, stored procedures might be the better option.

Performance

If you think about it, table-valued functions, especially inline table-valued functions, are a lot like another database object. Yep, you guessed it: views. Even the SQL Server optimizer treats inline table-valued functions the same way it treats views. This is why table-valued functions can be thought of as parameterized views.
Performance-wise, functions and stored procedures are essentially identical. Both make use of execution plan caching, which means they are not recompiled every time they are executed. To see this for yourself, create a function and a procedure with the same SELECT statement, execute each a few times, and then check the sys.dm_exec_query_stats DMV. You will notice that the last_elapsed_time values differ very little.
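A sketch of that check, assuming you named your test objects something containing 'Orders' (adjust the filter to match yours):

    -- compare cached execution statistics for the function and the procedure
    SELECT st.text, qs.execution_count, qs.last_elapsed_time
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.text LIKE '%Orders%'   -- hypothetical filter
    ORDER BY qs.last_execution_time DESC;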

Usage

This is where, I think, table-valued functions have the greatest advantage. Because they resemble views, they can be placed anywhere a table can be placed in a query. This means you can filter the result set of a function, use it in joins, etc. You cannot do the same with stored procedures. Of course, with enough T-SQL knowledge and experience you can find workarounds, but generally, manipulating the result set returned by a stored procedure is not as straightforward as doing the same with a table-valued function. So if, for some reason, you need to apply additional manipulation to the data returned by a function, you can do that very easily. If that same data comes from a stored procedure, in most cases you may need to alter the procedure's code, which requires having the necessary permissions and so on. 
If you think the result set of the stored procedure or function may need to be manipulated further, use table-valued functions.
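For example, using the hypothetical objects from earlier, a function composes directly into a larger query, while a procedure's result set typically has to be staged into a temporary table first:

    -- table-valued function: compose it like a table
    SELECT c.Name, o.Total
    FROM dbo.Customers AS c
    CROSS APPLY dbo.fn_OrdersByCustomer_Inline(c.CustomerId) AS o
    WHERE o.Total > 100;

    -- stored procedure: the usual workaround
    CREATE TABLE #orders (OrderId INT, OrderDate DATETIME, Total MONEY);
    INSERT INTO #orders EXEC dbo.usp_GetOrders @CustomerId = 42;
    SELECT * FROM #orders WHERE Total > 100;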

One last thing

One thing I really like about table-valued functions is that you can use the SCHEMABINDING option on them to prevent changes to the underlying objects that could break the function. The same option is not always available for stored procedures: you can use it only on natively compiled stored procedures, which are available only on SQL Server 2014 and up and on Azure SQL Database. So, if you are using regular stored procedures to retrieve data, keep in mind that they can break if you change the structure of the referenced objects.
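A sketch of the option, on the same hypothetical function as before:

    CREATE FUNCTION dbo.fn_OrdersByCustomer_Bound (@CustomerId INT)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN (SELECT OrderId, OrderDate, Total
            FROM dbo.Orders            -- SCHEMABINDING requires two-part names
            WHERE CustomerId = @CustomerId);

With this in place, an ALTER TABLE or DROP that would invalidate the function is rejected until the function itself is dropped or altered first.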


To conclude, as a general rule of thumb, I tend to use table-valued functions whenever I need to retrieve a result set from the database, and stored procedures when I need to perform some work on the database.