Wednesday, June 21, 2017

Using xSQL Profiler with a limited access SQL user

Tracing events on a SQL Server instance and collecting the event data is not among the usual tasks a user performs. Because of this, the T-SQL commands that start and stop traces and collect their data demand a very specific set of permissions from database users that are not system administrators. I've struggled a bit with this in the past, which is why I'm writing this step-by-step guide to making tracing with xSQL Profiler work with non-sysadmin users. To demonstrate the examples I've created a SQL user called 'xsql_profiler_user'; of course, you can use whatever name you like instead. Let's go over the requirements and how to fulfill them:

  • First of all, the SQL user needs access to the master database and to all the databases that you want to trace. Granting this access is quite easy: in SSMS, go to [SQL Server name] > Security > Logins and double-click the name of your login, in this case 'xsql_profiler_user'. Then go to the User Mapping section and select all the databases you need.
  • To be able to start and stop traces, the xsql_profiler_user needs ALTER TRACE permission on the server. To grant it, again go to [SQL Server name] > Security > Logins > [name of the login], double-click it, go to the 'Securables' section and grant the Alter trace permission from the 'Explicit' tab, or run the query shown below.
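Here is the equivalent T-SQL, a minimal sketch assuming you are connected with a sysadmin account (ALTER TRACE is a server-level permission, so it is granted directly to the login):

    USE master;
    GO
    -- Grant the server-level ALTER TRACE permission to the login
    GRANT ALTER TRACE TO xsql_profiler_user;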
xSQL Profiler configures the traces to write their data to trace files, which are then read, transferred to the repository, and deleted. As you might imagine, the proper permissions need to be in place for all of these operations.
  • xSQL Profiler uses SQL Server's xp_cmdshell to delete the trace files after it has read them, which means that the user you provide for the database connection (in this case xsql_profiler_user) needs EXECUTE permission on xp_cmdshell. To grant it, run the following in the master database:
    GRANT EXECUTE ON xp_cmdshell TO xsql_profiler_user;
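    Note that xp_cmdshell is disabled by default on SQL Server, so a sysadmin has to enable it once on the instance before anyone can execute it. A minimal sketch:

    -- Run once, as a sysadmin, to enable xp_cmdshell on the instance
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1;
    RECONFIGURE;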
    
  • Even though we granted EXECUTE permission on xp_cmdshell, we're still not done. Because the user is not a system administrator, SQL Server will not let it run xp_cmdshell under the SQL Server service's Windows account, so we need to create a new Windows account to use as a proxy for xp_cmdshell. Go to Computer Management > Local Users and Groups > Users and create a new user. I've named mine 'xp_cmdshellproxy'.
  • After this, you need to give this user the 'Log on as batch job' right. SQL Server will use this account to run batch commands without actually logging on with it, and Windows will not allow that unless the account has the 'Log on as batch job' right. To grant it, search for 'Local Security Policy', go to Local Policies > User Rights Assignment, double-click 'Log on as batch job' and add the user created in the previous step.
  • Next, give this Windows account full rights on the 'Load temp trace data' folder.
  • The last thing we need to do is configure SQL Server to use the Windows account we just created as the proxy account for xp_cmdshell. To do that, run this query:
    EXEC sp_xp_cmdshell_proxy_account 'domain\accountname', 'password';
    
After you have completed all the above steps, you should be good to go.

Wednesday, May 10, 2017

SQL Server Expressers out there, we got you!

One of the most powerful features of SQL Server is its ability to monitor just about every aspect of its operation with mechanisms such as SQL traces. A SQL trace is an internal SQL Server mechanism for capturing events, which are raised essentially every time something "happens" on the server. All versions of SQL Server support traces, which can be created using a set of system stored procedures designed specifically for this task. However, anyone who has worked with these stored procedures can tell you that they are pretty confusing, because you have to specify every low-level detail. For example, to start a trace you have to specify the events it will capture and, for every event, the properties or columns for which the trace will register data. That data is saved in a trace file which, again, needs to be specified as a parameter to the stored procedure, and then you use yet another stored procedure to read the data from the file. To make things even more complicated, events and event columns are not specified by their names (even though they are unique), but by their IDs. Considering that there are hundreds of different events with tens of columns each, you can easily see how managing traces with T-SQL can be very difficult and time consuming. Microsoft has seen this as well and built SQL Server Profiler, which provides a UI that makes managing traces quite easy. Unfortunately, it is not available on Express editions of SQL Server. These editions do support traces; you just have to do everything by hand.
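To get a feel for what "by hand" looks like, here is a minimal sketch of starting a server-side trace with the system stored procedures. The file path is just an example; 12 is the ID of the SQL:BatchCompleted event, and 1 and 13 are the IDs of the TextData and Duration columns:

    DECLARE @TraceID int, @maxfilesize bigint = 5, @on bit = 1;
    -- Create the trace; option 2 = TRACE_FILE_ROLLOVER, SQL Server appends .trc to the file name
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\MyTrace', @maxfilesize, NULL;
    -- Register the columns to capture for event 12 (SQL:BatchCompleted), by ID
    EXEC sp_trace_setevent @TraceID, 12, 1, @on;   -- TextData
    EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- Duration
    -- Start the trace
    EXEC sp_trace_setstatus @TraceID, 1;
    -- Later, read what was captured from the trace file
    SELECT * FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);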
This is where we come in. Our xSQL Profiler provides a user interface that hides the low-level details of starting traces and analyzing trace data on SQL Server and, best of all, you can use it to monitor SQL Server Express instances free of charge. You can get the setup from this link, and for a detailed explanation of the installation process check out our previous article on "Installing the new xSQL Profiler 2.1". Apart from offering all the functionality of Microsoft's SQL Server Profiler, xSQL Profiler also offers a few higher-level features that simplify monitoring SQL Server Express instances even more. Let's have a look at some of the most important ones:

Automatically persist trace data

xSQL Profiler automatically saves all trace data in a structured format in the database that's created during the installation process, so the DBA can view the trace information at any time through xSQL Profiler's UI. Also, because this data is persisted in a SQL Server database, you can easily extend xSQL Profiler's monitoring capabilities. For example, you can place xSQL Profiler's database on a web server and build a web application on top of it, so that users who don't have access to the machine where xSQL Profiler is installed can still access and analyze real-time trace data from the monitored servers.

Advanced filtering and reporting

xSQL Profiler offers the ability to add filters directly to your traces so that they register only a portion of the data, reducing the load that running traces might impose on your server. Apart from this, xSQL Profiler also lets you further filter the data saved in the database when you generate reports. And if the wide range of built-in filters still does not meet your requirements, you can write your own T-SQL queries against the underlying database to get exactly the data you need.

Scheduling

One of the main benefits of xSQL Profiler is the ability to schedule when traces run. This is something that cannot be done through Microsoft's SQL Server Profiler and would be way too complicated to achieve with queries. With xSQL Profiler you can add multiple schedules for each trace to make the monitoring process as efficient and effective as possible.

High level event definition

xSQL Profiler can start traces with the same event definitions as SQL Server Profiler. However, we noticed that these event definitions are too low-level: to audit particular aspects of SQL Server, such as user logins, query execution and so on, you have to create traces based on groups of multiple event definitions. So we created xSQL Profiler events, which are built-in event definitions that let you audit these aspects of SQL Server without manually picking every SQL Server event. And if the built-in event definitions don't fit your needs, you can always create your own.

So, there you have it, a guide to what xSQL Profiler can do to simplify monitoring of SQL Server Express instances.

Thursday, April 27, 2017

Installing the new xSQL Profiler 2.1

We just released the newest version of xSQL Profiler. Besides an increased range of supported SQL Server versions and traceable events, as well as some bug fixes and improvements, there are also some changes in the setup and installation. Let's go through the new installation, shall we?


A new feature in the xSQL Profiler installation is the ability to choose a "Standard Installation" and have all the configuration done behind the scenes, as opposed to the "Advanced Installation", where you have to enter each configuration option manually.



There are a few things to keep in mind with the Standard Installation though.

  • First of all, you need SQL Server 2008 or later installed as the default instance. By default, xSQL Profiler uses "(local)" as the server name, which means it will look for the default SQL Server instance on the machine. One thing that might confuse you a little is that having SQL Server installed on your machine does not necessarily mean it is the default instance, even if it is the only SQL Server installed; you need to explicitly tell the SQL Server setup to register the instance as the default instance. If you haven't done this during installation, and are reading all those forums that say it cannot be done after the instance is installed, there is no need to panic because they are simply not correct. Here is an article on how to do that. (You can check whether you are connected to the default instance with the query shown after this list.)
  • The next thing to note is that the xSQL Profiler service will run under the "Local System" account. This is necessary because it is the only built-in service account that is reasonably sure to have access to SQL Server (neither "Local Service" nor "Network Service" can access SQL Server by default). However, on SQL Server 2012 and later even the Local System account doesn't have access by default, so you will need to manually grant this account access to the SQL Server. Keep in mind that this is a highly privileged account, so be careful: using it carelessly can expose your system to security risks. You can always change the account the service runs under after it is installed and manually provide the necessary permissions.
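Regarding the first point above, a quick way to check whether the instance you are connected to is the default instance is to look at SERVERPROPERTY('InstanceName'), which returns NULL for a default instance:

    -- instance_name is NULL when you are connected to the default instance,
    -- otherwise it contains the named instance's name
    SELECT @@SERVERNAME AS server_name,
           SERVERPROPERTY('InstanceName') AS instance_name;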
Like all previous versions, this version of xSQL Profiler also supports an advanced installation, if you prefer to specify all of xSQL Profiler's configuration manually. The first thing you do is specify the account that xSQL Profiler's service will run under.



Your options here are one of the three built-in service accounts (for more information on those, check out this MSDN page) or a separate Windows account. If you do choose one of the built-in accounts, keep in mind what was mentioned above: for versions up to SQL Server 2008 R2, by default only Local System can connect to the server instance, and for later versions none of the built-in service accounts can. So if you choose one of them, you need to manually configure the account to be able to connect to the server. Keeping with the principle of least privilege, the best thing to do is create a new user specifically for the profiler's service and give it the necessary permissions. The minimum it needs is the "Log on as a service" right and permission to connect to the SQL Server instances.

The next thing you need to do is to choose the SQL server where the xSQL Repository will be created and provide the authentication details for that server.



This is fairly straightforward, much like providing authentication details in SQL Server Management Studio. You provide the server name and authentication details, as well as the new database's name. The credentials you enter here are used only by the setup to create the database and are then discarded. The server that will hold the xSQL Profiler repository can be a SQL Server on the local machine or on another machine. If you choose the Windows Authentication option, the currently logged-in user will be used to connect to the server, so make sure that user is registered as a login on SQL Server and has permission to create a database.

The last thing you need to do is provide the authentication details that will be used by xSQL Profiler's service to connect to the repository.



This is where you should be a little careful. You can choose either Windows or SQL Server Authentication, and each option has benefits and drawbacks. From a security standpoint, Windows Authentication is better, because with SQL Server Authentication your login details are saved in the service's XML configuration file. However, if you do choose Windows Authentication, keep in mind that:
  • The service's user needs to be registered as a login on the repository's SQL Server.
  • The service's user needs to have the necessary permissions to access the database.
The security risk of SQL Server Authentication can be mitigated by choosing a SQL Server user with the minimum set of required permissions, but this, again, requires some extra work. You can do it after xSQL Profiler has been installed, because the authentication details in this step are not used during the setup; they are used when you run the application.
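For reference, meeting the two Windows Authentication requirements listed above might look something like the sketch below. The account name 'DOMAIN\xsqlprofiler_svc' and the repository database name 'xSQLProfilerRepository' are just examples, and db_owner is only one possible choice for the "necessary permissions" (grant whatever your environment actually requires):

    -- Register the service's Windows account as a login on the repository's SQL Server
    CREATE LOGIN [DOMAIN\xsqlprofiler_svc] FROM WINDOWS;
    GO
    -- Map the login to a user in the repository database and grant it access
    USE xSQLProfilerRepository;
    GO
    CREATE USER [DOMAIN\xsqlprofiler_svc] FOR LOGIN [DOMAIN\xsqlprofiler_svc];
    ALTER ROLE db_owner ADD MEMBER [DOMAIN\xsqlprofiler_svc];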

So there you have it. All you have to do now is click "Next", review the installation details and if you have provided the correct information, the setup should complete without errors.

xSQL Profiler 2.1 now available

We just published a new version (2.1) of xSQL Profiler with built-in support for tracing low level SQL Server events. Now, if you want to create a trace for a specific SQL Server event, you don't have to create a new event definition for that event. All low level SQL Server events are included. This is especially useful when you are using xSQL Profiler with SQL Server Express.

A few minor UI improvements, such as a better context menu and tooltips, an updated grid, etc., are also included in the new version.

xSQL Profiler is FREE for one SQL Server Express instance. You can download the new version from our website: http://www.xsql.com/download/sql_server_profiler/

Wednesday, April 19, 2017

SQL Server: Functions vs. Stored Procedures to return result sets

A while back, I was building the database schema for a web application that had some reporting functionality and, among other things, I had to implement logic in the database to prepare the data for the application's reports. The queries I constructed were relatively complex, which meant that I needed database objects to encapsulate them. So, it came down to a choice between table-valued functions and stored procedures. If you do a little research you'll notice that there is no clear-cut recommendation on choosing between functions and stored procedures when you need to retrieve a result set from the database. So, here is a comparison of the two, concluded with a suggestion for those of you who can't make up your minds.

T-SQL statements

With regard to the T-SQL statements each object can contain, stored procedures are much more versatile, because almost all T-SQL statements can be included in a stored procedure. The only exceptions are the following:
  • USE <database>
  • CREATE AGGREGATE, CREATE RULE, CREATE DEFAULT, CREATE SCHEMA, and CREATE or ALTER FUNCTION, TRIGGER, PROCEDURE, or VIEW
However, when it comes to table-valued functions, it's an entirely different story. Based on the T-SQL statements that can be used in them, they are quite limited. Namely, they cannot:
  • Apply schema or data changes in the database
  • Change the state of the database or a SQL Server instance
  • Create or access temporary tables
  • Call stored procedures
  • Execute dynamic SQL
  • Produce side effects such as relying on the information from a previous invocation.
So, basically, only SELECT statements are allowed in table-valued functions. The only exception is multi-statement table-valued functions, which use INSERT ... SELECT statements to populate the table variable that the function returns.
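As a hedged illustration (all object, table, and column names below are hypothetical), an inline table-valued function is essentially a single parameterized SELECT, while the equivalent stored procedure wraps the same SELECT in its body:

    -- Inline table-valued function: just one SELECT behind a RETURN
    CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustomerID int, @OrderYear int = NULL)
    RETURNS TABLE
    AS
    RETURN
    (
        SELECT OrderID, OrderDate, TotalDue
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID
          AND (@OrderYear IS NULL OR YEAR(OrderDate) = @OrderYear)
    );
    GO
    -- Equivalent stored procedure: returns its data by executing a SELECT in its body
    CREATE PROCEDURE dbo.usp_OrdersByCustomer @CustomerID int, @OrderYear int = NULL
    AS
    BEGIN
        SELECT OrderID, OrderDate, TotalDue
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID
          AND (@OrderYear IS NULL OR YEAR(OrderDate) = @OrderYear);
    END;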

Parameters and return types

Both stored procedures and table-valued functions accept parameters of all data types; however, there are a few differences.
The first, and most important, is that unlike stored procedures, table-valued functions do not accept output parameters. In fact, table-valued functions return data to the client in only one way: through the RETURN statement. Stored procedures, on the other hand, do accept output parameters and have three ways to return data to the client: through output parameters, by executing a SELECT statement in the procedure's body, or by using a RETURN statement.
Another, more subtle difference is in how parameters with default values are handled. While both stored procedures and table-valued functions support default values for parameters, such parameters are optional only on stored procedures. Weird as it is, if you want the default value for a parameter when calling a function, you have to write the DEFAULT keyword in place of the value for that parameter. With stored procedures you can simply omit the value and SQL Server will supply the default.
The last difference is that when you call a stored procedure, you can pass parameters by name, using a syntax like <parameter_name> = <value>, which greatly improves the code's readability. You can't do this with functions, which can become an issue when a function has a lot of parameters, because you would constantly need to check the documentation just to find out the order of the parameters in the definition.
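Using the hypothetical objects from the earlier sketch, the last two differences look like this:

    -- Function call: parameters are positional, and DEFAULT must stand in for an omitted value
    SELECT * FROM dbo.fn_OrdersByCustomer(42, DEFAULT);
    -- Procedure call: parameters can be passed by name, and the optional one simply omitted
    EXEC dbo.usp_OrdersByCustomer @CustomerID = 42;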
So, if you need to return multiple result sets, or if you are worried about the readability of your code, stored procedures might be the better option.

Performance

If you think about it, table-valued functions, especially inline table-valued functions, are a lot like another database object. Yep, you guessed it: views. Even the SQL Server optimizer treats inline table-valued functions the same way it treats views, which is why you can think of them as parameterized views.
Performance-wise, functions and stored procedures are essentially identical. Both benefit from execution plan caching, which means they are not recompiled every time they are executed. To see this for yourself, create a function and a procedure with the same SELECT statement, execute each a few times, and then check the sys.dm_exec_query_stats DMV. You will notice that the last_elapsed_time values differ very little.
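A minimal sketch of that check, filtering by the hypothetical object name from the earlier example:

    -- Compare the cached execution statistics of the function call and the procedure call
    SELECT st.text, qs.execution_count, qs.last_elapsed_time
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.text LIKE N'%OrdersByCustomer%'
    ORDER BY qs.last_execution_time DESC;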

Usage

This is where, I think, table-valued functions have the greatest advantage. Because they resemble views, they can be placed anywhere a table can be placed in a query. This means that you can filter the function's result set, use it in joins, and so on; you cannot do the same with stored procedures. Of course, with enough T-SQL knowledge and experience you could probably find a workaround, but in general, manipulating the result set returned by a stored procedure is not as straightforward as it is for a table-valued function. So, if you need to apply additional manipulation to the data returned by a function, you can do that very easily; if that same data comes from a stored procedure, in most cases you would need to alter the procedure's code, which requires having the necessary permissions and so on.
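For instance, the hypothetical function from the earlier sketch can be joined and filtered like any other table (dbo.Customers is also a hypothetical table):

    -- Join each customer to their orders and keep only the large ones
    SELECT c.CustomerName, o.OrderID, o.TotalDue
    FROM dbo.Customers AS c
    CROSS APPLY dbo.fn_OrdersByCustomer(c.CustomerID, DEFAULT) AS o
    WHERE o.TotalDue > 1000;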
If you think that the result set of the stored procedure or function may need to be manipulated further, use table-valued functions.

One last thing

One thing that I really like about table-valued functions is that you can use the SCHEMABINDING option on them to prevent changes to the underlying objects that would break the function. The same option is not always available for stored procedures: you can use it only on natively compiled stored procedures, which are available only on SQL Server 2014 and later and on Azure SQL Database. So, if you are using regular stored procedures to retrieve data, keep in mind that they can break if you change the structure of the referenced objects.
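Adding schema binding to the hypothetical function from earlier is just a matter of including the WITH SCHEMABINDING clause (all referenced objects must then use two-part names, as dbo.Orders already does):

    ALTER FUNCTION dbo.fn_OrdersByCustomer (@CustomerID int, @OrderYear int = NULL)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN
    (
        SELECT OrderID, OrderDate, TotalDue
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID
          AND (@OrderYear IS NULL OR YEAR(OrderDate) = @OrderYear)
    );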


To conclude, as a general rule of thumb, I tend to use table-valued functions whenever I need to retrieve a result set from the database, and stored procedures when I need to perform some work on the database.

Thursday, March 2, 2017

New Schema Compare build adds option to exclude checked property

A new build of xSQL Schema Compare is available for download. The new build adds a new comparison option that excludes a property called “checked” for check constraints and foreign keys. This property corresponds to the scripting clause “WITH CHECK”/”WITH NOCHECK”. The screen shot below shows the new option.
xSQL Schema Compare is currently free for SQL Server Express.  

Thursday, February 16, 2017

New DB Searcher tool for SQL Server/Azure

Just a Great Tool, No Cost, No Strings

The new xSQL Database Searcher tool, previously known as xSQL Object Search, is now available for download, no cost and no strings attached. The new version:
  • Supports all versions of SQL Server from 2005 to 2016
  • Supports SQL Azure v11 and v12
  • Can be used as a stand-alone tool OR as an add-in to SSMS 2008 - 2016
  • Searches both database objects and SQL Server jobs
  • Supports everything from simple equals or LIKE searches to regular expression searches
Best of all, it costs nothing and has absolutely no strings attached, just download, install and enjoy!

Please tell us how you like the tool and how we can make it better. We would greatly appreciate it!

Wednesday, February 8, 2017

Get Bose SoundSport or Amazon Fire TV Stick with Alexa on us today

Purchase a new, 1 user, 1 year Silver Subscription before February 15, 2017 and we will send you an Amazon Fire TV Stick with Alexa - just enter the promo code FIRETV17 on the shopping cart page. Once you complete the order please email us with your name, and the address where you want us to ship the Fire TV Stick.
Make it a new, 5 user Silver Subscription license or any other combination of licenses with a total value of $800 or more and we will send you a Bose Sound Sport Wireless Headphones in black, aqua or citron color - just enter promo code SOUNDSPORT17 on the shopping cart page. Once you complete the order please email us with your name, your color choice, and the address where you want us to ship the Bose SoundSport headphones.

Available to US and Canadian residents only. Limit 1 per customer. Expires on January 31, 2017.

Monday, February 6, 2017

SQL Server: DATETIME vs DATETIME2

When it comes to choosing a data type for a field in a SQL Server table, an issue that is frequently discussed in popular forums is the choice between the DATETIME and DATETIME2 data types. According to the official MSDN documentation, it is recommended that you use DATETIME2 for new work because it is more portable, aligns with the SQL standard, offers more precision and has a greater range. There aren't many people who would dispute the recommendations of one of the "Big 4" companies, myself included, but for the curious minds out there, let's see why DATETIME2 is the better choice.

Precision

DATETIME2 has a fractional-seconds precision of up to 7 digits, compared to DATETIME's precision of 3 fractional digits. The 'up to' part means that you can specify the precision through an optional parameter; the default is 7 digits. This increased precision means that converting a string like '2016-11-11 20:20:20.4444' to DATETIME2 succeeds, whereas converting the same string to DATETIME fails.
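You can verify this with a quick test:

    -- Succeeds: DATETIME2 accepts up to 7 fractional digits
    SELECT CAST('2016-11-11 20:20:20.4444' AS datetime2);
    -- Fails with a conversion error: DATETIME accepts at most 3 fractional digits
    SELECT CAST('2016-11-11 20:20:20.4444' AS datetime);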

Accuracy

DATETIME2 surpasses DATETIME in accuracy by a relatively big margin. Although DATETIME has a precision of 3 fractional digits, it rounds the last digit to an increment of .000, .003 or .007, whereas DATETIME2 supports an accuracy of 100 nanoseconds. Let's see how this affects the values by converting '2016-11-11 20:20:20.444' to DATETIME and to DATETIME2 with 3 digits of precision.
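The comparison below shows the rounding in action:

    SELECT CAST('2016-11-11 20:20:20.444' AS datetime)     AS dt_value,   -- 2016-11-11 20:20:20.443
           CAST('2016-11-11 20:20:20.444' AS datetime2(3)) AS dt2_value;  -- 2016-11-11 20:20:20.444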
Even though the conversion is supported by both data types, converting to DATETIME means sacrificing accuracy. So if you aim to accurately store dates and times with more than 2 fractional digits, the only choice is DATETIME2.

Range

DATETIME2 also supports a greater range of values than DATETIME. The former supports dates from 0001-01-01 00:00:00 to 9999-12-31 23:59:59.9999999, whereas the latter supports dates from 1753-01-01 00:00:00 to 9999-12-31 23:59:59.997. As a small additional benefit that avoids some confusion for developers working with the .NET platform, the range of DATETIME2 matches the range of the DateTime type in C# and VB.NET.
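For example, dates before 1753 can only be stored in DATETIME2:

    SELECT CAST('1600-01-01' AS datetime2);  -- succeeds
    SELECT CAST('1600-01-01' AS datetime);   -- fails: out-of-range value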

Memory space required

If you are thinking that the additional capabilities of the DATETIME2 data type translate into additional storage space requirements, you are mistaken. DATETIME2 requires between 6 and 8 bytes, whereas DATETIME always requires 8 bytes. The space required by DATETIME2 depends on the fractional precision you choose for the column:
  • 0 to 2 digits - 6 Bytes
  • 3 to 4 digits - 7 Bytes
  • more than 4 digits - 8 Bytes
So if your aim is to save storage space and increase read performance, DATETIME2 is the way to go.

Compliance with standards

DATETIME2 is compliant with both the ANSI and ISO 8601 SQL standards, whereas DATETIME is not compliant with either.

In conclusion, whether it's range, precision, accuracy, storage space optimization or compliance with standards that you require, DATETIME2 is the better choice.