
Beware the change scripts created by SQL Server Management Studio

Part of my job is moonlighting as a SQL Server database admin, so a co-worker recently asked me to run a script against a production table. The task was simple: add a few new columns and create some indexes. He had generated the script by using SQL Server 2008 Management Studio’s (SSMS) Generate Change Script function.

Although the general sequence of steps in the generated script made sense, I have some complaints about the SSMS output. Here’s a very simple recreation of the scenario.

Consider a table with two columns. One is a primary key, and one has a default constraint:

CREATE TABLE phillies (
   phils_id INT IDENTITY(1,1)
,  phils_year CHAR(4) NOT NULL
   CONSTRAINT df_phillies_year DEFAULT (2008) 
,  CONSTRAINT pk_phils_id 
      PRIMARY KEY CLUSTERED (phils_id))

Using the SSMS table design view to add two columns to the phillies table and saving the change script results in the following code:

BEGIN TRANSACTION
GO
ALTER TABLE dbo.phillies
   DROP CONSTRAINT df_phillies_year
GO
CREATE TABLE dbo.Tmp_phillies (
   phils_id INT NOT NULL IDENTITY (1, 1),
   phils_year CHAR(4) NOT NULL,
   division_champ_flag bit NOT NULL,
   national_champ_flag bit NOT NULL
)  ON [PRIMARY]
GO
ALTER TABLE dbo.Tmp_phillies 
   ADD CONSTRAINT
   df_phillies_year DEFAULT ((2008)) FOR phils_year
GO
SET IDENTITY_INSERT dbo.Tmp_phillies ON
GO
IF EXISTS(SELECT * FROM dbo.phillies)
   EXEC('INSERT INTO dbo.Tmp_phillies 
      (phils_id, phils_year)
   SELECT phils_id, phils_year 
   FROM dbo.phillies 
   WITH (HOLDLOCK TABLOCKX)')
GO
SET IDENTITY_INSERT dbo.Tmp_phillies OFF
GO
DROP TABLE dbo.phillies
GO
EXECUTE sp_rename 
   N'dbo.Tmp_phillies', N'phillies', 'OBJECT' 
GO
ALTER TABLE dbo.phillies 
   ADD CONSTRAINT
   pk_phils_id PRIMARY KEY CLUSTERED 
   (phils_id) ON [PRIMARY]
GO
COMMIT

This script contains BEGIN TRANSACTION and COMMIT statements but no error handling of any kind to go with them. Run it as-is and hit an error, and nothing gets rolled back.

Any error handling you might add, however, is thwarted by the fact that the script's statements live in separate batches (i.e., they're divided by the GO batch separator).

Say you individually check each statement for errors and issue a rollback/return if something goes awry.

BEGIN TRANSACTION
GO
[snip]
CREATE TABLE dbo.Tmp_phillies (
   phils_id INT NOT NULL IDENTITY (1, 1),
   phils_year CHAR(4) NOT NULL,
   division_champ_flag bit NOT NULL,
   national_champ_flag bit NOT NULL
)  ON [PRIMARY]
GO
[snip]
-- throw an error
SELECT 23/0
IF @@ERROR <> 0
BEGIN
   PRINT 'error!'
   ROLLBACK TRANSACTION
   RETURN
END
[snip]
GO
DROP TABLE dbo.phillies

In this scenario, changes made before the error are rolled back. But although the RETURN statement exits the current batch, subsequent batches (for example, the one that drops your table) still execute.
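Here's a stripped-down illustration (my own, separate from the script above) of how RETURN bails out of one batch while the next batch sails on:

-- batch 1: trigger an error, then try to bail out
SELECT 1/0
IF @@ERROR <> 0
BEGIN
   PRINT 'error! exiting this batch'
   RETURN
END
PRINT 'this line is never reached'
GO
-- batch 2: executes anyway, despite the RETURN above
PRINT 'still running!'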

A TRY/CATCH block is another potential error-handling method, but TRY/CATCH blocks can’t span multiple batches.

So what’s with the batch-happy SSMS? In older versions of SQL Server it might have been necessary to put a CREATE TABLE and subsequent ALTER TABLE statements in separate batches (I haven’t tested this). But SQL Server 2005 and 2008 provide statement-level recompilation, so the multitude of batches isn’t necessary.

When I tested a “GO-less” version of the script, it worked swimmingly. It’s understandable that SSMS can’t generate the error handling for you, but if it reduced each script’s batch count to the minimum required, adding your own error handling would be much easier.
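Here's a sketch of what that might look like (my reworking, not SSMS output): the whole change collapsed into one batch inside a TRY/CATCH block, with SET XACT_ABORT ON as extra insurance for errors that a same-scope TRY/CATCH can't see.

SET XACT_ABORT ON
BEGIN TRY
   BEGIN TRANSACTION

   ALTER TABLE dbo.phillies
      DROP CONSTRAINT df_phillies_year

   CREATE TABLE dbo.Tmp_phillies (
      phils_id INT NOT NULL IDENTITY (1, 1),
      phils_year CHAR(4) NOT NULL,
      division_champ_flag bit NOT NULL,
      national_champ_flag bit NOT NULL
   )  ON [PRIMARY]

   ALTER TABLE dbo.Tmp_phillies
      ADD CONSTRAINT df_phillies_year
      DEFAULT ((2008)) FOR phils_year

   SET IDENTITY_INSERT dbo.Tmp_phillies ON

   -- EXEC keeps the INSERT in a child batch, since Tmp_phillies
   -- doesn't exist when this batch is first compiled
   IF EXISTS(SELECT * FROM dbo.phillies)
      EXEC('INSERT INTO dbo.Tmp_phillies (phils_id, phils_year)
         SELECT phils_id, phils_year
         FROM dbo.phillies WITH (HOLDLOCK TABLOCKX)')

   SET IDENTITY_INSERT dbo.Tmp_phillies OFF

   DROP TABLE dbo.phillies

   EXECUTE sp_rename N'dbo.Tmp_phillies', N'phillies', 'OBJECT'

   ALTER TABLE dbo.phillies
      ADD CONSTRAINT pk_phils_id PRIMARY KEY CLUSTERED
      (phils_id) ON [PRIMARY]

   COMMIT TRANSACTION
END TRY
BEGIN CATCH
   IF @@TRANCOUNT > 0
      ROLLBACK TRANSACTION
   PRINT 'error! all changes rolled back'
END CATCH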

Conclusion: don’t rely on these SQL Server Management Studio-generated change scripts for any serious work, and definitely don’t use them as a model for how to write robust T-SQL.


SQL Server operand type clash!

Note:  This article was originally published to the Wharton Computing Developer Center.

Yesterday a fellow developer hit a strange SQL error and determined that the culprit was a T-SQL CASE expression used in his ORDER BY clause.

ORDER BY
CASE
WHEN UPPER(@orderBy)='GROUPMESSAGEID' THEN groupMessageID
WHEN UPPER(@orderBy)='GAMEID' THEN gameID
WHEN UPPER(@orderBy)='GROUPID' THEN groupID
WHEN UPPER(@orderBy)='MESSAGEID' THEN messageID
WHEN UPPER(@orderBy)='ROUNDSENT' THEN roundSent
WHEN UPPER(@orderBy)='SENTON' THEN sentOn
ELSE groupMessageID
END

This code results in the following message:
Operand type clash: uniqueidentifier is incompatible with datetime

After some digging around, we found the underlying cause of the error: the CASE expression’s branches return values of different data types. Sometimes that works as expected, because SQL Server implicitly converts every branch to a common type. But when no valid conversion exists (mixing numeric and character data is a classic culprit), you get an error.

In the statement above, gameID, groupID, roundSent, and groupMessageID are integers, sentOn is a datetime, and messageID is a uniqueidentifier. Because the data type precedence pecking order here is datetime, then int, then uniqueidentifier, SQL Server chose datetime as the return type. A uniqueidentifier can’t be converted to a datetime, hence the error message.
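The usual fix (a sketch against the same columns) is to keep incompatible types out of a single CASE expression: give each non-integer column its own CASE in the ORDER BY list, so every expression has exactly one return type.

ORDER BY
CASE WHEN UPPER(@orderBy)='SENTON' THEN sentOn END,
CASE WHEN UPPER(@orderBy)='MESSAGEID' THEN messageID END,
CASE
WHEN UPPER(@orderBy)='GAMEID' THEN gameID
WHEN UPPER(@orderBy)='GROUPID' THEN groupID
WHEN UPPER(@orderBy)='ROUNDSENT' THEN roundSent
ELSE groupMessageID
END

The CASE expressions that don’t match @orderBy return NULL (or, in the last one, the integer default) for every row, so they act only as tie-breakers.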

It all became clear after reading this article by George Mastros.  Thank you to George.


Dynamic SQL: exec vs. sp_executesql

Note: this article was originally published to the Wharton Computing Developer Center

Dynamic SQL came up again at a recent code review.  If only for your own maintenance sanity, it’s worth eliminating dynamic SQL where possible, but there are times you can’t avoid it (for example, some implementations of the new PIVOT operator in SQL Server 2005). If you must do the dynamic SQL thing, you should know that there are two ways to execute it:

  • EXEC(@stringofsql)
  • sp_executesql @stringofsql, @someparameters

The first option is old school, but it still works. sp_executesql, however, is newer and lets you use parameters in conjunction with the dynamically built SQL string. Having the option to use parameters is a great improvement: if the parameters are the only part of the SQL command that changes, the optimizer can reuse the execution plan instead of generating a new one each time the code runs. See SQL Server Books Online for the exact sp_executesql syntax.
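A quick sketch of the difference (my example, borrowing the phillies table from the first post above):

DECLARE @sql NVARCHAR(200)
DECLARE @year CHAR(4)
SET @year = '2008'

-- EXEC: the value is spliced into the string, so each distinct
-- value looks like a brand-new statement to the optimizer
EXEC('SELECT * FROM dbo.phillies WHERE phils_year = ''' + @year + '''')

-- sp_executesql: the value travels as a parameter, so the same
-- plan can be reused no matter what @year holds
SET @sql = N'SELECT * FROM dbo.phillies WHERE phils_year = @yr'
EXEC sp_executesql @sql, N'@yr CHAR(4)', @yr = @year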

If you want to know more about dynamic SQL, take a look at this article.


SQL error-handling snafu

Note: this article was originally published to the Wharton Computing Developer Center

Recently I encountered a SQL Server error-handling trap that caused a transaction not to roll back as expected. At the risk of sounding like an amateur, here’s what happened. The code was issuing two insert statements that were grouped into a single transaction:

DECLARE @error INT, @rowcount INT

BEGIN TRANSACTION

INSERT INTO dbo.Table1 (SOME COLUMNS)
VALUES (SOME VALUES)

SELECT @error = @@ERROR, @rowcount = @@ROWCOUNT

IF @error <> 0
BEGIN
   ROLLBACK TRANSACTION
   RAISERROR('error on the first insert!', 16, 1)
   RETURN
END

[DO STUFF]

INSERT INTO dbo.Table2 (SOME COLUMNS)
VALUES (SOME VALUES)

SELECT @error = @@ERROR, @rowcount = @@ROWCOUNT

IF @error <> 0
BEGIN
   ROLLBACK TRANSACTION
   RAISERROR('error on the second insert!', 16, 1)
   RETURN
END

COMMIT TRANSACTION


The above error handling works most of the time. However, certain run-time T-SQL errors (a reference to a missing object, for example) abort the entire batch: the offending statement fails, the error check after it never runs, and the open transaction is neither committed nor rolled back. In my case, the second insert failed because Table2 had been renamed. The subsequent @error check was never invoked, so the first insert wasn’t rolled back.

This behavior can be overridden by setting XACT_ABORT to ON, which makes SQL Server automatically roll back the current transaction when a run-time T-SQL error occurs.
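In skeleton form (same placeholders as above), the fix is one extra line at the top:

SET XACT_ABORT ON

BEGIN TRANSACTION

INSERT INTO dbo.Table1 (SOME COLUMNS)
VALUES (SOME VALUES)

-- if this insert hits a run-time error, XACT_ABORT rolls back
-- the whole transaction automatically -- no @@error check needed
INSERT INTO dbo.Table2 (SOME COLUMNS)
VALUES (SOME VALUES)

COMMIT TRANSACTION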

External link: SET XACT_ABORT
