
SQLHub News: Interview with Ritesh Shah published on the PACKT website

Monday, September 3, 2012



There is a big difference between writing a book and writing a blog. I understood this when I started writing the book “SQL Server 2012 Performance Tuning Cookbook”.



As soon as PACKT proposed that I write a book on this subject, I started deciding the topics to cover with my co-author, Bihag Thaker. Deciding the content was a big challenge, because performance tuning is a vast topic and covering every aspect in one book is difficult, but we finally organized the book into the following three parts.

1.) Performance Monitoring
2.) Performance Tuning
3.) Performance Management

Each part contains practical recipes to give readers hands-on experience. PACKT conducted an interview with me about my journey through the book-writing process. To read the interview, please click here.

Reference: Ritesh Shah

http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but the examples and explanations are prepared by Ritesh Shah, founder of
http://www.SQLHub.com
Ask me any SQL Server related question at my “ASK Profile”.

Implementing physical Database Structure

Saturday, July 28, 2012


Our book “SQL Server 2012 Performance Tuning Cookbook” was published this week. The book is based on the concept of performance tuning in SQL Server 2012, and one of its chapters is “Implementing Physical Database Structure”. The following article is based entirely on that chapter.

Database structure is one of the important factors that affect the performance of your database. Performance heavily depends on how you have physically placed your database objects and how you have configured your disk subsystem. Designing the physical layout of your database correctly is a key factor in improving the performance of your queries and thus of the database as a whole. However, the right physical design decisions depend on the hardware resources you have available, including the number of processors, the amount of RAM, and storage resources such as how many disks or RAID controllers are in your database environment. The best practice while designing the physical layout of the database is to have multiple physical disks available for it. If you configure your database in such a way that it spreads across multiple disks, it can benefit from parallel I/O operations.

The following are some of the decisions that influence your database performance:
  • Where do you place data files?
  • Where do you place log files?
  • Where do you place large objects?
  • Where do you place indexes?
  • Where do you place the tempdb database?
You can control the physical location of database objects within the database by using files and filegroups.
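
For illustration, here is a minimal sketch of this idea; the drive letters, file paths, and object names below are assumptions made up for this example, not part of the original chapter:

-- Sketch: data file, index filegroup, and log file placed on separate physical disks
CREATE DATABASE SalesDemo
ON PRIMARY
      (NAME = SalesDemo_Data, FILENAME = 'D:\SQLData\SalesDemo_Data.mdf'),
FILEGROUP FG_Indexes
      (NAME = SalesDemo_Index, FILENAME = 'E:\SQLIndex\SalesDemo_Index.ndf')
LOG ON
      (NAME = SalesDemo_Log, FILENAME = 'F:\SQLLog\SalesDemo_Log.ldf')
GO

USE SalesDemo
GO

-- Table data stays on PRIMARY; the nonclustered index goes to the FG_Indexes filegroup,
-- so index I/O can be served from a different physical disk than the data
CREATE TABLE dbo.Orders (OrderID INT IDENTITY PRIMARY KEY, OrderDate DATETIME, Amount MONEY)
GO
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate) ON FG_Indexes
GO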

To read the complete article, kindly click here.


Reference: Ritesh Shah

http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but the examples and explanations are prepared by Ritesh Shah, founder of
http://www.SQLHub.com
Ask me any SQL Server related question at my “ASK Profile”.

350 Articles on the blog and SQL Server 2012 book release

Friday, July 27, 2012


I have completed 350 articles on this blog, and apart from that, my book “SQL Server 2012 Performance Tuning Cookbook” was officially released yesterday.

I would like to express my deepest gratitude to all my blog readers and to the online SQL Server community, which always inspires me to work hard and give back to the community.

In today’s blog post I am not going to discuss any technical stuff; instead, I would like to express my sincere thanks to a few people who have always played an important role in my life. Though I have expressed my thankfulness to them in the “Acknowledgement” section of my book, I can’t resist the temptation to express my gratitude in a blog post as well.

I would sincerely like to thank Packt Publishing for showing their confidence in me and providing the invaluable opportunity of being a part of this book. The individuals at Packt to whom I am deeply grateful are Kedar Bhat, Sai Gamare, Madhuri Das, Ashwin Shetty, Apoorva Bolar, and Dhwani Devater. They have been very co-operative and supportive at all stages of this book. I am extremely thankful to the technical reviewers, Michael Zilberstein (Blog), Maria Zakourdaev (Blog, Twitter), and Satya (Blog, Twitter), for their excellent work in getting the technical details of the book into perfect shape.

I wouldn't have been able to author this book alone, so thanks should go to Mr. Bihag Thaker, as well, as he agreed to co-author this book with me and has worked even harder on it than I have myself.

Two individuals to whom I am indebted and whose disciple I have always been are Mr. Pinal Dave (Blog, Twitter) and Mr. Jacob Sebastian (Blog, Twitter). I have learnt a lot from them, and they are the reason I'm part of the IT community today.

1.) Pinal Dave, who blogs at SQLAuthority.com and is an author of several SQL Server books. Currently, he is working as a Technology Evangelist at Microsoft.

2.) Jacob Sebastian, who blogs at BeyondRelational.com  and is a SQL Server MVP, book author, well-known speaker in SQL Server technology, and much more.

Without my family's support, a task such as writing a book would not have been achievable. I would like to heartily thank my parents, Mr. Ashwinkumar Shah and Mrs. Divya Shah. It is because of them that I exist, and I cherish their blessings, which are always with me. I would also like to thank my one-and-a-half-year-old son, Teerth, who would often start crying at midnight; I would lose my sleep and, not being able to get it back, would end up researching more on the subjects that helped me write this book. Last but not least, I would like to thank my wife, Alka Shah!

Finally, I appreciate the help and support of all my friends, who have directly and indirectly helped me a lot.

Reference: Ritesh Shah
http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but the examples and explanations are prepared by Ritesh Shah, founder of
http://www.SQLHub.com
Ask me any SQL Server related question at my “ASK Profile”.

Find SQL Server jobs that ran on a specific date, with their status

Wednesday, July 25, 2012


It is essential for a SQL Server DBA to keep an eye on SQL Server jobs and take immediate action if a job fails. There are several ways of doing this. I will show one query that returns all jobs that ran on a specific date, along with their status, that is, whether they failed or succeeded.

In order to get information about jobs, we have to query the following system tables in the MSDB database, because all SQL Server jobs are stored only in the MSDB database.

1.) sysjobhistory
2.) sysjobs
3.) sysjobsteps

Here is the query which will return the desired result:

USE MSDB
GO

-- sysjobhistory stores run_date as an integer in yyyymmdd format,
-- so build today's date in that same format
DECLARE @Today AS VARCHAR(8)
SET @Today = CONVERT(VARCHAR(8), GETDATE(), 112)

SELECT * FROM (
      SELECT
            SysJob.name,
            CASE WHEN JobHist.run_status = 1
                  THEN 'Success'
                  WHEN JobHist.run_status = 0
                  THEN 'Fail'
            END AS JobRunStatus,
            JobHist.run_date,
            JobHist.run_time,
            JobHist.run_duration AS RunDuration,
            JobStep.step_id,
            JobStep.command,
            -- number the executions of each job step, latest first
            ROW_NUMBER() OVER(PARTITION BY SysJob.name, JobStep.step_id ORDER BY run_time DESC) AS NumberOfExecution
      FROM
            dbo.sysjobhistory AS JobHist
      INNER JOIN
            dbo.sysjobs AS SysJob
      ON
            JobHist.job_id = SysJob.job_id
      INNER JOIN
            dbo.sysjobsteps AS JobStep
      ON
            JobStep.job_id = SysJob.job_id
      WHERE
            JobHist.run_date = @Today
) AS T
WHERE
      NumberOfExecution = 1   -- keep only the most recent execution of each job step

As a DBA, I keep this script handy, as I never know when I will need it. We already have monitoring for each and every job, and any job failure is notified to me via email, yet this script has still been a time saver for me many times.
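
If you are only interested in failures, a smaller variant of the same idea (just a sketch against the same msdb tables) is to filter on run_status = 0:

-- Sketch: list only the job steps that failed today, with the error message
USE MSDB
GO
SELECT SysJob.name, JobHist.step_id, JobHist.run_time, JobHist.message
FROM dbo.sysjobhistory AS JobHist
INNER JOIN dbo.sysjobs AS SysJob
ON JobHist.job_id = SysJob.job_id
WHERE JobHist.run_status = 0
AND JobHist.run_date = CONVERT(VARCHAR(8), GETDATE(), 112)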

One tip I would like to give, especially while we are talking about jobs, is to keep a regular backup policy for the MSDB database too, as all the jobs are stored there; when restoring SQL Server after a disaster we would otherwise need to create all the jobs again, and an MSDB database backup becomes a life saver in that case.
I have explained the importance of backups in one of my past articles; if you are interested in reading it, click here.
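
For reference, here is a minimal sketch of such a backup; the target path is only an assumption, so adjust it to your own backup location:

-- Sketch: full backup of the msdb database (the path is an assumption for the example)
BACKUP DATABASE msdb
TO DISK = 'D:\SQLBackups\msdb_full.bak'
WITH INIT, STATS = 10
GO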

Happy Scripting!!!!

Reference: Ritesh Shah
http://www.sqlhub.com
Note: Microsoft Books Online is the default reference for all articles, but the examples and explanations are prepared by Ritesh Shah, founder of
http://www.SQLHub.com
Ask me any SQL Server related question at my “ASK Profile”.

TableDiff utility to compare data between two databases

Wednesday, July 18, 2012


Introduction

As database professionals, we might come across a situation where we need to compare data, row by row or column by column, between two tables that reside in the same database, in the same instance, or perhaps in different instances on different servers.

What do you do in this situation?

1.) Do you write a script of your own?
2.) Do you use third-party software?
3.) Do you use the “TableDiff” utility that comes with SQL Server itself?

The third option, out of all of the above, seems best to me, as we neither need to reinvent the wheel by writing a script of our own nor need to pay extra money to compare data.

“TableDiff” is one of the oldest and most useful utilities provided by Microsoft. It works fine from SQL Server 2000 up to the latest SQL Server edition. However, I am providing the script and example from my SQL Server 2008 instance.


Getting Ready

Before you move forward, you need to find the two tables whose data you want to compare. They might be a publisher/subscriber pair in replication, they might be in two different databases you are using for scale-out, or they may be anywhere else.

If you don’t have such a situation in your environment at the moment, don’t worry; I will give you a script that creates the scenario for testing the “TableDiff” utility.


How to do it...

1.) Open a new query window in your SQL Server.

2.) Create two different databases by using the following script:
USE master
GO

CREATE DATABASE TableDiffDb1
GO

CREATE DATABASE TableDiffDb2
GO


3.) Create a sample table in the “TableDiffDb1” database with the following script:

USE TableDiffDb1
GO

--if the Orders table already exists, drop it, then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO

--creating table
CREATE TABLE orders (OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO

--inserting 1000 sample rows into table
INSERT INTO orders (OrderDate, Amount, Refno)
SELECT TOP 1000
      DATEADD(minute, ABS(a.object_id % 50000 ), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO

4.) Create the “Orders” table in the second database by copying 900 records (out of the total 1,000 records) from the “Orders” table of the “TableDiffDb1” database, using the following script.

USE TableDiffDb2
GO

--if the Orders table already exists, drop it, then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO

--creating table
CREATE TABLE orders (OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO

--inserting 900 sample rows into table from TableDiffDb1 database's Orders table
INSERT INTO orders (OrderDate, Amount, Refno)
SELECT TOP 900 OrderDate,Amount,Refno FROM TableDiffDb1.dbo.orders

5.) Now use the following command to see the differences between the two tables.

exec master..xp_cmdshell 'tablediff -sourceserver [RITESH-SHAH\MSSQL2008] -sourcedatabase TableDiffDb1 -sourcetable Orders -destinationserver [RITESH-SHAH\MSSQL2008] -destinationdatabase TableDiffDb2 -destinationtable Orders -et Difference -f D:\OrdersDifference.sql'

Replace your server instance name in the “sourceserver” and “destinationserver” parameters in the above command, and you will get one .SQL file on the D drive. That file lists all the records missing from the “Orders” table of the “TableDiffDb2” database, and running it will insert those missing records.
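
If you prefer to apply the generated fix file without opening it in SSMS, one option (shown here only as a sketch, reusing the instance name and file path from the example above) is to run it through the sqlcmd utility:

--Sketch: run the generated .SQL fix file against the destination database via sqlcmd
exec master..xp_cmdshell 'sqlcmd -S RITESH-SHAH\MSSQL2008 -d TableDiffDb2 -E -i D:\OrdersDifference.sql'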

There's more...

I would like to draw your attention to a few facts that can help you if you don’t find “TableDiff” working in your environment.

Remember that the “TableDiff.exe” file resides in the SQL Server installation directory by default, which is “C:\Program Files\Microsoft SQL Server\100\COM” in my case. So there is a chance that the “TableDiff” command is not accessible from the DOS prompt; in that case, you have to add the path for “TableDiff” to the PATH system environment variable.

You can reach the system environment variables via “My Computer > Properties > Advanced System Settings > Advanced > Environment Variables > System Variables > PATH”.

If there is already a value under “PATH” in the system variables, put a “;” (semicolon) after that value and then add your path for “TableDiff”.
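
Alternatively, if you would rather not modify the PATH variable at all, you can call the utility by its full path; here is a sketch based on the default installation path from my machine (yours may differ):

--Sketch: invoke tablediff.exe by its full path so no PATH change is needed
exec master..xp_cmdshell '"C:\Program Files\Microsoft SQL Server\100\COM\tablediff.exe" -sourceserver [RITESH-SHAH\MSSQL2008] -sourcedatabase TableDiffDb1 -sourcetable Orders -destinationserver [RITESH-SHAH\MSSQL2008] -destinationdatabase TableDiffDb2 -destinationtable Orders -f D:\OrdersDifference.sql'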

People generally tend to use “TableDiff” from the DOS prompt itself or via a .bat (batch) file, but I have used the “xp_cmdshell” extended stored procedure to show the use of the command right from SQL Server. There is a chance that “xp_cmdshell” is disabled in your environment; if your security constraints allow it, you can enable “xp_cmdshell”. For more details about the steps, click here.
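
For quick reference, this is the usual way to enable it through sp_configure; run it only if your security policy permits:

--Enable xp_cmdshell (requires the ALTER SETTINGS server-level permission)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
GO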

Reference: Ritesh Shah
Note: Microsoft Books Online is the default reference for all articles, but the examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com
Ask me any SQL Server related question at my “ASK Profile”.

NOLOCK hint in SQL Server, Good guy or bad guy?

Friday, May 4, 2012


Introduction

It is quite common in a production environment for multiple applications and/or users to try to access the same resource at the same time, which may create locking and blocking issues. It is tricky to deal with this situation, as there is no rocket science that suits all situations; it all depends on the circumstances, because in some production environments it is not possible to resolve blocking issues easily overnight.
The main root cause of locking is a long-running transaction that keeps your object locked; meanwhile, any request that comes in to access the same object has to wait until the current transaction completes its operation. The best choice would be to minimize the transaction length so that it releases the lock quickly and other requests don’t need to wait, but unfortunately it is not possible to solve it so easily in ALL environments.
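
As a side note, before deciding anything it helps to see who is blocking whom; here is a minimal sketch using the sys.dm_exec_requests DMV (available from SQL Server 2005 onwards):

-- Sketch: list requests that are currently blocked and the session that blocks them
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0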

Getting ready

I am going to perform this example on my SQL Server 2012 RTM instance, but it should work as is on SQL Server 2005 / 2008 too.
Connect to your SQL Server and open three query windows. We will refer to these query windows by the following names:
  • Win1
  • Win2
  • Win3

How to do it...

1. After connecting to SQL Server Management Studio (SSMS), open Win1.
2. Create one sample database named SQLHub and one table with sample rows using the following T-SQL script:

CREATE DATABASE SQLHub
GO

USE SQLHub
GO

--if the Orders table already exists, drop it, then create a new one named "Orders"
IF OBJECT_ID('orders', 'U') IS NOT NULL BEGIN
      DROP TABLE orders
END
GO

--creating table
CREATE TABLE orders (OrderID INT IDENTITY, OrderDate DATETIME, Amount MONEY, Refno INT)
GO

--inserting 100000 sample rows into table
INSERT INTO orders (OrderDate, Amount, Refno)
SELECT TOP 100000
      DATEADD(minute, ABS(a.object_id % 50000 ), CAST('2010-02-01' AS DATETIME)),
      ABS(a.object_id % 10),
      CAST(ABS(a.object_id) AS VARCHAR)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO

3. Execute one UPDATE statement in Win1 with BEGIN TRANSACTION. Note that we don’t have a COMMIT or ROLLBACK after the UPDATE statement at the moment; the ROLLBACK is commented out so it won’t be executed.

BEGIN TRANSACTION
UPDATE Orders SET Amount=5.00 WHERE OrderID<10
--ROLLBACK


4. In Win2, try to execute the following SELECT statement:

select * from Orders Where orderID<=15

5. You will observe that the SELECT statement doesn’t return any results, so in Win3, execute the following T-SQL to see what is going on behind the scenes:
sp_who2
go

6. We will get many rows in the result set from the above command, but we have to look at the last two rows, which belong to the SQLHub database, as shown in the following screen capture:

7. We can see that our UPDATE statement runs with SPID 54 and the SELECT statement runs with SPID 55. The SELECT query is blocked by SPID 54, as shown in row no. 2, column no. 5 of the screen capture. Now I have two options: either issue a COMMIT / ROLLBACK command, which is not a good idea because it interrupts the UPDATE, or cancel the SELECT query in SSMS, which is what I will do now.
8. Since we have cancelled the SELECT query, we will now execute the same SELECT statement with the NOLOCK hint.
select * from Orders WITH (NOLOCK) Where orderID<=15

9. Here is the result returned by the above query.


10. Go to Win1 and execute the “ROLLBACK” statement.
11. Go back to Win2 and execute the SELECT statement with or without “NOLOCK”. For example:
select * from Orders  Where orderID<=15

12. Here is the screenshot of the result returned by the above query:



How it works...

When we executed the UPDATE statement in step no. 3 without a COMMIT or ROLLBACK, it updated the records but didn’t release the lock it had acquired on the table, so the SELECT query was not able to read the data and return it.
How do you know whether the UPDATE has really updated the values, since our SELECT query gives no results? We executed the same SELECT query with the “WITH (NOLOCK)” hint in step no. 8, and the screen capture in step 9 confirms that the values are updated. Isn’t that a good situation? Maybe, since we were not even able to get the result of the SELECT statement in step no. 4. But I would say we can’t decide whether it is good or bad without evaluating the business need.
The UPDATE statement has updated the value, but it is not yet committed, so there is still a chance of a ROLLBACK. If you look at the result set in step no. 9, you will see “5.00” in the “Amount” column, which may not be the proper value: after the ROLLBACK, it goes back to “2.00” again, as per step no. 12.
Think about what would happen if this were the result of a stock exchange query. Could we live with the dirty read we saw in the screenshot in step no. 9? No, certainly not. That is why I have said that using NOLOCK all depends on the business need and the situation. Use it wisely, as it is a double-edged sword.

See also

NOLOCK, also known as READUNCOMMITTED, is closely related to the ISOLATION level concept. Here are the links to some of my past articles on the subject.
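
To illustrate the relationship, the NOLOCK table hint gives the same dirty-read behavior as querying under the READ UNCOMMITTED isolation level; here is a minimal sketch of the session-level equivalent:

-- Reading under READ UNCOMMITTED behaves like adding WITH (NOLOCK) to every table in the query
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
select * from Orders Where orderID<=15
GO
-- switch back to the default isolation level when done
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
GO
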
Happy Reading!!!
Reference: Ritesh Shah
Note: Microsoft Books Online is the default reference for all articles, but the examples and explanations are prepared by Ritesh Shah, founder of http://www.SQLHub.com
Ask me any SQL Server related question at my “ASK Profile”.