Showing posts with label database administration.

10/21/2013

MSDB – cleanup is necessary

Recently I was asked to work on a task to reduce the size of the MSDB database, which had grown to 20 GB. Well, you would say, what is the big deal with that? 20 GB is not considered big for a database. Correct, but for MSDB, yes it is.
So the question is: why has the size grown this much, and what adverse impact would it have on the performance of my system?
I would say there could be many reasons, like:
  • Usually we do not create user objects inside MSDB, but it is good to check
  • There are multiple large SSIS/DTS packages; check with the development team whether you can store them in the file system instead. For the list of MSDB tables, refer to the links for SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, and SQL Server 2012
  • There isn’t any cleanup job configured
  • There are several hundred jobs running, e.g. LS (Log Shipping) jobs
  • And so on…
The case:
The client has a server configured with LS for DR purposes. LS is configured to sync every 15 minutes for several hundred databases, which in turn inserts lots of data into the history tables, such as the backup, restore, and log_shipping_monitor_history tables; each of them had more than 75 lakh (7.5 million) records.
The issue for us was that MSDB is configured on the local drive, i.e. where the OS and binaries reside, with no RAID. Also, the 30 GB C drive was nearing its capacity and quickly filling up. On top of this, the database is in the FULL recovery model. The reasons MSDB grew to 20 GB are:
  1. It never had a cleanup job on it
  2. Hundreds of databases keep inserting records for backup and restore
Adverse impact:
  1. Possibly your backups would take longer than usual, as it takes time to write the backup and/or restore history
  2. You would see timeout errors when you try to dig out the reason for a backup job failure
What I did was create a maintenance plan for cleanup, which calls sp_delete_backuphistory to purge the backup and restore history tables; per the client's request I configured the job to remove all data older than 60 days, as sketched below.
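Here is a minimal T-SQL sketch of that cleanup call, assuming the 60-day retention the client requested; in practice this runs as a step inside the maintenance plan:

declare @cutoff datetime
set @cutoff = DATEADD(DAY, -60, GETDATE())
-- removes matching rows from backupset, backupfile, restorehistory, etc.
exec msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff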
For the cleanup job to complete successfully, MSDB and tempdb need some room to grow, which was not possible in our case since we were left with only 10 GB of space. Hence, I added an additional data file and log file for both tempdb and MSDB on another drive where we had ample space (a sketch follows below), and scheduled a LOG backup for MSDB to run every 10 minutes.
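A hedged sketch of those changes; the E: drive, file paths, and sizes are hypothetical and should be adjusted to your environment:

-- add an extra data file and log file for msdb on a roomier drive
ALTER DATABASE msdb
ADD FILE (NAME = msdb_data2, FILENAME = 'E:\SQLData\msdb_data2.ndf', SIZE = 2GB)
ALTER DATABASE msdb
ADD LOG FILE (NAME = msdb_log2, FILENAME = 'E:\SQLLogs\msdb_log2.ldf', SIZE = 1GB)
-- the same ADD FILE pattern applies to tempdb; a frequent LOG backup
-- keeps the msdb log in check while the cleanup runs
BACKUP LOG msdb TO DISK = 'E:\SQLBackup\msdb_log.trn'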
And then I invoked the cleanup job; it took about 3 hours to finish, but it did what it should.
I took a FULL backup of MSDB, changed the recovery model to SIMPLE, and shrank the database; we were able to shrink MSDB successfully and bring it down to 6.5 GB.
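For reference, the closing steps looked roughly like this sketch; the backup path is hypothetical:

BACKUP DATABASE msdb TO DISK = 'E:\SQLBackup\msdb_full.bak'
ALTER DATABASE msdb SET RECOVERY SIMPLE
DBCC SHRINKDATABASE (msdb)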
Constraints and possible options:
  1. We had a limited maintenance window to accomplish the task
  2. Another maintenance activity had to be performed once we were done
  3. Option: We could have scripted out the foreign keys and other constraints, dropped them, and deleted the records directly. I didn't opt for this method because I personally have never done it.
Takeaway from this post: do a health check, and schedule a cleanup task for MSDB to run on a regular basis.

-- Hemantgiri Goswami (http://www.sql-server-citation.com/)



photo credit: mccun934 via photopin cc

11/07/2011

Resolving error 701: "There is insufficient system memory to run this query"

In the recent past, while working on an assignment, I encountered error 701, "There is insufficient system memory to run this query". I had a quick look at the server and noticed that it was not configured properly for max server memory, so I suggested the client change the max memory setting for the server. Along with the suggested change I quoted two articles, which I thought would help you as well to better understand how SQL Server memory is managed. The article by SQL Server MVP Jonathan Kehayias explains memory management very well.
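As a hedged illustration of the kind of change I suggested, here is a minimal sketch of setting max server memory; the 12288 MB value is purely illustrative and must be sized to the host:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- cap SQL Server's memory so the OS and other processes keep headroom
EXEC sp_configure 'max server memory (MB)', 12288
RECONFIGURE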

1) http://sqlblog.com/blogs/jonathan_kehayias/archive/2009/08/24/troubleshooting-the-sql-server-memory-leak-or-understanding-sql-server-memory-usage.aspx
2) http://support.microsoft.com/kb/912439

-- Hemantgiri S. Goswami (http://www.sql-server-citation.com)

10/29/2011

DATABASE_OBJECT_CHANGE_GROUP does not audit SPs or other objects

Deepak Kumar (a friend of mine and the founder of http://www.sqlknowledge.com) and I were chatting yesterday. We were discussing the audit feature in SQL Server 2008, which Deepak had enabled for one of his clients a month back. When he looked at the log he found that there were entries, but they related to tables only and not to other objects like SPs.
We discussed and googled the issue, and found an entry on Microsoft Connect where it was answered.

If we need other objects to be audited, we have to add SCHEMA_OBJECT_CHANGE_GROUP to the audit specification. Here is an excerpt from the Connect entry:
Thanks for your feedback. The behaviour you're seeing is by-design. In order to audit CREATE/DROP of an SP, you need to add the SCHEMA_OBJECT_CHANGE_GROUP to your audit specification. The DATABASE_OBJECT_CHANGE_GROUP is actually auditing the ALTER permission check on the SCHEMA as part of the CREATE statement. 

You can find the complete entry and more information in the Connect article. A sketch of an audit specification that captures both groups follows below.
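To illustrate the fix, here is a hedged sketch of a database audit specification that includes both action groups; the server audit name MyServerAudit and the specification name are hypothetical:

-- assumes a server audit named MyServerAudit already exists and is enabled
CREATE DATABASE AUDIT SPECIFICATION AuditObjectChanges
FOR SERVER AUDIT MyServerAudit
ADD (SCHEMA_OBJECT_CHANGE_GROUP),    -- CREATE/ALTER/DROP of SPs and other objects
ADD (DATABASE_OBJECT_CHANGE_GROUP)   -- ALTER-permission checks on schemas
WITH (STATE = ON)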


-- Hemantgiri S. Goswami (http://www.sql-server-citation.com )

10/28/2011

Refresh QA Database with Manual Scripts

A couple of weeks back Megha Sharma sent me an email with a document attached; this document is all about how to refresh a QA database with manual scripts. Here is a preview of the document, and the reason why we should follow the method, in her own words:



The QA environment frequently needs database refreshes, and hence space on its disks. The QA database is refreshed from Production, followed by a data purge, which leaves a lot of free space in the database. To release the free space we run the database shrink command, a lengthy, single-threaded I/O operation that also causes high data fragmentation. To avoid this resource- and time-consuming activity, we refresh the database with manual scripts: we create a new database and transfer tables, data, and other objects via scripts. The Generate Scripts option of a database generates a script for the complete database and transfers data using INSERT INTO (highly logged); to avoid this we use SELECT * INTO to transfer table definitions and data (minimally logged and fast, being a bulk operation), and then generate table objects (keys, constraints, triggers, indexes) via manual scripts and other database objects (views, stored procedures, functions, users, roles, schemas) via Generate Scripts.
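As a hedged illustration of the minimally logged copy she describes (the database and table names here are hypothetical):

-- copy table definition plus data in one bulk, minimally logged operation;
-- assumes the new QA database is in the SIMPLE or BULK_LOGGED recovery model
USE QANew
SELECT * INTO dbo.Orders FROM ProdCopy.dbo.Orders
-- keys, constraints, triggers, and indexes are re-created afterwards via scripts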

Sounds interesting? Want to download the complete document? Download Link

Please post your review in the comments section here.


-- Hemantgiri S. Goswami (http://www.sql-server-citation.com )

9/15/2011

Troubleshooting Oracle Linked Server Issues

Due to different business requirements, we often have to work with various RDBMS systems; Oracle and MS SQL Server are among the most widely used and popular. Sometimes we need to import/export data from/to SQL Server, and for that we use the Linked Server feature of MS SQL Server.
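For context, here is a hedged sketch of how such an Oracle linked server might be created; the name OraLnkSvr matches the error messages below, while the TNS alias is hypothetical:

-- create an Oracle linked server using the MSDAORA OLE DB provider
EXEC sp_addlinkedserver
     @server = 'OraLnkSvr',
     @srvproduct = 'Oracle',
     @provider = 'MSDAORA',
     @datasrc = 'ORCL'   -- TNS alias from tnsnames.ora (hypothetical)
-- queries then use four-part names, e.g. SELECT * FROM OraLnkSvr..SCOTT.EMP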

In one of my recent projects, a critical application has a job that pulls in data from an Oracle database; we have a DTS package and some jobs scheduled for this task. For the last couple of days we have observed the error messages below in the job history and while running ad-hoc queries. I am penning down the error messages and the solutions (rather, temporary workarounds) we found:

1) The operation could not be performed because the OLE DB provider 'MSDAORA' was unable to begin a distributed transaction.

==> In this case, restarting the MS DTC (Distributed Transaction Coordinator) service helped us.

2) The maximum number of active transactions that the MS DTC log file can accommodate has been exceeded.  You must increase the size of the MS DTC log file if you wish to initiate more concurrent transactions.

==> In this case, increasing the MS DTC log file size helped us.

[Screenshot: MS DTC log file size settings]

3) Server: Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "MSDAORA" for linked server "OraLnkSvr" reported an error. The provider ran out of memory.

4) Server: Msg 7330, Level 16, State 1, Line 1
Cannot fetch a row from OLE DB provider "MSDAORA" for linked server "OraLnkSvr".
OLE DB provider "MSDAORA" for linked server "OraLnkSvr" returned message "Out of memory.".

==> In cases 3 and 4, restarting the SQL Server service helped us.


Hope this helps.


-- Hemantgiri S. Goswami (http://www.sql-server-citation.com )

3/07/2011

Describe HA and DR strategy and win

Dear Readers,

I am very pleased to make an announcement about a contest. As you may be aware, I have written a book on SQL Server 2008 High Availability, which was published on 24th January 2011 by Packt Publishing. Now you have a chance to win a one-year subscription to the Packt digital library worth £150, or a paper copy of the SQL Server 2008 HA book. Here is the information.

Who is this book for?
This book is written for system administrators, experienced SQL developers who want to learn about SQL Server High Availability, and aspiring DBAs. It gives step-by-step instructions and prerequisites, with plenty of screenshots, to get you through the installation of SQL Server High Availability options like Clustering, Replication (Snapshot, Transactional, Peer-to-Peer, and Merge), Log Shipping, and Database Mirroring.

I have tried to include external references for further study wherever you might wish to read more advanced information on a particular topic. I have also included a few common issues and how to resolve them for every topic, i.e. how to troubleshoot common issues with Clustering, Replication, Log Shipping, and Database Mirroring.

Here is the detailed index for every chapter of this book: http://www.sql-server-citation.com/2011/01/sql-server-2008-high-availability-book.html

Where to buy?
You can purchase SQL Server 2008 High Availability from Packt Publishing in paper and eBook formats at https://www.packtpub.com/microsoft-sql-server-2008-high-availability/book, or from Amazon at http://www.amazon.com/Microsoft-Server-2008-High-Availability/dp/1849681228/ref=sr_1_1?ie=UTF8&s=books&qid=1299390127&sr=8-1



Contest Information:

Describe the best HA and DR solution you have designed or worked on, especially one spanning physically dispersed locations, and explain why.

The best answer will win a one-year subscription to the Packt digital library worth £150, and the second-best answer will win a paper copy of SQL Server 2008 HA.

The best answers will be judged by me and Satya Shyam K Jayanty (whom I admire as a guru); we will announce the winners by 25th April 2011.

Rules:
1. There will be two winners: one subscription and one paper copy of SQL Server 2008 HA
2. The contest will remain open until 5th April 2011
3. The Packt subscription will be free for 1 year

Update:
By mistake I had mentioned that my book was published on 24th January 2008; I did not notice it until Ashish Sharma drew my attention to it. I have corrected it now. Thank you Ashish :)

What are you waiting for? Start participating and be a lucky winner!



Regards
-- Hemantgiri S. Goswami | www.sql-server-citation.com

1/31/2011

Tip for Backing up User Databases

We all rely on database backups as the first resort to recover from damage caused by a disaster, a server crash, or any other event that requires us to restore our data to its previous state! For this we schedule a job, or use a database maintenance plan, to take Full, Differential, or Log backups, and we take special care and consideration when we design the backup strategy!

But what happens if, one fine day, you need your database backup file to work and it doesn't? What if it is corrupted, and you have only one Full backup for that day?

Today I am going to discuss the script that I posted in the download section a couple of days back. The script will not only perform a full backup of all user databases, it will also verify the validity of each backup file. This is a must-have where the backup strategy does not keep multiple copies of database backups, or where there is only a one- or two-day retention policy.

declare @int int, @dbname sysname, @maxdbid int
declare @bkpath varchar(128), @path varchar(256)
select @maxdbid = max(dbid) from master..sysdatabases
set @int = 1                      -- dbid numbering starts at 1 (master)
set @bkpath = 'C:\SQLDB\backup\'
USE master
while (@int <= @maxdbid)          -- <= so the last database is not skipped
begin
 -- look up the database name; the status filter (from the original script)
 -- skips databases that are offline/loading
 set @dbname = (select [name] from master..sysdatabases
                where dbid = @int and status != 1073807392)
 if (@dbname is not null and @dbname != 'tempdb')
  begin
   select @int as 'DBID', @dbname as 'Database'
   -- build a date-stamped file name, e.g. C:\SQLDB\backup\mydb20111231
   select @path = @bkpath + @dbname + CONVERT(VARCHAR(12), GETDATE(), 112)
   BACKUP DATABASE @dbname TO DISK = @path
   print @path
   -- confirm the backup file just written is readable and complete
   restore verifyonly from disk = @path
  end
 set @int = @int + 1
end

The restore verifyonly from disk = @path statement near the end of the script is what helps us overcome the above issue. This is not hidden code lying in a secret place; it is simply something we may have missed in our own scripts. I know most of you are already using this option to validate your database backups, but how many of you really are? Or, out of curiosity, do you use some other option?

PS: This is the very basic code that I wrote initially; you may modify and use the script according to your requirements, without any obligation.
Your suggestions are welcome!

Regards
- Hemantgiri S. Goswami