Thursday, February 24, 2011

Managing Transaction Logs in SQL Server

Each database in a SQL Server instance has a log that records all database modifications. The log is write-ahead: log records are written to the transaction log before the modified contents in the buffer cache are written to the data files. This enables the database to roll back or restore transactions in the event of hardware failure or application error. Because of the importance of its role, the transaction log is stored in one or more log files that are separate from the data files.

For each database, the transaction log can support any of the following operations:

  • Rolling back individual transactions if a ROLLBACK statement is issued or the database engine detects an error.
  • Rolling back incomplete transactions that result from server failure. The transactions are rolled back when SQL Server is restarted.
  • Recovering incomplete transactions written to the logs but not to the data files as a result of server failure. The transactions are written to the data files when SQL Server is restarted.
  • Rolling forward a restored database, filegroup, file, or page to the point of failure in the event of hardware failure. The transactions are rolled forward after the latest full and differential backups are applied.
  • Supporting transactional replication, database mirroring, and log shipping.

The file or files that make up the transaction log are divided into virtual log files, whose size and quantity in the physical log are determined by the database engine. The database engine also decides when, and which, virtual log files get truncated. You can, however, specify the minimum and maximum sizes of the physical log, as well as configure the growth increments used when expanding that file. In addition, you can add physical files to the log, delete files, increase the size of the log, and shrink the log.
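
If you want to see the virtual log files for yourself, you can use DBCC LOGINFO. It is undocumented (it shows up in the DBCC command list later in this post), so treat the output as a diagnostic peek rather than a supported interface:

DBCC LOGINFO;  -- run in the database you want to inspect; returns one row per virtual log file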

In this article, I explain how to perform these tasks so you can begin to manage your transaction logs, and I provide examples that demonstrate how each task works. For these examples, I used the following code to create the EmployeeDB database on a local instance of SQL Server 2008:

USE master;

IF EXISTS (SELECT name FROM sys.databases WHERE name = 'EmployeeDB')
  DROP DATABASE EmployeeDB;

CREATE DATABASE EmployeeDB
  ON (NAME = EmployeeDB_dat, FILENAME = 'C:\SqlData\EmployeeDb.mdf')
  LOG ON (NAME = EmployeeDB_log, FILENAME = 'C:\SqlData\EmployeeDb.ldf');

Notice that I created the database files in a location other than the default used by SQL Server. If you run this code, you can locate the database files wherever you think is appropriate. After I created the database, I used the following SELECT…INTO statement to retrieve data from the AdventureWorks2008 database and create the Employees table:

USE EmployeeDB;

IF OBJECT_ID('Employees', 'U') IS NOT NULL
  DROP TABLE dbo.Employees;

SELECT BusinessEntityID, FirstName, LastName, JobTitle, PhoneNumber,
  EmailAddress, AddressLine1, AddressLine2, City, StateProvinceName,
  PostalCode, CountryRegionName
INTO dbo.Employees
FROM AdventureWorks2008.HumanResources.vEmployee;

You do not have to use this code to follow the examples in this article, but it helps to have a small test database that you can experiment with as you learn about transaction logs. If you plan to use a database other than the one shown here, simply insert your database name, where appropriate, in the code samples I provide.

Configuring the Recovery Model

Each SQL Server database includes the Recovery Model property, which determines how transactions are logged, whether the transaction log can be backed up, and the type of restore operations permitted. By default, a new database inherits the recovery model from the model database. However, you can override the default setting by assigning a different recovery model.
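
To check which recovery model each database on the instance is currently using, you can query sys.databases (the recovery_model_desc column is available in SQL Server 2005 and later):

SELECT name, recovery_model_desc
FROM sys.databases;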

You can configure a SQL Server database with any one of the following three recovery models.

  • Simple: In this model, transaction log backups are not permitted, which means you do not have the administrative overhead associated with maintaining transaction log backups. The model also automatically reclaims log space, so there is almost no need to manage the transaction log space. However, this is also the riskiest of the models—a database can be restored only to the point of its last backup. Transactions that have been performed since the last backup are lost. This model is generally used for the system databases, and for both testing and development, although it is sometimes appropriate for a read-only database such as a data warehouse. In this model, some operations are only minimally logged.
  • Full: The log files can and should be backed up, as they provide full recoverability to a specific point in time, which makes this model far less risky than the Simple model. In the Full recovery model, all operations are fully logged, including bulk import operations. The Full recovery model is generally the model used for production environments.
  • Bulk Logged: This model is intended as an adjunct to the Full recovery model because operations such as bulk import are only minimally logged. For example, you might want to bulk load data and you’re not concerned about logging these transactions because you can reload the data if necessary. In such cases, you can set the recovery model to Bulk Logged while importing the data, and then change the setting back to Full when you are finished. (Note that you should perform a full backup after you change the setting back to Full.)

You can switch the recovery model on a database by running an ALTER DATABASE statement and specifying the SET RECOVERY clause, as shown in the following example:

USE master;

ALTER DATABASE EmployeeDB
SET RECOVERY FULL;

As you can see, I am altering the EmployeeDB database and setting the recovery model to FULL. Note, however, that by default the model database is configured with the Full recovery model, which means that the EmployeeDB was automatically configured with the Full model because it inherited the setting from the model database. As a result, if the default recovery model wasn’t changed in the model database of your instance of SQL Server, the ALTER DATABASE example above did not change the setting. However, you should also note that if you switch a database from the Simple model to the Full model, there are other steps you must sometimes take, such as doing a full database backup. The topic "Considerations for Switching from the Simple Recovery Model" in SQL Server Books Online describes what steps you might need to take when switching from the Simple model to Full or Bulk Logged.
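
To make the Bulk Logged round trip described above concrete, here is a minimal sketch; the bulk load step is a placeholder for whatever import you run, and the backup path is the one used later in this article:

ALTER DATABASE EmployeeDB SET RECOVERY BULK_LOGGED;

-- perform the minimally logged work here, such as a BULK INSERT or SELECT...INTO

ALTER DATABASE EmployeeDB SET RECOVERY FULL;

-- back up the database so the backup chain covers the bulk-loaded data
BACKUP DATABASE EmployeeDB
TO DISK = 'E:\DbBackup\EmployeeDB_dat.bak';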

You can also set the recovery model in SQL Server Management Studio. In Object Explorer, right-click the database name and then click Properties. In the Database Properties dialog box, click the Options page and then set the Recovery model property.

Monitoring the Log File

When maintaining a database’s transaction log, you’ll often want to retrieve information about the log so you can verify its settings or track how much log space is being used. One way to find information about the log is by using the sys.database_files catalog view. The view returns details about database files, including the type of file, the current size of the file, and how large the file is permitted to grow.

In the following example, I use the sys.database_files catalog view to retrieve data about the log file associated with the EmployeeDB database:

USE EmployeeDB;

SELECT name,
  size,     -- in 8-KB pages
  max_size, -- in 8-KB pages
  growth,
  is_percent_growth
FROM sys.database_files
WHERE type_desc = 'LOG';

The statement returns the current size of the file (in 8-KB pages), the maximum size that the file is permitted to grow (also in 8-KB pages), the growth rate, and the is_percent_growth flag, which determines how the growth rate should be interpreted. If the flag is set to 0, the growth rate is a number of 8-KB pages. If the flag is set to 1, the growth rate is a percentage.

The above SELECT statement returns results similar to the following:

Name            size  max_size   growth  is_percent_growth
--------------  ----  ---------  ------  -----------------
EmployeeDB_log  128   268435456  10      1

As the results show, the statement returns only one row. That’s because the EmployeeDB database has only one log file associated with it. The results also indicate that the current size of the EmployeeDB_log file is 128 8-KB pages. However, the file can grow to 268,435,456 8-KB pages at a growth increment of 10%.
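
Because these values are counts of 8-KB pages, you can let the query do the conversion to megabytes for you (128 pages x 8 KB = 1 MB); here is a variation of the query above:

SELECT name,
  size * 8 / 1024 AS size_mb,                           -- 8-KB pages converted to MB
  CAST(max_size AS BIGINT) * 8 / 1024 AS max_size_mb,   -- cast avoids int overflow on large caps
  growth,
  is_percent_growth
FROM sys.database_files
WHERE type_desc = 'LOG';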

You can also use the DBCC SQLPERF statement to return information about the transaction logs for each database in a SQL Server instance. To retrieve log data, you must specify the LOGSPACE keyword in parentheses, as shown in the following example:

DBCC SQLPERF(LOGSPACE);

The statement returns the log size in MB, the percentage of log space used, and the status of the log for every database on your SQL Server instance. The following results show the information that the DBCC SQLPERF statement returns for the EmployeeDB database:

Database Name  Log Size (MB)  Log Space Used (%)  Status
-------------  -------------  ------------------  ------
EmployeeDB     0.9921875      40.05906            0

In this case, the EmployeeDB log is about 1 MB in size, and about 40% of the log space is being used.
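
Because DBCC SQLPERF(LOGSPACE) always reports on every database, a handy trick when you want just one database's row is to capture the output into a temp table and filter it. A sketch; the column list mirrors the four columns shown above:

CREATE TABLE #logspace (
  database_name SYSNAME,
  log_size_mb FLOAT,
  log_space_used_pct FLOAT,
  status INT
);

INSERT INTO #logspace
EXEC ('DBCC SQLPERF(LOGSPACE)');

SELECT *
FROM #logspace
WHERE database_name = 'EmployeeDB';

DROP TABLE #logspace;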

You can also generate a report in SQL Server Management Studio that graphically displays data similar to the results of the DBCC SQLPERF statement. To generate the report, right-click the name of the database in Object Explorer, point to Reports, then point to Standard Reports, and finally click Disk Usage.

Backing Up the Log File

If a database is configured with the Full or Bulk Logged recovery model, you should back up the transaction log regularly so it can be truncated to free up inactive log space. The backup can also be used (along with the database backups) to restore the database in the event of failure.

Before a log file can be backed up, a full database backup must be performed. For instance, before I back up the log file I am using for the examples in this article, I will run the following BACKUP DATABASE statement on the EmployeeDB database:

BACKUP DATABASE EmployeeDB
TO DISK = 'E:\DbBackup\EmployeeDB_dat.bak';

Note that if you run this code, you should make sure the TO DISK location exists, or specify a different location.

After I backed up the database, I ran the following data modification statements so the log would contain transactions not included in the backup:

USE EmployeeDB;

UPDATE Employees
SET JobTitle = 'To be determined';

UPDATE Employees
SET CountryRegionName = 'US'
WHERE CountryRegionName = 'United States';

DELETE Employees
WHERE BusinessEntityID > 5;

I then reran the DBCC SQLPERF statement to view the amount of log space being used. The statement returned the following results:

Database Name  Log Size (MB)  Log Space Used (%)  Status
-------------  -------------  ------------------  ------
EmployeeDB     0.9921875      64.41929            0

As you can see, the percentage of log space being used increased from about 40% to nearly 65%.

After the database has been backed up, you can back up the transaction log. To perform a transaction log backup, use the BACKUP LOG statement and specify the target destination for the backup files, as shown in the following example:

-- back up transaction log
BACKUP LOG EmployeeDB
TO DISK = 'E:\LogBackup\EmployeeDB_log.bak';

Again, if you run this code, make sure the TO DISK location exists, or specify a different location.

Notice that I include the TO DISK clause to specify the file destination. However, the BACKUP statement supports other options for backing up data. See the topic “BACKUP (Transact-SQL)” in SQL Server Books Online for more information.

After you back up the transaction log, the SQL Server database engine automatically truncates inactive log space. (Truncating a log file removes inactive virtual log files, but does not reduce the file size. In addition, you cannot specifically truncate a log. You can, however, shrink the file, which does reduce the size. I explain how to shrink a log file later in the article). To verify whether the log has been truncated, run the DBCC SQLPERF statement again. This time, the results should be similar to the following:

Database Name  Log Size (MB)  Log Space Used (%)  Status
-------------  -------------  ------------------  ------
EmployeeDB     0.9921875      44.88189            0

Now the percentage of log space being used is back down around 45%.

Modifying a Log File

You can use the ALTER DATABASE statement to modify a log file. You must specify the MODIFY FILE clause, along with the appropriate options. In addition to specifying the logical name of the log file, you can define the following three arguments:

  • SIZE: The new size of the log file. You can specify the size as KB, MB, GB, or TB, such as 10 MB or 1 GB. The new size must be greater than the current size; otherwise, you'll receive an error when you run the statement.
  • MAXSIZE: The maximum size that the file can grow to. You can specify the size as KB, MB, GB, or TB. If you do not specify a maximum size, the file will grow until it fills the disk (assuming the space is needed).
  • FILEGROWTH: The growth increment used when expanding the file. You can specify the size as KB, MB, GB, or TB, or you can specify the size as a percentage, such as 10%. If a number is specified without a suffix, MB is used. If no number is specified, 10% is used. A value of 0 indicates that no automatic growth is allowed.

The following ALTER DATABASE statement modifies the EmployeeDB_log file in the EmployeeDB database:

-- modify log file
ALTER DATABASE EmployeeDB
MODIFY FILE
(
  NAME = EmployeeDB_log,
  SIZE = 2MB,
  MAXSIZE = 200MB,
  FILEGROWTH = 10MB
);

As the statement shows, after I specify the logical name of the log file, I set the new size for the file (2 MB), the maximum size (200 MB), and the growth increment (10 MB).

After you run the ALTER DATABASE statement, you can then query the sys.database_files catalog view to verify the changes. Your results should be similar to the following:

Name            size  max_size  growth  is_percent_growth
--------------  ----  --------  ------  -----------------
EmployeeDB_log  256   25600     1280    0

The file size is now 256 8-KB pages, the maximum size is 25,600 8-KB pages, and the growth increment is 1,280 8-KB pages.

Shrinking a Log File

As you’ll recall, in order to truncate the transaction log, you must first back up the log. The database engine then automatically truncates the inactive records. However, truncating the log doesn’t reduce its size. Instead, you must shrink the log file, which removes one or more inactive virtual log files.

To shrink a log file, you can run a DBCC SHRINKFILE statement that specifies the name of the log file and the target size, in MB. For example, the following DBCC SHRINKFILE statement shrinks the EmployeeDB_log file:

-- shrink log file
DBCC SHRINKFILE (EmployeeDB_log, 1);

The target size in this statement is 1 MB (128 8-KB pages). When you run the statement, the database engine will shrink the file down to that size, but only if there are enough inactive virtual log files.

After you run the statement, you can verify the extent to which a file was reduced by querying the sys.database_files catalog view, which should return results similar to the following:

Name            size  max_size  growth  is_percent_growth
--------------  ----  --------  ------  -----------------
EmployeeDB_log  128   25600     1280    0

As you can see, the size has been reduced from 256 8-KB pages to 128. If the database engine cannot free up the space, it issues a message that suggests steps you can take to free up log space. Follow the suggested steps and then rerun the DBCC SHRINKFILE statement.
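
In the Full recovery model, the suggested step is typically another log backup to free additional inactive space before shrinking again; a minimal sketch, reusing the backup location from earlier:

BACKUP LOG EmployeeDB
TO DISK = 'E:\LogBackup\EmployeeDB_log.bak';

DBCC SHRINKFILE (EmployeeDB_log, 1);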

Adding or Deleting a Log File

If you need to enlarge your transaction log, one method you can use is to add a file to the log.

You can do this by using the ADD LOG FILE clause of the ALTER DATABASE statement. In addition to specifying the logical and physical names of the new log file, you can define the following three arguments:

  • SIZE: The initial size of the log file. You can specify the size as KB, MB, GB, or TB, such as 10 MB or 1 GB. If you do not specify a size when you add the file, the database engine uses the default size of 1 MB.
  • MAXSIZE: The maximum size that the file can grow to. You can specify the size as KB, MB, GB, or TB. If you do not specify a maximum size, the file will grow until it fills the disk (assuming the space is needed).
  • FILEGROWTH: The growth increment used when expanding the file. You can specify the size as KB, MB, GB, or TB, or you can specify the size as a percentage, such as 10%. If a number is specified without a suffix, MB is used. If no number is specified, 10% is used. A value of 0 indicates that no automatic growth is allowed.

The following example adds the EmployeeDB_log2 file to the EmployeeDB transaction log:

ALTER DATABASE EmployeeDB
ADD LOG FILE
(
  NAME = EmployeeDB_log2,
  FILENAME = 'C:\SqlData\EmployeeDB2.ldf',
  SIZE = 2MB,
  MAXSIZE = 50MB,
  FILEGROWTH = 10%
);

Notice that I first specify the logical and physical file names, and then define the initial size, maximum size, and growth increment. After I run this statement, I can confirm that the file has been added to the log by querying the sys.database_files catalog view (using the same query as I used previously), which returns the following results:

Name             size  max_size   growth  is_percent_growth
---------------  ----  ---------  ------  -----------------
EmployeeDB_log   128   268435456  10      1
EmployeeDB_log2  256   6400       10      1

As the results indicate, the EmployeeDB_log2 file has been added to the database with an initial size of 256 8-KB pages, a maximum size of 6,400 8-KB pages, and a growth increment of 10%.

You can also use the ALTER DATABASE statement to remove a log file by specifying the REMOVE FILE clause, as shown in the following example:

ALTER DATABASE EmployeeDB
REMOVE FILE EmployeeDB_log2;

To determine whether the file has been removed, you can once again query the sys.database_files catalog view, which returns the following results:

Name             size  max_size   growth  is_percent_growth
---------------  ----  ---------  ------  -----------------
EmployeeDB_log   128   268435456  10      1
EmployeeDB_log2  1     6400       10      1

Notice that the EmployeeDB_log2 file is still listed, but its size has been set to 1 8-KB page. The physical file has been deleted, but the logical file is still associated with the database; you must back up the transaction log before the logical file is removed.
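
For example, reusing the log backup location from earlier in the article:

BACKUP LOG EmployeeDB
TO DISK = 'E:\LogBackup\EmployeeDB_log.bak';

After you back up the log, you can again query the sys.database_files catalog view. This time your results should look similar to the following: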

Name            size  max_size   growth  is_percent_growth
--------------  ----  ---------  ------  -----------------
EmployeeDB_log  128   268435456  10      1

As you can see, the logical file has been removed.

Conclusion

Clearly, transaction logs play an important role in SQL Server databases, and the information above should provide you with an introduction to working with them. What I have not covered, however, are the ways the transaction log is used to support transactional replication, database mirroring, and log shipping, or how to use the transaction log and its backups to restore a database; those topics each deserve their own article. You should now have a basic foundation in transaction logs and be able to start working with them, but I highly recommend that you also check out the various topics on transaction logs in SQL Server Books Online, as well as other sources on the subject, so you have a complete picture of how the logs work and how they're best managed.

Get a list of all the DBCC commands, both documented and undocumented.

With trace flag 2588 enabled (the flag is 2520 on SQL Server 2000), DBCC HELP also covers the undocumented commands:

DBCC TRACEON (2588);
DBCC HELP ('?');

shows the list of DBCC commands, then

DBCC HELP ('command_name')

shows the syntax of an individual command.
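
For example, to see the syntax for the CHECKDB command:

DBCC HELP ('CHECKDB');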

If you run DBCC HELP on all the commands you end up with this list:

DBCC activecursors [(spid)]

DBCC addextendedproc (function_name, dll_name)

DBCC addinstance (objectname, instancename)

DBCC adduserobject (name)

DBCC auditevent (eventclass, eventsubclass, success, loginname
, rolename, dbusername, loginid)

DBCC autopilot (typeid, dbid, tabid, indid, pages [,flag])

DBCC balancefactor (variance_percent)

DBCC bufcount [(number_of_buffers)]

DBCC buffer ( {'dbname' | dbid} [, objid [, number [, printopt={0|1|2} ]
[, dirty | io | kept | rlock | ioerr | hashed ]]])

DBCC bytes ( startaddress, length )

DBCC cachestats

DBCC callfulltext

DBCC checkalloc [('database_name'[, NOINDEX | REPAIR])]
[WITH NO_INFOMSGS[, ALL_ERRORMSGS][, ESTIMATEONLY]]

DBCC checkcatalog [('database_name')] [WITH NO_INFOMSGS]

DBCC checkconstraints [( 'tab_name' | tab_id | 'constraint_name' | constraint_id )]
[WITH ALL_CONSTRAINTS | ALL_ERRORMSGS]

DBCC checkdb [('database_name'[, NOINDEX | REPAIR])]
[WITH NO_INFOMSGS[, ALL_ERRORMSGS]
[, PHYSICAL_ONLY][, ESTIMATEONLY][,DBCC TABLOCK]

DBCC checkdbts (dbid, newTimestamp)

DBCC checkfilegroup [( [ {'filegroup_name' | filegroup_id} ]
[, NOINDEX] )] [WITH NO_INFOMSGS
[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC checkident ('table_name'[, { NORESEED | {RESEED [, new_reseed_value] } } ] )

DBCC checkprimaryfile ( {'FileName'} [, opt={0|1|2|3} ])

DBCC checktable ('table_name'[, {NOINDEX | index_id | REPAIR}])
[WITH NO_INFOMSGS[, ALL_ERRORMSGS]
[, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC cleantable ('database_name'|database_id, 'table_name'|table_id,[batch_size])

DBCC cacheprofile [( {actionid} [, bucketid])

DBCC clearspacecaches ('database_name'|database_id,
'table_name'|table_id, 'index_name'|index_id)

DBCC collectstats (on | off)

DBCC concurrencyviolation (reset | display | startlog | stoplog)

DBCC config

DBCC cursorstats ([spid [,'clear']])

DBCC dbinfo [('dbname')]

DBCC dbrecover (dbname [, IgnoreErrors])

DBCC dbreindex ('table_name' [, index_name [, fillfactor ]]) [WITH NO_INFOMSGS]

DBCC dbreindexall (db_name/db_id, type_bitmap)

DBCC dbrepair ('dbname', DROPDB [, NOINIT])

DBCC dbtable [({'dbname' | dbid})]

DBCC debugbreak

DBCC deleteinstance (objectname, instancename)

DBCC des [( {'dbname' | dbid} [, {'objname' | objid} ])]

DBCC detachdb [( 'dbname' )]

DBCC dropcleanbuffers

DBCC dropextendedproc (function_name)

DBCC dropuserobject ('object_name')

DBCC dumptrigger ({'BREAK', {0 | 1}} | 'DISPLAY' | {'SET', exception_number}
| {'CLEAR', exception_number})

DBCC errorlog

DBCC extentinfo [({'database_name'| dbid | 0}
[,{'table_name' | table_id} [, {'index_name' | index_id | -1}]])]

DBCC fileheader [( {'dbname' | dbid} [, fileid])

DBCC fixallocation [({'ADD' | 'REMOVE'},
{'PAGE' | 'SINGLEPAGE' | 'EXTENT' | 'MIXEDEXTENT'}
, filenum, pagenum [, objectid, indid])

DBCC flush ('data' | 'log', dbid)

DBCC flushprocindb (database)

DBCC free dll_name (FREE)

DBCC freeproccache

dbcc freeze_io (db)

dbcc getvalue (name)

dbcc icecapquery ('dbname', stored_proc_name
[, #_times_to_icecap (-1 infinite, 0 turns off)])
Use 'dbcc icecapquery (printlist)' to see list of SP's to profile.
Use 'dbcc icecapquery (icecapall)' to profile all SP's.

dbcc incrementinstance (objectname, countername, instancename, value)

dbcc ind ( { 'dbname' | dbid }, { 'objname' | objid }, { indid | 0 | -1 | -2 } )

DBCC indexdefrag ({dbid | dbname | 0}, {tableid | tablename}, {indid |indname})

DBCC inputbuffer (spid)

DBCC invalidate_textptr (textptr)

DBCC invalidate_textptr_objid (objid)

DBCC iotrace ( { 'dbname' | dbid | 0 | -1 }
, { fileid | 0 }, bufsize, [ { numIOs | -1 }
[, { timeout (sec) | -1 } [, printopt={ 0 | 1 }]]] )

DBCC latch ( address [, 'owners'] [, 'stackdumps'])

DBCC lock ([{'DUMPTABLE' | 'DUMPSTATS' | 'RESETSTATS' | 'HASH'}] |
[{'STALLREPORTTHESHOLD', stallthreshold}])

DBCC lockobjectschema ('object_name')

DBCC log ([dbid[,{0|1|2|3|4}[,['lsn','[0x]x:y:z']|['numrecs',num]|['xdesid','x:y']
|['extent','x:y']|['pageid','x:y']|['objid',{x,'y'}]|['logrecs',
{'lop'|op}...]|['output',x,['filename','x']]...]]])

DBCC loginfo [({'database_name' | dbid})]

DBCC matview ({'PERSIST' | 'ENDPERSIST' | 'FREE' | 'USE' | 'ENDUSE'})

DBCC memobjlist [(memory object)]

DBCC memorymap

DBCC memorystatus

DBCC memospy

DBCC memusage ([IDS | NAMES], [Number of rows to output])

DBCC monitorevents ('sink' [, 'filter-expression'])

DBCC newalloc - please use checkalloc instead

DBCC no_textptr (table_id , max_inline)

DBCC opentran [({'dbname'| dbid})] [WITH TABLERESULTS[,NO_INFOMSGS]]

DBCC outputbuffer (spid)

DBCC page ( {'dbname' | dbid}, filenum, pagenum
[, printopt={0|1|2|3} ][, cache={0|1} ])

DBCC perflog

DBCC perfmon

DBCC pglinkage (dbid, startfile, startpg, number, printopt={0|1|2}
, targetfile, targetpg, order={1|0})

DBCC pintable (database_id, table_id)

DBCC procbuf [({'dbname' | dbid}[, {'objname' | objid}
[, nbufs[, printopt = { 0 | 1 } ]]] )]

DBCC proccache

DBCC prtipage (dbid, objid, indexid [, [{{level, 0}
| {filenum, pagenum}}] [,printopt]])

DBCC pss [(uid[, spid[, printopt = { 1 | 0 }]] )]

DBCC readpage ({ dbid, 'dbname' }, fileid, pageid
, formatstr [, printopt = { 0 | 1} ])

DBCC rebuild_log (dbname [, filename])

DBCC renamecolumn (object_name, old_name, new_name)

DBCC resource

DBCC row_lock (dbid, tableid, set) - Not Needed

DBCC ruleoff ({ rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC ruleon ({ rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC setcpuweight (weight)

DBCC setinstance (objectname, countername, instancename, value)

DBCC setioweight (weight)

DBCC show_statistics ('table_name', 'target_name')

DBCC showcontig (table_id | table_name [, index_id | index_name]
[WITH FAST, ALL_INDEXES, TABLERESULTS [,ALL_LEVELS]])

DBCC showdbaffinity

DBCC showfilestats [(file_num)]

DBCC showoffrules

DBCC showonrules

DBCC showtableaffinity (table)

DBCC showtext ('dbname', {textpointer | {fileid, pageid, slotid[,option]}})

DBCC showweights

DBCC shrinkdatabase ({dbid | 'dbname'}, [freespace_percentage
[, {NOTRUNCATE | TRUNCATEONLY}]])

DBCC shrinkfile ({fileid | 'filename'}, [compress_size
[, {NOTRUNCATE | TRUNCATEONLY | EMPTYFILE}]])

DBCC sqlmgrstats

DBCC sqlperf ({IOSTATS | LRUSTATS | NETSTATS | RASTATS [, CLEAR]}
| {THREADS} | {LOGSPACE})

DBCC stackdump [( {uid[, spid[, ecid]} | {threadId, 'THREADID'}] )]

DBCC tab ( dbid, objid )

DBCC tape_control {'query' | 'release'}[,('\\.\tape')]

DBCC tec [( uid[, spid[, ecid]] )]

DBCC textall [({'database_name'|database_id}[, 'FULL' | FAST] )]

DBCC textalloc ({'table_name'|table_id}[, 'FULL' | FAST])

DBCC thaw_io (db)

DBCC traceoff [( tracenum [, tracenum ... ] )]

DBCC traceon [( tracenum [, tracenum ... ] )]

DBCC tracestatus (trace# [, ...trace#])

DBCC unpintable (dbid, table_id)

DBCC updateusage ({'database_name'| 0} [, 'table_name' [, index_id]])
[WITH [NO_INFOMSGS] [,] COUNT_ROWS]

DBCC upgradedb (db)

DBCC usagegovernor (command, value)

DBCC useplan [(number_of_plan)]

DBCC useroptions

DBCC wakeup (spid)

DBCC writepage ({ dbid, 'dbname' }, fileid, pageid, offset, length, data)

Wednesday, February 23, 2011

Xsl include vs import

I've recently been asked by a friend to explain the difference between include and import and how it affects the templates being matched in these files. The answer is very simple, but you must also understand that there are other ways to force which template will be called when there are identical matches.

First of all, let's explain "order of precedence". Basically, it tells the XSLT transform engine the order in which templates will be prioritized. If two templates in the same document match the same path, then we cannot guarantee which will take precedence unless we specify the priority attribute on the template. (The value of this attribute is a number, and the higher value takes precedence.)

When we import an xsl file using xsl:import, this is different. The file being imported always has a lower precedence than the current file. This means that if you have a template matching //Donkey and import a file called specialdonkey.xsl that also contains a template matching //Donkey, then the one in the current file will always be called over the one in the specialdonkey.xsl file.

When you include a file using xsl:include, it has exactly the same order of precedence as the current file; it is as if the included file were copied and pasted into the current file. As we stated earlier, for templates in the same file that match the same XPath and do not have a priority attribute, we cannot guarantee which will be called. Therefore, if you plan on including an xsl file, make sure that it does not have conflicting templates; if it does, either use the priority attribute or take a look at the mode trick below.

/********Brief Description **********/

There is a common problem in XSLT where you have two XML elements with the same XPath (whether absolute or relative) that end up matching the same template. In many scenarios you do not have the ability to modify the XML so that it suits your XSLT, and you must therefore turn to the mode trick to create separate templates.

For example, say you have the following XML:

<Shop>
  <FirstCartProduct>
    <Product id="123" name="Apples" weight="2.4kg" price="$5.99"/>
  </FirstCartProduct>
  <CartReferences>
    <Product ref="98342kjsd" id="123"/>
  </CartReferences>
</Shop>

As you can see, you could enter the FirstCartProduct template and do an apply-templates on Product so that you display the products; but then you would enter CartReferences, do an apply-templates there too, and end up with buggy output, since these Product elements do not have the same fields.

What we can do to resolve this issue is give the templates a mode. By supplying the mode attribute with a unique identifier as its value, you can have multiple templates that match the same XPath but are applied from different sources.

For example, in the FirstCartProduct template, we could do an <xsl:apply-templates select="Product" mode="DisplayProduct"/>, and then we would have a template that can only be applied from there (which means that the one in CartReferences would no longer be applied) by doing this:

<xsl:template match="Product" mode="DisplayProduct">
  Name: <xsl:value-of select="@name"/>
</xsl:template>

With the distinctSection key below, there is no restriction on where you start; you can match from any part of the tree. You only need to set, in the use attribute, the node whose values should be made distinct; the key automatically collects all the matching XML nodes and groups them by the values mapped in use. Then, using generate-id(), you can easily get the distinct values from your XML.


Here is an example I keep for reference:

<xsl:key name="distinctSection"
  match="elements/element[@type='Custom' and @title='Pay_Premium']/custom"
  use="./pansectitle"/>

<xsl:for-each select="/site/elements/element[@type='Custom']/custom[generate-id() = generate-id(key('distinctSection', ./pansectitle))]">
  <xsl:value-of select="position()"/> : <xsl:value-of select="pantitle"/> [<xsl:value-of select="pansectitle"/>]
</xsl:for-each>

Tuesday, February 22, 2011

Set Max 85 Char in Xsl Substring

I want to cap the visible string length at 85 characters: anything beyond 85 characters should not be shown, and the string must not be cut off in the middle of a word.

<xsl:value-of select="substring($bodyContent, 1, 85 + string-length(substring-before(substring($bodyContent, 86), ' ')))" disable-output-escaping="yes"/>

This takes the first 85 characters and then extends the cut to the end of the word in progress: substring($bodyContent, 86) is the remainder of the string, and substring-before(..., ' ') is the piece of that remainder up to the next space.

Thursday, February 17, 2011

Could not load file or assembly 'AjaxControlToolkit' or one of its dependencies.

I often get this type of error while publishing sites: "Could not load file or assembly 'AjaxControlToolkit' or one of its dependencies." For example:

Line 1:  <%@ Page Language="C#" AutoEventWireup="true" CodeFile="index.aspx.cs" Inherits="index" %>
Line 2:
Line 3:  <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %>
Line 4:
Line 5:


To resolve this issue, I followed these steps; maybe they will be helpful to others as well:

1. Register the assembly by using the following code:

<%@ Register Assembly="AjaxControlToolkit"
Namespace="AjaxControlToolkit" TagPrefix="ajaxControl" %>


If your page already contains a Register directive like the one in the error above, this first step is done.

2. Copy the AjaxControlToolkit.dll into the /bin folder of the web site application, or download the library file from the toolkit's download page.

3. Make sure you have included a ScriptManager in your pages.

4. If the problem still isn't resolved, add or replace this line in your config file:
<add assembly="System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>

Tuesday, February 15, 2011

SQL SERVER – Query to Find ByteSize of All the Tables in Database

I am writing here four ways to find database table sizes.

Method 1:
---------------------------------------------------

SELECT
  CASE WHEN (GROUPING(sob.name) = 1) THEN 'All_Tables'
  ELSE ISNULL(sob.name, 'unknown') END AS Table_name,
  SUM(sys.length) AS Byte_Length
FROM sysobjects sob, syscolumns sys
WHERE sob.xtype = 'u' AND sys.id = sob.id
GROUP BY sob.name WITH CUBE


Method 2:

---------------------------------------------------
Ever wonder how big a table really is in your database? You know there are a million rows in the table, but how much space is that really taking?

SQL Server provides a built-in stored procedure that you can run to easily show the size of a table, including the size of the indexes… which might surprise you.
Syntax: sp_spaceused 'Tablename'
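
For example (dbo.Employees here is just a placeholder; use any table in the current database):

EXEC sp_spaceused 'dbo.Employees';

The output includes the table's row count plus its reserved, data, index, and unused space.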

Method 3:
---------------------------------------------------
Actually, SQL Server gives you everything you need with its stored procedure sp_spaceused. Unfortunately, this SP does not support iterating over all tables in a database, so we need to leverage another (undocumented) stored procedure, sp_msForEachTable.

SET NOCOUNT ON 
DBCC UPDATEUSAGE(0) 
-- DB size.
EXEC sp_spaceused

-- Table row counts and sizes.
CREATE TABLE #t 
( 
    [name] NVARCHAR(128),
    [rows] CHAR(11),
    reserved VARCHAR(18), 
    data VARCHAR(18), 
    index_size VARCHAR(18),
    unused VARCHAR(18)
) 

INSERT #t EXEC sp_msForEachTable 'EXEC sp_spaceused ''?''' 

SELECT *
FROM   #t

-- # of rows.
SELECT SUM(CAST([rows] AS int)) AS [rows]
FROM   #t
 
DROP TABLE #t 
 
 
Method 4:
---------------------------------------------------
CREATE PROCEDURE GetAllTableSizes
AS
/*
Obtains spaced used data for ALL user tables in the database
*/
DECLARE @TableName VARCHAR(100) --For storing values in the cursor
--Cursor to get the name of all user tables from the sysobjects listing
DECLARE tableCursor CURSOR
FOR
select [name]
from dbo.sysobjects
where OBJECTPROPERTY(id, N'IsUserTable') = 1
FOR READ ONLY
--A procedure level temp table to store the results
CREATE TABLE #TempTable
(
tableName varchar(100),
numberofRows varchar(100),
reservedSize varchar(50),
dataSize varchar(50),
indexSize varchar(50),
unusedSize varchar(50)
)
--Open the cursor
OPEN tableCursor
--Get the first table name from the cursor
FETCH NEXT FROM tableCursor INTO @TableName
--Loop until the cursor was not able to fetch
WHILE (@@Fetch_Status >= 0)
BEGIN
--Dump the results of the sp_spaceused query to the temp table
INSERT #TempTable
EXEC sp_spaceused @TableName
--Get the next table name
FETCH NEXT FROM tableCursor INTO @TableName
END
--Get rid of the cursor
CLOSE tableCursor
DEALLOCATE tableCursor
--Select all records so we can use the results
SELECT *
FROM #TempTable
--Final cleanup!
DROP TABLE #TempTable
GO

Exec GetAllTableSizes

Wednesday, February 2, 2011

Bind Repeater from SQL DataSource

<asp:Repeater ID="Repeater1" runat="server" DataSourceID="SqlDataSource1" 
        onitemdatabound="Repeater1_ItemDataBound">
    <HeaderTemplate>
        <table border="1" cellpadding="5px">
    </HeaderTemplate>

    <ItemTemplate>
            <!-- Image -->
            <tr>
                <td colspan="2">
                    <asp:Image ID="Image1" runat="server" ImageUrl='<%# Eval("ImgId", "~/Image.aspx?id={0}") %>' />
                </td>
            </tr>
            <!-- Message/Date -->
            <tr>
                <td>
                    <%# Eval("ImgMessage") %>
                </td>
                <td>
                    <%# Eval("ImgDate", "{0:d}") %>
                </td>
            </tr>
    </ItemTemplate>

    <FooterTemplate>
        </table>
    </FooterTemplate>
</asp:Repeater>
<asp:SqlDataSource ID="SqlDataSource1" runat="server" 
    ConnectionString="<%$ ConnectionStrings:ConnectionString %>" 
    SelectCommand="SELECT * FROM [Images]"></asp:SqlDataSource>