MASTERING SQL SERVER 2000
Chapter 25 Locking
Concurrency Problems that Locking Solves
1. Lost Updates: updates that are lost because one transaction updates some rows while another transaction updates the same rows, and one write silently overwrites the other.
2. Uncommitted Dependencies: these are dirty reads, i.e. reading records that another transaction is still updating and has not yet committed.
3. Inconsistent Analysis: related to the uncommitted-dependencies problem; aggregate functions are applied to records that are currently being updated, so the same query returns different results at different times.
4. Phantom Reads: reading data while rows are being inserted, so mysterious rows appear in the next read that were not present at the time of the first read.
Locking Strategies
1. Optimistic: assumes a low chance of concurrency problems and uses low granularity.
2. Pessimistic: assumes a high chance of concurrency problems and uses high granularity.
Locking Mechanics
1. Granularity: the size of the object on which the lock is applied; the larger the object, the higher the granularity.
SQL Server provides the following six levels of granularity:
(i) RID: row-level lock
(ii) Key: key-level lock (a row within an index)
(iii) Page lock (a page is 8 KB)
(iv) Extent lock (an extent is 8 pages)
(v) Table lock
(vi) Database lock (this lock is rare)
Higher granularity significantly reduces locking overhead because the server does not have to keep track of many individual locks, but it also decreases concurrency, resulting in longer waiting times for other transactions.
2. Locking Modes: not all locks are of a similar nature in SQL Server; some modes are more flexible than others (a short sketch follows the list).
The main locking modes are:
1. Shared (S): ensures that the resource cannot be updated, but it can still be read by other transactions.
2. Update (U): signals that a transaction intends to update a record; however, this lock must be converted to an exclusive lock before the update itself can begin.
3. Exclusive (X): This lock ensures that no other transaction can read or modify the data in that resource.
Etc...
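As a minimal two-connection sketch of the modes in action (it assumes the pubs sample database is installed; table and WHERE clause are arbitrary), an open UPDATE holds exclusive locks that block a plain SELECT at the default isolation level:

-- Connection 1: an open UPDATE holds exclusive (X) key locks until the transaction ends
USE pubs
BEGIN TRAN
UPDATE authors SET phone = phone WHERE au_lname = 'Green'

-- Connection 2: this SELECT needs shared (S) locks on the same rows, so it waits
USE pubs
SELECT phone FROM authors WHERE au_lname = 'Green'

-- Connection 1: ending the transaction releases the X locks and unblocks connection 2
ROLLBACK TRAN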
3. Lock Escalation:
SQL Server continuously monitors lock usage to strike a balance between lock granularity and the resources devoted to locking. If a single transaction acquires a large number of low-granularity locks on a resource, SQL Server may escalate them to fewer locks of higher granularity.
The goal is to balance the number of locks that must be tracked against the need to keep data as available to other processes as possible.
Viewing Current Locks
1. sp_lock
It lists the current locks on the server and identifies the owning processes by SPID; the current connection's SPID can be retrieved with @@SPID, for example:
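EXEC sp_lock             -- every lock currently held on the server, one row per lock

DECLARE @spid int
SET @spid = @@SPID       -- the current connection's process ID (a variable is used because procedure arguments must be constants or variables)
EXEC sp_lock @spid       -- only the locks held by this connection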
2. In Enterprise Manager, locking information is displayed in three nodes, all children of the Current Activity node in the Management folder:
(i) Process Info
(ii) Locks / Process ID
(iii) Locks / Object
Customizing Locking Behavior
1. Marking a transaction as the preferred deadlock victim
We can mark our session as the preferred deadlock victim, even if its transaction is not the least expensive one to roll back, with
SET DEADLOCK_PRIORITY LOW
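A minimal sketch of typical usage around a batch that is safe to retry (the UPDATE against the pubs sample database is only a placeholder):

SET DEADLOCK_PRIORITY LOW       -- if a deadlock occurs, prefer this session as the victim
BEGIN TRAN
UPDATE titles SET price = price WHERE title_id = 'BU1032'
COMMIT TRAN
SET DEADLOCK_PRIORITY NORMAL    -- restore the default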
2. Setting a Lock timeout
SET LOCK_TIMEOUT timeout_period
E.g. SET LOCK_TIMEOUT 2000
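The period is given in milliseconds, and the current setting can be checked with @@LOCK_TIMEOUT, for example:

SET LOCK_TIMEOUT 2000        -- blocked statements give up after 2 seconds instead of waiting forever
SELECT @@LOCK_TIMEOUT        -- returns 2000; -1 means wait indefinitely (the default)
SET LOCK_TIMEOUT -1          -- restore the default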
3. Setting the transaction isolation level
SET TRANSACTION ISOLATION LEVEL
{ READ UNCOMMITTED | READ COMMITTED | REPEATABLE READ | SERIALIZABLE }
Read Uncommitted: the session neither issues shared locks nor honors exclusive locks, so dirty reads are possible.
Read Committed: the default isolation level. Shared locks are held while data is being read, but other transactions can still modify or insert data, so nonrepeatable reads and phantom reads can occur.
Repeatable Read: locks all the data that has been read so other users cannot change it, but no table-level lock is applied, so new rows can still be inserted, resulting in phantom records.
Serializable: locks all the data that has been read and prevents new records from being inserted, but the heavier locking also increases the chance of deadlocks.
We can view the current isolation level of the session with DBCC USEROPTIONS, for example:
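SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
DBCC USEROPTIONS                                  -- lists the SET options in effect, including the isolation level
SET TRANSACTION ISOLATION LEVEL READ COMMITTED    -- back to the default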
4. Locking Hints
These modify locking behavior for an individual SQL statement rather than the whole session (a short sketch follows the list):
(i) HOLDLOCK: hold a shared lock until the transaction completes.
(ii) NOLOCK: no locks are issued or honored.
(iii) PAGLOCK: forces page-level locks.
(iv) READCOMMITTED: uses the READ COMMITTED isolation level for the statement.
(v) READUNCOMMITTED: uses the READ UNCOMMITTED isolation level for the statement.
(vi) REPEATABLEREAD: uses the REPEATABLE READ isolation level for the statement.
(vii) SERIALIZABLE: uses the SERIALIZABLE isolation level for the statement.
(viii) UPDLOCK: forces update locks rather than shared locks.
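A hedged sketch of hint usage against the pubs sample database (the table and WHERE clause are arbitrary choices):

-- Read without issuing or honoring locks (dirty reads possible)
SELECT * FROM authors WITH (NOLOCK)

-- Take update (U) locks instead of shared locks while reading, to avoid deadlocking on the later UPDATE
BEGIN TRAN
SELECT phone FROM authors WITH (UPDLOCK) WHERE au_lname = 'Green'
UPDATE authors SET phone = phone WHERE au_lname = 'Green'
COMMIT TRAN

-- Force page-level shared locks and hold them until the transaction completes
SELECT * FROM authors WITH (PAGLOCK, HOLDLOCK)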
Chapter 26 Monitoring and Optimizing SQL Server
1. Performance Monitor
Go to Control Panel\Administrative Tools
SQL Profiler also provides an option to open Performance Monitor from its own Tools menu.
Performance Monitor is a system monitoring tool.
We can add objects such as Processor and Memory to the graph.
Three views are available: Graph, Alert, and Log; Graph is the default.
Recommended levels for counters:
% Processor Time: less than 75%
Memory (Pages/sec): fewer than 5
Etc
2. SQL Profiler
It is a tool for monitoring SQL Server activity.
Trace Files are created in SQL Profiler
The important options for a trace file are:
(i) Template: determines the variables that Profiler will record in the trace file. The default is SQLProfilerStandard, which includes the T-SQL duration.
(ii) Events: specifies the classes of events for which the trace will run, e.g. Cursors, Locks, Objects, etc.
(iii) Data Columns: the data fields available for the selected events.
(iv) Filter: specifies filtering criteria to avoid unnecessary information, e.g. exclude system IDs, or restrict the trace to a specific user ID or database ID.
We can also replay the trace file to recapture what happened while the trace was running, but always replay it in a test environment.
3. Index Tuning Wizard
This wizard recommends indexes on the basis of a trace file.
Select it from the Tools menu in SQL Profiler and load a trace file generated during peak hours for best results.
4. Server Optimization Techniques
(i) Always prefer stored procedures over ad hoc queries.
(ii) tempdb: it should be about half the size of the largest database on the server.
(iii) Query Governor: sets the maximum estimated cost allowed for T-SQL commands (see the sketch after this list), e.g.
sp_configure 'query governor cost limit', 1
This stops any user from running a query whose estimated cost exceeds 1 second.
(iv) Setting trace flags: e.g. DBCC TRACEON (107) tells the server that numbers with a decimal point should be treated as float rather than decimal; DBCC TRACEOFF (107) turns the flag back off.
(v) Manually configuring memory: min server memory 100 (not clear)
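Hedged T-SQL sketches of points (iii) to (v); the values are examples only, and because both sp_configure settings below are advanced options, 'show advanced options' must be enabled first:

-- enable advanced options
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- (iii) Query governor: refuse queries whose estimated cost exceeds 1 second
EXEC sp_configure 'query governor cost limit', 1
RECONFIGURE

-- (iv) Trace flag 107: treat literals with a decimal point as float rather than decimal
DBCC TRACEON (107)
DBCC TRACESTATUS (107)    -- check whether the flag is currently on
DBCC TRACEOFF (107)

-- (v) Reserve a minimum of 100 MB of memory for SQL Server
EXEC sp_configure 'min server memory', 100
RECONFIGURE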
Chapter 16 Basic Administrative Tasks
1. All backups are online backups, i.e. users can access the database while we are taking the backup.
2. Taking backups:
Right-click the database → All Tasks → Backup Database → follow the instructions.
3. Types of backups (a T-SQL sketch follows the list):
(i) Full Backup: backs up the entire database. It backs up the database files, the locations of those files, and the portion of the transaction log written from the start to the end of the backup.
(ii) Differential Backup: records all the changes made to the database since the last full backup. This means that if we take a full backup on Monday and a differential backup on Tuesday, the differential records all of the changes to the database since Monday's full backup.
(iii) Transaction Log Backup: records only the changes made to the database since the last transaction log backup. It is the only backup type that clears the old transaction log; if we perform only the other types, the log will fill to 100% and users will be denied access to the database. To clear the log without backing it up, run
BACKUP LOG Database_Name WITH TRUNCATE_ONLY
(iv) Filegroup Backup: backs up the specific files or filegroups that were specified at the time of database creation.
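A hedged T-SQL sketch of the full, differential, and log backups taken from Query Analyzer (the Northwind database and the disk paths are only examples):

-- Full backup of the entire database
BACKUP DATABASE Northwind TO DISK = 'C:\Backups\Northwind_Full.bak'

-- Differential backup: only the changes since the last full backup
BACKUP DATABASE Northwind TO DISK = 'C:\Backups\Northwind_Diff.bak' WITH DIFFERENTIAL

-- Transaction log backup: changes since the last log backup; also clears the inactive log
BACKUP LOG Northwind TO DISK = 'C:\Backups\Northwind_Log.trn'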
4. Maintaining Indexes:
To find out whether an index is suffering from fragmentation, run:
DECLARE @objid int
SET @objid = OBJECT_ID('Table_Name')
DBCC SHOWCONTIG (@objid)
To recreate the index, use:
CREATE CLUSTERED INDEX Index_Name ON Table_Name (Field_Name) WITH DROP_EXISTING
To rebuild the index with a changed fill factor (note that the first argument to DBCC DBREINDEX is the table name, not the index name):
DBCC DBREINDEX ('Table_Name', 'Index_Name', NewFillFactor)