Mahmoud ElMansy: knowledge meant to be free

SQL Server 2008 Diagnostic Information Queries

How to get version information for the current SQL Server instance and OS:

SELECT @@VERSION AS [SQL Version Info];
 
--   2008 RTM Builds                    2008 SP1 Builds
-- Build       Description        Build        Description
-- 1600        Gold RTM
-- 1763        RTM CU1
-- 1779        RTM CU2
-- 1787        RTM CU3    -->      2531        SP1 RTM
-- 1798        RTM CU4    -->      2710        SP1 CU1
-- 1806        RTM CU5    -->      2714        SP1 CU2
-- 1812        RTM CU6    -->      2723        SP1 CU3
-- 1818        RTM CU7    -->      2734        SP1 CU4
-- 1823        RTM CU8    -->      2746        SP1 CU5

Hardware information from SQL Server 2008
(Cannot distinguish between HT and multi-core)

SELECT cpu_count AS [Logical CPU COUNT], hyperthread_ratio AS [Hyperthread Ratio],
cpu_count/hyperthread_ratio AS [Physical CPU COUNT],
physical_memory_in_bytes/1048576 AS [Physical Memory (MB)], sqlserver_start_time
FROM sys.dm_os_sys_info;

Get sp_configure values for instance

EXEC sp_configure 'Show Advanced Options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure;
 
-- Focus on
-- backup compression default
-- clr enabled
-- lightweight pooling (should be zero)
-- max degree of parallelism
-- max server memory (MB)
-- optimize for ad hoc workloads (should be 1)
-- priority boost (should be zero)
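The values in the list above can be changed with sp_configure as well. For example (the numbers here are illustrative only, not recommendations; size max server memory to leave headroom for the OS):

```sql
-- Illustrative values for a hypothetical 32 GB server
EXEC sp_configure 'max server memory (MB)', 28672;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
GO
```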

Get File Names and Paths for TempDB and all user databases in SQL instance

SELECT DB_NAME([dbid]) AS [DATABASE Name], fileid, [filename]
FROM sys.sysaltfiles
WHERE ([dbid] > 4 AND [dbid] <> 32767)
OR [dbid] = 2;
 
-- Things to look at:
-- Are data files and log files on different drives?
-- Is everything on C: drive?
-- Is TempDB on dedicated drives?
-- Are there multiple data files?

Calculates average stalls per read, per write, and per total input/output for each database file.
Helps determine which database files on the entire instance have the most I/O bottlenecks

SELECT DB_NAME(database_id) AS [DATABASE Name], file_id ,io_stall_read_ms, num_of_reads,
CAST(io_stall_read_ms/(1.0 + num_of_reads) AS NUMERIC(10,1)) AS [avg_read_stall_ms],io_stall_write_ms,
num_of_writes,CAST(io_stall_write_ms/(1.0+num_of_writes) AS NUMERIC(10,1)) AS [avg_write_stall_ms],
io_stall_read_ms + io_stall_write_ms AS [io_stalls], num_of_reads + num_of_writes AS [total_io],
CAST((io_stall_read_ms + io_stall_write_ms)/(1.0 + num_of_reads + num_of_writes) AS NUMERIC(10,1))
AS [avg_io_stall_ms]
FROM sys.dm_io_virtual_file_stats(NULL,NULL)
ORDER BY avg_io_stall_ms DESC;

Recovery model, log reuse wait description, and compatibility level for all databases on instance
SELECT [name] AS [Database Name], recovery_model_desc, log_reuse_wait_desc, [compatibility_level]
FROM sys.databases;

-- Things to look at:
-- How many databases are on the instance?
-- What recovery models are they using?
-- What is the log reuse wait description?
-- What compatibility level are they on?

Clear wait stats for the instance

DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

Isolate top waits for server instance since last restart or statistics clear

WITH Waits AS
(SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN( 'SLEEP_TASK', 'BROKER_TASK_STOP',
'SQLTRACE_BUFFER_FLUSH', 'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
'LAZYWRITER_SLEEP')) -- filter out additional irrelevant waits
SELECT W1.wait_type,
CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
FROM Waits AS W1
INNER JOIN Waits AS W2
ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
HAVING SUM(W2.pct) - W1.pct < 95;

Network Related Waits

Here is a list of common wait types.

ASYNC_NETWORK_IO

Occurs on network writes when the task is blocked behind the network

Locking Waits

-- LCK_M_IX Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock
-- LCK_M_IU Occurs when a task is waiting to acquire an Intent Update (IU) lock
-- LCK_M_S Occurs when a task is waiting to acquire a Shared lock

-- *** I/O Related Waits ***
-- ASYNC_IO_COMPLETION Occurs when a task is waiting for I/Os to finish
-- IO_COMPLETION Occurs while waiting for I/O operations to complete.
-- This wait type generally represents non-data page I/Os. Data page I/O completion waits appear
-- as PAGEIOLATCH_* waits
-- PAGEIOLATCH_SH Occurs when a task is waiting on a latch for a buffer that is in an I/O request.
-- The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem.
-- PAGEIOLATCH_EX Occurs when a task is waiting on a latch for a buffer that is in an I/O request.
-- The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem.
-- WRITELOG Occurs while waiting for a log flush to complete.
-- Common operations that cause log flushes are checkpoints and transaction commits.
-- PAGELATCH_EX Occurs when a task is waiting on a latch for a buffer that is not in an I/O request.
-- The latch request is in Exclusive mode.
-- BACKUPIO Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data

-- *** CPU Related Waits ***
-- SOS_SCHEDULER_YIELD Occurs when a task voluntarily yields the scheduler for other tasks to execute.
-- During this wait the task is waiting for its quantum to be renewed.

-- THREADPOOL Occurs when a task is waiting for a worker to run on.
-- This can indicate that the maximum worker setting is too low, or that batch executions are taking
-- unusually long, thus reducing the number of workers available to satisfy other batches.
-- CXPACKET Occurs when trying to synchronize the query processor exchange iterator.
-- You may consider lowering the degree of parallelism if contention on this wait type becomes a problem.

-- Signal waits for instance
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2)) AS [%signal (cpu) waits],
CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2)) AS [%resource waits]
FROM sys.dm_os_wait_stats;

-- Signal waits above 10-15% is usually a sign of CPU pressure

-- Get CPU utilization history for the last 30 minutes
DECLARE @ts_now bigint = (SELECT cpu_ticks/(cpu_ticks/ms_ticks) FROM sys.dm_os_sys_info);

SELECT TOP(30) SQLProcessUtilization AS [SQL Server Process CPU Utilization],
SystemIdle AS [System Idle Process],
100 - SystemIdle - SQLProcessUtilization AS [Other Process CPU Utilization],
DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS [Event Time]
FROM (
SELECT record.value('(./Record/@id)[1]', 'int') AS record_id,
record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')
AS [SystemIdle],
record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')
AS [SQLProcessUtilization], [timestamp]
FROM (
SELECT [timestamp], CONVERT(xml, record) AS [record]
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
AND record LIKE N'%<SystemHealth>%') AS x
) AS y
ORDER BY record_id DESC;

-- Page Life Expectancy (PLE) value for the default instance
SELECT cntr_value AS [Page Life Expectancy]
FROM sys.dm_os_performance_counters
WHERE [object_name] = 'SQLServer:Buffer Manager' -- Modify this if you have named instances
AND counter_name = 'Page life expectancy';

-- PLE is a good measurement of memory pressure.
-- Higher PLE is better. Below 300 is generally bad.
-- Watch the trend, not the absolute value.

Get Buffer cache hit ratio (higher is better)
Buffer cache hit ratio is another measure of memory pressure.
A higher value is better. Below 95% is generally bad.
Watch the trend, not the absolute value.

SELECT ROUND(CAST(A.cntr_value1 AS NUMERIC) / CAST(B.cntr_value2 AS NUMERIC),
3) AS [Buffer Cache Hit Ratio]
FROM ( SELECT cntr_value AS [cntr_value1]
FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Buffer Manager' -- Modify this if you have named instances
AND counter_name = 'Buffer cache hit ratio'
) AS A,
(SELECT cntr_value AS [cntr_value2]
FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Buffer Manager' -- Modify this if you have named instances
AND counter_name = 'Buffer cache hit ratio base'
) AS B;
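On an instance that has only just started, the 'Buffer cache hit ratio base' counter can be zero, which makes the division above fail. A defensive variant of the same query, with NULLIF guarding the denominator, might look like this:

```sql
SELECT ROUND(CAST(A.cntr_value1 AS NUMERIC) / NULLIF(CAST(B.cntr_value2 AS NUMERIC), 0),
3) AS [Buffer Cache Hit Ratio]  -- returns NULL instead of erroring when the base is 0
FROM (SELECT cntr_value AS [cntr_value1]
      FROM sys.dm_os_performance_counters
      WHERE [object_name] = 'SQLServer:Buffer Manager' -- Modify this if you have named instances
      AND counter_name = 'Buffer cache hit ratio') AS A,
     (SELECT cntr_value AS [cntr_value2]
      FROM sys.dm_os_performance_counters
      WHERE [object_name] = 'SQLServer:Buffer Manager' -- Modify this if you have named instances
      AND counter_name = 'Buffer cache hit ratio base') AS B;
```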
 

 

Buffer Pool Usage for instance

SELECT TOP(20) [TYPE], SUM(single_pages_kb) AS [SPA Mem, Kb]
FROM sys.dm_os_memory_clerks
GROUP BY [TYPE]
ORDER BY SUM(single_pages_kb) DESC;
 
-- CACHESTORE_SQLCP  SQL Plans
--   These are cached SQL statements or batches that aren't in
--   stored procedures, functions, and triggers
-- CACHESTORE_OBJCP  Object Plans
--   These are compiled plans for stored procedures,
--   functions, and triggers
-- CACHESTORE_PHDR   Algebrizer Trees
--   An algebrizer tree is the parsed SQL text that
--   resolves the table and column names

Find single-use, ad-hoc queries that are bloating the plan cache
Gives you the text and size of single-use ad-hoc queries that waste space in the plan cache
Enabling 'optimize for ad hoc workloads' for the instance can help (new in SQL Server 2008)
Enabling forced parameterization for the database can help

SELECT TOP(100) [text], cp.size_in_bytes
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE cacheobjtype = 'Compiled Plan'
AND cp.objtype = 'Adhoc'
AND cp.usecounts = 1
ORDER BY cp.size_in_bytes DESC;
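If the query above shows heavy single-use plan bloat, the two mitigations mentioned earlier can be applied like this (the database name below is a placeholder):

```sql
-- Instance-wide: cache only a small plan stub for first-time ad-hoc batches
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
GO
-- Per database: force parameterization ("YourDatabase" is a placeholder name)
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;
```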

SQL Server process address space info (shows whether Locked Pages in Memory is enabled, among other things)

SELECT physical_memory_in_use_kb,large_page_allocations_kb, locked_page_allocations_kb,total_virtual_address_space_kb,
virtual_address_space_reserved_kb, virtual_address_space_committed_kb, virtual_address_space_available_kb,
page_fault_count, memory_utilization_percentage, available_commit_limit_kb, process_physical_memory_low,
process_virtual_memory_low
FROM sys.dm_os_process_memory;

Individual File Sizes and space available for current database
Look at how large and how full the files are and where they are located
Make sure the transaction log is not full!!

SELECT name AS [File Name] , physical_name AS [Physical Name], SIZE/128 AS [Total SIZE IN MB],
SIZE/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS [Available SPACE IN MB]
FROM sys.database_files;

Top Cached SPs By Execution Count (SQL 2008)

SELECT TOP(50) p.name AS [SP Name], qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(SECOND, qs.cached_time, GETDATE()), 0) AS [Calls/SECOND],
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime], qs.total_worker_time AS [TotalWorkerTime],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time], qs.last_elapsed_time,
qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.execution_count DESC;

Top Cached SPs By Total Worker time (SQL 2008). Worker time relates to CPU cost.

SELECT TOP(25) p.name AS [SP Name],
qs.total_worker_time AS [TotalWorkerTime], qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
qs.execution_count, ISNULL(qs.execution_count/DATEDIFF(SECOND, qs.cached_time, GETDATE()), 0) AS [Calls/SECOND],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time], qs.last_elapsed_time,
qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_worker_time DESC;

Top Cached SPs By Total Logical Reads (SQL 2008). Logical reads relate to memory pressure.

SELECT TOP(25) p.name AS [SP Name],
qs.total_logical_reads AS [TotalLogicalReads], qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads],
ISNULL(qs.execution_count/DATEDIFF(SECOND, qs.cached_time, GETDATE()), 0) AS [Calls/SECOND],
qs.total_worker_time AS [TotalWorkerTime], qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
qs.execution_count,
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time], qs.last_elapsed_time,
qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_logical_reads DESC;

Top Cached SPs By Total Physical Reads (SQL 2008). Physical reads relate to disk I/O pressure

SELECT TOP(25) p.name AS [SP Name],
qs.total_physical_reads AS [TotalPhysicalReads], qs.total_physical_reads/qs.execution_count AS [AvgPhysicalReads],
ISNULL(qs.execution_count/DATEDIFF(SECOND, qs.cached_time, GETDATE()), 0) AS [Calls/SECOND],
qs.total_logical_reads AS [TotalLogicalReads], qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads],
qs.total_worker_time AS [TotalWorkerTime], qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
qs.execution_count,
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time], qs.last_elapsed_time,
qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_physical_reads DESC;

Top Cached SPs By Total Logical Writes (SQL 2008). Logical writes relate to both memory and disk I/O pressure

SELECT TOP(25) p.name AS [SP Name],
qs.total_logical_writes AS [TotalLogicalWrites], qs.total_logical_writes/qs.execution_count AS [AvgLogicalWrites],
ISNULL(qs.execution_count/DATEDIFF(SECOND, qs.cached_time, GETDATE()), 0) AS [Calls/SECOND],
qs.total_logical_reads AS [TotalLogicalReads], qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads],
qs.total_worker_time AS [TotalWorkerTime], qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
qs.execution_count,
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time], qs.last_elapsed_time,
qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_logical_writes DESC;

Lists the top statements by average input/output usage for the current database
Helps you find the most expensive statements for I/O by SP

SELECT TOP(50) OBJECT_NAME(qt.objectid) AS [SP Name],
(qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count AS [Avg IO],
SUBSTRING(qt.[text],qs.statement_start_offset/2,
(CASE
WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(nvarchar(MAX), qt.[text])) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) AS [Query Text]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.[dbid] = DB_ID()
ORDER BY [Avg IO] DESC;

How to Determine Possible Bad Indexes (writes > reads)

SELECT OBJECT_NAME(s.[object_id]) AS [TABLE Name], i.name AS [INDEX Name], i.index_id,
user_updates AS [Total Writes], user_seeks + user_scans + user_lookups AS [Total Reads],
user_updates - (user_seeks + user_scans + user_lookups) AS [Difference]
FROM sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
ON s.[object_id] = i.[object_id]
AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(s.[object_id],'IsUserTable') = 1
AND s.database_id = DB_ID()
AND user_updates > (user_seeks + user_scans + user_lookups)
AND i.index_id > 1
ORDER BY [Difference] DESC, [Total Writes] DESC, [Total Reads] ASC;

How to alter an index to rebuild or reorganize it

ALTER INDEX Index_Name ON TableName REORGANIZE;
ALTER INDEX Index_Name ON TableName REBUILD;
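As a rough rule of thumb, REORGANIZE is a lightweight, always-online operation suited to low fragmentation, while REBUILD recreates the index and takes locks unless ONLINE = ON is available (Enterprise Edition). A sketch with hypothetical index and table names:

```sql
-- Hypothetical names; adjust FILLFACTOR to your workload
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD
    WITH (ONLINE = ON, FILLFACTOR = 90);  -- ONLINE = ON requires Enterprise Edition
```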

Missing Indexes for entire SQL instance by Index Advantage
Look at last user seek time, number of user seeks to help determine source and importance
SQL Server is overly eager to add included columns, so beware

SELECT user_seeks * avg_total_user_cost * (avg_user_impact * 0.01) AS [index_advantage], migs.last_user_seek,
mid.[statement] AS [DATABASE.Schema.TABLE],
mid.equality_columns, mid.inequality_columns, mid.included_columns,
migs.unique_compiles, migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
ON mig.index_handle = mid.index_handle
AND mid.[statement] LIKE '%mtsv2%' -- note: filters to a specific database ('mtsv2'); remove or change this for your instance
ORDER BY index_advantage DESC;

Breaks down buffers used by current database by object (table, index) in the buffer cache
Tells you what tables and indexes are using the most memory in the buffer cache

SELECT OBJECT_NAME(p.[object_id]) AS [ObjectName], p.[object_id],
p.index_id, COUNT(*)/128 AS [buffer SIZE(MB)],  COUNT(*) AS [buffer_count]
FROM sys.allocation_units AS a
INNER JOIN sys.dm_os_buffer_descriptors AS b
ON a.allocation_unit_id = b.allocation_unit_id
INNER JOIN sys.partitions AS p
ON a.container_id = p.hobt_id
WHERE b.database_id = DB_ID()
AND p.[object_id] > 100
GROUP BY p.[object_id], p.index_id
ORDER BY buffer_count DESC;

Detect blocking (run multiple times)

SELECT t1.resource_type AS [LOCK TYPE],DB_NAME(resource_database_id) AS [DATABASE],
t1.resource_associated_entity_id AS [blk object],t1.request_mode AS [LOCK req], --- lock requested
t1.request_session_id AS [waiter sid], t2.wait_duration_ms AS [wait TIME], -- spid of waiter
(SELECT [text] FROM sys.dm_exec_requests AS r                              -- get sql for waiter
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle])
WHERE r.session_id = t1.request_session_id) AS [waiter_batch],
(SELECT SUBSTRING(qt.[text],r.statement_start_offset/2,
(CASE WHEN r.statement_end_offset = -1
THEN LEN(CONVERT(nvarchar(MAX), qt.[text])) * 2
ELSE r.statement_end_offset END - r.statement_start_offset)/2)
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle]) AS qt
WHERE r.session_id = t1.request_session_id) AS [waiter_stmt],    -- statement blocked
t2.blocking_session_id AS [blocker sid],                         -- spid of blocker
(SELECT [text] FROM sys.sysprocesses AS p                        -- get sql for blocker
CROSS APPLY sys.dm_exec_sql_text(p.[sql_handle])
WHERE p.spid = t2.blocking_session_id) AS [blocker_stmt]
FROM sys.dm_tran_locks AS t1
INNER JOIN sys.dm_os_waiting_tasks AS t2
ON t1.lock_owner_address = t2.resource_address;
 

 

PowerShell

Key Meaning
(Alt)+(F7) Deletes the current command history
(PgUp), (PgDn) Display the first (PgUp) or last (PgDn) command you used in current session
(Enter) Send the entered lines to PowerShell for execution
(End) Moves the editing cursor to the end of the command line
(Del) Deletes the character to the right of the insertion point
(Esc) Deletes current command line
(F2) Moves in current command line to the next character corresponding to specified character
(F4) Deletes all characters to the right of the insertion point up to specified character
(F7) Displays last entered commands in a dialog box
(F8) Displays commands from command history beginning with the character that you already entered in the command line
(F9) Opens a dialog box in which you can enter the number of a command from your command history to return the command. (F7) displays numbers of commands in command history
(Left arrow),(Right arrow) Move one character to the left or right respectively
(Arrow up),(Arrow down), (F5),(F8) Repeat the last previously entered command
(Home) Moves editing cursor to beginning of command line
(Backspace) Deletes character to the left of the insertion point
(Ctrl)+(C) Cancels command execution
(Ctrl)+(End) Deletes all characters from current position to end of command line
(Ctrl)+(Arrow left),(Ctrl)+(Arrow right) Move insertion point one word to the left or right respectively
(Ctrl)+(Home) Deletes all characters of current position up to beginning of command line
(Tab) Automatically completes current entry, if possible

Cmd /c Help

For more information on a specific command, type HELP command-name.

ASSOC Displays or modifies file extension associations.
AT Schedules commands and programs to run on a computer.
ATTRIB Displays or changes file attributes.
BREAK Sets or clears extended CTRL+C checking.
CACLS Displays or modifies access control lists (ACLs) of files.
CALL Calls one batch program from another.
CD Displays the name of or changes the current directory.
CHCP Displays or sets the active code page number.
CHDIR Displays the name of or changes the current directory.
CHKDSK Checks a disk and displays a status report.
CHKNTFS Displays or modifies the checking of disk at boot time.
CLS Clears the screen.
CMD Starts a new instance of the Windows command interpreter.
COLOR Sets the default console foreground and background colors.
COMP Compares the contents of two files or sets of files.
COMPACT Displays or alters the compression of files on NTFS
partitions.
CONVERT Converts FAT volumes to NTFS. You cannot convert the
current drive.
COPY Copies one or more files to another location.
DATE Displays or sets the date.
DEL Deletes one or more files.
DIR Displays a list of files and subdirectories in a directory.
DISKCOMP Compares the contents of two floppy disks.
DISKCOPY Copies the contents of one floppy disk to another.
DOSKEY Edits command lines, recalls Windows commands, and creates
macros.
ECHO Displays messages, or turns command echoing on or off.
ENDLOCAL Ends localization of environment changes in a batch file.
ERASE Deletes one or more files.
EXIT Quits the CMD.EXE program (command interpreter).
FC Compares two files or sets of files, and displays the
differences between them.
FIND Searches for a text string in a file or files.
FINDSTR Searches for strings in files.
FOR Runs a specified command for each file in a set of files.
FORMAT Formats a disk for use with Windows.
FTYPE Displays or modifies file types used in file extension
associations.
GOTO Directs the Windows command interpreter to a labeled line
in a batch program.
GRAFTABL Enables Windows to display an extended character set in
graphics mode.
HELP Provides Help information for Windows commands.
IF Performs conditional processing in batch programs.
LABEL Creates, changes, or deletes the volume label of a disk.
MD Creates a directory.
MKDIR Creates a directory.
MODE Configures a system device.
MORE Displays output one screen at a time.
MOVE Moves one or more files from one directory to another
directory.
PATH Displays or sets a search path for executable files.
PAUSE Suspends processing of a batch file and displays a message.
POPD Restores the previous value of the current directory saved
by PUSHD.
PRINT Prints a text file.
PROMPT Changes the Windows command prompt.
PUSHD Saves the current directory then changes it.
RD Removes a directory.
RECOVER Recovers readable information from a bad or defective disk.
REM Records comments (remarks) in batch files or CONFIG.SYS.
REN Renames a file or files.
RENAME Renames a file or files.
REPLACE Replaces files.
RMDIR Removes a directory.
SET Displays, sets, or removes Windows environment variables.
SETLOCAL Begins localization of environment changes in a batch file.
SHIFT Shifts the position of replaceable parameters in batch
files.
SORT Sorts input.
START Starts a separate window to run a specified program or
command.
SUBST Associates a path with a drive letter.
TIME Displays or sets the system time.
TITLE Sets the window title for a CMD.EXE session.
TREE Graphically displays the directory structure of a drive or
path.
TYPE Displays the contents of a text file.
VER Displays the Windows version.
VERIFY Tells Windows whether to verify that your files are written
correctly to a disk.
VOL Displays a disk volume label and serial number.
XCOPY Copies files and directory trees.

$env:path += ";C:\programs\Windows NT\accessories"
wordpad.exe
& "C:\programs\Windows NT\accessories\wordpad.exe"
Get-Command -CommandType cmdlet
dir variable:
dir variable:value*
dir variable: -Include value* -Exclude *1*
dir variable: | Out-String -Stream | Select-String " 20 "

PowerShell Search For list Of Users

PowerShell is a very handy tool for administrators.
We meet again with this post to demonstrate how to get user information from Active Directory for a list of users stored in a CSV file.

First we will make a text file with the search criteria, separated by commas:

email,
email1@contoso.com,
email2@contoso.com,
email3@contoso.com,
email4@contoso.com,
email5@contoso.com,

Save it as a CSV file named New.csv.

Now to the PowerShell code:

Import-Module ActiveDirectory

$FilePath = '.\New.csv'
$Users = Import-Csv $FilePath -Header 'email' -Delimiter ','

foreach ($user in $Users)
{
    # build a wildcard pattern from the email column of the CSV
    $email = [string]::Format("{0}*", $user.email)
    Get-ADUser -Filter {Comment -like $email} -Properties * |
        Select-Object mail, name, cn |
        Export-Csv -Path '.\Out.csv' -NoTypeInformation -Append
}

TSQL - Concatenate Rows Into One Column Using FOR XML PATH()

This case often comes up when you are working with a large business requirement: you need to concatenate rows into one string.

One of the methods to do that is to use FOR XML PATH('') to turn the rows into one string with no XML tags.

As follows:
1) Declare two table variables with a one-to-many relation

DECLARE @Categories TABLE
(
CategoryID INT,
Category_Code VARCHAR(16),
Notes VARCHAR(128)
);

DECLARE @Products TABLE
(
ProductID INT,
CategoryID INT,
Product_Code VARCHAR(16),
Notes VARCHAR(128)
);

2) Fill Tables with Dummy Data

INSERT @Categories SELECT 1,'Cat_1','Notes Notes Notes'
UNION ALL SELECT 2,'Cat_2','Notes Notes Notes'
UNION ALL SELECT 3,'Cat_3','Notes Notes Notes';

INSERT @Products SELECT 1,1,'Product_1','Notes Notes Notes'
UNION ALL SELECT 1,1,'Product_1','Notes Notes Notes'
UNION ALL SELECT 2,1,'Product_2','Notes Notes Notes'
UNION ALL SELECT 3,1,'Product_3','Notes Notes Notes'
UNION ALL SELECT 4,2,'Product_4','Notes Notes Notes'
UNION ALL SELECT 5,2,'Product_5','Notes Notes Notes'
UNION ALL SELECT 6,2,'Product_6','Notes Notes Notes'
UNION ALL SELECT 7,3,'Product_7','Notes Notes Notes';

3) Run the query

SELECT cat.*,
-- STUFF(..., 1, 2, '') strips the leading ', ' separator from the concatenated string
STUFF((SELECT ', ' + p.Product_Code FROM @Products AS p WHERE p.CategoryID = cat.CategoryID FOR XML PATH('')), 1, 2, '') AS Products
FROM @Categories AS cat;

The result: each category row now carries a Products column with its product codes concatenated into one comma-separated string. (Result screenshots omitted.)

How to make a fixed-length digit string

This is one of the easier posts for me; I will talk about how to make a fixed-length digit string.
That means I have a column stored in the DB and I want to show it with a fixed length like "00001" or "00123".

That is pretty easy using one of the SQL functions, called STUFF.

This function is described in MSDN as: "The STUFF function inserts a string into another string. It deletes a specified length of characters in the first string at the start position and then inserts the second string into the first string at the start position."

Writing the code:

DECLARE @replaced VARCHAR(64)='123'

SELECT STUFF('0000000000', 11-LEN(@replaced), LEN(@replaced), @replaced);
-- note that 11 here is the length of your fixed string +1
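For this particular zero-padding case, a common alternative to STUFF is RIGHT over a concatenated string of zeros; this sketch produces the same result:

```sql
DECLARE @replaced VARCHAR(64) = '123';
-- prepend zeros, then keep the rightmost 10 characters -> '0000000123'
SELECT RIGHT('0000000000' + @replaced, 10);
```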

Dynatree Lazy Load Example With ASP.NET

I faced an issue at work: we use the jQuery Dynatree plugin in one of our projects, and for performance reasons I decided to use the lazy-load feature that comes with the Dynatree plugin, but I found no ASP.NET example of how to achieve this.

So I made an example and will go into the details of how to get it up and running.

First: here is a link for code.google and plugins.jquery.

And a final link: the plugin homepage.

And now my favorite part.

Second: We will make an ASP.NET web site, add a new aspx page, call it "Dynatree.aspx", and add these references:

    <link href="CSS/Jquery.css" rel="stylesheet" type="text/css" />

    <script src="/Scripts/jquery-1.8.2.js" type="text/javascript"></script>

    <script src="/Scripts/ui.core.js" type="text/javascript"></script>

    <script src="/Scripts/jquery.dynatree.js" type="text/javascript"></script>

Third: in the page's code-behind file, put this method.

/// <summary>
/// Gets the child nodes for a given tree branch
/// </summary>
[System.Web.Services.WebMethod]
// this attribute is important to make ASP.NET convert the result to JSON for us
[System.Web.Script.Services.ScriptMethod(ResponseFormat = System.Web.Script.Services.ResponseFormat.Json)]
public static List<DynaTreeNode> GetNodes(string key)
{
    List<DynaTreeNode> lstDynaTreeNodes = new List<DynaTreeNode>();
    string NodeTitle = "Tree Node ";
    // key == "0" means load the first level of the tree
    if (key == "0")
    {
        // load data to initialize the first level of the tree
    }
    else
    {
        // load data (the sub-items) for the given tree branch
    }
    int NodeNum;
    for (int i = 1; i <= 5; i++)
    {
        NodeNum = new Random().Next(i, i * 20);
        lstDynaTreeNodes.Add(
            new DynaTreeNode()
            {
                isFolder = true,
                isLazy = true,
                key = NodeNum.ToString(),
                Title = NodeTitle + NodeNum.ToString()
            }
        );
    }

    return lstDynaTreeNodes;
}

Fourth: now the AJAX call that will initialize the tree and get the data.

// --- Initialize the sample tree
$("#tree").dynatree({
title: "Lazy Tree",
rootVisible: true,
fx: { height: "toggle", duration: 300 },
initAjax: {
type: 'POST',
url: "/Dynatree.aspx/GetNodes",
data: JSON.stringify({ key: '0' }),
dataType: "json",
contentType: 'application/json; charset=utf-8'

},

onActivate: function (dtnode) {
$("#divCurrent").text(dtnode.data.title);
}
, onLazyRead: function (dtnode) {
dtnode.appendAjax({
type: 'POST',
url: "/Dynatree.aspx/GetNodes",
data: JSON.stringify({ key: dtnode.data.key }),
dataType: "json",
contentType: 'application/json; charset=utf-8'
});
}

});

That is it: you will have your lazy-load tree. You can download the full example and enjoy the code.

Implementing Custom Log4Net using AdoNetAppender and simple Wrapper with Asp.Net Mvc

Assalamu Alaikum (peace be upon you).
This post comes after quite a long pause, but I found a very useful article which encouraged me to write on this topic.

In this post I will illustrate how to make a wrapper class (which mimics the structure, but hides complex/advanced/unwanted functionality) to manage the Log4net classes in a manageable and easy way.

First, for those who don't know Log4Net, it is described on its homepage as:

The Apache log4net library is a tool to help the programmer output log statements to a variety of output targets.

I won't talk more about Log4Net here; you can review the full documentation at the link above.

Now to the code :)

I will build a simple MVC web application to illustrate how to get Log4net up and running and how to make an easy wrapper to implement and catch all logs.

Enough talking; open your Visual Studio 2013 IDE.

1) Create an MVC project called Log4Net_MVC.

  • Add a new project of type Class Library and name it Log4NetWrapper.
  • Download the Log4net DLL from the link above
  • Add a folder named Dlls, put the log4net.dll and log4net.xml files in it, and add a reference to them in the class library project
  • Run the "Log.sql" script against your database

2) In the Mvc Project

  • Add a reference to the Log4NetWrapper project.
  • Configure the Log4Net.config file to meet your requirements; here is an example of how to do this.

I configured my own file, and here are some keys to understanding it.

I added two appenders ("ways to write logs"):

   <appender name="DebugAppender" type="log4net.Appender.DebugAppender"> <!-- To write on the OutPut Window for Visual Studio To Debug Errors  -->
   <appender name="AdoNetAppender_SqlServer" type="log4net.Appender.AdoNetAppender">
<!-- To write logs into DataBase -->

I added two custom parameters I need to write, [ProjectID] and [UserID],
and to do that I added them to 'commandText':

   <commandText value="INSERT INTO Log ([Date],[Thread],[Level],[Logger],[Message],[Exception],[ProjectID],[UserID]) VALUES (@log_date, @thread, @log_level, @logger, @message,@exception,@ProjectID,@UserID)"/>

And added them to the 'parameter' collection:

      <parameter>
        <parameterName value="@ProjectID"/>
        <dbType value="String" />
        <size value="20" />
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%property{ProjectID}" />
        </layout>
      </parameter>
      <parameter>
        <parameterName value="@UserID"/>
        <dbType value="String" />
        <size value="20" />
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%property{UserID}" />
        </layout>
      </parameter>

Now I will point out two important keys.

1- bufferSize: Log4net stores log events in a queue, and when the queue reaches this number it flushes the queue and writes the events to your database. This is very good for performance, so choose wisely.

 <!-- with a value of 1, every log event is written immediately --><bufferSize value="1" />

2- useTransactions: this key takes true/false.

<useTransactions value="false" />
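For illustration, here is how these two keys typically sit together near the top of the AdoNetAppender (the value 50 is just an example; pick a buffer size that fits your traffic):

```xml
<appender name="AdoNetAppender_SqlServer" type="log4net.Appender.AdoNetAppender">
  <!-- flush to the database once 50 events have been queued -->
  <bufferSize value="50" />
  <!-- set to true to wrap each flush in a transaction -->
  <useTransactions value="false" />
  <!-- connectionString, commandText and parameter elements go here as shown above -->
</appender>
```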

3) Add the Log4Net.config file to the root of your MVC project and modify the Log4netConnectionString to point to your database.

4) Add a new class and name it CommonMethods.

5) Add this global function, “WriteLog”, to the class.

public void WriteLog(object pMessage, Gehbeez.BaseDataLayer.Logging.LogType pLogType)
{
    Gehbeez.BaseDataLayer.Logging.Logger.Instance.WriteLog(pMessage, pLogType,
        new Gehbeez.BaseDataLayer.Logging.loggingParameter()
        {
            UserID = CommonMethods.Instance.CurrentUserID.HasValue ? CommonMethods.Instance.CurrentUserID.Value.ToString() : "0",
            ProjectID = "12"
        });
}

It is important to note that loggingParameter is a class that holds your custom parameters.

Note that the property names must match your custom configuration parameters. I added this method to enable Log4net to pick up their values:

private void PushLoggingProperties(object loggingParameter)
{
    if (loggingParameter != null)
    {
        Type attrType = loggingParameter.GetType();
        System.Reflection.PropertyInfo[] properties = attrType.GetProperties(
            System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance);
        for (int i = 0; i < properties.Length; i++)
        {
            object value = properties[i].GetValue(loggingParameter, null);
            if (value != null)
                log4net.ThreadContext.Properties[properties[i].Name] = value.ToString();
        }
    }
}
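For completeness, the loggingParameter class can be as simple as a pair of string properties whose names match the %property patterns in the config. This is a minimal sketch under that assumption; the actual class ships in the sample project:

```
// Minimal sketch: property names must match %property{ProjectID} / %property{UserID}
// in Log4Net.config so PushLoggingProperties can map them by reflection.
public class loggingParameter
{
    public string ProjectID { get; set; }
    public string UserID { get; set; }
}
```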

6) Add these two additional properties.

public static CommonMethods Instance
{
    get
    {
        return new CommonMethods();
    }
}

public int? CurrentUserID
{
    get
    {
        try
        {
            return int.Parse(HttpContext.Current.User.Identity.Name);
        }
        catch { return null; }
    }
}

7) Now add a new controller and name it HomeController.

8) Paste this code to illustrate how it will work.

public ActionResult Index()
{
    // Normal log
    Log4Net_MVC.App_Code.CommonMethods.Instance.WriteLog("Execute Home>>>>Index", Gehbeez.BaseDataLayer.Logging.LogType.EnterMethod);

    try
    {
        // Sample
        throw new Exception("Custom Exception");
    }
    catch (Exception ex)
    {
        // Catch the error
        Log4Net_MVC.App_Code.CommonMethods.Instance.WriteLog(ex, Gehbeez.BaseDataLayer.Logging.LogType.Error);
    }

    return View();
}

 

Now the result will look like this:

Log Table


Download the full working sample: Log4Net_MVC

Using Log4net for logging with ASP.NET MVC (log into a SQL table)

I wrote this article to show how to use Log4net with MVC.
First, you need to read about Log4net.

And for sure, Log4net is one of the best solutions for logging.

Now let us start.

1) Download the DLLs from the site.

2) Add a reference to it in your project.

3) Create the Log table using the script in the attached project.

4) If you are using an MVC site, add this section to your AssemblyInfo.cs:

using log4net.Config;
// this is for Log4net
[assembly: XmlConfigurator(ConfigFile = "Log4Net.config", Watch = true)]

 

5) If the “Log4Net.config” file is in a custom folder, just use this code:

System.IO.FileInfo l4net = new System.IO.FileInfo(Server.MapPath("~/Log4Net.config"));
//System.IO.FileInfo l4net = new System.IO.FileInfo("Log4Net.config");// if you are using console or desktop application
log4net.Config.XmlConfigurator.ConfigureAndWatch(l4net);

6) Declare the log4net logger:

//logger = log4net.LogManager.GetLogger(typeof(Program));//the Program is the name of the current class
// or you can use
logger = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

logger.Info("Test message for Log4Net");
logger.Warn("Test message for Log4Net");
logger.Error("Test message for Log4Net");
logger.Debug("Test message for Log4Net");

You can download a console sample: TestLog4net

That is all, I hope you enjoyed the article.

SQL Server – REPLICATE function (repeat string N numbers of times)

REPLICATE is one of the SQL Server string functions.
It repeats a string or char N number of times.

REPLICATE ( string_expression ,integer_expression )

Example.

DECLARE @st VARCHAR(10)='@1';
SELECT Replicate(@st,4)

The result.

@1@1@1@1
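A common practical use is left-padding a value with a repeated character. This is a hypothetical example (not from the original article) that pads an account number to 8 digits:

```sql
DECLARE @acct VARCHAR(8) = '1234';
-- prepend enough zeros to reach 8 characters
SELECT REPLICATE('0', 8 - LEN(@acct)) + @acct AS PaddedAccount;
-- Result: 00001234
```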

Problem Steps Recorder in Windows 7

This is a very handy tool if you want to record some steps for documentation, to send a bug report to a developer if you work in a company that makes its own software, or if you are documenting some steps you perform.

Here is your solution:
Just type “psr” in the Run dialog.


Start recording.

Here is a sample recording file.

Click to download