SQL Server 2000 system tables and their equivalent DMV in SQL Server 2005

Standard

Those who have been working with SQL Server administration for a while have undoubtedly at times referred to the old SQL Server system tables to automate processes or document their tables, for example by combining the sysobjects and syscolumns tables. As of SQL Server 2005, Microsoft added a number of Dynamic Management Views (DMVs) that simplify all kinds of management tasks.

Below is a list of SQL Server 2000 system tables and their SQL Server 2005 equivalent management views, along with a brief description of the kind of information found in each view.

Dynamic Management Views existing in the Master database

SQL Server 2000 SQL Server 2005 Description
sysaltfiles sys.master_files Contains a row per file of a database as stored in the master database.
syscacheobjects sys.dm_exec_cached_plans Returns a row for each query plan that is cached by SQL Server for faster query execution.
sys.dm_exec_plan_attributes Returns one row per plan attribute for the plan specified by the plan handle.
sys.dm_exec_sql_text Returns the text of the SQL batch that is identified by the specified sql_handle.
sys.dm_exec_cached_plan_dependent_objects Returns a row for each Transact-SQL execution plan, common language runtime (CLR) execution plan, and cursor associated with a plan.
syscharsets sys.syscharsets Contains one row for each character set and sort order defined for use by the SQL Server Database Engine.
sysconfigures sys.configurations Contains a row per server-wide configuration option value in the system.
syscurconfigs sys.configurations Contains a row per server-wide configuration option value in the system.
sysdatabases sys.databases Contains one row per database in the instance of Microsoft SQL Server.
sysdevices sys.backup_devices Contains a row for each backup device registered by using sp_addumpdevice or created in SQL Server Management Studio.
syslanguages sys.syslanguages Contains one row for each language present in the instance of SQL Server.
syslockinfo sys.dm_tran_locks Returns information about currently active lock manager resources.
syslocks sys.dm_tran_locks Returns information about currently active lock manager resources.
syslogins sys.server_principals Contains a row for every server-level principal.
sys.sql_logins Returns one row for every SQL login.
sysmessages sys.messages Contains a row for each message_id or language_id of the error messages in the system, for both system-defined and user-defined messages.
sysoledbusers sys.linked_logins Returns a row per linked-server-login mapping, for use by RPC and distributed queries from local server to the corresponding linked server.
sysopentapes sys.dm_io_backup_tapes Returns the list of tape devices and the status of mount requests for backups.
sysperfinfo sys.dm_os_performance_counters Returns a row per performance counter maintained by the server.
sysprocesses sys.dm_exec_connections Returns information about the connections established to this instance of SQL Server and the details of each connection.
sys.dm_exec_sessions Returns one row per authenticated session on SQL Server.
sys.dm_exec_requests Returns information about each request that is executing within SQL Server.
sysremotelogins sys.remote_logins Returns a row per remote-login mapping. This catalog view is used to map incoming local logins that claim to be coming from a corresponding server to an actual local login.
sysservers sys.servers Contains a row per linked or remote server registered, and a row for the local server that has server_id = 0.
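To illustrate how the sysprocesses replacements fit together, the following sketch (assuming SQL Server 2005 or later and VIEW SERVER STATE permission) joins sessions, requests, and query text:

```sql
-- Sketch: approximate the old sysprocesses view by joining the new DMVs.
SELECT
    s.session_id,
    s.login_name,
    s.host_name,
    r.status,
    r.command,
    t.text AS current_sql
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
    ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1;
```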

Dynamic Management Views existing in every database.

SQL Server 2000 SQL Server 2005 Description
fn_virtualfilestats sys.dm_io_virtual_file_stats Returns I/O statistics for data and log files.
syscolumns sys.columns Returns a row for each column of an object that has columns, such as views or tables.
syscomments sys.sql_modules Returns a row for each object that is an SQL language-defined module. Objects of type ‘P’, ‘RF’, ‘V’, ‘TR’, ‘FN’, ‘IF’, ‘TF’, and ‘R’ have an associated SQL module.
sysconstraints sys.check_constraints Contains a row for each object that is a CHECK constraint, with sys.objects.type = ‘C’.
sys.default_constraints Contains a row for each object that is a default definition (created as part of a CREATE TABLE or ALTER TABLE statement instead of a CREATE DEFAULT statement), with sys.objects.type = ‘D’.
sys.key_constraints Contains a row for each object that is a primary key or unique constraint. Includes sys.objects.type ‘PK’ and ‘UQ’.
sys.foreign_keys Contains a row per object that is a FOREIGN KEY constraint, with sys.objects.type = ‘F’.
sysdepends sys.sql_expression_dependencies Contains one row for each by-name dependency on a user-defined entity in the current database. A dependency between two entities is created when one entity, called the referenced entity, appears by name in a persisted SQL expression of another entity, called the referencing entity.
sysfilegroups sys.filegroups Contains a row for each data space that is a filegroup.
sysfiles sys.database_files Contains a row per file of a database as stored in the database itself. This is a per-database view.
sysforeignkeys sys.foreign_key_columns Contains a row for each column, or set of columns, that comprise a foreign key.
sysindexes sys.indexes Contains a row per index or heap of a tabular object, such as a table, view, or table-valued function.
sys.partitions Contains a row for each partition of all the tables and most types of indexes in the database. Special index types like Fulltext, Spatial, and XML are not included in this view. All tables and indexes in SQL Server contain at least one partition, whether or not they are explicitly partitioned.
sys.allocation_units Contains a row for each allocation unit in the database.
sys.dm_db_partition_stats Returns page and row-count information for every partition in the current database.
sysindexkeys sys.index_columns Contains one row per column that is part of a sys.indexes index or unordered table (heap).
sysmembers sys.database_role_members Returns one row for each member of each database role.
sysobjects sys.objects Contains a row for each user-defined, schema-scoped object that is created within a database.
syspermissions sys.database_permissions Returns a row for every permission or column-exception permission in the database. For columns, there is a row for every permission that is different from the corresponding object-level permission. If the column permission is the same as the corresponding object permission, there will be no row for it and the actual permission used will be that of the object.
sys.server_permissions Returns one row for each server-level permission.
sysprotects sys.database_permissions Returns a row for every permission or column-exception permission in the database. For columns, there is a row for every permission that is different from the corresponding object-level permission. If the column permission is the same as the corresponding object permission, there will be no row for it and the actual permission used will be that of the object.
sys.server_permissions Returns one row for each server-level permission.
sysreferences sys.foreign_keys Contains a row per object that is a FOREIGN KEY constraint, with sys.objects.type = ‘F’.
systypes sys.types Contains a row for each system and user-defined type.
sysusers sys.database_principals Returns a row for each principal in a database.
sysfulltextcatalogs sys.fulltext_catalogs Contains a row for each full-text catalog.
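As a quick illustration of the mapping, here is the classic sysobjects/syscolumns documentation query from the introduction, sketched side by side with its catalog-view equivalent (the 2000 views still work on later versions, but only for backward compatibility):

```sql
-- SQL Server 2000 style: list every user table and its columns
SELECT o.name AS table_name, c.name AS column_name
FROM sysobjects AS o
JOIN syscolumns AS c ON o.id = c.id
WHERE o.xtype = 'U';

-- SQL Server 2005+ equivalent using the catalog views
SELECT o.name AS table_name, c.name AS column_name
FROM sys.objects AS o
JOIN sys.columns AS c ON o.object_id = c.object_id
WHERE o.type = 'U';
```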

Script to retrieve SQL Server database backup history and no backups

Standard


Problem

There is a multitude of data to be mined from within the Microsoft SQL Server system views. This data is used to present information back to the end user of the SQL Server Management Studio (SSMS) and all third party management tools that are available for SQL Server Professionals. Be it database backup information, file statistics, indexing information, or one of the thousands of other metrics that the instance maintains, this data is readily available for direct querying and assimilation into your “home-grown” monitoring solutions as well. This tip focuses on that first metric: database backup information. Where it resides, how it is structured, and what data is available to be mined.

Solution

The msdb system database is the primary repository for storage of SQL Agent, backup, Service Broker, Database Mail, Log Shipping, restore, and maintenance plan metadata. We will be focusing on the handful of system views associated with database backups for this tip:

  • dbo.backupset: provides information concerning the most-granular details of the backup process
  • dbo.backupmediafamily: provides metadata for the physical backup files as they relate to backup sets
  • dbo.backupfile: provides the most-granular information for the physical backup files

Based upon these tables, we can create a variety of queries to collect a detailed insight into the status of backups for the databases in any given SQL Server instance.


Database Backups for all databases For Previous Week

--------------------------------------------------------------------------------- 
--Database Backups for all databases For Previous Week 
--------------------------------------------------------------------------------- 
SELECT 
CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
msdb.dbo.backupset.database_name, 
msdb.dbo.backupset.backup_start_date, 
msdb.dbo.backupset.backup_finish_date, 
msdb.dbo.backupset.expiration_date, 
CASE msdb.dbo.backupset.type 
WHEN 'D' THEN 'Database' 
WHEN 'L' THEN 'Log' 
END AS backup_type, 
msdb.dbo.backupset.backup_size, 
msdb.dbo.backupmediafamily.logical_device_name, 
msdb.dbo.backupmediafamily.physical_device_name, 
msdb.dbo.backupset.name AS backupset_name, 
msdb.dbo.backupset.description 
FROM msdb.dbo.backupmediafamily 
INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id 
WHERE (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 7) 
ORDER BY 
msdb.dbo.backupset.database_name, 
msdb.dbo.backupset.backup_finish_date

Note: for readability the output was split into two screenshots.


Most Recent Database Backup for Each Database

------------------------------------------------------------------------------------------- 
--Most Recent Database Backup for Each Database 
------------------------------------------------------------------------------------------- 
SELECT  
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
   msdb.dbo.backupset.database_name,  
   MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date 
FROM   msdb.dbo.backupmediafamily  
   INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id  
WHERE  msdb.dbo.backupset.type = 'D' 
GROUP BY 
   msdb.dbo.backupset.database_name  
ORDER BY  
   msdb.dbo.backupset.database_name 

 


Most Recent Database Backup for Each Database – Detailed

You can join the two result sets together by using the following query in order to return more detailed information about the last database backup for each database. The LEFT JOIN allows you to match up grouped data with the detailed data from the previous query without having to include the fields you do not wish to group on in the query itself.

------------------------------------------------------------------------------------------- 
--Most Recent Database Backup for Each Database - Detailed 
------------------------------------------------------------------------------------------- 
SELECT  
   A.[Server],  
   A.last_db_backup_date,  
   B.backup_start_date,  
   B.expiration_date, 
   B.backup_size,  
   B.logical_device_name,  
   B.physical_device_name,   
   B.backupset_name, 
   B.description 
FROM 
   ( 
   SELECT   
       CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
       msdb.dbo.backupset.database_name,  
       MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date 
   FROM    msdb.dbo.backupmediafamily  
       INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id  
   WHERE   msdb.dbo.backupset.type = 'D' 
   GROUP BY 
       msdb.dbo.backupset.database_name  
   ) AS A 
    
   LEFT JOIN  

   ( 
   SELECT   
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
   msdb.dbo.backupset.database_name,  
   msdb.dbo.backupset.backup_start_date,  
   msdb.dbo.backupset.backup_finish_date, 
   msdb.dbo.backupset.expiration_date, 
   msdb.dbo.backupset.backup_size,  
   msdb.dbo.backupmediafamily.logical_device_name,  
   msdb.dbo.backupmediafamily.physical_device_name,   
   msdb.dbo.backupset.name AS backupset_name, 
   msdb.dbo.backupset.description 
FROM   msdb.dbo.backupmediafamily  
   INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id  
WHERE  msdb.dbo.backupset.type = 'D' 
   ) AS B 
   ON A.[server] = B.[server] AND A.[database_name] = B.[database_name] AND A.[last_db_backup_date] = B.[backup_finish_date] 
ORDER BY  
   A.database_name

Note: for readability the output was split into two screenshots.


Databases Missing a Data (aka Full) Back-Up Within Past 24 Hours

At this point we’ve seen how to look at the history for databases that have been backed up. While this information is important, there is an aspect to backup metadata that is slightly more important – which of the databases you administer have not been getting backed up. The following query provides you with that information (with some caveats).

------------------------------------------------------------------------------------------- 
--Databases Missing a Data (aka Full) Back-Up Within Past 24 Hours 
------------------------------------------------------------------------------------------- 
--Databases with data backup over 24 hours old 
SELECT 
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
   msdb.dbo.backupset.database_name, 
   MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date, 
   DATEDIFF(hh, MAX(msdb.dbo.backupset.backup_finish_date), GETDATE()) AS [Backup Age (Hours)] 
FROM    msdb.dbo.backupset 
WHERE     msdb.dbo.backupset.type = 'D'  
GROUP BY msdb.dbo.backupset.database_name 
HAVING      (MAX(msdb.dbo.backupset.backup_finish_date) < DATEADD(hh, - 24, GETDATE()))  

UNION  

--Databases without any backup history 
SELECT      
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,  
   master.dbo.sysdatabases.NAME AS database_name,  
   NULL AS [Last Data Backup Date],  
   9999 AS [Backup Age (Hours)]  
FROM 
   master.dbo.sysdatabases LEFT JOIN msdb.dbo.backupset 
       ON master.dbo.sysdatabases.name  = msdb.dbo.backupset.database_name 
WHERE msdb.dbo.backupset.database_name IS NULL AND master.dbo.sysdatabases.name <> 'tempdb' 
ORDER BY  
   msdb.dbo.backupset.database_name

 

Now let me explain those caveats, and this query. The first part of the query returns all records where the last database (full) backup is older than 24 hours from the current system date. This data is then combined via the UNION statement to the second portion of the query. That second statement returns information on all databases that have no backup history. I’ve taken the liberty of singling tempdb out from the result set since you do not back up that system database. It is recreated each time the SQL Server services are restarted. That is caveat #1. Caveat #2 is the arbitrary value I’ve assigned to the aging value for databases without any backup history. I’ve set that value at 9999 hours because in my environment I want to place a higher emphasis on those databases that have never been backed up.
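Since master.dbo.sysdatabases is one of the deprecated SQL Server 2000 views covered earlier, the "no backup history" half of the query can also be sketched against the newer sys.databases catalog view; this is an illustrative rewrite of the same check, not a change to the original script:

```sql
-- Same "never backed up" check, sketched with sys.databases
SELECT
    CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
    d.name AS database_name,
    NULL AS [Last Data Backup Date],
    9999 AS [Backup Age (Hours)]
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON d.name = b.database_name
WHERE b.database_name IS NULL
  AND d.name <> 'tempdb';
```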

SQL Server Management Studio Tips

Standard

Original source: http://www.sqlservercentral.com/articles/SSMS/160267/?utm_source=SSC&utm_medium=pubemail

By Konstantin Taranov, 2017/08/14

Most tips work with SSMS 2008 and higher, but some of them only with SSMS 2016 and above.

Great thanks to:

  • Kendra Little
  • Slava Murygin
  • Mike Milligan
  • Kenneth Fisher
  • William Durkin
  • John Morehouse
  • Phil Factor
  • Klaus Aschenbrenner
  • Latish Sehgal
  • Arvind Shyamsundar
  • SQLMatters
  • MSSQLTips
  • Anthony Zanevsky, Andrew Zanevsky and Katrin Zanevsky
  • Andy Mallon
  • Aaron Bertrand

Import and Export Settings

Tools > Options > Environment > Import and Export Settings

You can configure many settings in SSMS, then export them and use them on all your computers. The link below provides detailed instructions and an awesome Dark theme configuration: Making SSMS Pretty: My Dark Theme

You can also create a shared team settings file and use it from a network location. Detailed information can be found in this article: Symbolic Links for Sharing Template Files or “How I Broke Management Studio with Symbolic Links”

Import and Export Settings

SSMS Shortcuts

You can find all 957 shortcuts here

Most useful are:

Shortcut Description
Ctrl + U Change Selected Database
Ctrl + R Toggle Results Pane
Ctrl + Space Activate Autocomplete
Ctrl + Shift + V Cycle through clipboard ring
Ctrl + ] Navigate to matching parenthesis
Ctrl + – Navigate to last cursor location
Ctrl + Shift + – Navigate forward to cursor location
Ctrl + K, Ctrl + C Comments selected text
Ctrl + K, Ctrl + U Uncomments selected text
Ctrl + K, Ctrl + K Toggle Bookmark
Ctrl + K, Ctrl + N Go to Next Bookmark
Ctrl + L Display Estimated Query Execution plan
Shift + Alt + Enter View Code Editor in Full Screen
Ctrl + I Quick Search
Ctrl + F4 Close the current MDI child window
Ctrl + F5 Parse query to check for errors
Shift + F10 Simulate right mouse button
Ctrl + Alt + T Display Template Explorer
Ctrl + Shift + M Specify values for template parameters

Keyboard Shortcuts for Favorite Stored Procedures

Tools > Options > Environment > Keyboard > Query Shortcuts

Keyboard Shortcuts for Favorite Stored Procedures

Three shortcuts cannot be changed: Alt + F1, Ctrl + 1 and Ctrl + 2. For the other nine shortcuts, my recommendation is the awesome open source Brent Ozar team procedures and, with some limitations, Adam Machanic’s sp_WhoIsActive:

Query Shortcut Stored Procedure
Alt + F1 sp_help
Ctrl + F1 sp_WhoIsActive
Ctrl + 1 sp_who
Ctrl + 2 sp_lock
Ctrl + 3 sp_Blitz
Ctrl + 4 sp_BlitzCache
Ctrl + 5 sp_BlitzWho
Ctrl + 6 sp_BlitzQueryStore
Ctrl + 7 sp_BlitzFirst
Ctrl + 8 usp_BulkUpload
Ctrl + 9 sp_BlitzTrace
Ctrl + 0 sp_foreachdb

Also recommended:

Script any object with data

Right click on database name > Tasks > Generate Scripts …

Script any object with data

Selecting a block of text using the ALT Key

By holding down the ALT key as you select a block of text, you can control the width of the selection region as well as the number of rows. You can also activate multi-line mode with Shift + Alt and use the keyboard arrows to format multi-line code.

Script Table and Column Names by Dragging from Object Explorer

Save keystrokes by dragging the Columns folder for a table into the query window to auto-type all of the table’s column names on a single line.

  • Warning: this doesn’t include [brackets] around the column names, so if your column names contain spaces or begin with special characters, this shortcut isn’t for you.
  • Dragging the table name over will auto-type the schema and table name, with brackets.

Disable Copy of Empty Text

  • Select a block of text to copy;
  • Move the cursor to the place where you want to paste the code;
  • Accidentally press Ctrl+C again instead of Ctrl+V;
  • Block of copied text is replaced by an empty block;

This behavior can be disabled in SSMS: go to Tools > Options > Text Editor > All Languages > General > 'Apply Cut or Copy Commands to blank lines when there is no selection' and uncheck the checkbox.

Disable Copy of Empty Text

Client Statistics

When you enable the Client Statistics option for your session, SQL Server Management Studio will give you more information about the client-side processing of your query.

The Network Statistics shows you the following information:

  • Number of Server Roundtrips
  • TDS Packets sent from Client
  • TDS Packets received from Server
  • Bytes sent from Client
  • Bytes received from Server

The Time Statistics additionally shows you the following information:

  • Client Processing Time
  • Total Execution Time
  • Wait Time on Server Replies

Configure Object Explorer to Script Compression and Partition Schemes for Indexes

Is this index compressed or partitioned?

By default, you wouldn’t know just by scripting out the index from Object Explorer. If you script out indexes this way to check them into source code, or to tweak the definition slightly, this can lead you to make mistakes.

You can make sure you’re aware when indexes have compression or are partitioned by changing your scripting settings:

  • Click Tools -> Options -> SQL Server Object Explorer -> Scripting
  • Scroll down in the right pane of options and set both of these to True
    • Script Data Compression Options
    • Script Partition Schemes
  • Click OK

Using GO X to Execute a Batch or Statement Multiple Times

The GO command marks the end of a batch of statements that should be sent to SQL Server for processing and compiled into a single execution plan. By specifying a number after GO, the batch can be run that number of times. This can be useful if, for instance, you want to create test data by running an insert statement a number of times. Note that GO is not a Transact-SQL statement and will only work in Management Studio (and also SQLCMD or OSQL). For instance, the following SQL can be run in SSMS:

CREATE TABLE TestData(ID INT IDENTITY (1,1), CreatedDate DATETIME)
GO

INSERT INTO TestData(CreatedDate) SELECT GetDate()
GO 10

This will run the insert statement 10 times and therefore insert 10 rows into the TestData table. In this case this is a simpler alternative than creating a cursor or while loop.

SSMS Template Replacement

One under-used feature of Management Studio is the template replacement feature. SSMS comes with a library of templates, but you can also make your own templates for reusable scripts.

In your saved .sql script, just use the magic incantation to denote the parameters for replacement. The format is simple: <label, datatype, default value>

Then, when you open the .sql script, you hit CTRL + Shift + M, and SSMS will give you a pop-up to enter your replacement values.
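For example, a minimal template might look like the sketch below; the label names and default values are purely illustrative:

```sql
-- Save as a .sql file; press Ctrl + Shift + M to fill in the placeholders.
SELECT TOP (<row_count, int, 100>) *
FROM <schema_name, sysname, dbo>.<table_name, sysname, MyTable>
WHERE <filter_column, sysname, ID> = <filter_value, int, 0>;
```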

Color coding of connections

SQL Server Management Studio can color the bar at the bottom of each query window, with the color dependent on which server is connected. This can be useful as a visual check of the server that a query is about to be run against, for instance to color code production instances as red, development as green, and test as amber. This can also be used in conjunction with Registered Servers and CMS (Central Management Server). To add a color bar when connecting to a server, click the Options button in the Connect to Database Engine window and then select the Connection Properties tab. Select the check box towards the bottom of the window and use the ‘Select…’ button to choose a color.

SQLCMD mode

Switching on SQLCMD mode enables a number of useful extra scripting-style commands in SSMS. In particular, you can use it to change the connection credentials within the query window, so that you can run a query against multiple servers from the same query window. There are more details of how to do this here: Changing the SQL Server connection within an SSMS Query Window using SQLCMD Mode
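For instance, with SQLCMD mode enabled (Query > SQLCMD Mode), the :CONNECT command switches servers mid-script; SERVER1 and SERVER2 below are placeholder instance names:

```sql
:CONNECT SERVER1
SELECT @@SERVERNAME AS current_server;
GO

:CONNECT SERVER2
SELECT @@SERVERNAME AS current_server;
GO
```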

Script multiple objects using the Object Explorer Details Windows

Individual database objects, such as a table or stored procedure, can be scripted within SSMS by right clicking the object in Object Explorer and selecting the appropriate item from the drop-down menu. However, if you have a lot of objects to script, that can quickly become time consuming. Fortunately, it is possible to select multiple objects and script them all together in a single query window. To do this, open the Object Explorer Details window from the View menu (or press F7). If you want to script multiple (or all) tables, select the Tables item under the relevant database in Object Explorer. A list of all tables appears in the Object Explorer Details window. Select the tables you want to script (using the Control key if necessary), then right click and select the script option you want – e.g. to create a CREATE TABLE script for all tables.

Registered Servers / Central Management Server

If you have a lot of servers, re-entering the details in Object Explorer every time you start SSMS can be frustrating and time consuming. Fortunately, there are two facilities within SSMS that enable these details to be entered just once and “remembered” each time you open SSMS: Registered Servers and Central Management Servers. These were introduced in different versions of SQL Server and work in different ways; each has its own advantages and disadvantages, so you may want to use both.

To add a registered server, open the Registered Servers window from the View menu (or press CTRL + ALT + G); the window should appear in the top left corner of SSMS. Right click the Local Server Groups folder and select ‘New Server Registration…’. Enter the server details and close the window. The new server should then appear under Local Server Groups, and you can right click it to open the server in Object Explorer or open a new query window. The server details are stored locally in an XML file and so will appear the next time you open SSMS. If you have a lot of servers, you can also create Server Groups to group similar servers together. One advantage of creating groups (other than being able to logically group similar servers together) is that you can run a query against all servers in the group by right clicking the group and selecting ‘New Query’.

Central Management Servers are similar to Registered Servers but with some differences, the main one being that the server details are stored in a database (the Central Management Server) rather than in a local file. A significant limitation of CMS is that the CMS server itself can’t be included in the list of servers.

Splitting the Query Window

The query window in SSMS can be split into two so that you can look at two parts of the same query simultaneously. Both parts of the split window can be scrolled independently. This is especially useful if you have a large query and want to compare different areas of the same query. To split the window simply drag the bar to the top right hand side of the window as shown below.

The splitter bar allows you to view one session with two panes. You can scroll in each pane independently. You can also edit in both the top and bottom pane.

Splitting the Query Window

Moving columns in the results pane

It may not be immediately obvious, but you can switch columns around in the results pane when using the grid view by dragging the column headers and dropping them next to another column header. This can be useful if you want to rearrange how the results are displayed without amending the query, especially if you have a lot of columns in your result set. Note that this works for only one column at a time.

Generating Charts and Drawings in SQL Server Management Studio

You don’t have to settle for T-SQL’s monochrome text output. These stored procedures let you quickly and easily turn your SELECT queries’ output into colorized charts and even computer-generated art. To turn your own data into a line, column, area, or bar chart using the Chart stored procedure, you need to design a SELECT query that serves as the first parameter in the stored procedure call.

Detailed article and code here: Generating Charts and Drawings in SQL Server Management Studio

You can also generate amazing charts using awesome R libraries; see this detailed article: View R Plots from within SQL Server Management Studio

Additional Connection Parameters

One such change SSMS got for free is the connection resiliency logic within the SqlConnection.Open() method. To improve the default experience for clients that connect to Azure SQL Database, the method will now (in the case of initial connection errors or timeouts) retry once after sleeping for 10 seconds. These numbers are configurable via the properties ConnectRetryCount (default value 1) and ConnectRetryInterval (default value 10 seconds). Previous versions of the SqlConnection class would not automatically retry in cases of connection failure.

There is a simple workaround for this situation: add the following parameter string into the Additional Connection Parameters tab within the SSMS connection window. The good news is that you only need to do this once, as the property is saved for future sessions for that SQL Server (until, of course, you remove it later).

ConnectRetryCount=0

Additional Connection Parameters

Working with tabs headers

You can view the SPID in a tab’s header, quickly open the script’s containing folder, or copy the script file path.

Working with tabs headers

Hiding tables in SSMS Object Explorer

  1. You can hide an object from Object Explorer by assigning a specific extended property:

EXEC sp_addextendedproperty
@name = N'microsoft_database_tools_support',
@value = 'Hide',
@level0type = N'Schema', @level0name = 'Person',
@level1type = N'Table', @level1name = 'Address';
GO

You can then remove the property (and the object will show back up) like so:

EXEC sp_dropextendedproperty
@name = N'microsoft_database_tools_support',
@level0type = N'Schema', @level0name = 'Person',
@level1type = N'Table', @level1name = 'Address';
GO
  2. DENY VIEW DEFINITION:

DENY VIEW DEFINITION ON Schema.Table TO UserName;

Now UserName won’t be able to see Table in Object Explorer. In fact, they won’t be able to see the table in sys.tables or INFORMATION_SCHEMA.TABLES either.

VIEW DEFINITION is the permission to see the definition of an object: for stored procedures the code, likewise for views, and for tables the column definitions, etc.

UnDock Tabs and Windows for Multi Monitor Support

From SSMS 2012 and onwards, you can easily dock/undock the query tabs as well as different object windows inside SSMS to make better use of the screen real estate and multiple monitors you have.

UnDock Tabs and Windows for Multi Monitor Support

RegEx-Based Finding and Replacing of Text in SSMS

So often, one sees developers doing repetitive coding in SSMS or Visual Studio that would be much quicker and easier using the built-in Regular-Expression-based Find/Replace functionality. It is understandable, since the syntax is odd and some features are missing, but it is still well worth knowing about.

You can find more details and examples here: RegEx-Based Finding and Replacing of Text in SSMS.

My favorite regex replaces \t with \n. It is useful in many cases where you have column names copied from, for example, Excel and need to quickly build a SQL query.

Changing what SSMS opens on startup

You can customize SSMS startup behavior in Tools -> Options -> Environment -> Startup and hide system objects in Object Explorer:

Changing what SSMS opens on startup

You can also disable the splash screen – this cuts the time it takes SSMS to load for versions before SSMS 17. Right click your shortcut to SSMS and select Properties. Enter the text -nosplash right after the ending quote in the path:

SSMS link nosplash option

It is useful to create a solution of commonly used SQL scripts to always load at start-up.

  1. Display the Solution Explorer by pressing Ctrl+Alt+L or clicking View -> Solution Explorer.
  2. Then right click the Solution "Solution1" (0 projects) text and select Add -> New Project.
  3. Use the default SQL Server Scripts template and give your solution a clever name.
  4. Rename all of your SQL Code Snippets so the extension is .SQL. Drag them into the queries folder within the Solution Explorer.
  5. Open Windows explorer and browse to the location of your solution. Copy file location address to your clipboard. Go back to your SSMS shortcut properties and add within double quotes the location and file name of your solution before the “-nosplash”.

This is the complete text within my shortcut properties:

<code>"C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Binn\ManagementStudio\Ssms.exe" "C:\Users\taranov\Documents\SQL Server Management Studio\Projects\MySQLServerScripts.ssmssln" -nosplash
</code>

Create a solution of commonly used SQL scripts

Modifying New Query Template

You can modify the New Query template for any SQL Server instance:

<code>C:\Program Files (x86)\Microsoft SQL Server\[140|130|120|110]\Tools\Binn\ManagementStudio\SqlWorkbenchProjectItems\Sql\SQLFile.sql
</code>

For example, you can add a BEGIN TRANSACTION statement to guard against accidental ("oops") queries:

<span class="pl-k">BEGIN</span> TRANSACTION;
 
 
<span class="pl-c">-- COMMIT   TRANSACTION;</span>
<span class="pl-c">-- ROLLBACK TRANSACTION;</span>

Thanks for this tip Aaron Bertrand: T-SQL Tuesday #92: Lessons Learned the Hard Way

Query Execution Options

More detailed article here: Knowing the Options

The options represent the SET values of the current session. SET options can affect how a query is executed, which may result in a different execution plan. You can find these options in two places within SSMS, under Tools -> Options -> Query Execution -> SQL Server -> Advanced:

Query Execution Options Advanced

As well as Tools -> Options -> Query Execution -> SQL Server -> ANSI:

Query Execution Options ANSI

Using the interface to check which options are set can get tiresome. Instead, you can use the system function @@OPTIONS. Each of the 15 options shown above has a bit value indicating whether or not it is enabled.

@@OPTIONS takes the binary representation and performs a bitwise operation on it to produce an integer value: the sum of the bit values of the enabled options.

The default value of SELECT @@OPTIONS is 5496. Let’s assume for a moment that the only two options enabled on my machine are ANSI_WARNINGS and ANSI_PADDING. The values for these two options are 8 and 16, respectively. The sum of the two is 24.

<span class="pl-c">/***************************************************************</span>
<span class="pl-c">  Author: John Morehouse</span>
<span class="pl-c">  Summary: This script display what SET options are enabled for the current session.</span>
 
<span class="pl-c">  You may alter this code for your own purposes. You may republish altered code as long as you give due credit.</span>
 
<span class="pl-c">  THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.</span>
<span class="pl-c">***************************************************************/</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>Disable_Def_Cnst_Chk<span class="pl-pds">'</span></span>    <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">1</span>     WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>IMPLICIT_TRANSACTIONS<span class="pl-pds">'</span></span>   <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">2</span>     WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>CURSOR_CLOSE_ON_COMMIT<span class="pl-pds">'</span></span>  <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">4</span>     WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>ANSI_WARNINGS<span class="pl-pds">'</span></span>           <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">8</span>     WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>ANSI_PADDING<span class="pl-pds">'</span></span>            <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">16</span>    WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>ANSI_NULLS<span class="pl-pds">'</span></span>              <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">32</span>    WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>ARITHABORT<span class="pl-pds">'</span></span>              <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">64</span>    WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>ARITHIGNORE<span class="pl-pds">'</span></span>             <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">128</span>   WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>QUOTED_IDENTIFIER<span class="pl-pds">'</span></span>       <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">256</span>   WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>NOCOUNT<span class="pl-pds">'</span></span>                 <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">512</span>   WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>ANSI_NULL_DFLT_ON<span class="pl-pds">'</span></span>       <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">1024</span>  WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>ANSI_NULL_DFLT_OFF<span class="pl-pds">'</span></span>      <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">2048</span>  WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>CONCAT_NULL_YIELDS_NULL<span class="pl-pds">'</span></span> <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">4096</span>  WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>NUMERIC_ROUNDABORT<span class="pl-pds">'</span></span>      <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">8192</span>  WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span> <span class="pl-k">UNION ALL</span>
<span class="pl-k">SELECT</span> <span class="pl-s"><span class="pl-pds">'</span>XACT_ABORT<span class="pl-pds">'</span></span>              <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Option<span class="pl-pds">'</span></span>, CASE @@options &amp; <span class="pl-c1">16384</span> WHEN <span class="pl-c1">0</span> THEN <span class="pl-c1">0</span> ELSE <span class="pl-c1">1</span> END <span class="pl-k">AS</span> <span class="pl-s"><span class="pl-pds">'</span>Enabled/Disabled<span class="pl-pds">'</span></span>;

SQL Server Diagnostics Extension

Analyze Dumps – Customers using this extension will be able to debug and self-resolve memory dump issues from their SQL Server instances and receive recommended Knowledge Base (KB) article(s) from Microsoft, which may be applicable for the fix. The memory dumps are stored in a secured and compliant manner as governed by the Microsoft Privacy Policy.

For example, Joe, a DBA from Contoso, Ltd., finds that SQL Server has generated a memory dump while running a workload, and he would like to debug the issue. Using this feature, Joe can upload the dump and receive recommended KB articles from Microsoft, which can help him fix the issue.

SQL Server Diagnostics Extension

Detailed Guide to Common SQL Server DBCC Commands

Standard

Common DBCC Commands Explained

The Transact-SQL programming language provides DBCC statements, which serve as database console commands for SQL Server.

DBCC commands take input parameters and return values. All DBCC command parameters can accept both Unicode and DBCS literals.
Maintenance Commands
1. DBCC INPUTBUFFER
Purpose: Displays the last statement sent from a client to an instance of Microsoft SQL Server.
Syntax: DBCC INPUTBUFFER ( session_id [ , request_id ] ) [ WITH NO_INFOMSGS ]
Permissions: The user must be a member of the sysadmin fixed server role,
or have VIEW SERVER STATE permission.
Related commands: SELECT @@spid
SELECT request_id FROM sys.dm_exec_requests WHERE session_id = @@spid

2. DBCC OUTPUTBUFFER
Purpose: Returns the current output buffer for the specified session_id in hexadecimal and ASCII format.
Syntax: DBCC OUTPUTBUFFER ( session_id [ , request_id ] ) [ WITH NO_INFOMSGS ]
Permissions: The user must be a member of the sysadmin fixed server role.
Related commands: SELECT @@spid
SELECT request_id FROM sys.dm_exec_requests WHERE session_id = @@spid

3. DBCC SHOWCONTIG
Purpose: Displays fragmentation information for the data and indexes of the specified table or view.
Syntax: DBCC SHOWCONTIG [ (object_name) ]
[ WITH { [ , [ ALL_INDEXES ] ] [ , [ TABLERESULTS ] ] [ , [ FAST ] ] [ , [ ALL_LEVELS ] ] [ NO_INFOMSGS ] } ]
Permissions: The user must own the object, or be a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role.
Example: DBCC SHOWCONTIG ('TableName')
Note: DBCC SHOWCONTIG and DBCC INDEXDEFRAG can be used together to defragment the indexes in a database.

4. DBCC OPENTRAN
Purpose: Displays information about the oldest active transaction and the oldest distributed and nondistributed replicated transactions, if any, within the specified database.
Syntax: DBCC OPENTRAN [ ( [ database_name | database_id | 0 ] ) ]
{ [ WITH TABLERESULTS ] [ , [ NO_INFOMSGS ] ] }]
Example: DBCC OPENTRAN (DataBaseName) WITH TABLERESULTS

5. DBCC SQLPERF
Purpose: Provides transaction log space usage statistics for all databases. It can also be used to reset wait and latch statistics.
Syntax: DBCC SQLPERF ([ LOGSPACE ]|
[ "sys.dm_os_latch_stats" , CLEAR ] |
[ "sys.dm_os_wait_stats" , CLEAR ])
[WITH NO_INFOMSGS ]
Example: DBCC SQLPERF (LOGSPACE)

6. DBCC TRACESTATUS
Purpose: Displays the status of trace flags.
Syntax: DBCC TRACESTATUS ( [ [ trace# [ ,...n ] ] [ , ] [ -1 ] ] ) [ WITH NO_INFOMSGS ]

7. DBCC PROCCACHE
Purpose: Displays information about the procedure cache in table format.
Syntax: DBCC PROCCACHE [ WITH NO_INFOMSGS ]
Permissions: The user must be a member of the sysadmin fixed server role or the db_owner fixed database role.

8. DBCC USEROPTIONS
Purpose: Returns the SET options that are active (set) for the current connection.
Syntax: DBCC USEROPTIONS [ WITH NO_INFOMSGS ]
Permissions: Requires membership in the public role.
Example: DBCC USEROPTIONS

9. DBCC SHOW_STATISTICS
Purpose: Displays the current distribution statistics for the specified target on the specified table.

10. DBCC SHOWFILESTATS
Purpose: Displays file usage information; the figures require conversion.
The values are reported in extents, and one extent is 64 KB, so totalextents * 64 / 1024 / 1024 converts the figure to GB.
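
The extent-to-GB arithmetic above can be expressed as a tiny helper (a sketch; the 163840-extent figure is just an illustrative input):

```python
# DBCC SHOWFILESTATS reports usage in extents; one extent is 64 KB.
def extents_to_gb(total_extents):
    """Convert a count of 64 KB extents to gigabytes."""
    return total_extents * 64 / 1024 / 1024

# 163840 extents * 64 KB = 10 GB
print(extents_to_gb(163840))
```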
Validation Statements

11. DBCC CHECKALLOC
Purpose: Checks the consistency of disk space allocation structures for the specified database.
Example: DBCC CHECKALLOC ('DataBaseName')
Permissions: Requires membership in the sysadmin fixed server role or the db_owner fixed database role.

12. DBCC CHECKFILEGROUP
Purpose: Checks the allocation and structural integrity of all tables and indexed views in the specified filegroup of the current database.
Example: DBCC CHECKFILEGROUP ('DataBaseName')

13. DBCC CHECKCATALOG
Purpose: Checks catalog consistency within the specified database.
Example: DBCC CHECKCATALOG ('datapeng')

14. DBCC CHECKIDENT
Purpose: Checks the current identity value for the specified table and, if necessary, changes the identity value.

Example: DBCC CHECKIDENT ('datapeng01')

15. DBCC CHECKCONSTRAINTS
Purpose: Checks the integrity of a specified constraint, or of all constraints on a specified table, in the current database.

16. DBCC CHECKTABLE
Purpose: Checks the integrity of all the pages and structures that make up a table or indexed view.

17. DBCC CHECKDB
Purpose: Checks the logical and physical integrity of all the objects in the specified database by performing the following operations:

Runs DBCC CHECKALLOC on the database.
Runs DBCC CHECKTABLE on every table and view in the database.
Runs DBCC CHECKCATALOG on the database.
Validates the contents of every indexed view in the database.
Validates the Service Broker data in the database.

Maintenance Statements

18. DBCC CLEANTABLE
Purpose: Reclaims space from dropped variable-length columns in tables or indexed views.

Example: DBCC CLEANTABLE ('datapeng','datapeng01')

19. DBCC INDEXDEFRAG
Purpose: Defragments the indexes of the specified table or view.

Example: DBCC INDEXDEFRAG ('datapeng','datapeng01')

Pages Scanned Pages Moved Pages Removed
------------- ----------- -------------
359           346         8

(1 row(s) affected)

20. DBCC DBREINDEX
Purpose: Rebuilds one or more indexes for a table in the specified database.

Example: DBCC DBREINDEX ('datapeng','datapeng01')

21. DBCC SHRINKDATABASE
Purpose: Shrinks the size of the data and log files in the specified database.

Example: DBCC SHRINKDATABASE ('datapeng')

22. DBCC SHRINKFILE
Purpose: Shrinks the size of the specified data or log file for the current database.
Example: DBCC SHRINKFILE ('datapeng')

23. DBCC FREEPROCCACHE
Purpose: Removes all elements from the procedure cache.

24. DBCC UPDATEUSAGE
Purpose: Reports and corrects page and row count inaccuracies in the catalog views. These inaccuracies may cause the sp_spaceused system stored procedure to return incorrect space usage reports.

Miscellaneous Statements

25. DBCC dllname (FREE)
Purpose: Unloads the specified extended stored procedure DLL from memory.

26. DBCC HELP
Purpose: Returns syntax information for the specified DBCC command.

Example: DBCC HELP ('checkdb')

27. DBCC FREESESSIONCACHE
Purpose: Flushes the distributed query connection cache used by distributed queries run against an instance of Microsoft SQL Server.

28. DBCC TRACEON
Purpose: Enables the specified trace flags.

Syntax: DBCC TRACEON ( trace# [ ,...n ][ , -1 ] ) [ WITH NO_INFOMSGS ]

29. DBCC TRACEOFF
Purpose: Disables the specified trace flags.

30. DBCC FREESYSTEMCACHE
Purpose: Releases all unused cache entries from all caches. The SQL Server 2005 Database Engine proactively cleans up unused cache entries in the background to make memory available for current entries; however, this command can be used to manually remove unused entries from all caches.

Example: DBCC FREESYSTEMCACHE ('all')

 

http://blog.itpub.net/29371470/viewspace-1082379

pandasql: Make python speak SQL

Standard

http://blog.yhat.com/posts/pandasql-intro.html

Introduction

One of my favorite things about Python is that users get the benefit of observing the R community and then emulating the best parts of it. I’m a big believer that a language is only as helpful as its libraries and tools.

This post is about pandasql, a Python package we (Yhat) wrote that emulates the R package sqldf. It’s a small but mighty library comprised of just 358 lines of code. The idea of pandasql is to make Python speak SQL. For those of you who come from a SQL-first background or still “think in SQL”, pandasql is a nice way to take advantage of the strengths of both languages.

In this introduction, we’ll show you how to get up and running with pandasql inside of Rodeo, the integrated development environment (IDE) we built for data exploration and analysis. Rodeo is an open source and completely free tool. If you’re an R user, it’s a comparable tool with a similar feel to RStudio. As of today, Rodeo can only run Python code, but last week we added syntax highlighting for a bunch of other languages to the editor (markdown, JSON, julia, SQL). As you may have read or guessed, we’ve got big plans for Rodeo, including adding SQL support so that you can run your SQL queries right inside of Rodeo, even without our handy little pandasql. More on that in the next week or two!

Downloading Rodeo

Start by downloading Rodeo for Mac, Windows or Linux from the Rodeo page on the Yhat website.

ps If you download Rodeo and encounter a problem or simply have a question, we monitor our discourse forum 24/7 (okay, almost).

A bit of background, if you’re curious

Behind the scenes, pandasql uses the pandas.io.sql module to transfer data between DataFrame and SQLite databases. Operations are performed in SQL, the results returned, and the database is then torn down. The library makes heavy use of pandas write_frame and frame_query, two functions which let you read and write to/from pandas and (most) any SQL database.
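
That lifecycle can be sketched with nothing but the standard library’s sqlite3 module (a simplified illustration of the idea, not pandasql’s actual code; the rows are made up):

```python
import sqlite3

# Hypothetical rows standing in for a DataFrame's contents.
rows = [("1975-01-01", 2106), ("1975-02-01", 1845)]

# 1. Stand up a throwaway in-memory SQLite database and copy the data in.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE meat (date TEXT, beef INTEGER)")
con.executemany("INSERT INTO meat VALUES (?, ?)", rows)

# 2. Run the SQL and pull the results back out.
result = con.execute("SELECT date, beef FROM meat ORDER BY beef DESC").fetchall()

# 3. Tear the database down.
con.close()

print(result)
```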

Install pandasql

Install pandasql using the package manager pane in Rodeo. Simply search for pandasql and click Install Package.

You can also run ! pip install pandasql from the text editor if you prefer to install that way.

Check out the datasets

pandasql has two built-in datasets which we’ll use for the examples below.

  • meat: Dataset from the U.S. Dept. of Agriculture containing metrics on livestock, dairy, and poultry outlook and production
  • births: Dataset from the United Nations Statistics Division containing demographic statistics on live births by month

Run the following code to check out the data sets.

<code>#Checking out meat and birth data
from pandasql import sqldf
from pandasql import load_meat, load_births

meat = load_meat()
births = load_births()

#You can inspect the dataframes directly if you're using Rodeo
#These print statements are here just in case you want to check out your data in the editor, too
print meat.head()
print births.head()
</code>

Inside Rodeo, you really don’t even need the print variable.head() statements, since you can just examine the dataframes directly.

An odd graph

<code># Let's make a graph to visualize the data
# Bet you haven't had a title quite like this before
import matplotlib.pyplot as plt
from pandasql import *
import pandas as pd

pysqldf = lambda q: sqldf(q, globals())

q  = """
SELECT
  m.date
  , m.beef
  , b.births
FROM
  meat m
LEFT JOIN
  births b
    ON m.date = b.date
WHERE
    m.date &gt; '1974-12-31';
"""

meat = load_meat()
births = load_births()

df = pysqldf(q)
df.births = df.births.fillna(method='backfill')

fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.plot(pd.rolling_mean(df['beef'], 12), color='b')
ax1.set_xlabel('months since 1975')
ax1.set_ylabel('cattle slaughtered', color='b')

ax2 = ax1.twinx()
ax2.plot(pd.rolling_mean(df['births'], 12), color='r')
ax2.set_ylabel('babies born', color='r')
plt.title("Beef Consumption and the Birth Rate")
plt.show()
</code>

Notice that the plot appears both in the console and the plot tab (bottom right tab).

Tip: You can “pop out” your plot by clicking the arrows at the top of the pane. This is handy if you’re working on multiple monitors and want to dedicate one just to your data visualizations.

Usage

To keep this post concise and easy to read, we’ve just given the code snippets and a few lines of results for most of the queries below.

If you’re following along in Rodeo, a few tips as you’re getting started:

  • Run Script will indeed run everything you have written in the text editor
  • You can highlight a code chunk and run it by clicking Run Line or pressing Command + Enter
  • You can resize the panes (when I’m not making plots I shrink down the bottom right pane)

Basics

Write some SQL and execute it against your pandas DataFrame by substituting DataFrames for tables.

<code>q = """
    SELECT
        *
    FROM
        meat
    LIMIT 10;"""

print sqldf(q, locals())

#                   date  beef  veal  pork  lamb_and_mutton broilers other_chicken turkey
# 0  1944-01-01 00:00:00   751    85  1280               89     None          None   None
# 1  1944-02-01 00:00:00   713    77  1169               72     None          None   None
# 2  1944-03-01 00:00:00   741    90  1128               75     None          None   None
# 3  1944-04-01 00:00:00   650    89   978               66     None          None   None
</code>

pandasql creates a DB, schema and all, loads your data, and runs your SQL.

Aggregation

pandasql supports aggregation. You can use aliased column names or column numbers in your GROUP BY clause.

<code># births per year
q = """
    SELECT
        strftime("%Y", date)
        , SUM(births)
    FROM births
    GROUP BY 1
    ORDER BY 1;
            """

print sqldf(q, locals())

#    strftime("%Y", date)  SUM(births)
# 0                  1975      3136965
# 1                  1976      6304156
# 2                  1979      3333279
# 3                  1982      3612258
</code>

locals() vs. globals()

pandasql needs to have access to other variables in your session/environment. You can pass locals() to pandasql when executing a SQL statement, but if you’re running a lot of queries that might be a pain. To avoid passing locals all the time, you can add this helper function to your script to set globals() like so:

<code>def pysqldf(q):
    return sqldf(q, globals())

q = """
    SELECT
        *
    FROM
        births
    LIMIT 10;"""

print pysqldf(q)
# 0  1975-01-01 00:00:00  265775
# 1  1975-02-01 00:00:00  241045
# 2  1975-03-01 00:00:00  268849
</code>

joins

You can join dataframes using normal SQL syntax.

<code># joining meats + births on date
q = """
    SELECT
        m.date
        , b.births
        , m.beef
    FROM
        meat m
    INNER JOIN
        births b
            on m.date = b.date
    ORDER BY
        m.date
    LIMIT 100;
    """

joined = pysqldf(q)
print joined.head()
#date  births    beef
#0  1975-01-01 00:00:00.000000  265775  2106.0
#1  1975-02-01 00:00:00.000000  241045  1845.0
#2  1975-03-01 00:00:00.000000  268849  1891.0
</code>

WHERE conditions

Here’s a WHERE clause.

<code>q = """
    SELECT
        date
        , beef
        , veal
        , pork
        , lamb_and_mutton
    FROM
        meat
    WHERE
        lamb_and_mutton &gt;= veal
    ORDER BY date DESC
    LIMIT 10;
    """

print pysqldf(q)
#                   date    beef  veal    pork  lamb_and_mutton
# 0  2012-11-01 00:00:00  2206.6  10.1  2078.7             12.4
# 1  2012-10-01 00:00:00  2343.7  10.3  2210.4             14.2
# 2  2012-09-01 00:00:00  2016.0   8.8  1911.0             12.5
# 3  2012-08-01 00:00:00  2367.5  10.1  1997.9             14.2
</code>

It’s just SQL

Since pandasql is powered by SQLite3, you can do most anything you can do in SQL. Here are some examples using common SQL features such as subqueries, order by, functions, and unions.

<code>#################################################
# SQL FUNCTIONS
# e.g. `RANDOM()`
#################################################
q = """SELECT
    *
    FROM
        meat
    ORDER BY RANDOM()
    LIMIT 10;"""
print pysqldf(q)
#                   date  beef  veal  pork  lamb_and_mutton  broilers other_chicken  turkey
# 0  1967-03-01 00:00:00  1693    65  1136               61     472.0          None    26.5
# 1  1944-12-01 00:00:00   764   146  1013               91       NaN          None     NaN
# 2  1969-06-01 00:00:00  1666    50   964               42     573.9          None    85.4
# 3  1983-03-01 00:00:00  1892    37  1303               36    1106.2          None   182.7

#################################################
# UNION ALL
#################################################
q = """
        SELECT
            date
            , 'beef' AS meat_type
            , beef AS value
        FROM meat
        UNION ALL
        SELECT
            date
            , 'veal' AS meat_type
            , veal AS value
        FROM meat

        UNION ALL

        SELECT
            date
            , 'pork' AS meat_type
            , pork AS value
        FROM meat
        UNION ALL
        SELECT
            date
            , 'lamb_and_mutton' AS meat_type
            , lamb_and_mutton AS value
        FROM meat
        ORDER BY 1
    """
print pysqldf(q).head(20)
#                    date        meat_type  value
# 0   1944-01-01 00:00:00             beef    751
# 1   1944-01-01 00:00:00             veal     85
# 2   1944-01-01 00:00:00             pork   1280
# 3   1944-01-01 00:00:00  lamb_and_mutton     89


#################################################
# subqueries
# fancy!
#################################################
q = """
    SELECT
        m1.date
        , m1.beef
    FROM
        meat m1
    WHERE m1.date IN
        (SELECT
            date
        FROM meat
        WHERE
            beef &gt;= broilers
        ORDER BY date)
"""

more_beef_than_broilers = pysqldf(q)
print more_beef_than_broilers.head(10)
#                   date  beef
# 0  1960-01-01 00:00:00  1196
# 1  1960-02-01 00:00:00  1089
# 2  1960-03-01 00:00:00  1201
# 3  1960-04-01 00:00:00  1066
</code>

Final thoughts

pandas is an incredible tool for data analysis in large part, we think, because it is extremely digestible, succinct, and expressive. Ultimately, there are tons of reasons to learn the nuances of merge, join, concatenate, melt and other native pandas features for slicing and dicing data. Check out the docs for some examples.

Our hope is that pandasql will be a helpful learning tool for folks new to Python and pandas. In my own personal experience learning R, sqldf was a familiar interface helping me become highly productive with a new tool as quickly as possible.

Connecting to MS SQL Server from Ubuntu

Standard

And now, in a break from the previous trend of fluffy posts, we have a tutorial on how to (deep breath): connect PHP to a MSSQL Server 2008 instance over ODBC from Ubuntu Linux using the FreeTDS driver and unixODBC. Theoretically it would also work for SQL Server 2005.

I don’t know whether half of the settings flags are necessary or even correct, but what follows Worked for Me™, YMMV, etc, etc.

In the commands below, I’ll use 192.168.0.1 as the server housing the SQL Server instance, with a SQL Server user name of devuser, password devpass. I’m assuming SQL Server is set up to listen on its default port, 1433. Keep an eye out, because you’ll need to change these things to your own settings.

First, install unixODBC:

sudo apt-get install unixodbc unixodbc-dev

I also installed the following (perhaps necessary) packages:
sudo apt-get install tdsodbc php5-odbc
Then download, untar, compile, and install FreeTDS (warning, the URL may change):
cd /usr/local
wget http://ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
tar xvfz freetds-stable.tgz
cd freetds-0.82
./configure --enable-msdblib --with-tdsver=8.0 --with-unixodbc=/usr
make
make install
make clean

Attempt a connection over Telnet to your SQL Server instance:
telnet 192.168.0.1 1433

Use the tsql tool to test out the connection:
tsql -S 192.168.0.1 -U devuser

This should prompt you for the password, after which you can hope against hope to see this beautiful sign:
1>

If that worked, I recommend throwing a (coding) party. Next up is some configging. Open the FreeTDS config file.
/usr/local/etc/freetds.conf

Add the following entry to the bottom of the file. We’re setting up a datasource name (DSN) called ‘MSSQL’.
[MSSQL]
host = 192.168.0.1
port = 1433
tds version = 8.0

Now open the ODBC configuration file:
/usr/local/etc/odbcinst.ini

And add the following MSSQL driver entry (FreeTDS) at the end:
[FreeTDS]
Description = FreeTDS driver
Driver = /usr/local/lib/libtdsodbc.so
Setup=/usr/lib/odbc/libtdsS.so
FileUsage = 1
UsageCount = 1 

Then, finally, set up the DSN within ODBC in the odbc.ini file here
/usr/local/etc/odbc.ini
By adding this bit to the file:
[MSSQL]
Description = MS SQL Server
Driver = /usr/local/lib/libtdsodbc.so
Server = 192.168.0.1
UID = devuser
PWD = devpass
ReadOnly = No
Port = 1433

Test out the connection using the isql tool:
isql -v MSSQL devuser 'devpass'
If you see “Connected!” you’re golden, congratulations! If not, I’m truly sorry; see below where there are some resources that might help.

Now restart Apache and test it from PHP using ‘MSSQL’ as the DSN. If something doesn’t work, you might try installing any or all of these packages:
mdbtools libmdbodbc libmdbtools mdbtools-gmdb

Here are some other resources that were helpful to me through this disastrous journey.

Introduction to the OVER(PARTITION BY) Function

Standard
Original article: http://www.cnblogs.com/lanzi/archive/2010/10/26/1861338.html
Window functions
Oracle has provided analytic functions since version 8.1.6. Analytic functions compute an aggregate value based on a group of rows; they differ from aggregate functions in that they return multiple rows for each group, whereas an aggregate function returns only one row per group.

The windowing clause specifies the size of the window of data the analytic function operates on, and that window may change as the current row changes. For example:
1. Forms of the OVER clause:
over(order by salary): accumulates in salary order; order by by itself implies a default window
over(partition by deptno): partitions the rows by department

 

over(partition by deptno order by salary)

2. Window ranges:
over(order by salary range between 5 preceding and 5 following): the window covers the rows whose values fall between the current row's value minus 5 and plus 5.

Example:

-- sum(s)over(order by s range between 2 preceding and 2 following) sums the values within plus or minus 2 of the current value

select name,class,s, sum(s)over(order by s range between 2 preceding and 2 following) mm from t2
adf        3        45        45  -- 45 plus or minus 2 is 43 to 47, and 45 is the only s value in that range
asdf       3        55        55
cfe        2        74        74
3dd        3        78        158 -- between 76 and 80 there are 78 and 80, which sum to 158
fda        1        80        158
gds        2        92        92
ffd        1        95        190
dss        1        95        190
ddd        3        99        198
gf         3        99        198
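
To see exactly which rows each RANGE window covers, here is a Python sketch of the same calculation (an illustration, not how the database evaluates it):

```python
# RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING: for each row, sum every row
# whose value lies within +/-2 of the current row's value.
def range_window_sum(values, spread=2):
    return [sum(v for v in values if x - spread <= v <= x + spread) for x in values]

s = [45, 55, 74, 78, 80, 92, 95, 95, 99, 99]
print(range_window_sum(s))  # matches the mm column in the example above
```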

 

 

 

over(order by salary rows between 5 preceding and 5 following): the window extends 5 rows before and 5 rows after the current row.

Example:

-- sum(s)over(order by s rows between 2 preceding and 2 following) sums over the window from two rows before through two rows after the current row
select name,class,s, sum(s)over(order by s rows between 2 preceding and 2 following) mm from t2
adf        3        45        174  (45+55+74=174)
asdf       3        55        252   (45+55+74+78=252)
cfe        2        74        332    (74+55+45+78+80=332)
3dd        3        78        379    (78+74+55+80+92=379)
fda        1        80        419
gds        2        92        440
ffd        1        95        461
dss        1        95        480
ddd        3        99        388
gf         3        99        293

 

 

over(order by salary range between unbounded preceding and unbounded following) and
over(order by salary rows between unbounded preceding and unbounded following) place no limit on the window.

3. Functions used together with OVER

Using row_number() over(), rank() over(), and dense_rank() over()

The class score table t2 below is used to illustrate their behavior.

Contents of t2:
cfe        2        74
dss        1        95
ffd        1        95
fda        1        80
gds        2        92
gf         3        99
ddd        3        99
adf        3        45
asdf       3        55
3dd        3        78

select * from
(
select name,class,s,rank()over(partition by class order by s desc) mm from t2
)
where mm=1;
The result is:
dss        1        95        1
ffd        1        95        1
gds        2        92        1
gf         3        99        1
ddd        3        99        1

Note:
1. When looking for the top score, row_number() cannot be used: if two students in the same class tie for first place, row_number() returns only one of them.
select * from
(
select name,class,s,row_number()over(partition by class order by s desc) mm from t2
)
where mm=1;
1        95        1  -- two students scored 95, but only one row is shown
2        92        1
3        99        1 -- two students scored 99, but again only one row is shown

2. rank() and dense_rank() both return every tied row:
As shown above, rank() finds both students tied for first place.
The difference between rank() and dense_rank():
--rank() leaves gaps: after two rows tied at one rank, the following rank is skipped (e.g. 1, 1, 3)
select name,class,s,rank()over(partition by class order by s desc) mm from t2
dss        1        95        1
ffd        1        95        1
fda        1        80        3 -- jumps straight to rank 3
gds        2        92        1
cfe        2        74        2
gf         3        99        1
ddd        3        99        1
3dd        3        78        3
asdf       3        55        4
adf        3        45        5
--dense_rank() ranks consecutively: after a tie, the next rank follows without a gap
select name,class,s,dense_rank()over(partition by class order by s desc) mm from t2
dss        1        95        1
ffd        1        95        1
fda        1        80        2 -- consecutive ranking (still 2)
gds        2        92        1
cfe        2        74        2
gf         3        99        1
ddd        3        99        1
3dd        3        78        2
asdf       3        55        3
adf        3        45        4
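The three ranking functions can also be compared side by side in a single query over the same t2 table. This is a sketch following the examples above; the column aliases rn, rk, and dr are introduced here for illustration:

```
--row_number() assigns distinct ranks even to ties;
--rank() gives ties the same rank and leaves gaps;
--dense_rank() gives ties the same rank with no gaps.
select name, class, s,
       row_number() over(partition by class order by s desc) rn,
       rank()       over(partition by class order by s desc) rk,
       dense_rank() over(partition by class order by s desc) dr
from t2;
```

For the two students tied at 99 in class 3, rn assigns 1 and 2, while both rk and dr assign 1; the next row then gets rk = 3 but dr = 2.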

--Using sum() over()
select name,class,s, sum(s)over(partition by class order by s desc) mm from t2 -- running sum of scores within each class
dss        1        95        190  -- the two 95s tie for first place, so both are included in the running sum
ffd        1        95        190
fda        1        80        270  -- first place plus second place
gds        2        92        92
cfe        2        74        166
gf         3        99        198
ddd        3        99        198
3dd        3        78        276
asdf       3        55        331
adf        3        45        376

Using first_value() over() and last_value() over()

--find the first and the last record type of each of the three circuits
SELECT opr_id,res_type,
first_value(res_type) over(PARTITION BY opr_id ORDER BY res_type) low,
last_value(res_type) over(PARTITION BY opr_id ORDER BY res_type rows BETWEEN unbounded preceding AND unbounded following) high
FROM rm_circuit_route
WHERE opr_id IN ('000100190000000000021311','000100190000000000021355','000100190000000000021339')
ORDER BY opr_id;

Note the use of rows BETWEEN unbounded preceding AND unbounded following.

--the result of taking last_value without rows BETWEEN unbounded preceding AND unbounded following:
SELECT opr_id,res_type,
first_value(res_type) over(PARTITION BY opr_id ORDER BY res_type) low,
last_value(res_type) over(PARTITION BY opr_id ORDER BY res_type) high
FROM rm_circuit_route
WHERE opr_id IN ('000100190000000000021311','000100190000000000021355','000100190000000000021339')
ORDER BY opr_id;

As the screenshot in the original post shows, without rows BETWEEN unbounded preceding AND unbounded following the default window ends at the current row in res_type order, so last_value returns the last res_type seen so far rather than the last record of the whole circuit; the "last" value is relative to res_type, not to the circuit.

Using IGNORE NULLS with first_value and last_value

(The sample data and results are shown as screenshots in the original post.)

When retrieving the first record of the circuit with IGNORE NULLS added, if the inspected column of the first record is NULL, the next record is used by default.
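A sketch of the IGNORE NULLS variant against the same rm_circuit_route table used above (the alias first_non_null is introduced here for illustration):

```
--skip leading NULL res_type values and return the first non-NULL one per circuit
SELECT opr_id, res_type,
       first_value(res_type ignore nulls)
           over(PARTITION BY opr_id ORDER BY res_type) first_non_null
FROM rm_circuit_route
ORDER BY opr_id;
```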

--lag() over(): returns a value from a preceding row (n rows back)
lag(expression,<offset>,<default>)
with a as
(select 1 id,'a' name from dual
union
select 2 id,'b' name from dual
union
select 3 id,'c' name from dual
union
select 4 id,'d' name from dual
union
select 5 id,'e' name from dual
)
select id,name,lag(id,1,'')over(order by name) from a;

--lead() over(): returns a value from a following row (n rows ahead)

lead(expression,<offset>,<default>)
with a as
(select 1 id,'a' name from dual
union
select 2 id,'b' name from dual
union
select 3 id,'c' name from dual
union
select 4 id,'d' name from dual
union
select 5 id,'e' name from dual
)
select id,name,lead(id,1,'')over(order by name) from a;

--ratio_to_report(a): the expression inside ratio_to_report() is the numerator; the rows of the over() window form the denominator
with a as (select 1 a from dual
union all
select 1 a from dual
union  all
select 1 a from dual
union all
select 2 a from dual
union all
select 3 a from dual
union all
select 4 a from dual
union all
select 4 a from dual
union all
select 5 a from dual
)
select a, ratio_to_report(a)over(partition by a) b from a
order by a;

with a as (select 1 a from dual
union all
select 1 a from dual
union  all
select 1 a from dual
union all
select 2 a from dual
union all
select 3 a from dual
union all
select 4 a from dual
union all
select 4 a from dual
union all
select 5 a from dual
)
select a, ratio_to_report(a)over() b from a -- with an empty over(), the denominator is the whole result set
order by a;

with a as (select 1 a from dual
union all
select 1 a from dual
union  all
select 1 a from dual
union all
select 2 a from dual
union all
select 3 a from dual
union all
select 4 a from dual
union all
select 4 a from dual
union all
select 5 a from dual
)
select a, ratio_to_report(a)over() b from a
group by a order by a; -- the ratio after grouping

 

Using percent_rank
Computation: (rank within the group - 1) divided by (number of rows in the group - 1). The manually computed pr1 below matches the value returned by percent_rank():
SELECT a.deptno,
a.ename,
a.sal,
a.r,
b.n,
(a.r-1)/(n-1) pr1,
percent_rank() over(PARTITION BY a.deptno ORDER BY a.sal) pr2
FROM (SELECT deptno,
ename,
sal,
rank() over(PARTITION BY deptno ORDER BY sal) r -- rank within the department
FROM emp
ORDER BY deptno, sal) a,
(SELECT deptno, COUNT(1) n FROM emp GROUP BY deptno) b -- number of employees per department
WHERE a.deptno = b.deptno;

The cume_dist function
Computation: the rank within the group divided by the number of rows in the group; when there are ties, add (number of tied rows - 1) to the rank.
The manually computed pr1 below matches the value returned by cume_dist():
SELECT a.deptno,
a.ename,
a.sal,
a.r,
b.n,
c.rn,
(a.r + c.rn - 1) / n pr1,
cume_dist() over(PARTITION BY a.deptno ORDER BY a.sal) pr2
FROM (SELECT deptno,
ename,
sal,
rank() over(PARTITION BY deptno ORDER BY sal) r
FROM emp
ORDER BY deptno, sal) a,
(SELECT deptno, COUNT(1) n FROM emp GROUP BY deptno) b,
(SELECT deptno, r, COUNT(1) rn,sal
FROM (SELECT deptno,sal,
rank() over(PARTITION BY deptno ORDER BY sal) r
FROM emp)
GROUP BY deptno, r,sal
ORDER BY deptno) c -- subquery c counts, per department, how many employees share the same salary
WHERE a.deptno = b.deptno
AND a.deptno = c.deptno(+)
AND a.sal = c.sal;

The percentile_cont function
Meaning: given an input percentile (a value as computed by the percent_rank function), returns the value at that percentile position, interpolating if necessary.
Below, with an input of 0.7: since 0.7 lies between 0.6 and 0.8, the result is the average of the sal at 0.6 (1500) and the sal at 0.8 (1600).
SELECT ename,
sal,
deptno,
percentile_cont(0.7) within GROUP(ORDER BY sal) over(PARTITION BY deptno) “Percentile_Cont”,
percent_rank() over(PARTITION BY deptno ORDER BY sal) “Percent_Rank”
FROM emp
WHERE deptno IN (30, 60);

If the input percentile is 0.6, the sal corresponding to 0.6 is returned directly, i.e. 1500:
SELECT ename,
sal,
deptno,
percentile_cont(0.6) within GROUP(ORDER BY sal) over(PARTITION BY deptno) “Percentile_Cont”,
percent_rank() over(PARTITION BY deptno ORDER BY sal) “Percent_Rank”
FROM emp
WHERE deptno IN (30, 60);

The PERCENTILE_DISC function
Description: returns the data value corresponding to the input distribution percentile (the distribution percentile is computed as by CUME_DIST); if no data value corresponds exactly, the next value above that distribution value is taken.
Note: the difference from PERCENTILE_CONT lies in how the substitute value is computed when there is no exact match.

Example: in the query below, no Cume_Dist value in department 30 corresponds to 0.7, so the SALARY at the next distribution value, 0.83333333, is used instead.

SELECT ename,
sal,
deptno,
percentile_disc(0.7) within GROUP(ORDER BY sal) over(PARTITION BY deptno) “Percentile_Disc”,
cume_dist() over(PARTITION BY deptno ORDER BY sal) “Cume_Dist”
FROM emp
WHERE deptno IN (30, 60);

 

 

Essential tools for efficient .NET development

Standard

A craftsman needs sharp tools: without good tooling, how can you efficiently produce high-quality code? This post introduces practical, productivity-boosting tools for ASP.NET developers, covering SQL management, Visual Studio extensions, memory profiling, diagnostics, and more, touching every stage of development.

  1. Visual Studio
    1. Visual Studio Productivity Power Tools: productivity extensions for the Visual Studio Professional edition.
    2. Web Essentials: boosts productivity when writing CSS, JavaScript, HTML, and more.
    3. MSVSMON: the Remote Debugging Monitor (msvsmon.exe) is a lightweight application that lets Visual Studio debug programs remotely. During remote debugging, Visual Studio runs on the debugging host and MSVSMON runs on the remote machine.
    4. WiX toolset: compiles XML source files into Windows installer packages.
    5. Code Digger: a VS 2012/2013 extension that helps developers analyze code.
    6. CodeMaid: an open-source VS 2012/2013/2015 extension providing code analysis, cleanup, and simplification.
    7. OzCode: a very powerful Visual Studio debugging extension.
    8. CodeRush: a Visual Studio extension for code refactoring and productivity.
    9. T4 Text Templates: T4 templates are the most common way to generate code files in Visual Studio, written as a mix of text blocks and control logic.
    10. Indent Guides: quickly adds indentation guide lines.
    11. PowerShell Tools: a toolkit for developing and debugging PowerShell scripts and code blocks in VS 2015.
    12. Visual Studio Code: a free cross-platform editor for building and debugging modern web and cloud applications.
  2. ASP.NET
    1. Fiddler: captures HTTP requests/responses and can simulate request behavior.
    2. AutoMapper: object-to-object mapping; for example, it can map entity objects to domain objects instead of hand-written mapping code.
    3. Unity/Ninject/Castle Windsor/StructureMap/Spring.Net: dependency-injection frameworks; many DI frameworks are available.
    4. .NET Reflector: a .NET assembly decompiler.
    5. dotPeek: a .NET assembly decompiler.
    6. ILSpy: a .NET assembly decompiler.
    7. memprofiler: a powerful tool for finding memory leaks and optimizing memory usage.
    8. PostSharp: removes repetitive boilerplate and avoids the code bloat caused by cross-cutting concerns.
    9. ASPhere: a graphical editor for Web.config.
  3. WCF
    1. SoapUI: an API testing tool supporting all standard protocols and technologies.
    2. WireShark: a network protocol analyzer for Unix and Windows; captures traffic at the TCP level and helps filter out the noise.
    3. Svc Trace Viewer: a trace-file viewer provided with WCF.
    4. Svc Config Editor: a GUI tool for managing WCF configuration.
  4. MSMQ
    1. QueueExplorer 3.4: message operations such as copying, deleting, and moving messages, saving and loading, stress testing, browsing and editing, and more.
  5. LINQ
    1. LINQPad: a lightweight tool for testing LINQ queries; it can also run scripts written in the various .NET languages.
    2. LINQ Insight: LINQ Insight Express embeds in Visual Studio and analyzes LINQ queries at design time.
  6. RegEx
    1. RegEx Tester: a regular-expression plugin.
    2. regexr: an online regular-expression development and testing tool.
    3. regexpal: an online regular-expression development and testing tool.
    4. Expresso: a desktop regular-expression tool.
    5. RegexMagic: generates regular expressions from text patterns.
  7. Javascript/JQuery/AngularJS
    1. JSHint: a JavaScript code-quality tool with many strict rules.
    2. JSFiddle: an in-browser playground for testing HTML, CSS, and JavaScript/jQuery.
    3. Protractor: an end-to-end test framework for Angular applications.
  8. SQL Server
    1. SQL Profiler: SQL tracing and monitoring.
    2. ExpressProfiler: ExpressProfiler (aka SqlExpress Profiler) is a small and fast replacement for SQL Server Profiler with its own GUI; it works with both the Express and non-Express editions of SQL Server.
    3. SQL Sentry Plan Explorer: an excellent physical view of SQL query execution plans.
    4. SQL Complete: intelligent completion and SQL formatting for SQL Server Management Studio and Visual Studio.
    5. NimbleText: a text-manipulation and code-generation tool.
    6. Query Express: a lightweight SQL query analyzer.
    7. IO Meter: measures the behavior of the I/O subsystem.
    8. sqldecryptor: decrypts encrypted SQL Server objects such as stored procedures, functions, triggers, and views.
    9. SpatialViewer: previews and creates spatial data.
    10. ClearTrace: imports trace files and shows summarized information.
    11. Internals Viewer for SQL Server: a tool for looking inside the SQL Server storage engine to see how data is physically allocated, organized, and stored.
  9. NHibernate
    1. NHibernate Mapping Generator: generates NHibernate mapping files and domain classes from existing database tables.
  10. Tally
    1. Tally ERP 9
    2. Tally dll: a .NET dynamic link library that integrates the Tally Accounting software into applications, letting code push and pull data.
  11. Code review
    1. StyleCop: a static code-analysis tool that enforces consistent code style and standards; it can be used inside Visual Studio or integrated into MSBuild projects.
    2. FxCop: a static code-analysis tool that enforces development standards by analyzing .NET assemblies.
  12. Traffic capture
    1. WireShark: a network protocol analyzer for Unix and Windows; it can capture traffic at the TCP level.
    2. HTTP Monitor: lets the developer view all HTTP traffic between your computer and the Internet, including request data (HTTP headers and GET/POST form data) and response data (HTTP headers and body).
  13. Diagnostics
    1. Glimpse: provides server-side diagnostic data; in an ASP.NET MVC project, it can be added via NuGet.
  14. Performance
    1. PerfMon: monitors system performance using performance counters.
  15. Code converters
    1. Telerik Code Converter: converts C# to VB and VB to C#; an online editor where you can choose 'Batch Converter' and upload files in a zip archive.
  16. Screen recording
    1. Wink: easily take screenshots, annotate them, and record demos.
  17. Text editors
    1. Notepad++: a source-code editor.
    2. Notepad2: a lightweight, feature-rich text editor.
    3. Sublime Text: a rich text editor.
  18. Documentation tools
    1. GhostDoc: a Visual Studio extension that automatically generates documentation comments for methods and properties from their type, name, and other context.
    2. HelpNDoc: creates help files and can generate many output formats from a single documentation source.
  19. Miscellaneous
    1. FileZilla: an open-source FTP tool; the FileZilla client can upload files to an FTP server.
    2. TreeTrim: trims source trees, deleting debug files, temporary files, and the like.
    3. BrowserStack: cross-browser testing.
    4. BugShooting: screen-capture software that captures and attaches screenshots to work items, bugs, and issue-tracking entries.
    5. Postman: a REST client for sending HTTP requests and analyzing the responses of REST applications.
    6. Web Developer Checklist: a checklist for managing development plans.
    7. PowerGUI: makes it easy to adopt and use PowerShell to manage Windows development environments effectively.
    8. Beyond Compare: file comparison.
    9. PostMan: a REST client Chrome extension.
    10. Devart Code Compare: a file diff tool that understands C#, C++, and VB code structure; includes folder comparison, a standalone app for comparing and merging folders and files, and code-review support.
How to recover deleted data from SQL Server

Standard

In all my years of working with SQL Server, one of the most commonly asked questions has always been "How can we recover a deleted record?"

Now, it is very easy to recover deleted data from your SQL server 2005 or above.

October 22, 2011 by Muhammad Imran

(Note: this script can recover the following data types and is compatible with case-sensitive collations.)

  • image
  • text
  • uniqueidentifier
  • tinyint
  • smallint
  • int
  • smalldatetime
  • real
  • money
  • datetime
  • float
  • sql_variant
  • ntext
  • bit
  • decimal
  • numeric
  • smallmoney
  • bigint
  • varbinary
  • varchar
  • binary
  • char
  • timestamp
  • nvarchar
  • nchar
  • xml
  • sysname

Let me explain this by demonstrating a simple example.

--Create Table
CREATE TABLE [Test_Table]
(
[Col_image] image,
[Col_text] text,
[Col_uniqueidentifier] uniqueidentifier,
[Col_tinyint] tinyint,
[Col_smallint] smallint,
[Col_int] int,
[Col_smalldatetime] smalldatetime,
[Col_real] real,
[Col_money] money,
[Col_datetime] datetime,
[Col_float] float,
[Col_Int_sql_variant] sql_variant,
[Col_numeric_sql_variant] sql_variant,
[Col_varchar_sql_variant] sql_variant,
[Col_uniqueidentifier_sql_variant] sql_variant,
[Col_Date_sql_variant] sql_variant,
[Col_varbinary_sql_variant] sql_variant,
[Col_ntext] ntext,
[Col_bit] bit,
[Col_decimal] decimal(18,4),
[Col_numeric] numeric(18,4),
[Col_smallmoney] smallmoney,
[Col_bigint] bigint,
[Col_varbinary] varbinary(max),
[Col_varchar] varchar(max),
[Col_binary] binary(8),
[Col_char] char,
[Col_timestamp] timestamp,
[Col_nvarchar] nvarchar(max),
[Col_nchar] nchar,
[Col_xml] xml,
[Col_sysname] sysname
)
GO

--Insert data into it
INSERT INTO [Test_Table]
([Col_image]
,[Col_text]
,[Col_uniqueidentifier]
,[Col_tinyint]
,[Col_smallint]
,[Col_int]
,[Col_smalldatetime]
,[Col_real]
,[Col_money]
,[Col_datetime]
,[Col_float]
,[Col_Int_sql_variant]
,[Col_numeric_sql_variant]
,[Col_varchar_sql_variant]
,[Col_uniqueidentifier_sql_variant]
,[Col_Date_sql_variant]
,[Col_varbinary_sql_variant]
,[Col_ntext]
,[Col_bit]
,[Col_decimal]
,[Col_numeric]
,[Col_smallmoney]
,[Col_bigint]
,[Col_varbinary]
,[Col_varchar]
,[Col_binary]
,[Col_char]
,[Col_nvarchar]
,[Col_nchar]
,[Col_xml]
,[Col_sysname])
VALUES
(CONVERT(IMAGE,REPLICATE('A',4000))
,REPLICATE('B',8000)
,NEWID()
,10
,20
,3000
,GETDATE()
,4000
,5000
,GETDATE()+15
,66666.6666
,777777
,88888.8888
,REPLICATE('C',8000)
,NEWID()
,GETDATE()+30
,CONVERT(VARBINARY(8000),REPLICATE('D',8000))
,REPLICATE('E',4000)
,1
,99999.9999
,10101.1111
,1100
,123456
,CONVERT(VARBINARY(MAX),REPLICATE('F',8000))
,REPLICATE('G',8000)
,0x4646464
,'H'
,REPLICATE('I',4000)
,'J'
,CONVERT(XML,REPLICATE('K',4000))
,REPLICATE('L',100)
)
GO

--Delete the data
DELETE FROM Test_Table
GO

--Verify the data
SELECT * FROM Test_Table
GO

--Recover the deleted data without a date range
EXEC Recover_Deleted_Data_Proc 'test','dbo.Test_Table'
GO

--Recover the deleted data with a date range
EXEC Recover_Deleted_Data_Proc 'test','dbo.Test_Table','2012-06-01','2012-06-30'

Download the stored procedure (Recover_Deleted_Data_Proc.sql, linked at the end of this post) and create it in your database before running the recovery commands above.

Explanation:

How does it work? Let’s go through it step by step. The process requires seven easy steps:

Step-1:

We need to get the deleted records from SQL Server. Using the undocumented system function fn_dblog, we can read the transaction log, including deleted data. Since we need only the relevant deleted records, we apply three filters (Context, Operation & AllocUnitName):

  • Context ('LCX_MARK_AS_GHOST' and 'LCX_HEAP')
  • Operation ('LOP_DELETE_ROWS')
  • AllocUnitName ('dbo.Student') -- schema + table name

Here is the code snippet:

Select [RowLog Contents 0] FROM sys.fn_dblog(NULL, NULL) WHERE AllocUnitName = 'dbo.Student' AND Context IN ('LCX_MARK_AS_GHOST','LCX_HEAP') AND Operation IN ('LOP_DELETE_ROWS')

This query returns a number of columns with different information, but we only need the column "RowLog Contents 0" to get the deleted data.

The column "RowLog Contents 0" will look like this:

"0x300018000100000000000000006B000056492020590000000500E001002800426F62206A65727279"

Step-2: 

Now we have the deleted data, but in hex. SQL Server stores row data in a well-defined format, so we can recover it, but first we need to understand that format. It is described in detail in Kalen Delaney's SQL Server Internals book:

  • 1 Byte : Status Bit A
  • 1 Byte : Status Bit B
  • 2 Bytes : Fixed length size
  • n Bytes : Fixed length data
  • 2 Bytes : Total Number of Columns
  • n Bytes : NULL Bitmap (1 bit for each column as 1 indicates that the column is null and 0 indicate that the column is not null)
  • 2 Bytes : Number of variable-length columns
  • n Bytes : Column offset array (2x variable length column)
  • n Bytes : Data for variable length columns

So, the Hex data“RowLog content 0″ is equal to:

"Status Bit A + Status Bit B + Fixed length size + Fixed length data + Total number of columns + NULL bitmap + Number of variable-length columns + Column offset array + Data for variable length columns."

Step-3: 

Now we need to break "RowLog Contents 0" (the hex value of our deleted data) into the structure defined above. (In the original post, color codes mark the individual parts; they are omitted here.)

  • [Fixed Length Data] = Substring (RowLog content 0, Status Bit A+ Status Bit B + 1,2 bytes)
  • [Total No of Columns]= Substring (RowLog content 0, [Fixed Length Data] + 1,2 bytes)
  • [Null Bitmap length] = Ceiling ([Total No of Columns]/8.0)
  • [Null Bytes]= Substring (RowLog content 0, Status Bit A+ Status Bit B +[Fixed Length Data] +1, [Null Bitmap length] )
  • Total no of variable columns = Substring (RowLog content 0, Status Bit A+ Status Bit B + [Fixed Length Data] +1, [Null Bitmap length] + 2 )
  • Column Offset Array= Substring (RowLog content 0, Status Bit A+ Status Bit B + [Fixed Length Data] +1, [Null Bitmap length] + 2 , Total no of variable columns*2 )
  • Variable Column Start = Status Bit A+ Status Bit B + [Fixed Length Data] + [Null Bitmap length] + 2+( Total no of variable columns*2)

Step-4: 

Now that the data is split, we can tell which column values are NULL by using the null bytes. Convert the null bytes (hex) to binary; as discussed, 1 means the column is NULL and 0 means it holds data. For this row the null bitmap is 00000111. The sample Student table has only five columns, and the first five bits of the bitmap are 00000, meaning none of the values are NULL.
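The bitmap check can be sketched in T-SQL. The byte 0x07 is the sample bitmap from the text, and the mapping of column n to bit (8 - n) follows the post's left-to-right reading of the bitmap string; the actual on-disk bitmap is least-significant-bit first, so adjust the exponent accordingly:

```
--Expand the sample null-bitmap byte (0x07 = 00000111) into per-column flags
--for the five columns of the sample Student table.
DECLARE @NullBitmap INT = CONVERT(INT, CONVERT(BINARY(1), 0x07));
SELECT n AS column_no,
       CASE WHEN @NullBitmap & POWER(2, 8 - n) > 0
            THEN 'NULL' ELSE 'NOT NULL' END AS column_state
FROM (VALUES (1), (2), (3), (4), (5)) AS cols (n);
```

With this reading, all five columns report NOT NULL and the trailing 111 bits are padding, matching the explanation above.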

Step-5:

We now have the primary data split (Step 3) and the null values (Step 4). Next we use the code snippet below to get each column's metadata: the column name, size, precision, scale and, most importantly, the leaf null bit, which tells us whether the column stores fixed-size data (<= -1) or variable-size data (>= 1).

Select *
from sys.allocation_units allocunits
INNER JOIN sys.partitions partitions
    ON (allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
    OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
INNER JOIN sys.system_internals_partition_columns cols
    ON cols.partition_id = partitions.partition_id
LEFT OUTER JOIN syscolumns
    ON syscolumns.id = partitions.object_id
    AND syscolumns.colid = cols.partition_column_id

We join this with the data collected in Steps 1-4 on allocunits.[Allocation_Unit_Id]. At this point we know both the table metadata and the row data, so we can break [RowLog Contents 0] into per-column values, still in hex. Here we must take care to handle fixed-size and variable-size columns differently.

Step-6: 

In Step 5 we collected the hex value of each column. Now we need to convert each value according to its data type, given by [System_type_id]. Each type has its own conversion mechanism:

--NVARCHAR, NCHAR
WHEN system_type_id IN (231, 239) THEN LTRIM(RTRIM(CONVERT(NVARCHAR(max), hex_Value)))

--VARCHAR, CHAR
WHEN system_type_id IN (167, 175) THEN LTRIM(RTRIM(CONVERT(VARCHAR(max), REPLACE(hex_Value, 0x00, 0x20))))

--TINYINT
WHEN system_type_id = 48 THEN CONVERT(VARCHAR(MAX), CONVERT(TINYINT, CONVERT(BINARY(1), REVERSE(hex_Value))))

--SMALLINT
WHEN system_type_id = 52 THEN CONVERT(VARCHAR(MAX), CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(hex_Value))))

--INT
WHEN system_type_id = 56 THEN CONVERT(VARCHAR(MAX), CONVERT(INT, CONVERT(BINARY(4), REVERSE(hex_Value))))

--BIGINT
WHEN system_type_id = 127 THEN CONVERT(VARCHAR(MAX), CONVERT(BIGINT, CONVERT(BINARY(8), REVERSE(hex_Value))))

--DATETIME
WHEN system_type_id = 61 THEN CONVERT(VARCHAR(MAX), CONVERT(DATETIME, CONVERT(VARBINARY(max), REVERSE(hex_Value))), 100)

--SMALLDATETIME
WHEN system_type_id = 58 THEN CONVERT(VARCHAR(MAX), CONVERT(SMALLDATETIME, CONVERT(VARBINARY(MAX), REVERSE(hex_Value))), 100)

--NUMERIC
WHEN system_type_id = 108 THEN CONVERT(VARCHAR(MAX), CAST(CONVERT(NUMERIC(18,14), CONVERT(VARBINARY, CONVERT(VARBINARY, xprec) + CONVERT(VARBINARY, xscale)) + CONVERT(VARBINARY(1), 0) + hex_Value) AS FLOAT))

--MONEY, SMALLMONEY
WHEN system_type_id IN (60, 122) THEN CONVERT(VARCHAR(MAX), CONVERT(MONEY, CONVERT(VARBINARY(MAX), REVERSE(hex_Value))), 2)

--DECIMAL
WHEN system_type_id = 106 THEN CONVERT(VARCHAR(MAX), CAST(CONVERT(DECIMAL(38,34), CONVERT(VARBINARY, CONVERT(VARBINARY, xprec) + CONVERT(VARBINARY, xscale)) + CONVERT(VARBINARY(1), 0) + hex_Value) AS FLOAT))

--BIT
WHEN system_type_id = 104 THEN CONVERT(VARCHAR(MAX), CONVERT(BIT, CONVERT(BINARY(1), hex_Value) % 2))

--FLOAT
WHEN system_type_id = 62 THEN RTRIM(LTRIM(STR(CONVERT(FLOAT, SIGN(CAST(CONVERT(VARBINARY(max), REVERSE(hex_Value)) AS BIGINT)) * (1.0 + (CAST(CONVERT(VARBINARY(max), REVERSE(hex_Value)) AS BIGINT) & 0x000FFFFFFFFFFFFF) * POWER(CAST(2 AS FLOAT), -52)) * POWER(CAST(2 AS FLOAT), ((CAST(CONVERT(VARBINARY(max), REVERSE(hex_Value)) AS BIGINT) & 0x7ff0000000000000) / EXP(52 * LOG(2)) - 1023))), 53, LEN(hex_Value))))

--REAL
WHEN system_type_id = 59 THEN LEFT(LTRIM(STR(CAST(SIGN(CAST(CONVERT(VARBINARY(max), REVERSE(hex_Value)) AS BIGINT)) * (1.0 + (CAST(CONVERT(VARBINARY(max), REVERSE(hex_Value)) AS BIGINT) & 0x007FFFFF) * POWER(CAST(2 AS REAL), -23)) * POWER(CAST(2 AS REAL), (((CAST(CONVERT(VARBINARY(max), REVERSE(hex_Value)) AS INT)) & 0x7f800000) / EXP(23 * LOG(2)) - 127)) AS REAL), 23, 23)), 8)

--BINARY, VARBINARY
WHEN system_type_id IN (165, 173) THEN (CASE WHEN CHARINDEX(0x, CAST('' AS XML).value('xs:hexBinary(sql:column("hex_value"))', 'varbinary(max)')) = 0 THEN '0x' ELSE '' END) + CAST('' AS XML).value('xs:hexBinary(sql:column("hex_value"))', 'varchar(max)')

--UNIQUEIDENTIFIER
WHEN system_type_id = 36 THEN CONVERT(VARCHAR(MAX), CONVERT(UNIQUEIDENTIFIER, hex_Value))

Step-7: 

Finally we pivot the data, and you will see the result: THE DELETED DATA IS BACK.

Note: this data is only displayed; it is not restored to your table. You can, however, insert it back into your table.

I’d really appreciate your comments on my posts, whether you agree or not, do comment.

 

Recover_Deleted_Data_Proc.sql

How to find user who ran DROP or DELETE statements on your SQL Server Objects

Standard
Problem

Someone has dropped a table from your database and you want to track who did it.  Or someone has deleted some data from a table, but no one will say who did.  In this tip, we will look at how you can use the transaction log to track down some of this information.

Solution

I have already discussed how to read the transaction log file in my last tip, "How to read SQL Server Database Log file". Before reading this tip, I recommend that you read the previous tip to understand how the transaction log file logs all database activity.

Here we will use the same undocumented function, fn_dblog, to find any unauthorized or unapproved deletes or table drops. This tip will help you track down the user who dropped a table or deleted data from one. I strongly suggest testing any undocumented functions in a lab environment first.

One way to find such users is the default trace, which captures and tracks database activity performed on your instance. On a busy system, however, the trace files may roll over too fast for you to catch some of the changes. These changes are also tracked in the transaction log file of the database, and we will use that to find the users in question.
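If the default trace is still available, it is worth checking first. A sketch (the event names 'Object:Created' and 'Object:Deleted' are the usual suspects for DDL; verify them against sys.trace_events on your instance):

```
--Read the default trace file and list recently created/dropped objects with the login that did it.
SELECT t.DatabaseName, t.ObjectName, t.LoginName, t.StartTime, e.name AS EventName
FROM sys.traces tr
CROSS APPLY sys.fn_trace_gettable(tr.path, DEFAULT) t
JOIN sys.trace_events e ON t.EventClass = e.trace_event_id
WHERE tr.is_default = 1
  AND e.name IN ('Object:Created', 'Object:Deleted')
ORDER BY t.StartTime DESC;
```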

Finding a user who ran a DELETE statement

Step 1

Before moving ahead, we will create a database and a table on which I will delete some data. Run the below SQL code to create a database and table.

--Create DB.
USE [master];
GO
CREATE DATABASE ReadingDBLog;
GO
-- Create tables.
USE ReadingDBLog;
GO
CREATE TABLE [Location] (
    [Sr.No] INT IDENTITY,
    [Date] DATETIME DEFAULT GETDATE (),
    [City] CHAR (25) DEFAULT 'Bangalore');

Step 2

We have created a database named "ReadingDBLog" and a table 'Location' with three columns. Now we will insert 100 rows into the table.

USE ReadingDBLog
GO
INSERT INTO Location DEFAULT VALUES ;
GO 100

Step 3

Now go ahead and delete some rows to check who has deleted your data.

USE ReadingDBLog
GO
DELETE Location WHERE [Sr.No]=10
GO
SELECT * FROM Location WHERE [Sr.No]=10
GO
Delete a row from the table 'Location'

You can see in the above screenshot that a row has been deleted from the table “Location”. I also ran a SELECT statement to verify the data has been deleted.

Step 4

Now we have to search the transaction log file to find the info about the deleted rows. Run the below command to get info about all deleted transactions.

USE ReadingDBLog
GO
SELECT 
    [Transaction ID],
    Operation,
    Context,
    AllocUnitName
    
FROM 
    fn_dblog(NULL, NULL) 
WHERE 
    Operation = 'LOP_DELETE_ROWS'

 

Find all the deleted rows info from t-log file

All transactions that executed a DELETE statement are displayed by the above command, as the screenshot shows. Since we are searching for deleted data in the Location table, we look at the last row; the table name appears in the "AllocUnitName" column. The last row shows that a DELETE statement was performed on the HEAP table 'dbo.Location' under transaction ID 0000:000004ce. Capture this transaction ID for our next command.

Step 5

We found the transaction ID from the above command which we will use in the below command to get the transaction SID of the user who has deleted the data.

USE ReadingDBLog
GO
SELECT
    Operation,
    [Transaction ID],
    [Begin Time],
    [Transaction Name],
    [Transaction SID]
FROM
    fn_dblog(NULL, NULL)
WHERE
    [Transaction ID] = '0000:000004ce'
AND
    [Operation] = 'LOP_BEGIN_XACT'

 

Find the transaction SID of the user

Here we can also see the [Begin Time] of this transaction, which helps narrow down the possibilities: once you know roughly when the data was deleted, you can filter on the begin time of the command.

We can read the above output as: "A DELETE statement began at 2013/10/14 12:55:17:630 under transaction ID 0000:000004ce by the user with transaction SID 0x0105000000000005150000009F11BA296C79F97398D0CF19E8030000."
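On a busy server, the same query can be narrowed down with the [Begin Time] column. Since fn_dblog exposes it as a string like '2013/10/14 12:55:17:630', a simple LIKE filter avoids datetime-conversion issues; the date below is the sample value from this tip:

```
--List transactions that began within a given hour, with their SIDs.
SELECT [Transaction ID], [Begin Time], [Transaction Name], [Transaction SID]
FROM fn_dblog(NULL, NULL)
WHERE [Operation] = 'LOP_BEGIN_XACT'
  AND [Begin Time] LIKE '2013/10/14 12:%';
```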

Now our next step is to convert the transaction SID hexadecimal value into text to find the real name of the user.

Step 6

Now we will figure out who ran the DELETE command. We will copy the hexadecimal value from the transaction SID column for the DELETE transaction and then pass that value into the SUSER_SNAME () function.

USE MASTER
GO   
SELECT SUSER_SNAME(0x0105000000000005150000009F11BA296C79F97398D0CF19E8030000)

 

Find the login name with the help of transaction SID

Now we have found the user that did the delete.

Finding a user who ran a DROP statement

Step 1

Here I am going to drop table Location.

USE ReadingDBLog
GO
DROP TABLE Location

 

Drop a table

Step 2

Similarly, if you drop an object or perform almost any other operation in a database, it gets logged in the transaction log file and is visible via the fn_dblog function.

Run the script below to display all log records whose transaction name is DROPOBJ.

USE ReadingDBLog
GO
SELECT 
Operation,
[Transaction Id],
[Transaction SID],
[Transaction Name],
 [Begin Time],
   [SPID],
   Description
FROM fn_dblog (NULL, NULL)
WHERE [Transaction Name] = 'DROPOBJ'
GO

 

Finding the transaction SID of the user who ran the DROP statement for table Location

Here we can find the transaction SID and all required info which we need to find the user.

Step 3

Now we can pass the transaction SID into system function SUSER_SNAME () to get the exact user name.

SELECT SUSER_SNAME(0x0105000000000005150000009F11BA296C79F97398D0CF19E8030000) 

 

Finding a user who ran DROP statement for table location

Once again, we found the user in question.

Next Step

Use this function to do more research into your transaction log file. There is a lot of informative data in more than 100 columns when you use this command. You may also need to look into this and correlate with other data. Explore more knowledge on SQL Server Database Administration Tips.
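As a starting point for that exploration, a quick summary of which operations appear in the log can be produced like this (run it in the database you are investigating):

```
--Count log records per operation type to see what activity the log contains.
SELECT Operation, COUNT(*) AS LogRecords
FROM fn_dblog(NULL, NULL)
GROUP BY Operation
ORDER BY LogRecords DESC;
```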

Source