SQL Server: Common DBCC Commands Explained

The Transact-SQL programming language provides DBCC statements that serve as the database console commands for SQL Server.

DBCC commands take input parameters and return values. All DBCC command parameters can accept both Unicode and DBCS literals.

Informational statements

1. DBCC INPUTBUFFER
Purpose: Displays the last statement sent from a client to an instance of Microsoft SQL Server.
Syntax: DBCC INPUTBUFFER ( session_id [ , request_id ] ) [ WITH NO_INFOMSGS ]
Permissions: The user must be a member of the sysadmin fixed server role, or must have VIEW SERVER STATE permission.
Related commands:
SELECT @@spid
SELECT request_id FROM sys.dm_exec_requests WHERE session_id = @@spid
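For example (52 is a hypothetical session ID; look one up via sp_who2 or sys.dm_exec_sessions):

-- Last statement sent by session 52.
DBCC INPUTBUFFER (52);

-- The current session's own input buffer.
DBCC INPUTBUFFER (@@SPID);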

2. DBCC OUTPUTBUFFER
Purpose: Returns the current output buffer, in hexadecimal and ASCII format, for the specified session_id.
Syntax: DBCC OUTPUTBUFFER ( session_id [ , request_id ] ) [ WITH NO_INFOMSGS ]
Permissions: The user must be a member of the sysadmin fixed server role.
Related commands:
SELECT @@spid
SELECT request_id FROM sys.dm_exec_requests WHERE session_id = @@spid

3. DBCC SHOWCONTIG
Purpose: Displays fragmentation information for the data and indexes of the specified table or view.
Syntax: DBCC SHOWCONTIG [ ( object_name ) ]
[ WITH { [ , [ ALL_INDEXES ] ] [ , [ TABLERESULTS ] ] [ , [ FAST ] ] [ , [ ALL_LEVELS ] ] [ NO_INFOMSGS ] } ]
Permissions: The user must own the object, or be a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role.
Example: DBCC SHOWCONTIG ('TableName')
Note: DBCC SHOWCONTIG and DBCC INDEXDEFRAG can be used together to defragment the indexes in a database, as sketched below.
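A minimal sketch of that workflow (database, table, and index names are placeholders):

-- Measure fragmentation, returning one row per index.
DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS, ALL_INDEXES;

-- If scan density is poor, defragment a specific index online.
-- Signature: DBCC INDEXDEFRAG (database, table, index).
DBCC INDEXDEFRAG ('MyDatabase', 'dbo.Orders', 'IX_Orders_CustomerID');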

4. DBCC OPENTRAN
Purpose: Displays information about the oldest active transaction and the oldest distributed and nondistributed replicated transactions, if any, within the specified database.
Syntax: DBCC OPENTRAN [ ( [ database_name | database_id | 0 ] ) ]
[ WITH TABLERESULTS [ , NO_INFOMSGS ] ]
Example: DBCC OPENTRAN (DataBaseName) WITH TABLERESULTS

5. DBCC SQLPERF
Purpose: Provides transaction log space usage statistics for all databases. It can also be used to reset wait and latch statistics.
Syntax: DBCC SQLPERF ( [ LOGSPACE ] |
[ "sys.dm_os_latch_stats" , CLEAR ] |
[ "sys.dm_os_wait_stats" , CLEAR ] )
[ WITH NO_INFOMSGS ]
Example: DBCC SQLPERF (LOGSPACE)

6. DBCC TRACESTATUS
Purpose: Displays the status of trace flags.
Syntax: DBCC TRACESTATUS ( [ [ trace# [ ,...n ] ] [ , ] [ -1 ] ] ) [ WITH NO_INFOMSGS ]
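For example:

-- Status of one trace flag (1222 is just an example flag).
DBCC TRACESTATUS (1222);

-- Status of all trace flags currently enabled globally.
DBCC TRACESTATUS (-1);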

7. DBCC PROCCACHE
Purpose: Displays information about the procedure cache in table format.
Syntax: DBCC PROCCACHE [ WITH NO_INFOMSGS ]
Permissions: The user must be a member of the sysadmin fixed server role or the db_owner fixed database role.

8. DBCC USEROPTIONS
Purpose: Returns the SET options that are active (set) for the current connection.
Syntax: DBCC USEROPTIONS [ WITH NO_INFOMSGS ]
Permissions: Requires membership in the public role.
Example: DBCC USEROPTIONS

9. DBCC SHOW_STATISTICS
Purpose: Displays the current distribution statistics for the specified target on the specified table.

10. DBCC SHOWFILESTATS
Purpose: Displays file usage information; the figures must be converted to be meaningful.
The output is reported in extents, and one extent is 64 KB, so TotalExtents * 64 / 1024 / 1024 converts the value to GB.
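A small conversion sketch (the column list below mirrors the DBCC SHOWFILESTATS output; adjust it if your build returns different columns):

-- Capture the extent counts, then convert to GB (1 extent = 64 KB).
CREATE TABLE #FileStats
    (FileID INT, [FileGroup] INT, TotalExtents BIGINT,
     UsedExtents BIGINT, LogicalName SYSNAME, FileName NVARCHAR(260));

INSERT INTO #FileStats
EXEC ('DBCC SHOWFILESTATS');

SELECT LogicalName,
       TotalExtents * 64.0 / 1024 / 1024 AS TotalGB,
       UsedExtents  * 64.0 / 1024 / 1024 AS UsedGB
FROM #FileStats;

DROP TABLE #FileStats;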
Validation statements

11. DBCC CHECKALLOC
Purpose: Checks the consistency of the disk space allocation structures for a specified database.
Example: DBCC CHECKALLOC ('DataBaseName')
Permissions: Requires membership in the sysadmin fixed server role or the db_owner fixed database role.

12. DBCC CHECKFILEGROUP
Purpose: Checks the allocation and structural integrity of all tables and indexed views in the specified filegroup of the current database.
Example: DBCC CHECKFILEGROUP ('PRIMARY')

13. DBCC CHECKCATALOG
Purpose: Checks catalog consistency within the specified database.
Example: DBCC CHECKCATALOG ('datapeng')

14. DBCC CHECKIDENT
Purpose: Checks the current identity value for the specified table and, if necessary, changes the identity value.
Example: DBCC CHECKIDENT ('datapeng01')

15. DBCC CHECKCONSTRAINTS
Purpose: Checks the integrity of a specified constraint, or of all constraints on a specified table, in the current database.

16. DBCC CHECKTABLE
Purpose: Checks the integrity of all the pages and structures that make up a table or indexed view.

17. DBCC CHECKDB
Purpose: Checks the logical and physical integrity of all the objects in the specified database by performing the following operations:

Runs DBCC CHECKALLOC on the database.
Runs DBCC CHECKTABLE on every table and view in the database.
Runs DBCC CHECKCATALOG on the database.
Validates the contents of every indexed view in the database.
Validates the Service Broker data in the database.
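Typical invocations (the database name is a placeholder):

-- Full integrity check, suppressing informational messages.
DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;

-- Faster physical-only check, common on very large databases.
DBCC CHECKDB ('MyDatabase') WITH PHYSICAL_ONLY;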

Maintenance statements

18. DBCC CLEANTABLE
Purpose: Reclaims space from dropped variable-length columns in tables or indexed views.
Example: DBCC CLEANTABLE ('datapeng','datapeng01')

19. DBCC INDEXDEFRAG
Purpose: Defragments the indexes of the specified table or view.
Example: DBCC INDEXDEFRAG ('datapeng','datapeng01')

Sample output:

Pages Scanned Pages Moved Pages Removed
------------- ----------- -------------
359           346         8

(1 row(s) affected)

20. DBCC DBREINDEX
Purpose: Rebuilds one or more indexes for a table in the specified database.
Example: DBCC DBREINDEX ('datapeng','datapeng01')

21. DBCC SHRINKDATABASE
Purpose: Shrinks the size of the data and log files in the specified database.
Example: DBCC SHRINKDATABASE ('datapeng')

22. DBCC SHRINKFILE
Purpose: Shrinks the size of the specified data or log file for the current database.
Example: DBCC SHRINKFILE ('datapeng')

23. DBCC FREEPROCCACHE
Purpose: Removes all elements from the procedure cache.

24. DBCC UPDATEUSAGE
Purpose: Reports and corrects page and row count inaccuracies in the catalog views. These inaccuracies can cause the sp_spaceused system stored procedure to return incorrect space usage reports.
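For example (0 means the current database; a database name or ID works too):

-- Correct the page and row counts for every object in the current database.
DBCC UPDATEUSAGE (0);

-- sp_spaceused then reports accurate figures again.
EXEC sp_spaceused;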

Miscellaneous statements

25. DBCC dllname (FREE)
Purpose: Unloads the specified extended stored procedure DLL from memory.

26. DBCC HELP
Purpose: Returns syntax information for the specified DBCC command.
Example: DBCC HELP ('checkdb')

27. DBCC FREESESSIONCACHE
Purpose: Flushes the distributed query connection cache used by distributed queries against an instance of Microsoft SQL Server.

28. DBCC TRACEON
Purpose: Enables the specified trace flags.
Syntax: DBCC TRACEON ( trace# [ ,...n ] [ , -1 ] ) [ WITH NO_INFOMSGS ]

29. DBCC TRACEOFF
Purpose: Disables the specified trace flags.

30. DBCC FREESYSTEMCACHE
Purpose: Releases all unused cache entries from all caches. The SQL Server 2005 Database Engine proactively cleans up unused cache entries in the background to make memory available for current entries, but you can use this command to remove unused entries from all caches manually.
Example: DBCC FREESYSTEMCACHE ('ALL')

 

http://blog.itpub.net/29371470/viewspace-1082379

pandasql: Make python speak SQL


http://blog.yhat.com/posts/pandasql-intro.html

Introduction

One of my favorite things about Python is that users get the benefit of observing the R community and then emulating the best parts of it. I’m a big believer that a language is only as helpful as its libraries and tools.

This post is about pandasql, a Python package we (Yhat) wrote that emulates the R package sqldf. It’s a small but mighty library comprised of just 358 lines of code. The idea of pandasql is to make Python speak SQL. For those of you who come from a SQL-first background or still “think in SQL”, pandasql is a nice way to take advantage of the strengths of both languages.

In this introduction, we’ll show you how to get up and running with pandasql inside of Rodeo, the integrated development environment (IDE) we built for data exploration and analysis. Rodeo is an open source and completely free tool. If you’re an R user, it’s a comparable tool with a similar feel to RStudio. As of today, Rodeo can only run Python code, but last week we added syntax highlighting for a bunch of other languages to the editor (Markdown, JSON, Julia, SQL). As you may have read or guessed, we’ve got big plans for Rodeo, including adding SQL support so that you can run your SQL queries right inside of Rodeo, even without our handy little pandasql. More on that in the next week or two!

Downloading Rodeo

Start by downloading Rodeo for Mac, Windows or Linux from the Rodeo page on the Yhat website.

P.S. If you download Rodeo and encounter a problem or simply have a question, we monitor our Discourse forum 24/7 (okay, almost).

A bit of background, if you’re curious

Behind the scenes, pandasql uses the pandas.io.sql module to transfer data between DataFrame and SQLite databases. Operations are performed in SQL, the results returned, and the database is then torn down. The library makes heavy use of pandas write_frame and frame_query, two functions which let you read and write to/from pandas and (most) any SQL database.

Install pandasql

Install pandasql using the package manager pane in Rodeo. Simply search for pandasql and click Install Package.

You can also run ! pip install pandasql from the text editor if you prefer to install that way.

Check out the datasets

pandasql has two built-in datasets which we’ll use for the examples below.

  • meat: Dataset from the U.S. Dept. of Agriculture containing metrics on livestock, dairy, and poultry outlook and production
  • births: Dataset from the United Nations Statistics Division containing demographic statistics on live births by month

Run the following code to check out the data sets.

<code>#Checking out meat and birth data
from pandasql import sqldf
from pandasql import load_meat, load_births

meat = load_meat()
births = load_births()

#You can inspect the dataframes directly if you're using Rodeo
#These print statements are here just in case you want to check out your data in the editor, too
print meat.head()
print births.head()
</code>

Inside Rodeo, you really don’t even need the print variable.head() statements, since you can actually just examine the dataframes directly.

An odd graph

<code># Let's make a graph to visualize the data
# Bet you haven't had a title quite like this before
import matplotlib.pyplot as plt
from pandasql import *
import pandas as pd

pysqldf = lambda q: sqldf(q, globals())

q  = """
SELECT
  m.date
  , m.beef
  , b.births
FROM
  meat m
LEFT JOIN
  births b
    ON m.date = b.date
WHERE
    m.date > '1974-12-31';
"""

meat = load_meat()
births = load_births()

df = pysqldf(q)
df.births = df.births.fillna(method='backfill')

fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.plot(pd.rolling_mean(df['beef'], 12), color='b')
ax1.set_xlabel('months since 1975')
ax1.set_ylabel('cattle slaughtered', color='b')

ax2 = ax1.twinx()
ax2.plot(pd.rolling_mean(df['births'], 12), color='r')
ax2.set_ylabel('babies born', color='r')
plt.title("Beef Consumption and the Birth Rate")
plt.show()
</code>

Notice that the plot appears both in the console and the plot tab (bottom right tab).

Tip: You can “pop out” your plot by clicking the arrows at the top of the pane. This is handy if you’re working on multiple monitors and want to dedicate one just to your data visualizations.

Usage

To keep this post concise and easy to read, we’ve just given the code snippets and a few lines of results for most of the queries below.

If you’re following along in Rodeo, a few tips as you’re getting started:

  • Run Script will indeed run everything you have written in the text editor
  • You can highlight a code chunk and run it by clicking Run Line or pressing Command + Enter
  • You can resize the panes (when I’m not making plots I shrink down the bottom right pane)

Basics

Write some SQL and execute it against your pandas DataFrame by substituting DataFrames for tables.

<code>q = """
    SELECT
        *
    FROM
        meat
    LIMIT 10;"""

print sqldf(q, locals())

#                   date  beef  veal  pork  lamb_and_mutton broilers other_chicken turkey
# 0  1944-01-01 00:00:00   751    85  1280               89     None          None   None
# 1  1944-02-01 00:00:00   713    77  1169               72     None          None   None
# 2  1944-03-01 00:00:00   741    90  1128               75     None          None   None
# 3  1944-04-01 00:00:00   650    89   978               66     None          None   None
</code>

pandasql creates a DB, schema and all, loads your data, and runs your SQL.

Aggregation

pandasql supports aggregation. You can use aliased column names or column numbers in your GROUP BY clause.

<code># births per year
q = """
    SELECT
        strftime("%Y", date)
        , SUM(births)
    FROM births
    GROUP BY 1
    ORDER BY 1;
            """

print sqldf(q, locals())

#    strftime("%Y", date)  SUM(births)
# 0                  1975      3136965
# 1                  1976      6304156
# 2                  1979      3333279
# 3                  1982      3612258
</code>

locals() vs. globals()

pandasql needs to have access to other variables in your session/environment. You can pass locals() to pandasql when executing a SQL statement, but if you’re running a lot of queries that might be a pain. To avoid passing locals all the time, you can add this helper function to your script to set globals() like so:

<code>def pysqldf(q):
    return sqldf(q, globals())

q = """
    SELECT
        *
    FROM
        births
    LIMIT 10;"""

print pysqldf(q)
# 0  1975-01-01 00:00:00  265775
# 1  1975-02-01 00:00:00  241045
# 2  1975-03-01 00:00:00  268849
</code>

joins

You can join dataframes using normal SQL syntax.

<code># joining meats + births on date
q = """
    SELECT
        m.date
        , b.births
        , m.beef
    FROM
        meat m
    INNER JOIN
        births b
            on m.date = b.date
    ORDER BY
        m.date
    LIMIT 100;
    """

joined = pysqldf(q)
print joined.head()
#date  births    beef
#0  1975-01-01 00:00:00.000000  265775  2106.0
#1  1975-02-01 00:00:00.000000  241045  1845.0
#2  1975-03-01 00:00:00.000000  268849  1891.0
</code>

WHERE conditions

Here’s a WHERE clause.

<code>q = """
    SELECT
        date
        , beef
        , veal
        , pork
        , lamb_and_mutton
    FROM
        meat
    WHERE
        lamb_and_mutton >= veal
    ORDER BY date DESC
    LIMIT 10;
    """

print pysqldf(q)
#                   date    beef  veal    pork  lamb_and_mutton
# 0  2012-11-01 00:00:00  2206.6  10.1  2078.7             12.4
# 1  2012-10-01 00:00:00  2343.7  10.3  2210.4             14.2
# 2  2012-09-01 00:00:00  2016.0   8.8  1911.0             12.5
# 3  2012-08-01 00:00:00  2367.5  10.1  1997.9             14.2
</code>

It’s just SQL

Since pandasql is powered by SQLite3, you can do most anything you can do in SQL. Here are some examples using common SQL features such as subqueries, order by, functions, and unions.

<code>#################################################
# SQL FUNCTIONS
# e.g. `RANDOM()`
#################################################
q = """SELECT
    *
    FROM
        meat
    ORDER BY RANDOM()
    LIMIT 10;"""
print pysqldf(q)
#                   date  beef  veal  pork  lamb_and_mutton  broilers other_chicken  turkey
# 0  1967-03-01 00:00:00  1693    65  1136               61     472.0          None    26.5
# 1  1944-12-01 00:00:00   764   146  1013               91       NaN          None     NaN
# 2  1969-06-01 00:00:00  1666    50   964               42     573.9          None    85.4
# 3  1983-03-01 00:00:00  1892    37  1303               36    1106.2          None   182.7

#################################################
# UNION ALL
#################################################
q = """
        SELECT
            date
            , 'beef' AS meat_type
            , beef AS value
        FROM meat
        UNION ALL
        SELECT
            date
            , 'veal' AS meat_type
            , veal AS value
        FROM meat

        UNION ALL

        SELECT
            date
            , 'pork' AS meat_type
            , pork AS value
        FROM meat
        UNION ALL
        SELECT
            date
            , 'lamb_and_mutton' AS meat_type
            , lamb_and_mutton AS value
        FROM meat
        ORDER BY 1
    """
print pysqldf(q).head(20)
#                    date        meat_type  value
# 0   1944-01-01 00:00:00             beef    751
# 1   1944-01-01 00:00:00             veal     85
# 2   1944-01-01 00:00:00             pork   1280
# 3   1944-01-01 00:00:00  lamb_and_mutton     89


#################################################
# subqueries
# fancy!
#################################################
q = """
    SELECT
        m1.date
        , m1.beef
    FROM
        meat m1
    WHERE m1.date IN
        (SELECT
            date
        FROM meat
        WHERE
            beef >= broilers
        ORDER BY date)
"""

more_beef_than_broilers = pysqldf(q)
print more_beef_than_broilers.head(10)
#                   date  beef
# 0  1960-01-01 00:00:00  1196
# 1  1960-02-01 00:00:00  1089
# 2  1960-03-01 00:00:00  1201
# 3  1960-04-01 00:00:00  1066
</code>

Final thoughts

pandas is an incredible tool for data analysis in large part, we think, because it is extremely digestible, succinct, and expressive. Ultimately, there are tons of reasons to learn the nuances of merge, join, concatenate, melt and other native pandas features for slicing and dicing data. Check out the docs for some examples.

Our hope is that pandasql will be a helpful learning tool for folks new to Python and pandas. In my own personal experience learning R, sqldf was a familiar interface helping me become highly productive with a new tool as quickly as possible.

Connecting to MS SQL Server from Ubuntu


And now, in a break from the previous trend of fluffy posts, we have a tutorial on how to (deep breath): connect PHP to a MSSQL Server 2008 instance over ODBC from Ubuntu Linux using the FreeTDS driver and unixODBC. Theoretically it would also work for SQL Server 2005.

I don’t know whether half of the settings flags are necessary or even correct, but what follows Worked for Me™, YMMV, etc, etc.

In the commands below, I’ll use 192.168.0.1 as the server housing the SQL Server instance, with a SQL Server user name of devuser, password devpass. I’m assuming SQL Server is set up to listen on its default port, 1433. Keep an eye out, because you’ll need to change these things to your own settings.

First, install unixODBC:

sudo apt-get install unixodbc unixodbc-dev

I also installed the following (perhaps necessary) packages:
sudo apt-get install tdsodbc php5-odbc
Then download, untar, compile, and install FreeTDS (warning, the URL may change):
cd /usr/local
wget http://ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
tar xvfz freetds-stable.tgz
cd freetds-0.82
./configure --enable-msdblib --with-tdsver=8.0 --with-unixodbc=/usr
make
make install
make clean

Attempt a connection over Telnet to your SQL Server instance:
telnet 192.168.0.1 1433

Use the tsql tool to test out the connection:
tsql -S 192.168.0.1 -U devuser

This should prompt you for the password, after which you can hope against hope to see this beautiful sign:
1>

If that worked, I recommend throwing a (coding) party. Next up is some configging. Open the FreeTDS config file.
/usr/local/etc/freetds.conf

Add the following entry to the bottom of the file. We’re setting up a datasource name (DSN) called ‘MSSQL’.
[MSSQL]
host = 192.168.0.1
port = 1433
tds version = 8.0

Now open the ODBC configuration file:
/usr/local/etc/odbcinst.ini

And add the following MSSQL driver entry (FreeTDS) at the end:
[FreeTDS]
Description = FreeTDS driver
Driver = /usr/local/lib/libtdsodbc.so
Setup=/usr/lib/odbc/libtdsS.so
FileUsage = 1
UsageCount = 1 

Then, finally, set up the DSN within ODBC in the odbc.ini file here
/usr/local/etc/odbc.ini
By adding this bit to the file:
[MSSQL]
Description = MS SQL Server
Driver = /usr/local/lib/libtdsodbc.so
Server = 192.168.0.1
UID = devuser
PWD = devpass
ReadOnly = No
Port = 1433

Test out the connection using the isql tool:
isql -v MSSQL devuser 'devpass'
If you see “Connected!” you’re golden, congratulations! If not, I’m truly sorry; see below where there are some resources that might help.

Now restart Apache and test it from PHP using ‘MSSQL’ as the DSN. If something doesn’t work, you might try installing any or all of these packages:
mdbtools libmdbodbc libmdbtools mdbtools-gmdb

Here are some other resources that were helpful to me through this disastrous journey.

An Introduction to the OVER(PARTITION BY) Function

Original post (in Chinese): http://www.cnblogs.com/lanzi/archive/2010/10/26/1861338.html

Window functions

Oracle has provided analytic functions since release 8.1.6. An analytic function computes an aggregate over a group of rows, but unlike an aggregate function, which returns a single row per group, it returns a value for every row in the group.

The window clause specifies the size of the window of data the analytic function works over, and this window can change as the current row changes. Some examples:

1. What can follow OVER:
over(order by salary): accumulates in salary order; an ORDER BY on its own implies a default window
over(partition by deptno): partitions the rows by department

over(partition by deptno order by salary)

2. Window ranges:
over(order by salary range between 5 preceding and 5 following): the window holds the rows whose value lies between the current row's value minus 5 and plus 5.

For example:

-- sum(s)over(order by s range between 2 preceding and 2 following) sums the rows whose s lies within 2 of the current row's s

select name,class,s, sum(s)over(order by s range between 2 preceding and 2 following) mm from t2
adf        3        45        45  -- 45 plus or minus 2 is 43 to 47, and the only s in that range is 45
asdf       3        55        55
cfe        2        74        74
3dd        3        78        158 -- between 76 and 80 there are 78 and 80, which sum to 158
fda        1        80        158
gds        2        92        92
ffd        1        95        190
dss        1        95        190
ddd        3        99        198
gf         3        99        198

 

 

 

over(order by salary rows between 5 preceding and 5 following): the window spans the 5 rows before and the 5 rows after the current row.

For example:

-- sum(s)over(order by s rows between 2 preceding and 2 following) sums the two rows above, the current row, and the two rows below
select name,class,s, sum(s)over(order by s rows between 2 preceding and 2 following) mm from t2
adf        3        45        174  (45+55+74=174)
asdf       3        55        252  (45+55+74+78=252)
cfe        2        74        332  (74+55+45+78+80=332)
3dd        3        78        379  (78+74+55+80+92=379)
fda        1        80        419
gds        2        92        440
ffd        1        95        461
dss        1        95        480
ddd        3        99        388
gf         3        99        293

over(order by salary range between unbounded preceding and unbounded following) or
over(order by salary rows between unbounded preceding and unbounded following): the window is unrestricted and covers the whole partition.
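A quick sketch against the same t2 table: with an unbounded window, every row receives the same grand total.

-- Every row gets the sum of s over all rows, because the window
-- spans the entire result set.
select name, class, s,
       sum(s) over(order by s rows between unbounded preceding
                                       and unbounded following) total_s
from t2;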

 

3. Functions commonly combined with OVER

Using row_number() over(), rank() over(), and dense_rank() over()

The class score table t2 below illustrates their use.

The contents of t2:
cfe        2        74
dss        1        95
ffd        1        95
fda        1        80
gds        2        92
gf         3        99
ddd        3        99
adf        3        45
asdf       3        55
3dd        3        78

select * from
(
select name,class,s,rank()over(partition by class order by s desc) mm from t2
)
where mm=1;
The result is:
dss        1        95        1
ffd        1        95        1
gds        2        92        1
gf         3        99        1
ddd        3        99        1

Notes:
1. Do not use row_number() to find the top score: if two students in the same class tie for first place, row_number() returns only one of them;
select * from
(
select name,class,s,row_number()over(partition by class order by s desc) mm from t2
)
where mm=1;
1        95        1  -- two rows have 95, but only one is shown
2        92        1
3        99        1 -- two rows have 99, but again only one is shown

2. rank() and dense_rank() both return all tied rows:
As shown above, rank() finds every row tied for first place.
The difference between rank() and dense_rank():
-- rank() leaves gaps: after two rows tied at rank 1, the next rank is 3
select name,class,s,rank()over(partition by class order by s desc) mm from t2
dss        1        95        1
ffd        1        95        1
fda        1        80        3 -- jumps straight to 3
gds        2        92        1
cfe        2        74        2
gf         3        99        1
ddd        3        99        1
3dd        3        78        3
asdf       3        55        4
adf        3        45        5
-- dense_rank() is dense: after two rows tied at rank 1, the next rank is still 2
select name,class,s,dense_rank()over(partition by class order by s desc) mm from t2
dss        1        95        1
ffd        1        95        1
fda        1        80        2 -- dense ranking (still 2)
gds        2        92        1
cfe        2        74        2
gf         3        99        1
ddd        3        99        1
3dd        3        78        2
asdf       3        55        3
adf        3        45        4

-- Using sum()over()
select name,class,s, sum(s)over(partition by class order by s desc) mm from t2 -- running total of scores within each class
dss        1        95        190  -- the two 95s tie for first place, so both tied rows are added together
ffd        1        95        190
fda        1        80        270  -- first place plus second place
gds        2        92        92
cfe        2        74        166
gf         3        99        198
ddd        3        99        198
3dd        3        78        276
asdf       3        55        331
adf        3        45        376

Using first_value() over() and last_value() over()

-- For each of these three circuits, find the record type of its first record and its last record

SELECT opr_id,res_type,
first_value(res_type) over(PARTITION BY opr_id ORDER BY res_type) low,
last_value(res_type) over(PARTITION BY opr_id ORDER BY res_type rows BETWEEN unbounded preceding AND unbounded following) high
FROM rm_circuit_route
WHERE opr_id IN ('000100190000000000021311','000100190000000000021355','000100190000000000021339')
ORDER BY opr_id;

Note: on the use of rows BETWEEN unbounded preceding AND unbounded following.

-- Taking last_value without rows BETWEEN unbounded preceding AND unbounded following:

SELECT opr_id,res_type,
first_value(res_type) over(PARTITION BY opr_id ORDER BY res_type) low,
last_value(res_type) over(PARTITION BY opr_id ORDER BY res_type) high
FROM rm_circuit_route
WHERE opr_id IN ('000100190000000000021311','000100190000000000021355','000100190000000000021339')
ORDER BY opr_id;

As the result (shown as a screenshot in the original post) demonstrates, without rows BETWEEN unbounded preceding AND unbounded following the default window ends at the current row, so last_value is taken within the range of res_type values seen so far rather than over the whole circuit.

 

 

 

 

 

Using ignore nulls in first_value and last_value

(The sample data appeared as a screenshot in the original post.)

Take the first record of the circuit: with ignore nulls, if the inspected field is null in the first record, the next record is used instead.
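A minimal sketch of the behavior (Oracle syntax, using the same rm_circuit_route table as above):

-- Without IGNORE NULLS a leading NULL res_type would be returned as-is;
-- with it, the first non-null res_type in the window is taken instead.
SELECT opr_id,
       first_value(res_type IGNORE NULLS)
           over(PARTITION BY opr_id ORDER BY res_type) first_non_null
FROM rm_circuit_route;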

-- lag() over(): fetch a value from n rows before the current row
lag(expression, <offset>, <default>)
with a as
(select 1 id,'a' name from dual
union
select 2 id,'b' name from dual
union
select 3 id,'c' name from dual
union
select 4 id,'d' name from dual
union
select 5 id,'e' name from dual
)
select id,name,lag(id,1,'')over(order by name) from a;

-- lead() over(): fetch a value from n rows after the current row

lead(expression, <offset>, <default>)
with a as
(select 1 id,'a' name from dual
union
select 2 id,'b' name from dual
union
select 3 id,'c' name from dual
union
select 4 id,'d' name from dual
union
select 5 id,'e' name from dual
)
select id,name,lead(id,1,'')over(order by name) from a;

-- Using ratio_to_report(a): the argument of ratio_to_report() is the numerator; the rows covered by over() form the denominator
with a as (select 1 a from dual
union all
select 1 a from dual
union all
select 1 a from dual
union all
select 2 a from dual
union all
select 3 a from dual
union all
select 4 a from dual
union all
select 4 a from dual
union all
select 5 a from dual
)
select a, ratio_to_report(a)over(partition by a) b from a
order by a;

with a as (select 1 a from dual
union all
select 1 a from dual
union all
select 1 a from dual
union all
select 2 a from dual
union all
select 3 a from dual
union all
select 4 a from dual
union all
select 4 a from dual
union all
select 5 a from dual
)
select a, ratio_to_report(a)over() b from a -- with an empty over(), the denominator defaults to the total over all rows
order by a;

with a as (select 1 a from dual
union all
select 1 a from dual
union all
select 1 a from dual
union all
select 2 a from dual
union all
select 3 a from dual
union all
select 4 a from dual
union all
select 4 a from dual
union all
select 5 a from dual
)
select a, ratio_to_report(a)over() b from a
group by a order by a; -- the proportions after grouping

 

Using percent_rank
Computation: (rank within the group - 1) divided by (number of rows in the group - 1). As shown below, the hand-computed pr1 matches pr2 returned by the percent_rank function:
SELECT a.deptno,
a.ename,
a.sal,
a.r,
b.n,
(a.r-1)/(n-1) pr1,
percent_rank() over(PARTITION BY a.deptno ORDER BY a.sal) pr2
FROM (SELECT deptno,
ename,
sal,
rank() over(PARTITION BY deptno ORDER BY sal) r -- rank within the group
FROM emp
ORDER BY deptno, sal) a,
(SELECT deptno, COUNT(1) n FROM emp GROUP BY deptno) b -- member count per department
WHERE a.deptno = b.deptno;

The cume_dist function
Computation: rank within the group divided by the number of rows in the group; when there are ties, add (number of tied rows - 1) to the numerator.
As shown below, the hand-computed pr1 matches pr2 returned by the cume_dist function:
SELECT a.deptno,
a.ename,
a.sal,
a.r,
b.n,
c.rn,
(a.r + c.rn - 1) / n pr1,
cume_dist() over(PARTITION BY a.deptno ORDER BY a.sal) pr2
FROM (SELECT deptno,
ename,
sal,
rank() over(PARTITION BY deptno ORDER BY sal) r
FROM emp
ORDER BY deptno, sal) a,
(SELECT deptno, COUNT(1) n FROM emp GROUP BY deptno) b,
(SELECT deptno, r, COUNT(1) rn,sal
FROM (SELECT deptno,sal,
rank() over(PARTITION BY deptno ORDER BY sal) r
FROM emp)
GROUP BY deptno, r,sal
ORDER BY deptno) c -- derived table c yields, per department, the number of employees sharing each salary
WHERE a.deptno = b.deptno
AND a.deptno = c.deptno(+)
AND a.sal = c.sal;

The percentile_cont function
Meaning: takes a percentile as input (computed the same way as percent_rank) and returns the value at that percentile position, interpolating if necessary.
Below, the input percentile is 0.7. Since 0.7 lies between 0.6 and 0.8, the result is the average of the sal at 0.6 (1500) and the sal at 0.8 (1600):
SELECT ename,
sal,
deptno,
percentile_cont(0.7) within GROUP(ORDER BY sal) over(PARTITION BY deptno) "Percentile_Cont",
percent_rank() over(PARTITION BY deptno ORDER BY sal) "Percent_Rank"
FROM emp
WHERE deptno IN (30, 60);

If the input percentile is 0.6, the sal at 0.6 is returned directly, i.e. 1500:
SELECT ename,
sal,
deptno,
percentile_cont(0.6) within GROUP(ORDER BY sal) over(PARTITION BY deptno) "Percentile_Cont",
percent_rank() over(PARTITION BY deptno ORDER BY sal) "Percent_Rank"
FROM emp
WHERE deptno IN (30, 60);

The PERCENTILE_DISC function
Purpose: returns the data value corresponding to the input distribution percentile; the distribution percentile is computed as in CUME_DIST. If no data value corresponds exactly, the value at the next distribution value above the input is taken.
Note: this function differs from PERCENTILE_CONT in how the substitute value is computed when there is no exact match.

Example: below, the 0.7 percentile has no matching Cume_Dist value in department 30, so the SALARY at the next distribution value, 0.83333333, is used instead.

SELECT ename,
sal,
deptno,
percentile_disc(0.7) within GROUP(ORDER BY sal) over(PARTITION BY deptno) "Percentile_Disc",
cume_dist() over(PARTITION BY deptno ORDER BY sal) "Cume_Dist"
FROM emp
WHERE deptno IN (30, 60);

 

 

Essential Tools for Efficient .NET Development

As the saying goes, to do a good job one must first sharpen one's tools: without good tools, how can you develop high-quality code efficiently? This post introduces a set of practical, efficiency-boosting tools for ASP.NET developers, covering SQL management, Visual Studio plugins, memory management, diagnostics and more, touching every stage of the development process.

  1. Visual Studio
    1. Visual Studio Productivity Power tool: productivity extensions for Visual Studio Professional.
    2. Web Essentials: boosts development productivity and helps developers write CSS, JavaScript, HTML and other code.
    3. MSVSMON: the remote debug monitor (msvsmon.exe) is a lightweight application that lets Visual Studio debug programs remotely. During remote debugging, VS runs on the debugging host while MSVSMON runs on the remote machine.
    4. WIX toolset: compiles XML source files into Windows installer packages.
    5. Code digger: a VS 2012/2013 extension that helps developers analyze code.
    6. CodeMaid: an open-source VS 2012/2013/2015 extension that provides code analysis, cleanup and simplification.
    7. OzCode: a very powerful VS debugging tool.
    8. CodeRush: a VS extension for code refactoring and productivity.
    9. T4 Text Template: in VS, T4 text templates are the most commonly used template files for generating code; they are written as text blocks combined with control logic.
    10. Indent Guides: quickly adds indentation guide lines.
    11. PowerShell Tools: a toolkit for developing and debugging PowerShell scripts and code blocks in VS 2015.
    12. Visual Studio Code: a free cross-platform editor that can build and debug modern web and cloud applications.
  2. ASP.NET
    1. Fiddler: captures HTTP requests/responses to simulate request behavior.
    2. AutoMapper: object-to-object mapping; for example, it can map entity objects to domain objects instead of hand-written mapping code.
    3. Unity/Ninject/Castle Windsor/StructureMap/Spring.Net: dependency injection frameworks; there are many DI frameworks to choose from.
    4. .NET Reflector: a .NET assembly decompiler.
    5. dotPeek: a .NET assembly decompiler.
    6. ILSpy: a .NET assembly decompiler.
    7. memprofiler: a very powerful tool for finding memory leaks and optimizing memory usage.
    8. PostSharp: removes repetitive coding and the code bloat that comes from cross-cutting concerns.
    9. ASPhere: a graphical editor for Web.config.
  3. WCF
    1. SOAP UI: an API testing tool that supports all standard protocols and technologies.
    2. WireShark: a network protocol analyzer for UNIX and Windows. It captures congestion at the TCP level and helps you filter out irrelevant traffic.
    3. Svc TraceViewer: a file-based trace viewer, provided as part of the WCF tooling.
    4. Svc Config Editor: a graphical tool for managing WCF-related configuration.
  4. MSMQ
    1. QueueExplorer 3.4: message operations such as copying, deleting and moving messages, save and load, stress testing, browsing and editing, and more.
  5. LINQ
    1. LINQ Pad: LINQPad is a lightweight tool for testing LINQ queries, as well as .NET scripts written in different languages.
    2. LINQ Insight: LINQ Insight Express embeds into Visual Studio and can analyze LINQ queries at design time.
  6. RegEx
    1. RegEx tester: a regular expression plugin.
    2. regexr: an online regular expression development and testing tool.
    3. regexpal: an online regular expression development and testing tool.
    4. Expresso: a desktop regular expression tool.
    5. RegexMagic: generates regular expressions automatically from text patterns.
  7. Javascript/JQuery/AngularJS
    1. JSHint: a JavaScript code-quality tool with a large set of very strict rules.
    2. JSFiddle: an in-browser development environment for testing HTML, CSS and JavaScript/jQuery code.
    3. Protractor: an end-to-end framework for testing Angular applications.
  8. SQL Server
    1. SQL Profiler: a SQL tracing and monitoring tool.
    2. ExpressProfiler: ExpressProfiler (aka SqlExpress Profiler) is a small, fast replacement for SQL Server Profiler with its own GUI; it works with both Enterprise and non-Enterprise editions of SQL Server.
    3. SQL Sentry Plan explorer: gives an excellent physical view of a SQL query's execution plan.
    4. SQL Complete: an intelligent management tool for SQL Server Management Studio and Visual Studio that formats and optimizes SQL.
    5. NimbleText: a text manipulation and code generation tool.
    6. Query Express: a lightweight SQL query analyzer.
    7. IO Meter: reports specifics about I/O subsystem access.
    8. sqldecryptor: decrypts encrypted SQL Server objects such as stored procedures, functions, triggers and views.
    9. SpatialViewer: previews and creates spatial data.
    10. ClearTrace: imports trace and profiler files and displays summary information.
    11. Internals Viewer for SQL Server: a tool for looking inside the SQL Server storage engine to see how data is physically allocated, organized and stored.
  9. NHibernate
    1. NHibernate Mapping Generator: generates NHibernate mapping files and maps existing database tables to domain classes.
  10. Tally
    1. Tally ERP 9
    2. Tally dll: a .NET dynamic link library that integrates the Tally Accounting software into applications, letting code push or pull data.
  11. Code review
    1. StyleCop: a static code analysis tool that enforces a consistent code style and standards. It can be used inside Visual Studio or integrated into MSBuild projects.
    2. FxCop: a static code analysis tool that enforces development standards by analyzing .NET assemblies.
  12. Traffic capture
    1. WireShark: a network protocol analyzer for Unix and Windows; it can capture traffic at the TCP level.
    2. HTTP Monitor: enables the developer to view all the HTTP traffic between your computer and the Internet, including the request data (such as HTTP headers and form GET and POST data) and the response data (including the HTTP headers and body).
  13. Diagnostics
    1. Glimpse: provides server-side diagnostic data; in an ASP.NET MVC project, for example, it can be added via NuGet.
  14. Performance
    1. PerfMon: monitors system performance using performance counters.
  15. Code converters
    1. Telerik Code Converter: a C#-to-VB and VB-to-C# code converter. It is an online editing tool; choose 'Batch Converter' to upload files in a zip archive.
  16. Screen recording
    1. Wink: with Wink you can easily take screenshots, annotate them with descriptions, and record demos.
  17. Text editors
    1. Notepad++: a source code editor.
    2. Notepad2: a lightweight, feature-rich text editor.
    3. sublimetext: a feature-rich text and code editor.
  18. Documentation
    1. GhostDoc: a Visual Studio extension that automatically generates documentation comments for methods and properties based on their types, names and other contextual information.
    2. helpndoc: a tool for creating help files that can generate multiple output formats from a single source.
  19. Others
    1. FileZilla: an open-source FTP tool; with the FileZilla client you can upload files to an FTP server.
    2. TreeTrim: a code-tree trimming tool that deletes stray debug files, temporary files and the like.
    3. BrowserStack: a cross-browser testing tool.
    4. BugShooting: screen capture software that captures and attaches screenshots to work items, bugs, issue-tracking items and so on.
    5. Postman: a REST client (also available as a Chrome extension) that sends HTTP requests and analyzes the responses of REST applications.
    6. Web developer checklist: a checklist for managing development plans.
    7. PowerGUI: helps you quickly adopt and use PowerShell to manage your Windows development environment effectively.
    8. Beyond Compare: file comparison.
    9. Devart Codecompare: a file diff tool that understands C#, C++ and VB code structure. Includes a folder comparison tool, a standalone app for comparing and merging folders and files, and code review support.
How to recover deleted data from SQL Server

October 22, 2011 by Muhammad Imran

In all my years of working with SQL Server, one of the most commonly asked questions has always been "How can we recover deleted records?"

Now, it is very easy to recover deleted data from your SQL Server 2005 or above. (Note: this script can recover the following data types and is compatible with CS collations.)

  • image
  • text
  • uniqueidentifier
  • tinyint
  • smallint
  • int
  • smalldatetime
  • real
  • money
  • datetime
  • float
  • sql_variant
  • ntext
  • bit
  • decimal
  • numeric
  • smallmoney
  • bigint
  • varbinary
  • varchar
  • binary
  • char
  • timestamp
  • nvarchar
  • nchar
  • xml
  • sysname

Let me explain this with a simple example.

--Create Table
Create Table [Test_Table]
(
[Col_image] image,
[Col_text] text,
[Col_uniqueidentifier] uniqueidentifier,
[Col_tinyint] tinyint,
[Col_smallint] smallint,
[Col_int] int,
[Col_smalldatetime] smalldatetime,
[Col_real] real,
[Col_money] money,
[Col_datetime] datetime,
[Col_float] float,
[Col_Int_sql_variant] sql_variant,
[Col_numeric_sql_variant] sql_variant,
[Col_varchar_sql_variant] sql_variant,
[Col_uniqueidentifier_sql_variant] sql_variant,
[Col_Date_sql_variant] sql_variant,
[Col_varbinary_sql_variant] sql_variant,
[Col_ntext] ntext,
[Col_bit] bit,
[Col_decimal] decimal(18,4),
[Col_numeric] numeric(18,4),
[Col_smallmoney] smallmoney,
[Col_bigint] bigint,
[Col_varbinary] varbinary(Max),
[Col_varchar] varchar(Max),
[Col_binary] binary(8),
[Col_char] char,
[Col_timestamp] timestamp,
[Col_nvarchar] nvarchar(Max),
[Col_nchar] nchar,
[Col_xml] xml,
[Col_sysname] sysname
)
GO

--Insert data into it
INSERT INTO [Test_Table]
([Col_image]
,[Col_text]
,[Col_uniqueidentifier]
,[Col_tinyint]
,[Col_smallint]
,[Col_int]
,[Col_smalldatetime]
,[Col_real]
,[Col_money]
,[Col_datetime]
,[Col_float]
,[Col_Int_sql_variant]
,[Col_numeric_sql_variant]
,[Col_varchar_sql_variant]
,[Col_uniqueidentifier_sql_variant]
,[Col_Date_sql_variant]
,[Col_varbinary_sql_variant]
,[Col_ntext]
,[Col_bit]
,[Col_decimal]
,[Col_numeric]
,[Col_smallmoney]
,[Col_bigint]
,[Col_varbinary]
,[Col_varchar]
,[Col_binary]
,[Col_char]
,[Col_nvarchar]
,[Col_nchar]
,[Col_xml]
,[Col_sysname])
VALUES
(CONVERT(IMAGE,REPLICATE('A',4000))
,REPLICATE('B',8000)
,NEWID()
,10
,20
,3000
,GETDATE()
,4000
,5000
,getdate()+15
,66666.6666
,777777
,88888.8888
,REPLICATE('C',8000)
,newid()
,getdate()+30
,CONVERT(VARBINARY(8000),REPLICATE('D',8000))
,REPLICATE('E',4000)
,1
,99999.9999
,10101.1111
,1100
,123456
,CONVERT(VARBINARY(MAX),REPLICATE('F',8000))
,REPLICATE('G',8000)
,0x4646464
,'H'
,REPLICATE('I',4000)
,'J'
,CONVERT(XML,REPLICATE('K',4000))
,REPLICATE('L',100)
)
GO

--Delete the data
Delete from Test_Table
Go

--Verify the data
Select * from Test_Table
Go

--Recover the deleted data without a date range
EXEC Recover_Deleted_Data_Proc 'test','dbo.Test_Table'
GO

--Recover the deleted data with a date range
EXEC Recover_Deleted_Data_Proc 'test','dbo.Test_Table','2012-06-01','2012-06-30'

Download Stored Procedure:

Now, you need to create the procedure that recovers your deleted data.

Explanation:

How does it work? Let's go through it step by step. The process requires seven easy steps:

Step 1:

We need to get the deleted records out of SQL Server. Using the standard SQL Server function fn_dblog, we can easily read the transaction log, including the deleted data. Since we need only the deleted records, we apply three filters (Context, Operation & AllocUnitName):

  • Context ('LCX_MARK_AS_GHOST' and 'LCX_HEAP')
  • Operation ('LOP_DELETE_ROWS')
  • AllocUnitName ('dbo.Student') -- schema + table name

Here is the code snippet:

Select [RowLog Contents 0]
FROM sys.fn_dblog(NULL, NULL)
WHERE AllocUnitName = 'dbo.Student'
AND Context IN ('LCX_MARK_AS_GHOST', 'LCX_HEAP')
AND Operation IN ('LOP_DELETE_ROWS')

This query returns a number of columns with different information, but we only need the column "RowLog Contents 0" to get the deleted data.

The column "RowLog Contents 0" will look like this:

0x300018000100000000000000006B000056492020590000000500E001002800426F62206A65727279

Step 2:

Now we have the deleted data, but as hex values. SQL Server stores this data in a specific sequence, so we can recover it; first, though, we need to understand the format. The format is described in detail in Kalen Delaney's SQL Server Internals book.

  • 1 Byte : Status Bit A
  • 1 Byte : Status Bit B
  • 2 Bytes : Fixed length size
  • n Bytes : Fixed length data
  • 2 Bytes : Total number of columns
  • n Bytes : NULL bitmap (1 bit per column; 1 indicates that the column is null, 0 indicates that it is not)
  • 2 Bytes : Number of variable-length columns
  • n Bytes : Column offset array (2 bytes per variable-length column)
  • n Bytes : Data for variable-length columns

So the hex data in "RowLog Contents 0" is equal to:

"Status Bit A + Status Bit B + Fixed length size + Fixed length data + Total number of columns + NULL bitmap + Number of variable-length columns + Column offset array + Data for variable-length columns."

Step 3:

Now we need to break RowLog Contents 0 (the hex value of our deleted data) into the structure defined above. (Color codes in the original post were for reference only.)

  • [Fixed Length Data] = Substring (RowLog Contents 0, Status Bit A + Status Bit B + 1, 2 bytes)
  • [Total No of Columns] = Substring (RowLog Contents 0, [Fixed Length Data] + 1, 2 bytes)
  • [Null Bitmap Length] = Ceiling ([Total No of Columns] / 8.0)
  • [Null Bytes] = Substring (RowLog Contents 0, Status Bit A + Status Bit B + [Fixed Length Data] + 1, [Null Bitmap Length])
  • [Total No of Variable Columns] = Substring (RowLog Contents 0, Status Bit A + Status Bit B + [Fixed Length Data] + 1, [Null Bitmap Length] + 2)
  • [Column Offset Array] = Substring (RowLog Contents 0, Status Bit A + Status Bit B + [Fixed Length Data] + 1, [Null Bitmap Length] + 2, [Total No of Variable Columns] * 2)
  • [Variable Column Start] = Status Bit A + Status Bit B + [Fixed Length Data] + [Null Bitmap Length] + 2 + ([Total No of Variable Columns] * 2)
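As a worked sketch of the first of these formulas (the substring positions mirror the recovery procedure shown later; the row value is the sample from Step 2):

-- Status Bits A and B occupy the first two bytes; the next two bytes,
-- byte-reversed, hold the size of the fixed-length portion of the row.
DECLARE @RowLogContents VARBINARY(8000)
SET @RowLogContents = 0x300018000100000000000000006B000056492020590000000500E001002800426F62206A65727279

SELECT CONVERT(SMALLINT, CONVERT(BINARY(2),
       REVERSE(SUBSTRING(@RowLogContents, 2 + 1, 2)))) AS [Fixed Length Size]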

Step 4:

Now that we have the data split up, we can determine which column values are null by using the null bytes. To do this, convert the null bytes (a hex value) into binary format (as discussed, 1 indicates the column is null and 0 means it holds data). Here the null bitmap is 00000111: the Student table (used as the sample) has only five columns, and the first five bits of the null bitmap are all 0, meaning none of the values are null.

Step 5:

Now we have the primary data split (Step 3) and the null values (Step 4). Next we use the code snippet below to get the column metadata for the table: column name, size, precision, scale and, most importantly, the leaf offset, whose sign tells us whether the column holds fixed-size data (>= 1) or variable-size data (<= -1).

Select * from sys.allocation_units allocunits
INNER JOIN sys.partitions partitions
ON (allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
INNER JOIN sys.system_internals_partition_columns cols
ON cols.partition_id = partitions.partition_id
LEFT OUTER JOIN syscolumns
ON syscolumns.id = partitions.object_id
AND syscolumns.colid = cols.partition_column_id

We join this with the data collected so far (Steps 1-4) on allocunits.[Allocation_Unit_Id]. At this point we know enough about the table and the data, so we use this information to break [RowLog Contents 0] into per-column data, still in hex. Here we need to take care whether each column has a fixed or a variable size.

Step 6:

In Step 5 we collected the hex value for each column. Now we convert each value according to its data type, as defined by [System_type_id]; each type has a different conversion mechanism.


--NVARCHAR, NCHAR
WHEN system_type_id IN (231, 239) THEN LTRIM(RTRIM(CONVERT(NVARCHAR(max),hex_Value)))

--VARCHAR, CHAR
WHEN system_type_id IN (167,175) THEN LTRIM(RTRIM(CONVERT(VARCHAR(max),REPLACE(hex_Value, 0x00, 0x20))))

--TINYINT
WHEN system_type_id = 48 THEN CONVERT(VARCHAR(MAX), CONVERT(TINYINT, CONVERT(BINARY(1), REVERSE(hex_Value))))

--SMALLINT
WHEN system_type_id = 52 THEN CONVERT(VARCHAR(MAX), CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(hex_Value))))

--INT
WHEN system_type_id = 56 THEN CONVERT(VARCHAR(MAX), CONVERT(INT, CONVERT(BINARY(4), REVERSE(hex_Value))))

--BIGINT
WHEN system_type_id = 127 THEN CONVERT(VARCHAR(MAX), CONVERT(BIGINT, CONVERT(BINARY(8), REVERSE(hex_Value))))

--DATETIME
WHEN system_type_id = 61 THEN CONVERT(VARCHAR(MAX), CONVERT(DATETIME, CONVERT(VARBINARY(MAX), REVERSE(hex_Value))), 100)

--SMALLDATETIME
WHEN system_type_id = 58 THEN CONVERT(VARCHAR(MAX), CONVERT(SMALLDATETIME, CONVERT(VARBINARY(MAX), REVERSE(hex_Value))), 100)

--NUMERIC
WHEN system_type_id = 108 THEN CONVERT(VARCHAR(MAX), CAST(CONVERT(NUMERIC(18,14), CONVERT(VARBINARY, CONVERT(VARBINARY, xprec) + CONVERT(VARBINARY, xscale)) + CONVERT(VARBINARY(1), 0) + hex_Value) AS FLOAT))

--MONEY, SMALLMONEY
WHEN system_type_id IN (60,122) THEN CONVERT(VARCHAR(MAX), CONVERT(MONEY, CONVERT(VARBINARY(MAX), REVERSE(hex_Value))), 2)

--DECIMAL
WHEN system_type_id = 106 THEN CONVERT(VARCHAR(MAX), CAST(CONVERT(DECIMAL(38,34), CONVERT(VARBINARY, CONVERT(VARBINARY, xprec) + CONVERT(VARBINARY, xscale)) + CONVERT(VARBINARY(1), 0) + hex_Value) AS FLOAT))

--BIT
WHEN system_type_id = 104 THEN CONVERT(VARCHAR(MAX), CONVERT(BIT, CONVERT(BINARY(1), hex_Value) % 2))

--FLOAT
WHEN system_type_id = 62 THEN RTRIM(LTRIM(STR(CONVERT(FLOAT, SIGN(CAST(CONVERT(VARBINARY(MAX), REVERSE(hex_Value)) AS BIGINT)) * (1.0 + (CAST(CONVERT(VARBINARY(MAX), REVERSE(hex_Value)) AS BIGINT) & 0x000FFFFFFFFFFFFF) * POWER(CAST(2 AS FLOAT), -52)) * POWER(CAST(2 AS FLOAT), ((CAST(CONVERT(VARBINARY(MAX), REVERSE(hex_Value)) AS BIGINT) & 0x7ff0000000000000) / EXP(52 * LOG(2)) - 1023))), 53, LEN(hex_Value))))

--REAL
WHEN system_type_id = 59 THEN LEFT(LTRIM(STR(CAST(SIGN(CAST(CONVERT(VARBINARY(MAX), REVERSE(hex_Value)) AS BIGINT)) * (1.0 + (CAST(CONVERT(VARBINARY(MAX), REVERSE(hex_Value)) AS BIGINT) & 0x007FFFFF) * POWER(CAST(2 AS REAL), -23)) * POWER(CAST(2 AS REAL), (((CAST(CONVERT(VARBINARY(MAX), REVERSE(hex_Value)) AS INT)) & 0x7f800000) / EXP(23 * LOG(2)) - 127)) AS REAL), 23, 23)), 8)

--BINARY, VARBINARY
WHEN system_type_id IN (165,173) THEN (CASE WHEN CHARINDEX(0x, CAST('' AS XML).value('xs:hexBinary(sql:column("hex_value"))', 'varbinary(max)')) = 0 THEN '0x' ELSE '' END) + CAST('' AS XML).value('xs:hexBinary(sql:column("hex_value"))', 'varchar(max)')

--UNIQUEIDENTIFIER
WHEN system_type_id = 36 THEN CONVERT(VARCHAR(MAX), CONVERT(UNIQUEIDENTIFIER, hex_Value))

Step 7:

Finally, we pivot the data, and you will see the result: THE DELETED DATA IS BACK.

Note: this data is for display only. It is not back in your original table, but you can insert it into the table.
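One possible way to put the rows back is INSERT ... EXEC, sketched below. This assumes the procedure's result set lines up column-for-column with Test_Table, so treat it as a starting point rather than a ready-made recipe:

-- INSERT ... EXEC captures a procedure's result set into a table,
-- provided the column lists match in number and type.
INSERT INTO Test_Table
EXEC Recover_Deleted_Data_Proc 'test','dbo.Test_Table'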

I’d really appreciate your comments on my posts; whether you agree or not, do comment.

 

Recover_Deleted_Data_Proc.sql

How to find user who ran DROP or DELETE statements on your SQL Server Objects

Problem

Someone has dropped a table from your database and you want to track who did it.  Or someone has deleted some data from a table, but no one will say who did.  In this tip, we will look at how you can use the transaction log to track down some of this information.

Solution

I have already discussed how to read the transaction log file in my last tip, "How to read SQL Server Database Log file". Before reading this tip, I recommend that you read the previous tip to understand how the transaction log file logs all database activity.

Here we will use the same undocumented function "fn_dblog" to find any unauthorized or unapproved deletes or table drops. This tip will help you track down any unauthorized or unwanted user who has dropped a table or deleted data from a table. I strongly suggest testing any undocumented functions in a lab environment first.

One way to find such users is with the help of the default trace, because the default trace captures and tracks database activity performed on your instance, but if you have a busy system the trace files may roll over far too fast and you may not be able to catch some of the changes in your database.  But these changes are also tracked in the transaction log file of the database and we will use this to find the users in question.
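For reference, this is roughly how you would check the default trace before falling back to the log (sys.traces and fn_trace_gettable are the standard interfaces; the exact columns captured can vary by version):

-- Read object- and security-related events back from the default trace files.
SELECT t.StartTime, t.DatabaseName, t.ObjectName, t.LoginName
FROM sys.traces s
CROSS APPLY sys.fn_trace_gettable(s.path, DEFAULT) t
WHERE s.is_default = 1
  AND t.ObjectName IS NOT NULL;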

Finding a user who ran a DELETE statement

Step 1

Before moving ahead, we will create a database and a table on which I will delete some data. Run the below SQL code to create a database and table.

--Create DB.
USE [master];
GO
CREATE DATABASE ReadingDBLog;
GO
-- Create tables.
USE ReadingDBLog;
GO
CREATE TABLE [Location] (
    [Sr.No] INT IDENTITY,
    [Date] DATETIME DEFAULT GETDATE (),
    [City] CHAR (25) DEFAULT 'Bangalore');

Step 2

We have created a database named "ReadingDBLog" and a table "Location" with three columns. Now we will insert 100 rows into the table.

USE ReadingDBLog
GO
INSERT INTO Location DEFAULT VALUES ;
GO 100

Step 3

Now go ahead and delete some rows to check who has deleted your data.

USE ReadingDBLog
GO
DELETE Location WHERE [Sr.No]=10
GO
SELECT * FROM Location WHERE [Sr.No]=10
GO
Delete a row from the table 'Location'

You can see in the above screenshot that a row has been deleted from the table “Location”. I also ran a SELECT statement to verify the data has been deleted.

Step 4

Now we have to search the transaction log file to find the info about the deleted rows. Run the below command to get info about all deleted transactions.

USE ReadingDBLog
GO
SELECT 
    [Transaction ID],
    Operation,
    Context,
    AllocUnitName
    
FROM 
    fn_dblog(NULL, NULL) 
WHERE 
    Operation = 'LOP_DELETE_ROWS'

 

Find all the deleted rows info from t-log file

All transactions which have executed a DELETE statement will display by running the above command and we can see this in the above screenshot. As we are searching for deleted data in table Location, we can see this in the last row. We can find the table name in the “AllocUnitName” column. The last row says a DELETE statement has been performed on a HEAP table ‘dbo.Location’ under transaction ID 0000:000004ce. Now capture the transaction ID from here for our next command.

Step 5

We found the transaction ID from the above command which we will use in the below command to get the transaction SID of the user who has deleted the data.

USE ReadingDBLog
GO
SELECT
    Operation,
    [Transaction ID],
    [Begin Time],
    [Transaction Name],
    [Transaction SID]
FROM
    fn_dblog(NULL, NULL)
WHERE
    [Transaction ID] = '0000:000004ce'
AND
    [Operation] = 'LOP_BEGIN_XACT'

 

Find the transaction SID of the user

Here we can also see the [Begin Time] of this transaction, which helps narrow down the possibilities: once you know roughly when the data was deleted, you can filter on the begin time of the command.

We can read the above output as: "A DELETE statement began at 2013/10/14 12:55:17:630 under transaction ID 0000:000004ce by the user with transaction SID 0x0105000000000005150000009F11BA296C79F97398D0CF19E8030000."

Now our next step is to convert the transaction SID hexadecimal value into text to find the real name of the user.

Step 6

Now we will figure out who ran the DELETE command. We will copy the hexadecimal value from the transaction SID column for the DELETE transaction and then pass that value into the SUSER_SNAME () function.

USE MASTER
GO   
SELECT SUSER_SNAME(0x0105000000000005150000009F11BA296C79F97398D0CF19E8030000)

 

Find the login name with the help of transaction SID

Now we have found the user that did the delete.

Finding a user who ran a DROP statement

Step 1

Here I am going to drop table Location.

USE ReadingDBLog
GO
DROP TABLE Location

 

Drop a table

Step 2

Similarly, if you drop any object or perform any other operation in your database, it gets logged in the transaction log file and is visible through the fn_dblog function.

Run the below script to display all log records logged with the DROPOBJ transaction name.

USE ReadingDBLog
GO
SELECT 
Operation,
[Transaction Id],
[Transaction SID],
[Transaction Name],
 [Begin Time],
   [SPID],
   Description
FROM fn_dblog (NULL, NULL)
WHERE [Transaction Name] = 'DROPOBJ'
GO

 

Finding a user transaction SID who ran DROP statement for table location

Here we can find the transaction SID and all required info which we need to find the user.

Step 3

Now we can pass the transaction SID into system function SUSER_SNAME () to get the exact user name.

SELECT SUSER_SNAME(0x0105000000000005150000009F11BA296C79F97398D0CF19E8030000) 

 

Finding a user who ran DROP statement for table location

Once again, we found the user in question.

Next Step

Use this function to do more research into your transaction log file. There is a lot of informative data in more than 100 columns when you use this command. You may also need to look into this and correlate with other data. Explore more knowledge on SQL Server Database Administration Tips.


SQL Server – How to find Who Deleted What records at What Time


Let me explain it with a simple example:

Create Table tbl_Sample 
([ID] int identity(1,1) ,
[Name] varchar(50))
GO
Insert into tbl_Sample values ('Letter A')
Insert into tbl_Sample values ('Letter B')
Insert into tbl_Sample values ('Letter C')

Select * from tbl_Sample

Now, you can change logins and delete records.
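For instance, here is a hedged setup sketch for deleting under a different login so that its SID lands in the transaction log ('SomeOtherLogin' is a hypothetical login name, and EXECUTE AS requires IMPERSONATE permission):

-- Impersonate another login, delete a row, then revert to your own login.
EXECUTE AS LOGIN = 'SomeOtherLogin';
DELETE FROM tbl_Sample WHERE [ID] = 2;
REVERT;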

Given below is the code that can give you the recovered data with the user name who deleted it and the date and time as well.

-- Script Name: Recover_Deleted_Data_With_UID_Date_Time_Proc
-- Script Type : Recovery Procedure 
-- Develop By: Muhammad Imran
-- Date Created: 24 Oct 2012
-- Modify Date: 
-- Version    : 1.0
-- Notes      :

CREATE PROCEDURE Recover_Deleted_Data_With_UID_Date_Time_Proc
@Database_Name NVARCHAR(MAX),
@SchemaName_n_TableName NVARCHAR(Max),
@Date_From DATETIME='1900/01/01',
@Date_To DATETIME ='9999/12/31'
AS

DECLARE @RowLogContents VARBINARY(8000)
DECLARE @TransactionID NVARCHAR(Max)
DECLARE @AllocUnitID BIGINT
DECLARE @AllocUnitName NVARCHAR(Max)
DECLARE @SQL NVARCHAR(Max)
DECLARE @Compatibility_Level INT


SELECT @Compatibility_Level=dtb.compatibility_level
FROM
master.sys.databases AS dtb WHERE dtb.name=@Database_Name

Print @Compatibility_Level
--IF ISNULL(@Compatibility_Level,0)<=80
--BEGIN
--	RAISERROR('The compatibility level should be equal to or greater SQL SERVER 2005 (90)',16,1)
--	RETURN
--END

IF (SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE [TABLE_SCHEMA]+'.'+[TABLE_NAME]=@SchemaName_n_TableName)=0
BEGIN
	RAISERROR('Could not found the table in the defined database',16,1)
	RETURN
END

DECLARE @bitTable TABLE
(
  [ID] INT,
  [Bitvalue] INT
)
--Create table to set the bit position of one byte.

INSERT INTO @bitTable
SELECT 0,2 UNION ALL
SELECT 1,2 UNION ALL
SELECT 2,4 UNION ALL
SELECT 3,8 UNION ALL
SELECT 4,16 UNION ALL
SELECT 5,32 UNION ALL
SELECT 6,64 UNION ALL
SELECT 7,128

--Create table to collect the row data.
DECLARE @DeletedRecords TABLE
(
    [Row ID]			INT IDENTITY(1,1),
    [RowLogContents]	VARBINARY(8000),
    [AllocUnitID]		BIGINT,
	[Transaction ID]	NVARCHAR(Max),
    [FixedLengthData]	SMALLINT,
	[TotalNoOfCols]		SMALLINT,
	[NullBitMapLength]	SMALLINT,
	[NullBytes]			VARBINARY(8000),
	[TotalNoofVarCols]	SMALLINT,
	[ColumnOffsetArray]	VARBINARY(8000),
	[VarColumnStart]	SMALLINT,
	[Slot ID]			INT,
    [NullBitMap]		VARCHAR(MAX)
    
)
--Create a common table expression to get all the row data plus how many bytes we have for each row.
;WITH RowData AS (
SELECT 

[RowLog Contents 0] AS [RowLogContents] 

,[AllocUnitID] AS [AllocUnitID] 

,[Transaction ID] AS [Transaction ID]  

--[Fixed Length Data] = Substring (RowLog content 0, Status Bit A+ Status Bit B + 1,2 bytes)
,CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) AS [FixedLengthData]  --@FixedLengthData

-- [TotalnoOfCols] =  Substring (RowLog content 0, [Fixed Length Data] + 1,2 bytes)
,CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2)
,REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2)))) as  [TotalNoOfCols]

--[NullBitMapLength]=ceiling([Total No of Columns] /8.0)
,CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2)
,REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)) as [NullBitMapLength] 

--[Null Bytes] = Substring (RowLog content 0, Status Bit A+ Status Bit B + [Fixed Length Data] +1, [NullBitMapLength] )
,SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3,
CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2)
,REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0))) as [NullBytes]

--[TotalNoofVarCols] = Substring (RowLog content 0, Status Bit A+ Status Bit B + [Fixed Length Data] +1, [Null Bitmap length] + 2 )
,(CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1) In (0x10,0x30,0x70) THEN
CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0],
CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3
+ CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2)
,REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)), 2))))  ELSE null  END) AS [TotalNoofVarCols] 

--[ColumnOffsetArray]= Substring (RowLog content 0, Status Bit A+ Status Bit B + [Fixed Length Data] +1, [Null Bitmap length] + 2 , [TotalNoofVarCols]*2 )
,(CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1) In (0x10,0x30,0x70) THEN
SUBSTRING([RowLog Contents 0]
, CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3
+ CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2)
,REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)) + 2
, (CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1) In (0x10,0x30,0x70) THEN
CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0],
CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3
+ CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2)
,REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)), 2))))  ELSE null  END)
* 2)  ELSE null  END) AS [ColumnOffsetArray] 

--	Variable column Start = Status Bit A+ Status Bit B + [Fixed Length Data] + [Null Bitmap length] + 2+([TotalNoofVarCols]*2)
,CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1)In (0x10,0x30,0x70)
THEN  (
CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 4 

+ CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2)
,REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)) 

+ ((CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1) In (0x10,0x30,0x70) THEN
CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0],
CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3
+ CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2)
,REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)), 2))))  ELSE null  END) * 2)) 

ELSE null End AS [VarColumnStart]
,[Slot ID]
FROM sys.fn_dblog(NULL, NULL)
WHERE
AllocUnitId IN
(SELECT [Allocation_unit_id] FROM sys.allocation_units allocunits
INNER JOIN sys.partitions partitions ON (allocunits.type IN (1, 3)  
AND partitions.hobt_id = allocunits.container_id) OR (allocunits.type = 2 
AND partitions.partition_id = allocunits.container_id)  
WHERE object_id=object_ID('' + @SchemaName_n_TableName + ''))

AND Context IN ('LCX_MARK_AS_GHOST', 'LCX_HEAP') AND Operation in ('LOP_DELETE_ROWS') 
And SUBSTRING([RowLog Contents 0], 1, 1)In (0x10,0x30,0x70)

/*Use this subquery to filter the date*/
AND [TRANSACTION ID] IN (SELECT DISTINCT [TRANSACTION ID] FROM    sys.fn_dblog(NULL, NULL) 
WHERE Context IN ('LCX_NULL') AND Operation in ('LOP_BEGIN_XACT')  
And [Transaction Name]='DELETE'
And  CONVERT(NVARCHAR(11),[Begin Time]) BETWEEN @Date_From AND @Date_To)),

--Use this numbers-table technique to repeat each row once for every byte in the row.
N1 (n) AS (SELECT 1 UNION ALL SELECT 1),
N2 (n) AS (SELECT 1 FROM N1 AS X, N1 AS Y),
N3 (n) AS (SELECT 1 FROM N2 AS X, N2 AS Y),
N4 (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY X.n)
           FROM N3 AS X, N3 AS Y)



INSERT INTO @DeletedRecords
SELECT	RowLogContents
		,[AllocUnitID]
		,[Transaction ID]
		,[FixedLengthData]
		,[TotalNoOfCols]
		,[NullBitMapLength]
		,[NullBytes]
		,[TotalNoofVarCols]
		,[ColumnOffsetArray]
		,[VarColumnStart]
        ,[Slot ID]
         ---Get the Null value against each column (1 means null zero means not null)
		,[NullBitMap]=(REPLACE(STUFF((SELECT ',' +
		(CASE WHEN [ID]=0 THEN CONVERT(NVARCHAR(1),(SUBSTRING(NullBytes, n, 1) % 2))  ELSE CONVERT(NVARCHAR(1),((SUBSTRING(NullBytes, n, 1) / [Bitvalue]) % 2)) END) --as [nullBitMap]
        
FROM
N4 AS Nums
Join RowData AS C ON n<=NullBitMapLength
Cross Join @bitTable WHERE C.[RowLogContents]=D.[RowLogContents] ORDER BY [RowLogContents],n ASC FOR XML PATH('')),1,1,''),',',''))
FROM RowData D

IF (SELECT COUNT(*) FROM @DeletedRecords)=0
BEGIN
	RAISERROR('There is no data in the log as per the search criteria',16,1)
	RETURN
END

DECLARE @ColumnNameAndData TABLE
(
 [Transaction ID]   varchar(100),
 [Row ID]			int,
 [Rowlogcontents]	varbinary(Max),
 [NAME]				sysname,
 [nullbit]			smallint,
 [leaf_offset]		smallint,
 [length]			smallint,
 [system_type_id]	tinyint,
 [bitpos]			tinyint,
 [xprec]			tinyint,
 [xscale]			tinyint,
 [is_null]			int,
 [Column value Size]int,
 [Column Length]	int,
 [hex_Value]		varbinary(max),
 [Slot ID]			int,
 [Update]			int
)

--Create common table expression and join it with the rowdata table
-- to get each column details
/*This part is for variable data columns*/
--@RowLogContents, 
--(col.columnOffValue - col.columnLength) + 1,
--col.columnLength
--)
INSERT INTO @ColumnNameAndData
SELECT 
[Transaction ID],
[Row ID],
Rowlogcontents,
NAME ,
cols.leaf_null_bit AS nullbit,
leaf_offset,
ISNULL(syscolumns.length, cols.max_length) AS [length],
cols.system_type_id,
cols.leaf_bit_position AS bitpos,
ISNULL(syscolumns.xprec, cols.precision) AS xprec,
ISNULL(syscolumns.xscale, cols.scale) AS xscale,
SUBSTRING([nullBitMap], cols.leaf_null_bit, 1) AS is_null,
(CASE WHEN leaf_offset<1 and SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0 
THEN
(Case When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) >30000
THEN
CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) - POWER(2, 15)
ELSE
CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2))))
END)
END)  AS [Column value Size],

(CASE WHEN leaf_offset<1 and SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0  THEN
(Case 

When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) >30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])<30000
THEN  (Case When [System_type_id]In (35,34,99) Then 16 else 24  end)

When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) >30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])>30000
THEN  (Case When [System_type_id]In (35,34,99) Then 16 else 24  end) --24 

When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) <30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])<30000
THEN (CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2))))
- ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart]))

When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) <30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])>30000

THEN POWER(2, 15) +CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2))))
- ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])

END)

END) AS [Column Length]

,(CASE WHEN SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=1 THEN  NULL ELSE
 SUBSTRING
 (
 Rowlogcontents, 
 (

(Case When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) >30000
THEN
CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) - POWER(2, 15)
ELSE
CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2))))
END)

 - 
(Case When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) >30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])<30000

THEN  (Case When [System_type_id]In (35,34,99) Then 16 else 24  end) --24 
When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) >30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])>30000

THEN  (Case When [System_type_id]In (35,34,99) Then 16 else 24  end) --24 
When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) <30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])<30000

THEN CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2))))
- ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])

When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) <30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])>30000

THEN POWER(2, 15) +CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2))))
- ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])

END)

) + 1,
(Case When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) >30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])<30000

THEN  (Case When [System_type_id] In (35,34,99) Then 16 else 24  end) --24 
When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) >30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])>30000

THEN  (Case When [System_type_id] In (35,34,99) Then 16 else 24  end) --24 
When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) <30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])<30000

THEN ABS(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2))))
- ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart]))

When CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2)))) <30000 And 
ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])>30000

THEN POWER(2, 15) +CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * leaf_offset*-1) - 1, 2))))
- ISNULL(NULLIF(CONVERT(INT, CONVERT(BINARY(2), REVERSE (SUBSTRING ([ColumnOffsetArray], (2 * ((leaf_offset*-1) - 1)) - 1, 2)))), 0), [varColumnStart])

END)
)

END) AS hex_Value
,[Slot ID]
,0
FROM @DeletedRecords A
Inner Join sys.allocation_units allocunits On A.[AllocUnitId]=allocunits.[Allocation_Unit_Id]
INNER JOIN sys.partitions partitions ON (allocunits.type IN (1, 3)
AND partitions.hobt_id = allocunits.container_id) OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
INNER JOIN sys.system_internals_partition_columns cols ON cols.partition_id = partitions.partition_id
LEFT OUTER JOIN syscolumns ON syscolumns.id = partitions.object_id AND syscolumns.colid = cols.partition_column_id
WHERE leaf_offset<0
UNION
/*This part is for fixed data columns*/
SELECT  
[Transaction ID],
[Row ID],
Rowlogcontents,
NAME ,
cols.leaf_null_bit AS nullbit,
leaf_offset,
ISNULL(syscolumns.length, cols.max_length) AS [length],
cols.system_type_id,
cols.leaf_bit_position AS bitpos,
ISNULL(syscolumns.xprec, cols.precision) AS xprec,
ISNULL(syscolumns.xscale, cols.scale) AS xscale,
SUBSTRING([nullBitMap], cols.leaf_null_bit, 1) AS is_null,
(SELECT TOP 1 ISNULL(SUM(CASE WHEN C.leaf_offset >1 THEN max_length ELSE 0 END),0) FROM
sys.system_internals_partition_columns C WHERE cols.partition_id =C.partition_id And C.leaf_null_bit<cols.leaf_null_bit)+5 AS [Column value Size],
syscolumns.length AS [Column Length]

,CASE WHEN SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=1 THEN NULL ELSE
SUBSTRING
(
Rowlogcontents,(SELECT TOP 1 ISNULL(SUM(CASE WHEN C.leaf_offset >1 And C.leaf_bit_position=0 THEN max_length ELSE 0 END),0) FROM
sys.system_internals_partition_columns C where cols.partition_id =C.partition_id And C.leaf_null_bit<cols.leaf_null_bit)+5
,syscolumns.length) END AS hex_Value
,[Slot ID]
,0
FROM @DeletedRecords A
Inner Join sys.allocation_units allocunits ON A.[AllocUnitId]=allocunits.[Allocation_Unit_Id]
INNER JOIN sys.partitions partitions ON (allocunits.type IN (1, 3)
AND partitions.hobt_id = allocunits.container_id) OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
INNER JOIN sys.system_internals_partition_columns cols ON cols.partition_id = partitions.partition_id
LEFT OUTER JOIN syscolumns ON syscolumns.id = partitions.object_id AND syscolumns.colid = cols.partition_column_id
WHERE leaf_offset>0
Order By nullbit

Declare @BitColumnByte as int
Select @BitColumnByte=CONVERT(INT, ceiling( Count(*)/8.0)) from @ColumnNameAndData Where [System_Type_id]=104

;With N1 (n) AS (SELECT 1 UNION ALL SELECT 1),
N2 (n) AS (SELECT 1 FROM N1 AS X, N1 AS Y),
N3 (n) AS (SELECT 1 FROM N2 AS X, N2 AS Y),
N4 (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY X.n)
           FROM N3 AS X, N3 AS Y),
CTE As(
Select RowLogContents,[nullbit]
        ,[BitMap]=Convert(varbinary(1),Convert(int,Substring((REPLACE(STUFF((SELECT ',' +
		(CASE WHEN [ID]=0 THEN CONVERT(NVARCHAR(1),(SUBSTRING(hex_Value, n, 1) % 2))  ELSE CONVERT(NVARCHAR(1),((SUBSTRING(hex_Value, n, 1) / [Bitvalue]) % 2)) END) --as [nullBitMap]

from N4 AS Nums
Join @ColumnNameAndData AS C ON n<=@BitColumnByte And [System_Type_id]=104 And bitpos=0
Cross Join @bitTable WHERE C.[RowLogContents]=D.[RowLogContents] ORDER BY [RowLogContents],n ASC FOR XML PATH('')),1,1,''),',','')),bitpos+1,1)))
FROM @ColumnNameAndData D Where  [System_Type_id]=104)

Update A Set [hex_Value]=[BitMap]
from @ColumnNameAndData  A
Inner Join CTE B On A.[RowLogContents]=B.[RowLogContents]
And A.[nullbit]=B.[nullbit]


/**************Check for BLOB DATA TYPES******************************/
DECLARE @Fileid INT
DECLARE @Pageid INT
DECLARE @Slotid INT
DECLARE @CurrentLSN INT
DECLARE @LinkID INT
DECLARE @Context VARCHAR(50)
DECLARE @ConsolidatedPageID VARCHAR(MAX)
DECLARE @LCX_TEXT_MIX VARBINARY(MAX)

declare @temppagedata table 
(
[ParentObject] sysname,
[Object] sysname,
[Field] sysname,
[Value] sysname)

declare @pagedata table 
(
[Page ID] sysname,
[File IDS] int,
[Page IDS] int,
[AllocUnitId] bigint,
[ParentObject] sysname,
[Object] sysname,
[Field] sysname,
[Value] sysname)

DECLARE @ModifiedRawData TABLE
(
  [ID] INT IDENTITY(1,1),
  [PAGE ID] VARCHAR(MAX),
  [FILE IDS] INT,
  [PAGE IDS] INT,
  [Slot ID]  INT,
  [AllocUnitId] BIGINT,
  [RowLog Contents 0_var] VARCHAR(Max),
  [RowLog Length] VARCHAR(50),
  [RowLog Len] INT,
  [RowLog Contents 0] VARBINARY(Max),
  [Link ID] INT default (0),
  [Update] INT
)

            DECLARE Page_Data_Cursor CURSOR FOR 
            /*We need to filter LOP_MODIFY_ROW, LOP_MODIFY_COLUMNS from the log for deleted records of BLOB data types & get their Slot No, Page ID & AllocUnit ID*/
			SELECT LTRIM(RTRIM(Replace([Description],'Deallocated',''))) AS [PAGE ID]
			,[Slot ID],[AllocUnitId],NULL AS [RowLog Contents 0],NULL AS [RowLog Contents 0],Context
			FROM    sys.fn_dblog(NULL, NULL)  
			WHERE    
			AllocUnitId IN 
			(SELECT [Allocation_unit_id] FROM sys.allocation_units allocunits
			INNER JOIN sys.partitions partitions ON (allocunits.type IN (1, 3)  
			AND partitions.hobt_id = allocunits.container_id) OR (allocunits.type = 2 
			AND partitions.partition_id = allocunits.container_id)  
			WHERE object_id=object_ID('' + @SchemaName_n_TableName + ''))
			AND Operation IN ('LOP_MODIFY_ROW') AND [Context] IN ('LCX_PFS') 
			AND Description Like '%Deallocated%' 
			/*Use this subquery to filter the date*/
			AND [TRANSACTION ID] IN (SELECT DISTINCT [TRANSACTION ID] FROM    sys.fn_dblog(NULL, NULL) 
			WHERE Context IN ('LCX_NULL') AND Operation in ('LOP_BEGIN_XACT')  
			AND [Transaction Name]='DELETE'
			AND  CONVERT(NVARCHAR(11),[Begin Time]) BETWEEN @Date_From AND @Date_To)
			GROUP BY [Description],[Slot ID],[AllocUnitId],Context

            UNION

			SELECT [PAGE ID],[Slot ID],[AllocUnitId]
            ,Substring([RowLog Contents 0],15,LEN([RowLog Contents 0])) AS [RowLog Contents 0]
            ,CONVERT(INT,Substring([RowLog Contents 0],7,2)),Context --,CAST(RIGHT([Current LSN],4) AS INT) AS [Current LSN]
			FROM    sys.fn_dblog(NULL, NULL)  
			WHERE   
			 AllocUnitId IN 
			(SELECT [Allocation_unit_id] FROM sys.allocation_units allocunits
			INNER JOIN sys.partitions partitions ON (allocunits.type IN (1, 3)  
			AND partitions.hobt_id = allocunits.container_id) OR (allocunits.type = 2 
			AND partitions.partition_id = allocunits.container_id)  
			WHERE object_id=object_ID('' + @SchemaName_n_TableName + ''))
			AND Context IN ('LCX_TEXT_MIX') AND Operation in ('LOP_DELETE_ROWS') 
			/*Use this subquery to filter the date*/
			AND [TRANSACTION ID] IN (SELECT DISTINCT [TRANSACTION ID] FROM    sys.fn_dblog(NULL, NULL) 
			WHERE Context IN ('LCX_NULL') AND Operation in ('LOP_BEGIN_XACT')  
			And [Transaction Name]='DELETE'
			And  CONVERT(NVARCHAR(11),[Begin Time]) BETWEEN @Date_From AND @Date_To)
                        
			/****************************************/

		OPEN Page_Data_Cursor

		FETCH NEXT FROM Page_Data_Cursor INTO @ConsolidatedPageID, @Slotid,@AllocUnitID,@LCX_TEXT_MIX,@LinkID,@Context

		WHILE @@FETCH_STATUS = 0
		BEGIN
			DECLARE @hex_pageid AS VARCHAR(Max)
			/*Page ID contains the File Number and Page Number. It looks like 0001:00000130.
			  In this example 0001 is the File Number & 00000130 is the Page Number & these numbers are in hex format*/
			SET @Fileid=SUBSTRING(@ConsolidatedPageID,0,CHARINDEX(':',@ConsolidatedPageID)) -- Separate the File ID from the Page ID
		
			SET @hex_pageid ='0x'+ SUBSTRING(@ConsolidatedPageID,CHARINDEX(':',@ConsolidatedPageID)+1,Len(@ConsolidatedPageID))  ---Separate the Page ID
 			SELECT @Pageid=Convert(INT,cast('' AS XML).value('xs:hexBinary(substring(sql:variable("@hex_pageid"),sql:column("t.pos")) )', 'varbinary(max)')) -- Convert Page ID from hex to integer
			FROM (SELECT CASE substring(@hex_pageid, 1, 2) WHEN '0x' THEN 3 ELSE 0 END) AS t(pos) 
	        
            IF @Context='LCX_PFS' 	  
              BEGIN 
						DELETE @temppagedata
						INSERT INTO @temppagedata EXEC( 'DBCC PAGE(' + @DataBase_Name + ', ' + @fileid + ', ' + @pageid + ', 1) with tableresults,no_infomsgs;'); 
						INSERT INTO @pagedata SELECT @ConsolidatedPageID,@fileid,@pageid,@AllocUnitID,[ParentObject],[Object],[Field] ,[Value] FROM @temppagedata
              END
            ELSE IF @Context='LCX_TEXT_MIX' 
              BEGIN
                        INSERT INTO  @ModifiedRawData SELECT @ConsolidatedPageID,@fileid,@pageid,@Slotid,@AllocUnitID,NULL,0,CONVERT(INT,CONVERT(VARBINARY,REVERSE(SUBSTRING(@LCX_TEXT_MIX,11,2)))),@LCX_TEXT_MIX,@LinkID,0
              END	  
			FETCH NEXT FROM Page_Data_Cursor INTO  @ConsolidatedPageID, @Slotid,@AllocUnitID,@LCX_TEXT_MIX,@LinkID,@Context
		END
	
	CLOSE Page_Data_Cursor
	DEALLOCATE Page_Data_Cursor

	DECLARE @Newhexstring VARCHAR(MAX);

	--The data is in multiple rows in the page, so we need to convert it into one row as a single hex value.
	--This hex value is in string format
	INSERT INTO @ModifiedRawData ([PAGE ID],[FILE IDS],[PAGE IDS],[Slot ID],[AllocUnitId]
	,[RowLog Contents 0_var]
    , [RowLog Length])
	SELECT [Page ID],[FILE IDS],[PAGE IDS],Substring([ParentObject],CHARINDEX('Slot', [ParentObject])+4, (CHARINDEX('Offset', [ParentObject])-(CHARINDEX('Slot', [ParentObject])+4))-2 ) as [Slot ID]
	,[AllocUnitId]
	,Substring((
	SELECT 
    REPLACE(STUFF((SELECT REPLACE(SUBSTRING([Value],CHARINDEX(':',[Value])+1,CHARINDEX('†',[Value])-CHARINDEX(':',[Value])),'†','')
	FROM @pagedata C  WHERE B.[Page ID]= C.[Page ID] And Substring(B.[ParentObject],CHARINDEX('Slot', B.[ParentObject])+4, (CHARINDEX('Offset', B.[ParentObject])-(CHARINDEX('Slot', B.[ParentObject])+4)) )=Substring(C.[ParentObject],CHARINDEX('Slot', C.[ParentObject])+4, (CHARINDEX('Offset', C.[ParentObject])-(CHARINDEX('Slot', C.[ParentObject])+4)) ) And 
	[Object] Like '%Memory Dump%'  Order By '0x'+ LEFT([Value],CHARINDEX(':',[Value])-1)
	FOR XML PATH('') ),1,1,'') ,' ','')
	),1,20000) AS [Value]
    
    ,
     Substring((
	SELECT '0x' +REPLACE(STUFF((SELECT REPLACE(SUBSTRING([Value],CHARINDEX(':',[Value])+1,CHARINDEX('†',[Value])-CHARINDEX(':',[Value])),'†','')
	FROM @pagedata C  WHERE B.[Page ID]= C.[Page ID] And Substring(B.[ParentObject],CHARINDEX('Slot', B.[ParentObject])+4, (CHARINDEX('Offset', B.[ParentObject])-(CHARINDEX('Slot', B.[ParentObject])+4)) )=Substring(C.[ParentObject],CHARINDEX('Slot', C.[ParentObject])+4, (CHARINDEX('Offset', C.[ParentObject])-(CHARINDEX('Slot', C.[ParentObject])+4)) ) And 
	[Object] Like '%Memory Dump%'  Order By '0x'+ LEFT([Value],CHARINDEX(':',[Value])-1)
	FOR XML PATH('') ),1,1,'') ,' ','')
	),7,4) AS [Length]
    
	From @pagedata B
	Where [Object] Like '%Memory Dump%'
	Group By [Page ID],[FILE IDS],[PAGE IDS],[ParentObject],[AllocUnitId]--,[Current LSN]
	Order By [Slot ID]

	UPDATE @ModifiedRawData  SET [RowLog Len] = CONVERT(VARBINARY(8000),REVERSE(cast('' AS XML).value('xs:hexBinary(substring(sql:column("[RowLog Length]"),0))', 'varbinary(Max)')))
	FROM @ModifiedRawData Where [LINK ID]=0

    UPDATE @ModifiedRawData  SET [RowLog Contents 0] =cast('' AS XML).value('xs:hexBinary(substring(sql:column("[RowLog Contents 0_var]"),0))', 'varbinary(Max)')  
	FROM @ModifiedRawData Where [LINK ID]=0

	Update B Set B.[RowLog Contents 0] =
	(CASE WHEN A.[RowLog Contents 0] IS NOT NULL AND C.[RowLog Contents 0] IS NOT NULL THEN  A.[RowLog Contents 0]+C.[RowLog Contents 0] 
		WHEN A.[RowLog Contents 0] IS NULL AND C.[RowLog Contents 0] IS NOT NULL THEN  C.[RowLog Contents 0]
		WHEN A.[RowLog Contents 0] IS NOT NULL AND C.[RowLog Contents 0] IS NULL THEN  A.[RowLog Contents 0]  
		END)
    ,B.[Update]=ISNULL(B.[Update],0)+1
	from @ModifiedRawData B
	LEFT Join @ModifiedRawData A On A.[Page IDS]=Convert(int,Convert(Varbinary(Max),Reverse(Substring(B.[RowLog Contents 0],15+14,2))))
	And A.[File IDS]=Convert(int,Convert(Varbinary(Max),Reverse(Substring(B.[RowLog Contents 0],19+14,2)))) 
    And A.[Link ID]=B.[Link ID]
    LEFT Join @ModifiedRawData C On C.[Page IDS]=Convert(int,Convert(Varbinary(Max),Reverse(Substring(B.[RowLog Contents 0],27+14,2))))
	And C.[File IDS]=Convert(int,Convert(Varbinary(Max),Reverse(Substring(B.[RowLog Contents 0],31+14,2))))
	And C.[Link ID]=B.[Link ID]
    Where  (A.[RowLog Contents 0] IS NOT NULL OR C.[RowLog Contents 0] IS NOT NULL)


	Update B Set B.[RowLog Contents 0] =
	(CASE WHEN A.[RowLog Contents 0] IS NOT NULL AND C.[RowLog Contents 0] IS NOT NULL THEN  A.[RowLog Contents 0]+C.[RowLog Contents 0] 
		WHEN A.[RowLog Contents 0] IS NULL AND C.[RowLog Contents 0] IS NOT NULL THEN  C.[RowLog Contents 0]
		WHEN A.[RowLog Contents 0] IS NOT NULL AND C.[RowLog Contents 0] IS NULL THEN  A.[RowLog Contents 0]  
		END)
    --,B.[Update]=ISNULL(B.[Update],0)+1
	from @ModifiedRawData B
	LEFT Join @ModifiedRawData A On A.[Page IDS]=Convert(int,Convert(Varbinary(Max),Reverse(Substring(B.[RowLog Contents 0],15+14,2))))
	And A.[File IDS]=Convert(int,Convert(Varbinary(Max),Reverse(Substring(B.[RowLog Contents 0],19+14,2)))) 
    And A.[Link ID]<>B.[Link ID] And B.[Update]=0
    LEFT Join @ModifiedRawData C On C.[Page IDS]=Convert(int,Convert(Varbinary(Max),Reverse(Substring(B.[RowLog Contents 0],27+14,2))))
	And C.[File IDS]=Convert(int,Convert(Varbinary(Max),Reverse(Substring(B.[RowLog Contents 0],31+14,2))))
	And C.[Link ID]<>B.[Link ID] And B.[Update]=0
    Where  (A.[RowLog Contents 0] IS NOT NULL OR C.[RowLog Contents 0] IS NOT NULL)

	UPDATE @ModifiedRawData  SET [RowLog Contents 0] =  
    (Case When [RowLog Len]>=8000 Then 
    Substring([RowLog Contents 0] ,15,[RowLog Len]) 
    When [RowLog Len]<8000 Then 
    SUBSTRING([RowLog Contents 0],15+6,Convert(int,Convert(varbinary(max),REVERSE(Substring([RowLog Contents 0],15,6)))))
    End)
	FROM @ModifiedRawData Where [LINK ID]=0

	UPDATE @ColumnNameAndData SET [hex_Value]=[RowLog Contents 0] 
    --,A.[Update]=A.[Update]+1
	FROM @ColumnNameAndData A
	INNER JOIN @ModifiedRawData B ON 
	Convert(int,Convert(Varbinary(Max),Reverse(Substring([hex_value],17,4))))=[PAGE IDS]
	AND  Convert(int,Substring([hex_value],9,2)) =B.[Link ID] 
	Where [System_Type_Id] In (99,167,175,231,239,241,165,98) And [Link ID] <>0 

	UPDATE @ColumnNameAndData SET [hex_Value]=
    (CASE WHEN B.[RowLog Contents 0] IS NOT NULL AND C.[RowLog Contents 0] IS NOT NULL THEN  B.[RowLog Contents 0]+C.[RowLog Contents 0] 
    WHEN B.[RowLog Contents 0] IS NULL AND C.[RowLog Contents 0] IS NOT NULL THEN  C.[RowLog Contents 0]
    WHEN B.[RowLog Contents 0] IS NOT NULL AND C.[RowLog Contents 0] IS NULL THEN  B.[RowLog Contents 0]  
    END)
	--,A.[Update]=A.[Update]+1
	FROM @ColumnNameAndData A
	LEFT JOIN @ModifiedRawData B ON 
	Convert(int,Convert(Varbinary(Max),Reverse(Substring([hex_value],5,4))))=B.[PAGE IDS]  And B.[Link ID] =0 
   	LEFT JOIN @ModifiedRawData C ON 
	Convert(int,Convert(Varbinary(Max),Reverse(Substring([hex_value],17,4))))=C.[PAGE IDS]  And C.[Link ID] =0 
	Where [System_Type_Id] In (99,167,175,231,239,241,165,98)  And (B.[RowLog Contents 0] IS NOT NULL OR C.[RowLog Contents 0] IS NOT NULL)

	UPDATE @ColumnNameAndData SET [hex_Value]=[RowLog Contents 0] 
    --,A.[Update]=A.[Update]+1
	FROM @ColumnNameAndData A
	INNER JOIN @ModifiedRawData B ON 
	Convert(int,Convert(Varbinary(Max),Reverse(Substring([hex_value],9,4))))=[PAGE IDS]
    And Convert(int,Substring([hex_value],3,2))=[Link ID]
	Where [System_Type_Id] In (35,34,99) And [Link ID] <>0 
    
	UPDATE @ColumnNameAndData SET [hex_Value]=[RowLog Contents 0]
    --,A.[Update]=A.[Update]+10
	FROM @ColumnNameAndData A
	INNER JOIN @ModifiedRawData B ON 
	Convert(int,Convert(Varbinary(Max),Reverse(Substring([hex_value],9,4))))=[PAGE IDS]
	Where [System_Type_Id] In (35,34,99) And [Link ID] =0

	UPDATE @ColumnNameAndData SET [hex_Value]=[RowLog Contents 0] 
    --,A.[Update]=A.[Update]+1
	FROM @ColumnNameAndData A
	INNER JOIN @ModifiedRawData B ON 
	Convert(int,Convert(Varbinary(Max),Reverse(Substring([hex_value],15,4))))=[PAGE IDS]
	Where [System_Type_Id] In (35,34,99) And [Link ID] =0

    Update @ColumnNameAndData set [hex_value]= 0xFFFE + Substring([hex_value],9,LEN([hex_value]))
	--,[Update]=[Update]+1
    Where [system_type_id]=241

CREATE TABLE [#temp_Data]
(
    [FieldName]  VARCHAR(MAX),
    [FieldValue] NVARCHAR(MAX),
    [Rowlogcontents] VARBINARY(8000),
    [Row ID] int,
    [Transaction ID] VARCHAR(100),
    [Deletion Date Time] DATETIME,
    [Deleted By User Name] VARCHAR(Max)
)

INSERT INTO #temp_Data
SELECT NAME,
CASE
 WHEN system_type_id IN (231, 239) THEN  LTRIM(RTRIM(CONVERT(NVARCHAR(max),hex_Value)))  --NVARCHAR ,NCHAR
 WHEN system_type_id IN (167,175) THEN  LTRIM(RTRIM(CONVERT(VARCHAR(max),hex_Value)))  --VARCHAR,CHAR
 WHEN system_type_id IN (35) THEN  LTRIM(RTRIM(CONVERT(VARCHAR(max),hex_Value))) --Text
 WHEN system_type_id IN (99) THEN  LTRIM(RTRIM(CONVERT(NVARCHAR(max),hex_Value))) --nText 
 WHEN system_type_id = 48 THEN CONVERT(VARCHAR(MAX), CONVERT(TINYINT, CONVERT(BINARY(1), REVERSE (hex_Value)))) --TINY INTEGER
 WHEN system_type_id = 52 THEN CONVERT(VARCHAR(MAX), CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE (hex_Value)))) --SMALL INTEGER
 WHEN system_type_id = 56 THEN CONVERT(VARCHAR(MAX), CONVERT(INT, CONVERT(BINARY(4), REVERSE(hex_Value)))) -- INTEGER
 WHEN system_type_id = 127 THEN CONVERT(VARCHAR(MAX), CONVERT(BIGINT, CONVERT(BINARY(8), REVERSE(hex_Value))))-- BIG INTEGER
 WHEN system_type_id = 61 Then CONVERT(VARCHAR(MAX),CONVERT(DATETIME,CONVERT(VARBINARY(8000),REVERSE (hex_Value))),100) --DATETIME
 WHEN system_type_id =58 Then CONVERT(VARCHAR(MAX),CONVERT(SMALLDATETIME,CONVERT(VARBINARY(8000),REVERSE(hex_Value))),100) --SMALL DATETIME
 WHEN system_type_id = 108 THEN CONVERT(VARCHAR(MAX),CONVERT(NUMERIC(38,20), CONVERT(VARBINARY,CONVERT(VARBINARY(1),xprec)+CONVERT(VARBINARY(1),xscale))+CONVERT(VARBINARY(1),0) + hex_Value)) --- NUMERIC
 WHEN system_type_id =106 THEN CONVERT(VARCHAR(MAX), CONVERT(DECIMAL(38,20), CONVERT(VARBINARY,Convert(VARBINARY(1),xprec)+CONVERT(VARBINARY(1),xscale))+CONVERT(VARBINARY(1),0) + hex_Value)) --- DECIMAL
 WHEN system_type_id In(60,122) THEN CONVERT(VARCHAR(MAX),Convert(MONEY,Convert(VARBINARY(8000),Reverse(hex_Value))),2) --MONEY,SMALLMONEY
 WHEN system_type_id = 104 THEN CONVERT(VARCHAR(MAX),CONVERT (BIT,CONVERT(BINARY(1), hex_Value)%2))  -- BIT
 WHEN system_type_id =62 THEN  RTRIM(LTRIM(STR(CONVERT(FLOAT,SIGN(CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS BIGINT)) * (1.0 + (CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS BIGINT) & 0x000FFFFFFFFFFFFF) * POWER(CAST(2 AS FLOAT), -52)) * POWER(CAST(2 AS FLOAT),((CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS BIGINT) & 0x7ff0000000000000) / EXP(52 * LOG(2))-1023))),53,LEN(hex_Value)))) --- FLOAT
 When system_type_id =59 THEN  Left(LTRIM(STR(CAST(SIGN(CAST(Convert(VARBINARY(8000),REVERSE(hex_Value)) AS BIGINT))* (1.0 + (CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS BIGINT) & 0x007FFFFF) * POWER(CAST(2 AS Real), -23)) * POWER(CAST(2 AS Real),(((CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS INT) )& 0x7f800000)/ EXP(23 * LOG(2))-127))AS REAL),23,23)),8) --Real
 WHEN system_type_id In (165,173) THEN (CASE WHEN CHARINDEX(0x,cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'VARBINARY(8000)')) = 0 THEN '0x' ELSE '' END) +cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'varchar(max)') -- BINARY,VARBINARY
 WHEN system_type_id =34 THEN (CASE WHEN CHARINDEX(0x,cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'VARBINARY(8000)')) = 0 THEN '0x' ELSE '' END) +cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'varchar(max)')  --IMAGE
 WHEN system_type_id =36 THEN CONVERT(VARCHAR(MAX),CONVERT(UNIQUEIDENTIFIER,hex_Value)) --UNIQUEIDENTIFIER
 WHEN system_type_id =231 THEN CONVERT(VARCHAR(MAX),CONVERT(sysname,hex_Value)) --SYSNAME
 WHEN system_type_id =241 THEN CONVERT(VARCHAR(MAX),CONVERT(xml,hex_Value)) --XML

 WHEN system_type_id =189 THEN (CASE WHEN CHARINDEX(0x,cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'VARBINARY(8000)')) = 0 THEN '0x' ELSE '' END) +cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'varchar(max)') --TIMESTAMP
 WHEN system_type_id=98 THEN (CASE 
 WHEN CONVERT(INT,SUBSTRING(hex_Value,1,1))=56 THEN CONVERT(VARCHAR(MAX), CONVERT(INT, CONVERT(BINARY(4), REVERSE(Substring(hex_Value,3,Len(hex_Value))))))  -- INTEGER
 WHEN CONVERT(INT,SUBSTRING(hex_Value,1,1))=108 THEN CONVERT(VARCHAR(MAX),CONVERT(numeric(38,20),CONVERT(VARBINARY(1),Substring(hex_Value,3,1)) +CONVERT(VARBINARY(1),Substring(hex_Value,4,1))+CONVERT(VARBINARY(1),0) + Substring(hex_Value,5,Len(hex_Value)))) --- NUMERIC
 WHEN CONVERT(INT,SUBSTRING(hex_Value,1,1))=167 THEN LTRIM(RTRIM(CONVERT(VARCHAR(max),Substring(hex_Value,9,Len(hex_Value))))) --VARCHAR,CHAR
 WHEN CONVERT(INT,SUBSTRING(hex_Value,1,1))=36 THEN CONVERT(VARCHAR(MAX),CONVERT(UNIQUEIDENTIFIER,Substring((hex_Value),3,20))) --UNIQUEIDENTIFIER
 WHEN CONVERT(INT,SUBSTRING(hex_Value,1,1))=61 THEN CONVERT(VARCHAR(MAX),CONVERT(DATETIME,CONVERT(VARBINARY(8000),REVERSE (Substring(hex_Value,3,LEN(hex_Value)) ))),100) --DATETIME
 WHEN CONVERT(INT,SUBSTRING(hex_Value,1,1))=165 THEN '0x'+ SUBSTRING((CASE WHEN CHARINDEX(0x,cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'VARBINARY(8000)')) = 0 THEN '0x' ELSE '' END) +cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'varchar(max)'),11,LEN(hex_Value)) -- BINARY,VARBINARY
 END)
 
END AS FieldValue
,[Rowlogcontents]
,[Row ID]
,[Transaction ID]
,null
,null
FROM @ColumnNameAndData ORDER BY nullbit

--Find the user ID and date time
Update #temp_Data Set [Deleted By User Name]=[name]
,[Deletion Date Time] = [Begin Time]
from #temp_Data  A
Inner Join fn_dblog(NULL,NULL) B On A.[Transaction ID]= B.[Transaction ID]
Inner Join sys.sysusers  C On B.[Transaction SID]=C.[Sid]
Where B.[Operation]='LOP_BEGIN_XACT' And B.[Context]='LCX_NULL' And B.[Transaction Name]='DELETE'

--Build the column-name list in table order for the pivot.

DECLARE @FieldName VARCHAR(max)
Declare @AdditionalField VARCHAR(max)
SET @FieldName = STUFF(
(
	SELECT ',' + CAST(QUOTENAME([Name]) AS VARCHAR(MAX)) FROM syscolumns WHERE id=object_id('' + @SchemaName_n_TableName + '')
	FOR XML PATH('')), 1, 1, '')

--Finally, pivot the data to return it in the original table format.

Set @AdditionalField=@FieldName + ' ,[Deleted By User Name],[Deletion Date Time]'

SET @sql = 'SELECT ' + @AdditionalField  + ' FROM #temp_Data PIVOT (Min([FieldValue]) FOR FieldName IN (' + @FieldName  + ')) AS pvt'
Print @sql
EXEC sp_executesql @sql

GO
--Execute the procedure like
--Recover_Deleted_Data_With_UID_Date_Time_Proc 'Database Name','Schema.table name'
--EXAMPLE #1 : FOR ALL DELETED RECORDS
EXEC Recover_Deleted_Data_With_UID_Date_Time_Proc 'test','dbo.tbl_sample' 
GO
--EXAMPLE #2 : FOR ANY SPECIFIC DATE RANGE
EXEC Recover_Deleted_Data_With_UID_Date_Time_Proc 'test','dbo.tbl_sample' ,'2011/12/01','2012/01/30'
--It will give you the result of all deleted records with the user name and date & time of deletion.

How To: Install FreeTDS and UnixODBC On OSX Using Homebrew For Use With Ruby, Php, And Perl

Standard

This little project started out as a basic script to connect to a Microsoft SqlServer and get data. It was a nightmare as I probably spent 15 hours learning about and troubleshooting both FreeTDS and UnixODBC. My pain is now your gain.

NOTICE: I have homebrew configured to install all packages into my local directory /Users/jared/.homebrew/

1) Install UnixODBC

[jared@localhost]$ brew install unixodbc
==> Downloading http://www.unixodbc.org/unixODBC-2.3.0.tar.gz
File already downloaded in /Users/jared/Library/Caches/Homebrew
==> ./configure --disable-debug --prefix=/Users/jared/.homebrew/Cellar/unixodbc/2.3.0 --enable-gui=no
==> make install
/Users/jared/.homebrew/Cellar/unixodbc/2.3.0: 24 files, 932K, built in 22 seconds
[jared@localhost]$

2) Edit the FreeTDS formula and install it

What we are doing is changing the default TDS version, enabling msdblib, and pointing to where unixODBC is installed.

require 'formula'

class Freetds < Formula
  url 'http://ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-0.91.tar.gz'
  homepage 'http://www.freetds.org/'
  md5 'b14db5823980a32f0643d1a84d3ec3ad'

  def install
    system "./configure",
           "--prefix=#{prefix}",
           "--with-tdsver=7.0",
           "--enable-msdblib",
           "--with-unixodbc=/Users/USERNAME/.homebrew/Cellar/unixodbc/2.3.0",
           "--mandir=#{man}"
    system 'make'
    ENV.j1 # Or it fails to install on multi-core machines
    system 'make install'
  end
end
[jared@localhost]$ brew install freetds

3) Start a new terminal session to make sure all your paths update

4) Confirm that you can connect to the server

We need to make sure that you can connect to the sqlserver and that the port is open and available to you.

To do this we use telnet. If you see the following, success! The port is open on the server.

[jared@localhost]$ telnet server.example.com 1433
Trying 192.168.1.101...
Connected to server.example.com.
Escape character is '^]'.

If you see the following, you failed: check the SQL Server configuration, firewalls, or network configuration.

[jared@localhost]$ telnet server.example.com 1433
Trying 192.168.1.101...
telnet: connect to address 192.168.1.101: Connection refused
telnet: Unable to connect to remote host

Note: Press the ctrl + ] keys to break to a prompt and then type exit.

5) Tsql

FreeTDS comes with a couple of CLI applications. One of them is tsql. It isn’t great, but I use it to test whether at least FreeTDS is working correctly. After you install FreeTDS using homebrew, try to connect to the host using the following command.

[jared@localhost]$ tsql -H server.example.com -U USERNAME -P PASSWORD -v
 
locale is "en_US.UTF-8"
locale charset is "UTF-8"
using default charset "UTF-8"
1> exit

If you see a prompt, you haz awesome!

6) Sym link the FreeTDS and UnixODBC conf files

I create 3 sym links to the following files just for simplicity.

ln -s /Users/jared/.homebrew/Cellar/freetds/0.91/etc/freetds.conf ~/.freetds.conf
ln -s /Users/jared/.homebrew/Cellar/unixodbc/2.3.0/etc/odbc.ini ~/.odbc.ini
ln -s /Users/jared/.homebrew/Cellar/unixodbc/2.3.0/etc/odbcinst.ini ~/.odbcinst.ini

7) Edit the .freetds.conf and add the following

[example]
host = server.example.com
port = 1433
tds version = 7.0

8) Edit the odbcinst.ini and add the following

You are telling unixodbc where your FreeTDS drivers are located using this configuration file.

[FreeTDS]
Description = FreeTDS
Driver = /Users/jared/.homebrew/lib/libtdsodbc.so
Setup = /Users/jared/.homebrew/lib/libtdsodbc.so
UsageCount = 1

9) Edit the .odbc.ini and add the following

[myexample]
Driver = FreeTDS // we just set this up a second ago
Description = MyExample
ServerName = example // this is the name of the configuration we used in the .freetds.conf file
UID = USERNAME
PWD = PASSWORD

10) isql should work

[jared@localhost]$ isql sqlinternal USERNAME PASSWORD
+---------------------------------------+
| Connected!
| sql-statement
| help [tablename]
| quit
+---------------------------------------+
SQL> quit

11) Osql Error

If you try osql, it throws an error.

[jared@localhost]$ osql -S myexample -U USERNAME -P PASSWORD
checking shared odbc libraries linked to isql for default directories...
/Users/jared/.homebrew/bin/osql: line 53: ldd: command not found
strings: can't open file: (No such file or directory)
osql: problem: no potential directory strings in "/Users/jared/.homebrew/bin/isql"
osql: advice: use "osql -I DIR" where DIR unixODBC\'s install prefix e.g. /usr/local
isql strings are:
checking odbc.ini files
reading /Users/jared/.odbc.ini
[myexample] found in /Users/jared/.odbc.ini
found this section:
[myexample]
Driver = FreeTDS
Description = MyExample
Servername = example
UID = USERNAME
PWD = PASSWORD
 
looking for driver for DSN [myexample] in /Users/jared/.odbc.ini
found driver line: " Driver = FreeTDS"
driver "FreeTDS" found for [myexample] in .odbc.ini
found driver named "FreeTDS"
"FreeTDS" is not an executable file
looking for entry named [FreeTDS] in /odbcinst.ini
found driver line: " Driver = /Users/jared/.homebrew/lib/libtdsodbc.so"
found driver /Users/jared/.homebrew/lib/libtdsodbc.so for [FreeTDS] in odbcinst.ini
/Users/jared/.homebrew/lib/libtdsodbc.so is not an executable file
osql: error: no driver found for sqlinternal
[jared@localhost]$

If you go through the error you will find that a certain driver is not executable. You just need to chmod the file.

[jared@localhost]$ chmod 554 /Users/jared/.homebrew/Cellar/freetds/0.91/lib/libtdsodbc.0.so

Now run it again.

[jared@localhost]$ osql -S myexample -U USERNAME -P PASSWORD
checking shared odbc libraries linked to isql for default directories...
/Users/jared/.homebrew/bin/osql: line 53: ldd: command not found
strings: can't open file: (No such file or directory)
osql: problem: no potential directory strings in "/Users/jared/.homebrew/bin/isql"
osql: advice: use "osql -I DIR" where DIR unixODBC\'s install prefix e.g. /usr/local
isql strings are:
checking odbc.ini files
reading /Users/jared/.odbc.ini
[myexample] found in /Users/jared/.odbc.ini
found this section:
[myexample]
Driver = FreeTDS
Description = myexamples
Servername = myexample
UID = USERNAME
PWD = PASSWORD
 
looking for driver for DSN [myexample] in /Users/jared/.odbc.ini
found driver line: " Driver = FreeTDS"
driver "FreeTDS" found for [myexample] in .odbc.ini
found driver named "FreeTDS"
"FreeTDS" is not an executable file
looking for entry named [FreeTDS] in /odbcinst.ini
found driver line: " Driver = /Users/jared/.homebrew/lib/libtdsodbc.so"
found driver /Users/jared/.homebrew/lib/libtdsodbc.so for [FreeTDS] in odbcinst.ini
/Users/jared/.homebrew/lib/libtdsodbc.so is an executable file
Using ODBC-Combined strategy
DSN [myexample] has servername "myexample" (from /Users/jared/.odbc.ini)
/Users/jared/.freetds.conf is a readable file
looking for [myexample] in /Users/jared/.freetds.conf
found this section:
[myexample]
host = myexample.bendcable.net
port = 1433
tds version = 7.0
 
Configuration looks OK. Connection details:
 
DSN: myexample
odbc.ini: /Users/jared/.odbc.ini
Driver: /Users/jared/.homebrew/lib/libtdsodbc.so
Server hostname: myexample.bendcable.net
Address: 192.168.12.103
 
Attempting connection as username ...
+ isql myexample USERNAME PASSWORD -v
+---------------------------------------+
| Connected!
| sql-statement
| help [tablename]
| quit
+---------------------------------------+
SQL> quit

SUCCESS!!!

Some other useful commands:

odbcinst -j      (show the unixODBC version and the configuration file paths in use)

odbcinst -q -d   (list the installed drivers)

odbcinst -q -s   (list the configured data sources)

SQL Server System Views: The Basics

Standard

SQL Server provides an assortment of system views for accessing metadata about the server environment and its database objects. There are catalog views and information schema views and dynamic management views and several other types of views. DBAs and developers alike can benefit significantly from the rich assortment of information they can derive through these views, and it is worth the effort to get to know them.

System views are divided into categories that each serve a specific purpose. The most extensive category is the one that contains catalog views. Catalog views let you retrieve information about a wide range of system and database components—from table columns and data types to server-wide configurations.

Information schema views are similar to some of the catalog views in that they provide access to metadata that describes database objects such as tables, columns, domains, and check constraints. However, information schema views conform to the ANSI standard, whereas catalog views are specific to SQL Server.

In contrast to either of these types of views, dynamic management views return server state data that can be used to monitor and fine-tune a SQL Server instance and its databases. Like catalog views, dynamic management views are specific to SQL Server.

In this article, we’ll focus on these three types of views, looking at examples in each category. We won’t be covering the other types of system views because they tend not to be as commonly used, with perhaps a couple of exceptions. For the most part, catalog, information schema, and dynamic management views are the ones you’ll likely be using most often. But just so you know, the other types relate to replication and data-tier application (DAC) instances, and provide compatibility with earlier SQL Server releases. Although they have their places, for now we’ll stick with the big three.

Catalog views

Of the various types of system views available in SQL Server, catalog views represent the largest and most diverse collection. You can use catalog views to gather information about such components as AlwaysOn Availability Groups, Change Data Capture, change tracking, database mirroring, full-text search, Resource Governor, security, Service Broker, and an assortment of other features—all in addition to being able to view information about the database objects themselves.

In fact, SQL Server provides so many catalog views that it would be nearly impossible—or at least highly impractical—to try to look at all of them in one article, but know that there is a vast storehouse of views waiting for you, and they all work pretty much the same way.

Microsoft recommends that you use catalog views as your primary method for accessing SQL Server metadata because they provide the most efficient mechanism for retrieving this type of information. Through the catalog views you can access all user-available metadata. For example, the following SELECT statement returns information about databases whose name starts with adventureworks:

SELECT name, database_id, compatibility_level
FROM sys.databases
WHERE name LIKE 'adventureworks%';

The columns specified in the SELECT clause—name, database_id, and compatibility_level—represent only a fraction of the many columns supported by this view. The view will actually return nearly 75 columns worth of information about each database installed on the SQL Server instance. I’ve kept it short for the sake of brevity, as shown in the following results:

name database_id compatibility_level
AdventureWorks2014 9 120
AdventureWorksDW2014 10 120

There is nothing remarkable here, except for the ease with which I was able to collect the metadata. The results include the database names, their auto-generated IDs, and their compatibility levels, which in both cases is 120. The 120 refers to SQL Server 2014. (I created the examples in this article on a local instance of SQL Server 2014 running in a test virtual machine.)

The sys.databases view can also return information about database settings, such as whether the database is read-only or whether the auto-shrink feature is enabled. Many of the configuration-related columns take the bit data type to indicate whether a feature is on (1) or off (0).

As the preceding example illustrates, you access catalog views through the sys schema. Whichever view you use, it’s always a good idea to check the SQL Server documentation if you have any questions about its application to your particular circumstances. For example, the sys.databases view includes the state column, which provides status information such as whether a database is online, offline, or being restored. Each option is represented by one of nine predefined tinyint values. Some values in this column pertain only to certain environments. For instance, the value 7 (copying) applies only to Azure SQL Database.
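
For example, you can see a database’s state alongside a couple of those bit-typed settings with a quick query (a minimal sketch; state_desc, is_read_only, and is_auto_shrink_on are all documented sys.databases columns):

SELECT name, state_desc, is_read_only, is_auto_shrink_on
FROM sys.databases;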

Now let’s look at the sys.objects catalog view, which returns a row for each user-defined, schema-scoped object in a database. The following SELECT statement retrieves the name and ID of all table-valued functions defined in the dbo schema within the AdventureWorks2014 sample database:

USE AdventureWorks2014;
go

SELECT name, object_id
FROM sys.objects
WHERE SCHEMA_NAME(schema_id) = 'dbo'
AND type_desc = 'sql_table_valued_function';

Notice that I use the SCHEMA_NAME built-in function to match the schema ID to dbo in the WHERE clause. Functions such as SCHEMA_NAME, OBJECT_ID, OBJECT_NAME, and so on can be extremely useful when working with catalog views.
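
For instance, these helper functions let you move between names and IDs in either direction (a minimal sketch; HumanResources.Employee is simply a sample table from AdventureWorks2014):

SELECT OBJECT_NAME(object_id) AS TableName, SCHEMA_NAME(schema_id) AS SchemaName
FROM sys.objects
WHERE object_id = OBJECT_ID('HumanResources.Employee');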

Also in the WHERE clause, I match the type_desc column to sql_table_valued_function, giving me the following results:

name object_id
ufnGetContactInformation 103671417

The sys.objects view is a handy tool to have because it provides quick and easy access to all user-defined objects in your database, including tables, views, triggers, functions, and constraints. However, SQL Server also provides catalog views that are distinct to a specific object type. For example, the following SELECT statement retrieves data through the sys.tables view:

USE AdventureWorks2014;
go

SELECT name, max_column_id_used
FROM sys.tables
WHERE SCHEMA_NAME(schema_id) = 'HumanResources';

The statement returns a list of all tables in the HumanResources schema, along with the maximum column ID used for each table, as shown in the following results:

name max_column_id_used
Shift 5
Department 4
Employee 16
EmployeeDepartmentHistory 6
EmployeePayHistory 5
JobCandidate 4

The interesting thing about the sys.tables view is that it inherits all the columns from the sys.objects view and then adds additional columns with table-specific information. For example, in the preceding example, the name column is inherited from sys.objects but the max_column_id_used column is specific to sys.tables. (For information about which views inherit columns from other views, refer to the SQL Server documentation.)
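
You can even confirm the inheritance through the metadata itself by counting the columns each view exposes (a minimal sketch; sys.all_columns covers system objects as well as user objects):

SELECT OBJECT_NAME(object_id) AS ViewName, COUNT(*) AS ColumnCount
FROM sys.all_columns
WHERE object_id IN (OBJECT_ID('sys.objects'), OBJECT_ID('sys.tables'))
GROUP BY object_id;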

You can also join catalog views to retrieve specific types of information. For example, the following SELECT statement joins the sys.columns view to the sys.types view to retrieve information about the Person table:

USE AdventureWorks2014;
go

SELECT c.name AS ColumnName,
t.name AS DataType,
CASE t.is_user_defined
WHEN 1 THEN 'user-defined type'
ELSE 'system type' END AS UserOrSystem
FROM sys.columns c JOIN sys.types t
ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID('Person.Person');

Not surprisingly, the sys.columns view returns a list of columns in the table, and the sys.types view returns the name of the column data types, along with whether they are system types or user-defined:

ColumnName DataType UserOrSystem
BusinessEntityID int system type
PersonType nchar system type
NameStyle NameStyle user-defined type
Title nvarchar system type
FirstName Name user-defined type
MiddleName Name user-defined type
LastName Name user-defined type
Suffix nvarchar system type
EmailPromotion int system type
AdditionalContactInfo xml system type
Demographics xml system type
rowguid uniqueidentifier system type
ModifiedDate datetime system type

Up to this point, the catalog views we’ve looked at have focused on the databases and their objects. However, we can use catalog views to retrieve all sorts of information, such as details about database files:

USE AdventureWorks2014;
go

SELECT file_id, name, state_desc, type_desc
FROM sys.database_files
WHERE name LIKE 'adventureworks%';

In this case, we’re using the sys.database_files view to retrieve the file ID, file name, file state, and file type.

file_id name state_desc type_desc
1 AdventureWorks2014_Data ONLINE ROWS
2 AdventureWorks2014_Log ONLINE LOG

We might instead use the sys.assembly_types view to return information about any assemblies added to the database:

USE AdventureWorks2014;
go

SELECT name, user_type_id, assembly_class
FROM sys.assembly_types;

As the following results show, the AdventureWorks2014 database includes three assemblies, all of which are SQL Server’s advanced data types:

name user_type_id assembly_class
hierarchyid 128 Microsoft.SqlServer.Types.SqlHierarchyId
geometry 129 Microsoft.SqlServer.Types.SqlGeometry
geography 130 Microsoft.SqlServer.Types.SqlGeography

You can even retrieve security-related metadata within your database. For example, the following SELECT statement uses the sys.database_principals view to return the names and IDs of all security principals in the AdventureWorks2014 database:

USE AdventureWorks2014;
go

SELECT name, principal_id
FROM sys.database_principals
WHERE type_desc = 'DATABASE_ROLE';

Notice that we’ve used a WHERE clause to qualify our query so the SELECT statement returns only the DATABASE_ROLE principal type:

name principal_id
public 0
db_owner 16384
db_accessadmin 16385
db_securityadmin 16386
db_ddladmin 16387
db_backupoperator 16389
db_datareader 16390
db_datawriter 16391
db_denydatareader 16392
db_denydatawriter 16393

Of course, SQL Server security occurs at the database level and at the server level. To address the server level, SQL Server also includes catalog views specific to the current instance. For example, the following SELECT statement joins the sys.server_principals view and the sys.server_permissions view to retrieve information about the server principals and their permissions:

SELECT pr.name, pr.principal_id,
pm.permission_name, pm.state_desc
FROM sys.server_principals pr
JOIN sys.server_permissions AS pm
ON pr.principal_id = pm.grantee_principal_id
WHERE pr.type_desc = 'SERVER_ROLE';

In this case, we’re concerned only with the SERVER_ROLE principal type, so we’ve added the WHERE clause, giving us the following results:

name principal_id permission_name state_desc
public 2 VIEW ANY DATABASE GRANT
public 2 CONNECT GRANT
public 2 CONNECT GRANT
public 2 CONNECT GRANT
public 2 CONNECT GRANT

You can also use catalog views to retrieve server configuration information. For instance, the following SELECT statement uses the sys.configurations view to retrieve configuration information about the current server:

SELECT name, description
FROM sys.configurations
WHERE is_advanced = 1 AND is_dynamic = 0;

In this case, we’ve limited our query to non-dynamic advanced settings, as shown in the following results:

name description
user connections Number of user connections allowed
locks Number of locks for all users
open objects Number of open database objects
fill factor (%) Default fill factor percentage
c2 audit mode c2 audit mode
priority boost Priority boost
set working set size set working set size
lightweight pooling User mode scheduler uses lightweight pooling
scan for startup procs scan for startup stored procedures
affinity I/O mask affinity I/O mask
affinity64 I/O mask affinity64 I/O mask
common criteria compliance enabled Common Criteria compliance mode enabled

There are, of course, many more examples of catalog views I can show you, but you get the point. There’s a great deal of information to be had, and I’ve barely scratched the surface. For a complete listing of the available catalog views, check out the MSDN topic Catalog Views (Transact-SQL).

Information schema views

Information schema views provide a standardized method for querying metadata about objects within a database. The views are part of the schema INFORMATION_SCHEMA, rather than the sys schema, and are much more limited in scope than catalog views. At last count, SQL Server was providing only 21 information schema views, compared to over 200 catalog views.

The advantage of using information schema views is that, because they are ANSI-compliant, you can theoretically migrate your code to different database systems without having to update your view references. If portability is important to your solution, you should consider information schema views; just know that they don’t do nearly as much as catalog views. And, of course, using one type of view doesn’t preclude you from using another.
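
For example, a table list similar to what sys.tables provides can be pulled through the ANSI-standard view instead, and the query should then run unchanged on other ANSI-compliant database systems (a minimal sketch):

SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';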

With information schema views, you can retrieve metadata about database objects such as tables, constraints, columns, privileges, views, and domains. (In the world of information schema views, a domain is a user-defined data type, and a catalog is the database itself.)

Let’s look at a few examples. The first one uses the TABLES view to retrieve the name and type of all the tables and views in the Purchasing schema:

USE AdventureWorks2014;
go

SELECT TABLE_NAME, TABLE_TYPE
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'purchasing'
ORDER BY TABLE_NAME;

No magic here. Just a simple query that returns basic information, as shown in the following results:

TABLE_NAME TABLE_TYPE
ProductVendor BASE TABLE
PurchaseOrderDetail BASE TABLE
PurchaseOrderHeader BASE TABLE
ShipMethod BASE TABLE
Vendor BASE TABLE
vVendorWithAddresses VIEW
vVendorWithContacts VIEW

We could have also retrieved the TABLE_CATALOG and TABLE_SCHEMA columns, which are included in the view to provide fully qualified, four-part names for each object, but we didn’t need that information in this case, and the view includes no other columns, falling far short of what you get with sys.tables.
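
If you do want those fully qualified names, the same view supplies them directly (a minimal sketch):

SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'Purchasing';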

Now let’s pull data through the COLUMNS view, which provides a few more details than we get with TABLES:

USE AdventureWorks2014;
go

SELECT COLUMN_NAME, DATA_TYPE, DOMAIN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'person'
AND TABLE_NAME = 'contacttype';

Here, our query retrieves the column name, system data type, and user-defined data type, if any, for each column in the Person.ContactType table. In this case, the table includes only one user-defined data type (Name):

COLUMN_NAME DATA_TYPE DOMAIN_NAME
ContactTypeID int NULL
Name nvarchar Name
ModifiedDate datetime NULL

Now suppose we want to retrieve a list of user-defined data types in the AdventureWorks2014 database, along with the base type for each one:

USE AdventureWorks2014;
go

SELECT DOMAIN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.DOMAINS
ORDER BY DOMAIN_NAME;

This time, we use the DOMAINS view, which gives us the following results (at least on my system):

DOMAIN_NAME DATA_TYPE
AccountNumber nvarchar
Flag bit
Name nvarchar
NameStyle bit
OrderNumber nvarchar
Phone nvarchar

Let’s look at one more example, this one of the CHECK_CONSTRAINTS view, which retrieves information about the check constraints in the Person schema:

USE AdventureWorks2014;
go

SELECT CONSTRAINT_NAME, CHECK_CLAUSE
FROM INFORMATION_SCHEMA.CHECK_CONSTRAINTS
WHERE CONSTRAINT_SCHEMA = 'person';

In this case, we get the name of the check constraints, along with the constraint definitions:

CONSTRAINT_NAME CHECK_CLAUSE
CK_Person_EmailPromotion ([EmailPromotion]>=(0) AND [EmailPromotion]<=(2))
CK_Person_PersonType ([PersonType] IS NULL OR (upper([PersonType])='GC' OR upper([PersonType])='SP' OR upper([PersonType])='EM' OR upper([PersonType])='IN' OR upper([PersonType])='VC' OR upper([PersonType])='SC'))

That’s all there is to information schema views. There are relatively few of them, and they contain relatively little information compared to their catalog counterparts. You can find more details about information schema views by referring to the MSDN topic Information Schema Views (Transact-SQL).

Dynamic management views

With dynamic management views, we move into new territory. The views return server state information about your databases and servers, which can be useful for monitoring your systems, tuning performance, and diagnosing any issues that might arise.

Like catalog views, dynamic management views provide a wide range of information. For example, SQL Server includes a set of dynamic management views that are specific to memory-optimized tables. One of these, dm_xtp_system_memory_consumers, returns information about database-level memory consumers:

SELECT memory_consumer_desc, allocated_bytes, used_bytes
FROM sys.dm_xtp_system_memory_consumers
WHERE memory_consumer_type_desc = 'pgpool';

The statement retrieves the consumer description, the amount of allocated bytes, and the amount of used bytes for the pgpool consumer type, giving us the following results.

memory_consumer_desc allocated_bytes used_bytes
System 256K page pool 262144 262144
System 64K page pool 0 0
System 4K page pool 0 0

Like catalog views, dynamic management views are part of the sys schema. In addition, their names always begin with the dm_ prefix. Unfortunately, Microsoft uses the same naming convention for SQL Server’s dynamic management functions. But you’ll quickly discover which ones are which when you try to run them and you’re prompted to provide input parameters. (I’ll save a discussion about the functions for a different article.)
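
For instance, sys.dm_db_index_physical_stats is one of the functions rather than a view, so it expects parameters (a minimal sketch; the NULLs mean all indexes and all partitions, and 'LIMITED' is the lightest scan mode):

SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks2014'),
OBJECT_ID('AdventureWorks2014.HumanResources.Employee'), NULL, NULL, 'LIMITED');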

Another category of dynamic management views focuses on the SQL Server Operating System (SQLOS), which manages the operating system resources specific to SQL Server. For example, you can use the dm_os_threads view to retrieve a list of SQLOS threads running under the current SQL Server process:

SELECT os_thread_id, kernel_time, usermode_time
FROM sys.dm_os_threads
WHERE usermode_time > 300;

The statement returns the thread ID, kernel time, and user-mode time for those threads whose user-mode time exceeds 300 milliseconds, giving us the following results (on my test system):

os_thread_id  kernel_time  usermode_time
------------  -----------  -------------
2872          140          327
2928          15           1014
2944          46           327
5500          78           1216
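
A common refinement (a sketch, not part of the original example) is to rank the threads by total processor time instead of filtering on a fixed threshold:

-- Five threads with the highest combined kernel and user-mode time.
SELECT TOP (5) os_thread_id,
       kernel_time + usermode_time AS total_cpu_ms
FROM sys.dm_os_threads
ORDER BY total_cpu_ms DESC;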

The SQLOS views even include one that returns miscellaneous information about the computer and its resources:

SELECT cpu_count, physical_memory_kb, virtual_memory_kb
FROM sys.dm_os_sys_info;

Although the dm_os_sys_info view can return a variety of information about the environment, in this case, we’ve limited that information to the CPU count, physical memory, and virtual memory:

cpu_count  physical_memory_kb  virtual_memory_kb
---------  ------------------  -----------------
4          4193840             8589934464
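
Because the view reports memory in kilobytes, a little arithmetic makes the output easier to read; for example, 4193840 KB / 1024 / 1024 is roughly 4 GB. Here's a minimal sketch of that conversion:

-- Convert the KB columns to GB for readability.
SELECT cpu_count,
       physical_memory_kb / 1024.0 / 1024.0 AS physical_memory_gb,
       virtual_memory_kb / 1024.0 / 1024.0 AS virtual_memory_gb
FROM sys.dm_os_sys_info;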

SQL Server also includes dynamic management views for retrieving information about indexes. For example, you can use the dm_db_index_usage_stats view to return details about different types of index operations:

SELECT index_id, user_seeks, user_scans
FROM sys.dm_db_index_usage_stats
WHERE object_id = OBJECT_ID('AdventureWorks2014.HumanResources.Employee');

The statement returns the data shown in the following table:

index_id  user_seeks  user_scans
--------  ----------  ----------
1         4           9

Being able to query statistics about an index in this way can be useful when testing an application’s individual operations. This can help you pinpoint whether your queries are using the indexes effectively or whether you might need to build different indexes. Note, however, that index statistics can reflect all activity, whether generated by an application or generated internally by SQL Server.
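
Index IDs on their own aren't very readable, so in practice you'll often join the view to the sys.indexes catalog view to resolve the names; the following is a sketch that assumes the AdventureWorks2014 context:

USE AdventureWorks2014;
GO

-- Resolve index IDs to names for the Employee table's usage statistics.
SELECT i.name AS IndexName, s.user_seeks, s.user_scans
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
  ON i.object_id = s.object_id
 AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND s.object_id = OBJECT_ID('HumanResources.Employee');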

Dynamic management views are either server-scoped or database-scoped. The ones we've looked at so far have been server-scoped, even the dm_db_index_usage_stats view shown in the last example. In that case, however, we were concerned with only the AdventureWorks2014 database, so we specified the database in our WHERE clause.

If you want to run a database-scoped dynamic management view, you must do so within the context of the target database. In the following SELECT statement, I use the dm_db_file_space_usage view to return space usage data about the data file used by the AdventureWorks2014 database:

USE AdventureWorks2014;
GO

SELECT total_page_count, allocated_extent_page_count, unallocated_extent_page_count
FROM sys.dm_db_file_space_usage
WHERE file_id = 1;

All I’m doing here is retrieving the total page count, allocated extent page count, and unallocated extent page count:

total_page_count  allocated_extent_page_count  unallocated_extent_page_count
----------------  ---------------------------  -----------------------------
30368             28368                        2000

Note that these page counts are reported at the extent level; because an extent consists of eight 8-KB pages, the counts will always be multiples of eight.
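
Since each page is 8 KB, the counts convert easily into sizes; for instance, 30368 pages × 8 KB is roughly 237 MB. Here's a sketch of that conversion:

-- Express the page counts as megabytes (8 KB per page).
SELECT total_page_count * 8 / 1024.0 AS total_mb,
       allocated_extent_page_count * 8 / 1024.0 AS allocated_mb,
       unallocated_extent_page_count * 8 / 1024.0 AS unallocated_mb
FROM sys.dm_db_file_space_usage
WHERE file_id = 1;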

We can instead use the dm_db_fts_index_physical_stats view to retrieve data about the full-text and semantic indexes in each table:

USE AdventureWorks2014;
GO

SELECT OBJECT_NAME(object_id) AS ObjectName,
       object_id AS ObjectID,
       fulltext_index_page_count AS IndexPages
FROM sys.dm_db_fts_index_physical_stats;

This time we get the object name and ID of the table that contains the index, as well as the page count for each index:

ObjectName     ObjectID    IndexPages
-------------  ----------  ----------
ProductReview  610101214   8
Document       1077578877  13
JobCandidate   1589580701  15
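
If all you need is the overall footprint rather than the per-table breakdown, a simple aggregate will do (a sketch):

-- Total full-text index pages across the current database.
SELECT SUM(fulltext_index_page_count) AS TotalFtsIndexPages
FROM sys.dm_db_fts_index_physical_stats;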

Let's look at one more dynamic management view that is database-scoped. The dm_db_persisted_sku_features view returns a list of edition-specific features that are enabled in the current database but are not supported in all editions of SQL Server. The view applies to SQL Server 2008 through the current version. The following SELECT statement uses the view to retrieve the feature name and ID:

USE AdventureWorks2014;
GO

SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;

In this case, the SELECT statement returns only one row:

feature_name  feature_id
------------  ----------
InMemoryOLTP  800

The dm_db_persisted_sku_features view includes the feature_id column only for informational purposes. The column is not supported and may not be part of the view in the future.
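
One practical use for the view (a sketch of my own, not from the original article) is to check whether a database relies on any edition-specific features before you try to move it to a different edition:

-- Warn if the current database uses any edition-specific features.
IF EXISTS (SELECT 1 FROM sys.dm_db_persisted_sku_features)
BEGIN
    SELECT feature_name FROM sys.dm_db_persisted_sku_features;
END
ELSE
BEGIN
    PRINT 'No edition-specific features in use.';
END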

Although these are but a few of the dynamic management views that SQL Server supports, the examples should give you a good sense of the variety of data they can provide. For a complete list of dynamic management views and to learn more about each one, refer to the MSDN topic Dynamic Management Views and Functions (Transact-SQL).

Plenty more where that came from

As mentioned earlier, SQL Server also provides system views to support backward compatibility, replication, and DAC instances. The compatibility views might come in handy if you're still supporting applications written against the SQL Server 2000 system tables. You might also find the replication-related views useful if you've implemented replication, although Microsoft recommends that you instead use the stored procedures available for accessing replication metadata. As for the DAC views, SQL Server provides only two of them, and they reside only in the msdb database.

For many DBAs and database developers, the catalog views and dynamic management views will likely be their first line of defense when retrieving SQL Server metadata, whether it’s specific to particular database objects or the server environment as a whole. That’s not to diminish the importance of the other views, but rather to point out that Microsoft has put most of its effort into building an extensive set of catalog views and dynamic management views. And given all the work that’s gone into them, there’s certainly no reason not to take advantage of what’s available.

https://www.simple-talk.com/sql/learn-sql-server/sql-server-system-views-the-basics/