Enable SSL Protocols for your Integrations – TLS 1.1 and TLS 1.2

Standard

Introduction

When developing integrations with external services (REST, SOAP), there is often the need to use specific SSL protocols, namely:

  • TLS 1.1
  • TLS 1.2.

When trying to use those APIs in OutSystems applications, the integration may fail and produce errors like:

  • The request was aborted: Could not create SSL/TLS secure channel.
  • Unsupported protocol. You need to enable TLS X.X to use this API

(other types of errors may occur, related to the required SSL protocols)

 

Why do you see those errors

.NET Framework 4.5 (and older) uses, by default, two SSL Protocols: SSL v3 and TLS 1.0.

OutSystems Platform doesn’t change those default values, so your request will only include (by default) SSLv3 and TLS 1.0. If the API requires a different protocol, which is not available, such errors will occur.

For more information on this, refer to the topics in the References section.

 

Solution

It is possible for you to enable the protocol you need and include it in the request. That can be achieved through two steps:

  • A. Enable the TLS protocols on the server, as “Client”;

and one of the following:

  • B1. Enable the SchUseStrongCrypto property in the Windows registry to use as the default protocols: TLS 1.0, TLS 1.1 and TLS 1.2
  • B2. Include the TLS 1.1 and/or TLS 1.2 protocols in your application code, before the request to the API.

Note that, for cloud customers, option B2 is the only one available.

A. Enable the TLS protocols on the server, as “Client”

To enable the TLS protocols, you need to add new registry entries for Schannel [1].

To do that, follow these steps:

  1. Start the registry editor by clicking on Start and Run. Type in “regedit” into the Run field (without quotations).
  2. Highlight Computer at the top of the registry tree.  Backup the registry first by clicking on File and then on Export.  Select a file location to save the registry file.

    Note: You will be editing the registry.  This could have detrimental effects on your computer if done incorrectly, so it is strongly advised to make a backup.

  3. Browse to the following registry key:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols
  4. Right click on the Protocols folder and select New and then Key from the drop-down menu. This will create a new folder. Rename this folder to TLS 1.1 or TLS 1.2 (depending on the protocol you want to enable).
  5. Right click on the TLS 1.1 or TLS 1.2 key and add a new key underneath it.
  6. Rename the new key as:
    • Client
  7. Right click on the Client key and select New and then DWORD (32-bit) Value from the drop-down list.
  8. Rename the DWORD to DisabledByDefault.
  9. Right-click the name DisabledByDefault and select Modify… from the drop-down menu.
  10. Ensure that the Value data field is set to 0 and the Base is Hexadecimal.  Click on OK.
  11. Create another DWORD for the Client key as you did in Step 7.
  12. Rename this second DWORD to Enabled.
  13. Right-click the name Enabled and select Modify… from the drop-down menu.
  14. Ensure that the Value data field is set to 1 and the Base is Hexadecimal. Click on OK.
  15. Reboot the server

After the reboot, the server will be able to communicate through the SSL protocol you enabled. However, you now need to add it to your applications' requests.
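If you prefer to script the change instead of editing the registry by hand, the following is a minimal C# sketch (not part of the original article; it assumes a 64-bit process run as Administrator) that writes the same "Client" values described in the steps above for TLS 1.2:

using Microsoft.Win32;

class EnableTlsClientProtocol
{
    static void Main()
    {
        // Same key as in step 3, plus the "TLS 1.2\Client" subkeys from steps 4-6
        const string clientKey =
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client";

        // Steps 7-10: DisabledByDefault = 0
        Registry.SetValue(clientKey, "DisabledByDefault", 0, RegistryValueKind.DWord);
        // Steps 11-14: Enabled = 1
        Registry.SetValue(clientKey, "Enabled", 1, RegistryValueKind.DWord);
        // Step 15: reboot the server afterwards
    }
}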

 

B1. Enable the SchUseStrongCrypto property in the Windows registry to use as the default protocols: TLS 1.0, TLS 1.1 and TLS 1.2

If you want to make sure strong cryptography is enabled and that the protocols used for your requests are TLS 1.0, TLS 1.1 and TLS 1.2, follow these steps:

  1. Start the registry editor by clicking on Start and Run. Type in “regedit” into the Run field (without quotations).
  2. Highlight Computer at the top of the registry tree.  Backup the registry first by clicking on File and then on Export.  Select a file location to save the registry file.

    Note: You will be editing the registry.  This could have detrimental effects on your computer if done incorrectly, so it is strongly advised to make a backup.

  3. Browse to the following registry key:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NetFramework\v4.0.30319
  4. Right-click on the right pane and create a new DWORD (32-bit) Value with Name SchUseStrongCrypto.
  5. Ensure that the Value data field is set to 1 and the Base is Hexadecimal. Click on OK.
  6. Repeat steps 4 and 5 for the registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319
  7. Reboot the server
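As a scripted alternative to the manual steps above (a sketch only, not part of the original article; it assumes a 64-bit process run as Administrator), the same value can be written from C# for both registry views:

using Microsoft.Win32;

class EnableStrongCrypto
{
    static void Main()
    {
        // The two keys from steps 3 and 6; SchUseStrongCrypto = 1 makes TLS 1.0/1.1/1.2 the defaults
        string[] keys =
        {
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NetFramework\v4.0.30319",
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319"
        };

        foreach (string key in keys)
        {
            Registry.SetValue(key, "SchUseStrongCrypto", 1, RegistryValueKind.DWord);
        }
        // Step 7: reboot the server afterwards
    }
}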

 

B2. Include the TLS 1.1 and/or TLS 1.2 protocols in your application code, before the request to the API.

Should you need more control over the protocols used, there is a custom extension that allows you to:

  • Get the current Security Protocols available for the request
  • Add the TLS 1.1 protocol to the following requests
  • Add the TLS 1.2 protocol to the following requests

You can get it from Forge: ManageSecurityProtocols

Note: the Forge module is built by our community and is not officially supported by OutSystems Support.
Should you notice any issues with the extension, please report them through Forge (tab Support).

To use those protocols, call one of the “AddTLS” actions (depending on which protocol you need) before you send a request to the API that requires that specific protocol.
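For reference, outside OutSystems the same effect is achieved in plain .NET with ServicePointManager. The sketch below is only an illustration of the underlying mechanism, not the extension's actual code, and the URL is a placeholder; it appends TLS 1.1 and TLS 1.2 to the protocols used by subsequent requests in the process:

using System;
using System.Net;

class TlsRequestExample
{
    static void Main()
    {
        // Append TLS 1.1 and TLS 1.2 to whatever protocols are already enabled for this process
        ServicePointManager.SecurityProtocol |=
            SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;

        using (var client = new WebClient())
        {
            // Any request sent after the line above may now negotiate TLS 1.1 or TLS 1.2
            string response = client.DownloadString("https://api.example.com/resource");
            Console.WriteLine(response.Length);
        }
    }
}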

From: https://success.outsystems.com/Support/Enterprise_Customers/Maintenance_and_Operations/(.NET)_Enable_SSL_Protocols_for_your_Integrations_-_TLS_1.1_and_TLS_1.2

SQL Server 2000 system tables and their equivalent DMV in SQL Server 2005

Standard

Those who have been working with SQL Server administration for a while have undoubtedly, at times, referred to the old SQL Server system tables in order to automate some processes or to document their tables, for example by combining the sysobjects and syscolumns tables. As of SQL Server 2005, Microsoft added a number of Dynamic Management Views (DMVs) that simplify all kinds of management tasks.

Below is a list of SQL Server 2000 system tables and their SQL Server 2005 equivalent management views, together with a brief description of the kind of information found in each view.

Dynamic Management Views existing in the Master database

SQL Server 2000 SQL Server 2005 Description
sysaltfiles sys.master_files Contains a row per file of a database as stored in the master database.
syscacheobjects sys.dm_exec_cached_plans Returns a row for each query plan that is cached by SQL Server for faster query execution.
sys.dm_exec_plan_attributes Returns one row per plan attribute for the plan specified by the plan handle.
sys.dm_exec_sql_text Returns the text of the SQL batch that is identified by the specified sql_handle.
sys.dm_exec_cached_plan_dependent_objects Returns a row for each Transact-SQL execution plan, common language runtime (CLR) execution plan, and cursor associated with a plan.
syscharsets sys.syscharsets Contains one row for each character set and sort order defined for use by the SQL Server Database Engine.
sysconfigures sys.configurations Contains a row per server-wide configuration option value in the system.
syscurconfigs sys.configurations Contains a row per server-wide configuration option value in the system.
sysdatabases sys.databases Contains one row per database in the instance of Microsoft SQL Server.
sysdevices sys.backup_devices Contains a row for each backup-device registered by using sp_addumpdevice or created in SQL Server Management Studio.
syslanguages sys.syslanguages Contains one row for each language present in the instance of SQL Server.
syslockinfo sys.dm_tran_locks Returns information about currently active lock manager resources
syslocks sys.dm_tran_locks Returns information about currently active lock manager resources
syslogins sys.server_principals Contains a row for every server-level principal.
sys.sql_logins Returns one row for every SQL login.
sysmessages sys.messages Contains a row for each message_id or language_id of the error messages in the system, for both system-defined and user-defined messages.
sysoledbusers sys.linked_logins Returns a row per linked-server-login mapping, for use by RPC and distributed queries from local server to the corresponding linked server.
sysopentapes sys.dm_io_backup_tapes Returns the list of tape devices and the status of mount requests for backups.
sysperfinfo sys.dm_os_performance_counters Returns a row per performance counter maintained by the server.
sysprocesses sys.dm_exec_connections Returns information about the connections established to this instance of SQL Server and the details of each connection.
sys.dm_exec_sessions Returns one row per authenticated session on SQL Server.
sys.dm_exec_requests Returns information about each request that is executing within SQL Server.
sysremotelogins sys.remote_logins Returns a row per remote-login mapping. This catalog view is used to map incoming local logins that claim to be coming from a corresponding server to an actual local login.
sysservers sys.servers Contains a row per linked or remote server registered, and a row for the local server that has server_id = 0.
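For example (any recent SQL Server), a query that would once have read from sysprocesses can be written against the three DMVs listed above for it:

SELECT s.session_id,
       s.login_name,
       s.status,
       r.command,
       r.wait_type,
       c.client_net_address
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests    AS r ON r.session_id = s.session_id
LEFT JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
WHERE s.is_user_process = 1;   -- user sessions only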

Dynamic Management Views existing in every database.

SQL Server 2000 SQL Server 2005 Description
fn_virtualfilestats sys.dm_io_virtual_file_stats Returns I/O statistics for data and log files.
syscolumns sys.columns Returns a row for each column of an object that has columns, such as views or tables.
syscomments sys.sql_modules Returns a row for each object that is an SQL language-defined module. Objects of type ‘P’, ‘RF’, ‘V’, ‘TR’, ‘FN’, ‘IF’, ‘TF’, and ‘R’ have an associated SQL module.
sysconstraints sys.check_constraints Contains a row for each object that is a CHECK constraint, with sys.objects.type = ‘C’.
sys.default_constraints Contains a row for each object that is a default definition (created as part of a CREATE TABLE or ALTER TABLE statement instead of a CREATE DEFAULT statement), with sys.objects.type = ‘D’.
sys.key_constraints Contains a row for each object that is a primary key or unique constraint. Includes sys.objects.type ‘PK’ and ‘UQ’.
sys.foreign_keys Contains a row per object that is a FOREIGN KEY constraint, with sys.objects.type = ‘F’.
sysdepends sys.sql_expression_dependencies Contains one row for each by-name dependency on a user-defined entity in the current database. A dependency between two entities is created when one entity, called the referenced entity, appears by name in a persisted SQL expression of another entity, called the referencing entity.
sysfilegroups sys.filegroups Contains a row for each data space that is a filegroup.
sysfiles sys.database_files Contains a row per file of a database as stored in the database itself. This is a per-database view.
sysforeignkeys sys.foreign_key_columns Contains a row for each column, or set of columns, that comprise a foreign key.
sysindexes sys.indexes Contains a row per index or heap of a tabular object, such as a table, view, or table-valued function.
sys.partitions Contains a row for each partition of all the tables and most types of indexes in the database. Special index types like Fulltext, Spatial, and XML are not included in this view. All tables and indexes in SQL Server contain at least one partition, whether or not they are explicitly partitioned.
sys.allocation_units Contains a row for each allocation unit in the database.
sys.dm_db_partition_stats Returns page and row-count information for every partition in the current database.
sysindexkeys sys.index_columns Contains one row per column that is part of a sys.indexes index or unordered table (heap).
sysmembers sys.database_role_members Returns one row for each member of each database role.
sysobjects sys.objects Contains a row for each user-defined, schema-scoped object that is created within a database.
syspermissions sys.database_permissions Returns a row for every permission or column-exception permission in the database. For columns, there is a row for every permission that is different from the corresponding object-level permission. If the column permission is the same as the corresponding object permission, there will be no row for it and the actual permission used will be that of the object.
sys.server_permissions Returns one row for each server-level permission.
sysprotects sys.database_permissions Returns a row for every permission or column-exception permission in the database. For columns, there is a row for every permission that is different from the corresponding object-level permission. If the column permission is the same as the corresponding object permission, there will be no row for it and the actual permission used will be that of the object.
sys.server_permissions Returns one row for each server-level permission.
sysreferences sys.foreign_keys Contains a row per object that is a FOREIGN KEY constraint, with sys.objects.type = ‘F’.
systypes sys.types Contains a row for each system and user-defined type.
sysusers sys.database_principals Returns a row for each principal in a database.
sysfulltextcatalogs sys.fulltext_catalogs Contains a row for each full-text catalog.
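As an illustration of the per-database views, the classic sysobjects/syscolumns join mentioned in the introduction becomes:

SELECT o.name AS table_name,
       c.name AS column_name,
       t.name AS data_type
FROM sys.objects AS o
JOIN sys.columns AS c ON c.object_id = o.object_id
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE o.type = 'U'   -- user tables only
ORDER BY o.name, c.column_id;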

Script to retrieve SQL Server database backup history and no backups

Standard


Problem

There is a multitude of data to be mined from within the Microsoft SQL Server system views. This data is used to present information back to the end user of the SQL Server Management Studio (SSMS) and all third party management tools that are available for SQL Server Professionals. Be it database backup information, file statistics, indexing information, or one of the thousands of other metrics that the instance maintains, this data is readily available for direct querying and assimilation into your “home-grown” monitoring solutions as well. This tip focuses on that first metric: database backup information. Where it resides, how it is structured, and what data is available to be mined.

Solution

The msdb system database is the primary repository for storage of SQL Agent, backup, Service Broker, Database Mail, Log Shipping, restore, and maintenance plan metadata. We will be focusing on the handful of system views associated with database backups for this tip:

  • dbo.backupset: provides information concerning the most-granular details of the backup process
  • dbo.backupmediafamily: provides metadata for the physical backup files as they relate to backup sets
  • dbo.backupfile: this system view provides the most-granular information for the physical backup files

Based upon these tables, we can create a variety of queries to collect a detailed insight into the status of backups for the databases in any given SQL Server instance.


Database Backups for all databases For Previous Week

--------------------------------------------------------------------------------- 
--Database Backups for all databases For Previous Week 
--------------------------------------------------------------------------------- 
SELECT 
CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
msdb.dbo.backupset.database_name, 
msdb.dbo.backupset.backup_start_date, 
msdb.dbo.backupset.backup_finish_date, 
msdb.dbo.backupset.expiration_date, 
CASE msdb.dbo.backupset.type 
WHEN 'D' THEN 'Database' 
WHEN 'L' THEN 'Log' 
END AS backup_type, 
msdb.dbo.backupset.backup_size, 
msdb.dbo.backupmediafamily.logical_device_name, 
msdb.dbo.backupmediafamily.physical_device_name, 
msdb.dbo.backupset.name AS backupset_name, 
msdb.dbo.backupset.description 
FROM msdb.dbo.backupmediafamily 
INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id 
WHERE (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 7) 
ORDER BY 
msdb.dbo.backupset.database_name, 
msdb.dbo.backupset.backup_finish_date



Most Recent Database Backup for Each Database

------------------------------------------------------------------------------------------- 
--Most Recent Database Backup for Each Database 
------------------------------------------------------------------------------------------- 
SELECT  
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
   msdb.dbo.backupset.database_name,  
   MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date 
FROM   msdb.dbo.backupmediafamily  
   INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id  
WHERE  msdb.dbo.backupset.type = 'D' 
GROUP BY 
   msdb.dbo.backupset.database_name  
ORDER BY  
   msdb.dbo.backupset.database_name 

 


Most Recent Database Backup for Each Database – Detailed

You can join the two result sets together by using the following query in order to return more detailed information about the last database backup for each database. The LEFT JOIN allows you to match up grouped data with the detailed data from the previous query without having to include the fields you do not wish to group on in the query itself.

------------------------------------------------------------------------------------------- 
--Most Recent Database Backup for Each Database - Detailed 
------------------------------------------------------------------------------------------- 
SELECT  
   A.[Server],  
   A.last_db_backup_date,  
   B.backup_start_date,  
   B.expiration_date, 
   B.backup_size,  
   B.logical_device_name,  
   B.physical_device_name,   
   B.backupset_name, 
   B.description 
FROM 
   ( 
   SELECT   
       CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
       msdb.dbo.backupset.database_name,  
       MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date 
   FROM    msdb.dbo.backupmediafamily  
       INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id  
   WHERE   msdb.dbo.backupset.type = 'D' 
   GROUP BY 
       msdb.dbo.backupset.database_name  
   ) AS A 
    
   LEFT JOIN  

   ( 
   SELECT   
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
   msdb.dbo.backupset.database_name,  
   msdb.dbo.backupset.backup_start_date,  
   msdb.dbo.backupset.backup_finish_date, 
   msdb.dbo.backupset.expiration_date, 
   msdb.dbo.backupset.backup_size,  
   msdb.dbo.backupmediafamily.logical_device_name,  
   msdb.dbo.backupmediafamily.physical_device_name,   
   msdb.dbo.backupset.name AS backupset_name, 
   msdb.dbo.backupset.description 
FROM   msdb.dbo.backupmediafamily  
   INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id  
WHERE  msdb.dbo.backupset.type = 'D' 
   ) AS B 
   ON A.[server] = B.[server] AND A.[database_name] = B.[database_name] AND A.[last_db_backup_date] = B.[backup_finish_date] 
ORDER BY  
   A.database_name



Databases Missing a Data (aka Full) Back-Up Within Past 24 Hours

At this point we’ve seen how to look at the history for databases that have been backed up. While this information is important, there is an aspect to backup metadata that is slightly more important – which of the databases you administer have not been getting backed up. The following query provides you with that information (with some caveats.)

------------------------------------------------------------------------------------------- 
--Databases Missing a Data (aka Full) Back-Up Within Past 24 Hours 
------------------------------------------------------------------------------------------- 
--Databases with data backup over 24 hours old 
SELECT 
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server, 
   msdb.dbo.backupset.database_name, 
   MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date, 
   DATEDIFF(hh, MAX(msdb.dbo.backupset.backup_finish_date), GETDATE()) AS [Backup Age (Hours)] 
FROM    msdb.dbo.backupset 
WHERE     msdb.dbo.backupset.type = 'D'  
GROUP BY msdb.dbo.backupset.database_name 
HAVING      (MAX(msdb.dbo.backupset.backup_finish_date) < DATEADD(hh, - 24, GETDATE()))  

UNION  

--Databases without any backup history 
SELECT      
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,  
   master.dbo.sysdatabases.NAME AS database_name,  
   NULL AS [Last Data Backup Date],  
   9999 AS [Backup Age (Hours)]  
FROM 
   master.dbo.sysdatabases LEFT JOIN msdb.dbo.backupset 
       ON master.dbo.sysdatabases.name  = msdb.dbo.backupset.database_name 
WHERE msdb.dbo.backupset.database_name IS NULL AND master.dbo.sysdatabases.name <> 'tempdb' 
ORDER BY  
   msdb.dbo.backupset.database_name

 

Now let me explain those caveats, and this query. The first part of the query returns all records where the last database (full) backup is older than 24 hours from the current system date. This data is then combined via the UNION statement to the second portion of the query. That second statement returns information on all databases that have no backup history. I’ve taken the liberty of singling tempdb out from the result set since you do not back up that system database. It is recreated each time the SQL Server services are restarted. That is caveat #1. Caveat #2 is the arbitrary value I’ve assigned to the aging value for databases without any backup history. I’ve set that value at 9999 hours because in my environment I want to place a higher emphasis on those databases that have never been backed up.

SQL Server Management Studio Tips

Standard

Source: http://www.sqlservercentral.com/articles/SSMS/160267/?utm_source=SSC&utm_medium=pubemail

By Konstantin Taranov, 2017/08/14

Most tips work for SSMS 2008 and later, but some of them apply only to SSMS 2016 and above.

Great thanks to:

  • Kendra Little
  • Slava Murygin
  • Mike Milligan
  • Kenneth Fisher
  • William Durkin
  • John Morehouse
  • Phil Factor
  • Klaus Aschenbrenner
  • Latish Sehgal
  • Arvind Shyamsundar
  • SQLMatters
  • MSSQLTips
  • Anthony Zanevsky, Andrew Zanevsky and Katrin Zanevsky
  • Andy Mallon
  • Aaron Bertrand

Import and Export Settings

Tools > Options > Environment > Import and Export Settings

You can configure a great many settings in SSMS, then export them and use them on all your computers. The link below provides detailed instructions and an awesome Dark theme configuration: Making SSMS Pretty: My Dark Theme

You can also create a shared team settings file and use it from a network location. Detailed information can be found in this article: Symbolic Links for Sharing Template Files or “How I Broke Management Studio with Symbolic Links”

Import and Export Settings

SSMS Shortcuts

You can find all 957 shortcuts here.

Most useful are:

Shortcut Description
Ctrl + U Change Selected Database
Ctrl + R Toggle Results Pane
Ctrl + Space Activate Autocomplete
Ctrl + Shift + V Cycle through clipboard ring
Ctrl + ] Navigate to matching parenthesis
Ctrl + – Navigate to last cursor location
Ctrl + Shift + – Navigate forward to cursor location
Ctrl + K, Ctrl + C Comments selected text
Ctrl + K, Ctrl + U Uncomments selected text
Ctrl + K, Ctrl + K Toggle Bookmark
Ctrl + K, Ctrl + N Go to Next Bookmark
Ctrl + L Display Estimated Query Execution plan
Shift + Alt + Enter View Code Editor in Full Screen
Ctrl + I Quick Search
Ctrl + F4 Close the current MDI child window
Ctrl + F5 Parse query to check for errors
Shift + F10 Simulate right mouse button
Ctrl + Alt + T Display Template Explorer
Ctrl + Shift + M Specify values for template parameters

Keyboard Shortcuts for Favorite Stored Procedures

Tools > Options > Environment > Keyboard > Query Shortcuts

Keyboard Shortcuts for Favorite Stored Procedures

Three shortcuts cannot be changed: Alt + F1, Ctrl + 1 and Ctrl + 2. For the other nine shortcuts, my recommendation is the awesome open-source Brent Ozar team procedures and, with some limitations, Adam Machanic's sp_WhoIsActive:

Query Shortcut Stored Procedure
Alt + F1 sp_help
Ctrl + F1 sp_WhoIsActive
Ctrl + 1 sp_who
Ctrl + 2 sp_lock
Ctrl + 3 sp_Blitz
Ctrl + 4 sp_BlitzCache
Ctrl + 5 sp_BlitzWho
Ctrl + 6 sp_BlitzQueryStore
Ctrl + 7 sp_BlitzFirst
Ctrl + 8 usp_BulkUpload
Ctrl + 9 sp_BlitzTrace
Ctrl + 0 sp_foreachdb


Script any object with data

Right click on database name > Tasks > Generate Scripts …

Script any object with data

Selecting a block of text using the ALT Key

By holding down the ALT key as you select a block of text, you can control the width of the selection region as well as the number of rows. You can also activate multi-line mode with Shift + Alt and use the arrow keys to format multi-line code.

Script Table and Column Names by Dragging from Object Explorer

Save keystrokes by dragging the Columns folder for a table into the query window to auto-type all of the table's column names on a single line.

  • Warning: this doesn’t include [brackets] around the column names, so if your columns contain spaces or special characters at the beginning, this shortcut isn’t for you
  • Dragging the table name over will auto-type the schema and table name, with brackets.

Disable Copy of Empty Text

  • Select a block of text to copy;
  • Move the cursor the place where you want to paste the code;
  • Accidentally press Ctrl+C again instead of Ctrl+V;
  • Block of copied text is replaced by an empty block;

This behavior can be disabled in SSMS: go to Tools > Options > Text Editor > All Languages > General > 'Apply Cut or Copy Commands to blank lines when there is no selection' and uncheck the checkbox.

Disable Copy of Empty Text

Client Statistics

When you enable that option for your session, SQL Server Management Studio will give you more information about the client side processing of your query.

The Network Statistics shows you the following information:

  • Number of Server Roundtrips
  • TDS Packets sent from Client
  • TDS Packets received from Server
  • Bytes sent from Client
  • Bytes received from Server

The Time Statistics additionally shows you the following information:

  • Client Processing Time
  • Total Execution Time
  • Wait Time on Server Replies

Configure Object Explorer to Script Compression and Partition Schemes for Indexes

Is this index compressed or partitioned?

By default, you wouldn’t know just by scripting out the index from Object Explorer. If you script out indexes this way to check them into source code, or to tweak the definition slightly, this can lead you to make mistakes.

You can make sure you’re aware when indexes have compression or are partitioned by changing your scripting settings:

  • Click Tools -> Options -> SQL Server Object Explorer -> Scripting
  • Scroll down in the right pane of options and set both of these to True
    • Script Data Compression Options
    • Script Partition Schemes
  • Click OK

Using GO X to Execute a Batch or Statement Multiple Times

The GO command marks the end of a batch of statements that should be sent to SQL Server for processing and compiled into a single execution plan. By specifying a number after GO, the batch can be run that number of times. This can be useful if, for instance, you want to create test data by running an insert statement a number of times. Note that this is not a Transact-SQL statement and will only work in Management Studio (and also SQLCMD or OSQL). For instance, the following SQL can be run in SSMS:

CREATE TABLE TestData(ID INT IDENTITY (1,1), CreatedDate DATETIME)
GO

INSERT INTO TestData(CreatedDate) SELECT GetDate()
GO 10

This will run the insert statement 10 times and therefore insert 10 rows into the TestData table. In this case this is a simpler alternative than creating a cursor or while loop.

SSMS Template Replacement

One under-used feature of Management Studio is the template replacement feature. SSMS comes with a library of templates, but you can also make your own templates for reusable scripts.

In your saved .sql script, just use the magic incantation to denote the parameters for replacement. The format is simple: <label, datatype, default value>

Then, when you open the .sql script, you hit CTRL + Shift + M, and SSMS will give you a pop-up to enter your replacement values.
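For example, a hypothetical saved template might look like the snippet below; opening it and pressing CTRL + Shift + M prompts you for the values:

-- Each <label, datatype, default value> placeholder is filled in via Ctrl + Shift + M
SELECT *
FROM <schema_name, sysname, dbo>.<table_name, sysname, MyTable>
WHERE <filter_column, sysname, ID> = <filter_value, int, 1>;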

Color coding of connections

SQL Server Management Studio has the capability of coloring the bar at the bottom of each query window, with the color dependent on which server is connected. This can be useful as a visual check of the server that a query is to be run against, for instance to color code production instances as red, development as green and test as amber. This can also be used in conjunction with Registered Servers and CMS (Central Management Server). To add a color bar when connecting to a server, click on the Options button in the Connect to Database Engine window and then select the Connection Properties tab. Select the check box towards the bottom of the window and use the ‘Select…’ button to choose a color.

SQLCMD mode

Switching on SQLCMD mode enables a number of useful extra scripting-style commands in SSMS. In particular, you can use it to change the connection within the query window, so that you can run a query against multiple servers from the same query window. There are more details of how to do this here: Changing the SQL Server connection within an SSMS Query Windows using SQLCMD Mode
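For example, with SQLCMD mode enabled (Query -> SQLCMD Mode), the :CONNECT command switches servers mid-script; the server names below are placeholders:

:CONNECT Server01
SELECT @@SERVERNAME AS server_name;
GO
:CONNECT Server02
SELECT @@SERVERNAME AS server_name;
GO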

Script multiple objects using the Object Explorer Details Windows

Individual database objects, such as a table or stored procedure, can be scripted within SSMS by right clicking on the object within Object Explorer and selecting the appropriate item in the drop down menu. However if you have a lot of objects to script that can quickly become time consuming. Fortunately it’s possible to select multiple objects and script them up all together in a single query window. To do this just open the Object Explorer Details window from the View menu (or press the F7 key). If you want to script up multiple (or all) tables, select the Tables item under the relevant database in Object Explorer. A list of all tables appears in the Object Explorer Details window. Select the tables you want to script (using the Control key if necessary) and then right click and select which script option you want – e.g. to create a table create script for all tables

Registered Servers / Central Management Server

If you have a lot of servers then re-entering the details in Object Explorer every time you start SSMS can be frustrating and time consuming. Fortunately there are two facilities within SSMS that enable these details to be entered just once and “remembered” each time you open up SSMS. These two facilities are Registered Servers and Central Management Servers. These were introduced in different versions of SQL Server and work in different ways, each has its own advantages and disadvantages so you may want to use both.

To add a registered server, open the Registered Servers window from the View menu (or press CTRL + ALT + G); the window should appear in the top left corner of SSMS. Right click on the Local Server Groups folder and select ‘New Server Registration…’. Enter the server details and close the window. This new server should then appear under Local Server Groups; you can then right click it and open the server in Object Explorer or open a new query window. The server details are stored locally in an XML file and so will appear the next time you open SSMS. If you have a lot of servers you can also create Server Groups to group similar servers together. One advantage of creating groups (other than being able to logically group similar servers together) is that you can run a query against all servers in the group, by right clicking the group and selecting ‘New Query’.

Central Management Server are similar to Registered Servers but with some differences, the main one being that the server details are stored in a database (the Central Management Server) rather than a local file. A significant limitation with CMS is that the CMS server itself can’t be included in the list of servers.

Splitting the Query Window

The query window in SSMS can be split into two so that you can look at two parts of the same query simultaneously. Both parts of the split window can be scrolled independently. This is especially useful if you have a large query and want to compare different areas of the same query. To split the window simply drag the bar to the top right hand side of the window as shown below.

The splitter bar allows you to view one session with two panes. You can scroll in each pane independently. You can also edit in both the top and bottom pane.

Splitting the Query Window

Moving columns in the results pane

It may not be immediately obvious but you can switch columns around in the results pane when using the grid view, by dragging the column headers and dropping them next to another column header. This can be useful if you want to rearrange how the results are displayed without amending the query, especially if you have a lot of columns in your result set. This works only for one column.

Generating Charts and Drawings in SQL Server Management Studio

You don’t have to settle for T-SQL’s monochrome text output. These stored procedures let you quickly and easily turn your SELECT queries’ output into colorized charts and even computer-generated art. To turn your own data into a line, column, area, or bar chart using the Chart stored procedure, you need to design a SELECT query that serves as the first parameter in the stored procedure call.

Detailed article and code here: Generating Charts and Drawings in SQL Server Management Studio

You can also generate amazing charts using awesome R libraries; see the detailed article: View R Plots from within SQL Server Management Studio

Additional Connection Parameters

One change recent versions of SSMS got for free from the .NET Framework they are built on is the connection resiliency logic within the SqlConnection.Open() method. To improve the default experience for clients which connect to Azure SQL Database, that method will (in the case of initial connection errors / timeouts) now retry 1 time after sleeping for 10 seconds. These numbers are configurable by properties called ConnectRetryCount (default value 1) and ConnectRetryInterval (default value 10 seconds). Previous versions of the SqlConnection class would not automatically retry in cases of connection failure.

This automatic retry can hide transient connection errors that you may actually want to see when troubleshooting. There is a simple workaround: add the following parameter string into the Additional Connection Parameters tab within the SSMS connection window. The good news is that you only need to do this once, as the property is saved for future sessions for that SQL Server (until, of course, it is removed by you later).

ConnectRetryCount=0

Additional Connection Parameters

Working with tabs headers

You can view the SPID in the tab header, quickly open the script's containing folder, or copy the script file path.

Working with tabs headers

Hiding tables in SSMS Object Explorer

  1. You can actually hide an object from object explorer by assigning a specific extended property:
EXEC sp_addextendedproperty
@name = N'microsoft_database_tools_support',
@value = 'Hide',
@level0type = N'Schema', @level0name = 'Person',
@level1type = N'Table', @level1name = 'Address';
GO

You can then remove the property (and the object will show back up) like so:

EXEC sp_dropextendedproperty
@name = N'microsoft_database_tools_support',
@level0type = N'Schema', @level0name = 'Person',
@level1type = N'Table', @level1name = 'Address';
GO
  2. DENY VIEW DEFINITION
DENY VIEW DEFINITION ON Schema.Table TO UserName;

Now UserName won't be able to see Table in Object Explorer. In fact, they won't be able to see the table in sys.tables or INFORMATION_SCHEMA.TABLES either.

VIEW DEFINITION is the permission to see the definition of the object: for stored procedures and views that is the code, and for tables it is the column definitions, etc.

UnDock Tabs and Windows for Multi Monitor Support

From SSMS 2012 and onwards, you can easily dock/undock the query tabs as well as different object windows inside SSMS to make better use of the screen real estate and multiple monitors you have.

UnDock Tabs and Windows for Multi Monitor Support

RegEx-Based Finding and Replacing of Text in SSMS

So often, one sees developers doing repetitive coding in SSMS or Visual Studio that would be much quicker and easier by using the built-in Regular-Expression-based Find/Replace functionality. It is understandable, since the syntax is odd and some features are missing, but it is still well-worth knowing about.

More details and examples you can find here RegEx-Based Finding and Replacing of Text in SSMS.

My favorite replacement is \t with \n, (a newline followed by a comma). It is useful in many cases, for example when you have column names copied from Excel and need to quickly turn them into a SQL query.

Changing what SSMS opens on startup

You can customize SSMS startup behavior in Tools -> Options -> Environment -> Startup and hide system objects in Object Explorer:

Changing what SSMS opens on startup

You can also disable the splash screen – this cuts the time it takes SSMS to load for versions before SSMS 17. Right click your shortcut to SSMS and select Properties. Enter the text -nosplash right after the ending quote in the path:

SSMS link nosplash option

It is useful to create a solution of commonly used SQL scripts to always load at start-up.

  1. Display the Solution Explorer by pressing Ctrl+Alt+L or clicking View -> Solution Explorer.
  2. Then right click the Solution "Solution1" (0 projects) text and select Add -> New Project.
  3. Use the default SQL Server Scripts template and give your solution a clever name.
  4. Rename all of your SQL Code Snippets so the extension is .SQL. Drag them into the queries folder within the Solution Explorer.
  5. Open Windows Explorer and browse to the location of your solution. Copy the file location address to your clipboard. Go back to your SSMS shortcut properties and add, within double quotes, the location and file name of your solution before the -nosplash switch.

This is the complete text within my shortcut properties:

<code>"C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Binn\ManagementStudio\Ssms.exe" "C:\Users\taranov\Documents\SQL Server Management Studio\Projects\MySQLServerScripts.ssmssln" -nosplash
</code>

Create a solution of commonly used SQL scripts

Modifying New Query Template

You can modify the New Query template for any SQL Server instance:

C:\Program Files (x86)\Microsoft SQL Server\[140|130|120|110]\Tools\Binn\ManagementStudio\SqlWorkbenchProjectItems\Sql\SQLFile.sql

For example, you can add a BEGIN TRANSACTION statement to guard against “oops” queries:

BEGIN TRANSACTION;


-- COMMIT   TRANSACTION;
-- ROLLBACK TRANSACTION;

Thanks for this tip Aaron Bertrand: T-SQL Tuesday #92: Lessons Learned the Hard Way

Query Execution Options

More detailed article here: Knowing the Options

The options represent the SET values of the current session. SET options can affect how a query is executed, and thus can lead to a different execution plan. You can find these options in two places within SSMS, under Tools -> Options -> Query Execution -> SQL Server -> Advanced:

Query Execution Options Advanced

As well as Tools -> Options -> Query Execution -> SQL Server -> ANSI:

Query Execution Options ANSI

Using the interface to check what is set can get tiresome. Instead, you can use the system function @@OPTIONS. Each of the 15 options shown above has a BIT value indicating whether or not it is enabled.

@@OPTIONS takes the binary representation and does a bitwise operation on it to produce an integer value that is the sum of the bit values of the enabled options.

The default value of SELECT @@OPTIONS is 5496. Let's assume for a moment that the only two options enabled on my machine are ANSI_WARNINGS and ANSI_PADDING. Their bit values are 8 and 16, respectively, so @@OPTIONS would return the sum of the two: 24.

/***************************************************************
  Author: John Morehouse
  Summary: This script displays what SET options are enabled for the current session.
 
  You may alter this code for your own purposes. You may republish altered code as long as you give due credit.
 
  THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
***************************************************************/
SELECT 'Disable_Def_Cnst_Chk'    AS 'Option', CASE @@options & 1     WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'IMPLICIT_TRANSACTIONS'   AS 'Option', CASE @@options & 2     WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'CURSOR_CLOSE_ON_COMMIT'  AS 'Option', CASE @@options & 4     WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'ANSI_WARNINGS'           AS 'Option', CASE @@options & 8     WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'ANSI_PADDING'            AS 'Option', CASE @@options & 16    WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'ANSI_NULLS'              AS 'Option', CASE @@options & 32    WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'ARITHABORT'              AS 'Option', CASE @@options & 64    WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'ARITHIGNORE'             AS 'Option', CASE @@options & 128   WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'QUOTED_IDENTIFIER'       AS 'Option', CASE @@options & 256   WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'NOCOUNT'                 AS 'Option', CASE @@options & 512   WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'ANSI_NULL_DFLT_ON'       AS 'Option', CASE @@options & 1024  WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'ANSI_NULL_DFLT_OFF'      AS 'Option', CASE @@options & 2048  WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'CONCAT_NULL_YIELDS_NULL' AS 'Option', CASE @@options & 4096  WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'NUMERIC_ROUNDABORT'      AS 'Option', CASE @@options & 8192  WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION ALL
SELECT 'XACT_ABORT'              AS 'Option', CASE @@options & 16384 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled';

SQL Server Diagnostics Extension

Analyze Dumps – Customers using this extension will be able to debug and self-resolve memory dump issues from their SQL Server instances and receive recommended Knowledge Base (KB) article(s) from Microsoft, which may be applicable for the fix. The memory dumps are stored in a secured and compliant manner as governed by the Microsoft Privacy Policy.

For example, Joe, a DBA from Contoso, Ltd., finds that SQL Server has generated a memory dump while running a workload, and he would like to debug the issue. Using this feature, Joe can upload the dump and receive recommended KB articles from Microsoft, which can help him fix the issue.

SQL Server Diagnostics Extension

Commonly Used SQL Server DBCC Commands Explained in Detail

Standard

Commonly Used DBCC Commands in Detail

The Transact-SQL programming language provides DBCC statements as the database console commands of SQL Server.

DBCC commands take input parameters and return values. All DBCC command parameters accept both Unicode and DBCS literals.
Maintenance commands
1. DBCC INPUTBUFFER
Function: displays the last statement sent from the client to an instance of Microsoft SQL Server.
Syntax: DBCC INPUTBUFFER ( session_id [ , request_id ] ) [ WITH NO_INFOMSGS ]
Required permissions: the user must be a member of the sysadmin fixed server role,
or the user must have VIEW SERVER STATE permission.
Related commands: SELECT @@spid
SELECT request_id FROM sys.dm_exec_requests WHERE session_id = @@spid

2. DBCC OUTPUTBUFFER
Function: returns the current output buffer of the specified session_id in hexadecimal and ASCII format.
Syntax: DBCC OUTPUTBUFFER ( session_id [ , request_id ] ) [ WITH NO_INFOMSGS ]
Required permissions: the user must be a member of the sysadmin fixed server role.
Related commands: SELECT @@spid
SELECT request_id FROM sys.dm_exec_requests WHERE session_id = @@spid

3. DBCC SHOWCONTIG
Function: displays fragmentation information for the data and indexes of the specified table or view.
Syntax: DBCC SHOWCONTIG [ ( object_name ) ]
[ WITH { [ , [ ALL_INDEXES ] ] [ , [ TABLERESULTS ] ] [ , [ FAST ] ] [ , [ ALL_LEVELS ] ] [ NO_INFOMSGS ] } ]
Required permissions: the user must be the object owner, or a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role.
Example: DBCC SHOWCONTIG ('TableName')
Note: DBCC SHOWCONTIG and DBCC INDEXDEFRAG can be used together to defragment the indexes in a database.

4. DBCC OPENTRAN
Function: displays information about the oldest active transaction and the oldest distributed and non-distributed replicated transactions, if any, within the specified database.
Syntax: DBCC OPENTRAN [ ( [ database_name | database_id | 0 ] ) ]
{ [ WITH TABLERESULTS ] [ , [ NO_INFOMSGS ] ] }
Example: DBCC OPENTRAN (DataBaseName) WITH TABLERESULTS

5. DBCC SQLPERF
Function: provides transaction log space usage statistics for all databases. It can also be used to reset wait and latch statistics.
Syntax: DBCC SQLPERF ( [ LOGSPACE ] |
[ "sys.dm_os_latch_stats" , CLEAR ] |
[ "sys.dm_os_wait_stats" , CLEAR ] )
[ WITH NO_INFOMSGS ]
Example: DBCC SQLPERF (LOGSPACE)

6. DBCC TRACESTATUS
Function: displays the status of trace flags.
Syntax: DBCC TRACESTATUS ( [ [ trace# [ ,...n ] ] [ , ] [ -1 ] ] ) [ WITH NO_INFOMSGS ]

7. DBCC PROCCACHE
Function: displays information about the procedure cache in table format.
Syntax: DBCC PROCCACHE [ WITH NO_INFOMSGS ]
Required permissions: the user must be a member of the sysadmin fixed server role or the db_owner fixed database role.

8. DBCC USEROPTIONS
Function: returns the SET options that are active (set) for the current connection.
Syntax: DBCC USEROPTIONS [ WITH NO_INFOMSGS ]
Required permissions: requires membership in the public role.
Example: DBCC USEROPTIONS

9. DBCC SHOW_STATISTICS
Function: displays the current distribution statistics for the specified target on the specified table.

10. DBCC SHOWFILESTATS
Function: displays file usage information; the numbers need to be converted.
The output is shown in extents; one extent is 64 KB, so TotalExtents * 64 / 1024 / 1024 converts the value to GB.
Validation statements

11. DBCC CHECKALLOC
Function: checks the consistency of disk space allocation structures for the specified database.
Example: DBCC CHECKALLOC ('DataBaseName')
Required permissions: requires membership in the sysadmin fixed server role or the db_owner fixed database role.

12. DBCC CHECKFILEGROUP
Function: checks the allocation and structural integrity of all tables and indexed views in the specified filegroup of the current database.
Example: DBCC CHECKFILEGROUP ('DataBaseName')

13. DBCC CHECKCATALOG
Function: checks catalog consistency within the specified database.
Example: DBCC CHECKCATALOG ('datapeng')

14. DBCC CHECKIDENT
Function: checks the current identity value of the specified table and, if necessary, changes the identity value.

Example: DBCC CHECKIDENT ('datapeng01')

15. DBCC CHECKCONSTRAINTS
Function: checks the integrity of a specified constraint or all constraints on a specified table in the current database.

16. DBCC CHECKTABLE
Function: checks the integrity of all the pages and structures that make up a table or indexed view.

17. DBCC CHECKDB
Function: checks the logical and physical integrity of all the objects in the specified database by performing the following operations:

Runs DBCC CHECKALLOC on the database.
Runs DBCC CHECKTABLE on every table and view in the database.
Runs DBCC CHECKCATALOG on the database.
Validates the contents of every indexed view in the database.
Validates the Service Broker data in the database.
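For instance, a routine integrity check (the database and table names below are placeholders) could be run as:

DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS;   -- suppress informational messages, report errors only
-- Or check a single table only:
DBCC CHECKTABLE ('dbo.YourTable');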

Maintenance statements

18. DBCC CLEANTABLE
Function: reclaims space from dropped variable-length columns in tables or indexed views.

Example: DBCC CLEANTABLE ('datapeng', 'datapeng01')

19. DBCC INDEXDEFRAG
Function: defragments the indexes of the specified table or view.

Example: DBCC INDEXDEFRAG ('datapeng', 'datapeng01')

Pages Scanned Pages Moved Pages Removed
------------- ----------- -------------
359           346         8

(1 row(s) affected)

20. DBCC DBREINDEX
Function: rebuilds one or more indexes for a table in the specified database.

Example: DBCC DBREINDEX ('datapeng', 'datapeng01')

21. DBCC SHRINKDATABASE
Function: shrinks the size of the data and log files in the specified database.

Example: DBCC SHRINKDATABASE ('datapeng')

22. DBCC SHRINKFILE
Function: shrinks the size of the specified data or log file of the current database.
Example: DBCC SHRINKFILE ('datapeng')

23. DBCC FREEPROCCACHE
Function: removes all elements from the procedure cache.

24. DBCC UPDATEUSAGE
Function: reports and corrects page and row count inaccuracies in the catalog views. These inaccuracies may cause the sp_spaceused system stored procedure to return incorrect space usage reports.

Miscellaneous statements

25. DBCC dllname (FREE)
Function: unloads the specified extended stored procedure DLL from memory.

26. DBCC HELP
Function: returns syntax information for the specified DBCC command.

Example: DBCC HELP ('checkdb')

27. DBCC FREESESSIONCACHE
Function: flushes the distributed query connection cache used by distributed queries against an instance of Microsoft SQL Server.

28. DBCC TRACEON
Function: enables the specified trace flags.

Syntax: DBCC TRACEON ( trace# [ ,...n ] [ , -1 ] ) [ WITH NO_INFOMSGS ]

29. DBCC TRACEOFF
Function: disables the specified trace flags.

30. DBCC FREESYSTEMCACHE
Function: releases all unused cache entries from all caches. The SQL Server 2005 Database Engine proactively cleans up unused cache entries in the background to make memory available for current entries, but this command can be used to manually remove unused entries from all caches.

Example: DBCC FREESYSTEMCACHE ('all')

 

http://blog.itpub.net/29371470/viewspace-1082379

Must-Have Tools for Efficient .NET Development (Number One Is, of Course, the ReSharper Plugin)

Standard

A craftsman must sharpen his tools before he can do good work: without good tools, how can you efficiently develop high-quality code? This article introduces a set of efficient, practical tools for ASP.NET developers, covering SQL management, Visual Studio plugins, memory management, diagnostic tools and more, touching every part of the development process to double your productivity. Of course, in the opinion of the Wuleba software site, the number-one .NET tool is definitely ReSharper; once you have it, you can basically ignore all the others!


Visual Studio

  • Jetbrains ReSharper Ultimate: the most powerful VS plugin, bar none. Supports C#, VB.NET, ASP.NET, XML, XAML, C++, JavaScript, CSS, the Razor view engine, and more.
  • Visual Studio Productivity Power tools: productivity tools for the Visual Studio Professional edition.
  • Web Essentials: boosts development productivity and effectively helps developers write CSS, JavaScript, HTML, and other code.
  • MSVSMON: the remote debugging monitor (msvsmon.exe) is a lightweight application that lets Visual Studio debug programs remotely. During remote debugging, VS runs on the debugging host and MSVSMON runs on the remote machine.
  • WIX toolset: compiles XML source files into Windows installer packages.
  • Code digger: Code Digger is an extension for VS 2012/2013 that helps developers analyze their code.
  • CodeMaid: CodeMaid is an open-source VS 2012/2013/2015 plugin that provides code analysis, cleanup, and simplification features.
  • OzCode: a very powerful VS debugging tool.
  • CodeRush: a VS plugin for code refactoring and productivity.
  • T4 Text Template: in VS, T4 text templates are the most commonly used templates for generating code files; they are written as text blocks combined with control logic.
  • Indent Guides: quickly adds indent guide lines.
  • PowerShell Tools: a toolkit that supports developing and debugging PowerShell scripts and code blocks in VS 2015.
  • Visual Studio Code: a free cross-platform editor that can compile and debug modern web and cloud applications.

ASP.NET

  • Fiddler: captures HTTP requests and responses so you can simulate request behavior.
  • AutoMapper: generates object-to-object mapping code automatically; for example, it can map entity objects to domain objects instead of you writing the mapping code by hand.
  • Unity/Ninject/Castle Windsor/StructureMap/Spring.Net: dependency injection frameworks; there are many DI frameworks available.
  • .NET Reflector: a .NET assembly decompiler.
  • dotPeek: a .NET assembly decompiler.
  • ILSpy: a .NET assembly decompiler.
  • memprofiler: a very powerful tool for finding memory leaks and optimizing memory usage.
  • PostSharp: removes repetitive coding and avoids code bloat caused by cross-cutting concerns.
  • ASPhere: a graphical editor for Web.config.
  • ComponentOne Studio for ASP.NET: a complete development toolkit.

WCF

  • SOAP UI: an API testing tool that supports all standard protocols and technologies.
  • WireShark: a network protocol analyzer for UNIX and Windows. Used to capture traffic at the TCP layer; it can also help you filter out irrelevant information.
  • Svc TraceViewer: provides a file-based trace viewer, supplied with WCF.
  • Svc Config Editor: a graphical tool for managing WCF-related configuration.

MSMQ

  • QueueExplorer 3.4: provides message operations such as copying, deleting, and moving messages, saving and loading, stress testing, viewing and editing, and more.

LINQ

  • LINQ Pad: LINQPad is a lightweight tool for testing LINQ queries. It can test .NET language scripts written in different languages.
  • LINQ Insight: LINQ Insight Express embeds into Visual Studio and can analyze LINQ queries at design time.

RegEx

  • RegEx tester: a regular expression plugin.
  • regexr: an online tool for developing and testing regular expressions.
  • regexpal: an online tool for developing and testing regular expressions.
  • Expresso: a desktop regular expression tool.
  • RegexMagic: a tool that can generate regular expressions automatically from text patterns.

Javascript/JQuery/AngularJS

  • JSHint: a JavaScript code quality tool that defines many very strict rules.
  • JSFiddle: provides an in-browser development environment for testing HTML, CSS, and JavaScript/jQuery code.
  • Protractor: an end-to-end framework for testing Angular applications.

SQL Server

  • SQL Profiler: A SQL tracing and monitoring tool.
  • ExpressProfiler: ExpressProfiler (aka SqlExpress Profiler) is a small and fast replacement for SQL Server Profiler with its own GUI. It can be used with both Express and non-Express editions of SQL Server.
  • SQL Sentry Plan explorer: Provides a very good visual view of SQL query execution plans.
  • SQL Complete: Provides intelligent code completion and SQL formatting for SQL Server Management Studio and Visual Studio.
  • NimbleText: A text-manipulation and code-generation tool.
  • Query Express: A lightweight SQL query analyzer.
  • IO Meter: Provides details about how the IO subsystem is being used.
  • sqldecryptor: Can decrypt encrypted SQL Server objects such as stored procedures, functions, triggers and views.
  • SpatialViewer: Lets you preview and create spatial data.
  • ClearTrace: Imports trace and profiler files and displays summarized information.
  • Internals Viewer for SQL Server: Internals Viewer is a tool for looking inside the SQL Server storage engine and seeing how data is physically allocated, organized and stored.

NHibernate

Tally

Tally ERP 9

  • Tally dll: A dynamic link library for .NET that integrates the Tally Accounting software into applications so that data can be pushed or pulled programmatically.

Code Review

  • StyleCop: StyleCop is a static code analysis tool that enforces a configured set of style and consistency rules on your code. It can be run inside Visual Studio or integrated into MSBuild projects.
  • FxCop: FxCop is a static code analysis tool that enforces development standards by analyzing .NET assemblies.

Traffic Capture

  • WireShark: A network protocol analyzer for Unix and Windows. It can capture traffic at the TCP level.
  • HTTP Monitor: Enables the developer to view all the HTTP traffic between your computer and the Internet. This includes the request data (such as HTTP headers and form GET and POST data) and the response data (including the HTTP headers and body).

Diagnostics

  • Glimpse: Provides server-side diagnostic data. For example, in an ASP.NET MVC project you can add it via NuGet.

Performance

  • PerfMon: Monitors system performance using performance counters.

Code Converters

  • Telerik Code Converter: Converts C# to VB and VB to C#. It is an online editor; you can also choose 'Batch Converter' and upload files as a zip archive.

Screen Recording

  • Wink: With Wink you can easily capture screenshots, add captions and notes to them, and record demos.

Text Editors

Documentation Tools

  • GhostDoc: GhostDoc is a Visual Studio extension that automatically generates documentation comments for methods and properties based on their types, names and other contextual information.
  • helpndoc: helpndoc is a tool for creating help documentation; it can generate multiple formats from a single source.

Others

  • FileZilla: FileZilla is an open-source FTP tool. With the FileZilla client you can upload files to an FTP server.
  • TreeTrim: TreeTrim is a tool for trimming source code trees; it removes debug files, temporary files and the like.
  • BrowserStack: A tool for cross-browser testing.
  • BugShooting: Screen capture software that can capture screenshots and attach them to work items, bugs, issue-tracking items and so on.
  • Postman: A REST client that sends HTTP requests and analyzes the responses returned by REST applications.
  • Web developer checklist: A checklist that helps manage your development plan.
  • PowerGUI: Helps you quickly adopt and use PowerShell to manage your Windows development environment efficiently.
  • Beyond Compare: Provides file comparison.
  • PostMan: A REST client Chrome extension.
  • Devart Codecompare: A file diff tool that understands the structure of C#, C++ and VB code. Includes a folder comparison tool, a standalone app for comparing and merging folders and files, and code review support.

C++ File Operations in Detail

Standard

Original article: http://www.cnblogs.com/likebeta/archive/2012/06/16/2551662.html

C++ supports file input and output through the following classes:

ofstream: the class for write (output) operations on files (derived from ostream)
ifstream: the class for read (input) operations on files (derived from istream)
fstream: the class for files that can be both read and written (derived from iostream)
Opening a file
The first operation generally performed on an object of one of these classes is to associate it with a real file, that is, to open a file. The open file is represented within the program by a stream object (an instance of one of these classes), and any input or output performed on this stream object is applied to that file.

To open a file with a stream object we use its member function open(): void open (const char * filename, openmode mode);

Here filename is a string with the name of the file to open, and mode is a combination of the following flags:
ios::in      open the file for input (reading)
ios::out     open the file for output (writing)
ios::ate     initial position: end of file
ios::app     all output is appended at the end of the file
ios::trunc   if the file already exists it is deleted (truncated) first
ios::binary  binary mode
These flags can be combined using the OR operator (|). For example, if we want to open the file "example.bin" in binary mode to append data, we can do it with the following call to open():
ofstream file;
file.open ("example.bin", ios::out | ios::app | ios::binary);

The open member functions of ofstream, ifstream and fstream each have a default mode that is used when the file is opened without a mode argument, and the default differs for each class:

class       default mode parameter
ofstream    ios::out | ios::trunc
ifstream    ios::in
fstream     ios::in | ios::out
The default value is only applied if the function is called without specifying a mode parameter. If any parameter is specified, the default is completely overridden, not combined with the flags passed in the call.

Because the first operation performed on an object of these classes is usually to open a file, each of them has a constructor that calls open directly with the same parameters. This way we can define the object and open the file in a single statement:
ofstream file ("example.bin", ios::out | ios::app | ios::binary);

Both ways of opening a file are valid.

You can check whether a file has been opened successfully by calling the member function is_open(): bool is_open();

It returns a bool value of true if the file has been opened successfully, and false otherwise.

Closing a file
When the reading and writing operations are finished, the file must be closed so that it becomes available again. Closing a file is done with the member function close(), which flushes the buffered data and closes the file. Its form is quite simple: void close();

Once this member function has been called, the stream object can be reused to open another file, and the file becomes available again for access by other processes.

To prevent a stream object from being destroyed while still associated with an open file, the destructor automatically calls close().

Text mode files
The classes ofstream, ifstream and fstream are derived from ostream, istream and iostream respectively. That is why fstream objects can use the members of these parent classes to access the data.

Generally, we use the same member functions used for console interaction (cin and cout) to perform input and output. As the following example shows, we use the overloaded insertion operator <<:

// writing on a text file
#include <fstream>
using namespace std;

int main()
{
    ofstream examplefile("example.txt");
    if (examplefile.is_open())
    {
        examplefile << "This is a line.\n";
        examplefile << "This is another line.\n";
        examplefile.close();
    }
    return 0;
}

Data can also be read from a file in the same way that we use cin:

// reading a text file
#include <iostream>
#include <fstream>
#include <cstdlib>
using namespace std;
int main ()
{
    char buffer[256];
    ifstream examplefile("example.txt");
    if (! examplefile.is_open())
    {
        cout << "Error opening file"; exit (1);
    }
    while (!examplefile.eof())
    {
        examplefile.getline(buffer,100);
        cout << buffer << endl;
    }
    return 0;
}
//This is a line.
//This is another line.

The example above reads the contents of a text file and prints them on the screen. Note the new member function eof, which ifstream inherits from class ios; it returns true when the end of the file is reached.

Verification of state flags
In addition to eof(), there are other member functions to check the state of the stream (all of them return a bool value):

bad()
Returns true if a reading or writing operation fails. For example, when we try to write to a file that was not opened for writing, or when the device where we try to write has no space left.

fail()
Returns true in the same cases as bad(), plus in case of format errors, such as when an alphabetical character is extracted while trying to read an integer.

eof()
Returns true if a file opened for reading has reached the end.

good()
This is the most generic one: it returns false in the same cases in which calling any of the previous functions would return true.

To reset the state flags checked by these member functions, you can use the member function clear(), which takes no parameters.

get and put stream pointers
All i/o stream objects have at least one stream pointer:

ifstream, like istream, has a pointer known as the get pointer that points to the next element to be read.
ofstream, like ostream, has a pointer known as the put pointer that points to the location where the next element will be written.
fstream, like iostream, inherits both the get and the put pointers.
These stream pointers that point to the reading or writing positions within a stream can be read and manipulated with the following member functions:

tellg() and tellp()
These two member functions take no parameters and return a value of type pos_type (per the ANSI-C++ standard), which is an integer representing the current position of the get stream pointer (with tellg) or the put stream pointer (with tellp).

seekg() and seekp()
These functions change the position of the get and put stream pointers respectively. Both are overloaded with two different prototypes:
seekg ( pos_type position );
seekp ( pos_type position );
With this prototype, the stream pointer is set to an absolute position counted from the beginning of the file. The type of the parameter is the same as the return type of tellg and tellp.
seekg ( off_type offset, seekdir direction );
seekp ( off_type offset, seekdir direction );
With this prototype, an offset is specified relative to a concrete starting point determined by the parameter direction, which can be:
ios::beg  offset counted from the beginning of the stream
ios::cur  offset counted from the current position of the stream pointer
ios::end  offset counted from the end of the stream
The values of the get and put stream pointers are computed differently for text files and binary files, because in text mode some special characters may be modified. For that reason it is advisable, for files opened in text mode, to always use the first prototype of seekg and seekp, and never to modify the values returned by tellg or tellp. With binary files you can freely use these functions; there should not be any unexpected behavior.

The following example uses these functions to obtain the size of a binary file:

// obtaining file size
#include <iostream>
#include <fstream>
using namespace std;

int main ()
{
    const char * filename = "example.txt";
    long l,m;
    ifstream file(filename, ios::in|ios::binary);
    l = file.tellg();
    file.seekg(0, ios::end);
    m = file.tellg();
    file.close();
    cout << "size of " << filename;
    cout << " is " << (m-l) << " bytes.\n";
    return 0;
}
//size of example.txt is 40 bytes.

Binary files
In binary files, inputting and outputting data with the << and >> operators, or with functions like getline, makes little practical sense, although it is perfectly valid.

File streams include two member functions specially designed for sequential reading and writing of data: write and read. The first one (write) is a member of ostream and is inherited by ofstream; read is a member of istream and is inherited by ifstream. Objects of class fstream have both. Their prototypes are:
write ( char * buffer, streamsize size );
read ( char * buffer, streamsize size );
Here buffer is the address of a block of memory used to store or read the data, and size is an integer value specifying the number of characters to read from or write to the buffer.

// reading binary file
#include <iostream>
#include <fstream>
using namespace std;
int main ()
{
    const char * filename = "example.txt";
    char * buffer;
    long size;
    ifstream file(filename, ios::in|ios::binary|ios::ate);
    size = file.tellg();
    file.seekg(0, ios::beg);
    buffer = new char [size];
    file.read(buffer, size);
    file.close();
    cout << "the complete file is in a buffer";
    delete[] buffer;
    return 0;
}
//The complete file is in a buffer

Buffers and Synchronization
When we operate with file streams, they are associated with a buffer of type streambuf. This buffer is a block of memory that acts as an intermediary between the stream and the physical file. For example, with an output stream, each time the member function put (which writes a single character) is called, the character is not written directly to the physical file; instead it is first inserted into the stream's buffer.

When the buffer is flushed, all the data it contains is either written to the physical medium (if it is an output stream) or simply erased (if it is an input stream). This process is called synchronization, and it takes place in any of the following circumstances:

When the file is closed: before closing a file, all buffers that have not yet been completely written or read are synchronized.
When the buffer is full: buffers have a limited size; when the buffer is full it is automatically synchronized.
Explicitly with manipulators: synchronization takes place when certain manipulators, such as flush and endl, appear in the stream.
Explicitly with the member function sync(): calling sync() (no parameters) causes an immediate synchronization. It returns an int value equal to -1 if the stream has no associated buffer or if the operation fails.
In C++ there is a class called stream on which all I/O, including the file I/O we are discussing, is based. The stream class has two important operators:

1. The inserter (<<)
Outputs data to a stream. For example, the system has a default standard output stream (cout), which usually refers to the screen, so cout << "Write Stdout" << '\n'; writes the string "Write Stdout" followed by a newline character ('\n') to the standard output stream.

2. The extractor (>>)
Reads data from a stream. For example, the system has a default standard input stream (cin), which usually refers to the keyboard, so cin >> x; reads a value of the appropriate type (the type of the variable x) from the standard input stream.

In C++, file operations are performed through the fstream (file stream) subclass of stream, so to operate on files this way you must include the header file fstream.h. The file operation workflow is described step by step below.

1. Opening a file
The fstream class has a member function open() for opening files. Its prototype is:

void open(const char* filename, int mode, int access);

Parameters:

filename: the name of the file to open
mode: the mode in which to open the file
access: the access attribute of the opened file
The open modes are defined in class ios (the base class of all stream I/O classes); the commonly used values are:

ios::app: open the file in append mode
ios::ate: position to the end of the file after it is opened; ios::app implies this attribute
ios::binary: open the file in binary mode; the default is text mode (the difference between the two was covered earlier)
ios::in: open the file for input
ios::out: open the file for output
ios::nocreate: do not create the file, so the open fails if the file does not exist
ios::noreplace: do not overwrite the file, so the open fails if the file already exists
ios::trunc: if the file exists, set its length to 0
These attributes can be combined with the "or" operator (|), e.g. ios::out|ios::binary

The access attribute values are:

0: normal file, open access
1: read-only file
2: hidden file
4: system file
These can be combined with "or" or "+", e.g. 3 or 1|2 opens the file with the read-only and hidden attributes.

For example, to open the file c:config.sys in binary input mode:

fstream file1;
file1.open("c:config.sys",ios::binary|ios::in,0);

If open() is given only the file name as a single argument, the file is opened as an ordinary file for reading and writing, that is:

file1.open("c:config.sys");   // <=> file1.open("c:config.sys",ios::in|ios::out,0);

In addition, fstream has a constructor that takes the same parameters as open(), so for the example above the file can be opened at the moment the object is defined:

fstream file1("c:config.sys");

It is worth pointing out that fstream has two subclasses: ifstream (input file stream) and ofstream (output file stream). ifstream opens files for input by default, and ofstream opens files for output by default.

ifstream file2("c:pdos.def");  // open the file for input
ofstream file3("c:x.123");     // open the file for output

So in practice, choose the class according to your purpose: to open a file for input use ifstream, for output use ofstream, and for both input and output use fstream.

2. Closing a file
An opened file must be closed once you are finished with it. fstream provides the member function close() for this, e.g. file1.close(); closes the file associated with file1.

3. Reading and writing files
Reading and writing differ for text files and binary files. Text files are straightforward: the inserter and extractor are all you need. Binary files are more involved. Both approaches are described in detail below.

3.1 Reading and writing text files
Reading and writing text files is simple: use the inserter (<<) to write to the file and the extractor (>>) to read from it. Suppose file1 is opened for input and file2 for output. For example:

file2 << "I Love You";   // write the string "I Love You" to the file
int i;
file1 >> i;              // read an integer value from the file

This approach also offers simple formatting capabilities, for example outputting in hexadecimal. The common manipulators are:

Manipulator           Purpose                                            Input/Output
dec                   format numeric data as decimal                     input and output
endl                  output a newline character and flush the stream    output
ends                  output a null character                            output
hex                   format numeric data as hexadecimal                 input and output
oct                   format numeric data as octal                       input and output
setprecision(int p)   set the floating-point precision to p digits       output

For example, to output 123 in hexadecimal: file1<<hex<<123; to output 3.1415926 with 5 digits of precision: file1<<setprecision(5)<<3.1415926.

3.2 Reading and writing binary files
(1) put()
The put() function writes one character to the stream. Its prototype is ofstream &put(char ch), and it is easy to use, e.g. file1.put('c'); writes the character 'c' to the stream.

(2) get()
The get() function is more flexible and has three commonly used overloads:

The first corresponds to put(): ifstream &get(char &ch); it reads one character from the stream and stores the result in the reference ch; at end of file it returns a null character. For example, file2.get(x); reads one character from the file and stores it in x.

Another overloaded form has the prototype int get(); this form returns a character from the stream, or EOF at end of file, e.g. x = file2.get(); which does the same as the previous example.

The third form has the prototype ifstream &get(char *buf, int num, char delim='\n'); it reads characters into the array pointed to by buf until num characters have been read or the character given by delim is encountered; if delim is not supplied, the default newline character '\n' is used. For example:

file2.get(str1,127,'A'); // read characters from the file into the string str1, stopping when the character 'A' is encountered or 127 characters have been read

(3) Reading and writing blocks of data
To read and write blocks of binary data, use the member functions read() and write(). Their prototypes are:

read(unsigned char *buf,int num);
write(const unsigned char *buf,int num);

read() reads num characters from the file into the buffer pointed to by buf. If the end of the file is reached before num characters have been read, the member function int gcount(); can be used to obtain the number of characters actually read. write() writes num characters from the buffer pointed to by buf to the file. Note that the buffer type is unsigned char *, so a type cast may sometimes be required.

Example:

unsigned char str1[]="I Love You";
int n[5];
ifstream in("xxx.xxx");
ofstream out("yyy.yyy");
out.write(str1,strlen(str1));           // write the whole string str1 to yyy.yyy
in.read((unsigned char*)n,sizeof(n));   // read the integers from xxx.xxx; note the type cast
in.close();out.close();

4. Detecting EOF
The member function eof() detects whether the end of the file has been reached; it returns a non-zero value at end of file and 0 otherwise. Its prototype is int eof();

Example:

if(in.eof()) ShowMessage("End of file reached!");

5. File positioning
Unlike C-style file handling, the C++ I/O system maintains two pointers associated with a file: a read pointer, indicating the position of the next input operation, and a write pointer, indicating the position of the next write. The corresponding pointer changes automatically after each input or output operation. C++ file positioning therefore distinguishes between the read position and the write position; the corresponding member functions are seekg(), which sets the read position, and seekp(), which sets the write position. Their most general forms are:

istream &seekg(streamoff offset,seek_dir origin);
ostream &seekp(streamoff offset,seek_dir origin);

streamoff, defined in iostream.h, defines the maximum value the offset can take; seek_dir indicates the base position for the move and is an enumeration with the following values:

ios::beg: the beginning of the file
ios::cur: the current position in the file
ios::end: the end of the file
These two functions are generally used with binary files, because for text files the system's interpretation of characters may cause the computed positions to differ from what you expect.

Example:

file1.seekg(1234,ios::cur);  // move the file's read pointer 1234 bytes forward from the current position
file2.seekp(1234,ios::beg);  // move the file's write pointer 1234 bytes forward from the beginning of the file

pandasql: Make python speak SQL

Standard

http://blog.yhat.com/posts/pandasql-intro.html

Introduction

One of my favorite things about Python is that users get the benefit of observing the R community and then emulating the best parts of it. I’m a big believer that a language is only as helpful as its libraries and tools.

This post is about pandasql, a Python package we (Yhat) wrote that emulates the R package sqldf. It’s a small but mighty library comprising just 358 lines of code. The idea of pandasql is to make Python speak SQL. For those of you who come from a SQL-first background or still “think in SQL”, pandasql is a nice way to take advantage of the strengths of both languages.

In this introduction, we’ll show you how to get up and running with pandasql inside of Rodeo, the integrated development environment (IDE) we built for data exploration and analysis. Rodeo is an open source and completely free tool. If you’re an R user, it’s a comparable tool with a similar feel to RStudio. As of today, Rodeo can only run Python code, but last week we added syntax highlighting for a bunch of other languages to the editor (markdown, JSON, Julia, SQL). As you may have read or guessed, we’ve got big plans for Rodeo, including adding SQL support so that you can run your SQL queries right inside of Rodeo, even without our handy little pandasql. More on that in the next week or two!

Downloading Rodeo

Start by downloading Rodeo for Mac, Windows or Linux from the Rodeo page on the Yhat website.

P.S. If you download Rodeo and encounter a problem or simply have a question, we monitor our Discourse forum 24/7 (okay, almost).

A bit of background, if you’re curious

Behind the scenes, pandasql uses the pandas.io.sql module to transfer data between DataFrames and SQLite databases. Operations are performed in SQL, the results are returned, and the database is then torn down. The library makes heavy use of pandas write_frame and frame_query, two functions which let you read and write to/from pandas and (most) any SQL database.
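Conceptually, the round trip looks roughly like the sketch below. This is only an illustration of the idea, written with the modern pandas calls to_sql and read_sql_query rather than the older write_frame/frame_query helpers mentioned above; it is not pandasql's actual internals.

<code># DataFrame -> SQLite -> DataFrame round trip: the idea pandasql automates
import sqlite3
import pandas as pd

df = pd.DataFrame({"date": ["1975-01-01", "1975-02-01"], "births": [265775, 241045]})

conn = sqlite3.connect(":memory:")        # throwaway in-memory database
df.to_sql("births", conn, index=False)    # load the DataFrame as a table
result = pd.read_sql_query("SELECT * FROM births WHERE births > 250000;", conn)
conn.close()                              # tear the database down again

print result
</code>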

Install pandasql

Install pandasql using the package manager pane in Rodeo. Simply search for pandasql and click Install Package.

You can also run ! pip install pandasql from the text editor if you prefer to install that way.

Check out the datasets

pandasql has two built-in datasets which we’ll use for the examples below.

  • meat: Dataset from the U.S. Dept. of Agriculture containing metrics on livestock, dairy, and poultry outlook and production
  • births: Dataset from the United Nations Statistics Division containing demographic statistics on live births by month

Run the following code to check out the data sets.

<code>#Checking out meat and birth data
from pandasql import sqldf
from pandasql import load_meat, load_births

meat = load_meat()
births = load_births()

#You can inspect the dataframes directly if you're using Rodeo
#These print statements are here just in case you want to check out your data in the editor, too
print meat.head()
print births.head()
</code>

Inside Rodeo, you really don’t even need the print variable.head() statements, since you can actually just examine the dataframes directly.

An odd graph

<code># Let's make a graph to visualize the data
# Bet you haven't had a title quite like this before
import matplotlib.pyplot as plt
from pandasql import *
import pandas as pd

pysqldf = lambda q: sqldf(q, globals())

q  = """
SELECT
  m.date
  , m.beef
  , b.births
FROM
  meat m
LEFT JOIN
  births b
    ON m.date = b.date
WHERE
    m.date &gt; '1974-12-31';
"""

meat = load_meat()
births = load_births()

df = pysqldf(q)
df.births = df.births.fillna(method='backfill')

fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.plot(pd.rolling_mean(df['beef'], 12), color='b')
ax1.set_xlabel('months since 1975')
ax1.set_ylabel('cattle slaughtered', color='b')

ax2 = ax1.twinx()
ax2.plot(pd.rolling_mean(df['births'], 12), color='r')
ax2.set_ylabel('babies born', color='r')
plt.title("Beef Consumption and the Birth Rate")
plt.show()
</code>

Notice that the plot appears both in the console and the plot tab (bottom right tab).

Tip: You can “pop out” your plot by clicking the arrows at the top of the pane. This is handy if you’re working on multiple monitors and want to dedicate one just to your data visualizations.

Usage

To keep this post concise and easy to read, we’ve just given the code snippets and a few lines of results for most of the queries below.

If you’re following along in Rodeo, a few tips as you’re getting started:

  • Run Script will indeed run everything you have written in the text editor
  • You can highlight a code chunk and run it by clicking Run Line or pressing Command + Enter
  • You can resize the panes (when I’m not making plots I shrink down the bottom right pane)

Basics

Write some SQL and execute it against your pandas DataFrame by substituting DataFrames for tables.

<code>q = """
    SELECT
        *
    FROM
        meat
    LIMIT 10;"""

print sqldf(q, locals())

#                   date  beef  veal  pork  lamb_and_mutton broilers other_chicken turkey
# 0  1944-01-01 00:00:00   751    85  1280               89     None          None   None
# 1  1944-02-01 00:00:00   713    77  1169               72     None          None   None
# 2  1944-03-01 00:00:00   741    90  1128               75     None          None   None
# 3  1944-04-01 00:00:00   650    89   978               66     None          None   None
</code>

pandasql creates a DB, schema and all, loads your data, and runs your SQL.

Aggregation

pandasql supports aggregation. You can use aliased column names or column numbers in your group by clause; both variants are shown below.

<code># births per year
q = """
    SELECT
        strftime("%Y", date)
        , SUM(births)
    FROM births
    GROUP BY 1
    ORDER BY 1;
            """

print sqldf(q, locals())

#    strftime("%Y", date)  SUM(births)
# 0                  1975      3136965
# 1                  1976      6304156
# 2                  1979      3333279
# 3                  1982      3612258
</code>
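The query above groups by column number. Since SQLite also lets you refer to an output column alias in GROUP BY, the same aggregation can be written with aliases; a small variant, shown here purely as an illustration:

<code># same aggregation, grouping on an aliased column instead of a column number
q = """
    SELECT
        strftime("%Y", date) AS year
        , SUM(births) AS total_births
    FROM births
    GROUP BY year
    ORDER BY year;
    """

print sqldf(q, locals())
</code>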

locals() vs. globals()

pandasql needs to have access to other variables in your session/environment. You can pass locals() to pandasql when executing a SQL statement, but if you’re running a lot of queries that might be a pain. To avoid passing locals all the time, you can add this helper function to your script to set globals() like so:

<code>def pysqldf(q):
    return sqldf(q, globals())

q = """
    SELECT
        *
    FROM
        births
    LIMIT 10;"""

print pysqldf(q)
# 0  1975-01-01 00:00:00  265775
# 1  1975-02-01 00:00:00  241045
# 2  1975-03-01 00:00:00  268849
</code>

joins

You can join dataframes using normal SQL syntax.

<code># joining meats + births on date
q = """
    SELECT
        m.date
        , b.births
        , m.beef
    FROM
        meat m
    INNER JOIN
        births b
            on m.date = b.date
    ORDER BY
        m.date
    LIMIT 100;
    """

joined = pysqldf(q)
print joined.head()
#date  births    beef
#0  1975-01-01 00:00:00.000000  265775  2106.0
#1  1975-02-01 00:00:00.000000  241045  1845.0
#2  1975-03-01 00:00:00.000000  268849  1891.0
</code>

WHERE conditions

Here’s a WHERE clause.

<code>q = """
    SELECT
        date
        , beef
        , veal
        , pork
        , lamb_and_mutton
    FROM
        meat
    WHERE
        lamb_and_mutton &gt;= veal
    ORDER BY date DESC
    LIMIT 10;
    """

print pysqldf(q)
#                   date    beef  veal    pork  lamb_and_mutton
# 0  2012-11-01 00:00:00  2206.6  10.1  2078.7             12.4
# 1  2012-10-01 00:00:00  2343.7  10.3  2210.4             14.2
# 2  2012-09-01 00:00:00  2016.0   8.8  1911.0             12.5
# 3  2012-08-01 00:00:00  2367.5  10.1  1997.9             14.2
</code>

It’s just SQL

Since pandasql is powered by SQLite3, you can do most anything you can do in SQL. Here are some examples using common SQL features such as subqueries, order by, functions, and unions.

<code>#################################################
# SQL FUNCTIONS
# e.g. `RANDOM()`
#################################################
q = """SELECT
    *
    FROM
        meat
    ORDER BY RANDOM()
    LIMIT 10;"""
print pysqldf(q)
#                   date  beef  veal  pork  lamb_and_mutton  broilers other_chicken  turkey
# 0  1967-03-01 00:00:00  1693    65  1136               61     472.0          None    26.5
# 1  1944-12-01 00:00:00   764   146  1013               91       NaN          None     NaN
# 2  1969-06-01 00:00:00  1666    50   964               42     573.9          None    85.4
# 3  1983-03-01 00:00:00  1892    37  1303               36    1106.2          None   182.7

#################################################
# UNION ALL
#################################################
q = """
        SELECT
            date
            , 'beef' AS meat_type
            , beef AS value
        FROM meat
        UNION ALL
        SELECT
            date
            , 'veal' AS meat_type
            , veal AS value
        FROM meat

        UNION ALL

        SELECT
            date
            , 'pork' AS meat_type
            , pork AS value
        FROM meat
        UNION ALL
        SELECT
            date
            , 'lamb_and_mutton' AS meat_type
            , lamb_and_mutton AS value
        FROM meat
        ORDER BY 1
    """
print pysqldf(q).head(20)
#                    date        meat_type  value
# 0   1944-01-01 00:00:00             beef    751
# 1   1944-01-01 00:00:00             veal     85
# 2   1944-01-01 00:00:00             pork   1280
# 3   1944-01-01 00:00:00  lamb_and_mutton     89


#################################################
# subqueries
# fancy!
#################################################
q = """
    SELECT
        m1.date
        , m1.beef
    FROM
        meat m1
    WHERE m1.date IN
        (SELECT
            date
        FROM meat
        WHERE
            beef &gt;= broilers
        ORDER BY date)
"""

more_beef_than_broilers = pysqldf(q)
print more_beef_than_broilers.head(10)
#                   date  beef
# 0  1960-01-01 00:00:00  1196
# 1  1960-02-01 00:00:00  1089
# 2  1960-03-01 00:00:00  1201
# 3  1960-04-01 00:00:00  1066
</code>

Final thoughts

pandas is an incredible tool for data analysis in large part, we think, because it is extremely digestible, succinct, and expressive. Ultimately, there are tons of reasons to learn the nuances of merge, join, concatenate, melt and other native pandas features for slicing and dicing data. Check out the docs for some examples.
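For comparison, here is roughly what the earlier inner join would look like using pandas' native merge; a hedged sketch that assumes the meat and births DataFrames and the import pandas as pd from the examples above:

<code># the earlier INNER JOIN, expressed with pandas' native merge
joined_pd = pd.merge(meat, births, on='date', how='inner')[['date', 'births', 'beef']]
print joined_pd.head()
</code>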

Our hope is that pandasql will be a helpful learning tool for folks new to Python and pandas. In my own personal experience learning R, sqldf was a familiar interface helping me become highly productive with a new tool as quickly as possible.

Python File Operations

Standard

In Python, operations on files and folders (file-manipulation functions) involve the os module and the shutil module.

Get the current working directory, i.e. the path where the current Python script runs: os.getcwd()

Return all files and directory names under the specified directory: os.listdir()

Delete a file: os.remove()

Remove several nested (empty) directories: os.removedirs(r"c:\python")

Check whether the given path is a file: os.path.isfile()

Check whether the given path is a directory: os.path.isdir()

Check whether a path is absolute: os.path.isabs()

Check whether the given path really exists: os.path.exists()

Return the directory name and file name of a path: os.path.split()     e.g. os.path.split('/home/swaroop/byte/code/poem.txt') returns ('/home/swaroop/byte/code', 'poem.txt')

Split off the file extension: os.path.splitext()

Get the directory name: os.path.dirname()

Get the file name: os.path.basename()

Run a shell command: os.system()

Read and set environment variables: os.getenv() and os.putenv()

Get the line terminator used by the current platform: os.linesep    Windows uses '\r\n', Linux uses '\n' and classic Mac OS used '\r'

Indicate the platform you are running on: os.name       it is 'nt' for Windows and 'posix' for Linux/Unix

Rename a file or directory: os.rename(old, new)

Create nested directories: os.makedirs(r"c:\python\test")

Create a single directory: os.mkdir("test")

Get file attributes: os.stat(file)

Change file permissions and timestamps: os.chmod(file)

Terminate the current process: os._exit()

Get the file size: os.path.getsize(filename)
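A minimal sketch exercising a few of the path helpers listed above (the poem.txt path is only a placeholder for illustration):

# Quick tour of a few os / os.path helpers from the list above
import os

print(os.getcwd())                          # current working directory
print(os.listdir('.'))                      # entries in the current directory

path = '/home/swaroop/byte/code/poem.txt'   # placeholder path
print(os.path.split(path))                  # ('/home/swaroop/byte/code', 'poem.txt')
print(os.path.splitext(path))               # ('/home/swaroop/byte/code/poem', '.txt')
print(os.path.dirname(path))                # '/home/swaroop/byte/code'
print(os.path.basename(path))               # 'poem.txt'
print(os.path.isabs(path))                  # True
print(os.path.exists(path))                 # False unless the file really exists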
File operations:
os.mknod("test.txt")        create an empty file (not available on Windows)
fp = open("test.txt", "w")  open a file directly; the file is created if it does not exist

About open() modes:

w     open for writing
a     open in append mode (starts at EOF; creates a new file if necessary)
r+    open for reading and writing
w+    open for reading and writing (see w)
a+    open for reading and writing (see a)
rb    open for reading in binary mode
wb    open for writing in binary mode (see w)
ab    open for appending in binary mode (see a)
rb+   open for reading and writing in binary mode (see r+)
wb+   open for reading and writing in binary mode (see w+)
ab+   open for reading and writing in binary mode (see a+)
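A small sketch contrasting a few of these modes (written with with-blocks so the files are closed automatically; the file names are only illustrative):

# 'w' truncates (or creates), 'a' appends, 'r+' reads and writes in place
with open('demo.txt', 'w') as fp:        # create or overwrite
    fp.write('first line\n')

with open('demo.txt', 'a') as fp:        # append at EOF
    fp.write('second line\n')

with open('demo.txt', 'r+') as fp:       # read and write
    print(fp.read())

with open('demo.bin', 'wb') as fp:       # binary write
    fp.write(b'\x00\x01\x02')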

 

fp.read([size])                     # size is the number of bytes to read

fp.readline([size])                 # read one line; if size is given, only part of the line may be returned

fp.readlines([size])                # return the file as a list with one line per element; internally this is implemented by calling readline() in a loop. If the size argument is given, it is the total length of content to read, which means only part of the file may be read.

fp.write(str)                       # write str to the file; write() does not add a newline after str

fp.writelines(seq)                  # write the whole content of seq to the file (many lines in one call). It writes faithfully and does not add anything after each line.

fp.close()                          # close the file. Python closes a file automatically once it is no longer used, but this is not guaranteed, so it is best to get into the habit of closing files yourself. Operating on a file after it has been closed raises a ValueError.

fp.flush()                          # write the contents of the buffer to disk

fp.fileno()                         # return a long-integer "file descriptor"

fp.isatty()                         # whether the file is a terminal device file (on Unix systems)

fp.tell()                           # return the current position of the file marker, measured from the beginning of the file

fp.next()                           # return the next line and move the file marker to the following line. When a file is used in a statement such as for ... in file, iteration is implemented by calling next().

fp.seek(offset[,whence])            # move the file marker to position offset. offset is normally counted from the beginning of the file and is normally positive, but if the whence argument is given that need not be the case: whence can be 0 (count from the start), 1 (count from the current position) or 2 (count from the end of the file). Note that if the file is opened in mode a or a+, the file marker automatically returns to the end of the file on every write.

fp.truncate([size])                 # truncate the file to at most the given size; the default is the current position of the file marker. If size is larger than the file, then depending on the system the file may be left unchanged, padded to the given size with zeros, or padded with random content.
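A short sketch of the read/seek/tell calls described above (the file name is only illustrative):

fp = open('demo.txt', 'w+')              # open for reading and writing, truncating first
fp.write('hello\nworld\n')
fp.flush()                               # push the buffer out to disk

fp.seek(0)                               # back to the beginning of the file
print(fp.readline())                     # 'hello\n'
print(fp.tell())                         # position just after the first line

fp.seek(0, 2)                            # whence=2: seek relative to the end of the file
print(fp.tell())                         # current file size in bytes

fp.truncate(5)                           # keep only the first 5 bytes
fp.close()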

 

Directory operations:
os.mkdir("file")                   create a directory
Copy a file:
shutil.copyfile("oldfile","newfile")       oldfile and newfile must both be files
shutil.copy("oldfile","newfile")           oldfile must be a file; newfile can be a file or a target directory
Copy a folder:
shutil.copytree("olddir","newdir")         olddir and newdir must both be directories, and newdir must not already exist
Rename a file (or directory):
os.rename("oldname","newname")             the same call is used for files and directories
Move a file (or directory):
shutil.move("oldpos","newpos")
Delete a file:
os.remove("file")
Delete a directory:
os.rmdir("dir")         can only delete an empty directory
shutil.rmtree("dir")    deletes a directory whether it is empty or not
Change directory:
os.chdir("path")        change the working path
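A compact sketch of the directory and shutil operations above, run inside a temporary directory so that it cleans up after itself:

# Demonstrates mkdir/copyfile/copy/copytree/rename/move/remove/rmtree/chdir
import os
import shutil
import tempfile

base = tempfile.mkdtemp()                # scratch area for the demo
os.chdir(base)

os.mkdir('dir1')                         # create a directory
with open('old.txt', 'w') as fp:
    fp.write('data')

shutil.copyfile('old.txt', 'new.txt')    # file-to-file copy
shutil.copy('old.txt', 'dir1')           # copy a file into a target directory
shutil.copytree('dir1', 'dir2')          # dir2 must not exist yet
os.rename('new.txt', 'renamed.txt')      # rename a file
shutil.move('renamed.txt', 'dir2')       # move a file into a directory

os.remove('old.txt')                     # delete a file
shutil.rmtree('dir1')                    # delete a directory and its contents
shutil.rmtree('dir2')
os.chdir('..')
os.rmdir(base)                           # base is empty now, so rmdir succeeds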

 

Examples

1. Append '_fc' to the names of all images in a folder

Python code:

# -*- coding:utf-8 -*-
import re
import os
import time
# str.split(string) splits a string
# 'sep'.join(list) joins a list into a string
def change_name(path):
    global i
    if not os.path.isdir(path) and not os.path.isfile(path):
        return False
    if os.path.isfile(path):
        file_path = os.path.split(path)    # split into directory and file name
        lists = file_path[1].split('.')    # split the file name from its extension
        file_ext = lists[-1]               # take the extension (list slicing)
        img_ext = ['bmp','jpeg','gif','psd','png','jpg']
        if file_ext in img_ext:
            os.rename(path, file_path[0]+'/'+lists[0]+'_fc.'+file_ext)
            i += 1                         # note: this i is a trap, it relies on the global above
        # alternatively:
        # img_ext = 'bmp|jpeg|gif|psd|png|jpg'
        # if file_ext in img_ext:
        #     print('ok---'+file_ext)
    elif os.path.isdir(path):
        for x in os.listdir(path):
            change_name(os.path.join(path, x))   # os.path.join() is very useful for path handling

img_dir = 'D:\\xx\\xx\\images'
img_dir = img_dir.replace('\\', '/')
start = time.time()
i = 0
change_name(img_dir)
c = time.time() - start
print('Elapsed time: %0.2f' % c)
print('Processed %s images in total' % i)

Output:

Elapsed time: 0.11
Processed 109 images in total

85 Very Useful Tools for ASP.NET Programmers

Standard

Introduction

This article lists useful tools for ASP.NET developers.

Tools

1. Visual Studio

  1. Visual Studio Productivity Power tool: An extension for Visual Studio Professional (and above) with rich features such as quick find, solution navigation, a searchable Add Reference dialog, and more.
  2. ReSharper: A productivity tool for .NET developers that improves code quality, eliminates errors by providing quick fixes, and more.
  3. MZ-Tools: It can find strings in a method, a file, a project, a solution or project group, selected text, a set of files, or a set of projects. The results are shown in its own results window, which is more convenient than the one provided by the Microsoft IDE.
  4. Web Essentials: Boosts productivity and helps write CSS, JavaScript, HTML and so on efficiently.
  5. MSVSMON: The remote debugging monitor (msvsmon.exe) is a small application that Visual Studio connects to for remote debugging. During remote debugging, Visual Studio runs on one computer (the debugger host) and the remote debugging monitor runs on the remote computer together with the application you are debugging.
  6. WIX toolset: Builds Windows installer packages from XML source code.
  7. Code digger: Code Digger is a Visual Studio 2012/2013 extension that helps you understand the behavior of your code.
  8. CodeMaid: CodeMaid is an open-source Visual Studio 2012/2013/2015 extension for cleaning up, digging into and simplifying your code.
  9. OzCode: A powerful debugger visualization tool for Visual Studio.
  10. CodeRush: A refactoring and productivity plug-in for Visual Studio.
  11. T4 Text Template: In Visual Studio, T4 text templates are used as templates for generating code files. A template is defined by writing text blocks and control logic.
  12. Indent Guides: Adds vertical lines at each indentation level.
  13. PowerShell Tools: A set of tools for developing and debugging PowerShell scripts and modules in Visual Studio 2015.
  14. Visual Studio Code: A free cross-platform editor for building and debugging modern web and cloud applications. [contributed by Cheung Tat Ming]
  15. AutoPoco: AutoPoco is a highly configurable framework for fluently building readable test data from Plain Old CLR Objects.
  16. Supercharger: An extension designed to significantly improve the Visual Studio development experience. It builds on and enhances the earlier VS10x products such as CodeMAP, Editor View Enhancer and Comments Extender, while also adding new high-quality tools.

2. ASP.NET

  1. Fiddler: Captures HTTP requests/responses and simulates request behavior.
  2. AutoMapper: Object-to-object mapping. For example, the tool can be used to map entity objects to domain objects instead of writing manual mapping code.
  3. Unity/Ninject/Castle Windsor/StructureMap/Spring.Net: Dependency injection frameworks. There are many DI frameworks available.
  4. .NET Reflector: A .NET assembly decompiler.
  5. dotPeek: A .NET assembly decompiler.
  6. ILSpy: A .NET assembly decompiler.
  7. memprofiler: A powerful tool for finding memory leaks and optimizing memory usage.
  8. PostSharp: Removes repetitive coding and prevents code bloat caused by cross-cutting concerns, using aspect-oriented programming.
  9. ASPhere: A web.config editor with a GUI.

3. REST API

  1. Swagger UI: An API testing and documentation tool. [video]
  2. PostMan: A REST client Chrome extension. [contributed by Cheung Tat Ming]

4. WCF

  1. SOAP UI: An API testing tool that supports all standard protocols and technologies.
  2. WireShark: A network protocol analyzer for Unix and Windows. It can capture traffic at the TCP layer and helps you inspect the SOAP envelopes.
  3. Svc TraceViewer: Gives a better view of the huge trace files produced by WCF.
  4. Svc Config Editor: A GUI tool for managing WCF-related configuration.

5. MSMQ

  1. QueueExplorer 3.4: Copy, move or delete messages, save and load them, run stress tests, view and edit full message bodies (with special support for .NET serialized objects), and much more that you can do with MSMQ.

6. LINQ

  1. LINQ Pad: LINQPad is a lightweight tool for testing LINQ queries against a SQL Server database. It can also test code snippets written in different .NET languages such as C#, VB and so on.
  2. LINQ Insight: LINQ Insight Express is a Visual Studio add-in that lets you analyze your LINQ queries at design time and simplifies writing and debugging them.

7. RegEx

  1. RegEx tester: A Visual Studio extension for testing regular expressions.
  2. regexr: An online tool for developing and testing regular expressions.
  3. regexpal: An online tool for developing and testing regular expressions.
  4. Expresso: Expresso is a desktop tool for developing and testing regular expressions.
  5. RegexMagic: A tool that generates regular expressions automatically from text patterns. The user trains the pattern by marking substrings and choosing different options, and the regular expression is generated from that. The tool can also generate the corresponding code for different languages. [contributed by Samuel Christison]

8. JavaScript / jQuery / AngularJS

  1. JSHint: A JavaScript code-quality tool. There is also JSLint, which enforces stricter rules.
  2. JSFiddle: Provides an in-browser environment for testing HTML, CSS and JavaScript/jQuery.
  3. Protractor: An end-to-end framework for testing Angular applications.
  4. Batarang: Adds tools for debugging and profiling AngularJS applications.

9. SQL Server

  1. SQL Profiler: SQL tracing for monitoring an instance of the Database Engine.
  2. ExpressProfiler: ExpressProfiler (aka SqlExpress Profiler) is a simple and fast replacement for SQL Server Profiler with a basic GUI. It can be used with both Express and non-Express editions of SQL Server 2005/2008/2008R2/2012/2014. [contributed by RickZeeland]
  3. SQL Sentry Plan explorer: A tool that provides a better graphical view of SQL query execution plans.
  4. SQL Complete: Provides IntelliSense-style completion and an improved SQL formatter for SQL Server Management Studio and Visual Studio.
  5. NimbleText: A text-manipulation and code-generation tool.
  6. Query Express: A lightweight SQL query analyzer.
  7. IO Meter: Provides details about the IO subsystem.
  8. sqldecryptor: Decrypts SQL Server objects such as stored procedures, functions, triggers and views that were encrypted with the ENCRYPTION option.
  9. SpatialViewer: View and create spatial data.
  10. ClearTrace: Imports trace and profiler files into SQL Server and displays summarized performance information.
  11. Internals Viewer for SQL Server: Internals Viewer is a tool for looking into the SQL Server storage engine and seeing how data is physically allocated, organized and stored.
  12. PAL: Reads performance logs and analyzes them using known thresholds.
  13. sqlquerystress: An assistant for performance stress testing of T-SQL queries and routines.

10. NHibernate

  1. NHibernate Mapping Generator: Generates NHibernate mapping files and corresponding entity classes from existing DB tables.

11. Tally

  1. Tally ERP 9
  2. Tally dll: A dynamic link library for .NET used to integrate the Tally Accounting software so that data can be pushed and pulled programmatically.

12. Code Review

  1. StyleCop: StyleCop is a static code analysis tool that enforces a configured set of style and consistency rules on your C# source code. It can be run from inside Visual Studio or integrated into an MSBuild project.
  2. FxCop: FxCop is a static code analysis tool that enforces development standards by analyzing .NET assemblies.

13. Traffic Capture

  1. WireShark: A network protocol analyzer for Unix and Windows. It can capture traffic at the TCP layer.
  2. HTTP Monitor: Lets developers view all the HTTP traffic between your computer and the Internet. This includes the request data (such as HTTP headers and GET and POST form data) and the response data (including the HTTP headers and body).

14. Diagnostics

  1. Glimpse: Provides server-side diagnostic data. For example, for an ASP.NET MVC project you add it from NuGet. Glimpse data can tell you the latency at different levels and point to the areas where you can optimize your code/solution to improve performance.

15. Performance

  1. PerfMon: Monitors system performance using performance counters.
  2. yslow: YSlow analyzes web pages and explains why they are slow, based on Yahoo!'s rules for high-performance websites.

16. Code Converters

  1. Telerik Code Converter: A C# to VB and VB to C# code converter. It is an online editor, but you can choose 'Batch Converter' and upload files in a zip archive.

17. Data Extraction and Loading

  1. FileHelpers: A .NET library for importing/exporting data from fixed-length or delimited records in files, strings or streams.
  2. LogParser: You can write SQL-style queries against a variety of log files and export the data to a variety of destinations such as SQL tables or CSV files.

18. Screen Recording

  1. Wink: Presentation-creation software. Using Wink you can capture screenshots, add explanations and annotations, and create demos.

19. Text Editors

  1. Notepad++: A source code editor.
  2. Notepad2: A lightweight yet feature-rich Notepad-like text editor.
  3. sublimetext: A feature-rich text editor.

20. Documentation

  1. GhostDoc: GhostDoc is a Visual Studio extension that automatically generates XML documentation comments for methods and properties based on their types, parameters, names and other contextual information.
  2. helpndoc: helpndoc is a tool for creating help files. It can generate files in different formats from a single source.

21. Others

  1. FileZilla: FileZilla is a free FTP solution. FileZilla Client is used for FTP file uploads, and FileZilla Server for file sharing.
  2. TreeTrim: TreeTrim is a tool for trimming source code trees. It removes debug files, source control bindings and temporary files.
  3. BrowserStack: Cross-browser website testing.
  4. Firebug: A feature-rich Firefox add-on for CSS, HTML and JavaScript development on generated web pages.
  5. BugShooting: Screen capture software that takes screenshots and attaches them to work items, bugs, issue-tracking items and so on.
  6. Web developer checklist: Helps ensure web development best practices.
  7. XRAY: A Firefox add-on; a feature-rich bookmarklet that provides information about the elements of a web page.
  8. PowerGUI: Helps you quickly adopt and use PowerShell to efficiently manage your entire Windows environment.
  9. Beyond Compare: Allows comparison of directory trees and individual file contents. Highly adaptable, with plug-ins for popular languages. [contributed by Ron Matuszek]
  10. Devart Codecompare: A file diff tool that understands the structure of C#, C++ and VB code. Includes a folder comparison tool, a standalone app for comparing and merging folders and files, and code review support. [contributed by Cheung Tat Ming]

Warning

Please verify these tools before using them.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

 

Translation: http://www.codeceo.com/article/85-aspnet-tools.html
Original English article: Useful Tools for ASP.NET Developers