SQL Query Optimization and Performance Guide


Windows Settings

Storage on 2 Physical disks – standard solution

Storage on 3 Physical disks – extended solution

Storage on 4 physical disks – optimal solution

Storage on external hard disk or flash memory

Database Auto growth

Index Management

Stored procedures

Cursors

Query optimization

Scheduled Maintenance Plan

Check disk usage by top tables

Windows Settings

  • Adjust performance for background services
  • Configure the size of the Windows paging file (virtual memory) to be twice the size of physical memory
  • Turn off the System Protection feature for all disks except C
  • Disable unneeded SQL Server services
  • Configure a weekly defragmentation schedule for all disks

Storage on 2 Physical disks – standard solution

  • Store log file on C
  • Store data file on the second disk

Storage on 3 Physical disks – extended solution

  • Store the data file on the second disk
  • Store the log file on the third disk
  • Store the Windows paging file (virtual memory) on C

Storage on 4 physical disks – optimal solution

  • Store the Windows paging file on C
  • Store the primary data file on the second disk
  • Store the log file on the third disk
  • Create a secondary data file on the fourth disk to store indexes

    When creating indexes, select the secondary filegroup
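As a sketch, the secondary filegroup and file can be created as follows. The database, file path, table, and index names here are illustrative, not from the original guide:

```sql
-- Illustrative names and paths; adjust to your environment
ALTER DATABASE MyDatabase ADD FILEGROUP SecondaryFG;

ALTER DATABASE MyDatabase
ADD FILE (
    NAME = MyDatabase_Indexes,
    FILENAME = 'F:\Data\MyDatabase_Indexes.ndf',
    SIZE = 100MB,
    FILEGROWTH = 100MB
) TO FILEGROUP SecondaryFG;

-- Place an index on the secondary filegroup
CREATE NONCLUSTERED INDEX IX_Person_LastName
    ON Person (LastName)
    ON SecondaryFG;
```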

Storage on external hard disk or flash memory

  • Not allowed
  • This will disable all optimizations performed by SQL Server
  • This will limit read/write speed to roughly 25 Mbit/s for an external hard disk, and even slower for flash memory

Database Auto growth

  • Do not set auto growth to a percentage. Use a static value instead (100 MB is a good option)
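A static growth increment can be set as in the following sketch (the database and logical file names are illustrative):

```sql
-- Grow the data file in fixed 100 MB steps instead of a percentage
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 100MB);
```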

Index Management

  • Create non-clustered indexes only for tables whose rate of SELECT is much larger than their rate of INSERT and UPDATE
  • Do not create indexes on all table columns
  • For indexes created on base tables (for example Person, Admission, AdmissionRequestDetail), set the index fill factor to 70 instead of the default. This will enhance INSERT performance.
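As a sketch, the fill factor is specified when the index is created or rebuilt (table, column, and index names are illustrative):

```sql
CREATE NONCLUSTERED INDEX IX_Person_LastName
    ON Person (LastName)
    WITH (FILLFACTOR = 70);

-- Or when rebuilding an existing index
ALTER INDEX IX_Person_LastName ON Person
    REBUILD WITH (FILLFACTOR = 70);
```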
  • Use the following query to analyze index fragmentation. You should “reorganize” indexes when the External Fragmentation value for the index is between 10-15 and the Internal Fragmentation value is between 60-75. Otherwise, you should rebuild indexes.

    SELECT si.name AS IndexName,
        dt.avg_fragmentation_in_percent AS ExternalFragmentation,
        dt.avg_page_space_used_in_percent AS InternalFragmentation
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') dt
    INNER JOIN sys.indexes si
        ON si.object_id = dt.object_id AND si.index_id = dt.index_id
    WHERE dt.index_id <> 0
        AND dt.avg_fragmentation_in_percent > 10
        AND dt.avg_page_space_used_in_percent < 75
    ORDER BY avg_fragmentation_in_percent


  • Create non-clustered indexes only for columns that appear in:
    • Where clause
    • Order By clause
    • Join clause
    • Distinct
    • All foreign keys
  • Avoid indexing small tables
  • Create indexed views for columns used by LINQ queries in the where clause when the index does not exist on the table, but do not create the index on both the view and the table.
  • Do not use the Year(), Month(), or Day() functions on a column in the where clause, even if the column has an index. These and similar functions prevent the index from being used and cause a full table scan, so change the query to make use of the index. Example:
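A minimal sketch of such a rewrite (table and column names are illustrative):

```sql
-- Bypasses the index: the function is applied to the indexed column
SELECT PersonID FROM Admission WHERE YEAR(AdmissionDate) = 2012

-- Uses the index: the column is compared against a date range
SELECT PersonID FROM Admission
WHERE AdmissionDate >= '20120101' AND AdmissionDate < '20130101'
```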



  • Use the index usage report to check whether there are unused indexes on the table in order to delete them. If the number of seeks is 0, the index can be deleted.
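The indexed-view suggestion above can be sketched as follows. The names are illustrative; note that SCHEMABINDING and two-part table names are required before a view can be indexed:

```sql
CREATE VIEW dbo.vPersonByCity WITH SCHEMABINDING
AS
SELECT PersonID, City FROM dbo.Person
GO
-- The first index on a view must be unique and clustered
CREATE UNIQUE CLUSTERED INDEX IX_vPersonByCity
    ON dbo.vPersonByCity (PersonID)
```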

Stored procedures

  • Do not name the procedure sp_something. The sp_ prefix will cause a delay when the procedure executes, because SQL Server first searches the master database for it.
  • Use SET NOCOUNT ON at the top of the procedure to avoid an additional round trip to the server
  • Try to avoid using EXEC of dynamic SQL statements
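A minimal procedure skeleton following these points (the procedure, table, and column names are illustrative):

```sql
CREATE PROCEDURE GetPersonById
    @PersonId int
AS
BEGIN
    -- Suppress "(n row(s) affected)" messages to save a round trip
    SET NOCOUNT ON;

    SELECT FirstName, LastName
    FROM Person
    WHERE PersonId = @PersonId;
END
```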


Cursors

  • Do not use a cursor to iterate over a set of records. Creating and using a cursor is very expensive and resource consuming. Instead, use a while loop with a defined upper bound. This option is not available for SQL 2000. Example:

    DECLARE @i int, @count int
    SET @i = 1

    SELECT @count = COUNT(*)
    FROM SubDepartment

    WHILE (@i <= @count)
    BEGIN
        -- process the row at position @i here
        SET @i = @i + 1
    END

Query optimization

  • Do not use OR in the where clause. Instead, use UNION ALL to implement OR functionality, but only if there is an index on the column used in the OR operator. If there is no index, this optimization will slow down performance by doing multiple table scans. If there is an index, this approach will speed up the query. Example:

    -- Instead of:
    SELECT * FROM Person WHERE Person.Sex = 0 OR Person.Sex = 1

    -- Use:
    SELECT * FROM Person WHERE Person.Sex = 0
    UNION ALL
    SELECT * FROM Person WHERE Person.Sex = 1

  • Avoid using inline queries; use joins instead
  • Avoid using inline functions in the query. Instead, create a precomputed column that stores the function formula for selection in the query. This may not be applicable in all cases.
  • Queries containing NOT IN will have poor performance because the optimizer needs to use nested table scans to perform this activity. This can be avoided by using EXISTS or NOT EXISTS.
  • When you have the choice between an IN and a BETWEEN clause in the query, always use BETWEEN. This will speed up the query.
  • When there are multiple conditions in the where clause separated by the AND operator, order them from least likely true to most likely true if the expected result is most likely true; otherwise, order them from most likely false to least likely false. In other words, the most selective condition (the one that eliminates the most records) must appear first in the where clause.
  • For queries that require immediate feedback to the user, use the FAST n option to immediately return the first n records while the query continues to fetch the remaining records. This option can be used in procedures and views, but it does not apply to LINQ queries because .ToList() ignores it. Example (appended to the end of the query):

    OPTION (FAST 100)
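The NOT IN and IN/BETWEEN advice above can be sketched as follows (table and column names are illustrative):

```sql
-- Instead of NOT IN:
SELECT PersonID FROM Person
WHERE PersonID NOT IN (SELECT Person_FK FROM Admission)

-- Prefer NOT EXISTS:
SELECT p.PersonID FROM Person p
WHERE NOT EXISTS (SELECT 1 FROM Admission a WHERE a.Person_FK = p.PersonID)

-- Prefer BETWEEN over IN for a contiguous range:
SELECT PersonID FROM Person WHERE PersonID BETWEEN 100 AND 200
```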

Scheduled Maintenance Plan

  • Use a daily maintenance plan to perform daily cleanup, update statistics, check integrity, rebuild indexes, reorganize indexes, shrink database files and perform a full backup.
  • Do not include system databases in the plan. Select the option “User databases only”
  • Make sure SQL Server Agent startup option is set to Automatic
  • Schedule the plan to run daily at night. 1am is a good option.
  • The below plan is standard and can be used for most cases.

Check disk usage by top tables

  • Monitor disk usage by most used tables. These will be the target for potential indexes.
  • Use this approach to check if the application is performing repeated unnecessary reads

Data mining in SQL Server 2008 & Visual Studio


Creating a Project in the Business Intelligence Development Studio

Follow these steps to create a new project. To start BIDS, click the Start button and go to All Programs -> Microsoft SQL Server 2008 -> SQL Server Business Intelligence Development Studio. In BIDS, select File -> New -> Project. You will see the Business Intelligence Projects template. Click the Analysis Services Project template. Type “AnalysisServices2008Tutorial” as the project name, select the directory in which you want to create this project, and click OK to create the project.

The Solution Explorer Pane

The Solution Explorer contains the following:

1) Data source objects: They contain details of a connection to a data source, which include server name, catalog or database name, and login credentials. You establish connections to relational servers by creating a data source for each one.

2) Data Source Views: When working with a large operational data store you don’t always want to see all the tables in the database. With Data Source Views (DSVs), you can limit the number of visible tables by including only the tables that are relevant to your analysis.

3) Cubes: A collection of measure groups (from the fact tables) and a collection of dimensions form a cube. Each measure group is composed of a set of measures. Cubes can have more than three dimensions; they are not necessarily the three-dimensional objects their name suggests.

4) Dimensions: They are the set of tables that are used for building the cube. Attributes that are needed for the analysis task are selected from each table.

5) Mining Structures: Data mining is the process of analyzing raw data using algorithms that help discover interesting patterns not typically found by ad-hoc analysis. Mining Structures are objects that hold information about a data set. A collection of mining models forms a mining structure. Each mining model is built using a specific data mining algorithm and can be used for analyzing patterns in existing data or predicting new data values.


The Properties Pane


If you click an object in the Solution Explorer, the properties for that object appear in the Properties pane. Items that cannot be edited are grayed out. If you click a particular property, the description of that property appears in the Description pane at the bottom of the Properties pane.




Data mining in SQL Server 2008


The data mining process is regarded as a series of steps to be followed which include the following:

1) Creating a Data Source:

Cubes and dimensions of an Analysis Services database must retrieve their data values from tables in a relational data store. This data store, typically part of a data warehouse, must be defined as a data source.

To create a data source, follow these steps:

a) Select the Data Sources folder in the Solution Explorer.

b) Right – click the Data Sources folder and click New Data Source. This launches the Data Source Wizard.

c) In the Data Source Wizard you provide the connection information about the relational data source that contains the “Adventure Works DW 2008” database. Click the New button under Data Connection Properties to specify the connection details. Here you enter the server name and the database name, and choose one of the two authentication modes: SQL Server authentication or Windows authentication.

d) In the Impersonation Information page you need to specify the impersonation details that Analysis Services will use to connect to the relational data source. There are four options. You can provide a domain username and password to impersonate or select the Analysis Service instance’s service account for connection. The option Use the credentials of the current user is primarily used for data mining where you retrieve data from the relational server for prediction. If you use the Inherit option, Analysis Services uses the impersonation information specified for the database.

e) On the final page, the Data Source Wizard chooses the relational database name you have selected as the name for the data source object you are creating. You can choose the default name specified or specify a new name here.

2) Creating a Data Source View ( DSV )

The Adventure Works DW database contains 25 tables. The cube you build in this chapter uses 10 tables. Data Source Views give you a logical view of the tables that will be used within your OLAP database.

To create a Data Source View, follow these steps:

a) Select the Data Source Views folder in the Solution Explorer.

b) Right – click Data Source Views and select New Data Source View. This launches the Data Source View Wizard.

c) In the Data Source View Wizard you can select the tables and views that are needed for the Analysis Services database you are creating. Click the > button so that the tables move to the Included Objects list. We will include the following set of tables in the data source view:

FactInternetSales, FactResellerSales, DimProduct, DimReseller, DimPromotion, DimCurrency, DimEmployee, DimSalesTerritory, DimTime, DimCustomer, DimGeography.

d) At the final page of the DSV Wizard you can specify your own name for the DSV object or use the default name. Specify the “Adventure Works DW” for the DSV Name in the wizard and click Finish.

If you open the data source view in the Solution Explorer, the Data Source View editor opens. It contains three main areas: the Diagram Organizer, the Tables view, and the Diagram view. In the Diagram view you can see a diagram of all the added tables with the relationships among them. In the Tables view you can see all the tables contained in this data source view. In the Diagram Organizer you can right-click in the pane to create a new diagram and drag and drop the tables that you wish to add, or simply add any table you want, then right-click on it and choose Add Related Tables; this will add all the tables related to the chosen table. To add a new field to a given table, right-click on the table in the Diagram view and choose New Named Calculation; a dialog will appear where you can enter the name of the new field and the formula from which it is derived. For example, to add a new field named FullName to the Employee table, you write the following formula: FirstName + ‘ ‘ + MiddleName + ‘ ‘ + LastName.

There are different layouts in the data source view. You can switch between rectangular layout and diagonal layout in the DSV by right – clicking in the DSV Designer and selecting the layout type of your choice.

To see a sample of the data specified by your DSV, right – click a table in the DSV Designer and select Explore Data. The data presented is only a subset of the underlying table data. By default the first 5,000 rows are retrieved and shown within this window. You can change the number of rows retrieved by clicking the Sampling Options button. Clicking the Sampling Options button launches the Data Exploration Options dialog where you can change the sampling method, sample count, and number of states per chart, which is used for displaying data in the chart format.

When you click the Pivot Table tab you get an additional window called PivotTable Field List that shows all the columns of the table. You can drag and drop these columns inside the pivot table in the row, column, details, or filter areas. The values in the row and column provide you with an intersection point for which the detailed data is shown.

3) Creating New Dimensions

Dimensions help you define the structure of your cube so as to facilitate effective data analysis. Specifically, dimensions provide you with the capability of slicing data within a cube, and these dimensions can be built from one or more dimension tables.

a) Create the DimGeography dimension:

 Launch the Dimension Wizard by right – clicking Dimensions in the Solution Explorer and selecting New Dimension.

 In the Select Creation Method screen select the “Use an existing table” option and click next.

 In the Specify Source Information page, you need to select the DSV for creating the dimension, select the main table from which the dimension is to be designed, specify the key columns for the dimension, and optionally specify a name column for the dimension key value. By default, the first DSV in your project is selected. Because the current project has only one DSV (the Adventure Works DW DSV), it is selected. Select the DimGeography table from the Main table drop-down list.

 Click the Next button to proceed to the next step in the Dimension Wizard.

 The Dimension Wizard now analyzes the DSV to detect any outward – facing relationships from the DimGeography table. An outward – facing relationship is a relationship between the DimGeography table and another table, such that a column in the DimGeography table is a foreign key related to another table. The Select Related Tables screen shows that the wizard detected an outward relationship between the DimGeography table and the DimSalesTerritory table. In this example you will be modeling the DimGeography table as a star schema table instead of snowflake schema. Deselect the DimSalesTerritory table and click next.

 The Select Dimension Attributes screen of the Dimension Wizard displays the columns of the main table that have been selected for the dimension you’re creating.

 Select all the attributes of the DimGeography table (all the attributes in the screen), leave their Attribute Type as Regular, allow them to be browsed, and click next.

 The final screen of the Dimension Wizard shows the attributes that will be created for the dimension based on your choices in the wizard. Click the Finish button.

Open the DimGeography dimension by double-clicking it in the Solution Explorer. In the Dimension Structure tab you can see all the table attributes that have been added to this dimension. In the Hierarchies pane, drag and drop the English Country Region Name attribute, followed by State Province Name, then City, and then Postal Code. Next, build the relationships among these attributes in the hierarchy by clicking the Attribute Relationships tab and dragging the Postal Code attribute toward City (meaning that the postal code value determines the city). Drag City toward State, and State toward Country. This builds the functional dependencies among the attributes in the hierarchy. Then ensure that the city value is unique in determining the state name by setting the KeyColumns property of the City attribute to both State Province Code and City, and setting its NameColumn to the City attribute. Similarly, set the KeyColumns of the Postal Code attribute to the Postal Code, City, and State Province Code attributes, and set its NameColumn to Postal Code.

Deploy the project by right-clicking the project name and choosing Deploy. After a successful deployment, you can browse the dimension by selecting the Browser tab, where you can see all the data of the DimGeography table arranged according to their hierarchical levels.

b) Create the DimTime dimension

 Launch the Dimension Wizard by right – clicking Dimensions in the Solution Explorer and selecting New Dimension. When the welcome screen of the Dimension Wizard opens up, click next.

 In the Select Creation Method page of the wizard, select the “Use an existing table” option and click next.

 In the Specify Source Information page, select DimTime as the main table from which the dimension is to be designed and click next.

  In the Select Dimension Attributes page, in addition to the Date Key attribute, enable the checkboxes for the following attributes: Calendar Year, Calendar Semester, Calendar Quarter, English Month Name, and Day Number of Month.

 Set the Attribute Type for the “Calendar Year” attribute to Date Calendar Year.

 Set the Attribute Type for the “Calendar Semester” attribute to Date Calendar Half Year.

 Set the Attribute Type for the “Calendar Quarter” attribute to Date Calendar Quarter.

 Set the Attribute Type for the “English Month Name” attribute to Date Calendar Month.

 Set the Attribute Type for the “Day Number of Month” attribute to Date Calendar Day of Month.

 Create a multilevel hierarchy Calendar Date with the levels Calendar year, Calendar Semester, Calendar Quarter, Month (rename English Month Name), and Day (rename Day Number Of Month).

 Save the project and deploy it to the analysis services instance.

 Switch to the Browser pane of the DimTime dimension, where you can see that the date hierarchy is arranged according to the hierarchy that we defined above.

c) Create the DimEmployee dimension

 Launch the Dimension Wizard by right – clicking Dimensions in the Solution Explorer and selecting New Dimension. If the welcome screen of the Dimension Wizard opens up, click next.

 Make sure the “Use an existing table” option is selected and click next.

 In the Specify Source Information page, select DimEmployee as the main table from which the dimension is to be designed and click next.

 On the Select Related Tables screen, uncheck the DimSalesTerritory table and click next.

 In the Select Dimensions Attributes dialog, the Dimension Wizard has detected three columns of the DimEmployee table to be included as attributes. The Dimension Wizard will select columns if they are either the primary key of the table or a foreign key of the table or another table in the DSV. The attributes suggested by the Dimension Wizard in this example are the key attribute Employee Key, the parent – child attribute Parent Employee Key, and the Sales Territory Key, which is a foreign key column to the DimSalesTerritory table.

 Select all the columns of the DimEmployee table as attributes and click next.

 Double – click the DimEmployee dimension in the Solution Explorer to open the Dimension Designer.

 Change the NameColumn property of the key attribute of the DimEmployee dimension to FullName and deploy the project to your Analysis Services instance.

When you browse the Parent – Child hierarchy, you will see the members of the hierarchy showing the full names of the employees.

4) Creating a Cube Using the Cube Wizard

Cubes are the principal objects of an OLAP database that help in data analysis. Cubes are multidimensional structures that are primarily composed of dimensions and facts. The data from a fact table that is stored within the cube for analysis are called measures.

To build a new cube, follow these steps:

a) Right – click the Cubes folder and select New Cube. Click next on the introduction page to proceed.

b) In the Select Creation Method page you have the option to build a cube from existing tables, create an empty cube, or create a cube based on a template and generate new tables in the data source. Choose to build the cube from the existing tables in the Adventure Works DW data source. Click Next to proceed to the next step in the Cube Wizard.

c) The next page of the Cube Wizard is the Measure Group Tables selection page. You now must select one or more tables that will serve as fact tables for your measure groups. Click the Suggest button to have the Cube Wizard scan the DSV and automatically select candidate measure group tables. Any table that has an outgoing relationship is identified as a candidate fact table, whereas a table that has an incoming relationship is detected as a dimension table. Select both FactResellerSales and FactInternetSales as the fact tables, and then select the measures that you need to include from these fact tables for the analysis task.

d) In the Select Existing Dimensions page, the Cube Wizard displays a list of all existing dimensions defined in the project. Accept the selection of all the dimensions and click next.

e) The Cube Wizard asks you to select any new dimensions to be created from existing tables in the data source that are not already used for dimensions in the project. You can deselect dimensions that are not needed for your cube on this page. This illustration will use the Fact tables only as measure groups and not for dimensions. Deselect the Fact Reseller Sales and Fact Internet Sales dimensions on this page and click next.

f) In the final page of the Cube Wizard you can specify the name of the cube to be created and review the measure groups, measures, dimensions, attributes, and hierarchies. Use the default name Adventure Works DW suggested by the Cube Wizard and click Finish.

After creating the cube, the new dimensions are automatically created. But these dimensions will have only their primary and foreign keys selected. You have to open each created dimension and select the attributes that you need to add from each table.

g) Press F5 to deploy, build and process the cube. Deploying the cube means building the cube according to the structure that you have defined, while processing the cube means computing all the aggregation values for all the cells in the cube.

You can add a new calculated measure to the cube by Right – clicking in the Script Organizer pane of the Calculation Scripts tab and entering the formula for this new measure.

Now that the cube has been deployed, switch the BIDS Cube Designer view to the Browser page. In the Browser page you will see three panes: a Measure Group pane, a Filter pane, and a Data pane. Suppose you want to analyze the Internet sales of products based on the promotions offered to customers and the marital status of those customers. First you would need to drag and drop [DimPromotion].[English Promotion Type] from the Measure Group pane to the OWC rows area. Next, drag and drop [Dim Customer].[Marital Status] from the Measure Group pane to the OWC columns area. Finally, drag and drop the measure [Sales Amount] from the Fact Internet Sales measure group to the Drop Totals or Detail Fields Here area of the OWC pane.

You can also use MDX queries to query the cube. Just as SQL (Structured Query Language) is a query language used to retrieve data from relational databases, MDX (Multi-Dimensional Expressions) is a query language, similar in form to SQL, used to retrieve data from multidimensional databases.

The format of MDX query is shown below:

SELECT [<axis expression>, [<axis expression>...]]

FROM [<cube_expression>]

[WHERE [slicer_expression]]
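As a hedged example against the cube built in this tutorial, the following sketch shows the format in use. The measure, hierarchy, and member names are assumptions based on the dimensions created above, so verify them against your own cube:

```
SELECT [Measures].[Sales Amount] ON COLUMNS,
       [DimPromotion].[English Promotion Type].MEMBERS ON ROWS
FROM [Adventure Works DW]
WHERE ([Dim Customer].[Marital Status].&[M])
```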

5) Creating a Mining Structure

Analysis Services 2008 provides nine data mining algorithms that can be utilized to solve various business problems. These algorithms can be broadly classified into five categories based on the nature of the business problem they can be applied to. They are:

1) Classification

2) Regression

3) Segmentation

4) Sequence analysis

5) Association

We aim at grouping customers that share similar characteristics.

To create a relational mining model, follow the following steps:

a) Right-click the Mining Structures folder in the Solution Explorer and select New Mining Structure to launch the Data Mining Wizard, which helps you create data mining structures and models. Click the Next button.

b) Select the “From existing cube” radio button and click next.

c) Select Microsoft Clustering and click next.

d) Choose the Customer table as the primary table and enter the following attributes as inputs for building clusters:

Age, Yearly Income, Number of cars owned, Number of Children at home and Occupation.

You will now see the clustering mining model represented as several nodes with lines between them. By default the clustering mining model groups the customers into ten clusters. The number of clusters generated can be changed via a property of the clustering mining model. Each cluster is shown as a node in the cluster viewer. Darker shading on a node indicates that the cluster favors a specific input column, and vice versa. If there is a similarity between two clusters, it is indicated by a line connecting the two nodes; similar to the node shading, a stronger relationship between two nodes is indicated by a darker line. You can move the slider on the left of the cluster diagram from All Links to Strongest Links; as you do, the weaker relationships between the clusters are no longer displayed. You can change a cluster name by right-clicking the cluster and selecting Rename. You can select desired input columns of the mining model from the Shading Variable drop-down to see the effect of the column on the various clusters. When you choose a specific shading variable column you need to choose one of the states of the column to be used as the shading variable for the clusters.

The Cluster Profiles view shows the relationship between the mining columns of the model and the clusters in a matrix format. The intersection cell of a specific column and a cluster shows a histogram bar of the various values of the column that are part of the cluster. The size of each bar reflects the number of items used to train the model.

The cluster Characteristics tab shows the characteristics of a single cluster and how the various states of the input columns make up the cluster.

The Cluster Discrimination tab shows the characteristics of a Cluster in comparison with the characteristics of the complement of this Cluster.

This row already belongs to another table

This ArgumentException occurs when you copy a DataRow from one DataTable to another.

The error occurs because the same DataRow object cannot be added to different DataTables. The best solution is the DataTable.ImportRow method, which creates an exact copy of the row and adds it to the other table. However, in order to use this method, both tables must have the same schema, which is achieved by calling DataTable.Clone() to copy the schema only, or DataTable.Copy() to copy both data and schema. The code below shows how to do this:

Dim dt As New DataTable

'dt = FillRowFromDataBase

Dim dtClone As DataTable = dt.Clone()

For Each row As DataRow In dt.Rows
    dtClone.ImportRow(row)
Next



Naming Conventions
All T-SQL keywords must be upper case.
All declared variable names must be camelCase, while all stored procedure names, function names, trigger names, table names, and column names in queries must be PascalCase.
All view names must start with the letter ‘v’ followed by the name of the view in PascalCase

DECLARE @minSalary int

If you are creating a table belonging to a specific module, make sure to prefix the name of each table with a 3-character module code, example:


Note that all table names must be singular.
When creating columns, make sure to append ‘_F’ to the end of each column you intend to use as a flag. If there are exactly two statuses for the flag, use the ‘bit’ data type; if there are 3 or more statuses, use the ‘char(1)’ data type. If the column is a foreign key reference, append ‘_FK’ to the end of the column name. This makes it easy to distinguish flag and foreign key columns:

FirstName varchar(max),
Sex_F BIT,
Person_FK int,
Status_F CHAR(1)

Declaring Variables
Always declare variables at the top of your stored procedure and set their values directly after declaration. If your database runs on SQL Server 2008, you can declare and set a variable on the same line. The first statement below applies under SQL 2000/SQL 2005, and the second under SQL 2008, which adds standard programming-language semantics for short assignment of values:

-- SQL 2000 / SQL 2005
DECLARE @i int
SET @i = 1
SET @i = @i + 1

-- SQL 2008
DECLARE @i int = 1
SET @i += 1

Select Statements
Do not use SELECT * in your queries. Always write the required column names after the SELECT statement. This technique results in reduced disk I/O and better performance:

SELECT CustomerID, CustomerFirstName, City FROM Customer

If you need to write a SELECT statement to retrieve data from a single table, don’t SELECT the data from a view that points to multiple tables. Instead, SELECT the data from the table directly, or from a view that only contains the table you are interested in. If you SELECT the data from the multi-table view, the query will experience unnecessary overhead, and performance will be hindered.

Try to avoid server side cursors as much as possible. Always stick to a ‘set-based approach’ instead of a ‘procedural approach’ for accessing and manipulating data. Cursors can often be avoided by using SELECT statements instead.
If a cursor is unavoidable, use a WHILE loop instead. A WHILE loop is always faster than a cursor. But for a WHILE loop to replace a cursor you need a column (primary key or unique key) to identify each row uniquely.
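A sketch of this pattern, stepping through rows by primary key (it assumes an Employees table with a unique int key EmpID, as in the example further below):

```sql
DECLARE @id int
SELECT @id = MIN(EmpID) FROM Employees

WHILE @id IS NOT NULL
BEGIN
    -- process the single row identified by @id here

    -- advance to the next key value
    SELECT @id = MIN(EmpID) FROM Employees WHERE EmpID > @id
END
```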

Wildcard Characters
Try to avoid wildcard characters at the beginning of a word when searching using the LIKE keyword, as that results in an index scan, which defeats the purpose of the index. The first of the following statements results in an index scan, while the second results in an index seek:

SELECT EmployeeID FROM Locations WHERE FirstName LIKE '%li'
SELECT EmployeeID FROM Locations WHERE FirstName LIKE 'a%i'

Not Equal Operators
Avoid searching using not equals operators (<> and NOT) as they result in table and index scans.

Derived Tables
Use ‘Derived tables’ wherever possible, as they perform better. Consider the following query to find the second highest salary from the Employees table:

SELECT MIN(Salary) FROM Employees WHERE EmpID IN (SELECT TOP 2 EmpID FROM Employees ORDER BY Salary Desc)

The same query can be re-written using a derived table, as shown below, and it often performs considerably faster than the above query:

SELECT MIN(Salary) FROM (SELECT TOP 2 Salary FROM Employees ORDER BY Salary DESC) AS TopSalaries
This is just an example, and your results might differ in different scenarios depending on the database design, indexes, volume of data, etc. So, test all the possible ways a query could be written and go with the most efficient one.

SQL Batches
Use SET NOCOUNT ON at the beginning of your SQL batches, stored procedures and triggers in production environments.
This suppresses messages like ‘(1 row(s) affected)’ after executing INSERT, UPDATE, DELETE and SELECT statements. This improves the performance of stored procedures by reducing network traffic.
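For example (the procedure and column names are illustrative):

```sql
CREATE PROCEDURE UpdateCustomerCity
    @CustomerID int,
    @City varchar(50)
AS
BEGIN
    SET NOCOUNT ON  -- suppress the '(1 row(s) affected)' message

    UPDATE Customer SET City = @City WHERE CustomerID = @CustomerID
END
```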

ANSI-Standard Join Clauses
Use the more readable ANSI-standard join clauses instead of the old-style joins. With ANSI joins, the WHERE clause is used only for filtering data, whereas with old-style joins the WHERE clause handles both the join condition and the filtering. The first of the following two queries uses the old-style join, while the second shows the ANSI join syntax:

SELECT a.au_id, t.title FROM titles t, authors a, titleauthor ta WHERE
a.au_id = ta.au_id AND
ta.title_id = t.title_id AND
t.title LIKE '%Computer%'
SELECT a.au_id, t.title
FROM authors a
INNER JOIN titleauthor ta
    ON a.au_id = ta.au_id
INNER JOIN titles t
    ON ta.title_id = t.title_id
WHERE t.title LIKE '%Computer%'

Stored Procedures Naming Convention
Do not prefix your stored procedure names with "sp_". The sp_ prefix is reserved for the system stored procedures that ship with SQL Server. Whenever SQL Server encounters a procedure name starting with sp_, it first tries to locate the procedure in the master database, then it looks for any qualifiers (database, owner) provided, and finally it tries dbo as the owner.
So you can save lookup time simply by avoiding the "sp_" prefix.

Using Views
Views are generally used to show specific data to specific users based on their interest. Views are also used to restrict access to the base tables by granting permission only on views. Yet another significant use of views is that they simplify your queries.
Incorporate your frequently required, complicated joins and calculations into a view so that you don’t have to repeat those joins/calculations in all your queries. Instead, just select from the view.
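A minimal sketch of the idea, assuming hypothetical Customer and Orders tables:

```sql
CREATE VIEW CustomerOrders
AS
SELECT c.CustomerID, c.CustomerFirstName, o.OrderID, o.OrderDate
FROM Customer c
INNER JOIN Orders o ON o.CustomerID = c.CustomerID
GO

-- every consumer of the join now stays simple:
SELECT CustomerID, OrderDate FROM CustomerOrders WHERE OrderID = 1042
```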

Text Data Types
Try not to use the TEXT or NTEXT data types for storing large textual data.
The TEXT data type has some inherent problems and will be removed in a future version of Microsoft SQL Server.
For example, you cannot directly write or update text data using the INSERT or UPDATE statements. Instead, you have to use special statements like READTEXT, WRITETEXT and UPDATETEXT.
There are also a number of bugs associated with replicating tables containing text columns.
So, if you don't have to store more than 8 KB of text, use the CHAR(8000) or VARCHAR(8000) data types instead.
In SQL 2005 and 2008, you can use VARCHAR(MAX) to store up to 2 GB of textual data.
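For example, a hypothetical Notes table can store large text in a VARCHAR(MAX) column that works with ordinary string functions and UPDATE statements:

```sql
CREATE TABLE Notes
(
    NoteID int IDENTITY PRIMARY KEY,
    Body   varchar(max) NOT NULL
)

-- no WRITETEXT/UPDATETEXT needed:
UPDATE Notes SET Body = Body + ' (amended)' WHERE NoteID = 1
```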

Insert Statements
Always use a column list in your INSERT statements. This helps in avoiding problems when the table structure changes (like adding or dropping a column).
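Using the Customer table from the earlier SELECT example:

```sql
-- Fragile: breaks if a column is later added to or dropped from Customer
INSERT INTO Customer VALUES (17, 'John', 'Cairo')

-- Robust: the explicit column list keeps the statement valid
INSERT INTO Customer (CustomerID, CustomerFirstName, City)
VALUES (17, 'John', 'Cairo')
```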

Accessing Tables
Always access tables in the same order in all your stored procedures and triggers. This helps avoid deadlocks. Other things to keep in mind to avoid deadlocks:
1. Keep your transactions as short as possible, and touch as little data as possible during a transaction.
2. Never wait for user input in the middle of a transaction.
3. Do not use higher-level locking hints or restrictive isolation levels unless they are absolutely needed.
4. Make your front-end applications deadlock-intelligent; that is, they should be able to resubmit a transaction in case the previous attempt fails with error 1205.
5. In your applications, process all results returned by SQL Server immediately, so that the locks on the processed rows are released and do not block other sessions.

Stored Procedure Returning Values
Make sure your stored procedures always return a value indicating their status. Standardize on the return values of stored procedures for success and failures.
The RETURN statement is meant for returning the execution status only, but not data. If you need to return data, use OUTPUT parameters.
If your stored procedure always returns a single row result set, consider returning the result set using OUTPUT parameters instead of a SELECT statement, as ADO handles output parameters faster than result sets returned by SELECT statements.
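A sketch of the pattern (the procedure, table, and status values are illustrative):

```sql
CREATE PROCEDURE GetCustomerCity
    @CustomerID int,
    @City varchar(50) OUTPUT
AS
BEGIN
    SET NOCOUNT ON

    SELECT @City = City FROM Customer WHERE CustomerID = @CustomerID

    IF @@ROWCOUNT = 0
        RETURN 1  -- status: customer not found
    RETURN 0      -- status: success
END
GO

-- caller:
DECLARE @City varchar(50), @Status int
EXEC @Status = GetCustomerCity 17, @City OUTPUT
```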

Object Case
Always be consistent with the case you use in your code. On a case-insensitive server your code might work fine, but it will fail on a case-sensitive SQL Server if the case is not consistent.
For example, if you create a table in a SQL Server instance or database that has a case-sensitive or binary sort order, all references to the table must use the same case that was specified in the CREATE TABLE statement.
If you name the table 'MyTable' in the CREATE TABLE statement and use 'mytable' in a SELECT statement, you get an 'object not found' error.

T-SQL Variables
Though T-SQL has no concept of constants (like the ones in the C language), variables can serve the same purpose. Using variables instead of constant values within your queries improves readability and maintainability of your code. Consider the following example:

SELECT OrderID, OrderDate FROM Orders WHERE OrderStatus IN (5,6)

The same query can be re-written in a more readable form as shown below (the variable names are illustrative):

DECLARE @OrderPending int = 5
DECLARE @OrderShipped int = 6

SELECT OrderID, OrderDate FROM Orders WHERE OrderStatus IN (@OrderPending, @OrderShipped)

Offload tasks
Offload tasks, like string manipulations, concatenations, row numbering, case conversions, type conversions etc., to the front-end applications if these operations are going to consume more CPU cycles on the database server.
Also try to do basic validations in the front-end itself during data entry. This saves unnecessary network roundtrips.

Check for record Existence
If you need to verify the existence of a record in a table, don't use SELECT COUNT(*) in your Transact-SQL code, which is very inefficient and wastes server resources. Instead, use the Transact-SQL IF EXISTS to determine whether the record in question exists, which is much more efficient. For example:
Here’s how you might use COUNT(*):

IF (SELECT COUNT(*) FROM table_name WHERE column_name = 'xxx') > 0

Here’s a faster way, using IF EXISTS:

IF EXISTS (SELECT * FROM table_name WHERE column_name = 'xxx')

The reason IF EXISTS is faster than COUNT(*) is that the query can stop as soon as the condition is proven true, while COUNT(*) must go through every matching record, whether there is one or thousands, before the comparison can be evaluated.

Object Owner
For best performance, all objects that are called from within the same stored procedure should be owned by the same owner, preferably dbo. If they are not, then SQL Server must perform name resolution on objects whose names are the same but whose owners differ. When this happens, SQL Server cannot reuse the stored procedure's in-memory plan; instead, it must recompile the stored procedure, which hinders performance.
For the same reason, call stored procedures using their fully qualified name (database.owner.procedure). There are a couple of reasons for this, one of which relates to performance. First, using fully qualified names eliminates any potential confusion about which stored procedure you want to run, helping to prevent bugs and other problems. More importantly, doing so allows SQL Server to access the stored procedure's execution plan more directly, in turn speeding up the procedure. Yes, the performance boost is very small, but if your server runs tens of thousands of stored procedures every hour, these little savings add up.

Upsert Statements
SQL Server 2008 introduces upsert functionality through the MERGE statement, which combines INSERT, UPDATE and DELETE logic in a single statement.
Use MERGE to synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table:

MERGE table1 AS target
USING (SELECT ID, Name FROM table2) AS source (ID, Name)
ON target.Table2ID = source.ID
WHEN MATCHED THEN
    UPDATE SET target.name = source.Name + ' matched'
WHEN NOT MATCHED THEN
    INSERT (name, Table2ID)
    VALUES (source.Name + ' not matched', source.ID)
OUTPUT $action, inserted.id, deleted.id;

DateTime Columns
Always use the 'datetime2' data type in SQL 2008 instead of the classic 'datetime'. Datetime2 offers a larger date range, a larger default fractional precision, and optional user-specified precision, while typically using less storage than the classic datetime.
If your column should store only the date portion, use the 'date' data type; if it should store only the time portion, use the 'time' data type. Below are examples of what these data types look like:

time            12:35:29.1234567
date            2007-05-08
smalldatetime   2007-05-08 12:35:00
datetime        2007-05-08 12:35:29.123
datetime2       2007-05-08 12:35:29.1234567
datetimeoffset  2007-05-08 12:35:29.1234567 +12:15

Measure Query Performance
Always use SET STATISTICS TIME to measure the performance of your important queries and stored procedures, and use its output to guide your optimizations. Take a look at this example:

SET STATISTICS TIME ON
EXEC GetMedicalProcedures 1, 10

The below information will be displayed in the Messages tab:
SQL Server parse and compile time:
CPU time = 6 ms, elapsed time = 6 ms.
SQL Server Execution Times:
CPU time = 24 ms, elapsed time = 768 ms.
(10 row(s) affected)
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 125 ms.
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 131 ms.

This provides a good estimate of how long the query took to execute, showing both the CPU time (pure processing time) and the elapsed time (total wall-clock time, including I/O and waits).

Create indexes on tables that are queried heavily with SELECT statements. Be careful not to create indexes on tables that are subject to frequent real-time changes through INSERT, UPDATE and DELETE operations.
An index speeds up a SELECT if the indexed column is used in the query, especially in the WHERE clause. However, the same index slows down an INSERT whether or not the indexed column appears in the statement, because the index must be maintained and its statistics updated every time the table's data changes. So use indexes wisely: they suit tables with a high retrieval rate and a low change rate.
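For example, a read-heavy base table can be indexed on its filtering column, using the lower fill factor recommended earlier for base tables (the table and column names are illustrative):

```sql
CREATE NONCLUSTERED INDEX IX_Admission_PersonID
ON Admission (PersonID)
WITH (FILLFACTOR = 70)
```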

ChiMerge discretization algorithm

ChiMerge is a simple algorithm that uses the chi-square statistic to discretize numeric attributes. It is a supervised, bottom-up data discretization method. It checks each pair of adjacent intervals to determine whether their class frequencies are significantly different, testing the hypothesis that the two adjacent intervals are independent of the class. If the hypothesis is confirmed, the intervals are merged into a single interval; if not, they remain separate.

In this problem we select one of the following as the attribute: sepal length, sepal width, petal length, or petal width. Three classes are available: Iris-setosa, Iris-versicolor and Iris-virginica.

The ChiMerge algorithm is described below:

  1. Set the values for minintervals and maxintervals so as not to end up with too many or too few intervals. Choose minintervals and maxintervals such that:

2 ≤ minintervals < maxintervals

  2. Look up the threshold value to use, based on the significance level and the number of degrees of freedom. The threshold value can be found in standard chi-square tables and depends on two factors:
  • The significance level: the higher the significance level, the higher the threshold value and the more likely adjacent intervals are to be merged.
  • The number of degrees of freedom = (2 − 1) × ([number of classes] − 1) = [number of classes] − 1

Here we have 3 classes, thus the number of degrees of freedom is 2.

  3. A) Sort the attribute values into ascending numerical order.

B) Create a frequency table containing one row for each distinct attribute value and one column for each class. Enter the number of occurrences of each distinct value of the attribute for each possible class in the cells of the table. Then generate a set of distinct intervals. Set the interval lower bound equal to the attribute value (inclusive) that belongs to this interval, and set its upper bound to the attribute value (exclusive) belonging to the next interval.

C) If the number of rows = minintervals then stop otherwise go to next step.

D) For each pair of adjacent rows in the frequency table, calculate E, the expected frequency for each cell, as the product of the row and column sums divided by the total number of occurrences in the two rows combined. The expected value is the frequency that would occur by chance under the independence assumption. Calculate (O − E)²/E for each cell (if E < 0.5, replace E by 0.5), where O is the observed frequency for that cell, then add the values to give chi² for that pair of adjacent rows. No chi² is calculated for the final interval because there is no interval below it.

E) ChiMerge selects the smallest chi² value and compares it with the threshold. If chi² is less than the threshold, or if the number of rows is greater than maxintervals, the two intervals are merged: the merged row keeps the attribute value of the first of the two rows, and its upper interval bound becomes the upper bound of the second row. Then go back to step C. If neither condition is met, the algorithm stops.

The chi² values are then recalculated for the revised frequency table. ChiMerge proceeds iteratively in this way, merging two intervals at each stage, until the chi² values for all remaining pairs of intervals exceed the threshold and the number of intervals does not exceed the maximum; at that point no further merging is possible and the discretization is complete.
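As a quick worked example of steps D and E (the counts are made up for illustration), suppose two adjacent intervals have these class frequencies:

```
              Setosa  Versicolor  Virginica   Row sum
Interval 1       2        1           0         3
Interval 2       1        2           0         3
Column sums      3        3           0        N = 6

E11 = 3*3/6 = 1.5   →  (2 − 1.5)²/1.5 ≈ 0.167
E12 = 3*3/6 = 1.5   →  (1 − 1.5)²/1.5 ≈ 0.167
E13 = 0             →  contributes 0
E21 = 1.5           →  (1 − 1.5)²/1.5 ≈ 0.167
E22 = 1.5           →  (2 − 1.5)²/1.5 ≈ 0.167
E23 = 0             →  contributes 0

chi² ≈ 0.667
```

With 2 degrees of freedom and a 90% significance level, the threshold from the chi-square table is about 4.61, so chi² = 0.667 < 4.61 and these two intervals would be merged.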

Check these ChiMerge PowerPoint slides that visualize the above algorithm.

Now let's take the IRIS data set, obtained from http://www.ics.uci.edu/~mlearn/MLRepository.html (UC Irvine Machine Learning Repository), as the data set to be discretized. We will discretize each of the four numerical attributes using the ChiMerge method, with the stopping criterion max-interval = 6.


These attributes refer to Sepal Length, Sepal Width, Petal Length, Petal Width and Class respectively. Let's create two classes, "IrisDataItem" and "IrisWorkItem". The former stores the attribute values read from the data file, while the latter is used for later processing and for storing the chi-square value of each distinct attribute value.

Public Class IrisDataItem

    Public Enum IRISClass As Byte
        IrisSetosa = 1
        IrisVersicolour = 2
        IrisVirginica = 3
    End Enum

    Public Property SepalLength As Single
    Public Property SepalWidth As Single
    Public Property PetalLength As Single
    Public Property PetalWidth As Single
    Public Property [Class] As IRISClass

End Class

Public Class IrisWorkItem
    Public Property Attribute As Decimal
    Public Property Frequency As Integer
    Public Property SetosaCount As Integer
    Public Property VersicolorCount As Integer
    Public Property VirginicaCount As Integer
    Public Property Start As Decimal
    Public Property [End] As Decimal
    Public Property ChiSquare As Decimal

    'Operator overloading for merging of two adjacent items
    Public Shared Operator ^(ByVal Item1 As IrisWorkItem, ByVal Item2 As IrisWorkItem) As IrisWorkItem
        Item1.SetosaCount += Item2.SetosaCount
        Item1.VirginicaCount += Item2.VirginicaCount
        Item1.VersicolorCount += Item2.VersicolorCount
        Item1.End = Item2.End
        Item2.Frequency = -1
        Return Item1
    End Operator

    'Operator overloading for calculating the ChiSqaure for two items
    Public Shared Operator &(ByVal Item1 As IrisWorkItem, ByVal Item2 As IrisWorkItem) As Decimal
        Return ChiMerge.ChiSquare(Item1, Item2)
    End Operator
End Class

Our next task is to read the data file and create a List(Of IrisDataItem). This can be done by reading the file from disk, or you can embed the file in the application resources and read it from there. Here are two functions that handle the two ways of reading the file:

    Private Function ReadDataFile(ByVal Path As String) As StringCollection
        If Not String.IsNullOrWhiteSpace(Path) Then
            Dim reader As New IO.StreamReader(Path)
            Dim data As New StringCollection
            While Not reader.EndOfStream
                data.Add(reader.ReadLine())
            End While
            reader.Close()
            Return data
        Else
            Throw New Exception("Invalid IRIS File Path!")
        End If
    End Function

    Private Function ReadDataFileFromResource() As StringCollection
        Dim strFile As String = System.Text.Encoding.ASCII.GetString(My.Resources.iris)
        Dim reader As New IO.StringReader(strFile)
        Dim data As New StringCollection
        While reader.Peek >= 0
            data.Add(reader.ReadLine())
        End While
        Return data
    End Function

    Public Shared Function CreateIrisDataItems(ByRef StringItems As StringCollection) As List(Of IrisDataItem)
        If StringItems IsNot Nothing AndAlso StringItems.Count > 0 Then
            Dim items As New List(Of IrisDataItem)
            For Each i In StringItems
                If Not String.IsNullOrWhiteSpace(i) Then
                    Dim item As New IrisDataItem
                    Dim strItem() As String = i.Split(",")
                    item.SepalLength = strItem(0)
                    item.SepalWidth = strItem(1)
                    item.PetalLength = strItem(2)
                    item.PetalWidth = strItem(3)
                    Select Case strItem(4)
                        Case "Iris-setosa"
                            item.Class = IrisDataItem.IRISClass.IrisSetosa
                        Case "Iris-versicolor"
                            item.Class = IrisDataItem.IRISClass.IrisVersicolour
                        Case "Iris-virginica"
                            item.Class = IrisDataItem.IRISClass.IrisVirginica
                    End Select
                    items.Add(item)
                End If
            Next
            Return items
        End If
        Return Nothing
    End Function

    Public Shared Function CreateIrisWorkItems(ByVal AttributeIndex As Integer, ByVal DataItems As List(Of IrisDataItem)) As List(Of IrisWorkItem)
        Dim workItems As New List(Of IrisWorkItem)
        DataItems.ForEach(Sub(dataItem)
                              Dim workItem As IrisWorkItem = Nothing
                              Dim value As Decimal
                              Select Case AttributeIndex
                                  Case 0
                                      workItem = workItems.Find(Function(w) w.Attribute = dataItem.SepalLength)
                                      value = dataItem.SepalLength
                                  Case 1
                                      workItem = workItems.Find(Function(w) w.Attribute = dataItem.SepalWidth)
                                      value = dataItem.SepalWidth
                                  Case 2
                                      workItem = workItems.Find(Function(w) w.Attribute = dataItem.PetalLength)
                                      value = dataItem.PetalLength
                                  Case 3
                                      workItem = workItems.Find(Function(w) w.Attribute = dataItem.PetalWidth)
                                      value = dataItem.PetalWidth
                              End Select
                              If workItem IsNot Nothing Then
                                  workItem.Frequency += 1
                              Else
                                  workItem = New IrisWorkItem
                                  workItem.Frequency = 1
                                  workItems.Add(workItem)
                              End If
                              workItem.Attribute = value
                              Select Case dataItem.Class
                                  Case IrisDataItem.IRISClass.IrisSetosa
                                      workItem.SetosaCount += 1
                                  Case IrisDataItem.IRISClass.IrisVersicolour
                                      workItem.VersicolorCount += 1
                                  Case IrisDataItem.IRISClass.IrisVirginica
                                      workItem.VirginicaCount += 1
                              End Select
                          End Sub)
        Return workItems.OrderBy(Function(w) w.Attribute).ToList
    End Function

Having the data ready, we can now implement the ChiSquare function, which is basically an implementation of the formula:

chi² = Σ (i = 1 to 2) Σ (j = 1 to k) (Aij − Eij)² / Eij

Where:

k = number of classes

Aij = number of samples in ith interval, jth class

Eij = expected frequency of Aij = (Ri * Cj) / N

Ri = number of samples in ith interval

Cj = number of samples in jth class

N = total number of samples on the two intervals

If Eij < 0.5, then set Eij to 0.5

    Public Shared Function ChiSquare(ByVal Item1 As IrisWorkItem, ByVal Item2 As IrisWorkItem) As Decimal
        ChiSquare = 0
        Dim table(,) As Integer = {{Item1.SetosaCount, Item1.VersicolorCount, Item1.VirginicaCount}, {Item2.SetosaCount, Item2.VersicolorCount, Item2.VirginicaCount}}
        Dim rowSums() As Decimal = {Item1.SetosaCount + Item1.VersicolorCount + Item1.VirginicaCount, Item2.SetosaCount + Item2.VersicolorCount + Item2.VirginicaCount}
        Dim columnSums() As Decimal = {Item1.SetosaCount + Item2.SetosaCount, Item1.VersicolorCount + Item2.VersicolorCount, Item1.VirginicaCount + Item2.VirginicaCount}
        Dim totalSum As Decimal = rowSums.Sum
        Dim E As Decimal = 0
        For i As Integer = 0 To table.GetLength(0) - 1
            For j As Integer = 0 To table.GetLength(1) - 1
                E = rowSums(i) * columnSums(j) / totalSum
                If E > 0 Then ChiSquare += ((table(i, j) - E) ^ 2) / IIf(E < 0.5, 0.5, E)
            Next
        Next
    End Function

One last thing to do before we implement Chi-Merge algorithm is to setup the initial interval bounds and prepare them. This is easily achieved by taking the attribute value itself to be the lower bound of the interval and the next attribute value to be the upper bound of the interval. Below is an easy way of doing this:

    Public Shared Sub CreateIntervalBounds(ByVal Items As List(Of IrisWorkItem))
        For i As Integer = 0 To Items.Count - 2
            Items(i).Start = Items(i).Attribute
            Items(i).End = Items(i + 1).Attribute
        Next
        Items(Items.Count - 1).Start = Items(Items.Count - 1).Attribute
        Items(Items.Count - 1).End = Items(Items.Count - 1).Attribute + 1
    End Sub

Finally, we are ready to implement the ChiMerge algorithm. One thing to note about the implementation below is the use of LINQ to Objects and lambda expressions to filter out the tuples that need to be dropped after merging. Because of this, the code will probably compile only with Visual Studio 2010 and .NET Framework 4.0. I haven't tested it on Visual Studio 2008 SP1, but if you do, let me know whether it compiles.

    Public Sub Compute(ByRef Items As List(Of IrisWorkItem))
        For i As Integer = 0 To Items.Count - 2
            Items(i).ChiSquare = Items(i) & Items(i + 1) 'calculate chiSquare for each adjacent pair
        Next
        Dim minChiSquare As Decimal = Items.Take(Items.Count - 1).Min(Function(x) x.ChiSquare)
        If minChiSquare < THRESHOLD Then
            Dim i As Integer = 0
            While i < Items.Count
                If Items(i).ChiSquare = minChiSquare Then
                    Items(i) ^= Items(i + 1) 'merge
                    Dim j As Integer = i
                    i += 1
                    While i < Items.Count - 1 AndAlso Items(i).ChiSquare = minChiSquare
                        Items(j) ^= Items(i + 1) 'merge
                        i += 1
                    End While
                End If
                i += 1
            End While
            Items.RemoveAll(Function(x) x.Frequency = -1)
            If Items.Count >= MIN_INTERVAL Then Compute(Items)
            If Items.Count > MAX_INTERVAL Then
                THRESHOLD += 0.5
            End If
        End If
    End Sub

Once you run the algorithm, you can compare the obtained intervals and split points with the ones found on the internet. Below are my final results compared to the results on http://sci2s.ugr.es/keel/pdf/algorithm/congreso/1992-Kerber-ChimErge-AAAI92.pdf