
Sunday, June 6, 2010

Madison --- Microsoft SQL Server 2008 R2 Parallel Data Warehouse

Another product from Microsoft that is going to revolutionize the IT industry is Microsoft SQL Server 2008 R2 Parallel Data Warehouse, previously code-named "Madison". In this post I am going to discuss it and its benefits.


Introduction

Microsoft SQL Server 2008 R2 Parallel Data Warehouse (previously code-named project "Madison") is a highly scalable appliance that delivers performance at low cost through a massively parallel processing (MPP) architecture.

The goal of Madison is to use MPP to deliver high performance and scalability on SQL Server 2008, Windows Server 2008 and industry-standard hardware. The appliance partitions large tables across multiple physical nodes, with each node having dedicated CPU, memory and storage and running its own instance of SQL Server.


In a traditional symmetric multiprocessing (SMP) architecture, query processing takes place within one physical instance of a database. In the MPP appliance, by contrast, a Control Node routes queries from applications to all Compute Nodes and then collects and returns the results. Since the data is evenly distributed across multiple nodes and processing occurs in parallel, queries can be performed faster than on single SMP database servers, according to Microsoft.



Microsoft has dubbed the product's parallel design "Ultra Shared Nothing."

Microsoft has made a number of announcements around its database and data warehousing business of late. According to Ron Van Zanten, directing officer of business intelligence, Premier Bankcard LLC, “Upgrading to SQL Server 2008 has improved the performance of our data warehouse significantly.  With more than 20 terabytes of data and growing, we are ready to take our data warehouse to the next level.  In our current evaluation of Madison we’re very impressed with the efficiency and raw power of the appliance and how it can really process many terabytes of data seamlessly.” Appliances ranging from 8 nodes to 20 nodes are now ready to host customers for early evaluation.

Benefits

Madison is a highly scalable data warehouse appliance that offers customers high performance at low cost through a massively parallel processing (MPP) architecture for SQL Server. Unlike its competitors, Madison offers hardware flexibility, with configurations from the major hardware vendors, and low cost through industry-standard hardware. Project Madison extends Microsoft's investments in data warehousing and allows SQL Server data warehouses to scale out from fewer than 50 gigabytes to over a petabyte of data, with the MPP scale-out architecture targeting data warehouses ranging from 10 TB to 1+ PB. The Parallel Computing Edition is priced at $57,489 per processor.

Key customer benefits of "Madison" include:
  • Data warehouse scalability into the petabyte range
  • Low cost of ownership through industry standard hardware
  • Appliance model that simplifies data warehouse deployment and maintenance
  • Integrates with existing SQL Server 2008 data warehouses via a unique hub-and-spoke architecture

Key Features

  • Data warehouse scalability from tens to hundreds of terabytes
  • Low cost of ownership through industry standard hardware
  • Appliance model simplifies deployment and maintenance
  • Integrates with existing SQL Server 2008 data warehouses via hub-and-spoke architecture
  • Greater ROI from BI investments through integration with SQL Server 2008
  • Reduced risk through use of redundant, industry standard hardware
  • Balanced reference architectures deliver predictable performance
  • Better agility and business alignment through hub and spoke architecture


Scale out with Parallel Data Warehouse
A SQL Server 2008 R2 Parallel Data Warehouse MPP appliance acts as an enterprise “hub” that publishes data as needed to various business units or departments (spokes). For dedicated high performance requirements, individual business units can redeploy SQL Server 2008 data marts, or deploy their own Parallel Data Warehouse appliances, as spokes.

Scale Your Data Warehouse from tens to hundreds of Terabytes

SQL Server 2008 R2 Parallel Data Warehouse is a highly scalable appliance for enterprise data warehousing. It is the next step in the evolution of the data warehouse appliance created by DATAllegro. SQL Server 2008 R2 Parallel Data Warehouse uses massively parallel processing (MPP) to deliver high performance and scalability on SQL Server 2008, Windows Server 2008 and industry-standard hardware. The MPP architecture enables better scalability, more predictable performance, reduced risk and a lower cost per terabyte than other data warehouse solutions.

Improve Performance at a Lower Price per Terabyte

In symmetric multi-processing (SMP) architecture, query processing occurs entirely within one physical instance of a database. CPU, memory and storage impose physical limits upon speed and scale. A SQL Server 2008 R2 Parallel Data Warehouse MPP data warehouse appliance partitions large tables across multiple physical nodes, each node having dedicated CPU, memory and storage, and each running its own instance of SQL Server, in a parallel shared nothing architecture. SQL Server 2008 R2 Parallel Data Warehouse delivers improved performance through its unique design.

All components are balanced against each other to reduce performance bottlenecks, and all server and storage components are mirrored for enterprise-class redundancy. A Control Node routes queries from applications to all Compute Nodes, then collects and returns the result. Because data is evenly distributed across multiple nodes and processing occurs in parallel, queries can be many times faster than on single SMP database servers.

This architecture can cost less for several reasons:
  • Rather than relying on expensive proprietary processors or storage, SQL Server 2008 R2 Parallel Data Warehouse appliances use industry-standard hardware
  • As data volumes grow, scalability simply requires the addition of capacity to the appliance
  • There is no need for a “forklift upgrade” where the entire appliance must be upgraded


Improve ROI of Existing BI Investments
Extract more value from existing BI investments through tight integration of Madison and the Microsoft BI tools. The Madison MPP data warehouse integrates with SQL Server technologies including Integration Services, Reporting Services and Analysis Services. Existing SMP data marts can be redeployed and "plugged in" to the MPP cluster as "spokes", receiving data from the MPP "hub."

Appliance Model

Simplify data warehouse deployment and maintenance. Madison appliances include carefully balanced, pre-assembled hardware and software from leading hardware vendors. This “appliance model” significantly accelerates your time to value and reduces deployment costs.

Deliver Predictable Performance

Through carefully balanced configuration, tight specifications and testing, the Madison MPP data warehouse appliance is designed to deliver predictable performance. Microsoft designs the reference configurations to suit different usage scenarios, including reporting and ad hoc queries.

Automated workload management and system resource balancing enable you to load data concurrently with queries, without sacrificing performance. As data grows and business needs change, Microsoft offers options to support more users performing different types of queries.

Better Agility and Business Alignment through Hub-and-Spoke

Using a true hub-and-spoke architecture, all enterprise data can be maintained on a Madison hub while departments or business units keep their existing data marts to suit their needs. High-speed data transfer relieves traditional barriers to hub and spoke. Power users can even deploy a dedicated MPP appliance as a “spoke” so they can autonomously manage resources, while IT can enforce enterprise standards across all data.


Additional Benefits

Automated storage management replaces complex space allocation. Built-in diagnostics detect and broadcast hardware issues. A management console provides a simple graphical interface and dashboard for tracking and resolving issues. Ultra-fast loading and high-speed backups reduce administration for even the largest data warehouses.


You can use the Madison MPP data warehouse appliance:
  • As an economical, high-performance enterprise data warehouse or data mart
  • As a platform for high scale Data Warehouses from terabytes to over 1 petabyte
  • As the hub of a hub-and-spoke EDW
  • To offload applications from an EDW for high-speed aggregation and queries
  • As long-term data storage in multi-tier data warehouses
  • As a subject-oriented data mart or sandbox for new analytic applications

Further Reading

For more information about Project “Madison”, visit http://www.microsoft.com/madison

A good video explaining Madison is at MSDN Channel9.

Do keep me updated with your views and feedback.

Saturday, February 13, 2010

Installing SQL Server 2005 Express Edition using C#.NET

In the previous post I discussed how to install SQL Server 2005 Express using the command prompt. Today I am going to discuss how to install it using C#.NET code. The code is as follows:


using System.Diagnostics;
using System.Windows.Forms;   // needed for Application.StartupPath

void InstallExpress()
{
    // Point the process at the SQL Server 2005 Express setup package
    // shipped inside the application's folder.
    Process p = new Process();
    ProcessStartInfo psi = new ProcessStartInfo();
    psi.FileName = Application.StartupPath.Trim() + @"\SqlServer2005 Express\SQLEXPR.EXE";

    // /qn runs the setup fully silent; SECURITYMODE=SQL plus SAPWD enables
    // mixed-mode authentication with the given sa password.
    psi.Arguments = "/qn username=\"CustomerUsername\" companyname=\"OurCompany\" addlocal=ALL disablenetworkprotocols=\"0\" instancename=\"MyInstance\" SECURITYMODE=\"SQL\" SAPWD=\"mypassword\"";

    p.StartInfo = psi;
    p.Start();
    p.WaitForExit();   // block until the silent setup finishes
}

This is quite simple and easy to implement. Here we create a process that runs the setup executable with the command-line switches needed to install SQL Server 2005 Express in silent mode.

Friday, January 29, 2010

Tips for improving performance and tuning of SQL Server databases

Today I am going to touch upon a very important but rarely considered part of software development: database management. Programmers write hundreds of lines of code and a couple of stored procedures every day, but nobody pays attention to what is happening to the database and whether it is being maintained correctly. Attention only turns to it when, during the implementation phase, the performance of the application goes down. So we should pay attention to database performance from the start. Here I will focus on some key points that, kept in mind, will help us improve and tune the database.
Here is a list of dos and don'ts that will be helpful:


DO know your tools.

Please don't underestimate this tip. It is the best of all those you'll see in this article. You'd be surprised at how many SQL Server programmers don't even know all the T-SQL commands and all the effective tools SQL Server has.
"What? I need to spend a month learning all those SQL commands I'll never use???" you might say. No, you don't need to. But spend a weekend on MSDN and browse through all the T-SQL commands: the mission here is to learn what can and what can't be done. And, in the future, when designing a query, you'll remember "Hey, there's a command that does exactly what I need", and then you can refer to MSDN again for its exact syntax.
In this article I'll assume that you already know the T-SQL syntax or can look it up on MSDN.

DON'T use cursors

Let me say it again: DON'T use cursors. They are one of the surest ways of killing the performance of an entire system. Most beginners use cursors without realizing the performance hit they cause. Cursors use memory, they lock tables in awkward ways, and they are slow. Worst of all, they defeat most of the performance optimization your DBA can do. Did you know that every FETCH executed has about the same cost as executing a SELECT? This means that if your cursor has 10,000 records, it will execute about 10,000 SELECTs! If you can do the same work in a couple of SELECT, UPDATE or DELETE statements, it will be much faster.
Beginner SQL programmers find cursors a comfortable and familiar way of coding. Unfortunately, this leads to bad performance. The whole purpose of SQL is specifying what you want, not how it should be done.
I once rewrote a cursor-based stored procedure, substituting the cursor with a pair of traditional SQL queries. The table had only 100,000 records and the stored procedure used to take 40 minutes to process. You should have seen the face of the poor programmer when the new stored procedure took 10 seconds to run!
Sometimes it's even faster to create a small application that gets all the data, processes it and updates the server; T-SQL was not designed with loop performance in mind.
Since you are reading this article, I should mention: there is almost no good use for cursors. I have never seen cursors being well used, except for DBA work. And good DBAs, most of the time, know what they are doing. But if you are reading this, you are probably not a DBA, right?
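As a minimal sketch of the idea, assuming a hypothetical Orders table with Total, DueDate and Paid columns:

-- Cursor version (slow): one FETCH per row, each costing about as much as a SELECT.
-- DECLARE c CURSOR FOR SELECT OrderID FROM Orders WHERE DueDate < GETDATE() AND Paid = 0;
-- ... then FETCH NEXT and UPDATE ... WHERE CURRENT OF c, row by row.

-- Set-based version (fast): one UPDATE does the same work in a single pass.
UPDATE Orders
SET    Total = Total * 1.10          -- apply a 10% late surcharge
WHERE  DueDate < GETDATE()
  AND  Paid = 0;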

DO normalize your tables

There are two common excuses for not normalizing databases: performance and pure laziness. You'll pay for the second one sooner or later; and, as for performance, don't optimize what's not slow. Often I see programmers de-normalizing a database because "this will be slow". And, more often than not, the resulting design is slower. DBMSs were designed to be used with normalized databases, so design with normalization in mind.

DON'T SELECT *

This is hard to get used to, I know. And I confess: I often use it; but try to specify only the columns you'll need. This will:
  1. Reduce memory consumption and network bandwidth
  2. Ease security design
  3. Give the query optimizer a chance to read all the needed columns from the indexes
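A quick sketch, assuming a hypothetical Customers table:

-- Instead of SELECT * FROM Customers, name only what the caller needs;
-- an index covering (CustomerID, Name, Email) can then satisfy the whole query.
SELECT CustomerID, Name, Email
FROM   Customers;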

DO know how your data will be/is being accessed

A robust index design is one of the best things you can do for your database, and doing it well is almost an art form. Every time you add an index to a table, SELECTs get faster, but INSERTs and DELETEs get much slower, because there is a lot of work in building and maintaining indexes. If you add several indexes to a table to speed up your SELECTs, you'll soon notice locks being held for a long time while indexes are updated. So the question is: what is being done with this table, reading or updating data? This question is tricky, especially with DELETE and UPDATE, because they often involve a SELECT for the WHERE part before they update the table.

DON'T create an index on the "Gender" column

This is useless. First, let's understand how indexes speed up table access. You can see indexes as a way of quickly partitioning a table based on a criterion. If you create an index on a column like "Gender", you'll have only two partitions: Male and Female. What optimization will that give you on a table with 1,000,000 rows? Remember, maintaining an index is slow. Always design your indexes with the most sparse columns first and the least sparse columns last, e.g., Name + Province + Gender.
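For instance, with those hypothetical columns on a Customers table:

-- An index on Gender alone splits 1,000,000 rows into just two partitions:
-- CREATE INDEX IX_Customers_Gender ON Customers (Gender);   -- near useless

-- Put the most sparse (selective) columns first, the least sparse last:
CREATE INDEX IX_Customers_Name_Province_Gender
    ON Customers (Name, Province, Gender);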

DO use transactions

Especially on long-running queries. Transactions will save you when things go wrong. After working with data for some time, you'll discover some unexpected situation that makes your stored procedure crash.
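A minimal sketch, assuming hypothetical Orders and OrderItems tables (the TRY/CATCH syntax requires SQL Server 2005 or later):

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE Orders     SET Status  = 'Shipped' WHERE OrderID = 42;
    UPDATE OrderItems SET Shipped = 1         WHERE OrderID = 42;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;   -- undo the partial work on any error
END CATCH;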

DO beware of deadlocks

Always access your tables in the same order. When working with stored procedures and transactions, you will run into this soon. If you lock table A and then table B, always lock them in that same order in all stored procedures. If, by accident, another procedure locks table B and then table A, some day you'll have a deadlock. Deadlocks can be tricky to find if the lock sequence is not carefully designed.
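A sketch of the idea with two hypothetical tables: every procedure that needs both locks takes them in the same A-then-B order, so no procedure can hold B's lock while waiting for A's:

BEGIN TRANSACTION;
    UPDATE TableA SET Balance = Balance - 10 WHERE ID = 1;  -- always lock TableA first
    UPDATE TableB SET Balance = Balance + 10 WHERE ID = 1;  -- then TableB
COMMIT TRANSACTION;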

DON'T open large recordsets

A common request on programming forums is: "How can I quickly fill this combo with 100,000 items?". Well, this is an error: you can't and you shouldn't. First, your user will hate browsing through 100,000 records to find the right one. A better UI is needed here, because you should ideally show no more than 100 or 200 records to your users.
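A sketch of the alternative, assuming a hypothetical Products table: filter on what the user typed and cap the result set:

DECLARE @SearchText NVARCHAR(50);
SET @SearchText = N'Chai';

SELECT TOP 200 ProductID, Name    -- never ship 100,000 rows to a combo box
FROM   Products
WHERE  Name LIKE @SearchText + '%'
ORDER  BY Name;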

DON'T use server side cursors

Unless you know what you are doing. Client-side cursors often (not always) put less overhead on the network and on the server, and reduce locking time.

DO use parametrized queries

Sometimes I see questions in programming forums like: "My queries are failing with some chars, e.g. quotes. How can I avoid it?". And a common answer is: "Replace it with double quotes". Wrong. This is only a workaround: it will still fail with other chars and will introduce serious security bugs (SQL injection). Besides this, it will trash the SQL Server caching system, which will cache several similar queries instead of caching only one. Learn how to use parameterized queries (in ADO, through the use of the Command object, or in ADO.NET, the SqlCommand) and never have these problems again.
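On the client this means a SqlCommand with parameters; the same idea is available inside T-SQL through sp_executesql. A sketch with a hypothetical Customers table:

DECLARE @name NVARCHAR(50);
SET @name = N'O''Brien';   -- the embedded quote is handled safely, no escaping games

EXEC sp_executesql
     N'SELECT CustomerID, Name FROM Customers WHERE Name = @name',
     N'@name NVARCHAR(50)',
     @name = @name;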

DO always test with large databases

It's a common pattern: programmers develop against a small test database while the end user works with a large one. This is an error: disk is cheap, and performance problems will only be noticed when it's too late.

DON'T import bulk data with INSERT

Unless strictly necessary. Use DTS or the BCP utility and you'll have a solution that is both flexible and fast.
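Besides DTS and BCP, T-SQL itself has BULK INSERT; a sketch with a hypothetical table and file path:

BULK INSERT Orders
FROM 'C:\data\orders.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);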

DO beware of timeouts

When querying a database, the default timeout is often low, like 15 or 30 seconds. Remember that report queries may run longer than this, especially as your database grows.

DON'T ignore simultaneous editing

Sometimes two users will edit the same record at the same time. When writing, the last writer wins and some of the updates will be lost. It's easy to detect this situation: create a timestamp column and check it before you write. If possible, merge changes. If there is a conflict, prompt the user for some action.
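A sketch of the check, assuming a hypothetical Customers table with a timestamp (rowversion) column named RowVer that was read along with the record:

DECLARE @id INT, @newEmail NVARCHAR(100), @rowVerReadEarlier BINARY(8);
SET @id = 42;
SET @newEmail = N'new@example.com';
SET @rowVerReadEarlier = 0x00000000000007D1;  -- value fetched when the record was read

UPDATE Customers
SET    Email = @newEmail
WHERE  CustomerID = @id
  AND  RowVer = @rowVerReadEarlier;   -- matches only if nobody wrote in between

IF @@ROWCOUNT = 0
    PRINT 'Conflict: the record was changed by another user; merge or re-prompt.';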

DON'T do SELECT max(ID) from Master when inserting in a Detail table.

This is another common mistake, and it will fail when two users are inserting data at the same time. Use SCOPE_IDENTITY, IDENT_CURRENT or @@IDENTITY instead. Avoid @@IDENTITY if possible, because it can introduce some nasty bugs with triggers.
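A sketch with hypothetical Master and Detail tables, where Master has an IDENTITY key:

DECLARE @newID INT;

INSERT INTO Master (Name) VALUES (N'New master row');
SET @newID = SCOPE_IDENTITY();   -- identity generated in this scope only, trigger-safe

INSERT INTO Detail (MasterID, Item) VALUES (@newID, N'First detail row');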

DO Avoid NULLable columns

When possible. NULLable columns add per-row storage overhead and extra processing when querying data. The DAL will be harder to code, too, because every time you access such a column you'll need to check it for NULL.
I'm not saying that NULLs are evil incarnate, as some people say. I believe they have good uses and can simplify coding when "missing data" is part of your business rules. But sometimes NULLable columns are used in situations like this:
CustomerName1
CustomerAddress1
CustomerEmail1
CustomerName2
CustomerAddress2
CustomerEmail2
CustomerName3
CustomerAddress3
CustomerEmail3
This is horrible. Please don't do this; normalize your table instead. It will be more flexible and faster, and will have fewer NULLable columns.
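A sketch of the normalized shape, with hypothetical names: the repeating Name/Address/Email group moves into a child table, one row per contact:

CREATE TABLE Account (
    AccountID   INT IDENTITY PRIMARY KEY,
    AccountName NVARCHAR(100) NOT NULL
);

CREATE TABLE Customer (
    CustomerID      INT IDENTITY PRIMARY KEY,
    AccountID       INT NOT NULL REFERENCES Account (AccountID),
    CustomerName    NVARCHAR(100) NOT NULL,
    CustomerAddress NVARCHAR(200) NOT NULL,
    CustomerEmail   NVARCHAR(100) NOT NULL
);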

DON'T use the TEXT datatype

Unless you are using it for really large data. The TEXT datatype is awkward to query, is slow, and wastes a lot of space if used incorrectly. Sometimes a VARCHAR column will handle your data better.

DON'T use temporary tables

Unless strictly necessary. Often a subquery can substitute for a temporary table. Temporary tables induce overhead and will give you a big headache when programming under COM+, because COM+ uses a database connection pool and the temporary tables last forever. In SQL Server 2000 and later, there are alternatives like the TABLE data type, which can provide in-memory solutions for small tables inside stored procedures.
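A sketch of the TABLE data type alternative, assuming a hypothetical Orders table:

DECLARE @Recent TABLE (OrderID INT, Total MONEY);

INSERT INTO @Recent (OrderID, Total)
SELECT OrderID, Total
FROM   Orders
WHERE  OrderDate > DATEADD(day, -7, GETDATE());

SELECT OrderID, Total FROM @Recent;   -- scoped to the batch, no connection-pool hangover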

DO learn how to read a query execution plan

The SQL Server Query Analyzer is your friend; through it you'll learn a lot about how the engine works and how query and index design affect performance.

DO use referential integrity

This can be a great time saver. Define all your keys, unique constraints and foreign keys. Every validation you create on the server will save you time in the future.
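For example, wiring the earlier hypothetical Master/Detail pair together with a foreign key:

ALTER TABLE Detail
    ADD CONSTRAINT FK_Detail_Master
    FOREIGN KEY (MasterID) REFERENCES Master (MasterID);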


Quick List of Best Practices

This is a compiled list of the best practices for very large databases:
1) Have a backup strategy that allows you to meet your SLAs
  • E.g. weekly full backups with no HA solution won't allow you to meet zero data-loss SLAs
2) Pay attention to index fragmentation
3) Have page checksums turned on
4) Make sure auto-stats update is turned on
5) Understand your SLAs
6) Make sure tempdb is optimized for the workload
7) Make sure you're managing the transaction log correctly
  • E.g. being in full recovery mode with no log backups means you'll eventually run out of space on the log drive
  • Don't have multiple log files; there is no need
  • Don't let auto-grow run wild; it causes VLF fragmentation, which leads to horrible performance
8) Don't run database shrink
9) Consider turning on instant file initialization to speed up file growth and restores

I'll keep this list updated as and when required. Microsoft's recommended tips and practices can be found at http://technet.microsoft.com/en-us/sqlserver/bb331794.aspx.
