Saturday, January 30, 2010

Z39.50 Client in C#.Net using Zoom.Net and Yaz.dll

Z39.50 is a client-server protocol for searching and retrieving information from remote computer databases. It is covered by ANSI/NISO standard Z39.50 and ISO standard 23950. The standard's maintenance agency is the Library of Congress. Z39.50 is widely used in library environments and is often incorporated into integrated library systems and personal bibliographic reference software. Interlibrary catalogue searches for interlibrary loan are often implemented with Z39.50 queries. Work on the Z39.50 protocol began in the 1970s and led to successive versions in 1988, 1992, and 1995. The Common Query Language is based on Z39.50 semantics.

It supports a number of actions, including search, retrieval, sort, and browse. Searches are expressed using attributes, typically from the bib-1 attribute set, which defines six attributes to be used in searches of information on the server computer: use, relation, position, structure, truncation, completeness. The syntax of the Z39.50 protocol allows for very complex queries.

In practice, however, the functional complexity is limited by the uneven implementations by developers and commercial vendors. The syntax of Z39.50 is abstracted from the underlying database structure; for example, if the client specifies an author search (Use attribute 1003), it is up to the server to determine how to map that search to the indexes it has at hand. This allows Z39.50 queries to be formulated without having to know anything about the target database; but it also means that results for the same query can vary widely among different servers. One server may have an author index; another may use its index of personal names, whether they are authors or not; another may have no name index and fall back on its keyword index; and another may have no suitable index and return an error.
 
I have developed a Z39.50 client based on Zoom.Net (which in turn uses YAZ.dll to speak Z39.50). The client has been built in C#.NET. Here I have used some basic queries for searching by author, title, and publisher, and only the AND and OR clauses in the query. Records are fetched as MARCXML and selected fields are displayed to the user. The application successfully retrieves data from the LOC server, whose connection parameters are:

Server name : z3950.loc.gov
Port : 7090
Database : Voyager


You can get the Zoom.Net dll from here [zoom.net dll] and the Yaz.dll from here [Yaz dll]. The support that the above-mentioned server gives in regard to query syntax and semantics is described on the LOC server page. Kindly refer to it to know more about the LOC server that I have used to query in the application.
The code for it is:


//namespaces to include
using Zoom.Net.YazSharp;
using Zoom.Net;
using System.Xml;
using System.Text; //for Encoding, used when decoding record content
 
 //create a connection and provide the server details. Here I have used the LOC server
Connection ob1 = new Connection("z3950.loc.gov", 7090);  
//provide the name of the database on the server
ob1.DatabaseName = "Voyager";

//define the record syntax that will be required. Here I am requesting XML, viz. MARCXML
ob1.Syntax = Zoom.Net.RecordSyntax.XML;

//Connect to the server
ob1.Connect();

//Declare your query (CQL)
string query = "Title=\"" + txtTitle.Text.Trim() + "\"";

//Create the object for query. 
Zoom.Net.YazSharp.CQLQuery q = new CQLQuery(query);

IResultSet results;
//perform search
results = (ResultSet)ob1.Search(q);
 
Now iterate through the results, get the XML of each fetched record, and derive the needed values from it.

for (uint i = 0; i < results.Size; i++)
{
    string temp = Encoding.UTF8.GetString(results[i].Content);
    //this string holds the record's XML; load it into an XmlDocument
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(temp);

    //perform the needful operations
    //...............
    //...............
    //...............
}
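As an example of the "needful operations", here is a minimal sketch that could go inside the loop in place of the placeholder comments. It assumes the records come back in the standard MARCXML namespace (http://www.loc.gov/MARC21/slim) and pulls out the title proper, MARC field 245 subfield a:

//assumes MARCXML in the standard namespace; goes inside the loop above
XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
ns.AddNamespace("marc", "http://www.loc.gov/MARC21/slim");
XmlNode titleNode = doc.SelectSingleNode(
    "//marc:datafield[@tag='245']/marc:subfield[@code='a']", ns);
if (titleNode != null)
{
    string title = titleNode.InnerText;
    //display the title to the user, e.g. add it to a list box or grid
}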

Hope this helps... Do keep me updated about your views and queries.

A comprehensive and exhaustive list of available Z39.50, SRU, and SRW servers is given on the IndexData website. Another list of available free Z39.50 software is on the LOC website.

Friday, January 29, 2010

Tips for improving performance and tuning of Sql Server database

Today I am going to touch upon a very important but rarely considered part of software development: management of the database. Programmers write hundreds of lines of code a day and knock out the odd stored procedure, but nobody pays attention to what is happening to the database and whether it is being maintained correctly. Attention only turns to it in the implementation phase, when the performance of the application goes down. So we should pay attention to the performance of the database. Here I will focus on some key points which, if kept in mind, let us improve database performance and tune it accordingly.
Here is a list of do's and don'ts that will be helpful:


DO know your tools.

Please don't underestimate this tip. It is the best of all of those you'll see in this article. You'd be surprised at how many SQL Server programmers don't even know all the T-SQL commands and all of the effective tools SQL Server has.
"What? I need to spend a month learning all those SQL commands I'll never use???" you might say. No, you don't need to. But spend a weekend at MSDN and browse through all the T-SQL commands: the mission here is to learn what can and what can't be done. And, in the future, when designing a query, you'll remember "Hey, there's a command that does exactly what I need", and then you'll refer again to MSDN for its exact syntax.
In this article I'll assume that you already know the T-SQL syntax or can find about it on MSDN.

DON'T use cursors

Let me say it again: DON'T use cursors. They are the surest way of killing the performance of an entire system. Most beginners use cursors without realizing the performance hit they carry. They use memory, they lock tables in weird ways, and they are slow. Worst of all, they defeat most of the performance optimization your DBA can do. Did you know that every FETCH executed has about the same cost as executing a SELECT? This means that if your cursor has 10,000 records, it will execute about 10,000 SELECTs! If you can do the same job with a couple of SELECT, UPDATE, or DELETE statements, it will be much faster.
Beginner SQL programmers find in cursors a comfortable and familiar way of coding. Unfortunately, this leads to bad performance. The whole purpose of SQL is specifying what you want, not how it should be done.
I once rewrote a cursor-based stored procedure, substituting a pair of traditional SQL queries for the cursor code. The table had only 100,000 records, and the stored procedure used to take 40 minutes to process. You should have seen the face of the poor programmer when the new stored procedure took 10 seconds to run!
Sometimes it's even faster to create a small application that gets all the data, processes it, and updates the server. T-SQL was not designed with loop performance in mind.
If you are reading this article, I need to mention: there is no good use for cursors; I have never seen cursors used well, except in DBA work. And good DBAs, most of the time, know what they are doing. But if you are reading this, you are probably not a DBA, right?

DO normalize your tables

There are two common excuses for not normalizing databases: performance and pure laziness. You'll pay for the second one sooner or later; and, about performance, don't optimize what's not slow. Often I see programmers de-normalizing databases because "this will be slow". And, more frequent than the inverse, the resulting design is slower. DBMSs were designed to be used with normalized databases, so design with normalization in mind.

DON'T SELECT *

This is a hard habit to break, I know. And I confess: I often use it. But try to specify only the columns you'll need. This will:
  1. Reduce memory consumption and network bandwidth
  2. Ease security design
  3. Give the query optimizer a chance to read all the needed columns from the indexes

DO know how your data will be/is being accessed

A robust index design is one of the best things you can do for your database, and doing it is almost an art form. Every time you add an index to a table, SELECTs get faster, but INSERTs and DELETEs get much slower, because there is a lot of work in building and maintaining indexes. If you add several indexes to a table to speed up your SELECTs, you'll soon notice locks being held for a long time while the indexes are updated. So the question is: what is being done with this table, reading or updating data? This question is tricky, especially with DELETE and UPDATE, because they often involve a SELECT for the WHERE part before they update the table.

DON'T create an index on the "Gender" column

This is useless. First, let's understand how indexes speed up table access. You can see indexes as a way of quickly partitioning a table based on a criterion. If you create an index on a column like "Gender", you'll have only two partitions: Male and Female. What optimization will that give you on a table with 1,000,000 rows? Remember, maintaining an index is slow. Always design your indexes with the most selective (sparse) columns first and the least selective last, e.g., Name + Province + Gender.

DO use transactions

Especially on long-running queries. This will save you when things go wrong. Working with data for some time, you'll soon discover unexpected situations that make your stored procedure crash.
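A minimal ADO.NET sketch of the idea (the connection string, table, and column names are illustrative):

using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    SqlTransaction tran = conn.BeginTransaction();
    try
    {
        SqlCommand cmd = new SqlCommand(
            "UPDATE Accounts SET Balance = Balance - @amt WHERE Id = @id", conn, tran);
        cmd.Parameters.AddWithValue("@amt", 100m);
        cmd.Parameters.AddWithValue("@id", 42);
        cmd.ExecuteNonQuery();

        tran.Commit(); //all or nothing
    }
    catch
    {
        tran.Rollback(); //undo everything if anything goes wrong
        throw;
    }
}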

DO beware of deadlocks

Always access your tables in the same order. When working with stored procedures and transactions, you may run into this soon. If you lock table A and then table B, always lock them in that very same order in all stored procedures. If, by accident, some procedure locks table B and then table A, some day you'll have a deadlock. Deadlocks can be tricky to find if the lock sequence is not carefully designed.

DON'T open large recordsets

A common request on programming forums is: "How can I quickly fill this combo with 100,000 items?". Well, this is an error: you can't, and you shouldn't. Your user will hate browsing through 100,000 records to find the right one. A better UI is needed here, because ideally you should show no more than 100 or 200 records to your users.

DON'T use server side cursors

Unless you know what you are doing. Client-side cursors often (not always) put less overhead on the network and on the server, and reduce locking time.

DO use parametrized queries

Sometimes I see questions in programming forums like: "My queries are failing with some chars, e.g. quotes. How can I avoid this?". And a common answer is: "Replace them with double quotes". Wrong. This is only a workaround; it will still fail with other chars and will introduce serious security bugs. Besides this, it will trash the SQL Server caching system, which will cache several similar queries instead of caching only one. Learn how to use parameterized queries (in ADO, through the Command object, or in ADO.NET, the SqlCommand) and never have these problems again.
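A small sketch of the ADO.NET way, given an open SqlConnection conn (the table and column names are illustrative):

//the parameter value travels separately from the SQL text, so quotes and
//other special chars are handled safely, and the plan is cached only once
using (SqlCommand cmd = new SqlCommand(
    "SELECT CustomerId, Name FROM Customers WHERE Name = @name", conn))
{
    cmd.Parameters.AddWithValue("@name", txtName.Text);
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            //process each row
        }
    }
}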

DO always test with large databases

It's a common pattern: programmers develop against a small test database while the end user works on a large one. This is an error: disk is cheap, and performance problems will only be noticed when it's too late.

DON'T import bulk data with INSERT

Unless strictly necessary. Use DTS or the BCP utility and you'll have both a flexible and fast solution.

DO beware of timeouts

When querying a database, the default timeout is often low, like 15 or 30 seconds. Remember that report queries may run longer than this, especially as your database grows.
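In ADO.NET, for example, the timeout can be raised per SqlCommand (the value below is illustrative):

//the default CommandTimeout is 30 seconds; long report queries may need more
cmd.CommandTimeout = 120; //in seconds; 0 would mean no timeout at all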

DON'T ignore simultaneous editing

Sometimes two users will edit the same record at the same time. When writing back, the last writer wins and some of the updates will be lost. It's easy to detect this situation: create a timestamp column and check it before you write. If possible, merge the changes; if there is a conflict, prompt the user for some action.
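A hedged sketch of this optimistic check in ADO.NET (the table, columns, and the timestamp/rowversion column name are illustrative):

//update only if the row still carries the version value we originally read
using (SqlCommand cmd = new SqlCommand(
    "UPDATE Customers SET Name = @name " +
    "WHERE CustomerId = @id AND RowVersion = @originalVersion", conn))
{
    cmd.Parameters.AddWithValue("@name", newName);
    cmd.Parameters.AddWithValue("@id", customerId);
    cmd.Parameters.AddWithValue("@originalVersion", originalVersion); //byte[] read earlier

    if (cmd.ExecuteNonQuery() == 0)
    {
        //someone else changed the record since we read it:
        //reload, merge if possible, or prompt the user
    }
}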

DON'T do SELECT max(ID) from Master when inserting in a Detail table.

This is another common mistake, and it will fail when two users are inserting data at the same time. Use SCOPE_IDENTITY, IDENT_CURRENT, or @@IDENTITY instead. Avoid @@IDENTITY if possible, because it can introduce some nasty bugs with triggers.
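A sketch of the SCOPE_IDENTITY approach from ADO.NET (the table and column names are illustrative):

//insert the Master row and read back its identity in the same batch and scope
using (SqlCommand cmd = new SqlCommand(
    "INSERT INTO Master (Name) VALUES (@name); " +
    "SELECT CAST(SCOPE_IDENTITY() AS int);", conn))
{
    cmd.Parameters.AddWithValue("@name", masterName);
    int masterId = (int)cmd.ExecuteScalar();
    //use masterId for the Detail inserts; safe even with concurrent users
}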

DO Avoid NULLable columns

When possible. They consume an extra byte per NULLable column in each row and have more overhead associated with them when querying data. The DAL will be harder to code too, because every time you access such a column you'll need to check it for NULL.
I'm not saying that NULLs are the evil incarnate, as some people claim. I believe they have good uses and simplify coding when "missing data" is part of your business rules. But sometimes NULLable columns are used in situations like this:
CustomerName1
CustomerAddress1
CustomerEmail1
CustomerName2
CustomerAddress2
CustomerEmail2
CustomerName3
CustomerAddress3
CustomerEmail3
This is horrible. Please, don't do this, normalize your table. It will be more flexible and faster, and will reduce the NULLable columns.

DON'T use the TEXT datatype

Unless you are using it for really large data. The TEXT datatype is not flexible to query, is slow and wastes a lot of space if used incorrectly. Sometimes a VARCHAR will handle your data better.

DON'T use temporary tables

Unless strictly necessary. Often a subquery can substitute for a temporary table. Temporary tables induce overhead and will give you a big headache when programming under COM+, because it uses a database connection pool and temporary tables will last forever. In SQL Server 2000 there are alternatives, like the TABLE data type, which can provide in-memory solutions for small tables inside stored procedures.

DO learn how to read a query execution plan

The SQL Server Query Analyzer is your friend, and through it you'll learn a lot about how the engine works and how query and index design affect performance.

DO use referential integrity

This can be a great time saver. Define all your keys, unique constraints and foreign keys. Every validation you create on the server will save you time in the future.


Quick List of Important Features

This is a compiled version of best practices for very large databases:
1) Have a backup strategy that allows you to meet your SLAs
  • E.g. weekly full backups alone, with no HA solution, won't allow you to meet zero data-loss SLAs
2) Pay attention to index fragmentation
3) Have page checksums turned on
4) Make sure auto-stats update is turned on
5) Understand your SLAs
6) Make sure tempdb is optimized for the workload
7) Make sure you're managing the transaction log correctly
  • E.g. being in full recovery mode with no log backups means you'll eventually run out of space on the log drive
  • Don't have multiple log files – no need
  • Don't let auto-grow go nuts – it causes VLF fragmentation, which leads to horrible perf
8) Don't run database shrink
9) Consider turning on instant file initialization to speed up file growth and restores
I'll keep this list updated as and when required. Microsoft's recommended tips and practices can be found at http://technet.microsoft.com/en-us/sqlserver/bb331794.aspx.

Thursday, January 28, 2010

Generic implementation of enums : Enumeration Classes

Today I am going to demonstrate how to use enums to our advantage, customizing them and allocating values as per our requirements. Enums in C# have an integral base type (Int32 by default), and the items in one are always known at compile time; there is simply no easy way to fit generics into the concept of an enum. But this can be done with a little tweaking and good thinking, and is also called a generic implementation of enums. There are other options I like to use when enumerations break down, and many times in the domain model I go straight to the other option. For example, I've seen quite a few models like this:

public class Employee
{
    public EmployeeType Type { get; set; }
}

public enum EmployeeType
{
    Manager,
    Servant,
    AssistantToTheRegionalManager
}

The problem with a model like this is it tends to create lots of switch statements:

public void ProcessBonus(Employee employee)
{
    switch(employee.Type)
    {
        case EmployeeType.Manager:
            employee.Bonus = 1000m;
            break;
        case EmployeeType.Servant:
            employee.Bonus = 0.01m;
            break;
        case EmployeeType.AssistantToTheRegionalManager:
            employee.Bonus = 1.0m;
            break;
        default:
            throw new ArgumentOutOfRangeException();
    }
}

There are a few problems with enumerations like this:

    * Behavior related to the enumeration gets scattered around the application
    * New enumeration values require shotgun surgery
    * Enumerations don't follow the Open-Closed Principle

Adding a new enumeration value is quite a large pain, as there are lots of these switch statements to go back and modify. In the case above, we want the "default" behavior for defensive coding purposes, but then a new enumeration value will cause an exception to be thrown. With enumeration behavior scattered around, we can never bring it back to the source type, because enumeration types can't have any behavior (or state, for that matter). Instead of an enumeration, I like to use an enumeration class.

Creating the enumeration class

To move away from an enumeration to an enumeration class, I'll first use the Enumeration layer supertype:

//namespaces needed
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public abstract class Enumeration : IComparable
{
    private readonly int _value;
    private readonly string _displayName;

    protected Enumeration()
    {
    }

    protected Enumeration(int value, string displayName)
    {
        _value = value;
        _displayName = displayName;
    }

    public int Value
    {
        get { return _value; }
    }

    public string DisplayName
    {
        get { return _displayName; }
    }

    public override string ToString()
    {
        return DisplayName;
    }

    public static IEnumerable<T> GetAll<T>() where T : Enumeration, new()
    {
        var type = typeof(T);
        var fields = type.GetFields(BindingFlags.Public |
            BindingFlags.Static | BindingFlags.DeclaredOnly);

        foreach (var info in fields)
        {
            var instance = new T();
            var locatedValue = info.GetValue(instance) as T;

            if (locatedValue != null)
            {
                yield return locatedValue;
            }
        }
    }

    public override bool Equals(object obj)
    {
        var otherValue = obj as Enumeration;

        if (otherValue == null)
        {
            return false;
        }

        var typeMatches = GetType().Equals(obj.GetType());
        var valueMatches = _value.Equals(otherValue.Value);

        return typeMatches && valueMatches;
    }

    public override int GetHashCode()
    {
        return _value.GetHashCode();
    }

    public static int AbsoluteDifference(Enumeration firstValue,
        Enumeration secondValue)
    {
        var absoluteDifference = Math.Abs(firstValue.Value - secondValue.Value);
        return absoluteDifference;
    }

    public static T FromValue<T>(int value) where T : Enumeration, new()
    {
        var matchingItem = parse<T, int>(value,
            "value", item => item.Value == value);
        return matchingItem;
    }

    public static T FromDisplayName<T>(string displayName) where T : Enumeration, new()
    {
        var matchingItem = parse<T, string>(displayName,
            "display name", item => item.DisplayName == displayName);
        return matchingItem;
    }

    private static T parse<T, K>(K value, string description,
        Func<T, bool> predicate) where T : Enumeration, new()
    {
        var matchingItem = GetAll<T>().FirstOrDefault(predicate);

        if (matchingItem == null)
        {
            var message = string.Format("'{0}' is not a valid {1} in {2}",
                value, description, typeof(T));
            throw new ApplicationException(message);
        }

        return matchingItem;
    }

    public int CompareTo(object other)
    {
        return Value.CompareTo(((Enumeration)other).Value);
    }
}

It's a large class, but it gives us some nice functionality out of the box, such as equality operations. Next, I'll create the subtype that will house all of my different enumeration values:

public class EmployeeType : Enumeration
{
}

I'd still like individual employee types, such as Manager and Servant, and I can do this by exposing static readonly fields representing these employee types:

public class EmployeeType : Enumeration
{
    public static readonly EmployeeType Manager
        = new EmployeeType(0, "Manager");
    public static readonly EmployeeType Servant
        = new EmployeeType(1, "Servant");
    public static readonly EmployeeType AssistantToTheRegionalManager
        = new EmployeeType(2, "Assistant to the Regional Manager");

    private EmployeeType() { }
    private EmployeeType(int value, string displayName)
        : base(value, displayName) { }
}

Notice I also get a much nicer display name with spaces.  In the past, I always had to do a lot of finagling to put spaces in the names when I displayed them. When someone wants to assign the Employee's type, it doesn't look any different than before:

dwightSchrute.Type = EmployeeType.AssistantToTheRegionalManager;

Now that I have a real class that acts like a Value Object, I have a destination for behavior.  For example, I can tack on a "BonusSize" property:

public void ProcessBonus(Employee employee)
{
    employee.Bonus = employee.Type.BonusSize;
}

Yes, this is a rather silly example, but it illustrates that most, if not all, of the switch statements concerning the previous enum type go away.  The behavior can be pushed down into the enumeration class, with each specific enumeration type supplying specific behavior. This pattern can even be taken further, where I have subtypes for each individual EmployeeType.  I'll never need to expose them outside to anyone:

public abstract class EmployeeType : Enumeration
{
    public static readonly EmployeeType Manager
        = new ManagerType();

    protected EmployeeType() { }
    protected EmployeeType(int value, string displayName)
        : base(value, displayName) { }

    public abstract decimal BonusSize { get; }

    private class ManagerType : EmployeeType
    {
        public ManagerType() : base(0, "Manager") { }

        public override decimal BonusSize
        {
            get { return 1000m; }
        }
    }
}
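As a quick usage sketch (the console output in the comments is what the code above implies; note that the new()-constrained helpers such as GetAll and FromValue additionally require a public parameterless constructor on the subtype):

EmployeeType type = EmployeeType.Manager;
Console.WriteLine(type);                              //"Manager", via ToString()
Console.WriteLine(type.BonusSize);                    //1000, behavior lives on the type
Console.WriteLine(type.Equals(EmployeeType.Manager)); //True: same type and value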

All of the variations of each enumeration type can be pushed down not only to the enumeration class, but to each specific subtype.  The BonusSize now becomes an implementation detail of individual EmployeeType. Enumerations work well in a variety of scenarios, but can break down quickly inside your domain model.  Enumeration classes provide much of the same usability, with the added benefit of becoming a destination for behavior. Switch statements are no longer necessary, as I can push that variability and knowledge where it belongs, back inside the model.  If for some reason I need to check specific enumeration class values, the option is still open for me.  This pattern shouldn't replace all enumerations, but it's nice to have an alternative.

Wednesday, January 6, 2010

AJAX for ASP.NET

The most remarkable feature of Microsoft development products is that they let the programmer concentrate on the problem rather than on the technology needed to implement its solution. The best example of this is the AJAX library provided by Microsoft in Visual Studio 2008. It has a complete set of built-in scripts and related classes that implement the hard-core logic of AJAX. Despite all this, it is very simple and easy to use! It consists of the following controls:

1. Script Manager
2. Script Manager Proxy
3. Timer
4. Update Panel
5. Update Progress

I will discuss these in details one by one:

1. Script Manager

The ScriptManager control is central to AJAX functionality in ASP.NET. The control manages all ASP.NET AJAX resources on a page. This includes downloading Microsoft AJAX Library scripts to the browser and coordinating partial-page updates that are enabled by using UpdatePanel controls. In addition, the ScriptManager control enables you to do the following:

* Register script that is compatible with partial-page updates. In order to manage dependencies between your script and the core library, any script that you register is loaded after the Microsoft AJAX Library script.

* Specify whether release or debug scripts are sent to the browser.

* Provide access to Web service methods from script by registering Web services with the ScriptManager control.

* Provide access to ASP.NET authentication, role, and profile application services from client script by registering these services with the ScriptManager control.

* Enable culture-specific display of ECMAScript (JavaScript) Date, Number, and String functions in the browser.

* Access localization resources for embedded script files or for stand-alone script files by using the ResourceUICultures property of the ScriptReference control.

* Register server controls that implement the IExtenderControl or IScriptControl interfaces with the ScriptManager control so that script required by client components and behaviors is rendered.

When implementing the ScriptManager, remember that it has to be placed before any other control on the web page. Further, when using the ScriptManager control with master pages, user controls, and other child components, remember that a page can contain only one ScriptManager control in its hierarchy. To register services and scripts for nested pages, user controls, or components when the parent page already has a ScriptManager control, use the ScriptManagerProxy control.

Microsoft also gives us the liberty of writing our own scripts and using them via the ScriptManager, thus enabling us to build a library of scripts that suits our requirements.


2. Script Manager Proxy

Only one instance of the ScriptManager control can be added to the page. The page can include the control directly, or indirectly inside a nested component such as a user control, content page for a master page, or nested master page. In cases where a ScriptManager control is already on the page but a nested or parent component needs additional features of the ScriptManager control, the component can include a ScriptManagerProxy control. For example, the ScriptManagerProxy control enables you to add scripts and services that are specific to nested components.
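For instance, a content page could register an extra web service through a proxy; a minimal markup sketch (the service path is illustrative):

<asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server">
    <Services>
        <asp:ServiceReference Path="~/Services/EmployeeService.asmx" />
    </Services>
</asp:ScriptManagerProxy>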


3. Timer

This is the control with which most Windows programmers will be familiar. By placing it on the web page we can make the client send a request to the server at specific time intervals, which in turn updates the web page from time to time. It updates anything that is present within the update panel.


4. Update Panel

It is a container for the controls on which AJAX is to be implemented. All the controls within the update panel are updated via the AJAX implementation, while the others go through a normal post-back.

5. Update Progress

Many times even the update panel takes a fraction of time to complete the request-response cycle. During that time, the end user does not know whether an update operation is going on. To solve this we can use the UpdateProgress control: in it we specify a template, which can be any text or image, that is shown whenever the update panel is fetching fresh values, i.e. while it is in its request-response cycle.

Example

Example for the above mentioned topics is as follows:

I have made a page that uses the ScriptManager, UpdatePanel, UpdateProgress, and Timer controls. The page consists of a textbox and a button: whatever you write in the textbox, on clicking the button you get the length of the text entered. To simulate some processing on the server, I have made a couple of nested loops that execute for 3-4 seconds; this fires the UpdateProgress control. Further, there is a timer that goes to the server and brings back the date-time stamp every 5 seconds. The markup of the page is as follows:

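A minimal sketch of that markup (the control IDs, label placement, and the 5000 ms interval are illustrative, matching the description above):

<asp:ScriptManager ID="ScriptManager1" runat="server" />

<asp:UpdatePanel ID="UpdatePanel1" runat="server">
    <ContentTemplate>
        <asp:Label ID="lblTime" runat="server" />
        <br />
        <asp:TextBox ID="txtInput" runat="server" />
        <asp:Button ID="btnCalculate" runat="server" Text="Calculate"
            OnClick="btnCalculate_Click" />
        <asp:Label ID="lblLength" runat="server" />
        <asp:Timer ID="Timer1" runat="server" Interval="5000" OnTick="Timer1_Tick" />
    </ContentTemplate>
</asp:UpdatePanel>

<asp:UpdateProgress ID="UpdateProgress1" runat="server"
    AssociatedUpdatePanelID="UpdatePanel1">
    <ProgressTemplate>
        Processing... please wait.
    </ProgressTemplate>
</asp:UpdateProgress>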
The code-behind is as follows:

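A sketch matching the description, assuming the control IDs from the markup above (the loop bounds are illustrative and simply burn 3-4 seconds of server time):

protected void Timer1_Tick(object sender, EventArgs e)
{
    //refresh the server's date-time stamp every 5 seconds
    lblTime.Text = "Server time : " + DateTime.Now.ToString();
}

protected void btnCalculate_Click(object sender, EventArgs e)
{
    //nested loops to simulate some processing on the server,
    //long enough for the UpdateProgress template to appear
    int dummy = 0;
    for (int i = 0; i < 30000; i++)
        for (int j = 0; j < 30000; j++)
            dummy++;

    lblLength.Text = "Length : " + txtInput.Text.Length.ToString();
}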
On execution, the UI behaves as follows:

1. When the application is run, the initial page is displayed in the browser.

2. The timer control fires automatically after 5 seconds and the date-time stamp is refreshed.

3. When we write text in the textbox and click the Calculate button, the request goes to the server and the UpdateProgress control is shown.

4. After the processing at the server is complete, the length of the entered text is shown.

Hence we see how easily we can implement the AJAX controls in ASP.NET.
