Introduction
Cloud computing is a general term for anything that involves delivering hosted services over the Internet. Put another way, cloud computing is Internet-based computing in which shared resources, software, and information are provided to computers and other devices on demand. These hosted services fall broadly into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The name "cloud computing" was inspired by the cloud symbol often used to represent the Internet in flowcharts and diagrams.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic, so a user can have as much or as little of the service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access).
It is a paradigm shift, following the shift from mainframe to client–server computing in the early 1980s. Details are abstracted from users, who no longer need knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing describes a new supplement, consumption, and delivery model for IT services based on the Internet, typically involving the provision of dynamically scalable and often virtualized resources as a service over the Internet. It is a byproduct of the ease of access to remote computing sites that the Internet provides. A good video explaining the basics of cloud computing follows:
Microsoft and Cloud Computing
From the perspective of Microsoft technologies, cloud computing is going to be the backbone of most applications that run on the Internet. Microsoft and its competitors, such as Yahoo, Amazon, Google, and IBM, have been building cloud-computing infrastructure and new software at a rapid pace to serve the large number of potential users. Microsoft's business now depends on an ever-expanding network of massive data centers: hundreds of thousands of servers, petabytes of data, hundreds of megawatts of power, and billions of dollars in capital and operational expenses. Because these data centers are built with hardware and software technologies not designed for deployment at such massive scale, many of today's data centers are expensive to build, costly to operate, and unable to provide all the services needed by emerging applications: resilience, geo-distribution, composability, and graceful recovery.
A good video explaining cloud computing from a .NET perspective is below:
Two broad factors are driving cloud-computing development at Microsoft. The first is the shift by Microsoft and the software industry toward delivering services along with their software. The term "services" encompasses a broad array of Internet delivery options that extend far beyond browser access to remote Web sites. At one end are Web 1.0 applications (Hotmail®, Messenger, search, and online commerce sites) and Web 2.0 applications (social networking, for example). An emerging suite of more sophisticated applications, such as business intelligence and rich games, is improved fundamentally when local clients are connected to services. Such connections enable entirely new features: a new generation of immersive, interactive games; augmented-reality tools; and real-time data analysis and fusion. To provide services, a company must have a large number of computers housed in one or more data centers.
The second factor driving this research is the way cloud services and their support infrastructures are constructed. Today, they are assembled from vast numbers of PCs, packaged slightly differently, connected by the same networks used to deliver Internet services. Building data centers using standard, off-the-shelf technology was a great choice in the beginning. It let the Internet boom race ahead without the need to develop new types of computers and software systems. But the resulting data centers and software were not designed as integrated systems and are less efficient than they should be. One common analogy: if we built utility power plants the way we build data centers, we would start by going to Home Depot and buying millions of gasoline-powered generators.
Many researchers have seen an opportunity to make major improvements in the way data centers and cloud services are built, but this type of research and technology transfer is difficult because the efforts often cross many research disciplines. Effective research requires changes to both hardware and software, and the resulting prototypes must be constructed and tested at a scale difficult for small teams. For this reason, Microsoft is taking an integrated approach, drawing insights and lessons from Microsoft’s production services and data-center operations, and partnering with researchers and product teams worldwide.
A good video explaining more about Azure is below:
To drive further research in this area, Microsoft has created a research organization called Cloud Computing Futures (CCF).
The commodity components and handcrafted software currently used to build cloud services introduce costly inefficiency into Microsoft's business. Designs based on comprehensive optimization of all attributes offer an opportunity to create novel solutions that produce fundamental improvements in efficiency. CCF pursues this by:
- Creating new hardware and software prototypes.
- Advancing the holistic design philosophy.
- Innovating with instrumentation and measurement, data acquisition, and analysis.
- Engaging Microsoft product groups and outward-facing properties.
CCF's goal is to reduce data-center costs fourfold or more while accelerating deployment and increasing adaptability and resilience to failures, and to transfer these ideas into products and practice. To date, CCF has focused its attention on four areas, though its agenda spans next-generation storage devices and memories, new processors and processor architectures, system packaging, and software tools:
Low-power services: The computers (“servers”) used to support cloud services are some of the fastest, most power-hungry computers built. The common wisdom has been to use the fastest computers because the workload is potentially huge and purchasing, installing, maintaining and operating computers is a complex task, so the fewer the machines, the better. But other computers, such as laptops, are far more energy-efficient, as measured in operations per joule, and can complete a unit of work with far less electricity and less cooling. These computers are not as fast as servers, though, and more of them are required to deliver the same service.
CCF has built two server clusters using low-power Intel Atom chips and is conducting a series of experiments to see how well they support cloud services and how much their use can reduce the power those services consume. For example, power-efficient computers have low-power states, such as a laptop's sleep and hibernate modes, that greatly reduce power consumption. CCF has built an intelligent control system called Marlowe that examines the workload on a group of computers and decides how many of them should be asleep at any time, reducing power consumption while still meeting the service's acceptable level of performance.
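Marlowe's internals are not public, but the core decision described above can be sketched. The following is a minimal, hypothetical illustration (the function name, the request-rate metric, and the headroom factor are all assumptions, not Marlowe's actual design):

```python
import math

def plan_sleep_states(request_rate, per_server_capacity, total_servers, headroom=1.25):
    """Decide how many servers should stay awake for the current workload.

    request_rate        -- incoming requests per second (hypothetical load metric)
    per_server_capacity -- requests per second one awake server can handle
    headroom            -- spare capacity retained to absorb sudden spikes
    Returns (awake, asleep).
    """
    needed = math.ceil(request_rate * headroom / per_server_capacity)
    awake = max(1, min(total_servers, needed))  # always keep at least one server awake
    return awake, total_servers - awake
```

With 48 servers each handling 100 requests per second and a load of 1,000 requests per second, this rule keeps 13 servers awake and lets the other 35 sleep, while retaining 25% headroom for spikes.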
In addition, CCF has worked with the Hotmail® team to evaluate the utility of low-power servers for the Hotmail® service. These experiments, the Cooperative Expendable Micro-Slice Servers prototype, have shown that overall power consumption can be reduced compared with standard servers while still delivering the same quality of service.
Improved networks: The networks that connect the computers in data centers use the same hardware and software as the rest of the Internet. It is great technology, but many of the design decisions that make it possible to transmit traffic across the globe to a vast, rapidly changing collection of computers are inappropriate for a cloud-service computing infrastructure consisting of a large, but fixed, collection of computers in a single room. Data-center networks are costly and impose many constraints on communications among data-center services, making writing cloud-service software far more difficult.
CCF has been working with researchers from Microsoft Research on several approaches to data-center networking. The most mature of these is Monsoon, which reuses much of the existing networking hardware but replaces the software with a new set of communication protocols far better suited to a data center. This work will not only lead to more efficient networks; by relaxing the constraints of existing networks, it will also open new possibilities for simplifying data-center software and building more robust platforms.
Orleans software platform: The software that runs in the data center is a complicated, distributed system. It must handle a vast number of requests from across the globe, and the computers on which the software runs fail regularly, yet the service itself must not fail, even though the software is continually changing as the service evolves and new features are added. Orleans is a new software platform that runs on Microsoft's Windows® Azure™ system and provides the abstractions, programming languages, and tools that make it easier to build cloud services.
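Orleans itself is a .NET platform with a C# API; purely to illustrate the kind of abstraction described above (state isolated behind named "grains" that a runtime activates on demand), here is a toy, language-neutral sketch in Python. The class names and behavior are illustrative assumptions, not the Orleans API:

```python
class CounterGrain:
    """A toy 'grain': isolated state addressed by a key, created on demand."""

    def __init__(self, key):
        self.key = key
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count


class GrainRuntime:
    """Resolves a key to a single live grain activation, the way a
    virtual-actor runtime hides which machine actually hosts the state."""

    def __init__(self, grain_class):
        self.grain_class = grain_class
        self.activations = {}

    def get_grain(self, key):
        if key not in self.activations:  # activate lazily on first use
            self.activations[key] = self.grain_class(key)
        return self.activations[key]
```

A caller simply asks the runtime for `get_grain("user-42")` and invokes methods on it; a real platform adds distribution, fault recovery, and asynchronous messaging on top of this basic idea.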
Future cloud applications: To test the CCF hardware prototypes and the Orleans software platform, CCF is exploring future application scenarios that go beyond current cloud workloads. These scenarios integrate many ideas from across Microsoft in areas such as computer vision, virtual reality, and natural-language processing.
The position of Microsoft products with respect to cloud computing can be summed up in the following image:
The following slide explains what to keep in mind when converting an existing ASP.NET application to Windows Azure in order to use cloud computing. It highlights several good points worth remembering.
Benefits of Cloud Computing
There are some clear business benefits to building applications using cloud computing. A few of these are listed here:
Almost zero upfront infrastructure investment: If you have to build a large-scale system, it may cost a fortune to invest in real estate, hardware (racks, machines, routers, backup power supplies), hardware management (power management, cooling), and operations personnel. Because of the upfront costs, such a project would typically need several rounds of management approval before it could even get started. Now, with utility-style computing, there is no fixed or startup cost.
Just-in-time infrastructure: In the past, if your application became popular and your systems or infrastructure did not scale, you became a victim of your own success. Conversely, if you invested heavily and popularity never came, you became a victim of your failure. By deploying applications in the cloud with dynamic capacity management, software architects do not have to worry about pre-procuring capacity for large-scale systems. The risk is low because you scale only as you grow, and cloud architectures can relinquish infrastructure as quickly as it was acquired (in minutes).
More efficient resource utilization: System administrators usually worry about procuring hardware (when they run out of capacity) and improving infrastructure utilization (when they have excess, idle capacity). With cloud architectures, they can manage resources more effectively and efficiently by having applications request and relinquish only the resources they need, on demand.
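The request-and-relinquish cycle described above can be sketched as a simple scaling rule. This is a hypothetical illustration (the thresholds and the formula are assumptions, not any vendor's actual autoscaler):

```python
import math

def rescale(current_instances, utilization, low=0.3, high=0.7, min_instances=1):
    """Return the instance count that keeps average utilization within [low, high].

    utilization -- fraction of total capacity currently in use (0.0 to 1.0)
    """
    if utilization > high:
        # Running hot: request enough extra capacity to bring utilization back to 'high'.
        return math.ceil(current_instances * utilization / high)
    if utilization < low:
        # Mostly idle: relinquish instances until utilization rises to 'low'.
        return max(min_instances, math.ceil(current_instances * utilization / low))
    return current_instances  # within the comfort band: do nothing
```

For example, 10 instances running at 90% utilization would request 3 more, while 10 instances at 10% would shrink to 4; in between, the fleet is left alone.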
Usage-based costing: Utility-style pricing bills the customer only for the infrastructure that has actually been used. The customer is not liable for the entire infrastructure that may be in place. This is a subtle difference between desktop and web applications. A desktop application or a traditional client-server application runs on the customer's own infrastructure (PC or server), whereas an application built on a cloud architecture uses third-party infrastructure, and the customer is billed only for the fraction of it that was used.
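As a worked example of usage-based costing, the sketch below totals a bill from metered usage alone. The rates are made-up numbers for illustration, not any provider's actual pricing:

```python
def monthly_bill(instance_hours, compute_rate, gb_stored, storage_rate, gb_out, transfer_rate):
    """Bill only for what was used: compute hours, storage, and outbound transfer."""
    compute = instance_hours * compute_rate    # metered machine time
    storage = gb_stored * storage_rate         # metered storage footprint
    transfer = gb_out * transfer_rate          # metered outbound bandwidth
    return round(compute + storage + transfer, 2)
```

Two instances running a 720-hour month (1,440 instance-hours) at a hypothetical $0.12/hour, with 50 GB stored at $0.15/GB and 100 GB transferred out at $0.10/GB, comes to $190.30; the customer pays nothing for capacity that sat unused elsewhere in the data center.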
Potential for shrinking the processing time: Parallelization is one of the great ways to speed up processing. If a compute-intensive or data-intensive job that can be run in parallel takes 500 hours to process on one machine, then with a cloud architecture it is possible to spawn 500 instances and process the same job in about 1 hour. An elastic infrastructure gives the application the ability to exploit parallelization cost-effectively, reducing the total processing time.
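The 500-machines-for-1-hour arithmetic above is simply T/N for a perfectly divisible job. A minimal fan-out-and-combine sketch of the pattern (the chunking scheme and the stand-in workload are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for the expensive per-instance work (e.g., an hour of processing).
    return sum(x * x for x in chunk)

def run_in_parallel(data, workers):
    """Split a divisible job across 'workers' instances and combine the results.

    With perfect parallelism, wall-clock time drops from T to roughly T / workers.
    """
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))
```

In a real cloud deployment each chunk would go to a separate instance rather than a local thread, but the structure (partition, fan out, combine) is the same.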
Read my other article on parallel computing and .NET for more.
Status as of 2010
As of 2010, the state of the cloud market and its strategy is well described in the following illustration (for more details, refer to this article).
A brief overview of cloud vendors and their current status follows. Note that this does not include all vendors and is not exhaustive; it is only meant to give a handy overview of the cloud market.
If you want to try out cloud computing as a demo, many vendors provide a free cloud-computing service. Here is a link to one such vendor, RightScale. Another is CloudSigma.
Further Readings
For more about the types of clouds in cloud computing, read my other article.
A complete list of cloud platform providers is maintained here; refer to it for the full list of providers.
Also, as nothing comes for free :), one would like to know how much Windows Azure will cost. For a complete, detailed price list for the various Windows Azure services, refer to the pricing page.
Furthermore, many friends have asked whether Windows Azure can support Java applications too. The answer is YES, which is good news for Java developers. The following image makes it clearer:
Windows Azure supports Java applications too; for more, refer to this MSDN starter kit. Also refer to an open-source project named windowsazure4j, which provides a software development kit for Windows Azure and Windows Azure Storage for Java.
For an example of an implementation of cloud computing by Google, read the articles on Google Cloud Print and Google Cloud Connect. For Amazon's implementation, read the Amazon Cloud Drive and Player article. Also read my other article on Cloud Computing and Open Source.
Keep me updated with your views and thoughts on the topic of cloud computing and .NET.