If you aren’t familiar with cloud computing, read this article here:
Essentially, cloud computing is a data center “pattern” in which identically configured servers (nodes) are linked together to form a highly scalable pool of hardware resources; new servers can easily be added at will, with minimal (or automated) configuration, to scale the system. On top of this hardware runs a fully virtualized environment in which virtual machines are programmatically provisioned or decommissioned on demand. This lets you design software applications that scale themselves, or that can easily be scaled through a control panel (by adding virtual hardware resources), in response to dynamically changing demand. That is my simplified understanding of cloud computing (sometimes also called grid computing, though in an application architecture context grid computing can mean other things). When a cloud computing environment is built, maintained, and sold to the public on a pay-per-resources-used basis, with metered billing for the hardware resource-hours you consume, that business model is called utility computing, reflecting its similarity to the way utility companies bill for metered use of power, water, gas, and so on.
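The provision-on-demand, pay-per-resource-hour idea above can be sketched as a toy model. To be clear, this is purely illustrative: the `CloudPool` class, its method names, and the hourly rate are all my own inventions, not any provider's actual API or pricing.

```python
from dataclasses import dataclass, field

RATE_PER_VM_HOUR = 0.10  # made-up metered rate, dollars per VM-hour


@dataclass
class CloudPool:
    """Toy model of a utility-computing pool: virtual machines come and go
    on demand, and you are billed only for the hours each one actually ran."""
    running: dict = field(default_factory=dict)  # vm_id -> hours accumulated
    billed_hours: float = 0.0                    # hours from decommissioned VMs

    def provision(self, vm_id: str) -> None:
        # Spin up a new virtual machine; no purchasing, unpacking, or racking.
        self.running[vm_id] = 0.0

    def tick(self, hours: float) -> None:
        # Advance the clock; every running VM accrues metered hours.
        for vm_id in self.running:
            self.running[vm_id] += hours

    def decommission(self, vm_id: str) -> float:
        # Tear the VM down; its accumulated hours move onto the bill.
        hours = self.running.pop(vm_id)
        self.billed_hours += hours
        return hours

    def invoice(self) -> float:
        # Pay only for what was actually used, running or already torn down.
        used = self.billed_hours + sum(self.running.values())
        return used * RATE_PER_VM_HOUR


pool = CloudPool()
pool.provision("web-1")
pool.tick(24)               # web-1 runs for a day
pool.provision("web-2")     # demand spikes; add a second server
pool.tick(6)
pool.decommission("web-2")  # spike over; tear it down after 6 hours
print(pool.invoice())       # (30 + 6) VM-hours at $0.10 each
```

The point of the sketch is the billing model: decommissioning "web-2" stops its meter, so the invoice covers exactly the hours used rather than a month of idle capacity.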
The advantages of this way of doing things, especially if you aren’t a monolithic enterprise with your own high-tech data centers and network engineers, are huge. Growing your applications, experimenting with different network or load balancing designs, and throwing up experimental or test servers for a few hours or a few days and then tearing them down when they are no longer needed are all immeasurably easier and cheaper under this utility model.
There are no hardware purchases to make, no time spent pricing out a server, unpacking it, racking it, etc. If you need a server for a few days, or even a few hours, to test a concept, do a proof of concept, or run a quick internal beta, you will only be charged for the time it was running. If your app spikes in usage seasonally or because of a promotion, you just add some more web servers (these services often come with load balancing mechanisms you can use) for the peak periods and remove them when things are back to normal. You don’t crowd your data center rack space with servers needed only some of the time, and you don’t pay for what you aren’t actually using.
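The add-servers-for-the-spike, remove-them-afterward pattern described above boils down to a simple sizing rule. This is a toy calculation, not any provider's load-balancing feature; the per-server capacity figure and the min/max bounds are made-up assumptions.

```python
import math


def servers_needed(requests_per_min: int,
                   capacity_per_server: int = 1000,  # assumed per-server throughput
                   min_servers: int = 1,
                   max_servers: int = 20) -> int:
    """How many identical web servers to keep behind the load balancer.

    Round capacity up so a partial server's worth of traffic still gets a
    whole server, then clamp to the floor/ceiling you're willing to pay for.
    """
    desired = math.ceil(requests_per_min / capacity_per_server)
    return max(min_servers, min(desired, max_servers))


print(servers_needed(250))   # normal traffic: 1 server
print(servers_needed(4500))  # promotion spike: 5 servers
print(servers_needed(0))     # idle: still keep the minimum of 1
```

With a utility provider, the output of a rule like this maps directly to provision/decommission calls (or clicks in a control panel); with purchased hardware, you would instead have to own enough rack space for the worst-case spike year-round.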
A precursor to utility computing, and still a useful and cost-effective option (compared to keeping in-house talent to maintain your infrastructure), is the standard dedicated hosting provider, which will rent you a server and bandwidth for a monthly fee. There isn’t the same flexibility, but at least you don’t have to deal with your own hardware and network infrastructure. It seems that several of the major players in this market are also offering the utility model, however, which sure feels like the future to me, especially for SMBs.
Utility Computing Providers that host Windows:
This looks like a good one to me, with flexible pre-paid and pay-as-you-go plans. I don’t know whether it offers an API, though, for managing your virtual environment from within your own code.
Here is a head-to-head price comparison of GoGrid and Amazon EC2. Looking good for GoGrid…
Also looks like a pretty nice system. This one DOES have a full API for controlling your virtual environment.
Another dedicated server hosting company getting into the utility market. Prices seem competitive with comprehensive features.
These guys are a little different: they host a unique virtual environment built specifically for hosting web applications. You don’t actually get virtual machines, but a completely virtualized “deployment space” for your web apps. Mosso is run by Rackspace, an established major dedicated server provider.
Traditional Dedicated Server Providers that host Windows:
There are a blue million companies that provide dedicated Windows servers. Here are three I have found that seem to have good reputations and competitive pricing.