Windows Azure Load Balancing: What to Know

Windows Azure promises cloud flexibility once IT overcomes Azure’s built-in load balancing constraints.

On April 16, 2013, Microsoft opened another chapter in its cloud offering by announcing its Windows Azure infrastructure-as-a-service (IaaS) platform, designed and priced to compete directly with Amazon Web Services (AWS). With the release of this new IaaS platform, Microsoft claims to give customers the ability to move applications easily from traditional on-premises environments into the cloud.

This offering has opened up many exciting possibilities for customers. They can now deploy new virtual machines quickly or migrate existing ones into Windows Azure. What's even more promising is the prospect of not having to change the way the applications hosted on these virtual machines work: no application rewriting, no code updates, and no change to how transactional data is stored.

Virtually all web applications require a load-balancing mechanism for scale-out and high availability. To meet this requirement, Windows Azure provides a built-in load balancing layer in its architecture. In fact, the only way to reach virtual machines created in Azure from a client machine on the Internet is to send the traffic through the built-in load balancer.

[Can Windows Azure help your small business move to the cloud? Read Microsoft Charges Into Enterprise Cloud Market.]

But incoming connections are load balanced at Layer 4 of the OSI model, which means there is no intelligent awareness of the higher-layer application being load balanced; traffic is blindly passed through. Basic round-robin scheduling (much like round-robin DNS) is the only traffic distribution method available, which often requires specific considerations in the overall application design to deliver the best user experience. Finally, many modern web applications have de facto requirements for persistence and rule-based request steering to keep clients talking to the same server instance for the full length of their digital conversation. These concepts are what keep a shopping cart from accidentally being emptied when using online marketplaces. Unfortunately, the Windows Azure load balancer currently doesn't support any form of persistence or advanced rule-based request steering.
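To make the distinction concrete, here is a minimal Python sketch (backend names are hypothetical; this is an illustration of the concepts, not Azure's actual implementation) contrasting blind round-robin distribution with the kind of hash-based persistence the Azure load balancer lacks:

```python
# Sketch: round-robin distribution vs. sticky (persistent) distribution.
# BACKENDS are hypothetical server instances behind one endpoint.
from itertools import cycle

BACKENDS = ["vm-0", "vm-1", "vm-2"]

# Round robin: every request goes to the next server in turn,
# regardless of which client sent it.
_rr = cycle(BACKENDS)
def round_robin(client_ip: str) -> str:
    return next(_rr)

# Source-address persistence: the same client always lands on the same
# server, so session state held there (e.g. a shopping cart) survives.
def sticky(client_ip: str) -> str:
    return BACKENDS[hash(client_ip) % len(BACKENDS)]

if __name__ == "__main__":
    for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.1"]:
        print(ip, "->", round_robin(ip), "|", sticky(ip))
```

In the round-robin case a returning client can land on any instance, so server-side session state may be lost; the sticky variant always maps a given client address to the same backend for the life of the process.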

These limitations make the original goal of moving applications seamlessly from the on-premises datacenter to the cloud difficult to achieve, and they lead to additional architectural considerations when designing new large-scale production application deployments.

Now, in its defense, Microsoft has stated that as a general rule, "applications must be architected for a cloud environment": simple, stateless at the network layer, and with transactional awareness across application instances. These are fine ideals when planning new application deployments, but what about all of the applications that already exist in a customer's datacenter? These legacy applications may have complex underlying components that can't easily accept a facelift and, given their age, may not even have been developed by the customer.

The reality is that many applications require advanced traffic manipulation so that requests are handled on a session-specific basis and directed to the right server instance. Updating legacy application architecture generally isn't easy, and with the numerous day-to-day demands that application admin and engineering teams face, updates often just aren't in the cards.

Is there a better way? Absolutely. There are products available from Microsoft partners that deliver the functionality missing from the native Azure platform, such as Layer 7 load balancing, application awareness, advanced traffic distribution, session persistence, and application health checking. These products offer additional ADC functionality that adds tremendous value, such as application protection with an advanced Snort-based intrusion prevention system engine, caching and compression for published services, SSL acceleration, and the ability to share a single endpoint to publish multiple virtual services.
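As a rough illustration of what application health checking adds (function and path names here are invented, not any vendor's API): a transport-layer probe only confirms that a TCP port answers, while an application-aware check verifies that the app itself returns a healthy HTTP response and drops failing backends from rotation:

```python
# Hypothetical sketch of an ADC-style Layer 7 health check: ask the
# application for an HTTP 200, rather than merely checking the port.
from http.client import HTTPConnection

def healthy(host: str, port: int, path: str = "/health") -> bool:
    """True only if the application answers 200 on its health page."""
    try:
        conn = HTTPConnection(host, port, timeout=2)
        conn.request("GET", path)
        ok = conn.getresponse().status == 200
        conn.close()
        return ok
    except OSError:
        return False  # refused, unreachable, or timed out

def live_pool(backends):
    """Keep only the backends whose application layer is responding."""
    return [(h, p) for (h, p) in backends if healthy(h, p)]
```

The point is the Layer 7 behavior: a web server can keep accepting TCP connections while the application behind it serves errors, and only a check that inspects the response itself will take that instance out of the pool.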

Let's take an example: a web application published through a single SSL endpoint, designed for access by multiple distinct client pools that are automatically allocated dedicated resources and environmental components based on host headers. While this may sound like an exotic edge case, it's actually a fairly common architecture, and it can be a daunting, if not impossible, task to deploy cleanly in Azure without re-architecting the application. Microsoft partners offer application delivery controller (ADC) products that let you set up a single cloud service in Azure; create all the required virtual machines, application resources, and advanced load balancing rules; and publish them without breaking a sweat. This gives you a single point of entry into your IT environment, simplifies infrastructure management, and decreases overall cost by consolidating the number of endpoints needed.
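A minimal sketch of the host-header steering just described (hostnames and pool names are invented for illustration; this is not any specific ADC's configuration): one shared endpoint routes each request to a dedicated backend pool based on its Host header, with round-robin inside each pool:

```python
# Hypothetical tenant pools sharing one published endpoint; each Host
# header maps to its own set of backend VMs.
from itertools import cycle

POOLS = {
    "shop.contoso.com":  cycle(["shop-vm-0", "shop-vm-1"]),
    "admin.contoso.com": cycle(["admin-vm-0"]),
}

def route(host_header: str):
    """Steer a request to its tenant's pool by Host header; None if unknown."""
    pool = POOLS.get(host_header.lower())
    return next(pool) if pool is not None else None
```

A Layer 4 balancer cannot make this decision, because the Host header lives inside the HTTP payload, and on an SSL endpoint inside the encrypted stream, which is why an ADC that terminates SSL and inspects requests is needed to publish multiple services behind one endpoint.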

Windows Azure provides customers with flexible deployment options for their applications, but there are still limitations that must be considered when deciding whether to migrate to this platform. Microsoft partners and their innovative ADC products can help drive more adoption for the Windows Azure ecosystem.

Want to relegate cloud software to edge apps or smaller businesses? No way. Also in the new, all-digital Cloud Software: Where Next? special issue of InformationWeek: The tech industry is rife with over-the-top, groundless predictions and estimates. (Free registration required.)
