Hosting a Hot Site: How Load Balancing Can Keep Data Flowing Fast
Load Balancing and Web Hosting: How This Distribution Technique Helps Keep Your Site Up and Running
Load balancing in itself is a generic term; it can be used in any context from construction to computing. This technique has been used by mission-critical service providers and communication networks for decades. As the web exploded onto the scene, load balancing was used more and more often by a growing number of information and service providers to accelerate throughput, minimize response time and increase security.
The dirty work of load balancing, at least as far as web hosts are concerned, is typically taken care of by multi-layer switches and DNS servers. Multi-layer switches are essentially sophisticated routers that can route connections based on a much wider variety of criteria than a traditional router can. Typical routers generally use a CPU and software to route traffic, while Layer 3 and multi-layer switches use an ASIC (application-specific integrated circuit), achieving much faster routing. Layer 3 switches route solely on network-layer information such as IP addresses, while multi-layer switches can draw on information from OSI layers 4-7 to achieve a much greater degree of control over traffic.
Essentially, a multi-layer switch is the brain of the load-balancing process. It accepts incoming connections addressed to one public IP address and uses NAT to forward them to any number of servers that are hidden from the internet at large. Multi-layer switches can be used in simple web hosting, routing HTTPS connections, controlling traffic to VPNs or any other TCP/IP application. These switches can also perform other operations such as SSL encryption/decryption and certificate storage to take the load off servers that may be too busy.
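The dispatch step described above can be sketched in a few lines. This is a minimal round-robin illustration, not a real switch: the server addresses and the `route_connection` helper are hypothetical, and a hardware balancer would rewrite packet headers in silicon rather than run Python.

```python
import itertools

# Hypothetical pool of back-end servers hidden behind one public IP (VIP).
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_cycle = itertools.cycle(BACKENDS)

def route_connection(client_ip: str) -> str:
    """Pick the next back-end in round-robin order, roughly as a
    switch would when NAT-ing the public address to a private one."""
    backend = next(_cycle)
    print(f"{client_ip} -> VIP -> {backend}")
    return backend
```

Each new connection lands on the next server in the pool, so no single machine absorbs all the traffic while clients only ever see the one public address.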
An integral component in a load-balancing system is "stickiness": ensuring that a client stays connected to the same server so that session information is maintained and SSL connections are uninterrupted. There are many ways of maintaining stickiness, but a multi-layer switch can do this by simply directing all connections from one particular external IP address, that of the client, to the same server each time. This technique works as long as the server in question does not fail, in which case the session will be lost and the user may lose data in addition to the state of their connection. To mitigate this possibility, load-balanced session databases are used that provide a central clearinghouse, so the client can always be directed to a functioning server while the session information is kept intact. Rather than maintaining this database in one location, it can be distributed across multiple servers to improve performance and reliability. Other methods of maintaining a persistent connection include session tags, SSL IDs and cookies, all of which can be managed by a multi-layer switch as well as by web server software.
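Source-IP stickiness, the first technique mentioned above, amounts to a deterministic mapping from client address to server. A minimal sketch, assuming the same hypothetical back-end pool (a real balancer would also handle pool membership changes, which naive hashing does not):

```python
import hashlib

# Hypothetical back-end pool; in practice this comes from configuration.
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def sticky_backend(client_ip: str) -> str:
    """Hash the client's IP to a back-end index, so every connection
    from that address lands on the same server (source-IP stickiness)."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(BACKENDS)
    return BACKENDS[index]
```

Because the hash is deterministic, repeat visits preserve session state without any shared storage; the trade-off, as the paragraph notes, is that the session is lost if that one server fails.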
Load balancing offers a litany of benefits when used as a tool for web hosting, including:
- Security: Load-balancing switches and software are designed to thwart a number of common attacks perpetrated on websites, such as denial-of-service (DoS) attacks and threats to the kernel of the operating system on each server. The very nature of load balancing shields the back-end servers from the internet at large, while software and switches use specific procedures such as delayed binding, which hides the client from the back-end servers until the TCP handshake is complete, to mitigate DoS attacks.
- Scalability: Load balancing computing clusters can be easily scaled up or down depending on load and projected future traffic. If a site is growing rapidly, it's much easier to scale a load-balancing cluster to fit the growth than it would be to add separate servers.
- Distribution: Layer 4-7 switches can offload CPU-intensive tasks such as SSL encryption/decryption, freeing the servers to handle less intensive requests and improving response time. Some load balancers can also consolidate all HTTP connections into one socket for each back-end server, drastically improving throughput. Content-aware switching allows some Layer 4-7 switches to route traffic depending on the application being requested, so processor or bandwidth-heavy requests can be sent to appropriate servers.
- Customization: Load balancers can also act as proxies or filters, adding custom error messages or filtering objectionable content as it passes through the system.
- Reliability: The ability of a load balancer to communicate with the application layer allows it to detect application faults and route traffic to alternate servers while the failing application is restarted.
Those are just a few of the benefits. For more complex situations, some switches can run custom scripts that tell the system how to handle traffic in unusual circumstances.
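The reliability point above, detecting an application fault and routing around it, can be sketched as a simple health-aware dispatcher. The health table and `route_with_failover` helper are illustrative assumptions; a real balancer would actively probe each server rather than consult a static dictionary.

```python
# Hypothetical health status; a real balancer probes servers periodically.
HEALTH = {"10.0.0.11": True, "10.0.0.12": False, "10.0.0.13": True}
BACKENDS = list(HEALTH)

def route_with_failover(preferred: str) -> str:
    """Send traffic to the preferred server if it is healthy,
    otherwise fail over to the first healthy alternative."""
    if HEALTH.get(preferred):
        return preferred
    for server in BACKENDS:
        if HEALTH[server]:
            return server
    raise RuntimeError("no healthy back-ends available")
```

Here a request bound for the failing server is silently redirected, so clients see a working site while the faulty application is restarted behind the scenes.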
You may be thinking that load balancing is far too costly for any but the most popular websites. Though it is one of the more expensive hosting options, shared-server load balancing lets smaller but still busy sites reap the benefits while, fittingly, balancing the cost across several sites. These configurations offer all the benefits of a dedicated load-balancing system and are particularly useful to corporations requiring secure VPNs, or to websites that are growing but do not yet meet the threshold for dedicated servers.
As bandwidth costs inevitably fall and dedicated multi-layer switches become more affordable, load-balancing systems will become available to a wider and wider segment of the web community.
Roko Nastic is a writer and editor at WebmasterFormat. He enjoys helping other webmasters and website owners succeed in creating better, faster and more profitable websites.