Cluster And Its Concepts
2013-12-13 by Neha Patel


In today's rapidly changing technological world, global demand for computing is growing even faster than the technology itself. Often, depending on the level of computation required, the cost-benefit equation of a single powerful machine becomes unworkable; choosing a cluster instead delivers scalability, availability, and high throughput at a relatively moderate price, and thereby resolves that equation. This may sound like great news, but in reality it is simply another classic of computing.

A cluster is basically a system that joins two or more computers to work together. In this architecture, each computer that is part of the cluster is called a node, and regardless of the number of interconnected machines, they should appear to the user (or to the system that needs the processing or feature) as a single computer. The nodes do not all need identical hardware, but it is important that they run the same OS, ensuring that the management software has full control of the group on all machines.

The story of the cluster began with the idea of connecting mainframes in the baby-boomer era of the 1960s. The forward thinking came from the powerful IBM, a company that already dominated much of the technology field at the time and that needed a solution offering parallelism while also being highly marketable.

By the mid-1980s, trends such as high-performance microprocessors had emerged, and systems of interconnected machines began to gain real traction. One highlighted trend was the need for processing on a larger scale, as in scientific and commercial applications; this was held back by the exorbitant cost of, and difficult access to, a single "supercomputer".


It was in the following decade that two researchers at NASA, Donald Becker and Thomas Sterling, created the draft of what would become the cluster model for years to come, going against the prevailing concept of using single powerful machines. The initial prototype was built from personal computers and had 16 DX4 processors connected by two coupled Ethernet channels. Its impressive success was guaranteed, even though at the time it was used only by academics and NASA research centers. This project was called Beowulf.

Clusters are divided into three types: High Availability (HA, also known as failover), Load Balancing, and the combined cluster, which provides both High Availability and Load Balancing.
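The two basic strategies above can be sketched in a few lines. The following is a minimal illustration, with hypothetical node names, of round-robin load balancing combined with failover (routing around a node marked unhealthy); real clusters delegate this to dedicated software such as HAProxy or Linux-HA rather than application code.

```python
import itertools

class Cluster:
    """Toy model: round-robin load balancing plus failover."""

    def __init__(self, nodes):
        self.nodes = list(nodes)           # all nodes in the cluster
        self.healthy = set(self.nodes)     # nodes currently passing health checks
        self._rr = itertools.cycle(self.nodes)

    def mark_down(self, node):
        """Failover (High Availability): stop routing to a failed node."""
        self.healthy.discard(node)

    def mark_up(self, node):
        """Re-admit a recovered node."""
        self.healthy.add(node)

    def pick(self):
        """Load balancing: round-robin over the healthy nodes only."""
        if not self.healthy:
            raise RuntimeError("no healthy nodes")
        while True:
            node = next(self._rr)
            if node in self.healthy:
                return node

cluster = Cluster(["node-a", "node-b", "node-c"])
print([cluster.pick() for _ in range(3)])  # requests rotate across all nodes
cluster.mark_down("node-b")                # simulate a node failure
print([cluster.pick() for _ in range(4)])  # requests now skip the failed node
```

To the client, `pick()` behaves like a single computer: requests keep being served even when a node fails, which is exactly the "seen as a single machine" property described earlier.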

The most discussed issue since studies on clusters began is the debate: supercomputer versus a cluster of simpler machines. Everyone has their own opinion. What matters most is using the cluster for scalability and availability, not tying ourselves to a single piece of hardware, but sharing, aggregating, and selecting computing resources of all kinds.


Neha Patel

Neha is an expert author who loves sharing useful tips on how to choose the best web solutions company in India, as well as tips and techniques on SEO and SEM on her official blog, and writes about English cream golden retrievers on her personal blog.

View Neha Patel's profile for more
