
Open source Linux clustering

Just 751,161,600 seconds after the PC launch, they've come a long way


Today, it is exactly 23 years, nine months and 20 days; or 8,694 days; or 751,161,600 seconds; or 12,519,360 minutes; or 208,656 hours; or exactly 1,242 weeks since the launch of the original IBM PC on 12 August 1981.

A wonderful online service told us so. This site provides a number of almost-useful calculators that determine such timely things as the duration between two dates, or when alternative birthdays (such as the day you turn 1 billion seconds old) will occur.

Other than providing you with yet another site on which to waste your highly valuable time when you should be doing far more productive things, we bring this up as a fairly thin, albeit not completely uninteresting, way for us to note how far PCs have come in the short time since their launch.

What brought this ooh-ah moment home for us was receiving a fantastic new book titled The Linux Enterprise Cluster by Karl Kopper.

The Linux Enterprise Cluster is a how-to book and explains how to convert two or more PCs into a high-reliability, high-availability cluster based on Linux and inexpensive hardware using free and mainly open source software - what would have been an unthinkable configuration back when mainframes ruled the earth.

The book starts by exploring what is meant when we talk about a "cluster" and offers the definition of a system that can be used as "a single computing resource" using "a local computing system comprising a set of independent computers and a network interconnecting them."

Key to the whole concept is that a cluster must not have a single point of failure. Should any of the individual computers in the cluster (the "nodes") fail, there must not be a failure of any service provided by the cluster. This means that any node in the cluster can fail and be rebooted without users of the cluster being aware of the events.
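The idea can be sketched in a few lines. This is purely illustrative (the function and node names are invented, not from the book): a dispatcher routes each request to the first node that passes a health check, so a crashed node is silently skipped and users of the cluster never notice the failure.

```python
# Hypothetical sketch of transparent failover: route to the first
# node that passes a health check, skipping any failed nodes.

def pick_node(nodes, is_healthy):
    """Return the first healthy node, or None if the whole cluster is down."""
    for node in nodes:
        if is_healthy(node):
            return node
    return None

# Simulate a three-node cluster in which node1 has just crashed.
cluster = ["node1", "node2", "node3"]
alive = {"node1": False, "node2": True, "node3": True}

chosen = pick_node(cluster, lambda n: alive[n])  # node1 is skipped
```

A real cluster manager does the same thing continuously (and also reboots or fences the dead node), but the routing decision reduces to this pattern.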

This leads to the four basic properties of a cluster, all of which are about what we could quite reasonably call "transparency":

Users accessing cluster services don't know that they are using a cluster.

Nodes that comprise the cluster don't need to be aware that they are part of a cluster.

Applications running on nodes don't need to know they are running in a cluster environment.

Servers that are not part of the cluster don't need to know when they are providing services to nodes in a cluster.

The basic architectural elements of a cluster are a load balancer, shared data storage and output devices. The load balancer sits between the nodes and the users and distributes the incoming workload to the node services. The shared data storage must support lock arbitration to ensure exclusive access for each process to items (files, blocks or bytes, as required) in the file system. The final basic architectural element, output devices, covers printers, fax lines, and so on.
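The lock-arbitration requirement is worth making concrete. The following is a minimal sketch using POSIX advisory locks via Python's standard `fcntl.flock`; on a real cluster the file would live on the shared filesystem and locking would be arbitrated by its lock manager, whereas here we only demonstrate the exclusive-access pattern on a local file (the helper name is our own invention).

```python
# Sketch of lock arbitration: hold an exclusive advisory lock on a
# file while working on it, so concurrent writers cannot interleave.
import fcntl
import os
import tempfile

def with_exclusive_lock(path, work):
    """Run `work(f)` while holding an exclusive lock on `path`."""
    with open(path, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)      # block until we own the lock
        try:
            return work(f)
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # release for the next process

# Demonstration: two sequential writers append without interleaving.
fd, path = tempfile.mkstemp()
os.close(fd)
with_exclusive_lock(path, lambda f: f.write("node1 was here\n"))
with_exclusive_lock(path, lambda f: f.write("node2 was here\n"))
contents = open(path).read()
```

Advisory locks only protect processes that cooperate by taking the lock, which is exactly the contract a cluster filesystem's lock manager enforces for every node.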

To manage a cluster, we can add one more optional architectural element: a cluster node manager. The cluster node manager can provide an application licence service, a centralised user database and a performance-monitoring console.

Building a true enterprise-class cluster system is obviously quite a complex and challenging task. The book's approach is to use a number of readily available subsystems. These subsystems include server data synchronisation using the rsync package; failover management using the open source Heartbeat software, which includes Stonith (which stands for "Shoot The Other Node In The Head") to ensure a failed system is really dead; the Linux Virtual Server project kernel patches to enable load balancing; and the Ganglia package for collecting and displaying node and cluster performance statistics.
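To give a flavour of the failover piece, here is a minimal sketch of what a two-node Heartbeat (v1-style) configuration might look like. The hostnames, address and interface are invented for illustration, and a real setup also needs an authkeys file; consult the book or the Heartbeat documentation for the full picture.

```
# /etc/ha.d/ha.cf -- hypothetical two-node Heartbeat configuration
keepalive 2          # seconds between heartbeat packets
deadtime 30          # declare a peer dead after 30 silent seconds
bcast eth1           # send heartbeats over a dedicated interface
auto_failback on     # return resources when the primary recovers
node primary.example.com
node backup.example.com

# /etc/ha.d/haresources -- the primary owns the service IP and web server
primary.example.com 192.168.1.100 httpd
```

When the primary stops answering heartbeats for `deadtime` seconds, the backup takes over the service address and starts the listed resources; Stonith can be configured on top of this to power-cycle the failed node before takeover.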

This book is fascinating, and while it is quite technical in places, it also explains the topics clearly enough for those not quite so familiar with Linux to develop an understanding of what a cluster is.

Over the next week or two, we'll look at some of these subsystems and how they work. Maybe we'll even try to get a test cluster running under VMware. Will the fun never end?

