
Five ways to improve website uptime

Avoid embarrassing website outages


Jason Mayes, a senior engineer at XMOS, says that working with Akamai Technologies played a major role in alleviating pressure when XMOS started offering online videos. XMOS would post a 200MB video, and thousands of users would attempt to access it at the same time, stressing XMOS's 10Mbit/sec. connection. "Videos made our site slower for regular users. This was a major concern, as a Web site reflects a company's ethos," he says.

After implementing Akamai's CDN, Mayes says, all page-delivery times, including those for text and video, dropped from 17 seconds to 5 seconds in Asia, where many of XMOS's site visitors originate. He says he might also look at redesigning his company's content management system (CMS) to make it faster now that the main bottlenecks have been fixed.

3. Use more and better caching

Another common tactic for dealing with Internet problems is to cache content. This technique is becoming more common, according to Pieter Poll, chief technology officer at Qwest Communications International Inc. in Denver, because it enables a site to scale up more easily when users flock to popular content, such as a new episode of CSI: Miami.

Caching on the Internet works just like memory caching in your computer -- holding the most popular content in a cached storage allocation on the server for fast access. (A CDN is also a type of cache, in that popular content is delivered from a separate node.) Staten says tier-caching products such as GigaSpaces, Oracle Coherence and Memcached help cache content within the Web site infrastructure by making sure that content in a database is accessible at all times, even during the worst traffic spikes.

In fact, caching is one of the main ways that sites like GDGT, Twitter, Facebook and others deal with surges -- the technique is perfect for handling small chunks of data that change often, such as popular articles, forum posts or news items.
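The pattern these sites rely on is usually "cache-aside": check an in-memory store first and hit the database only on a miss. Here is a minimal, in-process sketch of that idea in Python; the class, key names and TTL value are illustrative assumptions, not details from any of the sites mentioned (production systems would use a shared cache such as Memcached rather than a local dictionary).

```python
import time

class TTLCache:
    """Hold popular items in memory, expiring each after ttl seconds."""
    def __init__(self, ttl=60):
        self.ttl = ttl
        self._store = {}  # key -> (expiry_time, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires, value = entry
        if time.time() > expires:   # stale entry: evict and report a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.time() + self.ttl, value)

def get_article(cache, article_id, load_from_db):
    """Cache-aside: try the cache first; fall back to the database on a miss."""
    cached = cache.get(article_id)
    if cached is not None:
        return cached
    value = load_from_db(article_id)   # the expensive call happens only on a miss
    cache.set(article_id, value)
    return value
```

With a 60-second TTL, a popular article hits the database at most once a minute no matter how many readers request it -- which is exactly why the technique suits small chunks of data that change often.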

Yet caching is still not as widespread as it could be. GDGT uses it to solve ongoing congestion -- although it wasn't enough to keep the site running at launch. But many sites that are just gaining traction, such as the video delivery site Crackle, are still tweaking cache settings.

Poll and other industry observers note that Web usage increases at a rate of about 42% each year, but broadband has not increased at that speed, so caching and CDNs are increasingly important.

4. Use better programming methods

One emerging method of dealing with traffic problems is to program using techniques that can withstand spikes. Brian Sutherland, a managing partner at Vanguardistas, a company that provides a scaling architecture for sites such as that of US News and World Report, says that the vast majority of Web sites are poorly programmed and can't withstand unanticipated traffic spikes.

"There's a large engineering effort required to make sure that a Web site is capable of withstanding a large and sudden load," says Sutherland.

A few examples of things that website software developers rarely do, according to Sutherland, include regularly benchmarking a representative copy of their servers against a simulated load; having an experienced developer review and approve every software change; and designing for the target performance level right from the start. "When you really want your Web site to stay up, you have to do these things. Twitter grew faster than anticipated and brought in a company after the fact to improve its uptime, which has worked."
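The first practice Sutherland names -- regularly benchmarking a representative copy of the servers against a simulated load -- can be sketched in a few lines. This is a hypothetical stand-in, not Vanguardistas' tooling: the handler, concurrency level and request count are assumptions, and a real test would drive a staging copy of the site with a dedicated load tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stand-in for one request to a representative copy of the site."""
    time.sleep(0.001)   # pretend the page takes about 1 ms to render
    return 200

def run_load_test(handler, concurrency=50, total_requests=500):
    """Fire requests from many threads at once; report successes and p95 latency."""
    latencies = []
    def timed(i):
        start = time.perf_counter()
        status = handler(i)
        latencies.append(time.perf_counter() - start)
        return status
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(timed, range(total_requests)))
    ok = statuses.count(200)
    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95)]
    return ok, p95

ok, p95 = run_load_test(handle_request)
print(f"{ok} successful requests, p95 latency {p95 * 1000:.1f} ms")
```

Run routinely (for example, before each release), a test like this catches the software change that quietly doubles page-render time before a real traffic spike does.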

According to Sutherland, these techniques -- which mirror methods used in enterprise computing -- might have to wait until Web expansion slows down a bit, because developers tend to put speed ahead of reliability. He points out that banking Web sites are good examples of development initiatives that emphasised reliability from the outset, likely because they were subject to heavy federal regulation and faced customer demands for reliable financial transactions.
