Improving Server Performance
...Before you upgrade to a new server

Beracah Yankama
Director of StudentsReview (StudentsReview.com)
Founder of OmegaAds

Sunday, Sep 23, 2007

Time to Move?

You are here because your server is overloaded, and you are thinking that maybe it is about time to move to a new server.  Whether your virtual hosting provider has notified you that you are using too many resources, or your dedicated server is overloaded, the costs of a slow server are hampered growth, lost visitors, eventual crashes, and potential penalty fees from your virtual hosting provider.

Generally, we found that it is about time to move to a dedicated server when you start to hit 5,000-10,000 visits per day.  By this point, advertising should easily provide enough income to cover the cost of the bigger server (~$100/mo) if you run a content site, and more than enough if you run an e-commerce site.

Still, you may not be ready for a big, complex move, and are interested in the cheapest ways to increase performance that you can find.  The first (and probably the easiest) is to analyze your log files using the old-school “analog” program.  (Not Webalizer or Google Analytics).  Analog will tell you which files specifically make up the largest proportion of your web requests.  Most of the time these are static files: .jpg, .png, .gif, .js, or .css.  In other words, dead files.

Dead files are not the files your users come for (with some notable exceptions), but since web browsers request files in parallel, they tie up a number of connections while waiting for those files to download.  When traffic gets heavy, this PREVENTS new visitors from reaching your actual HTML content.

So the first trick is to move those image, JavaScript, and CSS files off onto another server dedicated to them (or your university web account, if you are a sneaky student).  The browser's parallel requests will fetch those files from the other server instead of stalling out on your content.  And since it is a small number of files being served (and cached) over and over, your little image/css/js server won't buckle under the load.
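In HTML terms, the move is just repointing your static references at the second box (the hostnames here are hypothetical):

```html
<!-- static.example.com is a hypothetical second server for dead files -->
<link rel="stylesheet" href="http://static.example.com/site.css">
<script src="http://static.example.com/menu.js"></script>
<img src="http://static.example.com/header.jpg" alt="header">
```

The HTML itself still comes from your main server; only the dead files come from elsewhere.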

After that, you might want to try several other tactics: compression, improving PHP speed, caching, and reducing MySQL inefficiencies.


Compression

When your server is overloaded, one of the things that gets overlooked is the benefit of page compression.  It seems counter-intuitive (who wants to load an already overloaded server down with more CPU work compressing pages?), but ironically it works.

The reason is this: every time a client or customer connects to your server, they initiate a request and open a connection.  While transferring data to your visitor, that connection stays open and consumes CPU and memory in either a busy-wait or blocked state; it periodically has to be checked to make sure it is not dead or blocked indefinitely.  When your users are on modems instead of DSL or cable (or fiber, you bastards), the connection stays open much longer than normal.

Since servers have a limit on the number of open connections (to prevent the server from dying just trying to monitor them all), all additional visitors get placed on a waiting queue, and eventually refused entirely.  We found that 20-30 modem users can block out potentially hundreds of regular users in just a few page requests, leading to a large waiting queue and overload.

What you want is to get pages and files off to the visitor as quickly as possible, freeing server resources from waiting mode.  Compression does that.  HTML files typically compress by a factor of 4-8, meaning your waiting server processes are freed 4-8 times faster.  The CPU cost of compression?  Generally nominal.

There are two ways to add compression quickly:
1.  In Apache 2, enable the mod_deflate module in httpd.conf.  (Google for how to do this; it's pretty easy.)
If you have a virtual host or shared server, this will most likely not be an option available to you.

2.  If you do not have access to your apache config file (like on a shared server), and if you run PHP 4 or above, create header and footer files for all of your pages, include them, and add the following php code.

<?php
// in site header (before any HTML or other output)
ob_start("ob_gzhandler");
?>

... site content ...

<?php
// in site footer (after all HTML content)
ob_end_flush();
?>

PHP manual: http://us3.php.net/manual/en/function.ob-gzhandler.php

PHP will then take care of compressing the page and sending the correct compressed-content headers to your visitor.
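Going back to option 1: the httpd.conf side looks roughly like this.  This is only a sketch; the module path and the exact MIME types are assumptions that vary by distribution.

```apache
# Load and enable mod_deflate (Apache 2.x)
LoadModule deflate_module modules/mod_deflate.so

# Compress the text formats that benefit; leave images alone,
# since .jpg/.png/.gif are already compressed.
AddOutputFilterByType DEFLATE text/html text/plain text/css application/x-javascript
```

Restart Apache afterward, and verify with your browser's headers (look for Content-Encoding: gzip).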


Improving PHP Speed

If you are certain that your PHP is running slowly and consuming the server, AND you are not on a shared server (or if you are, maybe you can make a request to your hosting provider), then install Zend Optimizer.  What's Zend?  Zend is, as is generally overlooked, the owner and maker of PHP.  Zend IS the PHP company.  They make a free optimizer that is easy to install and runs in the background, OPTIMIZING your PHP code as it executes.  Zend knows that certain code structures are faster than others and essentially rewrites your code into a faster variant.

Site Speedup (~4x on PHP code & loops)

Beyond that, you will want to consider bytecode caching as a mechanism for speeding page load and execution.  Normally, PHP has to compile your hand-written code into runnable form every time the page is requested; a bytecode cache saves the compiled PHP bytecode, which does not have to be compiled again until the page's code changes.

I recommend eAccelerator (http://eaccelerator.net/): (a) it's free, and (b) it works with Zend Optimizer.  Personally, we've gained an additional 30% of performance above Zend Optimizer with eAccelerator.

If you want a Zend product, the Zend Platform performs code caching as well (along with other features), but it is expensive.

Zend essentially makes your code execute faster, but eAccelerator makes it load faster. 


Caching

Slightly more expensive is the idea of caching your pages.  This is what you do when:

1.  You know that you are doing CPU intensive things on every page load. 
2.  Not much is changing between each visitor — even if the page is dynamic.

Caching can be difficult to implement, and is generally done in conjunction with a database to relieve load on it: for instance, when there are lots of reads from the database but only a few writes that ever change anything.  The cost is more disk usage, but the payoff can be GREATLY reduced db server load.  In our setup, caching took a 100% overloaded database and webserver down to a sub-20% usage level.

Basically it works like this: Whenever a dynamic page is requested, you check to see if a plain HTML version of the page is cached in your tmp directory.  If not, you make the page just like normal, send it to the user, but ALSO save a copy in the tmp directory.  When you find it next time, you just return the page directly, with no further database connection or processing. 
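That flow can be sketched in a few lines of PHP.  The function and directory names here are my own, not from any library, and $generate stands in for whatever normally builds the page (db queries, templating, and so on):

```php
<?php
// Minimal file-based page cache sketch.
function cached_page($cacheDir, $key, $ttl, $generate) {
    $file = $cacheDir . '/' . md5($key) . '.html';

    // Cache hit: return the saved HTML, with no db work at all.
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return file_get_contents($file);
    }

    // Cache miss: build the page normally, save a plain-HTML copy
    // for next time, then return it.
    $html = $generate();
    file_put_contents($file, $html);
    return $html;
}
```

Your write path (or a cron job) should delete the cached file whenever the underlying data changes, so visitors never see stale pages for long.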


MySQL

While we are certainly no MySQL experts, we'd like to share some of the effective strategies that we've learned in active use to keep our servers running.

Lazy Connection Strategy

One thing that I found helped a lot is not connecting to the database until the very last moment.  Most people use a standard include file that connects to their database in a header, but if not all pages actively use it, you end up with a lot of open, unused connections sitting blocked and waiting, stalling out real db connections.

What you can do is use a lazy connection strategy: create a db_query() function in place of the standard mysql_query() which first checks a flag for an existing connection to the db, and connects only if that flag is false.  In other words, the db doesn't get connected to until an actual query is ready to run, leaving more database handles free to answer real requests (not to mention saving the connection overhead).
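Here is a minimal sketch of that idea on modern PHP.  The names are mine, the credentials are placeholders, and the connect step is passed in as a callback, e.g. function () { return mysqli_connect('localhost', 'user', 'pass', 'mydb'); }, so the real connection code stays in one place:

```php
<?php
// Lazy-connection sketch: returns a query function that opens the
// database connection only when the first real query runs, not at
// include time.
function make_lazy_db($connect) {
    $handle = null;
    return function ($sql) use (&$handle, $connect) {
        // First actual query: connect now.
        if ($handle === null) {
            $handle = $connect();
        }
        return $handle->query($sql);
    };
}
```

Including this in a site-wide header costs nothing; pages that never touch the database never open a connection.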


Indexes, Indexes, Indexes.  Everyone always talks about indexes, as if they know something that you don't.  Well, guess what: they're right.  I never wanted to learn anything about indexes, thinking it was probably a waste of time, but one day I sat down, spent 3 hrs, and it was worth it.  As a basic idea, think of a (albeit boring) book that contains the phrase “likelihood maximization”.  There are two ways you can find that word and section: either you flip through all 300 pages one at a time, or you look in the index.  Guess which is faster?  Now imagine being asked to do the same thing with different words 20 times a second.  The larger the book, the slower and more heinous the process.  The same is true of your database.  If it only has 5 rows in it, no big deal; that's like looking at 5 words.  But once there are hundreds of rows, an index starts to make a difference.

A mistake some people make in their MySQL table creation is to create an “auto increment” field and think of it as though it were an index, but fail to set it as the primary key.  Guess what: the database isn't smart; it has to scan through all the numbered fields to find “#72”.
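As a sketch (the table and column names here are made up), the fix is to declare the auto-increment column as the PRIMARY KEY, and to index the columns your queries actually filter on:

```sql
-- the AUTO_INCREMENT column should also be the PRIMARY KEY,
-- so lookups by id use the index instead of a full scan
CREATE TABLE reviews (
    id        INT NOT NULL AUTO_INCREMENT,
    school_id INT NOT NULL,
    body      TEXT,
    PRIMARY KEY (id)
);

-- index a column you search on frequently
ALTER TABLE reviews ADD INDEX idx_school (school_id);
```

Run EXPLAIN on your slow queries to see whether they are actually using an index.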

Here's the article I used to increase our database query performance:

Optimizing MySQL: Queries and Indexes

Final Words: Things to check on

check your disk speed (hdparm)
check your disk usage (iostat)
check your user quotas

Many Linux server boxes ship with IDE drives that are not operating in 32-bit mode, much less UDMA modes.  Most standard Linux installs I've run across activate the IDE drives in 16-bit, non-DMA mode for safety, and leave it to the user to turn the faster modes on.  None of my hosting companies have *ever* turned it on.  I had to.

/sbin/hdparm -t /dev/hda may show you transferring data in the 1-5MB/s range, instead of the 33-100MB/s range.

/sbin/hdparm /dev/hda

multcount = 16 (on)
I/O support = 3 (32-bit w/sync)
unmaskirq = 1 (on)
using_dma = 1 (on)
keepsettings = 0 (off)
nowerr = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 4982/255/63, sectors = 80043264, start = 0

Look up hdparm on Google to find out more.

Disk access speeds give you HUGE boosts in performance, especially on a highly trafficked site.  Imagine the database or the web server just sitting there waiting for the disk to respond with the web page or some database field.  Our real performance improvement was 2X.

The cost of upgrading, while it seems trivial, is actually huge: maintaining an additional server, moving over, monitoring, and hosting costs all accumulate.  If you can avoid that time and cost, you should.

The steps above reduced our server usage from 100%, with 100% database and disk usage down to 25%.  We had 260 apache threads running constantly, maxing out at 35,000 pages per day.  Now we are down to 36 threads, and pages served per day is up to 70,000.  All on the same server.  You can do the same.

Beracah Yankama
Director of StudentsReview (StudentsReview.com)
Founder of OmegaAds


© 2005-2007 Omega Advertising & Beracah Yankama