Server Economics

Not every server needs 20 Intel Xeon cores, 128GB of RAM or 36TB of RAID storage.  There are certainly applications that deserve a dedicated server with two or four cores, a few gigabytes of RAM and a small boot drive.  But are there cases where a little more planning and expense before deployment extends the life of a server – or adds a second life to a server?  Yes, even with current economic realities.

Modern software development strategies seem to take for granted that cores, clock speed and, especially, RAM are nearly unlimited resources and that using more is always OK.  The effect is that a system that starts out properly sized in those resources is almost always under-powered before a common three-year life-cycle has run out.  The difference between 8GB and 32GB of RAM in late 2013 is less than $400.  Maybe the server will never need 32GB in its first life, but what is the cost and inconvenience of adding RAM later when you really need 12GB or 16GB?
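
To put rough numbers on that question, here is a back-of-the-envelope sketch in Python; every dollar figure and hour in it is an assumption for illustration, not a quote:

    # Back-of-the-envelope: buy 32GB up front vs. add RAM in the field later.
    # Every figure here is an illustrative assumption, not a quote.
    upfront_premium = 400   # extra cost of 32GB over 8GB at build time (late 2013)
    ram_kit_later = 300     # memory purchased separately down the road
    tech_hours = 2          # schedule a window, shut down, open the case, install, test
    hourly_rate = 75        # loaded cost of a technician's hour
    downtime_cost = 200     # value lost while the server is offline

    later_cost = ram_kit_later + tech_hours * hourly_rate + downtime_cost
    print(f"Up-front premium: ${upfront_premium}")   # $400
    print(f"Field upgrade:    ${later_cost}")        # $650

Change the assumptions however you like; the field upgrade rarely comes out cheaper once labor and downtime are counted.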

Processors cost more, but even there, the difference between a single 4-core processor and a pair of 6-core processors is just over $1000.  There are surely some embedded scenarios where $1000 of extra cost is a deal breaker.  But in most “general purpose” applications it can be justified much more easily.  And how often, during a server’s first life, do users complain that it has too much horsepower?

What about storage?  Start with the boot drive.  Some users will say that the application is not so mission-critical that it needs mirrored disks.  If the final decision is still for a single drive, then it really must be a solid-state drive (SSD) and not a spinning disk.  An 80GB SSD costs about the same as a 250GB disk, so if you just need a boot drive, there is no question.  Mirrored 80GB SSDs cost only about $250.  No organization can replace and reload a boot drive for less than that, so mirrored SSDs should really be the standard starting point for any server.
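
A quick sketch of that claim, with the labor and outage figures being assumptions rather than measured costs:

    # What does it actually cost to replace and reload a single boot drive?
    # Every figure is an illustrative assumption.
    replacement_drive = 100   # a new boot drive
    tech_hours = 4            # reinstall the OS, patch, restore configuration
    hourly_rate = 75          # loaded cost of a technician's hour
    outage_cost = 500         # whatever the application was worth while it was down

    reload_cost = replacement_drive + tech_hours * hourly_rate + outage_cost
    print(f"Replace and reload a single drive: ${reload_cost}")  # $900
    print("Mirrored 80GB SSDs, up front:       $250")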

Some servers need more storage.  How often is a two-year-old server, still in its first life, described as having “too much storage capacity”?  It may happen, but I have never heard that sentiment.  Capacity demands always grow, and usually grow more than expected.  Adding a couple of drives to the storage array, or bumping up to the next disk size, is almost always a small incremental expense.

This blog has already covered in some detail the subject of storage capacity versus storage performance.  If random read/write performance is important, then the primary storage should be SSD.  Matching the performance of 24 SSDs would take thousands of spinning disks.  If the requirement is for large capacity, then 4TB, 7200 RPM disks are the obvious choice.  If some combination of size and speed is needed, then a combination of disks and SSDs is probably the answer.  Faster disks only make sense if just a little more speed is needed – if storage that is twice as fast is required, instead of hundreds of times faster.  In most cases, if a little faster is a little better, then a lot faster is a lot better.  And like capacity, how often is a server described as having “storage that is too fast”?
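
A rough calculation shows why; the IOPS figures below are ballpark assumptions for 2013-era hardware, not benchmark results:

    # Rough random-I/O comparison; the IOPS figures are ballpark
    # assumptions for 2013-era hardware, not benchmark results.
    ssd_iops = 40_000   # random 4K IOPS for one SATA SSD
    hdd_iops = 150      # random IOPS for one 7200 RPM disk

    array_iops = 24 * ssd_iops
    disks_needed = array_iops // hdd_iops
    print(f"24 SSDs:                   {array_iops:,} IOPS")   # 960,000
    print(f"Equivalent 7200 RPM disks: {disks_needed:,}")      # 6,400

Even if the per-device numbers are off by a factor of two in either direction, the conclusion is the same: for random I/O, spinning disks simply cannot be stacked high enough to compete.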

Even the network is a place to give some thought to future needs and the life of the server.  1Gb server Ethernet ports can be added for about $100 per port.  How much network bandwidth or access will this server need during its first or second life?  What does it cost to take the server down, take it apart and add ports later?  For real forward thinking, two ports of 10Gb Ethernet can be added for around $600.  Maybe the environment has no 10Gb infrastructure now, but it is a safe bet that it will in the next 3-5 years.

I have made several references to the server’s “first life”.  Yes, that implies a second life.  Maybe that just means re-tasking it for a less demanding or less critical application in three or four years.  Or, maybe that means adding it to a pool of virtual machine hosts and running a few less demanding VMs on it.  Even if the server is getting older and harder to service, running a VM that can be easily moved to another host in case of trouble is a low-risk way of continuing to benefit from aging hardware.  In either case, if the server will be deployed in a second life, a few years from now, how likely is it to need more cores, more RAM and more storage?  Quite likely.

The conclusion is that adding 10% – 20% to the initial cost of a server can not only make it more reliable and better suited to its initial tasks over the first few years, but can also ensure that the server is suitable for re-deployment to new applications after its first job is done.  It is reasonable to assume that any organization will be scrutinizing its costs just as carefully three years from now.  If a 20% increase in cost today means doubling the useful life of a server, that is probably a very good return on investment.
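
As a final back-of-the-envelope check on that claim (the server price is a made-up example; the lifespans are the scenario described above):

    # Cost per year of service: baseline build vs. a 20% premium build
    # that earns a second life.  Price and lifespans are assumptions.
    baseline_cost = 5000.0                 # hypothetical server price
    baseline_years = 3                     # a common first-life span
    upgraded_cost = baseline_cost * 1.20   # the 20% premium up front
    upgraded_years = baseline_years * 2    # useful life doubled by a second life

    print(f"Baseline: ${baseline_cost / baseline_years:,.0f} per year")  # $1,667
    print(f"Upgraded: ${upgraded_cost / upgraded_years:,.0f} per year")  # $1,000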

What do you think?

One Reply to “Server Economics”

  1. As an example, I am working this morning to develop a plan to replace a single power supply with a redundant power supply in a system about to be deployed. The end customer now realizes that redundant power makes a lot of sense. Unfortunately, this is one of those decisions best made before the purchase, not after. There seems to be a solution, but it is a non-trivial swap, now being done in the field.
