Building A Budget Storage Server/Workstation
November 09, 2003 Alexis Dang

Summary: Storage servers aren't simply computers with a bunch of hard disks, nor are they just another name for RAID storage. In this article, Alexis builds a budget storage server and explains why you can't take a random desktop and add a bunch of disks. As with the previous Opteron article, even if you're just into building gaming systems, you'll want to read this one for our thoughts on cooling and power.


Introduction (Page 1 of 9)
Today we are going to build a budget, high-performance storage server. So what exactly is a storage server? We'll first go over the technical requirements and operational goals for our system, then move on to the design and assembly.

First we need to discuss why we need a storage server at all. It is useful in a workgroup environment, where multiple users need to share data across a network. It also facilitates backups, since the storage is centralized. Where cost is an issue, it is much cheaper to build one robust, highly reliable server than to bring every network node up to that level of reliability and performance.

At the most basic level, a storage server needs to be able to hold a lot of hard drives. To accomplish this, we could go out and buy a network attached storage device, but remember this is a budget system. Our goal is to maximize the functionality, reliability, and performance of the server, while keeping costs under control. It sounds like you could just add a bunch of hard drives to any networked PC and call it a “network attached storage device,” but if you want it to be reliable, you have to think about cooling, power, and anticipated usage. So, if you’re only interested in building a hardcore gaming PC, you’ll still want to read this article to see our thoughts on cooling and power.

We wanted a server that would serve only data files, not program files. This would limit our network bandwidth requirements and maximize performance. At the same time, we wanted this server to act as a workstation with as much capability as the other systems attached to it. Our minimum storage requirement was one terabyte. Not too long ago, terabyte storage was reserved for government labs like Sandia National Laboratories, Lawrence Livermore, or science fiction.

Another consideration specific to storage is expandability: how we will cope with growing storage requirements over time. Some network attached systems are great in the first year, but as needs expand, you basically have to double your initial investment to double your storage by duplicating your initial purchase. The technology you bought the first time does nothing for your future expansion; this is something we tried hard to prepare for.

Let's start by discussing what we need to have and then build around that. First, the hard drives.


SIDEBAR: CDs may self-destruct at sustained speeds of greater than 56x


Hard Drives (Page 2 of 9)

A storage server should have a hard drive for the operating system and an array of drives for the shared storage. We feel that the most important feature for a storage hard drive is reliability. We went with IDE drives because of their superior price-to-performance ratio as compared to SCSI. In our case, we don't even need the bandwidth of the SCSI drives; quantity rather than blistering speed was important. With respect to SATA versus parallel ATA, both are more than adequate for our needs.

With these needs in mind, we chose the Maxtor Maxline II Plus 250GB 7200rpm 8MB buffer hard drives. These drives are rated at 1,200,000 hours MTBF, as compared to 600,000 hours for standard consumer drives. This does not mean that you can run your hard drive for 137 years, but it does imply that it is more reliable than a standard desktop drive. Maxtor advertises this drive as one designed for 24/7 applications; this is in stark contrast to the old line of IBM drives that did not recommend continuous usage. Currently, 250GB is the maximum capacity for 7200rpm drives. The only other IDE/SATA drive with a similar MTBF rating is the Western Digital Raptor series, but its maximum capacity is still only 36GB, with a 74GB version coming soon.
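For context, those MTBF ratings translate into years of continuous operation as follows. This is a quick illustrative calculation only; MTBF is a statistical rating across a population of drives, not a lifetime guarantee for any single drive:

```python
HOURS_PER_YEAR = 24 * 365.25  # 8,766 hours in an average year

def mtbf_years(mtbf_hours):
    """Convert an MTBF rating in hours to equivalent years of 24/7 operation."""
    return mtbf_hours / HOURS_PER_YEAR

print(round(mtbf_years(1_200_000), 1))  # Maxline II Plus rating: 136.9 years
print(round(mtbf_years(600_000), 1))    # typical consumer drive: 68.4 years
```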

We will use four of these drives for a nice and even one terabyte of storage, with a server design that will allow for an easy addition of another 4 drives for a peak of 2 terabytes. But, when we are ready for a storage upgrade there will likely be even higher capacity hard drives on the market, further extending our maximum storage capacity.

[image]


To RAID or not to RAID


The next decision was whether to RAID the drives or not. Since we were interested in reliability, RAID 1 (mirroring) was considered. If you believe the numbers, running drives in a mirror doubles the effective MTBF; we accomplished much the same thing by choosing the Maxline series over a standard consumer IDE hard drive. In addition, our budget constraints limited our ability to implement a RAID 1 array.

Another possibility was RAID 5, in which five drives provide the capacity of four. Parity data is distributed across the drives, so if one fails, the remaining drives can reconstruct the lost data. This is available through software or hardware, and it is a great solution if you do not plan to upgrade your maximum server capacity; when the time comes to replace a drive with a higher capacity one, you will be forced to replace the entire array. As this is a budget server, going to RAID 5 isn't as important to us. One must also realize that RAID designs really only protect against hard drive failures. With a power supply or motherboard failure, more than one drive can be destroyed at a time, so it is important to note that simply turning on the RAID option does not guarantee the safety of your data.
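The parity idea behind RAID 5 can be sketched in a few lines: the parity block is just the XOR of the data blocks, so any one missing block can be rebuilt from the survivors. This is an illustration of the principle only, not how a real RAID controller is implemented:

```python
from functools import reduce

def parity_block(blocks):
    """XOR equal-sized data blocks column by column to produce the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild_lost_block(surviving_blocks, parity):
    """XOR the surviving blocks with the parity to recover the missing block."""
    return parity_block(surviving_blocks + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # one stripe across four data drives
parity = parity_block(stripe)

lost = stripe.pop(2)                           # simulate one drive failing
assert rebuild_lost_block(stripe, parity) == lost
```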

Our solution is to run the drives in a standard configuration. We plan to start with 4 drives, with room for another 4. Since these drives are not in any disk array, we can remove drives as they fill up and place them into storage for backup. Better yet, our plan is to place full hard disks into external drive boxes: new, higher density drives can easily be added to the server, and the old drives can be moved into USB 2.0 or Firewire enclosures, ensuring easy access to old data. This is also the reason Linux was not a good choice for our system; it doesn't make sense to put XFS/ext3/ReiserFS drives into a USB 2.0/Firewire external box.

Since we anticipate going through 2 TB of data every year, this setup allows for that flexibility without a significant cost penalty. Remember, you do not need to populate a storage server to its maximum configuration from the start. In six months, when we've filled up a terabyte, we'll be able to buy another terabyte at a lower cost. With this setup, we are relying on the reliability of the drives to last a year, and we will also attempt to maximize the reliability of the rest of the system. This was another reason why we chose standard parallel ATA drives over SATA: external enclosures take parallel ATA drives. DVD-Rs will serve as temporary backup along the way.

For the system drive, we went with a Seagate Barracuda 7200.7 SATA 120GB drive. We like the Seagate SATA series because it does not use an internal SATA-to-parallel ATA bridge. While this is more a theoretical than a practical issue, with even the Raptor having a bridge, it is a nice design touch. In addition, the Barracuda is very aggressively priced. One consideration is that the Barracuda does not have a standard Molex power connector and must be connected with a Serial ATA power adapter. This makes it a less than ideal upgrade drive for older systems.

[image]


Cost: 4 x $250 for the Maxline II Plus, $110 for the Barracuda



SIDEBAR: If you live to 100 years old, your heart has an MTBF of 876,600 hours



Power Supply (Page 3 of 9)
Having all these hard drives really adds up in terms of power consumption. With 8 storage drives and a system drive, the hard drives alone will draw about 135 watts, assuming a conservative 15 watts per drive. That is a sustained draw; during startup, these values may increase by 50%. If you actually had 8 SCSI drives running at a peak of ~2A each, you'd need 16A on the 12V rail alone, and even a 400W SilenX.com power supply only supplies 18A on the 12V rail. This is why many SCSI storage systems have dedicated power supplies for an external array of disk drives. Something else to consider is that a RAID array would use more power than our setup because all the drives run simultaneously; with our design, drive usage should be staggered. This keeps our power well under budget. Quick rule: before you go to RAID, make sure your PSU can handle the power.

A Pentium 4 will draw about 100 watts, the video card another 70 watts, 40 watts for the motherboard, 64 watts for 1GB of DIMMs, 20 watts for a DVD-R, and another 15-25 watts or so for fans and accessories. Together with the drives, this adds up to more than 450 watts. As you know from reading our power supply article, an advertised watt is not always a watt that gets to your components. Modern computers rely increasingly on the +12V rail, and some cheaper power supplies advertise inflated total power when, in fact, the power is not where you need it. In addition, the advertised power is sometimes a peak rating, while the calculations above imply a continuous power requirement.
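Totaling the estimates above gives a quick back-of-the-envelope power budget. These are our rough planning numbers, not measured figures:

```python
# Per-component sustained draw estimates from the text, in watts.
estimates = {
    "Pentium 4": 100,
    "video card": 70,
    "motherboard": 40,
    "1GB of DIMMs": 64,
    "DVD-R": 20,
    "fans and accessories": 25,
    "9 hard drives": 9 * 15,  # a conservative 15W each
}

total_watts = sum(estimates.values())
print(total_watts)       # 454W sustained

# Drive spin-up can draw ~50% more, so stagger startup or budget for it.
print(9 * 15 * 1.5)      # 202.5W from the drives alone at startup
```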

Looking for the best power supply was easy. We went with the PC Power and Cooling Turbo-Cool 510 ATX power supply. PC Power and Cooling rates its wattages as a continuous draw, and at 40C, which is a realistic operating temperature inside a power supply. In contrast, some power supplies are rated under ideal conditions, such as 25C, that may not represent what is inside your server. There is an interesting brochure about this on the PC Power and Cooling website. We all know the value of clean power; the Turbo-Cool 510 has an integrated line conditioner. This power supply provides 34A on the +12V rail with a peak of 38A.

[image]

It doesn't look as fancy as some of the other supplies on the market, but PC Power and Cooling does offer an all-black model with techflex sleeving on the wire bundles. This power supply is also one of the few that offer a Serial ATA connector with a +3.3V line. The cables are very generous in length and will easily reach your drives, even in the biggest of cases. It includes a power cable, but only an 18 gauge one; server power cables are usually 16 gauge, and Supermicro, for example, bundles a 16 gauge power cable with its 450 watt power supply.

PC Power and Cooling also warranties their power supplies for 5 years, which is 4 more than the industry standard. Remember, we are looking to build the most reliable system for the money. This power supply does cost more, but compared to the competition, there really isn’t any competition until you get to the multiply redundant server power supplies that cost much more.

Cost: $190 at www.pcpowerandcooling.com


SIDEBAR: Any guesses on who makes PC Power and Cooling’s 550W competition power supply?


Motherboard (Page 4 of 9)


We went with an Intel i875 platform for the server because not only is it competitively priced, but we feel it is a more mature platform with most of the bugs worked out. Moreover, we only need 1GB of RAM for our use and so we don’t have to worry about our previous benchmarks on 8 banks of RAM.

There isn’t much of a performance differential between the top motherboards since we aren’t going to be overclocking in our server, but the feature sets do differ. Here are the most important features that we need in our server:

CSA GigE

CSA Gigabit interface – The i875 supports the CSA gigabit interface, which bypasses the PCI bus. This allows a full gigabit of Ethernet bandwidth without taking away from PCI bandwidth: the CSA link's maximum rate of 266MB/s is over 2000 megabits/sec, enough for full-duplex gigabit. We see this as an essential feature for a storage server. If we were going with a non-Intel solution, we'd want something with similarly dedicated gigabit bandwidth, such as the nForce3 with integrated GigE coming soon, or something like the K8W we reviewed earlier with a PCI-X network card.

Dual Ethernet

Secondary Ethernet port – For maximum performance, we will use the CSA gigabit port to connect to the LAN, while the secondary Ethernet port connects to a firewalled Internet link. Having an extra Ethernet port is essential for a server.

Additional ATA Ports

Additional ATA ports – We can always add an extra PCI ATA-133 card to drive additional hard drives, but if a system has extra ports, then we are a little closer to our final configuration and can get there at a lower cost.

USB 2.0 / Firewire

USB 2.0 and Firewire connectivity – So once we fill up our hard drives we can still get access to the data with our external drives.
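As a quick sanity check on the CSA bandwidth figure quoted in the feature list above:

```python
csa_peak_mb_per_s = 266            # dedicated CSA link bandwidth, per the text
megabits = csa_peak_mb_per_s * 8   # convert MB/s to megabits/sec

print(megabits)                    # 2128 megabits/sec
print(megabits >= 2 * 1000)        # True: enough for full-duplex gigabit Ethernet
```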

With these requirements in mind, we chose the Tyan Trinity i875 motherboard. Tyan has a positive track record in building server class systems; this is a reputation built over time and one which is extremely valuable to any manufacturer. It also makes them a little more conservative, which is good for reliability. The Trinity has dual gigabit ports on board, which is unique among the i875 offerings, although only one NIC uses the CSA interface. It has 4 SATA ports and the capacity for 6 ATA drives, with RAID capability for 2 SATA and 2 PATA drives.

[image]


Tyan offers a great combination of features with a good track record of reliability. It does offer some overclocking capability with software voltage and clock speed adjustments, but they don’t make a big deal about this.

The layout of the board is nothing fancy, but it has been well thought out, and nothing gets in the way. It uses a standard 12”x9.6” ATX form factor, so it should fit in just about any case. The DIMM slots have good clearance from the AGP slot, and the power connector is placed near the front of the board. Also differentiating the Trinity from other i875 boards are its 6 PCI slots; very few boards offer this level of expansion. We also like the two-digit LED module on the motherboard that can provide system error codes, as some boards require you to purchase an additional module to get this type of error reporting. No fancy software bundle is included with the board, but you get your standard USB and firewire headers, SATA interface and power cables, and ATA cables.

No board is perfect, though. We would have liked additional fan headers instead of only three. The heatsink on the i875 chipset is a little crooked on our Trinity, but still securely attached; interestingly, the picture of the board in the manual also shows a slightly rotated heatsink. There is no active cooling of the chipset or power modules, but this shouldn't be a problem if we don't overclock. There is a SPDIF header on board, but the cable is not included. Finally, more and more manufacturers are providing comprehensive software bundles and high performance accessories, such as rounded hard drive cables; these are all accessories that we would otherwise need to buy anyway.

Cost: $200


SIDEBAR: 32-bit processors can only access 4GB of RAM


Heart and Brains (Page 5 of 9)

Memory

The memory sweet spot right now is 1GB of RAM. In a previous review, we saw the Intel i875 Bonanza motherboard slow down significantly when 4 double-sided DIMMs were used. With the Tyan motherboard, we saw no significant slowdown with all 4 DIMM slots populated. We aren't yet sure why the Intel board behaved that way; it may simply have fallen back to more conservative memory timings with 4 DIMMs. If that isn't a reason to build your own systems, I don't know what is. Even though we aren't overclocking the system, we can still gain performance with fast memory: low latency modules (2-3-2) or CAS 2 (2-3-3), compared to standard DDR400 at CAS 3 (3-8-3).
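To put those timings in perspective, CAS latency can be converted to nanoseconds using the DDR400 memory clock. This is a simplified first-word calculation; real access times also depend on the other timing parameters:

```python
memory_clock_mhz = 200                  # DDR400 runs a 200MHz clock (400MT/s data rate)
ns_per_cycle = 1000 / memory_clock_mhz  # 5ns per clock cycle

for cas in (2, 3):
    # CAS 2 waits 10.0ns for the first word; CAS 3 waits 15.0ns
    print(cas, cas * ns_per_cycle)
```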

When it comes to the best performing memory, the list is short. More often than not, Corsair RAM sits among the leaders in the high-end RAM market. Corsair has impressed us with modules that always perform consistently and reliably.

The Corsair 512MB XMS Pro PC3200 Low Latency modules are a perfect match for the system. Without a window in the case, the LEDs can't be appreciated, but the PRO modules do have the better heatsink and are likely from Corsair's best bins, since this is their flagship line. Another advantage of the XMS Pro line is in debugging motherboard problems: if the lights do not turn on as they should during POST, you can infer that the motherboard may not be giving the DIMM modules any power. According to Corsair, the addition of the LEDs does not affect the integrity of the data, nor does it add noise to the signal path. Given that the Tyan Trinity i875P runs all 8 banks reliably at full speed, we'll have to agree.

[image]


Cost: ~$350 for low latency (2-3-2), ~$290 for CAS 2 (2-3-3). Save another ~$40 a pair with standard XMS modules. Limited availability of PRO modules.

CPU


Not much to say here. Following the recent price drops, the Pentium 4 3.0GHz (800MHz bus) costs the same as the 2.8GHz did last month. The marginal cost between a 2.8 and a 3.0 is about $60, but going from 3.0 to 3.2 is about $120. For a non-overclocked system, the 3.0GHz chip is a good choice. A single Hyper-Threaded CPU is sufficient for our purposes, since the workgroup will be small and the server will not need to host a large number of simultaneous connections.

We used the stock heatsink and fan since we aren't overclocking and don't mind a little extra fan noise. The 3.0GHz Intel retail package includes a better heatsink than the slower chips get; this one has a copper core. We did remove the stock thermal pad to try out the new Arctic Silver 5, the newest and one of the best thermal compounds available. With this setup, our peak CPU temperature during prime95 testing was 50C with a system temperature of 28C. This system temperature was with all the case sides closed and reflects the great cooling of the case, but more on this later. The temperature at the case intake was 22C.

[image]


Cost: $300



SIDEBAR: I’d like to see a case made from granite


Server Chassis (Page 6 of 9)

Chassis

We need a case that will hold up to 9 hard drives, a DVD-R, and a floppy drive. It is not enough that the case hold 9 hard drives; it should also organize them in groups of 4, which helps with cabling between master and slave drives in our parallel ATA configuration. I prefer cases that have dedicated 3.5” hard drive bays over rows of 5.25” bays, because it reduces the need to buy extra brackets and is a more efficient use of space. Tower cases with 14 or so 5.25” bays are not that useful if you are just going to use hard drives. You can get bay modules that convert three horizontal 5.25” bays into five 3.5” hard drive bays, but these modules are not cheap.

A case is responsible for holding all your components together, protecting them, and also for keeping them cool. In our storage server, we need to keep our storage drives cool. It is well recognized that the lifetime of hard drives drops with increases in temperature. We went through almost every single case design on the market and finally found one that we liked, the Evercase 5000LX.

[image]


Evercase is listed among the approved case suppliers for AMD and Intel, but we don't see many of their products on the market. This case has eight 3.5” drive bays, three 5.25” drive bays, and space for a floppy drive. The key feature that sold me is that the eight 3.5” bays are organized into a pair of hard drive cages that hold four drives each. In some hard drive cages, the drives are stacked directly on top of each other, so there is no airflow between them; the 5000LX leaves room between the drives for airflow, which just makes good engineering sense. By organizing the drives in sets of four, it matches the topology of an ATA configuration. It's as if it were designed with our server specifications in mind. These cages are also well cooled, with two 80mm fans blowing through one and an 80mm fan sucking air through the other. We didn't like this second approach, but more on that later. There is a single 92mm exhaust fan in the back; we would have preferred a 120mm fan, but a 92mm is still better than an 80mm.

[image]

[image]

Our storage drives can go in the 8 3.5” drive bays, the system drive in a vented 5.25” bay, and a DVD-R in the other bay, which leaves room for a 5.25” accessory.

Case modification


So, we didn't like having the second set of hard drives cooled by an 80mm fan sucking air through the drives. For one, the area of airflow is severely limited by the hard drive cage, and this setup also pulls warm air from inside the case through the drives. Looking at the 5000LX, we saw room for another front intake fan, but the additional drive cage obscured it. This was a simple fix: we just shifted the second drive cage about 1 inch, which gave us ample clearance for a front 92mm or 120mm fan. We needed to drill additional holes in the drive cage, which were matched with holes already present in the case. The holes in the case were not threaded, so we had to secure the hard drive cage with machine screws and nuts.

“Stock design”
[image]

“After mod”
[image]

We like this new design much better in that it pulls cold air from outside the case through the hard drives. The compromise is that the case becomes very tight if you use an extended ATX 12”x13” motherboard; this was not a problem for us. Thanks to this cooling mod, the Maxlines are kept just above ambient temperature and below the system temperature. By keeping the drives as cool as possible, this case design adds to the reliability of our system.

This case is very well built with thick steel. Even though it is not shiny polished steel, it is very strong; it easily dulled many Dremel bits. The other modification we made was to adapt the case for front USB ports. This is a simple mod that has been described previously: you extend the motherboard USB header and drill some openings in your case. One caution is that some USB 2.0 ports are very sensitive to cable quality, and extending your ports too far will cause them to stop functioning.

After working with this case, we aren’t sure why more people aren’t using it. It looks cleaner from the front than some of the more ubiquitous cases. It supports a rear 92mm and a front 120mm fan in addition to its standard 2 x 80mm fans, and it organizes the hard drive bays intelligently. The best part is that this case is competitively priced at about $80.

There were a few things we didn't like, though. For one, it doesn't include any front USB, Firewire, or audio ports; these ports really do improve the usability of a system, especially when the back of the case is hard to reach. In addition, we would have preferred a standard 3.5” bay for the floppy drive instead of a slot, so we could have put a media card reader in that top bay.

For a server, the case is one of the most crucial components. It needs to be big enough for your needs, and it needs to keep your components cool. Computer components last longer when they are kept cool; this is one responsibility of the case that shouldn't be overlooked.

Cost: $80



SIDEBAR: This system has 6 case fans, 4 of which are dedicated to cooling the hard drives.


Video Card (Page 7 of 9)
The choice of video card will really depend upon what else we decide to use this system for. Whether this system will be used for serious gaming or high-end graphics really determines the appropriate video card. Our main requirement was a 1280x1024 DVI output, good compatibility, and stable drivers.

We like the current NVIDIA offerings when 3D speed is not the most important consideration. We need a stable driver set, and prefer a unified one because it simplifies upgrades within the same manufacturer without worrying about operating system compatibility issues. For workgroup settings, we recommend using a single video card manufacturer for all your systems, so you can minimize the number of spare parts you stock and do video card transplants on a whim.

The NVIDIA drivers also do a good job of desktop rotation, which simplifies using our LCD monitors in portrait mode. Most LCD manufacturers provide utilities to do this, but when it is integrated into the driver set, it should be more reliable.

[image]


For this application, we'll use an NVIDIA GeForce FX 5200 128MB card to stay within budget. No one should mistake this card for a 3D powerhouse; it is among the slowest 3D cards on the market. It is fast enough for us, but slow compared to the other NVIDIA and ATI offerings. Most “servers” make do with an integrated graphics chip like an ATI Rage XL or less.

Cost: $70 with free t-shirt or hat

Matrix Orbital


Some people think that servers should be boring. Nothing could be further from the truth; many of the most impressive system cases are those of servers. Just take a look at the SGI servers: one feature of the old Onyx and current Origin servers is an LCD screen that provides system status and statistics. To provide this kind of information, we added a Matrix Orbital MX2 to the system. It is also useful in pure server applications where the machine runs without a monitor.

The Matrix Orbital can easily be programmed to display essential server statistics, including network, CPU, and memory usage; real-time numbers on free hard drive space; and temperature and fan monitoring. One great feature of the Matrix Orbital is that it can also control three additional case fans. The maximum current is 1A at 12V, which covers all but the most exotic fans.

As great as it looks, the Matrix Orbital could be made even better. I would have liked to see some USB and Firewire connections next to the LCD, or a version of the Matrix Orbital with a built-in USB 2.0 card reader. Either way, the Matrix Orbital MX2 is a great way to distinguish your system from the rest of the pack. It is relatively inexpensive when you consider the cumulative cost of all your fans, fan grills, round cables, and “mod” accessories, some of which are not even easily visible.

[image]


Cost: $100

Optical Drive


We went with a Pioneer DVR-106 4x DVD±R/RW drive for its reported high compatibility with various media types, good support in terms of regular firmware upgrades, as well as its reasonable price. One nice touch is that the black drive not only has a black front face, but the drive tray itself is also black. Some black drives just have a new faceplate, but the inside tray is the standard beige color.

Cost: $150




SIDEBAR: Black computers are the trend these days, but the earliest IBM XTs already had black drives in a beige case.


Input Devices (Page 8 of 9)

Logitech Keyboard and Mouse


Ever since Logitech showed PC users that mice need more than 2 buttons, we have been big fans of their products. I have Logitech mice whose tracking surfaces have been polished from long-term use, but which continue to track true, with buttons that still have a tactile click. Innovations from optical mice with two sensors to the newest MX series are signs of Logitech's dedication to detail. Investing in a good, comfortable keyboard and mouse is well worth it, given your continuous interaction with these devices.

Our preferred pointing device is the Logitech MX 700; it tracks as well as the best corded mice. When talking about mouse tracking, there are two factors. One is how smooth and precise the motion is, which matters for fine photo editing. The other is how it performs with fast movements: does it jump around when you flick your wrist? This latter point determines how game-worthy a mouse is. The rechargeable base also adds value to the system. Logitech mice are never the cheapest, but over the many years that I have been using computers, I have never had to replace a Logitech mouse for equipment failure, only for technological advances. Once you find a mouse that you like, it is hard to switch; a bad mouse can really interfere with your productivity and add to your frustration.

[image]


A keyboard is not just a keyboard. Logitech keyboards have a less mushy feel than many other brands, but are not as stiff as the old school IBM Model M keyboards. The wireless keyboard that comes with the MX wireless duo has some interesting features beyond the now-standard shortcut buttons and volume control. Our favorite is the scroll wheel on the left side of the keyboard; for you right-handers, this allows you to scroll effortlessly through a web page while writing notes with your right hand. We actually prefer the ergonomic Logitech keyboard that is currently only available with the standard wireless optical mouse, not the MX. A perfect keyboard would be an updated ergonomic model with the new features present on the Elite keyboard. We ended up buying an old Logitech Cordless Desktop Pro just for the keyboard.

[image]

Cost: $100/$60

UPS


Servers should have uninterruptible power supplies, or battery backups. A UPS really shouldn't be used to keep your computer going so that you can do more work; it should give you enough juice to shut the system down cleanly. There is more to a UPS than capacity, however: the output waveform matters. Some of the low cost backups provide a square wave output, which can confuse some active PFC or auto-voltage sensing power supplies. Improvements on the square wave include stepped sine wave and true sine wave outputs.

In keeping with a reasonable budget, we chose a stepped sine wave 1500VA battery backup from APC. APC has been making UPS devices for a long time, which helps ensure future replacement battery availability.

[image]


Cost: $200

Gigabit switch


We went with the 8-port gigabit switch from Netgear, the GS108. This switch retailed for nearly $800 when it was first released a full year ago. Today it can easily be found for under $200. Compared to other switches, this model does not require a cooling fan for silent operation, and every port can be used as an uplink to another hub, switch, or router.

Netgear is also a rapidly expanding company with a recent successful IPO, so hopefully they will be able to continue to provide tech support for this product in the long run.

Cost: $200

ATA-133 controllers


We added a PCI Promise ATA-133 controller so we can run our four Maxlines as all master drives. This will improve simultaneous access performance and allows for an easy upgrade to eight storage drives.

Cost: $30

Floppy Drive


Some may cringe at the thought of a floppy drive, but it is still a good method for flashing the BIOS, although we are seeing more bootable USB key options. Nevertheless, with the Evercase, we couldn't use that top slot for anything else, so we got a Samsung 1.44MB floppy drive to fill the hole.

Cost: $10

Misc


We used rounded cables for all our devices to facilitate airflow through the system. Our fans were just standard ball-bearing models. We added a fan in front of the power supply to help exhaust the hot air from the top of the case, and we made a fan bracket to cool the system drive. More expensive fans are often less powerful because they emphasize quiet operation over power. Cat 5e cables carry our gigabit network.



SIDEBAR: Round cables really do simplify cable management.


Ballistics ReportPage:: ( 9 / 9 )


So after all that, this is what we ended up with:

[image]


Evercase 5000LX case – 88%
This is my new favorite case. It provides good cooling and great expansion, doesn't look like every other case on the market, and doesn't cost much compared to the big aluminum cases. It looks professional, and a little intimidating because of its size, without trying too hard to look gimmicky. No one would make comments about your case looking dorky with one of these on your desk.

Cost: $80

PC Power and Cooling Turbo-Cool 510 ATX power supply – 85%
A no-holds-barred power supply for your system. It means never having to second-guess this component choice, but you do pay for it. Throughout all our testing, the +12V rail never strayed outside the +12.04V to +12.10V range. If it were about $50 cheaper it would be in every one of my systems. You can't beat the built-in line conditioning and the overall output quality.

Cost: $190

Pentium 4 3.0GHz Retail box
Not much to say: a workhorse Intel CPU, and a better deal following the recent price drops from Intel.

Cost: $300

Tyan Trinity i875 motherboard – 88%
Clearly intended for the demanding corporate customer, Tyan puts all of its design effort into practical features. Performance-wise, most i875 boards are the same at stock speeds. Extra points for the dual gigabit interfaces and the six PCI slots. The price is competitive with other high-end i875 boards, and the extra gigabit NIC was the main factor that led us to choose this board.

Cost: $200

1GB Corsair XMS PRO low latency ram – 90%
When you need reliable and fast RAM, Corsair XMS is the recommended choice. I am still split on the practicality of the PRO series with its status LEDs, although I now see their value in diagnosing hardware problems or checking whether your software is actually using all your RAM. I would still like to see a line of 2-2-2 PC3200 modules from Corsair, though.

Cost: $340

Seagate 120GB Barracuda 7200.7 SATA – 78%
A good hard drive, but nothing to brag about; at least, nobody will care if you do. It is a little noisy during operation.

Cost: $110

Maxtor Maxline II Plus 250GB PATA x 4 – 86%
A hard drive with potential, given its SCSI-like MTBF ratings and design for 24/7 applications. Having a hard drive fail is no fun, so if Maxtor has really built a more reliable drive any additional cost is well worth it. It gets extra points for presumed reliability. It is also faster than our SATA 120GB Barracuda.

Cost: $1,000

Pioneer DVR-106 DVD±R/RW – 80%
There isn't much to differentiate the current generation of combo DVD burners. It does what it is supposed to do.

Cost: $150

Matrix Orbital MX2 – 90%
At first I thought it was as gimmicky as an aquarium side panel for your case, but the information the Matrix Orbital provides can be quite useful, not only for keeping tabs on your system's health but also for the latest stock quotes or surf reports. Extra points for innovation and wow factor, at least until everyone starts putting them in their computers.

Cost: $100

NVIDIA GeForce FX 5200 – 78%
The slowest 3D card in the GeForce FX line, but enough for our purposes. You don’t always need to have the fastest video card.

Cost: $70

Samsung 1.44MB floppy drive – 70%
Probably the most expensive storage device in this system when computed on a cost-per-megabyte basis. It will probably get used about as often as an ice scraper in San Francisco, but we had that empty drive bay…

Cost: $10
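That cost-per-capacity quip is easy to check. A quick back-of-the-envelope calculation, using the prices from this build and nominal, unformatted capacities:

```python
# Dollars per megabyte, using nominal capacities from this build.
floppy_cost_per_mb = 10 / 1.44                 # $10 for 1.44MB
maxline_cost_per_mb = 1000 / (4 * 250 * 1000)  # $1,000 for four 250GB drives

print(f"Floppy:  ${floppy_cost_per_mb:,.2f} per MB")
print(f"Maxline: ${maxline_cost_per_mb:.4f} per MB")
```

That works out to roughly $7 per megabyte for the floppy versus a tenth of a cent per megabyte for the Maxlines, a ratio of nearly 7,000 to 1.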

ATA-133 controller – 70%
Hard drive speed is still limited by the drive itself, not so much by the interface. This was a cheap way to add more IDE ports.

Cost: $30

Logitech wireless Ergonomic keyboard and MX 700 mouse – 90%
My personal input device of choice.

APC BX1500 UPS – 80%
Hopefully we will never need it, but we trust it will work when the time comes. A UPS is highly recommended, and most do their job well. One thing that distinguishes the APC is that it looks cool enough to sit on your desk instead of under it; another advantage is the readily available replacement batteries.

Cost: $200


Netgear GS108 8 port gigabit switch – 80%
Strong offering with limited competition in the budget gigabit switch market. For smaller workgroups, the network traffic isn’t high enough to really differentiate between the more expensive switches.

Cost: $200


Total: $3,140

The majority of the above parts were bought at retail from Ewiz, NewEgg, and Dell.

Looking at comparable network attached servers from Dell or Apple: a one-terabyte Dell PowerVault built on a 2.6GHz/400FSB Pentium 4 costs $4,360, and a 1.33GHz G4 Xserve with 720GB costs $5,024. These "budget" servers are also IDE-based, but they do not list which drives they use, and the cost of additional storage upgrades is an open question; the configurations above were the maximum allowed at initial purchase. In addition, these prices don't include a UPS or a gigabit switch. We think that for a small workgroup, our system can match the performance of these entry-level servers while also adding workstation functionality.
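Put in cost-per-gigabyte terms, using the list prices and nominal capacities quoted above, the gap is obvious:

```python
# (total price in dollars, storage in GB) for each option as quoted.
systems = {
    "Our build (incl. UPS and switch)": (3140, 1000),
    "Dell PowerVault, 1TB":             (4360, 1000),
    "Apple Xserve, 720GB":              (5024, 720),
}

for name, (price, gigabytes) in systems.items():
    print(f"{name}: ${price / gigabytes:.2f}/GB")
```

Our build comes in around $3.14 per gigabyte against $4.36 for the Dell and nearly $7 for the Xserve, and our figure already includes the UPS and the gigabit switch.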

Those prices on network attached storage devices were one of the primary reasons we built our own server, one that meets our current needs and is prepared for our future ones. Our main goal in this design was to ensure the ability to grow our storage capacity as necessary and to maximize system reliability. As we said, most off-the-shelf servers either have no open drive bays or keep their drives in a RAID configuration, which limits the drives' use in other setups. While there are clear advantages to those approaches, our desire to add another four drives at any time, and then to "retire" full drives to external enclosures, influenced our decision-making. Each component we chose was picked for a specific reason and because it works well with the rest of the system. For each category there are probably better parts we could have used, but as with any project, you need to prioritize your needs.

In summary, we covered the key considerations in building a storage server: reliability, performance, and expandability. With our build, we have tried to optimize all three while keeping to a reasonable budget. Don't be afraid to build your own "monster" workstation or server. You can often save a lot of money by doing it yourself, since profit margins in this segment are much higher than with entry-level computers. So go spec out your own system and build it.

Any guesses on how long it will take us to fill up a terabyte?
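For what it's worth, here is a rough estimate, assuming files arrive over the gigabit link; the 40MB/s sustained figure is purely our guess at real-world file-sharing throughput, not a benchmark:

```python
TB = 10**12  # one terabyte in bytes (decimal)

# Gigabit Ethernet's line rate is 1,000,000,000 bits/s, or 125MB/s;
# sustained file-sharing throughput will be far lower (40MB/s is a guess).
for label, bytes_per_sec in [
    ("gigabit line rate", 1_000_000_000 / 8),
    ("~40MB/s sustained", 40 * 10**6),
]:
    hours = TB / bytes_per_sec / 3600
    print(f"{label}: about {hours:.1f} hours to fill 1TB")
```

Even at the theoretical line rate it takes over two hours of nonstop transfers, and closer to seven at a more realistic sustained rate, so the terabyte should last a while.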



SIDEBAR: Have any firsthand experience with one of the components selected that you’d like to share? Perhaps you think there’s a part that we missed? Share your thoughts in the news comments!

© Copyright 2003 FS Media, Inc.