NAS - Network Attached Storage

NAS Server

[June 23, 2016]

[Jan. 17, 2017]


[June 20, 2018]

Two years after I started up the system, it is still going strong. NAS4Free is working exactly as I hoped, and after an initial learning curve I have found it easy to work with. In those two years, I have had two drives fail (both drives were old - not due to any problem with the NAS); both failed drives were replaced with no loss of data. There are two additional things I will mention by way of useful tips:

Update #2

April 13, 2023

At this time the server is still running nicely, with most of the original parts (there have been some hard drive upgrades to increase capacity). It should be noted that NAS4Free has now been replaced by XigmaNAS (currently ).

The transition to XigmaNAS was trouble-free.

I continue to use the original furnace filter, which I clean once per year by blowing out the accumulated dust with an air compressor. I'm impressed that the original Rosewill cooling fans and the ThermalTake power supply continue to work flawlessly after running continuously for almost seven years.

Why I built this Server

This server was built to replace my old RAID storage system, which was running on my Ubuntu desktop. The old system had four hard drives (two mirrored sets) - the system was maxed out, no more drives could be added. In addition, there were many inconveniences to running what was essentially a combined desktop/server.

The new system (pictured below) can handle a total of eight hard drives. The box was built from birch faced plywood (and trimmed with red oak). The top of the box is a 14"x20" furnace filter. The new server will reside on the wire shelf shown in the picture (I rotated the server at a slight angle for the picture, to show the box more fully).

Update July 2016:

Installed all six currently available drives (still have slots to add two more drives). Drive temperatures are reporting in the 36-43 deg. C range, which is acceptable.
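Drive temperatures can also be checked from the command line via SMART data, outside the NAS4Free web interface. This is a sketch, not output from this server: the device name /dev/ada0 (FreeBSD naming) and the sample attribute line below are assumptions for illustration.

```shell
# On the server itself this would read real SMART data, e.g.:
#   smartctl -A /dev/ada0
# Here a made-up sample line stands in for that output:
sample='194 Temperature_Celsius 0x0022 112 103 000 Old_age Always - 38'
echo "$sample" | awk '/Temperature_Celsius/ {print $10}'   # prints 38
```

Field 10 of the attribute line is the raw value - the drive temperature in deg. C.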

Operating System

System Hardware

[Note: Prices shown are June 2016 prices from]


The data drives were set up as three sets of RAID1 (mirrored). See the addendum at the end explaining the choice of RAID1.

Useful Links:

More Pictures

Here are some more pictures of the NAS box:

The partially finished box (note, the front piece with holes for the cooling fans is upside down in this picture). The design goal was to build a headless server (no monitor, keyboard, or mouse), with the top filter acting as the only air inlet. In practice, the back panel can be removed (held on by two screws), so that a monitor and keyboard can be temporarily attached for set-up.

In the design, the "front" of the box is the side with the power supply and cooling fans.

Attaching the support for the intake filter. I purchased all of the main components before I started the build, and then built the box to fit. In the picture you can see the power supply on the left of the box.

Nearly completed box, before finishing (note: the fans are installed backwards in this picture! Fixed that later). The front panel containing the fans simply drops into place, so it can be easily removed for system access.

A look inside the box from the top, showing approximate motherboard placement (I'll be installing a right-angle USB connector in the near future, to enable the board to be moved further to the rear).

Note just left of center, above the motherboard, is one of the hard drive "hanger rails." The hanger rail is a piece of C-channel extruded aluminum, with slots milled in it to accept the hard drive mounting screws. The rail sits on two wooden rails at the front and back of the box, just under the intake filter. For the time being, the drives will simply hang from the rails; in the future, I may screw the rails to the frame. There will of course be a total of eight of these rails - six for the hard drives, and two for future expansion.

Close-up of the front panel of the nearly finished box. Note a small "panel" has been added between the fans. The panel includes indicator LEDs for power and HD activity, and a reset button. I'm a little nervous about the reset button being so easily accessible (and possibly easy to bump into by accident), so I'll probably add some sort of safety feature in the future.

Close-up of the panel: HD activity light on top, power LED on bottom, and brass reset button in between. Just for fun, I made the panel out of figured maple, and turned a custom brass button on my lathe. The indicator LEDs, switch, and connector were salvaged from an old junk computer.

Here is the cabinet with all six drives installed. The cables were a tight fit, but manageable. I may need to re-machine the slots in the rails so that the drives can be moved further back to make more room for cabling.

Planned Improvements

Although the NAS box is working as built, I plan to make a few improvements (eventually):

Some Modifications

[July 17, 2016]

As noted above, I have re-milled the drive hangers with additional slots, so the drives can be moved further toward the back.

This photo shows two of the drive rails (top of the picture) modified, with the rest of the rails to follow.

This modification does not appear to have any effect on cooling, but it does make it easier to route the wiring. Drive temperatures (as reported by NAS4Free) range from 33 to 41 deg. C. The two drives which are running hottest are positioned over the cooling fan for the CPU, so the CPU fan is likely blowing warm air on the drives. It may be possible to reposition the drives to either side of the fan, or to install some sort of deflector to alleviate this. However, for the time being the drives are still running within the desired temperature range.

[July 21, 2016]

Here is the cabinet with all drive hangers modified. Note that the drives have been re-arranged so that no drive sits directly above the CPU cooling fan. This re-arrangement had no effect on drive temperature - the two Western Digital drives still ran hottest (but within appropriate limits); apparently these two drives just run hotter.

Additional improvements:

Addendum - Choice of RAID Level

RAID1 was chosen for my NAS for two reasons. First, when using large SATA drives, there is a significant risk of being unable to recover the RAID set when (not if) a disk fails. See this link for an explanation:

Dangers of RAID 5 Array with SATA Drives
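In brief, the argument runs like this: during a RAID5 rebuild every surviving drive must be read in full, and with large drives the odds of hitting an unrecoverable read error (URE) along the way become substantial. The numbers below are my illustrative assumptions (a URE rate of 1 in 10^14 bits, a common consumer SATA spec), not figures from the article:

```shell
# Rough model: probability of at least one URE while reading all
# surviving drives during a rebuild. Assumed: 4-drive RAID5 of 4 TB
# disks (3 surviving drives read = 9.6e13 bits), URE rate 1e-14/bit.
awk 'BEGIN {
    bits = 3 * 4 * 1e12 * 8
    p = 1 - exp(bits * log(1 - 1e-14))
    printf "rebuild failure probability: %.0f%%\n", p * 100
}'
# prints: rebuild failure probability: 62%
```

A mirror, by contrast, rebuilds by reading a single drive, which cuts the exposure by two-thirds in this example.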

and this article further explains the benefits of mirroring:

ZFS: You should use mirror vdevs, not RAIDZ
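The layout that article recommends is the one used in this build: a pool of mirrored pairs. As a hedged sketch (the pool name and FreeBSD device names are assumptions, not this server's actual configuration), three mirror vdevs under ZFS would be created roughly like this:

```shell
# Sketch only: "tank" and ada1..ada6 are assumed names.
zpool create tank \
    mirror ada1 ada2 \
    mirror ada3 ada4 \
    mirror ada5 ada6
```

Each mirror group is an independent vdev, so a failed disk resilvers from its partner alone rather than by reading every drive in the pool.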

Second, as noted above, when using a NAS for an extended period of time (in my case, continuously over a period of years), there is a 100% probability that a drive will fail. When this happens (in my experience, once every two or three years), my strategy is to replace both of the drives in the failed RAID set with drives of the "next largest size". For example, 1 TB drives are replaced with 2 TB drives, 2 TB drives are replaced with 4 TB drives, and so on. Because hard drive capacity typically doubles every two years for the same price range, this means I pay roughly the same price for my drives each time, while continually increasing capacity.
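Assuming the mirrors are ZFS vdevs (as the article above discusses), the replace-both-drives cycle can be sketched as follows; the pool and device names are assumptions for illustration:

```shell
# Sketch only: pool/device names are assumed.
zpool set autoexpand=on tank    # allow the pool to grow once both disks are larger
zpool replace tank ada3 ada7    # swap the first drive of the failed mirror
# ...wait for the resilver to complete, then swap the second drive:
zpool replace tank ada4 ada8
```

Once both members of the mirror are the larger size, the vdev (and the pool) picks up the extra capacity.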