NAS - Network Attached Storage
NAS Server
[June 23, 2016]
[Jan. 17, 2017]
Update
[June 20, 2018]
Two years after I started up the system, it is still going strong. NAS4Free is working exactly as I hoped, and after an initial learning curve I have found it easy to work with. In those two years, I have had two drives fail (both drives were old - not due to any problem with the NAS); both failed drives were replaced with no loss of data. There are two additional things I will mention by way of useful tips:
Do this: NAS4Free can be configured to send SMART data (hard drive status data) to an email address. I use this to get notified of drive problems so I can replace failed drives before I lose any data. The status messages arrive daily, so I use an email filter to automatically file all routine messages and only let the "problem" messages get to my inbox (any message with the word "DEGRADED").
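As a sketch of that filtering rule (the message text here is hypothetical; a real filter keys on whatever your NAS actually sends, and I have added ZFS's other fault states alongside the DEGRADED check I use):

```python
# Minimal sketch of the mail-filter rule, assuming the daily status
# message embeds `zpool status` output. Messages mentioning a fault
# state get through to the inbox; everything else is filed as routine.
PROBLEM_WORDS = ("DEGRADED", "FAULTED", "UNAVAIL")

def is_problem_report(message_body: str) -> bool:
    """Return True if the status email needs human attention."""
    return any(word in message_body for word in PROBLEM_WORDS)

routine = "pool: tank\n state: ONLINE\n scan: scrub repaired 0"
trouble = "pool: tank\n state: DEGRADED\n one or more devices failed"

print(is_problem_report(routine))  # False - filed automatically
print(is_problem_report(trouble))  # True - lands in the inbox
```

In practice the same rule lives in the mail client as a simple "body contains DEGRADED" filter; the code above just makes the logic explicit.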
Check regularly for firmware updates, and back up your configuration before any update. If you get an "out of memory" error when doing an update, the likely cause is that your NAS has been running for a while and hard drive activity is using up most available RAM (this is normally a good thing - when the drives use RAM they are operating more efficiently). If you hit this problem, reboot your NAS and run the firmware upgrade immediately, before the hard drives start grabbing RAM.
Update #2
April 13, 2023
At this time the server is still running nicely, with most of the original parts (there have been some hard drive upgrades to increase capacity). It should be noted that NAS4Free has since been renamed XigmaNAS (currently 12.3.0.4).
The transition to XigmaNAS was trouble-free.
I continue to use the original furnace filter, which I clean once per year by blowing out the accumulated dust with an air compressor. I'm impressed that the original Rosewill cooling fans and the ThermalTake power supply continue to work flawlessly after running continuously for almost seven years.
Why I built this Server
This server was built to replace my old RAID storage system, which was running on my Ubuntu desktop. The old system had four hard drives (two mirrored sets) - the system was maxed out, no more drives could be added. In addition, there were many inconveniences to running what was essentially a combined desktop/server.
The new system (pictured below) can handle a total of eight hard drives. The box was built from birch faced plywood (and trimmed with red oak). The top of the box is a 14"x20" furnace filter. The new server will reside on the wire shelf shown in the picture (I rotated the server at a slight angle for the picture, to show the box more fully).
Update July 2016:
Installed all six currently available drives (still have slots to add two more drives). Drive temperatures are reporting in the 36 - 43 C range, which is acceptable.
Operating System
NAS4Free (10.2.0.2 - Prester (revision 2545))
Now XigmaNAS, which has replaced NAS4Free
System Hardware
[Note: Prices shown are June 2016 prices from NewEgg.com]
ASRock A88M-G/3.1 FM2+ / FM2 A88X (Bolton D4) Micro ATX Motherboard $60.99
Thermaltake SMART Series SP-750PCBUS 750W ATX 12V 2.3 80 PLUS BRONZE Certified Active PFC Power Supply $69.99
AMD A8-7600 Kaveri Quad-Core 3.1 GHz Socket FM2+ 65W AD7600YBJABOX Desktop Processor AMD Radeon R7 $79.99
Team Elite Plus 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model TPD316G1600C11DC01 $48.99
Rosewill ROCF-11003 - 140mm Computer Case Cooling Fan - Hydro Dynamic Bearing, Silent, 2 Rotation Speeds with PWM Control 2 @ $13.29 ea.
Drives
The data drives were set up as three sets of RAID1 (mirrored). See the addendum at the end explaining the choice of RAID1.
Using Cruzer Glide 16GB flash drive as system drive.
Note: Upgraded to a Cruzer 64 GB flash drive when I started getting "out of memory" errors when updating firmware
RAID set 1:
2 x ($198.99) TOSHIBA X300 HDWE160XZSTA 6TB 7200 RPM 128MB Cache SATA 6.0Gb/s 3.5" Desktop Internal Hard Drive Retail Kit
RAID set 2:
2 x 3TB 64MB Cache 7200RPM (Enterprise Grade) SATA 6.0Gb/s 3.5" Hard Drive, Model: WL3000GSA6472E
Note: Drives were purchased in June 2014 and have been running nearly 7x24 to date of this writing (July 2016)
Note 2: One drive failed in Nov. 2016; both drives were replaced with Toshiba 6 TB drives. The RAID set was rebuilt successfully and no data was lost.
RAID set 3:
WD Red 3TB NAS Hard Disk Drive - 5400 RPM Class SATA 6Gb/s 64MB Cache 3.5 Inch - WD30EFRX
Seagate Desktop HDD ST3000DM001 3TB 64MB Cache SATA 6.0Gb/s, 7200 RPM
Useful Links:
NAS4Free official site: http://www.nas4free.org/
Monitoring/alerting scripts: http://forums.nas4free.org/viewtopic.php?t=1223
How to Create and Maintain a ZFS Mirror in NAS4Free:
More Pictures
Here are some more pictures of the NAS box:
The partially finished box (note, the front piece with holes for the cooling fans is upside down in this picture). The design goal was to build a headless server (no monitor, keyboard, or mouse), with the top filter acting as the only air inlet. In practice, the back panel can be removed (held on by two screws), so that a monitor and keyboard can be temporarily attached for set-up.
In the design, the "front" of the box is the side with the power supply and cooling fans.
A look inside the box from the top, showing approximate motherboard placement (I'll be installing a right-angle USB connector in the near future, to enable the board to be moved further to the rear).
Note just left of center, above the motherboard, is one of the hard drive "hanger rails." The hanger rail is a piece of C-channel extruded aluminum, with slots milled in it to accept the hard drive mounting screws. The rail sits on two wooden rails at the front and back of the box, just under the intake filter. For the time being, the drives will simply hang from the rails; in the future, I may screw the rails to the frame. There will of course be a total of eight of these rails - six for the hard drives, and two for future expansion.
Close-up of the front panel of the nearly finished box. Note that a small "panel" has been added between the fans. The panel includes indicator LEDs for power and HD activity, and a reset button. I'm a little nervous about the reset button being so easily accessible (and possibly easy to bump into by accident), so I'll probably add some sort of safety feature in the future.
Planned Improvements
Although the NAS box is working as built, I plan to make a few improvements (eventually):
Re-machine rail slots so that drives can be moved further back in the case
Implement some sort of cable routing - either a compartment or some sort of internal rack
Improve cooling - not sure how just yet. Cooling is working, but is just barely within acceptable limits, so I'd like to improve airflow.
Other?
Some Modifications
[July 17, 2016]
As noted above, I have re-milled the drive hangers with additional slots, so the drives can be moved further toward the back.
This photo shows two of the drive rails (top of the picture) modified, with the rest of the rails to follow.
This modification does not appear to have any effect on cooling, but it does make it easier to route the wiring. Drive temperatures (as reported by NAS4Free) range from 33 to 41 deg. C. The two drives which are running hottest are positioned over the cooling fan for the CPU, so the CPU fan is likely blowing warm air on the drives. It may be possible to reposition the drives to either side of the fan, or to install some sort of deflector to alleviate this. However, for the time being the drives are still running within the desired temperature range.
[July 21, 2016]
Here is the cabinet with all drive hangers modified. Note that the drives have been re-arranged so that no drive sits directly above the CPU cooling fan. This re-arrangement had no effect on drive temperature - the two Western Digital drives still ran hottest (but within appropriate limits); apparently these two drives just run hotter.
Additional improvements:
Installed a right angle USB adapter for the system flash drive. This gives more clearance at the back of the board, which improves accessibility if I ever need to connect a keyboard and monitor.
Addendum - Choice of RAID Level
RAID1 was chosen for my NAS for two reasons. First, when using large SATA drives, there is a significant risk of being unable to recover the RAID set when (not if) a disk fails. See this link for an explanation:
Dangers of RAID 5 Array with SATA Drives
and this article further explains the benefits of mirroring:
ZFS: You should use mirror vdevs, not RAIDZ
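The risk the first article describes can be sketched with a back-of-envelope calculation. Assuming the commonly quoted unrecoverable read error (URE) rate of one error per 10^14 bits read for consumer SATA drives (my assumption, not a figure taken from either article), the odds of reading an entire large drive without a single error during a rebuild are surprisingly poor:

```python
# Back-of-envelope sketch: probability of a clean rebuild, assuming a
# URE rate of 1 error per 1e14 bits read (a common consumer-SATA spec).
URE_RATE = 1e-14
DRIVE_TB = 6                          # one surviving 6 TB drive

bits_to_read = DRIVE_TB * 1e12 * 8    # a rebuild reads the whole drive
p_clean = (1 - URE_RATE) ** bits_to_read

print(f"P(no URE while rebuilding from one {DRIVE_TB} TB drive) ~= {p_clean:.2f}")
```

With a mirror, only one surviving drive must be read in full; a RAID5 rebuild must read every surviving drive in the array, multiplying the exposure accordingly.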
Second, as noted above, when using a NAS for an extended period of time (in my case, continuously over a period of years), a drive will eventually fail - it is a question of when, not if. When this happens (in my experience, once every two or three years), my strategy is to replace both of the drives in the failed RAID set with drives of the "next largest size": 1 TB drives are replaced with 2 TB drives, 2 TB drives with 4 TB drives, and so on. Because hard drive capacity at a given price point has typically doubled every couple of years, I end up paying roughly the same price each time while steadily increasing capacity.
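That upgrade ladder, sketched as arithmetic (the years and the price point are hypothetical placeholders, not my actual purchase history):

```python
# Hypothetical replacement ladder: each failure prompts replacing the
# mirrored pair with the "next largest size" at roughly the same price.
capacity_tb = 1
price_usd = 100                       # assumed constant price point per drive
for year in (2014, 2016, 2018, 2020):
    print(f"{year}: 2 x {capacity_tb} TB @ about ${price_usd} each")
    capacity_tb *= 2                  # capacity per dollar roughly doubles
```

After a few failure/replace cycles the array has grown several-fold in capacity for a roughly flat per-drive spend, which is the point of always buying the next size up rather than a like-for-like replacement.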