Quiet Fan upgrades are now available!

Quiet Fan server upgrades are now available on most Tower and Rackmount server designs!  The following designs are eligible for the upgrade:

  • 12 Drive Eco
  • 12 Drive Pro
  • 15 Drive Eco
  • 15 Drive Pro
  • 20 Drive Eco Tower
  • 20 Drive Eco Rackmount
  • 20 Drive Pro Tower Green
  • 20 Drive Pro Tower Black
  • 20 Drive Pro Rackmount Green
  • 20 Drive Pro Rackmount Black
  • 22 Drive Pro

Note: The 6 Drive Eco, 6 Drive Pro, 9 Drive Eco, and 9 Drive Pro servers are not eligible for the upgrade.  Why?  Because they are already quiet!  No fan upgrade needed.

The upgrade costs $75, which covers parts, installation, and testing.  Fans included in a Quiet Fan server upgrade are covered under the same GreenLeaf warranty as the rest of the server hardware.  We are currently working on a new shopping cart system that will allow you to add a Quiet Fan upgrade to eligible server designs.  We hope to have the new shopping cart ready by the end of the month, but in the meantime you can order a Quiet Fan upgrade with your server by simply requesting it in the ‘comments’ section of your order.

A special thanks goes out to ProfQ of the unRAID forums, whose pioneering fan research and testing has influenced our own choice of fan upgrades!  You can read about his test results here.  We have replicated his tests in our own lab and come to the same conclusion – the Coolink Swif2-801 is the ideal choice of fan upgrade for any server that utilizes the Norco SS-500 or SS-400 drive cages.  Our rackmount server designs use different but equally quiet fans.

Coming soon: Silent Server upgrades!

3 TB Compatibility Testing

At the time of writing, the current stable release of unRAID is 4.7. While 4.7 is a great product, one of its limitations is that it supports only MBR partition tables, not GPT. In layman’s terms, that means it is incompatible with any hard drive larger than 2.2 TB. As 2.5 TB and 3 TB drives are on the market today at attractive price points, many unRAID users have switched to the latest unRAID beta (currently unRAID 5.0beta11), which supports both MBR and GPT. This means that a drive of any capacity can be used as a parity, data, or cache drive in the latest beta.

I recently got my hands on 15 of the fabulous 3 TB Hitachi DeskStar 5K3000 CoolSpin hard drives. These are green drives that spin at 5400 RPM and use a SATA III (6.0 Gb/s) interface. I took the opportunity to test as much hardware as I had available to me for 3 TB hard drive compatibility.
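To see where the 2.2 TB ceiling comes from: an MBR stores partition offsets and sizes as 32-bit sector counts, so with standard 512-byte sectors the largest addressable capacity works out as follows (a quick back-of-the-envelope calculation, nothing unRAID-specific):

```python
# MBR partition entries use 32-bit LBA fields; GPT uses 64-bit fields.
max_mbr_bytes = 2**32 * 512              # 4,294,967,296 sectors x 512 bytes each
print(f"{max_mbr_bytes / 1e12:.1f} TB")  # -> 2.2 TB
```

GPT’s 64-bit sector counts push the same limit out to roughly 9.4 ZB, which is why the 5.0 betas have no trouble with 3 TB drives. Here is the hardware I tested: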

Supermicro X7SLA-H with a built-in Intel Atom CPU and 2 GB of DDR2 533 RAM (2 x 1GB)
ZOTAC GF6100-E-E with an AMD Sempron 140 CPU and 2 GB of DDR2 800 RAM (1 x 2GB)
Biostar A880G+ with an AMD Sempron 140 CPU and 2 GB of DDR3 1333 RAM (1 x 2GB)
Supermicro X8SIL-F-O with an Intel i3-540 CPU and 4 GB of DDR3 1333 RAM (2 x 2GB)
Asus M4A78LT-M with an AMD Sempron 140 CPU and 2 GB of DDR3 1333 RAM (1 x 2GB)

I also tested the 2-port PCIe x1 SIL3132 card that we use in certain GreenLeaf builds. I decided not to run any thorough tests on the Supermicro AOC-SASLP-MV8 controller, as it has already been well established as fully compatible with 3 TB drives through tests conducted by other members of the unRAID community.

All tests were performed using unRAID 5.0beta10 (the latest beta available at the time) and Joe L.’s preclear script 1.12beta (the only version of preclear currently available that supports 3 TB drives). A hardware component passed the test if the attached drive precleared successfully and was then recognized by unRAID as an array drive. Here are the results:

| Motherboard | Backplane | SATA Controller | Result | Duration (HH:MM:SS) |
| Supermicro X7SLA-H | Norco SS-500 | Onboard | PASS | 48:49:35 |
| Supermicro X7SLA-H | Norco SS-500 | Onboard | PASS | 47:41:13 |
| Supermicro X7SLA-H | Norco SS-500 | Onboard | PASS | 48:28:24 |
| Supermicro X7SLA-H | Norco SS-500 | Onboard | PASS | 48:55:51 |
| Supermicro X7SLA-H | Norco SS-500 | SIL3132 | PASS | 54:47:52 |
| Supermicro X7SLA-H | Norco SS-500 | SIL3132 | PASS | 53:24:05 |
| ZOTAC GF6100-E-E | Kingwin 3-in-2 | Onboard | PASS | 41:29:50 |
| ZOTAC GF6100-E-E | Kingwin 3-in-2 | Onboard | PASS | 42:06:13 |
| ZOTAC GF6100-E-E | Kingwin 3-in-2 | Onboard | PASS | 40:28:49 |
| ZOTAC GF6100-E-E | Kingwin 3-in-2 | Onboard | PASS | 43:02:11 |
| ZOTAC GF6100-E-E | Kingwin 3-in-2 | SIL3132 | PASS | 46:49:55 |
| ZOTAC GF6100-E-E | Kingwin 3-in-2 | SIL3132 | PASS | 47:30:24 |
| Biostar A880G+ | Top Dock | Onboard | PASS | 41:58:44 |
| Biostar A880G+ | None | Onboard | PASS | 42:25:21 |
| Biostar A880G+ | None | Onboard | PASS | 41:36:09 |
| Biostar A880G+ | None | Onboard | PASS | 42:44:09 |
| Supermicro X8SIL-F-O | None | Onboard | PASS | 42:12:56 |
| Supermicro X8SIL-F-O | None | Onboard | PASS | 42:09:42 |
| Supermicro X8SIL-F-O | None | Onboard | PASS | 41:22:09 |
| Supermicro X8SIL-F-O | None | Onboard | PASS | 37:45:11 |
| Supermicro X8SIL-F-O | None | Onboard | PASS | 38:29:49 |
| Supermicro X8SIL-F-O | None | Onboard | PASS | 41:11:57 |
| Asus M4A78LT-M | None | Onboard | PASS | 49:01:51 |
| Asus M4A78LT-M | None | Onboard | PASS | 48:44:29 |
| Asus M4A78LT-M | None | Onboard | PASS | 48:38:10 |
| Asus M4A78LT-M | None | Onboard | PASS | 48:01:27 |
| Asus M4A78LT-M | None | Onboard | PASS | 47:44:05 |
| Asus M4A78LT-M | None | Onboard | PASS | 45:58:00 |

The good news is that every single piece of hardware I tested is fully compatible with the Hitachi 3 TB drives. However, as you can see from the duration results above, some of the preclear cycles were slower than others. At first I thought that certain SATA controllers were slower than others. Here’s a quick analysis of that hypothesis:

| Motherboard | SATA Controller | Average Duration (hours) |
| Supermicro X7SLA-H | Onboard | 48.48 |
| Supermicro X7SLA-H | SIL3132 | 54.10 |
| ZOTAC GF6100-E-E | Onboard | 41.78 |
| ZOTAC GF6100-E-E | SIL3132 | 47.17 |
| Biostar A880G+ | Onboard | 42.18 |
| Supermicro X8SIL-F-O | Onboard | 40.53 |
| Asus M4A78LT-M | Onboard | 48.02 |

The slowest combination was the Supermicro X7SLA-H with the SIL3132 controller, at an average duration of 54.1 hours. This happens to be the motherboard with the slowest CPU and slowest RAM as well. The fastest set of hardware was the Supermicro X8SIL-F-O, at an average duration of 40.5 hours. This also happened to be the motherboard with the fastest CPU and the most RAM. I believe these test results show that when preclearing 3 TB drives, the speed of the CPU and the amount of RAM installed matter more than the SATA controller being used. Given this revised hypothesis, here’s the take-home analysis of the data:

| CPU | Amount of RAM | RAM Speed | Channels | Average Duration (hours) |
| Atom | 2 GB | DDR2 533 | Dual | 51.29 |
| Sempron 140 | 2 GB | DDR2 800 | Single | 44.47 |
| Sempron 140 | 2 GB | DDR3 1333 | Single | 45.10 |
| i3-540 | 4 GB | DDR3 1333 | Dual | 40.53 |
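
If you want to reproduce these averages from the raw results table, here is a minimal sketch (shown for the Supermicro X7SLA-H rows; each configuration is averaged first, then the configurations are averaged per platform, which is how the second table is derived from the first):

```python
# Convert each preclear duration (HH:MM:SS) to hours, then average.
def to_hours(hms: str) -> float:
    h, m, s = (int(x) for x in hms.split(":"))
    return h + m / 60 + s / 3600

def mean(xs: list) -> float:
    return sum(xs) / len(xs)

# Supermicro X7SLA-H (Atom) rows from the results table above.
onboard = mean([to_hours(t) for t in ["48:49:35", "47:41:13", "48:28:24", "48:55:51"]])
sil3132 = mean([to_hours(t) for t in ["54:47:52", "53:24:05"]])

print(f"Onboard: {onboard:.2f} h, SIL3132: {sil3132:.2f} h, platform: {mean([onboard, sil3132]):.2f} h")
# -> Onboard: 48.48 h, SIL3132: 54.10 h, platform: 51.29 h
```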

If you plan to preclear a lot of 3 TB drives, more RAM and a faster processor can help speed up the process by as much as 10 hours.

Testing hardware for unRAID compatibility

Introduction
MurrayW of the unRAID forums sent me a Broadcom RAIDCore BC4000 SATA/RAID Controller card and asked me to test it for unRAID compatibility. This card has 8 SATA I ports (1.5 Gb/s) on a PCI-X interface, and according to eBay it is worth about $75. Based on these specs alone I knew it wouldn’t be terribly useful to the average unRAID user, but I agreed to test it anyway. I then decided to write up this article about the testing process as it is a good chance to explain how we at GreenLeaf Technology go about testing new hardware. This article is more about our rigorous testing procedures than it is about the hardware itself.

Without further ado, here’s the card:

The Broadcom card in action

There are only six SATA cables in that picture; you’ll have to take my word for it that the card actually has eight ports (all of which were tested). First, let’s analyze the specs – PCI-X is an older format typically found only on server motherboards, and it is very difficult to find a modern server-class or consumer-grade motherboard that supports it. If you can find an older motherboard that does, the interface is very fast. I am fully confident that this card paired with a PCI-X motherboard would make a great unRAID server. However, I didn’t have a PCI-X motherboard handy. The good news is that PCI-X cards will run in standard PCI slots, albeit at significantly slower speeds. The picture above depicts the card plugged into a PCI slot – notice that a good number of the gold connectors on the card hang off the end of the slot, not connected to anything. This card (and most PCI-X cards) still works fine in this configuration, just at limited speed. When running a PCI-X card in a PCI slot long term, it is a good idea to cover the exposed gold connectors with electrical tape to prevent accidental short-circuiting.
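For a rough sense of the gap, compare the theoretical peak bandwidth of the two buses (these are the standard published figures, not measurements from this test):

```python
# Theoretical peak bus bandwidth = bus width in bytes x clock rate.
pci = 4 * 33.33e6      # 32-bit PCI at 33 MHz    -> ~133 MB/s, shared by every device on the bus
pcix = 8 * 133.33e6    # 64-bit PCI-X at 133 MHz -> ~1,067 MB/s

print(f"PCI:   {pci / 1e6:.0f} MB/s")
print(f"PCI-X: {pcix / 1e6:.0f} MB/s ({pcix / pci:.0f}x PCI)")
```

Eight drives sharing a single ~133 MB/s bus is what explains the slow parity operations you’ll see later in this article.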

Testing
I tested the Broadcom card on four different motherboards:

Biostar A760G M2+

Biostar A880G+

Supermicro C2SEE

Supermicro X7SLA-H

The Biostar boards are both consumer-grade, whereas the Supermicros are considered server class. These motherboards also represent a good range of newer and older features. The Biostar A880G+ uses the modern AMD AM3 CPU socket and the brand new 880G chipset. The Supermicro X7SLA-H uses a modern Intel Atom processor and the 945GC chipset. The Biostar A760G M2+ uses the older-generation AMD AM2+ CPU socket and the older 760G chipset. The Supermicro C2SEE uses the older Intel LGA 775 CPU socket and ICH10 chipset. I figured that if the Broadcom card worked on this diverse set of motherboards, then it was very likely to work on any motherboard that the average unRAID user would care to try.

As none of these motherboards have PCI-X slots, the card was used in a standard PCI slot on each board. Normally I would test all eight SATA ports simultaneously, but as I was short a few test drives after a recent round of hard drive failures, I had only four drives at my disposal. I used a simple round-robin method to make sure that all eight ports on the card worked on every motherboard. I specifically tested the following:

1) Boot the server and verify that the drives are detected by unRAID

2) Run a single pass of preclear on each drive through the Broadcom card

3) Build an array and run a parity sync

4) Run a parity check and verify that there are no errors

5) Transfer data to and from drives in the array

6) Analyze the syslog to look for any oddities (a rough sketch of this check appears after the list)

7) Verify compatibility with the latest stable and beta versions of unRAID

8) Confirm that drive spin up and spin down work properly
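
For step 6, the first pass is just a keyword scan of the syslog before reading it through by hand. Here’s a minimal sketch of that kind of scan – the keyword list is my own illustrative choice, not an official unRAID tool:

```python
# Rough first pass at step 6: flag syslog lines that commonly indicate
# drive or controller trouble. Keywords are illustrative, not exhaustive.
import re

SUSPICIOUS = re.compile(r"error|timeout|reset|failed|DRDY|exception", re.IGNORECASE)

def scan_syslog(path: str = "/var/log/syslog") -> None:
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if SUSPICIOUS.search(line):
                print(f"{lineno}: {line.rstrip()}")

scan_syslog()  # unRAID keeps its syslog at /var/log/syslog
```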

Results Summary
1) PASS

2) PASS

3) PASS, but slow

4) PASS, but slow

5) PASS, but slow when multiple drives accessed at once

6) PASS

7) PASS

8) PASS

Detailed Results
I won’t bore you with the details of how the card ran on each motherboard, since it ran essentially the same on all four. However, there was one exception: the card did not work reliably on the Supermicro X7SLA-H. Sometimes it would work as expected, sometimes it wouldn’t work at all. Further testing revealed that the board itself was defective, so I can’t blame the Broadcom card for misbehaving on a flaky motherboard. Unfortunately, that means my array of test motherboards was reduced by one.

The card proved itself compatible with both unRAID 4.7 and 5.0beta6a, so unRAID must have the appropriate drivers built in. On each of the three working motherboards, the Broadcom card worked perfectly. It passed all eight tests with flying colors. Well, let me temper that a bit – it passed all tests with no errors, but I was by no means impressed with the card’s performance. This is to be expected – even just four drives on a single PCI bus suffer a significant bandwidth bottleneck. I expected this card to be slow, and I was not disappointed.

Below are some screenshots of the four test drives in a parity sync (step 3 above) at various stages of completion (click each for an enlarged version). Notice that the parity sync does not pick up speed after the smaller 640 GB drive is no longer involved, as the sync is bottlenecked by the PCI bus the entire time.

Just starting, 0.1% complete

About half way, 52.8% complete

Over three-quarters done, 78.8% complete

Almost there, 97.2% complete

According to the syslog, the parity sync finished in 93449 seconds. That’s nearly 26 hours, ouch! Also recorded in the syslog is the average parity sync speed of 20904K/sec, which translates to about 20 MB/s. Keep in mind this is using only four drives, just half the card’s capacity. More drives would take even longer, while fewer would finish somewhat sooner. Parity checks will take about the same amount of time as a parity sync. For comparison, the same parity sync using the same four drives on a modern PCIe SATA controller card, such as those used in GreenLeaf servers, would average about 60 MB/s – roughly three times as fast – and complete in about 9 hours.
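A quick sanity check of those syslog figures (assuming the syslog’s “K” means KiB):

```python
# Sanity-check the parity sync figures quoted from the syslog.
sync_seconds = 93_449
avg_speed_kib = 20_904                     # "20904K/sec", assumed to be KiB/s

hours = sync_seconds / 3600                # -> 26.0 hours
mib_per_s = avg_speed_kib / 1024           # -> 20.4 MB/s
parity_gib = sync_seconds * avg_speed_kib / 1024**2

print(f"{hours:.1f} h at {mib_per_s:.1f} MB/s covers ~{parity_gib:.0f} GiB")
# -> 26.0 h at 20.4 MB/s covers ~1863 GiB, consistent with a 2 TB parity drive
```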

Regarding step 5 of the testing procedures, data transfers to individual drives on the card (either through disk shares or user shares with no cache drive involved) were completely normal at 25–35 MB/s. This is expected behavior – the PCI bus bottleneck is only an issue when more than one drive is accessed. The bus has enough bandwidth for a single drive to operate at full speed, so there’s no slowdown when transferring data to a single drive during normal operation. However, to demonstrate the PCI bus limitation when multiple drives are being written to, I created a special situation: a user share that included all three data disks and used the ‘most free’ disk allocation method. This forces data transferred to the share to be written across all three disks in round-robin fashion, which should saturate the PCI bus and result in a much slower transfer. I transferred a large folder containing many files of varying size, from tiny documents and mp3s to large HD video files and disc images. The folder totaled 180 GB and comprised 2,779 individual files spread across 193 folders – a lot of files to be split up across the three data drives. The transfer took place across an all-gigabit network with no other network traffic, and the source drive was a 7200 rpm 500 GB Seagate drive in a Windows 7 computer. The results were clear: the transfer took 6 hours and 26 minutes to complete, at an average speed of 7.96 MB/s – one third or less the speed of a transfer to a single drive.
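The reported average checks out against the transfer size and duration (using binary gigabytes, which is evidently what the file-transfer dialog reported):

```python
# Verify the multi-drive transfer result: 180 GB in 6 h 26 min.
seconds = 6 * 3600 + 26 * 60
mb_per_s = 180 * 1024 / seconds   # 180 GiB expressed in MiB

print(f"{mb_per_s:.2f} MB/s")     # -> 7.96 MB/s, matching the reported average
```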

Conclusion
I would recommend this Broadcom card without hesitation to any unRAID user with a PCI-X motherboard as a fast and reliable way to gain an extra 8 SATA ports. For someone with only a PCI slot, all hopes for speed when multiple drives are accessed at once must be cast to the wind, but reliability is not sacrificed. If you want a slow and steady card, the Broadcom is a good choice. It has proved itself 100% compatible with unRAID and a slew of modern and older motherboards and chipsets. Also keep in mind that in typical day-to-day use of a server, only a single drive is read or written at a time, and in that case the PCI bus imposes no limitation. Using a cache drive will also help alleviate the painfully slow write performance, as writes are deferred until a later time.

I believe this card has its place in the unRAID community, though it may be a little too late to be helpful to most.

Stephen

5-in-3 Hot Swap Drive Cage Review

In this rant I’ll be comparing three hot swap 5-in-3 drive cages from the big names in the server hardware world – Supermicro, Icy Dock, and Norco.  Note that Norco is the American face of the company, and is rebranded as X-Case in Europe.

Here are the products that have come under my scrutiny:

I’ll abbreviate these as simply Supermicro, Icy Dock, and Norco.

Here are the categories under which I will rate these units:

  • Price in USD (including shipping)
  • Build Quality
  • Airflow
  • Fan Noise and Quality
  • Ease of Fan Replacement
  • Ease of Installation
  • Drive Tray Quality
  • Aesthetics

Brief Reviews

Supermicro

  • Price – $113.50
  • Build Quality – Decent
  • Airflow – Good
  • Fan – Poor.  Requires replacement (fan size: 92mm)
  • Ease of Fan Replacement – Excellent
  • Ease of Installation – Poor
  • Drive Tray Quality – Poor
  • Aesthetics – Decent

Icy Dock

  • Price – $128.98
  • Build Quality – Excellent
  • Airflow – Good
  • Fan – Good  (fan size: 80mm)
  • Ease of Fan Replacement – Good
  • Ease of Installation – Excellent
  • Drive Tray Quality – Good
  • Aesthetics – Excellent

Norco

  • Price – $100.17
  • Build Quality – Excellent
  • Airflow – Excellent
  • Fan – Good  (fan size: 80mm)
  • Ease of Fan Replacement – Poor
  • Ease of Installation – Good
  • Drive Tray Quality – Excellent
  • Aesthetics – Good

Full Reviews

Supermicro
Of these three drive cages, the Supermicro is the only truly server-class unit.  It has some advanced features, such as temperature alarms and fan-failure warnings.  As such, these units are intended to be installed in server-class cases, not the consumer-class cases that I often use in my server designs.

That said, I’m going to stop using these cages.  First of all, the stock fans are insanely loud.  Even if you were hiding your server away in a closet, I expect these fans would still be audible.  I replaced the stock fans with these Gelid 92mm fans for about $13 each.  Add that to the cost of the cage and you are at $126.50, roughly the same price as the Icy Dock cages.  Replacing the fans is very easy – the plastic fan holster simply clips onto the back of the cage with no screws to remove; the fan then just has to be unscrewed from the holster and swapped.  It is probably the simplest fan replacement process of any of these drive cages.  The funny thing is, the other two cages don’t need replacement fans, so this really doesn’t count for much in the scope of this review.  After replacing the fan, the airflow through this cage is perfectly adequate (green drives will stay in the low 30s °C).  I should also note that the larger 92 mm fan will provide better cooling if hotter 7200 rpm drives are used; for green drives, the 80 mm fans used in the Icy Dock and Norco cages are fine.  The Supermicro drive trays also come with ‘dummy drives’ (plastic trays that could be used to store spare screws, etc.) that block airflow through empty drive bays.

My second gripe with the Supermicro cage is the difficulty of installation into a standard consumer-class case.  I’ll preface this by saying that most cases require you to flatten the tabs separating each 5.25″ bay before any 5-in-3 drive cage will fit; I accomplish this with a deep C clamp, like this.  Since 5.25″ bays are a standard width, these cages should fit into any case easily.  However, these cages must be a millimeter or so too wide, because they require he-man strength to force into a 5.25″ bay.  They will fit, but be prepared for a lot of pushing, shoving, cursing, and metal-on-metal screeching.  I highly recommend that you remove your case internals before attempting this, as it isn’t difficult to slip and break something fragile.

My final quibble is with the quality of the drive trays.  Trays are removed from the cage using a thin plastic handle that feels like it is going to break if I put any pressure on it.  The problem is exacerbated by the fact that the trays slide in and out of the cage with a lot of resistance, probably because the exterior of the cage is under constant pressure from being slightly too large for the space it is in.  Another issue with the drive tray design is that if you accidentally close the tray handle before the drive is fully seated in the cage, it is somewhat difficult to get it to pop open again.  The release button doesn’t work as consistently as I would like.

Overall Impression: Fair

Icy Dock
I genuinely like these cages, but they are just a bit too expensive.  Let’s start off with the aesthetics – these cages look awesome.  They are very attractive and complement a fancy case very nicely (such as the Antec 902 in the picture below).  When all drive trays are full, airflow is excellent – all drives will stay in the 30s °C.  However, the Icy Dock does lose a few points on airflow, as it has no way of blocking airflow through empty drive bays!

The best aspect of the Icy Dock cage is that the stock fan is good quality and doesn’t need to be replaced (in my opinion, at least).  The stock fan definitely isn’t silent, but it is within acceptable noise levels to my ears; if the server were tucked away in a closet, I doubt you would hear the fans.  If you do want to replace the fan, the process is basically the same as on the Supermicro cage.  The only difference is that the fan plugs into the cage inside the fan cover, which makes replacement slightly more difficult, as you have to make sure the fan plug lines up with the pins correctly.

The Icy Dock cage slides nice and easy into a case (with the 5.25″ tabs flattened as described above).  Secure it with a few screws and you are good to go.  The drive tray quality is excellent as well.  The handle is metal and feels sturdy.  The unlatching mechanism works reliably.  My only minor gripe with the drive trays is that they simply have too many screw holes.  It can be confusing where you are supposed to screw the hard drive into the drive tray, and if you choose the wrong set of screw holes the drive won’t line up in the cage correctly.  There may be a bit of trial and error as you first get used to these trays.  Why Icy Dock would choose to include a bunch of extra screw holes is beyond me – hard drive screw spacing has been standardized for years.

Overall Impression: Good

Norco
We have a winner!  The Norco is by far my favorite of this set of drive cages.

The Good:
First of all, there is the price – these cages are significantly cheaper than either of the others.  The included fans are good quality, quiet enough, and don’t need to be replaced.  Airflow through the cage is excellent (drives stay in the 30s °C), and best of all each drive tray has a metal slide that allows you to close off airflow to empty bays.  Installation is easy – the cage slides in and out of the case nicely, as it should (again, the 5.25″ tabs need to be flattened beforehand).

The drive tray quality is excellent, better than any of the competition in my opinion.  If you have used any of the Norco rackmount cases (4220, 4224, etc.) you are already familiar with these drive trays – as far as I can tell they are identical, and in fact the trays are interchangeable with those in the Norco rackmount cases.  The trays are sturdy and have a solid plate of metal across the bottom to protect the drive’s circuit board.  They slide in and out of the drive cage nicely, and the latch is reliable.  The tray handle is just plastic, not metal, but it is thick enough that it feels sturdy.  Also, there are plastic tabs behind the handle that can be used to pull the tray out of the cage if it feels a bit stiff.

Aesthetically, I think these cages look pretty good.  The four rivets on the corners detract from the clean black plastic look of the rest of the cage in my opinion, but this is quite minor – you could black them out with a marker if desired.

The Bad:
Nothing in life is perfect, and these cages are no exception.  While my impression of these cages is overwhelmingly positive, there are a few significant issues that I need to point out.  First, there’s the placement of the screw holes on the exterior of the drive cage (not the drive trays, those are fine).  I ordered four of these Norco drive cages within the past couple of weeks – the first for initial testing, then three more for a client build once I decided I liked the first one.  While it makes no sense, here’s what I found: the screw hole locations are not the same across the different cages!  Two of the cages have two pairs of screw holes towards the top and middle of the cage and one pair at the bottom.  The other two cages have the opposite – one pair at the top and two pairs at the middle and bottom.  In most cases this wouldn’t matter at all, since you aren’t going to use all the screw holes anyway, just two to four on each side.  However, in the case I’m using for this client’s build (the Lian Li PC-P80), it meant that three of the cages had to be touching, even forcibly touching, towards the bottom of the case, while the fourth sits by itself at the top with a small gap between it and the group of three (look closely at the picture below and you’ll see a slightly larger line between the top and second cage as opposed to all the others).  At a glance you probably wouldn’t notice it, but it still bothered me.  Why don’t all four cages have identical screw patterns?  Perhaps I got two cages from an old batch and two from a new batch, or something like that.  Anyway, it is definitely something worth mentioning.

The second issue I ran into with these cages is that the screws that secure the cage to the case actually penetrate into the outermost drive chambers.  If you use screws that are too long (or if the case walls are too thin, which I believe is the case with the Lian Li I’m using), the tips of the screws can rub against and scratch the hard drives (possibly voiding the warranty!), or in some cases even prevent a drive from being inserted into the cage.  I haven’t had this issue when using these Norco cages with cases that have thicker walls.  If you run into it and can’t find shorter screws, you can add a small washer on the outside of the case wall to force the screw to sit at the right depth.

Overall Impression: Excellent