Testing hardware for unRAID compatibility

Introduction
MurrayW of the unRAID forums sent me a Broadcom RAIDCore BC4000 SATA/RAID controller card and asked me to test it for unRAID compatibility. The card has 8 SATA I ports (1.5 Gb/s) on a PCI-X interface, and according to eBay it is worth about $75. Based on those specs alone I knew it wouldn't be terribly useful to the average unRAID user, but I agreed to test it anyway. I then decided to write up the testing process, since it's a good chance to explain how we at GreenLeaf Technology go about testing new hardware. This article is more about our rigorous testing procedure than it is about the hardware itself.

Without further ado, here’s the card:

The Broadcom card in action

There are only six SATA cables in that picture; you'll have to take my word for it that the card actually has eight ports (all of which were tested). First, let's analyze the specs. PCI-X is an older interface typically found only on server motherboards, and it is very difficult to find on a modern server-class or consumer-grade board. If you can find an older motherboard that supports it, it is very fast. I am fully confident that this card paired with a PCI-X motherboard would make a great unRAID server. However, I didn't have a PCI-X motherboard handy. The good news is that PCI-X cards will run in standard PCI slots, albeit at significantly slower speeds. The picture above shows the card plugged into a PCI slot; notice that a good number of the gold connectors on the card hang off the end of the slot, not connected to anything. This card (and most PCI-X cards) still works fine in this configuration, just with limited speed. When running a PCI-X card in a PCI slot long term, it is a good idea to cover the exposed gold connectors with some electrical tape to prevent accidental short circuits.
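To put rough numbers on that speed limitation, here is a quick back-of-the-envelope sketch in Python. It assumes a standard 32-bit/33 MHz PCI slot with a theoretical shared bandwidth of about 133 MB/s; the single-drive rate is a placeholder assumption, not a measured figure.

    # Rough estimate of per-drive throughput when several drives share one PCI bus.
    # Assumes a standard 32-bit/33 MHz PCI slot: ~133 MB/s theoretical, shared by
    # every device on the bus. Real-world PCI throughput is typically lower still.

    PCI_BUS_MBPS = 133   # theoretical ceiling for 32-bit/33 MHz PCI (shared)
    DRIVE_MBPS = 80      # placeholder sustained rate for a single SATA drive

    for active_drives in (1, 2, 4, 8):
        per_drive = min(DRIVE_MBPS, PCI_BUS_MBPS / active_drives)
        print(f"{active_drives} active drive(s): ~{per_drive:.0f} MB/s each, at best")

Even at the theoretical ceiling, four simultaneously active drives get roughly 33 MB/s each, which is consistent with the slow parity sync speeds reported below.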

Testing
I tested the Broadcom card on four different motherboards:

Biostar A760G M2+

Biostar A880G+

Supermicro C2SEE

Supermicro X7SLA-H

The Biostar boards are both consumer-grade, whereas the Supermicros are considered server class. These motherboards also represent a good range of newer and older features. The Biostar A880G+ uses the modern AMD AM3 CPU socket and the brand-new 880G chipset. The Supermicro X7SLA-H uses a modern Intel Atom processor and the 945GC chipset. The Biostar A760G M2+ uses the older-generation AMD AM2+ CPU socket and the older 760G chipset. The Supermicro C2SEE uses the older Intel LGA 775 CPU socket and ICH10 chipset. I figured that if the Broadcom card worked on this diverse set of motherboards, it was very likely to work on any motherboard the average unRAID user would care to try.

As none of these motherboards has a PCI-X slot, the card was tested in the PCI slot of each board. Normally I would test all eight SATA ports simultaneously, but I was short a few test drives after a recent round of hard drive failures and had only four at my disposal. Instead, I used a simple round-robin method (sketched after the checklist below) to make sure that all eight ports on the card worked on every motherboard. I specifically tested the following:

1) Boot the server and verify that the drives are detected by unRAID

2) Run a single pass of preclear on each drive through the Broadcom card

3) Build an array and run a parity sync

4) Run a parity check and verify that there are no errors

5) Transfer data to and from drives in the array

6) Analyze the syslog to look for any oddities

7) Verify compatibility with the latest stable and beta versions of unRAID

8) Confirm that drive spin up and spin down work properly
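For the curious, here is a minimal sketch of the round-robin port rotation, assuming the simplest scheme: shift the four drives to the next group of ports on each pass so every port is exercised at least once per motherboard. The drive labels are hypothetical placeholders.

    # Round-robin sketch: cover all 8 ports with only 4 test drives by moving
    # the drives to the next group of ports on each pass.

    PORT_COUNT = 8
    drives = ["A", "B", "C", "D"]  # hypothetical labels for the 4 test drives

    passes = PORT_COUNT // len(drives)  # two passes cover every port once
    for p in range(passes):
        for i, drive in enumerate(drives):
            port = p * len(drives) + i
            print(f"pass {p + 1}: port {port} -> drive {drive}")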

Results Summary
1) PASS

2) PASS

3) PASS, but slow

4) PASS, but slow

5) PASS, but slow when multiple drives accessed at once

6) PASS

7) PASS

8) PASS

Detailed Results
I won’t bore you with the details of how the card ran on each motherboard, since it ran essentially the same on all four. There was one exception: the card did not work reliably on the Supermicro X7SLA-H. Sometimes it would work as expected, sometimes it wouldn’t work at all. Further testing revealed that the motherboard itself was defective, so I can’t blame the Broadcom card for misbehaving on a flaky board. Unfortunately, that reduced my array of test motherboards by one.

The card proved itself compatible with both unRAID 4.7 and 5.0beta6a, so unRAID must have the appropriate drivers built in. On each of the three working motherboards, the Broadcom card worked perfectly, passing all eight tests with flying colors. Well, let me temper that a bit: it passed every test with no errors, but I was by no means impressed with the card’s performance. This is to be expected; even just four drives on a single PCI bus face a significant bandwidth bottleneck. I expected this card to be slow, and I was not disappointed.

Below are some screenshots of the four test drives during a parity sync (step 3 above) at various stages of completion. Notice that the parity sync does not pick up speed after the smaller 640 GB drive drops out, because the sync is bottlenecked by the PCI bus the entire time.

Just starting, 0.1% complete

About half way, 52.8% complete

Over three-quarters done, 78.8% complete

Almost there, 97.2% complete

According to the syslog, the parity sync finished in 93449 seconds. That’s nearly 26 hours, ouch! Also recorded in the syslog is the average parity sync speed of 20904K/sec, which works out to about 20 MB/s. Keep in mind this is using only four drives, just half the card’s capacity; more drives would take even longer, while fewer would finish faster. A parity check takes about the same amount of time as a parity sync. For comparison, the same parity sync using the same four drives on a modern PCIe SATA controller card, such as those used in GreenLeaf servers, would run roughly three times as fast: it would complete in about 9 hours and average around 60 MB/s.
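For anyone who wants to check the arithmetic, here is the conversion from the syslog figures, assuming unRAID’s “K/sec” means KB/s with 1024 KB to the MB:

    # Convert the syslog's parity sync figures into friendlier units.
    # Assumes "K/sec" in the syslog means KB/s, and 1024 KB per MB.

    sync_seconds = 93449
    sync_speed_k = 20904

    print(f"duration: {sync_seconds / 3600:.1f} hours")      # -> 26.0 hours
    print(f"average speed: {sync_speed_k / 1024:.1f} MB/s")  # -> 20.4 MB/s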

Regarding step 5 of the testing procedure, data transfers to individual drives on the card (through either disk or user shares, with no cache drive involved) were completely normal at 25–35 MB/s. This is expected behavior: the PCI bus bottleneck only matters when more than one drive is accessed, and the bus has enough bandwidth for a single drive to operate at full speed, so there is no slowdown when transferring data to a single drive during normal operation.

However, to demonstrate the PCI bus limitation when multiple drives are written to at once, I created a special situation. I created a user share that included all three data disks and used the ‘most free’ disk allocation method. This forces data transferred to that share to be written to all three disks in round-robin fashion, which should saturate the PCI bus and result in a much slower transfer. I transferred a large folder containing many files of varying size, from tiny documents and mp3s to large HD video files and disc images. The folder totaled 180 GB and comprised 2,779 individual files spread across 193 folders; that’s a lot of files to split up across the three data drives. The transfer took place over an all-gigabit network with no other network traffic, and the source drive was a 7200 rpm 500 GB Seagate data drive in a Windows 7 computer. The results were clear: the transfer took 6 hours and 26 minutes to complete, with an average transfer speed of 7.96 MB/s, a third or less of the speed of a transfer to a single drive.
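As a sanity check on those numbers (assuming 1024 MB per GB, which appears to be the convention behind the reported average):

    # Sanity-check the multi-drive transfer: 180 GB in 6 hours 26 minutes.
    # Assumes 1024 MB per GB, which matches the reported average almost exactly.

    total_mb = 180 * 1024
    duration_s = 6 * 3600 + 26 * 60

    print(f"average: {total_mb / duration_s:.2f} MB/s")  # -> 7.96 MB/s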

Conclusion
I would recommend this Broadcom card without hesitation to any unRAID user with a PCI-X motherboard as a fast and reliable way to gain an extra 8 SATA ports. For someone limited to a PCI slot, any hope of speed during multi-drive access must be abandoned, but reliability is not sacrificed; if you want a slow and steady card, the Broadcom is a good choice. It has proved itself 100% compatible with unRAID and a slew of modern and older motherboards and chipsets. Also keep in mind that in common day-to-day use of a server, only a single drive is read or written at a time, and in that case the PCI bus imposes no limitation. Using a cache drive in the server will also help alleviate the painfully slow write performance, as writes are deferred until a later time.

I believe this card has its place in the unRAID community, though it may be a little too late to be helpful to most.

Stephen
