- I work from home so I end up needing to run a home lab.
- I tend to be conservative about what data I throw out.
- I backup a lot of my data.
- I keep offline archives of data that doesn’t require regular backups.
Between all this, I have somewhere on the order of 100TB of raw storage floating around at home*.
Over time that’s resulted in me buying a few different storage units.
Years ago, I went for a Drobo 2, the Firewire-800/USB-2 unit capable of taking up to 4 x 2TB hard drives. That’s long since been retired from primary production use, but these days it provides a fairly stable slab of Time Machine storage via a Mac Mini OS X server.
I retired the Drobo 2 by replacing it with the Drobo 5S and populated it with 5 x 7200RPM 2TB drives. That gets used for multimedia – videos, iTunes library, etc.
Over time, as I accumulated more data and my partner switched from having both a desktop and a laptop to a laptop only, I eventually caved in to the notion of a home NAS and purchased a Synology 1513+ with 5 x 7200RPM 3TB drives; that’s sitting around half full now.
Finally, recently I found myself needing to refresh my virtual lab. It had been running off my Mac Pro with 3 x 2TB 7200RPM drives inside in RAID-0 for maximum speed, but the Mac Pro was just old enough that its CPUs didn’t support VT-x for ESX within Fusion. Not having the money to buy a new Mac Pro, I decided to go for an interim 27″ iMac with a Quad Core i7 processor, 32GB of RAM and most importantly, Thunderbolt2.
To make the most of Thunderbolt2 with a limited budget, I purchased the Promise Pegasus2 4-drive enclosure and populated it with 4 x 7200RPM 2TB drives.
Suddenly I found myself idly running various performance tests against all 3 RAID units as well as the fast internal SSD on the iMac and a new 4TB Seagate USB-3 hard drive connected to the machine. After running a few ad-hoc tests I decided to start capturing the information.
I came up with 3 core tests:
- Read/Write a 64GB file:
- In 1MB block sizes
- In 512KB block sizes
- In 256KB block sizes
- In 128KB block sizes
- In 64KB block sizes
- In 32KB block sizes
- Untar/Tar a dense filesystem structure:
- 30,458 MB spread across 65,411 files/directories
- Create a uniform filesystem structure of 38GB:
- 12 directories, 32 subdirectories of each parent, with 10 x 10MB files written in 4KB blocks.
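For anyone curious about the mechanics, the big-file test boils down to something like the following – a minimal Python sketch rather than the actual harness I used, with the file scaled down from 64GB to 64MB for illustration:

```python
import os
import time

def write_test(path, total_bytes, block_size):
    """Write total_bytes to path in block_size chunks; return MB/s."""
    block = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(block)
            written += block_size
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the disk
    return (total_bytes / (1024 * 1024)) / (time.perf_counter() - start)

def read_test(path, block_size):
    """Read path back in block_size chunks; return MB/s.
    (Without purging the OS cache first, much of this comes from RAM.)"""
    total = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    return (total / (1024 * 1024)) / (time.perf_counter() - start)

# 64MB here rather than the 64GB used in the real tests
for bs_kb in (1024, 512, 256, 128, 64, 32):
    w = write_test("testfile.bin", 64 * 1024 * 1024, bs_kb * 1024)
    r = read_test("testfile.bin", bs_kb * 1024)
    print(f"{bs_kb}KB blocks: write {w:.1f} MB/s, read {r:.1f} MB/s")
os.remove("testfile.bin")
```

A file much larger than RAM – like the 64GB used in the real runs – keeps OS caching from dominating the read numbers.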
These may not be a perfect/ideal set of tests, but they gave me a sufficient variance based on the sort of things I’ve spent years having to deal with, viz.:
- Big files
- Dense filesystems
- Evenly distributed filesystems
With the exception of the dense filesystem structure scenario, I ran each test on:
- The Promise Pegasus2
- The Drobo 5S with 2-drive redundancy (what it normally runs with)
- The Drobo 5S with 1-drive redundancy
- The iMac’s 256GB SSD
- The Seagate 4TB external USB-3 drive
- The Synology over gigabit LAN
- The Synology natively
For the dense filesystem structure, I didn’t run the tests on either the iMac 256GB SSD or the Synology natively. Nothing I had could stream data as fast as the iMac SSD, so it was used as the source point for the dense filesystem tests, and the Synology had no internal storage large enough other than its own filesystem to store the tar file used, so the test would have been significantly skewed against it.
I don’t have enough spare drives of the same type – and each enclosure has data on it anyway – so I couldn’t afford to populate every enclosure with exactly the same drives.
All drives were 7200 RPM – I learnt with Drobo some time ago that going cheap and installing 5400 RPM drives is just asking for trouble, even if you’re not after performance.
Synology 1513+

5 x 3TB 7200 RPM Hitachi Deskstar 7K3000 drives. Each drive is SATA III with a 64MB cache – undoubtedly the most modern drives I have. The Synology is connected over 1Gbit Ethernet, and for comparison I ran as many tests as I could both via its CIFS shares and locally, on the Synology itself.
Drobo 5S

Leveraging Drobo’s drive agnosticism, the contents of the Drobo 5S were built up over time, so there’s:
- 3 x Seagate Barracuda ST2000DM001 SATA-III with 64MB cache.
- 2 x Hitachi Deskstar 7K2000 SATA-II with 32MB cache.
The Drobo 5S was attached via USB-3 directly into the main ports on the back of the iMac.
Promise Pegasus2

This is populated with my oldest drives:
- 4 x Hitachi Deskstar 7K2000 SATA-II with 32MB cache.
The Pegasus2 was attached via Thunderbolt2 directly into one of the Thunderbolt2 ports on the back of the iMac.
iMac SSD

The Apple-supplied SM0256F SSD (Samsung).
External 4TB Seagate
Seagate 4TB drive, “Expansion” model, attached via USB-3 directly into the main ports on the back of the iMac.
Comparing Apples and Oranges
Clearly these aren’t all the same drive types. There’s some SATA-III and some SATA-II, there’s different cache sizes and there’s different drive manufacturers. So maybe there’s little to no point in the comparison other than my satisfying general curiosity.
But let’s keep in mind that pristine configurations don’t always happen for home or SOHO storage.
And there’s also the nagging Drobo question. In brief – I ran the tests against the Drobo in 2-drive redundancy mode initially – mainly because I forgot I’d enabled it. At the end of the tests I realised it was in 2-drive redundancy mode and that it was hardly a fair comparison, so I switched the Drobo back to 1-drive redundancy mode.
Within 30 seconds of switching back to 1-drive redundancy mode, the Drobo advised the transition was complete and I started testing. I got abysmal results throughout the testing until about 2 days later when suddenly the results shot up. The net effect is that when Drobo tells you it’s finished converting from 2-drive redundancy to 1-drive redundancy, it’s clearly not telling the whole story.
As a result, I ended up running the tests a second time for the Drobo in 1-Drive redundancy mode (with the exception of the 64GB tests, which weren’t started until after the Drobo had finished its background cleanup tasks).
To perform the dense filesystem test, I ran a utility of mine, generate-filesystem.pl, which generates a lot of random files. These files were then tar’d up without compression – the test consisted of reading the tar file from the iMac SSD and writing directly to each drive in question.
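The tar side of the test can be sketched like this – an approximation using Python’s tarfile module rather than the command-line tar I actually used, and with illustrative paths; my generate-filesystem.pl isn’t reproduced here:

```python
import os
import tarfile
import time

def tar_rate(src_dir, tar_path):
    """Archive src_dir into an uncompressed tar; return MB/s
    based on the size of the resulting archive."""
    start = time.perf_counter()
    with tarfile.open(tar_path, "w:") as tf:
        tf.add(src_dir, arcname=os.path.basename(src_dir))
    size_mb = os.path.getsize(tar_path) / (1024 * 1024)
    return size_mb / (time.perf_counter() - start)

def untar_rate(tar_path, dest_dir):
    """Extract an uncompressed tar archive into dest_dir; return the
    effective throughput in MB/s, based on archive size."""
    size_mb = os.path.getsize(tar_path) / (1024 * 1024)
    start = time.perf_counter()
    with tarfile.open(tar_path, "r:") as tf:
        tf.extractall(dest_dir)
    return size_mb / (time.perf_counter() - start)
```

In the real test the tar file lived on the iMac SSD (the fastest source available) and the destination directory sat on the unit under test, so the write path of the target dominated the timing.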
Dense filesystem

| Drive | Avg Read MB/s | Avg Write MB/s | Best Read MB/s | Best Write MB/s | Worst Read MB/s | Worst Write MB/s |
| --- | --- | --- | --- | --- | --- | --- |
| Seagate USB3 4TB | 51.6 | 110.02 | 53.34 | 113.65 | 50.43 | 107.63 |
| Drobo 5S 2 Drive Redundancy | 30.35 | 37.25 | 39.2 | 39.92 | 22.51 | 32.65 |
| Drobo 5S 1 Drive Redundancy Round #1 | 12.67 | 24.8 | 14.64 | 30.37 | 9.47 | 19.46 |
| Drobo 5S 1 Drive Redundancy Round #2 | 26.84 | 55.08 | 28.07 | 56.4 | 24.72 | 52.42 |
| Synology via IP | 27.48 | 20.28 | 31.24 | 20.62 | 20.09 | 19.79 |
Evenly distributed filesystem
This test consisted of a script which generated 12 parent directories, 32 subdirectories of each parent, and within each subdirectory, 10 x 10MB files written with a 4KB block size.
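A sketch of such a generator follows – not my actual script, but the default parameters match the description (12 x 32 x 10 x 10MB, roughly 38GB in total):

```python
import os

def generate_tree(root, parents=12, subdirs=32, files=10,
                  file_mb=10, block_kb=4):
    """Build parents x subdirs directories, each holding `files` files
    of file_mb MB apiece, written block_kb KB at a time.
    Defaults match the test: 12 * 32 * 10 * 10MB = ~38GB total."""
    block = b"\0" * (block_kb * 1024)
    blocks_per_file = (file_mb * 1024) // block_kb
    for p in range(parents):
        for s in range(subdirs):
            d = os.path.join(root, f"parent{p:02d}", f"sub{s:02d}")
            os.makedirs(d, exist_ok=True)
            for n in range(files):
                with open(os.path.join(d, f"file{n:02d}.bin"), "wb") as f:
                    for _ in range(blocks_per_file):
                        f.write(block)
```

The small 4KB block size is deliberate – it exercises the write path with many small I/Os rather than a few large streaming writes.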
| Drive | Average Write MB/s | Best Write MB/s | Worst Write MB/s |
| --- | --- | --- | --- |
| Seagate USB3 4TB | 131.21 | 132.41 | 130.17 |
| Drobo 5S 2 Drive Redundancy | 29.59 | 30.82 | 28.34 |
| Drobo 5S 1 Drive Redundancy Round #1 | 27.07 | 38.17 | 17.63 |
| Drobo 5S 1 Drive Redundancy Round #2 | 36.05 | 36.99 | 34.19 |
| Synology via IP | 65.1 | 67.61 | 63.37 |
64GB File Write, Varying Block Sizes
| Drive | Block Size (KB) | Average Read MB/s | Average Write MB/s | Best Read MB/s | Best Write MB/s | Worst Read MB/s | Worst Write MB/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Seagate USB3 4TB | 1024 | 139.77 | 131.05 | 152.78 | 148.3 | 132.71 | 112.38 |
| Seagate USB3 4TB | 512 | 141.48 | 130.11 | 150.45 | 146.22 | 135.71 | 109.41 |
| Seagate USB3 4TB | 256 | 144.33 | 129.09 | 149.28 | 146.18 | 134.56 | 109.57 |
| Seagate USB3 4TB | 128 | 144.19 | 140.06 | 149.99 | 144.62 | 134.62 | 131.44 |
| Seagate USB3 4TB | 64 | 143.12 | 139.32 | 150.48 | 146.47 | 131.45 | 127.68 |
| Seagate USB3 4TB | 32 | 139.38 | 138.13 | 144.07 | 144.96 | 130.76 | 127.71 |
| iMac 256GB SSD | 1024 | 778.53 | 664.34 | 780.81 | 688.59 | 776.41 | 640.03 |
| iMac 256GB SSD | 512 | 646.95 | 690.43 | 681.55 | 695.59 | 628.99 | 684.81 |
| iMac 256GB SSD | 256 | 646.26 | 686 | 681.02 | 691.9 | 627.93 | 678.75 |
| iMac 256GB SSD | 128 | 641.75 | 684.24 | 678.38 | 695.5 | 623.14 | 674.1 |
| iMac 256GB SSD | 64 | 640.37 | 687.3 | 677.12 | 690.76 | 621.62 | 680.42 |
| iMac 256GB SSD | 32 | 638.49 | 686.27 | 677.43 | 691.78 | 618.61 | 681.58 |
| Drobo 5S 2-Dr Redundancy | 1024 | 79.77 | 57.11 | 100.7 | 58.69 | 62.95 | 55.73 |
| Drobo 5S 2-Dr Redundancy | 512 | 74.88 | 48.72 | 99.92 | 54.08 | 58.22 | 42.5 |
| Drobo 5S 2-Dr Redundancy | 256 | 70.84 | 55.82 | 98.45 | 55.98 | 50.12 | 55.53 |
| Drobo 5S 2-Dr Redundancy | 128 | 68.64 | 54.42 | 89.48 | 56.31 | 53.63 | 51.63 |
| Drobo 5S 2-Dr Redundancy | 64 | 72.53 | 55.84 | 78.79 | 56.58 | 60.68 | 55.17 |
| Drobo 5S 2-Dr Redundancy | 32 | 74.33 | 56.2 | 79.81 | 57.13 | 70.25 | 55.22 |
| Drobo 5S 1-Dr Redundancy | 1024 | 93.13 | 93.55 | 111.84 | 98.51 | 80.29 | 90.57 |
| Drobo 5S 1-Dr Redundancy | 512 | 83.11 | 87.84 | 108.41 | 103.2 | 70.05 | 67.85 |
| Drobo 5S 1-Dr Redundancy | 256 | 83.27 | 82.32 | 107.97 | 96.23 | 67.16 | 56.37 |
| Drobo 5S 1-Dr Redundancy | 128 | 89.44 | 94.47 | 118.45 | 103.52 | 70.12 | 89.38 |
| Drobo 5S 1-Dr Redundancy | 64 | 89.29 | 98.15 | 118.49 | 102.07 | 74.31 | 94.97 |
| Drobo 5S 1-Dr Redundancy | 32 | 97.28 | 97.98 | 129.12 | 105.41 | 65.1 | 92.02 |
| Synology over IP | 1024 | 92.77 | 86.32 | 94.33 | 89.24 | 91.97 | 82.76 |
| Synology over IP | 512 | 92.64 | 88.5 | 93.82 | 88.85 | 91.96 | 87.95 |
| Synology over IP | 256 | 91.83 | 87.63 | 92.43 | 89.5 | 91.13 | 86.48 |
| Synology over IP | 128 | 92.46 | 86.15 | 93.67 | 88.49 | 91.43 | 81.74 |
| Synology over IP | 64 | 91.13 | 87.92 | 92.62 | 89.17 | 89.56 | 87.17 |
| Synology over IP | 32 | 91.53 | 87.78 | 91.89 | 89.13 | 91.28 | 86.73 |
I want to start by spending a little time talking about Drobo. For years I’ve been a huge fan of Drobo. I stuck with it when I had hard drive after hard drive fail in the 4-drive Drobo 2. Drobo insisted that 5400 RPM drives were fine, and maybe I just had a succession of failures, but the only way I got stability was to switch to 7200 RPM drives. When I migrated from the Drobo 2 to the Drobo 5S, all I had to do was take the drives out of the 2, put them in the 5S and add another drive.
For quite some time with an older Mac Pro the Drobo was hooked up via Firewire-800 (maximum speed around 80 MB/s throughput), and I always attributed the performance I got out of it to being a symptom of that legacy connection speed. Once connected via USB-3 though, it didn’t seem to improve much. Part of that was 2-drive redundancy, but switching to 1-drive redundancy still left the Drobo wanting for speed.
Seriously wanting for speed.
Drobo sells on the basis of ease of use, and it is indeed easy to use, but it’s abundantly clear that this comes at the cost of performance and transparency. Converting from 2-drive to 1-drive redundancy is supposedly a near-instantaneous process, but it would seem to trigger some fairly serious back-end garbage collection on the part of the Drobo that you’re not told about. Due to that desire for simplicity, the Dashboard doesn’t tell you a thing – there’s no indication anything is going on at all. Not a whisper from Drobo until, up to 2 days later, it suddenly announces that data protection is in progress, then five minutes later announces it’s complete.
That’s why the dense and evenly distributed filesystem tests feature two Drobo 1-drive redundancy runs – the first was performed while the unit was secretly running whatever garbage collection process it doesn’t tell you about, and the second after I realised what had been going on and repeated the tests.
Drobo’s secrecy extends to every aspect of its operation, something that has irked me for some time. Most of its logs – potentially the really useful stuff for diagnosing issues – are locked away in binary format. (Drobo support have told me in the past this is important, but I think the importance lies in convincing you to continue to buy extended maintenance.)
The secrecy is so mind-numbingly obtuse that when it came time to document what hard drives were in the unit, I had to power it down and remove the drives, one by one.
The Synology and the Promise both have rich dashboards which give you access to essential operational information. The Drobo is by far the most user-friendly unit of the three, but between the secrecy of its operations and the poor performance, that seems a very poor compromise indeed.
After all, a single USB-3 drive outperformed the Drobo regularly on the same interface. For writes that might be understandable, but I’d expect better in read situations.
Given it had my oldest hard drives in it, I was pretty impressed with the performance of the Promise Pegasus2. A 4-drive RAID-5 isn’t exactly a comfortable configuration – an extra drive always helps balance performance out, after all – but it’s clearly no slouch, and offered reasonably stable performance throughout testing.
I started the tests on the basis of wanting to check out what sort of performance I could get out of the Promise Pegasus2, but I learnt two other lessons:
- I’m done with Drobo. This is the last unit I’ll own – ease of use is no longer a good enough pay-off in return for loss of transparency or performance.
- My next lab array will be all-SSD.
Far and away, the SSD in my iMac outstripped everything else I tested. I kind of expected that – I’d previously used SATA-I and SATA-II SSDs, but the performance coming out of a single SATA-III SSD is intense.
At the time I was working on the tests, I had a brief Twitter exchange with Vaughn Stewart, who suggested I should have gone for SSDs in the Promise. My counter argument at the time was a simple one: I need more capacity than I can comfortably get with even 4 x 500GB SSDs.
From a price perspective, I don’t regret my decision – I had a limited budget, to the point where I only purchased a single extra drive to populate the Promise. 4 x 500GB SSDs undoubtedly would have given substantially greater performance, but it would have also added another $1500 to the purchase price – and that’s not including the 2.5″ to 3.5″ mount kits for each drive.
When you have to buy your own lab equipment, that’s a lot.
An EMC presentation I was at a few years ago had a simple message: people buy storage for capacity, but upgrade for performance. That was said at the enterprise level, but it was running through my head every time I looked at the difference in performance between a solitary SSD and a storage unit. So I’m left with one conclusion: it’s going to apply to my lab workloads … the next time I build a lab storage unit, that Thunderbolt2 connection (or whatever it is by then) will be fully populated with SSDs.
Other than the above, I’m not spending a huge amount of time trying to put an interpretation on any of the individual results – the numbers are all presented and speak for themselves … except for the Drobo. Those numbers crawl off into a dark corner and mutter terrible things.
* (Since a chunk of that storage is in offline archives, not all of that storage is spun up at once, and once various RAID formats are taken into account, the actual amount of presented storage is less.)