Veeam repository recommendation

August 11th, 2017

Updated this post from 2015 with 2017 prices and other revisions.

I repost this so often on reddit that I decided to create an entry here to reference:

This is what I recommend if you want a cheap build without vendor support, but with decent reliability/redundancy and excellent performance. Use RAID 6 for maximum capacity, or RAID 10 for the highest reliability and performance. Deduplication will increase your available space by 25-35% or more, depending on what you are storing (see the capacity sketch below). Increase the number of disks and JBODs for more storage.
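For a rough sense of what those choices mean at this chassis's scale, here is a back-of-the-envelope capacity sketch in Python. The 44-drive fill (leaving one bay for a hot spare) and the 30% dedup gain (midpoint of the range above) are illustrative assumptions, not guarantees:

    # Back-of-the-envelope repository sizing. Drive count/size and the
    # 30% dedup gain are illustrative assumptions, not guarantees.

    def usable_tb(disks: int, size_tb: float, raid: str) -> float:
        """Usable capacity of one array before deduplication."""
        if raid == "raid6":
            return (disks - 2) * size_tb   # two drives' worth of parity
        if raid == "raid10":
            return disks // 2 * size_tb    # half the drives are mirror copies
        raise ValueError(f"unknown RAID level: {raid}")

    DISKS, SIZE_TB, DEDUP_GAIN = 44, 4.0, 0.30  # 44 bays filled + 1 hot spare

    for raid in ("raid6", "raid10"):
        u = usable_tb(DISKS, SIZE_TB, raid)
        print(f"{raid}: {u:.0f} TB usable, ~{u * (1 + DEDUP_GAIN):.0f} TB effective")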

This method requires a dedicated server to provide NFS, though an iSCSI Target Server role is also built into Server 2012 R2 and 2016. It has the advantage of being able to house Veeam as well, though you should use at least one VM as a proxy for the hot-add disk performance boost. Note that SuperMicro also offers storage enclosures with an integrated server motherboard, but those don't have the high drive-bay count that this enclosure does.

JBOD enclosure with space for 45 SAS drives. Use any server you have lying around with a free PCIe slot (the 9280-8e below is a PCIe 2.0 x8 card) and install Server 2012 R2/2016 with deduplication enabled (see the sketch below). This approach also works with FreeBSD/Linux-based options like FreeNAS, but verify compatibility with the RAID controller before proceeding.
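If you go the Windows route, enabling dedup is just a couple of PowerShell cmdlets. A minimal sketch (Python shelling out to PowerShell; the D: repository volume is an assumption, and you'll need to run it as Administrator):

    import subprocess

    def powershell(cmd: str) -> None:
        """Run one PowerShell command, raising if it fails."""
        subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)

    # Install the dedup feature, enable it on the repository volume
    # (D: is an assumed drive letter), and kick off a first optimization pass.
    powershell("Install-WindowsFeature -Name FS-Data-Deduplication")
    powershell('Enable-DedupVolume -Volume "D:" -UsageType Default')
    powershell('Start-DedupJob -Volume "D:" -Type Optimization')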

JBOD chassis (1x) – $2499 SuperMicro CSE-847E26-RJBOD1 http://www.newegg.com/Product/Product.aspx?Item=N82E16811152143
(At the time of this post CDW had more favorable prices on this enclosure than NewEgg or Amazon.)

SAS RAID controller (1x) – $310 Avago/LSI 9280-8e http://www.newegg.com/Product/Product.aspx?Item=N82E16816118109

RAID controller backup battery (1x) – $165 MegaRAID LSIiBBU08 http://www.newegg.com/Product/Product.aspx?Item=N82E16816118163&Tpk=N82E16816118163

SAS cables (2x) – $58ea=$116 SFF-8088(M) to SFF-8088(M) https://www.newegg.com/Product/Product.aspx?Item=9SIA1K02CM9365

Disks (??x) – $191ea=? Seagate ST4000NM0023 4tb Enterprise Capacity 128mb 7200rpm http://www.newegg.com/Product/Product.aspx?Item=9SIA5EM1PU0823&cm_re=ST4000NM0023-_-22-178-306-_-Product
(This was my spec when I purchased in 2015; higher-capacity versions obviously exist. Make sure to purchase SAS drives; the longer the warranty term and the higher the RPM, the better.)

Internal mini SAS cables (2x) – $68 Tripp Lite 3ft internal mini-SAS SFF-8087 to SFF-8087 http://www.cdw.com/shop/products/Tripp-Lite-3ft-Internal-SAS-Cable-mini-SAS-SFF-8087-to-mini-SAS-SFF8087-1M/1464242.aspx
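For budgeting, here's a quick Python tally of the list above (2017 prices; the fill levels are just examples, and I'm reading the internal cables as $68 for the pair):

    PARTS = {
        "JBOD chassis (CSE-847E26-RJBOD1)": 2499,
        "SAS RAID controller (9280-8e)": 310,
        "RAID controller BBU (LSIiBBU08)": 165,
        "External SFF-8088 cables (2x $58)": 116,
        "Internal SFF-8087 cables (pair)": 68,
    }
    DRIVE_PRICE = 191  # Seagate ST4000NM0023, each

    def build_cost(drives: int) -> int:
        """Total cost of the parts list plus the chosen number of drives."""
        return sum(PARTS.values()) + drives * DRIVE_PRICE

    for n in (12, 24, 45):  # example fill levels
        print(f"{n} drives: ${build_cost(n):,}")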


Setting up the JBOD enclosure cabling can be a little difficult; this review from Amazon was very helpful for me:

(Amazon review, February 23, 2014)

I just completed a ZFS on Linux deployment and am very impressed with the results. There is no better deal than a setup like this: very inexpensive with excellent performance. The components were a Supermicro 847 45-drive 4U chassis, an LSI 9200-8e external SAS card, 2 Monoprice 2M SFF-8088 cables, 10 Hitachi Ultrastar 4TB 7K4000 SAS enterprise drives, and a SanDisk Extreme II 480GB SSD (as high-speed L2ARC cache and ZIL). Despite running raidz2 in an 8 drive (+2 hot spares) configuration, I have read speeds of 760 MB/s and write speeds of 330 MB/s (on a Dell PowerEdge R610). I have complete confidence that this performance will scale up to saturate the SAS link with read/write speeds of 1 GB/s as I add in more drives, matching the performance of my other (much more expensive, commercially sourced) disk arrays. The content on these disk arrays is being served over NFS via Intel 10 Gigabit Ethernet cards, with read speeds to RAM on the clients in the 500 MB/s range. The entire setup cost less than $6k for 40TB raw capacity; it’s beautiful. Total hardware setup time was about 4 hours one afternoon with two people.

This JBOD array is very nice. It has 24 disks in the front and 21 in the rear, each bank with its own redundant dual-SAS expander backplane. It has tons of fans in the center of the box, each easily detachable if any should fail. There are four SFF-8088 connectors in the rear and, aside from redundant 1400W power, that is the only connectivity this JBOD has. The unit ships without any of those SAS connectors wired up, so you have to open the box and route things as desired. Particularly since both backplanes have dual SAS expanders (for redundant data paths) and auxiliary input connections for nearly double the SAS bandwidth, there are quite a few choices for how to set things up. Further, if so desired, you could even wire up each backplane independently and have two entirely separate disk arrays (one in the front and one in the rear) all in one unit. It’s just a matter of how you choose to wire up the backplanes. Check appendix C/D of the manual for diagrams and more information. The tech support at Supermicro is also very helpful and knowledgeable, but I had a bit of a hold time (10-15 minutes).

Since the SAS routing is the most complicated thing to understand with this unit, let me go into more detail. Each bank of disks (24 front/21 rear) has its own redundant dual-SAS backplane. There is another slightly cheaper model that doesn’t have the redundant backplane chip/SAS connectors wired in, but the price difference isn’t all that significant. In the front, each redundant SAS port expander has three connections: primary, auxiliary, and pass-through. Since this is a redundant SAS backplane, there are a total of six SAS connections on the backplane, so be careful; it is easy to get confused. Primary and auxiliary are used for connecting to the front bank, and pass-through is used for chaining out to the rear backplane. If you use both primary and auxiliary connections, you can get nearly double the SAS bandwidth out of your front array since they are dedicated routes. The rear backplane has a similar set of connections but lacks an auxiliary port, having only primary and pass-through; with redundancy, that is a total of four SAS connectors. All this connectivity is amazing, but you only get to route four SAS connectors to the outside of the unit unless you want to leave the lid open or drill out into the side (which is quite doable), so you have to choose a configuration. You sadly can’t expose all ten SAS connectors, although that would have been truly awesome.

A couple of things to note: the redundant dual-SAS backplane functionality only works with SAS drives, so don’t populate the unit with SATA drives if redundancy is what you want (this is just a fact of the protocols, nothing specific to this unit). The same holds if you are daisy-chaining the rear backplane off the front backplane; you’ll want SAS drives there too, because SATA doesn’t do well behind daisy-chained SAS expanders. I wasn’t planning on either of those configurations, though, and went with SATA drives because they’re a bit faster than their SAS equivalents. I’ve populated less than half of the front backplane so far and am already very impressed.

Installation was pretty simple once you decipher how the included rails are supposed to be set up. Everything snapped into place with super smooth sliding rails. It is a pretty heavy beast, though: you will want a dolly to roll it into the server room and a friend/colleague to help you slide it in. At around 70-80 pounds, it’s too much for one person to carry, but it was no problem for two people to install. It’s somewhat amazing to get this high a drive density in a 4U package, but Supermicro pulled it off very well. I now have years of expandability for my array at a fraction of the cost of commercially prepared systems. If you have any hesitations about this system, I’d cast them aside. I’ve had two of these monsters deployed for three years already without a single hiccup. This third one was the first disk array I purchased piece-by-piece myself. Definitely the right move.
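To put the reviewer's numbers in perspective, here is a quick Python sanity check of that raidz2 layout (a sketch; it ignores ZFS metadata and padding overhead):

    # Layout as described in the review: 10 x 4 TB drives, one 8-drive
    # raidz2 vdev, and 2 hot spares.
    DRIVES, SIZE_TB = 10, 4.0
    VDEV_DRIVES, SPARES = 8, 2

    raw_tb = DRIVES * SIZE_TB                 # 40 TB raw, matching the review
    usable_tb = (VDEV_DRIVES - 2) * SIZE_TB   # raidz2 gives up 2 drives to parity
    print(f"raw: {raw_tb:.0f} TB, usable: {usable_tb:.0f} TB "
          f"(plus {SPARES} idle hot spares)")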
