Every year, multiple bloggers and tech journalists put out a build log of a home/small NAS. Most of the time the goal of these builds is to be as low power and cheap as possible. To be fair, for an average consumer who needs a place to store family photos and a music collection, reusing an old computer or building a small NAS fits their needs pretty well, but at TDSheridanlab we need something a little bigger. We need a SAN for a VMware HA/DRS cluster.
Over the last four years, I've been diving into the world of virtualization pretty heavily. When I first started down the virtualization path, I built a Supermicro machine and it served me pretty well. I bought a few servers when I took on the VCP the first time, and now I am trying to simplify my environment while still being able to test out whatever piques my interest at the time.
The Supermicro build has the following specs:
- AMD Opteron 6128 (8 cores, 8 threads)
- 32 GB of DDR3 ECC Registered RAM
- LSI 9240-8i
- 4 × 3 TB Western Digital Red NAS drives in RAID 5
- 2 × 160 GB Seagate server drives in RAID 1
- ESXi 5.5 Update 3 (current patches as of New Years Eve)
As the lab has grown over the years, this ESXi host now only hosts two virtual machines: a domain controller and my Plex VM. While working on the VCP6, I looked at power usage and found that this server draws the same power as one of my two dual Intel Xeon E5 hosts but has only half the CPU and RAM resources. Also, with the increased load from all the data on the Plex virtual machine, speeds are starting to drop when accessing files or streaming from the device.
Another factor for a new storage solution is that the Supermicro system is AMD and the rest of the systems in the lab are Intel, so live vMotion is not available between them.
This is how the need to retire this system came to light.
The big hurdle in this project is that my Plex virtual machine currently holds about 5 TB of data. It acts as a media server for DVD and Blu-ray rips, a music server for my home Sonos setup, a general file server for documents and photos, and a CrashPlan server for local PC backups. As we all know, in building these systems the hard drives are the expensive part; the rest of the build is relatively cheap.
While working on the VCP certifications, I used a spare system with a relatively small storage footprint to provide shared storage for the exam items I wanted to try. The system itself changed for each renewal of the VCP, but the result was the same: a system with 1–2 TB of storage running FreeNAS for iSCSI storage. In December of 2016 I kept the lab up and running after I passed the exam so I could test out some of the newer features of FreeNAS and evaluate it as a replacement for my current storage solution.
Requirements for new Storage Server:
- 7200 RPM drives (the WD Red is a 5900 RPM drive that can spike to 7200)
- 10+ TB of usable space
- Easily add 10GbE connectivity
- Space to accommodate 4 SSDs for read and write caching
- A Storage Controller/RAID Card that supports JBOD or pass-through.
- Rackmount form factor
- IPMI for OOBM
- Ability to scale to 64 GB of RAM
Since the requirements are a little higher than the standard FreeNAS build, the only thing on the wish list is SAS drives instead of SATA drives for improved IOPS.
With all those requirements listed above, let me explain how I came up with that list. For the last four years I've been using 4 WD Red drives, and for the most part they have worked fine, but since this will be a shared storage solution I want a little more performance per drive without getting crazy expensive. Along with the increase in disk speed to boost performance, it's easier and relatively inexpensive to add SSDs for caching than it is to build out a large SSD-based SAN.
Every day the cost of 10GbE gets cheaper and cheaper (a relative term), so I want either built-in 10GbE connectivity or an easy way to add it down the line when I upgrade the lab to 10GbE. Personally, I don't care if this expansion is a mezzanine card, a PCIe card, or built in; I can work with anything. Right now the lab has a Dell N2048 switch and a Dell N2024 switch. Both switches have two 10Gb SFP+ ports that I can use if I find a system with 10GbE built in.
FreeNAS does not recommend using RAID arrays created by a RAID card for its storage. FreeNAS is not fully supported in virtual environments, and using a RAID-card-based array defeats the point of ZFS. Technically, to run FreeNAS in ESXi you have to pass the storage adapter through to the virtual machine and then create a pseudo-vSAN environment. If I did this for the new system, I would basically be creating the same issue, just with new hard drives. This system will be a standalone SAN, so finding a system with a compatible storage controller is key.
Along with the storage controller, the system must be able to hold at least 64 GB of RAM. I'm still debating whether to enable deduplication on FreeNAS. The recommended sizing guide for dedup is 5 GB of RAM for every 1 TB of space. With the requirement of 10 TB of usable space right off the bat, I'll need 50 GB of RAM to support dedup. Up until recently, the Xeon E3 had a 32 GB limit; the newest E3 Xeons support up to 64 GB of RAM. Even with that increase, RAM would cap the pool at roughly 12 TB of usable space once you account for the RAM FreeNAS itself needs to run. This system will probably be a dual Xeon E5 build to account for RAM and expansion options.
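The sizing math above can be sketched as a quick back-of-the-envelope calculation. This is just the 5 GB per 1 TB rule of thumb from the FreeNAS docs; the 4 GB OS reserve is my own assumption, so adjust it for your build:

```python
# Rule of thumb from above: ~5 GB of RAM per 1 TB of deduplicated pool space.
def dedup_ram_gb(usable_tb, gb_per_tb=5):
    """Estimate RAM needed for ZFS dedup tables on a pool of usable_tb TB."""
    return usable_tb * gb_per_tb

# Working backwards: given a platform RAM ceiling, how much pool space
# can dedup cover? The 4 GB reserve for FreeNAS itself is an assumption.
def max_dedup_tb(total_ram_gb, os_reserve_gb=4, gb_per_tb=5):
    return (total_ram_gb - os_reserve_gb) / gb_per_tb

print(dedup_ram_gb(10))   # 10 TB usable -> 50 GB just for dedup tables
print(max_dedup_tb(64))   # 64 GB E3 ceiling -> 12.0 TB of dedup-able space
```

That 12 TB figure is what pushes this build toward a dual E5 platform, where the RAM ceiling is far above 64 GB.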
TDSheridanLAB has a server rack, so it makes sense to buy a rack mount system that can be mounted correctly. Also, if there is an issue I want to be able to see the machine from wherever I am, so IPMI is a must as well.
So that is the plan: I will start researching and shopping. Once I have parts, I'll post all the steps of the build process. After the new SAN is built, I'll probably reuse the hard drives from the original system for Veeam storage, and I will document that build as well.