Keep up to date with all that is going on in the life of Luke Manning ranging anywhere from server virtualisation to what I did last weekend.
Showing posts with label parity.
Wednesday, June 24, 2015
unRAID 6 benchmarks
Now that I've got unRAID up & running I thought it would be interesting to run some benchmarks so I could determine what kind of speed to expect. I have purchased a gigabit switch and cat6a ethernet cables for wiring everything up, so this is as fast as I can get for now. The program I used to run these benchmarks is called 'CrystalDiskMark'. This is with two WD Red 3TB drives, with one acting as a parity disk.
I don't think that's too bad considering I don't have a cache device set up yet. Still slower than my desktop mechanical drive for both sequential read and write speeds, but I doubt I will notice the difference when it comes to real world usage - hopefully anyway. Parity certainly had more of an impact on the speeds than I would have liked. My internal 1TB WD Blue mechanical drive clocked the below speeds. Much faster for sequential reads and writes but a hell of a lot slower for the 4k metrics.
I'll probably revisit this in the future when I set up my cache device.
Labels: benchmark, cat6, crystaldiskmark, G1610T, gen8, homelab, HP, microserver, nas, parity, proliant, read speed, SATA, server, unraid, WD Red, write speed
Saturday, June 20, 2015
New components for my HP G1610T Gen8 Microserver - Upgrades!
I decided it would be best to invest some more money into my microserver rather than trying to struggle through with the limited resources available in the server itself. I also needed some hard drives to fill up the drive bays for my NAS.
What did I buy?
2 x 3TB WD Red NAS drives
1 x 2 Port SATA Controller
3 x Cat6a ethernet cables
2 x 8GB ECC DDR3 RAM
The NAS drives will obviously be going into the bays available at the front of the server so I can set up my NAS. As I only have two drives at the moment, the array will only have 3TB of usable space due to the parity disk. Realistically that is all I need to start with, as my media collection is only about that large at the moment. I bought the SATA controller card because the internal SATA connector only runs at SATA I speeds, while this PCI card runs at SATA III. This will be used to connect up my cache devices if I ever get around to implementing that.
The requirements for running unRAID are pretty minimal compared to FreeNAS, with unRAID only requiring about 1GB of RAM if you intend to run it as a pure NAS system. However, once you start playing around with containers and virtualising machines you will understandably get tight on resources. With that in mind I decided it would be best to upgrade to 16GB of RAM, which is the most this machine will accept. This didn't come cheap at about €160 for the two-stick set - ECC RAM is bloody expensive.
Lastly I decided to invest in some decent cat6a cables for connecting my server and desktop to the network. I've been running on cat5e cables for quite a long time because, in all honesty, I just had no requirement for the additional benefits of cat6 cables. Now that I will be regularly transferring files to and from the NAS, I felt the need for the additional cable bandwidth.
Sunday, June 14, 2015
How does NAS work in unRAID?
Why unRAID isn't your standard NAS
I'm going to start this section off by explaining a little bit about what RAID is. If you know what RAID is and how it works, feel free to skip the next section.

RAID originally stood for 'redundant array of inexpensive disks' but is now commonly known as 'redundant array of independent disks'. It is a storage virtualisation technology that combines multiple disk drives into a single logical unit for the purposes of data redundancy or performance improvement. Most RAID implementations perform an action called striping. Striping is a term used to describe when individual files are split up and spread across more than one disk. By performing read and write operations on all the disks in the array simultaneously, RAID works around the performance limitations of mechanical drives, resulting in much higher read and write speeds. In layman's terms - let's say you have an array of 4 separate disks. In this case a file would be split into four pieces, with one piece written to each drive at the same time, therefore theoretically gaining 4 times the speed of one drive. That's not quite how it works in reality though...
Striping can be done at a byte level or a block level. Byte-level striping means that each file is split up into little pieces one byte in size (8 binary digits) and each byte is written to a separate drive, i.e. the first byte gets written to the first drive, the second to the second, and so on. Block-level striping on the other hand splits the file into logical blocks of data, with the default block size being 512 bytes. Each block of 512 bytes is then written to an individual disk. Obviously striping is used to improve performance, but that comes with a caveat - it provides no fault tolerance or redundancy. This is known as a RAID0 setup.
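To make the block-level idea concrete, here is a minimal Python sketch of RAID0-style striping. The 4-disk layout and the in-memory "disks" are purely illustrative - this is the concept, not how a real controller lays out data:

```python
# Minimal sketch of RAID0-style block striping. The disk count and the
# in-memory "disks" are illustrative only, not a real controller's layout.

BLOCK_SIZE = 512  # the default block size mentioned above, in bytes

def stripe(data, num_disks=4):
    """Split data into 512-byte blocks and deal them round-robin to disks."""
    disks = [[] for _ in range(num_disks)]
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for n, block in enumerate(blocks):
        disks[n % num_disks].append(block)  # block 0 -> disk 0, block 1 -> disk 1...
    return disks

def unstripe(disks):
    """Reassemble the file by reading blocks back in round-robin order."""
    total = sum(len(d) for d in disks)
    return b"".join(disks[n % len(disks)][n // len(disks)] for n in range(total))

data = b"example data " * 200          # ~2.5 KB, so it spreads over all 4 disks
assert unstripe(stripe(data)) == data  # round-trips losslessly
```

In a real array each "disk" list would be a physical drive written in parallel, which is where the speed-up comes from.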
If all the files are split up among the drives, what happens when one dies? Well, this is where parity and mirroring come into RAID. Mirroring is the simplest method, using redundant storage. When data is written to one disk it is simultaneously written to the other, so the array has two drives that are always an exact copy of each other. If one of the drives fails, the other drive still contains all the lost data (assuming that doesn't die too!). This is obviously not an efficient use of storage space when half of it can't be utilised. This is where parity comes in - parity can be used alongside striping as a way to offer redundancy without losing half of the total capacity. With parity, one disk can be used (depending on the RAID implementation) to store enough parity data to recover the entire array in the event of a drive failure. It does this through mathematical XOR equations which I'm not going into in this post. There is one glaring problem with this setup - what if two drives fail? You're more than likely screwed... This is part of the reason nobody uses RAID4 in practice.
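The XOR trick is actually simple enough to show in a few lines. In this sketch the three "data disks" and their 4-byte blocks are made up for illustration - the point is that XORing the surviving blocks with the parity block rebuilds whatever was lost:

```python
# Sketch of the XOR parity idea behind a dedicated parity disk. The three
# "data disks" and their 4-byte blocks are made-up illustrative values.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data_disks = [b"\x0f\x12\x00\xff", b"\xa0\x03\x7c\x01", b"\x55\x55\x55\x55"]
parity = xor_blocks(data_disks)  # what gets written to the parity disk

# Simulate disk 1 failing: XOR the survivors with parity to rebuild it.
rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
assert rebuilt == data_disks[1]  # the lost block is fully recovered
```

This also shows why two failures are fatal with single parity: with two unknowns, one XOR equation is no longer enough to solve for either.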
There are implementations available such as RAID6 which use double parity, meaning the array can have two drives fail without data loss - a better option if you plan on storing many terabytes of data across a large number of drives. Logically, the more drives you have, the higher the chance that one will fail.
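As a rough back-of-the-envelope illustration of that last point (the 3% annual failure rate here is an assumed figure for the sake of the example, not a measured one):

```python
# Back-of-the-envelope odds that at least one drive in an array fails in a
# given year. The 3% annual failure rate is an assumed illustrative figure.

def p_any_failure(num_drives, annual_failure_rate=0.03):
    """P(at least one failure) = 1 - P(every drive survives the year)."""
    return 1 - (1 - annual_failure_rate) ** num_drives

for n in (2, 8, 24):
    print(f"{n:2d} drives: {p_any_failure(n):.1%} chance of a failure per year")
```

With these assumptions a 2-drive array comes out around 6% while 24 drives come out around 52% - which is exactly why double parity starts to look attractive on big arrays.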
What does that have to do with unRAID...?
Hold on, I'm getting to that! unRAID's storage capabilities are broken down into three components: the array, the cache, and the user share file system. Let's start with the array first of all. unRAID uses a single dedicated parity disk without striping. Because unRAID does not utilise striping, you have the ability to use multiple hard drives of differing sizes, types, etc. This also has the benefit of making the array more resistant to data loss, because your data isn't striped across multiple disks.
The reason striping isn't used (or can't be used) is because unRAID treats each drive as an individual file system. In a traditional RAID setup all the drives are spinning simultaneously, while in unRAID spin down can be controlled per drive - so a drive with rarely accessed files may stay off (theoretically increasing its lifespan!). If the array fails, the individual drives are still accessible, unlike traditional RAID arrays where you might have total data loss.
Because each drive is treated as an individual file system, this allows for the user share file system that I mentioned earlier. Let's just take an example to explain this. I could create a user share and call it media. For this media folder I can specify:
- How the data is allocated across the disks i.e. I can include / exclude some disks
- How the data is exposed on the network using what protocols (NFS, SMB, AFP)
- Who can access the data by creating user accounts with permissions
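To give a feel for the first point, here is a rough Python sketch of a "most free space wins" allocation policy deciding which included disk receives a new file. The disk names, free-space numbers, and the policy itself are illustrative - this is not unRAID's actual allocator:

```python
# Rough sketch of a "most free" allocation policy for a user share: new files
# go to whichever included disk has the most free space. The disk names,
# sizes, and policy details are illustrative, not unRAID's real allocator.

free_space_gb = {"disk1": 1200, "disk2": 800, "disk3": 2500}
included = {"disk1", "disk3"}  # disks this share is allowed to write to

def pick_disk(file_size_gb):
    """Return the included disk with the most free space that fits the file."""
    candidates = [(free, disk) for disk, free in free_space_gb.items()
                  if disk in included and free >= file_size_gb]
    if not candidates:
        return None  # no single included disk can hold the file
    return max(candidates)[1]

print(pick_disk(100))    # disk3: the most free space among included disks
print(pick_disk(10000))  # nothing fits, so None
```

Note the `None` case: because files aren't striped, a single file always has to fit on one disk, no matter how much total free space the array has.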
The following image taken from the unRAID website explains it better than words can:
Finally, we need to address the issue of performance, given that striping is not used. To get around this limitation unRAID introduced the ability to utilise an SSD as a cache disk for faster writes. All data to be written to the array is initially written directly to the dedicated cache device and then moved to the mechanical drives at a later time. Because this device is not part of the array, the write speed is unaffected by parity calculations. However, with a single cache device, the data captured there is at risk, as the parity device doesn't protect it. To minimise this risk you can build a cache pool with multiple devices, both to increase your cache capacity and to add protection for that data.
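The "moved at a later time" part can be sketched in a few lines of Python. This is just the concept - relocate files from the fast cache to a parity-protected array disk, preserving the folder structure - not unRAID's actual mover, and the example paths are hypothetical:

```python
# Concept sketch of the cache "mover": writes land on the fast cache disk
# first and get relocated to a parity-protected array disk later. This is
# not unRAID's real mover, just the idea; the example paths are hypothetical.

import shutil
from pathlib import Path

def run_mover(cache, array):
    """Move every file under cache/ to the same relative path under array/."""
    moved = 0
    for src in sorted(Path(cache).rglob("*")):
        if src.is_file():
            dest = Path(array) / src.relative_to(cache)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))  # the slow, parity-bound write
            moved += 1
    return moved

# e.g. run_mover("/mnt/cache/media", "/mnt/disk1/media"), typically scheduled
# to run overnight so the parity overhead never slows down the initial write.
```

The user-facing write only ever touches the cache, which is why parity calculations stop mattering for perceived write speed.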