Thursday, December 3, 2015

The anatomy of a WordPress theme

My cousin asked me to develop a website for her father's business. Initially I thought this would be a great learning experience as I knew the basics of web development (HTML, CSS, PHP, and a bit of JavaScript) but had never completed a full website before. I slowly but surely realised just what I had landed myself in!

To cut a long story short, I decided it would be best to implement a WordPress theme! I've always heard this makes content management very straightforward and doesn't require much knowledge of PHP or HTML to maintain. Sounds perfect - unless you're the one developing the site, in which case it's a fucking balls of a job.


Had I known how difficult this would be I definitely would not have chosen WordPress for my first development job. I guess the first hurdle I encountered was understanding the template hierarchy. See, what I did at first was simply hard-code an index.php and style.css and upload them as a theme. Then I tried to create a new page from within WordPress and it wouldn't work - it kept bringing me back to index.php. I had also tried to create a contact.php with the company contact details - now how the hell do I open this in WordPress? Hmmmm...

It was at this point I learned about the WordPress Template Hierarchy. I didn't understand it, but I knew it existed. So I kept trying numerous different implementations, none of which worked. I read multiple blog posts and just ended up more confused - until I saw 'template-loader.php' mentioned somewhere, and that file is what finally made the template system make sense to me. Show me the code and I'll understand!

Let's take a look:
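What follows is a trimmed-down sketch of what wp-includes/template-loader.php does rather than the full file - just a handful of the conditionals, but enough to show the chain:

<?php
// Simplified sketch of the template-loader logic (not the complete file).
if ( defined( 'WP_USE_THEMES' ) && WP_USE_THEMES ) :
    $template = false;
    if     ( is_404()        && $template = get_404_template()        ) :
    elseif ( is_search()     && $template = get_search_template()     ) :
    elseif ( is_front_page() && $template = get_front_page_template() ) :
    elseif ( is_home()       && $template = get_home_template()       ) :
    elseif ( is_single()     && $template = get_single_template()     ) :
    elseif ( is_page()       && $template = get_page_template()       ) :
    elseif ( is_archive()    && $template = get_archive_template()    ) :
    else :
        // Nothing more specific matched, so fall back to the theme's index.php.
        $template = get_index_template();
    endif;

    // Themes and plugins get one last chance to swap the template, then it's included.
    if ( $template = apply_filters( 'template_include', $template ) )
        include( $template );
    return;
endif;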

Examining this file is what finally made everything click for me in terms of how pages get selected, and why creating a new page in the WordPress GUI was not working. I only had an index.php! Basically, this code steps through each of the query context conditionals in a specific order and uses the template for the first one that returns true. So in my case, only index.php was ever being found, and so only the contents of index.php were being returned.

When a person browses to your website, WordPress selects which template to use for rendering that page. As we learned earlier in the Template Hierarchy, WordPress looks for template files in the following order:
  1. Page Template — If the page has a custom template assigned, WordPress looks for that file and, if found, uses it.
  2. page-{slug}.php — If no custom template has been assigned, WordPress looks for and uses a specialized template that contains the page’s slug. A slug is a few words that describe a post or a page. If the page slug is recent-news, WordPress will look to use page-recent-news.php.
  3. page-{id}.php — If a specialized template that includes the page’s slug is not found, WordPress looks for and uses a specialized template named with the page’s ID. If the page ID is 6, WordPress will look to use page-6.php.
  4. page.php — If a specialized template that includes the page’s ID is not found, WordPress looks for and uses the theme’s default page template.
  5. index.php — If no specific page templates are assigned or found, WordPress defaults back to using the theme’s index file to render pages.
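So my contact page problem from earlier has a simple answer: give the page the slug 'contact' in WordPress and drop a file called page-contact.php into the theme. A minimal sketch (the markup is just an assumption - use whatever your theme needs) looks something like this:

<?php
/* page-contact.php - a minimal sketch; picked up automatically for a page with the slug 'contact'. */
get_header(); ?>

<?php while ( have_posts() ) : the_post(); ?>
    <h1><?php the_title(); ?></h1>
    <?php the_content(); ?>
<?php endwhile; ?>

<?php get_footer(); ?>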
I realise this is just the basics, but I hope to dig into it further in future posts. Writing here helps me understand and get my thoughts clear.

Saturday, June 27, 2015

Setting up the Plex container in unRAID 6

In this post I am going to discuss how I went about adding the Plex container in unRAID 6. It's a fairly straightforward process and didn't cause me many headaches thankfully. I was originally running the server from my desktop but that wasn't an ideal solution because I don't leave my desktop on all the time.





Enabling Docker

The first step involved here was enabling Docker. To do this you simply navigate to the Settings tab, click Docker, and then use the drop-down menu to enable Docker. PlexMediaServer is one of the default Docker templates that comes packaged by Lime Technology in unRAID 6. Navigate to the new Docker tab and you will see a section named 'Add container'. From there you want to choose 'PlexMediaServer' under the limetech heading.

There is only one required folder named 'config' for this Docker container, but you're going to want to add more. The config folder does exactly what it says on the tin - it stores the configuration settings for this particular container. I originally made the silly mistake of pointing this folder to a directory on the flash drive running unRAID. As soon as I rebooted the server I lost all the configuration I had previously spent time setting up - bummer. So for this directory you're going to want to specify a directory on the array. I created a folder called 'DockerConfig'.

In order to add any media you will need to specify those directories too. I added one named /movies pointing to /mnt/disk1/movies and another named /series pointing to /mnt/disk1/series.
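Under the hood all of this boils down to a docker run command with one -v mapping per path. Roughly something like the below - the image name is just a placeholder (unRAID fills in the real one from the limetech template), and the config path assumes the DockerConfig share I mentioned above:

# image name below is a placeholder - unRAID supplies the real one from its template
docker run -d --name PlexMediaServer \
  --net=host \
  -v /mnt/user/DockerConfig:/config \
  -v /mnt/disk1/movies:/movies \
  -v /mnt/disk1/series:/series \
  plex-image-placeholder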

All that is left now is to allow unRAID to create the Docker container. Simple as.


Configuring Plex

There wasn't a whole lot of configuration required with Plex assuming you only want to use it locally within your home. If you plan on streaming media externally you will need to set up remote access, which involves three steps. The first is to sign in to your Plex account - if you haven't created one you will need to register. The second is port forwarding: by default Plex uses port 32400, but you can specify another port if you prefer, and either way you will need to forward that port to the IP of your server. Lastly, in your Plex server's remote access settings, tick the box to manually specify a port and enter whichever port you chose.

Wednesday, June 24, 2015

unRAID 6 benchmarks

Now that I've got unRAID up & running I thought it would be interesting to run some benchmarks so I could determine what kind of speed to expect. I have purchased a gigabit switch and Cat6a ethernet cables for wiring everything up, so this is as fast as I can get for now. The program I used to run these benchmarks is called 'CrystalDiskMark'. This is with two WD Red 3TB drives, one of which is acting as a parity disk.


I don't think that's too bad considering I don't have a cache device set up yet. It's still slower than my desktop mechanical drive for both sequential read and write speeds, but I doubt I will notice the difference when it comes to real world usage - hopefully anyway. Parity certainly had more of an impact on the speeds than I would have liked. My internal 1TB WD Blue mechanical drive clocked the below speeds - much faster for sequential reads and writes but a hell of a lot slower for the 4K metrics.


I'll probably revisit this in the future when I set up my cache device.

Monday, June 22, 2015

Benchmarking my desktop SSD

My desktop currently has a 120GB Crucial M500 SSD installed for booting my OS and related applications. It's not exactly a top-of-the-line drive by any means but it's more than enough for everyday use. The spec sheet for this model claims I should be getting up to 500MB/s reads and 400MB/s writes. So when I saw the results of my CrystalDiskMark test I was underwhelmed to say the least. It's worth noting that this drive was 75% full at the time of the test, and I understand this has the potential to affect speeds - but not this much!

56MB/s sequential writes?? Time to figure out what's going on here..



Troubleshooting steps

First off I needed to ensure that AHCI was enabled. AHCI stands for Advanced Host Controller Interface; it's a hardware mechanism that allows software to communicate with Serial ATA (SATA) devices such as SSDs. Windows supports AHCI out of the box, but just to be sure I went into Device Manager to confirm the AHCI controller was enabled and running.

So it's enabled - great! Now I just need to confirm that the SSD is being managed by this controller. Right-click the controller, choose Properties, then navigate to the 'Details' tab. There you are greeted with a drop-down menu; I chose 'Children' and could see my SSD listed, so AHCI is definitely in use.

Then I needed to confirm that 'TRIM' was enabled. TRIM support is essential for an SSD to run the way it should and avoid slow write speeds over time. The TRIM command allows an operating system to inform a solid-state drive which blocks of data are no longer considered in use and can be wiped internally. To test if this is enabled you can run the command "fsutil behavior query DisableDeleteNotify". If this returns 0 then TRIM is enabled.
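For reference, running it from an admin command prompt looks like this (0 means TRIM is on):

C:\> fsutil behavior query DisableDeleteNotify
DisableDeleteNotify = 0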

The next step was to make sure I was at the latest revision of the firmware. The firmware download for my SSD comes as an ISO package so I stuck it onto a USB using unetbootin and made a backup of my current SSD before proceeding - I've had enough bad experiences to warrant backups! Thankfully the installation completed successfully without any issues and I was upgraded from MU03 to the MU05 revision. Rebooted and went through another benchmark with CrystalDiskMark. No improvement whatsoever.



So considering there was little to no change in the speeds I figured I would confirm the results with another benchmark tool - ATTO Disk Benchmark. Now I'm seeing something more along the lines of the expected speeds! At my maximum I reached over 500MB/s reads and maxed out at about 140MB/s writes. I'm still a little disappointed with the write speeds, but this is a start at least.




As this was an SSD I was recommended to test out 'AS SSD Benchmark', which also confirmed that AHCI was enabled and my partition alignment was okay. The reported speeds were pretty much in line with what I'd seen so far, with write speeds between 113MB/s and 134MB/s. Still disappointing.





I went ahead and asked someone else to run the same test on their Crucial M4, which is an older model than mine. He was clocking double my write speeds with similar read speeds using the same test config as me. The only other difference between his test and mine was that his drive has a higher total capacity with about 50% free, whereas I only had 25% free. I went about freeing up space on my drive and re-ran the test, only to receive roughly the same numbers again..





The conclusion

After a bit more digging I realised this was down to misleading marketing rather than anything being wrong with my drive or configuration. The M500 series drives are capable of up to 400MB/s writes, but only at the higher-capacity end of the range. My specific model - CT120M500SSD1 - turns out to be rated for only about 130MB/s, which more or less matches up with the speeds I was recording in some of the benchmarks. Here is a screenshot from the product page on Newegg:


Sunday, June 21, 2015

Installing unRAID 6 on my HP Proliant Gen8 Microserver G1610T

In this post I am going to discuss how I went about installing unRAID 6 on my HP Proliant Microserver G1610T.


Downloading the unRAID image

Before going about installing unRAID I was under the impression that it would come in an ISO package like every other OS. However unRAID just comes as a package of files with a 'make-bootable' executable inside. Preparing the USB is easy! First of all your USB needs to be formatted to FAT32 and the volume label must be set to UNRAID in capital letters. 
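If you'd rather do the formatting from a command prompt than from Explorer, something like this does the job (the drive letter is just an assumption - use whatever letter your stick gets):

REM X: below is whatever drive letter your USB stick shows up as
format X: /FS:FAT32 /V:UNRAID /Q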

Then simply transfer all the files to the root directory of the USB (i.e. not in a folder) and run the 'make-bootable.bat' file as administrator (right-click -> Run as administrator). A CMD prompt will appear asking you to confirm you want to proceed - press any key to continue and hey presto, job done.



Now you just need to eject the USB from your computer, connect it to the internal USB slot of the microserver and boot it up. Mine booted from the USB straight away without editing any BIOS options. After it successfully booted I was able to navigate to http://tower from my desktop and was greeted with the unRAID web GUI. It really was that easy!



Licensing

Before you're able to do anything inside unRAID you need a license key. Upon first installation you're entitled to a 30-day evaluation period to test out the software. To activate your license navigate to 'Tools -> Registration'. You will need to enter your email address so the license URL can be sent to you, which you then paste into the box provided.



After that you're pretty much good to go! 

Saturday, June 20, 2015

New components for my HP G1610T Gen8 Microserver - Upgrades!

I decided it would be best to invest some more money into my microserver rather than trying to struggle through with the limited resources available in the server itself. I also needed some hard drives to fill up the drive bays for my NAS.


What did I buy?

2 x 3TB WD Red NAS drives
1 x 2 Port SATA Controller
3 x Cat6a ethernet cables
2 x 8GB ECC DDR3 RAM

The NAS drives will obviously be going into the bays available at the front of the server so I can set up my NAS. As I only have two drives at the moment, the array will only have 3TB of usable space due to the parity disk. Realistically that is all I need to start with as my media collection is only about that large at the moment. I bought the SATA controller card because the internal SATA connector only runs at SATA I speeds, while this card runs at SATA III. It will be used to connect up my cache devices if I ever get around to implementing that.

The requirements for running unRAID are pretty minimal compared to FreeNAS with unRAID only requiring about 1GB of RAM if you have the intention of running it as a pure NAS system. However once you start playing around with containers and virtualising machines you will understandably get tight on resources. With that in mind I decided it would be best to upgrade to 16GB RAM which is the most this machine will accept. This didn't come cheap at about €160 for the two stick set - ECC RAM is bloody expensive.

Lastly I decided to invest in some decent Cat6a cables for connecting my server and desktop to the network. I've been running on Cat5e cables for quite a long time because in all honesty I just had no need for the additional benefits of Cat6. Now that I will be regularly transferring files to and from the NAS, I felt the extra cable headroom was worth it.

How the application server works in unRAID - What is Docker?

I'm going to take you back to one of my first posts where I explained what the purpose of a hypervisor is and what the differences between a type 1 and a type 2 hypervisor are. I think this would be useful to understand before proceeding with this post.

Basically the idea behind a hypervisor is that it has the ability to emulate hardware in such a way that an operating system running on the hypervisor has no idea that it is not a physical machine. By creating multiple VMs you have the ability to isolate applications from each other, for example you might have one VM for torrents and automated media management, and then another for development work. This is all great in theory but the issue with this is that virtualisation is resource intensive! The alternative to deploying Virtual Machines is using 'containers'. Containers are similar to virtual machines in that they also allow for a level of isolation between applications but there are some significant differences..


What's a container? How does it differ from a Virtual Machine?

Going back to the idea of a virtual machine - it helped us get past the one-server-per-application paradigm that was formerly common in data centres and enterprise. By introducing the hypervisor layer it allowed multiple different operating systems to run on the same hardware so that many different applications could be used without wasting resources. While this was a huge improvement it is still limited because each application you want to run requires a guest operating system, its own virtual CPU, dedicated memory and virtualised hardware. This led to the idea of containers..

unRAID makes use of Docker containers. Similar to a virtual machine, a Docker container bundles up a complete filesystem with all of the required dependencies for an application to run. This guarantees that it will run the same way on every computer it is installed on. If you have ever done any development work you may understand the frustration of developing an application on one machine, then deploying it on another machine only to find out that it won't launch. Docker helps to do away with this by bundling all the required libraries etc. into one lightweight package, with the 'Docker Engine' being the one and only requirement. The Docker Engine is the program that builds and runs containers.

On the surface a container looks similar to a virtual machine. A container uses the kernel of the host operating system to run multiple guest instances of applications. Each instance is called a container and has its own root file system, processes, network stack, etc. The fundamental difference is that it does not require a full guest OS. Docker can control the resources (e.g. CPU, memory, disk, and network) that containers are allocated and isolate them from conflicting with other applications on the same system. This provides most of the benefits of traditional virtual machines, but with none of the overhead associated with emulating hardware.

The Docker Hub

One of the biggest advantages Docker provides is its application repository: the Docker Hub. This is a huge repository of 'Dockerised' applications that I can download and run on my microserver. With unRAID and Docker it doesn't matter which Linux distribution an application was written or packaged for - if it's been Dockerised, I can run it in a container. Really cool stuff.
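Outside of unRAID, grabbing and running something from the Hub is literally a couple of commands - using Docker's stock hello-world image as an example:

docker pull hello-world
docker run hello-world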

Sunday, June 14, 2015

How does NAS work in unRAID?

Why unRAID isn't your standard NAS

I'm going to start this section off by explaining a little bit about what RAID is. If you know what RAID is and how it works feel free to skip the next section. 

RAID originally stood for 'redundant array of inexpensive disks' but is now commonly expanded as 'redundant array of independent disks'. It is a storage virtualisation technology that combines multiple disk drives into a single logical unit for the purposes of data redundancy or performance improvement. Most RAID implementations perform an action called striping. Striping is a term used to describe individual files being split up and spread across more than one disk. By performing read and write operations on all the disks in the array simultaneously, RAID works around the performance limitation of mechanical drives, resulting in much higher read and write speeds. In layman's terms: say you have an array of 4 separate disks. A file would be split into four pieces, with one piece written to each drive at the same time, theoretically giving 4 times the speed of a single drive. That's not quite how it works in reality though..

Striping can be done at a byte level or a block level. Byte-level striping means that each file is split up into little pieces of one byte in size (8 binary digits) and each byte is written to a separate drive, i.e. the first byte gets written to the first drive, the second to the second, and so on. Block-level striping on the other hand splits the file into larger logical blocks (the block size depends on the implementation), and each block is then written to an individual disk. Obviously striping is used to improve performance but that comes with a caveat - it provides no fault tolerance or redundancy. This is known as a RAID0 setup.



If all the files are split up among the drives, what happens when one dies? Well this is where parity and mirroring come into RAID. Mirroring is the simplest method, using redundant storage: when data is written to one disk it is simultaneously written to the other, so the array consists of two drives that are always an exact copy of each other. If one of the drives fails the other still contains all the data (assuming that doesn't die too!). This is obviously not an efficient use of storage space when half of the capacity can't be utilised. This is where parity comes in - parity can be used alongside striping as a way to offer redundancy without losing half of the total capacity. With parity, one disk (depending on the RAID implementation) can store enough parity data to recover the entire array in the event of a single drive failure. It does this with XOR maths, which I won't go into in any depth here. There is one glaring problem with this setup though - what if two drives fail? You're more than likely screwed.. This is part of the reason nobody uses RAID4 in practice.
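Going back to the parity idea for a second, just to give a flavour of the XOR trick (a toy example with tiny 4-bit "drives", nowhere near real block sizes):

Disk 1:  1010
Disk 2:  0111
Disk 3:  1100
Parity:  0001   (1010 XOR 0111 XOR 1100)

If Disk 2 dies, XOR everything that's left: 1010 XOR 1100 XOR 0001 = 0111 - the missing data is rebuilt exactly.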

There are implementations available such as RAID6 which use double parity, meaning the array can survive two drive failures without data loss - a better fit if you plan on storing many terabytes of data across a large number of drives. Logically, the more drives you have, the higher the chance that one of them will fail.

What does that have to do with unRAID.. ?

Hold on, I'm getting to that! unRAID's storage capabilities are broken down into three components: the array, the cache, and the user share file system. Let's start with the array. unRAID uses a single dedicated parity disk without striping. Because unRAID does not stripe data, you have the ability to mix hard drives of differing sizes, types, etc. This also has the benefit of making the array more resistant to data loss, because your data isn't striped across multiple disks.

The reason striping isn't used (or can't be used) is because unRAID treats each drive as an individual file system. In a traditional RAID setup all the drives spin simultaneously, while in unRAID spin-down can be controlled per drive - so a drive with rarely accessed files may stay spun down (theoretically increasing its lifespan!). If the array fails, the individual drives are still readable, unlike traditional RAID arrays where you might have total data loss.

Because each drive is treated as an individual file system, this allows for the user share file system that I mentioned earlier. Let's just take an example to explain this. I could create a user share and call it media. For this media folder I can specify:
  • How the data is allocated across the disks i.e. I can include / exclude some disks
  • How the data is exposed on the network using what protocols (NFS, SMB, AFP)
  • Who can access the data by creating user accounts with permissions
The following image taken from the unRAID website explains it better than words can:


Finally we need to address the issue of performance due to the fact that striping is not used. To get around this limitation unRAID introduced the ability to utilise an SSD as a cache disk for faster writes. All the data to be written to the array is initially written directly to the dedicated cache device and then moved to the mechanical drives at a later time. Because this device is not a part of the array, the write speed is unaffected by parity calculations. However with a single cache device, data captured there is at risk as a parity device doesn’t protect it. To minimise this risk you can build a cache pool with multiple devices both to increase your cache capacity as well as to add protection for that data.

Thursday, June 11, 2015

Forget ESXi, unRAID 6 looks perfect for what I need

Up until this point I have been completely focused on installing ESXi and trying to wedge it into my plans for this server. Today I discovered unRAID - NAS and hypervisor all in one bare-metal solution. It sounds absolutely perfect for my needs as it will allow me to do everything that I want in terms of virtualising some machines, as well as offering me a NAS solution without the need for messing with passing through SATA controllers etc.


Why unRAID?

When I was looking up FreeNAS the main information I was getting was that it is resource intensive, doesn't like to be virtualised, and some of the plugins aren't very stable at the best of times. The thing to bear in mind with this is that people only post online when things go wrong, not when they're working as expected. Obviously FreeNAS is popular for a reason but it just doesn't seem to fit what I want to do. This more or less left me with a choice of a NAS or an ESXi box, unless I wanted to go passing through SATA controllers to a virtualised FreeNAS OS - which I can imagine might cause issues down the line..

Then I stumbled across unRAID from a post on a forum I frequent. A member had mentioned he was running unRAID from a USB stick on his HP Proliant MicroServer without any issues. A few minutes later I was sold after watching their promotional video.. NAS, apps, and virtualisation all in one lightweight package..

I'm really looking forward to this now!


What happens next?

The one thing FreeNAS has over unRAID is that it's free (the name probably gave that away..). If I do plan on using unRAID then I will need to fork out for a license - a one-time cost of $59, which certainly isn't going to break the bank. I'll make a decision on this after trying the 30-day trial, but I really think this is the way to go.

In a previous post I mentioned that I wanted to do the VCP certification exam to become a VMware certified professional. I might just need to build myself an ESXi whitebox..

Wednesday, June 10, 2015

Server requirements and my plans for the future

In my previous post I was pretty happy with my purchase of a HP Proliant Gen8 G1610T Microserver so I might finally be able to get working on ESXi. Today I am starting to have some doubts as I may have underestimated what I am looking for in a host.. 


Doubts about the Gen8 G1610T

Did I make the right choice?

By all means this is a great piece of kit and pretty good value too. The problem I have is that I want both a NAS and an ESXi host, and an all-in-one solution seems a little difficult to implement without issues. My original plan had been to install ESXi as my hypervisor and virtualise FreeNAS (or some other alternative), but it seems FreeNAS does not play nice when it is virtualised as it requires block-level access to the drives to function properly. You can kind of force this by passing through the disks to the VM (I previously mentioned GPU pass-through), but I've heard mixed reviews about attempting it. It seems some people got this working without issues while others experienced crashes, downtime, etc.




If I did manage to successfully virtualise FreeNAS without any problems, this still leaves me with a resource issue on the host. The recommended sizing for FreeNAS is 1GB of RAM for every 1TB of storage space. I was considering a 4 x 4TB array of WD Red drives, but with this machine maxing out at 16GB of RAM that leaves no memory available for the VMs. Even if I drop to 3TB drives that still only leaves me with about 4GB of usable RAM to work with. Certainly not ideal, and it wouldn't allow for much wiggle room.

Then there is the financial cost to consider. Obviously the drives are going to be expensive, but I had planned on buying them anyway so I consider that more of an investment. I think I would like to go all the way to 4 x 4TB, which I believe is the maximum allowed on this server, but honestly 4 x 3TB would be more than enough. That isn't the main concern here though - the cost of the additional RAM is. Normal DDR3 RAM is pretty expensive nowadays although it is finally dropping in price. The problem is this server requires ECC (error-correcting code) memory. Unlike normal everyday RAM this can detect and correct internal data corruption, so it is mainly used in systems where a fault could be catastrophic, such as financial services. Unfortunately this kind of technology demands a hefty price tag at almost twice the price of normal RAM - over €150 for one of the cheaper ValueRAM sets of 16GB (8GB x 2).


So what is really needed?

This is the question I am asking myself. My original intention was to set up a media server VM in ESXi running Plex and a torrent client, as well as additional media library applications such as SickBeard. I thought this would be a good starting point as it isn't exactly out of my comfort zone. It would work great in combination with a NAS file server, so I need to find out how best to implement that - virtualised or otherwise. I believe FreeNAS has some plugins that can be used to support these features, but I've heard far from happy reviews about them in terms of uptime and general reliability.



I definitely do want some sort of functional vSphere implementation as I want to gain my VCP certification this year if I can. I'm thinking at this point it might be best to build an ESXi whitebox to accomplish everything I want in terms of different VMs and NAS as it doesn't look like the Gen8 is going to be enough for my needs.. This could well be the beginning of a home lab... Stay tuned

Sunday, June 7, 2015

Screw it I'll just buy a server instead

Considering the level of difficulty I've been having with getting any hypervisor installed, I just decided to give up on my original plan and buy a server. I came across an offer where HP were offering €110 cashback on certain MicroServer models. After I get my cashback the total cost will be about €160 for the bare-bones server. How could I say no to that? Ordered on Wednesday night and it arrived this morning, happy days!

Introducing the HP Proliant Microserver G1610T

When I collected the box I was concerned because it looked like it may have been dropped. The below image should explain what I mean - there is a handle of sorts built into the box itself and it was ripped when I collected it. If that ripped while someone was carrying the box, they more than likely dropped it. I have to admit I felt slightly panicky when powering it up for the first time...

When I finally removed the server from its box I was pleasantly surprised at just how small this thing is. I was expecting something a fair bit bigger but this fits cleanly on my desk. It's not a whole lot bigger than my hand! My whole setup looks pretty sweet now after some desk re-organisation. I might go into this in a separate blog post.
Stylish
Hand for scale

When it finally came time to power on the server I was nervous.. but for once things went my way and it booted up without issues! Setup was a breeze with HP's Intelligent Provisioning wizard. A few clicks of my mouse and I had it up and running ready to go. The only issue I encountered was trying to access the iLO console - for those of you that do not know what iLO is, it allows me to manage and interact with the server from my computer via a web browser. It's almost as if I have a monitor, mouse, and keyboard connected to the server. Really handy feature. I tried to give iLO a static IP address so I wouldn't have to go looking for it every time it changed with DHCP. However this IP address did not allow me to access the server, and I could not ping the address from the command line. A reboot of the server seemed to resolve this - DHCP had been re-enabled after the reboot and the new IP worked. I just changed this to static and all was well with the world! :)

What did not occur to me when I was buying the server was that there was no HDD included, so disappointingly there wasn't a whole lot I could do. Some time in the next few days I will get around to installing ESXi using the internal USB slot and hook up the second SSD from my desktop for VM storage. I might finally get around to setting up some VMs! But let's not jinx it..

Thursday, June 4, 2015

Giving up on Proxmox, working with Hyper-V, and BitLocker

The installation of Proxmox should have been straightforward but unfortunately it just was not to be. I attempted the installation process from scratch this time and came out the other end with the same result.. Debian refusing to boot with the pve kernel. This left me with a few options.. 

  • Install Proxmox on a different flavour of Linux
  • Install the bare metal ISO of Proxmox
  • Go with a different hypervisor

After weighing up my options and a bit more research I decided to just ditch Proxmox altogether. The first attempt at installation didn't leave me with much confidence and I was tired of messing about with bootable USBs. I decided to go completely against my original decision and try out something different..


Client Hyper-V

Client Hyper-V is the name for the virtualisation technology that was introduced in Windows 8 (Pro and Enterprise editions). It is not enabled by default so many people probably don't even realise they have it - it needs to be enabled via the Control Panel. Client Hyper-V is more or less a slightly more limited version of the Server implementation of Hyper-V. From reading the TechNet article regarding these limitations I do not think they are going to affect me, but I did read somewhere that the free version can only run a small number of VMs. I haven't really looked into this to be sure, but I don't see it being a major issue yet. The only requirements for enabling Hyper-V are:
  • Your desktop must have at least 4GB of RAM. Mine has 16GB so I have more than enough for multiple VMs
Your CPU must support SLAT (Second Level Address Translation). My AMD FX-8350 supports SLAT so this is also not an issue.

Ok, good to go!

Does my CPU support SLAT?

Microsoft has a handy little utility called Coreinfo (part of the Sysinternals suite) that allows you to check for this. Once you have downloaded it you will need to extract it to a directory of your choice. Then open an elevated command prompt by pressing Win + X and choosing "Command Prompt (Admin)", navigate to the directory where you extracted Coreinfo, and run the command 'coreinfo -v'.

On an AMD processor, SLAT support shows up as an asterisk in the NP row, as below:

Enabling Hyper-V

Because Hyper-V is an optional feature it needs to be enabled via the Control Panel. Open the Control Panel, click Programs, and then click Programs and Features. Click 'Turn Windows features on or off', find Hyper-V in the list, tick it, and click OK - it will then ask to reboot your machine.
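If you'd rather skip the clicking, the same feature can be enabled with the standard cmdlet from an elevated PowerShell prompt (I went through the Control Panel myself):

# run from an elevated PowerShell prompt; a reboot is still required afterwards
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All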


Easy right? 

The fun begins..

Guess what!?! More issues! Yay! For some reason I am just not allowed to implement any sort of virtualisation technology outside of VirtualBox. Damn you Oracle, what have you done! During my first attempt at enabling Hyper-V it reached about 90% progress after restarting, so I figured I had missed a prerequisite or something along those lines.. I tried again and got the same result. The message appearing on my screen was "We couldn't complete the features". Interesting, but at least I didn't turn my PC into a paperweight this time. Some furious google-fu brought me all kinds of results, but the primary answer seemed to be related to antivirus software - and I do not have any installed other than Windows Defender / Microsoft Security Essentials, or whatever the built-in option is called in Windows 8. So I am working under the assumption that this is not responsible.

A second common answer for this was a backlog of Windows updates preventing the Hyper-V installation from completing. Apparently my computer just hates me, because even something as simple as getting Windows updates to install was proving difficult. Windows continuously refused to connect to the update server every time I tried checking. I kept getting Windows Update error 0x80243003 in return, so I needed to run a utility from Microsoft for repairing Windows Update. This worked... after running it countless times, as each time I ran it there was a new problem! Eventually, after getting all the updates installed, I attempted to enable Hyper-V once more and this time reached about 93%! On top of that the error changed to "We couldn't complete the updates", which I guess can be considered progress.. ?


I tried reviewing the event logs for my installation attempt but this did not shed any light on what happened. Moving on, the final relatively common resolution was to enable BitLocker on the drive where Hyper-V is installed. It was worth a try.. I mean, what harm could it do?

Bitlocker - Windows encryption

BitLocker lets you encrypt the hard drive(s) on your Windows-based system. It's basically there to protect your data on the off chance that someone robs your physical computer - they will not be able to boot it or access your data without the password. So I went through the process of enabling BitLocker..
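For what it's worth, the same thing can also be kicked off from an elevated command prompt with the built-in manage-bde tool (assuming the drive you're encrypting is C: - I used the GUI myself):

REM assuming the OS drive is C:
manage-bde -on C: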

To cut a long story short, it worked but my computer didn't play nice with it (surprise surprise). Following a reboot after enabling BitLocker I was greeted with an orange and white striped screen like the one below:

Not quite paperweight material this time as I quickly realised I could still type in the password that I had set despite having nothing to look at. I just typed and hoped and thankfully it worked out. I found this rather annoying and potentially a dodgy situation since I can't see what or where I'm typing.. I did not test to see if Hyper-V worked as I did not have the patience to let the process completely encrypt the SSD before disabling it..

So.. What now?

I never did get Hyper-V enabled and I do not think I will any time soon. So for now I am going to decide on how to proceed from here.. Surely I'll get something working eventually..


Wednesday, June 3, 2015

Tried installing Proxmox and it didn't go so well..

When I finally got my PC back working as expected and was successfully able to boot into both Linux and Windows, I figured it was about time to actually install my hypervisor. As I mentioned in my previous post, I had decided to install Proxmox as a type 2 hypervisor on Debian. The instructions for this seemed pretty straightforward, but alas, as per my luck so far, it was not to be...

I used unetbootin to create the bootable USB for Proxmox. If you have not heard of unetbootin then I highly recommend it - http://unetbootin.sourceforge.net/. 


Getting started

The installation steps listed for Debian on the Proxmox site seemed relatively straightforward. The first step involved checking my hosts file to ensure that my hostname is resolvable:

luke@debian:~$ cat /etc/hosts
127.0.0.1 localhost

127.0.1.1 debian.mydomain.com debian

Next I needed to edit my sources list to add the Proxmox VE repository. If you have ever used Linux before you have more than likely come across the command apt-get before. Well Apt uses a file that lists the 'sources' from which packages can be obtained. This file is /etc/apt/sources.list. The below three entries needed to be added to this list:

luke@debian:~$ nano /etc/apt/sources.list
deb http://ftp.at.debian.org/debian wheezy main contrib
deb http://download.proxmox.com/debian wheezy pve
deb http://security.debian.org/ wheezy/updates main contrib

Add the Proxmox VE repository key:
luke@debian:~$ wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

Update your repository and system by running
luke@debian:~$ apt-get update && apt-get dist-upgrade

Install the kernel and kernel headers
luke@debian:~$ apt-get install pve-firmware pve-kernel-2.6.32-26-pve
luke@debian:~$ apt-get install pve-headers-2.6.32-26-pve

At this point the next step is to restart and select the pve kernel. It's a surprisingly straightforward process - I honestly thought it would be more complicated. However, as has been the usual pattern so far, it's just one step forward, two steps back with everything...

Problems begin..

Once again I am having some issues with actually booting the system. After rebooting into GRUB and choosing the appropriate kernel, this appears on my screen..

Loading, please wait...
usb 5-1: device descriptor read/64, error -62
fsck from util-linux 2.25.2
/dev/mapper/debian--vg-root: clean, 166811/7012352 files, 1550982/28045312 blocks

I left it at that for well over an hour before turning it off, and I have not gone back to it since. Proxmox has straight up refused to work with me so far, and at this point I'm just about ready to give up and continue using VirtualBox.. We'll see.

Friday, May 29, 2015

Installing Debian, write protected USBs, and Windows MBR..

Installing Proxmox didn't go quite as smoothly as I wanted - quite the opposite in fact. I ended up spending hours fixing the utter mess I made of my computer. My PC had more or less become an expensive paperweight for a brief period of time. Read on for more..

Creating a bootable USB...

When building my PC I made the decision not to purchase a DVD drive as I figured I'd never use it. This means when installing an operating system I need to create what is known as a bootable USB - basically the equivalent of inserting your Windows installation DVD when installing Windows. I have done this numerous times in the past and usually it's a straightforward process. Not this time though. For some reason my USB stick had become write-protected, meaning that I could not write any new files to it. This also meant I couldn't format it, so it was pretty much useless. Great start.




I brought up DISKPART in the hope that I could just remove the attribute and then try cleaning it. Nothing is straightforward though, and this too just spat an error back at me stating that the disk is write protected: "DiskPart has encountered an error: The media is write protected".

So at this point I can't write to it. I can't format it. I can't clean it using DiskPart. I was about to give up and just get another USB but I figured trusty Ubuntu might be able to help. I have an Ubuntu VM that I use as a development environment with Oracle VirtualBox but I don't consider that an ideal solution, hence me looking for a decent hypervisor.. Anyway back on topic! I booted up my Ubuntu VM and for some reason managed to resolve the issue using fdisk! Here's what I did:


The following command will list all detected hard disks:

# fdisk -l | grep '^Disk'

Disk /dev/sda: 251.0 GB, 251000193024 bytes

Disk /dev/sdb: 251.0 GB, 251000193024 bytes

You want to find the appropriate entry for your disk and then run fdisk against that disk
# fdisk /dev/sdb

After successfully completing the required tasks I created a new ext3 filesystem on the USB
# mkfs.ext3 /dev/sdb1

And it worked! The read-only attribute was removed, but obviously the USB was not visible in Windows due to the unsupported filesystem. So back into DISKPART I went, ensured the attribute had been removed and then ran the clean command. I brought up the Windows Disk Management GUI and formatted the USB with NTFS. Success!

All that effort just to make the USB work.. What now?

Remember that I haven't even gotten around to getting the ISO onto the USB yet! In my previous post I mentioned that Proxmox could be used as either a type 1 or type 2 hypervisor, so I felt that my life might be easier if I installed it as a type 2 on Debian. So I loaded the Debian ISO onto the USB using a piece of software called unetbootin. This is probably the one and only thing that went smoothly throughout this whole process. I rebooted my computer, chose the USB from the boot menu and followed the installation process successfully onto my spare SSD. No issues so far and everything appeared to be working properly until I restarted and realised that Windows wasn't appearing in the GRUB boot menu. Everything on the Windows SSD was still accessible and I could mount the drive in Debian no problem. A quick google led me to believe this was an easy fix:
# os-prober
# update-grub

Alas, Windows failed to appear in the menu after these two commands. Some research was pointing me towards a BIOS option called 'Secure Boot' which might not be allowing the Windows UEFI bootloader to interact with GRUB. So in my attempt to resolve this I set my BIOS to Secure Boot UEFI only and then disabled an option called 'Compatibility Support Module'. I am not a smart man. After saving these settings and restarting the computer I was facing a black screen. No BIOS splash screen, no GRUB, nothing. I went from a fully functional Windows desktop, to a semi-functional Debian desktop, right down to an expensive paperweight, all in the course of about one hour. Impressive stuff.

My first step was to consult my motherboard documentation, in which I learned there is a button on my motherboard that allows me to boot straight into the BIOS - quite a handy feature to have. Unfortunately this button was no use to me and I still couldn't see anything other than a black screen. Some further digging made me realise that at this point my only option was to short the motherboard CMOS in an attempt to reset it. Thankfully this allowed me back into the BIOS and from there I was able to get back into Debian. Phew!

Windows Startup Repair isn't very useful

To rule out any Debian / GRUB involvement in the Windows issue I disconnected the Debian SSD to try and force a Windows boot. "Reboot and Select proper Boot device or Insert Boot Media in Selected Boot device and press a key" appeared on my screen. To cut a long story short, it turns out I somehow managed to destroy my Windows Master Boot Record (MBR) while installing Debian. I still have not figured out how and I'm not sure I ever will. So I installed GParted on Debian, reformatted the USB and loaded the Windows installer onto it with unetbootin. With this I attempted to complete a Startup Repair, but it just told me the repair failed. I brought up the command prompt, ran 'bootrec /fixmbr' against the Windows SSD and rebooted - this time I'm no longer getting the 'Reboot and Select proper Boot device' message. Instead I'm getting a blinking cursor, so I don't really know if I've made progress or gone backwards..

At this point I'm really thinking anything that can go wrong will go wrong, so I'm just waiting for the whole thing to burst into flames. Fixing the MBR was quite a painful process, in which I learned that if you are ever running Startup Repair from the Windows installation media, run it 3 times. Don't ask why, just do it. So back into Startup Repair I went and ran it 3 times despite it telling me it failed each time. Then:
bootrec /fixmbr
bootrec /fixboot
bootrec /scanos
bootrec /rebuildbcd

At this point rebuildbcd failed to add the Windows entry to the boot configuration data (BCD). I went back into DISKPART, set the partition as active again, then restarted and got a new error about failing to read the boot configuration table. Went through another 3 Startup Repairs for good measure and voila, Windows booted. Easy as that. Only took about 6 hours of work..

Now to actually try and install Proxmox..

Tuesday, May 26, 2015

Choosing a hypervisor

As I discussed in my previous post there is a wealth of information around the web on virtualisation, and when I started researching hypervisors it was no different. In fact the more I looked the more indecisive I found myself becoming. However I recalled my first post on this topic and the article I had mentioned reading about setting up gaming machines utilising what is known as PCI pass-through. I quickly learned that this doesn't work well with consumer Nvidia graphics cards, and this turned out to be the deciding factor for me in the end.


What is PCI pass-through and why does it matter?

If you have ever tried playing a game on a virtual machine you would quickly realise that this is not a viable solution at all for a gaming desktop. This is because normally the GPU is emulated and a resource manager carves up the resources and passes them to the individual machines. This is where PCI pass-through comes into play - Rather than sharing the resources of the GPU among the multiple machines, you can instead assign the whole card to the machine so it has independent ownership of it. There is no software layer in between the GPU and the VM managing resources so this allows for near native performance. In theory you should not even know that you are using a VM!

Many months ago I decided to get two GTX 970s for my gaming desktop rather than opting for an AMD alternative. I am living to regret that decision somewhat, as I am now learning that Nvidia does not allow their consumer-grade GPUs to be used with this pass-through technology. For that privilege you need to upgrade to their Quadro series, which from what I can tell offers me no benefit other than allowing pass-through. Did I mention they're also much more expensive and far inferior for gaming when compared to their GTX counterparts? Nice one Nvidia! Since I don't plan on replacing my GPUs any time soon, this has more or less ruled out ESXi for me, but I learned that it is possible (with a lot of effort) to implement this on a Linux-based hypervisor such as KVM / Proxmox.


And the winner is..

I narrowed my choices down to KVM and Proxmox (which is based on KVM) as the only two viable options. In the end I decided I would proceed with Proxmox, for the simple reason that it has a built-in web GUI for management and it can be installed either as a bare-metal (type 1) system or on top of an existing Debian install. This leaves me with plenty of flexibility and simple management.

The hypervisor

I never realised until today just how many hypervisor options are out there. Not only how many, but that there are different types as well. Obviously I had heard of the industry standard ESXi and the Microsoft alternative Hyper-V, but little did I realise that is only scratching the surface. You've also got XenServer, Proxmox and KVM, just to name some of the more popular alternatives. In the last few days I have managed to go from having a general idea of what I wanted to implement, to landing myself in a vast sea of information that just seems limitless. Each option has its benefits and limitations when compared to its competitors, so I have a lot of research to do before making my final decision and sticking with it. So let's start with the basics:


What is a hypervisor?

A hypervisor is a piece of software that can create and run virtual machines. The computer or server that this software runs on is known as the host, and the virtual machines are known as guests. A certain defined portion of the resources from the host machine such as memory and disk space are allocated to a guest machine to use. These guest machines have no idea that they do not own these resources - As far as the VM is concerned it is a single entity with no dependencies. The hypervisor is actually controlling the resources of the host and distributing them as required.

There are in fact two types of hypervisors - Type 1 and Type 2. This is a somewhat arbitrary distinction really as they serve the same purpose at the end of it all, but for education's sake the distinction is there. Type 1 hypervisors are commonly known as 'bare metal' hypervisors because they run directly on the hardware of the host. In other words there is no operating system or any other software in between the hardware layer and the hypervisor.
Img Source: https://www.flexiant.com
The other type is a Type 2 hypervisor, commonly referred to as a hosted hypervisor. This is software that runs on top of an operating system. Some of the more common examples of this would be Oracle VirtualBox or VMware Workstation. I guess the main disadvantage of a hosted hypervisor is that you are adding an unnecessary extra layer to the environment. However this has the benefit of making management somewhat more intuitive in my opinion.
Img Source: https://www.flexiant.com

Which is the best hypervisor option?

If you find out please let me know! Generally people are quick to recommend ESXi due to the fact it has more or less become the industry standard, and having experience with such a widely used product would be beneficial. This was part of my original reasoning behind ESXi - I do encounter it frequently at work but I don't have a whole lot of hands-on experience with it. That makes ESXi very hard to ignore, but on the flip side the possibilities are somewhat limited without purchasing a license. I'm not going into the differences between the free and licensed options in this post, but it is definitely something I am going to do in the future.

Then of course you have Hyper-V, another very commonly implemented option from Microsoft. This has the massive advantage of being free and comes bundled with Windows Server as an add-on feature. If you are running Windows 8 at home you can also enable Client Hyper-V, which isn't as feature-rich as the Server alternative but serves the same purpose. Finally you have your Linux alternatives, which are also mostly free if you are not interested in paying for support. These options are much less widely used but are certainly growing in popularity.

I've got a lot of work to do....

Sunday, May 24, 2015

Adventures in virtualisation

I had this crazy idea to set up a home server - not that I really need one. My end goal would be to maintain multiple virtual machines each with their own individual responsibilities running on a hypervisor such as ESXi. I thought it might be beneficial to start a blog so I could keep track of what I've learned along the way and perhaps inspire someone else to follow the same path. You might be questioning my reasoning behind such a project in a home environment.. After reading about setting up a multi-headed ESXi server my inner-geek couldn't relax thinking of all the possibilities this could be used for. That, and it's just a very cool thing to do.




Source: https://www.pugetsystems.com/labs/articles/Multi-headed-VMWare-Gaming-Setup-564/

Seriously how cool does that look!? Four independent machines, each running an instance of Battlefield, running off the one physical server. Now realistically I don't have a need for this kind of setup but perhaps on a small scale this might be useful. Imagine being able to deploy a new computer at the touch of a button to any room in the house. On top of that you have the advantage of keeping everything segregated which is not only beneficial from an organisational point of view, but I imagine it helps to improve security as well.

Maybe I have deluded myself into thinking this would be useful.. We'll see..