Saturday, June 27, 2015

Setting up the Plex container in unRAID 6

In this post I am going to discuss how I went about adding the Plex container in unRAID 6. It's a fairly straightforward process and thankfully didn't cause me many headaches. I was originally running the server from my desktop, but that wasn't an ideal solution because I don't leave my desktop on all the time.





Enabling Docker

The first step was enabling Docker. To do this you simply navigate to the Settings tab, click Docker, and use the drop-down menu to enable it. PlexMediaServer is one of the default Docker templates that come packaged by Lime Technology in unRAID 6. Navigate to the new Docker tab and you will see a section named 'Add container'. From there you want to choose 'PlexMediaServer' under the limetech heading.

There is only one required folder named 'config' for this Docker container but you're going to want to add more. The config folder does exactly what it says on the tin - it stores the configuration settings for this particular container. I originally made the silly mistake of pointing this folder to a directory on the flash drive running unRAID. As soon as I rebooted the server I lost all the configuration I had just spent time setting up - bummer. So for this directory you're going to want to specify a directory on the array. I created a folder called 'DockerConfig'.

In order to add any media you will need to specify those directories as well. I added one named /movies pointing to /mnt/disk1/movies and another named /series pointing to /mnt/disk1/series.
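For reference, the volume mappings in the template translate into something like the following docker run command under the hood. This is just an illustrative sketch - the image name and exact flags here are my assumptions, not copied from unRAID:

root@tower:~# docker run -d --name PlexMediaServer \
  --net=host \
  -v /mnt/user/DockerConfig/plex:/config \
  -v /mnt/disk1/movies:/movies \
  -v /mnt/disk1/series:/series \
  limetech/plex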

All that is left now is to allow unRAID to create the Docker container. Simple as.


Configuring Plex

There wasn't a whole lot of configuration required with Plex, assuming you only want to use it locally within your home. If you plan on streaming media externally you will need to set up remote access. There are three steps involved:
  • Sign in to your Plex account - if you don't have one yet you will need to register an account.
  • Port forwarding - by default Plex uses port 32400, but you can specify another port if you prefer. You will need to forward this port to the IP of your server.
  • Edit the settings of your Plex server, tick the box to manually specify the remote access port, and enter whichever port you chose.
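A quick way to sanity-check the port forward once it's in place is to request the Plex web endpoint from outside your network (or have a friend do it). This is just an illustrative check with a placeholder address - any HTTP response back means the forward is working, while a timeout means it isn't reaching the server:

$ curl -I http://YOUR_EXTERNAL_IP:32400/web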

Wednesday, June 24, 2015

unRAID 6 benchmarks

Now that I've got unRAID up & running I thought it would be interesting to run some benchmarks so I could determine what kind of speed to expect. I have purchased a gigabit switch and Cat6a Ethernet cables for wiring everything up, so this is as fast as I can get for now. The program I used to run these benchmarks is called 'CrystalDiskMark'. This is with two WD Red 3TB drives, with one acting as a parity disk.


I don't think that's too bad considering I don't have a cache device set up yet. Still slower than my desktop mechanical drive for both sequential read and write speeds, but I doubt I will notice the difference when it comes to real-world usage - hopefully, anyway. Parity certainly had more of an impact on the speeds than I would have liked. My internal 1TB WD Blue mechanical drive clocked the below speeds. Much faster for sequential reads and writes, but a hell of a lot slower for the 4K metrics.


I'll probably revisit this in the future when I set up my cache device.

Monday, June 22, 2015

Benchmarking my desktop SSD

My desktop currently has a 120GB Crucial M500 SSD installed for booting my OS and related applications. It's not exactly a top-of-the-line drive by any means but it's more than enough for everyday use. The spec sheet for this model claims I should be getting up to 500MB/s reads and 400MB/s writes. So when I saw the results of my CrystalDiskMark test I was underwhelmed to say the least. It's worth noting that this drive was 75% full at the time of the test, and I understand this has the potential to affect speeds - but not this much!

56MB/s sequential writes?? Time to figure out what's going on here..



Troubleshooting steps

First off I needed to ensure that AHCI was enabled. AHCI stands for Advanced Host Controller Interface - a hardware mechanism that allows software to communicate with Serial ATA (SATA) devices such as SSDs. Windows supports AHCI out of the box, but just to be sure I went into Device Manager to confirm the AHCI controller was enabled and running.

So it's enabled - great! Next I needed to confirm that the SSD is actually being managed by this controller. Right-click the controller, choose Properties, then navigate to the 'Details' tab. There you are greeted with a drop-down menu; I chose 'Children' and could see my SSD listed, so AHCI is definitely handling the drive.

Then I needed to confirm that 'TRIM' was enabled. TRIM support is essential for an SSD to run the way it should and avoid slow write speeds. This command allows an operating system to inform a solid-state drive (SSD) which blocks of data are no longer considered in use and can be wiped internally. To test whether this is enabled you can run the command "fsutil behavior query DisableDeleteNotify". If this returns 0 then TRIM is enabled.
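For clarity, this is what the check looks like when TRIM is enabled (a result of 1 would mean delete notifications, and therefore TRIM, are disabled):

C:\>fsutil behavior query DisableDeleteNotify
DisableDeleteNotify = 0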

The next step was to make sure I was at the latest revision of the firmware. The firmware download for my SSD comes as an ISO package so I stuck it onto a USB using unetbootin and made a backup of my current SSD before proceeding - I've had enough bad experiences to warrant backups! Thankfully the installation completed successfully without any issues and I was upgraded from MU03 to the MU05 revision. Rebooted and went through another benchmark with CrystalDiskMark. No improvement whatsoever.



So considering there was little to no change in the speeds I figured I would confirm the results with another benchmark tool - ATTO Disk Benchmark. Now I'm seeing more along the lines of the expected speeds! At my maximum I reached over 500MB/s reads and maxed out at about 140MB/s writes. I'm still a little disappointed with the write speeds, but this is a start at least.




As this was an SSD I was recommended to test out 'AS SSD Benchmark', which also confirmed that AHCI was enabled and my partition alignment was okay. The reported speeds were pretty much in line with what I've seen so far, with write speeds between 113MB/s and 134MB/s. Still disappointing.





I went ahead and asked someone else to run the same test on their Crucial M4, which is an older model than mine. He was clocking double my write speeds with similar read speeds using the same test config as me. The only other difference between his test and mine was that his drive had a higher total capacity with about 50% free, whereas I only had 25% free. I went about reducing the space consumed on my drive and then re-ran the test, only to receive roughly the same numbers again..





The conclusion

After a bit more digging I realised this was down to false advertising rather than anything being wrong with my drive or configuration. The M500 series drives are capable of up to 400MB/s write speeds, but only at the higher-capacity end of the range. My specific model - the CT120M500SSD1 - turns out to be rated at only about 130MB/s, which more or less matches up with the speeds I was recording in some of the benchmarks. Here is a screenshot from the product page on Newegg:


Sunday, June 21, 2015

Installing unRAID 6 on my HP Proliant Gen8 Microserver G1610T

In this post I am going to discuss how I went about installing unRAID 6 on my HP Proliant Microserver G1610T.


Downloading the unRAID image

Before installing unRAID I was under the impression that it would come as an ISO like every other OS. However, unRAID just comes as a package of files with a 'make-bootable' executable inside. Preparing the USB is easy! First of all your USB stick needs to be formatted as FAT32 and the volume label must be set to UNRAID in capital letters.
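If you prefer the command line over the Windows format dialog, something along these lines should do the job - note that the drive letter E: is just an example, so double-check which letter your USB stick was actually assigned first:

C:\>format E: /FS:FAT32 /V:UNRAID /Q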

Then simply transfer all the files to the root directory of the USB (i.e. not in a folder) and run the 'make-bootable.bat' file as administrator (right-click -> Run as administrator). A CMD prompt will appear asking you to confirm you want to proceed - press any key to continue and hey presto, job done.



Now you just need to eject the USB from your computer, connect it to the internal USB slot of the Microserver, and boot it up. Mine booted from the USB straight away without editing any BIOS options. After successfully booting up I was able to navigate to http://tower from my desktop and was greeted with the unRAID web GUI. It really was that easy!



Licensing

Before you're able to do anything inside unRAID you need a license key. Upon first installation you're entitled to a 30-day evaluation period to test out the software. To activate your license navigate to 'Tools -> Registration'. You will need to enter your email address so the license URL can be sent to you, which you then paste into the box provided.



After that you're pretty much good to go! 

Saturday, June 20, 2015

New components for my HP G1610T Gen8 Microserver - Upgrades!

I decided it would be best to invest some more money into my microserver rather than trying to struggle through with the limited resources available in the server itself. I also needed some hard drives to fill up the drive bays for my NAS.


What did I buy?

2 x 3TB WD Red NAS drives
1 x 2 Port SATA Controller
3 x Cat6a ethernet cables
2 x 8GB ECC DDR3 RAM

The NAS drives will obviously be going into the bays available at the front of the server so I can set up my NAS. As I only have two drives at the moment, the array will only have 3TB of usable space due to the parity disk. Realistically that is all I need to start with, as my media collection is only about that large at the moment. I bought the SATA controller card because the internal SATA connector only runs at SATA I speeds, while this card runs at SATA III. It will be used to connect my cache devices if I ever get around to implementing that.

The requirements for running unRAID are pretty minimal compared to FreeNAS, with unRAID only requiring about 1GB of RAM if you intend to run it as a pure NAS. However, once you start playing around with containers and virtualised machines you will understandably get tight on resources. With that in mind I decided it would be best to upgrade to 16GB of RAM, which is the most this machine will accept. This didn't come cheap at about €160 for the two-stick set - ECC RAM is bloody expensive.

Lastly I decided to invest in some decent Cat6a cables for connecting my server and desktop to the network. I've been running on Cat5e cables for quite a long time because, in all honesty, I just had no need for the additional benefits of Cat6 cables. Now that I will be regularly transferring files to and from the NAS, I felt the need for the additional bandwidth.

How the application server works in unRAID - What is Docker?

I'm going to take you back to one of my first posts where I explained what the purpose of a hypervisor is and what the differences between a level 1 and a level 2 hypervisor are. I think this would be useful to understand before proceeding with this post. 

Basically the idea behind a hypervisor is that it can emulate hardware in such a way that an operating system running on top of it has no idea that it is not running on a physical machine. By creating multiple VMs you have the ability to isolate applications from each other; for example, you might have one VM for torrents and automated media management, and another for development work. This is all great in theory, but the issue is that virtualisation is resource intensive! The alternative to deploying virtual machines is using 'containers'. Containers are similar to virtual machines in that they also allow for a level of isolation between applications, but there are some significant differences..


What's a container? How does it differ from a Virtual Machine?

Going back to the idea of a virtual machine - it helped us get past the one-server-per-application paradigm that was formerly common in data centres and the enterprise. By introducing the hypervisor layer, multiple operating systems could run on the same hardware, so many different applications could be used without wasting resources. While this was a huge improvement, it is still limited because each application you want to run requires a guest operating system with its own CPU, dedicated memory, and virtualised hardware. This led to the idea of containers..

unRAID makes use of Docker containers. Similar in concept to a virtual machine, a Docker container bundles up a complete filesystem with all of the required dependencies for an application to run. This guarantees that it will run the same way on every computer it is installed on. If you have ever done any development work you may understand the frustration of developing an application on one machine, then deploying it on another only to find out that it won't launch. Docker does away with this by bundling all the required libraries etc. into one lightweight package, with the 'Docker Engine' being the one and only requirement. The Docker Engine is the program that builds and runs containers.
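To give a rough idea of what 'Dockerising' an application involves, here is a minimal hypothetical Dockerfile - not one of the unRAID templates, just a sketch of how an app gets bundled with its dependencies:

# Start from a base filesystem image
FROM debian:wheezy
# Bake the application's dependencies into the image
RUN apt-get update && apt-get install -y python
# Add the application itself (a hypothetical script)
COPY myapp.py /opt/myapp.py
# Define what runs when the container starts
CMD ["python", "/opt/myapp.py"]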

Fundamentally a container looks similar to a virtual machine. A container uses the kernel of the host operating system to run multiple guest instances of applications. Each instance is called a container and has its own root file system, processes, network stack, etc. The fundamental difference, however, is that it does not require a full guest OS. Docker can control the resources (e.g. CPU, memory, disk, and network) that containers are allocated and isolate them from conflicting with other applications on the same system. This provides most of the benefits of traditional virtual machines, but with none of the overhead associated with emulating hardware.
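As an example of that resource control, Docker lets you cap a container's memory and weight its share of CPU time at launch. The -m and --cpu-shares flags are real, but the image name is a placeholder and the values are just illustrative:

root@tower:~# docker run -d -m 512m --cpu-shares 512 --name limited-app some/image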

The Docker Hub

One of the biggest advantages Docker provides is its application repository: the Docker Hub. This is a huge repository of 'Dockerised' applications that I can download and run on my microserver. Since containers share the host's Linux kernel it's Linux applications only, but with unRAID that covers practically everything I'd want to run, and it doesn't matter which distribution the application was originally built for. Really cool stuff.

Sunday, June 14, 2015

How does NAS work in unRAID?

Why unRAID isn't your standard NAS

I'm going to start this section off by explaining a little bit about what RAID is. If you know what RAID is and how it works feel free to skip the next section. 

RAID originally stood for 'redundant array of inexpensive disks' but is now commonly taken to mean 'redundant array of independent disks'. It is a storage virtualisation technology that combines multiple disk drives into a single logical unit for the purposes of data redundancy or performance improvement. Most RAID implementations perform an action called striping, where individual files are split up and spread across more than one disk. By performing read and write operations on all the disks in the array simultaneously, RAID works around the performance limitations of mechanical drives, resulting in much higher read and write speeds. In layman's terms: say you have an array of 4 separate disks. A file would be split into four pieces, with one piece written to each drive at the same time, theoretically giving 4 times the speed of a single drive. That's not quite how it works in reality though..

Striping can be done at the byte level or the block level. Byte-level striping means that each file is split into pieces one byte in size (8 binary digits) and each byte is written to a separate drive, i.e. the first byte gets written to the first drive, the second to the second, and so on. Block-level striping on the other hand splits the file into logical blocks of data, with a typical default block size of 512 bytes, and each block is written to an individual disk. Obviously striping is used to improve performance, but that comes with a caveat - on its own it provides no fault tolerance or redundancy. This is known as a RAID0 setup.



If all the files are split up among the drives, what happens when one dies? Well, this is where parity and mirroring come into RAID. Mirroring is the simplest method, using redundant storage: when data is written to one disk it is simultaneously written to another, so the array has two drives that are always an exact copy of each other. If one of the drives fails, the other still contains all the data (assuming it doesn't die too!). This is obviously not an efficient use of storage space when half of it can't be utilised. This is where parity comes in - parity can be used alongside striping as a way to offer redundancy without losing half of the total capacity. With parity, a single disk can be used (depending on the RAID implementation) to store enough parity data to recover the entire array in the event of a drive failure. It does this through mathematical XOR operations, which I'm not going to derive in this post. There is one glaring problem with this setup - what if two drives fail? You're more than likely screwed.. This is part of the reason nobody uses RAID4 in practice.
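That said, the core idea is simple enough to show in a couple of shell commands. Here 170 and 204 stand in for one byte of data from each of two drives; XOR-ing them produces the parity value, and XOR-ing the parity with the surviving value regenerates whatever was lost:

$ d1=170; d2=204
$ parity=$(( d1 ^ d2 ))
$ echo $parity
102
$ echo $(( parity ^ d2 ))   # parity XOR the surviving drive recovers d1
170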

There are implementations available such as RAID6 which use double parity, meaning the array can survive two drive failures without data loss - a better option if you plan on storing many, many terabytes of data across a large number of drives. Logically, the more drives you have, the higher the chance that one of them will fail.

What does that have to do with unRAID.. ?

Hold on, I'm getting to that! unRAID's storage capabilities are broken down into three components: the array, the cache, and the user share file system. Let's start with the array. unRAID uses a single dedicated parity disk without striping. Because unRAID does not utilise striping, you have the ability to mix hard drives of differing sizes, types, etc. This also has the benefit of making the array more resistant to data loss, because your data isn't striped across multiple disks.

The reason striping isn't used (or can't be used) is that unRAID treats each drive as an individual file system. In a traditional RAID setup all the drives spin simultaneously, while in unRAID spin-down can be controlled per drive - so a drive with rarely accessed files may stay off (theoretically increasing its lifespan!). And if the array fails, the individual drives are still accessible, unlike traditional RAID arrays where you might suffer total data loss.

Because each drive is treated as an individual file system, this allows for the user share file system that I mentioned earlier. Let's take an example to explain this. I could create a user share and call it 'media'. For this media share I can specify:
  • How the data is allocated across the disks i.e. I can include / exclude some disks
  • How the data is exposed on the network using what protocols (NFS, SMB, AFP)
  • Who can access the data by creating user accounts with permissions
The following image taken from the unRAID website explains it better than words can:


Finally we need to address the issue of performance, given that striping is not used. To get around this limitation unRAID introduced the ability to use an SSD as a cache disk for faster writes. Data destined for the array is initially written directly to the dedicated cache device and then moved to the mechanical drives at a later time. Because this device is not part of the array, its write speed is unaffected by parity calculations. However, with a single cache device the data sitting there is at risk, as the parity disk doesn't protect it. To minimise this risk you can build a cache pool with multiple devices, both to increase your cache capacity and to add protection for that data.

Thursday, June 11, 2015

Forget ESXi, unRAID 6 looks perfect for what I need

Up until this point I have been completely focused on installing ESXi and trying to wedge it into my plans for this server. Today I discovered unRAID - NAS and hypervisor in one bare-metal solution. It sounds absolutely perfect for my needs, as it will allow me to do everything I want in terms of virtualising some machines, as well as offering me a NAS solution without the need to mess about with passing through SATA controllers etc.


Why unRAID?

When I was looking up FreeNAS the main information I was getting was that it is resource intensive, doesn't like to be virtualised, and some of the plugins aren't very stable at the best of times. The thing to bear in mind is that people only post online when things go wrong, not when they're working as expected. Obviously FreeNAS is popular for a reason, but it just doesn't seem to fit what I want to do. This more or less left me with a choice of a NAS or an ESXi box, unless I wanted to go passing through SATA controllers to a virtualised FreeNAS OS - which I can imagine might cause issues down the line..

Then I stumbled across unRAID in a post on a forum I frequent. A member mentioned he was running unRAID from a USB stick on his HP Proliant MicroServer without any issues. A few minutes later I was sold after watching their promotional video.. NAS, apps, and virtualisation all in one lightweight package..

I'm really looking forward to this now!


What happens next?

The one thing FreeNAS has over unRAID is that it's free (the name probably gave that away..). If I do go with unRAID then I will need to fork out for a license, which is a one-time cost of $59 - certainly not going to break the bank. I'll make a decision on this after trying the 30-day trial, but I really think this is the way to go.

In a previous post I mentioned that I wanted to do the VCP certification exam to become a VMware certified professional. I might just need to build myself an ESXi whitebox..

Wednesday, June 10, 2015

Server requirements and my plans for the future

In my previous post I was pretty happy with my purchase of a HP Proliant Gen8 G1610T Microserver, meaning I might finally be able to get working on ESXi. Today I am starting to have some doubts, as I may have underestimated what I am looking for in a host..


Doubts about the Gen8 G1610T

Did I make the right choice?

By all means this is a great piece of kit and pretty good value too. The problem I have is that I want both a NAS and an ESXi host, and an all-in-one solution seems a little difficult to implement without issues. My original plan had been to install ESXi as my hypervisor and virtualise FreeNAS (or some other alternative), but it seems FreeNAS does not play nice when it is virtualised, as it requires block-level access to the drives to function properly. You can kind of force this by passing the disks through to the VM (I previously mentioned GPU pass-through), but I've heard mixed reports about attempting this. It seems some people got it working without issues while others experienced crashes, downtime, etc.




Even if I did manage to successfully virtualise FreeNAS without any problems, that still leaves me with a resource issue on the host. The recommended resources for FreeNAS are 1GB of RAM for every 1TB of storage space. I was considering a 4 x 4TB array of WD Red drives, but that would leave me with no memory available for the VMs. Even if I drop to 3TB drives this still only leaves me with about 4GB of usable RAM to work with. Certainly not ideal, and it wouldn't allow for much wiggle room.

Then there is the financial cost to consider. Obviously the drives are going to be expensive, but I had planned on buying those anyway so I consider them more of an investment. I think I would like to go all the way to 4 x 4TB, which I believe is the maximum allowed on this server, though honestly 4 x 3TB would be more than enough. That isn't the main concern here though - the cost of the additional RAM is. Normal DDR3 RAM is pretty expensive nowadays, although it is finally dropping in price. The problem is that this server requires ECC (error-correcting code) memory. Unlike normal everyday RAM, ECC memory can detect and correct internal data corruption, so it is mainly used in systems where a fault could be catastrophic, such as financial services. Unfortunately this kind of technology demands a hefty price tag at almost twice the price of normal RAM - over €150 for one of the cheaper ValueRAM sets of 16GB (8GB x 2).


So what is really needed?

This is the question I am asking myself. My original intention as a starting point was to set up a media server VM in ESXi, including Plex and torrenting as well as additional media library applications such as SickBeard. I thought this would be a good starting point as it isn't exactly out of my comfort zone. This would work great in combination with a NAS file server, so I need to find out how best to implement this - virtualised or otherwise. I believe FreeNAS has some plugins that can be used to support these features, but I've heard far from happy reviews about them in terms of uptime and general reliability.



I definitely do want some sort of functional vSphere implementation, as I want to gain my VCP certification this year if I can. I'm thinking at this point it might be best to build an ESXi whitebox to accomplish everything I want in terms of different VMs and NAS, as it doesn't look like the Gen8 is going to be enough for my needs.. This could well be the beginning of a home lab... Stay tuned

Sunday, June 7, 2015

Screw it I'll just buy a server instead

Considering the level of difficulty I've been having with getting any hypervisor installed, I decided to give up on my original plan and buy a server. I came across an offer where HP were offering €110 cashback on certain Micro Server models. After I get my cashback the total cost will have been about €160 for the bare-bones server. How could I say no to that? Ordered on Wednesday night and it arrived this morning, happy days!

Introducing the HP Proliant Microserver G1610T

When I collected the box I was concerned because it looked like it may have been dropped. The below image should explain what I mean. There is a handle of sorts on the box itself and it was ripped when I collected it - if it ripped while someone was carrying the box, they more than likely would have dropped it. I have to admit I felt slightly panicky when powering the server up for the first time...

When I finally removed the server from its box I was pleasantly surprised at just how small this thing is. I was expecting something a fair bit bigger but this fits cleanly on my desk. It's not a whole lot bigger than my hand! My whole setup looks pretty sweet now after some desk re-organisation. I might go into this in a separate blog post.
Stylish
Hand for scale

When it finally came time to power on the server I was nervous.. but for once things went my way and it booted up without issues! Setup was a breeze with HP's Intelligent Provisioning wizard - a few clicks of my mouse and I had it up and running, ready to go. The only issue I encountered was trying to access the iLO console. For those of you who do not know what iLO is, it allows me to manage and interact with the server from my computer via a web browser - almost as if I had a monitor, mouse, and keyboard connected to the server. Really handy feature. I tried to give iLO a static IP address so I wouldn't have to go looking for it every time it changed with DHCP. However, this IP address did not allow me to access the server, and I could not ping the address from the command line. A reboot of the server seemed to resolve this - DHCP had been re-enabled after the reboot and the new IP worked. I just changed this to static and all was well with the world! :)

What did not occur to me when I was buying the server was that it came with no HDD included, so disappointingly there wasn't a whole lot I could do. Some time in the next few days I will get around to installing ESXi using the internal USB slot and hooking up the second SSD from my desktop for VM storage. I might finally get around to setting up some VMs! But let's not jinx it..

Thursday, June 4, 2015

Giving up on Proxmox, working with Hyper-V, and BitLocker

The installation of Proxmox should have been straightforward but unfortunately it just was not to be. I attempted the installation process from scratch this time and came out the other end with the same result.. Debian refusing to boot with the pve kernel. This left me with a few options.. 

  • Install Proxmox on a different flavour of Linux
  • Install the bare metal ISO of Proxmox
  • Go with a different hypervisor

After weighing up my options and doing a bit more research I decided to just ditch Proxmox altogether. The first attempt at installation didn't leave me with much confidence, and I was tired of messing about with bootable USBs. I decided to go completely against my original decision and try out something different..


Client Hyper-V

Client Hyper-V is the name for the virtualisation technology that was introduced in Windows 8. It is not enabled by default, so many people probably don't even realise they have it; it needs to be enabled via the Control Panel. Client Hyper-V is more or less a slightly more limited version of the server implementation of Hyper-V. From reading the TechNet article regarding these limitations I do not think they are going to affect me, but I did read somewhere that the free version can only run a small number of VMs. I haven't really looked into this to be sure, but I don't see it being a major issue yet. The only requirements for enabling Hyper-V are:
  • Your desktop must have at least 4GB of RAM. Mine has 16GB so I have more than enough for multiple VMs
  • Your CPU must support SLAT technology (Second Level Address Translation). My AMD FX-8350 supports SLAT so this is also not an issue.

Ok, good to go!

Does my CPU support SLAT?

Microsoft has a handy little utility called Coreinfo that allows you to check for this; it can be downloaded from the Microsoft Sysinternals site. Once you have downloaded it, extract it to a directory of your choice. Then open an elevated command prompt by pressing Win + X and choosing "Command Prompt (Admin)", navigate to the directory where you extracted Coreinfo, and run the command 'coreinfo -v'.

On an AMD system, if your processor supports SLAT it will have an asterisk in the NP row, as below:
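The relevant lines of the output look something like this - an illustration from memory rather than an exact capture of my run:

C:\coreinfo>coreinfo -v
AMD FX(tm)-8350 Eight-Core Processor
...
SVM     *       Supports AMD hardware-assisted virtualization
NP      *       Supports AMD nested page tables (SLAT)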

Enabling Hyper-V

Because Hyper-V is an optional feature it needs to be enabled via the Control Panel. Open the Control Panel, click Programs, then click Programs and Features, and click 'Turn Windows features on or off'. Find Hyper-V in the list, enable it, and click OK; it will then ask to reboot your machine.
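If you'd rather skip the clicking, the same feature can be enabled from an elevated command prompt with DISM:

C:\>DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V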


Easy right? 

The fun begins..

Guess what!?! More issues! Yay! For some reason I am just not allowed to implement any sort of virtualisation technology outside of VirtualBox. Damn you Oracle, what have you done! During my first attempt at enabling Hyper-V it reached about 90% progress after restarting, so I figured I had missed a prerequisite or something along those lines.. So I tried again and got the same result. The message appearing on my screen was "We couldn't complete the features". Interesting, but at least I didn't turn my PC into a paperweight this time. Some furious google-fu brought me all kinds of results, but the primary answer seemed to be related to antivirus software - and I do not have any installed other than Windows Defender or Microsoft Security Essentials, whatever the built-in option is called in Windows 8. So I am working under the assumption that this is not responsible.

A second common answer was a backlog of Windows Updates preventing the Hyper-V installation from completing. Apparently my computer just hates me, because even something as simple as getting Windows Updates to install was proving difficult. Windows continuously refused to connect to the update server every time I tried checking, returning Windows Update error 0x80243003, so I needed to run a utility from Microsoft for repairing Windows Update. This worked... after running it countless times, as each time I ran it there was a new problem! Eventually, after getting all the updates installed, I attempted to enable Hyper-V once more and this time reached about 93%! On top of that the error changed to "We couldn't complete the updates", which I guess can be considered progress.. ?


I tried reviewing the event logs for my installation attempt but this did not shed any light on what happened. Moving on, the final relatively common resolution was to enable BitLocker on the drive where Hyper-V is installed. Worth a try.. I mean, what harm could it do?

Bitlocker - Windows encryption

BitLocker lets you encrypt the hard drive(s) on your Windows-based system. It's basically there to protect your data on the off chance that someone steals your physical computer - they will not be able to boot the machine or access your data without the password. So I went through the process of enabling BitLocker..
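For what it's worth, BitLocker can also be driven from an elevated command prompt using manage-bde - checking the current state first, then turning it on. Consider this a sketch of the approach rather than exactly what I ran:

C:\>manage-bde -status C:
C:\>manage-bde -on C: -RecoveryPassword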

To cut a long story short, it worked, but my computer didn't play nice with it (surprise surprise). Following a reboot after enabling BitLocker I was greeted with an orange and white striped screen like the one below:

Not quite paperweight material this time, as I quickly realised I could still type in the password that I had set despite having nothing to look at. I just typed and hoped, and thankfully it worked out. I found this rather annoying and potentially a dodgy situation, since I couldn't see what or where I was typing.. I did not test whether Hyper-V worked, as I did not have the patience to let the process completely encrypt the SSD before disabling it..

So.. What now?

I never did get Hyper-V enabled and I do not think I will any time soon. So for now I am going to decide how to proceed from here.. Surely I'll get something working eventually..


Wednesday, June 3, 2015

Tried installing Proxmox and it didn't go so well..

When I finally got my PC back working as expected and was able to boot into both Linux and Windows, I figured it was about time to actually install my hypervisor. As I mentioned in my previous post, I had decided to install Proxmox as a level two hypervisor on Debian. The instructions for this seemed pretty straightforward but, as per my luck so far, it was not to be...

I used unetbootin to create the bootable USB for Proxmox. If you have not heard of unetbootin then I highly recommend it - http://unetbootin.sourceforge.net/. 


Getting started

The installation steps listed for Debian on the Proxmox site seemed relatively straightforward. The first step involved checking my hosts file to ensure that my hostname is resolvable:

luke@debian:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 debian.mydomain.com debian
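A quick way to confirm the hostname actually resolves to that entry - the output shown is what I'd expect given the hosts file above:

luke@debian:~$ getent hosts debian
127.0.1.1       debian.mydomain.com debian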

Next I needed to edit my sources list to add the Proxmox VE repository. If you have ever used Linux you have more than likely come across the command apt-get. Apt uses a file that lists the 'sources' from which packages can be obtained: /etc/apt/sources.list. The below three entries needed to be added to this list (note that editing this file and running the commands that follow requires root):

root@debian:~# nano /etc/apt/sources.list
deb http://ftp.at.debian.org/debian wheezy main contrib
deb http://download.proxmox.com/debian wheezy pve
deb http://security.debian.org/ wheezy/updates main contrib

Add the Proxmox VE repository key:
root@debian:~# wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

Update your repository and system by running
root@debian:~# apt-get update && apt-get dist-upgrade

Install the kernel and kernel headers
root@debian:~# apt-get install pve-firmware pve-kernel-2.6.32-26-pve
root@debian:~# apt-get install pve-headers-2.6.32-26-pve

At this point the next step is to restart and select the pve kernel. It's a surprisingly straightforward process - I honestly thought it would be more complicated. However, as has been the pattern so far, it's one step forward, two steps back with everything...

Problems begin..

Once again I am having some issues with actually booting the system. After rebooting into GRUB and choosing the appropriate kernel, this appears on my screen..

Loading, please wait...
usb 5-1: device descriptor read/64, error -62
fsck from util-linux 2.25.2
/dev/mapper/debian--vg-root: clean, 166811/7012352 files, 1550982/28045312 blocks

I left it at that for well over an hour before turning it off, and I have not gone back to it since. Proxmox has straight out refused to work with me so far, and at this point I'm just about ready to give up and continue using VirtualBox.. We'll see