My challenge was to build a home lab: a single box that runs my everyday PC as well as the virtual infrastructure for the lab. Space constraints mean I cannot afford more than one box. I am not very well versed in VMware technology, but I chose it because my network software is only supported on VMware.
CPU – Intel i5-6500
Motherboard – ASRock H170M-Pro4
Memory – Kingston Fury 16GB DDR4
Apparently, ESXi 6.0 has no support for the onboard NIC. I learnt this when I tried to install ESXi 6.0 Update 1. So I had to create a custom ESXi installation ISO, which bundles an older driver for the onboard NIC. The tool is great and a wonderful effort from its author.
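For the curious, I built the custom ISO with a community PowerShell script (ESXi-Customizer-PS, run from a PowerCLI session, pulling packages from the V-Front Online Depot). A typical invocation looks roughly like this; the script version and the NIC driver package name (net55-r8168, an older Realtek driver) are examples and depend on your board:

```powershell
# Run in a PowerCLI session on a Windows machine.
# -v60  : build an ESXi 6.0 image from the VMware online depot
# -vft  : also connect the V-Front Online Depot (community drivers)
# -load : inject the named driver package into the ISO
.\ESXi-Customizer-PS-v2.4.ps1 -v60 -vft -load net55-r8168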
Installation went well and ESXi was installed.
Guest Installation – Win10 64
I don’t have an installation disc; I only have a laptop running Win10. So I decided to convert it, using the VMware vCenter Converter Standalone tool with my new host as the target. The migration took almost 7 hours over the network for a 300GB disk. I don’t know whether an SSD on the host would have made things much faster; the source laptop runs on a Samsung SSD. The conversion completed and Win10 booted nicely without any hiccups, although ESXi lists the guest as Win 8.
Directed I/O or Passthrough
I did some research and, after e-mailing ASRock support, found that ASRock supports the VT-d option, which is necessary on both the CPU and the motherboard to get passthrough working. Intel calls it “Intel® Virtualization Technology for Directed I/O” (VT-d). The CPU I use (i5-6500) supports it. ASRock has it on almost all its motherboards; I used the online manual to track it down (it can be found under the Chipset Configuration chapter) and later confirmed it with ASRock support. It should be enabled; I don’t remember whether the default setting is enabled or disabled.
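If you want to sanity-check the BIOS settings before installing ESXi, any Linux live USB booted on the same box can confirm that VT-x and VT-d are actually exposed. This is just a quick check I would suggest, not part of my original build notes:

```shell
# Boot a Linux live USB on the box, then:
grep -m1 -o 'vmx' /proc/cpuinfo       # "vmx" present means VT-x is exposed by the BIOS
dmesg | grep -i -e DMAR -e IOMMU      # DMAR/IOMMU lines indicate VT-d is active
```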
Graphics Card as passthrough.
I have a few PCIe cards. The first one I tried was an Asus GeForce 210, which didn’t work, and then I used a Radeon HD 4350. Both cards are pre-Windows 8; however, they do have Win 8 drivers available.
Working motherboard settings (Advanced -> Chipset Configuration) are as follows:
Primary Graphics Controller -> PCI Express
VT-d -> Enabled
IGPU Multi-Monitor -> Disabled
After booting the ESXi host, I could see the card, as well as the onboard NIC, under advanced settings in the VMware client. After adding a device to passthrough, a reboot of the ESXi host is required.
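I did all of this through the client UI, but the same check can be done from the ESXi Shell if you have SSH enabled. A rough sketch (the grep pattern just matches the card I happened to use):

```shell
# On the ESXi host (SSH), list PCI devices and look for the GPU.
esxcli hardware pci list | grep -iE -B 2 -A 6 'radeon|display'
# After toggling passthrough in the client, the host needs a reboot:
reboot
```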
I added it to the VM and booted the VM. I could not see the card in Device Manager, nor did Win 10 install any drivers for it. Apparently, you cannot see driverless passthrough hardware in Device Manager the way you would on a physical PC. You just have to install the driver blindly: I downloaded the Win 8 drivers and installed the device. The screen flickered, and I could see the card installed.
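For reference, when the client adds a passthrough device it writes entries like these into the VM’s .vmx file. The values below are placeholders for illustration (0x1002 is the AMD/ATI vendor ID; the device ID and PCI address depend on your card and slot), and the exact keys may vary by ESXi version:

```
pciPassthru0.present = "TRUE"
pciPassthru0.vendorId = "0x1002"
pciPassthru0.deviceId = "0x954f"
pciPassthru0.id = "02:00.0"
```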
Now you can plug the monitor into the card itself and output will come through it. Windows will detect 2 monitors: one is the VMware console and the other is the graphics card. You can also use it as a typical 2-monitor setup; I just removed the other one. You can keep using the VMware client console to work on the PC.
VIDEO_TDR_FAILURE on guest OS Win 10 64bit
I tracked this issue to a motherboard setting: Primary Graphics Controller -> Onboard.
At first it seemed logical to me that the onboard graphics would serve the ESXi host while the PCIe card served the guest. The setting was accepted, but Windows would crash with this message; ESXi remained alive when the VM died. A lot of people blame it on drivers, but I am not sure. Windows is only stable when the onboard graphics is disabled.
This has consequences: you have no console screen for your ESXi host, only the network client.
ESXi Pink Panic Screen.
This baffled me: after booting, with the guest working, the host would die because of some issue. The VM would not report any crash, and I could not capture what it was, as I have no console screen for ESXi. I had no intention of enabling the shell and fetching logs, so as a first resort I updated the motherboard BIOS from 1.0 to 1.5. That made the system stable, with no crashes…so far. The release notes do say “improves system stability”.
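If you do want the crash details without a physical console, the host logs can be pulled over SSH once the host is back up. A generic sketch (I chose the BIOS update route instead):

```shell
# On the ESXi host (SSH), look at the kernel log from before the crash:
less /var/log/vmkernel.log
# Or generate a full support bundle, which packages logs and any crash dump info:
vm-support
```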
The Asus GeForce 210 did not work, as mentioned earlier. I installed drivers, and the system would see the card but with error code 43. I tried a lot but could not get it to work. As some blogs suggest, an AMD Radeon is the better choice.
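One tweak commonly suggested on forums for NVIDIA code 43 under passthrough, which I cannot confirm fixes the GeForce 210, is hiding the hypervisor from the guest by adding a line to the VM’s .vmx file:

```
hypervisor.cpuid.v0 = "FALSE"
```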
Disabling the onboard graphics leaves you with no console. You can see the initial ESXi messages, but eventually passthrough kicks in and ESXi stops updating the console. The following is the last message you will see; ESXi will boot and you can access it from the client.
I have a working system, and I have now been using it for more than 6 months. I have added SSDs for both the host and guest systems.
Passthrough USB controllers
USB controllers are working perfectly. So far, it has been worth the effort, as I seamlessly run multiple systems on one host.
I should mention that I am not using the onboard USB controllers; these are USB PCIe cards.