GPU Passthrough Guide

This guide covers how to set up GPU passthrough using Arch and Nvidia. I originally wrote this guide on reddit but decided to put it here in case that post gets removed.

First, I’d like to show you the results of this guide. Here’s a firestrike run using gpu passthrough.

This is meant to be a start-to-finish, "holy shit this actually works" guide, and it's another lengthy post because there's a lot to cover, so stick with it and you'll be happy you did. This post covers UEFI-capable hardware only, because every GPU made in the last few years has it. I believe Nvidia implemented a UEFI bios in the 600 series cards, and some of you may need to flash the bios for that support; all of you 500 series owners looking to pass those through will need to refer to the previous post, where you want the q35 bios method. I'm unsure when exactly AMD implemented support, so do your research. I'm also covering Intel/Nvidia exclusively in this post because I don't own any AMD hardware. Everyone else can continue reading.

My Hardware Config

CPU: i7 4790k

Our primary concern is VT-d support. This is our bread and butter tech that allows us to pass through the GPU.

Mobo: MSI z97a Gaming 7

It's a bit of an upgrade for me because my g1.sniper h6 was giving me fits when I upgraded to 32GB of ram. The manual doesn't say the MSI supports VT-d specifically, but it does say it supports "virtualization technology", which is what we're after.

GPU: 1 x EVGA GTX 970 FTW, 1 x ASUS STRIX GTX 980ti

I got a fancy 1440p 144hz monitor and the 970 wasn't cutting it, so I picked up a 980ti. You do not need expensive cards like these to achieve this; the 980ti supports UEFI, and that's all we're after in this post. I've done this with everything from a gtx 260 and a 550ti to the 970 and now the 980ti.

Storage: 1 x Intel 730 240GB SSD, 4 x 1TB WD in Raid 10

I changed my storage up a bit for peace of mind reasons. I’m now running windows off of a qcow2 container rather than a physical hdd. You can do either one.

RAM: 32GB of ddr3 1600

I simply have 32GB because I run a lot of VMs for work and emulate an enterprise environment. Generally Linux doesn't require much RAM and you can get by with 8-12GB easily.

Monitors: 2 x 1920×1080, 1 x 1440p

The gist of monitor setups with this is to wire everything twice: you need one cable going from each GPU to each monitor. If your monitor only has one input you can buy a switch for whatever connection you need. With a switch you can either leave both outputs enabled all the time, or turn the linux output off with xrandr and let the switch fail over automatically to the second input. If you don't have a switch, I'd still recommend using xrandr, because manually switching the monitor's inputs every time gets annoying.
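For example, assuming a hypothetical output name of DVI-D-0 (run xrandr with no arguments to list yours), the failover dance is just two commands:

xrandr --output DVI-D-0 --off
xrandr --output DVI-D-0 --auto

The first line makes the switch (or monitor) fail over to the passthrough GPU's input; the second re-enables the linux output at its preferred mode when you're done.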


Setup and Installation

I break things constantly on my system, so I'm using Antergos to install Arch. I put my /home on the RAID 10 and just change /etc/fstab if I need to reinstall, so I never lose anything important. Pretty much any offshoot of Arch, or vanilla Arch itself, should work just fine for this post; I have also helped people do this on Debian and derivatives, but for this guide we will be sticking to Arch. I'm using grub2 as my bootloader with UEFI.

The first thing we need to do is enable VT-d and make sure it's functioning. Edit /etc/default/grub, look for the line that says GRUB_CMDLINE_LINUX_DEFAULT="" and append "intel_iommu=on" to what is inside the quotes. Mine looks like this:

GRUB_CMDLINE_LINUX_DEFAULT="resume=UUID=6771936b-06b6-493c-b655-6f60122f5228 intel_iommu=on"

Once you've done that we need to rebuild our grub.cfg file. Run this to do so:

sudo grub-mkconfig -o /boot/grub/grub.cfg

Next we reboot to activate IOMMU/VT-d. Once you're back in Arch we need to verify that VT-d is enabled and functioning. First we need to identify the pci-e bus of the GPU we're passing through; run "lspci -nnk" to find this information. Here are the lines important to me:


02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM200 [GeForce GTX 980 Ti] [10de:17c8] (rev a1)
Subsystem: ASUSTeK Computer Inc. Device [1043:8548]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fb0] (rev a1)
Subsystem: ASUSTeK Computer Inc. Device [1043:8548]
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel

My 980ti is in the second pci-e slot on my motherboard, so this is correct. 02:00.1 is the audio bus for the card and is also important later on. Next we need to see whether the 980ti falls into another pci-e device's iommu group and whether that will conflict. To find out, look in /sys/bus/pci/devices/YOUR_BUS/iommu_group/devices/. If you do not have an iommu_group folder then VT-d was not enabled properly! Here is the output for mine:

 [kemmler@arch ~]$ ls -lha /sys/bus/pci/devices/0000\:02\:00.0/iommu_group/devices/
 total 0
 drwxr-xr-x 2 root root 0 Sep 19 18:47 .
 drwxr-xr-x 3 root root 0 Sep 19 18:46 ..
 lrwxrwxrwx 1 root root 0 Sep 19 18:47 0000:00:01.0 -> ../../../../devices/pci0000:00/0000:00:01.0
 lrwxrwxrwx 1 root root 0 Sep 19 18:47 0000:00:01.1 -> ../../../../devices/pci0000:00/0000:00:01.1
 lrwxrwxrwx 1 root root 0 Sep 19 18:47 0000:01:00.0 -> ../../../../devices/pci0000:00/0000:00:01.0/0000:01:00.0
 lrwxrwxrwx 1 root root 0 Sep 19 18:47 0000:01:00.1 -> ../../../../devices/pci0000:00/0000:00:01.0/0000:01:00.1
 lrwxrwxrwx 1 root root 0 Sep 19 18:47 0000:02:00.0 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.0
 lrwxrwxrwx 1 root root 0 Sep 19 18:47 0000:02:00.1 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.1

0000:00:01.0 and 0000:00:01.1 can be ignored. The issue is 0000:01:00.0 and 0000:01:00.1: these are my 970 and they will cause GPU passthrough to fail unless I pass both cards through to the VM. If, apart from entries like 0000:00:01.# (which can be ignored), you only see your own card's functions (0000:0#:00.#) in your output, then your iommu group is clean and you can skip the next section.
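If you'd rather audit every group at once instead of checking device by device, here's a quick sketch that should work on any kernel exposing the /sys/kernel/iommu_groups tree; the dmesg line just confirms that VT-d/DMAR actually came up:

dmesg | grep -e DMAR -e IOMMU
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do lspci -nns "${d##*/}"; done
done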


Fixing IOMMU Grouping

I installed Antergos with AUR support, which provides a tool called yaourt. A very thoughtful member of the Arch community has been steadily providing a current kernel patched with a few fixes that deal with IOMMU groups and other issues. This package is called "linux-vfio". Assuming you have yaourt installed, you will run this:

yaourt -S linux-vfio

It will ask if you want to edit a few things; you can say no. It will then ask if you want to only install linux-vfio. Say NO so it will also install the docs and headers for the kernel. Proceed through the installation with common sense. Once you've installed it, rebuild your grub.cfg as above with "sudo grub-mkconfig -o /boot/grub/grub.cfg".
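After you reboot into the new kernel later on, you can double-check that you're actually running linux-vfio rather than the stock kernel; the version from uname should line up with the linux-vfio package version, not the plain linux package:

uname -r
pacman -Q linux-vfio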


Skip if using Nouveau

If you installed the "nvidia" package from the arch repo, your driver will probably break in the new kernel. I simply installed the binary from nvidia's website. A simple way to do that against the new kernel: download the binary, reboot, edit grub by pressing "e" with linux-vfio selected, and append "nomodeset systemd.unit=multi-user.target" to the end of the long line that begins with "linux" so you do a one-time edit of the boot parameters. Then navigate to the binary and run "sh NVIDIA-###…sh". It should disable nouveau if needed and install. Reboot and continue. You may also want to look into nvidia-dkms if you plan on updating your linux-vfio kernel regularly.
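Condensed, that dance looks roughly like this; the installer filename is a placeholder for whatever version you actually downloaded:

# at the grub menu: highlight linux-vfio, press "e", and append to the "linux ..." line:
#   nomodeset systemd.unit=multi-user.target
# boot, log in on the text console, then:
sudo sh NVIDIA-Linux-x86_64-XXX.XX.run
sudo reboot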


Once you're in the linux-vfio kernel you need to enable the acs_override patch. The easiest way is to just use the downstream method; there are other options but they should not be necessary. We will add "pcie_acs_override=downstream" to our GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, then run "sudo grub-mkconfig -o /boot/grub/grub.cfg" once again to rebuild grub.cfg.

GRUB_CMDLINE_LINUX_DEFAULT="resume=UUID=6771936b-06b6-493c-b655-6f60122f5228 pcie_acs_override=downstream intel_iommu=on"

Reboot and then check your iommu group again.


[kemmler@arch ~]$ ls -lha /sys/bus/pci/devices/0000\:02\:00.0/iommu_group/devices/
total 0
drwxr-xr-x 2 root root 0 Sep 19 19:11 .
drwxr-xr-x 3 root root 0 Sep 19 19:11 ..
lrwxrwxrwx 1 root root 0 Sep 19 19:11 0000:02:00.0 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.0
lrwxrwxrwx 1 root root 0 Sep 19 19:11 0000:02:00.1 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.1


0000:01:00.0/1 are now missing from the initial output. This is exactly what we want to see: the 980ti is now in its own IOMMU group and can be allocated to a VM by itself. We're now ready to move on.


Setup Continued

The next thing we need to do is blacklist the GPU we're passing through to the VM so that the Nvidia driver doesn't try to grab it. Nvidia is a dick and doesn't conform to standards properly: you can't easily unbind a gpu from the Nvidia driver, so we use a module called "pci-stub" to claim the card before nvidia can. We achieve this by putting pci-stub in our initramfs, which loads its modules before the regular drivers, and passing a parameter to grub telling it which devices to claim. So, edit "/etc/mkinitcpio.conf" and add "pci-stub" to the MODULES="" section like so:

MODULES="pci-stub"

If you’re running the stock kernel use this to rebuild your initramfs

sudo mkinitcpio -p linux

linux-vfio users run this

sudo mkinitcpio -p linux-vfio

Now we edit grub once again and add the pci-stub.ids option to bind our card to pci-stub. Get your device IDs from "lspci -nnk". My IDs are "10de:17c8" and "10de:0fb0", as seen here again:

02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM200 [GeForce GTX 980 Ti] [10de:17c8] (rev a1)
Subsystem: ASUSTeK Computer Inc. Device [1043:8548]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fb0] (rev a1)
Subsystem: ASUSTeK Computer Inc. Device [1043:8548]
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel

Edit /etc/default/grub and then run "sudo grub-mkconfig -o /boot/grub/grub.cfg". Your line should look similar to this. Remember that you need to pass everything in the IOMMU group, so you need the main card and the audio bus if it's there.


GRUB_CMDLINE_LINUX_DEFAULT="resume=UUID=6771936b-06b6-493c-b655-6f60122f5228 pcie_acs_override=downstream intel_iommu=on pci-stub.ids=10de:17c8,10de:0fb0"

Reboot. If everything goes to plan you should now see "pci-stub" as the driver in use in your "lspci -nnk" output for the card:

Kernel driver in use: pci-stub
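If you don't feel like scrolling through the full lspci output, you can ask for just your card by bus ID (mine shown, substitute yours):

lspci -nnk -s 02:00.0 | grep "Kernel driver in use"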

You’re all set. Now we can move on to installing the core software.


Setup KVM/QEMU

We need to install qemu first.

sudo pacman -S qemu

Next we need the UEFI bios called OVMF. Look here and get the edk2.git-ovmf-x64-###.noarch.rpm file. Install rpmextract:

sudo pacman -S rpmextract

Next we'll extract the files and move them over, preserving the layout they shipped with.

[kemmler@arch ovmf]$ ls
edk2.git-ovmf-x64-0-20150916.b1214.g2f667c5.noarch.rpm
[kemmler@arch ovmf]$ rpmextract.sh edk2.git-ovmf-x64-0-20150916.b1214.g2f667c5.noarch.rpm
[kemmler@arch ovmf]$ ls
edk2.git-ovmf-x64-0-20150916.b1214.g2f667c5.noarch.rpm  usr
[kemmler@arch ovmf]$ sudo cp -R usr/share/* /usr/share/
[kemmler@arch ovmf]$ ls /usr/share/edk2.git/ovmf-x64/
OVMF_CODE-pure-efi.fd  OVMF_CODE-with-csm.fd  OVMF-pure-efi.fd  OVMF_VARS-pure-efi.fd  OVMF_VARS-with-csm.fd  OVMF-with-csm.fd  UefiShell.iso

Now we need to create our vfio-bind script, which will replace the pci-stub placeholder driver with vfio-pci. I suggest putting it in /usr/bin/vfio-bind and then running "chmod +x /usr/bin/vfio-bind":


#!/bin/bash
# Bind every PCI device given as an argument (e.g. 0000:02:00.0) to vfio-pci.

modprobe vfio-pci

for dev in "$@"; do
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    # unbind from the current driver (pci-stub in our case), if any
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    # tell vfio-pci to claim this vendor:device id
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done

Next we bind our gpu. Remember to bind the whole gpu if necessary, which means both buses if present. Replace the pci bus with yours:

sudo vfio-bind 0000:0#:00.0 0000:0#:00.1

Verify that vfio-pci is now in control of the gpu with "lspci -nnk":

Kernel driver in use: vfio-pci

Now we can test it out and see if it works! Make sure to verify the paths are correct, change the pci bus ID, and remove the second pci bus line if your card only has one. Throw this script in a file and run it like you did with the vfio-bind script. Once you do that you should be able to switch the input on your monitor or KVM switch and be greeted with a black terminal with yellow text. This is the UEFI shell and means that everything is working wonderfully!


#!/bin/bash

cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-cpu host,kvm=off \
-vga none \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd

Setting Up Windows

Now that we have video out, all we need to do next is change our script up and install windows to a disk or to a qcow2 container. I'll be doing the latter but can help with the former. You need a windows iso; I'm using windows 10 enterprise. You also need to get the VirtIO drivers from redhat. You can download them here; I am using the "Latest virtio-win iso" but stable should also be fine. First let's create our qcow2 container. I'm using a few params to increase performance; if you have any tips on better methods I'd be glad to hear them. Command below. Change 120G to whatever size you want your container to be.

qemu-img create -f qcow2 -o preallocation=metadata,compat=1.1,lazy_refcounts=on win.img 120G
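You can sanity-check that the container was created with the options you asked for using qemu-img info:

qemu-img info win.img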

Next we modify our script to include the windows iso, the virtio iso, and our new container. Once again, verify all path names. I'm also passing my usb keyboard through to the guest; this is the line that begins with -usb. Find your usb device IDs by using "lsusb". I'd recommend not passing through your mouse just yet, so that if you fuck up you can simply close the black qemu window that pops up to get your keyboard back. Alternatively, just hook up a second keyboard if you have one; I always keep a spare around in case windows hangs. Notice I'm using writeback cache on my qcow2 image. Remove that if you do not want it.


#!/bin/bash

cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-cpu host,kvm=off \
-vga none \
-usb -usbdevice host:1b1c:1b09 \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-device virtio-scsi-pci,id=scsi \
-drive file=/home/kemmler/kvm/win10.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
-drive file=/home/kemmler/kvm/win.img,id=disk,format=qcow2,if=none,cache=writeback -device scsi-hd,drive=disk \
-drive file=/home/kemmler/kvm/virt.iso,id=virtiocd,if=none,format=raw -device ide-cd,bus=ide.1,drive=virtiocd

Once you boot you should see "press any key to boot from cd…"; if you miss it you'll eventually get dumped into a Shell> prompt. Just type "exit", hit enter, and navigate to "Boot Manager". Select the first SCSI device and you should get the prompt again. Go through and install windows as you normally would. When you get to the disk selection screen it will prompt you for a driver: navigate to the virtio iso, the vioscsi folder, then Windows 8.1, then x64, assuming you're using windows 10 x64 like me. Go through the rest of the install and then shut the VM down when done. Next we're going to get sound working, add our mouse to the usb passthrough, and set our monitors to switch automatically with xrandr.

Run "xrandr" to find your output device names; they correspond to your connection types, e.g. DVI-D-0. I'm going to be using two monitors while in windows and leaving one for linux to keep conky running to monitor the system. The two monitors I'll be switching are called "DVI-I-1" and "DVI-D-0". I'm also changing the cpu values to match my 4790k and my ram to 8GB. Note the mode and the rate on the xrandr commands: these refer to the resolution and refresh rate respectively.


#!/bin/bash

xrandr --output DVI-I-1 --off
xrandr --output DVI-D-0 --off
cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
export QEMU_PA_SAMPLES=128 QEMU_AUDIO_DRV=pa
qemu-system-x86_64 \
-enable-kvm \
-m 8192 \
-smp cores=4,threads=2 \
-cpu host,kvm=off \
-vga none \
-soundhw hda \
-usb -usbdevice host:1b1c:1b09 -usbdevice host:046d:c07d \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-device virtio-scsi-pci,id=scsi \
-drive file=/home/kemmler/kvm/win10.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
-drive file=/home/kemmler/kvm/win.img,id=disk,format=qcow2,if=none,cache=writeback -device scsi-hd,drive=disk \
-drive file=/home/kemmler/kvm/virt.iso,id=virtiocd,if=none,format=raw -device ide-cd,bus=ide.1,drive=virtiocd
xrandr --output DVI-D-0 --mode "2560x1440" --rate 60 --left-of HDMI-0
xrandr --output DVI-I-1 --mode "1920x1080" --rate 144 --left-of DVI-D-0

From here you should be able to install the nvidia driver, steam, etc. You’ll probably need to reboot to get the nvidia driver working. Once that’s done everything should be great.


Common Questions/Problems and Final Thoughts

  1. Can I SLI in the guest VM? Short answer: probably not. Neither of my lga 1150 motherboards will allow me to try.
  2. Can I use two identical GPUs with one in each OS? You will run into problems binding just one with pci-stub if both gpus share the same identifier in "lspci -nnk"; pci-stub will bind both of them and the nvidia driver will cease to function in linux. A potential workaround is to use xen's pciback module, which lets you grab a card by pci bus rather than device id, but I haven't tried this (see the sketch after this list).
  3. Sound isn't working! I'm using pulseaudio and the hda device. Honestly, you'll have to experiment. There's a 200+ page forum post with people posting working configs here.
  4. The guide is too long or not easy enough. This guide isn't for you then.
  5. Can I use X device? Generally you can pass through any pci or usb device. If you want an actual answer, short of finding someone on google who claims it works, the only way would be to donate the part to me so I can figure it out.
  6. My windows ISO won't boot! Make sure you're using an unmodified copy. I'm not positive, but a lot of the dual-purpose and hand-crafted ISOs don't include bootx64.efi and I believe that's the cause. I can confirm that a clean version of Windows 10 Enterprise x64 boots just fine.
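For question 2, the pciback workaround would be a kernel parameter along these lines. This is untested here, the bus addresses are hypothetical, and if pciback is built as a module you'd set the hide option through modprobe configuration instead of the kernel command line:

xen-pciback.hide=(0000:01:00.0)(0000:01:00.1)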

Hopefully you found this guide useful. I'm sure it seems like a gigantic pain in the ass, but really, once you set it up you don't have to mess with it again. I cannot stress how useful it is to simply be able to boot a VM, play what I want, and then turn it off, without having to close all my shit by dual booting constantly. So let me know if this guide helped or how I can improve on it!

68 thoughts on “GPU Passthrough Guide”

  1. Nate February 9, 2016 / 5:04 pm

    I’m loving the new guide man, amazing.

    I hate to sound ungrateful but have you ever considered doing the guide with a non-rolling release distro, like Ubuntu or Fedora?

    • Nate February 9, 2016 / 5:26 pm

      Another question: you mention older cards using a q35 BIOS setup, so I was wondering, does that mean UEFI setups don't emulate the chipset?

      And if they do, then because you didn’t specify q35 isn’t QEMU defaulting to the older i440fx chipset?

  2. Richard February 9, 2016 / 9:03 pm

    Hi. Firstly, great guide; it is easy to follow and helps make a very difficult task much easier, although I do have trouble with my own set-up. When I try to test vga pass-through my monitor back-light comes on briefly, then it indicates it is receiving no signal and goes into power save mode. After boot it is just a black screen with a white dash in the top left corner, so I know something happens. I also get a small qemu window on my main screen that just has a qemu prompt.

    I have a GTX 970, Z170 Extreme4+ and an i5 6600k CPU. The card is being bound to the vfio-pci driver without issue. Something I have done differently to you is having this card in my first slot; going by your bus IDs you have yours in the second. Will this cause my problem?

    I also get the message ‘-device vfio-pci,host=01:00.0: VFIO 0000:01:00.0 BAR 3 mmap unsupported. Performance may be slow’ and wasn’t sure if you had come across that before and knew if that was related to my main issue.

    Thanks for any help you can give.

    • tully February 24, 2016 / 4:07 am

      Double check your iommu grouping and you can switch your cards around if your bios lets you pick the pci boot device.

  3. Richard February 12, 2016 / 10:38 pm

    Hello, Just thought I would mention that I was able to get it working by switching the graphics cards around, it also removed that error message as well.

    I did have some trouble with getting any .iso files to boot at all but it turns out that the iso’s I was using were not UEFI bootable ones 🙁 so I had to slightly change how I do things but now it all works great.

    Thanks for the guide

    • tully February 24, 2016 / 4:07 am

      Glad you got it working!

  4. Shawn / Glyph February 13, 2016 / 1:15 am

    Thanks, bookmarked, will status update when accomplished.

  5. Dave February 21, 2016 / 3:32 pm

    Finally got this working today. Thanks for the guide :)!

    • tully February 24, 2016 / 4:07 am

      You’re welcome!

  6. Richard March 12, 2016 / 10:58 am

    I recently ran into some big problems with this set-up: basically Nvidia put some rather more serious VM detection into their drivers, and I was running into an error 43 in the device manager. I would get a basic display but locked at 800×600.

    To solve this there is a kvm option ‘hv_vendor_id=Nvidia43FIX’ and bam everything works like a charm again.

    So my whole -cpu line looks like: -cpu host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=Nvidia43FIX

    Hope this helps someone.

  7. spinningD20 March 14, 2016 / 2:49 am

    Thanks for this. It’s one of those things I didn’t know was possible on a small scale like this, and then I stumbled over your reddit post before I was about to clear off my desktop to configure a raid array (lamenting over installing windows just for the few games I play on it for steam streaming). Great write-up, thanks again!

    • tully March 23, 2016 / 5:20 am

      Happy you liked it.

  8. Scott March 17, 2016 / 5:15 pm

    Thank you so very much for the guide. Any chance we could get you to add a brief mention of two variants: One with an installation to disk (you did say you could help, grin), and installation of Windows from the original DVD – I’m assuming more of us have one of those than an ISO. Thanks again!

    • tully March 23, 2016 / 5:24 am

      Sure. If you’re wanting to install from a dvd you can change the win10.iso to read /dev/cdrom or whatever your cdrom’s /dev is like so

      -drive file=/dev/cdrom,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \

      If you want to install to a physical hdd you can simply change win.img to /dev/sda assuming that’s the intended disk. fdisk -l to list your disks. Then change format=qcow2 to format=raw. Should look like this

      -drive file=/dev/sdb,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
  9. Richard March 24, 2016 / 8:39 pm

    Moving on a bit from this guide I decided that I also wanted to set-up a Linux VM with full graphics performance for games that work in Linux 🙂

    In theory it should have been simple but on boot no matter the distro I would receive a sudden black screen then no output at all and would have to close my guest from the host.

    This would happen rather quickly and just before crashing would see a black screen with some purple / green lines in a thinish bar (maybe 25-50 pixels big) across the screen and then the monitor would stop receiving a signal.

    I found the solution: you need to add the kernel parameter 'nomodeset' before it boots and it works like a charm. I found this courtesy of this askubuntu post: http://askubuntu.com/questions/613969/acpi-ppc-probe-failed-starting-version-219-nvidia. Don't know if this is specifically related to Nvidia devices or not.

    Thought I would post in case someone else had the same problem. Now the only hard part is deciding what distro to go with.

    Thanks again tully, without any of your guides on this subject I would still be stuck in the dark ages of Windows.

  10. Stephen April 9, 2016 / 12:24 am

    Quick question: I got to the part where I need to fix my IOMMU grouping, but I don’t run arch. I use ubuntu gnome, so I can’t get yaourt. What would I need to do to get this to work?

    Also, would there be any differences in doing this with AMD cards?

    (Sorry if I’m being retarded. I’m still a noob to linux)

  11. thibaut noah April 10, 2016 / 6:36 pm

    Hey mate, did you try playing a bit with cset and all?
    Also with the same material configuration (cpu, gpu and motherboard) you own me on the physics score (http://www.3dmark.com/fs/8159184)
    What is your ram frequency if i may ask?

  12. Mark April 11, 2016 / 2:13 am

    This is really a fantastic guide and I've really enjoyed following along with it. I've recently put together a new box using the i7-6700k processor and my passthrough card is a 980ti. My guest is Win 10 (not from an upgrade), but I have issues using the guest for any length of time. Within 5 minutes of logging into Win 10 it freezes. The mouse can still be moved but nothing else can be done; the HDD light on the box goes blank and it will sit like that until I kill the QEMU session. Have any idea on where to start debugging a problem like that?

    • Mark April 11, 2016 / 2:40 am

      I believe I fixed my problem. Not surprisingly, it has nothing to do with running Win10 as a VM… It’s Nvidia’s driver, there is a power setting that enforces an “adaptive” power management scheme. I was getting freezes non-stop. Haven’t had one since changing this parameter.

      • Honigmelone August 30, 2016 / 7:53 pm

        Hey nice you got it working. I plan to buy a new pc to do this too and I was wondering which motherboard you are using. What are your long term results? Thanks!

        • tully September 21, 2016 / 2:44 pm

          My motherboard is currently an MSI z97a Gaming 7. My longterm results are the same as my initial. Everything still works fine and it all works well.

  13. Randy April 22, 2016 / 6:45 pm

    MODULES=”pci-stub” should be MODULES=”pci_stub”

  14. Randy April 22, 2016 / 6:57 pm

    Disregard last. I thought it was a problem but instead my mkinitcpio.conf had multiple MODULE lines which clobbered each other.

  15. Conscious Exaltation April 27, 2016 / 2:34 am

    Whenever I do:

    sudo grub-mkconfig -o /boot/grub2/grub.cfg

    I get:

    /etc/default/grub: line 1: GRUB_DEFAULT=0: command not found

    What am I doing wrong?

  16. Skeer April 29, 2016 / 5:40 pm

    I am loving this guide.. I’ve been having a very rough time the past two weeks trying to get a virtualized Windows gaming rig set up. I really hope you can help me out with one tiny problem so far with your guide.

    I get to the first minimal script that verifies the display is working, which was successful. I changed the source on my lcd and voila! There is yellow/grey text from the UEFI boot.

    I then create the qcow2 container, then modify the above script to include paths to the ISOs and the image file. I save and then run the script, only to be greeted with a system-wide lock up. The alternative display signal never lights up; I even routed it to another monitor. No signal. The keyboard and mouse are both unresponsive.

    I've verified the script time and time again. I do not know what's happening. Any help would be appreciated!

  17. mrcs May 29, 2016 / 9:08 pm

    Thx for the nice HowTo. This is my first attempt at Arch Linux after I tried it on Lubuntu some months ago.

    My hardware setup is a little different and that's why I am stuck, I guess.

    I use the intel gpu of my i7 for my linux desktop and will use my gtx 780ti for the vm only.

    I got the pci-stub placeholder claiming my NVIDIA devices, but now I am wondering if I have to use this vfio stuff. And if not, how do I proceed?

  18. Stuart June 25, 2016 / 11:40 pm

    Hello. I am having an issue with passing my gpu through.
    The device is claimed by pci-stub at boot, and then vfio-pci after binding. Then when I run the initial test script I get no output on my 2nd monitor. The Qemu console is not showing any errors either.

    When I use `info pci` in the qemu monitor my gpu shows up (both the vga and audio controller). If I remove the `-vga none` flag I get output from the bios saying there is nothing to boot, and it eventually gets to the uefi bios shell with the yellow text. My GPU also shows up when running the pci command in the uefi bios when the `-vga none` is left out. So it seems like it has been passed successfully to the VM, but I am not getting any output.

    I also have my mobo configured to use the integrated graphics, and ignore other ones for booting. The secondary monitor does not come on while booting, and the IGPU shows the mobo splash screen in case that is relevant.

    I saw something similar somewhat in the reddit post, but the only suggestion was to switch inputs on the monitor which wasn’t the issue for them or myself since we had them plugged into separate monitors.

    System Info:
    ubuntu mate 15.10
    i7-6700k (using IGPU for host)
    R9 270 Graphics card (same chip as r7 370 to my knowledge)
    I have the host (integrated GPU) and the R9 plugged into separate monitors.

    • tully August 25, 2016 / 1:38 am

      Try another or an official windows iso. You can boot using an Ubuntu iso and isolate the issue. If Ubuntu boots then it’s the windows iso.

  19. R3P1N5 July 6, 2016 / 4:45 am

    Hey, I’m interested in doing this, however in my quest, I have a question about this image: http://i.imgur.com/UnXpIJn.png. Is that Windows instance being displayed through the virtual video card, or is the dedicated video card supporting that image (it only makes sense to me that it’d be the virtual video device).

    Thanks for your tutorial, hopefully it saves me some headaches.

    • tully August 25, 2016 / 1:36 am

      You see two monitors in the device manager and two display adapters. I'm emulating a monitor as well as using a physical monitor. You can emulate a monitor using "-vga qxl" as a qemu parameter.

  20. Trollwut August 9, 2016 / 2:56 am

    Hey mate! I'm new to the idea of GPU passthrough and want to test it. Unfortunately, I only have one dedicated GPU.

    Is it possible to make this magic happen with 1x dedicated GPU and 1x integrated GPU?

    I guess I can take it from there on, just want to know if I should even start to begin with. 🙂

    Already read through the guide and have to say that it’s easily understandable for me. Nice work there, mate!

    • tully August 25, 2016 / 1:33 am

      Yes, though you will need the vga arbiter patch on your kernel.

  21. Galun August 16, 2016 / 10:37 am

    Thanks a lot for the very helpful guide. I ran into a problem after the final step, for some reason all of the Windows versions I tried froze after install on first boot. I found out that it was because I’m using the GTX 1080.

    To solve it, remove the lines that load the OVMF/UEFI bios: the two lines that start with "-drive if=pflash". You can also remove the cp line at the start of the script.

    Then go here https://sourceforge.net/projects/edk2/files/OVMF/ and download the one appropriate for your system, for me it’s x64 so I can’t verify if the other one works. I used the stable release and not the alpha. Extract it and copy the file that ends in .fd, rename it bios.bin and move it to /usr/share/qemu/

    Then add the line “-bios /usr/share/qemu/bios.bin \” to the script, I put it right under the “-vga none” line.

    Now everything works for me; I can even pass through the HTC Vive by adding the USB devices the same way you add the mouse/keyboard.

    My first time setting up something like this, it took me a while to find the solution so I hope this helps someone.

    • tully August 25, 2016 / 1:33 am

      Thanks for the post. Glad you got it working!

    • x99 August 25, 2016 / 6:54 am

      Thanks! I was looking to see if anyone had success with a gtx 1080, this will come in handy

  22. Daflo August 24, 2016 / 8:46 pm

    Hello,
    I found your really interesting guide via reddit. Since this seems to be quite complex and time-consuming to set up, I would appreciate it if someone who has this already up and running could answer two questions:

    1. Does the passthrough-gpu draw any power when the VM is not running ( = only linux is active)? Is its fan running? How does linux treat such a “deactivated” gpu?

    2. How easy is it to reactivate the passthrough-gpu for use in linux? Can you easily switch between using the gpu in linux and using the gpu in the VM? Maybe even without a reboot?

    Thank you very much for your response!

    • tully August 25, 2016 / 1:32 am

      1. The second GPU will only act as any other pci device you plugged in but never did anything with.

      2. You never have to reboot linux. You can simply boot windows, then turn windows off to resume using linux.

  23. emcenrue August 31, 2016 / 2:59 pm

    Holy crap, thank you for this guide!

    • tully September 21, 2016 / 2:43 pm

      You’re welcome

  24. Anders Øen Fylling September 4, 2016 / 1:27 pm

    Really great article, thank you!

    I just have to ask a really quick question though, as I have an issue I'm unable to solve :/

    On the first script where you try to boot up windows, mine is identical (except the windows path and ids), but when I start the script I only get the yellow text image. Also, when I stop the emulation, I get the error: "/usr/bin/virtualization-windows-10: line 17: -device: command not found"

    Removing everything after -device creates more errors. Any idea what this means?
    My qemu-system-x86_64 version is 6.2.1.

    If I run "qemu-system-x86_64 -boot d -cdrom Win10.iso -m 2048" it boots; the difference is of course the -cdrom and a lot fewer of the options. I've started reading up on QEMU, but I'm curious if you have any idea why I can't boot windows and why I get "-device: command not found"?

    PS. Windows iso was downloaded directly from microsoft.

    • Anders Øen Fylling September 4, 2016 / 1:30 pm

      I forgot to mention my system!

      asus Z97,
      i7 4790K,
      Geforce gtx970,
      Geforce gtx970.

      I'm passing both GPUs through (if I blacklist one, I'll blacklist the other anyway). And I'm using my integrated graphics for the HOST arch linux. The video cards work, as I can see the image created by QEMU 🙂

    • tully September 21, 2016 / 2:43 pm

      My initial guess is that the line before the -device line doesn’t have a \ to define a multi-line statement. If you post the full script I can help further.

  25. wax September 17, 2016 / 8:59 pm

    Hey tully,

    Thanks a lot for the guide. Following along with everything, I’ve managed to get this working on a GTX 980, a GTX 780, and a GTX 750 Ti. I’m even able to run two VMs at a time! (Though I haven’t tried hooking up 3 video cards, I need a riser cable for that.)

    The problem I'm having is that my sound is very stuttery/choppy/mechanical/slow. This seems to apply to any sound in Windows. Sometimes shorter sounds will play okay, but most sounds seem to get stretched out. In general, this also seems to have an effect on whatever is connected to the sound being played; for example, YouTube videos will play slowly. Games don't seem to have their performance affected (tested in League of Legends and Titan Quest), but their sound still suffers this distortion.

    There are a few guides I've found that seem to address this problem using MSI (message signaled interrupts), but they don't seem to be compatible with vfio-bind or some other part of this guide's PCI-hogging.

    I'm looking to get USB sound cards to temporarily fix this problem, but being able to fix it on the host without extra hardware would be strongly preferred. Any idea what the problem is?

    • tully September 21, 2016 / 2:41 pm

      Try this for your sound. Set these environment vars before the big qemu multi line

      export QEMU_ALSA_DAC_BUFFER_SIZE=512 QEMU_ALSA_DAC_PERIOD_SIZE=170 QEMU_AUDIO_DRV=alsa

      Then, change your sound line to this in your qemu args

      -soundhw ac97

      Once that’s done disable driver signing enforcement and install the realtek audio driver.

      That should hopefully clean up your audio

      • wax September 30, 2016 / 5:21 pm

        That seems to have done the trick! Thanks a lot!

      • Wax October 1, 2016 / 7:18 pm

        Update!

        After about 15 minutes in a youtube video (not sure about anything else), the stuttering starts to come back. If the video is fullscreen it can cause problems because the entire computer will start to stutter with it.

        I have noticed some mild stuttering will occasionally happen, too, if I start to do stuff on the host computer.

  26. KLIK September 26, 2016 / 9:44 pm

    I have wanted to do something like this on my desktop for ages, but I don't have a cpu with an integrated gpu. My question is: can this be done by running the Arch install headless while the VM is active, and then somehow killing the VM and giving the gpu back to Arch?

    • tully October 25, 2016 / 2:33 am

      What you’re talking about is called GPU hotswapping. I’ve seen people do this with Xen, and primarily AMD cards. Nvidia wants you to buy their quadro cards for this capability. However, I have read of people having mild success doing tricks with bumblebee.

  27. Hell October 2, 2016 / 6:19 pm

    Superb guide. Got it all working in a single day (it helps having almost the same hardware configuration as yours).
    Just one question: how are you doing networking?

    Because I am using SLIRP:
    -device e1000,netdev=user.0 -netdev user,id=user.0

    This gives me poor performance, and I cannot reach the guest from the host or vice versa, making it impossible to use synergy to share keyboard + mouse.

    I am curious, are you using TAP? if so, how did you get it working?

    Thanks in advance.

  28. lonesurvivor November 20, 2016 / 11:44 am

    I had NO idea that this is actually possible. Now I can finally switch completely to linux (dual boot was so annoying that I just used windows most of the time).

    Took like one or two days to get it working (including research for the solutions to several problems that occurred), but now it works. Well, the audio is still a bit of a problem, but my games are totally playable and I'm enjoying it.

    I’m running the setup on my Lenovo W520 Laptop, using the integrated nvidia card for the host and an external gtx 660 connected via the express card slot for windows.

    So thank you very much for your guide!

  29. Simple January 25, 2017 / 3:52 am

    This worked absolutely fantastically. For mine, I personally switched to an HDD for faster disk read/write, and switched to a bridged network adapter for slightly higher performance, but this guide got me there.

    Thanks!

  30. Daflo January 25, 2017 / 9:59 pm

    Hello all,

    according to the Linux kernel 4.10 release notes, there is going to be improved support for virtual pci devices for using graphics hardware in virtual machines.

    Has anyone looked into this yet? What does this mean for this passthrough guide? Does it make sense to continue using the method explained above, or might virtual pci devices be a better way going forward? Can anyone estimate the differences in performance for the virtual machines? Any possible disadvantages or benefits that the virtual pci device approach might bring?

    Thank you for your ideas and replies!

    • sitilge February 5, 2017 / 10:21 pm

      Hi,

      Yes, I'm going through the process right now, using the systemd bootloader + vfio-pci (instead of pci-stub). All in all there are some minor changes, but the concept stays the same.

      The only problem I've had so far (excluding some stupid ones, like switching to the correct monitor) is that my Windows 10 UEFI does not recognize the x86 virtio drivers but recognizes the AMD64 drivers. Despite that I am able to install the machine + nvidia drivers, but this fact is still confusing. I've made a SO question here and will be grateful for any help: http://stackoverflow.com/questions/42052814/windows-10-virtio-drivers-not-found-for-x86

  31. Pawnee Jbrauxx February 16, 2017 / 7:32 am

    Hi. In my case I have two discrete GPUs, one nvidia and one amd. I have had success isolating and blacklisting the nvidia one and passing it to guest machines, but I have no idea what to do with the amd one.

  32. phyper February 20, 2017 / 6:12 am

    Hi,

    Thanks for the guide, it has been very useful and great that the run script isn’t too long and convoluted like a couple of other guides out there.

    I did have to go with the opensource nvidia drivers before installing linux-vfio, then install the official nvidia binary including dkms from inside the vfio kernel. Doing it the way you described didn’t work for me.

    Aside from that and an issue with the windows iso I was initially using, everything was smooth. I managed to get Windows 10 installed.

    However, I do have a strange issue with the sound. For some reason, the sound from the VM is coming through the same output port as the Arch Linux host's. The audio on the host is fine, and my run script is set up with the right pci bus for the VM's gpu, so it's not a vfio set-up issue:

    -device vfio-pci,host=02:00.0,multifunction=on \
    -device vfio-pci,host=02:00.1 \

    Any ideas? Not sure if it’s an issue with pulse audio or alsa

    My set up is a home theater PC
    i7-4790K
    Asus z87 deluxe
    GTX 980ti (host)
    R9 390 (windows VM)
    AV Receiver to switch video between host and VM

    • phyper February 20, 2017 / 5:38 pm

      OK, so after a couple of hours playing around with my AV receiver settings and the HDMI cables, I can confirm that it is not the receiver either, but, as I initially suspected, something to do with the VM or pulse or hda settings.

      For the time being, I have plugged the R9 390 gpu I am using for the VM directly into the TV and kept the AV receiver pointed at the Linux host's hdmi.

  33. Ben February 27, 2017 / 6:29 pm

    I have been a huge fan of this blog entry since I found it almost half a year ago. Now is the time to work with it. But after reading it completely for the first time, I realized that you use a placeholder driver for NVIDIA cards.

    Did I understand correctly that this will completely prevent the Linux NVIDIA driver from communicating with the GPU? Which would mean losing the ability to play games natively on Linux?

    Thanks for your work! Thrilled to use it now 🙂

    • tully March 27, 2017 / 11:43 pm

      The pci-stub driver is simply used to capture the passthrough card, not any other cards. So, you can have a 980ti as your passthrough card and a 970 for linux gaming, or you can have two grub entries, one with the pci-stub and one without. However, the latter requires a reboot and that kind of defeats the purpose of this.

  34. icedfiend March 1, 2017 / 12:13 pm

    Has anyone ever tried to do this on a laptop with Optimus?

    Using the discrete GPU for the Windows Guest and the Intel GPU for the Linux Host.

    I might get my hands on a new laptop soon with this configuration and give it a try, but I was just curious whether anyone has ever tried it before.

    • tully March 27, 2017 / 11:41 pm

      It's been tried before, but success is varied.

  35. Snipes April 11, 2017 / 12:24 pm

    Dude, you are the freaking best!! I've followed SO MANY guides, including a 15+ page linux mint tutorial that didn't work. This is the easiest, best-explained tutorial that I have come across, and it actually freaking works! Your guide is amazingly explained and so simple. I can't thank you enough for sharing your knowledge and doing this guide. I would kiss you if I could!

    Thanks for your hard work!!

    • tully May 4, 2017 / 2:38 am

      You’re welcome

  36. René April 25, 2017 / 9:56 am

    Can you keep using your linux host while the windows kvm is running?

    • tully May 4, 2017 / 2:37 am

      yes

  37. Robert Washbourne May 23, 2017 / 3:27 am

    Hi there, thanks so much for the guide! How can I install the qemu container to a disk instead? Linux is running on my ssd and the hdd is just lying around.

    • tully May 26, 2017 / 3:36 am

      -drive file=/dev/sda,id=disk,format=raw

  38. Richard June 16, 2018 / 7:57 pm

    I had an issue with this set-up recently in that the VM would fail to initialise and give a USB error: "-usbdevice host:1532:000c: could not add USB device 'host:1532:000c'"

    After looking into this and seeing notices that -usbdevice was deprecated I found the solution was to update the usb entries to -device usb-host,vendorid=0x1532,productid=0x000c

    In essence you split the id we had before and add a 0x before each part.

    You also need to set -machine usb=on, and then it all seems to work again as desired.

    Hope this helps someone.
