Abstract

Open source virtualization technologies widely available in the Linux software ecosystem lack the easy-to-use graphical performance enhancements found in commercial virtualization products such as VMWare Workstation or VMWare vSphere/ESXi. Intel GVT-g is a virtual graphics acceleration technology that can be used through the QEMU virtualization system, an open-source alternative to those products. Intel GVT-g was configured on a Thinkpad X1 Generation 6 laptop with Intel integrated graphics, resulting in successful GPU acceleration on a UEFI Windows 10 64-bit guest without relying on proprietary software aside from the guest operating system itself. Working Intel GVT-g GPU acceleration makes substantially improved virtualization performance possible on Linux hosts.

Introduction

Computer users rely on software written for many different operating systems. Virtual machines allow users to run different operating systems simultaneously and switch between them easily. Virtualization offers benefits such as migrating installed systems to other physical machines with lower downtime, containing untrusted code in a sandbox that is difficult to escape from, keeping legacy systems running when their original hardware is obsolete, or simply running a Windows-only program on a Linux host.

Virtual machines with graphical user interfaces typically suffer from input lag and stuttering, both of which degrade the user experience. Additionally, computation-heavy software such as photo editing or engineering tools depends on efficient GPU access to finish calculations an order of magnitude or more faster than the host CPU alone. Unfortunately, not all virtualization solutions can leverage the physical chips on the host machine efficiently, regardless of cost.

GPU command architectures

  1. VGA Emulation (VE)
  2. API forwarding (AF)
    • Intel GVT-s
    • VMWare Virtual Shared Graphics Acceleration (vSGA)
    • Oracle VirtualBox 3D Acceleration
  3. Direct Pass-Through (DPT)
    • Intel GVT-d
    • VMWare Virtual Dedicated Graphics Acceleration (vDGA)
    • Not available in VMWare Workstation
  4. Full GPU Virtualization (FGV)
    • Intel GVT-g
    • VMWare Virtual Shared Pass-Through Graphics Acceleration (vGPU or MxGPU)
    • Not available in VMWare Workstation

The most primitive graphics display for any virtual machine is VGA Emulation (VE). This mode is also the most inefficient. QEMU emulates a Cirrus Logic GD5446 video card, which all Windows versions starting from Windows 95 should recognize and use.

Most hypervisors which advertise some form of “hardware acceleration” use API Forwarding (AF): a proxy service that requires specialized drivers on both the host and guest to create a high performance instruction pipeline.

API Forwarding (AF) works by:

  1. intercepting the GPU command requested by a piece of software
  2. proxying the GPU command to the host hypervisor
  3. executing the captured GPU command on the host from the hypervisor
  4. bubbling the response back up to the virtual machine

This mode is very useful when many virtual machines are competing for the resources of a single GPU and Full GPU Virtualization (FGV) is not possible. The hypervisor queues graphics card operations from one or more virtual machines and schedules virtual execution and memory slots for each virtual machine on the single physical GPU. Each virtual machine sees its own graphics card while the hypervisor divides up the single physical resource. A key drawback of AF is that usually only the OpenGL and DirectX interfaces are supported by the GPU instruction proxy.

The process by which API Forwarding (AF) works is known as paravirtualization.

Direct Pass-Through (DPT) exposes the GPU as a PCI device which is directly addressable by the virtual machine. Nothing besides that virtual machine can reference any resources on the GPU; it cannot be shared with the physical machine or any other virtual machine. Many devices have only one graphics card installed, and on such machines using this system renders the host's graphical user interface inoperable. This method is most useful when:

  • the physical graphics card does not support Full GPU Virtualization (FGV)
  • two or more graphics cards are attached to a system
  • paravirtualized drivers are not available or do not work with the installed physical GPU, host hypervisor, or guest operating system

Sharing a GPU natively among multiple virtual machines is possible with Full GPU Virtualization (FGV) solutions such as Intel GVT-g. This process is also known as Hardware Assisted Virtualization (HVM), not to be confused with Paravirtualization (PV). In this mode the IOMMU hardware exposes a GPU memory interface to each virtual machine while it internally handles the memory address mappings between what it exposes to virtual machines and the actual physical memory on the GPU.

In “IOMMU and Virtualization,” Susanta Nanda writes:

IOMMU provides two main functionalities: virtual-to-physical address translation and access protection on the memory ranges that an I/O device is trying to operate on.

According to Intel engineer Zhenyu Wang in the XDC2017 presentation “Full GPU virtualization in mediated pass-through way,” peak media workload performance reaches 95% of the native host when running one virtual machine, and average media workload performance is 85% of native.

Hypervisors

Hypervisors are software systems that enable multiple virtual machines to run simultaneously on a single physical machine. Linux users have many hypervisor options. These are a sample of some Linux hypervisors:

  1. Oracle VirtualBox
  2. VMWare vSphere/ESXi (technically runs beneath Linux)
  3. VMWare Workstation
  4. QEMU

Oracle VirtualBox

Possibly the most widely-used hypervisor is VirtualBox by Oracle. VirtualBox is open source software with the exception of the optional extension pack.

The Oracle VirtualBox extension pack provides many features which are not available in the free version. The PCI passthrough module was shipped as an Oracle VM VirtualBox extension package until the feature was scrapped.

These features are available gratis for personal and non-commercial use only.

Possession of the VirtualBox Extension Pack without a license can be problematic:

Got an email today informing me (Urgent Virtual Box Licensing Information for Company X) that there have been TWELVE (12!) downloads of the VirtualBox Extension Pack at my employer in the past year. And since the extensions are licensed differently than the base product, they’d love for us to call them and talk about how much money we owe them. Their report attached to email listed source IPs and AS number, as well as date/product/version. Out of the twelve (12!), there were always two on the same day of the same version, so really six (6!) downloads. We’ll probably end up giving them $150, and I’ll make sure they never get any business from places I work, because fuck Oracle. I wouldn’t piss on Larry Ellison if he was on fire.

VirtualBox Linux hosts do not support GPU DPT (Direct Pass-Through) at all. All of the preliminary PCI pass-through work for Linux hosts which is needed for GPU DPT was completely stripped out on December 5th, 2019 with this message:

Linux host: Drop PCI passthrough, the current code is too incomplete (cannot handle PCIe devices at all), i.e. not useful enough

VirtualBox 2D and 3D acceleration both work according to the same principle: API forwarding (AF). The Oracle VM VirtualBox manual describes the 3D mechanism:

Oracle VM VirtualBox implements 3D acceleration by installing an additional hardware 3D driver inside the guest when the Guest Additions are installed. This driver acts as a hardware 3D driver and reports to the guest operating system that the virtual hardware is capable of 3D hardware acceleration. When an application in the guest then requests hardware acceleration through the OpenGL or Direct3D programming interfaces, these are sent to the host through a special communication tunnel implemented by Oracle VM VirtualBox. The host then performs the requested 3D operation using the host’s programming interfaces.

VMWare vSphere/ESXi

VMWare vSphere/ESXi is a bare metal hypervisor which runs beneath any end-user operating systems. This property makes it a Type 1 Hypervisor. It supports all GPU acceleration technologies.

  • Virtual Shared Graphics Acceleration (vSGA) is a form of API forwarding (AF).
  • Virtual Dedicated Graphics Acceleration (vDGA) technology is a form of Direct Pass-Through (DPT).
  • VMWare Virtual Shared Pass-Through Graphics Acceleration (vGPU or MxGPU) is a form of Full GPU Virtualization (FGV).

The main limitation of VMWare vSphere/ESXi GPU acceleration is graphics card selection. GPU passthrough is possible only with a small set of GPUs because NVIDIA drivers disable consumer-market GPUs such as the GeForce series when the drivers detect that they are running in a virtual environment. VMWare maintains a comprehensive compatibility list of all graphics cards supported for hardware acceleration.

VMWare Workstation

VMWare Workstation only provides Virtual Shared Graphics Acceleration (vSGA), a form of API forwarding (AF). In this regard, the GPU acceleration story is identical to Oracle VirtualBox.

QEMU

QEMU is an open source virtual machine platform that is also capable of translating instructions between wholly unrelated computer architectures. It is widely available in most Linux distributions and is used extensively in industry.

After enabling GVT-g in QEMU you must also recompile QEMU with the 60 fps fix to get smooth video. There is no way around this issue as of the time of publishing. I describe how to get this working in the section “Fix QEMU graphics refresh rate.”

QEMU setup with Intel GVT-g

Setting up the Linux Host

You must add these parameters to your kernel command line at boot:

i915.enable_gvt=1 intel_iommu=igfx_off kvm.ignore_msrs=1 kvm.report_ignored_msrs=0

These parameters will not be picked up correctly if you place them as options in a config file under /etc/modprobe.d/, since i915 is typically built into the kernel or loaded too early in boot for that configuration to apply.
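
After rebooting, it is worth verifying that the parameters took effect. Something like the following should show them on the kernel command line and confirm GVT support is enabled (exact paths and output may vary by kernel version):

cat /proc/cmdline
cat /sys/module/i915/parameters/enable_gvt
sudo dmesg | grep -i gvt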

I do not use a bootloader. I boot with EFISTUB so my efibootmgr commandline looks like this:

efibootmgr --disk /dev/nvme0n1 --part 1 --create --label "Arch Linux" --loader /vmlinuz-linux --unicode 'root=/dev/nvme0n1p2 rw initrd=\initramfs-linux.img i915.enable_gvt=1 intel_iommu=igfx_off kvm.ignore_msrs=1 kvm.report_ignored_msrs=0' --verbose
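
If you boot with GRUB instead of EFISTUB, the equivalent change (a sketch assuming the standard /etc/default/grub layout) is to append the parameters to GRUB_CMDLINE_LINUX_DEFAULT and regenerate the configuration:

# append to your existing options in /etc/default/grub (the ... stands for whatever is already there)
GRUB_CMDLINE_LINUX_DEFAULT="... i915.enable_gvt=1 intel_iommu=igfx_off kvm.ignore_msrs=1 kvm.report_ignored_msrs=0"
sudo grub-mkconfig -o /boot/grub/grub.cfg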

Create a dedicated folder for your virtual machine assets, including virtual disks and UEFI variable stores.

mkdir ~/vms
cd ~/vms
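
# Folder shared with the Windows guest via the smb= option in the QEMU commands below
mkdir shared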

# Generate a random UUID for the next step
uuidgen

# Create Intel GVT-g device
# This step must be run each time you reboot your Linux host
# If i915-GVTg_V5_2 is not available, you may need to change BIOS settings (potentially including Thunderbolt security settings) to allow the GPU to address more memory
sudo su -c "echo 5b9fa453-8b2f-413c-aa19-cfba99ffbed9 > /sys/devices/pci0000\:00/0000\:00\:02.0/mdev_supported_types/i915-GVTg_V5_2/create"

# Create disk for Windows 10 installation
qemu-img create -f qcow2 win10.qcow2 40G

# Convert the qemu disk image to a compressed image and replace the original
qemu-img convert win10.qcow2 -O qcow2 win10.c.qcow2 -c
mv -v win10.c.qcow2 win10.qcow2
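
# (Optional) verify the converted image
qemu-img info win10.qcow2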

# Create a writable UEFI variable store based on the UEFI defaults
cp /usr/share/ovmf/x64/OVMF_VARS.fd my_uefi_vars.bin
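# (On Arch Linux the OVMF files come from the edk2-ovmf package; other distributions may ship them under a different path such as /usr/share/OVMF/)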

# Download Windows 10 virtio drivers
curl -LO https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso

# Create a memory pool for the virtual machine using Linux hugepages.
# 2000 hugepages is enough for the 4000M of RAM allocated to QEMU below, since each hugepage is 2MB
sudo su -c "echo 2000 > /proc/sys/vm/nr_hugepages"

# Check to make sure we actually have the number of huge pages we want
sudo grep -i hugepages /proc/meminfo
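
If you want the hugepage reservation to survive reboots, a sysctl drop-in should also work (a sketch; vm.nr_hugepages is the standard kernel knob):

echo 'vm.nr_hugepages = 2000' | sudo tee /etc/sysctl.d/40-hugepages.conf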

VM installation execution

(NOTE: vfio-pci display is OFF here)

sudo qemu-system-x86_64 \
-cpu host \
-enable-kvm \
-smp cores=$(nproc),threads=1,sockets=1 \
-rtc clock=host,base=localtime \
-device virtio-rng-pci \
-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=my_uefi_vars.bin \
-drive file=win10.qcow2,if=virtio \
-drive file=virtio-win.iso,media=cdrom \
-nic user,model=virtio-net-pci,smb="$(realpath shared)" \
-m 4000M \
-mem-path /dev/hugepages \
-mem-prealloc \
-k en-us \
-machine kernel_irqchip=on \
-global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
-parallel none -serial none \
-vga none \
-display gtk,gl=on \
-device vfio-pci-nohotplug,sysfsdev=/sys/bus/pci/devices/0000\:00\:02.0/5b9fa453-8b2f-413c-aa19-cfba99ffbed9,x-igd-opregion=on,romfile=vbios_gvt_uefi.rom,ramfb=on,display=off
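
Note that this command attaches only the virtio driver CDROM. For the initial installation you also need to attach your Windows 10 installation media; assuming you have an ISO downloaded (the filename below is hypothetical), add one more drive line:

-drive file=Win10_x64.iso,media=cdrom \

Also note that romfile=vbios_gvt_uefi.rom refers to a VBIOS ROM file that must already exist in the working directory; none of the earlier steps generate it.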

Navigate to D:\NetKVM\w10\amd64 and right click on the netkvm (setup information) file, where D: is the drive letter assigned to the virtio driver CDROM. Select the “Install” option from the context menu. When installing drivers such as the virtio drivers from the Fedora CDROM, the VM may appear to completely lock up for a few minutes. Be patient!

After installation and after first boot, install the network virtio driver from the CDROM as well for internet access. Also install the RNG and Balloon virtio drivers.

After setup is complete, go to Windows Device Manager. I expanded the “Display Adapters” section and left it open for two minutes while doing nothing. Out of nowhere the first display adapter was replaced with “Intel UHD Graphics 620.” I did not have to manually install it.

If Windows does not automatically install the Intel driver, wait up to 15 minutes with the Device Manager open to the Display Adapter section then manually force the driver installation. Find the display adapter with the triangle and exclamation point and right click on it. Select “Update Driver” to download and install the Intel graphics driver. This froze up my entire virtual machine UI for a few minutes. If the driver download part succeeds be patient with the installation. It will unfreeze after a few minutes.

Do not proceed unless you can see Intel UHD Graphics listed in the Display Adapters section of the Windows Device Manager.

Normal VM execution

(NOTE: vfio-pci display is ON here)

sudo qemu-system-x86_64 \
-cpu host \
-enable-kvm \
-smp cores=$(nproc),threads=1,sockets=1 \
-rtc clock=host,base=localtime \
-device virtio-rng-pci \
-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=my_uefi_vars.bin \
-drive file=win10.qcow2,if=virtio \
-drive file=virtio-win.iso,media=cdrom \
-nic user,model=virtio-net-pci,smb="$(realpath shared)" \
-m 4000M \
-mem-path /dev/hugepages \
-mem-prealloc \
-k en-us \
-machine kernel_irqchip=on \
-global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
-parallel none -serial none \
-vga none \
-display gtk,gl=on \
-device vfio-pci-nohotplug,sysfsdev=/sys/bus/pci/devices/0000\:00\:02.0/5b9fa453-8b2f-413c-aa19-cfba99ffbed9,x-igd-opregion=on,romfile=vbios_gvt_uefi.rom,ramfb=on,display=on

I use vfio-pci-nohotplug with ramfb=on to see what happens on the display before Windows loads the graphics driver. It is otherwise functionally the same as vfio-pci. To use plain vfio-pci instead, also remove ramfb=on, which is only supported by the nohotplug variant.
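
For reference, the plain vfio-pci version of the device line looks like this (same sysfsdev path as above):

-device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000\:00\:02.0/5b9fa453-8b2f-413c-aa19-cfba99ffbed9,x-igd-opregion=on,romfile=vbios_gvt_uefi.rom,display=on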

Fix QEMU graphics refresh rate

Despite enabling graphics acceleration, our virtual machine graphics are fixed to a low framerate, which makes interaction with the virtual machine very choppy and generally unpleasant.

This issue was first reported to Intel on Apr 26, 2018 and remains unresolved as of the time of publication. Fortunately, on Nov 13, 2018, Zhang Ch. N. reported that the C macro GUI_REFRESH_INTERVAL_DEFAULT in include/ui/console.h of QEMU is responsible for the performance issue: the default 30 ms refresh interval caps the GUI at roughly 33 frames per second, while 16 ms allows roughly 60. The downside is that changing this parameter requires QEMU recompilation.

To fix this issue you must recompile QEMU with the following patch applied:

diff --unified --recursive --text qemu-4.2.0/include/ui/console.h qemu-4.2.0.new/include/ui/console.h
--- qemu-4.2.0/include/ui/console.h     2019-12-12 13:20:48.000000000 -0500
+++ qemu-4.2.0.new/include/ui/console.h 2020-04-07 14:50:19.995242274 -0400
@@ -26,7 +26,7 @@
 #define QEMU_CAPS_LOCK_LED   (1 << 2)
 
 /* in ms */
-#define GUI_REFRESH_INTERVAL_DEFAULT    30
+#define GUI_REFRESH_INTERVAL_DEFAULT    16
 #define GUI_REFRESH_INTERVAL_IDLE     3000
 
 /* Color number is match to standard vga palette */

I prepared a package for the Arch Linux AUR with this patch applied. You can install it by obtaining qemu-60fps from the Arch Linux AUR. I use the yay AUR helper to install and manage packages from the AUR conveniently.
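
With yay installed, the entire build and installation is one command:

yay -S qemu-60fps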

The package description diff against the official Arch Linux QEMU package looks like this:

diff --git a/qemu/trunk/GUI_REFRESH_INTERVAL_DEFAULT.patch b/qemu/trunk/GUI_REFRESH_INTERVAL_DEFAULT.patch
new file mode 100644
index 0000000..ff65df5
--- /dev/null
+++ b/qemu/trunk/GUI_REFRESH_INTERVAL_DEFAULT.patch
@@ -0,0 +1,12 @@
+diff --unified --recursive --text qemu-4.2.0/include/ui/console.h qemu-4.2.0.new/include/ui/console.h
+--- qemu-4.2.0/include/ui/console.h    2019-12-12 13:20:48.000000000 -0500
++++ qemu-4.2.0.new/include/ui/console.h        2020-04-07 14:50:19.995242274 -0400
+@@ -26,7 +26,7 @@
+ #define QEMU_CAPS_LOCK_LED   (1 << 2)
+ 
+ /* in ms */
+-#define GUI_REFRESH_INTERVAL_DEFAULT    30
++#define GUI_REFRESH_INTERVAL_DEFAULT    16
+ #define GUI_REFRESH_INTERVAL_IDLE     3000
+ 
+ /* Color number is match to standard vga palette */
diff --git a/qemu/trunk/PKGBUILD b/qemu/trunk/PKGBUILD
index 6319066..6a8b1d4 100644
--- a/qemu/trunk/PKGBUILD
+++ b/qemu/trunk/PKGBUILD
@@ -1,8 +1,11 @@
-# Maintainer: Tobias Powalowski <tpowa@archlinux.org>
+# Maintainer: Adam Gradzki <hi@adamgradzki.com>
+# Contributor: Tobias Powalowski <tpowa@archlinux.org>
 # Contributor: Sébastien "Seblu" Luttringer <seblu@seblu.net>
 
-pkgbase=qemu
-pkgname=(qemu qemu-headless qemu-arch-extra qemu-headless-arch-extra
+pkgbase=qemu-60fps
+pkgname=(qemu-60fps qemu-headless-60fps qemu-arch-extra-60fps qemu-headless-arch-extra-60fps
+         qemu-block-{iscsi,rbd,gluster}-60fps qemu-guest-agent-60fps)
+provides=(qemu qemu-headless qemu-arch-extra qemu-headless-arch-extra
          qemu-block-{iscsi,rbd,gluster} qemu-guest-agent)
 pkgdesc="A generic and open source machine emulator and virtualizer"
 pkgver=4.2.0
@@ -17,11 +20,13 @@ depends=(virglrenderer sdl2 vte3 libpulse brltty "${_headlessdeps[@]}")
 makedepends=(spice-protocol python ceph libiscsi glusterfs python-sphinx)
 source=(https://download.qemu.org/qemu-$pkgver.tar.xz{,.sig}
         qemu-ga.service
-        65-kvm.rules)
+        65-kvm.rules
+        GUI_REFRESH_INTERVAL_DEFAULT.patch)
 sha512sums=('2a79973c2b07c53e8c57a808ea8add7b6b2cbca96488ed5d4b669ead8c9318907dec2b6109f180fc8ca8f04c0f73a56e82b3a527b5626b799d7e849f2474ec56'
             'SKIP'
             '269c0f0bacbd06a3d817fde02dce26c99d9f55c9e3b74bb710bd7e5cdde7a66b904d2eb794c8a605bf9305e4e3dee261a6e7d4ec9d9134144754914039f176e4'
-            'bdf05f99407491e27a03aaf845b7cc8acfa2e0e59968236f10ffc905e5e3d5e8569df496fd71c887da2b5b8d1902494520c7da2d3a8258f7fd93a881dd610c99')
+            'bdf05f99407491e27a03aaf845b7cc8acfa2e0e59968236f10ffc905e5e3d5e8569df496fd71c887da2b5b8d1902494520c7da2d3a8258f7fd93a881dd610c99'
+            'ac88a7307c081c88a4c20a563fc566e8c3474f33bac428a8c2d4415573d11a1da7f6ca08c1ae60b5298085635ef460a05efd8a3fb42f59f81703cb346992b48f')
 validpgpkeys=('CEACC9E15534EBABB82D3FA03353C9CEF108B584')
 
 case $CARCH in
@@ -33,7 +38,8 @@ prepare() {
   mkdir build-{full,headless}
   mkdir -p extra-arch-{full,headless}/usr/{bin,share/qemu}
 
-  cd ${pkgname}-${pkgver}
+  cd qemu-${pkgver}
+  patch --forward --strip=1 --input="${srcdir}/GUI_REFRESH_INTERVAL_DEFAULT.patch"
 }
 
 build() {
@@ -60,7 +66,7 @@ _build() (
   # http://permalink.gmane.org/gmane.comp.emulators.qemu/238740
   export CFLAGS+=" -fPIC"
 
-  ../${pkgname}-${pkgver}/configure \
+  ../qemu-${pkgver}/configure \
     --prefix=/usr \
     --sysconfdir=/etc \
     --localstatedir=/var \
@@ -75,19 +81,20 @@ _build() (
   make
 )
 
-package_qemu() {
-  optdepends=('qemu-arch-extra: extra architectures support')
-  provides=(qemu-headless)
-  conflicts=(qemu-headless)
+package_qemu-60fps() {
+  optdepends=('qemu-arch-extra-60fps: extra architectures support')
+  provides=(qemu qemu-headless)
+  conflicts=(qemu qemu-headless)
   replaces=(qemu-kvm)
 
   _package full
 }
 
-package_qemu-headless() {
+package_qemu-headless-60fps() {
   pkgdesc="QEMU without GUI"
   depends=("${_headlessdeps[@]}")
   optdepends=('qemu-headless-arch-extra: extra architectures support')
+  provides=(qemu-headless)
 
   _package headless
 }
@@ -167,7 +174,7 @@ _package() {
   done
 }
 
-package_qemu-arch-extra() {
+package_qemu-arch-extra-60fps() {
   pkgdesc="QEMU for foreign architectures"
   depends=(qemu)
   provides=(qemu-headless-arch-extra)
@@ -177,38 +184,43 @@ package_qemu-arch-extra() {
   mv extra-arch-full/usr "$pkgdir"
 }
 
-package_qemu-headless-arch-extra() {
+package_qemu-headless-arch-extra-60fps() {
   pkgdesc="QEMU without GUI, for foreign architectures"
   depends=(qemu-headless)
+  provides=(qemu-headless-arch-extra)
   options=(!strip)
 
   mv extra-arch-headless/usr "$pkgdir"
 }
 
-package_qemu-block-iscsi() {
+package_qemu-block-iscsi-60fps() {
   pkgdesc="QEMU iSCSI block module"
   depends=(glib2 libiscsi jemalloc)
+  provides=(qemu-block-iscsi)
 
   install -D build-full/block-iscsi.so "$pkgdir/usr/lib/qemu/block-iscsi.so"
 }
 
-package_qemu-block-rbd() {
+package_qemu-block-rbd-60fps() {
   pkgdesc="QEMU RBD block module"
   depends=(glib2 ceph)
+  provides=(qemu-block-rbd)
 
   install -D build-full/block-rbd.so "$pkgdir/usr/lib/qemu/block-rbd.so"
 }
 
-package_qemu-block-gluster() {
+package_qemu-block-gluster-60fps() {
   pkgdesc="QEMU GlusterFS block module"
   depends=(glib2 glusterfs)
+  provides=(qemu-block-gluster)
 
   install -D build-full/block-gluster.so "$pkgdir/usr/lib/qemu/block-gluster.so"
 }
 
-package_qemu-guest-agent() {
+package_qemu-guest-agent-60fps() {
   pkgdesc="QEMU Guest Agent"
   depends=(gcc-libs glib2 libudev.so)
+  provides=(qemu-guest-agent)
 
   install -D build-full/qemu-ga "$pkgdir/usr/bin/qemu-ga"
   install -Dm644 qemu-ga.service "$pkgdir/usr/lib/systemd/system/qemu-ga.service"

Additional remarks and gotchas

Hyper-V CPU flags for QEMU cause issues with the Intel Graphics Driver, so do not use them.

QEMU Hyper-V flags look like this:

-cpu host,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_tlbflush,hv_ipi

USB support can be added with the qemu-xhci device, but it may cause stuttering; the root causes are not known:

-device qemu-xhci \

QEMU file sharing requires you to pass an absolute path on the right side of smb=.

I like to use realpath to resolve my relative path to the corresponding absolute path.

smb="$(realpath shared)"

The folder is accessible in Windows File Explorer at \\10.0.2.4
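
If you prefer a drive letter, the share can also be mapped from a Windows command prompt; qemu is the share name used by QEMU's built-in SMB export:

net use Z: \\10.0.2.4\qemu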

In the future I will get Pulseaudio working with the Windows Virtual Machine. At this time it is not important.

Conclusion

QEMU eclipses VirtualBox in features and exceeds VMWare's capabilities. VirtualBox is limited to API forwarding (AF) since it cannot let virtual machines address graphics hardware directly in any way. VMWare solutions support all types of GPU addressing, but most consumer graphics cards made by NVIDIA disable themselves when they detect they are being used in Direct Pass-Through (DPT) or Full GPU Virtualization (FGV) modes. QEMU provides hardware flexibility beyond that of VMWare and brings near-native graphics performance to guest operating systems such as Windows 10 with truly minimal driver support required in the guest. I recommend QEMU on Linux when high graphics performance and low operational costs are priorities for deploying a virtual machine environment.