Configuring the Isolated Desktop Solution

This document outlines the steps to configure your integrated graphics adapter as the primary display adapter. It then explains how to detach the discrete GPU from the host operating system and assign it to a virtual machine (VM) that serves as a local target for compute and media offload and debugging.

Client Desktop

In the isolated desktop solution, the VM host controls the display via the integrated graphics processor using the Ubuntu-provided kernel, and uses PCI passthrough to allocate the discrete graphics processor to the VM guest, which runs the Intel out-of-tree kernel module. After configuring the host, follow the instructions to set up and use a VM guest for compute and media offload tasks. Using the standard QEMU machine emulator and virtualizer, you can create a local compute node that runs media and compute applications on the discrete GPU while the integrated graphics adapter continues to drive your display.

Configuring the VM host

Follow this procedure to detach the discrete GPU from the host operating system so that it can be assigned to a virtual machine.

Prerequisites:

  • Install and boot Ubuntu 22.04.2 LTS or later. You can install it as the sole operating system, in a dual-boot setup, or on external storage. Since the steps for installing Linux vary significantly by platform, we do not provide detailed installation instructions here. For specific guidance, consult your platform or operating system provider.

  • Ensure that virtualization is enabled in your system’s BIOS; a quick way to check this from Linux is shown after this list. For details on configuring the BIOS, refer to your platform supplier’s documentation. Additionally, on some systems, you may need to enable Secure Boot.

  • Activate sudo in your session by running sudo -l. This will allow you to copy and paste the instructions that use sudo without being prompted for your password.
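
You can check from within Linux whether the CPU exposes the Intel virtualization extensions; the vmx flag in /proc/cpuinfo indicates Intel VT-x support:

    # Count the CPU threads reporting the vmx (Intel VT-x) flag.
    # A result of 0 means virtualization is disabled in the BIOS or unsupported.
    grep -c vmx /proc/cpuinfo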

Configuration procedure:

  1. Check the PCI IDs of your graphics cards.

    lspci -nn | grep -Ei 'VGA|DISPLAY'
    

    For example, on an Intel® NUC 12 Enthusiast Mini PC - NUC12SNKi72VA the output is:

    00:02.0 VGA compatible controller [0300]: Intel Corporation Alder Lake-P Integrated Graphics Controller [8086:46a6] (rev 0c)
    03:00.0 VGA compatible controller [0300]: Intel Corporation Device [8086:5690] (rev 08)
    

    The PCI device IDs are the numbers following the 8086: vendor prefix, so in this example, 46a6 and 5690 are the device IDs.

  2. Find your PCI IDs in the hardware table to identify the names of your GPUs.

    In our example, 5690 indicates that the discrete GPU is Intel® Arc™ A770M Graphics.
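
    If your local PCI ID database is up to date, lspci can often resolve the marketing name directly. This is only a convenience check; the hardware table remains the authoritative reference:

    # Refresh the local pci.ids database, then list the device by vendor:device ID
    sudo update-pciids
    lspci -d 8086:5690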

  3. Store your PCI ID in a variable in your bash session.

    # Change the value to your PCI device ID
    export INTEL_GPU="8086:5690"
    
  4. To bind the discrete GPU to the vfio-pci driver, add the intel_iommu=on and vfio-pci.ids=${INTEL_GPU} parameters to the GRUB_CMDLINE_LINUX_DEFAULT value in your system’s /etc/default/grub file. The following script makes both changes and ensures that the vfio-pci module loads early:

    eval "$(grep ^GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub)"
    declare -i updated=0
    if [[ ! "intel_iommu=on" =~ ${GRUB_CMDLINE_LINUX_DEFAULT} ]]; then
      echo "Adding 'intel_iommu=on'"
      GRUB_CMDLINE_LINUX_DEFAULT+=" intel_iommu=on"
      updated=1
    fi
    if [[ ! "vfio-pci.ids=${INTEL_GPU}" =~ ${GRUB_CMDLINE_LINUX_DEFAULT} ]]; then
      echo "Adding 'vfio-pci.ids=${INTEL_GPU}'"
      GRUB_CMDLINE_LINUX_DEFAULT+=" vfio-pci.ids=${INTEL_GPU}"
      updated=1
    fi
    if ! grep -q "vfio-pci" /etc/modules; then
       echo "vfio-pci" | sudo tee -a /etc/modules >/dev/null
       echo "Added 'vfio-pci' to /etc/modules to ensure it loads and binds early."
    fi
    if (( updated )); then
       sudo cp /etc/default/grub /etc/default/grub.bk
       sudo sed -i -e "s/^GRUB_CMDLINE_LINUX_DEFAULT=.*\$/GRUB_CMDLINE_LINUX_DEFAULT=\"${GRUB_CMDLINE_LINUX_DEFAULT}\"/" \
          /etc/default/grub
       if ! sudo update-grub; then
          sudo cp /etc/default/grub /etc/default/grub.bk
          echo "update-grub failed. /etc/default/grub restored from backup" >&2
       else
          if ! sudo update-initramfs -u; then
             sudo cp /etc/default/grub /etc/default/grub.bk
             echo "update-initramfs failed. /etc/default/grub restored from backup" >&2
             if ! sudo update-grub; then
                echo "Unable to update-grub to original configuration" >&2
             fi
          fi
       fi
       echo "GRUB configuration updated setting: ${GRUB_CMDLINE_LINUX_DEFAULT}"
    fi
    grep ^GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub
    
  5. Reboot your system.

    sudo reboot
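
    After the system comes back up, you can optionally confirm that the IOMMU is active and that the new kernel parameters were applied (the exact log wording varies by kernel version):

    # Confirm the parameters made it onto the kernel command line
    cat /proc/cmdline
    # Look for DMAR/IOMMU initialization messages from early boot
    sudo dmesg | grep -i -E 'DMAR|IOMMU'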
    
  6. Run the following command and verify that it returns the correct PCI information and the kernel drivers bound to your GPUs.

    lspci -nnk | grep -A 3 -Ei 'VGA|DISPLAY' | grep -E "VGA|driver"
    

    The output should look similar to the following, listing two devices, one bound to the i915 driver and the other to vfio-pci.

    00:02.0 VGA compatible controller [0300]: Intel Corporation Alder Lake-P Integrated Graphics Controller [8086:46a6] (rev 0c)
            Kernel driver in use: i915
    03:00.0 VGA compatible controller [0300]: Intel Corporation Device [8086:5690] (rev 08)
            Kernel driver in use: vfio-pci
    

    The first field of each returned record (for example, 03:00.0) is the address of the PCI device. If the device is not bound to vfio-pci, you need to perform additional actions. One option is to unbind the device from its current driver and then bind it to vfio-pci:

    # Change the value to your PCI device ID
    export INTEL_GPU="8086:5690"
    export INTEL_GPU_PCI="0000:$(lspci -nn | grep -Ei "${INTEL_GPU}" | sed -nE 's/^([^ ]+).*/\1/p')"
    
    # Proceed only if the variable holds a full PCI address (the same length as 0000:00:00.0)
    if [[ ${#INTEL_GPU_PCI} -eq 12 ]]; then
       # Release the device from the driver it is currently bound to
       echo "${INTEL_GPU_PCI}" |
          sudo tee /sys/bus/pci/devices/${INTEL_GPU_PCI}/driver/unbind >/dev/null
       # Register the vendor/device ID pair with vfio-pci so it claims the device
       echo "${INTEL_GPU/:/ }" |
          sudo tee /sys/bus/pci/drivers/vfio-pci/new_id >/dev/null
    fi
    

    You can then verify the results using the following command:

    lspci -nnk | grep -A 3 -Ei 'VGA|DISPLAY' | grep -E "VGA|driver"
    

    If it is still not bound to vfio-pci, check your kernel logs:

    sudo dmesg | grep -iE '(i915|vfio|xe)'
    
  7. Assign the address of the PCI device that is bound to the vfio-pci driver to a bash environment variable. This address is required to give your VM access to the discrete graphics adapter.

    # Change the value to your PCI device ID
    export INTEL_GPU="8086:5690"
    export INTEL_GPU_PCI="0000:$(lspci -nn | grep -Ei "${INTEL_GPU}" | sed -nE 's/^([^ ]+).*/\1/p')"   
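
    To confirm that the variable was populated correctly, print it; it should contain a full PCI address such as 0000:03:00.0:

    echo "${INTEL_GPU_PCI}"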
    

Configuring the VM guest

Follow this procedure to create an Ubuntu virtual machine that will serve as a VM guest and configure it to use the out-of-tree driver for the discrete GPU.

  1. Install packages on the VM host for managing your VM.

    sudo apt install qemu-kvm qemu-utils \
      libvirt-daemon-system libvirt-clients \
      bridge-utils \
      virt-manager ovmf gir1.2-spiceclientgtk-3.0
    
  2. To verify whether Intel® Virtualization Technology (Intel® VT) for Directed I/O (Intel® VT-d) is enabled on your system, run the kvm-ok utility. Note that kvm-ok confirms that hardware virtualization is available to the kernel; VT-d itself is what the PCI passthrough relies on.

    sudo kvm-ok
    

    After this step, you should receive a message similar to the following example. Otherwise, ensure that virtualization is enabled in your system’s BIOS. For information on configuring the BIOS, see your platform supplier’s documentation. On some systems, you may also need to enable Secure Boot.

    INFO: /dev/kvm exists
    KVM acceleration can be used
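
    If the kvm-ok command is not available, it is provided by the cpu-checker package:

    sudo apt install -y cpu-checker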
    
  3. Create a 50G disk image file for your VM. If you do not want to place the disk image in /opt, you can place it anywhere else you have write access on the host.

    sudo qemu-img create -f qcow2 /opt/ubuntu-disk.qcow2 50G
    sudo chown $(whoami) /opt/ubuntu-disk.qcow2
    

    Note

    To handle compute work involving the download of large models, consider either increasing the size of the disk image or configuring an alternative connection between the virtual machine and external storage, such as NFS or virtiofs.
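
    For example, one minimal way to enlarge the disk later is qemu-img resize; note that the partition table and filesystem inside the guest must then be grown separately:

    # Grow the image by 100 GB; run this on the host while the VM is powered off
    sudo qemu-img resize /opt/ubuntu-disk.qcow2 +100G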

  4. Install the Ubuntu 22.04.2 LTS image on the VM. You can install either Ubuntu Server or Ubuntu Desktop within the VM. The Server image reduces disk usage because a graphics display is not required for GPU offloading. Since you installed the Desktop image on the host platform, you may wish to reuse that ISO image instead of downloading the Server image.

    If you wish to download a new image, you can do so using the following command:

    wget https://releases.ubuntu.com/jammy/ubuntu-22.04.2-live-server-amd64.iso
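
    Optionally, verify the integrity of the download against the SHA256SUMS file that Ubuntu publishes alongside the ISO:

    wget https://releases.ubuntu.com/jammy/SHA256SUMS
    sha256sum -c --ignore-missing SHA256SUMS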
    

    The command downloads the ubuntu-22.04.2-live-server-amd64.iso file. Pass it to QEMU by setting the BOOT_MEDIA environment variable:

    export FILE=./ubuntu-22.04.2-live-server-amd64.iso
    export BOOT_MEDIA="-cdrom ${FILE}"
    

    Instead of using QEMU directly from the command line, you can also use a graphical configuration tool such as virt-manager.

  5. Install the operating system on the VM. This step starts the VM with 4G of RAM and copies the CPU configuration from the host CPU.

    sudo qemu-system-x86_64 -machine pc \
      -m 4G \
      -cpu host \
      -enable-kvm \
      -drive file=/opt/ubuntu-disk.qcow2 \
      -netdev user,id=net0,hostfwd=tcp::10022-:22 \
      -device virtio-net-pci,netdev=net0 \
      ${BOOT_MEDIA}
    

    The VM boots the OS image specified in the BOOT_MEDIA variable declared previously. To access the UI presented in the VM, you can either use your local display (default) or tell QEMU to use VNC*. This example uses the local display. If you would like to use VNC, you can add the -nographic -display vnc=:20 parameters and then connect your VNC client to port 5920 on the host machine.
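
    For example, assuming a VNC client such as TigerVNC's vncviewer is installed on the host, you could connect with:

    # Display :20 corresponds to TCP port 5920
    vncviewer localhost:20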

    This step results in opening a new window and starting the Ubuntu installer.

  6. During the package installation, configure the network and install openssh-server when prompted, as the Secure Shell (SSH) protocol will be required later to connect to the VM. If you skipped the prompt, you can install it afterwards from within the VM:

    sudo apt install -y openssh-server
    

    When prompted about partitioning the disk, follow these rules:

    • Deselect LVM, as you do not need to use volume management

    • Add a 2G partition mounted to /boot, formatted as ext4. This provides additional space for changing kernels. The default is about 750M.

    • Set the remaining space to approximately 47G

  7. After the installation, reboot the VM and log in using the username and password you configured. Once the virtual machine starts, it boots the Ubuntu operating system you installed, and port 10022 on the host is forwarded to port 22 in the guest.

  8. On the host, connect to the VM using SSH.

    ssh -p 10022 localhost
    sudo -l
    
  9. Optionally, if you installed the server OS version, you can disable “cloud-init” inside the VM to improve boot times.

    echo 'datasource_list: [ None ]' | sudo tee /etc/cloud/cloud.cfg.d/90_dpkg.cfg
    sudo apt purge -y cloud-init &&
    sudo rm -rf /etc/cloud &&
    sudo rm -rf /var/lib/cloud
    
  10. Optionally, if you installed the desktop OS version, you may want to disable the graphics display to reduce memory usage and improve performance.

    sudo systemctl disable gdm3 &&
    sudo systemctl set-default multi-user.target &&
    sudo systemctl stop gdm3
    
  11. Configure the package repository.

    wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | \
      sudo gpg --dearmor --output /usr/share/keyrings/intel-graphics.gpg
    echo "deb [arch=amd64,i386 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | \
      sudo tee /etc/apt/sources.list.d/intel-gpu-jammy.list
    sudo apt update
    
  12. Install the out-of-tree kernel module.

    sudo apt install -y \
      intel-i915-dkms intel-platform-vsec-dkms \
      intel-platform-cse-dkms intel-fw-gpu 
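
    You can confirm that the DKMS modules built and installed successfully for the running kernel:

    dkms status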
    
  13. Open the /etc/default/grub file in the VM and modify the GRUB_CMDLINE_LINUX_DEFAULT and GRUB_TERMINAL values to enable serial output so the console output can be seen from the Virtual Machine Manager (VMM). This will allow you to launch the VM later without a virtual display.

    sudo nano /etc/default/grub
    

    Modify the GRUB_CMDLINE_LINUX_DEFAULT and GRUB_TERMINAL values as in the following example:

    GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"
    GRUB_TERMINAL=console
    
  14. Update the bootloader with the additional GRUB configuration.

    sudo update-grub
    
  15. Shut down the VM.

    sudo shutdown -h 0
    

Starting and connecting the VM

Now that your VM is configured to boot a kernel with support for the discrete graphics adapter, start the VM, connect to it, and install the remaining packages.

  1. Restart QEMU KVM using the disk image you created and pass through the GPU to the guest OS. As this will now be the environment for executing development work, increase the amount of memory passed to the VM so it can load data models.

    export INTEL_GPU="8086:5690"
    export INTEL_GPU_PCI="0000:$(lspci -nn | grep -Ei "${INTEL_GPU}" | sed -nE 's/^([^ ]+).*/\1/p')"
    sudo qemu-system-x86_64 \
      -m 32G \
      -cpu host \
      -enable-kvm \
      -drive file=/opt/ubuntu-disk.qcow2 \
      -device vfio-pci,host=${INTEL_GPU_PCI},id=hostdev0,bus=pci.0,x-igd-gms=2,x-igd-opregion=off \
      -netdev user,id=net0,hostfwd=tcp::10022-:22 -device virtio-net-pci,netdev=net0 \
      -smp $(nproc) \
      -vga none \
      -nographic
    

    ${INTEL_GPU_PCI} is the environment variable created during the VM host configuration; it holds the address of the PCI device that is bound to the vfio-pci driver.

    It may take several seconds before you see any output from the VM.

  2. Connect to the VM.

    ssh -p 10022 localhost
    
  3. Use lspci to verify whether the PCI device was passed through and is initialized by the i915 kernel driver.

    lspci -nnk | grep VGA -A 3 | grep -E "VGA|driver"
    

    The output should look similar to the following:

    00:02.0 VGA compatible controller [0300]: Intel Corporation Device [8086:5690] (rev 08)
            Kernel driver in use: i915
    
  4. On the guest VM, activate a sudo session so you are not prompted for a password while copying and pasting instructions to the terminal.

    sudo -l
    
  5. Install the Compute and Media packages with the latest versions of the OpenCL* runtime, Level Zero, oneVPL, Media SDK, and Media driver.

    sudo apt install -y \
      intel-opencl-icd intel-level-zero-gpu level-zero \
      intel-media-va-driver-non-free libmfx1 libmfxgen1 libvpl2 \
      va-driver-all vainfo
    
  6. To access the GPU, configure permissions by adding your user to the render group, which owns the /dev/dri/render* device nodes:

    sudo gpasswd -a ${USER} render
    newgrp render
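
    You can confirm that the render nodes belong to the render group and that your user is now a member of it:

    # The renderD* device nodes should list the render group
    ls -l /dev/dri
    groups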
    

Testing the configuration

  1. Verify the graphics platform name. The command prints the code name of the GPU platform (DG2 for the Intel® Arc™ A770M in this example).

    sudo grep "platform:" /sys/kernel/debug/dri/0/i915_capabilities
    
  2. Use the clinfo program to verify whether the OpenCL packages installed correctly.

    sudo apt install clinfo
    clinfo | grep "Device Name"
    

    You should see output similar to the following:

      Device Name                                     Intel(R) Arc(TM) A770M Graphics
        Device Name                                   Intel(R) Arc(TM) A770M Graphics
        Device Name                                   Intel(R) Arc(TM) A770M Graphics
        Device Name                                   Intel(R) Arc(TM) A770M Graphics
    
  3. Use the vainfo program to verify whether the media driver installed correctly.

    sudo apt install vainfo
    vainfo --display drm | grep Driver
    

    You should see output similar to the following:

    libva info: VA-API version 1.18.0
    libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
    libva info: Found init function __vaDriverInit_1_18
    libva info: va_openDriver() returns 0
    vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 23.1.4 (12e141d)