- https://bootc-org.gitlab.io/documentation/
- https://github.com/osbuild/bootc-image-builder
- https://github.com/fabiendupont/rhel-bootc-nvidia
- An RHSM subscription (org ID and activation key)
- A non-root user with sudo permissions
- Optional for running a VM:
- Install KVM for running a VM
$ sudo dnf install -y qemu-kvm libvirt virt-install virt-viewer
$ sudo systemctl enable libvirtd --now
- Enable PCI pass-through to allow attaching a GPU to a VM: add `intel_iommu=on` to the host's boot arguments.
- Update the KVM/libvirt permissions.
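Adding the boot argument can be done by editing the GRUB defaults; a sketch of the relevant fragment (the exact contents of the existing command line vary per host, so only the appended argument is shown):

```
# /etc/default/grub (fragment): append intel_iommu=on to the existing
# kernel command line, then regenerate the GRUB config and reboot.
GRUB_CMDLINE_LINUX="... intel_iommu=on"
```

On RHEL-family hosts the same change can be applied with `sudo grubby --update-kernel=ALL --args="intel_iommu=on"`; after rebooting, `grep intel_iommu /proc/cmdline` confirms the argument is active. For AMD CPUs the corresponding argument is `amd_iommu=on`.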
Attaching a PCI device to a running VM can be done either via the VM manager GUI (Add Hardware > PCI Host Device) or using the CLI:
- Create an XML file, e.g. `pci-device.xml`:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x<domain>' bus='0x<bus>' slot='0x<slot>' function='0x<function>'/>
  </source>
</hostdev>
- Attach the device to the VM:
$ sudo virsh attach-device <domain> pci-device.xml
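The placeholders in the hostdev XML can be filled in from a PCI address as printed by `lspci -D` (e.g. `0000:3b:00.0`). A minimal sketch that generates the file contents; the BDF address format is the only assumption beyond the snippet above:

```python
# Sketch: expand a PCI BDF address (domain:bus:slot.function, as shown
# by `lspci -D`) into the libvirt <hostdev> XML for pci-device.xml.
import re

TEMPLATE = """<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x{domain}' bus='0x{bus}' slot='0x{slot}' function='0x{function}'/>
  </source>
</hostdev>
"""

def hostdev_xml(bdf: str) -> str:
    """Return hostdev XML for an address like '0000:3b:00.0'."""
    m = re.fullmatch(
        r"([0-9a-f]{4}):([0-9a-f]{2}):([0-9a-f]{2})\.([0-9a-f])",
        bdf.lower(),
    )
    if not m:
        raise ValueError(f"not a BDF address: {bdf}")
    domain, bus, slot, function = m.groups()
    return TEMPLATE.format(domain=domain, bus=bus, slot=slot, function=function)

if __name__ == "__main__":
    print(hostdev_xml("0000:3b:00.0"))
```

Writing the output to `pci-device.xml` gives the file expected by the `virsh attach-device` command above.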
Note: A host PCI device can also be attached when creating a VM (e.g. via virt-install's `--hostdev` option), but I never tried that.
One way to verify that the GPU is being used, after booting into the bootable image, is to run a stress test:
$ curl -O https://raw.githubusercontent.com/waggle-sensor/gpu-stress-test/main/stress.py
$ python stress.py
and observe the utilization with `nvidia-smi`.
Triggering utilization of multiple GPUs (e.g. 0 through 2):
$ curl -O https://raw.githubusercontent.com/waggle-sensor/gpu-stress-test/main/stress.py
$ for i in $(seq 0 2); do (CUDA_VISIBLE_DEVICES=$i python stress.py &) ; done
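The shell loop above can also be expressed as a small Python helper that starts one process per GPU with `CUDA_VISIBLE_DEVICES` pinned; a sketch, where the demo command stands in for `stress.py`:

```python
# Sketch: launch one copy of a command per GPU, each pinned to a single
# device via CUDA_VISIBLE_DEVICES -- equivalent to the shell loop above.
import os
import subprocess
import sys

def launch_per_gpu(cmd, gpu_ids):
    """Start one subprocess per GPU id and return the Popen handles."""
    procs = []
    for gpu in gpu_ids:
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        procs.append(subprocess.Popen(cmd, env=env))
    return procs

if __name__ == "__main__":
    # Demo with a harmless command; replace with [sys.executable, "stress.py"]
    # to reproduce the loop above (stress.py in the current directory is
    # an assumption).
    cmd = [sys.executable, "-c",
           "import os; print('GPU', os.environ['CUDA_VISIBLE_DEVICES'])"]
    for p in launch_per_gpu(cmd, range(3)):
        p.wait()
```

As with the shell version, each process sees only its assigned device, so `nvidia-smi` should show all three GPUs busy.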