UEFI nested virtualization
ben.morrice@...
Hello,
We have a use case for nested virtualization with UEFI guests.
I am encountering an issue where the L1 VM 'resets' when attempting to spawn the L2 guest. The 'reset' is actually the qemu process terminating on signal 15 (SIGTERM), with no other obvious logs.
Our infrastructure is as follows:
L0 hypervisor: CentOS 7 with qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64, using the host-model CPU mode. We define the L1 guest with the OVMF package from CentOS 7 (OVMF-20180508-6.gitee3198e672e2.el7), using the OVMF_CODE.secboot.fd firmware.
L1 guest: CentOS Stream 8 with qemu-kvm-6.2.0-5.module_el8.6.0+1087+b42c8331.x86_64. We define the L2 guest with the edk2-ovmf package from CentOS Stream 8 (edk2-ovmf-20220126gitbb1bba3d77-2.el8.noarch), using the OVMF_CODE.secboot.fd firmware.
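
For reference, the firmware is wired into the guest definitions through the usual libvirt loader/nvram stanza. A rough sketch of the relevant part of the L1 guest definition on L0 is below; the firmware and nvram paths and the machine type are from memory, so treat them as assumptions rather than the exact values:

    <os>
      <type arch='x86_64' machine='q35'>hvm</type>
      <!-- secboot build shipped by the CentOS 7 OVMF package (path assumed) -->
      <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
      <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/l1-guest_VARS.fd</nvram>
    </os>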
I have been experimenting with different combinations of OVMF releases and firmware filenames across both the L0 and L1 hosts. The only combination that works (i.e. the L2 VM can boot) is using the OVMF_CODE-pure-efi.fd loader from the edk2.git-ovmf-x64 package in the https://www.kraxel.org/repos/jenkins repo.
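
For completeness, the working L2 guest definition inside L1 uses a stanza along these lines; again, the install paths are what I recall the kraxel edk2.git-ovmf-x64 package providing, so please read them as assumptions:

    <os>
      <type arch='x86_64' machine='q35'>hvm</type>
      <!-- pure-efi build from the kraxel edk2.git-ovmf-x64 package (install path assumed) -->
      <loader readonly='yes' type='pflash'>/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
      <nvram template='/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd'>/var/lib/libvirt/qemu/nvram/l2-guest_VARS.fd</nvram>
    </os>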
I've tried researching the differences between the OVMF_CODE-pure-efi.fd loader and OVMF_CODE.secboot.fd, but I'm still confused.
Can anyone shed any light on the differences? Or is there something fundamental that I'm missing when it comes to nested UEFI virtualization?
Is there any reason NOT to use the OVMF_CODE-pure-efi.fd loader?
Thanks for reading,
Ben Morrice
CERN IT Department