Re: Q: Side effects of incrementing gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size?


Laszlo Ersek

Hi Aaron,

On 02/10/20 21:15, aaron.young@... wrote:

 Hello. After adding some GPU cards to our system (each containing a
16 GB BAR), we ran out of MEM64 BAR space, as indicated by the
following OVMF DEBUG output:

--------
[...]
--------

 Incrementing gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size from the
default value (0x800000000, i.e. 32 GiB) to 0x2000000000 (128 GiB)
fixed the issue and should allow enough space to add several (7 or so)
GPUs to the system.

 However, we are concerned about side-effects of such a change.

 So, my question is: Could incrementing PcdPciMmio64Size cause any
negative side effects?

 Thanks in advance for any help/comments/suggestions...

Please refer to the following thread:

* [edk2-discuss]
[OVMF] resource assignment fails for passthrough PCI GPU

https://edk2.groups.io/g/discuss/message/59

The gist is that

(a) you can increase the 64-bit MMIO aperture without rebuilding OVMF,
like this (on the QEMU command line):

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

(This sets a 64 GiB aperture; the value is expressed in MiB, and
65536 MiB = 64 GiB.)
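For illustration, a sketch of how the option fits into a full QEMU invocation. Only the -fw_cfg option itself comes from this thread; the machine type, memory size, firmware image paths, and the GPU's host PCI address below are hypothetical placeholders that depend on the local setup, so the command is printed rather than executed:

```shell
# Derive the fw_cfg value (in MiB) for a 64 GiB 64-bit MMIO aperture.
APERTURE_GIB=64
APERTURE_MIB=$((APERTURE_GIB * 1024))    # 65536

# Hypothetical invocation; firmware paths and the vfio-pci host
# address are placeholders, not values from this thread.
cat <<EOF
qemu-system-x86_64 \\
  -machine q35,accel=kvm -m 8G \\
  -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \\
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \\
  -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=${APERTURE_MIB} \\
  -device vfio-pci,host=0000:3b:00.0
EOF
```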

(b) Implementing such a change statically in the DSC file(s) is not
recommended, as it could prevent guests from booting on hosts that
support EPT (= nested paging on Intel CPUs) but have only 36 physical
address bits.
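A back-of-the-envelope check of why point (b) matters, assuming (as a simplification) that the 64-bit aperture starts just above the 4 GiB line and must fit entirely below the host CPU's physical address limit to be mappable with EPT; the real aperture base chosen by OVMF sits above guest RAM, which only makes the fit tighter:

```shell
# A 36-bit host can address 2^36 bytes = 64 GiB.
PHYS_BITS=36
LIMIT=$((1 << PHYS_BITS))

# The increased PcdPciMmio64Size from this thread: 128 GiB.
APERTURE=$((0x2000000000))
BASE=$((4 << 30))                 # simplification: aperture base at 4 GiB
TOP=$((BASE + APERTURE))

echo "host limit: $((LIMIT >> 30)) GiB, aperture top: $((TOP >> 30)) GiB"
# The aperture top (132 GiB) exceeds the 36-bit limit (64 GiB), so a
# guest built with this static PCD could not boot on such a host.
```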

(c) Sizing the aperture in OVMF dynamically to whatever the host CPU's
phys address width supports would also not be ideal, due to guest RAM
consumption and possible VM migration problems. QEMU doesn't even report
the host CPU's phys address width accurately to the guest, at the
moment, AIUI.

Thanks
Laszlo