Q: Side effects of incrementing gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size?
Aaron Young
Hello,

After adding some GPU cards to our system (each containing a 16GB BAR), we ran out of MEM64 BAR space, as indicated by the following OVMF DEBUG output:
--------
PciHostBridge: SubmitResources for PciRoot(0x0)
 I/O: Granularity/SpecificFlag = 0 / 01
      Length/Alignment = 0x1000 / 0xFFF
 Mem: Granularity/SpecificFlag = 32 / 00
      Length/Alignment = 0x3100000 / 0xFFFFFF
 Mem: Granularity/SpecificFlag = 64 / 00
      Length/Alignment = 0x804000000 / 0x3FFFFFFFF
PciBus: HostBridge->SubmitResources() - Success
PciHostBridge: NotifyPhase (AllocateResources)
RootBridge: PciRoot(0x0)
 Mem64: Base/Length/Alignment = FFFFFFFFFFFFFFFF/804000000/3FFFFFFFF - Out Of Resource!
 Mem:   Base/Length/Alignment = C0000000/3100000/FFFFFF - Success
 I/O:   Base/Length/Alignment = C000/1000/FFF - Success
Call PciHostBridgeResourceConflict().
PciHostBridge: Resource conflict happens!
RootBridge[0]:
 I/O: Length/Alignment = 0x1000 / 0xFFF
 Mem: Length/Alignment = 0x3100000 / 0xFFFFFF
      Granularity/SpecificFlag = 32 / 00
 Mem: Length/Alignment = 0x0 / 0x0
      Granularity/SpecificFlag = 32 / 06 (Prefetchable)
 Mem: Length/Alignment = 0x804000000 / 0x3FFFFFFFF
      Granularity/SpecificFlag = 64 / 00
 Mem: Length/Alignment = 0x0 / 0x0
      Granularity/SpecificFlag = 64 / 06 (Prefetchable)
 Bus: Length/Alignment = 0x1 / 0x0
PciBus: HostBridge->NotifyPhase(AllocateResources) - Out of Resources
-----------

Incrementing gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size from the default value (0x800000000) to 0x2000000000 fixed the issue and should allow enough space to add several (7 or so) GPUs to the system. However, we are concerned about side effects of such a change. So, my question is: could incrementing PcdPciMmio64Size cause any negative side effects?

Thanks in advance for any help/comments/suggestions...

-Aaron Young
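For reference, the numbers above line up as follows; this is a quick sanity check using shell arithmetic, with the 16 GiB BAR size and 7-GPU count taken from the report:

  # The Mem64 request in the log (0x804000000) exceeds the default
  # 32 GiB aperture (PcdPciMmio64Size = 0x800000000) by only 64 MiB:
  $ printf '0x%x\n' $(( 0x804000000 - 0x800000000 ))
  0x4000000

  # Seven GPUs with 16 GiB BARs need 112 GiB, which fits within the
  # proposed 0x2000000000 (128 GiB) aperture:
  $ printf '0x%x\n' $(( 7 * 16 * 1024 * 1024 * 1024 ))
  0x1c00000000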
Laszlo Ersek
Hi Aaron,
On 02/10/20 21:15, aaron.young@... wrote:

Please refer to the following thread:

* [edk2-discuss] [OVMF] resource assignment fails for passthrough PCI GPU
  https://edk2.groups.io/g/discuss/message/59

The gist is that

(a) you can increase the 64-bit MMIO aperture without rebuilding OVMF, like this (on the QEMU command line):

  -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

(This sets a 64GiB aperture.)

(b) Implementing such a change in the DSC file(s) statically is not recommended, as it could prevent guests from booting on hosts that support EPT (= nested paging on Intel CPUs) but only support 36 physical address bits.

(c) Sizing the aperture in OVMF dynamically, to whatever the host CPU's phys address width supports, would also not be ideal, due to guest RAM consumption and possible VM migration problems. QEMU doesn't even report the host CPU's phys address width accurately to the guest, at the moment, AIUI.

Thanks
Laszlo
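For completeness, a minimal sketch of what that knob might look like in a full QEMU invocation; the firmware image paths and the vfio-pci host address (0000:3b:00.0) are illustrative placeholders, not values from this thread:

  qemu-system-x86_64 \
    -machine q35,accel=kvm \
    -m 4096 \
    -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=OVMF_VARS.fd \
    -device vfio-pci,host=0000:3b:00.0 \
    -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The string value is interpreted in MiB, so string=65536 yields the 64GiB aperture mentioned above.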
Aaron Young
Thanks Laszlo! Very helpful.
-Aaron

On 02/11/2020 01:15 AM, Laszlo Ersek wrote:
  -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536