Q: Side effects of incrementing gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size?


Aaron Young
 

Hello. After adding some GPU cards to our system (each with a 16GB BAR), we ran out of 64-bit MMIO (MEM64) BAR space, as indicated by the following OVMF DEBUG output:

--------
PciHostBridge: SubmitResources for PciRoot(0x0)
I/O: Granularity/SpecificFlag = 0 / 01
Length/Alignment = 0x1000 / 0xFFF
Mem: Granularity/SpecificFlag = 32 / 00
Length/Alignment = 0x3100000 / 0xFFFFFF
Mem: Granularity/SpecificFlag = 64 / 00
Length/Alignment = 0x804000000 / 0x3FFFFFFFF
PciBus: HostBridge->SubmitResources() - Success
PciHostBridge: NotifyPhase (AllocateResources)
RootBridge: PciRoot(0x0)
Mem64: Base/Length/Alignment = FFFFFFFFFFFFFFFF/804000000/3FFFFFFFF - Out Of Resource!
Mem: Base/Length/Alignment = C0000000/3100000/FFFFFF - Success
I/O: Base/Length/Alignment = C000/1000/FFF - Success
Call PciHostBridgeResourceConflict().
PciHostBridge: Resource conflict happens!
RootBridge[0]:
I/O: Length/Alignment = 0x1000 / 0xFFF
Mem: Length/Alignment = 0x3100000 / 0xFFFFFF
Granularity/SpecificFlag = 32 / 00
Mem: Length/Alignment = 0x0 / 0x0
Granularity/SpecificFlag = 32 / 06 (Prefetchable)
Mem: Length/Alignment = 0x804000000 / 0x3FFFFFFFF
Granularity/SpecificFlag = 64 / 00
Mem: Length/Alignment = 0x0 / 0x0
Granularity/SpecificFlag = 64 / 06 (Prefetchable)
Bus: Length/Alignment = 0x1 / 0x0
PciBus: HostBridge->NotifyPhase(AllocateResources) - Out of Resources

-----------

Increasing gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size from its default value (0x800000000, i.e. 32GiB) to 0x2000000000 (128GiB) fixed the issue, and should leave enough space to add several (7 or so) of these GPUs to the system.
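For reference, here is a quick back-of-the-envelope check of the numbers in the log (a sketch only; treating each GPU as contributing a single 16GiB BAR is an assumption, and other devices' 64-bit BARs are ignored):

```python
# Values taken from the OVMF DEBUG log and the proposed change above.
DEFAULT_MMIO64 = 0x800000000    # default PcdPciMmio64Size: 32 GiB
REQUESTED      = 0x804000000    # Mem64 length submitted by the PCI bus driver
NEW_MMIO64     = 0x2000000000   # proposed PcdPciMmio64Size: 128 GiB
GPU_BAR        = 16 * 2**30     # assumed: one 16 GiB BAR per GPU

# The allocation fails because the submitted Mem64 length slightly
# exceeds the default 64-bit MMIO aperture:
print(REQUESTED > DEFAULT_MMIO64)        # True -> "Out Of Resource!"

# Rough headroom with the larger aperture:
print(NEW_MMIO64 // GPU_BAR)             # 8 -> room for ~7 GPUs plus other BARs
```

(If your edk2 BaseTools support it, the PCD can also be overridden on the build command line with `--pcd` rather than by editing the .dsc file.)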

However, we are concerned about possible side effects of such a change.

So, my question is: could increasing PcdPciMmio64Size cause any negative side effects?

Thanks in advance for any help/comments/suggestions...

-Aaron Young
