On Apr 27, 2021, at 2:40 PM, Michael Brown <email@example.com> wrote:
On 27/04/2021 18:31, Andrew Fish via groups.io wrote:
One trick people have pulled in the past is to write a driver that produces a “fake” PCI IO Protocol. The “fake” PCI IO driver abstracts how the MMIO device shows up on the platform. This works well if the MMIO device is really the same IP block as a PCI device. This usually maps to the PCI BAR being the same thing as the magic MMIO range. The “fake” PCI IO Protocol also abstracts platform specific DMA rules from the generic driver.

Slightly off-topic, but I've always been curious about this: given that the entire purpose of PCI BARs is to allow for the use of straightforward MMIO operations, in which standard CPU read/write instructions can be used to access device registers with zero overhead and no possible error conditions, why do the EFI_PCI_IO_PROTOCOL.Mem.Read (and related) abstractions exist? They seem to add a lot of complexity for negative benefit, and I'd be interested to know if there was some reason why the design was chosen.
Assuming that physical address == uncacheable memory region is not always true. For example, on Itanium you had to flip bit 63 in physical mode to do an uncached transaction. There are also some high-end servers that have different physical address ranges for PCI vs. DRAM.
Basically we were paranoid about portability. That, and we really didn’t want #ifdefs in the code for different architectures.