Re: [OVMF] resource assignment fails for passthrough PCI GPU

Eduardo Habkost <ehabkost@...>
 

(+Jiri, +libvir-list)

On Fri, Nov 22, 2019 at 04:58:25PM +0000, Dr. David Alan Gilbert wrote:
* Laszlo Ersek (lersek@redhat.com) wrote:
(+Dave, +Eduardo)

On 11/22/19 00:00, dann frazier wrote:
On Tue, Nov 19, 2019 at 06:06:15AM +0100, Laszlo Ersek wrote:
On 11/19/19 01:54, dann frazier wrote:
On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to pass through an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the Linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve the most aperture possible for other devices -- hence breaking
the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me.
Good to hear, thanks.

I also noticed that the above
commit message mentions the existence of a 24GB card as a reasoning
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ <= 37
physical address bits.
Right.

What would be the downside of bumping the
default aperture to, say, 48GB?
The placement of the aperture is not trivial (please see the code
comments in the linked commit). The base address of the aperture is
chosen so that the largest BAR that can fit in the aperture may be
naturally aligned. (BARs are whole powers of two.)

The largest BAR that can fit in a 48 GB aperture is 32 GB. Therefore
such an aperture would be aligned at 32 GB -- the lowest base address
(dependent on guest RAM size) would be 32 GB. Meaning that the aperture
would end at 32 + 48 = 80 GB. That still breaches the 36-bit phys
address width.

32 GB is the largest aperture size that can work with 36-bit phys
address width; that's the aperture that ends at 64 GB exactly.
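
Spelling the same arithmetic out (a throwaway illustration, not the
actual OvmfPkg code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
  uint64_t ApertureSize = 48ULL << 30; /* requested aperture: 48 GB           */
  uint64_t LargestBar   = 32ULL << 30; /* largest power of two fitting in it  */
  uint64_t LowestBase   = LargestBar;  /* lowest base: aligned to that BAR    */
  uint64_t End          = LowestBase + ApertureSize;

  /* Prints 80 -- past the 64 GB reachable with 36 physical address bits. */
  printf("aperture would end at %llu GB\n", (unsigned long long)(End >> 30));
  return 0;
}
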
Thanks, yeah - now that I read the code comments that is clear (as
clear as it can be w/ my low level of base knowledge). In the commit you
mention Gerd (CC'd) had suggested a heuristic-based approach for
sizing the aperture. When you say "PCPU address width" - is that a
function of the available physical bits?
"PCPU address width" is not a "function" of the available physical bits
-- it *is* the available physical bits. "PCPU" simply stands for
"physical CPU".

IOW, would that approach
allow OVMF to automatically grow the aperture to the max ^2 supported
by the host CPU?
Maybe.

The current logic in OVMF works from the guest-physical address space
size -- as deduced from multiple factors, such as the 64-bit MMIO
aperture size, and others -- towards the guest-CPU (aka VCPU) address
width. The VCPU address width is important for a bunch of other purposes
in the firmware, so OVMF has to calculate it no matter what.

Again, the current logic is to calculate the highest guest-physical
address, and then deduce the VCPU address width from that (and then
expose it to the rest of the firmware).

Your suggestion would require passing the PCPU (physical CPU) address
width from QEMU/KVM into the guest, and reversing the direction of the
calculation. The PCPU address width would determine the VCPU address
width directly, and then the 64-bit PCI MMIO aperture would be
calculated from that.
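
For reference, here is a trivial user-space sketch (my own illustration,
not firmware code) of reading that width via CPUID leaf 0x80000008 --
but see Gerd's note elsewhere in the thread: a VM may simply report 40
here regardless of what the host supports:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
  unsigned int eax, ebx, ecx, edx;

  /* EAX[7:0] = physical address bits, EAX[15:8] = linear address bits. */
  if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
    printf("physical address bits: %u\n", eax & 0xff);
    printf("linear address bits:   %u\n", (eax >> 8) & 0xff);
  }
  return 0;
}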

However, there are two caveats.

(1) The larger your guest-phys address space (as exposed through the
VCPU address width to the rest of the firmware), the more guest RAM you
need for page tables. Because, just before entering the DXE phase, the
firmware builds 1:1 mapping page tables for the entire guest-phys
address space. This is necessary e.g. so you can access any PCI MMIO BAR.

Now consider that you have a huge beefy virtualization host with say 46
phys address bits, and a wimpy guest with say 1.5GB of guest RAM. Do you
absolutely want tens of *terabytes* for your 64-bit PCI MMIO aperture?
Do you really want to pay for the necessary page tables with that meager
guest RAM?
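
Rough numbers, assuming the firmware maps that space with 2 MiB pages
(my own back-of-the-envelope below, not firmware code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
  uint64_t Space     = 1ULL << 46;           /* 64 TiB guest-phys space     */
  uint64_t PdPages   = Space / (1ULL << 30); /* one PD page maps 1 GiB      */
  uint64_t PdptPages = Space / (1ULL << 39); /* one PDPT page maps 512 GiB  */
  uint64_t Pml4Pages = 1;
  uint64_t Bytes     = (PdPages + PdptPages + Pml4Pages) * 4096;

  /* Prints roughly 256 MiB -- a big chunk of a 1.5 GB guest. */
  printf("~%llu MiB of page tables\n", (unsigned long long)(Bytes >> 20));
  return 0;
}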

(Such machines do exist BTW, for example:

http://mid.mail-archive.com/9BD73EA91F8E404F851CF3F519B14AA8036C67B5@DGGEMI521-MBX.china.huawei.com
)

In other words, you'd need some kind of knob anyway, because otherwise
your aperture could grow too *large*.


(2) Exposing the PCPU address width to the guest may have nasty
consequences at the QEMU/KVM level, regardless of guest firmware. For
example, that kind of "guest enlightenment" could interfere with migration.

If you boot a guest, let's say with 16GB of RAM, and tell it "hey friend,
have 40 bits of phys address width!", then you'll have a difficult time
migrating that guest to a host with a CPU that only has 36-bit wide
physical addresses -- even if the destination host has plenty of RAM
otherwise, such as a full 64GB.

There could be other QEMU/KVM / libvirt issues that I'm unaware of
(hence the CC to Dave and Eduardo).
Host physical address width gets messy. There are differences as well
between upstream QEMU behaviour and some downstreams.
I think the story is that:

a) QEMU default: 40 bits on any host
b) -cpu blah,host-phys-bits=true to follow the host.
c) RHEL has host-phys-bits=true by default

As you say, the only real problem with host-phys-bits is migration -
between, say, an E3 and an E5 Xeon with different widths. The magic 40
is generally wrong as well - I think it came from some ancient AMD,
but it's the default on QEMU TCG as well.
Yes, and because it affects live migration ability, we have two
constraints:
1) It needs to be exposed in the libvirt domain XML;
2) QEMU and libvirt can't choose a value that works for everybody
(because neither QEMU nor libvirt knows where the VM might be
migrated later).

Which is why the BZ below is important:


I don't think there's a way to set it in libvirt;
https://bugzilla.redhat.com/show_bug.cgi?id=1578278 is a bz asking for
that.

IMHO host-phys-bits is actually pretty safe; and makes most sense in a
lot of cases.
Yeah, it is mostly safe and makes sense, but messy if you try to
migrate to a host with a different size.


Dave


Thanks,
Laszlo


-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536
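
The value is in MB; a 48GB aperture, for instance, would be requested with:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=49152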

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>
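
(If you're not editing the XML file directly, one way to apply the above
is "virsh edit <domain>".)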

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N
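
(To make either setting persist across reboots, you can put the same
option into a modprobe configuration snippet, for example a line like
"options kvm_intel ept=N" in a file under /etc/modprobe.d/.)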

Hope this helps,
Laszlo
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
--
Eduardo


Re: [OVMF] resource assignment fails for passthrough PCI GPU

Laszlo Ersek
 

On 11/22/19 17:58, Dr. David Alan Gilbert wrote:
* Laszlo Ersek (lersek@redhat.com) wrote:
(+Dave, +Eduardo)

On 11/22/19 00:00, dann frazier wrote:
On Tue, Nov 19, 2019 at 06:06:15AM +0100, Laszlo Ersek wrote:
On 11/19/19 01:54, dann frazier wrote:
On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to pass through an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the Linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve the most aperture possible for other devices -- hence breaking
the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me.
Good to hear, thanks.

I also noticed that the above
commit message mentions the existence of a 24GB card as a reasoning
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ <= 37
physical address bits.
Right.

What would be the downside of bumping the
default aperture to, say, 48GB?
The placement of the aperture is not trivial (please see the code
comments in the linked commit). The base address of the aperture is
chosen so that the largest BAR that can fit in the aperture may be
naturally aligned. (BARs are whole powers of two.)

The largest BAR that can fit in a 48 GB aperture is 32 GB. Therefore
such an aperture would be aligned at 32 GB -- the lowest base address
(dependent on guest RAM size) would be 32 GB. Meaning that the aperture
would end at 32 + 48 = 80 GB. That still breaches the 36-bit phys
address width.

32 GB is the largest aperture size that can work with 36-bit phys
address width; that's the aperture that ends at 64 GB exactly.
Thanks, yeah - now that I read the code comments that is clear (as
clear as it can be w/ my low level of base knowledge). In the commit you
mention Gerd (CC'd) had suggested a heuristic-based approach for
sizing the aperture. When you say "PCPU address width" - is that a
function of the available physical bits?
"PCPU address width" is not a "function" of the available physical bits
-- it *is* the available physical bits. "PCPU" simply stands for
"physical CPU".

IOW, would that approach
allow OVMF to automatically grow the aperture to the max ^2 supported
by the host CPU?
Maybe.

The current logic in OVMF works from the guest-physical address space
size -- as deduced from multiple factors, such as the 64-bit MMIO
aperture size, and others -- towards the guest-CPU (aka VCPU) address
width. The VCPU address width is important for a bunch of other purposes
in the firmware, so OVMF has to calculate it no matter what.

Again, the current logic is to calculate the highest guest-physical
address, and then deduce the VCPU address width from that (and then
expose it to the rest of the firmware).

Your suggestion would require passing the PCPU (physical CPU) address
width from QEMU/KVM into the guest, and reversing the direction of the
calculation. The PCPU address width would determine the VCPU address
width directly, and then the 64-bit PCI MMIO aperture would be
calculated from that.

However, there are two caveats.

(1) The larger your guest-phys address space (as exposed through the
VCPU address width to the rest of the firmware), the more guest RAM you
need for page tables. Because, just before entering the DXE phase, the
firmware builds 1:1 mapping page tables for the entire guest-phys
address space. This is necessary e.g. so you can access any PCI MMIO BAR.

Now consider that you have a huge beefy virtualization host with say 46
phys address bits, and a wimpy guest with say 1.5GB of guest RAM. Do you
absolutely want tens of *terabytes* for your 64-bit PCI MMIO aperture?
Do you really want to pay for the necessary page tables with that meager
guest RAM?

(Such machines do exist BTW, for example:

http://mid.mail-archive.com/9BD73EA91F8E404F851CF3F519B14AA8036C67B5@DGGEMI521-MBX.china.huawei.com
)

In other words, you'd need some kind of knob anyway, because otherwise
your aperture could grow too *large*.


(2) Exposing the PCPU address width to the guest may have nasty
consequences at the QEMU/KVM level, regardless of guest firmware. For
example, that kind of "guest enlightenment" could interfere with migration.

If you boot a guest, let's say with 16GB of RAM, and tell it "hey friend,
have 40 bits of phys address width!", then you'll have a difficult time
migrating that guest to a host with a CPU that only has 36-bit wide
physical addresses -- even if the destination host has plenty of RAM
otherwise, such as a full 64GB.

There could be other QEMU/KVM / libvirt issues that I'm unaware of
(hence the CC to Dave and Eduardo).
Host physical address width gets messy. There are differences as well
between upstream QEMU behaviour and some downstreams.
I think the story is that:

a) QEMU default: 40 bits on any host
b) -cpu blah,host-phys-bits=true to follow the host.
c) RHEL has host-phys-bits=true by default

As you say, the only real problem with host-phys-bits is migration -
between, say, an E3 and an E5 Xeon with different widths. The magic 40
is generally wrong as well - I think it came from some ancient AMD,
but it's the default on QEMU TCG as well.

I don't think there's a way to set it in libvirt;
https://bugzilla.redhat.com/show_bug.cgi?id=1578278 is a bz asking for
that.

IMHO host-phys-bits is actually pretty safe; and makes most sense in a
lot of cases.
Thanks -- this is a useful piece of the puzzle to know. It seems that
the guest can learn about the guest-phys address width via CPUID.
(cpu_x86_cpuid() in "target/i386/cpu.c" consumes "cpu->phys_bits", which
seems to be set in x86_cpu_realizefn().)

Cheers!
Laszlo


Re: Examples opening and reading/writing a file with EDK2

Laszlo Ersek
 

On 11/22/19 02:54, alejandro.estay@gmail.com wrote:
Hi, I'm making a little UEFI app, just to check basic functionality
of the firmware. Inside this app I want to load, read and write a
file, binary or text. However, I can't find a "complete explanation" or
examples about the use of the procedures (EFI_FILE_PROTOCOL.Open(),
EFI_FILE_PROTOCOL.Read()) from the UEFI API (steps, what to check).
The only thing I found was some little UEFI Shell apps doing this
using the shell API. However, I would like to do it using the "bare
firmware" instead of loading the shell. For me, the most confusing
part is when the program has to check the handle database to find
the particular handle of the file that is being opened. Also I have
some doubts about how to check, without the shell, what volume or
partition would have the exact file I'm looking for (i.e. what if 2
volumes have similar, or even identical root directories).
First, you need to find the EFI_SIMPLE_FILE_SYSTEM_PROTOCOL instance in
the protocol database that is right for your purposes. You could locate
this protocol instance for example with the LocateDevicePath() boot
service. There could be other ways for you to locate the right handle,
and then open the Simple File System protocol interface on that handle.

This really depends on your use case. It's your application that has to
know on what device (such as, what PCI(e) controller, what SCSI disk,
what ATAPI disk, what partition, etc) to look for the interesting file.

For example, if you simply check every EFI_SIMPLE_FILE_SYSTEM_PROTOCOL
in the protocol database, and among those, you cannot distinguish two
from each other (because both look suitable), then you'll have to
investigate the device path protocol installed on each handle. You might
be able to make a decision based on the structure / semantics of the
device paths themselves. Alternatively, you might have to traverse the
device paths node by node, and open further protocol interfaces on the
corresponding handles, to ultimately pick the right
EFI_SIMPLE_FILE_SYSTEM_PROTOCOL.


Once you have the EFI_SIMPLE_FILE_SYSTEM_PROTOCOL interface open, you
need to call its OpenVolume() member function. See the UEFI spec for
details please. It will give you the EFI_FILE_PROTOCOL for the root
directory of that file system.

Once you have the EFI_FILE_PROTOCOL interface for the root directory, you
can call the Open() member function for opening files or directories
relative to the root directory. Either way, you'll get a new
EFI_FILE_PROTOCOL interface for the opened object (file or directory).
If you've opened a directory previously, then you can issue further
Open() calls for opening files or directories relative to *that*
(sub)directory.
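
Here's a rough sketch of that sequence (error handling and cleanup
abbreviated; blindly taking the first filesystem handle and the file name
"\dir1\dir2\hello.txt" are just placeholders for illustration -- see the
device path discussion below for picking the right volume):

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/SimpleFileSystem.h>

EFI_STATUS
ReadOneFile (
  OUT VOID      *Buffer,
  IN OUT UINTN  *BufferSize
  )
{
  EFI_STATUS                       Status;
  UINTN                            HandleCount;
  EFI_HANDLE                       *Handles;
  EFI_SIMPLE_FILE_SYSTEM_PROTOCOL  *Sfs;
  EFI_FILE_PROTOCOL                *Root;
  EFI_FILE_PROTOCOL                *File;

  //
  // Find the simple file system instances; blindly use the first one here.
  //
  Status = gBS->LocateHandleBuffer (ByProtocol,
                  &gEfiSimpleFileSystemProtocolGuid, NULL,
                  &HandleCount, &Handles);
  if (EFI_ERROR (Status)) {
    return Status;
  }
  Status = gBS->HandleProtocol (Handles[0],
                  &gEfiSimpleFileSystemProtocolGuid, (VOID **)&Sfs);
  gBS->FreePool (Handles);
  if (EFI_ERROR (Status)) {
    return Status;
  }

  //
  // Open the root directory, open a file relative to it, and read it.
  //
  Status = Sfs->OpenVolume (Sfs, &Root);
  if (EFI_ERROR (Status)) {
    return Status;
  }
  Status = Root->Open (Root, &File, L"\\dir1\\dir2\\hello.txt",
                  EFI_FILE_MODE_READ, 0);
  if (!EFI_ERROR (Status)) {
    Status = File->Read (File, BufferSize, Buffer);
    File->Close (File);
  }
  Root->Close (Root);
  return Status;
}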

If you start with an EFI_DEVICE_PATH_PROTOCOL instance that
identifies a particular file in a particular filesystem, then the device
path protocol will contain *at least one* File Path Media Device Path
node. It is important to note that there may be more than one such device
path node, and that the full pathname (within the filesystem), from the
root directory to the particular file, may be split over a number of
device path nodes.

For example, you could have just one File Path node containing
"\dir1\dir2\hello.txt". Or you could have three File Path nodes
containing "dir1", "dir2", "hello.txt", respectively. Or you could have
two File Path nodes containing "dir1\dir2\" and "hello.txt",
respectively.

In these cases, you'd need one, three, or two, EFI_FILE_PROTOCOL.Open()
calls, accordingly.

Alternatively, you'd need to concatenate the pathname fragments into a
whole pathname, making sure that there be precisely one backslash
separator between each pair of pathname components, and then issue a
single EFI_FILE_PROTOCOL.Open() call in the end.


You can find a helper function called EfiOpenFileByDevicePath() in
"MdePkg/Library/UefiLib/UefiLib.c".

A somewhat similar function is GetFileBufferByFilePath(), in
"MdePkg/Library/DxeServicesLib/DxeServicesLib.c".

Thanks,
Laszlo


Re: [OVMF] resource assignment fails for passthrough PCI GPU

Dr. David Alan Gilbert <dgilbert@...>
 

* Laszlo Ersek (lersek@redhat.com) wrote:
(+Dave, +Eduardo)

On 11/22/19 00:00, dann frazier wrote:
On Tue, Nov 19, 2019 at 06:06:15AM +0100, Laszlo Ersek wrote:
On 11/19/19 01:54, dann frazier wrote:
On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to pass through an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the Linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve the most aperture possible for other devices -- hence breaking
the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me.
Good to hear, thanks.

I also noticed that the above
commit message mentions the existence of a 24GB card as a reasoning
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ <= 37
physical address bits.
Right.

What would be the downside of bumping the
default aperture to, say, 48GB?
The placement of the aperture is not trivial (please see the code
comments in the linked commit). The base address of the aperture is
chosen so that the largest BAR that can fit in the aperture may be
naturally aligned. (BARs are whole powers of two.)

The largest BAR that can fit in a 48 GB aperture is 32 GB. Therefore
such an aperture would be aligned at 32 GB -- the lowest base address
(dependent on guest RAM size) would be 32 GB. Meaning that the aperture
would end at 32 + 48 = 80 GB. That still breaches the 36-bit phys
address width.

32 GB is the largest aperture size that can work with 36-bit phys
address width; that's the aperture that ends at 64 GB exactly.
Thanks, yeah - now that I read the code comments that is clear (as
clear as it can be w/ my low level of base knowledge). In the commit you
mention Gerd (CC'd) had suggested a heuristic-based approach for
sizing the aperture. When you say "PCPU address width" - is that a
function of the available physical bits?
"PCPU address width" is not a "function" of the available physical bits
-- it *is* the available physical bits. "PCPU" simply stands for
"physical CPU".

IOW, would that approach
allow OVMF to automatically grow the aperture to the max ^2 supported
by the host CPU?
Maybe.

The current logic in OVMF works from the guest-physical address space
size -- as deduced from multiple factors, such as the 64-bit MMIO
aperture size, and others -- towards the guest-CPU (aka VCPU) address
width. The VCPU address width is important for a bunch of other purposes
in the firmware, so OVMF has to calculate it no matter what.

Again, the current logic is to calculate the highest guest-physical
address, and then deduce the VCPU address width from that (and then
expose it to the rest of the firmware).

Your suggestion would require passing the PCPU (physical CPU) address
width from QEMU/KVM into the guest, and reversing the direction of the
calculation. The PCPU address width would determine the VCPU address
width directly, and then the 64-bit PCI MMIO aperture would be
calculated from that.

However, there are two caveats.

(1) The larger your guest-phys address space (as exposed through the
VCPU address width to the rest of the firmware), the more guest RAM you
need for page tables. Because, just before entering the DXE phase, the
firmware builds 1:1 mapping page tables for the entire guest-phys
address space. This is necessary e.g. so you can access any PCI MMIO BAR.

Now consider that you have a huge beefy virtualization host with say 46
phys address bits, and a wimpy guest with say 1.5GB of guest RAM. Do you
absolutely want tens of *terabytes* for your 64-bit PCI MMIO aperture?
Do you really want to pay for the necessary page tables with that meager
guest RAM?

(Such machines do exist BTW, for example:

http://mid.mail-archive.com/9BD73EA91F8E404F851CF3F519B14AA8036C67B5@DGGEMI521-MBX.china.huawei.com
)

In other words, you'd need some kind of knob anyway, because otherwise
your aperture could grow too *large*.


(2) Exposing the PCPU address width to the guest may have nasty
consequences at the QEMU/KVM level, regardless of guest firmware. For
example, that kind of "guest enlightenment" could interfere with migration.

If you boot a guest, let's say with 16GB of RAM, and tell it "hey friend,
have 40 bits of phys address width!", then you'll have a difficult time
migrating that guest to a host with a CPU that only has 36-bit wide
physical addresses -- even if the destination host has plenty of RAM
otherwise, such as a full 64GB.

There could be other QEMU/KVM / libvirt issues that I'm unaware of
(hence the CC to Dave and Eduardo).
Host physical address width gets messy. There are differences as well
between upstream QEMU behaviour and some downstreams.
I think the story is that:

a) QEMU default: 40 bits on any host
b) -cpu blah,host-phys-bits=true to follow the host.
c) RHEL has host-phys-bits=true by default

As you say, the only real problem with host-phys-bits is migration -
between, say, an E3 and an E5 Xeon with different widths. The magic 40
is generally wrong as well - I think it came from some ancient AMD,
but it's the default on QEMU TCG as well.

I don't think there's a way to set it in libvirt;
https://bugzilla.redhat.com/show_bug.cgi?id=1578278 is a bz asking for
that.

IMHO host-phys-bits is actually pretty safe; and makes most sense in a
lot of cases.

Dave


Thanks,
Laszlo


-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


Re: [OVMF] resource assignment fails for passthrough PCI GPU

Gerd Hoffmann <kraxel@...>
 

Hi,

Thanks, yeah - now that I read the code comments that is clear (as
clear as it can be w/ my low level of base knowledge). In the commit you
mention Gerd (CC'd) had suggested a heuristic-based approach for
sizing the aperture. When you say "PCPU address width" - is that a
function of the available physical bits? IOW, would that approach
allow OVMF to automatically grow the aperture to the max ^2 supported
by the host CPU?
Yes. You can see it as "address sizes" in /proc/cpuinfo
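
For example (on the host):

grep 'address sizes' /proc/cpuinfo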

The problem is that this isn't reliable in virtual machines: QEMU reports
40 bits physical even if the host supports fewer. Intel hardware often
has 36 or 39 bits (depending on age). So if edk2 went with the 40
bits (=> 1TB physical address space), and then reserved -- for example --
the topmost 25% of that (everything above 768 GB) for I/O, things would
simply not work on a host with 39 (or fewer) bits of physical address
space, because the 64-bit PCI BARs would not be addressable by the CPU.

So edk2 tries to be as conservative as possible by default ...

cheers,
Gerd


Re: [OVMF] resource assignment fails for passthrough PCI GPU

Laszlo Ersek
 

On 11/22/19 07:18, Gerd Hoffmann wrote:
Hi,

Thanks, yeah - now that I read the code comments that is clear (as
clear as it can be w/ my low level of base knowledge). In the commit you
mention Gerd (CC'd) had suggested a heuristic-based approach for
sizing the aperture. When you say "PCPU address width" - is that a
function of the available physical bits? IOW, would that approach
allow OVMF to automatically grow the aperture to the max ^2 supported
by the host CPU?
Yes. You can see it as "address sizes" in /proc/cpuinfo

The problem is that this isn't reliable in virtual machines: QEMU reports
40 bits physical even if the host supports fewer. Intel hardware often
has 36 or 39 bits (depending on age). So if edk2 went with the 40
bits (=> 1TB physical address space), and then reserved -- for example --
the topmost 25% of that (everything above 768 GB) for I/O, things would
simply not work on a host with 39 (or fewer) bits of physical address
space, because the 64-bit PCI BARs would not be addressable by the CPU.

So edk2 tries to be as conservative as possible by default ...
Heh, now that you explain this, I *vaguely* recall it from discussions
conducted maybe years ago. :)

It's just as well that I wrote, in my sibling response, "There could be
other QEMU/KVM / libvirt issues that I'm unaware of" ;)

Thanks!
Laszlo


Re: [OVMF] resource assignment fails for passthrough PCI GPU

Laszlo Ersek
 

(+Dave, +Eduardo)

On 11/22/19 00:00, dann frazier wrote:
On Tue, Nov 19, 2019 at 06:06:15AM +0100, Laszlo Ersek wrote:
On 11/19/19 01:54, dann frazier wrote:
On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to pass through an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the Linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve the most aperture possible for other devices -- hence breaking
the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me.
Good to hear, thanks.

I also noticed that the above
commit message mentions the existence of a 24GB card as a reasoning
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ <= 37
physical address bits.
Right.

What would be the downside of bumping the
default aperture to, say, 48GB?
The placement of the aperture is not trivial (please see the code
comments in the linked commit). The base address of the aperture is
chosen so that the largest BAR that can fit in the aperture may be
naturally aligned. (BARs are whole powers of two.)

The largest BAR that can fit in a 48 GB aperture is 32 GB. Therefore
such an aperture would be aligned at 32 GB -- the lowest base address
(dependent on guest RAM size) would be 32 GB. Meaning that the aperture
would end at 32 + 48 = 80 GB. That still breaches the 36-bit phys
address width.

32 GB is the largest aperture size that can work with 36-bit phys
address width; that's the aperture that ends at 64 GB exactly.
Thanks, yeah - now that I read the code comments that is clear (as
clear as it can be w/ my low level of base knowledge). In the commit you
mention Gerd (CC'd) had suggested a heuristic-based approach for
sizing the aperture. When you say "PCPU address width" - is that a
function of the available physical bits?
"PCPU address width" is not a "function" of the available physical bits
-- it *is* the available physical bits. "PCPU" simply stands for
"physical CPU".

IOW, would that approach
allow OVMF to automatically grow the aperture to the max ^2 supported
by the host CPU?
Maybe.

The current logic in OVMF works from the guest-physical address space
size -- as deduced from multiple factors, such as the 64-bit MMIO
aperture size, and others -- towards the guest-CPU (aka VCPU) address
width. The VCPU address width is important for a bunch of other purposes
in the firmware, so OVMF has to calculate it no matter what.

Again, the current logic is to calculate the highest guest-physical
address, and then deduce the VCPU address width from that (and then
expose it to the rest of the firmware).

Your suggestion would require passing the PCPU (physical CPU) address
width from QEMU/KVM into the guest, and reversing the direction of the
calculation. The PCPU address width would determine the VCPU address
width directly, and then the 64-bit PCI MMIO aperture would be
calculated from that.

However, there are two caveats.

(1) The larger your guest-phys address space (as exposed through the
VCPU address width to the rest of the firmware), the more guest RAM you
need for page tables. Because, just before entering the DXE phase, the
firmware builds 1:1 mapping page tables for the entire guest-phys
address space. This is necessary e.g. so you can access any PCI MMIO BAR.

Now consider that you have a huge beefy virtualization host with say 46
phys address bits, and a wimpy guest with say 1.5GB of guest RAM. Do you
absolutely want tens of *terabytes* for your 64-bit PCI MMIO aperture?
Do you really want to pay for the necessary page tables with that meager
guest RAM?

(Such machines do exist BTW, for example:

http://mid.mail-archive.com/9BD73EA91F8E404F851CF3F519B14AA8036C67B5@DGGEMI521-MBX.china.huawei.com
)

In other words, you'd need some kind of knob anyway, because otherwise
your aperture could grow too *large*.


(2) Exposing the PCPU address width to the guest may have nasty
consequences at the QEMU/KVM level, regardless of guest firmware. For
example, that kind of "guest enlightenment" could interfere with migration.

If you boot a guest, let's say with 16GB of RAM, and tell it "hey friend,
have 40 bits of phys address width!", then you'll have a difficult time
migrating that guest to a host with a CPU that only has 36-bit wide
physical addresses -- even if the destination host has plenty of RAM
otherwise, such as a full 64GB.

There could be other QEMU/KVM / libvirt issues that I'm unaware of
(hence the CC to Dave and Eduardo).

Thanks,
Laszlo


-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo


Examples opening and reading/writing a file with EDK2

alejandro.estay@...
 

Hi, I'm making a little UEFI app, just to check basic functionality of
the firmware. Inside this app I want to load, read and write a file,
binary or text. However, I can't find a "complete explanation" or
examples about the use of the procedures (EFI_FILE_PROTOCOL.Open(),
EFI_FILE_PROTOCOL.Read()) from the UEFI API (steps, what to check). The
only thing I found was some little UEFI Shell apps doing this using the
shell API. However, I would like to do it using the "bare firmware"
instead of loading the shell. For me, the most confusing part is when
the program has to check the handle database to find the particular
handle of the file that is being opened. Also I have some doubts about
how to check, without the shell, what volume or partition would have the
exact file I'm looking for (i.e. what if 2 volumes have similar, or even
identical root directories).

Thanks in advance


Re: [OVMF] resource assignment fails for passthrough PCI GPU

dann frazier
 

On Tue, Nov 19, 2019 at 06:06:15AM +0100, Laszlo Ersek wrote:
On 11/19/19 01:54, dann frazier wrote:
On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to pass through an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the Linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve the most aperture possible for other devices -- hence breaking
the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me.
Good to hear, thanks.

I also noticed that the above
commit message mentions the existence of a 24GB card as a reasoning
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ <= 37
physical address bits.
Right.

What would be the downside of bumping the
default aperture to, say, 48GB?
The placement of the aperture is not trivial (please see the code
comments in the linked commit). The base address of the aperture is
chosen so that the largest BAR that can fit in the aperture may be
naturally aligned. (BARs are whole powers of two.)

The largest BAR that can fit in a 48 GB aperture is 32 GB. Therefore
such an aperture would be aligned at 32 GB -- the lowest base address
(dependent on guest RAM size) would be 32 GB. Meaning that the aperture
would end at 32 + 48 = 80 GB. That still breaches the 36-bit phys
address width.

32 GB is the largest aperture size that can work with 36-bit phys
address width; that's the aperture that ends at 64 GB exactly.
Thanks, yeah - now that I read the code comments that is clear (as
clear as it can be w/ my low level of base knowledge). In the commit you
mention Gerd (CC'd) had suggested a heuristic-based approach for
sizing the aperture. When you say "PCPU address width" - is that a
function of the available physical bits? IOW, would that approach
allow OVMF to automatically grow the aperture to the max ^2 supported
by the host CPU?

-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo


Design discussion for SEV-ES

Tom Lendacky <thomas.lendacky@...>
 

I'd like to be added to the TianoCore Design Meeting to discuss support
for SEV-ES in OVMF.

Looking at the calendar, the meeting scheduled for December 12, 2019 would
be best.

Discussion length will depend on how much everyone understands the current
SEV support and the additional requirements of SEV-ES.

Thank you,
Tom Lendacky


Re: [OVMF] resource assignment fails for passthrough PCI GPU

Laszlo Ersek
 

On 11/19/19 01:54, dann frazier wrote:
On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to pass through an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the Linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve the most aperture possible for other devices -- hence breaking
the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me.
Good to hear, thanks.

I also noticed that the above
commit message mentions the existence of a 24GB card as a reasoning
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ <= 37
physical address bits.
Right.

What would be the downside of bumping the
default aperture to, say, 48GB?
The placement of the aperture is not trivial (please see the code
comments in the linked commit). The base address of the aperture is
chosen so that the largest BAR that can fit in the aperture may be
naturally aligned. (BARs are whole powers of two.)

The largest BAR that can fit in a 48 GB aperture is 32 GB. Therefore
such an aperture would be aligned at 32 GB -- the lowest base address
(dependent on guest RAM size) would be 32 GB. Meaning that the aperture
would end at 32 + 48 = 80 GB. That still breaches the 36-bit phys
address width.

32 GB is the largest aperture size that can work with 36-bit phys
address width; that's the aperture that ends at 64 GB exactly.

Thanks
Laszlo


-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo


Re: [OVMF] resource assignment fails for passthrough PCI GPU

dann frazier
 

On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to pass through an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the Linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve the most aperture possible for other devices -- hence breaking
the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me. I also noticed that the above
commit message mentions the existence of a 24GB card as a reasoning
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ <= 37
physical address bits. What would be the downside of bumping the
default aperture to, say, 48GB?

-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo


Re: [OVMF] resource assignment fails for passthrough PCI GPU

Laszlo Ersek
 

On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to pass through an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the Linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes such a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertizes.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve the most aperture possible, for other devices -- hence break
the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>
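
One way to apply this, assuming you manage the guest with libvirt's virsh tool (a sketch; "GUESTNAME" is a placeholder for your domain name): run "virsh edit GUESTNAME", add the xmlns:qemu attribute shown above to the <domain> root element, and paste the <qemu:commandline> block just before the closing </domain> tag. See also the notes below.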

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N
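
As a convenience, checks (2a)-(2c) can be strung together in a small host-side script. This is only a rough sketch, assuming bash, GNU grep/sed, and an Intel host; on AMD, swap kvm_intel/ept for kvm_amd/npt:

#!/bin/bash
# Sketch: combine checks (2a)-(2c) from above.
if ! egrep -qw 'npt|ept' /proc/cpuinfo; then
  echo "No nested paging: KVM will use shadow paging; phys address width is not a concern."
  exit 0
fi
# Extract the physical address width from the first "address sizes" line.
phys_bits=$(grep -m1 'bits physical' /proc/cpuinfo | sed 's/.*: *\([0-9]*\) bits physical.*/\1/')
echo "Nested paging present; physical address bits: $phys_bits"
if [ "$phys_bits" -ge 37 ]; then
  echo "OK: a guest-phys address space larger than 64GB should work."
else
  echo "Only $phys_bits phys bits: disable nested paging first, e.g.:"
  echo "  rmmod kvm_intel; modprobe kvm_intel ept=N"
fi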

Hope this helps,
Laszlo


[OVMF] resource assignment fails for passthrough PCI GPU

dann frazier
 

Hi,
I'm trying to passthrough an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563

-dann


Re: Establish network and run 'ping'

King Sumo
 

Reconnecting the Intel driver was without success. After unloading the Intel driver, I cannot reload it, as I don't know how to reload a driver for which I do not have an .efi file...
TIP: you can use the FvSimpleFileSystem.efi module to mount the Firmware Volume of your BIOS and then locate the EFI drivers / files.
(build FvSimpleFileSystem from edk2 sources)

E.g.
load FvSimpleFileSystem.efi
FS0:
dir
Directory of: FS0:\
00/00/0000 00:00 r 11,040 FbGop.efi
00/00/0000 00:00 r 12,446 7BB28B99-61BB-11D5-9A5D-0090273FC14D
00/00/0000 00:00 r 918,880 Shell.efi
00/00/0000 00:00 r 55,040 dpDynamicCommand.efi
00/00/0000 00:00 r 35,744 tftpDynamicCommand.efi
00/00/0000 00:00 r 24,704 OhciDxe.efi
00/00/0000 00:00 r 14,624 UsbMassStorageDxe.efi
00/00/0000 00:00 r 19,680 UsbKbDxe.efi
00/00/0000 00:00 r 21,728 UsbBusDxe.efi
00/00/0000 00:00 r 35,392 XhciDxe.efi
00/00/0000 00:00 r 22,656 EhciDxe.efi
00/00/0000 00:00 r 20,032 UhciDxe.efi
00/00/0000 00:00 r 15,328 SdDxe.efi
...
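
After locating the driver in the FV, it can be loaded and bound like any other .efi file -- a rough sketch only; the FSn: mapping name and the driver file name are placeholders, so check "map -r" and the "dir" listing on your own system:

map -r                    # the mounted firmware volume appears as an extra FSn: mapping
FS1:                      # placeholder mapping name -- use whatever "map -r" reports
load FS1:\SnpDxe.efi      # placeholder file name -- load the driver straight out of the FV
connect -r                # rebind drivers so the reloaded driver can attach to the NIC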


Re: Establish network and run 'ping'

Laszlo Ersek
 

On 11/05/19 11:47, Tomas Pilar (tpilar) wrote:
I am rather surprised that the default value for Network Stack is disabled on a platform. If the platform has a working implementation, I would strongly suggest you use that.

Otherwise you'll probably need to spend a lot more time poking around and familiarising yourself with the environment and the individual modules that comprise the network stack. Also note that platform vendors often modify the upstream network stack code to add new features or optimise the way it works on their hardware.
Agreed -- if there is a platform-specific HII knob in the Setup UI, then
it can control anything at all.

Your question is very generic and not something I can walk you through using email (maybe someone else here can), but I am happy to try and answer more specific questions when you have any (though admittedly I am not an expert on the network stack).

If you do want to learn more and play around, I would suggest starting with OVMF, rather than a platform, for a number of different reasons.
OVMF *is* a firmware platform, it's just not a physical one. :)

(But, of course, I agree with you -- OVMF is fully open source, the
"boards" underneath are fully open source (QEMU, KVM, Xen), and having
software, as opposed to hardware, beneath the software that you want to
debug, is helpful.)

Thanks
Laszlo


Re: Establish network and run 'ping'

Buhrow, Simon <simon.buhrow@...>
 

Hi,

well, playing around I found that when I manually enable the Network Stack in the BIOS menu, everything works fine (ping, ifconfig). Then all the drivers are loaded correctly from the beginning.
As the default value for the Network Stack is "disabled", I'd like to get the network stack running via the UEFI shell without the need to enter the BIOS menu.
When the Network Stack is disabled I get the results I mentioned before in my e-mails.

So it might be that I have to do more than just load the drivers? Is there any flag/variable I have to change so that I can bind the drivers correctly?

Reconnecting the Intel driver was without success. After unloading the Intel driver, I cannot reload it, as I don't know how to reload a driver for which I do not have an .efi file...

Any advice is welcome.

Thanks,

Simon



-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Freitag, 25. Oktober 2019 12:39
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

Once you have all the required drivers from the network stack loaded, I would reconnect the Intel driver (maybe even unload and reload it) to try and get all the network stack service bindings to happen in the correct order.

Not sure what else to try at this stage, you'll have to play around with it.
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 25 October 2019 08:18
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

Thanks for that advice!

Loading MnpDxe runs successfully, and I get a nice entry in the drivers table.
Looking at the details I get (more details below):
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)


How do I start MNP?


I tried to load ArpDxe as this is the next driver in the EFINetworkStackGettingStarted.pdf. But there I get only
Image 'FS0:\EFI\Netzwerk\ArpDxe.efi' loaded at D51A2000 - Success
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpConfigReceiveFilters: Snp->ReceiveFilters failed, Not started.


So that sounds as if it is already started? Hm...


Regards,

Simon



FS0:\EFI\Netzwerk\> dh -d -v 16E
16E: D3264F98
ComponentName(D51B3F00)
DriverBinding(D51B3ED0)
ImageDevicePath(D3267018)
PciRoot(0x0)/Pci(0x1D,0x0)/USB(0x0,0x0)/USB(0x1,0x0)/HD(1,MBR,0x257E68F1,0x3E,0xE8C8A8)/\EFI\Netzwerk\MnpDxe.efi
LoadedImage(D40C3640)
Revision......: 0x00001000
ParentHandle..: D40C2818
SystemTable...: D5B48F18
DeviceHandle..: D4091318
FilePath......: \EFI\Netzwerk\MnpDxe.efi
PdbFileName...: c:\edk2\Build\NetworkPkg\DEBUG_VS2015x86\X64\NetworkPkg\MnpDxe\MnpDxe\DEBUG\MnpDxe.pdb
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D51A9000
ImageSize.....: BAA0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D51A92A0
Driver Name [16E] : MNP Network Service Driver
Driver Image Name : \EFI\Netzwerk\MnpDxe.efi
Driver Version : 0000000A
Driver Type : Bus
Configuration : NO
Diagnostics : NO
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)

-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Dienstag, 15. Oktober 2019 11:36
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

I also don't see the MnpDxe in your list, you'll need that one too.

Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 15 October 2019 09:25
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi,

You are right, there is no SnpDxe loaded/installed. I double-checked it, and I really did list all network-relevant drivers in my first post.

I'm running the shell on a real system with an Intel Atom 64-bit processor and the mentioned Intel network device; it's not a specific server type.

Point 6 lets me assume that SnpDxe must be installed at a certain stage of driver initialization?!
Nevertheless, I compiled the NetworkPkg, which gives me a SnpDxe.efi file (I did not make any changes to the .inf files).
When I load that file inside the UEFI shell, I do get a successful entry for my network device [161]:
Managed by :
Drv[16D] : Simple Network Protocol Driver

That looks fine to me. But it does not change the behavior of the ping or ifconfig commands.

Furthermore, as I understand the EFINetworkStackGettingStarted.pdf (https://sourceforge.net/projects/network-io/files/Documents/), there are more drivers needed to get SnpDxe running properly?!
But with the dh -d -v <num> command I can't see any relations - only that SnpDxe is managing the network device, with no reference to any other protocol/driver.

Regards,

Simon



-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Montag, 14. Oktober 2019 14:02
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

My current reading of what is going on is as follows:

1. The Intel driver looks like a normal driver-model-compliant edk2 driver.
2. The Intel driver correctly binds to the PCI device [153] and creates a network device with a MAC address device path [161].
3. The network device has HIIConfigAccess installed, so I reckon the HII forms were correctly installed and published as one would expect.
4. The AIP has MediaState with EFI_SUCCESS, which leads me to believe that the underlying driver works okay and can talk to the device alright, and also that you have link up.
5. The driver seems to have installed NII_31, so you probably have an UNDI device with a (legacy?) PXE blob. This is very common.
6. The SimpleNetworkProtocol is not installed on the network device. This should have been done by a callback in SnpDxe as soon as the NII_31 was installed.
7. Given you don't have SNP installed on the device, the platform network did not bind to the device.

The problem seems to be in step 6.

What is the nature of the platform you are running in? Is it OVMF?
Is it a specific generation of a specific server type (say DELL 14th generation or HP 10th generation platform)?
When you list drivers in the UEFI shell, can you find SnpDxe or Simple Network Protocol Driver?

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 14 October 2019 12:31
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

thanks a lot for your effort!

Yes, I think C0 has to manage 161, and it seems that this is already done. Looking one step further, I see that C0 is managing PciRoot[153] and its child, the network device (see the outputs attached below).

Hm,... so that should be fine, right?
So the ping command fails because of another error?

It might be that I have to bind/connect the other network stack drivers to C0 (e.g. B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe; for detailed information see below)? I'm not able to do so with the 'connect' command.
The network stack drivers should show something under the 'Managing' entry if they are running fine, right?

Regards,

Simon




PS: I hope this does not lead to confusion, if so forget the following part:
Ifconfig and ping do not give me any console output when called with the corresponding parameters, e.g.:
FS0:\> ifconfig -s eth0 static 192.168.51.1 255.255.255.0 192.168.51.0
FS0:\> ifconfig -l eth0
FS0:\> ping 192.168.51.2
With the same hardware but another EFI shell version (one I downloaded from the Internet and did not build via EDK2) I get the same device-driver relationship, but ifconfig and ping still do not work. This time, however, it throws an error: IfConfig: Locate protocol error - 'Ip4Config Protocol'


FS0:\> dh -d -v B8
B8: D40B5718
ComponentName2(D5320528)
ComponentName(D5320510)
DriverBinding(D53203C0)
ImageDevicePath(D40B5518)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
LoadedImage(D40B7640)
Name..........: Ip4ConfigDxe
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D5320000
ImageSize.....: 4BE0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D5320BC0
Driver Name [B8] : IP4 CONFIG Network Service Driver
Driver Image Name : FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
Driver Version : 0000000A
Driver Type : <Unknown>
Configuration : NO
Diagnostics : NO
Managing : None

FS0:\> dh -d -v 153
153: D3E1CB98
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3E47A18)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
PCIIO(D3E49818)
Segment #.....: 00
Bus #.........: 02
Device #......: 00
Function #....: 00
ROM Size......: FC00
ROM Location..: D3E0C018
Vendor ID.....: 8086
Device ID.....: 1533
Class Code....: 00 00 02
Configuration Header :
86803315070010000300000210000000
000080F80000000001D00000000082F8
00000000000000000000000086800000
0000000040000000000000000A010000
Controller Name : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Controller Type : BUS
Configuration : NO
Diagnostics : NO
Managed by :
Drv[C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Parent Controllers :
Parent[8B] : PciRoot(0x0)
Child Controllers :
Child[161] : Intel(R) I210 Gigabit Network Connection


FS0:\> dh -d -v C0
C0: D40A8018
SupportedEfiSpecVersion(D52AE548)
0x00020028
DriverHealth(D52AE550)
DriverConfiguration(D52AD920)
DriverDiagnostics2(D52AE5E8)
ComponentName2(D52AE530)
DriverDiagnostics(D52AE5D8)
ComponentName(D52AE518)
DriverBinding(D52AE848)
ImageDevicePath(D40A2E18)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
LoadedImage(D40A6840)
Name..........: IntelGigabitLan
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D52AC000
ImageSize.....: 4F560
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D52C9A14
Driver Name [C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Driver Image Name : FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
Driver Version : 06060400
Driver Type : Bus
Configuration : YES
Diagnostics : YES
Managing :
Ctrl[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child[161] : Intel(R) I210 Gigabit Network Connection




-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Dienstag, 24. September 2019 17:16
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

Is the Parent device managed by something? Because I am surprised that a PCI device that is not managed/created by a driver has a MAC device path.

I assume that you want the driver C0 to drive your device? What does checking that driver handle return? Is that driver managing anything?

Tom



On 24/09/2019 14:11, Buhrow, Simon wrote:
Hi Tom,

thanks for that advice!
That's really nice. But overall it tells me the same thing: there is no driver for the network adapter...
And "connect" does not run successfully.

Regards,

Simon


FS0:\EFI\Netzwerk\> dh -d -v 161
161: D3D9D298
HIIConfigAccess(D33C9AF8)
AdapterInfo(D33C9B80)
Supported Information Types:
Guid[1] : D7C74207-A831-4A26-B1F5-D193065CE8B6 - gEfiAdapterInfoMediaStateGuid
MediaState: 0x00000000 - Success
Guid[2] : 25B6A2C7-410B-AD42-9145-11BFC750D202 - UnknownInfoType

34D59603-1428-4429-A414-E6B3B5FD7DC1(D33C9B10)
0E1AD94A-DCF4-11DB-9705-00E08161165F(D52AE570)
NetworkInterfaceIdentifier31(D33C6020)
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3D9D218)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
51DD8B21-AD8D-48E9-BC3F-24F46722C748(D33C9B50)
Controller Name : Intel(R) I210 Gigabit Network Connection
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
Controller Type : DEVICE
Configuration : NO
Diagnostics : NO
Managed by : <None>
Parent Controllers :
Parent[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child Controllers : <None>


-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Dienstag, 24. September 2019 11:44
An: discuss@edk2.groups.io; Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Betreff: Re: [edk2-discuss] Establish network and run 'ping'

Hi Simon,

A handy trick that might help you is interrogating handles using 'dh
-d -v <handle>' which gives you a lot of information about what's
connected and installed and driving what. Works on all handles
(drivers, devices,
etc.)

Cheers,
Tom

On 24/09/2019 10:29, Buhrow, Simon wrote:
Hi,

In order to pass files via the network (using TFTP) from/to the UEFI shell, I'm trying to establish a network connection.
To check it I want to use the ping command (ifconfig and ping do not give me any console output when called with the corresponding parameters).

Looking at the drivers and devices, everything looks fine to me (see parts of the output below).
Running 'connect', I don't get any entry for the network adapter.
So I think I have to connect the network adapter with the corresponding drivers. But 'connect 161' fails.

What's wrong? Do I misunderstand anything, or is there just a step missing?

The shell I use is the one I get with edk2-stable201908 when just compiling the ShellPkg.

Regards,

Simon


FS0:\> devices
...
161 D - - 1 0 0 Intel(R) I210 Gigabit Network Connection
...
FS0:\> drivers
...
93 0000000A ? - - - - TCP Network Service Driver TcpDxe
94 0000000A ? - - - - TCP Network Service Driver TcpDxe
95 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
96 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
97 0000000A ? - - - - IP6 Network Service Driver Ip6Dxe
99 0000000A ? - - - - UDP6 Network Service Driver Udp6Dxe
9A 0000000A ? - - - - DHCP6 Protocol Driver Dhcp6Dxe
9B 0000000A ? - - - - MTFTP6 Network Service Driver Mtftp6Dxe
B7 0000000A ? - - - - DHCP Protocol Driver Dhcp4Dxe
B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe
B9 0000000A ? - - - - IP4 Network Service Driver Ip4Dxe
BA 0000000A ? - - - - MTFTP4 Network Service Mtftp4Dxe
BB 0000000A ? - - - - UDP Network Service Driver Udp4Dxe
C0 06060400 B X X 1 1 Intel(R) PRO/1000 6.6.04 PCI-E IntelGigabitLan
...
FS0:\> connect
Connect - Handle [149] Result Success.
Connect - Handle [14A] Result Success.
FS0:\> connect 161
Connect No drivers could be connected.





Re: Establish network and run 'ping'

Tomas Pilar (tpilar)
 

I am rather surprised that the default value for Network Stack is disabled on a platform. If the platform has a working implementation, I would strongly suggest you use that.

Otherwise you'll probably need to spend a lot more time poking around and familiarising yourself with the environment and the individual modules that comprise the network stack. Also note that platform vendors often modify the upstream network stack code to add new features or optimise the way it works on their hardware.

Your question is very generic and not something I can walk you through using email (maybe someone else here can), but I am happy to try and answer more specific questions when you have any (though admittedly I am not an expert on the network stack).

If you do want to learn more and play around, I would suggest starting with OVMF, rather than a platform, for a number of different reasons.

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 05 November 2019 08:26
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi,

well, playing around I found that when I manually enable the Network Stack in the BIOS menu, everything works fine (ping, ifconfig). Then all the drivers are loaded correctly from the beginning.
As the default value for the Network Stack is "disabled", I'd like to get the network stack running via the UEFI shell without the need to enter the BIOS menu.
When the Network Stack is disabled I get the results I mentioned before in my e-mails.

So it might be that I have to do more than just load the drivers? Is there any flag/variable I have to change so that I can bind the drivers correctly?

Reconnecting the Intel driver was without success. After unloading the Intel driver, I cannot reload it, as I don't know how to reload a driver for which I do not have an .efi file...

Any advice is welcome.

Thanks,

Simon



-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Freitag, 25. Oktober 2019 12:39
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

Once you have all the required drivers from the network stack loaded, I would reconnect the Intel driver (maybe even unload and reload it) to try and get all the network stack service bindings to happen in the correct order.

Not sure what else to try at this stage, you'll have to play around with it.
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 25 October 2019 08:18
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

Thanks for that advice!

Loading MnpDxe runs successfully, and I get a nice entry in the drivers table.
Looking at the details I get (more details below):
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)


How do I start MNP?


I tried to load ArpDxe as this is the next driver in the EFINetworkStackGettingStarted.pdf. But there I get only
Image 'FS0:\EFI\Netzwerk\ArpDxe.efi' loaded at D51A2000 - Success
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpConfigReceiveFilters: Snp->ReceiveFilters failed, Not started.


So that sounds as if it is already started? Hm...


Regards,

Simon



FS0:\EFI\Netzwerk\> dh -d -v 16E
16E: D3264F98
ComponentName(D51B3F00)
DriverBinding(D51B3ED0)
ImageDevicePath(D3267018)
PciRoot(0x0)/Pci(0x1D,0x0)/USB(0x0,0x0)/USB(0x1,0x0)/HD(1,MBR,0x257E68F1,0x3E,0xE8C8A8)/\EFI\Netzwerk\MnpDxe.efi
LoadedImage(D40C3640)
Revision......: 0x00001000
ParentHandle..: D40C2818
SystemTable...: D5B48F18
DeviceHandle..: D4091318
FilePath......: \EFI\Netzwerk\MnpDxe.efi
PdbFileName...: c:\edk2\Build\NetworkPkg\DEBUG_VS2015x86\X64\NetworkPkg\MnpDxe\MnpDxe\DEBUG\MnpDxe.pdb
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D51A9000
ImageSize.....: BAA0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D51A92A0
Driver Name [16E] : MNP Network Service Driver
Driver Image Name : \EFI\Netzwerk\MnpDxe.efi
Driver Version : 0000000A
Driver Type : Bus
Configuration : NO
Diagnostics : NO
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)

-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Dienstag, 15. Oktober 2019 11:36
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

I also don't see the MnpDxe in your list, you'll need that one too.

Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 15 October 2019 09:25
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi,

You are right, there is no SnpDxe loaded/installed. I double-checked it, and I really did list all network-relevant drivers in my first post.

I'm running the shell on a real system with an Intel Atom 64-bit processor and the mentioned Intel network device; it's not a specific server type.

Point 6 lets me assume that SnpDxe must be installed at a certain stage of driver initialization?!
Nevertheless, I compiled the NetworkPkg, which gives me a SnpDxe.efi file (I did not make any changes to the .inf files).
When I load that file inside the UEFI shell, I do get a successful entry for my network device [161]:
Managed by :
Drv[16D] : Simple Network Protocol Driver

That looks fine to me. But it does not change the behavior of the ping or ifconfig commands.

Furthermore, as I understand the EFINetworkStackGettingStarted.pdf (https://sourceforge.net/projects/network-io/files/Documents/), there are more drivers needed to get SnpDxe running properly?!
But with the dh -d -v <num> command I can't see any relations - only that SnpDxe is managing the network device, with no reference to any other protocol/driver.

Regards,

Simon



-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Montag, 14. Oktober 2019 14:02
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

My current reading of what is going on is as follows:

1. The Intel driver looks like a normal driver-model-compliant edk2 driver.
2. The Intel driver correctly binds to the PCI device [153] and creates a network device with a MAC address device path [161].
3. The network device has HIIConfigAccess installed, so I reckon the HII forms were correctly installed and published as one would expect.
4. The AIP has MediaState with EFI_SUCCESS, which leads me to believe that the underlying driver works okay and can talk to the device alright, and also that you have link up.
5. The driver seems to have installed NII_31, so you probably have an UNDI device with a (legacy?) PXE blob. This is very common.
6. The SimpleNetworkProtocol is not installed on the network device. This should have been done by a callback in SnpDxe as soon as the NII_31 was installed.
7. Given you don't have SNP installed on the device, the platform network did not bind to the device.

The problem seems to be in step 6.

What is the nature of the platform you are running in? Is it OVMF?
Is it a specific generation of a specific server type (say DELL 14th generation or HP 10th generation platform)?
When you list drivers in the UEFI shell, can you find SnpDxe or Simple Network Protocol Driver?

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 14 October 2019 12:31
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

thanks a lot for your effort!

Yes, I think C0 has to manage 161, and it seems that this is already done. Looking one step further, I see that C0 is managing PciRoot[153] and its child, the network device (see the outputs attached below).

Hm,... so that should be fine, right?
So the ping command fails because of another error?

It might be that I have to bind/connect the other network stack drivers to C0 (e.g. B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe; for detailed information see below)? I'm not able to do so with the 'connect' command.
The network stack drivers should show something under the 'Managing' entry if they are running fine, right?

Regards,

Simon




PS: I hope this does not lead to confusion, if so forget the following part:
Ifconfig and ping do not give me any console output when called with the corresponding parameters, e.g.:
FS0:\> ifconfig -s eth0 static 192.168.51.1 255.255.255.0 192.168.51.0
FS0:\> ifconfig -l eth0
FS0:\> ping 192.168.51.2
With the same hardware but another EFI shell version (one I downloaded from the Internet and did not build via EDK2) I get the same device-driver relationship, but ifconfig and ping still do not work. This time, however, it throws an error: IfConfig: Locate protocol error - 'Ip4Config Protocol'


FS0:\> dh -d -v B8
B8: D40B5718
ComponentName2(D5320528)
ComponentName(D5320510)
DriverBinding(D53203C0)
ImageDevicePath(D40B5518)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
LoadedImage(D40B7640)
Name..........: Ip4ConfigDxe
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D5320000
ImageSize.....: 4BE0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D5320BC0
Driver Name [B8] : IP4 CONFIG Network Service Driver
Driver Image Name : FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
Driver Version : 0000000A
Driver Type : <Unknown>
Configuration : NO
Diagnostics : NO
Managing : None

FS0:\> dh -d -v 153
153: D3E1CB98
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3E47A18)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
PCIIO(D3E49818)
Segment #.....: 00
Bus #.........: 02
Device #......: 00
Function #....: 00
ROM Size......: FC00
ROM Location..: D3E0C018
Vendor ID.....: 8086
Device ID.....: 1533
Class Code....: 00 00 02
Configuration Header :
86803315070010000300000210000000
000080F80000000001D00000000082F8
00000000000000000000000086800000
0000000040000000000000000A010000
Controller Name : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Controller Type : BUS
Configuration : NO
Diagnostics : NO
Managed by :
Drv[C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Parent Controllers :
Parent[8B] : PciRoot(0x0)
Child Controllers :
Child[161] : Intel(R) I210 Gigabit Network Connection


FS0:\> dh -d -v C0
C0: D40A8018
SupportedEfiSpecVersion(D52AE548)
0x00020028
DriverHealth(D52AE550)
DriverConfiguration(D52AD920)
DriverDiagnostics2(D52AE5E8)
ComponentName2(D52AE530)
DriverDiagnostics(D52AE5D8)
ComponentName(D52AE518)
DriverBinding(D52AE848)
ImageDevicePath(D40A2E18)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
LoadedImage(D40A6840)
Name..........: IntelGigabitLan
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D52AC000
ImageSize.....: 4F560
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D52C9A14
Driver Name [C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Driver Image Name : FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
Driver Version : 06060400
Driver Type : Bus
Configuration : YES
Diagnostics : YES
Managing :
Ctrl[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child[161] : Intel(R) I210 Gigabit Network Connection




-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Dienstag, 24. September 2019 17:16
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

Is the Parent device managed by something? Because I am surprised that a PCI device that is not managed/created by a driver has a MAC device path.

I assume that you want the driver C0 to drive your device? What does checking that driver handle return? Is that driver managing anything?

Tom



On 24/09/2019 14:11, Buhrow, Simon wrote:
Hi Tom,

thanks for that advice!
That's really nice. But overall it tells me the same thing: there is no driver for the network adapter...
And "connect" does not run successfully.

Regards,

Simon


FS0:\EFI\Netzwerk\> dh -d -v 161
161: D3D9D298
HIIConfigAccess(D33C9AF8)
AdapterInfo(D33C9B80)
Supported Information Types:
Guid[1] : D7C74207-A831-4A26-B1F5-D193065CE8B6 - gEfiAdapterInfoMediaStateGuid
MediaState: 0x00000000 - Success
Guid[2] : 25B6A2C7-410B-AD42-9145-11BFC750D202 - UnknownInfoType

34D59603-1428-4429-A414-E6B3B5FD7DC1(D33C9B10)
0E1AD94A-DCF4-11DB-9705-00E08161165F(D52AE570)
NetworkInterfaceIdentifier31(D33C6020)
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3D9D218)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
51DD8B21-AD8D-48E9-BC3F-24F46722C748(D33C9B50)
Controller Name : Intel(R) I210 Gigabit Network Connection
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
Controller Type : DEVICE
Configuration : NO
Diagnostics : NO
Managed by : <None>
Parent Controllers :
Parent[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child Controllers : <None>


-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Dienstag, 24. September 2019 11:44
An: discuss@edk2.groups.io; Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Betreff: Re: [edk2-discuss] Establish network and run 'ping'

Hi Simon,

A handy trick that might help you is interrogating handles using 'dh
-d -v <handle>' which gives you a lot of information about what's
connected and installed and driving what. Works on all handles
(drivers, devices,
etc.)

Cheers,
Tom

On 24/09/2019 10:29, Buhrow, Simon wrote:
Hi,

In order to pass files via the network (using TFTP) from/to the UEFI shell, I'm trying to establish a network connection.
To check it I want to use the ping command (ifconfig and ping do not give me any console output when called with the corresponding parameters).

Looking at the drivers and devices, everything looks fine to me (see parts of the output below).
Running 'connect', I don't get any entry for the network adapter.
So I think I have to connect the network adapter with the corresponding drivers. But 'connect 161' fails.

What's wrong? Do I misunderstand anything, or is there just a step missing?

The shell I use is the one I get with edk2-stable201908 when just compiling the ShellPkg.

Regards,

Simon


FS0:\> devices
...
161 D - - 1 0 0 Intel(R) I210 Gigabit Network Connection
...
FS0:\> drivers
...
93 0000000A ? - - - - TCP Network Service Driver TcpDxe
94 0000000A ? - - - - TCP Network Service Driver TcpDxe
95 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
96 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
97 0000000A ? - - - - IP6 Network Service Driver Ip6Dxe
99 0000000A ? - - - - UDP6 Network Service Driver Udp6Dxe
9A 0000000A ? - - - - DHCP6 Protocol Driver Dhcp6Dxe
9B 0000000A ? - - - - MTFTP6 Network Service Driver Mtftp6Dxe
B7 0000000A ? - - - - DHCP Protocol Driver Dhcp4Dxe
B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe
B9 0000000A ? - - - - IP4 Network Service Driver Ip4Dxe
BA 0000000A ? - - - - MTFTP4 Network Service Mtftp4Dxe
BB 0000000A ? - - - - UDP Network Service Driver Udp4Dxe
C0 06060400 B X X 1 1 Intel(R) PRO/1000 6.6.04 PCI-E IntelGigabitLan
...
FS0:\> connect
Connect - Handle [149] Result Success.
Connect - Handle [14A] Result Success.
FS0:\> connect 161
Connect No drivers could be connected.





Re: Establish network and run 'ping'

Buhrow, Simon <simon.buhrow@...>
 

Hi Tom,

Thanks for that advice!

Loading MnpDxe runs successfully, and I get a nice entry in the drivers table.
Looking at the details I get (more details below):
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)


How do I start MNP?


I tried to load ArpDxe as this is the next driver in the EFINetworkStackGettingStarted.pdf. But there I get only
Image 'FS0:\EFI\Netzwerk\ArpDxe.efi' loaded at D51A2000 - Success
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpConfigReceiveFilters: Snp->ReceiveFilters failed, Not started.


So that sounds as if it is already started? Hm...


Regards,

Simon



FS0:\EFI\Netzwerk\> dh -d -v 16E
16E: D3264F98
ComponentName(D51B3F00)
DriverBinding(D51B3ED0)
ImageDevicePath(D3267018)
PciRoot(0x0)/Pci(0x1D,0x0)/USB(0x0,0x0)/USB(0x1,0x0)/HD(1,MBR,0x257E68F1,0x3E,0xE8C8A8)/\EFI\Netzwerk\MnpDxe.efi
LoadedImage(D40C3640)
Revision......: 0x00001000
ParentHandle..: D40C2818
SystemTable...: D5B48F18
DeviceHandle..: D4091318
FilePath......: \EFI\Netzwerk\MnpDxe.efi
PdbFileName...: c:\edk2\Build\NetworkPkg\DEBUG_VS2015x86\X64\NetworkPkg\MnpDxe\MnpDxe\DEBUG\MnpDxe.pdb
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D51A9000
ImageSize.....: BAA0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D51A92A0
Driver Name [16E] : MNP Network Service Driver
Driver Image Name : \EFI\Netzwerk\MnpDxe.efi
Driver Version : 0000000A
Driver Type : Bus
Configuration : NO
Diagnostics : NO
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)

-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Dienstag, 15. Oktober 2019 11:36
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

I also don't see the MnpDxe in your list, you'll need that one too.

Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 15 October 2019 09:25
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi,

You are right, there is no SnpDxe loaded/installed. I double-checked it, and I really did list all network-relevant drivers in my first post.

I'm running the shell on a real system with an Intel Atom 64-bit processor and the mentioned Intel network device; it's not a specific server type.

Point 6 lets me assume that SnpDxe must be installed at a certain stage of driver initialization?!
Nevertheless, I compiled the NetworkPkg, which gives me a SnpDxe.efi file (I did not make any changes to the .inf files).
When I load that file inside the UEFI shell, I do get a successful entry for my network device [161]:
Managed by :
Drv[16D] : Simple Network Protocol Driver

That looks fine to me. But it does not change the behavior of the ping or ifconfig commands.

Furthermore, as I understand the EFINetworkStackGettingStarted.pdf (https://sourceforge.net/projects/network-io/files/Documents/), there are more drivers needed to get SnpDxe running properly?!
But with the dh -d -v <num> command I can't see any relations - only that SnpDxe is managing the network device, with no reference to any other protocol/driver.

Regards,

Simon



-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Montag, 14. Oktober 2019 14:02
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

My current reading of what is going on is as follows:

1. The Intel driver looks like a normal driver-model-compliant edk2 driver.
2. The Intel driver correctly binds to the PCI device [153] and creates a network device with a MAC address device path [161].
3. The network device has HIIConfigAccess installed, so I reckon the HII forms were correctly installed and published as one would expect.
4. The AIP has MediaState with EFI_SUCCESS, which leads me to believe that the underlying driver works okay and can talk to the device alright, and also that you have link up.
5. The driver seems to have installed NII_31, so you probably have an UNDI device with a (legacy?) PXE blob. This is very common.
6. The SimpleNetworkProtocol is not installed on the network device. This should have been done by a callback in SnpDxe as soon as the NII_31 was installed.
7. Given you don't have SNP installed on the device, the platform network did not bind to the device.

The problem seems to be in step 6.

What is the nature of the platform you are running in? Is it OVMF?
Is it a specific generation of a specific server type (say DELL 14th generation or HP 10th generation platform)?
When you list drivers in the UEFI shell, can you find SnpDxe or Simple Network Protocol Driver?

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 14 October 2019 12:31
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

thanks a lot for your effort!

Yes, I think C0 has to manage 161, and it seems that this is already done. Looking one step further, I see that C0 is managing PciRoot[153] and its child, the network device (see the outputs attached below).

Hm,... so that should be fine, right?
So the ping command fails because of another error?

It might be that I have to bind/connect the other network stack drivers to C0 (e.g. B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe; for detailed information see below)? I'm not able to do so with the 'connect' command.
The network stack drivers should show something under the 'Managing' entry if they are running fine, right?

Regards,

Simon




PS: I hope this does not lead to confusion, if so forget the following part:
Ifconfig and ping do not give me any console output when called with the corresponding parameters, e.g.:
FS0:\> ifconfig -s eth0 static 192.168.51.1 255.255.255.0 192.168.51.0
FS0:\> ifconfig -l eth0
FS0:\> ping 192.168.51.2
With the same hardware but another EFI shell version (one I downloaded from the Internet and did not build via EDK2) I get the same device-driver relationship, but ifconfig and ping still do not work. This time, however, it throws an error: IfConfig: Locate protocol error - 'Ip4Config Protocol'


FS0:\> dh -d -v B8
B8: D40B5718
ComponentName2(D5320528)
ComponentName(D5320510)
DriverBinding(D53203C0)
ImageDevicePath(D40B5518)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
LoadedImage(D40B7640)
Name..........: Ip4ConfigDxe
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D5320000
ImageSize.....: 4BE0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D5320BC0
Driver Name [B8] : IP4 CONFIG Network Service Driver
Driver Image Name : FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
Driver Version : 0000000A
Driver Type : <Unknown>
Configuration : NO
Diagnostics : NO
Managing : None

FS0:\> dh -d -v 153
153: D3E1CB98
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3E47A18)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
PCIIO(D3E49818)
Segment #.....: 00
Bus #.........: 02
Device #......: 00
Function #....: 00
ROM Size......: FC00
ROM Location..: D3E0C018
Vendor ID.....: 8086
Device ID.....: 1533
Class Code....: 00 00 02
Configuration Header :
86803315070010000300000210000000
000080F80000000001D00000000082F8
00000000000000000000000086800000
0000000040000000000000000A010000
Controller Name : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Controller Type : BUS
Configuration : NO
Diagnostics : NO
Managed by :
Drv[C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Parent Controllers :
Parent[8B] : PciRoot(0x0)
Child Controllers :
Child[161] : Intel(R) I210 Gigabit Network Connection


FS0:\> dh -d -v C0
C0: D40A8018
SupportedEfiSpecVersion(D52AE548)
0x00020028
DriverHealth(D52AE550)
DriverConfiguration(D52AD920)
DriverDiagnostics2(D52AE5E8)
ComponentName2(D52AE530)
DriverDiagnostics(D52AE5D8)
ComponentName(D52AE518)
DriverBinding(D52AE848)
ImageDevicePath(D40A2E18)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
LoadedImage(D40A6840)
Name..........: IntelGigabitLan
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D52AC000
ImageSize.....: 4F560
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D52C9A14
Driver Name [C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Driver Image Name : FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
Driver Version : 06060400
Driver Type : Bus
Configuration : YES
Diagnostics : YES
Managing :
Ctrl[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child[161] : Intel(R) I210 Gigabit Network Connection




-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Dienstag, 24. September 2019 17:16
An: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Betreff: Re: AW: [edk2-discuss] Establish network and run 'ping'

Is the Parent device managed by something? Because I am surprised that a PCI device that is not managed/created by a driver has a MAC device path.

I assume that you want the driver C0 to drive your device? What does checking that driver handle return? Is that driver managing anything?

Tom



On 24/09/2019 14:11, Buhrow, Simon wrote:
Hi Tom,

thanks for that advice!
That's really nice. But overall it tells me the same thing: there is no driver for the network adapter...
And "connect" does not run successfully.

Regards,

Simon


FS0:\EFI\Netzwerk\> dh -d -v 161
161: D3D9D298
HIIConfigAccess(D33C9AF8)
AdapterInfo(D33C9B80)
Supported Information Types:
Guid[1] : D7C74207-A831-4A26-B1F5-D193065CE8B6 - gEfiAdapterInfoMediaStateGuid
MediaState: 0x00000000 - Success
Guid[2] : 25B6A2C7-410B-AD42-9145-11BFC750D202 - UnknownInfoType

34D59603-1428-4429-A414-E6B3B5FD7DC1(D33C9B10)
0E1AD94A-DCF4-11DB-9705-00E08161165F(D52AE570)
NetworkInterfaceIdentifier31(D33C6020)
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3D9D218)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
51DD8B21-AD8D-48E9-BC3F-24F46722C748(D33C9B50)
Controller Name : Intel(R) I210 Gigabit Network Connection
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
Controller Type : DEVICE
Configuration : NO
Diagnostics : NO
Managed by : <None>
Parent Controllers :
Parent[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child Controllers : <None>


-----Ursprüngliche Nachricht-----
Von: Tomas Pilar <tpilar@solarflare.com>
Gesendet: Dienstag, 24. September 2019 11:44
An: discuss@edk2.groups.io; Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Betreff: Re: [edk2-discuss] Establish network and run 'ping'

Hi Simon,

A handy trick that might help you is interrogating handles using 'dh
-d -v <handle>' which gives you a lot of information about what's
connected and installed and driving what. Works on all handles
(drivers, devices,
etc.)

Cheers,
Tom

On 24/09/2019 10:29, Buhrow, Simon wrote:
Hi,

In order to pass files via the network (using TFTP) from/to the UEFI shell, I'm trying to establish a network connection.
To check it I want to use the ping command (ifconfig and ping do not give me any console output when called with the corresponding parameters).

Looking at the drivers and devices, everything looks fine to me (see parts of the output below).
Running 'connect', I don't get any entry for the network adapter.
So I think I have to connect the network adapter with the corresponding drivers. But 'connect 161' fails.

What's wrong? Do I misunderstand anything, or is there just a step missing?

The shell I use is the one I get with edk2-stable201908 when just compiling the ShellPkg.

Regards,

Simon


FS0:\> devices
...
161 D - - 1 0 0 Intel(R) I210 Gigabit Network Connection
...
FS0:\> drivers
...
93 0000000A ? - - - - TCP Network Service Driver TcpDxe
94 0000000A ? - - - - TCP Network Service Driver TcpDxe
95 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
96 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
97 0000000A ? - - - - IP6 Network Service Driver Ip6Dxe
99 0000000A ? - - - - UDP6 Network Service Driver Udp6Dxe
9A 0000000A ? - - - - DHCP6 Protocol Driver Dhcp6Dxe
9B 0000000A ? - - - - MTFTP6 Network Service Driver Mtftp6Dxe
B7 0000000A ? - - - - DHCP Protocol Driver Dhcp4Dxe
B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe
B9 0000000A ? - - - - IP4 Network Service Driver Ip4Dxe
BA 0000000A ? - - - - MTFTP4 Network Service Mtftp4Dxe
BB 0000000A ? - - - - UDP Network Service Driver Udp4Dxe
C0 06060400 B X X 1 1 Intel(R) PRO/1000 6.6.04 PCI-E IntelGigabitLan
...
FS0:\> connect
Connect - Handle [149] Result Success.
Connect - Handle [14A] Result Success.
FS0:\> connect 161
Connect No drivers could be connected.





Re: Establish network and run 'ping'

Tomas Pilar (tpilar)
 

Once you have all the required drivers from the network stack loaded, I would reconnect the Intel driver (maybe even unload and reload it) to try and get all the network stack service bindings to happen in the correct order.

I'm not sure what else to try at this stage; you'll have to play around with it.
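
Untested, but roughly something like this (153 is the PCI handle of the NIC from your earlier 'dh' output):

FS0:\> disconnect 153
FS0:\> connect -r

or, in one step:

FS0:\> reconnect -r

That detaches the Intel driver from the NIC and then lets the driver model rebind everything, which should also give the newly loaded network stack drivers a chance to attach in the right order.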
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 25 October 2019 08:18
To: Tomas Pilar; discuss@edk2.groups.io
Subject: Re: Re: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

Thanks for that advice!

Loading MnpDxe succeeds, and I get a nice new entry in the drivers table.
Looking at the details, I see (more details below):
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)


How do I start MNP?


I tried to load ArpDxe, as that is the next driver in EFINetworkStackGettingStarted.pdf, but all I get is:
Image 'FS0:\EFI\Netzwerk\ArpDxe.efi' loaded at D51A2000 - Success
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpConfigReceiveFilters: Snp->ReceiveFilters failed, Not started.


So that sounds as if it is already started? Hm...


Regards,

Simon



FS0:\EFI\Netzwerk\> dh -d -v 16E
16E: D3264F98
ComponentName(D51B3F00)
DriverBinding(D51B3ED0)
ImageDevicePath(D3267018)
PciRoot(0x0)/Pci(0x1D,0x0)/USB(0x0,0x0)/USB(0x1,0x0)/HD(1,MBR,0x257E68F1,0x3E,0xE8C8A8)/\EFI\Netzwerk\MnpDxe.efi
LoadedImage(D40C3640)
Revision......: 0x00001000
ParentHandle..: D40C2818
SystemTable...: D5B48F18
DeviceHandle..: D4091318
FilePath......: \EFI\Netzwerk\MnpDxe.efi
PdbFileName...: c:\edk2\Build\NetworkPkg\DEBUG_VS2015x86\X64\NetworkPkg\MnpDxe\MnpDxe\DEBUG\MnpDxe.pdb
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D51A9000
ImageSize.....: BAA0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D51A92A0
Driver Name [16E] : MNP Network Service Driver
Driver Image Name : \EFI\Netzwerk\MnpDxe.efi
Driver Version : 0000000A
Driver Type : Bus
Configuration : NO
Diagnostics : NO
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)

-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 15 October 2019 11:36
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: Re: [edk2-discuss] Establish network and run 'ping'

I also don't see the MnpDxe in your list, you'll need that one too.
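
Roughly (untested; the path is only an example, use wherever you keep the NetworkPkg build outputs):

FS0:\> load FS0:\EFI\Netzwerk\MnpDxe.efi
FS0:\> connect -r
FS0:\> dh -d -v 161

After that, 'dh -d -v 161' should hopefully show an MNP entry hanging off the NIC handle.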

Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 15 October 2019 09:25
To: Tomas Pilar; discuss@edk2.groups.io
Subject: Re: Re: [edk2-discuss] Establish network and run 'ping'

Hi,

You are right, there is no SnpDxe loaded/installed. I double-checked, and I really did list all the network-relevant drivers in my first post.

I'm running the shell on a real system with a 64-bit Intel Atom processor and the mentioned Intel network device; it's not a specific server type.

Point 6 makes me assume that SnpDxe must be installed at a certain stage of driver initialization?!
Nevertheless, I compiled the NetworkPkg, which gives me a SnpDxe.efi file (I did not change any .inf files).
When I load that file inside the UEFI shell, I do get a successful entry for my network device [161]:
Managed by :
Drv[16D] : Simple Network Protocol Driver

That looks fine to me, but it does not change the behavior of the ping or ifconfig commands.

Furthermore, as I understand EFINetworkStackGettingStarted.pdf (https://sourceforge.net/projects/network-io/files/Documents/), more drivers are needed for SnpDxe to work properly?!
But with the 'dh -d -v <num>' command I can't see any relations; it only shows that SnpDxe is managing the network device, with no reference to any other protocol/driver.

Regards,

Simon



-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Monday, 14 October 2019 14:02
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: Re: [edk2-discuss] Establish network and run 'ping'

My current reading of what is going on is as follows:

1. The Intel driver looks like a normal driver-model-compliant edk2 driver.
2. The Intel driver correctly binds to the PCI device [153] and creates a network device with a MAC address device path [161].
3. The network device has HIIConfigAccess installed, so I reckon the HII forms were correctly installed and published, as one would expect.
4. The AIP has MediaState with EFI_SUCCESS, which leads me to believe that the underlying driver works okay and can talk to the device alright, and also that you have link up.
5. The driver seems to have installed NII_31, so you probably have an UNDI device with a (legacy?) PXE blob. This is very common.
6. The SimpleNetworkProtocol is not installed on the network device. This should have been done by a callback in SnpDxe as soon as the NII_31 was installed.
7. Given you don't have SNP installed on the device, the platform network did not bind to the device.

The problem seems to be in step 6.

What is the nature of the platform you are running in? Is it OVMF?
Is it a specific generation of a specific server type (say DELL 14th generation or HP 10th generation platform)?
When you list drivers in the UEFI shell, can you find SnpDxe or Simple Network Protocol Driver?
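
If it is not listed, one thing to try (untested; the file name and location depend on your NetworkPkg build) would be:

FS0:\> load SnpDxe.efi
FS0:\> connect 161
FS0:\> dh -d -v 161

If SnpDxe binds, handle 161 should afterwards show Simple Network Protocol installed and a 'Managed by' entry for the Simple Network Protocol Driver.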

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 14 October 2019 12:31
To: Tomas Pilar; discuss@edk2.groups.io
Subject: Re: Re: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

thanks a lot for your effort!

Yes, I think C0 has to manage 161, and it seems that this is already done. Looking one step further, I see that C0 is managing the PCI device [153] and its child, the network device (see the outputs attached below).

Hm... so that should be fine, right?
So the ping command fails because of some other error?

Could it be that I have to bind/connect the other network stack drivers to C0 (e.g. B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe; detailed information below)? I'm not able to do so with the 'connect' command.
Shouldn't the network stack drivers show something under their 'Managing' entry if they are working correctly?

Regards,

Simon




PS: I hope this does not cause confusion; if so, ignore the following part:
ifconfig and ping give me no console output when called with the corresponding parameters, e.g.:
FS0:\> ifconfig -s eth0 static 192.168.51.1 255.255.255.0 192.168.51.0
FS0:\> ifconfig -l eth0
FS0:\> ping 192.168.51.2
With the same hardware but another EFI shell version (one I downloaded from the Internet rather than building via EDK2), I get the same device-driver relationship, and ifconfig and ping still do not work. But this time an error is thrown: IfConfig: Locate protocol error - 'Ip4Config Protocol'
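
For comparison, once the IP4 stack is actually bound to the NIC, a session along these lines would be expected to produce output instead of returning silently (the addresses are only examples and the gateway here is made up):

FS0:\> ifconfig -s eth0 static 192.168.51.1 255.255.255.0 192.168.51.254
FS0:\> ifconfig -l eth0
FS0:\> ping 192.168.51.2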


FS0:\> dh -d -v B8
B8: D40B5718
ComponentName2(D5320528)
ComponentName(D5320510)
DriverBinding(D53203C0)
ImageDevicePath(D40B5518)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
LoadedImage(D40B7640)
Name..........: Ip4ConfigDxe
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D5320000
ImageSize.....: 4BE0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D5320BC0
Driver Name [B8] : IP4 CONFIG Network Service Driver
Driver Image Name : FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
Driver Version : 0000000A
Driver Type : <Unknown>
Configuration : NO
Diagnostics : NO
Managing : None

FS0:\> dh -d -v 153
153: D3E1CB98
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3E47A18)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
PCIIO(D3E49818)
Segment #.....: 00
Bus #.........: 02
Device #......: 00
Function #....: 00
ROM Size......: FC00
ROM Location..: D3E0C018
Vendor ID.....: 8086
Device ID.....: 1533
Class Code....: 00 00 02
Configuration Header :
86803315070010000300000210000000
000080F80000000001D00000000082F8
00000000000000000000000086800000
0000000040000000000000000A010000
Controller Name : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Controller Type : BUS
Configuration : NO
Diagnostics : NO
Managed by :
Drv[C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Parent Controllers :
Parent[8B] : PciRoot(0x0)
Child Controllers :
Child[161] : Intel(R) I210 Gigabit Network Connection


FS0:\> dh -d -v C0
C0: D40A8018
SupportedEfiSpecVersion(D52AE548)
0x00020028
DriverHealth(D52AE550)
DriverConfiguration(D52AD920)
DriverDiagnostics2(D52AE5E8)
ComponentName2(D52AE530)
DriverDiagnostics(D52AE5D8)
ComponentName(D52AE518)
DriverBinding(D52AE848)
ImageDevicePath(D40A2E18)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
LoadedImage(D40A6840)
Name..........: IntelGigabitLan
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D52AC000
ImageSize.....: 4F560
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D52C9A14
Driver Name [C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Driver Image Name : FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
Driver Version : 06060400
Driver Type : Bus
Configuration : YES
Diagnostics : YES
Managing :
Ctrl[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child[161] : Intel(R) I210 Gigabit Network Connection




-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 17:16
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: Re: [edk2-discuss] Establish network and run 'ping'

Is the parent device managed by something? I am surprised that a PCI device that is not managed/created by a driver has a MAC device path.

I assume that you want the driver C0 to drive your device? What does checking that driver handle return? Is that driver managing anything?

Tom



On 24/09/2019 14:11, Buhrow, Simon wrote:
Hi Tom,

Thanks for that advice!
That's really nice, but overall it tells me the same thing: there is no driver for the network adapter...
And "connect" does not run successfully.

Regards,

Simon


FS0:\EFI\Netzwerk\> dh -d -v 161
161: D3D9D298
HIIConfigAccess(D33C9AF8)
AdapterInfo(D33C9B80)
Supported Information Types:
Guid[1] : D7C74207-A831-4A26-B1F5-D193065CE8B6 - gEfiAdapterInfoMediaStateGuid
MediaState: 0x00000000 - Success
Guid[2] : 25B6A2C7-410B-AD42-9145-11BFC750D202 - UnknownInfoType

34D59603-1428-4429-A414-E6B3B5FD7DC1(D33C9B10)
0E1AD94A-DCF4-11DB-9705-00E08161165F(D52AE570)
NetworkInterfaceIdentifier31(D33C6020)
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3D9D218)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
51DD8B21-AD8D-48E9-BC3F-24F46722C748(D33C9B50)
Controller Name : Intel(R) I210 Gigabit Network Connection
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
Controller Type : DEVICE
Configuration : NO
Diagnostics : NO
Managed by : <None>
Parent Controllers :
Parent[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child Controllers : <None>


-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 11:44
To: discuss@edk2.groups.io; Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Subject: Re: [edk2-discuss] Establish network and run 'ping'

Hi Simon,

A handy trick that might help you is interrogating handles using 'dh
-d -v <handle>' which gives you a lot of information about what's
connected and installed and driving what. Works on all handles
(drivers, devices,
etc.)

Cheers,
Tom

On 24/09/2019 10:29, Buhrow, Simon wrote:
Hi,

In order to transfer files over the network (using TFTP) from/to the UEFI Shell, I'm trying to establish a network connection.
To verify it, I want to use the ping command (ifconfig and ping give me no console output when called with the corresponding parameters).

Looking at the drivers and devices, everything looks fine to me (see excerpts below).
Running 'connect', I don't get any entry for the network adapter.
So I think I have to connect the network adapter to the corresponding drivers, but 'connect 161' fails.

What's wrong? Am I misunderstanding something, or is a step missing?

The shell I am using is the one I get from edk2-stable201908 by just compiling the ShellPkg.

Regards,

Simon


FS0:\> devices
...
161 D - - 1 0 0 Intel(R) I210 Gigabit Network Connection
...
FS0:\> drivers
...
93 0000000A ? - - - - TCP Network Service Driver TcpDxe
94 0000000A ? - - - - TCP Network Service Driver TcpDxe
95 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
96 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
97 0000000A ? - - - - IP6 Network Service Driver Ip6Dxe
99 0000000A ? - - - - UDP6 Network Service Driver Udp6Dxe
9A 0000000A ? - - - - DHCP6 Protocol Driver Dhcp6Dxe
9B 0000000A ? - - - - MTFTP6 Network Service Driver Mtftp6Dxe
B7 0000000A ? - - - - DHCP Protocol Driver Dhcp4Dxe
B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe
B9 0000000A ? - - - - IP4 Network Service Driver Ip4Dxe
BA 0000000A ? - - - - MTFTP4 Network Service Mtftp4Dxe
BB 0000000A ? - - - - UDP Network Service Driver Udp4Dxe
C0 06060400 B X X 1 1 Intel(R) PRO/1000 6.6.04 PCI-E IntelGigabitLan
...
FS0:\> connect
Connect - Handle [149] Result Success.
Connect - Handle [14A] Result Success.
FS0:\> connect 161
Connect No drivers could be connected.



