
Re: [OVMF] resource assignment fails for passthrough PCI GPU

Laszlo Ersek
 

(+Dave, +Eduardo)

On 11/22/19 00:00, dann frazier wrote:
On Tue, Nov 19, 2019 at 06:06:15AM +0100, Laszlo Ersek wrote:
On 11/19/19 01:54, dann frazier wrote:
On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to passthrough an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve as much aperture as possible for other devices -- hence
breaking the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me.
Good to hear, thanks.

I also noticed that the above
commit message mentions the existence of a 24GB card as a reason
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ fewer than
37 physical address bits.
Right.

What would be the downside of bumping the
default aperture to, say, 48GB?
The placement of the aperture is not trivial (please see the code
comments in the linked commit). The base address of the aperture is
chosen so that the largest BAR that can fit in the aperture may be
naturally aligned. (BARs are whole powers of two.)

The largest BAR that can fit in a 48 GB aperture is 32 GB. Therefore
such an aperture would be aligned at 32 GB -- the lowest base address
(dependent on guest RAM size) would be 32 GB. Meaning that the aperture
would end at 32 + 48 = 80 GB. That still breaches the 36-bit phys
address width.

32 GB is the largest aperture size that can work with 36-bit phys
address width; that's the aperture that ends at 64 GB exactly.
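(Worked through with the same numbers: a 32 GB aperture admits a largest
BAR of 32 GB, so the base is aligned at 32 GB; 32 GB base + 32 GB size =
64 GB = 2^36, which is exactly the top of a 36-bit physical address
space.)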
Thanks, yeah - now that I read the code comments that is clear (as
clear as it can be w/ my low level of base knowledge). In the commit you
mention, Gerd (CC'd) had suggested a heuristic-based approach for
sizing the aperture. When you say "PCPU address width" - is that a
function of the available physical bits?
"PCPU address width" is not a "function" of the available physical bits
-- it *is* the available physical bits. "PCPU" simply stands for
"physical CPU".

IOW, would that approach
allow OVMF to automatically grow the aperture to the max power of 2
supported by the host CPU?
Maybe.

The current logic in OVMF works from the guest-physical address space
size -- as deduced from multiple factors, such as the 64-bit MMIO
aperture size, and others -- towards the guest-CPU (aka VCPU) address
width. The VCPU address width is important for a bunch of other purposes
in the firmware, so OVMF has to calculate it no matter what.

Again, the current logic is to calculate the highest guest-physical
address, and then deduce the VCPU address width from that (and then
expose it to the rest of the firmware).

Your suggestion would require passing the PCPU (physical CPU) address
width from QEMU/KVM into the guest, and reversing the direction of the
calculation. The PCPU address width would determine the VCPU address
width directly, and then the 64-bit PCI MMIO aperture would be
calculated from that.
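(To illustrate that direction with a minimal, made-up sketch in
edk2-style C -- this is not the actual PlatformPei code, just the idea of
deducing an address width from the highest guest-physical address:

  #include <Base.h>
  #include <Library/BaseLib.h>

  //
  // Smallest address width that covers HighestGpaPlusOne (exclusive end),
  // e.g. the end of the 64-bit PCI MMIO aperture. 52 bits is the x86
  // architectural limit on physical address width.
  //
  STATIC UINT8
  AddressWidthFor (
    IN UINT64  HighestGpaPlusOne
    )
  {
    UINT8  Width;

    Width = 32;
    while (Width < 52 && LShiftU64 (1, Width) < HighestGpaPlusOne) {
      Width++;
    }
    return Width;
  }

Reversing the direction, as suggested, would instead take the PCPU
address width reported by the host and size the 64-bit MMIO aperture
from it.)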

However, there are two caveats.

(1) The larger your guest-phys address space (as exposed through the
VCPU address width to the rest of the firmware), the more guest RAM you
need for page tables. Because, just before entering the DXE phase, the
firmware builds 1:1 mapping page tables for the entire guest-phys
address space. This is necessary e.g. so you can access any PCI MMIO BAR.

Now consider that you have a huge beefy virtualization host with say 46
phys address bits, and a wimpy guest with say 1.5GB of guest RAM. Do you
absolutely want tens of *terabytes* for your 64-bit PCI MMIO aperture?
Do you really want to pay for the necessary page tables with that meager
guest RAM?
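(Rough arithmetic, assuming the identity map were built with 2 MiB pages
-- the real firmware may use larger pages where the CPU supports them,
which shrinks this considerably: a 46-bit space is 64 TiB; 64 TiB / 2 MiB
is about 32 million page-table entries, and at 8 bytes per entry that is
already 256 MiB of page tables, before counting the upper-level tables.)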

(Such machines do exist BTW, for example:

http://mid.mail-archive.com/9BD73EA91F8E404F851CF3F519B14AA8036C67B5@DGGEMI521-MBX.china.huawei.com
)

In other words, you'd need some kind of knob anyway, because otherwise
your aperture could grow too *large*.


(2) Exposing the PCPU address width to the guest may have nasty
consequences at the QEMU/KVM level, regardless of guest firmware. For
example, that kind of "guest enlightenment" could interfere with migration.

If you boot a guest let's say with 16GB of RAM, and tell it "hey friend,
have 40 bits of phys address width!", then you'll have a difficult time
migrating that guest to a host with a CPU that only has 36-bits wide
physical addresses -- even if the destination host has plenty of RAM
otherwise, such as a full 64GB.

There could be other QEMU/KVM / libvirt issues that I'm unaware of
(hence the CC to Dave and Eduardo).

Thanks,
Laszlo


-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo


Examples opening and reading/writing a file with EDK2

alejandro.estay@...
 

Hi, I'm making a little UEFI app, just to check basic functionality of the firmware. Inside this app I want to load, read and write a file, binary or text. However, I can't find a "complete explanation" or examples of the use of the procedures (EFI_FILE_PROTOCOL.Open(), EFI_FILE_PROTOCOL.Read()) from the UEFI API (the steps, what to check). The only thing I found was some little UEFI Shell apps doing this using the shell API. However, I would like to do it using the "bare firmware" instead of loading the shell. For me, the most confusing part is when the program has to check the handle database to find the particular handle for the file that is being opened. Also, I have some doubts about how to check, without the shell, which volume or partition would have the exact file I'm looking for (i.e. what if 2 volumes have similar, or even identical, root directories).

Thanks in advance
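(A minimal sketch of the flow being asked about -- not a complete
program, error handling is omitted, and the file name "\myfile.txt" is
made up; it assumes the standard MdePkg headers and gBS from
UefiBootServicesTableLib:

  #include <Uefi.h>
  #include <Library/UefiBootServicesTableLib.h>
  #include <Protocol/SimpleFileSystem.h>

  EFI_STATUS                       Status;
  UINTN                            HandleCount;
  EFI_HANDLE                       *Handles;
  EFI_SIMPLE_FILE_SYSTEM_PROTOCOL  *Fs;
  EFI_FILE_PROTOCOL                *Root;
  EFI_FILE_PROTOCOL                *File;
  UINTN                            Size;
  CHAR8                            Buffer[512];

  //
  // Every volume (what the shell maps as FS0:, FS1:, ...) is a handle
  // carrying EFI_SIMPLE_FILE_SYSTEM_PROTOCOL. To tell apart volumes with
  // identical root directories, inspect each handle's device path (e.g.
  // the partition node / signature), or look for a marker file you control.
  //
  Status = gBS->LocateHandleBuffer (ByProtocol,
                  &gEfiSimpleFileSystemProtocolGuid, NULL,
                  &HandleCount, &Handles);

  Status = gBS->HandleProtocol (Handles[0],
                  &gEfiSimpleFileSystemProtocolGuid, (VOID **)&Fs);

  Status = Fs->OpenVolume (Fs, &Root);
  Status = Root->Open (Root, &File, L"\\myfile.txt",
                  EFI_FILE_MODE_READ, 0);

  Size   = sizeof Buffer;
  Status = File->Read (File, &Size, Buffer);   // Size returns bytes read

  File->Close (File);
  Root->Close (Root);
  gBS->FreePool (Handles);

To write, open with EFI_FILE_MODE_READ | EFI_FILE_MODE_WRITE (adding
EFI_FILE_MODE_CREATE for a new file) and call File->Write () the same
way.)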


Re: [OVMF] resource assignment fails for passthrough PCI GPU

dann frazier
 

On Tue, Nov 19, 2019 at 06:06:15AM +0100, Laszlo Ersek wrote:
On 11/19/19 01:54, dann frazier wrote:
On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to passthrough an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve as much aperture as possible for other devices -- hence
breaking the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me.
Good to hear, thanks.

I also noticed that the above
commit message mentions the existence of a 24GB card as a reason
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ fewer than
37 physical address bits.
Right.

What would be the downside of bumping the
default aperture to, say, 48GB?
The placement of the aperture is not trivial (please see the code
comments in the linked commit). The base address of the aperture is
chosen so that the largest BAR that can fit in the aperture may be
naturally aligned. (BARs are whole powers of two.)

The largest BAR that can fit in a 48 GB aperture is 32 GB. Therefore
such an aperture would be aligned at 32 GB -- the lowest base address
(dependent on guest RAM size) would be 32 GB. Meaning that the aperture
would end at 32 + 48 = 80 GB. That still breaches the 36-bit phys
address width.

32 GB is the largest aperture size that can work with 36-bit phys
address width; that's the aperture that ends at 64 GB exactly.
Thanks, yeah - now that I read the code comments that is clear (as
clear as it can be w/ my low level of base knowledge). In the commit you
mention, Gerd (CC'd) had suggested a heuristic-based approach for
sizing the aperture. When you say "PCPU address width" - is that a
function of the available physical bits? IOW, would that approach
allow OVMF to automatically grow the aperture to the max power of 2
supported by the host CPU?

-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo


Design discussion for SEV-ES

Tom Lendacky <thomas.lendacky@...>
 

I'd like to be added to the TianoCore Design Meeting to discuss support
for SEV-ES in OVMF.

Looking at the calendar, the meeting scheduled for December 12, 2019 would
be best.

Discussion length will depend on how much everyone understands the current
SEV support and the additional requirements of SEV-ES.

Thank you,
Tom Lendacky


Re: [OVMF] resource assignment fails for passthrough PCI GPU

Laszlo Ersek
 

On 11/19/19 01:54, dann frazier wrote:
On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to passthrough an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve as much aperture as possible for other devices -- hence
breaking the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me.
Good to hear, thanks.

I also noticed that the above
commit message mentions the existence of a 24GB card as a reason
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ fewer than
37 physical address bits.
Right.

What would be the downside of bumping the
default aperture to, say, 48GB?
The placement of the aperture is not trivial (please see the code
comments in the linked commit). The base address of the aperture is
chosen so that the largest BAR that can fit in the aperture may be
naturally aligned. (BARs are whole powers of two.)

The largest BAR that can fit in a 48 GB aperture is 32 GB. Therefore
such an aperture would be aligned at 32 GB -- the lowest base address
(dependent on guest RAM size) would be 32 GB. Meaning that the aperture
would end at 32 + 48 = 80 GB. That still breaches the 36-bit phys
address width.

32 GB is the largest aperture size that can work with 36-bit phys
address width; that's the aperture that ends at 64 GB exactly.

Thanks
Laszlo


-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo


Re: [OVMF] resource assignment fails for passthrough PCI GPU

dann frazier
 

On Fri, Nov 15, 2019 at 11:51:18PM +0100, Laszlo Ersek wrote:
On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to passthrough an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve as much aperture as possible for other devices -- hence
breaking the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38
Hi Laszlo,

Thanks for taking the time to describe this in detail! The -fw_cfg
option did avoid the problem for me. I also noticed that the above
commit message mentions the existence of a 24GB card as a reason
behind choosing the 32GB default aperture. From what you say below, I
understand that bumping this above 64GB could break hosts w/ fewer than
37 physical address bits. What would be the downside of bumping the
default aperture to, say, 48GB?

-dann

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo


Re: [OVMF] resource assignment fails for passthrough PCI GPU

Laszlo Ersek
 

On 11/15/19 19:56, dann frazier wrote:
Hi,
I'm trying to passthrough an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563
By default, OVMF exposes a 64-bit MMIO aperture for PCI MMIO BAR
allocation that is 32GB in size. The generic PciBusDxe driver collects,
orders, and assigns / allocates the MMIO BARs, but it can work only out
of the aperture that platform code advertises.

Your GPU's region 1 is itself 32GB in size. Given that there are further
PCI devices in the system with further 64-bit MMIO BARs, the default
aperture cannot accommodate everything. In such an event, PciBusDxe
avoids assigning the largest BARs (to my knowledge), in order to
conserve as much aperture as possible for other devices -- hence
breaking the fewest possible PCI devices.

You can control the aperture size from the QEMU command line. You can
also do it from the libvirt domain XML, technically speaking. The knob
is experimental, so no stability or compatibility guarantees are made.
(That's also the reason why it's a bit of a hack in the libvirt domain XML.)

The QEMU cmdline option is described in the following edk2 commit message:

https://github.com/tianocore/edk2/commit/7e5b1b670c38

For example, to set a 64GB aperture, pass:

-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536

The libvirt domain XML syntax is a bit tricky (and it might "taint" your
domain, as it goes outside of the QEMU features that libvirt directly
maps to):

<domain
type='kvm'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<qemu:commandline>
<qemu:arg value='-fw_cfg'/>
<qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
</qemu:commandline>
</domain>

Some notes:

(1) The "xmlns:qemu" namespace definition attribute in the <domain> root
element is important. You have to add it manually when you add
<qemu:commandline> and <qemu:arg> too. Without the namespace
definition, the latter elements will make no sense, and libvirt will
delete them immediately.

(2) The above change will grow your guest's physical address space to
more than 64GB. As a consequence, on your *host*, *if* your physical CPU
supports nested paging (called "ept" on Intel and "npt" on AMD), *then*
the CPU will have to support at least 37 physical address bits too, for
the guest to work. Otherwise, the guest will break, hard.

Here's how to verify (on the host):

(2a) run "egrep -w 'npt|ept' /proc/cpuinfo" --> if this does not produce
output, then stop reading here; things should work. Your CPU does not
support nested paging, so KVM will use shadow paging, which is slower,
but at least you don't have to care about the CPU's phys address width.

(2b) otherwise (i.e. when you do have nested paging), run "grep 'bits
physical' /proc/cpuinfo" --> if the physical address width is >=37,
you're good.

(2c) if you have nested paging but exactly 36 phys address bits, then
you'll have to forcibly disable nested paging (assuming you want to run
a guest with larger than 64GB guest-phys address space, that is). On
Intel, issue:

rmmod kvm_intel
modprobe kvm_intel ept=N

On AMD, go with:

rmmod kvm_amd
modprobe kvm_amd npt=N

Hope this helps,
Laszlo


[OVMF] resource assignment fails for passthrough PCI GPU

dann frazier
 

Hi,
I'm trying to passthrough an Nvidia GPU to a q35 KVM guest, but UEFI
is failing to allocate resources for it. I have no issues if I boot w/
a legacy BIOS, and it works fine if I tell the linux guest to do the
allocation itself - but I'm looking for a way to make this work w/
OVMF by default.

I posted a debug log here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563/+attachment/5305740/+files/q35-uefidbg.log

Linux guest lspci output is also available for both seabios/OVMF boots here:
https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/1849563

-dann


Re: Establish network and run 'ping'

King Sumo
 

Reconnecting the Intel driver was without success. Unloading the Intel driver, I cannot get it reloaded, as I don't know how to reload a driver for which I do not have an .efi file...
TIP: you can use the FvSimpleFileSystem.efi module to mount the Firmware Volume of your BIOS and then locate the EFI drivers / files.
(build FvSimpleFileSystem from edk2 sources)

E.g.
load FvSimpleFileSystem.efi
FS0:
dir
Directory of: FS0:\
00/00/0000 00:00 r 11,040 FbGop.efi
00/00/0000 00:00 r 12,446 7BB28B99-61BB-11D5-9A5D-0090273FC14D
00/00/0000 00:00 r 918,880 Shell.efi
00/00/0000 00:00 r 55,040 dpDynamicCommand.efi
00/00/0000 00:00 r 35,744 tftpDynamicCommand.efi
00/00/0000 00:00 r 24,704 OhciDxe.efi
00/00/0000 00:00 r 14,624 UsbMassStorageDxe.efi
00/00/0000 00:00 r 19,680 UsbKbDxe.efi
00/00/0000 00:00 r 21,728 UsbBusDxe.efi
00/00/0000 00:00 r 35,392 XhciDxe.efi
00/00/0000 00:00 r 22,656 EhciDxe.efi
00/00/0000 00:00 r 20,032 UhciDxe.efi
00/00/0000 00:00 r 15,328 SdDxe.efi
...


Re: Establish network and run 'ping'

Laszlo Ersek
 

On 11/05/19 11:47, Tomas Pilar (tpilar) wrote:
I am rather surprised that the default value for Network Stack is disabled on a platform. If the platform has a working implementation, I would strongly suggest you use that.

Otherwise you'll probably need to spend a lot more time poking around and familiarising yourself with the environment and the individual modules that comprise the network stack. Also note that platform vendors often modify the upstream network stack code to add new features or optimise the way it works on their hardware.
Agreed -- if there is a platform-specific HII knob in the Setup UI, then
it can control anything at all.

Your question is very generic and not something I can walk you through using email (maybe someone else here can), but I am happy to try and answer more specific questions when you have any (though admittedly I am not an expert on the network stack).

If you do want to learn more and play around, I would suggest starting with OVMF, rather than a platform, for a number of different reasons.
OVMF *is* a firmware platform, it's just not a physical one. :)

(But, of course, I agree with you -- OVMF is fully open source, the
"boards" underneath are fully open source (QEMU, KVM, Xen), and having
software, as opposed to hardware, beneath the software that you want to
debug, is helpful.)

Thanks
Laszlo


Re: Establish network and run 'ping'

Buhrow, Simon <simon.buhrow@...>
 

Hi,

well, playing around I found that when I manually enable the Network Stack in the BIOS menu, everything works fine (ping, ifconfig). All the drivers are then loaded correctly from the beginning.
As the default value for the Network Stack is "disabled", I'd like to get the network stack running via the UEFI shell without the need to enter the BIOS menu.
When the Network Stack is disabled I get the results I mentioned before in my e-mails.

So it might be that I have to do more than just load the drivers? Is there any flag/variable I have to change so I can bind the drivers correctly?

Reconnecting the Intel driver was without success. Unloading the Intel driver, I cannot get it reloaded, as I don't know how to reload a driver for which I do not have an .efi file...

Any advice is welcome.

Thanks,

Simon



-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Friday, 25 October 2019 12:39
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

Once you have all the required drivers from the network stack loaded, I would reconnect the Intel driver (maybe even unload and reload it) to try and get all the network stack service bindings happen in the correct order.

Not sure what else to try at this stage, you'll have to play around with it.
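(Illustratively, in the shell -- the module names are the NetworkPkg
build outputs that come up elsewhere in this thread, and the load path is
wherever you copied them to:

fs0:\> load SnpDxe.efi MnpDxe.efi ArpDxe.efi
(load the remaining IP4/UDP4/DHCP4/MTFTP4 drivers the same way)
fs0:\> reconnect -r

'reconnect -r' disconnects and reconnects every driver, so the newly
loaded service-binding drivers get a chance to attach in the right
order.)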
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 25 October 2019 08:18
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

Thanks for that advice!

Loading MnpDxe runs successfully, and I get a nice entry in the drivers table.
Looking at the details I get (more details below):
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)


How do I start MNP?


I tried to load ArpDxe, as this is the next driver in the EFINetworkStackGettingStarted.pdf. But there I only get:
Image 'FS0:\EFI\Netzwerk\ArpDxe.efi' loaded at D51A2000 - Success
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpConfigReceiveFilters: Snp->ReceiveFilters failed, Not started.


So that sounds as if it is already started? Hm...


Regards,

Simon



FS0:\EFI\Netzwerk\> dh -d -v 16E
16E: D3264F98
ComponentName(D51B3F00)
DriverBinding(D51B3ED0)
ImageDevicePath(D3267018)
PciRoot(0x0)/Pci(0x1D,0x0)/USB(0x0,0x0)/USB(0x1,0x0)/HD(1,MBR,0x257E68F1,0x3E,0xE8C8A8)/\EFI\Netzwerk\MnpDxe.efi
LoadedImage(D40C3640)
Revision......: 0x00001000
ParentHandle..: D40C2818
SystemTable...: D5B48F18
DeviceHandle..: D4091318
FilePath......: \EFI\Netzwerk\MnpDxe.efi
PdbFileName...: c:\edk2\Build\NetworkPkg\DEBUG_VS2015x86\X64\NetworkPkg\MnpDxe\MnpDxe\DEBUG\MnpDxe.pdb
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D51A9000
ImageSize.....: BAA0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D51A92A0
Driver Name [16E] : MNP Network Service Driver
Driver Image Name : \EFI\Netzwerk\MnpDxe.efi
Driver Version : 0000000A
Driver Type : Bus
Configuration : NO
Diagnostics : NO
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)

-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 15 October 2019 11:36
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

I also don't see the MnpDxe in your list, you'll need that one too.

Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 15 October 2019 09:25
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi,

You are right, there is no SnpDxe loaded/installed. I double-checked it, but I really did list all network-relevant drivers in my first post.

I'm running the shell on a real system with an Intel Atom 64-bit processor and the mentioned Intel network device; it's not a specific server type.

Point 6 leads me to assume that SnpDxe must be installed at a certain stage of driver initialization?!
Nevertheless I compiled the NetworkPkg, which gives me an SnpDxe.efi file (I did not make any changes to the .inf files).
When I load that file inside the UEFI shell I do get a successful entry for my network device [161]:
Managed by :
Drv[16D] : Simple Network Protocol Driver

That looks fine to me. But it does not change the behavior of the ping or ifconfig commands.

Furthermore, as I understand the EFINetworkStackGettingStarted.pdf (https://sourceforge.net/projects/network-io/files/Documents/), there are more drivers needed to get SnpDxe running fine?!
But with the dh -d -v <num> command I can't see any relations, only that SnpDxe is managing the network device, with no reference to any other protocol/driver.

Regards,

Simon



-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Monday, 14 October 2019 14:02
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

My current reading of what is going on is as follows:

1. The Intel driver looks like a normal driver-model-compliant edk2 driver.
2. The Intel driver correctly binds to the PCI device [153] and creates a network device with MAC address device path [161].
3. The network device has HIIConfigAccess installed so I reckon the HII forms were correctly installed and published as one would expect.
4. The AIP has MediaState with EFI_SUCCESS which leads me to believe that the underlying driver works okay and can talk to device alright, also that you have link up.
5. The driver seems to have installed NII_31, so you probably have an UNDI device with a legacy? PXE blob. This is very common.
6. The SimpleNetworkProtocol is not installed on the network device. This should have been done by a callback in SnpDxe as soon as the NII_31 was installed.
7. Given you don't have SNP installed on the device, the platform network did not bind to the device.

The problem seems to be in step 6.

What is the nature of the platform you are running in? Is it OVMF?
Is it a specific generation of a specific server type (say DELL 14th generation or HP 10th generation platform)?
When you list drivers in the UEFI shell, can you find SnpDxe or Simple Network Protocol Driver?

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 14 October 2019 12:31
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

thanks a lot for your effort!

Yes, I think C0 has to manage 161. And it seems that this is already done. Looking one step further, I get that C0 is managing the PciRoot[153] and its child, the network device (see the outputs attached below).

Hm,... so that should be fine, right?
So the ping command does fail because of another error?

It might be that I have to bind/connect the other network stack drivers to C0? (e.g. B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe; for detailed information see below). I'm not able to do so with the 'connect' command.
The network stack drivers should show some information in the 'Managing' entry to run fine, shouldn't they?

Regards,

Simon




PS: I hope this does not lead to confusion; if so, forget the following part:
ifconfig and ping do not give me any console output when called with the corresponding parameters, e.g.
FS0:\> ifconfig -s eth0 static 192.168.51.1 255.255.255.0 192.168.51.0
FS0:\> ifconfig -l eth0
FS0:\> ping 192.168.51.2
With the same hardware but another EFI shell version (one I downloaded from the Internet and did not build via EDK2) I get the same device-driver relationship, but ifconfig and ping still do not work. This time, however, it throws an error: IfConfig: Locate protocol error - 'Ip4Config Protocol'


FS0:\> dh -d -v B8
B8: D40B5718
ComponentName2(D5320528)
ComponentName(D5320510)
DriverBinding(D53203C0)
ImageDevicePath(D40B5518)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
LoadedImage(D40B7640)
Name..........: Ip4ConfigDxe
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D5320000
ImageSize.....: 4BE0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D5320BC0
Driver Name [B8] : IP4 CONFIG Network Service Driver
Driver Image Name : FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
Driver Version : 0000000A
Driver Type : <Unknown>
Configuration : NO
Diagnostics : NO
Managing : None

FS0:\> dh -d -v 153
153: D3E1CB98
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3E47A18)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
PCIIO(D3E49818)
Segment #.....: 00
Bus #.........: 02
Device #......: 00
Function #....: 00
ROM Size......: FC00
ROM Location..: D3E0C018
Vendor ID.....: 8086
Device ID.....: 1533
Class Code....: 00 00 02
Configuration Header :
86803315070010000300000210000000
000080F80000000001D00000000082F8
00000000000000000000000086800000
0000000040000000000000000A010000
Controller Name : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Controller Type : BUS
Configuration : NO
Diagnostics : NO
Managed by :
Drv[C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Parent Controllers :
Parent[8B] : PciRoot(0x0)
Child Controllers :
Child[161] : Intel(R) I210 Gigabit Network Connection


FS0:\> dh -d -v C0
C0: D40A8018
SupportedEfiSpecVersion(D52AE548)
0x00020028
DriverHealth(D52AE550)
DriverConfiguration(D52AD920)
DriverDiagnostics2(D52AE5E8)
ComponentName2(D52AE530)
DriverDiagnostics(D52AE5D8)
ComponentName(D52AE518)
DriverBinding(D52AE848)
ImageDevicePath(D40A2E18)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
LoadedImage(D40A6840)
Name..........: IntelGigabitLan
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D52AC000
ImageSize.....: 4F560
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D52C9A14
Driver Name [C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Driver Image Name : FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
Driver Version : 06060400
Driver Type : Bus
Configuration : YES
Diagnostics : YES
Managing :
Ctrl[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child[161] : Intel(R) I210 Gigabit Network Connection




-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 17:16
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

Is the Parent device managed by something? Because I am surprised that a PCI device that is not managed/created by a driver has a MAC device path.

I assume that you want the driver C0 to drive your device? What does checking that driver handle return? Is that driver managing anything?

Tom



On 24/09/2019 14:11, Buhrow, Simon wrote:
Hi Tom,

thanks for that advice!
That's really nice. But overall it tells me the same: there is no driver for the network adapter...
And "connect" does not run successfully.

Regards,

Simon


FS0:\EFI\Netzwerk\> dh -d -v 161
161: D3D9D298
HIIConfigAccess(D33C9AF8)
AdapterInfo(D33C9B80)
Supported Information Types:
Guid[1] : D7C74207-A831-4A26-B1F5-D193065CE8B6 - gEfiAdapterInfoMediaStateGuid
MediaState: 0x00000000 - Success
Guid[2] : 25B6A2C7-410B-AD42-9145-11BFC750D202 - UnknownInfoType

34D59603-1428-4429-A414-E6B3B5FD7DC1(D33C9B10)
0E1AD94A-DCF4-11DB-9705-00E08161165F(D52AE570)
NetworkInterfaceIdentifier31(D33C6020)
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3D9D218)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
51DD8B21-AD8D-48E9-BC3F-24F46722C748(D33C9B50)
Controller Name : Intel(R) I210 Gigabit Network Connection
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
Controller Type : DEVICE
Configuration : NO
Diagnostics : NO
Managed by : <None>
Parent Controllers :
Parent[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child Controllers : <None>


-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 11:44
To: discuss@edk2.groups.io; Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Subject: Re: [edk2-discuss] Establish network and run 'ping'

Hi Simon,

A handy trick that might help you is interrogating handles using 'dh
-d -v <handle>' which gives you a lot of information about what's
connected and installed and driving what. Works on all handles
(drivers, devices,
etc.)

Cheers,
Tom

On 24/09/2019 10:29, Buhrow, Simon wrote:
Hi,

in order to pass files via the network (using TFTP) from/to the UEFI shell, I'm trying to establish a network connection.
For that I want to check it using the ping command (ifconfig and ping do not give me any console output when called with the corresponding parameters).

Looking at the drivers and devices, everything looks fine to me (see parts of the output below).
Running 'connect', I don't get any entry for the network adapter.
So I think I have to connect the network adapter with the corresponding drivers. But 'connect 161' fails.

What's wrong? Do I misunderstand anything, or is there just a step missing?

The shell I use is the one I get with edk2-stable201908 when just compiling the ShellPkg.

Regards,

Simon


FS0:\> devices
...
161 D - - 1 0 0 Intel(R) I210 Gigabit Network Connection
...
FS0:\> drivers
...
93 0000000A ? - - - - TCP Network Service Driver TcpDxe
94 0000000A ? - - - - TCP Network Service Driver TcpDxe
95 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
96 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
97 0000000A ? - - - - IP6 Network Service Driver Ip6Dxe
99 0000000A ? - - - - UDP6 Network Service Driver Udp6Dxe
9A 0000000A ? - - - - DHCP6 Protocol Driver Dhcp6Dxe
9B 0000000A ? - - - - MTFTP6 Network Service Driver Mtftp6Dxe
B7 0000000A ? - - - - DHCP Protocol Driver Dhcp4Dxe
B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe
B9 0000000A ? - - - - IP4 Network Service Driver Ip4Dxe
BA 0000000A ? - - - - MTFTP4 Network Service Mtftp4Dxe
BB 0000000A ? - - - - UDP Network Service Driver Udp4Dxe
C0 06060400 B X X 1 1 Intel(R) PRO/1000 6.6.04 PCI-E IntelGigabitLan
...
FS0:\> connect
Connect - Handle [149] Result Success.
Connect - Handle [14A] Result Success.
FS0:\> connect 161
Connect No drivers could be connected.





Re: Establish network and run 'ping'

Tomas Pilar (tpilar)
 

I am rather surprised that the default value for Network Stack is disabled on a platform. If the platform has a working implementation, I would strongly suggest you use that.

Otherwise you'll probably need to spend a lot more time poking around and familiarising yourself with the environment and the individual modules that comprise the network stack. Also note that platform vendors often modify the upstream network stack code to add new features or optimise the way it works on their hardware.

Your question is very generic and not something I can walk you through using email (maybe someone else here can), but I am happy to try and answer more specific questions when you have any (though admittedly I am not an expert on the network stack).

If you do want to learn more and play around, I would suggest starting with OVMF, rather than a platform, for a number of different reasons.

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 05 November 2019 08:26
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi,

well, playing around I found that when I manually enable the Network Stack in the BIOS menu, everything works fine (ping, ifconfig). All the drivers are then loaded correctly from the beginning.
As the default value for the Network Stack is "disabled", I'd like to get the network stack running via the UEFI shell without the need to enter the BIOS menu.
When the Network Stack is disabled I get the results I mentioned before in my e-mails.

So it might be that I have to do more than just load the drivers? Is there any flag/variable I have to change so I can bind the drivers correctly?

Reconnecting the Intel driver was without success. Unloading the Intel driver, I cannot get it reloaded, as I don't know how to reload a driver for which I do not have an .efi file...

Any advice is welcome.

Thanks,

Simon



-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Friday, 25 October 2019 12:39
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

Once you have all the required drivers from the network stack loaded, I would reconnect the Intel driver (maybe even unload and reload it) to try and get all the network stack service bindings happen in the correct order.

Not sure what else to try at this stage, you'll have to play around with it.
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 25 October 2019 08:18
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

Thanks for that advice!

Loading MnpDxe runs successfully, and I get a nice entry in the drivers table.
Looking at the details I get (more details below):
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)


How do I start MNP?


I tried to load ArpDxe, as this is the next driver in the EFINetworkStackGettingStarted.pdf. But there I only get:
Image 'FS0:\EFI\Netzwerk\ArpDxe.efi' loaded at D51A2000 - Success
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpConfigReceiveFilters: Snp->ReceiveFilters failed, Not started.


So that sounds as if it is already started? Hm...


Regards,

Simon



FS0:\EFI\Netzwerk\> dh -d -v 16E
16E: D3264F98
ComponentName(D51B3F00)
DriverBinding(D51B3ED0)
ImageDevicePath(D3267018)
PciRoot(0x0)/Pci(0x1D,0x0)/USB(0x0,0x0)/USB(0x1,0x0)/HD(1,MBR,0x257E68F1,0x3E,0xE8C8A8)/\EFI\Netzwerk\MnpDxe.efi
LoadedImage(D40C3640)
Revision......: 0x00001000
ParentHandle..: D40C2818
SystemTable...: D5B48F18
DeviceHandle..: D4091318
FilePath......: \EFI\Netzwerk\MnpDxe.efi
PdbFileName...: c:\edk2\Build\NetworkPkg\DEBUG_VS2015x86\X64\NetworkPkg\MnpDxe\MnpDxe\DEBUG\MnpDxe.pdb
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D51A9000
ImageSize.....: BAA0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D51A92A0
Driver Name [16E] : MNP Network Service Driver
Driver Image Name : \EFI\Netzwerk\MnpDxe.efi
Driver Version : 0000000A
Driver Type : Bus
Configuration : NO
Diagnostics : NO
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)

-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 15 October 2019 11:36
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

I also don't see the MnpDxe in your list, you'll need that one too.

Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 15 October 2019 09:25
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi,

You are right, there is no SnpDxe loaded/installed. I double-checked it, but I really did list all network-relevant drivers in my first post.

I'm running the shell on a real system with an Intel Atom 64-bit processor and the mentioned Intel network device; it's not a specific server type.

Point 6 leads me to assume that SnpDxe must be installed at a certain stage of driver initialization?!
Nevertheless I compiled the NetworkPkg, which gives me an SnpDxe.efi file (I did not make any changes to the .inf files).
When I load that file inside the UEFI shell I do get a successful entry for my network device [161]:
Managed by :
Drv[16D] : Simple Network Protocol Driver

That looks fine to me. But it does not change the behavior of the ping or ifconfig commands.

Furthermore, as I understand the EFINetworkStackGettingStarted.pdf (https://sourceforge.net/projects/network-io/files/Documents/), there are more drivers needed to get SnpDxe running fine?!
But with the dh -d -v <num> command I can't see any relations, only that SnpDxe is managing the network device, with no reference to any other protocol/driver.

Regards,

Simon



-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Monday, 14 October 2019 14:02
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

My current reading of what is going on is as follows:

1. The Intel driver looks like a normal driver-model-compliant edk2 driver.
2. The Intel driver correctly binds to the PCI device [153] and creates a network device with MAC address device path [161].
3. The network device has HIIConfigAccess installed so I reckon the HII forms were correctly installed and published as one would expect.
4. The AIP has MediaState with EFI_SUCCESS which leads me to believe that the underlying driver works okay and can talk to device alright, also that you have link up.
5. The driver seems to have installed NII_31, so you probably have an UNDI device with a legacy? PXE blob. This is very common.
6. The SimpleNetworkProtocol is not installed on the network device. This should have been done by a callback in SnpDxe as soon as the NII_31 was installed.
7. Given you don't have SNP installed on the device, the platform network did not bind to the device.

The problem seems to be in step 6.

What is the nature of the platform you are running in? Is it OVMF?
Is it a specific generation of a specific server type (say DELL 14th generation or HP 10th generation platform)?
When you list drivers in the UEFI shell, can you find SnpDxe or Simple Network Protocol Driver?

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 14 October 2019 12:31
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

thanks a lot for your effort!

Yes, I think C0 has to manage 161. And it seems that this is already done. Looking one step further, I get that C0 is managing the PciRoot[153] and its child, the network device (see the outputs attached below).

Hm,... so that should be fine, right?
So the ping command does fail because of another error?

It might be that I have to bind/connect the other network stack drivers to C0? (e.g. B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe; for detailed information see below). I'm not able to do so with the 'connect' command.
The network stack drivers should show some information in the 'Managing' entry to run fine, shouldn't they?

Regards,

Simon




PS: I hope this does not lead to confusion; if so, forget the following part:
ifconfig and ping do not give me any console output when called with the corresponding parameters, e.g.
FS0:\> ifconfig -s eth0 static 192.168.51.1 255.255.255.0 192.168.51.0
FS0:\> ifconfig -l eth0
FS0:\> ping 192.168.51.2
With the same Hardware but another EFI-Shell version (one I downloaded from Internet and did not build via EDK2) I get the same device-driver relationship but still ifconfig and ping not working. But this time it throws an error: IfConfig: Locate protocol error - 'Ip4Config Protocol'


FS0:\> dh -d -v B8
B8: D40B5718
ComponentName2(D5320528)
ComponentName(D5320510)
DriverBinding(D53203C0)
ImageDevicePath(D40B5518)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
LoadedImage(D40B7640)
Name..........: Ip4ConfigDxe
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D5320000
ImageSize.....: 4BE0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D5320BC0
Driver Name [B8] : IP4 CONFIG Network Service Driver
Driver Image Name : FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
Driver Version : 0000000A
Driver Type : <Unknown>
Configuration : NO
Diagnostics : NO
Managing : None

FS0:\> dh -d -v 153
153: D3E1CB98
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3E47A18)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
PCIIO(D3E49818)
Segment #.....: 00
Bus #.........: 02
Device #......: 00
Function #....: 00
ROM Size......: FC00
ROM Location..: D3E0C018
Vendor ID.....: 8086
Device ID.....: 1533
Class Code....: 00 00 02
Configuration Header :
86803315070010000300000210000000
000080F80000000001D00000000082F8
00000000000000000000000086800000
0000000040000000000000000A010000
Controller Name : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Controller Type : BUS
Configuration : NO
Diagnostics : NO
Managed by :
Drv[C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Parent Controllers :
Parent[8B] : PciRoot(0x0)
Child Controllers :
Child[161] : Intel(R) I210 Gigabit Network Connection


FS0:\> dh -d -v C0
C0: D40A8018
SupportedEfiSpecVersion(D52AE548)
0x00020028
DriverHealth(D52AE550)
DriverConfiguration(D52AD920)
DriverDiagnostics2(D52AE5E8)
ComponentName2(D52AE530)
DriverDiagnostics(D52AE5D8)
ComponentName(D52AE518)
DriverBinding(D52AE848)
ImageDevicePath(D40A2E18)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
LoadedImage(D40A6840)
Name..........: IntelGigabitLan
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D52AC000
ImageSize.....: 4F560
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D52C9A14
Driver Name [C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Driver Image Name : FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
Driver Version : 06060400
Driver Type : Bus
Configuration : YES
Diagnostics : YES
Managing :
Ctrl[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child[161] : Intel(R) I210 Gigabit Network Connection




-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 17:16
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

Is the Parent device managed by something? I am surprised that a PCI device that is not managed/created by a driver has a MAC device path.

I assume that you want the driver C0 to drive your device? What does checking that driver handle return? Is that driver managing anything?

Tom



On 24/09/2019 14:11, Buhrow, Simon wrote:
Hi Tom,

thanks for that advice!
That's really nice, but overall it tells me the same thing: there is no driver for the network adapter...
And "connect" does not run successfully.

Regards,

Simon


FS0:\EFI\Netzwerk\> dh -d -v 161
161: D3D9D298
HIIConfigAccess(D33C9AF8)
AdapterInfo(D33C9B80)
Supported Information Types:
Guid[1] : D7C74207-A831-4A26-B1F5-D193065CE8B6 - gEfiAdapterInfoMediaStateGuid
MediaState: 0x00000000 - Success
Guid[2] : 25B6A2C7-410B-AD42-9145-11BFC750D202 - UnknownInfoType

34D59603-1428-4429-A414-E6B3B5FD7DC1(D33C9B10)
0E1AD94A-DCF4-11DB-9705-00E08161165F(D52AE570)
NetworkInterfaceIdentifier31(D33C6020)
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3D9D218)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
51DD8B21-AD8D-48E9-BC3F-24F46722C748(D33C9B50)
Controller Name : Intel(R) I210 Gigabit Network Connection
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
Controller Type : DEVICE
Configuration : NO
Diagnostics : NO
Managed by : <None>
Parent Controllers :
Parent[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child Controllers : <None>


-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 11:44
To: discuss@edk2.groups.io; Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Subject: Re: [edk2-discuss] Establish network and run 'ping'

Hi Simon,

A handy trick that might help you is interrogating handles using 'dh
-d -v <handle>' which gives you a lot of information about what's
connected and installed and driving what. Works on all handles
(drivers, devices,
etc.)

Cheers,
Tom

On 24/09/2019 10:29, Buhrow, Simon wrote:
Hi,

In order to transfer files over the network (using TFTP) from/to the UEFI Shell, I am trying to establish a network connection.
To verify it, I want to use the ping command (ifconfig and ping do not give me any console output when called with the corresponding parameters).

Looking at the drivers and devices, everything looks fine to me (see the excerpts below).
Running 'connect', I do not get any entry for the network adapter.
So I think I have to connect the network adapter to the corresponding drivers, but 'connect 161' fails.

What is wrong? Am I misunderstanding something, or is a step missing?

The shell I use is the one I get with edk2-stable201908 when just compiling the ShellPkg.

Regards,

Simon


FS0:\> devices
...
161 D - - 1 0 0 Intel(R) I210 Gigabit Network Connection
...
FS0:\> drivers
...
93 0000000A ? - - - - TCP Network Service Driver TcpDxe
94 0000000A ? - - - - TCP Network Service Driver TcpDxe
95 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
96 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
97 0000000A ? - - - - IP6 Network Service Driver Ip6Dxe
99 0000000A ? - - - - UDP6 Network Service Driver Udp6Dxe
9A 0000000A ? - - - - DHCP6 Protocol Driver Dhcp6Dxe
9B 0000000A ? - - - - MTFTP6 Network Service Driver Mtftp6Dxe
B7 0000000A ? - - - - DHCP Protocol Driver Dhcp4Dxe
B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe
B9 0000000A ? - - - - IP4 Network Service Driver Ip4Dxe
BA 0000000A ? - - - - MTFTP4 Network Service Mtftp4Dxe
BB 0000000A ? - - - - UDP Network Service Driver Udp4Dxe
C0 06060400 B X X 1 1 Intel(R) PRO/1000 6.6.04 PCI-E IntelGigabitLan
...
FS0:\> connect
Connect - Handle [149] Result Success.
Connect - Handle [14A] Result Success.
FS0:\> connect 161
Connect No drivers could be connected.





Re: Establish network and run 'ping'

Buhrow, Simon <simon.buhrow@...>
 

Hi Tom,

Thanks for that advice!

Loading MnpDxe runs successfully, and I get a nice entry in the drivers table.
Looking at the details, I get (more details below):
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)


How do I start MNP?


I tried to load ArpDxe, as this is the next driver in EFINetworkStackGettingStarted.pdf. But there I get only:
Image 'FS0:\EFI\Netzwerk\ArpDxe.efi' loaded at D51A2000 - Success
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpConfigReceiveFilters: Snp->ReceiveFilters failed, Not started.


So that sounds as if it is already started? Hm...
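The only further check I could think of is dumping the child handle itself (16F is the handle number from the listing below):

FS0:\EFI\Netzwerk\> dh -d -v 16F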


Regards,

Simon



FS0:\EFI\Netzwerk\> dh -d -v 16E
16E: D3264F98
ComponentName(D51B3F00)
DriverBinding(D51B3ED0)
ImageDevicePath(D3267018)
PciRoot(0x0)/Pci(0x1D,0x0)/USB(0x0,0x0)/USB(0x1,0x0)/HD(1,MBR,0x257E68F1,0x3E,0xE8C8A8)/\EFI\Netzwerk\MnpDxe.efi
LoadedImage(D40C3640)
Revision......: 0x00001000
ParentHandle..: D40C2818
SystemTable...: D5B48F18
DeviceHandle..: D4091318
FilePath......: \EFI\Netzwerk\MnpDxe.efi
PdbFileName...: c:\edk2\Build\NetworkPkg\DEBUG_VS2015x86\X64\NetworkPkg\MnpDxe\MnpDxe\DEBUG\MnpDxe.pdb
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D51A9000
ImageSize.....: BAA0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D51A92A0
Driver Name [16E] : MNP Network Service Driver
Driver Image Name : \EFI\Netzwerk\MnpDxe.efi
Driver Version : 0000000A
Driver Type : Bus
Configuration : NO
Diagnostics : NO
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)

-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 15 October 2019 11:36
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

I also don't see MnpDxe in your list; you'll need that one too.

Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 15 October 2019 09:25
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi,

You are right, there is no SnpDxe loaded/installed. I double-checked it, and I really did list all network-relevant drivers in my first post.

I'm running the shell on a real system with an Intel Atom 64-bit processor and the mentioned Intel network device; it is not a specific server type.

Point 6 leads me to assume that SnpDxe must be installed at a certain stage of driver initialization?!
Nevertheless, I compiled NetworkPkg, which gives me an SnpDxe.efi file (I did not make any changes to the .inf files).
When I load that file inside the UEFI shell, I do get a successful entry for my network device [161]:
Managed by :
Drv[16D] : Simple Network Protocol Driver

That looks fine to me, but it does not change the behavior of the ping or ifconfig commands.

Furthermore, as I understand EFINetworkStackGettingStarted.pdf (https://sourceforge.net/projects/network-io/files/Documents/), more drivers are needed for SnpDxe to work properly?!
But with the 'dh -d -v <num>' command I cannot see any such relations; only that SnpDxe is managing the network device, with no reference to any other protocol/driver.

Regards,

Simon



-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Monday, 14 October 2019 14:02
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

My current reading of what is going on is as follows:

1. The Intel driver looks like a normal driver-model-compliant edk2 driver.
2. The Intel driver correctly binds to the PCI device [153] and creates a network device with a MAC address device path [161].
3. The network device has HIIConfigAccess installed, so I reckon the HII forms were correctly installed and published, as one would expect.
4. The AIP has MediaState with EFI_SUCCESS, which leads me to believe that the underlying driver works okay and can talk to the device alright, and also that you have link up.
5. The driver seems to have installed NII_31, so you probably have an UNDI device with a (legacy?) PXE blob. This is very common.
6. The SimpleNetworkProtocol is not installed on the network device. This should have been done by a callback in SnpDxe as soon as the NII_31 was installed.
7. Given you don't have SNP installed on the device, the platform network did not bind to the device.

The problem seems to be in step 6.

What is the nature of the platform you are running in? Is it OVMF?
Is it a specific generation of a specific server type (say DELL 14th generation or HP 10th generation platform)?
When you list drivers in the UEFI shell, can you find SnpDxe or Simple Network Protocol Driver?

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 14 October 2019 12:31
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

thanks a lot for your effort!

Yes, I think C0 has to manage 161, and it seems that this is already done. Looking one step further, I see that C0 is managing the PCI device [153] and its child, the network device (see the attached outputs below).

Hm... so that should be fine, right?
So does the ping command fail because of another error?

It might be that I have to bind/connect the other network stack drivers to C0 (e.g. B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe; for detailed information see below)? I am not able to do so with the 'connect' command.
Shouldn't the network stack drivers show something under their 'Managing' entry if they are working correctly?

Regards,

Simon




PS: I hope this does not lead to confusion; if so, ignore the following part:
ifconfig and ping do not give me any console output when called with the corresponding parameters, e.g.
FS0:\> ifconfig -s eth0 static 192.168.51.1 255.255.255.0 192.168.51.0
FS0:\> ifconfig -l eth0
FS0:\> ping 192.168.51.2
With the same hardware but another EFI Shell version (one I downloaded from the Internet and did not build via EDK2), I get the same device-driver relationship, but ifconfig and ping still do not work. This time, however, it throws an error: IfConfig: Locate protocol error - 'Ip4Config Protocol'


FS0:\> dh -d -v B8
B8: D40B5718
ComponentName2(D5320528)
ComponentName(D5320510)
DriverBinding(D53203C0)
ImageDevicePath(D40B5518)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
LoadedImage(D40B7640)
Name..........: Ip4ConfigDxe
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D5320000
ImageSize.....: 4BE0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D5320BC0
Driver Name [B8] : IP4 CONFIG Network Service Driver
Driver Image Name : FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
Driver Version : 0000000A
Driver Type : <Unknown>
Configuration : NO
Diagnostics : NO
Managing : None

FS0:\> dh -d -v 153
153: D3E1CB98
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3E47A18)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
PCIIO(D3E49818)
Segment #.....: 00
Bus #.........: 02
Device #......: 00
Function #....: 00
ROM Size......: FC00
ROM Location..: D3E0C018
Vendor ID.....: 8086
Device ID.....: 1533
Class Code....: 00 00 02
Configuration Header :
86803315070010000300000210000000
000080F80000000001D00000000082F8
00000000000000000000000086800000
0000000040000000000000000A010000
Controller Name : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Controller Type : BUS
Configuration : NO
Diagnostics : NO
Managed by :
Drv[C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Parent Controllers :
Parent[8B] : PciRoot(0x0)
Child Controllers :
Child[161] : Intel(R) I210 Gigabit Network Connection


FS0:\> dh -d -v C0
C0: D40A8018
SupportedEfiSpecVersion(D52AE548)
0x00020028
DriverHealth(D52AE550)
DriverConfiguration(D52AD920)
DriverDiagnostics2(D52AE5E8)
ComponentName2(D52AE530)
DriverDiagnostics(D52AE5D8)
ComponentName(D52AE518)
DriverBinding(D52AE848)
ImageDevicePath(D40A2E18)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
LoadedImage(D40A6840)
Name..........: IntelGigabitLan
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D52AC000
ImageSize.....: 4F560
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D52C9A14
Driver Name [C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Driver Image Name : FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
Driver Version : 06060400
Driver Type : Bus
Configuration : YES
Diagnostics : YES
Managing :
Ctrl[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child[161] : Intel(R) I210 Gigabit Network Connection




-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 17:16
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

Is the Parent device managed by something? I am surprised that a PCI device that is not managed/created by a driver has a MAC device path.

I assume that you want the driver C0 to drive your device? What does checking that driver handle return? Is that driver managing anything?

Tom



On 24/09/2019 14:11, Buhrow, Simon wrote:
Hi Tom,

thanks for that advice!
That's really nice, but overall it tells me the same thing: there is no driver for the network adapter...
And "connect" does not run successfully.

Regards,

Simon


FS0:\EFI\Netzwerk\> dh -d -v 161
161: D3D9D298
HIIConfigAccess(D33C9AF8)
AdapterInfo(D33C9B80)
Supported Information Types:
Guid[1] : D7C74207-A831-4A26-B1F5-D193065CE8B6 - gEfiAdapterInfoMediaStateGuid
MediaState: 0x00000000 - Success
Guid[2] : 25B6A2C7-410B-AD42-9145-11BFC750D202 - UnknownInfoType

34D59603-1428-4429-A414-E6B3B5FD7DC1(D33C9B10)
0E1AD94A-DCF4-11DB-9705-00E08161165F(D52AE570)
NetworkInterfaceIdentifier31(D33C6020)
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3D9D218)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
51DD8B21-AD8D-48E9-BC3F-24F46722C748(D33C9B50)
Controller Name : Intel(R) I210 Gigabit Network Connection
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
Controller Type : DEVICE
Configuration : NO
Diagnostics : NO
Managed by : <None>
Parent Controllers :
Parent[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child Controllers : <None>


-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 11:44
To: discuss@edk2.groups.io; Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Subject: Re: [edk2-discuss] Establish network and run 'ping'

Hi Simon,

A handy trick that might help you is interrogating handles using 'dh
-d -v <handle>' which gives you a lot of information about what's
connected and installed and driving what. Works on all handles
(drivers, devices,
etc.)

Cheers,
Tom

On 24/09/2019 10:29, Buhrow, Simon wrote:
Hi,

In order to transfer files over the network (using TFTP) from/to the UEFI Shell, I am trying to establish a network connection.
To verify it, I want to use the ping command (ifconfig and ping do not give me any console output when called with the corresponding parameters).

Looking at the drivers and devices, everything looks fine to me (see the excerpts below).
Running 'connect', I do not get any entry for the network adapter.
So I think I have to connect the network adapter to the corresponding drivers, but 'connect 161' fails.

What is wrong? Am I misunderstanding something, or is a step missing?

The shell I use is the one I get with edk2-stable201908 when just compiling the ShellPkg.

Regards,

Simon


FS0:\> devices
...
161 D - - 1 0 0 Intel(R) I210 Gigabit Network Connection
...
FS0:\> drivers
...
93 0000000A ? - - - - TCP Network Service Driver TcpDxe
94 0000000A ? - - - - TCP Network Service Driver TcpDxe
95 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
96 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
97 0000000A ? - - - - IP6 Network Service Driver Ip6Dxe
99 0000000A ? - - - - UDP6 Network Service Driver Udp6Dxe
9A 0000000A ? - - - - DHCP6 Protocol Driver Dhcp6Dxe
9B 0000000A ? - - - - MTFTP6 Network Service Driver Mtftp6Dxe
B7 0000000A ? - - - - DHCP Protocol Driver Dhcp4Dxe
B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe
B9 0000000A ? - - - - IP4 Network Service Driver Ip4Dxe
BA 0000000A ? - - - - MTFTP4 Network Service Mtftp4Dxe
BB 0000000A ? - - - - UDP Network Service Driver Udp4Dxe
C0 06060400 B X X 1 1 Intel(R) PRO/1000 6.6.04 PCI-E IntelGigabitLan
...
FS0:\> connect
Connect - Handle [149] Result Success.
Connect - Handle [14A] Result Success.
FS0:\> connect 161
Connect No drivers could be connected.





Re: Establish network and run 'ping'

Tomas Pilar (tpilar)
 

Once you have all the required drivers from the network stack loaded, I would reconnect the Intel driver (maybe even unload and reload it) to try to get all the network stack service bindings to happen in the correct order.

Not sure what else to try at this stage; you'll have to play around with it.
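Roughly, I mean something like the following (the FS0:\EFI\Netzwerk\ path and the set of drivers are taken from your earlier mails; note that 'reconnect -r' disconnects and reconnects the drivers on all handles, so the handle numbers will change afterwards):

FS0:\> load FS0:\EFI\Netzwerk\SnpDxe.efi
FS0:\> load FS0:\EFI\Netzwerk\MnpDxe.efi
FS0:\> load FS0:\EFI\Netzwerk\ArpDxe.efi
FS0:\> reconnect -r
FS0:\> ifconfig -l eth0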
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 25 October 2019 08:18
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

Thanks for that advice!

Loading MnpDxe runs successfully, and I get a nice entry in the drivers table.
Looking at the details, I get (more details below):
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)


How do I start MNP?


I tried to load ArpDxe, as this is the next driver in EFINetworkStackGettingStarted.pdf. But there I get only:
Image 'FS0:\EFI\Netzwerk\ArpDxe.efi' loaded at D51A2000 - Success
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpStart: MnpStartSnp failed, Already started.
MnpConfigReceiveFilters: Snp->ReceiveFilters failed, Not started.


So that sounds as if it is already started? Hm...


Regards,

Simon



FS0:\EFI\Netzwerk\> dh -d -v 16E
16E: D3264F98
ComponentName(D51B3F00)
DriverBinding(D51B3ED0)
ImageDevicePath(D3267018)
PciRoot(0x0)/Pci(0x1D,0x0)/USB(0x0,0x0)/USB(0x1,0x0)/HD(1,MBR,0x257E68F1,0x3E,0xE8C8A8)/\EFI\Netzwerk\MnpDxe.efi
LoadedImage(D40C3640)
Revision......: 0x00001000
ParentHandle..: D40C2818
SystemTable...: D5B48F18
DeviceHandle..: D4091318
FilePath......: \EFI\Netzwerk\MnpDxe.efi
PdbFileName...: c:\edk2\Build\NetworkPkg\DEBUG_VS2015x86\X64\NetworkPkg\MnpDxe\MnpDxe\DEBUG\MnpDxe.pdb
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D51A9000
ImageSize.....: BAA0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D51A92A0
Driver Name [16E] : MNP Network Service Driver
Driver Image Name : \EFI\Netzwerk\MnpDxe.efi
Driver Version : 0000000A
Driver Type : Bus
Configuration : NO
Diagnostics : NO
Managing :
Ctrl[161] : Intel(R) I210 Gigabit Network Connection
Child[16F] : MNP (Not started)

-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 15 October 2019 11:36
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

I also don't see MnpDxe in your list; you'll need that one too.

Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 15 October 2019 09:25
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi,

You are right, there is no SnpDxe loaded/installed. I double-checked it, and I really did list all network-relevant drivers in my first post.

I'm running the shell on a real system with an Intel Atom 64-bit processor and the mentioned Intel network device; it is not a specific server type.

Point 6 leads me to assume that SnpDxe must be installed at a certain stage of driver initialization?!
Nevertheless, I compiled NetworkPkg, which gives me an SnpDxe.efi file (I did not make any changes to the .inf files).
When I load that file inside the UEFI shell, I do get a successful entry for my network device [161]:
Managed by :
Drv[16D] : Simple Network Protocol Driver

That looks fine to me, but it does not change the behavior of the ping or ifconfig commands.

Furthermore, as I understand EFINetworkStackGettingStarted.pdf (https://sourceforge.net/projects/network-io/files/Documents/), more drivers are needed for SnpDxe to work properly?!
But with the 'dh -d -v <num>' command I cannot see any such relations; only that SnpDxe is managing the network device, with no reference to any other protocol/driver.

Regards,

Simon



-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Monday, 14 October 2019 14:02
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

My current reading of what is going on is as follows:

1. The Intel driver looks like a normal driver-model-compliant edk2 driver.
2. The Intel driver correctly binds to the PCI device [153] and creates a network device with a MAC address device path [161].
3. The network device has HIIConfigAccess installed, so I reckon the HII forms were correctly installed and published, as one would expect.
4. The AIP has MediaState with EFI_SUCCESS, which leads me to believe that the underlying driver works okay and can talk to the device alright, and also that you have link up.
5. The driver seems to have installed NII_31, so you probably have an UNDI device with a (legacy?) PXE blob. This is very common.
6. The SimpleNetworkProtocol is not installed on the network device. This should have been done by a callback in SnpDxe as soon as the NII_31 was installed.
7. Given you don't have SNP installed on the device, the platform network did not bind to the device.

The problem seems to be in step 6.

What is the nature of the platform you are running in? Is it OVMF?
Is it a specific generation of a specific server type (say DELL 14th generation or HP 10th generation platform)?
When you list drivers in the UEFI shell, can you find SnpDxe or Simple Network Protocol Driver?

Cheers,
Tom
________________________________________
From: Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Sent: 14 October 2019 12:31
To: Tomas Pilar; discuss@edk2.groups.io
Subject: AW: AW: [edk2-discuss] Establish network and run 'ping'

Hi Tom,

thanks a lot for your effort!

Yes, I think C0 has to manage 161, and it seems that this is already done. Looking one step further, I see that C0 is managing the PCI device [153] and its child, the network device (see the attached outputs below).

Hm... so that should be fine, right?
So does the ping command fail because of another error?

It might be that I have to bind/connect the other network stack drivers to C0 (e.g. B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe; for detailed information see below)? I am not able to do so with the 'connect' command.
Shouldn't the network stack drivers show something under their 'Managing' entry if they are working correctly?

Regards,

Simon




PS: I hope this does not lead to confusion; if so, ignore the following part:
ifconfig and ping do not give me any console output when called with the corresponding parameters, e.g.
FS0:\> ifconfig -s eth0 static 192.168.51.1 255.255.255.0 192.168.51.0
FS0:\> ifconfig -l eth0
FS0:\> ping 192.168.51.2
With the same hardware but another EFI Shell version (one I downloaded from the Internet and did not build via EDK2), I get the same device-driver relationship, but ifconfig and ping still do not work. This time, however, it throws an error: IfConfig: Locate protocol error - 'Ip4Config Protocol'


FS0:\> dh -d -v B8
B8: D40B5718
ComponentName2(D5320528)
ComponentName(D5320510)
DriverBinding(D53203C0)
ImageDevicePath(D40B5518)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
LoadedImage(D40B7640)
Name..........: Ip4ConfigDxe
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D5320000
ImageSize.....: 4BE0
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D5320BC0
Driver Name [B8] : IP4 CONFIG Network Service Driver
Driver Image Name : FvFile(26841BDE-920A-4E7A-9FBE-637F477143A6)
Driver Version : 0000000A
Driver Type : <Unknown>
Configuration : NO
Diagnostics : NO
Managing : None

FS0:\> dh -d -v 153
153: D3E1CB98
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3E47A18)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
PCIIO(D3E49818)
Segment #.....: 00
Bus #.........: 02
Device #......: 00
Function #....: 00
ROM Size......: FC00
ROM Location..: D3E0C018
Vendor ID.....: 8086
Device ID.....: 1533
Class Code....: 00 00 02
Configuration Header :
86803315070010000300000210000000
000080F80000000001D00000000082F8
00000000000000000000000086800000
0000000040000000000000000A010000
Controller Name : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Controller Type : BUS
Configuration : NO
Diagnostics : NO
Managed by :
Drv[C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Parent Controllers :
Parent[8B] : PciRoot(0x0)
Child Controllers :
Child[161] : Intel(R) I210 Gigabit Network Connection


FS0:\> dh -d -v C0
C0: D40A8018
SupportedEfiSpecVersion(D52AE548)
0x00020028
DriverHealth(D52AE550)
DriverConfiguration(D52AD920)
DriverDiagnostics2(D52AE5E8)
ComponentName2(D52AE530)
DriverDiagnostics(D52AE5D8)
ComponentName(D52AE518)
DriverBinding(D52AE848)
ImageDevicePath(D40A2E18)
Fv(5C60F367-A505-419A-859E-2A4FF6CA6FE5)/FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
LoadedImage(D40A6840)
Name..........: IntelGigabitLan
Revision......: 0x00001000
ParentHandle..: D4EB1F18
SystemTable...: D5B48F18
DeviceHandle..: D4EA6B18
FilePath......: FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
PdbFileName...: <null string>
OptionsSize...: 0
LoadOptions...: 0
ImageBase.....: D52AC000
ImageSize.....: 4F560
CodeType......: EfiBootServicesCode
DataType......: EfiBootServicesData
Unload........: D52C9A14
Driver Name [C0] : Intel(R) PRO/1000 6.6.04 PCI-E
Driver Image Name : FvFile(95B3BF67-0455-4947-B92C-4BF437684E31)
Driver Version : 06060400
Driver Type : Bus
Configuration : YES
Diagnostics : YES
Managing :
Ctrl[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child[161] : Intel(R) I210 Gigabit Network Connection




-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 17:16
To: Buhrow, Simon <simon.buhrow@sieb-meyer.de>; discuss@edk2.groups.io
Subject: Re: AW: [edk2-discuss] Establish network and run 'ping'

Is the Parent device managed by something? I am surprised that a PCI device that is not managed/created by a driver has a MAC device path.

I assume that you want the driver C0 to drive your device? What does checking that driver handle return? Is that driver managing anything?

Tom



On 24/09/2019 14:11, Buhrow, Simon wrote:
Hi Tom,

thanks for that advice!
That's really nice, but overall it tells me the same thing: there is no driver for the network adapter...
And "connect" does not run successfully.

Regards,

Simon


FS0:\EFI\Netzwerk\> dh -d -v 161
161: D3D9D298
HIIConfigAccess(D33C9AF8)
AdapterInfo(D33C9B80)
Supported Information Types:
Guid[1] : D7C74207-A831-4A26-B1F5-D193065CE8B6 - gEfiAdapterInfoMediaStateGuid
MediaState: 0x00000000 - Success
Guid[2] : 25B6A2C7-410B-AD42-9145-11BFC750D202 - UnknownInfoType

34D59603-1428-4429-A414-E6B3B5FD7DC1(D33C9B10)
0E1AD94A-DCF4-11DB-9705-00E08161165F(D52AE570)
NetworkInterfaceIdentifier31(D33C6020)
E3161450-AD0F-11D9-9669-0800200C9A66(D33C6048)
DevicePath(D3D9D218)
PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
51DD8B21-AD8D-48E9-BC3F-24F46722C748(D33C9B50)
Controller Name : Intel(R) I210 Gigabit Network Connection
Device Path : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)/MAC(0030D6156DDE,0x1)
Controller Type : DEVICE
Configuration : NO
Diagnostics : NO
Managed by : <None>
Parent Controllers :
Parent[153] : PciRoot(0x0)/Pci(0x1C,0x3)/Pci(0x0,0x0)
Child Controllers : <None>


-----Original Message-----
From: Tomas Pilar <tpilar@solarflare.com>
Sent: Tuesday, 24 September 2019 11:44
To: discuss@edk2.groups.io; Buhrow, Simon <simon.buhrow@sieb-meyer.de>
Subject: Re: [edk2-discuss] Establish network and run 'ping'

Hi Simon,

A handy trick that might help you is interrogating handles using 'dh
-d -v <handle>' which gives you a lot of information about what's
connected and installed and driving what. Works on all handles
(drivers, devices,
etc.)

Cheers,
Tom

On 24/09/2019 10:29, Buhrow, Simon wrote:
Hi,

In order to transfer files over the network (using TFTP) from/to the UEFI Shell, I am trying to establish a network connection.
To verify it, I want to use the ping command (ifconfig and ping do not give me any console output when called with the corresponding parameters).

Looking at the drivers and devices, everything looks fine to me (see the excerpts below).
Running 'connect', I do not get any entry for the network adapter.
So I think I have to connect the network adapter to the corresponding drivers, but 'connect 161' fails.

What is wrong? Am I misunderstanding something, or is a step missing?

The shell I use is the one I get with edk2-stable201908 when just compiling the ShellPkg.

Regards,

Simon


FS0:\> devices
...
161 D - - 1 0 0 Intel(R) I210 Gigabit Network Connection
...
FS0:\> drivers
...
93 0000000A ? - - - - TCP Network Service Driver TcpDxe
94 0000000A ? - - - - TCP Network Service Driver TcpDxe
95 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
96 0000000A ? - - - - UEFI PXE Base Code Driver UefiPxeBcDxe
97 0000000A ? - - - - IP6 Network Service Driver Ip6Dxe
99 0000000A ? - - - - UDP6 Network Service Driver Udp6Dxe
9A 0000000A ? - - - - DHCP6 Protocol Driver Dhcp6Dxe
9B 0000000A ? - - - - MTFTP6 Network Service Driver Mtftp6Dxe
B7 0000000A ? - - - - DHCP Protocol Driver Dhcp4Dxe
B8 0000000A ? - - - - IP4 CONFIG Network Service Driver Ip4ConfigDxe
B9 0000000A ? - - - - IP4 Network Service Driver Ip4Dxe
BA 0000000A ? - - - - MTFTP4 Network Service Mtftp4Dxe
BB 0000000A ? - - - - UDP Network Service Driver Udp4Dxe
C0 06060400 B X X 1 1 Intel(R) PRO/1000 6.6.04 PCI-E IntelGigabitLan
...
FS0:\> connect
Connect - Handle [149] Result Success.
Connect - Handle [14A] Result Success.
FS0:\> connect 161
Connect No drivers could be connected.





Re: Could not load PC BIOS

Laszlo Ersek
 

On 10/24/19 16:19, Peter Wiehe wrote:
Hello Tianocore list,

I am trying (with Qemu) to emulate a UEFI 64-bit PC that boots from a
virtual CD and installs an OS on a virtual HD.

But I always get an error (see below).

I use:

- Xubuntu 18.04 LTS

- qemu-system-x86_64: QEMU emulator version 2.11.1(Debian
1:2.11+dfsg-1ubuntu7.19)

- gcc-5 (Ubuntu 5.5.0-12ubuntu1) 5.5.0 20171010

I followed your instructions in the documentation about building and
installing edk2. I tried to download the OVMF repo with yum and set
both targets, IA32 and x86_64.

The emulation command I enter is:

qemu-system-x86_64 -L . -drive format=raw,file=hd.img,readonly=off
-cdrom os.iso -net none -bios bios.bin -m 1G

The OS-CD-ISO is Haiku if that matters. You probably want more
information, but I don't know what else I can tell you.

I get the following output:

qemu-system-x86_64: warning: TCG doesn't support requested feature:
CPUID.01H:ECX.vmx [bit 5]
qemu: could not load PC BIOS 'bios.bin'

The first line is just a warning, so it's probably not important (I
hope). But the last line is bad. I also tried without "-L ."; didn't
work either.

What can I do? Do I have to build OVMF from source? And where exactly
can I find the OVMF "BIOS" file?

Or maybe it isn't a Tianocore problem, but a Qemu bug?
First, let me see the contents of the OVMF package that your distro
offers... You mention Xubuntu 18.04 LTS; I think that means "bionic":

https://packages.ubuntu.com/

Then, we have:

https://packages.ubuntu.com/bionic/all/ovmf/filelist

So you got

/usr/share/OVMF/OVMF_CODE.fd
/usr/share/OVMF/OVMF_VARS.fd

Seems OK.
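(If in doubt, you can double-check on your own system; "dpkg -L" simply lists the files that an installed package provides:)

sudo apt-get install ovmf
dpkg -L ovmf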

Assuming you want to use the raw QEMU command line, here's what you
should run:

VM=my-virtual-machine
ISO=/path/to/Haiku-OS.iso

# create a private variable store file for the VM, from the
# template, if such a variable store does not exist yet for the VM
if ! test -e "$VM.varstore"; then
cp /usr/share/OVMF/OVMF_VARS.fd "$VM.varstore"
fi

# create the virtual disk image if it doesn't exist yet
if ! test -e "$VM.qcow2"; then
qemu-img create -f qcow2 "$VM.qcow2" 20G
fi

qemu-system-x86_64 \
-m 1024 \
-smp 2 \
-machine pc,accel=kvm \
\
-boot menu=on,splash-time=5000 \
\
-drive if=pflash,unit=0,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
-drive if=pflash,unit=1,format=raw,file="$VM.varstore" \
\
-debugcon file:"$VM.ovmf.log" \
-global isa-debugcon.iobase=0x402 \
\
-drive id=disk,if=none,format=qcow2,file="$VM.qcow2" \
-drive id=cdrom,if=none,format=raw,readonly,file="$ISO" \
\
-device ide-hd,bus=ide.0,unit=0,drive=disk,bootindex=0 \
-device ide-cd,bus=ide.0,unit=1,drive=cdrom,bootindex=1

With this command line, OVMF will attempt to boot the disk first, and if
that fails, attempt to boot the ISO. You can use this to install the OS
from the ISO to the disk, and then continue booting from the disk.

The OVMF log will be written to "$VM.ovmf.log".
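(As an aside: if you want to keep using "-bios", the Ubuntu package also ships a unified image (/usr/share/ovmf/OVMF.fd, if I read the bionic file list correctly), but be aware that with "-bios" the image is not mapped as writable flash, so UEFI variables will not persist:)

qemu-system-x86_64 \
-m 1024 \
-bios /usr/share/ovmf/OVMF.fd \
-cdrom os.iso \
-drive format=raw,file=hd.img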

Thanks
Laszlo


Could not load PC BIOS

Peter Wiehe <peter.wiehe2@...>
 

Hello Tianocore list,

I am trying (with Qemu) to emulate a UEFI 64-bit PC that boots from a virtual CD and installs an OS on a virtual HD.

But I always get an error (see below).

I use:

- Xubuntu 18.04 LTS

-  qemu-system-x86_64: QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.19)

- gcc-5 (Ubuntu 5.5.0-12ubuntu1) 5.5.0 20171010

I followed your instructions in the documentation about building and installing edk2. I tried to download the OVMF repo with yum and set both targets, IA32 and x86_64.

The emulation command I enter is:

qemu-system-x86_64 -L . -drive format=raw,file=hd.img,readonly=off -cdrom os.iso -net none -bios bios.bin -m 1G

The OS-CD-ISO is Haiku if that matters. You probably want more information, but I don't know what else I can tell you.

I get the following output:

qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
qemu: could not load PC BIOS 'bios.bin'

The first line is just a warning, so it's probably not important (I hope). But the last line is bad. I also tried without "-L ."; didn't work either.

What can I do? Do I have to build OVMF from source? And where exactly can I find the OVMF "BIOS" file?

Or maybe it isn't a Tianocore problem, but a Qemu bug?

Thanks for any help

Peter


Piping output to unload fails

Byte Enable <ByteEnable@...>
 

Hi,

Is there a way to pipe to unload? For example:

echo "123" | unload -n 

Thanks.


Re: EDK2 UEFI Specification and Revision

sergestus@...
 

Hi Liming, there is only DEC_SPECIFICATION in MdePkg.dec, which is definitely not what I am looking for.


Re: EDK2 UEFI Specification and Revision

Liming Gao
 

You can check Edk2\MdePkg\MdePkg.dec. It lists the UEFI spec definitions.
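For the UEFI revision itself, you can also grep the system table revision macros under MdePkg. For example (the macro names are from my memory of Include/Uefi/UefiSpec.h, so treat this as a pointer rather than a definitive answer):

grep -n "SYSTEM_TABLE_REVISION" MdePkg/Include/Uefi/UefiSpec.h
grep -n "EFI_SPECIFICATION_VERSION" MdePkg/Include/Uefi/UefiSpec.h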

-----Original Message-----
From: discuss@edk2.groups.io <discuss@edk2.groups.io> On Behalf Of sergestus@yandex.ru
Sent: Friday, October 18, 2019 8:31 PM
To: discuss@edk2.groups.io
Subject: [edk2-discuss] EDK2 UEFI Specification and Revision

When the "ver" command is run at the EFI Shell command line, it shows the UEFI specification and revision. The question is: how do I find out which UEFI
specification and revision EDK2 supports?


EDK2 UEFI Specification and Revision

sergestus@...
 

When the "ver" command is run at the EFI Shell command line, it shows the UEFI specification and revision. The question is: how do I find out which UEFI specification and revision EDK2 supports?
