Re: [PATCH v4 20/35] OvmfPkg/XenPlatformPei: Introduce XenPvhDetected

Roger Pau Monné
 

On Thu, Aug 08, 2019 at 11:38:13AM +0100, Anthony PERARD wrote:
On Wed, Aug 07, 2019 at 05:03:46PM +0200, Roger Pau Monné wrote:
On Mon, Jul 29, 2019 at 04:39:29PM +0100, Anthony PERARD wrote:
+BOOLEAN
+XenPvhDetected (
+ VOID
+ )
+{
+ //
+ // This function should only be used after XenConnect
+ //
+ ASSERT (mXenInfo.VersionMajor != 0);
That's IMO dangerous. Using the version as an indication that
XenConnect has run seems like a bad idea, since a major
version of 0 is a valid value to return. Can't you check against
something else that doesn't depend on hypervisor-provided data? (ie:
like some allocations or such that happen in XenConnect)

A paranoid provider could even return major == 0 and minor == 0
in order to attempt to hide the Xen version used, since guests are not
supposed to infer anything from the Xen version; available hypervisor
features are reported by other means.
I'm sure a paranoid provider wouldn't use a debug build of OVMF :-). So
that assert doesn't matter. There's nothing dangerous in a `nop'! :-D

But I could use mXenInfo.HyperPages instead.
It's just a nit, and TBH it's quite unlikely for anyone to report a
major version of 0; it's just that if you have something else to
assert on for initialization, it might be safer.
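The alternative Anthony mentions (asserting on mXenInfo.HyperPages, which only XenConnect sets, rather than the hypervisor-reported version) could look like the following plain-C sketch; the struct layout and field names here are illustrative, not OVMF's actual code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for OVMF's mXenInfo; only the idea matters:
 * HyperPages is set by XenConnect and is always non-zero afterwards,
 * so it is a safe "has XenConnect run?" marker, unlike the version,
 * which a host could legitimately report as 0.0. */
typedef struct {
  uint32_t VersionMajor;
  uint32_t VersionMinor;
  uint32_t HyperPages;   /* number of hypercall pages, >= 1 after XenConnect */
  bool     PvhDetected;
} XEN_INFO;

static XEN_INFO mXenInfo;

bool XenPvhDetected (void)
{
  /* This assert no longer depends on hypervisor-provided data
   * happening to be non-zero. */
  assert (mXenInfo.HyperPages != 0);
  return mXenInfo.PvhDetected;
}
```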

Thanks, Roger.

Re: [PATCH v4 22/35] OvmfPkg/XenPlatformPei: no hvmloader: get the E820 table via hypercall

Roger Pau Monné
 

On Thu, Aug 08, 2019 at 11:41:18AM +0100, Anthony PERARD wrote:
On Wed, Aug 07, 2019 at 05:14:33PM +0200, Roger Pau Monné wrote:
On Mon, Jul 29, 2019 at 04:39:31PM +0100, Anthony PERARD wrote:
When the Xen PVH entry point has been used, hvmloader hasn't run and
hasn't prepared an E820 table. The only way left to get an E820 table
is to ask Xen via a hypercall; the call can only be made once, so keep
the result cached for later.
I think we agreed that the above is not true, and that the memory
map can be fetched as many times as desired using the hypercall
interface.
Yes, I'll change the commit message. How about:

When the Xen PVH entry point has been used, hvmloader hasn't run and
hasn't prepared an E820 table. The only way left to get an E820 table
is to ask Xen via a hypercall. We keep the result cached to avoid
making a second hypercall which would give the same result.
LGTM, thanks.
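The caching pattern from the revised commit message could be sketched in plain C as follows; the names (XenGetE820Map, HypercallFetchE820) and the entry layout are assumptions, and the real code would issue the Xen memory-map hypercall where the stub is:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_E820 128

typedef struct { uint64_t Base; uint64_t Length; uint32_t Type; } E820_ENTRY;

static E820_ENTRY mE820Map[MAX_E820];
static size_t     mE820Count;   /* 0 means "not fetched yet" */

/* Stand-in for the actual hypercall; fills one RAM entry. */
static size_t HypercallFetchE820 (E820_ENTRY *Map)
{
  Map[0].Base   = 0x0;
  Map[0].Length = 0x9F000;
  Map[0].Type   = 1;
  return 1;
}

const E820_ENTRY *XenGetE820Map (size_t *Count)
{
  if (mE820Count == 0) {
    /* Issue the hypercall only on the first call; later calls
     * return the cached copy, which would give the same result. */
    mE820Count = HypercallFetchE820 (mE820Map);
  }
  *Count = mE820Count;
  return mE820Map;
}
```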

Re: [PATCH 1/1] BaseTools: Remove tool chain in tools_def.template

Liming Gao
 

Leif:

-----Original Message-----
From: Leif Lindholm [mailto:leif.lindholm@...]
Sent: Thursday, August 08, 2019 5:55 PM
To: devel@edk2.groups.io; Zhang, Shenglei <shenglei.zhang@...>
Cc: Feng, Bob C <bob.c.feng@...>; Gao, Liming
<liming.gao@...>; Ard Biesheuvel <ard.biesheuvel@...>;
Eugene Cohen <eugene@...>
Subject: Re: [edk2-devel] [PATCH 1/1] BaseTools: Remove tool chain in
tools_def.template

Hi Shenglei,

On Thu, Aug 08, 2019 at 04:09:18PM +0800, Zhang, Shenglei wrote:
Remove definition of RVCT, RVCTLINUX, RVCTCYGWIN and CLANG35
in tools_def.template. These tool chains are for ARM and AARCH64 only.
There have been no changes recently and they are not used.
https://bugzilla.tianocore.org/show_bug.cgi?id=1750
This still does not address my comment in the BZ that by deleting all
RVCT profiles before full VS support is enabled for (32-bit) ARM, we
orphan an awful lot of .asm files.
How about submitting another BZ for full ARM support in the VS tool chain?
When there is a real request, this support can be added in future.

This may not have much of a practical effect, since I doubt anyone is
using these toolchains today - but it does prevent someone from
actively going through and testing future updates (where before, they
may just have neglected to do so).

This point needs discussing rather than ignoring, and I think we're
getting too close to the freeze to consider the patch to go in as is
at this point.
Agree for more discussion.

Whenever this patch does go in should be in the week after a stable
tag is made, to give plenty of time for anyone affected to shout
before the next stable tag is made.

After the 2019.08 stable tag has been made, I am happy for a patch
going in that deletes CLANG35, RVCTCYGWIN and *one*of* RVCT/RVCTLINUX.
If no one maintains or uses it, this tool chain may not work now.
If so, do we still need to keep it?

Thanks
Liming
The deletion of the final RVCT profile needs further discussion.

Best Regards,

Leif

Re: [edk2-platforms: PATCH v2] Marvell/Drivers: XenonDxe: Explicitly disable HS400

Leif Lindholm
 

On Wed, Aug 07, 2019 at 09:46:12PM +0200, Marcin Wojtas wrote:
On another SoC revision, the capability register marks HS400 support
as enabled. However, in case the interface itself is powered at 3.3V,
this flag must be unset by the SdMmcOverride protocol callback -
otherwise the generic EmmcSwitchToHS400 () would be executed
and fail.

Ensure that in case of SlowMode or 3.3V operation, the HS400 capability
will be disabled in the SdMmc driver, along with other highest-speed
modes.

Signed-off-by: Marcin Wojtas <mw@...>
Reviewed-by: Leif Lindholm <leif.lindholm@...>
Pushed as ca4f575fd63b.

Thanks!

---
Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdhci.h | 1 +
Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdMmcOverride.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdhci.h b/Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdhci.h
index afc2b2f..2ad23e2 100644
--- a/Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdhci.h
+++ b/Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdhci.h
@@ -55,6 +55,7 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
#define SDHC_CAP_SDR50 BIT32
#define SDHC_CAP_SDR104 BIT33
#define SDHC_CAP_DDR50 BIT34
+#define SDHC_CAP_HS400 BIT63
#define SDHC_MAX_CURRENT_CAP 0x0048
#define SDHC_FORCE_EVT_AUTO_CMD 0x0050
#define SDHC_FORCE_EVT_ERR_INT 0x0052
diff --git a/Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdMmcOverride.c b/Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdMmcOverride.c
index 3b54459..afd650b 100644
--- a/Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdMmcOverride.c
+++ b/Silicon/Marvell/Drivers/SdMmc/XenonDxe/XenonSdMmcOverride.c
@@ -330,7 +330,8 @@ XenonSdMmcCapability (
Capability &= ~(UINT64)(SDHC_CAP_VOLTAGE_33 | SDHC_CAP_VOLTAGE_30);
} else {
Capability &= ~(UINT64)(SDHC_CAP_SDR104 | SDHC_CAP_DDR50 |
- SDHC_CAP_SDR50 | SDHC_CAP_VOLTAGE_18);
+ SDHC_CAP_SDR50 | SDHC_CAP_HS400 |
+ SDHC_CAP_VOLTAGE_18);
}

if (!SdMmcDesc.Xenon8BitBusEnabled) {
@@ -338,7 +339,7 @@ XenonSdMmcCapability (
}

if (SdMmcDesc.XenonSlowModeEnabled) {
- Capability &= ~(UINT64)(SDHC_CAP_SDR104 | SDHC_CAP_DDR50);
+ Capability &= ~(UINT64)(SDHC_CAP_SDR104 | SDHC_CAP_DDR50 | SDHC_CAP_HS400);
}

Capability &= ~(UINT64)(SDHC_CAP_SLOT_TYPE_MASK);
--
2.7.4

Re: [PATCH v4 23/35] OvmfPkg/XenPlatformPei: Rework memory detection

Anthony PERARD
 

On Wed, Aug 07, 2019 at 05:34:32PM +0200, Roger Pau Monné wrote:
On Mon, Jul 29, 2019 at 04:39:32PM +0100, Anthony PERARD wrote:
When running as a Xen PVH guest, there is no CMOS to read the memory
size from. Rework GetSystemMemorySize(Below|Above)4gb() so they can
work without CMOS by reading the e820 table.

Rework XenPublishRamRegions to also care for the reserved and ACPI
entry in the e820 table. The region that was added by InitializeXen()
isn't needed as that same entry is in the e820 table provided by
hvmloader.

MTRR settings aren't modified anymore; on HVM it's already done by
hvmloader, on PVH it is supposed to have sane defaults. MTRR will need
to be done properly but keeping what's already been done by programmes
^ firmware
I've used programmes instead of firmware because in the PVH case, OVMF is
the first firmware to run; libxl (and whatever it calls) is what
causes the MTRRs to be set up, and no firmware is involved in that.
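The other half of the rework described above, deriving the memory size from the e820 map instead of CMOS, can be sketched in plain C as follows; the entry layout, type value, and function name (LowMemTop) are illustrative, not the patch's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define E820_RAM 1   /* corresponds to EfiAcpiAddressRangeMemory */

typedef struct { uint64_t Base; uint64_t Length; uint32_t Type; } E820_ENTRY;

/* Highest end address of RAM below 4GB - roughly what a CMOS-free
 * GetSystemMemorySizeBelow4gb() would report from the e820 table. */
static uint64_t LowMemTop (const E820_ENTRY *Map, size_t Count)
{
  const uint64_t FourGb = 0x100000000ULL;
  uint64_t       Top    = 0;

  for (size_t i = 0; i < Count; i++) {
    uint64_t End = Map[i].Base + Map[i].Length;
    if (Map[i].Type != E820_RAM || Map[i].Base >= FourGb) {
      continue;          /* only RAM entries starting below 4GB count */
    }
    if (End > FourGb) {
      End = FourGb;      /* clamp entries straddling the 4GB boundary */
    }
    if (End > Top) {
      Top = End;
    }
  }
  return Top;
}
```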

+ //
+ // Round up the start address, and round down the end address.
+ //
+ Base = ALIGN_VALUE (Entry->BaseAddr, (UINT64)EFI_PAGE_SIZE);
+ End = (Entry->BaseAddr + Entry->Length) & ~(UINT64)EFI_PAGE_MASK;
+
+ switch (Entry->Type) {
+ case EfiAcpiAddressRangeMemory:
+ AddMemoryRangeHob (Base, End);
+ break;
+ case EfiAcpiAddressRangeACPI:
+ AddReservedMemoryRangeHob (Base, End, FALSE);
+ break;
+ case EfiAcpiAddressRangeReserved:
+ if (Base < LocalApic && LocalApic < End) {
Don't you also need to check for equality? In case such region starts
at Base == LocalApic?

I guess it doesn't matter that much since this is to workaround a
specific issue with hvmloader, but I would like to see this sorted out
in hvmloader so that there's no clash anymore.
Indeed, it doesn't matter too much, so I've been lazy. But Laszlo has
pointed that out as well, so there was going to be a patch to make the
workaround better. But I feel like I'm going to need to repost the
series, so I'm probably going to fix that as well.
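The boundary case Roger raises can be sketched as a half-open interval test; this is pure illustration of the off-by-one, not the eventual patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The quoted check (Base < LocalApic && LocalApic < End) misses a
 * reserved region that starts exactly at the local APIC base.
 * Treating the region as the half-open interval [Base, End) and
 * using <= on the lower bound catches that case too. */
static bool RangeCoversLocalApic (uint64_t Base, uint64_t End, uint64_t LocalApic)
{
  return Base <= LocalApic && LocalApic < End;
}
```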

Thanks,

--
Anthony PERARD

Re: [PATCH 1/1] BaseTools: Remove tool chain in tools_def.template

Leif Lindholm
 

On Thu, Aug 08, 2019 at 10:51:54AM +0000, Gao, Liming wrote:
Leif:

-----Original Message-----
From: Leif Lindholm [mailto:leif.lindholm@...]
Sent: Thursday, August 08, 2019 5:55 PM
To: devel@edk2.groups.io; Zhang, Shenglei <shenglei.zhang@...>
Cc: Feng, Bob C <bob.c.feng@...>; Gao, Liming
<liming.gao@...>; Ard Biesheuvel <ard.biesheuvel@...>;
Eugene Cohen <eugene@...>
Subject: Re: [edk2-devel] [PATCH 1/1] BaseTools: Remove tool chain in
tools_def.template

Hi Shenglei,

On Thu, Aug 08, 2019 at 04:09:18PM +0800, Zhang, Shenglei wrote:
Remove definition of RVCT, RVCTLINUX, RVCTCYGWIN and CLANG35
in tools_def.template. These tool chains are for ARM and AARCH64 only.
There have been no changes recently and they are not used.
https://bugzilla.tianocore.org/show_bug.cgi?id=1750
This still does not address my comment in the BZ that by deleting all
RVCT profiles before full VS support is enabled for (32-bit) ARM, we
orphan an awful lot of .asm files.
How about submitting another BZ for full ARM support in the VS tool chain?
When there is a real request, this support can be added in future.
Good point. I have raised
https://bugzilla.tianocore.org/show_bug.cgi?id=2065
and assigned it to myself.

This may not have much of a practical effect, since I doubt anyone is
using these toolchains today - but it does prevent someone from
actively going through and testing future updates (where before, they
may just have neglected to do so).

This point needs discussing rather than ignoring, and I think we're
getting too close to the freeze to consider the patch to go in as is
at this point.
Agree for more discussion.

Whenever this patch does go in should be in the week after a stable
tag is made, to give plenty of time for anyone affected to shout
before the next stable tag is made.

After the 2019.08 stable tag has been made, I am happy for a patch
going in that deletes CLANG35, RVCTCYGWIN and *one*of* RVCT/RVCTLINUX.
If no one maintains or uses it, this tool chain may not work now.
If so, do we still need to keep it?
So there are two questions here, really:
The first - "why can't we delete all of these now?" - I think I have
already explained above. (I am not suggesting you did not understand,
but I want to clarify that we also need agreement on the timing of
this patch in general.)

For the second: the Visual Studio assembler (for ARM/AArch64) shares
the .asm syntax (and source file name) with the RVCT assembler.
If we delete the whole RVCT family of profiles, we are left with a
bunch of .asm files that are defined to be assembled by a non-existent
toolchain family, whilst still having the name and syntax that we
will need when enabling the MSFT family.
A not exactly precise execution of
find * -name "*.inf" -exec grep -H "RVCT" {} \; | grep "|" | grep "\.asm" | wc -l
suggests 50 source files are affected in edk2. A further 6 in
edk2-platforms.

Hence, my preferred obsoletion path for RVCT would mean the family
(and at least one toolchain profile) remaining in the tree until
the Visual Studio enablement has switched the source files to MSFT.

Best Regards,

Leif

Re: [edk2-platforms: PATCH 0/9] Marvell Octeon CN913X SoC family support

Leif Lindholm
 

Hi Marcin,

On Thu, Aug 08, 2019 at 01:30:21AM +0200, Marcin Wojtas wrote:
Hi,

The Marvell Octeon CN913X SoC is a new device, built of
upgraded hardware blocks known from the previously supported line
of SoCs. It is available in 3 variants - CN9130/CN9131/CN9132.

CN9130 is made of a single Application Processor unit
(AP807) and one internal south bridge (CP115). It can
be extended to CN9131 (internal + external south bridges).
The CN9132 has 3 south bridge units.

This patchset adds all necessary components (.dsc/.fdf,
libraries, ACPI, DT) to support all 3 variants, which
are available on a modular CN913x Development Board.
Thanks for this contribution.
Do you have any further information on this SoC/Devboard?
Searching only gets me the CN8xxx SoCs.

The patches are available in the github:
https://github.com/MarvellEmbeddedProcessors/edk2-open-platform/commits/cn913x-upstream-r20190808

I'm looking forward to your comments or remarks.
The first issue I ran into is that 9130/9131 bail out on DSDT.aml:
"iasl"
-p/work/git/tianocore/Build/Cn9130DbA-AARCH64/DEBUG_GCC5/AARCH64/Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9130DbA/OUTPUT/Cn913xDbA/Dsdt.aml
/work/git/tianocore/Build/Cn9130DbA-AARCH64/DEBUG_GCC5/AARCH64/Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9130DbA/OUTPUT/Cn913xDbA/Dsdt.iiii
/work/git/tianocore/Build/Cn9130DbA-AARCH64/DEBUG_GCC5/AARCH64/Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9130DbA/OUTPUT/Cn913xDbA/Dsdt.iiii
17: DefinitionBlock ("DSDT.aml", "DSDT", 2, "MVEBU ", "CN9130DBA", 3)

Intel ACPI Component Architecture
ASL+ Optimizing Compiler/Disassembler version 20181213
Copyright (c) 2000 - 2018 Intel Corporation

ASL Input:
/work/git/tianocore/Build/Cn9130DbA-AARCH64/DEBUG_GCC5/AARCH64/Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9130DbA/OUTPUT/Cn913xDbA/Dsdt.iiii
- 328 lines, 9303 bytes, 97 keywords

Compilation complete. 1 Errors, 0 Warnings, 0 Remarks, 34
Optimizations
Error 6155 -
Invalid OEM Table ID ^ (Length cannot exceed 8 characters)

This does not affect Cn9132DbA, since that one does not include the
ACPI module. Is this intentional?

Which version of iasl has this been tested with?

(Please don't respin a v2, I will go through things a bit more and
provide feedback.)

Best Regards,

Leif

Best regards,
Marcin


Marcin Wojtas (9):
Marvell/Cn9130Db: Add ACPI tables
Marvell/Cn9130Db: Add DeviceTree
Marvell/Cn9130Db: Introduce board support
Marvell/Library: ArmadaSoCDescLib: Extend Xenon information
Marvell/Library: MppLib: Allow to configure more Xenon PHYs
Marvell/Library: IcuLib: Fix debug information
Marvell/Cn9131Db: Introduce board support
Marvell/Cn9132Db: Introduce board support
Marvell/Drivers: SmbiosPlatformDxe: Use more generic board name

Platform/Marvell/Cn913xDb/Cn9130DbA.dsc.inc | 107 ++++
Platform/Marvell/Cn913xDb/Cn9131DbA.dsc.inc | 72 +++
Platform/Marvell/Cn913xDb/Cn9132DbA.dsc.inc | 72 +++
Platform/Marvell/Cn913xDb/Cn9130DbA.dsc | 46 ++
Platform/Marvell/Cn913xDb/Cn9131DbA.dsc | 47 ++
Platform/Marvell/Cn913xDb/Cn9132DbA.dsc | 45 ++
Platform/Marvell/Cn913xDb/BoardDescriptionLib/Cn9130DbABoardDescLib.inf | 29 +
Platform/Marvell/Cn913xDb/BoardDescriptionLib/Cn9132DbABoardDescLib.inf | 29 +
Platform/Marvell/Cn913xDb/NonDiscoverableInitLib/NonDiscoverableInitLib.inf | 37 ++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9130DbA.inf | 56 ++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9131DbA.inf | 57 ++
Silicon/Marvell/OcteonTx/DeviceTree/T91/Cn9130DbA.inf | 22 +
Silicon/Marvell/OcteonTx/DeviceTree/T91/Cn9131DbA.inf | 22 +
Silicon/Marvell/OcteonTx/DeviceTree/T91/Cn9132DbA.inf | 22 +
Platform/Marvell/Cn913xDb/NonDiscoverableInitLib/NonDiscoverableInitLib.h | 25 +
Silicon/Marvell/Armada7k8k/Library/Armada7k8kSoCDescLib/Armada7k8kSoCDescLib.h | 5 +-
Silicon/Marvell/OcteonTx/AcpiTables/T91/AcpiHeader.h | 39 ++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn913xDbA/Pcie.h | 20 +
Silicon/Marvell/OcteonTx/AcpiTables/T91/IcuInterrupts.h | 36 ++
Platform/Marvell/Cn913xDb/BoardDescriptionLib/Cn9130DbABoardDescLib.c | 126 +++++
Platform/Marvell/Cn913xDb/BoardDescriptionLib/Cn9132DbABoardDescLib.c | 135 +++++
Platform/Marvell/Cn913xDb/NonDiscoverableInitLib/NonDiscoverableInitLib.c | 215 ++++++++
Silicon/Marvell/Armada7k8k/Library/Armada7k8kSoCDescLib/Armada7k8kSoCDescLib.c | 34 +-
Silicon/Marvell/Drivers/SmbiosPlatformDxe/SmbiosPlatformDxe.c | 4 +-
Silicon/Marvell/Library/IcuLib/IcuLib.c | 4 +-
Silicon/Marvell/Library/MppLib/MppLib.c | 4 +-
Platform/Marvell/Cn913xDb/Cn9130DbA.fdf.inc | 17 +
Platform/Marvell/Cn913xDb/Cn9131DbA.fdf.inc | 18 +
Platform/Marvell/Cn913xDb/Cn9132DbA.fdf.inc | 13 +
Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9131DbA/Ssdt.asl | 98 ++++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn913xDbA/Dsdt.asl | 324 ++++++++++++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn913xDbA/Mcfg.aslc | 41 ++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Fadt.aslc | 80 +++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Gtdt.aslc | 58 ++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Madt.aslc | 135 +++++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Pptt.aslc | 210 ++++++++
Silicon/Marvell/OcteonTx/AcpiTables/T91/Spcr.aslc | 49 ++
Silicon/Marvell/OcteonTx/DeviceTree/T91/armada-ap806-quad.dtsi | 43 ++
Silicon/Marvell/OcteonTx/DeviceTree/T91/armada-ap806.dtsi | 264 ++++++++++
Silicon/Marvell/OcteonTx/DeviceTree/T91/armada-common.dtsi | 10 +
Silicon/Marvell/OcteonTx/DeviceTree/T91/armada-cp110.dtsi | 552 ++++++++++++++++++++
Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9130-db-A.dts | 185 +++++++
Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9130-db.dtsi | 168 ++++++
Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9130.dtsi | 126 +++++
Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9131-db-A.dts | 29 +
Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9131-db.dtsi | 175 +++++++
Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9132-db-A.dts | 70 +++
Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9132-db.dtsi | 159 ++++++
48 files changed, 4113 insertions(+), 21 deletions(-)
create mode 100644 Platform/Marvell/Cn913xDb/Cn9130DbA.dsc.inc
create mode 100644 Platform/Marvell/Cn913xDb/Cn9131DbA.dsc.inc
create mode 100644 Platform/Marvell/Cn913xDb/Cn9132DbA.dsc.inc
create mode 100644 Platform/Marvell/Cn913xDb/Cn9130DbA.dsc
create mode 100644 Platform/Marvell/Cn913xDb/Cn9131DbA.dsc
create mode 100644 Platform/Marvell/Cn913xDb/Cn9132DbA.dsc
create mode 100644 Platform/Marvell/Cn913xDb/BoardDescriptionLib/Cn9130DbABoardDescLib.inf
create mode 100644 Platform/Marvell/Cn913xDb/BoardDescriptionLib/Cn9132DbABoardDescLib.inf
create mode 100644 Platform/Marvell/Cn913xDb/NonDiscoverableInitLib/NonDiscoverableInitLib.inf
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9130DbA.inf
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9131DbA.inf
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/Cn9130DbA.inf
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/Cn9131DbA.inf
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/Cn9132DbA.inf
create mode 100644 Platform/Marvell/Cn913xDb/NonDiscoverableInitLib/NonDiscoverableInitLib.h
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/AcpiHeader.h
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn913xDbA/Pcie.h
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/IcuInterrupts.h
create mode 100644 Platform/Marvell/Cn913xDb/BoardDescriptionLib/Cn9130DbABoardDescLib.c
create mode 100644 Platform/Marvell/Cn913xDb/BoardDescriptionLib/Cn9132DbABoardDescLib.c
create mode 100644 Platform/Marvell/Cn913xDb/NonDiscoverableInitLib/NonDiscoverableInitLib.c
create mode 100644 Platform/Marvell/Cn913xDb/Cn9130DbA.fdf.inc
create mode 100644 Platform/Marvell/Cn913xDb/Cn9131DbA.fdf.inc
create mode 100644 Platform/Marvell/Cn913xDb/Cn9132DbA.fdf.inc
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn9131DbA/Ssdt.asl
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn913xDbA/Dsdt.asl
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Cn913xDbA/Mcfg.aslc
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Fadt.aslc
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Gtdt.aslc
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Madt.aslc
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Pptt.aslc
create mode 100644 Silicon/Marvell/OcteonTx/AcpiTables/T91/Spcr.aslc
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/armada-ap806-quad.dtsi
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/armada-ap806.dtsi
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/armada-common.dtsi
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/armada-cp110.dtsi
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9130-db-A.dts
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9130-db.dtsi
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9130.dtsi
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9131-db-A.dts
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9131-db.dtsi
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9132-db-A.dts
create mode 100644 Silicon/Marvell/OcteonTx/DeviceTree/T91/cn9132-db.dtsi

--
2.7.4

[PATCH edk2-platforms v3 0/3] Robust Netsec Initialization

Masahisa Kojima
 

This patch series is a bugfix for a hang-up issue in the Netsec driver.

Some Linux distributions such as Ubuntu power down the ethernet phy
on reboot. In this case, Netsec initialization fails and the
system hangs.

This patch series adds robust netsec initialization: it puts the
ethernet phy in loopback mode to guarantee a stable RXCLK,
and waits for the media link to come up.

The disadvantage of this patch series is that the user has to wait
several seconds until the netsec driver gives up on ethernet link-up
if the ethernet cable is not connected.
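The link-up wait and its cost when no cable is plugged in can be sketched in plain C as a bounded polling loop; the function names, retry count, and delay are illustrative, not the driver's actual values:

```c
#include <assert.h>
#include <stdbool.h>

typedef bool (*LINK_POLL_FN)(void);

/* Poll the phy link status a bounded number of times, so an unplugged
 * cable costs a few seconds at initialization time instead of hanging
 * forever.  The real driver would stall (e.g. ~100ms) between polls. */
static bool WaitForLinkUp (LINK_POLL_FN Poll, unsigned MaxTries)
{
  for (unsigned i = 0; i < MaxTries; i++) {
    if (Poll ()) {
      return true;   /* media link came up */
    }
  }
  return false;      /* give up: cable likely not connected */
}

/* Stub poll functions for illustration. */
static bool AlwaysDown (void) { return false; }
static bool AlwaysUp   (void) { return true; }
```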

Changes since v2:
- use From: tag instead of Signed-off-by:

Changes since v1:
- modified wrong indent
- updated order of new parameter
- updated comment
- removed unrelated whitespace changes

Satoru Okamoto (3):
NetsecDxe: embed phy address into NETSEC SDK internal structure
NetsecDxe: put phy in loopback mode to guarantee stable RXCLK input
NetsecDxe: SnpInitialize() waits for media linking up

.../Socionext/DeveloperBox/DeveloperBox.dsc | 1 +
.../Drivers/Net/NetsecDxe/NetsecDxe.c | 236 ++++++++----------
.../Drivers/Net/NetsecDxe/NetsecDxe.dec | 1 +
.../Drivers/Net/NetsecDxe/NetsecDxe.h | 2 -
.../Drivers/Net/NetsecDxe/NetsecDxe.inf | 1 +
.../netsec_sdk/include/ogma_api.h | 6 +-
.../netsec_sdk/src/ogma_gmac_access.c | 61 ++---
.../netsec_sdk/src/ogma_internal.h | 2 +
.../netsec_sdk/src/ogma_misc.c | 78 ++++++
.../netsec_for_uefi/netsec_sdk/src/ogma_reg.h | 4 +
10 files changed, 211 insertions(+), 181 deletions(-)

--
2.17.1

[PATCH edk2-platforms v3 1/3] NetsecDxe: embed phy address into NETSEC SDK internal structure

Masahisa Kojima
 

From: Satoru Okamoto <okamoto.satoru@...>

This is a refactoring of phy address handling in the Netsec driver.
NETSEC SDK, the low-level driver underneath NetsecDxe, did not store
the phy address; the user had to specify it as an argument to
the SDK public functions.
That prevented NETSEC SDK from controlling the phy internally,
and it also burdened the user application with phy address management.

With that, we encapsulate the phy address into NETSEC SDK.

Signed-off-by: Masahisa Kojima <masahisa.kojima@...>
Reviewed-by: Leif Lindholm <leif.lindholm@...>
---
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c | 10 ++--
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.h | 2 -
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/include/ogma_api.h | 6 +-
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_gmac_access.c | 61 +++++---------------
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_internal.h | 2 +
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c | 6 ++
6 files changed, 28 insertions(+), 59 deletions(-)

diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c
index 160bb08a4632..0b91d4af44a3 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c
@@ -59,6 +59,8 @@ Probe (
// phy-interface
Param.gmac_config.phy_interface = OGMA_PHY_INTERFACE_RGMII;

+ Param.phy_addr = LanDriver->Dev->Resources[2].AddrRangeMin;
+
// Read and save the Permanent MAC Address
EepromBase = LanDriver->Dev->Resources[1].AddrRangeMin;
GetCurrentMacAddress (EepromBase, LanDriver->SnpMode.PermanentAddress.Addr);
@@ -107,8 +109,6 @@ Probe (
return EFI_DEVICE_ERROR;
}

- LanDriver->PhyAddress = LanDriver->Dev->Resources[2].AddrRangeMin;
-
ogma_enable_top_irq (LanDriver->Handle,
OGMA_TOP_IRQ_REG_NRM_RX | OGMA_TOP_IRQ_REG_NRM_TX);

@@ -280,7 +280,7 @@ SnpInitialize (
ReturnUnlock (EFI_DEVICE_ERROR);
}

- ogma_err = ogma_get_phy_link_status (LanDriver->Handle, LanDriver->PhyAddress,
+ ogma_err = ogma_get_phy_link_status (LanDriver->Handle,
&phy_link_status);
if (ogma_err != OGMA_ERR_OK) {
DEBUG ((DEBUG_ERROR,
@@ -438,7 +438,7 @@ NetsecPollPhyStatus (
LanDriver = INSTANCE_FROM_SNP_THIS (Snp);

// Update the media status
- ogma_err = ogma_get_phy_link_status (LanDriver->Handle, LanDriver->PhyAddress,
+ ogma_err = ogma_get_phy_link_status (LanDriver->Handle,
&phy_link_status);
if (ogma_err != OGMA_ERR_OK) {
DEBUG ((DEBUG_ERROR,
@@ -662,7 +662,7 @@ SnpGetStatus (
LanDriver = INSTANCE_FROM_SNP_THIS (Snp);

// Update the media status
- ogma_err = ogma_get_phy_link_status (LanDriver->Handle, LanDriver->PhyAddress,
+ ogma_err = ogma_get_phy_link_status (LanDriver->Handle,
&phy_link_status);
if (ogma_err != OGMA_ERR_OK) {
DEBUG ((DEBUG_ERROR,
diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.h b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.h
index 870833c8d31c..c95ff215199d 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.h
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.h
@@ -70,8 +70,6 @@ typedef struct {
NON_DISCOVERABLE_DEVICE *Dev;

NETSEC_DEVICE_PATH DevicePath;
-
- UINTN PhyAddress;
} NETSEC_DRIVER;

#define NETSEC_SIGNATURE SIGNATURE_32('n', 't', 's', 'c')
diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/include/ogma_api.h b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/include/ogma_api.h
index 66f39150430b..be80dd9ae1fd 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/include/ogma_api.h
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/include/ogma_api.h
@@ -318,6 +318,7 @@ struct ogma_param_s{
ogma_desc_ring_param_t desc_ring_param[OGMA_DESC_RING_ID_MAX+1];
ogma_gmac_config_t gmac_config;
ogma_uint8 mac_addr[6];
+ ogma_uint8 phy_addr;
};

struct ogma_tx_pkt_ctrl_s{
@@ -412,14 +413,12 @@ ogma_err_t ogma_set_gmac_mode (

void ogma_set_phy_reg (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_uint8 reg_addr,
ogma_uint16 value
);

ogma_uint16 ogma_get_phy_reg (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_uint8 reg_addr
);

@@ -660,7 +659,6 @@ ogma_err_t ogma_get_gmac_lpitimer_reg (

void ogma_set_phy_mmd_reg (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr,
ogma_uint16 value
@@ -668,14 +666,12 @@ void ogma_set_phy_mmd_reg (

ogma_uint16 ogma_get_phy_mmd_reg (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr
);

ogma_err_t ogma_get_phy_link_status (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_phy_link_status_t *phy_link_status_p
);

diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_gmac_access.c b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_gmac_access.c
index 88c149c10466..150d25ac3fbf 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_gmac_access.c
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_gmac_access.c
@@ -40,14 +40,12 @@
**********************************************************************/
static void ogma_set_phy_reg_sub (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 reg_addr,
ogma_uint16 value
);

static ogma_uint16 ogma_get_phy_reg_sub (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 reg_addr
);

@@ -57,14 +55,12 @@ void ogma_dump_gmac_stat (ogma_ctrl_t *ctrl_p);

static void ogma_set_phy_target_mmd_reg_addr (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr
);

static void ogma_set_phy_mmd_reg_sub (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr,
ogma_uint16 value
@@ -72,7 +68,6 @@ static void ogma_set_phy_mmd_reg_sub (

static ogma_uint16 ogma_get_phy_mmd_reg_sub (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr
);
@@ -435,7 +430,6 @@ ogma_err_t ogma_set_gmac_mode (

static void ogma_set_phy_reg_sub (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 reg_addr,
ogma_uint16 value
)
@@ -447,7 +441,7 @@ static void ogma_set_phy_reg_sub (
OGMA_GMAC_REG_ADDR_GDR,
value);

- cmd = ( ( phy_addr << OGMA_GMAC_GAR_REG_SHIFT_PA) |
+ cmd = ( ( ctrl_p->param.phy_addr << OGMA_GMAC_GAR_REG_SHIFT_PA) |
( reg_addr << OGMA_GMAC_GAR_REG_SHIFT_GR) |
( OGMA_CLOCK_RANGE_IDX << OGMA_GMAC_GAR_REG_SHIFT_CR) |
OGMA_GMAC_GAR_REG_GW |
@@ -466,7 +460,6 @@ static void ogma_set_phy_reg_sub (

void ogma_set_phy_reg (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_uint8 reg_addr,
ogma_uint16 value
)
@@ -476,27 +469,25 @@ void ogma_set_phy_reg (

if (( ctrl_p == NULL)
|| ( !ctrl_p->param.use_gmac_flag)
- || ( phy_addr >= 32)
|| ( reg_addr >= 32) ) {
pfdep_print( PFDEP_DEBUG_LEVEL_FATAL,
"An error occurred at ogma_set_phy_reg.\nPlease set valid argument.\n");
return;
}

- ogma_set_phy_reg_sub( ctrl_p, phy_addr, reg_addr, value);
+ ogma_set_phy_reg_sub( ctrl_p, reg_addr, value);

}

static ogma_uint16 ogma_get_phy_reg_sub (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 reg_addr
)
{

ogma_uint32 cmd;

- cmd = ( ( phy_addr << OGMA_GMAC_GAR_REG_SHIFT_PA) |
+ cmd = ( ( ctrl_p->param.phy_addr << OGMA_GMAC_GAR_REG_SHIFT_PA) |
( reg_addr << OGMA_GMAC_GAR_REG_SHIFT_GR) |
( OGMA_CLOCK_RANGE_IDX << OGMA_GMAC_GAR_REG_SHIFT_CR) |
OGMA_GMAC_GAR_REG_GB);
@@ -516,7 +507,6 @@ static ogma_uint16 ogma_get_phy_reg_sub (

ogma_uint16 ogma_get_phy_reg (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_uint8 reg_addr
)
{
@@ -525,14 +515,13 @@ ogma_uint16 ogma_get_phy_reg (

if ( ( ctrl_p == NULL)
|| ( !ctrl_p->param.use_gmac_flag)
- || ( phy_addr >= 32)
|| ( reg_addr >= 32) ) {
pfdep_print( PFDEP_DEBUG_LEVEL_FATAL,
"An error occurred at ogma_get_phy_reg.\nPlease set valid argument.\n");
return 0;
}

- value = ogma_get_phy_reg_sub(ctrl_p, phy_addr, reg_addr);
+ value = ogma_get_phy_reg_sub(ctrl_p, reg_addr);


return value;
@@ -702,7 +691,6 @@ ogma_err_t ogma_get_gmac_status (

static void ogma_set_phy_target_mmd_reg_addr (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr
)
@@ -713,21 +701,20 @@ static void ogma_set_phy_target_mmd_reg_addr (
cmd = ( ogma_uint32)dev_addr;

/*set command to MMD access control register */
- ogma_set_phy_reg_sub( ctrl_p, phy_addr, OGMA_PHY_REG_ADDR_MMD_AC, cmd);
+ ogma_set_phy_reg_sub( ctrl_p, OGMA_PHY_REG_ADDR_MMD_AC, cmd);

/* set MMD access address data register Write reg_addr */
- ogma_set_phy_reg_sub( ctrl_p, phy_addr, OGMA_PHY_REG_ADDR_MMD_AAD, reg_addr);
+ ogma_set_phy_reg_sub( ctrl_p, OGMA_PHY_REG_ADDR_MMD_AAD, reg_addr);

/* write value to MMD ADDR */
cmd = ( (1U << 14) | dev_addr);

/* set command to MMD access control register */
- ogma_set_phy_reg_sub( ctrl_p, phy_addr, OGMA_PHY_REG_ADDR_MMD_AC, cmd);
+ ogma_set_phy_reg_sub( ctrl_p, OGMA_PHY_REG_ADDR_MMD_AC, cmd);
}

static void ogma_set_phy_mmd_reg_sub (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr,
ogma_uint16 value
@@ -735,30 +722,27 @@ static void ogma_set_phy_mmd_reg_sub (
{
/* set target mmd reg_addr */
ogma_set_phy_target_mmd_reg_addr( ctrl_p,
- phy_addr,
dev_addr,
reg_addr);

/* Write value to MMD access address data register */
- ogma_set_phy_reg_sub( ctrl_p, phy_addr, OGMA_PHY_REG_ADDR_MMD_AAD, value);
+ ogma_set_phy_reg_sub( ctrl_p, OGMA_PHY_REG_ADDR_MMD_AAD, value);

}

static ogma_uint16 ogma_get_phy_mmd_reg_sub (
ogma_ctrl_t *ctrl_p,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr
)
{
/* set target mmd reg_addr */
ogma_set_phy_target_mmd_reg_addr( ctrl_p,
- phy_addr,
dev_addr,
reg_addr);

/* Read value for MMD access address data register */
- return ogma_get_phy_reg_sub( ctrl_p, phy_addr, OGMA_PHY_REG_ADDR_MMD_AAD);
+ return ogma_get_phy_reg_sub( ctrl_p, OGMA_PHY_REG_ADDR_MMD_AAD);
}

ogma_err_t ogma_set_gmac_lpictrl_reg (
@@ -878,7 +862,6 @@ ogma_err_t ogma_get_gmac_lpitimer_reg (

void ogma_set_phy_mmd_reg (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr,
ogma_uint16 value
@@ -890,8 +873,7 @@ void ogma_set_phy_mmd_reg (
return;
}

- if ( ( phy_addr > 31U) ||
- ( dev_addr > 31U) ) {
+ if ( dev_addr > 31U) {
return;
}

@@ -900,7 +882,6 @@ void ogma_set_phy_mmd_reg (
}

ogma_set_phy_mmd_reg_sub ( ctrl_p,
- phy_addr,
dev_addr,
reg_addr,
value);
@@ -911,7 +892,6 @@ void ogma_set_phy_mmd_reg (

ogma_uint16 ogma_get_phy_mmd_reg (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_uint8 dev_addr,
ogma_uint16 reg_addr
)
@@ -923,8 +903,7 @@ ogma_uint16 ogma_get_phy_mmd_reg (
return 0;
}

- if ( ( phy_addr > 31U) ||
- ( dev_addr > 31U) ) {
+ if ( dev_addr > 31U) {
return 0;
}

@@ -933,7 +912,6 @@ ogma_uint16 ogma_get_phy_mmd_reg (
}

value = ogma_get_phy_mmd_reg_sub ( ctrl_p,
- phy_addr,
dev_addr,
reg_addr);

@@ -943,7 +921,6 @@ ogma_uint16 ogma_get_phy_mmd_reg (

ogma_err_t ogma_get_phy_link_status (
ogma_handle_t ogma_handle,
- ogma_uint8 phy_addr,
ogma_phy_link_status_t *phy_link_status_p
)
{
@@ -955,10 +932,6 @@ ogma_err_t ogma_get_phy_link_status (
return OGMA_ERR_PARAM;
}

- if ( phy_addr >= 32) {
- return OGMA_ERR_RANGE;
- }
-
if ( !ctrl_p->param.use_gmac_flag) {
return OGMA_ERR_NOTAVAIL;
}
@@ -966,17 +939,17 @@ ogma_err_t ogma_get_phy_link_status (
pfdep_memset( phy_link_status_p, 0, sizeof( ogma_phy_link_status_t) );

/* Read PHY CONTROL Register */
- tmp = ogma_get_phy_reg_sub( ctrl_p, phy_addr, OGMA_PHY_REG_ADDR_CONTROL);
+ tmp = ogma_get_phy_reg_sub( ctrl_p, OGMA_PHY_REG_ADDR_CONTROL);

/* Read PHY STATUS Register */
- value = ogma_get_phy_reg_sub( ctrl_p, phy_addr, OGMA_PHY_REG_ADDR_STATUS);
+ value = ogma_get_phy_reg_sub( ctrl_p, OGMA_PHY_REG_ADDR_STATUS);

/* check latched_link_down_flag */
if ( ( value & ( 1U << OGMA_PHY_STATUS_REG_LINK_STATUS) ) == 0) {
phy_link_status_p->latched_link_down_flag = OGMA_TRUE;

/* Read PHY STATUS Register */
- value = ogma_get_phy_reg_sub( ctrl_p, phy_addr, OGMA_PHY_REG_ADDR_STATUS);
+ value = ogma_get_phy_reg_sub( ctrl_p, OGMA_PHY_REG_ADDR_STATUS);

}

@@ -1036,12 +1009,10 @@ ogma_err_t ogma_get_phy_link_status (

/* Read MASTER-SLAVE Control Register */
value = ogma_get_phy_reg_sub( ctrl_p,
- phy_addr,
OGMA_PHY_REG_ADDR_MASTER_SLAVE_CONTROL);

/* Read MASTER-SLAVE Status Register */
tmp = ogma_get_phy_reg_sub( ctrl_p,
- phy_addr,
OGMA_PHY_REG_ADDR_MASTER_SLAVE_STATUS);

/* Check Current Link Speed */
@@ -1061,12 +1032,10 @@ ogma_err_t ogma_get_phy_link_status (

/* Read Auto-Negotiation Advertisement register */
value = ogma_get_phy_reg_sub( ctrl_p,
- phy_addr,
OGMA_PHY_REG_ADDR_AUTO_NEGO_ABILTY);

/* Read Auto-Negotiation Link Partner Base Page Ability register */
tmp = ogma_get_phy_reg_sub( ctrl_p,
- phy_addr,
OGMA_PHY_REG_ADDR_AUTO_NEGO_LINK_PATNER_ABILTY);

value = ( ( ( value & tmp) >> OGMA_PHY_ANA_REG_TAF) &
@@ -1109,13 +1078,11 @@ ogma_err_t ogma_get_phy_link_status (

/* Read EEE advertisement register */
value = ogma_get_phy_mmd_reg_sub( ctrl_p,
- phy_addr,
OGMA_PHY_DEV_ADDR_AUTO_NEGO,
OGMA_PHY_AUTO_NEGO_REG_ADDR_EEE_ADVERTISE);

/* Read EEE link partner ability register */
tmp = ogma_get_phy_mmd_reg_sub( ctrl_p,
- phy_addr,
OGMA_PHY_DEV_ADDR_AUTO_NEGO,
OGMA_PHY_AUTO_NEGO_REG_ADDR_EEE_LP_ABILITY);

diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_internal.h b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_internal.h
index ed09a7ada85d..a7bc69cf0777 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_internal.h
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_internal.h
@@ -111,6 +111,8 @@ struct ogma_ctrl_s{

pfdep_phys_addr_t dummy_desc_entry_phys_addr;

+ ogma_uint8 phy_addr;
+
#ifdef OGMA_CONFIG_REC_STAT
/**
* Statistics information.
diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c
index 4dec66313aa1..7481d2da2d24 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c
@@ -388,6 +388,12 @@ ogma_err_t ogma_init (
return OGMA_ERR_DATA;
}

+ if ( param_p->phy_addr >= 32) {
+ pfdep_print( PFDEP_DEBUG_LEVEL_FATAL,
+ "Error: phy_addr out of range\n");
+ return OGMA_ERR_DATA;
+ }
+
ogma_err = ogma_probe_hardware( base_addr);

if ( ogma_err != OGMA_ERR_OK) {
--
2.17.1

[PATCH edk2-platforms v3 2/3] NetsecDxe: put phy in loopback mode to guarantee stable RXCLK input

Masahisa Kojima
 

From: Satoru Okamoto <okamoto.satoru@...>

NETSEC hardware requires a stable RXCLK input upon initialization,
which is triggered with DISCORE = 0.
However, the RXCLK input can be unstable depending on the phy chipset
and the deployed network environment, which can cause NETSEC to
hang during initialization.

We solve this platform/environment dependent issue by temporarily
putting the phy in loopback mode, which guarantees a stable RXCLK input.
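The trick above relies on the standard IEEE 802.3 clause-22 BMCR semantics: loopback mode makes the phy source RXCLK regardless of link state. A minimal sketch of the register manipulation, using generic BMCR names rather than this driver's OGMA_* accessors (the helper name and the idea of returning the value for the caller to write back and poll are assumptions here, not part of the patch):

```c
#include <stdint.h>

/* Standard IEEE 802.3 clause-22 BMCR bit positions; they match the
 * OGMA_PHY_CONTROL_REG_ISOLATE/POWER_DOWN/LOOPBACK values (10/11/14)
 * added by this patch. */
#define BMCR_ISOLATE    (1U << 10)
#define BMCR_POWER_DOWN (1U << 11)
#define BMCR_LOOPBACK   (1U << 14)

/* Compute the BMCR value that removes dormant settings and requests
 * loopback; the caller would write it to the phy and poll the register
 * until the new state takes effect. */
static uint16_t bmcr_enter_loopback (uint16_t bmcr)
{
  bmcr &= (uint16_t)~(BMCR_POWER_DOWN | BMCR_ISOLATE); /* wake the phy */
  bmcr |= BMCR_LOOPBACK;                               /* force RXCLK  */
  return bmcr;
}
```

Exiting loopback is the mirror image: clear bit 14, write, poll, then set the RESET bit (15) and wait for it to self-clear, as ogma_post_init_microengine does below.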

Signed-off-by: Masahisa Kojima <masahisa.kojima@...>
---
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c | 72 ++++++++++++++++++++
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_reg.h | 4 ++
2 files changed, 76 insertions(+)

diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c
index 7481d2da2d24..5f6ddc0c745e 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_misc.c
@@ -327,6 +327,60 @@ STATIC ogma_uint32 ogma_calc_pkt_ctrl_reg_param (
return param;
}

+STATIC
+void
+ogma_pre_init_microengine (
+ ogma_handle_t ogma_handle
+ )
+{
+ UINT16 Data;
+
+ /* Remove dormant settings */
+ Data = ogma_get_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL) &
+ ~((1U << OGMA_PHY_CONTROL_REG_POWER_DOWN) |
+ (1U << OGMA_PHY_CONTROL_REG_ISOLATE));
+
+ ogma_set_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL, Data);
+
+ while ((ogma_get_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL) &
+ ((1U << OGMA_PHY_CONTROL_REG_POWER_DOWN) |
+ (1U << OGMA_PHY_CONTROL_REG_ISOLATE))) != 0);
+
+ /* Put phy in loopback mode to guarantee RXCLK input */
+ Data |= (1U << OGMA_PHY_CONTROL_REG_LOOPBACK);
+
+ ogma_set_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL, Data);
+
+ while ((ogma_get_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL) &
+ (1U << OGMA_PHY_CONTROL_REG_LOOPBACK)) == 0);
+}
+
+STATIC
+void
+ogma_post_init_microengine (
+ IN ogma_handle_t ogma_handle
+ )
+{
+ UINT16 Data;
+
+ /* Get phy back to normal operation */
+ Data = ogma_get_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL) &
+ ~(1U << OGMA_PHY_CONTROL_REG_LOOPBACK);
+
+ ogma_set_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL, Data);
+
+ while ((ogma_get_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL) &
+ (1U << OGMA_PHY_CONTROL_REG_LOOPBACK)) != 0);
+
+ Data |= (1U << OGMA_PHY_CONTROL_REG_RESET);
+
+ /* Apply software reset */
+ ogma_set_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL, Data);
+
+ while ((ogma_get_phy_reg (ogma_handle, OGMA_PHY_REG_ADDR_CONTROL) &
+ (1U << OGMA_PHY_CONTROL_REG_RESET)) != 0);
+}
+
ogma_err_t ogma_init (
void *base_addr,
pfdep_dev_handle_t dev_handle,
@@ -551,6 +605,17 @@ ogma_err_t ogma_init (
ogma_write_reg( ctrl_p, OGMA_REG_ADDR_DMA_TMR_CTRL,
( ogma_uint32)( ( OGMA_CONFIG_CLK_HZ / OGMA_CLK_MHZ) - 1) );

+ /*
+ * Do pre-initialization tasks for microengine
+ *
+ * In particular, we put the phy in loopback mode
+ * in order to make sure RXCLK keeps being provided to the mac
+ * irrespective of the phy link status,
+ * which is required for microengine initialization.
+ * Loopback will be disabled once microengine initialization completes.
+ */
+ ogma_pre_init_microengine (ctrl_p);
+
/* start microengines */
ogma_write_reg( ctrl_p, OGMA_REG_ADDR_DIS_CORE, 0);

@@ -573,6 +638,13 @@ ogma_err_t ogma_init (
goto err;
}

+ /*
+ * Do post-initialization tasks for microengine
+ *
+ * We put the phy back in normal mode and apply a software reset.
+ */
+ ogma_post_init_microengine (ctrl_p);
+
/* clear microcode load end status */
ogma_write_reg( ctrl_p, OGMA_REG_ADDR_TOP_STATUS,
OGMA_TOP_IRQ_REG_ME_START);
diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_reg.h b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_reg.h
index 30c716352b37..ca769084cb31 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_reg.h
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/netsec_for_uefi/netsec_sdk/src/ogma_reg.h
@@ -138,8 +138,12 @@
/* bit fields for PHY CONTROL Register */
#define OGMA_PHY_CONTROL_REG_SPEED_SELECTION_MSB (6)
#define OGMA_PHY_CONTROL_REG_DUPLEX_MODE (8)
+#define OGMA_PHY_CONTROL_REG_ISOLATE (10)
+#define OGMA_PHY_CONTROL_REG_POWER_DOWN (11)
#define OGMA_PHY_CONTROL_REG_AUTO_NEGO_ENABLE (12)
#define OGMA_PHY_CONTROL_REG_SPEED_SELECTION_LSB (13)
+#define OGMA_PHY_CONTROL_REG_LOOPBACK (14)
+#define OGMA_PHY_CONTROL_REG_RESET (15)

/* bit fields for PHY STATUS Register */
#define OGMA_PHY_STATUS_REG_LINK_STATUS (2)
--
2.17.1

[PATCH edk2-platforms v3 3/3] NetsecDxe: SnpInitialize() waits for media linking up

Masahisa Kojima
 

From: Satoru Okamoto <okamoto.satoru@...>

The latest NetsecDxe requires issuing a phy reset at the
last stage of initialization to safely exit loopback mode.
However, as a result, it takes a couple of seconds for the link
state to stabilize, which can cause the auto-chosen pxeboot to
fail due to a MediaPresent check error.

This patch adds a link state check with a 5s timeout to NetsecDxe
initialization. The timeout value is adjustable via the
platform description file.
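The wait the patch implements is a bounded poll: check the link every 100 ms, give up after the configured number of seconds, and continue booting either way. A self-contained sketch of that loop, with the driver-specific NetsecUpdateLink() and MicroSecondDelay() abstracted as callbacks (the function and mock names are illustrative, not from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/* Poll interval matches the MicroSecondDelay (100000) used by the patch. */
#define POLL_INTERVAL_MS  100U

/* Poll link_up() every POLL_INTERVAL_MS until it reports media present or
 * timeout_seconds elapse. Returns whether the link came up in time. */
static bool wait_for_link (uint32_t timeout_seconds,
                           bool (*link_up)(void *ctx),
                           void (*delay_ms)(uint32_t ms),
                           void *ctx)
{
  uint32_t polls = timeout_seconds * 1000U / POLL_INTERVAL_MS;
  uint32_t i;

  for (i = 0; i < polls; i++) {
    if (link_up (ctx)) {
      return true;              /* media present before timeout */
    }
    delay_ms (POLL_INTERVAL_MS);
  }
  return false;                 /* timed out; caller proceeds anyway */
}

/* Example mock: the link comes up on the 3rd poll. */
static int g_poll_count;
static bool mock_link_up (void *ctx) { (void)ctx; return ++g_poll_count >= 3; }
static void mock_delay (uint32_t ms) { (void)ms; }
```

With the default PcdMediaDetectTimeoutOnBoot of 5, this gives 50 polls, matching the `Index < FixedPcdGet8 (...) * 10` bound in the SnpInitialize() diff below.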

Signed-off-by: Masahisa Kojima <masahisa.kojima@...>
---
Platform/Socionext/DeveloperBox/DeveloperBox.dsc | 1 +
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c | 232 +++++++++-----------
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.dec | 1 +
Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.inf | 1 +
4 files changed, 110 insertions(+), 125 deletions(-)

diff --git a/Platform/Socionext/DeveloperBox/DeveloperBox.dsc b/Platform/Socionext/DeveloperBox/DeveloperBox.dsc
index 97fb8c410c60..9f8cb68cdd26 100644
--- a/Platform/Socionext/DeveloperBox/DeveloperBox.dsc
+++ b/Platform/Socionext/DeveloperBox/DeveloperBox.dsc
@@ -137,6 +137,7 @@ [PcdsFixedAtBuild]
gNetsecDxeTokenSpaceGuid.PcdFlowCtrl|0
gNetsecDxeTokenSpaceGuid.PcdFlowCtrlStartThreshold|36
gNetsecDxeTokenSpaceGuid.PcdFlowCtrlStopThreshold|48
+ gNetsecDxeTokenSpaceGuid.PcdMediaDetectTimeoutOnBoot|5
gNetsecDxeTokenSpaceGuid.PcdPauseTime|256

gSynQuacerTokenSpaceGuid.PcdNetsecEepromBase|0x08080000
diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c
index 0b91d4af44a3..a304e02208fa 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.c
@@ -169,6 +169,98 @@ ExitUnlock:
return Status;
}

+EFI_STATUS
+EFIAPI
+NetsecUpdateLink (
+ IN EFI_SIMPLE_NETWORK_PROTOCOL *Snp
+ )
+{
+ NETSEC_DRIVER *LanDriver;
+ ogma_phy_link_status_t phy_link_status;
+ ogma_gmac_mode_t ogma_gmac_mode;
+ ogma_err_t ogma_err;
+ BOOLEAN ValidFlag;
+ ogma_gmac_mode_t GmacMode;
+ BOOLEAN RxRunningFlag;
+ BOOLEAN TxRunningFlag;
+ EFI_STATUS ErrorStatus;
+
+ LanDriver = INSTANCE_FROM_SNP_THIS (Snp);
+
+ // Update the media status
+ ogma_err = ogma_get_phy_link_status (LanDriver->Handle,
+ &phy_link_status);
+ if (ogma_err != OGMA_ERR_OK) {
+ DEBUG ((DEBUG_ERROR,
+ "NETSEC: ogma_get_phy_link_status failed with error code: %d\n",
+ (INT32)ogma_err));
+ ErrorStatus = EFI_DEVICE_ERROR;
+ goto Fail;
+ }
+
+ // Update the GMAC status
+ ogma_err = ogma_get_gmac_status (LanDriver->Handle, &ValidFlag, &GmacMode,
+ &RxRunningFlag, &TxRunningFlag);
+ if (ogma_err != OGMA_ERR_OK) {
+ DEBUG ((DEBUG_ERROR,
+ "NETSEC: ogma_get_gmac_status failed with error code: %d\n",
+ (INT32)ogma_err));
+ ErrorStatus = EFI_DEVICE_ERROR;
+ goto Fail;
+ }
+
+ // Stop GMAC when GMAC is running and physical link is down
+ if (RxRunningFlag && TxRunningFlag && !phy_link_status.up_flag) {
+ ogma_err = ogma_stop_gmac (LanDriver->Handle, OGMA_TRUE, OGMA_TRUE);
+ if (ogma_err != OGMA_ERR_OK) {
+ DEBUG ((DEBUG_ERROR,
+ "NETSEC: ogma_stop_gmac() failed with error status %d\n",
+ ogma_err));
+ ErrorStatus = EFI_DEVICE_ERROR;
+ goto Fail;
+ }
+ }
+
+ // Start GMAC when GMAC is stopped and physical link is up
+ if (!RxRunningFlag && !TxRunningFlag && phy_link_status.up_flag) {
+ ZeroMem (&ogma_gmac_mode, sizeof (ogma_gmac_mode_t));
+ ogma_gmac_mode.link_speed = phy_link_status.link_speed;
+ ogma_gmac_mode.half_duplex_flag = (ogma_bool)phy_link_status.half_duplex_flag;
+ if (!phy_link_status.half_duplex_flag && FixedPcdGet8 (PcdFlowCtrl)) {
+ ogma_gmac_mode.flow_ctrl_enable_flag = FixedPcdGet8 (PcdFlowCtrl);
+ ogma_gmac_mode.flow_ctrl_start_threshold = FixedPcdGet16 (PcdFlowCtrlStartThreshold);
+ ogma_gmac_mode.flow_ctrl_stop_threshold = FixedPcdGet16 (PcdFlowCtrlStopThreshold);
+ ogma_gmac_mode.pause_time = FixedPcdGet16 (PcdPauseTime);
+ }
+
+ ogma_err = ogma_set_gmac_mode (LanDriver->Handle, &ogma_gmac_mode);
+ if (ogma_err != OGMA_ERR_OK) {
+ DEBUG ((DEBUG_ERROR,
+ "NETSEC: ogma_set_gmac() failed with error status %d\n",
+ (INT32)ogma_err));
+ ErrorStatus = EFI_DEVICE_ERROR;
+ goto Fail;
+ }
+
+ ogma_err = ogma_start_gmac (LanDriver->Handle, OGMA_TRUE, OGMA_TRUE);
+ if (ogma_err != OGMA_ERR_OK) {
+ DEBUG ((DEBUG_ERROR,
+ "NETSEC: ogma_start_gmac() failed with error status %d\n",
+ (INT32)ogma_err));
+ ErrorStatus = EFI_DEVICE_ERROR;
+ goto Fail;
+ }
+ }
+
+ /* Update link status for external query */
+ Snp->Mode->MediaPresent = phy_link_status.up_flag;
+ return EFI_SUCCESS;
+
+Fail:
+ Snp->Mode->MediaPresent = FALSE;
+ return ErrorStatus;
+}
+
/*
* UEFI Initialize() function
*/
@@ -185,9 +277,9 @@ SnpInitialize (
EFI_TPL SavedTpl;
EFI_STATUS Status;

- ogma_phy_link_status_t phy_link_status;
ogma_err_t ogma_err;
- ogma_gmac_mode_t ogma_gmac_mode;
+
+ UINT32 Index;

// Check Snp Instance
if (Snp == NULL) {
@@ -271,48 +363,18 @@ SnpInitialize (
ogma_disable_desc_ring_irq (LanDriver->Handle, OGMA_DESC_RING_ID_NRM_TX,
OGMA_CH_IRQ_REG_EMPTY);

- // Stop and restart the physical link
- ogma_err = ogma_stop_gmac (LanDriver->Handle, OGMA_TRUE, OGMA_TRUE);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_stop_gmac() failed with error status %d\n",
- ogma_err));
- ReturnUnlock (EFI_DEVICE_ERROR);
- }
-
- ogma_err = ogma_get_phy_link_status (LanDriver->Handle,
- &phy_link_status);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_get_phy_link_status() failed error code %d\n",
- (INT32)ogma_err));
- ReturnUnlock (EFI_DEVICE_ERROR);
- }
-
- SetMem (&ogma_gmac_mode, sizeof (ogma_gmac_mode_t), 0);
- ogma_gmac_mode.link_speed = phy_link_status.link_speed;
- ogma_gmac_mode.half_duplex_flag = (ogma_bool)phy_link_status.half_duplex_flag;
- if ((!phy_link_status.half_duplex_flag) && FixedPcdGet8 (PcdFlowCtrl)) {
- ogma_gmac_mode.flow_ctrl_enable_flag = FixedPcdGet8 (PcdFlowCtrl);
- ogma_gmac_mode.flow_ctrl_start_threshold = FixedPcdGet16 (PcdFlowCtrlStartThreshold);
- ogma_gmac_mode.flow_ctrl_stop_threshold = FixedPcdGet16 (PcdFlowCtrlStopThreshold);
- ogma_gmac_mode.pause_time = FixedPcdGet16 (PcdPauseTime);
- }
-
- ogma_err = ogma_set_gmac_mode (LanDriver->Handle, &ogma_gmac_mode);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_set_gmac() failed with error status %d\n",
- (INT32)ogma_err));
- ReturnUnlock (EFI_DEVICE_ERROR);
- }
-
- ogma_err = ogma_start_gmac (LanDriver->Handle, OGMA_TRUE, OGMA_TRUE);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_start_gmac() failed with error status %d\n",
- (INT32)ogma_err));
- ReturnUnlock (EFI_DEVICE_ERROR);
+ // Wait for media linking up
+ for (Index = 0; Index < (UINT32)FixedPcdGet8 (PcdMediaDetectTimeoutOnBoot) * 10; Index++) {
+ Status = NetsecUpdateLink (Snp);
+ if (Status != EFI_SUCCESS) {
+ ReturnUnlock (EFI_DEVICE_ERROR);
+ }
+
+ if (Snp->Mode->MediaPresent) {
+ break;
+ }
+
+ MicroSecondDelay(100000);
}

// Declare the driver as initialized
@@ -420,14 +482,6 @@ NetsecPollPhyStatus (
)
{
EFI_SIMPLE_NETWORK_PROTOCOL *Snp;
- NETSEC_DRIVER *LanDriver;
- ogma_phy_link_status_t phy_link_status;
- ogma_gmac_mode_t ogma_gmac_mode;
- ogma_err_t ogma_err;
- BOOLEAN ValidFlag;
- ogma_gmac_mode_t GmacMode;
- BOOLEAN RxRunningFlag;
- BOOLEAN TxRunningFlag;

Snp = (EFI_SIMPLE_NETWORK_PROTOCOL *)Context;
if (Snp == NULL) {
@@ -435,66 +489,7 @@ NetsecPollPhyStatus (
return;
}

- LanDriver = INSTANCE_FROM_SNP_THIS (Snp);
-
- // Update the media status
- ogma_err = ogma_get_phy_link_status (LanDriver->Handle,
- &phy_link_status);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_get_phy_link_status failed with error code: %d\n",
- (INT32)ogma_err));
- return;
- }
-
- // Update the GMAC status
- ogma_err = ogma_get_gmac_status (LanDriver->Handle, &ValidFlag, &GmacMode,
- &RxRunningFlag, &TxRunningFlag);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_get_gmac_status failed with error code: %d\n",
- (INT32)ogma_err));
- return;
- }
-
- // Stop GMAC when GMAC is running and physical link is down
- if (RxRunningFlag && TxRunningFlag && !phy_link_status.up_flag) {
- ogma_err = ogma_stop_gmac (LanDriver->Handle, OGMA_TRUE, OGMA_TRUE);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_stop_gmac() failed with error status %d\n",
- ogma_err));
- return;
- }
- }
-
- // Start GMAC when GMAC is stopped and physical link is up
- if (!RxRunningFlag && !TxRunningFlag && phy_link_status.up_flag) {
- ZeroMem (&ogma_gmac_mode, sizeof (ogma_gmac_mode_t));
- ogma_gmac_mode.link_speed = phy_link_status.link_speed;
- ogma_gmac_mode.half_duplex_flag = (ogma_bool)phy_link_status.half_duplex_flag;
- if (!phy_link_status.half_duplex_flag && FixedPcdGet8 (PcdFlowCtrl)) {
- ogma_gmac_mode.flow_ctrl_enable_flag = FixedPcdGet8 (PcdFlowCtrl);
- ogma_gmac_mode.flow_ctrl_start_threshold = FixedPcdGet16 (PcdFlowCtrlStartThreshold);
- ogma_gmac_mode.flow_ctrl_stop_threshold = FixedPcdGet16 (PcdFlowCtrlStopThreshold);
- ogma_gmac_mode.pause_time = FixedPcdGet16 (PcdPauseTime);
- }
-
- ogma_err = ogma_set_gmac_mode (LanDriver->Handle, &ogma_gmac_mode);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_set_gmac() failed with error status %d\n",
- (INT32)ogma_err));
- return;
- }
-
- ogma_err = ogma_start_gmac (LanDriver->Handle, OGMA_TRUE, OGMA_TRUE);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_start_gmac() failed with error status %d\n",
- (INT32)ogma_err));
- }
- }
+ NetsecUpdateLink (Snp);
}

/*
@@ -631,7 +626,6 @@ SnpGetStatus (
pfdep_pkt_handle_t pkt_handle;
LIST_ENTRY *Link;

- ogma_phy_link_status_t phy_link_status;
ogma_err_t ogma_err;

// Check preliminaries
@@ -661,18 +655,6 @@ SnpGetStatus (
// Find the LanDriver structure
LanDriver = INSTANCE_FROM_SNP_THIS (Snp);

- // Update the media status
- ogma_err = ogma_get_phy_link_status (LanDriver->Handle,
- &phy_link_status);
- if (ogma_err != OGMA_ERR_OK) {
- DEBUG ((DEBUG_ERROR,
- "NETSEC: ogma_get_phy_link_status failed with error code: %d\n",
- (INT32)ogma_err));
- ReturnUnlock (EFI_DEVICE_ERROR);
- }
-
- Snp->Mode->MediaPresent = phy_link_status.up_flag;
-
ogma_err = ogma_clean_tx_desc_ring (LanDriver->Handle,
OGMA_DESC_RING_ID_NRM_TX);

diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.dec b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.dec
index 6b9f60293879..3b1de62c6e31 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.dec
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.dec
@@ -37,4 +37,5 @@ [PcdsFixedAtBuild.common]
gNetsecDxeTokenSpaceGuid.PcdFlowCtrl|0x0|UINT8|0x00000005
gNetsecDxeTokenSpaceGuid.PcdFlowCtrlStartThreshold|0x0|UINT16|0x00000006
gNetsecDxeTokenSpaceGuid.PcdFlowCtrlStopThreshold|0x0|UINT16|0x00000007
+ gNetsecDxeTokenSpaceGuid.PcdMediaDetectTimeoutOnBoot|0x0|UINT8|0x00000009
gNetsecDxeTokenSpaceGuid.PcdPauseTime|0x0|UINT16|0x00000008
diff --git a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.inf b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.inf
index 49dd28efc65b..0fb06ba80bf4 100644
--- a/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.inf
+++ b/Silicon/Socionext/SynQuacer/Drivers/Net/NetsecDxe/NetsecDxe.inf
@@ -61,4 +61,5 @@ [FixedPcd]
gNetsecDxeTokenSpaceGuid.PcdFlowCtrlStartThreshold
gNetsecDxeTokenSpaceGuid.PcdFlowCtrlStopThreshold
gNetsecDxeTokenSpaceGuid.PcdJumboPacket
+ gNetsecDxeTokenSpaceGuid.PcdMediaDetectTimeoutOnBoot
gNetsecDxeTokenSpaceGuid.PcdPauseTime
--
2.17.1

[PATCH 0/4] Build cache enhancement

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

Enhance the edk2 build cache with the patches below:
Patch 01/04: Improve the cache hit rate through a new cache checkpoint and hash algorithm
Patch 02/04: Print more info to explain why a module misses the build cache
Patch 03/04: Fix the unsafe [self.Arch][self.Name] key usage in the build cache
Patch 04/04: Add GenFds multi-thread support in the build cache

This patch set is based on patch set of [Patch 00/10 V8] Enable multiple process AutoGen
https://edk2.groups.io/g/devel/topic/patch_00_10_v8_enable/32779325?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,140,32779325

You can directly try this patch set in the branch:
https://github.com/shijunjing/edk2/tree/build_cache_improve_v1


Shi, Steven (4):
BaseTools: Improve the cache hit in the edk2 build cache
BaseTools: Print first cache missing file for build cachle
BaseTools: Change the [Arch][Name] module key in Build cache
BaseTools: Add GenFds multi-thread support in build cache

.../Source/Python/AutoGen/AutoGenWorker.py | 23 +
BaseTools/Source/Python/AutoGen/CacheIR.py | 28 +
BaseTools/Source/Python/AutoGen/DataPipe.py | 8 +
BaseTools/Source/Python/AutoGen/GenMake.py | 229 +++---
.../Source/Python/AutoGen/ModuleAutoGen.py | 742 ++++++++++++++++--
BaseTools/Source/Python/Common/GlobalData.py | 9 +
BaseTools/Source/Python/build/build.py | 171 ++--
7 files changed, 979 insertions(+), 231 deletions(-)
mode change 100644 => 100755 BaseTools/Source/Python/AutoGen/AutoGenWorker.py
create mode 100755 BaseTools/Source/Python/AutoGen/CacheIR.py
mode change 100644 => 100755 BaseTools/Source/Python/AutoGen/DataPipe.py
mode change 100644 => 100755 BaseTools/Source/Python/AutoGen/GenMake.py
mode change 100644 => 100755 BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
mode change 100644 => 100755 BaseTools/Source/Python/Common/GlobalData.py
mode change 100644 => 100755 BaseTools/Source/Python/build/build.py

--
2.17.1

[PATCH 1/4] BaseTools: Improve the cache hit in the edk2 build cache

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1927

The current cache hash algorithm does not parse and generate
the makefile to get the accurate dependency files for a
module. It instead uses the platform and package meta files
to get the module dependency in a quick but over-approximate
way. These meta files are monolithic and pull in many redundant
dependencies for the module, which makes the module build
cache miss easily.
This patch introduces one more cache checkpoint and a new
hash algorithm besides the current quick one. The new hash
algorithm leverages the module makefile to achieve more
accurate and precise dependency info for a module. When
the build cache misses with the first quick hash, the
Basetool will calculate the new one after the makefile is
generated and then check again.
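The two-checkpoint idea is independent of BaseTools itself: try a cheap over-approximate hash first, and only on a miss pay for the precise hash that requires makefile generation. A toy sketch of that lookup (the FNV-1a hash, the names, and the callback stand in for the real SHA-1 hashing and makefile generation; all are illustrative assumptions):

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy 64-bit FNV-1a hash standing in for the real content hashing. */
static uint64_t fnv1a (const char *s)
{
  uint64_t h = 1469598103934665603ULL;
  while (*s) {
    h ^= (uint8_t)*s++;
    h *= 1099511628211ULL;
  }
  return h;
}

typedef struct {
  uint64_t pre_make_hash;  /* checkpoint 1: quick hash over meta files     */
  uint64_t make_hash;      /* checkpoint 2: precise hash incl. makefile    */
} cache_entry_t;

/* Returns true on a hit at either checkpoint; gen_makefile() models the
 * expensive step that checkpoint 2 requires. */
static bool cache_lookup (const cache_entry_t *cached,
                          const char *meta_files,
                          const char *(*gen_makefile)(void))
{
  if (fnv1a (meta_files) == cached->pre_make_hash) {
    return true;                                       /* quick hit      */
  }
  return fnv1a (gen_makefile ()) == cached->make_hash; /* precise check  */
}

static const char *demo_makefile (void) { return "all: module.efi"; }
```

The payoff is exactly the scenario in this commit message: a meta-file edit that does not affect a module invalidates checkpoint 1, but checkpoint 2 still hits, so the module is not rebuilt.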

Cc: Liming Gao <liming.gao@...>
Cc: Bob Feng <bob.c.feng@...>
Signed-off-by: Steven Shi <steven.shi@...>
---
BaseTools/Source/Python/AutoGen/AutoGenWorker.py | 21 +++++++++++++++++++++
BaseTools/Source/Python/AutoGen/CacheIR.py | 28 ++++++++++++++++++++++++++++
BaseTools/Source/Python/AutoGen/DataPipe.py | 8 ++++++++
BaseTools/Source/Python/AutoGen/GenMake.py | 223 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------------------------------------------------------------------------------------
BaseTools/Source/Python/AutoGen/ModuleAutoGen.py | 643 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------------------------------------------
BaseTools/Source/Python/Common/GlobalData.py | 9 +++++++++
BaseTools/Source/Python/build/build.py | 122 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------------------
7 files changed, 862 insertions(+), 192 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
old mode 100644
new mode 100755
index e583828741..a84ed46f2e
--- a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
@@ -182,6 +182,12 @@ class AutoGenWorkerInProcess(mp.Process):
GlobalData.gDisableIncludePathCheck = False
GlobalData.gFdfParser = self.data_pipe.Get("FdfParser")
GlobalData.gDatabasePath = self.data_pipe.Get("DatabasePath")
+ GlobalData.gBinCacheSource = self.data_pipe.Get("BinCacheSource")
+ GlobalData.gBinCacheDest = self.data_pipe.Get("BinCacheDest")
+ GlobalData.gCacheIR = self.data_pipe.Get("CacheIR")
+ GlobalData.gEnableGenfdsMultiThread = self.data_pipe.Get("EnableGenfdsMultiThread")
+ GlobalData.file_lock = self.file_lock
+ CommandTarget = self.data_pipe.Get("CommandTarget")
pcd_from_build_option = []
for pcd_tuple in self.data_pipe.Get("BuildOptPcd"):
pcd_id = ".".join((pcd_tuple[0],pcd_tuple[1]))
@@ -193,10 +199,13 @@ class AutoGenWorkerInProcess(mp.Process):
FfsCmd = self.data_pipe.Get("FfsCommand")
if FfsCmd is None:
FfsCmd = {}
+ GlobalData.FfsCmd = FfsCmd
PlatformMetaFile = self.GetPlatformMetaFile(self.data_pipe.Get("P_Info").get("ActivePlatform"),
self.data_pipe.Get("P_Info").get("WorkspaceDir"))
libConstPcd = self.data_pipe.Get("LibConstPcd")
Refes = self.data_pipe.Get("REFS")
+ GlobalData.libConstPcd = libConstPcd
+ GlobalData.Refes = Refes
while True:
if self.module_queue.empty():
break
@@ -223,8 +232,20 @@ class AutoGenWorkerInProcess(mp.Process):
Ma.ConstPcd = libConstPcd[(Ma.MetaFile.File,Ma.MetaFile.Root,Ma.Arch,Ma.MetaFile.Path)]
if (Ma.MetaFile.File,Ma.MetaFile.Root,Ma.Arch,Ma.MetaFile.Path) in Refes:
Ma.ReferenceModules = Refes[(Ma.MetaFile.File,Ma.MetaFile.Root,Ma.Arch,Ma.MetaFile.Path)]
+ if GlobalData.gBinCacheSource and CommandTarget in [None, "", "all"]:
+ Ma.GenModuleFilesHash(GlobalData.gCacheIR)
+ Ma.GenPreMakefileHash(GlobalData.gCacheIR)
+ if Ma.CanSkipbyPreMakefileCache(GlobalData.gCacheIR):
+ continue
+
Ma.CreateCodeFile(False)
Ma.CreateMakeFile(False,GenFfsList=FfsCmd.get((Ma.MetaFile.File, Ma.Arch),[]))
+
+ if GlobalData.gBinCacheSource and CommandTarget in [None, "", "all"]:
+ Ma.GenMakeHeaderFilesHash(GlobalData.gCacheIR)
+ Ma.GenMakeHash(GlobalData.gCacheIR)
+ if Ma.CanSkipbyMakeCache(GlobalData.gCacheIR):
+ continue
except Empty:
pass
except:
diff --git a/BaseTools/Source/Python/AutoGen/CacheIR.py b/BaseTools/Source/Python/AutoGen/CacheIR.py
new file mode 100755
index 0000000000..2d9ffe3f0b
--- /dev/null
+++ b/BaseTools/Source/Python/AutoGen/CacheIR.py
@@ -0,0 +1,28 @@
+## @file
+# Build cache intermediate result and state
+#
+# Copyright (c) 2019, Intel Corporation. All rights reserved.<BR>
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+
+class ModuleBuildCacheIR():
+ def __init__(self, Path, Arch):
+ self.ModulePath = Path
+ self.ModuleArch = Arch
+ self.ModuleFilesHashDigest = None
+ self.ModuleFilesHashHexDigest = None
+ self.ModuleFilesChain = []
+ self.PreMakefileHashHexDigest = None
+ self.CreateCodeFileDone = False
+ self.CreateMakeFileDone = False
+ self.MakefilePath = None
+ self.AutoGenFileList = None
+ self.DependencyHeaderFileSet = None
+ self.MakeHeaderFilesHashChain = None
+ self.MakeHeaderFilesHashDigest = None
+ self.MakeHeaderFilesHashChain = []
+ self.MakeHashDigest = None
+ self.MakeHashHexDigest = None
+ self.MakeHashChain = []
+ self.PreMakeCacheHit = False
+ self.MakeCacheHit = False
diff --git a/BaseTools/Source/Python/AutoGen/DataPipe.py b/BaseTools/Source/Python/AutoGen/DataPipe.py
old mode 100644
new mode 100755
index 2052084bdb..84e77c301a
--- a/BaseTools/Source/Python/AutoGen/DataPipe.py
+++ b/BaseTools/Source/Python/AutoGen/DataPipe.py
@@ -158,3 +158,11 @@ class MemoryDataPipe(DataPipe):
self.DataContainer = {"FdfParser": True if GlobalData.gFdfParser else False}

self.DataContainer = {"LogLevel": EdkLogger.GetLevel()}
+
+ self.DataContainer = {"BinCacheSource":GlobalData.gBinCacheSource}
+
+ self.DataContainer = {"BinCacheDest":GlobalData.gBinCacheDest}
+
+ self.DataContainer = {"CacheIR":GlobalData.gCacheIR}
+
+ self.DataContainer = {"EnableGenfdsMultiThread":GlobalData.gEnableGenfdsMultiThread}
\ No newline at end of file
diff --git a/BaseTools/Source/Python/AutoGen/GenMake.py b/BaseTools/Source/Python/AutoGen/GenMake.py
old mode 100644
new mode 100755
index 5802ae5ae4..79387856bd
--- a/BaseTools/Source/Python/AutoGen/GenMake.py
+++ b/BaseTools/Source/Python/AutoGen/GenMake.py
@@ -906,6 +906,11 @@ cleanlib:
self._AutoGenObject.IncludePathList + self._AutoGenObject.BuildOptionIncPathList
)

+ self.DependencyHeaderFileSet = set()
+ if FileDependencyDict:
+ for Dependency in FileDependencyDict.values():
+ self.DependencyHeaderFileSet.update(set(Dependency))
+
# Check if header files are listed in metafile
# Get a list of unique module header source files from MetaFile
headerFilesInMetaFileSet = set()
@@ -1096,7 +1101,7 @@ cleanlib:
## For creating makefile targets for dependent libraries
def ProcessDependentLibrary(self):
for LibraryAutoGen in self._AutoGenObject.LibraryAutoGenList:
- if not LibraryAutoGen.IsBinaryModule and not LibraryAutoGen.CanSkipbyHash():
+ if not LibraryAutoGen.IsBinaryModule:
self.LibraryBuildDirectoryList.append(self.PlaceMacro(LibraryAutoGen.BuildDir, self.Macros))

## Return a list containing source file's dependencies
@@ -1110,114 +1115,9 @@ cleanlib:
def GetFileDependency(self, FileList, ForceInculeList, SearchPathList):
Dependency = {}
for F in FileList:
- Dependency[F] = self.GetDependencyList(F, ForceInculeList, SearchPathList)
+ Dependency[F] = GetDependencyList(self._AutoGenObject, self.FileCache, F, ForceInculeList, SearchPathList)
return Dependency

- ## Find dependencies for one source file
- #
- # By searching recursively "#include" directive in file, find out all the
- # files needed by given source file. The dependencies will be only searched
- # in given search path list.
- #
- # @param File The source file
- # @param ForceInculeList The list of files which will be included forcely
- # @param SearchPathList The list of search path
- #
- # @retval list The list of files the given source file depends on
- #
- def GetDependencyList(self, File, ForceList, SearchPathList):
- EdkLogger.debug(EdkLogger.DEBUG_1, "Try to get dependency files for %s" % File)
- FileStack = [File] + ForceList
- DependencySet = set()
-
- if self._AutoGenObject.Arch not in gDependencyDatabase:
- gDependencyDatabase[self._AutoGenObject.Arch] = {}
- DepDb = gDependencyDatabase[self._AutoGenObject.Arch]
-
- while len(FileStack) > 0:
- F = FileStack.pop()
-
- FullPathDependList = []
- if F in self.FileCache:
- for CacheFile in self.FileCache[F]:
- FullPathDependList.append(CacheFile)
- if CacheFile not in DependencySet:
- FileStack.append(CacheFile)
- DependencySet.update(FullPathDependList)
- continue
-
- CurrentFileDependencyList = []
- if F in DepDb:
- CurrentFileDependencyList = DepDb[F]
- else:
- try:
- Fd = open(F.Path, 'rb')
- FileContent = Fd.read()
- Fd.close()
- except BaseException as X:
- EdkLogger.error("build", FILE_OPEN_FAILURE, ExtraData=F.Path + "\n\t" + str(X))
- if len(FileContent) == 0:
- continue
- try:
- if FileContent[0] == 0xff or FileContent[0] == 0xfe:
- FileContent = FileContent.decode('utf-16')
- else:
- FileContent = FileContent.decode()
- except:
- # The file is not txt file. for example .mcb file
- continue
- IncludedFileList = gIncludePattern.findall(FileContent)
-
- for Inc in IncludedFileList:
- Inc = Inc.strip()
- # if there's macro used to reference header file, expand it
- HeaderList = gMacroPattern.findall(Inc)
- if len(HeaderList) == 1 and len(HeaderList[0]) == 2:
- HeaderType = HeaderList[0][0]
- HeaderKey = HeaderList[0][1]
- if HeaderType in gIncludeMacroConversion:
- Inc = gIncludeMacroConversion[HeaderType] % {"HeaderKey" : HeaderKey}
- else:
- # not known macro used in #include, always build the file by
- # returning a empty dependency
- self.FileCache[File] = []
- return []
- Inc = os.path.normpath(Inc)
- CurrentFileDependencyList.append(Inc)
- DepDb[F] = CurrentFileDependencyList
-
- CurrentFilePath = F.Dir
- PathList = [CurrentFilePath] + SearchPathList
- for Inc in CurrentFileDependencyList:
- for SearchPath in PathList:
- FilePath = os.path.join(SearchPath, Inc)
- if FilePath in gIsFileMap:
- if not gIsFileMap[FilePath]:
- continue
- # If isfile is called too many times, the performance is slow down.
- elif not os.path.isfile(FilePath):
- gIsFileMap[FilePath] = False
- continue
- else:
- gIsFileMap[FilePath] = True
- FilePath = PathClass(FilePath)
- FullPathDependList.append(FilePath)
- if FilePath not in DependencySet:
- FileStack.append(FilePath)
- break
- else:
- EdkLogger.debug(EdkLogger.DEBUG_9, "%s included by %s was not found "\
- "in any given path:\n\t%s" % (Inc, F, "\n\t".join(SearchPathList)))
-
- self.FileCache[F] = FullPathDependList
- DependencySet.update(FullPathDependList)
-
- DependencySet.update(ForceList)
- if File in DependencySet:
- DependencySet.remove(File)
- DependencyList = list(DependencySet) # remove duplicate ones
-
- return DependencyList

## CustomMakefile class
#
@@ -1599,7 +1499,7 @@ cleanlib:
def GetLibraryBuildDirectoryList(self):
DirList = []
for LibraryAutoGen in self._AutoGenObject.LibraryAutoGenList:
- if not LibraryAutoGen.IsBinaryModule and not LibraryAutoGen.CanSkipbyHash():
+ if not LibraryAutoGen.IsBinaryModule:
DirList.append(os.path.join(self._AutoGenObject.BuildDir, LibraryAutoGen.BuildDir))
return DirList

@@ -1735,7 +1635,7 @@ class TopLevelMakefile(BuildFile):
def GetLibraryBuildDirectoryList(self):
DirList = []
for LibraryAutoGen in self._AutoGenObject.LibraryAutoGenList:
- if not LibraryAutoGen.IsBinaryModule and not LibraryAutoGen.CanSkipbyHash():
+ if not LibraryAutoGen.IsBinaryModule:
DirList.append(os.path.join(self._AutoGenObject.BuildDir, LibraryAutoGen.BuildDir))
return DirList

@@ -1743,3 +1643,108 @@ class TopLevelMakefile(BuildFile):
if __name__ == '__main__':
pass

+## Find dependencies for one source file
+#
+# By recursively searching for "#include" directives in a file, find all the
+# files needed by the given source file. Dependencies are searched for only
+# in the given search path list.
+#
+# @param File The source file
+# @param ForceList The list of files which will be forcibly included
+# @param SearchPathList The list of search paths
+#
+# @retval list The list of files the given source file depends on
+#
+def GetDependencyList(AutoGenObject, FileCache, File, ForceList, SearchPathList):
+ EdkLogger.debug(EdkLogger.DEBUG_1, "Try to get dependency files for %s" % File)
+ FileStack = [File] + ForceList
+ DependencySet = set()
+
+ if AutoGenObject.Arch not in gDependencyDatabase:
+ gDependencyDatabase[AutoGenObject.Arch] = {}
+ DepDb = gDependencyDatabase[AutoGenObject.Arch]
+
+ while len(FileStack) > 0:
+ F = FileStack.pop()
+
+ FullPathDependList = []
+ if F in FileCache:
+ for CacheFile in FileCache[F]:
+ FullPathDependList.append(CacheFile)
+ if CacheFile not in DependencySet:
+ FileStack.append(CacheFile)
+ DependencySet.update(FullPathDependList)
+ continue
+
+ CurrentFileDependencyList = []
+ if F in DepDb:
+ CurrentFileDependencyList = DepDb[F]
+ else:
+ try:
+ Fd = open(F.Path, 'rb')
+ FileContent = Fd.read()
+ Fd.close()
+ except BaseException as X:
+ EdkLogger.error("build", FILE_OPEN_FAILURE, ExtraData=F.Path + "\n\t" + str(X))
+ if len(FileContent) == 0:
+ continue
+ try:
+ if FileContent[0] == 0xff or FileContent[0] == 0xfe:
+ FileContent = FileContent.decode('utf-16')
+ else:
+ FileContent = FileContent.decode()
+ except:
+ # The file is not a text file, e.g. a .mcb file
+ continue
+ IncludedFileList = gIncludePattern.findall(FileContent)
+
+ for Inc in IncludedFileList:
+ Inc = Inc.strip()
+ # if a macro is used to reference the header file, expand it
+ HeaderList = gMacroPattern.findall(Inc)
+ if len(HeaderList) == 1 and len(HeaderList[0]) == 2:
+ HeaderType = HeaderList[0][0]
+ HeaderKey = HeaderList[0][1]
+ if HeaderType in gIncludeMacroConversion:
+ Inc = gIncludeMacroConversion[HeaderType] % {"HeaderKey" : HeaderKey}
+ else:
+ # unknown macro used in #include; always build the file by
+ # returning an empty dependency list
+ FileCache[File] = []
+ return []
+ Inc = os.path.normpath(Inc)
+ CurrentFileDependencyList.append(Inc)
+ DepDb[F] = CurrentFileDependencyList
+
+ CurrentFilePath = F.Dir
+ PathList = [CurrentFilePath] + SearchPathList
+ for Inc in CurrentFileDependencyList:
+ for SearchPath in PathList:
+ FilePath = os.path.join(SearchPath, Inc)
+ if FilePath in gIsFileMap:
+ if not gIsFileMap[FilePath]:
+ continue
+ # If isfile is called too many times, performance slows down.
+ elif not os.path.isfile(FilePath):
+ gIsFileMap[FilePath] = False
+ continue
+ else:
+ gIsFileMap[FilePath] = True
+ FilePath = PathClass(FilePath)
+ FullPathDependList.append(FilePath)
+ if FilePath not in DependencySet:
+ FileStack.append(FilePath)
+ break
+ else:
+ EdkLogger.debug(EdkLogger.DEBUG_9, "%s included by %s was not found "\
+ "in any given path:\n\t%s" % (Inc, F, "\n\t".join(SearchPathList)))
+
+ FileCache[F] = FullPathDependList
+ DependencySet.update(FullPathDependList)
+
+ DependencySet.update(ForceList)
+ if File in DependencySet:
+ DependencySet.remove(File)
+ DependencyList = list(DependencySet) # remove duplicate ones
+
+ return DependencyList
\ No newline at end of file
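(Editorial aside, not part of the patch.) The module-level `GetDependencyList` moved out of `BuildFile` above walks `#include` directives recursively, resolving each header against the including file's directory and the search paths, with a per-file cache. A simplified, self-contained sketch of that algorithm (not the BaseTools implementation; macro expansion and the `gIsFileMap` optimization are omitted):

```python
import os
import re
import tempfile

# Matches both #include "x.h" and #include <x.h>
INCLUDE_RE = re.compile(r'^\s*#\s*include\s+["<]([^">]+)[">]', re.MULTILINE)

def get_dependency_list(src, search_paths, cache=None):
    """Recursively collect the headers a source file pulls in, resolving
    each #include against the file's own directory first, then the given
    search paths. The cache maps a file to its direct dependencies."""
    cache = {} if cache is None else cache
    dependencies, stack = set(), [src]
    while stack:
        f = stack.pop()
        if f not in cache:
            with open(f, "r") as fd:
                text = fd.read()
            deps = []
            for inc in INCLUDE_RE.findall(text):
                for sp in [os.path.dirname(f) or "."] + search_paths:
                    cand = os.path.normpath(os.path.join(sp, inc))
                    if os.path.isfile(cand):
                        deps.append(cand)
                        break
            cache[f] = deps
        for dep in cache[f]:
            if dep not in dependencies:
                dependencies.add(dep)
                stack.append(dep)
    dependencies.discard(src)
    return sorted(dependencies)

# Tiny demo: a.c -> b.h -> c.h
d = tempfile.mkdtemp()
with open(os.path.join(d, "c.h"), "w") as f:
    f.write("int x;\n")
with open(os.path.join(d, "b.h"), "w") as f:
    f.write('#include "c.h"\n')
with open(os.path.join(d, "a.c"), "w") as f:
    f.write('#include "b.h"\n')
deps = get_dependency_list(os.path.join(d, "a.c"), [d])
```

Hoisting the function to module level, as the patch does, is what lets `ModuleAutoGen.GenModuleFilesHash` call it with its own cache instead of going through a `BuildFile` instance.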
diff --git a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
old mode 100644
new mode 100755
index ed6822334e..5749b8a9fa
--- a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
@@ -26,6 +26,8 @@ from Workspace.MetaFileCommentParser import UsageList
from .GenPcdDb import CreatePcdDatabaseCode
from Common.caching import cached_class_function
from AutoGen.ModuleAutoGenHelper import PlatformInfo,WorkSpaceInfo
+from AutoGen.CacheIR import ModuleBuildCacheIR
+import json

## Mapping Makefile type
gMakeTypeMap = {TAB_COMPILER_MSFT:"nmake", "GCC":"gmake"}
@@ -252,6 +254,8 @@ class ModuleAutoGen(AutoGen):
self.AutoGenDepSet = set()
self.ReferenceModules = []
self.ConstPcd = {}
+ self.Makefile = None
+ self.FileDependCache = {}

def __init_platform_info__(self):
pinfo = self.DataPipe.Get("P_Info")
@@ -1594,12 +1598,37 @@ class ModuleAutoGen(AutoGen):

self.IsAsBuiltInfCreated = True

+ def CacheCopyFile(self, OriginDir, CopyDir, File):
+ sub_dir = os.path.relpath(File, CopyDir)
+ destination_file = os.path.join(OriginDir, sub_dir)
+ destination_dir = os.path.dirname(destination_file)
+ CreateDirectory(destination_dir)
+ try:
+ CopyFileOnChange(File, destination_dir)
+ except:
+ EdkLogger.quiet("[cache warning]: fail to copy file:%s to folder:%s" % (File, destination_dir))
+ return
+
def CopyModuleToCache(self):
- FileDir = path.join(GlobalData.gBinCacheDest, self.PlatformInfo.Name, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+ self.GenPreMakefileHash(GlobalData.gCacheIR)
+ if not (self.MetaFile.Path, self.Arch) in GlobalData.gCacheIR or \
+ not GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate PreMakefileHash for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return False
+
+ self.GenMakeHash(GlobalData.gCacheIR)
+ if not (self.MetaFile.Path, self.Arch) in GlobalData.gCacheIR or \
+ not GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].MakeHashChain or \
+ not GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate MakeHashChain for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return False
+
+ MakeHashStr = str(GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest)
+ FileDir = path.join(GlobalData.gBinCacheDest, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName, MakeHashStr)
+ FfsDir = path.join(GlobalData.gBinCacheDest, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, TAB_FV_DIRECTORY, "Ffs", self.Guid + self.Name, MakeHashStr)
+
CreateDirectory (FileDir)
- HashFile = path.join(self.BuildDir, self.Name + '.hash')
- if os.path.exists(HashFile):
- CopyFileOnChange(HashFile, FileDir)
+ self.SaveHashChainFileToCache(GlobalData.gCacheIR)
ModuleFile = path.join(self.OutputDir, self.Name + '.inf')
if os.path.exists(ModuleFile):
CopyFileOnChange(ModuleFile, FileDir)
@@ -1617,38 +1646,76 @@ class ModuleAutoGen(AutoGen):
CreateDirectory(destination_dir)
CopyFileOnChange(File, destination_dir)

- def AttemptModuleCacheCopy(self):
- # If library or Module is binary do not skip by hash
- if self.IsBinaryModule:
+ def SaveHashChainFileToCache(self, gDict):
+ if not GlobalData.gBinCacheDest:
+ return False
+
+ self.GenPreMakefileHash(gDict)
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate PreMakefileHash for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return False
+
+ self.GenMakeHash(gDict)
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate MakeHashChain for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return False
+
+ # save the hash chain list as a cache file
+ MakeHashStr = str(GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest)
+ CacheDestDir = path.join(GlobalData.gBinCacheDest, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+ CacheHashDestDir = path.join(CacheDestDir, MakeHashStr)
+ ModuleHashPair = path.join(CacheDestDir, self.Name + ".ModuleHashPair")
+ MakeHashChain = path.join(CacheHashDestDir, self.Name + ".MakeHashChain")
+ ModuleFilesChain = path.join(CacheHashDestDir, self.Name + ".ModuleFilesChain")
+
+ # save the HashChainDict as json file
+ CreateDirectory (CacheDestDir)
+ CreateDirectory (CacheHashDestDir)
+ try:
+ ModuleHashPairList = [] # tuple list: [tuple(PreMakefileHash, MakeHash)]
+ if os.path.exists(ModuleHashPair):
+ f = open(ModuleHashPair, 'r')
+ ModuleHashPairList = json.load(f)
+ f.close()
+ PreMakeHash = gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest
+ MakeHash = gDict[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest
+ ModuleHashPairList.append((PreMakeHash, MakeHash))
+ ModuleHashPairList = list(set(map(tuple, ModuleHashPairList)))
+ with open(ModuleHashPair, 'w') as f:
+ json.dump(ModuleHashPairList, f, indent=2)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache warning]: fail to save ModuleHashPair file in cache: %s" % ModuleHashPair)
+ return False
+
+ try:
+ with open(MakeHashChain, 'w') as f:
+ json.dump(gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain, f, indent=2)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache warning]: fail to save MakeHashChain file in cache: %s" % MakeHashChain)
+ return False
+
+ try:
+ with open(ModuleFilesChain, 'w') as f:
+ json.dump(gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesChain, f, indent=2)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache warning]: fail to save ModuleFilesChain file in cache: %s" % ModuleFilesChain)
return False
- # .inc is contains binary information so do not skip by hash as well
- for f_ext in self.SourceFileList:
- if '.inc' in str(f_ext):
- return False
- FileDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.Name, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
- HashFile = path.join(FileDir, self.Name + '.hash')
- if os.path.exists(HashFile):
- f = open(HashFile, 'r')
- CacheHash = f.read()
- f.close()
- self.GenModuleHash()
- if GlobalData.gModuleHash[self.Arch][self.Name]:
- if CacheHash == GlobalData.gModuleHash[self.Arch][self.Name]:
- for root, dir, files in os.walk(FileDir):
- for f in files:
- if self.Name + '.hash' in f:
- CopyFileOnChange(HashFile, self.BuildDir)
- else:
- File = path.join(root, f)
- sub_dir = os.path.relpath(File, FileDir)
- destination_file = os.path.join(self.OutputDir, sub_dir)
- destination_dir = os.path.dirname(destination_file)
- CreateDirectory(destination_dir)
- CopyFileOnChange(File, destination_dir)
- if self.Name == "PcdPeim" or self.Name == "PcdDxe":
- CreatePcdDatabaseCode(self, TemplateString(), TemplateString())
- return True
- return False
+
+ # save the AutoGen files and makefile for debug usage
+ CacheDebugDir = path.join(CacheHashDestDir, "CacheDebug")
+ CreateDirectory (CacheDebugDir)
+ CopyFileOnChange(gDict[(self.MetaFile.Path, self.Arch)].MakefilePath, CacheDebugDir)
+ if gDict[(self.MetaFile.Path, self.Arch)].AutoGenFileList:
+ for File in gDict[(self.MetaFile.Path, self.Arch)].AutoGenFileList:
+ CopyFileOnChange(str(File), CacheDebugDir)
+
+ return True

## Create makefile for the module and its dependent libraries
#
@@ -1657,6 +1724,11 @@ class ModuleAutoGen(AutoGen):
#
@cached_class_function
def CreateMakeFile(self, CreateLibraryMakeFile=True, GenFfsList = []):
+ gDict = GlobalData.gCacheIR
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].CreateMakeFileDone:
+ return
+
# nest this function inside it's only caller.
def CreateTimeStamp():
FileSet = {self.MetaFile.Path}
@@ -1687,8 +1759,8 @@ class ModuleAutoGen(AutoGen):
for LibraryAutoGen in self.LibraryAutoGenList:
LibraryAutoGen.CreateMakeFile()

- # Don't enable if hash feature enabled, CanSkip uses timestamps to determine build skipping
- if not GlobalData.gUseHashCache and self.CanSkip():
+ # CanSkip uses timestamps to determine build skipping
+ if self.CanSkip():
return

if len(self.CustomMakefile) == 0:
@@ -1704,6 +1776,24 @@ class ModuleAutoGen(AutoGen):

CreateTimeStamp()

+ MakefileType = Makefile._FileType
+ MakefileName = Makefile._FILE_NAME_[MakefileType]
+ MakefilePath = os.path.join(self.MakeFileDir, MakefileName)
+
+ MewIR = ModuleBuildCacheIR(self.MetaFile.Path, self.Arch)
+ MewIR.MakefilePath = MakefilePath
+ MewIR.DependencyHeaderFileSet = Makefile.DependencyHeaderFileSet
+ MewIR.CreateMakeFileDone = True
+ with GlobalData.file_lock:
+ try:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.MakefilePath = MakefilePath
+ IR.DependencyHeaderFileSet = Makefile.DependencyHeaderFileSet
+ IR.CreateMakeFileDone = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ except:
+ gDict[(self.MetaFile.Path, self.Arch)] = MewIR
+
def CopyBinaryFiles(self):
for File in self.Module.Binaries:
SrcPath = File.Path
@@ -1715,6 +1805,11 @@ class ModuleAutoGen(AutoGen):
# dependent libraries will be created
#
def CreateCodeFile(self, CreateLibraryCodeFile=True):
+ gDict = GlobalData.gCacheIR
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].CreateCodeFileDone:
+ return
+
if self.IsCodeFileCreated:
return

@@ -1730,8 +1825,9 @@ class ModuleAutoGen(AutoGen):
if not self.IsLibrary and CreateLibraryCodeFile:
for LibraryAutoGen in self.LibraryAutoGenList:
LibraryAutoGen.CreateCodeFile()
- # Don't enable if hash feature enabled, CanSkip uses timestamps to determine build skipping
- if not GlobalData.gUseHashCache and self.CanSkip():
+
+ # CanSkip uses timestamps to determine build skipping
+ if self.CanSkip():
return

AutoGenList = []
@@ -1771,6 +1867,16 @@ class ModuleAutoGen(AutoGen):
(" ".join(AutoGenList), " ".join(IgoredAutoGenList), self.Name, self.Arch))

self.IsCodeFileCreated = True
+ MewIR = ModuleBuildCacheIR(self.MetaFile.Path, self.Arch)
+ MewIR.CreateCodeFileDone = True
+ with GlobalData.file_lock:
+ try:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.CreateCodeFileDone = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ except:
+ gDict[(self.MetaFile.Path, self.Arch)] = MewIR
+
return AutoGenList

## Summarize the ModuleAutoGen objects of all libraries used by this module
@@ -1840,46 +1946,469 @@ class ModuleAutoGen(AutoGen):

return GlobalData.gModuleHash[self.Arch][self.Name].encode('utf-8')

+ def GenModuleFilesHash(self, gDict):
+ # Early exit if module or library has been hashed and is in memory
+ if (self.MetaFile.Path, self.Arch) in gDict:
+ if gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesChain:
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ DependencyFileSet = set()
+ # Add Module Meta file
+ DependencyFileSet.add(self.MetaFile)
+
+ # Add Module's source files
+ if self.SourceFileList:
+ for File in set(self.SourceFileList):
+ DependencyFileSet.add(File)
+
+ # Add the module's include header files
+ # Search dependency file list for each source file
+ SourceFileList = []
+ OutPutFileList = []
+ for Target in self.IntroTargetList:
+ SourceFileList.extend(Target.Inputs)
+ OutPutFileList.extend(Target.Outputs)
+ if OutPutFileList:
+ for Item in OutPutFileList:
+ if Item in SourceFileList:
+ SourceFileList.remove(Item)
+ SearchList = []
+ for file_path in self.IncludePathList + self.BuildOptionIncPathList:
+ # skip the folders in the platform BuildDir which have not been generated yet
+ if file_path.startswith(os.path.abspath(self.PlatformInfo.BuildDir)+os.sep):
+ continue
+ SearchList.append(file_path)
+ FileDependencyDict = {}
+ ForceIncludedFile = []
+ for F in SourceFileList:
+ # skip the files which have not been generated yet, because
+ # the SourceFileList usually contains intermediate build files, e.g. AutoGen.c
+ if not os.path.exists(F.Path):
+ continue
+ FileDependencyDict[F] = GenMake.GetDependencyList(self, self.FileDependCache, F, ForceIncludedFile, SearchList)
+
+ if FileDependencyDict:
+ for Dependency in FileDependencyDict.values():
+ DependencyFileSet.update(set(Dependency))
+
+ # Calculate the hash of all the dependency files above
+ # Initialize hash object
+ FileList = []
+ m = hashlib.md5()
+ for File in sorted(DependencyFileSet, key=lambda x: str(x)):
+ if not os.path.exists(str(File)):
+ EdkLogger.quiet("[cache warning]: header file %s is missing for module: %s[%s]" % (File, self.MetaFile.Path, self.Arch))
+ continue
+ f = open(str(File), 'rb')
+ Content = f.read()
+ f.close()
+ m.update(Content)
+ FileList.append((str(File), hashlib.md5(Content).hexdigest()))
+
+
+ MewIR = ModuleBuildCacheIR(self.MetaFile.Path, self.Arch)
+ MewIR.ModuleFilesHashDigest = m.digest()
+ MewIR.ModuleFilesHashHexDigest = m.hexdigest()
+ MewIR.ModuleFilesChain = FileList
+ with GlobalData.file_lock:
+ try:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.ModuleFilesHashDigest = m.digest()
+ IR.ModuleFilesHashHexDigest = m.hexdigest()
+ IR.ModuleFilesChain = FileList
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ except:
+ gDict[(self.MetaFile.Path, self.Arch)] = MewIR
+
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ def GenPreMakefileHash(self, gDict):
+ # Early exit if module or library has been hashed and is in memory
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest:
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ # skip binary module
+ if self.IsBinaryModule:
+ return
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest:
+ self.GenModuleFilesHash(gDict)
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate ModuleFilesHashDigest for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return
+
+ # Initialize hash object
+ m = hashlib.md5()
+
+ # Add Platform level hash
+ if ('PlatformHash') in gDict:
+ m.update(gDict[('PlatformHash')].encode('utf-8'))
+ else:
+ EdkLogger.quiet("[cache warning]: PlatformHash is missing")
+
+ # Add Package level hash
+ if self.DependentPackageList:
+ for Pkg in sorted(self.DependentPackageList, key=lambda x: x.PackageName):
+ if (Pkg.PackageName, 'PackageHash') in gDict:
+ m.update(gDict[(Pkg.PackageName, 'PackageHash')].encode('utf-8'))
+ else:
+ EdkLogger.quiet("[cache warning]: %s PackageHash needed by %s[%s] is missing" %(Pkg.PackageName, self.MetaFile.Name, self.Arch))
+
+ # Add Library hash
+ if self.LibraryAutoGenList:
+ for Lib in sorted(self.LibraryAutoGenList, key=lambda x: x.Name):
+ if not (Lib.MetaFile.Path, Lib.Arch) in gDict or \
+ not gDict[(Lib.MetaFile.Path, Lib.Arch)].ModuleFilesHashDigest:
+ Lib.GenPreMakefileHash(gDict)
+ m.update(gDict[(Lib.MetaFile.Path, Lib.Arch)].ModuleFilesHashDigest)
+
+ # Add Module self
+ m.update(gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest)
+
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.PreMakefileHashHexDigest = m.hexdigest()
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ def GenMakeHeaderFilesHash(self, gDict):
+ # Early exit if module or library has been hashed and is in memory
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashDigest:
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ # skip binary module
+ if self.IsBinaryModule:
+ return
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].CreateCodeFileDone:
+ if self.IsLibrary:
+ if (self.MetaFile.File,self.MetaFile.Root,self.Arch,self.MetaFile.Path) in GlobalData.libConstPcd:
+ self.ConstPcd = GlobalData.libConstPcd[(self.MetaFile.File,self.MetaFile.Root,self.Arch,self.MetaFile.Path)]
+ if (self.MetaFile.File,self.MetaFile.Root,self.Arch,self.MetaFile.Path) in GlobalData.Refes:
+ self.ReferenceModules = GlobalData.Refes[(self.MetaFile.File,self.MetaFile.Root,self.Arch,self.MetaFile.Path)]
+ self.CreateCodeFile()
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].CreateMakeFileDone:
+ self.CreateMakeFile(GenFfsList=GlobalData.FfsCmd.get((self.MetaFile.File, self.Arch),[]))
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].CreateCodeFileDone or \
+ not gDict[(self.MetaFile.Path, self.Arch)].CreateMakeFileDone:
+ EdkLogger.quiet("[cache warning]: Cannot create CodeFile or Makefile for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return
+
+ DependencyFileSet = set()
+ # Add Makefile
+ if gDict[(self.MetaFile.Path, self.Arch)].MakefilePath:
+ DependencyFileSet.add(gDict[(self.MetaFile.Path, self.Arch)].MakefilePath)
+ else:
+ EdkLogger.quiet("[cache warning]: makefile is missing for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+
+ # Add header files
+ if gDict[(self.MetaFile.Path, self.Arch)].DependencyHeaderFileSet:
+ for File in gDict[(self.MetaFile.Path, self.Arch)].DependencyHeaderFileSet:
+ DependencyFileSet.add(File)
+ else:
+ EdkLogger.quiet("[cache warning]: No dependency header found for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+
+ # Add AutoGen files
+ if self.AutoGenFileList:
+ for File in set(self.AutoGenFileList):
+ DependencyFileSet.add(File)
+
+ # Calculate the hash of all the dependency files above
+ # Initialize hash object
+ FileList = []
+ m = hashlib.md5()
+ for File in sorted(DependencyFileSet, key=lambda x: str(x)):
+ if not os.path.exists(str(File)):
+ EdkLogger.quiet("[cache warning]: header file: %s doesn't exist for module: %s[%s]" % (File, self.MetaFile.Path, self.Arch))
+ continue
+ f = open(str(File), 'rb')
+ Content = f.read()
+ f.close()
+ m.update(Content)
+ FileList.append((str(File), hashlib.md5(Content).hexdigest()))
+
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.AutoGenFileList = self.AutoGenFileList.keys()
+ IR.MakeHeaderFilesHashChain = FileList
+ IR.MakeHeaderFilesHashDigest = m.digest()
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ def GenMakeHash(self, gDict):
+ # Early exit if module or library has been hashed and is in memory
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain:
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ # skip binary module
+ if self.IsBinaryModule:
+ return
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest:
+ self.GenModuleFilesHash(gDict)
+ if not gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashDigest:
+ self.GenMakeHeaderFilesHash(gDict)
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesChain or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashDigest or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashChain:
+ EdkLogger.quiet("[cache warning]: Cannot generate ModuleFilesHash or MakeHeaderFilesHash for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return
+
+ # Initialize hash object
+ m = hashlib.md5()
+ MakeHashChain = []
+
+ # Add hash of makefile and dependency header files
+ m.update(gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashDigest)
+ New = list(set(gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashChain) - set(MakeHashChain))
+ New.sort(key=lambda x: str(x))
+ MakeHashChain += New
+
+ # Add Library hash
+ if self.LibraryAutoGenList:
+ for Lib in sorted(self.LibraryAutoGenList, key=lambda x: x.Name):
+ if not (Lib.MetaFile.Path, Lib.Arch) in gDict or \
+ not gDict[(Lib.MetaFile.Path, Lib.Arch)].MakeHashChain:
+ Lib.GenMakeHash(gDict)
+ if not gDict[(Lib.MetaFile.Path, Lib.Arch)].MakeHashDigest:
+ print("Cannot generate MakeHash for lib module:", Lib.MetaFile.Path, Lib.Arch)
+ continue
+ m.update(gDict[(Lib.MetaFile.Path, Lib.Arch)].MakeHashDigest)
+ New = list(set(gDict[(Lib.MetaFile.Path, Lib.Arch)].MakeHashChain) - set(MakeHashChain))
+ New.sort(key=lambda x: str(x))
+ MakeHashChain += New
+
+ # Add Module self
+ m.update(gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest)
+ New = list(set(gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesChain) - set(MakeHashChain))
+ New.sort(key=lambda x: str(x))
+ MakeHashChain += New
+
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.MakeHashDigest = m.digest()
+ IR.MakeHashHexDigest = m.hexdigest()
+ IR.MakeHashChain = MakeHashChain
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ ## Decide whether we can skip the left autogen and make process
+ def CanSkipbyPreMakefileCache(self, gDict):
+ if not GlobalData.gBinCacheSource:
+ return False
+
+ # If Module is binary, do not skip by cache
+ if self.IsBinaryModule:
+ return False
+
+ # .inc files contain binary information, so do not skip by hash either
+ for f_ext in self.SourceFileList:
+ if '.inc' in str(f_ext):
+ return False
+
+ # Get the module hash values from the stored cache and the current build,
+ # then check for a cache hit based on the hash values.
+ # If the cache hits, restore all the files from the cache.
+ FileDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+ FfsDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, TAB_FV_DIRECTORY, "Ffs", self.Guid + self.Name)
+
+ ModuleHashPairList = [] # tuple list: [tuple(PreMakefileHash, MakeHash)]
+ ModuleHashPair = path.join(FileDir, self.Name + ".ModuleHashPair")
+ if not os.path.exists(ModuleHashPair):
+ EdkLogger.quiet("[cache warning]: Cannot find ModuleHashPair file: %s" % ModuleHashPair)
+ return False
+
+ try:
+ f = open(ModuleHashPair, 'r')
+ ModuleHashPairList = json.load(f)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache warning]: fail to load ModuleHashPair file: %s" % ModuleHashPair)
+ return False
+
+ self.GenPreMakefileHash(gDict)
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest:
+ EdkLogger.quiet("[cache warning]: PreMakefileHashHexDigest is missing for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return False
+
+ MakeHashStr = None
+ CurrentPreMakeHash = gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest
+ for idx, (PreMakefileHash, MakeHash) in enumerate (ModuleHashPairList):
+ if PreMakefileHash == CurrentPreMakeHash:
+ MakeHashStr = str(MakeHash)
+
+ if not MakeHashStr:
+ return False
+
+ TargetHashDir = path.join(FileDir, MakeHashStr)
+ TargetFfsHashDir = path.join(FfsDir, MakeHashStr)
+
+ if not os.path.exists(TargetHashDir):
+ EdkLogger.quiet("[cache warning]: Cache folder is missing: %s" % TargetHashDir)
+ return False
+
+ for root, dir, files in os.walk(TargetHashDir):
+ for f in files:
+ File = path.join(root, f)
+ self.CacheCopyFile(self.OutputDir, TargetHashDir, File)
+ if os.path.exists(TargetFfsHashDir):
+ for root, dir, files in os.walk(TargetFfsHashDir):
+ for f in files:
+ File = path.join(root, f)
+ self.CacheCopyFile(self.FfsOutputDir, TargetFfsHashDir, File)
+
+ if self.Name == "PcdPeim" or self.Name == "PcdDxe":
+ CreatePcdDatabaseCode(self, TemplateString(), TemplateString())
+
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.PreMakeCacheHit = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ print("[cache hit]: checkpoint_PreMakefile:", self.MetaFile.Path, self.Arch)
+ #EdkLogger.quiet("cache hit: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return True
+
+ ## Decide whether we can skip the make process
+ def CanSkipbyMakeCache(self, gDict):
+ if not GlobalData.gBinCacheSource:
+ return False
+
+ # If Module is binary, do not skip by cache
+ if self.IsBinaryModule:
+ print("[cache miss]: checkpoint_Makefile: binary module:", self.MetaFile.Path, self.Arch)
+ return False
+
+ # .inc files contain binary information, so do not skip by hash either
+ for f_ext in self.SourceFileList:
+ if '.inc' in str(f_ext):
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.MakeCacheHit = False
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ print("[cache miss]: checkpoint_Makefile: .inc module:", self.MetaFile.Path, self.Arch)
+ return False
+
+ # Get the module hash values from stored cache and current build
+ # then check whether cache hit based on the hash values
+ # if cache hit, restore all the files from cache
+ FileDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+ FfsDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, TAB_FV_DIRECTORY, "Ffs", self.Guid + self.Name)
+
+ ModuleHashPairList = [] # tuple list: [tuple(PreMakefileHash, MakeHash)]
+ ModuleHashPair = path.join(FileDir, self.Name + ".ModuleHashPair")
+ if not os.path.exists(ModuleHashPair):
+ EdkLogger.quiet("[cache warning]: Cannot find ModuleHashPair file: %s" % ModuleHashPair)
+ return False
+
+ try:
+ f = open(ModuleHashPair, 'r')
+ ModuleHashPairList = json.load(f)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache warning]: Cannot load ModuleHashPair file: %s" % ModuleHashPair)
+ return False
+
+ self.GenMakeHash(gDict)
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest:
+ EdkLogger.quiet("[cache warning]: MakeHashHexDigest is missing for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return False
+
+ MakeHashStr = None
+ CurrentMakeHash = gDict[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest
+ for idx, (PreMakefileHash, MakeHash) in enumerate (ModuleHashPairList):
+ if MakeHash == CurrentMakeHash:
+ MakeHashStr = str(MakeHash)
+
+ if not MakeHashStr:
+ print("[cache miss]: checkpoint_Makefile:", self.MetaFile.Path, self.Arch)
+ return False
+
+ TargetHashDir = path.join(FileDir, MakeHashStr)
+ TargetFfsHashDir = path.join(FfsDir, MakeHashStr)
+ if not os.path.exists(TargetHashDir):
+ EdkLogger.quiet("[cache warning]: Cache folder is missing: %s" % TargetHashDir)
+ return False
+
+ for root, dir, files in os.walk(TargetHashDir):
+ for f in files:
+ File = path.join(root, f)
+ self.CacheCopyFile(self.OutputDir, TargetHashDir, File)
+
+ if os.path.exists(TargetFfsHashDir):
+ for root, dir, files in os.walk(TargetFfsHashDir):
+ for f in files:
+ File = path.join(root, f)
+ self.CacheCopyFile(self.FfsOutputDir, TargetFfsHashDir, File)
+
+ if self.Name == "PcdPeim" or self.Name == "PcdDxe":
+ CreatePcdDatabaseCode(self, TemplateString(), TemplateString())
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.MakeCacheHit = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ print("[cache hit]: checkpoint_Makefile:", self.MetaFile.Path, self.Arch)
+ return True
+
## Decide whether we can skip the ModuleAutoGen process
- def CanSkipbyHash(self):
+ def CanSkipbyCache(self, gDict):
# Hashing feature is off
- if not GlobalData.gUseHashCache:
+ if not GlobalData.gBinCacheSource:
return False

- # Initialize a dictionary for each arch type
- if self.Arch not in GlobalData.gBuildHashSkipTracking:
- GlobalData.gBuildHashSkipTracking[self.Arch] = dict()
+ if self in GlobalData.gBuildHashSkipTracking:
+ return GlobalData.gBuildHashSkipTracking[self]

# If library or Module is binary do not skip by hash
if self.IsBinaryModule:
+ GlobalData.gBuildHashSkipTracking[self] = False
return False

# .inc is contains binary information so do not skip by hash as well
for f_ext in self.SourceFileList:
if '.inc' in str(f_ext):
+ GlobalData.gBuildHashSkipTracking[self] = False
return False

- # Use Cache, if exists and if Module has a copy in cache
- if GlobalData.gBinCacheSource and self.AttemptModuleCacheCopy():
+ if not (self.MetaFile.Path, self.Arch) in gDict:
+ return False
+
+ if gDict[(self.MetaFile.Path, self.Arch)].PreMakeCacheHit:
+ GlobalData.gBuildHashSkipTracking[self] = True
return True

- # Early exit for libraries that haven't yet finished building
- HashFile = path.join(self.BuildDir, self.Name + ".hash")
- if self.IsLibrary and not os.path.exists(HashFile):
- return False
+ if gDict[(self.MetaFile.Path, self.Arch)].MakeCacheHit:
+ GlobalData.gBuildHashSkipTracking[self] = True
+ return True

- # Return a Boolean based on if can skip by hash, either from memory or from IO.
- if self.Name not in GlobalData.gBuildHashSkipTracking[self.Arch]:
- # If hashes are the same, SaveFileOnChange() will return False.
- GlobalData.gBuildHashSkipTracking[self.Arch][self.Name] = not SaveFileOnChange(HashFile, self.GenModuleHash(), True)
- return GlobalData.gBuildHashSkipTracking[self.Arch][self.Name]
- else:
- return GlobalData.gBuildHashSkipTracking[self.Arch][self.Name]
+ return False

## Decide whether we can skip the ModuleAutoGen process
# If any source file is newer than the module than we cannot skip
#
def CanSkip(self):
+ # Don't skip if cache feature enabled
+ if GlobalData.gUseHashCache or GlobalData.gBinCacheDest or GlobalData.gBinCacheSource:
+ return False
+
if self.MakeFileDir in GlobalData.gSikpAutoGenCache:
return True
if not os.path.exists(self.TimeStampPath):
diff --git a/BaseTools/Source/Python/Common/GlobalData.py b/BaseTools/Source/Python/Common/GlobalData.py
old mode 100644
new mode 100755
index bd45a43728..df10814f04
--- a/BaseTools/Source/Python/Common/GlobalData.py
+++ b/BaseTools/Source/Python/Common/GlobalData.py
@@ -119,3 +119,12 @@ gModuleBuildTracking = dict()
# Top Dict: Key: Arch Type Value: Dictionary
# Second Dict: Key: Module\Library Name Value: True\False
gBuildHashSkipTracking = dict()
+
+# Common dictionary to share module cache intermediate result and state
+gCacheIR = None
+# Common lock for the multiple process AutoGens
+file_lock = None
+# Common dictionary to share platform libraries' constant Pcd
+libConstPcd = None
+# Common dictionary to share platform libraries' reference info
+Refes = None
\ No newline at end of file
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
old mode 100644
new mode 100755
index 4de3f43c27..84540d61f5
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -595,7 +595,7 @@ class BuildTask:
#
def AddDependency(self, Dependency):
for Dep in Dependency:
- if not Dep.BuildObject.IsBinaryModule and not Dep.BuildObject.CanSkipbyHash():
+ if not Dep.BuildObject.IsBinaryModule and not Dep.BuildObject.CanSkipbyCache(GlobalData.gCacheIR):
self.DependencyList.append(BuildTask.New(Dep)) # BuildTask list

## The thread wrapper of LaunchCommand function
@@ -811,7 +811,7 @@ class Build():
self.AutoGenMgr = None
EdkLogger.info("")
os.chdir(self.WorkspaceDir)
- self.share_data = Manager().dict()
+ GlobalData.gCacheIR = Manager().dict()
self.log_q = log_q
def StartAutoGen(self,mqueue, DataPipe,SkipAutoGen,PcdMaList,share_data):
try:
@@ -820,6 +820,13 @@ class Build():
feedback_q = mp.Queue()
file_lock = mp.Lock()
error_event = mp.Event()
+ GlobalData.file_lock = file_lock
+ FfsCmd = DataPipe.Get("FfsCommand")
+ if FfsCmd is None:
+ FfsCmd = {}
+ GlobalData.FfsCmd = FfsCmd
+ GlobalData.libConstPcd = DataPipe.Get("LibConstPcd")
+ GlobalData.Refes = DataPipe.Get("REFS")
auto_workers = [AutoGenWorkerInProcess(mqueue,DataPipe.dump_file,feedback_q,file_lock,share_data,self.log_q,error_event) for _ in range(self.ThreadNumber)]
self.AutoGenMgr = AutoGenManager(auto_workers,feedback_q,error_event)
self.AutoGenMgr.start()
@@ -827,9 +834,21 @@ class Build():
w.start()
if PcdMaList is not None:
for PcdMa in PcdMaList:
+ if GlobalData.gBinCacheSource and self.Target in [None, "", "all"]:
+ PcdMa.GenModuleFilesHash(share_data)
+ PcdMa.GenPreMakefileHash(share_data)
+ if PcdMa.CanSkipbyPreMakefileCache(share_data):
+ continue
+
PcdMa.CreateCodeFile(False)
PcdMa.CreateMakeFile(False,GenFfsList = DataPipe.Get("FfsCommand").get((PcdMa.MetaFile.File, PcdMa.Arch),[]))

+ if GlobalData.gBinCacheSource and self.Target in [None, "", "all"]:
+ PcdMa.GenMakeHeaderFilesHash(share_data)
+ PcdMa.GenMakeHash(share_data)
+ if PcdMa.CanSkipbyMakeCache(share_data):
+ continue
+
self.AutoGenMgr.join()
rt = self.AutoGenMgr.Status
return rt, 0
@@ -1199,10 +1218,11 @@ class Build():
mqueue.put(m)

AutoGenObject.DataPipe.DataContainer = {"FfsCommand":FfsCommand}
+ AutoGenObject.DataPipe.DataContainer = {"CommandTarget": self.Target}
self.Progress.Start("Generating makefile and code")
data_pipe_file = os.path.join(AutoGenObject.BuildDir, "GlobalVar_%s_%s.bin" % (str(AutoGenObject.Guid),AutoGenObject.Arch))
AutoGenObject.DataPipe.dump(data_pipe_file)
- autogen_rt, errorcode = self.StartAutoGen(mqueue, AutoGenObject.DataPipe, self.SkipAutoGen, PcdMaList,self.share_data)
+ autogen_rt,errorcode = self.StartAutoGen(mqueue, AutoGenObject.DataPipe, self.SkipAutoGen, PcdMaList, GlobalData.gCacheIR)
self.Progress.Stop("done!")
if not autogen_rt:
self.AutoGenMgr.TerminateWorkers()
@@ -1799,6 +1819,15 @@ class Build():
CmdListDict = None
if GlobalData.gEnableGenfdsMultiThread and self.Fdf:
CmdListDict = self._GenFfsCmd(Wa.ArchList)
+
+ # Add Platform and Package level hash in share_data for module hash calculation later
+ if GlobalData.gBinCacheSource or GlobalData.gBinCacheDest:
+ GlobalData.gCacheIR[('PlatformHash')] = GlobalData.gPlatformHash
+ for PkgName in GlobalData.gPackageHash.keys():
+ GlobalData.gCacheIR[(PkgName, 'PackageHash')] = GlobalData.gPackageHash[PkgName]
+ GlobalData.file_lock = mp.Lock()
+ GlobalData.FfsCmd = CmdListDict
+
self.Progress.Stop("done!")
MaList = []
ExitFlag = threading.Event()
@@ -1808,20 +1837,23 @@ class Build():
AutoGenStart = time.time()
GlobalData.gGlobalDefines['ARCH'] = Arch
Pa = PlatformAutoGen(Wa, self.PlatformFile, BuildTarget, ToolChain, Arch)
+ GlobalData.libConstPcd = Pa.DataPipe.Get("LibConstPcd")
+ GlobalData.Refes = Pa.DataPipe.Get("REFS")
for Module in Pa.Platform.Modules:
if self.ModuleFile.Dir == Module.Dir and self.ModuleFile.Name == Module.Name:
Ma = ModuleAutoGen(Wa, Module, BuildTarget, ToolChain, Arch, self.PlatformFile,Pa.DataPipe)
if Ma is None:
continue
MaList.append(Ma)
- if Ma.CanSkipbyHash():
- self.HashSkipModules.append(Ma)
- if GlobalData.gBinCacheSource:
- EdkLogger.quiet("cache hit: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
- continue
- else:
- if GlobalData.gBinCacheSource:
- EdkLogger.quiet("cache miss: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
+
+ if GlobalData.gBinCacheSource and self.Target in [None, "", "all"]:
+ Ma.GenModuleFilesHash(GlobalData.gCacheIR)
+ Ma.GenPreMakefileHash(GlobalData.gCacheIR)
+ if Ma.CanSkipbyPreMakefileCache(GlobalData.gCacheIR):
+ self.HashSkipModules.append(Ma)
+ EdkLogger.quiet("cache hit: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
+ continue
+
# Not to auto-gen for targets 'clean', 'cleanlib', 'cleanall', 'run', 'fds'
if self.Target not in ['clean', 'cleanlib', 'cleanall', 'run', 'fds']:
# for target which must generate AutoGen code and makefile
@@ -1841,6 +1873,18 @@ class Build():
self.Progress.Stop("done!")
if self.Target == "genmake":
return True
+
+ if GlobalData.gBinCacheSource and self.Target in [None, "", "all"]:
+ Ma.GenMakeHeaderFilesHash(GlobalData.gCacheIR)
+ Ma.GenMakeHash(GlobalData.gCacheIR)
+ if Ma.CanSkipbyMakeCache(GlobalData.gCacheIR):
+ self.HashSkipModules.append(Ma)
+ EdkLogger.quiet("cache hit: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
+ continue
+ else:
+ EdkLogger.quiet("cache miss: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
+ Ma.PrintFirstMakeCacheMissFile(GlobalData.gCacheIR)
+
self.BuildModules.append(Ma)
# Initialize all modules in tracking to 'FAIL'
if Ma.Arch not in GlobalData.gModuleBuildTracking:
@@ -1985,11 +2029,18 @@ class Build():
if GlobalData.gEnableGenfdsMultiThread and self.Fdf:
CmdListDict = self._GenFfsCmd(Wa.ArchList)

+ # Add Platform and Package level hash in share_data for module hash calculation later
+ if GlobalData.gBinCacheSource or GlobalData.gBinCacheDest:
+ GlobalData.gCacheIR[('PlatformHash')] = GlobalData.gPlatformHash
+ for PkgName in GlobalData.gPackageHash.keys():
+ GlobalData.gCacheIR[(PkgName, 'PackageHash')] = GlobalData.gPackageHash[PkgName]
+
# multi-thread exit flag
ExitFlag = threading.Event()
ExitFlag.clear()
self.AutoGenTime += int(round((time.time() - WorkspaceAutoGenTime)))
self.BuildModules = []
+ TotalModules = []
for Arch in Wa.ArchList:
PcdMaList = []
AutoGenStart = time.time()
@@ -2009,6 +2060,7 @@ class Build():
ModuleList.append(Inf)
Pa.DataPipe.DataContainer = {"FfsCommand":CmdListDict}
Pa.DataPipe.DataContainer = {"Workspace_timestamp": Wa._SrcTimeStamp}
+ Pa.DataPipe.DataContainer = {"CommandTarget": self.Target}
for Module in ModuleList:
# Get ModuleAutoGen object to generate C code file and makefile
Ma = ModuleAutoGen(Wa, Module, BuildTarget, ToolChain, Arch, self.PlatformFile,Pa.DataPipe)
@@ -2019,19 +2071,7 @@ class Build():
Ma.PlatformInfo = Pa
Ma.Workspace = Wa
PcdMaList.append(Ma)
- if Ma.CanSkipbyHash():
- self.HashSkipModules.append(Ma)
- if GlobalData.gBinCacheSource:
- EdkLogger.quiet("cache hit: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
- continue
- else:
- if GlobalData.gBinCacheSource:
- EdkLogger.quiet("cache miss: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
-
- # Not to auto-gen for targets 'clean', 'cleanlib', 'cleanall', 'run', 'fds'
- # for target which must generate AutoGen code and makefile
-
- self.BuildModules.append(Ma)
+ TotalModules.append(Ma)
# Initialize all modules in tracking to 'FAIL'
if Ma.Arch not in GlobalData.gModuleBuildTracking:
GlobalData.gModuleBuildTracking[Ma.Arch] = dict()
@@ -2042,7 +2082,22 @@ class Build():
mqueue.put(m)
data_pipe_file = os.path.join(Pa.BuildDir, "GlobalVar_%s_%s.bin" % (str(Pa.Guid),Pa.Arch))
Pa.DataPipe.dump(data_pipe_file)
- autogen_rt, errorcode = self.StartAutoGen(mqueue, Pa.DataPipe, self.SkipAutoGen, PcdMaList,self.share_data)
+ autogen_rt, errorcode = self.StartAutoGen(mqueue, Pa.DataPipe, self.SkipAutoGen, PcdMaList, GlobalData.gCacheIR)
+
+ # Skip cache hit modules
+ if GlobalData.gBinCacheSource:
+ for Ma in TotalModules:
+ if (Ma.MetaFile.Path, Ma.Arch) in GlobalData.gCacheIR and \
+ GlobalData.gCacheIR[(Ma.MetaFile.Path, Ma.Arch)].PreMakeCacheHit:
+ self.HashSkipModules.append(Ma)
+ continue
+ if (Ma.MetaFile.Path, Ma.Arch) in GlobalData.gCacheIR and \
+ GlobalData.gCacheIR[(Ma.MetaFile.Path, Ma.Arch)].MakeCacheHit:
+ self.HashSkipModules.append(Ma)
+ continue
+ self.BuildModules.append(Ma)
+ else:
+ self.BuildModules.extend(TotalModules)

if not autogen_rt:
self.AutoGenMgr.TerminateWorkers()
@@ -2050,9 +2105,24 @@ class Build():
raise FatalError(errorcode)
self.AutoGenTime += int(round((time.time() - AutoGenStart)))
self.Progress.Stop("done!")
+
+ if GlobalData.gBinCacheSource:
+ EdkLogger.quiet("Total cache hit driver num: %s, cache miss driver num: %s" % (len(set(self.HashSkipModules)), len(set(self.BuildModules))))
+ CacheHitMa = set()
+ CacheNotHitMa = set()
+ for IR in GlobalData.gCacheIR.keys():
+ if 'PlatformHash' in IR or 'PackageHash' in IR:
+ continue
+ if GlobalData.gCacheIR[IR].PreMakeCacheHit or GlobalData.gCacheIR[IR].MakeCacheHit:
+ CacheHitMa.add(IR)
+ else:
+ # There might be binary modules or modules with .inc files; do not count them as cache misses
+ CacheNotHitMa.add(IR)
+ EdkLogger.quiet("Total module num: %s, cache hit module num: %s" % (len(CacheHitMa)+len(CacheNotHitMa), len(CacheHitMa)))
+
for Arch in Wa.ArchList:
MakeStart = time.time()
- for Ma in self.BuildModules:
+ for Ma in set(self.BuildModules):
# Generate build task for the module
if not Ma.IsBinaryModule:
Bt = BuildTask.New(ModuleMakeUnit(Ma, Pa.BuildCommand,self.Target))
--
2.17.1
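The restore path above pairs each PreMakefileHash with a MakeHash via the module's `.ModuleHashPair` JSON file, then uses the matching MakeHash to locate the cache folder. A minimal, standalone sketch of that lookup (file and hash values here are illustrative, not real BaseTools output):

```python
import json
import os
import tempfile

def find_cached_make_hash(module_hash_pair_file, current_pre_make_hash):
    """Return the MakeHash recorded for current_pre_make_hash, or None.

    Mirrors the lookup pattern in CanSkipbyPreMakefileCache: the
    .ModuleHashPair file is a JSON list of [PreMakefileHash, MakeHash]
    pairs, and the last matching entry wins.
    """
    if not os.path.exists(module_hash_pair_file):
        return None
    try:
        with open(module_hash_pair_file, 'r') as f:
            pair_list = json.load(f)
    except (ValueError, OSError):
        return None
    make_hash = None
    for pre_make_hash, mk_hash in pair_list:
        if pre_make_hash == current_pre_make_hash:
            make_hash = str(mk_hash)
    return make_hash

# Demo with a temporary file standing in for <Name>.ModuleHashPair
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "PcdPeim.ModuleHashPair")
    with open(path, 'w') as f:
        json.dump([["aaa", "111"], ["bbb", "222"]], f)
    print(find_cached_make_hash(path, "bbb"))   # → 222
    print(find_cached_make_hash(path, "zzz"))   # → None
```

If no entry matches (or the file is missing or malformed), the caller reports a cache miss and falls back to a normal build, which is exactly what the patch's `return False` paths do.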

[PATCH 2/4] BaseTools: Print first cache missing file for build cache

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1925

When a module misses the build cache, add support to print the path
and name of the first file that caused the cache miss.
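The diagnostic added by this patch walks two `MakeHashChain`-style lists of (file, hash) pairs in lockstep and reports the first divergence. A self-contained sketch of that comparison (names and data are illustrative):

```python
def first_cache_miss(current_chain, cached_chain):
    """Return a (reason, file) tuple for the first divergence between two
    MakeHashChain-style lists of (file, hash) pairs, or None if identical.

    'different file' means the file lists themselves diverge;
    'changed file' means the same file now hashes differently.
    """
    for (file_, hash_), (cached_file, cached_hash) in zip(current_chain, cached_chain):
        if file_ != cached_file:
            return ("different file", file_)
        if hash_ != cached_hash:
            return ("changed file", file_)
    return None

current = [("a.c", "h1"), ("b.c", "h2x"), ("c.c", "h3")]
cached  = [("a.c", "h1"), ("b.c", "h2"),  ("c.c", "h3")]
print(first_cache_miss(current, cached))  # → ('changed file', 'b.c')
```

Stopping at the first mismatch is what makes the log actionable: the developer sees one concrete file to inspect rather than a whole hash dump.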

Cc: Liming Gao <liming.gao@...>
Cc: Bob Feng <bob.c.feng@...>
Signed-off-by: Steven Shi <steven.shi@...>
---
BaseTools/Source/Python/AutoGen/AutoGenWorker.py | 2 ++
BaseTools/Source/Python/AutoGen/ModuleAutoGen.py | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 78 insertions(+)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
index a84ed46f2e..30d2f96fc7 100755
--- a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
@@ -246,6 +246,8 @@ class AutoGenWorkerInProcess(mp.Process):
Ma.GenMakeHash(GlobalData.gCacheIR)
if Ma.CanSkipbyMakeCache(GlobalData.gCacheIR):
continue
+ else:
+ Ma.PrintFirstMakeCacheMissFile(GlobalData.gCacheIR)
except Empty:
pass
except:
diff --git a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
index 5749b8a9fa..67875f7532 100755
--- a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
@@ -2368,6 +2368,82 @@ class ModuleAutoGen(AutoGen):
print("[cache hit]: checkpoint_Makefile:", self.MetaFile.Path, self.Arch)
return True

+ ## Show the first file name which causes cache miss
+ def PrintFirstMakeCacheMissFile(self, gDict):
+ if not GlobalData.gBinCacheSource:
+ return
+
+ # skip binary module
+ if self.IsBinaryModule:
+ return
+
+ if not (self.MetaFile.Path, self.Arch) in gDict:
+ return
+
+ # Only print cache miss file for the MakeCache not hit module
+ if gDict[(self.MetaFile.Path, self.Arch)].MakeCacheHit:
+ return
+
+ if not gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain:
+ EdkLogger.quiet("[cache insight]: MakeHashChain is missing for: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return
+
+ # Find the cache dir name through the .ModuleHashPair file info
+ FileDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+
+ ModuleHashPairList = [] # tuple list: [tuple(PreMakefileHash, MakeHash)]
+ ModuleHashPair = path.join(FileDir, self.Name + ".ModuleHashPair")
+ if not os.path.exists(ModuleHashPair):
+ EdkLogger.quiet("[cache insight]: Cannot find ModuleHashPair file for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return
+
+ try:
+ f = open(ModuleHashPair, 'r')
+ ModuleHashPairList = json.load(f)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache insight]: Cannot load ModuleHashPair file for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return
+
+ MakeHashSet = set()
+ for idx, (PreMakefileHash, MakeHash) in enumerate (ModuleHashPairList):
+ TargetHashDir = path.join(FileDir, str(MakeHash))
+ if os.path.exists(TargetHashDir):
+ MakeHashSet.add(MakeHash)
+ if not MakeHashSet:
+ EdkLogger.quiet("[cache insight]: Cannot find valid cache dir for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return
+
+ TargetHash = list(MakeHashSet)[0]
+ TargetHashDir = path.join(FileDir, str(TargetHash))
+ if len(MakeHashSet) > 1 :
+ EdkLogger.quiet("[cache insight]: found multiple cache dirs for this module, randomly selecting dir '%s' to search for the first cache miss file: %s[%s]" % (TargetHash, self.MetaFile.Path, self.Arch))
+
+ ListFile = path.join(TargetHashDir, self.Name + '.MakeHashChain')
+ if os.path.exists(ListFile):
+ try:
+ f = open(ListFile, 'r')
+ CachedList = json.load(f)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache insight]: Cannot load MakeHashChain file: %s" % ListFile)
+ return
+ else:
+ EdkLogger.quiet("[cache insight]: Cannot find MakeHashChain file: %s" % ListFile)
+ return
+
+ CurrentList = gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain
+ for idx, (file, hash) in enumerate (CurrentList):
+ (filecached, hashcached) = CachedList[idx]
+ if file != filecached:
+ EdkLogger.quiet("[cache insight]: first different file in %s[%s] is %s, the cached one is %s" % (self.MetaFile.Path, self.Arch, file, filecached))
+ break
+ if hash != hashcached:
+ EdkLogger.quiet("[cache insight]: first cache miss file in %s[%s] is %s" % (self.MetaFile.Path, self.Arch, file))
+ break
+
+ return True
+
## Decide whether we can skip the ModuleAutoGen process
def CanSkipbyCache(self, gDict):
# Hashing feature is off
--
2.17.1

[PATCH 3/4] BaseTools: Change the [Arch][Name] module key in Build cache

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1951

The current build cache uses the module's [self.Arch][self.Name]
info as the ModuleAutoGen object key in hash lists and dictionaries.
[self.Arch][self.Name] is not a safe module key because there can be
two modules with the same module name and arch name in one platform.
E.g. a platform can override a module or library instance with one
from a different path; the overriding module can have the same module
name and arch name as the original one. Instead, directly use the
ModuleAutoGen object itself as the key, because its __hash__ and
__repr__ attributes already contain the
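The collision, and why object-keyed dictionaries avoid it, can be shown with a minimal stand-in class (the real ModuleAutoGen defines richer `__hash__`/`__eq__`/`__repr__`; this stub only models identity by path and arch):

```python
class ModuleAutoGenStub:
    """Minimal stand-in for ModuleAutoGen: identity is the (path, arch)
    pair, so two modules with the same name but different paths never
    collide as dictionary keys."""
    def __init__(self, meta_file_path, arch, name):
        self.MetaFile = meta_file_path
        self.Arch = arch
        self.Name = name
    def __hash__(self):
        return hash((self.MetaFile, self.Arch))
    def __eq__(self, other):
        return (self.MetaFile, self.Arch) == (other.MetaFile, other.Arch)
    def __repr__(self):
        return "%s [%s]" % (self.MetaFile, self.Arch)

# Two instances of "UartLib" for the same arch, from different paths:
a = ModuleAutoGenStub("Silicon/UartLib.inf", "X64", "UartLib")
b = ModuleAutoGenStub("Platform/Override/UartLib.inf", "X64", "UartLib")

tracking = {a: 'FAIL', b: 'SUCCESS'}                   # object keys: no collision
by_name = {(m.Arch, m.Name): 'FAIL' for m in (a, b)}   # old scheme collapses both
print(len(tracking), len(by_name))  # → 2 1
```

The old `[Arch][Name]` scheme silently merges the two entries, so tracking state for one module overwrites the other; keying on the object keeps them distinct.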

Cc: Liming Gao <liming.gao@...>
Cc: Bob Feng <bob.c.feng@...>
Signed-off-by: Steven Shi <steven.shi@...>
---
BaseTools/Source/Python/AutoGen/GenMake.py | 6 +-----
BaseTools/Source/Python/build/build.py | 49 +++++++++++++++++++++----------------------------
2 files changed, 22 insertions(+), 33 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/GenMake.py b/BaseTools/Source/Python/AutoGen/GenMake.py
index 79387856bd..de820eeb2f 100755
--- a/BaseTools/Source/Python/AutoGen/GenMake.py
+++ b/BaseTools/Source/Python/AutoGen/GenMake.py
@@ -940,16 +940,12 @@ cleanlib:
continue
headerFileDependencySet.add(aFileName)

- # Ensure that gModuleBuildTracking has been initialized per architecture
- if self._AutoGenObject.Arch not in GlobalData.gModuleBuildTracking:
- GlobalData.gModuleBuildTracking[self._AutoGenObject.Arch] = dict()
-
# Check if a module dependency header file is missing from the module's MetaFile
for aFile in headerFileDependencySet:
if aFile in headerFilesInMetaFileSet:
continue
if GlobalData.gUseHashCache:
- GlobalData.gModuleBuildTracking[self._AutoGenObject.Arch][self._AutoGenObject] = 'FAIL_METAFILE'
+ GlobalData.gModuleBuildTracking[self._AutoGenObject] = 'FAIL_METAFILE'
EdkLogger.warn("build","Module MetaFile [Sources] is missing local header!",
ExtraData = "Local Header: " + aFile + " not found in " + self._AutoGenObject.MetaFile.Path
)
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index 84540d61f5..81f0bbb467 100755
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -630,12 +630,11 @@ class BuildTask:

# Set the value used by hash invalidation flow in GlobalData.gModuleBuildTracking to 'SUCCESS'
# If Module or Lib is being tracked, it did not fail header check test, and built successfully
- if (self.BuildItem.BuildObject.Arch in GlobalData.gModuleBuildTracking and
- self.BuildItem.BuildObject in GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject.Arch] and
- GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject.Arch][self.BuildItem.BuildObject] != 'FAIL_METAFILE' and
+ if (self.BuildItem.BuildObject in GlobalData.gModuleBuildTracking and
+ GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject] != 'FAIL_METAFILE' and
not BuildTask._ErrorFlag.isSet()
):
- GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject.Arch][self.BuildItem.BuildObject] = 'SUCCESS'
+ GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject] = 'SUCCESS'

# indicate there's a thread is available for another build task
BuildTask._RunningQueueLock.acquire()
@@ -1169,25 +1168,24 @@ class Build():
return

# GlobalData.gModuleBuildTracking contains only modules or libs that cannot be skipped by hash
- for moduleAutoGenObjArch in GlobalData.gModuleBuildTracking.keys():
- for moduleAutoGenObj in GlobalData.gModuleBuildTracking[moduleAutoGenObjArch].keys():
- # Skip invalidating for Successful Module/Lib builds
- if GlobalData.gModuleBuildTracking[moduleAutoGenObjArch][moduleAutoGenObj] == 'SUCCESS':
- continue
+ for Ma in GlobalData.gModuleBuildTracking:
+ # Skip invalidating for Successful Module/Lib builds
+ if GlobalData.gModuleBuildTracking[Ma] == 'SUCCESS':
+ continue

- # The module failed to build, failed to start building, or failed the header check test from this point on
+ # The module failed to build, failed to start building, or failed the header check test from this point on

- # Remove .hash from build
- ModuleHashFile = os.path.join(moduleAutoGenObj.BuildDir, moduleAutoGenObj.Name + ".hash")
- if os.path.exists(ModuleHashFile):
- os.remove(ModuleHashFile)
+ # Remove .hash from build
+ ModuleHashFile = os.path.join(Ma.BuildDir, Ma.Name + ".hash")
+ if os.path.exists(ModuleHashFile):
+ os.remove(ModuleHashFile)

- # Remove .hash file from cache
- if GlobalData.gBinCacheDest:
- FileDir = os.path.join(GlobalData.gBinCacheDest, moduleAutoGenObj.Arch, moduleAutoGenObj.SourceDir, moduleAutoGenObj.MetaFile.BaseName)
- HashFile = os.path.join(FileDir, moduleAutoGenObj.Name + '.hash')
- if os.path.exists(HashFile):
- os.remove(HashFile)
+ # Remove .hash file from cache
+ if GlobalData.gBinCacheDest:
+ FileDir = os.path.join(GlobalData.gBinCacheDest, Ma.PlatformInfo.OutputDir, Ma.BuildTarget + "_" + Ma.ToolChain, Ma.Arch, Ma.SourceDir, Ma.MetaFile.BaseName)
+ HashFile = os.path.join(FileDir, Ma.Name + '.hash')
+ if os.path.exists(HashFile):
+ os.remove(HashFile)

## Build a module or platform
#
@@ -1887,10 +1885,7 @@ class Build():

self.BuildModules.append(Ma)
# Initialize all modules in tracking to 'FAIL'
- if Ma.Arch not in GlobalData.gModuleBuildTracking:
- GlobalData.gModuleBuildTracking[Ma.Arch] = dict()
- if Ma not in GlobalData.gModuleBuildTracking[Ma.Arch]:
- GlobalData.gModuleBuildTracking[Ma.Arch][Ma] = 'FAIL'
+ GlobalData.gModuleBuildTracking[Ma] = 'FAIL'
self.AutoGenTime += int(round((time.time() - AutoGenStart)))
MakeStart = time.time()
for Ma in self.BuildModules:
@@ -2073,10 +2068,8 @@ class Build():
PcdMaList.append(Ma)
TotalModules.append(Ma)
# Initialize all modules in tracking to 'FAIL'
- if Ma.Arch not in GlobalData.gModuleBuildTracking:
- GlobalData.gModuleBuildTracking[Ma.Arch] = dict()
- if Ma not in GlobalData.gModuleBuildTracking[Ma.Arch]:
- GlobalData.gModuleBuildTracking[Ma.Arch][Ma] = 'FAIL'
+ GlobalData.gModuleBuildTracking[Ma] = 'FAIL'
+
mqueue = mp.Queue()
for m in Pa.GetAllModuleInfo:
mqueue.put(m)
--
2.17.1

[PATCH 4/4] BaseTools: Add GenFds multi-thread support in build cache

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1923

Fix the issue that GenFds multi-thread builds fail when the
build cache is enabled at the same time.
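The heart of the fix is routing each build output to the right cache location by file suffix: GenFds intermediates (.ffs, .offset, .raw, .raw.txt) belong under FfsOutputDir, everything else under OutputDir. A hedged sketch of that routing (function and directory names here are illustrative, not the real BaseTools API):

```python
FFS_SUFFIXES = ('.ffs', '.offset', '.raw', '.raw.txt')

def cache_destination(file_name, output_dir, ffs_output_dir):
    """Pick the cache root for a build output, mirroring the suffix
    routing the patch adds to SaveToBuildCache-style code paths."""
    if file_name.lower().endswith(FFS_SUFFIXES):
        return ffs_output_dir
    return output_dir

print(cache_destination("Shell.ffs", "Build/OUTPUT", "Build/Ffs"))   # → Build/Ffs
print(cache_destination("Shell.efi", "Build/OUTPUT", "Build/Ffs"))   # → Build/OUTPUT
```

Without this split, a cache restore would drop GenFds intermediates into the module's normal output tree, where the multi-threaded GenFds pass cannot find them, which is the failure mode the patch addresses.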

Cc: Liming Gao <liming.gao@...>
Cc: Bob Feng <bob.c.feng@...>
Signed-off-by: Steven Shi <steven.shi@...>
---
BaseTools/Source/Python/AutoGen/ModuleAutoGen.py | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
index 67875f7532..e73664f3b0 100755
--- a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
@@ -1248,11 +1248,13 @@ class ModuleAutoGen(AutoGen):
fStringIO.close ()
fInputfile.close ()
return OutputName
+
@cached_property
def OutputFile(self):
retVal = set()
OutputDir = self.OutputDir.replace('\\', '/').strip('/')
DebugDir = self.DebugDir.replace('\\', '/').strip('/')
+ FfsOutputDir = self.FfsOutputDir.replace('\\', '/').rstrip('/')
for Item in self.CodaTargetList:
File = Item.Target.Path.replace('\\', '/').strip('/').replace(DebugDir, '').replace(OutputDir, '').strip('/')
retVal.add(File)
@@ -1268,6 +1270,12 @@ class ModuleAutoGen(AutoGen):
if File.lower().endswith('.pdb'):
retVal.add(File)

+ for Root, Dirs, Files in os.walk(FfsOutputDir):
+ for File in Files:
+ if File.lower().endswith('.ffs') or File.lower().endswith('.offset') or File.lower().endswith('.raw') \
+ or File.lower().endswith('.raw.txt'):
+ retVal.add(File)
+
return retVal

## Create AsBuilt INF file the module
@@ -1638,13 +1646,16 @@ class ModuleAutoGen(AutoGen):
for File in self.OutputFile:
File = str(File)
if not os.path.isabs(File):
- File = os.path.join(self.OutputDir, File)
+ NewFile = os.path.join(self.OutputDir, File)
+ if not os.path.exists(NewFile):
+ NewFile = os.path.join(self.FfsOutputDir, File)
+ File = NewFile
if os.path.exists(File):
- sub_dir = os.path.relpath(File, self.OutputDir)
- destination_file = os.path.join(FileDir, sub_dir)
- destination_dir = os.path.dirname(destination_file)
- CreateDirectory(destination_dir)
- CopyFileOnChange(File, destination_dir)
+ if File.lower().endswith('.ffs') or File.lower().endswith('.offset') or File.lower().endswith('.raw') \
+ or File.lower().endswith('.raw.txt'):
+ self.CacheCopyFile(FfsDir, self.FfsOutputDir, File)
+ else:
+ self.CacheCopyFile(FileDir, self.OutputDir, File)

def SaveHashChainFileToCache(self, gDict):
if not GlobalData.gBinCacheDest:
--
2.17.1

[edk2-platforms: PATCH v3 0/1] Platform/RPi3: Add Debian 10 installation in Systems.md

Pete Batard
 

Changes from v2:
- Use "on-CPU" rather than "ondie".
- Make sure the options to force FAT16 are provided for both Windows and Linux.
- Provide a maximum size for FAT16 and add a forward references to the additional
notes and the `fdisk` fixup, for people who might still choose FAT32.

Pete Batard (1):
Platform/RPi3: Add Debian 10 installation in Systems.md

Platform/RaspberryPi/RPi3/RPi3.fdf | 2 +-
Platform/RaspberryPi/RPi3/Readme.md | 4 +-
Platform/RaspberryPi/RPi3/Systems.md | 127 +++++++++++++++++++-
3 files changed, 126 insertions(+), 7 deletions(-)

--
2.21.0.windows.1

[edk2-platforms: PATCH v3 1/1] Platform/RPi3: Add Debian 10 installation in Systems.md

Pete Batard
 

This documents the installation of vanilla Debian 10.0 ARM64 (netinst),
which we validated for both Model B and Model B+.
Also fix an erroneous reference in an RPi3.fdf comment.

Signed-off-by: Pete Batard <@pbatard>
---
Platform/RaspberryPi/RPi3/RPi3.fdf | 2 +-
Platform/RaspberryPi/RPi3/Readme.md | 4 +-
Platform/RaspberryPi/RPi3/Systems.md | 127 +++++++++++++++++++-
3 files changed, 126 insertions(+), 7 deletions(-)

diff --git a/Platform/RaspberryPi/RPi3/RPi3.fdf b/Platform/RaspberryPi/RPi3/RPi3.fdf
index c7c3f7a2ab8c..c62d649834c7 100644
--- a/Platform/RaspberryPi/RPi3/RPi3.fdf
+++ b/Platform/RaspberryPi/RPi3/RPi3.fdf
@@ -300,7 +300,7 @@ [FV.FvMain]
INF Platform/RaspberryPi/$(PLATFORM_NAME)/Drivers/LogoDxe/LogoDxe.inf

#
- # FDT (GUID matches mRaspberryPiFfsFileGuid in RaspberryPiPlatformDxe)
+ # FDT (GUID matches gRaspberryPiFdtFileGuid in FdtDxe)
#
FILE FREEFORM = DF5DA223-1D27-47C3-8D1B-9A41B55A18BC {
SECTION RAW = Platform/RaspberryPi/$(PLATFORM_NAME)/DeviceTree/bcm2710-rpi-3-b.dtb
diff --git a/Platform/RaspberryPi/RPi3/Readme.md b/Platform/RaspberryPi/RPi3/Readme.md
index 624f3a8d287a..797da1bab4a9 100644
--- a/Platform/RaspberryPi/RPi3/Readme.md
+++ b/Platform/RaspberryPi/RPi3/Readme.md
@@ -18,8 +18,8 @@ Raspberry Pi is a trademark of the [Raspberry Pi Foundation](http://www.raspberr

This firmware, that has been validated to compile against the current
[edk2](https://github.com/tianocore/edk2)/[edk2-platforms](https://github.com/tianocore/edk2-platforms),
-should be able to boot Linux (SUSE, Ubuntu), NetBSD, FreeBSD as well as Windows 10 ARM64
-(full GUI version).
+should be able to boot Linux (Debian, Ubuntu, SUSE), NetBSD, FreeBSD as well as Windows
+10 ARM64 (full GUI version).

It also provides support for ATF ([Arm Trusted Platform](https://github.com/ARM-software/arm-trusted-firmware)).

diff --git a/Platform/RaspberryPi/RPi3/Systems.md b/Platform/RaspberryPi/RPi3/Systems.md
index f6410eb25f0d..3a313c29cbdc 100644
--- a/Platform/RaspberryPi/RPi3/Systems.md
+++ b/Platform/RaspberryPi/RPi3/Systems.md
@@ -1,5 +1,128 @@
# Tested Operating Systems

+## Debian
+
+[Debian 10](https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/) netinst has been
+tested and confirmed to work, both on the Model B and Model B+, including installation in
+either wired or wireless mode.
+
+Below are steps you can follow to install Debian 10 onto an SD card:
+* Partition the media as MBR and create a ~300 MB partition on it with MBR type `0x0e`.
+ __Note:__ Make sure that the partition scheme is MBR (not GPT) and the type `0x0e` (not
+ `0xef` for instance), as the on-CPU Broadcom bootloader supports neither the GPT scheme
+ nor the ESP MBR type.
+* Set the partition as active/bootable. This is needed as the Debian partition manager can
+ not detect it as ESP otherwise, which we need for GRUB installation. If using `fdisk` on
+ Linux, you can use the `a` command to set a partition as active. On Windows, you can use
+ `diskpart` and then type `active` after selecting the relevant disk and partition.
+* Format the partition as FAT. Make sure you use FAT16 rather than FAT32, otherwise
+ the Debian partition manager may not automatically detect the partition as ESP. If you
+ are using Windows `diskpart` then `format fs=fat quick` will format a drive to FAT16. On
+ Linux, the equivalent command would be `mkfs.vfat -F 16 /dev/<your_device>`. As long as
+ the partition is smaller than 2 GB, the use of FAT16 over FAT32 should not be a problem.
+ Note that it is also possible to use FAT32, but you will probably have to invoke `fdisk`
+ before rebooting, as shown in the _Additional Notes_ below, to reset the partition type.
+* Copy the UEFI firmware files (`RPI_EFI.fd`, `bootcode.bin`, `fixup.dat` and `start.elf`)
+ as well as an appropriate `config.txt` onto the FAT partition. If needed you can download
+ the non UEFI binary files from https://github.com/raspberrypi/firmware/tree/master/boot.
+* (Optional) If you plan to install through WiFi, you will need to download the relevant
+ non-free WLAN firmware binaries for your WLAN interface (`brcmfmac43430-sdio.txt` and
+ `brcmfmac43430-sdio.bin` for a Raspberry Pi 3 Model B, `brcmfmac43455-sdio.txt` and
+ `brcmfmac43455-sdio.bin` for a Raspberry Pi 3 Model B+). You may also want to obtain the
+ relevant `.clm_blob` (`brcmfmac43430-sdio.clm_blob` or `brcmfmac43455-sdio.clm_blob`),
+ though wireless networking should work even if you do not provide these files. Copy these
+ files either at the root of your FAT partition or into a `firmware/` directory there.
+* Download the latest `debian-##.#.#-arm64-netinst.iso` from
+ https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/
+* Extract the full content of the ISO onto the FAT partition you created.
+* Insert the media and power up the Raspberry Pi device.
+* On the GRUB menu select `Install` and let the Debian Installer process start.
+ __Note:__ In case anything goes wrong during the install process, you can use
+ <kbd>Alt</kbd>-<kbd>F4</kbd> to check the installation log.
+* Select your Language, Country and Keyboard and let the installer proceed until it reports
+ that `No Common CD-ROM drive was detected.`
+* On `Load CD-ROM drivers from removable media` select `No`.
+* On `Manually select a CD-ROM module and device` select `Yes`.
+* On `Module needed for accessing the CD-ROM` select `none`.
+* On `Device file for accessing the CD-ROM` type the following exactly:
+ ```
+ -t vfat -o rw /dev/mmcblk0p1
+ ```
+* (Optional) If you have copied the non-free WLAN firmware binaries, and plan to install
+ through wireless, you can let the installer select the firmware files. Please be mindful
+ that you may be asked multiple times as there are multiple files to provide.
+* If requested by the installer, set up your network by choosing the network interface you
+ want to use for installation and (optionally) your access point and credentials.
+* Go through the hostname, user/password set up and customize those as you see fit.
+* Let the installer continue until you get to the `Partition disks` screen. There, for
+ `Partitioning method` select `Manual`. You __should__ see something like this:
+ ```
+ MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
+ #1 primary 314.6 MB B K ESP
+ pri/log FREE SPACE
+ ```
+ In other words, the partition manager should already detect your existing partition as
+ `ESP`, with the `B` (bootable) and `K` (keep data) flags. If that is not the case, (e.g.
+ if it says `fat16` or `fat32` instead of `ESP`) then it probably means you either didn't
+ format the partition to FAT16 or you forgot to set the bootable flag. In that case,
+ please refer to the _Additional Notes_ below.
+* Select `FREE SPACE` &rarr; `Create a new partition` and create a `1 GB` primary `swap`
+ partition.
+* Select `FREE SPACE` &rarr; `Create a new partition` and allocate the rest to a primary
+ `ext4` root partition (mountpoint = `/`)
+* After doing the above, your partition report should look like this:
+ ```
+ MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
+ #1 primary 314.6 MB B K ESP
+ #2 primary 1.0 GB f swap swap
+ #3 primary 14.7 GB f ext4 /
+ ```
+* Select `Finish partitioning and write changes to disk` and then `Yes` and let the
+ installer continue with the base system installation.
+* After a while, the installer should produce a message that states:
+ ```
+ [!!] Configure the package manager
+
+ apt-configuration problem
+ An attempt to configure apt to install additional packages from the CD failed.
+ ```
+ This is a __benign__ message that you can safely ignore by selecting `Continue` (the
+ reason it is benign is that we are running a net install and won't need to access the
+ "CD-ROM" files post install).
+* Once you have dismissed the message above, pick the mirror closest to your geographical
+ location and let the installer proceed with some more software installation.
+* Finally, at the `Software selection` screen, choose any additional software package you
+ wish to install. `Debian desktop environment` should work out of the box if you choose to
+ install it.
+* Let the process finalize the software and GRUB bootloader installation. Provided you
+ didn't run into the partition manager issue described above (installation partition not
+ seen as `ESP`), you can reboot your machine when prompted, which should bring you to
+ your newly installed Debian environment.
+
+### Additional Notes for Debian
+
+The reason we use `-t vfat -o rw /dev/mmcblk0p1` for the source media (i.e. "CD-ROM" device)
+is that, while the first partition on the SD card is indeed `/dev/mmcblk0p1`, we also
+need to provide additional parameters for the `mount` command that the installer invokes
+behind the scenes. For instance, if we don't use `-t vfat`, then ISO-9660 is forced as the
+file system, and if we don't use `-o rw` then the partition will be mounted as read-only
+which then prevents the same partition from being remounted when locating the non-free
+firmware files or when setting up `/efi/boot`.
+
+With regards to fixing the partitioning if you don't see `B K ESP` when entering the
+partition manager, what you need to do is:
+* Before you create the additional partitions, select the first partition and change its
+ type to `ESP`. Note however that doing this changes the type of the partition to `0xef`
+ which is precisely what we're trying to avoid by having the partition manager already
+ detect it as ESP, as type `0xef` is __unbootable__ by the Broadcom CPU.
+* To fix this then, before you choose `Continue` on the `Installation complete` prompt you
+ should open a new console with <kbd>Alt</kbd>-<kbd>F2</kbd> and type:
+ ```
+ chroot /target fdisk /dev/mmcblk0
+ ```
+ Then press <kbd>t</kbd>, <kbd>1</kbd>, <kbd>e</kbd>, <kbd>w</kbd> to reset the partition
+ to type `0x0e` (FAT16 LBA).
+
## Ubuntu

[Ubuntu 18.04 LTS](http://releases.ubuntu.com/18.04/) has been tested and confirmed to work,
@@ -35,10 +158,6 @@ Then, to have your changes applied run `update-grub` and reboot.

## Other Linux distributions

-* Debian ARM64 does not currently work, most likely due to missing required module support
- in its kernel. However its installation process works, so it may be possible to get it
- running with a custom kernel.
-
* OpenSUSE Leap 42.3 has been reported to work on Raspberry 3 Model B.

* Other ARM64 Linux releases, that support UEFI boot and have the required hardware support
--
2.21.0.windows.1
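The media preparation described in the patch above (MBR label, ~300 MB type `0x0e` bootable partition) can be sketched with util-linux tools. This is a hypothetical illustration run against a scratch image file rather than a real SD card; `disk.img` is a placeholder name:

```shell
# Work on a scratch image file so this is safe to execute; on a real SD card
# you would target the device node (e.g. /dev/mmcblk0) instead, at your own risk.
truncate -s 512M disk.img

# MBR ("dos") label with one ~300 MB partition of type 0x0e (FAT16 LBA),
# marked active/bootable, as required by the Broadcom bootloader.
sfdisk disk.img <<'EOF'
label: dos
size=300M, type=0e, bootable
EOF

# Dump the layout to confirm the partition type and bootable flag.
sfdisk -d disk.img
```

On a real device, the first partition would then be formatted with `mkfs.vfat -F 16`, matching the FAT16 recommendation in the patch.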

[PATCH] BaseTools/Scripts: Add GetUtcDateTime script.

Chiu, Chasel
 

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2067

A script that returns the current UTC date and time as ASCII hex,
which is convenient for patching build-time information into any
binary.

Cc: Bob Feng <bob.c.feng@...>
Cc: Liming Gao <liming.gao@...>
Signed-off-by: Chasel Chiu <chasel.chiu@...>
---
BaseTools/Scripts/GetUtcDateTime.py | 47 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 47 insertions(+)

diff --git a/BaseTools/Scripts/GetUtcDateTime.py b/BaseTools/Scripts/GetUtcDateTime.py
new file mode 100644
index 0000000000..8b25a0a867
--- /dev/null
+++ b/BaseTools/Scripts/GetUtcDateTime.py
@@ -0,0 +1,47 @@
+## @file
+# Get current UTC date and time information and output as ascii code.
+#
+# Copyright (c) 2019, Intel Corporation. All rights reserved.<BR>
+#
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+
+VersionNumber = '0.1'
+import sys
+import datetime
+
+def Usage():
+    print ("GetUtcDateTime - Version " + VersionNumber)
+    print ("Usage:")
+    print ("GetUtcDateTime [type]")
+    print ("  --year:  Return UTC year of now")
+    print ("           Example output (2019): 39313032")
+    print ("  --date:  Return UTC date MMDD of now")
+    print ("           Example output (7th August): 37303830")
+    print ("  --time:  Return 24-hour-format UTC time HHMM of now")
+    print ("           Example output (4:25): 35323430")
+
+def Main():
+    if len(sys.argv) == 1:
+        Usage()
+        return 0
+
+    today = datetime.datetime.utcnow()
+    if sys.argv[1].strip().lower() == "--year":
+        ReversedNumber = str(today.year)[::-1]
+        print (''.join(hex(ord(HexString))[2:] for HexString in ReversedNumber))
+        return 0
+    if sys.argv[1].strip().lower() == "--date":
+        ReversedNumber = str(today.strftime("%m%d"))[::-1]
+        print (''.join(hex(ord(HexString))[2:] for HexString in ReversedNumber))
+        return 0
+    if sys.argv[1].strip().lower() == "--time":
+        ReversedNumber = str(today.strftime("%H%M"))[::-1]
+        print (''.join(hex(ord(HexString))[2:] for HexString in ReversedNumber))
+        return 0
+    else:
+        Usage()
+        return 0
+
+if __name__ == '__main__':
+    sys.exit(Main())
--
2.13.3.windows.1
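To illustrate the encoding performed by the script in the patch above (a sketch, not part of the patch): the string is reversed and each character's ASCII code is emitted as hex, so the result can be patched into a binary and read back as a little-endian integer whose bytes spell the original digits. A round-trip in Python:

```python
def encode(text):
    # Reverse the string, then concatenate the hex ASCII code of each
    # character, mirroring what GetUtcDateTime.py does with hex(ord(c))[2:].
    return ''.join('{:02x}'.format(ord(c)) for c in text[::-1])

def decode(hex_str):
    # Split into byte pairs, map each pair back to its ASCII character,
    # then reverse to undo the encoding.
    chars = [chr(int(hex_str[i:i + 2], 16)) for i in range(0, len(hex_str), 2)]
    return ''.join(chars)[::-1]

print(encode("2019"))  # 39313032, matching the --year example in the patch
print(encode("0807"))  # 37303830, matching the --date example
```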

Re: [Patch 00/10 V8] Enable multiple process AutoGen

Laszlo Ersek
 

(+ Andrew, Leif, Mike; Liming)

On 08/07/19 06:25, Bob Feng wrote:
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1875

In order to improve build performance, we implemented
multiple-process AutoGen. This change reduces the time spent in the
AutoGen phase by 20%.

The design document can be got from:
https://edk2.groups.io/g/devel/files/Designs/2019/0627/Multiple-thread-AutoGen.pdf

This patch series passes the build of Ovmf, MinKabylake, MinPurley,
the packages under the Edk2 repository, and Intel client and server platforms.

V8:
1. Fixed a regression issue, introduced by V7, where AsBuiltInf
files were not generated.

V7:
1. Fixed regression issue for build -m, build modules, build
libraries. implement this fix in patch 05/10.

2. Fixed the regression issue for duplicate items in PcdTokenNumber
file. implement this fix in patch 02/10.

V6:

This version fixed a regression issue on incremental build.
The change is in patch 04/10

In the autogen sub-process, we need to use
PlatformInfo.Pcds rather than PlatformInfo.Platform.Pcds
because we don't want to re-evaluate the Pcds in
the autogen sub-process.

V5:
Restructure the patches of V4. Squash 11/11 patch of V4 into V5 4/10,
5/10 and 10/10.

The details of the changes are as following:

1. Update Log queue maxsize as the value of thread number * 10.
This change is implemented in patch 10/10

2. Fixed the regression issue that the PCDs from build command line
are not passed into autogen sub-process correctly.

3. Fixed the regression issue that the exception raised inside
autogen sub-process is not well handled, including code exception
and Ctrl+C keyboard interrupt.

The above 2 changes are implemented in patch 5/10

4. Fixed the regression issue that multiple builds of the same
driver/application with FILE_GUID overrides are not handled correctly.

5. Fixed the regression issue that a fixed PCD shared between a lib
and a module is not handled correctly, which caused autogen.c and
autogen.h to differ from the original.

The above 2 changes are implemented in patch 4/10
Thank you for the above explanation!


(1) In my clone of the QEMU repository (master @ v4.1.0-rc4), I advanced
the roms/edk2 submodule to commit 96603b4f02b9 ("BaseTools/PatchCheck:
Disable text conversion in 'git show'", 2019-08-07), and applied your
patches on top. Then I ran

$ nice make -j4 -C roms efi

in the QEMU project root.

This build forces the usage of Python2, due to
<https://bugzilla.tianocore.org/show_bug.cgi?id=1607>.

The build worked fine, Ctrl-S / Ctrl-Q worked fine, and the resultant
firmware binaries all worked fine. All good.


(2) QEMU carries a simple UEFI application that exposes SMBIOS anchor
addresses, and ACPI RSD PTR addresses, from the UEFI guest to some QEMU
unit tests that run on the host. In the same environment as (1), I
rebuilt this application:

$ nice make -j4 -C tests/uefi-test-tools

This command uses the "-m" switch of "build", internally. It also forces
Python2, due to <https://bugzilla.tianocore.org/show_bug.cgi?id=1607>.

The build completed fine, and I tested the AARCH64 and X64 binaries
(bootable ISO images). They worked fine. Good.


(3) In my normal edk2 clone, I cleaned the tree, applied your patches
(again on top of commit 96603b4f02b9), and started a build:

$ . edksetup.sh
$ nice make -C "$EDK_TOOLS_PATH" -j $(getconf _NPROCESSORS_ONLN)
$ nice -n 19 build \
-a IA32 \
-p OvmfPkg/OvmfPkgIa32.dsc \
-t GCC48 \
-b NOOPT \
-n 4 \
-D SMM_REQUIRE \
-D SECURE_BOOT_ENABLE \
-D NETWORK_TLS_ENABLE \
-D NETWORK_IP6_ENABLE \
-D NETWORK_HTTP_BOOT_ENABLE \
--report-file=.../build.ovmf.32.report \
--log=.../build.ovmf.32.log \
--cmd-len=65536 \
--hash \
--genfds-multi-thread

This command located Python3:

WORKSPACE = .../edk2
EDK_TOOLS_PATH = .../edk2/BaseTools
CONF_PATH = .../edk2/Conf
PYTHON_COMMAND = /usr/bin/python3.6


Processing meta-data .
Architecture(s) = IA32
Build target = NOOPT
Toolchain = GCC48
The build launched fine.

After 10-20 seconds into the build, I interrupted it with Ctrl-C:

build.py...
: error 7000: Failed to execute command
make tbuild [.../edk2/Build/OvmfIa32/NOOPT_GCC48/IA32/ShellPkg/Library/UefiShellDebug1CommandsLib/UefiShellDebug1CommandsLib]


build.py...
: error 7000: Failed to execute command
make tbuild [.../edk2/Build/OvmfIa32/NOOPT_GCC48/IA32/ShellPkg/Library/UefiShellDriver1CommandsLib/UefiShellDriver1CommandsLib]


build.py...
: error 7000: Failed to execute command
make tbuild [.../edk2/Build/OvmfIa32/NOOPT_GCC48/IA32/CryptoPkg/Library/OpensslLib/OpensslLib]


build.py...
: error 7000: Failed to execute command
make tbuild [.../edk2/Build/OvmfIa32/NOOPT_GCC48/IA32/MdePkg/Library/BaseLib/BaseLib]

- Aborted -
Build end time: 14:05:56, Aug.08 2019
Build total time: 00:00:15
As next step, I repeated the same "build" command as above, in order to
continue the interrupted build. Unfortunately, this failed:

WORKSPACE = .../edk2
EDK_TOOLS_PATH = .../edk2/BaseTools
CONF_PATH = .../edk2/Conf
PYTHON_COMMAND = /usr/bin/python3.6


Processing meta-data
.Architecture(s) = IA32
Build target = NOOPT
Toolchain = GCC48

Active Platform = .../edk2/OvmfPkg/OvmfPkgIa32.dsc
..... done!

Fd File Name:OVMF (.../edk2/Build/OvmfIa32/NOOPT_GCC48/FV/OVMF.fd)

Generate Region at Offset 0x0
Region Size = 0x40000
Region Name = DATA

Generate Region at Offset 0x40000
Region Size = 0x1000
Region Name = None

Generate Region at Offset 0x41000
Region Size = 0x1000
Region Name = DATA

Generate Region at Offset 0x42000
Region Size = 0x42000
Region Name = None

Generate Region at Offset 0x84000
Region Size = 0x348000
Region Name = FV

Generating FVMAIN_COMPACT FV

Generating PEIFV FV
###### ['GenFv', '-a', '.../edk2/Build/OvmfIa32/NOOPT_GCC48/FV/Ffs/PEIFV.inf', '-o', '.../edk2/Build/OvmfIa32/NOOPT_GCC48/FV/PEIFV.Fv', '-i', '.../edk2/Build/OvmfIa32/NOOPT_GCC48/FV/PEIFV.inf']
Return Value = 2
GenFv: ERROR 0001: Error opening file
.../edk2/Build/OvmfIa32/NOOPT_GCC48/FV/Ffs/52C05B14-0B98-496c-BC3B-04B50211D680PeiCore/52C05B14-0B98-496c-BC3B-04B50211D680.ffs




build.py...
: error 7000: Failed to generate FV



build.py...
: error 7000: Failed to execute command


- Failed -
Build end time: 14:06:25, Aug.08 2019
Build total time: 00:00:06
To be honest, I'm not sure what to ask for, at this point.

- On one hand, this is certainly not ideal. Continuing a manually
interrupted build should preferably work -- that's a form of incremental
build. And, it did work in my v3 testing; see bullet (5) in:

http://mid.mail-archive.com/4ea3d3fa-2210-3642-2337-db525312d312@...
https://edk2.groups.io/g/devel/message/44246

(Is this perhaps a regression from the V6 update, which was related to
incremental builds?)

- On the other hand, this is not necessarily a show-stopper, and I'm quite
out of capacity for testing further versions of this full patch set.
Perhaps you can work on this issue incrementally -- bugfixes can be
accepted during the freeze periods.

I don't feel comfortable giving Tested-by or Regression-tested-by in
this state, but I also won't block the patch set from being merged.

Note that this problem appears repeatable, and it reproduces using
Python2 as well. It should be possible for you to reproduce and to
debug.


(4) In this test, I repeated (3), but instead of interrupting the build
with Ctrl-C, I introduced a syntax error to one of the C source files
under OvmfPkg (I simply appended the constant "1" to the end of the
file).

As expected, the build failed (and correctly stopped, too):

.../edk2/OvmfPkg/VirtioNetDxe/SnpReceive.c:186:1: error: expected identifier or '(' before numeric constant
1
^
make: *** [.../edk2/Build/OvmfIa32/NOOPT_GCC48/IA32/OvmfPkg/VirtioNetDxe/VirtioNet/OUTPUT/SnpReceive.obj] Error 1


build.py...
: error 7000: Failed to execute command
make tbuild [.../edk2/Build/OvmfIa32/NOOPT_GCC48/IA32/OvmfPkg/VirtioNetDxe/VirtioNet]


build.py...
: error F002: Failed to build module
.../edk2/OvmfPkg/VirtioNetDxe/VirtioNet.inf [IA32, GCC48, NOOPT]

- Failed -
Build end time: 14:29:18, Aug.08 2019
Build total time: 00:00:38
I undid the syntax error, and repeated the "build" command.

The build resumed fine, and produced a functional OVMF binary. Good.


(5) I also verified that changes to C files, made after the build
completed successfully for the first time, would cause those files to be
re-built, if the "build" command was repeated. So that's OK too.


... All in all, I think the series is mature enough to merge, in order
to expose it to wider testing by the community, with the soft feature
freeze just around the corner. The main functionality seems to work,
there don't seem to be show-stoppers. IMO a BaseTools series doesn't
have to be *perfect* -- as long as it doesn't get in the way of people
doing their work, it should be possible to improve upon, incrementally.
Therefore, from my side, I'm willing to give you a (somewhat reserved)

Acked-by: Laszlo Ersek <lersek@...>

for the series.

I suggest seeking feedback from the other stewards as well.

To reiterate, the only issue I have found is that the build could not be
resumed after I interrupted it with Ctrl-C, in section (3). If there is
consensus to push the v8 series with that, I would suggest filing a
TianoCore BZ about issue (3) first, and to reference the BZ as a "known
issue" in the commit message of patch#4 or patch#5.

Thanks
Laszlo