Re: CPU hotplug using SMM with QEMU+OVMF

Laszlo Ersek
 

On 08/14/19 16:04, Paolo Bonzini wrote:
On 14/08/19 15:20, Yao, Jiewen wrote:
- Does this part require a new branch somewhere in the OVMF SEC code?
How do we determine whether the CPU executing SEC is BSP or
hot-plugged AP?
[Jiewen] I think this is handled at the hardware level, from the very first instruction.
There are hardware-specific registers that can be used to determine whether the CPU is newly added.
I don’t think this must be the same as on real hardware.
You are free to invent some registers in the device model, to be used in the OVMF hot plug driver.
Yes, this would be a new operation mode for QEMU that applies only to
hot-plugged CPUs. In this mode the AP doesn't respond to INIT or SMI; in
fact, it doesn't respond to anything at all.

- How do we tell the hot-plugged AP where to start execution? (I.e. that
it should execute code at a particular pflash location.)
[Jiewen] Same real mode reset vector at FFFF:FFF0.
You do not need a reset vector or INIT/SIPI/SIPI sequence at all in
QEMU. The AP does not start execution at all when it is unplugged, so
no cache-as-RAM etc.

We only need to modify QEMU so that hot-plugged APs do not reply to
INIT/SIPI/SMI.

I don’t think there is a problem for real hardware, which always has CAR.
Can QEMU provide some CPU-specific space, such as an MMIO region?
Why is a CPU-specific region needed if every other processor is in SMM
and thus trusted?
I was going through the steps Jiewen and Yingwen recommended.

In step (02), the new CPU is expected to set up RAM access. In step
(03), the new CPU, executing code from flash, is expected to "send board
message to tell host CPU (GPIO->SCI) -- I am waiting for hot-add
message." For that action, the new CPU may need a stack (minimally if we
want to use C function calls).

Until step (03), there had been no word about any other (= pre-plugged)
CPUs (more precisely, Jiewen even confirmed "No impact to other
processors"), so I didn't assume that other CPUs had entered SMM.

Paolo, I've attempted to read Jiewen's response, and yours, as carefully
as I can. I'm still very confused. If you have a better understanding,
could you please write up the 15-step process from the thread starter
again, with all QEMU customizations applied? That is, with unnecessary
steps removed and platform specifics filled in.

One more comment below:


Does CPU hotplug apply only at the socket level? If the CPU is
multi-core, what is responsible for hot-plugging all cores present in
the socket?
I can answer this: the SMM handler would interact with the hotplug
controller in the same way that the ACPI DSDT does normally. The
controller already supports multiple hotplugs.

Writes to the hotplug controller from outside SMM would be ignored.

(03) New CPU: (Flash) send board message to tell host CPU (GPIO->SCI)
-- I am waiting for hot-add message.
Maybe we can simplify this in QEMU by broadcasting an SMI to existing
processors immediately upon plugging the new CPU.
The QEMU DSDT could be modified (when secure boot is in effect) to OUT
to 0xB2 when hotplug happens. It could write a well-known value to
0xB2, to be read by an SMI handler in edk2.
(My comment below is general, and may not apply to this particular
situation. I'm too confused to figure that out myself, sorry!)

I dislike involving QEMU's generated DSDT in anything SMM (even
injecting the SMI), because the AML interpreter runs in the OS.

If a malicious OS kernel is a bit too enlightened about the DSDT, it
could willfully diverge from the process that we design. If QEMU
broadcast the SMI internally, the guest OS could not interfere with that.

If the purpose of the SMI is specifically to force all CPUs into SMM
(and thereby force them into trusted state), then the OS would be
explicitly counter-interested in carrying out the AML operations from
QEMU's DSDT.

I'd be OK with an SMM / SMI involvement in QEMU's DSDT if, by diverging
from that DSDT, the OS kernel could only mess with its own state, and
not with the firmware's.

Thanks
Laszlo




(NOTE: Host CPU can only send the instruction in SMM mode. -- The register is SMM-only.)
Sorry, I don't follow -- what register are we talking about here, and
why is the BSP needed to send anything at all? What "instruction" do you
have in mind?
[Jiewen] The new CPU does not enable SMI at reset.
At some later point, the CPU needs to enable SMI, right?
The "instruction" here means that the host CPUs need to tell the new CPU to enable SMI.
Right, this would be a write to the CPU hotplug controller.

(04) Host CPU: (OS) get message from board that a new CPU is added.
(GPIO -> SCI)

(05) Host CPU: (OS) All CPUs enter SMM (SCI->SWSMI) (NOTE: New CPU
will not enter SMM because SMI is disabled)
I don't understand the OS involvement here. But, again, perhaps QEMU can
force all existing CPUs into SMM immediately upon adding the new CPU.
[Jiewen] OS here means the host CPU running code in the OS environment, not in the SMM environment.
See above.

(06) Host CPU: (SMM) Save 38000, Update 38000 -- fill simple SMM
rebase code.

(07) Host CPU: (SMM) Send message to New CPU to Enable SMI.
Aha, so this is the SMM-only register you mention in step (03). Is the
register specified in the Intel SDM?
[Jiewen] Right. That is the register the host CPU uses to tell the new CPU to enable SMI.
It is a platform-specific register, not defined in the SDM.
You may invent one in the device model.
See above.

(10) New CPU: (SMM) Respond to first SMI at 38000, and rebase SMBASE to
TSEG.
What code does the new CPU execute after it completes step (10)? Does it
halt?
[Jiewen] The new CPU exits SMM and returns to the original place -- where it was
interrupted to enter SMM -- running code on the flash.
So in our case we'd need an INIT/SIPI/SIPI sequence between (06) and (07).

(11) Host CPU: (SMM) Restore 38000.
These steps (i.e., (06) through (11)) don't appear RAS-specific. The
only platform-specific feature seems to be the SMI masking register, which
could be extracted into a new SmmCpuFeaturesLib API.

Thus, would you please consider open sourcing firmware code for steps
(06) through (11)?

Alternatively -- and in particular because the stack for step (01)
concerns me -- we could approach this from a high-level, functional
perspective. The states that really matter are the relocated SMBASE for
the new CPU, and the state of the full system, right at the end of step
(11).

When the SMM setup quiesces during normal firmware boot, OVMF could use
existing (finalized) SMBASE information to *pre-program* some virtual
QEMU hardware, with such state as would be expected, as "final" state,
of any new hotplugged CPU. Afterwards, if / when the hotplug actually
happens, QEMU could blanket-apply this state to the new CPU, and
broadcast a hardware SMI to all CPUs except the new one.
I'd rather avoid this and stay as close as possible to real hardware.

Paolo


[PATCH v6 5/5] BaseTools: Improve the file saving and copying reliability

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2079

The BaseTools CopyFileOnChange() and SaveFileOnChange()
functions might occasionally raise an IOError when building
on Windows with multi-process execution and the build cache
enabled. CopyFileOnChange() and SaveFileOnChange() might be
invoked in multiple sub-processes simultaneously; this patch
adds global locks to synchronize those invocations, which
hardens their reliability.

Cc: Liming Gao <liming.gao@...>
Cc: Bob Feng <bob.c.feng@...>
Signed-off-by: Steven Shi <steven.shi@...>
---
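A minimal sketch of the locking pattern this patch applies, with a
hypothetical save_file_on_change() helper standing in for the BaseTools
function: a multiprocessing.Lock is created once in the parent process,
handed to every worker, and held around the entire compare-then-write
sequence so that two processes cannot interleave.

import multiprocessing as mp
import os
import tempfile

def save_file_on_change(path, content, lock):
    # Serialize the whole compare-then-write sequence across processes.
    with lock:
        if os.path.exists(path):
            with open(path, 'r') as f:
                if f.read() == content:
                    return False          # content unchanged, skip the write
        # Write a temp file in the same directory, then rename atomically.
        fd, tempname = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
        with os.fdopen(fd, 'w') as f:
            f.write(content)
        os.replace(tempname, path)
        return True

if __name__ == '__main__':
    lock = mp.Lock()                      # created once, shared by all workers
    workers = [mp.Process(target=save_file_on_change,
                          args=('Result.txt', 'hello\n', lock))
               for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

In the patch itself, GlobalData.file_lock and the new
GlobalData.cache_lock play this role: both are created in build.py and
passed to each AutoGenWorkerInProcess.
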
BaseTools/Source/Python/AutoGen/AutoGenWorker.py | 6 ++++--
BaseTools/Source/Python/AutoGen/CacheIR.py | 1 +
BaseTools/Source/Python/AutoGen/DataPipe.py | 2 --
BaseTools/Source/Python/AutoGen/GenC.py | 0
BaseTools/Source/Python/AutoGen/ModuleAutoGen.py | 101 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------
BaseTools/Source/Python/Common/GlobalData.py | 2 ++
BaseTools/Source/Python/Common/Misc.py | 44 +++++++++++++++++++++++++++++++++++++-------
BaseTools/Source/Python/build/build.py | 5 ++++-
8 files changed, 119 insertions(+), 42 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
index 30d2f96fc7..2e68538b1c 100755
--- a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
@@ -133,7 +133,7 @@ class AutoGenManager(threading.Thread):
def kill(self):
self.feedback_q.put(None)
class AutoGenWorkerInProcess(mp.Process):
- def __init__(self,module_queue,data_pipe_file_path,feedback_q,file_lock, share_data,log_q,error_event):
+ def __init__(self,module_queue,data_pipe_file_path,feedback_q,file_lock,cache_lock,share_data,log_q,error_event):
mp.Process.__init__(self)
self.module_queue = module_queue
self.data_pipe_file_path =data_pipe_file_path
@@ -141,6 +141,7 @@ class AutoGenWorkerInProcess(mp.Process):
self.feedback_q = feedback_q
self.PlatformMetaFileSet = {}
self.file_lock = file_lock
+ self.cache_lock = cache_lock
self.share_data = share_data
self.log_q = log_q
self.error_event = error_event
@@ -184,9 +185,10 @@ class AutoGenWorkerInProcess(mp.Process):
GlobalData.gDatabasePath = self.data_pipe.Get("DatabasePath")
GlobalData.gBinCacheSource = self.data_pipe.Get("BinCacheSource")
GlobalData.gBinCacheDest = self.data_pipe.Get("BinCacheDest")
- GlobalData.gCacheIR = self.data_pipe.Get("CacheIR")
+ GlobalData.gCacheIR = self.share_data
GlobalData.gEnableGenfdsMultiThread = self.data_pipe.Get("EnableGenfdsMultiThread")
GlobalData.file_lock = self.file_lock
+ GlobalData.cache_lock = self.cache_lock
CommandTarget = self.data_pipe.Get("CommandTarget")
pcd_from_build_option = []
for pcd_tuple in self.data_pipe.Get("BuildOptPcd"):
diff --git a/BaseTools/Source/Python/AutoGen/CacheIR.py b/BaseTools/Source/Python/AutoGen/CacheIR.py
index 2d9ffe3f0b..715be5273c 100755
--- a/BaseTools/Source/Python/AutoGen/CacheIR.py
+++ b/BaseTools/Source/Python/AutoGen/CacheIR.py
@@ -24,5 +24,6 @@ class ModuleBuildCacheIR():
self.MakeHashDigest = None
self.MakeHashHexDigest = None
self.MakeHashChain = []
+ self.CacheCrash = False
self.PreMakeCacheHit = False
self.MakeCacheHit = False
diff --git a/BaseTools/Source/Python/AutoGen/DataPipe.py b/BaseTools/Source/Python/AutoGen/DataPipe.py
index 84e77c301a..99c27dd50d 100755
--- a/BaseTools/Source/Python/AutoGen/DataPipe.py
+++ b/BaseTools/Source/Python/AutoGen/DataPipe.py
@@ -163,6 +163,4 @@ class MemoryDataPipe(DataPipe):

self.DataContainer = {"BinCacheDest":GlobalData.gBinCacheDest}

- self.DataContainer = {"CacheIR":GlobalData.gCacheIR}
-
self.DataContainer = {"EnableGenfdsMultiThread":GlobalData.gEnableGenfdsMultiThread}
\ No newline at end of file
diff --git a/BaseTools/Source/Python/AutoGen/GenC.py b/BaseTools/Source/Python/AutoGen/GenC.py
old mode 100644
new mode 100755
diff --git a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
index c489c3b9c4..af67244ff8 100755
--- a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
@@ -28,6 +28,7 @@ from Common.caching import cached_class_function
from AutoGen.ModuleAutoGenHelper import PlatformInfo,WorkSpaceInfo
from AutoGen.CacheIR import ModuleBuildCacheIR
import json
+import tempfile

## Mapping Makefile type
gMakeTypeMap = {TAB_COMPILER_MSFT:"nmake", "GCC":"gmake"}
@@ -1702,9 +1703,8 @@ class ModuleAutoGen(AutoGen):
try:
ModuleHashPairList = [] # tuple list: [tuple(PreMakefileHash, MakeHash)]
if os.path.exists(ModuleHashPair):
- f = open(ModuleHashPair, 'r')
- ModuleHashPairList = json.load(f)
- f.close()
+ with open(ModuleHashPair, 'r') as f:
+ ModuleHashPairList = json.load(f)
PreMakeHash = gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest
MakeHash = gDict[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest
ModuleHashPairList.append((PreMakeHash, MakeHash))
@@ -1766,10 +1766,12 @@ class ModuleAutoGen(AutoGen):

if os.path.exists (self.TimeStampPath):
os.remove (self.TimeStampPath)
- with open(self.TimeStampPath, 'w+') as fd:
+ with tempfile.NamedTemporaryFile('w+', dir=os.path.dirname(self.TimeStampPath), delete=False) as tf:
for f in FileSet:
- fd.write(f)
- fd.write("\n")
+ tf.write(f)
+ tf.write("\n")
+ tempname = tf.name
+ SaveFileOnChange(self.TimeStampPath, tempname, False)

# Ignore generating makefile when it is a binary module
if self.IsBinaryModule:
@@ -1806,7 +1808,7 @@ class ModuleAutoGen(AutoGen):
MewIR.MakefilePath = MakefilePath
MewIR.DependencyHeaderFileSet = Makefile.DependencyHeaderFileSet
MewIR.CreateMakeFileDone = True
- with GlobalData.file_lock:
+ with GlobalData.cache_lock:
try:
IR = gDict[(self.MetaFile.Path, self.Arch)]
IR.MakefilePath = MakefilePath
@@ -1891,7 +1893,7 @@ class ModuleAutoGen(AutoGen):
self.IsCodeFileCreated = True
MewIR = ModuleBuildCacheIR(self.MetaFile.Path, self.Arch)
MewIR.CreateCodeFileDone = True
- with GlobalData.file_lock:
+ with GlobalData.cache_lock:
try:
IR = gDict[(self.MetaFile.Path, self.Arch)]
IR.CreateCodeFileDone = True
@@ -1951,9 +1953,8 @@ class ModuleAutoGen(AutoGen):
m.update(GlobalData.gModuleHash[self.Arch][Lib.Name].encode('utf-8'))

# Add Module self
- f = open(str(self.MetaFile), 'rb')
- Content = f.read()
- f.close()
+ with open(str(self.MetaFile), 'rb') as f:
+ Content = f.read()
m.update(Content)

# Add Module's source files
@@ -1974,6 +1975,11 @@ class ModuleAutoGen(AutoGen):
if gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesChain:
return gDict[(self.MetaFile.Path, self.Arch)]

+ # skip if the module cache already crashed
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].CacheCrash:
+ return
+
DependencyFileSet = set()
# Add Module Meta file
DependencyFileSet.add(self.MetaFile)
@@ -2021,9 +2027,8 @@ class ModuleAutoGen(AutoGen):
if not os.path.exists(str(File)):
EdkLogger.quiet("[cache warning]: header file %s is missing for module: %s[%s]" % (File, self.MetaFile.Path, self.Arch))
continue
- f = open(str(File), 'rb')
- Content = f.read()
- f.close()
+ with open(str(File), 'rb') as f:
+ Content = f.read()
m.update(Content)
FileList.append((str(File), hashlib.md5(Content).hexdigest()))

@@ -2032,7 +2037,7 @@ class ModuleAutoGen(AutoGen):
MewIR.ModuleFilesHashDigest = m.digest()
MewIR.ModuleFilesHashHexDigest = m.hexdigest()
MewIR.ModuleFilesChain = FileList
- with GlobalData.file_lock:
+ with GlobalData.cache_lock:
try:
IR = gDict[(self.MetaFile.Path, self.Arch)]
IR.ModuleFilesHashDigest = m.digest()
@@ -2050,6 +2055,11 @@ class ModuleAutoGen(AutoGen):
gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest:
return gDict[(self.MetaFile.Path, self.Arch)]

+ # skip if the module cache already crashed
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].CacheCrash:
+ return
+
# skip binary module
if self.IsBinaryModule:
return
@@ -2091,7 +2101,7 @@ class ModuleAutoGen(AutoGen):
# Add Module self
m.update(gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest)

- with GlobalData.file_lock:
+ with GlobalData.cache_lock:
IR = gDict[(self.MetaFile.Path, self.Arch)]
IR.PreMakefileHashHexDigest = m.hexdigest()
gDict[(self.MetaFile.Path, self.Arch)] = IR
@@ -2104,6 +2114,11 @@ class ModuleAutoGen(AutoGen):
gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashDigest:
return gDict[(self.MetaFile.Path, self.Arch)]

+ # skip if the module cache already crashed
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].CacheCrash:
+ return
+
# skip binary module
if self.IsBinaryModule:
return
@@ -2159,7 +2174,7 @@ class ModuleAutoGen(AutoGen):
m.update(Content)
FileList.append((str(File), hashlib.md5(Content).hexdigest()))

- with GlobalData.file_lock:
+ with GlobalData.cache_lock:
IR = gDict[(self.MetaFile.Path, self.Arch)]
IR.AutoGenFileList = self.AutoGenFileList.keys()
IR.MakeHeaderFilesHashChain = FileList
@@ -2174,6 +2189,11 @@ class ModuleAutoGen(AutoGen):
gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain:
return gDict[(self.MetaFile.Path, self.Arch)]

+ # skip if the module cache already crashed
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].CacheCrash:
+ return
+
# skip binary module
if self.IsBinaryModule:
return
@@ -2222,7 +2242,7 @@ class ModuleAutoGen(AutoGen):
New.sort(key=lambda x: str(x))
MakeHashChain += New

- with GlobalData.file_lock:
+ with GlobalData.cache_lock:
IR = gDict[(self.MetaFile.Path, self.Arch)]
IR.MakeHashDigest = m.digest()
IR.MakeHashHexDigest = m.hexdigest()
@@ -2236,6 +2256,12 @@ class ModuleAutoGen(AutoGen):
if not GlobalData.gBinCacheSource:
return False

+ if gDict[(self.MetaFile.Path, self.Arch)].PreMakeCacheHit:
+ return True
+
+ if gDict[(self.MetaFile.Path, self.Arch)].CacheCrash:
+ return False
+
# If Module is binary, do not skip by cache
if self.IsBinaryModule:
return False
@@ -2255,12 +2281,15 @@ class ModuleAutoGen(AutoGen):
ModuleHashPair = path.join(FileDir, self.Name + ".ModuleHashPair")
if not os.path.exists(ModuleHashPair):
EdkLogger.quiet("[cache warning]: Cannot find ModuleHashPair file: %s" % ModuleHashPair)
+ with GlobalData.cache_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.CacheCrash = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
return False

try:
- f = open(ModuleHashPair, 'r')
- ModuleHashPairList = json.load(f)
- f.close()
+ with open(ModuleHashPair, 'r') as f:
+ ModuleHashPairList = json.load(f)
except:
EdkLogger.quiet("[cache warning]: fail to load ModuleHashPair file: %s" % ModuleHashPair)
return False
@@ -2300,7 +2329,7 @@ class ModuleAutoGen(AutoGen):
if self.Name == "PcdPeim" or self.Name == "PcdDxe":
CreatePcdDatabaseCode(self, TemplateString(), TemplateString())

- with GlobalData.file_lock:
+ with GlobalData.cache_lock:
IR = gDict[(self.MetaFile.Path, self.Arch)]
IR.PreMakeCacheHit = True
gDict[(self.MetaFile.Path, self.Arch)] = IR
@@ -2313,6 +2342,12 @@ class ModuleAutoGen(AutoGen):
if not GlobalData.gBinCacheSource:
return False

+ if gDict[(self.MetaFile.Path, self.Arch)].MakeCacheHit:
+ return True
+
+ if gDict[(self.MetaFile.Path, self.Arch)].CacheCrash:
+ return False
+
# If Module is binary, do not skip by cache
if self.IsBinaryModule:
print("[cache miss]: checkpoint_Makefile: binary module:", self.MetaFile.Path, self.Arch)
@@ -2321,7 +2356,7 @@ class ModuleAutoGen(AutoGen):
# .inc is contains binary information so do not skip by hash as well
for f_ext in self.SourceFileList:
if '.inc' in str(f_ext):
- with GlobalData.file_lock:
+ with GlobalData.cache_lock:
IR = gDict[(self.MetaFile.Path, self.Arch)]
IR.MakeCacheHit = False
gDict[(self.MetaFile.Path, self.Arch)] = IR
@@ -2338,12 +2373,15 @@ class ModuleAutoGen(AutoGen):
ModuleHashPair = path.join(FileDir, self.Name + ".ModuleHashPair")
if not os.path.exists(ModuleHashPair):
EdkLogger.quiet("[cache warning]: Cannot find ModuleHashPair file: %s" % ModuleHashPair)
+ with GlobalData.cache_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.CacheCrash = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
return False

try:
- f = open(ModuleHashPair, 'r')
- ModuleHashPairList = json.load(f)
- f.close()
+ with open(ModuleHashPair, 'r') as f:
+ ModuleHashPairList = json.load(f)
except:
EdkLogger.quiet("[cache warning]: fail to load ModuleHashPair file: %s" % ModuleHashPair)
return False
@@ -2383,7 +2421,7 @@ class ModuleAutoGen(AutoGen):

if self.Name == "PcdPeim" or self.Name == "PcdDxe":
CreatePcdDatabaseCode(self, TemplateString(), TemplateString())
- with GlobalData.file_lock:
+ with GlobalData.cache_lock:
IR = gDict[(self.MetaFile.Path, self.Arch)]
IR.MakeCacheHit = True
gDict[(self.MetaFile.Path, self.Arch)] = IR
@@ -2395,6 +2433,10 @@ class ModuleAutoGen(AutoGen):
if not GlobalData.gBinCacheSource:
return

+ # skip if the module cache already crashed
+ if gDict[(self.MetaFile.Path, self.Arch)].CacheCrash:
+ return
+
# skip binary module
if self.IsBinaryModule:
return
@@ -2420,9 +2462,8 @@ class ModuleAutoGen(AutoGen):
return

try:
- f = open(ModuleHashPair, 'r')
- ModuleHashPairList = json.load(f)
- f.close()
+ with open(ModuleHashPair, 'r') as f:
+ ModuleHashPairList = json.load(f)
except:
EdkLogger.quiet("[cache insight]: Cannot load ModuleHashPair file for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
return
diff --git a/BaseTools/Source/Python/Common/GlobalData.py b/BaseTools/Source/Python/Common/GlobalData.py
index 452dca32f0..09e92ce08c 100755
--- a/BaseTools/Source/Python/Common/GlobalData.py
+++ b/BaseTools/Source/Python/Common/GlobalData.py
@@ -122,6 +122,8 @@ gBuildHashSkipTracking = dict()

# Common dictionary to share module cache intermediate result and state
gCacheIR = None
+# Common lock for the module cache intermediate data
+cache_lock = None
# Common lock for the file access in multiple process AutoGens
file_lock = None
# Common dictionary to share platform libraries' constant Pcd
diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
old mode 100644
new mode 100755
index 554ec010dd..4799635cc4
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -448,7 +448,7 @@ def RemoveDirectory(Directory, Recursively=False):
# @retval True If the file content is changed and the file is renewed
# @retval False If the file content is the same
#
-def SaveFileOnChange(File, Content, IsBinaryFile=True):
+def SaveFileOnChange(File, Content, IsBinaryFile=True, FileLock=None):

if os.path.exists(File):
if IsBinaryFile:
@@ -479,6 +479,13 @@ def SaveFileOnChange(File, Content, IsBinaryFile=True):
if IsBinaryFile:
OpenMode = "wb"

+ # use default file_lock if no input new lock
+ if not FileLock:
+ FileLock = GlobalData.file_lock
+ if FileLock:
+ FileLock.acquire()
+
+
if GlobalData.gIsWindows and not os.path.exists(File):
# write temp file, then rename the temp file to the real file
# to make sure the file be immediate saved to disk
@@ -487,14 +494,26 @@ def SaveFileOnChange(File, Content, IsBinaryFile=True):
tempname = tf.name
try:
os.rename(tempname, File)
- except:
- EdkLogger.error(None, FILE_CREATE_FAILURE, ExtraData='IOError %s' % X)
+ except IOError as X:
+ if GlobalData.gBinCacheSource:
+ EdkLogger.quiet("[cache error]:fails to save file with error: %s" % (X))
+ else:
+ EdkLogger.error(None, FILE_CREATE_FAILURE, ExtraData='IOError %s' % X)
+ finally:
+ if FileLock:
+ FileLock.release()
else:
try:
with open(File, OpenMode) as Fd:
Fd.write(Content)
except IOError as X:
- EdkLogger.error(None, FILE_CREATE_FAILURE, ExtraData='IOError %s' % X)
+ if GlobalData.gBinCacheSource:
+ EdkLogger.quiet("[cache error]:fails to save file with error: %s" % (X))
+ else:
+ EdkLogger.error(None, FILE_CREATE_FAILURE, ExtraData='IOError %s' % X)
+ finally:
+ if FileLock:
+ FileLock.release()

return True

@@ -510,7 +529,7 @@ def SaveFileOnChange(File, Content, IsBinaryFile=True):
# @retval True The two files content are different and the file is copied
# @retval False No copy really happen
#
-def CopyFileOnChange(SrcFile, Dst):
+def CopyFileOnChange(SrcFile, Dst, FileLock=None):
if not os.path.exists(SrcFile):
return False

@@ -531,6 +550,12 @@ def CopyFileOnChange(SrcFile, Dst):
if not os.access(DirName, os.W_OK):
EdkLogger.error(None, PERMISSION_FAILURE, "Do not have write permission on directory %s" % DirName)

+ # use default file_lock if no input new lock
+ if not FileLock:
+ FileLock = GlobalData.file_lock
+ if FileLock:
+ FileLock.acquire()
+
# os.replace and os.rename are the atomic operations in python 3 and 2.
# we use these two atomic operations to ensure the file copy is atomic:
# copy the src to a temp file in the dst same folder firstly, then
@@ -546,9 +571,14 @@ def CopyFileOnChange(SrcFile, Dst):
if GlobalData.gIsWindows and os.path.exists(DstFile):
os.remove(DstFile)
os.rename(tempname, DstFile)
-
except IOError as X:
- EdkLogger.error(None, FILE_COPY_FAILURE, ExtraData='IOError %s' % X)
+ if GlobalData.gBinCacheSource:
+ EdkLogger.quiet("[cache error]:fails to copy file with error: %s" % (X))
+ else:
+ EdkLogger.error(None, FILE_COPY_FAILURE, ExtraData='IOError %s' % X)
+ finally:
+ if FileLock:
+ FileLock.release()

return True

diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index 299fa64311..2c10670a69 100755
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -820,13 +820,15 @@ class Build():
file_lock = mp.Lock()
error_event = mp.Event()
GlobalData.file_lock = file_lock
+ cache_lock = mp.Lock()
+ GlobalData.cache_lock = cache_lock
FfsCmd = DataPipe.Get("FfsCommand")
if FfsCmd is None:
FfsCmd = {}
GlobalData.FfsCmd = FfsCmd
GlobalData.libConstPcd = DataPipe.Get("LibConstPcd")
GlobalData.Refes = DataPipe.Get("REFS")
- auto_workers = [AutoGenWorkerInProcess(mqueue,DataPipe.dump_file,feedback_q,file_lock,share_data,self.log_q,error_event) for _ in range(self.ThreadNumber)]
+ auto_workers = [AutoGenWorkerInProcess(mqueue,DataPipe.dump_file,feedback_q,file_lock,cache_lock,share_data,self.log_q,error_event) for _ in range(self.ThreadNumber)]
self.AutoGenMgr = AutoGenManager(auto_workers,feedback_q,error_event)
self.AutoGenMgr.start()
for w in auto_workers:
@@ -1826,6 +1828,7 @@ class Build():
for PkgName in GlobalData.gPackageHash.keys():
GlobalData.gCacheIR[(PkgName, 'PackageHash')] = GlobalData.gPackageHash[PkgName]
GlobalData.file_lock = mp.Lock()
+ GlobalData.cache_lock = mp.Lock()
GlobalData.FfsCmd = CmdListDict

self.Progress.Stop("done!")
--
2.17.1

[PATCH v6 4/5] BaseTools: Add GenFds multi-thread support in build cache

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1923

Fix the issue that the multi-threaded GenFds build fails when
the build cache is enabled at the same time.

Cc: Liming Gao <liming.gao@...>
Cc: Bob Feng <bob.c.feng@...>
Signed-off-by: Steven Shi <steven.shi@...>
---
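As a rough sketch of the mechanism (hypothetical helper name, not the
patch code): the OutputFile property now also walks the FFS output
directory and collects the GenFds artifacts by extension, so they land
in the cache alongside the regular build outputs.

import os

FFS_SUFFIXES = ('.ffs', '.offset', '.raw', '.raw.txt')

def collect_ffs_outputs(ffs_output_dir):
    # Gather the GenFds artifacts that must be stored in the build cache.
    found = set()
    for root, _dirs, files in os.walk(ffs_output_dir):
        for name in files:
            if name.lower().endswith(FFS_SUFFIXES):
                found.add(os.path.join(root, name))
    return found
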
BaseTools/Source/Python/AutoGen/ModuleAutoGen.py | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
index 6db3b47a91..c489c3b9c4 100755
--- a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
@@ -1262,11 +1262,13 @@ class ModuleAutoGen(AutoGen):
fStringIO.close ()
fInputfile.close ()
return OutputName
+
@cached_property
def OutputFile(self):
retVal = set()
OutputDir = self.OutputDir.replace('\\', '/').strip('/')
DebugDir = self.DebugDir.replace('\\', '/').strip('/')
+ FfsOutputDir = self.FfsOutputDir.replace('\\', '/').rstrip('/')
for Item in self.CodaTargetList:
File = Item.Target.Path.replace('\\', '/').strip('/').replace(DebugDir, '').replace(OutputDir, '').strip('/')
retVal.add(File)
@@ -1282,6 +1284,12 @@ class ModuleAutoGen(AutoGen):
if File.lower().endswith('.pdb'):
retVal.add(File)

+ for Root, Dirs, Files in os.walk(FfsOutputDir):
+ for File in Files:
+ if File.lower().endswith('.ffs') or File.lower().endswith('.offset') or File.lower().endswith('.raw') \
+ or File.lower().endswith('.raw.txt'):
+ retVal.add(File)
+
return retVal

## Create AsBuilt INF file the module
@@ -1652,13 +1660,16 @@ class ModuleAutoGen(AutoGen):
for File in self.OutputFile:
File = str(File)
if not os.path.isabs(File):
- File = os.path.join(self.OutputDir, File)
+ NewFile = os.path.join(self.OutputDir, File)
+ if not os.path.exists(NewFile):
+ NewFile = os.path.join(self.FfsOutputDir, File)
+ File = NewFile
if os.path.exists(File):
- sub_dir = os.path.relpath(File, self.OutputDir)
- destination_file = os.path.join(FileDir, sub_dir)
- destination_dir = os.path.dirname(destination_file)
- CreateDirectory(destination_dir)
- CopyFileOnChange(File, destination_dir)
+ if File.lower().endswith('.ffs') or File.lower().endswith('.offset') or File.lower().endswith('.raw') \
+ or File.lower().endswith('.raw.txt'):
+ self.CacheCopyFile(FfsDir, self.FfsOutputDir, File)
+ else:
+ self.CacheCopyFile(FileDir, self.OutputDir, File)

def SaveHashChainFileToCache(self, gDict):
if not GlobalData.gBinCacheDest:
--
2.17.1

[PATCH v6 3/5] BaseTools: Change the [Arch][Name] module key in Build cache

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1951

The current build cache uses the module's [self.Arch][self.Name]
info as the ModuleAutoGen object key in hash lists and dictionaries.
[self.Arch][self.Name] is not safe as the module key because
there could be two modules with the same module name and arch name
in one platform. E.g. a platform can override a module or library
instance with one from a different path; the overriding module can
have the same module name and arch name as the original one.
Directly use the ModuleAutoGen object itself as the key, because
the object's __hash__ and __repr__ attributes already contain the
full path and arch name.

Cc: Liming Gao <liming.gao@...>
Cc: Bob Feng <bob.c.feng@...>
Signed-off-by: Steven Shi <steven.shi@...>
---
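A standalone illustration of the collision described above, with a
hypothetical ModuleKey class standing in for ModuleAutoGen: keying by an
object whose hash covers the full path keeps two same-named modules
distinct, where an [Arch][Name] key would merge them.

class ModuleKey:
    # Hypothetical stand-in: identity covers the full path plus arch,
    # like ModuleAutoGen's __hash__/__repr__.
    def __init__(self, path, arch, name):
        self.Path, self.Arch, self.Name = path, arch, name
    def __hash__(self):
        return hash((self.Path, self.Arch))
    def __eq__(self, other):
        if not isinstance(other, ModuleKey):
            return NotImplemented
        return (self.Path, self.Arch) == (other.Path, other.Arch)
    def __repr__(self):
        return "%s[%s]" % (self.Path, self.Arch)

Original = ModuleKey("Pkg/Library/Lib.inf", "X64", "Lib")
Override = ModuleKey("Platform/Override/Lib.inf", "X64", "Lib")  # same Name+Arch

Tracking = {Original: 'SUCCESS', Override: 'FAIL'}
assert len(Tracking) == 2   # object keys stay distinct; a nested
                            # Tracking[Arch][Name] would have collapsed them
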
BaseTools/Source/Python/AutoGen/GenMake.py | 6 +-----
BaseTools/Source/Python/build/build.py | 48 ++++++++++++++++++++----------------------------
2 files changed, 21 insertions(+), 33 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/GenMake.py b/BaseTools/Source/Python/AutoGen/GenMake.py
index c11b423ae1..039c913280 100755
--- a/BaseTools/Source/Python/AutoGen/GenMake.py
+++ b/BaseTools/Source/Python/AutoGen/GenMake.py
@@ -959,16 +959,12 @@ cleanlib:
# Keep the file to be checked
headerFileDependencySet.add(aFileName)

- # Ensure that gModuleBuildTracking has been initialized per architecture
- if self._AutoGenObject.Arch not in GlobalData.gModuleBuildTracking:
- GlobalData.gModuleBuildTracking[self._AutoGenObject.Arch] = dict()
-
# Check if a module dependency header file is missing from the module's MetaFile
for aFile in headerFileDependencySet:
if aFile in headerFilesInMetaFileSet:
continue
if GlobalData.gUseHashCache:
- GlobalData.gModuleBuildTracking[self._AutoGenObject.Arch][self._AutoGenObject] = 'FAIL_METAFILE'
+ GlobalData.gModuleBuildTracking[self._AutoGenObject] = 'FAIL_METAFILE'
EdkLogger.warn("build","Module MetaFile [Sources] is missing local header!",
ExtraData = "Local Header: " + aFile + " not found in " + self._AutoGenObject.MetaFile.Path
)
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index d7c817b95c..299fa64311 100755
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -630,12 +630,11 @@ class BuildTask:

# Set the value used by hash invalidation flow in GlobalData.gModuleBuildTracking to 'SUCCESS'
# If Module or Lib is being tracked, it did not fail header check test, and built successfully
- if (self.BuildItem.BuildObject.Arch in GlobalData.gModuleBuildTracking and
- self.BuildItem.BuildObject in GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject.Arch] and
- GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject.Arch][self.BuildItem.BuildObject] != 'FAIL_METAFILE' and
+ if (self.BuildItem.BuildObject in GlobalData.gModuleBuildTracking and
+ GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject] != 'FAIL_METAFILE' and
not BuildTask._ErrorFlag.isSet()
):
- GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject.Arch][self.BuildItem.BuildObject] = 'SUCCESS'
+ GlobalData.gModuleBuildTracking[self.BuildItem.BuildObject] = 'SUCCESS'

# indicate there's a thread is available for another build task
BuildTask._RunningQueueLock.acquire()
@@ -1171,25 +1170,24 @@ class Build():
return

# GlobalData.gModuleBuildTracking contains only modules or libs that cannot be skipped by hash
- for moduleAutoGenObjArch in GlobalData.gModuleBuildTracking.keys():
- for moduleAutoGenObj in GlobalData.gModuleBuildTracking[moduleAutoGenObjArch].keys():
- # Skip invalidating for Successful Module/Lib builds
- if GlobalData.gModuleBuildTracking[moduleAutoGenObjArch][moduleAutoGenObj] == 'SUCCESS':
- continue
+ for Ma in GlobalData.gModuleBuildTracking:
+ # Skip invalidating for Successful Module/Lib builds
+ if GlobalData.gModuleBuildTracking[Ma] == 'SUCCESS':
+ continue

- # The module failed to build, failed to start building, or failed the header check test from this point on
+ # The module failed to build, failed to start building, or failed the header check test from this point on

- # Remove .hash from build
- ModuleHashFile = os.path.join(moduleAutoGenObj.BuildDir, moduleAutoGenObj.Name + ".hash")
- if os.path.exists(ModuleHashFile):
- os.remove(ModuleHashFile)
+ # Remove .hash from build
+ ModuleHashFile = os.path.join(Ma.BuildDir, Ma.Name + ".hash")
+ if os.path.exists(ModuleHashFile):
+ os.remove(ModuleHashFile)

- # Remove .hash file from cache
- if GlobalData.gBinCacheDest:
- FileDir = os.path.join(GlobalData.gBinCacheDest, moduleAutoGenObj.Arch, moduleAutoGenObj.SourceDir, moduleAutoGenObj.MetaFile.BaseName)
- HashFile = os.path.join(FileDir, moduleAutoGenObj.Name + '.hash')
- if os.path.exists(HashFile):
- os.remove(HashFile)
+ # Remove .hash file from cache
+ if GlobalData.gBinCacheDest:
+ FileDir = os.path.join(GlobalData.gBinCacheDest, Ma.PlatformInfo.OutputDir, Ma.BuildTarget + "_" + Ma.ToolChain, Ma.Arch, Ma.SourceDir, Ma.MetaFile.BaseName)
+ HashFile = os.path.join(FileDir, Ma.Name + '.hash')
+ if os.path.exists(HashFile):
+ os.remove(HashFile)

## Build a module or platform
#
@@ -1889,10 +1887,7 @@ class Build():

self.BuildModules.append(Ma)
# Initialize all modules in tracking to 'FAIL'
- if Ma.Arch not in GlobalData.gModuleBuildTracking:
- GlobalData.gModuleBuildTracking[Ma.Arch] = dict()
- if Ma not in GlobalData.gModuleBuildTracking[Ma.Arch]:
- GlobalData.gModuleBuildTracking[Ma.Arch][Ma] = 'FAIL'
+ GlobalData.gModuleBuildTracking[Ma] = 'FAIL'
self.AutoGenTime += int(round((time.time() - AutoGenStart)))
MakeStart = time.time()
for Ma in self.BuildModules:
@@ -2075,10 +2070,7 @@ class Build():
PcdMaList.append(Ma)
TotalModules.append(Ma)
# Initialize all modules in tracking to 'FAIL'
- if Ma.Arch not in GlobalData.gModuleBuildTracking:
- GlobalData.gModuleBuildTracking[Ma.Arch] = dict()
- if Ma not in GlobalData.gModuleBuildTracking[Ma.Arch]:
- GlobalData.gModuleBuildTracking[Ma.Arch][Ma] = 'FAIL'
+ GlobalData.gModuleBuildTracking[Ma] = 'FAIL'

mqueue = mp.Queue()
for m in Pa.GetAllModuleInfo:
--
2.17.1

[PATCH v6 2/5] BaseTools: Print first cache missing file for build cache

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1925

When a module misses the build cache, add support to print the
path and name of the first file that caused the cache miss.

Cc: Liming Gao <liming.gao@...>
Cc: Bob Feng <bob.c.feng@...>
Signed-off-by: Steven Shi <steven.shi@...>
---
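The diagnostic amounts to walking the local and cached (file, hash)
chains in parallel and reporting the first divergent entry. A minimal
sketch with a hypothetical helper (the real logic lives in the new
PrintFirstMakeCacheMissFile method below):

def first_cache_miss(current_chain, cached_chain):
    # Both arguments are ordered lists of (path, md5-hex) tuples, as
    # stored in the .MakeHashChain file.
    for (path, digest), (cached_path, cached_digest) in zip(current_chain, cached_chain):
        if path != cached_path:
            return "first different file: %s (cached: %s)" % (path, cached_path)
        if digest != cached_digest:
            return "first cache miss file: %s" % path
    return None   # the chains agree on their common prefix

print(first_cache_miss([("a.c", "1f"), ("b.c", "2e")],
                       [("a.c", "1f"), ("b.c", "9d")]))
# -> first cache miss file: b.c
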
BaseTools/Source/Python/AutoGen/AutoGenWorker.py | 2 ++
BaseTools/Source/Python/AutoGen/ModuleAutoGen.py | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 78 insertions(+)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
index a84ed46f2e..30d2f96fc7 100755
--- a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
@@ -246,6 +246,8 @@ class AutoGenWorkerInProcess(mp.Process):
Ma.GenMakeHash(GlobalData.gCacheIR)
if Ma.CanSkipbyMakeCache(GlobalData.gCacheIR):
continue
+ else:
+ Ma.PrintFirstMakeCacheMissFile(GlobalData.gCacheIR)
except Empty:
pass
except:
diff --git a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
index 613b0d2fb8..6db3b47a91 100755
--- a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
@@ -2379,6 +2379,82 @@ class ModuleAutoGen(AutoGen):
print("[cache hit]: checkpoint_Makefile:", self.MetaFile.Path, self.Arch)
return True

+ ## Show the first file name which causes cache miss
+ def PrintFirstMakeCacheMissFile(self, gDict):
+ if not GlobalData.gBinCacheSource:
+ return
+
+ # skip binary module
+ if self.IsBinaryModule:
+ return
+
+ if not (self.MetaFile.Path, self.Arch) in gDict:
+ return
+
+ # Only print cache miss file for the MakeCache not hit module
+ if gDict[(self.MetaFile.Path, self.Arch)].MakeCacheHit:
+ return
+
+ if not gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain:
+ EdkLogger.quiet("[cache insight]: MakeHashChain is missing for: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return
+
+ # Find the cache dir name through the .ModuleHashPair file info
+ FileDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+
+ ModuleHashPairList = [] # tuple list: [tuple(PreMakefileHash, MakeHash)]
+ ModuleHashPair = path.join(FileDir, self.Name + ".ModuleHashPair")
+ if not os.path.exists(ModuleHashPair):
+ EdkLogger.quiet("[cache insight]: Cannot find ModuleHashPair file for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return
+
+ try:
+ f = open(ModuleHashPair, 'r')
+ ModuleHashPairList = json.load(f)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache insight]: Cannot load ModuleHashPair file for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return
+
+ MakeHashSet = set()
+ for idx, (PreMakefileHash, MakeHash) in enumerate (ModuleHashPairList):
+ TargetHashDir = path.join(FileDir, str(MakeHash))
+ if os.path.exists(TargetHashDir):
+ MakeHashSet.add(MakeHash)
+ if not MakeHashSet:
+ EdkLogger.quiet("[cache insight]: Cannot find valid cache dir for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return
+
+ TargetHash = list(MakeHashSet)[0]
+ TargetHashDir = path.join(FileDir, str(TargetHash))
+ if len(MakeHashSet) > 1 :
+ EdkLogger.quiet("[cache insight]: found multiple cache dirs for this module, random select dir '%s' to search the first cache miss file: %s[%s]" % (TargetHash, self.MetaFile.Path, self.Arch))
+
+ ListFile = path.join(TargetHashDir, self.Name + '.MakeHashChain')
+ if os.path.exists(ListFile):
+ try:
+ f = open(ListFile, 'r')
+ CachedList = json.load(f)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache insight]: Cannot load MakeHashChain file: %s" % ListFile)
+ return
+ else:
+ EdkLogger.quiet("[cache insight]: Cannot find MakeHashChain file: %s" % ListFile)
+ return
+
+ CurrentList = gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain
+ for idx, (file, hash) in enumerate (CurrentList):
+ (filecached, hashcached) = CachedList[idx]
+ if file != filecached:
+ EdkLogger.quiet("[cache insight]: first different file in %s[%s] is %s, the cached one is %s" % (self.MetaFile.Path, self.Arch, file, filecached))
+ break
+ if hash != hashcached:
+ EdkLogger.quiet("[cache insight]: first cache miss file in %s[%s] is %s" % (self.MetaFile.Path, self.Arch, file))
+ break
+
+ return True
+
## Decide whether we can skip the ModuleAutoGen process
def CanSkipbyCache(self, gDict):
# Hashing feature is off
--
2.17.1

[PATCH v6 1/5] BaseTools: Improve the cache hit in the edk2 build cache

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=1927

The current cache hash algorithm does not parse and generate
the makefile to get the accurate dependency files for a
module. It instead uses the platform and package meta files
to get the module dependency in a quick but over-approximate
way. These meta files are monolithic and bring in many
redundant dependencies for the module, which easily causes
module build cache misses.
This patch introduces one more cache checkpoint and a new
hash algorithm besides the current quick one. The new hash
algorithm leverages the module makefile to achieve more
accurate and precise dependency info for a module. When the
build cache misses with the first quick hash, BaseTools will
calculate the new hash after the makefile is generated and
then check again.

Cc: Liming Gao <liming.gao@...>
Cc: Bob Feng <bob.c.feng@...>
Signed-off-by: Steven Shi <steven.shi@...>
---
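The resulting control flow can be summarized in a few lines. The sketch
below uses hypothetical method names; the real ones in this patch are
GenPreMakefileHash/CanSkipbyPreMakefileCache for the quick checkpoint
and GenMakeHash/CanSkipbyMakeCache for the precise one.

def try_build_cache(module):
    # Checkpoint 1: quick hash from platform/package meta files,
    # available before any file generation.
    pre_hash = module.gen_pre_makefile_hash()
    if module.cache_hit(pre_hash):
        module.restore_from_cache(pre_hash)
        return True

    # Cache miss with the quick hash: generate the AutoGen files and the
    # makefile, which yields the accurate per-module dependency list.
    module.create_code_file()
    module.create_make_file()

    # Checkpoint 2: precise hash derived from the makefile dependencies.
    make_hash = module.gen_make_hash()
    if module.cache_hit(make_hash):
        module.restore_from_cache(make_hash)
        return True

    return False   # both checkpoints missed; do a full build
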
BaseTools/Source/Python/AutoGen/AutoGenWorker.py | 21 +++++++++++++++++++++
BaseTools/Source/Python/AutoGen/CacheIR.py | 28 ++++++++++++++++++++++++++++
BaseTools/Source/Python/AutoGen/DataPipe.py | 8 ++++++++
BaseTools/Source/Python/AutoGen/GenMake.py | 227 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------------------------------------------------------------------------------------------------
BaseTools/Source/Python/AutoGen/ModuleAutoGen.py | 639 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------------------------------------------
BaseTools/Source/Python/Common/GlobalData.py | 9 +++++++++
BaseTools/Source/Python/build/build.py | 129 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----------------------------
7 files changed, 865 insertions(+), 196 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
old mode 100644
new mode 100755
index e583828741..a84ed46f2e
--- a/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGenWorker.py
@@ -182,6 +182,12 @@ class AutoGenWorkerInProcess(mp.Process):
GlobalData.gDisableIncludePathCheck = False
GlobalData.gFdfParser = self.data_pipe.Get("FdfParser")
GlobalData.gDatabasePath = self.data_pipe.Get("DatabasePath")
+ GlobalData.gBinCacheSource = self.data_pipe.Get("BinCacheSource")
+ GlobalData.gBinCacheDest = self.data_pipe.Get("BinCacheDest")
+ GlobalData.gCacheIR = self.data_pipe.Get("CacheIR")
+ GlobalData.gEnableGenfdsMultiThread = self.data_pipe.Get("EnableGenfdsMultiThread")
+ GlobalData.file_lock = self.file_lock
+ CommandTarget = self.data_pipe.Get("CommandTarget")
pcd_from_build_option = []
for pcd_tuple in self.data_pipe.Get("BuildOptPcd"):
pcd_id = ".".join((pcd_tuple[0],pcd_tuple[1]))
@@ -193,10 +199,13 @@ class AutoGenWorkerInProcess(mp.Process):
FfsCmd = self.data_pipe.Get("FfsCommand")
if FfsCmd is None:
FfsCmd = {}
+ GlobalData.FfsCmd = FfsCmd
PlatformMetaFile = self.GetPlatformMetaFile(self.data_pipe.Get("P_Info").get("ActivePlatform"),
self.data_pipe.Get("P_Info").get("WorkspaceDir"))
libConstPcd = self.data_pipe.Get("LibConstPcd")
Refes = self.data_pipe.Get("REFS")
+ GlobalData.libConstPcd = libConstPcd
+ GlobalData.Refes = Refes
while True:
if self.module_queue.empty():
break
@@ -223,8 +232,20 @@ class AutoGenWorkerInProcess(mp.Process):
Ma.ConstPcd = libConstPcd[(Ma.MetaFile.File,Ma.MetaFile.Root,Ma.Arch,Ma.MetaFile.Path)]
if (Ma.MetaFile.File,Ma.MetaFile.Root,Ma.Arch,Ma.MetaFile.Path) in Refes:
Ma.ReferenceModules = Refes[(Ma.MetaFile.File,Ma.MetaFile.Root,Ma.Arch,Ma.MetaFile.Path)]
+ if GlobalData.gBinCacheSource and CommandTarget in [None, "", "all"]:
+ Ma.GenModuleFilesHash(GlobalData.gCacheIR)
+ Ma.GenPreMakefileHash(GlobalData.gCacheIR)
+ if Ma.CanSkipbyPreMakefileCache(GlobalData.gCacheIR):
+ continue
+
Ma.CreateCodeFile(False)
Ma.CreateMakeFile(False,GenFfsList=FfsCmd.get((Ma.MetaFile.File, Ma.Arch),[]))
+
+ if GlobalData.gBinCacheSource and CommandTarget in [None, "", "all"]:
+ Ma.GenMakeHeaderFilesHash(GlobalData.gCacheIR)
+ Ma.GenMakeHash(GlobalData.gCacheIR)
+ if Ma.CanSkipbyMakeCache(GlobalData.gCacheIR):
+ continue
except Empty:
pass
except:
diff --git a/BaseTools/Source/Python/AutoGen/CacheIR.py b/BaseTools/Source/Python/AutoGen/CacheIR.py
new file mode 100755
index 0000000000..2d9ffe3f0b
--- /dev/null
+++ b/BaseTools/Source/Python/AutoGen/CacheIR.py
@@ -0,0 +1,28 @@
+## @file
+# Build cache intermediate result and state
+#
+# Copyright (c) 2019, Intel Corporation. All rights reserved.<BR>
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+
+class ModuleBuildCacheIR():
+ def __init__(self, Path, Arch):
+ self.ModulePath = Path
+ self.ModuleArch = Arch
+ self.ModuleFilesHashDigest = None
+ self.ModuleFilesHashHexDigest = None
+ self.ModuleFilesChain = []
+ self.PreMakefileHashHexDigest = None
+ self.CreateCodeFileDone = False
+ self.CreateMakeFileDone = False
+ self.MakefilePath = None
+ self.AutoGenFileList = None
+ self.DependencyHeaderFileSet = None
+ self.MakeHeaderFilesHashChain = None
+ self.MakeHeaderFilesHashDigest = None
+ self.MakeHeaderFilesHashChain = []
+ self.MakeHashDigest = None
+ self.MakeHashHexDigest = None
+ self.MakeHashChain = []
+ self.PreMakeCacheHit = False
+ self.MakeCacheHit = False
diff --git a/BaseTools/Source/Python/AutoGen/DataPipe.py b/BaseTools/Source/Python/AutoGen/DataPipe.py
old mode 100644
new mode 100755
index 2052084bdb..84e77c301a
--- a/BaseTools/Source/Python/AutoGen/DataPipe.py
+++ b/BaseTools/Source/Python/AutoGen/DataPipe.py
@@ -158,3 +158,11 @@ class MemoryDataPipe(DataPipe):
self.DataContainer = {"FdfParser": True if GlobalData.gFdfParser else False}

self.DataContainer = {"LogLevel": EdkLogger.GetLevel()}
+
+ self.DataContainer = {"BinCacheSource":GlobalData.gBinCacheSource}
+
+ self.DataContainer = {"BinCacheDest":GlobalData.gBinCacheDest}
+
+ self.DataContainer = {"CacheIR":GlobalData.gCacheIR}
+
+ self.DataContainer = {"EnableGenfdsMultiThread":GlobalData.gEnableGenfdsMultiThread}
\ No newline at end of file
diff --git a/BaseTools/Source/Python/AutoGen/GenMake.py b/BaseTools/Source/Python/AutoGen/GenMake.py
old mode 100644
new mode 100755
index 499ef82aea..c11b423ae1
--- a/BaseTools/Source/Python/AutoGen/GenMake.py
+++ b/BaseTools/Source/Python/AutoGen/GenMake.py
@@ -906,6 +906,11 @@ cleanlib:
self._AutoGenObject.IncludePathList + self._AutoGenObject.BuildOptionIncPathList
)

+ self.DependencyHeaderFileSet = set()
+ if FileDependencyDict:
+ for Dependency in FileDependencyDict.values():
+ self.DependencyHeaderFileSet.update(set(Dependency))
+
# Get a set of unique package includes from MetaFile
parentMetaFileIncludes = set()
for aInclude in self._AutoGenObject.PackageIncludePathList:
@@ -1115,7 +1120,7 @@ cleanlib:
## For creating makefile targets for dependent libraries
def ProcessDependentLibrary(self):
for LibraryAutoGen in self._AutoGenObject.LibraryAutoGenList:
- if not LibraryAutoGen.IsBinaryModule and not LibraryAutoGen.CanSkipbyHash():
+ if not LibraryAutoGen.IsBinaryModule:
self.LibraryBuildDirectoryList.append(self.PlaceMacro(LibraryAutoGen.BuildDir, self.Macros))

## Return a list containing source file's dependencies
@@ -1129,114 +1134,9 @@ cleanlib:
def GetFileDependency(self, FileList, ForceInculeList, SearchPathList):
Dependency = {}
for F in FileList:
- Dependency[F] = self.GetDependencyList(F, ForceInculeList, SearchPathList)
+ Dependency[F] = GetDependencyList(self._AutoGenObject, self.FileCache, F, ForceInculeList, SearchPathList)
return Dependency

- ## Find dependencies for one source file
- #
- # By searching recursively "#include" directive in file, find out all the
- # files needed by given source file. The dependencies will be only searched
- # in given search path list.
- #
- # @param File The source file
- # @param ForceInculeList The list of files which will be included forcely
- # @param SearchPathList The list of search path
- #
- # @retval list The list of files the given source file depends on
- #
- def GetDependencyList(self, File, ForceList, SearchPathList):
- EdkLogger.debug(EdkLogger.DEBUG_1, "Try to get dependency files for %s" % File)
- FileStack = [File] + ForceList
- DependencySet = set()
-
- if self._AutoGenObject.Arch not in gDependencyDatabase:
- gDependencyDatabase[self._AutoGenObject.Arch] = {}
- DepDb = gDependencyDatabase[self._AutoGenObject.Arch]
-
- while len(FileStack) > 0:
- F = FileStack.pop()
-
- FullPathDependList = []
- if F in self.FileCache:
- for CacheFile in self.FileCache[F]:
- FullPathDependList.append(CacheFile)
- if CacheFile not in DependencySet:
- FileStack.append(CacheFile)
- DependencySet.update(FullPathDependList)
- continue
-
- CurrentFileDependencyList = []
- if F in DepDb:
- CurrentFileDependencyList = DepDb[F]
- else:
- try:
- Fd = open(F.Path, 'rb')
- FileContent = Fd.read()
- Fd.close()
- except BaseException as X:
- EdkLogger.error("build", FILE_OPEN_FAILURE, ExtraData=F.Path + "\n\t" + str(X))
- if len(FileContent) == 0:
- continue
- try:
- if FileContent[0] == 0xff or FileContent[0] == 0xfe:
- FileContent = FileContent.decode('utf-16')
- else:
- FileContent = FileContent.decode()
- except:
- # The file is not txt file. for example .mcb file
- continue
- IncludedFileList = gIncludePattern.findall(FileContent)
-
- for Inc in IncludedFileList:
- Inc = Inc.strip()
- # if there's macro used to reference header file, expand it
- HeaderList = gMacroPattern.findall(Inc)
- if len(HeaderList) == 1 and len(HeaderList[0]) == 2:
- HeaderType = HeaderList[0][0]
- HeaderKey = HeaderList[0][1]
- if HeaderType in gIncludeMacroConversion:
- Inc = gIncludeMacroConversion[HeaderType] % {"HeaderKey" : HeaderKey}
- else:
- # not known macro used in #include, always build the file by
- # returning a empty dependency
- self.FileCache[File] = []
- return []
- Inc = os.path.normpath(Inc)
- CurrentFileDependencyList.append(Inc)
- DepDb[F] = CurrentFileDependencyList
-
- CurrentFilePath = F.Dir
- PathList = [CurrentFilePath] + SearchPathList
- for Inc in CurrentFileDependencyList:
- for SearchPath in PathList:
- FilePath = os.path.join(SearchPath, Inc)
- if FilePath in gIsFileMap:
- if not gIsFileMap[FilePath]:
- continue
- # If isfile is called too many times, the performance is slow down.
- elif not os.path.isfile(FilePath):
- gIsFileMap[FilePath] = False
- continue
- else:
- gIsFileMap[FilePath] = True
- FilePath = PathClass(FilePath)
- FullPathDependList.append(FilePath)
- if FilePath not in DependencySet:
- FileStack.append(FilePath)
- break
- else:
- EdkLogger.debug(EdkLogger.DEBUG_9, "%s included by %s was not found "\
- "in any given path:\n\t%s" % (Inc, F, "\n\t".join(SearchPathList)))
-
- self.FileCache[F] = FullPathDependList
- DependencySet.update(FullPathDependList)
-
- DependencySet.update(ForceList)
- if File in DependencySet:
- DependencySet.remove(File)
- DependencyList = list(DependencySet) # remove duplicate ones
-
- return DependencyList

## CustomMakefile class
#
@@ -1618,7 +1518,7 @@ cleanlib:
def GetLibraryBuildDirectoryList(self):
DirList = []
for LibraryAutoGen in self._AutoGenObject.LibraryAutoGenList:
- if not LibraryAutoGen.IsBinaryModule and not LibraryAutoGen.CanSkipbyHash():
+ if not LibraryAutoGen.IsBinaryModule:
DirList.append(os.path.join(self._AutoGenObject.BuildDir, LibraryAutoGen.BuildDir))
return DirList

@@ -1754,11 +1654,116 @@ class TopLevelMakefile(BuildFile):
def GetLibraryBuildDirectoryList(self):
DirList = []
for LibraryAutoGen in self._AutoGenObject.LibraryAutoGenList:
- if not LibraryAutoGen.IsBinaryModule and not LibraryAutoGen.CanSkipbyHash():
+ if not LibraryAutoGen.IsBinaryModule:
DirList.append(os.path.join(self._AutoGenObject.BuildDir, LibraryAutoGen.BuildDir))
return DirList

+## Find dependencies for one source file
+#
+# By searching recursively "#include" directive in file, find out all the
+# files needed by given source file. The dependencies will be only searched
+# in given search path list.
+#
+# @param File The source file
+# @param ForceInculeList The list of files which will be included forcely
+# @param SearchPathList The list of search path
+#
+# @retval list The list of files the given source file depends on
+#
+def GetDependencyList(AutoGenObject, FileCache, File, ForceList, SearchPathList):
+ EdkLogger.debug(EdkLogger.DEBUG_1, "Try to get dependency files for %s" % File)
+ FileStack = [File] + ForceList
+ DependencySet = set()
+
+ if AutoGenObject.Arch not in gDependencyDatabase:
+ gDependencyDatabase[AutoGenObject.Arch] = {}
+ DepDb = gDependencyDatabase[AutoGenObject.Arch]
+
+ while len(FileStack) > 0:
+ F = FileStack.pop()
+
+ FullPathDependList = []
+ if F in FileCache:
+ for CacheFile in FileCache[F]:
+ FullPathDependList.append(CacheFile)
+ if CacheFile not in DependencySet:
+ FileStack.append(CacheFile)
+ DependencySet.update(FullPathDependList)
+ continue
+
+ CurrentFileDependencyList = []
+ if F in DepDb:
+ CurrentFileDependencyList = DepDb[F]
+ else:
+ try:
+ Fd = open(F.Path, 'rb')
+ FileContent = Fd.read()
+ Fd.close()
+ except BaseException as X:
+ EdkLogger.error("build", FILE_OPEN_FAILURE, ExtraData=F.Path + "\n\t" + str(X))
+ if len(FileContent) == 0:
+ continue
+ try:
+ if FileContent[0] == 0xff or FileContent[0] == 0xfe:
+ FileContent = FileContent.decode('utf-16')
+ else:
+ FileContent = FileContent.decode()
+ except:
+ # The file is not txt file. for example .mcb file
+ continue
+ IncludedFileList = gIncludePattern.findall(FileContent)
+
+ for Inc in IncludedFileList:
+ Inc = Inc.strip()
+ # if there's macro used to reference header file, expand it
+ HeaderList = gMacroPattern.findall(Inc)
+ if len(HeaderList) == 1 and len(HeaderList[0]) == 2:
+ HeaderType = HeaderList[0][0]
+ HeaderKey = HeaderList[0][1]
+ if HeaderType in gIncludeMacroConversion:
+ Inc = gIncludeMacroConversion[HeaderType] % {"HeaderKey" : HeaderKey}
+ else:
+ # not known macro used in #include, always build the file by
+ # returning a empty dependency
+ FileCache[File] = []
+ return []
+ Inc = os.path.normpath(Inc)
+ CurrentFileDependencyList.append(Inc)
+ DepDb[F] = CurrentFileDependencyList
+
+ CurrentFilePath = F.Dir
+ PathList = [CurrentFilePath] + SearchPathList
+ for Inc in CurrentFileDependencyList:
+ for SearchPath in PathList:
+ FilePath = os.path.join(SearchPath, Inc)
+ if FilePath in gIsFileMap:
+ if not gIsFileMap[FilePath]:
+ continue
+ # If isfile is called too many times, the performance is slow down.
+ elif not os.path.isfile(FilePath):
+ gIsFileMap[FilePath] = False
+ continue
+ else:
+ gIsFileMap[FilePath] = True
+ FilePath = PathClass(FilePath)
+ FullPathDependList.append(FilePath)
+ if FilePath not in DependencySet:
+ FileStack.append(FilePath)
+ break
+ else:
+ EdkLogger.debug(EdkLogger.DEBUG_9, "%s included by %s was not found "\
+ "in any given path:\n\t%s" % (Inc, F, "\n\t".join(SearchPathList)))
+
+ FileCache[F] = FullPathDependList
+ DependencySet.update(FullPathDependList)
+
+ DependencySet.update(ForceList)
+ if File in DependencySet:
+ DependencySet.remove(File)
+ DependencyList = list(DependencySet) # remove duplicate ones
+
+ return DependencyList
+
# This acts like the main() function for the script, unless it is 'import'ed into another script.
if __name__ == '__main__':
- pass
-
+ pass
\ No newline at end of file
diff --git a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
old mode 100644
new mode 100755
index 9ecf5c2dbe..613b0d2fb8
--- a/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
@@ -26,6 +26,8 @@ from Workspace.MetaFileCommentParser import UsageList
from .GenPcdDb import CreatePcdDatabaseCode
from Common.caching import cached_class_function
from AutoGen.ModuleAutoGenHelper import PlatformInfo,WorkSpaceInfo
+from AutoGen.CacheIR import ModuleBuildCacheIR
+import json

## Mapping Makefile type
gMakeTypeMap = {TAB_COMPILER_MSFT:"nmake", "GCC":"gmake"}
@@ -252,6 +254,8 @@ class ModuleAutoGen(AutoGen):
self.AutoGenDepSet = set()
self.ReferenceModules = []
self.ConstPcd = {}
+ self.Makefile = None
+ self.FileDependCache = {}

def __init_platform_info__(self):
pinfo = self.DataPipe.Get("P_Info")
@@ -1608,12 +1612,37 @@ class ModuleAutoGen(AutoGen):

self.IsAsBuiltInfCreated = True

+ def CacheCopyFile(self, OriginDir, CopyDir, File):
+ sub_dir = os.path.relpath(File, CopyDir)
+ destination_file = os.path.join(OriginDir, sub_dir)
+ destination_dir = os.path.dirname(destination_file)
+ CreateDirectory(destination_dir)
+ try:
+ CopyFileOnChange(File, destination_dir)
+ except:
+ EdkLogger.quiet("[cache warning]: fail to copy file:%s to folder:%s" % (File, destination_dir))
+ return
+
def CopyModuleToCache(self):
- FileDir = path.join(GlobalData.gBinCacheDest, self.PlatformInfo.Name, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+ self.GenPreMakefileHash(GlobalData.gCacheIR)
+ if not (self.MetaFile.Path, self.Arch) in GlobalData.gCacheIR or \
+ not GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate PreMakefileHash for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return False
+
+ self.GenMakeHash(GlobalData.gCacheIR)
+ if not (self.MetaFile.Path, self.Arch) in GlobalData.gCacheIR or \
+ not GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].MakeHashChain or \
+ not GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate MakeHashChain for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return False
+
+ MakeHashStr = str(GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest)
+ FileDir = path.join(GlobalData.gBinCacheDest, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName, MakeHashStr)
+ FfsDir = path.join(GlobalData.gBinCacheDest, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, TAB_FV_DIRECTORY, "Ffs", self.Guid + self.Name, MakeHashStr)
+
CreateDirectory (FileDir)
- HashFile = path.join(self.BuildDir, self.Name + '.hash')
- if os.path.exists(HashFile):
- CopyFileOnChange(HashFile, FileDir)
+ self.SaveHashChainFileToCache(GlobalData.gCacheIR)
ModuleFile = path.join(self.OutputDir, self.Name + '.inf')
if os.path.exists(ModuleFile):
CopyFileOnChange(ModuleFile, FileDir)
@@ -1631,38 +1660,73 @@ class ModuleAutoGen(AutoGen):
CreateDirectory(destination_dir)
CopyFileOnChange(File, destination_dir)

- def AttemptModuleCacheCopy(self):
- # If library or Module is binary do not skip by hash
- if self.IsBinaryModule:
+ def SaveHashChainFileToCache(self, gDict):
+ if not GlobalData.gBinCacheDest:
+ return False
+
+ self.GenPreMakefileHash(gDict)
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate PreMakefileHash for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return False
+
+ self.GenMakeHash(gDict)
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate MakeHashChain for module: %s[%s]" % (self.MetaFile.Path, self.Arch))
return False
- # .inc is contains binary information so do not skip by hash as well
- for f_ext in self.SourceFileList:
- if '.inc' in str(f_ext):
- return False
- FileDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.Name, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
- HashFile = path.join(FileDir, self.Name + '.hash')
- if os.path.exists(HashFile):
- f = open(HashFile, 'r')
- CacheHash = f.read()
- f.close()
- self.GenModuleHash()
- if GlobalData.gModuleHash[self.Arch][self.Name]:
- if CacheHash == GlobalData.gModuleHash[self.Arch][self.Name]:
- for root, dir, files in os.walk(FileDir):
- for f in files:
- if self.Name + '.hash' in f:
- CopyFileOnChange(HashFile, self.BuildDir)
- else:
- File = path.join(root, f)
- sub_dir = os.path.relpath(File, FileDir)
- destination_file = os.path.join(self.OutputDir, sub_dir)
- destination_dir = os.path.dirname(destination_file)
- CreateDirectory(destination_dir)
- CopyFileOnChange(File, destination_dir)
- if self.Name == "PcdPeim" or self.Name == "PcdDxe":
- CreatePcdDatabaseCode(self, TemplateString(), TemplateString())
- return True
- return False
+
+ # save the hash chain list as a cache file
+ MakeHashStr = str(GlobalData.gCacheIR[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest)
+ CacheDestDir = path.join(GlobalData.gBinCacheDest, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+ CacheHashDestDir = path.join(CacheDestDir, MakeHashStr)
+ ModuleHashPair = path.join(CacheDestDir, self.Name + ".ModuleHashPair")
+ MakeHashChain = path.join(CacheHashDestDir, self.Name + ".MakeHashChain")
+ ModuleFilesChain = path.join(CacheHashDestDir, self.Name + ".ModuleFilesChain")
+
+ # save the HashChainDict as a JSON file
+ CreateDirectory (CacheDestDir)
+ CreateDirectory (CacheHashDestDir)
+ try:
+ ModuleHashPairList = [] # tuple list: [tuple(PreMakefileHash, MakeHash)]
+ if os.path.exists(ModuleHashPair):
+ f = open(ModuleHashPair, 'r')
+ ModuleHashPairList = json.load(f)
+ f.close()
+ PreMakeHash = gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest
+ MakeHash = gDict[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest
+ ModuleHashPairList.append((PreMakeHash, MakeHash))
+ ModuleHashPairList = list(set(map(tuple, ModuleHashPairList)))
+ with open(ModuleHashPair, 'w') as f:
+ json.dump(ModuleHashPairList, f, indent=2)
+ except:
+ EdkLogger.quiet("[cache warning]: fail to save ModuleHashPair file in cache: %s" % ModuleHashPair)
+ return False
+
+ try:
+ with open(MakeHashChain, 'w') as f:
+ json.dump(gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain, f, indent=2)
+ except:
+ EdkLogger.quiet("[cache warning]: fail to save MakeHashChain file in cache: %s" % MakeHashChain)
+ return False
+
+ try:
+ with open(ModuleFilesChain, 'w') as f:
+ json.dump(gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesChain, f, indent=2)
+ except:
+ EdkLogger.quiet("[cache warning]: fail to save ModuleFilesChain file in cache: %s" % ModuleFilesChain)
+ return False
+
+ # save the AutoGen files and makefile for debugging
+ CacheDebugDir = path.join(CacheHashDestDir, "CacheDebug")
+ CreateDirectory (CacheDebugDir)
+ CopyFileOnChange(gDict[(self.MetaFile.Path, self.Arch)].MakefilePath, CacheDebugDir)
+ if gDict[(self.MetaFile.Path, self.Arch)].AutoGenFileList:
+ for File in gDict[(self.MetaFile.Path, self.Arch)].AutoGenFileList:
+ CopyFileOnChange(str(File), CacheDebugDir)
+
+ return True

## Create makefile for the module and its dependent libraries
#
@@ -1671,6 +1735,11 @@ class ModuleAutoGen(AutoGen):
#
@cached_class_function
def CreateMakeFile(self, CreateLibraryMakeFile=True, GenFfsList = []):
+ gDict = GlobalData.gCacheIR
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].CreateMakeFileDone:
+ return
+
# nest this function inside its only caller.
def CreateTimeStamp():
FileSet = {self.MetaFile.Path}
@@ -1701,8 +1770,8 @@ class ModuleAutoGen(AutoGen):
for LibraryAutoGen in self.LibraryAutoGenList:
LibraryAutoGen.CreateMakeFile()

- # Don't enable if hash feature enabled, CanSkip uses timestamps to determine build skipping
- if not GlobalData.gUseHashCache and self.CanSkip():
+ # CanSkip uses timestamps to determine build skipping
+ if self.CanSkip():
return

if len(self.CustomMakefile) == 0:
@@ -1718,6 +1787,24 @@ class ModuleAutoGen(AutoGen):

CreateTimeStamp()

+ MakefileType = Makefile._FileType
+ MakefileName = Makefile._FILE_NAME_[MakefileType]
+ MakefilePath = os.path.join(self.MakeFileDir, MakefileName)
+
+ MewIR = ModuleBuildCacheIR(self.MetaFile.Path, self.Arch)
+ MewIR.MakefilePath = MakefilePath
+ MewIR.DependencyHeaderFileSet = Makefile.DependencyHeaderFileSet
+ MewIR.CreateMakeFileDone = True
+ with GlobalData.file_lock:
+ try:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.MakefilePath = MakefilePath
+ IR.DependencyHeaderFileSet = Makefile.DependencyHeaderFileSet
+ IR.CreateMakeFileDone = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ except:
+ gDict[(self.MetaFile.Path, self.Arch)] = MewIR
+
def CopyBinaryFiles(self):
for File in self.Module.Binaries:
SrcPath = File.Path
@@ -1729,6 +1816,11 @@ class ModuleAutoGen(AutoGen):
# dependent libraries will be created
#
def CreateCodeFile(self, CreateLibraryCodeFile=True):
+ gDict = GlobalData.gCacheIR
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].CreateCodeFileDone:
+ return
+
if self.IsCodeFileCreated:
return

@@ -1744,8 +1836,9 @@ class ModuleAutoGen(AutoGen):
if not self.IsLibrary and CreateLibraryCodeFile:
for LibraryAutoGen in self.LibraryAutoGenList:
LibraryAutoGen.CreateCodeFile()
- # Don't enable if hash feature enabled, CanSkip uses timestamps to determine build skipping
- if not GlobalData.gUseHashCache and self.CanSkip():
+
+ # CanSkip uses timestamps to determine build skipping
+ if self.CanSkip():
return

AutoGenList = []
@@ -1785,6 +1878,16 @@ class ModuleAutoGen(AutoGen):
(" ".join(AutoGenList), " ".join(IgoredAutoGenList), self.Name, self.Arch))

self.IsCodeFileCreated = True
+ MewIR = ModuleBuildCacheIR(self.MetaFile.Path, self.Arch)
+ MewIR.CreateCodeFileDone = True
+ with GlobalData.file_lock:
+ try:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.CreateCodeFileDone = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ except:
+ gDict[(self.MetaFile.Path, self.Arch)] = MewIR
+
return AutoGenList

## Summarize the ModuleAutoGen objects of all libraries used by this module
@@ -1854,46 +1957,468 @@ class ModuleAutoGen(AutoGen):

return GlobalData.gModuleHash[self.Arch][self.Name].encode('utf-8')

+ def GenModuleFilesHash(self, gDict):
+ # Early exit if module or library has been hashed and is in memory
+ if (self.MetaFile.Path, self.Arch) in gDict:
+ if gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesChain:
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ DependencyFileSet = set()
+ # Add Module Meta file
+ DependencyFileSet.add(self.MetaFile)
+
+ # Add Module's source files
+ if self.SourceFileList:
+ for File in set(self.SourceFileList):
+ DependencyFileSet.add(File)
+
+ # Add the module's include header files
+ # Search dependency file list for each source file
+ SourceFileList = []
+ OutPutFileList = []
+ for Target in self.IntroTargetList:
+ SourceFileList.extend(Target.Inputs)
+ OutPutFileList.extend(Target.Outputs)
+ if OutPutFileList:
+ for Item in OutPutFileList:
+ if Item in SourceFileList:
+ SourceFileList.remove(Item)
+ SearchList = []
+ for file_path in self.IncludePathList + self.BuildOptionIncPathList:
+ # skip the folders in the platform BuildDir which have not been generated yet
+ if file_path.startswith(os.path.abspath(self.PlatformInfo.BuildDir)+os.sep):
+ continue
+ SearchList.append(file_path)
+ FileDependencyDict = {}
+ ForceIncludedFile = []
+ for F in SourceFileList:
+ # skip the files which have not been generated yet, because
+ # the SourceFileList usually contains intermediate build files, e.g. AutoGen.c
+ if not os.path.exists(F.Path):
+ continue
+ FileDependencyDict[F] = GenMake.GetDependencyList(self, self.FileDependCache, F, ForceIncludedFile, SearchList)
+
+ if FileDependencyDict:
+ for Dependency in FileDependencyDict.values():
+ DependencyFileSet.update(set(Dependency))
+
+ # Calculate the hash of all the dependency files above
+ # Initialize hash object
+ FileList = []
+ m = hashlib.md5()
+ for File in sorted(DependencyFileSet, key=lambda x: str(x)):
+ if not os.path.exists(str(File)):
+ EdkLogger.quiet("[cache warning]: header file %s is missing for module: %s[%s]" % (File, self.MetaFile.Path, self.Arch))
+ continue
+ f = open(str(File), 'rb')
+ Content = f.read()
+ f.close()
+ m.update(Content)
+ FileList.append((str(File), hashlib.md5(Content).hexdigest()))
+
+
+ MewIR = ModuleBuildCacheIR(self.MetaFile.Path, self.Arch)
+ MewIR.ModuleFilesHashDigest = m.digest()
+ MewIR.ModuleFilesHashHexDigest = m.hexdigest()
+ MewIR.ModuleFilesChain = FileList
+ with GlobalData.file_lock:
+ try:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.ModuleFilesHashDigest = m.digest()
+ IR.ModuleFilesHashHexDigest = m.hexdigest()
+ IR.ModuleFilesChain = FileList
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ except:
+ gDict[(self.MetaFile.Path, self.Arch)] = MewIR
+
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ def GenPreMakefileHash(self, gDict):
+ # Early exit if module or library has been hashed and is in memory
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest:
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ # skip binary module
+ if self.IsBinaryModule:
+ return
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest:
+ self.GenModuleFilesHash(gDict)
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest:
+ EdkLogger.quiet("[cache warning]: Cannot generate ModuleFilesHashDigest for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return
+
+ # Initialize hash object
+ m = hashlib.md5()
+
+ # Add Platform level hash
+ if ('PlatformHash') in gDict:
+ m.update(gDict[('PlatformHash')].encode('utf-8'))
+ else:
+ EdkLogger.quiet("[cache warning]: PlatformHash is missing")
+
+ # Add Package level hash
+ if self.DependentPackageList:
+ for Pkg in sorted(self.DependentPackageList, key=lambda x: x.PackageName):
+ if (Pkg.PackageName, 'PackageHash') in gDict:
+ m.update(gDict[(Pkg.PackageName, 'PackageHash')].encode('utf-8'))
+ else:
+ EdkLogger.quiet("[cache warning]: %s PackageHash needed by %s[%s] is missing" %(Pkg.PackageName, self.MetaFile.Name, self.Arch))
+
+ # Add Library hash
+ if self.LibraryAutoGenList:
+ for Lib in sorted(self.LibraryAutoGenList, key=lambda x: x.Name):
+ if not (Lib.MetaFile.Path, Lib.Arch) in gDict or \
+ not gDict[(Lib.MetaFile.Path, Lib.Arch)].ModuleFilesHashDigest:
+ Lib.GenPreMakefileHash(gDict)
+ m.update(gDict[(Lib.MetaFile.Path, Lib.Arch)].ModuleFilesHashDigest)
+
+ # Add Module self
+ m.update(gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest)
+
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.PreMakefileHashHexDigest = m.hexdigest()
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ def GenMakeHeaderFilesHash(self, gDict):
+ # Early exit if module or library has been hashed and is in memory
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashDigest:
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ # skip binary module
+ if self.IsBinaryModule:
+ return
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].CreateCodeFileDone:
+ if self.IsLibrary:
+ if (self.MetaFile.File,self.MetaFile.Root,self.Arch,self.MetaFile.Path) in GlobalData.libConstPcd:
+ self.ConstPcd = GlobalData.libConstPcd[(self.MetaFile.File,self.MetaFile.Root,self.Arch,self.MetaFile.Path)]
+ if (self.MetaFile.File,self.MetaFile.Root,self.Arch,self.MetaFile.Path) in GlobalData.Refes:
+ self.ReferenceModules = GlobalData.Refes[(self.MetaFile.File,self.MetaFile.Root,self.Arch,self.MetaFile.Path)]
+ self.CreateCodeFile()
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].CreateMakeFileDone:
+ self.CreateMakeFile(GenFfsList=GlobalData.FfsCmd.get((self.MetaFile.File, self.Arch),[]))
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].CreateCodeFileDone or \
+ not gDict[(self.MetaFile.Path, self.Arch)].CreateMakeFileDone:
+ EdkLogger.quiet("[cache warning]: Cannot create CodeFile or Makefile for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return
+
+ DependencyFileSet = set()
+ # Add Makefile
+ if gDict[(self.MetaFile.Path, self.Arch)].MakefilePath:
+ DependencyFileSet.add(gDict[(self.MetaFile.Path, self.Arch)].MakefilePath)
+ else:
+ EdkLogger.quiet("[cache warning]: makefile is missing for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+
+ # Add header files
+ if gDict[(self.MetaFile.Path, self.Arch)].DependencyHeaderFileSet:
+ for File in gDict[(self.MetaFile.Path, self.Arch)].DependencyHeaderFileSet:
+ DependencyFileSet.add(File)
+ else:
+ EdkLogger.quiet("[cache warning]: No dependency header found for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+
+ # Add AutoGen files
+ if self.AutoGenFileList:
+ for File in set(self.AutoGenFileList):
+ DependencyFileSet.add(File)
+
+ # Calculate the hash of all the dependency files above
+ # Initialize hash object
+ FileList = []
+ m = hashlib.md5()
+ for File in sorted(DependencyFileSet, key=lambda x: str(x)):
+ if not os.path.exists(str(File)):
+ EdkLogger.quiet("[cache warning]: header file: %s doesn't exist for module: %s[%s]" % (File, self.MetaFile.Path, self.Arch))
+ continue
+ f = open(str(File), 'rb')
+ Content = f.read()
+ f.close()
+ m.update(Content)
+ FileList.append((str(File), hashlib.md5(Content).hexdigest()))
+
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.AutoGenFileList = self.AutoGenFileList.keys()
+ IR.MakeHeaderFilesHashChain = FileList
+ IR.MakeHeaderFilesHashDigest = m.digest()
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ def GenMakeHash(self, gDict):
+ # Early exit if module or library has been hashed and is in memory
+ if (self.MetaFile.Path, self.Arch) in gDict and \
+ gDict[(self.MetaFile.Path, self.Arch)].MakeHashChain:
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ # skip binary module
+ if self.IsBinaryModule:
+ return
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest:
+ self.GenModuleFilesHash(gDict)
+ if not gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashDigest:
+ self.GenMakeHeaderFilesHash(gDict)
+
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest or \
+ not gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesChain or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashDigest or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashChain:
+ EdkLogger.quiet("[cache warning]: Cannot generate ModuleFilesHash or MakeHeaderFilesHash for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return
+
+ # Initialize hash object
+ m = hashlib.md5()
+ MakeHashChain = []
+
+ # Add hash of makefile and dependency header files
+ m.update(gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashDigest)
+ New = list(set(gDict[(self.MetaFile.Path, self.Arch)].MakeHeaderFilesHashChain) - set(MakeHashChain))
+ New.sort(key=lambda x: str(x))
+ MakeHashChain += New
+
+ # Add Library hash
+ if self.LibraryAutoGenList:
+ for Lib in sorted(self.LibraryAutoGenList, key=lambda x: x.Name):
+ if not (Lib.MetaFile.Path, Lib.Arch) in gDict or \
+ not gDict[(Lib.MetaFile.Path, Lib.Arch)].MakeHashChain:
+ Lib.GenMakeHash(gDict)
+ if not gDict[(Lib.MetaFile.Path, Lib.Arch)].MakeHashDigest:
+ print("Cannot generate MakeHash for lib module:", Lib.MetaFile.Path, Lib.Arch)
+ continue
+ m.update(gDict[(Lib.MetaFile.Path, Lib.Arch)].MakeHashDigest)
+ New = list(set(gDict[(Lib.MetaFile.Path, Lib.Arch)].MakeHashChain) - set(MakeHashChain))
+ New.sort(key=lambda x: str(x))
+ MakeHashChain += New
+
+ # Add Module self
+ m.update(gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesHashDigest)
+ New = list(set(gDict[(self.MetaFile.Path, self.Arch)].ModuleFilesChain) - set(MakeHashChain))
+ New.sort(key=lambda x: str(x))
+ MakeHashChain += New
+
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.MakeHashDigest = m.digest()
+ IR.MakeHashHexDigest = m.hexdigest()
+ IR.MakeHashChain = MakeHashChain
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+
+ return gDict[(self.MetaFile.Path, self.Arch)]
+
+ ## Decide whether we can skip the remaining autogen and make process
+ def CanSkipbyPreMakefileCache(self, gDict):
+ if not GlobalData.gBinCacheSource:
+ return False
+
+ # If Module is binary, do not skip by cache
+ if self.IsBinaryModule:
+ return False
+
+ # .inc files contain binary information, so do not skip by hash either
+ for f_ext in self.SourceFileList:
+ if '.inc' in str(f_ext):
+ return False
+
+ # Get the module hash values from the stored cache and the current build,
+ # then check whether the cache hits based on those hash values;
+ # if it hits, restore all the files from the cache
+ FileDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+ FfsDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, TAB_FV_DIRECTORY, "Ffs", self.Guid + self.Name)
+
+ ModuleHashPairList = [] # tuple list: [tuple(PreMakefileHash, MakeHash)]
+ ModuleHashPair = path.join(FileDir, self.Name + ".ModuleHashPair")
+ if not os.path.exists(ModuleHashPair):
+ EdkLogger.quiet("[cache warning]: Cannot find ModuleHashPair file: %s" % ModuleHashPair)
+ return False
+
+ try:
+ f = open(ModuleHashPair, 'r')
+ ModuleHashPairList = json.load(f)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache warning]: fail to load ModuleHashPair file: %s" % ModuleHashPair)
+ return False
+
+ self.GenPreMakefileHash(gDict)
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest:
+ EdkLogger.quiet("[cache warning]: PreMakefileHashHexDigest is missing for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return False
+
+ MakeHashStr = None
+ CurrentPreMakeHash = gDict[(self.MetaFile.Path, self.Arch)].PreMakefileHashHexDigest
+ for idx, (PreMakefileHash, MakeHash) in enumerate (ModuleHashPairList):
+ if PreMakefileHash == CurrentPreMakeHash:
+ MakeHashStr = str(MakeHash)
+
+ if not MakeHashStr:
+ return False
+
+ TargetHashDir = path.join(FileDir, MakeHashStr)
+ TargetFfsHashDir = path.join(FfsDir, MakeHashStr)
+
+ if not os.path.exists(TargetHashDir):
+ EdkLogger.quiet("[cache warning]: Cache folder is missing: %s" % TargetHashDir)
+ return False
+
+ for root, dir, files in os.walk(TargetHashDir):
+ for f in files:
+ File = path.join(root, f)
+ self.CacheCopyFile(self.OutputDir, TargetHashDir, File)
+ if os.path.exists(TargetFfsHashDir):
+ for root, dir, files in os.walk(TargetFfsHashDir):
+ for f in files:
+ File = path.join(root, f)
+ self.CacheCopyFile(self.FfsOutputDir, TargetFfsHashDir, File)
+
+ if self.Name == "PcdPeim" or self.Name == "PcdDxe":
+ CreatePcdDatabaseCode(self, TemplateString(), TemplateString())
+
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.PreMakeCacheHit = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ print("[cache hit]: checkpoint_PreMakefile:", self.MetaFile.Path, self.Arch)
+ #EdkLogger.quiet("cache hit: %s[%s]" % (self.MetaFile.Path, self.Arch))
+ return True
+
+ ## Decide whether we can skip the make process
+ def CanSkipbyMakeCache(self, gDict):
+ if not GlobalData.gBinCacheSource:
+ return False
+
+ # If Module is binary, do not skip by cache
+ if self.IsBinaryModule:
+ print("[cache miss]: checkpoint_Makefile: binary module:", self.MetaFile.Path, self.Arch)
+ return False
+
+ # .inc files contain binary information, so do not skip by hash either
+ for f_ext in self.SourceFileList:
+ if '.inc' in str(f_ext):
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.MakeCacheHit = False
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ print("[cache miss]: checkpoint_Makefile: .inc module:", self.MetaFile.Path, self.Arch)
+ return False
+
+ # Get the module hash values from the stored cache and the current build,
+ # then check whether the cache hits based on those hash values;
+ # if it hits, restore all the files from the cache
+ FileDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, self.Arch, self.SourceDir, self.MetaFile.BaseName)
+ FfsDir = path.join(GlobalData.gBinCacheSource, self.PlatformInfo.OutputDir, self.BuildTarget + "_" + self.ToolChain, TAB_FV_DIRECTORY, "Ffs", self.Guid + self.Name)
+
+ ModuleHashPairList = [] # tuple list: [tuple(PreMakefileHash, MakeHash)]
+ ModuleHashPair = path.join(FileDir, self.Name + ".ModuleHashPair")
+ if not os.path.exists(ModuleHashPair):
+ EdkLogger.quiet("[cache warning]: Cannot find ModuleHashPair file: %s" % ModuleHashPair)
+ return False
+
+ try:
+ f = open(ModuleHashPair, 'r')
+ ModuleHashPairList = json.load(f)
+ f.close()
+ except:
+ EdkLogger.quiet("[cache warning]: fail to load ModuleHashPair file: %s" % ModuleHashPair)
+ return False
+
+ self.GenMakeHash(gDict)
+ if not (self.MetaFile.Path, self.Arch) in gDict or \
+ not gDict[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest:
+ EdkLogger.quiet("[cache warning]: MakeHashHexDigest is missing for module %s[%s]" %(self.MetaFile.Path, self.Arch))
+ return False
+
+ MakeHashStr = None
+ CurrentMakeHash = gDict[(self.MetaFile.Path, self.Arch)].MakeHashHexDigest
+ for idx, (PreMakefileHash, MakeHash) in enumerate (ModuleHashPairList):
+ if MakeHash == CurrentMakeHash:
+ MakeHashStr = str(MakeHash)
+
+ if not MakeHashStr:
+ print("[cache miss]: checkpoint_Makefile:", self.MetaFile.Path, self.Arch)
+ return False
+
+ TargetHashDir = path.join(FileDir, MakeHashStr)
+ TargetFfsHashDir = path.join(FfsDir, MakeHashStr)
+ if not os.path.exists(TargetHashDir):
+ EdkLogger.quiet("[cache warning]: Cache folder is missing: %s" % TargetHashDir)
+ return False
+
+ for root, dir, files in os.walk(TargetHashDir):
+ for f in files:
+ File = path.join(root, f)
+ self.CacheCopyFile(self.OutputDir, TargetHashDir, File)
+
+ if os.path.exists(TargetFfsHashDir):
+ for root, dir, files in os.walk(TargetFfsHashDir):
+ for f in files:
+ File = path.join(root, f)
+ self.CacheCopyFile(self.FfsOutputDir, TargetFfsHashDir, File)
+
+ if self.Name == "PcdPeim" or self.Name == "PcdDxe":
+ CreatePcdDatabaseCode(self, TemplateString(), TemplateString())
+ with GlobalData.file_lock:
+ IR = gDict[(self.MetaFile.Path, self.Arch)]
+ IR.MakeCacheHit = True
+ gDict[(self.MetaFile.Path, self.Arch)] = IR
+ print("[cache hit]: checkpoint_Makefile:", self.MetaFile.Path, self.Arch)
+ return True
+
## Decide whether we can skip the ModuleAutoGen process
- def CanSkipbyHash(self):
+ def CanSkipbyCache(self, gDict):
# Hashing feature is off
- if not GlobalData.gUseHashCache:
+ if not GlobalData.gBinCacheSource:
return False

- # Initialize a dictionary for each arch type
- if self.Arch not in GlobalData.gBuildHashSkipTracking:
- GlobalData.gBuildHashSkipTracking[self.Arch] = dict()
+ if self in GlobalData.gBuildHashSkipTracking:
+ return GlobalData.gBuildHashSkipTracking[self]

# If library or Module is binary do not skip by hash
if self.IsBinaryModule:
+ GlobalData.gBuildHashSkipTracking[self] = False
return False

# .inc files contain binary information, so do not skip by hash either
for f_ext in self.SourceFileList:
if '.inc' in str(f_ext):
+ GlobalData.gBuildHashSkipTracking[self] = False
return False

- # Use Cache, if exists and if Module has a copy in cache
- if GlobalData.gBinCacheSource and self.AttemptModuleCacheCopy():
+ if not (self.MetaFile.Path, self.Arch) in gDict:
+ return False
+
+ if gDict[(self.MetaFile.Path, self.Arch)].PreMakeCacheHit:
+ GlobalData.gBuildHashSkipTracking[self] = True
return True

- # Early exit for libraries that haven't yet finished building
- HashFile = path.join(self.BuildDir, self.Name + ".hash")
- if self.IsLibrary and not os.path.exists(HashFile):
- return False
+ if gDict[(self.MetaFile.Path, self.Arch)].MakeCacheHit:
+ GlobalData.gBuildHashSkipTracking[self] = True
+ return True

- # Return a Boolean based on if can skip by hash, either from memory or from IO.
- if self.Name not in GlobalData.gBuildHashSkipTracking[self.Arch]:
- # If hashes are the same, SaveFileOnChange() will return False.
- GlobalData.gBuildHashSkipTracking[self.Arch][self.Name] = not SaveFileOnChange(HashFile, self.GenModuleHash(), True)
- return GlobalData.gBuildHashSkipTracking[self.Arch][self.Name]
- else:
- return GlobalData.gBuildHashSkipTracking[self.Arch][self.Name]
+ return False

## Decide whether we can skip the ModuleAutoGen process
# If any source file is newer than the module then we cannot skip
#
def CanSkip(self):
+ # Don't skip if the cache feature is enabled
+ if GlobalData.gUseHashCache or GlobalData.gBinCacheDest or GlobalData.gBinCacheSource:
+ return False
if self.MakeFileDir in GlobalData.gSikpAutoGenCache:
return True
if not os.path.exists(self.TimeStampPath):
diff --git a/BaseTools/Source/Python/Common/GlobalData.py b/BaseTools/Source/Python/Common/GlobalData.py
old mode 100644
new mode 100755
index bd45a43728..452dca32f0
--- a/BaseTools/Source/Python/Common/GlobalData.py
+++ b/BaseTools/Source/Python/Common/GlobalData.py
@@ -119,3 +119,12 @@ gModuleBuildTracking = dict()
# Top Dict: Key: Arch Type Value: Dictionary
# Second Dict: Key: Module\Library Name Value: True\False
gBuildHashSkipTracking = dict()
+
+# Common dictionary to share module cache intermediate results and state
+gCacheIR = None
+# Common lock for file access across the multi-process AutoGens
+file_lock = None
+# Common dictionary to share platform libraries' constant Pcd
+libConstPcd = None
+# Common dictionary to share platform libraries' reference info
+Refes = None
\ No newline at end of file
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
old mode 100644
new mode 100755
index 4bfa54666b..d7c817b95c
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -595,7 +595,7 @@ class BuildTask:
#
def AddDependency(self, Dependency):
for Dep in Dependency:
- if not Dep.BuildObject.IsBinaryModule and not Dep.BuildObject.CanSkipbyHash():
+ if not Dep.BuildObject.IsBinaryModule and not Dep.BuildObject.CanSkipbyCache(GlobalData.gCacheIR):
self.DependencyList.append(BuildTask.New(Dep)) # BuildTask list

## The thread wrapper of LaunchCommand function
@@ -811,7 +811,7 @@ class Build():
self.AutoGenMgr = None
EdkLogger.info("")
os.chdir(self.WorkspaceDir)
- self.share_data = Manager().dict()
+ GlobalData.gCacheIR = Manager().dict()
self.log_q = log_q
def StartAutoGen(self,mqueue, DataPipe,SkipAutoGen,PcdMaList,share_data):
try:
@@ -820,6 +820,13 @@ class Build():
feedback_q = mp.Queue()
file_lock = mp.Lock()
error_event = mp.Event()
+ GlobalData.file_lock = file_lock
+ FfsCmd = DataPipe.Get("FfsCommand")
+ if FfsCmd is None:
+ FfsCmd = {}
+ GlobalData.FfsCmd = FfsCmd
+ GlobalData.libConstPcd = DataPipe.Get("LibConstPcd")
+ GlobalData.Refes = DataPipe.Get("REFS")
auto_workers = [AutoGenWorkerInProcess(mqueue,DataPipe.dump_file,feedback_q,file_lock,share_data,self.log_q,error_event) for _ in range(self.ThreadNumber)]
self.AutoGenMgr = AutoGenManager(auto_workers,feedback_q,error_event)
self.AutoGenMgr.start()
@@ -827,14 +834,28 @@ class Build():
w.start()
if PcdMaList is not None:
for PcdMa in PcdMaList:
+ if GlobalData.gBinCacheSource and self.Target in [None, "", "all"]:
+ PcdMa.GenModuleFilesHash(share_data)
+ PcdMa.GenPreMakefileHash(share_data)
+ if PcdMa.CanSkipbyPreMakefileCache(share_data):
+ continue
+
PcdMa.CreateCodeFile(False)
PcdMa.CreateMakeFile(False,GenFfsList = DataPipe.Get("FfsCommand").get((PcdMa.MetaFile.File, PcdMa.Arch),[]))

+ if GlobalData.gBinCacheSource and self.Target in [None, "", "all"]:
+ PcdMa.GenMakeHeaderFilesHash(share_data)
+ PcdMa.GenMakeHash(share_data)
+ if PcdMa.CanSkipbyMakeCache(share_data):
+ continue
+
self.AutoGenMgr.join()
rt = self.AutoGenMgr.Status
return rt, 0
- except Exception as e:
- return False,e.errcode
+ except FatalError as e:
+ return False, e.args[0]
+ except:
+ return False, UNKNOWN_ERROR

## Load configuration
#
@@ -1199,10 +1220,11 @@ class Build():
mqueue.put(m)

AutoGenObject.DataPipe.DataContainer = {"FfsCommand":FfsCommand}
+ AutoGenObject.DataPipe.DataContainer = {"CommandTarget": self.Target}
self.Progress.Start("Generating makefile and code")
data_pipe_file = os.path.join(AutoGenObject.BuildDir, "GlobalVar_%s_%s.bin" % (str(AutoGenObject.Guid),AutoGenObject.Arch))
AutoGenObject.DataPipe.dump(data_pipe_file)
- autogen_rt, errorcode = self.StartAutoGen(mqueue, AutoGenObject.DataPipe, self.SkipAutoGen, PcdMaList,self.share_data)
+ autogen_rt,errorcode = self.StartAutoGen(mqueue, AutoGenObject.DataPipe, self.SkipAutoGen, PcdMaList, GlobalData.gCacheIR)
self.Progress.Stop("done!")
if not autogen_rt:
self.AutoGenMgr.TerminateWorkers()
@@ -1799,6 +1821,15 @@ class Build():
CmdListDict = None
if GlobalData.gEnableGenfdsMultiThread and self.Fdf:
CmdListDict = self._GenFfsCmd(Wa.ArchList)
+
+ # Add Platform and Package level hash in share_data for module hash calculation later
+ if GlobalData.gBinCacheSource or GlobalData.gBinCacheDest:
+ GlobalData.gCacheIR[('PlatformHash')] = GlobalData.gPlatformHash
+ for PkgName in GlobalData.gPackageHash.keys():
+ GlobalData.gCacheIR[(PkgName, 'PackageHash')] = GlobalData.gPackageHash[PkgName]
+ GlobalData.file_lock = mp.Lock()
+ GlobalData.FfsCmd = CmdListDict
+
self.Progress.Stop("done!")
MaList = []
ExitFlag = threading.Event()
@@ -1808,20 +1839,23 @@ class Build():
AutoGenStart = time.time()
GlobalData.gGlobalDefines['ARCH'] = Arch
Pa = PlatformAutoGen(Wa, self.PlatformFile, BuildTarget, ToolChain, Arch)
+ GlobalData.libConstPcd = Pa.DataPipe.Get("LibConstPcd")
+ GlobalData.Refes = Pa.DataPipe.Get("REFS")
for Module in Pa.Platform.Modules:
if self.ModuleFile.Dir == Module.Dir and self.ModuleFile.Name == Module.Name:
Ma = ModuleAutoGen(Wa, Module, BuildTarget, ToolChain, Arch, self.PlatformFile,Pa.DataPipe)
if Ma is None:
continue
MaList.append(Ma)
- if Ma.CanSkipbyHash():
- self.HashSkipModules.append(Ma)
- if GlobalData.gBinCacheSource:
- EdkLogger.quiet("cache hit: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
- continue
- else:
- if GlobalData.gBinCacheSource:
- EdkLogger.quiet("cache miss: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
+
+ if GlobalData.gBinCacheSource and self.Target in [None, "", "all"]:
+ Ma.GenModuleFilesHash(GlobalData.gCacheIR)
+ Ma.GenPreMakefileHash(GlobalData.gCacheIR)
+ if Ma.CanSkipbyPreMakefileCache(GlobalData.gCacheIR):
+ self.HashSkipModules.append(Ma)
+ EdkLogger.quiet("cache hit: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
+ continue
+
# Not to auto-gen for targets 'clean', 'cleanlib', 'cleanall', 'run', 'fds'
if self.Target not in ['clean', 'cleanlib', 'cleanall', 'run', 'fds']:
# for target which must generate AutoGen code and makefile
@@ -1841,6 +1875,18 @@ class Build():
self.Progress.Stop("done!")
if self.Target == "genmake":
return True
+
+ if GlobalData.gBinCacheSource and self.Target in [None, "", "all"]:
+ Ma.GenMakeHeaderFilesHash(GlobalData.gCacheIR)
+ Ma.GenMakeHash(GlobalData.gCacheIR)
+ if Ma.CanSkipbyMakeCache(GlobalData.gCacheIR):
+ self.HashSkipModules.append(Ma)
+ EdkLogger.quiet("cache hit: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
+ continue
+ else:
+ EdkLogger.quiet("cache miss: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
+ Ma.PrintFirstMakeCacheMissFile(GlobalData.gCacheIR)
+
self.BuildModules.append(Ma)
# Initialize all modules in tracking to 'FAIL'
if Ma.Arch not in GlobalData.gModuleBuildTracking:
@@ -1985,11 +2031,18 @@ class Build():
if GlobalData.gEnableGenfdsMultiThread and self.Fdf:
CmdListDict = self._GenFfsCmd(Wa.ArchList)

+ # Add Platform and Package level hash in share_data for module hash calculation later
+ if GlobalData.gBinCacheSource or GlobalData.gBinCacheDest:
+ GlobalData.gCacheIR[('PlatformHash')] = GlobalData.gPlatformHash
+ for PkgName in GlobalData.gPackageHash.keys():
+ GlobalData.gCacheIR[(PkgName, 'PackageHash')] = GlobalData.gPackageHash[PkgName]
+
# multi-thread exit flag
ExitFlag = threading.Event()
ExitFlag.clear()
self.AutoGenTime += int(round((time.time() - WorkspaceAutoGenTime)))
self.BuildModules = []
+ TotalModules = []
for Arch in Wa.ArchList:
PcdMaList = []
AutoGenStart = time.time()
@@ -2009,6 +2062,7 @@ class Build():
ModuleList.append(Inf)
Pa.DataPipe.DataContainer = {"FfsCommand":CmdListDict}
Pa.DataPipe.DataContainer = {"Workspace_timestamp": Wa._SrcTimeStamp}
+ Pa.DataPipe.DataContainer = {"CommandTarget": self.Target}
for Module in ModuleList:
# Get ModuleAutoGen object to generate C code file and makefile
Ma = ModuleAutoGen(Wa, Module, BuildTarget, ToolChain, Arch, self.PlatformFile,Pa.DataPipe)
@@ -2019,30 +2073,34 @@ class Build():
Ma.PlatformInfo = Pa
Ma.Workspace = Wa
PcdMaList.append(Ma)
- if Ma.CanSkipbyHash():
- self.HashSkipModules.append(Ma)
- if GlobalData.gBinCacheSource:
- EdkLogger.quiet("cache hit: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
- continue
- else:
- if GlobalData.gBinCacheSource:
- EdkLogger.quiet("cache miss: %s[%s]" % (Ma.MetaFile.Path, Ma.Arch))
-
- # Not to auto-gen for targets 'clean', 'cleanlib', 'cleanall', 'run', 'fds'
- # for target which must generate AutoGen code and makefile
-
- self.BuildModules.append(Ma)
+ TotalModules.append(Ma)
# Initialize all modules in tracking to 'FAIL'
if Ma.Arch not in GlobalData.gModuleBuildTracking:
GlobalData.gModuleBuildTracking[Ma.Arch] = dict()
if Ma not in GlobalData.gModuleBuildTracking[Ma.Arch]:
GlobalData.gModuleBuildTracking[Ma.Arch][Ma] = 'FAIL'
+
mqueue = mp.Queue()
for m in Pa.GetAllModuleInfo:
mqueue.put(m)
data_pipe_file = os.path.join(Pa.BuildDir, "GlobalVar_%s_%s.bin" % (str(Pa.Guid),Pa.Arch))
Pa.DataPipe.dump(data_pipe_file)
- autogen_rt, errorcode = self.StartAutoGen(mqueue, Pa.DataPipe, self.SkipAutoGen, PcdMaList,self.share_data)
+ autogen_rt, errorcode = self.StartAutoGen(mqueue, Pa.DataPipe, self.SkipAutoGen, PcdMaList, GlobalData.gCacheIR)
+
+ # Skip cache hit modules
+ if GlobalData.gBinCacheSource:
+ for Ma in TotalModules:
+ if (Ma.MetaFile.Path, Ma.Arch) in GlobalData.gCacheIR and \
+ GlobalData.gCacheIR[(Ma.MetaFile.Path, Ma.Arch)].PreMakeCacheHit:
+ self.HashSkipModules.append(Ma)
+ continue
+ if (Ma.MetaFile.Path, Ma.Arch) in GlobalData.gCacheIR and \
+ GlobalData.gCacheIR[(Ma.MetaFile.Path, Ma.Arch)].MakeCacheHit:
+ self.HashSkipModules.append(Ma)
+ continue
+ self.BuildModules.append(Ma)
+ else:
+ self.BuildModules.extend(TotalModules)

if not autogen_rt:
self.AutoGenMgr.TerminateWorkers()
@@ -2050,9 +2108,24 @@ class Build():
raise FatalError(errorcode)
self.AutoGenTime += int(round((time.time() - AutoGenStart)))
self.Progress.Stop("done!")
+
+ if GlobalData.gBinCacheSource:
+ EdkLogger.quiet("Total cache hit driver num: %s, cache miss driver num: %s" % (len(set(self.HashSkipModules)), len(set(self.BuildModules))))
+ CacheHitMa = set()
+ CacheNotHitMa = set()
+ for IR in GlobalData.gCacheIR.keys():
+ if 'PlatformHash' in IR or 'PackageHash' in IR:
+ continue
+ if GlobalData.gCacheIR[IR].PreMakeCacheHit or GlobalData.gCacheIR[IR].MakeCacheHit:
+ CacheHitMa.add(IR)
+ else:
+ # There might be binary modules or modules with .inc files; these do not count as cache misses
+ CacheNotHitMa.add(IR)
+ EdkLogger.quiet("Total module num: %s, cache hit module num: %s" % (len(CacheHitMa)+len(CacheNotHitMa), len(CacheHitMa)))
+
for Arch in Wa.ArchList:
MakeStart = time.time()
- for Ma in self.BuildModules:
+ for Ma in set(self.BuildModules):
# Generate build task for the module
if not Ma.IsBinaryModule:
Bt = BuildTask.New(ModuleMakeUnit(Ma, Pa.BuildCommand,self.Target))
--
2.17.1
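
To make the on-disk cache format above easier to follow, here is a minimal
sketch of the ModuleHashPair round-trip that SaveHashChainFileToCache()
writes and CanSkipbyPreMakefileCache()/CanSkipbyMakeCache() read back. The
file name and the pair layout follow the patch; the helper functions
themselves are illustrative and not part of BaseTools.

import json
import os

def load_module_hash_pairs(cache_dir, module_name):
    # <CacheDir>/<Name>.ModuleHashPair holds a JSON list of
    # (PreMakefileHash, MakeHash) pairs, one per cached build variant.
    pair_file = os.path.join(cache_dir, module_name + ".ModuleHashPair")
    if not os.path.exists(pair_file):
        return []
    with open(pair_file, 'r') as f:
        return [tuple(pair) for pair in json.load(f)]

def find_cached_make_hash(cache_dir, module_name, pre_makefile_hash):
    # PreMakefile checkpoint: a PreMakefileHash match selects the
    # MakeHash-named subdirectory to restore the module binaries from.
    for pre_hash, make_hash in load_module_hash_pairs(cache_dir, module_name):
        if pre_hash == pre_makefile_hash:
            return make_hash
    return None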

[PATCH v6 0/5] Build cache enhancement

Steven Shi
 

From: "Shi, Steven" <steven.shi@...>

This patch set is for the 201908 stable tag

Enhance the edk2 build cache with the patches below:
Patch 01/05: Improve the cache hit rate through a new cache checkpoint and hash algorithm (a simplified flow is sketched after this list)
Patch 02/05: Print more info to explain why a module misses the build cache
Patch 03/05: Fix the unsafe [self.Arch][self.Name] key usage in the build cache
Patch 04/05: Add GenFds multi-thread support in the build cache
Patch 05/05: Improve the reliability of the file saving and copying functions in the build cache
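
For orientation, a simplified sketch (in Python, like the patch itself) of
the per-module flow with the two new checkpoints; Ma stands in for a
ModuleAutoGen object, gDict for the shared GlobalData.gCacheIR dictionary,
and the surrounding driver loop (including the GenFfsList argument) is
reduced to its essentials:

def try_build_with_cache(Ma, gDict):
    # Checkpoint 1: before autogen, based on the source, meta and
    # library file hashes.
    Ma.GenModuleFilesHash(gDict)
    Ma.GenPreMakefileHash(gDict)
    if Ma.CanSkipbyPreMakefileCache(gDict):
        return "pre-make cache hit"

    Ma.CreateCodeFile(False)
    Ma.CreateMakeFile(False)

    # Checkpoint 2: before make, additionally covering the generated
    # makefile, AutoGen files and dependency headers.
    Ma.GenMakeHeaderFilesHash(gDict)
    Ma.GenMakeHash(gDict)
    if Ma.CanSkipbyMakeCache(gDict):
        return "make cache hit"
    return "cache miss, full build"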


You can directly try this patch set in the branch:
https://github.com/shijunjing/edk2/tree/build_cache_improve_v6_3

V6:
In patch 5, add error handling to skip the hash calculation if the module
cache is found to be already corrupted

V5:
Fix the method name typo in Misc.py from EdkLogger.quite() to EdkLogger.quiet()

V4:
Change the single global lock into two locks, cache_lock and file_lock,
for better cache performance and IO reliability on Windows

V3:
Add patch 5 to improve the reliability of the autogen CopyFileOnChange()
and SaveFileOnChange() functions for the build cache

V2:
Enhance the SaveHashChainFileToCache() function in ModuleAutoGen.py so that
f.close() no longer needs to be called in the "with open(xxx) as f:" block;
the with block closes the file automatically (see the short example after
this version history)

V1:
Initial patch set

Shi, Steven (5):
BaseTools: Improve the cache hit in the edk2 build cache
BaseTools: Print first cache missing file for build cache
BaseTools: Change the [Arch][Name] module key in Build cache
BaseTools: Add GenFds multi-thread support in build cache
BaseTools: Improve the file saving and copying reliability

.../Source/Python/AutoGen/AutoGenWorker.py | 27 +-
BaseTools/Source/Python/AutoGen/CacheIR.py | 29 +
BaseTools/Source/Python/AutoGen/DataPipe.py | 6 +
BaseTools/Source/Python/AutoGen/GenC.py | 0
BaseTools/Source/Python/AutoGen/GenMake.py | 233 +++---
.../Source/Python/AutoGen/ModuleAutoGen.py | 791 ++++++++++++++++--
BaseTools/Source/Python/Common/GlobalData.py | 11 +
BaseTools/Source/Python/Common/Misc.py | 44 +-
BaseTools/Source/Python/build/build.py | 182 ++--
9 files changed, 1073 insertions(+), 250 deletions(-)
mode change 100644 => 100755 BaseTools/Source/Python/AutoGen/AutoGenWorker.py
create mode 100755 BaseTools/Source/Python/AutoGen/CacheIR.py
mode change 100644 => 100755 BaseTools/Source/Python/AutoGen/DataPipe.py
mode change 100644 => 100755 BaseTools/Source/Python/AutoGen/GenC.py
mode change 100644 => 100755 BaseTools/Source/Python/AutoGen/GenMake.py
mode change 100644 => 100755 BaseTools/Source/Python/AutoGen/ModuleAutoGen.py
mode change 100644 => 100755 BaseTools/Source/Python/Common/GlobalData.py
mode change 100644 => 100755 BaseTools/Source/Python/Common/Misc.py
mode change 100644 => 100755 BaseTools/Source/Python/build/build.py

--
2.17.1

[PATCH v1 11/11] ShellPkg: acpiview: DBG2: Validate global pointers before use

Krzysztof Koch
 

Check if global (in the scope of the DBG2 parser) pointers have been
successfully updated before they are used for further table parsing.

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Test against NULL pointers [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Dbg2/Dbg2Parser.c | 43 ++++++++++++++++++++
1 file changed, 43 insertions(+)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Dbg2/Dbg2Parser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Dbg2/Dbg2Parser.c
index 869e700b9beda4886bf7bc5ae4ced3ab9a59efa3..0f730a306a94329a23fbaf54b59f1833b44616ba 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Dbg2/Dbg2Parser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Dbg2/Dbg2Parser.c
@@ -123,6 +123,24 @@ DumpDbgDeviceInfo (
PARSER_PARAMS (DbgDevInfoParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((GasCount == NULL) ||
+ (NameSpaceStringLength == NULL) ||
+ (NameSpaceStringOffset == NULL) ||
+ (OEMDataLength == NULL) ||
+ (OEMDataOffset == NULL) ||
+ (BaseAddrRegOffset == NULL) ||
+ (AddrSizeOffset == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient Debug Device Information Structure length. " \
+ L"Length = %d.\n",
+ Length
+ );
+ return;
+ }
+
// GAS
Index = 0;
Offset = *BaseAddrRegOffset;
@@ -224,6 +242,18 @@ ParseAcpiDbg2 (
PARSER_PARAMS (Dbg2Parser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((OffsetDbgDeviceInfo == NULL) ||
+ (NumberDbgDeviceInfo == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient table length. AcpiTableLength = %d\n",
+ AcpiTableLength
+ );
+ return;
+ }
+
Offset = *OffsetDbgDeviceInfo;
Index = 0;

@@ -239,6 +269,19 @@ ParseAcpiDbg2 (
PARSER_PARAMS (DbgDevInfoHeaderParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if (DbgDevInfoLen == NULL) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient remaining table buffer length to read the " \
+ L"Debug Device Information structure's 'Length' field. " \
+ L"RemainingTableBufferLength = %d.\n",
+ AcpiTableLength - Offset
+ );
+ return;
+ }
+
// Make sure the Debug Device Information structure lies inside the table.
if ((Offset + *DbgDevInfoLen) > AcpiTableLength) {
IncrementErrorCount ();
--

[PATCH v1 07/11] ShellPkg: acpiview: MADT: Validate global pointers before use

Krzysztof Koch
 

Check if the MadtInterruptControllerType and
MadtInterruptControllerLength pointers have been successfully updated
before they are used for further table parsing.

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Test against NULL pointers [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Madt/MadtParser.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Madt/MadtParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Madt/MadtParser.c
index 90bdafea1970db522e8ed96de7c6e986cdaca5ba..438905cb24f58b8b82e8fe61280e72f765d578d8 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Madt/MadtParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Madt/MadtParser.c
@@ -260,6 +260,19 @@ ParseAcpiMadt (
PARSER_PARAMS (MadtInterruptControllerHeaderParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((MadtInterruptControllerType == NULL) ||
+ (MadtInterruptControllerLength == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient remaining table buffer length to read the " \
+ L"Interrupt Controller Structure header. Length = %d.\n",
+ AcpiTableLength - Offset
+ );
+ return;
+ }
+
// Make sure forward progress is made.
if (*MadtInterruptControllerLength < 2) {
IncrementErrorCount ();
--

[PATCH v1 10/11] ShellPkg: acpiview: GTDT: Validate global pointers before use

Krzysztof Koch
 

Check if global (in the scope of the GTDT parser) pointers have been
successfully updated before they are used for further table parsing.

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Test against NULL pointers [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Gtdt/GtdtParser.c | 37 ++++++++++++++++++++
1 file changed, 37 insertions(+)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Gtdt/GtdtParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Gtdt/GtdtParser.c
index 57174e14c80072f12b90e1996ebe8f0002d0c404..699a55b549ec3fa61bbd156898821055dc019199 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Gtdt/GtdtParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Gtdt/GtdtParser.c
@@ -189,6 +189,18 @@ DumpGTBlock (
PARSER_PARAMS (GtBlockParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((GtBlockTimerCount == NULL) ||
+ (GtBlockTimerOffset == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient GT Block Structure length. Length = %d.\n",
+ Length
+ );
+ return;
+ }
+
Offset = *GtBlockTimerOffset;
Index = 0;

@@ -272,6 +284,18 @@ ParseAcpiGtdt (
PARSER_PARAMS (GtdtParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((GtdtPlatformTimerCount == NULL) ||
+ (GtdtPlatformTimerOffset == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient table length. AcpiTableLength = %d.\n",
+ AcpiTableLength
+ );
+ return;
+ }
+
TimerPtr = Ptr + *GtdtPlatformTimerOffset;
Offset = *GtdtPlatformTimerOffset;
Index = 0;
@@ -290,6 +314,19 @@ ParseAcpiGtdt (
PARSER_PARAMS (GtPlatformTimerHeaderParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((PlatformTimerType == NULL) ||
+ (PlatformTimerLength == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient remaining table buffer length to read the " \
+ L"Platform Timer Structure header. Length = %d.\n",
+ AcpiTableLength - Offset
+ );
+ return;
+ }
+
// Make sure the Platform Timer is inside the table.
if ((Offset + *PlatformTimerLength) > AcpiTableLength) {
IncrementErrorCount ();
--

[PATCH v1 09/11] ShellPkg: acpiview: IORT: Validate global pointers before use

Krzysztof Koch
 

Check if global (in the scope of the IORT parser) pointers have been
successfully updated before they are used for further table parsing.

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Test against NULL pointers [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Iort/IortParser.c | 52 ++++++++++++++++++++
1 file changed, 52 insertions(+)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Iort/IortParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Iort/IortParser.c
index f1cdb9ac01d848f22ab588d8f824886387c5983d..c43ed4ee5fdd8de409052d57c13a27811c75c7d0 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Iort/IortParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Iort/IortParser.c
@@ -317,6 +317,20 @@ DumpIortNodeSmmuV1V2 (
PARSER_PARAMS (IortNodeSmmuV1V2Parser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((InterruptContextCount == NULL) ||
+ (InterruptContextOffset == NULL) ||
+ (PmuInterruptCount == NULL) ||
+ (PmuInterruptOffset == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient SMMUv1/2 node length. Length = %d\n",
+ Length
+ );
+ return;
+ }
+
Offset = *InterruptContextOffset;
Index = 0;

@@ -428,6 +442,17 @@ DumpIortNodeIts (
PARSER_PARAMS (IortNodeItsParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if (ItsCount == NULL) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient ITS group length. Length = %d.\n",
+ Length
+ );
+ return;
+ }
+
Index = 0;

while ((Index < *ItsCount) &&
@@ -612,6 +637,18 @@ ParseAcpiIort (
PARSER_PARAMS (IortParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((IortNodeCount == NULL) ||
+ (IortNodeOffset == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient table length. AcpiTableLength = %d.\n",
+ AcpiTableLength
+ );
+ return;
+ }
+
Offset = *IortNodeOffset;
NodePtr = Ptr + Offset;
Index = 0;
@@ -630,6 +667,21 @@ ParseAcpiIort (
PARSER_PARAMS (IortNodeHeaderParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((IortNodeType == NULL) ||
+ (IortNodeLength == NULL) ||
+ (IortIdMappingCount == NULL) ||
+ (IortIdMappingOffset == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient remaining table buffer length to read the " \
+ L"IORT node header. Length = %d.\n",
+ AcpiTableLength - Offset
+ );
+ return;
+ }
+
// Make sure the IORT Node is inside the table
if ((Offset + (*IortNodeLength)) > AcpiTableLength) {
IncrementErrorCount ();
--

[PATCH v1 08/11] ShellPkg: acpiview: PPTT: Validate global pointers before use

Krzysztof Koch
 

Check if the NumberOfPrivateResources, ProcessorTopologyStructureType
and ProcessorTopologyStructureLength pointers have been successfully
updated before they are used for further table parsing.

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Test against NULL pointers [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Pptt/PpttParser.c | 25 ++++++++++++++++++++
1 file changed, 25 insertions(+)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Pptt/PpttParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Pptt/PpttParser.c
index 6254b9913fffb429fc54bb1301bf3e4b2e5bf161..675ba75f02b367cd5ad9f2ac23c30ed0ab58f286 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Pptt/PpttParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Pptt/PpttParser.c
@@ -264,6 +264,17 @@ DumpProcessorHierarchyNodeStructure (
PARSER_PARAMS (ProcessorHierarchyNodeStructureParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if (NumberOfPrivateResources == NULL) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient Processor Hierarchy Node length. Length = %d.\n",
+ Length
+ );
+ return;
+ }
+
// Make sure the Private Resource array lies inside this structure
if (Offset + (*NumberOfPrivateResources * sizeof (UINT32)) > Length) {
IncrementErrorCount ();
@@ -387,6 +398,7 @@ ParseAcpiPptt (
AcpiTableLength,
PARSER_PARAMS (PpttParser)
);
+
ProcessorTopologyStructurePtr = Ptr + Offset;

while (Offset < AcpiTableLength) {
@@ -400,6 +412,19 @@ ParseAcpiPptt (
PARSER_PARAMS (ProcessorTopologyStructureHeaderParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((ProcessorTopologyStructureType == NULL) ||
+ (ProcessorTopologyStructureLength == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient remaining table buffer length to read the " \
+ L"processor topology structure header. Length = %d.\n",
+ AcpiTableLength - Offset
+ );
+ return;
+ }
+
// Make sure the PPTT structure lies inside the table
if ((Offset + *ProcessorTopologyStructureLength) > AcpiTableLength) {
IncrementErrorCount ();
--

[PATCH v1 05/11] ShellPkg: acpiview: SLIT: Validate System Locality count

Krzysztof Koch
 

1. Check if the 'Number of System Localities' provided can be
represented in the SLIT table. The table 'Length' field is a 32-bit
value while the 'Number of System Localities' field is 64 bits long
(the resulting bound is verified in the sketch below).

2. Check if the SLIT matrix fits in the table buffer. If N is the SLIT
locality count, then the matrix used to represent the localities is
N*N bytes long. The ACPI table length must be big enough to fit the
matrix.

3. Remove (now) redundant 64x64 bit multiplication.
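
The bound in point 1 can be verified with a short calculation (a sketch;
the 44-byte header size, i.e. the 36-byte ACPI header plus the 64-bit
locality count field, is an assumption about the table layout):

import math

MAX_UINT32 = 0xFFFFFFFF
SLIT_HEADER_SIZE = 44  # assumed: 36-byte ACPI header + 8-byte locality count

# Largest N such that the header plus an N*N-byte distance matrix still
# fits in the 32-bit table Length field.
max_localities = math.isqrt(MAX_UINT32 - SLIT_HEADER_SIZE)
print(max_localities)  # 65535, i.e. MAX_UINT16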

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Validate the 'Number of System Localities' Field [Krzysztof]
- Remove redundant 64x64 bit multiplication [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c | 47 +++++++++++++++++---
1 file changed, 42 insertions(+), 5 deletions(-)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c
index 17e2166a09d8615b714e0c51d4d93d293fcdf601..e4625ee8b13907893a9b6990ecb956baf91cc3b9 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c
@@ -30,7 +30,7 @@ STATIC CONST ACPI_PARSER SlitParser[] = {
/**
Macro to get the value of a System Locality
**/
-#define SLIT_ELEMENT(Ptr, i, j) *(Ptr + (MultU64x64 (i, LocalityCount)) + j)
+#define SLIT_ELEMENT(Ptr, i, j) *(Ptr + (i * LocalityCount) + j)

/**
This function parses the ACPI SLIT table.
@@ -57,9 +57,9 @@ ParseAcpiSlit (
)
{
UINT32 Offset;
- UINT64 Count;
- UINT64 Index;
- UINT64 LocalityCount;
+ UINT32 Count;
+ UINT32 Index;
+ UINT32 LocalityCount;
UINT8* LocalityPtr;
CHAR16 Buffer[80]; // Used for AsciiName param of ParseAcpi

@@ -87,8 +87,45 @@ ParseAcpiSlit (
return;
}

+ /*
+ Despite the 'Number of System Localities' being a 64-bit field in SLIT,
+ the maximum number of localities that can be represented in SLIT is limited
+ by the 'Length' field of the ACPI table.
+
+ Since the ACPI table length field is 32 bits wide, the maximum number of
+ localities that can be represented in SLIT can be calculated as:
+
+ MaxLocality = sqrt (MAX_UINT32 - sizeof (EFI_ACPI_6_3_SYSTEM_LOCALITY_DISTANCE_INFORMATION_TABLE_HEADER))
+ = 65535
+ = MAX_UINT16
+ */
+ if (*SlitSystemLocalityCount > MAX_UINT16) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: The Number of System Localities provided can't be represented " \
+ L"in the SLIT table. SlitSystemLocalityCount = %ld. " \
+ L"MaxLocalityCountAllowed = %d.\n",
+ *SlitSystemLocalityCount,
+ MAX_UINT16
+ );
+ return;
+ }
+
+ LocalityCount = (UINT32)*SlitSystemLocalityCount;
+
+ // Make sure system localities fit in the table buffer provided
+ if (Offset + (LocalityCount * LocalityCount) > AcpiTableLength) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Invalid Number of System Localities. " \
+ L"SlitSystemLocalityCount = %ld. AcpiTableLength = %d.\n",
+ *SlitSystemLocalityCount,
+ AcpiTableLength
+ );
+ return;
+ }
+
LocalityPtr = Ptr + Offset;
- LocalityCount = *SlitSystemLocalityCount;

// We only print the Localities if the count is less than 16
// If the locality count is more than 16 then refer to the
--

Re: [edk2-platforms: PATCH 1/1] Platforms/RPi3: Add multiple embedded Device Tree selection

Pete Batard
 

Hi Leif,

On 2019.08.15 13:13, Leif Lindholm wrote:
On Mon, Aug 12, 2019 at 02:06:34PM +0100, Pete Batard wrote:
The Raspberry Pi 3 platform currently has 2 different models, each with a
different Device Tree. Rather than embedding a single one, and requiring
users to manually provide the other, this patch ensures that we now embed
both and serve the relevant one at runtime.

Signed-off-by: Pete Batard <@pbatard>
Changeset looks very useful. Some style issues. And one question.
First the question: is there no practical size restriction on the
firmware image? This is a 24k increase in image size.
I thought about that too, but we still have plenty of unused payload space left, which is why I've been adding stuff like the EBC driver, and now this, without worrying too much about the extra space taken.

For one, in the current 2 MB firmware image, we're still barely scratching 1.2 MB of effective data (for RELEASE; for DEBUG it's ~1.5 MB), so we have plenty of space to add additional Device Trees, logos, and similar non-critical data. As a matter of fact, I was almost tempted to have this patchset also include the Raspberry Pi 4 Device Tree, so that we would just have all of the DTs we might ever be interested in available at runtime, regardless of whether they are applicable or not (as this would simplify FdtDxe reuse as well as GUID handling).

Also, I would expect the build process to complain loudly if we go over capacity.

Finally, I'm pretty sure we could relatively easily switch to a 4 MB or 8 MB firmware image if we start to run out of space with the current 2 MB, since, as far as I am aware, the SoC is designed to be able to boot large Linux kernels directly and doesn't enforce a hardcoded limit on our FV size.

In other words, even if we regularly add tens of KB here and there, it's going to take some time before we have to start worrying about going over capacity...

---
Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.c | 57 ++++++++++++++++----
Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.inf | 3 +-
Platform/RaspberryPi/RPi3/RPi3.dec | 3 +-
Platform/RaspberryPi/RPi3/RPi3.fdf | 6 ++-
4 files changed, 55 insertions(+), 14 deletions(-)

diff --git a/Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.c b/Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.c
index c84e5b2767e2..7c0ab75e7cb1 100644
--- a/Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.c
+++ b/Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.c
@@ -20,6 +20,11 @@
#include <Guid/Fdt.h>
+CONST EFI_GUID *mFdtGuid[] = {
+ &gRaspberryPiFdtGuid1,
+ &gRaspberryPiFdtGuid2,
+ };
+
STATIC VOID *mFdtImage;
STATIC RASPBERRY_PI_FIRMWARE_PROTOCOL *mFwProtocol;
@@ -368,7 +373,9 @@ FdtDxeInitialize (
EFI_STATUS Status;
VOID *FdtImage;
UINTN FdtSize;
+ UINTN FdtIndex;
INT32 Retval;
+ UINT32 BoardRevision;
BOOLEAN Internal;
Status = gBS->LocateProtocol (&gRaspberryPiFirmwareProtocolGuid, NULL,
@@ -383,13 +390,41 @@ FdtDxeInitialize (
* Have FDT passed via config.txt.
*/
FdtSize = fdt_totalsize (FdtImage);
This
- DEBUG ((DEBUG_INFO, "DTB passed via config.txt of 0x%lx bytes\n", FdtSize));
+ DEBUG ((DEBUG_INFO, "Device Tree passed via config.txt (0x%lx bytes)\n", FdtSize));
looks unrelated to the changeset.

Status = EFI_SUCCESS;
} else {
+ /*
+ * Use one of the embedded FDT's.
+ */
Internal = TRUE;
This DEBUG changes bit
- DEBUG ((DEBUG_INFO, "No/bad FDT at %p (%a), trying internal DTB...\n",
- FdtImage, fdt_strerror (Retval)));
- Status = GetSectionFromAnyFv (&gRaspberryPiFdtFileGuid, EFI_SECTION_RAW, 0,
+ DEBUG ((DEBUG_INFO, "No/Bad Device Tree found at address 0x%p (%a), "
+ "looking up internal one...\n", FdtImage, fdt_strerror (Retval)));
to here looks completely unrelated to the changeset.
Which garbles the diff somewhat with regard to the functional line in
the middle...
I mentioned in the cover letter that changes were also added to harmonize the debug output, to avoid acronyms and provide human-readable error codes. But you're right: at the very least, these changes should have been mentioned in the commit message for the actual patch itself, and they are probably better off on their own.

I'll break them out as a separate patch as requested.

Regards,

/Pete


+ /*
+ * Query the board revision to differentiate between models.
+ */
+ Status = mFwProtocol->GetModelRevision (&BoardRevision);
+ if (EFI_ERROR (Status)) {
+ FdtIndex = 0;
+ DEBUG ((DEBUG_ERROR, "Failed to get board type: %r\n", Status));
+ } else {
+ // www.raspberrypi.org/documentation/hardware/raspberrypi/revision-codes/README.md
+ switch ((BoardRevision >> 4) & 0xFF) {
+ case 0x08: // Raspberry 3 Model B
+ DEBUG ((DEBUG_INFO, "Using RPi3 Model B internal Device Tree\n"));
+ FdtIndex = 0;
+ break;
+ case 0x0D: // Raspberry 3 Model B+
+ DEBUG ((DEBUG_INFO, "Using RPi3 Model B+ internal Device Tree\n"));
+ FdtIndex = 1;
+ break;
+ default:
+ DEBUG ((DEBUG_INFO, "Using default internal Device Tree\n"));
+ FdtIndex = 0;
+ break;
+ }
+ }
+ ASSERT (FdtIndex < ARRAY_SIZE (mFdtGuid));
+ Status = GetSectionFromAnyFv (mFdtGuid[FdtIndex], EFI_SECTION_RAW, 0,
&FdtImage, &FdtSize);
if (Status == EFI_SUCCESS) {
if (fdt_check_header (FdtImage) != 0) {
Everything from here...

@@ -419,27 +454,27 @@ FdtDxeInitialize (
Status = SanitizePSCI ();
if (EFI_ERROR (Status)) {
- Print (L"Failed to sanitize PSCI (error %d)\n", Status);
+ Print (L"Failed to sanitize PSCI: %r\n", Status);
}
Status = CleanMemoryNodes ();
if (EFI_ERROR (Status)) {
- Print (L"Failed to clean memory nodes (error %d)\n", Status);
+ Print (L"Failed to clean memory nodes: %r\n", Status);
}
Status = CleanSimpleFramebuffer ();
if (EFI_ERROR (Status)) {
- Print (L"Failed to clean frame buffer (error %d)\n", Status);
+ Print (L"Failed to clean frame buffer: %r\n", Status);
}
Status = FixEthernetAliases ();
if (EFI_ERROR (Status)) {
- Print (L"Failed to fix ethernet aliases (error %d)\n", Status);
+ Print (L"Failed to fix ethernet aliases: %r\n", Status);
}
Status = UpdateMacAddress ();
if (EFI_ERROR (Status)) {
- Print (L"Failed to update MAC address (error %d)\n", Status);
+ Print (L"Failed to update MAC address: %r\n", Status);
}
if (Internal) {
@@ -448,11 +483,11 @@ FdtDxeInitialize (
*/
Status = UpdateBootArgs ();
if (EFI_ERROR (Status)) {
- Print (L"Failed to update boot arguments (error %d)\n", Status);
+ Print (L"Failed to update boot arguments: %r\n", Status);
}
}
- DEBUG ((DEBUG_INFO, "Installed FDT is at %p\n", mFdtImage));
+ DEBUG ((DEBUG_INFO, "Installed Device Tree at address 0x%p\n", mFdtImage));
Status = gBS->InstallConfigurationTable (&gFdtTableGuid, mFdtImage);
ASSERT_EFI_ERROR (Status);
...to here is unrelated printout message changes.
So, I don't object to the debug printout changes as such, but they
obscure the functional changeset. Can you please break them out as a
separate patch?
Best Regards,
Leif

diff --git a/Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.inf b/Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.inf
index 5b0b1a09f374..c994a2b69d78 100644
--- a/Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.inf
+++ b/Platform/RaspberryPi/RPi3/Drivers/FdtDxe/FdtDxe.inf
@@ -35,7 +35,8 @@ [LibraryClasses]
[Guids]
gFdtTableGuid
- gRaspberryPiFdtFileGuid
+ gRaspberryPiFdtGuid1
+ gRaspberryPiFdtGuid2
[Protocols]
gRaspberryPiFirmwareProtocolGuid ## CONSUMES
diff --git a/Platform/RaspberryPi/RPi3/RPi3.dec b/Platform/RaspberryPi/RPi3/RPi3.dec
index 22de439fde8f..d90ea8329818 100644
--- a/Platform/RaspberryPi/RPi3/RPi3.dec
+++ b/Platform/RaspberryPi/RPi3/RPi3.dec
@@ -24,7 +24,8 @@ [Protocols]
[Guids]
gRaspberryPiTokenSpaceGuid = {0xCD7CC258, 0x31DB, 0x11E6, {0x9F, 0xD3, 0x63, 0xB0, 0xB8, 0xEE, 0xD6, 0xB5}}
- gRaspberryPiFdtFileGuid = {0xDF5DA223, 0x1D27, 0x47C3, { 0x8D, 0x1B, 0x9A, 0x41, 0xB5, 0x5A, 0x18, 0xBC}}
+ gRaspberryPiFdtGuid1 = {0xDF5DA223, 0x1D27, 0x47C3, { 0x8D, 0x1B, 0x9A, 0x41, 0xB5, 0x5A, 0x18, 0xBC}}
+ gRaspberryPiFdtGuid2 = {0x3D523012, 0x73FE, 0x40E5, { 0x89, 0x2E, 0x1A, 0x4D, 0xF6, 0x0F, 0x3C, 0x0C}}
gRaspberryPiEventResetGuid = {0xCD7CC258, 0x31DB, 0x11E6, {0x9F, 0xD3, 0x63, 0xB4, 0xB4, 0xE4, 0xD4, 0xB4}}
gConfigDxeFormSetGuid = {0xCD7CC258, 0x31DB, 0x22E6, {0x9F, 0x22, 0x63, 0xB0, 0xB8, 0xEE, 0xD6, 0xB5}}
diff --git a/Platform/RaspberryPi/RPi3/RPi3.fdf b/Platform/RaspberryPi/RPi3/RPi3.fdf
index c62d649834c7..83753d84badc 100644
--- a/Platform/RaspberryPi/RPi3/RPi3.fdf
+++ b/Platform/RaspberryPi/RPi3/RPi3.fdf
@@ -300,11 +300,15 @@ [FV.FvMain]
INF Platform/RaspberryPi/$(PLATFORM_NAME)/Drivers/LogoDxe/LogoDxe.inf
#
- # FDT (GUID matches gRaspberryPiFdtFileGuid in FdtDxe)
+ # Device Tree support
+ # GUIDs match gRaspberryPiFdtGuid# from FdtDxe.
#
FILE FREEFORM = DF5DA223-1D27-47C3-8D1B-9A41B55A18BC {
SECTION RAW = Platform/RaspberryPi/$(PLATFORM_NAME)/DeviceTree/bcm2710-rpi-3-b.dtb
}
+ FILE FREEFORM = 3D523012-73FE-40E5-892E-1A4DF60F3C0C {
+ SECTION RAW = Platform/RaspberryPi/$(PLATFORM_NAME)/DeviceTree/bcm2710-rpi-3-b-plus.dtb
+ }
[FV.FVMAIN_COMPACT]
FvAlignment = 16
--
2.21.0.windows.1

[PATCH v1 02/11] ShellPkg: acpiview: RSDP: Validate global pointer before use

Krzysztof Koch
 

Check if the XsdtAddress pointer has been successfully updated before
it is used for further table parsing.

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Test against NULL pointers [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Rsdp/RsdpParser.c | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Rsdp/RsdpParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Rsdp/RsdpParser.c
index 5a5c4b50c12e6eb0aa0efb1765df7e123f614da3..f4a8732a7db7c437031f2a3d2f266b80eff17b4b 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Rsdp/RsdpParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Rsdp/RsdpParser.c
@@ -138,6 +138,18 @@ ParseAcpiRsdp (
PARSER_PARAMS (RsdpParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if (XsdtAddress == NULL) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient table length. AcpiTableLength = %d." \
+ L"RSDP parsing aborted.\n",
+ AcpiTableLength
+ );
+ return;
+ }
+
// This code currently supports parsing of XSDT table only
// and does not parse the RSDT table. Platforms provide the
// RSDT to enable compatibility with ACPI 1.0 operating systems.
--

[PATCH v1 06/11] ShellPkg: acpiview: SRAT: Validate global pointers before use

Krzysztof Koch
 

Check if the SratRAType and SratRALength pointers have been successfully
updated before they are used for further table parsing.

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Test against NULL pointers [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Srat/SratParser.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Srat/SratParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Srat/SratParser.c
index a8aa420487bb6bf29fc38221d0b221573c64b8b3..e09a7db8f5c92b44c96b6c37a44a39693352b442 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Srat/SratParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Srat/SratParser.c
@@ -219,6 +219,19 @@ ParseAcpiSrat (
PARSER_PARAMS (SratResourceAllocationParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((SratRAType == NULL) ||
+ (SratRALength == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient remaining table buffer length to read the " \
+ L"Static Resource Allocation structure header. Length = %d.\n",
+ AcpiTableLength - Offset
+ );
+ return;
+ }
+
// Make sure the SRAT structure lies inside the table
if ((Offset + *SratRALength) > AcpiTableLength) {
IncrementErrorCount ();
--

[PATCH v1 03/11] ShellPkg: acpiview: FADT: Validate global pointer before use

Krzysztof Koch
 

Check if global pointers have been successfully updated before they
are used for further table parsing.

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Test against NULL pointers [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Fadt/FadtParser.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Fadt/FadtParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Fadt/FadtParser.c
index e40c9ef8ee4b3285faf8c6edf3cb6236ee367397..e218e45926abced1096e75441e22108db7a3a811 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Fadt/FadtParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Fadt/FadtParser.c
@@ -203,6 +203,20 @@ ParseAcpiFadt (
PARSER_PARAMS (FadtParser)
);

+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if ((DsdtAddress == NULL) ||
+ (FadtMinorRevision == NULL) ||
+ (X_DsdtAddress == NULL)) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient table length. AcpiTableLength = %d. " \
+ L"FADT parsing aborted.\n",
+ AcpiTableLength
+ );
+ return;
+ }
+
if (Trace) {
Print (L"\nSummary:\n");
PrintFieldName (2, L"FADT Version");
--

[PATCH v1 00/11] Test against invalid pointers in acpiview

Krzysztof Koch
 

Prevent the use of invalid pointers when parsing ACPI tables in the UEFI
shell acpiview tool.

The parsing of ACPI tables is often controlled by values read earlier
from the same table. For example, the 'Offset' or 'Count' fields found
in a structure are later used to parse its substructures. If such a
field lies outside the buffer length provided, then there is a
possibility of a wild or dangling pointer.

Currently, if the ParseAcpi() function terminates early because the end
of the input table data buffer has been reached, then the pointers
which were supposed to be updated by this function are left untouched.
This is a security issue as the values pointed to by these pointers are
later used for flow control.

This patch series aims to solve this security issue by explicitly
initializing any pointers lying outside the input ACPI data buffer to
NULL and testing for NULL whenever these pointers are dereferenced.
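
As an illustration of the failure mode, consider the following minimal
sketch. It mirrors the ACPI_PARSER/ParseAcpi() usage in acpiview, but the
table name and field layout are made up for this example:

  UINT32  *ElementCount;  // filled in by ParseAcpi() via ItemPtr

  STATIC CONST ACPI_PARSER TablParser[] = {
    PARSER_ACPI_HEADER (&AcpiHdrInfo),
    {L"Number of Elements", 4, 36, L"0x%x", NULL,
     (VOID**)&ElementCount, NULL, NULL}
  };

  // If AcpiTableLength < 40, ParseAcpi() stops before the 'Number of
  // Elements' field and, prior to this series, left ElementCount
  // untouched, i.e. wild if the variable was never initialized.
  ParseAcpi (Trace, 0, "TABL", Ptr, AcpiTableLength,
    PARSER_PARAMS (TablParser));

  // Dereferencing the possibly-wild pointer to control the parsing
  // loop is then undefined behaviour.
  for (Index = 0; Index < *ElementCount; Index++) {
    // parse substructure Index...
  }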

Changes can be seen at: https://github.com/KrzysztofKoch1/edk2/tree/612_add_pointer_validation_v1

Krzysztof Koch (11):
ShellPkg: acpiview: Set ItemPtr to NULL for unprocessed table fields
ShellPkg: acpiview: RSDP: Validate global pointer before use
ShellPkg: acpiview: FADT: Validate global pointer before use
ShellPkg: acpiview: SLIT: Validate global pointer before use
ShellPkg: acpiview: SLIT: Validate System Locality count
ShellPkg: acpiview: SRAT: Validate global pointers before use
ShellPkg: acpiview: MADT: Validate global pointers before use
ShellPkg: acpiview: PPTT: Validate global pointers before use
ShellPkg: acpiview: IORT: Validate global pointers before use
ShellPkg: acpiview: GTDT: Validate global pointers before use
ShellPkg: acpiview: DBG2: Validate global pointers before use

ShellPkg/Library/UefiShellAcpiViewCommandLib/AcpiParser.c | 9 ++-
ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Dbg2/Dbg2Parser.c | 43 ++++++++++++++
ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Fadt/FadtParser.c | 14 +++++
ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Gtdt/GtdtParser.c | 37 ++++++++++++
ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Iort/IortParser.c | 52 +++++++++++++++++
ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Madt/MadtParser.c | 13 +++++
ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Pptt/PpttParser.c | 25 ++++++++
ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Rsdp/RsdpParser.c | 12 ++++
ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c | 61 ++++++++++++++++++--
ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Srat/SratParser.c | 13 +++++
10 files changed, 272 insertions(+), 7 deletions(-)

--

[PATCH v1 01/11] ShellPkg: acpiview: Set ItemPtr to NULL for unprocessed table fields

Krzysztof Koch
 

For fields outside the buffer length provided, reset to NULL any
pointers which were supposed to be updated by a ParseAcpi() function
call. This way one can easily validate whether a pointer was
successfully updated.

The ParseAcpi() function parses the given ACPI table buffer up to a
number of bytes which is the minimum of the buffer length and the
length described by the ACPI_PARSER array. If the buffer length is
shorter than the structure described by the array, then it is possible
that an ItemPtr inside ACPI_PARSER is never updated or initialized.
This can lead to an error if the value pointed to by ItemPtr is later
used to control the parsing logic.

A typical example would be a 'number of elements' field in an ACPI
structure header which defines how many substructures of a given type
are present in the structure body. If the 'number of elements' field
is not parsed, this could result in a dangling pointer which could
cause a problem later.
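
With ItemPtr reset to NULL, the guard added to each parser by the rest
of this series reduces to a single NULL test after ParseAcpi() returns.
A minimal sketch of the pattern (the table and field names are
illustrative):

  Offset = ParseAcpi (
             Trace,
             0,
             "TABL",
             Ptr,
             AcpiTableLength,
             PARSER_PARAMS (TablParser)
             );

  // Any ItemPtr target that fell outside the input buffer is now NULL
  // rather than stale, so one check guards all later dereferences.
  if (ElementCount == NULL) {
    IncrementErrorCount ();
    Print (
      L"ERROR: Insufficient table length. AcpiTableLength = %d.\n",
      AcpiTableLength
      );
    return;
  }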

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Set ItemPtr to NULL for unprocessed table fields [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/AcpiParser.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/AcpiParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/AcpiParser.c
index 2d6ff80e299eebe7853061d3db89332197c0dc0e..1ede12859721db75d17fd0bfc14dc9e9c0d573aa 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/AcpiParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/AcpiParser.c
@@ -502,8 +502,15 @@ ParseAcpi (

for (Index = 0; Index < ParserItems; Index++) {
if ((Offset + Parser[Index].Length) > Length) {
+
+ // For fields outside the buffer length provided, reset any pointers
+ // which were supposed to be updated by this function call
+ if (Parser[Index].ItemPtr != NULL) {
+ *Parser[Index].ItemPtr = NULL;
+ }
+
// We don't parse past the end of the max length specified
- break;
+ continue;
}

if (GetConsistencyChecking () &&
--

[PATCH v1 04/11] ShellPkg: acpiview: SLIT: Validate global pointer before use

Krzysztof Koch
 

Check if the SlitSystemLocalityCount pointer has been successfully
updated before it is used for further table parsing.

Signed-off-by: Krzysztof Koch <krzysztof.koch@...>
---

Notes:
v1:
- Test against NULL pointers [Krzysztof]

ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c
index ca2808db526b1bbb79aeb21ccfc0ae6c79b2dfd8..17e2166a09d8615b714e0c51d4d93d293fcdf601 100644
--- a/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c
+++ b/ShellPkg/Library/UefiShellAcpiViewCommandLib/Parsers/Slit/SlitParser.c
@@ -1,7 +1,7 @@
/** @file
SLIT table parser

- Copyright (c) 2016 - 2018, ARM Limited. All rights reserved.
+ Copyright (c) 2016 - 2019, ARM Limited. All rights reserved.
SPDX-License-Identifier: BSD-2-Clause-Patent

@par Reference(s):
@@ -75,9 +75,21 @@ ParseAcpiSlit (
AcpiTableLength,
PARSER_PARAMS (SlitParser)
);
+
+ // Check if the values used to control the parsing logic have been
+ // successfully read.
+ if (SlitSystemLocalityCount == NULL) {
+ IncrementErrorCount ();
+ Print (
+ L"ERROR: Insufficient table length. AcpiTableLength = %d.\n",
+ AcpiTableLength
+ );
+ return;
+ }
+
LocalityPtr = Ptr + Offset;
-
LocalityCount = *SlitSystemLocalityCount;
+
// We only print the Localities if the count is less than 16
// If the locality count is more than 16 then refer to the
// raw data dump.
--