Igor Mammedov <imammedo@...>
On Mon, 30 Sep 2019 13:51:46 +0200
"Laszlo Ersek" <email@example.com> wrote:
On 09/24/19 13:19, Igor Mammedov wrote:
On Mon, 23 Sep 2019 20:35:02 +0200
"Laszlo Ersek" <firstname.lastname@example.org> wrote:
Considering the plan at , the two patch sets should cover
I've got good results. For this (1/2) QEMU patch:

Tested-by: Laszlo Ersek <email@example.com>

Laszlo, thanks for trying it out.
I tested the following scenarios. In every case, I verified the OVMF
log, and also the "info mtree" monitor command's result (i.e. whether
"smbase-blackhole" / "smbase-window" were disabled or enabled).
Mostly, I diffed these text files between the test scenarios (looking
for desired / undesired differences). In the Linux guests, I checked
/ compared the dmesg too (wrt. the UEFI memmap).
- unpatched OVMF (regression test), Fedora guest, normal boot and S3
- patched OVMF, but feature disabled with "-global
mch.smbase-smram=off" (another regression test), Fedora guest,
normal boot and S3
- patched OVMF, feature enabled, Fedora and various Windows guests
(win7, win8, win10 families, client/server), normal boot and S3
- a subset of the above guests, with S3 disabled (-global
ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
SEV: used a 5.2-ish Linux guest, with S3 disabled (S3 is not supported
under SEV at the moment):
- unpatched OVMF (regression test), normal boot
- patched OVMF but feature disabled on the QEMU cmdline (another
regression test), normal boot
- patched OVMF, feature enabled, normal boot.
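For reference, the scenario matrix above maps onto QEMU command-line knobs roughly as follows. This is a configuration sketch only: the firmware image paths are placeholders, and the guest disk/network options are omitted; the `-global` properties are the ones named in this thread.

```shell
# Patched QEMU, feature enabled (the default on q35 with the patch applied):
qemu-system-x86_64 -machine q35,smm=on \
  -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd

# Regression test: patched QEMU, but the feature disabled:
qemu-system-x86_64 -machine q35,smm=on \
  -global mch.smbase-smram=off \
  -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd

# S3-disabled variant (S3 resume obviously not tested in this scenario):
qemu-system-x86_64 -machine q35,smm=on \
  -global ICH9-LPC.disable_s3=1 \
  -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd
```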
I plan to post the OVMF patches tomorrow, for discussion.
(It's likely too early to push these QEMU / edk2 patches right now --
we don't know yet if this path will take us to the destination. For
now, it certainly looks great.)
It's nice to hear that the approach is somewhat usable.
Hopefully we won't have to invent a 'paused' CPU mode.
Please CC me on your patches
(not that I qualify for reviewing,
but maybe I could learn a thing or two from them).
step (01); at least as proof of concept.

- [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
- The current thread:
  [Qemu-devel] [PATCH 0/2] q35: mch: allow to lock down 128K RAM at default SMBASE address
- [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
(I'll have to figure out what SMI handler to put in place there, but I'd
like to experiment with that once we can cause a new CPU to start
executing code there, in SMM.)
So what's next?
To me it looks like we need to figure out how QEMU can make the OS call
into SMM (in the GPE cpu hotplug handler), passing in parameters and
such. This would be step (03).
Do you agree?
If so, I'll ask Jiewen about such OS->SMM calls separately, because I
seem to remember that there used to be an "SMM communication table" of
sorts, for flexible OS->SMM calls. However, it appears to be deprecated.

We could try to resurrect it and layer some kind of protocol on top,
to describe which CPUs were hotplugged, and where.
Or we could put a parameter into the SMI status register (IO port 0xb3)
and then trigger an SMI from the GPE handler, to tell the SMI handler
that a CPU hotplug happened, and then use QEMU's CPU hotplug interface
to enumerate the hotplugged CPUs for the SMI handler.
The latter is probably simpler, as we won't need to reinvent the wheel
(we can just reuse the interface that's already in use by the GPE handler).
Hmmm.... Yes, UEFI 2.8 has "Appendix O - UEFI ACPI Data Table", and it
writes (after defining the table format):
The first use of this UEFI ACPI table format is the SMM
Communication ACPI Table. This table describes a special software
SMI that can be used to initiate inter-mode communication in the OS
present environment by non-firmware agents with SMM code.
Note: The use of the SMM Communication ACPI table is deprecated in
UEFI spec. 2.7. This is due to the lack of a use case for
inter-mode communication by non-firmware agents with SMM code
and support for initiating this form of communication in [...]
The changelog at the front of the UEFI spec also references the
Mantis#1691 spec ticket, "Remove/Deprecate SMM Communication ACPI Table"
(addressed in UEFI 2.6B).
(I think that must have been a security ticket, because, while I
generally have access to Mantis tickets,
<https://mantis.uefi.org/mantis/view.php?id=1691> gives me "Access
Denied" :/ )