Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address


Yao, Jiewen

Reply below, inline (comments marked "[Jiewen]").

-----Original Message-----
From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Igor
Mammedov
Sent: Monday, September 30, 2019 8:37 PM
To: Laszlo Ersek <lersek@redhat.com>
Cc: devel@edk2.groups.io; qemu-devel@nongnu.org; Chen, Yingwen
<yingwen.chen@intel.com>; phillip.goerl@oracle.com;
alex.williamson@redhat.com; Yao, Jiewen <jiewen.yao@intel.com>; Nakajima,
Jun <jun.nakajima@intel.com>; Kinney, Michael D
<michael.d.kinney@intel.com>; pbonzini@redhat.com;
boris.ostrovsky@oracle.com; rfc@edk2.groups.io; joao.m.martins@oracle.com;
Brijesh Singh <brijesh.singh@amd.com>
Subject: Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K
SMRAM at default SMBASE address

On Mon, 30 Sep 2019 13:51:46 +0200
"Laszlo Ersek" <lersek@redhat.com> wrote:

Hi Igor,

On 09/24/19 13:19, Igor Mammedov wrote:
On Mon, 23 Sep 2019 20:35:02 +0200
"Laszlo Ersek" <lersek@redhat.com> wrote:
I've got good results. For this (1/2) QEMU patch:

Tested-by: Laszlo Ersek <lersek@redhat.com>

I tested the following scenarios. In every case, I verified the OVMF
log, and also the "info mtree" monitor command's result (i.e. whether
"smbase-blackhole" / "smbase-window" were disabled or enabled).
Mostly, I diffed these text files between the test scenarios (looking
for desired / undesired differences). In the Linux guests, I checked
/ compared the dmesg too (wrt. the UEFI memmap).

- unpatched OVMF (regression test), Fedora guest, normal boot and S3

- patched OVMF, but feature disabled with "-global
mch.smbase-smram=off" (another regression test), Fedora guest,
normal boot and S3

- patched OVMF, feature enabled, Fedora and various Windows guests
(win7, win8, win10 families, client/server), normal boot and S3

- a subset of the above guests, with S3 disabled (-global
ICH9-LPC.disable_s3=1), and obviously S3 resume not tested; see the
sample command line after the SEV scenarios below

SEV: used a 5.2-ish Linux guest, with S3 disabled (no support for that
under SEV right now):

- unpatched OVMF (regression test), normal boot

- patched OVMF but feature disabled on the QEMU cmdline (another
regression test), normal boot

- patched OVMF, feature enabled, normal boot.
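
For reference, a minimal sketch of a QEMU invocation combining the two
-global switches quoted in the scenarios above (they were exercised in
separate scenarios; the machine type, accelerator, and firmware/disk
image paths here are assumptions, not taken from the tested
configurations):

  qemu-system-x86_64 \
    -machine q35,smm=on,accel=kvm \
    -global mch.smbase-smram=off \
    -global ICH9-LPC.disable_s3=1 \
    -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=OVMF_VARS.fd \
    -drive if=virtio,format=qcow2,file=guest.qcow2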

I plan to post the OVMF patches tomorrow, for discussion.

(It's likely too early to push these QEMU / edk2 patches right now --
we don't know yet if this path will take us to the destination. For
now, it certainly looks great.)
Laszlo, thanks for trying it out.
It's nice to hear that the approach is somewhat usable.
Hopefully we won't have to invent a 'paused' CPU mode.

Please CC me on your patches
(not that I qualify for reviewing,
but maybe I could learn a thing or two from them).
Considering the plan at [1], the two patch sets [2] [3] should cover
step (01); at least as proof of concept.

[1] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
http://mid.mail-archive.com/20190830164802.1b17ff26@redhat.com

[2] The current thread:
[Qemu-devel] [PATCH 0/2] q35: mch: allow to lock down 128K RAM at
default SMBASE address
http://mid.mail-archive.com/20190917130708.10281-1-imammedo@redhat.com

[3] [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default
SMBASE" feature
http://mid.mail-archive.com/20190924113505.27272-1-lersek@redhat.com

(I'll have to figure out what SMI handler to put in place there, but I'd
like to experiment with that once we can cause a new CPU to start
executing code there, in SMM.)

So what's next?

To me it looks like we need to figure out how QEMU can make the OS call
into SMM (in the GPE cpu hotplug handler), passing in parameters and
such. This would be step (03).

Do you agree?

If so, I'll ask Jiewen about such OS->SMM calls separately, because I
seem to remember that there used to be an "SMM communication table" of
sorts, for flexible OS->SMM calls. However, it appears to be deprecated
lately.
We can try to resurrect it and layer some kind of protocol on top of it
to describe which CPUs were hotplugged, and where.

Or we could put a parameter into the SMI status register (IO port 0xB3)
and then trigger an SMI from the GPE handler, to tell the SMI handler
that a CPU hotplug happened, and then use QEMU's CPU hotplug interface
to enumerate the hotplugged CPUs for the SMI handler.

The latter is probably simpler, as we won't need to reinvent the wheel
(we can just reuse the interface that's already in use by the GPE
handler).
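
To make the port-0xB3 idea concrete, here is a minimal, hypothetical
sketch of the SMI-handler side in edk2-style C. The command value, the
function name, and the enumeration step are assumptions for
illustration only; they are not part of the patches under discussion:

  #include <Base.h>
  #include <Library/IoLib.h>

  #define ICH9_APM_STS           0xB3  // the SMI status port named above
  #define CPU_HOTPLUG_SMI_PARAM  0x04  // assumed value written by the GPE handler

  VOID
  EFIAPI
  CpuHotplugSmiCheck (
    VOID
    )
  {
    UINT8  Param;

    //
    // The GPE handler would first write the parameter to port 0xB3 and
    // then write the SMI command port (0xB2) to raise the SMI; here the
    // SMI handler reads the parameter back.
    //
    Param = IoRead8 (ICH9_APM_STS);
    if (Param == CPU_HOTPLUG_SMI_PARAM) {
      //
      // Here the handler would use QEMU's CPU hotplug interface to
      // enumerate the newly added CPUs and relocate their SMBASE;
      // omitted in this sketch.
      //
    }
  }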
[Jiewen] The PI specification, Volume 4 (SMM), defines
EFI_MM_COMMUNICATION_PROTOCOL.Communicate(). It can be used to
communicate between the OS and the SMM handler, but it requires a
runtime protocol call, and I am not sure how the OS loader would pass
this information to the OS kernel.

As such, I think using an ACPI SCI/GPE -> software SMI handler is an
easier way to achieve this, and it is the way I recommend.
For parameter passing, we can use 1) Port B2 (1 byte), 2) Port B3
(1 byte), 3) a chipset scratch register (4 or 8 bytes, depending on the
scratch register size), or 4) an ACPI NVS OPREGION, if the data
structure is complicated.
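
For reference, the call shape of the Communicate() interface mentioned
above, as defined by the PI specification (edk2's
<Protocol/MmCommunication.h>). The message GUID, payload, and function
below are hypothetical; only the protocol and header types come from
the specification:

  #include <Uefi.h>
  #include <Library/BaseMemoryLib.h>
  #include <Protocol/MmCommunication.h>

  //
  // Hypothetical message GUID and payload, for illustration only.
  //
  STATIC EFI_GUID  mCpuHotplugMessageGuid = {
    0x11223344, 0x5566, 0x7788,
    { 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff, 0x00 }
  };

  EFI_STATUS
  NotifyMmAboutHotpluggedCpu (
    IN EFI_MM_COMMUNICATION_PROTOCOL  *MmCommunication,
    IN UINT32                         ApicId
    )
  {
    UINT8                      Buffer[sizeof (EFI_MM_COMMUNICATE_HEADER) +
                                      sizeof (UINT32)];
    EFI_MM_COMMUNICATE_HEADER  *Header;
    UINTN                      Size;

    //
    // The communication buffer starts with EFI_MM_COMMUNICATE_HEADER,
    // followed by MessageLength bytes of GUID-specific payload.
    //
    Header = (EFI_MM_COMMUNICATE_HEADER *)Buffer;
    CopyGuid (&Header->HeaderGuid, &mCpuHotplugMessageGuid);
    Header->MessageLength = sizeof (UINT32);
    CopyMem (Header->Data, &ApicId, sizeof (UINT32));

    Size = sizeof (Buffer);
    return MmCommunication->Communicate (MmCommunication, Buffer, &Size);
  }

The port B2/B3, scratch-register, and NVS OPREGION options listed above
pass the parameter without this runtime protocol call; the SMI handler
simply reads it back, as in the earlier sketch.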



Hmmm.... Yes, UEFI 2.8 has "Appendix O - UEFI ACPI Data Table", and it
says (after defining the table format):

The first use of this UEFI ACPI table format is the SMM
Communication ACPI Table. This table describes a special software
SMI that can be used to initiate inter-mode communication in the OS
present environment by non-firmware agents with SMM code.

Note: The use of the SMM Communication ACPI table is deprecated in
UEFI spec. 2.7. This is due to the lack of a use case for
inter-mode communication by non-firmware agents with SMM code
and support for initiating this form of communication in
common OSes.

The changelog at the front of the UEFI spec also references the
Mantis#1691 spec ticket, "Remove/Deprecate SMM Communication ACPI
Table"
(addressed in UEFI 2.6B).

(I think that must have been a security ticket, because, while I
generally have access to Mantis tickets,
<https://mantis.uefi.org/mantis/view.php?id=1631> gives me "Access
Denied" :/ )

Thanks,
Laszlo


