[edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address
Igor Mammedov <imammedo@...>
On Thu, 19 Sep 2019 19:02:07 +0200
"Laszlo Ersek" <lersek@...> wrote:

> Hi Igor, [...]
>
> (+Brijesh)
>
> long-ish pondering ahead, with a question at the end.
>
> [...]
>
> Finally: can you please remind me why we lock down 128KB (32 pages) at
> 0x3_0000, and not just half of that? What do we need the range at
> [0x4_0000..0x4_FFFF] for?

If I recall correctly, the CPU consumes 64K for the save/restore area.
The other 64K is temporary RAM for use in the SMI relocation handler;
if it's possible to get away without it, then we can drop it and lock
only the 64K required for the CPU state. It won't help with the SEV
conflict though, as that is in the first 64K.

On the QEMU side, we can drop the black-hole approach and allocate a
dedicated SMRAM region, which explicitly gets mapped into the RAM
address space and, after SMI handler initialization, gets unmapped
(locked), so that SMRAM would be accessible only from the SMM context.
That way, RAM at 0x30000 could be used as normal when SMRAM is
unmapped.
"Laszlo Ersek" <lersek@...> wrote:
Hi Igor,[...]
(+Brijesh)
long-ish pondering ahead, with a question at the end.
Finally: can you please remind me why we lock down 128KB (32 pages) at
0x3_0000, and not just half of that? What do we need the range at
[0x4_0000..0x4_FFFF] for?
If I recall correctly, CPU consumes 64K of save/restore area.
The rest 64K are temporary RAM for using in SMI relocation handler,
if it's possible to get away without it then we can drop it and
lock only 64K required for CPU state. It won't help with SEV
conflict though as it's in the first 64K.
On QEMU side, we can drop black-hole approach and allocate
dedicated SMRAM region, which explicitly gets mapped into
RAM address space and after SMI hanlder initialization, gets
unmapped (locked). So that SMRAM would be accessible only
from SMM context. That way RAM at 0x30000 could be used as
normal when SMRAM is unmapped.
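For illustration, a minimal sketch of the "dedicated SMRAM region" idea
above, written against QEMU's public memory API; the region name, the
helper functions, and the lock step are assumptions, not code from the
actual patch series:

  #include "qemu/osdep.h"
  #include "qemu/units.h"
  #include "qapi/error.h"
  #include "exec/memory.h"

  static MemoryRegion smbase_smram;    /* hypothetical */

  static void smbase_smram_map(MemoryRegion *system_memory, Object *owner)
  {
      /* 128K of RAM backing the default SMBASE range at 0x30000. */
      memory_region_init_ram(&smbase_smram, owner, "smbase-smram",
                             128 * KiB, &error_fatal);
      /* Map it over normal RAM (higher priority wins) while firmware
       * initializes the relocation SMI handler. */
      memory_region_add_subregion_overlap(system_memory, 0x30000,
                                          &smbase_smram, 1);
  }

  static void smbase_smram_lock(void)
  {
      /* After SMI handler initialization: stop responding to non-SMM
       * accesses; the region would remain mapped only into the SMM
       * address space, and normal RAM at 0x30000 shows through again. */
      memory_region_set_enabled(&smbase_smram, false);
  }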
Laszlo Ersek
On 09/20/19 10:28, Igor Mammedov wrote:

> If I recall correctly, the CPU consumes 64K for the save/restore area.
> [...]

OK. Let's go with 128KB for now. Shrinking the area is always easier
than growing it.

> On the QEMU side, we can drop the black-hole approach and allocate a
> dedicated SMRAM region, which explicitly gets mapped into the RAM
> address space and, after SMI handler initialization, gets unmapped
> (locked). [...]

I prefer the black-hole approach, introduced in your current patch
series, if it can work. Way less opportunity for confusion.

I've started work on the counterpart OVMF patches; I'll report back.

Thanks
Laszlo
Laszlo Ersek
On 09/20/19 11:28, Laszlo Ersek wrote:

> I've started work on the counterpart OVMF patches; I'll report back.

I've got good results. For this (1/2) QEMU patch:

Tested-by: Laszlo Ersek <lersek@...>
I tested the following scenarios. In every case, I verified the OVMF
log, and also the "info mtree" monitor command's result (i.e. whether
"smbase-blackhole" / "smbase-window" were disabled or enabled). Mostly,
I diffed these text files between the test scenarios (looking for
desired / undesired differences). In the Linux guests, I checked /
compared the dmesg too (wrt. the UEFI memmap).
- unpatched OVMF (regression test), Fedora guest, normal boot and S3
- patched OVMF, but feature disabled with "-global mch.smbase-smram=off"
(another regression test), Fedora guest, normal boot and S3
- patched OVMF, feature enabled, Fedora and various Windows guests
(win7, win8, win10 families, client/server), normal boot and S3
- a subset of the above guests, with S3 disabled (-global
ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
SEV: used 5.2-ish Linux guest, with S3 disabled (no support under SEV
for that now):
- unpatched OVMF (regression test), normal boot
- patched OVMF but feature disabled on the QEMU cmdline (another
regression test), normal boot
- patched OVMF, feature enabled, normal boot.
I plan to post the OVMF patches tomorrow, for discussion.
(It's likely too early to push these QEMU / edk2 patches right now -- we
don't know yet if this path will take us to the destination. For now, it
certainly looks great.)
Thanks
Laszlo
Igor Mammedov <imammedo@...>
On Mon, 23 Sep 2019 20:35:02 +0200
"Laszlo Ersek" <lersek@...> wrote:

> I've got good results. For this (1/2) QEMU patch:
>
> Tested-by: Laszlo Ersek <lersek@...>
> [...]

Laszlo, thanks for trying it out.

It's nice to hear that the approach is somewhat usable.
Hopefully we won't have to invent a 'paused' cpu mode.

Pls CC me on your patches
(not that I qualify for reviewing,
but maybe I could learn a thing or two from it)
"Laszlo Ersek" <lersek@...> wrote:
On 09/20/19 11:28, Laszlo Ersek wrote:Laszlo, thanks for trying it out.On 09/20/19 10:28, Igor Mammedov wrote:I've got good results. For this (1/2) QEMU patch:On Thu, 19 Sep 2019 19:02:07 +0200OK. Let's go with 128KB for now. Shrinking the area is always easier
"Laszlo Ersek" <lersek@...> wrote:
Hi Igor,[...]
(+Brijesh)
long-ish pondering ahead, with a question at the end.
Finally: can you please remind me why we lock down 128KB (32 pages) at
0x3_0000, and not just half of that? What do we need the range at
[0x4_0000..0x4_FFFF] for?
If I recall correctly, CPU consumes 64K of save/restore area.
The rest 64K are temporary RAM for using in SMI relocation handler,
if it's possible to get away without it then we can drop it and
lock only 64K required for CPU state. It won't help with SEV
conflict though as it's in the first 64K.
than growing it.
On QEMU side, we can drop black-hole approach and allocateI prefer the black-hole approach, introduced in your current patch
dedicated SMRAM region, which explicitly gets mapped into
RAM address space and after SMI hanlder initialization, gets
unmapped (locked). So that SMRAM would be accessible only
from SMM context. That way RAM at 0x30000 could be used as
normal when SMRAM is unmapped.
series, if it can work. Way less opportunity for confusion.
I've started work on the counterpart OVMF patches; I'll report back.
Tested-by: Laszlo Ersek <lersek@...>
I tested the following scenarios. In every case, I verified the OVMF
log, and also the "info mtree" monitor command's result (i.e. whether
"smbase-blackhole" / "smbase-window" were disabled or enabled). Mostly,
I diffed these text files between the test scenarios (looking for
desired / undesired differences). In the Linux guests, I checked /
compared the dmesg too (wrt. the UEFI memmap).
- unpatched OVMF (regression test), Fedora guest, normal boot and S3
- patched OVMF, but feature disabled with "-global mch.smbase-smram=off"
(another regression test), Fedora guest, normal boot and S3
- patched OVMF, feature enabled, Fedora and various Windows guests
(win7, win8, win10 families, client/server), normal boot and S3
- a subset of the above guests, with S3 disabled (-global
ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
SEV: used 5.2-ish Linux guest, with S3 disabled (no support under SEV
for that now):
- unpatched OVMF (regression test), normal boot
- patched OVMF but feature disabled on the QEMU cmdline (another
regression test), normal boot
- patched OVMF, feature enabled, normal boot.
I plan to post the OVMF patches tomorrow, for discussion.
(It's likely too early to push these QEMU / edk2 patches right now -- we
don't know yet if this path will take us to the destination. For now, it
certainly looks great.)
It's nice to hear that approach is somewhat usable.
Hopefully we won't have to invent 'paused' cpu mode.
Pls CC me on your patches
(not that I qualify for reviewing,
but may be I could learn a thing or two from it)
Thanks
Laszlo
Paolo Bonzini <pbonzini@...>
On 20/09/19 11:28, Laszlo Ersek wrote:

>> On the QEMU side, we can drop the black-hole approach and allocate a
>> dedicated SMRAM region [...]
>
> I prefer the black-hole approach, introduced in your current patch
> series, if it can work. Way less opportunity for confusion.

Another possibility would be to alias the 0xA0000..0xBFFFF SMRAM to
0x30000..0x4FFFF (only when in SMM).
I'm not super enthusiastic about adding this kind of QEMU-only feature.
The alternative would be to implement VT-d range locking through the
intel-iommu device's PCI configuration space (which includes _adding_
the configuration space, i.e. making the IOMMU a PCI device in the first
place, and the support to the firmware for configuring the VT-d BAR at
0xfed90000). This would be the right way to do it, but it would entail
a lot of work throughout the stack. :( So I guess some variant of this
would be okay, as long as it's peppered with "this is not how real
hardware does it" comments in both QEMU and EDK2.
Thanks,
Paolo
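As an illustration of Paolo's aliasing idea, here is a hedged sketch
using QEMU's memory API; "smram" stands for the existing legacy
0xA0000..0xBFFFF SMRAM region and "smm_root" for the root of the
SMM-only address space (both names are assumptions):

  #include "qemu/osdep.h"
  #include "exec/memory.h"

  static MemoryRegion smbase_alias;

  static void smbase_alias_init(MemoryRegion *smm_root, MemoryRegion *smram)
  {
      /* Re-expose the legacy SMRAM contents at the default SMBASE
       * range; 0x20000 bytes covers 0x30000..0x4FFFF. */
      memory_region_init_alias(&smbase_alias, NULL, "smbase-alias",
                               smram, 0, 0x20000);
      /* Added to the SMM address space only, so non-SMM code at
       * 0x30000 keeps seeing normal RAM. */
      memory_region_add_subregion(smm_root, 0x30000, &smbase_alias);
  }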
Laszlo Ersek
Hi Igor,
On 09/24/19 13:19, Igor Mammedov wrote:

> It's nice to hear that the approach is somewhat usable.
> Hopefully we won't have to invent a 'paused' cpu mode.

Considering the plan at [1], the two patch sets [2] [3] should cover
step (01); at least as proof of concept.
[1] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
http://mid.mail-archive.com/20190830164802.1b17ff26@redhat.com
[2] The current thread:
[Qemu-devel] [PATCH 0/2] q35: mch: allow to lock down 128K RAM at default SMBASE address
http://mid.mail-archive.com/20190917130708.10281-1-imammedo@redhat.com
[3] [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
http://mid.mail-archive.com/20190924113505.27272-1-lersek@redhat.com
(I'll have to figure out what SMI handler to put in place there, but I'd
like to experiment with that once we can cause a new CPU to start
executing code there, in SMM.)
So what's next?
To me it looks like we need to figure out how QEMU can make the OS call
into SMM (in the GPE cpu hotplug handler), passing in parameters and
such. This would be step (03).
Do you agree?
If so, I'll ask Jiewen about such OS->SMM calls separately, because I
seem to remember that there used to be an "SMM communication table" of
sorts, for flexible OS->SMM calls. However, it appears to be deprecated
lately.
Hmmm.... Yes, UEFI 2.8 has "Appendix O - UEFI ACPI Data Table", and it
writes (after defining the table format):
    The first use of this UEFI ACPI table format is the SMM
    Communication ACPI Table. This table describes a special software
    SMI that can be used to initiate inter-mode communication in the OS
    present environment by non-firmware agents with SMM code.

    Note: The use of the SMM Communication ACPI table is deprecated in
          UEFI spec. 2.7. This is due to the lack of a use case for
          inter-mode communication by non-firmware agents with SMM code
          and support for initiating this form of communication in
          common OSes.
The changelog at the front of the UEFI spec also references the
Mantis#1691 spec ticket, "Remove/Deprecate SMM Communication ACPI Table"
(addressed in UEFI 2.6B).
(I think that must have been a security ticket, because, while I
generally have access to Mantis tickets,
<https://mantis.uefi.org/mantis/view.php?id=1631> gives me "Access
Denied" :/ )
Thanks,
Laszlo
Igor Mammedov <imammedo@...>
On Mon, 30 Sep 2019 13:51:46 +0200
"Laszlo Ersek" <lersek@...> wrote:
to describe which CPUs to where hotplugged.
or we could put a parameter into SMI status register (IO port 0xb3)
and the trigger SMI from GPE handler to tell SMI handler that cpu
hotplug happened and then use QEMU's cpu hotplug interface
to enumerate hotplugged CPUs for SMI handler.
The later is probably simpler as we won't need to reinvent the wheel
(just reuse the interface that's already in use by GPE handler).
"Laszlo Ersek" <lersek@...> wrote:
Hi Igor,we can try to resurrect and put over it some kind of protocol
On 09/24/19 13:19, Igor Mammedov wrote:On Mon, 23 Sep 2019 20:35:02 +0200
"Laszlo Ersek" <lersek@...> wrote:Considering the plan at [1], the two patch sets [2] [3] should coverI've got good results. For this (1/2) QEMU patch:Laszlo, thanks for trying it out.
Tested-by: Laszlo Ersek <lersek@...>
I tested the following scenarios. In every case, I verified the OVMF
log, and also the "info mtree" monitor command's result (i.e. whether
"smbase-blackhole" / "smbase-window" were disabled or enabled).
Mostly, I diffed these text files between the test scenarios (looking
for desired / undesired differences). In the Linux guests, I checked
/ compared the dmesg too (wrt. the UEFI memmap).
- unpatched OVMF (regression test), Fedora guest, normal boot and S3
- patched OVMF, but feature disabled with "-global
mch.smbase-smram=off" (another regression test), Fedora guest,
normal boot and S3
- patched OVMF, feature enabled, Fedora and various Windows guests
(win7, win8, win10 families, client/server), normal boot and S3
- a subset of the above guests, with S3 disabled (-global
ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
SEV: used 5.2-ish Linux guest, with S3 disabled (no support under SEV
for that now):
- unpatched OVMF (regression test), normal boot
- patched OVMF but feature disabled on the QEMU cmdline (another
regression test), normal boot
- patched OVMF, feature enabled, normal boot.
I plan to post the OVMF patches tomorrow, for discussion.
(It's likely too early to push these QEMU / edk2 patches right now --
we don't know yet if this path will take us to the destination. For
now, it certainly looks great.)
It's nice to hear that approach is somewhat usable.
Hopefully we won't have to invent 'paused' cpu mode.
Pls CC me on your patches
(not that I qualify for reviewing,
but may be I could learn a thing or two from it)
step (01); at least as proof of concept.
[1] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
http://mid.mail-archive.com/20190830164802.1b17ff26@redhat.com
[2] The current thread:
[Qemu-devel] [PATCH 0/2] q35: mch: allow to lock down 128K RAM at default SMBASE address
http://mid.mail-archive.com/20190917130708.10281-1-imammedo@redhat.com
[3] [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" feature
http://mid.mail-archive.com/20190924113505.27272-1-lersek@redhat.com
(I'll have to figure out what SMI handler to put in place there, but I'd
like to experiment with that once we can cause a new CPU to start
executing code there, in SMM.)
So what's next?
To me it looks like we need to figure out how QEMU can make the OS call
into SMM (in the GPE cpu hotplug handler), passing in parameters and
such. This would be step (03).
Do you agree?
If so, I'll ask Jiewen about such OS->SMM calls separately, because I
seem to remember that there used to be an "SMM communcation table" of
sorts, for flexible OS->SMM calls. However, it appears to be deprecated
lately.
to describe which CPUs to where hotplugged.
or we could put a parameter into SMI status register (IO port 0xb3)
and the trigger SMI from GPE handler to tell SMI handler that cpu
hotplug happened and then use QEMU's cpu hotplug interface
to enumerate hotplugged CPUs for SMI handler.
The later is probably simpler as we won't need to reinvent the wheel
(just reuse the interface that's already in use by GPE handler).
Hmmm.... Yes, UEFI 2.8 has "Appendix O - UEFI ACPI Data Table", and it
writes (after defining the table format):
The first use of this UEFI ACPI table format is the SMM
Communication ACPI Table. This table describes a special software
SMI that can be used to initiate inter-mode communication in the OS
present environment by non-firmware agents with SMM code.
Note: The use of the SMM Communication ACPI table is deprecated in
UEFI spec. 2.7. This is due to the lack of a use case for
inter-mode communication by non-firmware agents with SMM code
and support for initiating this form of communication in
common OSes.
The changelog at the front of the UEFI spec also references the
Mantis#1691 spec ticket, "Remove/Deprecate SMM Communication ACPI Table"
(addressed in UEFI 2.6B).
(I think that must have been a security ticket, because, while I
generally have access to Mantis tickets,
<https://mantis.uefi.org/mantis/view.php?id=1631> gives me "Access
Denied" :/ )
Thanks,
Laszlo
Yao, Jiewen
below
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Igor
> Mammedov
> Sent: Monday, September 30, 2019 8:37 PM
> To: Laszlo Ersek <lersek@...>
> Cc: devel@edk2.groups.io; qemu-devel@...; Chen, Yingwen
> <yingwen.chen@...>; phillip.goerl@...;
> alex.williamson@...; Yao, Jiewen <jiewen.yao@...>; Nakajima,
> Jun <jun.nakajima@...>; Kinney, Michael D
> <michael.d.kinney@...>; pbonzini@...;
> boris.ostrovsky@...; rfc@edk2.groups.io; joao.m.martins@...;
> Brijesh Singh <brijesh.singh@...>
> Subject: Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K
> SMRAM at default SMBASE address
>
> [Igor's message of Mon, 30 Sep 2019 quoted in full; see above]
[Jiewen] The PI specification, Volume 4 - SMM, defines
EFI_MM_COMMUNICATION_PROTOCOL.Communicate() - it can be used to
communicate between the OS and the SMM handler. But it requires a
runtime protocol call. I am not sure how the OS loader passes this
information to the OS kernel.

As such, I think using ACPI SCI/GPE -> software SMI handler is an easier
way to achieve this. I also recommend this way.

For parameter passing, we can use 1) Port B2 (1 byte), 2) Port B3
(1 byte), 3) a chipset scratch register (4 bytes or 8 bytes, based upon
the scratch register size), or 4) an ACPI NVS OPREGION, if the data
structure is complicated.
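For background on the Communicate() interface Jiewen refers to, here is
a minimal caller sketch for a firmware agent (it does not solve the
OS-side problem he points out); the handler GUID and the empty payload
are placeholders:

  #include <PiDxe.h>
  #include <Library/BaseMemoryLib.h>
  #include <Library/UefiBootServicesTableLib.h>
  #include <Protocol/MmCommunication.h>

  /* Placeholder GUID for a hypothetical CPU-hotplug MMI handler. */
  STATIC EFI_GUID mHotplugHandlerGuid = {
    0x11111111, 0x2222, 0x3333, { 0, 1, 2, 3, 4, 5, 6, 7 }
  };

  STATIC
  EFI_STATUS
  NotifyMmOfHotplug (
    VOID
    )
  {
    EFI_MM_COMMUNICATION_PROTOCOL  *MmCommunication;
    UINT8                          Buffer[sizeof (EFI_MM_COMMUNICATE_HEADER)];
    EFI_MM_COMMUNICATE_HEADER      *Header;
    UINTN                          Size;
    EFI_STATUS                     Status;

    Status = gBS->LocateProtocol (&gEfiMmCommunicationProtocolGuid, NULL,
                    (VOID **)&MmCommunication);
    if (EFI_ERROR (Status)) {
      return Status;
    }

    //
    // The header GUID selects the MMI handler registered with the same
    // GUID; Communicate() raises the software SMI.
    //
    Header = (EFI_MM_COMMUNICATE_HEADER *)Buffer;
    CopyGuid (&Header->HeaderGuid, &mHotplugHandlerGuid);
    Header->MessageLength = 0;   /* no payload in this sketch */
    Size                  = sizeof (Buffer);
    return MmCommunication->Communicate (MmCommunication, Buffer, &Size);
  }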
Laszlo Ersek
On 09/30/19 16:22, Yao, Jiewen wrote:

> As such, I think using ACPI SCI/GPE -> software SMI handler is an
> easier way to achieve this. I also recommend this way.

Based on "docs/specs/acpi_cpu_hotplug.txt", this seems to boil down to a
bunch of IO port accesses at base 0x0cd8.

Is that correct?

> For parameter passing, we can use 1) Port B2 (1 byte), 2) Port B3
> (1 byte), 3) a chipset scratch register (4 bytes or 8 bytes, based
> upon the scratch register size), or 4) an ACPI NVS OPREGION, if the
> data structure is complicated.

I'm confused about the details. In two categories:

(1) what values to use,

(2) how those values are passed.
Assume a CPU is hotplugged, QEMU injects an SCI, and the ACPI GPE handler
in the OS -- which also originates from QEMU -- writes a particular byte
to the Data port (0xB3), and then to the Control port (0xB2),
broadcasting an SMI.
(1) What values to use.
Note that values ICH9_APM_ACPI_ENABLE (2) and ICH9_APM_ACPI_DISABLE (3)
are already reserved in QEMU for IO port 0xB2, for different purposes.
So we can't use those in the GPE handler.
Furthermore, OVMF's EFI_SMM_CONTROL2_PROTOCOL.Trigger() implementation
(in "OvmfPkg/SmmControl2Dxe/SmmControl2Dxe.c") writes 0 to both ports,
as long as the caller does not specify them.
  IoWrite8 (ICH9_APM_STS, DataPort    == NULL ? 0 : *DataPort);
  IoWrite8 (ICH9_APM_CNT, CommandPort == NULL ? 0 : *CommandPort);
And, there is only one Trigger() call site in edk2: namely in
"MdeModulePkg/Core/PiSmmCore/PiSmmIpl.c", in the
SmmCommunicationCommunicate() function.
(That function implements EFI_SMM_COMMUNICATION_PROTOCOL.Communicate().)
This call site passes NULL for both DataPort and CommandPort.
Yet further, EFI_SMM_COMMUNICATION_PROTOCOL.Communicate() is used for
example by the UEFI variable driver, for talking between the
unprivileged (runtime DXE) and privileged (SMM) halves.
As a result, all of the software SMIs currently in use in OVMF, related
to actual firmware services, write 0 to both ports.
I guess we can choose new values, as long as we avoid 2 and 3 for the
control port (0xB2), because those are reserved in QEMU -- see
ich9_apm_ctrl_changed() in "hw/isa/lpc_ich9.c".
(2) How the parameters are passed.
(2a) For the new CPU, the SMI remains pending, until it gets an
INIT-SIPI-SIPI from one of the previously plugged CPUs (most likely, the
BSP). At that point, the new CPU will execute the "initial SMI handler
for hotplugged CPUs", at the default SMBASE.
That's a routine we'll have to write in assembly, from zero. In this
routine, we can read back IO ports 0xB2 and 0xB3. And QEMU will be happy
to provide the values last written (see apm_ioport_readb() in
"hw/isa/apm.c"). So we can receive the values in this routine. Alright.
(2b) On all other CPUs, the SMM foundation already accepts the SMI.
The point where it makes sense to start looking is SmmEntryPoint()
[MdeModulePkg/Core/PiSmmCore/PiSmmCore.c].
(2b1) This function first checks whether the SMI is synchronous. The
logic for determining that is based on
"gSmmCorePrivate->CommunicationBuffer" being non-NULL. This field is set
to non-NULL in SmmCommunicationCommunicate() -- see above, in (1).
In other words, the SMI is deemed synchronous if it was initiated with
EFI_SMM_COMMUNICATION_PROTOCOL.Communicate(). In that case, the
HandlerType GUID is extracted from the communication buffer, and passed
to SmiManage(). In turn, SmiManage() locates the SMI handler registered
with the same handler GUID, and delegates the SMI handling to that
specific handler.
This is how the UEFI variable driver is split in two halves:
- in "MdeModulePkg/Universal/Variable/RuntimeDxe/VariableSmm.c", we have
a call to gMmst->MmiHandlerRegister(), with HandlerType =
"gEfiSmmVariableProtocolGuid"
- in
"MdeModulePkg/Universal/Variable/RuntimeDxe/VariableSmmRuntimeDxe.c", we
format communication buffers with the header GUID set to the same
"gEfiSmmVariableProtocolGuid".
Of course, this is what does *not* apply to our use case, as the SMI is
raised by the OS (via an ACPI method), and *not* by a firmware agent
that calls EFI_SMM_COMMUNICATION_PROTOCOL.Communicate().
Therefore, we need to look further in SmmEntryPoint()
[MdeModulePkg/Core/PiSmmCore/PiSmmCore.c].
(2b2) What's left there is only the following:
  //
  // Process Asynchronous SMI sources
  //
  SmiManage (NULL, NULL, NULL, NULL);
So...
- Are we supposed to write a new DXE_SMM_DRIVER for OvmfPkg, and call
gMmst->MmiHandlerRegister() in it, with HandlerType=NULL? (I.e.,
register a "root SMI handler"?)
- And then in the handler, should we read IO ports 0xB2 / 0xB3?
- Also, is that handler where we'd somehow sync up with the hot-plugged
VCPU, and finally call EFI_SMM_CPU_SERVICE_PROTOCOL.SmmAddProcessor()?
- Does it matter what (pre-existent) CPU executes the handler? (IOW,
does it matter what the value of gMmst->CurrentlyExecutingCpu is?)
Thanks,
Laszlo
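To make the "root SMI handler" question above concrete, here is a
hedged sketch of such a driver; the value-4 filtering and all names are
hypothetical, only MmiHandlerRegister() and IoLib are actual edk2
interfaces (the entry point is written in standalone-MM style for
brevity):

  #include <PiMm.h>
  #include <Library/IoLib.h>
  #include <Library/MmServicesTableLib.h>

  #define ICH9_APM_CNT  0xB2

  STATIC
  EFI_STATUS
  EFIAPI
  CpuHotplugRootMmiHandler (
    IN     EFI_HANDLE  DispatchHandle,
    IN     CONST VOID  *Context        OPTIONAL,
    IN OUT VOID        *CommBuffer     OPTIONAL,
    IN OUT UINTN       *CommBufferSize OPTIONAL
    )
  {
    //
    // A root handler sees every SMI. Read back the last value written
    // to the APM control port to decide whether this is the
    // (hypothetical) CPU-hotplug SMI, value 4.
    //
    if (IoRead8 (ICH9_APM_CNT) != 4) {
      return EFI_WARN_INTERRUPT_SOURCE_PENDING;  // not ours
    }
    //
    // ... sync up with the hot-plugged VCPU here, then call
    // EFI_SMM_CPU_SERVICE_PROTOCOL.SmmAddProcessor() ...
    //
    return EFI_SUCCESS;
  }

  EFI_STATUS
  EFIAPI
  CpuHotplugMmEntryPoint (
    IN EFI_HANDLE           ImageHandle,
    IN EFI_MM_SYSTEM_TABLE  *MmSystemTable
    )
  {
    EFI_HANDLE  DispatchHandle;

    //
    // HandlerType == NULL registers a root MMI handler; it is reached
    // from the "Process Asynchronous SMI sources" SmiManage() call
    // quoted above.
    //
    return gMmst->MmiHandlerRegister (CpuHotplugRootMmiHandler, NULL,
                    &DispatchHandle);
  }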
Igor Mammedov <imammedo@...>
On Tue, 1 Oct 2019 20:03:20 +0200
"Laszlo Ersek" <lersek@...> wrote:
hw side (QEMU) uses cpu_hotplug_ops as IO write/read handlers
and firmware side (ACPI) scannig for hotplugged CPUs is implemented
in CPU_SCAN_METHOD.
What we can do on QEMU side is to write agreed upon value to command port (0xB2)
from CPU_SCAN_METHOD after taking ctrl_lock but before starting scan loop.
That way firmware will first bring up (from fw pov) all hotplugged CPUs
and then return control to OS to do the same from OS pov.
EFI_SMM_COMMUNICATION_PROTOCOL. So we can use the next unused value
(lets say 0x4). We probably don't have to use status port or
EFI_SMM_COMMUNICATION_PROTOCOL, since the value of written into 0xB2
is sufficient to distinguish hotplug event.
what do you think about following workflow:
on system boot after initial CPUs relocation, firmware set NOP SMI handler
at default SMBASE.
Then as reaction to GPE triggered SMI (on cpu hotplug), after SMI rendezvous,
a host cpu reads IO port 0xB2 and does hotplugged CPUs enumeration.
a) assuming we allow hotplug only in case of negotiated SMI broadcast
host CPU shoots down all in-flight INIT/SIPI/SIPI for hotpugged CPUs
to avoid race within relocation handler.
After that host CPU in loop
b) it prepares/initializes necessary CPU structures for a hotplugged
CPU if necessary and replaces NOP SMI handler with the relocation
SMI handler that is used during system boot.
c) a host CPU sends NOP INIT/SIPI/SIPI to the hotplugged CPU
d) the woken up hotplugged CPU, jumps to default SMBASE and
executes hotplug relocation handler.
e) after the hotplugged CPU is relocated and if there are more
hotplugged CPUs, a host CPU repeats b-d steps for the next
hotplugged CPU.
f) after all CPUs are relocated, restore NOP SMI handler at default
SMBASE.
"Laszlo Ersek" <lersek@...> wrote:
On 09/30/19 16:22, Yao, Jiewen wrote:yep, you can use it to iterate over hotplugged CPUs.-----Original Message-----
From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Igor
Mammedov
Sent: Monday, September 30, 2019 8:37 PM
To: Laszlo Ersek <lersek@...>Based on "docs/specs/acpi_cpu_hotplug.txt", this seems to boil down to aTo me it looks like we need to figure out how QEMU can make the OS callwe can try to resurrect and put over it some kind of protocol
into SMM (in the GPE cpu hotplug handler), passing in parameters and
such. This would be step (03).
Do you agree?
If so, I'll ask Jiewen about such OS->SMM calls separately, because I
seem to remember that there used to be an "SMM communcation table" of
sorts, for flexible OS->SMM calls. However, it appears to be deprecated
lately.
to describe which CPUs to where hotplugged.
or we could put a parameter into SMI status register (IO port 0xb3)
and the trigger SMI from GPE handler to tell SMI handler that cpu
hotplug happened and then use QEMU's cpu hotplug interface
to enumerate hotplugged CPUs for SMI handler.
The later is probably simpler as we won't need to reinvent the wheel
(just reuse the interface that's already in use by GPE handler).
bunch of IO port accesses at base 0x0cd8.
Is that correct?
hw side (QEMU) uses cpu_hotplug_ops as IO write/read handlers
and firmware side (ACPI) scannig for hotplugged CPUs is implemented
in CPU_SCAN_METHOD.
What we can do on QEMU side is to write agreed upon value to command port (0xB2)
from CPU_SCAN_METHOD after taking ctrl_lock but before starting scan loop.
That way firmware will first bring up (from fw pov) all hotplugged CPUs
and then return control to OS to do the same from OS pov.
SeaBIOS writes 0x00 into command port, but it seems that's taken by[Jiewen] The PI specification Volume 4 - SMM defines EFI_MM_COMMUNICATION_PROTOCOL.Communicate() - It can be used to communicate between OS and SMM handler. But it requires the runtime protocol call. I am not sure how OS loader passes this information to OS kernel.I'm confused about the details. In two categories:
As such, I think using ACPI SCI/GPE -> software SMI handler is an easier way to achieve this. I also recommend this way.
For parameter passing, we can use 1) Port B2 (1 byte), 2) Port B3 (1 byte), 3) chipset scratch register (4 bytes or 8 bytes, based upon scratch register size), 4) ACPI NVS OPREGION, if the data structure is complicated.
(1) what values to use,
(2) how those values are passed.
Assume a CPU is hotpluged, QEMU injects an SCI, and the ACPI GPE handler
in the OS -- which also originates from QEMU -- writes a particular byte
to the Data port (0xB3), and then to the Control port (0xB2),
broadcasting an SMI.
(1) What values to use.
Note that values ICH9_APM_ACPI_ENABLE (2) and ICH9_APM_ACPI_DISABLE (3)
are already reserved in QEMU for IO port 0xB2, for different purposes.
So we can't use those in the GPE handler.
EFI_SMM_COMMUNICATION_PROTOCOL. So we can use the next unused value
(lets say 0x4). We probably don't have to use status port or
EFI_SMM_COMMUNICATION_PROTOCOL, since the value of written into 0xB2
is sufficient to distinguish hotplug event.
Furthermote, OVMF's EFI_SMM_CONTROL2_PROTOCOL.Trigger() implementationPotentially we can can avoid writing custom SMI handler,
(in "OvmfPkg/SmmControl2Dxe/SmmControl2Dxe.c") writes 0 to both ports,
as long as the caller does not specify them.
IoWrite8 (ICH9_APM_STS, DataPort == NULL ? 0 : *DataPort);
IoWrite8 (ICH9_APM_CNT, CommandPort == NULL ? 0 : *CommandPort);
And, there is only one Trigger() call site in edk2: namely in
"MdeModulePkg/Core/PiSmmCore/PiSmmIpl.c", in the
SmmCommunicationCommunicate() function.
(That function implements EFI_SMM_COMMUNICATION_PROTOCOL.Communicate().)
This call site passes NULL for both DataPort and CommandPort.
Yet further, EFI_SMM_COMMUNICATION_PROTOCOL.Communicate() is used for
example by the UEFI variable driver, for talking between the
unprivileged (runtime DXE) and privileged (SMM) half.
As a result, all of the software SMIs currently in use in OVMF, related
to actual firmware services, write 0 to both ports.
I guess we can choose new values, as long as we avoid 2 and 3 for the
control port (0xB2), because those are reserved in QEMU -- see
ich9_apm_ctrl_changed() in "hw/isa/lpc_ich9.c".
(2) How the parameters are passed.
(2a) For the new CPU, the SMI remains pending, until it gets an
INIT-SIPI-SIPI from one of the previously plugged CPUs (most likely, the
BSP). At that point, the new CPU will execute the "initial SMI handler
for hotplugged CPUs", at the default SMBASE.
That's a routine we'll have to write in assembly, from zero. In this
routine, we can read back IO ports 0xB2 and 0xB3. And QEMU will be happy
to provide the values last written (see apm_ioport_readb() in
"hw/isa/apm.c"). So we can receive the values in this routine. Alright.
what do you think about following workflow:
on system boot after initial CPUs relocation, firmware set NOP SMI handler
at default SMBASE.
Then as reaction to GPE triggered SMI (on cpu hotplug), after SMI rendezvous,
a host cpu reads IO port 0xB2 and does hotplugged CPUs enumeration.
a) assuming we allow hotplug only in case of negotiated SMI broadcast
host CPU shoots down all in-flight INIT/SIPI/SIPI for hotpugged CPUs
to avoid race within relocation handler.
After that host CPU in loop
b) it prepares/initializes necessary CPU structures for a hotplugged
CPU if necessary and replaces NOP SMI handler with the relocation
SMI handler that is used during system boot.
c) a host CPU sends NOP INIT/SIPI/SIPI to the hotplugged CPU
d) the woken up hotplugged CPU, jumps to default SMBASE and
executes hotplug relocation handler.
e) after the hotplugged CPU is relocated and if there are more
hotplugged CPUs, a host CPU repeats b-d steps for the next
hotplugged CPU.
f) after all CPUs are relocated, restore NOP SMI handler at default
SMBASE.
(2b) On all other CPUs, the SMM foundation already accepts the SMI.
There point where it makes sense to start looking is SmmEntryPoint()
[MdeModulePkg/Core/PiSmmCore/PiSmmCore.c].
(2b1) This function first checks whether the SMI is synchronous. The
logic for determining that is based on
"gSmmCorePrivate->CommunicationBuffer" being non-NULL. This field is set
to non-NULL in SmmCommunicationCommunicate() -- see above, in (1).
In other words, the SMI is deemed synchronous if it was initiated with
EFI_SMM_COMMUNICATION_PROTOCOL.Communicate(). In that case, the
HandlerType GUID is extracted from the communication buffer, and passed
to SmiManage(). In turn, SmiManage() locates the SMI handler registered
with the same handler GUID, and delegates the SMI handling to that
specific handler.
This is how the UEFI variable driver is split in two halves:
- in "MdeModulePkg/Universal/Variable/RuntimeDxe/VariableSmm.c", we have
a call to gMmst->MmiHandlerRegister(), with HandlerType =
"gEfiSmmVariableProtocolGuid"
- in
"MdeModulePkg/Universal/Variable/RuntimeDxe/VariableSmmRuntimeDxe.c", we
format communication buffers with the header GUID set to the same
"gEfiSmmVariableProtocolGuid".
Of course, this is what does *not* apply to our use case, as the SMI is
raised by the OS (via an ACPI method), and *not* by a firmware agent
that calls EFI_SMM_COMMUNICATION_PROTOCOL.Communicate().
Therefore, we need to look further in SmmEntryPoint()
[MdeModulePkg/Core/PiSmmCore/PiSmmCore.c].
(2b2) What's left there is only the following:
//
// Process Asynchronous SMI sources
//
SmiManage (NULL, NULL, NULL, NULL);
So...
- Are we supposed to write a new DXE_SMM_DRIVER for OvmfPkg, and call
gMmst->MmiHandlerRegister() in it, with HandlerType=NULL? (I.e.,
register a "root SMI handler"?)
- And then in the handler, should we read IO ports 0xB2 / 0xB3?
- Also, is that handler where we'd somehow sync up with the hot-plugged
VCPU, and finally call EFI_SMM_CPU_SERVICE_PROTOCOL.SmmAddProcessor()?
- Does it matter what (pre-existent) CPU executes the handler? (IOW,
does it matter what the value of gMmst->CurrentlyExecutingCpu is?)
Thanks,
Laszlo
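The AML sketch referenced above: a hedged take on what the generator
change could look like, in the style of QEMU's build_cpus_aml()
(hw/acpi/cpu.c); the "CSCN" method name stands in for CPU_SCAN_METHOD,
and the "\_SB.PCI0.SMIC" field over IO port 0xB2 as well as the exact
insertion point are assumptions:

  #include "qemu/osdep.h"
  #include "hw/acpi/aml-build.h"

  /* Fragment, assumed to run where QEMU builds CPU_SCAN_METHOD;
   * "ctrl_lock" names the hotplug mutex. */
  static void build_scan_method_with_smi(Aml *table, Aml *ctrl_lock)
  {
      Aml *method = aml_method("CSCN", 0, AML_SERIALIZED);

      aml_append(method, aml_acquire(ctrl_lock, 0xFFFF));
      /* Tell firmware first: store the agreed-upon value (4) to the
       * APM control port, broadcasting the hotplug SMI, before the OS
       * scan loop runs. */
      aml_append(method, aml_store(aml_int(4),
                                   aml_name("\\_SB.PCI0.SMIC")));
      /* ... existing scan loop over the CPU hotplug register block,
       * notifying the OS of each hotplugged CPU ... */
      aml_append(method, aml_release(ctrl_lock));
      aml_append(table, method);
  }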
Laszlo Ersek
On 10/04/19 13:31, Igor Mammedov wrote:

> SeaBIOS writes 0x00 into the command port, but it seems that's taken by
> EFI_SMM_COMMUNICATION_PROTOCOL. So we can use the next unused value
> (let's say 0x4).

Thanks. Can you please write a QEMU patch for the ACPI generator such
that hotplugging a VCPU writes value 4 to IO port 0xB2?

That will allow me to experiment with OVMF.

(I can experiment with some other parts in edk2 even before that.)

> a) assuming we allow hotplug only in case of negotiated SMI broadcast,
>    the host CPU shoots down all in-flight INIT/SIPI/SIPI for the
>    hotplugged CPUs, to avoid a race within the relocation handler.

How is that "shootdown" possible?

Thanks
Laszlo