Re: MemoryFence()

Paolo Bonzini <pbonzini@...>

Il gio 4 feb 2021, 21:09 Ethin Probst <harlydavidsen@...> ha scritto:

For fences x86 has the LFENCE instruction for load fences and
SFENCE for store fences. If I'm not mistaken, MFENCE, then, is just a
combination of the other two. Would it not then be more appropriate to
use an LFENCE for a load (release?) fence, and SFENCE for a store
(acquire?) fence?

No, LFENCE and SFENCE are only useful with special load and store
instructions and are not needed in edk2. In addition, neither of the two
orders earlier stores against subsequent loads; MFENCE is needed for that.


(I ask for release/acquire because I might have
these mixed up.) I might of course be entirely wrong too.
I'd love to, perhaps, become a contributor of EDK2 sometime, but I
honestly have no idea where I'd even begin. :-)

On 2/4/21, Ard Biesheuvel <ardb@...> wrote:
On Thu, 4 Feb 2021 at 19:09, Laszlo Ersek <lersek@...> wrote:

Hi All,

here's the declaration of MemoryFence(), from <BaseLib.h>:

/**
  Used to serialize load and store operations.

  All loads and stores that proceed calls to this function are guaranteed to be
  globally visible when this function returns.

**/
VOID
EFIAPI
MemoryFence (
  VOID
  );
The IA32 and X64 implementations of MemoryFence() are not memory
barriers, only compiler barriers, however.

This is a problem for two reasons:

- code that believes it is adding a memory barrier in fact adds only a
compiler barrier, possibly creating an unsafe situation,

- code that knowingly calls MemoryFence() for the purposes of a compiler
barrier -- while it will hopefully work safely, relying on tacit
knowledge / assumptions about the x86 architecture -- results in a
source reading experience that mismatches known lockless patterns.

This topic has repeatedly come up in the past; most recently in the
review of Ankur's v6 series for
<>, where some
inter-processor communication needs to happen.

I asked Paolo for help with understanding some of the synchronization
patterns in PiSmmCpuDxeSmm, the reason being that Ankur's series is
going to rely on, and/or imitate, some of those patterns. From this
discussion with Paolo, the following have emerged (I'm going to
liberally steal / paraphrase Paolo's words):

(1) We should introduce finer-grained fence primitives:

                       ARM        AARCH64      i386

CompilerFence()        asm("")    asm("")      asm("")
AcquireMemoryFence()   dmb ish    dmb ishld    asm("")
ReleaseMemoryFence()   dmb ish    dmb ish      asm("")
MemoryFence()          dmb ish    dmb ish      mfence

"where AcquireMemoryFence() is used on the read side (i.e. between
reads) and ReleaseMemoryFence() is used on the write side (i.e. between
Acquire semantics typically order writes before reads, not /between/
reads. Similarly, release semantics imply that all outstanding writes
complete before a barrier with acquire semantics is permitted to

Note that reasoning about this only makes sense in the context of
concurrency, i.e., different CPUs operating on the same memory (or
coherent DMA masters).

For non-coherent DMA, the 'ish' variants are not appropriate, and
given the low likelihood that any of this creates a performance
bottleneck, I would suggest using only full barriers on ARM.

(Note that this step would make MemoryFence() heavier-weight on x86 than
it currently is.)

(2) We should audit and rework existent uses (combinations) of
MemoryFence() into matched acquire / release pairs.
It is very important to be *aware* of the acquire/release semantics,
but I don't think it is necessary to use all the fine grained barrier
types in EDK2.

Less importantly, this would restore the x86 behavior (performance) to
the one seen with the current MemoryFence() implementation.

More importantly, it would fix what the code *means*: "it's very
important to stick to known lockless patterns, and having matching
acquire/release fences makes it much more intuitive to understand what's
going on".


I'm not proposing this as a pre-requisite to merging Ankur's series
(i.e., I don't expect the series to use acquire/release -- it can be
converted later with the rest of the audit).

However, I'd really like us to stop wasting more time on MemoryFence()
doubts such as:

- Hey it's not a memory fence at all, is it safe?
Who decides what 'memory fence' means anyway?

- But on x86 it kinda is, *dependent on* the spinlocks / lock prefixes
we use for synchronization, between the other accesses.

- Is it interchangeable with volatile? Yes? No? which one should we use?
'volatile' affects code generation only, i.e., which loads and stores
are issued. The *order* in which those loads observe stores done by
other masters is what is managed by barrier instructions. So they are
two sides of the same coin: 'volatile' may help to guarantee that
every assignment through a pointer variable results in a store
instruction, but how these are observed by other CPUs still needs to
be managed if it needs to occur in a certain way.

I think volatile is the wrong tool in most cases: the ordering of
accesses is not a property of the type or of the variable, but of the
code sequence. So using asm("") to ensure that an assignment through a
pointer is emitted as a store instruction, followed by a barrier
instruction, is a typical pattern. And given that barrier instructions
require asm sequences in most cases anyway, the asm("") will often be
implied.

In the longer term, multi-processing code like MpInitLib (CpuDxe,
CpuMpPei) and PiSmmCpuDxeSmm should likely abandon all use of
"volatile", and use the explicit barriers instead, for *clarity*.

Step (2) would take quite a bit of thinking (if not much code).


Would there be interest in reviewing such work?


Ethin D. Probst
