Re: MemoryFence()

Ankur Arora

On 2021-02-08 9:40 a.m., Laszlo Ersek wrote:
On 02/04/21 21:04, Paolo Bonzini wrote:
Il gio 4 feb 2021, 20:46 Ard Biesheuvel <ardb@...> ha scritto:

(1) We should introduce finer-grained fence primitives:

                       ARM        AARCH64      i386

CompilerFence()        asm("")    asm("")      asm("")
AcquireMemoryFence()   dmb ish    dmb ishld    asm("")
ReleaseMemoryFence()   dmb ish    dmb ish      asm("")
MemoryFence()          dmb ish    dmb ish      mfence

"where AcquireMemoryFence() is used on the read side (i.e. between
reads) and ReleaseMemoryFence() is used on the write side (i.e.
between writes)".
Acquire semantics typically order writes before reads, not /between/
reads. Similarly, release semantics imply that all outstanding writes
complete before a barrier with acquire semantics is permitted to
proceed.

Acquire fences are barriers between earlier loads and subsequent loads
and stores; those earlier loads then synchronize with release stores
in other threads.

Release fences are barriers between earlier loads and stores and
subsequent stores, and those subsequent stores synchronize with
acquire loads in other threads.

In both cases, however, fences only make sense between memory
operations. So something like "after reads" and "before writes" would
have been more precise in some sense, but in practice the usual idiom
is "between" reads/writes as Laszlo wrote.

Note that reasoning about this only makes sense in the context of
concurrency, i.e., different CPUs operating on the same memory (or
coherent DMA masters)

For non-coherent DMA, the 'ish' variants are not appropriate, and
given the low likelihood that any of this creates a performance
bottleneck, I would suggest using only full barriers on ARM.
Sure, that's a matter of how to implement the primitives. If you think
that non-coherent DMA is important, a full dmb can be used too.

As far as the compiler is concerned, an asm in the macros *should*
block enough optimizations, even without making the accesses volatile.
CompilerFence (or the edk2 equivalent of cpu_relax, whose name escapes
me right now) would be necessary in the body of busy-waiting loops.
However somebody should check the MSVC docs for asm, too.
Doesn't look too specific:

It is very important to be *aware* of the acquire/release semantics,
but I don't think it is necessary to use all the fine grained barrier
types in EDK2.
I agree as long as the primitives are self-documenting. A single
MemoryFence() does not make it clear in which direction the data is
flowing (whether from other threads to this one, or vice versa).
I've found this article very relevant:
(1) It provides an example for a store-load (Dekker's algorithm) where
any combination of read-acquire + write-release is insufficient. Thus it
would need an MFENCE (hence we need all four APIs in edk2).
(... If we jump back to the part I marked with [*], then we can see
Paolo's description of read-acquire and store-release covers load-load,
load-store, load-store (again), and store-store. What's not covered is
store-load, which Paolo said elsewhere in this thread is exactly what
x86 does reorder. So the MemoryFence() API's use would be "exceptional",
in future code, but it should exist for supporting patterns like
Dekker's algorithm.)
(2) I think the "Recommendations" section:
highlights the very problem we have. It recommends

When doing lockless programming, be sure to use volatile flag
variables and memory barrier instructions as needed.
*after* explaining why "volatile" is generally insufficient:
and *after* describing the compiler barriers.
So this recommendation should recommend compiler barriers rather than
volatile. :/
(3) The article recommends _ReadWriteBarrier, _ReadBarrier and
_WriteBarrier, for compiler fences. I think _ReadWriteBarrier should
suffice for edk2's purposes.
However, the following reference deprecates those intrinsics:
while offering *only* C++ language replacements.
Could we implement CompilerFence() for all edk2 architectures as
*non-inline* assembly? The function would consist of a return
instruction only. For x86, we could use a NASM source; for ARM, separate
MS and GNU assembler sources would be needed.
I totally want to get rid of "volatile" at least in future code, but
that's only possible if one of the following options can be satisfied:
- we find a supported replacement method for _ReadWriteBarrier when
using the MSFT toolchain family (such as the *non-inline*, empty
assembly function),
- or we accept that CompilerFence() is not possible to implement
portably, and we only offer the heavier-weight acquire / release /
full fences, which *include* a compiler fence too.
In the latter case, the body of a busy-waiting loop would have to use
the heavier read-acquire API.
So the structure of the solution we're looking for is:
- exactly *one* of:
- volatile
- compiler fence
- acquire fence used as a heavy substitute for compiler fence,
Since the first two are of unequal power -- volatile applies to
specific accesses, while the compiler fence is a general barrier
across which the compiler does not optimize memory accesses -- I think
we should have both, rather than exactly one of volatile or the
compiler fence.

However, instead of volatile, how about read_once(), write_once()
primitives? These would ensure, respectively, that the compiler emits
a single load or a single store, and does not combine or otherwise
mutilate the load/store in any way.

Internally, these are implemented via volatile (at least the Linux
kernel implementation that I looked at is) but they would explicitly
lay out the guarantees (and non-guarantees) that the C/C++ standard
provides for volatile:
"However, the guarantees of the standard are not sufficient for using
volatile for multi-threading. The C++ Standard does not stop the compiler
from reordering non-volatile reads and writes relative to volatile reads
and writes, and it says nothing about preventing CPU reordering."


I see at least two cases where these are useful on their own, without
needing any other fences (compiler or CPU) for ordering:

* Device side communication:

271 // The system memory pointed by Address will be updated by the
272 // SATA Host Controller, "volatile" is introduced to prevent
273 // compiler from optimizing the access to the memory address
274 // to only read once.
275 //
276 Value = *(volatile UINT32 *) (UINTN) Address;
277 Value &= MaskValue;

So use read_once() instead of the volatile cast:
276 Value = read_once (Address);

* For handling cases where we don't need a fence, but only
need to ensure that the compiler does not mutilate a store in any
way. As an example, the code below clears the Handler and the
block below it calls the Handler if the pointer is non-NULL.


99 + //
100 + // We are done until the next hot-unplug; clear the handler.
101 + //
102 + mCpuHotEjectData->Handler = NULL;

140 + if (mCpuHotEjectData == NULL ||
141 + mCpuHotEjectData->Handler == NULL) {
142 + return;
143 + }
144 +
145 + mCpuHotEjectData->Handler (CpuIndex);

If the code on line 102 were instead,

102 + write_once (&mCpuHotEjectData->Handler, NULL);

and that on lines 141-145 were:
140 + if (mCpuHotEjectData == NULL ||
141 +     (handler = read_once (mCpuHotEjectData->Handler)) == NULL) {
142 + return;
143 + }
144 +
145 + handler (CpuIndex);

The use of read_once()/write_once() also documents the implicit
assumptions being made in the code at the point of use.


- and *all* of
- acquire fence (load-load, load-store)
- release fence (load-store, store-store)
- full fence (load-load, load-store, store-store, store-load)
The implementation of each fence would have to be *at least* as safe as
required; it could be stronger.
I feel that we have to reach an agreement on the "exactly one of" part;
subsequent to that, maybe I can try an RFC patch for <BaseLib.h> (just
the interface contracts, at first).
