Re: [edk2-devel] OVMF/QEMU shell based unit tests and writing to a virtual disk

Laszlo Ersek

On 10/22/20 20:55, Sean wrote:
Laszlo and others familiar with QEMU,

I am trying to write automation that boots QEMU/OVMF and then runs EFI
applications; those EFI applications then use the UEFI shell APIs to
write their state back to the disk.  I am seeing very inconsistent
results: sometimes it works fine, while other times the disk contents
are invalid.  If I run multiple tests, it seems like the first two work
while the 3rd starts failing, but overall it seems random.

Failing means:
    Disk contents corrupted but present.
    Disk contents wrong size (short).
    Files that show written in UEFI shell never show up on host.

I am trying to determine if this is a known limitation of QEMU or a
bug I need to track down in the unit test code.

My setup:

This is on a Windows 10 x64 host.  I am running the current 5.1 version of
QEMU.

My script creates a folder in the Windows NTFS file system, copies the
EFI executables and startup.nsh files to it, and then starts QEMU with
the following additional parameter:

 -drive file=fat:rw:{VirtualDrive},format=raw,media=disk

This is the problem. Don't use the fat / vvfat block driver for
anything you need to rely on.

I don't even have to look at the particulars, as "fat" ("vvfat") is
known to be a hack. In particular, write operations should not be
relied upon (in either the guest->host or host->guest direction). Don't
expect this QEMU feature to work as a "semihosting" (esp. "live
semihosting") solution.

What's important to understand about "vvfat" is that it attempts to
*re-compose* a filesystem view from block-level operations.
*De-composing* file operations into block operations is an everyday
occurrence (that's what filesystem drivers do everywhere). But the write
direction of vvfat attempts to do the *inverse* -- it seeks to recognize
block operations and to synthesize file operations from them. If you get
lucky, it sometimes even works.

Instead, please use a regular virtual disk image in the QEMU
configuration. This disk image should not be accessed concurrently by
QEMU (= the guest) and other host-side utilities. In other words, first
format / populate the disk image on the host side, then launch QEMU.
Then in the guest UEFI shell, terminate the guest with the "reset -s"
command. Finally, once QEMU has exited, use host-side utilities to fetch
the results from the virtual disk image.
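
On a Linux host, that workflow might look like the sketch below. The
image name, file names, and OVMF firmware paths are all placeholders,
and the test application is hypothetical; adapt them to your setup:

```shell
# Create a 64 MiB raw disk image and put a FAT filesystem on it.
qemu-img create -f raw testdisk.img 64M
mformat -i testdisk.img ::

# Populate the image on the host side, *before* QEMU starts.
mcopy -i testdisk.img startup.nsh ::/
mcopy -i testdisk.img MyUnitTestApp.efi ::/

# Launch QEMU with the image attached as a normal raw disk; the guest's
# startup.nsh runs the tests and ends with "reset -s" to power off.
qemu-system-x86_64 \
  -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \
  -drive file=testdisk.img,format=raw,media=disk \
  -net none -nographic

# Only after QEMU has exited, read the results back out.
mcopy -i testdisk.img ::/results.xml .
```

The point of the ordering is the strict serialization mentioned above:
mtools touches the image only while QEMU is not running, so there is
never concurrent block-level access from two sides.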

On the host side (on a Linux installation anyway), I tend to use the
"mtools" package (such as "mcopy" etc), for manipulating the contents of
disk image files. Or, more frequently, the "guestfish" program (which is
extremely capable).
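
For instance, a few mtools commands for inspecting and extracting
guest-written files from such an image (assuming a raw FAT image named
testdisk.img and hypothetical file names):

```shell
# List the root directory of the FAT image without mounting it.
mdir -i testdisk.img ::/

# Dump a guest-written log file to stdout.
mtype -i testdisk.img ::/TestLog.txt

# Copy the guest-written results out to a host directory.
mcopy -i testdisk.img ::/results.xml ./results/
```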

I don't know if equivalents exist on Windows.

Now, another option (on Linux anyway) is to loop-mount a "raw" virtual
disk image. This is not recommended, as it directly exposes the host
kernel's filesystem driver(s) to metadata produced by the guest. It
could trigger security issues in the host kernel.

(This is exactly what guestfish avoids, by running a separate Linux
guest -- called the "libguestfs appliance" -- on top of the virtual disk
image. The guestfish command interpreter on the host side exchanges
commands and data with the appliance over virtio-serial. If the metadata
on the disk image is malicious, it will break / exploit the *guest*
kernel in the appliance. The host-side component, the guestfish command
interpreter, only has to sanity-check the virtio-serial exchanges.)
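
As an illustration, a read-only guestfish session for the same kind of
extraction might look like this (image and file names are hypothetical;
/dev/sda assumes a whole-disk FAT filesystem with no partition table):

```shell
# guestfish parses the filesystem inside the libguestfs appliance,
# so the host kernel never touches the (potentially untrusted) metadata.
guestfish --ro -a testdisk.img <<'EOF'
run
mount /dev/sda /
ls /
copy-out /TestLog.txt /tmp/
EOF
```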

... Earlier, I looked into "virtio-fs" support for OVMF:

however, it's very complex, and the wire format (the opcodes) is
extremely low-level and Linux-specific -- to the point where the
opcodes directly mirror Linux VFS system calls, and (due to lack of
documentation) I don't even understand what a big bunch of them *do*.

Earlier I had given some thought to a mapping between the UEFI file
protocol interfaces and those opcodes, but when I did that, it looked
like a bad fit. Virtio-fs seems to aim at serializing *Linux guest*
filesystem operations as tightly as possible for host-side processing,
and that seemed like a big obstacle for a *UEFI guest* mapping.

Another problem is that UEFI filesystem drivers don't really expect
data and/or meta-data to change "under their feet" (-> due to
asynchronous host-side modifications). For a while I was hopeful to
expose such changes via the EFI_MEDIA_CHANGED return value. But -- alas,
I forget the details -- it seemed to turn out that the virtio-fs
interfaces wouldn't really let the EFI_SIMPLE_FILE_SYSTEM_PROTOCOL
driver, to be provided by OVMF, even *detect* the situations when it
would have to report such a change.

So, the virtio-fs driver for OVMF has been postponed indefinitely, and
the best I can recommend at the moment is to use a regular virtual disk
image file. Remember that QEMU (= guest), and the other (host-side)
utilities for manipulating the disk image, should be strictly serialized
(they should mutually exclude each other).


VirtualDrive is the Windows file path of the aforementioned folder.

If you are interested, you should be able to reproduce the results by
pulling my branch, and/or you can review the above.

You can see the operations here:


My Branch:

Or if you are interested, you can reproduce it by following the steps
defined here:

and more details here:

After building QEMU with the right parameters for your environment,
you can run: <your stuart_build cmd> --flashonly MARK_STARTUP_NSH=TRUE

For example, in my environment it looks like:
stuart_build -c Platforms\QemuQ35Pkg\

Anyway, if I recall correctly, last year when we talked briefly about
automation there was some concern that this would happen. Any
information and/or ideas would be greatly appreciated.

