Build UEFI application
Peter Wiehe <info@...>
Hi, all!
I have built EDK II and then OVMF. Now I want to develop a UEFI application with EDK II. Is there any documentation about that?

Greetings
Peter
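(For what it's worth, the usual starting points are the HelloWorld sample under MdeModulePkg/Application in the edk2 tree and the "EDK II Module Writer's Guide"; a minimal sketch of such an application follows -- the module and function names here are illustrative, not taken from the question:)

  /**
    Minimal UEFI application entry point (sketch). Built as a
    UEFI_APPLICATION module whose .inf names this function as the
    ENTRY_POINT and lists UefiApplicationEntryPoint and UefiLib.
  **/
  #include <Uefi.h>
  #include <Library/UefiLib.h>

  EFI_STATUS
  EFIAPI
  UefiMain (
    IN EFI_HANDLE        ImageHandle,
    IN EFI_SYSTEM_TABLE  *SystemTable
    )
  {
    Print (L"Hello from a UEFI application\n");  // goes to ConOut
    return EFI_SUCCESS;
  }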
Qemu command
Peter Wiehe <info@...>
Hello,
Can you tell me what QEMU command is used to run OVMF?

Thanks in advance
Peter
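(For reference, a commonly used invocation looks like the following; the firmware image paths and disk.img are placeholders for your own build outputs, and OVMF_VARS.fd should be a writable copy:)

  $ qemu-system-x86_64 \
      -machine q35 \
      -m 2048 \
      -drive if=pflash,format=raw,unit=0,readonly=on,file=OVMF_CODE.fd \
      -drive if=pflash,format=raw,unit=1,file=OVMF_VARS.fd \
      -drive file=disk.img,format=raw \
      -serial stdio \
      -net none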
Re: "edk2-discuss" list settings change
Laszlo Ersek
Hello Mayur,
On 06/24/20 17:21, mgudmeti@nvidia.com wrote:
> On Wed, May 27, 2020 at 02:16 PM, Laszlo Ersek wrote:
> Thanks for the heads-up.
> Hi laszlo,

For some reason, I still do not receive moderation notifications from groups.io, for the edk2-discuss mailing list. I receive such notifications for edk2-devel, and I check the pending messages there every day. But I'm left in the dark about edk2-discuss :(

And now that I'm checking it, I'm seeing your message (I'll approve it in a moment) -- what's more, I'm seeing 5 more stuck messages, the oldest one dating back to Jun 17. That's terrible :( I'm really sorry.

I don't know how we can fix this problem with edk2-discuss. I have now sent an email to <support@groups.io>, with subject

  not receiving "Message Approval Needed" notifications for edk2-discuss

Hopefully this will improve in the near future.

Thanks
Laszlo
Re: ReadSaveStateRegister/WriteSaveStateRegister functions
Laszlo Ersek
On 06/24/20 10:53, mzktsn@gmail.com wrote:
> Hello,

Not sure *why* you'd like to access specific registers from the SMM save state map in PiSmmCore. But these functions are exposed through a standard interface too; see EFI_SMM_CPU_PROTOCOL (or rather "EFI_MM_CPU_PROTOCOL") in the PI spec, volume 4, section "4.3 CPU Save State Access Services".

In edk2, PiSmmCpuDxeSmm provides the protocol; the member functions are delegated to the platform's SmmCpuFeaturesLib instance, and there are general fallback implementations too. See for example SmmReadSaveState() in "UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c":

  Status = SmmCpuFeaturesReadSaveStateRegister (CpuIndex, Register, Width, Buffer);
  if (Status == EFI_UNSUPPORTED) {
    Status = ReadSaveStateRegister (CpuIndex, Register, Width, Buffer);
  }
  return Status;

where SmmCpuFeaturesReadSaveStateRegister() comes from the platform's library instance, and ReadSaveStateRegister() is the generic/fallback code, from "UefiCpuPkg/PiSmmCpuDxeSmm/SmramSaveState.c".

Generally speaking, in a module that's different from PiSmmCpuDxeSmm, you could introduce a depex or a protocol notify on EFI_SMM_CPU_PROTOCOL, and use its member functions. Whether that applies *specifically* to PiSmmCore, I can't say. (Again, I don't know *why* you need that functionality there.) Probably best to address the PiSmmCore owners directly; run the following command in edk2:

  $ python BaseTools/Scripts/GetMaintainer.py \
      -l MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
  MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
  Jian J Wang <jian.j.wang@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Eric Dong <eric.dong@intel.com>
  Ray Ni <ray.ni@intel.com>
  devel@edk2.groups.io

I've added those people to the CC list now.

General hint: when posting a query (to any technical mailing list, really), always include the "why", not just the "what" / "how".

Thanks
Laszlo
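(To make the suggestion concrete: a minimal sketch, not from the original mail, of consuming EFI_SMM_CPU_PROTOCOL from a generic SMM driver. The wrapper function and the choice of RAX are illustrative; only the protocol interface itself comes from the PI spec, and the .inf would need gEfiSmmCpuProtocolGuid listed under [Protocols].)

  #include <PiSmm.h>
  #include <Library/SmmServicesTableLib.h>
  #include <Protocol/SmmCpu.h>

  STATIC
  EFI_STATUS
  ReadSavedRax (
    IN  UINTN   CpuIndex,
    OUT UINT64  *Rax
    )
  {
    EFI_SMM_CPU_PROTOCOL  *SmmCpu;
    EFI_STATUS            Status;

    //
    // Locate the protocol produced by PiSmmCpuDxeSmm (a depex or a protocol
    // notify on gEfiSmmCpuProtocolGuid guarantees its availability).
    //
    Status = gSmst->SmmLocateProtocol (
                      &gEfiSmmCpuProtocolGuid,
                      NULL,
                      (VOID **)&SmmCpu
                      );
    if (EFI_ERROR (Status)) {
      return Status;
    }
    //
    // Read RAX from the save state map of the CPU identified by CpuIndex.
    //
    return SmmCpu->ReadSaveState (
                     SmmCpu,
                     sizeof (UINT64),
                     EFI_SMM_SAVE_STATE_REGISTER_RAX,
                     CpuIndex,
                     Rax
                     );
  }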
Re: Question regarding different commands when triggering a SW-SMI
mzktsn@...
Thanks a lot.
Re: Does the Ovmf debug lib work at runtime?
Hantke, Florian <florian.hantke@...>
Great, thank you, I will try it out later.
Best Florian
Re: Does the Ovmf debug lib work at runtime?
Laszlo Ersek
On 06/14/20 21:23, Hantke, Florian wrote:
> Hello everyone,
> I put some debug functions in MdePkg/Library/UefiRuntimeLib/RuntimeLib.c

Yes. For example, the following patch:

diff --git a/MdeModulePkg/Universal/Variable/RuntimeDxe/Variable.c b/MdeModulePkg/Universal/Variable/RuntimeDxe/Variable.c
index 1e71fc642c76..ab77e023d63f 100644
--- a/MdeModulePkg/Universal/Variable/RuntimeDxe/Variable.c
+++ b/MdeModulePkg/Universal/Variable/RuntimeDxe/Variable.c
@@ -2353,6 +2353,8 @@ VariableServiceGetVariable (
   VARIABLE_POINTER_TRACK  Variable;
   UINTN                   VarDataSize;
 
+  DEBUG ((DEBUG_INFO, "%a:%d\n", __FUNCTION__, __LINE__));
+
   if (VariableName == NULL || VendorGuid == NULL || DataSize == NULL) {
     return EFI_INVALID_PARAMETER;
   }

generates a bunch of output lines (on the QEMU debug console) every time I run "efibootmgr" in the guest.

Thanks
Laszlo
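(For reference: in a DEBUG build of OVMF, DebugLib output goes to I/O port 0x402, which is typically captured on the QEMU side with flags along these lines; the log file name is arbitrary:)

  $ qemu-system-x86_64 \
      ... \
      -debugcon file:ovmf.log -global isa-debugcon.iobase=0x402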
Does the Ovmf debug lib work at runtime?
florian.hantke@...
Hello everyone,
I am trying some runtime things and I was wondering whether the OVMF debug lib (with QEMU) works at runtime. I put some debug functions in MdePkg/Library/UefiRuntimeLib/RuntimeLib.c and in the exit_boot_services/virtual_address_change events of my basic runtime driver.

Looking at the debug output, I can see that the last message is my debug from virtual_address_change. Running Ubuntu 20.04 in the VM, I would expect some output from the RuntimeLib while the system runs, for instance from GetVariable. At least in this blog post [1], Ubuntu called the RuntimeLib.

Could it be that I am mistaken about something, or that my approach is wrong?

Thank you for your help and all the best
Florian

[1] http://blog.frizk.net/2017/01/attacking-uefi-and-linux.html
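(As an illustration of the setup described above -- not Florian's actual code; the driver entry point, function names and TPL choice are made up -- a runtime DXE driver would typically hook the two event groups like this:)

  #include <Uefi.h>
  #include <Library/UefiBootServicesTableLib.h>
  #include <Library/DebugLib.h>
  #include <Guid/EventGroup.h>

  STATIC EFI_EVENT  mExitBootServicesEvent;
  STATIC EFI_EVENT  mVirtualAddressChangeEvent;

  STATIC
  VOID
  EFIAPI
  OnEvent (
    IN EFI_EVENT  Event,
    IN VOID       *Context
    )
  {
    //
    // Before ExitBootServices() this lands on the QEMU debug console;
    // whether DEBUG output still works afterwards depends on the DebugLib
    // instance the driver was built with.
    //
    DEBUG ((DEBUG_INFO, "%a: event signaled\n", __FUNCTION__));
  }

  EFI_STATUS
  EFIAPI
  RuntimeDriverEntry (
    IN EFI_HANDLE        ImageHandle,
    IN EFI_SYSTEM_TABLE  *SystemTable
    )
  {
    EFI_STATUS  Status;

    Status = gBS->CreateEventEx (
                    EVT_NOTIFY_SIGNAL,
                    TPL_NOTIFY,
                    OnEvent,
                    NULL,
                    &gEfiEventExitBootServicesGuid,
                    &mExitBootServicesEvent
                    );
    if (EFI_ERROR (Status)) {
      return Status;
    }
    return gBS->CreateEventEx (
                  EVT_NOTIFY_SIGNAL,
                  TPL_NOTIFY,
                  OnEvent,
                  NULL,
                  &gEfiEventVirtualAddressChangeGuid,
                  &mVirtualAddressChangeEvent
                  );
  }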
Re: Question regarding different commands when triggering a SW-SMI
Laszlo Ersek
On 06/06/20 02:16, mzktsn@gmail.com wrote:
> Hello,

For a relation to exist, it needs to be between multiple things (at least two). Above, you mention one thing only (a "number we write to the 0xB2 IO-port"). What is the *other* thing to which you want to relate the value written to 0xB2?

> In all the cases the call to SmiManage() with NULL

Yes, root SMI handlers are processed every time.

> so should anyone expect different behavior regarding the value written on the IOport(0xb2) when triggering

The effects of writing any particular command value to IO port 0xB2 are entirely platform specific. On some platforms, it's not even IO port 0xB2 that triggers an SMI. Two example commits:

* 17efae27acaf ("OvmfPkg/CpuHotplugSmm: introduce skeleton for CPU Hotplug SMM driver", 2020-03-04)

  This shows that value 4 for signaling a VCPU hotplug is a platform *choice*.

* 6e3c834ae47d ("SecurityPkg Tcg: Use SW SMI IO port PCD in Tpm.asl", 2020-04-21)

  This shows that some platforms use an IO port different from 0xB2 for triggering an SMI.

Thanks
Laszlo
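(To make the "platform choice" point concrete: a platform SMM driver typically binds a handler to one particular command value through the PI SW dispatch protocol. A minimal sketch, not from the original mail; the command value 0x42 and the function names are made up for the example.)

  #include <PiSmm.h>
  #include <Library/SmmServicesTableLib.h>
  #include <Protocol/SmmSwDispatch2.h>

  STATIC
  EFI_STATUS
  EFIAPI
  MySwSmiHandler (
    IN     EFI_HANDLE  DispatchHandle,
    IN     CONST VOID  *Context,
    IN OUT VOID        *CommBuffer,
    IN OUT UINTN       *CommBufferSize
    )
  {
    //
    // Runs only when the registered command value is written to the
    // platform's SMI command port (0xB2 on many, but not all, platforms).
    //
    return EFI_SUCCESS;
  }

  EFI_STATUS
  RegisterMySwSmi (
    VOID
    )
  {
    EFI_SMM_SW_DISPATCH2_PROTOCOL  *SwDispatch;
    EFI_SMM_SW_REGISTER_CONTEXT    SwContext;
    EFI_HANDLE                     DispatchHandle;
    EFI_STATUS                     Status;

    Status = gSmst->SmmLocateProtocol (
                      &gEfiSmmSwDispatch2ProtocolGuid,
                      NULL,
                      (VOID **)&SwDispatch
                      );
    if (EFI_ERROR (Status)) {
      return Status;
    }
    SwContext.SwSmiInputValue = 0x42;   // platform-chosen command value
    return SwDispatch->Register (
                         SwDispatch,
                         MySwSmiHandler,
                         &SwContext,
                         &DispatchHandle
                         );
  }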
Question regarding different commands when triggering a SW-SMI
mzktsn@...
Hello,
I would like to ask, when building firmware with EDK2 for real HW (but my question also applies to QEMU emulation): is there any relation regarding what number we write to the 0xB2 IO port? In all cases the call to SmiManage() with NULL arguments is made, so should anyone expect different behavior regarding the value written to the IO port (0xB2) when triggering the System Management Interrupt?

Thanks
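(For reference, the trigger being discussed is just an I/O port write; a minimal sketch using MdePkg's IoLib follows. The command value passed in is only meaningful if some platform handler registered it, as the reply above explains; the wrapper name is made up.)

  #include <Library/IoLib.h>

  VOID
  TriggerSwSmi (
    IN UINT8  Command
    )
  {
    //
    // Write the command byte to the (typical) SMI command port; the value's
    // meaning is entirely up to the platform's registered handlers.
    //
    IoWrite8 (0xB2, Command);
  }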
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
annie li
On 6/3/2020 9:33 AM, Laszlo Ersek wrote:
> On 06/03/20 00:19, annie li wrote:
> On 6/2/2020 7:44 AM, Laszlo Ersek wrote:
> Can you check with "lsmod" if other modules use vhost_scsi?
> On 05/29/20 16:47, annie li wrote:
> I am using targetcli to create SCSI lun that the VM boots from. The
> I ran more tests, and found booting failure happens randomly when I
> Can you build the host kernel with "CONFIG_VHOST_SCSI=m", and repeat

lsmod shows vhost_scsi is used by 4 programs; I assume these 4 are related to targetcli.

  lsmod | grep vhost_scsi
  vhost_scsi 36864 4
  vhost 53248 1 vhost_scsi
  target_core_mod 380928 14 target_core_file,target_core_iblock,iscsi_target_mod,vhost_scsi,target_core_pscsi,target_core_user

I was thinking maybe these target_* modules are using vhost_scsi, so I removed the following modules with "modprobe -r":

  target_core_file,target_core_iblock,vhost_scsi,target_core_pscsi,target_core_user

Then lsmod shows the "used by" count down to 3:

  vhost_scsi 36864 3
  vhost 53248 1 vhost_scsi
  target_core_mod 380928 6 iscsi_target_mod,vhost_scsi

However, the others cannot be removed. "rmmod --force" doesn't help either. "dmesg | grep vhost_scsi" doesn't show much useful information either.

No, I cannot rmmod these modules right after I create the target in targetcli, no matter whether I start a VM or not. Deleting the target in targetcli doesn't help either.

Before I create the target in targetcli, I can add and remove the vhost_scsi module; the "used by" count of vhost_scsi is 0. See the following steps I did right after rebooting my host:

  # modprobe vhost_scsi
  vhost_scsi 36864 0
  vhost 53248 1 vhost_scsi
  target_core_mod 380928 1 vhost_scsi
  # modprobe -r vhost_scsi

Right after I set up LUNs in targetcli, the "used by" count is always 4, no matter whether I stop the VM with Ctrl-C or shut it down gracefully, and no matter whether the VM is running or not. So targetcli is the suspect for these 4 "used by" references.

Thanks
Annie
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
Laszlo Ersek
On 06/03/20 00:19, annie li wrote:
> On 6/2/2020 7:44 AM, Laszlo Ersek wrote:
> Can you check with "lsmod" if other modules use vhost_scsi?
> On 05/29/20 16:47, annie li wrote:
> I am using targetcli to create SCSI lun that the VM boots from. The
> I ran more tests, and found booting failure happens randomly when I
> Can you build the host kernel with "CONFIG_VHOST_SCSI=m", and repeat

If you shut down QEMU gracefully, can you rmmod vhost_scsi in that case?

I wonder if the failure to remove the vhost_scsi module is actually another sign of the same (as yet unknown) leaked reference.

Thanks
Laszlo

> Nods, it is possible.
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
annie li
On 6/2/2020 7:44 AM, Laszlo Ersek wrote:
> On 05/29/20 16:47, annie li wrote:
> I ran more tests, and found booting failure happens randomly when I
> Can you build the host kernel with "CONFIG_VHOST_SCSI=m", and repeat

I am using targetcli to create the SCSI LUN that the VM boots from. The vhost_scsi module gets loaded right after I create the target in /vhost. However, I cannot remove the vhost_scsi module since then. It always complains "Module vhost_scsi is in use" (same even after I delete the target in targetcli). Maybe it is related to targetcli, but I didn't try other tools yet.

Nods, it is possible.

Thanks
Annie
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
Laszlo Ersek
On 05/29/20 16:47, annie li wrote:
> I ran more tests, and found booting failure happens randomly when I

Can you build the host kernel with "CONFIG_VHOST_SCSI=m", and repeat your Ctrl-C test such that you remove and re-insert "vhost_scsi.ko" after every Ctrl-C?

My guess is that, when you kill QEMU with Ctrl-C, "vhost_scsi.ko" might not clean up something, and that could break the next guest boot. If you re-insert "vhost_scsi.ko" for each QEMU launch, and that ends up masking the symptom, then there's likely some resource leak in "vhost_scsi.ko".

Just a guess.

Thanks
Laszlo
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
annie li
On 5/28/2020 5:51 PM, Laszlo Ersek wrote:
> On 05/28/20 00:04, annie li wrote:

Much clearer now, thank you!

> On 5/27/2020 2:00 PM, Laszlo Ersek wrote:
> Yes.
> I am a little confused here,
> (4) Annie: can you try launching QEMU with the following flag:

It works, but I run into another failure. I put details in another email.

I prefer fixing it on the kernel side; details are in another email too. :-)

Thanks
Annie
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
annie li
On 5/28/2020 6:08 PM, Laszlo Ersek wrote:
> On 05/28/20 18:39, annie li wrote:

Yup.

> On 5/27/2020 2:00 PM, Laszlo Ersek wrote:
> Indeed -- as I just pointed out under your other email, I previously
> This limits the I/O size to 1M.
> (4) Annie: can you try launching QEMU with the following flag:
> I'm not sure why that happens.

Then I found out it is related to operations on this VM, see the following.

...

> Is it possible that vhost_scsi_handle_vq() -- in the host kernel --

I ran more tests, and found the booting failure happens randomly when I boot the VM right after it was previously terminated by Ctrl+C directly from the QEMU monitor, no matter whether max_sectors is 2048, 16383 or 16384. The failure chance is about 7 out of 20. So my previous statement about 0x4000 and 0x3FFF isn't accurate; it is just that booting happened to succeed with 0x3FFF (16383), but not with 0x4000 (16384).

Also, when this failure happens, dmesg doesn't print out the following errors:

  vhost_scsi_calc_sgls: requested sgl_count: 2368 exceeds pre-allocated max_sgls: 2048

This new failure is a totally different issue from the one caused by max-sized I/O. In my OVMF debug log, the biggest I/O size is only about 1M. This means Windows 2019 didn't send out big-sized I/O yet. The interesting part is that I didn't see this new failure happen if I boot a VM which was previously shut down gracefully from inside the Windows guest.

> With vhost, the virtio-scsi device model is split between QEMU and the
> You mean the vhost device on the guest side here, right? In Windows
> If that works, then I *guess* the kernel-side vhost device model

This involves both changes in the kernel and QEMU. I guess maybe it is more straightforward that the kernel controls the transfer size based on memory consumed. I prefer fixing it by using larger constants in the kernel; this also avoids splitting big-sized I/O by using a smaller "max_sectors" default in QEMU. Following is the code change I did in the kernel code vhost/scsi.c:

  -#define VHOST_SCSI_PREALLOC_SGLS 2048
  -#define VHOST_SCSI_PREALLOC_UPAGES 2048
  +#define VHOST_SCSI_PREALLOC_SGLS 2560
  +#define VHOST_SCSI_PREALLOC_UPAGES 2560

Thanks
Annie
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
Laszlo Ersek
On 05/28/20 18:39, annie li wrote:
> On 5/27/2020 2:00 PM, Laszlo Ersek wrote:
> (4) Annie: can you try launching QEMU with the following flag:
> This limits the I/O size to 1M.

Indeed -- as I just pointed out under your other email, I previously missed that the host kernel-side unit was not "sector" but "4K page". So yes, the value 2048 above is too strict.

> The EFI_BAD_BUFFER_SIZE logic reduces
> OK! 0x4000 doesn't survive here.

That's really interesting. I'm not sure why that happens. ... Is it possible that vhost_scsi_handle_vq() -- in the host kernel -- puts stuff in the scatter-gather list *other* than the transfer buffers? Some headers and such? Maybe those headers need an extra page.

> If that works, then I *guess* the kernel-side vhost device model
> You mean the vhost device on the guest side here, right? In Windows

With vhost, the virtio-scsi device model is split between QEMU and the host kernel. While QEMU manages the "max_sectors" property (= accepts it from the command line, and exposes it to the guest driver), the host kernel (i.e., the other half of the device model) ignores the same property. Consequently, although the guest driver obeys "max_sectors" for limiting the transfer size, the host kernel's constants may prove *stricter* than that, because the host kernel ignores "max_sectors".

So one idea is to make the host kernel honor the "max_sectors" limit that QEMU manages. The other two ideas are: use larger constants in the kernel, or use a smaller "max_sectors" default in QEMU. The goal behind all three alternatives is the same: the limit that QEMU exposes to the guest driver should satisfy the host kernel.

Thanks
Laszlo
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
Laszlo Ersek
On 05/28/20 00:04, annie li wrote:
> On 5/27/2020 2:00 PM, Laszlo Ersek wrote:
> (4) Annie: can you try launching QEMU with the following flag:
> I am a little confused here,

Yes. Yes. The transfer size that ultimately reaches the device is the minimum of three quantities:

(a) the transfer size requested by the caller (i.e., the UEFI application),

(b) the limit set by the READ(10) / READ(16) decision (i.e., MaxBlock),

(c) the transfer size limit enforced / reported by EFI_EXT_SCSI_PASS_THRU_PROTOCOL.PassThru(), with EFI_BAD_BUFFER_SIZE

Whichever is the smallest of the three determines the transfer size that the device ultimately sees in the request. And then *that* transfer size must satisfy PREALLOC_SGLS and/or PREALLOC_PROT_SGLS (2048 4K pages: 0x80_0000 bytes).

In your original use case, (a) is 0x93_F400 bytes, (b) is 0x1FF_FE00 bytes, and (c) is 0x1FF_FE00 too. Therefore the minimum is 0x93_F400, so that is what reaches the device. And because 0x93_F400 exceeds 0x80_0000, the request fails.

When you set "-global vhost-scsi-pci.max_sectors=2048", that lowers (c) to 0x10_0000. (a) and (b) remain unchanged. Therefore the new minimum (which finally reaches the device) is 0x10_0000. This does not exceed 0x80_0000, so the request succeeds.

... In my prior email, I think I missed a detail: while the unit for QEMU's "vhost-scsi-pci.max_sectors" property is a "sector" (512 bytes), the unit for PREALLOC_SGLS and PREALLOC_PROT_SGLS in the kernel device model seems to be a *page*, rather than a sector. (I don't think I've ever checked iov_iter_npages() before.) Therefore the QEMU flag that I recommended previously was too strict. Can you try this instead, please?:

  -global vhost-scsi-pci.max_sectors=16384

This should set (c) to 0x80_0000 bytes. And so the minimum of {(a), (b), (c)} will be 0x80_0000 bytes -- exactly what PREALLOC_SGLS and PREALLOC_PROT_SGLS require.

> Although Win2019 boots from vhost-scsi with above flag, I assume we still

There are multiple ways (alternatives) to fix the issue:

- use larger constants for PREALLOC_SGLS and PREALLOC_PROT_SGLS in the kernel;

- or replace the PREALLOC_SGLS and PREALLOC_PROT_SGLS constants in the kernel altogether, with such logic that dynamically calculates them from the "max_sectors" virtio-scsi config header field;

- or change the QEMU default for "vhost-scsi-pci.max_sectors", from 0xFFFF to 16384.

Either should work.

Thanks,
Laszlo
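(Spelled out, the unit conversion behind the 16384 figure, using only the numbers quoted above:)

  16384 sectors * 512 bytes/sector = 8,388,608 bytes = 0x80_0000
  0x80_0000 bytes / 4096 bytes/page = 2048 pages = VHOST_SCSI_PREALLOC_SGLS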
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
annie li
On 5/27/2020 2:00 PM, Laszlo Ersek wrote:
> On 05/27/20 17:58, annie li wrote:

Thanks for addressing it.

> Hi Laszlo,
> Apologies for that -- while I'm one of the moderators on edk2-devel (I

My other email sent out yesterday didn't reach edk2-discuss. I joined this group and hope the email can show up this time. See my following comments.

Thanks for the detailed explanation, it is very helpful.

> Anyway, re-sending it here, hope you can get it...)
> Thanks -- in case you CC me personally in addition to messaging the list
> I recently added more log in
> Yes, that's possible -- maybe the caller starts with an even larger

This limits the I/O size to 1M. The EFI_BAD_BUFFER_SIZE logic reduces the I/O size to 512K for uni-directional requests. To send the biggest I/O (8M) allowed by the current vhost-scsi setting, I adjusted the value to 0x3FFF; the EFI_BAD_BUFFER_SIZE logic then reduces the I/O size to 4M for uni-directional requests.

  -global vhost-scsi-pci.max_sectors=0x3FFF

0x4000 doesn't survive here.

You mean the vhost device on the guest side here, right? In the Windows virtio-scsi driver, it does read out max_sectors. Even though the driver doesn't make use of it later, it can be used to adjust the transfer length of I/O. I guess you are not mentioning the vhost-scsi on the host?

Both VHOST_SCSI_PREALLOC_SGLS (2048) and TCM_VHOST_PREALLOC_PROT_SGLS (512) are hard coded in vhost/scsi.c:

  ...
  sgl_count = vhost_scsi_calc_sgls(prot_iter, prot_bytes,
                                   TCM_VHOST_PREALLOC_PROT_SGLS);
  ...
  sgl_count = vhost_scsi_calc_sgls(data_iter, data_bytes,
                                   VHOST_SCSI_PREALLOC_SGLS);

In vhost_scsi_calc_sgls, an error is printed out if sgl_count is more than TCM_VHOST_PREALLOC_PROT_SGLS or VHOST_SCSI_PREALLOC_SGLS:

  sgl_count = iov_iter_npages(iter, 0xffff);
  if (sgl_count > max_sgls) {
          pr_err("%s: requested sgl_count: %d exceeds pre-allocated"
                 " max_sgls: %d\n", __func__, sgl_count, max_sgls);
          return -EINVAL;
  }

Looks like vhost-scsi doesn't interrogate the virtio-scsi config space for "max_sectors".

Although Win2019 boots from vhost-scsi with the above flag, I assume we still need to enlarge the value of VHOST_SCSI_PREALLOC_SGLS in vhost-scsi for the final fix, instead of setting max_sectors through QEMU options? Adding a specific QEMU command option for booting Win2019 from vhost-scsi seems not appropriate. Suggestions?

Thanks
Annie

> Cool!
> Thank you for confirming!
Re: Windows 2019 VM fails to boot from vhost-scsi with UEFI mode
annie li <annie.li@...>
Hi Laszlo,
On 5/27/2020 2:00 PM, Laszlo Ersek wrote:
> On 05/27/20 17:58, annie li wrote:
> Hi Laszlo, (I sent out my reply to your original response twice, but my reply somehow doesn't show up in https://edk2.groups.io/g/discuss. It is confusing.
>
> Apologies for that -- while I'm one of the moderators on edk2-devel (I get moderation notifications with the other mods, and we distribute the mod workload the best we can), I'm not one of the edk2-discuss mods.
>
> Hmm, wait a sec -- it seems like I am? And I just don't get mod notifications for edk2-discuss? Let me poke around in the settings :/
>
> edk2-devel:
> - Spam Control
>   - Messages are not moderated
>   - New Members moderated
>   - Unmoderate after 1 approved message
> - Message Policies
>   - Allow Nonmembers to post (messages from nonmembers will be moderated instead of rejected)
>
> edk2-discuss:
> - Spam Control
>   - Messages are not moderated
>   - New Members ARE NOT moderated
> - Message Policies
>   - Allow Nonmembers to post (messages from nonmembers will be moderated instead of rejected)
>
> So I think the bug in our configuration is that nonmembers are moderated on edk2-discuss just the same (because of the identical "Allow Nonmembers to post" setting), *however*, mods don't get notified because of the "New Members ARE NOT moderated" setting.
>
> So let me tweak this -- I'm setting the same
> - Spam Control
>   - New Members moderated
>   - Unmoderate after 1 approved message
> for edk2-discuss as we have on edk2-devel, *plus* I'm removing the following from the edk2-discuss list description: "Basically unmoderated". (I mean I totally agree that it *should* be unmoderated, but fully open posting doesn't seem possible on groups.io at all!)

Thank you for looking at it.

Nods.

> Anyway, re-sending it here, hope you can get it...)
> Thanks -- in case you CC me personally in addition to messaging the list (which is the common "best practice" for mailing lists), then I'll surely get it. Following up below:
>
> On 5/27/2020 7:43 AM, Laszlo Ersek wrote:
> (2) Regarding "max_sectors", the spec says:
>
>   max_sectors is a hint to the driver about the maximum transfer size to use.
>
> OvmfPkg/VirtioScsiDxe honors and exposes this field to higher level protocols, as follows:
>
> (2.1) In VirtioScsiInit(), the field is read and saved. It is also checked to be at least 2 (due to the division quoted in the next bullet).
>
> (2.2) PopulateRequest() contains the following logic:
>
>   //
>   // Catch oversized requests eagerly. If this condition evaluates to false,
>   // then the combined size of a bidirectional request will not exceed the
>   // virtio-scsi device's transfer limit either.
>   //
>   if (ALIGN_VALUE (Packet->OutTransferLength, 512) / 512
>         > Dev->MaxSectors / 2 ||
>       ALIGN_VALUE (Packet->InTransferLength, 512) / 512
>         > Dev->MaxSectors / 2) {
>     Packet->InTransferLength  = (Dev->MaxSectors / 2) * 512;
>     Packet->OutTransferLength = (Dev->MaxSectors / 2) * 512;
>     Packet->HostAdapterStatus =
>       EFI_EXT_SCSI_STATUS_HOST_ADAPTER_DATA_OVERRUN_UNDERRUN;
>     Packet->TargetStatus      = EFI_EXT_SCSI_STATUS_TARGET_GOOD;
>     Packet->SenseDataLength   = 0;
>     return EFI_BAD_BUFFER_SIZE;
>   }
>
> That is, VirtioScsiDxe only lets such requests reach the device that do not exceed *half* of "max_sectors" *per direction*. Meaning that, for uni-directional requests, the check is stricter than "max_sectors" requires, and for bi-directional requests, it is exactly as safe as "max_sectors" requires. (VirtioScsiDxe will indeed refuse to drive a device that has just 1 in "max_sectors", per (2.1), but that's not a *practical* limitation, I would say.)
>
> (2.3) When the above EFI_BAD_BUFFER_SIZE branch is taken, the maximum transfer sizes that the device supports are exposed to the caller (per direction), in accordance with the UEFI spec.
>
> (2.4) The ScsiDiskRead10(), ScsiDiskWrite10(), ScsiDiskRead16(), ScsiDiskWrite16() functions in "MdeModulePkg/Bus/Scsi/ScsiDiskDxe/ScsiDisk.c" set the "NeedRetry" output param to TRUE upon seeing EFI_BAD_BUFFER_SIZE.
>
> I recently added more log in MdeModulePkg/Bus/Scsi/ScsiDiskDxe/ScsiDisk.c that has maximum setting related to MAX SCSI I/O size. For example, in the Read(10) command, the MaxBlock is 0xFFFF, and the BlockSize is 0x200. So the max ByteCount is 0xFFFF*0x200 = 0x1FFFE00 (32M). After setting MaxBlock as 0x4000 to limit the max ByteCount to 8M, Windows 2019 can boot up from vhost-scsi in my local environment. Looks like this 32M setting in ScsiDiskDxe is consistent with the one you mentioned in (3.2) in QEMU?
>
> Yes, that's possible -- maybe the caller starts with an even larger transfer size, and then the EFI_BAD_BUFFER_SIZE logic is already at work, but it only reduces the transfer size to 32MB (per "max_sectors" from QEMU). And then all the protocols expect that to succeed, and when it fails, the failure is propagated to the outermost caller.

I am a little confused here,

> (4) Annie: can you try launching QEMU with the following flag:
>
>   -global vhost-scsi-pci.max_sectors=2048
>
> If that works, then I *guess* the kernel-side vhost device model could interrogate the virtio-scsi config space for "max_sectors", and use the value seen there in place of PREALLOC_SGLS / PREALLOC_PROT_SGLS.

Both VHOST_SCSI_PREALLOC_SGLS (2048) and TCM_VHOST_PREALLOC_PROT_SGLS (512) are hard coded in vhost/scsi.c:

  ...
  sgl_count = vhost_scsi_calc_sgls(prot_iter, prot_bytes,
                                   TCM_VHOST_PREALLOC_PROT_SGLS);
  ...
  sgl_count = vhost_scsi_calc_sgls(data_iter, data_bytes,
                                   VHOST_SCSI_PREALLOC_SGLS);

In vhost_scsi_calc_sgls, an error is printed out if sgl_count is more than TCM_VHOST_PREALLOC_PROT_SGLS or VHOST_SCSI_PREALLOC_SGLS:

  sgl_count = iov_iter_npages(iter, 0xffff);
  if (sgl_count > max_sgls) {
          pr_err("%s: requested sgl_count: %d exceeds pre-allocated"
                 " max_sgls: %d\n", __func__, sgl_count, max_sgls);
          return -EINVAL;
  }

Looks like vhost-scsi doesn't interrogate the virtio-scsi config space for "max_sectors". The guest virtio-scsi driver may read this configuration out, though. So the following flag reduces the transfer size to 8M on the QEMU side.

Thanks

> Cool! I can boot Win2019 VM up from vhost-scsi with the flag above.
> Thank you for confirming!
> Laszlo