UEFI boot and sanhook
2018-09-18, 15:27
Post: #1
UEFI boot and sanhook
Hi,

I have a question about the hooking of iSCSI drives in iPXE when booting on UEFI machines.

On legacy systems, it was possible to transparently access the hooked drive using int13h. iPXE would catch those calls and translate them to iSCSI requests.

Is there an equivalent method for UEFI systems? I got as far as hooking the drive and starting an EFI application (either via Multiboot2, using an experimental branch, or by simply chainloading shellx64.efi), but I couldn't find a way to actually access the drive from within that application. Directly booting via sanboot seems to work fine.

Using DEBUG=efi_block:3, I can see that iPXE added the drive to the iBFT and also registered some protocol with UEFI. But trying to do anything with that protocol just hangs, after a single line of debug output like this one:

Code:
EFIBLK 0x80 read LBA 0x... to ...
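
In case it matters: from the source, the protocol iPXE registers here appears to be EFI_BLOCK_IO_PROTOCOL (that is where the efi_block debug output comes from). Below is a minimal EDK2-style sketch of roughly what my application attempts; the code and names are my own, error handling is stripped, and it assumes a 512-byte block size, matching the 0x200 in the debug line above.

Code:
/* Sketch only: enumerate Block I/O handles and read the first sector.
 * The iPXE-hooked SAN drive should be among these handles. */
#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/BlockIo.h>

EFI_STATUS ReadFirstSectors(VOID)
{
  EFI_HANDLE *Handles;
  UINTN Count;
  UINTN Index;
  UINT8 Buffer[512];    /* assumes BlockSize == 512 */

  gBS->LocateHandleBuffer(ByProtocol, &gEfiBlockIoProtocolGuid,
                          NULL, &Count, &Handles);

  for (Index = 0; Index < Count; Index++) {
    EFI_BLOCK_IO_PROTOCOL *BlkIo;

    gBS->HandleProtocol(Handles[Index], &gEfiBlockIoProtocolGuid,
                        (VOID **)&BlkIo);

    /* This is the call that produces the "EFIBLK 0x80 read LBA ..."
     * debug line -- and then hangs for me. */
    BlkIo->ReadBlocks(BlkIo, BlkIo->Media->MediaId, 0,
                      sizeof(Buffer), Buffer);
  }

  gBS->FreePool(Handles);
  return EFI_SUCCESS;
}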

Has anyone tried this before? Or is it just not designed for this and only iPXE itself can access the hooked device?
2018-09-18, 18:24
Post: #2
RE: UEFI boot and sanhook
What exactly is it that you want to do?
iPXE san* devices in EFI are exposed via the efi_block interface.
Booting from this works fine, as you wrote.
(sanboot is just sanhook plus "tell the firmware to boot from it", so there should be no difference)

When these block devices are connected, it is mostly up to the EFI firmware to do the right thing.
In the case of the shell, the disk should be available as a block device, with partitions and EFI-compatible filesystems, if there are any.

I would also suggest listening on the wire (with tcpdump or wireshark) to see what is going on there.
One possible reason could be that the network layer is interrupted when you start your application.

You might also want to provide more details on what you are doing and using (exact commands), so that someone else can reproduce this.

2018-09-19, 09:17 (This post was last modified: 2018-09-19 09:57 by parthy.)
Post: #3
RE: UEFI boot and sanhook
Oh yes, sorry. What I would like to do is use iPXE to hook an iSCSI disk, and then access it from within an EFI application. On legacy systems, this used to work. The kernel that was loaded by iPXE could just execute BIOS calls to read from the disk, and iPXE would get the data from the network. I am wondering if the same mechanism is feasible on UEFI.

Here is my iPXE config:

Code:
#!ipxe

set username <secret>
set password <secret>

set keep-san 1
set initiator-iqn <custom-prefix>:${net0/mac:hexhyp}

set iscsi-server ${next-server}

set san-filename \EFI\boot\bootx64.efi

set root-path iscsi:${iscsi-server}::3260:1:iqn.xxxx-yy.de.foo.iscsi:<target_name>

sanhook --drive 0x80 ${root-path} || goto fail

chain shellx64.efi

:fail
shell

When booting this in qemu with OVMF/TianoCore, I get the following:

Code:
EFI Shell version 2.70 [1.0]
Current running mode 1.1.2
EFIBLK 0x80 flush
EFIBLK 0x80 flush
EFIBLK 0x80 flush
EFIBLK 0x80 flush
EFIBLK 0x80 flush
EFIBLK 0x80 read LBA 0x00000000 to 0x7e113798+0x00000200
map: Cannot find required map name.

Press ESC in 1 seconds to skip startup.nsh, any other key to continue.
Shell>

When I boot it on real hardware, I see the partitions appear, but running ls on any of them hangs as well.

The hint about the network being interrupted might be the key, though. In the shell, I do indeed see the network interface unconfigured. Is there anything I need to do in order for the application to access the hooked disk?

Quick update: If I use snponly.efi, it works in the shell.

The way I see it, the shutdown_boot() call when loading Multiboot binaries severs the network connection. How does this work on legacy systems? The iSCSI connection survives there somehow, until the OS takes over.
2018-09-19, 12:40
Post: #4
RE: UEFI boot and sanhook
You still have not really explained what it is that you want to do, and what the actual issue is when you do it.
From my perspective this should all just work.

I don't know Multiboot in detail, but if it is an EFI application that tries to do anything with the network interfaces, you should probably expect that to cause issues. (If that is the case, you might want to report it as a bug to them.)

Again, I would strongly suggest checking tcpdump, and maybe monitoring link state; that should tell you whether there are any network issues (and most importantly, whether anything on the client itself is causing them).

Also, I would not be surprised if the EFI shell refuses to deal with a raw disk and can only handle partitions with EFI-compatible filesystems. (But again, that depends on the firmware support rather than on the EFI shell itself.)

2018-09-19, 13:06
Post: #5
RE: UEFI boot and sanhook
What I am trying to do is:

- Hook iSCSI disk in iPXE (the disk contains an EFI-installed Windows 10)
- Load a small multiboot-compliant kernel (in my case, Multiboot2, using this branch)
- In this kernel, I want to chainload the boot loader installed on the iSCSI disk without understanding iSCSI

On legacy systems, this was possible with this config:

Code:
# iscsi setup variables
sanhook --drive 0x80 ${root-path}
kernel tftp://${next-server}/my-boot-loader

The bootloader could then access the iSCSI disk via int13h, because iPXE emulates the BIOS calls.
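
To spell out what "via int13h" means: the loaded code fills in a disk address packet and issues the INT 13h AH=42h extended read, which iPXE's hook then services over iSCSI. Illustrative sketch only (not my actual loader code):

Code:
/* Layout of the INT 13h AH=42h "extended read" disk address packet. */
#include <stdint.h>

#pragma pack(push, 1)
struct disk_address_packet {
    uint8_t  size;       /* packet size: 0x10 */
    uint8_t  reserved;   /* must be 0 */
    uint16_t count;      /* number of sectors to transfer */
    uint16_t buffer_off; /* destination buffer offset */
    uint16_t buffer_seg; /* destination buffer segment */
    uint64_t lba;        /* starting LBA on the drive */
};
#pragma pack(pop)

/* Real-mode call sequence: DS:SI -> packet, DL = 0x80 (the hooked
 * drive), AH = 0x42, then INT 13h. iPXE intercepts the interrupt
 * and fetches the sectors over iSCSI. */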

Now I am trying to apply the same logic to UEFI. My understanding is that this should work because iPXE registers the iSCSI disk as a block device. As I wrote in my update, I got it working in the shell by using the snponly variant.

In the multiboot case, the first attempt to read anything from the disk just hangs. It looks like the network connection is cut when the kernel is loaded.

It also works now with the multiboot kernel if I remove the shutdown_boot() call, in this line.

There must be some difference from legacy systems when it comes to this shutdown call.

I hope this makes it a bit clearer.
2018-09-19, 13:16
Post: #6
RE: UEFI boot and sanhook
Thanks,
Yes, there are several differences between legacy and EFI; they are not really comparable.
You will probably see these differences in the Multiboot sources as well, mainly in how handles to devices and so on are passed around.

Could you give us the PCI ID of the NIC? It seems there might be a bug in the native iPXE driver, since you are having issues with ipxe.efi but not with snponly.efi.

2018-09-20, 09:24
Post: #7
RE: UEFI boot and sanhook
I think I've tracked the issue down to the call chain shutdown_boot() -> efi_remove() -> efi_driver_disconnect_all(). The entire subsystem is removed even though the boot services are still active; hence the network connection is gone and the iSCSI disk is no longer available. So the remaining question is whether this is intentional, i.e., whether an iSCSI hook should remain functional after a kernel is loaded (I see the shutdown call in the ELF, Multiboot, and Linux loading paths).
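
In UEFI driver-model terms, that disconnect step boils down to calling DisconnectController() on every handle, roughly like this (my paraphrase, not the actual iPXE code):

Code:
/* Paraphrase of what the disconnect-all step amounts to. Once the
 * NIC driver is detached, the network stack behind efi_block is gone,
 * so Block I/O reads on the hooked drive can never complete. */
EFI_STATUS DisconnectAll(EFI_HANDLE DriverImage)
{
  EFI_HANDLE *Handles;
  UINTN Count;
  UINTN Index;

  gBS->LocateHandleBuffer(AllHandles, NULL, NULL, &Count, &Handles);

  for (Index = 0; Index < Count; Index++)
    gBS->DisconnectController(Handles[Index], DriverImage, NULL);

  gBS->FreePool(Handles);
  return EFI_SUCCESS;
}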

I've tested the shell example on qemu (with the e1000 and rtl8139 drivers) and on an Intel NUC7 with an I219-LM (8086:156f). It seems the problem is not a specific driver, but the fact that a native driver is used instead of the UEFI network services.
2018-09-20, 20:32
Post: #8
RE: UEFI boot and sanhook
(2018-09-20 09:24)parthy Wrote:  I think I've tracked the issue down to the call chain shutdown_boot() -> efi_remove() -> efi_driver_disconnect_all(). [...]

iPXE provides a full network stack that can be shut down; some firmware might not.
If something shuts down the NIC, it is expected to bring it up again as well.
For example, Linux starts the NIC, reads the iBFT (via tools in the initramfs), and then configures the NIC correctly and connects to the iSCSI target.
The NIC is _always_ shut down when a real OS kernel takes over; the kernel then re-initializes it.
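
As an illustration (not something iPXE does itself): once Linux is up, the iBFT that iPXE wrote is exposed by the iscsi_ibft driver under /sys/firmware/ibft, and the initramfs tooling reads it to reconnect. A trivial peek at the stored target name:

Code:
/* Illustration only: read the target IQN that iPXE stored in the
 * iBFT, as exposed by the Linux iscsi_ibft sysfs driver. */
#include <stdio.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/sys/firmware/ibft/target0/target-name", "r");

    if (f != NULL) {
        if (fgets(line, sizeof(line), f) != NULL)
            printf("iBFT target IQN: %s", line);
        fclose(f);
    }
    return 0;
}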
