Chainloading linux in AWS EC2 HVM - Printable Version
iPXE discussion forum (https://forum.ipxe.org) / iPXE user forums / General
Thread: Chainloading linux in AWS EC2 HVM (/showthread.php?tid=7913)
Chainloading linux in AWS EC2 HVM - plachance - 2016-01-14 21:37

Hello,

Has anyone succeeded in chainloading an operating system on AWS EC2 HVM instances? According to this post it is not possible because it requires low-level access to the hypervisor, but there is not much information on why. Could any developer comment on the limitations of AWS EC2 PV instances? What about HVM instances?

I ran several tests but can't get the network working *after chainloading the kernel*. To summarize the tests:

1. compiled the latest version of iPXE (ipxe.lkrn) with an embedded script to boot from a network server
2. created an HVM instance
3. configured GRUB to boot from ipxe.lkrn
4. created an iPXE script on the network server to control chainloading
5. rebooted the EC2 instance

What is working:
- iPXE loads and can find the network device (netfront)
- iPXE can connect to my network server and download various things (kernel, initrd, other iPXE scripts, etc.)
- imgtrust is compiled in and enforcing (chainloading fails as expected if the downloaded content's signature doesn't match)
- the kernel loads and finds the initrd
- the kernel can find the disk device
- the kernel *seems* to find a network device, but it doesn't work

I tried CentOS 7.0, 7.1, 7.2 (and RHEL 7.2), CoreOS and RancherOS, but none of them can bring up the network device.

If using DHCP:
- CentOS/RHEL distributions show "RTNETLINK answers: File exists" and fall back to the emergency shell
- CoreOS shows nothing

If using a static IP (with kernel arguments ip=${net0/ip}:...):
- CentOS/RHEL distributions show "network unreachable"

Any idea/workaround? Thanks for assistance!

RE: Chainloading linux in AWS EC2 HVM - mcb30 - 2016-01-15 08:24

(2016-01-14 21:37)plachance Wrote: Anyone succeeded to chainload any operating system on AWS EC2 HVM instances?

That post is incorrect. iPXE does (as you discovered) have native support for the Xen netfront NIC as used in EC2 HVM. There is no support for PV.
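For reference, the GRUB step in the test sequence above (step 3) is typically a GRUB 2 menu entry that loads ipxe.lkrn with linux16; the entry name and path below are assumptions, not the poster's actual configuration:

```
# Hypothetical GRUB 2 entry (e.g. in /etc/grub.d/40_custom); path to ipxe.lkrn assumed
menuentry "iPXE chainload" {
    linux16 /boot/ipxe.lkrn
}
```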
The netfront NIC is the same, but PV is effectively a different firmware platform: as far as iPXE is concerned, the differences between HVM and PV are similar to the differences between BIOS and UEFI. We just don't support PV as a platform.

Quote: To summarize the tests:

That's similar to what I've used. The only real difference is that there was no GRUB in my setup (just bin/ipxe.usb written directly to a 1GB EBS root disk). Also, I usually build bin/ipxe.usb with an embedded script that does:

Code: #!ipxe

This allows me to control the boot process by configuring the EC2 instance "user-data", without needing to rebuild with a new embedded script. I also tend to enable CONSOLE_SYSLOG and CONSOLE_INT13, both of which are useful for getting hold of output in EC2. (The EC2 "system log", which theoretically shows the serial port output, has been unreliable for me.)

Quote: What is working:

I did encounter some issues with the booted OS correctly initialising the netfront NIC after it had been used by iPXE. This was with older versions of CentOS, where the netfront driver did not gracefully handle finding the NIC in an unexpected state. You probably need to start hacking the initrd and/or kernel to print out extra debug information about the state of the netfront NIC, to find out what's going wrong in your setup. Unfortunately it's almost impossible to interact with an EC2 VM until the network is up, so you're limited to adding debug prints. (Alternatively, you could try to reproduce the problem in a local Xen instance, where you would have console access.)

Good luck!

Michael

RE: Chainloading linux in AWS EC2 HVM - plachance - 2016-01-18 19:01

(2016-01-15 08:24)mcb30 Wrote: That's similar to what I've used. The only real difference is that there was no GRUB in my setup (just bin/ipxe.usb written directly to a 1GB EBS root disk).
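The embedded script mcb30 describes is not reproduced in the thread; a minimal sketch of one that hands control to the instance user-data (fetched from EC2's well-known metadata address, 169.254.169.254) might look like this — the exact script he used is an assumption:

```
#!ipxe
dhcp
chain http://169.254.169.254/latest/user-data || shell
```

With this approach the user-data itself is an iPXE script, so the boot flow can be changed per instance without rebuilding the image.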
I tried this exact setup without success, with the following in the user-data:

Code: #!ipxe

Still no network after loading the kernel, on EC2 (ipxe.usb) and on a SoYouStart server (ipxe.lkrn), when I compile the latest iPXE releases. But it works on SoYouStart with their "iPXE netboot script" feature, which is based on iPXE/1.0.0+ (ddd1).

Can it be related to my iPXE compile options? Did you try chainloading on EC2 with newer/newest releases of iPXE?

I wanted to try compiling iPXE/1.0.0+ (ddd1) myself and use it on EC2, but I can't find the commit ID.

[ipxe/src] $ git log | grep ddd1
commit 230f16538f4b0ad9ddd1edd7da24c52c39da0c8d
commit be0cd1cddd18a29264091fabc69bf2ec7a1f2cd2

I tried an old release (commit 53d2d9e) but had the same problem. Any clue? Do you remember which iPXE release and script worked?

Thanks again for your help!

Patrice

RE: Chainloading linux in AWS EC2 HVM - mcb30 - 2016-01-18 23:00

(2016-01-18 19:01)plachance Wrote: Still no network after loading kernel on EC2

Are you sure that the initrd image includes the Xen netfront driver?

Quote: Can it be related to my iPXE compile options?

Try building as per the instructions in http://git.ipxe.org/ipxe.git/commitdiff/cc25260 (pushed earlier today).

Quote: I wanted to try compiling iPXE/1.0.0+ (ddd1) myself and use it on EC2 but can't find the commit ID.

Ask SoYouStart to provide the source (as required by the GPL).

Michael

RE: Chainloading linux in AWS EC2 HVM - plachance - 2016-01-18 23:17

Thanks for the quick response!

(2016-01-18 23:00)mcb30 Wrote: Are you sure that the initrd image includes the Xen netfront driver?

Yes, the system log shows that xen_netfront and xen_blkfront are loaded.

Code: [ 5.808330] xen_netfront: Initialising Xen virtual ethernet driver

Also, the same CoreOS example provided in the previous post works on SoYouStart using their release of iPXE, but not using the latest one I compile myself.

Quote: Try building as per the instructions in http://git.ipxe.org/ipxe.git/commitdiff/cc25260 (pushed earlier today).
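As an aside on the commit search above: the (ddd1) in an iPXE version string is a commit-hash *prefix*, so `git log | grep ddd1` also matches hashes that merely contain ddd1 somewhere in the middle, which is why the grep returns unrelated commits. `git rev-parse` resolves a prefix directly; a minimal sketch in a throwaway repository (the repo and commit are fabricated for illustration):

```shell
# Demonstrate that an abbreviated commit ID resolves as a hash *prefix*.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m demo
full=$(git rev-parse HEAD)                 # full 40-character hash
prefix=$(printf '%s' "$full" | cut -c1-7)  # first 7 characters
resolved=$(git rev-parse "$prefix")        # prefix resolves to the full hash
printf '%s\n' "$resolved"
```

If `git rev-parse ddd1` fails in the mainline ipxe.git tree, no mainline commit starts with that prefix, which would be consistent with SoYouStart shipping a patched build.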
Edit1: Exactly the same problem. iPXE loads as expected and finds the kernel and initrd from the boot server.

Code: iPXE LOG

Server output log available here.

Kernel command line:

Code: console=hvc0 console=tty0 console=ttyS0 ip=dhcp inst.repo=http://mirror.centos.org/centos-7/7/os/x86_64 inst.vnc inst.vncpassword=XXXXXXXX inst.headless inst.lang=en_US inst.keymap=us

I tried force-loading the netfront driver by adding:

Code: rd.driver.pre=xen_netfront,xennet,netfront

Quote: Ask SoYouStart to provide the source (as required by the GPL).

Edit1: I did, but I haven't received it yet. Apparently they only patch the code to avoid replying to ping requests.

RE: Chainloading linux in AWS EC2 HVM - mcb30 - 2016-01-27 14:31

(2016-01-18 23:17)plachance Wrote: Server output log available here.

Thanks for the log. The xenbus enumeration is definitely working, since we see a "vif/0" message in the kernel output. Unfortunately I don't think the netfront driver provides anything more verbose than the single "xen_netfront: Initialising Xen virtual ethernet driver" message that you're already seeing.

Since you seem to be reliably reaching a controllable userspace, you could try patching the kernel and/or initrd to dump out more diagnostic information. For example, patching xenbus_xs.c to display all xenbus reads and writes would help.

Alternatively, you could try reproducing the problem in a local XenServer instance. That would allow you to examine the state using xenstore-ls from dom0, without having to patch the guest kernel/initrd.

Michael

RE: Chainloading linux in AWS EC2 HVM - plachance - 2016-02-25 07:18

(2016-01-27 14:31)mcb30 Wrote: Alternatively, you could try reproducing the problem in a local XenServer instance. That would allow you to examine the state using xenstore-ls from dom0, without having to patch the guest kernel/initrd.

It works as expected. I could successfully run the following sequence:

1. create the domain using "builder='hvm'"
2.
boot it using ipxe.usb with an embedded bootstrap script (after dd if=ipxe.usb of=<path_to_lvm_dev>)
3. chainload an ipxe.pxe image with an embedded boot script from an HTTP server
4. chainload the kernel + initramfs and trigger the kickstart installation

Xen platform info:

Code: xen_version : 4.6.1-1.el7

bootstrap.ipxe

Code: #!ipxe

boot.ipxe

Code: #!ipxe

Boot sequence output:

Code: # xl create -c <path_to_domu_cfg_file>

As expected, the iPXE stored on disk gets loaded; it can fetch and load a new ipxe.pxe from the boot server, then load the kernel and initramfs.

But using the same ipxe.usb and boot.pxe files on AWS EC2 (region: eu-west-1, instance type t2.medium) doesn't work. Here is the console output from the AWS console, trimmed a little to remove control codes. The first line (ok) is almost certainly the output of ipxe.usb, and the other lines come from the boot.pxe file downloaded from the boot server.

Code: ok

So it seems that iPXE can't chainload itself from the network on AWS.

Other test

A few weeks ago I ran another test. I tried to create a chainload loop using the following iPXE script, expecting the sequence to loop infinitely. But it didn't!

Code: #!ipxe

Console output:

Code: iPXE 1.0.0+ (3c26) -- Open Source Network Boot Firmware -- http://ipxe.org

iPXE loads from disk at instance creation; it can configure the network and reload itself from drive 0x80. But then the reloaded copy can't configure the network and fails to boot, and (I presume) the first iPXE sequence continues and tries to boot from 0x81, which fails because there is only one disk attached to the instance.

It doesn't seem related to the kernel/initramfs but specific to AWS. Can it be related to the Xen version (mine is 4.6.1 and AWS's is 4.2), or to AWS-specific Xen tuning that prevents chainloading? Do you have a working example of chainloading iPXE from iPXE on AWS?

Thanks again for your great work and your help on this issue!
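The local reproduction described in the last post can be sketched as a minimal xl domain configuration; every name, path, and size here is an assumption for illustration, not the poster's actual file:

```
# Hypothetical domU config for the local Xen 4.6 HVM reproduction
builder = 'hvm'
name    = 'ipxe-test'
memory  = 1024
vcpus   = 1
disk    = [ 'phy:/dev/vg0/ipxe-test,xvda,w' ]  # LVM volume with ipxe.usb dd'd onto it
vif     = [ 'bridge=xenbr0' ]
serial  = 'pty'                                # console reachable via "xl create -c"
```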