I'm looking for a way to convert a working Linux installation on a drive to something bootable by iPXE. I have tried to use things like Linux Live Kit and debootstrap, and I have tried converting my rootfs to a squashfs image and booting it with initrd, with no luck.
I would very much like to do this without relying on third-party software or scripts (a simple dd command to convert the partition to an .img file would be ideal). I've been researching this and trying things for a week and I've barely gotten anywhere.
What is the easiest way to do this?
You could always dd your disk to an iSCSI device and boot from that with iPXE's sanboot command.
But my guess is that you have something small-ish that you want to boot; if so, I would recommend creating an initramfs from your whole disk.
Below is a pseudo-command to create such a file from your filesystem:
Code:
find . | cpio -H newc -o | gzip > /ramfs.gz
The choice here depends on a few things:
* Do you want it to work when the network cable is disconnected for whatever reason?
* Do you have the RAM to hold your entire system?
* Do you want to be able to store data?
* Do you want to have multiple machines running with the same image?
Since you have already tried squashfs, my guess is that you have the RAM and want it to work even if the machine goes "offline".
If so, post what you have with your squashfs approach so far: which files you have, the iPXE script you are using, and the actual issue (error message) you hit when you use it. I'm sure we can figure it out, but it's always hard to start helping from "nothing".
It might be as easy as adding your squashfs image with initrd after first adding cpio headers to it; if you are booting in pcbios mode (not EFI), it is possible to have iPXE add those headers for you.
(2017-10-01 21:58)NiKiZe Wrote: [...]
Hi NiKiZe,
To answer your questions:
- No, I don't need it to work when the network is unplugged - I only want my disk image stored on a server for the client to fetch
- Yes, I want the image mounted read-only and changes saved in RAM - AFAIK this is default behavior?
- No, I don't need to store any data on the client
- Yes, ideally I want it to work for multiple clients at once
I've tried a lot of things with squashfs. A lot of it was based around this tutorial:
http://willhaley.com/blog/create-a-custo...-17-zesty/
And a lot of it I took from this guide:
https://www.ibm.com/developerworks/commu...u%20(OPAL)
However these were both for PXE. I tried to adapt them to iPXE by simply using the kernel and initrd from these guides in my config. I got examples from places like these:
- https://www.reversengineered.com/2014/05...e-network/
- http://forum.ipxe.org/showthread.php?tid=7134
Additionally, instead of attempting to make the installation with debootstrap or similar, I also tried installing Ubuntu Server 17.04 in a virtual machine, mounting it in another virtual machine, and creating the image with mksquashfs. So my config may have looked something like this:
Code:
#!ipxe
kernel http://10.0.0.2/ipxe/vmlinuz
module http://10.0.0.2/ipxe/initrd.img
imgargs vmlinuz boot=live config console=ttyS0 username=aj fetch=http://10.0.0.2/ipxe/sdc.squashfs
boot
This results in iPXE 'freezing':
https://i.imgur.com/6DDgmng.png
The server is not transmitting anything either.
(2017-10-01 22:20)ajr Wrote: To answer your questions:
- No, I don't need it to work when the network is unplugged - I only want my disk image stored on a server for the client to fetch
- Yes, I want the image mounted read-only and changes saved in RAM - AFAIK this is default behavior?
- No, I don't need to store any data on the client
- Yes, ideally I want it to work for multiple clients at once
Multiple clients make iSCSI a bad option; with your approach of HTTP fetching, it will only need the network at boot, and then it can go offline.
We haven't gotten to what is default yet, but yes, with initrd or fetch over HTTP it will all be in RAM.
My question about storing data was meant to be about iSCSI as permanent storage, but it was badly worded.
(2017-10-01 22:20)ajr Wrote:
Code:
#!ipxe
kernel http://10.0.0.2/ipxe/vmlinuz
module http://10.0.0.2/ipxe/initrd.img
imgargs vmlinuz boot=live config console=ttyS0 username=aj fetch=http://10.0.0.2/ipxe/sdc.squashfs
boot
This results in iPXE 'freezing': https://i.imgur.com/6DDgmng.png
The only reason for it to freeze there should be that iPXE is done and the kernel is executing, but something fails.
So more likely it's your kernel that freezes rather than iPXE.
Try to only do
Code:
kernel http://10.0.0.2/ipxe/vmlinuz
boot
And you should see the same thing; that means you need to get the kernel working first.
Are you sure that that same kernel works when you are booting from disk?
The concept that you have here does work.
As an example of systemrescuecd working via iPXE you can test this script:
http://b800.org/sysr/sysrcd.ipxe
In regards to your username= and fetch= arguments: those are not handled by the kernel, so you will need scripts in your initrd that handle them, and also something that mounts the squashfs, etc.
If you want to do all this scripting on your own, that's a lot of work.
Another approach that I totally forgot is legacy nfsroot, but with that you will still need scripts that create a local ramfs overlay, unless you want to store files on the NFS share, which causes issues with multiple clients.
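For a sense of what that initrd scripting involves, here is a very rough sketch of an /init that fetches a squashfs over HTTP, overlays a writable tmpfs on it, and switches root. Everything in it (the URL, interface name, and paths) is hypothetical, and real live-boot initrds do considerably more; it is only written to a file and syntax-checked here, since actually running it requires a BusyBox initramfs environment:

```shell
# Hypothetical initramfs /init sketch: fetch a squashfs over HTTP and boot it
# with a tmpfs overlay. Written to a file and syntax-checked only.
cat > /tmp/init-sketch.sh <<'EOF'
#!/bin/sh
mount -t proc proc /proc
mount -t sysfs sys /sys
mount -t devtmpfs dev /dev
ip link set eth0 up
udhcpc -i eth0                              # BusyBox DHCP client
wget -O /sdc.squashfs http://10.0.0.2/ipxe/sdc.squashfs
mkdir -p /ro /rw /newroot
mount -t squashfs -o loop /sdc.squashfs /ro # read-only lower layer
mount -t tmpfs tmpfs /rw                    # writable upper layer in RAM
mkdir -p /rw/upper /rw/work
mount -t overlay overlay \
  -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /newroot
exec switch_root /newroot /sbin/init
EOF
sh -n /tmp/init-sketch.sh && echo "init sketch: syntax OK"
```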
(2017-10-01 22:39)NiKiZe Wrote: [...]
Here is what happens with only the kernel and boot options:
https://i.imgur.com/Jhssl4q.png
I don't care about the username and fetch flags, I only included them because they were in the examples I found.
(2017-10-01 22:42)ajr Wrote: Here is what happens with only the kernel and boot options: https://i.imgur.com/Jhssl4q.png
I don't care about the username and fetch flags, I only included them because they were in the examples I found.
Good, so the kernel boots on its own.
Now add the initrd as well. Note that I replaced module in your script with initrd; they should be compatible, but initrd is clearer.
Code:
#!ipxe
kernel http://10.0.0.2/ipxe/vmlinuz
initrd http://10.0.0.2/ipxe/initrd.img
imgargs vmlinuz boot=live config console=ttyS0 username=aj fetch=http://10.0.0.2/ipxe/sdc.squashfs
boot
And now, reading through it again, I see why you get a "blank screen": the console= part redirects all output to serial, so removing that should give you output to work with (probably some kind of error message).
And I think all your other arguments are script-specific, so depending on your initrd and its scripts you might want to remove them as well, and then test them one by one.
You will need some way of getting your squashfs into your system, either via initrd or some fetch= equivalent scripting; as such, you will "have to care".
To get this working, you will have to understand, or learn, all aspects of this (mostly from scratch). If you aren't prepared for that or don't want to, then I would suggest using some other already-working live distro.
A Read Only NFSroots install is nearly perfect for what you want.
With a little juggling, it is possible to go with a Stateless (RO) NFSroots installation.
I have quite a bit of current production experience with Stateful (RW) NFSroots installations: they are actually quite stable; both PCBIOS and EFI (without SecureBoot).
RedHat- and CentOS-based installations are fairly portable to NFS, though care must be taken to work around their somewhat intentionally broken NFS boot support in the stock vmlinuz and initrd builds; the workarounds are fairly easy. They involve installing kernel-devel and replacing the OEM vmlinuz and initrd with the netinstall or PXE vmlinuz/initrd out of the installer ISO. Preparations for NFS migration can then continue, with subsequent kernel updates and initrd (dracut) rebuilds able to take the NFS parameters correctly.
Ubuntu is fairly straightforward for NFSroots, again having to export the OS over to an NFS mount.
SuSE and Open SuSE are some of the few distributions that actually natively support an NFSroots installation; they are not perfect, but they certainly work.
I have a "Garage" computer that I triple-boot (iSCSI for Windows, NFSroots for CentOS or Ubuntu) with iPXE, and it's been running for three-plus years on both OSes with nary a hiccup.
Best,
M^3
Would you mind showing me your config, MultimediaMan? An NFSroot was what I wanted when I began all this, but my bootloader couldn't locate /dev/nfs, IIRC.
What Distro are you running?
For most RH-based distros you need to install and specify the dracut-network package.
Pay heed to this webpage:
https://access.redhat.com/documentation/...esssystems
Particularly:
Code:
after installing the dracut-network package, add the following line to /etc/dracut.conf:
add_dracutmodules+="nfs"
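Note that editing dracut.conf alone changes nothing on disk; the initramfs has to be rebuilt afterwards so the nfs module is actually included. A hypothetical sequence (the image path follows the usual RHEL naming, and the kernel version comes from uname -r on the target):

```shell
# Rebuild the current kernel's initramfs after enabling the nfs dracut module.
# Run on the target system as root; paths follow RHEL/CentOS conventions.
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
```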
RHEL 6.7-6.9 and RHEL 7.x have somewhat broken NFSroots support: it's there, but for some reason the default kernel and initrd won't work with NFSroots. The solution is to substitute the OEM vmlinuz and initrd with the ones from the netinstall ISO or the install ISO, under /images/pxeboot/.
NFS v4 is supported, but it is sometimes difficult to troubleshoot. If you use NFS v3 you MUST disable SELinux.
I prefer to boot the vmlinuz/initrd directly from the NFS mount using the iPXE NFS option. It makes updating kernels and initrds less of a chore because they essentially behave exactly as a local disk or iSCSI/FCoE-based installation from an update perspective.
Basic boot arguments:
Example:
Code:
#!ipxe
set nfs_server nas1.private.${15}
set nfs_nic_one enP3p3s0f0 ; set nfs_nic_two enP3p3s0f3
set nfs_args vers=3,rw,defaults
set arg0 splash=off
set arg1 vga=0x314
set arg2 showopts
set arg3 disable_ipv6=1
set arg4 selinux=0
set arg5 enforcing=0
set arg6 ip=${nfs_ip}:${nfs_server}:${def_gateway}:${nfs_netmask}:${hostname}:bond0:none:9000
set arg7 bond=bond0:eth${nfs_nic_one},eth${nfs_nic_two}:mode=1
dhcp netX
initrd nfs://${nfs_server}/boot_${hostname}/boot/initrd
chain nfs://${nfs_server}/boot_${hostname}/boot/vmlinuz initrd=initrd root=nfs:${nfs_server}:/boot_${hostname}:${nfs_args} ${arg0} ${arg1} ${arg2} ${arg3} ${arg4} ${arg5} ${arg6} ${arg7}
Notes:
For RHEL/CentOS version 7+, SLES 12+, and Ubuntu 14+: if you use NFS v3 you MUST specify NFS v3 in the NFS parameters, otherwise the OS may default to NFS v4 (confirmed in SLES 12).
Yes, booting from a bond is possible; you just have to specify the bond as part of the boot parameters. It doesn't hurt to have the bond configured in the running OS as well. By specifying the NFS adapter in the boot parameters, you effectively give the OS "Super Duper Root" over the interface which provides NFS, protecting it at nearly all costs; basically there isn't a whole lot you can do with that particular interface in the running OS.
Ubuntu Server 17.04 on both client and server. Also if you wouldn't mind showing me your NFS export and telling me how you installed the OS on NFS?
(2017-10-02 12:44)ajr Wrote: [ -> ]Ubuntu Server 17.04 on both client and server. Also if you wouldn't mind showing me your NFS export and telling me how you installed the OS on NFS?
Start here:
https://help.ubuntu.com/community/Instal...OnNFSDrive
Helpful setup:
Code:
[root@centos-nfs ~]# cat /etc/exports
/srv/nfs/boot_$mynfsroot *(rw,sync,no_root_squash)
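As a side note, after editing /etc/exports the export table has to be reloaded. A hypothetical entry matching the /srv/nfsroot path used later in this thread (the subnet restriction is an assumption; the thread's example exports to *):

```shell
# Hypothetical /etc/exports entry for the nfsroot used later in this thread:
#   /srv/nfsroot 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
# Reload the export table without restarting the NFS server:
exportfs -ra
# Show what is currently exported, with the effective options:
exportfs -v
```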
The NFS/iSCSI interfaces are on a private non-routable VLAN which is in DNS (example: nas1.private.${15} )... iPXE boots on a public interface and directs the vmlinuz/ initramfs to load from NFS on the private interface.
For Ubuntu with a desktop, you need to pay particular attention to the ToDos at the bottom of the referenced page: it is a really good idea to install on a regular hard disk and rsync / to NFS to get all of the bits needed for a working desktop Xserver and Enlightenment/Gnome/KDE.
I prefer to boot the "donor system" with a live distro of some kind, mount the local disk, and rsync it to the NFS filesystem:
Code:
rsync -aK -e ssh --exclude='/proc/*' --exclude='/sys/*' /mnt/$localdiskroot/ $ip_of_server_hosting_image:/$path/$to/$directory/
With a 1GbE network, the clone generally takes less than 5 minutes.
Swap can be placed on the NFS mount.
I recommend setting "swappiness" to a very low value, so the OS swaps only when it must.
Code:
mkdir /var/swap
dd if=/dev/zero of=/var/swap/swapfile.swp count=8589934 bs=1k
chmod 0600 /var/swap/swapfile.swp
mkswap /var/swap/swapfile.swp
Add the following line to /etc/fstab, then enable the swap with swapon -a:
/var/swap/swapfile.swp none swap defaults 0 0
MultimediaMan, I have followed that guide a couple times as well. I've also done something similar to the rsync you suggested; I mounted a preinstalled system and copied it to a NFS directory with rsync -varP. I don't remember the errors I got then and I can't check for a bit, but I know it didn't work. I will be able to check later tonight hopefully. Thanks!
I looked at your screenshot: I can almost guarantee the error is because you are missing "initrd=initrd.img" in your Kernel Parameters.
Try this:
Code:
imgargs vmlinuz initrd=initrd.img boot=live config console=ttyS0 username=aj fetch=http://10.0.0.2/ipxe/sdc.squashfs
(2017-10-03 03:34)MultimediaMan Wrote: I looked at your screenshot: I can almost guarantee the error is because you are missing "initrd=initrd.img" in your Kernel Parameters.
I guarantee with 100% certainty that initrd= is only needed on the Linux cmdline when booting in EFI mode, not in legacy pcbios, and the boot in the screenshot is legacy, so it is not an issue.
It is, however, good to have it in there for compatibility.
As already stated above, the issue with that line is rather console=ttyS0, which redirects all text output to the /dev/ttyS0 serial port.
Looking at the boot screen again, he's using a Hyper-V Gen 2 VM; that VM is booting from a Hyper-V EFI BIOS.
I've run into the issue many times with many Distros. Even newer PC BIOS kernels can experience the issue if you do not explicitly specify initrd=initrd.img
The VFS message means the vmlinuz kernel can't find the initrd.
Just to avoid any future confusion ...
(2017-10-03 09:05)MultimediaMan Wrote: Looking at the boot screen again, he's using a Hyper-V Gen 2 VM; that VM is booting from a Hyper-V EFI BIOS.
Indeed, Hyper-V Gen 2 is EFI-only, but this is not a Gen 2 VM.
If we look at the Features line in the image where it was claimed that iPXE froze, we see that there is PXE and bzImage support available, but those can only be included in pcbios builds.
EFI builds, on the other hand, have the EFI features flag.
(2017-10-03 09:05)MultimediaMan Wrote: I've run into the issue many times with many Distros. Even newer PC BIOS kernels can experience the issue if you do not explicitly specify initrd=initrd.img
The VFS message means the vmlinuz kernel can't find the initrd.
Again, initrd= is only needed on the cmdline for EFI.
The image with the VFS message is from when there was no initrd loaded at all, as in:
Code:
kernel http://10.0.0.2/ipxe/vmlinuz
boot
The reason for this test was to verify that the kernel actually booted, before introducing any possible issues that could have been due to initrd.
Quote: Here is what happens with only the kernel and boot options:
MultimediaMan is perfectly correct that if this was an EFI boot, then initrd= would be needed.
I am getting "Could not start download: Operation not supported" on my NFS guest machine:
https://i.imgur.com/mhYDVor.png
Here is my ipxe config (loaded from my web server):
Code:
#!ipxe
initrd nfs://10.0.0.2/srv/nfsroot/boot/initrd.img-4.10.0-35-generic
chain nfs://10.0.0.2/srv/boot/vmlinuz-4.10.0-35-generic initrd=initrd root=nfs:10.0.0.2:/srv/nfsroot
I can mount the NFS mount normally from another PC.
With further testing I got further by loading initrd and vmlinuz over HTTP instead of NFS. Then initrd was able to take over I think, but I can't boot completely:
https://i.imgur.com/fBd7PJC.png
To use chain nfs://... from iPXE, you will need to build iPXE with NFS enabled (the DOWNLOAD_PROTO_NFS build option).
So my guess is that is what happened for you when you got the http://ipxe.org/3c092003 error seen in your first image.
I would still recommend that you use HTTP access from within iPXE, and then only have the Linux kernel do the NFS parts.
Moving on to your next image: the first thing to check would be that your Linux has gotten an IP at that point.
You might need
Code:
root=/dev/nfs nfsroot=10.0.0.2:/srv/nfsroot
(or at least you used to need that when using kernel NFS root; it might be different in today's initrd-based boots, so https://www.kernel.org/doc/Documentation...fsroot.txt might not be relevant, but it could be worth a read)
You might also want to add vga=791 or something similar to get a bit more info on screen for debugging purposes.
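Putting the suggestions in this thread together, a hypothetical iPXE script along those lines (kernel and initrd fetched over HTTP, NFS root left to the Linux side; the IPs, paths, and version strings are taken from the examples earlier in the thread and would need adjusting):

```shell
#!ipxe
# Hypothetical: fetch kernel/initrd over HTTP, let the initrd mount the NFS root.
dhcp
kernel http://10.0.0.2/ipxe/vmlinuz-4.10.0-35-generic
initrd http://10.0.0.2/ipxe/initrd.img-4.10.0-35-generic
imgargs vmlinuz-4.10.0-35-generic initrd=initrd.img-4.10.0-35-generic ip=dhcp root=/dev/nfs nfsroot=10.0.0.2:/srv/nfsroot
boot
```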