Booting from ZFS RAID0/1/5/6 in FreeBSD 8.x

OK, this is a long post but a useful one.  This is how to make FreeBSD boot from a ZFS volume (whether it be RAID0, RAID1/mirror, RAID5/raidz1 or RAID6/raidz2).  The FreeBSD installer doesn’t support anything this exotic, so we have to do it manually.

If you’re using FreeBSD 9.0, then follow the guide at https://www.dan.me.uk/blog/2012/01/22/booting-from-zfs-raid0156-in-freebsd-9-0-release/

First, grab yourself a copy of the DVD1 ISO or the memory stick image and boot from it.  No other boot image will work – it MUST be the DVD or memory stick image!

Once you’ve booted into the installer and chosen your country and keyboard layouts, go to the Fixit menu and choose either CDROM/DVD or USB depending on the installation media you used.  This will open up a terminal window into a live filesystem booted from the DVD/USB.

For my example, I’m going to build a RAID5 (raidz1) array on disks da0, da1 and da2.

First, we need to remove any existing GPT partition info from the disks – ignore the ‘invalid argument’ message if you get it at this stage:

gpart destroy da0
gpart destroy da1
gpart destroy da2

Now we need to initialise the GPT partitions on each disk:

gpart create -s gpt da0
gpart create -s gpt da1
gpart create -s gpt da2

We will now make a boot partition (128 sectors = 64KB) and a ZFS partition (remaining space) on each disk in turn:

gpart add -s 128 -t freebsd-boot da0
gpart add -s 128 -t freebsd-boot da1
gpart add -s 128 -t freebsd-boot da2

gpart add -t freebsd-zfs -l disk0 da0
gpart add -t freebsd-zfs -l disk1 da1
gpart add -t freebsd-zfs -l disk2 da2

And now we have to install the protected MBR boot code into all the drives:

gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da0
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da1
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da2
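Since every command so far is repeated verbatim per disk, the whole per-disk preparation can be wrapped in a small loop.  This is just a convenience sketch (not one of the original steps), written as a dry run that echoes each command so you can review them first – delete the leading echo words to actually run it:

```shell
#!/bin/sh
# Dry-run sketch of the per-disk preparation above.
# Each gpart command is printed rather than executed; remove the
# leading "echo" on each line to run the commands for real.
n=0
for disk in da0 da1 da2; do
  echo gpart destroy "$disk"
  echo gpart create -s gpt "$disk"
  echo gpart add -s 128 -t freebsd-boot "$disk"
  echo gpart add -t freebsd-zfs -l "disk$n" "$disk"
  echo gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 "$disk"
  n=$((n + 1))
done
```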

Now that we’ve configured the disks, we need to load the ZFS kernel modules from the CD so that we can build ZFS volumes:

kldload /mnt2/boot/kernel/opensolaris.ko
kldload /mnt2/boot/kernel/zfs.ko

And create a ZFS pool.  If you want a RAID6 volume, choose raidz2 instead of raidz1 here.  If you want a mirror, use mirror or if you want RAID0 (or single disk) just omit the raidz1 completely:

zpool create zroot raidz1 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2
zpool set bootfs=zroot zroot
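For reference, here is a sketch of the create commands for the other layouts mentioned above, using the same GPT labels (the fourth disk in the raidz2 line is my own assumption – raidz2 technically works with 3 disks but 4+ makes more sense):

```
# RAID6 / double parity (raidz2):
zpool create zroot raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2 /dev/gpt/disk3

# RAID1 / mirror:
zpool create zroot mirror /dev/gpt/disk0 /dev/gpt/disk1

# RAID0 / plain stripe (or a single disk) – no redundancy:
zpool create zroot /dev/gpt/disk0 /dev/gpt/disk1
```

Whichever layout you pick, the `zpool set bootfs=zroot zroot` step is the same.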

Ok, now we’ve made our ZFS pool (and it’s currently mounted at /zroot/) – we need to make all our filesystems on it… this is complicated, but here we go:

zfs set checksum=fletcher4 zroot
zfs create -o compression=on -o exec=on -o setuid=off zroot/tmp
chmod 1777 /zroot/tmp
zfs create zroot/usr
zfs create zroot/usr/home
cd /zroot; ln -s /usr/home home
zfs create -o compression=lzjb -o setuid=off zroot/usr/ports
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages
zfs create zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 /zroot/var/tmp

Now we’re ready to install FreeBSD onto the new ZFS partitions.  We’re going to install the base, manual pages, all sources and a generic kernel – this takes some time so be patient…

cd /dist/8.1-RELEASE/
export DESTDIR=/zroot
for dir in base manpages ; do (cd $dir ; ./install.sh) ; done
cd src ; ./install.sh all
cd ../kernels ; ./install.sh generic
cd /zroot/boot ; cp -Rlp GENERIC/* /zroot/boot/kernel/

Now we need to set /var/empty to readonly:

zfs set readonly=on zroot/var/empty

And now we’re ready to configure the installation.  To make things easier, we will chroot into the environment:

chroot /zroot

We need to set up an initial /etc/rc.conf which will mount all ZFS filesystems:

echo 'zfs_enable="YES"' > /etc/rc.conf

And an initial /boot/loader.conf that will load the ZFS modules and set our root mountpoint:

echo 'vfs.zfs.prefetch_disable="1"' > /boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot"' >> /boot/loader.conf
echo 'zfs_load="YES"' >> /boot/loader.conf
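One thing to watch here: the double quotes must make it into the file intact (blog software and word processors love to mangle them into curly quotes, which the loader won’t parse).  A quick sanity check of the quoting, using a hypothetical scratch file rather than the real /boot/loader.conf:

```shell
#!/bin/sh
# Write the same three lines to a scratch file and display the result;
# every value should come out wrapped in plain double quotes.
F=$(mktemp /tmp/loader.conf.XXXXXX)
echo 'vfs.zfs.prefetch_disable="1"' > "$F"
echo 'vfs.root.mountfrom="zfs:zroot"' >> "$F"
echo 'zfs_load="YES"' >> "$F"
cat "$F"
rm -f "$F"
```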

Now you can set your root password:

passwd root

And configure your timezone:

tzsetup

And set up a dummy aliases file for sendmail to keep it quiet 😉

cd /etc/mail
make aliases

You can do other configuration here, like adding a user etc – but when you’re done we can exit the environment:

exit

Now, we need to export our ZFS configuration (and reimport it) so we can save out the cache file:

mkdir /boot/zfs
cd /boot/zfs
zpool export zroot && zpool import zroot
cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache

We now create an empty /etc/fstab file as follows:

touch /zroot/etc/fstab

This is the tricky part: we need to unmount the ZFS partitions and re-assign their mountpoints for the final root filesystem layout:

export LD_LIBRARY_PATH=/mnt2/lib
zfs unmount -a
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var
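Before exiting, it’s worth a quick check that the mountpoints and bootfs took – a sanity check of my own, not one of the original steps:

```
zfs get -r mountpoint zroot    # zroot should show 'legacy'; tmp/usr/var their new paths
zpool get bootfs zroot         # should report zroot
```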

Now we can exit the fixit shell, remove the media and reboot the computer.  Do this as soon as you can.

The computer should reboot into a ZFS-based filesystem, booted from a software RAID array on fully protected disks.

Once it’s booted, you can log in and run sysinstall to configure other options like networking and startup programs (like SSH!)

Enjoy!

71 thoughts on “Booting from ZFS RAID0/1/5/6 in FreeBSD 8.x”

  1. venture37

    Great work, just one mistake
    You need to use
    echo 'LOADER_ZFS_SUPPORT=' > /etc/src.conf
    not
    echo 'LOADER_ZFS_SUPPORT=YES' > /etc/src.conf

    1. dan Post author

      You definitely need to include “YES” otherwise it won’t build support in.
      FreeBSD 8.1-RELEASE and onwards won’t require this – but anything older does. Tested with “YES” and it definitely works. I’ve installed several servers using this method.

  2. venture37

    I’ve just run into the issue on 8.0-RELEASE AMD64 as I worked through your guide
    from src.conf(5)
    “The values of variables are ignored regardless of their setting; even if they would be set to “FALSE” or “NO”. Just the existence of an option will cause it to be honoured by make(1).”

  3. koroshiya.itchy

    This did not work for me. I used 8.1-RELEASE AMD64. Upon rebooting I get:

    error 1 lba 48
    error 1 lba 1
    No ZFS pools located, can’t boot

    The only difference, apart from the architecture, is that I have only one hard drive and therefore I did not create any RAID. Should I have installed the ia64 loader instead of the i386 one?

    1. dan Post author

      The instructions work on amd64 8.1-release.
      If it says no pools located, it could indicate that the copying of zpool.cache didn’t work for some reason.
      To create zfs without raid on single disk, use “zpool create zroot /dev/…” – this is how my server runs as it has hardware raid.

  4. koroshiya.itchy

    I repeated the entire procedure without success. To make sure there was no typo or some other obvious mistake from my side, I also tried the script referenced above by cryx:

    sh /dist/gpt-zfsroot.sh -p ad0 -s 4G -n tank -d /dist/8.1-RELEASE

    As happens with the manual procedure, the script runs without errors or warnings of any kind.

    Same results in both cases: No ZFS pools located. Certainly I am missing something, but I do not know what…

  5. koroshiya.itchy

    I have tried the procedure from the FreeBSD wiki:

    http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot

    I have found the following problems:

    Fixit# zpool create zroot /dev/gpt/disk0

    Here, I need to use -f, otherwise it tells me that the device has not been specified correctly:

    Within chroot:

    Fixit# umount /dev

    It tells me device busy.

    Almost at the very end:

    Fixit# zfs unmount -a

    Also tells me device busy. Trying to force unmounting does not work. I cannot proceed any further.

    Maybe this will give you some hints…

    1. dan Post author

      Under vmware, using 8.1-RELEASE amd64 DVD .iso, I can confirm this works for both SCSI and IDE disks without error (you can skip the building of the boot loader as 8.1 has this already).
      The only thing I can think of is something to do with disk geometry on an IDE disk on standard hardware and large disks…

      Koroshiya.itchy: Try cleaning your disk first with "dd if=/dev/zero of=/dev/da0 bs=1m count=128" (or whichever device name you’re using) – you can also skip the mounting of /dev and building of the boot loader as that only applies to 8.0-RELEASE

      Hopefully that will help you.

  6. koroshiya.itchy

    I suspect that the ZFS tools do not like my drive:

    Creating smaller slices has taken me a bit further, to the boot loader. However, the system does not boot. It just reboots. A similar behavior is observed with OpenSolaris (development version 134)…

    The problem only happens if I create ZFS partitions (even if it is just one). UFS, Ext4, Ext3, Btrfs, etc., are fine.

    This laptop is a nightmare: the Linux kernel hates my ACPI and my sound card; OpenSolaris hates my hard drive; and BSD refuses to boot from ZFS partitions…

  7. Robin B

    I got the same errors as you did, koroshiya.itchy – being unable to unmount both /dev and zfs. I did all the steps nonetheless until the last bit where you set mountpoints; there I had to reboot, go to fixit again and do “zpool import zroot ad6”, then I did “zfs list” and noticed the pool was now called ad6. So I did the mountpoint commands with ad6 instead of zroot and now it’s working 🙂

    1. dan Post author

      You should be able to do just ‘zpool import zroot’ – adding ‘ad6’ on the end may have caused it to rename it.
      I’m glad it’s working for you now.

    1. dan Post author

      That would mean that your folder (/zroot) does not contain the files. Check that you installed (via ./install.sh) the files after setting DESTDIR to be /zroot.
      You can’t chroot into a folder that has no shell in it.

  8. betafeng

    sorry, i forgot to install the “base”.

    but i have another question: how can i change root’s password when i have forgotten it?

    thx

    1. dan Post author

      If you boot in single user mode (option 4 from the boot menu I think), you can set a new root password via “passwd root” then reboot.

  9. betafeng

    If you boot in single user mode (option 4 from the boot menu I think), you can set a new root password via “passwd root” then reboot.

    but when i boot in single user mode and run “passwd root”, the system echoes

    “passwd:pam_chauthtok():error in service module”

    so i can’t modify the root password

  10. Bruce Cran

    “gpart add -b 162 -s 8388608 -t freebsd-swap -l swap0 da0”

    I’ve already updated the wiki pages, but it’s simpler to use the size suffixes – e.g. 4G instead of 8388608.

    1. dan Post author

      Hi Bruce, thanks for that. I’ve been thinking about updating the page for a while so i’ll do it in one update later today – thanks 🙂

  11. Spencer

    The memstick img didn’t work for me, but I only have one memstick large enough so I couldn’t check if that was the issue. It booted fine, but attempting to use Fixit generated an error that there was no media present. I downloaded the DVD which worked fine, so the only consequence was that I get to seed memstick and DVD.

    The only problem I had was a panic when trying to restart. It had no problem booting, though. I may not have rebooted quickly enough? Not sure what that description meant..

    And… “zpool create zroot mirror” does work, right? I’ve barely used the system yet, but, initially, I was worried since “1” isn’t in the title.

    1. dan Post author

      The panic on restart from the installer shouldn’t cause a problem, so long as you had finished installing everything first.
      the ‘zpool create zroot mirror dev1 dev2’ syntax is correct for creating a mirror. Obviously, you need to have 2 devices after it.
      You can confirm the setup with ‘zpool status’ – it will show ‘zroot’, then ‘mirror’ indented below it, and then the devices within the mirror indented below that.
      If you want true peace of mind, turn it off and remove 1 of the drives. As the server is blank, it’s the best time to do tests 🙂

  12. Spencer

    Good idea. The system boots fine but there’s still a panic in the shutdown process. I haven’t used ZFS in 2 years, and I have never used FreeBSD.. so the testing is needed anyway 🙂

  13. dan Post author

    Hmm.. how much memory do you have in the server? and is it i386 or amd64? You may need to tune ZFS if your memory is restricted – and amd64 edition is much much nicer for ZFS than i386. I’ve got a nice HP microserver with 1GB ram in it working fine with ZFS – but it needed tweaks. If you have 4GB+ ram, then no tweaks needed really.

  14. Ben

    using vmware for testing, and it can boot from zfs. but when I simulate one disk missing, it cannot boot any more. sounds like putting root on zfs is not a good idea.

    1. dan Post author

      did you ensure that you put the bootcode on *all* the disks, not just the first disk?
      What does it say when you can’t boot?
      I use ZFS for root on all my storage servers, and I have deliberately failed drives to simulate failures without any trouble at all.

  15. dan Post author

    i’ve seen this before within vmware, but never on a physical machine – so it might be vmware specific 🙁

    1. dan Post author

      Thanks for that, Jon. Hopefully they can squeeze it into 8.2-RELEASE which is due to be released very shortly! I always upgrade to 8-STABLE after installing which would be why I hadn’t picked up on this in my tests.

  16. Ben

    I am reconsidering whether it is worth putting the root fs on zfs.
    I am trying to build a home NAS based on FreeBSD+zfs, with 4 2T disks. If we put root on zfs, I think the 4 disks will keep spinning all the time, and that will consume a lot of power (I am planning to run 7×24). If we separate the system disk and the data disks, we can put the data disks into idle mode when the data is not being accessed. So a workaround is to separate system disks and data disks (system disk on ufs or zfs) – but I am short of SATA ports, only 4.

  17. dan Post author

    I use a NAS with 4 x 2TB disks with root on zfs with the disks spinning 24/7. The server I use them in (a HP Microserver) uses very little power. Entire server including all 4 disks, 2 ram modules etc uses under 80 watts. I’m not entirely sure how much difference spinning the disks down would make to the power usage – it’s a personal choice. For me, the extra capacity was worth the extra power usage.

  18. dan Post author

    We can’t avoid using slices in bsd as we need a boot loader which sits in the first (small) slice, then swap space which sits in the 2nd slice.
    I have a storage server that doesn’t use ZFS on root, and its ZFS storage array uses entire disks. Unavoidable for booting from it however.

  19. da1

    for some reason, I got the “mountroot>” prompt without the “zroot” line in fstab…. with it, everything is fine

    1. dan Post author

      so long as you followed all the instructions, it shouldn’t have done that (specifically the ‘set bootfs’ part) – I’ve installed countless machines with these instructions 🙂

  20. Ben

    dan, finally I set up my home nas server with one usb key and 4 2T disks.
    the reason I am using the usb key as the boot device is I do not want to partition the 2T disks,
    and to let zfs use the raw devices. (I got the HP microserver too… ;-))

  21. dan Post author

    Cool – though the failure of a USB key is a lot more likely than the disadvantages of partitioning the disks (in my opinion) – plus USB is slow slow slow 🙂 Glad it worked for you though.

  22. Ben

    the usb key only has a gpt freebsd-boot partition, so if the usb key fails, it is easy to replace. and the speed of the usb key doesn’t matter since the root fs stays on zfs. anyway, I am using an slc type of usb key – long life! 😉

  23. ac

    Thanks for posting this, I just went through it cleanly on my first go.

    You might want to add the bit about having to rescan drives before being able to enter “Fixit” mode in the beginning. Took me a while to figure that one out.

    1. dan Post author

      hmmm… you shouldn’t need to rescan drives – they’re scanned when sysinstall starts. unless you change anything, it shouldn’t need to rescan.

    1. dan Post author

      interesting… a little too complicated for the how-to though. What I have found is that SATA controller makes a huge difference. The Intel ICH9 controller on 1 of my computers gives far higher performance than an ATI controller on another – with the same disks.

    1. dan Post author

      cool 🙂 I use it without raidz on a hardware raid pool without any issues also (in fact on the server hosting this blog) 🙂

  24. SIFE

    Do you have any idea about this issue
    “can’t exec getty ‘/usr/libexec/getty’ for port /dev/ttyv* no such file or directory”
    I get after I finished my ZFS GPT installation.

    1. dan Post author

      Usually means either the base OS was not installed correctly – or you don’t have a local console (ttyv0). You can get that with serial consoles sometimes.

  25. Spencer

    Hi dan, thanks for the tips. I didn’t think of memory or CPU as a problem as they’re relatively new (8GB, amd64). Disconnecting drives revealed that one of the two had some issues (old second-hand drives) so I figured I’d run a scrub after reconnecting the two and adding a third to replace it. Resilvering on the bad drive got to ~700MB then a reboot occurred..
    I ran the necessary gpart commands then resilvered, but the shutdown problem continues. I’m guessing the next step is to redo some installation. I think I’ll try detaching a drive, updating it, then switching the cables and attaching the “detached” drive. It will be educational.

  26. Tramp

    Excellent description. Thanks a lot. Works like a charm on Dell PowerEdge T410 with Perc H200 controller and JBODs using FreeBSD 9.0

  27. Somnolent

    I’ve followed this guide and it works great, thanks. However, I was wondering if any extra steps are required when upgrading world. Do you have to reinstall the boot blocks or is that just when zfs itself is upgraded?

    1. dan Post author

      You should only need to do it if there have been any major changes to the zfs boot code (rarely) or for new zfs version implementations – however, it doesn’t hurt to do it every time you installworld either.
      After you installworld, simply do:

      gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

      repeat for each disk in your set (da1, da2 etc). note: this has to be done *after* the installworld or you won’t be using the new files 😉

  28. man

    freebsd kept crashing while I was trying this out; it seems more than 128MB of ram is required for the install.
    Upgraded to a total of 512MB and it works nicely.

    1. dan Post author

      I would discourage using ZFS on any machine with less than 1GB ram. Even at 1GB ram, you really need to tweak ZFS loader.conf options or the kernel will run out of space during heavy load and crash.
      At 512MB ram, you may find it crashes under heavy IO load.

      For a 1GB ram machine, edit your /boot/loader.conf and add:

      vfs.zfs.prefetch_disable="1"
      vm.kmem_size="330M"
      vm.kmem_size_max="330M"
      vfs.zfs.arc_max="80M"
      vfs.zfs.vdev.cache.size="5M"

  29. M

    This works and all, but now my booting sequence takes about 10 minutes. I get the rotation /-/\ thingy, then 2 minutes later the boot loader loads zfs.ko, then 2 minutes later it loads opensolaris.ko, then couple minutes later it gets to the freebsd menu. (all in between you get the animated \-\/\ thing, so its doing something?)

    But once freebsd is booted, everything is fine.

    Athlon x2 215, 4 gigs ram, amd64, 3 2tb drives, raidz1.

    Machine isn’t going to be rebooted that often, but still weird.

  30. Victory

    So you’ve added:

    #echo '/dev/gpt/swap0 none swap sw 0 0' > /zroot/etc/fstab
    #echo '/dev/gpt/swap1 none swap sw 0 0' >> /zroot/etc/fstab
    #echo '/dev/gpt/swap2 none swap sw 0 0' >> /zroot/etc/fstab

    Now what if all three swap[n] devices are filled with some data and one of them crashes? This will result in a kernel panic – since they’re not mirrored/redundant but filled in cascade.

    A solution would be to also gmirror them:
    #gmirror load
    #gmirror label -v -b round-robin swap da0p2 da1p2 da2p2
    and put something like this into fstab:
    #echo "/dev/gmirror/swap none swap sw 0 0" >> /zroot/etc/fstab

    But this will require CPU load for the ZFS mirror + gmirror.
    There is an even better solution: we simply use the already existing redundancy of our raidz1 pool by creating a file as swap space. This is usually not suggested since it results in more I/O load on the disks when swap data is read/written. But frankly there is just no difference between creating a swap file inside of root and creating a separate partition on the same disk – and with the partition you lose the flexibility of e.g. expanding swap size just in time (since you suffer from a “static” partition size).

    I doubt that the majority is using two separate physical disks for swap and root, even the ones in the data centers – and since my swap space is rarely being used anyway, I just prefer the method of using a file/md as swap and keep the flexibility of removing / adding / resizing the swap file just in time – even on a running system:

    #echo "kern.maxvnodes=800000" >> /boot/loader.conf
    #truncate -s 4G /var/swap
    #chmod 0600 /var/swap
    #mdconfig -a -t vnode -f /var/swap -u 0
    #swapon /dev/md0

    See what’s going on:
    #mdconfig -lv

    Make our configuration persistent by letting rc.conf know about our wishes (appending with >> so we don’t clobber the zfs_enable line already there):
    echo '
    ### SWAP File
    swapfile="/var/swap"
    ' >> /etc/rc.conf

    … now we don’t even have to write an unnecessary fstab entry for swap since rc will swapon automatically …

    1. dan Post author

      I use the swapfile method myself. I’ve not got around to updating this page to change the swap partitions yet.
      It’s generally recommended not to use swap on a ZFS partition – but I think weighing the pros and cons together, it’s acceptable for this.

  31. Jim

    Hi,

    I followed this guide about a year ago – and my production server has worked great. Recently I have noticed errors on one of the disks and I need to replace. I just wondered if the procedure to replace the faulty disk is different from the usual process of zfs replace. Ideally I want to take the faulty disk offline, reinsert a new one without having to go into single user mode.

    Thanks,

    Jim

    1. dan Post author

      If you followed the instructions here, then you have some swap space on the drive too which complicates things a little.
      You should identify the disk that you need to replace – for my example I will assume it is disk2/swap2.
      You should attempt to remove the swap space using ‘swapoff /dev/gpt/swap2’ then you can remove/detach the drive (if it’s a SATA drive on the ATA bus, you can use atacontrol to detach it)
      When you plug the new drive in, you may have to rescan the bus to see the disk – using atacontrol or camcontrol.

      You should re-initialise the disk in the same way as originally (but obviously only for disk2!) by running the gpart commands related to that disk.

      Once you have done this, you can use ‘zpool replace’ to replace the disk and ZFS will start resilvering to repopulate the drive.
      Check on its status with ‘zpool status’ 🙂
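      [Editor’s note: pulling the steps in this reply together for the disk2 example gives roughly the sequence below. This is only a sketch – the swap slice offsets come from Bruce’s gpart line earlier in the comments, and the ‘zpool offline’ step is my own addition – so adjust device names and sizes to your actual layout:]

```
swapoff /dev/gpt/swap2                 # stop using swap on the failing disk
zpool offline zroot gpt/disk2          # detach the member before pulling the drive
# ...swap the physical disk, rescan the bus with atacontrol/camcontrol...
gpart create -s gpt da2                # re-initialise the new disk as before
gpart add -s 128 -t freebsd-boot da2
gpart add -b 162 -s 8388608 -t freebsd-swap -l swap2 da2
gpart add -t freebsd-zfs -l disk2 da2
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da2
zpool replace zroot gpt/disk2          # start resilvering onto the new disk
zpool status                           # watch the resilver progress
```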

  32. Jim

    Hi Dan,

    Looks like the drive crashed and burned hours before I was going to attempt the hotswap.

    Now whilst trying to boot I get

    BTX loader 1.00 BTX version is 1.02
    Consoles: internal video/keyboard
    BIOS drive C: is disk0
    BIOS drive D: is disk1
    BIOS 639kB/3668736kB available memory

    FreeBSD/i386 bootstrap loader, Revision 1.1
    (jim@virtua, Sun May 30 16:34:49 BST 2010)
    Consoles: internal video/keyboard
    BIOS drive C: is disk0
    BIOS drive D: is disk1
    BIOS 639kB/3668736kB available memory

    FreeBSD/i386 bootstrap loader, Revision 1.1
    (jim@virtua, Sun May 30 16:34:49 BST 2010)
    Can’t work out which disk we are booting from. Guessed BIOS device 0xffffffff not found by probes, defaulting to disk0:
    leng not found
    panic: Assertion failed: (FALSE), function ficlCompileSoftCore, file softcore.c,
    line 428.

    –> Press a key on the console to reboot <–

    Is it possible to ask it to try disk1 ? – or is this a fixit image job 🙂

    Thanks, Jim

  33. wess

    hi

    great tutorial! but i had to change the order of the loader.conf settings. i did:

    echo 'vfs.zfs.prefetch_disable="1"' > /boot/loader.conf
    echo 'zfs_load="YES"' >> /boot/loader.conf
    echo 'vfs.root.mountfrom="zfs:zroot"' >> /boot/loader.conf

    with the original order i get an error on boot time:
    Trying to mount root from zfs:zroot
    ROOT MOUNT ERROR:

    regards wess

    1. dan Post author

      The order of lines in loader.conf shouldn’t matter – they’re not interpreted in any particular order… as it said it was trying to mount from zfs:zroot, it had already read the correct line from loader.conf
      Usually if you get a root mount error, it means you forgot the zfs_enable line in rc.conf, forgot to set bootfs, or forgot to copy the zpool.cache file to the new zfs root.

    1. dan Post author

      Unfortunately the sparc systems require sun partition tables which aren’t compatible with ZFS booting in FreeBSD.
      You can use ZFS on sparc systems – but not to boot from.

