NFS with IPv6 link-local addresses

    02.02.2021 13:37
    by bitstacker

    Yesterday I finally fixed my ZFS/NFS setup. While doing that, I ran into a few problems. This article documents them so others don't waste hours like I did.

    My setup

    I have a VM host that runs ZFS for a lot of hard disks. I wanted to access the ZFS datasets from my virtual machines. What I didn't want was to share the NFS exports across my whole network. So, for security reasons, I wanted to share them only via IPv6 link-local addresses.

    So I set up a bridge on the VM host with the IPv6 link-local address 'fe80::1', and all the VMs are connected to this bridge. The VMs have autoconfigured IPv6 link-local addresses.

    ######### br0                                      ens0 #####
    #VM-Host#----------------+------------------------------#VM1#
    ######### fe80::1/64     |   fe80::4242:ff:fe5e:beef/64 #####
                             |
                             |
                             |                         ens0 #####
                             \------------------------------#VM2#
                                 fe80::4242:ff:fe5e:cafe/64 #####
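
    For completeness, this is roughly how the static link-local address can be assigned to the bridge with iproute2 (a minimal sketch; my actual setup does this through the distribution's network configuration, and the interface name br0 is specific to my host):

    # assumes br0 already exists (created by the hypervisor/network config)
    ip link set br0 up
    ip -6 addr add fe80::1/64 dev br0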
    

    ZFS on Linux and sharenfs

    ZFS has a nice feature that exports datasets via NFS if you set the sharenfs option. So I tried:

    zfs set sharenfs="rw=@fe80::/64%br0" mypool/mydataset
    

    The result can be checked with the following command (the mountpoint in my example is /srv/mypool/mydataset):

    exportfs -v
    /srv/mypool/mydataset
            <world>(rw,wdelay,root_squash,no_subtree_check,mountpoint,sec=sys,rw,secure,root_squash,no_all_squash)
    

    If you look at it, you might see the problem: ZFS exports the dataset read-write to the whole world. That is not the intended behavior and is probably a security risk. It seems ZFS or NFS can't parse the link-local address and falls back to world access.

    The solution

    After many tries, this solution worked for me:

    I disabled the sharenfs option for all my datasets.

    zfs set sharenfs=off mypool/mydataset
    

    Then I used /etc/exports for the NFS exports instead:

    /srv/mypool/mydataset fe80::%br0/64(rw,no_root_squash,no_subtree_check)
    

    This allows access from the whole fe80::/64 subnet, but only on the bridge br0.

    To reload the exports, you have to run:

    exportfs -ar
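
    If you then re-run exportfs -v, the dataset should be exported to the fe80::%br0/64 client spec instead of <world>:

    exportfs -v    # should no longer show <world> for /srv/mypool/mydataset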
    

    The (now working) syntax is a bit weird: you write the IPv6 link-local prefix first, then a percent sign followed by the interface name, and the CIDR prefix length last.

    In retrospect, this might have worked with the sharenfs option, too.
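
    I haven't verified it, but mirroring the working /etc/exports syntax, the sharenfs value would presumably look like this (untested sketch):

    # untested: same address%interface/prefix ordering as in /etc/exports
    zfs set sharenfs="rw=@fe80::%br0/64" mypool/mydataset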

    Mounting the share inside the VM

    This is my systemd mount unit file:

    [Unit]
    Description=NFS from vmhost
    After=network.target
    
    [Mount]
    What=[fe80::1%%ens0]:/srv/mypool/mydataset/
    Where=/srv/mydataset/
    Type=nfs
    
    [Install]
    WantedBy=multi-user.target
    

    The additional percent sign is needed because systemd uses % for specifiers in unit files, so a literal percent has to be written as %%. The file needs to be called "srv-mydataset.mount", because systemd requires a mount unit's name to match its (escaped) mount point. (Put it in /etc/systemd/system/.)
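
    If you are unsure what a mount unit has to be called, systemd-escape can generate the name from the mount point (here for the Where= path of my unit):

    systemd-escape --path --suffix=mount /srv/mydataset
    srv-mydataset.mount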

    To enable and start it, run:

    systemctl daemon-reload
    systemctl enable srv-mydataset.mount
    systemctl start srv-mydataset.mount
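
    Afterwards, you can check that the share is actually mounted, for example with:

    systemctl status srv-mydataset.mount
    findmnt /srv/mydataset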
    
