ZFS on Google Cloud Platform

posted 1 Apr 2019, 08:09 by Christophe Noël   [ updated 3 Apr 2019, 06:45 ]
ZFS provides a single-node server supporting NFS and can be installed with a single click from the GCP Marketplace.

The only information Google provides about the file system relates to the ZFS command. Unfortunately, the ZFS mode always fails on deployment (as of April 2019). The default file system is XFS and works fine.

Two possible approaches for securing your NFS server (otherwise anybody could write to your file system):
- Restrict access to internal IP addresses (typically on GCP, the GKE machines belong to the usual private network 10.0.0.0/8)
- Use Kerberos (which requires the use of private keys)

Unless you are familiar with Kerberos, you could also rely on FTP or SSH for accessing the file system remotely...

Default exports file:
root@nfs1-vm:/# more /etc/exports 
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/nfs1 10.0.0.0/8(rw,no_subtree_check,fsid=100)
/nfs1 127.0.0.1(rw,no_subtree_check,fsid=100)

When updating the exports file, the following command must be executed: exportfs -r
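For example, a typical edit-and-reload sequence looks like this (the /nfs1 export matches the file above):

sudo vi /etc/exports         # adjust the export entries
sudo exportfs -r             # re-read /etc/exports and re-export everything
sudo exportfs -v             # verify the active exports and their options
showmount -e localhost       # double-check what clients will see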

The gcloud command to proxy the Grafana web console over SSH is documented as below (on Windows, don't forget to launch the Google Cloud SDK application):
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=xxx --zone europe-west1-b INSTANCE

On Windows, however, the syntax should be:
--ssh-flag="-L 3000:localhost:3000"
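Putting the pieces together, the full command on Windows would be as follows (xxx and INSTANCE are the same placeholders as above); Grafana is then reachable at http://localhost:3000:

gcloud compute ssh --ssh-flag="-L 3000:localhost:3000" --project=xxx --zone europe-west1-b INSTANCE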

Using Grafana you may:
  • Add users (the invitation emails fail, so you need to copy the invitation link yourself from the Pending tab)
  • Monitor disk throughput, space used, etc.
In order to resize the disk:
  1. In the GCP Console, edit the disk size (you can only increase it!)
  2. Check the disks present: sudo lsblk
  3. Grow with XFS: sudo xfs_growfs /dev/sdb (see the sketch below)
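As a sketch, assuming the data disk is /dev/sdb and is mounted on /nfs1, the whole sequence on the VM looks like this (older versions of xfs_growfs expect the mount point instead of the device, e.g. sudo xfs_growfs /nfs1):

sudo lsblk                   # the resized disk should now report its new size
sudo xfs_growfs /dev/sdb     # grow the XFS file system to fill the disk
df -h /nfs1                  # confirm the new capacity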

To access using the external IP


1. Forbid access to port 2049 from outside your cluster!!! (This is critical to prevent anybody from using your NFS server; see the gcloud sketch after this list)

2. Disable iptables (flush the rules), because GCP already handles the firewall:
iptables -P INPUT ACCEPT      # set the default policies to ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F            # flush the nat and mangle tables
iptables -t mangle -F
iptables -F                   # flush all rules in the filter table
iptables -X                   # delete all user-defined chains
3. Allow all IPs (wildcard) in the exports file (see the sketch below)
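For step 1, GCP denies ingress by default, so make sure no firewall rule opens port 2049 to 0.0.0.0/0, and allow NFS only from the internal range. A minimal sketch (the rule name allow-nfs-internal is hypothetical):

gcloud compute firewall-rules list        # look for rules exposing tcp:2049 externally
gcloud compute firewall-rules create allow-nfs-internal --direction=INGRESS --action=ALLOW --rules=tcp:2049,udp:2049 --source-ranges=10.0.0.0/8

For step 3, assuming the same fsid=100 export as above, the wildcard entry in /etc/exports would be:

/nfs1 *(rw,no_subtree_check,fsid=100)

followed by exportfs -r to apply it.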

Mount command


The command is:
mount -t nfs4 NFS_IP:/PATH /MOUNT_PATH

This requires installing the NFS packages:
- CentOS: sudo yum install nfs-utils --nogpgcheck
- Debian: sudo apt-get install nfs-common
The --nogpgcheck option may be required because the package is typically unsigned.
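For example, mounting the export from above on a Debian client (10.132.0.5 is a hypothetical internal IP of the NFS server):

sudo apt-get install nfs-common
sudo mkdir -p /mnt/nfs1
sudo mount -t nfs4 10.132.0.5:/nfs1 /mnt/nfs1
df -h /mnt/nfs1              # the export should now be visible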

Kubernetes

Below is a YAML example of a Kubernetes resource for mounting the NFS server. The first part (volumes) declares the volume location (server IP and path). The container part (volumeMounts) indicates the mount path.

apiVersion: v1
kind: Pod
metadata:
  name: pod-using-nfs
spec:
  # Add the server as an NFS volume for the pod
  volumes:
    - name: nfs-volume
      nfs:
        # IP address of the NFS server
        server: xx.xx.xx.xx
        path: /data
  # In this container, we'll mount the NFS volume
  # and write the date to a file inside it.
  containers:
    - name: app
      image: alpine
      # Mount the NFS volume in the container
      volumeMounts:
        - name: nfs-volume
          mountPath: /nfs1
      # Write to a file inside our NFS mount
      command: ["/bin/sh"]
      args: ["-c", "while true; do date >> /nfs1/dates.txt; sleep 5; done"]
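To try it out, the manifest can be applied and checked as follows (assuming it was saved as pod-using-nfs.yaml):

kubectl apply -f pod-using-nfs.yaml
kubectl exec pod-using-nfs -- tail /nfs1/dates.txt    # a new date line should appear every 5 seconds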

