Playing around with the ZFS filesystem
Posted: Mon Sep 14, 2020 11:06 pm
Spent an evening messing around with ZFS.
Downloaded the ZFS source from https://github.com/openzfs/zfs/releases ... -2.0.0-rc1. Extracted the zip and cd'd into that folder. Both the Fatdog devx and kernel-sources SFSs need to be loaded (available via the Control Panel SFS manager). Then it's just a case of running through the usual commands ...
./autogen.sh
./configure
make
make install
.. along with some patience, as it does take a while to configure/compile. Didn't time it, but it felt like about an hour, maybe longer.
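I didn't note this down step by step, but after make install something along these lines should load the freshly built kernel module and confirm the tools are talking to it (how the module gets installed depends on your kernel setup, so treat this as a sketch):
# depmod -a
# modprobe zfs
# zfs version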
One of the nice things about ZFS is that we don't need to bother with partitions (although we can if we want to). For a server system you could start by taking three hard disks and putting them in a storage pool by running the following command:
# zpool create -f rufwoof1 /dev/sdb /dev/sdc /dev/sdd
zpool create is the command used to create a new storage pool, -f overrides any errors that occur (such as if the disk(s) already have data on them), rufwoof1 is the name of the storage pool, and /dev/sdb /dev/sdc /dev/sdd are the hard drives we put in the pool.
After you’ve created the pool, you should be able to see it with the df command or zfs list
df -h /rufwoof1
... and it should already be mounted and ready to use
zpool status ... will show which disks are in the pool
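As a quick sanity check (just the standard commands, shown as a sketch rather than output from this build), zpool list and zfs list show the pool's capacity and mountpoint:
# zpool list rufwoof1
# zfs list rufwoof1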
You can delete the zpool we've created so we can use the disks for other purposes:
# zpool destroy rufwoof1
Bam, the zpool is gone.
But I've only a laptop with a single HDD, partitioned into several separate partitions, so instead I used plain files as the backing devices - ZFS can also be used on files/loopback devices, which is a nice way to play with ZFS without having to invest in lots of hard drives.
We'll create a number of files to act as the backing devices ...
# for i in 1 2 3 4; do dd if=/dev/zero of=zfs$i bs=1024M count=1; done
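As an aside (not what I used here), sparse files created with truncate should also do the job and appear near-instantly:
# for i in 1 2 3 4; do truncate -s 1G zfs$i; done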
Start the zfs-fuse daemon
# zfs-fuse
and pool them in a mirror arrangement ...
# zpool create rufwoof1 mirror $PWD/zfs1 $PWD/zfs2 mirror $PWD/zfs3 $PWD/zfs4
The $PWD above is important: ZFS requires absolute paths when using files.
With mirroring, data written to the pool is duplicated within each pair (zfs1/zfs2 and zfs3/zfs4), so if the zfs1 file or whatever is lost/destroyed/corrupted, its mirror partner still holds a copy.
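As a rough sketch of how to check on that (standard zpool commands, not specifically re-tested here under zfs-fuse), a scrub makes ZFS read everything back, verify it against the checksums/mirrors and repair what it can:
# zpool scrub rufwoof1
# zpool status rufwoof1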
More simply we might not mirror, and we could just create the pool with
# zpool create rufwoof1 $PWD/zfs1 $PWD/zfs2
run ... zpool status ... to see the status
Your new ZFS filesystem is now live, and you can cd to /rufwoof1 and copy some files into your new ZFS filesystem
We can set ZFS to use transparent compression ...
# zfs set compression=gzip rufwoof1
and turn compression back off again ...
# zfs set compression=off rufwoof1
One important thing, when you’re enabling or disabling ZFS compression, you’re not actually changing any data on the ZFS filesystems. You’re changing the behaviour of ZFS going forward, meaning any future data will be subject to your settings, but your current data will stay as is.
# zfs get all rufwoof1 | grep compress
rufwoof1 compressratio 1x -
rufwoof1 compression off local
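After copying some data in with compression=gzip set, something like the following (standard zfs properties, shown as a sketch) gives a quick view of how well it is compressing:
# zfs list -o name,used,avail,compressratio rufwoof1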
In server systems, a nice feature of ZFS is that new physical disks can just be bought, slotted into their racks and then pooled in to expand existing pools. For single-user desktop systems such as Fatdog however, having the option to add new backing files to the pool can be just as useful, and transparent compression also helps keep used space lower than it would be with no compression.
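For example (a sketch only, with zfs5 and zfs6 being two more backing files made the same way as earlier), a mirrored pool can be grown by adding another mirror pair:
# for i in 5 6; do dd if=/dev/zero of=zfs$i bs=1024M count=1; done
# zpool add rufwoof1 mirror $PWD/zfs5 $PWD/zfs6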
There's a bunch of other things ZFS can do: snapshots, RAID, data integrity verification and automatic repair, etc. Hopefully the above intro is simple enough to get you going.
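As a taster of snapshots (again just a sketch of the standard commands, nothing I've covered above):
# zfs snapshot rufwoof1@clean
# zfs list -t snapshot
# zfs rollback rufwoof1@clean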