Playing around with the ZFS filesystem

user1111

Playing around with the ZFS filesystem

Post by user1111 »

Spent an evening messing around with ZFS.

Downloaded the ZFS source from https://github.com/openzfs/zfs/releases ... -2.0.0-rc1. Extracted the zip and cd'd into that folder. It needs both the Fatdog devx and kernel-sources SFSs to be loaded (available via the Control Panel SFS manager). Then it's just a case of running through the usual commands ...

./autogen.sh
./configure
make
make install

... along with some patience, as it does take a while to configure and compile. I didn't time it, but it felt like about an hour, possibly longer.
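
If you want a rough timing and a faster build, the same sequence can be wrapped like this (just a sketch; the time wrapper and the -j$(nproc) parallel make are optional extras, not something from the original run):

./autogen.sh
./configure
time make -j$(nproc)    # parallel build, prints how long it took
make install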

One of the nice things about ZFS is that we don’t need to bother with partitions (although you can if you want to). For a server system you could start by taking three hard disks and putting them in a storage pool by running the following command:

# zpool create -f rufwoof1 /dev/sdb /dev/sdc /dev/sdd

zpool create is the command used to create a new storage pool, -f overrides any errors that occur (such as if the disk(s) already have data on them), rufwoof1 is the name of the storage pool, and /dev/sdb /dev/sdc /dev/sdd are the hard drives we put in the pool.

After you’ve created the pool, you should be able to see it with the df command or zfs list:

df -h /rufwoof1

... and it should already be mounted and ready to use

zpool status ... will show which disks are in the pool
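
For reference, for the three-disk pool above the zpool status output looks something like this (illustrative layout only, not captured from a real run):

  pool: rufwoof1
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        rufwoof1    ONLINE       0     0     0
          sdb       ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0

errors: No known data errors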

You can destroy the zpool we’ve created so the disks can be used for other purposes:

# zpool destroy rufwoof1

Bam, the zpool is gone.

But I've only a laptop with a single HDD, partitioned into several separate partitions. ZFS can also be used on loopback files though, which is a nice way to play with it without having to invest in lots of hard drives.

We'll create a number of file filesystems (plain files used as ZFS devices):

# for i in 1 2 3 4; do dd if=/dev/zero of=zfs$i bs=1024M count=1; done

Start the zfs-fuse daemon
# zfs-fuse

and pool them in a mirror arrangement ...
# zpool create rufwoof1 mirror $PWD/zfs1 $PWD/zfs2 mirror $PWD/zfs3 $PWD/zfs4

The $PWD above is important: ZFS requires absolute paths when using files.

With mirroring, writes are duplicated within each pair (zfs1/zfs2 and zfs3/zfs4), so if the zfs1 file (or whatever) is lost, destroyed or corrupted, we still have a copy.

More simply, we might not mirror at all and just create the pool with:
# zpool create rufwoof1 $PWD/zfs1 $PWD/zfs2

run ... zpool status ... to see the status

Your new ZFS filesystem is now live; you can cd to /rufwoof1 and copy some files into it.

We can set ZFS to use transparent compression ...

# zfs set compression=gzip rufwoof1

and turn compression back off again ...

# zfs set compression=off rufwoof1

One important thing: when you enable or disable ZFS compression, you’re not actually changing any data already on the ZFS filesystems. You’re changing the behaviour of ZFS going forward, meaning any future data will be subject to your settings, but your current data will stay as is.
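
So if you do want files that already exist in the pool stored compressed, they have to be rewritten after compression is enabled. A crude sketch (the file name is just an example, and it needs enough free space for the temporary copy):

# rewrite a file so its blocks get stored compressed
cp /rufwoof1/bigfile /rufwoof1/bigfile.tmp && mv /rufwoof1/bigfile.tmp /rufwoof1/bigfile
# then check the pool-wide ratio
zfs get compressratio rufwoof1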

# zfs get all rufwoof1 | grep compress
rufwoof1 compressratio 1x -
rufwoof1 compression off local

In server systems, a nice feature of ZFS is that new physical disks can simply be bought, added to their racks and pooled in to expand existing pools. For single-user desktop systems such as Fatdog, however, having the option to add new file filesystems to the set could be just as useful, and transparent compression also helps keep used space down below what it would be with no compression.

There's a bunch of other things ZFS can do: snapshots, RAID, data-integrity verification and automatic repair, etc. Hopefully the above intro is simple enough to get you going.
user1111

Re: ZFS

Post by user1111 »

I ran

mkdir /zfs; make DESTDIR=/zfs install

and then ran mksquashfs against that /zfs folder to produce the attached zfs.sfs.

After rebooting Fatdog without saving any changes, right-clicking and choosing Load SFS seems to have that working OK. It was built with the standard Fatdog 811 and the default 5.4.60 kernel, so it may work for you, but with the usual disclaimer: no guarantees, use at your own risk, etc.

28MB filesize zfs.sfs

Just remember to run the zfs-fuse command first, otherwise it will complain that zfs-fuse isn't running.

I recommend you cd to an HDD partition with some reasonable space, then run

for i in 1 2; do dd if=/dev/zero of=zfs$i bs=1024M count=1; done

and then

zpool create rufwoof1 $PWD/zfs1 $PWD/zfs2

(or whatever name you prefer for the pool other than rufwoof1)

and then cd to /rufwoof1 and copy in files ...etc.
user1111

Re: ZFS

Post by user1111 »

In a non-persistent environment such as Fatdog without saves, after a reboot you need to do the following to reload those filesystem files into the pool ...

cd to the folder above where the filesystem files are and then run
zpool import -d $PWD/foldername rufwoof1 (or whatever pool name you assigned)

i.e. explicitly specify the directory with the -d switch, otherwise it assumes the devices (filesystem files) live somewhere like the /dev/dsk folder.

The import in effect remounts the pool at /rufwoof1 (or wherever).
user1111

Re: ZFS

Post by user1111 »

DOH! (Homer Simpson moment).

Just discovered that Fatdog already comes with zfs-fuse built in by default :oops:

Beforehand I did search the help pages for zfs - and that returned nowt :roll:

Leaving the postings as they are, as perhaps an interesting ZFS learning/foundation experience.

Just boot fatdog, run zfs-fuse ... and you're good to go.
user1111

ZFS and data resilience

Post by user1111 »

Data resilience :

Say we have Fatdog running on a laptop with four partitions. The first, sda1, is used for Fatdog/system files; sda2 is used as a swap partition; sda3 and sda4 are both formatted to ext4 and empty, used for data.

We mount both sda3 and sda4 and cd /mnt/sda3

Run zfs-fuse to load zfs and

dd if=/dev/zero of=/mnt/sda3/zfs1 bs=1024M count=1
dd if=/dev/zero of=/mnt/sda4/zfsmirror bs=1024M count=1

zpool create rufwoof mirror /mnt/sda3/zfs1 /mnt/sda4/zfsmirror

We can now use /rufwoof to store data files; for instance, open rox /rufwoof and copy a bunch of files into it.

Before shutdown, run

zpool export rufwoof
which has the effect of closing /rufwoof (similar to umount'ing it)

The next time you boot, to load that up again run
zfs-fuse to load up zfs
zpool import -d /mnt/sda3 rufwoof
which reopens /rufwoof (similar to mounting it)

You now have data resilience spread across two partitions (sda3 and sda4). Say, for instance, the next time you boot and before opening your data folder (/rufwoof in this case), you completely wipe the contents of /dev/sda4. Just recreate /mnt/sda4/zfsmirror (dd if=/dev/zero of=/mnt/sda4/zfsmirror bs=1024M count=1) and, when you run zpool import -d /mnt/sda3 rufwoof, your data is intact. If you wiped out sda3 instead, again recreate /mnt/sda3/zfs1, but as that half was lost you should import using the mirror side ... zpool import -d /mnt/sda4 rufwoof ... and again all your /rufwoof data will be intact.
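
One caveat I should add (untested on my side): after importing from the surviving half, the freshly recreated blank file doesn't automatically rejoin the mirror; as I understand it you need to tell ZFS to resilver onto it, something along these lines:

# recreate the lost half, import from the surviving side, then resilver onto the new file
dd if=/dev/zero of=/mnt/sda4/zfsmirror bs=1024M count=1
zpool import -d /mnt/sda3 rufwoof
zpool replace rufwoof /mnt/sda4/zfsmirror
zpool status    # should show the resilver in progress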

I've used rufwoof as the pool name in the above, change that to whatever you like.
Similarly I've used 1GB sizes for the data region, again revise that to whatever you like (or have available).

To back that up off-site, just copy either the /mnt/sda3/zfs1 or the /mnt/sda4/zfsmirror file (where again those files could be named whatever you like); export the pool first so the copy is consistent.

So next time you accidentally delete a partition, or accidentally delete your /mnt/sda3/zfs1 file (or whatever), you can relatively easily recover from the data loss.

If you have two HDDs, even better: set ZFS up to use those drives instead of filesystem files as above, and if/when one drive fails your data is still intact.

ZFS also maintains internal checksums, so if data becomes corrupted over time such that its checksum no longer matches, ZFS will use the mirror copy to repair it.

Whilst mirroring takes up twice the amount of disk space, if you set ZFS to use compression (such as via zfs set compression=gzip rufwoof) then the size of the data stored might be around halved. Whilst compression takes CPU cycles, reading around half as much data from a mechanical HDD is quicker than reading twice as much uncompressed data. I'd guess that it all washes out, such that overall performance might be similar to not using ZFS, but with the added benefit of data resilience (mirroring) as well as the option to use other ZFS capabilities such as creating/restoring snapshots.
jamesbond

Re: ZFS

Post by jamesbond »

rufwoof wrote: Tue Sep 15, 2020 12:38 am Just discovered that Fatdog already comes with zfs-fuse built in by default :oops:
LOL :D We're a bit of a filesystem nut around here :)
Leaving the postings as they are perhaps a interesting ZFS learning/foundation experience.
Yes, very educational. Thank you. Please carry on.
Keef

Re: ZFS

Post by Keef »

I tried installing TrueOS a while back, which uses ZFS. After a short time it would slow to a crawl, and the only reason I could find was that ZFS needs at least 4GB of RAM to be functional. This took a bit of discovering, but there are references. I think I only had 2GB at the time, and although I have 4 now, I'm not sure I'd try again. Installing TrueOS (and its base, FreeBSD) was extremely slow. Don't know if that is down to the filesystem or not.
user1111

Re: ZFS

Post by user1111 »

I've created two 250GB file filesystems (using dd) on two different ext4 partitions, mirror-mounted those with ZFS and copied in around 12GB of data. The filesystem file on sda3 is the main copy; sda4 holds the mirror copy.

Snapshots run quickly, as do rollbacks (though a little slower): perhaps 10 seconds for a snapshot, 30 seconds for a rollback. But I guess that depends on the amount of changes that have occurred.

# Snapshot (you can do/have many)
zfs snapshot data@datasnap15092020

sudo zfs list -t all -r data ... will show all snapshots (children)
# zfs list -t all -r data
NAME               USED  AVAIL  REFER  MOUNTPOINT
data              12.9G   227G  12.9G  /mnt/sda3/data
data@tue15092020  11.6M      -  12.9G  -

To roll back to a snapshot
zfs rollback data@datasnap15092020

To remove a snapshot
zfs destroy data@datasnap15092020
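
A small convenience (not something I've set up, just a sketch): letting date generate the snapshot name means a daily snapshot needs no typing of dates:

# snapshot named after today's date, e.g. data@15092020
zfs snapshot data@$(date +%d%m%Y)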

I set the mount point to be /mnt/sda3/data rather than the default /data (within Fatdog's RAM area), as running a mksquashfs of /data locked up after about 7GB:
zfs set mountpoint=/mnt/sda3/data data
But now when I run a mksquashfs against that, mksquashfs also locks up. htop shows loads and loads of fuse entries, and on my 4GB RAM system swap is being used, so yes, it looks like it is a RAM-hungry beast.

Using a mirror is good for when one partition (or filesystem file) is lost or ruined. But if you have good backups then the effort to recover is probably much the same.

Whilst relatively simple, it also adds layers that introduce other risks, such as running the wrong commands, new commands to learn, etc. i.e. I'm doubtful whether having mirror copies of files via ZFS is of much benefit/use given the higher risk of screwing things up and losing data that way.

Snapshots/rollbacks are nice, but you still need to make backup copies of data. If you're dedicating two partitions to data then you could use rsync as a form of snapshot. Whilst the first run is slow, subsequent runs are relatively quick. A benefit there is that your data isn't 'hidden' away within ZFS-format files; you can just flip to your rsync copy and directly see/access any file/folder quickly and easily. Another choice might be to layer your data folders in a similar way to how Puppy/Fatdog already do for the system (and maybe data) files/folders.
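
As a sketch of that rsync-as-snapshot idea (paths and dates are illustrative): rsync's --link-dest option keeps each run as a separate, directly browsable dated copy, while unchanged files are hard-linked to the previous copy rather than duplicated:

# first run: a full copy
rsync -a /mnt/sda3/data/ /mnt/sda4/backup-20200915/
# later runs: new dated copy, unchanged files hard-linked to the previous one
rsync -a --link-dest=/mnt/sda4/backup-20200915/ /mnt/sda3/data/ /mnt/sda4/backup-20200916/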

Broadly I'd say that for data integrity there are perhaps easier methods than using ZFS, which pretty much leaves ZFS just being good for scalability - adding in new disks/file filesystems - which for desktop Fatdog users on a laptop is a less likely event. With Keef's experience of speed deterioration over time and the high RAM requirements, I think I'll call it a day on ZFS. Time to delete some of my recent Fatdog multisession saves to revert back to how things were before. I am rather too casual with backups, and keeping the two partitions, using one to rsync the other, is perhaps something I should adopt, along with periodically creating an sfs of the rsync'd copy in readiness for 'offline' backup storage. A once-a-day rsync, once-a-week sfs copy type of policy.

One downside to that is greater disk usage. With compression, ZFS might halve the size of the data being stored. But I guess I could use btrfs partitions with compression turned on instead of ext4 to similar effect. Either way it's not much of an issue for me, as I've a 1TB HDD in this laptop and even doubling up my data wouldn't come anywhere near half of this disk's capacity.
user1111

zfs mirroring over samba

Post by user1111 »

On further consideration I opted to look at using ZFS for mirroring, where all changes to files in a ZFS mount on my laptop are (near) instantly mirrored onto an old desktop system's HDD. Sorry, a long, written-as-I-went type of posting, which if I hadn't noted things along the way I'd probably never have posted.

Physical layout

Daily use Fatdog laptop that’s wifi internet connected

Old Fatdog desktop that boots Fatdog from DVD; its HDD is empty, just a single large ext4 partition. Net connected via ethernet cable, on a different LAN segment to the router, i.e. behind a hub.

So network flow between the two is wireless from my laptop to the router (a Virgin Media main ISP router in my case), then from that router to another router (an old Netgear router physically connected to the main router via ethernet cable), and from that second router onto a hub, into which the desktop's ethernet is plugged. Network speed therefore comes down to the weakest link, i.e. the wireless speed.

System Setup

On the desktop I have set SMB to auto-start (in Fatdog Control Panel, Manage Services, Samba set to enabled, running and restart at bootup). I've also added that HDD to /etc/fstab so that it is auto-mounted at reboot. An entry something like

/dev/sda3 /mnt/sda3 ext4 defaults 0 2

My /etc/samba/smb.conf file has an entry

[sda3]
comment = sda3
path = /mnt/sda3
writeable = yes
force user = root
guest ok = yes

added. I also commented out the [Downloads] section as I have no need for that.
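
To sanity-check the share before involving ZFS, something like the following should work, assuming the usual Samba client tools are present (they may or may not be on a given Fatdog install):

# check that smb.conf parses cleanly
testparm -s
# list the shares the server is offering, without a password
smbclient -L //localhost -N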

I changed the default screen size to a lower 1024x800 on the desktop system, which for me works better when remotely logging into the box using X11VNC. I also ran Menu, Network, X11VNC server and set that to auto-restart at reboot.

i.e. the intent is to use the desktop as a file-backup/server device.

With that done, I ran a Fatdog ‘Save’ to preserve the changes (I use multi-session saves both on that desktop, and on the laptop, even though my laptop is booting from its HDD).

So now I can boot both the laptop and desktop, and from the laptop I can go to Menu, Network, TigerVNC and use that to connect to the desktop (running ifconfig on the desktop revealed its IP address of 192.168.1.4), and up pops a window showing the desktop system's desktop (the F8 key has an option to toggle full-screen view on/off).

In the laptop's ~/Shares folder, clicking the Network icon enabled me to mount the desktop share using Samba.

With that all done I ran a Fatdog Save on my laptop.


ZFS mirroring from laptop to desktop over samba


On both systems I created a 30GB file filesystem

dd if=/dev/zero of=zfsX bs=1024M count=30

where X = 1 on my laptop and X = mirror on the desktop, so after a while I have zfs1 on the laptop and zfsmirror on the desktop. On both systems I'm using ext4 as the filesystem the disks were formatted to, and in both cases I stored those files in the root of the partition (which happens to be sda3 on both systems).
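
As an aside, writing 30GB of zeros with dd takes a while. If the truncate utility is available, a sparse file should also work as a file vdev and is created instantly; I haven't tested this on Fatdog, so treat it as a suggestion rather than a recipe:

# sparse 30GB file - blocks only get allocated as ZFS writes to them
truncate -s 30G zfs1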

With the Samba folder open on my laptop, running df -h in a terminal indicates that the desktop mount point is ...

/usr/share/Shares/FATDOG64-9E6—sda3/mnt-point

so the full path to the zfsmirror file (as I created that in the /mnt/sda3 root) is

/usr/share/Shares/FATDOG64-9E6—sda3/mnt-point/zfsmirror

on my laptop the location of the zfs1 file is /mnt/sda3/zfs1

On the laptop, starting zfs-fuse and cd to where zfs1 file is located, I ran

# zpool create rufwoof mirror $PWD/zfs1 /usr/share/Shares/FATDOG64-9E6—sda3/mnt-point/zfsmirror

which presented ...

Defaulting to 4K blocksize (ashift=12) for '/mnt/sda3/zfs1'
Defaulting to 4K blocksize (ashift=12) for '/usr/share/Shares/FATDOG64-9E6—sda3/mnt-point/zfsmirror'

… and it seems to have worked OK.


# zpool status

also indicates that to be set up.

Compression

So that gives a ZFS storage space local to the laptop (a filesystem file) that mirrors all changes to files/folders onto my desktop's HDD over wireless and Samba networking. That's obviously going to mean slow data transfer, so to help improve it I set ZFS compression on. On the laptop ...

# zfs set compression=gzip rufwoof

Checking its on ...

# zfs get all rufwoof | grep compress

… indicated …

rufwoof compressratio 1.00x -
rufwoof compression gzip local

That should (hopefully) help reduce the amount of data flowing across the network, i.e. as writes occur on the laptop, the data stored into the mirror will be compressed, perhaps halving the amount of network data flow between the desktop and laptop.

Operation

So with ZFS active I now have a /rufwoof folder on the laptop, around 30GB in size, whose data storage is mirrored across both my laptop and desktop, with the desktop mirroring all changes made on the laptop in that folder. Data resilience, as though you were backing up all changes onto a separate box/HDD near-instantaneously after they occurred.

Prior to closing that down, run ...

# zpool export rufwoof

which has the effect of closing /rufwoof (similar to umount'ing it)


The next time you boot, to load that up again, first re-establish the samba link/connection/mount and then run

zfs-fuse … to load up zfs

zpool import -d /mnt/sda3 rufwoof

which reopens /rufwoof (similar to mounting it)


Copying a modest amount of data into that /rufwoof (mirrored) folder is relatively slow: a 4GB folder of music files, for instance, copied using rox, ran at around the same speed as if it were being copied directly over Samba, so the mirror isn't really suited to handling large amounts of quick-response data. But for general daily use, as/when you create/edit individual small files such as word-processor documents or spreadsheets, etc., the speed is OK/good.

Usage as I see it is to treat that /rufwoof folder as a daily work area, so all of your changes/data are mirrored onto a totally separate physical box/HDD whilst you create word documents etc., and then periodically back that up locally, perhaps by making an sfs of the /rufwoof folder content:

cd /mnt/sda4 (or wherever)
mksquashfs /rufwoof rufwoof-backup-copy-18Sept2020.sfs

and transfer that sfs file to a USB stick and move/store it wherever you like for your ‘offline’ data backups. That way if your laptop blows up, you've an exact copy of the data that was on it at that time (along with older backups (sfs files)) to recover from.

For shutdown, I suggest you first unload the ZFS pool

zpool export rufwoof

before closing the Samba connection (~/Shares folder: right-click the context menu over the folder and unmount it).

For startup it's nice if the IP etc. of the desktop (mirror) system is fixed, which you can probably arrange in your router configuration. Having Fatdog on the laptop with the share setting saved is also nice (if, like me, you're in the habit of not usually saving changes, then run a save after having made a Samba connection to the desktop system). Then for startup, boot both systems, and on the laptop mount the Samba share (~/Shares folder, click on the folder that is shared but unmounted in order to mount it). Then run

zfs-fuse

then something like

zpool import -d /mnt/sda3 rufwoof

to ‘mount’ the zfs (in this case it auto mounts to /rufwoof)


The above is fine if you mainly access that data whilst at home, i.e. with the Samba share link available. Being behind a firewall (router), that of course isn't available when out and about, unless (??) ... I don't yet know if you can mount just one side of a ZFS mirror pair in isolation. I guess it should be possible, but I haven't tried it myself. In which case you could still access the ZFS-contained data whilst out and about, and then perhaps later re-mount it as a mirrored pair to have the changes mirrored. ???
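
For what it's worth, my understanding (untested, so very much a guess) is that a mirror should import in a DEGRADED state with only one half reachable, so away from the Samba share something along these lines might work:

# only the local half is visible; the pool should come up degraded but usable
zfs-fuse
zpool import -d /mnt/sda3 rufwoof
zpool status    # expect the zfsmirror side to show as UNAVAIL
# once back home with the share mounted again, a scrub should help bring the mirror back in sync
zpool scrub rufwoof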

[Attachment: s.png (screenshot)]

zfs-fuse sure does like shedloads of tasks; even when the system is relatively idle there's heaps of them.
[Attachment: s1.png (screenshot of the task list)]

That's just one partial page of them; I had to page down 7 times before hitting the next program's tasks in the (tree view) listing.
user1111

Re: Playing around with the ZFS filesystem

Post by user1111 »

A couple of informative references:

Splitting a mirrored pair into a single disk: https://docs.oracle.com/cd/E19253-01/81 ... index.html

Adding a mirror to a single disk: https://blog.fosketts.net/2017/12/11/ad ... zfs-drive/
user1111

Re: Playing around with the ZFS filesystem

Post by user1111 »

Well, that Samba-based ZFS mirror worked nicely - for a while. But then the desktop locked up after a few hours of use. So yes, it is possible, and it works, but not to the extent of providing good enough stability for my liking.

Instead: with Samba running on the desktop system (file server), on the laptop ...

cd
mkdir m
mount.cifs //192.168.1.4/mountname /root/m -o user=root
rsync -av /mnt/sda4/ /root/m/

This duplicates /mnt/sda4 (the partition I'm using for data) onto that desktop system. Slow to run the first time, as all files are copied; subsequent runs are much quicker because only new/modified files are transferred.
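
A couple of standard rsync options that may be worth knowing here: -n does a dry run so you can preview what would be transferred, and --delete mirrors deletions as well (use with care, since a deletion on the laptop then also removes the backup copy):

# preview only - nothing is actually copied
rsync -avn /mnt/sda4/ /root/m/
# mirror deletions too, so the backup exactly matches the source
rsync -av --delete /mnt/sda4/ /root/m/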

I connected my laptop via ethernet to run that.
[Attachment: s.png (screenshot)]
user1111

Re: ZFS

Post by user1111 »

jamesbond wrote: Tue Sep 15, 2020 3:25 pm
rufwoof wrote: Tue Sep 15, 2020 12:38 am Just discovered that Fatdog already comes with zfs-fuse built in by default :oops:
LOL :D We're a bit of a filesystem nut around here :)
Just built fuse-zip (needs libzip to be installed from gslapt, then make release, make install) https://github.com/refi64/fuse-zip (a mirror of https://bitbucket.org/agalanin/fuse-zip/src/master/ )

Similar to sfs, but writeable (zip file) :)

mkdir /tmp/zipArchive
fuse-zip foobar.zip /tmp/zipArchive
(do something with the mounted file system)
fusermount -u /tmp/zipArchive
jamesbond

Re: ZFS

Post by jamesbond »

rufwoof wrote: Sat Sep 19, 2020 10:45 am Just built fuse-zip (needs libzip to be installed from gslapt, then make release, make install) https://github.com/refi64/fuse-zip (a mirror of https://bitbucket.org/agalanin/fuse-zip/src/master/ )

Similar to sfs, but writeable (zip file) :)
Added this to the repo, thanks.
How does it compare with archivemount (which is included in the ISO)?
user1111

Re: ZFS

Post by user1111 »

jamesbond wrote: Sat Sep 19, 2020 12:18 pm
rufwoof wrote: Sat Sep 19, 2020 10:45 am Just built fuse-zip (needs libzip to be installed from gslapt, then make release, make install) https://github.com/refi64/fuse-zip (a mirror of https://bitbucket.org/agalanin/fuse-zip/src/master/ )

Similar to sfs, but writeable (zip file) :)
Added this to the repo, thanks.
How does it compare with archivemount (which is included in the ISO)?
I wasn't even aware of archivemount James :oops:
jamesbond

Re: Playing around with the ZFS filesystem

Post by jamesbond »

Nobody knows everything. That's why it's good to have this forum to exchange ideas :thumbup: