ZFS is a file system developed by Sun Microsystems for Solaris. It was released as open source under the CDDL with OpenSolaris. A port was created for FreeBSD 7.0-CURRENT and imported into MidnightBSD with 0.3-CURRENT.
ZFS is considered an alternative file system to UFS2 in MidnightBSD. It has independent RAID features that are not tied to GEOM classes. It does not make use of the VFS cache and has some issues with NFS. Advantages include support for very large file systems and large pools of disks. It supports checksum-based data integrity checking and can repair bad data when raidz is used.
MidnightBSD includes ZFS file system and storage pool version 6. You may access pools created on other operating systems at or below this version. If you upgrade a pool to version 6, you will no longer be able to read it on older versions.
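As a sketch, you can see which pools are running older versions and then upgrade one with zpool upgrade; the pool name mpool here is only a placeholder:
zpool upgrade
zpool upgrade mpool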
ZFS can be used in two ways. You may either dedicate entire disks to ZFS (recommended) or use GPT partitions (mnbsd-zfs in 0.4-CURRENT) to add to a pool. ZFS shines when used with RAID features.
If you're going to use RAID, determine how many disks you want to use. It's best to group drives of identical size. If possible, use the same brand and model of drive when mirroring. If you have two drives, use a mirror. If you have more than two drives, consider using raidz. You may add multiple mirror sets (two at a time) to the pool, as shown below.
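For instance, you could later grow a mirrored pool by adding a second mirror set. This sketch assumes two additional identical drives, ad4 and ad5 (hypothetical device names):
zpool add mpool mirror /dev/ad4 /dev/ad5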
ZFS also supports adding spare drives to the pool. They will be used automatically when a drive fails.
It is strongly recommended to use ZFS only with amd64 MidnightBSD and only on systems with more than 1GB of RAM. It will require tuning sysctls to get the right balance of memory usage. In particular, you need to watch the ARC size, as it can grow very large.
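As a sketch, assuming the FreeBSD-derived loader tunables, you can cap the ARC size in /boot/loader.conf; the 512M value here is only an illustrative starting point, not a recommendation:
vfs.zfs.arc_max="512M"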
MidnightBSD does not support booting from ZFS at this time. It may be added in a future release. You need a UFS/UFS2 partition for / including /boot, but /var, /tmp, /usr and /home can be on ZFS.
In these examples, mpool and tank are used as pool names. You can pick any name for the pool, but tank is very common. After creating a pool named tank, you'll see /tank.
You will most likely want to add zfs_enable="YES" to /etc/rc.conf so that ZFS is loaded on system startup.
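That is, add the following line to /etc/rc.conf:
zfs_enable="YES"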
Create a mirror
zpool create mpool mirror /dev/ad0 /dev/ad1
Add a spare drive
zpool add mpool spare /dev/ad3
Check status
zpool status
Listing information about pools
zpool list
zfs list
Create file systems
zfs create mpool/data
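You can then tune per-file-system properties with zfs set. For example, to mount the new file system at /data (a path chosen only for illustration):
zfs set mountpoint=/data mpool/data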
Use raidz instead (RAID 5-like mode)
zpool create tank raidz /dev/ad0 /dev/ad1 /dev/ad2
Scrub data (check for errors)
zpool scrub tank
During a hardware upgrade such as moving to a new motherboard or controller, one might find their zpool damaged. Usually the cause is that the device name has changed. For instance, a recent upgrade moved ad6 to ad12.
To fix this problem, several steps are required.
Remove the stale zpool cache and reboot:
rm /boot/zfs/zpool.cache && shutdown -r now
Run zpool list; it should not show the pool.
zpool list
Run zpool import; it should show you possible pools to recover.
zpool import
Import the pool by name:
zpool import <name of pool>
To verify it worked, run zpool list
A ZFS snapshot is a point-in-time copy, or bookmark, of your data. You can use it to compare changes made to a file system or to back up a file system. This allows you to get your data back after trying an upgrade, etc. It can be a handy trick for making copies of jails easily.
You can create a snapshot named 1 using the following:
zfs snapshot tank/test@1
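To get your data back after an unwanted change, you can roll the file system back to a snapshot. Note that this discards everything written after the snapshot was taken:
zfs rollback tank/test@1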
You can also apply a snapshot recursively to all file systems in a pool with the -r flag
zfs snapshot -r tank/home@now
As more changes occur to a file system, the amount of disk space a snapshot takes increases. You will want to purge old snapshots to free up disk space when they are no longer needed.
zfs destroy tank/home@now
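Snapshots that were created recursively can be destroyed the same way; with -r, this removes the snapshot named now on tank/home and all of its descendant file systems:
zfs destroy -r tank/home@now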
You can use the rename command to rename a snapshot, the hold command to prevent removal of a snapshot, and many more options. Consult the relevant man pages for more information.
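For example, renaming a snapshot might look like this (the new snapshot name is arbitrary):
zfs rename tank/home@now tank/home@yesterday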
Finally, you can list snapshots
zfs list -t snapshot
You can also make zfs list show snapshots by default by changing this setting
zpool set listsnapshots=on tank
You can use zfs to send a snapshot to the same or another pool with the zfs send and receive commands. This can be used to backup ZFS file systems to another location such as an external disk.
To back up the snapshot named 1 from the file system test:
zfs send tank/test@1 | zfs receive tank/testback
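Once a later snapshot exists, you can send only the changes between two snapshots as an incremental stream; this sketch assumes a second snapshot named 2 exists on tank/test:
zfs send -i tank/test@1 tank/test@2 | zfs receive tank/testback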
Many 4k-sector drives do not report their sector size properly, in a misguided attempt at backward compatibility. ZFS works fine with drives that report correctly, but for the rest of them the following workaround is recommended.
gpart create -s gpt ada0
gpart create -s gpt ada1
# create partitions
gpart add -a 1m -t mnbsd-zfs -l drive0 ada0
gpart add -a 1m -t mnbsd-zfs -l drive1 ada1
# use gnop to make 4k friendly devices
gnop create -S 4096 gpt/drive0
gnop create -S 4096 gpt/drive1
# make a mirror
zpool create mpool mirror /dev/gpt/drive0.nop /dev/gpt/drive1.nop
# export pool and remove virtual devices
zpool export mpool
gnop destroy gpt/drive0.nop
gnop destroy gpt/drive1.nop
# import and keep labels (via -d flag)
zpool import -d /dev/gpt mpool
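To confirm that the pool picked up the 4k sector size, you can check its ashift value; an ashift of 12 corresponds to 4096-byte sectors. This assumes your zdb can inspect the imported pool:
zdb mpool | grep ashift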