Expanding a ZFS pool

In an earlier post, I explained how it was possible to grow a ZFS pool by replacing all disks one-by-one. In that post, I also mentioned that if you have enough spare connectors available, you can easily expand the pool by adding another array of disks.

A while back, I had replaced a bunch of older drives in several other computers, so I had quite a pile of different SATA drives lying around, of varying sizes. All were working, but they were nearing or past their warranty expiration[1].

My main fileserver has quite a surplus of drive connectors, so I decided to use a set of them as a simple way to increase storage space. If any of the drives fails, it’ll be just a few minutes’ work to swap it out for another drive from the pile.
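For reference, that swap comes down to a single command. Here’s a sketch, assuming the failed disk shows up as ad6 and the spare from the pile as ad20 (both names are just placeholders):

zpool replace data ad6 ad20    # resilver the spare in place of the failed disk
zpool status data              # keep an eye on the resilver progress

The pool stays online, though degraded, while the resilver runs.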

Here’s what the pool looked like before I started:

[root@thunderflare ~]# zpool status
  pool: data
 state: ONLINE
 scrub: scrub completed after 15h35m with 0 errors on Thu Jul 28 09:39:34 2011
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad4     ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0

errors: No known data errors

[root@thunderflare ~]# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
data                         3.51T   502G  40.4K  /data

The pool consists of four 1.5 TB drives in a single raidz1 vdev, giving me roughly 4 TiB of usable disk space to play with.
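As a quick sanity check: raidz1 gives up one drive’s worth of space to parity, so the expected usable capacity is

        3 × 1.5 TB = 4.5 TB ≈ 4.09 TiB

which matches what zfs list reports, give or take some filesystem overhead.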

I shut down the machine and added four drives to the system: three 250 GB drives and a 500 GB drive. After rebooting the server, I gave the following command:

[root@thunderflare ~]# zpool add -f data raidz /dev/ad12 /dev/ad14 /dev/ad16 /dev/ad18

After that, I checked the pool again:

[root@thunderflare ~]# zpool status
  pool: data
 state: ONLINE
 scrub: scrub completed after 15h35m with 0 errors on Thu Jul 28 09:39:34 2011
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad4     ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad16    ONLINE       0     0     0
            ad18    ONLINE       0     0     0

errors: No known data errors

[root@thunderflare /data/multimedia]# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
data                         3.51T  1.16T  40.4K  /data

As you can see, the pool now consists of two raidz vdevs, and the free disk space has grown by roughly 685 GiB: about the sum of three 250 GB drives, since one drive’s worth of space in the new raidz goes to parity. Of the 500 GB drive, only 250 GB will be used until I replace all the 250 GB drives with larger ones.
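When that day comes, the upgrade should look roughly like this (the new device name is made up, and whether the extra space shows up automatically depends on the ZFS version; older pools only notice the larger disks after an export/import):

zpool set autoexpand=on data    # if the pool version supports this property
zpool replace data ad12 ad20    # swap one 250 GB disk for a bigger one, wait for
zpool status data               # the resilver to finish, then repeat for ad14, ad16 and ad18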

So there you have it: growing a pool by adding disks. Just add the drives and give a single command.

An advantage of this construction is that in many cases the pool can sustain two drive failures at the same time without data loss (namely when the failed drives are in different raidz vdevs). Another advantage is that I can grow both vdevs separately, so if I need more space, I only need four bigger drives, not eight at the same time.

A drawback is that I lose two drives’ worth of storage space to parity. Personally, this doesn’t bother me: with an eight-drive array, I would probably have chosen raidz2 (RAID 6) rather than raidz anyway. This way, I’m more flexible in my drive sizes and upgrade steps.
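To make the trade-off concrete, here’s a sketch of the two layouts side by side (hypothetical pool name, same eight drives). Both spend two drives on parity, but only the raidz2 variant survives any two simultaneous failures, while the two-vdev variant can be upgraded four drives at a time:

zpool create tank raidz  ad4 ad6 ad8 ad10  raidz ad12 ad14 ad16 ad18    # two raidz1 vdevs
zpool create tank raidz2 ad4 ad6 ad8 ad10 ad12 ad14 ad16 ad18           # one raidz2 vdev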

[1] It is generally advisable to replace drives around the time their warranty expires (generally three years, nowadays), unless they’re in a redundant setup you trust, or they contain data you don’t mind losing or restoring from backups.
