Notes on IOPS (Input/Output Operations Per Second) and zpools (example zpool name: abyss)
Choose any 2: speed | reliability | cost
If you pick speed and reliability, it will not be cheap
If you pick reliability and cost, it will not be fast
If you pick speed and cost, it will not be reliable
IOPS computation/approximation via seek time
Using the specifications of a Samsung Spinpoint F3 HD103SJ (7200 RPM, avg seek time 8.9 ms, avg latency 4.17 ms): IOPS ~= 1 / (avg seek time + avg latency) = 1 / (8.9 ms + 4.17 ms) ~= 76 IOPS
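That estimate as a quick sketch (the 1 / (seek + latency) formula is the usual per-disk approximation, not a vendor figure):

```shell
# Per-disk IOPS estimate: each random op costs an average seek plus an
# average rotational latency. Specs from the Spinpoint F3 HD103SJ above.
seek_ms=8.9
latency_ms=4.17
awk -v s="$seek_ms" -v l="$latency_ms" \
    'BEGIN { printf "~%d IOPS per disk\n", 1000 / (s + l) }'
# prints: ~76 IOPS per disk
```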
Alternate: approximation via pure rotational speed
At 7200 RPM the platter completes 120 revolutions per second, so waiting a full revolution per operation gives IOPS ~= RPM / 60 = 120.
One argument says it is actually 2 x 120 = 240, because on average you wait only half a revolution, 180 degrees (ie IOPS ~= RPM / 30), versus a full revolution.
However, if the seek cannot complete within a small rotational window, say 10 degrees, you end up waiting one full revolution plus that delta: 360 + 10 = 370 degrees.
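The two rotational bounds above, as arithmetic:

```shell
# 7200 RPM = 120 revolutions per second.
rpm=7200
echo "full-revolution wait: $(( rpm / 60 )) IOPS"   # RPM/60 = 120
echo "half-revolution wait: $(( rpm / 30 )) IOPS"   # RPM/30 = 240
```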
What does this all mean for my POOL?
So, if we needed 300 IOPS with a 50% read and 50% write workload from a RAID-Z (write penalty = 4) pool: raw IOPS = reads + (writes x penalty) = 150 + (150 x 4) = 750
For a similar 300 IOPS with a 50% read and 50% write workload on a RAID 1 or RAID 10 (write penalty = 2) pool: raw IOPS = 150 + (150 x 2) = 450
This says that a pool capable of 750 raw IOPS would be needed to support a 300 IOPS RAID-Z workload with a 50/50 read/write mix.
Note, only a 450 raw IOPS pool is needed to support the same 300 IOPS on a RAID 1 or RAID 10 pool with the same 50/50 read/write workload.
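The sizing rule above, raw IOPS = read IOPS + (write IOPS x write penalty), as a sketch:

```shell
target=300                  # frontend IOPS we need
reads=$(( target / 2 ))     # 50% reads  -> 150
writes=$(( target / 2 ))    # 50% writes -> 150
echo "RAID-Z    (penalty 4): $(( reads + writes * 4 )) raw IOPS"   # 750
echo "RAID 1/10 (penalty 2): $(( reads + writes * 2 )) raw IOPS"   # 450
```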
See the zpool read/write IOPS CLI metrics from zpool iostat (operations column)
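For the example pool abyss, those per-vdev operation counts can be watched with (the 5-second interval is an arbitrary choice here):

```shell
# -v breaks the stats down per vdev; the "operations" read/write columns
# show the observed IOPS. Repeats every 5 seconds until interrupted.
zpool iostat -v abyss 5
```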
Optimal RAID-Zx members per vdev rule: 2^n + p disks
Where n is 1, 2, 3, 4, . . .
And p is the parity count: p=1 for RAID-Z1, p=2 for RAID-Z2 and p=3 for RAID-Z3
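The rule enumerated for the first few values of n, as a quick sketch:

```shell
# Optimal vdev widths: 2^n data disks + p parity disks.
for p in 1 2 3; do
    widths=""
    for n in 1 2 3 4; do
        widths="$widths $(( (1 << n) + p ))"
    done
    echo "raid-z$p:$widths disks per vdev"
done
# raid-z1: 3 5 9 17 disks per vdev
# raid-z2: 4 6 10 18 disks per vdev
# raid-z3: 5 7 11 19 disks per vdev
```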
RAID1 aka mirror
This example creates a mirror pool with 1 vdev of 2 disks (1 data disk plus its mirror).
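A minimal sketch of that example (da0/da1 are placeholder FreeBSD-style device names; substitute your own):

```shell
# One mirror vdev of two disks.
zpool create abyss mirror da0 da1
```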
adding a striped, mirrored vdev. Growth becomes similar to RAID 10: each add stripes the pool across one more mirrored vdev.
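Sketch of that growth step (placeholder device names):

```shell
# Add a second mirror vdev; the pool now stripes across both mirrors (RAID 10-like).
zpool add abyss mirror da2 da3
```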
3-way mirror
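Sketch, with placeholder device names:

```shell
# One mirror vdev of three disks; survives two disk failures.
zpool create abyss mirror da0 da1 da2
```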
RAID-Z aka RAID-Z1
Similar to RAID5, but uses a variable-width stripe for parity, which avoids the RAID5 write hole and allows better performance than RAID5. RAID-Z tolerates a single disk failure. This example creates a pool with 1 vdev: 2 data disks and 1 parity disk.
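Sketch of that example (placeholder device names):

```shell
# RAID-Z1 vdev: 3 disks = 2 data + 1 parity (2^1 + 1).
zpool create abyss raidz da0 da1 da2
```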
adding a vdev to grow the RAID-Z pool
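Sketch of the grow step (placeholder device names):

```shell
# Add a second 3-disk RAID-Z1 vdev; data stripes across both vdevs.
zpool add abyss raidz da3 da4 da5
```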
RAID-Z2
Similar to RAID6, and allows 2 drive failures before being vulnerable to data loss. Here we have 1 vdev with 2 data and 2 parity disks.
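Sketch of that example (placeholder device names):

```shell
# RAID-Z2 vdev: 4 disks = 2 data + 2 parity (2^1 + 2).
zpool create abyss raidz2 da0 da1 da2 da3
```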
adding a vdev to grow the RAID-Z2 pool
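Sketch of the grow step (placeholder device names):

```shell
# Add a second 4-disk RAID-Z2 vdev to the pool.
zpool add abyss raidz2 da4 da5 da6 da7
```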
RAID-Z3
Allows 3 drive failures before being vulnerable to data loss. Here we have 1 vdev with 2 data and 3 parity disks.
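Sketch of that example (placeholder device names):

```shell
# RAID-Z3 vdev: 5 disks = 2 data + 3 parity (2^1 + 3).
zpool create abyss raidz3 da0 da1 da2 da3 da4
```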
adding a vdev to grow the RAID-Z3 pool
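Sketch of the grow step (placeholder device names):

```shell
# Add a second 5-disk RAID-Z3 vdev to the pool.
zpool add abyss raidz3 da5 da6 da7 da8 da9
```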
Adding a spare
Having a spare ready minimizes the time your pool is unprotected. You can begin replacement as soon as a failure occurs.
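Sketch (placeholder device name):

```shell
# Hot spare, available to the whole pool.
zpool add abyss spare da6
```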
Adding a log (ZIL/write cache; a dedicated SLOG device accelerates synchronous writes)
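Sketch (placeholder device name; a fast SSD is the usual choice):

```shell
# Dedicated log (SLOG) device holding the ZIL.
zpool add abyss log da7
```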
Adding a cache (L2ARC/Read cache)
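Sketch (placeholder device name):

```shell
# L2ARC read-cache device; losing a cache device only costs cached reads.
zpool add abyss cache da8
```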