(Limitation 1) An expansion that causes the Filesystem to increase by 8 TB since the last Factory Default will fail.
(Limitation 2) An expansion that causes the Total Filesystem size to be 16 TB or more after the expansion will fail.
* These sizes refer to the Filesystem Sizes, not raw disk space.
* The solution/workaround for this is to Factory Default with the final volume in place.
* These are not ReadyNAS limitations, but rather Linux Kernel and EXT Filesystem limitations.
Use calculator here and read all of the notes to understand limitations:
http://infotinks.siigna.net/xraid.html
The calculator doesn't take into account initial conditions or the time variable, so it can't account for these limitations. However, you can manually calculate the original filesystem size and the new filesystem size and compare them: if the difference crosses the 8 TB growth limit, Limitation 1 is met. For Limitation 2, if the original filesystem size is less than 16 TB and the new filesystem size is 16 TB or more, the 16 TB line has been crossed and Limitation 2 is met. If a limitation is met, the expansion will fail; the volume will not be corrupt, it just will not expand. The layers below the filesystem, such as LVM and RAID, will still expand, just not the filesystem.
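Here is a minimal Python sketch of that manual check, assuming you know the Filesystem size at the last Factory Default and the projected size after the expansion (both as Filesystem sizes in TB, not raw disk space). The function name and the exact boundary used for the 8 TB growth limit are my own assumptions from the wording above.

```python
# Rough sanity check for the two X-RAID expansion limitations described above.
# Sizes are Filesystem sizes in TB, not raw disk space.

def expansion_limitations_hit(size_at_factory_default_tb, size_after_expansion_tb):
    """Return a list of the limitations a proposed expansion would run into."""
    hit = []

    # Limitation 1: the filesystem grows by 8 TB since the last Factory Default.
    # (Whether the limit is > or >= is my assumption; treat 8 TB as the line.)
    growth = size_after_expansion_tb - size_at_factory_default_tb
    if growth >= 8:
        hit.append("Limitation 1: grew %.1f TB since last Factory Default" % growth)

    # Limitation 2: the total filesystem size is 16 TB or more after the expansion.
    if size_after_expansion_tb >= 16:
        hit.append("Limitation 2: new size %.1f TB crosses the 16 TB line" % size_after_expansion_tb)

    return hit

# Example: 9 TB at the last Factory Default, expanding to 18 TB -> both limitations hit,
# so the filesystem will not expand (LVM and RAID underneath still will).
for problem in expansion_limitations_hit(9, 18):
    print(problem)
```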
Other Limitations
Can NOT Add SMALLER Drives
When you click CALCULATE above it assumes that this is a fresh start, as if you factory defaulted with that set of disks.
The reason I mention this is because adding SMALLER disks to the disk set works out in this calculator
HOWEVER, in a live unit you cannot add SMALLER disks: the partitioning will not work out and the new disks will simply sit there unused
What do I mean by SMALLER disks? Example, if your system only has 2 TB and 3 TB disks, do not add a 1 TB disk.
The WORKAROUND for the system to accept the SMALLER disks is to back up your data and Factory Default with all disks in place
The BEST SOLUTION in my opinion is to only add bigger disks, or new disks equal in size to any of the disks operating in the system (a quick check covering both of these cases is sketched after the next section)
Can NOT Add INBETWEEN Drives
When you click CALCULATE above it assumes that this is a fresh start, as if you factory defaulted with that set of disks.
The reason I mention this is because adding INBETWEEN disks to the disk set works out in this calculator
HOWEVER, in a live unit you cannot add INBETWEEN disks: the partitioning will not work out and the new disks will simply sit there unused
What do I mean by INBETWEEN disks? Example, if your system only has 1 TB and 3 TB disks, do not add a 2 TB disk.
The WORKAROUND for the system to accept the INBETWEEN disks is to back up your data and Factory Default with all disks in place
The BEST SOLUTION in my opinion is to only add bigger disks, or new disks equal in size to any of the disks operating in the system
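Below is a quick sketch of that rule of thumb covering both the SMALLER and INBETWEEN cases. It simply applies the advice above: on a live unit, a new disk should either match one of the existing disk sizes or be at least as big as the biggest disk already in the system. The function name and TB units are mine, for illustration only.

```python
# Rule-of-thumb check before adding a disk to a live unit,
# based on the SMALLER / INBETWEEN limitations above.

def disk_usable_without_factory_default(existing_tb, new_tb):
    """existing_tb: sizes of disks already in the system; new_tb: the proposed disk."""
    if new_tb in existing_tb:          # equal to an existing disk size: fine
        return True
    if new_tb >= max(existing_tb):     # bigger than anything in the system: fine
        return True
    # Anything else is SMALLER or INBETWEEN and would sit unused
    # until you back up and Factory Default with all disks in place.
    return False

print(disk_usable_without_factory_default([2, 3], 1))  # False: SMALLER disk
print(disk_usable_without_factory_default([1, 3], 2))  # False: INBETWEEN disk
print(disk_usable_without_factory_default([2, 3], 4))  # True: bigger disk
print(disk_usable_without_factory_default([2, 3], 2))  # True: matches an existing size
```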
XRAID1 vs. XRAID2
* XRAID1 is used in the 4.1.x (and old 3.x) firmware of ReadyNAS.
* XRAID2 is used in ReadyNAS 4.2.x, 5.x, and 6.x.
* XRAID1 uses a proprietary RAID solution with RAID4-like behavior, so the RAID5 mathematics work out for it here.
Side note: RAID4 and RAID5 work out to the same storage space mathematically.
On top of XRAID1 sits the EXT filesystem.
* XRAID2 is a genius installation of MDADM on top of carefully carved out partitions, onto which a volume manager is installed.
The volume manager for 4.2.x and 5.3.x is LVM. On top of the LVM goes the EXT filesystem. For 6.x the volume manager and filesystem are combined because BTRFS can handle it all very well.
Quick note on XRAID1 – (this might be incorrect, so I will see if I can get it looked over and fix it up if need be)
* The expansion logic with XRAID1 is different than XRAID2. XRAID2 does its best to expand on the fly with every new drive.
* HOWEVER XRAID1 needs a whole set of new (and bigger) sized drives to expand.
* For XRAID1 – there are no longer units available that support this, all the XRAID1 units are End of Life.
* With XRAID1 the idea is that you need to replace all of the smaller drives to utilize the full disk capacity.
* The XRAID1 disk space equation is simple, however its biggest downside is pretty apparent (back in the days of XRAID1, though, this was remarkable):
* The equation for Total Disk Space(not including Overhead) for XRAID1 is: (smallest disk in array)*(number of drives-1)=Total Disk Space
* For example an array of 80 GB, 80 GB, 80 GB, 160 GB, 160 GB, 160 GB would have a total storage of (80 GB) * (6-1) = 80 * 5 = 400 GB
From that 400 GB don't forget to subtract the overhead; I can't tell you the exact amount off the top of my head, but it's not much.
However, this still gives a close estimate.
Also, there was never an XRAID1 device with this many drive slots as far as I am aware; the most slots available for an XRAID1 system was 4.
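The XRAID1 capacity equation above is simple enough to sketch in a few lines of Python (function name and GB units are mine; the small overhead is ignored, as noted above):

```python
# XRAID1 usable space (ignoring the small overhead mentioned above):
# (smallest disk in array) * (number of drives - 1)

def xraid1_total_space(disks_gb):
    return min(disks_gb) * (len(disks_gb) - 1)

# The example from above: three 80 GB and three 160 GB drives.
print(xraid1_total_space([80, 80, 80, 160, 160, 160]))  # 400 (GB)
```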
Special Thanks goes out to:
Read this if confused on base 2 and base 10 disk drive sizes:
Why Your Hard Drive Shows Less Space Than Advertised
Also this one by me: Drive manufacturer Sizes