Many years ago (about a year or two after ext4 was considered stable) I
needed to perform data recovery on a 16TB volume, so I attempted to
create a raw image of it. I couldn't complete that process on ext4
because of the 16TB file size limit at the time, and I had to use XFS
instead.
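For context, that ceiling follows directly from the width of the block
address: with 32-bit logical block numbers and the common 4 KiB block
size, the arithmetic works out to exactly 16 TiB per file. A minimal
sketch of that calculation (assuming the typical 4 KiB ext4 block size):

```python
# ext4 historically addressed a file's blocks with 32-bit logical
# block numbers; with a 4 KiB block size that caps a single file at:
BLOCK_SIZE = 4096          # bytes per block (typical ext4 default)
MAX_BLOCKS = 2 ** 32       # 32-bit logical block number space
max_file_bytes = BLOCK_SIZE * MAX_BLOCKS
print(max_file_bytes // 2 ** 40, "TiB")  # -> 16 TiB
```

Widening that one field to 64 bits moves the ceiling far beyond any
volume shipping today.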
Also many years ago (roughly 10 years back), I had a dataset on a 16TB
RAID 6 array consisting of 10 years of daily backups, hardlinked to save
space. I ran into the limit of 65000 hardlinks per inode. Without
hardlinks the dataset would have grown to over 400TB. I was forced to
use btrfs instead, which I came to regret because it proved very
unstable, leaving me to choose between XFS and ZFS.
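Anyone wanting to check this ceiling on their own filesystem can query
it through pathconf rather than hunting through kernel headers; a
minimal sketch (the reported value varies by filesystem; on ext4 it is
65000):

```python
import os

# Ask the kernel what the per-inode hardlink ceiling is for the
# filesystem backing the given path. On ext4 this reports 65000
# (EXT4_LINK_MAX); other filesystems report different values.
link_max = os.pathconf(".", "PC_LINK_MAX")
print("max hardlinks per inode here:", link_max)
```

Hitting that ceiling surfaces as EMLINK ("Too many links") from the
link(2) call, which is how a hardlink-based backup scheme fails in
practice.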
Today, the largest single rotational hard drive you can buy is 16TB,
and manufacturers are beginning to sample 18TB and 20TB disks. It is
not uncommon to have tens of TB in a single volume, and single files
are starting to get quite large as well.
I would like to request increasing some (all?) of the limits in ext4
so that they use 64-bit integers at minimum. Yes, I understand this
might cost some performance, but I would prefer a usable slow
filesystem over one that simply can't store the data and is therefore
useless. It's not as if the algorithmic complexity of basic filesystem
operations grows exponentially when you double the number of bits used
for link counts or address space.
Call it ext5 if you have to, but please consider removing all of these
arbitrary limits. There are real-world cases where I need this, and it
needs to work -- even if it is slow. I very much prefer slow and stable
over fast and incomplete/broken.
Thanks for taking the time to consider my request.