On 27.11.22 at 20:37, Piergiorgio Sartor wrote:
On Sun, Nov 27, 2022 at 07:21:16PM +0100, Reindl Harald wrote:
You cannot consider the amount of data in the
array as a parameter for reliability.
If the array is 99% full, MD and ZFS/BTRFS have
the same behaviour, in terms of reliability.
The same if the array is 0% full.
you completely miss the point!
if your mdadm array is built with 6 TB drives, then when you replace a drive
you need to sync 6 TB, no matter if 10 MB or 5 TB are actually used
I'm not missing the point, you're not
understanding the consequences of
your way of thinking.
If the ZFS/BTRFS is 99% full, how much
time will it need to be synced?
The same (more or less) as MD.
for the sake of god, the point was that mdadm doesn't know anything about
the filesystem, because it's a "dumb" block layer
at the end of the day that means when a 6 TB drive fails, the full 6 TB
needs to be re-synced, no matter how much space is used on the FS on top
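A toy calculation makes both positions in the thread concrete (the 150 MB/s sequential rebuild rate is a hypothetical figure, not from the thread): md always copies the whole device regardless of fill level, while a filesystem-aware resilver (ZFS/BTRFS) copies only allocated data, so the two only converge when the array is nearly full.

```python
# Rough resync-time comparison: md rebuilds the whole block device,
# a filesystem-aware resilver copies only the allocated data.
# The 150 MB/s throughput is an assumed, hypothetical rebuild rate.

def resync_hours(bytes_to_copy, throughput_mb_s=150):
    """Hours to copy `bytes_to_copy` at a given sequential rate."""
    return bytes_to_copy / (throughput_mb_s * 1e6) / 3600

TB = 1e12
drive = 6 * TB

md_any_fill = resync_hours(drive)           # md: always the full 6 TB
resilver_sparse = resync_hours(10e6)        # resilver: only 10 MB allocated
resilver_full = resync_hours(0.99 * drive)  # resilver: array 99% full

print(f"md, any fill level:   {md_any_fill:.1f} h")
print(f"resilver, 10 MB used: {resilver_sparse * 3600:.2f} s")
print(f"resilver, 99% full:   {resilver_full:.1f} h")
```

At 99% full the resilver takes essentially as long as the md rebuild, which is Piergiorgio's point; at 10 MB used it finishes in a fraction of a second, which is Harald's.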