Re: RAID 5: weird size results after Grow


 



Marko Berg wrote:
Bill Davidsen wrote:
Marko Berg wrote:
I added a fourth drive to a RAID 5 array. After some complications related to adding a new HD controller at the same time, and thus changing some device names, I re-created the array and got it working (in the sense that nothing is degraded). But the size results are weird. Each component partition is 320 GB; does anyone have an explanation for the "Used Dev Size" field value below? The 960 GB total size is as it should be, but in practice Linux reports the array as having only 625,019,608 blocks.

I don't see that number below; what command reported this?

For instance, df:

$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0             625019608 358223356 235539408  61% /usr/pub

How can this be, even though the array should be clean with 4 active devices?

df reports the size of the filesystem; mdadm reports the size of the array. Growing the array does not grow the filesystem sitting on top of it, so the two will differ until the filesystem itself is resized.
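
For what it's worth, 625,019,608 1K-blocks is roughly 640 GB, which matches the usable capacity of the original three-drive array (2 x 320 GB), so the filesystem most likely still has its pre-grow size. Assuming the filesystem on /dev/md0 is ext2/ext3 (the thread doesn't say which it is), a minimal sequence to check and then grow it would look something like this:

$ cat /proc/mdstat                               # make sure the reshape/resync has finished
$ mdadm --detail /dev/md0 | grep 'Array Size'    # should show ~960 GB after the grow
$ df /usr/pub                                    # filesystem still shows the old ~640 GB
$ resize2fs /dev/md0                             # grow the ext2/ext3 filesystem to fill the device

resize2fs with no size argument expands the filesystem to the full size of the underlying device; recent kernels can do this online on a mounted ext3 filesystem, otherwise unmount first and run e2fsck -f /dev/md0 before resizing. If the filesystem is XFS or ReiserFS instead, the equivalent tools are xfs_growfs and resize_reiserfs.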

--
bill davidsen <davidsen@xxxxxxx>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979

