shown disk sizes

Hi Neil, et al...

(btw: do we have an issue tracker somewhere?)

I was experimenting a bit... I created two GPT partitions of exactly
10 GiB each (i.e. 20971520 sectors of 512 B).
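(For reference, a sketch of how such partitions could be created; this
is sgdisk syntax, and the device names are just examples:
sgdisk --new=1:2048:+10G /dev/sda
sgdisk --new=1:2048:+10G /dev/sdb
sgdisk treats "+10G" as a binary size, i.e. 10 GiB = 20971520 sectors.)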

Created a RAID 1 on them:
mdadm --create /dev/md/data --verbose --metadata=1.2 --raid-devices=2
--spare-devices=0 --size=max --chunk=32 --level=raid1 --bitmap=internal
--name=data /dev/sda1 /dev/sdb1
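(The component sizes can be double-checked with e.g.:
blockdev --getsz /dev/sda1
which should print 20971520, the partition size in 512 B sectors.)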

The size is a multiple of the 32 KiB chunk size, so no rounding effects
should kick in.
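(Quick check: 20971520 sectors * 512 B = 10737418240 B, and
10737418240 B / 32768 B = 327680 chunks exactly, no remainder.)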


Now:
--examine gives for both devices:
Avail Dev Size : 20969472 (10.00 GiB 10.74 GB)
     Array Size : 20969328 (10.00 GiB 10.74 GB)
  Used Dev Size : 20969328 (10.00 GiB 10.74 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors

=> Avail is the available payload size on each component device... so
given that the first 2048 sectors are used for the
superblock/bitmap/etc., that fits exactly.
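(That is: 20971520 total sectors - 2048 sectors data offset = 20969472
sectors, matching the Avail Dev Size exactly.)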

=> Why are the Array Size and Used Dev Size smaller than that?
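(For the record: 20969472 - 20969328 = 144 sectors, i.e. 72 KiB,
unaccounted for.)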



--detail gives:
     Array Size : 10484664 (10.00 GiB 10.74 GB)
  Used Dev Size : 10484664 (10.00 GiB 10.74 GB)

=> That's half of the Array Size from above? Is that a bug?
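(One guess: if --detail printed its sizes in 1 KiB units rather than in
512 B sectors, the factor of two would be explained exactly, since
10484664 KiB = 20969328 sectors. But the output doesn't label the unit,
which is confusing either way.)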


--query gives yet another value:
/dev/md/data: 9.100GiB raid1 2 devices, 0 spares. Use mdadm --detail for
more detail.
=> But the device really seems to have 20969328 sectors... so the
9.100 GiB seems a bit bogus as well?
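(Quick check: 20969328 sectors * 512 B = 10736295936 B, which is about
10.00 GiB, so I'd expect ~10 GiB here rather than 9.100 GiB.)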


Last but not least... when the tools print values like "10.00 GiB 10.74
GB", wouldn't it be better if they printed "~10.00 GiB ~10.74 GB" or
something like that, to show that the values are rounded and not
_exactly_ 10 GiB? That could help avoid misalignment issues.
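(For example, both 20969328 sectors (~9.999 GiB) and 20971520 sectors
(exactly 10 GiB) print as "10.00 GiB", so one cannot tell from the
output whether a device is really aligned to a full 10 GiB.)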


Cheers,
Chris.


