Re: Shrinking number of devices on a RAID-10 (near 2) array

Hi Neil,

On Mon, Aug 25, 2014 at 09:26:14PM +1000, NeilBrown wrote:
> What does "mdadm --examine" on one of the devices report now that it is v1.0?
> 
> Particularly interested in the "Unused Space :" line.

/dev/sda3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 3905b303:ca604b72:be5949c4:ab051b7a
           Name : 2
  Creation Time : Sun Jun  4 08:18:59 2006
     Raid Level : raid10
   Raid Devices : 6

 Avail Dev Size : 618726528 (295.03 GiB 316.79 GB)
     Array Size : 928089792 (885.10 GiB 950.36 GB)
   Super Offset : 618727392 sectors
   Unused Space : before=0 sectors, after=864 sectors
          State : active
    Device UUID : e30176be:81a57e84:1f2aa206:9515150f

    Update Time : Sun Aug 24 14:29:15 2014
       Checksum : bd599c72 - correct
         Events : 1

         Layout : near=2
     Chunk Size : 64K

   Device Role : Active device 5
   Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

All other devices report the same (apart from UUID/checksum/role).
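Since Neil is specifically after the "Unused Space :" figure (the room left between the end of the data and the v1.0 superblock at the end of each device), a small helper to pull that value out of the examine output may be handy. This is only a sketch: the device names and the exact output layout are assumptions based on the output pasted above.

```shell
# Extract the "after=N sectors" value from `mdadm --examine` output on stdin.
# Assumes the v1.0 output format shown above (before=.../after=... on one line).
unused_after() {
    grep 'Unused Space' | sed -n 's/.*after=\([0-9]*\) sectors.*/\1/p'
}

# Hypothetical usage, run as root against each member (device names assumed):
# for dev in /dev/sd[a-f]3; do
#     printf '%s: %s sectors unused after data\n' \
#         "$dev" "$(mdadm --examine "$dev" | unused_after)"
# done
```

For the superblock shown above this would print 864 for /dev/sda3, matching the "after=864 sectors" line.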

Cheers,
Andy
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



