Re: mdadm: /dev/md0 has been started with 1 drive (out of 2).

Hello,

So I've successfully rebuilt the array and added an internal bitmap. I haven't run any extensive I/O tests, but I continued copying my data off the old disks and haven't really noticed a serious impact. This is only a first impression, but so far so good.
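
For the record, adding the bitmap to the already running array came down to roughly this (a sketch from memory; /dev/md0 matches my setup, but the exact invocation may have differed slightly):

   # add an internal write-intent bitmap to the running array (as root)
   mdadm --grow --bitmap=internal /dev/md0
   # confirm it shows up
   grep bitmap /proc/mdstat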

Now that I have the bitmap, I deliberately repeated the sleep/resume cycle exactly as I did the last time it led to array degradation, and sure enough the system started up with a degraded array again. In fact, it is even messier this time, because both devices were dynamically assigned new /dev/sdX names: before sleep they were /dev/sdc1 and /dev/sdd1, after resume they became /dev/sdd1 and /dev/sdb1.
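
As an aside, the renaming can be untangled without digging through the logs by looking at the persistent device names instead of /dev/sdX; a sketch:

   # the /dev/sdX names move around across suspend/resume, but the by-id
   # symlinks (serial-number based) keep pointing at the same physical drives
   ls -l /dev/disk/by-id/
   # and the current members can always be listed from the array side
   mdadm --detail /dev/md0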

So I unmounted the filesystem on the array and stopped the array. Then I reassembled it, and it looks to be in good shape. However, I am wondering whether this is due to the internal bitmap: what surprised me is that the array was assembled and shown as in sync instantly. Worth noting, before the laptop went to sleep there were no processes writing to the array disks (I made sure of that), so the data should be consistent on both drives; but, as we know from my very first message, the event counts may still differ upon resume from sleep.
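
(If it matters, my understanding is that the bitmap itself can be dumped from a member's superblock to see whether any chunks were left dirty, e.g. with the current device names:)

   # dump the internal write-intent bitmap; the output includes the event
   # counters and how many bitmap chunks are still marked dirty
   mdadm --examine-bitmap /dev/sdd1    # same as: mdadm -X /dev/sdd1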

Anyway, here's how it looked in more specific terms:

> % mount
> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
> sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
> dev on /dev type devtmpfs (rw,nosuid,relatime,size=1551228k,nr_inodes=216877,mode=755)
> run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
> /dev/sda3 on / type ext3 (rw,relatime,data=ordered)
> devpts on /dev/pts type devpts (rw,relatime,mode=600,ptmxmode=000)
> shm on /dev/shm type tmpfs (rw,nosuid,nodev,relatime)
> binfmt on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
> tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime)
> gvfs-fuse-daemon on /home/ilj/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
> /dev/md0 on /mnt/RAIDVault type ext4 (rw,relatime,stripe=32,data=ordered)
> % cat /proc/mdstat 
> Personalities : [raid1] 
> md0 : active raid1 sdc1[2]
>      1953276736 blocks super 1.2 [2/1] [U_]
>      bitmap: 0/15 pages [0KB], 65536KB chunk
> 
> unused devices: <none>
> % ls -lah /mnt/RAIDVault/
> ls: reading directory /mnt/RAIDVault/: Input/output error
> total 0
> % su
> Password: 
> % umount /mnt/RAIDVault/
> % mdadm --stop --scan
> mdadm: stopped /dev/md0
> …
> here I checked the logs, learned that the drives are now represented by different block device files
> …
> % ls /dev/sd
> sda   sda1  sda2  sda3  sdb   sdb1  sdd   sdd1  sde   
> % mdadm --assemble --scan
> mdadm: /dev/md0 has been started with 2 drives.
> % cat /proc/mdstat 
> Personalities : [raid1] 
> md0 : active raid1 sdd1[2] sdb1[1]
>      1953276736 blocks super 1.2 [2/2] [UU]
>      bitmap: 0/15 pages [0KB], 65536KB chunk
> 
> unused devices: <none>
> % mdadm --examine /dev/sdd1
> /dev/sdd1:
>          Magic : a92b4efc
>        Version : 1.2
>    Feature Map : 0x1
>     Array UUID : c4cf4a52:6daa94c8:6d88a2fa:8f604199
>           Name : sega:0  (local to host sega)
>  Creation Time : Fri Nov  1 16:24:18 2013
>     Raid Level : raid1
>   Raid Devices : 2
> 
> Avail Dev Size : 3906553856 (1862.79 GiB 2000.16 GB)
>     Array Size : 1953276736 (1862.79 GiB 2000.16 GB)
>  Used Dev Size : 3906553472 (1862.79 GiB 2000.16 GB)
>    Data Offset : 262144 sectors
>   Super Offset : 8 sectors
>          State : clean
>    Device UUID : 827ed4c3:baf1ba90:d8f21e10:e524d383
> 
> Internal Bitmap : 8 sectors from superblock
>    Update Time : Fri Nov  8 00:10:07 2013
>       Checksum : 18402838 - correct
>         Events : 56
> 
> 
>   Device Role : Active device 0
>   Array State : AA ('A' == active, '.' == missing)
> % mdadm --examine /dev/sdb1
> /dev/sdb1:
>          Magic : a92b4efc
>        Version : 1.2
>    Feature Map : 0x1
>     Array UUID : c4cf4a52:6daa94c8:6d88a2fa:8f604199
>           Name : sega:0  (local to host sega)
>  Creation Time : Fri Nov  1 16:24:18 2013
>     Raid Level : raid1
>   Raid Devices : 2
> 
> Avail Dev Size : 3906553856 (1862.79 GiB 2000.16 GB)
>     Array Size : 1953276736 (1862.79 GiB 2000.16 GB)
>  Used Dev Size : 3906553472 (1862.79 GiB 2000.16 GB)
>    Data Offset : 262144 sectors
>   Super Offset : 8 sectors
>          State : clean
>    Device UUID : cea7f341:435cdefd:5f883265:a75c5080
> 
> Internal Bitmap : 8 sectors from superblock
>    Update Time : Fri Nov  8 00:10:07 2013
>       Checksum : 55138955 - correct
>         Events : 56
> 
> 
>   Device Role : Active device 1
>   Array State : AA ('A' == active, '.' == missing)

My question is basically this: am I enjoying the benefits of having the internal bitmap, or did I just get lucky and this time the event count happened to be the same on both drives?
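
If it helps, I can also run a check pass over the array to verify that the two copies really match; I assume the usual md sysfs interface is the way to do it (sketch, as root):

   # request a read-only consistency check of the mirror
   echo check > /sys/block/md0/md/sync_action
   # watch its progress
   cat /proc/mdstat
   # once it finishes, zero here means the two halves agree
   cat /sys/block/md0/md/mismatch_cnt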

Ivan



