Re[2]: mdadm 2.6.4: how can I check the current status of reshaping?

Hello, Neil.

You wrote on 5 February 2008, 01:48:33:
> On Monday February 4, andre.s@xxxxxxxxx wrote:
>> 
>> root@raid01:/# cat /proc/mdstat
>> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
>> md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
>>       1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
>> 
>> unused devices: <none>
>> 
>> ##############################################################################
>> But how can I see the status of the reshape?
>> Is it really reshaping, or has it hung, or is mdadm simply not doing
>> anything at all?
>> How long should I wait for the reshape to finish?
>> ##############################################################################
>> 

> The reshape hasn't restarted.

> Did you do that "mdadm -w /dev/md1" like I suggested?  If so, what
> happened?

> Possibly you tried mounting the filesystem before trying the "mdadm
> -w".  There seems to be a bug such that doing this would cause the
> reshape not to restart, and "mdadm -w" would not help any more.

> I suggest you:

>   echo 0 > /sys/module/md_mod/parameters/start_ro

> stop the array 
>   mdadm -S /dev/md1
> (after unmounting if necessary).

> Then assemble the array again.
> Then
>   mdadm -w /dev/md1

> just to be sure.

> If this doesn't work, please report exactly what you did, exactly what
> message you got and exactly where message appeared in the kernel log.

> NeilBrown

I read your letter again.
The first time, I did not do

echo 0 > /sys/module/md_mod/parameters/start_ro

Now I have done this, and then ran:
mdadm -S /dev/md1
mdadm /dev/md1 -A /dev/sd[bcdef]
mdadm -w /dev/md1
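
To make sure the "mdadm -w" really took the array out of read-only mode and
that the reshape thread was started, I think something like this can be
checked (I am not completely sure of the exact sysfs names on this 2.6.22
kernel):
  cat /sys/block/md1/md/array_state   # should say "active" or "clean", not "read-auto"
  cat /sys/block/md1/md/sync_action   # should say "reshape" while the reshape runs
  mdadm --detail /dev/md1 | grep -i -E 'state|reshape'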

The result: after about 2 minutes the kernel printed something (the log is below),
but the reshape still shows as in progress:

root@raid01:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
      1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
      [==>..................]  reshape = 10.1% (49591552/488386496) finish=12127.2min speed=602K/sec

unused devices: <none>
root@raid01:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
      1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
      [==>..................]  reshape = 10.1% (49591552/488386496) finish=12259.0min speed=596K/sec

unused devices: <none>
root@raid01:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
      1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
      [==>..................]  reshape = 10.1% (49591552/488386496) finish=12311.7min speed=593K/sec

unused devices: <none>
root@raid01:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
      1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
      [==>..................]  reshape = 10.1% (49591552/488386496) finish=12338.1min speed=592K/sec

unused devices: <none>
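
Instead of running cat by hand, the progress can also be followed with a
simple loop, something like this (assuming the sync_speed file is present
under /sys/block/md1/md/ on this kernel):
  watch -n 60 'grep -A 3 "^md1" /proc/mdstat; cat /sys/block/md1/md/sync_speed'
The reported speed keeps falling (602K -> 592K/sec) while the block counter
stays at 49591552, so it looks like the reshape is not actually moving forward.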




Feb  5 11:54:21 raid01 kernel: raid5: reshape will continue
Feb  5 11:54:21 raid01 kernel: raid5: device sdc operational as raid disk 0
Feb  5 11:54:21 raid01 kernel: raid5: device sdf operational as raid disk 3
Feb  5 11:54:21 raid01 kernel: raid5: device sde operational as raid disk 2
Feb  5 11:54:21 raid01 kernel: raid5: device sdd operational as raid disk 1
Feb  5 11:54:21 raid01 kernel: raid5: allocated 5245kB for md1
Feb  5 11:54:21 raid01 kernel: raid5: raid level 5 set md1 active with 4 out of 5 devices, algorithm 2
Feb  5 11:54:21 raid01 kernel: RAID5 conf printout:
Feb  5 11:54:21 raid01 kernel:  --- rd:5 wd:4
Feb  5 11:54:21 raid01 kernel:  disk 0, o:1, dev:sdc
Feb  5 11:54:21 raid01 kernel:  disk 1, o:1, dev:sdd
Feb  5 11:54:21 raid01 kernel:  disk 2, o:1, dev:sde
Feb  5 11:54:21 raid01 kernel:  disk 3, o:1, dev:sdf
Feb  5 11:54:21 raid01 kernel: ...ok start reshape thread
Feb  5 11:54:21 raid01 mdadm: RebuildStarted event detected on md device /dev/md1
Feb  5 11:54:21 raid01 kernel: md: reshape of RAID array md1
Feb  5 11:54:21 raid01 kernel: md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Feb  5 11:54:21 raid01 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
Feb  5 11:54:21 raid01 kernel: md: using 128k window, over a total of 488386496 blocks.
Feb  5 11:56:12 raid01 kernel: BUG: unable to handle kernel paging request at virtual address 001cd901
Feb  5 11:56:12 raid01 kernel:  printing eip:
Feb  5 11:56:12 raid01 kernel: c041c374
Feb  5 11:56:12 raid01 kernel: *pde = 00000000
Feb  5 11:56:12 raid01 kernel: Oops: 0002 [#1]
Feb  5 11:56:12 raid01 kernel: SMP
Feb  5 11:56:12 raid01 kernel: Modules linked in: nfsd exportfs lockd nfs_acl sunrpc ipt_LOG xt_tcpudp nf_conntrack_ipv4 xt_state nf_conntrack nfnetlink iptable_filter ip_tables x_tables button ac battery loop tsdev psmouse iTCO_wdt sk98lin serio_raw intel_agp agpgart evdev shpchp pci_hotplug pcspkr rtc ide_cd cdrom ide_disk ata_piix piix e1000 generic ide_core sata_mv uhci_hcd ehci_hcd usbcore thermal processor fan
Feb  5 11:56:12 raid01 kernel: CPU:    1
Feb  5 11:56:12 raid01 kernel: EIP:    0060:[<c041c374>]    Not tainted VLI
Feb  5 11:56:12 raid01 kernel: EFLAGS: 00010202   (2.6.22.16-6 #7)
Feb  5 11:56:12 raid01 kernel: EIP is at md_do_sync+0x629/0xa32
Feb  5 11:56:12 raid01 kernel: eax: 001cd901   ebx: c0410d1b   ecx: 00000080   edx: 00000000
Feb  5 11:56:12 raid01 kernel: esi: 05e96a00   edi: 00000000   ebp: dff3e400   esp: f796beb4
Feb  5 11:56:12 raid01 kernel: ds: 007b   es: 007b   fs: 00d8  gs: 0000  ss: 0068
Feb  5 11:56:12 raid01 kernel: Process md1_reshape (pid: 3759, ti=f796a000 task=f7e8a550 task.ti=f796a000)
Feb  5 11:56:12 raid01 kernel: Stack: f796bf9c 00000000 1d1c2fc0 00000000 00000500 00000000 f796bf88 dff3e410
Feb  5 11:56:12 raid01 kernel:        9ac41500 06000000 6a922c00 1d1c2fc0 00000000 dff3e400 000020d2 3a385f80
Feb  5 11:56:12 raid01 kernel:        00000000 001cd800 00000000 00000006 001cd700 00000000 c056fb6b 00177900
Feb  5 11:56:12 raid01 kernel: Call Trace:
Feb  5 11:56:12 raid01 kernel:  [<c041e8ee>] md_thread+0xcc/0xe3
Feb  5 11:56:12 raid01 kernel:  [<c011b368>] complete+0x39/0x48
Feb  5 11:56:12 raid01 kernel:  [<c041e822>] md_thread+0x0/0xe3
Feb  5 11:56:12 raid01 kernel:  [<c0131b89>] kthread+0x38/0x5f
Feb  5 11:56:12 raid01 kernel:  [<c0131b51>] kthread+0x0/0x5f
Feb  5 11:56:12 raid01 kernel:  [<c0104947>] kernel_thread_helper+0x7/0x10
Feb  5 11:56:12 raid01 kernel:  =======================
Feb  5 11:56:12 raid01 kernel: Code: 54 24 48 0f 87 a4 01 00 00 72 0a 3b 44 24 44 0f 87 98 01 00 00 3b 7c 24 40 75 0a 3b 74 24 3c 0f 84 88 01 00 00 0b 85 30 01 00 00 <88> 08 0f 85 90 01 00 00 8b 85 30 01 00 00 a8 04 0f 85 82 01 00
Feb  5 11:56:12 raid01 kernel: EIP: [<c041c374>] md_do_sync+0x629/0xa32 SS:ESP 0068:f796beb4
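
If it is of any use, I believe the faulting source line can be resolved from a
vmlinux built with debug info for this 2.6.22.16 kernel, for example:
  gdb -batch -ex 'list *(md_do_sync+0x629)' ./vmlinux
(the offset 0x629 is taken from the EIP line above).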







-- 
Best regards,
Andreas-Sokov

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
