Re: task xfssyncd blocked while raid5 was in recovery


 



>From: NeilBrown
>Date: 2012-10-24 13:14
>To: hanguozhong
>CC: linux-raid; stan
>Subject: Re: task xfssyncd blocked while raid5 was in recovery
>On Wed, 24 Oct 2012 11:17:15 +0800 hanguozhong <hanguozhong@xxxxxxxxxxxx>
>wrote:

> >From: GuoZhong Han
> >Date: 2012-10-10 10:44
> >To: linux-raid
> >Subject: task xfssyncd blocked while raid5 was in recovery
>   
> >Hello, everyone:
> >Recently, a problem has troubled me for a long time.
> >I created a 4*2T (sda, sdb, sdc, sdd) raid5 with an XFS file system, a 128K chunk size and a stripe_cache_size of 2048. mdadm 3.2.2, kernel 2.6.38 and mkfs.xfs 3.1.1 were used. When the raid5 was in recovery and the progress reached 47%, I/O errors occurred on sdb. The following was the output:
> 
> >......
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >ata2: status=0x41 { DriveReady Error }
> >ata2: error=0x04 { DriveStatusError }
> >ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> >……
> >sd 0:0:1:0: [sdb]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> >sd 0:0:1:0: [sdb]  Sense Key : Aborted Command [current] [descriptor]
> >Descriptor sense data with sense descriptors (in hex):
> >        72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00 
> >        00 00 00 f7 
> > sd 0:0:1:0: [sdb]  Add. Sense: No additional sense information
> > sd 0:0:1:0: [sdb] CDB: Read(10): 28 00 6f 4c cc 80 00 04 00 00
> > end_request: I/O error, dev sdb, sector 1867304064
> > hrtimer: interrupt took 28024956 ns
> > ata2: status=0x41 { DriveReady Error }
> > ata2: error=0x04 { DriveStatusError }
> > ata2: translatedserver RspCode F688 Ctrl 0 Idx A6F Len 70 ATA stat/err 0x
> > 41/04 to SCSI SK00 00 64 65 35 64 62 65 65 31 3A 32 64 38 35 63 37 30 63 3A 39 64 31 63 33 63 38 39 3A 32 32 64 39 32 65 32 31 00 00 00 00 00 00/ASC/ASCQ 0xb/00 /00
> > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2F 00 00 00 ata2: status=0x4
> > 1 { DriveReady Error }
> > ata2: error=0x04 { DriveStatusError }
> > ata2: translated ATA stat/err 0x41/04 to SCSI SK/ASC/ASCQ 0xb/00/00
> > sd 0:0:1:0: [sdb]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> > sd 0:0:1:0: [sdb]  Sense Key : Aborted Command [current] [descriptor]
> > Descriptor sense data with sense descriptors (in hex):
> >         72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00 
> >         00 00 00 f7 
> > sd 0:0:1:0: [sdb]  Add. Sense: No additional sense information
> > sd 0:0:1:0: [sdb] CDB: Read(10): 28 00 6f 4c a4 80 00 04 00 00
> > end_request: I/O error, dev sdb, sector 1867293824
> > ata2: status=0x41 { DriveReady Error }
> > ……
> 
> > Then, there were lots of error messages about the file system. The following was the output:
> 
> > ......
> > INFO: task xfssyncd/md127:1058 blocked for more than 120 seconds.
> > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > xfssyncd/md127  D fffffff7000216d0     0  1058      2 0x00000000
> >   frame 0: 0xfffffff700020570 __switch_to+0x1b8/0x1c0 (sp 0xfffffe008d7ff900)
> >   frame 1: 0xfffffff7000216d0 schedule+0x918/0x1538 (sp 0xfffffe008d7ff9d0)
> >   frame 2: 0xfffffff700022a90 schedule_timeout+0x268/0x5b0 (sp 0xfffffe008d7ffd18)
> >   frame 3: 0xfffffff700024ee0 __down+0xd8/0x158 (sp 0xfffffe008d7ffda8)
> >   frame 4: 0xfffffff70085da78 down.cold+0x8/0x28 (sp 0xfffffe008d7ffe18)
> >   frame 5: 0xfffffff700750788 xfs_buf_lock+0xd0/0x120 (sp 0xfffffe008d7ffe38)
> >  frame 6: 0xfffffff700821b40 xfs_getsb+0x38/0x78 (sp 0xfffffe008d7ffe50)
> >   frame 7: 0xfffffff70077e230 xfs_trans_getsb+0xe0/0x100 (sp 0xfffffe008d7ffe68)
> >   frame 8: 0xfffffff7006babc0 xfs_mod_sb+0x88/0x198 (sp 0xfffffe008d7ffe88)
> >   frame 9: 0xfffffff7007a6480 xfs_fs_log_dummy+0x68/0xe0 (sp 0xfffffe008d7ffeb8)
> >   frame 10: 0xfffffff70079c6c0 xfs_sync_worker+0xe0/0xe8 (sp 0xfffffe008d7ffed8)
> >   frame 11: 0xfffffff700570a00 xfssyncd+0x240/0x328 (sp 0xfffffe008d7ffef0)
> >   frame 12: 0xfffffff7000f0530 kthread+0xe0/0xe8 (sp 0xfffffe008d7fff80)
> >   frame 13: 0xfffffff7000bab38 start_kernel_thread+0x18/0x20 (sp 0xfffffe008d7fffe8)
> > INFO: task xfssyncd/md127:1058 blocked for more than 120 seconds.
> > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > xfssyncd/md127  D fffffff7000216d0     0  1058      2 0x00000000
> >   frame 0: 0xfffffff700020570 __switch_to+0x1b8/0x1c0 (sp 0xfffffe008d7ff900)
> >   frame 1: 0xfffffff7000216d0 schedule+0x918/0x1538 (sp 0xfffffe008d7ff9d0)
> >   frame 2: 0xfffffff700022a90 schedule_timeout+0x268/0x5b0 (sp 0xfffffe008d7ffd18)
> >   frame 3: 0xfffffff700024ee0 __down+0xd8/0x158 (sp 0xfffffe008d7ffda8)
> >   frame 4: 0xfffffff70085da78 down.cold+0x8/0x28 (sp 0xfffffe008d7ffe18)
> >   frame 5: 0xfffffff700750788 xfs_buf_lock+0xd0/0x120 (sp 0xfffffe008d7ffe38)
> >   frame 6: 0xfffffff700821b40 xfs_getsb+0x38/0x78 (sp 0xfffffe008d7ffe50)
> >   frame 7: 0xfffffff70077e230 xfs_trans_getsb+0xe0/0x100 (sp 0xfffffe008d7ffe68)
> >   frame 8: 0xfffffff7006babc0 xfs_mod_sb+0x88/0x198 (sp 0xfffffe008d7ffe88)
> >   frame 9: 0xfffffff7007a6480 xfs_fs_log_dummy+0x68/0xe0 (sp 0xfffffe008d7ffeb8)
> >   frame 10: 0xfffffff70079c6c0 xfs_sync_worker+0xe0/0xe8 (sp 0xfffffe008d7ffed8)
> >   frame 11: 0xfffffff700570a00 xfssyncd+0x240/0x328 (sp 0xfffffe008d7ffef0)
> >   frame 12: 0xfffffff7000f0530 kthread+0xe0/0xe8 (sp 0xfffffe008d7fff80)
> >   frame 13: 0xfffffff7000bab38 start_kernel_thread+0x18/0x20 (sp 0xfffffe008d7fffe8)
> > INFO: task xfssyncd/md127:1058 blocked for more than 120 seconds.
> > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > xfssyncd/md127  D fffffff7000216d0     0  1058      2 0x00000000
> >   frame 0: 0xfffffff700020570 __switch_to+0x1b8/0x1c0 (sp 0xfffffe008d7ff900)
> >   frame 1: 0xfffffff7000216d0 schedule+0x918/0x1538 (sp 0xfffffe008d7ff9d0)
> >   frame 2: 0xfffffff700022a90 schedule_timeout+0x268/0x5b0 (sp 0xfffffe008d7ffd18)
> >   frame 3: 0xfffffff700024ee0 __down+0xd8/0x158 (sp 0xfffffe008d7ffda8)
> >   frame 4: 0xfffffff70085da78 down.cold+0x8/0x28 (sp 0xfffffe008d7ffe18)
> >   frame 5: 0xfffffff700750788 xfs_buf_lock+0xd0/0x120 (sp 0xfffffe008d7ffe38)
> >   frame 6: 0xfffffff700821b40 xfs_getsb+0x38/0x78 (sp 0xfffffe008d7ffe50)
> >   frame 7: 0xfffffff70077e230 xfs_trans_getsb+0xe0/0x100 (sp 0xfffffe008d7ffe68)
> >   frame 8: 0xfffffff7006babc0 xfs_mod_sb+0x88/0x198 (sp 0xfffffe008d7ffe88)
> >   frame 9: 0xfffffff7007a6480 xfs_fs_log_dummy+0x68/0xe0 (sp 0xfffffe008d7ffeb8)
> >   frame 10: 0xfffffff70079c6c0 xfs_sync_worker+0xe0/0xe8 (sp 0xfffffe008d7ffed8)
> >   frame 11: 0xfffffff700570a00 xfssyncd+0x240/0x328 (sp 0xfffffe008d7ffef0)
> >   frame 12: 0xfffffff7000f0530 kthread+0xe0/0xe8 (sp 0xfffffe008d7fff80)
> > ......
> 
> > The output said “INFO: task xfssyncd/md127:1058 blocked for more than 120 seconds”. What did that mean? I used “cat /proc/mdstat” to see the state of the raid5. The output was:
> 
> > Personalities : [raid0] [raid6] [raid5] [raid4] 
> > md127 : active raid5 sdd[3] sdc[2] sdb[1](F) sda[0]
> >       5860540032 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/3] [U_UU]
> >         resync=PENDING
>       
> > unused devices: <none>
> 
> >      The state of the raid5 was “PENDING”. I had never seen such a state for a raid5 before. After that, I wrote a program to access the raid5; there was no response any more. Then I used “ps aux | grep xfssyncd” to see the state of “xfssyncd”. Unfortunately, there was still no response. Then I tried “ps aux”. There was output, but the program could only be exited with “Ctrl+d” or “Ctrl+z”. And when I tested the write performance of the raid5, I/O errors often occurred. I did not know why these I/O errors occurred so frequently.
> >      What was the problem? Can anyone help me?
> 
>       At first, I thought it was a bug in the XFS filesystem, so I changed the filesystem to ext4. I used sda, sdb, sdc and sdd to build a raid5 again.
>       When the raid5 was in recovery, I/O errors occurred on "sdb", and the state of the raid5 went to "PENDING" again.
>       I used "ps aux|grep 127" to see the status of the processes. The following were the outputs:
> 
>       root      1197  0.0  0.0      0     0 ?        D    Oct23   0:03 [jbd2/md127-8]
>       root      1157 75.6  0.0      0     0 ?        R    Oct23 908:02 [md127_raid5]
>       root      1159  1.2  0.0      0     0 ?        S    Oct23  14:36 [md127_resync]
>       root      1381  0.0  0.0      0     0 ?        D    Oct23   0:12 [flush-9:127]
> 
>       The states of "md127_raid5", "jbd2/md127-8", "md127_resync", and "flush-9:127" did not change, even though I ran "ps aux|grep 127" many times.
>       The %cpu of "md127_raid5" stayed the highest, at about 75.6%. Programs reading from and writing to "md127" seemed to hang; there was no response.
>       I had no choice but to restart the machine.
>       
>       The following were the outputs of "smartctl -A /dev/sde":
>       # ./smartctl -A /dev/sde 
>       smartctl 5.39.1 2010-01-28 r3054 [tilegx-unknown-linux-gnu] (local build)
>       Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
> 
>       Probable ATA device behind a SAT layer
>       Try an additional '-d ata' or '-d sat' argument.
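>       (Following that suggestion, rerunning with an explicit device type should print the attribute table, e.g.:)
>       # ./smartctl -d sat -A /dev/sde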
>  
>       I use the Tilera platform; the newest kernel I can update to is 3.0.38. Now I am trying to update my kernel to this version,
>       but I do not know whether this problem has been fixed in 3.0.38. If you have a better solution, please tell me.


>>Might be fixed by upstream commit:

>>commit fab363b5ff502d1b39ddcfec04271f5858d9f26e
>>Author: Shaohua Li <shli@xxxxxxxxxx>
>>Date:   Tue Jul 3 15:57:19 2012 +1000

>>    raid5: delayed stripe fix

>>which is in 3.0.38.
>>If it isn't that, I don't know what it is.

>>So it is worth trying 3.0.38.

>>NeilBrown

Hi, Neil:
    Thanks for your advice.
    I tried to update my kernel to 3.0.38 last week.
    Then I created a 4*2T (sda, sdb, sdc, sdd) raid5 including the "bad" disk (sdb), just like last time (roughly with the commands sketched below).
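    The creation steps were along these lines (a reconstruction from the parameters mentioned above; the md device name and mount point are assumptions, and the exact invocation may have differed):

    # 4-disk RAID5 with a 128K chunk, matching the /proc/mdstat output
    mdadm --create /dev/md127 --level=5 --raid-devices=4 --chunk=128 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # stripe cache size used in the earlier (2.6.38) test
    echo 2048 > /sys/block/md127/md/stripe_cache_size
    # XFS on top of the array, then mount it for the write tests
    mkfs.xfs /dev/md127
    mount /dev/md127 /mnt/md127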
    Different from the last test, I added 8 more disks (sde, sdf, sdg, sdh, sdi, sdj, sdk, sdl) for the tests.
    The disks were plugged into the device via two 2680 Rocket cards. Each Rocket card supported at most 8 disks.

    In the first test, Rocket card 1 accessed sda, sdb, ..., sdg and sdh, while Rocket card 2 accessed sdi, sdj, sdk and sdl.
    I used a program to write 10M of data to the array per second as a load test while the array was in recovery (roughly equivalent to the sketch below).
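    A minimal sketch of that write load (the mount point and file name are made up; the real test program was different):

    # append 10 MB to a file on the array once per second
    while true; do
        dd if=/dev/zero bs=1M count=10 2>/dev/null >> /mnt/md127/write_test
        sleep 1
    done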
    When the recovery reached 92.1%, the recovery speed dropped to 18K/sec.
    The system output said:
    
    2012-10-27 23:57:04 ata9.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x6 t0
    2012-10-27 23:57:04 ata9.00: failed command: READ FPDMA QUEUED
    2012-10-27 23:57:04 ata9.00: cmd 60/00:00:80:62:a7/04:00:d6:00:00/40 tag 0 ncq 524288 in
    2012-10-27 23:57:04          res 01/04:b4:80:ca:a7/00:00:d6:00:00/40 Emask 0x2 (HSM violation)
    2012-10-27 23:57:04 ata9.00: status: { ERR }
    2012-10-27 23:57:04 ata9.00: error: { ABRT }
    2012-10-27 23:57:04 ata9.00: failed command: READ FPDMA QUEUED
    2012-10-27 23:57:04 ata9.00: cmd 60/00:00:80:82:a7/04:00:d6:00:00/40 tag 1 ncq 524288 in
    2012-10-27 23:57:04          res 01/04:b4:80:ca:a7/00:00:d6:00:00/40 Emask 0x2 (HSM violation)
    2012-10-27 23:57:04 ata9.00: status: { ERR }
    2012-10-27 23:57:04 ata9.00: error: { ABRT }
    2012-10-27 23:57:04 ata9.00: failed command: READ FPDMA QUEUED
    2012-10-27 23:57:04 ata9.00: cmd 60/00:00:80:66:a7/04:00:d6:00:00/40 tag 2 ncq 524288 in
    2012-10-27 23:57:04          res 01/04:b4:80:ca:a7/00:00:d6:00:00/40 Emask 0x2 (HSM violation)
    2012-10-27 23:57:04 ata9.00: status: { ERR }
    2012-10-27 23:57:04 ata9.00: error: { ABRT }
    2012-10-27 23:57:04 ata9.00: failed command: READ FPDMA QUEUED
    2012-10-27 23:57:04 ata9.00: cmd 60/00:00:80:6a:a7/04:00:d6:00:00/40 tag 3 ncq 524288 in
    2012-10-27 23:57:04          res 01/04:b4:80:ca:a7/00:00:d6:00:00/40 Emask 0x2 (HSM violation)
    2012-10-27 23:57:04 ata9.00: status: { ERR }
    2012-10-27 23:57:04 ata9.00: error: { ABRT }
    ...
    2012-10-27 23:57:04 ata9.00: failed command: READ FPDMA QUEUED
    2012-10-27 23:57:04 ata9.00: cmd 60/00:00:80:be:a7/04:00:d6:00:00/40 tag 19 ncq 524288 in
    2012-10-27 23:57:04          res 01/04:b4:80:ca:a7/00:00:d6:00:00/40 Emask 0x2 (HSM violation)
    ...
    2012-10-27 23:57:05 ata9.00: cmd 60/00:00:80:4e:a2/04:00:d6:00:00/40 tag 30 ncq 524288 in
    2012-10-27 23:57:05          res 01/04:b4:80:ca:a7/00:00:d6:00:00/40 Emask 0x2 (HSM violation)
    2012-10-27 23:57:05 ata9.00: status: { ERR }
    2012-10-27 23:57:05 ata9.00: error: { ABRT }
    2012-10-27 23:57:05 ata9: hard resetting link
    2012-10-27 23:57:07 ata9.00: configured for UDMA/133
    2012-10-27 23:57:07 ata9: EH complete

    The status of the raid5 was not "PENDING" in this test, and the %cpu of "md127_raid5" was normal.
    But there was no response from my program writing data to the array after the recovery speed slowed down.
    The following were the outputs of "ps aux|grep md127" and "cat /proc/mdstat":
 
    #ps aux|grep md127
    root      1462  1.3  0.0      0     0 ?        S    Oct27  33:42 [md127_raid5]
    root      1464  0.6  0.0      0     0 ?        D    Oct27  16:09 [md127_resync]
    root      1497  0.0  0.0      0     0 ?        S    Oct27   0:12 [xfsbufd/md127]
    root      1498  0.0  0.0      0     0 ?        S    Oct27   1:45 [xfsaild/md127]
    root      1501  0.0  0.0      0     0 ?        D    Oct27   1:16 [flush-9:127]

    # cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] 
    md127 : active raid5 sdd[3] sdc[2] sdb[1] sda[0]
    5860540032 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
    [==================>..]  resync = 92.1% (1800644160/1953513344) finish=140607.9min speed=18K/sec
 
     unused devices: <none>
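    (At 18K/sec, the remaining 1953513344 - 1800644160 = 152869184 blocks would take about 8.5 million seconds, roughly 98 days, which is consistent with the finish=140607.9min estimate above.)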

    I used "fdisk -l /dev/sd*" to see if "fdisk" could access the disks, there were no responses of sda,sdb...,
    sdg and sdh which could be accessed via card 1. On the contrary, the outputs of sdi,sdj,sdk and sdl were quite normal.
    Programs to read and write to the device which connected to card 1 seemed hang, they were all in the "D" staus. And
    then I used "reboot" to restart the system, there was no respones yet.
 
    Then I did another test: I unplugged sda, sdb, sdc and sdd and plugged them into the other slots, which were connected to card 2.
    The result was the same as in the first test. The recovery speed slowed down when the recovery reached 92.1%,
    and there was the same output as before. When I used "fdisk -l /dev/sd*" to see whether "fdisk" could access the disks, I found
    there was no response from the disks accessed via card 2. The outputs of the disks connected to card 1 were normal.
    
    It seemed that the "bad" disk in the raid5 hung the I/O of the card it was connected to.
    For the other disks, regardless of whether they were in the array or not, as long as they were connected to that card,
    programs writing to or reading from them would be blocked.

    My problem does not seem to be really solved; can you help me? I would be very grateful.
         


