Hello,

Thanks for your answer. An update since our last mail: we saved a lot of
data through long and tedious rsyncs, with countless reboots. During an
rsync, a drive would sometimes suddenly be marked 'failed' by the array.
The array stayed active (with 13 or 12 of 16 disks), but after that 100%
of file copies failed with I/O errors, so we had to reboot, reassemble
the array, and restart the rsync.

During those long operations, we were advised to re-tighten the screws
of our storage bay (carri bay), and that is where the magic happened.
Since screwing them back in, no drive has been marked failed again. We
had only 4 file copies fail with I/O errors, and those did not
correspond to a drive failing in the array (it is still running with
14/16 drives). We can't guarantee the problem is fixed, but we went from
about 10 reboots a day to 5 days of work without a problem.

We now plan to reset and re-introduce, one at a time, the two drives the
array no longer recognizes, and let the array resynchronize, rewriting
the data on those drives. Does that sound like a good idea to you, or do
you think it may fail because of remaining errors?

> Yes, with latent Unrecoverable Read Errors, you will need properly
> working redundancy and no timeout mismatches. I recommend you
> repeatedly use --assemble --force to restore your array, skip the last
> file that failed, and continue copying critical files as possible.
>
> You should at least run this command every reboot until you replace your
> drives or otherwise script the work-arounds:
>
> for x in /sys/block/*/device/timeout ; do echo 180 > $x ; done

Thanks for the tip. We ran it at every reboot, but we still had
failures.

> > We still have two drives that were not physically removed, so they
> > theoretically still contain data, but they appear as spares in mdadm
> > --examine, probably because of the 're-add' attempt we made.
>
> The only way to activate these, I think, is to re-create your array.
> That is a last resort after you've copied everything possible with the
> forced assembly state.

We will keep this as a last resort, but given the updates above, we
should not need it.

> >> Did you run "mdadm --stop /dev/md2" first? That would explain the
> >> "busy" reports.
>
> [trim /]
>
> There's *something* holding access to sda and sdb -- please obtain and
> run "lsdrv" [1] and post its output.

PCI [aacraid] 01:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)
├scsi 0:0:0:0 Adaptec LogicalDrv 0 {6F7C0529}
│└sda 930.99g [8:0] MD raid6 (16) inactive 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
├scsi 0:0:2:0 Adaptec LogicalDrv 2 {81A40529}
│└sdb 930.99g [8:16] MD raid6 (2/16) (w/ sdc,sdd,sde,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│ │ PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
│ └VG baie 12.73t 33.84g free {7krzHX-Lz48-7ibY-RKTb-IZaX-zZlz-8ju8MM}
│  ├dm-3 4.50t [253:3] LV data1 ext4 {83ddded0-d457-4fdc-8eab-9fbb2c195bdc}
│  │└Mounted as /dev/mapper/baie-data1 @ /export/data1
│  ├dm-4 200.00g [253:4] LV grid5000 ext4 {c442ffe7-b34d-42c8-800d-ba21bf2ed8ec}
│  │└Mounted as /dev/mapper/baie-grid5000 @ /export/grid5000
│  └dm-2 8.00t [253:2] LV home ext4 {c4ebcfd0-e5c2-4420-8a03-d0d5799cf747}
│   └Mounted as /dev/mapper/baie-home @ /export/home
├scsi 0:0:3:0 Adaptec LogicalDrv 3 {156214AB}
│└sdc 930.99g [8:32] MD raid6 (3/16) (w/ sdb,sdd,sde,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:4:0 Adaptec LogicalDrv 4 {82C40529}
│└sdd 930.99g [8:48] MD raid6 (4/16) (w/ sdb,sdc,sde,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:5:0 Adaptec LogicalDrv 5 {8F341529}
│└sde 930.99g [8:64] MD raid6 (5/16) (w/ sdb,sdc,sdd,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:6:0 Adaptec LogicalDrv 6 {5E4C1529}
│└sdf 930.99g [8:80] MD raid6 (16) inactive 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
├scsi 0:0:7:0 Adaptec LogicalDrv 7 {FF88E4AC}
│└sdg 930.99g [8:96] MD raid6 (7/16) (w/ sdb,sdc,sdd,sde,sdh,sdi,sdj,sdk,sdl,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:8:0 Adaptec LogicalDrv 8 {84B41529}
│└sdh 930.99g [8:112] MD raid6 (8/16) (w/ sdb,sdc,sdd,sde,sdg,sdi,sdj,sdk,sdl,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:9:0 Adaptec LogicalDrv 9 {70C41529}
│└sdi 930.99g [8:128] MD raid6 (9/16) (w/ sdb,sdc,sdd,sde,sdg,sdh,sdj,sdk,sdl,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:10:0 Adaptec LogicalDrv 10 {897976AC}
│└sdj 930.99g [8:144] MD raid6 (10/16) (w/ sdb,sdc,sdd,sde,sdg,sdh,sdi,sdk,sdl,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:11:0 Adaptec LogicalDrv 11 {6DEC1529}
│└sdk 930.99g [8:160] MD raid6 (11/16) (w/ sdb,sdc,sdd,sde,sdg,sdh,sdi,sdj,sdl,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:12:0 Adaptec LogicalDrv 12 {71142529}
│└sdl 930.99g [8:176] MD raid6 (12/16) (w/ sdb,sdc,sdd,sde,sdg,sdh,sdi,sdj,sdk,sdm,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:13:0 Adaptec LogicalDrv 13 {14242529}
│└sdm 930.99g [8:192] MD raid6 (13/16) (w/ sdb,sdc,sdd,sde,sdg,sdh,sdi,sdj,sdk,sdl,sdn,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:14:0 Adaptec LogicalDrv 14 {2D382529}
│└sdn 930.99g [8:208] MD raid6 (14/16) (w/ sdb,sdc,sdd,sde,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdo,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
├scsi 0:0:15:0 Adaptec LogicalDrv 15 {B4542529}
│└sdo 930.99g [8:224] MD raid6 (15/16) (w/ sdb,sdc,sdd,sde,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn,sdp) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
│ └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
│  PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
└scsi 0:0:16:0 Adaptec LogicalDrv 1 {8E940529}
 └sdp 930.99g [8:240] MD raid6 (1/16) (w/ sdb,sdc,sdd,sde,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn,sdo) in_sync 'ftalc2.nancy.grid5000.fr:2' {2d0b91e8-a0b1-0f4c-3fa2-85f93198a918}
  └md2 12.73t [9:2] MD v1.2 raid6 (16) clean DEGRADEDx2, 128k Chunk {2d0b91e8:a0b10f4c:3fa285f9:3198a918}
   PV LVM2_member 12.70t used, 33.84g free {G8XPQ1-E3y0-82Wz-UUpg-hGWC-UvHm-pAbi30}
PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 631xESB/632xESB SATA AHCI Controller (rev 09)
├scsi 1:0:0:0 ATA Hitachi HDP72503 {GEAC34RF2T8SLA}
│└sdq 298.09g [65:0] Partitioned (dos)
│ ├sdq1 285.00m [65:1] MD raid1 (0/2) (w/ sdr1) in_sync 'ftalc2:0' {791b53cf-4800-7f45-1dc0-ae5f8cedc958}
│ │└md0 284.99m [9:0] MD v1.2 raid1 (2) clean {791b53cf:48007f45:1dc0ae5f:8cedc958}
│ │ │ ext3 {135f2572-81a4-462f-8ce6-11ee0c9a8074}
│ │ └Mounted as /dev/md0 @ /boot
│ └sdq2 297.81g [65:2] MD raid1 (0/2) (w/ sdr2) in_sync 'ftalc2:1' {819ab09a-8402-6762-9e1f-6278f5bbda51}
│  └md1 297.81g [9:1] MD v1.2 raid1 (2) clean {819ab09a:84026762:9e1f6278:f5bbda51}
│  │ PV LVM2_member 22.24g used, 275.57g free {XGX5zq-EcVb-nbK7-BKc6-cxMy-7oe0-B5DKJW}
│  └VG rootvg 297.81g 275.57g free {oWuOGP-c6Bt-lreb-YWwf-Kkwt-eqUG-fmgRuf}
│   ├dm-0 4.66g [253:0] LV dom0-root ext3 {dbf8f715-dc51-40a2-9d7d-db2d24cc3aba}
│   │└Mounted as /dev/mapper/rootvg-dom0--root @ /
│   ├dm-1 1.86g [253:1] LV dom0-swap swap {82f0fe85-34ae-4da7-afb3-e161396a3494}
│   ├dm-6 952.00m [253:6] LV dom0-tmp ext3 {31585de5-61d1-4e7b-977d-ba6df01b3a4a}
│   │└Mounted as /dev/mapper/rootvg-dom0--tmp @ /tmp
│   ├dm-5 4.79g [253:5] LV dom0-var ext3 {c0826eb6-e535-4d57-a501-9dfb503732e0}
│   │└Mounted as /dev/mapper/rootvg-dom0--var @ /var
│   └dm-7 10.00g [253:7] LV false_root ext4 {519238c6-22d4-4d1b-88ed-9af71aed8a88}
├scsi 2:0:0:0 ATA Hitachi HDP72503 {GEAC34RF2T8G0A}
│└sdr 298.09g [65:16] Partitioned (dos)
│ ├sdr1 285.00m [65:17] MD raid1 (1/2) (w/ sdq1) in_sync 'ftalc2:0' {791b53cf-4800-7f45-1dc0-ae5f8cedc958}
│ │└md0 284.99m [9:0] MD v1.2 raid1 (2) clean {791b53cf:48007f45:1dc0ae5f:8cedc958}
│ │ ext3 {135f2572-81a4-462f-8ce6-11ee0c9a8074}
│ └sdr2 297.81g [65:18] MD raid1 (1/2) (w/ sdq2) in_sync 'ftalc2:1' {819ab09a-8402-6762-9e1f-6278f5bbda51}
│  └md1 297.81g [9:1] MD v1.2 raid1 (2) clean {819ab09a:84026762:9e1f6278:f5bbda51}
│   PV LVM2_member 22.24g used, 275.57g free {XGX5zq-EcVb-nbK7-BKc6-cxMy-7oe0-B5DKJW}
├scsi 3:x:x:x [Empty]
├scsi 4:x:x:x [Empty]
├scsi 5:x:x:x [Empty]
└scsi 6:x:x:x [Empty]
PCI [ata_piix] 00:1f.1 IDE interface: Intel Corporation 631xESB/632xESB IDE Controller (rev 09)
├scsi 7:x:x:x [Empty]
└scsi 8:x:x:x [Empty]
Other Block Devices
├loop0 0.00k [7:0] Empty/Unknown
├loop1 0.00k [7:1] Empty/Unknown
├loop2 0.00k [7:2] Empty/Unknown
├loop3 0.00k [7:3] Empty/Unknown
├loop4 0.00k [7:4] Empty/Unknown
├loop5 0.00k [7:5] Empty/Unknown
├loop6 0.00k [7:6] Empty/Unknown
└loop7 0.00k [7:7] Empty/Unknown

> >> Before proceeding, please supply more information:
> >>
> >> for x in /dev/sd[a-p] ; do mdadm -E $x ; smartctl -i -A -l scterc $x ;
> >> done
> >>
> >> Paste the output inline in your response.
>
> > I couldn't get smartctl to work successfully. The version supported
> > on debian squeeze doesn't support aacraid.
> > I tried from a chroot in a debootstrap with a more recent debian
> > version, but only got:
> >
> > # smartctl --all -d aacraid,0,0,0 /dev/sda
> > smartctl 6.4 2014-10-07 r4002 [x86_64-linux-2.6.32-5-amd64] (local build)
> > Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
> >
> > Smartctl open device: /dev/sda [aacraid_disk_00_00_0] [SCSI/SAT]
> > failed: INQUIRY [SAT]: aacraid result: 0.0 = 22/0
>
> It's possible the 0,0,0 isn't correct. The output of lsdrv would help
> with this.
>
> Also, please use the smartctl options I requested. '--all' omits the
> scterc information I want to see, and shows a bunch of data I don't need
> to see. If you want all possible data for your own use, '-x' is the
> correct option.

Yes, I will use those options to filter things down once I get smartctl
working.

> [trim /]
>
> It's very important that we get a map of drive serial numbers to current
> device names and the "Device Role" from "mdadm --examine". As an
> alternative, post the output of "ls -l /dev/disk/by-id/". This is
> critical information for any future re-create attempts.
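In the meantime, here is how we plan to hunt for the right aacraid triple: generate one smartctl probe per candidate ID and try them in turn. (Our reading of the smartmontools docs is that '-d aacraid,H,L,ID' means host,lun,id, and that our 16 logical drives sit at IDs 0..16 on host 0, lun 0 -- treat both as assumptions.)

```shell
#!/bin/sh
# Print one smartctl invocation per candidate aacraid ID so we can try
# them one by one. Assumption: '-d aacraid,H,L,ID' is host,lun,id, and
# the logical drives are on host 0, lun 0; the ID range is a guess.
for id in $(seq 0 16); do
    echo "smartctl -i -A -l scterc -d aacraid,0,0,$id /dev/sda"
done
```

The by-id listing you asked for follows.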
lrwxrwxrwx 1 root root  9 Nov 12 10:19 ata-Hitachi_HDP725032GLA360_GEAC34RF2T8G0A -> ../../sdr
lrwxrwxrwx 1 root root 10 Nov 12 10:19 ata-Hitachi_HDP725032GLA360_GEAC34RF2T8G0A-part1 -> ../../sdr1
lrwxrwxrwx 1 root root 10 Nov 12 10:19 ata-Hitachi_HDP725032GLA360_GEAC34RF2T8G0A-part2 -> ../../sdr2
lrwxrwxrwx 1 root root  9 Nov 12 10:19 ata-Hitachi_HDP725032GLA360_GEAC34RF2T8SLA -> ../../sdq
lrwxrwxrwx 1 root root 10 Nov 12 10:19 ata-Hitachi_HDP725032GLA360_GEAC34RF2T8SLA-part1 -> ../../sdq1
lrwxrwxrwx 1 root root 10 Nov 12 10:19 ata-Hitachi_HDP725032GLA360_GEAC34RF2T8SLA-part2 -> ../../sdq2
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-name-baie-data1 -> ../../dm-3
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-name-baie-grid5000 -> ../../dm-4
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-name-baie-home -> ../../dm-2
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-name-rootvg-dom0--root -> ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-name-rootvg-dom0--swap -> ../../dm-1
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-name-rootvg-dom0--tmp -> ../../dm-6
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-name-rootvg-dom0--var -> ../../dm-5
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-name-rootvg-false_root -> ../../dm-7
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-uuid-LVM-7krzHXLz487ibYRKTbIZaXzZlz8ju8MM4QRfpRFoJ9EJDP7Nar3SLNj53t7urGbk -> ../../dm-4
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-uuid-LVM-7krzHXLz487ibYRKTbIZaXzZlz8ju8MMICvtF5UTbncSUMC9f0PyK5zHGmmEa8GD -> ../../dm-2
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-uuid-LVM-7krzHXLz487ibYRKTbIZaXzZlz8ju8MMkzJJGdeMc0QDg4B1r2hsq5bCnS7Ktk4u -> ../../dm-3
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-uuid-LVM-oWuOGPc6BtlrebYWwfKkwteqUGfmgRufCqs0FclHYC6O5RNOSEpeRZ3xJ3kXCOG0 -> ../../dm-7
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-uuid-LVM-oWuOGPc6BtlrebYWwfKkwteqUGfmgRufGm4mzDQtuUTShTEyWgXEo8BXt1d2S4Qu -> ../../dm-1
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-uuid-LVM-oWuOGPc6BtlrebYWwfKkwteqUGfmgRufMGhnq5OTr3pyXgyc2CqDE5ibq9xaOSUf -> ../../dm-5
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-uuid-LVM-oWuOGPc6BtlrebYWwfKkwteqUGfmgRufOD5FJuWOVLYk7wnRPOvlQOLEb0zffl2X -> ../../dm-0
lrwxrwxrwx 1 root root 10 Nov 12 10:19 dm-uuid-LVM-oWuOGPc6BtlrebYWwfKkwteqUGfmgRufuMkGACbZV71GDBcRVxXnAMf7NkWFWezw -> ../../dm-6
lrwxrwxrwx 1 root root  9 Nov 12 10:19 md-name-ftalc2:0 -> ../../md0
lrwxrwxrwx 1 root root  9 Nov 12 10:19 md-name-ftalc2:1 -> ../../md1
lrwxrwxrwx 1 root root  9 Nov 12 10:19 md-name-ftalc2.nancy.grid5000.fr:2 -> ../../md2
lrwxrwxrwx 1 root root  9 Nov 12 10:19 md-uuid-2d0b91e8:a0b10f4c:3fa285f9:3198a918 -> ../../md2
lrwxrwxrwx 1 root root  9 Nov 12 10:19 md-uuid-791b53cf:48007f45:1dc0ae5f:8cedc958 -> ../../md0
lrwxrwxrwx 1 root root  9 Nov 12 10:19 md-uuid-819ab09a:84026762:9e1f6278:f5bbda51 -> ../../md1
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_0_6F7C0529 -> ../../sda
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_10_897976AC -> ../../sdj
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_11_6DEC1529 -> ../../sdk
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_12_71142529 -> ../../sdl
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_13_14242529 -> ../../sdm
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_14_2D382529 -> ../../sdn
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_15_B4542529 -> ../../sdo
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_1_8E940529 -> ../../sdp
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_2_81A40529 -> ../../sdb
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_3_156214AB -> ../../sdc
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_4_82C40529 -> ../../sdd
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_5_8F341529 -> ../../sde
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_6_5E4C1529 -> ../../sdf
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_7_FF88E4AC -> ../../sdg
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_8_84B41529 -> ../../sdh
lrwxrwxrwx 1 root root  9 Nov 17 10:18 scsi-SAdaptec_LogicalDrv_9_70C41529 -> ../../sdi
lrwxrwxrwx 1 root root  9 Nov 12 10:19 scsi-SATA_Hitachi_HDP7250_GEAC34RF2T8G0A -> ../../sdr
lrwxrwxrwx 1 root root 10 Nov 12 10:19 scsi-SATA_Hitachi_HDP7250_GEAC34RF2T8G0A-part1 -> ../../sdr1
lrwxrwxrwx 1 root root 10 Nov 12 10:19 scsi-SATA_Hitachi_HDP7250_GEAC34RF2T8G0A-part2 -> ../../sdr2
lrwxrwxrwx 1 root root  9 Nov 12 10:19 scsi-SATA_Hitachi_HDP7250_GEAC34RF2T8SLA -> ../../sdq
lrwxrwxrwx 1 root root 10 Nov 12 10:19 scsi-SATA_Hitachi_HDP7250_GEAC34RF2T8SLA-part1 -> ../../sdq1
lrwxrwxrwx 1 root root 10 Nov 12 10:19 scsi-SATA_Hitachi_HDP7250_GEAC34RF2T8SLA-part2 -> ../../sdq2
lrwxrwxrwx 1 root root  9 Nov 12 10:19 wwn-0x5000cca34de737a4 -> ../../sdr
lrwxrwxrwx 1 root root 10 Nov 12 10:19 wwn-0x5000cca34de737a4-part1 -> ../../sdr1
lrwxrwxrwx 1 root root 10 Nov 12 10:19 wwn-0x5000cca34de737a4-part2 -> ../../sdr2
lrwxrwxrwx 1 root root  9 Nov 12 10:19 wwn-0x5000cca34de738cd -> ../../sdq
lrwxrwxrwx 1 root root 10 Nov 12 10:19 wwn-0x5000cca34de738cd-part1 -> ../../sdq1
lrwxrwxrwx 1 root root 10 Nov 12 10:19 wwn-0x5000cca34de738cd-part2 -> ../../sdq2

It seems the mapping changes at each reboot (the two drives that host
the operating system had different names across reboots). We have not
rebooted since re-tightening the screws, though.

> The rest of the information from smartctl is important, and you should
> upgrade your system to a level that supports it, but it can wait for later.
>
> It might be best to boot into a newer environment strictly for this
> recovery task. Newer kernels and utilities have more bugfixes and are
> much more robust in emergencies. I normally use SystemRescueCD [2] for
> emergencies like this.

OK, if I get stuck on some operation, I'll try SystemRescueCD.
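Since the sdX names move around, we will capture a fresh serial-to-name map at every boot with a small helper like this (just a reshaping of the by-id links above; the directory argument is only there so it can be tried against a copy):

```shell
#!/bin/sh
# Print "by-id name -> current kernel device" for the Adaptec logical
# drives. To be run and logged at every boot, since device names are
# not stable across reboots on this machine.
map_by_id() {
    dir=${1:-/dev/disk/by-id}
    for link in "$dir"/scsi-SAdaptec_*; do
        [ -e "$link" ] || continue           # glob may match nothing
        printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
    done
}
map_by_id "$@"
```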
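For the record, here is the per-drive sequence we have in mind for re-introducing the two drives that now show as spares. It is a dry run that only prints the commands so you can sanity-check them; /dev/sdf is an example name that we would re-check against /dev/disk/by-id first, and we would let the first rebuild finish in /proc/mdstat before touching the second drive.

```shell
#!/bin/sh
# Dry run of our re-introduction plan: print the commands for one drive
# instead of executing them. /dev/sdf below is an example device name.
reintroduce_drive() {
    dev=$1   # confirm against /dev/disk/by-id before running for real
    echo "mdadm --zero-superblock $dev"  # drop the stale 'spare' metadata
    echo "mdadm --add /dev/md2 $dev"     # md then rebuilds onto the drive
    echo "cat /proc/mdstat"              # watch recovery until it completes
}
reintroduce_drive /dev/sdf
```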
Regards,
Clément and Marc
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html