Re: Reopen: 16 HDDs too much for RAID6?

Hi Bernd,

Bernd Schubert <bs@xxxxxxxxx> schrieb:
> > Here the conf file:
> > monosan:~ # cat /etc/mdadm.conf
> > DEVICE /dev/sd[ab][0-9] /dev/dm-*
> > ARRAY /dev/md2 level=raid1 UUID=d9d31de2:e6dbd3c3:37c7ea09:882a64e5
> > ARRAY /dev/md3 level=raid1 num-devices=2 UUID=a8687183:a79e514c:ca492c4b:ffd4384f
> > ARRAY /dev/md4 level=raid6 num-devices=15 spares=1 UUID=cfcbe071:f6766d8f:0f1ffefa:892d09c3
> > ARRAY /dev/md9 level=raid1 num-devices=2 name=9 UUID=db687150:614e76fd:28feefc0:b1aae572
> >
> > All dm-* devices are really distinctive. I could post the
> > /etc/multipath.conf too if you want.

Here is the file:
monosan:~ # cat /etc/multipath.conf
#
# This configuration file is generated by Yast, do not modify it
# manually please. 
#
defaults {
        polling_interval        "0"
        user_friendly_names     "yes"
#       path_grouping_policy    "multibus"
}

blacklist {
#       devnode "*"
        wwid "SATA_WDC_WD1600YS-01_WD-WCAP02964085"
        wwid "SATA_WDC_WD1600YS-01_WD-WCAP02965435"
}

blacklist_exceptions {
}

multipaths {
        multipath {
                wwid "SATA_ST31000340NS_5QJ02TBK"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ0185Y"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ02QRQ"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ0204G"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ02TVQ"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ012AL"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ00PHN"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ01BYF"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ026J1"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ01G09"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ02461"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ013GW"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ01835"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ01C49"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ02TBZ"
        }
        multipath {
                wwid "SATA_ST31000340NS_5QJ01JSF"
        }
}
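For what it's worth, the blacklist above keeps only the two WD1600YS system disks (the RAID1 members) out of multipath, so that the sixteen Seagates each get a dm-* node. If I'm reading the multipath-tools semantics right, blacklist entries are matched as regular expressions, so the two explicit wwid lines could be collapsed into one pattern (a sketch, not tested on this box):

```
blacklist {
        # Matches both WD-WCAP02964085 and WD-WCAP02965435, keeping the
        # RAID1 system disks out of multipath entirely.
        wwid "SATA_WDC_WD1600YS-01_WD-WCAP0296.*"
}
```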

 
> I only have very little experience with multipathing, but please send your 
> config. The fact that you really do use multipathing only confirms my initial 
> guess that it is a multipath and not an md problem.

But when I assemble the array after a clean shutdown, once it has been initially synced, there is no problem. Only when the array was degraded does it end up with such duplicated superblocks.
How can that be?
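To make "duplicated superblocks" concrete: with multipath, the same on-disk md metadata is reachable through both the raw path (e.g. /dev/sdc) and its /dev/dm-* alias, so a naive scan finds the same array UUID on two device nodes. A rough illustration in Python (this is not mdadm's actual code; it assumes the v1.x superblock layout with metadata at a 4 KiB offset, whereas this array may well use v0.90 metadata near the end of the device):

```python
import struct

MD_MAGIC = 0xA92B4EFC  # md superblock magic

def array_uuid(dev_image, offset):
    """Return the 16-byte array UUID of an md v1.x superblock found at
    `offset` inside a device image, or None if no superblock is there.
    (v1.x layout: __le32 magic at +0, set_uuid[16] at +16.)"""
    sb = dev_image[offset:offset + 32]
    if len(sb) < 32:
        return None
    (magic,) = struct.unpack_from("<I", sb, 0)
    if magic != MD_MAGIC:
        return None
    return sb[16:32]

def find_duplicates(devices, offset=4096):
    """Map each array UUID to the device nodes exposing it, keeping only
    UUIDs seen on more than one node.  With multipath, the same physical
    superblock is visible through both /dev/sdX and its /dev/dm-* alias,
    which is exactly the duplication discussed above."""
    seen = {}
    for name, image in devices.items():
        uuid = array_uuid(image, offset)
        if uuid is not None:
            seen.setdefault(uuid, []).append(name)
    return {u: names for u, names in seen.items() if len(names) > 1}
```

When both names resolve to the same physical disk the duplicate is harmless aliasing; the DEVICE line in mdadm.conf is what is supposed to keep the raw sd* paths out of the scan so mdadm only ever sees one node per disk.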

Lars
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
