Re: Reopen: 16 HDDs too much for RAID6?

On Friday 28 March 2008 11:55:37 Lars Täuber wrote:
> Hi Bernd,
>
> Bernd Schubert <bs@xxxxxxxxx> wrote:
> > > Here the conf file:
> > > monosan:~ # cat /etc/mdadm.conf
> > > DEVICE /dev/sd[ab][0-9] /dev/dm-*
> > > ARRAY /dev/md2 level=raid1 UUID=d9d31de2:e6dbd3c3:37c7ea09:882a64e5
> > > ARRAY /dev/md3 level=raid1 num-devices=2 UUID=a8687183:a79e514c:ca492c4b:ffd4384f
> > > ARRAY /dev/md4 level=raid6 num-devices=15 spares=1 UUID=cfcbe071:f6766d8f:0f1ffefa:892d09c3
> > > ARRAY /dev/md9 level=raid1 num-devices=2 name=9 UUID=db687150:614e76fd:28feefc0:b1aae572
> > >
> > > All dm-* devices are really distinctive. I could post the
> > > /etc/multipath.conf too if you want.
>
> here is the file:
> monosan:~ # cat /etc/multipath.conf
> #
> # This configuration file is generated by Yast, do not modify it
> # manually please.
> #
> defaults {
>         polling_interval        "0"
>         user_friendly_names     "yes"
> #       path_grouping_policy    "multibus"
> }
>
> blacklist {
> #       devnode "*"
>         wwid "SATA_WDC_WD1600YS-01_WD-WCAP02964085"
>         wwid "SATA_WDC_WD1600YS-01_WD-WCAP02965435"
> }

Hmm, maybe you are creating a multipath on top of a multipath?

Here's something from a config of our systems:

devnode_blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
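
After changing the blacklist you need to make multipath re-read its
configuration and rebuild its maps. Roughly like this (just a sketch, the
exact commands depend on your multipath-tools version and distribution):

	multipath -F                       # flush all unused multipath maps
	/etc/init.d/multipathd restart     # pick up the new configuration
	multipath -v2                      # recreate the maps
	multipath -ll                      # check that no md/dm devices show up as paths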



>
> blacklist_exceptions {
> }
>
> multipaths {
>         mutlipath {
>                 wwid "SATA_ST31000340NS_5QJ02TBK"
>         }

I would set user-friendly names using the alias parameter, something like this:

multipaths {
        multipath {
                wwid                    360050cc000203ffc0000000000000019
                alias                   raid1a-ost
        }
}
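
With such an alias the device then shows up as /dev/mapper/raid1a-ost (the
name is of course just an example). You could then point mdadm at the mapper
names instead of the dm-* nodes, e.g. something like this (sketch only, the
UUID is taken from your mdadm.conf above):

	DEVICE /dev/sd[ab][0-9] /dev/mapper/*
	ARRAY /dev/md4 level=raid6 num-devices=15 spares=1 UUID=cfcbe071:f6766d8f:0f1ffefa:892d09c3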

> > I only have very little experience with multipathing, but please send
> > your config. The fact you really do use multipathing only confirms my
> > initial guess it is a multipath and not a md problem.
>
> But when I assemble the array after a clean shutdown, after it has been
> initially synced, there is no problem. Only if the array was degraded does it
> have such duplicated superblocks. How come?

No idea so far, but please do some blacklisting. And if you set more 
readable names like "/dev/disk8" instead of "dm-8", it will be much easier 
to figure out what is wrong.
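
To see where the duplicated superblocks come from, it might also help to
compare the superblock on a multipath device with the ones on its underlying
sdX paths, roughly like this (the device names are only placeholders for your
setup):

	mdadm --examine /dev/mapper/disk8 | egrep 'UUID|Events'
	mdadm --examine /dev/sdc | egrep 'UUID|Events'

If the same array UUID shows up both on the dm device and on its sdX paths,
md sees every member twice, which would match the symptom you describe.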

Cheers,
Bernd

-- 
Bernd Schubert
Q-Leap Networks GmbH
