Re: [PATCH v2] DM RAID: Add support for MD RAID10

On Mon, 16 Jul 2012 17:06:32 -0500 Brassow Jonathan <jbrassow@xxxxxxxxxx>
wrote:

> 
> On Jul 12, 2012, at 1:32 AM, NeilBrown wrote:
> 
> >> 
> >> @@ -76,6 +80,7 @@ static struct raid_type {
> >> 	const unsigned algorithm;	/* RAID algorithm. */
> >> } raid_types[] = {
> >> 	{"raid1",    "RAID1 (mirroring)",               0, 2, 1, 0 /* NONE */},
> >> +	{"raid10",   "RAID10 (striped mirrors)",        0, 2, 10, -1 /* Varies */},
> >> 	{"raid4",    "RAID4 (dedicated parity disk)",	1, 2, 5, ALGORITHM_PARITY_0},
> >> 	{"raid5_la", "RAID5 (left asymmetric)",		1, 2, 5, ALGORITHM_LEFT_ASYMMETRIC},
> >> 	{"raid5_ra", "RAID5 (right asymmetric)",	1, 2, 5, ALGORITHM_RIGHT_ASYMMETRIC},
> > 
> > Initialising the "unsigned" algorithm to "-1" looks like it is asking for
> > trouble.
> 
> I'm initializing to -1 because I know it is a value that will fail if it is not set later in the setup process.  I could just as easily choose the default value (near = 2).

I don't much care what value you use as long as it is of the same type as the
variable you assign it to.  So maybe UINT_MAX, or maybe '2'.
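
For illustration only (this is not part of the patch under review), a minimal
userspace mock of the table showing a type-correct sentinel: UINT_MAX instead
of -1 for an "unsigned" field that is only filled in later during setup.  The
struct and field names other than "algorithm" are assumed here, not copied
from dm-raid.c:

#include <limits.h>

/* Mock of the kernel table, purely to illustrate the sentinel choice. */
struct raid_type_mock {
	const char *name;
	const char *descr;
	unsigned parity_devs;
	unsigned minimal_devs;
	unsigned level;
	unsigned algorithm;
};

static struct raid_type_mock raid_types[] = {
	{"raid1",  "RAID1 (mirroring)",        0, 2,  1, 0 /* NONE */},
	/* UINT_MAX marks "not yet set" without an implicit
	 * signed-to-unsigned conversion of -1. */
	{"raid10", "RAID10 (striped mirrors)", 0, 2, 10, UINT_MAX /* set later */},
};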


> 
> > 
> >> @@ -493,7 +554,7 @@ static int parse_raid_params(struct raid
> >> 			 */
> >> 			value /= 2;
> >> 
> >> -			if (rs->raid_type->level < 5) {
> >> +			if (rs->raid_type->level != 5) {
> >> 				rs->ti->error = "Inappropriate argument: stripe_cache";
> >> 				return -EINVAL;
> >> 			}
> > 
> > This leaves RAID6 out in the cold.  Maybe
> >   level < 5 || level > 6
> > or  level != 5 && level != 6
> > or a switch statement?
> 
> Thanks.  For some reason I had thought that raid6 had level=5, like raid4...  Fixed.

I love consistency ... or at least I think I would if only I could find it
somewhere!
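
For illustration only (not from the patch), here is how the corrected guard
could look as the switch variant suggested above; the helper name is made up
for this sketch:

#include <errno.h>

/* stripe_cache only makes sense for the raid5/raid6 personalities, so
 * the parameter is rejected for every other level, not just level != 5. */
static int stripe_cache_level_ok(unsigned level)
{
	switch (level) {
	case 5:
	case 6:
		return 0;		/* stripe_cache applies */
	default:
		return -EINVAL;		/* "Inappropriate argument: stripe_cache" */
	}
}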

> 
> 
> 
> >> @@ -536,9 +605,25 @@ static int parse_raid_params(struct raid
> >> 	if (dm_set_target_max_io_len(rs->ti, max_io_len))
> >> 		return -EINVAL;
> >> 
> >> -	if ((rs->raid_type->level > 1) &&
> >> -	    sector_div(sectors_per_dev, (rs->md.raid_disks - rs->raid_type->parity_devs))) {
> >> +	if (rs->raid_type->level == 10) {
> >> +		/* (Len * Stripes) / Mirrors */
> >> +		sectors_per_dev *= rs->md.raid_disks;
> >> +		if (sector_div(sectors_per_dev, raid10_copies)) {
> >> +			rs->ti->error = "Target length not divisible by number of data devices";
> >> +			return -EINVAL;
> >> +		}
> > 
> > I'm not entirely sure what you are trying to do here, but I don't think it
> > works.
> > 
> > At the very least you would need to convert the "sectors_per_dev" number to a
> > 'chunks_per_dev' before multiplying by raid_disks and dividing by copies.
> > 
> > But even that isn't necessary.  If you have a 3-device near=2 array with an
> > odd number of chunks per device, that will still work.  The last chunk won't
> > be mirrored, so won't be used.
> > Until a couple of weeks ago a recovery of the last device would trigger an
> > error when we try to recover that last chunk, but that is fixed now.
> > 
> > So if you want to impose this limitation (and there is some merit in making
> > sure people don't confuse themselves), I suggest it get imposed in the
> > user-space tools which create the RAID10.
> 
> I have seen what you do in calc_sectors(); I suppose I could bring that in.  However, I'm simply trying to find the value that will be preliminarily assigned to mddev->dev_sectors.  It is an enforced (perhaps unnecessary) constraint that the array length be divisible by the number of data devices.  I'm not accounting for chunk boundaries - I leave that to userspace to configure and MD to catch.

In that case I suggest you keep it out of dmraid.

It might make sense to check that the resulting array size matches what
user-space said to expect - or is at least as big.  However I don't see
much point in other size checks there.
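
For illustration, a rough userspace sketch of the arithmetic being discussed:
convert to whole chunks first, then scale by the number of disks and divide by
the copy count, so an odd chunk count on a 3-device near=2 array just leaves
the last chunk unmirrored and unused instead of being an error.  The numbers
below are made up:

#include <stdio.h>

int main(void)
{
	unsigned long long sectors_per_dev = 262272;	/* 2049 chunks: deliberately odd */
	unsigned int chunk_sectors = 128;		/* 64KiB chunks */
	unsigned int raid_disks = 3;
	unsigned int copies = 2;			/* near=2 */

	/* Convert to whole chunks before multiplying and dividing. */
	unsigned long long chunks_per_dev = sectors_per_dev / chunk_sectors;
	unsigned long long total_chunks = chunks_per_dev * raid_disks;
	unsigned long long usable_chunks = total_chunks / copies;

	printf("chunks per device: %llu (odd)\n", chunks_per_dev);
	printf("usable array size: %llu sectors (last chunk unused)\n",
	       usable_chunks * chunk_sectors);
	return 0;
}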

Thanks,
NeilBrown

Attachment: signature.asc
Description: PGP signature

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
