Re: More Hot Unplug/Plug work

On 04/28/2010 12:08 PM, Labun, Marcin wrote:
> 
> 
>> -----Original Message-----
>> From: Doug Ledford [mailto:dledford@xxxxxxxxxx]
>> Sent: Tuesday, April 27, 2010 6:45 PM
>> To: Linux RAID Mailing List; Neil Brown; Labun, Marcin; Williams, Dan J
>> Subject: More Hot Unplug/Plug work
>>
>> So I pulled down Neil's git repo and started working from his hotunplug
>> branch, which was his version of my hotunplug patch.  I had to do a
>> couple minor fixes to it to make it work.  I then simply continued on
>> from there.  I have a branch in my git repo that tracks his hotunplug
>> branch and is also called hotunplug.  That's where my current work is
>> at.
>>
>> What I've done since then:
>>
>> 1) I've implemented a new config file line type: DOMAIN
>>    a) Each DOMAIN line must have at least one valid path= entry, but
>> may
>>       have more than one path= entry.  path= entries are file globs and
>>       must match something in /dev/disk/by-path
> 
> DOMAIN is defined per container or raid volume for native metadata.

No, a DOMAIN can encompass more than a single volume, array, or container.

> Each DOMAIN can have more than one path, so it is actually the list of paths that defines whether a given disk belongs to the domain or not.

Correct.

> Do you plan to allow for the same path to be assigned to different containers (so path is shared between domains)?

I had planned that a single DOMAIN can encompass multiple containers.
So I didn't plan on a single path being in multiple DOMAINs, but I did
plan that a single domain could allow a device to be placed in multiple
different containers based upon need.  I don't have checks in
place to make sure the same path isn't listed in more than one domain,
although that would be a next step.
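
For illustration only (hypothetical paths and UUIDs), a single DOMAIN
covering two imsm containers might look something like this:

# One DOMAIN whose path glob covers all six onboard SATA ports; both
# imsm containers below live on disks behind those ports, so a new disk
# matching the glob could be handed to either container as needed.
DOMAIN path=pci-0000:00:1f.2-scsi-[012345]:0:0:0 action=spare
ARRAY /dev/md/imsm0 metadata=imsm UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md/imsm1 metadata=imsm UUID=11111111:22222222:33333333:44444444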

> If so, the domains will have some or all paths overlapping, and some containers will share some paths.
> Going further, this means that a new disk could potentially be grabbed by more than one container (because of the shared path).
> For example:
> DOMAIN1: path=a path=b path=c
> DOMAIN2: path=a path=d
> DOMAIN3: path=d path=c
> In this example disks from path c can appear in DOMAIN 1 and DOMAIN 3, but not in DOMAIN 2.

What exactly is the use case for overlapping paths in different domains?
 I'm happy to rework the code to support it if there's a valid use case,
but so far my design goal has been to have a path only appear in one
domain, and to then perform the appropriate action based upon that
domain.  So if more than one container array was present in a single
DOMAIN entry (let's assume that the domain entry path encompassed all 6
SATA ports on a motherboard and therefore covered the entire platform
capability of the imsm motherboard BIOS), then we would add the new
drive as a spare to one of the imsm arrays.  It's not currently
deterministic which one we would add it to, but that would change as the
code matures and we would search for a degraded array that we could add
it to.  Only if there are no degraded arrays would we add it as a spare
to one of the arrays (non-deterministic which one).  If we add it as a
spare to one of the arrays, then monitor mode can move that spare around
as needed later based upon the spare-group settings.  Currently, there
is no correlation between spare-group and DOMAIN entries, but that might
change.

> So, in the case of Monitor, sharing a spare device will be on a per-path basis.

Currently, monitor mode still uses spare-group for controlling what
arrays can share spares.  It does not yet check any DOMAIN information.
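
(For reference, spare-group today is just a tag on the ARRAY lines; a
minimal example, with made-up names and UUIDs, would be:

ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd spare-group=sata
ARRAY /dev/md1 UUID=11111111:22222222:33333333:44444444 spare-group=sata

mdadm --monitor is then free to move a spare from one of these arrays
to the other when a member fails.)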

> The same goes for new disks in the hot-plug feature.
> 
> 
> In your repo, domain_ent is a struct that contains domain paths.
> The function arrays_in_domain returns a list of mdstat entries that are in the same domain as the constituent device name
> (so it requires devname and domain as input parameters).
> In which case will two containers share the same DOMAIN?

You get the list of containers, not just one.  See above about searching
the list for a degraded container and adding to it before a healthy
container.

> It seems that this function should return a list of mdstat entries that share a path to which the devname device belongs.
> So a given new device may be grabbed by any of a list of containers (or native volumes).

Yes.  There can be more than one array/container that this device might
go to.
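
As a rough, self-contained sketch of that policy (the struct below is a
simplified stand-in for the mdstat entry list arrays_in_domain returns,
not mdadm's real types):

/*
 * Given the arrays/containers that share the new device's domain,
 * prefer a degraded one; otherwise fall back to the first entry.
 */
#include <stdio.h>
#include <stddef.h>

struct domain_array {			/* simplified stand-in for an mdstat entry */
	const char *devname;		/* e.g. "md127" */
	int degraded;			/* non-zero if the array is missing a member */
	struct domain_array *next;
};

static struct domain_array *pick_target(struct domain_array *list)
{
	struct domain_array *e;

	for (e = list; e; e = e->next)	/* degraded arrays get first claim */
		if (e->degraded)
			return e;
	return list;			/* otherwise any member of the domain */
}

int main(void)
{
	struct domain_array md2 = { "md2", 0, NULL };
	struct domain_array md1 = { "md1", 1, &md2 };	/* md1 is degraded */
	struct domain_array md0 = { "md0", 0, &md1 };

	printf("add spare to %s\n", pick_target(&md0)->devname);
	return 0;
}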

> Can you send a config file example?

The first two entries are good, the third is a known bad line that I
just leave in there to make sure I don't partition the wrong thing.

DOMAIN path=pci-0000:00:1f.2-scsi-[2345]:0:0:0 action=partition
	table=/etc/mdadm.table program=sfdisk
DOMAIN path=pci-0000:00:1f.2-scsi-[2345]:0:0:0-part? action=spare
DOMAIN path=pci-0000:00:1f.2-scsi-[2345]:0:0:0*
	path=pci-0000:00:1f.2-scsi-[2345]:0:0:0-part* action=partition


-- 
Doug Ledford <dledford@xxxxxxxxxx>
              GPG KeyID: CFBFF194
	      http://people.redhat.com/dledford

Infiniband specific RPMs available at
	      http://people.redhat.com/dledford/Infiniband


