Re: More Hot Unplug/Plug work

On 04/28/2010 02:34 PM, Labun, Marcin wrote:
>>> Going further, this means that a new disk can potentially be
>>> grabbed by more than one container (because of a shared path).
>>> For example:
>>> DOMAIN1: path=a path=b path=c
>>> DOMAIN2: path=a path=d
>>> DOMAIN3: path=d path=c
>>> In this example disks from path c can appear in DOMAIN 1 and
>>> DOMAIN 3, but not in DOMAIN 2.
>>
>> What exactly is the use case for overlapping paths in different
>> domains?
> 
> OK, makes sense.
> But if they do overlap, will the config functions assign paths as
> requested by the configuration file, or treat it as a
> misconfiguration?

For now it merely means that the first match found is the only one that
will ever get used.  I'm not entirely sure how feasible it is to detect
overlapping paths unless we are just talking about identical strings in
the path= statement.  But since the path= statement is passed to
fnmatch(), which treats it as a file glob, it would be possible to
construct two path statements that aren't identical strings yet match
the same set of files.  I don't think we can reasonably detect this
situation, so the answer may simply be "the first match found is used",
with that as the official stance.
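
To illustrate, here's a minimal standalone sketch (this is not mdadm's
actual matching code, and the device and glob strings are made up):

    /* Two path= globs that are different strings but still claim the
     * same device node, so comparing the strings cannot detect the
     * overlap. */
    #include <fnmatch.h>
    #include <stdio.h>

    int main(void)
    {
        const char *dev   = "/dev/sdb";        /* hypothetical device */
        const char *glob1 = "/dev/sd[a-c]";    /* DOMAIN1-style path= */
        const char *glob2 = "/dev/sd?";        /* DOMAIN2-style path= */

        /* fnmatch() returns 0 when the pattern matches the string */
        if (fnmatch(glob1, dev, 0) == 0 && fnmatch(glob2, dev, 0) == 0)
            printf("both globs match %s\n", dev);
        return 0;
    }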

> So, do you plan to make changes in assembly, similar to those in
> incremental, to serve DOMAIN?

I had not planned on it, no.  The reason is that assembly isn't used
for hotplug.  I could see a use case for it, though: if you called
mdadm -As, then maybe we should consult the DOMAIN entries to see
whether there are free drives inside a DOMAIN listed as spare or grow,
and whether any of the arrays being assembled are degraded and could
use those drives.  Dunno if we want to do that, though.  In any case, I
think I would prefer to get the incremental side of things working
first, then go there.
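
If we did go there, the check might look roughly like this (a purely
hypothetical sketch; none of these types or helpers exist in mdadm
today, it just spells out the order of the checks described above):

    #include <string.h>

    /* hypothetical summary of a DOMAIN entry parsed from mdadm.conf */
    struct dom { const char *action; int free_drives; };

    /* during -As, a free drive would only be a donor candidate if its
     * DOMAIN was listed with a spare or grow action */
    static int domain_can_donate(const struct dom *d)
    {
        return d->free_drives > 0 &&
               (strcmp(d->action, "spare") == 0 ||
                strcmp(d->action, "grow") == 0);
    }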

> Should an array be split (not assembled) if domain paths divide the
> array between two separate DOMAINs?

I don't think so.  Amongst other things, this would make it possible to
render a machine unbootable if you had a typo in a domain path.  I
think I would prefer to allow established arrays to assemble regardless
of domain path entries.

>>  I'm happy to rework the code to support it if there's a valid use
>> case, but so far my design goal has been to have a path only appear in
>> one domain, and to then perform the appropriate action based upon that
>> domain.
> What is then the purpose of the metadata keyword?

Mainly as a hint that a given domain uses a specific type of metadata.

> My initial plan was to create a default configuration for a specific
> metadata type, where the user specifies actions but no paths, letting
> the metadata handler use default ones.
> In your description, I can see that paths are required.

Yes.  We already have a default action for all paths: incremental.  This
is the same as how things work today without any new support.  And when
you combine incremental with the AUTO keyword in mdadm.conf, you can
control which devices are auto-assembled on a metadata-by-metadata basis
without the use of DOMAINs.  The only purpose of a domain then is to
specify an action other than incremental for devices plugged into a
given domain.
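
Concretely, something like this in mdadm.conf (the AUTO line is
existing syntax; the DOMAIN line is only a sketch of the syntax being
discussed in this thread, and the path glob is made up):

    # existing support: auto assemble imsm arrays, nothing else
    AUTO +imsm -all

    # only needed to request an action other than the default
    # incremental, e.g. treat new disks on these ports as spares
    DOMAIN path=pci-0000:00:1f.2-* metadata=imsm action=spare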

>> add it to.  Only if there are no degraded arrays would we add it as a
>> spare to one of the arrays (non-deterministic which one).  If we add it
>> as a spare to one of the arrays, then monitor mode can move that spare
>> around as needed later based upon the spare-group settings.  Currently,
>> there is no correlation between spare-group and DOMAIN entries, but
>> that might change.
> 
> A spare should go to any container controlled by mdmon, so any that contains redundant volumes.

Yep.

>>
>>> So, in the case of Monitor, sharing a spare device will be on a
>>> per-path basis.
>>
>> Currently, monitor mode still uses spare-group for controlling what
>> arrays can share spares.  It does not yet check any DOMAIN information.
> 
> Yes, and I am now adding support for domains in monitor and for spare-groups for external metadata.

Good to hear.

>>
>>> The same applies to new disks in the hot-plug feature.
>>>
>>>
>>> In your repo, domain_ent is a struct that contains domain paths.
>>> The function arrays_in_domain returns a list of mdstat entries that
>>> are in the same domain as the constituent device name
>>> (so it requires devname and domain as input parameters).
>>> In which case will two containers share the same DOMAIN?
>>
>> You get the list of containers, not just one.  See above about
>> searching the list for a degraded container and adding to it before a
>> healthy container.
> OK.
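
Put together, the hot-plug side would walk that list and prefer a
degraded container, something like this (a sketch only: domain_ent and
arrays_in_domain are the names from my repo, but this helper and the
mdstat_ent field it tests are illustrative, not real code):

    /* assumes mdadm's internal declarations (mdadm.h and friends) */
    static struct mdstat_ent *pick_target(char *devname,
                                          struct domain_ent *dom)
    {
        struct mdstat_ent *list = arrays_in_domain(devname, dom);
        struct mdstat_ent *e, *healthy = NULL;

        for (e = list; e; e = e->next) {
            if (e->degraded)    /* a degraded container wins outright */
                return e;
            if (!healthy)       /* else fall back to the first healthy
                                 * container we happen to find */
                healthy = e;
        }
        return healthy;
    }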


-- 
Doug Ledford <dledford@xxxxxxxxxx>
              GPG KeyID: CFBFF194
	      http://people.redhat.com/dledford

Infiniband specific RPMs available at
	      http://people.redhat.com/dledford/Infiniband
