Re: [PATCH 2/3] Don't tell sysfs to launch the container as we are doing it ourselves

On 01/03/2012 05:24 AM, Jes Sorensen wrote:
> On 12/23/11 00:48, NeilBrown wrote:
>> On Fri, 21 Oct 2011 15:33:19 +0200 Jes.Sorensen@xxxxxxxxxx wrote:
>>
>>> From: Jes Sorensen <Jes.Sorensen@xxxxxxxxxx>
>>>
>>> Signed-off-by: Jes Sorensen <Jes.Sorensen@xxxxxxxxxx>
>>> ---
>>>  Incremental.c |    1 -
>>>  1 files changed, 0 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/Incremental.c b/Incremental.c
>>> index 1a9a97b..cff7663 100644
>>> --- a/Incremental.c
>>> +++ b/Incremental.c
>>> @@ -446,7 +446,6 @@ int Incremental(char *devname, int verbose, int runstop,
>>>  	if (info.array.level == LEVEL_CONTAINER) {
>>>  		int devnum = devnum; /* defined and used iff ->external */
>>>  		/* Try to assemble within the container */
>>> -		sysfs_uevent(&info, "change");
>>>  		if (verbose >= 0)
>>>  			fprintf(stderr, Name
>>>  				": container %s now has %d devices\n",
>>
>> Hi Jes,
>>  I've had to revert this patch as it causes a regression.
>>
>> Without the 'change' event the container device doesn't get created in /dev.
>>
>> I'm not sure udev should be running "--incremental" on containers anyway, but
>> if it does, it shouldn't cause problems, should it?  If it does, we should fix
>> those problems.
> 
> Hi Neil,
> 
> I am wary of this being reverted, as it fixed a genuine race condition.
> On Fedora we don't have the problem with the container device not being
> created; could it be that your udev scripts are not doing the right
> thing in this case?

I think it's probably worthwhile for us to do a udev rules file
comparison.  So I'm attaching the patch we apply to the udev rules file
in your distribution before installing it, as well as the rules file
that handles incremental assembly on Fedora/RHEL.  Look them over and
see whether you want to include any of the things we have there in your
version.
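
(If it helps the comparison, a dry run of the installed rules against
the container device will show which RUN keys would fire.  Assuming the
container shows up as md127, something like

	udevadm test --action=change /sys/block/md127 2>&1 | grep mdadm

should list every mdadm invocation each rule set would trigger.)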

As to the question of udev running incremental on containers, I think
it's fair to say that either udev or mdadm should be doing so, but not
both.  Whether it should be one or the other is up for debate, I think.
Doing it in mdadm, where this patch pulls it out, means that it only
happens when a new device is added via incremental assembly.  The udev
version does it on a change event on the container.  The question I
have is: if you had mdmon migrate a spare into a container, would that
trigger this code in mdadm, or would it trigger the code in the udev
rules file?  Is there any situation where a change triggered by
something other than incremental assembly would require us to run
incremental on the container?  If the answer is yes, then I would
recommend doing this in the udev rules file rather than in the
incremental function of mdadm.
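
(One way to answer that empirically, assuming the container is md127:
watch the events while mdmon performs the migration with

	udevadm monitor --kernel --udev --subsystem-match=block

and/or synthesize the change event by hand, which is effectively what
the removed sysfs_uevent(&info, "change") call did:

	echo change > /sys/block/md127/uevent

If the monitor shows a change event on the container during the
migration, the udev rule would cover that case; if not, only the mdadm
path would.)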


-- 
Doug Ledford <dledford@xxxxxxxxxx>
              GPG KeyID: 0E572FDD
	      http://people.redhat.com/dledford

# This file causes mdadm to be run automatically on block devices that
# carry Linux RAID (mdadm) signatures.
# See udev(8) for syntax

# Don't process any events if anaconda is running, as anaconda brings up
# raid devices manually
ENV{ANACONDA}=="?*", GOTO="md_end"

# Also don't process disks that are slated to be part of a multipath device
ENV{DM_MULTIPATH_DEVICE_PATH}=="?*", GOTO="md_end"

# We process add events on block devices (since they are ready as soon as
# they are added to the system), but we must also process change events
# on any dm devices (like LUKS partitions or LVM logical volumes) and on
# md devices, because both of these first get added and then get brought
# live, triggering a change event.  The reason we don't process change
# events on bare hard disks is that if you stop all arrays on a disk and
# then run fdisk on it to change the partitions, fdisk triggers a change
# event when it exits, and we want to wait until fdisk has finished on
# all member disks before we do anything.  Unfortunately, we have no way
# of knowing when that is, so those arrays simply have to be brought up
# manually after fdisk has been run on all of the disks.
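# (For illustration: simply closing fdisk after rewriting the partition
# table, or forcing a re-read with "blockdev --rereadpt /dev/sda", emits
# such a change event on the bare disk; /dev/sda here is just an example.)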

# First, process all add events (md and dm devices will not really do
# anything here, just regular disks; this also won't catch any imsm
# array members)
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
	RUN+="/sbin/mdadm -I $env{DEVNAME}"
SUBSYSTEM=="block", ACTION=="remove", ENV{ID_FS_TYPE}=="linux_raid_member", \
	RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"

# Next, check to make sure the BIOS raid stuff wasn't turned off via cmdline
IMPORT{cmdline}="noiswmd"
IMPORT{cmdline}="nodmraid"
ENV{noiswmd}=="?*", GOTO="md_imsm_inc_end"
ENV{nodmraid}=="?*", GOTO="md_imsm_inc_end"
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="isw_raid_member", \
	RUN+="/sbin/mdadm -I $env{DEVNAME}"
SUBSYSTEM=="block", ACTION=="remove", ENV{ID_FS_TYPE}=="isw_raid_member", \
	RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
LABEL="md_imsm_inc_end"

# Next make sure that this isn't a dm device we should skip for some reason
ENV{DM_UDEV_RULES_VSN}!="?*", GOTO="dm_change_end"
ENV{DM_UDEV_DISABLE_DISK_RULES_FLAG}=="1", GOTO="dm_change_end"
ENV{DM_SUSPENDED}=="1", GOTO="dm_change_end"
KERNEL=="dm-*", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
	ACTION=="change", RUN+="/sbin/mdadm -I $env{DEVNAME}"
LABEL="dm_change_end"

# Finally, catch any nested md raid arrays.  If we brought up an md raid
# array that's part of another md raid array, it won't be ready to be
# used until the change event that occurs when it becomes live.
KERNEL=="md*", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
	ACTION=="change", RUN+="/sbin/mdadm -I $env{DEVNAME}"

# In case the initramfs only started some of the arrays in our container,
# run incremental assembly on the container itself.  Note: mdadm was
# already run on the container in 64-md-raid.rules, which is how the
# MD_LEVEL environment variable gets set.  If that disappears from the
# other file, we will need to add this line into the middle of the next
# rule:
#	IMPORT{program}="/sbin/mdadm -D --export $tempnode", \
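# (To check whether MD_LEVEL is already present for a given container,
# a query such as "udevadm info --query=property --name=/dev/md127 |
# grep MD_LEVEL" will show it; /dev/md127 is just an example name.)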

SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="md*", \
	ENV{MD_LEVEL}=="container", RUN+="/sbin/mdadm -I $env{DEVNAME}"


LABEL="md_end"
--- mdadm-3.2.1/udev-md-raid.rules.udev	2011-03-27 22:31:20.000000000 -0400
+++ mdadm-3.2.1/udev-md-raid.rules	2011-03-28 10:14:26.047232843 -0400
@@ -2,11 +2,13 @@
 
 SUBSYSTEM!="block", GOTO="md_end"
 
+# In Fedora we handle the raid components in 65-md-incremental.rules so that
+# we can do things like honor anaconda command line options and such
 # handle potential components of arrays
-ENV{ID_FS_TYPE}=="linux_raid_member", ACTION=="remove", RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
-ENV{ID_FS_TYPE}=="linux_raid_member", ACTION=="add", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
-ENV{ID_FS_TYPE}=="isw_raid_member", ACTION=="remove", RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
-ENV{ID_FS_TYPE}=="isw_raid_member", ACTION=="add", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
+#ENV{ID_FS_TYPE}=="linux_raid_member", ACTION=="remove", RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
+#ENV{ID_FS_TYPE}=="linux_raid_member", ACTION=="add", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
+#ENV{ID_FS_TYPE}=="isw_raid_member", ACTION=="remove", RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
+#ENV{ID_FS_TYPE}=="isw_raid_member", ACTION=="add", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
 
 # handle md arrays
 ACTION!="add|change", GOTO="md_end"
