DDF test fails if default udev rules are active

Hi Neil,

With my latest patch set, the DDF test case (10-ddf-create) succeeds
reliably for me, with one caveat: it works only if I disable the udev
rule that runs "mdadm -I" on a newly discovered container. On my system
(CentOS 6.3) it lives in /lib/udev/rules.d/65-md-incremental.rules, and
the rule is

SUBSYSTEM="block", ACTION="add|change", KERNEL="md*", \
  ENV{MD_LEVEL}=="container", RUN+="/sbin/mdadm -I $env{DEVNAME}"
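(For my tests I simply commented the rule out; an alternative sketch
that avoids editing the file under /lib would be to mask it with an
empty override, assuming /etc/udev/rules.d takes precedence over
same-named files in /lib/udev/rules.d:)

  # Mask the incremental rule with an empty override and reload udev,
  # run the test, then remove the override again.
  touch /etc/udev/rules.d/65-md-incremental.rules
  udevadm control --reload-rules
  # ... run the DDF test ...
  rm /etc/udev/rules.d/65-md-incremental.rules
  udevadm control --reload-rules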

The reason is that the DDF test case runs mdadm -Asc after writing the
conf file defining the container and 3 arrays.

mdadm -Asc will first assemble the container. When it then tries to
assemble the member arrays, these have already been started by the udev
rule above, causing the assembly to fail with the error message "member
/dev/md127/1 in /dev/md127 is already assembled".
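The race can be watched directly; a rough illustration (the config path
is the one the test writes, the udevadm invocation is generic):

  # In one terminal, watch block-device udev events:
  udevadm monitor --udev --subsystem-match=block
  # In another, run the assembly step of the test:
  mdadm -Asc /var/tmp/mdadm.conf
  # The "add" event for the new container triggers the rule above, which
  # runs "mdadm -I" and starts the members before -Asc reaches them;
  # /proc/mdstat then shows them running despite the reported failure.
  cat /proc/mdstat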

I have done my testing with the above udev rule commented out, and
everything works fine. But I am not sure whether the failure of
"mdadm -Asc /var/tmp/mdadm.conf" indicates a problem with the DDF code,
or whether it is really just a problem with the test case. Personally,
I'd rather have a test case that succeeds by default on a system with
the standard configuration (i.e. with the above udev rule active).
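If we want the test to work with the rule active, one option might be
to pause udev's rule execution around the assembly step, so that the
incremental rule cannot race with "mdadm -Asc". A rough, untested
sketch:

  # Keep udev from running RUN+= commands while we assemble, then let
  # the queued events through again afterwards.
  udevadm control --stop-exec-queue
  mdadm -Asc /var/tmp/mdadm.conf
  udevadm control --start-exec-queue
  udevadm settle

But that would only be a workaround in the test; whether the real fix
belongs in the test case or in the DDF code is the question above.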

What do you think?
Martin