Re: Remove inactive array created by open

Simon Guinot <simon.guinot@xxxxxxxxxxxx> writes:

> Hi Neil,
>
> I'd like to have your advice about destroying an array created by open
> at close time if not configured, rather than waiting for an ioctl or a
> sysfs configuration. This would allow us to get rid of the inactive md
> devices created by an "accidental" open.
>
> On the Linux distribution embedded in LaCie NAS, we are able to
> observe the following scenario:
>
> 1. A RAID array is stopped with a command such as mdadm --stop /dev/mdx.
> 2. The node /dev/mdx is still available, because it is not removed by
>    mdadm at stop time.
> 3. /dev/mdx is opened by a process such as udev or mdadm --monitor.
> 4. An inactive RAID array mdx is created and an "add" uevent is
>    broadcast to userland. It is left to userland to understand that
>    this event must be discarded.
>
> You have to admit that this behaviour is at best awkward :)

No argument there.


>
> I read the commit d3374825ce57
> "md: make devices disappear when they are no longer needed" in which
> you express some concerns about an infinite loop due to udev always
> opening newly created devices. Is that still the case?
>
> In your opinion, how could we get rid of an inactive RAID array created
> by open? Maybe we could switch the hold_active flag from UNTIL_IOCTL to
> 0 after some delay (enough to prevent udev from looping)? In addition,
> maybe we could remove the device node from mdadm --stop? Or maybe
> something else :)
>
> If you are interested in any of these solutions, or one of your own, I'll
> be happy to work on it.

By far the best solution here is to use named md devices.  These are
relatively recent and I wouldn't be surprised if you weren't aware of
them.

md devices 9,0 to 9,511 (those are major,minor numbers) are "numeric" md
devices.  They have in-kernel names md%d, which appear in /proc/mdstat
and /sys/block/.

If you create a block-special-device node with these numbers, that will
create the md device if it doesn't already exist.
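For instance, roughly (untested here; the minor number 3 is an arbitrary
choice, assuming no md3 exists yet):

   mknod /dev/md3 b 9 3      # block device 9,3 is the numeric array "md3"
   : < /dev/md3              # the open() alone instantiates the md device
   grep md3 /proc/mdstat     # now listed, as "md3 : inactive"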

md devices 9,512 to 9,$BIGNUM are "named" md devices.  These have
in-kernel names like md_whatever-you-like.
If you create a block-special-device with device number 9,512 and try to
open it you will get -ENODEV.
To create one of these you run
   echo whatever-you-like >  /sys/module/md_mod/parameters/new_array

A number 512 or greater will be allocated as the minor number.
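So roughly (untested sketch; the name "home" is just an example):

   echo home > /sys/module/md_mod/parameters/new_array
   ls -l /dev/md_home          # the node for the new named device
   grep md_home /proc/mdstat   # present, inactive until assembled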

These arrays behave as you would want them to.  They are only created
when explicitly requested and they disappear when stopped.

mdadm will create this sort of array if you add
 CREATE names=yes
to mdadm.conf and don't use numeric device names.
i.e. if you ask for /dev/md0, you will still get 9,0.
But if you ask for /dev/md/home, you will get 9,512, whereas
with names=no (the default) you would probably get 9,127.
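So a minimal setup, just as an illustration (the array name and member
devices here are made up):

   # /etc/mdadm.conf
   CREATE names=yes

   # any non-numeric name then gets a "named" device, e.g.
   mdadm --create /dev/md/home --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

which shows up in /proc/mdstat as md_home rather than md127, and
disappears cleanly when stopped, so the accidental-open problem
doesn't arise.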

A timeout for dropping idle numeric md devices might make sense but it
would need to be several seconds at least as udev can sometimes get very
backlogged and we wouldn't want to add to that.  Would 5 minutes be
soon enough to meet your need?

NeilBrown
