Ian Kent wrote on 06-07-2016 14:48:
But I don't understand your use of "lazy initialization".
I meant creating the mount point only when it was accessed.
So I don't quite understand what your concern is.
Sure, if you have hundreds or thousands of these there can be a
performance
problem but that's not what you were trying to do I thought.
I'm not sure either at this point but I'm too hungry to read back on
what I had written before :p.
Of course, yes, program maps can never know in advance what key may be
passed to
them.
The problem is usually seen with user applications, cron jobs that scan
directories, GUI file management utilities etc.
It has been quite a problem over time.
Yes that is my concern. I do not like the idea of invisible access
points at all, unless I were doing it for security, but in that sense you
can't really hide anything on a Linux system. Invisible mount points
only make sense when you have LOTS of them and you only need a few at a
time.
LDAP is a map source, like a file, or nis etc.
Maps stored in LDAP behave much the same as maps stored in files.
Right so you can browse them if you want.
So then consider that we only use a static map here. An actual file
with thousands of entries? Of course it can happen. But what would be
the point of that (as opposed to LDAP or a program map)?
People do seem to like program maps but plenty of people use static or
semi-static maps, ie. a bunch of entries followed by the wildcard map
key "*", usually with some macro or "&" substitution.
Similarly, what might match the wildcard map entry can't be known in
advance.
Right. That makes it programmatic; the (server) response will determine
if anything was actually there. On the other hand a wildcard (in a
direct map at least) may match existing filesystem components that you
could read in advance. And you need to, in a direct map. But direct maps
are mounted anyway (or the mount point is created, I mean). So it
currently only applies to indirect maps where the wildcard is the final
component, and it then behaves like a programmatic thing where it only
actually gets mounted (and created) if e.g. the server responds with a
share.
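(For reference, the kind of semi-static map with a trailing wildcard
that we are talking about would look roughly like this; the host and
paths are of course made up:

    # /etc/auto.nas, an indirect map
    media    -fstype=nfs    nas:/export/media
    backup   -fstype=nfs    nas:/export/backup
    *        -fstype=nfs    nas:/export/&

The "&" is substituted with whatever key matched the "*".)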
It's hard enough for me to understand VFS path walking sufficiently
without
having to describe it too.
I don't know. I studied it in some detail, I just don't understand the
RCU algorithm. At least, I have studied path_lookupat and the entire
procedure leading up to lookup_fast and lookup_slow. I'd have to check
my printouts to say more. But let's just say I'm aware of the
(recursive) operation involving symlinks, where intermediate symlinks
get expanded before the original path is continued, etc. I am also
aware of the presence of autofs code, particularly in the function that
deals with one or two other mount things.
I believe that is called by lookup_slow....
All I can say is there is a problem determining whether a call back to
the daemon is needed or not at the times the kernel module is called.
That can sometimes result in all entries in the autofs managed mount
point directory being mounted.
So the mount point is accessed but the module doesn't know if anything
really needs to happen at that point. And stat operations or anything
else all use the same path lookup. Of course I don't know what happens
pre-mount and post-mount to the directory itself. If post-mount the
directory is really on another filesystem, the "mnt" value will change.
And it will return another dentry for that other mount. I remember
"path" being mnt+dentry.
Another aspect of this problem is that the same system call may need to
have the directory trigger a mount, while at other times it's called it
should not, and there's no way to tell what the usage is within the
kernel.
So you are getting called by lookup_slow or something of the kind near
it, but you're not sure why. But these lookups are being done for each
individual path component. However a "cd" to the directory should
mount it, but a mere "stat" may not. Is that the case? Seems odd that it
would be difficult, but I'll take your word for it.
Maybe "cd" shouldn't mount it until the contents is accessed, but I
don't know.
So sometimes people get caught because they want their stat(2) call to
return
the mounted file system results but not to cause a mount at other times
when it
might lead to a mount storm.
Hmm. It seems like the first desire is a problem. At least, I don't know
how directory lookup is done. I don't know if this information is the
result of a stat call or something else. I do remember seeing code for
dentry-moving but it was confusing.
For sure you want "ls /directory/" to return the mounted contents. But
I'm not sure if it's possible for your module to differentiate. "ls -d
/directory/" should perhaps not cause mounting. I can see quickly
though that "ls" causes an "open" call but "ls -d" does not.
But then, "open" will not access any path components beyond the
directory itself. So if your module is only called by path_lookup it is
going to be impossible to know what for.
Particularly if an "ls" does not actually access path components
(beyond the directory itself) there would be no way to know what is
being asked.
Usually all I can say to them is don't use the "browse" option.....
:).
It is a difficult problem and I'm inevitably caught in the middle
between the
"browse" and "nobrowse" camps.
You appear to be very conscientious about it.
Not all cases of accessing such a directory will see this problem
either but
user space seems to have a habit of using system calls that don't
trigger a
mount together with others that do.
Such as that "ls" vs "ls -d"?
Causing the so called mount storm.
Still seems odd, but I believe you. I guess what it could mean is
"ls" not only trying to list every share (browse) but also to perform
calls on those shares (path components retrieved) that might require
additional information. For example, if I am not content just knowing
the shares, but also want to know the number of files in them.... which
definitely some programs or user agents would do. Then I'm in trouble
because this requires mounting. Then, if that is the case, a simple
listing of /nas/ might result in the subfolders (/nas/media, /nas/backup,
etc.) also being queried and you have what you call a mount storm.
But this is the result of user space convention and behaviour. Surely
people cannot request browse while at the same time wanting to deny
actions on the returned results when these actions stem completely from
userspace?
If some user agent or GUI is trying to list the number of files in each
directory (which at this point means a share) then that's not something
autofs could ever deal with. And I don't think it is a problem autofs
needs to solve.
If I am at all correct about this and I see what you mean right now,
this "GUI" behaviour would then simply be incompatible with unmounted
folders *that-might-need-to-get-mounted-but-not-always* and this GUI
would need to be aware to not do that thing.
Having said that though, with a lot of work, I could probably do
something about
it.
TBH I've avoided it because it is difficult, perhaps in version 6.
It would require not pre-creating mount point directories and adding a
readdir
implementation to the kernel to fill in the names (map keys) that
aren't already
mounted. But that means significantly more (in terms of quantity of)
communication between the daemon and the kernel module.
That means re-writing the user space <-> kernel space communications
completely
in an incompatible manner to what is used now, not to mention
significant
changes to the user space code itself.
A big job really.
And not a big joy perhaps.
So basically you are going to not precreate, but still hijack kernel
calls to return the listings for the directories that do not yet exist
(the listing of those directories).
Which means those directories could be shown, but not opened, or
something of the kind. Then, the user agent .. well I don't know what
difference it makes actually if the mount storm is the result of
userspace calls that directly cause it. So I guess I'm mistaken here and
the issue is even more complex.... :(.
I don't know how a readdir on the "top" directory could /directly/
cause a mount storm on the listed entries. But then, I don't know much
;-).
It happens, from what I've seen, quite a bit.
Perhaps you would be surprised.
Illuminate me :P. What is the most common cause of it?
Yes, but network sources such as NIS and LDAP are treated much the same
as a
file map.
Yes, I didn't realize at the start. I have always found LDAP to be a
rather incomprehensible thing to work with. Without hands-on experience
it is hard to get a good grasp of how it works. Or even how to organise
it (if you do create one) as there seem to be standards for everything.
Checking if a map has been updated since a key was last used is easy
with a file map and straightforward with a NIS map, but can't sensibly
be done for an LDAP map.
Right.
Maybe I'm missing something here.
> Another reason for the default is that, because of this, historically
> there was
> no "browse" option and this maintains what is (should be) expected.
Yeah I thought so, because of the comments near the timeout value.
However that is a reason to never change anything ever. Someone
mistakenly choosing "browse" will very quickly find out.
And wouldn't itself be sufficient but for the problems I'm attempting
to
describe above.
You mean the mountstorm problems are reasons to keep the default at
no-browse?
It still seems at this point that this is only relevant for LDAP/NIS
entries.
First, the auto.master file on my system mentions "man autofs" as a
place for seeing the format of the (sub)map files. However there is
nothing in there. So that part is missing, but in a sense the contents
of auto.misc is already somewhat sufficient for that; still, it is
annoying that there is no man page.
That doesn't sound right, there might be a problem with your
distribution
package.
No, as I say later this is incomprehension on my part ;-). Or merely the
fact that needing to use "number" for accessing "sections" of the
"manual" is so rare (if you are not a developer accessing library
functions and/or kernel functions) that a regular user will almost NEVER
use a section number in daily operation.
Most configuration files are called "openvpn.conf" or something of the
kind, and hence their manual page is also called that, and you don't
need to do "man 5 openvpn.conf", because "man openvpn.conf" is
sufficient.
Using numbers would be superfluous in 99% of cases, or even more than
that. As opposed to perhaps requesting help on function calls, such as
mount() as opposed to /bin/mount.
So as a developer you may be used to it. But a regular user is not.
Anyone who does not do C programming, is not.
So, my mistake, I guess.
You don't imagine that every freaking access of a directory or path
component in the managed directory is going to result in network
access.
You could do a "for f in `seq 1 10000000`; do vdir $f; done" and have
fun.
(Don't know if that would still be the case for "browse" maps).
So this hidden directory, secret door thing is really the most
confusing
of all.
It's somewhat similar to something I might like to do, but still you
don't really expect it ;-).
So the 3-4 things are:
create a man page or section for the format of the sub-maps
Already exists, needs work.
Let's call these mount maps (not the best but the convention used) as
opposed to the master map.
Alright. Still confusing though.
indicate more clearly that programmatic maps in principle return a
single entry
Maybe but isn't this in autofs(5) already close:
Executable Maps
A map can be marked as executable. A program map will be called
with the key as an argument. It may return no lines of output
if there's an error, or one or more lines containing a map entry
(with \ quoting line breaks). The map entry corresponds to what
would normally follow a map key.
I guess... it's just that you read over "a map entry" as meaning "only a
single map entry" since just before it says "one or more lines". So it
is something you only notice if you reread it in detail. After finding
out that it doesn't work ;-).
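(To be explicit about what I eventually understood, a minimal program
map would look roughly like this; the hostname and share names are
made up:

    #!/bin/sh
    # called by automount as: /etc/auto.example <key>
    key="$1"
    case "$key" in
        media|backup|video)
            # print at most ONE map entry: options followed by the location
            echo "-fstype=cifs,guest ://diskstation/$key"
            ;;
        *)
            exit 1   # unknown key: no output, so the lookup simply fails
            ;;
    esac

One key in, at most one entry out.)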
Isn't this near the top of autofs(5) sufficient:
key [-options] location
key
For indirect mounts this is the part of the path
name between the mount point and the path into
the filesystem when it is mounted. Usually you
can think about the key as a sub-directory name
below the autofs managed mount point.
For direct mounts this is the full path of each
mount point. This map is always associated with
the /- mount point in the master map.
Yes it is, particularly because it mentions that it is a sub-directory
name.
Then, I begin to wonder what "usually" means. See, from the looks of it
it is completely identical to a sub-directory name, because:
- the key is going to be directly used as the name of the subdirectory
- programmatically, the script does not even have any say about the key;
it cannot even return a key and does not determine the key. The only
thing that can ever happen is that the key as given is turned into a
subdirectory (when valid).
I mean the logical abstraction between "key" and "value" may make sense
from a purely abstract logical point of view ;-). But it has no bearing
on actual domain-specific terminology, it is just a computer science
term so to speak.
What it actually comes down to is:
- mount point for the actual share, or
- subdirectory created as mountpoint for the actual share(s).
Where "value" would mean:
- path to the mount (share), or
- network path to the share that gets mounted on the subdirectory (the
key).
the "mount" utility calls these source and target, respectively (source
for the path, target for the mount point). Otherwise it calls them
"device" and "mountpoint".
So although "key" also has meaning with regards to "database lookup" I
don't see how it is ever not directly used as the mountpoint itself. And
this is what creates the confusion because with the "browse" option it
is not just going to be a "lookup key".
Further, no distinction is made between "subdirectory mount point" and
"lookup key", even though these are fundamentally separate concepts, and
could even imply, in principle, that the actual directory to use is
*also* retrieved by the key.
So conceptually the idea that "key" instantly means "subdirectory
mount-point" is confusing.
In fact it is odd to begin with, from the perspective of a user, that
"auto.smb" mounts the entire share list of the requested "host" or "key"
in one go under the location that was accessed.
This is not the most common scenario (well, maybe if you are in some big
network). It is not common, not only because it mounts /all/ the
shares at once(!) but also because generally people want to have some
level of control over the directory it ends up in, and the hostname of
some server might not be where you actually want to mount it.
So I suggest that even this auto.smb is a bit of a weird entity. I have
used it to generate a map list for me that I then adjusted and used as a
static map file.
I know it does the same as that -net or -host thing. It's just that for
a desktop user (again, you say that is not the target audience, but
still) what you really want is a fixed location the user chooses (in
auto.master) and then a browseable list of shares. Apparently this list
needs to be static. What I envisioned before would be a parameter to a
script that causes it to output a list of shares rather than the share
entries themselves, or just the complete list of it.
This would give it "browse" capability. If you had that, you would need
to be able to pass a parameter to the script as the key, but this is
rather obscure.
What it comes down to is that this is what I would envision for myself:
Auto.master:
/cifs /etc/auto.smb diskstation <-- diskstation is
the key
Now the program/daemon/whatever passes diskstation as key to the script.
It also passes a wildcard parameter such as "*". The script now returns
a list of shares for "diskstation". Subsequently when the daemon needs
to mount, it calls the script with diskstation and the requested mount
point/share. Alternatively, the returned output from the first call must
be complete and is saved by the daemon. It is then used in place of a
static map (the script is now nothing but a static map generator).
Personally I prefer the former: the script has two modes: list and
return entry, and it therefore has two parameters: one to identify what
to list, and the second to determine what entry to return from what was
listed.
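To make that concrete, the two kinds of call I am imagining would look
roughly like this (the script name, host and shares are all made up,
and this is not how autofs calls program maps today):

    # list mode: asked once for everything under the configured host
    /etc/auto.smb '*' diskstation
      media   -fstype=cifs ://diskstation/media
      backup  -fstype=cifs ://diskstation/backup

    # entry mode: asked for one share when /cifs/media is first accessed
    /etc/auto.smb media diskstation
      -fstype=cifs ://diskstation/media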
Otherwise, without the parameter, you'd need to hardcode it in the
script and duplicate the script for every host, etc.
In fact as a user you may not even want all shares to be present on your
local system. So you might equally well decide to use a static map. So
perhaps you will use "auto.smb list" or "auto.smb list diskstation" or
"auto.smb diskstation *" manually. And perhaps it generates all of the
mappings for you that you can then prune and save.
Perhaps, in absence of the key (the entry, or wildcard) a call such as
"auto.smb diskstation" should simply generate the share list, whereas
"auto.smb diskstation media" generates the entry for only that key.
So you see me use a term like "entry" to denote what you today call a
"key" and the word "key" to denote what you today call an "entry" ;-).
"entry" means "element of a pre-existing list", "key" means "value to
use for a lookup operation in a database".
"key" can also mean "that which you need to mount or access something".
so if you combine the two, you have like three things:
- subdirectory (mount point)
- key to use for database lookup
- entry in the list of shares
but you can only solve it if you differentiate between "lookup
operation" and "element of a list".
a lookup operation may /return/ a list, which is why I said:
"diskstation" (hostname) is a key
list of shares is the result of that lookup operation based on that key
entry is an element of that list you can mount.
This applies to a form of browse scenario. If you have invisible
mount points, you get:
"diskstation" (hostname) is a key that is going to be a direct
subdirectory
the list of shares it returns is going to be a subdirectory list to that
first subdirectory
this will contain the actual shares, so you get top/key/entries.
And you see we basically have the same thing except that this is one
level deeper in our hierarchy.
In my desired use case, you get:
top/entry
But in the current implementation of e.g. auto.smb, you get:
top/key/entry
So it is clear diskstation (hostname) IS a key, except that in the
current case or scenario this key is ALSO used as the direct descendant
of the toplevel directory used for the configuration in auto.master.
Which gives rise to the idea that maybe "key" and "subdirectory mount
point" should not be equivalent or identical. Not necessarily identical.
In fact, auto.smb DOES produce a browsable list. It is just /beneath/
the subdirectory for its "key" (the hostname) and it mounts ALL of the
shares at once.
So you now already have that mount storm (it just gets mounted by
default); it's just a level deeper.
You now don't have a mountstorm on the level of hosts, but you have a
mountstorm on the level of shares (per host) but it is still the same
thing.
So the design specifies that keys must become mountpoints in the
"sub-top-level" directory (the directory managed by autofs for indirect
mounts).
Whereas perhaps conceptually you would need to differentiate between
key, and an entry used for mounting, and the difference is that this
entry used for mounting can be an element of the list returned by the
key.
So you get key -> list -> entries. In the case of current auto.smb, what
happens is:
key -> (subdirectory) -> list (itself an entry) -> entries.
Or in other words:
key -> list -> entries, but mounted beneath top/key, as key is used as
an extra directory in between.
Current auto.smb:
<managed mount point> / <key> / entries / contents of shares
My preference:
<managed mount point> / entries / contents of shares
And why? Only because I want to specify mount points for specific hosts
manually.
So the question becomes how you can combine such things.
The requirement becomes for the script to return the mount point. A
static list does return mount points. A script does not, the mount point
is specified as the "key" given to it. So if you want the script to
determine the mount point the way a static list does, you need it to
return, in essence, a static list.
Even if your static list is not browseable, a returned "dynamic list"
can likewise be non-browseable. But at this point your script can
return a mount point of its own. That is again problematic if the key
is derived from filesystem access.
So you see how hard this is to reconcile.
If we insist on accessing hidden shares and deriving keys from user
action or filesystem action, or filesystem access, then keys are
predetermined the moment the system is actually called. The script or
database can only return whether something is going to be there or not.
Then the only way to reconcile it is (in my case) for the key to not be
the only argument to the script, but to also give the script an argument
I define in auto.master:
/cifs /etc/auto.smb diskstation
Now my script knows that e.g. $1 is gonna be the filesystem requested
path, and $2 is gonna be the custom parameter I determined in advance.
I don't need to use the key by itself ($1) to do the lookup. My real key
is $2. But I will use the second key ($1) to return a share for the host
determined by the first key ($2).
In essence what happens with auto.smb is that the key ($1) is used as
the host (first key) and auto.smb will itself derive and load the list
of shares, and then instantly mount that (it retrieves all of the second
keys and instantly uses them and mounts everything).
autofs allows for this second list of keys to be instantly used and
utilized.
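(For reference, what the stock auto.smb prints for a key is, as far as
I can tell, one big multi-mount entry of roughly this shape; the share
names are made up:

    -fstype=cifs \
        "/media"  "://diskstation/media" \
        "/backup" "://diskstation/backup"

so every share becomes an offset below top/key in one go.)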
But it's not the common use case because you are now fixed to that
location the user accesses and of course you could use that location
(the key) ($1) to look up a host in a different way, that is perfectly
possible. If your script marries "diskstation" to "ds" or "ds" to
"diskstation" then you could automount /cifs/ds and it would request
shares from host "diskstation".
But you're still stuck with that intermediate level, that extra level of
subdirectories.
It's not possible currently to use the "first key" to determine the host
and then get the "second keys" ($1) to determine the entries. Well,
actually it is, but not in a browseable fashion:
/cifs /etc/auto.smb -Dhost=diskstation
I believe this could work. You pass your real "primary" key as that
variable, and your script returns shares based on that key, but with the
"filesystem key" ($1) as the name of the share to return.
Still, rather convoluted at this point, and not possible to browse this
list.
Because it doesn't return a list. Using $2 to pass that variable
wouldn't really be different.
But the script is only called when an entry is accessed (through the
filesystem) and as such is never called without a secondary key.
The current auto.smb doesn't have secondary keys, it loads and mounts
all of them instantly and automatically, and there is no "automount" for
the secondary level.
Well, there is, it just mounts all of it in one go.
So how do you skip the intermediate level?
- call the script on purpose to generate a 'static list' instead of just
for entries alone.
- use the static list to precreate directories (mount points), or
- use the static list to return readdir information, or to filter
requests for directories, or don't use the static list at all and just
keep it the same.
So we also see here that the static list *could* be used for the thing
you described above, that alternative readdir implementation.
Although it's a bit of a hack, I'm sure.
You could specify some interval for invalidating the list, at which
point it is renewed by another "get list" call to the script.
So in essence I am really advocating "return list" functionality for
scripts here.
And there is no easy way to implement that while staying compatible with
current scripts because you would need to: call the script with an empty
parameter (that will never return an entry) or with a wildcard (that
will probably never return an entry) and in essence, that might just as
well mean, that if you want to take this approach:
- use the wildcard as the "list" operation, scripts that do not know
about the wildcard will simply return zero entries.
- use the extra parameter as $2, in other words: allow for additional
parameters in auto.master in that form of:
/cifs /etc/auto.smb diskstation random
But there are already recognised options. So what we have today (-D) I
guess already works, it is just a bit obscure, and it seems to be the
equivalent of what you would do.
Then an alternative auto.smb (I called it auto-short.smb) would:
- be called as:
/cifs /etc/auto.smb -Dhost=diskstation
- return a list of shares when called with "host=diskstation
/etc/auto.smb *"
- alternatively you could have it be called with: "/etc/auto.smb *
diskstation"
- make sure the shell doesn't expand that * ;-).
- autofs calls this list operation each time auto.master is reread.
- autofs calls this list operation each time some invalidation timeout
occurs
- not really any other way to trigger it.
- the static list being returned (really a dynamic list now) can be used
for readdir, or even access filtering (it contains all the valid keys).
Now when you have something like this, you can:
- call the filesystem component you use to access the actual shares, the
"entry" or simply "share" or "directory". But "key" becomes more
comprehensible now as well. "key" now corresponds to "entries in the
returned list". Each "entry in the returned list" is now a "mapping
between key (directory) and value (device-path)". I realize "mount point"
might be confused with the mount point defined in the master file.
"submount" would be ugly. "subdir" is most convenient. "subkey" is
illustrative. "entry" describes the entire thing, not just the key. But
in essence we are just talking about a mount-point here.
Nevertheless in principle the "first key" given should not even need to
be the mount point in fact. It is a key to a lookup-table.
If you wanted to do it right I think you would need to do away with
hidden (or principally unknown) mount points, and basically always work
with static lists (or dynamically returned lists) and/or give every
script a first parameter $1 that is a key that you actually use for it.
Then the second parameter $2 would be the key that the filesystem has
actually given you for accessing a path.
In this way "access path" becomes secondary.
Perhaps a script in this way still cannot control the mount point (name)
but at least now it provides it in advance. Of course there are
situations in which you cannot know it in advance or in which it is
irrelevant to know it in advance.
You can imagine a list to be much too big to actually retrieve it. In
itself a sign of a bad system, I think. If it was that big, you would
need hierarchy to make it smaller.
So I don't know if there is ever any real need for obscurity, and for
hosts to be accessed while not being able to list them in advance.
I conjecture that this use case in itself would be odd, that you create
auto-mounts for an unspecified number of hosts you don't really know
about in advance.
It makes for zero-configuration, true. You don't have to keep a list
anywhere or even arrange for it to be generated.
On the other hand if a list could not be had, the script would just
return nothing and the daemon will assume no-browse
no-information-in-advance lookups the way it does today.
Basically if you keep the current $1 and use it to pass a wildcard, no
script is likely to ever break on that (it will just return nothing).
Then $2 can be your primary key. Not every script needs it.
But complete zero-configuration is to me an odd thing: a magical
directory that will mount every host given to it, including all of its
shares, does not seem sensible to me in the sense that you would at
/least/ want some configuration for every system, right?
One size fits all: the same system for every computer, just place it
somewhere in your filesystem and you have access to anything and
everything.
No, normally you would want to tailor it because you do not have access
to an infinite number of hosts, and the /cifs/host/share structure
therefore does not make much sense.
/device/share may make more sense, perhaps added with
/hosts/hosts-meeting-a-certain-pattern/share
You already have a pretty big system if you use the latter. Anyway.
I think I should conclude this :p.
What seems sensible to me at least is to add something that doesn't
break compatibility, by using a wildcard * for scripts that will return
a certain format of list (or just all shares it can find in the regular
format) with a $2 parameter designating something such as a "host" or
perhaps even "department".
In this way you solve the issue that the current auto.smb requires the
"hostname" to be the first key given to it, because there is no other
way to pass a key. Then the actual keys can be passed as "second
parameter" in the sense that they still become $1, but at least now you
can specify your host to it in advance instead of guessing it on the
wind ;-).
So if I have to say anything I would say that I recommend this
structure:
- $1 remains the filesystem key
- $2 becomes the additional key a user may specify in auto.master
- $1 as * will designate the request for a list lookup
- the list can in the future be used for readdir implementation, and
currently for precreated mountpoints in case "browse" is used.
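A rough sketch of a script following that structure (none of this works
with autofs today; the smbclient parsing, the "*"/"_" convention and the
default hostname are all placeholders):

    #!/bin/sh
    # Proposal sketch only, not current autofs behaviour.
    # $1 = filesystem key, or "*" (or "_") to request the whole list
    # $2 = additional key from auto.master, e.g. a host name
    key="$1"
    host="${2:-diskstation}"        # placeholder default

    list_shares() {
        # share names, one per line; the parsing is illustrative only
        smbclient -N -g -L "$host" 2>/dev/null |
            awk -F'|' '$1 == "Disk" { print $2 }'
    }

    if [ "$key" = "*" ] || [ "$key" = "_" ]; then
        # list mode: one complete map entry per share
        list_shares | while read -r share; do
            echo "$share -fstype=cifs ://$host/$share"
        done
    else
        # entry mode: a single entry for the requested share
        echo "-fstype=cifs ://$host/$key"
    fi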
indicate that this is true for "no browse" on "static maps" as well,
and
that directories are only created when they are accessed.
That seems to be missing from the man page.
And that no-browse is the configured default.
Not sure about that either.
This (from auto.master(5)) says that I have made "browse" the internal
program default way back when version 5 was first developed, but I
install a configuration that turns it off (because of the above
problems, which are not spelled out in the man page):
[no]browse
This is an autofs specific option that is a pseudo mount
option and so is given without a leading dash. Use of the
browse option pre-creates mount point directories for
indirect mount maps so the map keys can be seen in a
directory listing without being mounted. Use of this
option can cause performance problem if the indirect
map is large so it should be used with caution. The
internal program default is to enable browse mode for
indirect mounts but the default installed configuration
overrides this by setting BROWSE_MODE to "no" because
of the potential performance problem.
I know, but if you, as a user, do not understand what the hell it is
doing in the first place, and you have no time for extensive manual
reading in advance (like me) you will never get to that option because
you still don't understand the basics, and the more advanced stuff (like
that) that will be too hard to understand.
That information is too difficult to comprehend (or digest) when you are
someone who still doesn't understand how it works.
I think that limitation indicates why using relative paths is so much
more pleasant. A map file using relative paths can be "moved around". A
map file using absolute paths cannot.
Could you not make a direct map out of the key of the master file, and
the keys of the sub-maps?
Can't really do that.
It's not that I saw it as necessary, I just still think the difference
and distinction between "direct" and "indirect" is rather I dunno, weird
I guess ;-).
The closest thing to that would be multi-mount map entries, again from
autofs(5):
Multiple Mounts
A multi-mount map can be used to name multiple filesystems
to mount. It takes the form:
key [ -options ] \
[[/] location \
[/relative-mount-point [ -options ] location...]...
This may extend over multiple lines, quoting the line-breaks
with
`\´. If present, the per-mountpoint mount-options are appended
to the default mount-options. This behaviour may be overridden
by the append_options configuration setting.
Something I am already using (almost) and I believe this is what
auto.smb uses by default too. It's just that I use it to mount parts of
the same share. (Samba smbd often has issues with file-system boundaries
so I create a share out of each filesystem and share them individually,
then mount them as above).
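(For illustration, the kind of entry I mean looks roughly like this; the
host and share names are of course made up:

    nas  -fstype=cifs,ro \
         /photos  ://server/photos \
         /video   ://server/video

i.e. one key with several offset mounts below it, one per
filesystem/share.)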
Normally ( /bla/bla ) indicates an absolute path and can only be used in
a direct map, from what I've seen. You *could* cause it to turn this
"indirect map" into a direct one.
i.e. ( bla/bla ) == indirect
( /bla/bla ) == direct.
In the Sun map format, indirect mount map keys are a single path
component only.
The situation is different if using amd format maps, but I couldn't work
out how direct mount maps could be usefully used, so that mount type has
not been implemented. Even on Solaris, where they are supposed to be
fully supported, I couldn't make sense of them.
/start/of/the/- ( path )
Then any path in the master file ending with /- would be a direct
path.
And would cause the (static) map file to be treated as direct.
Nope, that is unlikely to work and would lead to a lot of problems.
One example might be with:
/start/of/the/-/path
people will ask where have the directories below /start/of/the/ gone
and the
answer will be they have been covered by the autofs file system mount.
But that's no different from a regular mount on that location.
Next it would be "but I should be able to see these, surely that's easy
to fix".
At that point I would probably leave the conversation.
:-). Haha. But come on, you can't take that seriously, the same thing
happens with a regular mount, even a bind mount does that. It's
absolutely not possible to reveal the contents of some subtree when
another filesystem has been mounted on top of it, except by remounting
the entire subtree elsewhere.
(I am trying out some gimmick that requires me to mount the root device
under something like /run/mount/rootfsclone, because I am bind mounting
on top of something that I still want to be accessible. The only way to
make it accessible is to mount the entire rootfs somewhere else. I
basically need a persistent location for individual devices without
anything mounted on top of them; ie. a pure one-file-system mount tree.)
Then you have those individual mount-trees (filesystem trees) and THEN
you have the complete FHS that you normally use.
That seems like the only sensible way to approach a lot of issues.
Having a /mounts/ with every (custom) mounted device present there.
Recently the number of incomprehensible mounts reported by "mount" has
grown so big that the entire tool has become unusable, mostly due to
cgroups. And tmpfs's, and debugfs's, and what not.
I now find I need to create a wrapper for mount so that "mount" stays at
least usable to the user. Such that every subdirectory of /sys, and
every subdirectory of /run, and every subdirectory of /dev/, is actually
hidden from display. And from /proc as well. And the same applies to df.
(I really need to find a way to better save and redistribute these
modifications).
In any case a similar thing, but more explicit and I think easier to
follow and
maintain are the autofs multi-mount map entries, what I think you
called
relative path above.
It isn't clear but that syntax should function within both direct and
indirect
mount maps.
The problem with all this is that substitution cannot be used in the
key.
The reason for that is same as why the amd regex key matching has not
been
implemented.
A linear map scan would be needed for every lookup in order to match
what has
been sent to us from the kernel. And as much as you might feel that
wouldn't be
a problem because maps are generally small I can assure you it would
make life
really unpleasant for me!
:). You would have lots of enemies if you didn't do things right :P. LOL.
I am not thinking about this now, but I guess you mean that my proposed
solution of saving those static lists (or dynamic lists) would actually
help this :p.
Actually what you are saying is that even doing lookups on those lists I
proposed would be quite annoying.
However currently this lookup is already done on the server (if it is
some database) or in the static map (if they are not browseable). Right?
Databases are generally better at lookup than small-scale applications.
Now you need hash-tables etc. Anyway, I can't think about it right now.
I only noticed just now because autofs(8) references autofs(5), and
I'm
like ....huh?
Yeah, that's what we have ....
/media/*/nas /etc/nas-shares.map
Can't do that, each /media/*/nas needs to be mounted at program start
so that
automounts can be done?
Actually the shell easily expands such a pattern to all matching trees?
e.g. if you create /usr/local/doc, and then do
"ls -d /usr/*/doc" you will get the output of:
/usr/local/doc /usr/share/doc
So with that output, you mount all of them, all the while knowing that
the middle component (here) is a wildcard. Then the daemon knows about
this wildcard and can use it while processing /etc/nas-shares.map for
each of them; but they are all individual mounts at that point.
e.g. /media/me/nas and /media/pamela/nas are now individual autofs
mounts.
but their "/etc/nas-shares.map" is called with a different wildcard.
I mean that either /media/me/nas will already exist (precreated by the
user) or your program creates it (I think it already does that).
Actually it cannot create it DUH, so the user will need to precreate
them.
I know /media is usually seen as a managed directory but this is
annoying in the first place. Such an important name (/media) and I
cannot do anything with it because it is already taken. At the same time
accessing it from your home directory is usually quite annoying. So in
order to make sense of it I must precreate /media/xen and
/media/dannielle and prevent the daemon (that /media thing) from wiping
them.... buh.
The funny thing is that /media/xen is not even owned by xen.
The subfolders are, but the thing itself isn't (how stupid?).
So in this case perhaps not as convenient, but in the general case if I
precreate:
/mytree/me/nas
/mytree/quinty/nas
etc.
Then the daemon could read
/mytree/*/nas
on program start and see all the shares that match. All the directories
that match.
Then it knows that "me" and "quinty" are wildcard entries for their
respective mounts.
> By definition a program map can't know what keys may be passed to it.
> And, yes, even autofs(5) doesn't say explicitly that a program map
> doesn't know
> what keys may be passed to it, perhaps that should be amended.
In principle even a program map could read all of the available keys
from whatever database it consults, and return all of them; or return
something only (as it does now, perhaps) when a given key matches (is
found).
I've thought of that too.
But I've resisted because I'm concerned about confusing existing
program maps
people have in unpleasant ways.
At some point you need to be willing to break previous compatibility.
I understand that Linux is usually a running-upgrade system and even if
people do upgrade entire distributions they want the system to keep
functioning as normal without changing anything.
But e.g. the Debian update program can tell you stuff like "this package
does something different now, and you need to ensure your scripts can
either handle or ignore the wildcard" on every package installation.
That's not a bad thought, but what would $1 be so that older scripts
wouldn't
choke on it.
I had suggested * because there is no hostname called *.
But scripts may very well mount local filesystems as well. And they
might very well use a shell to do stuff like "[ -d $key ]" or "[ -d
"$key" ]" and....
And as always, stuff breaks a lot when people don't use "$key". If
people's scripts are badly written then [ -d $key ] will malfunction.
In order to cover that you would have to use something like "_", no one
is going to use _ as an actual filename.
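(The quoting hazard I mean, as a minimal illustration:

    key="*"
    [ -d $key ]      # unquoted: the shell glob-expands "*" first
    [ -d "$key" ]    # quoted: tests for a directory literally named "*"

so a badly written script fed "*" may do something quite different from
what its author expected.)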
So, for instance, if $2 is "browse" the script may choose to return a
browse map, or list of shares, with whatever options are available (it
could even return the UID/USER of the share given options, so that the
created directory would have the same UID as the to be mounted share!)
-- meaning it might simply return ALL entries it could possibly return
in the default format, which is the meaning of the wildcard, and the
program (autofs daemon or module(?)) would be able to generate the
appropriate directory entries based on that. So in that case you're
probably going to be rid of the problem of the non-matching state of
pre- and post-mount of directories.
which pretty much defeats the only possible solution I can think of for
"browse"
option mounts ...
You mean the one suggested above. But if you make an alternative readdir
having a list available won't bug you right?
I mean, having mount information available in advance cannot be a
problem, only a help.
Like I said, I wanted to have the default as "browse" but got hit with
too many
complaints and was only willing to change the package installed default
configuration to satisfy them.
I'm not going to try that again.
"Design by committee" they call it ;-).
Someone told me that OpenSUSE has had "browse" as the default for at
least 5 years.
Did you think about making a distinction between static and LDAP/NIS
maps? I'm not sure I'm not crossing my bounds here now ;-). Seeing that
I already suggested it.
In this way no one can ever get confused, so you might just rename
autofs(5) to autofs.map(5).
Yeah maybe ...
I will look into providing a patch, but I hope you can rename autofs(5)
to autofs.map(5) before doing anything like that. I don't feel like
improving a system I myself haven't had access to, in that sense. I
don't mean you have to do that straight away, but if that is a possible
direction to go, or change to make, then I could supply a patch or
anything based on the system that would result from that, seeing that
autofs(5) is already pretty good in that sense.
At some point I'll need to return to this and make a list.
Unfortunately that time isn't now.
Here is just my short summary:
- suggested compromise between browse and nobrowse by only making browse
for static lists the default to please new people unaware of what will
happen
- suggested perhaps using _ as the wildcard key, not using that "browse"
flag as $2, but allowing a different "primary" key as $2 if the script
wants it.
- suggested changing autofs(5) to autofs.map(5) such that users will be
able to find it.
And the rest is up to you, I think you don't need anyone rewriting it
for you now, but I could still try my hand if that was necessary. I just
think using better terminology is dependent on the first two suggestions
here, particularly perhaps the second, and otherwise you'll have to
explain current terminology better and create some way to separate "key"
in auto.master from "key" in "autofs.map" ;-).
However if scripts could accept an additional "key" and the default
operation or capability would be for them to return a "list", then it
would become clearer that the additional "key" would be the index to
retrieve that list, and that the entries resulting from that may have
their own "keys", but these "keys" are then used for the individual
"mount points".
So then you have reason to call the entry keys "entry keys" and the
other ones "lookup keys" for instance. Then, static maps use "entry
keys" but scripts have "lookup keys" as their 2nd parameter. The script
then has "entry key" as the first, and "lookup key" as the second
parameter. Then, entry key is equivalent to subdirectory, and lookup key
is often going to be equivalent to e.g. hostname.
A script without list functionality only accepts "entry keys" but may
very well use them as lookup keys as well.
A database also uses them as equivalent. But at least now you have
something of a model to conceptually separate them. Since browse already
functions for e.g. LDAP, in many cases you will not use a lookup key for
that, simply because (a) /etc/autofs_ldap_auth.conf already contains
such parameters and (b) the other parameters are specified in the
mapname format for LDAP.
So what we call "lookup key" would already be encoded in the LDAP URL
but such a thing is not possible for scripts.
So you could call scripts the exception to the rule in not having this.
Of course it exists: the "-D" parameters.
(I think? Haven't tested it yet).
So I don't know.
For a database the entry key is always going to be a lookup key to the
database.
Entry keys can also result from a list fetch.
So they can result from filesystem access or from some list retrieval.
But if "browse" becomes more normal and standard, you would see that it
becomes easier to talk about this "entries from the list you get when
quering the database/script for it".
Then the gap between "entries from the list" and "subdirectories
created" is smaller than the gap between "filesystem access" and
"subdirectories created".
Meaning: key (database designator) -> list -> entries (database entries)
is conceptually easier to grasp than key (filesystem access) -> entry
(database entry).
Or key (filesystem access) -> database lookup -> entry.
Simply because it is hard to understand that user action (filesystem
lookup) is the input to the entire system and actually provides the keys
being used.
Even if it's not always to be used, having a level in between that ties
"key" to "pre-existing shares" or "pre-existing entries" becomes easier.
When a user thinks of "key" they think about either existing shares or
existing hosts.
Then conceptually it is logical to assume that the script or database
can give back this list. And that either this list is precreated on the
harddrive, or at least that it is used to match the "keys" against, for
instance.
(By harddrive I mean filesystem (hierarchy)).
This is why I think standard operation for at least smaller lists should
be a form of browse or at least a form of list retrieval, if you are
going to have dynamically generated lists in any case.
Then it conforms more to the sense a general computer user is going to
have of the idea of auto-mounting mount points. He or she thinks those
mount points already exist, prior to being automounted.
This in turn makes it easier to explain that the key is really a
subdirectory, and that there can be another key if needed to specify the
host, or something of the kind.
Then, the most common use case for a new or general computer user
wanting to try this is going to be met, because that use case is going
to involve auto-mounting network-shares of which there is a limited
number.
And it will involve something like
/nas/media
/nas/backup
/nas/home
etc.
Now I'm not sure if it covers the entire use. But at least you'd also
have a script that can generate a static map for you that you can then
use in place of the scripts, if you want to prune the mounts that are
actually visible to you.
Apart from the distributions themselves there is not really any software
in the Linux world that is tailored to either the desktop user, or the
server user, but not both, is there?
Apart from the desktop environments.
Anyway this was a long read and write, I guess, for both of us :P.
Time to get some food again or at least some water :p.
See ya.