On Wed, 2020-04-01 at 10:37 +0200, Miklos Szeredi wrote:
> On Wed, Apr 1, 2020 at 10:27 AM David Howells <dhowells@xxxxxxxxxx> wrote:
> > Miklos Szeredi <miklos@xxxxxxxxxx> wrote:
> >
> > > According to dhowell's measurements processing 100k mounts would
> > > take about a few seconds of system time (that's the time spent by
> > > the kernel to retrieve the data,
> >
> > But the inefficiency of mountfs - at least as currently implemented -
> > scales up with the number of individual values you want to retrieve,
> > both in terms of memory usage and time taken.
>
> I've taken that into account when guesstimating a "few seconds per
> 100k entries".  My guess is that there's probably an order of
> magnitude difference between the performance of a fs based interface
> and a binary syscall based interface.  That could be reduced somewhat
> with a readfile(2) type API.
>
> But the point is: this does not matter.  Whether it's .5s or 5s is
> completely irrelevant, as neither is going to take down the system,
> and userspace processing is probably going to take as much, if not
> more time.  And remember, we are talking about stopping and starting
> the automount daemon, which is something that happens, but it should
> not happen often by any measure.

Yes, but don't forget, I'm reporting what I saw when testing during
development.

From previous discussion we know systemd (and probably other apps such
as udisks2, et al.) gets notified on mount and umount activity, so it's
not going to be just starting and stopping autofs that's a problem with
very large mount tables.

To get a feel for the real difference we'd need to make the libmount
changes for both interfaces and then compare the two and check their
behaviour. The mount and umount lookup case that Karel (and I) talked
about should be sufficient (a minimal sketch of what I mean by that is
at the end of this mail).

The biggest problem I had with fsinfo() when I was working with the
earlier series was getting fs-specific options, in particular the need
to use the sb op ->fsinfo(). With this latest series David has made
that part of the generic code, and your patch also covers it.

So the thing that was holding me up is done, and we should be getting
on with the libmount improvements; we need to settle this.

I prefer the system call interface, and I'm not offering justification
for that other than a general dislike (and on occasion outright
frustration) of pretty much every proc implementation I have had to
look at.

> > With fsinfo(), I've tried to batch values together where it makes
> > sense - and there's no lingering memory overhead - no extra inodes,
> > dentries and files required.
>
> The dentries, inodes and files in your test are single use (except the
> root dentry) and can be made ephemeral if that turns out to be better.
> My guess is that dentries belonging to individual attributes should be
> deleted on final put, while the dentries belonging to the mount
> directory can be reclaimed normally.
>
> Thanks,
> Miklos
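
For concreteness, this is roughly the lookup case I have in mind: a
minimal userspace sketch using the existing libmount table API, which
today parses the whole mount table just to resolve a single target.
It's illustrative only (error handling trimmed); the question in this
thread is what such a lookup would sit on top of.

/* Build with: cc lookup.c $(pkg-config --cflags --libs mount) */
#include <stdio.h>
#include <libmount.h>

int main(int argc, char *argv[])
{
	const char *target = argc > 1 ? argv[1] : "/";
	struct libmnt_table *tb;
	struct libmnt_fs *fs;

	tb = mnt_new_table();
	if (!tb)
		return 1;

	/* Parses the whole mount table even though we want one entry. */
	if (mnt_table_parse_mtab(tb, NULL) != 0) {
		mnt_unref_table(tb);
		return 1;
	}

	fs = mnt_table_find_target(tb, target, MNT_ITER_BACKWARD);
	if (fs)
		printf("%s is %s (%s)\n", target,
		       mnt_fs_get_source(fs), mnt_fs_get_fstype(fs));

	mnt_unref_table(tb);
	return 0;
}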
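
And to illustrate the per-value cost being traded above: with a
file-per-attribute interface each value costs an open/read/close round
trip, which is what a readfile(2) type call would fold into a single
syscall per attribute. The attribute path below is made up, purely for
illustration of the pattern.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* One attribute read = three syscalls with a file-based interface. */
static ssize_t read_attr(const char *path, char *buf, size_t size)
{
	int fd = open(path, O_RDONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = read(fd, buf, size - 1);
	close(fd);
	if (n >= 0)
		buf[n] = '\0';
	return n;
}

int main(void)
{
	char buf[256];

	/* Hypothetical per-mount attribute path, for illustration only. */
	if (read_attr("/mountfs/21/options", buf, sizeof(buf)) > 0)
		printf("options: %s", buf);
	return 0;
}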