On Wed, Apr 1, 2020 at 10:27 AM David Howells <dhowells@xxxxxxxxxx> wrote:
>
> Miklos Szeredi <miklos@xxxxxxxxxx> wrote:
>
> > According to dhowell's measurements processing 100k mounts would take
> > about a few seconds of system time (that's the time spent by the
> > kernel to retrieve the data,
>
> But the inefficiency of mountfs - at least as currently implemented - scales
> up with the number of individual values you want to retrieve, both in terms of
> memory usage and time taken.

I've taken that into account when guesstimating a "few seconds per
100k entries".  My guess is that there's probably an order of
magnitude difference between the performance of an fs-based interface
and a binary, syscall-based interface.  That could be reduced somewhat
with a readfile(2) type API (see the first sketch at the end of this
mail).

But the point is: this does not matter.  Whether it's .5s or 5s is
completely irrelevant, as neither is going to take down the system,
and userspace processing is probably going to take as much time, if
not more.

And remember, we are talking about stopping and starting the
automount daemon, which is something that happens, but should not
happen often by any measure.

> With fsinfo(), I've tried to batch values together where it makes sense - and
> there's no lingering memory overhead - no extra inodes, dentries and files
> required.

The dentries, inodes and files in your test are single use (except the
root dentry) and can be made ephemeral if that turns out to be better.
My guess is that dentries belonging to individual attributes should be
deleted on final put, while the dentries belonging to the mount
directory can be reclaimed normally (see the second sketch at the end
of this mail).

Thanks,
Miklos
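
P.S.  To make the per-attribute cost concrete, here is a rough
userspace sketch (the "/mountfs/<id>/<attr>" path layout and the
helper name are made up for illustration, not the actual mountfs
layout): with a filesystem interface each value costs at least an
open() + read() + close(), which is exactly what a readfile(2)-type
call would collapse into a single syscall per attribute.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Illustration only: fetch one attribute of one mount.  Three
 * syscalls per value; multiply by the number of attributes and the
 * number of mounts to get the total syscall count.
 */
static ssize_t read_mount_attr(const char *mnt_id, const char *attr,
			       char *buf, size_t bufsize)
{
	char path[256];
	ssize_t n;
	int fd;

	snprintf(path, sizeof(path), "/mountfs/%s/%s", mnt_id, attr);

	fd = open(path, O_RDONLY);
	if (fd == -1)
		return -1;

	n = read(fd, buf, bufsize - 1);
	close(fd);
	if (n < 0)
		return -1;

	buf[n] = '\0';		/* attribute values are text */
	return n;
}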
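
And a rough sketch of the "ephemeral attribute dentry" idea (untested,
and mountfs_attr_get_inode() is a hypothetical helper, not the actual
mountfs code): attribute dentries get simple_dentry_operations, whose
->d_delete op (always_delete_dentry) makes them go away on the final
dput(), while the per-mount directory dentry keeps the default
behaviour and is reclaimed under memory pressure as usual.

#include <linux/dcache.h>
#include <linux/err.h>
#include <linux/fs.h>

/* Hypothetical helper that creates or looks up the attribute inode. */
static struct inode *mountfs_attr_get_inode(struct inode *dir,
					    struct dentry *dentry);

static struct dentry *mountfs_attr_lookup(struct inode *dir,
					  struct dentry *dentry,
					  unsigned int flags)
{
	struct inode *inode;

	inode = mountfs_attr_get_inode(dir, dentry);
	if (IS_ERR(inode))
		return ERR_CAST(inode);

	/*
	 * simple_dentry_operations sets ->d_delete to
	 * always_delete_dentry(), so this dentry is unhashed and freed
	 * on the final dput() instead of lingering in the dcache.
	 */
	d_set_d_op(dentry, &simple_dentry_operations);

	return d_splice_alias(inode, dentry);
}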