On Tue, Nov 28, 2017 at 07:43:05AM +0800, Ian Kent wrote:
> I think the situation is going to get worse before it gets better.
>
> On recent Fedora and kernel, with a large map and heavy mount activity
> I see:
>
> systemd, udisksd, gvfs-udisks2-volume-monitor, gvfsd-trash,
> gnome-settings-daemon, packagekitd and gnome-shell
>
> all go crazy consuming large amounts of CPU.

Yep. I'm not even that worried about the CPU usage yet, though I'm sure
it'll become more of a problem as time goes on. We have pretty huge
direct maps, and our initial startup tests on a new host with the
symlink (vs. file) change took >6 hours. That's not a typo. We worked
with SUSE engineering to come up with a fix, which should've been
pushed here some time ago.

Then there are shutdowns (and reboots). They also took a long time (on
the order of 20+ minutes) because the entire /proc/mounts was walked
"unmounting" things. Also fixed now. That one had something to do with
the SMP code: with a single CPU/core it didn't take long at all.

We also just got a fix for the SUSE grub2-mkconfig script: its parsing
that looks for the root device (the probe_nfsroot_device function) now
skips over entries of fstype autofs. A rough sketch of the idea is at
the end of this mail.

> The symlink change was probably the start; now a number of
> applications go directly to the proc filesystem for this information.
>
> For large mount tables and many processes accessing the mount table
> (probably reading the whole thing, either periodically or on change
> notification) the current system does not scale well at all.

We use ClearCase in some instances as well, which is yet another thing
adding mounts, and its startup is very slow due to the size of
/proc/mounts. It's definitely more than just an autofs problem, and
it's probably going to get worse, as you say.

--
Mike Marion - Unix SysAdmin/Sr. Staff IT Engineer - http://www.qualcomm.com
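P.S. For anyone curious, the gist of that grub2-mkconfig change is
below. The real probe_nfsroot_device is a shell function in SUSE's
script; this is only a hedged C rendering of the idea, and everything
except the function and file names mentioned above is illustrative:

    /* Sketch only: when scanning the mount table for the root device,
     * skip autofs trigger mounts so a direct-map trigger entry isn't
     * mistaken for the real filesystem. */
    #include <mntent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            FILE *mtab = setmntent("/proc/self/mounts", "r");
            struct mntent *ent;

            if (!mtab) {
                    perror("setmntent");
                    return 1;
            }

            while ((ent = getmntent(mtab)) != NULL) {
                    /* The fix: ignore autofs entries entirely. */
                    if (strcmp(ent->mnt_type, "autofs") == 0)
                            continue;

                    /* First non-autofs entry on "/" is the root dev. */
                    if (strcmp(ent->mnt_dir, "/") == 0) {
                            printf("root device: %s (fstype %s)\n",
                                   ent->mnt_fsname, ent->mnt_type);
                            break;
                    }
            }

            endmntent(mtab);
            return 0;
    }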
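And to make the scaling point in your last paragraph concrete: the only
change notification the kernel gives for the mount table is poll()
flagging an exceptional condition on /proc/self/mounts (see proc(5));
there's no way to read just the delta. So every watcher re-reads and
re-parses the whole table on every mount/umount, and the total work
grows with the number of watchers times the number of mounts times the
churn rate. A minimal sketch of what each of those daemons effectively
ends up doing:

    /* Sketch only: wait for mount table changes, then re-read the
     * whole file, because no incremental interface exists. */
    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            static char buf[65536];
            int fd = open("/proc/self/mounts", O_RDONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            for (;;) {
                    struct pollfd pfd = { .fd = fd, .events = POLLPRI };

                    /* Blocks until the kernel signals a mount table
                     * change (reported as POLLERR|POLLPRI here). */
                    if (poll(&pfd, 1, -1) < 0) {
                            perror("poll");
                            break;
                    }

                    /* No delta available: rewind, re-read everything. */
                    lseek(fd, 0, SEEK_SET);
                    while (read(fd, buf, sizeof(buf)) > 0)
                            ; /* a real consumer re-parses every entry */

                    printf("mount table changed; re-read whole file\n");
            }

            close(fd);
            return 0;
    }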