Hello.

On Thu, Jan 09, 2020 at 11:44:15AM -0800, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> I looked at it - there wasn't really any compelling followup.

FTR, I noticed udevd consuming non-negligible CPU cycles when doing some
cgroup stress testing. And even extrapolating to less artificial
situations, the udev events seem to cause useless tickling of udevd.

I used the simple script below:

cat >measure.sh <<'EOD'
sample() {
	local n=$(echo | awk "END {print int(40/$1)}")
	for i in $(seq $n) ; do
		mkdir /sys/fs/cgroup/memory/grp1
		echo 0 >/sys/fs/cgroup/memory/grp1/cgroup.procs
		/usr/bin/sleep $1
		echo 0 >/sys/fs/cgroup/memory/cgroup.procs
		rmdir /sys/fs/cgroup/memory/grp1
	done
}

for d in 0.004 0.008 0.016 0.032 0.064 0.128 0.256 0.5 1 ; do
	echo 0 >/sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
	time sample $d 2>&1 | grep real
	echo -n "udev "
	cat /sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
done
EOD

and I drew the following ballpark conclusion:

	1.7% CPU time at 1 event/s -> 60 events/s at 100% CPU

(The event is one mkdir/migrate/rmdir sequence. Numbers are from a dummy
test VM, so take with a grain of salt.)

> If this change should be pursued then can we please have a formal
> resend?

Who's supposed to do that?

Regards,
Michal
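(For the record, the extrapolation above is just linear: if one event per
second costs ~1.7% of a CPU in udevd, saturation comes at roughly
100/1.7 ~ 59 events/s. A one-line awk sanity check of that arithmetic,
not part of the original measurement:)

```shell
awk 'BEGIN {
	per_event = 1.7    # % CPU consumed by udevd at 1 event/s (measured above)
	printf "saturation at ~%.0f events/s\n", 100 / per_event
}'
```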