On Wed, Mar 17, 2021 at 1:17 PM Jann Horn <jannh@xxxxxxxxxx> wrote:
>
> On Wed, Mar 17, 2021 at 7:00 PM Christian Brauner
> <christian.brauner@xxxxxxxxxx> wrote:
> > On Mon, Mar 15, 2021 at 06:16:27PM -0700, Li Li wrote:
> > > To improve the user experience when switching between recently used
> > > applications, the background applications which are not currently needed
> > > are cached in the memory. Normally, a well designed application will not
> > > consume valuable CPU resources in the background. However, it's possible
> > > some applications are not able or willing to behave as expected, wasting
> > > energy even after being cached.
> > >
> > > It is a good idea to freeze those applications when they're only being
> > > kept alive for the sake of faster startup and energy saving. These kernel
> > > patches will provide the necessary infrastructure for user space framework
> > > to freeze and thaw a cached process, check the current freezing status and
> > > correctly deal with outstanding binder transactions to frozen processes.
>
> I just have some comments on the overall design:
>
> This seems a bit convoluted to me; and I'm not sure whether this is
> really something the kernel should get involved in, or whether this
> patchset is operating at the right layer.

The issue is that there is lots of per-process state in the binder driver
that needs to be quiesced prior to freezing the process (using the
standard freeze mechanism of
Documentation/admin-guide/cgroup-v1/freezer-subsystem.rst). That's all
this series does... it quiesces binder state prior to freeze and then
re-enables transactions when the process is thawed.

> If there are non-binder threads that are misbehaving, could you
> instead stop all those threads in pure userspace code (e.g. by sending
> a thread-directed signal to all of them and letting the signal handler
> sleep on a futex); and if the binder thread receives a transaction
> that should be handled, wake up those threads again?

It is not an issue of stopping threads. It's an issue of quiescing binder
for a process so clients aren't blocked waiting for a response from a
frozen process that may never handle the transaction. This series causes
the soon-to-be-frozen process to reject new transactions and allows user
space to detect when the transactions have drained from the queues prior
to freezing the process.

> Or alternatively you could detect that the application is being woken
> up frequently even though it's supposed to be idle (e.g. using
> information from procfs), and kill it since you consider it to be
> misbehaving?
>
> Or if there are specific usage patterns you see frequently that you
> consider to be wasting CPU resources (e.g. setting an interval timer
> that fires in short intervals), you could try to delay such timers.
>
> With your current approach, you're baking the assumption that all IPC
> goes through binder into the kernel API; things like passing a file
> descriptor to a pipe through binder or using shared futexes are no
> longer usable for cross-process communication without making more
> kernel changes. I'm not sure whether that's a good idea.

No, we're dealing with an issue that is particular to binder IPC when
freezing a process. I suspect that other IPC mechanisms do not have this
issue -- and if any do for Android, then they would need equivalent
pre-freeze/post-freeze mechanisms. So far in the testing of freezing in
Android R, there haven't been issues with pipes or futexes that required
this kind of explicit quiescing (at least none that I know of --
dualli@, please comment if there have been these kinds of issues).

> On top of that, if you freeze a process while it is in the middle of
> some operation, resources associated with the operation will probably
> stay in use for quite some time; for example, if an app is in the middle
> of downloading some data over HTTP, and you freeze it, this may cause
> the TCP connection to remain active and consume resources for
> send/receive buffers on both the device and the server.
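To make the freeze/thaw sequence discussed above concrete, here is a rough
sketch of how a user-space framework daemon might drive it. This is my
reading of the series, not something the patches themselves ship: the
struct binder_freeze_info layout, the BINDER_FREEZE ioctl, its error
behavior on a timed-out drain, and the exact ordering against the cgroup-v1
freezer are all assumptions.

/*
 * Hedged sketch (not part of this series): quiesce binder before freezing
 * an app's cgroup, and re-enable transactions after thawing it.
 * BINDER_FREEZE, struct binder_freeze_info and the EAGAIN-on-timeout
 * convention are assumptions based on the proposed uapi.
 */
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/android/binder.h>   /* assumed to carry the new uapi */

static int write_freezer_state(const char *cgroup, const char *state)
{
	char path[PATH_MAX];
	int fd, ret = 0;

	snprintf(path, sizeof(path), "%s/freezer.state", cgroup);
	fd = open(path, O_WRONLY | O_CLOEXEC);
	if (fd < 0)
		return -errno;
	if (write(fd, state, strlen(state)) < 0)
		ret = -errno;
	close(fd);
	return ret;
}

/* Quiesce binder for @pid, then freeze its cgroup.
 * @binder_fd: an open binder fd used only to reach the driver (assumption).
 */
static int freeze_app(int binder_fd, __u32 pid, const char *cgroup)
{
	struct binder_freeze_info info = {
		.pid = pid,
		.enable = 1,
		.timeout_ms = 100,	/* let outstanding transactions drain */
	};

	/*
	 * From here on the target rejects new incoming transactions; the
	 * ioctl is expected to fail (e.g. EAGAIN) if existing transactions
	 * did not drain in time, so the caller can retry or skip this app.
	 */
	if (ioctl(binder_fd, BINDER_FREEZE, &info) < 0)
		return -errno;

	return write_freezer_state(cgroup, "FROZEN");
}

/* Thaw the cgroup, then let binder accept transactions again. */
static int thaw_app(int binder_fd, __u32 pid, const char *cgroup)
{
	struct binder_freeze_info info = { .pid = pid, .enable = 0 };
	int ret = write_freezer_state(cgroup, "THAWED");

	if (ret)
		return ret;
	return ioctl(binder_fd, BINDER_FREEZE, &info) < 0 ? -errno : 0;
}

The property that matters is the ordering: binder is told to reject and
drain transactions before the freezer actually stops the threads, so a
client never ends up blocked on a process that cannot run until it is
thawed.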
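And a similarly hedged sketch of the other half mentioned in the cover
letter, "correctly deal with outstanding binder transactions to frozen
processes": checking whether anyone has tried to reach an already-frozen
app. It assumes a BINDER_GET_FROZEN_INFO ioctl taking a struct
binder_frozen_status_info with sync_recv/async_recv fields; those names
are my guess at the uapi and may not match this revision of the series.

/*
 * Hedged sketch (not part of this series): poll whether a frozen app has
 * pending callers so the framework can decide to thaw or kill it.
 */
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>   /* assumed to carry the new uapi */

static int frozen_app_has_pending_callers(int binder_fd, __u32 pid)
{
	struct binder_frozen_status_info info = { .pid = pid };

	if (ioctl(binder_fd, BINDER_GET_FROZEN_INFO, &info) < 0)
		return -errno;

	/*
	 * sync_recv: someone attempted a synchronous transaction to the
	 * frozen process (which the driver rejected); async_recv: an async
	 * transaction was queued for it. Either one suggests the framework
	 * should thaw the app or tell the caller the transaction failed.
	 */
	return info.sync_recv || info.async_recv;
}

The framework would presumably call something like this when a client
reports a failed transaction to a frozen peer, or periodically for cached
apps, keeping all of the actual policy in user space.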