Re: [PATCH v3 0/3] Binder: Enable App Freezing Capability

On Wed, Mar 17, 2021 at 7:00 PM Christian Brauner
<christian.brauner@xxxxxxxxxx> wrote:
> On Mon, Mar 15, 2021 at 06:16:27PM -0700, Li Li wrote:
> > To improve the user experience when switching between recently used
> > applications, background applications that are not currently needed
> > are kept cached in memory. Normally, a well-designed application will
> > not consume valuable CPU resources in the background. However, some
> > applications may be unable or unwilling to behave as expected, wasting
> > energy even after being cached.
> >
> > It is a good idea to freeze those applications when they are only being
> > kept alive for the sake of faster startup and energy saving. These kernel
> > patches provide the necessary infrastructure for the user-space framework
> > to freeze and thaw a cached process, check the current freezing status,
> > and correctly deal with outstanding binder transactions to frozen
> > processes.

I just have some comments on the overall design:

This seems a bit convoluted to me, and I'm not sure whether this is
really something the kernel should get involved in, or whether this
patchset is operating at the right layer.

If there are non-binder threads that are misbehaving, could you
instead stop all those threads in pure userspace code (e.g. by sending
a thread-directed signal to all of them and letting the signal handler
sleep on a futex), and then wake those threads up again when a binder
thread receives a transaction that should be handled?
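Something like this minimal, untested sketch is what I have in mind;
the names (park_word, park_thread, unpark_all) and the choice of
SIGRTMIN are only for illustration:

#define _GNU_SOURCE
#include <limits.h>
#include <linux/futex.h>
#include <pthread.h>
#include <signal.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

static atomic_int park_word;                    /* 0 = run, 1 = parked */

static long park_futex(int op, int val)
{
        return syscall(SYS_futex, &park_word, op, val, NULL, NULL, 0);
}

/* Runs in each parked thread, entered via a thread-directed signal. */
static void park_handler(int sig)
{
        (void)sig;
        while (atomic_load(&park_word) == 1)
                park_futex(FUTEX_WAIT, 1);      /* EINTR just re-checks */
}

void install_park_handler(void)
{
        struct sigaction sa = { .sa_handler = park_handler };

        sigemptyset(&sa.sa_mask);
        sigaction(SIGRTMIN, &sa, NULL);
}

/* Called by the userspace "freezer" for each misbehaving non-binder thread. */
void park_thread(pthread_t t)
{
        atomic_store(&park_word, 1);
        pthread_kill(t, SIGRTMIN);
}

/* Called when a binder thread receives a transaction that needs handling. */
void unpark_all(void)
{
        atomic_store(&park_word, 0);
        park_futex(FUTEX_WAKE, INT_MAX);
}

The binder thread that receives work would call unpark_all() before
dispatching, and the framework would decide which threads count as
"non-binder" when calling park_thread().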

Alternatively, could you detect that the application is being woken
up frequently even though it's supposed to be idle (e.g. using
information from procfs), and kill it on the grounds that it is
misbehaving?
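For the procfs variant, a rough sketch (untested; /proc/<pid>/schedstat
is just one possible signal, and the budget/window values are made up):

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* First field of /proc/<pid>/schedstat: time spent on-CPU, in nanoseconds. */
static int read_cpu_ns(pid_t pid, unsigned long long *cpu_ns)
{
        char path[64];
        FILE *f;
        int ok;

        snprintf(path, sizeof(path), "/proc/%d/schedstat", (int)pid);
        f = fopen(path, "r");
        if (!f)
                return -1;
        ok = (fscanf(f, "%llu", cpu_ns) == 1);
        fclose(f);
        return ok ? 0 : -1;
}

/* Kill a supposedly idle process if it burns more than budget_ns of CPU
 * time within a window_s second observation window; returns 1 if killed. */
int kill_if_busy(pid_t pid, unsigned long long budget_ns, unsigned int window_s)
{
        unsigned long long before, after;

        if (read_cpu_ns(pid, &before))
                return 0;
        sleep(window_s);
        if (read_cpu_ns(pid, &after))
                return 0;
        if (after - before > budget_ns) {
                kill(pid, SIGKILL);
                return 1;
        }
        return 0;
}

A policy daemon could periodically run this against cached apps, e.g.
kill_if_busy(pid, 100ULL * 1000 * 1000 /* 100 ms */, 60).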

Or, if there are specific usage patterns that you see frequently and
consider to be wasting CPU resources (e.g. setting an interval timer
that fires at short intervals), you could try to delay such timers.
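One existing knob that roughly fits here is timer slack: since Linux
4.6 there is /proc/<pid>/timerslack_ns, which a sufficiently privileged
service (CAP_SYS_NICE over the target, if I remember correctly) can use
to raise another process's timer slack so that hrtimer-based sleeps
(nanosleep, poll/select/epoll, futex waits) get coalesced and delayed.
I'm not sure it reaches every interval-timer path, so treat this as an
untested sketch with an illustrative 60-second value:

#include <stdio.h>
#include <sys/types.h>

/* Raise another process's timer slack via /proc/<pid>/timerslack_ns. */
int set_timer_slack_ns(pid_t pid, unsigned long long slack_ns)
{
        char path[64];
        FILE *f;
        int ok;

        snprintf(path, sizeof(path), "/proc/%d/timerslack_ns", (int)pid);
        f = fopen(path, "w");
        if (!f)
                return -1;
        ok = (fprintf(f, "%llu\n", slack_ns) > 0);
        return (fclose(f) == 0 && ok) ? 0 : -1;
}

/* e.g. set_timer_slack_ns(cached_app_pid, 60ULL * 1000 * 1000 * 1000); */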


With your current approach, you're baking into the kernel API the
assumption that all IPC goes through binder; things like passing a
file descriptor to a pipe through binder, or using shared futexes, are
no longer usable for cross-process communication without further
kernel changes. I'm not sure whether that's a good idea. On top of
that, if you freeze a process while it is in the middle of some
operation, the resources associated with that operation will probably
stay in use for quite some time; for example, if an app is in the
middle of downloading data over HTTP and you freeze it, the TCP
connection may remain active and keep consuming send/receive buffers
on both the device and the server.