On Thu, 2 Jun 2016, Lennart Poettering wrote:
> Well. Let's say you are responsible for the Linux desktops of a large security-sensitive company (say a bank), and the desktops are installed as fixed workstations, with different employees using them at different times. They log in, they do some "important company stuff", and then they log out again. Now, it's a large company, so it doesn't keep close control over every single employee, and sometimes employees leave the company. Sometimes employees even browse to the wrong web sites, catch a browser exploit and suddenly start running spam bots under their user identity, without even knowing.
This has nothing to do with individual processes on machines. If you are a big bank, you had better be detecting rogue processes eating CPU across your install base anyway.
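For what it's worth, spotting such processes needs no logout-time killing at all; a minimal sketch, assuming plain procps ps is available:

    # list the top CPU consumers across all users, highest first
    ps -eo user,pid,pcpu,etime,comm --sort=-pcpu | head -n 15

Central monitoring of exactly that is what actually catches the spam bot, whether or not its owner is still logged in.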
> In all of these cases you really want to make sure that whatever the user did ends – really ends – by the time he logs out.
No, you don't. You are constructing a simplistic world view that doesn't match reality. As others have said, the only simple case for killing processes is those that have no use once the user is gone, i.e. locally started windowing applications. Really, we need to fix GNOME and gdm and the other things that linger, because that is where the problem is. We don't need systemd to kill the 200 gdm lock-screen binaries that eventually run me out of resources to unlock my screen; we need gdm to acknowledge that bug and fix it.
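(To see how close such leaked processes push you to the limit, something like this does the job; a sketch assuming bash and procps:)

    # processes currently owned by this user...
    ps -u "$USER" --no-headers | wc -l
    # ...versus the per-user process limit the leak will eventually hit
    ulimit -u

Once the first number reaches the second, fork() starts failing and the unlock dialog can no longer be spawned.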
> This is really just one example. This model I think really needs to be the default everywhere.
People aren't agreeing with you, so making it the default seems like a bad idea. People do seem to agree on killing the obviously broken windowing apps that are left lingering. Why can't we just let those get killed?
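That is already expressible as per-machine policy rather than a baked-in default: logind's own configuration has a knob for it (this is the real KillUserProcesses= option from logind.conf(5), shown together with its exclude list):

    # /etc/systemd/logind.conf
    [Login]
    # kill leftover user processes when the user's last session ends
    KillUserProcesses=yes
    # but never touch these accounts
    KillExcludeUsers=root

Sites that want this behaviour can switch it on themselves; nobody needs it forced on everyone as the default.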
> On desktops and on servers: unless the admin permitted it explicitly, there should not be user code running.
No, user code may be running everywhere, as long as it does not conflict with the purpose or policies of the machine. Such policies are not expressed in terms of the filenames of binaries.
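And note that the explicit-permission mechanism already exists for the cases where an admin does want to grant it; loginctl's linger support (a real loginctl verb) is exactly that switch:

    # allow this user's processes and user services to outlive their sessions
    loginctl enable-linger alice
    # and revoke it again when the account is wound down
    loginctl disable-linger alice

(alice is a placeholder account name.) The disagreement is about the default, not about whether such a mechanism should exist.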
> If you allow your intern user access to a webserver to quickly check out the resource consumption of some service, that doesn't mean that he shall be allowed to run stuff there forever, just because he once had the login privilege for the server. And even more: after you disabled his user account and logged him out, he really should be gone.
Apart from the fact that your use case takes four lines to state, which suggests a policy that is difficult to encode in software (remember, you would also need to be able to encode the reverse of that policy), the only point where I agree with you is that processes owned by unlisted uids/gids might be fair game to shoot. But one has to wonder how well that works during a network outage, when a NIS server or the like is temporarily unavailable and you start shooting legitimate processes.
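To make that failure mode concrete, a reaper along those lines would have to look something like this hypothetical sketch (nothing in systemd actually does this; getent and the /proc layout are real, the policy is invented for illustration):

    # kill processes whose owner no longer resolves in the user database
    for pid in /proc/[0-9]*; do
        uid=$(stat -c %u "$pid" 2>/dev/null) || continue
        # getent consults passwd/NIS/LDAP; it also fails when NIS is down
        getent passwd "$uid" >/dev/null || kill "${pid#/proc/}"
    done

The moment the directory server blips, every networked uid stops resolving and a loop like this happily shoots the whole machine.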
> Yes, UNIX is pretty much a swiss cheese: it's really hard to secure a system properly so that somebody who once had access won't have access anymore at a later point. However, we need to start somewhere, and actually defining a clear lifecycle is a good start.
But your definition is already running afoul of just a handful of software developers, and it will cause large unexpected problems in the real world. For example, a decade ago a major airline had its core database automatically deleted each night: it turned out an overeager cron job was deleting all the "core" files that crashed applications left all over the servers. To me, systemd shooting processes is no different. If you are that concerned about processes, you need a strict security policy on which processes are allowed to be _started_, not an attempt to fix your mistakes afterwards by shooting them.

Paul