Your organization might not need anything like this, but there are many that do. Personally, I log user commands (including root) on my servers, and all the logs are piped to another set of machines in approximately real time (see the P.S. at the bottom for a rough sketch of one way to set that up).

I think you might be envisioning a blue-nose admin poring over individual users' command logs, looking for bad behavior. Sounds pretty juvenile, to me. Here are some more realistic purposes:

  1) post-breach security analysis
  2) meeting regulatory standards for operations integrity
  3) troubleshooting user-induced problems
  4) discouraging irresponsible user behavior

As for #3 and #4, there's an obvious comparison to security cameras. (This is probably not going to win me many friends here.) In sensitive areas (e.g., lobby entrance, file room, bank ATM), an organization may install cameras and record video that *isn't* actively being watched. If an incident occurs (unauthorized intruder, missing files, ATM malfunction), someone looks at the video captured around the time of that incident. Command logging could serve a similar purpose.

Not all machines (dev machines, desktops, workstations) need command logging--just a few key hosts. And, like many logs, 99+% of the time nobody will read them. But when something goes wrong, the sysadmin responding to the problem has an added perspective on the situation.

>>> And your filesystems with the logfiles will fill up really fast, since
>>> you want to log the full commands (with pathnames in them), but also the
>>> audit messages.
>>
>> I have now more or less with 30~40 users 50~60mb per day.
>> Anyway, you can rotate the log file and it has a big compression ratio.
>
> That's not the point - you'll get logfiles that are many megs large, every
> day. How do you think you'll find what you don't like?

Have you heard of "grep"? It's new.

I also use Splunk, which can make finding a needle in a haystack much easier. There are plenty of other techniques for sorting through large, noisy log sets, even when you don't know exactly what you're looking for. One approach is to pipe a few 'grep -v' commands together to filter out each type of log entry that you *don't* care about, examine the remaining data, and repeat until the remaining set of entries is small enough to eyeball.
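To make that concrete, here's a rough sketch of the kind of pipeline I mean; the log path and the filter patterns are made up for illustration:

    # Hypothetical example: peel away the entries you already know you
    # don't care about, one 'grep -v' at a time, and see what's left.
    zcat /var/log/cmdlog/commands.log.1.gz |
        grep -v ' cron' |
        grep -v ' logrotate' |
        grep -v ' backup ' |
        less
    # Look at what remains, add another 'grep -v' for the next category
    # of noise, and repeat until the output is small enough to read.

The nice part is that you don't have to know in advance what you *are* looking for--you only have to recognize what you're *not* looking for.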
>>> Unless you don't trust any of your users, this is a pointless exercise
>>> in pretend security.

Post-breach analysis is a key part of security, hardly "pretend" or "pointless": it's how you identify and close security gaps you didn't know you had. Command logs can make that analysis more efficient and effective, which generally has security benefits.

>> No, I can't trust in all the users, I need some extra security.
>
> Do these users have root logins? Or do they only have sudo? If the latter,
> that's already being logged in /var/log/secure. If the former, and they're
> not trained admins, this is the first thing you need to change, long
> before you worry about logging. NO ORDINARY USERS should *ever* have root
> login.

Here, I think you're suggesting that non-root users can't cause significant harm, which isn't necessarily true. First, non-root users may need access to critical business data: database contents, log files, etc. may be readable--and sometimes writable--by an unprivileged operator because the business requires it. A data analyst could corrupt or delete records, or a developer could corrupt a source repository. But if you take away those privileges from all analysts and developers, they may not be able to do their jobs as well, or at all.

Second, security vulnerabilities do exist that can allow local accounts to escalate their privileges without authorization, possibly all the way to root. Having the command logs can make it possible to identify how an attacker managed it.

-Ryan
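P.S. For anyone curious about the mechanics of the setup I mentioned at the top, here's a rough sketch of one way to get shell command logging with near-real-time forwarding to a separate log host. It's only an illustration, not my exact configuration: the facility, tag, paths, and hostname are made up, and a PROMPT_COMMAND hook is trivial for a determined user to disable, so you'd normally pair it with auditd rules watching execve as well.

    # On each monitored host, log the previous command to syslog from every
    # interactive bash (e.g. via a small file dropped in /etc/profile.d/):
    export PROMPT_COMMAND='logger -p local6.info -t cmdlog "$USER [$$]: $(history 1 | sed "s/^ *[0-9]* *//")"'

    # Then, in that host's rsyslog configuration, keep a local copy and
    # forward the same facility to the central log host over TCP:
    #   local6.*    /var/log/cmdlog/commands.log
    #   local6.*    @@loghost.example.com:514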