Sorry for the belated reply; I got bad news from the oncologist recently and have been a bit distracted.

On Fri, 2012-02-17 at 02:53 +0000, Pádraig Brady wrote:
> On 02/16/2012 08:17 PM, Frank Mayhar wrote:
> > We have to do special stuff if an fsck fails for a particular file
> > system. Without running each fsck individually (something I want to
> > avoid for a number of reasons),
>
> please give a couple as this is a crucial point

One, memory use. Running each fsck individually means we use more memory than allowing fsck itself to do the parallelization. Not a _lot_ more memory, certainly, but under certain conditions it can become significant (e.g. when running in a cgroup, among other things).

Two, tracking multiple parallel instances of fsck from a shell script is a lot less straightforward than allowing the fsck wrapper itself to do so. The fsck wrapper already has the code to do the tracking; the functions I added simply build on that code. Writing a shell script to do the same thing (particularly when one has to handle fsck errors specially, as we do) is redundant, potentially error-prone, and (the real kicker as far as I'm concerned) hard to maintain.

It also (three) adds complexity to the start-up scripts, which are already plenty complex, _and_ it adds a dependency on that script which would not exist otherwise. (That is, to do a "proper" fsck by hand one would either have to set up the environment properly so that the script doesn't fall over, or provide _another_ script that can be run independently. It's a _lot_ easier to be able to just type "fsck".)

Adding a way to allow special handling to fsck itself is easy (the code is really straightforward), reduces the fsck footprint, and reduces complexity, making things easier to maintain.
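For context, the "handle fsck errors specially" part hinges on fsck's bitmask exit status, which is documented in fsck(8) (1 = errors corrected, 2 = reboot required, 4 = errors left uncorrected, 8 = operational error, and so on). A minimal sketch of what a shell wrapper would have to do per filesystem looks like this; the function name is hypothetical and the point is only to illustrate the decoding a script would need to replicate:

```shell
# Hypothetical helper: decode the bitmask exit status documented in fsck(8).
# Each bit is independent, so several conditions can be reported at once.
decode_fsck_status() {
    status=$1
    if [ "$status" -eq 0 ]; then
        echo "clean"
        return
    fi
    [ $((status & 1)) -ne 0 ] && echo "errors corrected"
    [ $((status & 2)) -ne 0 ] && echo "reboot required"
    [ $((status & 4)) -ne 0 ] && echo "errors left uncorrected"
    [ $((status & 8)) -ne 0 ] && echo "operational error"
}

# Illustrative usage in a wrapper script: run all checks in parallel,
# then interpret the combined status.
#   fsck -A -p
#   decode_fsck_status $?
```

Doing this once inside fsck itself, rather than in every wrapper script, is exactly the duplication argument above.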
--
Frank Mayhar
fmayhar@xxxxxxxxxx