Stefan Beller <sbeller@xxxxxxxxxx> writes:

>> I think two sensible choices that start-failure and return-value can
>> make are
>>
>> (1) This one task failed, but that is OK.  Please let the other
>>     tasks run [*1*].
>>
>> (2) There is something seriously wrong with the whole world and I
>>     declare an emergency.  Please kill the other ones and exit.
>
> (3) There is something wrong, such that I cannot finish my
>     job, but I know the other 15 processes help towards the goal,
>     so I want to let them live on until they are done.  E.g. "fetch
>     submodules" may want to take this strategy if it fails to start
>     another subprocess for fetching.

How is that different from (1)?  Do you mean "let the ones that are
already running continue, but do not spawn any new ones"?

> We could also offer more access to the pp machinery, and an
> implementation for (2) might look like this:
> ...
> By having the pointer to the pp struct passed around, we allow
> new callback functions to be added to the pp machinery later,
> which may not be expressible via a return code.

What you are suggesting would lead to the same "different smart
participants making decisions locally, so you need to run around and
follow all the detailed codepaths to understand what is going on"
design.  I was hoping that we had already moved past discussing that
stage.

The whole point of that "SQUASH???" commit was to correct the design
of the overall structure so that the central dispatcher, which uses a
bunch of "dumb" helpers (that do not make policy decisions locally on
their own), is the single place you need to read in order to
understand the logic.