Kosaki-san wrote:
> Yes.
> Fujitsu HPC middleware watching sum of memory consumption of the job
> and, if over-consumption happened, kill process and remove job schedule.

Did those jobs share nodes -- sometimes two or more jobs using the
same nodes?

I am sure SGI has such users too, though such job mixes make the
runtimes of specific jobs less obvious, so customers are more tolerant
of variations and some inefficiencies, as they get hidden in the mix.

In other words, Rik, both yes and no ;).  Both sorts of HPC loads
exist: sharing nodes, and a dedicated set of nodes for each job.

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@xxxxxxx> 1.940.382.4214
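[Editor's note: the policy Kosaki-san describes (sum a job's memory
consumption and kill the job on over-consumption) could be sketched
roughly as below. This is a minimal illustration, not the actual
Fujitsu middleware; the function name and the pid-to-RSS map are
hypothetical, and real middleware would read per-process usage from
the kernel (e.g. /proc) and then remove the job from the schedule.]

```python
def over_limit_pids(job_rss_kb, limit_kb):
    """Hypothetical over-consumption check for one job.

    job_rss_kb: map of pid -> resident-set size in kB for the job's
    processes (assumed already collected, e.g. from /proc).
    Returns every pid in the job if the summed usage exceeds the
    limit (the whole job is killed, per the description above),
    otherwise an empty list.
    """
    total_kb = sum(job_rss_kb.values())
    return list(job_rss_kb) if total_kb > limit_kb else []

# Example: a three-process job against a 1 GB (1048576 kB) limit.
job = {1234: 400000, 1235: 500000, 1236: 300000}  # totals 1200000 kB
print(over_limit_pids(job, 1048576))  # over limit: all pids returned
```

Whether such a watchdog runs per node or across a set of nodes is
exactly the sharing-vs-dedicated distinction discussed in the reply.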