Niklaus wrote:
The problem is not about setting up an online judge. It is about users
submitting code that produces compiler errors and clogs the system.
Say a C++ template file takes 3 seconds to compile and there are 300
users: that is roughly 1000 seconds wasted, and the real submission
volume is much higher than that. If we stopped at the first error, it
would take maybe less than a second.
How big are these files? I just did a simple -O0 compile of 1000
lines of "a = $x;" (where $x goes from 0 to 999), and it took 0.077s on
my 2.2GHz AMD64 to generate an object file that actually moved 1000
values into a (it wasn't just optimized out). Why not compile with the
optimizer turned off? It should be really fast at that point.
And if your users are submitting hundreds of thousands of lines of code,
maybe you have another problem on your hands? :-) For reference,
100,000 lines of the same code took 2.489s to compile to object code.
So when you say "3 seconds" you're really talking about tens if not
hundreds of thousands of lines of code. (Also for reference, in this
special case GCC is faster with the optimizer on, since it emits fewer
instructions that way, but in general with non-trivial code you're
faster with the optimizer off.)
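The benchmark above can be reproduced with a small script (a sketch; the file names, the `main` wrapper, and the `volatile` qualifier are my additions so the 1000 stores cannot be elided):

```shell
# Generate 1000 lines of "a = $x;" inside a function body, as in the benchmark.
{
    echo 'int main(void) { volatile int a;'
    for x in $(seq 0 999); do echo "    a = $x;"; done
    echo '    return 0; }'
} > bench.c

# Compile to object code with the optimizer off and time it.
time gcc -O0 -c bench.c -o bench.o
```

Changing `seq 0 999` to `seq 0 99999` gives the 100,000-line variant; the absolute timings will of course differ from the 2.2GHz AMD64 numbers quoted above.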
About parallelism: with "gcc a.c b.c d.c", can the outputs go to
a1.out, b1.out, and d1.out? Invoking gcc separately for each of a, b,
and d seems slow to me. Do we have anything better?
Parallel make will typically issue multiple gcc invocations, so you end
up with different processes running. Doing "gcc a.c b.c c.c", to the
best of my knowledge, only uses a single process (or at least it's
serialized).
Tom