On 15/03/11 05:33, Ian Lance Taylor wrote:
> "Wolfram R. Jarisch" <wolfram@xxxxxxxxxxxx> writes:
>
>> When compiling code with a total array size about equal to (or slightly
>> larger than) the machine RAM, the system freezes (here it is an x86_64 with
>> 24GB running RHEL 6). Red Hat identified this as a problem with the
>> computation of the sha1sum in gcc 4.4.4 (case# 00387722). For some
>> reason the compiler tries to "work through the array" in its whole size
>> to compute a sha1sum.
>
> A Red Hat case number doesn't tell us anything useful, so I'm not sure
> exactly what the problem is. As far as I know, the compiler does not
> compute any SHA1 checksum. The linker does compute such a checksum if
> you pass the --build-id option. The linker is part of the GNU binutils;
> it is not part of the compiler. Linker bug reports should go to
> binutils@xxxxxxxxxxxxxx; see http://sourceware.org/binutils/ .
>
> That said, I think the linker just computes the checksum over the
> section contents of the output file. It has to generate that data
> anyhow, so I don't know why computing the checksum would make such a big
> difference in linker performance.

Maybe there's a bug that causes the sha1sum to be iterated over all of .bss? Actually, that may not even be a bug: it makes sense for the sha1sum to differ if the size of .bss (and nothing else) changes.

Anyway, as you say, it's a binutils issue.

Andrew.