Dear Madam or Sir:

When compiling code with a total array size about equal to (or slightly larger than) the machine's RAM, the system freezes (here it is an x86_64 with 24 GB running RHEL 6). Red Hat identified this as a problem with the computation of the sha1sum in gcc 4.4.4 (case# 00387722). For some reason the toolchain tries to "work through the array" in its entirety to compute the sha1sum.

On reviewing the purpose of the sha1sum, I suspect that the present implementation of the unique identifier goes overboard: the crucial point is that a given source code, compiler and associated libraries, CPU ID, other hardware characteristics, etc. represent a finite number of distinct states. Involving any quantities DERIVED from these pieces of information (such as walking through an array specified in the source code) does NOT increase the REPERTOIRE of states captured by the sha1sum and may therefore be considered computational waste.

I am curious whether I have missed a point or whether a simpler approach would indeed suffice. In any event, please let me know the next steps to fix this bug and/or whom to contact.

As a quick fix I currently apply the linker flag -Wl,--build-id=uuid, which replaces the sha1-based Build ID with a random UUID. This, however, is not a very satisfying approach, as one typically applies it only after having already crashed the system.

With best regards,
Wolfram Jarisch
(301) 765-0810
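
P.S. For reference, here is a minimal sketch of the kind of code I believe triggers the freeze, assuming the culprit is a large, statically initialized array whose data section approaches the size of physical RAM (the file name big.c and the exact size are only illustrative):

    /* big.c - hypothetical reproducer sketch.
       The single non-zero initializer forces the whole array into .data,
       so the emitted object/executable is roughly N bytes in size and the
       linker's sha1-based Build ID computation has to read all of it. */

    #define N (24ULL * 1024 * 1024 * 1024)  /* ~24 GB, about the machine RAM */

    static char big[N] = {1};

    int main(void)
    {
        return big[0];
    }

With such a file, the workaround mentioned above would be applied on the command line as, for example:

    gcc -mcmodel=medium big.c -o big -Wl,--build-id=uuid

(-mcmodel=medium is typically needed anyway on x86_64 for static data objects larger than 2 GB; the essential part for this report is the -Wl,--build-id=uuid flag.)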