My FORTRAN 77 code compiled successfully on a 32-bit Xeon cluster with 8 GB RAM (RHEL), but on a 64-bit AMD Opteron cluster with 8 GB RAM (also RHEL) I get "relocation overflow" and "relocation truncated to fit" errors when I compile large FORTRAN 77 codes with f77/g77 (and even with ifort and pathf95). Compilation itself succeeds, but the assembler/linker cannot produce the executable. I understand this is caused by large arrays or something similar, yet the executable is no more than 2 GB in size. As a result I cannot run my codes with large dimensions. Can you please tell me what options I can use when compiling this kind of code? Sometimes the ulimit command or the -m32 option worked, but then the executable simply dumps core when run. Please suggest what I can do to get rid of these problems.

I am getting this error during compilation:

[cdacb@master IIAP]$ mpif77 -o step3.x ccea_MPI.F
ccea_MPI.F: In subroutine `twoint':
/tmp/cc5KpcVF.f:3621: warning:
         call MPI_RECV(maxinp,1,MPI_INTEGER,isource,1,MPI_COMM_WORLD,
         1
/tmp/cc5KpcVF.f:3624: (continued):
         call MPI_RECV(VTWOE,maxinp,MPI_DOUBLE_PRECISION,isource,128,
         2
Argument #1 of `mpi_recv' is one type at (2) but is some other type at (1) [info -f g77 M GLOBALS]
/tmp/cc5KpcVF.f:3718: warning:
         call MPI_SEND(maxinp,1,MPI_INTEGER,0,1,MPI_COMM_WORLD,ierr3)
         1
/tmp/cc5KpcVF.f:3720: (continued):
         call MPI_SEND(VTWOE,maxinp,MPI_DOUBLE_PRECISION,0,128,
         2
Argument #1 of `mpi_send' is one type at (2) but is some other type at (1) [info -f g77 M GLOBALS]
ccea_MPI.F: In subroutine `vbar':
/tmp/cc5KpcVF.f:3621: warning:
         call MPI_RECV(maxinp,1,MPI_INTEGER,isource,1,MPI_COMM_WORLD,
         1
/tmp/cc5KpcVF.f:5085: (continued):
         call MPI_RECV(vmpi,maxinp,MPI_DOUBLE_PRECISION,isource,6,
         2
Argument #1 of `mpi_recv' is one type at (2) but is some other type at (1) [info -f g77 M GLOBALS]
/tmp/cc5KpcVF.f:3718: warning:
         call MPI_SEND(maxinp,1,MPI_INTEGER,0,1,MPI_COMM_WORLD,ierr3)
         1
/tmp/cc5KpcVF.f:5276: (continued):
         call
MPI_SEND(vmpi,maxinp,MPI_DOUBLE_PRECISION,0,6,
         2
Argument #1 of `mpi_send' is one type at (2) but is some other type at (1) [info -f g77 M GLOBALS]
ccea_MPI.o(.text+0x846): In function `MAIN__':
: relocation truncated to fit: R_X86_64_32S scratch1_
ccea_MPI.o(.text+0x85c): In function `MAIN__':
: relocation truncated to fit: R_X86_64_32S scratch1_
ccea_MPI.o(.text+0x872): In function `MAIN__':
: relocation truncated to fit: R_X86_64_32S scratch2_
ccea_MPI.o(.text+0x888): In function `MAIN__':
: relocation truncated to fit: R_X86_64_32S scratch2_
ccea_MPI.o(.text+0xa31): In function `MAIN__':
: relocation truncated to fit: R_X86_64_PC32 skip3_
ccea_MPI.o(.text+0xbaa): In function `MAIN__':
: relocation truncated to fit: R_X86_64_32S scratch1_
ccea_MPI.o(.text+0xbb3): In function `MAIN__':
: relocation truncated to fit: R_X86_64_32S scratch1_
ccea_MPI.o(.text+0xbcc): In function `MAIN__':
: relocation truncated to fit: R_X86_64_32S scratch1_
ccea_MPI.o(.text+0xfaa): In function `getvbar_':
: relocation truncated to fit: R_X86_64_32S mind_
ccea_MPI.o(.text+0x1111): In function `eamatb_':
: relocation truncated to fit: R_X86_64_PC32 info_
ccea_MPI.o(.text+0x1158): In function `eamatb_':
: additional relocation overflows omitted from the output
collect2: ld returned 1 exit status

--
Sameer Sinha
Member Technical Staff
SSDG, CDAC R&D
Bangalore-38