Hey,

Sorry if this is the wrong list to be subscribed to for this question. I'm working on an x86_64 port of Haiku OS. As part of the port I made a new GCC 4.3.3 target for x86_64 Haiku ELF files. In practice, though, I've noticed some odd behavior in the compiled binaries when building with the -m32 option. That should produce an elf_i386 binary, and it compiles fine, but executing the code has some strange effects. For example, in this code segment:

    if (address < KERNEL_BASE || address + size > sNextVirtualAddress) {
        panic("mmu_free: asked to unmap out of range region (%p, size %lx)\n",
            (void *)address, size);
    }

the condition (address + size > sNextVirtualAddress) always evaluates to true, no matter what (address + size) actually is. This code works perfectly with the normal x86 GCC 4.3.3 target we've been using. If I cast size to void*, the code also works with the x86_64 target (compiled as 32-bit code); that cast isn't needed with the normal x86 target, though. I'm assuming that needing the cast at all is a sign of a bigger issue in the x86_64 target. Is that a correct assumption?

Secondly, building GCC outputs an x86_64 libgcc, but it doesn't output the x86 libgcc. Is there an option or setting in the build script that will output libgcc for both architectures?

Here's the code for the Haiku x86_64 target:
http://dev.haiku-os.org/attachment/ticket/1141/x86_64-buildtools-buildtoolstrunk.patch

I'm sorry that I'm diving into GCC without truly understanding the code base, but I'm on a schedule to complete my project, so any help would be extremely appreciated.
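
P.S. To show the comparison misbehavior concretely, here's a small standalone program I put together. It's only a sketch of one hypothesis (the new target's headers giving size a 64-bit type while address stays a signed 32-bit type); I haven't verified that this is what the target actually does, and the values are made up:

```c
#include <stdint.h>
#include <stdio.h>

/* Mimics mmu_free()'s range check under one hypothesis: `size` ends up
 * as a 64-bit type while `address` is a signed 32-bit type.  The usual
 * arithmetic conversions then sign-extend the address to 64 bits before
 * the unsigned comparison, so the sum compares as a huge value. */
static int
range_check_fails(void)
{
	int32_t address = (int32_t)0x80000000;	/* kernel address, sign bit set */
	uint64_t size = 0x1000;			/* hypothetical 64-bit size type */
	uint32_t sNextVirtualAddress = 0x80100000;

	/* address converts to uint64_t by sign extension:
	 * 0xffffffff80000000 + 0x1000 > 0x80100000 -- true for any size. */
	return address + size > sNextVirtualAddress;
}

int
main(void)
{
	printf("%d\n", range_check_fails());	/* prints 1 */
	return 0;
}
```

Casting one operand back to a 32-bit pointer type (as the void* cast does) forces the comparison back to 32-bit arithmetic, which would explain why the cast "fixes" it.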
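
P.P.S. On the libgcc question: I wondered whether GCC's multilib support is what's missing. Stock GCC's configure accepts --enable-multilib, which on x86_64 builds libgcc for both -m64 and -m32. Something along these lines, perhaps, though the triplet and paths here are placeholders and I haven't confirmed this works with the Haiku buildtools:

```shell
# Speculative sketch only -- not verified against the Haiku build scripts.
mkdir build && cd build
../gcc-4.3.3/configure \
	--target=x86_64-pc-haiku \
	--enable-multilib \
	--enable-languages=c,c++
make
```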