Printing float scenarios (32 vs 64 bit parameters)

Hello all,

I have a weird scenario that I'd like to understand better; perhaps it's a bug. Let me pose the question, then I'll provide the code.

Using the attached code, the following two scenarios work (both compiled as 32-bit programs):

1. On MacOS 10.10.5, using NASM version 2.11.08, and "g++" (Apple LLVM version 6.1.0 (clang-602.0.53)) via Terminal

2. On Ubuntu 64-bit 15.10, using NASM version 2.11.05, and "g++" (gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2)) via Terminal
*Note: If using Ubuntu, the function names in the C++ code below must be prefixed with underscores, to match the _-prefixed symbols the assembly references; see the sketch just below.*
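
For clarity, that adjustment is only a renaming; a minimal sketch (the bodies stay exactly as in the code below):

// Ubuntu/ELF sketch: ELF adds no leading underscore automatically, so the
// C++ names must literally match the _-prefixed symbols in the .asm file.
extern "C" void _asmMain();              // matches "global _asmMain"
extern "C" void _printFloat(float f);    // same body as printFloat below
extern "C" void _printDouble(double d);  // same body as printDouble below
// (and main() then calls _asmMain() instead of asmMain())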

However, when I remove the "push 0" instruction before the call to _printFloat (and adjust esp by 4 instead of 8 for the cleanup), it segfaults on MacOS but still works on Ubuntu. I'm assuming that in both scenarios the compilers might be promoting the float to double (in C fashion), but why would passing only 32 bits still work with g++ on Ubuntu?
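
For reference, the failing variant of that call site is just this (a sketch of exactly the change described above):

; 32-bit-only variant: segfaults in scenario #1, works in scenario #2
fstp DWORD [f_result]
push DWORD [f_result]    ; only the 4-byte float, no "push 0" padding
call _printFloat
add esp, 4               ; cleanup adjusted from 8 to 4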

This gets even stranger when I move to Xcode (6.4) on the Mac, using AT&T syntax (code not attached) and clang's integrated assembler: both versions work, with and without the "push 0" (that is, passing 32 or 64 bits). Why would it work both ways in Xcode, yet segfault from Terminal when the 32 bits of "push 0" are excluded?

In sum, everything works fine in all cases when I send 64 bits to the _printFloat function. But when I send 32 bits to _printFloat, it segfaults in scenario #1, yet continues to work in scenario #2 AND within Xcode (scenario #3). It doesn't seem to be a Darwin/BSD or clang issue (it works both ways in Xcode with that target), but I could be wrong. I'm wondering how g++ is meant to handle this, and why passing 32 bits works fine in scenario #2 but not in scenario #1, which absolutely demands 64 bits.

Does g++ not promote a float parameter if it receives 32 bits, but promote if it receives 64 bits? Or does Apple's modified clang always promote float to double for parameters and return values (which still wouldn't explain why it works both ways in Xcode)?
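
To make that concrete, here is a minimal sketch of my understanding of "promotion in C fashion" (takesFloat is just a hypothetical name; the point is that default argument promotions apply to variadic calls, not to a prototyped float parameter):

// Sketch: where float-to-double promotion does and does not happen.
// takesFloat is a hypothetical example, not from the code below.
#include <cstdio>

extern "C" void takesFloat(float f){   // prototyped: f arrives as a float
   std::printf("%f\n", f);             // variadic call: f is promoted to
                                       // double before printf receives it
}

int main(){
   float x = 1.2f;
   takesFloat(x);   // prototyped call: no default argument promotion
   return 0;
}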

Any insight is appreciated.  Sorry for the wordiness.


The code...

// C++
#include <iostream>
#include <iomanip>
using namespace std;

extern "C" void asmMain();

extern "C" void printFloat(float f){
   cout << setprecision(7) << f << endl;
   //printf("%f\n",f);
}

extern "C" void printDouble(double d){
   cout << setprecision(15) << d << endl;
}

//main stub driver
int main(){
   asmMain();
   cout << "It works!" << endl;
   return 0;
}

-----------------------------------------
; code.asm

extern _printFloat
extern _printDouble

section .data
value: dd 1.2

section .bss
f_result: resd 1
d_result: resq 1

section .text
global _asmMain
_asmMain:
push ebp
mov ebp, esp

finit                        ; reset the x87 FPU
fldpi                        ; st0 = pi
fld DWORD [value]            ; st0 = 1.2, st1 = pi
fadd ST0, ST1                ; st0 = 1.2 + pi

fstp DWORD [f_result]        ; store st0 as a 4-byte float and pop; st0 = pi
push 0                       ; 4 bytes of padding -> 64 bits total
push DWORD [f_result]        ; the 4-byte float itself
call _printFloat
add esp, 8                   ; cdecl: caller cleans up both pushes

fstp QWORD [d_result]        ; store pi as an 8-byte double and pop
push DWORD [d_result + 4]    ; high dword of the double
push DWORD [d_result]        ; low dword of the double
call _printDouble
add esp, 8

pop ebp
ret
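
In case it helps reproduce this, a minimal build sketch (assuming the C++ file is named main.cpp; the output name is arbitrary):

# scenario 1 (MacOS, 32-bit Mach-O object):
nasm -f macho code.asm -o code.o
g++ -m32 main.cpp code.o -o demo

# scenario 2 (Ubuntu, 32-bit ELF object):
nasm -f elf32 code.asm -o code.o
g++ -m32 main.cpp code.o -o demo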


