Re: why does -fno-pic code generation on x64 require the large model?


 



>> What I'm trying to see is how to convince GCC to generate NON-PIC code
>> and link it into a shared library for x64. I only managed to do this
>> with "-fno-PIC -mcmodel=large", and I wonder why with other memory
>> models it doesn't work out. I suspect this has to do with some
>> artifact of x64's addressing modes for symbol offsets.
>
> Yes.  If it were easy to permit non-PIC x86_64 code in a shared library,
> gcc would do it.  But the only way to do that is, as you say, to use the
> large memory model, which is relatively inefficient.
>

Yes, I realize this. Hence my original question - *why* is the large
memory model the only way to do it? I know it's relatively
inefficient, because it's the most general and flexible in terms of
addressing. Why aren't the small & medium models flexible enough?

> The x86 shared library loader has a kludge where pages that contain
> non-PIC code are remapped and relocated, so every process ends up with
> its own copy of each relocated page.  This is provided for
> compatibility with older libraries.  x86_64 is a new architecture, so
> it wasn't necessary to provide backwards compatibility for non-PIC
> libraries.

So non-PIC code on x86_64 is actually different from non-PIC code on
x86? It *doesn't* need page relocation? What's non-PIC about it then,
and again, why does only the large memory model allow it?


Eli


