Re: 128-bit integer - nonsensical documentation?

On 08/28/2015 12:54 AM, David Brown wrote:
> On 27/08/15 17:09, Martin Sebor wrote:
>>> Is it fair to say that the main use of extended integers is to "fill
>>> the gaps" if the sequence char, short, int, long, long long has
>>> missing sizes?  Such as if an architecture defines int to be 64-bit
>>> and short to be 32-bit, then you could have an extended integer type
>>> for 16-bit?

>> Something like that. The extended integer types were invented by
>> the committee in hopes of a) easing the transition from 16-bit
>> to 32-bit to 64-bit implementations and b) making it possible for
>> implementers targeting new special-purpose hardware to extend the
>> language in useful and hopefully consistent ways to take advantage
>> of the new hardware. One idea was to support bi-endian types in
>> the type system. There was no experience with these types when
>> they were introduced and I don't have the impression they've been
>> as widely adopted as had been envisioned. The Intel bi-endian
>> compiler does provide support for "extended" mixed-endian types
>> in the same program.


By "bi-endian types", you mean something like "int_be32_t" for a 32-bit
integer that is viewed as big-endian, regardless of whether the target
is big or little endian?  (Alternatively, you could have "big_endian",
etc., as type qualifiers.)  That would be an extremely useful feature -
it would make things like file formats, file systems, network protocols,
and other data transfer easier and neater.  It can also be very handy in
embedded systems at times.  I know that the Diab Data embedded compiler
suite, now owned by Wind River which is now owned by Intel, has support
for specifying endianness - at least in structures.  If I remember
correctly, it is done with qualifiers rather than with extended integer
types.

The Intel compiler uses attributes (besides pragmas and other
special features for this), so the surface syntax can look much like
qualifiers (the recommended way to use the attributes is via typedefs).
Because the language requires the qualified and unqualified forms
of the same type to have the same representation, an annotation
that changes a type's endianness cannot be a qualifier. Objects
with different value representations must have distinct types.
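
To make the representation point concrete, here's a rough,
self-contained illustration (my own sketch, not the Intel syntax):
a const-qualified int and a plain int always share the same object
representation, but a 32-bit value stored big-endian on a
little-endian target does not, so it has to be a distinct type with
explicit conversions rather than a qualified one.

    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>

    /* A big-endian 32-bit integer as a distinct type: its bytes in
       memory differ from a native uint32_t on a little-endian target,
       so it cannot be just a "qualified" uint32_t. */
    typedef struct { unsigned char b[4]; } be32;

    static be32 be32_store(uint32_t v)
    {
        be32 x = { { (unsigned char)(v >> 24), (unsigned char)(v >> 16),
                     (unsigned char)(v >> 8),  (unsigned char)v } };
        return x;
    }

    static uint32_t be32_load(be32 x)
    {
        return ((uint32_t)x.b[0] << 24) | ((uint32_t)x.b[1] << 16)
             | ((uint32_t)x.b[2] << 8)  |  (uint32_t)x.b[3];
    }

    int main(void)
    {
        const uint32_t qualified = 0x12345678;   /* same representation */
        uint32_t plain;                          /* as plain uint32_t   */
        memcpy(&plain, &qualified, sizeof plain);

        be32 wire = be32_store(0x12345678);      /* bytes 12 34 56 78 */
        printf("%08" PRIx32 " %08" PRIx32 "\n", plain, be32_load(wire));
        return 0;
    }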

Although the compiler isn't available for purchase, the manuals
are now all online:
  https://software.intel.com/en-us/c-compilers/biendian-support


> I wonder if such mixed-endian support would be better done using
> named address spaces, rather than extended integer types?
>
> (Sorry for changing the topic of the thread slightly - control of
> endianness is one of the top lines in my wish-list for gcc features.)

GCC already has experimental support for controlling endianness:
  https://gcc.gnu.org/ml/gcc/2013-05/msg00249.html

There was a discussion back in June of merging it into trunk:
  https://gcc.gnu.org/ml/gcc/2015-06/msg00126.html

I'm not sure if it's been done yet.
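
If I'm reading the patch right, the interface is a
scalar_storage_order attribute on struct and union types rather than
new integer types, so usage would look roughly like this (untested,
going from the posted patch; the header layout is just for
illustration):

    #include <stdint.h>

    /* Scalar members of this struct are stored big-endian regardless
       of the target's native byte order; ordinary member accesses are
       byte-swapped by the compiler as needed. */
    struct ip_header {
        uint8_t  ver_ihl;
        uint8_t  tos;
        uint16_t total_length;
        uint16_t id;
        uint16_t frag_off;
        uint8_t  ttl;
        uint8_t  protocol;
        uint16_t checksum;
        uint32_t saddr;
        uint32_t daddr;
    } __attribute__((packed, scalar_storage_order("big-endian")));

    /* On a little-endian target this still reads as a host-order
       value; the byte swap happens at the member access. */
    static uint16_t payload_len(const struct ip_header *hdr)
    {
        return (uint16_t)(hdr->total_length - 4u * (hdr->ver_ihl & 0x0fu));
    }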

Martin



