Re: Integral conversions in C/C++

Tom St Denis wrote:

Because there is no such thing as a negative unsigned number.

Actually, the notion of an ``unsigned integer'' as it is known in
the C/C++ world is already an oxymoron.  And it grows from there.
Don't get me started on the float/double nightmare.

And the promotion applies to the operator

Reiterating a statement without proof does not make said statement
valid.  Show me the section(s) in the standard(s) that limit integral
conversions to assignment statements as you claim.

The promotions occur (in time) with respect to the order of precedence.  If
they occur out of order you end up with nastiness (like the float example I
sent earlier).

At which point in the parsing process of the original example is
there actually an operator precedence resolution involved ?  Care
to elaborate a bit on that ?

given the original expression, a = 4294967280UL is a colossal
screw-up which I actually expected the compiler to warn me about.

Why?

The missing warning, the screw-up or both ?

    You negated an unsigned expression.

Nope.  I multiplied a natural number by -1 expecting to produce
an integer large enough to hold the value of a natural number
which itself should be large enough to hold the result of a
multiplication of one natural number variable of limited range
and a relatively small natural number constant.  Clean a priori
information that is available to the compiler.  The C/C++
backwardness, however, maims it into something that probably
was the norm in the '70s but is unacceptable in the 2000s.
At least in my book.  But then again, it's only PeeCees in the
computing world even on the MPAs nowadays so why bother.

unsigned char b = 255;
int a = 4;

a += b;

Should the result be 3?

Define the ranges of ``unsigned char'' and ``int'' in your example
and I can give you a reasonable answer.

How is that any different than your example?

Integral _promotion_ vs. integral _conversion_.  Got it ?
If I was to write

int64_t a;
uint32_t b = 0xdeadbeef;

a = -(((int64_t)b) * 10u);

then on a 32 bit machine 10u is promoted to int64_t (which is large
enough to hold the value) and a 64<-64x64 integer multiplication is
performed.  Or isn't it ?  Now, if there was a machine that actually
implemented a 64<-32x32 ``unsigned integer'' multiplication, what
would the result in a of the original example look like if the code
was to be translated according to the standard for this particular
machine ?  Could this multiplication _ever_ be used if the standards
were followed blindly ?

If we followed your logic, the above statement would read

1.  promote b to a signed int (-1)

Wrong on so many levels.  -1 is neither the value after a promotion
nor conversion.  If you indeed understood my logic, then you would
have souped up a conversion example which the above is not.

I have no idea what you're talking about.
You will in a couple of years.

All those having problems understanding the C [and C++] standards raise your hand
(hint: this is embarrassing, but please raise your hand!!!).

I was probably being too optimistic and should have been more
specific: If you stick to C you never will.  C++ is sloooowly
beginning to move into the right direction.  Too many Cisms and
inconsistencies in it that make coding a tedious task but still
preferable over C and Fortran.  We'll see how far they will have
come within five years from now.

That's a completely different problem dealing with _floating point_
conversions (C++ 4.8) and irrelevant to the one I was describing.

Actually, it's the exact same thing.

Right.  Reals, integers, rational and natural numbers.  It's all the
same thing, really.  And because it's all the same thing, really, there
are actually _separate_ clauses for floating point and integer
conversion cases in the C++ standard.  I am with you so far.

It appears that you did not read the original post thoroughly as
that contained explicit references to the C++ standard.

C++ and C are similar

Define ``similar''.  And, please, prove it with pointers to the
standards.

And this has nothing specifically to do with GCC.  Any conforming C or C++
compiler will generate the same damn output.

Since I have only access to various GCC incarnations nowadays,
it's impossible for me to (dis-)prove that point.  Can you ?

And just because you're resilient to new information

Once again: There's been no news so far.

doesn't mean your complaints or observations are any more correct than when
you first started posting.  If you used the language properly in the first
place, you wouldn't be in this mess.

There is no mess as far as my code is concerned as I adhere to the
standards as backward/retarded/perverted the rules may be.  Any well
trained mathematician would, however, think otherwise.  That is the
view I was expressing and the kind of out-of-the-box thinking that
appears to baffle you.  Can't help you with that.


Cheers,
Christian

