On 3 Dec, Nathan C Summers wrote:

> This may be a bug when used with unsigned numbers, but it certainly is
> a valid and acceptable approach to go out of range with signed
> numbers, which is what the program was using. Since the MAX macro
> returns a 1 for all nonpositive numbers, it's not a problem.
>
> If this weren't ok, why would we need the CLAMP macro? Always staying
> in range can be a difficult and inefficient way of doing things.

Right.

> Changing signed to unsigned turns this nonbug into a bug. While it is
> possible to rewrite this particular example by using a (hypothetical)
> function saturated_subtraction(width, 15, 1) which has some nastiness
> to ensure that width never goes negative, I personally find that to
> be a less readable way of doing things. (although such a function
> could be useful at times.) It could also be implemented as a macro,
> although the amount of nastiness required for doing that is scarcely
> imaginable.

However, I see much room for improvement here. The values I changed to
unsigned are meant to be unsigned: sizes, distances, heights, widths.
And those are frequently used in loops like

  int i;

  for (i = 0; i < width; i++)
    foo ();

If we were to expect negative values in this case we'd be really
doomed, so I consider it a feature to be warned by the compiler when a
calculation definitely underflows or a test is useless.

Negative values do have a right to exist when it comes to coordinates,
but using signed types for error reporting AND value representation at
the same time is likely to be error-prone.

>> Shouldn't be. I was about to go even further and change all false
>> uses of such values. What you imply is that using unsigned types
>> for unsigned values is wrong because you also use them for error
>> indication, and this (together with magic constants) is about most
>> of the evilness of programming.

> How do you propose to do so? Add a "valid" bit to each structure for
> each variable used in this way? There go any space savings you might
> have had.

No, that is not my idea, but your claim about the space savings is not
quite correct: in most cases the bits of a bitfield are not completely
occupied, so adding another one wouldn't change the total size.

I propose to do it the following way:

  gboolean do_something (guint *value, ...)

(that is, passing a pointer to the place the result should be written
to) instead of

  gint do_something (...)

with a negative return value in case of failure.

I've applied exactly this scheme to tile_manager_get_tile_num () in
tile-manager.c and, together with replacing the ongoing error checking
throughout the callers, was able to save a whopping 480 bytes of
object size on PPC, got faster code, and (IMHO) made the code clearer.
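To make that concrete, here's a minimal sketch of the scheme. The
names and numbers below are made up for illustration; this is not the
actual tile-manager code:

  #include <glib.h>

  #define TILE_WIDTH  64
  #define TILE_HEIGHT 64

  /* hypothetical stand-ins for the tile manager's real fields */
  static guint width      = 1024;  /* image width in pixels  */
  static guint height     = 768;   /* image height in pixels */
  static guint ntile_cols = 16;    /* tiles per row          */

  /* old style: the signed return value doubles as the error channel */
  static gint
  get_tile_num_old (gint x, gint y)
  {
    if (x < 0 || y < 0 || (guint) x >= width || (guint) y >= height)
      return -1;   /* every caller must remember the < 0 test */

    return (y / TILE_HEIGHT) * ntile_cols + (x / TILE_WIDTH);
  }

  /* proposed style: the result goes through a pointer, and the
   * return value only says whether that result is valid */
  static gboolean
  get_tile_num (guint x, guint y, guint *num)
  {
    if (x >= width || y >= height)
      return FALSE;

    *num = (y / TILE_HEIGHT) * ntile_cols + (x / TILE_WIDTH);
    return TRUE;
  }

A caller then handles failure exactly once, right where it occurs:

  guint num;

  if (! get_tile_num (x, y, &num))
    return;   /* no magic negative value to pass further down */

  /* num is guaranteed valid from here on */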
> I personally find nothing wrong with signifying an invalid value
> with an invalid value.

I do, because one constantly has to check for error values, and
therein lies a problem: we have several places in GIMP where return
values are checked for errors and then passed down to another function
where they are checked again, while there are also places where we
don't have any error checking at all.

> This is a very accepted practice in C, and even in places and
> languages that try to avoid such usage end up doing it with
> pointers. I've only seen one language that doesn't have an
> equivalent to "NULL", the pointer that isn't a pointer -- the
> invalid pointer. I fail to see why using NULL to signify an invalid
> value for a pointer is any different than using negative numbers to
> signify an invalid "unsigned" number.

Using signed types is inelegant and prevents the compiler from
optimising the code better. NULL pointers have the nice property of
being easy to check, and I think it's a nice idea to have functions
return 0 or !0 (aka FALSE or TRUE) instead of some bogus values,
because it's clearer and faster.

-- 
Servus,
       Daniel
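P.S.: To be fair to the hypothetical saturated_subtraction(), the
function variant isn't that nasty. A sketch, assuming the three
arguments quoted above are (value, amount, floor); for illustration
only, I'm not proposing to add it:

  /* MAX (value - amount, min) without wrapping below zero;
   * assumes amount + min itself doesn't overflow */
  static inline guint
  saturated_subtraction (guint value, guint amount, guint min)
  {
    return (value >= amount + min) ? value - amount : min;
  }

The macro version is where the nastiness really starts: the obvious

  #define SATURATED_SUBTRACTION(v, a, m) \
          ((v) >= (a) + (m) ? (v) - (a) : (m))

evaluates its arguments more than once, so anything with side effects,
like SATURATED_SUBTRACTION (width--, 15, 1), silently goes wrong.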