On Tue, 22 Nov 2011, Jilles Tjoelker wrote:
> On Mon, Nov 21, 2011 at 06:29:06PM +1100, Bruce Evans wrote:
>> On Wed, 9 Nov 2011, Jilles Tjoelker wrote:
>>> On Wed, Nov 09, 2011 at 09:35:51AM +0100, Stefan Farfeleder wrote:
>>>> Isn't the behaviour undefined too when you convert an out-of-range
>>>> uintmax_t value back into an intmax_t value?
>>> The result is implementation-defined or an implementation-defined signal
>>> is raised.
>> C doesn't allow any signal, at least in C90 and n869.txt draft C99:
> The possibility of a signal is mentioned in C99TC2 draft n1124 and
> remains in C1x draft n1548. The documentation in 'info gcc' is
> consistent with that.
I wonder why they (the C standards) broke that. Though an implementation
may prefer to raise a signal, C90 (and the final C99?) doesn't allow
that, and allowing one is a large change.
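To make the conversion rule concrete, here is a minimal sketch (the
function name `back_to_signed` is just illustrative):

```c
#include <stdint.h>

/*
 * C99 6.3.1.3p3: when the value cannot be represented in the signed
 * destination type, the result is implementation-defined or an
 * implementation-defined signal is raised.  gcc documents reduction
 * modulo 2^N, so on a 2's-complement machine UINTMAX_MAX comes back
 * as -1; a strictly conforming program cannot rely on that.
 */
intmax_t back_to_signed(uintmax_t u)
{
	return (intmax_t)u;	/* implementation-defined if u > INTMAX_MAX */
}
```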
>>> ] For conversion to a type of width N, the value is reduced modulo
>>> ] 2^N to be within range of the type; no signal is raised.
>>> which is exactly what we need.
>> Of course, a correct implementation would give a random result, so that
>> no one depends on implementation-defined behaviour.
> That would be a non-practical implementation, as it would be both slower
> and run fewer existing applications.
Its point is to run fewer existing applications -- the broken ones :-).
This would not necessarily be slower. The hardware might want to or
be able to trap (at no cost unless there is overflow). Then the
implementation can convert the trap to a random result, instead of
raising a signal. The hardware might be 1's complement, but not
trap. Then the fast version would give a non-random result, but not
what you want. The slow version to give the 2's complement result
that you want could probably give a random result instead. Now it
is apparently allowed to trap instead. A trap is of course better
for running fewer existing applications. Old ones won't have a trap
handler and will just crash.
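A program that wants the 2's-complement result regardless of what the
implementation chooses can compute it portably, avoiding the
implementation-defined conversion for out-of-range values entirely.
A sketch (the helper name `utoi` is made up here):

```c
#include <stdint.h>

/*
 * Portable reduction modulo 2^N: values that fit in intmax_t convert
 * exactly (C99 6.3.1.3p1); larger values are mapped to their negative
 * 2's-complement counterpart using only in-range arithmetic, so no
 * step is implementation-defined or undefined.
 */
intmax_t utoi(uintmax_t u)
{
	if (u <= (uintmax_t)INTMAX_MAX)
		return (intmax_t)u;
	/* u - 2^(N-1) fits in intmax_t; subtracting 2^(N-1) again in
	 * signed arithmetic stays in range and yields the wrapped value. */
	return (intmax_t)(u - (uintmax_t)INTMAX_MAX - 1) - INTMAX_MAX - 1;
}
```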
> I think there should be some "loopholes" to do signed integer arithmetic
> with wraparound, not allowing the compiler to assume there is no
Something like FENV_ACCESS pragmas would be useful. But these are still
not supported by gcc-4.2 or clang.
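One loophole that already works: unsigned arithmetic is required to wrap
modulo 2^N (C99 6.2.5p9), so signed wraparound can be done through
uintmax_t, leaving only the final (implementation-defined, not
undefined) conversion back. gcc also has -fwrapv to make plain signed
overflow wrap. A sketch:

```c
#include <stdint.h>

/*
 * Wraparound signed addition without undefined behaviour: do the sum
 * in uintmax_t, where overflow is defined to wrap, then convert back.
 * The conversion is implementation-defined; on the usual
 * 2's-complement implementations it gives the expected wrapped value.
 */
intmax_t add_wrap(intmax_t a, intmax_t b)
{
	return (intmax_t)((uintmax_t)a + (uintmax_t)b);
}
```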
> While POSIX leaves the behaviour on overflow and division by zero in
> shell arithmetic undefined as with C arithmetic (although it mentions
> the possibility of converting to floating point in case of overflow), I
> prefer that sh(1) not crash.
It should avoid overflow and produce its own implementation-defined result,
without depending on implementation-defined or undefined behaviour in C.
This is easier when someone else is doing it :-).
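For the record, the overflow check itself is straightforward; a sketch
of what a shell's arithmetic evaluator could do for addition (the names
and the saturating policy are hypothetical, not what any sh(1) does):

```c
#include <stdbool.h>
#include <stdint.h>

/* Detect intmax_t addition overflow before performing the addition,
 * using only in-range comparisons, so no C-level overflow ever occurs. */
static bool add_overflows(intmax_t a, intmax_t b)
{
	if (b > 0)
		return a > INTMAX_MAX - b;
	return a < INTMAX_MIN - b;
}

/* The shell's own implementation-defined result on overflow
 * (saturation, as one possible choice). */
intmax_t sh_add(intmax_t a, intmax_t b)
{
	if (add_overflows(a, b))
		return b > 0 ? INTMAX_MAX : INTMAX_MIN;
	return a + b;
}
```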