Version 1.2 of ai05s/ai05-0010-1.txt
!standard 11.6(5) 06-12-14 AI05-0010-1/02
!class Amendment 06-03-21
!status work item 06-03-21
!status received 06-03-03
!priority Medium
!difficulty Hard
!subject Suppressing 11.6 permissions
!summary
(See proposal.)
!problem
The permissions in 11.6 can make it hard to write subprograms that depend on
raising predefined exceptions. For instance,
   function Multiply_Will_Overflow(X, Y: Integer) return Boolean is
      Temp: Integer;
   begin
      Temp := X * Y;
      return False;
   exception
      when Constraint_Error =>
         return True;
   end Multiply_Will_Overflow;
11.6 currently allows the compiler to transform the above into an
unconditional "return False;". A mechanism to inform the compiler that it
must make language-defined checks in all instances would enhance the portability
of such code.
!proposal
[Probably define a pragma which would only be allowed in inner scopes, not as a
configuration pragma or in package specs (to avoid the problems with Unsuppress).
This is an all-or-nothing pragma as well; no piecemeal unsuppression here.
Should the pragma include the effect of Unsuppress(All_Checks) (it would seem
to be at cross-purposes with Suppress)??]
!wording
(** TBD **)
!discussion
Note that Unsuppress does not have this effect. It merely ensures that there is no
outstanding permission to skip needed checks. In this case, we're talking primarily
about checks that the compiler can prove have no effect other than the possibility
of raising an exception.
It's possible to avoid this problem with careful coding; the trick is to avoid
creating an unused result. For instance, the above function could be written as:
   function Multiply_Will_Overflow(X, Y: Integer) return Boolean is
   begin
      return ((X * Y) not in Integer);
   exception
      when Constraint_Error =>
         return True;
   end Multiply_Will_Overflow;
This cannot be altered by 11.6 permissions, as the result of the multiply does
affect the result of the program.
However, it seems unlikely that most Ada programmers would understand the difference
between these two examples. It's better to provide a mechanism to make the compiler
follow more canonical semantics in this case.
!example
   function Multiply_Will_Overflow(X, Y: Integer) return Boolean is
      pragma Clause_11_6_Sucks;
      Temp: Integer;
   begin
      Temp := X * Y;
      return False;
   exception
      when Constraint_Error =>
         return True;
   end Multiply_Will_Overflow;
!ACATS test
Create an ACATS test to check that the above code works.
!appendix
From: Robert A. Duff
Sent: Friday, March 3, 2006 2:47 PM
This comes from a discussion we had at AdaCore about a customer bug report.
Robert suggested that pragma Unsuppress should inhibit the permissions given
in RM-11.6 to do "weird" things in the presence of certain exceptions.
It seems like a good idea to me.
   function Multiply_Will_Overflow(X, Y: Integer) return Boolean is
      pragma Unsuppress(All_Checks);
      Temp: Integer;
   begin
      Temp := X * Y;
      return False;
   exception
      when Constraint_Error =>
         return True;
   end Multiply_Will_Overflow;
11.6 currently allows the compiler to transform the above into an
unconditional "return False;". The suggestion is that the pragma Unsuppress
would negate that permission.
This suggestion is intended for post-Ada-2005-standardization.
****************************************************************
From: Pascal Leroy
Sent: Monday, March 6, 2006 2:39 AM
No, I disagree. The intuitive meaning of Unsuppress is "undo the effect
of Suppress and revert to the normal Ada semantics". It would be very bad
taste to combine these semantics with effects related to 11.6. In
particular, it should be possible to compile a (configuration) pragma
Unsuppress in an empty library without causing any change in the execution
of the program.
I certainly don't like 11.6, and I am sympathetic to the notion of giving
the user control over the permissions stated in this section, but if we
want to do that, let's have a separate pragma Dont_Use_11_6_Damnit. Let's
not mess with the semantics of Unsuppress.
****************************************************************
From: Robert Dewar
Sent: Monday, March 6, 2006 6:16 AM
Well, my thinking (having had over 12 years of
experience in the use of pragma Unsuppress) is that the meaning in
practice is "I am relying on built-in exceptions for this section
of code -- the correct functioning of this code depends on not
suppressing run-time checks".
In this context you always want to suppress 11.6, since you don't
want some idiot compiler overriding this intention, that's especially
true if you take Bob Duff's expansive view of 11.6 (Bob I will let
you comment further on that if you like, based on our recent
discussions of 'Value).
>
> I certainly don't like 11.6, and I am sympathetic to the notion of giving
> the user control over the permissions stated in this section, but if we
> want to do that, let's have a separate pragma Dont_Use_11_6_Damnit. Let's
> not mess with the semantics of Unsuppress.
In practice, if we had such a pragma, it would be used 100% of the
time when pragma Unsuppress was used, and virtually never otherwise.
****************************************************************
From: Robert I. Eachus
Sent: Monday, March 6, 2006 1:04 PM
>In particular, it should be possible to compile a (configuration) pragma
>Unsuppress in an empty library without causing any change in the execution
>of the program.
Nothing in the proposed additional meaning of pragma Unsuppress will
prevent a compiler from acting in just that way--or from the
configuration pragma having the expected result. The reality is that
compiler switches can effectively add a pragma Suppress--or for that
matter a global pragma Unsuppress. Should I be surprised that compiling
a library unit with a global pragma Unsuppress should mimic the effect
of such compiler switches? Of course not. And the rules making pragmas
Suppress and Unsuppress scope based are there just to resolve this issue
correctly. From a programmer's point of view, there are three states
for any check: suppressed, not suppressed and don't care. However,
compilers really only implement two of those. There are usually
compiler switches that can change the default, but there is and should
be no assumption that the default mode is pragma Unsuppress(All_Checks).
Yes, there must be a standard mode, but that standard mode could require
the configuration pragma, or a library containing such a pragma, or more
likely compiler switches to that effect.
The issue, as always, is reasoning from erroneousness. I would much
prefer to find a wording for 11.6 that always disallowed the
optimization in the example. But in two decades neither the ARG nor the
language designers have found a wording for 11.6 that allows acceptable
and necessary optimizations without also allowing some nasty cases to
sneak through. Obviously if the default is otherwise, compiling such a
pragma will have an effect. An unexpected effect? Only if a programmer
doesn't know what the defaults are. As a result, programmers who do
care tend to use the pragmas so that if someone else compiles the code,
they will get the expected result. Since we are adding pragma
Unsuppress to this version of the standard (in 11.5 where it belongs) it
should come as no surprise to anyone that it affects what is allowed in
11.6.
Let me suggest instead that the desired effect can be more narrowly
obtained by adding to 11.6(5): "If an undefined result from a
predefined operation is directly assigned to a scalar, an implementation
must raise Constraint_Error, unless the check has been suppressed."
What does "directly assigned" mean? May not be the best wording, but
the idea is that if a function result or an out parameter is, or
contains, an undefined result that would otherwise raise
Constraint_Error on assignment, fine. But the Multiply_Will_Overflow
example must work. It may be possible to define this more formally in
terms of /read/ and /updated/, but I don't see how. I don't want to
have to add Temp2 := Temp; to the Multiply_Will_Overflow example just to
force the value to be read.
****************************************************************
From: Randy Brukardt
Sent: Monday, March 6, 2006 6:50 PM
> No, I disagree. The intuitive meaning of Unsuppress is "undo the effect
> of Suppress and revert to the normal Ada semantics". It would be very bad
> taste to combine these semantics with effects related to 11.6.
...
Danger, Will Robinson. Danger.
I agree in principle with Robert Dewar's point, but something Robert Eachus
said about there being three states reminded me of something.
It's not just "bad taste" to combine semantics here. It in fact would bring
all of the old and unsolved discussions about Suppress inheritance back to
the front.
As most of you will recall, Unsuppress was stalled over the inheritance of
pragmas issue. That is, the standard says that the effect of Suppress (and
Unsuppress) pragmas is inherited from package specifications to the package
bodies and subunits (see 11.5(8)). But most implementations surveyed did not
in fact do that. There was great concern over requiring implementations to do
such inheritance - it's extra implementation cost, and could surprise users
by suppressing checks that previously were not suppressed - but Unsuppress is not
optional like Suppress.
Tucker found the way out of the impasse by noting that since Unsuppress has
no semantics of its own, in general you cannot tell the difference since not
importing the permission to Suppress is the same as Unsuppressing it.
But if you give actual semantics to Unsuppress, then the argument arises
again. For instance:
   package Useful_Stuff is
      function Multiply_Will_Overflow(X, Y: Integer) return Boolean;
      pragma Unsuppress(All_Checks);
   end Useful_Stuff;
   package body Useful_Stuff is
      function Multiply_Will_Overflow(X, Y: Integer) return Boolean is
         Temp: Integer;
      begin
         -- Better not use 11.6 here!!
         Temp := X * Y;
         return False;
      exception
         when Constraint_Error =>
            return True;
      end Multiply_Will_Overflow;
   end Useful_Stuff;
You wouldn't be able to use 11.6 in the package body, because the Unsuppress
is inherited.
This isn't the end of the world in this case. An implementation could meet
this requirement by *never* using 11.6, as well as by implementing
inheritance. But that seems to be a draconian solution.
My real question is whether this is fixing a real problem, or just one that
*might* happen. In other words, do "idiot compilers" (to use Robert Dewar's
term) really exist? Just because the language allows some optimization
doesn't mean you have to use it! It seems really unfriendly to remove
exception-raising code (statically) inside of a handler for that exception.
Moreover, nothing prevents GNAT or any other compiler from taking the tack that
Robert is suggesting. Surely, there is no requirement in the language to use
11.6, so toning down optimizations in the face of an Unsuppress pragma
(which indicates the desire for exceptions to be raised) seems reasonable.
In any case, I'd prefer the solution of fixing 11.6 suggested by Robert
Eachus. The problem doesn't come up with his suggested wording (either the
check is suppressed or it is not). But wordsmithing that could be a lot of
fun.
So, all-in-all, I think I'd prefer to not kick this sleeping dog.
****************************************************************
From: Tucker Taft
Sent: Monday, March 6, 2006 7:39 PM
Good points, Randy. Unsuppress simply means
erase the suppress. It doesn't mean go beyond
that and if it did, its semantics would be much
more difficult to get right.
One thing worth pointing out
is that the goal of this part of 11.6 was to
allow the removal of dead stores, even if the
dead store happened to involve some operations
that might overflow. There are plenty of compilers
that remove dead stores as a matter of course.
If that was inhibited by the possibility that
one of the predefined checks within the dead store
might fail, removing them would be a whole lot
harder.
Is it really that common that the failure of
a predefined exception is used to answer
some interesting question, without there being
any interest in the result of the operation that failed?
That seems like a corner case, certainly not something
to build into the core meaning of "unsuppress."
I see using "unsuppress" to implement something like
saturation arithmetic, where you want the answer when
it doesn't overflow, and you want the exception when
it does overflow. Suppressing the overflow check would
obviously defeat saturation arithmetic. But there
is generally no dead store in such a routine. You
compute and store/return the result, and if it overflows, catch
the exception and return 'Last or 'First as appropriate.
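Such a routine might look like this (a minimal sketch;
the name Saturating_Add and the body are illustrative,
not code from this discussion):

   function Saturating_Add (A, B : Integer) return Integer is
      pragma Unsuppress (All_Checks);
   begin
      return A + B;  -- the result is used, so 11.6 cannot discard it
   exception
      when Constraint_Error =>
         if A > 0 then
            return Integer'Last;   -- positive overflow saturates high
         else
            return Integer'First;  -- negative overflow saturates low
         end if;
   end Saturating_Add;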
****************************************************************
From: Robert A. Duff
Sent: Wednesday, March 8, 2006 2:25 PM
> > No, I disagree. The intuitive meaning of Unsuppress is "undo the effect
> > of Suppress and revert to the normal Ada semantics".
I agree, except that 11.6 is not included in any programmer's intuition.
(Which is why I share Pascal's dislike for 11.6 in the first place.)
That is, Unsuppress means "I intend to rely on language-defined checks raising
exceptions." 11.6 means "Do not rely on handling those exceptions" (or at
least be very, very careful). A contradiction, in practice.
> In this context you always want to suppress 11.6, since you don't
> want some idiot compiler overriding this intention, that's especially
> true if you take Bob Duff's expansive view of 11.6 (Bob I will let
> you comment further on that if you like, based on our recent
> discussions of 'Value).
My view of what 11.6 is intended to say is more expansive than Robert's.
My view of what it _ought_ to say is less expansive. ;-)
Robert and I both agree that 11.6 is intended to mean that the compiler can do
the unexpected for the Multiply_Will_Overflow example. (Unexpected by typical
programmers!) What Robert is referring to is that we disagreed on this
example:
   function Has_Integer_Syntax(X: String) return Boolean is
      Temp: Integer;
   begin
      Temp := Integer'Value(X);
      return True;
   exception
      when Constraint_Error =>
         return False;
   end Has_Integer_Syntax;
This is entirely analogous, but we're using 'Value instead of multiplication.
Robert's position is that 11.6 is obviously not intended to apply to the
'Value example. My position is that I'm not so sure -- I think there's
evidence in the AARM that it _is_ intended to apply.
I'm fairly happy to leave the 'Value issue moot, because in practice, the
compiler will not use 11.6 here. There's little payoff in terms of speed, and
'Value is likely a runtime routine, so the compiler would really have to go
out of its way to "optimize".
The Has_Integer_Syntax function is essentially equivalent to actual code that
a customer told us they were going to use.
> > I certainly don't like 11.6, and I am sympathetic to the notion of giving
> > the user control over the permissions stated in this section, but if we
> > want to do that, let's have a separate pragma Dont_Use_11_6_Damnit. Let's
> > not mess with the semantics of Unsuppress.
>
> In practice, if we had such a pragma, it would be used 100% of the
> time when pragma Unsuppress was used, and virtually never otherwise.
In fact, I'd say "virtually never" should be "0.0%", because if you're not
relying on predefined checks, the 11.6 rules in question have no semantic
effect anyway!
****************************************************************
From: Robert A. Duff
Sent: Wednesday, March 8, 2006 2:37 PM
> Good points, Randy. Unsuppress simply means
> erase the suppress.
That's certainly what the RM currently says _now_.
But that's not an argument proving that's what it _should_ say. ;-)
>...It doesn't mean go beyond
> that and if it did, its semantics would be much
> more difficult to get right.
I meant to say in my original note that the proposed rule should probably be
Implementation Advice.
I don't think the rule is difficult to get right, but if it is,
that's OK for Impl Advice.
> One thing worth pointing out
> is that the goal of this part of 11.6 was to
> allow the removal of dead stores, even if the
> dead store happened to involve some operations
> that might overflow.
That's part of the goal. I think instruction scheduling is also part of it --
for example, deferring stores to later in time.
>...There are plenty of compilers
> that remove dead stores as a matter of course.
> If that was inhibited by the possibility that
> one of the predefined checks within the dead store
> might fail, removing them would be a whole lot
> harder.
I agree. But I think that's OK. We're talking about a small region of code
where Unsuppress is used -- it's fine if that code is less efficient than it
might otherwise be. Robert and I discussed implementing this by telling the
back end that everything in that region is pragma-volatile.
> Is it really that common that the failure of
> a predefined exception is used to answer
> some interesting question, without there being
> any interest in the result of the operation that failed?
> That seems like a corner case, certainly not something
> to build into the core meaning of "unsuppress."
>
> I see using "unsuppress" to implement something like
> saturation arithmetic, where you want the answer when
> it doesn't overflow, and you want the exception when
> it does overflow. Suppressing the overflow check would
> obviously defeat saturation arithmetic. But there
> is generally no dead store in such a routine. You
> compute and store/return the result, and if it overflows, catch
> the exception and return 'Last or 'First as appropriate.
That's certainly how I would code it. But we have a customer who wanted to do
otherwise (for the 'Value case -- see example in my other mail). And I have
definitely heard the Multiply_Will_Overflow kind of example more than once in
the past, with programmers believing it should work. I agree that the
Multiply_Will_Overflow thing _should_ be coded so it returns the product,
as well as an indication of whether it overflowed.
****************************************************************
From: Tucker Taft
Sent: Wednesday, March 8, 2006 3:17 PM
> I meant to say in my original note that the proposed rule should probably be
> Implementation Advice.
>
> I don't think the rule is difficult to get right, but if it is,
> that's OK for Impl Advice.
This doesn't really seem to help, if the goal is to figure
out a way to write something like your 'Value example
portably. You don't want to have to read every vendor's
documentation relating to Impl. Advice to know whether
what you (or some long-forgotten programmer) have written
will be portable.
Why not just add a new pragma for this case? Or a new
restriction-name?
   pragma Restrictions(No_11_6_Permissions); ;-)
****************************************************************
From: Randy Brukardt
Sent: Wednesday, March 8, 2006 5:43 PM
...
> That is, Unsuppress means "I intend to rely on language-defined checks raising
> exceptions."
That's not what Unsuppress means. It's just one of several ways that it
could be used (admittedly the most common).
...
> What Robert is referring to is that we disagreed on this example:
>
> function Has_Integer_Syntax(X: String) return Boolean is
>    Temp: Integer;
> begin
>    Temp := Integer'Value(X);
>    return True;
> exception
>    when Constraint_Error =>
>       return False;
> end Has_Integer_Syntax;
>
> This is entirely analogous, but we're using 'Value instead of multiplication.
> Robert's position is that 11.6 is obviously not intended to apply to the
> 'Value example. My position is that I'm not so sure -- I think there's
> evidence in the AARM that it _is_ intended to apply.
I'm pretty certain that you are correct, and I don't think it is necessary
to depend on the AARM. 11.6 depends totally on the definition of
"language-defined check", which is found in 11.5(2). 11.5(2) is quite clear
about what a language-defined check is "...one of the situations defined by
this International Standard that requires a check to be made at run
time...", and that certainly includes attributes, conversions, and the like.
The grey area is whether checks in the predefined library (of which 'Value
is *not* a part) are "language-defined checks". I.e., are the Program_Errors
raised in Ada.Containers.Vectors language-defined checks? The AARM implies
that they are not (they are not indexed as such, while attributes and the
like are included), but a strict reading of 11.5(2) would imply that they
are. I don't think that they were intended to be, but this is a point I'm
happy to leave moot as I doubt that compilers will be wanting to work that
hard anytime soon. (Note: This is a lot more "expansive" than what Bob was
arguing.)
> I'm fairly happy to leave the 'Value issue moot, because in practice, the
> compiler will not use 11.6 here. There's little payoff in terms of speed, and
> 'Value is likely a runtime routine, so the compiler would really have to go
> out of its way to "optimize".
Well, I don't think it needs to be moot; Robert is wrong. :-) His position
is not supported by the RM, so there is nothing to leave undecided.
And I don't see why you say there is "little payoff in speed". The multiply
check is two instructions on typical machines; hardly worth worrying about
in most situations. The Value check is likely to be hundreds of
instructions. If these two routines are used in the same way, certainly the
Value optimization is more useful.
Now, I realize that many optimizers can't distinguish 'Value and other
runtime calls from ordinary user calls (which can't have this sort
of optimization difference). But surely it is possible for them to have that
information (the Ada portion of the Janus/Ada compiler certainly does have
it, although we make a conscious decision not to use 11.6 for the most part
[because the speed gains aren't sufficient for the disruption in the visible
semantics of the program]), and in that case, it could make sense to use it.
> I meant to say in my original note that the proposed rule should probably be
> Implementation Advice.
To which Tucker replied:
> This doesn't really seem to help, if the goal is to figure
> out a way to write something like your 'Value example portably.
I agree. And if that's not your goal, why bother? Most compilers won't go
overboard in using these permissions anyway. So it's just as likely that the
code will work fine on some implementation; and if not, the vendor almost
certainly has provided some implementation-defined way to control
optimization in the problematic code. So unless you can rely on it
everywhere, it really isn't of much value.
The only reason I can see for using IA here is if we don't know exactly what
we mean (in normative terms). But I don't see any problem with that: what
part of "don't use 11.6!" is unclear?? It's 11.6 that's hard to understand
normatively.
> I don't think the rule is difficult to get right, but if it is,
> that's OK for Impl Advice.
The rule is easy to get right. The problem is that doing so is incompatible
with the majority of existing Ada implementations. And this is a very bad
incompatibility, as it can cause checks to be suppressed in regions where
they previously were not suppressed. In order to avoid that, implementations
have to work very hard indeed. The ARG was not willing to make that
requirement for Unsuppress (and that problem, essentially alone, delayed the
completion of AI-224 for two years), and I can't imagine why that has
changed. I realize you didn't sit through the interminable arguments on this
topic, so they aren't seared into your brain, but please realize they exist.
(Better, get AdaCore to start sending you to meetings so you'll know like
the rest of us...)
...
> We're talking about a small region of code
> where Unsuppress is used -- it's fine if that code is less efficient than it
> might otherwise be. ...
This is the crux of the problem. You're talking about *one specific way* to
use Unsuppress. There are other ways to use Unsuppress that aren't so
limited to a piece of code.
To take an example. We assumed that Claw would be compiled with checking on.
If it is not, various things won't work. It's too late (in a practical
sense) to try to find every such place and put an Unsuppress nearby (and it
isn't clear that it is practical anyway). An easier solution would be to put
an Unsuppress into every specification as protection against users compiling
with Suppress command-line options. Surely we don't want the 100,000+ lines
of Claw being compiled with virtually no optimization because we did that
sort of protection - we just want all of the exceptions raised.
Anyway, it's always important to remember that users are clever and come up
with lots of ways to use things, and that arguments based on a particular
usage pattern are pretty weak.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, March 8, 2006 5:57 PM
> ...
> > That is, Unsuppress means "I intend to rely on language-defined checks raising
> > exceptions."
>
> That's not what Unsuppress means. It's just one of several ways that it
> could be used (admittedly the most common).
I should have been more clear about what I meant here. Unsuppress means
"make the language-defined checks". It doesn't imply any value judgement as
to why those checks need to be made, or any "reliance" on them.
For instance, a secure application might want assurances that the checks are
made so that a bug (like a buffer overflow) cannot be easily leveraged into
a security breach. Such an application may not care much about the exception
being raised, just that the language-defined check isn't ignored (unless the
compiler can prove that it cannot fail, of course).
Such an application could use Unsuppress pragmas to cover the entirety of
the program.
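A minimal sketch of that usage (the package and its contents are
illustrative only):

   package Secure_Buffer is
      pragma Unsuppress (All_Checks);
      -- Make sure the language-defined checks in this unit are
      -- performed even if checks are suppressed by default.
      function Element (Data : String; Index : Positive) return Character;
   end Secure_Buffer;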
****************************************************************
From: Robert A. Duff
Sent: Wednesday, March 8, 2006 7:08 PM
> And I don't see why you say there is "little payoff in speed". The multiply
> check is two instructions on typical machines; hardly worth worrying about
> in most situations. The Value check is likely to be hundreds of
> instructions.
Right, but what matters is the fraction of total time for an operation that
can be saved by eliminating the checks. 'Value probably takes longer than
multiplication...
>...I realize you didn't sit through the interminable arguments on this
> topic, so they aren't seared into your brain, but please realize they exist.
I followed the e-mail discussions, though. I do realize there are thorny
issues related to Unsuppress.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, March 8, 2006 9:18 PM
> Right, but what matters is the fraction of total time for an operation that
> can be saved by eliminating the checks. 'Value probably takes longer than
> multiplication...
Sure, but we're talking about 11.6 cases where the operation is only being
executed for its checks. Once the checks are eliminated, the operation's
result isn't used at all, so the *entire* operation is going to be
eliminated, not just the checks. (I doubt optimizers leave in useless
operations for the fun of it.) As you say, 'Value probably takes longer than
multiplication, so the net gain is greater (all else being equal, which of
course it rarely is).
****************************************************************
From: Robert Dewar
Sent: Thursday, March 9, 2006 9:12 AM
>> That is, Unsuppress means "I intend to rely on language-defined checks raising
>> exceptions."
>
> That's not what Unsuppress means. It's just one of several ways that it
> could be used (admittedly the most common).
It is the only way we have used it and seen it used. Remember that
GNAT has implemented Unsuppress for 12 years.
****************************************************************
From: Robert Dewar
Sent: Thursday, March 9, 2006 9:20 AM
> Such an application could use Unsuppress pragmas to cover the entirety of
> the program.
But in practice such a program would never be compiled with checks
suppressed, and this would be an elementary requirement of such
types of programs, so I think this is only a language lawyer
theoretical argument, not one that is real in practice.
Actually, in secure programming, checks *are* generally suppressed,
since you have proved they cannot occur (see the Praxis site for
discussions and examples).
And indeed, I would think that even if your scenario *were*
realistic, this is the kind of program where you would not
want 11.6 type optimization (or any other behavior altering
optimization) to occur.
****************************************************************
From: Robert Dewar
Sent: Thursday, March 9, 2006 11:03 AM
> Good points, Randy. Unsuppress simply means
> erase the suppress. It doesn't mean go beyond
> that and if it did, its semantics would be much
> more difficult to get right.
That's certainly true; at most IA would be appropriate.
Note that in my opinion, from a practical portability
point of view, IA is just as strong as a real requirement,
sometimes stronger.
After all, no one is formally validating these days
(may change, but we will see), so the whole normative
standard is not much more than IA. It is market forces
that determine portability compliance, not formal rules.
> One thing worth pointing out
> is that the goal of this part of 11.6 was to
> allow the removal of dead stores, even if the
> dead store happened to involve some operations
> that might overflow. There are plenty of compilers
> that remove dead stores as a matter of course.
> If that was inhibited by the possibility that
> one of the predefined checks within the dead store
> might fail, removing them would be a whole lot
> harder.
Well I think that 11.6 is about much more than just
elimination of dead stores, it is about the whole
issue of allowing full code motion and scheduling
of instructions without worrying about exceptions.
In particular, if an exception occurs, we do not
want to guarantee that all "previous" stores have
occurred, since we want to be able to delay stores.
> Is it really that common that the failure of
> a predefined exception is used to answer
> some interesting question, without there being
> any interest in the result of the operation that failed?
> That seems like a corner case, certainly not something
> to build into the core meaning of "unsuppress."
Sure it is reasonably common ...
> I see using "unsuppress" to implement something like
> saturation arithmetic, where you want the answer when
> it doesn't overflow, and you want the exception when
> it does overflow.
I have never seen it used this way. The normal use for
us is something like Ada.Calendar to ensure that we get
overflow checking for time operations (normally the Ada
run time is compiled with checks off). There are
approximately 100 similar uses in the GNAT run time.
****************************************************************
From: Tucker Taft
Sent: Thursday, March 9, 2006 11:29 AM
>> I see using "unsuppress" to implement something like
>> saturation arithmetic, where you want the answer when
>> it doesn't overflow, and you want the exception when
>> it does overflow.
>
> I have never seen it used this way.
I'm confused now. Your answer below says you use it for parts of
your runtime where you want to raise exceptions if the
time arithmetic overflows. If it doesn't raise an exception,
don't you want to give the correct answer?
> ... The normal use for
> us is something like Ada.Calendar to ensure that we get
> overflow checking for time operations (normally the Ada
> run time is compiled with checks off). There are
> approximately 100 similar uses in the GNAT run time.
Can you explain how this example doesn't match my "typical" usage
scenario? I must be missing something. I agree it isn't
saturation arithmetic, but it seems to be a case where you
return the result of an operation, and you want an exception
if that result involved an overflow or other check failure.
I would think you aren't just doing the operation for the "side-effect"
of a predefined exception. I presume you are using the result
of the operation if it doesn't raise an exception.
****************************************************************
From: Robert Dewar
Sent: Thursday, March 9, 2006 2:44 PM
> I'm confused now. Your answer below says you use it for parts of
> your runtime where you want to raise exceptions if the
> time arithmetic overflows. If it doesn't raise an exception,
> don't you want to give the correct answer?
Yes indeed. It is not clear whether 11.6 could conceivably affect
our coding in the GNAT run time; clearly we don't know of any such
bugs.
> > ... The normal use for
>> us is something like Ada.Calendar to ensure that we get
>> overflow checking for time operations (normally the Ada
>> run time is compiled with checks off). There are
>> approximately 100 similar uses in the GNAT run time.
>
> Can you explain how this example doesn't match my "typical" usage
> scenario?
Because you seemed to think that it was for defending against bugs,
but we use it in the run time for a completely different purpose,
which is to give proper semantics (e.g. raise an exception) when
it is required. No error conditions are involved.
We know not to use 11.6 sensitive constructs in the run time or
anywhere else, but we are not normal Ada programmers.
Try writing the function
   function Add_Would_Overflow (A, B : Integer) return Boolean;
   -- Returns True if addition of A and B would cause overflow
A perfectly reasonable function to want to write, but quite
difficult to write in Ada. Any suggestions on how to write it
reliably?
****************************************************************
From: Tucker Taft
Sent: Thursday, March 9, 2006 3:41 PM
> Because you seemed to think that it was for defending against bugs,
> but we use it in the run time for a completely different purpose,
> which is to give proper semantics (e.g. raise an exception) when
> it is required. No error conditions are involved.
I actually wasn't trying to make that distinction. I was
simply saying that 11.6 won't give you any trouble if
you use the result of the operation. It is only a problem
if you don't use the result, but instead are only trying to
determine whether or not an exception would be raised.
> We know not to use 11.6 sensitive constructs in the run time or
> anywhere else, but we are not normal Ada programmers.
>
> Try writing the function
>
> function Add_Would_Overflow (A, B : Integer) return Boolean;
> -- Returns True if addition of A and B would cause overflow
>
> A perfectly reasonable function to want to write, but quite
> difficult to write in Ada, any suggestions on how to write this
> reliably?
   function Add_Would_Overflow(A, B : Integer) return Boolean is
      pragma Unsuppress(All_Checks);
   begin
      return A + B not in Integer'Range;
   exception
      when others => return True;
   end Add_Would_Overflow;
I have also written such functions many times as part of implementing
Ada semantics on top of a machine or IL that doesn't detect
overflow. Such an approach typically goes something like this:
   if A > 0 and then B > 0 then
      return Integer'Last - A < B;
   elsif A < 0 and then B < 0 then
      return Integer'First - A > B;
   else
      return False;
   end if;
Both of the above should work portably, I believe.
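For instance, either version should satisfy this quick usage
sketch (using the Ada 2005 pragma Assert; the operands are
illustrative):

   pragma Assert (Add_Would_Overflow (Integer'Last, 1));
   pragma Assert (not Add_Would_Overflow (1, 2));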
****************************************************************
From: Tucker Taft
Sent: Thursday, March 9, 2006 4:01 PM
By the way, the example code I gave can be used as a general
pattern, namely taking the value of some operation that might
throw an exception, and checking to see if it is in the base
range of the result type. 11.6 is written specifically to make this
approach meaningful, since it says:
... the implementation shall not presume that an undefined
value is within its subtype, nor even within the base
range of its type, if scalar.
Hence, "return Integer'Value(Str) in Integer'Range;" can
be used to test whether Str meets the syntax requirements
of Integer'Value.
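As a complete function, that pattern might look like this
(a minimal sketch; the name Valid_Integer is illustrative):

   function Valid_Integer (Str : String) return Boolean is
      pragma Unsuppress (All_Checks);
   begin
      -- The membership test uses the value, so the 11.6
      -- permissions do not apply; 'Value itself raises
      -- Constraint_Error if Str has the wrong syntax.
      return Integer'Value (Str) in Integer'Range;
   exception
      when Constraint_Error =>
         return False;
   end Valid_Integer;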
****************************************************************
From: Randy Brukardt
Sent: Thursday, March 9, 2006 4:05 PM
> It is the only way we have used it and seen it used. Remember that
> GNAT has implemented Unsuppress for 12 years.
Fine, but we intend to use it in Claw as a preventative against users
compiling it with checks suppressed. (While the written documentation tells
people not to do that, they don't always read it carefully...) The less
weird bugs that are caused by compiler options, the better. We didn't do
that in the past because Unsuppress was pretty much only in GNAT, but that's
changing rapidly.
****************************************************************
From: Robert A. Duff
Sent: Thursday, March 9, 2006 4:13 PM
> > Try writing the function
> > function Add_Would_Overflow (A, B : Integer) return Boolean;
> > -- Returns True if addition of A and B would cause overflow
> > A perfectly reasonable function to want to write, but quite
> > difficult to write in Ada, any suggestions on how to write this
> > reliably?
The question is not whether a language lawyer can write it.
The question is whether typical attempts by naive programmers
work as expected (as expected by naive programmers, I mean).
Tuck's point is that if you care whether A+B overflows, you probably care
about the answer in the case where it does not. Quite true. But many
programming texts advise that every exceptional case should be matched by a
query, so you can write:
   if not Is_Full(Data_Structure) then
      Add_Element(Data_Structure, Blah);
   else
      Deal_With_It;
   end if;
instead of:
   begin
      Add_Element(Data_Structure, Blah);
   exception
      when Data_Structure_Is_Full =>
         Deal_With_It;
   end;
I don't entirely agree with that philosophy, but it can certainly lead to code
like:
   if Add_Would_Overflow(A, B) then
      Deal_With_It;
   else
      ... A + B ...
   end if;
Anyway, Ada doesn't have Blah_Will_Overflow or Blah_Obeys_'Value_Rules
primitives, and some folks might try to patch that up in their code.
> function Add_Would_Overflow(A, B : Integer) return Boolean is
>    pragma Unsuppress(All_Checks);
> begin
>    return A + B not in Integer'Range;
> exception
>    when others => return True;
others --> Constraint_Error, please! ;-)
> end Add_Would_Overflow;
>
> I have also written such functions many times as part of implementing
> Ada semantics on top of a machine or IL that doesn't detect
> overflow. Such an approach typically goes something like this:
>
>    if A > 0 and then B > 0 then
>       return Integer'Last - A < B;
>    elsif A < 0 and then B < 0 then
>       return Integer'First - A > B;
>    else
>       return False;
>    end if;
>
> Both of the above should work portably I believe.
I agree that the above are correct, even in the presence of 11.6.
How about this:
   type Modular is mod 2**Integer'Size;
   function S_To_U is new Unchecked_Conversion(Integer, Modular);
   function U_To_S is new Unchecked_Conversion(Modular, Integer);

   function Add_Would_Overflow(A, B : Integer) return Boolean is
      R: constant Integer := U_To_S(S_To_U(A) + S_To_U(B));
   begin
      -- Overflow can occur only when A and B have the same sign,
      -- and then it has occurred exactly when R's sign differs.
      return (A < 0) = (B < 0) and then ((R < 0) /= (A < 0));
   end Add_Would_Overflow;
****************************************************************
From: Robert Dewar
Sent: Thursday, March 9, 2006 5:25 PM
> function Add_Would_Overflow(A, B : Integer) return Boolean is
>    pragma Unsuppress(All_Checks);
> begin
>    return A + B not in Integer'Range;
> exception
>    when others => return True;
> end Add_Would_Overflow;
Yes, I know this works, but are you really comfortable having to
explain why this is necessary and works, and the "obvious"
   function Add_Would_Overflow (A, B : Integer) return Boolean is
      pragma Unsuppress (All_Checks);
      X : Integer;
   begin
      X := A + B;
      return False;
   exception
      when others => return True;
   end;
mysteriously malfunctions?
I believe that nearly all Ada programmers would write the latter rather
than the former and expect it to work.
And their reaction to the explanation of why it does not work would be
something like "I thought Ada was a safe language" ...
****************************************************************
From: Robert Dewar
Sent: Thursday, March 9, 2006 5:28 PM
Maybe the proper suggestion is
pragma Strict_Exception_Semantics;
which cancels 11.6, and subsumes Unsuppress (All_Checks);
In fact I don't think that gcc really takes advantage
of 11.6, since the checks are generally explicit, but
it does take unpleasant advantage of optimizing invalid
values, something we are working on.
By the way, we found that
   subtype R is Integer range 1 .. 10;
   X : R;
   ...
   if X in R then
being optimized to True was a major menace in legacy Ada 83
code. We changed GNAT to specially recognize this construct,
advise the use of 'Valid instead, and transform the statement
silently into if X'Valid.
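For reference, the rewritten test (a sketch continuing the
fragment above):

   if X'Valid then   -- tests the actual representation of X
      ...
   end if;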
Mind you, gcc 4.1 initially was clever enough to optimize away
the 'Valid test based on in-range assumptions :-) :-)
****************************************************************
From: Erhard Ploedereder
Sent: Friday, March 10, 2006 5:29 AM
> The question is not whether a language lawyer can write it.
> The question is whether typical attempts by naive programmers
> work as expected (as expected by naive programmers, I mean).
Actually, the general rule is quite simple that you can give
to naive programmers:
   Do not compute a value that you do not then use in any way
   (because the compiler might optimize your use-less code out
   of existence).
This is a rule that you better give programmers in general,
not just Ada programmers.
Along the same lines, naive programmers might be surprised
to find that the code for
   A := B + Five * Six - Eight;
does not contain a multiplication, given that Five, Six, and
Eight are constants. Yet we are not talking about this either.
And, good god, this computation didn't overflow when B > Integer'Last -
25. Sacrilege! Can't I rely on anything?
Definitely naive doctors ought not doctor.
Definitely naive pilots ought not fly.
Maybe naive programmers ought not program.
;-)
****************************************************************
From: Robert Dewar
Sent: Friday, March 10, 2006 5:51 AM
> Actually, the general rule is quite simple that you can give
> to naive programmers:
> Do not compute a value that you do not then use in any way
> (because the compiler might optimize your use-less code out
> of existence).
But Tuck's proposed coding for my predicate function violates
this rule. Pray tell how to write that function without
violating this rule?
Calling a function and discarding the result is something
that everyone does all the time. Is that also forbidden
by your rule?
> This is a rule that you better give programmers in general,
> not just Ada programmers.
I strongly disagree, see above.
> Along the same lines, naive programmers might be surprised
> to find that the code for
> A := B + Five * Six - Eight;
> does not contain a multiplication, given that Five, Six, and
> Eight are constants. Yet we are not talking about this either.
I reject the analogy
> And, good god, this computation didn't overflow when B > Integer'Last -
> 25. Sacrilege! Can't I rely on anything?
I think you mean - 30 here rather than -25, and indeed this
is a very nasty source of non-portability in practice.
> Definitely naive doctors ought not doctor.
> Definitely naive pilots ought not fly.
> Maybe naive programmers ought not program.
Well if you think this, then I am afraid Ada will never succeed,
since most programmers are indeed naive when it comes to subtle
language lawyer stuff like this.
> ;-)
I sometimes have to remind people at AdaCore that if all our
customers knew Ada as well as we did, we might not be in business :-)
****************************************************************
From: Erhard Ploedereder
Sent: Friday, March 10, 2006 6:51 AM
> But Tuck's proposed coding for my predicate function violates
> this rule. Pray tell how to write that function without
> violating this rule?
????
   return A + B not in Integer'Range;
Clearly this is a use of the result of A + B. The same is true
for the other variants that Tuck proposed; no use-less code
there; therefore perfectly fine.
> Calling a function and discarding the result is something
> that everyone does all the time. Is that also forbidden
> by your rule?
Well, as always, rules need to be tempered by pragmatics, but
it certainly is a code smell, even if "it's done all the time".
(Exaggerating your point, if it is done all the time for a
given function, the function definitely should be a proc.)
And, some style guides impose the "no side-effects from functions"
rule. In that case, my rule is indeed an IMPORTANT rule, because
the only effect of the function call then can be raising an exception
or using up cycles, neither of which uses of a function is laudable
behaviour by the programmer. It's just not a smell, it stinks to
high heaven to discard the result of a function that has no
side-effects.
****************************************************************
From: Robert A. Duff
Sent: Friday, March 10, 2006 9:40 AM
> But Tuck's proposed coding for my predicate function violates
> this rule.
No -- Tuck's function uses the result of the add (compares it against the
bounds). So 11.6 does not apply. The example that runs afoul of
11.6 is:
   function Add_Would_Overflow(A, B : Integer) return Boolean is
      pragma Unsuppress(All_Checks);
      Temp : Integer;
   begin
      Temp := A + B; -- value of Temp is not used
      return False;
   exception
      when others => return True;
   end Add_Would_Overflow;
However, I now realize that 11.6 isn't the only problem with this example.
An implementation is always allowed to get the right answer on overflow,
rather than raising C_E. So this:
   X: Integer := Integer'Last;
   if not Add_Would_Overflow(X, 1) then
      X := X + 1; --(*)
   end if;
might raise C_E on the addition at --(*), but not inside Add_Would_Overflow.
Sigh...
>... Pray tell how to write that function without
> violating this rule?
>
> Calling a function and discarding the result is something
> that everyone does all the time. Is that also forbidden
> by your rule?
> > This is a rule that you better give programmers in general,
> > not just Ada programmers.
>
> I strongly disagree, see above.
>
> > Along the same lines, naive programmers might be surprised
> > to find that the code for
> > A := B + Five * Six - Eight;
> > does not contain a multiplication, given that Five, Six, and
> > Eight are constants. Yet we are not talking about this either.
>
> I reject the analogy
Me, too. There's a huge difference between optimizations that properly
preserve the expected semantics, and the 11.6 permissions.
In other words, I don't care if programmers are surprised by the above -- it
won't cause them to write bugs.
> > And, good god, this computation didn't overflow when B > Integer'Last -
> > 25. Sacrilege! Can't I rely on anything?
>
> I think you mean - 30 hee rather than -25, and indeed this
> is a very nasty source of non-portabilityin practice.
Please explain this example -- with -30 or -25.
I don't understand the point.
Boy, this issue is _much_ more controversial than I expected! ;-)
****************************************************************
From: Tucker Taft
Sent: Friday, March 10, 2006 10:12 AM
> ... The example that runs afoul of 11.6 is:
>
> function Add_Would_Overflow(A, B : Integer) return Boolean is
>    pragma Unsuppress(All_Checks);
>    Temp : Integer;
> begin
>    Temp := A + B; -- value of Temp is not used
>    return False;
> exception
>    when others => return True;
> end Add_Would_Overflow;
>
> However, I now realize that 11.6 isn't the only problem with this example.
> An implementation is always allowed to get the right answer on overflow,
> rather than raising C_E.
True, but "Temp : Integer" is equivalent to
   Temp : Integer range Integer'First .. Integer'Last;
so the assignment should raise C_E, even if the computation
doesn't overflow. On the other hand, if they wrote:
   Temp : Integer'Base;
begin
   Temp := A + B;
then the assignment need not raise C_E either.
It is the fact that the assignment is dead that eliminates the
requirement for raising C_E.
In my view, we just need to get the message out about
"blah [not] in mytype'Range" as the right way to write
such functions. It really isn't that hard, and is
much clearer in my view for the reader about what
the programmer is trying to test for.
I suppose another way to do it is to use 'Valid, e.g.:
   Temp := A + B;
   return not Temp'Valid;
exception
   when Constraint_Error => return True;
but the "not in Integer'Range" seems a bit clearer.
****************************************************************
From: Erhard Ploedereder
Sent: Friday, March 10, 2006 10:13 AM
All this discussion has centered around the canonical example of the
"does it overflow?" function.
Does anybody have any other good examples from real code?
About 10 years ago, we did a small study on the use of exceptions in Ada
code available in the public domain, and the results were the exact opposite
of the claim that this style of programming (where the raising of
implicit exceptions realizes an intended non-fault semantics) is frequent
enough to warrant any attention.
I vaguely remember that the study found 3 examples; in all these cases, it
involved results that were used later on, i.e. not subject to 11.6
considerations of dead code elimination. I don't remember how many lines of
code were examined, however. (I remember the study, because one of the 3
examples made me eat crow, since I had written that code.)
****************************************************************
From: Robert Dewar
Sent: Friday, March 10, 2006 11:00 AM
Not sure that I understand the parenthetical remark.
Suppose someone writes a loop that goes through a list
and expects an access exception to terminate the loop
in some cases, would this qualify as an example? (I have
seen this style of programming fairly frequently.)
****************************************************************
From: Robert Dewar
Sent: Friday, March 10, 2006 11:03 AM
By the way, the current thread stems from a customer wanting to write
a Valid_Integer function for a String to tell whether it represents
an integer, and using a 'Value attribute for this test. So it is not
some bogus concern made up out of the blue. I thought the function
was fine, till Bob suggested that 11.6 could apply even to 'Value.
****************************************************************
From: Robert A. Duff
Sent: Friday, March 10, 2006 11:21 AM
> All this discussion has centered around the canonical example of the
> "does it overflow?" function.
>
> Does anybody have any other good examples from real code?
The only two "real" examples I know of are the "does it overflow?" thing you
mentioned above, and the 'Value example I mentioned earlier. The 'Value
example is what triggered this thread in the first place. However, not
everybody agrees that the 'Value example is a problem in theory, and even
fewer (nobody?) think it's a problem in practice.
It's not hard to _construct_ such examples, though. Just find one of the
myriad language defined checks in the RM. E.g.:
   begin
      Temp := Blah.all; -- Temp never mentioned elsewhere.
   exception
      when Constraint_Error =>
         return ...;
   end;

   if ... Blah.all ... then -- could raise C_E, according to 11.6
In any case, I believe whatever "real" examples there are must be rare.
****************************************************************
From: Erhard Ploedereder
Sent: Friday, March 10, 2006 12:28 PM
> Please explain this example -- with -30 or -25.
> I don't understand the point.
> > A := B + Five * Six - Eight;
Left-to-right evaluation of "B + Five * Six" can overflow, while
"-Eight" brings it back in range. Do constant folding and no exception
happens. Don't constant-fold and for sufficiently large B you get the
exception. The -30 vs -25 is merely the difference between a necessary
and a sufficient condition.
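A worked instance (the constant values are illustrative):

   declare
      Five  : constant Integer := 5;
      Six   : constant Integer := 6;
      Eight : constant Integer := 8;
      B : Integer := Integer'Last - 25;
      A : Integer;
   begin
      -- Left to right, B + Five * Six computes Integer'Last + 5
      -- and overflows; folded to B + 22, the result Integer'Last - 3
      -- is in range and no exception need be raised.
      A := B + Five * Six - Eight;
   end;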
Interestingly,
   A := (B + Five * Six) - Eight;
is subject to the same questions, completely independently of 11.6,
because of the "correct result is always fine" rule (which is NOT in 11.6).
4.5(10) says so, even though the parentheses seemingly "force" the
overflowing addition.
So much for the facts.
...on the "my-opinion" side, the whole language has been defined in a way to
make implicit exceptions be the abnormal, faulty kind of event, which you
should never get, if you did right in designing/programming and if the
hardware/environment works as expected.
Trying to make them more than that ("I know exactly how to elicit a fault
and I am going to use this in my code") is really ill advised. "Leones
abundant." A couple were mentioned above.
****************************************************************
From: Robert Dewar
Sent: Friday, March 10, 2006 12:33 PM
> In any case, I believe whatever "real" examples there are must be rare.
which obviously begs the question of how common it is for 11.6 to run
into examples where it can actually help. Just how much value is there
in this nasty rule?
****************************************************************
From: Robert Dewar
Sent: Friday, March 10, 2006 12:48 PM
> ...on the "my-opinion" side, the whole language has been defined in a way to
> make implicit exceptions be the abnormal, faulty kind of event, which you
> should never get, if you did right in designing/programming and if the
> hardware/environment works as expected.
> Trying to make them more than that ("I know exactly how to elicit a fault
> and I am going to use this in my code") is really ill advised. "Leones
> abundant." A couple were mentioned above.
So to summarize your view, Tuck was ill-advised in his code for addition
overflow testing, since it depended on exceptions.
****************************************************************
From: Erhard Ploedereder
Sent: Friday, March 10, 2006 12:56 PM
> Suppose someone writes a loop that goes through a list
> and expects an access exception to terminate the loop
> in some cases, would this qualify as an exception (I have
> seen this style of programming fairly frequently).
Exactly. That might qualify in the sense of a non-fault event. It is a
might, because there is still a judgement involved. For example, doing so
in an otherwise infinite loop is the very thing I am talking about. Having
an exception handler for that outside the while loop -- well, it depends on
whether the design said that a null is a fault or an expected thing. Now,
in practice, it won't matter for this discussion, since the exception will
happen, because surely you are not advising to dereference for no good
reason other than to avoid writing the explicit "= null" check.
Where it begins to matter is when the program logic then starts drawing
conclusions, as in "if we got to this point, where I know the exception must
have been raised, then the following must have already happened,
etc. etc.". This is very typical for the kind of tricky code that we are
talking about here. Fault-handling code, on the other hand, tends to be
much more general, because after all, it is here to handle an unexpected
fault, not a planned one. With implicit exceptions implemented in terms of
condition codes or traps being smeared by the hardware regularly these
days short of serious pipeline stalls, or with data not written through to
memory, there are lots more dragons here.
Now, your experience directly contradicts our study. We must
have looked at very different code.
I was tempted to mention the End_Of_File exception as loop
terminator, which we were very surprised NOT to see used
a lot in the code. I refrained, because I am not sure whether
that was the situation in the two other cases that we found.
****************************************************************
From: Robert Dewar
Sent: Friday, March 10, 2006 1:12 PM
Interesting, we have also seen the End_Of_File exception case
a few times.
****************************************************************
From: Robert Dewar
Sent: Friday, March 10, 2006 4:13 PM
No one responded to the suggestion of
   pragma Strict_Exception_Semantics;
Thoughts?
Perhaps just not worth it, but probably better than
messing with Unsuppress given the discussion.
****************************************************************
From: Tucker Taft
Sent: Friday, March 10, 2006 4:44 PM
Being an implementor, I'll admit to not being
very enthused. I suppose if it is essentially
equivalent to "-O 0" or "pragma Optimize(Off);"
then it can't be much work, but I wonder about
how much additional testing it might imply.
Some optimizations are difficult to turn off,
especially when the backend is shared with other
languages.
As a user, I don't think I would use it. I think
it might be easier to simply teach people how to
use membership tests.
****************************************************************
From: Robert Dewar
Sent: Friday, March 10, 2006 6:50 PM
I think it should cover more cases than are covered
by membership tests. In particular I would like it
to insist on overflow checking of intermediate
results for example, but perhaps it is just not
worth discussing.
I am not sure it is equivalent to optimization off ...
****************************************************************
From: Pascal Leroy
Sent: Saturday, March 11, 2006 4:20 AM
> which obviously begs the question of how common it is for 11.6 to run
> into examples where it can actually help. Just how much value
> is there in this nasty rule?
I have been reading this thread and wondering: if we took a big eraser and
erased 11.6, would it change things a lot in practice?
This discussion seems to have demonstrated that it is probably fairly
uncommon for users to write code that actually takes advantage of 11.6.
In fact one could argue that users are more likely to get bitten by 11.6
than helped by it.
The other side of the question has to do with implementations: do
compilers actually take advantage of 11.6? I believe that in Apex we have
pretty much decided that 11.6 was unusable because by the time we run
the optimizer we have lost track of the high-level semantics of the
language, so it's devilishly hard to figure out if the permissions of 11.6
apply (but I am not familiar with this part of our technology, so Steve B.
might provide more correct details). As for GNAT, well, a while back
Robert wrote "I don't think that gcc really takes advantage of 11.6".
I am starting to think that, if we removed 11.6, the only people who would
be affected are the language lawyers, who could no longer spend weeks discussing
the possible semantics of programs that fail language-defined checks.
****************************************************************
From: Pascal Leroy
Sent: Saturday, March 11, 2006 4:23 AM
> No one responded to the suggestion of
>
> pragma Strict_Exception_Semantics;
Based on the discussion I think it would be better to have strict
exception semantics be the default and have a pragma
Loose_Exception_Semantics to cover the odd case where you absolutely
positively definitely need to take advantage of 11.6.
****************************************************************
From: Pascal Leroy
Sent: Saturday, March 11, 2006 4:26 AM
> Being an implementor, I'll admit to not being
> very enthused. I suppose if it is essentially
> equivalent to "-O 0" or "pragma Optimize(Off);"
Duh? You've lost me. There are millions of optimizations which actually
improve the code size/speed while preserving the high-level semantics of
the language. I don't see why you would want to disable those when the
user says "don't use 11.6, please".
****************************************************************
From: Tucker Taft
Sent: Saturday, March 11, 2006 10:20 AM
>>which obviously begs the question of how common it is for 11.6 to run
>>into examples where it can actually help. Just how much value
>>is there in this nasty rule?
> ...
> The other side of the question has to do with implementations: do
> compilers actually take advantage of 11.6? ... As for GNAT, well, a while back
> Robert wrote "I don't think that gcc really takes advantage of 11.6".
I am dubious. We made the changes to 11.6 during
the 9x process specifically to address typical optimizations,
and gripes from implementors that Ada made it hard to
do instruction scheduling, vectorization, dead store
elimination, use hardware with imprecise exceptions, etc.
> I am starting to think that, if we removed 11.6, the only people who would
> be affected are the language lawyers, who could no longer spend weeks discussing
> the possible semantics of programs that fail language-defined checks.
I admit 11.6 is hard for anyone to understand, but I believe
I understood it at least for a few weeks somewhere in the early
90's, and it corresponded properly to what typical optimizers
do. I don't believe implementors go out of their way to
take advantage of it, but I believe it reflects the kinds
of optimizations that are typically performed by modern
compilers.
The key messages from 11.6 are:
1) don't rely on exceptions from language-defined checks
being considered essential side-effects. Their job
is to prevent invalid, meaningless results from being
used to produce visible output. They need not be raised
at all so long as the above is accomplished.
2) don't rely on exceptions from language-defined checks
being raised precisely where they appear in the source.
They might be "smeared" out, due to vectorization,
reordering or combinations of operations, instruction
scheduling, extra-precision intermediates, etc.
I believe both of these are messages that users should hear
and follow. It is unfortunate that we couldn't come up
with words that are easier to understand, but alas, that
is true for several sections of the manual. We have to rely
on textbooks, training, and mentoring to solve that.
The simplest rule is, and I think Robert has said something like this
many times, "don't use local handlers for predefined check failures."
Handlers in some outer loop are fine, but expecting fine-grained
precision in handling predefined check failures is asking for trouble.
****************************************************************
From: Robert Dewar
Sent: Saturday, March 11, 2006 11:10 PM
> I have been reading this thread and wondering: if we took a big eraser and
> erased 11.6, would it change things a lot in practice?
I think the real effect of 11.6 is seen in postponed stores
a := 6;
b := c + d;
isn't it 11.6 that says a may not be 6 in the overflow handler
for the c+d addition?
Also it certainly used to be 11.6 that allowed out of range
intermediate values, but that's changed, right?
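(A compilable sketch of this hazard -- hypothetical names, and assuming
overflow checking is enabled:)
with Ada.Text_IO;
procedure Deferred_Store (C, D : Integer) is
   A, B : Integer := 0;
begin
   A := 6;
   B := C + D;  -- may raise Constraint_Error on overflow
exception
   when Constraint_Error =>
      -- 11.6 permits the store to A to be postponed past the failing
      -- addition, so A = 6 is not guaranteed here
      Ada.Text_IO.Put_Line (Integer'Image (A));
end Deferred_Store;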
****************************************************************
From: Robert Dewar
Sent: Saturday, March 11, 2006 11:11 PM
> Duh? You've lost me. There are millions of optimizations which actually
> improve the code size/speed while preserving the high-level semantics of
> the language. I don't see why you would want to disable those when the
> user says "don't use 11.6, please".
Tuck is worrying about whether it is possible to implement
removing 11.6 I think, and if it is equivalent to -O0, the
answer is at least yes in the limit.
****************************************************************
From: Robert I. Eachus
Sent: Sunday, March 12, 2006 7:34 AM
>I have been reading this thread and wondering: if we took a big eraser and
>erased 11.6, would it change things a lot in practice?
Yes. 11.6 is the place where most reorderings are permitted. Compilers
need to be able to take advantage of reordering, not just in
expressions, but in stores.
>I am starting to think that, if we removed 11.6, the only people who would
>be affected are the language lawyers, who could no longer spend weeks discussing
>the possible semantics of programs that fail language-defined checks.
The 'nasty' cases always seem to me to be those where removing an
assignment is allowed to remove the related range check. Having gotten
that far, there is an equivalence that causes the problems.
declare
A: Int;
B, C: Int := Some_Expression;
begin
A := B*C;
exception
when Constraint_Error => Do_Something;
end;
declare
A,B,C: Int := Some_Expression;
begin
if B*C in Int'Range
then A := B*C;
end if;
end;
Are these two program fragments equivalent? Most Ada programmers would
say yes. I used Int, not Integer for a reason. We run into trouble
with 11.6 when the bounds of Int are not the same as those of Int'Base.
Certainly if I wrote:
declare
A: Int'Base; --assuming this is legal...
B,C: Int := Some_Expression;
begin
A := B*C;
exception
when Constraint_Error => Do_Something;
end;
Then eliminating the constraint check on the assignment to A should not
be a problem. My point is that the freedom in 11.6 to reorder
expressions in a way that eliminates the predefined exceptions raised
by predefined operators is not a problem; it is eliminating the
(predefined) check that is a part of assignment that is the killer.
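(A sketch of that distinction, with hypothetical bounds; on typical
implementations Int'Base is at least 16 bits wide, so the multiply below
succeeds in the base range and it is the assignment's range check that
fails:)
declare
   type Int is range -1000 .. 1000;
   A : Int;
   B : Int := 999;
   C : Int := 2;
begin
   A := B * C;  -- 1998 fits in Int'Base; the range check on the
                -- assignment to A raises Constraint_Error
exception
   when Constraint_Error =>
      null;  -- this is the check that, per the above, must survive
end;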
It would be nice if we could change the RM in a way that defined Integer
(and Long_Integer, etc.) as base types, then it wouldn't be surprising
that the compiler was allowed to remove the (non-existent) check that is
part of assignment. I think it is way too late to do that.
When we track all this through the compiler, the argument for continuing
to allow the freedom is that the back-end may take advantage of 11.6 in
a way that does not distinguish between these cases. But in practice
the problem cases are not the rearrangements that the back-end might do,
it is only the front end optimizations that can conclude that the
assignment to A (and even the storage for A) can be eliminated.
That is why I suggested that we restrict the effect of Unsuppress to the
check that is part of an assignment. We can split legal hairs all we
want, but that is, in practice, the only check that a user internalizes
in his understanding of the meaning of a program. Requiring that these
checks--and no others--be made unless suppressed seems to me to be a
good compromise. If the decision is to make this Implementation Advice,
I can certainly live with that. It is the sort of thing that IA is for,
and users can shake the IA at their compiler vendor if there is a problem.
And now for something completely different...
The question of whether checking for End_Of_File by an exception handler
is a 'normal' loop exit misses the point. If you are reading a
structured file there are two places where End_Of_File can be raised.
Where you expect it, and where you don't. It doesn't matter whether you
are reading records of some sort, or if the structure is implicit in a
series of reads. You can even have records that are written in multiple
lines in a text file and read the same way. In all these cases you need
the exception handler for malformed files. It seems silly to tell the
programmer that he should have a while not End_Of_File loop that
contains a block with an End_Of_File handler, if the behavior he wants
is to discard any malformed structures. I usually do the explicit
check, but only when the non-normal action is to print a message or some
such.
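(A sketch of the two-line-record case -- the "End_Of_File exception" is
End_Error; Read_Records and the record layout are hypothetical:)
with Ada.Text_IO; use Ada.Text_IO;
procedure Read_Records (F : in File_Type) is
   Name, Value : String (1 .. 100);
   NL, VL      : Natural;
begin
   loop
      Get_Line (F, Name, NL);   -- End_Error here is the expected exit
      Get_Line (F, Value, VL);  -- End_Error here means a truncated record
      Put_Line (Name (1 .. NL) & " => " & Value (1 .. VL));
   end loop;
exception
   when End_Error =>
      null;  -- both cases land here, and the malformed tail is quietly
             -- discarded, which is the behavior described above
end Read_Records;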
****************************************************************
From: Erhard Ploedereder
Sent: Sunday, March 12, 2006 5:43 PM
Robert D:
> Just how much value is there in this nasty rule?
Pascal L:
> I am starting to think that, if we removed 11.6, the only people who would
> be affected are the language lawyers, who could no longer spend weeks discussing
> the possible semantics of programs that fail language-defined checks.
If you ever tried to implement a control-flow analysis or anything based on
a control-flow analysis, and considered to also model a canonical control
flow by implicitly raised exceptions, then you appreciate 11.6. Your control
flow graph explodes by factors and since the algorithms on top often are
directly influenced by the number of branches, they slow down by factors,
in some cases factors squared.
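(A schematic fragment with hypothetical declarations: under canonical
semantics each statement below carries an implicit Constraint_Error arc to
any enclosing handler, splitting the basic block three ways; under the 11.6
model the three statements can be analysed as a single block:)
declare
   type Vec is array (1 .. 10) of Integer;
   type Int_Ptr is access Integer;
   V : Vec := (others => 0);
   P : Int_Ptr := new Integer'(0);
   I : Integer := 1;
   A, B, C, D, E : Integer := 0;
begin
   A := B + C;  -- overflow check
   D := V (I);  -- index check
   E := P.all;  -- access check
end;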
(Usually, people will simply cry "uncle" and give up on such analyses, or
else use the 11.6 model regardless of what the language says, e.g., CF-analyses
for Java.)
As a compiler writer, you always have the option of crying "uncle", not
analysing/optimizing, and simply producing "canonical code".
As a validator, you simply forbid exception handling altogether to avoid the
thorny issue (and in the process blow up rockets when exceptions happen that
you forbade).
In most other settings of program analysis, these options do not exist.
------
If you ever tried to ensure the canonical semantics and used condition codes
for detection of numerical errors on a deeply pipelined machine, then you
appreciate 11.6. and its fuzziness about implicit exceptions.
-----------
If you ever tried to convince an existing code generator to do any of the
above, because you believe in canonical code, then you start appreciating
11.6, because such convincing is next to impossible short of abolishing just
about any optimization.
--------
If you want to go down that garden path of canonical execution semantics,
then the first rule on the block ought to be a strict left-to-right
evaluation rule. A lot more people get hosed by that one than by the 11.6
rules. Not that I would be in favor.
In fact, I think the "11.6 is bad" discussion is only among language lawyers
and the verifiers who want to have canonical semantics at any cost; few end
users are aware of the details anyway.
Compiler writers who complain about 11.6 are arguing mostly under false
pretenses. As compiler writers, they always have the option to implement
whatever canonical semantics they like (within the bounds of what the
language allows) and it is not hard to do IF you control the details of
the backend technology, IF you do not care about speed or energy
efficiency, and IF your aspirations in analysing control flow are extremely
limited.
To ram these IFs down everyone's throat, well, I don't like that (and
neither does my cell phone battery).
****************************************************************
From: Erhard Ploedereder
Sent: Sunday, March 12, 2006 5:53 PM
Tuck summed it up perfectly...and I wonder what has changed in recent years
to alter this...
For the end user ....
The key messages from 11.6 are:
1) don't rely on exceptions from language-defined checks
being considered essential side-effects. Their job
is to prevent invalid, meaningless results from being
used to produce visible output. They need not be raised
at all so long as the above is accomplished.
2) don't rely on exceptions from language-defined checks
being raised precisely where they appear in the source.
They might be "smeared" out, due to vectorization,
reordering or combinations of operations, instruction
scheduling, extra-precision intermediates, etc.
------
For compiler writers:
We made the changes to 11.6 during
the 9x process specifically to address typical optimizations,
and gripes from implementors that Ada made it hard to
do instruction scheduling, vectorization, dead store
elimination, use hardware with imprecise exceptions, etc.
****************************************************************
From: Pascal Leroy
Sent: Monday, March 13, 2006 2:37 AM
> The simplest rule is, and I think Robert has said something
> like this many times, "don't use local handlers for
> predefined check failures." Handlers in some outer loop are
> fine, but expecting fine-grained precision in handling
> predefined check failures is asking for trouble.
I don't understand this. Because exceptions cannot be moved across
sequence_of_statements boundaries, I have always thought that using
handlers as locally as possible was your best bet to avoid being bitten by
11.6. Can you educate me?
****************************************************************
From: Pascal Leroy
Sent: Monday, March 13, 2006 2:49 AM
> I believe both of these are messages that users should hear
> and follow. It is unfortunate that we couldn't come up
> with words that are easier to understand, but alas, that
> is true for several sections of the manual. We have to rely
> on textbooks, training, and mentoring to solve that.
I opened one randomly selected textbook, John's, and as far as I can tell
the only thing it says about optimization is that intermediate
computations can be performed using a wider range and therefore don't have
to raise an exception.
You're just fooling yourself if you believe that 11.6 is understood by
anyone outside a small subset of the ARG and a handful of implementers.
****************************************************************
From: Pascal Leroy
Sent: Monday, March 13, 2006 3:21 AM
> In fact, I think the "11.6 is bad" discussion is only among
> language lawyers and the verifiers who want to have canonical
> semantics at any cost; few end users are aware of the details anyway.
Excuse me, but Robert's "does it overflow?" example is a clear case where
a user did care. In fact, unless your last name is Taft, you are
exceedingly likely to get bitten by 11.6 because chances are that you'll
write one of these functions at some point, naively assuming the canonical
semantics.
If you are concerned about safety, you should also have a heart attack at
the notion that objects can become abnormal if a predefined exception is
raised (last sentence of 11.6(6)). It's one thing to say that assignments
may or may not have happened, it's a totally different thing to say that
if you touch any object in a handler you can become erroneous.
> Compiler writers who complain about 11.6 are arguing mostly
> under false pretenses. As compiler writers, they always have
> the option to implement whatever canonical semantics they
> like (within the bounds of what the language allows) and it
> is not hard to do IF you control the details of the backend
> technology, IF you do not care about speed or energy
> efficiency, and IF your aspiration in analysing control flow
> are extremely limited. To ram these IFs down everone's
> throat, well, I don't like that (and my cell phone battery neither).
Baloney. We at Rational have spent the last 15 years implementing
aggressive and sophisticated optimizations for Ada, and we have found the
permissions of 11.6 to be worthless. It may just be that we are clueless,
but...
An optimizer is in the business of making inferences about the program and
performing transformations based on these inferences, right? The problem
is that if the optimizer is allowed to make transformations that don't
preserve the canonical semantics, it can end up making inconsistent
inferences about the program and then proceed to perform transformations
which are *not* permitted by 11.6.
Here is a concrete example, cut down from real customer code (nobody
writes code like that, but the mysteries of code generation do create such
patterns). Consider the following code fragment, with all checks enabled:
if X + 1 > Y then
Z := Y;
else
Z := Y;
end if;
if X = Integer'Last then
Z := 0;
end if;
Optimization phase #1 runs and eliminates the second if_statement,
reasoning that X cannot be Integer'Last, because if it were the evaluation
of X + 1 would have raised an exception. A perfectly legitimate inference
and transformation, consistent with the canonical semantics of the
language.
Optimization phase #2 runs and, looking at the first if_statement, it
realizes that both arms are the same. The canonical semantics would be to
evaluate the condition, drop it on the floor, and assign Y to Z. But
optimization phase #2 wants to take advantage of the permission to avoid
raising exceptions. Since the result of evaluating X+1 has no effect
whatsoever on the outcome of the program, 11.6(5) says that you don't need
to evaluate the condition at all. Fine, you are left with:
Z := Y;
But that's *wrong*; if X has the value Integer'Last, the only choices
allowed by 11.6 are to either raise C_E or assign 0 to Z.
The difficulty here is to determine at which point the inferences go
outside of the domain permitted by 11.6. The optimizer would need to have
a lot of information about the high-level semantics of the language to
determine that. As we all know, optimizers typically operate on
intermediate languages where the high-level semantics has been blurred, to
say the least.
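(A recap of the stages in compilable form, with hypothetical names:)
procedure Staged (X, Y : Integer; Z : out Integer) is
begin
   if X + 1 > Y then         -- phase #2: both arms are equal, and the unused
      Z := Y;                -- X + 1 "need not" be evaluated per 11.6(5)
   else
      Z := Y;
   end if;
   if X = Integer'Last then  -- phase #1: dropped, on the reasoning that
      Z := 0;                -- X + 1 above would have raised C_E
   end if;
   --  Combined result: just "Z := Y;", although for X = Integer'Last the
   --  canonical outcomes were "raise C_E" or "Z = 0".
end Staged;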
****************************************************************
From: John Barnes
Sent: Monday, March 13, 2006 5:13 AM
> I believe both of these are messages that users should hear
> and follow. It is unfortunate that we couldn't come up
> with words that are easier to understand, but alas, that
> is true for several sections of the manual. We have to rely
> on textbooks, training, and mentoring to solve that.
>I opened one randomly selected textbook, John's, and as far as I can tell
>the only thing it says about optimization is that intermediate
>computations can be performed using a wider range and therefore don't have
>to raise an exception.
>You're just fooling yourself if you believe that 11.6 is understood by
>anyone outside a small subset of the ARG and a handful of implementers.
Old John certainly doesn't understand 11.6
But at least he does understand that he doesn't understand!
****************************************************************
From: Erhard Ploedereder
Sent: Monday, March 13, 2006 7:00 AM
> Baloney. We at Rational have spent the last 15 years implementing
> aggressive and sophisticated optimizations for Ada, and we have found the
> permissions of 11.6 to be worthless. It may just be that we are clueless,
> but...
A certain technology that both of us know did
a) not include implicit exception raising in the control-flow graph.
Liveness of variables in the exception handler was guaranteed by
other means. Consequently, every optimization that drew information
from the CFG was liable to ignore the possibility of this implicit
control flow.
b) instruction scheduling by a bubbling algorithm that paid close
attention to set-set, set-use, and use-set dependencies, as well as
frame boundaries, but very little else. Consequently, it was subject
to the issue raised by Robert with the "a := 6" problem on architectures
with a condition code for overflow checking.
c) did dead assignment elimination as a matter of course, since lots
of dead assignments came down from the IL-generation (e.g., dope
that wasn't needed, etc.) or were the left-overs of copy propagation
or constant propagation.
Are you saying that this is allowed without 11.6?
Or has all this changed in your aggressive optimizations to something
more in line with canonical semantics?
Or have measurements shown that the optimizations do not matter for
execution speed?
So I am not sure what the Baloney stands for.
On the CFG-issue: note that you cannot simply "turn off the CFG-related
optimizations", since canonical flow by exceptions implies that your basic
block structure in a CFG without the implicit exception arcs is incorrect --
any optimization that then assumes that a basic block is indeed one, may go
wrong. Typical examples include simple instruction scheduling, unless done
on the object code where the implicit exception branches are manifest.
On to your example. Cute, but it doesn't fly....
Let's kill 11.6. for the argument's sake.
Ditto lets accept that Optimization #1 implements canonical semantics.
(I actually do not agree. You are wrong because of 4.5(10); the conclusion
that X + 1 raises CE if X = Integer'Last is not justified because of it.)
Then, my second optimization ... hold it, not an optimization, but a
necessary transformation because my target architecture only does compares
against 0 ... is to turn
X + 1 > Y into X - Y + 1 > 0
and even without 11.6. no exception happens even though X = Integer'Last.
Likely you turn it into X + 1 - Y and get the exception, but are you sure?
You cannot, since I immediately show you the example of Y < X + 1 turning
into Y - X - 1 < 0.
The example of
X + 1 > 1 + Y
is interesting, too, in case the above has not convinced you. If the code
for that causes overflow in the case of X = Integer'Last, I would actually
be mildly surprised.
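(A worked instance of these rewrites, taking Y = 1 and X = Integer'Last:)
--  canonical:    X + 1 > Y      the check on X + 1 fails: Constraint_Error
--  transformed:  X - Y + 1 > 0  Integer'Last - 1 + 1 = Integer'Last > 0,
--                               so no overflow and no exception
--  mirror form:  Y < X + 1      becomes Y - X - 1 < 0, likewise quiet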
So, killing 11.6. will not help you here. You also need to kill 4.5(10)
and a bunch of other implementation permissions, I presume. You also need
to prescribe some canonical transformation for the comparisons and, as I
said earlier, you need L-to-R rules in order to get truly predictable
exception behaviour.
GNAT is at the extreme end of the spectrum. If you write
X + 1 - 1 - Y > 0
you get CE for X = Integer'Last, even if you go for higher optimization
levels, i.e. I tried it for -O3.
Not even constant folding, so that implicit exceptions of a strict
left-to-right evaluation are preserved? Wow. That is heavy.
Designer's prerogative, sure, but a rule for the world?
****************************************************************
From: Pascal Leroy
Sent: Monday, March 13, 2006 8:14 AM
> A certain technology that both of us know did ...
> Or has all this changed in your aggressive optimizations to something
> more in line with canonical semantics?
Well, many things have changed, and yes, we stick very closely to
canonical semantics.
> Or have measurements shown that the optimizations do not
> matter for execution speed?
Please don't put words in my mouth, I didn't say that. I pointed out that
it is exceedingly difficult to make sure, as soon as you start using 11.6,
that you don't go *beyond* the permissions of 11.6.
> So I am not sure what the Baloney stands for.
You wrote, roughly "Compiler writers who complain about 11.6 ... do not
care about speed or energy efficiency". That's what the Baloney stands
for.
> On to your example. Cute, but it doesn't fly....
> Let's kill 11.6. for the argument's sake.
> Ditto lets accept that Optimization #1 implements canonical
> semantics. (I actually do not agree. You are wrong because of
> 4.5(10); the conclusion that X + 1 raises CE if X =
> Integer'Last is not justified because of it.)
The parenthetical comment doesn't make sense. I am the implementer, I
decide what code I generate, so I know darn well that X + 1 will raise
C_E.
> Then, my second optimization ... hold it, not an
> optimization, but a necessary transformation because my
> target architecture only does compares
> against 0 ... is to turn
> X + 1 > Y into X - Y + 1 > 0
> and even without 11.6. no exception happens even though X =
> Integer'Last.
But that's exactly my point.
Some code generation phase upstream from you chose to generate code to
evaluate X + 1 with overflow checking. In the optimizer you can do any
transformation you like provided that you preserve the semantics.
Removing an overflow check is *not* semantics-preserving unless you can
*prove* that X + 1 will never ever overflow.
If you start doing transformations that are not semantics-preserving (like
evaluating X - Y + 1) then some of the inferences performed elsewhere
become invalid. In this case you managed to screw up optimization #1.
In my view, when you chose to evaluate X - Y + 1, you effectively used
11.6. But you are going to protest and say that no, I only used 4.5(10).
So let me change my example a bit to avoid 4.5(10) altogether:
if X.all > Y then
Z := Y;
else
Z := Y;
end if;
if X = null then
Z := 0;
end if;
Same story as in my previous message: optimization #1 drops the second
if_statement because X cannot possibly be null, optimization #2 uses 11.6
to avoid evaluating the condition and you are left with the incorrect Z :=
Y.
> GNAT is at the extreme end of the spectrum. If you write
> X + 1 - 1 - Y > 0
> you get CE for X = Integer'Last, even if you go for higher
> optimization levels, i.e. I tried it for -O3.
And I think it makes perfect sense. Note that you can still optimize away
the subtraction by (1) evaluating X + 1 to check for overflow; then (2)
computing X - Y (don't know if GNAT does it).
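(A sketch of that sequence, with hypothetical names:)
function Test (X, Y : Integer) return Boolean is
   T : constant Integer := X + 1;  -- (1) keeps the canonical check on X + 1
   R : constant Integer := X - Y;  -- (2) the "+ 1 - 1" folded away;
                                   --     X - Y keeps its own check
begin
   return R > 0;
   --  T is otherwise unused, so 11.6(5) would in turn permit dropping (1);
   --  per the -O3 experiment above, GNAT chooses not to.
end Test;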
****************************************************************
From: Tucker Taft
Sent: Monday, March 13, 2006 9:20 AM
The key point here seems to be that your implementation
can't first presume that checks *will* be performed,
and reason from that, and then later remove a check because
it has no "other" effect on the output. That seems like an
important rule to remember. In other words you can only
"reason" from checks you know will be performed.
I suppose one way to accomplish that would be to somehow
tie the result of the check to the inference that is drawn
by it, sort of like a common subexpression. You can't
perform a common subexpression optimization if the original
calculation of the expression is removed. Only when there
are no uses remaining of the check can the check itself
be removed. In your example there are essentially two
uses of the check that X /= null, after optimization #1
has taken place. The first is the computation of "x.all"
and the second is the "X = null" conditional.
****************************************************************
From: Pascal Leroy
Sent: Monday, March 13, 2006 9:51 AM
More precisely, the second is the *removal* of the X = null condition and
the enclosing if_statement. So you'd have to introduce a dependence on
something that is no longer present in the code. Possible, but not
exactly easy.
This was an example with the permission to remove checks. Of course, you
can construct examples involving the permission to reorder actions. In
this case you'd have to record dependencies on partial ordering
relationships between the actions of a sequence_of_statements. Again,
this is possible, but again, it doesn't seem exactly easy.
This being software, everything is possible, but we have found that
keeping track of how we used the permissions of 11.6 was daunting, and we
pretty much decided not to use them. Other implementers may come to
different conclusions, but I firmly believe that the risk of making
inferences that are inconsistent or go beyond the permissions of 11.6 is
very real.
****************************************************************
From: Erhard Ploedereder
Sent: Monday, March 13, 2006 10:41 AM
you cannot make the argument:
> Removing an overflow check is *not* semantics-preserving unless you can
> *prove* that X + 1 will never ever overflow.
since 4.5(10) says otherwise. It may not be YOUR notion of semantics, but it
is the present manual's notion that correct values are always ok and, where
YOUR semantics say CE, the language semantics say CE or correct value.
And it is rather bogus to argue:
1. "because I know as implementer that I will raise CE for X+1 earlier,
I draw conclusions from that for subsequent code"
and, in the next moment say
2. "but I am not going to do this check after all, hehe".
Take your pick: either stick with line 1 and then make sure that the
premise holds, i.e., perform the check under all circumstances.
Or stick with line 2, in which case you better not draw the conclusion
in line 1, since you are not making the check after all.
As a basis for optimization, you happen to like line 1; I happen to like
line 2. Anybody who likes both lines, has a correctness problem with his
deductive logic; this should not happen.
What I tried to say (and did so badly) is that if you draw conclusions from
the semantics of the language, you are always ok, regardless of what the
optimizations do, because they have to stay within the bounds of these
semantics. If you draw conclusions because you just know what this
implementation does or does not do, you are treading on thin ice, because
in the end the implementation might do otherwise (as it did in your example; it
took the check away per 11.6. in your diction; per my view, where I did
not fold the if-statement, per a necessary transliteration of the condition
into object code that took advantage of 4.5(10)).
****************************************************************
From: Pascal Leroy
Sent: Wednesday, March 15, 2006 4:10 AM
You have a point here, but 4.5(10) is to a large extent a red herring.
So please look at my second example, the one with access types, which
didn't involve any "optional" check.
Now it is clear that the interaction between optimizations #1 and #2 is a
bug, and as Tuck pointed out, it can conceivably be addressed by recording
enough information about the premises of the inferences you made, and
ensuring that whenever you use 11.6 you don't break these premises. But
that is *hard* if you have your own code generator, and it seems close to
impossible if you are targeting an existing code generator (try teaching
the subtleties of 11.6 to, say, GCC).
****************************************************************
From: Robert Dewar
Sent: Wednesday, March 15, 2006 2:19 AM
> Baloney. We at Rational have spent the last 15 years implementing
> aggressive and sophisticated optimizations for Ada, and we have found the
> permissions of 11.6 to be worthless. It may just be that we are clueless,
> but...
I am really puzzled by this statement, how can you even do simple things
like deferring stores?
It seems fundamental that a back end optimizer doing conventional
instruction scheduling for a superscalar machine must take advantage
of 11.6. Pascal, perhaps you are thinking of higher level optimizations,
which I agree do not take advantage of 11.6.
****************************************************************
From: Robert Dewar
Sent: Wednesday, March 15, 2006 2:21 AM
This thread is fun, as per subject above I suggest that
pragma Unsuppress should inhibit 11.6, and first we have
to discover that we really don't collectively understand
11.6, or even whether it is useful :-) :-)
****************************************************************
From: Robert Dewar
Sent: Wednesday, March 15, 2006 5:39 AM
> Now it is clear that the interaction between optimizations #1 and #2 is a
> bug, and as Tuck pointed out, it can conceivably be addressed by recording
> enough information about the premises of the inferences you made, and
> ensuring that whenever you use 11.6 you don't break these premises. But
> that is *hard* if you have your own code generator, and it seems close to
> impossible if you are targeting an existing code generator (try teaching
> the subtleties of 11.6 to, say, GCC).
11.6 simply lets gcc do its normal instruction scheduling, and I think
we would have a big problem trying to teach gcc NOT to take advantage
of 11.6.
****************************************************************
From: Pascal Leroy
Sent: Wednesday, March 15, 2006 5:51 AM
> I am really puzzled by this statement, how can you even do
> simple things like deferring stores?
>
> It seems fundamental that a back end optimizer doing
> convention instruction scheduling for a superscalar machine
> must take advantage of 11.6. Pascal, perhaps you are thinking
> of higher level optimizations, which I agree do not take
> advantage of 11.6.
Yes, I was mostly thinking of higher level optimizations.
However, now that you mention it, I am starting to think that instruction
scheduling is a problem, too.
One of the things that a high-level optimizer might be interested to know
is that no exception can be raised in a particular region of code, maybe
because it was previously able to prove that checks won't fail in this
region. It seems like a useful property of the control flow graph, and
having proved it your optimizer may be able to make other inferences,
further transforming the code.
Next your low-level instruction scheduler runs, and it decides to move
instructions around. It may unwittingly move an instruction that can
raise an exception into one of these no-exception regions. But by doing
so it broke the premises on which your high-level optimizer depended. How
can you be sure that the generated code is still correct? Note that I am
not concerned about an exception moving around, I am concerned about
breaking the assumptions upon which earlier optimizations depended.
As Tuck indicated, you'd have to convey to the instruction scheduler all
sorts of information about the assumptions made earlier. That's daunting.
I don't have an example at hand, but I think the potential for trouble is
real. In fact, I wonder how our technology deals with this situation.
****************************************************************
From: Pascal Leroy
Sent: Wednesday, March 15, 2006 6:01 AM
> ... first we have
> to discover that we really don't collectively understand
> 11.6, or even whether it is useful :-) :-)
The latter is really the important question. It seems to me that the
feeling in this group is "yeah, 11.6 is obnoxious because it screws up
portability and makes it very hard to reason about code, but the
optimizations that it permits are oh so vital for performance".
As I said before, we have had no end of trouble with 11.6 and pretty much
stopped using it. We were quite surprised to find that the performance
penalty of *not* using 11.6 was relatively small. Of course, YMMV, but
it is not clear to me if the assumption that 11.6 is vital is justified in
practice.
****************************************************************
From: Robert Dewar
Sent: Wednesday, March 15, 2006 11:19 AM
> Next your low-level instruction scheduler runs, and it decides to move
> instructions around. It may unwittingly move an instruction that can
> raise an exception into one of these no-exception regions.
You have to be able to bound the regions, you can't move exceptions
out of a block for example in any case.
> As Tuck indicated, you'd have to convey to the instruction scheduler all
> sorts of information about the assumptions made earlier. That's daunting.
It is indeed tricky, we have only occasionally run into this. Our early
attempt at zero cost exceptions was done in the front end, and this
worked in early versions of GCC, but later versions did indeed violate
our assumptions -- luckily GCC had implemented zero cost exceptions
for C++ in the meantime, and we could switch to the GCC mechanism.
****************************************************************
From: Arnaud Charlet
Sent: Wednesday, March 15, 2006 6:08 AM
> Next your low-level instruction scheduler runs, and it decides to move
> instructions around. It may unwittingly move an instruction that can
> raise an exception into one of these no-exception regions. But by doing
> so it broke the premises on which your high-level optimizer depended. How
> can you be sure that the generated code is still correct?
Clearly, that's called a bug: you need to pass enough info between
your various optimization passes, and exception regions are something that
all passes should be aware of; the same goes for aliasing information.
> As Tuck indicated, you'd have to convey to the instruction scheduler all
> sorts of information about the assumptions made earlier. That's daunting.
FWIW, that's what GCC does routinely.
****************************************************************
From: Arnaud Charlet
Sent: Wednesday, March 15, 2006 6:10 AM
> As I said before, we have had no end of trouble with 11.6 and pretty much
> stopped using it. We were quite surprised to find that the performance
> penalty of *not* using 11.6 was relatively small. Of course, YMMV, but
> it is not clear to me if the assumption that 11.6 is vital is justified in
> practice.
As Robert said, GCC takes advantage of this, and removing
11.6 is a no-no as far as GNAT is concerned.
****************************************************************
From: Robert Dewar
Sent: Wednesday, March 15, 2006 11:29 AM
> As I said before, we have had no end of trouble with 11.6 and pretty much
> stopped using it. We were quite surprised to find that the performance
> penalty of *not* using 11.6 was relatively small. Of course, YMMV, but
> it is not clear to me if the assumption that 11.6 is vital is justified in
> practice.
Again I wonder if this is too much of a front end perspective.
Performance on modern superscalar architectures (not to mention
ia64) is critically dependent on scheduling, and it is hard to
see how a scheduling algorithm would not fall afoul of strict
Ada semantics.
****************************************************************
From: Robert A. Duff
Sent: Thursday, March 16, 2006 6:33 PM
> This thread is fun, as per subject above I suggest that
> pragma Unsuppress should inhibit 11.6, and first we have
> to discover that we really don't collectively understand
> 11.6, or even whether it is useful :-) :-)
Wow! I really ignited a firestorm with this "Unsuppress should inhibit 11.6"
thread, which has morphed into "perhaps we should inhibit 11.6 altogether"!
I think what we're seeing is a different point of view between people who
design back ends specifically for Ada, and people who write and/or use back
ends that were originally intended for other languages, such as C.
I think we _do_ collectively understand what 11.6 means, but we disagree as to
whether it is useful, and to what extent it is damaging to writing working Ada
programs.
I do not think we can eliminate 11.6 entirely.
But, returning to the original question, can we eliminate 11.6
in the presence of Unsuppress?
****************************************************************
From: Randy Brukardt
Sent: Thursday, March 16, 2006 7:45 PM
> I do not think we can eliminate 11.6 entirely.
> But, returning to the original question, can we eliminate 11.6
> in the presence of Unsuppress?
We've already answered that: having different semantics for the explicit use
of Unsuppress from the normal state (that is, without Suppress) has
significant implications for existing implementations of Suppress (and could
even cause the introduction of erroneous behavior in existing programs).
The only way this would work is if 11.6 was allowed *only* in the presence
of pragma Suppress. But that's almost the same as repealing it altogether.
And I don't think some implementers would be very happy with the benchmark
results afterwards.
The issues were outlined here last week; why are you bringing this up again
without addressing those issues? At the very least, you need a worked out
proposal that shows that you've addressed all of the issues that no one else
in the ARG could. (I can believe that we might have missed something
obvious, but I won't believe that without being shown it in
black-and-white.)
****************************************************************
From: Bibb Latting
Sent: Friday, March 17, 2006 7:48 PM
My apologies for the late post to the thread; can DOLIST be commanded
to return all of the posts to a thread in chronological order as a
single reply? I'll brave the waters and throw in my two cents' worth:
1)
> About 10 years ago, we did a small study on the use of exceptions in Ada
> code available in the public domain, and the results were the exact
> opposite of these claims that this style of programming (where the raising of
> implicit exceptions realizes an intended non-fault semantics) is frequent
> enough to warrant any attention.
>
> I vaguely remember that the study found 3 examples; in all these cases, it
> involved results that were used later on, i.e. not subject to 11.6
> considerations of dead code elimination. I don't remember how many lines
> of
I question the validity of the metric; one would expect publicly available code
to have some level of testing, and code subject to 11.6 wouldn't pass testing.
What about searching support databases instead?
2)
Shouldn't 1.1.3(8-14) include interaction with an object that has a user-specified address?
3)
> function Multiply_Will_Overflow(X, Y: Integer) return Boolean is
> pragma Unsuppress(All_Checks);
> Temp: Integer;
> begin
> Temp := X * Y;
> return False;
> exception
> when Constraint_Error =>
> return True;
> end Multiply_Will_Overflow;
> The key messages from 11.6 are:
>
> 1) don't rely on exceptions from language-defined checks
> being considered essential side-effects. Their job
> is to prevent invalid, meaningless results from being
> used to produce visible output. They need not be raised
> at all so long as the above is accomplished.
>
> 2) don't rely on exceptions from language-defined checks
> being raised precisely where they appear in the source.
> They might be "smeared" out, due to vectorization,
> reordering or combinations of operations, instruction
> scheduling, extra-precision intermediates, etc.
>
> I believe both of these are messages that users should hear
> and follow. It is unfortunate that we couldn't come up
> with words that are easier to understand, but alas, that
> is true for several sections of the manual. We have to rely
> on textbooks, training, and mentoring to solve that.
As an alternative, what about exploiting the elimination of a basic block in
the example? It seems to me that elimination of a basic block is a significant
event worthy of diagnostic support. This is something that could be directed
via IA, left as a competitive distinction between implementations, or
formalized as PROGRAM_ERROR.
What are the impacts to implementations and does the cost/benefit ratio make
sense? As a thought, statement level information, which is probably already
available after the back-end for debug, could be used with a simple diagnostic;
or perhaps when an optimization empties the block, the semantics used to
eliminate the block could be cited.
****************************************************************
From: Tucker Taft
Sent: Friday, March 17, 2006 8:02 PM
> ...
> 2)
>
> Shouldn't 1.1.3(8-14) include interaction with an object that has a
> user-specified address?
Perhaps, though placing an object at a particular address
doesn't necessarily mean it is referenced or updated from outside
the Ada code. And it is easy enough to slap a pragma Export on
it as well, if that is the intent. On the other hand, there are
these words in 13.3(19):
If the Address of an object is specified, or it is imported or
exported, then the implementation should not perform optimizations
based on assumptions of no aliases.
which groups objects whose Address is specified with those that
are imported or exported. Hence, on balance, I think you are right
that specifying the address should be roughly equivalent to exporting
the object.
> 3)
...
> As an alternative, what about exploiting the elimination of a basic
> block in the example? It seems to me that elimination of a basic block
> is a significant event worthy of diagnostic support. This is something
> that could be directed via IA, left as a competitive distinction between
> implementations, or formalized as PROGRAM_ERROR.
I would say -- "don't go there." Anything that implies flow analysis
has been avoided in the rules of the language. Here we are looking
for portability, and I don't think implementation advice is the right
way to solve that.
> What are the impacts to implementations and does the cost/benefit ratio
> make sense? As a thought, statement level information, which is
> probably already available after the back-end for debug, could be used
> with a simple diagnostic; or perhaps when an optimization empties the
> block, the semantics used to eliminate the block could be cited.
These are the kinds of things that "friendly" compilers might do,
but requiring it is asking for trouble. Even Annex H is very loose
in its specification of what is required by pragma Reviewable
in the way of object code explanation.
****************************************************************
From: Randy Brukardt
Sent: Friday, March 17, 2006 8:23 PM
>I question the validity of the metric; one would expect publicly available code
>to have some level of testing, and code subject to 11.6 wouldn't pass testing.
>What about searching support databases instead?
I disagree with the assertion that code subject to 11.6 wouldn't pass
testing. That's what's so insidious about 11.6: problems with it usually
don't show up until years later when someone updates the compiler and the
optimizer is more aggressive. Besides, we've already established that some
compilers hardly use it at all; certainly Janus/Ada doesn't use it much, and
Pascal says that IBM's compiler doesn't use it at all. On these compilers,
testing couldn't possibly show any 11.6 problems.
A data point to the contrary: I had to fix an 11.6 problem in an old ACVC
test for the ACATS. It had been around more than 5 years when the problem
was discovered. So I think it's unlikely that 11.6 problems have all been
discovered.
****************************************************************
From: Robert A. Duff
Sent: Saturday, March 18, 2006 6:05 AM
Welcome to the ARG list, Bibb.
Tucker Taft <stt@sofcheck.com> writes:
> Bibb Latting wrote:
>
> > ...
> > 2)
> > Shouldn't 1.1.3(8-14) include interaction with an object that has a
> > user-specified address?
>
> Perhaps, though placing an object at a particular address
> doesn't necessarily mean it is referenced or updated from outside
> the Ada code. And it is easy enough to slap a pragma Export on
> it as well, if that is the intent. On the other hand, there are
> these words in 13.3(19):
>
> If the Address of an object is specified, or it is imported or
> exported, then the implementation should not perform optimizations
> based on assumptions of no aliases.
>
> which groups objects whose Address is specified with those that
> are imported or exported. Hence, on balance, I think you are right
> that specifying the address should be roughly equivalent to exporting
> the object.
I think B.1(24, 38, 38.a) pushes in the opposite direction.
This is where it says pragma Import inhibits initialization.
If I say something like "for X'Address use Y'Address;", where Y is internal to
my program, I don't think I want reads/writes of X to be considered external
effects. If I wanted that, I would use pragma Import on X (in which case
Y'Address might not make sense).
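(A sketch of the overlay in question, with hypothetical objects:)
with Ada.Text_IO;
procedure Overlay_Demo is
   Y : Integer := 0;
   X : Integer;
   for X'Address use Y'Address;  -- X and Y share storage: a user-created alias
begin
   X := 42;
   Ada.Text_IO.Put_Line (Integer'Image (Y));
   --  Whether this read of Y must see the write to X is the 13.3(19)
   --  no-aliases question; without pragma Import, B.1(24, 38) does not
   --  treat X as externally updated.
end Overlay_Demo;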
****************************************************************
From: Robert Dewar
Sent: Saturday, March 18, 2006 7:42 PM
> A data point to the contrary: I had to fix an 11.6 problem in an old ACVC
> test for the ACATS. It had been around more than 5 years when the problem
> was discovered. So I think it's unlikely that 11.6 problems have all been
> discovered.
I can't get excited about that, the use of exceptions in the ACVC
tests is highly peculiar and non-standard, and it is not surprising
for these tests to run into 11.6 problems. That is not necessarily
indicative of problems in user code.
****************************************************************
From: Randy Brukardt
Sent: Saturday, March 18, 2006 7:53 PM
True enough, but that wasn't my point. My point was that testing doesn't
find 11.6 problems, so simply having tested code doesn't say anything
whatsoever about the presence or absence of 11.6 issues. The ACVCs are
tested by more different compilers than any other Ada code, yet 11.6 issues
lay undetected for years. Bibb had made a claim that tested code would be
less likely to have 11.6 problems than untested code, and I don't think that
is likely to be true.
****************************************************************
From: Robert Dewar
Sent: Saturday, March 18, 2006 8:24 PM
Testing finds some 11.6 problems and not others, but the same
is true of problems in general, e.g. testing finds some but not
all uninitialized variable problems. I don't see much special
about 11.6 in this regard. Whenever you write erroneous or
implementation dependent code, testing is not definitive.
****************************************************************
From: Robert I. Eachus
Sent: Saturday, March 18, 2006 9:05 PM
>I think B.1(24, 38, 38.a) pushes in the opposite direction.
>This is where it says pragma Import inhibits initialization.
>
>If I say something like "for X'Address use Y'Address;", where Y is internal to
>my program, I don't think I want reads/writes of X to be considered external
>effects. If I wanted that, I would use pragma Import on X (in which case
>Y'Address might not make sense).
I agree perfectly with what you say, but I feel that in this case it is,
hmm, a bad ramification? When I was designing an Ada compiler
optimizer, I assumed that all bets were off in the scope of user defined
aliases. To say that an address clause idiom used explicitly to
_create_ aliases should be ignored by 11.6 optimizations, but that
static address clauses should be treated as external effects, is not
helpful.
When a user provides an address clause, whether static or not, he is
providing information to the compiler, but part of what he is saying is
"Trust me here." I just don't like to either tell the user that he has
to provide more information to earn the compiler's trust, or that
certain incantations are magic.
****************************************************************
From: Randy Brukardt
Sent: Tuesday, March 21, 2006 6:22 PM
...
> I suppose another way to do it is use 'Valid, e.g.:
>
> Temp := A + B;
> return not Temp'Valid;
> exception
> when Constraint_Error => return False;
>
> but the "not in Integer'Range" seems a bit clearer.
I'm filing the mail on this thread, and something occurred to me that
apparently didn't occur to anyone reading or writing this thread:
"Integer'Range" isn't legal Ada ('Range only works on array types and
objects!).
You have to say "not in Integer" (no 'Range) for this sort of usage.
You'd think with all of the Ada experts gathered here, someone would have
noticed this...I hope we don't confuse future readers of this mail too much.
****************************************************************
From: Tucker Taft
Sent: Tuesday, March 21, 2006 8:04 PM
Randy, did you enter the "wayback" machine and
wake up in 1983? 'Range is defined for all
scalar subtypes, per RM 3.5(11-14):
For every scalar subtype S, the following attributes are defined:
S'First
S'First denotes the lower bound of the range of S.
The value of this attribute is of the type of S.
S'Last
S'Last denotes the upper bound of the range of S.
The value of this attribute is of the type of S.
S'Range
S'Range is equivalent to the range S'First .. S'Last.
****************************************************************
From: Robert Dewar
Sent: Tuesday, March 21, 2006 11:28 PM
> You have to say "not in Integer" (no 'Range) for this sort of usage.
Especially interesting given that the Integer'Range notation was
promoted as a standard and obvious way of solving this problem.
I still find testing whether an integer expression is "in Integer"
to be dubious and certainly non-obvious.
****************************************************************
From: Robert Dewar
Sent: Tuesday, March 21, 2006 11:30 PM
> Randy, did you enter the "wayback" machine and
> wake up in 1983? 'Range is defined for all
> scalar subtypes, per RM 3.5(11-14):
OK, but surely you don't want to promote the
use of Integer'Range when this is Ada 95
specific. Lots of major users of Ada never
woke up after Ada 83, and it seems better here
to promote as a standard solution something
that is guaranteed to work in all versions
of Ada.
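(For reference, the equivalent spellings of the test -- the first requires
Ada 95; the other two work in Ada 83 as well:)
--  V in Integer'Range
--  V in Integer'First .. Integer'Last
--  V in Integer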
****************************************************************
From: Tucker Taft
Sent: Wednesday, March 22, 2006 7:14 AM
Whatever. We are talking about new code, I presume.
Blah'First .. Blah'Last is universal, in any case.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, March 22, 2006 2:30 PM
> Randy, did you enter the "wayback" machine and
> wake up in 1983? 'Range is defined for all
> scalar subtypes, per RM 3.5(11-14):
Apparently. I even looked it up in Annex K and didn't see that it was
defined. It is there *now* though - I wonder who was futzing with my RM last
night?
In any case, this is a construct that always falls afoul of the Duff rule
(don't have two ways to write the same thing), so I would never
intentionally write it -- I suppose that's why it looks wrong to me.
****************************************************************
From: Tucker Taft
Sent: Wednesday, March 22, 2006 2:51 PM
Perhaps, though "Integer'Range" is the kind of thing that I
wrote in Ada 83 all the time, and then was surprised that
it didn't work, because 'Range meant "'First .. 'Last"
in other contexts. So for me the redundancy runs the
other way.
****************************************************************
From: Robert A. Duff
Sent: Wednesday, March 22, 2006 3:07 PM
> > ...
> > In any case, this is a construct that always falls afoul of the Duff rule
> > (don't have two ways to write the same thing), so I would never
> > intentionally write it -- I suppose that's why it looks wrong to me.
>
> Perhaps, though "Integer'Range" is the kind of thing that I
> wrote in Ada 83 all the time, ...
Me, too.
On the other hand, I do agree with the "Duff rule" Randy cites above.
I don't say Integer'Range, except by accident. ;-)
****************************************************************
Questions? Ask the ACAA Technical Agent