Version 1.4 of ai12s/ai12-0092-1.txt
!standard 1.1.5(4) 14-10-13 AI12-0092-1/01
!class Amendment 13-11-01
!status received 13-08-29
!priority Low
!difficulty Medium
!subject Soft Legality Rules
!summary
Soft legality rules are introduced.
!problem
In numerous cases, the ARG has gotten stuck between a rock and a hard place:
Some situation really IS an error, so we want it to be detected, preferably at
compile time. But making it illegal is an incompatibility. The ARG then had to
choose between allowing errors to go undetected (bad) and breaking existing code
(also bad).
!proposal
(See summary.)
!wording
RM-1.1.2 says:
24/3 Each section is divided into subclauses that have a common structure.
Each clause and subclause first introduces its subject. After the
introductory text, text is labeled with the following headings:
Legality Rules
27 Rules that are enforced at compile time. A construct is legal if it
obeys all of the Legality Rules.
...
Post-Compilation Rules
29 Rules that are enforced before running a partition. A partition is legal
if its compilation units are legal and it obeys all of the Post-Compilation
Rules.
Change RM-1.1.2-(27) to:
27 Rules that are enforced at compile time. A compilation_unit is legal if
it is syntactically well formed and it obeys all the the Legality Rules. If
a compilation_unit is illegal, the implementation shall issue one or more
diagnostic messages indicating that fact.
27.1 Each Legality Rule is either "hard" or "soft". Legality Rules are
hard, unless explicitly specified as soft. Redundant[The run-time
semantics are well defined even in the presence of violations of soft
Legality Rules. There is a mode in which such violations do not
prevent the program from running (see 10.2).] The hard/soft distinction
applies in the same way to Syntax and Post-Compilation Rules.
Change RM-1.1.2-(29) to:
29 Rules that are enforced before running a partition. A partition is legal
if its compilation units are legal and it obeys all of the Post-Compilation
Rules. If a partition is illegal, the implementation shall issue one or more
diagnostic messages indicating that fact.
NOTE: Violation of Syntax, Legality, and Post-Compilation Rules requires a
diagnostic message, whether the rule is hard or soft.
AARM: The form/wording of diagnostic messages is not defined by the standard.
AARM: There is no implication regarding the severity of soft rules, nor the
probability that they represent real errors. One soft Legality Rule might be a
serious issue, while another might be no big deal.
Editor's note: I expect we can use the heading:
Legality Rules (soft)
to indicate soft rules in most cases. In some cases, we might want to be more
explicit, as in "This is a soft Legality Rule". No need to get all pedantic
here.
RM-1.1.5 says:
1 The language definition classifies errors into several different
categories:
2 * Errors that are required to be detected prior to run time by every
Ada implementation;
3 These errors correspond to any violation of a rule given in this
International Standard, other than those listed below. In
particular, violation of any rule that uses the terms shall,
allowed, permitted, legal, or illegal belongs to this category. Any
program that contains such an error is not a legal Ada program; on
the other hand, the fact that a program is legal does not mean, per
se, that the program is free from other forms of error.
4 The rules are further classified as either compile time rules, or
post compilation rules, depending on whether a violation has to be
detected at the time a compilation unit is submitted to the
compiler, or may be postponed until the time a compilation unit is
incorporated into a partition of a program.
That's Ada-83 talk -- since Ada 95, we haven't used magic words "shall",
"allowed", etc. to indicate kinds of errors -- we explicitly label them Legality
Rules. So delete most of the above:
1 The language definition classifies errors into several different
categories:
2 * Errors that are required to be detected prior to run time by every
Ada implementation;
3 These errors correspond to any violation of a Syntax Rule, Legality
Rule, or Post-Compilation Rule.
RM-10.2 says:
27 The implementation shall ensure that all compilation units included in a
partition are consistent with one another, and are legal according to the
rules of the language.
Change it to:
27 The implementation shall ensure that all compilation units included in a
partition are consistent with one another. A violation of a hard Legality
Rule prevents the partition from running. For each soft Legality Rule, the
implementation shall provide two modes: one in which a violation of the rule
prevents the partition from running, and the other in which a violation does
not prevent running. Similar requirements apply to hard/soft Syntax and
Post-Compilation Rules.
!discussion
The primary purpose of soft Legality Rules is to enable the addition of new
Legality Rules without causing incompatibilities. An example from the past is
when we added rules disallowing passing the "same" actual parameter to multiple
formal 'out' parameters of a function. Since functions did not previously allow
'out' parameters at all, there was no incompatibility. For uniformity, it made
sense to extend these rules to procedures. But that's incompatible, and in
practice, existing code sometimes does:
Some_Procedure(..., Ignore, Ignore);
There was no good solution: We had a choice between inferior language rules and
incompatibilities. The "soft" concept solves the dilemma: We could have made the
new rules soft, thus providing diagnostic messages for questionable code, while
still retaining compatibility. Individual projects can then decide if or when to
modify their code to obey the rules. The language designers cannot effectively
make such decisions; we have no idea how costly it is to coordinate source-code
changes, possibly across multiple organizations.
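For illustration, here is a hypothetical example (procedure and names invented
for this AI, not taken from real code) of the kind of call the Ada 2012
overlapping-actuals rule rejects, and which a soft rule would merely diagnose:

   procedure Demo is
      procedure Split (Input : Integer; Quotient, Remainder : out Integer) is
      begin
         Quotient  := Input / 3;
         Remainder := Input mod 3;
      end Split;
      Ignore : Integer;
   begin
      Split (10, Ignore, Ignore);  -- The same variable is passed to two 'out'
                                   -- parameters: illegal in Ada 2012, legal in
                                   -- Ada 2005, and harmless here because both
                                   -- results are discarded.
   end Demo;

Had the rule been soft, a conforming compiler would still be required to issue a
diagnostic message for the call, but in the permissive mode the program would
compile and run with the usual parameter-passing semantics.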
Whether we should retroactively soften the 'out' rules should be the subject
of another AI, as should whether "unrecognized pragma" ought to be a soft
Legality Rule. Other possible soft rules:
New reserved words (see the example following this list).
Limited function return (when not immutably limited).
Requirement that an overriding "=" come early enough to compose.
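For instance, had the Ada 2005 reservation of "interface" been a soft rule,
this hypothetical Ada 95 fragment (names invented for illustration) would have
drawn a required diagnostic, yet could still have been compiled and run with
its Ada 95 meaning in the permissive mode:

   package Driver_Config is
      Interface : Natural := 0;  -- Legal in Ada 95; illegal since Ada 2005,
                                 -- because "interface" became a reserved word.
   end Driver_Config;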
Rejected terminology: Several alternatives were suggested. "fatal
error"/"nonfatal error" -- no good because "fatal error" usually means something
else (in particular, an error that stops a program dead in its tracks). A
compiler might consider "missing source file" to be a fatal error. "hard
error"/"soft error" and "major error"/"minor error" were also considered. But
it's confusing to use the word "error" at all, because the RM already uses
"error" to refer to bounded errors, exceptions, and erroneous behavior.
"Required warning" was also suggested, but people found that to be too weak.
In any case, no terminology can be perfect here, because only the programmer can
decide whether a given diagnostic message should be taken seriously. For
example, if you port code containing pragma Abort_Defer from GNAT to a compiler
that doesn't support Abort_Defer, the RM requires a warning. But that's a
serious error: the program needs to be redesigned. If the pragma is Check, on
the other hand, the warnings can be safely ignored.
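To make that concrete, consider this hypothetical unit (Abort_Defer and Check
are GNAT-defined pragmas; the procedure name is invented for illustration).
Compiled by an implementation that recognizes neither pragma, both lines draw
the required "unrecognized pragma" warning, but only one of the warnings
matters:

   procedure Demo_Pragmas is
      X : Integer := 1;
   begin
      pragma Abort_Defer;              -- Ignoring this changes the program's
                                       -- abort semantics; the warning signals
                                       -- a problem that needs a redesign.
      pragma Check (Critical, X > 0);  -- Ignoring this merely skips an
                                       -- assertion; the warning is harmless.
      null;
   end Demo_Pragmas;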
The point of soft errors is to require a diagnostic message. We leave it up to
implementations whether to word their messages with "warning:" or "HORRIBLE
ERROR:" or "eh, no big deal", or whatever. And we leave it up to programmers to
decide when and how to change their code.
!ASIS
No ASIS impact.
!ACATS test
Soft legality rules should be tested via B tests in the usual way (requiring
diagnostic messages). In addition, a new C test should be added that violates a
soft legality rule, and is expected to run in the "allow to run" mode.
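A minimal sketch of such a C test (hypothetical test name; assumes the 6.4.1
rule against passing the same actual to multiple 'out' parameters were made
soft, and uses the standard ACATS Report package):

   with Report;
   procedure Cxxxxxx is
      procedure P (A, B : out Integer) is
      begin
         A := 1;
         B := 2;
      end P;
      Ignore : Integer;
   begin
      Report.Test ("CXXXXXX",
                   "Violations of soft legality rules do not prevent running");
      P (Ignore, Ignore);  -- Violates the (hypothetically soft) rule; a
                           -- diagnostic is required, but in the "allow to run"
                           -- mode the partition still executes.
      Report.Result;
   end Cxxxxxx;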
!appendix
From: Bob Duff
Sent: Thursday, August 29, 2013 3:25 PM
I would like to propose a new AI on the subject of "nonfatal errors".
The recent discussions under the subject "aggregates and variant parts"
reminded me of this. I'm talking about the sub-thread related to nested
variants, not the main thread about aggregates.
My idea is that a nonfatal error is a legality error. The compiler is
required to detect nonfatal errors at compile time[*], just as it is for any
other legality errors. However, a nonfatal error does not stop the program from
running. (This of course implies that we must have well-defined run-time
semantics in the presence of nonfatal errors.)
[*] Or at "link time", if marked as a "Post-Compilation Rule".
In numerous cases, ARG has gotten stuck between a rock and a hard place: Some
situation really IS an error, so we want it to be detected, preferably at
compile time. But making it illegal is an incompatibility. ARG had to choose
between allowing errors to go undetected (bad) and breaking existing code (also
bad).
When that happens in the future, I propose that we define the error situation to
be a "nonfatal error". We get the best of both worlds: the error must be
detected, but there is no incompatibility.
Example from the "aggregates and variant parts" discussion:
It was suggested that something like this:
type Color is (Red, Orange, Yellow);
type T(D: Color) is
record
case D is
when Red | Orange =>
X : Integer;
case D is
when Red =>
Y : Integer;
when Orange =>
null;
when others =>
Bogus : Integer; -- Wrong!
end case;
when Yellow =>
null;
end case;
end record;
is an error, because the Bogus component can never exist.
One should write "when others => null; -- can't happen".
But it would be completely irresponsible for ARG to make that illegal, because
it would be incompatible. Solution: we could make it a nonfatal error, if we
think it's important to detect it.
!wording
RM-1.1.5 says:
1 The language definition classifies errors into several different
categories:
2 * Errors that are required to be detected prior to run time by every Ada
implementation;
3 These errors correspond to any violation of a rule given in this
International Standard, other than those listed below. In particular,
violation of any rule that uses the terms shall, allowed, permitted,
legal, or illegal belongs to this category. Any program that contains
such an error is not a legal Ada program; on the other hand, the fact
that a program is legal does not mean, per se, that the program is
free from other forms of error.
4 The rules are further classified as either compile time rules, or post
compilation rules, depending on whether a violation has to be detected
at the time a compilation unit is submitted to the compiler, or may be
postponed until the time a compilation unit is incorporated into a
partition of a program.
RM-2.8 says:
Implementation Requirements
13 The implementation shall give a warning message for an unrecognized pragma name.
13.a Ramification: An implementation is also allowed to have modes in
which a warning message is suppressed, or in which the presence of
an unrecognized pragma is a compile-time error.
I suggest moving the pragma-specific stuff into 1.1.5 and generalizing it.
Add after 1.1.5(4):
When such an error is detected, the implementation shall issue
a diagnostic message. Redundant[This International Standard
does not define the form or content of diagnostic messages.]
[Note to anyone who complains that we don't have a precise mathematical
definition of "diagnostic message": Well, we don't have a definition of
"warning", either, yet the sky didn't fall when we wrote 2.8(13)! We also don't
have a definition of what it means to "detect", but everybody knows (informally)
what it means.]
By default, a legality error is a "fatal error". Fatal errors
prevent the program from running (see 10.2). Some legality errors
are explicitly defined by this International Standard to be
"nonfatal errors". Nonfatal errors do not prevent the program
from running.
AARM Ramification: An implementation is also allowed to have
modes in which a nonfatal error is ignored, or in which a
nonfatal error is treated as a fatal error.
[???If it makes people more comfortable, we could require the
latter mode, by adding a normative rule to the RM: An
implementation shall provide a mode in which nonfatal errors
are treated as fatal errors.]
RM-10.2 says:
27 The implementation shall ensure that all compilation units included in a
partition are consistent with one another, and are legal according to the
rules of the language.
Change that to:
27 The implementation shall ensure that all compilation units included in a
partition are consistent with one another, and do not contain fatal errors.
Redundant[This implies that such partitions cannot be run. Partitions may
contain nonfatal errors.]
Change 2.8(13) to:
Legality Rules
13 A pragma name that is not recognized by the implementation is illegal.
A violation of this rule is a nonfatal error.
!discussion
Another example is "interface". Using (say) "begin" as an identifier is a fatal
error, and that's fine. But we should have said that using "interface" as an
identifier is a nonfatal error. That would have avoided users wasting huge
amounts of money converting existing code (since "interface" is a widely-used
identifier).
When I originally proposed this idea, I called it "required warnings".
Some folks were worried that programmers might ignore what are considered "mere"
warnings. Calling it a "nonfatal error" makes it clearer that these really are
errors. You really should fix them, unless you are in a situation where it is
very expensive to make ANY modifications to existing code. (In the unrecognized
pragma case, I guess you would "fix" the error (i.e. warning) by suppressing
it.)
In any case, it should be up to programmers to decide whether fixing nonfatal
errors is cost-effective. That is not our job as language designers.
****************************************************************
From: Tucker Taft
Sent: Thursday, August 29, 2013 3:25 PM
> I would like to propose a new AI on the subject of "nonfatal errors"...
Makes sense to me.
****************************************************************
From: Tullio Vardanega
Sent: Friday, August 30, 2013 3:50 AM
Interesting.
****************************************************************
From: Randy Brukardt
Sent: Friday, August 30, 2013 4:05 PM
I'm not going to comment on the merits of the idea now.
But I think the terminology is wrong in that it is different than typical usage
of the term.
...
> By default, a legality error is a "fatal error". Fatal errors
> prevent the program from running (see 10.2).
This is not the typical meaning of "fatal error". In pretty much every program
I've ever used, "fatal error" means an error that terminates processing
*immediately*. That is, a "fatal error" can have no recovery. That's not how you
are using it here (certainly, you don't mean to require Ada compilers to detect
no more than one error per compilation attempt).
The canonical use of "fatal error" in Janus/Ada is when the specified source
file cannot be found, but we also use the modifier on some of the
language-defined rules when we believe recovery is likely to cause many bogus
errors. For instance, we treat all context clause errors as fatal in that
continuing with an incomplete or corrupt symbol table is unlikely to provide any
value.
I don't think Ada should use an existing and commonly used term in an
inconsistent manner with the rest of the world. There must be a better term that
doesn't imply immediate termination of processing.
****************************************************************
From: Bob Duff
Sent: Friday, August 30, 2013 4:44 PM
> ...
> > By default, a legality error is a "fatal error". Fatal errors
> > prevent the program from running (see 10.2).
>
> This is not the typical meaning of "fatal error". In pretty much every
> program I've ever used, "fatal error" means an error that terminates
> processing *immediately*. That is, a "fatal error" can have no recovery.
> That's not how you are using it here (certainly, you don't mean to
> require Ada compilers to detect no more than one error per compilation attempt).
Good point. Let's first discuss the merits of the idea, and then later try to
come up with a better term.
History: I first called it "required warning". But you objected that "warning"
is too mild a term -- some folks might ignore warnings. I have no sympathy for
people who deliberately put pennies in fuse boxes (i.e. ignore warnings), but in
a futile attempt to appease you, I came up with a term that contains the word
"error".
But let's try to ignore the term for now, and concentrate on my goal:
to get ARG to quit introducing gratuitous incompatibilities.
That is, to give ARG an "out" -- a way to say, "we really think this ought to be
illegal, but if you have 10 million lines of code scattered across 17
organizations[*] you don't absolutely have to fix these errors -- your call, you
can choose to ignore these errors and still run your programs".
[*] I gather that that was the situation reported by Robert, with some company
that used Interface as the name of lots of child packages.
****************************************************************
From: Tucker Taft
Sent: Friday, August 30, 2013 4:55 PM
survivable error?
recoverable error?
****************************************************************
From: Randy Brukardt
Sent: Friday, August 30, 2013 5:05 PM
> I would like to propose a new AI on the subject of "nonfatal errors".
Certainly you can propose it. I'm against it in its current form, but the fixes
are simple. I previously commented on "fatal".
...
> Example from the "aggregates and variant parts" discussion:
> It was suggested that something like this:
>
> type Color is (Red, Orange, Yellow);
>
> type T(D: Color) is
> record
> case D is
> when Red | Orange =>
> X : Integer;
> case D is
> when Red =>
> Y : Integer;
> when Orange =>
> null;
> when others =>
> Bogus : Integer; -- Wrong!
> end case;
> when Yellow =>
> null;
> end case;
> end record;
>
> is an error, because the Bogus component can never exist.
> One should write "when others => null; -- can't happen".
This is a terrible idea, irrespective of the compatibility issue. That's because
the definition and implementation of such a rule would be fairly complex, and it
fixes nothing (the real problem is that the others clause is virtually required
because values that can't happen must be covered in the variant). Solving a
symptom rather than the actual problem is a terrible use of resources.
I think you need to make a much more compelling example in order to make this
idea worth even having. In the past when the idea was suggested, we essentially
determined that the problem really wasn't worth fixing (as in the above, or as
in the value always out of range problem). The only errors that the language
should be mandating are those that are virtually always an error; under no
circumstances should programmers be "suppressing" language-defined errors (of
any kind). They should either fix them, or simply live with the
"executable-error" error. Warnings are a different kettle of fish in that way, I
think.
If you had suggested making the trivial fix to the underlying problem, I think
you would have had a stronger case.
That is, in the above, the coverage of the inner variant should be as if the
nominal subtype of the discriminant has a static predicate that matches the
immediately enclosing variant alternative. That would be an "executable error"
(what you called a "non-fatal error"), if the coverage is OK for the nominal
subtype of the discriminant and a "non-executable error" (what you misdescribed
as a "fatal error") otherwise.
That would make sense, as it would eliminate the compatibility problem by
allowing compilation if the coverage is as it used to be, but would strongly
encourage doing the right thing.
> In any case, it should be up to programmers to decide whether fixing
> nonfatal errors is cost-effective. That is not our job as language
> designers.
I agree, but again this is a matter of description. If you call these "errors",
then the intent is that they really reflect something wrong. (Warnings are not
like this, they might reflect something dubious that could be OK.) As such, the
language shouldn't be encouraging them to be left in code. The reason for
leaving them in code is to be able to use existing code that cannot be
practically changed, not to allow sloppy programming.
That has a very significant impact on what can be categorized this way. We must
not categorize anything that might be legitimate usage as a "non-fatal error"
(or whatever the term might be). For instance, calling a static expression that
will always be outside of its static subtype an "error" of any kind is a very
bad idea. (These are very common in dead code, as settings of parameters often
lead to situations where values are outside of unused null ranges, expressions
are divided by zero, and the like.)
That also suggests that the suggestion of changing unrecognized pragmas to a
"non-fatal error" must be opposed. That is a capability that is commonly used in
portable code. Claw, for instance, contains many Gnat-specific pragmas that are
just harmlessly ignored in other compilers. To claim that this is somehow an
"error" would be a major disconnect with reality, IMHO.
Basic conclusion here: terminology matters, and in this case, it is pretty much
the only thing that matters. The actual language rules are far less important
than the impression given by the terminology, because most programmers will only
know the terminology, not the language rules.
****************************************************************
From: Randy Brukardt
Sent: Friday, August 30, 2013 5:18 PM
...
> But let's try to ignore the term for now, and concentrate on my goal:
> to get ARG to quit introducing gratuitous incompatibilities.
I just finished writing a message that essentially concludes that the *only*
thing important here is the terminology. We need to decide on that before we can
even begin to understand what might fall into that category. For instance, an
unrecognized pragma clearly is a "warning" (it is something that makes perfect
sense to ignore), while a "soft error" is still an error - you only ignore it in
frozen code (or code primarily maintained for some previous version of Ada) and
fix it in all other cases.
> That is, to give ARG an "out" -- a way to say, "we really think this
> ought to be illegal, but if you have 10 million lines of code
> scattered across 17 organizations[*] you don't absolutely have to fix
> these errors
> -- your call, you can choose to ignore these errors and still run your
> programs".
>
> [*] I gather that that was the situation reported by Robert, with some
> company that used Interface as the name of lots of child packages.
I'm sympathetic with the goal, but I'm dubious that there are any such
situations. The bad problems (like composition of untagged records) would not be
helped by this (the incompatibility is mostly at runtime, and the compile-time
incompatibilities are necessary to have any sensible semantics for composition).
That's pretty common; many incompatibilities are caused by semantic necessities.
The trivial problems (such as the recent nested variant problem) might be helped,
but it's unclear that they're worth fixing in the first place if there is any
sniff of a compatibility issue. (You, for instance, claimed that that one was
not.)
We've discussed this in the context of other AIs, and yet I cannot recall any
situation where this would have ultimately helped. (And its existence might even
prevent us from finding a better solution that doesn't have any incompatibility,
because we might quit looking earlier. Not that that possibility would factor
into my vote much.)
I think this is much like Tucker's pragma Feature -- an idea that sounds good on
the surface, but never actually would get used in practice. (Although maybe
pragma Feature would have gotten used had Tucker actually made a concrete
proposal as to what "features" it encompassed.) And I expect it to end up in the
same place -- the "No Action" pile. Feel free to prove me wrong.
****************************************************************
From: Bob Duff
Sent: Friday, August 30, 2013 6:14 PM
> Basic conclusion here: terminology matters, and in this case, it is
> pretty much the only thing that matters. The actual language rules are
> far less important than the impression given by the terminology,
> because most programmers will only know the terminology, not the language
> rules.
Yeah, except that we don't really have any control over what terms the user
sees. That is, we don't define what diagnostic messages look like.
A compiler could say "missing semicolon", or "Syntax Error: missing semicolon",
or "Minor Warning, no big deal: missing semicolon", and all those are conforming
implementations, so long as the implementation doesn't allow programs with
missing semicolons to run.
Yes, the terms are important, but we don't control them in practice.
Users don't read the RM, they read the diagnostic messages.
(I hope "diagnostic message" is a neutral term I can use that doesn't indicate
whether it's an "error" or "likely error" or "possible error" or whatever.)
****************************************************************
From: Bob Duff
Sent: Friday, August 30, 2013 6:30 PM
> I'm sympathetic with the goal, but I'm dubious that there are any such
> situations.
I've mentioned half-a-dozen or so during the last few months, as they came up.
Cases where one person says, "Yeah, but that would be INCOMPATIBLE!", and the
other person says, "Yeah, but that is just WRONG!". I'm trying to defuse that
sort of conflict.
All I ask is that we keep an open mind to the idea that we CAN require detection
of errors at compile time, while STILL requiring that the implementation run the
program. And don't reject that idea based on pedantic concerns about the formal
definition of "detect" and "give a diagnostic message" and "error vs. warning"
and so on.
> ...the "No Action" pile. Feel free to prove me wrong.
To prove you wrong, I could go through all the (compile time) incompatibilities
introduced in 95, 2005, 2012, and analyze them. I'll bet there are dozens of
cases. I'm not sure I have the time to do that.
One that comes to mind right now: the new rules about 'in out' parameters being
mutually conflicting or some such. I don't understand those rules, but I think
we found a bunch of incompatibilities in the test suite.
****************************************************************
From: Randy Brukardt
Sent: Friday, August 30, 2013 6:40 PM
> > Basic conclusion here: terminology matters, and in this case, it is
> > pretty much the only thing that matters. The actual language rules
> > are far less important than the impression given by the terminology,
> > because most programmers will only know the terminology, not the
> > language rules.
>
> Yeah, except that we don't really have any control over what terms the
> user sees. That is, we don't define what diagnostic messages look
> like.
True, but compiler vendors try to stay fairly close to the RM terminology.
In most cases where we didn't do that, we came to regret it.
> A compiler could say "missing semicolon", or "Syntax Error:
> missing semicolon", or "Minor Warning, no big deal: missing
> semicolon", and all those are conforming implementations, so long as
> the implementation doesn't allow programs with missing semicolons to
> run.
Or "*SYNTAX ERROR* Missing semicolon" :-)
My problem with "fatal error" is that we have lots of messages with that in
it:
"*FATAL ERROR* Missing source"
I don't want to get the RM and our messages that far out of sync.
> Yes, the terms are important, but we don't control them in practice.
>
> Users don't read the RM, they read the diagnostic messages.
> (I hope "diagnostic message" is a neutral term I can use that doesn't
> indicate whether it's an "error" or "likely error"
> or "possible error" or whatever.)
(Yes it's neutral enough.)
I think my point is that the difference between (using your original terms) a
"fatal error", a "non-fatal error", and a "warning" is fuzzy enough that vendors
will want to stay quite close to the RM terminology. That's especially true in
that 3rd party documents (web sites, books, etc.) that explain these differences
are usually going to stick very close to the RM terminology. So as a practical
matter, I think that vendors *could* stray a long way from the RM terminology,
but there are lots of powerful reasons for not doing so. Maybe AdaCore could get
away with it, but few other vendors can.
And the terminology matters a huge amount here: no one should be ignoring errors
except in exceptional circumstances whereas warnings are ignorable with
justification (examples given in previous messages). For Janus/Ada, I've been
thinking about separating some "warnings" into "informations", as it's hard to
tell in Janus/Ada whether a warning should really be addressed or whether its
information about something that might be important to know but often is
irrelevant. Even though there is no practical difference, the difference in
terminology would clarify things. The same would be true in the RM.
****************************************************************
From: Randy Brukardt
Sent: Friday, August 30, 2013 6:53 PM
...
> One that comes to mind right now: the new rules about 'in out'
> parameters being mutually conflicting or some such. I don't
> understand those rules, but I think we found a bunch of
> incompatibilities in the test suite.
We knew that there were incompatibilities there; those represent very dubious
code that should never have been written. The real question is whether we were
wrong in that judgment, but presuming that we weren't, it's better off this way.
We've always tolerated incompatibilities that find real bugs.
The incompatibilities that we've introduced into the language to date were
considered acceptable (for whatever reason), and they would not be relevant to
your proposed feature. I see no scenario where we could avoid *all*
incompatibilities by having a feature like this. It could do nothing for runtime
incompatibilities, nor can it help if an incompatibility is necessary to have
the language make semantic sense (many Binding Interpretations are in this
category, as are the added legality rules for untagged record composition). So
having a minor incompatibility for a high-value change does not bother me at
all, and indeed I could easily imagine being against classifying one of these as
"soft errors" or whatever we decide to call it, believing that requiring
correction is important.
What would be relevant is cases where we decided not to fix the problem at all
(the nested variant issue most likely will be such a case) or where we adopted a
sub-optimal solution because of compatibility concerns. I don't know of any
practical way to find those in the past. (Re-reading all of the AIs does not
count as "practical".) When I said, "feel free to prove me wrong", I really
meant going forward. (We won't be seriously considering Amendment AIs for years
to come, so we can see if there are any compelling examples in the intervening
years.) There won't be an answer to that challenge until 2018 at the earliest!
****************************************************************
From: Robert Dewar
Sent: Friday, August 30, 2013 9:38 PM
>> A compiler could say "missing semicolon", or "Syntax Error:
>> missing semicolon", or "Minor Warning, no big deal: missing
>> semicolon", and all those are conforming implementations, so long as
>> the implementation doesn't allow programs with missing semicolons to
>> run.
More accurately "as long as the implementation has a mode in which it does not
allow programs with missing semicolons to run".
...
> I think my point is that the difference between (using your original
> terms) a "fatal error", a "non-fatal error", and a "warning" is fuzzy
> enough that vendors will want to stay quite close to the RM
> terminology. That's especially true in that 3rd party documents (web
> sites, books, etc.) that explain these differences are usually going
> to stick very close to the RM terminology. So as a practical matter, I
> think that vendors *could* stray a long way from the RM terminology,
> but there are lots of powerful reasons for not doing so. Maybe AdaCore
> could get away with it, but few other vendors can.
We avoid RM terminology where it is confusing. For instance we say package spec
instead of package declaration, because that's what most programmers say. And we
would not use "package" in a message expecting a programmer to know that a
generic package is not a package. There are lots of obscure terms in the RM
better avoided in error messages (most programmers these days don't read the RM
much!)
> And the terminology matters a huge amount here: no one should be
> ignoring errors except in exceptional circumstances whereas warnings
> are ignorable with justification (examples given in previous
> messages). For Janus/Ada, I've been thinking about separating some
> "warnings" into "informations", as it's hard to tell in Janus/Ada
> whether a warning should really be addressed or whether its
> information about something that might be important to know but often
> is irrelevant. Even though there is no practical difference, the
> difference in terminology would clarify things. The same would be true in the
> RM.
GNAT distinguishes between "info" messages and "warning" messages
****************************************************************
From: Arnaud Charlet
Sent: Saturday, August 31, 2013 4:47 AM
> > One that comes to mind right now: the new rules about 'in out'
> > parameters being mutually conflicting or some such. I don't
> > understand those rules, but I think we found a bunch of
> > incompatibilities in the test suite.
>
> We knew that there were incompatibilities there; those represent very
> dubious code that should never have been written. The real question is
> whether we were wrong in that judgment, but presuming that we weren't,
> it's better off this way. We've always tolerated incompatibilities
> that find real bugs.
As shown by customer code and by many ACATS tests (you have received a bunch of
ACATS petitions for Ada 2012 from us about this), we were pretty wrong: people
use a common idiom when they simply want to ignore the out parameters, using a
single variable, e.g:
Proc1 (Input, Ignore_Out, Ignore_Out);
is *very* common and changing all that code is a real pain for users.
Bob is right, this rule is a good example where a "soft" error would have been
more useful than a "hard" error.
I personally find "hard error" and "soft error" good names FWIW.
****************************************************************
From: Bob Duff
Sent: Saturday, August 31, 2013 9:03 AM
> As shown by customer code and by many ACATS tests (you have received a
> bunch of ACATS petitions for Ada 2012 from us about this), we were pretty wrong:
> people use a common idiom when they simply want to ignore the out
> parameters, using a single variable, e.g:
>
> Proc1 (Input, Ignore_Out, Ignore_Out);
>
> is *very* common and changing all that code is a real pain for users.
And that code is completely harmless!
> Bob is right, this rule is a good example where a "soft" error would
> have been more useful than a "hard" error.
So let's go back and make some of these 2005/2012 incompatibilities into soft
errors. It's not too late. But ARG should consider that high priority -- the
rest of its work can wait several years.
If ARG doesn't do that, I think perhaps AdaCore should have a nonstandard mode
that does it.
> I personally find "hard error" and "soft error" good names FWIW.
Yes, I like it, too. Or instead of "error", talk about "legality":
In each case, we can say something like:
Blah blah shall not blah blah. This rule is a soft legality rule.
And put something in the "classification of errors" section in chap 1 making an
exception for soft legality rules. The rule in 10.2 also needs work.
I suggest: All legality rules require a diagnostic message. (No, I can't
formally define that -- so what?) An implementation must have two modes: one in
which soft errors prevent the program from running, and one in which they do
not.
****************************************************************
From: Randy Brukardt
Sent: Sunday, September 1, 2013 5:52 PM
> > As shown by customer code and by many ACATS tests (you have received
> > a bunch of ACATS petitions for Ada 2012 from us about this), we were
> > pretty wrong:
> > people use a common idiom when they simply want to ignore the out
> > parameters, using a single variable, e.g:
> >
> > Proc1 (Input, Ignore_Out, Ignore_Out);
> >
> > is *very* common and changing all that code is a real pain for users.
>
> And that code is completely harmless!
I can believe that this happens, but I find it hard to believe that it is "very
common". (Ignoring ACATS tests; ACATS tests often don't reflect the way Ada code
is really used, so I don't much care about incompatibilities that show up in
them.)
Code like the above requires three unlikely things to occur:
(1) Ignoring of one or more parameters is not dangerous at the call site. Most
"in out" and "out" parameters can't be unconditionally ignored. They might
have circumstances where they aren't meaningful, but those are usually tied
to the values of other "out" parameters. So unconditionally ignoring a
parameter meaning assuming that the value of another parameter, which is
always a bad idea. (Ignoring error codes on return from a routine is the
most common example of the danger of doing this.)
(2) The specification of the routine is designed such that it is necessary to
ignore parameters. One hopes that Ada routines don't have unused parameters
and the like; Ada has default parameters and overloading which can easily be
used to reduce the occurrences of such subprograms to rare usages.
(3) For the above to occur, you have to have two or more "out" parameters of the
same type. If you're using strong typing, this is pretty unlikely. I cannot
think of any case where this has ever happened in my code, as out parameters
are most often used for returning multiple entities together from something
that would otherwise be a function. Those entities are almost always of
different types.
Perhaps there are cases not involving ignoring of results that are also involved
here, if this is indeed very common.
In any case, if there truly are a lot of cases where this check is in fact
rejecting legitimate code, then I think that it should be removed altogether.
The idea behind a "soft error" is that it reflects something wrong that doesn't
have to be fixed immediately. It is not a case where the "error" should be ignored
forever (unless of course it is impossible to change the source code).
In this particular case, the reason for the rule applying to procedures was
simply that it didn't make sense to say that you can't do this for functions if
you could do it for procedures. If that's not true, then it probably shouldn't
apply to anything.
> > Bob is right, this rule is a good example where a "soft" error would
> > have been more useful than a "hard" error.
>
> So let's go back and make some of these 2005/2012 incompatibilities
> into soft errors. It's not too late. But ARG should consider that
> high priority -- the rest of its work can wait several years.
This seems like a complete waste of time. It only makes sense for "soft errors"
to be those where the semantics are well-defined if the error is not required to
be detected. There are very few such errors in Ada. Moreover, it would take an
immense amount of analysis to differentiate errors that exist for semantic
reasons (like the untagged record equality Legality Rules) and those that could
be "soft errors". Getting it wrong would be very bad, as we would have programs
with undefined semantics executing. (We certainly would have to have tests
containing "soft errors" in the ACATS, and that seems unpleasant.)
I've said it before, but I think this "soft error" idea seems appealing at
first, but I don't think there are many circumstances where it actually could be
applied. In most such cases, the check itself is dubious and quite likely we
don't really want or need it; how it's reported is not the real issue.
I think it is fine to keep this idea in our back pocket in case we find a
situation where it would allow making a change that otherwise would be
impossible. But I don't see any reason to try to go back and revisit the last 8
years of Ada development to try to retrofit this idea. That sounds like
rehashing every argument we've ever had about the direction of Ada.
****************************************************************
From: Jean-Pierre Rosen
Sent: Monday, September 2, 2013 3:55 AM
> I personally find "hard error" and "soft error" good names FWIW.
I'd prefer "major" and "minor" errors, FWIW...
As for the rest, I also think there is some value in the idea, but that it looks
like another of these brilliant solutions looking desperately for a problem to
solve...
****************************************************************
From: Bob Duff
Sent: Monday, September 2, 2013 8:32 AM
> I'd prefer "major" and "minor" errors, FWIW...
I don't think that gives the right impression. There is no implication that
soft errors are more "minor". Programmers should take soft error messages
seriously.
At least some soft errors will be errors that we would make into normal
(hard) legality rules, except for the compatibility concern.
Only the programmer can decide how "minor" the soft errors are, and whether to
fix them. Randy doesn't want to make "unrecognized pragma" into a soft error,
and I don't want to fight about that, but the same point applies to warnings: If
you're porting from GNAT to some other compiler, and you see "unrecognized
pragma Abort_Defer", that's probably a very serious error that must be fixed.
On the other hand, if you see "unrecognized pragma Check", that's no big deal --
the program will work fine while ignoring pragmas Check.
> As for the rest, I also think there is some value in the idea, but
> that it looks like another of these brilliant solutions looking
> desperately for a problem to solve...
The problem is that ARG keeps introducing incompatibilities in every language
version. For many people that's no big deal, but for others, it either costs a
lot of money, or prevents them from upgrading to the new language.
****************************************************************
From: Jean-Pierre Rosen
Sent: Monday, September 2, 2013 9:30 AM
> Randy doesn't want to make "unrecognized pragma" into a soft error
If we want to discuss whether unrecognized pragmas are an error, let me mention
transformation tools that use pragmas to indicate places where a transformation
is needed, or other special elements to consider. Using pragmas to that effect
has the benefit that it is very convenient for an ASIS-based transformation
tool, and looks good from the programmer's point of view.
One such tool is Morpheus from Adalabs
(http://www.adalabs.com/products-morpheus.html).
****************************************************************
From: Bob Duff
Sent: Monday, September 2, 2013 10:00 AM
> > Randy doesn't want to make "unrecognized pragma" into a soft error
> If we want to discuss whether unrecognized pragmas are an error,
My point was that we do NOT want to discuss that -- some are errors, some are
not. It's the programmer's call.
>...let me
> mention transformation tools that use pragmas to indicate places where
>a transformation is needed, or other special elements to consider.
>Using pragmas to that effect has the benefit that it is very
>convenient for an ASIS-based transformation tool, and looks good from
>the programmer's point of view.
Right, good example.
****************************************************************
From: Randy Brukardt
Sent: Tuesday, September 3, 2013 1:39 AM
[I'm leaving on vacation tomorrow, so I won't be able to participate in this
discussion going forward. Thus a "final" summary from me. Don't decide anything
stupid while I'm gone. :-)]
> > > I personally find "hard error" and "soft error" good names FWIW.
> > I'd prefer "major" and "minor" errors, FWIW...
>
> I don't think that gives the right impression. There is no
> implication that soft errors are more "minor". Programmers should
> take soft error messages seriously.
>
> At least some soft errors will be errors that we would make into
> normal
> (hard) legality rules, except for the compatibility concern.
>
> Only the programmer can decide how "minor" the soft errors are, and
> whether to fix them. Randy doesn't want to make "unrecognized pragma"
> into a soft error, and I don't want to fight about that,
I'm unsure, actually. The real point is that it's not clear how valuable this
is.
...
> > As for the rest, I also think there is some value in the idea, but
> > that it looks like another of these brilliant solutions looking
> > desperately for a problem to solve...
>
> The problem is that ARG keeps introducing incompatibilities in every
> language version. For many people that's no big deal, but for others,
> it either costs a lot of money, or prevents them from upgrading to the
> new language.
Yes, but this idea is unlikely to have any effect on that. Its greatest value
is in other areas that we traditionally have stayed away from.
The main problem (as I've said before) is that for "hard" errors, the program
cannot be executed. Thus, we don't have to define any semantics for such
execution. For "soft" errors, however, we *do* have to define semantics for
execution, as the program can be executed (at least in one language-defined
mode).
Among other things, this means that soft errors would require a new kind of
ACATS test, which would combine the features of a B-Test and a C-Test -- both
messages would need to be output *and* the execution would have to finish
properly. That's a substantial complication and cost for the ACATS. (I happen to
think that a similar cognitive complication would also exist for *users* of Ada,
but that's not so clear-cut. I also note that this idea bears a lot of
resemblance to the whole argument about unreserved keywords -- which also went
nowhere.)
Anyway, this fact makes "soft" errors most useful for methodological
restrictions as opposed to semantic restrictions. The problem is that Ada
doesn't have many methodological restrictions.
Just a quick look at some common kinds of incompatibilities in Ada 2012:
(1) Adding new entities to a language-defined package (examples: A.4.5(88.e/3),
A.18.2(264.c/3), D.14(29.c/3)). Soft errors would not be helpful for these
incompatibilities (redoing resolution rules in order to avoid the
incompatibility would be nasty and bizarre).
(2) Changing the profile of a language-defined subprogram (I didn't remember an
Ada 2012 example off-hand). Even with careful use of default parameters,
these have incompatibilities with renames and 'Access uses (as the profile
is different). Again, I don't think soft errors would be of any value, as
defining multiple profiles would be a massive complication in the language.
(3) Incompatibilities required by semantic consistency. (examples:
4.5.2(39.k/3)) These are cases where we could not make a sensible definition
of the language without the incompatibility. I don't see how soft errors
would help such cases, as the semantics would need to be well-defined in
order to have a soft error.
(4) Nonsense semantics in previous standards. [This is pretty similar to the
above, but it's not caused by a language change.] (Examples: 10.2.1(28.l/3),
12.7(25.e/3), B.3.3(32.b/3)). Soft errors would not help here, as it
wouldn't make sense to define the nonsense semantics formally.
(5) Runtime inconsistencies. Obviously, soft errors will not help in any way
with these.
Certainly there are cases where soft errors could help. (I didn't do any sort of
formal survey.) 6.4.1(6.16/3) is really a methodological restriction, and one
could make it a soft error unless the call is to a function (that can't be
incompatible). I'd like to see more compelling examples than the one Arnaud
posted before doing that (or eliminating the check altogether), but that's a
separate discussion.
The problem with incompatibilities caused by methodological restrictions is that
they're easily avoided by not having the restriction. We don't need soft errors
to do that!
I think the most valuable use of soft errors would be in properly restricting
the contents of assertions, which we decided not to do because we couldn't find
a rule that wasn't too restrictive. That would be less of a problem with soft
errors, as there would always be the option to ignore the error and do the
dubious thing anyway. Similarly, the question of invariants of types with
visible components could be dealt with using soft errors (so that the cases of
generics would not have to be rejected).
So, I think the majority of the value of soft errors would be found going
forward, and it's unlikely to be much help for compatibility issues (except
those we didn't have to introduce, which is a whole 'nuther discussion). We'd
need some cases where they clearly allowed something that we can't currently do.
So I rather agree with J-P:
> > As for the rest, I also think there is some value in the idea, but
> > that it looks like
> > another of these brilliant solutions looking desperately for a
> > problem to solve...
Exactly.
****************************************************************
From: Bob Duff
Sent: Tuesday, September 3, 2013 9:02 AM
The subject matter of this AI is incompatibilities -- in particular, a mechanism
to reduce the need/desire for them. (And I started the thread, so I get to
define what it's about. ;-)) Below, you point out some cases where soft errors
could help, but brush those aside with "that's a separate discussion" and "whole
'nuther discussion". No, that's THIS discussion. If we can come up with a few
cases where soft errors are a good idea, then they're a good idea.
I feel like the form of your argument is analogous to this: "Driving a car is
perfectly safe. Of course, some people are killed driving cars, but that's a
separate discussion." Heh? ;-)
Anyway, I include both existing incompatibilities (which we should consider
repealing) and future ones where we're tempted, in this discussion.
> The main problem (as I've said before) is that for "hard" errors, the
> program cannot be executed. Thus, we don't have to define any
> semantics for such execution. For "soft" errors, however, we *do* have
> to define semantics for execution, as the program can be executed (at
> least in one language-defined mode).
Yes, we all agree that the run-time semantics has to be well defined in the
presence of soft errors. That's the case for Arno's example -- we already have
wording that defines the semantics of param passing.
> Among other things, this means that soft errors would require a new
> kind of ACATS test, which would combine the features of a B-Test and a
> C-Test --
I can't get excited about that.
> both messages would need to be output *and* the execution would have
> to finish properly. That's a substantial complication and cost for the ACATS.
> (I happen to think that a similar cognitive complication would also
> exist for *users* of Ada, but that's not so clear-cut. I also note
> that this idea bears a lot of resemblance to the whole argument about
> unreserved keywords
> -- which also went nowhere.)
Those are a perfect example of a soft error. It went nowhere, I assume, because
people were uncomfortable with the fact that you could do confusing things (e.g.
"type Interface is interface...") with the compiler remaining silent. With my
proposal, you would get an error message.
> Anyway, this fact makes "soft" errors most useful for methodological
> restrictions as opposed to semantic restrictions. The problem is that
> Ada doesn't have many methodological restrictions.
>
> Just a quick look at some common kinds of incompatibilities in Ada 2012:
>
> (1) Adding new entities to a language-defined package (examples:
> A.4.5(88.e/3), A.18.2(264.c/3), D.14(29.c/3)). Soft errors would not
> be helpful for these incompatibilities (redoing resolution rules in
> order to avoid the incompatibility would be nasty and bizarre).
>
> (2) Changing the profile of a language-defined subprogram (I didn't
> remember an Ada 2012 example off-hand). End with careful use of
> default parameters, these have incompatibilities with renames and
> 'Access uses (as the profile is different). Again, I don't think soft
> errors would be of any value, as defining multiple profiles would be a massive
> complication in the language.
>
> (3) Incompatibilities required by semantic consistency. (examples:
> 4.5.2(39.k/3)) These are cases where we could not make a sensible
> definition of the language without the incompatibility. I don't see
> how soft errors would help such cases, as the semantics would need to
> be well-defined in order to have a soft error.
>
> (4) Nonsense semantics in previous standards. [This is pretty similar
> to the above, but it's not caused by a language change.] (Examples:
> 10.2.1(28.l/3), 12.7(25.e/3), B.3.3(32.b/3)). Soft errors would not
> help here, as it wouldn't make sense to define the nonsense semantics formally.
>
> (5) Runtime inconsistencies. Obviously, soft errors will not help in
> any way with these.
I agree with you about 1,2,4,5. I think I disagree on 3 -- run-time semantics
is well defined, albeit potentially confusing.
>...I'd like to see more compelling
> examples than the one Arnaud posted before doing that
What?! What on earth could be more compelling than examples of real code that
ran perfectly fine in Ada 2005, and is now broken in Ada 2012?
>... (or eliminating the
> check altogether), but that's a separate discussion.
>
> The problem with incompatibilities caused by methodological
> restrictions is that they're easily avoided by not having the
> restriction. We don't need soft errors to do that!
Apparently, we do. Tucker was quite insistent on the new 'out' param rules, and
refused to go along with 'out'-allowed-on-functions without it. Hence an
incompatibility (affecting real code!) that could have been avoided by soft
errors.
> I think the most valuable use of soft errors would be in properly
> restricting the contents of assertions, which we decided not to do
> because we couldn't find a rule that wasn't too restrictive. That
> would be less of a problem with soft errors, as there would always be
> the option to ignore the error and do the dubious thing anyway.
> Similarly, the question of invariants of types with visible components
> could be dealt with using soft errors (so that the cases of generics would not
> have to be rejected).
Yes, I agree -- with soft errors (or required warnings), we can freely impose
far more stringent requirements, and that's a good thing.
P.S. Have a good vacation. Don't do anything I would do.
****************************************************************
From: Tucker Taft
Sent: Tuesday, September 3, 2013 9:46 AM
> Apparently, we do. Tucker was quite insistent on the new 'out' param
> rules, and refused to go along with 'out'-allowed-on-functions without
> it. Hence an incompatibility (affecting real code!) that could have
> been avoided by soft errors.
I'd like to provide a little more background on the "OUT" param rule.
It actually wasn't my idea. I was mostly focused on worrying about order of
evaluation and how it affected having (in) out parameters of functions. The
idea of including a check on the case of multiple OUT parameters was someone
else's idea, as far as I know.
Furthermore, at least some of us *were* sensitive to the incompatibility, and Ed
Schonberg did an experiment to determine whether there seemed to be any issue
with this case. Here is his comment about it, and Robert Dewar's response to
that:
Ed> a) I implemented the check on multiple in-out parameters in a
Ed> procedure, where the actuals of an elementary type overlap. In the
Ed> 15,000 tests in our test suite I found two occurrences of P (X, X)
Ed> or near equivalent. One of them (in Matt Heaney's code!) appears
Ed> harmless. The other one is in a program full of other errors, so
Ed> unimportant. So application of this rule should not break anything.
Robert> I guess I can tolerate this rule, of course Ed's experiment also
Robert> shows that it is almost certainly useless, so just another case
Robert> of forcing compiler writers to waste time on nonsense.
So I don't think the ARG was being irresponsible here. It turns out after all
that there are some uses of "Ignore_Out_Param" multiple times for the same call.
I realize that any incompatibility is potentially painful, but at least in this
case we did attempt to check whether it was a real problem, or only a
theoretical one. We missed an interesting category, but we didn't act
irresponsibly in my view.
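
[Editor's note: a minimal sketch, with invented names, of the "same discard
variable for more than one out parameter" pattern referred to above -- the
category that the experiment missed:

   procedure Get_Status (Position : out Integer; Velocity : out Integer);

   Ignore : Integer;

   Get_Status (Ignore, Ignore);  -- same actual for two elementary "out"
                                 -- parameters: legal in Ada 2005, rejected
                                 -- by 6.4.1(16.6/3) in Ada 2012, though
                                 -- arguably harmless when both results are
                                 -- deliberately discarded

The caller does not care which copy-back happens last, which is why such code
can be perfectly legitimate even though the new rule flags it.]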
****************************************************************
From: Jeff Cousins
Sent: Tuesday, September 3, 2013 10:29 AM
For the purposes of testing this on a larger sample of code, does anyone know
whether the latest (v7.1.2) GNAT compiler actually does this checking? It only
seems to do it if -gnatw.i ("Activate warnings on overlapping actuals") is used,
which isn't included in the -gnatwa ("activate (almost) all warnings") set, and
even then it only gives a warning, not an error.
****************************************************************
From: Ed Schonberg
Sent: Tuesday, September 3, 2013 11:29 AM
In the current version of the compiler illegal overlaps are reported as errors.
The debugging switch -gnatd.E transforms the error back into a warning, but
the default is as per the RM.
****************************************************************
From: Gary Dismukes
Sent: Tuesday, September 3, 2013 11:48 AM
Try using -gnat2012 with 7.1.2. As Ed mentions, current versions of GNAT
(wavefronts designated by version 7.2.0w and any later releases) do this by
default, because Ada 2012 is now the default.
****************************************************************
From: Jeff Cousins
Sent: Tuesday, September 3, 2013 12:11 PM
Sorry, I should have said: I'm using the -gnat12 switch (on 7.1.2). -gnatw.i
appears to be a red herring as it also reports overlaps between an in and an
out.
By "current" do I take it that you mean a wave-front, not the latest release
(7.1.2)?
****************************************************************
From: Gary Dismukes
Sent: Tuesday, September 3, 2013 12:50 PM
Right, in wavefront versions, not the latest release.
****************************************************************
From: Randy Brukardt
Sent: Tuesday, September 3, 2013 12:04 PM
> The subject matter of this AI is incompatibilities -- in particular, a
> mechanism to reduce the need/desire for them.
The thread is about "nonfatal errors", a *specific* feature. Uses of it are
ancillary.
> (And I started the thread, so I get to define what it's about. ;-))
Then (channeling one Bob Duff), use an appropriate subject line. :-)
> Below, you point out some cases where soft errors could help, but
> brush those aside with "that's a separate discussion" and "whole
> 'nuther discussion".
> No, that's THIS discussion. If we can come up with a few cases where
> soft errors are a good idea, then they're a good idea.
In the specific cases I mentioned, the question is whether there *is* a
significant compatibility error (which I doubt), and if so, whether that means
that there should be no error at all (hard, soft, or kaiser :-), or whether some
sort of error is still valuable. That's all very specific to a particular case,
and should be discussed separately under a thread about that particular rule
(6.4.1(16.6/3)). It has nothing to do with the general idea of soft errors.
...
> Anyway, I include both existing incompatibilities (which we should
> consider repealing) and future ones where we're tempted, in this
> discussion.
If we want to repeal some rule, we ought to discuss that (on a case-by-case
basis). It cannot be sensibly done in some general discussion. We ought to
include the possibility of *partially* repealing the rule using soft errors, as
one of the options under discussion. If in fact we find some case where soft
errors are useful, then we should add them to the language. But that doesn't
belong in a general discussion.
...
> > both messages would need to be output *and* the execution would have
> > to finish properly. That's a substantial complication and cost for
> > the ACATS.
> > (I happen to think that a similar cognitive complication would also
> > exist for *users* of Ada, but that's not so clear-cut. I also note
> > that this idea bears a lot of resemblance to the whole argument
> > about unreserved keywords -- which also went nowhere.)
>
> Those are a perfect example of a soft error. It went nowhere, I
> assume, because people were uncomfortable with the fact that you could
> do confusing things (e.g. "type Interface is interface...") with the
> compiler remaining silent. With my proposal, you would get an error
> message.
To me, it says that people are uncomfortable with the idea of conditional
language design.
...
> > (3) Incompatibilities required by semantic consistency. (examples:
> > 4.5.2(39.k/3)) These are cases where we could not make a sensible
> > definition of the language without the incompatibility. I don't see
> > how soft errors would help such cases, as the semantics would need
> > to be well-defined in order to have a soft error.
...
> I agree with you about 1,2,4,5. I think I disagree on 3 -- run-time
> semantics is well defined, albeit potentially confusing.
It might be well-defined, but it's essentially unimplementable. I would be
strongly opposed to ever allowing such a program to execute. Besides, the legality
incompatibility is the minor one here; the runtime incompatibility is many times
more likely to cause problems.
> >...I'd like to see more compelling
> > examples than the one Arnaud posted before doing that
>
> What?! What on earth could be more compelling than examples of real
> code that ran perfectly fine in Ada 2005, and is now broken in Ada 2012?
I don't believe that the example he gave occurred more than once (I'd be amazed
if it occurred at all, in fact, because it requires three separate bad design
decisions, as I outlined last week). Moreover, I have a hard time getting
excited about bugs caused in real code that should never have been written in
the first place. He claimed this is "very common", but his example is completely
unbelievable to me. I'd like to see *real*, believable examples where this is
causing a problem. (They probably would have to be far more complete in order to
be believable.) But as I've said before, this does not belong in this thread,
and I'm leaving soon anyway.
> >... (or eliminating the
> > check altogether), but that's a separate discussion.
> >
> > The problem with incompatibilities caused by methodological
> > restrictions is that they're easily avoided by not having the
> > restriction. We don't need soft errors to do that!
>
> Apparently, we do. Tucker was quite insistent on the new 'out' param
> rules, and refused to go along with 'out'-allowed-on-functions without
> it. Hence an incompatibility (affecting real code!) that could have
> been avoided by soft errors.
Tucker was insistent on 'out' parameter rules *for functions*!! I thought it was
weird to only have such rules on functions, so I extended them to all calls when
I wrote up a specific proposal. We attempted to check if that was a problem (see
Tucker's response), and the answer was 'no'. So we left the more restrictive
rule. But we could just as easily have met the original goal by only having
6.4.1(16.6/3) apply in function calls. And no 'soft errors' are needed to do
that. (Again, this should be a separate discussion.) There was no language need
for the incompatibility; it just seemed more consistent to have it and we
believed that it was harmless.
> P.S. Have a good vacation. Don't do anything I would do.
I'm probably going to spend the first day virtually arguing with you.
Wonderful. :-(
I'm here now because I forgot to reprogram my GPS yesterday (it only holds about
1/3rd of the maps of the US, so I have to reprogram it any time I'm going to go
a long ways). That takes several hours, so I still have time to argue with you.
:-)
****************************************************************
From: Bob Duff
Sent: Tuesday, September 3, 2013 12:45 PM
> Furthermore, at least some of us *were* sensitive to the
> incompatibility, and Ed Schonberg did an experiment to determine
> whether there seemed to be any issue with this case. Here is his
> comment about it, and Robert Dewar's response to that:
>
> Ed> a) I implemented the check on multiple in-out parameters in a
> Ed> procedure, where the actuals of an elementary type overlap. In
> Ed> the 15,000 tests in our test suite I found two occurrences of P
> Ed> (X, X) or near equivalent. One of them (in Matt Heaney's code!)
> Ed> appears harmless. The other one is in a program full of other
> Ed> errors, so unimportant. So application of this rule should not break
> Ed> anything.
>
> Robert> I guess I can tolerate this rule, of course Ed's experiment
> Robert> also shows that it is almost certainly useless, so just
> Robert> another case of forcing compiler writers to waste time on
> Robert> nonsense.
Hmm. I think what happened is that we implemented the rule incorrectly for that
experiment. And then ACATS tests appeared, and we "beefed up" the rule, making
more things illegal. And then more user code became illegal. Then we added a
switch to turn the error into a warning. (So GNAT already treats this as a soft
error -- we have a mode in which the program can run, and another in which it
can't run, and we give a diagnostic message in both modes.)
To Ed Schonberg: Is the above true?
> So I don't think the ARG was being irresponsible here.
I agree. Sorry if I implied that we were being irresponsible AT THAT TIME. I
think at the time ARG was thinking:
1. This error is extremely unlikely to occur in real code.
2. If it does occur, it's certainly a real bug.
3. The only choices are "legal" and "illegal" (i.e. the idea
of soft errors hadn't occurred to us).
Now, with 20-20 hindsight, I think we were mistaken:
1. It DOES occur in real code.
2. It's probably a real bug, but not in all cases.
The case Arno showed is perfectly legitimate. In fact,
it's even deterministic, despite the nondeterminism implied
by the run-time semantics.
3. At least some of us are open to a middle ground ("soft errors").
>... We missed an interesting category, but we didn't act
>irresponsibly in my view.
Right, but I think we made a mistake, and we should consider correcting it via
soft errors. I also think that if we had had the concept of soft errors in
mind, we would/should have used it several times during Ada 2005 and 2012.
****************************************************************
From: Bob Duff
Sent: Tuesday, September 3, 2013 12:56 PM
> If we want to repeal some rule, we ought to discuss that (on a
> case-by-case basis).
OK, then let's postpone this general discussion. If I have time, I'll inspect
the existing incompatibilities, and open separate threads about the ones I think
we maybe should repeal via soft errors (or maybe even repeal altogether).
>... It cannot be sensibly done in some general discussion.
Well, some folks are saying "soft error" is a useless concept, because there are
no cases where it should apply. So I've been giving examples as part of the
general discussion. I will now quit doing that, and hopefully open separate
threads for each such example. But I don't want to hear anybody reply to those
threads with "Hey, there's no such thing as a soft error. There's only legal and
illegal, and that's the way it's always been and always should be. Tradition!".
> > I agree with you about 1,2,4,5. I think I disagree on 3 -- run-time
> > semantics is well defined, albeit potentially confusing.
>
> It might be well-defined, but it's essentially unimplementable.
OK, if that's true, then we can't use soft error there.
> I'm probably going to spend the first day virtually arguing with you.
> Wonderful. :-(
No need -- I promised above to quit the general discussion until I've opened
separate discussions of particular examples. And I probably won't get around to
that right away.
> I'm here now because I forgot to reprogram my GPS yesterday (it only
> holds about 1/3rd of the maps of the US, so I have to reprogram it any
> time I'm going to go a long ways). That takes several hours, so I
> still have time to argue with you. :-)
Call me a luddite, but I still use fold-out paper maps.
****************************************************************
From: Tucker Taft
Sent: Tuesday, September 3, 2013 1:00 PM
I am somewhat neutral on the "soft error" concept. It does allow us to
introduce incompatibilities without "officially" doing so, but our attempts to
do that with "unreserved keywords" always ran into trouble with WG-9. I suspect
they would be the stumbling block here again, though we could bring it up at the
next WG-9 meeting explicitly, before we waste a lot of time debating it in the
ARG.
I am probably more tolerant of certain incompatibilities than some folks, as it
seems that if you are upgrading to a new version of the language, you should
expect to do some work to get the benefit. Of course the down side is if the
extra work is too much, then it becomes an entry barrier to upgrading. And some
of our incompatibilities in the past have not had a good work-around (such as
the fixed-point multiplication/division problem we created in Ada 95 as part of
trying to provide better support for decimal fixed point).
Soft errors might at least "officially" reduce the entry barrier, but many
serious organizations consider warnings to be (hard) errors, and presumably
"soft errors" would also be considered "hard" errors by such organizations.
I do think the "soft error" concept is worth considering, and WG-9 is probably
the first place to discuss it. We may have to have an agreed-upon criteria for
specifying a soft error rather than a hard error, and I wonder if soft errors
would be soft only for one revision cycle of the language, at which point they
would mutate to being hard...
****************************************************************
From: Arnaud Charlet
Sent: Tuesday, September 3, 2013 1:05 PM
> Soft errors might at least "officially" reduce the entry barrier, but
> many serious organizations consider warnings to be (hard) errors, and
> presumably "soft errors" would also be considered "hard"
> errors by such organizations.
Actually in our experience at AdaCore, most customers do tolerate warnings
(because they have too many of them to have a no warning hard rule) and do not
consider warnings as hard errors.
In other words, some of our customers are using -gnatwe, but many/the majority
do not.
****************************************************************
From: Bob Duff
Sent: Tuesday, September 3, 2013 1:35 PM
> I am somewhat neutral on the "soft error" concept. It does allow us
> to introduce incompatibilities without "officially" doing so, but our
> attempts to do that with "unreserved keywords" always ran into trouble with WG-9.
Now THAT is totally irresponsible.
But soft errors seem different. People can reasonably be uncomfortable with
compilers silently ignoring errors. But soft errors are NOT silent. And note
that my latest proposal requires two modes, one in which the program can run,
and one in which it can't. It would take an unreasonable degree of stubbornness
to say "I like the no-run mode, so I don't want other people to have a yes-run
mode." Especially since implementers can always implement any modes they like
in a NONstandard mode. (And GNAT did so.)
>...I
> suspect they would be the stumbling block here again, though we could
>bring it up at the next WG-9 meeting explicitly, before we waste a lot
>of time debating it in the ARG.
Good idea. But I think they'll want some convincing examples.
And they should be reminded that they don't actually have any control over what
implementers do. Like it or not, AdaCore is going to do what AdaCore wants (in
nonstandard modes).
> I do think the "soft error" concept is worth considering, and WG-9 is
> probably the first place to discuss it. We may have to have an
> agreed-upon criteria for specifying a soft error rather than a hard
> error, and I wonder if soft errors would be soft only for one revision
> cycle of the language, at which point they would mutate to being hard...
I don't think "mutate to being hard" is a good idea. Look how slowly people
migrated from Ada 95 -- some still use it.
It's like obsolescent features. We will never remove them entirely.
****************************************************************
From: Robert Dewar
Sent: Tuesday, September 3, 2013 2:06 PM
> I am probably more tolerant of certain incompatibilities than some
> folks, as it seems that if you are upgrading to a new version of the
> language, you should expect to do some work to get the benefit. Of
> course the down side is if the extra work is too much, then it becomes
> an entry barrier to upgrading. And some of our incompatibilities in
> the past have not had a good work-around (such as the fixed-point
> multiplication/division problem we created in Ada 95 as part of trying to
> provide better support for decimal fixed point).
The extra work is reasonable if the incompatibility is
a) really useful
b) unavoidable
Any other incompatibility comes in the gratuitous category. And you really can't
guess what will cause trouble and what will not. The multiplication stuff did
not even surface in the presentation on difficulties at Ada UK, whereas making
Interface reserved caused months of very difficult coordination work for them.
> Soft errors might at least "officially" reduce the entry barrier, but
> many serious organizations consider warnings to be (hard) errors, and
> presumably "soft errors" would also be considered "hard" errors by such organizations.
>
> I do think the "soft error" concept is worth considering, and WG-9 is
> probably the first place to discuss it. We may have to have an
> agreed-upon criteria for specifying a soft error rather than a hard
> error, and I wonder if soft errors would be soft only for one revision cycle of the language, at which point they would mutate to being hard...
I find it bizarre to introduce the completely unfamiliar term soft error, when
we have a perfectly good term "warning message" which we already use in the RM.
I also think this whole discussion is overblown; it would be just fine to have
implementation advice that recommends issuing a warning in certain circumstances.
****************************************************************
From: Robert Dewar
Sent: Tuesday, September 3, 2013 2:13 PM
> I am somewhat neutral on the "soft error" concept. It does allow us
> to introduce incompatibilities without "officially" doing so, but our
> attempts to do that with "unreserved keywords" always ran into trouble
> with WG-9.
That's just a lack of competent political lobbying IMO!
****************************************************************
From: Robert Dewar
Sent: Tuesday, September 3, 2013 2:13 PM
> I don't think "mutate to being hard" is a good idea. Look how slowly
> people migrated from Ada 95 -- some still use it.
"mutate to hard" is a good idea ONLY if customers clamour for it, otherwise it
is just a sop to the aesthetic concerns of the designers.
What do you mean, "some still use it"? The GREAT majority of our users use Ada 95
(only in the very most recent version has the default changed to Ada 2012; we
never had Ada 2005 as a default, since too few people moved to Ada 2005 to have
made that a reasonable choice). Ada 2012 seems more worth the price of admission for
the contract stuff.
> It's like obsolescent features. We will never remove them entirely.
We will never remove them at all, Annex J is completely normative.
It is no longer even possible to reject Annex J stuff under control of the
restriction pragma, after the unwise decision to dump all the pragmas there.
This is one requirement in Ada 2012 that GNAT just completely ignores, and will
continue to do so as far as I am concerned.
We have loads of customers using pragma Restrictions (No_Obsolescent_Features),
who use the existing pragmas extensively. It would be insanity in my view to
cause them trouble by adhering to the letter of the standard.
Again the whole of Annex J is really all about aesthetic concerns of the
designers overriding the reality of users. But as long as it is just meaningless
decoration in the RM it is harmless, I suppose :-)
****************************************************************
From: Randy Brukardt
Sent: Tuesday, September 3, 2013 3:14 PM
...
> We have loads of customers using pragma Restrictions
> (No_Obsolescent_Features), who use the existing pragmas extensively.
> It would be insanity in my view to cause them trouble by adhering to
> the letter of the standard.
FYI, the letter of the standard says that No_Obsolescent_Features does not have
to detect use of the pragmas. See 13.12.1(4/3). We've previously discussed this
(twice). So GNAT *is* following the letter of the standard, the only insanity is
claiming that it is not.
****************************************************************
From: Joyce Tokar
Sent: Tuesday, September 3, 2013 4:22 PM
Do you want to bring this up as a discussion topic at the next WG-9 Meeting?
Or leave it within the ARG to come forward with a proposal?
****************************************************************
From: Jeff Cousins
Sent: Wednesday, September 4, 2013 3:37 AM
First thoughts are that WG 9 could have a brief discussion to see what the
consensus is on whether it's worth investigating, and if so then ask the ARG to
come up with a proposal.
Personally my first choice would be that overlapping out/in out parameters stay
an error, and my second choice that they (and any other potential "soft errors")
become implementation advice to give a warning.
****************************************************************
From: Jeff Cousins
Sent: Wednesday, September 4, 2013 3:58 AM
Well I've put several millions of lines through GNAT v7.1.2 using -gnat12
-gnatw.i and it comes out at one warning per 25K lines of code. I suspect that
most are cases where an actual is used once as an in parameter and once as an
out, rather than the out/in out combinations that this discussion is about, but
I haven't time to check through them all. But anyway it's down at the level of
new errors for a new compiler release (due to better checking), never mind what
one might expect for a language revision.
I also think it's better in principle to keep it an error. When the C++ camp
are fighting back, their main technical argument is "what about all the order
dependencies?". The new rules help here.
Also, though a weaker argument, I don't think that programmers should easily
ignore out parameters, they are probably there for a reason, say a flag to
indicate whether another out parameter's value is valid, or whether the solution
to some complex algorithm has converged. To ignore two out parameters is doubly
bad.
****************************************************************
From: Robert Dewar
Sent: Wednesday, September 4, 2013 6:59 AM
> Also, though a weaker argument, I don't think that programmers should
> easily ignore out parameters, they are probably there for a reason,
> say a flag to indicate whether another out parameter's value is valid,
> or whether the solution to some complex algorithm has converged. To
> ignore two out parameters is doubly bad.
The regressions in our test suite -- there were three tests affected (out of many
thousands) -- were all legitimate cases of deliberately ignoring out parameters
by using the same "discard" variable for two parameters; easy to fix, of course.
But sometimes even the most trivial of source changes can be a problem.
****************************************************************
From: Bob Duff
Sent: Wednesday, September 4, 2013 7:45 AM
> if you are upgrading to a new version of the language, you should
> expect to do some work to get the benefit.
I agree with that. That's a good argument in favor of tolerating some
incompatibilities. But it's not a good argument in favor of _gratuitous_
incompatibilities.
If there's a technically acceptable way to avoid a particular incompatibility,
then we should do so.
> I do think the "soft error" concept is worth considering, and WG-9 is
> probably the first place to discuss it.
Are you going to the WG9 meeting?
I think examples are key. If you just introduce the "soft error"
idea in the abstract, many people react with "Well, I can't think of any cases
where that would be useful, so what's the point?"
****************************************************************
From: Tucker Taft
Sent: Wednesday, September 4, 2013 7:54 AM
> I agree with that. That's a good argument in favor of tolerating some
> incompatibilities. But it's not a good argument in favor of
> _gratuitous_ incompatibilities.
>
> If there's a technically acceptable way to avoid a particular
> incompatibility, then we should do so.
I am not personally convinced that labeling something a "soft error" or a
"required warning" is avoiding an incompatibility. But it does soften the blow
and thereby reduce the entry barrier to upgrading.
>> I do think the "soft error" concept is worth considering, and WG-9 is
>> probably the first place to discuss it.
>
> Are you going to the WG9 meeting?
Yes, I plan to be there.
> I think examples are key. If you just introduce the "soft error"
> idea in the abstract, many people react with "Well, I can't think of
> any cases where that would be useful, so what's the point?"
Agreed, one good example is worth many thousands of words of impassioned
oratory.
****************************************************************
From: Bob Duff
Sent: Wednesday, September 4, 2013 7:57 AM
> Do you want to bring this up as a discussion topic at the next WG-9 Meeting?
> Or leave it within the ARG to come forward with a proposal?
I won't be at the WG9 meeting. I'm flying to Pittsburgh that Friday morning, in
time for the ARG meeting that afternoon. Anyway, if people don't understand why
gratuitous incompatibilities are so bad, I don't know how to convince them.
Using words like "totally irresponsible" isn't going to work. ;-)
Tucker's idea of discussing with WG9 is fine with me.
Robert and Tucker are both better debaters than I am.
I really do think that it is totally irresponsible to place minor aesthetic
concerns like "I don't like the concept of unreserved keywords" above
compatibility issues that cost real money.
I'd like to know who is opposed to unreserved keywords, and what their reasoning
is. Maybe those people don't think it's "minor".
I would think making those keywords a "soft error" would be more palatable to
those people, because then at least the compiler has a mode in which those
keywords ARE reserved.
****************************************************************
From: Bob Duff
Sent: Wednesday, September 4, 2013 8:00 AM
> Well I've put several millions of lines through GNAT v7.1.2 using
> -gnat12 -gnatw.i ...
I think version 7.2 wavefronts implement the rule more correctly (more
stringently), so you might get more errors. I'm not sure about that.
Ed? Robert? (I don't remember who implemented this stuff, but it wasn't me.)
****************************************************************
From: Ed Schonberg
Sent: Wednesday, September 4, 2013 8:20 AM
Yes, the latest version has made these warnings into "hard" errors. We kept some
of the overlap checks as warnings for a year, but in June Robert removed the
critical question marks from the error strings. Javier, Robert, and I all had a
hand in the full implementation. Javier also extended the checks to the other
constructs that have order-of-evaluation issues, such as aggregates.
****************************************************************
From: Jeff Cousins
Sent: Wednesday, September 4, 2013 9:18 AM
> I wonder if soft errors would be soft only for one revision cycle of the
> language, at which point they would mutate to being hard...
Could Annex J be treated similarly? (Maybe two cycles would be more realistic).
Indeed could the ability to specify the same actual for multiple in out
parameters be regarded as an obsolescent feature?
****************************************************************
From: Jean-Pierre Rosen
Sent: Wednesday, September 4, 2013 9:20 AM
> But sometimes even the most trivial of source changes can be a
> problem.
Hmm, yes, but in those contexts you are generally not allowed to change compiler
version (not even compiler options). The incompatibility is then irrelevant.
****************************************************************
From: Robert Dewar
Sent: Wednesday, September 4, 2013 1:25 PM
Well it is interesting to read the presentation from BAE on the effort of
transitioning to Ada 2005. By far the worst hit was Interface as a keyword,
because they had a company-wide convention that each package had a child
xxx.Interface that defined the cross-system interface for the package. That
meant they had to change all packages in all systems across all projects in a
coordinated manner, and it was coordinating the change between different
projects that was hard.
The one line in 25,000 for this particular issue (which is BTW at this stage
water under the bridge anyway) is minor compared to this.
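
[Editor's note: an illustration of the pattern described above; the package and
subprogram names are invented. A child unit named Interface was legal in Ada 95
but became illegal in Ada 2005, where "interface" is a reserved word:

   package Flight_Control.Interface is   -- legal in Ada 95; illegal in
      procedure Send (Msg : String);     -- Ada 2005 and later, since
   end Flight_Control.Interface;         -- "interface" is now reserved

Renaming one such unit is trivial; the cost reported above came from having to
rename every such child in every system in a coordinated way.]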
****************************************************************
From: Erhard Ploedereder
Sent: Friday, October 4, 2013 7:24 AM
>> Soft errors might at least "officially" reduce the entry barrier, but
>> many serious organizations consider warnings to be (hard) errors, and
>> presumably "soft errors" would also be considered "hard"
>> errors by such organizations.
>
> Actually in our experience at AdaCore, most customers do tolerate
> warnings (because they have too many of them to have a no warning hard
> rule) and do not consider warnings as hard errors.
In an old compiler of mine, we had 3 categories of warnings, selectable by
compiler switch:
stern warnings -- almost certainly a bug
warnings -- run-of-the-mill warnings
light warnings -- verbose, only for paranoid people
The big advantage of "stern warnings" vs. (legality) errors is that the language
design can be much more liberal.
E.g., in the case at hand the "stern warning condition" could be about aliased
parameters, and there is NOT a precise language definition of when the warning
is to be issued. It depends entirely on the cleverness of the compiler and the
particular example whether or not the warning appears. Of course, one could
introduce the current rules as "at least" rules for the warning, but the "at
most" nature of legality errors can be nicely ignored by the language
definition.
For legality errors, the need to narrow down to the always decidable situations
is a real disservice to the user. In that sense, I like the notion of "soft
errors", but I hate the term.
****************************************************************
From: Jeff Cousins
Sent: Friday, October 4, 2013 12:45 PM
We turn on nearly all optional warnings, but then process them to sort them into
three or four categories of our choosing, possibly similar to Erhard's. This
does mean though that we sometimes have to update our tool when the wording of a
warning message changes. For legacy code with a good track record in use, usually
only the highest category of warnings would be fixed as a matter of urgency; for
new code, hopefully the only warnings allowed would be those either with a
recorded justification or of the lowest category.
****************************************************************
From: Brad Moore
Sent: Tuesday, November 19, 2013 9:18 AM
Part of the discussion has been about whether there are cases where such a
feature would be useful in practice.
One such case could be related to AI05-0144-2 (Detecting dangerous order
dependencies)
The discussion in the AI suggests that the following should be illegal in Ada
2012.
procedure Do_It (Double, Triple : in out Natural) is
begin
   Double := Double * 2;
   Triple := Triple * 3;
end Do_It;

Var : Natural := 2;

Do_It (Var, Var); -- Illegal by new rules.
Yet this code compiles in my current version of GNAT.
It does produce 'warning: writable actual for "Double" overlaps with actual for
"Triple"' as a compiler warning.
So it appears that GNAT already is treating this as a suppressable error.
This also seems to be an example where a suppressable error would have been
preferable over introducing a backwards incompatibility.
Maybe we could have gone further to rule out more dangerous order dependencies
if we had the suppressable error mechanism.
****************************************************************
From: Ed Schonberg
Sent: Tuesday, November 19, 2013 9:35 AM
> Yet this code compiles in my current version of GNAT.
Actually the current version reports this as an error, you must have a slightly
older version.
> It does produce 'warning: writable actual for "Double" overlaps with actual for "Triple"' as a compiler warning.
>
> So it appears that GNAT already is treating this as a suppressable error.
We were concerned that this would be a potentially serious disruption, but found
very few instances of such potential problems in our test suite, so we decided
that it was better to be fully conformant and treat this as an error, in particular
because it is really aimed at functions with in-out parameters, of which there are
few examples so far :-)!
> This also seems to be an example where a suppressable error would have
> been preferable over introducing a backwards incompatibility.
>
> Maybe we could have gone further to rule out more dangerous order
> dependencies if we had the suppressable error mechanism.
Once a new mechanism is in place I'm sure we'll find many uses for it!
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 10:05 AM
After all, there is no difference between
an error that can be suppressed, and
a warning that can be made into an error,
given that the nice implementation of suppressing errors is to make them into
warnings, at least as an option.
And yes, mandating warnings is something that should be done much more often in
the RM, and if it makes people happy to call them suppressible errors fine
(reminds me of the ACA which plays this kind of trick with taxes and penalties).
(btw that's the spelling, not suppressable)
****************************************************************
From: Bob Duff
Sent: Tuesday, November 19, 2013 10:09 AM
> This also seems to be an example where a suppressable error would have
> been preferable over introducing a backwards incompatibility.
Yes, I agree. AdaCore has had some customer complaints about this.
A typical case is two 'out' params, which sometimes contain useful results, but
sometimes the caller wants to ignore them, and so writes:
Do_Something(X, Y, Ignored_Out_Param, Ignored_Out_Param);
> Maybe we could have gone further to rule out more dangerous order
> dependencies if we had the suppressable error mechanism.
Maybe.
****************************************************************
From: Bob Duff
Sent: Tuesday, November 19, 2013 10:25 AM
> And yes, mandating warnings is something that should be done much more
> often in the RM, and if it makes people happy to call them
> suppressible errors fine
It makes people happy to call them SOMEthing that sounds "bad".
I'm not sure "suppressible error" is the right term, because
(1) I'm not sure it fits into the "Classification of Errors"
section in chap 1, and (2) "suppress" sounds like it means "don't print the
message", whereas what we really mean is "allow the program to run in spite of
the error".
"Warning" is apparently not strong enough for most people.
You buy an electric hedge trimmer, and you read all sorts of silly "warnings"
about how you shouldn't use it in the bath tub etc, leading people to believe
"warning" means "something I shouldn't bother paying attention to". ;-)
> (reminds me of the ACA which plays this kind of trick with taxes and
> penalties).
>
> (btw that's the spelling, not suppressable)
Thanks, I didn't know that!
****************************************************************
From: Randy Brukardt
Sent: Tuesday, November 19, 2013 2:41 PM
> "Warning" is apparently not strong enough for most people.
> You buy an electric hedge trimmer, and you read all sorts of silly
> "warnings" about how you shouldn't use it in the bath tub etc, leading
> people to believe "warning" means "something I shouldn't bother paying
> attention to". ;-)
To me, the advantage of "suppressible error" over warnings is two-fold:
(1) The default in standard mode is that it is an error (the program is not
allowed to execute). You can use a configuration pragma to allow the program to
execute (probably modeled as a compiler switch, but of course the RM has nothing
to say about switches). The advantage of the configuration pragma is that it can
be "compiled into the environment" without needing to modify any source code
(which is critical in the case of incompatibilities). "Warning" implies the
opposite (that the program is allowed to execute by default).
(2) The name (and default) makes it clear that these are errors, which we only
support allowing of the execution for the purposes of compatibility with
previous versions of Ada. "Warning" is not so clear.
Of course, if someone has a better name than "suppressible error" (with the same
intent and connotation), I'd be happy to support that too.
In the case that we talked about in Pittsburgh (the bad view conversion of access
types), the runtime semantics is that of making the parameter value abnormal
(such that reading it would cause the program to be erroneous). Generally, we'd
prefer to prevent that by a Legality Rule, rather than allowing a potential time
bomb. But since the bug has persisted since Ada 95 (and the construct is an Ada
83 one), it's possible it exists in code. So we could use a suppressible error
for this case, and one expects that the error would be suppressed only in the
case where someone has existing code that they can't modify.
I do think this is a mechanism that we could use in a variety of cases,
especially if we're willing to define execution to be erroneous if it is
suppressed (in many cases, defining the semantics is hard or impossible). It's
certainly worth pursuing.
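
[Editor's note: a sketch of how the configuration-pragma idea in point (1) above
might look. The pragma name Permit_Suppressible_Errors is invented here purely
for illustration; no actual name has been proposed.

   --  Compiled into the environment (no source changes to existing code):
   pragma Permit_Suppressible_Errors;   -- hypothetical name

   --  With the pragma in effect, a unit that violates a suppressible error
   --  still draws a diagnostic message, but the partition may be executed;
   --  without it, the violation is an ordinary (hard) error.]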
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 2:53 PM
> (1) The default in standard mode is that it is an error (the program
> is not allowed to execute). You can use a configuration pragma to
> allow the program to execute (probably modeled as a compiler switch,
> but of course the RM has nothing to say about switches). The advantage
> of the configuration pragma is that it can be "compiled into the
> environment" without needing to modify any source code (which is critical in the case of incompatibilities). "Warning"
> implies the opposite (that the program is allowed to execute by default).
That's meaningless, compilers are free to provide whatever defaults they like.
They just have to have a mode which is strictly conforming (I doubt many GNAT
programmers know the full details of specifying this strictly conforming mode,
since it is in practice useless except for running ACATS tests :-))
> (2) The name (and default) makes it clear that these are errors, which
> we only support allowing of the execution for the purposes of
> compatibility with previous versions of Ada. "Warning" is not so clear.
Yes, of course in many cases I disagree that these are errors, they prevent well
defined perfectly reasonable programs from executing :-)
> Of course, if someone has a better name than "suppressible error"
> (with the same intent and connotation), I'd be happy to support that too.
Well to me, it's finally a way of recognizing that compatibility is more
important than aesthetic consistency of the semantic model, and if we can use
these, whatever they are called, to reduce incompatibilities in the future, that
would be a step forward.
****************************************************************
From: Tucker Taft
Sent: Tuesday, November 19, 2013 3:14 PM
> ... I do think this is a mechanism that we could use in a variety of
> cases, especially if we're willing to define execution to be erroneous
> if it is suppressed (in many cases, defining the semantics is hard or impossible).
> It's certainly worth pursuing.
We at least mentioned the possibility of assigning unique identifiers to
suppressible legality errors, and then presumably defining a pragma that allows
individual (or collections of?) suppressible errors to be suppressed.
It might be nice to eventually assign unique identifiers to all legality errors,
whether or not they should be considered suppressible, if only for documentation
purposes (might be particularly useful for ACATS B tests).
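
[Editor's note: a purely illustrative sketch of the identifier idea above; the
pragma name Suppress_Error and the identifiers are invented and are not part of
any proposal.

   pragma Suppress_Error (Overlapping_Actuals);   -- one specific rule
   pragma Suppress_Error (All_Suppressible);      -- or a whole collection]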
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 3:23 PM
> We at least mentioned the possibility of assigning unique identifiers
> to suppressible legality errors, and then presumably defining a pragma
> that allows individual (or collections of?) suppressible errors to be suppressed.
Don't over-engineer, this has no place in a language standard, it may or may not
make sense for an implementation. Right now, the debug flag in GNAT that
suppresses errors of this kind is indiscriminate and that's proved adequate in
practice.
> It might be nice to eventually assign unique identifiers to all
> legality errors, whether or not they should be considered
> suppressible, if only for documentation purposes (might be particularly useful
> for ACATS B tests).
Not in the language standard please! I can promise you that GNAT would ignore
these if it was done, since it would be a HUGE effort to implement for VERY
little gain.
****************************************************************
From: Tucker Taft
Sent: Tuesday, November 19, 2013 3:32 PM
I did not mean to imply compilers would have to do anything with these. Rather,
we might indicate in the comments of an ACATS B test, which particular legality
rule we were checking. Right now there is not a simple process (that I am aware
of) to check for coverage of legality rules by B tests.
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 3:39 PM
Ah OK! I thought we just used RM para numbers? Otherwise sounds like a LOT of
effort even just in the RM for little gain. After all ACATS coverage is SO
incomplete, and likely to remain so, as Randy will tell you, so it seems
premature to be worrying about airtight mechanisms to get to 100% coverage.
****************************************************************
From: Randy Brukardt
Sent: Tuesday, November 19, 2013 3:58 PM
> > I did not mean to imply compilers would have to do anything with
> > these. Rather, we might indicate in the comments of an ACATS B
> > test, which particular legality rule we were checking. Right now
> > there is not a simple process (that I am aware of) to check for
> > coverage of legality rules by B tests.
>
> Ah OK! I thought we just used RM para numbers?
Well, actually sentence numbers within RM paragraph numbers. (The SAIC coverage
just used paragraph numbers and managed to miss a number of rules that way; some
Legality Rule paragraphs have as many as 4 testable rules in them.)
And of course there are rules that take multiple paragraphs to describe
(bulleted lists are like that).
> Otherwise
> sounds like a LOT of effort even just in the RM for little gain. After
> all ACATS coverage is SO incomplete, and likely to remain so, as Randy
> will tell you, so it seems premature to be worrying about air tight
> mechanisms to get to 100% coverage.
I don't see a lot of value to an RM rule designation. It's easy enough to figure
coverage of ACATS B-Tests to Legality Rules in 95% of the cases. As with many
things, it's the coverage of C-Tests that's hard to quantify (because any
individual program depends on dozens of RM rules to execute).
If there is anything interesting about B-Tests, it is making it easier to grade
them. The one thing that's easier for C-Tests is grading them, because they
either report Passed or don't. B-Tests require some sort of error message
analysis, a lot harder problem.
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 4:07 PM
> If there is anything interesting about B-Tests, it is making it easier
> to grade them. The one thing that's easier for C-Tests is grading
> them, because they either report Passed or don't. B-Tests require some
> sort of error message analysis, a lot harder problem.
Well, how often will this get done? In our context, we have a database of
expected B-test output, and only when there are discrepancies do we have to look
at it, which is seldom! Would be even more seldom if fewer incompatible changes
were made :-) :-)
****************************************************************
From: Randy Brukardt
Sent: Tuesday, November 19, 2013 4:53 PM
It's not a major problem for a vendor, because we can compare against a known
good set of results. But anyone that wants to do their own testing has to do
this. Based on my correspondence, there are a number of people that want to do
so. (And of course, any formal testing has to do this as well, it's a
significant part of the expense of formal testing.)
****************************************************************
From: Randy Brukardt
Sent: Tuesday, November 19, 2013 3:52 PM
...
> > To me, the advantage of "suppressible error" over warnings is two-fold:
> >
> > (1) The default in standard mode is that it is an error (the program
> > is not allowed to execute). You can use a configuration pragma to
> > allow the program to execute (probably modeled as a compiler switch,
> > but of course the RM has nothing to say about switches). The
> > advantage of the configuration pragma is that it can be "compiled
> > into the environment" without needing to modify any source code
> > (which is critical in the case of incompatibilities). "Warning"
> > implies the opposite (that the program is allowed to execute by default).
>
> That's meaningless, compilers are free to provide whatever defaults
> they like. They just have to have a mode which is strictly conforming
> (I doubt many GNAT programmers know the full details of specifying
> this strictly conforming mode, since it is in practice useless except
> for running ACATS tests :-))
It's not meaningless even if ignored, because it conveys the intent of the
language designers that these are errors. And it conveys the intent of the
language designers as to appropriate defaults. Compilers that stray too far from
Standard mode by default are trouble. (GNAT does this with not having overflow
checking enabled by default; this causes no end of questions from newbies on
comp.lang.ada. It would be better if the free GNAT was much closer to standard
mode than it is.)
> > (2) The name (and default) makes it clear that these are errors,
> > which we only support allowing of the execution for the purposes of
> > compatibility with previous versions of Ada. "Warning" is not so clear.
>
> Yes, of course in many cases I disagree that these are errors, they
> prevent well defined perfectly reasonable programs from executing :-)
Amazing! You can see the future and already disagree with it! :-)
Since we haven't designated a single error to be a "suppressible error", and
we've only talked about two very different cases as potential "suppressible
error", it's impossible to say anything about what the effect of the errors are.
Unless you meant to make a blanket statement about all Ada Legality Rules (as
one can argue in many cases that there is some sensible semantics for illegal
constructs, one example would be class-wide arguments to controlling arguments
of statically bound routines).
> > Of course, if someone has a better name than "suppressible error"
> > (with the same intent and connotation), I'd be happy to support that too.
>
> Well to me, it's finally a way of recognizing that compatibility is
> more important than aesthetic consistency of the semantic model, and if
> we can use these, whatever they are called, to reduce incompatibilities
> in the future, that would be a step forward.
I still think these will not reduce the incompatibilities very much, since there
are so many cases where they wouldn't help. But some help is likely to be better
than no help.
Note that the configuration pragma is essentially to enable obsolescent
features; it might even make sense to place it there. The core language wouldn't
have to acknowledge these at all. So we get both "aesthetic consistency of the
semantic model" (as obsolescent features are ignored for this purpose) and
compatibility. Sounds like a win-win.
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 4:04 PM
> It's not meaningless even if ignored, because it conveys the intent of
> the language designers that these are errors. And it conveys the
> intent of the language designers as to appropriate defaults. Compilers
> that stray too far from Standard mode by default are trouble.
Sure I understand, though I still think that in some cases, e.g. warning about
overlapping parameters, these should NOT be considered as errors; the effect is
well defined, and they are absolutely in warning territory to me. Same with
several other cases.
> (GNAT does this with not having
> overflow checking enabled by default; this causes no end of questions
> from newbies on comp.lang.ada. It would be better if the free GNAT was
> much closer to standard mode than it is.)
Probably so, but compatibility always looms large :-)
>>> (2) The name (and default) makes it clear that these are errors,
>>> which we only support allowing of the execution for the purposes of
>>> compatibility with previous versions of Ada. "Warning" is not so clear.
>>
>> Yes, of course in many cases I disagree that these are errors, they
>> prevent well defined perfectly reasonable programs from executing :-)
>
> Amazing! You can see the future and already disagree with it! :-)
I am thinking of past cases where we would have used this (or at least I hope we
would have used it), e.g. overlapping parameters, or reaching further back,
static expressions like 9/0.
> Unless you meant to make a blanket statement about all Ada Legality
> Rules (as one can argue in many cases that there is some sensible
> semantics for illegal constructs, one example would be class-wide
> arguments to controlling arguments of statically bound routines).
The idea does not come out of the blue; it comes out of a discussion context, and I
am commenting in the scope of that context.
> I still think these will not reduce the incompatibilities very much,
> since there are so many cases where they wouldn't help. But some help
> is likely to be better than no help.
We will see.
> Note that the configuration pragma is essentially to enable
> obsolescent features; it might even make sense to place it there. The
> core language wouldn't have to acknowledge these at all. So we get
> both "asthetic consistency of the semantic model" (as obsolescent
> features are ignored for this purpose) and compatibility. Sounds like a win-win.
Not sure what you are suggesting wrt obsolescent features. It would be
*awful* to have to use a configuration pragma in order to be able to say
Ascii.LF, or to use size clauses instead of size aspects???
****************************************************************
From: Randy Brukardt
Sent: Tuesday, November 19, 2013 4:58 PM
> Not sure what you are suggesting wrt obsolescent features. It would be
> *awful* to have to use a configuration pragma in order to be able
> to say Ascii.LF, or to use size clauses instead of size aspects???
I was just thinking out loud. The configuration pragma to suppress suppressible
errors would primarily be intended for compatibility with older versions of Ada.
That's really the purpose of the obsolescent features annex (provide features
for compatibility), so arguably, that pragma could be defined there. In which
case it wouldn't appear in the core, leaving a clean(er) semantic model. I was
*only* talking about the ability to suppress suppressible errors, and not any
other feature of the language (now or in the future). In any event, it's not a
big deal either way.
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 5:10 PM
AH, I see, just define the pragma there, OK .. I don't care, I have always
thought Annex J to be useless :-)
And we still have a nasty unresolved problem with Annex J, which is that we
moved all the pragmas there. It is of course totally unrealistic to expect
everyone to switch to using only aspects, so 98% of all Ada programs will be
using the obsolescent pragmas and aspects.
That means that if No_Obsolescent_Features excludes them, we have a BIG
compatibility problem.
For GNAT, we have just ignored this, and the NOF restriction does not include
this new stuff.
Maybe violating this restriction with rep attributes and pragmas should be a
suppressible error :-) :-)
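
[Editor's note: a sketch, with invented names, of the combination being described.
Note that, as the next message points out, 13.12.1(4/3) makes it implementation
defined whether the restriction flags the J.15 pragmas at all.

   pragma Restrictions (No_Obsolescent_Features);

   package Telemetry is
      type Sample_Buffer is array (1 .. 1024) of Boolean;
      pragma Pack (Sample_Buffer);   -- pragma Pack is now in Annex J.15;
                                     -- whether the restriction rejects it
                                     -- is implementation defined
   end Telemetry;]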
****************************************************************
From: Randy Brukardt
Sent: Tuesday, November 19, 2013 5:43 PM
...
> And we still have a nasty unresolved problem with Annex J, which is
> that we moved all the pragmas there. It is of course totally
> unrealistic to expect everyone to switch to using only aspects, so 98%
> of all Ada programs will be using the obsolescent pragmas and aspects.
>
> That means that if No_Obsolescent_Features excludes them, we have a
> BIG compatibility problem.
But this problem was resolved long ago, because you complained about it 2 years
ago. (This is the fourth time you've made this comment!) Specifically,
13.12.1(4/3) defines No_Obsolescent_Features as:
There is no use of language features defined in Annex J. It is implementation
defined whether uses of the renamings of J.1 and of the pragmas of J.15 are
detected by this restriction. This restriction applies only to the current
compilation or environment, not the entire partition.
> For GNAT, we have just ignored this, and the NOF restriction does not
> include this new stuff.
GNAT is following the letter of the Standard, see above.
> Maybe violating this restriction with rep attributes and pragmas
> should be a suppressible error :-) :-)
That probably would have been better than making it implementation-defined, but
I don't see any reason to change. (And note that representation attributes are
*not* obsolescent.)
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 6:07 PM
> But this problem was resolved long ago, because you complained about
> it 2 years ago. (This is the fourth time you've made this comment!)
> Specifically,
> 13.12.1(4/3) defines No_Obsolescent_Features as:
>
> There is no use of language features defined in Annex J. It is
> implementation defined whether uses of the renamings of J.1 and of the
> pragmas of J.15 are detected by this restriction. This restriction
> applies only to the current compilation or environment, not the entire partition.
>
>> For GNAT, we have just ignored this, and the NOF restriction does not
>> include this new stuff.
>
> GNAT is following the letter of the Standard, see above.
Ah, I didn't know that, good! Although I must say this seems an unnecessary
non-portability. I suspect in practice that any other implementation would copy
GNAT in this regard :-)
>> Maybe violating this restriction with rep attributes and pragmas
>> should be a suppressible error :-) :-)
>
> That probably would have been better than making it
> implementation-defined, but I don't see any reason to change. (And
> note that representation attributes are *not* obsolescent.)
Ah ok, thanks for clarification, though it puzzles me to make Component_Size
attribute specification non-obsolescent and pragma Pack obsolescent.
****************************************************************
From: Randy Brukardt
Sent: Tuesday, November 19, 2013 6:16 PM
> Ah ok, thanks for clarification, though it puzzles me to make
> Component_Size attribute specification non-obsolescent and pragma Pack
> obsolescent.
The reason is that you can put an attribute_definition_clause in a private part,
and thus reference items that you aren't allowed to mention in a similar aspect
specification. So there are legal things that you can't write as an aspect
specification but can write as an attribute_definition_clause (whether writing
such things is a good idea is a separate issue).
OTOH, the pragmas don't take expressions (just the name of the entity), and thus
any legal pragma can be written as an aspect specification. Ergo, there is no
need for the pragmas, new code should only use aspect specifications.
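
[Editor's note: a minimal sketch, with invented names, of the difference just
described:

   package Bits is
      type Flag_Array is array (1 .. 8) of Boolean;
      --  Per the explanation above, an aspect specification on this
      --  declaration could not name Unit_Size from the private part:
      --     with Component_Size => Unit_Size;   -- not allowed here
   private
      Unit_Size : constant := 1;
      for Flag_Array'Component_Size use Unit_Size;   -- legal: the clause in
      --  the private part can reference private entities
   end Bits;]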
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 6:27 PM
> OTOH, the pragmas don't take expressions (just the name of the
> entity), and thus any legal pragma can be written as an aspect
> specification. Ergo, there is no need for the pragmas, new code should only
> use aspect specifications.
I disagree with the ergo here. To me a pragma Pack can often be regarded as an
implementation detail, and thus better confined to the private part.
****************************************************************
From: Stephen Michell
Sent: Tuesday, November 19, 2013 1:49 PM
...
>> Yet this code compiles in my current version of GNAT.
>
> Actually the current version reports this as an error, you must have a
> slightly older version.
How recent, Ed? The latest GAP version did not generate an error.
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 19, 2013 4:14 PM
More recent than the latest GAP version indeed!
****************************************************************
From: Jeff Cousins
Sent: Wednesday, November 20, 2013 3:43 AM
I see the suppressible errors as a variation on the theme of obsolescent
features. I also think that No_Obsolescent_Features should have an optional
parameter of language revision so that a user could get himself up-to-date with
Ada 2005 now and leave getting up-to-date with Ada 2012 for some time in the
future.
****************************************************************
From: Erhard Ploedereder
Sent: Wednesday, November 20, 2013 7:53 AM
As much as I would like to see suppressable errors for all situations where some
generally undecidable error situation is decided by the compiler anyhow, I am
seriously worried about the ARG politics of providing the capability. I.e., I am
really eager to strengthen warnings, but I am not at all in favor of weakening
error messages.
Slightly overstating my case: are we ever going to close safety loopholes again
by declaring something plain illegal, when we have the easy way out of declaring
it suppressably illegal?
Statements like
On 19.11.2013 21:53, Robert Dewar wrote:
> Well to me, it's finally a way of recognizing that compatibility is
> more important than aesthetic consistency of the semantic model, and if
> we can use these, whatever they are called, to reduce incompatibilities
> in the future, that would be a step forward.
which also have been made at the meeting - so it is not just Robert - make me
worry that indeed almost all newly discovered errors will become suppressable
for fear of rendering an existing program illegal. To me, that is clearly the
wrong way, including the wrong message to the user community. "Ada 2012 with
Suppressable Safety", hmmm.
Note that the aliasing rule is methodological, intended to prevent unexpected
results (it would have been a really good candidate for a suppressable error and
a much broader definition). The case of violating discriminant constraints, which
was the case discussed by the ARG, is a safety issue. When it comes to safety, I
do not want suppressable errors.
Moreover, while compiler producers have the liberty to suppress errors to their
liking in non-standard mode, e.g., to achieve compatibility with the
"pre-ARG-decided-error" status of the language, language definers are not free
to ignore specifying the actual semantics of code in the presence of suppressed
errors. I would love to avoid that.
The analogy to suppressed checks doesn't quite work. One can suppress checks
rather surgically and individually; plus, there is only a small number of them.
Suppressable errors so far are an all-or-nothing proposition.
****************************************************************
From: Tucker Taft
Sent: Wednesday, November 20, 2013 8:07 AM
I think we should reserve "suppressible" errors mostly for what might be called
"methodological" errors. Examples that come to mind are the Ada 2012 aliasing
checks for OUT parameters, the Ada 95 illegality of out-of-range static values,
and perhaps something like the Ada 95 requirement of having all or no
dynamically-tagged operands in a dispatching call.
This bizarre OUT parameter case for access types and view conversions is right
on the border. It doesn't inevitably lead to erroneousness -- only if the OUT
parameter's initial value is read.
In any case, I think the language-defined configuration pragma for suppressing
these should be very specific, uniquely identifying the legality error that is
being suppressed. Implementations could of course define their own pragmas to be
more sweeping in nature.
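A purely hypothetical sketch of such a specific configuration pragma (the pragma
name and the error identifiers are invented; nothing like this has been defined):

   pragma Allow_Error (Overlapping_Out_Parameters);
   pragma Allow_Error (Dynamically_Tagged_Mismatch);
   --  Each use names exactly one suppressible legality error, rather than
   --  suppressing errors wholesale; an implementation could still provide
   --  broader, implementation-defined pragmas of its own.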
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 20, 2013 8:16 AM
> which also have been made at the meeting - so it is not just Robert -
> make me worry that indeed almost all newly discovered errors will
> become suppressable for fear of rendering an existing program illegal.
> To me, that is clearly the wrong way, including the wrong message to
> the user community. "Ada 2012 with Suppressable Safety", hmmm.
pragma Suppress already has that "problem" if you think it is a problem!
> Note that the aliasing problem is methodological to prevent unexpected
> results (it would have been a really good candidate for a suppressable
> error and a much broader definition). The case of violating
> discriminant constraints, which was the case discussed by the ARG, is a safety issue.
> When it comes to safety, I do not want suppressable errors.
But you can totally suppress discriminant checks anyway!
> The analogy to suppressed checks doesn't quite work. One can suppress
> checks rather surgically and individually; plus, there is only a small
> number of them. Suppressable errors so far are an all-or-nothing
> proposition.
Not at all, I think there should be fine grained control over suppressing
errors, using the scope model of Suppress, and we are only talking here about a
very small number of errors that are candidates for suppression.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 20, 2013 8:22 AM
> I think we should reserve "suppressible" errors mostly for what might
> be called "methodological" errors. Examples that come to mind are the
> Ada 2012 aliasing checks for OUT parameters, the Ada 95 illegality of
> out-of-range static values, and perhaps something like the Ada 95
> requirement of having all or no dynamically-tagged operands in a dispatching call.
I agree completely
> This bizarre OUT parameter case for access types and view conversions
> is right on the border. It doesn't inevitably lead to erroneousness
> -- only if the OUT parameter's initial value is read.
I agree completely
> In any case, I think the language-defined configuration pragma for
> suppressing these should be very specific, uniquely identifying the legality
> error that is being suppressed. Implementations could of course define
> their own pragmas to be more sweeping in nature.
I agree completely, and doubt that sweeping stuff is a good idea. We currently
use an undocumented debug switch for this purpose, and it would be good to
replace it with something less sweeping. Currently our debug switch looks like:
> -- d.E Turn selected errors into warnings. This debug switch causes a
> -- specific set of error messages into warnings. Setting this switch
> -- causes Opt.Error_To_Warning to be set to True. The intention is
> -- that this be used for messages representing upwards incompatible
> -- changes to Ada 2012 that cause previously correct programs to be
> -- treated as illegal now. The following cases are affected:
> --
> -- Errors relating to overlapping subprogram parameters for cases
> -- other than IN OUT parameters to functions.
> --
> -- Errors relating to the new rules about not defining equality
> -- too late so that composition of equality can be assured.
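For reference, a minimal sketch of the second category listed above -- the
Ada 2012 rule that a primitive "=" of an untagged record type must be declared
before the type is frozen (the names are invented):

   package Late_Equality is
      type Pair is record
         X, Y : Integer;
      end record;
      Zero : constant Pair := (0, 0);  -- freezes Pair
      function "=" (L, R : Pair) return Boolean;
      --  Illegal in Ada 2012 (legal in Ada 95/2005): the primitive
      --  equality operator is declared after the type is frozen, so
      --  composition of equality could not be assured.
   end Late_Equality;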
****************************************************************
From: Geert Bosch
Sent: Wednesday, November 20, 2013 11:42 AM
> The case that we talked about in Pittsburgh (the bad view conversion
> of access types), the runtime semantics is that of making the
> parameter value abnormal (such that reading it would cause the program to be erroneous).
This is exactly right, in my opinion. This involved out parameters, where it
typically does not matter if the value is abnormal, as it will get overwritten
anyway. Moreover, failing any language-defined check in a procedure updating a
composite variable may make it abnormal. So, this is just another case where we
cannot avoid a particular set of errors at compile time, unless we make a set of
programs with well-defined behavior illegal.
> Generally, we'd prefer to prevent that by a Legality Rule, rather than
> allowing a potential time bomb. But since the bug has persisted since
> Ada 95 (and the construct is an Ada 83 one), it's possible it exists
> in code. So we could use a suppressible error for this case, and one
> expects that the error would be suppressed only in the case where
> someone has existing code that they can't modify.
It sometimes seems that far too much effort is put into preventing programmers
from shooting themselves in the foot with complicated contraptions, while ignoring
the minefield of real problems in concurrent programs. We're happy to make any
concurrent I/O erroneous (including where the file argument is implicit) even
though languages such as C use implicit locking to avoid undefined behavior.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 20, 2013 3:46 PM
I agree 100% with this, I think for example it is really bad that separate tasks
can't do Put_Line safely. In fact there is an internal ticket here at AdaCore to
fix that :-) So that at least GNAT will behave nicely, even if Ada does not!
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 20, 2013 11:51 AM
...
> > The analogy to suppressed checks doesn't quite work. One can
> > suppress checks rather surgically and individually; plus, there is
> > only a small number of them. Suppressable errors so far are an
> > all-or-nothing proposition.
>
> Not at all, I think there should be fine grained control over
> suppressing errors, using the scope model of Suppress, and we are only
> talking here about a very small number of errors that are candidates
> for suppression.
I had suggested something like that during the ARG meeting. Tucker made the good
point that using local error suppression would require modifying the source
code. And if you are allowed to modify the source code, you should just get rid
of the offending code (all of the cases we've discussed to date have easy local
workarounds) rather than suppressing an error. He thought it was more valuable
to have the user specify what errors they wanted suppressed (globally), so that
they don't have to suppress all or nothing. But they need global suppression so
that they don't have to modify the source (just the build environment).
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 20, 2013 3:48 PM
Well how easy it is to modify the source code to eliminate the error is not
clear, and you can probably justify special local suppress additions more easily
than
other code changes.
But note I said we should follow the Suppress model, and of course that model
allows global control as well via configuration pragmas.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 20, 2013 12:06 PM
...
> Sometimes seems that far too much effort is put into preventing
> programmers of shooting themselves in the foot with complicated
> contraptions, while ignoring the minefield of real problems in
> concurrent programs. We're happy to make any concurrent I/O erroneous
> (including where the file argument is implicit) even though languages
> such as C use implicit locking to avoid undefined behavior.
Full Ada is not really a concurrent language; there are far too many ways to
shoot yourself in the head (not foot!). (It's more of a cooperating sequential
processes sort of language.) Implicit locking is *way* down the list of those
priorities. (I think that the project that Brad and Steve are working on might
present some help in this area.)
Implicit locking for language-defined subprograms is a massive minefield. We
tried to come up with a definition of it for the containers, but failed (too
many deadlock and livelock cases). Earlier, we had tried to include implicit
locking in Claw, but again there were too many deadlock and livelock cases.
Maybe it could be made to work with a limited subset of I/O, but I'm skeptical
that we could even make that work without lots and lots of work.
I would guess that we'll have to define purpose-built libraries to support
concurrent I/O, probably using new categorizations defined by the
multiprocessing working group. (My guess is that C is delusional if they think
that implicit locking will always work on their full libraries, but probably no
one will care because they don't use (or will learn to avoid) the problem
areas.)
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 20, 2013 3:49 PM
> Full Ada is not really a concurrent language; there are far too many
> ways to shoot yourself in the head (not foot!).
For the record, I find this a totally absurd (and rather
damaging) statement!
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 20, 2013 4:47 PM
I really should have said "parallel" rather than "concurrent". But my point
still stands: Ada has far too many constructs that don't allow concurrent
execution (or don't work reliably for concurrent execution), so it takes a lot
of effort to write code that can be executed in parallel - and the language
provides virtually no help. A language doesn't have to be that way (see
Parasail, for example); more could be done for Ada. (For instance, note Geert's
comments during the recent meeting about Pure and Pure_Function allowing too
much for parallel/concurrent execution.)
P.S. Please note that I won't make a statement like the above in a public forum;
it would be too easily taken the wrong way. Ada surely handles concurrency
better than other mainstream languages. But that's a very low bar!
****************************************************************
From: Robert Dewar
Sent: Friday, November 22, 2013 1:59 PM
Here is a nice example of why the rule about overlapping parameters should be
suppressible. Of course all four calls make perfect semantic sense.
> 1. procedure X is
> 2. procedure Xchg (A, B : in out Integer) is
> 3. T : constant Integer := A;
> 4. begin
> 5. A := B;
> 6. B := T;
> 7. end;
> 8.
> 9. function Ident (R : Integer) return Integer
> 10. is begin return R; end;
> 11.
> 12. Data : array (1 .. 10) of Integer;
> 13. M, N : constant Integer := 4;
> 14. P, Q : constant Integer := Ident (4);
> 15. R, S : Integer;
> 16.
> 17. begin
> 18. Xchg (Data(2), Data(2));
> |
> >>> writable actual for "A" overlaps with actual for "B"
>
> 19. Xchg (Data(M), Data(N));
> |
> >>> writable actual for "A" overlaps with actual for "B"
>
> 20. Xchg (Data(P), Data(Q));
> 21. Xchg (Data(R), Data(S));
> 22. end X;
****************************************************************
From: Randy Brukardt
Sent: Friday, November 22, 2013 2:27 PM
Humm. We actually considered this exact case when contemplating the overlapping
parameters rule, and we thought that it was likely that *statically* overlapping
parameters in a case like this represented a bug, rather than being intentional.
Why would someone write a complicated call that does nothing on purpose?
Replacing the calls with "null;" makes more sense (or "delay 0.0001;" if the
point is to waste time).
Since the rule doesn't trigger when the parameters are calculated (as in the P
and Q case), the normal reasons for this occurring would not be illegal. It's
only illegal when the parameters are statically overlapping.
Anyway, there is an argument for making procedure parameter overlapping
suppressible (not functions, that's new to Ada 2012 so there can be no
compatibility problem, and the dangers are much worse when multiple calls are
involved); I'd still like to see a real user case that was convincing (every
example that has been posted so far seems unlikely or contrived to me) but the
suppressible error idea has the right connotations (get rid of the problem if
you can, it's really a problem in 95% of the cases and it's tricky code to avoid
in the other 5%) so I would not object to that. (Presuming Bob gets a real
proposal written up someday soon.)
****************************************************************
From: Tucker Taft
Sent: Friday, November 22, 2013 2:39 PM
> ... I'd still like to see a real user case that was convincing
We did have a customer complain. They had two OUT parameters, and were ignoring
both of them. They passed the same variable named "Ignore" to both.
It is certainly easy to work around (just create an Ignore1 and Ignore2), but as
is to be expected, people hate to make any change to code that is already
working just to make the compiler happy.
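A minimal, self-contained sketch of that pattern (all names invented):

   procedure Demo is
      procedure Query (Count, Errors : out Natural) is
      begin
         Count  := 0;
         Errors := 0;
      end Query;

      Ignore : Natural;
   begin
      Query (Count => Ignore, Errors => Ignore);
      --  Ada 2012 rejects this call: both writable actuals are known to
      --  denote the same object.  The easy, if ugly, workaround is to
      --  declare separate Ignore_1 and Ignore_2 variables.
   end Demo;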
****************************************************************
From: Randy Brukardt
Sent: Friday, November 22, 2013 2:56 PM
Understood. My problem with that example is that I can't imagine any
well-designed interface having two out parameters of the same type that you
might want to ignore. I can imagine it happening very rarely (not every
interface is well-designed!), but I can't get very interested in making bad code
easier to write.
In any case, the suppressible error idea seems to have the right presentation
(the default is an error, the error can be turned off for compatibility
reasons), so I wouldn't object to changing the rule that way. But first we need
a full proposal for suppressible errors.
****************************************************************
From: Robert Dewar
Sent: Friday, November 22, 2013 2:59 PM
> We did have a customer complain. They had two OUT parameters, and
> were ignoring both of them. They passed the same variable named "Ignore" to
> both.
There was one active complaint, and actually two other similar instances in our
test suite.
> It is certainly easy to work around (just create an Ignore1 and
> Ignore2), but as is to be expected, people hate to make any change to
> code that is already working just to make the compiler happy.
And it is an ugly workaround!
****************************************************************
From: Robert Dewar
Sent: Friday, November 22, 2013 3:01 PM
I think the exchange example is instructive.
Yes, you can say, well it's stupid to call exchange with two identical
parameters, but it can often arise in the case of conditional and parameterized
compilation that you execute some silly code in some cases of parameterization.
****************************************************************
From: Robert Dewar
Sent: Friday, November 22, 2013 3:31 PM
> Understood. My problem with that example is that I can't imagine any
> well-designed interface having two out parameters of the same type
> that you might want to ignore. I can imagine it happening very rarely
> (not every interface is well-designed!), but I can't get very
> interested in making bad code easier to write.
Given that the language does not have optional out parameters, it seems
perfectly reasonable to have a procedure that supplies a bunch of output
information which might not be needed in every case.
The language should not be designed around Randy's idiosyncratic ideas of what
he thinks is or is not "bad code". I found the customer's code in each of these
cases perfectly reasonable.
> In any case, the suppressible error idea seems to have the right
> presentation (the default is an error, the error can be turned off for
> compatibility reasons), so I wouldn't object to changing the rule that way.
> But first we need a full proposal for suppressible errors.
As usual I find mumbling about defaults to be totally bogus; defaults are up to
an implementation, not the language design. The only requirement is that an
implementation have *A* mode in which the RM semantics hold. Whether this is the
default mode is up to the implementation.
****************************************************************
From: Bob Duff
Sent: Friday, November 22, 2013 3:41 PM
> Understood. My problem with that example is that I can't imagine any
> well-designed interface having two out parameters of the same type
> that you might want to ignore. I can imagine it happening very rarely
> (not every interface is well-designed!), but I can't get very
> interested in making bad code easier to write.
I rather strongly disagree with that point of view. Compatibility is about
compatibility of all code, not just code where the ARG approves of the style.
And not just code where Randy approves of the style. You have every right to
scorn code that has two 'out' parameters of the same type. But I and others
don't share that view. We need to avoid letting personal stylistic preferences
override compatibility concerns. Even if you convinced me that that code is
evil, it's not about making it "easier to write" -- that code is already
written, and (in some environments) might be expensive to modify.
For example, I am rabidly opposed to indenting code using TAB characters. But
if I were to propose making them illegal, you would rightly consider that to be
completely irresponsible!
> In any case, the suppressible error idea seems to have the right
> presentation (the default is an error, the error can be turned off for
> compatibility reasons), so I wouldn't object to changing the rule that way.
Right, this is exactly what the concept I've been pushing for is for. You and I
can't necessarily agree whether "two 'out' parameters of the same type" =
"Evil". This way, we don't NEED to agree on that; a "suppressible error" can
satisfy us both. The concept can defuse all sorts of unnecessary arguments.
> But first we need a full proposal for suppressible errors.
Understood. ;-)
****************************************************************
From: Bob Duff
Sent: Friday, November 22, 2013 3:57 PM
> As usual I find mumbling about defaults to be totally bogus, defaults
> are up to an implementation, not the language design. The only
> requirement is that an implementation have *A* mode in which the RM
> semantics hold. Whether this is the default mode is up to the
> implementation.
From a formal point of view, you are of course correct. But I think Randy is
correct to want the RM worded in a way that implies that the "error" is the main
thing, and "Oh by the way, if you really want to, you can run the illegal
program anyway, and get well-defined semantics." That's better than, "You can
do whatever you like, and oh by the way, the compiler must give a warning".
Formally, those are two ways of expressing the same thing, but the first way
makes it sound like we're taking these things seriously.
****************************************************************
From: Robert Dewar
Sent: Friday, November 22, 2013 4:31 PM
OK in principle, but in practice you have been making clear warning situations
(like overlapping params) into errors :-)
****************************************************************
From: Randy Brukardt
Sent: Friday, November 22, 2013 4:29 PM
> > Understood. My problem with that example is that I can't imagine any
> > well-designed interface having two out parameters of the same type
> > that you might want to ignore. I can imagine it happening very
> > rarely (not every interface is well-designed!), but I can't get very
> > interested in making bad code easier to write.
>
> Given that the language does not have optional out parameters, it
> seems perfectly reasonable to have a procedure that supplies a bunch
> of output information which might not be needed in every case.
It's a bad idea to have two non-equivalent parameters of the same type in any
subprogram interface, because that means that the compiler can't help you ensure
that the parameters are properly matched. (Coding rules can help with this, by
requiring named notation in calls where there are multiple parameters, but even
that isn't a full cure.) Obviously, you can't always avoid having parameters of
the same type ("-" comes to mind).
> The language should not be designed around Randy's idiosyncratic ideas
> of what he thinks is or is not "bad code". I found the customers code
> in each of these cases perfectly reasonable.
Since I've never had the chance to see the customer code, only very abbreviated
summaries of it, I have no way to decide if I agree or not. It's quite possible
that I'd agree with you if I actually saw the interface itself. I realize that
you probably can't release the customer code, so I probably will never see any
real code like this -- and your descriptions are not at all convincing.
I've said since the beginning that I want to see a *real* example of this
problem, not a sanitized summary of one (which eliminates anything that would
make such an example convincing). Since no one is willing/able to provide such
an example, I continue to think this is much ado about very little. But I remain
willing to be convinced otherwise -- just show a convincing example that
actually looks like real-life code.
> > In any case, the suppressible error idea seems to have the right
> > presentation (the default is an error, the error can be turned off
> > for compatibility reasons), so I wouldn't object to changing the
> > rule that way. But first we need a full proposal for suppressible errors.
>
> As usual I find mumbling about defaults to be totally bogus, defaults
> are up to an implementation, not the language design. The only
> requirement is that an implementation have
> *A* mode in which the RM semantics hold. Whether this is the default
> mode is up to the implementation.
(A) When I'm talking about "default", I'm talking about the default in terms of
the language rules, not necessarily implementations. That has much more to do
with the perception of the language and its safety rather than what any
individual implementation does. Obviously, we can't control implementations, but
we can control the message that we send both programmers and implementers.
(B) I'd caution any implementer who thinks they know better than the language
designers: they are usually wrong. The language designers have
considered far more cases (use cases, implementation costs, portability issues,
etc.) than an implementer is likely to have encountered. (I know this from
personal experience!) It's certainly possible for a default to be better than
the language (GNAT's static elaboration rules come to mind, because they take an
obscure runtime check and replace it with compile-time rules -- that's almost
always better), but that's the rare case. The cost in interoperability with
other compilers and with the language description makes most such differences
bad choices. Thus, having the defaults differ substantially from the language
definition is generally a bad idea.
(C) The attitude you give above is exactly why we don't want to call these
warnings. Implementers might very well decide to hide warnings by default, but
they'd have a harder case (especially for a "safe" language like Ada) in hiding
errors. That's not the message that we want to give to Ada users, and I don't
think most implementers would want to give that message either.
****************************************************************
From: Robert Dewar
Sent: Friday, November 22, 2013 4:47 PM
> It's a bad idea to have two non-equivalent parameters of the same type
> in any subprogram interface, because that means that the compiler
> can't help you ensure that the parameters are properly matched.
> (Coding rules can help with this, by requiring named notation in calls
> where there are multiple parameters, but even that isn't a full cure.)
> Obviously, you can't always avoid having parameters of the same type ("-"
> comes to mind).
Total Nonsense IMO
> Since I've never had the chance to see the customer code, only very
> abbreviated summaries of it, I have no way to decide if I agree or
> not. It's quite possible that I'd agree with you if I actually saw the
> interface itself. I realize that you probably can't release the
> customer code, so I probably will never see any real code like this --
> and your descriptions are not at all convincing.
I really don't care AT ALL if you agree. Whether you think code is good or bad
is not even interesting to me, let alone a valid input to the language design.
> I've said since the beginning that I want to see a *real* example of
> this problem, not a sanitized summary of one (which eliminates
> anything that would make such an example convincing). Since no one is
> willing/able to provide such an example, I continue to think this is
> much ado about very little. But I remain willing to be convinced
> otherwise -- just show a convincing example that actually looks like real-life
> code.
I really don't see why we should go to this bother; given your peculiar ideas,
you may be impossible to convince, but the point is to design a language for use
by everyone, not just Randy.
> (B) I'd caution any implementer that thinks that they know better than
> the language designers that they are usually wrong. The language
> designers have considered far more cases (use cases, implementation
> costs, portability issues, etc.) than an implementer is likely to have
> encountered. (I know this from personal experience!) It's certainly
> possible for a default to be better than the language (GNAT's static
> elaboration rules come to mind, because they take an obscure runtime
> check and replace it with compile-time rules -- that's almost always
> better), but that's the rare case. The cost in interoperability with
> other compilers and with the language description makes most such
> differences bad choices. Thus, having the defaults differ substantially from
> the language definition is generally a bad idea.
Actually I think the language designers are often far too compiler oriented, and
NOT sufficiently actual user oriented. How many ARG members are actually in the
business of building big critical Ada systems?
I think for instance that the dynamic model of elaboration is pretty horrible,
and definitely it is better that GNAT defaults to the static model.
> (C) The attitude you give above is exactly why we don't want to call
> these warnings. Implementers might very well decide to hide warnings
> by default, but they'd have a harder case (especially for a "safe"
> language like Ada) in hiding errors. That's not the message that we
> want to give to Ada users, and I don't think most implementers would want to
> give that message either.
Weak argument IMO, defaults are chosen in response to customer needs not some
vague idea of what is or is not safe.
****************************************************************
From: Jeff Cousins
Sent: Wednesday, November 27, 2013 10:16 AM
> Actually I think the language designers are often far too compiler oriented,
> and NOT sufficiently actual user oriented. How many ARG members are actually
> in the business of building big critical Ada systems?
That's kind of my day job.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 10:42 AM
Indeed, and your presence is most welcome!
****************************************************************
From: Jeff Cousins
Sent: Thursday, November 28, 2013 10:03 AM
Is the checking for overlapping out or in out parameters definitely in the GNAT
v7.2 "preview"? I've been through the code for multiple projects across 3 sites
without hitting any examples.
****************************************************************
From: Robert Dewar
Sent: Thursday, November 28, 2013 7:53 PM
Yes, and that is not surprising!
****************************************************************
From: Erhard Ploedereder
Sent: Friday, November 22, 2013 4:31 PM
> Here is a nice example of why the rule about overlapping parameters
> should be suppressible.
> Of course all four calls make perfect semantic sense.
>
>> 1. procedure X is
>> 2. procedure Xchg (A, B : in out Integer) is
>> 3. T : constant Integer := A;
>> 4. begin
>> 5. A := B;
>> 6. B := T;
>> 7. end;
Yes. And it comes close to being the ONLY program where these calls make perfect
sense (well, actually, all programs are o.k. where the intended semantics are
identity of the pre- and post-values of all parameters in the case of aliased
parameters).
With a minor change, as in:
>> 1. procedure X is
with invariant A + B = A'old + B'old
-- forgive my syntax
>> 2. procedure Xchgplusminus (A, B : in out Integer) is
>> 3. T : constant Integer := A;
>> 4. begin
>> 5. A := B + 1;
>> 6. B := T - 1;
>> 7. end;
aliasing already screws up royally (and with results that depend on the order of
the write-backs - what may have worked on one target in the user's view, fails
on the other target -- not that the above could have worked in anybody's view on
any target).
****************************************************************
From: Robert Dewar
Sent: Friday, November 22, 2013 4:49 PM
> aliasing already screws up royally (and with results that depend on
> the order of the write-backs - what may have worked on one target in
> the user's view, fails on the other target -- not that the above could
> have worked in anybody's view on any target).
By the way, it has always puzzled me how much the Ada designers like
non-determinism. Yes, in the language the order of write back of out parameters
is non-deterministic. Why? I can't figure out ANY advantage of making something
like this non-deterministic!
****************************************************************
From: Tucker Taft
Sent: Friday, November 22, 2013 5:00 PM
The real concern for me is that the order is not obvious in the source code, so
if correct functioning of the algorithm relies on there being a well-defined
order, that is ultimately misleading. Something that looks commutative, for
example, is in fact not commutative. I agree that for debugging it is nice if
the order is repeatable, but if anyone starts relying on the order, the
algorithm becomes fragile to seemingly "harmless" maintenance activities such as
introducing a temporary; e.g.:
F(G(X), H(Y)) -->
HT : constant HH := H(Y);
F(G(X), HT)
****************************************************************
From: Randy Brukardt
Sent: Friday, November 22, 2013 5:07 PM
> By the way, it has always puzzled me how much the Ada designers like
> non-determinism. Yes, in the language the order of write back of out
> parameters is non-deterministic.
> Why? I can't figure out ANY advantage of making something like this
> non-deterministic!
You'd have to ask Jean, as the vast majority of the non-determinism dates back
to Ada 83.
We've occasionally wondered about reducing some of it, but that would have a
massive impact on implementations. Implementers have been taking advantage of
those rules for 30+ years, and finding those places and removing them would be
virtually impossible as there is nothing particularly special about them. (There
may not even be comments near such places.)
For one example, I know Janus/Ada uses the non-determinism of parameter
evaluation to reduce the need to spill registers and the like during the
evaluation of parameters. We try to evaluate the most complex parameter
expression first.
Not sure if there is an advantage taken with the order of write-back (I can't
think of one, either); I suppose it was non-deterministic mainly because the
order of evaluation of the parameters is non-deterministic and one would imagine
the write-back to be done in reverse.
****************************************************************
From: Robert Dewar
Sent: Friday, November 22, 2013 5:09 PM
> The real concern for me is that the order is not obvious in the source
> code, so if correct functioning of the algorithm relies on there being
> a well-defined order, that is ultimately misleading. Something that
> looks commutative, for example, is in fact not commutative. I agree
> that for debugging it is nice if the order is repeatable, but if anyone
> starts relying on the order, the algorithm becomes fragile to seemingly
> "harmless" maintenance activities such as introducing a temporary;
To me it's even MORE fragile for maintenance if you can have code that has
worked fine for years and years, and suddenly the compiler changes its behavior
and breaks the code.
In Pascal, the OR and AND operators are weird: it is implementation defined
whether they short circuit, and implementation defined in which order they
evaluate their operands.
Virtually ALL Pascal compilers did short circuit left to right a la Ada. Then
brilliant Honeywell created a compiler that short circuited right to left.
ENDLESS problems in porting code!
These days, most Ada programmers don't (can't) read the RM any more; it's just
too impenetrable. Most Ada programmers assume that code that goes through the
compiler and works is "correct". You see this very clearly for example in
elaboration where typical large programs have just sufficient pragma Elaborate's
to get them past one particular compiler's choice of elaboration order (elab
order, another case where the language has gratuitous non-deterministic
semantics).
****************************************************************
From: Robert Dewar
Sent: Friday, November 22, 2013 5:13 PM
It's interesting that there are cases where C is more deterministic than Ada,
and in such cases, we guarantee the Ada behavior. For instance a+b+c in c always
means (a+b)+c and it does in GNAT as well, and this is something we guarantee.
****************************************************************
From: Erhard Ploedereder
Sent: Friday, November 22, 2013 5:13 PM
> By the way, it has always puzzled me how much the Ada designers like
> non-determinism. Yes, in the language the order of write back of out
> parameters is non-deterministic. Why? I can't figure out ANY advantage
> of making something like this non-deterministic!
Good question. Does anybody know the answer? My answer to students is:
because calling conventions differ and you want to match up with the prevalent
styles. On a stack machine you push and pop, i.e. return right-to-left, i.e.
deal with your parameters in a LIFO-style. On a "more normal" machine you might
want to do FIFO (i.e., left-to-right), not LIFO.
But that's only my story. Does anybody have deeper insights?
Moreover, that's only my curiosity. We agree on the fact that Ada leaves this
order undefined.
The big difference in my example would merely be that the outcome would be
predictably wrong with a specified order, but wrong (and unexpected)
nevertheless.
****************************************************************
From: Geert Bosch
Sent: Sunday, November 22, 2013 10:52 PM
> By the way, it has always puzzled me how much the Ada designers like
> non-determinism. Yes, in the language the order of write back of out
> parameters is non-deterministic. Why? I can't figure out ANY advantage
> of making something like this non-deterministic!
Indeed, this is right up there with the association of addition and subtraction,
which is implementation defined. This makes it really tricky to write
expressions that will never overflow.
Consider S being an arbitrary String:
   Len : Natural := (if S'Last < S'First then 0 else S'Last - S'First + 1);
Note how this can overflow if and only if the compiler chooses an evil
association? In practice, the main effect is to make reasoning about program
behavior a lot harder. SPARK requires one to always use parentheses in this
case, but that of course just impedes readability.
Much simpler to require left-to-right associativity. This will affect zero
existing Ada 2012 compilers, and would be trivial to comply with for any new
ones (just add parentheses implicitly, where required). In return, it will allow
any static analysis tools to do a far better job, and free the programmer from
having to put tons of otherwise unnecessary parentheses in their code.
I think we should make it a point to go through all such cases and specify
behavior. Ultimately, this is easier for both users and implementors of Ada
development tools. Note that this last group also includes code generators (for
modeling languages) and various analysis tools.
It should never be acceptable for similar expressions to be well-defined in C,
but implementation-defined or erroneous in Ada, whether it is about
evaluating A + B + C, or writing "Hello" and "World" to standard output in two
separate threads/tasks.
Maybe it isn't too late for a binding interpretation for Ada 2012 for some of
these order of evaluation topics? (*)
(*) Note in particular that the associativity can only have a visible effect in
the presence of overflow. So, all we would be forbidding is failing an overflow
check if
left-to-right evaluation would not have overflowed. Doing anything else is just
evil, isn't it?
****************************************************************
From: Jean-Pierre Rosen
Sent: Monday, November 25, 2013 5:26 AM
> It's interesting that there are cases where C is more deterministic
> than Ada, and in such cases, we guarantee the Ada behavior. For
> instance a+b+c in c always means (a+b)+c and it does in GNAT as well,
> and this is something we guarantee.
Huh? Order of evaluation is implementation defined, but associativity is well
defined: 4.5(8)
****************************************************************
From: Yannick Moy
Sent: Monday, November 25, 2013 5:48 AM
> Huh? Order of evaluation is implementation defined, but associativity
> is well defined: 4.5(8)
The problem is the implementation permission in RM 4.5(13):
"For a sequence of predefined operators of the same precedence level (and in the
absence of parentheses imposing a specific association), an implementation may
impose any association of the operators with operands so long as the result
produced is an allowed result for the left-to-right association, but ignoring
the potential for failure of language-defined checks in either the left-to-right
or chosen order of association."
In the analysis tools that we develop at AdaCore, we explicitly reject this
permission, as GNAT never uses it. It would make analysis much more complex if
we had to take into account all possible interleavings.
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 7:40 AM
> Note how this can overflow if and only if the compiler chooses an evil
> association? In practice, the main effect is to make reasoning about
> program behavior a lot harder. SPARK requires one to always use
> parentheses in this case, but that of course just impedes readability.
Actually we decided that the extra parens are such an annoyance that they are
only required when SPARK is operating in a special pedantic mode. Otherwise we
guarantee left to right association (GNAT itself always makes the same
guarantee).
> Much simpler to require left-to-right associativity. This will affect
> zero existing Ada 2012 compilers, and would be trivial to comply with
> for any new ones (just add parenthesis implicitly, where required). In
> return, it will allow any static analysis tools to do a far better
> job, and free the programmer from having to put tons of otherwise
> unnecessary parentheses in their code.
I definitely agree with this
> I think we should make it a point to go through all such cases and
> specify behavior. Ultimately, this is easier for both users and
> implementors of Ada development tools. Note that this last group also
> includes code generators (for modeling languages) and various analysis
> tools.
>
> It should never be acceptable to have similar expressions to be be
> well-defined for C, but implementation-defined or erroneous in Ada,
> whether it is about evaluating A + B + C, or writing "Hello" and
> "World" to standard output in two separate threads/tasks.
>
> Maybe it isn't too late for a binding interpretation for Ada 2012 for
> some of these order of evaluation topics? (*)
Failing that, in practice for Ada 2012, it would be good enough for now to just
clearly document the guarantees that GNAT makes, and figure out if it should be
making more such guarantees (I know Geert is looking at the I/O from tasks
issue).
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 7:44 AM
> Huh? Order of evaluation is implementation defined, but associativity
> is well defined: 4.5(8)
Yes, well it's even more surprising when someone who is unquestionably an expert
in Ada can be fooled into thinking this (luckily if you are using GNAT, or
probably any other Ada compiler around, you won't be fooled in practice). Here
is the evil paragraph you missed:
> 13 For a sequence of predefined operators of the same precedence
> level (and in the absence of parentheses imposing a specific
> association), an implementation may impose any association of the
> operators with operands so long as the result produced is an allowed
> result for the left-to-right association, but ignoring the potential
> for failure of language-defined checks in either the left-to-right or chosen order of association.
And it is this potential for failure of a check that Geert is disturbed by (and
me too!)
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 7:44 AM
> Huh? Order of evaluation is implementation defined, but associativity
> is well defined: 4.5(8)
Little language thing: "Huh" reads as very aggressive in English; it has the
sense of "what kind of nonsense are you talking about, you must be an idiot!"
Better always avoided!
****************************************************************
From: Jean-Pierre Rosen
Sent: Monday, November 25, 2013 8:05 AM
Sorry about that, I thought it was more like French "Hein?"
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 8:24 AM
> Sorry about that, I thought it was more like french "Hein?"
No problem, it's definitely more aggressive than Hein, at least to me :-)
****************************************************************
From: Bob Duff
Sent: Monday, November 25, 2013 9:07 AM
> Little language thing, Huh reads very aggressive in english, it has
> the sense of "what kind of nonsense are you talking about, you must be
> an idiot!" Better always avoided!
Really? "Huh?" doesn't come across as agressive to me.
To me it means "I am confused", which could because you are talking nonsense,
but also could be because I am missing something.
(I almost wrote "Huh?" instead of "Really?" above. ;-) Neither one is intended
to be rude!)
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 9:17 AM
I know that it is not intended that way, but to many people it comes across
that way (I am not the only one, I have seen people react this way many times).
So the fact that it does not offend EVERYONE is not an argument against avoiding
it if it offends some! Think of saying Huh in talking, it is hard to say this
without expressing puzzlement of the kind "do you *really* mean to say that?" I
almost NEVER hear people use it in conversation, and there are people who use
it in email who I think would never use it in conversation.
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 8:38 AM
> Yes, well it's even more surprising when someone who is unquestionably
> an expert in Ada can be fooled into thinking this (luckily if you are
> using GNAT, or probably any other Ada compiler around, you won't be
> fooled in practice).
> Here is the evil paragraph you missed:
By the way, I was also quite surprised to learn of this rule, which I have not
been aware of till a recent discussion over the parens in SPARK! :-)
****************************************************************
From: Erhard Ploedereder
Sent: Monday, November 25, 2013 9:22 AM
Despite the discussion of hein, huh and others....
> Order of evaluation is implementation defined, but associativity is
> well defined: 4.5(8)
JP is absolutely right. Associativity is well defined, so GNAT is only
implementing the standard by obeying it.
Order of evaluation is quite another thing, and there is significant
justification for being non-deterministic to allow for better register
optimization and for HW out-of-order evaluation.
****************************************************************
From: Erhard Ploedereder
Sent: Monday, November 25, 2013 9:28 AM
> JP is absolutely right. Associativity is well defined, so GNAT is only
> implementing the standard by obeying it.
I stand corrected by the other mail with the para that allows l-t-r
associativity to be broken. Indeed, who needs that?
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 9:33 AM
> JP is absolutely right. Associativity is well defined, so GNAT is only
> implementing the standard by obeying it.
Amazing, so, another Ada expert (joining me and JPR) who was unaware that this
is not the case! This really shows we got the language rules wrong. Once again,
the issue is that
   X + Y + Z
where X, Y, Z are of type Integer, can be evaluated as either
   (X + Y) + Z    or as    X + (Y + Z)
even though one might overflow and the other not.
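A concrete illustration (the values are invented, and a 32-bit Integer is
assumed):

   declare
      X : Integer := -1;
      Y : Integer := Integer'Last;
      Z : Integer := 1;
      R : Integer;
   begin
      R := X + Y + Z;
      --  Left-to-right: (X + Y) + Z = (-1 + Integer'Last) + 1
      --                             = Integer'Last, no exception.
      --  4.5(13) also permits X + (Y + Z), where Y + Z overflows and
      --  raises Constraint_Error even though the canonical order would
      --  not have.
   end;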
> Order of evaluation is quite another thing, and there is significant
> justification for being non-deterministic to allow for better register
> optimization and for HW out-of-order evaluation.
No, this is about associativity, NOT order of evaluation.
Back to order of evaluation for a moment:
I wonder, though. Yes, I know that it is the compiler writer's mantra that this
is a worthwhile freedom, but I would like to see actual figures to justify this
(to me unnecessary) non-determinism. Most optimizations are disappointing :-)
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 9:33 AM
> I stand corrected by the other mail with the para that allows l-t-r
> associativity to be broken. Indeed, who needs that?
Especially who needs it when it is a surprise even to the experts! I wonder if
any compiler actually does this in practice?
****************************************************************
From: Jean-Pierre Rosen
Sent: Monday, November 25, 2013 9:50 AM
>> 13 For a sequence of predefined operators of the same precedence
>> level (and in the absence of parentheses imposing a specific
>> association), an implementation may impose any association of the
>> operators with operands so long as the result produced is an allowed
>> result for the left-to-right association, but ignoring the potential
>> for failure of language-defined checks in either the left-to-right or
>> chosen order of association.
>
> And it is this potential for failure of a check that Geert is
> disturbed by (and me too!)
My understanding is that this is an "as if" rule, presumably related to allowing
X**4 to be computed as (X*X)*(X*X); I would not go as far as saying that
associativity is not defined in Ada! Especially, it would not allow A * B / C to
be computed as A * (B / C), since it would not be "an allowed result for the
left-to-right association".
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 10:36 AM
Yes, well of course, this is a strawman, no one thinks you can reassociate
A*B/C!
But the rule we are talking about is not an "as if" since it can introduce an
overflow error where none existed before.
The "as if" does not need a special rule
Note that in the case of X**4, we need a special rule because in general x*x*x*x
is not equal to (x*x)**2 for floating-point (the first form is more accurate in
the general case). For FPT, the rule really does not apply, since almost any
reassociation will change model interval results.
Note that you are of course allowed to reassociate even if there are parens if
you know it will not change the result and will not introduce or eliminate an
exception, that's the normal as if rule. For example in
subtype R is Integer range 1 .. 10;
A,B,C,D : R;
...
A := (B + C) + D;
it is fine to compute this as
A := B + (C + D);
that's the normal as-if in action. But this special rule allows the introduction
of overflow errors where none would exist in the canonical order. THAT's the
surprise.
In general it is a surprise to find that the two following expressions are not
equivalent:
(A + B) + C
A + B + C
****************************************************************
From: Brad Moore
Sent: Monday, November 25, 2013 10:47 AM
I don't know if I speak for Canada or not, but I also pick up an aggressive tone
when "huh?" is used in written form. This is not the case, however, when it is
used in spoken form, I would say. In writing it comes across as equivalent to
"Are you nuts?".
It is interesting to note that I would interpret the similar form "eh?" as
neutral, somewhat equivalent to "what's that you say?" or "something's not right
here". I don't know if that is a Canadianism, as apparently we say "eh?" a lot,
though I don't notice this generally.
****************************************************************
From: Bob Duff
Sent: Monday, November 25, 2013 11:04 AM
I agree "Eh?" seems like a neutral "What do you mean?".
And Canadians seem to use it at the end of a sentence as some sort of
punctuation, as in "Nice and warm today, eh?". ;-)
What about "Heh?" That's how I sometimes spell (and pronounce) "Huh?". Guess
I'd better quit using either.
Back to the topic: For what it's worth, I am strongly in favor of determinism.
I think non-determinism should be tolerated only when there's an important
efficiency (or other) benefit.
For example, the fact that elaboration order is implementation defined has zero
efficiency benefit, and in practice is one of the top 3 or so portability issues
we see at AdaCore. But I suppose we can't really fix that one after the fact.
But we could fix the associativity thing, I think.
****************************************************************
From: Bob Duff
Sent: Monday, November 25, 2013 11:07 AM
> Note that in the case of X**4, we need a special rule because in
> general x*x*x*x is not equal to (x*x)**2 for floating-point (the first
> form is more accurate in the general case).
I think the second form is more accurate. Fewer multiplies --> fewer roundings.
> ...
> exception, that's the normal as if rule.
I like to call it the "as if meta-rule", because it's not a rule of the
language, it's a rule about how to interpret the rules of the language (any
high-level language, not just Ada).
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 11:19 AM
>> Note that in the case of X**4, we need a special rule because in
>> general x*x*x*x is not equal to (x*x)**2 for floating-point (the
>> first form is more accurate in the general case).
>
> I think the second form is more accurate. Fewer multiplies
> --> fewer roundings.
SUCH a common misconception. Just goes to show once again that fpt stuff is
hard. Yes, fewer multiplications, BUT both operands of the last multiplication
have already been rounded, whereas in the first form, one of the operands of
each multiplication is exact.
It was exactly this misconception that caused the screwup in the Ada 83
definition that required repeated multiplication.
If the **2 form was more accurate, then normal "as if" would allow the
substitution, but it's not always more accurate.
>> ...
>> exception, that's the normal as if rule.
>
> I like to call it the "as if meta-rule", because it's not a rule of
> the language, it's a rule about how to interpret the rules of the
> language (any high-level language, not just Ada).
Right, and sometimes language standards have a nasty habit of unnecessarily
stating an as-if rule for one particular part of the language, bogusly implying
that it does not appear elsewhere.
****************************************************************
From: Jean-Pierre Rosen
Sent: Monday, November 25, 2013 11:31 AM
> But the rule we are talking about is not an "as if"
> since it can introduce an overflow error where none existed before.
Precisely, the only visible difference is whether an exception is raised or not;
therefore I find it unfair to say that C is more precise than Ada in the way it
defines associativity - except if you consider that C is more precise than Ada
in defining where exceptions are raised: nowhere! ;-)
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 11:39 AM
No, that's just plain wrong thinking IMO.
In C, overflow is undefined, but if in C we write
a + b + c
where a,b,c are int, we are guaranteed that this is evaluated as (a + b) + c and
will be well defined if neither addition causes overflow. In C, we are NOT
allowed to evaluate this as a + (b + c) if b+c could cause overflow.
In Ada, we are allowed to evaluate this as a + (b + c) even if b+c causes
overflow.
True, in Ada the bad effects of this permission are not so bad as they would be
in C, but C does not allow this abomination in the first place.
So indeed C is more precise than Ada here, no question about it!
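Editor's note: a minimal sketch, not part of the original message, showing how
the permission can introduce an overflow that the textual left-to-right
association does not have; the values are arbitrary and chosen only so that
B + C overflows while (A + B) + C does not. Writing (A + B) + C explicitly
fixes the association, so the permission cannot be used there.
   with Ada.Text_IO; use Ada.Text_IO;
   procedure Reassoc_Demo is
      A : Integer := -Integer'Last;
      B : Integer := Integer'Last;
      C : Integer := 1;
      R : Integer;
   begin
      --  Left to right: (A + B) + C = 0 + 1 = 1, with no overflow.
      --  Under the reassociation permission, an implementation may instead
      --  evaluate A + (B + C); B + C = Integer'Last + 1 then overflows and
      --  raises Constraint_Error.
      R := A + B + C;
      Put_Line (Integer'Image (R));
   end Reassoc_Demo;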
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 11:41 AM
> Precisely, the only visible difference is whether an exception is
> raised or not; therefore I find unfair to say that C is more precise
> than Ada in the way it defines associativity - except if you consider
> that C is more precise than Ada in defining where exceptions are
> raised: nowhere! ;-)
There are some things that C does better than Ada. This is one example, another
is that you can do I/O operations from separate threads in C, and things work
fine. Of course that's more a matter of definition of libraries and threads than
the C language itself, but the fact of the matter is that a C programmer can do
printf's from separate threads and get lines interspersed, but not the erroneous
behavior and rubbish you can get in Ada.
****************************************************************
From: Randy Brukardt
Sent: Monday, November 25, 2013 11:59 AM
> > Yes, well it's even more surprising when someone who is
> > unquestionably an expert in Ada can be fooled into thinking this
> > (luckily if you are using GNAT, or probably any other Ada compiler
> > around, you won't be fooled in practice).
> > Here is the evil paragraph you missed:
>
> By the way, I was also quite surprised to learn of this rule, which I
> have not been aware of till a recent discussion over the parens in
> SPARK! :-)
You shouldn't have been that surprised, given that the rule is an Ada 83 rule
(11.6(5)) that was just moved in Ada 95 and has been untouched since.
I mainly bring this up to concur that this rule is evil. Back in the Ada 83
days, our optimizer guy thought it would be good to implement that rule exactly
as written. (That is, introducing exceptions where none existed initially.) At
the same time, he implemented a number of other algebraic rules (the interesting
one for this purpose being "A - C" => "A + (-C)"). The effect of those two rules
on our runtime was to introduce all kinds of new overflow exceptions (especially
in code like the Text_IO code for formatting numbers) [once a lot of "-" was
converted to "+", the association rule could apply]. We wasted quite a bit of
time debugging before we decided that the rule was evil and could only be
applied if it couldn't introduce an exception.
> Especially who needs it when it is a surprise even to the experts! I
> wonder if any compiler actually does this in practice?
I seriously doubt it. We tried and the results were unusable. Thus I think it
would be OK to kill 4.5(13). (I would not be OK with requiring an order of
evaluation of parameters.)
Note, however, that removing that rule would not stop reordering that *removed*
an exception. (I would be against that, as overflow checks are not free in many
architectures and we ought to be able to do whatever we can to get rid of them.)
But we don't need 4.5(13) for that, as other rules allow evaluating intermediate
expressions with extra precision so no one can count on A + B + C to raise an
overflow if A + B alone would overflow.
****************************************************************
From: Randy Brukardt
Sent: Monday, November 25, 2013 12:08 PM
...
> Back to order of evaluation for a moment:
> I wonder though, yes, I know that it is the compiler writer's mantra
> that this is a worthwhile freedom, I would like to see actual figures
> to justify this (to me unneccessary) non-determinism.
> Most optimizations are disappointing :-)
My guess is that it would be hard to quantify, as it would be difficult to tell
in most compilers when the non-determinism is being used.
I know that in Janus/Ada, we use it in rare cases where we need to spill pending
floating point results before a function call. That is a quite expensive but
necessary operation on the X86 architecture (we didn't originally do it, but
user bug reports forced us to do so as nested calls were running out of
registers and crashing). By reordering parameters and expressions, we can often
avoid the need to do that spill.
There are probably other cases where we use it that I don't even know about.
(Our optimizer guy made aggressive use of Ada's permissions; I had to rein him
in when it caused problems, as in the overflow-introducing case).
P.S. This whole subthread is under the wrong topic -- it doesn't have anything
to do with "suppressible errors" and I doubt Bob will want it in his AI!
****************************************************************
From: Geert Bosch
Sent: Monday, November 25, 2013 12:31 PM
> I seriously doubt it. We tried and the results were unusable. Thus I
> think it would be OK to kill 4.5(13). (I would not be OK with
> requiring an order of evaluation of parameters.)
Yes, let's do that. I think everyone can agree that removing this fixes a nasty
wart, one that is becoming increasingly important and visible with the new pre-
and post-conditions, SPARK 2014, and the increased use of model-based code
generation, static analyzers, and provers.
> Note, however, that removing that rule would not stop reordering that
> *removed* an exception. (I would be against that, as overflow checks
> are not free in many architectures and we ought to be able to do
> whatever we can to get rid of them.) But we don't need 4.5(13) for
> that, as other rules allow evaluating intermediate expressions with
> extra precision so no one can count on A + B + C to raise an overflow if A + B
> alone would overflow.
Of course.
****************************************************************
From: Steve Baird
Sent: Monday, November 25, 2013 1:00 PM
> Consider S being an arbitrary String:
>
> Len : Natural := (if S'Last < S'First then 0 else S'Last - S'First
> + 1);
>
> Note how this can overflow if and only if the compiler chooses an evil
> association?
This is not central to the main topic being discussed on this thread, but I
don't think the permission of 4.5(13) applies to Geert's example and therefore
no overflow is possible (partly because the condition ensures that S'First is
positive if the subtraction is evaluated).
You can't associate
1 - 2 + 3
as
1 - (2 + 3)
because that yields the wrong answer.
The permission clearly applies if we replace
S'Last - S'First + 1
with
S'Last + (-S'First) + 1
and all of the rest of Geert's discussion makes sense if we assume this
substitution. I see this as confirmation of Geert's point that the reassociation
permission makes it harder to reason about programs.
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 2:45 PM
> We wasted quite a bit of time debugging before we decided that the
> rule was evil and could only be applied if it couldn't introduce an
> exception.
That's meaningless, the rule is ONLY about introducing exceptions, otherwise it
has zero content!
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 2:46 PM
> My guess is that it would be hard to quantify, as it would be
> difficult to tell in most compilers when the non-determism is being used.
Why hard to quantify? You disable this optimization and see whether any
programs are noticeably affected.
****************************************************************
From: Randy Brukardt
Sent: Monday, November 25, 2013 4:47 PM
Because most compilers don't treat this as something that can be separately
disabled, or even as an "optimization". It's simply part of the semantics of
Ada. In our case, we just used it when it was advantageous and certainly there
isn't anything consistently marking that we're depending on non-determinism
(it's not something that we would have necessarily mentioned in comments). So
there isn't any practical way to find out when it was used or what the effect
was.
****************************************************************
From: Randy Brukardt
Sent: Monday, November 25, 2013 4:51 PM
> > We wasted quite a bit of time debugging before we decided that the
> > rule was evil and could only be applied if it couldn't introduce an
> > exception.
"The rule" was the rule in our optimizer about re-association, not any
particular rule in the RM. You can only apply that rule if no exceptions can be
introduced, meaning that we decided that we cannot use 4.5(13) to any benefit.
As you note, it is an "as-if" rule in the absence of exceptions, so the RM need
not mention it.
> That's meaningless, the rule is ONLY about introducing exceptions,
> otherwise it has zero content!
That's precisely my point -- the RM rule has zero useful content, because
introducing exceptions willy-nilly makes it much more difficult to write code
that won't fail -- I'm not sure that it is even possible to do so in the general
case.
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 5:06 PM
> That's precisely my point -- the RM rule has zero useful content,
> because introducing exceptions willy-nilly makes it much more
> difficult to write code that won't fail -- I'm not sure that it is
> even possible to do so in the general case.
What are you talking about? Just use parentheses!
You can't reassociate (A+B)+C.
****************************************************************
From: Randy Brukardt
Sent: Monday, November 25, 2013 4:54 PM
> Yes, let's do that. I think everyone can agree removing this fixes a
> nasty wart, that is becoming increasingly more important and visible
> with the new pre- and post-conditions, SPARK 2014 and increased use of
> model-based code generation, statical analyzers and provers.
I presume that you'll be submitting an AI to this effect? A !question and nice
examples as to why applying this permission would be harmful would be nice. (The
!wording is easy for a change, so I don't really need that.)
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 5:05 PM
> Because most compilers don't treat this as something that can be
> separately disabled, or even as an "optimization". It's simply part of
> the semantics of Ada. In our case, we just used it when it was
> advantageous and certainly there isn't anything consistently marking
> that we're depending on non-determinism (it's not something that we
> would have necessarily mentioned in comments). So there isn't any
> practical way to find out when it was used or what the effect was.
Don't speak for "most compilers"; you don't have the background to do that :-)
And specific Ada backends are mostly a thing of the past!
What you say may be idiosyncratically true for your compiler, but I expect
other compilers could easily test this out. After all a C compiler does not have
a fixed evaluation order, but a Java compiler does, so any reasonably flexible
compiler will have both capabilities (e.g. gcc, and it would be a fairly easy
experiment to carry out).
****************************************************************
From: Bob Duff
Sent: Monday, November 25, 2013 5:14 PM
>...After all a C compiler does not have a fixed evaluation order, but
>a Java compiler does, so any reasonably flexible compiler will have
>both capabilities (e.g. gcc, and it would be a fairly easy experiment
>to carry out).
Do any other languages have this rule that "A+B+C" can be evaluated as "A+(B+C)"
even though that might introduce overflow? I think maybe Fortran does, and I
think maybe that's where Ada inherited it from. But I'm not a Fortran lawyer.
I've no idea what gfortran does in this regard.
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 5:42 PM
>> ...After all a C compiler does not have a fixed evaluation order, but
>> a Java compiler does, so any reasonably flexible compiler will have
>> both capabilities (e.g. gcc, and it would be a fairly easy experiment
>> to carry out).
>
> Do any other languages have this rule that "A+B+C" can be evaluated as
> "A+(B+C)" even though that might introduce overflow? I think maybe
> Fortran does, and I think maybe that's where Ada inherited it from.
> But I'm not a Fortran lawyer.
Yes Fortran does, even for floating-point, very evil!
> I've no idea what gfortran does in this regard.
I am sure it goes left to right :-)
****************************************************************
From: Randy Brukardt
Sent: Monday, November 25, 2013 6:04 PM
> > That's precisely my point -- the RM rule has zero useful content,
> > because introducing exceptions willy-nilly makes it much more
> > difficult to write code that won't fail -- I'm not sure that it is
> > even possible to do so in the general case.
>
> What are you talking about? Just use parentheses!
> You can't reassociate (A+B)+C.
For integers, the results of A+B+C, (A+B)+C, and A+(B+C) are the same so long as
there are no exceptions. Our optimizer guy had clearly forgotten the last part,
so he was ignoring the parens even if present. But we also had problems with
array indexing code, where there isn't any way the user could add parens -- and
we wouldn't want to be adding them manually, because that would block various
optimizations that we do want to make. Indeed, I don't see any value to
recording parens in the intermediate code for integer expressions (the situation
is very different for float expressions, of course).
****************************************************************
From: Randy Brukardt
Sent: Monday, November 25, 2013 6:18 PM
> > Because most compilers don't treat this as something that can be
> > separately disabled, or even as an "optimization". It's simply part
> > of the semantics of Ada. In our case, we just used it when it was
> > advantageous and certainly there isn't anything consistently marking
> > that we're depending on non-determinism (it's not something that we
> > would have necessarily mentioned in comments). So there isn't any
> > practical way to find out when it was used or what the effect was.
>
> Don't speak for "most compilers" you don't have the background to do
> that :-) And specific Ada backends are mostly a thing of the past!
Nobody said anything about "back-ends". There are a lot of uses of
non-determinism in the middle phases of compilation, during the generation of,
and transformation into, "better" intermediate code than would occur without it.
(The generation phase pretty much has to be Ada-specific; transforming may or
may not be). The back-end certainly isn't the only place where non-determinism
is used.
I recall someone from Rational making similar points in an ARG discussion years
ago. I don't think it is just me (or I wouldn't have put it the way I did).
> What you say may be idiosyncractically true for your compiler, but I
> expect other compilers could easily test this out. After all a C
> compiler does not have a fixed evaluation order, but a Java compiler
> does, so any reasonably flexible compiler will have both capabilities
> (e.g. gcc, and it would be a fairly easy experiment to carry out).
I don't think the majority of Ada implementations can also handle Java or many
other languages (e.g. Irvine, Rational technology, not sure about ObjectAda).
That sort of generality is primarily with GNAT, IMHO.
In any case, this is getting *way* off-topic from the already off-topic
discussion. :-) It doesn't really pay to argue about it, because everyone makes
the mistake of thinking that their implementation is "typical", when in actual
fact there pretty much is no such thing (the implementations all vary wildly -
Rational and ASIS have definitively proved that to me).
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 6:29 PM
>>> That's precisely my point -- the RM rule has zero useful content,
>>> because introducing exceptions willy-nilly makes it much more
>>> difficult to write code that won't fail -- I'm not sure that it is
>>> even possible to do so in the general case.
>>
>> What are you talking about? Just use parentheses!
>> You can't reassociate (A+B)+C.
>
> For integers, the result of A+B+C and (A+B)+C and A+(B+C) is the same
> so long as there is no exceptions.
Uh yes, I think we know that; this whole discussion is about exceptions! But
what I am saying is that your statement that the Ada rule makes it hard to
write portable code is dubious, since you can just use parens to suppress this
unwanted flexibility.
> Our optimizer guy had clearly forgotten the
> last part, so he was ignoring the parens even if present. But we also
> had problems with array indexing code, where there isn't any way the
> user could add parens -- and we wouldn't want to be adding them
> manually, because that would block various optimizations that we do want to make.
Well, I have no idea what you are talking about here; it sounds like some kind
of internal chaos in your compiler, definitely not relevant to the discussion
here. We are talking about standard Ada, not existing compilers with bugs; yes,
if you have bugs you may have trouble writing portable code!
> Indeed, I don't
> see any value to recording parens in the intermediate code for integer
> expressions (the situation is very different for float expressions, of
> course).
Well except for the rule we are discussing, and also, depending on what you mean
by intermediate code, you have to record the exact number of parentheses for
conformance checking. I have no idea why you think it is important to record
parens in the fpt case and not in the integer case; that makes zero sense to me.
****************************************************************
From: Robert Dewar
Sent: Monday, November 25, 2013 6:31 PM
> Nobody said anything about "back-ends". There are a lot of uses of
> non-determinism in the middle phases of compilation, during the
> generation of and transforming into "better" intermediate code than
> would occur without it. (The generation phase pretty much has to be
> Ada-specific; transforming may or may not be). The back-end certainly
> isn't the only place where non-determinism is used.
Randy you are all over the map here. A moment ago you were talking about
register allocation, now you are talking about high level transformations in the
front end. I can't see ANY sensible compiler taking advantage of the
reassociation rule in the front end.
These kinds of reorderings are highly architecture dependent; doing them
anywhere except in the architecture-aware back end makes zero sense.
****************************************************************
From: Randy Brukardt
Sent: Monday, November 25, 2013 6:35 PM
...
> Nobody said anything about "back-ends". There are a lot of uses of
> non-determinism in the middle phases of compilation, during the
> generation of and transforming into "better"
> intermediate code than would occur without it. (The generation phase
> pretty much has to be Ada-specific; transforming may or may not be).
> The back-end certainly isn't the only place where non-determinism is
> used.
To bring this closer to the original off-topic discussion :-), I believe this
happens because Ada has rules that effectively require out-of-order evaluation
of parameters.
For instance, if we have a tagged type like:
   package P is
      type Root is tagged...
      procedure Do_Two (A, B : Root); -- Two controlling operands.
      function TI return Root;        -- Tag-indeterminate function.
      function CW return Root'Class;  -- Dynamically-tagged function.
   end P;
in the call:
   P.Do_Two (TI, CW);
you have to evaluate the second parameter first in order to find out the tag
with which TI needs to dispatch. A strict left-to-right order would require
evaluating the function call TI before you know the tag with which to dispatch,
and that isn't going to work.
In cases like this, an Ada compiler is essentially depending on the
non-deterministic evaluation of parameters in order to get the correct Ada
semantics. If we were to require a deterministic evaluation of parameters, we
would also have to do something with cases like this one which cannot be
evaluated in a strict left-to-right order. (Making them illegal would be
incompatible of course, and making them erroneous is hideous...)
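Editor's note: a compilable completion of the example above; the null record
contents and the trivial bodies are assumptions added only to make the sketch
self-contained.
   package P is
      type Root is tagged null record;
      procedure Do_Two (A, B : Root);  -- Two controlling operands.
      function TI return Root;         -- Tag-indeterminate function.
      function CW return Root'Class;   -- Dynamically-tagged function.
   end P;

   package body P is
      procedure Do_Two (A, B : Root) is null;

      function TI return Root is
      begin
         return (null record);
      end TI;

      function CW return Root'Class is
      begin
         return Root'(null record);
      end CW;
   end P;

   with P;
   procedure Demo is
   begin
      P.Do_Two (P.TI, P.CW);
      --  The controlling tag of this dispatching call comes from CW, so CW
      --  must be evaluated before the tag-indeterminate call TI can be
      --  dispatched; a strict left-to-right evaluation of the actuals cannot
      --  work here.
   end Demo;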
****************************************************************
From: Randy Brukardt
Sent: Monday, November 25, 2013 7:23 PM
> > Nobody said anything about "back-ends". There are a lot of uses of
> > non-determinism in the middle phases of compilation, during the
> > generation of and transforming into "better" intermediate code than
> > would occur without it. (The generation phase pretty much has to be
> > Ada-specific; transforming may or may not be). The back-end
> > certainly isn't the only place where non-determinism is used.
>
> Randy you are all over the map here. A moment ago you were talking
> about register allocation, now you are talking about high level
> transformations in the front end. I can't see ANY sensible compiler
> taking advantage of the reassociation rule in the front end.
I agree, but Geert (and others) were also talking about the more general
"non-determinism" in Ada, and that's what I was talking about in the quoted
message. It would be nice if the subjects of these various topics were
different, but breaking the thread by changing the subject usually just leads to
more chaos. (And I don't think I ever said anything about "register allocation";
I've always been talking about higher-level uses of non-determinism.)
> These kind of reorderings are highly architecture dependent, doing
> them anywhere except in the architecture aware back end makes zero
> sense.
Not always. First, Ada sometimes requires such reorderings (see my other
message). Secondly, it can be reasonable to have the middle phases aware of some
of the more important characteristics of the target architecture. We do that so
that we don't have to duplicate large chunks of complicated code for each target
(and of course, a front-end has to be aware of at least some target
characteristics -- a front-end that didn't know the bounds of Integer would have
issues with static expression evaluation). We of course keep the target
characteristics separated from the majority of the compiler code, so it's clear
when we're depending on them in some way.
****************************************************************
From: Erhard Ploedereder
Sent: Tuesday, November 26, 2013 5:23 PM
>> Because most compilers don't treat this as something that can be
>> separately disabled, or even as an "optimization". It's simply part
>> of the semantics of Ada. In our case, .....
>
> Don't speak for "most compilers" you don't have the background to do
> that :-) And specific Ada backends are mostly a thing of the past!
Let me come to Randy's rescue as the old "I love optimizations" guy. This comes
from someone who has built backends for C, Modula, Ada, and several proprietary
languages. Admittedly, I never had to generate code for a language that insisted
on strict left-to-right order.
All this has absolutely zero to do with Ada back-ends and Randy is still right.
What we are talking about here is a typical way of assessing register pressure
at an early stage in the back-end. See Bob Morgan's book on how the VAX compiler
back-ends worked. It is well known that by evaluating the "higher pressure"
subtree first, you save one register at the level above for unbalanced trees
(for a uniform class of registers, etc, etc,). This can add up transitively.
Absent a language prohibition, this is clearly the way to go, among other things
because some HW might do it anyway for simple instructions, never mind what
sequence the compiler produced.
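Editor's note: a small illustration, not from the original message, of the
"higher pressure subtree first" point (the classic Sethi-Ullman argument); the
expression is arbitrary and the register counts assume a simple machine with
one uniform register class and two-operand register arithmetic.
   procedure Pressure_Demo (A, B, C, D : Integer; R : out Integer) is
   begin
      R := A + (B * (C + D));
      --  The left operand A needs 1 register; the right operand B*(C+D)
      --  needs 2. Evaluating A first means holding its value while the
      --  right subtree is computed: 3 registers in all. Evaluating the
      --  heavier right subtree first needs 2 registers, after which one of
      --  them is free to hold A: 2 registers in all.
   end Pressure_Demo;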
Once you have made the pressure assessment, which was based implicitly on the
assumption of a particular evaluation order, it is quite risky to generate code
to evaluate in any other order, since you are violating invariants about the
validity of your register pressure algorithm. So, what you are asking for is to
turn off register pressure measuring. Well, what if later phases assume that
they find meaningful results from that algorithm? (And, to do register pressure
for l-t-r as an alternative, is not what I would call "turning the optimization
off", but rather changing the compiler significantly. I never had to need to
implement l-t-r.)
One is very ill advised to "turn off" the optimization of "heavier-tree first"
(which, in some folks' eyes is but competent code generation) later on in code
transformation/generation, if earlier phases already assumed it. I know for
sure, because we were asked to turn off already implemented optimizations and,
for a while, the compiler without the optimizations turned out to be flaky as
hell, because subsequent analyses assumed that the earlier transformations
(normalizations) or analyses had indeed been done. Null-pointer dereferences
inside the compiler and bad generated code are among the effects and they are
nasty bugs, for sure.
Specific case in point:
If you have established a "heavier-subtree-first" regime (or any regime
whatsoever, including the left-to-right scheme), it is extremely risky to then
deviate from this decision during the later phases, because your discovered CSE
definitions and usages suddenly are invalidated, finding uses of the CSE before
the Defs in the alternative execution order. While you might find all affected
places eventually to fix your compiler, secondary derivations, e.g., about the
goodness of CSEs and the resulting register pressure influences, are next to
impossible to undo, because they also have implicit premises, e.g., about the
heaviness of nodes in-between.
So, turning off any "optimization" related to execution order is very risky. I
would certainly not support the notion without BIG bucks asking for it, because
I know that chasing the resulting bugs due to violated premises will cost a LOT
of developer time.
In summary, fooling around with the order of evaluation once established or
assumed by the back-end is among the most risky compiler options that I know. It
definitely is not an "oh, just turn this optimization off" thing.
Of course, you can build your compiler without optimization, truly
left-to-right, truly canonical for some definition of canonical, all dead code
present, etc., as long as nobody asks you to "just turn on the optimizations"
because, as a result, the compiler will break quite similarly for a few months.
Will the heavier-tree-first optimization buy much? Who knows? With some effort,
I can write examples where this optimization alone will yield factors on
multi-core. Of course, the examples would be contrived, causing a global cache
miss due to an unnecessary spill inside a loop. My claim is just as unsupported
as the claim that optimizations are generally overrated. I would agree that
(with a few exceptions such as register allocation) every single one will not
save the day, but what about a group of 30 well chosen ones? E.g. getting rid of
95% of constraint checks? Again, who knows?
****************************************************************
From: Erhard Ploedereder
Sent: Tuesday, November 26, 2013 5:38 PM
> I can't see ANY sensible compiler
> taking advantage of the reassociation rule in the front end.
Just curious.... Do you do
A + 5 + 8 === (A+5) + 8
or
A + 5 + 8 === A + 13 (taking advantage of reassociation)
How about
5 + A - A === (5 + A) - A
or
5 + A - A === 5
(Incidentally, I agree that A * B * C in Float needs to be (A*B)*C.)
My point is that reassociation is not a black-and-white issue (presuming
that you find the second-line transforms o.k.). Of course, one takes
advantage of it quite often in a compiler, the above certainly in the
target-independent part of the compiler.
The only soul searching comes when accuracy, exceptions or traps come into play.
That is where the meat of the discussion ought to be, not on generally damning
or blessing a particular transformation.
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 26, 2013 5:53 PM
>> I can't see ANY sensible compiler
>> taking advantage of the reassociation rule in the front end.
The reassociation rule we are talking about is the one that allows you to
introduce overflow errors. There is no other relevant rule, so the above is to
be read *SPECIFICALLY* in that context. I.e., I cannot see ANY sensible
compiler taking advantage of the reassociation rule and introducing an overflow
error.
>
> Just curious.... Do you do
> A + 5 + 8 === (A+5) + 8
> or
> A + 5 + 8 === A + 13 (taking advantage of reassociation)
>
> How about
> 5 + A - A === (5 + A) - A
> or
> 5 + A - A === 5
These are totally irrelevant to the discussion; they are "as if"
transformations which would be allowed whether or not the reassociation rule is
present in the RM. So these examples are red herrings in this discussion.
> My point is that reassociation is not a black-and-white issue (presuming
> that you find the 2.line transforms o.k.). Of course, one takes
> advantage of it quite often in a compiler, the above certainly in the
> target-independent part of the compiler.
It is impossible not to find your examples OK, they are clearly valid. No one
could possibly dispute that, but they have nothing whatever to do with the
reassociation rule in the RM.
> The only soul searching comes when accuracy, exceptions or traps come
> into play. That is where the meat of the discussion ought to be, not
> on generally damning or blessing a particular transformation.
That's ALL we are discussing; the reassociation rule says you can freely
reassociate without worrying about exceptions (it has nothing to say about
allowing you to change the accuracy!)
****************************************************************
From: Randy Brukardt
Sent: Tuesday, November 26, 2013 6:12 PM
> That's ALL we are discussing, the reassocation rule says you can
> freely reassociation without worrying about exceptions (it has nothing
> to say about allowing you to change the accuracy!)
Nit-pick: Actually, it does say that you can't change the accuracy:
"...an implementation may impose any association of the operators with operands
so long as the result produced is an allowed result for the left-to-right
association..."
so "nothing to say" is inaccurate. (Pun semi-intended.) (The wording does allow
a different value [bit pattern], but it has to be in the same model interval,
which means the required accuracy is unchanged; the AARM notes confirm that).
Which of course doesn't change your point.
****************************************************************
From: Robert Dewar
Sent: Tuesday, November 26, 2013 7:01 PM
> Nit-pick: Actually, it does say that you can't change the accuracy:
Right, that's what I mean, it has nothing to say about allowing you to change
the accuracy. Although if you have some amazing fpt proof stuff that allows you
to show that the model interval is the same or narrower, then as-if allows you
to reassociate.
> so "nothing to say" is inaccurate. (Pun semi-intended.) (The wording
> does allow a different value [bit pattern], but it has to be in the
> same model interval, which means the required accuracy is unchanged;
> the AARM notes confirm that).
Right, so it has nothing to say, since it says nothing that would not be true
WITHOUT any statement. That's what I mean.
> Which of course doesn't change your point.
****************************************************************
From: Geert Bosch
Sent: Tuesday, November 26, 2013 7:45 PM
> What we are talking about here is a typical way of assessing register
> pressure at an early stage in the back-end. See Bob Morgan's book on
> how the VAX compiler back-ends worked. It is well known that by
> evaluating the "higher pressure" subtree first, you save one register
> at the level above for unbalanced trees (for a uniform class of
> registers, etc, etc,). This can add up transitively. Absent a language
> prohibition, this is clearly the way to go, among other thing because
> some HW might do it anyway for simple instructions, never mind what
> sequence the compiler produced.
This is a dangerous line of reasoning. You can generate code doing evaluation in
one order while preserving side effects in another order. They really have not
much to do with one another. Compilers will interleave evaluation of both
sides, and, similarly, modern hardware will have hundreds of instructions in
various stages of execution, but they'll all preserve the semantics of (some)
sequential execution obeying the semantics of the source language, whether a
high-level language or machine code.
While 1970s compiler technology may not be applicable today, Knuth's
proclamation that "early optimization is the root of all evil" is still valid.
We should forget about small efficiencies most of the time.
The following quote seems particularly applicable and timeless:
> "The order in which the operations shall be performed in every
> particular case is a very interesting and curious question, on which
> our space does not permit us fully to enter. In almost every
> computation a great variety of arrangements for the succession of the
> processes is possible, and various considerations must influence the
> selection amongst them for the purposes of a Calculating Engine.
> One essential object is to choose that arrangement which shall tend to
> reduce to a minimum the time necessary for completing the
> calculation." Ada Byron's notes on the analytical engine, 1842.
****************************************************************
From: Jeff Cousins
Sent: Wednesday, November 27, 2013 10:24 AM
> By the way, it has always puzzled me how much the Ada designers like
> non-determinism. Yes, in the language the order of write back of out
> parameters is non-deterministic. Why? I can't figure out ANY advantage of
> making something like this non-deterministic!
In our SIL 2 review, the various places where the order of
evaluation/conversion/assignment is said by the RM to be arbitrary were cited as
Ada's weaknesses.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 10:42 AM
Indeed, and quite correctly cited too IMO. SPARK by the way completely
eliminates ALL non-determinism from the language. That was considered an
essential first step in creating a language suitable for formal reasoning.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 10:46 AM
> In our SIL 2 review, the various places where the order of
> evaluation/conversion/assignment is said by the RM to be arbitrary
> were cited as Ada's weaknesses.
One interesting idea would be to create a separate document that creates a
subset of Ada by specifying orders for all those cases where arbitrary ordering
is required (*) and then compilers could certify that they followed these
requirements (possibly by use of some switch).
Probably this document could also require certain behaviors for at least some
bounded error situations, and perhaps restrict usage that leads to erroneous
execution???
(*) SPARK would be a subset of this subset, since SPARK often works by
eliminating the effects of arbitrary ordering rather than specifying an
ordering. For instance expressions have no side effects, so order of evaluation
of expressions doesn't matter from a side effect point of view. Of course SPARK
does not allow reordering that changes results or introduces exceptions!
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 27, 2013 3:12 PM
> > What we are talking about here is a typical way of assessing
> > register pressure at an early stage in the back-end. See Bob
> > Morgan's book on how the VAX compiler back-ends worked. It is well
> > known that by evaluating the "higher pressure" subtree first, you
> > save one register at the level above for unbalanced trees (for a
> > uniform class of registers, etc, etc,). This can add up
> > transitively. Absent a language prohibition, this is clearly the way
> > to go, among other thing because some HW might do it anyway for
> > simple instructions, never mind what sequence the compiler produced.
>
> This is a dangerous line of reasoning. You can generate code doing
> evaluation in one order while preserving side effects in another
> order. They really have not much to do with one another. While
> compilers will interleave evaluation of both sides, and, similarly,
> modern hardware will have hundreds of instructions in various stages
> of execution, but they'll all preserve the semantics of (some)
> sequential execution obeying the semantics of the source language,
> whether a high level language or machine code.
*This* is a dangerous line of reasoning. Of course, "as-if" optimizations are
always allowed, both at the compiler level and machine level. But the
side-effects that matter for this discussion are the ones that (potentially)
have an external effect, which are directly the result of a machine instruction,
and which can have an outsized runtime impact:
(1) The side-effects that result from external function calls;
(2) The effects of accessing volatile objects;
(3) The place where exceptions are raised, relative to (1) and (2).
If you are presuming a strict evaluation order, then you (a compiler) *cannot*
move any of these things, under any circumstances. That's because the
side-effects are tied to the execution of a single machine instruction, which
cannot be reasonably split.
One could of course inline to mitigate (1), but front-end inlining is the very
definition of "early optimization", so it can't be considered here (based on
your next paragraph).
Plus, keep in mind that you *cannot* evaluate parameters in a strict
left-to-right order in every case and still properly implement Ada semantics (I
showed such an example on Friday). To eliminate these cases would require
introducing incompatibilities, and that brings us back to the original subject
of this thread -- how to *avoid* adding more incompatibilities. I don't see how
we could reconcile both intents.
> While 1970s compiler technology may not be applicable today, Knuth's
> proclamation that "early optimization is the root of all evil" is
> still valid today. We should forget about small efficiencies most of
> the time.
Pretty much the only meaningful thing that differentiates compilers (as opposed to
eco-systems) for a standardized language like Ada is the way that they find
(or don't) "small efficiencies". (Aside: And I don't agree that the efficiencies
in question are necessarily small; the cost of a float spill is the reading and
writing of 11 dwords of memory and that is going to be significant in any
event.) Pretty much everything else about a compiler is identical because of the
requirements of the Standard. If one says that "small efficiencies" can't be
found, then pretty much every compiler will be functionally identical. In such a
case, there is no business case for there even existing more than one compiler
for a language (it would make a lot more sense for a small company like mine to
build tools for GNAT rather than building an Ada compiler where the ability to add
value is very strictly limited) - at least so long as the one that exists is
open source. And of course in that case, there is no need for Ada
Standardization, either.
So I conclude that this group *exists* because of the ability of compilers to
find "small efficiencies" for particular target markets. To dismiss that out of
hand is essentially dismissing the reason the Ada Standard exists at all.
Note that I say the above even though I don't think Janus/Ada takes much
advantage of non-canonical orders. We of course do so to implement cases like my
example on Friday (that's done by introducing temporaries). But generally we try
to keep side-effects in a canonical order so that optimization doesn't change
the effect of a program too much. The problem is that I have no idea of where we
might have taken advantage of non-canonical orders, and there is no practical
way to find them (as it is a basic rule of Ada that would not require special
documentation). Nor have we made any attempt to find out what the third-party
backends do in such cases. So changing this would be a non-starter for me.
P.S. I'm not sure what "strict left-to-right order" means when named notation
and/or default parameters are used in calls. Our compiler converts all calls to
positional notation with default expressions made explicit before we start any
sort of code generation; the back end phases only deal with positional calls. If
we're talking the order that named parameters exist in the source code, that
would be *very* expensive to implement in our compiler, as we'd have to
introduce temporaries for every parameter to get the evaluation order right, or
abandon the existing invariant that the parameters are in passing order when
making calls, or ???. I don't know for sure, but I thought that this was a
common implementation strategy.
****************************************************************
From: Arnaud Charlet
Sent: Wednesday, November 27, 2013 3:24 PM
> Pretty much the only meaningful thing that diffentiates compilers (as
> opposed to eco-systems) for a standardized language like Ada is the
> way that that they find (or don't) "small efficiencies". (Aside: And I
> don't agree that the efficiencies in question are necessarily small;
> the cost of a float spill is the reading and writing of 11 dwords of
> memory and that is going to be significant in any event.) Pretty much
> everything else about a compiler is identical because of the
> requirements of the Standard. If one says that
That's certainly not the case: ease of use, quality of error messages, error
recovery, which host and targets are supported, quality of support,
responsiveness to bug fixes and enhancements, ability to maintain and document
known problems, etc... are much more important than micro optimizations to most
users.
Most of our customers don't really care about performance actually; only a few
do care (a lot, sometimes).
Also you started with "as opposed to eco-systems" but that's also an unrealistic
premise: customers don't buy just a compiler these days, they buy a
whole toolset (if not more) where the compiler is only a small piece, so this is
not what makes the difference in most cases.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 3:41 PM
> the cost of a float
> spill is the reading and writing of 11 dwords of memory and that is
> going to be significant in any event.
what's your model for this odd claim?
> Pretty much everything else about a compiler
> is identical because of the requirements of the Standard. If one says
> that "small efficencies" can't be found, then pretty much every
> compiler will be functionally identical.
This is nonsense; there are lots of opportunities for BIG efficiencies in Ada
compilers, e.g. handling of a bit-packed array slice (which right now GNAT is
not good at, but would be a HUGE efficiency gain if implemented). Another
example is optimization of String'Write to do one big write in the normal case
rather than the generally required separate element-by-element write. There are
LOADS of such cases (I would guess we have well over a hundred similar
enhancement requests filed, and many improvements to the compiler over time are
in this category). Small fry like order of evaluation of expressions is
relatively unimportant compared to this.
Other examples are efficient handling of exceptions and controlled types.
> In such a case, there is no business case for there even existing more
> than one compiler for a language (it would make a lot more sense for a
> small company like mine to build tools for GNAT rather than building
> Ada compiler where the ability to add value is very strictly
> limited) - at least so long as the one that exists is open source. And
> of course in that case, there is no need for Ada Standardization, either.
Different Ada companies do and have concentrate(d) on different opportunities,
and there are many reasons why it is a good thing to have more than one.
> So I conclude that this group *exists* because of the ability of
> compilers to find "small efficiencies" for particular target markets.
> To dismiss that out of hand is essentially dismissing the reason the
> Ada Standard exists at all.
TOTAL NONSENSE (sorry for all upper case) in my opinion.
For one thing, suppose we did have a world where there is only one compiler. The
standard would be critical for letting a programmer know what is guaranteed to
be portable across architectures and what is not!
> P.S. I'm not sure what "strict left-to-right order" means when named
> notation and/or default parameters are used in calls. Our compiler
> converts all calls to positional notation with default expressions
> made explicit before we start any sort of code generation; the back
> end phases only deal with positional calls. If we're talking the order
> that named parameters exist in the source code, that would be *very*
> expensive to implement in our compiler, as we'd have to introduce
> temporaries for every parameter to get the evaluation order right, or
> abandon the existing invariant that the parameters are in passing
> order when making calls, or ???. I don't know for sure, but I thought that
> this was a common implementation strategy.
Surprised! It would be trivial in GNAT; we would test each expression in the
call in sequence to see if it had side effects, and if so (it's the unusual
case) eliminate these side effects in left to right order. To do this would take
about 20 minutes of work, since all the primitives are at hand, it would be
something like
   A := First_Actual (P);
   while Present (A) loop
      Remove_Side_Effects (A);
      Next_Actual (A);
   end loop;
well there are probably some details left out, but still, not a big deal once we
decided what was meant by evaluation in order (as Randy points out there are
some interesting cases).
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 27, 2013 5:07 PM
...
> > Pretty much the only meaningful thing that diffentiates compilers
> > (as opposed to eco-systems) for a standardized language like Ada is
> > the way that that they find (or don't) "small efficiencies". (Aside:
> > And I don't agree that the efficiencies in question are necessarily
> > small; the cost of a float spill is the reading and writing of 11
> > dwords of memory and that is going to be significant in any event.)
> > Pretty much everything else about a compiler is identical because of
> > the requirements of the Standard. If one says that
>
> That's certainly not the case, ease of use, quality of error messages,
> error recovery, which host and targets are supported, quality of
> support, responsiveness to bug fixes and enhancements, ability to
> maintain and document known problems, etc... are much more important
> than micro optimizations to most users.
None of those things, other than error messages, are properties of the
*compiler*; they are properties of the business (that's especially true for
support which is mainly what you are talking about here). There is no reason
that some new business could not use the GCC compiler and provide quality
support, targets, and the like.
...
> Also you started with "as opposed to eco-systems" but that's also a
> non realistic premise: customers don't buy just a compiler these days,
> they buy a whole toolset (if not more) where the compiler is only a
> small piece, so this is not what makes the difference in most cases.
I agree with this, and that is my point. The main value to customers is in a
better eco-system. As such, it does not make business sense to build a complex
piece of software (like a compiler) when one could build an eco-system (and
support) around an existing open source compiler. An Ada compiler is the hardest
and most expensive piece of most development eco-systems -- why spend all of
your energy there (because they're infinite time sinks) when it provides very
little incremental value to your customers?
Many of us built Ada compilers when the economics was different (especially as
no open source compiler existed to build around), but that's not true today. I
keep maintaining Janus/Ada because it's very much like my child and you don't
abandon your children -- but it certainly doesn't make economic sense to do so.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 5:22 PM
>> That's certainly not the case, ease of use, quality of error
>> messages, error recovery, which host and targets are supported,
>> quality of support, responsiveness to bug fixes and enhancements,
>> ability to maintain and document known problems, etc... are much more
>> important than micro optimizations to most users.
>
> None of those things, other than error messages, are properties of the
> *compiler*, they are properties of the business (that's especially
> true for support which is mainly what you are talking about here).
totally wrong:
ease of use         - property of the compiler
quality of messages - property of the compiler
error recovery      - property of the compiler
ease of fixing bugs - property of the compiler
targets supported   - property of the compiler
support includes all the above.
> There is no reason
> that some new business could not use the GCC compiler and provide
> quality support, targets, and the like.
Conceptually right, but in practice this would not be easy.
> I agree with this, and that is my point. The main value to customers
> is in a better eco-system. As such, it does not make business sense to
> build a complex piece of software (like a compiler) when one could
> build an eco-system (and support) around an existing open source
> compiler. An Ada compiler is the hardest and most expensive piece of
> most development eco-systems).
The compiler might possibly be the single most expensive piece of the
eco-system, but in the big picture it is still a small part (probably only 20%
of the development resources at AdaCore, if that, go into the compiler).
And mastering the gcc compiler, for example, at the level necessary to do what
you suggest, would be a major investment, easily comparable to building a
compiler from scratch. Of course you would get a lot for that in terms of
targets, etc. but it would be a lot of work.
> Many of us built Ada compilers when the economics was different
> (especially as no open source compiler existed to build around), but
> that's not true today. I keep maintaining Janus/Ada because it's very
> much like my child and you don't abandon your children -- but it
> certainly doesn't make economic sense to do so.
Well of course an open source compiler existed; you could have decided to build
a front end for gcc, just as we did, instead of an entire compiler, but as I say,
I think it would probably have been more work, not less. As for abandoning, part
of wisdom is learning when to abandon things that should be abandoned :-)
We have certainly abandoned significant chunks of technology as we go along at
AdaCore, and look for example at the Alsys decision to abandon their compiler
technology in favor of Ada Magic.
Boy, this must be the most off-topic thread in a long time, but it isn't only me
keeping it alive :-)
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 27, 2013 5:33 PM
> > the cost of a float
> > spill is the reading and writing of 11 dwords of memory and that is
> > going to be significant in any event.
>
> what's your model for this odd claim?
In Janus/Ada, at least, we have to spill the full extended precision value for
each float and all of the float flags as well (because of the way float
exceptions are managed), then the float processor is fully cleared before the
call; the process then is reversed afterwards.
At least, that's the way it worked before I added the front-end rearrangement of
code. Since then, I haven't been able to construct an example that the compiler
wasn't able to reorder to eliminate the need for any float spilling. It
certainly ought to be possible to construct such a case, but it would be
fiendishly complex (a massive forest of nested function calls that return
floats).
The rearrangement of course depends upon the non-determinism of subprogram
parameter ordering.
> > Pretty much everything else about a compiler
> > is identical because of the requirements of the Standard. If one
> > says that "small efficencies" can't be found, then pretty much every
> > compiler will be functionally identical.
>
> This is nonsense, there are lots of opportunities for BIG efficiencies
> in Ada compilers, e.g. handling of a bit-packed array slice (which
> right now GNAT is not good at, but would be a HUGE efficiency gain if
> implemented). Another example is optimization of String'Write to do
> one big write in the normal case rather than the generally required
> separate element-by-element write. There are LOADS of such cases (I
> would guess we have well over a hundred similar enhancement requests
> filed, and many improvements to the compiler over time are in this
> category). Small fry like order of evaluation of expressions is
> relatively unimportant compared to this.
>
> Other examples are efficient handling of exceptions and controlled
> types.
I would have considered those sorts of things covered by the "early
optimization" that Geert was complaining about. I read his message to say that
any optimization in the front end is evil, which I agree with you is nonsense.
> > In such a case, there is no business case for there even existing
> > more than one compiler for a language (it would make a lot more
> > sense for a small company like mine to build tools for GNAT rather
> > than building Ada compiler where the ability to add value is very
> > strictly
> > limited) - at least so long as the one that exists is open source.
> > And of course in that case, there is no need for Ada
> > Standardization, either.
>
> Different Ada companies do and have concentrate(d) on different
> opportunities, and there are many reasons why it is a good thing to
> have more than
Right, but if we tie the Standard down to the point of requiring everything to
be evaluated in a canonical order, and eliminating uncommon implementation
strategies like generic sharing (which you complain about the standard
supporting nearly every time I bring it up), there is no longer much chance for
an Ada company to add value in the compiler proper. In such an environment, one
could easily concentrate on a "different opportunity" without investing $$$$ in
a piece that hardly can be different at all from anyone else's.
> > So I conclude that this group *exists* because of the ability of
> > compilers to find "small efficiencies" for particular target markets.
> > To dismiss that out of hand is essentially dismissing the reason the
> > Ada Standard exists at all.
>
> TOTAL NONSENSE (sorry for all upper case) in my opinion.
>
> For one thing, suppose we did have a world where there is only one
> compiler. The standard would be critical for letting a programmer know
> what is guaranteed to be portable across architectures and what is
> not!
I don't think this is hard to do for a vendor; we always tried to make
*everything* portable across architectures other than a relatively small list
outlined in our documentation. The number of choices that the Standard gives us
where we actually make different choices for different targets is quite small
(even on targets as diverse as the U2200 and the Windows PC). The Standard
provides some help, but I don't think it is particularly necessary to that task.
And of course nothing prevents that one compiler vendor from creating a
standard-like document (especially since there already is a Standard). I just
don't see much value to a formal process for it in a one vendor world.
> > P.S. I'm not sure what "strict left-to-right order" means when named
> > notation and/or default parameters are used in calls. Our compiler
> > converts all calls to positional notation with default expressions
> > made explicit before we start any sort of code generation; the back
> > end phases only deal with positional calls. If we're talking the
> > order that named parameters exist in the source code, that would be
> > *very* expensive to implement in our compiler, as we'd have to
> > introduce temporaries for every parameter to get the evaluation
> > order right, or abandon the existing invariant that the parameters
> > are in passing order when making calls, or ???. I don't know for
> > sure, but I thought that this was a common implementation strategy.
>
> Surprised! it would be trivial in GNAT, we would test each expression
> in the call in sequence to see if it had side effects, and if so (it's
> the unusual case) eliminate these side effects in left to right order.
> To do this would take about 20 minutes of work, since all the
> primitives are at hand, it would be something like
>
>    A := First_Actual (P);
>    while Present (A) loop
>       Remove_Side_Effects (A);
>       Next_Actual (A);
>    end loop;
>
> well there are probably some details left out, but still, not a big
> deal once we decided what was meant by evaluation in order (as Randy
> points out there are some interesting cases).
I was thinking mostly that it would be expensive at runtime as many new memory
temporaries would be needed. We'd of course want to aggressively eliminate those
temporaries, which would complicate the implementation as well.
The alternative would be to redesign our intermediate code to support
interleaving of register temporaries and parameter passing (we didn't allow this
so that register parameter passing could be sensibly supported); that would of
course be an even bigger job because it would invalidate many of the invariants
that the optimizer and back-ends expect. (To Be Honest: I've been considering
doing this anyway for other optimization reasons -- but I haven't started it
because of the invariant factor. In any case, it would have to be quite limited
for x86 targets because of the lack of registers.)
Certainly doing it is possible, but since everything was designed from the
ground up for Ada (83) with no constraints on evaluation order, there certainly
would be quite a few bumps.
P.P.S. The idea of creating a profile or something in Annex H to specify that
everything is evaluated in canonical order (and 11.6 is nullified!) isn't a bad
one. It would let the standard define these things in a more sensible way but
would prevent making this into a barrier for adoption of Ada 2012 and beyond.
(As a specialized needs annex, implementers would not have to support it.)
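Editor's note: purely for illustration, such a profile might be requested in
source roughly as below; the profile name is invented here and no such profile
exists in the standard:

   pragma Profile (Canonical_Evaluation_Order);
   --  Hypothetical: evaluate actuals and operands strictly left to right
   --  and nullify the 11.6 permissions for this partition.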
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 5:45 PM
>>> the cost of a float
>>> spill is the reading and writing of 11 dwords of memory and that is
>>> going to be significant in any event.
>>
>> what's your model for this odd claim?
>
> In Janus/Ada, at least, we have to spill the full extended precision
> value for each float and all of the float flags as well (because of
> the way float exceptions are managed), then the float processor is
> fully cleared before the call; the process then is reversed afterwards.
You can do FAR better than this, it is really quite easy and much cheaper to
make the floating-point stack extend to memory automatically. It's interesting
that Steve Morse in his book claimed this was impossible but I found out how to
do it, and convinced him that it worked (this is what we did in the Alsys
compiler).
Of course you should use SSE these days instead of the old junk fpt anyway :-)
> At least, that's the way it worked before I added the front-end
> rearrangement of code. Since then, I haven't been able to construct an
> example that the compiler wasn't able to reorder to eliminate the need
> for any float spilling. It certainly ought to be possible to construct
> such a case, but it would be fiendishly complex (a massive forest of
> nested function calls that return floats).
>
> The rearrangement of course depends upon the non-determinism of
> subprogram parameter ordering.
> I would have considered those sorts of things covered by the "early
> optimization" that Geert was complaining about. I read his message to
> say that any optimization in the front end is evil, which I agree with
> you is nonsense.
Well if you invent what people say, not surprising you will think it is
nonsense, he said nothing of the kind!
> Right, but if we tie the Standard down to the point of requiring
> everything to be evaluated in a canonical order, and to eliminate
> uncommon implementation strategies like generic sharing (which you
> complain about the standard supporting nearly every time I bring it
> up), there is no longer much chance for an Ada company to add value in
> the compiler proper. In such an environment, one could easily concentrate
> on a "different opportunity" without investing $$$$ in a piece that hardly
> can be different at all from anyone else's.
Absurd to think that eliminating generic sharing means this, that's just a
fantasy Randy! As I say, the idea that all Ada compilers are anywhere NEAR the
same, or would be the same even with a few more constraints is absurd nonsense,
I can't imagine ANYONE else agreeing with you on this.
> I don't think this is hard to do for a vendor; we always tried to make
> *everything* portable across architectures other than a relatively
> small list outlined in our documentation. The number of choices that
> the Standard gives us where we actually make different choices for
> different targets is quite small (even on targets as diverse as the U2200 and the Windows PC).
> The Standard provides some help, but I don't think it is particularly
> necessary to that task.
You never were in the business of supporting multiple OS/Target pairs (we
currently support over 50), or you would not make such an absurd statement IMO.
> And of course nothing prevents that one compiler vendor from creating
> a standard-like document (especially since there already is a
> Standard). I just don't see much value to a formal process for it in a one
> vendor world.
Well you are not a vendor, so it's not surprising that you don't understand. Let
me assure you that AdaCore finds the standard VERY useful from this point of
view. If this were not the case we would not support continued work on the
standard.
> I was thinking mostly that it would be expensive at runtime as many
> new memory temporaries would be needed. We'd of course want to
> aggressively eliminate those temporaries, which would complicate the
> implementation as well.
Well they would not complicate anything for us, and I think the impact of such
an approach would be small (it would be a trivial experiment to do with GNAT in
fact).
> P.P.S. The idea of creating a profile or something in Annex H to
> specify that everything is evaluated in canonical order (and 11.6 is
> nullified!) isn't a bad one. It would let the standard define these
> things in a more sensible way but would prevent making this into a
> barrier for adoption of Ada 2012 and beyond. (As a specialized needs
> annex, implementers would not have to support it.)
Perhaps the HRG could be persuaded to look at this idea.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 27, 2013 5:55 PM
> >> That's certainly not the case, ease of use, quality of error
> >> messages, error recovery, which host and targets are supported,
> >> quality of support, responsiveness to bug fixes and enhancements,
> >> ability to maintain and document known problems, etc... are much
> >> more important than micro optimizations to most users.
> >
> > None of those things, other than error messages, are properties of
> > the *compiler*, they are properties of the business (that's
> > especially true for support which is mainly what you are talking about here).
>
> totally wrong
>
> ease of use           property of the compiler
> quality of messages   property of the compiler
> error recovery        property of the compiler
> ease of fixing bugs   property of the compiler
> targets supported     property of the compiler
I strongly disagree, but I doubt that I would convince you and this certainly is
getting way off-topic. For instance, hardly anyone directly runs a compiler
these days (or ever, for Ada compilers). You use a make tool (GNATMake) or a
programming environment (GPS) and that's where the ease of use comes from, not
the compiler. Indeed, the ease of use of the compiler proper is virtually
irrelevant, so long as it can be invoked from the command line, as you can wrap
something else around it to make it easy to use.
Similarly, "targets supported" is a business decision, not really about the
compiler proper. Porting runtimes is relatively easy compared to building an
entire compiler, and the same is true of building back-ends. (Yes, a really bad
design could make that hard, but that's unlikely, since every implementer I've
talked to has had a relatively portable front-end design with relatively small
back-ends.) ...
> > I agree with this, and that is my point. The main value to customers
> > is in a better eco-system. As such, it does not make business sense
> > to build a complex piece of software (like a compiler) when one
> > could build an eco-system (and support) around an existing open
> > source compiler. (An Ada compiler is the hardest and most expensive
> > piece of most development eco-systems.)
>
> The compiler might possibly be the single most expensive piece of the
> eco-system, but in the big picture it is still a small part (probably
> only 20% of the development resources at AdaCore, if that, go into the
> compiler).
>
> And mastering the gcc compiler, for example, at the level necessary to
> do what you suggest, would be a major investment, easily comparable to
> building a compiler from scratch. Of course you would get a lot for
> that in terms of targets, etc. but it would be a lot of work.
I wouldn't claim it would be easy, but it would be a lot easier than starting
from scratch. Especially as other people also contribute to the gcc compiler, so
you don't have to be able to fix everything. (To get the best quality support,
you would of course have to do that eventually, but I don't think you'd have to
start there. Indeed, all you really need is hubris. :-) I unfortunately used all
of mine in the 1980s -- I know better now, which makes me too risk-averse.)
> > Many of us built Ada compilers when the economics was different
> > (especially as no open source compiler existed to build around), but
> > that's not true today. I keep maintaining Janus/Ada because it's
> > very much like my child and you don't abandon your children -- but
> > it certainly doesn't make economic sense to do so.
>
> Well of course an open source compiler existed, you could have decided
> to build a front end for gcc, just as we did instead of an entire
> compiler, but as I say, I think it would probably have been more work,
> not less.
I don't think gcc was around in 1981.
> As for abandoning, part of wisdom is learning when to abandon things that
> should be abandoned :-)
Yeah, but what else would I have? I know it's sad, but Janus/Ada is essentially
my family (never having married or having children). I've always joked that I
was married to Ada, and it's pretty true.
(I think I've reached my mid-life crisis at 55! Like always, many years later
than everyone else.)
Anyway, if I did abandon Janus/Ada, then I'd clearly have to use GNAT like
everyone else. Which would help push us to the one-compiler situation that I was
talking about above. (After all, you guys like to repeatedly tell us that there
is only one Ada 2012 compiler. We have to be careful to not change the Standard
to the point where that becomes true permanently.)
> We have certainly abandoned significant chunks of technology as we go
> along at AdaCore, and look for example at the Alsys decision to
> abandon their compiler technology in favor of Ada Magic.
>
> Boy this must be the most off-topic thread for a long time, but it
> isn't only me keeping it alive :-)
Yeah, it's gotten sort of out of hand. I'm afraid I couldn't let Geert's
contention that "early optimization" (that is, in the front end) is bad pass. To
repeat what I said above, we can't change the Standard in such a way that we
make it uneconomic for multiple vendors to support Ada 2012 and Ada 202x. People
are not going to completely redesign their front-ends or back-ends just because
someone thinks it's "1970's technology". Anyway, hopefully we can give this a
rest.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 6:10 PM
>> totally wrong
>>
>> ease of use           property of the compiler
>> quality of messages   property of the compiler
>> error recovery        property of the compiler
>> ease of fixing bugs   property of the compiler
>> targets supported     property of the compiler
>
> I strongly disagree, but I doubt that I would convince you and this
> certainly is getting way off-topic. For instance, hardly anyone
> directly runs a compiler these days (or ever, for Ada compilers). You
> use a make tool
> (GNATMake) or a programming environment (GPS) and that's where the
> ease of use comes from, not the compiler. Indeed, the ease of use of
> the compiler proper is virtually irrelevant, so long as it can be
> invoked from the command line, as you can wrap something else around
> it to make it easy to use.
I begin to think you really don't understand much about ease of use, so I won't
bother to try to educate you on this, but from my point of view, ease of use of
the compiler has a LOT to do with the compiler, e.g. avoiding the undesirable Ada 83
library model. We have been very successful in the Ada market because we
understand this. I think success speaks for itself when comparing compilers.
> Similarly, "targets supported" is a business decision, not really
> about the compiler proper. Porting runtimes is relatively easy
> compared to building an entire compiler, and the same is true of
> building back-ends. (Yes, a really bad design could make that hard,
> but that's unlikely, since every implementer I've talked to has had a
> relatively portable front-end design with relatively small back-ends.)
My goodness, your lack of experience shows up again; it is not at ALL easy to
build new back ends. To take one example, implementing software pipelining,
essential to a usable ia64 port, is FAR from trivial. But that's one of hundreds
of similar examples. The back end of GCC is incidentally FAR bigger than the Ada
front end, and represents many hundreds of person years of effort.
> I wouldn't claim it would be easy, but it would be a lot easier than
> starting from scratch. Especially as other people also contribute to
> the gcc compiler, so you don't have to be able to fix everything. (To
> get the best quality support, you would of course have to do that
> eventually, but I don't think you'd have to start there. Indeed, all
> you really need is hubris. :-) I unfortunately used all of mine in the
> 1980s -- I know better now which makes me too risk-adverse.)
You underestimate the task.
> I don't think gcc was around in 1981.
Right, 1987 was the first official release.
> Anyway, if I did abandon Janus/Ada, then I'd clearly have to use GNAT
> like everyone else. Which would help push us to the one-compiler
> situation that I was talking about above. (After all, you guys like to
> repeatedly tell us that there is only one Ada 2012 compiler. We have
> to be careful to not change the Standard to the point where that
> becomes true permanently.)
Well I fear we may already have done that, Ada 2012 was a huge amount of effort,
and a lot of it was for completely unimportant stuff. In fact I would say nearly
all of the Ada 2012 changes were unimportant. Read my Dr. Dobbs article for a
take on that.
> Yeah, it's gotten sort of out of hand. I'm afraid I couldn't let
> Geert's contention that "early optimization" (that is, in the front
> end) is bad pass.
You totally misread what Geert was saying!
> To repeat what I said above, we can't change the Standard in such a
> way that we make it uneconomic for multiple vendors to support Ada
> 2012 and Ada 202x. People are not going to completely redesign their
> front-ends or back-ends just because someone thinks it's "1970's
> technology". Anyway, hopefully we can give this a rest.
I am flummoxed by this paragraph, I have no idea what you are talking about, I
guess it must be some Janus idiosyncratic thing, because it makes no general
sense to me!
In particular, remember that this off topic thread was all about the
reassociation rule, are you *seriously* saying that eliminating this
reassociation rule would be the one thing that made it uneconomic for vendors to
support Ada 2012?
No one has proposed any other required change to the Ada 2012 standard!
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 27, 2013 6:29 PM
> > In Janus/Ada, at least, we have to spill the full extended precision
> > value for each float and all of the float flags as well (because of
> > the way float exceptions are managed), then the float processor is
> > fully cleared before the call; the process then is reversed afterwards.
>
> You can do FAR better than this, it is really quite easy and much
> cheaper to make the floating-point stack extend to memory
> automatically.
> It's interesting that Steve Morse in his book claimed this was
> impossible but I found out how to do it, and convinced him that it
> worked (this is what we did in the Alsys compiler).
I don't doubt it; I haven't worked on this area in years. The main problem would
be that you would have to reliably handle the floating point traps in order to
do this (certainly doing a test before every push would not be "much cheaper"),
and that we were never able to do. I'm sure that was because early OSes like SCO
Unix simply didn't handle these things properly. Anyway, we designed a model
that doesn't depend on using any traps, taking advantage of the various
permissions for checks and the fact that the bits are sticky to only check for
problems just before any store.
I've never had any reason to revisit this model (don't have customers with
critical floating point needs).
> Of course you should use SSE these days instead of the old junk fpt
> anyway :-)
Right, and *that* would be a much better reason to revisit floating point
support than trying to get a couple of cycles out of fpt exception checking.
> > I would have considered those sorts of things covered by the "early
> > optimization" that Geert was complaining about. I read his message
> > to say that any optimization in the front end is evil, which I agree
> > with you is nonsense.
>
> Well if you invent what people say, not surprising you will think it
> is nonsense, he said nothing of the kind!
Then I have no idea what he was talking about, unless he was building some sort
of straw man. There are lots of uses of the unspecified order of evaluation that
have nothing to do with performance: some calls have to be evaluated out of
order to meet Ada semantics, and canonicalizing calls with named notation makes
everything else easier.
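Editor's note: a small illustration (all names invented) of the normalization
described above -- a front end rewriting a named-notation call with defaults
into positional form before code generation:

   procedure Named_Demo is
      function F return Integer is (1);
      function G return Integer is (2);
      procedure Draw (X, Y : Integer; Width : Integer := 1) is null;
   begin
      Draw (Y => F, X => G);
      --  Canonicalized by the front end to the positional call
      --     Draw (G, F, 1);
      --  so "strict left-to-right" could mean textual order (F, then G)
      --  or parameter order (G, then F).
   end Named_Demo;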
> > Right, but if we tie the Standard down to the point of requiring
> > everything to be evaluated in a canonical order, and to eliminate
> > uncommon implementation strategies like generic sharing (which you
> > complain about the standard supporting nearly every time I bring it
> > up), there is no longer much chance for an Ada company to add value
> > in the compiler proper. In such an environment, one could easily
> > concentrate on a "different opportunity"
> > without investing $$$$ in a piece that hardly can be different at
> > all from anyone else's.
>
> Absurd to think that eliminating generic sharing means this, that's
> just a fantasy Randy! As I say, the idea that all Ada compilers are
> anywhere NEAR the same, or would be the same even with a few more
> constraints is absurd nonsense, I can't imagine ANYONE else agreeing
> with you on this.
Fair enough. I should point out that I've *always* felt this way, all the way
back to the early 1980s when we first got into this business. I considered it
the #1 business risk, in that we might not be able to sufficiently differentiate
our product from another company with deeper pockets. One of the main reasons
that we chose generic sharing, dynamic allocation of memory for mutable objects,
and similar decisions is that we wanted the compiler to be as different as
possible from everyone else's.
And I still feel that way. If I had to remove the generic sharing from the
compiler, allocate mutable objects to the largest possible size, and so on,
there would remain absolutely no reason for anyone to want to use Janus/Ada. I
couldn't compete with AdaCore in terms of support hours or number of targets,
and I can't really imagine anything that I could compete in. (Especially as
AdaCore could probably afford to clone anything that we did if it was
worthwhile.)
> > I don't think this is hard to do for a vendor; we always tried to make
> > *everything* portable across architectures other than a relatively
> > small list outlined in our documentation. The number of choices that
> > the Standard gives us where we actually make different choices for
> > different targets is quite small (even on targets as diverse as the
> > U2200 and the Windows PC). The Standard provides some help, but I
> > don't think it is particularly necessary to that task.
>
> You never were in the business of supporting multiple OS/Target pairs
> (we currently support over 50), or you would not make such an absurd
> statement IMO.
Sorry, but we always supported multiple OS/Target pairs (still do in fact).
Admittedly, the majority of our revenue has always come from a single pair, but
there always have been others, and multiple processors as well (CP/M/Z-80 was
our first pair, but of course we also had MS-DOS/8086 and CP/M-86/8086; later
there were 16-bit and 32-bit x86 targets, on MS-DOS and various Unix systems; we
also did 68020 and SPARC compilers; and of course the U2200 implementation).
So, while it may be "absurd" to you, it was the way I had RRS do business.
> > And of course nothing prevents that one compiler vendor from
> > creating a standard-like document (especially since there already is
> > a Standard). I just don't see much value to a formal process
> for it in a one vendor world.
>
> Well you are not a vendor, so it's not surprising that you don't
> understand. Let me assure you that AdaCore finds the standard VERY
> useful from this point of view. If this were not the case we would not
> support continued work on the standard.
I realize this, and my bank account thanks you. :-) And it's true that so long
as customers insist on second sources, a Standard is helpful. The situation is
not yet to the single-vendor one that I described, and perhaps it will never get
there.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 27, 2013 6:36 PM
> In particular, remember that this off topic thread was all about the
> reassociation rule, are you *seriously* saying that eliminating this
> reassociation rule would be the one thing that made it uneconomic for
> vendors to support Ada 2012?
>
> No one has proposed any other required change to the Ada 2012
> standard!
Then *you* are completely misreading what Geert was saying. We haven't been
talking about the reassociation rule in *ages* (everyone seems to agree that it
should be dropped, and I asked Geert to write an AI to that effect). We're
talking about the much more general fact that the order of evaluation of many
things is unspecified in Ada. (I've mostly been concentrating on parameters.)
That's the message that Geert sent that Erhard and I had responded to, and
that's what Geert was replying to. Geert has on several occasions called for Ada
to remove all of the non-determinism from the language. I think that would be
way over the top. (I once called that off-topic to an off-topic discussion, and
it still is.) Such a rule would certainly make it much harder to use an existing
Ada technology as the basis for an up-to-date Ada compiler.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 6:46 PM
> I realize this, and my bank account thanks you. :-) And it's true so
> long as customers insist on second-sources that a Standard is helpful.
> The situation is not yet to the single-vendor one that I described,
> and perhaps it will never get there.
Nope, that's not the primary reason for finding the standard useful, it has
nothing to do with our competitors. It has to do with
a) Letting people know Ada is alive and well and standardized and that the
standard is progressing ahead with useful new stuff.
b) Defining clearly what people can and cannot expect to hold if they are trying
to write portable code.
There are a FEW cases in which we guarantee behavior beyond the standard but not
so many.
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 6:48 PM
>> No one has proposed any other required change to the Ada 2012
>> standard!
Removing all the non-determinism has nothing to do with avoiding all front end
optimizations, which of COURSE Geert does not propose! Also, no one is proposing
removing the non-determinism from Ada 2012. As to whether 2020 should be
stricter, interesting issue indeed! Probably a strict annex such as I suggest
would be the way to go.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 27, 2013 7:53 PM
> >> No one has proposed any other required change to the Ada 2012
> >> standard!
>
> Removing all the non-determinism has nothing to do with avoiding all
> front end optimizations, which of COURSE Geert does not propose!
While the Dewar rule certainly applies to the Standard, I don't think one can
apply it to correspondence. :-) You've certainly never been shy about calling me
out when I wrote something that you thought was nonsense (even when you
misinterpreted what I said/meant); I'm not sure why others should be treated
differently.
> ... Also, no one is proposing removing the non-determinism from Ada 2012.
I'd hope not. But some quotes from Geert's message of Nov 24:
>> By the way, it has always puzzled me how much the Ada designers like
>>non-determinism.
...
>I think we should make it a point to go through all such cases and
>specify behavior.
...
>Maybe it isn't too late for a binding interpretation for Ada 2012 for
>some of these order of evaluation topics? (*)
These make me think that Geert is in fact asking for this for Ada 2012. Now, I
grant that it's not 100% clear what "all such cases" applies to. He's talking
about associativity immediately before this statement, but there is only one
such case in the Standard and in our correspondence, so "all" would have to mean
something broader.
In the last statement, "some" hopefully only applies to associativity, but again
there is only one of those cases, so it seems that Geert meant something
broader. (And I got the same feeling from talking to him in Pittsburgh, which is
why I feel justified in applying an expansive interpretation.)
Anyway, I wouldn't have bothered with this discussion at all if I hadn't thought
that Geert was pushing for a change immediately.
> As to whether 2020 should be
> stricter, interesting issue indeed! Probably a strict annex such as I
> suggest would be the way to go.
I think I agree. Less burden on implementers unless they have demand for it (in
which case the burden is at least justifiable).
P.S. Have a good Thanksgiving! And that goes for the rest of the US-based ARG as
well!
****************************************************************
From: Robert Dewar
Sent: Wednesday, November 27, 2013 8:12 PM
> While the Dewar rule certainly applies to the Standard, I don't think
> one can apply it to correspondence. :-) You've certainly never been
> shy about calling me out when I wrote something that you thought was
> nonsense (even when you misinterpreted what I said/meant); I'm not
> sure why others should be treated differently.
You just misinterpreted what Geert was saying, I certainly found him clearer. He
was saying that things like order of evaluation of expressions do not belong
being optimized in the front end, and I totally agree with that position. You
extended that to all optimizations, but that's nonsense, Geert did not say that,
and of course doesn't think that, and neither do I.
>> ... Also, no one is proposing removing the non-determinism from Ada
>> 2012.
>
> I'd hope not. But some quotes from Geert's message of Nov 24:
>
>>> By the way, it has always puzzled me how much the Ada designers like
>>> non-determinism.
> ...
>> I think we should make it a point to go through all such cases and
>> specify behavior.
> ...
>> Maybe it isn't too late for a binding interpretation for Ada 2012 for
>> some of these order of evaluation topics? (*)
>
> These make me think that Geert is in fact asking for this for Ada
> 2012. Now, I grant that it's not 100% clear what "all such cases"
> applies to. He's talking about associativity immediately before this
> statement, but there is only one such case in the Standard and in our
> correspondence, so "all" would have to mean something broader.
He said SOME of these topics, and we already agree on one!
> In the last statement, "some" hopefully only applies to associativity,
> but again there is only one of those cases, so it seems that Geert
> meant something broader. (And I got the same feeling from talking to
> him in Pittsburgh, which is why I feel justified in applying an
> expansive
> interpretation.)
I don't know if there are other real problems besides the associativity rule,
it's a reasonable question to raise.
> Anyway, I wouldn't have bothered with this discussion at all if I
> hadn't thought that Geert was pushing for a change immediately.
Geert said *EXACTLY* (you quoted him!) that
> Maybe it isn't too late for a binding interpretation for Ada 2012 for
> some of these order of evaluation topics?
Well you apparently agree with that, since you agree that it is reasonable to
eliminate the reassociation case. Once we have one that we can agree on, it is
not a matter of principle any more, but a matter of considering, case by case,
whether there are any other sufficiently gratuitous non-determinisms to put
in this same category.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, November 27, 2013 9:02 PM
> You just misinterpreted what Geert was saying, I certainly found him
> clearer. He was saying that things like order of evaluation of
> expressions do not belong being optimized in the front end, and I
> totally agree with that position.
"Order of evaluation of expressions" is the same as "order of evaluation of
parameters" in Ada, as all expressions in Ada or formally a set of function
calls. And you cannot correctly implement Ada without reordering parameters (for
some definition of order) in some cases. Thus an implementer pretty much has to
do some expression (read parameter) reordering in the front end.
This isn't really "optimization", it's "normalization" (as Erhard noted), and it
doesn't have anything to do with the target. Claiming that reordering of
expressions is simply an optimization is just wrong.
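Editor's note: for concreteness, a tiny example (names invented) of the
unspecified parameter evaluation order under discussion; either output is a
correct result under the standard:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Order_Demo is
      Counter : Natural := 0;

      function Next return Natural is
      begin
         Counter := Counter + 1;  --  observable side effect
         return Counter;
      end Next;

      procedure Show (A, B : Natural) is
      begin
         Put_Line (Natural'Image (A) & Natural'Image (B));
      end Show;

   begin
      Show (Next, Next);  --  may print " 1 2" or " 2 1"
   end Order_Demo;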
> You extended that to all optimizations, but that's nonsense, Geert did
> not say that, and of course doesn't think that, and neither do I.
I don't see any reason that any optimizations shouldn't be done wherever they
are easiest to do for a particular implementation, be that front-end, back-end,
or hind-end. :-) Janus/Ada uses tables of target-specific information to
determine whether or not to do certain front-end optimizations, which are simply
intended to get the intermediate code into the preferred form for the back-end.
These are quite similar to the tables defining the standard data types. I'm not
sure why any compiler writer is supposed to push things to the target-dependent
back-end that are easier done earlier in the process (and do them once, rather
than once per target). It all depends on the compiler architecture, and making
broad statements about what a compiler should or shouldn't do is bogus. (And
yes, I've made the same mistake from time-to-time.)
****************************************************************
From: Robert Dewar
Sent: Thursday, November 28, 2013 2:16 AM
I am retiring from this thread, IMO it has devolved to being content-free :-(
and is going around in circles. I am not really interested in what Janus/Ada
does :-). I suggest Randy take it off line if he wants to pursue it further
(although I won't contribute any more in any case). Sorry for all the noise.
*****************************************************************
From: Bob Duff
Sent: Monday, October 13, 2014 12:19 PM
New version of AI12-0092-1, Soft legality rules. [Editor's note: This is
version /01 of the AI.]
*****************************************************************
From: Bob Duff
Sent: Monday, October 13, 2014 12:37 PM
> New version of AI12-0092-1, Soft legality rules.
To any ARG members planning to go to the upcoming WG9 meeting:
Please take note of this email, which is filed in AI12-0092-1.TXT:
From: Robert Dewar
Sent: Tuesday, September 3, 2013 2:13 PM
> I am somewhat neutral on the "soft error" concept. It does allow us
> to introduce incompatibilities without "officially" doing so, but our
> attempts to do that with "unreserved keywords" always ran into trouble
> with WG-9.
That's just a lack of competent political lobbying IMO!
...and subsequent replies.
History: ARG has proposed unreserved keywords to preserve compatibility.
Every time, WG9 has rejected the idea. The problem is that there is a conflict:
(1) On the one hand, programs really shouldn't be using keywords
as identifiers, so keywords should be reserved.
(2) On the other hand, they already do (when new ones are added),
so newly added keywords should NOT be reserved.
(1) is a matter of taste/style. (2) is a matter of huge amounts of wasted money.
IMHO, it's totally irresponsible to place more importance on (1) than on (2).
I'd like to know who on WG9 is so opposed to nonreserved keywords, and why they
don't consider compatibility more important.
I'd also like to know whether the notion of "soft Legality Rules"
would solve the problem in their view. The idea is that we can add a keyword
like "interface", and add a soft legality rule forbidding the use of "interface"
as an identifier, thus requiring a diagnostic message, so people can fix their
programs. But they don't have to fix them RIGHT NOW.
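Editor's note: a sketch of the "interface" case under such a rule. The
declaration below is legal Ada 95; once "interface" is a keyword it would
violate a soft Legality Rule, so a diagnostic is required, but in the relaxed
mode the unit could still be compiled and run unchanged:

   package Device is
      Interface : Integer := 0;  --  fine in Ada 95; diagnosed (softly)
                                 --  once "interface" becomes a keyword
   end Device;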
***************************************************************
From: Randy Brukardt
Sent: Monday, October 13, 2014 1:20 PM
> To any ARG members planning to go to the upcoming WG9 meeting:
It's unlikely that we'll have time to look at any Amendments at this meeting,
because the meeting is shorter than usual and we have to finish the Corrigendum.
The only looking at Amendments that we'll likely do is to look at them to see if
there are any that we want to reclassify as Binding Interpretations so that
they'll appear in the Corrigendum.
> Please take note of this email, which is filed in AI12-0092-1.TXT:
>
> From: Robert Dewar
> Sent: Tuesday, September 3, 2013 2:13 PM
>
> > I am somewhat neutral on the "soft error" concept. It does allow us
> > to introduce incompatibilities without "officially" doing so, but our
> > attempts to do that with "unreserved keywords" always ran into trouble
> > with WG-9.
>
> That's just a lack of competent political lobbying IMO!
>
> ...and subsequent replies.
BTW, the quote is from Tucker (that confused me at first).
...
> I'd like to know who on WG9 is so opposed to nonreserved keywords, and
> why they don't consider compatibility more important.
The short answer to the first question was "everyone not from the US". My
recollection on the second is that they considered the impact on tools and on
education to be more important. They thought it was important that there would
be no exceptions to the rules.
> I'd also like to know whether the notion of "soft Legality Rules"
> would solve the problem in their view. The idea is that we can add a
> keyword like "interface", and add a soft legality rule forbidding the
> use of "interface" as an identifier, thus requiring a diagnostic
> message, so people can fix their programs. But they don't have to fix
> them RIGHT NOW.
My guess (and it's just a guess) is that the root objection is to the option,
and as such, a suppressible error provides no help.
(Why have you reverted to calling them "soft errors"? No one liked that
term.)
I fear that even suppressible errors will be a hard sell, for that same reason.
I think we'll be able to show that they make things like parallel operations
safer without being overly restrictive (considering you can always turn off the
checks and proceed at your own risk). But I remain dubious that they have much
use for compatibility (there are only a few cases where we have checks that
sensibly can be turned off).
***************************************************************
From: Bob Duff
Sent: Monday, October 13, 2014 2:41 PM
> The short answer to the first question was "everyone not from the US".
> My recollection on the second is that they considered the impact on
> tools and on education to be more important. They thought it was
> important that there would be no exceptions to the rules.
Weak arguments, IMHO. If Robert is right that "That's just a lack of competent
political lobbying IMO!", then somebody (not me) should do some competent
political lobbying. Or maybe some technical lobbying -- to me the
gratuitous-compatibility issue is compelling.
***************************************************************
From: Randy Brukardt
Sent: Monday, October 13, 2014 3:07 PM
With the change in WG 9 voting away from National Bodies, it seems that it will
be easier to make and win such arguments. But we'd have to be careful that we
don't just move the problem to the SC 22 level.
It appears to me that the concern about compatibility is strongly correlated to
the distance from AdaCore. The further you are from AdaCore, the less likely
that you find it important. And of course part of that is the word "gratuitous",
since necessary incompatibility comes from needing to fix previous mistakes.
IMHO, the most unnecessary incompatibility/inconsistency in Ada 2012 was the
change in record equality semantics, yet no one is arguing that we should
eliminate that change. So one could argue that the lack of interest from WG 9 to
date reflects more their judgment of the relative importance of the
incompatibility vs. the alternatives.
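Editor's note: the equality change referred to here is the Ada 2012 rule that
record equality composes using a user-defined primitive "=" of record
components, where earlier revisions used the predefined "=" for untagged
components. A minimal sketch of the behavior change:

   package Eq_Demo is
      type Id is record
         Value : Integer;
      end record;
      function "=" (L, R : Id) return Boolean is (True);  --  user-defined

      type Pair is record
         A, B : Id;
      end record;
      --  Under Ada 95/2005 rules, Pair's predefined "=" uses the
      --  predefined "=" of Id, so Pairs with different Values are unequal.
      --  Under Ada 2012 rules, Pair's "=" composes with the user-defined
      --  "=" above, so any two Pair values compare equal.
   end Eq_Demo;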
Anyway, I'm making someone else's argument for them, so I'll stop now.
***************************************************************
From: Dr. Joyce L Tokar
Sent: Monday, October 13, 2014 6:05 PM
It is worth noting that the direction from SC 22 to conduct working group
business as a collection of experts, all of which have been designated by a
National Body or Liaison organization, may have considerable implications as
documents are put forward for approval above WG 9. The concept is that the WG
members advise their given National Bodies when a document is put forward for
approval. So, if there is a lack of consensus within a given National Body
within WG 9, then it is plausible for that lack of consensus to result in
conflicting recommendations to a NB when it comes time to vote. In my
experience, such conflicts are realized as a vote against approval or an
abstention.
***************************************************************
From: Randy Brukardt
Sent: Monday, October 13, 2014 1:15 PM
...
> 27.1 Each Legality Rule is either "hard" or "soft". Legality Rules are
> hard, unless explicitly specified as soft.
I thought we'd decided to proceed on the idea of "suppressible" errors (as
opposed to "soft" errors). Any particular reason why you changed the terminology
back? Or was it just an oversight?
***************************************************************
From: Bob Duff
Sent: Monday, October 13, 2014 2:40 PM
The previous version of the AI says this:
Terminology:
The original proposal was for "fatal error" and "nonfatal error". That
wasn't liked because it misuses a common (outside of Ada) term. We settled
on "hard error" and "soft error". Also suggested was "major error" and
"minor error".
So I thought "soft" was the final decision.
I explained in the !discussion why "error" is confusing. Compiler writers (and
compiler users) talk about "errors" and "warnings" (etc) meaning "various kinds
of diagnostic messages" and the situations that trigger those messages. But
that's not at all how the RM uses the term "error". See "Classification of
Errors" in chap 1. Hence, I went with "soft Legality Rule". If you want to
talk about the errors, it's "violations of soft legality rules", which is a
mouthful, but that's OK because it's rare.
For example, in one message, you wrote:
(1) The default in standard mode is that it is an error (the program is not
allowed to execute).
But that's not at all what the RM says! Bounded errors, exceptions, and
erroneous execution are all considered "errors" by the RM, yet they do not stop
the program from running.
I read through the entire !appendix, and there were positive notes about "soft".
Many folks like "suppressible", but to me that might imply suppressing the
diagnostic message, which is exactly the opposite of what we're trying to do.
I don't much care what we call these things (other than that the term "error"
doesn't fit in very well). There is no term that is always "right", because
some of these things are going to be serious errors and some will be minor.
Only the user can decide. The only important thing is to require that certain
things trigger a diagnostic message.
***************************************************************
From: Randy Brukardt
Sent: Monday, October 13, 2014 7:41 PM
> So I thought "soft" was the final decision.
No, that was from the original placeholder AI (which was based on e-mail
discussion). When we talked about it in a meeting afterwards (Pittsburgh - you
were there), we decided on "suppressible error", I think because that suggests
the default direction (detected by default). I didn't update the AI afterwards
because I thought you were going to do it soon.
> I read through the entire !appendix, and there were positive notes about "soft".
> Many folks like "suppressible", but to me that might imply suppressing
> the diagnostic message, which is exactly the opposite of what we're trying to do.
Opposite how? The idea is that these are errors that usually prevent the
partition from running (at least WRT the Standard). If one suppresses them,
you can run the partition. And surely there would be no message in the latter
case (there's no point in forcing a separate suppression of the message, as most
projects want to be warning clean as well as error clean; requiring people to
write two suppression pragmas and/or options would be silly).
Obviously, in the case where they are suppressed there has to be well-defined
semantics (which might be erroneous in some cases) so whatever happens when they
are run makes sense. But I don't see any value to a nagging message after
explicit suppression, and surely no value to *requiring* that (that's up to
implementers, if they have customers that find an intermediate mode useful).
P.S. Yes, I saw your comments on "error", and I agree that you're right about
that. So probably we're talking "suppressible Legality Rule" or something like
that.
P.P.S. And of course I understand Robert's contention that the Ada standard
can't really specify the default behavior; I think it important that the
Standard reflect that intent even if it is not really enforceable.
***************************************************************
Questions? Ask the ACAA Technical Agent