Version 1.10 of ai12s/ai12-0092-1.txt

!standard 1.1.5(4)          14-10-13 AI12-0092-1/01
!class Amendment 13-11-01
!status work item 14-10-13
!status received 13-08-29
!priority Low
!difficulty Medium
!subject Soft Legality Rules
!summary
Soft legality rules are introduced.
!problem
In numerous cases, the ARG has gotten stuck between a rock and a hard place: Some situation really IS an error, so we want it to be detected, preferably at compile time. But making it illegal is an incompatibility. The ARG then had to choose between allowing errors to go undetected (bad) and breaking existing code (also bad).
!proposal
(See summary.)
!wording
RM-1.1.2 says:
24/3 Each section is divided into subclauses that have a common structure. Each clause and subclause first introduces its subject. After the introductory text, text is labeled with the following headings:
Legality Rules
27 Rules that are enforced at compile time. A construct is legal if it obeys all of the Legality Rules.
...
Post-Compilation Rules
29 Rules that are enforced before running a partition. A partition is legal if its compilation units are legal and it obeys all of the Post-Compilation Rules.
Change RM-1.1.2-(27) to:
27 Rules that are enforced at compile time. A compilation_unit is legal if it is syntactically well formed and it obeys all of the Legality Rules. If a compilation_unit is illegal, the implementation shall issue one or more diagnostic messages indicating that fact.
27.1 Each Legality Rule is either "hard" or "soft". Legality Rules are hard, unless explicitly specified as soft. Redundant[The run-time semantics are well defined even in the presence of violations of soft legality rules. There is a mode in which such violations do not prevent the program from running (see 10.2).] The hard/soft distinction applies in the same way to Syntax and Post-Compilation Rules.
Change RM-1.1.2-(29) to:
29 Rules that are enforced before running a partition. A partition is legal if its compilation units are legal and it obeys all of the Post-Compilation Rules. If a partition is illegal, the implementation shall issue one or more diagnostic messages indicating that fact.
NOTE: Violation of Syntax, Legality, and Post-Compilation Rules requires a diagnostic message, whether the rule is hard or soft.
AARM: The form/wording of diagnostic messages is not defined by the standard.
AARM: There is no implication regarding the severity of soft rules, nor the probability that they represent real errors. One soft Legality Rule might be a serious issue, while another might be no big deal.
Editor's note: I expect we can use the heading:
Legality Rules (soft)
to indicate soft rules in most cases. In some cases, we might want to be more explicit, as in "This is a soft Legality Rule". No need to get all pedantic here.
RM-1.1.5 says:
1 The language definition classifies errors into several different categories:
2 * Errors that are required to be detected prior to run time by every
Ada implementation;
3 These errors correspond to any violation of a rule given in this
International Standard, other than those listed below. In particular, violation of any rule that uses the terms shall, allowed, permitted, legal, or illegal belongs to this category. Any program that contains such an error is not a legal Ada program; on the other hand, the fact that a program is legal does not mean, per se, that the program is free from other forms of error.
4 The rules are further classified as either compile time rules, or
post compilation rules, depending on whether a violation has to be detected at the time a compilation unit is submitted to the compiler, or may be postponed until the time a compilation unit is incorporated into a partition of a program.
That's Ada-83 talk -- since Ada 95, we haven't used magic words "shall", "allowed", etc. to indicate kinds of errors -- we explicitly label them Legality Rules. So delete most of the above:
1 The language definition classifies errors into several different categories:
2 * Errors that are required to be detected prior to run time by every
Ada implementation;
3 These errors correspond to any violation of a Syntax Rule, Legality
Rule, or Post-Compilation Rule.
RM-10.2 says:
27 The implementation shall ensure that all compilation units included in a
partition are consistent with one another, and are legal according to the rules of the language.
Change it to:
27 The implementation shall ensure that all compilation units included in a
partition are consistent with one another. A violation of the hard Legality Rules prevents the partition from running. For each soft Legality Rule, the implementation shall provide two modes: one in which a violation of the rule prevents the partition from running, and the other in which a violation does not prevent running. Similar requirements apply to hard/soft Syntax and Post-Compilation Rules.
!discussion
The primary purpose of soft Legality Rules is to enable the addition of new Legality Rules without causing incompatibilities. An example from the past is the addition of rules disallowing passing the "same" actual parameter to multiple formal 'out' parameters of a function. Since functions previously did not allow 'out' parameters at all, there was no incompatibility. For uniformity, it made sense to extend these rules to procedures, but that extension is incompatible; in practice, existing code sometimes does:
Some_Procedure(..., Ignore, Ignore);
There was no good solution: We had a choice between inferior language rules and incompatibilities. The "soft" concept solves the dilemma: We could have made the new rules soft, thus providing diagnostic messages for questionable code, while still retaining compatibility. Individual projects can then decide if or when to modify their code to obey the rules. The language designers cannot effectively make such decisions; we have no idea how costly it is to coordinate source-code changes, possibly across multiple organizations.
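As a concrete illustration (a hedged sketch; the procedure and variable names are invented), the following call was legal in Ada 2005 but violates the Ada 2012 rules of 6.4.1, because the same elementary actual is passed to two formal 'out' parameters:

   procedure Get_Pair (Input : in  Integer;
                       A, B  : out Integer);

   Ignore : Integer;
   ...
   Get_Pair (5, Ignore, Ignore); -- illegal in Ada 2012: overlapping
                                 -- elementary 'out' actuals

The run-time semantics are nevertheless well defined (both copy-backs assign to Ignore, whose value the caller discards), which is precisely the precondition for making the rule soft.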
Whether we should retroactively soften the 'out' rules should be the subject of another AI, as should the question of whether "unrecognized pragma" should be a soft Legality Rule. Other possible soft rules:
   * new reserved words;
   * limited function return (when not immutably limited);
   * the requirement that an overriding "=" come early enough to compose.
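As an illustration of the first item (a hedged sketch), code like the following was legal in Ada 95 and became illegal when "interface" was reserved in Ada 2005:

   package Interface is  -- legal Ada 95; illegal Ada 2005, where
                         -- "interface" is a reserved word
      procedure Send (Message : String);
   end Interface;

Had the new reserved words been introduced under a soft rule, compilers would have been required to diagnose such uses, while still being able (in the appropriate mode) to accept them with their well-defined Ada 95 meaning.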
Rejected terminology: Several alternatives were suggested. "fatal error"/"nonfatal error" -- no good because "fatal error" usually means something else (in particular, an error that stops a program dead in its tracks). A compiler might consider "missing source file" to be a fatal error. "hard error"/"soft error" and "major error"/"minor error" were also considered. But it's confusing to use the word "error" at all, because the RM already uses "error" to refer to bounded errors, exceptions, and erroneous behavior. "Required warning" was also suggested, but people found that to be too weak.
In any case, no terminology can be perfect here, because only the programmer can decide whether a given diagnostic message should be taken seriously. For example, if you port code containing pragma Abort_Defer from GNAT to a compiler that doesn't support Abort_Defer, the RM requires a warning. But that's a serious error: the program needs to be redesigned. If the pragma is Check, on the other hand, the warnings can be safely ignored.
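A hedged sketch of the contrast (Abort_Defer and Check are GNAT-defined pragmas; the procedure and the check name are invented). On a compiler that recognizes neither pragma, both draw the required diagnostic message, but the consequences differ sharply:

   procedure Critical_Update is
      Count : Integer := 0;
      pragma Check (Count_Valid, Count >= 0);
      --  GNAT-specific assertion; a compiler that warns and then
      --  ignores it merely loses an extra check, which is harmless.
   begin
      pragma Abort_Defer;
      --  GNAT-specific; a compiler that warns and then ignores it no
      --  longer defers aborts here, a serious change in behavior.
      Count := Count + 1;
   end Critical_Update;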
The point of soft errors is to require a diagnostic message. We leave it up to implementations whether to word their messages with "warning:" or "HORRIBLE ERROR:" or "eh, no big deal", or whatever. And we leave it up to programmers to decide when and how to change their code.
!ASIS
No ASIS impact.
!ACATS test
Soft legality rules should be tested via B tests in the usual way (requiring diagnostic messages). In addition, a new C test should be added that violates a soft legality rule, and is expected to run in the "allow to run" mode.
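A hedged sketch of such a C test (the test name is a placeholder, and the violated rule, overlapping 'out' actuals, is assumed to have been made soft purely for illustration; Report is the standard ACATS support package):

   with Report;
   procedure Cxxsoft is
      procedure Get_Pair (A, B : out Integer) is
      begin
         A := 1;
         B := 2;
      end Get_Pair;
      Ignore : Integer;
   begin
      Report.Test ("CXXSOFT", "Violating a soft Legality Rule " &
                              "does not prevent running");
      Get_Pair (Ignore, Ignore); -- assumed soft violation: diagnostic
                                 -- message required
      Report.Result;
   end Cxxsoft;

In the "allow to run" mode, a conforming implementation must both issue the diagnostic message and run this test to completion.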
!appendix

From: Bob Duff
Sent: Thursday, August 29, 2013  3:25 PM

I would like to propose a new AI on the subject of "nonfatal errors".
The recent discussions under the subject "aggregates and variant parts"
reminded me of this.  I'm talking about the sub-thread related to nested
variants, not the main thread about aggregates.

My idea is that a nonfatal error is a legality error.  The compiler is
required to detect nonfatal errors at compile time[*], just as it is for any
other legality errors.  However, a nonfatal error does not stop the program from
running.  (This of course implies that we must have well-defined run-time
semantics in the presence of nonfatal errors.)

[*] Or at "link time", if marked as a "Post-Compilation Rule".

In numerous cases, ARG has gotten stuck between a rock and a hard place:  Some
situation really IS an error, so we want it to be detected, preferably at
compile time.  But making it illegal is an incompatibility.  ARG had to choose
between allowing errors to go undetected (bad) and breaking existing code (also
bad).

When that happens in the future, I propose that we define the error situation to
be a "nonfatal error".  We get the best of both worlds:  the error must be
detected, but there is no incompatibility.

Example from the "aggregates and variant parts" discussion:
It was suggested that something like this:

    type Color is (Red, Orange, Yellow);

    type T(D: Color) is
        record
            case D is
                when Red | Orange =>
                    X : Integer;
                    case D is
                        when Red =>
                            Y : Integer;
                        when Orange =>
                            null;
                        when others =>
                            Bogus : Integer; -- Wrong!
                    end case;
                when Yellow =>
                    null;
            end case;
        end record;

is an error, because the Bogus component can never exist.
One should write "when others => null; -- can't happen".

But it would be completely irresponsible for ARG to make that illegal, because
it would be incompatible.  Solution: we could make it a nonfatal error, if we
think it's important to detect it.

!wording

RM-1.1.5 says:

1   The language definition classifies errors into several different
categories:

2     * Errors that are required to be detected prior to run time by every Ada
        implementation;

3       These errors correspond to any violation of a rule given in this
        International Standard, other than those listed below. In particular,
        violation of any rule that uses the terms shall, allowed, permitted,
        legal, or illegal belongs to this category. Any program that contains
        such an error is not a legal Ada program; on the other hand, the fact
        that a program is legal does not mean, per se, that the program is
        free from other forms of error.

4       The rules are further classified as either compile time rules, or post
        compilation rules, depending on whether a violation has to be detected
        at the time a compilation unit is submitted to the compiler, or may be
        postponed until the time a compilation unit is incorporated into a
        partition of a program.

RM-2.8 says:

                         Implementation Requirements

13  The implementation shall give a warning message for an unrecognized pragma name.

13.a        Ramification: An implementation is also allowed to have modes in
            which a warning message is suppressed, or in which the presence of
            an unrecognized pragma is a compile-time error.

I suggest moving the pragma-specific stuff into 1.1.5 and generalizing it.

Add after 1.1.5(4):

        When such an error is detected, the implementation shall issue
        a diagnostic message.  Redundant[This International Standard
        does not define the form or content of diagnostic messages.]

[Note to anyone who complains that we don't have a precise mathematical
definition of "diagnostic message":  Well, we don't have a definition of
"warning", either, yet the sky didn't fall when we wrote 2.8(13)!  We also don't
have a definition of what it means to "detect", but everybody knows (informally)
what it means.]

        By default, a legality error is a "fatal error".  Fatal errors
        prevent the program from running (see 10.2).  Some legality errors
        are explicitly defined by this International Standard to be
        "nonfatal errors".  Nonfatal errors do not prevent the program
        from running.

        AARM Ramification: An implementation is also allowed to have
        modes in which a nonfatal error is ignored, or in which a
        nonfatal error is treated as a fatal error.
        [???If it makes people more comfortable, we could require the
        latter mode, by adding a normative rule to the RM: An
        implementation shall provide a mode in which nonfatal errors
        are treated as fatal errors.]

RM-10.2 says:

27  The implementation shall ensure that all compilation units included in a
    partition are consistent with one another, and are legal according to the
    rules of the language.

Change that to:

27  The implementation shall ensure that all compilation units included in a
    partition are consistent with one another, and do not contain fatal errors.
    Redundant[This implies that such partitions cannot be run. Partitions may
    contain nonfatal errors.]

Change 2.8(13) to:

                         Legality Rules

13  A pragma name that is not recognized by the implementation is illegal.
A violation of this rule is a nonfatal error.

!discussion

Another example is "interface".  Using (say) "begin" as an identifier is a fatal
error, and that's fine.  But we should have said that using "interface" as an
identifier is a nonfatal error.  That would have avoided users wasting huge
amounts of money converting existing code (since "interface" is a widely-used
identifier).

When I originally proposed this idea, I called it "required warnings".
Some folks were worried that programmers might ignore what are considered "mere"
warnings.  Calling it a "nonfatal error" makes it clearer that these really are
errors.  You really should fix them, unless you are in a situation where it is
very expensive to make ANY modifications to existing code.  (In the unrecognized
pragma case, I guess you would "fix" the error (i.e. warning) by suppressing
it.)

In any case, it should be up to programmers to decide whether fixing nonfatal
errors is cost-effective.  That is not our job as language designers.

****************************************************************

From: Tucker Taft
Sent: Thursday, August 29, 2013  3:25 PM

> I would like to propose a new AI on the subject of "nonfatal errors"...

Makes sense to me.

****************************************************************

From: Tullio Vardanega
Sent: Friday, August 30, 2013  3:50 AM

Interesting.

****************************************************************

From: Randy Brukardt
Sent: Friday, August 30, 2013  4:05 PM

I'm not going to comment on the merits of the idea now.

But I think the terminology is wrong in that it is different than typical usage
of the term.

...
>         By default, a legality error is a "fatal error".  Fatal errors
>         prevent the program from running (see 10.2).

This is not the typical meaning of "fatal error". In pretty much every program
I've ever used, "fatal error" means an error that terminates processing
*immediately*. That is, a "fatal error" can have no recovery. That's not how you
are using it here (certainly, you don't mean to require Ada compilers to detect
no more than one error per compilation attempt).

The canonical use of "fatal error" in Janus/Ada is when the specified source
file cannot be found, but we also use the modifier on some of the
language-defined rules when we believe recovery is likely to cause many bogus
errors. For instance, we treat all context clause errors as fatal in that
continuing with an incomplete or corrupt symbol table is unlikely to provide any
value.

I don't think Ada should use an existing and commonly used term in an
inconsistent manner with the rest of the world. There must be a better term that
doesn't imply immediate termination of processing.

****************************************************************

From: Bob Duff
Sent: Friday, August 30, 2013  4:44 PM

> ...
> >         By default, a legality error is a "fatal error".  Fatal errors
> >         prevent the program from running (see 10.2).
>
> This is not the typical meaning of "fatal error". In pretty much every
> program I've ever used, "fatal error" means an error that terminates
> processing *immediately*. That is, a "fatal error" can have no recovery.
> That's not how you are using it here (certainly, you don't mean to
> require Ada compilers to detect no more than one error per compilation attempt).

Good point.  Let's first discuss the merits of the idea, and then later try to
come up with a better term.

History:  I first called it "required warning".  But you objected that "warning"
is too mild a term -- some folks might ignore warnings. I have no sympathy for
people who deliberately put pennies in fuse boxes (i.e. ignore warnings), but in
a futile attempt to appease you, I came up with a term that contains the word
"error".

But let's try to ignore the term for now, and concentrate on my goal:
to get ARG to quit introducing gratuitous incompatibilities.
That is, to give ARG an "out" -- a way to say, "we really think this ought to be
illegal, but if you have 10 million lines of code scattered across 17
organizations[*] you don't absolutely have to fix these errors -- your call, you
can choose to ignore these errors and still run your programs".

[*] I gather that that was the situation reported by Robert, with some company
that used Interface as the name of lots of child packages.

****************************************************************

From: Tucker Taft
Sent: Friday, August 30, 2013  4:55 PM

survivable error?
recoverable error?

****************************************************************

From: Randy Brukardt
Sent: Friday, August 30, 2013  5:05 PM

> I would like to propose a new AI on the subject of "nonfatal errors".

Certainly you can propose it. I'm against it in its current form, but the fixes are simple. I previously commented on "fatal".

...
> Example from the "aggregates and variant parts" discussion:
> It was suggested that something like this:
>
>     type Color is (Red, Orange, Yellow);
>
>     type T(D: Color) is
>         record
>             case D is
>                 when Red | Orange =>
>                     X : Integer;
>                     case D is
>                         when Red =>
>                             Y : Integer;
>                         when Orange =>
>                             null;
>                         when others =>
>                             Bogus : Integer; -- Wrong!
>                     end case;
>                 when Yellow =>
>                     null;
>             end case;
>         end record;
>
> is an error, because the Bogus component can never exist.
> One should write "when others => null; -- can't happen".

This is a terrible idea, irrespective of the compatibility issue. That's because
the definition and implementation of such a rule would be fairly complex, and it
fixes nothing (the real problem is that the others clause is virtually required
because values that can't happen must be covered in the variant). Solving a
symptom rather than the actual problem is a terrible use of resources.

I think you need to make a much more compelling example in order to make this
idea worth even having. In the past when the idea was suggested, we essentially
determined that the problem really wasn't worth fixing (as in the above, or as
in the value always out of range problem). The only errors that the language
should be mandating are those that are virtually always an error; under no
circumstances should programmers be "suppressing" language-defined errors (of
any kind). They should either fix them, or simply live with the
"executable-error" error. Warnings are a different kettle of fish in that way, I
think.

If you had suggested making the trivial fix to the underlying problem, I think
you would have had a stronger case.

That is, in the above, the coverage of the inner variant should be as if the
nominal subtype of the discriminant has a static predicate that matches the
immediately enclosing variant alternative. That would be an "executable error"
(what you called a "non-fatal error"), if the coverage is OK for the nominal
subtype of the discriminant and a "non-executable error" (what you misdiscribed
as a "fatal error") otherwise.

That would make sense, as it would eliminate the compatibility problem by
allowing compilation if the coverage is as it used to be, but would strongly
encourage doing the right thing.

> In any case, it should be up to programmers to decide whether fixing
> nonfatal errors is cost-effective.  That is not our job as language
> designers.

I agree, but again this is a matter of description. If you call these "errors",
then the intent is that they really reflect something wrong. (Warnings are not
like this, they might reflect something dubious that could be OK.) As such, the
language shouldn't be encouraging them to be left in code. The reason for
leaving them in code is to be able to use existing code that cannot be
practically changed, not to allow sloppy programming.

That has a very significant impact on what can be categorized this way. We must
not categorize anything that might be legitimate usage as a "non-fatal error"
(or whatever the term might be). For instance, calling a static expression that
will always be outside of its static subtype an "error" of any kind is a very
bad idea. (These are very common in dead code, as settings of parameters often
lead to situations where values are outside of unused null ranges, expressions
are divided by zero, and the like.)

That also suggests that the suggestion of changing unrecognized pragmas to a
"non-fatal error" must be opposed. That is a capability that is commonly used in
portable code. Claw, for instance, contains many Gnat-specific pragmas that are
just harmlessly ignored in other compilers. To claim that this is somehow an
"error" would be a major disconnect with reality, IMHO.

Basic conclusion here: terminology matters, and in this case, it is pretty much
the only thing that matters. The actual language rules are far less important
than the impression given by the terminology, because most programmers will only
know the terminology, not the language rules.

****************************************************************

From: Randy Brukardt
Sent: Friday, August 30, 2013  5:18 PM

...
> But let's try to ignore the term for now, and concentrate on my goal:
> to get ARG to quit introducing gratuitous incompatibilities.

I just finished writing a message that essentially concludes that the *only*
thing important here is the terminology. We need to decide on that before we can
even begin to understand what might fall into that category. For instance, an
unrecognized pragma clearly is a "warning" (it is something that makes perfect
sense to ignore), while a "soft error" is still an error - you only ignore it in
frozen code (or code primarily maintained for some previous version of Ada) and
fix it in all other cases.

> That is, to give ARG an "out" -- a way to say, "we really think this
> ought to be illegal, but if you have 10 million lines of code
> scattered across 17 organizations[*] you don't absolutely have to fix
> these errors
> -- your call, you can choose to ignore these errors and still run your
> programs".
>
> [*] I gather that that was the situation reported by Robert, with some
> company that used Interface as the name of lots of child packages.

I'm sympathetic with the goal, but I'm dubious that there are any such
situations. The bad problems (like composition of untagged records) would not be
helped by this (the incompatibility is mostly at runtime, and the compile-time
incompatibilities are necessary to have any sensible semantics for composition).
That's pretty common; many incompatibilities are caused by semantic necessities.
The trivial problems (such as the recent nested variant problem) might be helped,
but it's unclear that they're worth fixing in the first place if there is any
sniff of a compatibility issue. (You, for instance, claimed that that one was
not.)

We've discussed this in the context of other AIs, and yet I cannot recall any
situation where this would have ultimately helped. (And its existence might even
prevent us from finding a better solution that doesn't have any incompatibility,
because we might quit looking earlier. Not that that possibility would factor
into my vote much.)

I think this is much like Tucker's pragma Feature -- an idea that sounds good on
the surface, but never actually would get used in practice. (Although maybe
pragma Feature would have gotten used had Tucker actually made a concrete
proposal as to what "features" it encompassed.) And I expect it to end up in the
same place -- the "No Action" pile. Feel free to prove me wrong.

****************************************************************

From: Bob Duff
Sent: Friday, August 30, 2013  6:14 PM

> Basic conclusion here: terminology matters, and in this case, it is
> pretty much the only thing that matters. The actual language rules are
> far less important than the impression given by the terminology,
> because most programmers will only know the terminology, not the language
> rules.

Yeah, except that we don't really have any control over what terms the user
sees.  That is, we don't define what diagnostic messages look like.

A compiler could say "missing semicolon", or "Syntax Error: missing semicolon",
or "Minor Warning, no big deal: missing semicolon", and all those are conforming
implementations, so long as the implementation doesn't allow programs with
missing semicolons to run.

Yes, the terms are important, but we don't control them in practice.

Users don't read the RM, they read the diagnostic messages.
(I hope "diagnostic message" is a neutral term I can use that doesn't indicate
whether it's an "error" or "likely error" or "possible error" or whatever.)

****************************************************************

From: Bob Duff
Sent: Friday, August 30, 2013  6:30 PM

> I'm sympathetic with the goal, but I'm dubious that there are any such
> situations.

I've mentioned half-a-dozen or so during the last few months, as they came up.
Cases where one person says, "Yeah, but that would be INCOMPATIBLE!", and the
other person says, "Yeah, but that is just WRONG!".  I'm trying to defuse that
sort of conflict.

All I ask is that we keep an open mind to the idea that we CAN require detection
of errors at compile time, while STILL requiring that the implementation run the
program.  And don't reject that idea based on pedantic concerns about the formal
definition of "detect" and "give a diagnostic message" and "error vs. warning"
and so on.

> ...the "No Action" pile. Feel free to prove me wrong.

To prove you wrong, I could go through all the (compile time) incompatibilities
introduced in 95, 2005, 2012, and analyze them. I'll bet there are dozens of
cases.  I'm not sure I have the time to do that.

One that comes to mind right now: the new rules about 'in out' parameters being
mutually conflicting or some such.  I don't understand those rules, but I think
we found a bunch of incompatibilities in the test suite.

****************************************************************

From: Randy Brukardt
Sent: Friday, August 30, 2013  6:40 PM

> > Basic conclusion here: terminology matters, and in this case, it is
> > pretty much the only thing that matters. The actual language rules
> > are far less important than the impression given by the terminology,
> > because most programmers will only know the terminology, not the
> > language rules.
>
> Yeah, except that we don't really have any control over what terms the
> user sees.  That is, we don't define what diagnostic messages look
> like.

True, but compiler vendors try to stay fairly close to the RM terminology.
In most cases where we didn't do that, we came to regret it.

> A compiler could say "missing semicolon", or "Syntax Error:
> missing semicolon", or "Minor Warning, no big deal: missing
> semicolon", and all those are conforming implementations, so long as
> the implementation doesn't allow programs with missing semicolons to
> run.

Or "*SYNTAX ERROR* Missing semicolon" :-)

My problem with "fatal error" is that we have lots of messages with that in
it:

"*FATAL ERROR* Missing source"

I don't want to get the RM and our messages that far out of sync.

> Yes, the terms are important, but we don't control them in practice.
>
> Users don't read the RM, they read the diagnostic messages.
> (I hope "diagnostic message" is a neutral term I can use that doesn't
> indicate whether it's an "error" or "likely error"
> or "possible error" or whatever.)

(Yes it's neutral enough.)

I think my point is that the difference between (using your original terms) a
"fatal error", a "non-fatal error", and a "warning" is fuzzy enough that vendors
will want to stay quite close to the RM terminology. That's especially true in
that 3rd party documents (web sites, books, etc.) that explain these differences
are usually going to stick very close to the RM terminology. So as a practical
matter, I think that vendors *could* stray a long way from the RM terminology,
but there are lots of powerful reasons for not doing so. Maybe AdaCore could get
away with it, but few other vendors can.

And the terminology matters a huge amount here: no one should be ignoring errors
except in exceptional circumstances whereas warnings are ignorable with
justification (examples given in previous messages). For Janus/Ada, I've been
thinking about separating some "warnings" into "informations", as it's hard to
tell in Janus/Ada whether a warning should really be addressed or whether its
information about something that might be important to know but often is
irrelevant. Even though there is no practical difference, the difference in
terminology would clarify things. The same would be true in the RM.

****************************************************************

From: Randy Brukardt
Sent: Friday, August 30, 2013  6:53 PM

...
> One that comes to mind right now: the new rules about 'in out'
> parameters being mutually conflicting or some such.  I don't
> understand those rules, but I think we found a bunch of
> incompatibilities in the test suite.

We knew that there were incompatibilities there; those represent very dubious
code that should never have been written. The real question is whether we were
wrong in that judgment, but presuming that we weren't, it's better off this way.
We've always tolerated incompatibilities that find real bugs.

The incompatibilities that we've introduced into the language to date were
considered acceptable (for whatever reason), and they would not be relevant to
your proposed feature. I see no scenario where we could avoid *all*
incompatibilities by having a feature like this. It could do nothing for runtime
incompatibilities, nor can it help if an incompatibility is necessary to have
the language make semantic sense (many Binding Interpretations are in this
category, as are the added legality rules for untagged record composition). So
having a minor incompatibility for a high-value change does not bother me at
all, and indeed I could easily imagine being against classifying one of these as
"soft errors" or whatever we decide to call it, believing that requiring
correction is important.

What would be relevant is cases where we decided not to fix the problem at all
(the nested variant issue most likely will be such a case) or where we adopted a
sub-optimal solution because of compatibility concerns. I don't know of any
practical way to find those in the past. (Re-reading all of the AIs does not
count as "practical".) When I said, "feel free to prove me wrong", I really
meant going forward. (We won't be seriously considering Amendment AIs for years
to come, so we can see if there are any compelling examples in the intervening
years.) There won't be an answer to that challenge until 2018 at the earliest!

****************************************************************

From: Robert Dewar
Sent: Friday, August 30, 2013  9:38 PM

>> A compiler could say "missing semicolon", or "Syntax Error:
>> missing semicolon", or "Minor Warning, no big deal: missing
>> semicolon", and all those are conforming implementations, so long as
>> the implementation doesn't allow programs with missing semicolons to
>> run.

More accurately "as long as the implementation has a mode in which it does not
allow programs with missing semicolons to run".

...
> I think my point is that the difference between (using your original
> terms) a "fatal error", a "non-fatal error", and a "warning" is fuzzy
> enough that vendors will want to stay quite close to the RM
> terminology. That's especially true in that 3rd party documents (web
> sites, books, etc.) that explain these differences are usually going
> to stick very close to the RM terminology. So as a practical matter, I
> think that vendors *could* stray a long way from the RM terminology,
> but there are lots of powerful reasons for not doing so. Maybe AdaCore
> could get away with it, but few other vendors can.

We avoid RM terminology where it is confusing. For instance we say package spec
instead of package declaration, because that's what most programmers say. And we
would not use "package" in a message expecting a programmer to know that a
generic package is not a package. There are lots of obscure terms in the RM
better avoided in error messages (most programmers these days don't read the RM
much!)

> And the terminology matters a huge amount here: no one should be
> ignoring errors except in exceptional circumstances whereas warnings
> are ignorable with justification (examples given in previous
> messages). For Janus/Ada, I've been thinking about separating some
> "warnings" into "informations", as it's hard to tell in Janus/Ada
> whether a warning should really be addressed or whether its
> information about something that might be important to know but often
> is irrelevant. Even though there is no practical difference, the
> difference in terminology would clarify things. The same would be true in the
> RM.

GNAT distinguishes between "info" messages and "warning" messages.

****************************************************************

From: Arnaud Charlet
Sent: Saturday, August 31, 2013  4:47 AM

> > One that comes to mind right now: the new rules about 'in out'
> > parameters being mutually conflicting or some such.  I don't
> > understand those rules, but I think we found a bunch of
> > incompatibilities in the test suite.
>
> We knew that there were incompatibilities there; those represent very
> dubious code that should never have been written. The real question is
> whether we were wrong in that judgment, but presuming that we weren't,
> it's better off this way. We've always tolerated incompatibilities
> that find real bugs.

As shown by customer code and by many ACATS tests (you have received a bunch of
ACATS petitions for Ada 2012 from us about this), we were pretty wrong: people
use a common idiom when they simply want to ignore the out parameters, using a
single variable, e.g:

  Proc1 (Input, Ignore_Out, Ignore_Out);

is *very* common and changing all that code is a real pain for users.

Bob is right, this rule is a good example where a "soft" error would have been
more useful than a "hard" error.

I personally find "hard error" and "soft error" good names FWIW.

****************************************************************

From: Bob Duff
Sent: Saturday, August 31, 2013  9:03 AM

> As shown by customer code and by many ACATS tests (you have received a
> bunch of ACATS petitions for Ada 2012 from us about this), we were pretty wrong:
> people use a common idiom when they simply want to ignore the out
> parameters, using a single variable, e.g:
>
>   Proc1 (Input, Ignore_Out, Ignore_Out);
>
> is *very* common and changing all that code is a real pain for users.

And that code is completely harmless!

> Bob is right, this rule is a good example where a "soft" error would
> have been more useful than a "hard" error.

So let's go back and make some of these 2005/2012 incompatibilities into soft
errors.  It's not too late.  But ARG should consider that high priority -- the
rest of its work can wait several years.

If ARG doesn't do that, I think perhaps AdaCore should have a nonstandard mode
that does it.

> I personally find "hard error" and "soft error" good names FWIW.

Yes, I like it, too.  Or instead of "error", talk about "legality":
In each case, we can say something like:

    Blah blah shall not blah blah.  This rule is a soft legality rule.

And put something in the "classification of errors" section in chap 1 making an
exception for soft legality rules.  The rule in 10.2 also needs work.

I suggest:  All legality rules require a diagnostic message. (No, I can't
formally define that -- so what?) An implementation must have two modes: one in
which soft errors prevent the program from running, and one in which they do
not.

****************************************************************

From: Randy Brukardt
Sent: Sunday, September 1, 2013  5:52 PM

> > As shown by customer code and by many ACATS tests (you have received
> > a bunch of ACATS petitions for Ada 2012 from us about this), we were
pretty wrong:
> > people use a common idiom when they simply want to ignore the out
> > parameters, using a single variable, e.g:
> >
> >   Proc1 (Input, Ignore_Out, Ignore_Out);
> >
> > is *very* common and changing all that code is a real pain for users.
>
> And that code is completely harmless!

I can believe that this happens, but I find it hard to believe that it is "very
common". (Ignoring ACATS tests; ACATS tests often don't reflect the way Ada code
is really used, so I don't much care about incompatibilities that show up in
them.)

Code like the above requires three unlikely things to occur:

(1) Ignoring of one or more parameters is not dangerous at the call site. Most
    "in out" and "out" parameters can't be unconditionally ignored. They might
    have circumstances where they aren't meaningful, but those are usually tied
    to the values of other "out" parameters. So unconditionally ignoring a
    parameter means making an assumption about the value of another parameter,
    which is always a bad idea. (Ignoring error codes on return from a routine is the
    most common example of the danger of doing this.)

(2) The specification of the routine is designed such that it is necessary to
    ignore parameters. One hopes that Ada routines don't have unused parameters
    and the like; Ada has default parameters and overloading which can easily be
    used to reduce the occurrences of such subprograms to rare usages.

(3) For the above to occur, you have to have two or more "out" parameters of the
    same type. If you're using strong typing, this is pretty unlikely. I cannot
    think of any case where this has ever happened in my code, as out parameters
    are most often used for returning multiple entities together from something
    that would otherwise be a function. Those entities are almost always of
    different types.

Perhaps there are cases not involving ignoring of results that are also involved
here, if this is indeed very common.

In any case, if there truly are a lot of cases where this check is in fact
rejecting legitimate code, then I think that it should be removed altogether.
The idea behind a "soft error" is that it reflects something wrong that doesn't
have to be fixed immediately. It is not a case where the "error" should be ignored
forever (unless of course it is impossible to change the source code).

In this particular case, the reason for the rule applying to procedures was
simply that it didn't make sense to say that you can't do this for functions if
you could do it for procedures. If that's not true, then it probably shouldn't
apply to anything.

> > Bob is right, this rule is a good example where a "soft" error would
> > have been more useful than a "hard" error.
>
> So let's go back and make some of these 2005/2012 incompatibilities
> into soft errors.  It's not too late.  But ARG should consider that
> high priority -- the rest of its work can wait several years.

This seems like a complete waste of time. It only makes sense for "soft errors"
to be those where the semantics are well-defined if the error is not required to
be detected. There are very few such errors in Ada. Moreover, it would take an
immense amount of analysis to differentiate errors that exist for semantic
reasons (like the untagged record equality Legality Rules) and those that could
be "soft errors". Getting it wrong would be very bad, as we would have programs
with undefined semantics executing. (We certainly would have to have tests
containing "soft errors" in the ACATS, and that seems unpleasant.)

I've said it before: I think this "soft error" idea seems appealing at
first, but I don't think there are many circumstances where it actually could be
applied. In most such cases, the check itself is dubious and quite likely we
don't really want or need it; how it's reported is not the real issue.

I think it is fine to keep this idea in our back pocket in case we find a
situation where it would allow making a change that otherwise would be
impossible. But I don't see any reason to try to go back and revisit the last 8
years of Ada development to try to retrofit this idea. That sounds like
rehashing every argument we've ever had about the direction of Ada.

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, September  2, 2013  3:55 AM

> I personally find "hard error" and "soft error" good names FWIW.

I'd prefer "major" and "minor" errors, FWIW...

As for the rest, I also think there is some value in the idea, but that it looks
like another of these brilliant solutions looking desperately for a problem to
solve...

****************************************************************

From: Bob Duff
Sent: Monday, September  2, 2013  8:32 AM

> I'd prefer "major" and "minor" errors, FWIW...

I don't think that gives the right impression.  There is no implication that
soft errors are more "minor".  Programmers should take soft error messages
seriously.

At least some soft errors will be errors that we would make into normal
(hard) legality rules, except for the compatibility concern.

Only the programmer can decide how "minor" the soft errors are, and whether to
fix them.  Randy doesn't want to make "unrecognized pragma" into a soft error,
and I don't want to fight about that, but the same point applies to warnings: If
you're porting from GNAT to some other compiler, and you see "unrecognized
pragma Abort_Defer", that's probably a very serious error that must be fixed.
On the other hand, if you see "unrecognized pragma Check", that's no big deal --
the program will work fine while ignoring pragmas Check.

> As for the rest, I also think there is some value in the idea, but
> that it looks like another of these brilliant solutions looking
> desperately for a problem to solve...

The problem is that ARG keeps introducing incompatibilities in every language
version.  For many people that's no big deal, but for others, it either costs a
lot of money, or prevents them from upgrading to the new language.

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, September  2, 2013  9:30 AM

> Randy doesn't want to make "unrecognized pragma" into a soft error

If we want to discuss whether unrecognized pragmas are an error, let me mention
transformation tools that use pragmas to indicate places where a transformation
is needed, or other special elements to consider. Using pragmas to that effect
has the benefit that it is very convenient for an ASIS-based transformation
tool, and looks good from the programmer's point of view.

One such tool is Morpheus from Adalabs
(http://www.adalabs.com/products-morpheus.html).
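A hedged sketch of the idiom (the pragma name and argument are invented; only the transformation tool recognizes the pragma, so every compiler reports it as unrecognized and then ignores it):

   pragma Transform_Here (Target => "Legacy_IO");
   --  meaningful only to the ASIS-based transformation tool;
   --  compilers warn about the unrecognized name and ignore it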

****************************************************************

From: Bob Duff
Sent: Monday, September  2, 2013  10:00 AM

> > Randy doesn't want to make "unrecognized pragma" into a soft error
> If we want to discuss whether unrecognized pragmas are an error,

My point was that we do NOT want to discuss that -- some are error, some are
not.  It's the programmer's call.

>...let me
> mention transformation tools that use pragmas to indicate places where
>a  transformation is needed, or other special elements to consider.
>Using  pragmas to that effect has the benefit that it is very
>convenient for an  ASIS-based transformation tool, and looks good from
>the programmer's  point of view.

Right, good example.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, September  3, 2013  1:39 AM

[I'm leaving on vacation tomorrow, so I won't be able to participate in this
discussion going forward. Thus a "final" summary from me. Don't decide anything
stupid while I'm gone. :-)]

> > > I personally find "hard error" and "soft error" good names FWIW.

> > I'd prefer "major" and "minor" errors, FWIW...
>
> I don't think that gives the right impression.  There is no
> implication that soft errors are more "minor".  Programmers should
> take soft error messages seriously.
>
> At least some soft errors will be errors that we would make into
> normal
> (hard) legality rules, except for the compatibility concern.
>
> Only the programmer can decide how "minor" the soft errors are, and
> whether to fix them.  Randy doesn't want to make "unrecognized pragma"
> into a soft error, and I don't want to fight about that,

I'm unsure, actually. The real point is that it's not clear how valuable this
is.

...
> > As for the rest, I also think there is some value in the idea, but
> > that it looks like another of these brilliant solutions looking
> > desperately for a problem to solve...
>
> The problem is that ARG keeps introducing incompatibilities in every
> language version.  For many people that's no big deal, but for others,
> it either costs a lot of money, or prevents them from upgrading to the
> new language.

Yes, but this idea is unlikely to have any effect on that. Its greatest value
is in other areas that we traditionally have stayed away from.

The main problem (as I've said before) is that for "hard" errors, the program
cannot be executed. Thus, we don't have to define any semantics for such
execution. For "soft" errors, however, we *do* have to define semantics for
execution, as the program can be executed (at least in one language-defined
mode).

Among other things, this means that soft errors would require a new kind of
ACATS test, which would combine the features of a B-Test and a C-Test -- both
messages would need to be output *and* the execution would have to finish
properly. That's a substantial complication and cost for the ACATS. (I happen to
think that a similar cognitive complication would also exist for *users* of Ada,
but that's not so clear-cut. I also note that this idea bears a lot of
resemblance to the whole argument about unreserved keywords -- which also went
nowhere.)

Anyway, this fact makes "soft" errors most useful for methodological
restrictions as opposed to semantic restrictions. The problem is that Ada
doesn't have many methodological restrictions.

Just a quick look at some common kinds of incompatibilities in Ada 2012:

(1) Adding new entities to a language-defined package (examples: A.4.5(88.e/3),
    A.18.2(264.c/3), D.14(29.c/3)). Soft errors would not be helpful for these
    incompatibilities (redoing resolution rules in order to avoid the
    incompatibility would be nasty and bizarre).

(2) Changing the profile of a language-defined subprogram (I didn't remember an
    Ada 2012 example off-hand). Even with careful use of default parameters,
    these have incompatibilities with renames and 'Access uses (as the profile
    is different). Again, I don't think soft errors would be of any value, as
    defining multiple profiles would be a massive complication in the language.

(3) Incompatibilities required by semantic consistency. (examples:
    4.5.2(39.k/3)) These are cases where we could not make a sensible definition
    of the language without the incompatibility. I don't see how soft errors
    would help such cases, as the semantics would need to be well-defined in
    order to have a soft error.

(4) Nonsense semantics in previous standards. [This is pretty similar to the
    above, but it's not caused by a language change.] (Examples: 10.2.1(28.l/3),
    12.7(25.e/3), B.3.3(32.b/3)). Soft errors would not help here, as it
    wouldn't make sense to define the nonsense semantics formally.

(5) Runtime inconsistencies. Obviously, soft errors will not help in any way
    with these.

Certainly there are cases where soft errors could help. (I didn't do any sort of
formal survey.) 6.4.1(6.16/3) is really a methodological restriction, and one
could make it a soft error unless the call is to a function (that can't be
incompatible). I'd like to see more compelling examples than the one Arnaud
posted before doing that (or eliminating the check altogether), but that's a
separate discussion.

The problem with incompatibilities caused by methodological restrictions is that
they're easily avoided by not having the restriction. We don't need soft errors
to do that!

I think the most valuable use of soft errors would be in properly restricting
the contents of assertions, which we decided not to do because we couldn't find
a rule that wasn't too restrictive. That would be less of a problem with soft
errors, as there would always be the option to ignore the error and do the
dubious thing anyway. Similarly, the question of invariants of types with
visible components could be dealt with using soft errors (so that the cases of
generics would not have to be rejected).

So, I think the majority of the value of soft errors would be found going
forward, and it's unlikely to be much help for compatibility issues (except
those we didn't have to introduce, which is a whole 'nuther discussion). We'd
need some cases where they clearly allowed something that we can't currently do.

So I rather agree with J-P:

> > As for the rest, I also think there is some value in the idea, but
> > that it looks like
> > another of these brilliant solutions looking desperately for a
> > problem to solve...

Exactly.

****************************************************************

From: Bob Duff
Sent: Tuesday, September  3, 2013  9:02 AM

The subject matter of this AI is incompatibilities -- in particular, a mechanism
to reduce the need/desire for them.  (And I started the thread, so I get to
define what it's about. ;-))  Below, you point out some cases where soft errors
could help, but brush those aside with "that's a separate discussion" and "whole
'nuther discussion". No, that's THIS discussion.  If we can come up with a few
cases where soft errors are a good idea, then they're a good idea.

I feel like the form of your argument is analogous to this: "Driving a car is
perfectly safe.  Of course, some people are killed driving cars, but that's a
separate discussion." Heh?  ;-)

Anyway, I include both existing incompatibilities (which we should consider
repealing) and future ones where we're tempted, in this discussion.

> The main problem (as I've said before) is that for "hard" errors, the
> program cannot be executed. Thus, we don't have to define any
> semantics for such execution. For "soft" errors, however, we *do* have
> to define semantics for execution, as the program can be executed (at
> least in one language-defined mode).

Yes, we all agree that the run-time semantics has to be well defined in the
presence of soft errors.  That's the case for Arno's example -- we already have
wording that defines the semantics of param passing.

> Among other things, this means that soft errors would require a new
> kind of ACATS test, which would combine the features of a B-Test and a
> C-Test --

I can't get excited about that.

> both messages would need to be output *and* the execution would have
> to finish properly. That's a substantial complication and cost for the ACATS.
> (I happen to think that a similar cognitive complication would also
> exist for *users* of Ada, but that's not so clear-cut. I also note
> that this idea bears a lot of resemblance to the whole argument about
> unreserved keywords
> -- which also went nowhere.)

Those are a perfect example of a soft error.  It went nowhere, I assume, because
people were uncomfortable with the fact that you could do confusing things (e.g.
"type Interface is interface...") with the compiler remaining silent.  With my
proposal, you would get an error message.

> Anyway, this fact makes "soft" errors most useful for methodological
> restrictions as opposed to semantic restrictions. The problem is that
> Ada doesn't have many methodological restrictions.
>
> Just a quick look at some common kinds of incompatibilities in Ada 2012:
>
> (1) Adding new entities to a language-defined package (examples:
> A.4.5(88.e/3), A.18.2(264.c/3), D.14(29.c/3)). Soft errors would not
> be helpful for these incompatibilities (redoing resolution rules in
> order to avoid the incompatibility would be nasty and bizarre).
>
> (2) Changing the profile of a language-defined subprogram (I didn't
> remember an Ada 2012 example off-hand). Even with careful use of
> default parameters, these have incompatibilities with renames and
> 'Access uses (as the profile is different). Again, I don't think soft
> errors would be of any value, as defining multiple profiles would be a massive
> complication in the language.
>
> (3) Incompatibilities required by semantic consistency. (examples:
> 4.5.2(39.k/3)) These are cases where we could not make a sensible
> definition of the language without the incompatibility. I don't see
> how soft errors would help such cases, as the semantics would need to
> be well-defined in order to have a soft error.
>
> (4) Nonsense semantics in previous standards. [This is pretty similar
> to the above, but it's not caused by a language change.] (Examples:
> 10.2.1(28.l/3), 12.7(25.e/3), B.3.3(32.b/3)). Soft errors would not
> help here, as it wouldn't make sense to define the nonsense semantics formally.
>
> (5) Runtime inconsistencies. Obviously, soft errors will not help in
> any way with these.

I agree with you about 1,2,4,5.  I think I disagree on 3 -- run-time semantics
is well defined, albeit potentially confusing.

>...I'd like to see more compelling
> examples than the one Arnaud posted before doing that

What?!  What on earth could be more compelling than examples of real code that
ran perfectly fine in Ada 2005, and is now broken in Ada 2012?

>... (or eliminating the
> check altogether), but that's a separate discussion.
>
> The problem with incompatibilities caused by methodological
> restrictions is that they're easily avoided by not having the
> restriction. We don't need soft errors to do that!

Apparently, we do.  Tucker was quite insistent on the new 'out' param rules, and
refused to go along with 'out'-allowed-on-functions without it.  Hence an
incompatibility (affecting real code!) that could have been avoided by soft
errors.

> I think the most valuable use of soft errors would be in properly
> restricting the contents of assertions, which we decided not to do
> because we couldn't find a rule that wasn't too restrictive. That
> would be less of a problem with soft errors, as there would always be
> the option to ignore the error and do the dubious thing anyway.
> Similarly, the question of invariants of types with visible components
> could be dealt with using soft errors (so that the cases of generics would not
> have to be rejected).

Yes, I agree -- with soft errors (or required warnings), we can freely impose
far more stringent requirements, and that's a good thing.

P.S. Have a good vacation.  Don't do anything I would do.

****************************************************************

From: Tucker Taft
Sent: Tuesday, September  3, 2013  9:46 AM

> Apparently, we do.  Tucker was quite insistent on the new 'out' param
> rules, and refused to go along with 'out'-allowed-on-functions without
> it.  Hence an incompatibility (affecting real code!) that could have
> been avoided by soft errors.

I'd like to provide a little more background on the "OUT" param rule.
It actually wasn't my idea.  I was mostly focused on worrying about order of
evaluation and how it affected having (in) out parameters of functions.  The
idea of including a check on the case of multiple OUT parameters came from
someone else, as far as I know.

Furthermore, at least some of us *were* sensitive to the incompatibility, and Ed
Schonberg did an experiment to determine whether there seemed to be any issue
with this case.  Here is his comment about it, and Robert Dewar's response to
that:

Ed> a) I implemented the check on multiple in-out parameters in a
Ed> procedure, where the actuals of an elementary type overlap.  In the
Ed> 15,000 tests in our test suite I found two occurrences of  P (X, X)
Ed> or near equivalent.  One of them (in Matt Heaney's code!) appears
Ed> harmless. The other one is in a program full of other errors, so
Ed> unimportant. So application of this rule should not break anything.

Robert> I guess I can tolerate this rule, of course Ed's experiment also
Robert> shows that it is almost certainly useless, so just another case
Robert> of forcing compiler writers to waste time on nonsense.

So I don't think the ARG was being irresponsible here.  It turns out after all
that there are some uses of an "Ignore_Out_Param" variable multiple times in the
same call.
I realize that any incompatibility is potentially painful, but at least in this
case we did attempt to check whether it was a real problem, or only a
theoretical one.  We missed an interesting category, but we didn't act
irresponsibly in my view.
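
(For reference, a minimal sketch of the pattern the experiment searched for --
the same elementary actual passed to more than one writable formal; the names
are invented:)

   procedure P (A, B : in out Integer);

   X : Integer := 1;
   ...
   P (X, X);   --  overlapping writable actuals: illegal under
               --  6.4.1(16.6/3), legal (though dependent on the
               --  copy-back order) in Ada 2005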

****************************************************************

From: Jeff Cousins
Sent: Tuesday, September  3, 2013 10:29 AM

For the purposes of testing this on a larger sample of code, does anyone know
whether the latest (v7.1.2) GNAT compiler actually does this checking? It only
seems to do it if -gnatw.i ("Activate warnings on overlapping actuals") is used,
which isn't included in the -gnatwa ("activate (almost) all warnings") set, and
even then it only gives a warning, not an error.

****************************************************************

From: Ed Schonberg
Sent: Tuesday, September  3, 2013  11:29 AM

In the current version of the compiler illegal overlaps are reported as errors.
The debugging switch -gnatd.E transforms the error back into a warning, but
the default is as per the RM.
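
(Roughly the two modes being described, assuming a plain gcc invocation; the
file name is invented:)

   $ gcc -c -gnat12 pack.adb            # overlap rejected as an error
   $ gcc -c -gnat12 -gnatd.E pack.adb   # same diagnostic as a warning;
                                        # the unit compiles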

****************************************************************

From: Gary Dismukes
Sent: Tuesday, September  3, 2013  11:48 AM

Try using -gnat2012 with 7.1.2.  As Ed mentions, current versions of GNAT
(wavefronts designated by version 7.2.0w and any later releases) do this by
default, because Ada 2012 is now the default.

****************************************************************

From: Jeff Cousins
Sent: Tuesday, September  3, 2013  12:11 PM

Sorry, I should have said: I'm using the -gnat12 switch (on 7.1.2).  -gnatw.i
appears to be a red herring, as it also reports overlaps between an in and an
out.

By "current" do I take it that you mean a wave-front, not the latest release
(7.1.2)?

****************************************************************

From: Gary Dismukes
Sent: Tuesday, September  3, 2013  12:50 PM

Right, in wavefront versions, not the latest release.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, September  3, 2013  12:04 PM

> The subject matter of this AI is incompatibilities -- in particular, a
> mechanism to reduce the need/desire for them.

The thread is about "nonfatal errors", a *specific* feature. Uses of it are
ancillary.

> (And I started the thread, so I get to define what it's about. ;-))

Then (channeling one Bob Duff), use an appropriate subject line. :-)

> Below, you point out some cases where soft errors could help, but
> brush those aside with "that's a separate discussion" and "whole
> 'nuther discussion".
> No, that's THIS discussion.  If we can come up with a few cases where
> soft errors are a good idea, then they're a good idea.

In the specific cases I mentioned, the question is whether there *is* a
significant compatibility error (which I doubt), and if so, whether that means
that there should be no error at all (hard, soft, or kaiser :-), or whether some
sort of error is still valuable. That's all very specific to a particular case,
and should be discussed separately under a thread about that particular rule
(6.4.1(16.6/3)). It has nothing to do with the general idea of soft errors.

...
> Anyway, I include both existing incompatibilities (which we should
> consider repealing) and future ones where we're tempted, in this
> discussion.

If we want to repeal some rule, we ought to discuss that (on a case-by-case
basis). It cannot be sensibly done in some general discussion. We ought to
include the possibility of *partially* repealing the rule using soft errors, as
one of the options under discussion. If in fact we find some case where soft
errors are useful, then we should add them to the language. But that doesn't
belong in a general discussion.

...
> > both messages would need to be output *and* the execution would have
> > to finish properly. That's a substantial complication and cost for
> > the ACATS.
> > (I happen to think that a similar cognitive complication would also
> > exist for *users* of Ada, but that's not so clear-cut. I also note
> > that this idea bears a lot of resemblance to the whole argument
> > about unreserved keywords -- which also went nowhere.)
>
> Those are a perfect example of a soft error.  It went nowhere, I
> assume, because people were uncomfortable with the fact that you could
> do confusing things (e.g. "type Interface is interface...") with the
> compiler remaining silent.  With my proposal, you would get an error
> message.

To me, it says that people are uncomfortable with the idea of conditional
language design.

...
> > (3) Incompatibilities required by semantic consistency. (examples:
> > 4.5.2(39.k/3)) These are cases where we could not make a sensible
> > definition of the language without the incompatibility. I don't see
> > how soft errors would help such cases, as the semantics would need
> > to be well-defined in order to have a soft error.
...
> I agree with you about 1,2,4,5.  I think I disagree on 3 -- run-time
> semantics is well defined, albeit potentially confusing.

It might be well-defined, but it's essentially unimplementable. I would be
strongly opposed to ever allowing such a program to execute. Besides, the
legality incompatibility is the minor one here; the runtime inconsistency is
many times more likely to cause problems.

> >...I'd like to see more compelling
> > examples than the one Arnaud posted before doing that
>
> What?!  What on earth could be more compelling than examples of real
> code that ran perfectly fine in Ada 2005, and is now broken in Ada 2012?

I don't believe that the example he gave occurred more than once (I'd be amazed
if it occurred at all, in fact, because it requires three separate bad design
decisions, as I outlined last week). Moreover, I have a hard time getting
excited about bugs caused in real code that should never have been written in
the first place. He claimed this is "very common", but his example is completely
unbelievable to me. I'd like to see *real*, believable examples where this is
causing a problem. (They probably would have to be far more complete in order to
be believable.) But as I've said before, this does not belong in this thread,
and I'm leaving soon anyway.

> >... (or eliminating the
> > check altogether), but that's a separate discussion.
> >
> > The problem with incompatibilities caused by methodological
> > restrictions is that they're easily avoided by not having the
> > restriction. We don't need soft errors to do that!
>
> Apparently, we do.  Tucker was quite insistent on the new 'out' param
> rules, and refused to go along with 'out'-allowed-on-functions without
> it.  Hence an incompatibility (affecting real code!) that could have
> been avoided by soft errors.

Tucker was insistent on 'out' parameter rules *for functions*!! I thought it was
weird to only have such rules on functions, so I extended them to all calls when
I wrote up a specific proposal. We attempted to check if that was a problem (see
Tucker's response), and the answer was 'no'. So we left the more restrictive
rule. But we could just as easily have met the original goal by only having
6.4.1(16.6/3) apply in function calls. And no 'soft errors' are needed to do
that. (Again, this should be a separate discussion.) There was no language need
for the incompatibility; it just seemed more consistent to have it and we
believed that it was harmless.

> P.S. Have a good vacation.  Don't do anything I would do.

I'm probably going to spend the first day virtually arguing with you.
Wonderful. :-(

I'm here now because I forgot to reprogram my GPS yesterday (it only holds about
1/3rd of the maps of the US, so I have to reprogram it any time I'm going to go
a long ways). That takes several hours, so I still have time to argue with you.
:-)

****************************************************************

From: Bob Duff
Sent: Tuesday, September  3, 2013  12:45 PM

> Furthermore, at least some of us *were* sensitive to the
> incompatibility, and Ed Schonberg did an experiment to determine
> whether there seemed to be any issue with this case.  Here is his
> comment about it, and Robert Dewar's response to that:
>
> Ed> a) I implemented the check on multiple in-out parameters in a
> Ed> procedure, where the actuals of an elementary type overlap.  In
> Ed> the 15,000 tests in our test suite I found two occurrences of  P
> Ed> (X, X) or near equivalent.  One of them (in Matt Heaney's code!)
> Ed> appears harmless. The other one is in a program full of other
> Ed> errors, so unimportant. So application of this rule should not break
> Ed> anything.

>
> Robert> I guess I can tolerate this rule, of course Ed's experiment
> Robert> also shows that it is almost certainly useless, so just
> Robert> another case of forcing compiler writers to waste time on
> Robert> nonsense.

Hmm.  I think what happened is that we implemented the rule incorrectly for that
experiment.  And then ACATS tests appeared, and we "beefed up" the rule, making
more things illegal.  And then more user code became illegal.  Then we added a
switch to turn the error into a warning. (So GNAT already treats this as a soft
error -- we have a mode in which the program can run, and another in which it
can't run, and we give a diagnostic message in both modes.)

To Ed Schonberg:  Is the above true?

> So I don't think the ARG was being irresponsible here.

I agree.  Sorry if I implied that we were being irresponsible AT THAT TIME.  I
think at the time ARG was thinking:

    1. This error is extremely unlikely to occur in real code.
    2. If it does occur, it's certainly a real bug.
    3. The only choices are "legal" and "illegal" (i.e. the idea
       of soft errors hadn't occurred to us).

Now, with 20/20 hindsight, I think we were mistaken:

    1. It DOES occur in real code.
    2. It's probably a real bug, but not in all cases.
       The case Arno showed is perfectly legitimate.  In fact,
       it's even deterministic, despite the nondeterminism implied
       by the run-time semantics.
    3. At least some of us are open to a middle ground ("soft errors").

>...  We missed an interesting category, but we didn't act
> irresponsibly in my view.

Right, but I think we made a mistake, and we should consider correcting it via
soft errors.  I also think that if we had had the concept of soft errors in
mind, we would/should have used it several times during Ada 2005 and 2012.

****************************************************************

From: Bob Duff
Sent: Tuesday, September  3, 2013  12:56 PM

> If we want to repeal some rule, we ought to discuss that (on a
> case-by-case basis).

OK, then let's postpone this general discussion.  If I have time, I'll inspect
the existing incompatibilities, and open separate threads about the ones I think
we maybe should repeal via soft errors (or maybe even repeal altogether).

>... It cannot be sensibly done in some general discussion.

Well, some folks are saying "soft error" is a useless concept, because there are
no cases where it should apply.  So I've been giving examples as part of the
general discussion.  I will now quit doing that, and hopefully open separate
threads for each such example.  But I don't want to hear anybody reply to those
threads with "Hey, there's no such thing as a soft error. There's only legal and
illegal, and that's the way it's always been and always should be.  Tradition!".

> > I agree with you about 1,2,4,5.  I think I disagree on 3 -- run-time
> > semantics is well defined, albeit potentially confusing.
>
> It might be well-defined, but it's essentially unimplementable.

OK, if that's true, then we can't use soft error there.

> I'm probably going to spend the first day virtually arguing with you.
> Wonderful. :-(

No need -- I promised above to quit the general discussion until I've opened
separate discussions of particular examples. And I probably won't get around to
that right away.

> I'm here now because I forgot to reprogram my GPS yesterday (it only
> holds about 1/3rd of the maps of the US, so I have to reprogram it any
> time I'm going to go a long ways). That takes several hours, so I
> still have time to argue with you. :-)

Call me a luddite, but I still use fold-out paper maps.

****************************************************************

From: Tucker Taft
Sent: Tuesday, September  3, 2013  1:00 PM

I am somewhat neutral on the "soft error" concept.  It does allow us to
introduce incompatibilities without "officially" doing so, but our attempts to
do that with "unreserved keywords" always ran into trouble with WG-9.  I suspect
they would be the stumbling block here again, though we could bring it up at the
next WG-9 meeting explicitly, before we waste a lot of time debating it in the
ARG.

I am probably more tolerant of certain incompatibilities than some folks, as it
seems that if you are upgrading to a new version of the language, you should
expect to do some work to get the benefit.  Of course the down side is if the
extra work is too much, then it becomes an entry barrier to upgrading.  And some
of our incompatibilities in the past have not had a good work-around (such as
the fixed-point multiplication/division problem we created in Ada 95 as part of
trying to provide better support for decimal fixed point).

Soft errors might at least "officially" reduce the entry barrier, but many
serious organizations consider warnings to be (hard) errors, and presumably
"soft errors" would also be considered "hard" errors by such organizations.

I do think the "soft error" concept is worth considering, and WG-9 is probably
the first place to discuss it.  We may have to have an agreed-upon criterion for
specifying a soft error rather than a hard error, and I wonder if soft errors
would be soft only for one revision cycle of the language, at which point they
would mutate to being hard...

****************************************************************

From: Arnaud Charlet
Sent: Tuesday, September  3, 2013  1:05 PM

> Soft errors might at least "officially" reduce the entry barrier, but
> many serious organizations consider warnings to be (hard) errors, and
> presumably "soft errors" would also be considered "hard"
> errors by such organizations.

Actually in our experience at AdaCore, most customers do tolerate warnings
(because they have too many of them to have a no-warnings hard rule) and do not
consider warnings as hard errors.

In other words, some of our customers are using -gnatwe, but many/the majority
do not.

****************************************************************

From: Bob Duff
Sent: Tuesday, September  3, 2013  1:35 PM

> I am somewhat neutral on the "soft error" concept.  It does allow us
> to introduce incompatibilities without "officially" doing so, but our
> attempts to do that with "unreserved keywords" always ran into trouble with WG-9.

Now THAT is totally irresponsible.

But soft errors seem different.  People can reasonably be uncomfortable with
compilers silently ignoring errors.  But soft errors are NOT silent.  And note
that my latest proposal requires two modes, one in which the program can run,
and one in which it can't.  It would take an unreasonable degree of stubbornness
to say "I like the no-run mode, so I don't want other people to have a yes-run
mode."  Especially since implementers can always implement any modes they like
in a NONstandard mode.  (And GNAT did so.)

>... I suspect they would be the stumbling block here again, though we
> could bring it up at the next WG-9 meeting explicitly, before we waste
> a lot of time debating it in the ARG.

Good idea.  But I think they'll want some convincing examples.

And they should be reminded that they don't actually have any control over what
implementers do.  Like it or not, AdaCore is going to do what AdaCore wants (in
nonstandard modes).

> I do think the "soft error" concept is worth considering, and WG-9 is
> probably the first place to discuss it.  We may have to have an
> agreed-upon criterion for specifying a soft error rather than a hard
> error, and I wonder if soft errors would be soft only for one revision
> cycle of the language, at which point they would mutate to being hard...

I don't think "mutate to being hard" is a good idea.  Look how slowly people
migrated from Ada 95 -- some still use it.

It's like obsolescent features.  We will never remove them entirely.

****************************************************************

From: Robert Dewar
Sent: Tuesday, September  3, 2013  2:06 PM

> I am probably more tolerant of certain incompatibilities than some
> folks, as it seems that if you are upgrading to a new version of the
> language, you should expect to do some work to get the benefit.  Of
> course the down side is if the extra work is too much, then it becomes
> an entry barrier to upgrading.  And some of our incompatibilities in
> the past have not had a good work-around (such as the fixed-point
> multiplication/division problem we created in Ada 95 as part of trying to
> provide better support for decimal fixed point).

The extra work is reasonable if the incompatibility is

a) really useful
b) unavoidable

Any other incompatibility comes in the gratuitous category. And you really can't
guess what will cause trouble and what will not. The multiplication stuff did
not even surface in the presentation on difficulties at Ada UK, whereas making
Interface reserved caused months of very difficult coordinating work for them.

> Soft errors might at least "officially" reduce the entry barrier, but
> many serious organizations consider warnings to be (hard) errors, and
> presumably "soft errors" would also be considered "hard" errors by such organizations.
>
> I do think the "soft error" concept is worth considering, and WG-9 is
> probably the first place to discuss it.  We may have to have an
> agreed-upon criterion for specifying a soft error rather than a hard
> error, and I wonder if soft errors would be soft only for one revision cycle of the language, at which point they would mutate to being hard...

I find it bizarre to introduce the completely unfamiliar term "soft error" when
we have a perfectly good term, "warning message", which we already use in the RM.

I also think this whole discussion is overblown; it would be just fine to have
implementation advice that advised issuing a warning in certain circumstances.

****************************************************************

From: Robert Dewar
Sent: Tuesday, September  3, 2013  2:13 PM

> I am somewhat neutral on the "soft error" concept.  It does allow us
> to introduce incompatibilities without "officially" doing so, but our
> attempts to do that with "unreserved keywords" always ran into trouble
> with WG-9.

That's just a lack of competent political lobbying IMO!

****************************************************************

From: Robert Dewar
Sent: Tuesday, September  3, 2013  2:13 PM

> I don't think "mutate to being hard" is a good idea.  Look how slowly
> people migrated from Ada 95 -- some still use it.

"mutate to hard" is a good idea ONLY if customers clamour for it, otherwise it
is just a sop to aesthetic concers of the designers.

What do you mean, "some still use it"? The GREAT majority of our users use Ada
95 (only in the very most recent version has the default changed to Ada 2012; we
never had Ada 2005 as a default, since too few people moved to Ada 2005 to have
made that a reasonable choice). Ada 2012 seems more worth the price of admission
for the contract stuff.

> It's like obsolescent features.  We will never remove them entirely.

We will never remove them at all; Annex J is completely normative.
It is no longer even possible to reject Annex J stuff under control of the
restriction pragma, after the unwise decision to dump all the pragmas there.
This is one requirement in Ada 2012 that GNAT just completely ignores, and will
continue to do so as far as I am concerned.

We have loads of customers using pragma Restrictions (No_Obsolescent_Features),
who use the existing pragmas extensively. It would be insanity in my view to
cause them trouble by adhering to the letter of the standard.

Again, the whole of Annex J is really all about the aesthetic concerns of the
designers overriding the reality of users. But as long as it is just meaningless
decoration in the RM it is harmless I suppose :-)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, September  3, 2013  3:14 PM

...
> We have loads of customers using pragma Restrictions
> (No_Obsolescent_Features), who use the existing pragmas extensively.
> It would be insanity in my view to cause them trouble by adhering to
> the letter of the standard.

FYI, the letter of the standard says that No_Obsolescent_Features does not have
to detect use of the pragmas. See 13.12.1(4/3). We've previously discussed this
(twice). So GNAT *is* following the letter of the standard, the only insanity is
claiming that it is not.

****************************************************************

From: Joyce Tokar
Sent: Tuesday, September  3, 2013  4:22 PM

Do you want to bring this up as a discussion topic at the next WG-9 Meeting?
Or leave it within the ARG to come forward with a proposal?

****************************************************************

From: Jeff Cousins
Sent: Wednesday, September  4, 2013  3:37 AM

First thoughts are that WG 9 could have a brief discussion to see what the
consensus is on whether it's worth investigating, and if so then ask the ARG to
come up with a proposal.

Personally my first choice would be that overlapping out/in out parameters stay
an error, and second choice that it (and any other potential "soft errors")
becomes implementation advice to issue a warning.

****************************************************************

From: Jeff Cousins
Sent: Wednesday, September  4, 2013  3:58 AM

Well, I've put several million lines through GNAT v7.1.2 using -gnat12
-gnatw.i, and it comes out at one warning per 25K lines of code. I suspect that
most are cases where an actual is used once as an in parameter and once as an
out, rather than the out/in out combinations that this discussion is about, but
I haven't time to check through them all. But anyway it's down at the level of
new errors for a new compiler release (due to better checking), never mind what
one might expect for a language revision.

I also think it's better in principle to keep it an error.  When the C++ camp
are fighting back, their main technical argument is "what about all the order
dependencies?".  The new rules help here.

Also, though a weaker argument, I don't think that programmers should easily
ignore out parameters, they are probably there for a reason, say a flag to
indicate whether another out parameter's value is valid, or whether the solution
to some complex algorithm has converged.  To ignore two out parameters is doubly
bad.

****************************************************************

From: Robert Dewar
Sent: Wednesday, September  4, 2013  6:59 AM

> Also, though a weaker argument, I don't think that programmers should
> easily ignore out parameters, they are probably there for a reason,
> say a flag to indicate whether another out parameter's value is valid,
> or whether the solution to some complex algorithm has converged.  To
> ignore two out parameters is doubly bad.

The regressions in our test suite (there were three tests affected, out of many
thousands) were all legitimate cases of deliberately ignoring out parameters,
using the same "discard" variable for two parameters; easy to fix of course.
But sometimes even the most trivial of source changes can be a problem.

****************************************************************

From: Bob Duff
Sent: Wednesday, September  4, 2013  7:45 AM

> if you are upgrading to a new version of the language, you should
> expect to do some work to get the benefit.

I agree with that.  That's a good argument in favor of tolerating some
incompatibilities.  But it's not a good argument in favor of _gratuitous_
incompatibilities.

If there's a technically acceptable way to avoid a particular incompatibility,
then we should do so.

> I do think the "soft error" concept is worth considering, and WG-9 is
> probably the first place to discuss it.

Are you going to the WG9 meeting?

I think examples are key.  If you just introduce the "soft error"
idea in the abstract, many people react with "Well, I can't think of any cases
where that would be useful, so what's the point?"

****************************************************************

From: Tucker Taft
Sent: Wednesday, September  4, 2013  7:54 AM

> I agree with that.  That's a good argument in favor of tolerating some
> incompatibilities.  But it's not a good argument in favor of
> _gratuitous_ incompatibilities.
>
> If there's a technically acceptable way to avoid a particular
> incompatibility, then we should do so.

I am not personally convinced that labeling something a "soft error" or a
"required warning" is avoiding an incompatibility.  But it does soften the blow
and thereby reduce the entry barrier to upgrading.

>> I do think the "soft error" concept is worth considering, and WG-9 is
>> probably the first place to discuss it.
>
> Are you going to the WG9 meeting?

Yes, I plan to be there.

> I think examples are key.  If you just introduce the "soft error"
> idea in the abstract, many people react with "Well, I can't think of
> any cases where that would be useful, so what's the point?"

Agreed, one good example is worth many thousands of words of impassioned
oratory.

****************************************************************

From: Bob Duff
Sent: Wednesday, September  4, 2013  7:57 AM

> Do you want to bring this up as a discussion topic at the next WG-9 Meeting?
> Or leave it within the ARG to come forward with a proposal?

I won't be at the WG9 meeting.  I'm flying to Pittsburgh that Friday morning, in
time for the ARG meeting that afternoon.  Anyway, if people don't understand why
gratuitous incompatibilities are so bad, I don't know how to convince them.
Using words like "totally irresponsible" isn't going to work.  ;-)

Tucker's idea of discussing with WG9 is fine with me.
Robert and Tucker are both better debaters than I am.

I really do think that it is totally irresponsible to place minor aesthetic
concerns like "I don't like the concept of unreserved keywords" above
compatibility issues that cost real money.

I'd like to know who is opposed to unreserved keywords, and what their reasoning
is.  Maybe those people don't think it's "minor".

I would think making those keywords a "soft error" would be more palatable to
those people, because then at least the compiler has a mode in which those
keywords ARE reserved.

****************************************************************

From: Bob Duff
Sent: Wednesday, September  4, 2013  8:00 AM

> Well, I've put several million lines through GNAT v7.1.2 using
> -gnat12 -gnatw.i ...

I think version 7.2 wavefronts implement the rule more correctly (more
stringently), so you might get more errors.  I'm not sure about that.

Ed?  Robert?  (I don't remember who implemented this stuff, but it wasn't me.)

****************************************************************

From: Ed Schonberg
Sent: Wednesday, September  4, 2013  8:20 AM

Yes, the latest version has made warnings into "hard" errors. We kept some
of the overlap checks as warnings for a year, but in June Robert removed the
critical question marks from the error strings. Javier, Robert, and I had a
hand in the full implementation. Javier also extended the checks to the other
constructs that have order of elaboration issues, such as aggregates.

****************************************************************

From: Jeff Cousins
Sent: Wednesday, September  4, 2013  9:18 AM

> I wonder if soft errors would be soft only for one revision cycle of the
> language, at which point they would mutate to being hard...

Could Annex J be treated similarly?  (Maybe two cycles would be more realistic).
Indeed could the ability to specify the same actual for multiple in out
parameters be regarded as an obsolescent feature?

****************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, September  4, 2013  9:20 AM

> But sometimes even the most trivial of source changes can be a
> problem.

Hmm, yes, but in those contexts you are generally not allowed to change compiler
version (not even compiler options). The incompatibility is then irrelevant.

****************************************************************

From: Robert Dewar
Sent: Wednesday, September  4, 2013  1:25 PM

Well it is interesting to read the presentation from BAE on the effort of
transitioning to Ada 2005. By far the worst hit was Interface as a keyword,
because they had a company-wide convention that each package had a child
xxx.Interface that defined the cross-system interface for the package. That
meant they had to change all packages in all systems across all projects in a
coordinated manner, and it was coordinating the change between different
projects that was hard.

The one line in 25,000 for this particular issue (which is BTW at this stage
water under the bridge anyway) is minor compared to this.

****************************************************************

From: Erhard Ploedereder
Sent: Friday, October  4, 2013  7:24 AM

>> Soft errors might at least "officially" reduce the entry barrier, but
>> many serious organizations consider warnings to be (hard) errors, and
>> presumably "soft errors" would also be considered "hard"
>> errors by such organizations.
>
> Actually in our experience at AdaCore, most customers do tolerate
> warnings (because they have too many of them to have a no warning hard
> rule) and do not consider warnings as hard errors.


In an old compiler of mine, we had 3 categories of warnings, selectable by
compiler switch:
 stern warnings -- almost certainly a bug
 warnings -- run-of-the-mill warnings
 light warnings -- verbose, only for paranoid people

The big advantage of "stern warnings" vs. (legality) errors is that the language
design can be much more liberal.

E.g., in the case at hand the "stern warning condition" could be about aliased
parameters, and there is NOT a precise language definition of when the warning
is to be issued. It depends entirely on the cleverness of the compiler and the
particular example whether or not the warning appears. Of course, one could
introduce the current rules as "at least" rules for the warning, but the "at
most" nature of legality errors can be nicely ignored by the language
definition.

For legality errors, the need to narrow down to the always decidable situations
is a real disservice to the user. In that sense, I like the notion of "soft
errors", but I hate the term.

****************************************************************

From: Jeff Cousins
Sent: Friday, October  4, 2013  12:45 PM

We turn on nearly all optional warnings, but then process them to sort them into
three or four categories of our choosing, possibly similar to Erhard's. This
does mean though that we sometimes have to update our tool when the wording of a
warning message changes. For legacy code with a good track record in use,
usually only the highest category of warnings would be fixed as a matter of
urgency; for new code, hopefully the only warnings allowed would be those either
with a recorded justification or of the lowest category.

****************************************************************

From: Brad Moore
Sent: Tuesday, November 19, 2013  9:18 AM

Part of the discussion has been about whether there are cases where such a
feature would be useful in practice.

One such case could be related to AI05-0144-2 (Detecting dangerous order
dependencies)

The discussion in the AI suggests that the following should be illegal in Ada
2012.

  procedure Demo is
     procedure Do_It (Double, Triple : in out Natural) is
     begin
        Double := Double * 2;
        Triple := Triple * 3;
     end Do_It;

     Var : Natural := 2;
  begin
     Do_It (Var, Var); -- Illegal by new rules: the writable actuals overlap.
  end Demo;

Yet this code compiles in my current version of GNAT.

It does produce 'warning: writable actual for "Double" overlaps with actual for
"Triple"' as a compiler warning.

So it appears that GNAT already is treating this as a suppressable error.

This also seems to be an example where a suppressable error would have been
preferable over introducing a backwards incompatibility.

Maybe we could have gone further to rule out more dangerous order dependencies
if we had the suppressable error mechanism.

****************************************************************

From: Ed Schonberg
Sent: Tuesday, November 19, 2013  9:35 AM

> Yet this code compiles in my current version of GNAT.

Actually the current version reports this as an error; you must have a slightly
older version.

> It does produce 'warning: writable actual for "Double" overlaps with actual for "Triple"' as a compiler warning.
>
> So it appears that GNAT already is treating this as a suppressable error.

We were concerned that this would be a potentially serious disruption, but found
very few instances of such potential problems in our test suite, so we decided
it was better to be fully conformant and treat this as an error, in particular
because it is really aimed at functions with in out parameters, of which there
are few examples so far :-)!

> This also seems to be an example where a suppressable error would have
> been preferable over introducing a backwards incompatibility.
>
> Maybe we could have gone further to rule out more dangerous order
> dependencies if we had the suppressable error mechanism.

Once a new mechanism is in place I'm sure we'll find many uses for it!

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  10:05 AM

After all, there is no difference between

   an error that can be suppressed, and

   a warning that can be made into an error,

given that the nice implementation of suppressing errors is to make them into
warnings, at least as an option.

And yes, mandating warnings is something that should be done much more often in
the RM, and if it makes people happy to call them suppressible errors, fine
(reminds me of the ACA, which plays this kind of trick with taxes and penalties).

(btw that's the spelling, not suppressable)

****************************************************************

From: Bob Duff
Sent: Tuesday, November 19, 2013  10:09 AM

> This also seems to be an example where a suppressable error would have
> been preferable over introducing a backwards incompatibility.

Yes, I agree.  AdaCore has had some customer complaints about this.
A typical case is two 'out' params, which sometimes contain useful results, but
sometimes the caller wants to ignore them, and so writes:

    Do_Something(X, Y, Ignored_Out_Param, Ignored_Out_Param);
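
(An assumed profile for such a call, invented here purely for illustration:

    procedure Do_Something
      (X, Y               : in  Integer;
       Result_1, Result_2 : out Integer);

With both results of an elementary type, passing the same Ignored_Out_Param
variable twice is exactly the overlap that 6.4.1(16.6/3) rejects.)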

> Maybe we could have gone further to rule out more dangerous order
> dependencies if we had the suppressable error mechanism.

Maybe.

****************************************************************

From: Bob Duff
Sent: Tuesday, November 19, 2013  10:25 AM

> And yes, mandating warnings is something that should be done much more
> often in the RM, and if it makes people happy to call them
> suppressible errors fine

It makes people happy to call them SOMEthing that sounds "bad".
I'm not sure "suppressible error" is the right term, because
(1) I'm not sure it fits into the "Classification of Errors"
section in chap 1, and (2) "suppress" sounds like it means "don't print the
message", whereas what we really mean is "allow the program to run in spite of
the error".

"Warning" is apparently not strong enough for most people.
You buy an electric hedge trimmer, and you read all sorts of silly "warnings"
about how you shouldn't use it in the bath tub etc, leading people to believe
"warning" means "something I shouldn't bother paying attention to".  ;-)

> (reminds me of the ACA which plays this kind of trick with taxes and
> penalties).
>
> (btw that's the spelling, not suppressable)

Thanks, I didn't know that!

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 19, 2013  2:41 PM

> "Warning" is apparently not strong enough for most people.
> You buy an electric hedge trimmer, and you read all sorts of silly
> "warnings" about how you shouldn't use it in the bath tub etc, leading
> people to believe "warning" means "something I shouldn't bother paying
> attention to".  ;-)

To me, the advantage of "suppressible error" over warnings is two-fold:

(1) The default in standard mode is that it is an error (the program is not
allowed to execute). You can use a configuration pragma to allow the program to
execute (probably modeled as a compiler switch, but of course the RM has nothing
to say about switches). The advantage of the configuration pragma is that it can
be "compiled into the environment" without needing to modify any source code
(which is critical in the case of incompatibilities). "Warning" implies the
opposite (that the program is allowed to execute by default).

(2) The name (and default) makes it clear that these are errors, whose execution
we allow only for purposes of compatibility with previous versions of Ada.
"Warning" is not so clear.

Of course, if someone has a better name than "suppressible error" (with the same
intent and connotation), I'd be happy to support that too.
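
(A sketch of what such a configuration pragma might look like; the name
Allow_Suppressible_Errors is invented here, purely for illustration:)

    pragma Allow_Suppressible_Errors;
    --  configuration pragma: a compilation containing only
    --  suppressible errors still draws its diagnostic messages,
    --  but the partition may be executed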

In the case that we talked about in Pittsburgh (the bad view conversion of access
types), the runtime semantics is that of making the parameter value abnormal
(such that reading it would cause the program to be erroneous). Generally, we'd
prefer to prevent that by a Legality Rule, rather than allowing a potential time
bomb. But since the bug has persisted since Ada 95 (and the construct is an Ada
83 one), it's possible it exists in code. So we could use a suppressible error
for this case, and one expects that the error would be suppressed only in the
case where someone has existing code that they can't modify.

I do think this is a mechanism that we could use in a variety of cases,
especially if we're willing to define execution to be erroneous if it is
suppressed (in many cases, defining the semantics is hard or impossible). It's
certainly worth pursuing.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  2:53 PM

> (1) The default in standard mode is that it is an error (the program
> is not allowed to execute). You can use a configuration pragma to
> allow the program to execute (probably modeled as a compiler switch,
> but of course the RM has nothing to say about switches). The advantage
> of the configuration pragma is that it can be "compiled into the
> environment" without needing to modify any source code (which is critical in the case of incompatibilities). "Warning"
> implies the opposite (that the program is allowed to execute by default).

That's meaningless, compilers are free to provide whatever defaults they like.
They just have to have a mode which is strictly conforming (I doubt many GNAT
programmers know the full details of specifying this strictly conforming mode,
since it is in practice useless except for running ACATS tests :-))

> (2) The name (and default) makes it clear that these are errors, which
> we only support allowing of the execution for the purposes of
> compatibility with previous versions of Ada. "Warning" is not so clear.

Yes, of course in many cases I disagree that these are errors, they prevent well
defined perfectly reasonable programs from executing :-)

> Of course, if someone has a better name than "suppressible error"
> (with the same intent and connotation), I'd be happy to support that too.

Well, to me it's finally a way of recognizing that compatibility is more
important than aesthetic consistency of the semantic model, and if we can use
these, whatever they are called, to reduce incompatibilities in the future, that
would be a step forward.

****************************************************************

From: Tucker Taft
Sent: Tuesday, November 19, 2013  3:14 PM

> ... I do think this is a mechanism that we could use in a variety of
> cases, especially if we're willing to define execution to be erroneous
> if it is suppressed (in many cases, defining the semantics is hard or impossible).
> It's certainly worth pursuing.

We at least mentioned the possibility of assigning unique identifiers to
suppressible legality errors, and then presumably defining a pragma that allows
individual (or collections of?) suppressible errors to be suppressed.

It might be nice to eventually assign unique identifiers to all legality errors,
whether or not they should be considered suppressible, if only for documentation
purposes (might be particularly useful for ACATS B tests).

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  3:23 PM

> We at least mentioned the possibility of assigning unique identifiers
> to suppressible legality errors, and then presumably defining a pragma
> that allows individual (or collections of?) suppressible errors to be suppressed.

Don't over-engineer; this has no place in a language standard, though it may or
may not make sense for an implementation. Right now, the debug flag in GNAT that
suppresses errors of this kind is indiscriminate, and that's proved adequate in
practice.

> It might be nice to eventually assign unique identifiers to all
> legality errors, whether or not they should be considered
> suppressible, if only for documentation purposes (might be particularly useful
> for ACATS B tests).

Not in the language standard please! I can promise you that GNAT would ignore
these if it was done, since it would be a HUGE effort to implement for VERY
little gain.

****************************************************************

From: Tucker Taft
Sent: Tuesday, November 19, 2013  3:32 PM

I did not mean to imply compilers would have to do anything with these.  Rather,
we might indicate in the comments of an ACATS B test which particular legality
rule we were checking.  Right now there is not a simple process (that I am aware
of) to check for coverage of legality rules by B tests.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  3:39 PM

Ah OK! I thought we just used RM para numbers? Otherwise sounds like a LOT of
effort even just in the RM for little gain. After all ACATS coverage is SO
incomplete, and likely to remain so, as Randy will tell you, so it seems
premature to be worrying about airtight mechanisms to get to 100% coverage.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 19, 2013  3:58 PM

> > I did not mean to imply compilers would have to do anything with
> > these.  Rather, we might indicate in the comments of an ACATS B
> > test, which particular legality rule we were checking.  Right now
> > there is not a simple process (that I am aware of) to check for
> > coverage of legality rules by B tests.
>
> Ah OK! I thought we just used RM para numbers?

Well, actually sentence numbers within RM paragraph numbers. (The SAIC coverage
just used paragraph numbers and managed to miss a number of rules that way; some
Legality Rule paragraphs have as many as 4 testable rules in them.)

And of course there are rules that take multiple paragraphs to describe
(bulleted lists are like that).

> Otherwise
> sounds like a LOT of effort even just in the RM for little gain. After
> all ACATS coverage is SO incomplete, and likely to remain so, as Randy
> will tell you, so it seems premature to be worrying about air tight
> mechanisms to get to 100% coverage.

I don't see a lot of value to an RM rule designation. It's easy enough to figure
coverage of ACATS B-Tests to Legality Rules in 95% of the cases. As with many
things, it's the coverage of C-Tests that's hard to quantify (because any
individual program depends on dozens of RM rules to execute).

If there is anything interesting about B-Tests, it is making it easier to grade
them. The one thing that's easier for C-Tests is grading them, because they
either report Passed or don't. B-Tests require some sort of error message
analysis, a lot harder problem.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  4:07 PM

> If there is anything interesting about B-Tests, it is making it easier
> to grade them. The one thing that's easier for C-Tests is grading
> them, because they either report Passed or don't. B-Tests require some
> sort of error message analysis, a lot harder problem.

Well, how often will this get done? In our context, we have a set database of
expected B-test output, and only when there are discrepancies do we have to look
at it, which is seldom! Would be even more seldom if fewer incompatible changes
were made :-) :-)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 19, 2013  4:53 PM

It's not a major problem for a vendor, because we can compare against a known
good set of results. But anyone that wants to do their own testing has to do
this. Based on my correspondence, there are a number of people that want to do
so. (And of course, any formal testing has to do this as well, it's a
significant part of the expense of formal testing.)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 19, 2013  3:52 PM

...
> > To me, the advantage of "suppressible error" over warnings is two-fold:
> >
> > (1) The default in standard mode is that it is an error (the program
> > is not allowed to execute). You can use a configuration pragma to
> > allow the program to execute (probably modeled as a compiler switch,
> > but of course the RM has nothing to say about switches). The
> > advantage of the configuration pragma is that it can be "compiled
> > into the environment" without needing to modify any source code
> > (which is critical in the case of incompatibilities). "Warning"
> > implies the opposite (that the program is allowed to execute by default).
>
> That's meaningless, compilers are free to provide whatever defaults
> they like. They just have to have a mode which is strictly conforming
> (I doubt many GNAT programmers know the full details of specifying
> this strictly conforming mode, since it is in practice useless except
> for running ACATS tests :-))

It's not meaningless even if ignored, because it conveys the intent of the
language designers that these are errors. And it conveys the intent of the
language designers as to appropriate defaults. Compilers that stray too far from
Standard mode by default are trouble. (GNAT does this with not having overflow
checking enabled by default; this causes no end of questions from newbies on
comp.lang.ada. It would be better if the free GNAT was much closer to standard
mode than it is.)

> > (2) The name (and default) makes it clear that these are errors,
> > whose execution we allow only for purposes of compatibility with
> > previous versions of Ada. "Warning" is not so clear.
>
> Yes, of course in many cases I disagree that these are errors, they
> prevent well defined perfectly reasonable programs from executing :-)

Amazing! You can see the future and already disagree with it! :-)

Since we haven't designated a single error to be a "suppressible error", and
we've only talked about two very different cases as potential "suppressible
errors", it's impossible to say anything about what the effect of the errors
would be.

Unless you meant to make a blanket statement about all Ada Legality Rules (as
one can argue in many cases that there is some sensible semantics for illegal
constructs, one example would be class-wide arguments to controlling arguments
of statically bound routines).

> > Of course, if someone has a better name than "suppressible error"
> > (with the same intent and connotation), I'd be happy to support that too.
>
> Well to me, it's finally a way of recognizing that compatibility is
> more important than aesthetic consistency of the semantic model, and if
> we can use these, whatever they are called, to reduce incompatibilities
> in the future, that would be a step forward.

I still think these will not reduce the incompatibilities very much, since there
are so many cases where they wouldn't help. But some help is likely to be better
than no help.

Note that the configuration pragma is essentially to enable obsolescent
features; it might even make sense to place it there. The core language wouldn't
have to acknowledge these at all. So we get both "aesthetic consistency of the
semantic model" (as obsolescent features are ignored for this purpose) and
compatibility. Sounds like a win-win.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  4:04 PM

> It's not meaningless even if ignored, because it conveys the intent of
> the language designers that these are errors. And it conveys the
> intent of the language designers as to appropriate defaults. Compilers
> that stray too far from Standard mode by default are trouble.

Sure, I understand, though I still think that in some cases, e.g. the warning
about overlapping parameters, these should NOT be considered as errors; the
effect is well defined, and they are absolutely in warning territory to me. Same
with several other cases.

> (GNAT does this with not having
> overflow checking enabled by default; this causes no end of questions
> from newbies on comp.lang.ada. It would be better if the free GNAT was
> much closer to standard mode than it is.)

Probably so, but compatibility always looms large :-)

>>> (2) The name (and default) makes it clear that these are errors,
>>> whose execution we allow only for purposes of compatibility with
>>> previous versions of Ada. "Warning" is not so clear.
>>
>> Yes, of course in many cases I disagree that these are errors, they
>> prevent well defined perfectly reasonable programs from executing :-)
>
> Amazing! You can see the future and already disagree with it! :-)

I am thinking of past cases where we would have used this (or at least I hope we
would have used it), e.g. overlapping parameters, or reaching further back,
static expressions like 9/0.
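
(For instance, something like the following, where the dynamic semantics --
raising Constraint_Error -- would be perfectly well defined:)

    N : constant Integer := 9 / 0;
    --  rejected at compile time today, because evaluation of the
    --  static expression fails a check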

> Unless you meant to make a blanket statement about all Ada Legality
> Rules (as one can argue in many cases that there is some sensible
> semantics for illegal constructs, one example would be class-wide
> arguments to controlling arguments of statically bound routines).

The idea does not come out of the blue; it comes out of a discussion context, and
I am commenting in the scope of that context.

> I still think these will not reduce the incompatibilities very much,
> since there are so many cases where they wouldn't help. But some help
> is likely to be better than no help.

We will see

> Note that the configuration pragma is essentially to enable
> obsolescent features; it might even make sense to place it there. The
> core language wouldn't have to acknowledge these at all. So we get
> both "asthetic consistency of the semantic model" (as obsolescent
> features are ignored for this purpose) and compatibility. Sounds like a win-win.

Not sure what you are suggesting wrt obsolescent features. It would be
*awful* to have to use a configuration parameter in order to be able to say
Ascii.LF, or to use size clauses instead of size aspects???

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 19, 2013  4:58 PM

> Not sure what you are suggesting wrt obsolescent features. It would be
> *awful* to have to use a configuration parameter in order to be able
> to say Ascii.LF, or to use size clauses instead of size aspects???

I was just thinking out loud. The configuration pragma to suppress suppressible
errors would primarily be intended for compatibility with older versions of Ada.
That's really the purpose of the obsolescent features annex (provide features
for compatibility), so arguably, that pragma could be defined there. In which
case it wouldn't appear in the core, leaving a clean(er) semantic model. I was
*only* talking about the ability to suppress suppressible errors, and not any
other feature of the language (now or in the future). In any event, it's not a
big deal either way.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  5:10 PM

AH, I see, just define the pragma there, OK .. I don't care, I have always
thought Annex J to be useless :-)

And we still have a nasty unresolved problem with Annex J, which is that we
moved all the pragmas there. It is of course totally unrealistic to expect
everyone to switch to using only aspects, so 98% of all Ada programs will be
using the obsolescent pragmas and aspects.

That means that if No_Obsolescent_Features excludes them, we have a BIG
compatibility problem.

For GNAT, we have just ignored this, and the NOF restriction does not include
this new stuff.

Maybe violating this restriction with rep attributes and pragmas should be a
suppressible error :-) :-)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 19, 2013  5:43 PM

...
> And we still have a nasty unresolved problem with Annex J, which is
> that we moved all the pragmas there. It is of course totally
> unrealistic to expect everyone to switch to using only aspects, so 98%
> of all Ada programs will be using the obsolescent pragmas and aspects.
>
> That means that if No_Obsolescent_Features excludes them, we have a
> BIG compatibility problem.

But this problem was resolved long ago, because you complained about it 2 years
ago. (This is the fourth time you've made this comment!) Specifically,
13.12.1(4/3) defines No_Obsolescent_Features as:

There is no use of language features defined in Annex J. It is implementation
defined whether uses of the renamings of J.1 and of the pragmas of J.15 are
detected by this restriction. This restriction applies only to the current
compilation or environment, not the entire partition.

> For GNAT, we have just ignored this, and the NOF restriction does not
> include this new stuff.

GNAT is following the letter of the Standard, see above.

> Maybe violating this restriction with rep attributes and pragmas
> should be a suppressible error :-) :-)

That probably would have been better than making it implementation-defined, but
I don't see any reason to change. (And note that representation attributes are
*not* obsolescent.)

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  6:07 PM

> But this problem was resolved long ago, because you complained about
> it 2 years ago. (This is the fourth time you've made this comment!)
> Specifically,
> 13.12.1(4/3) defines No_Obsolescent_Features as:
>
> There is no use of language features defined in Annex J. It is
> implementation defined whether uses of the renamings of J.1 and of the
> pragmas of J.15 are detected by this restriction. This restriction
> applies only to the current compilation or environment, not the entire partition.
>
>> For GNAT, we have just ignored this, and the NOF restriction does not
>> include this new stuff.
>
> GNAT is following the letter of the Standard, see above.

Ah, I didn't know that, good! Although I must say this seems an unnecessary
non-portability. I suspect in practice that any other implementation would copy
GNAT in this regard :-)

>> Maybe violating this restriction with rep attributes and pragmas
>> should be a suppressible error :-) :-)
>
> That probably would have been better than making it
> implementation-defined, but I don't see any reason to change. (And
> note that representation attributes are *not* obsolescent.)

Ah ok, thanks for clarification, though it puzzles me to make Component_Size
attribute specification non-obsolescent and pragma Pack obsolescent.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 19, 2013  6:16 PM

> Ah ok, thanks for clarification, though it puzzles me to make
> Component_Size attribute specification non-obsolescent and pragma Pack
> obsolescent.

The reason is that you can put an attribute_definition_clause in a private part,
and thus reference items that you aren't allowed to mention in a similar aspect
specification. So there are legal things that you can't write as an aspect
specification but can write as an attribute_definition_clause (whether writing
such things is a good idea is a separate issue).

OTOH, the pragmas don't take expressions (just the name of the entity), and thus
any legal pragma can be written as an aspect specification. Ergo, there is no
need for the pragmas, new code should only use aspect specifications.

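A minimal sketch of the kind of case meant here (names invented; per the
point above, the aspect form on the visible declaration could not name the
private constant):

   package P is
      type T is private;
      --  "with Size => Bits" here could not name Bits (not yet declared)
   private
      Bits : constant := 32;             --  private implementation detail
      type T is range 0 .. 2 ** 16 - 1;
      for T'Size use Bits;               --  the attribute_definition_clause
                                         --  can name private entities
   end P;
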
****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  6:27 PM

> OTOH, the pragmas don't take expressions (just the name of the
> entity), and thus any legal pragma can be written as an aspect
> specification. Ergo, there is no need for the pragmas, new code should only
> use aspect specifications.

I disagree with the ergo here. To me a pragma Pack can often be regarded as an
implementation detail, and thus better confined to the private part.

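For instance (a sketch of the placement meant here; names invented):

   package Bit_Sets is
      type Set is array (0 .. 63) of Boolean;
      --  "with Pack" here would expose the representation choice
   private
      pragma Pack (Set);   --  the decision to pack stays a private detail
   end Bit_Sets;
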
****************************************************************

From: Stephen Michell
Sent: Tuesday, November 19, 2013  1:49 PM

...
>> Yet this code compiles in my current version of GNAT.
>
> Actually the current version reports this as an error, you must have a
> slightly older version.

How recent, Ed? The latest GAP version did not generate an error.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 19, 2013  4:14 PM

More recent than the latest GAP version indeed!

****************************************************************

From: Jeff Cousins
Sent: Wednesday, November 20, 2013  3:43 AM

I see the suppressible errors as a variation on the theme of obsolescent
features. I also think that No_Obsolescent_Features should have an optional
language-revision parameter, so that a user could get himself up-to-date with
Ada 2005 now and leave getting up-to-date with Ada 2012 for some time in the
future.

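Something like the following, purely hypothetical syntax (no such parameter
exists today):

   pragma Restrictions (No_Obsolescent_Features => Ada_2005);
   --  hypothetical: reject only features already obsolescent in Ada 2005
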
****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, November 20, 2013  7:53 AM

As much as I would like to see suppressible errors for all situations where any
generally undecidable error situation is decided by the compiler anyhow, I am
seriously worried about the ARG politics of providing the capability. I.e., I am
really eager to strengthen warnings, but I am not at all in favor of weakening
error messages.

Slightly overstating my case: are we ever going to close safety loopholes again
by declaring something plain illegal, when we have the easy way out of
declaring it suppressibly illegal?

Statements like

On 19.11.2013 21:53, Robert Dewar wrote:
> Well to me, it's finally a way of recognizing that compatibility is
> more important than aesthetic consistency of the semantic model, and if
> we can use these, whatever they are called, to reduce incompatibilities
> in the future, that would be a step forward.

which also have been made at the meeting - so it is not just Robert - make me
worry that indeed almost all newly discovered errors will become suppressible
for fear of rendering an existing program illegal. To me, that is clearly the
wrong way, including the wrong message to the user community. "Ada 2012 with
Suppressible Safety", hmmm.

Note that the aliasing rule is methodological, there to prevent unexpected
results (it would have been a really good candidate for a suppressible error
and a much broader definition). The case of violating discriminant
constraints, which was the case discussed by the ARG, is a safety issue. When
it comes to safety, I do not want suppressible errors.

Moreover, while compiler producers have the liberty to suppress errors to their
liking in non-standard mode, e.g., to achieve compatibility with the
"pre-ARG-decided-error" status of the language, language definers are not free
to ignore specifying the actual semantics of code in the presence of suppressed
errors. I would love to avoid that.

The analogy to suppressed checks doesn't quite work. One can suppress checks
rather surgically and individually; plus, there is only a small number of them.
Suppressible errors so far are an all-or-nothing proposition.

****************************************************************

From: Tucker Taft
Sent: Wednesday, November 20, 2013  8:07 AM

I think we should reserve "suppressible" errors mostly for what might be called
"methodological" errors.  Examples that come to mind are the Ada 2012 aliasing
checks for OUT parameters, the Ada 95 illegality of out-of-range static values,
and perhaps something like the Ada 95 requirement of having all or no
dynamically-tagged operands in a dispatching call.

This bizarre OUT parameter case for access types and view conversions is right
on the border.  It doesn't inevitably lead to erroneousness -- only if the OUT
parameter's initial value is read.

In any case, I think the language-defined configuration pragma for suppressing
these should be very specific, uniquely identifying the legality error that is
being suppressed. Implementations could of course define their own pragmas to be
more sweeping in nature.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 20, 2013  8:16 AM

> which also have been made at the meeting - so it is not just Robert -
> make me worry that indeed almost all newly discovered errors will
> become suppressible for fear of rendering an existing program illegal.
> To me, that is clearly the wrong way, including the wrong message to
> the user community. "Ada 2012 with Suppressible Safety", hmmm.

pragma Suppress already has that "problem" if you think it is a problem!

> Note that the aliasing rule is methodological, there to prevent unexpected
> results (it would have been a really good candidate for a suppressible
> error and a much broader definition). The case of violating
> discriminant constraints, which was the case discussed by the ARG, is a safety issue.
> When it comes to safety, I do not want suppressible errors.

But you can totally suppress discriminant checks anyway!

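Indeed, a complete (if silly) sketch of what the language already allows:

   procedure Suppress_Demo is
      pragma Suppress (Discriminant_Check);   --  scoped, per-check

      type Rec (D : Boolean := False) is null record;

      R : Rec (D => False);                   --  constrained object
   begin
      R := (D => True);   --  normally raises Constraint_Error; with the
                          --  check suppressed, execution is erroneous
   end Suppress_Demo;
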
> The analogy to suppressed checks doesn't quite work. One can suppress
> checks rather surgically and individually; plus, there is only a small
> number of them. Suppressible errors so far are an all-or-nothing
> proposition.

Not at all, I think there should be fine grained control over suppressing
errors, using the scope model of Suppress, and we are only talking here about a
very small number of errors that are candidates for suppression.

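Something along these lines, say (purely hypothetical pragma and error name,
mirroring the scope model of Suppress):

   pragma Suppress_Error (Overlapping_Parameters);     --  configuration-wide

   procedure Legacy is
      pragma Suppress_Error (Overlapping_Parameters);  --  or locally scoped
   begin
      null;
   end Legacy;
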
****************************************************************

From: Robert Dewar
Sent: Wednesday, November 20, 2013  8:22 AM

> I think we should reserve "suppressible" errors mostly for what might
> be called "methodological" errors.  Examples that come to mind are the
> Ada 2012 aliasing checks for OUT parameters, the Ada 95 illegality of
> out-of-range static values, and perhaps something like the Ada 95
> requirement of having all or no dynamically-tagged operands in a dispatching call.

I agree completely

> This bizarre OUT parameter case for access types and view conversions
> is right on the border.  It doesn't inevitably lead to erroneousness
> -- only if the OUT parameter's initial value is read.

I agree completely

> In any case, I think the language-defined configuration pragma for
> suppressing these should be very specific, uniquely identifying the legality
> error that is being suppressed.  Implementations could of course define
> their own pragmas to be more sweeping in nature.

I agree completely, and doubt that sweeping stuff is a good idea. We currently
use an undocumented debug switch for this purpose, and it would be good to
replace it with something less sweeping. Currently our debug switch looks like:

>    --  d.E  Turn selected errors into warnings. This debug switch turns a
>    --       specific set of error messages into warnings. Setting this switch
>    --       causes Opt.Error_To_Warning to be set to True. The intention is
>    --       that this be used for messages representing upwards incompatible
>    --       changes to Ada 2012 that cause previously correct programs to be
>    --       treated as illegal now. The following cases are affected:
>    --
>    --          Errors relating to overlapping subprogram parameters for cases
>    --          other than IN OUT parameters to functions.
>    --
>    --          Errors relating to the new rules about not defining equality
>    --          too late so that composition of equality can be assured.

****************************************************************

From: Geert Bosch
Sent: Wednesday, November 20, 2013  11:42 AM

> For the case that we talked about in Pittsburgh (the bad view conversion
> of access types), the runtime semantics is that of making the
> parameter value abnormal (such that reading it would cause the program to be erroneous).

This is exactly right, in my opinion. This involved out parameters, where it
typically does not matter if the value is abnormal, as it will get overwritten
anyway. Moreover, failing any language-defined check in a procedure updating a
composite variable may make it abnormal. So, this is just another case where we
cannot avoid a particular set of errors at compile time, unless we make a set of
programs with well-defined behavior illegal.

> Generally, we'd prefer to prevent that by a Legality Rule, rather than
> allowing a potential time bomb. But since the bug has persisted since
> Ada 95 (and the construct is an Ada 83 one), it's possible it exists
> in code. So we could use a suppressible error for this case, and one
> expects that the error would be suppressed only in the case where
> someone has existing code that they can't modify.

It sometimes seems that far too much effort is put into preventing programmers
from shooting themselves in the foot with complicated contraptions, while ignoring
the minefield of real problems in concurrent programs. We're happy to make any
concurrent I/O erroneous (including where the file argument is implicit) even
though languages such as C use implicit locking to avoid undefined behavior.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 20, 2013  3:46 PM

I agree 100% with this. I think, for example, it is really bad that separate
tasks can't do Put_Line safely. In fact there is an internal ticket here at
AdaCore to fix that :-) So that at least GNAT will behave nicely, even if Ada
does not!

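The hazard in its simplest form (a sketch; the language gives no guarantee
for concurrent operations on the same file, here the implicit
Standard_Output):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Race is
      task Writer;
      task body Writer is
      begin
         Put_Line ("Hello");
      end Writer;
   begin
      Put_Line ("World");   --  overlaps with the task's call on the same
                            --  implicit file; the reentrancy requirement
                            --  of RM A(3) does not cover this
   end Race;
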
****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 20, 2013  11:51 AM

...
> > The analogy to suppressed checks doesn't quite work. One can
> > suppress checks rather surgically and individually; plus, there is
> > only a small number of them. Suppressible errors so far are an
> > all-or-nothing proposition.
>
> Not at all, I think there should be fine grained control over
> suppressing errors, using the scope model of Suppress, and we are only
> talking here about a very small number of errors that are candidates
> for suppression.

I had suggested something like that during the ARG meeting. Tucker made the good
point that using local error suppression would require modifying the source
code. And if you are allowed to modify the source code, you should just get rid
of the offending code (all of the cases we've discussed to date have easy local
workarounds) rather than suppressing an error. He thought it was more valuable
to have the user specify what errors they wanted suppressed (globally), so that
they don't have to suppress all or nothing. But they need global suppression so
that they don't have to modify the source (just the build environment).

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 20, 2013  3:48 PM

Well, how easy it is to modify the source code to eliminate the error is not
clear, and you can probably justify special local suppress additions more
easily than other code changes.

But note I said we should follow the Suppress model, and of course that model
allows global control as well via configuration pragmas.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 20, 2013  12:06 PM

...
> It sometimes seems that far too much effort is put into preventing
> programmers from shooting themselves in the foot with complicated
> contraptions, while ignoring the minefield of real problems in
> concurrent programs. We're happy to make any concurrent I/O erroneous
> (including where the file argument is implicit) even though languages
> such as C use implicit locking to avoid undefined behavior.

Full Ada is not really a concurrent language; there are far too many ways to
shoot yourself in the head (not foot!). (It's more of a cooperating sequential
processes sort of language.) Implicit locking is *way* down the list of those
priorities. (I think that the project that Brad and Steve are working on might
present some help in this area.)

Implicit locking for language-defined subprograms is a massive minefield. We
tried to come up with a definition of it for the containers, but failed (too
many deadlock and livelock cases). Earlier, we had tried to include implicit
locking in Claw, but again there were too many deadlock and livelock cases.
Maybe it could be made to work with a limited subset of I/O, but I'm skeptical
that we could even make that work without lots and lots of work.

I would guess that we'll have to define purpose-built libraries to support
concurrent I/O, probably using new categorizations defined by the
multiprocessing working group. (My guess is that C is delusional if they think
that implicit locking will always work on their full libraries, but probably no
one will care because they don't use (or will learn to avoid) the problem
areas.)

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 20, 2013  3:49 PM

> Full Ada is not really a concurrent language; there are far too many
> ways to shoot yourself in the head (not foot!).

For the record, I find this a totally absurd (and rather
damaging) statement!

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 20, 2013  4:47 PM

I really should have said "parallel" rather than "concurrent". But my point
still stands: Ada has far too many constructs that don't allow concurrent
execution (or don't work reliably for concurrent execution), so it takes a lot
of effort to write code that can be executed in parallel - and the language
provides virtually no help. A language doesn't have to be that way (see
Parasail, for example); more could be done for Ada. (For instance, note Geert's
comments during the recent meeting about Pure and Pure_Function allowing too
much for parallel/concurrent execution.)

P.S. Please note that I won't make a statement like the above in a public forum;
it would be too easily taken the wrong way. Ada surely handles concurrency
better than other mainstream languages. But that's a very low bar!

****************************************************************

From: Robert Dewar
Sent: Friday, November 22, 2013  1:59 PM

Here is a nice example of why the rule about overlapping parameters should be
suppressible. Of course all four calls make perfect semantic sense.

>      1. procedure X is
>      2.    procedure Xchg (A, B : in out Integer) is
>      3.       T : constant Integer := A;
>      4.    begin
>      5.       A := B;
>      6.       B := T;
>      7.    end;
>      8.
>      9.    function Ident (R : Integer) return Integer
>     10.    is begin return R; end;
>     11.
>     12.    Data : array (1 .. 10) of Integer;
>     13.    M, N : constant Integer := 4;
>     14.    P, Q : constant Integer := Ident (4);
>     15.    R, S : Integer;
>     16.
>     17. begin
>     18.    Xchg (Data(2), Data(2));
>                  |
>         >>> writable actual for "A" overlaps with actual for "B"
>
>     19.    Xchg (Data(M), Data(N));
>                  |
>         >>> writable actual for "A" overlaps with actual for "B"
>
>     20.    Xchg (Data(P), Data(Q));
>     21.    Xchg (Data(R), Data(S));
>     22. end X;

****************************************************************

From: Randy Brukardt
Sent: Friday, November 22, 2013  2:27 PM

Hmm. We actually considered this exact case when contemplating the overlapping
parameters rule, and we thought that it was likely that *statically* overlapping
parameters in a case like this represented a bug, rather than being intentional.
Why would someone write a complicated call that does nothing on purpose?
Replacing the calls with "null;" makes more sense (or "delay 0.0001;" if the
point is to waste time).

Since the rule doesn't trigger when the parameters are calculated (as in the P
and Q case), the normal ways this occurs would not be illegal. It's only
illegal when the parameters are statically overlapping.

Anyway, there is an argument for making procedure parameter overlapping
suppressible (not functions: that's new to Ada 2012, so there can be no
compatibility problem, and the dangers are much worse when multiple calls are
involved). I'd still like to see a real user case that was convincing (every
example that has been posted so far seems unlikely or contrived to me), but the
suppressible error idea has the right connotations (get rid of the problem if
you can; it's really a problem in 95% of the cases, and it's tricky code best
avoided in the other 5%), so I would not object to that. (Presuming Bob gets a
real proposal written up someday soon.)

****************************************************************

From: Tucker Taft
Sent: Friday, November 22, 2013  2:39 PM

> ... I'd still like to see a real user case that was convincing

We did have a customer complain.  They had two OUT parameters, and were ignoring
both of them.  They passed the same variable named "Ignore" to both.

It is certainly easy to work around (just create an Ignore1 and Ignore2), but as
is to be expected, people hate to make any change to code that is already
working just to make the compiler happy.

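In code form (names invented), the pattern and its mechanical workaround:

   procedure Workaround_Demo is
      procedure Get_Status (Count, Errors : out Natural) is
      begin
         Count  := 0;
         Errors := 0;
      end Get_Status;

      Ignore1, Ignore2 : Natural;
   begin
      --  "Get_Status (Ignore, Ignore);" is what the customer wrote:
      --  illegal in Ada 2012, since the two writable actuals overlap.
      Get_Status (Ignore1, Ignore2);   --  legal, but needs two dummies
   end Workaround_Demo;
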
****************************************************************

From: Randy Brukardt
Sent: Friday, November 22, 2013  2:56 PM

Understood. My problem with that example is that I can't imagine any
well-designed interface having two out parameters of the same type that you
might want to ignore. I can imagine it happening very rarely (not every
interface is well-designed!), but I can't get very interested in making bad code
easier to write.

In any case, the suppressible error idea seems to have the right presentation
(the default is an error, the error can be turned off for compatibility
reasons), so I wouldn't object to changing the rule that way. But first we need
a full proposal for suppressible errors.

****************************************************************

From: Robert Dewar
Sent: Friday, November 22, 2013  2:59 PM

> We did have a customer complain.  They had two OUT parameters, and
> were ignoring both of them.  They passed the same variable named "Ignore" to
> both.

There was one active complaint, and actually two other similar instances in our
test suite.

> It is certainly easy to work around (just create an Ignore1 and
> Ignore2), but as is to be expected, people hate to make any change to
> code that is already working just to make the compiler happy.

And it is an ugly workaround!

****************************************************************

From: Robert Dewar
Sent: Friday, November 22, 2013  3:01 PM

I think the exchange example is instructive.
Yes, you can say, well, it's stupid to call exchange with two identical
parameters, but it can often arise, in the case of conditional and
parameterized compilation, that you execute some silly code for some cases of
parameterization.

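For instance (a sketch with invented configuration constants):

   procedure Config_Demo is
      Primary   : constant Positive := 1;
      Secondary : constant Positive := 1;   --  equal in this configuration

      Data : array (1 .. 10) of Integer := (others => 0);

      procedure Xchg (A, B : in out Integer) is
         T : constant Integer := A;
      begin
         A := B;
         B := T;
      end Xchg;
   begin
      Xchg (Data (Primary), Data (Secondary));
      --  a harmless no-op when Primary = Secondary, yet the actuals
      --  statically overlap, so Ada 2012 rejects this configuration
   end Config_Demo;
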
****************************************************************

From: Robert Dewar
Sent: Friday, November 22, 2013  3:31 PM

> Understood. My problem with that example is that I can't imagine any
> well-designed interface having two out parameters of the same type
> that you might want to ignore. I can imagine it happening very rarely
> (not every interface is well-designed!), but I can't get very
> interested in making bad code easier to write.

Given that the language does not have optional out parameters, it seems
perfectly reasonable to have a procedure that supplies a bunch of output
information which might not be needed in every case.

The language should not be designed around Randy's idiosyncratic ideas of what
he thinks is or is not "bad code". I found the customer's code in each of these
cases perfectly reasonable.

> In any case, the suppressible error idea seems to have the right
> presentation (the default is an error, the error can be turned off for
> compatibility reasons), so I wouldn't object to changing the rule that way.
> But first we need a full proposal for suppressible errors.

As usual I find mumbling about defaults to be totally bogus, defaults are up to
an implementation, not the language design. The only requirement is that an
implementation have *A* mode in which the RM semantics hold. Whether this is the
default mode is up to the implementation.

****************************************************************

From: Bob Duff
Sent: Friday, November 22, 2013  3:41 PM

> Understood. My problem with that example is that I can't imagine any
> well-designed interface having two out parameters of the same type
> that you might want to ignore. I can imagine it happening very rarely
> (not every interface is well-designed!), but I can't get very
> interested in making bad code easier to write.

I rather strongly disagree with that point of view.  Compatibility is about
compatibility of all code, not just code where the ARG approves of the style.

And not just code where Randy approves of the style.  You have every right to
scorn code that has two 'out' parameters of the same type. But I and others
don't share that view.  We need to avoid letting personal stylistic preferences
override compatibility concerns. Even if you convinced me that that code is
evil, it's not about making it "easier to write" -- that code is already
written, and (in some environments) might be expensive to modify.

For example, I am rabidly opposed to indenting code using TAB characters.  But
if I were to propose making them illegal, you would rightly consider that to be
completely irresponsible!

> In any case, the suppressible error idea seems to have the right
> presentation (the default is an error, the error can be turned off for
> compatibility reasons), so I wouldn't object to changing the rule that way.

Right, this is exactly what the concept I've been pushing is for. You and I
can't necessarily agree whether "two 'out' parameters of the same type" =
"Evil".  This way, we don't NEED to agree on that; a "suppressible error" can
satisfy us both.  The concept can defuse all sorts of unnecessary arguments.

> But first we need a full proposal for suppressible errors.

Understood.  ;-)

****************************************************************

From: Bob Duff
Sent: Friday, November 22, 2013  3:57 PM

> As usual I find mumbling about defaults to be totally bogus, defaults
> are up to an implementation, not the language design. The only
> requirement is that an implementation have *A* mode in which the RM
> semantics hold. Whether this is the default mode is up to the
> implementation.

From a formal point of view, you are of course correct.  But I think Randy is
correct to want the RM worded in a way that implies that the "error" is the main
thing, and "Oh by the way, if you really want to, you can run the illegal
program anyway, and get well-defined semantics."  That's better than, "You can
do whatever you like, and oh by the way, the compiler must give a warning".

Formally, those are two ways of expressing the same thing, but the first way
makes it sound like we're taking these things seriously.

****************************************************************

From: Robert Dewar
Sent: Friday, November 22, 2013  4:31 PM

OK in principle, but in practice you have been making clear warning situations
(like overlapping params) into errors :-)

****************************************************************

From: Randy Brukardt
Sent: Friday, November 22, 2013  4:29 PM

> > Understood. My problem with that example is that I can't imagine any
> > well-designed interface having two out parameters of the same type
> > that you might want to ignore. I can imagine it happening very
> > rarely (not every interface is well-designed!), but I can't get very
> > interested in making bad code easier to write.
>
> Given that the language does not have optional out parameters, it
> seems perfectly reasonable to have a procedure that supplies a bunch
> of output information which might not be needed in every case.

It's a bad idea to have two non-equivalent parameters of the same type in any
subprogram interface, because that means that the compiler can't help you ensure
that the parameters are properly matched. (Coding rules can help with this, by
requiring named notation in calls where there are multiple parameters, but even
that isn't a full cure.) Obviously, you can't always avoid having parameters of
the same type ("-" comes to mind).

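For instance (invented names):

   procedure Named_Demo is
      A : Integer := 1;
      B : Integer := 0;
      procedure Copy (Source : Integer; Target : out Integer) is
      begin
         Target := Source;
      end Copy;
   begin
      Copy (Source => A, Target => B);   --  named notation makes the
                                         --  pairing explicit at the call
   end Named_Demo;
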
> The language should not be designed around Randy's idiosyncratic ideas
> of what he thinks is or is not "bad code". I found the customer's code
> in each of these cases perfectly reasonable.

Since I've never had the chance to see the customer code, only very abbreviated
summaries of it, I have no way to decide if I agree or not. It's quite possible
that I'd agree with you if I actually saw the interface itself. I realize that
you probably can't release the customer code, so I probably will never see any
real code like this -- and your descriptions are not at all convincing.

I've said since the beginning that I want to see a *real* example of this
problem, not a sanitized summary of one (which eliminates anything that would
make such an example convincing). Since no one is willing/able to provide such
an example, I continue to think this is much ado about very little. But I remain
willing to be convinced otherwise -- just show a convincing example that
actually looks like real-life code.

> > In any case, the suppressible error idea seems to have the right
> > presentation (the default is an error, the error can be turned off
> > for compatibility reasons), so I wouldn't object to changing the
> > rule that way. But first we need a full proposal for suppressible errors.
>
> As usual I find mumbling about defaults to be totally bogus, defaults
> are up to an implementation, not the language design. The only
> requirement is that an implementation have
> *A* mode in which the RM semantics hold. Whether this is the default
> mode is up to the implementation.

(A) When I'm talking about "default", I'm talking about the default in terms of
the language rules, not necessarily implementations. That has much more to do
with the perception of the language and its safety than with what any
individual implementation does. Obviously, we can't control implementations, but
we can control the message that we send both programmers and implementers.

(B) I'd caution any implementer who thinks they know better than the
language designers: they are usually wrong. The language designers have
considered far more cases (use cases, implementation costs, portability issues,
etc.) than an implementer is likely to have encountered. (I know this from
personal experience!) It's certainly possible for a default to be better than
the language (GNAT's static elaboration rules come to mind, because they take an
obscure runtime check and replace it with compile-time rules -- that's almost
always better), but that's the rare case. The cost in interoperability with
other compilers and with the language description makes most such differences
bad choices. Thus, having the defaults differ substantially from the language
definition is generally a bad idea.

(C) The attitude you give above is exactly why we don't want to call these
warnings. Implementers might very well decide to hide warnings by default, but
they'd have a harder case to make (especially for a "safe" language like Ada)
for hiding errors. That's not the message that we want to give to Ada users,
and I don't think most implementers would want to give that message either.

****************************************************************

From: Robert Dewar
Sent: Friday, November 22, 2013  4:47 PM

> It's a bad idea to have two non-equivalent parameters of the same type
> in any subprogram interface, because that means that the compiler
> can't help you ensure that the parameters are properly matched.
> (Coding rules can help with this, by requiring named notation in calls
> where there are multiple parameters, but even that isn't a full cure.)
> Obviously, you can't always avoid having parameters of the same type ("-"
> comes to mind).

Total Nonsense IMO

> Since I've never had the chance to see the customer code, only very
> abbreviated summaries of it, I have no way to decide if I agree or
> not. It's quite possible that I'd agree with you if I actually saw the
> interface itself. I realize that you probably can't release the
> customer code, so I probably will never see any real code like this --
> and your descriptions are not at all convincing.

I really don't care AT ALL if you agree. Whether you think code is good or bad
is not even interesting to me, let alone a valid input to the language design.

> I've said since the beginning that I want to see a *real* example of
> this problem, not a sanitized summary of one (which eliminates
> anything that would make such an example convincing). Since no one is
> willing/able to provide such an example, I continue to think this is
> much ado about very little. But I remain willing to be convinced
> otherwise -- just show a convincing example that actually looks like real-life
> code.

I really don't see why we should go to this bother; given your peculiar ideas,
you may be impossible to convince, but the point is to design a language for
everyone to use, not just Randy.

> (B) I'd caution any implementer who thinks they know better than
> the language designers: they are usually wrong. The language
> designers have considered far more cases (use cases, implementation
> costs, portability issues, etc.) than an implementer is likely to have
> encountered. (I know this from personal experience!) It's certainly
> possible for a default to be better than the language (GNAT's static
> elaboration rules come to mind, because they take an obscure runtime
> check and replace it with compile-time rules -- that's almost always
> better), but that's the rare case. The cost in interoperability with
> other compilers and with the language description makes most such
> differences bad choices. Thus, having the defaults differ substantially from
> the language definition is generally a bad idea.

Actually I think the language designers are often far too compiler-oriented,
and NOT sufficiently oriented to actual users. How many ARG members are
actually in the business of building big critical Ada systems?

I think for instance that the dynamic model of elaboration is pretty horrible,
and definitely it is better that GNAT defaults to the static model.

> (C) The attitude you give above is exactly why we don't want to call
> these warnings. Implementers might very well decide to hide warnings
> by default, but they'd have a harder case to make (especially for a "safe"
> language like Ada) for hiding errors. That's not the message that we
> want to give to Ada users, and I don't think most implementers would want to
> give that message either.

Weak argument IMO; defaults are chosen in response to customer needs, not some
vague idea of what is or is not safe.

****************************************************************

From: Jeff Cousins
Sent: Wednesday, November 27, 2013  10:16 AM

> Actually I think the language designers are often far too compiler-oriented,
> and NOT sufficiently oriented to actual users. How many ARG members are
> actually in the business of building big critical Ada systems?

That's kind of my day job.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  10:42 AM

Indeed, and your presence is most welcome!

****************************************************************

From: Jeff Cousins
Sent: Thursday, November 28, 2013  10:03 AM

Is the checking for overlapping out or in out parameters definitely in the GNAT
v7.2 "preview"? I've been through the code for multiple projects across 3 sites
without hitting any examples.

****************************************************************

From: Robert Dewar
Sent: Thursday, November 28, 2013   7:53 PM

Yes, and that is not surprising!

****************************************************************

From: Erhard Ploedereder
Sent: Friday, November 22, 2013  4:31 PM

> Here is a nice example of why the rule about overlapping parameters
> should be suppressible.
> Of course all four calls make perfect semantic sense.
>
>>      1. procedure X is
>>      2.    procedure Xchg (A, B : in out Integer) is
>>      3.       T : constant Integer := A;
>>      4.    begin
>>      5.       A := B;
>>      6.       B := T;
>>      7.    end;

Yes. And it comes close to being the ONLY program where these calls make perfect
sense (well, actually, all programs are OK where the intended semantics are
identity of the pre- and post-values of all parameters in the case of aliased
parameters).

With a minor change, as in:

>>      1. procedure X is
              with invariant A + B = A'old + B'old
                -- forgive my syntax
>>      2.    procedure Xchgplusminus (A, B : in out Integer) is
>>      3.       T : constant Integer := A;
>>      4.    begin
>>      5.       A := B + 1;
>>      6.       B := T - 1;
>>      7.    end;

aliasing already screws up royally (and with results that depend on the order of
the write-backs - what may have worked on one target in the user's view, fails
on the other target -- not that the above could have worked in anybody's view on
any target).

****************************************************************

From: Robert Dewar
Sent: Friday, November 22, 2013  4:49 PM

> aliasing already screws up royally (and with results that depend on
> the order of the write-backs - what may have worked on one target in
> the user's view, fails on the other target -- not that the above could
> have worked in anybody's view on any target).

By the way, it has always puzzled me how much the Ada designers like
non-determinism. Yes, in the language the order of write back of out parameters
is non-deterministic. Why? I can't figure out ANY advantage of making something
like this non-deterministic!

****************************************************************

From: Tucker Taft
Sent: Friday, November 22, 2013  5:00 PM

The real concern for me is that the order is not obvious in the source code, so
if correct functioning of the algorithm relies on there being a well-defined
order, that is ultimately misleading.  Something that looks commutative, for
example, is in fact not commutative.  I agree that for debugging it is nice if
the order is repeatable, but if anyone starts relying on the order, the
algorithm becomes fragile to seemingly "harmless" maintenance activities such as
introducing a temporary; e.g.:

    F(G(X), H(Y)) -->

       HT : constant HH := H(Y);
       F(G(X), HT)

****************************************************************

From: Randy Brukardt
Sent: Friday, November 22, 2013  5:07 PM

> By the way, it has always puzzled me how much the Ada designers like
> non-determinism. Yes, in the language the order of write back of out
> parameters is non-deterministic.
> Why? I can't figure out ANY advantage of making something like this
> non-deterministic!

You'd have to ask Jean, as the vast majority of the non-determinism dates back
to Ada 83.

We've occasionally wondered about reducing some of it, but that would have a
massive impact on implementations. Implementers have been taking advantage of
those rules for 30+ years, and finding those places and removing them would be
virtually impossible as there is nothing particularly special about them. (There
may not even be comments near such places.)

For one example, I know Janus/Ada uses the non-determinism of parameter
evaluation to reduce the need to spill registers and the like during the
evaluation of parameters. We try to evaluate the most complex parameter
expression first.

Not sure if there is an advantage taken with the order of write-back (I can't
think of one, either); I suppose it was non-deterministic mainly because the
order of evaluation of the parameters is non-deterministic and one would imagine
the write-back to be done in reverse.

****************************************************************

From: Robert Dewar
Sent: Friday, November 22, 2013  5:09 PM

> The real concern for me is that the order is not obvious in the source
> code, so if correct functioning of the algorithm relies on there being
> a well-defined order, that is ultimately misleading.  Something that
> looks commutative, for example, is in fact not commutative.  I agree
> that for debugging it is nice if the order is repeatable, but if anyone
> starts relying on the order, the algorithm becomes fragile to seemingly
> "harmless" maintenance activities such as introducing a temporary;

To me it's even MORE fragile for maintenance if you can have code that has
worked fine for years and years, and suddenly the compiler changes its behavior
and breaks the code.

In Pascal, the OR and AND operators are weird: it is impl defined whether they
short circuit, and impl defined which order they do things in.

Virtually ALL Pascal compilers did short circuit left to right a la Ada. Then
brilliant Honeywell created a compiler that short circuited right to left.

ENDLESS problems in porting code!

These days, most Ada programmers don't (can't) read the RM any more; it's just
too impenetrable. Most Ada programmers assume that code that goes through the
compiler and works is "correct". You see this very clearly, for example, in
elaboration, where typical large programs have just sufficient pragma
Elaborate's to get them past one particular compiler's choice of elaboration
order (elab order being another case where the language has gratuitously
non-deterministic semantics).

****************************************************************

From: Robert Dewar
Sent: Friday, November 22, 2013  5:13 PM

It's interesting that there are cases where C is more deterministic than Ada,
and in such cases, we guarantee the Ada behavior. For instance, a+b+c in C
always means (a+b)+c, and it does in GNAT as well; this is something we
guarantee.

****************************************************************

From: Erhard Ploedereder
Sent: Friday, November 22, 2013  5:13 PM

> By the way, it has always puzzled me how much the Ada designers like
> non-determinism. Yes, in the language the order of write back of out
> parameters is non-deterministic. Why? I can't figure out ANY advantage
> of making something like this non-deterministic!

Good question. Does anybody know the answer? My answer to students is:
because calling conventions differ and you want to match up with the prevalent
styles. On a stack machine you push and pop, i.e., return right-to-left, i.e.,
deal with your parameters in a LIFO style. On a "more normal" machine you might
want to do FIFO (i.e., left-to-right), not LIFO.

But that's only my story. Does anybody have deeper insights?
Moreover, that's only my curiosity. We agree on the fact that Ada leaves this
order undefined.

The big difference in my example would merely be that the outcome would be
predictably wrong with a specified order, but wrong (and unexpected)
nevertheless.

****************************************************************

From: Geert Bosch
Sent: Sunday, November 22, 2013  10:52 PM

> By the way, it has always puzzled me how much the Ada designers like
> non-determinism. Yes, in the language the order of write back of out
> parameters is non-deterministic. Why? I can't figure out ANY advantage
> of making something like this non-deterministic!

Indeed, this is right up there with the association of addition and subtraction,
which is implementation defined. This makes it really tricky to write
expressions that will never overflow.

Consider S being an arbitrary String:

   Len : Natural := (if S'Last < S'First then 0 else S'Last - S'First + 1);

Note how this can overflow if and only if the compiler chooses an evil
association. In practice, the main effect is to make reasoning about program
behavior a lot harder. SPARK requires one to always use parentheses in this
case, but that of course just impedes readability.

Much simpler to require left-to-right associativity. This will affect zero
existing Ada 2012 compilers, and would be trivial to comply with for any new
ones (just add parentheses implicitly, where required). In return, it will allow
any static analysis tools to do a far better job, and free the programmer from
having to put tons of otherwise unnecessary parentheses in their code.

I think we should make it a point to go through all such cases and specify
behavior. Ultimately, this is easier for both users and implementors of Ada
development tools. Note that this last group also includes code generators (for
modeling languages) and various analysis tools.

It should never be acceptable for similar expressions to be
for C, but implementation-defined or erroneous in Ada, whether it is about
evaluating A + B + C, or writing "Hello" and "World" to standard output in two
separate threads/tasks.

Maybe it isn't too late for a binding interpretation for Ada 2012 for some of
these order of evaluation topics? (*)

(*) Note in particular that the associativity can only have a visible effect
in the presence of overflow. So, all we are forbidding is failing an overflow
check if left-to-right evaluation would not have overflowed.  Doing anything
else is just evil, isn't it?

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, November 25, 2013  5:26 AM

> It's interesting that there are cases where C is more deterministic
> than Ada, and in such cases, we guarantee the Ada behavior. For
> instance, a+b+c in C always means (a+b)+c, and it does in GNAT as well;
> this is something we guarantee.

Huh? Order of evaluation is implementation defined, but associativity is well
defined: 4.5(8)

****************************************************************

From: Yannick Moy
Sent: Monday, November 25, 2013  5:48 AM

> Huh? Order of evaluation is implementation defined, but associativity
> is well defined: 4.5(8)

The problem is the implementation permission in RM 4.5(13):

"For a sequence of predefined operators of the same precedence level (and in the
absence of parentheses imposing a specific association), an implementation may
impose any association of the operators with operands so long as the result
produced is an allowed result for the left-to-right association, but ignoring
the potential for failure of language-defined checks in either the left-to-right
or chosen order of association."

In the analysis tools that we develop at AdaCore, we explicitly reject this
permission, as GNAT never uses it. It would make analysis much more complex if
we had to take into account all possible interleavings.

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  7:40 AM

> Note how this can overflow if and only if the compiler chooses an evil
> association. In practice, the main effect is to make reasoning about
> program behavior a lot harder. SPARK requires one to always use
> parentheses in this case, but that of course just impedes readability.

Actually we decided that the extra parens are such an annoyance that they are
only required when SPARK is operating in a special pedantic mode. Otherwise we
guarantee left-to-right association (GNAT itself always makes the same
guarantee).

> Much simpler to require left-to-right associativity. This will affect
> zero existing Ada 2012 compilers, and would be trivial to comply with
> for any new ones (just add parentheses implicitly, where required). In
> return, it will allow any static analysis tools to do a far better
> job, and free the programmer from having to put tons of otherwise
> unnecessary parentheses in their code.

I definitely agree with this

> I think we should make it a point to go through all such cases and
> specify behavior. Ultimately, this is easier for both users and
> implementors of Ada development tools. Note that this last group also
> includes code generators (for modeling languages) and various analysis
> tools.
>
> It should never be acceptable for similar expressions to be
> well-defined for C, but implementation-defined or erroneous in Ada,
> whether it is about evaluating A + B + C, or writing "Hello" and
> "World" to standard output in two separate threads/tasks.
>
> Maybe it isn't too late for a binding interpretation for Ada 2012 for
> some of these order of evaluation topics? (*)

Failing that, in practice for Ada 2012, it would be good enough for now to just
clearly document the guarantees that GNAT makes, and figure out if it should be
making more such guarantees (I know Geert is looking at the I/O from tasks
issue).

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  7:44 AM

> Huh? Order of evaluation is implementation defined, but associativity
> is well defined: 4.5(8)

Yes, well it's even more surprising when someone who is unquestionably an expert
in Ada can be fooled into thinking this (luckily if you are using GNAT, or
probably any other Ada compiler around, you won't be fooled in practice). Here
is the evil paragraph you missed:

> 13  For a sequence of predefined operators of the same precedence
> level (and in the absence of parentheses imposing a specific
> association), an implementation may impose any association of the
> operators with operands so long as the result produced is an allowed
> result for the left-to-right association, but ignoring the potential
> for failure of language-defined checks in either the left-to-right or chosen order of association.

And it is this potential for failure of a check that Geert is disturbed by (and
me too!)

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  7:44 AM

> Huh? Order of evaluation is implementation defined, but associativity
> is well defined: 4.5(8)

Little language thing: "Huh" reads as very aggressive in English; it has the
sense of "what kind of nonsense are you talking about, you must be an idiot!"
Better always avoided!

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, November 25, 2013  8:05 AM

Sorry about that, I thought it was more like the French "Hein?"

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  8:24 AM

> Sorry about that, I thought it was more like the French "Hein?"

No problem, it's definitely more aggressive than Hein, at least to me :-)

****************************************************************

From: Bob Duff
Sent: Monday, November 25, 2013  9:07 AM

> Little language thing: "Huh" reads as very aggressive in English; it has
> the sense of "what kind of nonsense are you talking about, you must be
> an idiot!" Better always avoided!

Really?  "Huh?" doesn't come across as aggressive to me.
To me it means "I am confused", which could be because you are talking nonsense,
but also could be because I am missing something.

(I almost wrote "Huh?" instead of "Really?" above.  ;-) Neither one is intended
to be rude!)

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  9:17 AM

I know that it is not intended that way, but to many people it comes across
that way (I am not the only one; I have seen people react this way many times).

So the fact that it does not offend EVERYONE is not an argument against avoiding
it if it offends some! Think of saying "Huh" in conversation: it is hard to say
it without expressing puzzlement of the kind "do you *really* mean to say that?"
I almost NEVER hear people use it in conversation, and there are people who use
it in email who I think would never use it in conversation.

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  8:38 AM

> Yes, well it's even more surprising when someone who is unquestionably
> an expert in Ada can be fooled into thinking this (luckily if you are
> using GNAT, or probably any other Ada compiler around, you won't be
> fooled in practice).
> Here is the evil paragraph you missed:

By the way, I was also quite surprised to learn of this rule, which I had not
been aware of until a recent discussion over the parens in SPARK! :-)

****************************************************************

From: Erhard Ploedereder
Sent: Monday, November 25, 2013  9:22 AM

Despite the discussion of hein, huh and others....

> Order of evaluation is implementation defined, but associativity is
> well defined: 4.5(8)

JP is absolutely right. Associativity is well defined, so GNAT is only
implementing the standard by obeying it.

Order of evaluation is quite another thing, and there is significant
justification for being non-deterministic to allow for better register
optimization and for HW out-of-order evaluation.

****************************************************************

From: Erhard Ploedereder
Sent: Monday, November 25, 2013  9:28 AM

> JP is absolutely right. Associativity is well defined, so GNAT is only
> implementing the standard by obeying it.

I stand corrected by the other mail with the para that allows l-t-r
associativity to be broken. Indeed, who needs that?

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  9:33 AM

> JP is absolutely right. Associativity is well defined, so GNAT is only
> implementing the standard by obeying it.

Amazing: so another Ada expert (joining me and JPR) was unaware that this
is not the case! This really shows we got the language rules wrong. Once again,
the issue is that

      X + Y + Z

where X, Y, Z are of type Integer, can be evaluated as either

    (X + Y) + Z   or as   X + (Y + Z)

even though one might overflow and the other not.

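A concrete instance (a sketch; variables are used so that the expression is
not static and the 4.5(13) permission applies):

   procedure Assoc_Demo is
      X : Integer := Integer'Last;
      Y : Integer := 1;
      Z : Integer := -2;
      R : Integer;
   begin
      R := X + Y + Z;
      --  (X + Y) + Z fails an overflow check at X + Y, raising
      --  Constraint_Error; X + (Y + Z) quietly yields Integer'Last - 1.
      --  4.5(13) allows either behavior.
   end Assoc_Demo;
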
> Order of evaluation is quite another thing, and there is significant
> justification for being non-deterministic to allow for better register
> optimization and for HW out-of-order evaluation.

No, this is about associativity, NOT order of evaluation.

Back to order of evaluation for a moment:
I wonder, though. Yes, I know that it is the compiler writer's mantra that this
is a worthwhile freedom, but I would like to see actual figures to justify this
(to me unnecessary) non-determinism. Most optimizations are disappointing :-)

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  9:33 AM

> I stand corrected by the other mail with the para that allows l-t-r
> associativity to be broken. Indeed, who needs that?

Especially who needs it when it is a surprise even to the experts! I wonder if
any compiler actually does this in practice?

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, November 25, 2013  9:50 AM

>> 13  For a sequence of predefined operators of the same precedence
>> level (and in the absence of parentheses imposing a specific
>> association), an implementation may impose any association of the
>> operators with operands so long as the result produced is an allowed
>> result for the left-to-right association, but ignoring the potential
>> for failure of language-defined checks in either the left-to-right or
>> chosen order of association.
>
> And it is this potential for failure of a check that Geert is
> disturbed by (and me too!)

My understanding is that this is an "as if" rule, presumably related to allowing
X**4 to be computed as (X*X)*(X*X); I would not go as far as saying that
associativity is not defined in Ada! In particular, it would not allow A * B / C
to be computed as A * (B / C), since that would not be "an allowed result for the
left-to-right association".

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  10:36 AM

Yes, well of course, this is a strawman; no one thinks you can reassociate
A*B/C!

But the rule we are talking about is not an "as if", since it can introduce an
overflow error where none existed before.

The "as if" does not need a special rule.

Note that in the case of X**4, we need a special rule because in general x*x*x*x
is not equal to (x*x)**2 for floating-point (the first form is more accurate in
the general case). For FPT, the rule really does not apply, since almost any
reassociation will change model interval results.

Note that you are of course allowed to reassociate, even if there are parens, if
you know it will not change the result and will not introduce or eliminate an
exception; that's the normal as-if rule. For example, in

     subtype R is Integer range 1 .. 10;
     A, B, C, D : R;

     ...

     A := (B + C) + D;

it is fine to compute this as

     A := B + (C + D);

that's the normal as-if in action. But this special rule allows the introduction
of overflow errors where none would exist in the canonical order. THAT's the
surprise.

In general it is a surprise to find that the two following expressions are not
equivalent:

   (A + B) + C
   A + B + C

****************************************************************

From: Brad Moore
Sent: Monday, November 25, 2013  10:47 AM

I don't know if I speak for Canada or not, but I also pick up an aggressive
tone when "huh?" is used in written form. This is not the case, however, when
it is used in spoken form, I would say.

It comes across as equivalent to "Are you nuts?" in writing, though.

It is interesting to note that the similar form "eh?" I would interpret as
aggression-neutral, somewhat equivalent to "what's that you say?" or
"something's not right here". I don't know if that is a Canadianism, as
apparently we say "eh?" a lot, though I don't notice this generally.

****************************************************************

From: Bob Duff
Sent: Monday, November 25, 2013  11:04 AM

I agree "Eh?" seems like a neutral "What do you mean?".
And Canadians seem to use it at the end of a sentence as some sort of
punctuation, as in "Nice and warm today, eh?".  ;-)

What about "Heh?"  That's how I sometimes spell (and pronounce) "Huh?".  Guess
I'd better quit using either.

Back to the topic: For what it's worth, I am strongly in favor of determinism.
I think non-determinism should be tolerated only when there's an important
efficiency (or other) benefit.

For example, the fact that elaboration order is implementation defined has zero
efficiency benefit, and in practice is one of the top 3 or so portability issues
we see at AdaCore.  But I suppose we can't really fix that one after the fact.

But we could fix the associativity thing, I think.

****************************************************************

From: Bob Duff
Sent: Monday, November 25, 2013  11:07 AM

> Note that in the case of X**4, we need a special rule because in
> general x*x*x*x is not equal to (x*x)**2 for floating-point (the first
> form is more accurate in the general case).

I think the second form is more accurate.  Fewer multiplies --> fewer roundings.

> ...
> exception, that's the normal as if rule.

I like to call it the "as if meta-rule", because it's not a rule of the
language, it's a rule about how to interpret the rules of the language (any
high-level language, not just Ada).

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  11:19 AM

>> Note that in the case of X**4, we need a special rule because in
>> general x*x*x*x is not equal to (x*x)**2 for floating-point (the
>> first form is more accurate in the general case).
>
> I think the second form is more accurate.  Fewer multiplies
> --> fewer roundings.

SUCH a common misconception. Just goes to show once again that fpt stuff is
hard. Yes, fewer multiplications, BUT both operands of the last multiplication
have already been rounded, whereas in the first form, one of the operands of
each multiplication is exact.

It was exactly this misconception that caused the screwup in the Ada 83
definition that required repeated multiplication.

If the **2 form was more accurate, then normal "as if" would allow the
substitution, but it's not always more accurate.

>> ...
>> exception, that's the normal as if rule.
>
> I like to call it the "as if meta-rule", because it's not a rule of
> the language, it's a rule about how to interpret the rules of the
> language (any high-level language, not just Ada).

Right, and sometimes language standards have a nasty habit of unnecessarily
stating an as-if rule for one particular part of the language, bogusly implying
that it does not appear elsewhere.

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, November 25, 2013  11:31 AM

> But the rule we are talking about is not an "as if"
> since it can introduce an overflow error where none existed  before.

Precisely, the only visible difference is whether an exception is raised or not;
therefore I find it unfair to say that C is more precise than Ada in the way it
defines associativity - except if you consider that C is more precise than Ada
in defining where exceptions are raised: nowhere! ;-)

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  11:39 AM

No, that's just plain wrong thinking IMO.

In C, overflow is undefined, but if in C we write

    a + b + c

where a,b,c are int, we are guaranteed that this is evaluated as (a + b) + c and
will be well defined if neither addition causes overflow. In C, we are NOT
allowed to evaluate this as a + (b + c) if b+c could cause overflow.

In Ada, we are allowed to evaluate this as a + (b + c) even if b+c causes
overflow.

True, in Ada the bad effects of this permission are not as bad as they would be
in C, but C does not allow this abomination in the first place.

So indeed C is more precise than Ada here, no question about it!

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  11:41 AM

> Precisely, the only visible difference is whether an exception is
> raised or not; therefore I find unfair to say that C is more precise
> than Ada in the way it defines associativity - except if you consider
> that C is more precise than Ada in defining where exceptions are
> raised: nowhere! ;-)

There are some things that C does better than Ada. This is one example, another
is that you can do I/O operations from separate threads in C, and things work
fine. Of course that's more a matter of definition of libraries and threads than
the C language itself, but the fact of the matter is that a C programmer can do
printf's from separate threads, and get lines interspersed, but not erroneous
behavior and rubbish as in Ada.

****************************************************************

From: Randy Brukardt
Sent: Monday, November 25, 2013  11:59 AM

> > Yes, well it's even more surprising when someone who is
> > unquestionably an expert in Ada can be fooled into thinking this
> > (luckily if you are using GNAT, or probably any other Ada compiler
> > around, you won't be fooled in practice).
> > Here is the evil paragraph you missed:
>
> By the way, I was also quite surprised to learn of this rule, which I
> have not been aware of till a recent discussion over the parens in
> SPARK! :-)

You shouldn't have been that surprised, given that the rule is an Ada 83 rule
(11.6(5)) that was just moved in Ada 95 and has been untouched since.

I mainly bring this up to concur that this rule is evil. Back in the Ada 83
days, our optimizer guy thought it would be good to implement that rule exactly
as written. (That is, introducing exceptions where none existed initially.) At
the same time, he implemented a number of other algebraic rules (the interesting
one for this purpose being "A - C" => "A + (-C)"). The effect of those two rules
on our runtime was to introduce all kinds of new overflow exceptions (especially
in code like the Text_IO code for formatting numbers) [once a lot of "-" was
converted to "+", the association rule could apply]. We wasted quite a bit of
time debugging before we decided that the rule was evil and could only be
applied if it couldn't introduce an exception.

> Especially who needs it when it is a surprise even to the experts! I
> wonder if any compiler actually does this in practice?

I seriously doubt it. We tried and the results were unusable. Thus I think it
would be OK to kill 4.5(13). (I would not be OK with requiring an order of
evaluation of parameters.)

Note, however, that removing that rule would not stop reordering that *removed*
an exception. (I would be against that, as overflow checks are not free in many
architectures and we ought to be able to do whatever we can to get rid of them.)
But we don't need 4.5(13) for that, as other rules allow evaluating intermediate
expressions with extra precision so no one can count on A + B + C to raise an
overflow if A + B alone would overflow.
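
A sketch of what I mean (assuming the implementation holds the intermediate
sum in a wider register, which those other rules permit):

    procedure Wide_Intermediate is
       A : Integer := Integer'Last;
       B : Integer := Integer'Last;
       C : Integer := -Integer'Last;
       R : Integer;
    begin
       --  Evaluated strictly in Integer, A + B would overflow; an
       --  implementation keeping the intermediate sum in 64 bits gets
       --  A + B + C = Integer'Last, which fits, so no exception need
       --  be raised.
       R := A + B + C;
    end Wide_Intermediate;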

****************************************************************

From: Randy Brukardt
Sent: Monday, November 25, 2013  12:08 PM

...
> Back to order of evaluation for a moment:
> I wonder though, yes, I know that it is the compiler writer's mantra
> that this is a worthwhile freedom, I would like to see actual figures
> to justify this (to me unnecessary) non-determinism.
> Most optimizations are disappointing :-)

My guess is that it would be hard to quantify, as it would be difficult to tell
in most compilers when the non-determinism is being used.

I know that in Janus/Ada, we use it in rare cases where we need to spill pending
floating point results before a function call. That is a quite expensive but
necessary operation on the X86 architecture (we didn't originally do it, but
user bug reports forced us to do so as nested calls were running out of
registers and crashing). By reordering parameters and expressions, we can often
avoid the need to do that spill.

There are probably other cases where we use it that I don't even know about.
(Our optimizer guy made aggressive use of Ada's permissions; I had to rein him
in when it caused problems, as in the introducing-overflow case).

P.S. This whole subthread is under the wrong topic -- it doesn't have anything
to do with "suppressible errors" and I doubt Bob will want it in his AI!

****************************************************************

From: Geert Bosch
Sent: Monday, November 25, 2013 12:31 PM

> I seriously doubt it. We tried and the results were unusable. Thus I
> think it would be OK to kill 4.5(13). (I would not be OK with
> requiring an order of evaluation of parameters.)

Yes, let's do that. I think everyone can agree removing this fixes a nasty wart
that is becoming increasingly important and visible with the new pre- and
post-conditions, SPARK 2014, and increased use of model-based code generation,
static analyzers, and provers.

> Note, however, that removing that rule would not stop reordering that
> *removed* an exception. (I would be against that, as overflow checks
> are not free in many architectures and we ought to be able to do
> whatever we can to get rid of them.) But we don't need 4.5(13) for
> that, as other rules allow evaluating intermediate expressions with
> extra precision so no one can count on A + B + C to raise an overflow if A + B
> alone would overflow.

Of course.

****************************************************************

From: Steve Baird
Sent: Monday, November 25, 2013  1:00 PM

> Consider S being an arbitrary String:
>
>    Len : Natural := (if S'Last < S'First then 0 else S'Last - S'First
>    + 1);
>
> Note how this can overflow if and only if the compiler chooses an evil
> association?

This is not central to the main topic being discussed on this thread, but I
don't think the permission of 4.5(13) applies to Geert's example and therefore
no overflow is possible (partly because the condition ensures that S'First is
positive if the subtraction is evaluated).

You can't associate
    1 - 2 + 3
as
    1 - (2 + 3)
because that yields the wrong answer.

The permission clearly applies if we replace
   S'Last - S'First  + 1
with
   S'Last + (-S'First) + 1

and all of the rest of Geert's discussion makes sense if we assume this
substitution. I see this as confirmation of Geert's point that the reassociation
permission makes it harder to reason about programs.
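
To spell out the substituted form (a sketch; the function is hypothetical):

    --  After rewriting S'Last - S'First + 1 as S'Last + (-S'First) + 1,
    --  the permission, as worded, seems to allow even the grouping
    --  (S'Last + 1) + (-S'First). When S'Last = Positive'Last, that
    --  partial sum overflows, even though the left-to-right order
    --  cannot overflow for any valid String bounds.
    function Length (S : String) return Natural is
    begin
       if S'Last < S'First then
          return 0;
       else
          return S'Last - S'First + 1;  --  safe only left-to-right
       end if;
    end Length;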

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  2:45 PM

> We wasted quite a bit of time debugging before we decided that the
> rule was evil and could only be applied if it couldn't introduce an
> exception.

That's meaningless, the rule is ONLY about introducing exceptions, otherwise it
has zero content!

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  2:46 PM

> My guess is that it would be hard to quantify, as it would be
> difficult to tell in most compilers when the non-determinism is being used.

Why hard to quantify? You disable this optimization and see whether any
programs are noticeably affected.

****************************************************************

From: Randy Brukardt
Sent: Monday, November 25, 2013  4:47 PM

Because most compilers don't treat this as something that can be separately
disabled, or even as an "optimization". It's simply part of the semantics of
Ada. In our case, we just used it when it was advantageous and certainly there
isn't anything consistently marking that we're depending on non-determinism
(it's not something that we would have necessarily mentioned in comments). So
there isn't any practical way to find out when it was used or what the effect
was.

****************************************************************

From: Randy Brukardt
Sent: Monday, November 25, 2013  4:51 PM

> > We wasted quite a bit of time debugging before we decided that the
> > rule was evil and could only be applied if it couldn't introduce an
> > exception.

"The rule" was the rule in our optimizer about re-association, not any
particular rule in the RM. You can only apply that rule if no exceptions can be
introduced, meaning that we decided that we cannot use 4.5(13) to any benefit.
As you note, it is an "as-if" rule in the absence of exceptions, so the RM need
not mention it.

> That's meaningless, the rule is ONLY about introducing exceptions,
> otherwise it has zero content!

That's precisely my point -- the RM rule has zero useful content, because
introducing exceptions willy-nilly makes it much more difficult to write code
that won't fail -- I'm not sure that it is even possible to do so in the general
case.

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  5:06 PM

> That's precisely my point -- the RM rule has zero useful content,
> because introducing exceptions willy-nilly makes it much more
> difficult to write code that won't fail -- I'm not sure that it is
> even possible to do so in the general case.

What are you talking about? Just use parentheses!
You can't reassociate (A+B)+C.

****************************************************************

From: Randy Brukardt
Sent: Monday, November 25, 2013  4:54 PM

> Yes, let's do that. I think everyone can agree removing this fixes a
> nasty wart that is becoming increasingly important and visible with
> the new pre- and post-conditions, SPARK 2014, and increased use of
> model-based code generation, static analyzers, and provers.

I presume that you'll be submitting an AI to this effect? A !question and some
good examples of why applying this permission would be harmful would be nice.
(The !wording is easy for a change, so I don't really need that.)

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  5:05 PM

> Because most compilers don't treat this as something that can be
> separately disabled, or even as an "optimization". It's simply part of
> the semantics of Ada. In our case, we just used it when it was
> advantageous and certainly there isn't anything consistently marking
> that we're depending on non-determinism (it's not something that we
> would have necessarily mentioned in comments). So there isn't any
> practical way to find out when it was used or what the effect was.

Don't speak for "most compilers" you don't have the background to do that :-)
And specific Ada backends are mostly a thing of the past!

What you say may be idiosyncratically true for your compiler, but I expect
other compilers could easily test this out. After all, a C compiler does not have
a fixed evaluation order, but a Java compiler does, so any reasonably flexible
compiler will have both capabilities (e.g. gcc, and it would be a fairly easy
experiment to carry out).

****************************************************************

From: Bob Duff
Sent: Monday, November 25, 2013  5:14 PM

>...After all a C compiler does not have a fixed evaluation  order, but
>a Java compiler does, so any reasonably flexible compiler  will have
>both capabilities (e.g. gcc, and it would be a fairly  easy experiment
>to carry out).

Do any other languages have this rule that "A+B+C" can be evaluated as "A+(B+C)"
even though that might introduce overflow?  I think maybe Fortran does, and I
think maybe that's where Ada inherited it from.  But I'm not a Fortran lawyer.

I've no idea what gfortran does in this regard.

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  5:42 PM

>> ...After all a C compiler does not have a fixed evaluation order, but
>> a Java compiler does, so any reasonably flexible compiler will have
>> both capabilities (e.g. gcc, and it would be a fairly easy experiment
>> to carry out).
>
> Do any other languages have this rule that "A+B+C" can be evaluated as
> "A+(B+C)" even though that might introduce overflow?  I think maybe
> Fortran does, and I think maybe that's where Ada inherited it from.
> But I'm not a Fortran lawyer.

Yes Fortran does, even for floating-point, very evil!

> I've no idea what gfortran does in this regard.

I am sure it goes left to right :-)

****************************************************************

From: Randy Brukardt
Sent: Monday, November 25, 2013  6:04 PM

> > That's precisely my point -- the RM rule has zero useful content,
> > because introducing exceptions willy-nilly makes it much more
> > difficult to write code that won't fail -- I'm not sure that it is
> > even possible to do so in the general case.
>
> What are you talking about? Just use parentheses!
> You can't reassociate (A+B)+C.

For integers, the result of A+B+C and (A+B)+C and A+(B+C) is the same so long as
there are no exceptions. Our optimizer guy had clearly forgotten the last part,
so he was ignoring the parens even if present. But we also had problems with
array indexing code, where there isn't any way the user could add parens -- and
we wouldn't want to be adding them manually, because that would block various
optimizations that we do want to make. Indeed, I don't see any value to
recording parens in the intermediate code for integer expressions (the situation
is very different for float expressions, of course).

****************************************************************

From: Randy Brukardt
Sent: Monday, November 25, 2013  6:18 PM

> > Because most compilers don't treat this as something that can be
> > separately disabled, or even as an "optimization". It's simply part
> > of the semantics of Ada. In our case, we just used it when it was
> > advantageous and certainly there isn't anything consistently marking
> > that we're depending on non-determinism (it's not something that we
> > would have necessarily mentioned in comments). So there isn't any
> > practical way to find out when it was used or what the effect was.
>
> Don't speak for "most compilers" you don't have the background to do
> that :-) And specific Ada backends are mostly a thing of the past!

Nobody said anything about "back-ends". There are a lot of uses of
non-determinism in the middle phases of compilation, during the generation of
and transforming into "better" intermediate code than would occur without it.
(The generation phase pretty much has to be Ada-specific; transforming may or
may not be). The back-end certainly isn't the only place where non-determinism
is used.

I recall someone from Rational making similar points in an ARG discussion years
ago. I don't think it is just me (or I wouldn't have put it the way I did).

> What you say may be idiosyncractically true for your compiler, but I
> expect other compilers could easily test this out. After all a C
> compiler does not have a fixed evaluation order, but a Java compiler
> does, so any reasonably flexible compiler will have both capabilities
> (e.g. gcc, and it would be a fairly easy experiment to carry out).

I don't think the majority of Ada implementations can also handle Java or many
other languages (i.e., Irvine, Rational technology; not sure about ObjectAda).
That sort of generality is primarily with GNAT, IMHO.

In any case, this is getting *way* off-topic from the already off-topic
discussion. :-) It doesn't really pay to argue about it, because everyone makes
the mistake of thinking that their implementation is "typical", when in actual
fact there pretty much is no such thing (the implementations all vary wildly -
Rational and ASIS have definitively proved that to me).

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  6:29 PM

>>> That's precisely my point -- the RM rule has zero useful content,
>>> because introducing exceptions willy-nilly makes it much more
>>> difficult to write code that won't fail -- I'm not sure that it is
>>> even possible to do so in the general case.
>>
>> What are you talking about? Just use parentheses!
>> You can't reassociate (A+B)+C.
>
> For integers, the result of A+B+C and (A+B)+C and A+(B+C) is the same
> so long as there is no exceptions.

Uh yes, I think we know that, this whole discussion is about exceptions! But
what I am saying is that your statement that the Ada rule makes it hard to
write portable code is dubious, since you can just use parens to suppress this
unwanted flexibility.

> Our optimizer guy had clearly forgotten the
> last part, so he was ignoring the parens even if present. But we also
> had problems with array indexing code, where there isn't any way the
> user could add parens -- and we wouldn't want to be adding them
> manually, because that would block various optimizations that we do want to make.

Well I have no idea what you are talking about here, sounds like some kind of
internal chaos in your compiler, definitely not relevant to the discussion here.
We are talking about standard Ada, not existing compilers with bugs, yes, if you
have bugs you may have trouble writing portable code!

> Indeed, I don't
> see any value to recording parens in the intermediate code for integer
> expressions (the situation is very different for float expressions, of
> course).

Well except for the rule we are discussing, and also, depending on what you mean
by intermediate code, you have to record the exact number of parentheses for
conformance checking. I have no idea why you think it is important to record
parens in the fpt case and not in the integer case, that makes zero sense to me.

****************************************************************

From: Robert Dewar
Sent: Monday, November 25, 2013  6:31 PM

> Nobody said anything about "back-ends". There are a lot of uses of
> non-determinism in the middle phases of compilation, during the
> generation of and transforming into "better" intermediate code than
> would occur without it. (The generation phase pretty much has to be
> Ada-specific; transforming may or may not be). The back-end certainly
> isn't the only place where non-determinism is used.

Randy, you are all over the map here. A moment ago you were talking about
register allocation, now you are talking about high level transformations in the
front end. I can't see ANY sensible compiler taking advantage of the
reassociation rule in the front end.

These kinds of reorderings are highly architecture-dependent; doing them anywhere
except in the architecture-aware back end makes zero sense.

****************************************************************

From: Randy Brukardt
Sent: Monday, November 25, 2013  6:35 PM

...
> Nobody said anything about "back-ends". There are a lot of uses of
> non-determinism in the middle phases of compilation, during the
> generation of and transforming into "better"
> intermediate code than would occur without it. (The generation phase
> pretty much has to be Ada-specific; transforming may or may not be).
> The back-end certainly isn't the only place where non-determinism is
> used.

To bring this closer to the original off-topic discussion :-), I believe this
happens because Ada has rules that effectively require out-of-order evaluation
of parameters.

For instance, if we have a tagged type like:
    package P is
        type Root is tagged...
        procedure Do_Two (A, B : Root); -- Two controlling operands.
        function TI return Root; -- Tag-indeterminate function.
        function CW return Root'Class; -- Dynamically-tagged function.
    end P;

in the call:

    P.Do_Two (TI, CW);

you have to evaluate the second parameter first in order to find out the tag
with which TI needs to dispatch. A strict left-to-right order would require
evaluating the function call TI before you know the tag with which to dispatch,
and that isn't going to work.

In cases like this, an Ada compiler is essentially depending on the
non-deterministic evaluation of parameters in order to get the correct Ada
semantics. If we were to require a deterministic evaluation of parameters, we
would also have to do something with cases like this one which cannot be
evaluated in a strict left-to-right order. (Making them illegal would be
incompatible of course, and making them erroneous is hideous...)

****************************************************************

From: Randy Brukardt
Sent: Monday, November 25, 2013  7:23 PM

> > Nobody said anything about "back-ends". There are a lot of uses of
> > non-determinism in the middle phases of compilation, during the
> > generation of and transforming into "better" intermediate code than
> > would occur without it. (The generation phase pretty much has to be
> > Ada-specific; transforming may or may not be). The back-end
> > certainly isn't the only place where non-determinism is used.
>
> Randy you are all over the map here. A moment ago you were talking
> about register allocation, now you are talking about high level
> transformations in the front end. I can't see ANY sensible compiler
> taking advantage of the reassociation rule in the front end.

I agree, but Geert (and others) were also talking about the more general
"non-determinism" in Ada, and that's what I was talking about in the quoted
message. It would be nice if the subjects of these various topics were
different, but breaking the thread by changing the subject usually just leads to
more chaos. (And I don't think I ever said anything about "register allocation";
I've always been talking about higher-level uses of non-determinism.)

> These kind of reorderings are highly architecture dependent, doing
> them anywhere except in the architecture aware back end makes zero
> sense.

Not always. First, Ada sometimes requires such reorderings (see my other
message). Secondly, it can be reasonable to have the middle phases aware of some
of the more important characteristics of the target architecture. We do that so
that we don't have to duplicate large chunks of complicated code for each target
(and of course, a front-end has to be aware of at least some target
characteristics -- a front-end that didn't know the bounds of Integer would have
issues with static expression evaluation). We of course keep the target
characteristics separated from the majority of the compiler code, so it's clear
when we're depending on them in some way.

****************************************************************

From: Erhard Ploedereder
Sent: Tuesday, November 26, 2013  5:23 PM

>> Because most compilers don't treat this as something that can be
>> separately disabled, or even as an "optimization". It's simply part
>> of the semantics of Ada. In our case, .....
>
> Don't speak for "most compilers" you don't have the background to do
> that :-) And specific Ada backends are mostly a thing of the past!

Let me come to Randy's rescue as the old "I love optimizations" guy. This comes
from someone who has built backends for C, Modula, Ada, and several proprietary
languages. Admittedly, I never had to generate code for a language that insisted
on strict left-to-right order.

All this has absolutely zero to do with Ada back-ends and Randy is still right.

What we are talking about here is a typical way of assessing register pressure
at an early stage in the back-end. See Bob Morgan's book on how the VAX compiler
back-ends worked. It is well known that by evaluating the "higher pressure"
subtree first, you save one register at the level above for unbalanced trees
(for a uniform class of registers, etc., etc.). This can add up transitively.
Absent a language prohibition, this is clearly the way to go, among other things
because some HW might do it anyway for simple instructions, never mind what
sequence the compiler produced.
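
The underlying arithmetic is the classic Sethi-Ullman labeling; a toy sketch
(mine, not taken from any real compiler):

    package Sethi_Ullman is
       type Expr;
       type Expr_Ptr is access Expr;
       type Expr (Is_Leaf : Boolean) is record
          case Is_Leaf is
             when True  => null;
             when False => Left, Right : Expr_Ptr;
          end case;
       end record;
       --  Registers needed when the heavier subtree is evaluated first
       function Regs_Needed (E : Expr_Ptr) return Positive;
    end Sethi_Ullman;

    package body Sethi_Ullman is
       function Regs_Needed (E : Expr_Ptr) return Positive is
       begin
          if E.Is_Leaf then
             return 1;
          end if;
          declare
             L : constant Positive := Regs_Needed (E.Left);
             R : constant Positive := Regs_Needed (E.Right);
          begin
             if L = R then
                return L + 1;                --  balanced: one extra
             else
                return Positive'Max (L, R);  --  heavier side first
             end if;
          end;
       end Regs_Needed;
    end Sethi_Ullman;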

Once you have made the pressure assessment, which was based implicitly on the
assumption of a particular evaluation order, it is quite risky to generate code
to evaluate in any other order, since you are violating invariants about the
validity of your register pressure algorithm. So, what you are asking for is to
turn off register pressure measuring. Well, what if later phases assume that
they find meaningful results from that algorithm? (And doing register pressure
for l-t-r as an alternative is not what I would call "turning the optimization
off", but rather changing the compiler significantly. I never needed to
implement l-t-r.)

One is very ill-advised to "turn off" the optimization of "heavier-tree first"
(which, in some folks' eyes is but competent code generation) later on in code
transformation/generation, if earlier phases already assumed it. I know for
sure, because we were asked to turn off already implemented optimizations and,
for a while, the compiler without the optimizations turned out to be flaky as
hell, because subsequent analyses assumed that the earlier transformations
(normalizations) or analyses had indeed been done. Null-pointer dereferences
inside the compiler and bad generated code are among the effects and they are
nasty bugs, for sure.

Specific case in point:
If you have established a "heavier-subtree-first" regime (or any regime
whatsoever, including the left-to-right scheme), it is extremely risky to then
deviate from this decision during the later phases, because your discovered CSE
definitions and usages suddenly are invalidated, finding uses of the CSE before
the Defs in the alternative execution order. While you might find all affected
places eventually to fix your compiler, secondary derivations, e.g., about the
goodness of CSEs and the resulting register pressure influences, are next to
impossible to undo, because they also have implicit premises, e.g., about the
heaviness of nodes in-between.

So, turning off any "optimization" related to execution order is very risky. I
would certainly not support the notion without BIG bucks asking for it, because
I know that chasing the resulting bugs due to violated premises will cost a LOT
of developer time.

In summary, fooling around with the order of evaluation once established or
assumed by the back-end is among the most risky compiler options that I know. It
definitely is not a "oh just turn this optimization off" thing.

Of course, you can build your compiler without optimization, truly
left-to-right, truly canonical for some definition of canonical, all dead code
present, etc., as long as nobody asks you to "just turn on the optimizations"
because, as a result, the compiler will break quite similarly for a few months.

Will the heavier-tree-first optimization buy much? Who knows? With some effort,
I can write examples where this optimization alone will yield factors on
multi-core. Of course, the examples would be contrived, causing a global cache
miss due to an unnecessary spill inside a loop. My claim is just as unsupported
as the claim that optimizations are generally overrated. I would agree that
(with a few exceptions such as register allocation) every single one will not
save the day, but what about a group of 30 well chosen ones? E.g. getting rid of
95% of constraint checks? Again, who knows?

****************************************************************

From: Erhard Ploedereder
Sent: Tuesday, November 26, 2013  5:38 PM

> I can't see ANY sensible compiler
> taking advantage of the reassociation rule in the front end.

Just curious.... Do you do
  A + 5 + 8   ===   (A+5) + 8
or
  A + 5 + 8   ===   A + 13  (taking advantage of reassociation)

How about
 5 + A - A   === (5 + A) - A
or
 5 + A - A  === 5

(Incidentally, I agree that  A * B * C in Float needs to be (A*B)*C.)

My point is that reassociation is not a black-and-white issue (presuming
that you find the second-line transforms OK). Of course, one takes
advantage of it quite often in a compiler, the above certainly in the
target-independent part of the compiler.

The only soul searching comes when accuracy, exceptions or traps come into play.
That is where the meat of the discussion ought to be, not on generally damning
or blessing a particular transformation.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 26, 2013  5:53 PM

>> I can't see ANY sensible compiler
>> taking advantage of the reassociation rule in the front end.

The reassociation rule we are talking about is the one that allows you to
introduce overflow errors. There is no other relevant rule, so the above is to
be read *SPECIFICALLY* in that context. I.e., I cannot see ANY sensible
compiler taking advantage of the reassociation rule and introducing an
overflow error.

>
> Just curious.... Do you do
>    A + 5 + 8   ===   (A+5) + 8
> or
>    A + 5 + 8   ===   A + 13  (taking advantage of reassociation)
>
> How about
>   5 + A - A   === (5 + A) - A
> or
>   5 + A - A  === 5

These are totally irrelevant to the discussion, they are allowed "as if"
transformations which would be allowed whether or not the reassociation rule is
present in the RM. So these examples are red herrings in this discussion.

> My point is that reassociation is not a black-and-white issue (presuming
> that you find the 2.line transforms o.k.).   Of course, one takes
> advantage of it quite often in a compiler, the above certainly in the
> target-independent part of the compiler.

It is impossible not to find your examples OK, they are clearly valid. No one
could possibly dispute that, but they have nothing whatever to do with the
reassociation rule in the RM.

> The only soul searching comes when accuracy, exceptions or traps come
> into play. That is where the meat of the discussion ought to be, not
> on generally damning or blessing a particular transformation.

That's ALL we are discussing: the reassociation rule says you can freely
reassociate without worrying about exceptions (it has nothing to say about
allowing you to change the accuracy!)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 26, 2013  6:12 PM

> That's ALL we are discussing: the reassociation rule says you can
> freely reassociate without worrying about exceptions (it has nothing
> to say about allowing you to change the accuracy!)

Nit-pick: Actually, it does say that you can't change the accuracy:

"...an implementation may impose any association of the operators with operands
so long as the result produced is an allowed result for the left-to-right
association..."

so "nothing to say" is inaccurate. (Pun semi-intended.) (The wording does allow
a different value [bit pattern], but it has to be in the same model interval,
which means the required accuracy is unchanged; the AARM notes confirm that).

Which of course doesn't change your point.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 26, 2013  7:01 PM

> Nit-pick: Actually, it does say that you can't change the accuracy:

Right, that's what I mean, it has nothing to say about allowing you to change
the accuracy. Although if you have some amazing fpt proof stuff that allows you
to show that the model interval is the same or narrower, then as-if allows you
to reassociate.

> so "nothing to say" is inaccurate. (Pun semi-intended.) (The wording
> does allow a different value [bit pattern], but it has to be in the
> same model interval, which means the required accuracy is unchanged;
> the AARM notes confirm that).

Right, so it has nothing to say, since it says nothing that would not be true
WITHOUT any statement. That's what I mean.

> Which of course doesn't change your point.

****************************************************************

From: Geert Bosch
Sent: Tuesday, November 26, 2013  7:45 PM

> What we are talking about here is a typical way of assessing register
> pressure at an early stage in the back-end. See Bob Morgan's book on
> how the VAX compiler back-ends worked. It is well known that by
> evaluating the "higher pressure" subtree first, you save one register
> at the level above for unbalanced trees (for a uniform class of
> registers, etc., etc.). This can add up transitively. Absent a language
> prohibition, this is clearly the way to go, among other things because
> some HW might do it anyway for simple instructions, never mind what
> sequence the compiler produced.

This is a dangerous line of reasoning. You can generate code doing evaluation in
one order while preserving side effects in another order. They really don't have
much to do with one another. Compilers will interleave evaluation of both
sides, and, similarly, modern hardware will have hundreds of instructions in
various stages of execution, but they'll all preserve the semantics of (some)
sequential execution obeying the semantics of the source language, whether a
high-level language or machine code.

While 1970s compiler technology may not be applicable today, Knuth's
proclamation that "early optimization is the root of all evil" is still valid
today. We should forget about small efficiencies most of the time.

The following quote seems particularly applicable and timeless:
> "The order in which the operations shall be performed in every
> particular case is a very interesting and curious question, on which
> our space does not permit us fully to enter. In almost every
> computation a great variety of arrangements for the succession of the
> processes is possible, and various considerations must influence the
> selection amongst them for the purposes of a Calculating Engine.
> One essential object is to choose that arrangement which shall tend to
> reduce to a minimum the time necessary for completing the
> calculation."  Ada Byron's notes on the analytical engine, 1842.

****************************************************************

From: Jeff Cousins
Sent: Wednesday, November 27, 2013  10:24 AM

> By the way, it has always puzzled me how much the Ada designers like
> non-determinism. Yes, in the language the order of write back of out
> parameters is non-deterministic. Why? I can't figure out ANY advantage of
> making something like this non-deterministic!

In our SIL 2 review, the various places where the order of
evaluation/conversion/assignment is said by the RM to be arbitrary were cited as
Ada's weaknesses.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  10:42 AM

Indeed, and quite correctly cited too IMO. SPARK by the way completely
eliminates ALL non-determinism from the language. That was considered an
essential first step in creating a language suitable for formal reasoning.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  10:46 AM

> In our SIL 2 review, the various places where the order of
> evaluation/conversion/assignment is said by the RM to be arbitrary
> were cited as Ada's weaknesses.

One interesting idea would be to create a separate document that creates a
subset of Ada by specifying orders for all those cases where arbitrary ordering
is required (*) and then compilers could certify that they followed these
requirements (possibly by use of some switch).

Probably this document could also require certain behaviors for at least some
bounded error situations, and perhaps restrict usage that leads to erroneous
execution???

(*) SPARK would be a subset of this subset, since SPARK often works by
eliminating the effects of arbitrary ordering rather than specifying an
ordering. For instance expressions have no side effects, so order of evaluation
of expressions doesn't matter from a side effect point of view. Of course SPARK
does not allow reordering that changes results or introduces exceptions!

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 27, 2013  3:12 PM

> > What we are talking about here is a typical way of assessing
> > register pressure at an early stage in the back-end. See Bob
> > Morgan's book on how the VAX compiler back-ends worked. It is well
> > known that by evaluating the "higher pressure" subtree first, you
> > save one register at the level above for unbalanced trees (for a
> > uniform class of registers, etc., etc.). This can add up
> > transitively. Absent a language prohibition, this is clearly the way
> > to go, among other things because some HW might do it anyway for
> > simple instructions, never mind what sequence the compiler produced.
>
> This is a dangerous line of reasoning. You can generate code doing
> evaluation in one order while preserving side effects in another
> order. They really have not much to do with one another. While
> compilers will interleave evaluation of both sides, and, similarly,
> modern hardware will have hundreds of instructions in various stages
> of execution, but they'll all preserve the semantics of (some)
> sequential execution obeying the semantics of the source language,
> whether a high level language or machine code.

*This* is a dangerous line of reasoning. Of course, "as-if" optimizations are
always allowed, both at the compiler level and machine level. But the
side-effects that matter for this discussion are the ones that (potentially)
have an external effect, which are directly the result of a machine instruction,
and which can have an outsized runtime impact:

(1) The side-effects that result from external function calls;
(2) The effects of accessing volatile objects;
(3) The place where exceptions are raised, relative to (1) and (2).

If you are presuming a strict evaluation order, then you (a compiler) *cannot*
move any of these things, under any circumstances. That's because the
side-effects are tied to the execution of a single machine instruction, which
cannot be reasonably split.

One could of course inline to mitigate (1), but front-end inlining is the very
definition of "early optimization", so it can't be considered here (based on
your next paragraph).

Plus, keep in mind that you *cannot* evaluate parameters in a strict
left-to-right order in every case and still properly implement Ada semantics (I
showed such an example on Friday). To eliminate these cases would require
introducing incompatibilities, and that brings us back to the original subject
of this thread -- how to *avoid* adding more incompatibilities. I don't see how
we could reconcile both intents.

> While 1970s compiler technology may not be applicable today, Knuth's
> proclamation that "early optimization is the root of all evil" is
> still valid today. We should forget about small efficiencies most of
> the time.

Pretty much the only meaningful thing that differentiates compilers (as opposed to
eco-systems) for a standardized language like Ada is the way that they find
(or don't) "small efficiencies". (Aside: And I don't agree that the efficiencies
in question are necessarily small; the cost of a float spill is the reading and
writing of 11 dwords of memory and that is going to be significant in any
event.) Pretty much everything else about a compiler is identical because of the
requirements of the Standard. If one says that "small efficiencies" can't be
found, then pretty much every compiler will be functionally identical. In such a
case, there is no business case for there even existing more than one compiler
for a language (it would make a lot more sense for a small company like mine to
build tools for GNAT rather than building Ada compiler where the ability to add
value is very strictly limited) - at least so long as the one that exists is
open source. And of course in that case, there is no need for Ada
Standardization, either.

So I conclude that this group *exists* because of the ability of compilers to
find "small efficiencies" for particular target markets. To dismiss that out of
hand is essentially dismissing the reason the Ada Standard exists at all.

Note that I say the above even though I don't think Janus/Ada takes much
advantage of non-canonical orders. We of course do so to implement cases like my
example on Friday (that's done by introducing temporaries). But generally we try
to keep side-effects in a canonical order so that optimization doesn't change
the effect of a program too much. The problem is that I have no idea of where we
might have taken advantage of non-canonical orders, and there is no practical
way to find them (as it is a basic rule of Ada that would not require special
documentation). Nor have we made any attempt to find out what the third-party
backends do in such cases. So changing this would be a non-starter for me.

P.S. I'm not sure what "strict left-to-right order" means when named notation
and/or default parameters are used in calls. Our compiler converts all calls to
positional notation with default expressions made explicit before we start any
sort of code generation; the back end phases only deal with positional calls. If
we're talking the order that named parameters exist in the source code, that
would be *very* expensive to implement in our compiler, as we'd have to
introduce temporaries for every parameter to get the evaluation order right, or
abandon the existing invariant that the parameters are in passing order when
making calls, or ???. I don't know for sure, but I thought that this was a
common implementation strategy.

****************************************************************

From: Arnaud Charlet
Sent: Wednesday, November 27, 2013  3:24 PM

> Pretty much the only meaningful thing that differentiates compilers (as
> opposed to eco-systems) for a standardized language like Ada is the
> way that they find (or don't) "small efficiencies". (Aside: And I
> don't agree that the efficiencies in question are necessarily small;
> the cost of a float spill is the reading and writing of 11 dwords of
> memory and that is going to be significant in any event.) Pretty much
> everything else about a compiler is identical because of the
> requirements of the Standard. If one says that

That's certainly not the case, ease of use, quality of error messages, error
recovery, which host and targets are supported, quality of support,
responsiveness to bug fixes and enhancements, ability to maintain and document
known problems, etc... are much more important than micro optimizations to most
users.

Most of our customers don't really care about performance, actually; only a few
do care (a lot, sometimes).

Also, you started with "as opposed to eco-systems", but that's also an
unrealistic premise: customers don't buy just a compiler these days, they buy a
whole toolset (if not more) where the compiler is only a small piece, so this is
not what makes the difference in most cases.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  3:41 PM

> the cost of a float
> spill is the reading and writing of 11 dwords of memory and that is
> going to be significant in any event.

what's your model for this odd claim?

> Pretty much everything else about a compiler
> is identical because of the requirements of the Standard. If one says
> that "small efficiencies" can't be found, then pretty much every
> compiler will be functionally identical.

This is nonsense, there are lots of opportunities for BIG efficiencies in Ada
compilers, e.g. handling of a bit-packed array slice (which right now GNAT is
not good at, but would be a HUGE efficiency gain if implemented). Another
example is optimization of String'Write to do one big write in the normal case
rather than the generally required separate element-by-element write. There are
LOADS of such cases (I would guess we have well over a hundred similar
enhancement requests filed, and many improvements to the compiler over time are
in this category). Small fry like order of evaluation of expressions is
relatively unimportant compared to this.

Other examples are efficient handling of exceptions and controlled types.

> In such a case, there is no business case for there even existing more
> than one compiler for a language (it would make a lot more sense for a
> small company like mine to build tools for GNAT rather than building
> > an Ada compiler where the ability to add value is very strictly
> limited) - at least so long as the one that exists is open source. And
> of course in that case, there is no need for Ada Standardization, either.

Different Ada companies do and have concentrate(d) on different opportunities,
and there are many reasons why it is a good thing to have more than one.

> So I conclude that this group *exists* because of the ability of
> compilers to find "small efficiencies" for particular target markets.
> To dismiss that out of hand is essentially dismissing the reason the
> Ada Standard exists at all.

TOTAL NONSENSE (sorry for all upper case) in my opinion.

For one thing, suppose we did have a world where there is only one compiler. The
standard would be critical for letting a programmer know what is guaranteed to
be portable across architectures and what is not!

> P.S. I'm not sure what "strict left-to-right order" means when named
> notation and/or default parameters are used in calls. Our compiler
> converts all calls to positional notation with default expressions
> made explicit before we start any sort of code generation; the back
> end phases only deal with positional calls. If we're talking the order
> that named parameters exist in the source code, that would be *very*
> expensive to implement in our compiler, as we'd have to introduce
> temporaries for every parameter to get the evaluation order right, or
> abandon the existing invariant that the parameters are in passing
> order when making calls, or ???. I don't know for sure, but I thought that
> this was a common implementation strategy.

Surprised! It would be trivial in GNAT: we would test each expression in the
call in sequence to see if it has side effects, and if so (it's the unusual
case) eliminate those side effects in left-to-right order. To do this would take
about 20 minutes of work, since all the primitives are at hand; it would be
something like

     --  Walk the actuals of call P in source order. Remove_Side_Effects
     --  rewrites an expression that has side effects into a reference to
     --  a temporary that captures its value, fixing its evaluation point.
     A := First_Actual (P);
     while Present (A) loop
        Remove_Side_Effects (A);
        Next_Actual (A);
     end loop;

Well, there are probably some details left out, but still, not a big deal once
we decided what was meant by evaluation in order (as Randy points out, there
are some interesting cases).

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 27, 2013  5:07 PM

...
> > Pretty much the only meaningful thing that differentiates compilers
> > (as opposed to eco-systems) for a standardized language like Ada is
> > the way that they find (or don't) "small efficiencies". (Aside:
> > And I don't agree that the efficiencies in question are necessarily
> > small; the cost of a float spill is the reading and writing of 11
> > dwords of memory and that is going to be significant in any event.)
> > Pretty much everything else about a compiler is identical because of
> > the requirements of the Standard. If one says that
>
> That's certainly not the case, ease of use, quality of error messages,
> error recovery, which host and targets are supported, quality of
> support, responsiveness to bug fixes and enhancements, ability to
> maintain and document known problems, etc... are much more important
> than micro optimizations to most users.

None of those things, other than error messages, are properties of the
*compiler*, they are properties of the business (that's especially true for
support which is mainly what you are talking about here). There is no reason
that some new business could not use the GCC compiler and provide quality
support, targets, and the like.

...
> Also you started with "as opposed to eco-systems" but that's also a
> non realistic premise: customers don't buy just a compiler these days,
> they buy a whole toolset (if not more) where the compiler is only a
> small piece, so this is not what makes the difference in most cases.

I agree with this, and that is my point. The main value to customers is in a
better eco-system. As such, it does not make business sense to build a complex
piece of software (like a compiler) when one could build an eco-system (and
support) around an existing open source compiler. An Ada compiler is the hardest
and most expensive piece of most development eco-systems -- why spend all of
your energy there (because they're infinite time sinks) when it provides very
little incremental value to your customers?

Many of us built Ada compilers when the economics was different (especially as
no open source compiler existed to build around), but that's not true today. I
keep maintaining Janus/Ada because it's very much like my child and you don't
abandon your children -- but it certainly doesn't make economic sense to do so.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  5:22 PM

>> That's certainly not the case, ease of use, quality of error
>> messages, error recovery, which host and targets are supported,
>> quality of support, responsiveness to bug fixes and enhancements,
>> ability to maintain and document known problems, etc... are much more
>> important than micro optimizations to most users.
>
> None of those things, other than error messages, are properties of the
> *compiler*, they are properties of the business (that's especially
> true for support which is mainly what you are talking about here).

totally wrong

   ease of use           property of the compiler
   quality of messages   property of the compiler
   error recovery        property of the compiler
   ease of fixing bugs   property of the compiler
   targets supported     property of the compiler

support includes all the above.

>  There is no reason
> that some new business could not use the GCC compiler and provide
> quality support, targets, and the like.

Conceptually right, but in practice this would not be easy.

> I agree with this, and that is my point. The main value to customers
> is in a better eco-system. As such, it does not make business sense to
> build a complex piece of software (like a compiler) when one could
> build an eco-system (and support) around an existing open source
> compiler. An Ada compiler is the hardest and most expensive piece of
> most development eco-systems.

The compiler might possibly be the single most expensive piece of the
eco-system, but in the big picture it is still a small part (probably only 20%
of the development resources at AdaCore, if that, go into the compiler).

And mastering the gcc compiler, for example, at the level necessary to do what
you suggest, would be a major investment, easily comparable to building a
compiler from scratch. Of course you would get a lot for that in terms of
targets, etc. but it would be a lot of work.

> Many of us built Ada compilers when the economics was different
> (especially as no open source compiler existed to build around), but
> that's not true today. I keep maintaining Janus/Ada because it's very
> much like my child and you don't abandon your children -- but it
> certainly doesn't make economic sense to do so.

Well, of course an open source compiler existed; you could have decided to build
a front end for gcc, just as we did, instead of an entire compiler, but as I say,
I think it would probably have been more work, not less. As for abandoning, part
of wisdom is learning when to abandon things that should be abandoned :-)

We have certainly abandoned significant chunks of technology as we go along at
AdaCore, and look for example at the Alsys decision to abandon their compiler
technology in favor of Ada Magic.

Boy this must be the most off-topic thread for a long time, but it isn't only me
keeping it alive :-)

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 27, 2013  5:33 PM

> > the cost of a float
> > spill is the reading and writing of 11 dwords of memory and that is
> > going to be significant in any event.
>
> what's your model for this odd claim?

In Janus/Ada, at least, we have to spill the full extended precision value for
each float and all of the float flags as well (because of the way float
exceptions are managed), then the float processor is fully cleared before the
call; the process then is reversed afterwards.

At least, that's the way it worked before I added the front-end rearrangement of
code. Since then, I haven't been able to construct an example that the compiler
wasn't able to reorder to eliminate the need for any float spilling. It
certainly ought to be possible to construct such a case, but it would be
fiendishly complex (a massive forest of nested function calls that return
floats).
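
[Editor's note: A minimal sketch, with invented names, of the kind of nesting
being described -- enough live intermediate float results to exceed the eight
x87 registers and force spills across the calls:]

   function F (A, B : Long_Float) return Long_Float;

   R : constant Long_Float :=
     F (F (F (P, Q), F (S, T)),
        F (F (U, V), F (W, Z)));
   --  Each inner result must survive the calls that follow it; with
   --  enough nesting, no reordering can keep them all in registers.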

The rearrangement of course depends upon the non-determinism of subprogram
parameter ordering.

> ) Pretty much everything else about a compiler
> > is identical because of the requirements of the Standard. If one
> > says that "small efficencies" can't be found, then pretty much every
> > compiler will be functionally identical.
>
> This is nonsense, there are lots of opportunities for BIG efficiencies
> in Ada compilers, e.g. handling of a bit-packed array slice (which
> right now GNAT is not good at, but would be a HUGE efficiency gain if
> implemented). Another example is optimization of String'Write to do
> one big write in the normal case rather than the generally required
> separate element-by-element write. There are LOADS of such cases (I
> would guess we have well over a hundred similar enhancement requests
> filed, and many improvements to the compiler over time are in this
> category). Small fry like order of evaluation of expressions is
> relatively unimportant compared to this.
>
> Other examples are efficient handling of exceptions and controlled
> types.

I would have considered those sorts of things covered by the "early
optimization" that Geert was complaining about. I read his message to say that
any optimization in the front end is evil, which I agree with you is nonsense.

> > In such a case, there is no business case for there even existing
> > more than one compiler for a language (it would make a lot more
> > sense for a small company like mine to build tools for GNAT rather
> > than building Ada compiler where the ability to add value is very
> > strictly
> > limited) - at least so long as the one that exists is open source.
> > And of course in that case, there is no need for Ada
> > Standardization, either.
>
> Different Ada companies do and have concentrate(d) on different
> opportunities, and there are many reasons why it is a good thing to
> have more than

Right, but if we tie the Standard down to the point of requiring everything to
be evaluated in a canonical order, and to eliminate uncommon implementation
strategies like generic sharing (which you complain about the standard
supporting nearly every time I bring it up), there is no longer much chance for
an Ada company to add value in the compiler proper. In such an environment, one
could easily concentrate on a "different opportunity" without investing $$$$ in
a piece that hardly can be different at all from anyone else's.

> > So I conclude that this group *exists* because of the ability of
> > compilers to find "small efficiencies" for particular target markets.
> > To dismiss that out of hand is essentially dismissing the reason the
> > Ada Standard exists at all.
>
> TOTAL NONSENSE (sorry for all upper case) in my opinion.
>
> For one thing, suppose we did have a world where there is only one
> compiler. The standard would be critical for letting a programmer know
> what is guaranteed to be portable across architectures and what is
> not!

I don't think this is hard to do for a vendor; we always tried to make
*everything* portable across architectures other than a relatively small list
outlined in our documentation. The number of choices that the Standard gives us
where we actually make different choices for different targets is quite small
(even on targets as diverse as the U2200 and the Windows PC). The Standard
provides some help, but I don't think it is particularly necessary to that task.

And of course nothing prevents that one compiler vendor from creating a
standard-like document (especially since there already is a Standard). I just
don't see much value to a formal process for it in a one vendor world.

> > P.S. I'm not sure what "strict left-to-right order" means when named
> > notation and/or default parameters are used in calls. Our compiler
> > converts all calls to positional notation with default expressions
> > made explicit before we start any sort of code generation; the back
> > end phases only deal with positional calls. If we're talking the
> > order that named parameters exist in the source code, that would be
> > *very* expensive to implement in our compiler, as we'd have to
> > introduce temporaries for every parameter to get the evaluation
> > order right, or abandon the existing invariant that the parameters
> > are in passing order when making calls, or ???. I don't know for
> > sure, but I thought that this was a common implementation strategy.
>
> Surprised! it would be trivial in GNAT, we would test each expression
> in the call in sequence to see if it had side effects, and if so (it's
> the unusual case) eliminate these side effects in left to right order.
> To do this would take about 20 minutes of work, since all the
> primitives are at hand, it would be something like
>
>      A := First_Actual (P);
>      while Present (A) loop
>         Remove_Side_Effects (A);
>         Next_Actual (A);
>      end loop;
>
> well there are probably some details left out, but still, not a big
> deal once we decided what was meant by evaluation in order (as Randy
> points out there are some interesting cases).

I was thinking mostly that it would be expensive at runtime as many new memory
temporaries would be needed. We'd of course want to aggressively eliminate those
temporaries, which would complicate the implementation as well.

The alternative would be to redesign our intermediate code to support
interleaving of register temporaries and parameter passing (we didn't allow this
so that register parameter passing could be sensibly supported); that would of
course be even a bigger job because it would invalidate many of the invariants
that the optimizer and back-ends expect. (To Be Honest: I've been considering
doing this anyway for other optimization reasons -- but I haven't started it
because of the invariant factor. In any case, it would have to be quite limited
for x86 targets because of the lack of registers.)

Certainly doing it is possible, but since everything was designed from the
ground up for Ada (83) with no constraints on evaluation order, there certainly
would be quite a few bumps.

P.P.S. The idea of creating a profile or something in Annex H to specify that
everything is evaluated in canonical order (and 11.6 is nullified!) isn't a bad
one. It would let the standard define these things in a more sensible way but
would prevent making this into a barrier for adoption of Ada 2012 and beyond.
(As a specialized needs annex, implementers would not have to support it.)
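
[Editor's note: A sketch of what such a profile might look like. The profile
name below is invented; nothing like it exists in the standard:]

   pragma Profile (Canonical_Semantics);
   --  Hypothetical: every construct whose evaluation order is unspecified
   --  is evaluated in a canonical (left-to-right) order, and the 11.6
   --  permissions do not apply.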

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  5:45 PM

>>> the cost of a float
>>> spill is the reading and writing of 11 dwords of memory and that is
>>> going to be significant in any event.
>>
>> what's your model for this odd claim?
>
> In Janus/Ada, at least, we have to spill the full extended precision
> value for each float and all of the float flags as well (because of
> the way float exceptions are managed), then the float processor is
> fully cleared before the call; the process then is reversed afterwards.

You can do FAR better than this, it is really quite easy and much cheaper to
make the floating-point stack extend to memory automatically. It's interesting
that Steve Morse in his book claimed this was impossible but I found out how to
do it, and convinced him that it worked (this is what we did in the Alsys
compiler).

Of course you should use SSE these days instead of the old junk fpt anyway :-)

> At least, that's the way it worked before I added the front-end
> rearrangement of code. Since then, I haven't been able to construct an
> example that the compiler wasn't able to reorder to eliminate the need
> for any float spilling. It certainly ought to be possible to construct
> such a case, but it would be fiendishly complex (a massive forest of
> nested function calls that return floats).
>
> The rearrangement of course depends upon the non-determinism of
> subprogram parameter ordering.

> I would have considered those sorts of things covered by the "early
> optimization" that Geert was complaining about. I read his message to
> say that any optimization in the front end is evil, which I agree with
> you is nonsense.

Well if you invent what people say, not surprising you will think it is
nonsense, he said nothing of the kind!

> Right, but if we tie the Standard down to the point of requiring
> everything to be evaluated in a canonical order, and to eliminate
> uncommon implementation strategies like generic sharing (which you
> complain about the standard supporting nearly every time I bring it
> up), there is no longer much chance for an Ada company to add value in
> the compiler proper. In such an environment, one could easily concentrate
> on a "different opportunity" without investing $$$$ in a piece that hardly
> can be different at all from anyone else's.

Absurd to think that eliminating generic sharing means this, that's just a
fantasy Randy! As I say, the idea that all Ada compilers are anywhere NEAR the
same, or would be the same even with a few more constraints is absurd nonsense,
I can't imagine ANYONE else agreeing with you on this.

> I don't think this is hard to do for a vendor; we always tried to make
> *everything* portable across architectures other than a relatively
> small list outlined in our documentation. The number of choices that
> the Standard gives us where we actually make different choices for
> different targets is quite small (even on targets as diverse as the U2200 and the Windows PC).
> The Standard provides some help, but I don't think it is particularly
> necessary to that task.

You never were in the business of supporting multiple OS/Target pairs (we
currently support over 50), or you would not make such an absurd statement IMO.

> And of course nothing prevents that one compiler vendor from creating
> a standard-like document (especially since there already is a
> Standard). I just don't see much value to a formal process for it in a one
> vendor world.

Well you are not a vendor, so it's not surprising that you don't understand. Let
me assure you that AdaCore finds the standard VERY useful from this point of
view. If this were not the case we would not support continued work on the
standard.

> I was thinking mostly that it would be expensive at runtime as many
> new memory temporaries would be needed. We'd of course want to
> aggressively eliminate those temporaries, which would complicate the
> implementation as well.

Well they would not complicate anything for us, and I think the impact of such
an approach would be small (it would be a trivial experiment to do with GNAT in
fact).

> P.P.S. The idea of creating a profile or something in Annex H to
> specify that everything is evaluated in canonical order (and 11.6 is
> nullified!) isn't a bad one. It would let the standard define these
> things in a more sensible way but would prevent making this into a
> barrier for adoption of Ada 2012 and beyond. (As a specialized needs
> annex, implementers would not have to support it.)

Perhaps the HRG could be persuaded to look at this idea.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 27, 2013  5:55 PM

> >> That's certainly not the case, ease of use, quality of error
> >> messages, error recovery, which host and targets are supported,
> >> quality of support, responsiveness to bug fixes and enhancements,
> >> ability to maintain and document known problems, etc... are much
> >> more important than micro optimizations to most users.
> >
> > None of those things, other than error messages, are properties of
> > the *compiler*, they are properties of the business (that's
> > especially true for support which is mainly what you are talking about here).
>
> totally wrong
>
>    ease of use		property of the compiler
>    quality of messages	property of the compiler
>    error recovery        property of the compiler
>    ease of fixing bugs   property of the compiler
>    targets supported     property of the compiler

I strongly disagree, but I doubt that I would convince you and this certainly is
getting way off-topic. For instance, hardly anyone directly runs a compiler
these days (or ever, for Ada compilers). You use a make tool (GNATMake) or a
programming environment (GPS) and that's where the ease of use comes from, not
the compiler. Indeed, the ease of use of the compiler proper is virtually
irrelevant, so long as it can be invoked from the command line, as you can wrap
something else around it to make it easy to use.

Similarly, "targets supported" is a business decision, not really about the
compiler proper. Porting runtimes is relatively easy compared to building an
entire compiler, and the same is true of building back-ends. (Yes, a really bad
design could make that hard, but that's unlikely, since every implementer I've
talked to has had a relatively portable front-end design with relatively small
back-ends.) ...

> > I agree with this, and that is my point. The main value to customers
> > is in a better eco-system. As such, it does not make business sense
> > to build a complex piece of software (like a compiler) when one
> > could build an eco-system (and support) around an existing open
> > source compiler. An Ada compiler is the hardest and most expensive
> > piece of most development eco-systems.
>
> The compiler might possibly be the single most expensive piece of the
> eco-system, but in the big picture it is still a small part (probably
> only 20% of the development resources at AdaCore, if that, go into the
> compiler).
>
> And mastering the gcc compiler, for example, at the level necessary to
> do what you suggest, would be a major investment, easily comparable to
> building a compiler from scratch. Of course you would get a lot for
> that in terms of targets, etc. but it would be a lot of work.

I wouldn't claim it would be easy, but it would be a lot easier than starting
from scratch. Especially as other people also contribute to the gcc compiler, so
you don't have to be able to fix everything. (To get the best quality support,
you would of course have to do that eventually, but I don't think you'd have to
start there. Indeed, all you really need is hubris. :-) I unfortunately used all
of mine in the 1980s -- I know better now, which makes me too risk-averse.)

> > Many of us built Ada compilers when the economics was different
> > (especially as no open source compiler existed to build around), but
> > that's not true today. I keep maintaining Janus/Ada because it's
> > very much like my child and you don't abandon your children -- but
> > it certainly doesn't make economic sense to do so.
>
> Well of course an open source compiler existed, you could have decided
> to build a front end for gcc, just as we did instead of an entire
> compiler, but as I say, I think it would probably have been more work,
> not less.

I don't think gcc was around in 1981.

> As for abandoning, part of wisdom is learning when to abandon things that
> should be  abandoned :-)

Yeah, but what else would I have? I know it's sad, but Janus/Ada is essentially
my family (never having married or having children). I've always joked that I
was married to Ada, and it's pretty true.

(I think I've reached my mid-life crisis at 55! Like always, many years later
than everyone else.)

Anyway, if I did abandon Janus/Ada, then I'd clearly have to use GNAT like
everyone else. Which would help push us to the one-compiler situation that I was
talking about above. (After all, you guys like to repeatedly tell us that there
is only one Ada 2012 compiler. We have to be careful to not change the Standard
to the point where that becomes true permanently.)

> We have certainly abandoned significant chunks of technology as we go
> along at AdaCore, and look for example at the Alsys decision to
> abandon their compiler technology in favor of Ada Magic.
>
> Boy this must be the most off-topic thread for a long time, but it
> isn't only me keeping it alive :-)

Yeah, it's gotten sort of out of hand. I'm afraid I couldn't let Geert's
contention that "early optimization" (that is, in the front end) is bad pass. To
repeat what I said above, we can't change the Standard in such a way that we
make it uneconomic for multiple vendors to support Ada 2012 and Ada 202x. People
are not going to completely redesign their front-ends or back-ends just because
someone thinks it's "1970's technology". Anyway, hopefully we can give this a
rest.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  6:10 PM

>> totally wrong
>>
>>     ease of use		property of the compiler
>>     quality of messages	property of the compiler
>>     error recovery        property of the compiler
>>     ease of fixing bugs   property of the compiler
>>     targets supported     property of the compiler
>
> I strongly disagree, but I doubt that I would convince you and this
> certainly is getting way off-topic. For instance, hardly anyone
> directly runs a compiler these days (or ever, for Ada compilers). You
> use a make tool
> (GNATMake) or a programming environment (GPS) and that's where the
> ease of use comes from, not the compiler. Indeed, the ease of use of
> the compiler proper is virtually irrelevant, so long as it can be
> invoked from the command line, as you can wrap something else around
> it to make it easy to use.

I begin to think you really don't understand much about ease of use, so I won't
bother to try to educate you on this, but from my point of view, ease of use of
the compiler has a LOT to do with the compiler, e.g. avoiding the undesirable
Ada 83 library model. We have been very successful in the Ada market because we
understand this. I think success speaks for itself when comparing compilers.

> Similarly, "targets supported" is a business decision, not really
> about the compiler proper. Porting runtimes is relatively easy
> compared to building an entire compiler, and the same is true of
> building back-ends. (Yes, a really bad design could make that hard,
> but that's unlikely, since every implementer I've talked to has had a
> relatively portable front-end design with relatively small back-ends.)

My goodness, your lack of experience shows up again; it is not at ALL easy to
build new back ends. To take just one example, implementing software pipelining,
essential to a usable ia64 port, is FAR from trivial. But that's one of hundreds
of similar examples. The back end of GCC is incidentally FAR bigger than the Ada
front end, and represents many hundreds of person years of effort.

> I wouldn't claim it would be easy, but it would be a lot easier than
> starting from scratch. Especially as other people also contribute to
> the gcc compiler, so you don't have to be able to fix everything. (To
> get the best quality support, you would of course have to do that
> eventually, but I don't think you'd have to start there. Indeed, all
> you really need is hubris. :-) I unfortunately used all of mine in the
> 1980s -- I know better now which makes me too risk-adverse.)

You underestimate the task

> I don't think gcc was around in 1981.

Right, 1987 was the first official release

> Anyway, if I did abandon Janus/Ada, then I'd clearly have to use GNAT
> like everyone else. Which would help push us to the one-compiler
> situation that I was talking about above. (After all, you guys like to
> repeatedly tell us that there is only one Ada 2012 compiler. We have
> to be careful to not change the Standard to the point where that
> becomes true permanently.)

Well I fear we may already have done that, Ada 2012 was a huge amount of effort,
and a lot of it was for completely unimportant stuff. In fact I would say nearly
all of the Ada 2012 changes were unimportant. Read my Dr. Dobb's article for a
take on that.

> Yeah, it's gotten sort of out of hand. I'm afraid I couldn't let
> Geert's contention that "early optimization" (that is, in the front
> end) is bad pass.

You totally misread what Geert was saying!

> To repeat what I said above, we can't change the Standard in such a
> way that we make it uneconomic for multiple vendors to support Ada
> 2012 and Ada 202x. People are not going to completely redesign their
> front-ends or back-ends just because someone thinks it's "1970's
> technology". Anyway, hopefully we can give this a rest.

I am flummoxed by this paragraph, I have no idea what you are talking about, I
guess it must be some Janus idiosyncratic thing, because it makes no general
sense to me!

In particular, remember that this off topic thread was all about the
reassociation rule, are you *seriously* saying that eliminating this
reassociation rule would be the one thing that made it uneconomic for vendors to
support Ada 2012?

No one has proposed any other required change to the Ada 2012 standard!

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 27, 2013  6:29 PM

> > In Janus/Ada, at least, we have to spill the full extended precision
> > value for each float and all of the float flags as well (because of
> > the way float exceptions are managed), then the float processor is
> > fully cleared before the call; the process then is reversed afterwards.
>
> You can do FAR better than this, it is really quite easy and much
> cheaper to make the floating-point stack extend to memory
> automatically.
> It's interesting that Steve Morse in his book claimed this was
> impossible but I found out how to do it, and convinced him that it
> worked (this is what we did in the Alsys compiler).

I don't doubt it; I haven't worked on this area in years. The main problem would
be that you would have to reliably handle the floating point traps in order to
do this (certainly doing a test before every push would not be "much cheaper"),
and that we were never able to do. I'm sure that was because early OSes like SCO
Unix simply didn't handle these things properly. Anyway, we designed a model
that doesn't depend on using any traps, taking advantage of the various
permissions for checks and the fact that the bits are sticky to only check for
problems just before any store.

I've never had any reason to revisit this model (don't have customers with
critical floating point needs).

> Of course you should use SSE these days instead of the old junk fpt
> anyway :-)

Right, and *that* would be a much better reason to revisit floating point
support than trying to get a couple of cycles out of fpt exception checking.

> > I would have considered those sorts of things covered by the "early
> > optimization" that Geert was complaining about. I read his message
> > to say that any optimization in the front end is evil, which I agree
> > with you is nonsense.
>
> Well if you invent what people say, not surprising you will think it
> is nonsense, he said nothing of the kind!

Then I have no idea what he was talking about, unless he was building some sort
of straw man. There are lots of uses of the unspecified order of evaluation that
have nothing to do with performance: some calls have to be evaluated out of
order to meet Ada semantics, and canonicalizing calls with named notation makes
everything else easier.

> > Right, but if we tie the Standard down to the point of requiring
> > everything to be evaluated in a canonical order, and to eliminate
> > uncommon implementation strategies like generic sharing (which you
> > complain about the standard supporting nearly every time I bring it
> > up), there is no longer much chance for an Ada company to add value
> > in the compiler proper. In such an environment, one could easily
> > concentrate on a "different opportunity"
> > without investing $$$$ in a piece that hardly can be different at
> > all from anyone else's.
>
> Absurd to think that eliminating generic sharing means this, that's
> just a fantasy Randy! As I say, the idea that all Ada compilers are
> anywhere NEAR the same, or would be the same even with a few more
> constraints is absurd nonsense, I can't imagine ANYONE else agreeing
> with you on this.

Fair enough. I should point out that I've *always* felt this way, all the way
back to the early 1980s when we first got into this business. I considered it
the #1 business risk, in that we might not be able to sufficiently differentiate
our product from another company with deeper pockets. One of the main reasons
that we chose generic sharing, dynamic allocation of memory for mutable objects,
and similar decisions is that we wanted the compiler to be as different as
possible from everyone else's.

And I still feel that way. If I had to remove the generic sharing from the
compiler, allocate mutable objects to the largest possible size, and so on,
there would remain absolutely no reason for anyone to want to use Janus/Ada. I
couldn't compete with AdaCore in terms of support hours or number of targets,
and I can't really imagine anything that I could compete in. (Especially as
AdaCore could probably afford to clone anything that we did if it was
worthwhile.)

> > I don't think this is hard to do for a vendor; we always tried to make
> > *everything* portable across architectures other than a relatively
> > small list outlined in our documentation. The number of choices that
> > the Standard gives us where we actually make different choices for
> > different targets is quite small (even on targets as diverse as the
> > U2200 and the Windows PC). The Standard provides some help, but I
> > don't think it is particularly necessary to that task.
>
> You never were in the business of supporting multiple OS/Target pairs
> (we currently support over 50), or you would not make such an absurd
> statement IMO.

Sorry, but we always supported multiple OS/Target pairs (still do in fact).
Admittedly, the majority of our revenue has always come from a single pair, but
there always have been others, and multiple processors as well (CP/M/Z-80 was
our first pair, but of course we also had MS-DOS/8086 and CP/M-86/8086; later
there were 16-bit and 32-bit x86 targets, on MS-DOS and various Unix systems; we
also did 68020 and SPARC compilers; and of course the U2200 implementation).

So, while it may be "absurd" to you, it was the way I had RRS do business.

> > And of course nothing prevents that one compiler vendor from
> > creating a standard-like document (especially since there already is
> > a Standard). I just don't see much value to a formal process
> for it in a one vendor world.
>
> Well you are not a vendor, so it's not surprising that you don't
> understand. Let me assure you that AdaCore finds the standard VERY
> useful from this point of view. If this were not the case we would not
> support continued work on the standard.

I realize this, and my bank account thanks you. :-) And so long as customers
insist on second sources, it's true that a Standard is helpful. The situation is
not yet to the single-vendor one that I described, and perhaps it will never get
there.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 27, 2013  6:36 PM

> In particular, remember that this off topic thread was all about the
> reassociation rule, are you *seriously* saying that eliminiating this
> reassociation rule would be the one thing that made it uneconomic for
> vendors to support Ada 2012?
>
> No one has proposed any other required change to the Ada 2012
> standard!

Then *you* are completely misreading what Geert was saying. We haven't been
talking about the reassociation rule in *ages* (everyone seems to agree that it
should be dropped, and I asked Geert to write an AI to that effect). We're
talking about the much more general fact that the order of evaluation of many
things is unspecified in Ada. (I've mostly been concentrating on parameters.)
That's the message that Geert sent that Erhard and I had responded to, and
that's what Geert was replying to. Geert has on several occasions called for Ada
to remove all of the non-determinism from the language. I think that would be
way over the top. (I once called that off-topic to an off-topic discussion, and
it still is.) Such a rule would certainly make it much harder to use an existing
Ada technology as the basis for an up-to-date Ada compiler.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  6:46 PM

> I realize this, and my bank account thanks you. :-) And it's true so
> long as customers insist on second-sources that a Standard is helpful.
> The situation is not yet to the single-vendor one that I described,
> and perhaps it will never get there.

Nope, that's not the primary reason for finding the standard useful, it has
nothing to do with our competitors. It has to do with

a) Letting people know Ada is alive and well and standardized and that the
standard is progressing ahead with useful new stuff.

b) Defining clearly what people can and cannot expect to hold if they are trying
to write portable code.

There are a FEW cases in which we guarantee behavior beyond the standard but not
so many.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  6:48 PM

>> No one has proposed any other required change to the Ada 2012
>> standard!

Removing all the non-determinism has nothing to do with avoiding all front end
optimizations, which of COURSE Geert does not propose! Also, no one is proposing
removing the non-determinism from Ada 2012. As to whether 2020 should be
stricter, interesting issue indeed! Probably a strict annex such as I suggest
would be the way to go.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 27, 2013  7:53 PM

> >> No one has proposed any other required change to the Ada 2012
> >> standard!
>
> Removing all the non-determinism has nothing to do with avoiding all
> front end optimizations, which of COURSE Geert does not propose!

While the Dewar rule certainly applies to the Standard, I don't think one can
apply it to correspondence. :-) You've certainly never been shy about calling me
out when I wrote something that you thought was nonsense (even when you
misinterpreted what I said/meant); I'm not sure why others should be treated
differently.

> ... Also, no one is proposing removing the non-determinism from Ada 2012.

I'd hope not. But some quotes from Geert's message of Nov 24:

>> By the way, it has always puzzled me how much the Ada designers like
>> non-determinism.
...
>> I think we should make it a point to go through all such cases and
>> specify behavior.
...
>> Maybe it isn't too late for a binding interpretation for Ada 2012 for
>> some of these order of evaluation topics? (*)

These make me think that Geert is in fact asking for this for Ada 2012. Now, I
grant that it's not 100% clear what "all such cases" applies to. He's talking
about associativity immediately before this statement, but there is only one
such case in the Standard and in our correspondence, so "all" would have to mean
something broader.

In the last statement, "some" hopefully only applies to associativity, but again
there is only one of those cases, so it seems that Geert meant something
broader. (And I got the same feeling from talking to him in Pittsburgh, which is
why I feel justified in applying an expansive interpretation.)

Anyway, I wouldn't have bothered with this discussion at all if I hadn't thought
that Geert was pushing for a change immediately.

> As to whether 2020 should be
> stricter, interesting issue indeed! Probably a strict annex such as I
> suggest would be the way to go.

I think I agree. Less burden on implementers unless they have demand for it (in
which case the burden is at least justifiable).

P.S. Have a good Thanksgiving! And that goes for the rest of the US-based ARG as
well!

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 27, 2013  8:12 PM

> While the Dewar rule certainly applies to the Standard, I don't think
> one can apply it to correspondence. :-) You've certainly never been
> shy about calling me out when I wrote something that you thought was
> nonsense (even when you misinterpreted what I said/meant); I'm not
> sure why others should be treated differently.

You just misinterpreted what Geert was saying, I certainly found him clearer. He
was saying that things like order of evaluation of expressions do not belong
being optimized in the front end, and I totally agree with that position. You
extended that to all optimizations, but that's nonsense, Geert did not say that,
and of course doesn't think that, and neither do I.

>> ... Also, no one is proposing removing the non-determinism from Ada
>> 2012.
>
> I'd hope not. But some quotes from Geert's message of Nov 24:
>
>>> By the way, it has always puzzled me how much the Ada designers like
>>> non-determinism.
> ...
>>> I think we should make it a point to go through all such cases and
>>> specify behavior.
> ...
>>> Maybe it isn't too late for a binding interpretation for Ada 2012 for
>>> some of these order of evaluation topics? (*)
>
> These make me think that Geert is in fact asking for this for Ada
> 2012. Now, I grant that it's not 100% clear what "all such cases"
> applies to. He's talking about associativity immediately before this
> statement, but there is only one such case in the Standard and in our
> correspondence, so "all" would have to mean something broader.

He said SOME of these topics, and we already agree on one!

> In the last statement, "some" hopefully only applies to associativity,
> but again there is only one of those cases, so it seems that Geert
> meant something broader. (And I got the same feeling from talking to
> him in Pittsburgh, which is why I feel justified in applying an
> expansive
> interpretation.)

I don't know if there are other real problems besides the associativity rule,
it's a reasonable question to raise.

> Anyway, I wouldn't have bothered with this discussion at all if I
> hadn't thought that Geert was pushing for a change immediately.

Geert said *EXACTLY* (you quoted him!) that

> Maybe it isn't too late for a binding interpretation for Ada 2012 for
> some of these order of evaluation topics?

Well you apparently agree with that, since you agree that it is reasonable to
eliminate the reassociation case. Once we have one that we can agree on, it is
not a matter of principle any more, but a matter of case by case considering
whether there are any other sufficiently gratuitous non-determinisms to consider
putting them in this same category.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 27, 2013  9:02 PM

> You just misinterpreted what Geert was saying, I certainly found him
> clearer. He was saying that things like order of evaluation of
> expressions do not belong being optimized in the front end, and I
> totally agree with that position.

"Order of evaluation of expressions" is the same as "order of evaluation of
parameters" in Ada, as all expressions in Ada or formally a set of function
calls. And you cannot correctly implement Ada without reordering parameters (for
some definition of order) in some cases. Thus an implementer pretty much has to
do some expression (read parameter) reordering in the front end.

This isn't really "optimization", it's "normalization" (as Erhard noted), and it
doesn't have anything to do with the target. Claiming that reordering of
expressions is simply an optimization is just wrong.
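
[Editor's note: A minimal illustration of the point that Ada expressions are
formally function calls, using the predefined operators:]

   R := A + B * C;
   --  is formally the nested function calls
   R := "+" (A, "*" (B, C));
   --  so a rule about the evaluation order of expressions is necessarily
   --  a rule about the evaluation order of parameters.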

> You extended that to all optimizations, but that's nonsense, Geert did
> not say that, and of course doesn't think that, and neither do I.

I don't see any reason that any optimizations shouldn't be done wherever they
are easiest to do for a particular implementation, be that front-end, back-end,
or hind-end. :-) Janus/Ada uses tables of target-specific information to
determine whether or not to do certain front-end optimizations, which are simply
intended to get the intermediate code into the preferred form for the back-end.
These are quite similar to the tables defining the standard data types. I'm not
sure why any compiler writer is supposed to push things to the target-dependent
back-end that are easier done earlier in the process (and do them once, rather
than once per target). It all depends on the compiler architecture, and making
broad statements about what a compiler should or shouldn't do is bogus. (And
yes, I've made the same mistake from time to time.)

****************************************************************

From: Robert Dewar
Sent: Thursday, November 28, 2013  2:16 AM

I am retiring from this thread, IMO it has devolved to being content-free :-(
and is going around in circles. I am not really interested in what Janus/Ada
does :-). I suggest Randy take it off line if he wants to pursue it further
(although I won't contribute any more in any case). Sorry for all the noise.

*****************************************************************

From: Bob Duff
Sent: Monday, October 13, 2014  12:19 PM

New version of AI12-0092-1, Soft legality rules. [Editor's note: This is
version /01 of the AI.]

*****************************************************************

From: Bob Duff
Sent: Monday, October 13, 2014  12:37 PM

> New version of AI12-0092-1, Soft legality rules.

To any ARG members planning to go to the upcoming WG9 meeting:

Please take note of this email, which is filed in AI12-0092-1.TXT:

    From: Robert Dewar
    Sent: Tuesday, September  3, 2013  2:13 PM

    > I am somewhat neutral on the "soft error" concept.  It does allow us
    > to introduce incompatibilities without "officially" doing so, but our
    > attempts to do that with "unreserved keywords" always ran into trouble
    > with WG-9.

    That's just a lack of competent political lobbying IMO!

...and subsequent replies.

History:  ARG has proposed unreserved keywords to preserve compatibility.
Every time, WG9 has rejected the idea.  The problem is that there is a conflict:

    (1) On the one hand, programs really shouldn't be using keywords
        as identifiers, so keywords should be reserved.

    (2) On the other hand, they already do (when new ones are added),
        so newly added keywords should NOT be reserved.

(1) is a matter of taste/style.  (2) is a matter of huge amounts of wasted money.
IMHO, it's totally irresponsible to place more importance on (1) than on (2).

I'd like to know who on WG9 is so opposed to nonreserved keywords, and why they
don't consider compatibility more important.

I'd also like to know whether the notion of "soft Legality Rules"
would solve the problem in their view.  The idea is that we can add a keyword
like "interface", and add a soft legality rule forbidding the use of "interface"
as an identifier, thus requiring a diagnostic message, so people can fix their
programs.  But they don't have to fix them RIGHT NOW.
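
[Editor's note: A minimal sketch of that example. The declaration below was
legal Ada 95; "interface" became a keyword in Ada 2005:]

   Interface : Integer := 0;
   --  Under a soft rule, an Ada 2005 compiler must issue a diagnostic
   --  message here, but in the appropriate mode the program can still be
   --  compiled and run unchanged.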

***************************************************************

From: Randy Brukardt
Sent: Monday, October 13, 2014  1:20 PM

> To any ARG members planning to go to the upcoming WG9 meeting:

It's unlikely that we'll have time to look at any Amendments at this meeting,
because the meeting is shorter than usual and we have to finish the Corrigendum.
The only looking at Amendments that we'll likely do is to look at them to see if
there are any that we want to reclassify as Binding Interpretations so that
they'll appear in the Corrigendum.

> Please take note of this email, which is filed in AI12-0092-1.TXT:
>
>     From: Robert Dewar
>     Sent: Tuesday, September  3, 2013  2:13 PM
>
>     > I am somewhat neutral on the "soft error" concept.  It does allow us
>     > to introduce incompatibilities without "officially" doing so, but our
>     > attempts to do that with "unreserved keywords" always ran into trouble
>     > with WG-9.
>
>     That's just a lack of competent political lobbying IMO!
>
> ...and subsequent replies.

BTW, the quote is from Tucker (that confused me at first).

...
> I'd like to know who on WG9 is so opposed to nonreserved keywords, and
> why they don't consider compatibility more important.

The short answer to the first question was "everyone not from the US". My
recollection on the second is that they considered the impact on tools and on
education to be more important. They thought it was important that there would
be no exceptions to the rules.

> I'd also like to know whether the notion of "soft Legality Rules"
> would solve the problem in their view.  The idea is that we can add a
> keyword like "interface", and add a soft legality rule forbidding the
> use of "interface" as an identifier, thus requiring a diagnostic
> message, so people can fix their programs.  But they don't have to fix
> them RIGHT NOW.

My guess (and it's just a guess) is that the root objection is to the option,
and as such, a suppressible error provides no help.

(Why have you reverted to calling them "soft errors"? No one liked that
term.)

I fear that even suppressible errors will be a hard sell, for that same reason.
I think we'll be able to show that they make things like parallel operations
safer without being overly restrictive (considering you can always turn off the
checks and proceed at your own risk). But I remain dubious that they have much
use for compatibility (there's only a few cases where we have checks that
sensibly can be turned off).

***************************************************************

From: Bob Duff
Sent: Monday, October 13, 2014  2:41 PM

> The short answer to the first question was "everyone not from the US".
> My recollections on the second is that they considered the impact on
> tools and on education to be more important. They thought it was
> important that there would be no exceptions to the rules.

Weak arguments, IMHO.  If Robert is right that "That's just a lack of competent
political lobbying IMO!", then somebody (not me) should do some competent
political lobbying.  Or maybe some technical lobbying -- to me the
gratuitous-compatibility issue is compelling.

***************************************************************

From: Randy Brukardt
Sent: Monday, October 13, 2014  3:07 PM

With the change in WG 9 voting away from National Bodies, it seems that it will
be easier to make and win such arguments. But we'd have to be careful that we
don't just move the problem to the SC 22 level.

It appears to me that the concern about compatibility is strongly correlated to
the distance from AdaCore. The further you are from AdaCore, the less likely
that you find it important. And of course part of that is the word "gratuitous",
since necessary incompatibility comes from needing to fix previous mistakes.
IMHO, the most unnecessary incompatibility/inconsistency in Ada 2012 was the
change in record equality semantics, yet no one is arguing that we should
eliminate that change. So one could argue that the lack of interest from WG 9 to
date reflects more their judgment of the relative importance of the
incompatibility vs. the alternatives.

Anyway, I'm making someone else's argument for them, so I'll stop now.

***************************************************************

From: Dr. Joyce L Tokar
Sent: Monday, October 13, 2014  6:05 PM

It is worth noting that the direction from SC 22  to conduct working group
business as a collection of experts, all of which have been designated by a
National Body or Liaison organization, may have considerable implications as
documents are put forward for approval above WG 9.  The concept is that the WG
members advise their given National Bodies when a document is put forward for
approval. So, if there is a lack of consensus within a given National Body
within WG 9, then it is plausible for that lack of consensus to result in
conflicting recommendations to a NB when it comes time to vote.  In my
experience, such conflicts are realized as a vote against approval or an
abstention.

***************************************************************

From: Randy Brukardt
Sent: Monday, October 13, 2014  1:15 PM

...
>     27.1  Each Legality Rule is either "hard" or "soft".  Legality Rules are
>     hard, unless explicitly specified as soft.

I thought we'd decided to proceed on the idea of "suppressible" errors (as
opposed to "soft" errors). Any particular reason why you changed the terminology
back? Or was it just an oversight?

***************************************************************

From: Bob Duff
Sent: Monday, October 13, 2014  2:40 PM

The previous version of the AI says this:

    Terminology:
    The original proposal was for "fatal error" and "nonfatal error". That
    wasn't liked because it misuses a common (outside of Ada) term. We settled
    on "hard error" and "soft error". Also suggested was "major error" and
    "minor error".

So I thought "soft" was the final decision.

I explained in the !discussion why "error" is confusing.  Compiler writers (and
compiler users) talk about "errors" and "warnings" (etc) meaning "various kinds
of diagnostic messages" and the situations that trigger those messages.  But
that's not at all how the RM uses the term "error".  See "Classification of
Errors" in chap 1.  Hence, I went with "soft Legality Rule".  If you want to
talk about the errors, it's "violations of soft legality rules", which is a
mouthful, but that's OK because it's rare.

For example, in one message, you wrote:

    (1) The default in standard mode is that it is an error (the program is not
    allowed to execute).

But that's not at all what the RM says!  Bounded errors, exceptions, and
erroneous execution are all considered "errors" by the RM, yet they do not stop
the program from running.

I read through the entire !appendix, and there were positive notes about "soft".
Many folks like "suppressible", but to me that might imply suppressing the
diagnostic message, which is exactly the opposite of what we're trying to do.

I don't much care what we call these things (other than that the term "error"
doesn't fit in very well).  There is no term that is always "right", because
some of these things are going to be serious errors and some will be minor.
Only the user can decide.  The only important thing is to require that certain
things trigger a diagnostic message.

***************************************************************

From: Randy Brukardt
Sent: Monday, October 13, 2014  7:41 PM

> So I thought "soft" was the final decision.

No, that was from the original placeholder AI (which was based on e-mail
discussion). When we talked about it in a meeting afterwards (Pittsburgh - you
were there), we decided on "suppressible error", I think because that suggests
the default direction (detected by default). I didn't update the AI afterwards
because I thought you were going to do it soon.

> I read through the entire !appendix, and there were positive notes about "soft".
> Many folks like "suppressible", but to me that might imply suppressing
> the diagnostic message, which is exactly the opposite of what we're trying to do.

Opposite how? The idea is that these are errors that usually prevent the
partition from running (at least WRT the Standard). If one suppresses them, you
can run the partition. And surely there would be no message in the latter case
(there's no point in forcing a separate suppression of the message; as most
projects want to be warning clean as well as error clean, requiring people to
write two suppression pragmas and/or options would be silly).

Obviously, in the case where they are suppressed there has to be well-defined
semantics (which might be erroneous in some cases) so whatever happens when they
are run makes sense. But I don't see any value to a nagging message after
explicit suppression, and surely no value to *requiring* that (that's up to
implementers, if they have customers that find an intermediate mode useful).

P.S. Yes, I saw your comments on "error", and I agree that you're right about
that. So probably we're talking "suppressible Legality Rule" or something like
that.

P.P.S. And of course I understand Robert's contention that the Ada standard
can't really specify the default behavior; I think it important that the
Standard reflect that intent even if it is not really enforceable.

***************************************************************

From: Tullio Vardanega
Sent: Tuesday, October 14, 2014  2:34 AM

> suppressible Legality Rule
This formulation makes sense to me.

***************************************************************

From: Bob Duff
Sent: Tuesday, October 14, 2014  4:00 PM

> > So I thought "soft" was the final decision.
>
> No, that was from the original placeholder AI (which was based on
> e-mail discussion). When we talked about it in a meeting afterwards
> (Pittsburgh - you were there), we decided on "suppressible error", I
> think because that suggests the default direction (detected by
> default). I didn't update the AI afterwards because I thought you were
> going to do it soon.

OK, I don't remember the Pittsburgh discussion.

> > I read through the entire !appendix, and there were positive notes
> > about "soft".
> > Many folks like "suppressible", but to me that might imply
> > suppressing the diagnostic message, which is exactly the opposite of
> > what we're trying to do.
>
> Opposite how? The idea is that these are errors that usually prevent
> the partition from running (at least WRT to the Standard). If one
> suppresses them, you can run the partition. And surely there would be
> no message in the latter case (there's no point in forcing a separate
> suppression of the message, as most projects want to be warning clean
> as well as error clean, requiring people to write two suppression
> pragmas and/or options would be silly).

That's not what I intended, and not what the AI says -- it says that violation
of any legality rule (hard or soft) requires a diagnostic message.  And it
goes without saying that implementations can have nonstandard modes that
suppress such messages (like pragma Warnings(Off) in GNAT).

I think I still like the idea of two suppression pragmas/switches (one to
allow the program to run, and the other to suppress the message).
But maybe I could be convinced otherwise.
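
[Editor's note: A sketch of the two-switch idea. Both pragma names are
invented; nothing has been proposed under these names:]

   pragma Allow_Soft_Violations;      --  hypothetical: the partition may
                                      --  be built and run anyway
   pragma Suppress_Soft_Diagnostics;  --  hypothetical: additionally
                                      --  silence the diagnostic messages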

Anyway, if your semantics prevails, then "suppressible" makes more sense.

> Obviously, in the case where they are suppressed there has to be
> well-defined semantics (which might be erroneous in some cases) so
> whatever happens when they are run makes sense.

Yes (although obviously we'll try hard not to say "erroneous").

> ...But I don't see any value to a
> nagging message after explicit suppression, and surely no value to
> *requiring* that (that's up to implementers, if they have customers
> that find an intermediate mode useful).

Good point.  In other words, as I like to say, the RM doesn't require anybody
to do anything -- if an implementer doesn't like what it says, they can
implement nonstandard modes that do something else.

> P.S. Yes, I saw your comments on "error", and I agree that you're
> right about that. So probably we're talking "suppressible Legality
> Rule" or something like that.

OK with me, especially if the semantics really is to suppress the messages.

> P.P.S. And of course I understand Robert's contention that the Ada
> standard can't really specify the default behavior; I think it
> important that the Standard reflect that intent even if it not really
> enforceable.

I agree with Robert's formal point, but I also agree with you that the RM
can/should be worded in a way that implies, "This is the normal thing, but
then you can also do the other thing."  Pragma Suppress is worded like that.
And (with the minor exception of Overflow_Check), GNAT has checks turned on by
default, as you might guess from reading the RM.
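
[Editor's note: For comparison, the existing model in standard Ada that such
wording could parallel:]

   pragma Suppress (Overflow_Check);
   --  Grants permission to omit the check; if a suppressed check would
   --  have failed, execution is erroneous (RM 11.5).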

***************************************************************

From: Randy Brukardt
Sent: Wednesday, November 12, 2014  9:40 PM

...
> > > So I thought "soft" was the final decision.
> >
> > No, that was from the original placeholder AI (which was based on
> > e-mail discussion). When we talked about it in a meeting afterwards
> > (Pittsburgh - you were there), we decided on "suppressible error", I
> > think because that suggests the default direction (detected by
> > default). I didn't update the AI afterwards because I thought your
> > were going to do it soon.
>
> OK, I don't remember the Pittsburgh discussion.

That's why we have the super-secret thing called "meeting minutes". :-)

Seriously, it's expected that AI authors refer to the most recent discussion
in the minutes; it's linked directly from the AI number in the HTML version of
the homework list in the most recent minutes, even if that discussion is not
from the most recent meeting. I know you're not a big fan of HTML, but there's
a reason that I spend 4 hours or so after every meeting making a nicely
cross-linked minutes file, and why I give out HTML links to each person's
homework.

****************************************************************

From: Bob Duff
Sent: Saturday, January 24, 2015  6:15 AM

[Note: This was split from a thread in AI12-0003-1, as it turned into a
discussion of this idea again.]

> I agree with this, though I believe soft legality rules would be a
> feature that would not be appropriate for the corrigenda, ...

I think we should use the "soft legality" concept starting in 1995.  ;-) I'm
puzzled why it's so controversial, and I don't know why it would be
inappropriate for the corrigenda.

***************************************************************

From: Brad Moore
Sent: Saturday, January 24, 2015  7:50 AM

Actually, it appears we've already had them since Ada 83.

See RM 2.8 (13) "The implementation shall give a warning message for an
unrecognized pragma name."
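
[Editor's note: A minimal instance of that rule; the pragma name below is
deliberately not a recognized one:]

   pragma Optimise_Everything;
   --  Not a language-defined or implementation-defined pragma: a warning
   --  message is required, and the pragma then has no effect.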

I'm open to considering soft legality for the corrigenda. I just presumed it
wasn't a possibility because AI12-0092-1 was not mentioned as being a potential
corrigenda item in Randy's meeting agenda. Given that we already have soft
legality rules, maybe AI12-0092-1 is not even needed.

You could think of this as a "required warning", if it appears as an
implementation requirement. If it just appears as implementation advice, then it
is a "recommended warning".

For example, if we wanted the rule I suggested to be a required warning, what
would stop us from using the following wording?

"The implementation shall give a warning message if a direct_name exists that
{blah blah blah}"

If we only wanted this to be implementation advice, we could do as Robert
suggested in the previous email.

In this particular case, I think implementation advice seems like it would be
more appropriate. The check doesn't affect portability, it just points out to
the user that his code might be confusing to other readers. That way, a compiler
vendor that didn't want to implement the check could choose not to.

***************************************************************

From: Robert Dewar
Sent: Saturday, January 24, 2015  8:16 AM

> I think we should use the "soft legality" concept starting in 1995.
> ;-) I'm puzzled why it's so controversial, and I don't know why it
> would be inappropriate for the corrigenda.

I strongly agree!

***************************************************************

From: Robert Dewar
Sent: Saturday, January 24, 2015  8:23 AM

One oddity of the set of situations we are considering for soft legality rules
(aka required warnings) is that they would be incredibly random in the RM. An
independent reader might conclude that they are the most critical cases for
warnings, and nothing could be further from the truth. There are hundreds, even
thousands of cases in which warnings are valuable, many of them FAR more
critical than any of the cases we have as candidates for soft warnings in the
ARG discussions.

***************************************************************

From: Bob Duff
Sent: Saturday, January 24, 2015  10:26 AM

> You could think of this as a "required warning", if it appears as a
> implementation requirement. If it just appears as implementation
> advice, then it is a "recommended warning".

That's what I started with.  But various ARG members didn't like it, because
it's not strong enough.  They want these things to be considered errors.  The
whole point is to deal with cases where we want something to be an error (i.e.
illegal), but to avoid the incompatibility that this causes.  Hence "soft legality
rule".

Randy in particular thinks what we call it is of utmost importance.
I've no idea why -- "soft error" vs. "required warning" makes no difference to
me.  I just want ARG to stop introducing all these gratuitous incompatibilities.

Before Tucker jumps in:  I do not claim that ALL incompatibilities are
gratuitous!

***************************************************************

From: Bob Duff
Sent: Saturday, January 24, 2015  10:33 AM

> One oddity is that the set of situations we consider for soft legality
> rules (aka required warnings) would be incredibly random in the RM. An
> independent reader might conclude that they are the most critical cases
> for warnings, and nothing could be further from the truth. There are
> hundreds, even thousands of cases in which warnings are valuable, many
> of them FAR more critical than any of the cases we have as candidates
> for soft warnings in the ARG discussions.

Yes.  I'd go further:  some of the warnings produced by some compilers are FAR
more critical than some existing HARD legality rules. (The case discussed in
this thread, if we decide to make it a hard legality rule, is one example.  I
mean, come on, how many bugs are we saving the world from by making sure people
don't get confused between THE Standard and some user-defined Standard?!
Warnings about (e.g.) possible uninitialized variables are FAR more important.)

I think my soft-legality proposal included some explanation to help avoid the
misunderstandings you're worried about.  Make it clear that the purpose is to
avoid incompatibilities.

***************************************************************

From: Robert Dewar
Sent: Saturday, January 24, 2015  11:13 AM

> Randy in particular thinks what we call it is of utmost importance.
> I've no idea why -- "soft error" vs. "required warning" makes no
> difference to me.  I just want ARG to stop introducing all these
> gratuitous incompatibilities.

LOTS of warnings are serious errors; it is just that the RM does not allow us to
make them into errors. For instance, having an unblocked path out of a function
(which raises Program_Error at run time) is definitely an error, and the warning
we give is important. In GNAT we have the capability of turning selected
warnings into errors.
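
A minimal sketch (hypothetical names) of the kind of code in question: the
function below is perfectly legal, but if X = 0 control reaches the end of the
body and Program_Error is raised, so this warning is arguably more valuable
than some hard legality rules:

   function Sign (X : Integer) return Integer is
   begin
      if X > 0 then
         return 1;
      elsif X < 0 then
         return -1;
      end if;
      --  No return statement covers X = 0: still legal Ada, but
      --  Sign (0) raises Program_Error at run time.
   end Sign;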

***************************************************************

From: Randy Brukardt
Sent: Saturday, January 24, 2015  4:13 PM

> I think we should use the "soft legality" concept starting in 1995.
> ;-) I'm puzzled why it's so controversial, and I don't know why it
> would be inappropriate for the corrigenda.

The short answer is that it isn't mature enough for the Corrigendum. It
*was* eligible to be included.

The longer answer is that we looked at the list of possible AIs to include in
the Corrigendum during the Portland meeting, and no one asked that we consider
AI12-0092-1. At this point (since by WG 9 directive, we're working on a timed
schedule to produce corrigenda, with the intent that we will do one every 3
years if there is anything at all important enough to publish), we're only
trying to finish up what's already started and consider the handful of new
issues raised since. I don't expect that all of those will get in.

One also could consider the strange bifurcation between the talk and actions of
AI12-0092-1's primary author as a significant cause of the lack of maturity for
the proposal. Nearly a year elapsed from the time the issue was discussed in
Pittsburgh (Nov 16, 2013) to the time a proposal was actually created -- and the
author then completely ignored the results of that meeting discussion. (If one
wants to change something agreed-upon at a meeting, one has to at least posit a
reason.) Of course, had the author clicked through the homework links that I
carefully provide after each meeting, he would have been taken directly to that
discussion. [The AI numbers in the action item list in the HTML minutes always
link to the most recent discussion of that AI.] When that was pointed out to
him, nothing further happened; so the AI state still isn't remotely ready for
completion.

If the issue was really so important to the author, you'd think that proposals
would get turned around a lot quicker. I think the rest of us feed off the lack
of urgency projected by the author.

Note that the "Group of Four" parallel proposals depend on Suppressible Errors,
as do the compile-time tampering checks that I'll propose for Ada 2020 (they
need the Global aspect to make any sense, so they have to wait for that to be
added). So I think the idea will eventually make it into the standard, but I
have yet to see anything new that requires it.

***************************************************************

From: Randy Brukardt
Sent: Saturday, January 24, 2015  4:23 PM

...
> Randy in particular thinks what we call it is of utmost importance.
> I've no idea why -- "soft error" vs. "required warning" makes no
> difference to me.

Since the name for this concept that we decided on in Pittsburgh is neither of
these, it's pretty obvious why you're confused. :-)

...
> I just want ARG to stop introducing all these gratuitous
> incompatibilities.

I think there is only one incompatibility since Ada 95 that could possibly be
called "gratuitous"; every other such incompatiblity has been necessary to make
the semantics make sense. The alternative of adding additional erroneous cases
(or unspecified behavior, in some cases) - the only real alternative most of the
time - is at least as bad.

One of the things we tasked you to do was to show us concrete examples of
where Suppressible Errors could have been used. So far as I know, there has only
been one decent example, and not everyone agreed with that. We probably need
more to get the idea to fly.

Anyway, why did you hijack this thread with this off-topic discussion???

***************************************************************

From: Randy Brukardt
Sent: Saturday, January 24, 2015  4:34 PM

...
> I think my soft-legality proposal included some explanation to help
> avoid the misunderstandings you're worried about.
> Make it clear that the purpose is to avoid incompatibilities.

The uses that will likely get it into the Standard have nothing to do with
incompatibilities. (The parallel proposals, in particular.) They have to do
directly with the idea of suppressing a safety-net check when the programmer is
sure that all is OK. In particular, if we have compile-time tampering checks,
making those suppressible would allow using subprograms that don't have Global
aspects. But of course if you do, and someone tampers, your program is
erroneous. That allows the programmer to make the choice between erroneous
execution and static checking that might be a pain.

But I don't see any case where we could introduce static tampering checks in the
existing containers. That's because virtually all existing code would fail such
a check (since no existing subprograms have Global aspects, any existing
subprogram call would fail the static check), and making everyone suppress the
checks all the time would completely defeat the purpose of having them in the
first place. So I think we could only use this in new kinds of containers
[especially a "parallel" container; the existing containers can't define proper
Global aspects on reading, which would be a problem when using them in parallel
loops]. (Similarly, the parallel constructs are something new; there's no
compatibility concern there, but there is a concern that the checks could be too
fierce in some circumstances. Thus the desire to give programmers an out.)

***************************************************************

From: Bob Duff
Sent: Saturday, January 24, 2015  5:12 PM

> One also could consider the strange bifurcation between the talk and
> actions of AI12-0092-1's primary author as a significant cause of the
> lack of maturity for the proposal.

The author is just too discouraged by the lack of support for the idea to put
much energy into it.  And busy doing work that actually affects real live Ada
users.

>... Nearly a year elapsed from the time the issue was discussed in
>Pittsburgh (Nov 16, 2013) to the time a proposal was actually created
>-- and the author then completely ignored the results of that meeting
>discussion.

Not on purpose.  IIRC, I used the wrong version of minutes, or something like
that.

> Note that the "Group of Four" parallel proposals depend on
> Suppressible Errors,

Is that the term decided upon in Pittsburgh?  I'd have to do more study to be
sure, but I don't see how that fits into the existing RM, where "error" is used
much more broadly than "violation of legality rule". It includes raising of
predefined exceptions for example.  See 1.1.5. I think perhaps people are
confusing the RM notion of "error" with the notion of "error" in GNAT and other
compilers (where "error" does indeed mean "violation of a legality rule").

I was hoping to avoid a total rewrite of 1.1.5 (etc) just to get such a simple
idea into the RM.

> ...Anyway, why did you hijack this thread with this off-topic
> discussion???

I didn't think I was "hijacking" by pointing out that this specific issue is an
example of a more general idea.

***************************************************************

From: Randy Brukardt
Sent: Saturday, January 24, 2015  5:35 PM

...
> > Note that the "Group of Four" parallel proposals depend on
> > Suppressible Errors,
>
> Is that the term decided upon in Pittsburgh?  I'd have to do more
> study to be sure, but I don't see how that fits into the existing RM,
> where "error" is used much more broadly than "violation of legality rule".
> It includes raising of predefined exceptions for example.  See 1.1.5.
> I think perhaps people are confusing the RM notion of "error"
> with the notion of "error" in GNAT and other compilers (where "error"
> does indeed mean "violation of a legality rule").

I think you pointed that out in an e-mail exchange that we had back in October.
I think we (you and I) decided that "Suppressible Legality Rule" (and possibly
others starting with "Suppressible") would be the actual term.

I was rather hoping that you'd have written that up by now.

> I was hoping to avoid a total rewrite of 1.1.5 (etc) just to get such
> a simple idea into the RM.

Understood.

> > ...Anyway, why did you hijack this thread with this off-topic
> > discussion???
>
> I didn't think I was "hijacking" by pointing out that this specific
> issue is an example of a more general idea.

I guess I was frustrated by your bringing up soft errors here, because there is
no compatibility issue (this is a new feature that we're talking about), and the
whole thing is IMHO exceedingly unimportant. If we don't have the will for a
Legality Rule, I'd hope that then we'd just use an AARM note to point out that
an implementation might want a warning. Leave it up to the implementer to decide
what to warn on.

Sorry about taking out my frustration over having to spend a lot of time on this
check (which should either be bog-simple or non-existent) on you.

***************************************************************

From: Robert Dewar
Sent: Saturday, January 24, 2015  5:38 PM

> I think there is only one incompatibility since Ada 95 that could
> possibly be called "gratuitous"; every other such incompatiblity has
> been necessary to make the semantics make sense. The alternative of
> adding additional erroneous cases (or unspecified behavior, in some
> cases) - the only real alternative most of the time - is at least as bad.

ALL incompatibilities are a bad idea. The predilection of language designers to
genuinely believe that they have to introduce incompatibilities to "make the
semantics make sense" never ceases to amaze me.

Ada 2005 was in my mind pretty much a failure, in that we implemented it, but
few users made the transition, and there is no question that the
incompatibilities in returning limited types acted as a major barrier.

The barrier is still there for the transition to Ada 2012 (and continues to
cause major problems), but the contracts of Ada 2012 are worthwhile enough for
people to fight through this issue.

Now if the change had come from significant users saying that they had a real
problem in practice with Ada 95 semantics, that would be one thing, but that was
not the impetus as far as I know. Instead it was language designers putting
elegant "sensible" semantics ahead of pracctical considerations.

BTW, we often run into errors that cause people trouble, and what we do is to
have a debug switch that converts these into warnings :-)

***************************************************************

From: Tucker Taft
Sent: Sunday, January 25, 2015  2:20 PM

> ...  and there is no question that
> the incompatibilities in returning limited types acted as a major
> barrier. ...

I think there is a tad bit of hyperbole here, but I admit that this particular
incompatibility is painful.  But there was clearly a cost/benefit tradeoff to
make, and I believe the ARG made the right one, since I believe the advantage in
being able to construct new limited-type objects in a function exceeds the pain
associated with not being able to return existing limited-type objects by
reference.  There are several ways to work around the feature that was lost, but
the new capability is truly a gain in functionality.

I think every language reviser has to make these trade-offs, and clearly not
everyone makes the trade-off the same way, but it was not merely a matter of
"elegance," it was a matter of gaining important functionality.

***************************************************************

From: Bob Duff
Sent: Sunday, January 25, 2015  4:07 PM

I certainly don't consider this case of incompatibility to be "gratuitous".

OTOH, one could argue that PART of the change was gratuitous, or at least
unnecessary.  In particular, b-i-p was supposed to replace return-by-reference,
so r-by-r had to be made (hard) illegal.  But we also made the return-by-copy
cases illegal, and we didn't have to do that.  See example below, which was
legal in 83 and 95, but illegal in 2005 and 2012.  We could have said it has the
same semantics as in Ada 83, namely return-by-copy.

And we could have made the rule a Suppressible Legality Rule in that case, if we
had thought of it.

That would have reduced the incompatible cases by a factor of 3, if I remember
my experiments correctly.  Use of r-by-r was fairly rare, which (with 20/20
hindsight) is not surprising.

It would require breaking privacy, which is a Bad Thing, but I think
incompatibilities are a Worse Thing.  So what if making the full type of T in
my example limited makes Q illegal?  Whatever maintenance headaches would be
caused by that are actually caused by the incompatibility, even in the
NON-limited (full type) case.

In effect ARG said, "Because you MIGHT have to change your code in the future
during maintenance, we're going to FORCE you to change it now." User response in
some cases was, "Then forget Ada, I'll keep using the non-ISO-standard thing
called Ada 95."  Never forget that conforming to standards is optional!

package P is
   type T is limited private;
   procedure Init (Obj : out T; X : Integer);
private
   type T is record
      X : Integer;
   end record;
end P;

package body P is
   procedure Init (Obj : out T; X : Integer) is
   begin
      Obj.X := X;
   end Init;
end P;

with P;
package Q is
   function FF (X : Integer) return P.T;
end Q;

package body Q is
   function FF (X : Integer) return P.T is
      Result : P.T;
   begin
      P.Init (Result, 123);
      return Result; -- Illegal in Ada 2005 and 2012!
   end FF;
end Q;

***************************************************************

From: Bob Duff
Sent: Sunday, January 25, 2015  4:27 PM

> ALL incompatibilities are a bad idea. The predilection of language
> designers to genuinely believe that they have to introduce
> incompatibilities to "make the semantics make sense" never ceases to
> amaze me.

Have you followed the development of Python?  Apparently the change from Python
2 to Python 3 was hugely incompatible, to the point where the simplest
"Hello, world" program needs to be rewritten. And apparently many Python users
are still stuck in the 2.x world.

It seems downright irresponsible to me.  ARG would NEVER do that sort of thing.

***************************************************************

From: Bob Duff
Sent: Sunday, January 25, 2015  4:58 PM

> I think you pointed that out in an e-mail exchange that we had back in
> October. I think we (you and I) decided that "Suppressible Legality Rule"
> (and possibly others starting with "Suppressible") would be the actual term.

OK.

> I was rather hoping that you'd have written that up by now.

Understood.

> I guess I was frustrated by your bringing up soft errors here, because
> there is no compatibility issue...

Then I didn't make myself clear.  I was saying that we could have a rule that
forbids declaring anything anywhere with the name Standard. I think that's a
good idea anyway, because then you're guaranteed to have a full expanded name
for every package.

Objection!  That's a gratuitous incompatibility!

Answer:  Suppressible Legality Rule.
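
A minimal sketch (invented names) of what the rule would guard against: once
anything named Standard is declared, the predefined Standard can become
impossible to name:

   procedure Demo is
      package Standard is
         type Integer is range -10 .. 10;
      end Standard;
      I : Standard.Integer;
      --  Denotes the local type, not THE Integer; worse, from here on
      --  there is no way to write an expanded name that denotes the
      --  predefined package Standard.
   begin
      null;
   end Demo;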

> and the whole thing is IMHO exceedingly unimportant.

I certainly agree with that.

>... If we don't have the
> will for a Legality Rule, I'd hope that then we'd just use an AARM
> note to point out that an implementation might want a warning. Leave
> it up to the implementer to decide what to warn on.

That's fine, too.

> Sorry about taking out my frustration over having to spend a lot of
> time on this check (which should either be bog-simple or non-existent) on you.

Yes, "non-existent" probably.  I understand your frustration, but it's in the
nature of things -- people always want to discuss the simple things.  Look up
the term "bikeshedding", if you're not already familiar with the term.

***************************************************************

From: Robert Dewar
Sent: Sunday, January 25, 2015  9:45 PM

> I think there is a tad bit of hyperbole here, but I admit that this
> particular incompatibility is painful.  But there was clearly a
> cost/benefit tradeoff to make, and I believe the ARG made the right
> one, since I believe the advantage in being able to construct new
> limited-type objects in a function exceeds the pain associated with
> not being able to return existing limited-type objects by reference.
> There are several ways to work around the feature that was lost, but the new capability is truly a gain in functionality.

Well, I don't think it is hyperbole: several of our major customers tried moving
to Ada 2005, ran into this incompatibility, which is not so easy to fix in big
legacy codes, and gave up on Ada 2005. Almost none of our customers migrated to
Ada 2005 (this I surmise from code that people have sent us).

As I said, if we had had important customers complain about not being able to
"construct new limited-type objects in a function" then that would have been
significant, but I don't remember a single suggestion received along those
lines (and our customers are not shy about complaining about missing features
and suggesting enhancements).

> I think every language reviser has to make these trade-offs, and
> clearly not everyone makes the trade-off the same way, but it was not
> merely a matter of "elegance," it was a matter of gaining important functionality.

I think more effort should have been made to do this in an upwards compatible
manner (at worst with a pragma controlling things, but I bet we could have done
better).

And I still think that when I hear "There are several ways to work around the
feature that was lost", it implies a lack of awareness of just how difficult it
is to apply such workarounds to millions of lines of legacy code.

Anyway, water under the bridge, but I do think we have to tread carefully when
it comes to ANY incompatibility, even if it seems to us that there are easy
workarounds.

I was quite struck by the Birmingham presentation one company made on moving
from Ada 95: the biggest problem they had encountered was INTERFACE being a
new keyword. Now it might seem trivial to just change the name, but the problem
was that they had a codified use of that identifier in modules shared across
several development projects on different schedules, and forcing the necessary
coordination was very painful for them.

I for sure agree that making it illegal to redeclare Standard would be a
gratuitous incompatibility. For fun I will try our test suite with that
change. :-)

***************************************************************

From: Robert Dewar
Sent: Sunday, January 25, 2015  9:50 PM

> In effect ARG said, "Because you MIGHT have to change your code in the
> future during maintenance, we're going to FORCE you to change it now."
> User response in some cases was, "Then forget Ada, I'll keep using the
> non-ISO-standard thing called Ada 95."  Never forget that conforming
> to standards is optional!

And imagine reporting two different things to your boss in response to his
suggestion to examine a move to Ada 2005:

a) Very little (or no) effort in the move

b) We ran into quite a difficult case, with no automatic workaround. We don't
know how much work it would be to do these workarounds by hand; shall we
investigate?

Now imagine the boss's reaction to a) and b).

Part of the trouble with Ada 2005 was that it had nothing sufficiently
convincing on the positive side for many users to justify the difficulty in
switching.

Now for Ada 2012, the contract stuff has really caught the imagination of a LOT
of users, and we see lots of our users experimenting (*) with contracts.

(*) and furiously reporting bugs :-) :-)

***************************************************************

From: Robert Dewar
Sent: Sunday, January 25, 2015  9:51 PM

> Have you followed the development of Python?  Apparently the change
> from Python 2 to Python 3 was hugely incompatible, to the point where
> the simplest "Hello, world" program needs to be rewritten.
> And apparently many Python users are still stuck in the 2.x world.
>
> It seems downright irresponsible to me.  ARG would NEVER do that sort
> of thing.

I agree the Python change was FAR worse than anything that the ARG would
countenance. Of course in the Python world there are fewer cases of giant legacy
programs with millions of lines of code (actually the number of such cases might
well be zero).

***************************************************************

From: Randy Brukardt
Sent: Monday, January 26, 2015  3:54 PM

...
> > I guess I was frustrated by your bringing up soft errors here,
> > because there is no compatibility issue...
>
> Then I didn't make myself clear.  I was saying that we could have a
> rule that forbids declaring anything anywhere with the name Standard.
> I think that's a good idea anyway, because then you're guaranteed to
> have a full expanded name for every package.
>
> Objection!  That's a gratuitous incompatibility!
>
> Answer:  Suppressible Legality Rule.

*That* makes sense, but I don't think everyone understood it that way.

And I don't think it would fly, for similar reasons that unreserved keywords
don't fly. (Not that I understand that, either.)

***************************************************************

From: Bob Duff
Sent: Monday, January 26, 2015  5:13 PM

> *That* makes sense, but I don't think everyone understood it that way.

Yes, at least one other person misunderstood me, too, so apparently I wasn't
clear at first.  Sorry about that.

> And I don't think it would fly, for similar reasons that unreserved
> keywords don't fly.

Well, I was hoping that "Suppressible Legality Rules" would solve the
(political) keyword problem.  As in, "interface" shall not be used as an
identifier.  But, oh by the way, that's suppressible.

> ...(Not that I understand that, either.)

I understand why people don't like unreserved keywords.  What I don't understand
is why they think "don't like it" should trump compatibility concerns.

***************************************************************

From: Robert Dewar
Sent: Monday, January 26, 2015  5:28 PM

Indeed!

***************************************************************

From: Tucker Taft
Sent: Monday, January 26, 2015  5:41 PM

We lost this one several times at the WG-9 level.  The EU has more votes than
the US! ;-)

***************************************************************

From: Robert Dewar
Sent: Monday, January 26, 2015  6:08 PM

Ironic since in my experience, these incompatibilities hit European users more
severely. Europeans are much more experimental in moving to new versions of Ada,
US users tend to be much more conservative. We still have lots of users who
stick with Ada 83.

***************************************************************

From: Bob Duff
Sent: Tuesday, January 27, 2015  10:19 AM

> We lost this one several times at the WG-9 level.  The EU has more
> votes than the US! ;-)

I know, but we should try again with the Suppressible Legality Rules idea.
Don't call it "unreserved keywords".  Call it reserved words: it's illegal to
use them as normal identifiers.  But the fine print ;-) says that's a
suppressible rule (for some of them).  Surely if somebody doesn't like X, and we
require a compile-time error message for X, they should be happy.

Also, Robert has said all it takes is some debating tactics.
So we should send Robert to a WG9 meeting to push it through.

****************************************************************

From: Brad Moore
Sent: Tuesday, January 27, 2015  9:15 PM

Well, I for one have changed my view on the reserved words debate. I think
Robert is correct in that had there been more time and effort spent to
examine the issue and present a case to the Ada community for using unreserved
keywords, things might have turned out differently. As I recall, this issue
came up late in the standardization process, rather unexpectedly.
It's not that no one had considered the possibility that the idea might not
sit well with everyone, but I think the amount of pushback was underestimated.
There was probably a certain amount of fear, uncertainty, and doubt about
introducing somewhat of an anomaly into the language that might be regretted.
Of course, introducing incompatibility into the language is something that
surely should be regretted, but at least for me, the thinking was that making
"some" a keyword is something that could be more easily undone if it was found
to be a problem, since undoing it would be a relaxation of a restriction. Going
the other way -- leaving "some" unreserved and later deciding to make it a
reserved keyword -- would be increasing a restriction, and is less likely to be
a possibility.

As time has passed, the decision that was made does not sit as well with me as
it did at the time.

I am wondering if there is still a possibility of making "some" an unreserved
keyword for the corrigenda.

Could an effort be made to persuade those who might still be on the fence, or
on the other side of the fence, to jump over the fence, or at least get some
sort of a straw poll to see how many are in favor of making such a change,
after making a strong case for unreserved keywords?

I am also curious to hear if AdaCore has encountered customers complaining about
actual cases where the use of "some" had broken someone's build. That's not
really that important though, as it is the perception of the possibility of
incompatibilities that scares people. But I think hearing about actual cases of
incompatibility would strengthen the argument for making "some" unreserved.

One might have thought that allowing functions to have in out parameters would
have been more likely to have had a rough go, but a strong case was made for
the feature. I think a strong case can similarly be made for compatibility
issues.

As an aside, I also wonder if the suppressible error idea could be used to
retroactively reduce other incompatibilities that have been introduced since
Ada 95, to encourage those who are still afraid to make the leap from Ada 95
to do so.

***************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, January 28, 2015  3:08 AM

> We lost this one several times at the WG-9 level.  The EU has more
> votes than the US! ;-)

Watch out! There are Europeans on this list, and the US should realize that
they don't necessarily rule the world ;-)

So yes, we Europeans tend to be more sensitive to overall proper design.
Ichbiah argued that elegance was an important quality of language design, and
TBH, I think that a lot of Ada's initial elegance has been lost.

Of course, practical considerations are important, but "it costs money"
should not be the /only/ consideration. I understand that AdaCore must listen
to its clients (I know them, they are the same as my clients), but my
experience is that they are excessively frightened of making any change to
their software, tend to overestimate the cost of making a change, and
underestimate the cost of NOT making a change (if they consider the latter at
all).

For the specific case of unreserved keywords:
When I present Ada, I keep arguing that it is a piece of engineering, designed
after sound principles. How could I say that with a misfeature that no other
language dared to have: an ugly ad-hoc patch impossible to explain to
newcomers?

And I don't think the incompatibility introduced by a new keyword is that
terrible; just use the nice "refactoring" feature of GPS and you are done (or
Adalog's Adasubst).

I hear some screaming "version control"! "revalidation"! If version control
comes in the way of making changes to identifiers, then you have a huge problem
with your system anyway. Improving identifiers is an important part of the
quality of a system, and I do that all the time. And revalidation is a red
herring: either you don't change your compiler version and there is no
compatibility issue, or you use a new compiler, and you have to revalidate,
with or without changes.

In the name of practicality, we said "it's a nuisance to have to declare access
types everywhere, a pointer is a pointer, let's have anonymous access types
everywhere to ease the programmer's job". And we know the terrible mess that
was introduced.

In short: of course, elegance should be balanced with cost. But cost should
not be the /only/ factor to consider, especially if other solutions can be
found that preserve elegance.

An example: I understand that some fear that "parallel" is already used in
programs and would cause compatibility problems. Let's use "cobegin" and
"coend" (as in Occam) and keep keywords reserved.

***************************************************************

From: Robert Dewar
Sent: Wednesday, January 28, 2015  3:22 AM

> Of course, practical considerations are important, but "it costs money"
> should not be the /only/ consideration. I understand that AdaCore must
> listen to its clients (I know them, they are the same as my clients),
> but my experience is that they are excessively frightened of making
> any change to their software, tend to overestimate the cost of making a
> change, and underestimate the cost of NOT making a change (if they
> consider the latter at all).

I think you should make sure you review the entire Birmingham presentation I
referred to. As I said earlier, I was fascinated by the fact that INTERFACE
being made a reserved word was their biggest problem. To recap, they had
standardized on the use of this identifier in connection with modules that were
shared between projects. These projects were on different release schedules and
maintenance states (from active to baselined) and it was a lot of work to
coordinate these changes between separate projects.

Like most language designers, I think you GREATLY underestimate the costs of
dealing with gratuitous incompatibilities. And the cost MUST concern us: if
people decide the cost of moving to new versions of Ada is too high, then the
ARG (and WG 9) are wasting their time continuing work on the language.

I had real doubts about whether Ada 2005 was a worthwhile enterprise, based on

a) no one but AdaCore implementing it in a reasonable time frame

b) so few of AdaCore's customers using it

I think Ada 2012 bailed us out in that the contracts are so obviously valuable
that people are willing to face the cost. Whether we can come up with something
that useful next time round is an interesting question.

I have no problem in creating incompatibilities when the argument is strong
enough. Indeed even given that presentation, I think that the decision to make
INTERFACE a reserved word was supportable, and the right choice. But the burden
is high for justifying incompatibilities, and if there is a reasonable way of
avoiding the incompatibility, it should not be dismissed on mere aesthetic
grounds.

As far as I know SOME has not proved that much of a problem, but we will see,
it's early days for Ada 2012 transition. If we do get customers who have a real
problem with SOME, then probably what GNAT should do is to make SOME an
unreserved keyword by default, with a pedantic mode to make it reserved for
those who want to. Certainly no point in doing that in GNAT in the absence of
such customer input, so we will have to see what happens.

***************************************************************

From: Robert Dewar
Sent: Wednesday, January 28, 2015  3:26 AM

> For the specific case of unreserved keywords:
> When I present Ada, I keep arguing that it is a piece of engineering,
> designed after sound principles. How could I say that with a
> misfeature that no other language dared to have; an ugly ad-hoc patch
> impossible to explain to newcommers.

I find this rhetoric dubious. First of all, the "no other language dared to
have" claim is nonsense. PL/1 of course comes to mind, but there are other
examples.

And explaining it to anyone is quite simple: it was done to maintain
compatibility with earlier versions. Every practical person in the computer
industry understands this principle. Yes, every now and then huge mistakes are
made, like abandoning the start menu in Windows 8, which was a major factor in
the near-catastrophic failure of Windows 8. With Windows 10, Microsoft came
back to its senses and put the start menu back where it belongs.

So, yes, you can revisit unwise decisions to be incompatible if you find you
get major customer pushback!

Not clear to me that the SOME issue is anywhere near that level, but I can
easily see this issue arising in future. New reserved keywords in new versions
of a language are a huge pain. Look at the COBOL example!

***************************************************************

From: Robert Dewar
Sent: Wednesday, January 28, 2015  3:29 AM

> And I don't think the incompatibility introduced by a new keyword is
> that terrible; just use the nice "refactoring" feature of GPS and you
> are done (or Adalog's Adasubst).

This is plain nonsense. It might make sense from a hobbyist point of view, but
in the real world, things are nowhere near that simple for deployed software.

Suppose for example, you are developing a new generation of your software, you
want to make use of the latest and greatest Ada for the new pieces, but
interfaces are in place and cannot be changed. A new keyword can easily make it
impractical to use the new version.

***************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, January 28, 2015  3:47 AM

> I find this rhetoric dubious. First of all, the "no other language dared
> to have" claim is nonsense. PL/1 of course comes to mind, but there are
> other examples.

I know languages with reserved keywords, and languages with non-reserved
keywords. Is there any language where only part of the keywords are

***************************************************************

From: Brad Moore
Sent: Wednesday, January 28, 2015  8:02 AM

>> I find this rhetoric dubious. First of all, the "no other language dared
>> to have" claim is nonsense. PL/1 of course comes to mind, but there are
>> other examples.
> I know languages with reserved keywords, and languages with
> non-reserved keywords. Is there any language where only part of the
> keywords are reserved?

C++11 has what they call "identifiers with special meaning",
which I've also seen described as context-sensitive keywords.

e.g. override, final

"Context-sensitive keywords" sounds a lot better to me than "unreserved
keywords"

Maybe that would be better terminology to use, politically.

***************************************************************

From: Robert Dewar
Sent: Thursday, January 29, 2015  5:46 AM

After all, the language is full of such context-sensitive keywords now, e.g.
Check | Disable | Ignore for Assertion_Policy.

In some GNAT pragmas, we do run into some formal ambiguities from this, e.g.

     pragma Warnings (Off);
     --  turns warnings off

     pragma Warnings (static_STRING_LITERAL);
     --  argument is interpreted as string of warning options

Ah ha! Someone says, what if I write:

    Off : constant String := "a.k";

    pragma Warnings (Off);

    AARGH! ambiguous

Well, we don't think this is a real problem, but just for the sake of those who
worry about such things, we have a note in the manual that the ambiguity is
resolved in favor of treating Off as the context-dependent keyword, rather than
as a reference to the entity.

***************************************************************

From: Brad Moore
Sent: Thursday, January 29, 2015  7:34 AM

If we had suppressible errors, then we could say that any such use of a
context-sensitive keyword is a suppressible error.

Then we also could say that "Some", "Overriding", "Interface", and
"Synchronized" are context-sensitive keywords, and eliminate those backward
incompatibilities from Ada 95.
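
For example (a made-up unit, purely for illustration), this legal Ada 95
declaration became a syntax error in Ada 2005 solely because "interface" was
made reserved; under this scheme the violation would instead be suppressible:

   package Comms is
      Interface : Natural := 0;  --  legal in Ada 95; illegal in Ada 2005,
                                 --  where "interface" is a reserved word
   end Comms;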

***************************************************************

From: Brad Moore
Sent: Thursday, January 29, 2015  8:01 AM

And Standard could be considered a context-sensitive keyword, since it can be
used in the Default_Storage_Pool pragma; likewise Check, Disable, Ignore, C,
Fortran, COBOL, etc., since they are also used in pragmas.

Why are we making such a fuss about Standard, when we apparently are OK with
other possibilities like:

Check : constant String := "Foo";
declare
    pragma Assertion_Policy(Check);
begin
    ...
end;

or

C : Integer := 0;

declare
    function Strcpy (Target : out Interfaces.C.char_array;
                     Source : in  Interfaces.C.char_array)
       return Interfaces.C.int
       with Import => True, Convention => C, External_Name => "strcpy";
begin
    ...
end;

And if we are also worried about backwards incompatibility from Ada 83, then
abstract, aliased, protected, requeue, tagged, and until could also be
reclassified as context-sensitive keywords.

***************************************************************

From: Randy Brukardt
Sent: Friday, January 30, 2015  7:14 PM

...
> I am also curious to hear if Adacore has encountered customers
> complaining about actual cases where the use of "some" had broken
> someone's build. That's not really that important though, as it is the
> perception of possibility for incompatibilities that scares people.
> But I think hearing about actual cases of incompatibility would
> strengthen the argument for making "some" unreserved.

I'm not AdaCore, but as I reported at the time, "some" is quite widely used as a
parameter name in the code of Janus/Ada. I forget precisely why (I reported on
it extensively at the time, so it's somewhere in the mail for AI05-0176-1). That
apparently swayed precisely no one. Thus, it would have to be quite an amazing
example to change minds at this point.

***************************************************************

From: Robert Dewar
Sent: Sunday, February  1, 2015  10:36 AM

The question is whether SOME causes trouble in practice. If you have extensive
use of SOME in a context where you can easily press a button and change all
uses to something else, then that's an example of SOME being used, perhaps
extensively, but it's not an example of it causing real trouble in a real
setting. Similarly, if you are talking about code that in practice is not used
much in major projects, the impact may be minimal. At the time, I certainly
judged Randy's report to be in these non-problematical categories.

On the other hand a single use of SOME in a context where it is NOT easy to
change (e.g. a standard mandated use, like CORBA.String) could be much more
troublesome. That being said, so far I don't think AdaCore has encountered a
troublesome problem with SOME.

***************************************************************

From: Randy Brukardt
Sent: Friday, January 30, 2015  7:29 PM

...
> Why are we making such a fuss about Standard, when we apparently are
> OK with other possibilities like;
>
> Check : constant String := "Foo";
> declare
>     pragma Assertion_Policy(Check);
> begin
>     ...
> end;
...

In none of those other cases is any other kind of entity allowed in the pragma;
ONLY identifiers-specific-to-the-pragma are allowed. Default_Storage_Pool is
therefore different (someone could have sensibly meant an object named
Standard).
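
A sketch of the difference (My_Pool_Type is a hypothetical name, and the
Standard pool option is the proposal under discussion, not settled language):

    Standard : My_Pool_Type;  --  a storage pool object, unwisely named

    pragma Default_Storage_Pool (Standard);
    --  The pool object above, or (under the proposal) the predefined
    --  standard pool?  Unlike Assertion_Policy's argument, this argument
    --  position can denote an ordinary entity.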

...
> And if we are also worried about backwards incompatibility from Ada 83
> then abstract, aliased, protected, requeue, tagged, and until could
> also be reclassified as context-sensitive keywords.

Parsing would become challenging (at least for table-driven parsers like ours)
if everything turned into "identifier". It's OK in some places because there's
no possible ambiguity, but I'd hate to parse:

    subtype Aliased is Integer range 1 .. 10;
    Obj : aliased aliased;

***************************************************************

From: Randy Brukardt
Sent: Monday, January 26, 2015  4:07 PM

...
> That would have reduced the incompatible cases by a factor of 3, if I
> remember my experiments correctly.  Use of r-by-r was fairly rare,
> which (with 20/20 hindsight) is not surprising.
>
> It would require breaking privacy, which is a Bad Thing, but I think
> incompatibilities are a Worse Thing.  So what if making the full type of
> T in my example limited makes Q illegal?  Whatever maintenance headaches
> would be caused by that are actually caused by the incompatibility, even
> in the NON-limited (full type) case.

I'm pretty sure something like that was considered, but it didn't fly
politically.

Why not? In large part because breaking privacy also had the effect of breaking
the incremental compilation model of the Rational compiler. And remember who the
chair of the ARG was during this period...

We also considered making constructors something different than functions. I
still think that would have been the way to go, but others thought that it would
have replaced one wart with two warts. In particular, a lot of the functionality
of a function and of a constructor overlap. So users would have been faced with
the question of which one to use when implementing something. The choice would
not always be obvious given the difference in usage (a constructor could only
have been used in initialization contexts -- not assignments, while a function
could be used everywhere). Most thought it was too confusing.

> package body Q is
>   function FF (X : Integer) return P.T is
>      Result : P.T;
>   begin
>      P.Init (Result, 123);
>      return Result; -- Illegal in Ada 2005 and 2012!

I think this is illegal in Ada 95, too. This return fails an accessibility
check.

>   end FF;
>end Q;

You have to return a global in this case:

package body Q is
   Result : P.T;

   function FF (X : Integer) return P.T is
   begin
      P.Init (Result, 123);
      return Result; -- Illegal in Ada 2005 and 2012, OK in Ada 95!
   end FF;
end Q;

And the use of a global in cases like this makes it dubious (non-task-safe at a
minimum) anyway. One of the reasons we decided to swallow the incompatibility.
Of course, it might not be as dubious if the global exists for some other reason
and/or is protected with a lock (and of course if no one ever uses a task, which
is plenty common in Ada code). Obviously, there's a lot more of it than we
thought.

***************************************************************

From: Bob Duff
Sent: Monday, January 26, 2015  5:34 PM

> > package body Q is
> >   function FF (X : Integer) return P.T is
> >      Result : P.T;
> >   begin
> >      P.Init (Result, 123);
> >      return Result; -- Illegal in Ada 2005 and 2012!
>
> I think this is illegal in Ada 95, too. This return fails an
> accessibility check.

No, it is legal, and does not fail any checks.  It returns Result BY COPY.

I think you missed the fact that, although P.T is limited, its full type is
nonlimited, so none of the return-by-reference stuff in Ada 95 applies (and
therefore, none of the build-in-place stuff in Ada 2005 needs to apply, if one
is willing to break privacy).

This is plain vanilla Ada 83 code, no return-by-reference involved.

> >   end FF;
> >end Q;
>
> You have to return a global in this case:
>
> package body Q is
>    Result : P.T;
>
>    function FF (X : Integer) return P.T is
>    begin
>       P.Init (Result, 123);
>       return Result; -- Illegal in Ada 2005 and 2012, OK in Ada 95!
>    end FF;
> end Q;
>
> And the use of a global in cases like this makes it dubious
> (non-task-safe at a minimum) anyway. One of the reasons we decided to
> swallow the incompatibility. Of course, it might not be as dubious if
> the global exists for some other reason and/or is protected with a
> lock (and of course if no one ever uses a task, which is plenty common
> in Ada code). Obviously, there's a lot more of it than we thought.

No no, you're thinking return-by-reference here.

My point was that the incompatibilities were:

    1/3 return-by-ref that really needs to be modified in the presence
    of b-i-p.

    2/3 return-by-copy of limited types, which we (gratuitously, IMHO)
    made illegal.  My example was illustrating this more-common case.

The 1/3 and 2/3 are from memory of my experiments -- I could be off.


P.S. If I become "e-mail dark" in the next day or two, it means the blizzard
dumped a huge amount of heavy wet snow on a tree branch, and the strong winds
blew down said branch, and it knocked down a power line that supplies
electricity to this computer.  You may then assume I'm reading a paperback novel
by candle light.

***************************************************************

From: Randy Brukardt
Sent: Monday, January 26, 2015  8:48 PM

> > > package body Q is
> > >   function FF (X : Integer) return P.T is
> > >      Result : P.T;
> > >   begin
> > >      P.Init (Result, 123);
> > >      return Result; -- Illegal in Ada 2005 and 2012!
> >
> > I think this is illegal in Ada 95, too. This return fails an
> > accessibility check.
>
> No, it is legal, and does not fail any checks.  It returns Result BY
> COPY.
>
> I think you missed the fact that, although P.T is limited, its full
> type is nonlimited, so none of the return-by-reference stuff in Ada 95
> applies (and therefore, none of the build-in-place stuff in Ada 2005
> needs to apply, if one is willing to break privacy).

I made the mistake of thinking that Ada 95 didn't have any privacy-breaking
Legality Rules (because I didn't remember any).

But clearly this case is privacy-breaking (I just went and looked it up in the
old wording; it's crystal-clear that it breaks privacy). So we could have retained
that privacy-breaking to reduce the incompatibility. Not sure why we didn't
(perhaps Pascal was looking forward to getting rid of a hack in their compiler?
:-).

...
> My point was that the incompatibilities were:
>
>     1/3 return-by-ref that really needs to be modified in the presence
>     of b-i-p.
>
>     2/3 return-by-copy of limited types, which we (gratuitously, IMHO)
>     made illegal.  My example was illustrating this more-common case.
>
> The 1/3 and 2/3 are from memory of my experiments -- I could be off.

Right. If the Ada 95 rule was privacy-breaking (and it was), we could have
retained that part of it just to reduce the incompatibility; no one could have
truly complained because the effect would have been the same as in Ada 95.

But either we didn't think of it, were seduced by cleaning up a mess in the
language without noting possible impacts, or ???. Anyway, way too late now (and
at least we ended up with one less wart).

> P.S. If I become "e-mail dark" in the next day or two, it means the
> blizzard dumped a huge amount of heavy wet snow on a tree branch, and
> the strong winds blew down said branch, and it knocked down a power
> line that supplies electricity to this computer.  You may then assume
> I'm reading a paperback novel by candle light.

Could be more fun than reading the ARG list. ;-) We just got an inch of snow
yesterday and are supposed to get another inch tonight. And temps just below
freezing (no deep-freeze, knock wood). I think we're getting off easy.

***************************************************************

From: Bob Duff
Sent: Tuesday, January 27, 2015  10:22 AM

> I made the mistake of thinking that Ada 95 didn't have any
> privacy-breaking Legality Rules (because I didn't remember any).
>
> But clearly this case is privacy-breaking (I just went and looked it
> up with old wording, it's crystal-clear that it breaks privacy). So we
> could have retained that privacy-breaking to reduce the
> incompatibility. Not sure why we didn't (perhaps Pascal was looking
> forward to getting rid of a hack in their compiler? :-).

???

Are you mixing up compile time and run time?  Run-time rules break privacy all
the time.  (Which is why our allergy to breaking privacy is kind of silly -- we
can always avoid privacy breaking by turning a compile-time rule into a run-time
check, which doesn't benefit users.)

I don't see any legality rule here that breaks privacy.  Am I missing something?
The return-by-copy/return-by-reference difference breaks privacy, but that's run
time.  The accessibility check breaks privacy, but that too is run time.  I'm
looking at the 1995 AARM, and I see only two legality rules in 6.5, neither of
which break privacy.

I repeat:  My example is legal, and does not raise an exception, in both Ada 83
and Ada 95.  There is no privacy-breaking legality rule involved.  We made that
example illegal in Ada 2005 (and 2012) ONLY because we were shy of breaking
privacy.

***************************************************************

From: Randy Brukardt
Sent: Tuesday, January 27, 2015  5:19 PM

> Are you mixing up compile time and run time?

Definitely not.

> Run-time rules
> break privacy all the time.  (Which is why our allergy to breaking
> privacy is kind of silly -- we can always avoid privacy breaking by
> turning a compile-time rule into a run-time check, which doesn't
> benefit users.)
>
> I don't see any legality rule here that breaks privacy.  Am I missing
> something?  The return-by-copy/return-by-reference
> difference breaks privacy, but that's run time.  The accessibility
> check breaks privacy, but that too is run time.

Huh? 95% of accessibility checks are caught by the static check. All
accessibility checks include a paired static check and dynamic check.

>  I'm looking at the 1995 AARM, and I see only two legality rules in
> 6.5, neither of which break privacy.
>
> I repeat:  My example is legal, and does not raise an exception, in
> both Ada 83 and Ada 95.  There is no privacy-breaking legality rule
> involved.  We made that example illegal in Ada 2005 (and 2012) ONLY
> because we were shy of breaking privacy.

I really don't care what this example does at runtime (that's not a
compatibility issue, and that's ALL we're talking about here). I agree THIS
particular example is legal, but I was noting that the matching
return-by-reference case could not be legal, since it necessarily fails a static
accessibility check. Which makes the rule privacy-breaking.

I understand your point, as the number of return-by-reference functions is very
small. (The one time I was forced into writing one, it took me almost an entire
working day to get something legal; it was supposed to be a simple replacement
for the fact that Ada 95 did not allow deferred constants of tagged limited
types.) But the incompatibility hits far more cases (anything with a limited
private type in it). I don't remember if we considered that, because if we had,
we should have realized it was unacceptable (there are a lot of limited private
types in Ada 83 code, because there wasn't any other way to deal with types that
needed adjustment on assignment).

***************************************************************

From: Robert Dewar
Sent: Tuesday, January 27, 2015  5:30 PM

> Huh? 95% of accessibility checks are caught by the static check. All
> accessibility checks include a paired static check and dynamic check.

Well, "Huh?" always reads a bit rude to me, but where on earth does your figure
of 95% come from?

> I really don't care what this example does at runtime (that's not a
> compatibility issue, and that's ALL we're talking about here).
> I agree THIS particular example is legal, but I was noting that the
> matching return-by-reference case could not be legal, since it
> necessarily fails a static accessibility check. Which makes the rule
> privacy-breaking.

Right, but you shouldn't throw out A because unrelated B is a problem!

> I understand your point, as the number of return-by-reference
> functions is very small. (The one time I was forced into writing one,
> it took me almost an entire working day to get something legal; it was
> supposed to be a simple replacement for the fact that Ada 95 did not
> allow deferred constants of tagged limited types.) But the
> incompatibility hits far more cases (anything with a limited private
> type in it). I don't remember if we considered that, because if we
> had, we should have realized it was unacceptable (there are a lot of
> limited private types in Ada 83 code, because there wasn't any other way to
> deal with types that needed adjustment on assignment).

I think most of our users just bumped into this accidentally, e.g.
returning a Text_IO file if I remember one case???

***************************************************************

From: Randy Brukardt
Sent: Tuesday, January 27, 2015  5:59 PM

> > Huh? 95% of accessibility checks are caught by the static check. All
> > accessibility checks include a paired static check and dynamic check.
>
> Well, "Huh?" always reads a bit rude to me, but where on earth does your
> figure of 95% come from?

"Huh?" is just an expression of surprise/confusion to me. It might mean that
someone's being stupid, but there's no implied value judgment as to which of the
correspondents that is (it might very well be me).

As far as the 95% goes, that's just based on the fact that (in Ada 95, which is
what we were talking about) there are only two ways to get a dynamic
accessibility check - either do something involving an anonymous
access parameter, or do something involving a check in a generic body. The
latter is dynamic in name only for a compiler like GNAT (it is dynamic in
Janus/Ada, but that's the rare case), and the former shouldn't happen very often
as potential for failure of such checks is like pointing a giant cannon at your
head and giving someone you don't like the lanyard. :-) I'd guess the vast
majority of cases of the former are by accident (that surely has been the case
with me, mostly because Ada didn't allow in out parameters on functions).
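
A sketch of the access-parameter case (invented names; assume the first three
declarations are at library level): the conversion's check cannot be performed
statically, because the accessibility level of P depends on the actual at each
call:

   type Int_Ref is access all Integer;
   Global : Int_Ref;

   procedure Store (P : access Integer) is
   begin
      Global := Int_Ref (P);  --  dynamic accessibility check here
   end Store;

   procedure Oops is
      Local : aliased Integer := 0;
   begin
      Store (Local'Access);   --  the check fails: Program_Error
   end Oops;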

I don't think Ada 2012 changes this dynamic much; the major cause of problems
can be eliminated by using "in out" parameters on functions, but now there are a
couple of new ones involving aliased parameters and class-wide function returns
and of course SAOAATs (stand-alone objects of an anonymous access type). Most of
these new cases are very unlikely to occur in practice. (I do worry about the
class-wide return case somewhat.)

This is one of the reasons that I continue to resist the temptation to fill in
the various missing dynamic accessibility tests in the ACATS. Those objectives
are almost completely untested, but with the possible exception of the
class-wide return case, I doubt that they'll ever happen in practice.

...
> > I understand your point, as the number of return-by-reference
> > functions is very small. (The one time I was forced into writing
> > one, it took me almost an entire working day to get something legal;
> > it was supposed to be a simple replacement for the fact that Ada 95
> > did not allow deferred constants of tagged limited types.) But the
> > incompatibility hits far more cases (anything with a limited private
> > type in it). I don't remember if we considered that, because if we
> > had, we should have realized it was unacceptable (there are a lot of
> > limited private types in Ada 83 code, because there wasn't any other
> > way to deal with types that needed adjustment on assignment).
>
> I think most of our users just bumped into this accidentally, e.g.
> returning a Text_IO file if I remember one case???

Exactly. Limited private types are fairly common, but that alone doesn't trigger
the return-by-reference rules. The return-by-reference stuff was madness, and
getting rid of it probably was a good thing, but it's the related cases, which
had nothing to do with that, that were the incompatibility that caused problems.

Unfortunately, it has taken until this discussion for me to understand exactly
why that change was such a problem. That likely demonstrates why we missed it:
we were focused on return-by-reference (which is rare) and didn't think
enough about other cases that would get caught in the net. (I do think someone
brought that up, but we didn't really realize how widespread such things are.)

Anyway, hindsight is 20/20; if that's the worst mistake we've ever made, I'm
pretty pleased with our results. (The odds of working on something as large as
an Ada revision without making any mistakes are roughly zero, after all.)

***************************************************************

From: Robert Dewar
Sent: Tuesday, January 27, 2015  8:03 PM

Makes me think we should default in GNAT to allowing the copy case, to ease
the transition to Ada 2012. To be thought about ...

***************************************************************

From: Bob Duff
Sent: Tuesday, January 27, 2015  6:48 PM

> > Are you mixing up compile time and run time?
>
> Definitely not.

I think you are.

> > Run-time rules
> > break privacy all the time.  (Which is why our allergy to breaking
> > privacy is kind of silly -- we can always avoid privacy breaking by
> > turning a compile-time rule into a run-time check, which doesn't
> > benefit users.)
> >
> > I don't see any legality rule here that breaks privacy.  Am I
> > missing something?  The return-by-copy/return-by-reference
> > difference breaks privacy, but that's run time.  The accessibility
> > check breaks privacy, but that too is run time.
>
> Huh? 95% of accessibility checks are caught by the static check.

That's true, but...

>...All
> accessibility checks include a paired static check and dynamic check.

No, not this one.  So not "All".

The rule in question is 6.5(17).  Note that we're talking about the 1995 RM
 -- the newer version says "Paragraphs 9 through 20 were deleted."

There is no static rule corresponding to 6.5(17) in the 1995 version, hence
no (static) privacy breaking.

If you disagree, please quote the static rule from RM-95.
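
[Editor's note: For reference, a minimal sketch of the kind of function
RM95-6.5(17) governs; under the 1995 rules the check below is purely
dynamic. Names are invented:]

procedure Demo95 is
   type Lim is limited record   -- explicitly limited, so in Ada 95 this is
      Count : Integer;          -- a return-by-reference type
   end record;

   function Make return Lim is
      Local : Lim;
   begin
      return Local;  -- compiles under the 1995 rules (no static rule
   end Make;         -- rejects it); the 6.5(17) check fails at run time

   R : Lim renames Make;  -- calling Make raises Program_Error here
begin
   null;
end Demo95;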

> >  I'm looking at the 1995 AARM, and I see only two legality rules in
> > 6.5, neither of which break privacy.
> >
> > I repeat:  My example is legal, and does not raise an exception, in
> > both Ada 83 and Ada 95.  There is no privacy-breaking legality rule
> > involved.  We made that example illegal in Ada 2005 (and 2012) ONLY
> > because we were shy of breaking privacy.
>
> I really don't care what this example does at runtime (that's not a
> compatibility issue, and that's ALL we're talking about here).

Huh?

(I agree with you that "Huh?" should not be taken as an insult.  ;-) I just
don't have any idea what you're getting at in the above, nor in the following.)

> I agree THIS particular example is legal, but I was noting that the
> matching return-by-reference case could not be legal, since it
> necessarily fails a static accessibility check. Which makes the rule
> privacy-breaking.
  ^^^^^^^^ Which rule are you saying breaks privacy?  In 1995!

***************************************************************

From: Randy Brukardt
Sent: Tuesday, January 27, 2015  11:38 PM

> > > Are you mixing up compile time and run time?
> >
> > Definitely not.
>
> I think you are.

Certainly not; compatibility is *only* about compile-time rules. For the
purposes of this discussion, what happens at runtime is completely irrelevant.

...
> > > I don't see any legality rule here that breaks privacy.  Am I
> > > missing something?  The return-by-copy/return-by-reference
> > > difference breaks privacy, but that's run time.  The accessibility
> > > check breaks privacy, but that too is run time.
> >
> > Huh? 95% of accessibility checks are caught by the static check.
>
> That's true, but...
>
> >...All
> > accessibility checks include a paired static check and
> dynamic check.
>
> No, not this one.  So not "All".

So you're telling me that this rule violates the design of accessibility checks?

> The rule in question is 6.5(17).  Note that we're talking about the
> 1995 RM -- the newer version says "Paragraphs 9 through 20 were
> deleted."
>
> There is no static rule corresponding to 6.5(17) in the 1995 version,
> hence no (static) privacy breaking.

I thought the static rules were implicit in Ada 95. (Meaning that they were
defined by a blanket rule somewhere.) Maybe I've just gotten senile. :-)

I know that Janus/Ada rejected that example (return-by-reference form) when
I ran into it in Claw. Perhaps that was a bug, and maybe it was changed since
(I didn't check), but rejecting it saved me a huge amount of time trying to
figure out what the heck the problem was -- I'd still be trying to get that code
to work.

> If you disagree, please quote the static rule from RM-95.

Not worth the effort, honestly. It would take 2 hours of reading, and I see no
point in an issue that was decided long ago.

...
> > > I repeat:  My example is legal, and does not raise an exception,
> > > in both Ada 83 and Ada 95.

And I agree. It's the return-by-reference case that's not legal (or shouldn't
have been legal, take your pick).

...
> > I really don't care what this example does at runtime (that's not a
> > compatibility issue, and that's ALL we're talking about here).
>
> Huh?
>
> (I agree with you that "Huh?" should not be taken as an insult.  ;-) I
> just don't have any idea what you're getting at in the above, nor in
> the following.)

We're talking about compatibility; for that, only compile-time checks matter.
I don't care whatsoever what it does at runtime, be it return-by-copy,
return-by-reference, return-by-name, return-by-exception, or return-by-ufo.
The only thing that matters for compatibility is whether or not it is legal.
(Inconsistency is another deal altogether, but that's not the point here.)

You keep talking about return-by-copy. That has nothing to do with whether
or not it is legal. It probably has something to do with how likely it is to
occur in actual practice, and that's a useful data point, but in terms of
incompatibility it is irrelevant.

If you had simply said in the first place, "that's not illegal because there
is no static accessibility check FOR ANY TYPE", we'd have had a very different
discussion, because that's a very different claim. But your focusing on "my
example is legal" which is "return-by-copy" made it sound to me like you were
claiming that other examples had different legality (again, that's the only
thing that matters for compatibility).

> > I agree THIS particular example is legal, but I was noting that the
> > matching return-by-reference case could not be legal, since it
> > necessarily fails a static accessibility check. Which makes the rule
> > privacy-breaking.
>                                           ^^^^^^^^ Which rule are you
> saying breaks privacy?  In 1995!

The static accessibility check. There's one implied by the Dewar rule if
nothing else (unconditionally raising Program_Error is always madness, that's
even dumber than unconditionally raising Constraint_Error). Maybe I'm the one
who is mad. Anyway, this has gotten to be a complete waste of time - we're not
changing any rules here now. So let's stop.

***************************************************************

From: Bob Duff
Sent: Wednesday, January 28, 2015  10:23 AM


> We're talking about compatibility; for that, only compile-time checks
> matter. I don't care whatsoever what it does at runtime, be it
> return-by-copy, return-by-reference, return-by-name,
> return-by-exception, or return-by-ufo. The only thing that matters for
> compatibility is whether or not it is legal. (Inconsistency is another
> deal altogether, but that's not the point here.)

Now I see part of why I was confused -- we're using the term "compatible"
differently!  To me, inconsistencies are a subset of incompatibilities.  I
just looked it up in the AARM, and it agrees with me (not surprising, since I
wrote that part) 1.1.2:

                         Inconsistencies With Ada 83

39.b        This heading lists all of the upward inconsistencies between Ada
            83 and Ada 95. Upward inconsistencies are situations in which a
            legal Ada 83 program is a legal Ada 95 program with different
            semantics. This type of upward incompatibility is the worst type
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
            for users, so we only tolerate it in rare situations.

So I suggest you change the way you're using the terms (you're treating them as non-overlapping categories).

> So let's stop.

OK.

***************************************************************

From: Randy Brukardt
Sent: Wednesday, January 28, 2015  10:51 AM

> > We're talking about compatibility; for that, only compile-time
> > checks matter. I don't care whatsoever what it does at runtime, be
> > it return-by-copy, return-by-reference, return-by-name,
> > return-by-exception, or return-by-ufo. The only thing that matters
> > for compatibility is whether or not it is legal. (Inconsistency is
> > another deal altogether, but that's not the point here.)
>
> Now I see part of why I was confused -- we're using the term
> "compatible" differently!  To me, inconsistencies are a subset of
> incompatibilities.

Maybe informally. But as lead AARM author, I have to document them
separately. In the case where there is both a compile-time and a run-time
incompatibility, we document that as an inconsistency.

> I just looked it up in the
> AARM, and it agrees with me (not surprising, since I wrote that part)
> 1.1.2:
>
>                          Inconsistencies With Ada 83
>
> 39.b        This heading lists all of the upward inconsistencies between Ada
>             83 and Ada 95. Upward inconsistencies are situations in which a
>             legal Ada 83 program is a legal Ada 95 program with different
>             semantics. This type of upward incompatibility is the worst type
>                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>             for users, so we only tolerate it in rare situations.
>
> So I suggest you change the way you're using the terms (as
> non-overlapping categories).

Can't do that, as in that case the AARM would no longer make any sense. Your
wording above is a tautology; as such, it can only be understood if you read
"incompatibility" informally.

And of course, in casual conversation, the English meaning of "incompatibility"
(which you're using above) is certainly as you say. But I try never to use
words informally here when they have formal meanings. Conversation here (which
is usually on the record, after all) is best if not informal.

***************************************************************

From: Robert Dewar
Sent: Wednesday, January 28, 2015  11:00 AM

> Maybe informally. But as lead AARM author, I have to document them
> separarately. In the case where there is both compile-time and
> run-time incompatibility, we document that as an inconsistency.

I disagree, the idea of incompatibility not including what you call
inconsistency makes ZERO sense. I have no problem in using phrases like
compile-time-incompatibility or run-time-incompatibility.
But to use incompatibility to imply compile time, which is what I understand
your position to be, is plain wrong and confusing IMO

***************************************************************

From: Bob Duff
Sent: Wednesday, January 28, 2015  11:23 AM

> And of course, in causal conversation, the English meaning of
> "incompatibility" (which you're using above) certainly as you say. But
> I try to never use words informally here when they have formal meanings.
> Conversation here (which is usually on the record, after all) is best
> if not informal.

Argh!  The formal meaning matches the informal one!  Yes, the two types of
incompatibility are listed separately, but so what?  In addition to the
wording above, we have:

                        Incompatibilities With Ada 83

39.e        This heading lists all of the upward incompatibilities between Ada
            83 and Ada 95, except for the ones listed under "
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
            Inconsistencies With Ada 83" above. These are the situations in
            ^^^^^^^^^^^^^^^
            which a legal Ada 83 program is illegal in Ada 95. We do not
            generally consider a change that turns erroneous execution into an
            exception, or into an illegality, to be upwardly incompatible.

It is crystal clear in the AARM that "inconsistency" is a subset of "incompatibility".

So PLEASE, stop using these terms in your idiosyncratic and confusing way!

***************************************************************

From: Robert Dewar
Sent: Wednesday, January 28, 2015  11:33 AM

> It is crystal clear in the AARM that "inconsistency" is a subset of
> "incompatibility".

Furthermore, any other view makes ZERO sense, inconsistencies are certainly
incompatibilities, and furthermore they are incompatibilities of the worst
possible kind.

***************************************************************

From: Randy Brukardt
Sent: Friday, January 30, 2015  5:41 PM

...
> It is crystal clear in the AARM that "inconsistency" is a subset of
> "incompatibility".
>
> So PLEASE, stop using these terms in your idiosyncratic and confusing
> way!

You're making a mountain out of a molehill.

There is no confusion in the vast majority of messages here, simply because
we have almost no reason to talk about inconsistencies (run-time
incompatibilities). That's because we have a near-zero tolerance for them, as
they potentially make the rocket go off course when the software is recompiled
(or perhaps make the spam filter pass virii, to take a more realistic example
of something that could get recompiled with little change or oversight).

We've only tolerated them in two cases: (1) where the previous language is
nonsense or ill-defined; (2) where the change is expected mostly to fix bugs,
as the new behavior is more like what programmers expect. [I'm actually
surprised at (2), given the sorts of things Robert talks about here, but we
did the untagged equality change in large part because of Robert's strong
support. I probably would have voted against it otherwise.]

So the only reasons to talk about inconsistencies here are to (A) justify
something as either (1) or (2), or to reject an idea based on it being neither
(1) nor (2). Usually that is a very short discussion (only justifying something
as (2) potentially needs any discussion).

[Aside: probably the overriding principle is that any inconsistencies that we
allow shouldn't be any more significant than what could happen from basic
compiler maintenance (from one version to the next). For instance, improved
code performance can expose a race condition that had remained hidden in
previous compiler versions because the code was slow enough that it never
occurred. No run-time behavior change mandated by the Standard should be any
more likely to cause problems than that sort of issue.]

My point is that the only kind of incompatibilities that are even interesting
here (ARG List) are the compile-time kind. We tolerate more of those because
no system that doesn't compile is going to malfunction. :-) Exactly how much
we should tolerate is, of course, open to discussion. [We probably got it
wrong in the limited return case.]

And, to bring this back to topic, "suppressible errors" can only possibly
eliminate compile-time incompatibilities. They can't do anything for
inconsistencies, and indeed they might expose some that are otherwise hidden.
So I don't see any reason that I should care about any run-time effects where
we're specifically talking about "suppressible errors"; they can only be a
negative, not a positive for that.

In that case, we're using "incompatibility" in effectively the same way.
And we all know what "inconsistency" means in this context, so we can use
that in the rare case that it matters.

Similarly, in the limited return example we were talking about, we would
never have tolerated an inconsistency (a run-time incompatibility, if you
must). If the compile-time incompatibility was unacceptable, then we would
have had to (A) break privacy [definitely unacceptable at the time, quite
possibly still unacceptable], or (B) create a new feature for constructors.
[IMHO, we should have done (B). Several of us held out on that for a long time;
I was eventually convinced that it was too close to existing functions and
that I wasn't going to win the separate feature debate - but of course that
didn't take the (compile-time) incompatibility into sufficient account.]
There is no circumstance where a run-time incompatibility would ever have been
tolerated for that feature (and that remains true, IMHO). So there is nothing
to talk about in that realm; OF COURSE the behavior would be unchanged from
Ada 95, or it would be illegal. There are no other choices (since this is
certainly neither of the exceptions where an inconsistency would be allowed
noted at the start of this message). So I remain perplexed why you were so
fixated on that - it's irrelevant. The only thing we could ever have varied is
the compile-time incompatibility; nothing else is even an option for the language
standard. (GNAT, of course, could do something different; it would be sad but
understandable.)

I'll try to qualify "incompatibility" with "compile-time" when it really
matters (as I did above), but I make no promises to remember every time.

***************************************************************

From: Bob Duff
Sent: Friday, January 30, 2015  6:33 PM

> You're making a mountain out of a molehill.

Perhaps, but we recently had a hugely unproductive argument, based on our
mutual misunderstanding of each other's use of the term.

> There is no confusion in the vast majority of messages here, simply
> because we have almost no reason to talk about inconsistencies
> (run-time incompatibilities). That's because we have a near-zero
> tolerance for them, as they potentially make the rocket go off course
> when the software is recompiled (or perhaps make the spam filter pass
> virii, to take a more realistic example of something that could get
> recompiled with little change or oversight).

Agreed.  There are 49 "Inconsistencies" documented in the AARM.
Not too many, considering it's a 100,000+-line document.

> I'll try to qualify "incompatibility" with "compile-time" when it
> really matters (as I did above), but I make no promises to remember every time.

Fair enough.  Thank you!

***************************************************************

From: Randy Brukardt
Sent: Friday, January 30, 2015  6:56 PM

> Agreed.  There are 49 "Inconsistencies" documented in the AARM.
> Not too many, considering it's a 100,000-line+ document.

Well, Word says that Draft 5 (the most current published document) of the AARM
has 90,344 lines. So you've overestimated by 10%. :-)  [Also, it has 55,667
paragraphs, 640,755 words, and 4,455,499 characters including spaces (3,739,586
without spaces). That's probably more than you ever wanted to know about the
size of the AARM. Of course, an AARM with changes shown would be bigger, but I
didn't think counting deleted text made much sense in this case.]

And, looking at the index in Draft 5, there are 18 inconsistencies vs Ada 2005,
19 vs Ada 2012, 15 against Ada 83, and 15 against Ada 95 (so I got 48 in Ada
2012 and 67 in current Ada). Probably there are so many new ones because I've
been pretty pedantic about documenting them. Most are cases where we forgot
checks or had too many, not really significant in practice. And I probably
counted wrong somewhere, which explains why you have one more for Ada 2012. :-)

***************************************************************

From: Jeff Cousins
Sent: Tuesday, January 27, 2015  10:00 AM

> Ada 2005 was in my mind pretty much a failure, in that we implemented it, but
> few users made the transition, and there is no question that the
> incompatibilities in return of limited types acted as a major barrier.

Thinking just of my projects, overall that change had more negatives than
positives, but it wasn't any big deal, certainly not enough to prevent moving
forward.


And now that there's a separate thread from 0003...

My pet RM recommended warning would be for mis-matched constraints on renames.

***************************************************************

From: Robert Dewar
Sent: Tuesday, January 27, 2015  10:04 AM

> My pet RM recommended warning would be for mis-matched constraints on renames.

can you give an example?

***************************************************************

From: Jeff Cousins
Sent: Tuesday, January 27, 2015  10:18 AM

Bottom of p287/top of p288 of "Programming in Ada 2012".
GNAT gives warnings at the call of Q, though a warning at the declaration of Q
would also be nice.

***************************************************************

From: Bob Duff
Sent: Tuesday, January 27, 2015  10:36 AM

> > My pet RM recommended warning would be for mis-matched constraints on renames.
>
> can you give an example?

procedure Eg is
   procedure P (X : Integer) is
   begin
      null;
   end P;

   procedure Q (X : Natural) renames P; -- "Natural" is ignored here.
begin
   Q (-1);
end Eg;

This is legal, and does NOT raise an exception, in all versions of Ada.
See RM-8.5.4(7).

GNAT has a compiler bug -- it raises Constraint_Error on the call to Q (and
warns about that at compile time).  I'm not going to open a ticket unless I
manage to fix all more-important bugs and enhancements.  ;-)

I assume what Jeff is asking for is a warning on the renaming_decl.
Similar oddities occur for object renamings and perhaps others.

***************************************************************

From: Bob Duff
Sent: Tuesday, January 27, 2015  10:37 AM

> My pet RM recommended warning would be for mis-matched constraints on
> renames.

That's reasonable.

But in practice, if you want warnings, it's quicker to ask AdaCore than to
require it in the RM.  ;-)

***************************************************************

From: Jeff Cousins
Sent: Tuesday, January 27, 2015  11:30 AM

> procedure Eg is
>    procedure P (X : Integer) is
>    begin
>       null;
>    end P;

>    procedure Q (X : Natural) renames P; -- "Natural" is ignored here.
> begin
>    Q (-1);
> end Eg;

Integer and Natural the other way round is what I was thinking of, which should
raise a Constraint_Error, but yes, GNAT gets it wrong for Bob's example.
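
[Editor's note: A sketch of the reversed case Jeff describes, mirroring Bob's
example:]

procedure Eg2 is
   procedure P (X : Natural) is
   begin
      null;
   end P;

   procedure Q (X : Integer) renames P;  -- "Integer" is ignored here.
begin
   Q (-1);  -- raises Constraint_Error: the actual is checked against Natural.
end Eg2;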

***************************************************************

From: Randy Brukardt
Sent: Tuesday, January 27, 2015  6:08 PM

> > My pet RM recommended warning would be for mis-matched constraints
> > on renames.
>
> That's reasonable.
>
> But in practice, if you want warnings, it's quicker to ask AdaCore
> than to require it in the RM.  ;-)

But this seems like a case that is tailor-made for Suppressible Errors. Here's a
rule that we know is stupid (static matching should have been required; the
thread that Goodenough dug up shows that the only reason it is the way it is was
that Ichbiah didn't want to define something like static matching -- once that
was done in Ada 95, no further reason remains), but we can't change it because
it would be seriously incompatible. One could easily imagine defining it as a
suppressible error, and then allow reverting to the old rules if suppressed.

That would actually let us get rid of a wart.

***************************************************************

From: Bob Duff
Sent: Tuesday, January 27, 2015  6:53 PM

> > > My pet RM recommended warning would be for mis-matched constraints
> > > on renames.
> >
> > That's reasonable.
> >
> > But in practice, if you want warnings, it's quicker to ask AdaCore
> > than to require it in the RM.  ;-)
>
> But this seems like a case that tailor-made for Suppressible Errors.
> Here's a rule that we know is stupid (static matching should have been
> required; the thread that Goodenough dug up shows that the only reason
> it is the way it is was that Ichbiah didn't want to define something
> like static matching
> -- once that was done in Ada 95, no further reason remains), but we
> can't change it because it would be seriously incompatible. One could
> easily imagine defining it as a suppressible error, and then allow
> reverting to the old rules if suppressed.
>
> That would actually let us get rid of a wart.

Yes, good idea.

But I don't think your history is quite right.  I think there were some cases
in Ada 83 where you could NOT statically match, because of restrictions on
'Base.  To get this right, we'd need to study whether the 1995 rules about
'Base solve the problem, and if not, fix them somehow.
(Allow more 'Base, or exempt those cases from the rule.)

***************************************************************

From: Robert Dewar
Sent: Tuesday, January 27, 2015  8:05 PM

> But this seems like a case that tailor-made for Suppressible Errors.
> Here's a rule that we know is stupid (static matching should have been
> required; the thread that Goodenough dug up shows that the only reason
> it is the way it is was that Ichbiah didn't want to define something
> like static matching
> -- once that was done in Ada 95, no further reason remains), but we
> can't change it because it would be seriously incompatible. One could
> easily imagine defining it as a suppressible error, and then allow
> reverting to the old rules if suppressed.

Sure, but Bob is right: if you really want a warning, ask AdaCore.
Putting it in Ada 2020 as a suppressible error might satisfy someone's feeling
of aesthetic consistency, but isn't really helpful :-)

BTW, a similar big surprise is that Positive'Image does not work as
expected; we have customers puzzled by this all the time.
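
[Editor's note: Presumably the classic leading-blank surprise -- 'Image is
defined in terms of the base type, so the sign column remains even for a
subtype that excludes negative values:]

with Ada.Text_IO;
procedure Show is
   S : constant String := Positive'Image (42);
begin
   Ada.Text_IO.Put_Line ('[' & S & ']');  -- prints "[ 42]", not "[42]";
                                          -- same output as Integer'Image
end Show;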

***************************************************************

From: Tucker Taft
Sent: Tuesday, January 27, 2015  9:22 PM

> But this seems like a case that tailor-made for Suppressible Errors.
> Here's a rule that we know is stupid (static matching should have been
> required; the thread that Goodenough dug up shows that the only reason
> it is the way it is was that Ichbiah didn't want to define something
> like static matching
> -- once that was done in Ada 95, no further reason remains),...

Generic matching and renames generally use the same rules.  I would argue that
for generic matching, the relative looseness in matching of formal subprograms
is a feature, and it would be quite incompatible to change that.

***************************************************************

From: Randy Brukardt
Sent: Tuesday, January 27, 2015  11:05 PM

> Yes, good idea.
>
> But I don't think your history is quite right.

My history is right :-), but whether the exact static matching rules that we
actually have in Ada would work unmodified is a different issue. That doesn't
have anything to do with the historical issue -- after all, the devil is always
in the details.

> I think there
> were some cases in Ada 83 where you could NOT statically match,
> because of restrictions on 'Base.  To get this right, we'd need to
> study whether
> 1995 rules about 'Base solve the problem, and if not fix them somehow.
> (Allow more 'Base, or exempt those cases from the rule.)

Last time we looked at this, we would have exempted 'Base from the rules.

In particular, the formal definition of operator symbols typically looks
like:

     function "*" (Left, Right : in My_Int'Base) return My_Int'Base;

but people would be seriously surprised if

     function "*" (Left, Right : in P.My_Int) return P.My_Int renames P."*";

didn't work. We had planned an exception to the rule to allow this (and only
this - My_Int has to be a first subtype) to work. But the whole idea didn't
fly; we'd need it to be a Suppressible Legality Rule.

***************************************************************

From: Randy Brukardt
Sent: Tuesday, January 27, 2015  11:08 PM

> Sure, but Bob is right, if you really want a warning, ask AdaCore.
> Putting it in Ada 2020 as a suppressible error might satisfy some
> one's feeling of aesthetic consistency, but isn't really helpful :-)

It would be helpful because it would change the default from correct (and a
warning) to illegal (and allowable if necessary). People who always run GNAT
in warnings-as-errors mode wouldn't see any difference, but everyone else would.
And this is a real bane to understanding Ada code (you have to ignore the
renames and go back to the original declaration, but how many people truly
remember to do that?).

***************************************************************

From: Randy Brukardt
Sent: Tuesday, January 27, 2015  11:16 PM

> Generic matching and renames generally use the same rules.

But that seems more like a conceit than anything that's really required. I
don't see any reason to insist that they're the same (doing that has gotten us
in trouble in other areas in the past), and I think we really need to consider
to cut that cord. After all, the majority of the rules are repeated in both
places; keeping them consistent actually takes work, and for most purposes
that work just causes incompatibilities.

> I would argue that for generic matching, the relative looseness in
> matching of formal subprograms is a feature, and it would be quite
> incompatible to change that.

I disagree that this is a feature - I don't think I've ever used it (assuming
that the T vs. T'Base thing is appropriately fixed - couldn't do this without
having that). Having declarations with subtypes that lie is just confusing.
Most such subtypes shouldn't have any constraints, and most formal subprogram
parameters shouldn't have any constraints, and most actual subprogram
parameters shouldn't have any constraints, and stuff that violates that is
just a tripping hazard (to use Bob's term).

Of course, a good compelling example to the contrary could change my mind on
this one.

***************************************************************

From: Robert Dewar
Sent: Wednesday, January 28, 2015  3:03 AM

> Generic matching and renames generally use the same rules.  I would
> argue that for generic matching, the relative looseness in matching of
> formal subprograms is a feature, and it would be quite incompatible to
> change that.

That seems right to me, so what I think is reasonable is for compilers to
give a warning in the non-generic case. I hardly think this is worth
formalizing at the ARG level with a "suppressible error", especially since as
Tuck points out it would create an oddity to do it in one case and not the
other. Such warning heuristics (guessing what is a likely error, avoiding
false positives) are more the domain of a compiler than a standard. And it
is so little effort to add a nice warning to GNAT, compared with debating a
suppressible error in the standard.

Tuck, feel free to open an AdaCore ticket for this warning if you agree it
would be useful!

***************************************************************

From: Robert Dewar
Sent: Wednesday, January 28, 2015  3:05 AM

> It would be helpful because it would change the default from correct
> (and a
> warning) to illegal (and allowable if necessary).

No, the standard has nothing to say about the default operation of a compiler,
so a compiler decides what standard features are or are not included in the
default mode. For example, GNAT defaults to the static elaboration model. You
can get the dynamic model, but only if you set a special switch. I suspect
that GNAT would take everything that was a suppressible error, and just make
it a warning category, with a special switch to make this particular warning
category illegal.

***************************************************************

From: Randy Brukardt
Sent: Friday, January 30, 2015  6:34 PM

True, but I was talking about "standard mode", which of course every Ada
compiler has somehow. And most Ada programmers are aware that GNAT needs some
options to run in "standard mode", so I doubt there will be much surprise if
there is one more. :-)

One uses "standard mode" for maximum code portability. If someone is never
going to use an Ada compiler other than GNAT (or any single compiler for that
matter), then of course the details of the Ada Standard in general are pretty
much unimportant to them -- it only matters when multiple implementation are
in use).

Anyway, I view "suppressible errors" as mainly useful for methodological
restrictions, specifically cases where the construct has a strong probability
of being a problem. (The declaration of some entity named Standard is one such
example.) I'd say there has to be less than a 5% chance that the construct in
question is not a problem in order for a suppressible error to apply. In such
cases, it's important that the programmer explicitly take an action to tell
the compiler and especially future readers that the code is OK and why (local
object only, too much change at this time, etc.) That's why some sort of
explicit action to allow the code should be required. (Of course, if there is
no chance that the construct is useful, then it's just a straight error.)

I'm not much interested in "suppressible errors" in cases that are more grey
than that, especially not if suppressing the error just exposes an
inconsistency (run-time incompatibility if you prefer) or erroneous execution.
Such cases are best left to the implementation, and to informal warning
messages.

Enforcing proper use of parallel operations seems right in this wheelhouse
(you'd want explicit overriding of the checks for anything that is likely to
cause a race condition - but of course it's possible that no such race
condition is possible, so a straight error is too tough). I suspect similar
suppression would help the Global aspect (doing so would allow hiding memo
functions from the user, but of course they could be hosed if such a function
is used in a parallel context). I think the rename case is here, too,
including the similar generic actual case, but I can imagine that not everyone
would agree.

Anyway, I think you ought to decide on how to handle suppressible errors by
default once we actually determine which errors are suppressible. They might
turn out to be much more obviously errors (if not, I'm not likely to vote to
put them in at all).

***************************************************************

From: Erhard Ploedereder
Sent: Wednesday, January 28, 2015  1:25 PM

I had a mailer problem, so this message arrives late or really late for
the meeting. [Editor's note: It arrived in my inbox an hour and twenty minutes
after we adjourned the meeting.] A summary of my positions on some of the
issues...

[Issues associated with other AIs removed - Editor.]

Suppressible Errors: Since a compiler is free to have a switch that suppresses
error messages that endanger compatibility with whatever, I definitely do not
want suppressible errors in the language that are suppressible only for
compatibility reasons. (The consequence would be that, no matter how bad a
newly discovered problem in existing features is, for eternity all such error
messages would be suppressible. In the words of someone else: this is
disgusting language design. An error should remain an error, requiring a fix.)
Suppressible errors as a variant of "stern warning": much less resistance on
my part. Still, compilers have been creating such categories forever. I did it
in the Eighties. If they are in the category "this will never work (as
expected)", well, fine.

***************************************************************

From: Robert Dewar
Sent: Sunday, February  1, 2015  10:32 AM

> Anyway, I view "suppressible errors" as mainly useful for 
> methodological restrictions, specifically cases where the construct 
> has a strong probability of being a problem. (The declaration of some 
> entity named Standard is one such example.) I'd say there has to be 
> less than a 5% chance that the construct in question is not a problem 
> in order for a suppressible error to apply.

It is plain nonsense to say that using Standard as an entity name has a 95%
chance of being a problem. I looked through the 22 uses of Standard as an
entity in our test suite, and 100% of them seemed safe and reasonable to me.
Please don't impose YOUR idiosyncratic views on the language.

The language could have reserved the identifiers in Standard, but quite
deliberately decided not to. Of course you can disagree with that decision,
and for *your own code* restrict uses you don't like, but it is unreasonable
to insist that your view is the only one that makes sense (reminds me of the
anti-use fanatics who are quite sure that everyone writing Ada should avoid
use clauses). Actually it is when you don't use USE clauses that reuse of
standard entities makes sense, e.g. the CORBA Ada standard requires the
definition of a type called String, and to me Corba.String reads very nicely,
and coexists just fine with plain String to refer to the Ada standard String.
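
[Editor's note: A minimal sketch of that coexistence; the representation
below is invented and is not the actual CORBA Ada mapping:]

package Corba is
   type String is private;   -- coexists with Standard.String
private
   type String is new Integer;  -- stand-in; the representation is irrelevant
end Corba;

with Corba;
procedure Client is
   X : Corba.String;       -- the CORBA type, read qualified
   Y : String := "plain";  -- still Standard.String
begin
   null;
end Client;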

That's what worries me about suppressible errors, they have a real potential
for being a place where people's idiosyncratic views on things you should
avoid can be deprecated.

I really much prefer the dynamic in an implementation setting for choosing
what to warn about. You see a report from a user who has done something that,
while legal, is dubious and has caused trouble.

You explain that the compiler is doing the right thing, but put on your
thinking cap to wonder if the compiler could emit a useful warning. Here is
an example of a recent addition to GNAT that has that kind of history:

> NF-74-O107-022 Warn on use of pragma Import in Pure unit (2015-01-10)
>
>   The use of pragma Import in a Pure unit is worrisome since it can be used
>   to "smuggle" non-pure behavior into a pure unit. This usage now generates
>   a warning that calls to a subprogram imported in this manner may in some
>   cases be omitted by the compiler, as allowed for Pure units. This warning
>   is suppressed if an explicit Pure_Function aspect is given.
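
[Editor's note: For concreteness, a sketch of the pattern described above;
pragma Pure_Function is GNAT-specific:]

package Pure_Pkg is
   pragma Pure;
   function Get_Tick return Integer;
   pragma Import (C, Get_Tick);
   --  GNAT now warns that calls to Get_Tick may be omitted, as allowed
   --  for Pure units; adding "pragma Pure_Function (Get_Tick);" would
   --  suppress the warning.
end Pure_Pkg;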

***************************************************************

From: Bob Duff
Sent: Sunday, February  1, 2015  11:35 AM

>....e.g. the CORBA Ada standard requires the definition of a type  
>called String, and to me Corba.String reads very nicely, and coexists  
>just fine with plain String to refer to the Ada standard String.

That's fine for clients, but within Corba itself, it's a nuisance, because 
Corba.String IS referred to as String, and Standard.String MUST be referred
to as Standard.String.  I find that confusing when reading the sources,
especially since the visibility is different in other parts of PolyORB.

> That's what worries me about suppressible errors, they have a real 
> potential for being a place where people's idiosyncratic views on 
> things you should avoid can be deprecated.

Well, it's much worse when people's idiosyncratic views make it into the
language as (hard) legality rules, which has happened.  Or when those views
distort the language design ("We don't have to worry about X, because nobody
should be using X", which is fine if there's universal agreement about X, but
not otherwise).

> I really much prefer the dynamic in an implementation setting for 
> choosing what to warn about. You see a report from a user who has done 
> something that, while legal, is dubious and has caused trouble.
> 
> You explain that the compiler is doing the right thing, but put on 
> your thinking cap to wonder if the compiler could emit a useful 
> warning. Here is an example of a recent addition to GNAT that has that 
> kind of history: ...

Nothing wrong with that dynamic, but:

There is some value in having standardization, at least for some such cases.
There should be no implication that RM rules are "more important" than
compiler-specific warnings.  In fact it is already the case that (some)
compiler-specific warnings (e.g. uninit vars) are more important than (some)
RM rules (e.g. some obscure accessibility case that only comes up in Bairdian
examples).

ARG has historically gotten into huge arguments, where we have a useful
proposal, but some folks insist on legality rules, and those are incompatible,
so we end up dropping the useful proposal, or we end up with incompatibilities,
or we end up with an inferior alternative proposal.  Suppressible errors are
intended to defuse those arguments.

Consider 'in out' parameters on functions, which took decades to get into the
language, and only after deferring to Tucker's insistence on some legality
rules (which turned out to be incompatible).

***************************************************************

From: Robert Dewar
Sent: Sunday, February  1, 2015  2:34 PM

> That's fine for clients, but within Corba itself, it's a nuisance, 
> because Corba.String IS referred to as String, and Standard.String 
> MUST be referred to as Standard.String.  I find that confusing when 
> reading the sources, especially since the visibility is different in 
> other parts of PolyORB.

Seems bad style indeed; I would always use Corba.String within Corba itself.
There, my recommendation would be to ALWAYS qualify String, to prevent any
confusion.

Maybe *that* should be what causes a warning: referring to an entity with
the same name as a standard entity without qualification.

> Well, it's much worse when people's idiosyncratic views make it into 
> the language as (hard) legality rules, which has happened.  Or when 
> those views distort the language design ("We don't have to worry about 
> X, because nobody should be using X", which is fine if there's 
> universal agreement about X, but not otherwise).

Strongly agree with this; for instance, the incompatible business with
overlapping parameters, which does cause customers trouble with false
positives, should have been a suppressible error.

> There is some value in having standardization, at least for some such 
> cases.  There should be no implication that RM rules are "more 
> important" than compiler-specific warnings.  In fact it is already the 
> case that (some) compiler-specific warnings (e.g. uninit vars) are 
> more important than (some) RM rules (e.g. some obscure accessibility 
> case that only comes up in Bairdian examples).

I am dubious about there being some value in this.
>
> ARG has historically gotten into huge arguments, where we have a 
> useful proposal, but some folks insist on legality rules, but those 
> are incompatible, so we end up dropping the useful proposal, or we end 
> up with incompatibilities, or we end up with an inferior alternative 
> proposal.  Suppressible errors are intended to defuse those arguments.
> Consider 'in out' parameters on functions, which took decades to get
> into the language, and only after deferring to Tucker's insistence on
> some legality rules (which turned out to be incompatible).

Yes, good point, and, as above, good example!

***************************************************************

From: Robert Dewar
Sent: Sunday, February  1, 2015  2:47 PM

> Maybe *that* should be what causes a warning, referring  to an entity 
> with the same name as a standard entity without qualification.

In fact I am tempted to try that one and see how many violations,

   warning: "String" refers to non-standard declaration at ....

or

   warning: "String" refers to Corba.String

I think I will test that out :-)

***************************************************************

From: Robert Dewar
Sent: Sunday, February  1, 2015  2:51 PM

One point here also is that it is hard to imagine a case in practice where a
suppressible error was defined, and GNAT found the warning so troublesome as
to be unacceptable EVEN under a special switch.

Note that similar stuff has happened before in the context of obsolete features,
where including the old pragmas/attributes as obsolescent in pragma
Restrictions (No_Obsolescent_Features) caused major incompatibilities, and
GNAT just refuses to make this inclusion (I can't remember if this is now
allowed by the RM or not, but it would not affect the GNAT decision).

***************************************************************

From: Randy Brukardt
Sent: Monday, February  2, 2015  8:41 PM

> > Anyway, I view "suppressible errors" as mainly useful for 
> > methodological restrictions, specifically cases where the construct 
> > has a strong probability of being a problem. (The declaration of 
> > some entity named Standard is one such example.) I'd say there has 
> > to be less than a 5% chance that the construct in question is not a 
> > problem in order for a suppresible error to apply.
> 
> It is plain nonsense to say that using Standard as an entity name has 
> a 95% chance of being a problem. I looked through the 22 uses of 
> Standard as an entity in our test suite, and 100% of them seemed safe 
> and reasonable to me. Please don't impose YOUR idiosyncratic views on 
> the language.

Why do you think this is about me? This is just an example, and one that
appeared to have consensus during previous discussions. It SURELY isn't
about me -- anything that gets made a suppressible error would have to have
a consensus of the ARG.

The only thing about me here is the "95%", which is an approximation of the
criterion that I would use for determining what I think should (or should not)
be a suppressible error. If one doesn't have a criterion, then it just becomes
random selection, and that would be a terrible way to do language design.

...
> That's what worries me about suppressible errors, they have a real 
> potential for being a place where people's idiosyncratic views on 
> things you should avoid can be deprecated.

Yup.  I'd hope that we could come to some sort of consensus on the criteria
for what is a suppressible error. Without that, the whole idea is just going to
turn into a quagmire.

> I really much prefer the dynamic in an implementation setting for 
> choosing what to warn about. You see a report from a user who has done 
> something that, while legal, is dubious and has caused trouble.
> 
> You explain that the compiler is doing the right thing, but put on 
> your thinking cap to wonder if the compiler could emit a useful 
> warning. Here is an example of a recent addition to GNAT that has that 
> kind of history:
> 
> > NF-74-O107-022 Warn on use of pragma Import in Pure unit 
> > (2015-01-10)
> >
> >   The use of pragma Import in a Pure unit is worrisome since it can be used 
> >   to "smuggle" non-pure behavior into a pure unit. This usage now generates
> >   a warning that calls to a subprogram imported in this manner may in some
> >   cases be omitted by the compiler, as allowed for Pure units. This warning
> >   is suppressed if an explicit Pure_Function aspect is given.

I'd guess that you'd have trouble getting consensus on this being a
suppressible error, given that when I raised this issue years ago I was told
that it is an important loophole for making Pure units usable in practice
(such as when debugging them).

If all we're going to do is argue about what errors should be suppressible,
and then confuse them with warnings, the whole idea starts to look like a
waste of time.

***************************************************************
