Version 1.11 of ai05s/ai05-0177-1.txt


!standard 3.1(3/2)          11-02-03 AI05-0177-1/07
!standard 3.11.1(1/1)
!standard 6.1(20/2)
!standard 6.1(30/2)
!standard 6.7(2/3)
!standard 6.7(3/2)
!standard 6.7(5/2)
!standard 6.8(0)
!standard 7.5(2.8/2)
!standard 8.3(18/2)
!standard 8.3.1(3/2)
!standard 13.14(8/1)
!standard 13.14(10.1/3)
!standard 13.14(10.2/3)
!class Amendment 09-10-29
!status Amendment 2012 10-11-19
!status ARG Approved 8-0-2 10-10-29
!status work item 09-10-29
!status received 09-03-15
!priority Medium
!difficulty Easy
!subject Expression functions
!summary
An expression function can define the body of a function, even in a package specification; its expression acts like a default expression for freezing and evaluation purposes.
A null procedure can be a completion.
!problem
With the advent of pre- and postconditions (see AI05-0145-1) and conditional expressions (see AI05-0147-1), expressions in specifications are going to grow much larger and become more complicated. It is important that parts of such expressions can be abstracted.
Abstraction of expressions is usually done by introducing functions. However, this puts the expression in the body, rather than the specification. This has several negative side-effects:
-- Hiding of the expression from the compiler and other tools that
primarily process specifications;
-- Requiring the body to exist in order to do static analysis of the
pre/post conditions (meaning static analysis cannot be performed early in the development of a system or on the use of skeleton placeholder packages);
-- Introduction of otherwise unnecessary bodies and/or otherwise unnecessary
inline body dependencies (increasing system build times).
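As a sketch of the status quo (the names Stacks, Is_Full, and Push are invented for illustration), a precondition that calls a predicate function forces the predicate's defining expression into the package body:

   package Stacks is
      type Stack is private;
      function Is_Full (S : in Stack) return Boolean;
      procedure Push (S : in out Stack; V : in Integer)
         with Pre => not Is_Full (S);
   private
      type Stack is record
         Top : Natural := 0;
      end record;
      -- The expression defining Is_Full must appear in the package
      -- body, where specification-only tools cannot see it.
   end Stacks;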
!proposal
expression_function ::= [overriding_indicator] function_specification is (expression);
[Note: The parens are needed to clearly separate an expression function from the start of a normal function body.]
The expected type of the expression of an expression function is that of the return subtype of the function_specification.
Within the expression of an expression function, the names of the parameters of the function_specification are directly visible.
The dynamic semantics are the same as those of a function whose body contains nothing but "return expression".
An expression function can be used as a declaration or as the completion of a function specification. If used as a completion, the completion can be given anywhere after the declaration in the same package (if the declaration is in a package). Specifically, expression functions in a private part can complete function declarations from the visible part.
The expression of an expression function is frozen in the same way as a default expression. That is, freezing is only performed at the point of a call of the function, with the expression essentially inlined at that point. (Of course, if that call is in a default expression, the freezing would be deferred again.)
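For instance (a sketch; the names Stacks, Is_Full, and Push are invented), an expression function in the private part can complete a declaration from the visible part, so the predicate is available to tools without requiring a package body:

   package Stacks is
      type Stack is private;
      function Is_Full (S : in Stack) return Boolean;
      procedure Push (S : in out Stack; V : in Integer)
         with Pre => not Is_Full (S);
   private
      type Stack is record
         Top : Natural := 0;
      end record;
      function Is_Full (S : in Stack) return Boolean is
         (S.Top = 100); -- completes the visible declaration; no body needed
   end Stacks;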
!wording
Add expression_function_declaration to the list of basic_declarations in 3.1(3/2).
Modify the AARM Note 3.11(10.a.1/1): A subprogram can be completed by a renaming-as-body{, a null_procedure_declaration, or an expression_function_declaration}, and we need ...
[Editor's note: This makes it clear that null procedures and expression functions used as completions might need an Elaboration_Check.]
Modify the last sentence of 3.11.1(1/1):
A body is a body, an entry_body, {a null_procedure_declaration or expression_function_declaration that completes another declaration}, or a renaming-as-body (see 8.5.4).
Change 6.1(20/2): ... A completion is not allowed for an abstract_subprogram_declaration{,} [or] a null_procedure_declaration (see 6.7){, or an expression_function_declaration (see 6.8)}.
Modify 6.1(30/2):
A subprogram declared by an abstract_subprogram_declaration is abstract; a subprogram declared by a subprogram_declaration is not. See 3.9.3, “Abstract Types and Subprograms”. Similarly, a procedure {declared}[defined] by a null_procedure_declaration is a null procedure; a procedure declared by a subprogram_declaration is not. See 6.7, “Null Procedures”. {Finally, a function declared by an expression_function_declaration is an expression function; a function declared by a subprogram_declaration is not. See 6.8, "Expression Functions".}
[Note: The following three changes are needed to allow null procedures as completions; see the discussion for why this is needed.]
Add after 6.7(2/3):
Legality Rules
If a null_procedure_declaration is a completion, it shall be the completion of a subprogram_declaration or generic_subprogram_declaration. The profile of a null_procedure_declaration that completes a declaration shall conform fully to that of the declaration.
[Editor's note: This is mostly copied from the text for subprogram bodies, some redundancy removed.]
Modify 6.7(3/2):
A null_procedure_declaration declares a null procedure. A completion is not allowed for a null_procedure_declaration{; however, a null_procedure_declaration can complete a previous declaration}.
Modify 6.7(5/2):
The elaboration of a null_procedure_declaration has no {other} effect {than to establish that the null procedure can be called without failing the Elaboration_Check}.
Add a new clause:
6.8 Expression functions
An expression_function_declaration provides a shorthand to declare a function whose body consists of a single return statement.
Syntax
expression_function_declaration ::= [overriding_indicator] function_specification is (expression) [aspect_specification];
Name Resolution Rules
The expected type for the expression of an expression_function_declaration is the result type (see 6.5) of the function.
Legality Rules
If an expression_function_declaration is a completion, it shall be the completion of a subprogram_declaration or generic_subprogram_declaration. The profile of an expression_function_declaration that completes a declaration shall conform fully to that of the declaration.
If the result subtype has one or more unconstrained access discriminants, the accessibility level of the anonymous access type of each access discriminant, as determined by the expression of the expression function, shall not be statically deeper than that of the master that elaborated the expression_function_declaration.
AARM Ramification: This can only fail if the discriminant is an access to a part of a non-aliased parameter, as there can be no local declarations here.
AARM Discussion: We don't need to repeat any of the other Legality Rules for return statements, since none of them can fail here: the implicit return statement has to apply to this function (and isn't nested in something), there clearly is a return statement in this function, and the static class-wide accessibility check cannot fail as a tagged type cannot be declared locally in an expression function.
Static Semantics
An expression_function_declaration declares an expression function. A completion is not allowed for an expression_function_declaration; however, an expression_function_declaration can complete a previous declaration.
Dynamic Semantics
The execution of an expression function is invoked by a subprogram call. For the execution of a subprogram call on an expression function, the execution of the subprogram_body executes an implicit function body containing only a simple_return_statement whose expression is that of the expression function.
AARM Discussion: The last sentence effectively means that all of the dynamic wording in 6.5 applies as needed, and we don't have to repeat it here.
The elaboration of an expression_function_declaration has no other effect than to establish that the expression function can be called without failing the Elaboration_Check.
Examples
function Is_Origin (P : in Point) return Boolean is   -- see 3.9
   (P.X = 0.0 and P.Y = 0.0);
Add new bullet after 7.5(2.8/2):
* the expression of an expression_function_declaration (see 6.8)
Modify 8.3(18/2):
* For a package_declaration, task declaration, protected declaration,
generic_package_declaration, [or] subprogram_body{, or expression_function_declaration}, the declaration is hidden from all visibility only until the reserved word is of the declaration.
[Editor's note: We need this to make the parameters visible in the expression of an expression_function_declaration.]
Add expression_function_declaration into the list of 8.3.1(3/2), after null_procedure_declaration.
Modify 13.14(8/1):
A static expression causes freezing where it occurs. An object name or nonstatic expression causes freezing where it occurs, unless the name or expression is part of a default_expression, a default_name, {the expression of an expression function,} or a per-object expression of a component's constraint, in which case, the freezing occurs later as part of another construct.
Modify 13.14(10.1/3) [as added by AI05-0019-1]:
At the place where a function call causes freezing, the profile of the function is frozen. Furthermore, if a parameter of the call is defaulted, the default_expression for that parameter causes freezing. {If the function call is to an expression function, the expression of the expression function causes freezing.}
AARM Ramification: Freezing of the expression of an expression function only needs to be considered when the expression function is in the same compilation unit and there are no intervening bodies; the end of a declarative_part or library package freezes everything in it, and a body freezes everything declared before it.
Modify 13.14(10.2/3) [as added by AI05-0019-1]:
At the place where a generic_instantiation causes freezing of a callable entity, the profile of that entity is frozen{; if the callable entity is an expression function, the expression of the expression function causes freezing}.
Add after 13.14(10.2/3):
At the place where a use of the Access or Unchecked_Access attribute whose prefix denotes an expression function causes freezing, the expression of the expression function causes freezing.
AARM Reason:
This is needed to avoid calls to unfrozen expressions. Consider:
package Pack is
   type Flub is range 0 .. 100;
   function Foo (A : in Natural) return Natural is
      (A + Flub'Size); -- The expression is not frozen here.
   type Bar is access function (A : in Natural) return Natural;
   P : Bar := Foo'Access; -- (A)
   Val : Natural := P.all(5); -- (B)
end Pack;
If point (A) did not freeze the expression of Foo (which freezes Flub), then the call at point (B) would be depending on the aspects of the unfrozen type Flub. That would be bad. End AARM Reason.
!discussion
Naming:
These have been variously called expression renaming, parameterized expressions, and expression functions. Parameterized expressions best explains the intended use, but we named these expression functions as that works better with the existing wording. The primary issue is that an expression function either declares a subprogram or completes a subprogram. The vast majority of rules that apply to functions and more generally subprograms also apply to expression functions. If we called them parameterized expressions, neither the word "subprogram" nor "function" would appear; that would mean that additional wording would be needed to ensure that they are properly included in "function" and "subprogram" rules.
---
A goal for expression functions is that they represent a subset of the capabilities of null subprograms in terms of their use as subprograms. If we didn't do that, we would have added another kind of subprogram, whose usages would be subtly different from the existing kinds. That seems bad.
In order to accomplish this, we added the capability of a null subprogram to represent a completion. This is not an important capability, but it increases the consistency of the various kinds of subprogram declaration.
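For instance (a sketch; the names Observers and Notify are invented), a null procedure can now complete an earlier declaration:

   package Observers is
      procedure Notify (Count : in Natural);
   private
      procedure Notify (Count : in Natural) is null;  -- completion
   end Observers;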
Note that both null subprograms and expression functions freeze when they are used as completions. There are a number of reasons for this:
(1) This is consistent with the behavior of explicit bodies. If the null procedure or expression function replaces the equivalent explicit body, the freezing will not change.
(2) If a null procedure used as a completion did not freeze, then AARM 13.14(10.f/3) would not be true. There would be a way to construct a program that made a procedure call with an unfrozen profile - admittedly, a pathological program (it could only have null procedure bodies). (That could be fixed by extending 13.14(10.1/3) to all subprograms.)
(3) If an expression function used as a completion did not freeze, then it would be possible to make a call to an expression function with an unfrozen expression. Consider:
package Pack is
   function Foo (A : in Natural) return Natural;
   type Bar is access function (A : in Natural) return Natural;
   P : Bar := Foo'Access; -- Freezes Bar and Foo.
   type Flub is range 0 .. 100;
   function Foo (A : in Natural) return Natural is
      (A + Flub'Size); -- The expression is not frozen here.
   Val : Natural := P.all(5); -- (!)
end Pack;
The call of P.all passes the elaboration check (as the completion of Foo has been elaborated). However, if the body of Foo did not freeze everything, Flub would not be frozen, and thus the call P.all(5) would be evaluating the value of an aspect of an unfrozen type.
Note that the prefix of Foo freezes the declaration of Foo, but not the (still to be seen) completion; and even if the completion was frozen, that would not freeze the expression.
(1) here is not a big deal, but (2) and (3) must be addressed if we later decide this is too limiting. [Editor's note: this strikes me as killing a fly with a bazooka, given that neither of these problems is likely to occur in practice, but the early freezing is likely to bite lots of users. OTOH, there is not an obvious fix other than this for problem (3).
The best idea I've had that did not involve freezing all completions would be to freeze the expression of an expression function used as a completion at the point of the completion, if the subprogram has been used as a generic formal parameter or as the prefix of 'Access. Seems like a mess. A regular call would not need a special rule, as either it is before the completion (and the elaboration check would fail), or it calls the completion directly, which is already covered.]
---
One could imagine allowing these functions (when visible) to be used in static expressions if the inlined expression would be static. This would definitely allow abstracting expressions that currently are written as a giant single expression because they need to be static; that's something you cannot do now. We didn't add this to the proposal in order to keep the proposal simple.
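The rejected idea would have allowed something like the following sketch (the names Word_Size, Bits, and Mask are invented; this remains illegal under this AI):

   Word_Size : constant := 32;
   function Bits (Words : Natural) return Natural is (Words * Word_Size);
   type Mask is mod 2 ** Bits (1);  -- illegal: Bits (1) is not static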
---
It has been suggested to allow expression functions as generic formal subprogram defaults. This would increase the analogy with null procedures.
We didn't include this for three reasons: (1) The syntax would be very close to the existing default_name syntax. Adding or removing parens would change the semantics substantially. (2) This would probably require explicitly creating a body at the point of the instantiation. So the implementation effort would be substantially increased. (3) We're not likely to make the analogy with null procedures complete (we would need the next item as well), so there is going to be an inconsistency somewhere.
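Reason (1) can be seen in a sketch (not part of this AI; all names are invented), where only the parentheses distinguish the two kinds of default:

   generic
      with function Next (X : Integer) return Integer is Identity; -- default_name (existing)
      with function Bump (X : Integer) return Integer is (X + 1);  -- expression default (not adopted)
   package G is
   end G;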
---
An additional idea would be to allow this as a primitive subprogram of an interface. If we did that, we would get the benefit of the identity function idea (see AI05-0140-1) without adding anything special specifically for that case. The expression would have to be included in the 8.3 homograph matching rules, probably by borrowing the rules for full conformance of expressions. That would be necessary to deal with the problem of diamond inheritance with interfaces (that was the primary reason that AI05-0140-1 was scuttled) - otherwise it would be possible to inherit two expression functions with different expressions into a single type.
Including both this and the formal subprogram default would make this completely similar to null procedures.
!example
(See wording.)
!corrigendum 3.1(3/2)
Replace the paragraph:
basic_declaration ::= type_declaration | subtype_declaration | object_declaration | number_declaration | subprogram_declaration | abstract_subprogram_declaration | null_procedure_declaration | package_declaration | renaming_declaration | exception_declaration | generic_declaration | generic_instantiation
by:
basic_declaration ::= type_declaration | subtype_declaration | object_declaration | number_declaration | subprogram_declaration | abstract_subprogram_declaration | null_procedure_declaration | expression_function_declaration | package_declaration | renaming_declaration | exception_declaration | generic_declaration | generic_instantiation
!corrigendum 3.11.01(1)
Replace the paragraph:
Declarations sometimes come in two parts. A declaration that requires a second part is said to require completion. The second part is called the completion of the declaration (and of the entity declared), and is either another declaration, a body, or a pragma. A body is a body, an entry_body, or a renaming-as-body (see 8.5.4).
by:
Declarations sometimes come in two parts. A declaration that requires a second part is said to require completion. The second part is called the completion of the declaration (and of the entity declared), and is either another declaration, a body, or a pragma. A body is a body, an entry_body, a null_procedure_declaration or an expression_function_declaration that completes another declaration, or a renaming-as-body (see 8.5.4).
!corrigendum 6.1(20/2)
Replace the paragraph:
A subprogram_declaration or a generic_subprogram_declaration requires a completion: a body, a renaming_declaration (see 8.5), or a pragma Import (see B.1). A completion is not allowed for an abstract_subprogram_declaration (see 3.9.3) or a null_procedure_declaration (see 6.7).
by:
A subprogram_declaration or a generic_subprogram_declaration requires a completion: a body, a renaming_declaration (see 8.5), or a pragma Import (see B.1). A completion is not allowed for an abstract_subprogram_declaration (see 3.9.3), a null_procedure_declaration (see 6.7), or an expression_function_declaration (see 6.8).
!corrigendum 6.1(30/2)
Replace the paragraph:
A subprogram declared by an abstract_subprogram_declaration is abstract; a subprogram declared by a subprogram_declaration is not. See 3.9.3, "Abstract Types and Subprograms". Similarly, a procedure defined by a null_procedure_declaration is a null procedure; a procedure declared by a subprogram_declaration is not. See 6.7, "Null Procedures".
by:
A subprogram declared by an abstract_subprogram_declaration is abstract; a subprogram declared by a subprogram_declaration is not. See 3.9.3, "Abstract Types and Subprograms". Similarly, a procedure defined by a null_procedure_declaration is a null procedure; a procedure declared by a subprogram_declaration is not. See 6.7, "Null Procedures". Finally, a function declared by an expression_function_declaration is an expression function; a function declared by a subprogram_declaration is not. See 6.8, "Expression Functions".
!corrigendum 6.7(2/2)
Insert after the paragraph:
null_procedure_declaration ::= [overriding_indicator] procedure_specification is null;
the new paragraph:
Legality Rules
If a null_procedure_declaration is a completion, it shall be the completion of a subprogram_declaration or generic_subprogram_declaration. The profile of a null_procedure_declaration that completes a declaration shall conform fully to that of the declaration.
!corrigendum 6.7(3/2)
Replace the paragraph:
A null_procedure_declaration declares a null procedure. A completion is not allowed for a null_procedure_declaration.
by:
A null_procedure_declaration declares a null procedure. A completion is not allowed for a null_procedure_declaration; however, a null_procedure_declaration can complete a previous declaration.
!corrigendum 6.7(5/2)
Replace the paragraph:
The elaboration of a null_procedure_declaration has no effect.
by:
The elaboration of a null_procedure_declaration has no other effect than to establish that the null procedure can be called without failing the Elaboration_Check.
!corrigendum 6.8(0)
Insert new clause:
An expression_function_declaration provides a shorthand to declare a function whose body consists of a single return statement.
Syntax
expression_function_declaration ::= [overriding_indicator] function_specification is (expression) [aspect_specification];
Name Resolution Rules
The expected type for the expression of an expression_function_declaration is the result type (see 6.5) of the function.
Legality Rules
If an expression_function_declaration is a completion, it shall be the completion of a subprogram_declaration or generic_subprogram_declaration. The profile of an expression_function_declaration that completes a declaration shall conform fully to that of the declaration.
If the result subtype has one or more unconstrained access discriminants, the accessibility level of the anonymous access type of each access discriminant, as determined by the expression of the expression function, shall not be statically deeper than that of the master that elaborated the expression_function_declaration.
Static Semantics
An expression_function_declaration declares an expression function. A completion is not allowed for an expression_function_declaration; however, an expression_function_declaration can complete a previous declaration.
Dynamic Semantics
The execution of an expression function is invoked by a subprogram call. For the execution of a subprogram call on an expression function, the execution of the subprogram_body executes an implicit function body containing only a simple_return_statement whose expression is that of the expression function.
The elaboration of an expression_function_declaration has no other effect than to establish that the expression function can be called without failing the Elaboration_Check.
Examples
function Is_Origin (P : in Point) return Boolean is   -- see 3.9
   (P.X = 0.0 and P.Y = 0.0);
!corrigendum 7.5(2.8/2)
Insert after the paragraph:
the new paragraph:
!corrigendum 8.3(18/2)
Replace the paragraph:
by:
!corrigendum 8.3.1(3/2)
Replace the paragraph:
If an abstract_subprogram_declaration, null_procedure_declaration, subprogram_body, subprogram_body_stub, subprogram_renaming_declaration, generic_instantiation of a subprogram, or subprogram_declaration other than a protected subprogram has an overriding_indicator, then:
by:
If an abstract_subprogram_declaration, null_procedure_declaration, expression_function_declaration, subprogram_body, subprogram_body_stub, subprogram_renaming_declaration, generic_instantiation of a subprogram, or subprogram_declaration other than a protected subprogram has an overriding_indicator, then:
!corrigendum 13.14(8/1)
Replace the paragraph:
A static expression causes freezing where it occurs. An object name or nonstatic expression causes freezing where it occurs, unless the name or expression is part of a default_expression, a default_name, or a per-object expression of a component's constraint, in which case, the freezing occurs later as part of another construct.
by:
A static expression causes freezing where it occurs. An object name or nonstatic expression causes freezing where it occurs, unless the name or expression is part of a default_expression, a default_name, the expression of an expression function, or a per-object expression of a component's constraint, in which case, the freezing occurs later as part of another construct.
!corrigendum 13.14(10)
Insert after the paragraph:
the new paragraphs:
!ACATS test
Add an ACATS C-Test to test the syntax.
!ASIS
Modify function Is_Null_Procedure:
Add A_Procedure_Body_Declaration to the list of Kinds.
Add after Is_Null_Procedure (12.49):
function Is_Expression_Function
(Element : in Asis.Element)
return Boolean;
Element specifies the element to query.
Returns True for a declaration of a function that is declared as an expression function.
Returns False for any other Element including Nil_Element.
Element expects an element that has the following Element_Kinds:
   A_Declaration that has one of the following Declaration_Kinds:
      A_Function_Declaration
      A_Function_Body_Declaration
Usage Note: This routine tests for the syntactic element expression_function_declaration; calling Is_Expression_Function on a renaming of an expression function will return False. Use the routine ASIS.Callable_Views.Is_Expression to determine if a declaration is semantically an expression function (including renames).
Discussion: A generic function cannot be an expression function.
Add after Result_Subtype, 14.19:
function Expression_Function_Expression
   (Declaration : in Asis.Declaration)
    return Asis.Expression;
Declaration specifies the expression function to query.
Returns the expression in the expression function declaration.
Declaration expects an element that has the following Element_Kinds:
   A_Declaration that has one of the following Declaration_Kinds:
      A_Function_Declaration
      A_Function_Body_Declaration
Raises ASIS_Inappropriate_Element with a Status of Value_Error for any element that does not have these expected kinds.
Returns an element that has the following Element_Kinds:
An_Expression
Add to 21.10.3 Callable view categorization
After Is_Null:
function Is_Expression (C : Callable_View) return Boolean is abstract;
After Is_Null's description:
Is_Expression returns True if the Callable_View denotes an expression function, and returns False otherwise.
!appendix

From: Bob Duff
Date: Sunday, March 15, 2009  9:37 AM

[Editor's note: This forked off of a discussion about conditional expressions,
found in AI05-0147-1.]

> I must say I still like to abstract complex conditionals into
> predicate functions.

Agreed.  Which means that if you want to use a conditional in a precondition,
you (sometimes) want to put the BODY of a boolean function in a package SPEC, so
it is not hidden from the compiler!

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  2:06 PM

> I must say I still like to abstract complex conditionals into
> predicate functions. People manage to write frighteningly complex
> conditions without this feature, and I am afraid that this will get
> worse with the addition of this feature. Still possible misuse is not
> a reason for prohibition!

I tend to agree. I actually thought about making such predicate function easier
by defining expression renamings, but I didn't think I could sell it as
important enough to bother with. The idea was something like:

   function Is_Interesting (N : in Natural) return Boolean renames (N > 10);

Of course, the expression could be as complex as needed. If this is defined in a
specification (or other place where the renames is "visible"), I could even
imagine rules allowing this to be treated as a static expression. This would be
nice because it would not be a barrier to static analysis like a normal function
body that may not be available to the analyzer.

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  2:11 PM

> > I must say I still like to abstract complex conditionals into
> > predicate functions.
>
> Agreed.  Which means that if you want to use a conditional in a
> precondition, you (sometimes) want to put the BODY of a boolean
> function in a package SPEC, so it is not hidden from the compiler!

All you need is Randy's handy dandy expression renames (see my reply to Robert).

****************************************************************

From: Robert Dewar
Date: Monday, March 16, 2009  4:41 PM

> The obvious alternative is to rely on programmers writing appropriate
> named functions that do the iteration internally.  I don't think
> forcing the definition of named functions is appropriate for
> conditional expressions, but for quantified expressions, I think
> expecting the programmer to program up an appropriate named function
> is not too much to ask.
>
> So I see these quantified expressions as significantly lower priority
> than conditional expressions.
[Editor's note: The message being replied to here is in AI05-0176-1.]

Well there is the issue of bodies in the spec, and it does not seem that you can
use renaming of expressions to achieve this if you don't have quantified
expressions.

I find the bodies-in-spec issue to be a significant one!

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  5:01 PM

...
> Well there is the issue of bodies in the spec, and it does not seem
> that you can use renaming of expressions to achieve this if you don't
> have quantified expressions.

That's surely true, but to be fair, the idea of renaming of expressions is mine
and mine alone. I suggested it in passing at the Tallahassee meeting and
received approximately 0% interest. Perhaps I didn't have the justification
strong enough...

> I find the bodies-in-spec issue to be a significant one!

So do I. One alternative would be a modifier that specifies (in some way -
keyword, pragma) functions that have no side-effects other than through their
interfaces. Such functions can be used in static symbolic analysis based on
their parameters; that's not as good as being able to see the entire body, but
surely better than being able to assume nothing about the behavior of the
function.

Since you and others have expressed dislike of that idea, I'll spend my energies
on other ideas, possibly including renaming of expressions.

****************************************************************

From: Gary Dismukes
Date: Monday, March 16, 2009  5:21 PM

> That's surely true, but to be fair, the idea of renaming of
> expressions is mine and mine alone. I suggested it in passing at the
> Tallahassee meeting and received approximately 0% interest. Perhaps I
> didn't have the justification strong enough...

It sounds like a reasonable sort of feature to me.

In fact, it's a little odd that in Ada one can rename function calls, including
uses of predefined operators:

   C : Boolean renames "and"(A, B);

but you can't rename the infix form:

     C : Boolean renames (A and B);  -- ERROR

Seems like a short step to extend renamings to allow expressions generally (even
though of course these couldn't be classified as object renamings in the general
case, they'd have to be a separate category).

> > I find the bodies-in-spec issue to be a significant one!
>
> So do I. One alternative would be a modifier that specifies (in some
> way - keyword, pragma) functions that have no side-effects other than
> through their interfaces. Such functions can be used in static
> symbolic analysis based on their parameters; that's not as good as
> being able to see the entire body, but surely better than being able
> to assume nothing about the behavior of the function.
>
> Since you and others have expressed dislike of that idea, I'll spend
> my energies on other ideas, possibly including renaming of expressions.

Yes, I would rather have you pursue expression renamings. :-)

****************************************************************

From: Tucker Taft
Date: Monday, March 16, 2009  5:25 PM

I don't personally find bodies-in-the-spec a big issue.

There will always be times when you want to define a predicate which is too
complex to be used in the spec.  Furthermore, it is inappropriate because it
depends on private fields, etc.  It is one thing to specify the precondition in
the spec, which is important.  However, it is not necessary for the full
semantics of the precondition to be in the spec, since any static analysis tool
of any value to a language like Ada will need to be able to look in bodies.

I suppose if we were queasy about having the tool look in all bodies, we could
have a pragma analogous to Inline that indicates that a particular subprogram is
to be effectively "inlined" for the purpose of static analysis.

I really believe that at some point we want to stop adding special syntax, and
fall back on using the programming language, which is already more than Turing
complete.  Subprograms are the key way we do that, and trying to pull the full
power of the programming language into an expression language seems misguided
for Ada, particularly given the importance of private types.

****************************************************************

From: Tucker Taft
Date: Monday, March 16, 2009  5:33 PM

> I don't personally find bodies-in-the-spec a big issue.

Let me clarify.  I don't want to see bodies in the spec.  What I mean is that I
don't mind having preconditions that include calls on functions where the
analyzer needs to look at their body to know what they do.  I think it is
inevitable. We have been talking about using containers in these quantified
expressions.  The semantics of containers certainly isn't going to be fully
visible in the spec!

****************************************************************

From: Bob Duff
Date: Monday, March 16, 2009  5:20 PM

> So I see these quantified expressions as significantly lower priority
> than conditional expressions.

I agree, but I'll repeat my comment: I want to have the option of putting
function bodies in package specs.

Subroutines with parameters were invented a long time ago, and they are really
the most wonderful abstraction mechanism.  It's a Bad Thing anytime you lose
something by wrapping some code up in a subroutine.

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  5:51 PM

...
> I don't personally find bodies-in-the-spec a big issue.
>
> There will always be times when you want to define a predicate which
> is too complex to be used in the spec.
> Furthermore, it is inappropriate because it depends on private fields,
> etc.

I was presuming that such an expression renames could be used as a
renames-as-body in the private part, thus getting access to the private
components without making them directly visible to clients, but still making
them available to analyzers. Seems like the best of all worlds to me.

> It is one thing to specify the
> precondition in the spec, which is important.  However, it is not
> necessary for the full semantics of the precondition to be in the
> spec, since any static analysis tool of any value to a language like
> Ada will need to be able to look in bodies.

Here I disagree. I think the most useful static analysis is going to be that
which is part of a compiler. That means that bodies aren't necessarily going to
be available to the analyzer, only specifications. (This also allows analyzers
to do good work on partial programs.)

Indeed, I just (literally an hour ago) read an article on embedded.com that
makes the same point. http://www.embedded.com/210601784?cid=NL_embedded (it's an
older article that they linked for some reason today). I also recall reading an
article where Robert Dewar made some similar suggestion.

I realize that you know 10 times more about static analysis than I do, so I
could be all wet on this.

> I suppose if we were queasy about having the tool look in all bodies,
> we could have a pragma analogous to Inline that indicates that a
> particular subprogram is to be effectively "inlined" for the purpose
> of static analysis.

Inline still requires the body to exist. I think we really need to be able to
reason about functions whose bodies don't yet exist. (But, I know, I'm still a
minority of one on that point.)

Besides, I'm opposed to specifying inlining/allowing dependencies because of
inlining. I think that the compiler should be deciding such stuff, it has a far
better idea of the trade-offs than any programmer can. The programmer should
simply set a time/space trade-off for a program and get out of the way. Pragma
Inline and similar features are a hack used when compilers didn't have enough
CPU/memory to do these things automatically -- but that time is long gone.

> I really believe that at some point we want to stop adding special
> syntax, and fall back on using the programming language, which is
> already more than Turing complete.
> Subprograms are the key way we do that, and trying to pull the full
> power of the programming language into an expression language seems
> misguided for Ada, particularly given the importance of private types.

I don't disagree with this sentiment (a simple "for all" expression didn't
bother me too much, but the latest stuff I've seen seems over the top). But I do
worry about restricting the ability to do analysis at compile-time.

One reason that I proposed expression renaming is that I don't want the full
complexity of a subprogram body in package specifications. But since expressions
are already there, I don't see any problem with putting more of them there.
Indeed, one could imagine using this facility to define the "identity functions"
that we talked about in Tallahassee (seems better than a dedicated feature).

****************************************************************

From: Bob Duff
Date: Monday, March 16, 2009  5:34 PM

> > Agreed.  Which means that if you want to use a conditional in a
> > precondition, you (sometimes) want to put the BODY of a boolean
> > function in a package SPEC, so it is not hidden from the compiler!
>
> All you need is Randy's handy dandy expression renames (see my reply
> to Robert).

Randy's handy dandy expression renames might be a nice idea.
But what if I want the thing to have a local variable or something?
Seems like it might end up being one of those frustrating features that gives
you ALMOST what you want, but not quite.  Better than nothing? Not sure.

****************************************************************

From: Tucker Taft
Date: Monday, March 16, 2009  7:01 PM

> Agreed.  Which means that if you want to use a conditional in a
> precondition, you (sometimes) want to put the BODY of a boolean
> function in a package SPEC, so it is not hidden from the compiler!

I don't understand this.  Package bodies aren't hidden from the compiler.  The
compiler can look in a package body any time it wants, and it is expected to do
so for inlines and, frequently, for generics.  Of course that may mean more
recompilation if the body changes, but we accept that for inlines and generics.

Separating spec and body is fundamental in Ada, and it is not so much for the
compiler as for the humans.  If anything I would think we would want to go the
other way, and be taking things out of the specs, such as the completion of a
private type.

The only part of a precondition that needs to be in the spec is the
"specification" of the precondition. Its full semantics can and often should be
postponed to a package body, as the full semantics depend on information that is
intentionally hidden in the package body.

****************************************************************

From: Robert Dewar
Date: Monday, March 16, 2009  7:14 PM

> I don't understand this.  Package bodies aren't hidden from the
> compiler.  The compiler can look in a package body any time it wants,
> and it is expected to do so for inlines and frequently generics.  Of
> course that may mean more recompilation if the body changes, but we
> accept that for inlines and generics.

But you should be able to verify a client with only the package spec in view (it
makes sense to verify a client even before the body exists).

> Separating spec and body is fundamental in Ada, and it is not so much
> for the compiler as for the humans.  If anything I would think we
> would want to go the other way, and be taking things out of the specs,
> such as the completion of a private type.

But preconditions are part of the spec.

> The only part of a precondition that needs to be in the spec is the
> "specification" of the precondition.
> It's full semantics can and often should be postponed to a package
> body, as the full semantics depend on information that is
> intentionally hidden in the package body.

I disagree, generally the full semantics of the precondition should be there in
the spec, especially if you want to view them as formal mathematical properties
that go into a theorem prover, or a sophisticated semantic analyzer.

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  7:17 PM

...
> I don't understand this.  Package bodies aren't hidden from the
> compiler.  The compiler can look in a package body any time it wants,
> and it is expected to do so for inlines and frequently generics.  Of
> course that may mean more recompilation if the body changes, but we
> accept that for inlines and generics.

Well, maybe you do, but I don't. Inline and generic body dependencies are both
optional, and Janus/Ada has never had and most likely never will have any sort
of body dependency. (This was the original reason that we implemented generics
as code shared, which should make it clear to you the lengths that I am willing
to go to preserve this point!) Inlining is better implemented at link time,
anyway, and shouldn't require any human intervention in the first place.

Any feature that were to require a body dependency would be strenuously opposed
by me!

****************************************************************

From: Robert Dewar
Date: Monday, March 16, 2009  7:18 PM

> So do I. One alternative would be a modifier that specifies (in some
> way - keyword, pragma) functions that have no side-effects other than
> through their interfaces. Such functions can be used in static
> symbolic analysis based on their parameters; that's not as good as
> being able to see the entire body, but surely better than being able
> to assume nothing about the behavior of the function.

Just not enough, if I see

     procedure A (B, C : Integer);
     pragma Precondition (P (B, C));

there are two cases

a) P is a complex computational procedure, beyond the reach of a theorem prover,
   or a semantic analyzer. Fine, in this case it is just a run-time check that
   can go in the spec. There is no issue of formally verifying that a call to P
   meets this precondition.

b) P is a predicate that is statically understandable, it is a conditional
   expression, or series of conditional expressions in some canonical form,
   possibly including quantifiers. In this case you want the "body" of P visible
   in the spec.

Note that in case a), it is of no interest at all to know that P is pure.
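
Case b) might look like this, using the hypothetical expression-renaming
syntax discussed elsewhere in this thread, so that P's meaning (an invented
predicate here) is visible in the spec:

```ada
--  Hypothetical syntax: the "body" of P is a canonical boolean
--  expression, visible to a theorem prover or semantic analyzer
function P (B, C : Integer) return Boolean renames
  (B >= 0 and then C > B);

procedure A (B, C : Integer);
pragma Precondition (P (B, C));
```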

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  7:48 PM

Not quite true (although I agree that seeing the body is better). If the static
analyzer already knows the result of P (B, C) [perhaps because it is the
postcondition of a previous call], then if P has no side-effects, the analyzer
can use the previous result in this precondition and not further evaluate it.
That's a fairly likely case. And it is not something it can do otherwise
(assuming no side-effects is not something that a static analyzer should do,
especially if it is part of a compiler, as erroneousness could result from a bad
assumption -- we surely do not want to be able to introduce erroneousness into a
program that otherwise doesn't have it, especially as part of something that is
supposed to improve correctness).

****************************************************************

From: Stephen Michell
Date: Monday, March 16, 2009  8:03 PM

It is even simpler than that. The postconditions on pure functions can be
assumed after the point of the call. I am assuming that we have the notion that
pre(spec)->(pre(body) and post(body)->post(spec). One can safely mark the
condition as TRUE, pending analysis of the body. Body analysis will let us mark
the condition TRUE with no reservations, or mark if false later if the
postcondition analysis fails.

****************************************************************

From: Tucker Taft
Date: Monday, March 16, 2009  8:09 PM

Well I suppose it should be refreshing for me not to be the one with the
wild-eyed over-the-top ideas for enhancing the language...  ;-) ;-) ;-)

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  7:20 PM

> But you should be able to verify a client with only the package spec
> in view (it makes sense to verify a client even before the body
> exists).

Darn right. Tucker seems to have a model where analysis is only done by a
separate tool after the entire program already exists. I think this is far too
late: we need to detect errors as early as possible in the development cycle
(preferably the first time we run a tool on the code).

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  7:37 PM

> Randy's handy dandy expression renames might be a nice idea.
> But what if I want the thing to have a local variable or something?
> Seems like it might end up being one of those frustrating features
> that gives you ALMOST what you want, but not quite.
> Better than nothing?
> Not sure.

Well, putting a full body in a specification is certain to bring up all kinds of
*interesting* freezing issues. We saw that with the various proposals to allow
early instantiations, and I don't think we particularly want to go there. OTOH,
we already have expressions in specifications, so both the language semantics
and compilers presumably can handle them. Indeed, such expressions ought to be
handled without freezing (they could be treated like default expressions), which
would *add* some capabilities that would currently be hard to write.

Also, these sorts of functions could be (ought to be?) automatically inlined;
the more complex stuff that you are allowed to write, the more likely it is that
you would run into some reason why inlining would be difficult.

****************************************************************

From: Bob Duff
Date: Monday, March 16, 2009  8:04 PM

> I don't understand this.  Package bodies aren't hidden from the
> compiler.  The compiler can look in a package body any time it wants,
> and it is expected to do so for inlines and frequently generics.  Of
> course that may mean more recompilation if the body changes, but we
> accept that for inlines and generics.

Compilers can look at bodies any time they want for code-gen stuff.
But the Ada language is designed so that legality issues are always determinable
from specs (visible parts, actually).

Proving assertions statically is just an extension of legality checking, and
should (ideally) likewise depend only on visible parts of imported packages.

> Separating spec and body is fundamental in Ada, and it is not so much
> for the compiler as for the humans.  If anything I would think we
> would want to go the other way, and be taking things out of the specs,
> such as the completion of a private type.

Agreed.

> The only part of a precondition that needs to be in the spec is the
> "specification" of the precondition.
> It's full semantics can and often should be postponed to a package
> body, as the full semantics depend on information that is
> intentionally hidden in the package body.

Yes, that's quite often true.  But not always.

If you have a precondition that looks at private stuff, then I agree -- it
should call a function whose body is in the package body.

But sometimes a precondition involves visible stuff, and in that case it should
be visible, and it's annoying that giving it a name as a function causes it to
be invisible.

****************************************************************

From: Robert Dewar
Date: Monday, March 16, 2009  8:08 PM

> Well, maybe you do, but I don't. Inline and generics are both
> optional, and Janus/Ada has never had and most likely never will have
> any sort of body dependency. (This was the original reason that we
> implemented generics as code shared, which should make it clear to you
> the lengths that I am willing to go to preserve this point!) Inlining
> is better implemented at link time, anyway, and shouldn't require any human intervention in the first place.
>
> Any feature that were to require a body dependency would be
> strenuously opposed by me!

I don't regard this as a valid argument at this stage, and I think there are
better arguments in any case. I think we have already paid too high a price in
determining to maintain the (to me very dubious) possibility of fully shared
generics.

As for inlining being better at link time, this to me is technically quite
wrong, one of the major advantages of inlining is to activate the full set of
flow-based optimizations to the inlined code. That cannot be done at link time.

But in any case, I think the implementation based arguments on this issue are
not the strong ones. The strong argument is conceptual. Preconditions are part
of the spec, so they should be in the spec. End of story as far as I am
concerned.

****************************************************************

From: Robert Dewar
Date: Monday, March 16, 2009  8:13 PM

> If you have a precondition that looks at private stuff, then I agree
> -- it should call a function whose body is in the package body.

To me such preconditions are really part of the body anyway.

> But sometimes a precondition involves visible stuff, and in that case
> it should be visible, and it's annoying that giving it a name as a
> function causes it to be invisible.

I agree with this.

****************************************************************

From: Robert Dewar
Date: Monday, March 16, 2009  8:15 PM

> Well I suppose it should be refreshing for me not to be the one with
> the wild-eyed over-the-top ideas for enhancing the language...  ;-)
> ;-) ;-)

:-)

Yes, that maniac Dewar is always wanting to add over-complex features :-)

What can I say? Every now and then, I get distracted into thinking Ada is a high
level language :-)

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  8:27 PM

...
> As for inlining being better at link time, this to me is technically
> quite wrong, one of the major advantages of inlining is to activate
> the full set of flow-based optimizations to the inlined code. That
> cannot be done at link time.

Bull. The design that I have does full optimization at link-time: simply by
linking and inlining the intermediate code, then optimizing and code generating.
Code generators are fast enough that there isn't much need to do that early
anymore.

(Of course, I'm talking only about the Ada portion of the program; you might
need an extra linking step to link in foreign language code.)

****************************************************************

From: Robert Dewar
Date: Monday, March 16, 2009  8:36 PM

Yes, well that's easy to design in your head! But nonsense in most practical
environments, or at the least a research effort. Optimization algorithms are not
linear with program size, or even quadratic, and it is simply not the case that
highly optimizing code generators are fast enough to apply to large scale
programs, without a lot of work.

We can't let ourselves be influenced by unrealized plans!

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  9:15 PM

The non-quadratic nature of optimization is something that I am acutely aware
of. Indeed, our early optimizer efforts effectively locked up the processors of
the late 1980's. That's why our optimizer has a serious set of time-limiters in
it; we already do full subprogram optimization on most subprograms and I don't
think that we would want to do inter-procedural optimizations anyway (other than
inlining, dead code elimination, and partial evaluation, none of which need to
be particularly expensive to implement).

Thus I know our optimizer and code generator performance would be essentially
linear from current behavior, since there is nothing that we would do that would
require working on larger chunks of code. (We'd use the same limiting code for
the same reasons.) Multicore machines offer the promise of actually doing better
still (because we could optimize multiple subprograms at a time), although that
is obviously more work.

There is an advantage to being a bit behind state-of-the-art in code generator
technology! :-)

> We can't let ourselves be influenced by unrealized plans!

Fair enough (although I've read several articles about such compilers in the
last year or so). It's unfortunate that I can't find the 1-2 man months it would
take to put this together (since it is just rearranging the pieces of our
compiler). I'd like to know if this scheme really works as well in practice as
it does on paper.

Anyway, I would not like adding new body dependencies, period.

****************************************************************

From: Edmond Schonberg
Date: Monday, March 16, 2009  8:42 PM

> The only part of a precondition that needs to be in the spec is the
> "specification" of the precondition.
> It's full semantics can and often should be postponed to a package
> body, as the full semantics depend on information that is
> intentionally hidden in the package body.

Mild objection:  the client of the package should be able to know what the
precondition actually means: a contract that says "we'll reject any input that
doesn't glorp" is not very useful if you don't know what it means to glorp.
This is why a rich language of expressions for preconditions seems preferable
to me to one where the body of the predicates is actually off-line. I understand
that this may be needed in some cases, but my preference would be for
preconditions that can be written and read up-front.

****************************************************************

From: Robert Dewar
Date: Monday, March 16, 2009  8:52 PM

I am generally sympathetic to this approach, but am afraid of large conditional
expressions without any kind of abstraction.

****************************************************************

From: Randy Brukardt
Date: Monday, March 16, 2009  9:16 PM

I agree. Large Boolean expressions are notoriously hard to read and debug.

****************************************************************

From: Edmond Schonberg
Date: Monday, March 16, 2009  9:26 PM

I think we all agree on that. But to come back to the topic, quantified
expressions by themselves are compact and easy to read, and therefore are a
good choice to have in preconditions. Sortedness?

   (for all I in A'First .. A'Last - 1 : A(I) <= A(I+1))

If the preconditions have a very complex description that requires them being
off-line, this may be a design issue, not a problem with boolean expressions per
se!

****************************************************************

From: Tucker Taft
Date: Monday, March 16, 2009  10:54 PM

I think if we talk about object-oriented abstractions, then the "class-wide" /
"inheritable" preconditions are almost certainly going to be described using
(abstract) primitive functions, which are inevitably something you can't put in
the spec.  E.g. for an abstract Stack, the precondition on "Pop(Stack)" is "not
Is_Empty(Stack)" and it is clear that Is_Empty will have to be defined
differently for each concrete extension of the abstract Stack type.

If we are talking about scalar or array-of-scalar abstractions I can see these
things being fully defined in the spec, but for an abstraction which is truly
"abstract" (either in the O-O sense or in the private-type sense), then the
predicates are themselves going to involve abstract/private functions.  These
abstract predicates are in some sense "defined" by where they appear in
preconditions and postconditions.  E.g., if the postcondition of "Push(Stack)" is
"not Is_Empty(Stack)", and the precondition of "Pop(Stack)" is the same, then we
know that a Pop immediately following a Push is safe.

If we went further and defined a "Depth(Stack)" then we could get more specific,
such as having a postcondition for Push of "Depth(Stack) = Depth(Stack)'old + 1"
and a precondition on Pop of "Depth(Stack) > 0", and a postcondition of
"Depth(Stack) = Depth(Stack)'old - 1". Combined with a postcondition on
Depth(Stack) itself of "Depth'Result >= 0" then we can begin to understand the
stack-like nature of a Stack, without ever seeing the body for "Depth".
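
Collected into one sketch (a hypothetical package, with the contracts written
as comments to avoid committing to a particular pragma syntax):

```ada
package Stacks is
   type Stack is abstract tagged private;

   function Depth (S : Stack) return Natural is abstract;
   --  Post: Depth'Result >= 0  (trivial here, since Natural >= 0)

   procedure Push (S : in out Stack) is abstract;
   --  Post: Depth(S) = Depth(S)'Old + 1

   procedure Pop (S : in out Stack) is abstract;
   --  Pre:  Depth(S) > 0
   --  Post: Depth(S) = Depth(S)'Old - 1

private
   type Stack is abstract tagged null record;
end Stacks;
```

From these contracts alone, an analyzer can conclude that a Pop immediately
following a Push is safe, without ever seeing a body for Depth.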

I believe that most "interesting" abstractions will have this characteristic,
that is, where it will be necessary to define "helper" functions to specify pre-
and postconditions, and the meaning of the helper functions will be implicit in
how they are used in pre- and postconditions (I'll admit that sounds circular),
and not require visibility on the body of these helper functions, some of which
may be "abstract" when talking about O-O type hierarchies.

Given this, I am dubious about investing heavily in quantified expression
syntax.  Even for things like "sorted" I think you can't expect many compilers
to glean the truth of your "for all" expression from the code, but if you claim
via a postcondition of your sort routine that "Sorted(QSort'Result)" and
elsewhere you have a precondition of "Sorted(A)", then it would not be too much
of a leap for the compiler to expect you to have defined "A" by using QSort or
some other sorting routine, before calling a function that expected "Sorted(A)"
to be True.

To put this another way, I believe that from the *caller* side, pre- and
postconditions will be treated more "symbolically," where it is the matching up
of postconditions and preconditions which will be used to be sure that an object
is in the right "state" before being passed to some operation.

On the *callee* side, I believe the pre- and postconditions will actually be
analyzed in their full glory, because that is the place where they are
guaranteed to actually mean something, since that is where you have full access
to the representation, and in an O-O context, you actually know the concrete
type of the thing you are manipulating.

****************************************************************

From: Randy Brukardt
Date: Tuesday, March 17, 2009  12:07 AM

...
> Given this, I am dubious about investing heavily in quantified
> expression syntax.  Even for things like "sorted"
> I think you can't expect many compilers to glean the truth of your
> "for all" expression from the code, but if you claim via a
> postcondition of your sort routine that "Sorted(QSort'Result)"
> and elsewhere you have a precondition of "Sorted(A)", then it would
> not be too much of a leap for the compiler to expect you to have
> defined "A" by using QSort or some other sorting routine, before
> calling a function that expected "Sorted(A)"
> to be True.

That's fine if the compiler knows that Sorted is Strict (or Pure, if you prefer
to overload that term). But how it can know that without a pragma/keyword of
some sort (or seeing the body) is beyond me. (Very few of these abstractions are
going to be Pure packages, after all).

If Sorted is defined:
    function Sorted (A : in <Whatever>) return Boolean is
    begin
        if Random < 0.2 then return False;
        else return True;
        end if;
    end Sorted;
any symbolic manipulation that is done is simply going to be wrong, because the
result of one call to this function is not going to be related to another. (A
more realistic example would depend on a global data structure as well as the
input parameter. If the global data changes, so might the result of the function
call.)

Without any sort of requirement or way to tell if these functions are strict, I
don't see how you can do any symbolic manipulation. (We haven't even *suggested*
to users that their functions shouldn't have side-effects in these. And I don't
see how we could in any valuable way.)

> To put this another way, I believe that from the
> *caller* side, pre- and postconditions will be treated more
> "symbolically," where it is the matching up of postconditions and
> preconditions which will be used to be sure that an object is in the
> right "state" before being passed to some operation.

I tend to agree, in that is the way I was expecting it to work in practice.
For instance, in Claw we'd surely want Is_Valid preconditions and postconditions
on the various routines.

But I don't see any justification for assuming that the values of multiple calls
to a function are the same. And without that, you can't do anything at all.

****************************************************************

From: Tucker Taft
Date: Tuesday, March 17, 2009  12:42 AM

It seems you are agreeing that programmers will be defining these "helper"
functions, so does that mean you agree that we probably don't need to work too
hard to support bodies in specs, quantified expressions, etc?

I agree the "analyzer" will need some way to know what variables these helper
functions read other than their explicit parameters.  Perhaps we need to have
"Global_In" and "Global_Out" annotations as well as Pre and Post, where any
function used in a precondition, that lacks a "Global_In" annotation, is
presumed to read no variables other than its parameters.  This could be checked
when processing the body of such a function.
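
A sketch of how such annotations might read (Global_In is a hypothetical
pragma proposed in this message, and Item_Array and Config_Table are invented
names):

```ada
--  Lacking a Global_In annotation, Sorted would be presumed to read
--  only its parameters, so an analyzer may reuse a prior result of
--  Sorted(A) as long as A is unchanged:
function Sorted (A : Item_Array) return Boolean;

--  A function that does read global state would have to say so:
function Current_Limit return Natural;
pragma Global_In (Config_Table);   --  hypothetical pragma
```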

****************************************************************

From: Randy Brukardt
Date: Tuesday, March 17, 2009  12:42 AM

> It seems you are agreeing that programmers will be defining these
> "helper" functions, so does that mean you agree that we probably don't
> need to work too hard to support bodies in specs, quantified
> expressions, etc?

Well, I think renaming of expressions would allow increased abstraction
without impacting performance, freezing, or analyzability. What's not to like?
:-) You could even hide the actual expressions in the private part and thus
write simple predicates without sacrificing hiding.

I don't think including full bodies in specifications would work, because of
freezing issues. That's irrespective of whether or not it is a good idea.

Quantified expressions seem like too specialized a need to me, but on this one I
will defer to people who know more about this than me.

> I agree the "analyzer" will need some way to know what variables these
> helper functions read other than their explicit parameters.  Perhaps
> we need to have "Global_In"
> and "Global_Out" annotations as well as Pre and Post, where any
> function used in a precondition, that lacks a "Global_In"
> annotation, is presumed to read no variables other than its
> parameters.  This could be checked when processing the body of such a
> function.

Humm. Ada allows *anything* currently, so I don't think the absence of a
specification can mean "no side effects". Remember, I want to be able to make
some of this analysis at compile-time, and surely we want to support "partial"
preconditions, postconditions, and so on. Wasn't it you that said that you could
use whatever information is provided? I don't think most applications are going
to want to specify every detail, but that isn't to say that stronger contracts
would be useless in such applications.

In any case, I would like to keep the special behavior of these guys to a
minimum. We already have special attributes only for use in postconditions, but
it doesn't seem that can be helped (because of efficiency concerns). I really
don't want to see more special gizmos.

Anyway, my point is that no specification ought to mean anything goes (as it
does now in existing functions); that means that there also would need to be a
specification of no side effects. Something like Global_In => null. Isn't
Global_In a list of names anyway? It surely isn't a Boolean expression! So it
can have some other syntax (would have to, in fact).

I would think that if Global_In is specified and it is wrong (that is, other
globals are referenced), the program should be rejected. This is a case where
wrong information is actively harmful; no information at all is better.

****************************************************************

From: Randy Brukardt
Date: Thursday, March 19, 2009  12:01 AM

> Well, I think renaming of expressions would allow increasing
> abstraction without impacting performance, freezing, or analyzability.
> What's not to like? :-) You could even hide the actual expressions in
> the private part and thus write simple predicates without sacrificing
> hiding.
>
> I don't think including full bodies in specifications would work,
> because of freezing issues. That's irrespective of whether or not it
> is a good idea.

In order to make it more possible for Tucker to know what he's rejecting ;-),
here is a more detailed explanation of what I had in mind for renaming of
expressions.

---

   renamed_expression ::=
      [overriding_indicator]
      function_specification renames (<*renamed_*expression>);

[Note: The parens are needed to clearly (?) separate renaming an expression as a
function from renaming some other function as a function. Perhaps it would be
better to use a keyword somewhere to make it crystal-clear.]

The expected type of the renamed_expression is that of the return subtype of the
function_specification.

Within the renamed_expression, the parameters of the function_specification are
visible. (This would work the same way as the parameters are visible within a
precondition.)

The dynamic semantics are the same as a function whose body contains nothing but
"return renamed_expression".

This renames could be used as a renames-as-body or as a renames-as-declaration.
(A renames-as-body could appear in the private part to complete a subprogram
declared in the visible part, as well as in the body.)

Freezing: The renamed expression would freeze as does a default expression. That
is, freezing would only be performed at the point of a call of the function,
with the expression essentially in-lined at that point. (Of course, if that call
is in the default expression, the freezing would be deferred again.)

Elaboration: I believe that freezing is enough for this construct to ensure that
the types of this construct are elaborated, so that no elaboration check would
ever need to be made on a call of one of these renames. However, the
renamed_expression could include a (separate) call that needed an elaboration
check; in that case the check would have to be made (somewhere - in the body if
not inlined, and at the point of the inlined call if inlined). It's not
completely clear to me if there is enough benefit to defining these specially in
terms of elaboration, although it always seems like a good idea to eliminate
run-time checks. Perhaps it would be enough to point this fact out in an AARM
note so that implementors are aware of this to take advantage of it. (Note that
this is only true if the compiler *knows* that a construct is a renamed
expression; if this completes a visible function in a package body, then the
elaboration would be "normal".)

Staticness: One could imagine these functions (when visible) to be used in
static expressions if the inlined expression would be static. I'm not certain
this is a good idea, but it would definitely allow abstracting expressions that
currently are written as a giant single expression because they need to be
static. That's something you cannot do now.
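For instance (again a sketch in the proposed, non-standard syntax, with invented names), a piece of a large static expression could be given a name and still be usable where staticness is required:

```ada
Word_Bits : constant := 32;

--  Hypothetical: if a call to such a function were itself static
--  whenever the inlined expression would be static...
function Mask_Of (Bits : Natural) return Natural
   renames (2 ** Bits - 1);

--  ...then this named number could be written via the abstraction,
--  instead of repeating 2 ** (Word_Bits / 2) - 1 inline everywhere
--  a static value is needed.
Half_Mask : constant := Mask_Of (Word_Bits / 2);
```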

Finally, this construct could be allowed as a primitive subprogram of an
interface. If we did that, we would get the benefit of the identity function
idea without adding anything special specifically for that case. (If we wanted
to add the generic default for that case, then it gets a bit messier, but not a
huge amount.) We would have to somehow include the expression in the matching
rules in order to deal with the Steve Baird problem for diamond inheritance with
interfaces. I'm not quite sure this is worth it, but it does add to the things
that you cannot do now.

---

I know that Bob has already complained that you cannot declare a local variable
in such a renames. The problem with that of course is that it requires adding a
scope somewhere, so the semantics is more complex than a simple inline
substitution. I suspect that if we headed down that road we'd quickly reach a
roadblock because of freezing issues (similar to the ones we had for partial
generic instances). This is a case where 3/4s of a loaf that can be had easily
is worth more than a whole loaf that is really hard to get to (that is, poses a
definitional problem).

****************************************************************

From: Bob Duff
Date: Thursday, April 29, 2010  7:32 PM

AI05-0177-1.TXT -- Renaming of expressions as functions (Also called "expression
functions".)

At today's subgroup phone meeting, everybody except me supported this feature.
I was asked to explain why, and I gave a rather incoherent explanation (I'm not
good at thinking on my feet).  Plus everybody interrupted with objections before
I was done.  So let me try to summarize my opinions, for the record.

I think there is definitely some value in exposing the implementations (i.e.
body) of small-ish subprograms to clients, to allow proof tools to see that
information.  I don't think that applies JUST to functions used in assertions
(e.g. preconditions).

"Exposing to clients" should mean "in the visible part".  Private parts and
bodies are supposed to be hidden from clients.  That's the Ada model.

Expression functions are a poor tool for achieving the "expose to clients" goal,
because they are restrictive.  Why can't they be procedures? (Tucker suggested a
solution to that.)  Why can't they have locally-declared constants?  I hate
these kinds of choices:  I can have the advantages of expression functions, or I
can have the advantages of local constants, but I can't have both at the same
time.  Why not?

It's fine with me if some tools (possibly optionally) look at private parts. But
if they do, they might as well look at bodies, too.  Conceptually, private part
and body are both part of the "implementation"; only the visible part is
"specification".  In fact, they might as well look at all type extensions of a
given root type.  That's what's needed to "expose the implementation" in all
cases.  So I don't see any advantage to expression functions, for tools that
wish to break privacy.

But a modular analysis of a client should look only at visible parts of whatever
it calls.

My final point here is that I think globals annotations are a more fruitful
endeavor.  As proposed, they're not ready to be standardized, but I think
implementations can experiment with them, and should give that experimentation
higher priority than expression functions.

Which leads to a more general point:  There are two extremes for standards:

    (1) Never standardize anything until it's been implemented and used
        successfully.

    (2) Standardize first, then implement and use.

Both have serious problems.  Ada tends toward (2).  I'm saying we should move
more in the direction of (1), at least in this case.

****************************************************************

From: Randy Brukardt
Date: Thursday, April 29, 2010  10:33 PM

> At today's subgroup phone meeting, everybody except me supported this
> feature.  I was asked to explain why, and I gave a rather incoherent
> explanation (I'm not good at thinking on my feet).  Plus everybody
> interrupted with objections before I was done.  So let me try to
> summarize my opinions, for the record.

Sorry about the interruptions. I think you were pausing to gather your thoughts
long enough that we thought you were done talking.

...
> Expression functions are a poor tool for achieving the "expose to clients"
> goal, because they are restrictive.  Why can't they be procedures?
> (Tucker suggested a solution to that.)  Why can't they have
> locally-declared constants?  I hate these kinds of choices:
> I can have the advantages of expression functions, or I can have the
> advantages of local constants, but I can't have both at the same time.
> Why not?

I don't see any way to accomplish a significantly less restrictive vision.
Honestly, I don't see much point (beyond the easier inlining) to doing this for
anything other than predicates. For the sorts of functions that occur as
predicates, either they are trivial combinations of component values, or they
require some complex operations that have no advantage to being exposed anyway.

For instance, if you need a local variable and a loop to implement a predicate,
it's not very likely to be analyzable anyway. (At least not with the tools that
Ada 2012 is going to be providing - we don't have loop invariants and the like.)
So there is no huge loss to having to put it into a body.

The big win is for the tiny one-liner accessor functions: these might as well be
exposed to any tool that cares to look (and especially the compiler).
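A sketch of the kind of tiny one-liner accessor meant here, written with the `is (expression)` syntax adopted in the !summary (Counters, Counter, and Value are invented names for illustration):

```ada
package Counters is
   type Counter is private;
   function Value (C : Counter) return Natural;
private
   type Counter is record
      Val : Natural := 0;
   end record;

   --  The entire "body" is visible in the specification, so the
   --  compiler (or any analysis tool) can see it without the
   --  package body existing at all.
   function Value (C : Counter) return Natural is (C.Val);
end Counters;
```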

> It's fine with me if some tools (possibly optionally) look at private
> parts.
> But if they do, they might as well look at bodies, too.
> Conceptually, private part and body are both part of the
> "implementation"; only the visible part is "specification".
> In fact, they might as well look at all type extensions of a given
> root type.  That's what's needed to "expose the implementation" in all
> cases.  So I don't see any advantage to expression functions, for
> tools that wish to break privacy.
>
> But a modular analysis of a client should look only at visible parts
> of whatever it calls.

Ada allows children to have visibility on the private part, so it's not correct
to claim that no one should look there. Compilers look there all the time, and
the runtime correctness of a program *depends* on what is found there. And what
we're trying to analyze is the runtime correctness, isn't it??

In any case, I think the most important thing is to allow *compilers* to be able
to do these sorts of analyses, without having to depend on anything that they
don't naturally have. A compiler always has access to the private part of a
package, and rarely has access to a body other than the one being compiled.
Compilers usually don't pay much attention to privacy once they've gotten the
legality rules out of the way - I don't think our compiler does any analysis on
the code whatsoever until after the stage where privacy and the like is
respected. (And of course there are runtime considerations which require
ignoring privacy.)

Why do I think compilers are so important? It's not so much compilers as
whatever is the first tool run on the source code to remove errors. It should be
possible to find the maximum number of errors immediately after finishing
writing the code for a single unit, because that's when they're easiest to fix.
These days, the first tool run is almost always the compiler (at least to do
deep analysis - some IDEs do some syntactic checking). For that to work, no
units other than direct dependencies (and, in particular, no bodies) need to
exist.

The other, related issue for me is that compilers have to be able to eliminate
most of these contracts in the same way that we eliminate most range checks.
Like range checks, turning off contracts is like wearing a seat belt only until
you leave the garage - it's something you shouldn't do. The effort to do that can
naturally be used to also help the programmer see where their contracts are
iffy. Doing that is necessarily going to require looking under the hood, as all
optimization does to some extent or other.

> My final point here is that I think globals annotations are a more
> fruitful endeavor.  As proposed, they're not ready to be standardized,
> but I think implementations can experiment with them, and should give
> that experimentation higher priority than expression functions.

You are for some reason equating things that have very different orders of
magnitude. Expression functions probably will take a week or so of effort to
implement. Globals annotations are at least an order of magnitude more effort
(probably two orders). It doesn't make sense to say "don't spend a week
implementing an easy, useful feature because there is another feature that will
take 5 months to implement that might be more important". But the effort saved
will have no impact whatsoever on the results of the globals annotations.

I don't agree that the framework for globals annotations is not ready to be
standardized. The specific annotations that work best probably will require some
experimentation, but by not putting together a common framework populated with a
few important annotations which are obvious (specific items, everything in a
package, nothing at all), we're either going to end up with a tower of Babel or
just as likely nothing at all. Even so, I understand why people feel this way
and am willing to go without. But what I will not stand for is having supposed
contracts with no way whatsoever for the compiler to analyze them if they're
written only in portable Ada. If we don't have one or the other, then I don't
want ANY new contracts in Ada. And I am going to fight that battle as long as I
can.

> Which leads to a more general point:  There are two extremes for
> standards:
>
>     (1) Never standardize anything until it's been implemented and used
>         successfully.
>
>     (2) Standardize first, then implement and use.
>
> Both have serious problems.  Ada tends toward (2).  I'm saying we
> should move more in the direction of (1), at least in this case.

I can buy that for globals annotations. Expression functions are completely
harmless even if you have them and follow your standard of visibility. They
don't do any good in that case, but they can always be treated as a normal
function. There's nothing really new here semantically (they work just like any
other function renames in terms of visibility, primitiveness, and the like). For
analysis purposes they are cost-free, since they will either be inlined or
called normally -- both of which an analyzer of any kind can already deal with.
Finally, since they cost almost nothing to implement (at least compared to any
other alternative), and even allow writing a few things that aren't currently
possible (because they defer freezing), it's hard to imagine how they are
harmful.

You are always telling us to not let "best" prevent us from adopting a solution
that is "good enough". Here is a solution that is good enough for 80% of
predicate functions, lets compilers at least have the information they need, has
very little cost since there is hardly anything new here (just a new combination
of existing pieces). You're saying that we can't have this solution because it
isn't "best". And *then* you go on and say we can't have the "best" solution
either because it needs more study. I can't imagine a more classic example of
letting "best" kill off "good enough"!!

****************************************************************

From: Tucker Taft
Date: Friday, April 30, 2010  7:01 AM

I think I agree with Randy that this is a simple feature which will make a
significant pragmatic difference in the ability for compilers to do additional
compile-time analysis.  After all, in Ada 83, there was no real reason to have a
private part, except for pragmatic reasons associated with separate compilation.
The private part is exactly those things you need to provide the compiler when
compiling a *client* of an abstraction, to allow the compiler to generate
efficient code.  These expression functions seem to fit that purpose quite
exactly.

I don't particularly buy the notion that things in the private part shouldn't
affect whether some client code succeeds or fails at run-time; predicates are
clearly run-time semantics, not static semantics.  If a compiler chooses to emit
warnings about potential run-time failures at compile-time, that certainly is
its business, and it seems irrelevant whether it gleans that information from
the visible part, the private part, or the body, since clearly all three of
those places affect run-time semantics.

As an example of existing private-part information that might affect
compile-time warnings, consider whether a private type includes full default
initialization.  If the private type is implemented with a scalar type, then no
default initialization is provided.  If it is implemented with an access type,
then it is default initialized to null. If it is implemented using a composite
type, then how much of it is default initialized varies according to information
that is specified in the private part.

Clearly if you write:

    X : Priv_Type;
    Default_Value : constant Priv_Type := X;

it would make sense for the compiler to provide a compile-time warning if
Priv_Type is in fact implemented with a scalar type (or an array-of-scalar).

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  5:08 PM

I completely  agree with Bob's position on this. I find restricting to an
expression to be hostile to good coding practice, since it results in having to
construct awkward complex expressions when simple sets of declarations and
statements would do better.

I also think these belong in the visible part; what on earth is the point of
having them in the private part? The private part is simply part of the
implementation separated off for convenience of the compiler (it should not even
be in the same file as the spec, and we plan to implement that separation in
GNAT some day).

Why is it convenient for the compiler to be able to see the body when the
programmer cannot? Well I guess for inlining, but we already have that
mechanism.

So in short I see no point whatever in renaming of expressions in the private
part, and if we allow it in the public part, we should just allow bodies in the
public part and be done with it.

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  5:10 PM

> You are for some reason equating things that have very different
> orders of magnitude. Expression functions probably will take a week or
> so of effort to implement. Globals annotations are at least an order
> of magnitude more effort (probably two orders). It doesn't make sense
> to say "don't spend a week implementing an easy, useful feature
> because there is another feature that will take 5 months to implement
> that might be more important". But the effort saved will have no
> impact whatsoever on the results of the globals annotations.

By the way, in GNAT it is 5 minutes' work to allow bodies in the spec, because
we allow them internally anyway; we just have an arbitrary check to forbid it at
the source level, but we allow it in generated code (and all sorts of bodies are
generated in the spec).

****************************************************************

From: Bob Duff
Date: Friday, April 30, 2010  5:19 PM

In fact all Ada compilers allow instance bodies in specs.

But that's a bit different, because nothing in the instance can refer to
anything in the outer spec (because the generic must have come from elsewhere).

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  5:34 PM

The sort of thing we generate in GNAT is e.g. bodies of equality routines ...

****************************************************************

From: Tucker Taft
Date: Friday, April 30, 2010  5:34 PM

I personally don't want to see bodies showing up in specs.  I think that really
confuses the model.

The idea behind limiting the capabilities is that these are intended for simple,
predicate-like things that need visibility on the full type, so clearly they
can't be in the visible part. They are in the private part simply so the
compiler can do a bit more compile-time analysis of preconditions,
postconditions, and subtype predicates without breaking separate compilation.

This seems to be a case where the "same" feature is of interest to two different
groups for almost diametrically opposed reasons, which in fact means the two
groups come up with opposite requirements for the feature, and hence it really
should be discussed as two different features:

   1) Arbitrary bodies anywhere in specs.

   2) Simple expression-level predicates in the private part.

I might note that we are limiting preconditions and postconditions and
assertions to expressions, so there is some precedent for having to live without
temporary variables, etc., when trying to express something relevant to static
analysis.  That's what these "expression-functions" really are.

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  6:03 PM

OK, I see the point of view here, and am inclined to agree with what Tucker says
below:

[Previous message was quoted here - Editor.]

****************************************************************

From: Bob Duff
Date: Friday, April 30, 2010  5:55 PM

>    2) Simple expression-level predicates in the private part.

What I don't get is why "simple" should mean exactly "one expression".
I mean, declaring my one little Square constant (see other e-mail) doesn't imply
any complexity.  OTOH, an expression can look at globals, so it's not
necessarily "simple" from an analysis point of view.

Shrug.  I guess I see small payoff for this feature -- but I agree with Randy
and Tuck that the cost is also fairly small.  We're just disagreeing on the
ratio.

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  6:04 PM

> Shrug.  I guess I see small payoff for this feature -- but I agree
> with Randy and Tuck that the cost is also fairly small.  We're just
> disagreeing on the ratio.

Of course if you put in LET, then you are close to a full expression language,
and if you put in LAMBDA, you are all the way there :-)

****************************************************************

From: Bob Duff
Date: Friday, April 30, 2010  6:03 PM

> Sorry about the interruptions. I think you were pausing to gather your
> thoughts long enough that we thought you were done talking.

Yeah, that's a fault of mine -- I can't think quickly when speaking to people,
so I pause, so people jump in, and that causes me to get even more distracted...

That's why I prefer e-mail for most technical communication.

Of course, face-to-face meetings are better than phone meetings, because you can
see visual cues about who wants to speak when.
That's why I like to go to SofCheck for the ARG phone meetings
-- at least I see ONE person face to face.

> I don't see any way to accomplish a significantly less restrictive vision.
> Honestly, I don't see much point (beyond the easier inlining) to doing
> this for anything other than predicates. For the sorts of functions
> that occur as predicates, either they are trivial combinations of
> component values, or they require some complex operations that have no
> advantage to being exposed anyway.
>
> For instance, if you need a local variable and a loop to implement a
> predicate, it's not very likely to be analyzable anyway.

How about a local constant, and no loop?

function Is_Good (X, Y : Integer) return Boolean
    renames X * X > Y and X * X + 1 <= Y + Blah;

versus:

function Is_Good (X, Y : Integer) return Boolean is
    Square : constant Integer := X * X;
begin
    return Square > Y and Square + 1 <= Y + Blah;
end Is_Good;

or (to mix in some Lisp-y syntax):

function Is_Good (X, Y : Integer) return Boolean
     renames (let Square = X * X
        in (Square > Y and Square + 1 <= Y + Blah));

Anyway, part of my point is that "not very likely to be analyzable"
is not something we are in a position to judge.  It depends on tools that have
not yet been designed.

...
> > It's fine with me if some tools (possibly optionally) look at
> > private parts.
> > But if they do, they might as well look at bodies, too.
> > Conceptually, private part and body are both part of the
> > "implementation"; only the visible part is "specification".
> > In fact, they might as well look at all type extensions of a given
> > root type.  That's what's needed to "expose the implementation" in
> > all cases.  So I don't see any advantage to expression functions,
> > for tools that wish to break privacy.
> >
> > But a modular analysis of a client should look only at visible parts
> > of whatever it calls.
>
> Ada allows children to have visibility on the private part, so it's
> not correct to claim that no one should look there.

I didn't say "no one".  I said "clients".  Children are not clients -- they are
part of the abstraction.  (In fact, if I ran the circus, children would see
parent bodies.)

>...Compilers look there all the
> time, ...

Never for legality checks.  Only for code gen.  And for code gen, compilers look
at bodies (for inlining, and for generic instantiation (except in the case of
your generic-sharing implementation)).

>...and the runtime correctness of a program *depends* on what is found
>there. And what we're trying to analyze is the runtime correctness,
>isn't  it??

Well, yeah, sort of.  But a modular analysis that doesn't break privacy is more
useful -- it means the analysis will still be correct if you change the
implementation.

In other words, a proof that says "this thing won't raise C_E in this program"
is useful, but a proof that says, "this thing won't raise C_E, period, no matter
what calls it or is called by it" is MORE useful.

The main point of putting various assertions on specs (like preconditions) is to
enable the latter more-useful kind of analysis.

> In any case, I think the most important thing is to allow *compilers*
> to be able to do these sorts of analyses, without having to depend on
> anything that they don't naturally have. A compiler always has access
> to the private part of a package, and rarely has access to a body
> other than the one being compiled. Compilers usually don't pay much
> attention to privacy once they've gotten the legality rules out of the
> way - I don't think our compiler does any analysis on the code
> whatsoever until after the stage where privacy and the like is
> respected. (And of course there are runtime considerations which
> require ignoring privacy.)
>
> Why do I think compilers are so important? It's not so much compilers
> as whatever is the first tool run on the source code to remove errors.
> It should be possible to find the maximum number of errors immediately
> after finishing writing the code for a single unit, because that's
> when they're easiest to fix. These days, the first tool run is almost
> always the compiler (at least to do deep analysis - some IDEs do some
> syntactic checking). For that to work, no other units other than
> direct dependencies (in particular, no bodies) need exist.
>
> The other, related issue for me is that compilers have to be able to
> eliminate most of these contracts in the same way that we eliminate
> most range checks. Like range checks, turning off contracts is like
> using seat belts until you leave the garage - it's something you shouldn't do.

Bad analogy: seat belts cost near-zero to use.

...
> > My final point here is that I think globals annotations are a more
> > fruitful endeavor.  As proposed, they're not ready to be
> > standardized, but I think implementations can experiment with them,
> > and should give that experimentation higher priority than expression
> > functions.

> You are for some reason equating things that have very different
> orders of magnitude. Expression functions probably will take a week or
> so of effort to implement. Globals annotations are at least an order
> of magnitude more effort (probably two orders).

Yes, I agree that expression functions are much simpler to implement (and to
define the semantics of) than globals annotations.

...
> I don't agree that the framework for globals annotations is not ready
> to be standardized. The specific annotations that work best probably
> will require some experimentation, but by not putting together a
> common framework populated with a few important annotations which are
> obvious (specific items, everything in a package, nothing at all),
> we're either going to end up with a tower-of-babble or just as likely
> nothing at all. Even so, I understand why people feel this way and am
> willing to go without. But what I will not stand for is having
> supposed contracts with no way whatsoever for the compiler to analyze
> them if they're written only in portable Ada. If we don't have one or
> the other, then I don't want ANY new contracts in Ada. And I am going to fight
> that battle as long as I can.

I don't see why you want to pound your shoe on the table in this way.
I mean, preconditions and predicates are useful even if they are just run-time
checks, with no "proof tools" in sight.

You have to look at the alternative:  Put comments, which are obviously not
checked at all.

...
> You are always telling us to not let "best" prevent us from adopting a
> solution that is "good enough". Here is a solution that is good enough
> for 80% of predicate functions, lets compilers at least have the
> information they need, has very little cost since there is hardly
> anything new here (just a new combination of existing pieces). You're
> saying that we can't have this solution because it isn't "best". And
> *then* you go on and say we can't have the "best" solution either
> because it needs more study. I can't imagine a more classic example of letting
> "best" kill off "good enough"!!

Your honor, I plead guilty as charged.

****************************************************************

From: Bob Duff
Date: Friday, April 30, 2010  6:15 PM

> I think I agree with Randy that this is a simple feature ...

I agree it's a simple feature.

>...which will make a significant pragmatic  difference in the ability
>for compilers to do  additional compile-time analysis.

That part, I doubt.

I don't see how knowing that Is_Empty means "Blah.Count = 0" helps, unless the
compiler knows all places where the private Count component is modified.  That
requires looking at the body, not just the private part.

And compilers already know how to look at bodies (for inlining), so I don't see
any benefit from moving information from the body to the private part.

>...  After all, in Ada 83,
> there was no real reason to have a private part,

You can stop right there.  ;-)

> except for pragmatic reasons associated with separate compilation.
> The private part is exactly those things you need to provide the
> compiler when compiling a *client* of an abstraction, to allow the
> compiler to generate efficient code.

Efficient code has little to do with static analysis.

And anyway, efficient code requires inlining bodies, sometimes.

...
> As an example of existing private-part information that might affect
> compile-time warnings, consider whether a private type includes full
> default initialization.  If the private type is implemented with a
> scalar type, then no default initialization is provided.  If it is
> implemented with an access type, then it is default initialized to
> null.
> If it is implemented using a composite type, then how much of it is
> default initialized varies according to information that is specified
> in the private part.
>
> Clearly if you write:
>
>     X : Priv_Type;
>     Default_Value : constant Priv_Type := X;
>
> it would make sense for the compiler to provide a compile-time warning
> if Priv_Type is in fact implemented with a scalar type (or an
> array-of-scalar).

Yes, I agree.  That's because there's a flaw in Ada: you can't know from the
visible part whether it makes sense to declare a default-initialized object of
type Priv_Type.

****************************************************************

From: Randy Brukardt
Date: Friday, April 30, 2010  6:42 PM

...
> > For instance, if you need a local variable and a loop to implement a
> > predicate, it's not very likely to be analyzable anyway.
>
> How about a local constant, and no loop?
>
> function Is_Good (X, Y : Integer) return Boolean
>     renames X * X > Y and X * X + 1 <= Y + Blah;
>
> versus:
>
> function Is_Good (X, Y : Integer) return Boolean is
>     Square : constant Integer := X * X;
> begin
>     return Square > Y and Square + 1 <= Y + Blah;
> end Is_Good;
>
> or (to mix in some Lisp-y syntax):
>
> function Is_Good (X, Y : Integer) return Boolean
>      renames (let Square = X * X
>         in (Square > Y and Square + 1 <= Y + Blah));

I thought about this when I wrote my original reply, and it struck me that for
*constants* you might as well just define a second expression function. One
presumes the constant has some well-defined meaning (if it doesn't, how could
you name it and why would you want it?); it's unlikely to hurt to be more
generally available.

That is, I'd say that you should write:

function Square (X : Integer) return Integer renames (X * X);

function Is_Good (X, Y : Integer) return Boolean is
   (Square(X) > Y and Square(X) + 1 <= Y + Blah);

Presuming inlining, it will generate the same code either way; the amount of
text is about the same either way.

I didn't think this really answered your concern properly, so I didn't put it
into my reply. But maybe I was right the first time...

> Anyway, part of my point is that "not very likely to be analyzable"
> is not something we are in a position to judge.  It depends on tools
> that have not yet been designed.

This is true, of course.

...
> >...Compilers look there all the
> > time, ...
>
> Never for legality checks.  Only for code gen.  And for code gen,
> compilers look at bodies (for inlining, and for generic instantiation
> (except in the case of your generic-sharing implementation)).

Well, all of these things are technically done at runtime. There aren't any
legality checks involved. Yes, I realize this is Ada lawyer hairsplitting.

> >...and the runtime correctness of a program *depends* on what is
> >found there. And what we're trying to analyze is the runtime
> >correctness, isn't  it??
>
> Well, yeah, sort of.  But a modular analysis that doesn't break
> privacy is more useful -- it means the analysis will still be correct
> if you change the implementation.
>
> In other words, a proof that says "this thing won't raise C_E in this
> program"
> is useful, but a proof that says, "this thing won't raise C_E, period,
> no matter what calls it or is called by it" is MORE useful.

I can't argue with this. But I don't think I want to have to wait until 2025 to
be able to get anything, and as Tucker says, proofs based only on specs are
beyond the state of the art today (at least for full-size programming languages
like Ada).

...
> > The other, related issue for me is that compilers have to be able to
> > eliminate most of these contracts in the same way that we eliminate
> > most range checks. Like range checks, turning off contracts is like
> > using seat belts until you leave the garage - it's
> something you shouldn't do.
>
> Bad analogy: seat belts cost near-zero to use.

This analogy is used for range checks all the time. The point is that with
effort, the compiler can make the cost of range checks near-zero.

I think this is an important property of contracts (as opposed to just runtime
assertions, which definitely don't have this property): the compiler ought to be
able to eliminate the majority of the checks. (And the ones that remain suggest
places where the program might have problems.)

If contracts are expensive, no one will use them in the first place. But they're
an important safety feature, so they need to be reasonably cheap. And in any
case turning off safety devices to go out in the world is silly. That's when you
*need* safety devices - to deal with the unforeseen!

(This expense argument is often used against Ada's range and other checks, but
we Ada people know from experience that it is rare that a program has a
performance problem specifically because of the presence of these checks.
Usually the program is plenty fast whether or not they are on, or it is too slow
either with them on or turned off. You can make the same argument about the
code size of checks. Ada provides Suppress for the rare program in the middle, not for some
sort of general need. All of this applies just as well to all contracts -- and
in large part to assertions as well. [The vast majority of assertions in my code
are just a normal part of the code; I don't make any effort to be able to turn
them off. Why compile/serve web pages/analyze spam without seat belts??])

...
> > I don't agree that the framework for globals annotations is not
> > ready to be standardized. The specific annotations that work best
> > probably will require some experimentation, but by not putting
> > together a common framework populated with a few important
> > annotations which are obvious (specific items, everything in a
> > package, nothing at all), we're either going to end up with a
> > tower-of-babble or just as likely nothing at all. Even so, I
> > understand why people feel this way and am willing to go without.
> > But what I will not stand for is having supposed contracts with no
> > way whatsoever for the compiler to analyze them if they're written
> > only in portable Ada. If we don't have one or the other, then I
> > don't want ANY new contracts in Ada. And
> I am going to fight that battle as long as I can.
>
> I don't see why you want to pound your shoe on the table in this way.
> I mean, preconditions and predicates are useful even if they are just
> run-time checks, with no "proof tools" in sight.

Sure, but we don't need new features for that; putting pragma Assert inside of a
subprogram body will provide that just fine. (Or just using ordinary code, as we
do in Claw.)

The promise of these new features is for them to be at least partially
compile-time checkable, with little or no runtime overhead. That's highly
unlikely to happen for pragma Assert (or anything in the body, for that matter).

> You have to look at the alternative:  Put comments, which are
> obviously not checked at all.

No, the alternative is using pragma Assert and/or ordinary code in the
subprogram body. Still checked, but not well communicated to the caller. We have
a name for people who don't check boundary conditions: "C programmers" ;-)

...
> > You are always telling us to not let "best" prevent us from adopting
> > a solution that is "good enough". Here is a solution that is good
> > enough for 80% of predicate functions, lets compilers at least have
> > the information they need, has very little cost since there is
> > hardly anything new here (just a new combination of existing
> > pieces). You're saying that we can't have this solution because it
> > isn't "best". And
> > *then* you go on and say we can't have the "best" solution either
> > because it needs more study. I can't imagine a more classic
> example of letting "best" kill off "good enough"!!
>
> Your honor, I plead guilty as charged.

I sentence you to getting your homework done early. :-)

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  7:06 PM

>> ...  After all, in Ada 83,
>> there was no real reason to have a private part,
>
> You can stop right there.  ;-)
>
>> except for pragmatic reasons associated with separate compilation.
>> The private part is exactly those things you need to provide the
>> compiler when compiling a *client* of an abstraction, to allow the
>> compiler to generate efficient code.

Pretty fundamental "pragmatic reasons", the compiler HAS to know the layout of
types, to allocate space for them if nothing else!

> Efficient code has little to do with static analysis.

True indeed

****************************************************************

From: Bob Duff
Date: Friday, April 30, 2010  7:46 PM

> Pretty fundamental "pragmatic reasons", the compiler HAS to know the
> layout of types, to allocate space for them if nothing else!

It doesn't HAVE to.  It HAS to only for efficiency reasons.

And for maximal efficiency, it has to do inlining, which
(currently) requires looking at bodies.  I don't think the sky would fall if
compilers also had to take a peek at bodies in order to efficiently allocate
space for objects of private types.

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  8:20 PM

I strongly disagree, avoiding dynamic allocation is not just an efficiency
issue, it is a fundamental functional issue in certified environments that do
not allow dynamic allocation.

****************************************************************

From: Bob Duff
Date: Friday, April 30, 2010  7:32 PM

> I thought about this when I wrote my original reply, and it struck me
> that for *constants* you might as well just define a second expression function.
> One presumes the constant has some well-defined meaning (if it
> doesn't, how could you name it and why would you want it?); it's
> unlikely to hurt to be more generally available.
>
> That is, I'd say that you should write:
>
> function Square (X : Integer) return Integer renames (X * X);
>
> function Is_Good (X, Y : Integer) return Boolean is
>    (Square(X) > Y and Square(X) + 1 <= Y + Blah);

Good point.

> Presuming inlining, it will generate the same code either way; the
> amount of text is about the same either way.

Agreed.

> I didn't think this really answered your concern properly, so I didn't
> put it into my reply. But maybe I was right the first time...

Yes, you were right the first time.  ;-)

I learned something.

> I can't argue with this. But I don't think I want to have to wait
> until 2025 to be able to get anything, and as Tucker says, proofs
> based only on specs are beyond the state of the art today (at least
> for full-size programming languages like Ada).

Well, Tuck's CodePeer tool looks at bodies (transitively), but it does so by
automatically generating the preconditions and the globals annotations that
we're discussing.

So I think "beyond state of the art" doesn't mean "the tools aren't smart
enough", but more like "the tools don't have the information they need".  We're
trying to give them that info.

> ...
> > > The other, related issue for me is that compilers have to be able
> > > to eliminate most of these contracts in the same way that we
> > > eliminate most range checks. Like range checks, turning off
> > > contracts is like using seat belts until you leave the garage -
> > > it's
> > something you shouldn't do.
> >
> > Bad analogy: seat belts cost near-zero to use.
>
> This analogy is used for range checks all the time.

Yeah, I know.  Or similar analogies involving floatation devices for sailors,
and parachutes for airplane pilots. I'm not picking on you -- I think it's
equally bogus when Tony Hoare does such reasoning-by-sound-bite.  ;-)

>... The point is that with
> effort, the compiler can make the cost of range checks near-zero.

Sometimes, other times not.

> I think this is an important property of contracts (as opposed to just
> runtime assertions, which definitely don't have this property): the
> compiler ought to be able to eliminate the majority of the checks.
> (And the ones that remain suggest places where the program might have
> problems.)

I think you're using "contract" in a non-standard way.
Eiffel's contracts are just run-time checks, just like ours are.

> If contracts are expensive, no one will use them in the first place.

Nonsense.  If something is too expensive for production use, it can still be
useful for testing.

>...But
> they're an important safety feature, so they need to be reasonably cheap.

No, they don't need to be.  We would like them to be.

> And in any case turning off safety devices to go out in the world is silly.
> That's when you *need* safety devices - to deal with the unforeseen!

Nonsense again.  Turning off "safety devices" is an engineering decision based
on costs/benefit.

> (This expense argument is often used against Ada's range and other
> checks,

I agree that range checks are analogous.

> ...
> > I don't see why you want to pound your shoe on the table in this way.
> > I mean, preconditions and predicates are useful even if they are
> > just run-time checks, with no "proof tools" in sight.
>
> Sure, but we don't need new features for that; putting pragma Assert
> inside of a subprogram body will provide that just fine. (Or just
> using ordinary code, as we do in Claw.)

No, it isn't "just fine".  Preconditions get scattered about all callers automatically.  That's useful, even if unsound.

> The promise of these new features is for them to be at least partially
> compile-time checkable, with little or no runtime overhead. That's
> highly unlikely to happen for pragma Assert (or anything in the body,
> for that matter).
>
> > You have to look at the alternative:  Put comments, which are
> > obviously not checked at all.
>
> No, the alternative is using pragma Assert and/or ordinary code in the
> subprogram body. Still checked, but not well communicated to the
> caller. We have a name for people who don't check boundary conditions: "C
> programmers" ;-)

OK, but preconditions are still better than that (sprinkle checks in various
useful places, compared to the programmer sprinkling Assert in some of those
places).
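To make the contrast concrete, here is a minimal sketch of the two styles being compared (the Stack, Element, Push, and Is_Full names are invented for illustration; the second form uses the Pre aspect under discussion):

```ada
--  Style 1: the check lives only in the body; callers and tools
--  that process specs never see it.
procedure Push (S : in out Stack; E : Element) is
begin
   pragma Assert (not Is_Full (S));  --  checked at run time only
   ...
end Push;

--  Style 2: the precondition is part of the visible spec, so it is
--  communicated to every caller and available to static analysis.
procedure Push (S : in out Stack; E : Element)
  with Pre => not Is_Full (S);
```

Both forms perform the same run-time check; the difference is where the obligation is stated.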

> ...
> > > You are always telling us to not let "best" prevent us from
> > > adopting a solution that is "good enough". Here is a solution that
> > > is good enough for 80% of predicate functions, lets compilers at
> > > least have the information they need, has very little cost since
> > > there is hardly anything new here (just a new combination of
> > > existing pieces). You're saying that we can't have this solution
> > > because it isn't "best". And
> > > *then* you go on and say we can't have the "best" solution either
> > > because it needs more study. I can't imagine a more classic
> > example of letting "best" kill off "good enough"!!
> >
> > Your honor, I plead guilty as charged.
>
> I sentence you to getting your homework done early. :-)

Yes, sir, your honor, sir!  ;-)

Seriously, I hope to do it tomorrow or Sunday.  Thank you very much for providing
the relevant meeting notes!  (which I have not yet looked at)

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  8:18 PM

>> If contracts are expensive, no one will use them in the first place.
>
> Nonsense.  If something is too expensive for production use, it can
> still be useful for testing.

Machines are so fast, that for many users, CPU time is simply not a limited
resource.

>> ...But
>> they're an important safety feature, so they need to be reasonably cheap.
>
> No, they don't need to be.  We would like them to be.
>
>> And in any case turning off safety devices to go out in the world is silly.
>> That's when you *need* safety devices - to deal with the unforeseen!
>
> Nonsense again.  Turning off "safety devices" is an engineering
> decision based on costs/benefit.

Furthermore, leaving in run-time checks for production builds can be MUCH more
dangerous than omitting them, if you haven't carefully figured out what to do if
a check fails. Ariane-5 is a really good example of this.

****************************************************************

From: Randy Brukardt
Date: Friday, April 30, 2010  8:34 PM

...
> > > I don't see why you want to pound your shoe on the table in this way.

It's interesting that "pound your shoe" has become a synonym for "take an
extreme" position within the Ada community. I generally keep my shoes on...

> > > I mean, preconditions and predicates are useful even if they are
> > > just run-time checks, with no "proof tools" in sight.
> >
> > Sure, but we don't need new features for that; putting pragma Assert
> > inside of a subprogram body will provide that just fine. (Or just
> > using ordinary code, as we do in Claw.)
>
> No, it isn't "just fine".  Preconditions get scattered about all
> callers automatically.  That's useful, even if unsound.

Sorry, I was talking about putting the Asserts/other code inside the called
subprogram body. I agree that putting those at the call site is not "just fine".
But there is no reason to put those at the call site for the purposes of runtime
checks -- all that does is bloat up the code. The purpose of putting them at the
call site is to enable optimization/static checking; if you're not doing that,
you have no reason to not put them inside of the body.

> > The promise of these new features is for them to be at least
> > partially compile-time checkable, with little or no runtime
> > overhead. That's highly unlikely to happen for pragma Assert (or
> > anything in the body, for that matter).
> >
> > > You have to look at the alternative:  Put comments, which are
> > > obviously not checked at all.
> >
> > No, the alternative is using pragma Assert and/or ordinary code in
> > the subprogram body. Still checked, but not well communicated to the
> > caller. We have a name for people who don't check boundary
> conditions: "C programmers"
> > ;-)
>
> OK, but preconditions are still better than that (sprinkle checks in
> various useful places, compared to the programmer sprinkling Assert in
> some of those places).

No sprinkling going on; those checks are in the called body, not at the call
site, and surely are going to occur in every call for that reason.

****************************************************************

From: Randy Brukardt
Date: Friday, April 30, 2010  8:45 PM

Oops, missed one comment.

...
> > I think this is an important property of contracts (as opposed to
> > just runtime assertions, which definitely don't have this property):
> > the compiler ought to be able to eliminate the majority of the checks.
> > (And the ones that remain suggest places where the program might
> > have
> > problems.)
>
> I think you're using "contract" in a non-standard way.
> Eiffel's contracts are just run-time checks, just like ours are

I probably am; see my paper on this topic. I needed a way to differentiate pure
runtime checks (which I called "assertions") from checks that can usefully be
analyzed statically (which I called "contracts"). Maybe there is a better term
for the latter, but that is always what I meant by "contracts" (including in the
ARG instructions). The big benefit of using Ada is the many ways that Ada
detects problems and refuses to let you proceed until they're fixed; runtime
checks are of much less benefit (which is not to say NO benefit!) as they
require extensive testing to trigger problems and cause what you call a
"tripping hazard" in fielded systems.

Using this terminology: I'm not interested in any more "assertions" in Ada (that
is, pure runtime checks); if we get some because of people writing bad
contracts, that's OK, but they are nearly pointless. (I have more sympathy for
predicate assertions, because "sprinkling" those is hard;
precondition/postcondition assertions on the other hand are pretty simple to
write in the body and I suspect most of us have been writing those since the
early days of Ada 83).

My preference would be for contracts with real legality rules (like we have for
parameter passing, generic instantiation, and the like), but that's probably
unrealistic for anything beyond "set constraints". So I'll live with ones that
can be reduced/detected as warnings of one sort or another.

My understanding of Eiffel is that it doesn't have any real contracts beyond
subprogram profiles.

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  8:54 PM

>>>> I don't see why you want to pound your shoe on the table in this way.
>
> It's interesting that "pound your shoe" has become a synonym for "take
> an extreme" position within the Ada community. I generally keep my shoes on...

It means that in any context; it comes from Khrushchev and the U-2 affair!

****************************************************************

From: Bob Duff
Date: Friday, April 30, 2010  9:04 PM

> It's interesting that "pound your shoe" has become a synonym for "take
> an extreme" position within the Ada community. I generally keep my shoes on...

Sorry.  When I say "pound one's shoe on the table" I am accusing you of being
somewhat over-the-top in your rhetoric, either for or against some technical
position.  It's a reference to the famous case where Nikita Khrushchev of the
Soviet Union actually pounded his shoe on the table to make a point (whatever
point it was, I don't much care) in a United Nations debate.

There's a description of it here:

http://en.wikipedia.org/wiki/Shoe-banging_incident

So when you say, for example, "that's a nonstarter" or "that is unacceptable", I
respond that "you're pounding your shoe on the table".  Get it?

When you say, "I don't like that" or "I think that causes maintenance problems",
then I won't cry "shoe on table".  ;-)

****************************************************************

From: Robert Dewar
Date: Friday, April 30, 2010  8:57 PM

>> I think you're using "contract" in a non-standard way.
>> Eiffel's contracts are just run-time checks, just like ours are
>
> I probably am; see my paper on this topic. I needed a way to
> differentiate pure runtime checks (which I called "assertions") from
> checks that can usefully be analyzed statically (which I called
> "contracts"). Maybe there is a better term for the latter, but that is always
> what I meant by "contracts"

That's REALLY confusing terminology. As Bob says, the word contract is strongly
associated with Eiffel (as in the trademarked phrase programming by contract) and
in this context it ALWAYS refers to run-time checks.

> Using this terminology: I'm not interested in any more "assertions" in
> Ada (that is, pure runtime checks); if we get some because of people
> writing bad contracts, that's OK, but they are nearly pointless. (I
> have more sympathy for predicate assertions, because "sprinkling"
> those is hard; precondition/postcondition assertions on the other hand
> are pretty simple to write in the body and I suspect most of us have
> been writing those since the early days of Ada 83).

To say that contracts as used in Eiffel are nearly pointless is provocative and
is definitely an extreme minority position. And for sure we have lots of major
Ada users who are VERY interested in contracts in the normal sense, as Eiffel
defines it.

> My preference would be for contracts with real legality rules (like we
> have for parameter passing, generic instantiation, and the like), but
> that's probably unrealistic for anything beyond "set constraints". So
> I'll live with ones that can be reduced/detected as warnings of one sort of another.
>
> My understanding of Eiffel is that it doesn't have any real contracts
> beyond subprogram profiles.

Well you are using contracts in such a confusing manner that I don't know what
to make of such a comment!

****************************************************************

From: Yannick Moy
Date: Saturday, May 1, 2010  2:28 AM

>> I probably am; see my paper on this topic. I needed a way to
>> differentiate pure runtime checks (which I called "assertions") from
>> checks that can usefully be analyzed statically (which I called
>> "contracts"). Maybe there is a better term for the latter, but that
>> is always what I meant by "contracts"
>
> That's REALLY confusing terminology. As Bob says, the word contract is
> strongly associated with Eiffel (as in the trademarked phrase programming
> by contract) and in this context it ALWAYS refers to run-time checks.

I think the term "contract" means two different things for different groups of
people. Historically, it means the run-time checking as performed in Eiffel, and
it still means this for most people. In the research community of program
proving, it means of course what Randy means: side-effect-free, deterministic,
terminating boolean expressions.

JML and Spec# take the word of the programmer for it, as only methods marked as
"pure" in the source can be called from contracts, where pure means essentially
side-effect free. Here is the JML definition of pure:

http://www.eecs.ucf.edu/~leavens/JML/jmlrefman/jmlrefman_7.html#SEC59

SPARK allows calls to SPARK functions (which cannot have side-effects anyway)
and logic functions (declared in stylized comments and defined in Prolog syntax
for the prover).

ACSL only allows calls to logic functions, either defined in stylized comments
(which look a lot like expression functions), or only declared in stylized
comments and defined axiomatically. Here is the ACSL definition of
expressions used in contracts (section 2.2):

http://frama-c.com/download/acsl_1.4.pdf

I think it is less confusing to speak about contracts in the Eiffel way, while
working towards making these contracts amenable to static analysis.

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  5:38 AM

> I think the term "contract" is meaning the two things for different
> groups of people. Historically, it means the run-time checking as
> performed in Eiffel, and it still means this for most people.

Given the trademarked phrase, I think that's the only way it should be used. How
about using the phrase "static contracts" when we mean that and "dynamic
contracts" when we mean that, so we never get confused. Of course part of the
goal is in some cases to have contracts that can be regarded as either static or
dynamic (that would be true in Eiffel too, to the extent that Eiffel contracts
can be treated statically, it would make sense to do so).

> In the research community of program
> proving, it means of course what Randy means: side-effect free
> deterministic terminating boolean expressions.

(side comment on side-effect free)

I never know what "side-effect free" means. For instance, the only possible
reasonable implementation of a function like Ackermann is to memoize. Does the
memoization of such a function make it have a side effect, or is it just an
efficiency detail of the implementation? (In some languages you get the
memoization effect just by asking for it, without having to program it, and
there you can most clearly see that it is just a low-level implementation
detail.) In this kind of context, side-effect freedom is a high-level semantic
concept.

Or if you add a counter to a square root function so that for tuning purposes
you can tell how many times it is called, then it is still conceptually pure,
and indeed you want the compiler still to hoist calls, eliminate duplicate calls
etc.

During the discussion of Ada 83, there was some considerable sentiment for
distinguishing between side effect free functions and value returning procedures
(a la the DEC pragma). But the discussion ALWAYS got bogged down in discussions
of points like these, and never got anywhere.

And of course all functions have side effects if you think in terms of the stack
usage, needing to allocate stack space (and possibly raising SE), not to mention
leaving traces of the call on the stack.

So for my taste, one should avoid the phrase side-effect free and instead use a
well defined term in its place that specifies the semantic level at which this
property is defined.
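The square-root counter mentioned above can be sketched as follows (the Tuning package and its names are invented for illustration; the function is conceptually pure even though it updates instrumentation state):

```ada
with Ada.Numerics.Elementary_Functions;

package body Tuning is

   Call_Count : Natural := 0;
   --  Instrumentation only: it never affects the result, so a compiler
   --  that treats Sqrt as pure (hoisting or merging calls) would still
   --  preserve the function's meaning, just not the exact count.

   function Sqrt (X : Float) return Float is
   begin
      Call_Count := Call_Count + 1;  --  the "side effect" in question
      return Ada.Numerics.Elementary_Functions.Sqrt (X);
   end Sqrt;

end Tuning;
```

Whether such a function counts as "side-effect free" depends entirely on the semantic level at which the question is asked.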

> SPARK allows calls to SPARK functions (which cannot have side-effects
> anyway) and logic functions (declared in stylized comments and defined
> in Prolog syntax for the prover).

Of course at the low level of looking at memory usage, all SPARK functions *do*
have side effects, and indeed when the claim is made that a program has been
proved to be run-time exception free, this does not cover Storage_Error! That
has to be verified at a totally different level.

****************************************************************

From: Tucker Taft
Date: Saturday, May 1, 2010  8:02 AM

>> ...
>> Efficient code has little to do with static analysis.
>
> True indeed

This one surprises me.  Efficient Ada code implies eliminating unnecessary
run-time checks.  Good static analysis (e.g. range propagation) is normally the
key to that.  Now that we have user-written assertions, preconditions,
postconditions, etc., eliminating the unnecessary checks associated with these
requires even more static analysis.
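As a small illustration of the kind of range propagation meant here (a hypothetical example; the names are invented):

```ada
subtype Index is Integer range 1 .. 100;

Table : array (Index) of Integer;

procedure Clear is
begin
   for I in Index loop
      --  The index check on Table (I) is provably redundant: the loop
      --  parameter I is statically known to lie within Index, so a
      --  compiler doing range propagation can omit the check entirely.
      Table (I) := 0;
   end loop;
end Clear;
```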

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  8:20 AM

Yes, of course, but what we (or at least I) meant by this is that issues of
efficient code generation are not relevant to the *process* of static analysis,
though indeed the *results* of static analysis most certainly can lead to better
code generation (e.g. not generating code for dead code that is not obviously
dead without subtle analysis).

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  10:29 AM

Right, me too.

Also, there are some important differences between static analysis done to make
sure the program works, versus making it fast.

For the former, you prefer to be modular (clients don't depend on private parts
and bodies), so the analysis of one thing can't be invalidated by changes in the
other.  For the latter, it's just fine to depend on bodies -- that's what
inlining is all about.

The world desperately needs the former.  For the latter, well, shrug, lots of
programs are plenty fast enough as it is, and besides, we already have a
solution for slow run-time checks -- suppress them.  (Don't start shoe pounding
-- I'm well aware of the costs and benefits of suppressing checks, and I don't
buy analogies with seat belts and life jackets).

If static analysis can make programs faster without resorting to Suppress,
that's a nice side effect, but I have zero interest in discussing how to design
our assertions to accomplish that.

P.S. Note to Randy on terminology: "Assertions" includes pragma Assert,
pre/post-conditions, invariants, null exclusions, constraints, predicates.  (I'm
saying this because you asserted during the meeting that "assertion" means only
pragma Assert.)  "Contract" means that subset of assertions that are visible to
clients. This is pretty well established terminology (see e.g. Bertrand Meyer's
writings), so I suggest we not try to change it.

And both terms (assertion and contract) are broad -- they don't imply static
versus dynamic checking (could be some of both).  This part is admittedly not
universal (some static analysis zealots claim that an "assertion" must be
statically checked, or it's not an assertion -- but they're wrong. ;-) )

Even a comment can be called an assertion.  And if it's in the spec, it can be
called part of the contract.  Those are checked only by "code review".

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  11:23 AM

> Given the trademarked phrase, I think that's the only way it should be
> used. How about using the phrase "static contracts" when we mean that
> and "dynamic contracts" when we mean that, so we never get confused.

I think "contract" means an expression of some property that one piece of code
agrees to ensure, so another piece of code can rely on it.

"Static" and "dynamic" should not be viewed as properties of a contract, but as
properties of various tools that automatically check them (or even human beings
who check them "by hand").

> Of course part of the goal is in some cases to have contracts that can
> be regarded as either static or dynamic (that would be true in Eiffel
> too, to the extent that Eiffel contracts can be treated statically, it
> would make sense to do so).

Right, exactly.

****************************************************************

From: Tucker Taft
Date: Saturday, May 1, 2010  8:22 AM

I think it might be useful to refer to these two different proposals as:

   Parameterized Expressions

and

   Bodies in Specs

I personally have an interest in Parameterized Expressions, but no interest in
Bodies in Specs.

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  8:28 AM

> I personally have an interest in Parameterized Expressions, but no
> interest in Bodies in Specs.

I don't think these are so separate, bodies-in-specs is intended to address
exactly the same issues as renamed expressions, just in a more general form, so
I don't think it is helpful to try to separate two solutions for the same
problem.

I do have some sympathy for Tuck's objections to bodies in specs as confusing,
and for sure the potential for abuse is huge:

"what's all this crap about specs and bodies? let's just put everything in the
specs and avoid this nonsense!"

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  10:44 AM

> I do have some sympathy for Tuck's objections to bodies in specs as
> confusing, and for sure the potential for abuse is huge:

I agree with you and Tuck that the potential for abuse is huge.

But I have zero sympathy for Tuck's objections -- "potential abuse"
is NEVER an appropriate language-design criterion.  "Potential for accidental
misuse" is, but not mere "potential abuse".

This example illustrates why.  Tuck wants to prevent "abuse", but he can't
define that precisely, so he defines "abuse" as "putting anything besides an
expression in the spec", which is just a wrong definition.  It prevents some
(not all) misuse, but also prevents legitimate use (like my example of a local
constant).  My definition is "putting large, complicated stuff in the spec" --
but that's not precise enough for a language rule.

> "what's all this crap about specs and bodies? let's just put
> everything in the specs and avoid this nonsense!"

Sure -- the same folks who think every type should have a non-portable rep
clause, or unchecked conversions should be scattered all over the place, etc.
It's not our job as language designers to rein in those folks.  It's your job as
an NYU prof to educate them.  ;-)

Having said all that, I must admit that my objections to parameterized
expressions are weak.  I'm not going to quit using Ada if this feature is added.
In fact, I would use this feature myself, if it existed.

I also admit that it's easy to implement.

P.S. I don't understand why the term "parameterized expressions"
is preferable to "expression functions" -- I don't much care.

P.P.S. Tucker suggested a similar thing for a procedure, which I imagine looks
like this:

    procedure Blah (X : T);
  private
    procedure Blah (X : T) renames
      Mumble(X*X); -- A single procedure call goes here.

So what's the term for that?

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  11:58 AM

> But I have zero sympathy for Tuck's objections -- "potential abuse"
> is NEVER an appropriate language-design criterion.  "Potential for
> accidental misuse" is, but not mere "potential abuse".

For me, I never say never! And in this case, I think the potential for confusion
and abuse is too high compared to the utility.

I do think that potential abuse is a legitimate consideration.
For example, suppose we argued to introduce completely unsafe untyped pointers
to solve some very specific problem with the understanding that they should only
be used in this situation. I would object that it would simply be too easy for
people to abuse this feature.

Unchecked_Union is an interesting example for me of a feature on the edge. You
can easily see it being abused, but I think we are safe given its very specific
description as being for interfacing with C unions.
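
For reference, that intended use looks roughly like this (the C union in the
comment is a hypothetical example; Unchecked_Union and Convention are the
standard pragmas):

     with Interfaces.C;
     package C_Unions is
        --  Mirrors the hypothetical C type:  union u { int i; float f; };
        --  All discriminants must have defaults, and the discriminant
        --  occupies no storage in the object.
        type U (Is_Float : Boolean := False) is record
           case Is_Float is
              when False => I : Interfaces.C.int;
              when True  => F : Interfaces.C.C_float;
           end case;
        end record;
        pragma Unchecked_Union (U);
        pragma Convention (C, U);
     end C_Unions;

Reads of either component are unchecked, which is exactly why the feature is
described as being for C interfacing only.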

> This example illustrates why.  Tuck wants to prevent "abuse", but he
> can't define that precisely, so he defines "abuse"
> as "putting anything besides an expression in the spec", which is just
> a wrong definition.  It prevents some (not all) misuse, but also
> prevents legitimate use (like my example of a local constant).  My
> definition is "putting large, complicated stuff in the spec" -- but
> that's not precise enough for a language rule.

I would prefer to solve the local constant problem by introducing LET.

>> "what's all this crap about specs and bodies? let's just put
>> everything in the specs and avoid this nonsense!"
>
> Sure -- the same folks who think every type should have a non-portable
> rep clause, or unchecked conversions should be scattered all over the
> place, etc.

No, I don't buy the analogy in this case, I don't think these cases are similar.

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  12:43 PM

> For me, I never say never!

I rarely do.  ;-)

> I would prefer to solve the local constant problem by introducing LET.

If we can come up with a good syntax for it, then that seems like a reasonable
solution.

****************************************************************

From: Tucker Taft
Date: Saturday, May 1, 2010  1:20 PM

> I agree with you and Tuck that the potential for abuse is huge.
>
> But I have zero sympathy for Tuck's objections -- "potential abuse"
> is NEVER an appropriate language-design criterion.  "Potential for
> accidental misuse" is, but not mere "potential abuse"...

Well, here we differ.  I believe that part of good language design is
"encouraging" the programmer to make wise choices, by making some things easier
than others.  Just giving programmers free and uniform license to do whatever
they want is a design philosophy, but it isn't mine.  Of course there is a
spectrum here, but I think you (Bob) and I are on somewhat different places on
this spectrum.

So for me, if a feature has more "potential for abuse" than some other feature,
I will favor the one with less such potential, and not surprisingly that will
sometimes "crimp" someone's style.  If it doesn't do that now and then, it is
probably too lax in my book.

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  1:28 PM

So I seem to be somewhere in between, on this issue I tend to side with Tuck,
but when it comes to IN OUT parameters in functions, I would be happy to allow
them without the complexification of the anti-side effect rule.

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  1:26 PM

>> I would prefer to solve the local constant problem by introducing
>> LET.
>
> If we can come up with a good syntax for it, then that seems like a
> reasonable solution.

Well if you introduce a keyword, it's easy

LET declaration IN expression

(let A : constant Integer := X * Y in A * A + (A - 1))

If you don't like introducing a keyword (which I think is a reasonable
reluctance), you could have an unreserved keyword (it would work to have "let"
be unreserved), or use one of the following

     (A * A + (A - 1) with A : constant Integer := 23)

     (do A : constant Integer := 23 in ....)

     (use A : constant Integer := 23 in ....)

or just

     (A : constant Integer := 23 in ...)

****************************************************************

From: Tucker Taft
Date: Saturday, May 1, 2010  1:42 PM

> ...
> or just
>
>     (A : constant Integer := 23 in ...)

Unfortunately membership test creates
a possible ambiguity here.

You could perhaps use "with...do" instead of "let ... in"

   (with A = <expression>, B = <expression>, ... do ...)

If these are all constants, I don't see the need for
": constant <type> :=" when "=" will do.

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  2:09 PM

> You could perhaps use "with...do" instead of "let ... in"
>
>    (with A = <expression>, B = <expression>, ... do ...)

I like this one, reads very nicely to me

> If these are all constants, I don't see the need for
> ": constant <type> :=" when "=" will do.

I never mind Ada getting closer to Algol-68 :-)

****************************************************************

From: Edmond Schonberg
Date: Saturday, May 1, 2010  2:48 PM

If this is introduced into the language, the need for renamed expressions (or
function statements :-)!) disappears. You can just use such a form directly in
pre- and post-conditions.  They are part of the family of new expression forms:
conditional, quantified, case, and now these. I guess their name is NOT
let-expressions!

****************************************************************

From: Tucker Taft
Date: Saturday, May 1, 2010  3:07 PM

The need for parameterized expressions remains because of private types.  The
precondition has to be in the visible part, while the expression would need to
be in the private part if it refers to properties of the full type.

In fact the whole point here is private types, where you have relatively simple
preconditions on some primitive operation, say, such as "Is_Open(File)" for a
file Read.  The File_Type is an IN parameter, and if Is_Open checks something in
the File_Type to determine whether the file is open, multiple sequential calls
on an operation with an Is_Open precondition can proceed with at most the first
one requiring an actual test.  E.g.:

    procedure Open(File : in out File_Type; ...)
      with Post => Is_Open(File);

    procedure Read(File : in File_Type; ...)
      with Pre => Is_Open(File);

    procedure Close(File : in out File_Type)
      with Post => not Is_Open(File);

   private

     type File_Record;
     type File_Type is access File_Record;

     function Is_Open(File : in File_Type) return Boolean
       is (File /= null);

     ...

With the above, all run-time checks associated with the precondition on Read can
likely be omitted, presuming the call on Open is visible.

This illustrates a situation where it would be nice to associate a postcondition
with default initialization somehow.  E.g.:

     type File_Type is limited private
       with Post => not Is_Open(File_Type);

where the "Post" here refers to the postcondition on default initialization.  Or
alternatively:

     type File_Type is limited private
       with Default => not Is_Open(File_Type);

Examples are often illuminating...

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  8:48 PM

> With the above, all run-time checks associated with the precondition
> on Read can likely be omitted, presuming the call on Open is visible.

This example illustrates my puzzlement as to why one would want the
implementation of Is_Open to be "exposed".

The reasoning of a tool is: Open implies Is_Open, so Read is OK thereafter.
That doesn't depend on the null-ness of File.  And anyway, for that, we need to
see the body of Open (which presumably sets File to non-null).

I just don't see how putting Is_Open in the private part (but not putting the
implementation of Open and Read in the private part) buys anything useful.

> This illustrates a situation where it would be nice to associate a
> postcondition with default initialization somehow.  E.g.:
>
>      type File_Type is limited private
>        with Post => not Is_Open(File_Type);
>
> where the "Post" here refers to the postcondition on default
> initialization.  Or alternatively:
>
>      type File_Type is limited private
>        with Default => not Is_Open(File_Type);

Yes, I think you're right that we need some way to assert things about default
values.

****************************************************************

From: Tucker Taft
Date: Saturday, May 1, 2010  10:06 PM

...
> The reasoning of a tool is: Open implies Is_Open, so Read is OK
> thereafter.  That doesn't depend on the null-ness of File.  And
> anyway, for that, we need to see the body of Open (which presumably
> sets File to non-null).

This isn't necessarily true because Is_Open could depend on a global, or
something that is at a level of indirection and hence writable by some operation
that takes the File_Type as an "in" parameter.

> I just don't see how putting Is_Open in the private part (but not
> putting the implementation of Open and Read in the private part) buys
> anything useful. ...

Because you then learn that Is_Open depends only on data that is directly in the
"File_Type" object, so can't be affected by calls that pass the File_Type object
as an "in" parameter.  Without that knowledge, you really have no idea what
the properties of Is_Open are.

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  3:16 PM

> If this is introduced into the language, the need for renamed
> expressions (or function statements :-)!)   disappears. You can just use
> such a form directly in pre- and post-conditions.  They are part of
> the family of new expression forms: conditional, quantified, case, and
> now these.

Well yes, you can use it but the functional abstraction is still useful,
consider something like

     function Glob (A, B : Float) return Float;

     ...

     function Glob (A, B : Float) return Float
        renames (with X = A**2 + B**2 do X*X + X);

and now in a precondition we have

     precondition (Glob (Arg1, Arg2) >= Glob (Arg3, Arg4));

Sure, you could write the precondition expression as a giant single expression,
but you would end up duplicating code, so it is nice to have the functional
abstraction.

> I guess their name is NOT  let-expressions!

I would call them WITH expressions

Do we want to allow a list

      (with X = A*A, Y = B*B do X*Y + X + Y)

or require nesting

      (with X = A*A do (with Y = B*B do X*Y + X + Y))

If we leave out the type, what's the resolution rule for the constant? I guess
its type must be self-evident without needing any context, like the expression
in a conversion.

****************************************************************

From: Edmond Schonberg
Date: Saturday, May 1, 2010  3:34 PM

No type inference please, resolution should proceed as for a block, and the
bound variables need explicit types. We need a separator, and it would be
confusing for it to be a semicolon, so a comma would be OK.

***************************************************************

From: Tucker Taft
Date: Saturday, May 1, 2010  3:49 PM

Type inference is used already for named numbers, for array and loop indices,
for case expressions, and for operands of a type conversion.  Why not here?

This seems the clear place where you want to avoid unnecessary syntactic
overhead.  You can always use a qualified expression if there is ambiguity.

****************************************************************

From: Edmond Schonberg
Date: Saturday, May 1, 2010  4:04 PM

Well, in all the above cases except conversions there is at least a class of
types to which the expression must belong. But you are right that saying that
the value is a complete context (as for a conversion) does not add any
complication to the analysis.

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  4:12 PM

I am in favor of making this a complete context and not requiring a type. But
isn't there one problem?  What about

      (with Days = 365 do Days*Days + Days)

that would be ambiguous, is that annoying?

****************************************************************

From: Edmond Schonberg
Date: Saturday, May 1, 2010  6:39 PM

By analogy, this would be of type Universal_Integer, some preference rule in
action.

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  6:48 PM

Presumably:

    (with Days = Day_Count'(365) do Days*Days + Days)

would be legal.

****************************************************************

From: Robert Dewar
Date: Saturday, May 1, 2010  6:48 PM

> By analogy, this would be of type Universal_Integer, some preference
> rule in action.

Interesting, so I suppose

   type X = mod 2 ** 64;

   y : x := x (2 ** 64 - 1);

must cause an overflow?

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  7:55 PM

No, I don't see why it would cause an overflow.
And I don't see what it has to do with Ed's comment.
Please explain what you're getting at.

(It must cause a syntax error, but I assume "=" is supposed to be "is".)

****************************************************************

From: Robert Dewar
Date: Sunday, May 2, 2010  6:21 AM

>> Interesting, so I suppose
>>
>>    type X = mod 2 ** 64;
>>
>>    y : x := x (2 ** 64 - 1);
>>
>> must cause an overflow?
>
> No, I don't see why it would cause an overflow.

Well, isn't 2 ** 64 - 1 interpreted as universal integer?
If so, that value is out of range of this type, which is a signed type with
bounds -2**63 .. +2**63-1.

But in fact it does not cause overflow at run time, so I am confused on the
treatment of universal integers and conversions like this one.

> And I don't see what it has to do with Ed's comment.
> Please explain what you're getting at.
>
> (It must cause a syntax error, but I assume "=" is supposed to be
> "is".)


yes, oops

****************************************************************

From: Bob Duff
Date: Sunday, May 2, 2010  8:47 AM

> Well isn't 2 ** 64 -1 interpreted as universal integer?

The literals are of type universal_integer, and are implicitly converted to
root_integer.  The operators return root_integer. So 2**64-1 is of type
root_integer.  This is then converted explicitly to X.

> If so, that value is out of range of this type, which is a signed type
> with bounds -2**63 .. +2**63-1.

Static expressions are always evaluated without overflow.

****************************************************************

From: Robert Dewar
Date: Sunday, May 2, 2010  9:08 AM

> Static expressions are always evaluated without overflow.

Ah yes, of course, what is peculiar about the case we are talking about

     (with X = 23 + 45 do X*X)

is that there is no immediate type for X to be converted to, so I guess we just
assume this is a named number?

****************************************************************

From: Edmond Schonberg
Date: Sunday, May 2, 2010  9:31 AM

Yes, the expression is equivalent to an attribute that returns
Universal_Integer, so the context of the call determines some eventual type for
it, and within the body of this "expression function" X is a named number.

****************************************************************

From: Bob Duff
Date: Sunday, May 2, 2010  9:33 AM

> is that there is no immediate type for X to be converted to, so I
> guess we just assume this is a named number?

Yeah, that probably works.  If not, I wouldn't mind outlawing such cases -- I
mean, if they cause some semantic or implementation difficulties.

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  8:02 PM

> Do we want to allow a list
>
>       (with X = A*A, Y = B*B do X*Y + X + Y)

Yes.

> or require nesting
>
>       (with X = A*A do (with Y = B*B do X*Y + X + Y))

No.  The whole point is a "shorthand", so requiring longer shorthands is not a
good idea.

> if we leave out the type, what's the resolution rule for the constant,
> I guess it's type must be self-evident without needing any context?
> Like the expression in a conversion.

Yes.

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  8:14 PM

> one of the following
>
>      (A * A + (A - 1) with A : constant Integer := 23)

Among the various syntax suggestions, I'd like to squelch this one -- the decl
of A really needs to come first!

But I very much like the idea of a lightweight syntax for declaring objects
within expressions.

****************************************************************

From: Robert Dewar
Date: Sunday, May 2, 2010  6:23 AM

> Among the various syntax suggestions, I'd like to squelch this one --
> the decl of A really needs to come first!

Well, most functional languages have both forms. From a conceptual point of
view, it is reasonable to refine the meaning with a post-definition, but I
agree that this is very un-Ada, since Ada has this annoying rule that things
have to be defined before they are used. In that respect I much prefer
Algol-68, or PL/1 or COBOL for that matter.

In particular it is really annoying having to declare refinement procedures
before they are used.

But I wander off-topic :-)

Indeed for Ada it would be wrong to use the "where" form, and we should require
A to be declared before its use, even if that is a bit hostile to abstraction in
some cases.

****************************************************************

From: Bob Duff
Date: Saturday, May 1, 2010  8:40 PM

> Well, here we differ.

Apparently.  But not much, I think.  For 3.5 years, I watched you design Ada 9X,
and I noticed many times where Distinguished Reviewers complained "but that
could be abused" and you took my side (i.e., "yeah it could be, but so what?
It's a useful feature, so it should be in the language, and by the way don't
abuse it").

In a few cases (such as this one), you've inexplicably taken the opposite view.

>...I believe that part of
> good language design is "encouraging" the programmer  to make wise
>choices, by making some things easier  than others.

OK, but the case in question is not "making some things easier or harder", but
simply forbidding some things.

>...Just giving programmers free
> and uniform license to do whatever they want is a design  philosophy,
>but it isn't mine.  Of course there  is a spectrum here, but I think
>you (Bob) and I  are on somewhat different places on this spectrum.

I don't see any spectrum.  I think that in every case, "can be abused" is
irrelevant.  But as I said earlier, "can be accidentally misused" is an
important point.

I am taking "can be abused" to mean "...by malicious or incompetent
programmers".  Deliberately.

> So for me, if a feature has more "potential for abuse"
> than some other feature, I will favor the one with less such
> potential, and not surprisingly that will sometimes "crimp" someone's
> style.  If it doesn't do that now and then, it is probably too lax in
> my book.

****************************************************************

From: Robert Dewar
Date: Sunday, May 2, 2010  6:30 AM

> In a few cases (such as this one), you've inexplicably taken the
> opposite view.

to me it is not at all inexplicable, as Tuck said, it is a balance. A proposed
new feature has to be evaluated for

a) the positive value it adds to the language

b) the negative value it subtracts from the language.

Every new feature has an entry in b) for enlarging the language and increasing
complexity, so for starters the value must exceed that factor (it is that
comparison that led many features to be omitted from Ada 9X). Failure to do
this balancing leads to the kitchen sink effect as in PL/1. That's the "well
you don't have to use it if you don't like it" argument.

But also in the b) column can come potential for abuse, and this has to be
measured against the value added. That's the "but this feature could be
seriously abused" argument.

So to me the inexplicable position is to refuse to do this balancing, and take
an absolute point of view that you will never count potential for abuse as a
negative.

> I don't see any spectrum.  I think that in every case, "can be abused"
> is irrelevant.  But as I said earlier, "can be accidentally misused"
> is an important point.

Right, and I find this absolute position inexplicable :-)

For example, suppose we propose the following

It is really annoying that when you write A := B, the type of B has to match. We
propose an extension to the language that says that if the types do not match
according to the current rule, then if the type of A is unambiguous, there is an
implicit unchecked conversion of the right side to B, and the statement is
considered legal.

Well that's useful in some cases perhaps, if we can find even one case where it
is useful, then Bob with his absolute position would ignore the immense
potential for abuse.

> I am taking "can be abused" to mean "...by malicious or incompetent
> programmers".  Deliberately.

Well all programmers are incompetent to some degree. Almost no one knows all of
Ada reliably.

****************************************************************

From: Bob Duff
Date: Sunday, May 2, 2010  9:05 AM

> > I don't see any spectrum.  I think that in every case, "can be
> > abused" is irrelevant.  But as I said earlier, "can be accidentally
> > misused" is an important point.
>
> Right, and I find this absolute position inexplicable :-)

I did explain:

    - It doesn't work (doesn't prevent misuse).

    - It prevents useful things.  (It must, because if whatever
      feature is useless, then that's enough to keep it out
      of the language -- no need to worry about abuse.)

If you can give me just one example, I'll change my opinion from "every case" to
"most cases".

> For example, suppose we propose the following
>
> It is really annoying that when you write A := B, the type of B has to
> match. We propose an extension to the language that says that if the
> types do not match according to the current rule, then if the type of
> A is unambiguous, there is an implicit unchecked conversion of the
> right side to B, and the statement is considered legal.
>
> Well that's useful in some cases perhaps, if we can find even one case
> where it is useful, then Bob with his absolute position would ignore
> the immense potential for abuse.

But I can easily argue against this malfeature without using the concept of
deliberate abuse:

    - This malfeature will clearly cause accidental
      mistakes (assigning things to the wrong type).
      You and I (who are surely competent) would trip
      over this every day!

    - Readability: We would lose the useful property that
      the powerful (and therefore dangerous) tool called
      Unchecked_Conversion is explicit in the code,
      and looks different from other features.

> > I am taking "can be abused" to mean "...by malicious or incompetent
> > programmers".  Deliberately.
>
> Well all programmers are incompetent to some degree. Almost no one
> knows all of Ada reliably.

I don't think you need to understand every corner of Ada to be "competent".  In
the case of bodies-in-specs, "incompetent" means "don't understand the point of
separating visible parts from implementations". That sort of person will make
poorly-designed packages no matter what we language designers do.

I am genuinely interested in hearing an example where a restriction is a good
idea, and that's based ONLY on "potential for deliberate abuse".  The above one
doesn't work.

Yesterday, you gave the example of untyped pointers, which I didn't respond to.
My response is, we have that feature, it's called System.Address (along with
Address_To_Access_Conversions). It has a huge potential for abuse, and we've
both seen it abused, but we NEED some sort of feature along the lines of
System.Address.  It would make no sense to eliminate it, unless we can come up
with a safer feature that still gets those low-level jobs done.
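
A minimal sketch of that facility (both units are standard library packages;
the procedure name and local variable are illustrative only):

     with System;
     with System.Address_To_Access_Conversions;
     procedure Peek_Demo is
        package Int_Conv is
          new System.Address_To_Access_Conversions (Integer);
        X : aliased Integer := 42;
        --  View the object's raw address as a typed pointer again:
        P : constant Int_Conv.Object_Pointer :=
          Int_Conv.To_Pointer (X'Address);
     begin
        P.all := P.all + 1;  --  writes through the "untyped" address
     end Peek_Demo;

Nothing stops the Address from coming from anywhere at all, which is the
potential for abuse under discussion.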

Care to try another example?

****************************************************************

From: Bob Duff
Date: Monday, May 3, 2010  12:05 PM

I'm not sure if I already made this clear, but for the record:

I had been opposed to "parameterized expressions"
(whatever they're called) being standardized.

But if we add "let" expressions (perhaps called "with" expressions?), then I
turn 180 degrees, and I am in favor of having both parameterized expressions and
"let" expressions.  "Let" expressions seem quite simple and quite useful in
their own right, and alleviate most of my concerns about parameterized
expressions (which are also quite simple -- I never said otherwise).

****************************************************************

From: Robert Dewar
Date: Monday, May 3, 2010  12:20 PM

I fully agree with this position, can we agree on this tentative syntax as the
best choice *if* we put them in

    WITH_EXPRESSION ::= (with DECL {, DECL} in EXPRESSION)

     DECL ::=  IDENTIFIER = EXPRESSION

with parens being omittable as for conditional expressions?

I think calling these WITH-expressions is a reasonable choice.

Of course the usual semantic perusal by the usual gang of suspects will be
required :-)

****************************************************************

From: Tucker Taft
Date: Monday, May 3, 2010  12:27 PM

I don't think "in" works syntactically, but perhaps "do".

****************************************************************

From: Edmond Schonberg
Date: Monday, May 3, 2010  12:28 PM

Definitely in favor of these.  Can we also say that they can appear in the
private part and work as renames_as_body for contracts?

****************************************************************

From: Tucker Taft
Date: Monday, May 3, 2010  12:36 PM

I agree that a parameterized expression should be allowed in a private part as a
completion of a function spec in the visible part. Otherwise, they don't have
much value.

****************************************************************

From: Bob Duff
Date: Monday, May 3, 2010  12:53 PM

> Definitely in favor of these.  Can we also say that they can appear in
> the private part and work as renames_as_body for contracts?

Not sure what "they" refers to.

Parameterized expressions should be allowed anywhere a declaration is allowed --
including in private parts.  They are like renamings, in that they can serve as
the spec or the body.  (One proposed syntax used "renames", but Tuck proposed a
different syntax.) This feature has been variously called "parameterized
expressions", "renamings of expressions" and "expression functions".

with_expressions should be allowed wherever an expression is allowed, which
includes "in a parameterized expression that is in a private part".
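
Using the proposal's notation, the combination might look like this (a sketch
only -- both the expression-function form and the with-expression form are
proposed syntax at this point, not standard Ada):

     package P is
        type T is private;
        function Norm (V : T) return Float;
        procedure Push (V : in out T; X : Float)
          with Pre => Norm (V) <= 1.0;   --  visible contract
     private
        type T is record
           A, B : Float := 0.0;
        end record;
        --  Completion in the private part: a parameterized expression
        --  whose body is a with-expression.
        function Norm (V : T) return Float is
          (with S = V.A**2 + V.B**2 do S * S + S);
     end P;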

****************************************************************

From: Gary Dismukes
Date: Monday, May 3, 2010  12:55 PM

> Definitely in favor of these.  Can we also say that they can appear in
> the private part and work as renames_as_body for contracts?

Presumably "with_expressions" are just another flavor of expression and can
occur wherever expressions can occur, so in particular can be used in expression
functions.  As for expression functions, it seems reasonable that those can be
used as renaming-as-bodies.

> > I fully agree with this position, can we agree on this tentative
> > syntax as the best choice *if* we put them in
> >
> >   WITH_EXPRESSION ::= (with DECL {, DECL} in EXPRESSION)
> >
> >    DECL ::=  IDENTIFIER = EXPRESSION

Do we really want to use "=" for these rather than ":="?
The assignment operator seems much more sensible to me.

****************************************************************

From: Bob Duff
Date: Monday, May 3, 2010  12:57 PM

I don't think "in" works, because:

    A : Integer := 123;

    X := (Y = A in B in C);

Does that mean Y is of type Integer, value 123, and the expression is "B in
C"?  Or is Y of type Boolean, value "A in B", and the expression is "C"?

We don't want the type of X to help resolve this ambiguity!

Somebody suggested "do", which I can live with.

****************************************************************

From: Steve Baird
Date: Monday, May 3, 2010  1:18 PM

> Do we really want to use "=" for these rather than ":="?
> The assignment operator seems much more sensible to me.

I agree - I don't like this use of "=".

I would actually prefer "=>" because we are (in some sense) specifying a
parameter value, but I'm afraid that would end up looking too much like an
aggregate.

****************************************************************

From: Tucker Taft
Date: Monday, May 3, 2010  1:25 PM

> Do we really want to use "=" for these rather than ":="?
> The assignment operator seems much more sensible to me.

The idea of "=" was that this was more of an assertion that X = <expr> in the
expression.  Using ":=" makes these look very much like assignment statements,
which they definitely are not.  They are closer to renamings.

****************************************************************

From: Tucker Taft
Date: Monday, May 3, 2010  1:29 PM

I prefer "=>" or "=" to ":=".
":=" just makes these look too much like assignment statements.

It seems OK to have them look a bit like aggregates.

****************************************************************

From: Bob Duff
Date: Monday, May 3, 2010  2:13 PM

> I prefer "=>" or "=" to ":=".
> ":=" just makes these look too much like assignment statements.

I agree with Tucker that we don't want these to look like assignment
statements.  (But then, I already dislike the fact that initialization uses
the same symbol as assignment statements.)

> It seems OK to have them look a bit like aggregates.

I suppose so.

"=" rubs me the wrong way, because this really is an assignment (i.e. an
initialization), not an equality test, and I am annoyed by languages that misuse
maths notations well-established for hundreds of years.

I guess "=>" is best.

Other ideas?  How about "is"?  "with X is Y*Y do X + X"

****************************************************************

From: Edmond Schonberg
Date: Monday, May 3, 2010  2:20 PM

"=>" is already used to indicate a binding of a name to a value,  it seems
natural to use it here as well.

****************************************************************

From: Gary Dismukes
Date: Monday, May 3, 2010  2:22 PM

> The idea of "=" was that this was more of an assertion that X = <expr>
> in the expression.  Using ":=" makes these look very much like
> assignment statements, which they definitely are not.  They are closer
> to renamings.

It depends on how these are bound.  In the earlier discussion they were being
discussed as being like constants, not renamings.

In fact, you wrote:

> You could perhaps use "with...do" instead of "let ... in"
>
>    (with A = <expression>, B = <expression>, ... do ...)
>
> If these are all constants, I don't see the need for
> ": constant <type> :=" when "=" will do.

(though I would have said: when ":=" will do.:-)

Normally there won't be a semantic difference whether they're
constants or renamings, but there would be if there's a possibility of
intervening side effects.  And I assume that is possible in general, since these
are just a kind of expression.

If we want them to be a shorthand for the "expression function" thingies, OK,
then renaming semantics makes sense.

In any case, I'd prefer "=>" over "=", and sounds like you're OK with that.

****************************************************************

From: Gary Dismukes
Date: Monday, May 3, 2010  2:42 PM

> I agree with Tucker that we don't want these to look like assignment
> statements.  (But then, I already dislike the fact that initialization
> uses the same symbol as assignment statements.)

Fair enough.  I just thought it was consistent with how initialized declarations
look (whether or not you like those:-).

> > It seems OK to have them look a bit like aggregates.
>
> I suppose so.

It's OK, but not great.  It's kind of annoying to have something initially read
a lot like something else that's quite different, but I suppose I can live with
it.  (Almost looks like an extension aggregate, but missing the ancestor
part...)

> "=" rubs me the wrong way, because this really is an assignment (i.e.
> an initialization), not an equality test, and I am annoyed by
> languages that misuse maths notations well-established for hundreds of
> years.

So you agree that this is an assignment binding, not like a renaming.
That's indeed what I thought you originally intended when you suggested having a
"let".  Anyway, I agree that using "=" would be annoying and confusing.

> I guess "=>" is best.
>
> Other ideas?  How about "is"?  "with X is Y*Y do X + X"

Probably "=>" is the way to go.  "is" might be OK, but I think it reads kind of
funny, especially for cases where you have more than one of them.

****************************************************************

From: Bob Duff
Date: Monday, May 3, 2010  2:51 PM

> So you agree that this is an assignment binding, not like a renaming.

Yes, definitely.  It's a shorthand for a constant decl.
Treating it as a renaming (of what?  possibly a variable?) makes no sense to me.

But I suppose I should retract my comment about misuse of maths notations --
"let X = 2 + 2" is a perfectly good maths notation.  So I don't know. And don't
care too much -- I like the feature a lot, and could live with most of the
syntax suggestions.

****************************************************************

From: Robert Dewar
Date: Monday, May 3, 2010  6:36 PM

> I don't think "in" works syntactically, but perhaps "do".

Right, sorry, I meant to write "do"

Some minor points:

The names in the declarations are local to the expression and following
declarations, I assume.

the expression itself gets full overloading treatment I assume (that's actually
the hardest thing to implement, because otherwise you can turn the expression
trivially into a function, to deal with the local scoping etc, but you can't do
that because of the overloading, which can't be done until you have all the
declarations resolved) -- a bit tricky, but nothing insuperable.

****************************************************************

From: Bob Duff
Date: Monday, May 3, 2010  7:19 PM

> the names in the declarations are local to the expression and
> following declarations I assume.

Yes, I agree.

> the expression itself gets full overloading treatment I assume (that's
> actually the hardest thing to implement, because otherwise you can
> turn the expression trivially into a function, to deal with the local
> scoping etc, but you can't do that because of the overloading, which
> can't be done until you have all the declarations resolved, a bit
> tricky, but nothing insuperable.

I'm not sure what you mean by "full overloading treatment".

****************************************************************

From: Robert Dewar
Date: Monday, May 3, 2010  8:18 PM

> I'm not sure what you mean by "full overloading treatment".

I mean that you can say

     f (g (with a => 2, m (a)));

and you have to consider various f's, g's and m's in figuring this out.  The
expression is not (unlike the right side of the declarations) an isolated
context.

****************************************************************

From: Robert Dewar
Date: Monday, May 3, 2010  6:40 PM

> I would actually prefer "=>" because we are (in some sense) specifying
> a parameter value, but I'm afraid that would end up looking too much
> like an aggregate.

I really don't like := for a binding, but perhaps that's my Algol-68 background.
Yes, I can buy "constant" together with := meaning binding, but I don't like

    a := 3

meaning initialization rather than assignment

I would go for the =>

and not worry about the aggregate issue (it will be very rare to have a string
of more than one of these declarations anyway, and the WITH keyword is close
by).
****************************************************************

From: Robert Dewar
Date: Monday, May 3, 2010   6:45 PM

> It's OK, but not great.  It's kind of annoying to have something
> initially read a lot like something else that's quite different, but I
> suppose I can live with it.  (Almost looks like an extension
> aggregate, but missing the ancestor part...)

But it's got (with

which is an unambiguous clue!

>> "=" rubs me the wrong way, because this really is an assignment (i.e.
>> an initialization), not an equality test, and I am annoyed by
>> languages that misuse maths notations well-established for hundreds
>> of years.

no no no!

In Algol-68

    int a = 3

is a BINDING, not an initialization, and the use of the equal sign is really
appropriate for a binding if you ask me. So I like that syntax,

but it's not Ada, so let's stick with =>

This has the advantage of not getting confused with normal declaration syntax.

...
>> Other ideas?  How about "is"?  "with X is Y*Y do X + X"
>
> Probably "=> is the way to go.  "is" might be OK, but I think it reads
> kind of funny, especially for cases where you have more than one of them.

I agree

****************************************************************

From: Bob Duff
Date: Monday, May 3, 2010  7:13 PM

> is a BINDING, not an initialization, and the use of the equal sign is
> really appropriate for a binding if you ask me.

ok ok ok, I already retracted my rant about "maths" above.

>... So
> I like that syntax,

Me, too (now).

> but it's not Ada, so let's stick with =>

OK with me.

> This has the advantage of not getting confused with normal declaration
> syntax.

I don't get that.  It's exactly equivalent to a normal constant decl.
What confusion are you talking about?

****************************************************************

From: Robert Dewar
Date: Monday, May 3, 2010  8:17 PM

>> is a BINDING, not an initialization, and the use of the equal sign is
>> really appropriate for a binding if you ask me.

You know that in Algol-68 this binding notion is universal, so a variable is
declared using

    ref int a = loc int [:= 3];

Here the identifier a is a reference to an integer, and it is bound to the
result of calling a local allocator that allocates space on the local stack,
then you can optionally initialize as shown. As you can imagine, compilers are
likely to optimize this particular case of binding, but a renaming is not a
special case, after the above declaration you can say

    ref int b = a;

and now b and a are bound to the same location

Unfortunately Algol-68 was too scared of this notation, and introduced the
shorthand

    int a;

which is if you ask me an abomination (sort of like the decision to introduce
anonymous arrays in Ada to make the Fortran folks happy).

Worse still

    int a := 3;

is a shorthand for

    ref int a = loc int := 3;

and is only one character different from

    int a = 3;

I really like that in Algol-68, there is LESS syntax for constants, it is
annoying that it is the other way round in Ada.

But I digress, please enter sufficient ^H characters to delete this irrelevant
rambling.

I think we have a consensus on the use of =>

so now we have

   (with DECL {, DECL} do EXPRESSION)

where DECL looks like

    IDENTIFIER => EXPRESSION
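[Editor's sketch: for concreteness, the agreed surface form in a contract might read like this. The with-expression syntax is the proposal under discussion, not implemented Ada, and the function and names are hypothetical.]

```ada
--  Sketch: the agreed "(with DECL {, DECL} do EXPRESSION)" form
--  used to abstract part of a precondition:
function Slice_Sum (Buf : Vector; Lo, Hi : Natural) return Integer
  with Pre => (with Len => Buf'Length
               do Lo <= Hi and then Hi <= Len);
```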

****************************************************************

From: Bob Duff
Date: Monday, May 3, 2010  8:50 PM

> But I digress, please enter sufficient ^H characters to delete this
> irrelevant rambling.

I agree with much of the above digression, both the elegance of Algol 68, and
the inelegance of the shorthands.

The main thing I don't like is that pointers (refs) are the same as variables.

And please ^H-ify this to one's heart's content.

> I think we have a consensus on the use of =>
>
> so now we have
>
>    (with DECL {, DECL} do EXPRESSION)
>
> where DECL looks like
>
>     IDENTIFIER => EXPRESSION

Yes, "looks like", but in the BNF in the RM, DECL will need to be something
like:

    with_declaration ::= defining_identifier => expression

****************************************************************

From: Robert Dewar
Date: Monday, May 3, 2010  9:02 PM

> Yes, "looks like", but in the BNF in the RM, DECL will need to be
> something
> like:
>
>     with_declaration ::= defining_identifier => expression

yes sure, I was just indicating the surface syntax

I would call it with_expression_declaration, since with_declaration seems too
confusable with with clauses.

****************************************************************

From: Randy Brukardt
Date: Monday, May 3, 2010  9:46 PM

...
> I think we have a consensus on the use of =>
>
> so now we have
>
>    (with DECL {, DECL} do EXPRESSION)
>
> where DECL looks like
>
>     IDENTIFIER => EXPRESSION

This seems OK, but let me step back a bit and make a few reasonably general
comments.

First, can someone make a reasonably concise statement of the problem this is
intended to solve? The only thing I've seen from this thread is that it seems to
inexplicably make Bob more comfortable with expression functions. While that's
probably a useful goal, I don't see how that's going to help us sell this to the
Ada community at large. Some people (not on the ARG!) already have objected to
conditional expressions, and this of course is much more strongly in the same
direction.

After all, this is a fairly complex feature to define and implement (we'll need
to make sure that we don't make the same mistake we made with extended returns,
for instance, so that the newly defined identifier is actually visible after the
"do"). We will need a decent justification for it.

I admit that I'm a bit dubious about the need for constants. I've been racking
my brain and can't remember a case where I wanted a constant to use in a single
expression (variables are different, see below). I realize that conditional
expressions (both types) could potentially make expressions larger, but I've
been hoping that the provision of expression functions would tend to limit the
size of expressions -- if they get too large, it is better to create an
abstraction to represent parts and break them up.

I do strongly prefer the "=>" notation, as we definitely do not want these to
look like declarations. Otherwise, we'd have endless arguments about why
variables aren't allowed (or whether they should be allowed). The only case I
can think of where I've used local objects for a single expression is to discard
unnecessary in out and out parameters. With the addition of in out parameters
to functions, we now will have this problem in function calls as well. This is
especially common when interfacing to existing APIs.

I believe also that the intent is that this entire construct is an expression.
That means that it can't be used in assignment contexts (lvalues in C). That's
probably a good thing; recall however that it can be turned into a "name" by
qualifying it, so it will be possible to rename and use it in other contexts
that require a (constant) name.

****************************************************************

From: Randy Brukardt
Date: Monday, May 3, 2010  10:22 PM

One more point: we need to clearly decide whether DECL is a renaming or whether
it declares a new constant object. The semantics can be different, and it can
matter (for instance, to the "known to be the same object" rules, and whether
Adjust is called for controlled components).

We seem to have had votes for both possibilities in the previous discussion.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  6:05 AM

> First, can someone make a reasonably concise statement of the problem
> this is intended to solve? The only thing I've seen from this thread
> is that it seems inexplicably make Bob more comfortable with expression functions.

Not at all inexplicable! If you insist that the "body" of such an expression
function be a single expression, or that you want to be able to write powerful
expressions in preconditions etc., you need this very standard capability. Randy,
do you have experience with the use of LET expressions? (We are not inventing
anything new here; all functional languages have this capability.)

Generally, we say we don't need this because we can write separate declarations
(it's really Bob's preference to do that anyway by allowing bodies in specs),
but if we don't allow separate declarations, this is a very necessary
abstraction facility.

It is often also a much more convenient form for local abstraction.
Consider something like

      declare
         A : Float := Factor * Adjustment;
      begin
         return A * A + A;
      end;

This reads fine and is much shorter:

      return (with A = Factor * Adjustment do A * A + A);

> While that's probably a useful goal, I don't see how that's going to
> help us sell this to the Ada community at large.

I think no changes we make to the language are specifically much help in selling
to the Ada community, other than to show a general activity in keeping Ada
current and showing continued interest. After all our (our = AdaCore for this
sentence) major competitor, Greenhills, has decided it is not even worth
investing in Ada 2005, let alone Ada 2012.

On the other hand, for certain specific customers, pre/post conditions with the
ability to write powerful expressions are a real plus. We put in pre/post
conditions not because of anything the ARG was doing, but because a critical
customer needed this capability (we will keep these pragmas around for ever, and
likely they will in practice be more heavily used than the eventual aspect
syntax, which of course we will eventually support).

> Some people (not on the ARG!)
> already have objected to conditional expressions, and this of course
> is much more strongly in the same direction.

Ah, the infamous "some people". Who are these people? Do they know anything? And
what specifically is their objection to conditional expressions?  (Which are, by
the way, a VERY nice improvement -- we have redone our run-time library with
judicious but fairly extensive use of conditional expressions, and it is a big
improvement.)

> After all, this is a fairly complex feature to define and implement
> (we'll need to make sure that we don't make the same mistake we made
> with extended returns, for instance, so that the newly defined
> identifier is actually visible after the "do"). We will need a decent justification for it.

Not so hard to implement, maybe a day or two's work for us, and as for other
compilers, we don't even see them implementing critical 2005 features yet, so I
can't get too excited about difficulty-of-implementation arguments.

As for difficulty-of-definition, I don't see a problem, it is certainly nothing
like the extended return (which was a HUGE pain in the neck, and for me a case
where the implementation and definition efforts were unreasonably large. For me
the second most glaring example of a mismatch between effort and value -- the
first being leap seconds, about which don't get me started!)

> I admit that I'm a bit dubious about the need for constants. I've been
> racking my brain and can't remember a case where I wanted a constant
> to use in a single expression (variables are different, see below). I
> realize that conditional expressions (both types) could potentially
> make expressions larger, but I've been hoping that the provision of
> expression functions would tend to limit the size of expressions -- if
> they get too large, it is better to create an abstraction to represent parts and break them up.

That's strange to me; I guess it's just a frame of mind. Have you ever done
significant programming in a functional language?

> I do strongly prefer the "=>" notation, as we definitely do not want
> these to look like declarations. Otherwise, we'd have endless
> arguments about why variables aren't allowed (or whether they should
> be allowed). The only cases I can think of where I've used local
> objects for a single expression is to discard unnecessary in out and out parameters.

Why would you "discard" unnecessary parameters, why not just ignore them,
confused ....

> With the additional of in out
> parameters to functions, we now will have this problem in function
> calls as well. This is especially common when interfacing to existing APIs.
>
> I believe also that the intent is that this entire construct is an
> expression. That means that it can't be used in assignment contexts
> (lvalues in C). That's probably a good thing; recall however that it
> can be turned into a "name" by qualifying it, so it will be possible
> to rename and use it in other contexts that require a (constant) name.

Right, the entire construct is an expression, as in C, and unlike Algol-68.
That's a limitation, we don't allow something like

    (if X then A else B) := large complex expression;

which is sometimes useful (as in Algol-68 use), but it's not Ada so that's that!

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  6:06 AM

> One more point: we need to clearly decide whether DECL is a renaming
> or whether it declares a new constant object. The semantics can be
> different, and it can matter (for instance, to the "known to be the
> same object" rules, and whether Adjust is called for controlled components).

I would be in favor of it being a new constant object (really a value, but I
know Ada likes to call values objects :-( ) My limited knowledge of Ada finds
the idea of renaming an expression plain odd.

> We seem to have had votes for both possibilities in the previous discussion.

Can someone post a clear example where it makes a semantic difference.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010   6:12 AM

By the way, the reason I like to think of the "declarations" as constants rather
than objects, is that for me, the following are equivalent (using equivalent in
the Ada technical sense of very similar, but not quite the same :-))


    (with
       A = 3 * Q,
       B = C * A
     in
       B * B + B)

(I think I like this indenting style, by the way.)


    function T return Integer is
      A : constant Integer := 3 * Q;
      B : constant Integer := C * A;
    begin
      return B * B + B;
    end T;

then you replace the expression with a call to T

It can't be implemented that way unfortunately (otherwise the implementation
would be really trivial), since you don't know the types involved.

The implementation I have in mind, is simply to insert the declarations:

     A : constant Integer := 3 * Q;
     B : constant Integer := C * A;

before the expression in question (we have a quite general insertion mechanism
that can do this), then just replace the with expression with its constituent
expression.

The difficulty is that the names A and B have to be marked to make them unique,
since they are not accessible other than in the target expression.

That's not a big deal, except to make sure that the debugger can handle them OK.
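[Editor's sketch of the insertion-based expansion just described, at the source level; the uniquified names are illustrative, and the with-expression itself is proposed syntax, not implemented Ada.]

```ada
--  Original (proposed syntax):
--     X := (with A => 3 * Q, B => C * A do B * B + B);
--  The compiler inserts the constant declarations ahead of the
--  expression, with names marked unique so they are not visible
--  anywhere else:
A_U1 : constant Integer := 3 * Q;
B_U1 : constant Integer := C * A_U1;
--  ... and then replaces the with expression by its constituent:
--     X := B_U1 * B_U1 + B_U1;
```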

****************************************************************

From: Tucker Taft
Date: Tuesday, May 4, 2010  7:56 AM

I prefer to think of it as a renaming of the result of a function call, rather
than making a copy.  Equivalently, it could be thought of as the same kind of
binding that occurs with an IN parameter.  We really don't want copies made
here of composite objects, in my view, with the attendant adjusts and
finalizes.

Given the use of "=>", using the IN parameter model is perhaps the best.  On the
other hand, the advantage of renaming is that it inherits aliasedness,
accessibility, etc., whereas a parameter loses some of that information.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  8:02 AM

I would go with the renaming.  Now that I understand the issue, I think it
unlikely that the lack of inheritance is an issue for the kind of thing we are
likely to encounter, but it does seem useful to be able to use this as in

    (with
       R => Very_Important_Record_With_Long_Name
     do R.X + R.Y + R.Z)

without generating a copy.

Of course if you go to the full declaration syntax, you could simply allow
renames, and let the user decide.

****************************************************************

From: Bob Duff
Date: Tuesday, May 4, 2010  8:21 AM

> I prefer to think of it as a renaming
> of the result of a function call,
> rather than making a copy.
> Equivalently, it could be thought of
> as the same kind of binding that occurs with an IN parameter.  We
> really don't want copies made here of composite objects, in my view,
> with the attendant adjusts and finalizes.

When you say "my view", I think you owe us an explanation of why.
An argument with concrete reasons is always more compelling than an argument
from authority, even when I have great respect for the authority -- Tucker Taft
is indeed a world-renowned authority with (mostly) good taste in language
design!  ;-)

Maybe you're worried about the inefficiency of adjusts, maybe it's something
else, but I can't guess.

> Given the use of "=>", using the IN parameter model is perhaps the
> best.  On the other hand, the advantage of renaming is that it
> inherits aliasedness, accessibility, etc., whereas a parameter loses
> some of that information.

I don't much like the renaming idea, because it introduces an alias, which is
error prone.  The IN param idea seems even worse, because then it's
implementation dependent whether or not it introduces an alias.  To me, this
feature looks like a shorthand for declaring a constant inside an expression.
This would imply that it's illegal for limited types, unless the initial value
is a function call or aggregate (b-i-p).

Note that in the last year or so, GNAT added a warning about renamings of
function calls that could better be expressed by declaring a constant, because
some users found it confusing -- they got mixed up about the syntax, and thought
they were renaming the function, when they were renaming the result of calling
it.

Note that we already have permissions to avoid redundant adjust/finalize, and
compilers can also avoid copies of non-controlled things in many cases.  So I'm
not concerned about efficiency.

****************************************************************

From: Tucker Taft
Date: Tuesday, May 4, 2010  8:36 AM

> When you say "my view", I think you owe us an explanation of why.

I see this as a lightweight feature, and having hidden calls on finalize and
adjust adds weight and complexity.  The semantics of renaming seem just right.
Any aliases created are totally visible, so there is no difficulty from the
compiler's point of view, and if the user chooses to use both the original and
the renaming in this one expression, I think they definitely fall into your
bucket of actively trying to make a mess of things... ;-).

Note that the only time an alias would be created would be if the right-hand
side of the binding were a pre-existing object, and in that case, it is quite
possibly a shorthand you are seeking rather than a full copy.

As far as renaming of the results of functions, it has been in the language for
15 years, and I find the new GNAT warning somewhat annoying, though I presume
they didn't put it in to help (or annoy ;-) folks like me.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  9:09 AM

> As far as renaming of the results of functions, it has been in the
> language for 15 years, and I find the new GNAT warning somewhat
> annoying, though I presume they didn't put it in to help (or annoy ;-)
> folks like me.

It's peculiar to do a renaming of a function that returns a scalar result in my
view, it is confusing to readers who are not Tucker Taft, and it achieves
nothing whatever.

You can always suppress the specific warning if you don't like it.

****************************************************************

From: Bob Duff
Date: Tuesday, May 4, 2010  9:21 AM

> I see this as a light weight feature,
> and having hidden calls on finalize and adjust add weight and
> complexity.  The semantics of renaming seem just right.  Any aliases
> created are totally visible, so there is no difficulty from the
> compiler's point of view, and if the user chooses to use both the
> original and the renaming in this one expression, I think they
> definitely fall into your bucket of actively trying to make a mess of
> things... ;-).

My concern is about accidental misuse and readability, not deliberate abuse.

But I must admit that I'm being inconsistent in worrying about this, while
advocating 'in out' params on functions.

> Note that the only time an alias would be created would be if the
> right hand side of the binding were a pre-existing object, and in that
> case, it is quite possibly a short-hand you are seeking rather than a
> full copy
>
> As far as renaming of the results of functions, it has been in the
> language for 15 years,

Age is not an argument for or against a feature.

I tend to use constants rather than renamings when possible, and trust the
compiler to remove copies when possible, because constants are a weaker, less
powerful, easier-to-understand feature.  I use renamings when I want the extra
power.

> and I find the new GNAT warning somewhat annoying,

I know.

> though I presume they didn't put it in to help (or annoy ;-) folks
> like me.

You presume correctly.  ;-)

****************************************************************

From: Tucker Taft
Date: Tuesday, May 4, 2010  9:21 AM

> It's peculiar to do a renaming of a function that returns a scalar
> result in my view, it is confusing to readers who are not Tucker Taft,
> and it achieves nothing whatever.

I generally only use it for composite/private types, and generally only in a
local declare block to hold a large intermediate in some calculation.

> You can always suppress the specific warning if you don't like it.

I think the warning doesn't distinguish elementary from composite.

****************************************************************

From: Tucker Taft
Date: Tuesday, May 4, 2010  9:32 AM

> I tend to use constants rather than renamings when possible, and trust
> the compiler to remove copies when possible...

I know too much about compilers to ever really trust them to do optimizations.
And when they actually change the semantics, such as removing redundant
finalize/adjusts, this is pushing what some compiler-writers feel comfortable
doing, and certainly isn't something that a "typical" language-independent
optimizer is going to do.

And since the alternative is often to write the somewhat complex function call
in the single place it is used, a "rename" preserves the same semantics, whereas
a copy can change things subtly and certainly can slow things down.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  9:39 AM

>> You can always suppress the specific warning if you don't like it.
>
> I think the warning doesn't distinguish elementary from composite.

It doesn't, but the customer cases which led to this warning were in fact
composite types. We found people having consistently wrong expectations in the
use of this "feature" (I actually think it is a bit of a misfeature), and not
one case of a reasonable use in our whole test suite (but several suspicious
ones). Style warnings are always tricky :-)

Note that you are turning this warning on if you are seeing it, either with
-gnatwa or with -gnatw.r explicitly. If you don't like it, you can turn it off
with

-gnatw.R

****************************************************************

From: Bob Duff
Date: Tuesday, May 4, 2010  9:53 AM

> ...And what specifically is their objection to conditional expressions

I'm curious about that, too.

The only case I remember is when they were discussed during Ada 9X, and John
Goodenough objected, with the reason "Ada is not an expression language".

> As for difficulty-of-definition, I don't see a problem, it is
> certainly nothing like the extended return (which was a HUGE pain in
> the neck, and for me a case where the implementation and definition
> efforts were unreasonably large.

Extended return statements are trivial to define and to implement -- they're
just syntactic sugar.

Build-in-place function calls were a HUGE pain.  Don't confuse these two
features; they're not the same thing.

> Why would you "discard" unnecessary parameters, why not just ignore
> them, confused ....

I think Randy means a procedure with an 'out' parameter, and he wants to throw
away the value returned in that parameter.  So he declares a very-local variable
called Ignore, say, and passes that.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  10:10 AM

> Extended return statements are trivial to define and to implement --
> they're just syntactic sugar.
>
> Build-in-place function calls were a HUGE pain.  Don't confuse these
> two features; they're not the same thing.

Indeed I referred to the second in my comment, I think of them as one feature
:-)

>> Why would you "discard" unnecessary parameters, why not just ignore
>> them, confused ....
>
> I think Randy means a procedure with an 'out' parameter, and he wants
> to throw away the value returned in that parameter.  So he declares a
> very-local variable called Ignore, say, and passes that.

Ah yes, that makes perfect sense

****************************************************************

From: Bob Duff
Date: Tuesday, May 4, 2010  10:26 AM

> > Extended return statements are trivial to define and to implement --
> > they're just syntactic sugar.
> >
> > Build-in-place function calls were a HUGE pain.  Don't confuse these
> > two features; they're not the same thing.
>
> Indeed I referred to the second in my comment, I think of them as one
> feature :-)

Everybody does, which is why I keep pointing out that they're not.  ;-)

They are related, in that there are some cases of build-in-place where you
really need to use the extended return syntax.  And they are related in that
they were invented at the same time.

But it's important to remember that this syntax does not trigger B-I-P. B-I-P is
triggered by limitedness, and it works even when the old-fashioned simple return
syntax is used.

In a non-B-I-P situation, extended return syntax is never necessary, but it is a
convenient shorthand.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  10:45 AM

> Everybody does, which is why I keep pointing out that they're not.
> ;-)

Right, but to me the extended return is a pretty minimal-use piece of syntactic
sugar. I wouldn't be in favor of adding it if it were not needed in some cases,
so I think it is reasonable to think of them as one feature!

>
> They are related, in that there are some cases of build-in-place where
> you really need to use the extended return syntax.  And they are
> related in that they were invented at the same time.
>
> But it's important to remember that this syntax does not trigger B-I-P.
> B-I-P is triggered by limitedness, and it works even when the
> old-fashioned simple return syntax is used.
>
> In a non-B-I-P situation, extended return syntax is never necessary,
> but it is a convenient shorthand.

I find it dubious ... I never use it myself.

Of course this is all a bit colored by the fact that I regard the change to
limited type return as a non-compatible disaster in the language design, which
has badly hindered our customers from transitioning to Ada 2005, and results in
continued horrible problems in mixing Ada 95 and Ada 2005. :-)

****************************************************************

From: Bob Duff
Date: Tuesday, May 4, 2010  10:03 AM

> Can someone post a clear example where it makes a semantic difference.

Here's an example, although I'm not sure it's "clear":

    Var : aliased Integer := 123;

    X : constant Integer :=
        (with My_Var => Var do
            (F (Out_Param => My_Var) -- (1)
             G (Access_Param => My_Var'Access) -- (2)
             H (Named_Access => My_Var'Access) -- (3)
            )
        );

With renaming semantics, all of the above are legal, and the run-time
accessibility check for (2) inside G uses the level of Var. (Well, (3) is legal
only if the type is at the same level as Var.)

With constant semantics, (1) is illegal because it's trying to pass a constant
as a 'out' parameter.  (2) is illegal if the parameter is access-to-variable,
and if access-to-constant, passes a deeper accessibility level than renaming
would. (3) is illegal because My_Var is deeper than the access type.

It would also make a difference if Var is atomic or volatile -- renaming
inherits these properties; a constant would not.

I guess this is somewhat convincing that "rename" vs. "constant"
won't matter in practice.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  10:23 AM

>     X : constant Integer :=
>         (with My_Var => Var do
>             (F (Out_Param => My_Var) -- (1)
>              G (Access_Param => My_Var'Access) -- (2)
>              H (Named_Access => My_Var'Access) -- (3)
>             )
>         );
>
> With renaming semantics, all of the above are legal, and the run-time
> accessibility check for (2) inside G uses the level of Var.
> (Well, (3) is legal only if the type is at the same level as Var.)
>
> With constant semantics, (1) is illegal because it's trying to pass a
> constant as a 'out' parameter.

It would be plain horrible for (1) to be legal, in my opinion. Very confusing;
these should not be lvalues. I think it is perhaps better to stick with plain
value copying if this sort of thing is going to become legal, UGH!

****************************************************************

From: Tucker Taft
Date: Tuesday, May 4, 2010  10:39 AM

I think "IN" parameter semantics gives you what you want.  No copying allowed
for by-reference types, including all controlled, task, protected, etc. Copying
allowed but not required for other composites (but no adjust/finalize possible,
since those are all by-reference types).  Copying required for elementary.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  10:46 AM

That's what I assumed was meant here, I was shocked by Bob's example :-)

The above sounds exactly right to me

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  10:49 AM

I am a bit concerned that the IN parameter semantics for the with-expression
declarations will considerably complicate the implementation, since then there
is no simple transcription to existing declarations, and I don't quite see how
to implement it :-(

We rely on the backend for much of the semantics of parameter passing, and it is
only triggered by parameters.

Tuck, do you have some implementation model in mind, in terms of current
semantics?

With copying, this is essentially syntactic sugar, as I illustrated in my
proposed implementation.

With renaming using IN parameter semantics, I don't see a simple implementation
model.

****************************************************************

From: Bob Duff
Date: Tuesday, May 4, 2010  11:00 AM

> That's what I assumed was meant here, I was shocked by Bob's example
> :-)
>
> The above sounds exactly right to me

There are three proposed semantics: renaming, constant, and IN param.
My previous e-mail was misleading, because it only considered the first two.  In
my example, with IN param semantics:

>     X : constant Integer :=
>         (with My_Var => Var do
>             (F (Out_Param => My_Var) -- (1)
>              G (Access_Param => My_Var'Access) -- (2)
>              H (Named_Access => My_Var'Access) -- (3)
>             )
>         );

all three lines would be illegal.  But if we change the type to a by-reference
type, (2) and (3) would be legal, but only if it's access-to-constant.

I suppose that's not so horrible after all.

****************************************************************

From: Tucker Taft
Date: Tuesday, May 4, 2010  11:14 AM

I would think for composite types, renaming is equivalent to "IN" parameter
dynamic semantics, while for elementary types, copying does what you want.

The constant-ness is a static-semantics-only concern, and shouldn't affect the
run-time model.

****************************************************************

From: Edmond Schonberg
Date: Tuesday, May 4, 2010  11:21 AM

Maybe we should avoid saying that there is some kind of similarity with "IN"
parameters here, because for actual parameter passing the determination of
whether to pass something by copy or by reference is shared between front-end
and back-end, and it would be pointless (or plain impossible) to try to
reproduce this in with-expressions. It should be enough to say that there is
copying for elementary types, renaming for by-reference types, and
implementation-defined behavior for the rest.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  11:25 AM

> I suppose that's not so horrible after all.

And as per discussion with Ed, we don't require equivalence with IN parameter
handling here. IN parameter semantics means

a) for by copy types, do a normal constant declaration
b) for by reference types, do a renaming
c) for all others, choose a) or b) as you please
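
[Editor's note: a possible transcription of a), b) into today's Ada, purely
illustrative -- the with-expression syntax is still hypothetical, and F, G,
Big_Controlled, X, and Y are invented names.]

```ada
--  Source (hypothetical syntax):
--     X := (with A => F (Y) do G (A));

--  (a) A's type is by-copy (elementary): an ordinary constant declaration.
declare
   A : constant Float := F (Y);
begin
   X := G (A);
end;

--  (b) A's type is by-reference (e.g. controlled): a renaming of the
--  function result object, which introduces no copy, adjust, or finalize.
declare
   A : Big_Controlled renames F (Y);
begin
   X := G (A);
end;
```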

****************************************************************

From: Bob Duff
Date: Tuesday, May 4, 2010  1:37 PM

> And as per discussion with Ed, we don't require equivalence with IN
> parameter handling here, IN parameter semantics means
>
> a) for by copy types, do a normal constant declaration
> b) for by reference types, do a renaming
> c) for all others, choose a) or b) as you please

I'm puzzled by Ed's comment, and your echo of it above.
Yes, the above is the semantics of IN params.
Tuck is suggesting that these new 'with' thingies have the same semantics.  How
are you and Ed suggesting that they differ from IN params?

Maybe "don't require" above is a typo for "do require"?

By the way, how far do we take the analogy with IN params?
Besides by-copy vs. by-ref as discussed above, there are other issues.  For
example:

    X : access T;
    Y : String := (with My_X => X do "Hello");

Is My_X an access parameter, with dynamic accessibility level?

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  2:03 PM

Saying something is EQUIVALENT to the handling of IN parameters to me implies
that an implementation must handle them equivalently, which we do not intend to
do.

****************************************************************

From: Edmond Schonberg
Date: Tuesday, May 4, 2010  2:31 PM

> Is My_X an access parameter, with dynamic accessibility level?

Robert was concerned that "equivalent" was meant to be taken literally, in which
case it was a mystery how to achieve this, given that many parameter-passing
decisions are made by the back-end in a target-specific way, taking alignment
into account, etc.  If "it's just like IN parameters" is meant metaphorically,
there is no problem.  We know what is really by reference and what is really by
copy, and the rest is up to the implementation.

Concerning My_X above, I would hope that there are no additional accessibility
issues that arise. (Do we need another example of the unholy nature of anonymous
access types?)

****************************************************************

From: Steve Baird
Date: Tuesday, May 4, 2010  2:56 PM

> Is My_X an access parameter, with dynamic accessibility level?

If we want to avoid having parameterized expressions involve implicit type
declarations, then it seems that the type of the parameter must be that of the
actual expression, even if that type is anonymous.

This looks like an area where getting the definition right may require some
care.

====

The nominal subtype of one of these parameters also needs to be similarly
defined; this can make a difference if the parameter governs a case expression.

====

Should these "actual parameter" expressions be subject to the same "is expected
to be of any type" name resolution rule as the operand of a type conversion? I
think that is needed if we want to go with a "determine the parameter types
before looking at the rest of the expression" model for name resolution, which
seems desirable.

====

Can a parameterized expression be static?

****************************************************************

From: Edmond Schonberg
Date: Tuesday, May 4, 2010  3:15 PM

>Should these "actual parameter" expressions be subject to the same
>"is expected to be of any type" name resolution rule
>as the operand of a type conversion? I think that is needed if
>we want to go with a "determine the parameter types before
>looking at the rest of the expression" model for name resolution,
>which seems desirable.

Each one of these [declarations? bindings?] must be a complete context, so the
resolution rule for each expression must be the same as that of the expression
of a conversion.

====

>Can a parameterized expression be static?

If all the constituents are static, I don't see why not.


****************************************************************

From: Bob Duff
Date: Tuesday, May 4, 2010  3:36 PM

...
> ====
>
> Can a parameterized expression be static?

Uncle!

****************************************************************

From: Gary Dismukes
Date: Tuesday, May 4, 2010  3:36 PM

> Should these "actual parameter" expressions be subject to the same "is
> expected to be of any type" name resolution rule as the operand of a
> type conversion? I think that is needed if we want to go with a
> "determine the parameter types before looking at the rest of the
> expression" model for name resolution, which seems desirable.

Yes, I think they should be handled like type conversion arguments.
Otherwise it seems that resolution will get too complicated.

> ====
>
> Can a parameterized expression be static?

I would think so.  Making them nonstatic seems overly restrictive.
Do you foresee any semantic problem?

****************************************************************

From: Steve Baird
Date: Tuesday, May 4, 2010  3:56 PM

>> Can a parameterized expression be static?
>
> I would think so.  Making them nonstatic seems overly restrictive.
> Do you foresee any semantic problem?
>

No, just more RM wording that needs to be written.

Getting the accessibility levels right for these guys is another such detail.

And deciding whether these expressions fall into the
   ok-for-build-in-place
category (like aggregates, function calls, and (I think) the two forms of
conditional expressions).

And deciding whether a parameter expression can include a name which denotes an
earlier parameter, as in
     (with X => ..., Y => X + 1 do ...).

And deciding whether any applicable index constraint which is imposed on one of
these guys is recursively imposed on the post-do expression. Same question for
the expected type. Same question for the "shall resolve to" rule for qualified
expressions.

Just details that need to be nailed down once we have agreed on the general
idea.

****************************************************************

From: Tucker Taft
Date: Tuesday, May 4, 2010  4:19 PM

I'd love to see the inside of your brain someday, Steve... ;-)

****************************************************************

From: Edmond Schonberg
Date: Tuesday, May 4, 2010  4:22 PM

> And deciding whether a parameter expression can include a name which
> denotes an earlier parameter, as in
>    (with X => ..., Y => X + 1 do ...)

Should work like other declarations: these are not formal parameters, they are
local bindings, elaborated sequentially. If such an expression appears in a
renaming, the visibility of these bindings should be the same as that of local
variables of a subprogram, right?

function Check (X : Integer := 15) renames (with X => 10 do X**3)

The inner X hides the outer one.  We can still write Check.X to retrieve the
outer one.

****************************************************************

From: Yannick Moy
Date: Tuesday, May 4, 2010  3:43 PM

> I fully agree with this position, can we agree on this tentative
> syntax as the best choice *if* we put them in
>
>    WITH_EXPRESSION ::= (with DECL {, DECL} in EXPRESSION)
>
>     DECL ::=  IDENTIFIER = EXPRESSION
>
> with parens being omittable as for conditional expressions?
>
> I think calling these WITH-expressions is a reasonable choice.

As various examples by Robert have shown, even after everybody agreed on "do"
instead of "in", "in" sticks as being more natural.

"do" has a more natural inclination to indicate operations rather than
expressions.

so what about parentheses around the DECLs above, as in:

   with (X => 3 * Y, Z => 2 + T) in X - Z

****************************************************************

From: Steve Baird
Date: Tuesday, May 4, 2010  4:05 PM

> "do" has a more natural inclination to indicate operations rather than
> expressions.


I agree, but "in" is already used in expression syntax whereas "do" is not.

Would you prefer "for"?

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  4:25 PM

> I agree, but "in" is already used in expression syntax whereas "do" is
> not.
>
> Would you prefer "for"?

"for" seems bizarre to me.
I can live with "do".

****************************************************************

From: Yannick Moy
Date: Tuesday, May 4, 2010  4:34 PM

> "for" seems bizarre to me.
> I can live with "do".

What is the problem with "in" already being used in expression syntax?
If we agree to parenthesize the bindings (which makes them look more like
parameters or aggregate syntax), then there is no ambiguity.

****************************************************************

From: Robert Dewar
Date: Tuesday, May 4, 2010  4:43 PM

I don't much like parenthesizing the declarations, but let's see what others
think, it is true that

   (with (....) in blabla)

is unambiguous, do we still need the outer parens in this case?

****************************************************************

From: Tucker Taft
Date: Tuesday, May 4, 2010  5:01 PM

I prefer (with ... do ...).  The extra parentheses don't help readability in my
view, and we will get used to "do" pretty quickly, even if it isn't the most
natural on first exposure.

Omitting the parens around the whole construct seems like a non-starter in my
book, now that we have adopted parens around if- and case- expressions.

****************************************************************

From: Steve Baird
Date: Tuesday, May 4, 2010  5:16 PM

> What is the problem of having "in" already in expression syntax?
It is a matter of human readability, not formal ambiguity.

> If we agree to parenthesize the bindings (which make them look more
> like parameters or aggregate syntax), then there is no ambiguity.
>
True, but we are talking about readability, which is subjective.

I don't think

    Flag := (use X => 34, Y => F (G) for Foo (X, Y, Bar (X)) = Y);

looks bizarre, but it sounds like most prefer "with" and "do".
I think I could live with any of the 6 "with/use" x "for/in/do"
alternatives.

I agree with Tuck that we want consistency with conditional expressions with
regard to parens.

****************************************************************

From: Randy Brukardt
Date: Tuesday, May 4, 2010  5:39 PM

...
> It is often also a much more convenient form for local abstraction.
> Consider something like
>
>       declare
>          A : Float := Factor * Adjustment;
>       begin
>          return A * A + A;
>       end;
>
> This reads fine and is much shorter:
>
>       return (with A = Factor * Adjustment do A * A + A);

Right. My point was that I've rarely seen the need for the first form, so
necessarily I wouldn't have seen the need for the second form. But I was less
concerned about my personal thinking than the bigger picture.

> > While that's probably a useful goal, I don't see how that's going to
> > help us sell this to the Ada community at large.
>
> I think no changes we do to the language are specifically much help in
> selling to the Ada community, other than to show a general activity in
> keeping Ada current and showing continued interest. After all our (our
> = AdaCore for this
> sentence) major competitor, Greenhills, has decided it is not even
> worth investing in Ada 2005, let alone Ada 2012.

I think you missed my point (probably my fault). The "Ada Community" (by that I
mean the people who tend to be involved in SIGAda, comp.lang.ada, the ARG, etc.
rather than the run-of-the-mill programmer) tends to have strong ideas about
what makes Ada great. When we (the ARG) add new features, we need to do so in a
way that does not run counter to what makes Ada great (at least for the majority
of the community) -- or we need to be able to sell the feature as a clear
improvement.

I would agree that not much of anything would matter to people who aren't
currently using Ada extensively. But I don't want to alienate the current
customers (the old saw is that it is a lot easier to retain an existing customer
than it is to get a new one).

...
> > Some people (not on the ARG!)
> > already have objected to conditional expressions, and this of course
> > is much more strongly in the same direction.
>
> Ah, the infamous "some people". Who are these people? Do they know
> anything? And what specifically is their objection to conditional
> expressions (which are by the way a VERY nice improvement, we have
> redone our run-time library with judicious but fairly extensive use of
> conditional expressions, and it is a big improvement).

Sorry. I recall reading somewhere (probably comp.lang.ada) various gripes about
conditional expressions as implemented in GNAT. I don't remember them in much
detail, because I was strongly convinced, as you are, that conditional
expressions are a good thing.

But when it is less clear that a feature is a good thing (as in this case),
those gripes could start becoming significant. In that case, I think we need to
have a case ready as to why the feature is significant.

> > After all, this is a fairly complex feature to define and implement
> > (we'll need to make sure that we don't make the same mistake we made
> > with extended returns, for instance, so that the newly defined
> > identifier is actually visible after the "do"). We will
> need a decent justification for it.
>
> Not so hard to implement, maybe a day or two's work for us, and as for
> other compilers, we don't even see them implementing critical 2005
> features yet, so I can't get too excited about
> difficulty-of-implementation arguments

The effort for us would be very dependent on whether the declarations are
renames or objects. Since we have a fairly strong wall between expressions and
the rest of the semantics, we would have to move object allocation from one side
to the other.

In any case, it's not trivial; as you note, it can't be modeled with any
existing Ada feature.

> > I admit that I'm a bit dubious about the need for constants. I've
> > been racking my brain and can't remember a case where I wanted a
> > constant to use in a single expression (variables are different, see
> > below). I realize that conditional expressions (both types) could
> > potentially make expressions larger, but I've been hoping that the
> > provision of expression functions would tend to limit the size of
> > expressions -- if they get too large, it is better to create an
> > abstraction to represent parts and break them up.
>
> That's strange to me, I guess it's just a frame of mind, have you ever
> done significant programming in a functional language?

No. My only use of a functional language was in a programming language survey
course at the University of Wisconsin. I've never quite been able to understand
the value of such languages for real programming (it's easy to write simple
examples in them, but real systems are much more complex). In any case, I've
never had an implementation that would be efficient enough (much less of a
concern today, of course).

> > I do strongly prefer the "=>" notation, as we definitely do not want
> > these to look like declarations. Otherwise, we'd have endless
> > arguments about why variables aren't allowed (or whether they should
> > be allowed). The only cases I can think of where I've used local
> > objects for a single expression is to discard unnecessary
> in out and out parameters.
>
> Why would you "discard" unnecessary parameters, why not just ignore
> them, confused ....

If they are "in out" or "out" parameters, you have to give them in a call (they
can't be defaulted). That means that you need some value to put them into, and
that value might as well have a very local scope. So I write things like:

    declare
        Unused : Handle;
    begin
        Replace_Window (Window, New_Window => Pretty, Old_Window => Unused);
    end;

I could imagine wanting to do something like that for functions as well,
especially the ones in C APIs (where most of the results are returned in
parameters, the function result being just a status code).
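
[Editor's note: a sketch of that C-API situation. Replace_Window_C and
Status_Code are invented names, and a function with an "out" parameter is only
legal under the Ada 2012 proposal to allow them (AI05-0143); the point is the
pair of throwaway locals.]

```ada
--  Hypothetical thin binding: real outputs come back via parameters,
--  the function result is only a status code.
function Replace_Window_C
  (Window     : Handle;
   New_Window : Handle;
   Old_Window : out Handle) return Status_Code;
pragma Import (C, Replace_Window_C);

procedure Demo (Window, Pretty : Handle) is
   Unused_Old    : Handle;       --  discarded "out" result
   Unused_Status : Status_Code;  --  discarded function result
begin
   Unused_Status :=
     Replace_Window_C (Window, New_Window => Pretty,
                       Old_Window => Unused_Old);
end Demo;
```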

P.S. I'm just starting to read the mail after spending much of the day on a home
plumbing problem. Sorry about being so far behind.

****************************************************************

From: Randy Brukardt
Date: Tuesday, May 4, 2010  6:26 PM

Reading these in quick succession, I'm struck by:

Robert Dewar writes:
> > After all, this is a fairly complex feature to define and implement
> > (we'll need to make sure that we don't make the same mistake we made
> > with extended returns, for instance, so that the newly defined
> > identifier is actually visible after the "do"). We will
> > need a decent justification for it.

...
> As for difficulty-of-definition, I don't see a problem, it is
> certainly nothing like the extended return (which was a HUGE pain in
> the neck, and for me a case where the implementation and definition
> efforts were unreasonably large. For me the second most glaring
> example of a mismatch between effort and value -- the first being leap
> seconds, about which don't get me started!)

Steve Baird writes:

> Can a parameterized expression be static?
...
>Getting the accessibility levels right for these guys is another such
>detail.
>
>And deciding whether these expressions fall into the
>   ok-for-build-in-place
>category (like aggregates, function calls, and (I think) the two forms
>of conditional expressions).
>
>And deciding whether a parameter expression can include a name which
>denotes an earlier parameter, as in
>     (with X => ..., Y => X + 1 do ...) .
>
>And deciding whether any applicable index constraint which is imposed
>on one of these guys is recursively imposed on the post-do
>expression. Same question for the expected type.
>Same question for the "shall resolve to" rule for qualified expressions.
>
>Just details that need to be nailed down once we have agreed on the
>general idea.

Robert, are you still sure that the definition is not a problem?? ;-)

I was thinking about a few of the issues that Steve pointed out (particularly
the scope and nominal subtype of the definition), and was pretty sure that Steve
would find a bunch more. Probably none of them are hard by themselves, but the
sheer number makes this quite a bit of work for the language definition.

Keep in mind that we still haven't been able to quite settle on the details of
resolution for conditional_expressions and we've been working on it for a couple
of years; this is more complex than that feature and we've only just scratched
the surface.

****************************************************************

From: John Barnes
Date: Wednesday, May 5, 2010  1:54 AM

Although I agree it is very interesting, and I too had a brief flirtation with
Algol 68 -- but why are we discussing this topic at all? I thought we had
agreed that no further topics were to be considered for Ada 2012. Maybe we need
another address for old boys' chatter.

****************************************************************

From: Robert Dewar
Date: Wednesday, May 5, 2010  7:20 AM

Well the discussion is valuable whether or not this feature goes into the
official Ada 2012 standard, because it is likely that GNAT will implement it in
any case, and it is useful to get input on what to implement. A useful
perspective here is that if you look at the Ada 2012 standard, it has two
impacts:

1. Show everyone, including Ada enthusiasts and Ada serious users, that Ada is
   alive and well and still being worked on.

2. Provide a template for enhancements to GNAT

Right now, no other implementation has touched Ada 2005, let alone 2012, so I am
afraid that the desired effect of having all Ada implementations include the Ada
2012 extensions looks out of reach. This is unfortunate, but not much can be
done about it.

****************************************************************

From: Robert Dewar
Date: Wednesday, May 5, 2010  7:46 AM

> Omitting the parens around the whole construct seems like a
> non-starter in my book, now that we have adopted parens around if- and
> case- expressions.

Do we still allow omitting them when they are there anyway, e.g. in a pragma
argument?

****************************************************************

From: Robert Dewar
Date: Wednesday, May 5, 2010  7:47 AM

> True, but we are talking about readability, which is subjective.
>
> I don't think
>
>     Flag := (use X => 34, Y => F (G) for Foo (X, Y, Bar (X)) = Y);
>
> looks bizarre, but it sounds like most prefer "with" and "do".
> I think I could live with any of the 6 "with/use" x "for/in/do"
> alternatives.
>
> I agree with Tuck that we want consistency with conditional
> expressions with regard to parens.

It is a pity we can't use IN. What if we said that the expressions in the
declarations were simple expressions? We are losing a valuable use of IN here
for a very marginal case :-(

****************************************************************

From: Robert Dewar
Date: Wednesday, May 5, 2010  7:49 AM

> ...
>> It is often also a much more convenient form for local abstraction.
>> Consider something like
>>
>>       declare
>>          A : Float := Factor * Adjustment;
>>       begin
>>          return A * A + A;
>>       end;
>>
>> This reads fine and is much shorter:
>>
>>       return (with A = Factor * Adjustment do A * A + A);
>
> Right. My point was that I've rarely seen the need for the first form

I find that a truly surprising comment; what would you do in this case?

a) duplicate code? UGH

b) declare A more globally?, also for me UGH, why? because if A is more globally
   declared, you can't tell whether the value after this local use is going to
   be used.

I think that it is a fundamental principle of ANY block structured language to
declare things as locally as possible.

****************************************************************

From: Randy Brukardt
Date: Wednesday, May 5, 2010  1:39 PM

> Do we still allow omitting them when they are there anyway, e.g. in a
> pragma argument?

I would presume we'd use the same rules/syntax as we do for conditional
expressions (if and case). It would be annoying to have different rules for
every kind of expression (which is way we ended up with a fairly conservative
rule about omitting them -- the case expressions would get ambiguous if the
parens are omitted after "=>", as their syntax contains "=>").

****************************************************************

From: Randy Brukardt
Date: Wednesday, May 5, 2010  2:00 PM

> >> Consider something like
> >>
> >>       declare
> >>          A : Float := Factor * Adjustment;
> >>       begin
> >>          return A * A + A;
> >>       end;
> >>
> >> This reads fine and is much shorter:
> >>
> >>       return (with A = Factor * Adjustment do A * A + A);
> >
> > Right. My point was that I've rarely seen the need for the
> first form
>
> I find that a truly surprising comment, what would you do in this
> case?
>
> a) duplicate code? UGH
>
> b) declare A more globally?, also for me UGH, why? because if A is
> more globally declared, you can't tell whether the value after this
> local use is going to be used.
>
> I think that it is a fundamental principle of ANY block structured
> language to declare things as locally as possible.

Almost certainly "A". I try to only add abstraction when it really is
meaningful. Adding a declaration "A" in this example is *not* meaningful. (And I
admit I'm not good at recognizing places where abstraction can be added.) I
would use a block as you show here if I did add this declaration (although I'd
probably first look to see if there is an existing block nearby that I could
use, as otherwise the indentation gets out of hand quickly - having to scroll
sideways when browsing through code is really annoying).

Indeed, I naturally simply write the entire body of whatever I'm doing as one
straight chunk of code. Only if I recognize that control flow is requiring
duplicate code will I make a subprogram, which I make nested (in a block if
possible) unless it is clearly useful abstraction. This is essentially code
refactoring on the fly. And the code in question needs to be significant before
I'll do that.

As a consequence, renames are rare in my code, and constants aren't particularly
common (most are for binding function results). I don't think such things add to
readability in general, as they make it harder to see what is really being
calculated. (There are exceptions, of course, but if I can't give it a
meaningful name, I won't do it. And meaningful names are very hard...)

****************************************************************

From: Randy Brukardt
Date: Wednesday, May 5, 2010  2:54 PM

> Although I agree it is very interesting and I too had a brief
> flirtation with Algol 68 -- But  why are we discussing this topic at
> all? I thought we had agreed that no further topics were to be
> considered for Ada 2012. Maybe we need another address for old boys
> chatter.

Well, we have the "expression function" proposal open. If this idea is
considered a critical part of that proposal, then we could add it as part of
that. I suspect that there are at least some people who would agree with that.

Anyway, what we are accepting as Amendments (and as Binding Interpretations for
that matter) are problem statements, not necessarily solutions. The ARG is in
the solution business, after all. That means we're not accepting any more
problems to solve in the Amendment, not that we aren't accepting alternate
solutions to solve the problems that we already are attempting to solve. It
would be silly to restrict ourselves only to solutions that we've already
considered, because that would prevent tweaking that would make the features
better. After all, we've split several existing AIs since we've stopped
considering new AIs -- we surely can consider the new split AIs. And surely we
can do that here if we as a group feel it is important.

What we're not allowing is the introduction of new problems. For instance,
while I would like to see object versions of the attributes Image, Succ, Pred,
and probably Pos, we'd have to figure out a way to bend one of the existing
problems to justify that inclusion. That's probably too hard to justify for a
nice-to-have feature (although I suspect we could repurpose AI-187 for this
purpose if we really wanted to).

****************************************************************

From: Gary Dismukes
Date: Wednesday, May 5, 2010  3:42 PM

> It is a pity we can't use IN, what if we said that the expressions in
> the declarations were simple expressions.
> We are losing a valuable use of IN here for a very marginal case :-(

Sounds reasonable to require simple_expressions.  If you need an expression
involving membership/relational/logical ops (in front of 'in'), then you just
have to parenthesize.

****************************************************************

From: Jean-Pierre Rosen
Date: Monday, May 24, 2010  7:29 AM

I just noticed that a null procedure declaration cannot be used "as body".

Allowing this looks like a very small change for a very small benefit.
Do you think it is worth an AI?

****************************************************************

From: Edmond Schonberg
Date: Tuesday, May 25, 2010  1:49 PM

Not really.  Null procedures allow you to have package specs without bodies;
little point in then giving you a means that forces you to have them!

****************************************************************

From: Tucker Taft
Date: Tuesday, May 25, 2010  2:01 PM

The null procedure is primarily for interfaces, and those don't even need a
private part much less a body, so it doesn't seem worth adding complexity.
Renaming-as-body and body-as-spec are nasty enough as it is.

****************************************************************

From: Randy Brukardt
Date: Tuesday, May 25, 2010  2:25 PM

Normally I would say it isn't worth it, but the proposed syntax for
parameterized expressions will definitely be able to be used as a body, and it
is quite similar.

    parameterized_expression ::= function_specification is (expression);
    null_procedure_declaration ::= procedure_specification is null;

That makes it likely that it will seem odd that you can't use it as shorthand
for a body. So perhaps we ought to consider it as part of the parameterized
expressions AI.
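[Editor's note: in the syntax eventually standardized for Ada 2012, the two
shorthands do read as parallel forms, and both can serve as completions. A
minimal sketch (the package and identifiers here are invented for
illustration):

```ada
package Shapes is
   function Area (W, H : Integer) return Integer;   --  ordinary declaration
   procedure Log (Msg : String) is null;            --  null procedure
end Shapes;

package body Shapes is
   --  An expression function completing the earlier declaration:
   function Area (W, H : Integer) return Integer is (W * H);
end Shapes;
```

- Editor.]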

(I don't think we want to be opening any new AIs for nice-to-haves at this
point, so I don't think we would make any changes if a stand-alone AI is
needed.)

****************************************************************

From: Tucker Taft
Date: Tuesday, May 25, 2010  2:55 PM

Seeing this juxtaposition makes me think that another part of this AI ought to
be allowing "(expression)" as a default for a formal function, in the same way
that we allow "null" as a default for a formal procedure.
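[Editor's note: a sketch of the suggested parallel. The "is null" default is
existing Ada 2005; the expression default for the formal function is
hypothetical syntax that this AI did not ultimately adopt:

```ada
generic
   with procedure Note (S : String) is null;  --  existing: null default
   with function Scale (X : Integer) return Integer is (X);
   --  ^ suggested expression default (hypothetical, not adopted here)
package G is
end G;
```

- Editor.]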

****************************************************************

From: Bob Duff
Date: Tuesday, May 25, 2010  3:21 PM

I agree.

****************************************************************

From: Bob Duff
Date: Tuesday, May 25, 2010  3:20 PM

>     parameterized_expression ::= function_specification is (expression);
>     null_procedure_declaration ::= procedure_specification is null;
>
> That makes it likely that it will seem odd that you can't use it as
> shorthand for a body. So perhaps we ought to consider it as part of
> the parameterized expressions AI.

I was going to say "no", but Randy's argument convinced me to say "yes".

It's one of those cost/benefit = the ratio of two infinitesimals, so it's hard to get excited either way.

I remember trying to use "is null" as a body once, and was surprised it's not allowed.  Once in my life.

> (I don't think we want to be opening any new AIs for nice-to-haves at
> this point, so I don't think we would make any changes if a
> stand-alone AI is
> needed.)

I agree.

****************************************************************

From: Robert Dewar
Date: Sunday, July 11, 2010  7:59 AM

I just implemented this, as described, but it's a little delicate to
distinguish between function bodies and parameterized expressions:

    function F (X : Integer) is Long_Identifier ,
    function F (X : Integer) is Long_Identifier +

Any time distinguishing things is tricky for the compiler, it points to two
possible problems:

   a) difficulty for humans reading the code

   b) difficulty for compilers in handling error situations

I am wondering whether we should require the expression to be parenthesized to
avoid any such near ambiguities.

I don't feel strongly; I'm just raising the issue to see how others feel. I am
inclined to require the parens (like requiring them around conditional
expressions).

****************************************************************

From: Tucker Taft
Date: Sunday, July 11, 2010  9:19 PM

Parentheses are already required by the proposal.
Perhaps you were looking at an older version or simply missed the parens.

****************************************************************

From: Robert Dewar
Date: Sunday, July 11, 2010  9:25 PM

OK, either way, that's definitely an improvement.
It will simplify the implementation a bit :-)

(just missed the parens, they are there in the syntax indeed!)

****************************************************************

From: Robert Dewar
Date: Monday, July 12, 2010  7:53 AM

By the way, since I had already done all the work to carefully distinguish these
cases without the need for parens, I put it to use in error recovery:

>      1. package PexpParen is
>      2.    function F (A : Integer)
>      3.      return Integer is A * A;
>                                |
>         >>> parametrized expression must be enclosed in parentheses
>
>      4.    function G (A : Integer)
>      5.      return Integer is (A * A);
>      6. end;

:-)

****************************************************************

From: Randy Brukardt
Date: Wednesday, October 20, 2010  10:14 PM

Following is a first attempt at the wording for expression functions. I tried to
copy the wording for null procedures and default_expressions as much as
possible: that made it easier to see what needed to be done.

Note that I changed the name from "parameterized expressions" to "expression
functions" as the wording seems to make more sense with the latter: it's similar
to a null procedure, it can complete a subprogram, it's called with a function
call, etc. If it didn't contain the word "function", I suspect that quite a bit
of wording would have to be changed to add "parameterized expressions" as a
possible type of subprogram, callable entity, etc. I cheated a bit and called
the expression a "parameterized_expression", so we don't lose that term
completely.

Comments welcome.

P.S. If we really want to do something with a Let expression/block
expression/with expression or whatever else you want to call it, I recommend
that be a separate AI. This one is complicated enough...

[This is version /03 of the AI - Editor.]

****************************************************************

From: Tucker Taft
Date: Wednesday, October 20, 2010  10:53 PM

You didn't quite complete the cut and paste job.
"null_procedure_declaration" appears several places where I suspect you meant
"expression_function_declaration." Also, there are quite a few places where "a"
should be "an" because the construct name now starts with a vowel. I have
extracted most of the problematic sentences below.

Otherwise, looks good.

> A expression function can define the body of a function, even in a
> package

> [Note: The parens are needed to clearly (?) separate a expression
> function

> A expression function can be used as a declaration or as the
> completion of

> null_procedure_declaration (see 6.7){, or a
> expression_function_declaration (see 6.8)}.

> Legality Rules
>
> A null_procedure_declaration need not be the completion of a previous
> declaration, but if a null_procedure_declaration is a completion, it
> shall be the completion of a subprogram_declaration or
> generic_subprogram_declaration.
> The profile of a null_procedure_declaration that completes a
> declaration shall conform fully to that of the declaration.
>
> [Editor's note: This is mostly copied from the text for subprogram
> bodies, some redundancy removed.]
>
> Modify 6.7(3/2):
>
> A null_procedure_declaration declares a null procedure. A completion
> is not allowed for a null_procedure_declaration{, however a
> null_procedure_declaration can complete a previous declaration}.
>
> Modify 6.7(5/2):
>
> The elaboration of a null_procedure_declaration has no {other} effect
> {than to establish that the expression function can be called without
> failing the Elaboration_Check}.

> The profile of a expression_function_declaration that completes a

> The elaboration of a expression_function_declaration has no other
> effect

****************************************************************

From: Jean-Pierre Rosen
Date: Thursday, October 21, 2010  2:16 AM

>    expression_function ::=
>       [overriding_indicator]
>       function_specification is (<*parameterized_*expression>);
>
> [Note: The parens are needed to clearly (?) separate a expression
> function from the start of a normal function body. Perhaps it would be
> better to use a keyword somewhere to make it crystal-clear.]
>
<asbestos_suit_on>
what about:
   function_specification := (<*parameterized_*expression>);
</asbestos_suit_on>

****************************************************************

From: Robert Dewar
Date: Thursday, October 21, 2010  8:00 AM

I much prefer the IS, the := here seems most irregular, and implies an
assignment to the entity on the left, which is not what is going on here at all.

Are we definite about the terminology change? If so, I will just go ahead and
implement that change in GNAT.

****************************************************************

From: Tucker Taft
Date: Thursday, October 21, 2010  9:48 AM

>   function_specification := (<*parameterized_*expression>);

Uggggh. ;-)

****************************************************************

From: Steve Baird
Date: Thursday, October 21, 2010  12:07 PM

> !proposal
> ... given anywhere in the same package (if the declaration is a package).
     if the declaration is *in* a package

====

Clearly

     Global_Task : Some_Task_Type;

     function F return Some_Task_Type is (Global_Task);

should be illegal. What wording prevents this?

====

Is the convention for one of these guys defined in the same way as the
convention for a null procedure?

****************************************************************

From: Randy Brukardt
Date: Thursday, October 21, 2010  1:18 PM

...
> Are we definite about the terminology change, if so I will just go
> ahead and implement that change in GNAT.

I think so, but you may want to wait until after the meeting next week to be
sure.

Bob wanted to call these "expression functions" from the beginning, and others
did so intermittently in the e-mail. I was going to use "parameterized
expressions" (which explains the use better), but that is not clearly a type of
subprogram, while <something> functions clearly is. I didn't want to have to go
all over the standard looking for subprogram and function and checking to see if
parameterized expressions needed to be added; that sounds error-prone at best.

****************************************************************

From: Randy Brukardt
Date: Thursday, October 21, 2010  1:22 PM

> Clearly
>
>      Global_Task : Some_Task_Type;
>
>      function F return Some_Task_Type is (Global_Task);
>
> should be illegal. What wording prevents this?

I checked all of the legality rules for return statements, but this one isn't
among them; it's in 7.5. So

* the *parameterized_*expression of an expression_function_declaration (see 6.8)

needs to be added to 7.5 (after 2.8/2, I think).

> ====
>
> Is the convention for one of these guys defined in the same way as the
> convention for a null procedure?

Yes: there doesn't seem to be any definition for the convention of null
procedures. :-) It's surely not in 6.7.

Is there any such definition that I've missed? If so, it needs to be updated to
add expression_functions. If not, well then you have an action item.

****************************************************************

From: Steve Baird
Date: Thursday, October 21, 2010  1:47 PM

>> Is the convention for one of these guys defined in the same way as
>> the convention for a null procedure?
>
> Yes: there doesn't seem to be any definition for the convention of
> null procedures. :-) It's surely not in 6.7.
>

We have to decide what we want.

Nothing needs to be done if we want Ada (6.3.1(3), at least when not superseded
by 6.3.1(13.2/1)).

This would mean that you can take 'Access of one of these guys.

If we decide that that is an excessive implementation burden (e.g., because some
implementations might have to generate wrappers), then we could add these guys
to the 6.3.1 list of things that get convention Intrinsic.

****************************************************************

From: Randy Brukardt
Date: Thursday, October 21, 2010  2:01 PM

> This would mean that you can take 'Access of one of these guys.
>
> If we decide that that is an excessive implementation burden (e.g.,
> because some implementations might have to generate wrappers), then we
> could add these guys to the 6.3.1 list of things that get convention
> Intrinsic.

Humm. You have to be able to generate wrappers for expression functions: for
instance, if they override a dispatching call (you'll need a wrapper to use in
the tag, at a minimum), if they are given as a completion in the body of a
package (they'll need to be a real body in that case), possibly if passed in an
instance. So the burden to do that for 'Access isn't that high.

OTOH, it's annoying to have to do that for these when declared in a
specification (otherwise those can always be inlined).

But all-in-all, I would think that the complexity of defining precisely when you
aren't allowed to take 'Access would be as bad as just making it work. So I lean
to the status quo here. I can be convinced otherwise.

****************************************************************

From: Tucker Taft
Date: Thursday, October 21, 2010  2:03 PM

All dispatching operations need to be Ada, I believe.
For non-dispatching operations, I would vote for Intrinsic, though for me it
isn't a terribly big deal either way. This applies to both expression functions
and null procedures.

****************************************************************

From: Edmond Schonberg
Date: Thursday, October 21, 2010  2:10 PM

> Humm. You have to be able to generate wrappers for expression
> functions: for instance, if they override a dispatching call (you'll
> need a wrapper to use in the tag, at a minimum), if they are given as
> a completion in the body of package (they'll need to be a real body in
> that case), possibly if passed in an instance. So the burden to do that for 'Access isn't that high.

Agreed.

> OTOH, it's annoying to have to do that for these when declared in a
> specification (otherwise those can always be inlined).

Note that with conditional expressions it's possible for expression functions to
be recursive, which will make it rather hard to inline calls to them!
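[Editor's note: a minimal Ada 2012 sketch of such a recursive expression
function, relying on the conditional expressions of AI05-0147-1:

```ada
package Math_Sketch is
   --  Directly recursive: a compiler cannot unconditionally inline
   --  every call to this.
   function Fib (N : Natural) return Natural is
     (if N < 2 then N else Fib (N - 1) + Fib (N - 2));
end Math_Sketch;
```

- Editor.]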

> But all-in-all, I would think that the complexity of defining
> precisely when you aren't allowed to take 'Access would be as bad as just making it work.
> So I lean to the status quo here. I can be convinced otherwise.

Convention Ada makes sense.

****************************************************************

From: Bob Duff
Date: Thursday, October 21, 2010  2:58 PM

> Following is a first attempt at the wording for expression functions.
> I tried to copy the wording for null procedures and
> default_expressions as much as possible: that made it easier to see what needed to be done.

Looks good to me.  Some typos and whatnot marked below.

> [Note: The parens are needed to clearly (?) separate a expression
> function

"a" --> "an"

> from the start of a normal function body. Perhaps it would be better
> to use a keyword somewhere to make it crystal-clear.]

The parens look OK to me, I guess.

> The expected type of the parameterized_expression is that of the
> return subtype of the function_specification.
>
> Within the parameterized_expression, the names of the parameters of
> the function_specification are visible.

directly visible

> The dynamic semantics are the same as a function whose body contains
> nothing but "return parameterized_expression".
>
> A expression function can be used as a declaration or as the
> completion of

"A" --> "An"

> a function specification. If used as a completion, the completion can
> be given anywhere in the same package (if the declaration is a package).

But surely not before the declaration!

> Specifically,
> expression functions in a private part can complete function
> declarations from the visible part.
>
> Freezing: The renamed expression would freeze as does a default expression.

"renamed" --> "parameterized"

> That
> is, freezing would only be performed at the point of a call of the
> function, with the expression essentially in-lined at that point. (Of
> course, if that

"in-lined" --> "inlined"

> call
> is in a default expression, the freezing would be deferred again.)
>
> !wording

> Legality Rules
>
> A null_procedure_declaration need not be the completion of a previous
> declaration, but if a null_procedure_declaration is a completion, it
> shall be the completion of a subprogram_declaration or
> generic_subprogram_declaration.
> The profile of a null_procedure_declaration that completes a
> declaration shall conform fully to that of the declaration.

Heh?  What's this about null procs?

> [Editor's note: This is mostly copied from the text for subprogram
> bodies, some redundancy removed.]

> !discussion

> We did "borrow" the parameterized expression name to mean the actual
> expression of an expression function. This simplifies some of the
> wording and re-enforces

"re-enforces" --> "reinforces"

> the usage intent.
>
> ---
>
> A goal for expression functions is that they represent a subset of the
> capabilities of null subprograms in terms of the use as subprogram.

That's a strange thing to say.  I mean, null procedures have just about the
fewest "capabilities" of any feature -- they don't do anything!  Expr functions
can do lots of things.

So I'm not understanding what you're getting at, here -- could use some
rewording.

>...If we didn't do that, we would
> have added another kind of subprogram, whose usages would be subtly
>different than  the existing kinds. That seems bad.
>
> In order to accomplish this, we added the capability of a null
> subprogram to represent a completion. This is not an important
> capability, but it increases the consistency of the various kinds of
> subprogram declaration.
>
> ---
>
> One could imagine these functions (when visible) to be used in static
> expressions if the inlined expression would be static. This would
> definitely allow abstracting expressions that currently are written as
> a giant single expression because they need to be static. That's
> something you cannot do now. I didn't add this to proposal in order to
> keep the proposal simple.

Hmm.  Interesting idea.
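[Editor's note: this idea was not pursued for Ada 2012; Ada 2022 later added
static expression functions via the Static aspect (AI12-0075). A sketch
assuming that later aspect:

```ada
package Static_Sketch is
   function Twice (X : Integer) return Integer is (X * 2)
     with Static;  --  Ada 2022 aspect, beyond the scope of this AI
   Bits : constant := Twice (32);  --  legal: the call is a static expression
end Static_Sketch;
```

- Editor.]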

> ---
>
> It has been suggested to allow expression functions as generic formal
> subprogram defaults. This would increase the analogy with null procedures.
>
> I didn't include this for three reasons:
> (1) The syntax would be very close to the existing default_name syntax.
> Adding
> or removing parens would change the semantics substantially.
> (2) This would probably require explicitly creating a body at the
> point of the instantiation. So the implementation effort would be
> substantially increased.
> (3) We're not likely to make the analogy with null procedures complete
> (we would need the next item as well), so there is going to be an
> inconsistency somewhere.
>
> Therefore, I stuck with the simpler proposal.
>
> ---
> An additional idea would be to allow this as a primitive subprogram of
> an interface. If we did that, we would get the benefit of the identity
> function idea (see AI05-0140-1) without adding anything special
> specifically for that case.
> The expression would have to be included in the 8.3 homograph matching
> rules, probably by borrowing the rules for full conformance of
> expressions.
> That would be necessary to deal with the Steve Baird problem for
> diamond inheritance with interfaces (that was the primary reason that
> AI05-0140-1 was scuttled).

Steve Baird didn't invent the diamond inheritance problem.
Or is there a specific diamond inheritance sub-problem that Steve has talked
about?

Anyway, I can't see how interfaces can have primitive subprograms that do stuff,
here or in AI05-0140-1 -- otherwise they aren't interfaces, but have grown into
full multiple inheritance.

> Including both this and the formal subprogram default would make this
> completely similar to null procedures.

****************************************************************

From: Steve Baird
Date: Thursday, October 21, 2010  3:01 PM

> Convention Ada makes sense.

Sounds good as a general principle, overridden in corner cases by 6.3.1(13.2/1).

I think this means no wording changes are needed in this area.

****************************************************************

From: Robert Dewar
Date: Thursday, October 21, 2010  3:30 PM

> ...
>> Are we definite about the terminology change, if so I will just go
>> ahead and implement that change in GNAT.
>
> I think so, but you may want to wait until after the meeting next week
> to be sure.
>
> Bob wanted to call these "expression functions" from the beginning,
> and others did so intermittently in the e-mail. I was going to use
> "parameterized expressions" (which explains the use better), but that
> is not clearly a type of subprogram, while<something>  functions
> clearly is. I didn't want to have to go all over the standard looking
> for subprogram and function and checking to see if parameterized
> expressions needed to be added; that sounds error-prone at best.

I must say I like the change, even though it's extra effort in the
implementation to make the change!

****************************************************************

From: Robert Dewar
Date: Thursday, October 21, 2010  3:35 PM

> Humm. You have to be able to generate wrappers for expression
> functions: for instance, if they override a dispatching call (you'll
> need a wrapper to use in the tag, at a minimum), if they are given as
> a completion in the body of package (they'll need to be a real body in
> that case), possibly if passed in an instance. So the burden to do that for 'Access isn't that high.

We don't generate wrappers, we just expand expression functions to normal
functions in any case. They are just syntactic sugar as far as we are concerned.

> OTOH, it's annoying to have to do that for these when declared in a
> specification (otherwise those can always be inlined).

The normal automatic inlining mechanism takes care of these expanded functions
perfectly fine.

> But all-in-all, I would think that the complexity of defining
> precisely when you aren't allowed to take 'Access would be as bad as just making it work.
> So I lean to the status quo here. I can be convinced otherwise.

I would make 'Access work fine (it certainly does in GNAT now!)

****************************************************************

From: Robert Dewar
Date: Thursday, October 21, 2010  3:39 PM

> All dispatching operations need to be Ada, I believe.
> For non-dispatching operations, I would vote for Intrinsic, though for
> me it isn't a terribly big deal either way.
> This applies to both expression functions and null procedures.

We also expand out null procedures unconditionally in GNAT, so it would be extra
work to forbid access and make them intrinsic. Of course it can be done, but I
object to doing this on the basis of making things easier for implementations!

It seems an arbitrary and unnecessary restriction to forbid access for either of
these.

In particular, you may make a procedure null temporarily, sort of like a stub,
and it would be annoying not to be able to take 'Access of it.

Similarly, if you decide that you have a function with a trivial body, and you
want to make it into an expression function now that you have the amazing Ada
2012 in your hands, it would be annoying to find you can't do this because of an
arbitrary and peculiar choice for the convention.
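[Editor's note: a minimal sketch of the "stub" usage, assuming convention Ada
(so that 'Access remains legal):

```ada
package Stubs is
   type String_Handler is access procedure (Msg : String);

   procedure Log (Msg : String) is null;  --  temporary stub, body-less

   --  Legal only if Log has convention Ada rather than Intrinsic:
   Handler : String_Handler := Log'Access;
end Stubs;
```

- Editor.]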

****************************************************************

From: Randy Brukardt
Date: Thursday, October 21, 2010  5:55 PM

...
> > An additional idea would be to allow this as a primitive subprogram
> > of an interface. If we did that, we would get the benefit of the
> > identity function idea (see AI05-0140-1) without adding anything
> > special specifically for that case.
> > The expression would have to be included in the 8.3 homograph
> > matching rules, probably by borrowing the rules for full conformance
> > of expressions.
> > That would be necessary to deal with the Steve Baird problem for
> > diamond inheritance with interfaces (that was the primary reason
> > that
> > AI05-0140-1 was scuttled).
>
> Steve Baird didn't invent the diamond inheritance problem.
> Or is there a specific diamond inheritance sub-problem that Steve has
> talked about?

I was specifically referring to the problem that Steve raised during the
discussion of AI05-0140-1. It's a particular form of diamond inheritance: when
two expression functions that are subtype conformant are inherited, but the
expressions are different.

> Anyway, I can't see how interfaces can have primitive subprograms that
> do stuff, here or in AI05-0140-1 -- otherwise they aren't interfaces,
> but have grown into full multiple inheritance.

Not at all, these make perfect sense. The interface can't have components, but
there is lots of other stuff that could make a useful default. For instance,
it's not unusual to have a query which always returns True (or False or some
other constant) for most implementations. Or a routine that returns some
function of its parameters by default. There are examples of that in
AI05-0140-1.

Moreover, this would work precisely as null procedures do. Those don't cause
"full multiple inheritance", so I don't see how these would either. The concrete
implementation could be created when the interface is extended (remember that
these work like default expressions) - so typically it would be duplicated, not
shared (same as for null procedures).

The point of Steve's concern is that we would not want to let conflicting
expression functions be inherited. That's more complicated than for null
procedures, as all of those work the same way. But some sort of expression
conformance would do the trick (or we could just say that they always conflict
and leave it at that). We already have rules for conflicting subprograms that
are inherited because the modes could be different, so the problem itself is not
new.

****************************************************************

From: Bob Duff
Date: Thursday, October 21, 2010  4:04 PM

> > Humm. You have to be able to generate wrappers for expression
> > functions: for instance, if they override a dispatching call (you'll
> > need a wrapper to use in the tag, at a minimum), if they are given
> > as a completion in the body of package (they'll need to be a real
> > body in that case), possibly if passed in an instance. So the burden to do that for 'Access isn't that high.
>
> We don't generate wrappers, we just expand expression functions to
> normal functions in any case.

I think that's what Randy means by "wrapper" -- it's an out-of-line piece of
code that can be called in the usual way, and can also be inlined when
desirable.  And you can take 'Access of it.

>...They are just syntactic sugar as far  as we are concerned.
> >
> > OTOH, it's annoying to have to do that for these when declared in a
> > specification (otherwise those can always be inlined).
>
> The normal automatic inlining mechanism takes care of these expanded
> functions perfectly fine
> >
> > But all-in-all, I would think that the complexity of defining
> > precisely when you aren't allowed to take 'Access would be as bad as just making it work.
> > So I lean to the status quo here. I can be convinced otherwise.
>
> I would make 'Access work fine (it certainly does in GNAT now!)

I agree.

In general adding restrictions to ease implementation is a tricky business --
you sometimes end up with a more difficult implementation, because it has to
check the restrictions, but can't take advantage of them.

It's also simplest for programmers to say "these things are just like function
bodies, except they look different, and they can go in packages specs".  We
don't want to make the "except..." part longer by adding "and you can't take
'access, and mumble mumble generic mumble, and...".
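
As a concrete illustration of that principle, here is a sketch (the names
Pkg, Point, Norm_Sq, Float_Fn, and Measure are all hypothetical) of an
expression function declared directly in a package spec and behaving just
like an ordinary function body, 'Access included:

```ada
package Pkg is
   type Point is record
      X, Y : Float;
   end record;

   --  An expression function used as a declaration; it is a full
   --  function body as far as the language is concerned.
   function Norm_Sq (P : Point) return Float is (P.X ** 2 + P.Y ** 2);

   type Float_Fn is access function (P : Point) return Float;

   --  'Access works exactly as it would for an ordinary function body.
   Measure : constant Float_Fn := Norm_Sq'Access;
end Pkg;
```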

****************************************************************

From: Randy Brukardt
Date: Thursday, October 21, 2010  7:40 PM

...
> > A goal for expression functions is that they represent a subset of
> > the capabilities of null subprograms in terms of the use as subprogram.
>
> That's a strange thing to say.  I mean, null procedures have just
> about the fewest "capabilities" of any feature -- they don't do
> anything!  Expr functions can do lots of things.
>
> So I'm not understanding what you're getting at, here -- could use
> some rewording.

My point is that expression functions can be used in a subset of the places and
ways that null procedures can: they are called the same way, they can both
complete subprograms (which is new for null procedures), and they can serve as
subprogram declarations. Null procedures can also be used as generic formal
procedure defaults and as the primitives of interfaces. While it would be nice
if expression functions could be used in those places, that is not important
functionality and I left it out for Ada 2012 (although it would not surprise me
if Ada 2020 gained those).

What I didn't want to see is the set of places where you can use normal
subprograms, abstract subprograms, null procedures, and expression functions all
end up different. That just adds to the cognitive overhead.

If you have a suggestion for making what I had in mind clearer (preferably
without the dissertation given here), please suggest it.

****************************************************************

From: Tucker Taft
Date: Thursday, October 21, 2010  7:46 PM

I don't think we should overburden the expression function idea with the diamond
inheritance problem.

We have enough going on already!

****************************************************************

From: Robert Dewar
Date: Thursday, October 21, 2010  8:39 PM

> Note that with conditional expressions it's possible for expression
> functions to be recursive, which will make it rather hard to inline calls to
> them!

I really don't see why these should be a special case for inlining, you inline
them if your implementation can do it and feels like it (in GNAT, you get this
kind of inlining automatically at high optimization levels). If the
implementation can't inline them, then it won't, end of story! If the
implementation feels like inlining the recursive case one step it can if it
likes.

Treating these differently from ordinary functions seems a plain implementation
mistake, at least in our environment.
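
For reference, the recursive case arises as soon as a conditional expression
appears in the returned expression; a minimal sketch (Factorial is a
hypothetical name):

```ada
--  A recursive expression function; the conditional expression
--  provides the base case that terminates the recursion.
function Factorial (N : Natural) return Natural is
  (if N = 0 then 1 else N * Factorial (N - 1));
```

As noted above, an implementation may inline such a call zero, one, or more
levels deep, exactly as it would for an ordinary recursive function.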

****************************************************************

From: Randy Brukardt
Date: Saturday, November 20, 2010  1:49 AM

At the Fairfax meeting, we decided that all subprogram completions, including
null procedures and expression functions, have to be freezing (under 13.14(3/3)).
I tried to explain the reasons for this in the !discussion of the AI:


Note that both null subprograms and expression functions freeze when they are
used as completions. There are a number of reasons for this:

(1) This is consistent with the behavior of explicit bodies. If the null
    procedure or expression function replaces the equivalent explicit body, the
    freezing will not change.

(2) If a null procedure used as a completion did not freeze, then AARM
    13.14(10.f/3) would not be true. There would be a way to construct a program
    that made a procedure call with an unfrozen profile - admittedly, a
    pathological program (it could only have null procedure bodies in a package
    body). (This issue could be fixed by extending 13.14(10.1/3) to all
    subprograms.)

(3) If an expression function used as a completion did not freeze, then it would
    be possible to make a call to an expression function with an unfrozen
    expression. Consider:

      package Pack is
          function Foo (A : in Natural) return Natural;
          type Bar is access function (A : in Natural) return Natural;

          P : Bar := Foo'Access; -- Freezes Bar and Foo.

          type Flub is range 0 .. 100;

          function Foo (A : in Natural) return Natural is
              (A + Flub'Size); -- The expression is not frozen here.

          Val : Natural := P.all(5); -- (!)

      end Pack;

The call of P.all passes the elaboration check (as the completion of Foo has
been elaborated). However, if the body of Foo did not freeze everything, Flub
would not be frozen, and thus the call P.all(5) would be evaluating the value of
an aspect of an unfrozen type.

Note that using Foo as the prefix of 'Access freezes the declaration of Foo, but
not the (still to be seen) completion; and even if the completion were frozen,
that would not freeze the expression.

---

This decision worries me. It seems to require that expression functions used as
completions in specifications all be grouped at the end of the private part, not
wherever they more naturally would appear. Given that this is the motivating use
for introducing this construct, hamstringing them does not seem like a good
thing. Of course, I can't say if this is a real problem until we have real
examples of use.

Even if this rule doesn't cause real problems, it is still going to force
programmers to rearrange their code from time to time, which is likely to be an
annoyance. And why do we have that annoyance? For foolish consistency (point (1)
above) and to prevent problems with two pathological cases (points (2) and (3)
above). That doesn't seem right to me.

Thus, I'd like to find an alternative solution to the "always freeze
completions" solution to point (3). [(2) is easy to fix, as noted above.]

Unfortunately, nothing seems to jump out at me. The best I can come up with
would be to say that the expression of an expression function F causes freezing
at the point of the declaration of F if:
  * the expression function F is a completion; and
  * the declaration of F is declared in the same declarative_part or package
    specification; and
  * F is used before the completion as a prefix of the Access or
    Unchecked_Access attribute, or as the formal parameter of a generic
    instantiation.

However, the implementation of this is not ideal: it would seem to require
tracking how a function is used within a spec, or searching back in the code for
uses. Which seems like a lot of work for a near pathology.

But this rule only causes user pain when an expression function is used in one
of these ways, such that trouble might actually occur. That is likely to be
rare; otherwise, the expression function's expression is frozen only when it is
called, or (most likely) when the end of the declarative part is reached.

In cases like this, I am inclined to give the implementers the pain in order to
spare the users (there are many more users than implementers). But perhaps there
is a better fix with even less pain.

Thoughts??

****************************************************************

From: Tucker Taft
Date: Monday, November 22, 2010  9:13 AM

> This decision worries me. It seems to require that expression
> functions used as completions in specifications all be grouped at the
> end of the private part, not wherever they more naturally would
> appear. Given that this is the motivating use for introducing this
> construct, hamstringing them does not seem like a good thing. Of
> course, I can't say if this is a real problem until we have real examples of
> use.

I don't see this as a big problem.  This is no different than deferred
constants.  Their completion has to be placed after the full definition of the
type.
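
That analogy can be sketched as follows (P, T, Zero, and Is_Zero are
hypothetical names); in both cases the completion has to follow the full type
declaration:

```ada
package P is
   type T is private;
   Zero : constant T;                         --  deferred constant
   function Is_Zero (X : T) return Boolean;   --  completed below
private
   type T is new Integer;
   Zero : constant T := 0;                    --  after the full type
   function Is_Zero (X : T) return Boolean is (X = Zero);
end P;
```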

> Even if this rule doesn't cause real problems, it is still going to
> force programmers to rearrange their code from time-to-time, which is
> likely to be an annoyance. And why do we have that annoyance? For
> foolish consistency (point (1) above) and to prevent problems with two
> pathological cases (points (2) and (3) above). That doesn't seem right to me.

I don't agree.  If the completion effectively sets the "elaborated" bit then it
seems it has to freeze.  The only solution would be to defer setting the
"elaborated" bit until the end of the enclosing package spec, but then you
wouldn't be allowed to call it.

I don't see the real problem for the user, given the existing deferred constant
rules.

****************************************************************

From: Randy Brukardt
Date: Monday, November 22, 2010  6:58 PM

> I don't see this as a big problem.  This is no different than deferred
> constants.  Their completion has to be placed after the full
> definition of the type.

Fair enough. I don't honestly know if it is a big problem, either, just that it
is concerning to me, mainly because the reason for the restriction is a set of
unlikely code sequences (calls evaluated in a spec are rare, 'Access in a
spec is reasonably rare, and doing both together is highly unlikely). (Note that
calls in Preconditions and in default expressions aren't usually evaluated in
the spec.)

I was worried about cases with multiple types; having to group all of the
completions at the end may be unnatural (and might separate them a long way
from the associated types).

I suppose if this freezing turns into a problem in practice, it would be less of
a problem to change the freezing with a BI than to force a complex
implementation without any significant evidence of a problem.

> > Even if this rule doesn't cause real problems, it is still going to
> > force programmers to rearrange their code from time-to-time, which
> > is likely to be an annoyance. And why do we have that annoyance? For
> > foolish consistency (point (1) above) and to prevent problems with
> > two pathological cases (points (2) and (3) above). That doesn't
> seem right to me.
>
> I don't agree.  If the completion effectively sets the "elaborated"
> bit then it seems it has to freeze.  The only solution would be to
> defer setting the "elaborated" bit until the end of the enclosing
> package spec, but then you wouldn't be allowed to call it.

As I noted above, I don't think there will be many calls in specs of these
completions, so that the fact that one *might* occur is hardly a good reason for
restricting these. I'd actually prefer the solution you outlined above (no calls
in the spec at all) to the extra freezing.

I didn't want expression functions given in specs to have to have elaboration
bits at all; the reason they need them in specs is weird corner cases like the
example that I started this message with.

OTOH, it's clearly easier and more consistent for them to have elaboration bits,
and for them to have the same semantics as "regular" functions. So I suppose it
probably is best to use the restrictive freezing rule and see if there are
problems in actual practice.

> I don't see the real problem for the user, given the existing deferred
> constant rules.

My code doesn't have very many deferred constants; they usually have a value of
null or 0, and they come immediately after the associated type. That works
because each one is associated with a single type.

Expression functions will have multiple parameters, and at least sometimes those
parameters will all be declared in the same package. (Consider a tagged type and
an associated enumeration, for instance). This freezing will require all of the
expression functions that define completions to come after all of the associated
types, usually at the very end of the package. Moreover, there are going to be
plenty of these functions in a typical package (especially as Janus/Ada can't
inline anything else). In a complex package like Claw, putting them all at the
very end would be pretty ugly (widely separating them from the associated
types).
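
A hypothetical sketch of the layout being described (Shapes, Shape, Color,
Widget, and the two functions are invented names): because each completing
expression function freezes the types it mentions, the completions end up
grouped after all of the full type declarations, rather than next to the
types they belong with:

```ada
package Shapes is
   type Color is (Red, Green, Blue);

   type Shape is tagged private;
   function Painted (S : Shape; C : Color) return Boolean;

   type Widget is tagged private;
   function Visible (W : Widget) return Boolean;

private
   type Shape is tagged record
      Hue : Color := Red;
   end record;

   type Widget is tagged record
      Shown : Boolean := False;
   end record;

   --  The completions are grouped here, at the end of the private
   --  part, after all of the full type declarations.
   function Painted (S : Shape; C : Color) return Boolean is (S.Hue = C);
   function Visible (W : Widget) return Boolean is (W.Shown);
end Shapes;
```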

Even with this, I don't see this as a critical issue; we'll need to see examples
in order to tell the effects. So I think we'll have to defer this one until
expression functions get into significant use.

****************************************************************


Questions? Ask the ACAA Technical Agent