!standard 4.3.3(15)          10-02-01 AI05-0147-1/08
!standard 4.4(1)
!standard 4.5.7(0)
!standard 4.7(2)
!standard 4.7(3)
!standard 4.9(12)
!standard 4.9(33)
!standard 5.3(3)
!standard 5.3(4)
!standard 7.5(2.1/2)
!class amendment 09-03-13
!status work item 09-03-13
!status received 09-03-13
!priority Medium
!difficulty Medium
!subject Conditional expressions
!summary
Conditional expressions are added to Ada.
!problem
It is not unusual that an expression value needs to be calculated in a way that avoids an exception or other boundary. For instance, consider declaring a range based on a parameter:
procedure P (N : in Natural) is
   subtype Calculation_Range is Natural range 0 .. (10_000 / N) + 1;
begin
This obviously doesn't work if N is 0. One might want to use the maximum value in this case, but such an expression is hard to write in Ada.
One common workaround for the lack of conditional expressions in Ada is to multiply by Boolean'Pos:
procedure P (N : in Natural) is
   subtype Calculation_Range is Natural range 0 ..
      (Boolean'Pos(N=0) * 10_000 + Boolean'Pos(N/=0)*(10_000 / N)) + 1;
begin
But this doesn't work because the entire expression will be evaluated even when N is 0, including the divide-by-zero.
Similar problems occur when avoiding overflow in the expressions for the bounds of a range.
One could write a local function to evaluate the expression properly, but that doesn't work if the expression is used in a context where it has to be static (such as specifying the size of a subtype). In addition, a local function requires putting the expression in a body, possibly requiring the addition of a body to an otherwise subprogram-less package.
A conditional expression would make writing this expression easy:
subtype Calculation_Range is Natural range 0 .. (if N=0 then 10_000 else 10_000 / N) + 1;
The pending addition of preconditions to Ada will greatly increase this need. It is not at all unusual to have the value of one parameter depend on another. It is possible to write a Boolean expression like
Precondition => not (Param_1 >= 0) or Param_2 /= ""
but this sacrifices a lot of readability, when what is actually meant is
Precondition => (if Param_1 >= 0 then Param_2 /= "" else True)
or even more simply
Precondition => (if Param_1 >= 0 then Param_2 /= "")
(depending on an implicit else).
Another situation is renaming an object determined at runtime:
procedure S (A, B : Some_Type) is
   Working_Object : Some_Type renames Some_Type'(if Some_Func(A) then A else B);
begin
   -- Use Working_Object in a large chunk of code.
end S;
In Ada currently, you would have to either duplicate the working code (a bad idea if it is large) or make the working code into a subprogram (which adds overhead, and would complicate the use of exit and return control structures).
For all of these reasons, we propose to add conditional expressions to Ada.
!proposal
(See wording.)
!wording
Change the . to a ; at the end of 4.3.3(15), and add a following bullet:
* For a conditional_expression, the applicable index constraint for
each dependent_expression is that, if any, defined for the conditional_expression.
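For illustration only (these declarations are hypothetical and not part of the wording), this bullet is what allows an "others" choice in each dependent aggregate of a conditional_expression used to initialize a constrained array object:
   type Vector is array (1 .. 10) of Integer;
   Flag : constant Boolean := True;
   V : constant Vector := (if Flag then (others => 0) else (others => 1));
The applicable index constraint defined for the conditional_expression (here, the one from V's constrained subtype) is passed on to each dependent aggregate, which is what makes the "others" choices legal.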
Modify 4.4(1):
In this International Standard, the term "expression" refers to a construct of the syntactic category expression or of any of the [other five syntactic categories defined below]{following categories: relation, simple_expression, term, factor, primary, conditional_expression}.
Add a new clause:
4.5.7 Conditional expressions
Syntax
conditional_expression ::=
   if condition then *dependent*_expression
   {elsif condition then *dependent*_expression}
   [else *dependent*_expression]
condition ::= *boolean_*expression
Wherever the Syntax Rules allow an expression, a conditional_expression may be used in place of the expression, so long as it is immediately surrounded by parentheses.
AARM Discussion: The syntactic category conditional_expression is not explicitly referenced by any other BNF syntax rules. The above rule "plugs it in" to the rest of the grammar, by allowing conditional_expression to play the syntactic role of expression, except that it must be parenthesized.
One of the possibilities for primary is (expression). The above rule implies that we can use (conditional_expression) instead, so the following are syntactically legal:
A := (if X then Y else Z);           -- parentheses required
A := B + (if X then Y else Z) + C;   -- parentheses required
The following procedure calls are syntactically legal:
P(if X then Y else Z);
P((if X then Y else Z));   -- redundant parentheses
P((if X then Y else Z), Some_Other_Param);
P(Some_Other_Param, (if X then Y else Z));
P(Formal => (if X then Y else Z));
whereas the following are illegal:
P(if X then Y else Z, Some_Other_Param);
P(Some_Other_Param, if X then Y else Z);
P(Formal => if X then Y else Z);
because in these latter cases, the conditional_expression is not immediately surrounded by parentheses (which means on both sides!).
The above English-language rule is equivalent to modifying the BNF as follows, as far as syntax goes:
expression_within_parens ::= expression | conditional_expression
singleton_list ::= ( conditional_expression )
primary ::= numeric_literal | null | string_literal | aggregate | name | allocator | (expression_within_parens)
pragma_argument_association_list ::= (pragma_argument_association {, pragma_argument_association}) | singleton_list
pragma ::= pragma identifier [pragma_argument_association_list] ;
discriminant_constraint ::= (discriminant_association {, discriminant_association}) | singleton_list
index_expression_list ::= (expression {, expression}) | singleton_list
indexed_component ::= prefix index_expression_list
attribute_designator ::= identifier[(*static*_expression_within_parens)]
range_attribute_designator ::= range[(*static*_expression_within_parens)]
type_conversion ::= subtype_mark (expression_within_parens)
qualified_expression ::=
subtype_mark'(expression_within_parens)
actual_parameter_part ::= (parameter_association {, parameter_association}) | singleton_list
entry_index ::= expression_within_parens
generic_actual_part ::= (generic_association {, generic_association}) | singleton_list
We chose not to make this modification because it is a huge change to the BNF grammar, and in addition would require a lot of English text that refers to syntactic categories to change.
AARM Implementation Note: Implementers are cautioned to consider error detection when implementing the syntax for conditional_expressions. Conditional_expressions and if_statements are syntactically very similar, and simple mistakes can appear to change one into the other, potentially causing errors to be reported far away from their actual location.
Name Resolution Rules
A condition is expected to be of any boolean type.
If a conditional_expression is expected to be of a type T, the expected type for each dependent_expression of the conditional_expression is T. If a conditional_expression shall resolve to a type T, each dependent_expression shall resolve to T.
AARM To Be Honest: T in this rule could be any type in a class of types
(including the class of all types), or (for the second rule) an anonymous access type (for renames) or a universal type covering some type (for qualified expressions).
Legality Rules
If the expected type of a conditional_expression is any type in a class of types (instead of a particular type), all dependent_expressions of the conditional_expression shall have the same type.
If there is no "else" dependent_expression, all of the dependent_expressions of the conditional_expression shall be of a boolean type.
If the expected type of a conditional_expression is a specific tagged type, all of the dependent_expressions of the conditional_expression shall be dynamically tagged, or none shall be dynamically tagged; the conditional_expression is dynamically tagged if all of the dependent_expressions are dynamically tagged, is tag-indeterminate if all of the dependent_expressions are tag-indeterminate, and is statically tagged otherwise.
[Editor's note: We don't try to define the type of the conditional expression; it gets complex when implicit conversions are involved. There may not be a unique type that is identifiable (especially as dynamically tagged expressions might need to be converted to be classwide).]
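For illustration only (hypothetical declarations, not part of the wording), the resolution and legality rules above allow and reject the following:
   Cond : constant Boolean := True;
   N    : constant Integer := 5;

   X : constant Integer := (if Cond then N else 0);   -- legal; each dependent_expression resolves to Integer
   B : constant Boolean := (if Cond then N > 0);      -- legal; no "else", all dependent_expressions are boolean
   -- I : constant Integer := (if Cond then N);       -- illegal; a non-boolean conditional_expression requires "else"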
Dynamic Semantics
For the execution of a conditional expression, the condition specified after if, and any conditions specified after elsif, are evaluated in succession (treating a final else as elsif True then), until one evaluates to True or all conditions are evaluated and yield False. If a condition evaluates to True, the associated dependent_expression is evaluated and its value is the value of the expression. Otherwise, the value of the expression is True.
[Editor's note: This is nearly a copy of 5.3(5). I left the clunkiness intact. Note that the final "Otherwise" can apply only to a boolean conditional expression, as an "else" is required in all other cases.]
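For illustration only (hypothetical declarations, not part of the wording), the implicit "else True" together with the fact that only the chosen dependent_expression is evaluated gives the "implies" behavior used for preconditions:
   N  : constant Natural := 0;
   OK : constant Boolean := (if N > 0 then 10_000 / N < 500);
   -- N > 0 is False and there is no "else", so OK = True;
   -- the division 10_000 / N is never evaluated.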
Add after 4.9(12):
* A conditional_expression all of whose conditions and dependent_expressions are
static expressions;
Replace 4.9(33) by:
An expression is statically unevaluated if it is part of:
* the right operand of a static short-circuit control form whose
value is determined by its left operand; or
* a *dependent_*expression of a conditional_expression whose associated
condition is static and equals False; or
* a condition or *dependent_*expression of a conditional_expression
where the condition corresponding to at least one preceding dependent_expression of the conditional_expression is static and equals True.
AARM Discussion: We need the "of the conditional_expression" here so there is no confusion for nested conditionals; this rule only applies to the conditions and dependent_expressions of a single conditional expression.
AARM Reason: We need the last bullet so that only a single dependent_expression is evaluated in a static conditional expression if there is more than one condition that evaluates to True. The part about conditions makes
(if N = 0 then Min elsif 10_000/N > Min then 10_000/N else Min)
legal if N and Min are static and N = 0. End AARM Notes
A static expression is evaluated at compile time except when it is statically unevaluated. The compile-time evaluation of a static expression is performed exactly, without performing Overflow_Checks. For a static expression that is evaluated:
Delete 5.3(3-4) [they were moved to 4.5.7]
Modify 7.5(2.1/2):
In the following contexts, an expression of a limited type is not permitted unless it is an aggregate, a function_call, [or ]a parenthesized expression or qualified_expression whose operand is permitted by this rule{, or a conditional_expression all of whose dependent_expressions are permitted by this rule}:
!discussion
The syntax of Ada requires some sort of surrounding syntax to distinguish a conditional expression from an if_statement. Without that requirement, we would have code like:
if if Func1 then Func2 then Proc1; end if;
A := if B then C else D + 1;
Even if the compiler can figure it out (which is not clear to the author), it would be dicey for a human, and small errors such as leaving out a semicolon could change the meaning of a statement drastically. In the second case, whether
A := (if B then C else D) + 1;
or
A := (if B then C else D + 1);
was meant will not be evident to the reader. Nor will the one selected by the compiler (the first one) be obvious.
The obvious choice is to surround a conditional expression with parentheses. This is already a common Ada design choice, and will be familiar to Ada programmers. Other choices, such as using square brackets (currently unused in Ada), or using syntax other than "if", would be less familiar and would confuse programmers as to which syntax to use in a given location.
Thus, we design conditional expressions to work like a fancy set of parentheses. This is the reason that they are defined to be a primary rather than some other sort of expression. The fact that the parentheses are required makes this preferable (it would be confusing otherwise).
Requiring extra pairs of parentheses for conditional expressions is annoying. Consider:
pragma Assert ((if A then B else C));
Proc ((if A then B else C));
We thus adopt rules to allow elimination of the extra parentheses in all contexts where they are directly nested and thus two sets of parentheses are directly adjacent.
pragma Assert (if A then B else C);
Proc (if A then B else C);
(Note that we can't do this for aggregates, as the named notation of aggregates could also be the named notation of a call.)
But we don't allow removing the parentheses in any other context, such as additional parameters, choices, etc. In particular, the extra parens are required for named notation:
Proc (Param => if A then B else C); -- Illegal.
We considered additional cases where making parentheses optional would help. We wanted to allow the above case, but it becomes ambiguous for case expressions (AI05-0188-1), and we want to use the same rules for all kinds of expressions.
More aggressive rules would make it much harder to create unambiguous syntax rules. Most likely, the parentheses requirements would have to be enforced as legality rules. That would have the effect of making syntax error correction much harder. Consider the following syntactically incorrect Ada code:
exit when if Foo then B := 10; end if;
Currently, syntax error correction most likely would identify a missing expression and semicolon. With the conditional expression syntax without parentheses, the syntax is correct up until the ":=", at which point it is likely too late for the compiler to determine that the real error occurred much earlier. Nor would it be easy for the programmer to see where the error is. (Remember that Foo and B can be arbitrarily complex and on multiple lines.) With a required parenthesis, syntax correction would either identify the required parenthesis or the expression as missing, showing the correct point of the error.
---
We considered using syntax changes to describe the grammar, but that was too complex. The AARM Discussion in 4.5.7 gives a basic explanation. Note that just leaving the parentheses out everywhere would make expressions unavoidably ambiguous, so implementations will most likely want to use a grammar like the suggested changes in place of the language-defined rules. The following were the proposed grammar changes:
In 4.4, define:
expression_within_parentheses ::= expression | conditional_expression
In each of the following, change expression to expression_within_parentheses.
primary ::= (expression_within_parentheses)
qualified_expression ::=
subtype_mark'(expression_within_parentheses)
type_conversion ::= subtype_mark (expression_within_parentheses)
attribute_designator ::= identifier[(*static*_expression_within_parentheses)]
range_attribute_designator ::= range[(*static*_expression_within_parentheses)]
entry_index ::= expression_within_parentheses
Add productions to the following:
actual_parameter_part ::= (conditional_expression)
generic_actual_part ::= (conditional_expression)
discriminant_constraint ::= (conditional_expression)
For indexed components, replace
indexed_component ::= prefix(expression {, expression})
by:
index_expression_list ::= (expression {, expression}) | (conditional_expression)
indexed_component ::= prefix index_expression_list
For pragmas, replace
pragma ::= pragma identifier [(pragma_argument_association {, pragma_argument_association})];
by:
pragma_argument_association_list ::= (pragma_argument_association {, pragma_argument_association}) | (conditional_expression)
pragma ::= pragma identifier [pragma_argument_association_list] ;
---
Resolution of a conditional_expression also follows the model of parentheses. (That's made harder by the complete absence of such rules in the Ada Standard, so there is nothing to copy.) That is, whatever the context expects/requires is what is expected/required of the dependent expressions.
We considered aggregate-like resolution, but that would have used too little of the context. For instance:
Put_Line (Integer'Image (Num_Errors) &
(if Num_Errors = 1 then "error detected." else "errors detected."));
would be ambiguous because the operand of & could be either String or Character.
We augment the resolution rule with legality rules to ensure that all of the dependent_expressions have the same type if the context does not identify a particular type. This is needed to prevent cases like:
type Some_Boolean is new Boolean;
function Foo return Some_Boolean;

Cond : Boolean := ...;
Var : Natural := ...;
if (if Cond then Var > 10 else Foo) then ... -- Illegal by legality rule.
We also have a rule to disallow mixing statically tagged and dynamically tagged expressions in the same conditional_expression; that makes enforcing 3.9.2(8) possible.
---
We allow the "else" branch to be omitted for boolean-valued conditional expressions. This eases the use of conditional expressions in preconditions and postconditions, as it provides a very readable form of the "implies" relationship of Boolean algebra. That is,
Assert that A implies B
could be written as
pragma Assert(if A then B)
In this case, the "else" branch is more noise than information.
---
Conditional_Expressions are static if all of their conditions and expressions are static. But expressions that are not used are not evaluated (and thus the program is not illegal if one of those expressions would raise an exception). Note that this latter rule applies even to non-static expressions if the controlling condition is static. This is similar to the rules for short circuit operations.
This means that:
(if False then Some_Constant / 0 else 123) is static (with a value of 123).
but
(if False then Some_Function_Call / 0 else 123) is not static.
Note that while (if N = 0 then Some_Function_Call else 10_000 / N) is not static even if N is static, the expression is always legal. That's because the else expression is not evaluated when N is 0 (the expression would be illegal if it were).
---
There is no obvious term for the concept defined as "statically unevaluated" above. Here are some of the many choices considered:
statically unselected
statically unevaluated
statically unevaluable
statically ineffective
statically unreachable
statically irrelevant
statically dead
statically moot
moot
The term chosen seems OK, but none of them are great.
----
We allow conditional_expressions in build-in-place contexts, subject to the normal rules for operands. The model is that a conditional_expression works like a set of parentheses; it does not copy any objects or create any temporaries on its own.
The implementation of such a build-in-place conditional expression shouldn't be too complex; the address of the target object is passed to whichever dependent_expression is actually evaluated.
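As a sketch of such a use (the type and function names here are hypothetical, and the bodies exist only to make the example self-contained):

   procedure Demo (Use_Log : Boolean) is
      type File is limited null record;     -- stand-in for a real limited type
      function Open (Name : String) return File is
      begin
         return (null record);              -- a real Open would do more
      end Open;
      function Standard_Log return File is
      begin
         return (null record);
      end Standard_Log;

      -- Legal under the modified 7.5(2.1/2): each dependent_expression is a
      -- function_call. The address of Output is passed to whichever call is
      -- actually evaluated, so no copy or temporary is needed.
      Output : File :=
        (if Use_Log then Open ("demo.log") else Standard_Log);
   begin
      null;  -- work with Output would go here
   end Demo;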
!examples
Another use of conditional expressions is getting plurals right in output. Much Ada code doesn't even try to get this right, because doing so requires duplicating the entire call.
For instance, we have to write:
if Num_Errors = 1 then
   Put_Line ("Compilation of " & Source_Name & " completed with" &
             Error_Count'Image(Num_Errors) & " error detected.");
else
   Put_Line ("Compilation of " & Source_Name & " completed with" &
             Error_Count'Image(Num_Errors) & " errors detected.");
end if;
The duplication of course brings the hazard of not making the same changes to each message during maintenance.
Using a conditional expression, we can eliminate this duplication:
Put_Line ("Compilation of " & Source_Name & "completed with" &
Error_Count'Image(Num_Errors) & (if Num_Errors = 1 then "error" else "errors") & " detected.");
!ACATS test
ACATS B and C tests are needed.
!appendix

From: Tucker Taft
Sent: Sunday, February 22, 2009  4:51 AM

[Find the rest of this message in AI05-0145-1.]

Somewhat independent suggestion:

   Add "X implies Y" as a new short-circuit operation meaning
     "not X or else Y".

   By making it a short-circuit operation, we avoid the
   burden of worrying about user-defined "implies" operators
   (which then might no longer mean "not X or Y"),
   boolean array "implies" operators, etc., and the compiler
   can trivially transform them to something it already
   knows how to compile.

   I suspect "implies" will be used almost exclusively in Assert
   and pre/post conditions.


****************************************************************

From: Bob Duff
Sent: Tuesday, February 24, 2009  10:25 AM

It rubs me the wrong way to have short-circuit and non-short-circuit versions
of and[then] and or[else], but not for implies.  How about "implies" and
"implies then"?  Seems more uniform.

I don't see why anybody would "worry" about user-defined "implies", any more
than they would worry about user-defined "and".
"Implies" on arrays is not terribly useful, but the implementation is trivial,
so I'd go for uniformity.

Note that "<=" on Booleans means "implies" (not short circuit, of course).

>    I suspect "implies" will be used almost exclusively in Assert
>    and pre/post conditions.

...which makes any efficiency argument in favor of short-circuits not so
important.

By the way, coding convention at AdaCore is to always use "and then" and "or
else", and never "and" or "or".  (Well, at least for Booleans -- I suppose we're
allowed to "and" arrays.)  I don't agree with this policy -- I think the
short-circuit forms should be used when the meaningfulness of the right-hand
side depends on the value of the left-hand side, and perhaps for efficiency in
some cases.

If I were writing an assertion using "implies" (in my own code, not subject to
AdaCore rules!), I would normally want non-short-circuit, so that if the
right-hand side is buggy, it will get evaluated and cause an exception. I'd
reserve short-circuits for things like "Assert(X /= null implies then X.all >
0);".

Eiffel has "and then", "or else", and "implies", which are short-circuit (called
"semi-strict").  It also has "and" and "or", which _might_ be short-circuit,
depending on the whim of the compiler writer -- yuck.

****************************************************************

From: Robert Dewar
Sent: Thursday, February 26, 2009  3:28 AM

> It rubs me the wrong way to have short-circuit and non-short-circuit
> versions of and[then] and or[else], but not for implies.  How about
> "implies" and "implies then"?  Seems more uniform.

This is uniformity run amok. The non-short circuited versions of AND and OR
are dubious in any case, and "implies then" is plain horrible, if you must
have two different operators, use a horrible name for the non-sc form of
implies :-) In fact you already have the horrible name <=, those who insist
on the non-short-circuited form can use that name.

> I don't see why anybody would "worry" about user-defined "implies",
> any more than they would worry about user-defined "and".
> "Implies" on arrays is not terribly useful, but the implementation is
> trivial, so I'd go for uniformity.

Again, this is uniformity run amok to me, implies on arrays is like NOT on
non-binary modular types, an unhelpful result of orthogonality likely to
correspond to a coding error.

> By the way, coding convention at AdaCore is to always use "and then"
> and "or else", and never "and" or "or".  (Well, at least for Booleans
> -- I suppose we're allowed to "and" arrays.)  I don't agree with this
> policy -- I think the short-circuit forms should be used when the
> meaningfulness of the right-hand side depends on the value of the
> left-hand side, and perhaps for efficiency in some cases.

Actually the other case where AND/OR are allowed is for plain boolean
variables. where indeed the operations are more like boolean arithmetic.

The reason I think that Bob's preference is plain horrible is that it is
VERY rare but not impossible to have a situation where you rely on the
non-short-circuited semantics, and in such a case you use AND/OR relying on
this. If you use AND/OR routinely, then such rare cases are seriously buried.

In English AND/OR short circuit, if you hear an announcement at an airport
that says "All passengers who are reserved on China Airlines flight 127 and
who have not checked in at the checkin counter should now proceed directly
to the gate".

you stop listening at 127. You do not carefully evaluate the first part to
False, then the second part to true (because you have checked in for your
flight), and then calculate that (False AND True) Is False so you can ignore
the announcement after all.

> If I were writing an assertion using "implies" (in my own code, not
> subject to AdaCore rules!), I would normally want non-short-circuit,
> so that if the right-hand side is buggy, it will get evaluated and cause an
> exception. I'd reserve short-circuits for things like "Assert(X /= null
> implies then X.all > 0);".

I definitely disagree strongly with this, and I really can't believe anyone
would coutenance the syntax in this implies then" example, it's nothing like
english usage, whereas "AND THEN" and "OR ELSE" are reasonably in the domain
of normal english.

> Eiffel has "and then", "or else", and "implies", which are
> short-circuit (called "semi-strict").  It also has "and" and "or",
> which _might_ be short-circuit, depending on the whim of the compiler writer -- yuck.

Sounds the right choice to me, Eiffel has it right, and Ada has it wrong, and
Ada will be even wronger if it has "implies then". The nice thing about the
Eiffel choice is that it clearly indicates that AND/OR are to be used when
you don't care about short circuiting, but they don't imply the inefficiency
of forced evaluation, or worse still cover up a case where the full evaluation
is required for correctness.

In Eiffel, I would go for Bob's view of using AND/OR routinely and reserving
AND THEN/OR ELSE for the cases where left to right short circuiting is required
for correctness.

I did not know this feature in Eiffel, definitely the right choice I think.

****************************************************************

From: Bob Duff
Sent: Thursday, February 26, 2009  7:27 AM

> > It rubs me the wrong way to have short-circuit and non-short-circuit
> > versions of and[then] and or[else], but not for implies.  How about
> > "implies" and "implies then"?  Seems more uniform.
>
> This is uniformity run amok.

Of course -- it's no surprise that someone who never uses "and" would see
no use in a non-short-circuit version of "implies".

>... The non-short circuited versions of AND and  OR are dubious in any
>case, and "implies then" is plain horrible, if you  must have two
>different operators, use a horrible name for the non-sc  form of
>implies :-) In fact you already have the horrible name <=, those  who
>insist on the non-short-circuited form can use that name.

I sometimes write:

    pragma Assert (Is_Good (X) <= -- implies
                   Is_Gnarly (Y));

The comment is necessary.  ;-)

I can live with that, but I'd prefer to say "implies" instead of
"<= -- implies".

> > I don't see why anybody would "worry" about user-defined "implies",
> > any more than they would worry about user-defined "and".
> > "Implies" on arrays is not terribly useful, but the implementation
> > is trivial, so I'd go for uniformity.
>
> Again, this is uniformity run amok to me, implies on arrays is like
> NOT on non-binary modular types, an unhelpful result of orthogonality
> likely to correspond to a coding error.

I doubt it would cause coding errors.  I think "implies" on arrays, like "not"
on non-binary modular types [*] would just be a feature nobody would use.

[*] I don't think anybody should use non-binary modular types, with or
without "not".  ;-)

> > By the way, coding convention at AdaCore is to always use "and then"
> > and "or else", and never "and" or "or".  (Well, at least for
> > Booleans -- I suppose we're allowed to "and" arrays.)  I don't agree
> > with this policy -- I think the short-circuit forms should be used
> > when the meaningfulness of the right-hand side depends on the value
> > of the left-hand side, and perhaps for efficiency in some cases.
>
> Actually the other case where AND/OR are allowed is for plain boolean
> variables. where indeed the operations are more like boolean arithmetic.
>
> The reason I think that Bob's preference is plain horrible is that it
> is VERY rare but not impossible to have a situation where you rely on
> the
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> non-short-circuited semantics, and in such a case you use AND/OR
> relying on this. If you use AND/OR routinely, then such rare cases are
> seriously buried.

Probably the root of our disagreement is that I disagree with your "not
impossible" above -- one should NEVER, EVER write code that relies on
non-short-circuited semantics.  Therefore, I only want to distinguish two
cases: (1) I rely on short-circuit semantics ("and then"), (2) I do not
rely on it ("and").

> In English AND/OR short circuit, if you hear an announcement at an
> airport that says "All passengers who are reserved on China Airlines
> flight 127 and who have not checked in at the checkin counter should
> now proceed directly to the gate".
>
> you stop listening at 127.

You had better not -- what if the next word is "or".  ;-)

Your analogy proves that you spend too much time traveling.  ;-)

>... You do not carefully evaluate the first  part to False, then the
>second part to true (because you have checked  in for your flight), and
>then calculate that (False AND True) Is  False so you can ignore the
>announcement after all.

Shrug.  If you say "multiply the number of unicorns in the world by the
cost of feeding a unicorn", I can stop listening at "by".
It doesn't mean that 0 * F(X) should fail to evaluate F(X) in Ada.

...
> > Eiffel has "and then", "or else", and "implies", which are
> > short-circuit (called "semi-strict").  It also has "and" and "or",
> > which _might_ be short-circuit, depending on the whim of the compiler writer -- yuck.
>
> Sounds the right choice to me, Eiffel has it right, and Ada has it
> wrong, and Ada will be even wronger if it has "implies then". The nice
> thing about the Eiffel choice is that it clearly indicates that AND/OR
> are to be used when you don't care about short circuiting, but they
> don't imply the inefficiency of forced evaluation, or worse still
> cover up a case where the full evaluation is required for correctness.

The problem with the Eiffel rule is that if you _accidentally_ care, you've
got a latent bug that will rear its ugly head when you turn on the optimizer,
or switch compilers, or make a seemingly-unrelated code change.  We both agree
that you _shouldn't_ care, but bugs do happen, and bugs discovered "later" are
very costly.

> In Eiffel, I would go for Bob's view of using AND/OR routinely and
> reserving AND THEN/OR ELSE for the cases where left to right short
> circuiting is required for correctness.
>
> I did not know this feature in Eiffel, definitely the right choice I
> think.

I guess you and I will never agree on this issue.
I can live with that.  And I'll continue to put in the noise word "then" after
every "and" in AdaCore code.  ;-)

****************************************************************

From: Tucker Taft
Sent: Thursday, February 26, 2009  9:13 AM

>>    I suspect "implies" will be used almost exclusively in Assert
>>    and pre/post conditions.
>
> ...which makes any efficiency argument in favor of short-circuits not
> so important.

I don't think it is an efficiency argument.  To me, it seems quite clear
in "A implies B" that if A is False, then you have no reason to evaluate B,
and my brain would intuitively take advantage of that.  E.g.

    X /= null implies X.Kind = Abc

There seems no particular advantage to having a non-short-circuit version,
and if you need short-circuiting, as in the above case, you can't get it.

> By the way, coding convention at AdaCore is to always use "and then"
> and "or else", and never "and" or "or".  (Well, at least for Booleans
> -- I suppose we're allowed to "and" arrays.)  I don't agree with this
> policy -- I think the short-circuit forms should be used when the
> meaningfulness of the right-hand side depends on the value of the
> left-hand side, and perhaps for efficiency in some cases.

I do agree with this convention, but I realize you don't.
I think in part it depends on what is your first language.
You were a heavy Pascal user for a while, I believe, and in Pascal, all you
have are "and" and "or."

To me it is always safer, and frequently more efficient, to use "A and then B."
Also, in the case when the compiler could convert "A and B" into "A and then B"
for efficiency, it could go the other way as well.  Hence if I always write "A
and then B," the compiler will use short-circuiting by default, but if it is
smart and B is sufficiently simple as to clearly have no side-effects, and for
some reason "A and B" is more efficient (e.g. it eliminates a branch), then the
compiler can choose it.  On the other hand, if I use "A and B" always, and "B"
is at all expensive to compute, then the compiler has to be pretty highly
optimizing to determine that it can do the short-circuiting, in exactly the
cases where it is most important to do so.

> If I were writing an assertion using "implies" (in my own code, not
> subject to AdaCore rules!), I would normally want non-short-circuit,
> so that if the right-hand side is buggy, it will get evaluated and cause an
> exception. I'd reserve short-circuits for things like "Assert(X /= null implies
> then X.all > 0);".

I'm not convinced.  To me "implies" suggests short circuiting quite strongly,
since if the antecedent is False, the whole thing is uninteresting.  It is an
infix form for "if A then B" in my mind.

> Eiffel has "and then", "or else", and "implies", which are
> short-circuit (called "semi-strict").  It also has "and" and "or",
> which _might_ be short-circuit, depending on the whim of the compiler
> writer -- yuck.

I don't like the last part, but I agree with the first part.
I think Ada's rules are right, but I choose to think of them as the compiler
can short-circuit "and"/"or" only if the second operand has no side-effects,
which is the usual "as if" rules.

****************************************************************

From: Bob Duff
Sent: Thursday, February 26, 2009 10:53 AM

...
> I don't think it is an efficiency argument.

And having said that, you proceed to argue based (partly) on efficiency, below.
;-)

I understand that efficiency is not the entire issue, but it is part of the
issue. And I think you agree with that.

>...To me,
> it seems quite clear in "A implies B" that if A is  False, then you
>have no reason to evaluate B, and  my brain would intuitively take
>advantage of that.

Of course you think that -- you think the same way about "and[then]".
Same reason as Robert, with his airport analogy.  Your view is internally
consistent, but inconsistent with the way Ada is.

>...E.g.
>
>     X /= null implies X.Kind = Abc
>
> There seems no particular advantage to having a non-short-circuit
> version, ...

The advantage is that the cases where s-c is needed stand out in the
code -- readability. There's something "special" or "interesting" about
the above example (as compared to, say, "X.This = 0 and X.That = 1").

>...and if you need
> short-circuiting, as in the above case, you can't  get it.

No, my proposal is to provide both s-c and non-s-c, as we do for and[then].
You and I and Robert all agree there's a need for a s-c version of implication.

...
> I do agree with this convention, but I realize you don't.
> I think in part it depends on what is your first language.
> You were a heavy Pascal user for a while, I believe, and in Pascal,
> all you have are "and" and "or."

I realize that many programmers think that whatever they learned first is
"the way things ought to be".  But no, I am immune to that mistake.
Proof: there are very few things I think Pascal does better than Ada.

Anyway, my first language was Fortran.

The rule in Pascal, which I just looked up, is that the operands of and/or are
both evaluated, and are evaluated left-to-right -- same as all other operators.
I'm talking about Jensen and Wirth's "Pascal User Manual and Report". I'm
ignoring the ISO Pascal standard, because I didn't read it until after I had
quit using Pascal, and I don't have a copy, and I'm too lazy to google for it.

However, I have used at least one Pascal compiler that did s-c evaluation for
and/or.  The Pascal culture never cared a lot about obeying standards, and
anyway J&W's book is too vague to take seriously in that way (although it does
clearly specify what I said above about and/or).

> To me it is always safer, and frequently more efficient, to use "A and
> then B."  Also, in the case when the compiler could convert "A and B"
> into "A and then B" for efficiency, it could go the other way as well.
> Hence if I always write "A and then B," the compiler will use
> short-circuiting by default, but if it is smart and B is sufficiently
> simple as to clearly have no side-effects, and for some reason "A and
> B" is more efficient (e.g. it eliminates a branch), then the compiler
> can choose it.  On the other hand, if I use "A and B" always, and "B"
> is at all expensive to compute, then the compiler has to be pretty
> highly optimizing to determine that it can do the short-circuiting, in
> exactly the cases where it is most important to do so.
>
> > If I were writing an assertion using "implies" (in my own code, not
> > subject to AdaCore rules!), I would normally want non-short-circuit,
> > so that if the right-hand side is buggy, it will get evaluated and cause an exception.
> > I'd reserve short-circuits for things like "Assert(X /= null implies
> > then X.all > 0);".
>
> I'm not convinced.  To me "implies" suggests short circuiting quite
> strongly, since if the antecedent is False, the whole thing is
> uninteresting.

I think you mean "the right-hand side is uninteresting" -- the "whole thing"
is False, which is perfectly "interesting".

I understand how you can see it that way, but only if you say the same thing
about "and" -- if the first argument is False, the rest is uninteresting.

...
> > Eiffel has "and then", "or else", and "implies", which are
> > short-circuit (called "semi-strict").  It also has "and" and "or",
> > which _might_ be short-circuit, depending on the whim of the compiler
> > writer -- yuck.
>
> I don't like the last part, but I agree with the first part.
> I think Ada's rules are right, but I choose to think of them as the
> compiler can short-circuit "and"/"or" only if the second operand has
> no side-effects, which is the usual "as if" rules.

The second operand never has important side effects, unless the code is buggy.
Unfortunately, an Ada compiler can't know that.

Anyway, I don't think I'll ever convince you or Robert, and we haven't heard
anyone else's opinion.  So I guess I'll quit arguing, for now.

For the record, I can tolerate the language as it is, and I can tolerate a s-c
"implies", but my preference is still to provide both s-c and non-s-c versions
of implication.

****************************************************************

From: Randy Brukardt
Sent: Saturday, March 14, 2009  7:45 PM

>The rule in Pascal, which I just looked up, is that the operands of
>and/or are both evaluated, and are evaluated left-to-right -- same as
>all other operators. I'm talking about Jensen and Wirth's "Pascal User
>Manual and Report".
>I'm ignoring the ISO Pascal standard, because I didn't read it until
>after I had quit using Pascal, and I don't have a copy, and I'm too
>lazy to Google for it.

I have the IEEE Pascal standard (ANSI/IEEE 770X3.97.97-1983) on the shelf
here (that's on paper, of course). There is nothing special about Boolean
operators (there are only two sentences).

The overall expression rule is:

    The order of evaluation of the operands of a dyadic operator shall be
    implementation-dependent.

[Shall be implementation-dependent? That means "must be anything you want",
which is not much of a requirement! Terrible use of "shall".]

They then follow that up with a note to make sure that everyone knows that
they didn't specify anything at all:

NOTE: This means, for example, that the operands may be evaluated in textual
order, or in reverse order, or in parallel or they may not both be evaluated.

So Pascal doesn't specify anything at all. Which probably has nothing to do
with anything. :-)

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 26, 2009  1:41 PM

...
> Anyway, I don't think I'll ever convince you or Robert, and we haven't
> heard anyone else's opinion.  So I guess I'll quit arguing, for now.
>
> For the record, I can tolerate the language as it is, and I can
> tolerate a s-c "implies", but my preference is still to provide both
> s-c and non-s-c versions of implication.

Well, you essentially asked for someone else's opinion, so here it is:

I have no idea what logic operation "implies" represents; I have enough trouble
figuring out whether to use "and" or "or". (I sometimes still have to fall back
to constructing truth tables, even after 25 years of programming.) So I doubt
that I would use "implies" at all.

I could be convinced to add it only if doing so was really simple (obviously,
there are others that would like to use it). Having two versions, an
overrideable operator, best named "?" or "?" (that's the Unicode single and
double right-pointing arrows, in case you aren't using a full Unicode font),
and other cruft does not appear to be simple in my book.

I would probably vote against adding anything that complex (especially as this
is supposed to be a "modest" update).

****************************************************************

From: Robert I. Eachus
Sent: Thursday, February 26, 2009  8:39 PM

>[*] I don't think anybody should use non-binary modular types, with or
>without "not".  ;-)

Well, sooorry about that. They may be esoterica for most people, but they show
up all over the place in cryptography, hash tables, factoring large numbers,
random number generators, and radar.  Radar?  Yes, remember the Chinese
Remainder Theorem?  Pulse Doppler radar works by sending out pulses alternately
at two (or more) pulse rates which are evenly divisible by two (or more)
relatively prime numbers. The Chinese Remainder Theorem is then used to
disambiguate the range information. I'm sure you recognize that the distance to
another aircraft--or a mountain--is safety critical information.   Having
support for non-binary moduli in Ada simplifies that code a lot, which makes
validating it much easier.  Yes, belt and suspenders type checking is done on
some of machine code.  It is also easier to check the algorithm for correctness,
then check that the machine code implements the algorithm as written, with
hardware support for non-binary moduli..

I know it is difficult to implement support for non-binary moduli greater than
65535 on some hardware, but that is fine.  Most uses of non-binary moduli on
such systems fit nicely.  For  the number theory and crypto people, even
(2**1023)-1 can be too small, but we are used to rolling our own when efficient
bignum arithmetic is needed.

****************************************************************

From: Robert Dewar
Sent: Friday, February 27, 2009  11:20 AM

> Well, sooorry about that. They may be esoterica for most people, but
> they show up all over the place in cryptography, hash tables,
> factoring large numbers, random number generators, and radar.  Radar?
> Yes, remember the Chinese Remainder Theorem?

You misunderstand Bob entirely, he doesn't think that it is wrong to use modular
integers with a non-binary modulus in programming, he just thinks the use of the
Ada language feature is a VERY bad idea, and I strongly agree. I would always
ban them in any Ada coding standard, and thus insist that people make the mod
operators explicit, which I think makes code in all these areas much clearer.

As you say there are many applications where such modular types make sense, but
that does not mean you should use the nasty implicit feature in Ada (especially
as this is uniquely an Ada oddity).

This is a place where Ada has a seductive feature that sounds great for all
these kind of applications, but in fact it is a bad idea to use it.

And if anyone uses NOT on such a type, they should be banned from using this
feature permanently.

****************************************************************

From: Robert I. Eachus
Sent: Friday, February 27, 2009  2:50 PM

There are two problems with that position.  The first is that the compiler can
and does use the knowledge that the arguments are both of the same modular type
in implementing the code.  You want A + B mod M to be implemented as:

C := A + B;
if C > M
then C := C-M;
end if;

You certainly don't want the integer division that will normally be used by the
compiler if the code is at all time critical.  Also you often can't use (full
word) binary modular types because overflow silently results in wrong answers.
Using integer types is better, but you can still get overflows that may have to
be handled. Using integer types can also result in code paths that cannot be
reached, and therefore cannot be tested.  This is considered a very bad thing in
safety critical software. With the (right) non-binary modular type there is no
possibility of a Constraint_Error, and the code is clean.

The net result is that I have/had (I might have to dig it up) a package that
implemented modular arithmetic for arbitrary moduli less than 2**32.  The
problem was that it was originally written for Ada 83, so some of the operations
use code inserts.  Constants had to be explicitly converted to the private type,
but other than that, the math is intuitive and what you want to see.   I
probably should rewrite things to use Ada 9x at least and support moduli up to
2**63.  Hmm... really nice would be a version that had three versions based on
the size of the modulus where compilers should do conditional compilation of the
instances.  (What is the best way to do that?  It would really be nice to write
the bodies separately, then have the compiler choose the correct one at compile
time.)

> This is a place where Ada has a seductive feature that sounds great
> for  all these kind of applications, but in fact it is a bad idea to
> use it.

Shrug.  I find using the compiler supported types slightly easier than using a
package.  I think that the way that non-binary moduli are included in the
language is best, a separate package could be added in the numeric annex, but
that would probably be more work for compiler vendors not less, and of course,
it would be more work for programmers.  On the other hand, supporting it in the
compilers the way it is should result in better code for hash tables, which are
used quite often in places hidden from the (application) programmer.

> And if anyone uses NOT on such a type, they should be banned from
> using this feature permanently.

Lol!  I've done the dirty deed, but only to test the package described above
with the identity:  not A =  - A-1 Incidentally I think that the expression -A-1
should be written (-A)-1 in this context.  Even so, it is still more misleading
than not A.

****************************************************************

From: Robert Dewar
Sent: Friday, February 27, 2009 11:11 AM

> I sometimes write:
>
>     pragma Assert (Is_Good (X) <= -- implies
>                    Is_Gnarly (Y));
>
> The comment is necessary.  ;-)

I find this an obscure coding style, with or without the comment, I realy have
to think about what it means, and would far rather see an equiavlent form with
NOT/OR/AND which I can easily understand.

> Probably the root of our disagreement is that I disagree with your
> "not impossible" above -- one should NEVER, EVER write code that
> relies on non-short-circuited semantics.  Therefore, I only want to
> distinguish two
> cases: (1) I rely on short-circuit semantics ("and then"), (2) I do
> not rely on it ("and").

But people can and do write such code, and the language encourages such a coding
style by having non-short-circuited forms. Note that if you believe that no one
should EVER write such code, you should like the nice Eiffel convention, which
essentially enshrines that principle in the language.

>> In English AND/OR short circuit, if you hear an announcement at an
>> airport that says "All passengers who are reserved on China Airlines
>> flight 127 and who have not checked in at the checkin counter should
>> now proceed directly to the gate".
>>
>> you stop listening at 127.
>
> You had better not -- what if the next word is "or".  ;-)

Right, OK type, you stop listening at the AND, does not change my argument.

> It doesn't mean that 0 * F(X) should fail to evaluate F(X) in Ada.

A lot of damage to the design of programming languages is done by creating a
false analogy between arithmetic operators and "boolean operators", they are in
fact totally different, and it is uniformity going off the rails to force them
into the same mould.

> The problem with the Eiffel rule is that if you _accidentally_ care,
> you've got a latent bug that will rear its ugly head when you turn on
> the optimizer, or switch compilers, or make a seemingly-unrelated code
> change.  We both agree that you _shouldn't_ care, but bugs do happen,
> and bugs discovered "later" are very costly.

Yes, but in Ada, you can equally have the bug of requiring the evaluation of the
right hand side, and it gets covered up by the use of AND/OR. It is a bug if
code does not mean what it seems to mean. So if you read AND/OR in Ada to never
mean that the right side needs to be evaluated, you will misunderstand the code
(and misunderstanding code is just as evil as the code not doing what the writer
wants, in fact much more evil, because the former affects maintenance, which is
the expensive part of programming).

****************************************************************

From: Bob Duff
Sent: Friday, February 27, 2009  6:20 PM

...
> I find this an obscure coding style, with or without the comment, I
> realy have to think about what it means, and would far rather see an
> equiavlent form with NOT/OR/AND which I can easily understand.

OK, I shall refrain from writing that in the future.

...
> But people can and do write such code, and the language encourages
> such a coding style by having non-short-circuited forms. Note that if
> you believe that no one should EVER write such code, you should like
> the nice Eiffel convention, which essentially enshrines that principle
> in the language.

Not at all.  We shouldn't write such code.  But we do (by accident).
A bug 2 years hence is too severe as a punishment for writing such code.
If we can't detect this sort of error at compile time, it's better to at least
make the behavior portable across all implementations.

****************************************************************

From: Robert Dewar
Sent: Friday, February 27, 2009 11:15 AM

The more examples I see of "implies" the more I think this is a horrible
feature, and I oppose its addition to the language, I think it is a recipe for
obscure coding by mathematical types who know too much :-)

****************************************************************

From: Randy Brukardt
Sent: Friday, February 27, 2009 12:30 PM

Since Robert has come out and acknowledged the elephant in the room, let me add
my voice to his by saying I do not think that this feature belongs in the
language (especially if it comes at the expense of real conditional expressions
- these are both "nice to haves", and conditional expressions are far harder to
write with existing constructs). The fact that it does not have the same meaning
as it does in English (a point I made privately to Bob yesterday) means that it
would be confusing to 75% of the programmers out there. While we could avoid
using it, we'd still have to be able to read it, and thus the confusion would
continue.

****************************************************************

From: Ben Brosgol
Sent: Friday, February 27, 2009 12:43 PM

> The more examples I see of "implies" the more I think this is a
> horrible feature, and I oppose its addition to the language, I think
> it is a recipe for obscure coding by mathematical types who know too
> much :-)

I agree.  In English "implies" has connotations of causality or logical
deduction: "'Socrates is a man' implies 'Socrates is mortal'"; "'It's raining'
implies 'I'll take an umbrella today'".  This is totally absent in something
like:

pragma Assert (X /= null implies X.all > 0);

When I first read this, I asked "How could X being non-null imply that X.all is
positive?"

There is also a subtle type error (in the sense of logical types) with this use
of "implies".  A construct such as "if then" is an operator within the language
itself.  You say:
   "If Socrates is a man, then Socrates is mortal".
On the other hand "implies" is an operator in the metalanguage used to reason
about constructs in the language, and quotation marks are necessary.  You do not
write
   Socrates is a man implies Socrates is mortal.
You need to say
   "Socrates is a man" implies "Socrates is mortal".
(It's been decades since I last had a course in Symbolic Logic, but for some
reason the distinction between "if then" and "implies" has pretty much stayed
with me :-)

****************************************************************

From: Bob Duff
Sent: Friday, February 27, 2009  6:08 PM

> The more examples I see of "implies" the more I think this is a
> horrible feature, and I oppose its addition to the language, I think
> it is a recipe for obscure coding by mathematical types who know too
> much :-)

I find your and Randy's arguments convincing.  I'm not exactly a "mathematical
type", but I accept the fact that lots of folks find "implies" confusing, even
though I don't understand why.  So let's drop the idea.

****************************************************************

From: Brad Moore
Sent: Friday, February 27, 2009  8:21 PM

> Anyway, I don't think I'll ever convince you or Robert, and we haven't
> heard anyone else's opinion.  So I guess I'll quit arguing, for now.

Regarding the use of short circuit operator, my opinion goes both ways,
depending on who is writing the code.

To me it makes sense that a software vendor such as Adacore would want to
always use short circuit operators as a coding convention because the code they
produce is intended for use in a wide variety of applications for a wide variety
of client code. Some users need high performance tight code, others have needs
for validation/certification etc. The code written in such an environment should
consider the needs of all potential clients. In such an environment, the short
circuit forms probably are a better match for the entire universe of client
usage.

However, I can see that in a particular clients environment, particularly one
where performance of language syntax constructs is generally not an issue, a
coding convention of using short circuit forms only where needed can be a better
choice for a coding convention. This makes such uses stand out better, and in an
environment that relies more on informal testing, one can make the case that you
can get better coverage of the boolean expressions of the conditional statement,
with non s-c forms.

e.g.

if A > B and Divide_By_Zero(A,B) then

Will generate an error for one pass through the code, whereas and then, might
not have tested the call to Divide_By_Zero


Regarding "implies", A non-sc form is a nice to have but it doesn't bother me
much if we have to live without the non s-c form. I would prefer not to see the
non-sc form if it means introducing a syntax that does not read well or is not
intuitive. "Implies then" doesn't work for me. I do like "implies" for the s-c
form though.

****************************************************************

From: Robert Dewar
Sent: Saturday, February 28, 2009  3:03 AM

> To me it makes sense that a software vendor such as Adacore would want
> to  always use short circuit operators as a coding convention because
> the code they produce is intended for use in a wide variety of
> applications for a wide variety of client code.

Actually that's not a motivation, since we are talking primarily about the
coding style used internally for tools, so the choice is made on aesthetic
grounds (basically a viewpoint that non-short-circuiting logical operators are
fundamentally not a good idea, though I must say the Eiffel convention makes
good sense to me). I really dislike the idea of being able to write code where
full evaluation is necessary for functional correctness. Bob says "never write
code like that", but the language allows it, and any time you have a language
feature that you feel like saying "never use it", something is wrong.

Probably the most important thing is consistency, and it is a bit unfortunate
that there are two entirely different consistent coding styles possible here,
and people can't agree on which they prefer. That means you end up with
essentially two different languages in this respect, and you have to learn both
of these languages to be able to read Ada code.

It's like the situation with variant records, where there are two entirely
different implementation strategies (allocate max, or allocate actual size and
fiddle with the heap implicitly), again you get two languages, and unfortunately
compilers only have to implement one of those languages, causing significant
portability problems (I believe that all the mainstream compilers allocate the
max now, so that this is not as much of a problem as it might be).

****************************************************************

From: Bob Duff
Sent: Saturday, February 28, 2009  7:51 PM

> Actually that's not a motivation, since we are talking primarily about
> the coding style used internally for tools, so the choice is made on
> aesthetic grounds (basically a viewpoint that non-short-circuiting
> logical operators are fundamentally not a good idea, though I must say
> the Eiffel convention makes good sense to me.

I don't understand this attitude at all (re: the Eiffel "compiler-writer's whim"
rule).  It goes against the entire design of Ada, which is based on the
assumption that programmers make mistakes, and that the language should try to
prevent such mistakes (preferably by static checks).  Not by threatening to
introduce bugs 2 years hence.

After all, it's quite clear from the C standard that you're not supposed to
index arrays out-of-bounds.  But it happens all the time.

>...I really dislike
> the idea of being able to write code where full evaluation is
>necessary for functional correctness.

But you ARE able to write such code in Eiffel.  You depend (by accident) on the
evaluations and order chosen by your current compiler.  There is no rule in
Eiffel (or Ada) that prevents that.  Then 2 years later, the code mysteriously
quits working -- that costs thousands of dollars, each time.

I really think the following language design is better:

    Rule: The operands of "and" are both evaluated, left to right.
    NOTE: It is considered bad style to depend on the fact that both are
    evaluated when one is True, or to depend on the order.  Please try not to
    do that.
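
[Editor's sketch of the style the NOTE warns against; the names here are made
up - ED:

   Count : Natural := 0;

   function Check (X : Integer) return Boolean is
   begin
      Count := Count + 1;   -- side effect the program relies on
      return X > 0;
   end Check;

   ...

   if Ready and Check (X) then
      -- this depends on Check being called even when Ready is False;
      -- rewriting it as "and then" would silently change Count
      ...
   end if;
]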

>...Bob says "never write code
> like that", but the language allows it, and any time you have a
>language feature that you feel like saying "never use it", something
>is wrong.

Something is indeed wrong, but we don't know how to fix it.  That is, we cannot
forbid (at compile time) the thing that you and I both agree is evil.  Two
reasons:

    - Ada doesn't provide enough information to the compiler so it can know
      about side effects.  (SPARK is a bit different, here.)

    - Even if the compiler had that info, it's not clear how to formally define
      what is forbidden.  It's not clear which side effects matter.  Lots of
      software has the side effect of heating up the CPU chip, but we don't
      care about that.  We probably don't care about memo-izing functions,
      which have the side effect of modifying some cache.  We probably don't
      care in which order two "new" operations are done, even though it can
      certainly affect the output of the program (e.g. convert to
      Integer_Address and print it out, for debugging).

> Probably the most important thing is consistency, and it is a bit
> unfortunate that there are two entirely different consistent coding
> styles possible here, and people can't agree on which they prefer.

Yes it's unfortunate, but we don't really HAVE to agree.  For example, it's just
fine that I am forced against my will by AdaCore to write "and then".

> That means you end up with essentially two different languages in this
> respect, and you have to learn both of these languages to be able to
> read Ada code.

I agree, that's bad.  But I can't get too excited about it, when I regularly
have to read those two dialects of Ada, plus programs written in C, Python,
make, bash, Perl, awk, autoconf junk, etc. etc.  Maybe 100 years from now, people
will agree on these things (I hope they agree autoconf is junk).

> It's like the situation with variant records, where there are two
> entirely different implementation strategies (allocate max, or
> allocate actual size and fiddle with the heap implicitly), again you
> get two languages, and unfortunately compilers only have to implement
> one of those languages, causing significant portability problems (I
> believe that all the mainstream compilers allocate the max now, so
> that this is not as much of a problem as it might be).

Yes, but not much the language design could do about that, IMHO.

Another (trivial) example is "procedure P(X: Integer);"
versus "procedure P(X: in Integer);".  Some folks like the "in", others think
it's just noise.

****************************************************************

From: Brad Moore
Sent: Friday, February 27, 2009  9:28 PM

The recent arguments against using "implies" have me convinced also.
I agree that "Implies" should be dropped from the proposed syntax.

****************************************************************

[Here we finally start talking about conditional expressions - ED]

From: Randy Brukardt
Sent: Thursday, February 26, 2009  2:11 PM

...
> I'm not convinced.  To me "implies" suggests short circuiting quite
> strongly, since if the antecedent is False, the whole thing is
> uninteresting.  It is an infix form for "if A then B"
> in my mind.

If that's the case, why the heck don't we say that! Ada has needed conditional
expressions since the beginning of time; I find myself using
Boolean'Pos(<some-expr>)*<some-other-expr>+Boolean'Pos(not
<some-expr>)*<some-third-expr> periodically and it is impossible to read and
potentially buggy to boot.

(if A then B else False) makes more sense for readability than "A implies B"
because there cannot be a programmer who doesn't know what the first means,
while the second seems to require a college course in Boolean algebra [and
actually remembering what was taught there :-)]. (I took Probability instead, it
sounded more interesting.)

P.S. We had this discussion at dinner Sunday night after most of you left, and
the four of us that were there all were in favor of conditional expressions.

P.P.S. The hotel elevators still were not fixed when I left Wednesday
morning (the service elevator thankfully was still working). And they
cleaned my room Tuesday (only the second time it was done out of 6 nights),
but they surprised me by leaving a new shampoo. So I got something out
of the stay. ;-)

****************************************************************

From: Edmond Schonberg
Sent: Thursday, February 26, 2009  2:46 PM

So, the spartan accommodations were conducive to creative thought (and clean
hair)!   Is there a forthcoming AI on conditional expressions?  I'm all in favor
of them, but note that (if A then B else False) is equivalent to (A and B), not
to (A implies B)!

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 26, 2009  2:55 PM

Sigh. I need more work like a hole in the head, but I'll add it to my list.
(Unless someone else does it first.)

****************************************************************

From: Edmond Schonberg
Sent: Thursday, February 26, 2009  3:05 PM

You mean our undying gratitude is not reward enough?

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 26, 2009  3:15 PM

Do you think that the credit union will take your undying gratitude for my next
mortgage payment?? :-)

Perhaps you've hit on the solution to the mortgage crisis!!

****************************************************************

From: Stephen Michell
Sent: Thursday, February 26, 2009  2:33 PM

Because A implies B is
   if A then B else True end if
not .............false
"implies" is far less error prone.

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 26, 2009  2:51 PM

Only if you know what it means. And obviously I don't!! And I suspect that 3/4s
of Ada programmers are like me, not like the people here wanting "implies".

Conditional expressions are far more generally useful (since they can return a
type other than Boolean), and they will be readable to every Ada programmer (I
can't imagine anyone being confused by the meaning of "if").

****************************************************************

From: Robert Dewar
Sent: Friday, February 27, 2009  10:56 AM

> Because A implies B is
>    if A then B else True end if
> not .............false
> "implies" is far less error prone.

I disagree, I find implies to be peculiar, and I certainly have to think about
what it means, since the else is indeed non intuitive.

A implies B in English means that if A is true then B is true, but there is no
else in English; the else is an artifact of mathematics.

"If a stronger economy implies better exchange rates, then we should bla bla bla"

says nothing at all in English about what to do if the economy is weaker ...

****************************************************************

From: Stephen Michell
Sent: Friday, February 27, 2009  12:39 PM

   if A implies B then...
Does make sense
   if A implies B and not A then X
will always execute X.

****************************************************************

From: Bob Duff
Sent: Thursday, February 26, 2009  5:01 PM

> If that's the case, why the heck don't we say that! Ada has needed
> conditional expressions since the beginning of time; I find myself
> using Boolean'Pos(<some-expr>)*<some-other-expr>+Boolean'Pos(not
> <some-expr>)*<some-third-expr> periodically and it is impossible to
> read and potentially buggy to boot.

I strongly agree that the Boolean'Pos trick is a horrible hack, though I do it
when necessary.  Or declare a local function. Or misc other workarounds.

Conditional expressions would be a "nice to have".  Note that they were proposed
for Ada 9X, and soundly rejected by the reviewers.

> (if A then B else False) makes more sense for readability than "A implies B"
> because there cannot be a programmer who doesn't know what the first
> means, while the second seems to require a college course in Boolean
> algebra [and actually remembering what was taught there :-)]. (I took
> Probability instead, it sounded more interesting.)

Well, I did take college courses in mathematical logic and modern algebra and so
forth, and many of the students were puzzled by "implies", so you are probably
right that many Ada programmers don't understand it.  I am puzzled by their (and
your) puzzlement -- it means exactly what it means in English.  If I say,
"swimming in the ocean implies you will get wet", I'm not saying anything about
what happens if you do not swim in the ocean -- you might get wet or might not
(maybe it's raining).  Just remember, "falsehood implies anything"; that is,
"False implies X" is True for all X.

If it makes you feel any better, I was puzzled by Probability and Statistics, a
grueling (to me) two-semester course.  ;-)

We certainly don't _need_ an implies operator.  As noted before, we already have
"<= -- implies".  Also, "X implies Y" is equivalent to "(not X) or Y". Use "(not
X) or else Y" in cases where short-circuit is wanted (some of us think it's
_always_ wanted).

So:

    pragma Assert (X /= null implies X.all > 0);

is equivalent to:

    pragma Assert (X = null or[else] X.all > 0);

And I guess I find both equally understandable.
The conditional-expression forms, somewhat less so:

    pragma Assert (if X /= null then X.all > 0 else True);
    pragma Assert (if X = null then True else X.all > 0);

But it might be nice to say:

    Put_Line (Error_Count'Image(Num_Errors) &
              (if Num_Errors = 1 then "error" else "errors")
              & " detected.");

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 26, 2009  5:37 PM

...
> Conditional expressions would be a "nice to have".  Note that they
> were proposed for Ada 9X, and soundly rejected by the reviewers.

My hazy recollection was that they were dropped from Ada 9X because *all* "nice
to have" features were dropped from Ada 9X simply because it was getting to be
too much of a change. That's not the same as "soundly rejected". (Admittedly,
Tucker snuck several of those "nice to haves" back in, like library-level
renaming.)

> I am puzzled by their (and your) puzzlement -- it means exactly what
> it means in English.  If I say, "swimming in the ocean implies you
> will get wet", I'm not saying anything about what happens if you do
> not swim in the ocean -- you might get wet or might not (maybe it's
> raining).  Just remember, "falsehood implies anything"; that is,
> "False implies X" is True for all X.

The last part is the obvious cause of the problem. It surely *does not* mean
precisely what it does in English, which you correctly described as "not saying
anything", that is, *we don't care about this statement* if "swimming in this
ocean" (call it S) is False (as it was at this meeting :-). That is essentially
"(if S then W else null)".

But of course a Boolean expression can't represent "don't care", so we
*arbitrarily* decide that it is "(if S then W else True)". And here we've
diverged from the English meaning, and that's the part that I (and apparently
many students) can't remember. It's very much like "or" (and "xor"), which don't
quite mean the same as in English. So it is best to not make any assumptions at
all about the English meaning. Thus the difficulties.

> If it makes you feel any better, I was puzzled by Probability and
> Statistics, a grueling (to me) two-semester course.  ;-)

Those were separate one-semester classes for me (took them both; in the latter I
was lucky to get the only TA who actually could speak English clearly!).
Probability is easy: the Firesign Theater approach is best: "Everything you know
is wrong!!". That is, you can't do math on probabilities in the normal way -- it
has its own set of rules. The hardest thing is forgetting everything else when
doing it. (For instance, Bayesian spam filters are likely abusing the name, as
they violate all of the assumptions of probability theory [there is no way that
the contents of an e-mail message are independent variables], meaning that you
can't apply the theory at all. There is no such thing as almost independent!
Bayesian spam filters undoubtedly do work, but to claim it is because of
mathematics is completely bogus -- it's just luck and lots of algorithm tuning.)

****************************************************************

From: Robert Dewar
Sent: Friday, February 27, 2009 10:57 AM

> Conditional expressions are far more generally useful (since they can
> return a type other than Boolean), and they will be readable to every
> Ada programmer (I can't imagine anyone being confused by the meaning of "if").

Yes, well of course all languages should have conditional expressions, though
you do have to worry about coercions and typing, and that's not always obvious,
if the expression is used in other than what Algol-68 would call a strong
position.

****************************************************************

From: Robert Dewar
Sent: Friday, February 27, 2009  11:02 AM

> Conditional expressions would be a "nice to have".  Note that they
> were proposed for Ada 9X, and soundly rejected by the reviewers.

But at least part of that rejection was the general attempt to cut down Ada 95
from what was seen as entirely excessive scope, so good things got thrown out.
Tuck always said "show me what is bad about XXX and I will remove it." Well the
problem was not bad XXX's it was too many XXX's. So the fact that something was
rejected in the Ada 9X process is not a technical argument

> Well, I did take college courses in mathematical logic and modern
> algebra and so forth, and many of the students were puzzled by
> "implies", so you are probably right that many Ada programmers don't
> understand it.  I am puzzled by their (and your) puzzlement -- it
> means exactly what it means in English.  If I say, "swimming in the
> ocean implies you will get wet", I'm not saying anything about what
> happens if you do not swim in the ocean -- you might get wet or might
> not (maybe it's raining).  Just remember, "falsehood implies anything"; that
> is, "False implies X" is True for all X.

And not saying anything about the else means that giving it a meaning is
artificial.

>     pragma Assert (X /= null implies X.all > 0);
>
> is equivalent to:
>
>     pragma Assert (X = null or[else] X.all > 0);
>
> And I guess I find both equally understandable.

I find the second far superior; I would regard the form using implies as obscure
programming, because I think most programmers would NOT intuitively understand
it.

Next you will be wanting NOR and NAND :-)

> But it might be nice to say:
>
>     Put_Line (Error_Count'Image(Num_Errors) &
>               (if Num_Errors = 1 then "error" else "errors")
>               & " detected.");

Yes, well languages without a conditional expression form are indeed a pain in
the neck in situations like this :-)

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, February 27, 2009  1:06 AM

>     Put_Line (Error_Count'Image(Num_Errors) &
>               (if Num_Errors = 1 then "error" else "errors")
>               & " detected.");

Nice to have - not more.
My own little utilities library has:
   function Choose
     (condition: boolean;
      If_True  : string;
      If_False : string) return string;

****************************************************************

From: Robert Dewar
Sent: Friday, February 27, 2009  6:58 PM

Very different, since it forces evaluation; I suggest not calling

    X := Choose (N = 0, 0, 10 / N);

:-)

Now if Ada had special forms like LISP, or proceduring like the original
Algol-68, before Algol-68 was revised, then you could write Choose properly (if you
were LISPY, you would call it cond :-))
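
[Editor's note: Ada evaluates all actual parameters before a call, so even a
hypothetical integer version of Choose cannot guard the division, whereas the
proposed conditional expression evaluates only the chosen branch - ED:

   function Choose (Condition : Boolean;
                    If_True, If_False : Integer) return Integer;

   X := Choose (N = 0, 0, 10 / N);
   -- 10 / N is evaluated even when N = 0, raising Constraint_Error
   -- before Choose is ever called

   X := (if N = 0 then 0 else 10 / N);
   -- only the chosen branch is evaluated
]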

****************************************************************

From: Bob Duff
Sent: Friday, February 27, 2009  6:01 PM

> Yes, well of course all languages should have conditional expressions,
> though you do have to worry about coercions and typing, and that's not
> always obvious, if the expression is used in other than what Algol-68
> would call a strong position.

I don't know Algol-68 very well, so I don't know what "strong position" means.
But I don't see a big problem with the types:

    conditional_expression ::=
        "(" "if" expression "then" expression "else" expression ")"

    The expected type of the first expression is Boolean.  Or some boolean
    type.

    The expected type of the other two expressions is the expected type of the
    conditional_expression.

And then all the normal Ada rules about expected type apply.
Is there a problem?

So in my example:

>     Put_Line (Error_Count'Image(Num_Errors) &
>               (if Num_Errors = 1 then "error" else "errors")
>               & " detected.");

Put_Line expects String.  So "&" expects String.  So the expected type for the
literal "error", is String, so it resolves just fine.

****************************************************************

From: Robert Dewar
Sent: Friday, February 27, 2009  7:07 PM

> Put_Line expects String.  So "&" expects String.  So the expected type
> for the literal "error", is String, so it resolves just fine.

(if X then 1 else 2.5) + (if Y then 2 else 3.7);

what rule makes this illegal? It needs to be illegal I would think in Ada

****************************************************************

From: Bob Duff
Sent: Friday, February 27, 2009  7:27 PM

The above is not a complete context.  In Ada, you need a complete context for
overload resolution.  Something like:

  Blah : T := (if X then 1 else 2.5) + (if Y then 2 else 3.7);

or:

  Proc((if X then 1 else 2.5) + (if Y then 2 else 3.7));

****************************************************************

From: Robert Dewar
Sent: Friday, February 27, 2009  8:20 PM

Well if you think there is no problem, fine, but it is clear to me that there is
a problem here :-) But let's wait and see if conditional expressions attract
enough support to be worth discussing details.

****************************************************************

From: Bob Duff
Sent: Friday, February 27, 2009  7:16 PM

> Next you will be wanting NOR and NAND :-)

Not.

;-)

Lisp has all 16 of them, I think.  Or at least all the 10 non-trivial ones.

I asked Bill (15 years old) about "implies" yesterday, and he answered
correctly.  Maybe it's genetic.  I'm not some sort of math whiz, but Bill and I
"get" implies.  Strange.

Anyway, I'm not asking for nor, nor nand!

Let's forget about "implies".  That nicely solves the problem of whether it
should be short-circuit or not or both.

****************************************************************

From: Steve Baird
Sent: Friday, February 27, 2009  8:25 PM

> But let's wait and see
> if conditional expressions attract enough support to be worth
> discussing details.
>

Oh let's not.
Presumably
      (if False then Some_Function_Call / 0 else 123) would be static.

****************************************************************

From: Randy Brukardt
Sent: Friday, February 27, 2009  8:35 PM

> Presumably
>       (if False then Some_Function_Call / 0 else 123) would be static.

Sigh. We can always count on Steve to think of something to make the AI author's
job harder. ;-) (I hadn't thought about staticness, but I suspect it should work
like other short-circuit operations, so I would agree that the above should be
static.)

****************************************************************

From: Randy Brukardt
Sent: Friday, February 27, 2009  8:26 PM

...
>     The expected type of the other two expressions is the expected
> type of the
>     conditional_expression.
>
> And then all the normal Ada rules about expected type apply.
> Is there a problem?

I don't think there is a problem, per se, but I think I'd have to be convinced
that the complexity is worth it. We've been tending toward not trying to use too
much information in resolution, because only a compiler could figure out what
was going on.

Do we really want this to resolve:

     Put ((if Cond then F1 else F2));

with Put overloaded on String and Integer; F1 overloaded on Integer and Float;
and F2 overloaded on Integer and Boolean? That would work, but do we really want
people to write code like that?? (Yes, you can do that with "+", but that too
appears to be a mistake. Why compound it?)

I was thinking that it would resolve like an aggregate (it even looks somewhat
like one), so that it would have to be qualified unless there was a "single
type" identified by the context. In this example, that would mean that the
conditional would have to be qualified:

     Put (Integer'(if Cond then F1 else F2));

It's pretty rare that you have to qualify aggregates, and I would think that the
same would be true here.

In any case, if we adopt a more restrictive resolution rule now, we could change
to the less restrictive one later. Going the other way is incompatible.

> So in my example:
>
> >     Put_Line (Error_Count'Image(Num_Errors) &
> >               (if Num_Errors = 1 then "error" else "errors")
> >               & " detected.");
>
> Put_Line expects String.  So "&" expects String.  So the expected type
> for the literal "error", is String, so it resolves just fine.

Yes, this would work with the rule I suggested as well (at least as long as you
don't "use Wide_Text_IO" as well).

****************************************************************

From: Steve Baird
Sent: Friday, February 27, 2009  9:05 PM

> Sigh. We can always count on Steve to think of something to make the
> AI author's job harder. ;-)

I think we'd also need wording to define the nominal subtype of one of these
guys (not that this would be hard).

I'd say keep it simple. You could imagine special rules for the case where, for
example, the two operands have the same nominal subtype. Let's not.

     Flag : Boolean := ... ;
     X, Y : Positive := ... ;
   begin
     case (if Flag then X else Y) is
        when 0 => -- should be legal

****************************************************************

From: Steve Baird
Sent: Friday, February 27, 2009  9:34 PM

Never mind.
This is a case where I'd like the unsend command. A conditional expression is
not a name, so nominal subtype definition is not an issue.

****************************************************************

From: Bob Duff
Sent: Friday, February 27, 2009  8:56 PM

> I was thinking that it would resolve like an aggregate...

Yes, that's an even better idea.  Either way, I don't see any "problem".

> > So in my example:
> >
> > >     Put_Line (Error_Count'Image(Num_Errors) &
> > >               (if Num_Errors = 1 then "error" else "errors")
> > >               & " detected.");
> >
> > Put_Line expects String.  So "&" expects String.  So the expected
> > type for the literal "error", is String, so it resolves just fine.
>
> Yes, this would work with the rule I suggested as well (at least as
> long as you don't "use Wide_Text_IO" as well).

Right.  If you have multiple Put_Line's involved, then it's ambiguous, whichever
rule we choose.

****************************************************************

From: Tucker Taft
Sent: Saturday, February 28, 2009  8:14 AM

Actually, "A implies B" means "if A then B else *True*" while "A and then B"
means "if A then B else *False*", and "A or else B" means "if A then True else
B".

I agree that conditional expressions would be valuable, but "implies" has a long
history in logic, and is well understood by many people who would want to write
preconditions and postconditions.  The thing that is *not* intuitive is that "A
implies B" <-> "not A or else B", and forcing people to write the latter is
unacceptable in my view.  I also feel that "A <= B" is unacceptable if we are
going to the trouble of adding syntax for pre/postconditions.  Adding "implies"
as a third kind of short-circuit operation with semantics of "if A then B else
True" is pretty trivial for the compiler, and will bring us in line with the one
language that has had "contracts" since its very inception, namely Eiffel.

****************************************************************

From: Robert Dewar
Sent: Saturday, February 28, 2009  11:41 AM

I still don't like it, and I don't see what can possibly be hard to understand
about "not A or else B". It seems *so* much clearer to me than using implies,
and I don't care what context that is in. All programmers will want to write and
read preconditions and postconditions; I do not buy the argument that there is
some trained mathematical subset who will want to write these PPC's, and for
sure all programmers will have to read them.

To me, this is something that Eiffel got wrong!

****************************************************************

From: Bob Duff
Sent: Saturday, February 28, 2009  2:35 PM

> Actually, "A implies B" means "if A then B else *True*" while "A and
> then B" means "if A then B else *False*", and "A or else B" means "if
> A then True else B".

Right.

> I agree that conditional expressions would be valuable, but "implies"
> has a long history in logic, and is well understood by many people who
> would want to write preconditions and postconditions.

True, but we have empirical proof that some folks are confused by "implies".

>...The thing that is *not*
> intuitive is that "A implies B" <-> "not A or else B",

Strange.  I find that perfectly obvious.  (Except that "logic" has no "or else"
-- there's no short-circuits involved.  ;-))

Note that the "not" in "not A" often gets folded up into a comparison.
"not X /= null" is usually written as "X = null".
And "not X < Y" is "X >= Y".

This discussion has convinced me to use the "or[else]" form in my assertions, to
better communicate with others, even though I don't really understand why others
like it better.

> and forcing people to write the latter is unacceptable in my view.  I
> also feel that "A <= B" is unacceptable if we are going to the trouble
> of adding syntax for pre/postconditions.

I think both "unacceptable"'s above are hyperbole. Surely "implies", whether s-c
or not, is a "nice to have" at best.  I've written approx. one zillion pragmas
Assert in my life, but I guess about 5 of those involve implications.

>...Adding "implies" as a third
> kind of short-circuit operation with semantics of  "if A then B else
>True" is pretty trivial for the  compiler,

Yes, trivial to implement.  I don't think anybody disputes that.

>... and will bring us in line with the one  language that has had
>"contracts" since its very  inception, namely Eiffel.

Shrug.  It's good to get ideas from Eiffel, but Eiffel is pretty much a dead
language.  Ada is alive (as a niche language, unfortunately).

I used to read comp.lang.eiffel regularly, and my impression is that it was
killed by compiler vendors being unwilling to support standards and portable
Eiffel code.  Let that be a lesson to us.

****************************************************************

From: Bob Duff
Sent: Saturday, February 28, 2009  3:03 PM

OK, maybe more than 5.  ;-)  I just did a search in the SofCheck Inspector
sources, and I see 12.  I'd guess all of them were originally written by me. I
shall mend my evil ways!

% find . -name '*.ad?' | xargs grep -i '<=.*implies'
./be/be-pvp-pvp_pass-prop.adb:          pragma Assert(Is_Based_On_Presumptions(VN_Id) <= -- implies
./be/be-pvp.adb:                        Assert(Is_Precondition(VN_Id) <= -- implies
./be/be-pvp.adb:                    -- Assert(Is_Precondition(VN_Id) <= -- implies
./be/be-pvp-scil_extension.adb:            Assert(Is_Present(PV1, VN) <= -- implies
./be/be-pvp-pvp_pass.adb:                Assert(VN_In_Current_Pass(Cur_Proc, VN) <= -- implies
./utils/utils-short_int_sets.adb:        pragma Assert(Result <= -- implies
./utils/utils-command_line.adb:            -- Note that "<=" on Boolean means "implies".
./utils/utils-storage_management-subpools.adb:    pragma Assert(Protection_Enabled <= -- "<=" means "implies"
./utils/utils-storage_management-subpools.adb:        pragma Assert(Protection_Enabled <= -- "<=" means "implies"
./utils/utils-storage_management-subpools.adb:        pragma Assert(Protection_Enabled <= -- "<=" means "implies"
./utils/utils-storage_management-subpools.adb:        -- Note that "<=" on Boolean means "implies".
./utils/utils-storage_management-subpools.adb:        -- Note that "<=" on Boolean means "implies".

I'm grepping 1.5 million lines of code here, which includes Inspector plus a lot
of test code.  12 cases.

Randy and Robert, do you really find the following confusing?

        pragma Assert(Protection_Enabled <= -- "<=" means "implies"
          (Block.all'Address mod OS_Dep.Page_Size_In_Storage_Elements = 0));
        pragma Assert(Protection_Enabled <= -- "<=" means "implies"
                        (Block.all'Size = Page_Size_In_Bits)); null;
    ...
    procedure Allocate_Fixed_Block(Pool: in out Subpool'Class) is
        pragma Assert(Always_Collect <= Collection_Enabled);
        -- Note that "<=" on Boolean means "implies".

So if protection is enabled, that implies that pages need to be aligned and have
a certain size.  And if we always collect, that implies that collection is
currently enabled.  What's so confusing about that?
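
[Editor's note: the "<=" trick works because Boolean is an enumeration type
with False < True, so - ED:

   A <= B
   -- False only when A is True and B is False: exactly "A implies B"
]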

****************************************************************

From: Randy Brukardt
Sent: Saturday, February 28, 2009  8:40 PM

> OK, maybe more than 5.  ;-)  I just did a search in the SofCheck
> Inspector sources, and I see 12.  I'd guess all of them were
> originally written by me.
> I shall mend my evil ways!

We're supposed to change the language for something that occurs once per 100,000
lines of Ada code?? I surely hope that nothing I've proposed would be used that
rarely...

...
> I'm grepping 1.5 million lines of code here, which includes Inspector
> plus a lot of test code.  12 cases.
>
> Randy and Robert, do you really find the following confusing?
>
>         pragma Assert(Protection_Enabled <= -- "<=" means "implies"
>           (Block.all'Address mod OS_Dep.Page_Size_In_Storage_Elements = 0));
>         pragma Assert(Protection_Enabled <= -- "<=" means "implies"
>                         (Block.all'Size = Page_Size_In_Bits)); null;
>     ...
>     procedure Allocate_Fixed_Block(Pool: in out Subpool'Class) is
>         pragma Assert(Always_Collect <= Collection_Enabled);
>         -- Note that "<=" on Boolean means "implies".

I wouldn't say it was "confusing", I would have said "meaningless". If I
encountered a comment that said "<=" means "implies" before we had this
discussion, my reaction would have been "WTF??". And then I would have tried to
figure out the meaning of the expression ignoring the comment.

I may have written something like this once or twice, but I would not have known
that it actually had a name. (I tend to work from first principles anyway;
reinventing the wheel is usually faster than figuring out what it is called...)

> So if protection is enabled, that implies that pages need to be
> aligned and have a certain size.  And if we always collect, that
> implies that collection is currently enabled.
> What's so confusing about that?

If you wrote the above in a comment, it would have made sense. But "implies" is
not something that I would have thought of as a Boolean operator; I have no
intuition as to what it means exactly. And when reading code, you need to know
what it means *exactly*, assuming you know is the leading source of undetected
bugs. (After all, we know what "assume" means: making an ASS out of yoU and ME.
:-)

****************************************************************

From: Tucker Taft
Sent: Saturday, February 28, 2009  9:19 PM

That's not quite realistic.  We aren't writing pre- and post-conditions on every
subprogram we write yet.  If we were, I suspect "implies" would show up on a
good proportion of the preconditions or postconditions.

****************************************************************

From: Randy Brukardt
Sent: Saturday, February 28, 2009  9:50 PM

Why? Just to make the code hard to understand for a majority of readers? It
surely isn't to save any typing:

   (not A) or else B
   A implies B

That saves all of 6 characters of typing (and only 4 if you leave out the
parens, which, while not necessary, need to be given for readability -- I cannot
count the number of times that I thought that "not" covered an entire expression
rather than a single element). And as Bob notes, most of the time, "not" can be
folded into the expression A. (That's true in all cases, not just this one --
explicit "not"s are quite rare in my code.)

But the former expression can be understood by any remotely competent
programmer. The latter one is meaningless to a significant contingent of
programmers, and will send them scrambling for a manual or piece of scratch
paper if they actually have to figure out the result for particular values of A
and B. So its primary purpose would be to obfuscate the code - not usually a
property of Ada code.

****************************************************************

From: Randy Brukardt
Sent: Saturday, February 28, 2009  10:07 PM

I said:
> Why? Just to make the code hard to understand for a majority of
> readers?

As an aside, I should note that complex Boolean expressions are a complete pain
to write and debug. Any expression containing anything beyond a simple sequence
of "and then"s or "or else"s is likely to be impossible to understand,
especially if it needs to be corrected/expanded. (Just trying to remember to
apply DeMorgan's Law makes me break out in hives :-).

Adding "implies" would do absolutely nothing to help this; it would have to be
combined with some other operator so it would be making Boolean expressions more
complex without adding anything in expressive power (other than perhaps for the
chosen few that think in terms of Boolean algebra in their sleep :-).

One thing that I dislike about the
precondition/postcondition/invariant/user-defined-constraints proposals is that
we'd have to write a lot more complex Boolean expressions. I can't think of any
alternative to doing that, so I'm letting that go. But I surely would like to
see something that would make Boolean expressions more understandable.

I would hope that a conditional expression could have that effect, as you could
then write an "if" when you meant an "if" rather than having to convert it into
some bizarre sequence of Boolean operators that don't quite mean the same thing.
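
[Editor's sketch of the sort of rewriting Randy has in mind, reusing the names
from Bob's assertion example earlier in this thread - ED:

   -- today, the conditional intent is encoded with Boolean operators:
   pragma Assert
     (not Protection_Enabled or else Block.all'Size = Page_Size_In_Bits);

   -- with a conditional expression, the "if" is written directly:
   pragma Assert
     ((if Protection_Enabled
       then Block.all'Size = Page_Size_In_Bits
       else True));
]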

P.S. I realized this morning that there is a reasonable way to use the existing
Janus/Ada intermediate code to generate a conditional expression. Indeed, we do
that to figure out array lengths in some cases, so I'm pretty confident that the
optimizer and code generator would not fall over if that technique was used more
generally. Thus, I suspect the toughest job (based on the 'virtual proposal')
would be to add the syntax to the grammar generator and remove the inevitable
conflicts. Probably could have them done tomorrow if the proposal was approved
tonight.

****************************************************************

From: Robert Dewar
Sent: Sunday, March  1, 2009  2:47 AM

> I used to read comp.lang.eiffel regularly, and my impression is that
> it was killed by compiler vendors being unwilling to support standards
> and portable Eiffel code.  Let that be a lesson to us.

I think that's an entirely incorrect analysis; languages do not live or die
based on such things. The dynamics were that Eiffel never had a significant
constituency.

****************************************************************

From: Robert Dewar
Sent: Sunday, March  1, 2009  2:56 AM

> Randy and Robert, do you really find the following confusing?
>
>         pragma Assert(Protection_Enabled <= -- "<=" means "implies"
>           (Block.all'Address mod OS_Dep.Page_Size_In_Storage_Elements = 0));
>         pragma Assert(Protection_Enabled <= -- "<=" means "implies"
>                         (Block.all'Size = Page_Size_In_Bits)); null;

Actually it is indeed true that these uses in assertions would seem acceptable
*if* we had an implies keyword (I would NEVER have used the "<=" -- I agree
totally with Randy's WTF comment on that :-))

It is in if statements that such things would be confusing.

  I would not like to see

      if Protection_Enabled implies ..... then

  I just find that confusing because of the implication of causality
  in implies. In the assert, it really does read to me as

     if Protection_Enabled then bla bla

  and in the assert context that seems fine, since the else false
  really is not an issue, but it is in the IF.

  Unfortunately,

     pragma Postcondition (Protection_Enabled implies ...)

  reads more like the IF than the ASSERT to me. I can't explain
  why this is.

    But the use of (not Protection_Enabled) or else bla bla

  reads perfectly fine in all cases, and even in the Assert it
  reads probably a bit clearer than the IMPLIES to me.

  If the switch were the other way, then the choice is between

     Protection_Disabled or else blabla

       and

     (not Protection_Disabled) implies bla bla

  Here there is no question that I prefer the first form in
  all cases by a big margin.

****************************************************************

From: Robert Dewar
Sent: Sunday, March  1, 2009  3:02 AM

> One thing that I dislike about the
> precondition/postcondition/invariant/user-defined-constraints
> proposals is that we'd have to write a lot more complex Boolean
> expressions. I can't think of any alternative to doing that, so I'm
> letting that go. But I surely would like to see something that would
> make Boolean expressions more understandable.

> I would hope that a conditional expression could have that effect, as
> you could then write an "if" when you meant an "if" rather than having
> to convert it into some bizarre sequence of Boolean operators that
> don't quite mean the same thing.

Yes I agree, conditional expressions would help

Note that you can still abstract with functions though ...

     procedure X (A : Integer; B : Float);
     pragma Precondition (Check_X (A, B));

and later (remember, forward references are allowed)

     function Check_X (A : Integer; B : Float) return Boolean is
     begin
        if A > 23 then
           return False;
        elsif Float (A) > B then
           return True;
        ...
     end Check_X;

It's interesting to note that in the context of MCDC and the Couverture project,
we discussed the impact of requiring people to abstract complex conditions in
this way, but there are two objections:

    a) you require more tests, since each test in the Check_X function
       has to go both ways, and that's more tests than the independence
       rules of MCDC in a complex expression.

    b) people love complex boolean expressions and insist on filling
       their code with them, the more complex the better :-(

You can ameliorate the negative impact by putting lots of comments WITHIN the
complex expression, but people don't seem to like to do this either.

****************************************************************

From: Robert Dewar
Sent: Sunday, March  1, 2009  3:05 AM

Just as a note, the discussion of abstracting complex PPC conditions into
functions shows why the forward reference allowed for PPC is highly desirable.

Languages like Pascal and Ada with their normal disallowance of forward
references (in contrast to languages like Algol, COBOL, and PL/1) make it much
more unpleasant to abstract in this way, since it is unnatural to write local
abstractions BEFORE the use ... you get top-down structure written bottom-side
up, which has to be read backwards.

At least we alleviate this for PPC's and that's very helpful.

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, March  2, 2009  3:21 AM

I've been watching this discussion, and I too dislike "implies". To me, "A implies B"
means "if A is true, I know that B is true", while in the context of preconditions,
it means "if A is true, B should be true". Quite different.

In fact, the equivalent of:

pragma precondition (A implies B) should be:

if A then pragma precondition (B) end if; (which is obviously not Ada).

So, why not add an optional "when" to the pragma?
   pragma Precondition(subprogram_local_name,
     [Check =>] boolean_expression
     [, When => boolean_expression]
     [, [Message =>] string_expression]);
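
[Editor's sketch of how the proposed form might read, reusing Bob's earlier
null-check example; P names a hypothetical subprogram - ED:

   pragma Precondition (P,
     Check => X.all > 0,
     When  => X /= null);
]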

Note 1: the proposed syntax requires named association for the "when".
To be discussed

Note 2: This proposal makes the issue of having multiple preconditions more
important. I don't think there is a problem, but it has to be addressed anyway.

****************************************************************

From: Tucker Taft
Sent: Thursday, March  5, 2009  11:26 AM

I would like to make another pitch for the importance of providing an operation
that corresponds to what is typically labeled "implies," particularly in the
context of adding support for explicit pre- and postconditions.  The discussion
seems to have gotten a bit sidetracked... ;-)

One thing that arises from previous responses is that the word "implies" does
not always convey the right idea, so I have a suggestion for an alternative,
which happens to dovetail with the interest in more general conditional
expressions.

The real need for this comes about in assertions, constraints, preconditions,
postconditions, etc., which are in some sense "conditional."  The typical
situation is something like this:

    procedure Add_To_Fruit_Salad(
      Fruit : in out Fruit_Type; Bowl : in out Bowl_Type) is
    begin
        -- Check if ready to add to fruit salad
        case Fruit.Kind is
          when Apple =>
            pragma Assert(Fruit.Is_Crisp);
            null;
          when Banana =>
            pragma Assert(Fruit.Is_Peeled);
            null;
          when Pineapple =>
            pragma Assert(Fruit.Is_Cored);
            null;
          when others => null;
        end case;
        Cut_Up(Fruit);
        Add_To_Bowl(Fruit, Bowl);
    end Add_To_Fruit_Salad;

How do we "hoist" these assertions up to the beginning of the subprogram, so
they can become preconditions? What we would like to write is something like:

    procedure Add_To_Fruit_Salad(
      Fruit : in out Fruit_Type; Bowl : in out Bowl_Type)
      with
        Pre =>
            Fruit.Kind = Apple then Fruit.Is_Crisp
          elsif
            Fruit.Kind = Banana then Fruit.Is_Peeled
          elsif
            Fruit.Kind = Pineapple then Fruit.Is_Cored,

        Post =>
           Is_In_Bowl(Fruit, Bowl);

The general problem is that to write a precondition, you need to express the
requirements for a subprogram at the point of call, rather than at some
intermediate point in the subprogram's control flow.  Similarly, for a
postcondition, you need to express the guaranteed results after the call, not
just what is true at one particular "return" statement in a subprogram with
multiple return statements.

In other words, you would like to hoist what were assertions at intermediate
points in the subprogram into being either "preconditions" or "postconditions,"
as appropriate.   In our static analysis tool, one of the things it does is
automatically "extract" pre- and postconditions from the code itself.  This is
of course a bit of a "cheat" as far as the appropriate order of doing things,
but it serves two purposes. One, it helps document what the code *really* does,
and it can provide a starting point for human-written pre- and postconditions.
This tool is very frequently trying to hoist run-time checks into being
preconditions. Checks like "X /= null" or "I in 1..10" can naturally become
preconditions. But unfortunately, it is quite common that these requirements only
exist on some, but not all, of the paths through the subprogram.

As in the example, imagine a subprogram that begins with a case statement on the
kind of an object, and then performs actions appropriate to the kind. In an
object-oriented world, one might call a dispatching operation, but that is the
moral equivalent of a case statement at run-time, based on the tag.  The fact is
there may be requirements on the caller that depend on the kind of object.  Our
tool ends up hoisting these kinds of requirements into preconditions anyway, but
we mark them as "(soft)" to indicate that they don't really apply to all paths
through the subprogram.  Of course in some cases one could go ahead and always
impose the requirement on the caller, presuming that they shouldn't be worried
about the details of the object. On the other hand, there are cases where,
typically for a variant record or a tagged-type hierarchy, the component of the
object participating in the requirement exists only for some, but not all, kinds
of objects. In these cases, we can't really impose the requirement as a "hard"
precondition on all callers.  What we really want to do is change our "soft"
preconditions into "hard" preconditions using an equivalent of an "if
<condition> then <precondition>."  In the above example syntax, we have dropped
the "if" as that seems to be very tightly linked to the notion of a "statement"
in Ada, but we retain the "then" as that seems to convey nicely the conditional
aspect of the precondition.

As perhaps can be guessed at this point, what this leads me to suggest is that
we adopt a syntax for conditional_expression according to:

   conditional_expression ::=
        <condition> THEN <expression>
          {ELSIF <condition> THEN <expression>}
          [ELSE <expression>]

In general, conditional_expression would only be permitted in a limited number
of contexts.  In particular, inside parentheses, after a "=>" in the proposed
aspect specification syntax, and more generally as an operand inside a construct
using parentheses, such as an aggregate, function call, pragma, discriminant
constraint, etc.  The "ELSE <expression>" would be optional only for a boolean
conditional expression, as used in an assertion, constraint, precondition,
postcondition, etc., with the implied semantics being "ELSE True."  This special
case provides the conditional assertion capability that is quite important in
some cases for expressing preconditions and postconditions.

Here are uses of the proposed conditional_expression:


    Text_Length : constant Natural :=
      (Text /= null then Text'Length else 0);

    Hash_Is_Ok : constant Boolean :=
      (Key(X) = Key(Y) then Hash(X) = Hash(Y));

    ...

    Ada.Assertions.Assert(
     Obj.Name /= null then Is_Alphabetic(Obj.Name.all),
      "Name, if any, must be alphabetic");

    Put(Console, Str => Prompt /= null then Prompt.all else ">");

    return (Size(X) >= Size(Y) then X else Y);


Note that even without allowing a conditional_expression in normal code, you
would probably often need one to express useful postconditions:

    function Max(X, Y : T) return T
      with
        Post =>
          Max'Result = (Size(X) >= Size(Y) then X else Y);

If one is going to be seeing such things in postconditions, it would be
frustrating to disallow them in normal code.

****************************************************************

From: Randy Brukardt
Sent: Thursday, March  5, 2009  1:56 PM

>    conditional_expression ::=
>         <condition> THEN <expression>
>           {ELSIF <condition> THEN <expression>}
>           [ELSE <expression>]

OK, but I essentially proposed that a week ago. Where the heck have you been?
The only difference here is "elsif" clause, and I had already put that into the
proposal I'm writing. I realized the need for the "Elsif" the first time I tried
to use one of these in an example.

The main difference here is that you left out the parens. And the optional Else.

> In general, conditional_expression would only be permitted in a
> limited number of contexts.

I don't see why. Just *always* parenthesize it, resolve it like an aggregate,
and there is no conflict with other syntax. Trying to allow the saving of the
typing of *two* characters is silly to me; it makes the resolution and parsing
much harder. (Expression parsing would have to be context-sensitive, which is a
non-starter with me.)

It would be more important to avoid the parens if we were only going to use
pragmas for preconditions, but since we're going with syntax, there is no
advantage at all - IMHO.

> The "ELSE
> <expression>" would be optional only for a boolean conditional
> expression, as used in an assertion, constraint, precondition,
> postcondition, etc., with the implied semantics being "ELSE True."
> This special case provides the conditional assertion capability that
> is quite important in some cases for expressing preconditions and
> postconditions.

I could live with this idea, although it seems unnecessary to me. If you want
"Else True", you really ought to say so.

...
> Note that even without allowing a conditional_expression in normal
> code, you would probably often need one to express useful
> postconditions:
>
>     function Max(X, Y : T) return T
>       with
>         Post =>
>           Max'Result = (Size(X) >= Size(Y) then X else Y);
>
> If one is going to be seeing such things in postconditions, it would
> be frustrating to disallow them in normal code.

Surely. I don't see any reason for these things to be special. But no one has
ever proposed that. Why the straw man??

****************************************************************

From: Robert Dewar
Sent: Thursday, March  5, 2009  2:44 PM

> I don't see why. Just *always* parenthesize it, resolve it like an
> aggregate, and there is no conflict with other syntax. Trying to
> allow the saving of the typing of *two* characters is silly to me, it
> makes the resolution and parsing much harder. (Expression parsing
> would have to be context-sensitive, which is a non-starter with me.)

I agree the parens should always be there (I suppose it would be too radical to
allow the nice Algol-68 short syntax

(condition | expression | expression) :-)
>
> It would be more important to avoid the parens if we were only going
> to use pragmas for preconditions, but since we're going with syntax,
> there is no advantage at all - IMHO.

resolve like an aggregate seems dubious to me

doesn't that mean that if you write

   (if X > 3 then 2 else 4) + (if X > 12 then 13 else 14)

it is ambiguous?

> I could live with this idea, although it seems unnecessary to me. If
> you want "Else True", you really ought to say so.

I agree

> Surely. I don't see any reason for these things to be special. But no
> one has ever proposed that. Why the straw man??

because there was a suggestion of implies being allowed only in PPC's.
Note that this kind of thing happens in SPARK annotations, where there e.g. you
can write quantifiers.

And while we are at it, if we are talking useful stuff in PPC's, how about
adding quantifiers while we are at it

    (for all X in Sint => Sqrt (X) < 23)

or something like that ...

****************************************************************

From: Edmond Schonberg
Sent: Thursday, March  5, 2009  2:51 PM

>> It would be more important to avoid the parens if we were only going
>> to use pragmas for preconditions, but since we're going with syntax,
>> there is no advantage at all - IMHO.
>
> resolve like an aggregate seems dubious to me
>
> doesn't that mean that if you write
>
>  (if X > 3 then 2 else 4) + (if X > 12 then 13 else 14)
>
> is ambiguous?

No, this is not a complete context; there is some expected type that both of
these have to have

> And while we are at it, if we are talking useful stuff in PPC's, how
> about adding quantifiers while we are at it
>
>   (for all X in Sint => Sqrt (X) < 23)
>
> or something like that ...


That would mesh nicely with some new iterator forms, of course...  There
is no concern here that the iteration might not finish.

****************************************************************

From: Robert Dewar
Sent: Thursday, March  5, 2009  2:58 PM

...
> No, this is not a complete context, there is some expected type that
> both of these have to have

sorry not quite the right example, here is a more complete example

    procedure q (a, b : Integer);
    procedure q (a, b : float);

    q ((if X > 3 then 2 else 4), (if X > 12 then 13 else 14))

****************************************************************

From: Randy Brukardt
Sent: Thursday, March  5, 2009  3:17 PM

...
> sorry not quite the right example, here is a more complete example
>
>     procedure q (a, b : Integer);
>     procedure q (a, b : float);
>
>     q ((if X > 3 then 2 else 4), (if X > 12 then 13 else 14))

My thinking was that you don't want this to resolve without some qualification,
any more than you want an aggregate to do so. The problem is that there is a
potentially infinite number of expressions that could control the resolution,
and at some point that gets more confusing than helpful. Qualifying one of the
conditionals doesn't seem that bad:

     q (Integer'(if X > 3 then 2 else 4), (if X > 12 then 13 else 14))

To see the problem, imagine that you have the following declarations:

     function F return Integer;
     function F return Boolean;

     function G (N : Natural) return Integer;
     function G (N : Natural) return Float;

     Q ((if X > 3 then F else G(1)), (if X > 12 then G(2) else G(3)));

This resolves to Integer because there is no F for Float. Do we really want
something this confusing to work? (BTW, I really think this sort of rule should
be applicable to *all* Ada subexpressions, because it requires qualification on
the confusing ones and allows the obvious ones to resolve. It is a much better
rule than the one that allows unlimited overloading on results that we currently
use -- but sadly, too incompatible. Of course, you can't qualify functions that
return anonymous access types, another reason to avoid them... :-)

****************************************************************

From: Robert Dewar
Sent: Thursday, March  5, 2009  3:24 PM

OK, well, you yourself state the reason I don't like having to qualify in
this situation: it's not Ada! You wish it were, but it isn't!

Algol-68 has this all worked out very clearly and consistently, but retrofitting
it into Ada may require some kludges (like the qualification in Randy's example,
which to me is an obvious kludge).

Other things to worry about are staticness, etc.

****************************************************************

From: Bob Duff
Sent: Thursday, March  5, 2009  5:44 PM

> My thinking was that you don't want this to resolve without some
> qualification, anymore than you want an aggregate to do so.

I agree with Randy.  Allowing this to resolve adds implementation work, for
negative user benefit.  The analogy with aggregates is apt -- you have to know
the type before looking inside.

If you erase the second q, then the expected type is Integer for both of those,
and it resolves just fine.

****************************************************************

From: Robert Dewar
Sent: Thursday, March  5, 2009  7:12 PM

I find it really ugly that you cannot replace 1 by (True|1|1) in any context;
most certainly we regarded that as essential in the A68 design.

****************************************************************

From: Bob Duff
Sent: Thursday, March  5, 2009  5:44 PM

> I agree the parens should always be there (I suppose it would be too
> radical to allow the nice Algol-68 short syntax
>
> (condition | expression | expression) :-)

Yes, too radical.  ;-)

Anyway, how would you extend that to allow "elsif"?

****************************************************************

From: Robert Dewar
Sent: Thursday, March  5, 2009  7:14 PM

> Yes, too radical.  ;-)
>
> Anyway, how would you extend that to allow "elsif"?

A68 uses

   (condition | expr | expr |: expr | expr ....)

if I remember right, it's a long time ago ...

of course what I call here expr in A68 is actually a serial clause, so you can
write

   (X > 3 | int a; a := 4*b*c; a*a | 0)

****************************************************************

From: Bob Duff
Sent: Thursday, March  5, 2009  5:55 PM

> What we would like to write is something like:
>
>     procedure Add_To_Fruit_Salad(
>       Fruit : in out Fruit_Type; Bowl : in out Bowl_Type)
>       with
>         Pre =>
>             Fruit.Kind = Apple then Fruit.Is_Crisp
>           elsif
>             Fruit.Kind = Banana then Fruit.Is_Peeled
>           elsif
>             Fruit.Kind = Pineapple then Fruit.Is_Cored,
>
>         Post =>
>            Is_In_Bowl(Fruit, Bowl);

I like this, except I really dislike leaving off the "if".
That really harms readability, to me.

I like the fact that you can leave off the parens in certain places where they
are not needed.

I like the fact that "else True" is optional, but I can live without that if
others find it confusing.

One thing you lose in the above transformation is the full coverage checking of
case statements, which is one of my favorite features of Ada.  (Get rid of the
naughty "when others => null;" above.)  Can I have some form of conditional
expression that starts with "case", and has full coverage rules?  When
applicable, it's much safer than elsif chains.
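
[Editor's note: a purely hypothetical sketch of what such a case form might
look like for the fruit example quoted above -- no concrete syntax has been
proposed at this point in the thread:

     Pre =>
       (case Fruit.Kind is
          when Apple     => Fruit.Is_Crisp,
          when Banana    => Fruit.Is_Peeled,
          when Pineapple => Fruit.Is_Cored)

With full coverage rules, adding a new literal to the fruit kind type would be
rejected here until the expression is updated.]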

****************************************************************

From: Jean-Pierre Rosen
Sent: Sunday, March  8, 2009  1:21 AM

> OK, but I essentially proposed that a week ago. Where the heck have
> you been? The only difference here is "elsif" clause, and I had
> already put that into the proposal I'm writing. I realized the need
> for the "Elsif" the first time I tried to use one of these in an example.

I'd rather have:
    when <condition> => <expression>

(I proposed something like that for the pragma version, but my message seems to
have vanished in the haze)

****************************************************************

From: Robert Dewar
Sent: Sunday, March  8, 2009  4:19 AM

I find it peculiar (given expectations from other languages) to have such a
variation between the syntax for conditional statements and conditional
expressions. For my taste I would prefer they be totally unified as in any
reasonable expression language, but given we have inherited the Pascal tradition
of making a big difference between statements and expressions, let's at least
keep the syntax parallel (as we do in the case of function vs procedure calls
for example).

It would be reasonable to have a case expression as well, something like

   (case x is when bla => expr, bla => expr, bla => expr)

but reusing "when" for an if expression in this way, which is at odds with the
syntax of case, seems odd to me.


****************************************************************

From: Randy Brukardt
Sent: Sunday, March  9, 2009  2:00 PM

...
> I'd rather have:
>     when <condition> => <expression>
>
> (I proposed something like that for the pragma version, but my message
> seems to have vanished in the haze)

I can't tell whether you mean for the conditions to be evaluated independently
when there are many or sequentially. (Or even if you are allowing many.)

If sequentially, the syntax implies an independence between the conditions that
does not exist. (Case statement conditions use "when" and they are independent;
exits use "when" and they are independent.) "elsif" makes it clear that it
*follows* the previous condition.

If independent, then there is no way to handle "else" without repeating the
inverse of the condition, which is obviously error-prone.

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, March  9, 2009  6:04 AM

> I find it peculiar (given expectations from other languages) to have
> such a variation between the syntax for conditional statements and
> conditional expressions. For my taste I would prefer they be totally
> unified as in any reasonable expression language, but given we have
> inherited the Pascal tradition of making a big difference between
> statements and expressions, let's at least keep the syntax parallel (as
> we do in the case of function vs procedure calls for example).

This is not intended as a replacement for general "if expression", rather as a
replacement for "implies". I was specifically addressing Tuck's proposal for
Pre/Post conditions.

> I can't tell whether you mean for the conditions to be evaluated
> independently when there are many or sequentially. (Or even if you are
> allowing many.)

Let's make it clearer. Here is what I suggest:
From Tuck:
<declaration>
    WITH
      <aspect_name> [=> <aspect_value>],
      <aspect_name> [=> <aspect_value>],
      ... ;

My suggestion:
<aspect_value> ::= pre | post | invariant {<assertion>}

<assertion> ::= {[<guard>] <boolean expression>}
<guard> ::= when <condition> =>

All, and only, <boolean expression> corresponding to True <guard>s are evaluated
and should return true. No guard is the same as "when True".

This is more like guards in select or protected entries than case statements. I
think the intent would be clearer than conditional expressions (and certainly
clearer than "implies").

****************************************************************

From: Tucker Taft
Sent: Monday, March  9, 2009  10:27 AM

I am unclear what your BNF means.  "pre", "post", etc were intended to be aspect
*names*, not aspect values. Also, the aspect value for "pre" might be several
boolean expressions connected by "and", and each boolean expression might need a
different "guard."  Also, we would like these to be usable in other contexts
where boolean expressions are used, such as in calling the "Assert" procedure
declared in Ada.Assertions, or in the pragma Assert, or in declaring a boolean
constant such as "Is_Ok", etc. So I don't think we want a syntax that only works
in the context of the "aspect_name => aspect_clause" syntax.

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, March  9, 2009  11:08 AM

> I am unclear what your BNF means.  "pre", "post", etc were intended to
> be aspect *names*, not aspect values.

Sorry, my confusion. Should have been:
<aspect_value> ::= {<assertion>}
<assertion> ::= [<guard>] <boolean expression>
<guard> ::= when <condition> =>

> Also, the aspect value for "pre" might be several boolean expressions
> connected by "and", and each boolean expression might need a different
> "guard."

The (corrected) proposed syntax allows several <assertion>s, with possibly
different guards. I think it is much clearer than packing everything in a single
expression; e.g.: X is null iff Y = 0

pre =>
   when X = null => Y = 0;
   when Y = 0    => X = null;

compared to:
pre =>
   (if x = null then y = 0 else y /= 0)

> Also, we would like these
> to be usable in other contexts where boolean expressions are used,
> such as in calling the "Assert" procedure declared in Ada.Assertions,
> or in the pragma Assert,

We could also have:
   pragma Assert(
     [Check =>] boolean_expression
     [, When => boolean_expression]
     , [Message =>] string_expression);

> or in declaring a boolean constant such as "Is_Ok", etc.
> So I don't think we want a syntax that only works in the context of
> the "aspect_name => aspect_clause" syntax.

It would not preclude the "if expression" in the general case, but I think it
would read nicely for assertions. And it's only syntax, I don't think it would
be a big deal for the compiler. So it's mainly a matter of taste.

****************************************************************

From: Bob Duff
Sent: Monday, March  9, 2009  11:20 AM

> I am unclear what your BNF means.  "pre", "post", etc were intended to
> be aspect *names*, not aspect values.
> Also, the aspect value for "pre" might be several boolean expressions
> connected by "and", and each boolean expression might need a different
> "guard."  Also, we would like these to be usable in other contexts
> where boolean expressions are used, such as in calling the "Assert"
> procedure declared in Ada.Assertions, or in the pragma Assert, or in
> declaring a boolean constant such as "Is_Ok", etc.
> So I don't think we want a syntax that only works in the context of
> the "aspect_name => aspect_clause" syntax.

I agree with Tucker.  And if we're going to have a general-purpose conditional
expression, I don't see any need for the proposed "when" syntax.  Nothing wrong
with it, I suppose, but we just don't need gratuitous extra syntax.

Let's restrict Ada 2012 features to what's really needed/useful.

****************************************************************

From: Randy Brukardt
Sent: Monday, March  9, 2009  3:09 PM

...
> The (corrected) proposed syntax allow several <assertion>, with
> possibly different guards. I think it is much clearer than packing
> everything in a single expression:
> f.e: X is null iff Y = 0
>
> pre =>
>    when X = null => Y = 0;
>    when Y = 0    => X = null;
>
> compared to:
> pre =>
>    (if x = null then y = 0 else y /= 0)

Perhaps it's this flu bug that I have, but I don't have the foggiest idea what
the first expression is supposed to mean -- it surely does not appear equivalent
to the second. The second one is crystal clear.

Moral: If you don't understand the logical "implies", that confusion is not
going to go away by dressing it up in some other syntax. If you don't understand
"if", you aren't a programmer, so let's stick to the universal.

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, March  9, 2009  5:18 AM

> Perhaps it's this flu bug that I have, but I don't have the foggiest
> idea what the first expression is supposed to mean -- it surely does
> not appear equivalent to the second. The second one is crystal clear.

Well, it means that when X is null, Y should be equal to 0, and
(conversely) when Y is equal to 0, X should be null.

I found that notation nice-looking, and in line with the notion of assertions
that are applicable only under some conditions; but if others don't support the
idea, I won't insist on that (it's only syntactic sugar after all).
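
[Editor's note: for comparison, the same two-way requirement could be written
with the proposed else-less conditional expressions as two conjoined
implications:

   Pre =>
     (if X = null then Y = 0) and (if Y = 0 then X = null)
]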

****************************************************************

From: Randy Brukardt
Sent: Saturday, March 14, 2009  9:48 PM

In the interests of making myself more work that I may never get paid for,
below find a *complete* write-up for conditional expressions. This includes
wording for everything that has previously been discussed. (This is version /02
since I made extensive changes today while filing the mail, including
borrowing several nice examples.)

I included Tucker's optional else part, as I figured it was easier to delete
the wording if we decide to remove that option than to add it at a meeting.

I stuck with the model that these are much like an aggregate, thus I inserted
them into the syntax in the same places.

I've left "case expressions" and "loop expressions" (or "quantifiers", whatever
the heck that is) for someone else to write up. I'd suggest using a similar
model for them, at least syntactically.

All comments welcome.

****************************************************************

From: Tucker Taft
Sent: Sunday, March 15, 2009  8:03 AM

Thanks, Randy.  I have concluded that this is
*much* more important than "implies" or equivalent, as I believe these will
be heavily used in preconditions and postconditions.  There are a lot of
interesting preconditions and postconditions you simply can't state without
using a conditional expression.

****************************************************************

From: Robert Dewar
Sent: Sunday, March 15, 2009  8:20 AM

I agree with this, but actually the use in PPC's seems only a part of what these
bring; it is annoying to have to write:

    if bla then
       lhs := 3;
    else
       lhs := 4;
    end if;

where lhs is the same (possibly complex) lhs. Much cleaner and clearer to write

    lhs := (if bla then 3 else 4);

I must say I still like to abstract complex conditionals into predicate
functions. People manage to write frighteningly complex conditions without this
feature, and I am afraid that this will get worse with the addition of this
feature. Still, possible misuse is not a reason for prohibition!

****************************************************************

From: Ed Schonberg
Sent: Sunday, March 15, 2009  9:07 AM

> where lhs is the same (possibly complex) lhs. much cleaner and clearer
> to write
>
>   lhs := (if bla then 3 else 4);

Agreed, this will be used abundantly.  I like the default boolean True
interpretation when the else part is missing.

Not convinced about the need for case expressions, but would like to see a
similar proposal for what Randy calls loop expressions, i.e.  quantified
expressions.

****************************************************************

From: Robert Dewar
Sent: Sunday, March 15, 2009  10:22 AM

BTW, for the record on the implies thread, I am much more amenable to adding
implies if it is not spelled with the confusing keyword "implies". So something
like

   a -> b

would be much more acceptable to me :-)

****************************************************************

From: Stephen Michell
Sent: Sunday, March 15, 2009  12:46 PM

For the record, I believe that "implies" or "->" should definitely be added.
From Randy's AI, saying
   if A then B else TRUE  is just a difficult way of saying A -> B.
One problem with the if-then-else comes with conjunctions, such as
   A -> B, C-> D, E -> F
It is easy to see that
  A->B and C->D and E->F
which could easily be converted to
  (if A then B else True) and (if C then D else True) and (if E then F else True)
but many people are going to try to do it all with if-then-elsif-else
constructs, which becomes very messy.

****************************************************************

From: Bob Duff
Sent: Sunday, March 15, 2009  2:28 PM

> For the record, I believe that "implies" or "->" should definitely be
> added. From Randy's AI, saying
>    if A then B else TRUE  is just difficult way of saying A -> B.

No, note that the proposal says "else True" is optional.
So "A -> B" means "A implies B", which means "if A then B", which means "if A
then B else True".  I think "if A then B" is a good way of saying what's meant.

> One problem with the if-then-else comes with conjunctions, such as
>    A -> B, C-> D, E -> F
> It is easy to see that
>   A->B and C->D and E->F
> which could easily be converted to
>   (if A then B else True) and (if C then D else True) and (if E then F
> else True)

That should be:
   (if A then B) and (if C then D) and (if E then F)

Or use "and then", if you prefer.  I wouldn't normally put that in an 'if'
statement, but in an assertion, it reads just fine.  Or I might write:

    pragma Assert(if A then B);
    pragma Assert(if C then D);
    pragma Assert(if E then F);

(Of course I realize Randy wants double parens there.  No big deal.)

> but many people are going to try to do it all with if-then-elsif-else
> constructs, which becomes very messy.
>
> Robert Dewar wrote:
> > BTW, for the record on the implies thread, I am much more amenable
> > to adding implies if it is not spelled with the confusing keyword
> > "implies". So something like
> >
> >   a -> b
> >
> > would be much more acceptable to me :-)

I've no idea why Robert thinks "->" is more readable and less confusing than
"implies".  I mean, we don't use "^" and "|" for "and" and "or".  We prefer the
English words.  Like "xor" (ha ha!)

I say, leave out "implies" or "->" (or however it's spelled) from the language.
The proposed conditional expression does the job quite nicely.

****************************************************************

From: Tucker Taft
Sent: Sunday, March 15, 2009  3:52 PM

The problem with "implies" is that it
really doesn't read well with assertions or preconditions.  You should try a few
preconditions.  They always seem a bit confusing when you use "implies".
What you are really trying to convey is
that "if blah is true" then this is a precondition.
That is admittedly equivalent to "blah implies this"
but it isn't really a general "implication"
in the logic sense.  It is simply a precondition that only applies some of the
time.

And for postconditions, you quite often want a conditional expression with an
"else". For example, how would you express the postcondition of a max function
without a conditional expression (presuming you didn't have a 'Max attribute!).

I think if we agree on having an else-less boolean conditional-expression, then
we get the best of both worlds -- something that is equivalent to "implies" but
captures the meaning better in a precondition, based on a general
conditional-expression syntax that we will need for postconditions.

If you drop the "else True" your example doesn't seem so bad:

   (if A then B) and
   (if C then D) and
   (if E then F)

Or depending on the relationships between the clauses, it might become:

     (if A then B elsif
         C then D elsif
         E then F)

I'll admit I still have a slight preference for omitting the initial "if":

    (A then B) and (C then D) and (E then F)

or

    (A then B  elsif  C then D  elsif  E then F)

It just seems more symmetrical to leave off the initial "if" since I think we
all agree we want to leave off the trailing "endif".  And the "if" seems pretty
obvious when you start off with a boolean condition.  Here is an example of a
postcondition using a conditional expression:

   function Max(X, Y : T) return T
     Post =>
       Max'Result = (X >= Y then X else Y);

Here is an example of a precondition:

   function Copy_Tree(T : Tree) return Tree
     Pre =>
       (T /= null then (T.Left'Initialized and T.Right'Initialized));

The "if" seems unnecessary in both cases.

I would agree with Robert that "->" is preferable to "implies" as it seems to
avoid some of the confusion inherent in the word "implies," but unfortunately I
fear we would get stalled in discussions about short-circuiting, which I think
is a waste of breath at this point. Whereas with "then," we have an existing
reserved word, and one for which there is no confusion about the order of
evaluation.

****************************************************************

From: Robert Dewar
Sent: Sunday, March 15, 2009  4:55 PM

> I think if we agree on having an else-less boolean
> conditional-expression, then we get the best of both worlds --
> something that is equivalent to "implies" but captures the meaning
> better in an precondition, which based on a general
> conditional-expression syntax that we will need for postconditions.

This usage argument is a strong one for allowing else-less if expressions.

...
> I'll admit I still have a slight preference for omitting the initial
> "if":
>
>     (A then B) and (C then D) and (E then F)
>
> or
>
>     (A then B  elsif  C then D  elsif  E then F)

I prefer it with the initial IF.

> It just seems more symmetrical to leave off the initial "if" since I
> think we all agree we want to leave off the trailing "endif".  And the
> "if"
> seems pretty obvious when you start off with a boolean condition.
> Here is an example of a postcondition using a conditional expression:

Yes, but you don't always know it's a boolean condition when you start off, and
if the expression is

    bool_var := (long_boolean_expression then ...

it takes a while of left to right reading to understand that you have a
conditional expression

I don't buy the symmetry argument! You read left to right, so the IF on the left
is much more important than the END IF on the right, when you already know you
are in an IF.

> I would agree with Robert that "->" is preferable to "implies" as it
> seems to avoid some of the confusion inherent in the word "implies,"
> but unfortunately I fear we would get stalled in discussions about
> short-circuiting, which I think is a waste of breath at this point.
> Whereas with "then," we have an existing reserved word, and one for
> which there is no confusion about the order of evaluation.

If -> is provided, it definitely should short circuit, but all-in-all I agree
with Tuck that if we have the conditional expressions we don't need implies.

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, March 16, 2009  8:41 AM

> Also do we have
>
>       (exists E in A : F(E) > 0)
>
And if we go further, we could write:
   [ x in [1..N] | not exists y in [2..x-1] | x mod y = 0]

Are you sure we are still talking about Ada? ;-)

****************************************************************

From: Tucker Taft
Sent: Monday, March 16, 2009  9:04 AM

Unicode can help here:

     ∀ 2200, FOR ALL, forall, ForAll
     ∃ 2203, THERE EXISTS, exist, Exists
     ∄ 2204, THERE DOES NOT EXIST, nexist, NotExists, nexists

I agree with Robert that these might be appropriate uses of Unicode.  Certainly
minimizes the upward compatibility issue.

****************************************************************

From: Bob Duff
Sent: Sunday, March 15, 2009  9:29 AM

> In the interests of making myself more work that I may never get paid
> for, below find a *complete* write-up for conditional expressions.

Thanks.  Looks great.

> One could write a local function to evaluate the expression properly,
> but that doesn't work if the expression is used in a context where it
> has to be static (such as specifying the size of a subtype).

It also doesn't work very well in a package spec.

...
> [Editor's Note: "condition" is defined in 5.3. Should it (both syntax
> and semantics - 2 lines) be moved here to avoid the forward
> reference??]

Probably.

>     Legality Rules
>
> If there is no "else" dependent_expression, the type of
> conditional_expression shall be of a boolean type.

Too many "of"'s here, and not enough "the"'s.  Either "the
conditional_expression shall be of a boolean type" or "the type of the
conditional_expression shall be a boolean type".

Or do we want, "the expected type for the conditional_expression shall be a
boolean type"?

...
> Split 4.9(33), replacing it by the following:
>
> A static expression is evaluated at compile time except when:
>  * it is part of the right operand of a static short-circuit control
> form whose value
>    is determined by its left operand;
>  * it is part of a condition of some part of a conditional_expression,
> and at
>    least one and at least one condition of a preceeding part of the
     ^^^^^^^^^^^^^^^^^^^^^^^^^^
>    conditional_expression has the value True; or
>  * it is part of the dependent_expression of some part of a
>    conditional_expression, and the associated condition evaluates to
> False; or
>  * it is part of the dependent_expression of some part of a
>    conditional_expression, and at least one condition of a preceeding
> part of
>    the conditional_expression has the value True.

The above seems right, but I find it confusing.  Maybe something like this would
be better:

A static expression that is not a subexpression is evaluated at compile time. If
a static expression is evaluated at compile time, all of its operands [immediate
subexpressions?] are also evaluated at compile time, except:

    For a short-circuit control form, the left operand is evaluated.  The right
    operand is evaluated only if the left operand does not determine the value
    of the s-c c.f.

    For a conditional expression, the condition specified after
    if, and any conditions specified after elsif, are evaluated in succession
    (treating a final else as elsif True then), until one evaluates to True or
    all conditions are evaluated and yield False. If a condition evaluates to
    True, the associated dependent_expression is evaluated.

(Maybe we need to define "subexpression" and/or "operand".)
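
[Editor's note: as an illustration of the intent of either formulation, for
static I and J the expression

     (if I /= 0 then J/I > 5 else True)

has its conditions evaluated in order; the dependent_expression J/I is
evaluated only if I /= 0 yields True, so no division by zero is attempted when
I = 0. See also Randy's example later in this thread.]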

> AARM Reason: We need the last bullet so that only a single
> dependent_expression is evaluated if there is more than one condition
> that evaluates to True.
> End AARM Reason.
>
> The compile-time evaluation of a static expression is performed
> exactly, without performing Overflow_Checks. For a static expression
> that is evaluated:
>
> [Editor's note: Adam Beneschan suggested in AC-0171 to add another
> case to this list - an aggregate others clause known to denote zero
> elements:
>
>   * it is part of the expression of an array_component_association whose
>     discrete_choice_list is statically known to denote zero
>     components.
>
> I don't think this level of detail is common enough to add to the
> language, but the rewrite of this paragraph would make it easy to add
> here if desired.]

Actually, why don't we just get rid of this concept of "evaluated at compile
time"?  Does it actually mean anything?  I mean, is there an ACATS that can tell
the difference?

The above is a lot of verbiage for a concept that doesn't mean anything.

> Thus, we design conditional expressions work like and look like aggregates.
                                         ^ to

> This
      ^  is

> the reason that they are defined to be a primary rather than some
> other sort of expression. The fact that the parentheses are required
> makes this preferable (it would be confusing otherwise).
>
> One could imagine rules allowing the parenthesis being omitted in
> contexts
                                       parentheses

> where
> they are not needed, such as a pragma argument. However, this would
> add conceptual overhead - it is important to note that this was not done
> for aggregates except in the case of qualified expressions. It also
> would make syntax error correction much harder. Consider the following
> syntactically incorrect Ada
> code:
>
>      exit when if Foo then B := 10; end if;
>
> Currently, syntax error correction most likely would identify a
> missing expression and semicolon. With the conditional expression
> syntax without parentheses, the syntax is correct up until the ":=",
> at which point it is likely too late for the compiler to determine
> that the real error occurred much earlier. Nor would it be easy for
> the programmer to see where the error is.
> (Remember that Foo and B can be arbitrarily complex and on multiple
> lines.) With a required parenthesis, syntax correction would either
> identify the
  required parentheses

> parenthesis
> or the expression as missing, showing the correct point of the error.

I agree, regarding the above example of "exit".  But it would be nice if we
could say:

    pragma Assert(if A then B);

and:

    Assert(if A then B); -- Here, Assert is a procedure.

> Following the model of aggregates also simplifies the resolution of
> conditional expressions. This avoids nasty cases of resolution, in
> which a compiler might be able to figure out a unique meaning but a
> human could not. For instance, given the declarations:
>
>    procedure Q (A, B : Integer);
>    procedure Q (A, B : float);
>
>    function F return Integer;
>    function F return Boolean;
>
>    function G (N : Natural) return Integer;
>    function G (N : Natural) return Float;
>
>    Q ((if X > 3 then F else G(1)), (if X > 12 then G(2) else G(3)));
>
> If we used full resolution, this would resolve to Integer because
> there is no F for Float. With the unlimited number of terms available
> in a conditional expression, one can construct ever more complex
> examples.
>
> Should we eventually find this to be too restrictive, changing to a
> full resolution model would be compatible (it would only make illegal
> programs legal), whereas going in the other direction would be
> incompatible. So it is best to start with the more restrictive rule.

Agreed, but I'm pretty sure I will never want to relax this rule.

> We allow the "else" branch to be omitted for boolean-valued
> conditional expressions. This eases the use of conditional expressions
> in preconditions and postconditions, as it provides a very readable form
> of the "implies" relationship of Boolean algebra. That is,
>     A implies B
> could be written as
>     (if A then B)
> In this case, the "else" branch is more noise than information.

The point would be clearer if you say:

     pragma Assert(A implies B);
could be written as
     pragma Assert((if A then B));

(Or "pragma Assert(if A then B);", if you take my syntax suggestion above.)

Because I suspect boolean conditional expressions are primarily useful in
assertions (by which I mean to include preconditions and so forth). I don't see
much use for:

    if (if A then B) then
        ...

****************************************************************

From: Bob Duff
Sent: Sunday, March 15, 2009  9:32 AM

> Thanks, Randy.  I have concluded that this is
> *much* more important than "implies" or equivalent, as I believe these
> will be heavily used in preconditions and postconditions.  There are a
> lot of interesting preconditions and postconditions you simply can't
> state without using a conditional expression.

Agreed.  So let's forget about adding "implies", because:

 1. Some people find it confusing.

 2. To folks like me, who do not find it confusing, it's not important to have,
    once we have conditional expressions.

 3. New reserved words should not be added unless they provide some important
    benefit, and certainly not for negative benefit, as (1) implies.

****************************************************************

From: Bob Duff
Sent: Sunday, March 15, 2009  9:46 AM

> Agreed, this will be used abundantly.  I like the default boolean True
> interpretation when the else part is missing.

Me, too.

> Not convinced about the need for case expressions,

I am.  (I suppose this means I have to write it up?  Bleah. ;-))

The full-coverage/no-duplicate rules are one of the huge benefits of Ada
-- perhaps my favorite feature.  So if I currently have:

    procedure (...) is
    begin
        case ... is
            when This =>
                Assert (...);
            when That =>
                Assert (...);
            ...

I want to convert that into a precondition, but I don't want to convert into a
chain of elsif's, thus losing full coverage.  When I add a new enumeration
literal, I might forget to update the precondition.

Note that I left out the noise words, "null; pragma " above.  ;-)

>...but would like to
> see a similar proposal for what Randy calls loop expressions, i.e.
> quantified expressions.

I've no idea what quantified expressions should look like or mean, so I too
would like to see a proposal.  Didn't Cyrille volunteer to send one to
ada-comment?

****************************************************************

From: Bob Duff
Sent: Sunday, March 15, 2009  9:37 AM

> I must say I still like to abstract complex conditionals into
> predicate functions.

Agreed.  Which means that if you want to use a conditional in a precondition,
you (sometimes) want to put the BODY of a boolean function in a package SPEC, so
it is not hidden from the compiler!

[Editor's note: Replies to this part of the message are found in AC-00180.]

>...People manage to write
> frighteningly complex conditions without this feature,  and I am
>afraid that this will get worse with the  addition of this feature.

No doubt.

>...Still possible misuse is
> not a reason for prohibition!

Agreed.  Can I quote you on that when discussing other unrelated issues?  ;-) I
think it's a general principle: if you try to prevent misuse, you will also
prevent good use.  Preventing misuse is the job of educators and code reviewers,
not language designers.

****************************************************************

From: Bob Duff
Sent: Sunday, March 15, 2009  4:39 PM

> I'll admit I still have a slight preference for omitting the initial
> "if":

Yuck.  I agree with everything else you wrote, but I just can't stomach leaving
off the 'if'.

>... And the "if"
> seems pretty obvious when you start off with  a boolean condition.

The problem is, I don't immediately see it as a boolean condition, unless I see
the 'if' first.  Please keep the 'if' syntax on conditional expressions!

****************************************************************

From: Bob Duff
Sent: Monday, March 16, 2009  9:38 AM

> Unicode can help here:
>
>      ∀ 2200, FOR ALL, forall, ForAll
>      ∃ 2203, THERE EXISTS, exist, Exists
>      ∄ 2204, THERE DOES NOT EXIST, nexist, NotExists, nexists
>
> I agree with Robert that these might be appropriate uses of Unicode.
> Certainly minimizes the upward compatibility issue.

I would strongly object to having these features available _only_ through
non-7-bit-ascii characters.  I don't know how to type ∀, and I don't want to
learn.  And to this day, misc software displays all kinds of unicode stuff wrong.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 16, 2009  1:58 PM

...
> Actually, why don't we just get rid of this concept of "evaluated at
> compile time"?  Does it actually mean anything?
>  I mean, is there an ACATS that can tell the difference?
>
> The above is a lot of verbiage for a concept that doesn't mean
> anything.

Not true: an evaluated static expression that would raise an exception is
statically illegal. Thus, if we didn't have this wording (for I and J static):

   I /= 0 and then J/I > 5  -- Would be illegal if I = 0

   (if I /= 0 then J/I > 5 else True) -- Would be illegal if I = 0

So it should be obvious that we *do* need this wording and concept.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 16, 2009  2:11 PM

>>...Still possible misuse is
>> not a reason for prohibition!

> Agreed.  Can I quote you on that when discussing other unrelated issues? ;-)
> I think it's a general principle: if you try to prevent misuse, you will
> also prevent good use.  Preventing misuse is the job of educators and code
> reviewers, not language designers.

So you are saying that strong typing is a mistake? ;-)

Obviously, there is a balance between preventing *likely* misuse and *rare*
misuse - you want rules to prevent the former and not the latter. My point is
that this is not a black-and-white decision, so taking Robert's statement out of
context is pointless (and harmful).

****************************************************************

From: Randy Brukardt
Sent: Monday, March 16, 2009  2:14 PM

> The problem is, I don't immediately see it as a boolean condition,
> unless I see the 'if' first.  Please keep the 'if'
> syntax on conditional expressions!

I agree strongly with Bob. This is the same reason for not omitting the parens:
syntax understanding (and thus syntax error correction) requires early
determination of the difference between a "normal" boolean expression and a
conditional expression, just like it does between an if statement and a
conditional expression. I think everything I wrote about the latter would also
apply to the former.

After all, I did use all of the *good* ideas from Tucker's proposal, so the fact
that I didn't use that one should indicate what I thought about it...

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, March 16, 2009  3:18 PM

>> I'll admit I still have a slight preference for omitting the initial
>> "if":
>
> Yuck.  I agree with everything else you wrote, but I just can't
> stomach leaving off the 'if'.

I support that Yuck. Actually, I had to mentally add an "if" in front of the
expression to be able to understand it.

****************************************************************

From: Bob Duff
Sent: Monday, March 16, 2009  5:34 PM

> So you are saying that strong typing is a mistake? ;-)

Not at all.  Ada's type rules do not "prevent misuse", they just make mistakes
less likely.  You can always bypass the type system using Unchecked_Conversion
and other chap-13-ish features.  To me, "misuse" implies some deliberate (but
foolish/misguided) action.

"Prevent misuse" would imply (for example) outlawing Unchecked_Conversion. I'm
against that, despite the fact that I have seen misuse of Unchecked_Conversion.

"Prevent mistakes" means "require the programmer to say something explicit, like
'with Unchecked_Conversion', when using this dangerous feature". I'm in favor of
trying to prevent mistakes, and I'm in favor of requiring documentation of
questionable or dangerous things, which is exactly what "with U_C" does.

> Obviously, there is a balance between preventing *likely* misuse and
> *rare* misuse - you want rules to prevent the former and not the latter.

I don't agree, but I think you're just using "misuse" in a different way than I.

>...My point
> is that this is not a black-and-white decision, so taking Robert's
>statement  out of context is pointless (and harmful).

****************************************************************

From: Randy Brukardt
Sent: Monday, March 16, 2009  7:37 PM

...
> > >I think it's a general principle: if you try to prevent misuse, you
> > >will also prevent good use.  Preventing misuse is the job of
> > >educators and code reviewers, not language designers.
> >
> > So you are saying that strong typing is a mistake? ;-)
>
> Not at all.  Ada's type rules do not "prevent misuse", they just make
> mistakes less likely.  You can always bypass the type system using
> Unchecked_Conversion and other chap-13-ish features.  To me, "misuse"
> implies some deliberate (but
> foolish/misguided) action.

One person's "mistake" is another person's "misuse". You seem to be arguing both
sides here: prevent what I think is a "mistake", but allow what I think is a
"misuse". I just don't see such a black-and-white line here. (Well, at least if
I'm trying to be an impartial language standard editor -- my personal opinions
may vary. :-)

After all, a type error is often just a misguided action.

  Flt : Float := 1.0;
  ...
  Flt := Flt + 1;

is a "mistake" in Ada. But it makes perfect sense - it is just a type error
preventing you from doing something. (Replace "1" with "Int_Var" if you don't
like the literal example. It's hard to see what the "mistake" is in this
example.)

So I don't see any reason to try to classify things into "mistakes" and
"misuses". They're just a continuum of programming errors.

> "Prevent misuse" would imply (for example) outlawing Unchecked_Conversion.
> I'm against that, despite the fact that I have seen misuse of
> Unchecked_Conversion.
>
> "Prevent mistakes" means "require the programmer to say something
> explicit, like 'with Unchecked_Conversion', when using this dangerous
> feature".
> I'm in favor of trying to prevent mistakes, and I'm in favor of
> requiring documentation of questionable or dangerous things, which is
> exactly what "with U_C" does.

Which are just two points on the continuum of choices; they're both trying to
prevent errors in different ways. The first would be outright banning of a
relatively rare programming error; the second is trying to discourage the use of
the feature without banning it.

> > Obviously, there is a balance between preventing *likely* misuse and
> > *rare* misuse - you want rules to prevent the former and not the latter.
>
> I don't agree, but I think you're just using "misuse" in a different
> way than I.

I don't see any difference between a "mistake" and "misuse", and apparently you
do. They're the same thing to me. There are just degrees of wrongness: *likely*
mistake/misuse vs. *rare* mistake/misuse.

****************************************************************

From: John Barnes
Sent: Tuesday, March 17, 2009  8:36 AM

I have been away from my desk for a bit and on my return am totally overwhelmed
by ARG mail.

And a big problem is that this thread seems to have changed into a completely
different topic.

Let me say that I really missed conditional expressions in Ada having used them
widely in Algol 60 and would vote for them if put in nicely.

But I seem to be seeing all sorts of strange notions like omitting end if and
even omitting if.  It all seems very ugly, turning Ada into a hieroglyphic
mess.

Just where are we going? Can someone do a very brief summary?

****************************************************************

From: Edmond Schonberg
Sent: Tuesday, March 17, 2009  8:50 AM

I suspect the "if" at the beginning stays (it's N to 1 in favor, with N >> 1).
The closer does seem superfluous.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  8:48 AM

Current thinking is  (IF condition THEN expr
                       {ELSIF condition THEN expr}
                       [ELSE expr])

If ELSE expr is omitted, it means ELSE TRUE (convenient for the implication
case).

No one has even suggested END IF being included, and I would definitely be
opposed to that suggestion. It is unnecessary syntactic clutter, and the END
keyword is fundamental in error recovery and should not be allowed in the middle
of expressions!
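
[Editor's note: an example in the summarized form, reusing the fruit
precondition from earlier in the thread:

     Pre =>
       (if    Fruit.Kind = Apple     then Fruit.Is_Crisp
        elsif Fruit.Kind = Banana    then Fruit.Is_Peeled
        elsif Fruit.Kind = Pineapple then Fruit.Is_Cored)

The omitted ELSE part defaults to True, so the precondition places no
requirement on other kinds of fruit.]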

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 17, 2009  9:23 AM

One remaining question is whether un-parenthesized conditional expressions would
be permitted in some contexts, such as parameters in a call, and other contexts
where they would follow "=>", to avoid the annoyance of double parentheses and
the LISP-like appearance.

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, March 17, 2009  9:41 AM

My personal preference would be (as mentioned before) "like aggregates", i.e.
no double parentheses only in qualified expressions.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  9:47 AM

I don't understand; aggregates do require the double parens. The question is,
can you write

     X (if A then B else C)

Surely you are not suggesting that we ALWAYS have to qualify these expressions?
If so, I strongly object, it would be absurd to have to write

    pragma Precondition (Boolean'(if A then B));

now the question is, can we write

    pragma Precondition (if A then B);

or do we have to write

    pragma Precondition ((if A then B));

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, March 17, 2009  10:37 AM

>> My personal preference would be (as mentioned before) "like
>> aggregates", i.e. no double parentheses only in qualified expressions.
>
> I don't understand, aggregates do require the double parens
Not in qualified expressions, i.e. T'(...)

I said the same should apply to "if expression", no more, no less.

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 17, 2009  10:36 AM

I think what JP meant was that, as with the special case for aggregates in
qualified expressions, a qualified expression whose operand is a conditional
expression would not require double parentheses. Elsewhere, JP doesn't seem
worried about double parentheses, though some of us seem to disagree with him
(mildly, it seems).  I think we would all agree that qualified expressions
should not require double parentheses if the operand is a conditional
expression.

JP, did I convey your intent properly?

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  10:46 AM

Sure they require the double parens:

     X (T'(...))

is what we mean by double parens!

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  10:49 AM

Ah, OK, that makes sense; yes, of course we don't need F (Integer'((if A then 1
else 2)));

Even F (Integer'(if A then 1 else 2))

seems horrible to me. I don't see why we can't resolve this properly without the
qualification; it seems trivial to me, although I have not spent much time delving
into resolution. Ed Schonberg (our primary delver into such matters) also
declares this trivial, and in fact we already do it for conditional
expressions (we would have to undo stuff to implement this nasty restriction).

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 17, 2009  11:04 AM

JP was not suggesting that a qualified expression was required, merely that *if*
you chose to use one, then you wouldn't be burdened with yet another set of
parentheses.  Just like aggregates, the RM wouldn't require a qualified
expression so long as the expected type for the conditional expression was a
"single" type.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  11:43 AM

OK, that's not how I interpreted his message, so how about we let JP clarify?
:-)

****************************************************************

From: Bob Duff
Sent: Tuesday, March 17, 2009  11:58 AM

JP did clarify, and his clarification matches Tucker's interpretation.

There are two separate issues related to qualified expressions, which I think
you are confusing:

    1. Syntax.  Randy proposed it should be "just like aggregates",
       which means you have to put parens around a conditional
       expression, except there's a special case in the BNF for qualified
       expressions that allows you to omit one layer of parens.
       JP agrees with Randy.  Tucker and I would like to allow
       omitting parens in a few more cases (but I don't feel
       strongly).

    2. Overload resolution.  Randy proposed it should be "just like
       aggregates", which means the expected type must be a "single type".
       That means you have to stick in some qualifications where you
       otherwise wouldn't have to.  I agree with Randy on this.
       I think Tucker does, too.  You disagree.  I don't think
       JP said anything about this issue.

Note that my opinion on (2) is based on readability, so your claims that it's
trivial to implement without the "single type" rule don't sway me, even if they
are true (which I doubt -- note that you'd have to intersect an arbitrary number
of types when you have elsif chains).
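
[Editor's note: a small illustration of the two points, using invented
declarations:

     procedure P (N : Integer);
     procedure P (F : Float);

     P ((if A then B else C));
     -- (1) the conditional expression must be parenthesized;
     -- (2) illegal under the "single type" resolution rule, since the
     --     expected type here is not a single type.

     P (Integer'(if A then B else C));
     -- The qualification supplies a single expected type and, as with
     -- aggregates, needs no extra set of parentheses.
]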

****************************************************************

From: John Barnes
Sent: Tuesday, March 17, 2009  10:55 AM

> Current thinking is  (IF condition THEN expr
>                       {ELSIF condition THEN expr}
>                       [ELSE expr])
>
> If ELSE expr is omitted, it means ELSE TRUE (convenient for the
> implication case).
>
> No one has even suggested END IF being included, and I would
> definitely be opposed to that suggestion, It is unnecessary syntactic
> clutter, and the END keyword is fundamental in error recovery and
> should not be allowed in the middle of expressions!

Yes I suppose it's different to Algol68 where the closer was simply FI (if
backwards) and no end in sight.

So the else part can only be omitted when the type is Boolean.  Hmmm.

Thanks for summary.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009   1:07 PM

> Yes I suppose it's different to Algol68 where the closer was simply FI
> (if
> backwards) and no end in sight.

Actually, it's more normal in Algol68 to use the short form

    (b | c |: d | e | f)

for conditional expressions, at least short ones, so for example:

   (b > 0 | 3 / b | 0)
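
[Editor's note: in the proposed Ada syntax, this last example would read:

     (if B > 0 then 3 / B else 0)
]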

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, March 17, 2009  12:16 PM

> JP, did I convey your intent properly?

Yes, and so did Bob.

My concern is not to have a set of different rules. It is better if we can build
on existing rules, although not perfect. For aggregates, we can write:
   T'(1 => 'a', 2 => 'b')
but we must write:
   Put ((1 => 'a', 2 => 'b'));

It would be strange to be able to write
   Put (if B then 1 else 2);
but not:
   Put (1 + if B then 1 else 2);
because of the ambiguity with:
   Put (if B then 1 else 2 + 1)
(unless you want to add an extra level of precedence, but I think it would be
too error-prone)

****************************************************************

From: Edmond Schonberg
Sent: Tuesday, March 17, 2009  12:49 PM

> Note that my opinion on (2) is based on readability, so your claims
> that it's trivial to implement without the "single type" rule don't
> sway me, even if they are true (which I doubt -- note that you'd have
> to intersect an arbitrary number of types when you have elsif chains).

Intersecting types of candidate interpretations is the natural thing to do in
several places during resolution, so this just falls out from existing
machinery. The rule is simple to state as a name resolution rule:

If the else part is missing, the expected type of each expression, and the
expected type of the conditional expression itself, is Boolean. Otherwise the
expected type of each expression is any type, and all the expressions must
resolve to the same type.

These things are homogeneous, unlike aggregates, and it's natural to state the
rule in terms of that uniformity.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  1:10 PM

> It would be strange to be able to write
>    Put (if B then 1 else 2);
> but not:
>    Put (1 + if B then 1 else 2);

Not to me; it seems reasonable to require parens in the second case, exactly
because of the possible ambiguity.

I am inclined to allow omitting the parens in pragma arguments (so pragma
Precondition works nicely) and in subprogram calls.

BTW: I am a little concerned that by the time the ARG/WG9/ISO machinery finally
publishes precondition/postcondition in Tuck's alternative syntactic form, the
use of the pragmas will be so wide-spread that the new form will have little
relevance. It is also a huge advantage that the pragmas can be used with any
version of Ada, whereas the new syntax would require an immediate commitment to
Ada 201x mode.

****************************************************************

From: Steve Baird
Sent: Tuesday, March 17, 2009  1:41 PM

> There are two separate issues related to qualified expressions, which
> I think you are confusing:

Thanks for stating the issues so clearly.

>
>     1. Syntax.  Randy proposed it should be "just like aggregates",
>        which means you have to put parens around a conditional
>        expression, except there's a special case in the BNF for qualified
>        expressions that allows you to omit one layer of parens.
>        JP agrees with Randy.  Tucker and I would like to allow
>        omitting parens in a few more cases (but I don't feel
>        strongly).
>

I'm torn on this one.

The parens in this example

    pragma Assert ((if A then B else C));

look a bit odd, but we certainly don't want to allow

    begin
       P (if A then B elsif C then D, if E then F elsif G then H else I);

The problem is where to draw the line.

The proposed "just like aggregates" rule builds on precedent, adds to language
consistency, and seems like a reasonable choice. On the other hand, we haven't
seen a specific alternative yet.

>     2. Overload resolution.  Randy proposed it should be "just like
>        aggregates", which means the expected type must be a "single type".
>        That means you have to stick in some qualifications where you
>        otherwise wouldn't have to.  I agree with Randy on this.
>        I think Tucker does, too.  You disagree.  I don't think
>        JP said anything about this issue.
>
> Note that my opinion on (2) is based on readability, so your claims
> that it's trivial to implement without the "single type" rule don't
> sway me, even if they are true (which I doubt -- note that you'd have
> to intersect an arbitrary number of types when you have elsif chains).

I'm less comfortable with the "just like aggregates" rule in this case.

Aggregates, in their full generality, are very complex and I think the
single-expected-type name resolution for aggregates is completely appropriate.

Integer literals, for example, are far less complex and that is why the language
allows

    procedure P (X : Integer) is ... ;
    procedure P (X : Float) is ... ;
   begin
    P (2); -- legal

Conditional expressions lie somewhere in the middle. The Access attribute is
another such construct.

When Ada95 was first introduced, any use of 'Access was required to have a
single expected type. Practical experience showed that this restriction was
annoying. The example given in AI95-00235 is

   type Int_Ptr is access all Integer;
   type Float_Ptr is access all Float;
   function Zap (Val : Int_Ptr) return Float;
   function Zap (Val : Float_Ptr) return Float;
   Value : aliased Integer := 10;
   Result1 : Float := Zap (Value'access); -- Legal?

Before the AI, this example was illegal.
It is generally agreed now (I think) that this was a mistake and that AI-235 was
a good thing.

It seems to me that we are contemplating making a similar mistake here.

I also don't buy the argument that this would be terribly difficult to
implement, but that's not really the main question we should be considering
(unless someone wants to argue that the implementation difficulties are
substantially more significant than anyone has suggested so far).

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  1:50 PM

> The parens in this example
>
>     pragma Assert ((if A then B else C));
>
> look a bit odd, but we certainly don't want to allow
>
>     begin
>        P (if A then B elsif C then D, if E then F elsif G then H else
> I);

Why not? It's always possible to write things that look bad; that's not a
sufficient reason for trying to outlaw them.

> The problem is where to draw the line.

Let good taste draw the line.

****************************************************************

From: Bob Duff
Sent: Tuesday, March 17, 2009  9:37 AM

> One remaining question is whether un-parenthesized conditional
> expressions would be permitted in some contexts, such as parameters in
> a call, and other contexts where they would follow "=>", to avoid the
> annoyance of double parentheses and the LISP-like appearance.

I already indicated that I am in favor of allowing the "extra" parens to be
left off in things like:

    pragma Assert (if ...);

But I don't feel strongly about it.  Randy is opposed.

****************************************************************

From: Edmond Schonberg
Sent: Tuesday, March 17, 2009  2:40 PM

I'm certainly in favor of leaving them out whenever possible, including the
contexts Tuck mentions.

****************************************************************

From: Bob Duff
Sent: Tuesday, March 17, 2009  3:57 PM

> Intersecting types of candidate interpretations is the natural thing
> to do in several places during resolution, so this just falls out from
> existing machinery. The rule is simple to state as a name resolution
> rule:
>
> If the else part is missing, the expected type of each expression, and
> the expected type of the conditional expression itself, is Boolean.
> Otherwise, the expected type of each expression is any type, and all
> the expressions must resolve to the same type.

I don't think that's the rule we want.  You want the type from context to be
passed down.  It would make this example:

    Put_Line (Integer'Image (Num_Errors) &
          (if Num_Errors = 1 then "error detected." else "errors detected."));

ambiguous.

But I just realized that the "single type" rule Randy and I have been advocating
also makes this ambiguous!  Uh oh.

> These things are homogeneous, unlike aggregates, and it's natural to
> state the rule in terms of that uniformity.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 17, 2009  4:08 PM

> I think what JP meant was that, as with the special case for
> aggregates in qualified expressions, a qualified expression whose
> operand is a conditional expression would not require double
> parentheses.
> Elsewhere, JP doesn't seem worried about double parentheses, though
> some of us seem to disagree with him (mildly, it seems).  I think we
> would all agree that qualified expressions should not require double
> parentheses if the operand is a conditional expression.

It should be noted that the write-up that I circulated on Saturday says exactly
what Jean-Pierre suggested. So he's just confirming my recommendation.

The only case where I have any sympathy for Tucker's suggestion is in a pragma
with exactly one argument. But I don't see the point of making the syntax of the
language irregular for that one particular case. And requiring the paren makes
it much easier for syntax error detector/correctors to tell the difference
between an if statement and conditional expression - making it more likely that
the error will be flagged at the right place.

****************************************************************

From: Edmond Schonberg
Sent: Tuesday, March 17, 2009  4:12 PM

> I don't think that's the rule we want.  You want the type from context
> to be passed down.

Of course the expressions are resolved from the context; nobody ever thought
otherwise (even though my last sentence did not make this explicit).  Is the
following better:

If the else part is missing, the expected type of each expression,
and the expected type of the conditional expression itself, is
Boolean.  Otherwise, all the expressions must resolve to the type of
the context.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  4:32 PM

> But I just realized that the "single type" rule Randy and I have been
> advocating also makes this ambiguous!  Uh oh.

OK, is that enough to convince you, or will you suddenly decide that it makes
things more readable to write

     Put_Line (Integer'Image (Num_Errors) &
           String'(if Num_Errors = 1 then "error detected." else "errors detected."));

:-)

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  4:34 PM

> And requiring
> the paren makes it much easier for syntax error detector/correctors to
> tell the difference between an if statement and conditional expression
> - making it more likely that the error will be flagged at the right place.

I think that's bogus; there will after all be parens in any case.

The clean rule would be that you have to ALWAYS have a set of parens around a
conditional expression, but you don't need to *add* an extra pair in a context
where the parens are already there.

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 17, 2009  4:59 PM

> the clean rule would be that you have to ALWAYS have a set of parens
> around a conditional expression, but you don't need to *add* an extra
> pair in a context where the parens are already there.

That doesn't sound very easy to define formally.

I think we would want to introduce a syntactic category called, say,
"parameter_expression" defined as:

    parameter_expression ::= conditional_expression | expression

Where conditional_expression doesn't have parentheses around it.

"parameter_expression" would replace "expression" in certain contexts, and
certainly in:

    primary ::= (parameter_expression)

But the intent would be for "parameter_expression" to replace "expression" in:

    explicit_actual_parameter ::= parameter_expression | variable_name

and in:

    pragma_argument_association ::=
      [pragma_argument_identifier =>] name
    | [pragma_argument_identifier =>] parameter_expression

and in:

    discriminant_association ::=
      [discriminant_selector_name
        {| discriminant_selector_name} =>] parameter_expression

and in the proposed aspect-specification clause

     aspect_name [=> parameter_expression]
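
For illustration (all names invented), contexts using "parameter_expression"
would then accept a conditional expression with only the parentheses that the
construct already provides:

    pragma Assert (if A then B else C);          -- pragma argument
    P (if Flag then Low_Bound else High_Bound);  -- actual parameter
    Obj : Rec (D => if Flag then 1 else 2);      -- discriminant_association

while as a primary inside a larger expression the usual parentheses would
still be required, e.g. X := (if Flag then 1 else 2) + 1;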

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 17, 2009  4:48 PM

> But I just realized that the "single type" rule Randy and I have been
> advocating also makes this ambiguous!  Uh oh.

Perhaps we can say the following for the Name Resolution rules:

    The expected type for each dependent expression is that of
    the conditional expression as a whole.  All dependent expressions
    shall resolve to the same type.  The type of the conditional
    expression as a whole is this same type.  If there is no ELSE part,
    this type shall be a boolean type.

This approach will ensure that the expected type has to be a suitable "single"
type if any one of the dependent expressions requires a single type.  For
example, if one of the dependent expressions is a string literal, then the
expected type has to be a single string type. We certainly don't want to have to
start searching for all in-scope string types to resolve a conditional
expression.
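
A minimal sketch of that point (declarations invented):

    type Error_Text is new String;
    function Default_Text return Error_Text;
    Have_Default : Boolean := ...;

    Msg : Error_Text :=
      (if Have_Default then Default_Text else "no message");

Here the expected type Error_Text comes from the context and is a single
string type, so the literal in the ELSE part resolves; if the context gave
only "any string type", the literal could not be resolved without
qualification.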

I don't think Ed's "bottom-up" resolution rule would work very well in the
presence of allocators, aggregates, string literals, etc.

>> These things are homogeneous, unlike aggregates, and it's natural to
>> state the rule in terms of that uniformity.

But you have to be careful not to open up some formerly well-shuttered can of
worms.

****************************************************************

From: Edmond Schonberg
Sent: Tuesday, March 17, 2009  5:01 PM

> This approach will ensure that the expected type has to be a suitable
> "single" type if any one of the dependent expressions requires a
> single type.  For example, if one of the dependent expressions is a
> string literal, then the expected type has to be a single string type.
> We certainly don't want to have to start searching for all in-scope
> string types to resolve a conditional expression.

This looks perfectly clear.

> I don't think Ed's "bottom-up" resolution rule would work very well in
> the presence of allocators, aggregates, string literals, etc.

The resolution rules already work bottom-up with allocators, aggregates, and
string literals (the candidate interpretations denote some class of types, for
example any_access_type, or any_composite_type).  Having these appear in a
conditional expression changes nothing.

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 17, 2009  5:23 PM

> The resolution rules already work bottom-up with allocators,
> aggregates, and string literals (the candidate interpretations denote
> some class of types, for example any_access_type, or
> any_composite_type).  Having these appear in a conditional expression changes nothing.

What I meant by "bottom-up" was that your earlier description made it sound like
it required identifying specific types (as opposed to classes of types) in a
bottom-up pass, using no context from above.

****************************************************************

From: Edmond Schonberg
Sent: Tuesday, March 17, 2009  8:20 PM

What I meant is the standard two-pass resolution algorithm that no doubt we all
use!  We identify candidate interpretations bottom-up, and we find the single
interpretation for a complete context top-down.  I'm not proposing anything
different: find interpretations of each expression, find the common
intersection, and these are the candidate interpretations of the conditional
expression as a whole.  The full context will then indicate the expected type
for it, and that gets propagated to each expression (which may be
overloaded).
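
A small sketch of that intersection (types and functions invented):

    type T1 is new Integer;
    type T2 is new Integer;
    type T3 is new Integer;

    function F return T1;
    function F return T2;   -- F overloaded on T1 and T2
    function G return T2;
    function G return T3;   -- G overloaded on T2 and T3

    X : T2 := (if Cond then F else G);

The THEN expression has candidate types {T1, T2}, the ELSE expression
{T2, T3}; their intersection {T2} is the candidate set of the conditional
expression as a whole, which the complete context (the declaration of X)
then confirms.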

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 17, 2009  5:22 PM

> That doesn't sound very easy to define formally.

That per se is not a very strong argument. The point is that
this rule does exactly what I think you want, and avoids
bad uses. Your definition allows uses in contexts other than those
that meet the above rule, things like

    X (if A then B else C, if D then Q else R)

which I do not think is helpful. In fact I prefer to always
have double parens rather than allow the above (yes, I know
that this partially contradicts what I said before about
allowing good taste to eliminate this case, but

a) I had not thought of the rule above
b) I disliked this example more and more as I looked at it

As for formal definition, either rely on adjunct English rather
than write the syntax rules (after all our syntax for identifiers
does not exclude reserved words, which would be a HUGE pain to
do formally, though of course it could be done)

or

figure out the formal rules, and stick them in the grammar if
it does not clutter things too much.

By the way, if you go for the aspect notation, I would NOT
exempt the single level of parens here. To me the only
objectionable thing is explicitly having two parens before
the if and two parens at the end of the expression. In any
other context the parens are not just unobjectionable, they
are desirable, given we do not have an ending "end if", we
need for readability a clear consistent token to end the
conditional expression and right paren is acceptable, but
I don't like it being a comma sometimes, a semicolon sometimes
etc.

****************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, March 18, 2009  1:50 AM

>> but we certainly don't want to allow
>>
>>     begin
>>        P (if A then B elsif C then D, if E then F elsif G then H else
>> I);
>
> why not? it's always possible to write things that look bad, that's
> not a sufficient reason for trying to outlaw them.

You could say the same for the rule (that I like a lot) that you can write "A
and B and C" and "A or B or C", but not "A and B or C" without parentheses.

I could go with something like this for if-expressions. Don't put parentheses in
the syntax, but have a rule that if-expressions must be in parentheses if they
appear:
- as operand of an infix operator
- as values of aggregates
- other?

This means that (if .. then ..) would appear as A_Parenthesized_Expression that
contains An_If_Expression (that's -extended- ASIS-speak), but that the
parentheses are required in some contexts. I think it is easier to define it
this way than to define places where the parentheses are *not* required.
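
For illustration (names invented, and assuming plain actual parameters are not
on the list), such a rule would give:

    A := (if X then 2 else 3) + 1;      -- operand of an infix operator: parens required
    V := (1, (if X then 2 else 3), 4);  -- aggregate component: parens required
    P (if X then 2 else 3);             -- plain actual parameter: no extra parens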

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 18, 2009  7:22 AM

This matches what I proposed with the notion of a "parameter_expression"
syntactic category.

****************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, March 18, 2009  9:04 AM

Yes, I replied before seeing your mail. The difference is that I see it more as
a legality rule than syntax - no big deal.

****************************************************************

From: Adam Beneschan
Sent: Monday, March 23, 2009  12:37 PM

!topic AI05-0147 (Conditional expressions)
!reference AI05-0147
!from Adam Beneschan 09-03-23
!discussion

I think this AI just became available on the ada-auth web site, so I've been
looking it over.  I'd definitely appreciate this being part of the language; I
think I've run into a good number of cases in the past where having this would
have been very useful.  And thanks for requiring the parentheses.  I have a
vague memory of seeing an example like this in a book, a long time ago, where I
think the language was some flavor of Algol:

   if if if if A then B else C then D else E then F else G then ...

I have a few comments about the proposed changes to 4.9.  Here's the text as it
currently appears on
http://www.ada-auth.org/cgi-bin/cvsweb.cgi/ai05s/ai05-0147-1.txt?rev=1.2 :

===============================================================================
Split 4.9(33), replacing it by the following:
A static expression is evaluated at compile time except when:
* it is part of the right operand of a static short-circuit control
  form whose value is determined by its left operand;
* it is part of a condition of some part of a conditional_expression,
  and at least one and at least one condition of a preceeding part of
  the conditional_expression has the value True; or
* it is part of the dependent_expression of some part of a
  conditional_expression, and the associated condition evaluates to
  False; or
* it is part of the dependent_expression of some part of a
  conditional_expression, and at least one condition of a preceeding
  part of the conditional_expression has the value True.
===============================================================================

My comments:

(1) "preceding" is misspelled.

(2) The second bullet point repeats the four words "and at least one"
    twice.

(3) I think the language "and the associated condition evaluates to
    True" [or False] is a bit confusing, because at first glance, it's
    mixing run-time and compile-time ideas.  Most of the time, the
    compiler won't know what the condition evaluates to, but the
    language suggests that the compiler needs to know in order to
    decide whether a certain expression is to be statically evaluated.

    I'd suggest that to clarify things, the phrase be reworded as "and
    the associated condition is a static expression whose value is
    True".  I could live with "and the associated condition statically
    evaluates to True", but the first phrasing relies on terms that are
    already defined.  In any case, I think the intent is that if the
    condition is not a static expression, the bullet point doesn't
    apply, and thus the condition can't prevent the "static
    expression" referred to by the first phrase of 4.9(33) from being
    evaluated.

    Anyway, it may be necessary to clarify the intent in a case like
    this:

      (if (Constant_1 and then Func_Call(...)) then Expr_1 else Expr_2)

    Suppose Constant_1 is a constant whose value is False, and that
    Expr_1 is a static expression.  Now we know Expr_1 will never be
    evaluated.  However, the condition "Constant_1 and then Func_Call"
    is not a static expression, even though the compiler can tell that
    its value is always False.  I believe that this clarification is
    needed, or at least useful to a reader, in order to determine
    whether the compiler will evaluate Expr_1 at compile time (and
    possibly make the program illegal by 4.9(34)).

    [At first I thought that this probably also existed in the first
    bullet point, which has been around since Ada 95.  However, the
    first bullet point only applies to a "static short-circuit control
    form", which means that the left-hand relation would have to be
    static anyway.  The other bullet points apply to all
    conditional_expressions, not just static ones, so it's different.]

****************************************************************

From: Randy Brukardt
Sent: Monday, March 23, 2009  2:27 PM

> I think this AI just became available on the ada-auth web site, so
> I've been looking it over.  I'd definitely appreciate this being part
> of the language; I think I've run into a good number of cases in the
> past where having this would have been very useful.  And thanks for
> requiring the parentheses.  I have a vague memory of seeing an example
> like this in a book, a long time ago, where I think the language was
> some flavor of Algol:
>
>    if if if if A then B else C then D else E then F else G then ...

There has been a lot of discussion about this proposal over on the ARG list; at
some point there will be a new draft reflecting the additional ideas.

> I have a few comments about the proposed changes to 4.9.
...
>     [At first I thought that this probably also existed in the first
>     bullet point, which has been around since Ada 95.  However, the
>     first bullet point only applies to a "static short-circuit control
>     form", which means that the left-hand relation would have to be
>     static anyway.  The other bullet points apply to all
>     conditional_expressions, not just static ones, so it's different.]

That's definitely not intended. I obviously failed to notice that "static" is
repeated in the short circuit case. That should be the case for the conditional
expressions as well - these bullets only apply to a static conditional
expression. Please imagine that it is there and tell me if there are any other
problems with the wording (other than the typos).

****************************************************************

From: Adam Beneschan
Sent: Monday, March 23, 2009  2:49 PM

But is making the bullet points apply only to "static conditional expression"
correct?  If N is a constant with value 0, it means that you couldn't use the
expression

   (if N = 0 then Max_Value_For_System else 10_000 / N)

if Max_Value_For_System is, say, a function call, because the existence of the
function call makes this *not* a static conditional_expression, and then the
compiler would have to statically evaluate the "else" expression.

I thought the intent was that in the case

   if A then B else C

if A is a static expression whose value is True, then the compiler wouldn't
evaluate C even if it is a static expression.  That probably ought to be the
case whether or not B is a static expression---the staticness of B should be
irrelevant.  (And similarly in the mirror case where A is statically False.)

****************************************************************

From: Randy Brukardt
Sent: Monday, March 23, 2009  3:45 PM

...
> But is making the bullet points apply only to "static conditional
> expression" correct?

Yes. :-)

> If N is a constant with
> value 0, it means that you couldn't use the expression
>
>    (if N = 0 then Max_Value_For_System else 10_000 / N)
>
> if Max_Value_For_System is, say, a function call, because the
> existence of the function call makes this *not* a static
> conditional_expression, and then the compiler would have to
> statically evaluate the "else" expression.

That is exactly the intent. I didn't think this expression would fly if it
introduced a new concept - the "partially static expression". Besides, you could
make the same argument for short circuit expressions, and

    N = 0 or else Max_Value_For_System > 10_000

is never a static expression, even if N = 0.

I think the argument is that the language designers did not want whether or not
an expression is static to depend on the values. If the rule you are suggesting
was the case, then

    Max : constant := (if N = 0 then Max_Value_For_System else 10_000 / N)

would be illegal because this is not a static expression if N = 0, and legal
otherwise.

****************************************************************

From: Adam Beneschan
Sent: Monday, March 23, 2009  4:08 PM

> That is exactly the intent. I didn't think this expression would fly
> if it introduced a new concept - the "partially static expression".

I don't see why a new concept would be needed.  Please see my earlier post; all
I was suggesting was changing this (with typos eliminated)

* it is part of a condition of some part of a conditional_expression,
  and at least one condition of a preceding part of the
  conditional_expression has the value True; or

to

* it is part of a condition of some part of a conditional_expression,
  and at least one condition of a preceding part of the
  conditional_expression is a static expression with the value True;
  or

and similarly for the next two paragraphs.  Why would a new concept be needed?
My suggestion didn't refer at all to whether the entire conditional_expression
was static or not.  Your response added that. I am certainly not suggesting
changing the definition of whether a conditional_expression is static.


> Besides, you
> could make the same argument for short circuit expressions, and
>
>     N = 0 or else Max_Value_For_System > 10_000
>
> is never a static expression, even if N = 0.

But then the argument wouldn't apply at all.  The question has to do with when a
static expression is evaluated according to 4.9(33), when it's part of a larger
conditional expression.  Here, "Max_Value_For_System > 10_000" isn't a static
expression, so 4.9(33) doesn't apply to it at all.

The argument doesn't apply equally to short-circuit expressions and
conditional_expressions, for this reason: For the "exception" mentioned in
4.9(33) to apply, so that a static expression isn't evaluated, first of all you
have to have a static expression to apply this clause to, and second of all it
has to depend on a condition that is also a static expression.  That's two
expressions that need to be static.  In a short-circuit expression, there are
only two expressions anyway.  So it's OK for the first bullet point to say it
only applies to a *static* short-circuit expression.  In a
conditional_expression, there's (possibly) a third expression.  My suggestion
here is that the remaining three bullet points should still depend *only* on the
static expression in question *and* on the condition, *not* on whether the third
expression is static.  That's why the argument doesn't apply equally.


> I think the argument is that the language designers did not want
> whether or not an expression is static to depend on the values. If the
> rule you are suggesting was the case, then
>
>     Max : constant := (if N = 0 then Max_Value_For_System else 10_000
> / N)
>
> would be illegal because this is not an static expression if N = 0,
> and legal otherwise.

I don't think that's the issue, though.  Yes, the above example should be
illegal; the conditional expression shouldn't be static and shouldn't be allowed
where a static expression is required.  That wasn't my point.  My concern has to
do with when a static expression that is ***part of*** a larger conditional
expression is evaluated and could possibly make the program illegal.  In this
example:

  (if N = 0 then Max_Value_For_System else 10_000 / N)

10_000 / N is a static expression (if N is static), and I don't believe the
compiler should try to evaluate it if N is 0.  Whether or not the whole
conditional_expression is static is, I believe, not important to this issue.
And I didn't make it part of the issue.  As I mentioned, you're the one who
brought the "static conditional_expression" wording into it, and my feeling is
that doing so was incorrect.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 23, 2009  4:39 PM

...
> and similarly for the next two paragraphs.  Why would a new concept be
> needed?

There is no concept in Ada of statically evaluating *part* of an expression.
You are asking to add one.

...
> > Besides, you
> > could make the same argument for short circuit expressions, and
> >
> >     N = 0 or else Max_Value_For_System > 10_000
> >
> > is never a static expression, even if N = 0.
>
> But then the argument wouldn't apply at all.  The question has to do
> with when a static expression is evaluated according to 4.9(33), when
> it's part of a larger conditional expression.  Here,
> "Max_Value_For_System > 10_000" isn't a static expression, so 4.9(33)
> doesn't apply to it at all.

Which is my point: if the conditional expression is not static, 4.9(33) does not
apply to it at all.

> The argument doesn't apply equally to short-circuit expressions and
> conditional_expressions, for this reason: For the "exception"
> mentioned in 4.9(33) to apply, so that a static expression isn't
> evaluated, first of all you have to have a static expression to apply
> this clause to, and second of all it has to depend on a condition that
> is also a static expression.
> That's two expressions that need to be static.  In a short-circuit
> expression, there are only two expressions anyway.  So it's OK for the
> first bullet point to say it only applies to a *static* short-circuit
> expression.  In a conditional_expression, there's (possibly) a third
> expression.  My suggestion here is that the remaining three bullet
> points should still depend *only* on the static expression in question
> *and* on the condition, *not* on whether the third expression is
> static.  That's why the argument doesn't apply equally.

Actually, there is an unlimited number of other expressions in a conditional
expression (all of the "elsif" branches). You want to require some of them to be
evaluated statically, and others not, depending on the values of still others,
*and* you want that to happen in a non-static context (so that it isn't
obvious whether the rule is even applied to a particular expression). That's
just too complex in my view.

...
> In this example:
>
>   (if N = 0 then Max_Value_For_System else 10_000 / N)
>
> 10_000 / N is a static expression (if N is static), and I don't
> believe the compiler should try to evaluate it if N is 0.  Whether or
> not the whole conditional_expression is static is, I believe, not
> important to this issue.  And I didn't make it part of the issue.  As
> I mentioned, you're the one who brought the "static
> conditional_expression" wording into it, and my feeling is that doing
> so was incorrect.

If this is not a static expression, the compiler shouldn't be evaluating any of
it. So what's the problem?

****************************************************************

From: Adam Beneschan
Sent: Monday, March 23, 2009  5:08 PM

> There is no concept in Ada of statically evaluating *part* of an expression.

GNAT rejects this:

    procedure Test783 (Param : Integer) is
        N : constant := 0;
        X : Integer := Param + (10 / N);
    begin
        null;
    end Test783;

Are you saying GNAT is wrong?

I think GNAT is right, the way I read the RM.  10 / N is a static expression.
The syntax of "expression" in 4.4 clearly indicates that expressions can be part
of larger expressions.  4.9(2-13) defines when an expression is a static
expression, and the definition doesn't care whether the expression is part of a
larger expression or not.  4.9(33) says that static expressions are evaluated at
compile time, and that rule doesn't care whether the expression is part of a
larger expression or not, except in one particular case involving short
circuits.  In particular, the rule doesn't care whether the static expression is
part of a larger nonstatic expression.

Anyway, that's how I've always interpreted the RM; it would come as a complete
surprise to me to find that 4.9(33) doesn't apply to static expressions that are
part of larger expressions.  I don't see any wording in the RM to support that
interpretation.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 23, 2009  8:26 PM

...
> Are you saying GNAT is wrong?

Yes, but...

> I think GNAT is right, the way I read the RM.  10 / N is a static
> expression.  The syntax of "expression" in 4.4 clearly indicates that
> expressions can be part of larger expressions.
>  4.9(2-13) defines when an expression is a static expression, and the
> definition doesn't care whether the expression is part of a larger
> expression or not.  4.9(33) says that static expressions are evaluated
> at compile time, and that rule doesn't care whether the expression is
> part of a larger expression or not, except in one particular case
> involving short circuits.  In particular, the rule doesn't care
> whether the static expression is part of a larger nonstatic
> expression.
>
> Anyway, that's how I've always interpreted the RM; it would come as a
> complete surprise to me to find that 4.9(33) doesn't apply to static
> expressions that are part of larger expressions.  I don't see any
> wording in the RM to support that interpretation.

Unfortunately, I think you are right (although I don't think that the ACATS
requires any of this). For the record, Janus/Ada does not reject this example
(although it does give a warning). It appears to me that this was the Ada 83
rule (exact evaluation was only required for universal expressions), and I'm
surprised that I was not aware of the change. (Janus/Ada probably goes out of
its way to *not* produce an error in this case, since it evaluates everything
that it can with the exact evaluation machinery.)

But I'm very worried about the effect in generics. Janus/Ada's generic code
sharing depends completely on the fact that there cannot be any "interesting"
static expressions in generics. If that's not true in some (important) case,
then code sharing is impossible.

Anyway, obviously, I have no clue how static expressions work, so you should
disregard everything I've written on the subject. In particular, you should
completely discard the bullets I wrote, because I made no consideration of the
possibility of non-static expressions coming into 4.9(33). I doubt that anything
is correct about them in that case; they'll need to be completely reanalyzed,
and there are many more cases to consider (such as rounding issues, discriminant
checks, etc.).

[Note from Randy, November 3, 2009: One important point to remember that I
forgot in this discussion is that an expression being static has nothing
to do with whether it is evaluated exactly. 4.9(33-37) *do not* apply to
static expressions that are not evaluated. So most of Adam's argument is
bogus -- 10_000/N can be a static expression, and N can be zero, but it
still is legal if it is not evaluated. The important question is whether
we want to allow random non-static function calls in static expressions
even if they are not evaluated. I think the answer is no.]
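
A concrete illustration of that first point, using the long-standing
short-circuit bullet of 4.9(33) (names invented):

    N : constant := 0;
    B : constant Boolean := (N = 1 and then 10_000 / N > 3);

10_000 / N is a static expression and N is zero, yet the declaration is legal:
the value of the static short-circuit form is determined by its left operand,
so the right operand is never evaluated.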

****************************************************************

From: Georg Bauhaus
Sent: Tuesday, March 23, 2009  6:26 AM

>> I think this AI just became available on the ada-auth web site, so
>> I've been looking it over.  I'd definitely appreciate this being part
>> of the language; I think I've run into a good number of cases in the
>> past where having this would have been very useful.  And thanks for
>> requiring the parentheses.  I have a vague memory of seeing an
>> example like this in a book, a long time ago, where I think the
>> language was some flavor of Algol:
>>
>>   if if if if A then B else C then D else E then F else G then ...

Some of my increasingly stressed hair is pointing upwards.

When I saw conditional expressions as currently outlined, I thought they would
definitely need good bridles.  Aren't you otherwise inviting programmers to
violate at least three Ada commandments:

1/ Name things!
2/ Read linearly!
3/ Don't leave holes, cover everything!

As follows.

First, regarding 1, 2: do not let all too clever nesting sneak in via
conditional expressions (to illustrate the danger, add obvious fantasies to the
if if if if  above). Require a renaming:

    Safe_Bound: Natural renames
       (if N = 0 then 10_000 else 10_000 / N);

    subtype Calculation_Range is Natural range 0 .. Safe_Bound + 1;

In preconditions, where there is no declarative region at hand, we can write
normal function calls, no change here. Consequently, a lazy author's ad hoc
nested (if ...) is precluded in preconditions without loss.

And (regarding 3) Please, Please, Please, have the conditional expressions cover
all cases!!!  Have us fully state what we mean. Do not permit telling half
truths for the sake of being negligent: the ELSE is a must here.

I noticed the following comment on the AI

   " b) people love complex boolean expressions and insist on filling
        their code with them, the more complex the better :-( "

Don't invite us to do so. Please.  (Because computer programming is not
recreational mathematics; it is enough like forensic logic as is ...)

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  5:46 AM

A thought, in light of the discussion of leaving out parens

How about considering putting in the end if, and then not bothering with parens
at all

   A := if x then 2 else 3 end if;

seems to read fine to me. I know I worried about error recovery in having END
in the middle of expressions, but I think this is manageable ...

By the way, I looked at how we do resolution in GNAT now for conditional
expressions, and it is quite straightforward: we resolve the type of the
expression based on the THEN expression, and then force the ELSE expression to
resolve to the same type.

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, March 23, 2009  6:06 AM

...
>   A := if x then 2 else 3 end if;
>
> seems to read fine to me.

What worries me is when it appears in more complex expressions:

A := if x then 2 else 3 end if + if y then 5 else 4 end if;

You would certainly write it with parentheses in that case, so we end up with
"end if" /and/ parentheses....

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  6:34 AM

I don't see why you would write parens here, the IF/END IF to me act already as
strong parens.

I certainly never wrote (if x then y else z fi) in Algol-68, though in simple
cases one would often use the short form in A-68 (x | y | z).

****************************************************************

From: John Barnes
Sent: Tuesday, March 23, 2009  10:27 AM

I would certainly prefer "end if" always; and parens as well when it does not
stand alone as an RHS or parameter or subscript, perhaps.

I used conditional expressions a lot in Algol60 and we had them in RTL/2 with
the "end". And did not cloak then in parens.

I hated their absence from Ada, but with age I got used to it.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  11:52 AM

A concern with omitting parens:

If we allow parens to be omitted on a function call, but not on indexing or a
type conversion, that's odd to me syntactically; and worse, it means that we
have to deal with this during semantic analysis rather than parsing.  UGH!

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  11:51 AM

> I would certainly prefer "end if" always;  And parens as well when it
> does not stand alone as a RHS or parameter or subscript perhaps.

OK, well count me as definitely opposed to requiring BOTH if/endif and parens at
the same time, my suggestion was a possible way to get rid of the parens, not a
way to add more stuff :-)

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  11:55 AM

I implemented conditional expressions in GNAT (very easy, since we already had
them, all we lacked was syntax :-)

Here is a little test program:

> with Text_IO; use Text_IO;
> procedure t is
> begin
>    for J in 1 .. 3 loop
>       Put_Line ((if J = 1 then "one"
>                  elsif J = 2 then "two"
>                  else "three"));
>    end loop;
>
>    for J in Boolean loop
>       if (if J then False else True) then
>          Put_Line ("One");
>       else
>          Put_Line ("Two");
>       end if;
>    end loop;
>
>    for J in Boolean loop
>       if (if J then False) then
>          Put_Line ("One");
>       else
>          Put_Line ("Two");
>       end if;
>    end loop;
> end t;


and its output:

one
two
three
One
Two
One
Two

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 23, 2009  1:08 PM

Cool.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  1:22 PM

Just so no one gets alarmed, this construct will be rejected unless you use the
special -gnatX (extensions allowed) switch on the compilation :-)

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  1:22 PM

By the way, Tuck, what do you think of the resolution rule we are using? It
seems very simple to understand and implement, and avoids the inflexibility of
treating things as aggregates.

I think I stated the rule on the ARG list, but if not, here it is:

A conditional expression (if X then Y else Z) is resolved using the type of Y,
just as though it had appeared simply as Y, then in the final resolution phase,
Z is resolved using this same type, which must of course be unique. If Z cannot
be resolved using this type, we have an error.

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 23, 2009  1:49 PM

That certainly seems simplest to implement, but I'll admit being a little
uncomfortable about how asymmetric it is at a formal level.

****************************************************************

From: Steve Baird
Sent: Tuesday, March 23, 2009  1:46 PM

I would hope that the following example would not be rejected as being
ambiguous:

     type T1 is access This;
     X1 : T1;

     type T2 is access That;
     X2 : T2;

     Flag : Boolean := ... ;

     procedure Foo (X : T1) is ... ;
     procedure Foo (X : T2) is ... ;
   begin
     Foo ((if Flag then null else X1));

How does the rule you describe handle this example?

****************************************************************

From: Edmond Schonberg
Sent: Tuesday, March 23, 2009  2:05 PM

> I think I stated the rule on the ARG list, but if not, here it is:
>
> A conditional expression (if X then Y else Z) is resolved using the
> type of Y, just as though it had appeared simply as Y, then in the
> final resolution phase, Z is resolved using this same type, which must
> of course be unique. If Z cannot be resolved using this type, we have
> an error.

You need to deal with the case where Y and/or Z are both overloaded calls. It is
here that intersection of types is needed (no big deal)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 23, 2009  2:58 PM

> What worries me is when it appears in more complex expressions:
>
> A := if x then 2 else 3 end if + if y then 5 else 4 end if;
>
> You would certainly write it with parentheses in that case, so we end
> up with "end if" /and/ parentheses....

I agree with Jean-Pierre. This syntax would be impossible to read in some
circumstances. As Adam suggested today on Ada-Comment:

if if if x then y end if then False end if then ...

Even the possibility of this makes me think I have the flu again...

Besides, I've already found that in writing a conditional expression, I
naturally put in semicolons and have to remove them. My point is that this
syntax is never going to be exactly the same as an if statement, so I don't see
any advantage to the "end if".

Robert made the claim that error correction is "manageable". That might be true
for some forms of parser, but I'm pretty sure that there is no hope for our
purely table-driven parser. We'd pretty much have to give up if you left out
something in the above expression. I don't think there would be any sane way to
tell the difference between an if statement and a conditional expression in the
above syntax,
but they go very different places. So I would be strongly opposed to leaving out
the parens in conditions (if statements, when parts of exits, etc.) -- anywhere
where a simple omission could allow an if statement to be legal.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  6:54 PM

> That certainly seems simplest to implement, but I'll admit being a
> little uncomfortable about how asymmetric it is at a formal level.

Yes, but I fear you really get into a mess if you try to make it symmetrical.
Unless you say "try both, and if one works, OK", you run the risk of making
things more restrictive than this asymmetrical definition/implementation.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  6:58 PM

...
> How does the rule you describe handle this example?

It would be ambiguous, but so would

    Foo (null)

so what's the big deal?

I don't mind figuring out a rule that allows this case provided that

a) it is easy to implement

b) it is no more restrictive than the rule I proposed.

What I do object to is the aggregate/simple type rule which would make the
posted example with strings ambiguous.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 23, 2009  7:04 PM

> You need to deal with the case where Y and/or Z are both overloaded
> calls. It is here that intersection of types is needed (no big deal)

Well, I would say "you may want to deal"; we don't currently, and I am not sure
it is worth complexifying things for this. But certainly that would be
acceptable to me. And it is symmetrical.

My view is that

(if A then B else C)

must work in a context where

B

or

C

on their own would work and yield the same type, I don't mind if we allow more
cases, but we should allow at least this much. My rule allows more since it
would allow Steve's example with null in the else part.

I do see the discomfort with asymmetricality

   (if A then B else C)       OK
   (if not A then C else B)   KO

is odd, but will it really happen in practice?

****************************************************************

From: Bob Duff
Sent: Tuesday, March 23, 2009  8:11 PM

> You need to deal with the case where Y and/or Z are both overloaded
> calls. It is here that intersection of types is needed (no big deal)

I agree it's not that big of a deal to implement.
And for uniformity with other resolution rules, I'd make it symmetric (so
Steve's example resolves).

But note that it's not just "both overloaded" -- it's "all overloaded", because
you can have a chain of elsif's, and you have to intersect the possible types of
all of them, as well as the type(s) determined from context.  I thought this was
the rule Robert was advocating all along.  It matches the "simple" Algol 68 rule
(at least, my understanding of the Algol 68 rule based on a phone conversation
with Robert last week).

Note that the GNAT implementation turns elsif chains into nested if-then-else's.
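
That is, roughly, the elsif chain

    (if A then X elsif B then Y else Z)

is handled as the nested form

    (if A then X else (if B then Y else Z))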

And note that I'm now convinced that the "resolve like an aggregate"
rule is too restrictive.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 23, 2009  8:27 PM

...
> And note that I'm now convinced that the "resolve like an aggregate"
> rule is too restrictive.

For the record, so am I.

****************************************************************

From: Bob Duff
Sent: Tuesday, March 23, 2009  8:01 PM

...
> > But I just realized that the "single type" rule Randy and I have
> > been advocating also makes this ambiguous!  Uh oh.
>
> OK, is that enough to convince you, ...

Yes (given that it's too late to get rid of the annoying component-wise "&"
operators).

>...or will you suddenly decide that
> it makes things more readable to write
>
> >>     Put_Line (Integer'Image (Num_Errors) &
> >>           String'(if Num_Errors = 1 then "error detected." else
> >> "errors detected."));
>
> :-)

No, I won't decide that this is more readable!  ;-)

****************************************************************

From: Steve Baird
Sent: Tuesday, March 23, 2009  9:04 PM

...
> What I meant is the standard two-pass resolution algorithm that no
> doubt we all use!  We identify candidate interpretations bottom-up,
> and we find the single interpretation for a complete context top-down.
> I'm not proposing anything different: find interpretations of each
> expression, find the common intersection, and these are the candidate
> interpretations of the conditional expression as a whole.  The full
> context will then indicate the expected type for it, and that gets
> propagated to each expression (which may be overloaded).
>

...
> I do see the discomfort with asymmetricality
>
>   (if A then B else C)       OK
>   (if not A then C else B)   KO
>
> is odd, but will it really happen in practice?

I think that Ed has outlined the "right" solution, based on the principle of
least surprise.

Perhaps the cases where this more general rule is required won't come up very
often in practice, but they will be annoying and/or mystifying when they do (if
these cases are disallowed). Users could certainly live with such rules, but why
would we want to ask them to?

I think Ed has already noted that the intersection operator has to deal with
cases where one of the operands is something like "the set of all access types"
or "the set of all composite types". but this doesn't seem like a big deal.

Just my opinion.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 23, 2009  9:43 PM

...
> Perhaps the cases where this more general rule is required won't come
> up very often in practice, but they will be annoying and/or mystifying
> when they do (if these cases are disallowed). Users could certainly
> live with such rules, but why would we want to ask them to?
>
> I think Ed has already noted that the intersection operator has to
> deal with cases where one of the operands is something like "the set
> of all access types" or "the set of all composite types". but this
> doesn't seem like a big deal.

What "intersection operator"? I think you are assuming a lot if you think that
there is some general-purpose "intersection" operator in most compilers. Surely
Janus/Ada doesn't have one.

For calls, we try to solve each parameter to the parameter type for every
possible interpretation. (We once tried to save intermediate results for such
attempts, but it turned out that in practice multiple occurrences of a single
type is rare, and the overhead to save the results was actually much worse than
the time saved.) If there's more than one that works at any point, then the
expression is ambiguous and we give up. (Note that there may be more than one
*of different types* that is live at some point, but not of the same type).

There is a special case hack to deal with solving pairs of expressions at once,
such as the bounds of a range or the two sides of an assignment. There is one
for memberships as well (that is, three possibilities): it takes 500 lines of
code! I don't know of any way to generalize this code (if we knew of a way, we
would have done it in the first place).

While I'm sure that this can be resolved (and I didn't write this code, so it's
possible that there is something that I've missed), I don't think it would be as
easy as Ed suggests. Whereas, the aggregate resolution would be easy to apply.

That said, I understand the problem with aggregate resolution, so it may be
appropriate to go with the more complex resolution. Just don't make claims like
"it is easy to do", because I don't see any reason to assume that.

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 24, 2009  4:57 AM

This is the Name Resolution wording I suggested in an earlier reply.  I think it
is about right:

    The expected type for each dependent expression is that of
    the conditional expression as a whole.  All dependent expressions
    shall resolve to the same type.  The type of the conditional
    expression as a whole is this same type.  If there is no ELSE part,
    this type shall be a boolean type.

  This approach will ensure that the expected type has to be a suitable
  "single" type if any one of the dependent expressions requires a single
  type.  For example, if one of the dependent expressions is a string
  literal, then the expected type has to be a single string type.
  We certainly don't want to have to start searching for all in-scope
  string types to resolve a conditional expression.

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 24, 2009  5:32 AM

Seems a pity that this rule is more complicated to implement than the current
asymmetric rule applied by GNAT *and* more restrictive. So we have a rule whose
only merit is that it is more aesthetic to describe, hmmmmmm ...

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 24, 2009  5:47 AM

Can you give an example where it is
more restrictive?  It isn't intended
to be.

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 24, 2009  7:01 AM

Maybe I don't understand your suggestion; suppose we have two procedures:

   type S1 is new String;
   type S2 is new String;

   procedure P (A : S1);
   procedure P (A : S2);

   V1 : S1 := ...

   P ((if B then V1 else "xyz"));

The rule I suggested (and of course the intersection rule) would resolve this,
but as I understand your suggestion, it would not resolve since "xyz" does not
have a unique type considered on its own.

****************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, March 24, 2009  8:59 AM

But would you resolve:
   P ((if B then "xyz" else V1));

****************************************************************

From: Bob Duff
Sent: Wednesday, March 24, 2009  8:34 AM

Tucker, please explain how your proposed rule deals with Robert's example, and
also with this:

    P((if A then "xxx" elsif B then "yyy" elsif C then "zzz" else V1));

presuming A, B, and C are not overloaded -- just Boolean variables, say.

Also, how does it deal with my earlier example:

    Put_Line (Integer'Image (Num_Errors) &
              (if Num_Errors = 1 then "error" else "errors") &
              "detected.");

assuming there is just one visible Put_Line that takes String, and the usual "&"
ops are visible?

Also, what if there are two Put_Lines, one for String, and one for
Wide_Wide_String?

I think Ed and Steve are saying all of the above should resolve, and I think I
agree.

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 24, 2009  9:20 AM

> P ((if B then V1 else "xyz"));

No, the rule as suggested would resolve this.
When some rule says that the expected type must be a "single type" that doesn't
mean that there can only be one overloading, but rather it means that the
expected type is not something like "any composite" or "any string". In
particular, it can't be the operand of a type conversion.  See 8.6(27/2) for the
formalities.

****************************************************************

From: Bob Duff
Sent: Wednesday, March 24, 2009  8:55 AM

> I think that Ed has outlined the "right" solution, based on the
> principle of least surprise.
>
> Perhaps the cases where this more general rule is required won't come
> up very often in practice, but they will be annoying and/or mystifying
> when they do (if these cases are disallowed). Users could certainly
> live with such rules, but why would we want to ask them to?

Shrug.  I think you overestimate the knowledge of typical Ada programmers. The
way people program is sloppy with regard to overloading:  Declare whatever names
make sense, without thinking much about overloading.  Write down references to
possibly-overloaded stuff willy-nilly.  If it resolves, fine. If it's ambiguous,
stick in qualified expressions.  If there end up being too many qualified
expressions, change some of the subp names to be less overloaded.

There's nothing wrong with this sloppiness -- programmers shouldn't have to
understand the overloading rules.  They just need a vague understanding:
Overloading is allowed, and the compiler figures things out based on types, and
you can use qualification to resolve ambiguities.

In fact, even I (who do understand the overloading rules, when I've got my
language lawyer hat on) program in this sloppy way -- I'm not thinking much
about the exact rules when I program.

Nonetheless, I think I agree with your conclusion.  It seems like the simplest
rule (even if not the simplest to implement), and it's uniform with things like
":=".

> I think Ed has already noted that the intersection operator has to
> deal with cases where one of the operands is something like "the set
> of all access types" or "the set of all composite types".
> but this doesn't seem like a big deal.

Agreed.

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 24, 2009  2:19 PM

Here is the proposed rule yet again:

    The expected type for each dependent expression is that of
    the conditional expression as a whole.  All dependent expressions
    shall resolve to the same type.  The type of the conditional
    expression as a whole is this same type.  If there is no ELSE part,
    this type shall be a boolean type.

For Robert's example:

 >>    type S1 is new String;
 >>    type S2 is new String;
 >>
 >>    procedure P (A : S1);
 >>    procedure P (A : S2);
 >>
 >>    V1 : S1 := ...
 >>
 >>    P ((if B then V1 else "xyz"));

Both P's would be considered. When considering the first P, the expected type
for "V1" and "xyz" would be S1.  For the second P, the expected type would be S2
for each. Clearly only the first P would be an "acceptable" interpretation. So
this would be unambiguous.

For this example:

      P((if A then "xxx" elsif B then "yyy" elsif C then "zzz" else V1));

the expected type for each of the dependent expressions would be the type of the
first formal parameter of P.  If there are multiple P's, then those are each
considered in turn to find if any of them are "acceptable" interpretations.
Presumably only the P's whose parameter is of a string type and that also
matches "V1" would be acceptable.   If there is exactly one such "P" then all is
well. If more than one, it is ambiguous.  If none, then no dice.

This would be roughly equivalent to taking each P and giving it four parameters
all of the same type.  If that resolves, then so would this.
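
As a rough sketch of that equivalence (profile invented, taking V1 to be of
type S1 as in Robert's earlier example), resolving the call above behaves like
resolving

    procedure P4 (W, X, Y, Z : S1);
    ...
    P4 ("xxx", "yyy", "zzz", V1);

with all four actuals sharing the same expected type; if that call resolves,
so does the conditional-expression form.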

For this example:

 >
 >     Put_Line (Integer'Image (Num_Errors) &
 >               (if Num_Errors = 1 then "error" else "errors") &
 >               "detected.");
 >
 > assuming there is just one visible Put_Line that takes String,
 > and the usual "&" ops are visible?

Then all directly visible overloadings of "&" with result type String would be
considered.  Only those that have a single string type as their parameter type
would end up acceptable.  If there is more than one, then it is ambiguous.
Clearly there is at least one, since the "&"(String,String)=>String in package
Standard will do the job.

For this one:

 > Also, what if there are two Put_Lines, one for String, and one for
 > Wide_Wide_String?

Both Put_Line's would be considered, but only the one with formal parameter of
type String would be acceptable, since the only "&" whose Left operand matches
Integer'Image is the one for String.

This is all pretty standard overloading stuff.
All the rule I have suggested said is that the expected type is passed down to
all of the dependent expressions, with the additional requirement that all of
them end up resolving to the same type.  The additional requirement is generally
redundant, unless the "expected type" turns out to be something very general
like "any nonlimited type" or equivalent.

Reading name resolution rules is never easy, but the one I suggested isn't that
much different from many others in the language.

****************************************************************

From: Bob Duff
Sent: Wednesday, March 24, 2009  2:53 PM

> This is all pretty standard overloading stuff.

Yes, pretty standard.  Not sure why I was confused earlier.

So your wording is equivalent to Ed's and Steve's notion of intersecting sets of
types, which is a more mechanistic way of looking at it.

Thanks for clarifying.  For the record, I am happy with your proposed rule.

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 24, 2009  6:45 PM

OK I understand Tuck's rule, and it seems fine (seems equivalent to Ed's
intersection description actually :-)

****************************************************************

From: Robert Dewar
Sent: Saturday, March 28, 2009  7:04 AM

> What "intersection operator"? I think you are assuming a lot if you
> think that there is some general purpose "intersection" operators in
> most compilers. Surely Janus/Ada doesn't have one.
>
> For calls, we try to solve each parameter to the parameter type for
> every possible interpretation. (We once tried to save intermediate
> results for such attempts, but it turned out that in practice multiple
> occurrences of a single type is rare, and the overhead to save the
> results was actually much worse than the time saved.) If there's more
> than one that works at any point, then the expression is ambiguous and
> we give up. (Note that there may be more than one *of different types*
> that is live at some point, but not of the same type).
>
> There is a special case hack to deal with solving pairs of expressions
> at once, such as the bounds of a range or the two sides of an
> assignment. There is one for memberships as well (that is three
> possibilities): it takes 500 lines of code! I don't know of any way to
> generalize this code (if we knew of a way, we would have done it in the first
> place).

Randy, it seems like you have handled resolution with a series of special case
hacks instead of a general scheme. There is nothing inherently wrong with that
approach if it works, but it is not surprising that you have to keep adding more
special case hacks when more features are added to the language. Perhaps at some
point you might want to consider getting rid of these hacks and using a more
general resolution scheme (you can always look at the GNAT code for an example
:-)

But anyway, in this situation, I don't see this is an argument against what
seems to be the most reasonable resolution rule from a language point of view.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 30, 2009  2:21 PM

> Randy, it seems like you have handled resolution with a series of
> special case hacks instead of a general scheme.
> There is nothing inherently wrong with that approach if it works, but
> it is not surprising that you have to keep adding more special case
> hacks when more features are added to the language. Perhaps at some
> point you might want to consider getting rid of these hacks and using
> a more general resolution scheme (you can always look at the GNAT code
> for an example :-)

I'm pretty sure I have better things to do than to replace working code,
especially for something as complex as resolution. Especially without any
customer demand for such a change.

But upon reflection, I think that membership probably is the wrong example. In a
membership, there is no information from the context as to the type that you
need to resolve to. That's what makes it so messy in our algorithm. A
conditional expression does not have that problem, so it in fact may not be a
problem to do full resolution on it.
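
A made-up illustration of the difference (F and Cond are invented names):

    function F return Integer;
    function F return Long_Integer;
    Cond : Boolean := True;

    X : Integer := (if Cond then F else 0);
    --  The expected type Integer comes from the context and is passed down,
    --  so F resolves to the Integer version.

    B : Boolean := F in 1 .. 10;
    --  No expected type flows into F or the range from the context; the
    --  membership itself has to determine a single tested type, which is
    --  what makes it messy (here it would be ambiguous).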

> But anyway, in this situation, I don't see this is an argument against
> what seems to be the most reasonable resolution rule from a language
> point of view.

That's true. Nor is the claim that it is easy in some implementation much of an
argument for a resolution rule. In general, arguments (either for or against)
based on a single implementation are very weak.

In this case, I would have preferred a less complex resolution rule, for all of
the reasons I outlined in the AI. But I've seen enough examples where it
wouldn't work very well to realize that the more complex rule is probably
necessary in this case.

****************************************************************

From: Robert Dewar
Sent: Monday, March 30, 2009  3:02 PM

We have implemented this resolution rule for conditional expressions, and for us
it was a fairly simple fix (less than 10 lines of code).

****************************************************************

From: Tucker Taft
Sent: Monday, March 30, 2009  3:38 PM

Can you elaborate slightly when you say "this" resolution rule.
Is this a symmetric one, which takes context into account?

****************************************************************

From: Robert Dewar
Sent: Monday, March 30, 2009  5:49 PM

yes

****************************************************************

From: Edmond Schonberg
Sent: Monday, March 30, 2009  4:10 PM

Yes: the expressions may be overloaded, and are resolved from context.  This is
not different from operator resolution: the candidate interpretations of the
conditional expression are the types common to the types of the constituent
expressions that follow "then" and "else".  The context eventually selects one
of those types, which is used to complete the resolution of each expression.

****************************************************************

From: Tucker Taft
Sent: Monday, March 30, 2009  4:40 PM

Would you say this is an implementation of the rule for which I proposed
wording, or is it implementing some other wording?  Here is the wording I
proposed:

    The expected type for each dependent expression is that of
    the conditional expression as a whole.  All dependent expressions
    shall resolve to the same type.  The type of the conditional
    expression as a whole is this same type.  If there is no ELSE part,
    this type shall be a boolean type.

****************************************************************

From: Robert Dewar
Sent: Monday, March 30, 2009  6:07 PM

I think it is the same, though I must say, I find the above wording difficult to
understand precisely.

What we implement exactly is to gather the set of possible types for each
operand, given the context, then do an intersection between these two sets, and
there must be exactly one type in the intersection.

It's actually exactly the same as A + B really ...

****************************************************************

From: Edmond Schonberg
Sent: Monday, March 30, 2009  7:09 PM

> Would you say this is an implementation of the rule for which I
> proposed wording, or is it implementing some other wording?  Here is
> the wording I proposed:
>
>   The expected type for each dependent expression is that of
>   the conditional expression as a whole.  All dependent expressions
>   shall resolve to the same type.  The type of the conditional
>   expression as a whole is this same type.  If there is no ELSE part,
>   this type shall be a boolean type.

Yes, our implementation is equivalent to the above.  The description you give is
global, while the description I gave of our implementation is operational, and
reflects the standard two-pass algorithm.  Just to make sure, here is the
program we are using as a simple test case:

with Text_IO; use Text_IO;
procedure Condifo is
    type I1 is new Integer;
    type I2 is new Integer;
    type I3 is new Integer;
    V1 : I1;
    V2 : I2;
    V3 : I3;

    procedure P (Arg : I1) is
    begin
       Put_Line ("P/I1 called");
    end P;

    procedure P (Arg : I2) is
    begin
       Put_Line ("P/I2 called");
    end P;

    procedure P2 (Arg1 : I1; Arg3 : I3) is
    begin
       Put_Line ("P/I1-I3 called");
    end P2;

    procedure P2 (Arg1 : I1; Arg2 : I1) is
    begin
       Put_Line ("P/I1-I2 called");
    end P2;

    function F return I1 is
    begin
       Put_Line ("F/I1 called");
       return 1;
    end F;

    function F return I2 is
    begin
       Put_Line ("F/I2 called");
       return 2;
    end F;

    function Q return I1 is
    begin
       Put_Line ("Q/I1 called");
       return 1;
    end Q;

    function Q return I2 is
    begin
       Put_Line ("Q/I2 called");
       return 2;
    end Q;

    function R return I3 is
    begin
       Put_Line ("R/I3 called");
       return 3;
    end R;

    function R return I2 is
    begin
       Put_Line ("R/I2 called");
       return 2;
    end R;

begin
    for J in Boolean loop
       P (if J then V1 else F);
    end loop;

    New_Line;

    for J in Boolean loop
       P (if J then F else V2);
    end loop;

    New_Line;

    for J in Boolean loop
       P (if J then Q else R);
    end loop;

    for J in Boolean loop
       P (if J then Q elsif (not J) then  R else F);
    end loop;

    for J in Boolean loop
       P2 (if J then Q else F, R);
    end loop;
end Condifo;

****************************************************************

From: Randy Brukardt
Sent: Monday, March 30, 2009  7:34 PM

> >     The expected type for each dependent expression is that of
> >     the conditional expression as a whole.  All dependent expressions
> >     shall resolve to the same type.  The type of the conditional
> >     expression as a whole is this same type.  If there is no ELSE part,
> >     this type shall be a boolean type.
>
> I think it is the same, though I must say, I find the above wording
> difficult to understand precisely.

I think the wording is either overly complex or wrong - it surely seems more
confusing than necessary.

> What we implement exactly is to gather the set of possible types for
> each operand, given the context, then do an intersection between these
> two sets, and there must be exactly one type in the intersection.
>
> It's actually exactly the same as A + B really ...

Well, almost: a conditional expression can have any number of expressions while
A+B only has two. But the principle is correct: the effect ought to be the same
as
    function Cond_Expr (Then_Part, {Elsif_Part_n,} Else_Part : T) return T;

But Tucker's wording doesn't seem to have that effect. For one thing, it seems
to disallow the "covered-by" rules. In:

   function F (A : T; B : T'Class) return T'Class is
   begin
       return (if Something then A else B);  -- OK?
       if Something then
           return A; -- OK.
       else
           return B; -- OK.
       end if;
   end F;

A and B resolve to different (but compatible) types. I'm basing this on the
wording in 8.6(22-25.1/2) - there it says that if the expected type is a
specific type T, then the construct shall resolve to T, or T'Class, etc.

If we really wanted these all to be the same type (and ignore "covered by"
and the like), we probably should write that as a legality rule after
resolution. (We don't want to have resolution any more specific than necessary.)
But I don't think that is what we want - if it was, I don't see why we would
want to try so hard to make aggregates and string literals work without
qualification.
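
For instance (Vec, Other_Vec, No_String, and Cond are made-up names):

    type Vec is array (1 .. 3) of Integer;
    Other_Vec : Vec := (others => 0);
    No_String : String := "no!";
    Cond      : Boolean := True;

    V : Vec    := (if Cond then (1, 2, 3) else Other_Vec);
    S : String := (if Cond then "yes" else No_String);
    --  The aggregate and the string literal resolve only because the expected
    --  type of the conditional expression is passed down to them.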

I think Tucker is trying to say that all of the expressions have to resolve when
their expected type is the type that the entire conditional expression resolves
to. But that's recursive and doesn't make much sense.

(Interestingly, the "aggregate" rule would have worked fine on this example, as
does the function analog rule. Just not the rule Tucker wrote.)

I understand that the reason for the complexity here is to deal with the case
where a conditional expression is used in a place that allows "any <something>
type", such as an if statement (which allows "any boolean type"):

    function ">" (Left, Right : T) return Some_Boolean;

    if (if Something then A > B else True) then

One would hope that this would work and resolve to Some_Boolean (with the "True"
being of type Some_Boolean). But it is hard to get too excited about using
conditional expressions in such contexts (if statements, case statements, type
conversions). I suppose we have to ensure that we prevent something like:

    function ">" (Left, Right : T) return Some_Boolean;
    function "=" (Left, Right : T) return Boolean;

    if (if Something then A > B else A = B) then

We don't want the expressions to have different types. But a legality rule would
be enough to prevent this case.

Anyway, there must be a better way to word this. One that is correct would be
nice! (Not that I can figure it out, unfortunately.)

****************************************************************

From: Robert Dewar
Sent: Monday, March 30, 2009  7:49 PM

> Well, almost: a conditional expression can have any number of
> expressions while A+B only has two. But the principle is correct: the
> effect ought to be the same as
>     function Cond_Expr (Then_Part, {Elsif_Part_n,} Else_Part : T)
> return T;

Actually elsifs are easy to handle; we just rewrite

    (if a then b elsif c then d elsif e then f else g)

as

    (if a then b else (if c then d else (if e then f else g)))

and we only have to worry about the two-operand case.

****************************************************************

From: Tucker Taft
Sent: Monday, March 30, 2009  9:15 PM

I'm happy to have someone else propose wording, but let me explain what I have
proposed already:

First, it is important to know what is the *expected type* of each dependent
expression, because without that there are a whole lot of rules which don't
work.  I presume we all agree it should be the same as that of the conditional
expression as a whole.

Secondly, we need to know what is the *type* of the conditional expression as a
whole.  It seems clear that it wants to be the same as the type of each of the
dependent expressions.  This latter requirement can also help resolve ambiguity,
and corresponds to the "intersection" that has been mentioned several times.

The last part about "boolean" seems pretty straightforward.

So I don't see how it could be phrased much more simply, but I may be blind to
some obvious easier approach.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 30, 2009  9:54 PM

> First, it is important to know what is the *expected type* of each
> dependent expression, because without that there are a whole lot of
> rules which don't work.  I presume we all agree it should be the same
> as that of the conditional expression as a whole.

Right, no problem here.

> Secondly, we need to know what is the *type* of the conditional
> expression as a whole.  It seems clear that it wants to be the same as
> the type of each of the dependent expressions.  This latter
> requirement can also help resolve ambiguity, and corresponds to the
> "intersection" that has been mentioned several times.

But this I don't agree with (at least pedantically): it means that none of the
classwide conversions will work. We go to great lengths to make an object of
T'Class work (almost) everywhere that an object of T works, and the rule you are
proposing here does not allow that.

That's because the type of an expression is still T'Class even if the expected
type is T (or vice versa). So the rule that you proposed is *WRONG* (not just
confusing), at least if you expect the classwide rules to work as usual (I
certainly do!). You can't talk about the type of a subexpression without adding
an unnecessary restriction to the language, because it is perfectly reasonable
for them to be different.

Note that the rule you have also would deny the use of conditional expressions
for anonymous access types, because they also use these "resolve to some other
type" rules. The problem potentially would happen anywhere that Ada has an
implicit conversion, because the language phrases these in terms of resolving to
some type other than the expected type.
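
A sketch of the anonymous-access concern (all of these names are made up):

    type Int_Ptr  is access all Integer;
    type Int_Ptr2 is access all Integer;
    procedure Note (Item : access Integer);

    X  : aliased Integer := 0;
    P1 : Int_Ptr  := X'Access;
    P2 : Int_Ptr2 := X'Access;
    ...
    Note (if Cond then P1 else P2);
    --  The expected type is the anonymous "access Integer"; P1 and P2 each
    --  implicitly convert to it, but they resolve to two different named
    --  types, so a "shall resolve to the same type" requirement would reject
    --  a call that ought to be legal.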

Now, I agree with the basic principle that you are suggesting; the problem is
that the way the language is worded you don't get the correct result.

> The last part about "boolean" seems pretty straightforward.
>
> So I don't see how it could be phrased much more simply, but I may be
> blind to some obvious easier approach.

It's irrelevant whether it can be simpler; the question is whether it can be
phrased *correctly*. Then we can go for simpler!

The only rule that I can think of that would work at all is to phrase it in
terms of an equivalence to a function call (as described previously):

    function Cond_Expr (Then_Part, {Elsif_Part_n,} Else_Part : T) return T;

This is what is implemented in GNAT if I understand correctly, not the rule that
you described.

But of course we have a natural aversion to "equivalences".

The only way that some rule about same types would work would be if we required
all of the expressions to have a single expected type (that is, the same
expected type for all expressions, and no "any"s). But I think that is a new
invention. We definitely cannot talk about the type that the expressions resolve
to.

We could also drop the part about the types of the expressions being the same
completely. (The expected types already define everything that we need to know.)
But that
leads to other annoyances in the case of resolving to "any boolean type" and the
like. So I don't think that is a good route.

****************************************************************

From: Tucker Taft
Sent: Monday, March 30, 2009  10:27 PM

> But this I don't agree with (at least pedantically): it means that
> none of the classwide conversions will work. We go to great lengths to
> make an object of T'Class work (almost) everywhere that an object of T
> works, and the rule you are proposing here does not allow that.

I'm not sure I follow.  We still allow T'Class where T is expected.
But I think we should require *all* of the dependent expressions to be
class-wide, not just some of them, if you want dispatching to occur.

> That's because the type of an expression is still T'Class even if the
> expected type is T (or vice versa). So the rule that you proposed is
> *WRONG* (not just confusing), at least if you expect the classwide
> rules to work as usual (I certainly do!). You can't talk about the
> type of a subexpression without adding an unnecessary restriction to
> the language, because it is perfectly reasonable for them to be different.

I don't think I fully understand your terminology.  What do you mean by "you can't
talk about the type of a subexpression"?

But I think I am seeing your point.  Probably the best fix is to say that *if
the expected type is not a single type* then all of the dependent expressions
shall resolve to the same type.  On the other hand, if the expected type is a
single type, then the only requirement might be a legality rule about
dispatching.  It would be similar to the existing rule that requires all
controlling operands to be dynamically tagged if any of them are.  This would
carry down to the dependent expressions.  All or none are dynamically tagged. Or
something like that.

> Note that the rule you have also would deny the use of conditional
> expressions for anonymous access types, because they also use these
> "resolve to some other type" rules. The problem potentially would
> happen anywhere that Ada has an implicit conversion, because the
> language phrases these in terms of resolving to some type other than the expected type.

Yes, I see your point, and I agree we don't need to require that all dependent
expressions resolve to the same type if there is an expected type that is a
single type.  If the expected type isn't a single type, then the "intersection"
rule would apply.

Here is a possible rewording of the Name Resolution part.  The legality rule
about all-or-none being dynamically tagged is not covered here:

     The expected type for each dependent expression is that of
     the conditional expression as a whole.  If the expected type
     is not a single type, then all dependent expressions
     shall resolve to the same type, and the type of the conditional
     expression as a whole is this same type.  If there is no ELSE part,
     each dependent expression shall resolve to a boolean type.

...
> We could also part about the types of the expressions being the same
> completely. (The expected types already define everything that we need
> to know). But that leads to other annoyances in the case of resolving
> to "any boolean type" and the like. So I don't think that is a good route.

See above for another possible wording.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 30, 2009  11:35 PM

...
> But I think I am seeing your point.  Probably the best fix is to say
> that *if the expected type is not a single type* then all of the
> dependent expressions shall resolve to the same type.  On the other
> hand, if the expected type is a single type, then the only requirement
> might be a legality rule about dispatching.  It would be similar to
> the existing rule that requires all controlling operands to be
> dynamically tagged if any of them are.  This would carry down to the
> dependent expressions.  All or none are dynamically tagged.
> Or something like that.

Ah yes, I forgot about that rule. (Not one of my favorite rules in the language.
Aside: I recall telling Isaac that we needed to enforce that "stupid" rule, and
he wrote a routine called "Check_Stupid_Tagged_Rule" to do so. It's still there
in the Janus/Ada compiler. Apparently, I was a bit too opinionated when
explaining it to him. :-))

I would think that we would only want to enforce that where it mattered (as a
parameter to a call); it would be weird to require type conversions in a return
statement (for instance) that wouldn't have any problem if written as an if
statement with two returns. (See my original mail on this topic for an example.)

...
> Here is a possible rewording of the Name Resolution part.
> The legality rule about all-or-none being dynamically tagged is not
> covered here:
>
>      The expected type for each dependent expression is that of
>      the conditional expression as a whole.  If the expected type
>      is not a single type, then all dependent expressions
>      shall resolve to the same type, and the type of the conditional
>      expression as a whole is this same type.  If there is no ELSE part,
>      each dependent expression shall resolve to a boolean type.

Much better. The only glitch here is that we don't ever say what the type of the
conditional expression is if the expected type is a single type. I guess that is
typical (I don't see any wording about the type of an aggregate for instance),
but it looks weird in this context. So perhaps add an Otherwise clause to avoid
confusion:

      The expected type for each dependent expression is that of
      the conditional expression as a whole.  If the expected type
      is not a single type, then all dependent expressions
      shall resolve to the same type, and the type of the conditional
      expression as a whole is this same type.  Otherwise, the type of
      the conditional expression is the single expected type. If there
      is no ELSE part, each dependent expression shall resolve to a boolean
      type.

Adding appropriate Redundant brackets.
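
To make the two branches of this wording concrete (Cond, A, B, F1, and F2 are
invented names):

      X : Integer := (if Cond then A else B);
      --  The expected type Integer is a single type, so only the first
      --  sentence applies; A and B need not resolve to the same type.

      if (if Cond then F1 else F2) then ...
      --  The condition expects "any boolean type", which is not a single
      --  type, so F1 and F2 must resolve to the same (boolean) type, and
      --  that type is the type of the conditional expression as a whole.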

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 31, 2009  5:47 AM

> ...
>> But I think I am seeing your point.  Probably the best fix is to say
>> that *if the expected type is not a single type* then all of the
>> dependent expressions shall resolve to the same type.  On the other
>> hand, if the expected type is a single type, then the only
>> requirement might be a legality rule about dispatching.  It would be
>> similar to the existing rule that requires all controlling operands
>> to be dynamically tagged if any of them are.  This would carry down
>> to the dependent expressions.  All or none are dynamically tagged.
>> Or something like that.
>
> Ah yes, I forgot about that rule. (Not one of my favorite rules in the
> language. Aside: I recall telling Isaac that we needed to enforce that
> "stupid" rule, and he wrote a routine called
> "Check_Stupid_Tagged_Rule" to do so. It's still there in the Janus/Ada
> compiler. Apparently, I was a bit too opinionated when explaining it
> to him. :-)
>
> I would think that we would only want to enforce that where it
> mattered (as a parameter to a call); it would be weird to require type
> conversions in a return statement (for instance) that wouldn't have
> any problem if written as an if statement with two returns. (See my original
> mail on this topic for an example.)

The only place where you can pass T'Class with an expected type of T is as part
of a dispatching call, and in those contexts you aren't allowed to mix
statically tagged and dynamically tagged operands, according to RM 3.9.2(8).

On the other hand, if the expected type is T'Class, then there seems no reason
to require that all the dependent expressions resolve to the same type covered
by T'Class, nor that all or none be class-wide.
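
A made-up sketch of that distinction (Pkg, Op, Op_CW, and Cond are invented
names):

     package Pkg is
        type T is tagged null record;
        procedure Op (X : T);            -- dispatching on X
        procedure Op_CW (X : T'Class);   -- class-wide, no dispatching
     end Pkg;

     Specific  : Pkg.T;
     Classwide : Pkg.T'Class := Specific;
     ...
     Pkg.Op (if Cond then Specific else Classwide);
     --  Mixes a statically tagged and a dynamically tagged operand; carrying
     --  the 3.9.2(8) rule down to the dependent expressions would make this
     --  illegal, while converting Specific to Pkg.T'Class would dispatch.

     Pkg.Op_CW (if Cond then Specific else Classwide);
     --  Expected type is T'Class: no same-type or all-or-none requirement
     --  seems needed here.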

>  ...
>> Here is a possible rewording of the Name Resolution part.
>> The legality rule about all-or-none being dynamically tagged is not
>> covered here:
>>
>>      The expected type for each dependent expression is that of
>>      the conditional expression as a whole.  If the expected type
>>      is not a single type, then all dependent expressions
>>      shall resolve to the same type, and the type of the conditional
>>      expression as a whole is this same type.  If there is no ELSE part,
>>      each dependent expression shall resolve to a boolean type.
>
> Much better. The only glitch here is that we don't ever say what the
> type of the conditional expression is if the expected type is a single
> type. I guess that is typical (I don't see any wording about the type
> of an aggregate for instance), but it looks weird in this context. So
> perhaps add an Otherwise clause to avoid confusion:
>
>       The expected type for each dependent expression is that of
>       the conditional expression as a whole.  If the expected type
>       is not a single type, then all dependent expressions
>       shall resolve to the same type, and the type of the conditional
>       expression as a whole is this same type.  Otherwise, the type of
>       the conditional expression is the single expected type. If there
>       is no ELSE part, each dependent expression shall resolve to a boolean
>       type.

I'm not sure this is wise.  I think it might be better to leave it unstated.
We don't want to say that the type of the conditional expression is T in a
dispatching call with expected type T, but actual dependent expressions of type
T'Class.  But maybe it is necessary to specify the type of the conditional
expression as a whole.  It would be interesting to see in what context the RM
requires there be a single well-defined type for a given expression, and what
would be the appropriate answer in these cases.


****************************************************************

From: Steve Baird
Sent: Monday, April 13, 2009 11:26 AM

Do we want to allow, for example,
    X : Some_Limited_Type := (if ... then  ... else ...); ?

7.5(2.1/2) says:
   In the following contexts, an expression of a limited type is not
   permitted unless it is an aggregate, a function_call, or a
   parenthesized expression or qualified_expression whose operand is
   permitted by this rule:

and then the build-in-place contexts are listed in the next few paragraphs.

Should conditional expressions be added to 7.5(2.1/2)?

****************************************************************

From: Robert Dewar
Sent: Monday, April 13, 2009 12:18 PM

I think yes, why not?

****************************************************************

From: Bob Duff
Sent: Thursday, April 16, 2009  3:19 PM

> Should conditional expressions be added to 7.5(2.1/2)?

I'm not sure.  It's not trivial to implement in GNAT.  It's not very useful --
I'll bet most conditionals will be of types like Boolean and String.

If we do allow it, we need to restrict it to having dependent expressions that
are allowed, so we need the same recursion used for qualified expressions
("whose operand is permitted...").

If we do not allow it, I suggest we add a rule outlawing conditional expressions
of limited types altogether.  Otherwise, they will be allowed in some places
(actual parameters) and not others (initial value of variable), which seems
inconsistent.  No great loss to outlaw them, IMHO.

****************************************************************

From: Robert Dewar
Sent: Wednesday, April 22, 2009  6:45 AM

Conditional expressions are now fully implemented and tested with the latest
development version of GNAT (you have to use the -gnatX language extensions
switch to get them).

The form is

   (if cond then expr {elsif cond then expr} else expr)

with provision for leaving out the parens in contexts where parens are present
anyway (parameters for procedures, pragmas, arguments of type conversion, type
qualifications).
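
For example (with -gnatX; the names here are made up):

   P (if Cond then A else B);                      -- sole actual parameter
   pragma Assert (if Cond then A > 0 else True);   -- pragma argument
   X := Integer (if Cond then A else B);           -- argument of a conversion
   Y := T'(if Cond then A else B);                 -- qualified expression
   Z := 1 + (if Cond then A else B);               -- elsewhere, parens needed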

They are really nice (we are making full use of them in the run-time, though we
cannot do that in the compiler proper for years because of bootstrap issues).

As with any expression feature like this, you have to be judicious

   (if Leap_Year then 1 else 2)

is really nice, but heavy nesting with complex conditions and expressions makes
for a mess!

Thought: shouldn't we also have case expressions while we are at it? The syntax
is obvious:

   (case x is when expr => expr {when expr => expr})


****************************************************************

From: Randy Brukardt
Sent: Monday, June 8, 2009  9:37 PM

I'm not going to have time to update the conditional expression AI before
leaving for the meeting tomorrow.

However, I have an action item to explain my discomfort with the idea of not
requiring parentheses around conditional expressions (outside of qualified
expressions). I realize that I won't be able to explain this in any useful way
verbally; examples are needed. So let me give some examples.

My concerns center on readability and on syntax error detection. Without the
parens, it will be hard to tell a conditional expression from an if_statement,
both for humans and for compilers.

Let's assume that parens are not required for actual parameters in calls.
Both of the following could be legal. One isn't indented correctly. Which is it?

    My_Package.Long_Procedure_Name (
       Object1,
       Object2 /= Function_Call(10));
       if Object3 > Func1(100) + Func2(Object1) then
          Subprog (Object4);
       else
          Subprog (Object2);
       end if;

vs.

    My_Package.Long_Procedure_Name (
       Object1,
       Object2 /= Function_Call(10),
       if Object3 > Func1(100) + Func2(Object1) then
          Subprog (Object4)
       else
          Subprog (Object2));

The only clue is the extra semicolons after the statements (vs. the
expressions) until you get to the missing "end if". Parens would at least help
differentiate the latter from the former much earlier in the reading.

More important is the detection of syntax errors. A mistake I commonly make
when cutting and pasting is failing to close the subprogram call properly. The
problem here is that the difference between a conditional expression and an if
statement is so minor that the error is likely to be diagnosed pages away from
the real error location.

    My_Package.Long_Procedure_Name (
       Object1,
       Object2 /= Function_Call(10),
    if Object3 > Func1(100) + Func2(Object1) then
       Subprog (Object4);
    else
       Subprog (Object2);
    end if;

The syntax error will not be diagnosed until the semicolon after the subprogram call
using Object4, as the syntax up to this point is legal. In this example, that is
two lines away from the actual error, but of course the condition can be many
lines long.

If the conditional expression required parens, some error would be detected as
soon as "if" is encountered, which is right at the point of the error.

I realize the error detection in the Janus/Ada parser is pretty primitive, but I
don't see any way for any LALR(1) parser to figure out where the error *really*
is in the above. That's because the various incomplete constructs aren't (yet)
identified on the parse stack -- it's just a pile of terminals and non-terminals
-- so study of the state of the parse will not provide much information about
the problem. (An LL parser might have a better chance since it knows what
constructs it's looking for.)

Existing cases of this sort of problem in Ada ("is" vs. ";" after a subprogram
definition, for instance) are a never-ending annoyance, and I'd really prefer
that we avoid adding similar cases to the language.

If the syntax is not going to be identical (and that isn't really a
possibility), it probably ought to be visually different, especially at the
start. I'm not sure how to best reconcile that with the desire to be familiar to
programmers - gratious differences also will be annoying. We could have used
square brackets here, but that seems pretty radical. More thought is required
here.

****************************************************************

From: Robert Dewar
Sent: Sunday, June 14, 2009  1:27 PM

> However, I have an action item to explain my discomfort with the idea
> of not requiring parenthesis around conditional expressions (outside
> of qualified expressions). I realize that I won't be able to explain
> this in any useful way verbally; examples are needed. So let me give some examples.

I disagree pretty strongly, as follows (note this is from the perspective of a
user: GNAT implements conditional expressions fully, and I have started to use
them pretty extensively).

> My concerns center on readability and on syntax error detection.
> Without the parens, it will be hard to tell between a conditional
> expression and an if_statement, both for humans and for compilers.
>
> Let's assume that parens are not required for actual parameters in calls.
> Both of the following could be legal. One isn't indented correctly.
> Which is it?
>
>     My_Package.Long_Procedure_Name (
>        Object1,
>        Object2 /= Function_Call(10));
>        if Object3 > Func1(100) + Func2(Object1) then
>           Subprog (Object4);
>        else
>           Subprog (Object2);
>        end if;
>
> vs.
>
>     My_Package.Long_Procedure_Name (
>        Object1,
>        Object2 /= Function_Call(10),
>        if Object3 > Func1(100) + Func2(Object1) then
>           Subprog (Object4)
>        else
>           Subprog (Object2));

Well

a) improperly indented code is always hard to read, and is
    basically unacceptable.

b) a good compiler has options to check indentation anyway;
    GNAT will complain loudly about improperly indented code,
    so a reader will never encounter such!

> The only clue is the extra semicolons after the statements (vs. the
> expressions) until you get to the missing "end if". Parens would at
> least help differentiate the latter from the former much earlier in the reading.

I see no advantage in parens here in making something readable that should never
be written (or tolerated by the compiler) in the first place.

> More importantly is the detection of syntax errors. A mistake I
> commonly make when cutting and pasting is failing to close the
> subprogram call properly. The problem here is that the difference
> between a conditional expression and a if statement is so minor that
> the error is likely to be diagnosed pages away from the real error location.
>
>     My_Package.Long_Procedure_Name (
>        Object1,
>        Object2 /= Function_Call(10),
>     if Object3 > Func1(100) + Func2(Object1) then
>        Subprog (Object4);
>     else
>        Subprog (Object2);
>     end if;

Well, it would be easy enough to special case this if it really arises in
practice (we will wait for a user report!). I have never run into this in my use
of conditional expressions.

> The syntax error will not be diagnosed until the semicolon after
> subprogram call using Object4, as the syntax up to this point is
> legal. In this example, that is two lines away from the actual error,
> but of course the condition can be many lines long.
>
> If the conditional expression required parens, some error would be
> detected as soon as "if" is encountered, which is right at the point of the error.
>
> I realize the error detection in the Janus/Ada parser is pretty
> primitive, but I don't see any way for any LALR(1) parser to figure
> out where the error
> *really* is in the above. That's because the various incomplete
> constructs aren't (yet) identified on the parse stack -- it's just a
> pile of terminals and non-terminals -- so study of the state of the
> parse will not provide much information about the problem. (An LL
> parser might have a better chance since it knows what constructs its
> looking for.)

Ada is designed to be easy to read; I do not consider designing so that a
naively written parser can give good error messages to be a significant factor
in the language design.

> Existing cases of this sort of problem in Ada ("is" vs. ";" after a
> subprogram definition, for instance) are a never-ending annoyance, and
> I'd really prefer that we avoid adding similar cases to the language.

GNAT has special circuitry that shows that this problem can be handled just
fine: in practice GNAT accurately diagnoses the misuse of IS for semicolon and
vice versa, so at least with GNAT this is not a "never-ending annoyance"; it is
not an annoyance at all.

> If the syntax is not going to be identical (and that isn't really a
> possibility), it probably ought to be visually different, especially
> at the start. I'm not sure how to best reconcile that with the desire
> to be familiar to programmers - gratious differences also will be
> annoying. We could have used square brackets here, but that seems
> pretty radical. More thought is required here.

I disagree that more thought is required: I find the proposal as it is
convenient to use and convenient to read, and I would really find the double
parens to be an irritation.

****************************************************************

From: Bob Duff
Sent: Sunday, June 14, 2009  3:44 PM

FWIW, I find:

    Some_Procedure ((if X then Y else Z));

in the case where there's just one param, to have mildly annoying "extra"
parens.  This seems better:

    Some_Procedure (if X then Y else Z);

On the other hand, when there are 2 or more params:

    Some_Procedure (Mumble, (if X then Y else Z), Dumble);

the "extra" parens seem helpful, as compared to:

    Some_Procedure (Mumble, if X then Y else Z, Dumble);

Whatever we do, let's not let this minor issue derail the conditional
expressions proposal!  I can live with most of the various alternative proposed
syntaxes.  (And I can add style checks to GNAT if desired...)

After careful consideration of Randy's and others' notes, my strong opinion is:
I don't care.  ;-)

****************************************************************

From: Robert Dewar
Sent: Sunday, June 14, 2009  4:00 PM

> FWIW, I find:
>
>     Some_Procedure ((if X then Y else Z));
>
> in the case where there's just one param, to have mildly annoying "extra"
> parens.  This seems better:
>
>     Some_Procedure (if X then Y else Z);

I agree, but I find the extra parens *more* than mildly annoying; I find them
really annoying, especially in the context of

       pragma Precondition (if X > 0 then Y / X > 3);

> On the other hand, when there are 2 or more params:
>
>     Some_Procedure (Mumble, (if X then Y else Z), Dumble);

I agree

> the "extra" parens seem helpful, as compared to:
>
>     Some_Procedure (Mumble, if X then Y else Z, Dumble);

I agree, but would not legislate this; let it be a matter of appropriate style,
and I am not sure about the case:

      Some_Procedure (Subject => Mumble,
                      Flag    => if X then Y else Z,
                      Data    => Dumble);

Here the extra parens seem unnecessary to me.

> Whatever we do, let's not let this minor issue derail the conditional
> expressions proposal!  I can live with most of the various alternative
> proposed syntaxes.  (And I can add style checks to GNAT if desired...)

Indeed

> After careful consideration of Randy's and other's notes, my strong opinion is:
> I don't care.  ;-)

I think at this stage we would not change GNAT, since we now have a lot of code
using the proposed style with optional parens in the parameter case (of course
we require the -gnatX switch to enable extensions :-)), but if the ARG decides to
always require double parens, we can do this under --pedantic or some style
switch for ACATS purposes :-)

****************************************************************

From: Pascal Leroy
Sent: Monday, June 15, 2009  12:41 AM

> However, I have an action item to explain my discomfort with the idea of not
> requiring parenthesis around conditional expressions (outside of qualified
> expressions). I realize that I won't be able to explain this in any useful
> way verbally; examples are needed. So let me give some examples.

I must confess that I didn't read the entire thread on conditional expressions,
but I share Randy's discomfort.  I am not particularly in love with requiring
parentheses around these expressions, but I believe that there has to be a
closing delimiter somehow; otherwise you end up with awful syntax headaches.
Consider:

A := if B then C else D + 1;

It is quite unclear for the human reader how the "+ 1" binds.  I believe that,
since a conditional_expression is a primary, it actually binds as (if B then C
else D) + 1, which is the worst possible outcome.  But I am not even sure that
my understanding of the grammar is correct.
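
Spelled out with explicit parentheses, the two candidate readings are:

A := (if B then C else (D + 1));   -- "+ 1" as part of the else branch
A := (if B then C else D) + 1;     -- "+ 1" applied to the whole conditional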

Also, without a closing delimiter, you end up with the dreaded "dangling else
problem" of C:

A := if B then if C then D else E;

Again, it is possible to define how the "else" binds (C does it) but that
doesn't help readability.

As much as I hate extra parentheses, I think that they are better than the above
ambiguities.  Either that, or put "end" or "end if" at the end of the
expression.

****************************************************************

From: Tucker Taft
Sent: Monday, June 15, 2009  9:54 AM

Although I favored some special-casing to eliminate duplicate parentheses, I
agree that the value of the feature vastly outweighs any slight aesthetic
concerns with double parentheses.

If we do decide to somehow special case this, I think we do need to worry about
error recovery as Randy has, and we should think about how this generalizes, if
at all, to aggregates and other uses of parentheses (such as the "sequence"
notation we discussed at Brest of "(A | B | C)" ).

Extra spaces or named notation can to some degree lessen the aesthetic concern:

    Foo( (if A then B else C) );

  or

    Foo(Param => (if A then B else C));

****************************************************************

From: Robert Dewar
Sent: Wednesday, June 17, 2009  2:20 PM

> I must confess that I didn't read the entire thread on conditional
> expressions, but I share Randy's discomfort.  I am not particularly in
> love with requiring parentheses around these expressions, but I
> believe that there /has/ to be a closing delimiter somehow, otherwise
> you end up with awful syntax headaches.  Consider:
>
> A := if B then C else D + 1;

This seems an irrelevant concern; no one has even vaguely suggested not
requiring the parens in this case. The cases where we leave out parens are
limited to subprogram parameters, and pragma arguments, where we do have
delimiters, and in particular in the case of one param, the delimiters are an
extra set of parens.

> It is quite unclear for the human reader how the "+ 1" binds.  I
> believe that, since a conditional_expression is a primary, it actually
> binds as (if B then C else D) + 1, which is the worst possible
> outcome.  But I am not even sure that my understanding of the grammar is
> correct.

Again, not to worry, no one thinks this is an issue that should be addressed.

> Also, without a closing delimiter, you end up with the dreaded
> "dangling else problem" of C:
>
> A := if B then if C then D else E;

Again, no one suggests allowing this syntax, and e.g. in a procedure call we
would require

    P (if B then (if C then D else E));

> Again, it is possible to define how the "else" binds (C does it) but
> that doesn't help readability.
>
> As much as I hate extra parentheses, I think that they are better than
> the above ambiguities.  Either that, or put "end" or "end if" at the
> end of the expression.

Well I think what we need is your reaction to the actual proposal, rather than
to one that is obviously unworkable which no one has proposed :-)

****************************************************************

From: Bob Duff
Sent: Wednesday, June 17, 2009  2:40 PM

> This seems an irrelevant concern, no one has even vaguely suggested
> not requiring the parens in this case. The cases where we leave out
> parens are limited to subprogram parameters, and pragma arguments,
> where we do have delimiters, and in particular in the case of one
> param, the delimiters are an extra set of parens.

Are those the only cases?

I find the pragma Precondition example (and other assertions) to be a strong
argument for allowing the extra parens to be omitted.  After all, one of the main
purposes of this conditional exp feature is to express implication, given that
folks won't let me add the "implies" operator to the language.  ;-)

I would be happy with a rule that requires parens in a call if there is more
than one actual param.  Either as a language rule (a syntax rule, written in
English, rather than BNF), or as an implementation-defined style check.

> > It is quite unclear for the human reader how the "+ 1" binds.  I
> > believe that, since a conditional_expression is a primary, it
> > actually binds as (if B then C else D) + 1, which is the worst
> > possible outcome.  But I am not even sure that my understanding of the
> > grammar is correct.
>
> Again, not to worry, no one thinks this is an issue that should be
> addressed.
>
> > Also, without a closing delimiter, you end up with the dreaded
> > "dangling else problem" of C:
> >
> > A := if B then if C then D else E;
>
> Again, no one suggests allowing this syntax, and e.g. in a procedure
> call we would require
>
>     P (if B then (if C then D else E));

or:

    P (if B then (if C then D) else E);

Right?

These are allowed only when the type is boolean, so "else False" is implicit.

****************************************************************

From: Robert Dewar
Sent: Wednesday, June 17, 2009  2:52 PM

> Are those the only cases?

Maybe conversions and qualified expressions? I can't remember now.
No big deal either way for those two cases, I would say, but they are double
paren cases.

> I find the pragma Precondition example (and other assertions) to be a
> strong argument for allowing to omit the extra parens.  After all, one
> of the main purposes of this conditional exp feature is to express
> implication, given that folks won't let me add the "implies" operator
> to the language.  ;-)

I agree

> I would be happy with a rule that requires parens in a call if there
> is more than one actual param.  Either as a language rule (a syntax
> rule, written in English, rather than BNF), or as an implementation-defined
> style check.

Yes, I could live with that, although I find

     Proc1 (x => 3, y => if bla then 3 else 2)

fine

****************************************************************

From: Tucker Taft
Sent: Wednesday, June 17, 2009  2:54 PM

> These are allowed only when the type is boolean, so "else False" is implicit.

Since we want "if A then B" to be equivalent to "A => B", "else true" is
implicit.  "else false" would make it equivalent to "A and then B" which is not
very useful.

If you write it in an assertion or a precondition, it becomes clear that you
mean "else True":

    Assert (if A then B [else True])

    Pre => (if A then B [else True])

****************************************************************

From: Robert Dewar
Sent: Wednesday, June 17, 2009  3:20 PM

Indeed, when you say

       pragma Precondition (if A then B);
       --  Current implemented GNAT syntax, subject to change :-)

you are saying that if A is true then B must hold, otherwise you don't have any
precondition (which means True).

****************************************************************

From: Bob Duff
Sent: Wednesday, June 17, 2009  4:01 PM

> > These are allowed only when the type is boolean, so
> > "else False" is implicit.
          ^^^^^ Typo for True.  Think-o, really.  ;-)

> Since we want "if A then B" to be equivalent to "A => B", "else true"
> is implicit.  "else false" would make it equivalent to "A and then B"
> which is not very useful.

Right.  Thanks for the correction.  I meant "else True".

> If you write it in an assertion or a precondition, it becomes clear
> that you mean "else True":
>
>     Assert (if A then B [else True])
>
>     Pre => (if A then B [else True])

Right.  Robert may use my momentary confusion as evidence that "the 'implies'
op is confusing".  [Note deliberate ambiguous use of "may".]

I'm embarrassed.  I promise you: I didn't flunk my Mathematical Logic course,
and I really do know what "implies" means, deep down in my heart!  ;-)

****************************************************************

From: Bob Duff
Sent: Wednesday, June 17, 2009  4:03 PM

> Yes, I could live with that, although I find
>
>      Proc1 (x => 3, y => if bla then 3 else 2)
>
> fine

Shrug.  I could live with that, or I could live with a requirement (or style
rule) for:

     Proc1 (x => 3, y => (if bla then 3 else 2))

Right now I find the latter preferable, but I could change my mind tomorrow.

****************************************************************

From: Bob Duff
Sent: Wednesday, June 17, 2009  4:05 PM

> Indeed, when you say
>
>        pragma Precondition (if A then B);
>        --  Current implemented GNAT syntax, subject to change :-)
>
> you are saying that if A is true then B must hold, otherwise you don't
> have any precondition (which means True).

Right.  Sorry again for my momentary stupidity!

****************************************************************

From: Robert Dewar
Sent: Wednesday, June 17, 2009  4:19 PM

> Right now I find the latter preferable, but I could change my mind tomorrow.

I have no strong preference on this one, so if people will be more amenable to
the no parens in the single param case if we remove it for the double param
case, that's fine.

In fact I kind of like the rule that you need parens unless the expression is
already enclosed in parens. So that all we ever do is avoid double parens.

****************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, June 17, 2009  4:24 PM

>> Yes, I could live with that, although I find
>>
>>      Proc1 (x => 3, y => if bla then 3 else 2)
>>
>> fine
>
> Shrug.  I could live with that, or I could live with a requirement (or
> style
> rule) for:
>
>      Proc1 (x => 3, y => (if bla then 3 else 2))

My concern with these proposals to remove the parentheses is how hard it will be
to explain, both to users and in the language definition. The current proposal
is "same rule as aggregates", i.e. extra parentheses omitted only for qualified
expressions. This is an easy rule. And yes, if you pass an aggregate as the only
parameter to a procedure, you have double parentheses, and nobody complained
about that.

I have some sympathy for removing the parentheses for pragma Assert, but how can
you /simply/ put that into the language?

****************************************************************

From: Bob Duff
Sent: Wednesday, June 17, 2009  5:16 PM

> My concern with these proposals to remove the parentheses is how hard
> it will be to explain, both to users and in the language definition.

Both are important concerns, but in this case, it's not such a big deal:

The user-view is simple.  Put in parens where needed for readable code.
Then, if the compiler complains about missing parens, put them in.

The lang-def issue is also no big deal.  Nobody but language lawyers reads it, so
it can be arcane, so long as it's precise.

>...The
> current proposal is "same rule as aggregates", i.e. extra parentheses
>omitted only for qualified expressions. This is an easy rule.

Yes.

>...And yes,
> if you pass an aggregate as the only parameter to a procedure, you
>have  double parentheses, and nobody complained about that.

The aggregate case is different, I think.  There, the parens actually do
something (gather up all those components into a bigger thing).

> I have some sympathy to removing the parentheses for pragma assert,
> but how can you /simply/ put that into the language?

Good question.  I think the answer is: make the BNF syntax fairly simple, and
add some English-language rules, saying something like "if there is more than
one arg, any conditional exp must have parens".  Or make it conditional upon
named notation, if that's desired.

****************************************************************

From: Robert Dewar
Sent: Wednesday, June 17, 2009  5:37 PM

> The user-view is simple.  Put in parens where needed for readable code.
> Then, if the compiler complains about missing parens, put them in.

Well this presumes that the compiler is always clever enough to recognize a
conditional expression without parens as such.

I prefer a user-view that says: enclose a conditional expression in parens,
except that you can leave out the parens if the conditional expression is already
enclosed in parens, e.g. when it appears as the sole argument to a subprogram.

****************************************************************

From: Pascal Leroy
Sent: Thursday, June 18, 2009  5:22 PM

> This seems an irrelevant concern, no one has even vaguely suggested
> not requiring the parens in this case. The cases where we leave out
> parens are limited to subprogram parameters, and pragma arguments,
> where we do have delimiters, and in particular in the case of one
> param, the delimiters are an extra set of parens.

Whatever the rules, I would hope that conversions, slices, and array components
are handled similarly to subprogram calls.  That is, if I am able to omit
parentheses in:

X := Fun(if A then B else C);

I should be able to omit them in:

X := Arr(if Backwards then Arr'Last else Arr'First);

Breaking the syntactic similarity between these constructs would be silly.

****************************************************************

From: Robert Dewar
Sent: Friday, June 19, 2009  12:36 PM

> X := Fun(if A then B else C);
>
> I should be able to omit them in:
>
> X := Arr(If Backwards then Arr'Last else Arr'First);
>
> Breaking the syntactic similarity between these constructs would be silly.

I agree entirely. I really lean now to the solution that says you can omit
parens only in the double parens cases, which would of course include the type
conversion and qualified expression cases.

****************************************************************

From: Tucker Taft
Sent: Friday, June 19, 2009  12:46 PM

I presume we are *not* generalizing this to aggregates, as that could create
some real ambiguity.  That is, this is not some general rule that anyplace you
find yourself with two sets of parentheses you can replace them with one.  For
example:

     Fun((A => 3, B => 4))

cannot be replaced by:

     Fun(A => 3, B => 4)

****************************************************************

From: Robert Dewar
Sent: Friday, June 19, 2009  1:02 PM

Right, that obviously won't work for aggregates, so this would be JUST for
conditional expressions.

P.S. A thought about conditional expressions and Implies

If I write

    if A then B end if;

it has the feeling of "nothing if A is false"

If I write

    pragma Precondition (if A then B);

then that same feeling carries over nicely; this really can be thought of as

    if A then pragma Precondition (b) else nothing

and so that's nicely parallel, and in fact I like that much better than

    pragma precondition (A => B);

But if we use this in some other context

    if (if B then C) then ...

I really can't understand that any more, so I think my coding standard would
allow omission of the else only in the pre/postcondition usages.

I can understand

    if B implies C then ...

a bit better, but I still have to think about it, so I still prefer not to
introduce an implies operator.

****************************************************************

From: Tucker Taft
Sent: Friday, June 19, 2009  1:21 PM

> But if we use this in some other context
>
>    if (if B then C) then ...
>
> I really can't understand that any more, so I think my coding standard
> would allow omission of the else only in the pre/postcondition usages.

That seems a bit overly stringent.  I think it is also well defined in pragma
Assert, or in calling a function which is the rough equivalent of Assert.  I
could see disallowing it when used as the condition of an "if" or "while"
statement, or when combined into a more complex expression such as:

    Assert((if B then C) or else (D and E));

> I can understand
>
>    if B implies C then ...
>
> a bit better, but I still have to think about it, so I still prefer
> not to introduce an implies operator.

Agreed.

****************************************************************

From: Robert Dewar
Sent: Friday, June 19, 2009  2:02 PM

> That seems a bit overly stringent.  I think it is also well defined in
> pragma Assert, or in calling a function which is the rough equivalent
> of Assert.

Yes, of course Assert is just like PPC's, and indeed a subprogram acting like
Assert would also meet this general feeling (do nothing in the else case).
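
[Editor's illustration: a hypothetical Assert-like subprogram with the
implicit "else True"; the names are invented for the example.]

    procedure My_Assert (Check : Boolean);
    ...
    My_Assert (if In_Checked_Mode then Invariant_Holds);
    -- equivalent to "else True": nothing to verify otherwise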

****************************************************************

From: Randy Brukardt
Sent: Monday, January 18, 2010  11:12 PM

Last spring we discussed various resolution rules, without ever quite coming to
a conclusion. I think we ran out of energy before actually reaching an answer.
I'd like your thoughts on my current ideas.

The model for conditional expressions is that it is a particularly fancy set of
parentheses. Figuring out the correct rules for that is challenging, in large
part because there are no rules that describe how parentheses work in the normal
case.

The main problem with the rules that were previously proposed is that they
didn't work properly in the face of implicit conversion. That occurred because of
various attempts to talk about the type of the dependent_expressions, especially
to figure out the type of the construct as a whole.

On this last point, I can't find any reason to actually specify such a type. I
can't find any wording that ever says what the type of an aggregate is, for
instance. The only sort of definition that I can find is the nominal subtype,
but that is only defined for names. Memberships do define a "result type" of
Boolean, but "result type" seems to be only defined for operations, operators,
and function calls.

But this seems more to be a hole in the existing wording than a feature. Rules
like 4.3.2(5/2) require knowing something about the expression. (This particular
case is not a problem in the existing language, though: an extension aggregate
ancestor expression can't be a non-qualified aggregate as no type could be
identified. But that wouldn't be true for a conditional expression.)

Even so, that type doesn't need to be defined in the Name Resolution Rules. Nor
do any of the other rules necessarily need to apply to the resolution of a
conditional expression; they can be separate legality rules.

Thus, I would like to propose that the Name Resolution Rule for conditional
expressions be simply:

    The expected type for each dependent_expression of a conditional_expression
    is the expected type of the conditional_expression as a whole.
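
[Editor's illustration of how that rule would propagate an expected type;
Cond, A, and B are placeholders.]

    D : Duration := (if Cond then A else B);
    -- the expected type of the conditional_expression is Duration, so
    -- Duration becomes the expected type for both A and B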

Followed by some Legality Rules that I'll get to in a minute.

Unfortunately, this wording doesn't work for expressions that don't have an
expected type. Those occur in renames and qualified expressions: those are
contexts that don't allow implicit conversion. Qualified expressions use "the
operand shall resolve to be of the type determined by the subtype_mark, or a
universal_type that covers it". (Object renames are similar.) Thus dynamically
tagged expressions aren't allowed, nor are the implicit conversions associated
with anonymous access types.

(Aside: Tucker's new wording for non-memberships [8.6(27.1/3), from AI-102 and
AI-149] which requires convertibility, seems to assume that expressions have
expected types. The rule doesn't need to apply to "shall resolve to" cases
(since the same type is obviously convertible), but there is nothing that makes
that clear. I just added a Ramification note, but I'm not sure if that is
sufficient.)

So we need slightly more complex wording:

    If a conditional_expression has an expected type T, the expected type for
    each dependent_expression of a conditional_expression is T. If a
    conditional_expression shall resolve to a type T, each dependent_expression
    shall resolve to T.

    AARM To Be Honest: T in this rule could be any type in a class of types
    (including the class of all types), or (for the second rule) an anonymous
    access type (for renames) or a universal type covering some type (for
    qualified expressions).
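
[Editor's illustration of the second ("shall resolve to") case, using a
qualified expression; Cond, A, and B are placeholders.]

    V : constant Integer := Integer'(if Cond then A else B);
    -- the qualified_expression gives its operand no expected type; the
    -- operand shall resolve to Integer, so each dependent_expression
    -- (A and B) shall resolve to Integer as well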

Then we need some legality rules:

    Legality Rules

    If the expected type of a conditional_expression is any type in a class of
    types (instead of a particular type), all dependent_expressions of the
    conditional_expression shall have the same type.

    If the expected type of a conditional_expression is a specific tagged type,
    all of the dependent_expressions of the conditional_expression shall be
    dynamically tagged, or none shall be dynamically tagged; the
    conditional_expression is dynamically tagged if all of the
    dependent_expressions are dynamically tagged, and is statically tagged
    otherwise.

[We need this latter rule to be able to enforce 3.9.2(8); we don't want the
parameter of a dispatching call to have mixed static and dynamic tagging. We
could have written this in terms of dispatching calls instead, but 3.9.2(9/1)
makes other uses of dynamically tagged expressions illegal.]

The first rule is needed to avoid expressions with different types in the
different branches. (Note that we don't need to talk about "single types" here,
as that is a property of constructs, not contexts; we're only talking about
contexts here.) For instance, this rule would make the following illegal
(although it would resolve):

    type Some_Boolean is new Boolean;
    function Foo return Some_Boolean;

    Cond : Boolean := ...;
    Var  : Natural := ...;

    if (if Cond then Var > 10 else Foo) then ... -- Illegal by legality rule.

Note that this formulation would make some things ambiguous that would resolve
by looking at other expressions:

    function ">" (A, B : Natural) return Some_Boolean;

    if (if Cond then Var < 10 else Var > 20) then ... -- ">" is ambiguous;
         -- the fact that only one interpretation would pass the legality rule
	 -- is irrelevant.

I think this ambiguity is in keeping with the analogy to parens; the expression
would be ambiguous if written as statements:

    if Cond then
        if (Var < 10) then ...
    else
        if (Var > 20) then ... -- ">" is ambiguous
    end if;

Because of this, I can't get too concerned.

The legality rule does have the effect that if any implicit conversions apply in
such a case, they all have to be the same. I haven't been able to think of any
realistic case where that would matter; usually such expressions would be
ambiguous anyway.


Note that we don't have a similar rule that applies when the expected type is a
particular type. That allows the branches to have expressions that are different
implicit conversions. For instance:

    type T is tagged ...;
    type NT is new T with ...;
    Tobj : T;
    NTObj : NT;

    function Bar return T'Class is
    begin
        return (if Cond then Tobj else NTObj); -- Legal.
        if Cond then -- Legal.
           return Tobj;
        else
           return NTObj;
        end if;
    end Bar;

Again, this preserves the analogy with the equivalent if statement.

In all such cases, the types have to be related or the expression wouldn't have
resolved (the expected types are the same, after all). So I don't think this is
weird, indeed it seems necessary to deal properly with anonymous access types
(and especially the literal "null").

One case that has bothered me is that type conversions fail this principle:

         Var := Natural (if Cond then 10 else 1.5);

The type conversions would be legal separately, but since this isn't a context
with a particular expected type, the types must match and these don't. This is
easily fixed (move the type conversions inside of the conditional expression),
so I don't think this is important.
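
[Editor's note: the suggested fix would look something like this.]

    Var := (if Cond then Natural (10) else Natural (1.5));
    -- both dependent_expressions are now conversions to Natural, so they
    -- have the same type and the assignment's expected type applies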

I'm sure at this point that Steve or Bob will come up with a dozen examples of
why this doesn't work. Fire away. :-)


---

A related issue is build-in-place for conditional expressions.

I believe that the implementation model for objects that are required to be
built-in-place is not too bad. Consider:

    type Some_ImmLim_Type is limited tagged ...
    function Glarch (I : Natural) return Some_ImmLim_Type;

    Obj : Some_ImmLim_Type := (if Cond then Glarch(1) else Glarch(2));

In a case like this, the address of Obj would be passed to the conditional
expression, and given to whatever dependent_expression actually is evaluated.
This might be a bit messy, but it doesn't appear to be intractable.

What bothers me is what happens if the expression has some expressions that
would normally be built in place, and some that would not. Consider:

    type Some_Type is record C : Natural; ...
    function Blob return Some_Type;

    Obj1 : Some_Type := (if Cond then Blob else Some_Type'(C => 1, ...));

If the implementation always builds aggregates in place, and never does that for
function calls, the conflict in calling conventions could be messy to deal with.

I'm not sure if this is a real problem: if some expression uses build-in-place,
just build it into an anonymous object and assign that when needed.

So, for the moment, I'm going to allow build-in-place for conditional
expressions.

---

Thanks for reading this lengthy discussion.

****************************************************************

From: Steve Baird
Sent: Tuesday, January 19, 2010  11:12 PM

> So we need slightly more complex wording:
>
>      If a conditional_expression has an expected type T, the expected
> type for each dependent_expression
>      of a conditional_expression is T. If a conditional_expression
> shall resolve to a type T, each
>      dependent_expression shall resolve to T.

"of a conditional_expression" => "of the conditional_expression".

>
>      AARM To Be Honest: T in this rule could be any type in a class of
> types (including the class of all
>      types), or (for the second rule) an anonymous access type (for
> renames) or a universal type covering
>      some type (for qualified expressions).
>

I don't think that a TBH note is appropriate here (as opposed to explicitly
saying what is really meant), but I agree that it's a judgment call.

> Then we need some legality rules:
>
>      Legality Rules
>
>      If the expected type of a conditional_expression is any type in a
> class of types (instead of a
>      particular type), all dependent_expressions of the
> conditional_expression shall have the same type.


Can we eliminate the "if the expected type" test and replace the above with
something like:

    Given any two dependent_expressions of one conditional_expression,
    either the types of the two expressions shall be the same or
    one shall be a universal_type that covers the other.

And now that we are talking about an arbitrary pair of dependent expressions, we
can continue with more legality checks.

    Furthermore, the two dependent_expressions shall agree with respect
    to the properties of being statically tagged, dynamically tagged, and
    tag indeterminate. A conditional expression is statically tagged
    (respectively: dynamically tagged, tag indeterminate) if and only
    if all of its dependent_expressions are.

Actually, the second sentence probably needs to be moved to 3.9.2 - otherwise we
are (implicitly at least) adding a "notwithstanding what 3.9.2 said about these
definitions" here and that's undesirable.

>      If the expected type of a conditional_expression is a specific
> tagged type, all of the dependent_expressions
>      of the conditional_expression shall be dynamically tagged, or
> none shall be dynamically tagged; the
>      conditional_expression is dynamically tagged if all of the
> dependent_expressions are dynamically tagged, and
>      is statically tagged otherwise.


I think there may be a problem here with something like
    (if cond then statically-tagged-expr else tag-indeterminate-expr)
which is addressed by my proposal above.

>
>
> ---
>
> So, for the moment, I'm going to allow build-in-place for conditional
> expressions.
>

Sounds right.

Incidentally, I agree that it seems odd that we don't have a rule defining the
type of a conditional expression. Even if the language definition doesn't need
such a rule, I think that most compiler writers would find such a rule
reassuring.

If we need such a rule, perhaps something like
    The type of a conditional expression is the unique element of the
    set of dependent_expression types of the conditional_expression
    which does not cover any other element of that set.

, with an accompanying AARM argument for why this definition is well-defined (at
least if no legality rules have been violated).

****************************************************************

From: Randy Brukardt
Sent: Tuesday, January 19, 2010  3:47 PM

Steve Baird writes:
> Randy Brukardt wrote:
> > So we need slightly more complex wording:
> >
> >      If a conditional_expression has an expected type T, the expected
> >      type for each dependent_expression
> >      of a conditional_expression is T. If a conditional_expression
> >      shall resolve to a type T, each
> >      dependent_expression shall resolve to T.
>
> "of a conditional_expression" => "of the conditional_expression".

OK.

> >      AARM To Be Honest: T in this rule could be any type in a class of
> >      types (including the class of all
> >      types), or (for the second rule) an anonymous access type (for
> >      renames) or a universal type covering
> >      some type (for qualified expressions).
> >
>
> I don't think that a TBH note is appropriate here (as opposed to
> explicitly saying what is really meant), but I agree that it's a
> judgment call.

I looked at the alternatives, and they're not pretty. Trying to say this
normatively (and still be understandable) is very difficult. Indeed, I really
wasn't sure that I was even going to include this TBH, because I didn't think it
really makes much sense the way I described it (and I couldn't think of a better
way to put it).

If you have some description that would make sense in normative words (this
certainly is not it), please suggest it.

> > Then we need some legality rules:
> >
> >      Legality Rules
> >
> >      If the expected type of a conditional_expression is any type in a
> >      class of types (instead of a
> >      particular type), all dependent_expressions of the
> >      conditional_expression shall have the same type.
> >
>
> Can we eliminate "if the expected type" test and replace the above
> with something like:
>
>     Given any two dependent_expressions of one conditional_expression,
>     either the types of the two expressions shall be the same or
>     one shall be a universal_type that covers the other.

No, this is wrong. This doesn't allow the implicit conversions for class-wide
and anonymous access types. In those cases, the types of the expressions might
in fact be different types (remember that every anonymous access type is
different), but that is fine so long as the expression is expected to be of a
particular type (as opposed to one of some class of types). (I'd rather say a
"single type", but that term is already used for something else.)

I would have preferred a looser version of this particular rule, such that the
dependent_expressions all are expected to have the same particular expected type
(thus allowing implicit conversions so long as they all convert to the same
type). But that would be a really weird rule, and I can't find any cases where
that rule would really help (neither class-wide nor anonymous access are likely
to be involved in an "any type" scenario).
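
[Editor's note: the class-wide case mentioned above is the earlier Bar
example; in sketch form, reusing those declarations:]

    Obj : T'Class := (if Cond then Tobj else NTObj);
    -- Tobj and NTObj have different specific types (T and NT), neither
    -- the same nor universal, yet both implicitly convert to the
    -- particular expected type T'Class, so this needs to stay legal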

> And now that we are talking about an arbitrary pair of dependent
> expressions, we can continue with more legality checks.
>
>     Furthermore, the two dependent_expressions shall agree with respect
>     to the properties of being statically tagged, dynamically tagged, and
>     tag indeterminate. A conditional expression is statically tagged
>     (respectively: dynamically tagged, tag indeterminate) if and only
>     if all of its dependent_expressions are.

This is also wrong, in that the only interesting property is whether the
expressions are dynamically tagged or something else. None of the legality rules
in 3.9.2(8-9) care about tag indeterminate expressions; they're lumped in with
statically tagged expressions.

> Actually, the second sentence probably needs to be moved to
> 3.9.2 - otherwise we are (implicitly at least) adding a
> "notwithstanding what 3.9.2 said about these definitions"
> here and that's undesirable.

I don't see any way to reasonably say that in 3.9.2, and besides, none of the
rules in 3.9.2 apply as the expression has no specified type. If we actually
tried to define the type of the expression, we would have to do it in such a way
that the taggedness is correct, in which case we'd move the second sentence here
to the paragraph determining that (it would require lots of sentences to figure
that out!).

> >      If the expected type of a conditional_expression is a specific
> >      tagged type, all of the dependent_expressions
> >      of the conditional_expression shall be dynamically tagged, or
> >      none shall be dynamically tagged; the
> >      conditional_expression is dynamically tagged if all of the
> >      dependent_expressions are dynamically tagged, and
> >      is statically tagged otherwise.
> >
> I think there may be a problem here with something like
>     (if cond then statically-tagged-expr else
> tag-indeterminate-expr) which is addressed by my proposal above.

No, this is fine by the legality rules of 3.9.2(8-9), and we would want it to be
treated as statically tagged for the dynamic semantics of dispatching. We
certainly would not want to disallow mixing of tag-indeterminate and statically
tagged cases, because most uses of tag-indeterminate items are constructor
functions where we really want the type named in the return statement (they're
just technically tag-indeterminate).

But we probably ought to mention the case where all of the branches are
tag-indeterminate.

the conditional_expression is dynamically tagged if all of the
dependent_expressions are dynamically tagged, is tag-indeterminate if all of the
dependent_expressions are tag-indeterminate, and is statically tagged otherwise.
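
[Editor's illustration of mixing a statically tagged expression with a
tag-indeterminate constructor call; the declarations are placeholders.]

    type T is tagged null record;
    function Make return T;   -- a primitive of T; a call is tag indeterminate
    Obj : T;                  -- a statically tagged name

    X : T := (if Cond then Obj else Make);
    -- legal under 3.9.2(8-9); the conditional_expression as a whole is
    -- treated as statically tagged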

> >
> > ---
> >
> > So, for the moment, I'm going to allow build-in-place for conditional
> > expressions.
> >
>
> Sounds right.
>
> Incidentally, I agree that it seems odd that we don't have a rule
> defining the type of a conditional expression. Even if the language
> definition doesn't need such a rule, I think that most compiler
> writers would find such a rule reassuring.
>
> If we need such a rule, perhaps something like
>     The type of a conditional expression is the unique element of the
>     set of dependent_expression types of the conditional_expression
>     which does not cover any other element of that set.
>
> , with an accompanying AARM argument for why this definition is
> well-defined (at least if no legality rules have been violated).

This only works because you applied a much too stringent legality rule. For
anonymous access types, the types most likely are unrelated (and some can be
named). If I had to do this, I would use a ladder of rules (and would try to
include the dynamic tagging as well):

The type of the conditional_expression is as follows:
  * If the conditional_expression shall resolve to a type T, then the type of
    the expression is T;
  * If the expected type of the conditional_expression is any type in a class of
    types, then the type of the conditional_expression is the type of the
    dependent_expressions Redundant[which are all the same];
  * If the expected type of the conditional_expression is a specific tagged type
    T and the dependent_expressions are dynamically tagged, then the type of the
    conditional_expression is T'Class;
  * Otherwise, the type of the conditional_expression is the expected type T.

AARM Ramification: Note that in the last case, the dependent expressions may
have other types so long as they can be implicitly converted to T.
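
[Editor's illustration of the third bullet; Proc is assumed to be a
dispatching operation of T, and CW1/CW2 are class-wide objects.]

    procedure Proc (X : T);              -- primitive of T
    CW1, CW2 : T'Class := ...;

    Proc (if Cond then CW1 else CW2);
    -- the expected type is the specific type T; both dependent_expressions
    -- are dynamically tagged, so the conditional_expression would have
    -- type T'Class and the call dispatches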

We could drop the second sentence of the legality rules in this case. But this
is way too many rules for something that doesn't seem to have any language need.
If someone can find a language need, then we should do this, but otherwise I
don't see the point.

****************************************************************

From: Bob Duff
Sent: Monday, February  1, 2010  3:22 PM

Here's my homework for ai05-0147-1.

The goal is to fix the syntax rules.
My understanding is that we all agree on the rules, but we were having trouble
expressing those rules in RM-ese.

I realized we were mixing two approaches:

    - Stick conditional_expression into the BNF grammar
      where appropriate.  We don't like this because
      it involves too much change to the RM.

    - Leave the BNF grammar alone, but describe in English
      where conditional_expression is allowed in place
      of the syntactic category expression.  We don't
      like this for the same reason we don't like
      a similar ugly hack when used for pragmas.

I decided to go entirely with the second approach (so there's only one "don't
like").

We had been using the first approach in primary and qualified_expression.  I got
rid of that -- the wording below eliminates the previously-proposed changes to
those BNF rules; that is, the BNF syntax rules for primary and
qualified_expression remain as in Ada 2005.

This has the "interesting" property that conditional_expression is not
referenced in BNF -- only in the English syntax rule that plugs it in (allows it
to replace expression, sometimes).

Here goes:

[This is version /08 of the AI - Editor.]

****************************************************************

From: Tucker Taft
Sent: Monday, February  1, 2010  4:07 PM

I think we had also talked about allowing an unparenthesized conditional
expression immediately following "=>".  I think we certainly want that in the
aspect specifications.

****************************************************************

From: Edmond Schonberg
Sent: Monday, February  1, 2010  4:23 PM

As long as it is followed by a ')', right? This is a little harder to phrase
properly.

****************************************************************

From: Steve Baird
Sent: Monday, February  1, 2010  4:31 PM

Didn't we explicitly decide against this because of interactions with case
expressions (and we wanted the same parenthesizing rules for conditional
expressions as for case expressions)?

****************************************************************

From: Randy Brukardt
Sent: Monday, February  1, 2010  5:20 PM

That's what it says in the minutes, and is surely what I understood we had
decided. Otherwise, case expressions become ambiguous, which would not be good.
Or we have different rules for everything (aggregates, case exprs, cond exprs,
...), which sounds like a nightmare.

****************************************************************

From: Bob Duff
Sent: Monday, February  1, 2010  3:31 PM

This is mostly nitpicking.  I'd rather you review the proposed wording I sent a
few minutes ago, if you don't have time for both.

AI05-0147-1 says:

> It is possible to write a Boolean expression like
>    Precondition => (not Param_1 >= 0) or Param_2 /= ""
> but this sacrifices a lot of readability, when what is actually meant is
>    Precondition => (if Param_1 >= 0 then Param_2 /= "" else True)

We allow "else True" to be defaulted, so this might be
better:

   Precondition => (if Param_1 >= 0 then Param_2 /= "")
  where "else True" is implied by default.

> Another situation is renaming an object determined at runtime:
>
>      procedure S (A, B : Some_Type) is
>          Working_Object : Some_Type renames
>             (if Some_Func(A) then A else B);
>      begin
>          -- Use Working_Object in a large chunk of code.
>      end S;
>
> In Ada currently, you would have to either duplicate the working code
> (a bad idea if it is large) or make the working code into a subprogram
> (which adds overhead, and would complicate the use of exit and return control structures).

I don't think we plan to allow that (even though it might be nice!).
An object_renaming_decl requires a name, whereas a conditional_expression is
merely an expression.  Nor do we intend to allow:

    (if Some_Func(...) then X else Y) := Toto;

Right?

On the other hand, I suppose this:

    (if Some_Func(...) then X else Y).all := Toto;
                                     ^^^^ will be allowed.

Note redundant "Redundant" in:

> If an expression appears directly inside of another set of
> parentheses, an expression_within_parentheses can be used instead.
> Redundant[This allows the use of a single set of parentheses in such
> cases.]

The following strikes me as overly paternalistic:

> AARM Implementation Note: In order to avoid ambiguity, and preserve
> the ability to do useful error correction, it is recommended that the
> grammar for all of the above productions be modified to reflect this rule. ...

We don't normally presume to tell implementers how best to do syntactic error
correction.

First para of !discussion: "a if" --> "an if".

****************************************************************

From: Randy Brukardt
Sent: Monday, February  1, 2010  5:51 PM

> > It is possible to write a Boolean expression like
> >    Precondition => (not Param_1 >= 0) or Param_2 /= ""
> > but this sacrifices a lot of readability, when what is actually meant is
> >    Precondition => (if Param_1 >= 0 then Param_2 /= "" else True)
>
> We allow "else True" to be defaulted, so this might be
> better:
>
>    Precondition => (if Param_1 >= 0 then Param_2 /= "") where "else
> True" is implied by default.

I wanted to be crystal-clear about the equivalence, and the defaulted "else"
is not crystal-clear to me. Indeed, I have to look it up each time to figure out
what it is.

Perhaps the thing to do is here to give both by adding:

or even more simply
    Precondition => (if Param_1 >= 0 then Param_2 /= "")

to the AI.

> > Another situation is renaming an object determined at runtime:
> >
> >      procedure S (A, B : Some_Type) is
> >          Working_Object : Some_Type renames
> >             (if Some_Func(A) then A else B);
> >      begin
> >          -- Use Working_Object in a large chunk of code.
> >      end S;
> >
> > In Ada currently, you would have to either duplicate the working
> > code (a bad idea if it is large) or make the working code into a
> > subprogram (which adds overhead, and would complicate the use of
> > exit and return control structures).
>
> I don't think we plan to allow that (even though it might be nice!).
> An object_renaming_decl requires a name, whereas a
> conditional_expression is merely an expression.

Oh, sorry. I should have written:

          Working_Object : Some_Type renames
             Some_Type'(if Some_Func(A) then A else B);

because you can make any expression a name by qualifying it (AI05-0003-1).
And, as Tucker pointed out, you can do that in Ada 95-speak, without
AI05-0003-1, too:

          Working_Object : Some_Type renames
             Some_Type(Some_Type'(if Some_Func(A) then A else B));

But there is a weirdness here, because the name represents a constant view (at
least according to AI05-0003-1). So it isn't a 100% replacement for the
renaming.

One wonders if we want a rule to disallow this case (is it more confusing than
helpful??). I think it would have to be a specific rule.

> Nor do we
> intend to allow:
>
>     (if Some_Func(...) then X else Y) := Toto;
>
> Right?

     Some_Type'(if Some_Func(...) then X else Y) := Toto;

is illegal because the LHS is a constant view.

> On the other hand, I suppose this:
>
>     (if Some_Func(...) then X else Y).all := Toto;
>                                      ^^^^ will be allowed.

That won't work (the prefix is not a name), but of course:

     Some_Type'(if Some_Func(...) then X else Y).all := Toto;

would be legal.


> Note redundant "Redundant" in:
>
> > If an expression appears directly inside of another set of
> > parentheses, an expression_within_parentheses can be used instead.
> > Redundant[This allows the use of a single set of parentheses in such
> > cases.]

We've started adding "Redundant" in front of the square brackets, because we also
use square brackets to mean deletion, and we've been confused on multiple
occasions. This is text you replaced anyway, so it doesn't matter.

> The following strikes me as overly paternalistic:
>
> > AARM Implementation Note: In order to avoid ambiguity, and preserve
> > the ability to do useful error correction, it is recommended that
> > the grammar for all of the above productions be modified to reflect
> > this rule. ...
>
> We don't normally presume to tell implementers how best to do
> syntactic error correction.

I think this is a special case. The grammar in the standard is seriously
ambiguous and since a reasonably easy solution is known and available, I think
we should give it and explain why they would like to use it. That's especially
true because my professional opinion was that it couldn't be done (I thought
that inherent ambiguity, especially for "name", would have made it impossible).
This is one case where I was happy to be proved wrong, and I don't want to lose
that information, as other implementers may have similar problems. (And not
everyone reads the AIs, or even knows what they are.)

You could argue that there are existing cases like that in the Ada grammar; I
agree with that but think they too ought to have AARM notes if the published
grammar is problematic. Type_conversion comes to mind (this has to be treated as
a form of name, especially as it is allowed to be *used* as a name). There are
only a few such places in the standard; it's annoying that new Ada implementers
are supposed to figure out these problems and solutions on their own.

Anyway, there may be a better way to put this, but clearly every implementer
will need a solution to avoid the ambiguity (unless they are using an ambiguous
grammar parser, but that seems unlikely). So I think we ought to help. Your
elimination of any grammar at all will make it even harder for readers to make
sense of this. Maybe something like:

AARM Implementation Note: Directly adding conditional_expression in place of
expression causes a very ambiguous grammar. Instead, it is possible to modify
the grammar for all of the above productions to reflect this rule.
...

or something like that.

****************************************************************

Questions? Ask the ACAA Technical Agent