!standard 3.4(27/2)          11-04-28 AI05-0247-1/08
!standard 3.9.2(20.2/2)
!standard 6.1.1(0)
!standard 7.3.2(0)
!reference AI05-0145-2
!class Amendment 11-03-21
!status work item 11-03-21
!status received 11-03-17
!priority High
!difficulty Hard
!subject Preconditions, Postconditions, multiple inheritance, and dispatching calls
!summary
A variety of improvements to the model of contract aspects (preconditions, postconditions, and type invariants) are made.
!proposal
The wording of 13.3.2(19/3) makes it clear that the preconditions and postconditions (along with type_invariants, these are collectively called contract aspects here) that apply to a dispatching call are those of the actual body. However, this rule causes logical problems when contract aspects are added by multiple inheritance.
Consider:
   package Pack1 is
      type T1 is tagged private;
      function Is_Valid (Obj : T1) return Boolean;
      procedure P1 (Obj : in out T1);
      procedure P3 (Obj : in T1)
         with Pre'Class => Is_Valid (Obj);
   private
      ...
   end Pack1;
   package Pack2 is
      type I2 is interface;
      function Is_Green (Obj : I2) return Boolean is abstract;
      procedure P1 (Obj : in out I2) is null
         with Post'Class => Is_Green (Obj);
      procedure P2 (Obj : in I2) is null
         with Pre'Class => Is_Green (Obj);
      procedure P3 (Obj : in I2) is null
         with Pre'Class => Is_Green (Obj);
      function Something_Green return I2 is abstract
         with Post'Class => Is_Green (Something_Green'Result);
   end Pack2;
   with Pack1, Pack2;
   package Pack3 is
      type T3 is new Pack1.T1 and Pack2.I2 with private;
      overriding
      function Is_Green (Obj : T3) return Boolean;
      overriding
      procedure P2 (Obj : in T3);
      overriding
      function Something_Green return T3;
      -- P1 and P3 are inherited from Pack1.P1 and Pack1.P3,
      -- respectively.
   private
      ...
   end Pack3;
   with Pack1, Pack3;
   procedure Main is
      procedure Do_It (Obj : in out Pack3.T3'Class) is
      begin
         Pack3.P1 (Obj); -- (1)
         Pack3.P2 (Obj); -- (2)
      end Do_It;
      O3 : Pack3.T3;
   begin
      Do_It (O3);
   end Main;
The dispatching call to P1 at (1) will call Pack1.P1, inherited from type T1. This P1 has no postcondition, so Is_Green will not be called on exit from P1. However, the following call to P2 at (2) will call Pack3.P2; it will inherit the precondition from the interface. Thus Is_Green will be called on entry to P2. If P1 returned a value for which Is_Green is False (perhaps because the programmer forgot to override P1 for type T3), the precondition on the call to P2 would fail. But that is bizarre; given that the postcondition of P1 matches the precondition of P2 for the type of these calls, it will be totally mysterious to the programmer why the failure occurs.
It has been suggested that the contract aspects of an inherited routine ought to be considered the same as those for an overriding routine declared at the same point. (This would require nothing more than the compiler generating a wrapper at this point.) But that brings up a new problem. Consider the following:
   with Pack2, Pack3;
   procedure Main2 is
      procedure Do_It (Obj : in Pack2.I2'Class) is
      begin
         Pack2.P3 (Obj); -- (3)
      end Do_It;
   begin
      Do_It (Pack3.Something_Green);
   end Main2;
The effective precondition for the call at (3) (assuming that inherited routines have the same contract aspects as an overriding routine) is Is_Green(Obj) or Is_Valid(Obj). The call at (3) has to pass its precondition, as Is_Green(Obj) has to be true (based on the postcondition of Something_Green, which creates the object passed in). However, this means that we know nothing about Is_Valid(Obj): it might in fact be False. And if it is, the (inherited) body of Pack1.P3 is going to be mighty surprised to get an object that violates its precondition (since of course it knows nothing about Is_Green).
What we want to ensure is that the classwide preconditions (Pre'Class) and classwide postconditions (Post'Class) automatically provide appropriate behavior to preserve the Liskov Substitutability Principle (LSP). LSP is important because it provides the foundation for proving anything about a dispatching call. So long as all of the overridings preserve LSP, the properties known for the dispatching call can be assured. On the other hand, if LSP is violated, nothing can be assumed about the properties of the dispatching call without examining the actual bodies called.
LSP is required by various standards (specifically DO-178C), so it is important that Ada have good support for it.
But LSP can get in the way, especially for statically bound calls (where there is no benefit). Thus we do not want to put any restrictions on how specific preconditions and postconditions are used.
LSP requires that preconditions are combined with "or" (weakening), and postconditions are combined with "and" (strengthening), and that they are inherited by all overridings. That is the behavior that we want for class-wide preconditions and postconditions.
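For illustration (this example is not part of the proposal; all names are invented, and package bodies are omitted), when an overriding adds its own class-wide expressions, the expressions inherited from the ancestor still apply, so the effective class-wide precondition is the "or", and the effective class-wide postcondition is the "and", of all applicable expressions:

   package Base is
      type T is tagged null record;
      function A (Obj : T) return Boolean;
      function B (Obj : T) return Boolean;
      procedure Op (Obj : in out T)
         with Pre'Class  => A (Obj),
              Post'Class => B (Obj);
   end Base;

   with Base;
   package Derived is
      type NT is new Base.T with null record;
      function C (Obj : NT) return Boolean;
      function D (Obj : NT) return Boolean;
      overriding
      procedure Op (Obj : in out NT)
         with Pre'Class  => C (Obj),
              Post'Class => D (Obj);
      -- The class-wide precondition that applies to Derived.Op is
      -- A (Obj) or C (Obj); the class-wide postcondition that applies
      -- is B (Obj) and D (Obj).
   end Derived;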
Since class-wide preconditions are combined differently than class-wide postconditions, it makes sense that they may require different rules. In particular, the handling of dispatching calls and of subprogram inheritance needs to be different for the two kinds of contract aspects. As we noted above, getting postconditions to work properly just requires ensuring that, in any case where a postcondition is added by an interface, the added postcondition is reflected in the called routine. Any dispatching call would have to expect the same or a weaker postcondition; thus, it is OK for postconditions to be those of the body (modulo ensuring that the body evaluates any newly added postconditions).
But this does not work for the preconditions of dispatching calls. The only precondition that the writer of a dispatching call can know about is that of the routine being called; that typically is stronger than the precondition of the actual body being called. It makes no sense to enforce any other preconditions at the point of the call of a dispatching routine (and, in particular, to enforce a weaker precondition that might apply to some body). Moreover, since any body necessarily has a weaker precondition, this stronger precondition is the only one that needs to be tested to get consistency. For instance, in the example above, the precondition of the call at (3) is Is_Green(Obj) [as this is a dispatching call to Pack2.P3]; this is stronger than the actual precondition of the (implicit) body Pack3.P3. And it is completely different from the precondition of the (inherited) actual body Pack1.P3 (which is Is_Valid(Obj)).
However, the precondition of the inherited routine seems wrong. It is weaker than the one expected by the invoked body (if the entire inherited precondition is considered), and might even be different as noted above. It turns out that any such precondition (where a precondition is inherited from multiple, different ancestor routines) will always violate LSP unless the inherited preconditions are exactly the same.
This should be obvious if one imagines that the inherited routine was in fact written explicitly. In the case of Pack3.P3, the body would presumably look like:
   procedure P3 (Obj : in T3) is
   begin
      Pack1.P3 (Pack1.T1 (Obj)); -- (4)
   end P3;
The call to Pack1.P3 at (4) has a precondition of Is_Valid(Obj); this precondition has to be evaluated at this call because the precondition of Pack3.P3 is weaker (Is_Green(Obj) or Is_Valid(Obj)). Thus we end up checking the precondition of the actual body in this case, not the weaker precondition of the body Pack3.P3. Indeed, the effect is to "and" the preconditions when called on an object of type I2'Class (since we also have to check the precondition expected by the interface dispatching call). OTOH, a dispatching call using an object of T1'Class does not need to check Is_Green at all.
Since this sort of inheritance always will violate LSP, we don't want the compiler generating such routines automatically. Thus we introduce a rule that such routines require overriding.
Note that if we are using Pre (a specific precondition) rather than Pre'Class (a class-wide precondition), then the preconditions of a dispatching call need not have any real relationship. In that case, the existing rules are as good as any, and they have the advantage of working the same way for access-to-subprogram calls.
There are additional problems. While logically preconditions are evaluated at the point of a call, the rules given in AI05-0145-2 don't allow that for dispatching calls. As noted above, we can fix that by only checking the stronger precondition of the denoted subprogram rather than the actual precondition of the invoked subprogram. But this is weird for Pre (rather than Pre'Class) preconditions; these aren't inherited and aren't necessarily subject to any "weakening" rules.
Indeed, it makes more sense to treat Pre as an assertion that occurs at the start of the body rather than one that is combined with Pre'Class. As such, it provides additional requirements for a particular body. This means that Pre preconditions will have to be generated in the body (or a wrapper) for dispatching calls, while Pre'Class preconditions should only be generated at the call site.
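As a sketch of this model (the names are invented for illustration), the class-wide expression is checked by the caller based on the subprogram named in the call, while the specific expression behaves like an assertion at the start of whichever body is actually invoked:

   package Sketch is
      type T is tagged null record;
      function Classwide_OK (Obj : T) return Boolean;
      function Specific_OK  (Obj : T) return Boolean;
      procedure Q (Obj : in out T)
         with Pre'Class => Classwide_OK (Obj), -- checked at the call site, based on
                                               -- the subprogram denoted by the call
              Pre       => Specific_OK (Obj);  -- checked by the invoked body
                                               -- (or a compiler-generated wrapper)
   end Sketch;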
Finally, the wording given in AI05-0145-2 makes it clear that Pre and Post are not inherited. However, nothing is said about what happens to these when a subprogram is inherited. With the AI05-0145-2 wording, it appears that there is no inheritance of Pre and Post. Clearly, we want to evaluate the Pre and Post values associated with a subprogram when that subprogram is inherited, otherwise the body of the subprogram could be called without the preconditions it is expecting.
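For example (again an invented sketch, not part of the wording), a specific precondition on a parent operation should still be checked when the operation is inherited, because the implicit body of the inherited operation calls the parent operation:

   package Parent is
      type T1 is tagged null record;
      function OK (Obj : T1) return Boolean;
      procedure Op (Obj : in out T1)
         with Pre => OK (Obj); -- specific precondition; the aspect itself is not inherited
   end Parent;

   with Parent;
   package Child is
      type T2 is new Parent.T1 with null record;
      -- Op is inherited. A call on the inherited Op is equivalent to a call on
      -- an implicit body that calls Parent.Op, so OK is still evaluated, as the
      -- body of Parent.Op expects.
   end Child;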
Thus we need to make five changes:
* change the rules for evaluation of Pre preconditions such that they are evaluated separately from any Pre'Class preconditions (and have to be True for a call to succeed);
* change the rules for calls, such that Pre/Post are always evaluated based on the invoked body;
* change the rules for classwide preconditions of dispatching calls, such that the precondition of the (denoted) call is evaluated, not the (possibly) weaker precondition of the invoked body;
* add a rule to require overriding if the class-wide precondition is weaker than that of the inherited body(s);
* change the rules for inherited subprograms such that there always is an implicit body; this implicit body is always equivalent to a call on the concrete body with appropriate postconditions and invariants evaluated for the new body.
We need a similar rule to the last for Type_Invariant'Class.
!wording
Modify 3.4(27) as follows:
For [the execution of] a call on an inherited subprogram, {the execution of the inherited subprogram is equivalent to the execution of a subprogram_body that simply calls} [a call on] the corresponding primitive subprogram of the parent or progenitor type {with its formal parameters as the actual parameters and, if it is a function, returns the value of the call} [is performed;]{. If there is more than one inherited homograph (see 8.3), the call is to the inherited subprogram that is neither abstract nor a null procedure, if any; otherwise, the call has no effect [Redundant: (as it is to a null procedure)]. The}[the] normal conversion of each actual parameter to the subtype of the corresponding formal parameter (see 6.4.1) performs any necessary type conversion as well. If the result type of the inherited subprogram is the derived type, the result of calling the subprogram of the parent or progenitor is converted to the derived type, or in the case of a null extension, extended to the derived type using the equivalent of an extension_aggregate with the original result as the ancestor_part and null record as the record_component_association_list.
AARM Ramification: We define calls on inherited subprograms in terms of an implicit body that calls the inherited subprogram. This has two effects:
* The postconditions (see 6.1.1) and type invariants (see 7.3.2) that are evaluated are those that apply to this implicit body; they may include additional expressions beyond those that apply to the inherited routine if there are multiple ancestors with such routines.
* When multiple subprograms are inherited, this defines which one is actually called.
Note that both cases involve inheritance from more than one ancestor. If there is only a single parent type, this definition is the same as directly calling the inherited routine.
Note that if the inherited subprogram is a null procedure, the call from the implicit subprogram_body has no effect, including the evaluation of preconditions or postconditions. We don't specify a particular null procedure being called, so we don't know which ones to check anyway. In contrast, preconditions and postconditions are checked for calls to other subprograms. Note that other rules ensure that class-wide preconditions do not need to be checked on such calls, as they are always the same or weaker than the class-wide precondition for the call on the inherited subprogram. But specific preconditions and postconditions need to be checked; where this is done depends on the implementation model for these checks (it might or might not need extra code). End AARM Ramification.
Replace 3.9.2(20.3/3) [as replaced by AI05-0126-1 and AI05-0197-1]:
* otherwise, the action is as defined in 3.4 for a call on an
inherited subprogram.
Replace the additional AARM note (added by AI05-0197-1) with:
For the last bullet, if there are multiple corresponding operations for the parent and progenitors, there has to be at least one that is not abstract (if the inherited operations were all abstract, there would have to be an explicit overriding of the operation, and then the first bullet would apply). In that case, 3.4 defines that the non-null, non-abstract one is called (there can only be one of these); otherwise, an arbitrary null routine is called.
Replace 13.3.2 [from AI05-0145-2] with the following:
[Editor's note: Parts of this wording, including all of the Static Semantics, are unchanged. We put the entire wording here to make it easy to see what the entire rules are.]
[Editor's note: Note that we're moving this to 6.1.1 by popular demand.]
6.1.1 Preconditions and Postconditions
For a subprogram or entry, the following language-defined aspects may be specified with an aspect_specification (see 13.3.1):
Pre
This aspect defines a specific precondition for a callable entity; it shall be specified by an expression, called a *specific precondition expression*. If not specified for an entity, the specific precondition expression for the entity is the enumeration literal True.
Pre'Class
This aspect defines a class-wide precondition for a callable entity and its descendants; it shall be specified by an expression, called a class-wide precondition expression. If not specified for an entity, the class-wide precondition expression for the entity is the enumeration literal True.
Post
This aspect defines a specific postcondition for a callable entity; it shall be specified by an expression, called a *specific postcondition expression*. If not specified for an entity, the specific postcondition expression for the entity is the enumeration literal True.
Post'Class
This aspect defines a class-wide postcondition for a callable entity and its descendants; it shall be specified by an expression, called a class-wide postcondition expression. If not specified for an entity, the class-wide postcondition expression for the entity is the enumeration literal True.
Name Resolution Rules
The expected type for any precondition or postcondition expression is any boolean type.
Within the expression for a Pre'Class or Post'Class aspect for a primitive subprogram of a tagged type T, a name that denotes a formal parameter of type T is interpreted as having type T'Class. Similarly, a name that denotes a formal access parameter of type access-to-T is interpreted as having type access-to-T'Class. [Redundant: This ensures the expression is well-defined for a primitive subprogram of a type descended from T.]
For an attribute_reference with attribute_designator Old, if the attribute reference has an expected type or shall resolve to a given type, the same applies to the prefix; otherwise the prefix shall be resolved independently of context.
Legality Rules
The Pre or Post aspect shall not be specified for an abstract subprogram. [Redundant: Only the Pre'Class and Post'Class aspects may be specified for such a subprogram.]
If a type has an inherited subprogram P implicitly declared, the corresponding primitive subprogram of the parent or progenitor is neither null nor abstract, a homograph (see 8.3) of P is declared and inherited from an ancestor T2 that is different from the ancestor T1 from which P is inherited, and at least one class-wide precondition that applies to the subprogram inherited from T2 does not fully conform to any class-wide precondition that applies to P, then:
* If the type is abstract, the implicitly declared subprogram is *abstract*.
* Otherwise, the subprogram *requires overriding* and shall be overridden with a nonabstract subprogram.
[Editor's note: This is very similar to 3.9.3(4-6/3). I tried to combine that into that text, but the result is just too confusing.]
AARM Reason: Such an inherited subprogram would necessarily violate the Liskov Substitutability Principle (LSP) if called via a dispatching call from an ancestor other than the one that provides the called body. In such a case, the class-wide precondition of the actual body is stronger than the class-wide precondition of the ancestor. If we did not enforce that precondition for the body, the body could be called when the precondition it knows about is False - such "counterfeiting" of preconditions has to be avoided. But enforcing the precondition violates LSP. We do not want the language to be implicitly creating bodies that violate LSP; the programmer can still write an explicit body that calls the appropriate parent subprogram. In that case, the violation of LSP is explicitly in the code and obvious to code reviewers (both human and automated).
We have to say that the subprogram is abstract for an abstract type in this case, so that the next concrete type has to override it for the reasons above. Otherwise, inserting an extra level of abstract types would eliminate the requirement to override (as there is only one declared operation for the concrete type), and that would be bad for the reasons given above. End AARM Reason.
AARM Ramification: This requires the set of class-wide preconditions that apply to each declared homograph to be identical. Since full conformance requires each name to denote the same declaration, it is unlikely that independently declared preconditions would conform. This rule does allow "diamond inheritance" of preconditions, and of course the case where no preconditions apply at all trivially matches.
We considered adopting a rule that would allow examples where the expressions would conform after all inheritance has been applied, but such a rule is complex and such examples are not likely to be common in practice. Since the penalty here is just that an explicit overriding is required, the complexity is not worth it. End AARM Ramification. [Editor's note: There is more on this topic in the !discussion.]
If a renaming of a subprogram or entry S1 overrides an inherited subprogram S2, then the overriding is illegal unless each class-wide precondition that applies to S1 fully conforms to some class-wide precondition that applies to S2 and each class-wide precondition that applies to S2 fully conforms to some class-wide precondition that applies to S1.
AARM Reason: Such an overriding subprogram would violate LSP, as the precondition of S1 would usually be different (and thus stronger) than the one known to a dispatching call through an ancestor routine of S2. This is always OK if the preconditions match, so we always allow that.
AARM Ramification: This only applies to primitives of tagged types; other routines cannot have class-wide preconditions.
[Editor's note 1: We don't try to modify conformance, as these two are the only checks that we want. We don't want to make other overridings illegal; we don't want any effect on 'Access; we don't want anything to change for generic formal subprograms or renamings other than the above.]
[Editor's note 2: These rules have nothing to do with postconditions or invariants. See below.]
Static Semantics
If a Pre'Class or Post'Class aspect is specified for a primitive subprogram of a tagged type T, then the associated expression also applies to the corresponding primitive subprogram of each descendant of T.
For a prefix X that denotes an object of a nonlimited type, the following attribute is defined:
X'Old Denotes the value of X at the beginning of the execution of the subprogram or entry. For each X'Old in a postcondition expression that is enabled, a constant is implicitly declared at the beginning of the subprogram or entry, of the type of X, initialized to X. The value of X'Old in the postcondition expression is the value of this constant. The type of X'Old is the type of X. These implicit declarations occur in an arbitrary order.
Use of this attribute is allowed only within a postcondition expression.
AARM annotation:
The prefix X can be any nonlimited object that obeys the syntax for prefix. Useful cases are: the name of a formal parameter of mode '[in] out', the name of a global variable updated by the subprogram, a function call passing those as parameters, a subcomponent of those things, etc.
A qualified expression can be used to make an arbitrary expression into a valid prefix, so T'(X + Y)'Old is legal, even though (X + Y)'Old is not. The value being saved here is the sum of X and Y (a function result is an object). Of course, in this case "+"(X, Y)'Old is also legal, but the qualified expression is arguably more readable.
Note that F(X)'Old and F(X'Old) are not necessarily equal. The former calls F(X) and saves that value for later use during the postcondition. The latter saves the value of X, and during the postcondition, passes that saved value to F. In most cases, the former is what one wants.
If X has controlled parts, adjustment and finalization are implied by the implicit constant declaration.
If postconditions are disabled, we want the compiler to avoid any overhead associated with saving 'Old values.
'Old makes no sense for limited types, because its implementation involves copying. It might make semantic sense to allow build-in-place, but it's not worth the trouble.
end AARM annotation.
For a prefix F that denotes a function declaration, the following attribute is defined:
F'Result
Within a postcondition expression for function F, denotes the result object of the function. The type of this attribute is that of the function result except within a Post'Class postcondition expression for a function with a controlling result or with a controlling access result. For a controlling result, the type of the attribute is T'Class, where T is the function result type. For a controlling access result, the type of the attribute is an anonymous access type whose designated type is T'Class, where T is the designated type of the function result type. Use of this attribute is allowed only within a postcondition expression for F.
Dynamic Semantics
If the Assertion_Policy in effect at the point of a subprogram or entry declaration is Check, then upon a call of the subprogram or entry, after evaluating any actual parameters, precondition checks are performed as follows:
* The specific precondition check begins with the evaluation of the specific precondition expression that applies to the subprogram or entry; if the expression evaluates to False, Assertions.Assertion_Error is raised.
* The class-wide precondition check begins with the evaluation of any class-wide precondition expressions that apply to the subprogram or entry. If and only if all the class-wide precondition expressions evaluate to False, Assertions.Assertion_Error is raised.
AARM Ramification: The class-wide precondition expressions of the entity itself as well as those of any parent or progenitor operations are evaluated, as these apply to all descendants.
The order of performing the checks is not specified, and if any of the class-wide precondition expressions evaluate to True, it is not specified whether the other class-wide precondition expressions are evaluated. It is not specified whether any check for elaboration of the subprogram body is performed before or after the precondition checks. It is not specified whether in a call on a protected operation, the checks are performed before or after starting the protected action. For a task or protected entry call, the checks are performed prior to checking whether the entry is open.
If the Assertion_Policy in effect at the point of a subprogram or entry declaration is Check, then upon successful return from a call of the subprogram or entry, prior to copying back any by-copy in out or out parameters, the postcondition check is performed. This consists of the evaluation of the specific and class-wide postcondition expressions that apply to the subprogram or entry. If any of the postcondition expressions evaluate to False, then Ada.Assertions.Assertion_Error is raised. The order of performing the checks is not specified, and if any postcondition expression evaluates to False, it is not specified whether any other postcondition expressions are evaluated. It is not specified whether any constraint checks associated with copying back in out or out parameters are performed before or after the postcondition checks.
AARM Ramification: The class-wide postcondition expressions of the entity itself as well as those of any parent or progenitor operations are evaluated, as these apply to all descendants; in contrast, only the specific postcondition of the entity applies. Postconditions can always be evaluated inside the invoked body.
If a precondition or postcondition check fails, the exception is raised at the point of the call. [Redundant: The exception cannot be handled inside the called subprogram.]
For any subprogram or entry call (including dispatching calls), the specific precondition check and the postcondition check that are performed are determined by those of the subprogram or entry actually invoked.
AARM Ramification: This applies to access-to-subprogram calls, dispatching calls, and to statically bound calls. We need this rule to cover statically bound calls as well, as specific pre- and postconditions are not inherited, but the subprogram might be. [Editor's note: Wording to define aspect inheritance only in non-overriding situations seems to be way too complex -- and we need this rule for dispatching and access-to-subprogram calls anyway.]
For an inherited subprogram, we depend on the definition in 3.4 to provide an implicit body that may have additional postconditions beyond those that apply to the original subprogram. This may happen if the operation is inherited from a parent and a progenitor, and both the parent operation and progenitor operation have one or more associated postcondition expressions. End AARM Ramification.
For a call via an access-to-subprogram value, the class-wide precondition check is determined by the subprogram or entry actually invoked. In contrast, the class-wide precondition check for other calls to a subprogram or entry consists solely of checking the class-wide preconditions that apply to the denoted entity (not necessarily the one that is invoked).
AARM Ramification: For a dispatching call, we are talking about the Pre'Class(es) that apply to the subprogram that the dispatching call is resolving to, not the Pre'Class(es) for the subprogram that is ultimately dispatched to. The class-wide precondition of the resolved call is necessarily the same or stronger than that of the invoked call. For a statically bound call, these are the same; for an access-to-subprogram call (which has no class-wide preconditions of its own), we check the class-wide preconditions of the invoked routine.
AARM Implementation Note: These rules imply that logically, class-wide preconditions of routines must be checked at the point of call (other than for access-to-subprogram calls, for which they must be checked in the body, probably using a wrapper). Specific preconditions of subprograms that might be called via a dispatching call or via an access-to-subprogram value must be checked inside the subprogram body. In contrast, postconditions always need to be checked inside the body of the routine. Of course, an implementation can evaluate all of these at the point of call for statically bound calls if the implementation uses wrappers for dispatching bodies and for 'Access values. End AARM Implementation Note.
If the Assertion_Policy in effect at the point of a subprogram or entry declaration is Ignore, then no precondition or postcondition check is performed on a call on that subprogram or entry. If the Assertion_Policy in effect at the point of a subprogram or entry declaration is Check, then preconditions and postconditions are considered to be enabled for that subprogram or entry.
Bounded Errors
It is a bounded error to invoke a potentially blocking operation (see 9.5.1) during the evaluation of a precondition or postcondition expression. If the bounded error is detected, Program_Error is raised. If not detected, execution proceeds normally, but if it is invoked within a protected action, it might result in deadlock or a (nested) protected action.
NOTE: A precondition is checked just before the call. If another task can change any value that the precondition expression depends on, the precondition may not hold within the subprogram or entry body.
Move 13.3.3 [from AI05-0146-1] to clause 7.3.2.
!discussion
This issue originally came up because null procedures are considered the same when inherited, so a dispatching call may call an arbitrary one. That, however, does not work if the null procedures have different preconditions, postconditions, or type_invariants -- in that case, the routines are distinguishable. The solution here has to address this case as well as the more general problems of inheritance.
In general, we want to be able to generate preconditions at call sites, since that is the only way that they can be eliminated by optimization or other forms of proof. Similarly, we want to be able to generate postconditions inside bodies, since that is the only way that they can be eliminated by optimization or other forms of proof. (But we need to be able to effectively generate them at the call site in order to use them as known values for the preconditions of following calls.)
In any case, the language defines all of these checks as being made at the call site. Doing so elsewhere is in general an optimization.
The change to separate Pre from Pre'Class introduces additional problems with renames and 'Access. Since renaming can change the dispatching status of a subprogram (from dispatching to statically bound, or the reverse), it introduces some interesting problems. This is fixed by making such overridings illegal.
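As a hypothetical illustration of that rule (the names below are invented), an overriding by renaming is illegal when the class-wide preconditions that apply to the renamed subprogram and to the inherited subprogram do not fully conform:

   package P is
      type T is tagged null record;
      function Valid (Obj : T) return Boolean;
      procedure Op (Obj : in out T)
         with Pre'Class => Valid (Obj);
   end P;

   with P;
   package Q is
      type NT is new P.T with null record;
      procedure Other (Obj : in out NT); -- no class-wide precondition
      overriding
      procedure Op (Obj : in out NT) renames Other;
      -- Illegal: the class-wide precondition Valid (Obj) applies to the
      -- inherited Op, but no fully conforming precondition applies to Other.
   end Q;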
We also would like access-to-subprogram calls to work the same way as dispatching calls. But that only works if the designated subprogram does not have any Pre'Class (as we don't have any such preconditions on access-to-subprogram types). We fix that by treating a call through an 'Access value differently from a similar dispatching call; all of the preconditions are evaluated based on the invoked subprogram.
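For instance (again an invented sketch), for a call through an access-to-subprogram value, the class-wide precondition that is checked is the one that applies to the invoked subprogram:

   package Windows is
      type Win is tagged null record;
      function Valid (W : Win) return Boolean;
      procedure Show (W : in out Win)
         with Pre'Class => Valid (W);
   end Windows;

   with Windows;
   procedure Indirect is
      type Show_Ptr is access procedure (W : in out Windows.Win);
      Ptr : constant Show_Ptr := Windows.Show'Access;
      W   : Windows.Win;
   begin
      Ptr (W);
      -- The access-to-subprogram type has no Pre'Class of its own, so the
      -- class-wide precondition checked for this call is that of the invoked
      -- subprogram, Windows.Show (that is, Valid (W)).
   end Indirect;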
The requirement that preconditions fully conform stems from the need to ensure that the preconditions of an inherited body are checked as the body expects. In particular, added class-wide preconditions can weaken the precondition, but it is unlikely that an existing body is prepared for such weakening. Since actually enforcing the preconditions would violate LSP, we instead require the subprogram to be overridden in such cases. That way, any violation of LSP is an explicit decision by the programmer and not something that happens implicitly by the language.
Note that the requirement is that the class-wide preconditions all fully conform. This is a tough requirement that cannot be met for independently derived preconditions, even if they appear identical. For example:
   type Int1 is interface;
   function Is_Good (P : Int1) return Boolean is abstract;
   procedure Foo (Param : Int1) is null
      with Pre'Class => Is_Good (Param);

   type Int2 is interface;
   function Is_Good (P : Int2) return Boolean is abstract;
   procedure Foo (Param : Int2) is null
      with Pre'Class => Is_Good (Param);
   type Int3 is new Int1 and Int2 with null record;
Full conformance (see 6.3.1(21)) requires that each direct_name in the two expressions denote the same declaration. But in this example, neither Param nor Is_Good denotes the same declaration in the two preconditions. And these preconditions are what is inherited (these are class-wide expressions and they are resolved only once). Thus, type Int3 is illegal, as procedure Foo requires overriding.
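The declaration of Int3 could be made legal by adding an explicit overriding; for illustration (a sketch, not part of the original example):

   type Int3 is new Int1 and Int2 with null record;
   overriding
   procedure Foo (Param : Int3) is null;
   -- With the explicit overriding, the full-conformance requirement on the
   -- inherited class-wide preconditions no longer applies; both inherited
   -- Is_Good expressions apply to this Foo and are combined with "or".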
-----------------
We move preconditions and postconditions directly after subprogram specifications (to clause 6.1.1), as they are strongly associated with subprograms. It doesn't make sense for this material to be defined in Chapter 13, especially when the Inline and No_Return aspects are defined in Chapter 6 and preconditions and postconditions are arguably more important than those aspects.
Similarly, we move type invariants to clause 7.3.2, directly after private types to which they apply.
!corrigendum 6.1.1(0)
Insert new clause:
Force a conflict; the real text is found in the conflict file.
!ACATS test
No additional ACATS test is needed here.
!ASIS
No impact on ASIS.
!appendix

From: Steve Baird
Sent: Thursday, March 3, 2011  3:28 PM

[This thread was split from AI05-0197-1]

Problem #2:

Making an arbitrary choice among null procedures assumes that they are
interchangeable, and leads to problems if they are not.

Consider the following example:

    declare
       package Pkg1 is
           type Ifc is interface;
           procedure Op (X : in out Ifc) is null;
       end Pkg1;

       package Pkg2 is
          type T is tagged null record
            with Type_Invariant => Is_Valid (T);
          procedure Op (X : in out T) is null;
          function Is_Valid (X : T) return Boolean;
       end Pkg2;
       package body Pkg2 is ... end Pkg2;

       package Pkg3 is
          type D is new Pkg1.T and Pkg2.Ifc with null record;
       end Pkg3;
    begin ...; end;

Does a dispatching call to Pkg3.Op where the tag of the controlling operand is
Pkg3.D'Tag result in a call to Is_Valid?

It seems like it depends on the "arbitrary" choice mentioned in this AI's new
wording for 3.9.2(20.2/2), which defines the dynamic semantics of such a
dispatching call:

   * otherwise, the action is the same as the action for the
     corresponding operation of the parent type or progenitor
     type from which the operation was inherited. {If there is
     more than one such corresponding operation, the action is
     that for the operation that is not a null procedure, if any;
     otherwise, the action is that of an arbitrary one of the
     operations.}

If we flip a coin to decide which from among two candidates is the
"corresponding operation ... from which the operation was inherited", and if
exactly one of these two candidates includes a call to Is_Valid in its dynamic
semantics, then it seems like we have a problem.

Both here and when 8.3(12.3/2) says "one is chosen arbitrarily", we are relying
on the premise that syntactically null procedures with appropriately conformant
profiles are interchangeable with respect to dynamic semantics.

One approach to this problem (which Randy can explain his views on in a separate
message) would involve two steps:

    1) In these two arbitrary-choice situations (3.9.2 and 8.3),
       we add a preference rule preferring operations inherited
       from a non-interface type over operations inherited from
       an interface type.

    2) We take whatever steps are needed (possibly none?)
       in order to ensure that null procedures which are primitive ops
       of interface types really are interchangeable (e.g., we
       already disallow pre and post conditions for null procedures).

This issue needs more work.

****************************************************************

From: Randy Brukardt
Sent: Friday, March 4, 2011  1:15 AM

...
> Both here and when 8.3(12.3/2) says "one is chosen arbitrarily", we
> are relying on the premise that syntactically null procedures with
> appropriately conformant profiles are interchangeable with respect to
> dynamic semantics.
>
> One approach to this problem (which Randy can explain his views on in
> a separate message) would involve two steps:
>
>     1) In these two arbitrary-choice situations (3.9.2 and 8.3),
>        we add a preference rule preferring operations inherited
>        from a non-interface type over operations inherited from
>        an interface type.
>
>     2) We take whatever steps are needed (possibly none?)
>        in order to ensure that null procedures which are primitive ops
>        of interface types really are interchangeable (e.g., we
>        already disallow pre and post conditions for null procedures).
>
> This issue needs more work.

I don't really have any views specifically on this problem, but the discussion
of it has gotten me very concerned that there is something fundamentally wrong
with the way we handle Pre/Post/Type_Invariant aspects on dispatching calls.
I'll call these "contract aspects" in the discussion below; most of the points
apply to all three of them.

Tucker has explained that the contract aspects that apply to a particular
subprogram body are always determined when that subprogram is declared. In
particular, inheritance never changes the contract aspects of a subprogram body.

That clearly works well on the body side of the contract; if the contracts
changed with inheritance, it would be much harder to figure out what properties
can be depended upon (or have to be preserved in the results).

OTOH, it doesn't seem to work as well on the call side of the contract. The
current rules say that the contracts depend on the actual body executed; the
implementation is likely to be a wrapper around the "regular" body (if the
contracts are normally enforced at the call site).

This has some unfortunate effects when interfaces are "added" into an existing
type hierarchy. For example, consider:

    package P1 is
       type T1 is tagged private;
       function Is_Valid (Obj : T1) return Boolean;
       procedure P1 (Obj : in out T1);
    private
       ...
    end P1;

    package P2 is
       type I2 is interface;
       function Is_Wobbly (Obj : I2) return Boolean is abstract;
       procedure P1 (Obj : in out I2) is null
           with Post'Class => Is_Wobbly (I2);
       procedure P2 (Obj : in I2) is null
           with Pre'Class => Is_Wobbly (I2);
    end P2;

    with P1, P2;
    package P3 is
       type T3 is new P1.T1 and P2.I2 with private;
       overriding
       function Is_Wobbly (Obj : T3) return Boolean;
       overriding
       procedure P2 (Obj : in T3);
    private
       ...
    end P3;

    with P1, P3;
    procedure Main is
        procedure Do_It (Obj : in P3.T3'Class) is
        begin
           Obj.P1; -- (1)
           Obj.P2; -- (2)
        end Do_It;
        O3 : P3.T3;
    begin
        Do_It (O3);
    end Main;

The dispatching call to P1 at (1) will call P1.P1, inherited from type T1.
This P1 has no postcondition, so Is_Wobbly will not be called on exit from P1.
However, the following call to P2 at (2) will call P3.P2; it will inherit the
precondition from the interface. Thus Is_Wobbly will be called on entrance to
P2. If P1 returned a value for which Is_Wobbly is False (perhaps because the
programmer forgot to override P1 for type T3), the *precondition* on the call to
P2 would fail. But that is bizarre; given the postcondition matches the
precondition for the type of the calls, it will be totally mysterious to the
programmer as to why the failure is there. The only way to reason about these
calls is to know the details of the contracts for all of the various possible
routines involved -- that way seems to lead to madness! (And of course the code
will most likely not be this easy to analyze).

It is of course possible to create similar examples with the Type_Invariant
aspect.

These examples bother me so much simply because here we have a contract that we
appear to be promising to enforce, yet we aren't enforcing it. Lying to the
client (whether or not there is some logical reason for it) does not seem like a
good policy.

This is just one effect of these rules. More generally, these rules prevent any
analysis of the contracts of a dispatching call -- by the programmer, by the
compiler, or by any tool that doesn't have access to the complete source code of
the program. That's because even if the entire program obeys a programming style
rule to avoid the "funny" cases, a subprogram that hasn't even been written yet
can add additional contracts (via any of the class-wide or specific contract
aspects) -- and those would render any analysis wrong.

One side effect of this that generating the contract aspects at the point of the
call is not possible in general. That of course means that the compiler cannot
eliminate checks of those aspects when they are not needed, nor point out when
they are guaranteed to fail. It also means that wrappers are needed for the
dispatching versions of routines if the compiler generates such aspects at the
call site for statically bound calls.

The rules for combining contracts (especially preconditions) also seem confusing
at best. Adding a precondition to an overriding routine causes a *weaker*
precondition; the original class-wide precondition no longer needs to be true.
That does not seem helpful and surely makes analysis harder.

All of these rules seem to vary from the usual model of dispatching calls. For
instance, constraints can be thought of a weaker form of contract aspects (as
preconditions involving only a single parameter, for instance). Ada 95 required
constraints to match exactly (3.9.2(10/2) requires subtype conformance) for
routines that override inherited routines (that is, possible dispatching
targets). (Ada 2012 will of course extend this to subtype predicates.) Ada also
has similar rules for other properties (including Convention, also in
3.9.2(10/2), and the No_Return aspect (6.5.1(6/2))). It would have had similar
rules for global in/global out annotations had those been defined, and most
likely would have similar rules for exception contracts as well.

So why should contract aspects (Pre/Post/Type_Invariant) be different? If these
are considered part of the profile, with suitable rules all of the bizarre cases
go away.

The easiest rule would be to require (full) conformance of contract aspects that
apply to a subprogram. Effectively, only one (class-wide) precondition could
apply to a dispatching call - the same precondition would apply to all of the
subprograms that could be dispatched to. This sounds very limiting, but as
Tucker has pointed out, such a precondition could be made up of other
dispatching calls. (Sorry Tucker for using your argument against you... :-)

If we were to make such aspects part of the profile, then it would seem that we
should have similar requirements on access-to-subprogram types. That would not
be strictly necessary, but note that the same sorts of issues apply to calls
through access-to-subprogram types. These bother me far less (especially as
wrappers might be required to deal with access-before-elaboration checks), but
consistency seems valuable.

We could use other somewhat weaker rules. One thing that bothers me about the
"absolute" rule is that a lot of potentially dispatching routines are never
called that way in practice. Such routines could have conventional specific
contract aspects without problems.

One could even imagine using Tucker's original rules (although that leaves
Steve's problem unsolved) so long as there is the possibility of compile-time
enforcement of a stricter rule so that at least "well-structured" dispatching
calls could have their properties known at the call site.

For me, the most important part of contract aspects is that they are known at
the call site. This opens up the possibility of the compiler eliminating the
checks (and even more importantly, warning when the contracts are known to fail)
without having to know anything about the implementation of the subprogram.
Denying this possibility to dispatching calls makes such calls a second-class
citizen in the Ada universe, and reduces the contract aspects to little more
than fancy Assert pragmas (only Type_Invariant does much automatically).

Thus I think we need to reconsider this area. (As a side effect, such
reconsideration may very well eliminate the problem that Steve was trying to
fix.)

****************************************************************

From: Tucker Taft
Sent: Friday, March 4, 2011  9:43 AM

After thinking more about this, I now agree with Randy that we have a problem.
It arises whenever an operation inherited from an interface is overridden with
an operation inherited from some other type.  My conclusion is that it may be
necessary for a new version of the inherited routine to be generated, so that
the compiler can insert the additional checks implied by Pre'Class, Post'Class,
etc. aspects inherited from the interface.

This breaks the principle that you can always just reuse the code when you
inherit an operation, but I believe in the presence of multiple inheritance of
class-wide aspects, we don't really have a choice.

****************************************************************

From: Randy Brukardt
Sent: Friday, March 4, 2011  9:46 PM

I'm happy to hear that; I'd hate to think that I was making less sense than
Charlie Sheen...

However, I don't think this quite works, because of the "weakening" rules for
preconditions. The new precondition inherited from the interface could
"counterfeit" the precondition on the original body, leading to a scenario where
the body is called without problem with parameters that violate the precondition
that it knows about. That seems pretty nasty.

To give a specific example:

    package P1 is
       type T1 is tagged private;
       function Is_Valid (Obj : T1) return Boolean;
       procedure P (Obj : in out T1)
          with Pre'Class => Is_Valid (Obj);
    private
       ...
    end P1;

    package body P1 is
       procedure P (Obj : in out T1) is
       begin
          -- Code here that assume Is_Valid is True for Obj.
       end P;
    end P1;


    package P2 is
       type I2 is interface;
       function Is_Wobbly (Obj : I2) return Boolean is abstract;
       procedure P (Obj : in out I2) is null
           with Pre'Class => Is_Wobbly (I2);
       function Something_Wobbly return I2
           with Post'Class => Is_Wobbly (Something_Wobbly'Result);
    end P2;

    with P1, P2;
    package P3 is
       type T3 is new P1.T1 and P2.I2 with private;
       overriding
       function Is_Wobbly (Obj : T3) return Boolean;
       overriding
       function Something_Wobbly return T3;
    private
       ...
    end P3;

    with P1, P3;
    procedure Main is
        procedure Do_It (Obj : in P3.T3'Class) is
        begin
           Obj.P; -- (1)
        end Do_It;
    begin
        Do_It (P3.Something_Wobbly);
    end Main;

Using the new semantics Tucker suggested, the call at (1) has to pass its
precondition, as Is_Wobbly(Obj) has to be true (based on the postcondition of
Something_Wobbly). However, since preconditions are effectively combined with
"or", Is_Valid(Obj) might in fact be False. And if it is, the body of P is going
to be mighty surprised to get an object that violates its precondition!

(I don't think this problem happens for Post or Type_Invariant, as they are
"anded", nor would it happen if the precondition was described as a
Dynamic_Predicate.)

Also note that this "weakening" means that even Pre'Class that necessarily must
apply to all calls cannot be generated at the call-site (because of the possible
need to "or" it with some other precondition) -- which eliminates the ability of
the compiler to do much in the way of checking elimination. (Again, this is not
a problem with the other aspects.)

It seems that "weakening" doesn't apply to multiple inheritance as much as it
does to the "primary" inheritance. But that doesn't seem to lead to a rule that
makes much sense, as it would seem to treat progenitors different than interface
parents (something we've avoided in the past).

The easy fix would be to combine the preconditions with "and". But I realize
there are logical issues with that on the call side of the equation. It strikes
me that there are logical issues on one side or the other whenever contract
aspects are combined; they only make sense if there is only one.

Thus a radical solution would be to require that exactly one precondition apply
to each subprogram (either Pre or Pre'Class, possibly inherited). To support
combining and "weakening", we'd need a way to refer to the aspects of a parent
routine, so that they can be used in a new Pre aspect. An attribute would work,
something like: P'Parent_Pre.

That would mean that you couldn't change Pre'Class precondition; if you need to
do that, you'd have to use a Pre on each subprogram in the inheritance. Not sure
if that is too complicated. And you couldn't assume much about the calls if you
did that (rather than using a single Pre'Class), but as that is the current
state, I can hardly imagine that it is harmful.

Anyway, more thought is needed.

****************************************************************

From: Jean-Pierre Rosen
Sent: Saturday, March 5, 2011  1:26 AM

> Using the new semantics Tucker suggested, the call at (1) has to pass
> its precondition, as Is_Wobbly(Obj) has to be true (based on the
> postcondition of Something_Wobbly). However, since preconditions are
> effectively combined with "or", Is_Valid(Obj) might in fact be False.
> And if it is, the body of P is going to be mighty surprised to get an
> object that violates its precondition!

Doesn't Eiffel have the same problem? How is it handled? (Just trying to avoid
reinventing the wheel).

****************************************************************

From: Randy Brukardt
Sent: Saturday, March 5, 2011  2:14 AM

I had wondered the same thing, but am not sure if Eiffel has interfaces or some
other form of multiple inheritance. (Without that, this particular problem
cannot come up.)

****************************************************************

From: Bob Duff
Sent: Saturday, March 5, 2011  7:32 AM

Eiffel has multiple inheritance.  Not just interfaces -- you can inherit two
non-abstract methods that conflict, and there's some syntax for resolving the
conflict (combine them into one, rename one or both, ....).

I haven't had time to understand the issue you guys are talking about, so I
don't know how it relates to Eiffel.

One possibly-related thing is that in Eiffel preconditions are checked by the
caller ("as if", of course).

****************************************************************

From: Tucker Taft
Sent: Saturday, March 5, 2011  10:28 AM

Eiffel has full multiple inheritance.

****************************************************************

From: Jean-Pierre Rosen
Sent: Saturday, March 5, 2011  11:26 AM

> I had wondered the same thing, but am not sure if Eiffel has
> interfaces or some other form of multiple inheritance. (Without that,
> this particular problem cannot come up.)

Eiffel has full multiple inheritance (Bertrand Meyer has claimed that no
language is usable without full multiple inheritance - but that was before
interfaces)

****************************************************************

From: Tucker Taft
Sent: Thursday, March 17, 2011  3:30 PM

Eiffel essentially follows the same rules we have proposed for Ada 2012, though
it requires slightly different syntax when augmenting a precondition in a
descendant type.

Rather than saying simply "requires X > 0" to establish a precondition, you say
"requires else X > 0" to emphasize that the new precondition is tacked onto any
inherited precondition with an "or else."  Similarly, to define a postcondition,
you write "ensures Y > 0", but to "override" one you write "ensures then Y > 0",
emphasizing that this is tacked on using "and then."  I can't say I find
"requires else" or "ensures then" palatable from an English point of view, but
it is comforting to know they have adopted the implicit "or-ing"/"and-ing"
approach.

In an earlier version of Eiffel, they allowed overriding and merely established
the principle that the programmer should only write weaker preconditions, and
stronger postconditions, but made no effort to check that.  In the current
version, there is no way to write a stronger precondition, or a weaker
postcondition, because of the implicit "or-ing"/"and-ing" semantics.

****************************************************************

From: Tucker Taft
Sent: Thursday, March 17, 2011  3:35 PM

Here is a web page that defines some of this for Eiffel:

   http://docs.eiffel.com/book/method/et-inheritance#Inheritance_and_contracts

****************************************************************

From: Bob Duff
Sent: Thursday, March 17, 2011  7:10 PM

Thanks, Tuck.

Here's another possibly-interesting web site about Eiffel, which I have not
read, but I noticed the interesting statement, "Assertions can be statically
verified":

    http://tecomp.sourceforge.net/index.php?file=doc/papers/lang/modern_eiffel.txt

****************************************************************

From: Tucker Taft
Sent: Thursday, March 17, 2011  9:13 PM

Interesting.  It looks like things went quiet about 18 months ago.  Their
concept of static verification is pretty simplistic.

****************************************************************

From: Randy Brukardt
Sent: Saturday, March 19, 2011  7:45 PM

> Eiffel essentially follows the same rules we have proposed for Ada
> 2012, though it requires slightly different syntax when augmenting a
> precondition in a descendant type.

It occurred to me this morning (in thinking about an off-hand remark that Bob
made) that we're looking at the wrong part of the problem. It's not the
combining that is getting us in trouble, it is the thinking about how these
things are enforced.

These contract aspects (Pre/Post/Type_Invariant) are visible to callers and form
part of the contract of the call. I started musing about what would happen if we
formally stated that they belong to the call (and are generated at the call
site) rather that to the called body. And I realized that most of the problems
go away; moreover, the weakening/strengthening makes much more sense.

The question is what are these contracts intended to do? If we consider the
purpose to be ensuring correctness of calls, then this focus makes perfect
sense.

Note that for normal, statically bound calls, there is no difference between
considering the call or the body. But for dispatching calls, there is an obvious
difference -- only that part of the contract aspects needed to ensure the
caller's correctness are enforced. (If it is needed to enforce something on all
calls to a particular body, it shouldn't be in a contract aspect; we have
assertions and predicates for that.)

What's interesting is that we don't *need* to enforce all of the contract
aspects in order for a body to have a consistent view of its callers. The
"weakening" of preconditions and "strengthening" of postconditions has the
correct effect, without the "counterfeiting" that occurs in the current rules.

To see what I'm talking about, we need to look at an example:

    package Simple_Window is
        type Root_Win is tagged ...

        function Is_Valid (Win : in Root_Win) return Boolean;

        function Create (Data : in Data_Type'Class) return Root_Win
           with Post'Class => Is_Valid (Create'Result);

        procedure Show_Hidden (Win : in out Root_Win)
           with Pre'Class => Is_Valid (Win),
                Post'Class => Is_Valid (Win);

        procedure Close (Win : in out Root_Win)
           with Pre'Class => Is_Valid (Win),
                Post'Class => not Is_Valid (Win);

    end Simple_Window;

    package Visibility is
        type Visible is interface;

        function Is_Visible (Obj : in Visible) return Boolean is abstract;

        procedure Show_Hidden (Obj : in out Visible) is abstract
            with Pre'Class  => not Is_Visible (Obj),
                 Post'Class => Is_Visible (Obj);

        procedure Hide_Visible (Obj : in out Visible) is abstract
            with Pre'Class  => Is_Visible (Obj),
                 Post'Class => not Is_Visible (Obj);

    end Visibility;

    with Simple_Window, Visibility; use Simple_Window, Visibility;
    package Visible_Window is
        type Visible_Win is new Root_Win and Visible with ...

        --Inherits:
        --   function Is_Valid (Win : in Visible_Win) return Boolean;

        function Is_Visible (Obj : in Visible_Win) return Boolean;

        function Create (Data : in Data_Type'Class) return Visible_Win
           with Post'Class => Is_Visible (Create'Result);
           -- This is "and"ed with Is_Valid (Create'Result).

        --Inherits:
        -- procedure Show_Hidden (Win : in out Visible_Win);

        procedure Hide_Visible (Win : in out Visible_Win)
            with Pre'Class  => Is_Valid (Win),
                 Post'Class => Is_Valid (Win);

        --Inherits:
        --procedure Close (Win : in out Root_Win);

    end Visible_Window;

    with Simple_Window, Visibility, Visible_Window;
    use  Simple_Window, Visibility, Visible_Window;
    procedure Testing is

        procedure Hide (Win : in out Visible'Class) is
        begin
            if Is_Visible (Win) then
                Hide_Visible (Win); -- (1)
            end if;
        end Hide;

        Win : Visible_Win := Create (...);
    begin
        if Something then
            Hide_Visible (Win); -- (2)
            Close (Win); -- (3)
        else
            Hide (Win); -- (4)
            Close (Win); -- (5)
        end if;
    end Testing;

The effect of the rules that I'm suggesting is that exactly which contract
aspects are evaluated depends upon the call.

Specifically, for the dispatching call (1), the precondition is Is_Visible
(Win), as that is the class precondition for the routine being called (procedure
Hide_Visible (Obj : in out Visible) in package Visibility). [Dispatching calls never would
evaluate specific preconditions, only the ones that apply to all of the
subprograms.] Note that whether Is_Valid (Win) is True has no effect here; that
makes sense because this is an object of Visible'Class and it doesn't even have
Is_Valid. Why would the caller care about the value of some routine that it does
not know about?

Similarly, the call (1) only checks the postcondition (not Is_Visible (Win)). No
check need be performed on other postconditions of the body.

Note the contrast with the statically bound call at (2): this evaluates the
entire precondition of the routine (Is_Valid (Win) or Is_Visible (Win)).

Note that these calls call the *same* body, but use *different* preconditions.
This seems bad. But it is not, because preconditions are "or"ed together. Thus,
inside of the body, the precondition has the same state either way (if
Is_Visible is True, then the precondition is satisfied).

Another objection might be that later callers might depend on the unchecked
Is_Valid postcondition. But look at the two calls (3) and (5). Both of these
calls (obviously) have the precondition of Is_Valid (Win). But in the case of
call (2), the postcondition of the call includes Is_Valid (Win) [so the
precondition succeeds], while the postcondition of call (4) does not (and
cannot: Visible'Class does not contain an Is_Valid operation). So there is no
reason to depend upon that value to be true. Moreover, it will be enforced
before call (5), so there is no problem even if it somehow gets False.

A final objection is that readers might treat these contract aspects as if they
are assertions and presume that they are always checked. But that seems to be
more an error of understanding than anything else; once users realize that it is
the call that matters, not the body, it will make more sense.

Note that what this rule does is make the preconditions on a dispatching call
*stronger* than those on the called body -- indeed they are strong enough so
that all possible bodies will pass. Similarly, the postconditions and invariants
on a dispatching call are *weaker*, weak enough so that all possible bodies
will pass. This almost makes sense. ;-)

---

We should now turn to the inheritance issues that lead to this discussion in the
first place.

Imagine that there is a call to Show_Hidden that immediately follows call (1).
[Not very useful, but bear with me...] The precondition for that call would be
not Is_Visible (Win). The call would dispatch to the body for the root type,
which has a precondition of Is_Valid (Win). That does not seem good; at this
point the reader is probably saying, "see, I knew this idea was nuts; I wish I
hadn't read this far". Ah, but not so fast.

Let's back up a moment and imagine a version of Ada with no inheritance, only
overriding. We'd have to write the body of this routine explicitly in this case.
What would it look like? Probably something like:

     procedure Show_Hidden (Win : in out Visible_Win) is
     begin
         Show_Hidden (Root_Win (Win));
     end Show_Hidden;

So what, you are probably saying. Well, I see a call here -- a statically bound
call for that matter! Such a call will obviously enforce the pre- and
postconditions of the called subprogram. So the missing Is_Valid (Win) check
would occur within the body of the subprogram. Since this is stronger than the
precondition of the subprogram as a whole (Is_Valid or (not Is_Visible)), it
will have to be evaluated, and thus the parent's precondition will be enforced.

So, all we have to do to avoid the "wrong precondition" problem is to treat
inherited routines where there are new contract aspects in the same way as this
call. That's not much different than the wrapper model that was previously
proposed to "fix" this case.

---

Having shown that making contract aspects apply to the call rather than to the
body eliminates the logical inconsistencies that otherwise arise, let's look at
a number of smaller issues.

First, this model doesn't appear to extend that well to access-to-subprogram
types. But actually it does. Access-to-subprogram types themselves do not have
an associated contract; effectively their contract is True. If we apply the same
rule to 'Access that we apply to inheritance, we realize that any subprogram
with a contract aspect needs a wrapper so that the aspect gets evaluated on the
call (just like any inherited routine that *changes* a contract needs a
wrapper). This is essentially the same model as used by the existing proposal;
it would just be worded differently.
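
To make this concrete, here is a rough sketch of the wrapper idea using the
earlier window example (the access type and wrapper name are made up purely for
illustration, not proposed wording):

    type Win_Action is access procedure (Win : in out Root_Win);

    -- Conceptually, Show_Hidden'Access designates a compiler-generated wrapper:
    procedure Show_Hidden_Wrapper (Win : in out Root_Win) is
    begin
       Show_Hidden (Win);  -- a normal call, so the Pre'Class and Post'Class of
                           -- Show_Hidden are evaluated as part of this call
    end Show_Hidden_Wrapper;

    Act : Win_Action := Show_Hidden'Access;
    -- A call through Act then reaches the wrapper, so the contract gets
    -- evaluated even though Win_Action itself has no contract aspects.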

Second, there is Steve's problem of inheritance of multiple null procedures
(with different contract aspects) where an arbitrary one is called. The
"natural" model would be that a call to an arbitrary one of the routines is
generated. But that means it would be arbitrary as to what contracts are
evaluated. That doesn't seem good. But turning back to the model of "what would
we write explicitly" solves the problem. If we explicitly wrote such a body, it
almost certainly would be a null procedure itself -- we wouldn't call *any*
inherited routine. So all we have to do is write the rules so that happens (and
it is only necessary when the contracts are changed), and the question is
answered.
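
To illustrate with made-up names (say New_Type inherits matching null
procedures Op, with different contract aspects, from two progenitors), the body
we'd write by hand would be roughly:

    overriding
    procedure Op (Obj : in out New_Type) is
    begin
       null;  -- the explicit body is itself null; neither inherited routine is
              -- called, so there is no arbitrary choice of contracts to evaluate
    end Op;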

Third, Type_Invariant is interesting. In general, this scheme works because
preconditions are weakened and postconditions are strengthened. Thus there is no
issue with outbound invariant checks. Inbound invariant checks, however, might
cause problems in that not all of them may be evaluated. I recall that Tucker's
original invariant proposal had no inbound checks, which means that that
proposal would not have any problem. I would suggest that we consider going back
to that proposal.

This last issue reminds me that the reason that those inbound checks were added
was to detect errors earlier. This is the same reason that we wanted to ensure
that all of the preconditions and postconditions of a body are evaluated. But it
should be clear by now that doing so also breaks the logical consistency of the
checks, leading to "counterfeiting" of preconditions and/or failure to check
postconditions for the types involved in a call. If we have to make a choice
between correctness and easier debugging, I know which one I want to choose. :-)

[It should be noted that making extra postcondition checks appears to be harmless
in this scheme; the problems really appear with the preconditions. But I haven't
thought about this in detail.]

Finally, this scheme has the distinct advantage that checks can be generated at
the call site. (Indeed, they have to be for preconditions of dispatching calls,
since the precondition of the call is in general stronger than the precondition
of the called body.) That exposes the checks to the full machinery of compiler
optimization, which should allow many of them to be eliminated. (And any checks
that are certain to fail can generate warnings - even more valuable for
correctness.) Since
this is existing machinery in all compilers that I know of, we want to be able
to take advantage of that.

To summarize, by changing the locus of contracts from callable entities to the
calls themselves, we eliminate the logical problems that we encounter when the
contracts belong to bodies rather than calls. Again, note that this change has
no effect for any statically bound calls, nor for access-to-subprogram calls; it
only affects dispatching calls. I think we need to look very seriously at
whether this approach works better than the body-oriented approach (even though
there is a slight cost in debugging information).

****************************************************************

From: Bob Duff
Sent: Sunday, March 20, 2011  5:05 AM

> These contract aspects (Pre/Post/Type_Invariant) are visible to
> callers and form part of the contract of the call.

Yes.  But why does everybody leave out predicates, which are the most important
of the 4 new "contract" features?

>...I started musing about what would
> happen if we formally stated that they belong to the call (and are
>generated  at the call site) rather that to the called body.

Preconditions belong to the call site.  Postconditions belong to the body.

> The question is what are these contracts intended to do? If we
> consider the purpose to be ensuring correctness of calls, then this
> focus makes perfect sense.

The purpose of preconditions is to ensure the correctness of calls.
In order to prove a precondition correct (either by hand, or using some tool),
you typically need to look at the actual parameters.  Postconditions are the
flip side -- they're supposed to be true for ANY call.

Sorry, but your e-mail is too long for me to read right now.
I'll try to get to it later.  I'm definitely interested!

I think there's a bug in the rules that prevents preconditions from being
checked at the call site.  This came up in a discussion that Cyrille Comar was
involved in, and then the discussion got dropped on the floor before anybody
figured out what the problem is.

I'd rather spend energy on getting this stuff right, rather than
(say) obscure corners of the accessibility rules that apply only to programs
that nobody writes.  ;-)

****************************************************************

From: Tucker Taft
Sent: Sunday, March 20, 2011  7:23 AM

     Thanks for this nice explication.
I actually agree with you that the right "view" is from the caller -- that is
really what preconditions and postconditions are about.  One of our goals,
however, was that the preconditions could be *implemented* by doing the correct
check inside the called routine. That will generally be less time efficient, but
it is clearly more space efficient, and for some compilers might simplify
implementation.  However, if we keep your "caller" view clearly in sight, it
becomes apparent that for inherited code, this may not work, and a new version
of the code that performs a weaker precondition check and a stronger
postcondition check will be needed. So it isn't even a "wrapper."  It really
needs to be a new version, or perhaps an "unwrapper" which bypasses the stronger
precondition check in the inherited code.

But you are right, the semantics are simpler to understand, and the "funny"
inheritance rules make more sense, if in the RM we define the semantics from the
caller side.  We should still make some effort in my view to allow the body-side
implementation of checks if desired, but the caller side view should remain the
underlying semantics.

By the way, Bob implied that postconditions were to be associated with the body.
I don't agree.  Postconditions are both more time *and* space efficient if
performed inside the body, but the right semantic view is still from the
caller's point of view.  Preconditions and postconditions are promises made to
the *caller*.  Also, there is no guarantee that the postconditions are the same
for every call, since they could depend on the value of the IN parameters. E.g.,
the postcondition of "max" is clearly dependent on the values of the IN
parameters (e.g. Max'Result >= Left and Max'Result >= Right and
(Max'Result = Left or Max'Result = Right)).
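
Spelled out as a declaration, that would be something like (just a sketch):

   function Max (Left, Right : Integer) return Integer
      with Post => Max'Result >= Left and Max'Result >= Right
                   and (Max'Result = Left or Max'Result = Right);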

Access-to-subprogram is admittedly weird, and I'm not sure what we should do
with those.  Since we don't seem motivated to put pre/postconditions on
access-to-subp types, the question is what if any preconditions can/should be
enforced in the body. Based on the caller view, the answer would be *none*.
However, that pretty much defeats the ability to implement body-side checks.
Hence, we have proposed that the preconditions *are* checked.  But this is after
all of the "or"ing has taken place with inherited 'Class preconditions.

So perhaps a way to think about it is that on a call through an access-to-subp
value, a precondition is checked that is the "or" of every possible
non-access-to-subp call.  Another way to look at it is that a subprogram reached
through an acc-to-subp indirection can still act as though it was called
directly in some way, including potentially a dispatching call if it is a
dispatching operation. It need not worry about the access-to-subp possibility.
Yet another way to think about it is that an access-to-subp call is actually a
call on a wrapper that contains a "normal" call, so we know that *some*
precondition check is performed, depending on which sort of "normal" call is
chosen to be used inside this wrapper.

In any case, I think we are in agreement that the caller view is the key to
understanding the semantics for preconditions and postconditions.  Type
invariants are more like postconditions in my view, and so are a bit easier to
reason about.  Subtype predicates are really more associated with subtype
conversions, and subtype conversions clearly happen on the caller side as far as
parameters and results, since the body doesn't really have any clue as to the
subtypes of the actual parameters or the target of the return object.

****************************************************************

From: Yannick Moy
Sent: Monday, March 21, 2011  12:36 PM

> So, all we have to do to avoid the "wrong precondition" problem is to
> treat inherited routines where there are new contract aspects in the
> same way as this call. That's not much different than the wrapper
> model that was previously proposed to "fix" this case.

Could you point me to the discussions about the "wrong precondition" problem? Or
was it only discussed at ARG meetings?

****************************************************************

From: Tucker Taft
Sent: Monday, March 21, 2011  12:45 PM

The problem is related to inheriting a subprogram from the parent type with a
particular precondition, while also inheriting abstract or null subprograms from
one or more interface "progenitors" that have different preconditions but
otherwise match the subprogram inherited from the parent.

The danger is that the inherited code will enforce the precondition it had in
the parent, when in fact it should also "or" in the preconditions coming from
the interface progenitor (effectively weakening the precondition).

****************************************************************

From: Randy Brukardt
Sent: Monday, March 21, 2011  9:59 PM

> > These contract aspects (Pre/Post/Type_Invariant) are visible to
> > callers and form part of the contract of the call.
>
> Yes.  But why does everybody leave out predicates, which are the most
> important of the 4 new "contract" features?

Because they don't have anything to do with subprograms, and in particular have
nothing to do with any of the problem cases.

This discussion started originally because of a question that Steve had about
inheriting null procedures that had different contract aspects, and eventually
widened into a question about any inheritance that causes different contract
aspects.

Such inheritance cannot happen for predicates. When you have any form of
inheritance, all of the inherited and overriding routines must be subtype
conformant. That requires static matching for any subtypes, and that requires
the predicates to be exactly the same.
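
For example (a sketch with made-up names):

    subtype Small is Integer with Dynamic_Predicate => Small in 1 .. 10;

    type T is tagged null record;
    procedure Op (Obj : T; N : Small);

    type T2 is new T with null record;
    overriding
    procedure Op (Obj : T2; N : Small);
    -- The overriding must be subtype conformant with the inherited Op, so
    -- N's subtype has to statically match, and hence has the same predicate.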

Since the predicates have to be the same for any possible body that a
dispatching call might execute, there is no issue to discuss. Thus I've left any
mention of predicates out of my messages, since that would just confuse the
issue further.

[Note that the solution that makes predicates have no problems could also be
used for other contract aspects -- that is, requiring them to be the same
everywhere -- but that seems to be a fairly limiting solution.]

> >...I started musing about what would
> > happen if we formally stated that they belong to the call (and are
> >generated  at the call site) rather that to the called body.
>
> Preconditions belong to the call site.  Postconditions belong to the
> body.

That's wrong; Tucker explained why. If you take this literally, you could never
use the postconditions to prove and eliminate following preconditions -- which
would be silly.

...
> I think there's a bug in the rules that prevents preconditions from
> being checked at the call site.  This came up in a discussion that
> Cyrille Comar was involved in, and then the discussion got dropped on
> the floor before anybody figured out what the problem is.

It's not a bug, as Tucker points out, it was completely intended. It's just
completely wrong -- the rules should be that preconditions are *always* checked
at the call site, and then if the compiler can and wants to do some as-if
optimizations, that's fine.

> I'd rather spend energy on getting this stuff right, rather than
> (say) obscure corners of the accessibility rules that apply only to
> programs that nobody writes.  ;-)

Feel free. :-)

****************************************************************

From: Randy Brukardt
Sent: Monday, March 21, 2011  10:22 PM

>      Thanks for this nice explication.
> I actually agree with you that the right "view" is from the caller --
> that is really what preconditions and postconditions are about.  One
> of our goals, however, was that the preconditions could be
> *implemented* by doing the correct check inside the called routine.

And this is where we went off the rails. This is an *optimization*; we must not
break the logical rules in order to make this possible. And I think I've
convinced myself that this is not possible in general (although it is possible
for any subprogram that cannot be called with a dispatching call -- and there
are a lot of those in typical Ada programs).

> That will generally be less time efficient, but it is clearly more
> space efficient, and for some compilers might simplify implementation.
> However, if we keep your "caller" view clearly in sight, it becomes
> apparent that for inherited code, this may not work, and a new version
> of the code that performs a weaker precondition check and a stronger
> postcondition check will be needed.
> So it isn't even a "wrapper."  It really needs to be a new version, or
> perhaps an "unwrapper" which bypasses the stronger precondition check
> in the inherited code.

The important point that I picked up on is that the preconditions that are
enforced for a dispatching call ought to be those that apply to the routine that
the call resolves to, and not those of the actual body.

For consistency, I suggested that we apply that to all of the contract aspects,
but it would work fine to apply that only to preconditions.

> But you are right, the semantics are simpler to understand, and the
> "funny" inheritance rules make more sense, if in the RM we define the
> semantics from the caller side.  We should still make some effort in
> my view to allow the body-side implementation of checks if desired,
> but the caller side view should remain the underlying semantics.

I think we should allow body side implementation by making sure that we don't
specify precisely where the exception is raised; but if the results are
different (as in dispatching calls), then it is up to compilers to get the right
result.

> By the way, Bob implied that postconditions were to be associated with
> the body.  I don't agree.  Postconditions are both more time *and*
> space efficient if performed inside the body, but the right semantic
> view is still from the caller's point of view.  Preconditions and
> postconditions are promises made to the *caller*.  Also, there is no
> guarantee that the postconditions are the same for every call, since
> they could depend on the value of the IN parameters.
> E.g., the postcondition of "max" is clearly dependent on the values of
> the IN parameter (e.g. Max'Result >= Left and Max'Result >= Right and
> (Max'Result = Left or Max'Result = Right))

I agree, although I don't really agree that they are necessarily more time and
space efficient. It seems to me to be necessary to consider the complete
picture. If you put postconditions into the body, your optimizer might be able
to prove that some of the checks aren't needed. But if you do that, the caller
can no longer prove that some of the following *preconditions* are unnecessary.
So
whether the result is more time-efficient or not is very unclear to me.

You might be able to do better if you have some way to inject the values of
"virtual expressions" into your intermediate form after calls. [This is, somehow
tell the optimizer that the values of the post-condition expressions are known.]
I don't know if optimizers typically have this ability; our optimizer definitely
does not (and I don't know if this idea has a name; I doubt that I am the first
to think of it!). But doing that is not much different from the ability to
generate the postcondition at this point (since these are arbitrary
expressions).

[Note that I'm mostly interested in what can be done with existing or common
compiler technology. If you generate both the pre- and postconditions at the
call site, typical common-subexpression optimizations will remove the vast
majority of the precondition checks, since they will be satisfied by preceding
postconditions. Moreover, if the optimizer determines that some precondition is
always going to be False, a compile-time warning could be issued that there is
an obvious bug in the program.

"Virtual expressions" would use the same mechanism that common subexpressions do
(especially for determining whether it is OK to treat two similar expressions as
always getting the same result), other than inserting a known static value
rather than evaluating into a temporary. (And you'd have to be careful not to
use part of one in other optimizations, since they wouldn't actually be
evaluated there.)]
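
To make the first point concrete, here is a rough sketch of the call-site
expansion, using my earlier window example (W is just some Root_Win object;
this is an illustration, not proposed wording):

    Show_Hidden (W);
    pragma Assert (Is_Valid (W));  -- Post'Class of Show_Hidden, generated at the call site
    pragma Assert (Is_Valid (W));  -- Pre'Class of Close: the same expression, so a
    Close (W);                     -- common-subexpression pass can drop this check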

> Access-to-subprogram is admittedly weird, and I'm not sure what we
> should do with those.  Since we don't seem motivated to put
> pre/postconditions on access-to-subp types, the question is what if
> any preconditions can/should be enforced in the body.
> Based on the caller view, the answer would be *none*.
> However, that pretty much defeats the ability to implement body-side
> checks.  Hence, we have proposed that the preconditions *are* checked.
> But this is after all of the "or"ing has taken place with inherited
> 'Class preconditions.
>
> So perhaps a way to think about it is that on a call through an
> access-to-subp value, a precondition is checked that is the "or" of
> every possible non-access-to-subp call.  Another way to look at is
> that a subprogram reached through an acc-to-subp indirection can still
> act as though it was called directly in some way, including
> potentially a dispatching call if it is a dispatching operation.
> It need not worry about the access-to-subp possibility.
> Yet another way to think about it is that an access-to-subp call is
> actually a call on a wrapper that contains a "normal"
> call, so we know that
> *some* precondition check is performed, depending on which sort of
> "normal" call is chosen to be used inside this wrapper.

I think you lost me. I proposed in my message that we think of an
access-to-subprogram in the same way that we think of a dispatching call to a
routine that has different contracts. Since the access-to-subprogram has no
contracts, that puts the entire contract in the body (logically). That's the
*same* idea that I suggested for inherited dispatching bodies where the
contracts are different.

> In any case, I think we are in agreement that the caller view is the
> key to understanding the semantics for preconditions and
> postconditions.  Type invariants are more like postconditions in my
> view, and so are a bit easier to reason about.  Subtype predicates are
> really more associated with subtype conversions, and subtype
> conversions clearly happen on the caller side as far as parameters and
> results, since the body doesn't really have any clue as to the
> subtypes of the actual parameters or the target of the return object.

Now for the tough question: who is going to try to take a crack at rewording the
rules in order to have the correct effects? I might have time next week, but I
think this one is going to require a number of iterations, so the sooner we
start the better.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 21, 2011  11:16 PM

> The problem is related to inheriting a subprogram from the parent type
> with a particular precondition, while also inheriting abstract or null
> subprograms from one or more interface "progenitors"
> that have different preconditions but otherwise match the subprogram
> inherited from the parent.
>
> The danger is that the inherited code will enforce the precondition it
> had in the parent, when in fact it should also "or" in the
> preconditions coming from the interface progenitor (effectively
> weakening the precondition).

I think this description is backwards, because you're talking about what the
inherited code will do; the point is what the *call* to it will do. The language
(even the current wording) does not say much about what bodies do, as these
things are defined on calls more than on bodies.

The danger really is that the preconditions that are enforced will be different
(possibly weaker) than those expected by the body; *or* that the postconditions
that are enforced are different (weaker) than those expected by the caller. If
we really want to talk about these in terms of bodies, then we need different
rules for preconditions and postconditions!

Anyway, to answer Yannick's question, there are examples of the problem in my
message of Saturday (procedure Show_Hidden has the classic example of the issue,
depending on the rules for such inheritance). There are also examples in the
mail of AI05-0197-1 (which will soon be moved to the new AI05-0247-1).

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 22, 2011  1:28 AM

One of the things I was thinking about is that the model of "weakening"
preconditions doesn't seem to work properly with existing code when dispatching
is involved.

Consider:

    package Pack1 is
       type T1 is tagged private;
       function Is_Valid (Obj : T1) return Boolean;
       procedure P1 (Obj : in T1);
    private
       ...
    end Pack1;


    with Pack1;
    package Pack2 is
        type T2 is new Pack1.T1 with ...
        overriding
        procedure P1 (Obj : in T2)
            with Pre'Class => Is_Valid (Obj);
    end Pack2;

Imagine that Pack1 is some existing library (say, Ada.Containers.Vectors).
And Pack2 is written by a user who wants to use this new-fangled precondition
thingy.

The problem here is that the precondition of the original P1 is True. So it is
impossible to give a weaker precondition later.

However, the current wording ignores the preconditions for which none is
specified. That's probably because the alternative is to effectively not
allow preconditions at all on overriding routines. However, that does not work
at all with the model of weakening preconditions.

For example:

    with Pack1;
    procedure Do_It3 (Obj : in Pack1.T1'Class) is
    begin
        Obj.P1;
    end Do_It3;

The dispatching call Obj.P1 has no precondition. However, if Obj is a T2 object,
then the body of P1 has a *stronger* precondition.

We can handle this by requiring the precondition of the actual routine to be
evaluated in this case. Interestingly, this is the same as the requirements for
adding a precondition to an inherited routine via multiple inheritance; perhaps that
needs to apply any time a precondition is added?? (Nah, that's too strong. It
only needs to apply any time that the precondition might be different or
originally empty.)

In any case, an empty class-wide Precondition is different than one specified to
be True. Specifically, if the original P1 was:

        procedure P1 (Obj : in T1)
            with Pre'Class => True;

then adding a later precondition on some extension would have no effect.
Whatever rules we come up with had better take this into account.

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 22, 2011  9:47 AM

I think you need to put a Pre'Class
precondition on the operations of the root type if you want any of this to work.

I suppose we could treat the "Pre" and the "Pre'Class"
preconditions completely independently, where the "Pre'Class" gets weakened as
you go down the hierarchy, the "Pre" is evaluated as is, and a failure of the
"Pre" is a failure, end of story, just as though the precondition were an
assertion as the first line of the subprogram body.

That way you could add Pre's anywhere, and not have to worry about your
ancestors.  They would be effectively enforced in the body.  Meanwhile,
Pre'Class would be consistently described as enforced at the call site, and
would undergo weakening.  I guess I kind of favor this model. Mixing Pre and
Pre'Class seems like a mistake in general, and I believe that projects will
adopt one approach or the other.

We would still need to think about how access-to-subp interacts with this.
Probably should still be modeled as a call on a conjured-up subprogram that
contains a normal, non-dispatching call on the designated subprogram.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 22, 2011  4:52 PM

> I think you need to put a Pre'Class
> precondition on the operations of the root type if you want any of
> this to work.

I can believe that that is preferred. But the language rules have to make sense
even when that is not done, because of compatibility concerns. There is a lot of
code out there without preconditions, and we don't want to prevent using it, or
make it hard to use preconditions.

> I suppose we could treat the "Pre" and the "Pre'Class"
> preconditions completely independently, and the "Pre'Class"
> get weakened as you go down the hierarchy, and the Pre are evaluated
> as is, and a failure of the "Pre" is a failure, end of story, just as
> though the precondition were an assertion as the first line of the
> subprogram body.

Yes, I think I suggested something on this line a week or two ago. I think it
makes more sense.

> That way you could add Pre's anywhere, and not have to worry about
> your ancestors.  They would be effectively enforced in the body.
> Meanwhile, Pre'Class would be consistently described as enforced at
> the call site, and would undergo weakening.  I guess I kind of favor
> this model.
> Mixing Pre and Pre'Class seems like a mistake in general, and I
> believe that projects will adopt one approach or the other.

I have to wonder if we shouldn't enforce that. Mixing them makes no logical
sense. (I also proposed this last week.)

Specifically, only one of Pre'Class or Pre can be specified on a subprogram.
And, for an overriding/inherited routine, Pre'Class can only be specified/added
to a subprogram if its ancestors have Pre'Class. (And if Pre'Class is inherited,
then Pre cannot be specified, only Pre'Class.)
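
So, roughly, something like this would become illegal (a sketch of the intent
with made-up names, not proposed wording):

    type T is tagged null record;
    function Is_Ready (Obj : T) return Boolean;
    procedure Op (Obj : in out T)
       with Pre       => Is_Ready (Obj),   -- plain assertion, checked in the body
            Pre'Class => Is_Ready (Obj);   -- call-site contract
    -- Under the suggested rules, specifying both Pre and Pre'Class on the
    -- same subprogram would be rejected.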

These rules would avoid confusion with mixed Pre/Pre'Class environments.
Pre'Class works like the Eiffel contracts; Pre is more of a free-form assertion.

Pre would always just be that of the body. Pre'Class would be that of the
*call* (with the special cases for inheritance from two subprograms).

One could even consider access-to-subprogram in the same way, as a profile that
does not have a Pre'Class, meaning that subprograms with Pre'Class couldn't
match it. We could then consider adding Pre'Class/Post'Class to
access-to-subprogram, making the similarity with dispatching calls complete.

This might be going too far, but it seems that combining Pre and Pre'Class is
too confusing for everyone. We're better off without it.

> We would still need to think about how access-to-subp interacts with
> this.  Probably should still be modeled as a call on a conjured-up
> subprogram that contains a normal, non-dispatching call on the
> designated subprogram.

I've answered this repeatedly, and you don't seem to be registering my various
ideas. So I'm not sure what's wrong with them.

Anyway, it strikes me that any model separation would be on top of the existing
rules. I think I've figured out a consistent set of changes to the dynamic
semantics that fixes the issues (but I'm sure once Steve reviews it I will have
that misconception removed ;-). I think we could add some Legality Rules to it
if we wanted to seriously separate Pre and Pre'Class, but I don't think it is
required. I'll try to write it up ASAP.

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 22, 2011  5:05 PM

>> I suppose we could treat the "Pre" and the "Pre'Class"
>> preconditions completely independently, and the "Pre'Class"
>> get weakened as you go down the hierarchy, and the Pre are evaluated
>> as is, and a failure of the "Pre" is a failure, end of story, just as
>> though the precondition were an assertion as the first line of the
>> subprogram body.
>
> Yes, I think I suggested something on this line a week or two ago. I
> think it makes more sense.

Yes, I should have acknowledged you.  But I frequently lose track of where ideas
come from (sorry about that), and just slowly integrate them into the overall
possibilities...

>> We would still need to think about how access-to-subp interacts with
>> this.  Probably should still be modeled as a call on a conjured-up
>> subprogram that contains a normal, non-dispatching call on the
>> designated subprogram.
>
> ... I've answered this repeatedly, and you don't seem to be
> registering my various ideas. So I'm not sure what's wrong with them.

Sorry, again I am having trouble keeping track of whose ideas are whose.  Are we
in agreement on this one, namely that an access-to-subp is equivalent to a
"normal" call on the designated subprogram, with the appropriate pre or
pre'class enforced?  That is, any precondition check to be performed is bundled
into a wrapper if necessary?  Or if not, how does your idea differ?

> Anyway, it strikes me that any model separation would be on top of the
> existing rules. I think I've figured out a consistent set of changes
> to the dynamic semantics that fixes the issues (but I'm sure once
> Steve reviews it I have that misconception removed ;-). I think we
> could add some Legality Rules to it if we wanted to seriously separate
> Pre and Pre'Class, but I don't think it is required. I'll try to write it up ASAP.

Great.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 22, 2011  5:14 PM

> Yes, I think I suggested something on this line a week or two ago. I
> think it makes more sense.

Minor aside: There doesn't seem to be any rule restricting Pre'Class or
Post'Class to dispatching routines of a tagged type. It seems these are allowed
on untagged types, routines that aren't primitive, and the like. Is this
intended, or even a good idea??

    type Foo is new Integer;
    function "+" (Left, Right : Foo) return Foo
       with Pre'Class => Left > 0 and Right > 0;

    type Bar is new Foo;
    function "+" (Left, Right : Bar) return Bar;
       -- Inherits Pre'Class???

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 22, 2011  8:51 PM

I am pretty sure there is a restriction of 'Class aspects to tagged types or
dispatching subprograms.  Ah yes, last legality rule of AI-183:

   If the aspect_mark includes 'Class, then the associated
   entity shall be a tagged type or a primitive subprogram
   of a tagged type.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 23, 2011  8:24 PM

OK, good to find out that it is covered. But I have to admit I never thought to
look there, given that there are only a handful of such aspects and as such I
didn't expect there to be a blanket rule existing only for them. Thus, I'm not
sure that such a blanket rule is a great idea. Maybe there needs to be an AARM
note in 13.3.2 and 13.3.3 mentioning this rule.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 23, 2011  10:05 PM

It turns out that 13.3.3(6/3) repeats this rule, while there is no similar text
in 13.3.2. I marked the text in 13.3.3 as redundant (presuming no other change),
and added an AARM note to 13.3.2 for future readers that are as confused as I
was. I suspect that either the text should be deleted from 13.3.3 or something
similar added to 13.3.2 -- but I'll wait for someone other than me to make a
formal editorial change request to that effect.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 22, 2011  5:53 PM

...
>> I'll try to write it up ASAP.
>
> Great.

Hopefully this was an optimistic "great" and not the other kind. :-) Anyway,
here is version /02 of this AI, complete with a wording proposal. I didn't do
anything with legality rules (even though I think some restrictions there would
be a good idea). This seems to me to be about the minimum change to fix the
problems previously identified.

As always, comments welcome.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 22, 2011  6:10 PM

> Yes, I should have acknowledged you.  But I frequently lose track of
> where ideas come from (sorry about that), and just slowly integrate
> them into the overall possibilities...

The important point is whether we want to enforce this with Legality Rules, or
just make it the model implicitly.

I should note that if we don't enforce it with Legality Rules, some subprograms
can be called both ways (from dispatching calls that expect the body to check
preconditions, and from dispatching calls that are required to check the
precondition themselves). Which means that you might have to generate code in
both places
(ugh). For instance:

    package Pack1 is
       type T1 is tagged private;
       function Is_Valid (Obj : T1) return Boolean;
       procedure P3 (Obj : in T1)
          with Pre => Is_Valid (Obj);
    private
       ...
    end Pack1;

    package Pack2 is
       type I2 is interface;
       function Is_Green (Obj : I2) return Boolean is abstract;
       procedure P3 (Obj : in I2) is null
           with Pre'Class => Is_Green (Obj);
    end Pack2;

    with Pack1, Pack2;
    package Pack3 is
       type T3 is new Pack1.T1 and Pack2.I2 with private;
       overriding
       function Is_Green (Obj : T3) return Boolean;
       -- P3 is inherited from Pack1.P3.
    private
       ...
    end Pack3;

    with Pack1, Pack2, Pack3;
    procedure Main is
        procedure Do_It1 (Obj : in Pack1.T1'Class) is
        begin
           Obj.P3; -- (1)
        end Do_It1;
        procedure Do_It2 (Obj : in Pack2.I2'Class) is
        begin
           Obj.P3; -- (2)
        end Do_It2;
        O3 : Pack3.T3;
    begin
        Do_It1 (O3);
        Do_It2 (O3);
    end Main;

Call (1) is to a routine with an ordinary Precondition, so the body always
checks the precondition. Call (2) is to a routine with a Pre'Class precondition,
but the check is at the call site. But of course these call the same body, so
the body must have the code even for call (2). [That code can never fail for
call (2), even though it will be executed.]
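
A rough sketch of the code this implies (just an illustration, not a required
implementation):

    -- Inside the shared body Pack1.P3 (reached by both call (1) and call (2)):
    procedure P3 (Obj : in T1) is
    begin
       pragma Assert (Is_Valid (Obj));  -- body-side check of Pre, needed for call (1)
       null;  -- the real work would go here
    end P3;

    -- while call (2) is expanded at the call site roughly as:
    --    pragma Assert (Is_Green (Obj));  -- call-site check of Pre'Class
    --    Obj.P3;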

If we made this sort of thing illegal, this could not happen.

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 22, 2011  6:10 PM

> !wording
>
> Change 13.3.2(19/3) from:
>
> For a dispatching call or a call via an access-to-subprogram value,
> the precondition or postcondition check performed is determined by the
> subprogram actually invoked.
> Redundant[Note that for a dispatching call, if there is a Pre'Class
> aspect that applies to the subprogram named in the call, then if the
> precondition expression for that aspect evaluates to True, the
> precondition check for the call will succeed.]
>
> to:
>
> For a dispatching call to a subprogram that does not have a Pre'Class
> specified, or a call via an access-to-subprogram value, the
> precondition or postcondition check performed is determined by the
> subprogram actually invoked. Similarly, the postcondition check for
> all dispatching calls is determined by the subprogram actually
> invoked.
>
> In contrast, the precondition check for a dispatching call to a
> subprogram that does have Pre'Class specified consists solely of
> checking the Pre'Class expressions that apply to the subprogram.

What about the "Pre" aspect?  Is this meant to imply that if there is a
Pre'Class aspect specified, then the Pre aspect is ignored?

Also, when you say the "the Pre'Class expressions that apply to the subprogram"
do you mean the subprogram denoted by the call prefix, or do you mean the
subprogram actually invoked? The wording seems ambiguous.

Finally, what are the implications of only the Pre'Class known to the caller
being checked?  If one of the Pre'Class expressions that applies to the subprogram
actually invoked is True, must we still raise an Assertion_Error if all of the
expressions that apply to the denoted subprogram evaluate to False?

> AARM Ramification: We are talking about the Pre'Class that applies to
> the subprogram that the dispatching call is resolving to call, not the
> Pre'Class for the subprogram that is ultimately dispatched to.

I believe this needs to be made clearer in the normative wording.
An AARM note seems inadequate.  The word "denote" could be used to emphasize the
compile-time denoted subprogram, rather than the run-time invoked subprogram.

> AARM Implementation Note: These rules imply that logically,
> preconditions of routines that have Pre'Class specified and might be
> called with a dispatching call must be checked at the point of call.
> Preconditions of other routines that might be called with a
> dispatching call must be checked inside of the subprogram body
> (possibly in a wrapper). It's possible for both conditions to be
> necessary for routines that are inherited from multiple ancestors (in
> that case, the check at the call site will necessarily be the same or
> stronger than the one inside of the routine).

I don't understand the "preconditions of other routines".  What does that mean?
Is this the "Pre" aspects as opposed to the "Pre'Class" aspects?

> In contrast, the postcondition checks always need to be
> checked
> inside the body of the routine.
> End AARM Implementation Note.
>
> Notwithstanding what this standard says elsewhere, an inherited
> subprogram that has a specified Pre'Class, Post'Class, or
> Type_Invariant'Class (see 13.3.3) and

that

> is primitive for a type that has one or more progenitors is equivalent
> to an overriding subprogram S whose body consists of:
>     * The null statement, if all of the inherited subprograms are null
> subprograms;
>     * A call on the parent subprogram, with all of the parameters the same as
>       the parameters to S (possibly with appropriate type conversions).
> If there are multiple conformant subprograms S, there is only one
> overriding subprogram S created....

Is this needed?  It feels awfully mechanistic.  Can't we accomplish this some
other way?

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 22, 2011  9:29 PM

> The important point is whether we want to enforce this with Legality
> Rules, or just make it the model implicitly.
>
> I should note that we don't enforce it with Legality Rules, some
> subprograms can be called both ways (from dispatching calls that
> expect the body to check preconditions, and from dispatching calls are
> required to check the precondition themselves).

The above parenthetical comment confuses me.  Should one of those have said
"non-dispatching calls"?

> ... Which means that you might have to generate code in both places
> (ugh). For instance:
>
>      package Pack1 is
>         type T1 is tagged private;
>         function Is_Valid (Obj : T1) return Boolean;
>         procedure P3 (Obj : in T1)
>            with Pre =>  Is_Valid (Obj);
>      private
>         ...
>      end Pack1;
>
>      package Pack2 is
>         type I2 is interface;
>         function Is_Green (Obj : I2) return Boolean is abstract;
>         procedure P3 (Obj : in I2) is null
>             with Pre'Class =>  Is_Green (Obj);
>      end Pack2;
>
>      with Pack1, Pack2;
>      package Pack3 is
>         type T3 is new Pack1.T1 and Pack2.I2 with private;
>         overriding
>         function Is_Green (Obj : T3) return Boolean;
>         -- P3 is inherited from Pack1.P3.
>      private
>         ...
>      end Pack3;
>
>      with Pack1, Pack2, Pack3;
>      procedure Main is
>          procedure Do_It1 (Obj : in Pack1.T1'Class) is
>          begin
>             Obj.P3; -- (1)
>          end Do_It1;
>          procedure Do_It2 (Obj : in Pack2.I2'Class) is
>          begin
>             Obj.P3; -- (2)
>          end Do_It2;
>          O3 : Pack3.T3;
>      begin
>          Do_It1 (O3);
>          Do_It2 (O3);
>      end Main;
>
> Call (1) is to a routine with an ordinary Precondition, so the body
> always checks the precondition. Call (2) is to a routine with a
> Pre'Class precondition, but the check is at the call site. But of
> course these call the same body, so the body must have the code even
> for call (2). [That code can never fail for call (2), even though it
> will be executed.]

I'm losing you.  What do you mean by "the body must have the code even for call
(2)"?

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 23, 2011  9:37 PM

> > For a dispatching call to a subprogram that does not have a
> > Pre'Class specified, or a call via an access-to-subprogram value,
> > the precondition or postcondition check performed is determined by
> > the subprogram actually invoked. Similarly, the postcondition check
> > for all dispatching calls is determined by the subprogram actually
> > invoked.
> >
> > In contrast, the precondition check for a dispatching call to a
> > subprogram that does have Pre'Class specified consists solely of
> > checking the Pre'Class expressions that apply to the subprogram.
>
> What about the "Pre" aspect?  Is this meant to imply that if there is
> a Pre'Class aspect specified, then the Pre aspect is ignored?

Yes, but not because of any explicit intent; rather, it's the way that Pre works.
Pre is "or"ed with all of the other Pre'Class aspects that are inherited. Since
we're using a stronger precondition on such calls than that of the invoked
subprogram, this necessarily means that any precondition of the body must
succeed. Thus there is no need to evaluate Pre in such cases; if the compiler
can prove that all calls are either this kind of dispatching call or statically
bound calls, it can always generate the preconditions at the call site and do no
evaluation in the body. That seems like a good thing.

I was wondering whether we should even allow this mixing; that's covered in my
other e-mail which I see has a separate answer from you so I'll give it a
separate answer as well.

> Also, when you say the "the Pre'Class expressions that apply to the
> subprogram" do you mean the subprogram denoted by the call prefix, or
> do you mean the subprogram actually invoked?
> The wording seems ambiguous.

The former. I thought that it is only necessary to talk explicitly about the
dynamic meaning of subprogram; anything else is necessarily static. But I
agree that it is not that clear, so if there is some better wording available,
we should use it.

> Finally, what are the implications of only the Pre'Class known to
> the caller being checked?  If one of the Pre'Class expressions that
> applies to the subprogram actually invoked is True, must we still
> raise an Assertion_Error if all of the expressions that apply to the
> denoted subprogram evaluate to False?

Yes. The intent is that the stronger precondition of the call is what is
checked; that avoids "counterfeiting". If we want to be able to do analysis at
the call site (and code generation!), we have to be able to know what
preconditions and postconditions apply to that call. For postconditions, these
are "weaker" (or the same), so we need no special rules. But for preconditions,
these can be "stronger" than those of the body; in order to have any chance to
generate these at the call site, we have to be able to evaluate the "stronger"
precondition rather than the potentially weaker one of the called body.

I considered making this a permission rather than a requirement, but that only
seemed to make it harder for users to figure out what preconditions are
enforced.

Beyond that, this makes perfect sense (to me at least): the only precondition
the caller can know about for a dispatching call is the Pre'Class that applies
to the (statically) resolved subprogram. It is the one that a caller has to
conform to in order to have a provably correct program. That being the case,
why shouldn't
we also enforce it??

Note that this rule change is sort of an optimization; it is intended to allow
code generation for preconditions at the call site of dispatching calls and also
to enforce the intended (Eiffel-like) model for subprograms that have Pre'Class
aspects. We could live without it, but doing so would pretty much eliminate the
ability of compilers to optimize preconditions for dispatching calls -- that
seems bad to me. (This is the same reason that we go out of our way to allow
postconditions to be generated inside of a subprogram body -- mostly so that the
compiler can prove that they are already true and eliminate the checks.)

> > AARM Ramification: We are talking about the Pre'Class that applies
> > to the subprogram that the dispatching call is resolving to call,
> > not the Pre'Class for the subprogram that is ultimately dispatched to.
>
> I believe this needs to be made clearer in the normative wording.
> An AARM note seems inadequate.  The word "denote" could be used to
> emphasize the compile-time denoted subprogram, rather than the
> run-time invoked subprogram.

OK, I'll make an attempt to clarify this wording. Hmm, 3.9.2 uses "denotes
the declaration of a dispatching operation". But I can't quite find a way to
work "denotes" into "For a dispatching call to a subprogram that does not have a
Pre'Class specified,".

"For a dispatching call to a denoted subprogram that does not have a Pre'Class
specified,"

doesn't make much sense.

"For a dispatching call to a subprogram that does not have a Pre'Class specified
for the denoted subprogram,"

is right but way too redundant. But I can't think of anything better. I did add
a parenthetical remark to the last paragraph:

  In contrast, the precondition check for a dispatching call to a subprogram
  that does have Pre'Class specified for the denoted subprogram consists solely
  of checking the Pre'Class expressions that apply to the denoted subprogram
  (not necessarily the one that is invoked).

> > AARM Implementation Note: These rules imply that logically,
> > preconditions of routines that have Pre'Class specified and might be
> > called with a dispatching call must be checked at the point of call.
> > Preconditions of other routines that might be called with a
> > dispatching call must be checked inside of the subprogram body
> > (possibly in a wrapper). It's possible for both conditions to be
> > necessary for routines that are inherited from multiple ancestors
> > (in that case, the check at the call site will necessarily be the same or
> > stronger than the one inside of the routine).
>
> I don't understand the "preconditions of other routines".
> What does that mean?  Is this the "Pre" aspects as opposed to the
> "Pre'Class" aspects?

Both "Pre" aspects, and "Pre'Class" aspects added further down the inheritance
tree. The precondition of a dispatching call is that which is known at the call
site, period. There is no advantage to weakening here, while there might be for
statically bound calls (or dispatching calls whose root is higher up the tree).

I gave the example that led to this in my other note, the one that confused you
so. I'll try to explain that again when I answer that one.

> > In contrast, the postcondition checks always need to be checked
> > inside the body of the routine.
> > End AARM Implementation Note.
> >
> > Notwithstanding what this standard says elsewhere, an inherited
> > subprogram that has a specified Pre'Class, Post'Class, or
> > Type_Invariant'Class (see 13.3.3) and
>
> that
>
> > is primitive for a type that has one or more progenitors is
> > equivalent to an overriding subprogram S whose body consists of:
> >     * The null statement, if all of the inherited subprograms are
> > null subprograms;
> >     * A call on the parent subprogram, with all of the parameters
> > the same as
> >       the parameters to S (possibly with appropriate type conversions).
> > If there are multiple conformant subprograms S, there is only one
> > overriding subprogram S created....
>
> Is this needed?  It feels awfully mechanistic.  Can't we accomplish
> this some other way?

Maybe, but I doubt it will help much. Everything I thought of always led back
here. Note that this rule is required to fix the three problems outlined in the
AARM Reason note; some solution to these problems is required. I thought that
one rule to fix all three was preferable to inventing three separate rules that
really complicate things.

As a reminder, the AARM note says:

AARM Reason: This rule eliminates three problems: (1) the issue that two null
subprograms (one of which is chosen for dispatching arbitrarily by
3.9.2(20.3/3)) may not be equivalent at runtime; (2) "counterfeiting" problems
that arise because adding an interface precondition to the mix weakens the
precondition of the inherited routine (in this case, we need to enforce the
precondition of the actual body, else it might not be true when the call is made
-- which would be bad); (3) problems that arise because postconditions and
invariants added by an interface would not be enforced on an inherited routine
(it does not know about any such contracts).

All three problems occur because of dispatching that calls inherited routines
that have a different view of the preconditions, postconditions, and invariants
that apply to them than an otherwise identical explicitly declared routine.
Besides the logical problems that arise, we also have a problem simply because
an explicitly declared routine works differently than an inherited one. That
seems bad on its face, even ignoring the other problems.

It immediately was obvious to me that there is no problem with explicitly
declared routines. That led to my original idea, which was simply to require
overriding for the problem cases - that is a nice, simple solution. That would
be a Legality Rule something like:

If the set of Pre'Class or Post'Class or Type_Invariant'Class aspects that would
apply to a call of an inherited subprogram is different than the set of
corresponding aspects of a call to the parent subprogram, then the subprogram
requires overriding.

But this rule is difficult to define properly; we don't want to talk about
"calls", really, and we don't want confusion about the implicit inherited
subprogram as opposed to the subprogram that was inherited from, and it is hard
to define "different" very well.   (Since this is a Legality Rule that requires
the programmer to do something, we don't want it to apply unless we really need
it; thus we would want to be able to prevent it from applying when the same
Pre'Class comes from multiple parents - to take one example.)

The difficulty of defining this, along with the simple fact that it requires
people to write overriding subprograms (containing calls that they could easily
make a mistake in) just to get the preconditions right caused me to look
elsewhere.

Since it is easy to define what the body of such a subprogram looks like, and
ensuring that it exists for both contract aspects and for dispatching purposes
is needed to fix the problems (in a straightforward manner), I decided to write
the proposed rules.

The best way (IMHO) to simplify these rules is to make the proposed rules apply
to *all* inherited primitive routines of tagged types -- that is, there is (for
language rule purposes) an explicit body (defined as I proposed above) for
*every* such routine. Then, we could get rid of almost all of 3.9.2(20-20.3) --
only the first bullet is even possible. Compilers still can share bodies as they
do now (except in the pre/post cases of course) -- only the language definition
is changed. In that case, this rule would move somewhere else in the Standard
(3.9.2??). I didn't do this because it is a lot of language change for what is
clearly a corner case.

The alternative of trying to define these properly in place is just too hard
IMHO. The wording for each of these cases would be very complex. The problem
cases come up when there are multiple inherited routines with different
preconditions, postconditions, and the like. In those cases, the effective
aspect is the combined one. I can't figure out how to talk about that
combination, because you cannot talk about "the" inherited subprogram; there is
more than one.

And the rules themselves are complex. Specifically, the precondition for an
inherited routine called via a dispatching call is the Pre'Class of the
subprogram denoted by the call "and"ed with the actual precondition of the
parent routine. (These are often the same, but don't have to be when interfaces
are involved -- they can be completely disjoint in that case.) This is what you
would get with an explicit body, and it is best that the inherited routines work
the same way.

The precondition of a statically bound call to such an inherited subprogram is
(effectively) the precondition of the parent routine. OTOH, the postcondition of
a statically bound call is the postcondition of the parent routine "and"ed with
any postconditions inherited from other routines.

And on and on and on...

Ideas welcome, of course.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 23, 2011  9:52 PM

...
> > The important point is whether we want to enforce this with Legality
> > Rules, or just make it the model implicitly.
> >
> > I should note that we don't enforce it with Legality Rules, some
> > subprograms can be called both ways (from dispatching calls that
> > expect the body to check preconditions, and from dispatching calls that
> > are required to check the precondition themselves).
>
> The above parenthetical comment confuses me.  Should one of those said
> "non-dispatching calls"?

No. If you recall from the rules I previously proposed, there are two kinds of
dispatching calls:

If the denoted subprogram has one or more Pre'Class expressions specified, then
the precondition expressed by those expressions (only) is evaluated (for
dispatching calls). These conform to the Eiffel-like model, so for the sake of
discussion, let's call them "Eiffel-like contract dispatching calls".

OTOH, if the denoted subprogram does not have any Pre'Class expressions specified,
then the precondition that is evaluated is that of the invoked subprogram; the
precondition can be anything, and reasoning about it at the call site is not
possible. Let's call these "Ad-hoc contract dispatching calls."

The question I had was whether we should enforce a separation of these two
models of contracts and calls. If we don't do that, it is possible to call the
same subprogram from both an "Eiffel-like contract dispatching call" and an
"ad-hoc contract dispatching call". As in the example that follows...

> > ... Which means that you might have to generate code in both places
> > (ugh). For instance:
> >
> >      package Pack1 is
> >         type T1 is tagged private;
> >         function Is_Valid (Obj : T1) return Boolean;
> >         procedure P3 (Obj : in T1)
> >            with Pre =>  Is_Valid (Obj);
> >      private
> >         ...
> >      end Pack1;
> >
> >      package Pack2 is
> >         type I2 is interface;
> >         function Is_Green (Obj : I2) return Boolean is abstract;
> >         procedure P3 (Obj : in I2) is null
> >             with Pre'Class =>  Is_Green (Obj);
> >      end Pack2;
> >
> >      with Pack1, Pack2;
> >      package Pack3 is
> >         type T3 is new Pack1.T1 and Pack2.I2 with private;
> >         overriding
> >         function Is_Green (Obj : T3) return Boolean;
> >         -- P3 is inherited from Pack1.P3.
> >      private
> >         ...
> >      end Pack3;
> >
> >      with Pack1, Pack2, Pack3;
> >      procedure Main is
> >          procedure Do_It1 (Obj : in Pack1.T1'Class) is
> >          begin
> >             Obj.P3; -- (1)
> >          end Do_It1;
> >          procedure Do_It2 (Obj : in Pack2.I2'Class) is
> >          begin
> >             Obj.P3; -- (2)
> >          end Do_It2;
> >          O3 : Pack3.T3;
> >      begin
> >          Do_It1 (O3);
> >          Do_It2 (O3);
> >      end Main;
> >
> > Call (1) is to a routine with an ordinary Precondition, so the body
> > always checks the precondition. Call (2) is to a routine with a
> > Pre'Class precondition, but the check is at the call site. But of
> > course these call the same body, so the body must have the code even
> > for call (2). [That code can never fail for call (2), even though it
> > will be executed.]
>
> I'm losing you.  What do you mean by "the body must have the code even
> for call (2)"?

(1) is an ad-hoc dispatching call; it requires that the precondition (Is_Valid
(Obj) or Is_Green (Obj)) is evaluated within the invoked subprogram.

(2) is an Eiffel-like dispatching call; it requires that the precondition
(Is_Green (Obj)) is evaluated at the call site. It does not need to evaluate any
other precondition, and shouldn't evaluate a precondition in the body. Luckily,
evaluating a precondition in the body has to be harmless -- the precondition of
the call has to be part of the larger precondition, and whatever part it is must
be "or"ed (and is True), so the rest is irrelevant -- but it still wastes time.

It should be noted that this scenario would happen even if P3 for T1 had no Pre at all,
so this is a fairly likely case in practice.

> > If we made this sort of thing illegal, this could not happen.

I hope this explains what I'm talking about here.

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 23, 2011  11:00 PM

> Yes, but not because of any explicit intent, but rather the way that
> Pre works. Pre is "or"ed with all of the other Pre'Class aspects that
> are inherited. Since we're using a stronger precondition on such calls
> than that of the invoked subprogram, this necessarily means that any
> precondition of the body must succeed. Thus there is no need to
> evaluate Pre in such cases; if the compiler can prove that all calls
> are either this kind of dispatching calls or statically bound calls,
> it can always generate the preconditions at the call site and do no
> evaluation in the body. That seems like a good thing. ...

I thought we were discussing the possibility of treating Pre'Class and Pre completely
independently.  That is, Pre'Class is "or"ed with other Pre'Class aspects, but the
Pre aspect is not "or"ed with anything.  In other words, the Pre aspect is *always*
checked.  The Pre'Class aspect can only be weakened, thanks to the "or"-ing rules.
The Pre aspect has no such guarantee -- it acts almost exactly like an Assert pragma
placed immediately after the "is" of the body.
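
In other words, something like this (just a sketch, with made-up names):

   package Model is
      type T is tagged null record;
      function In_Range    (Obj : T) return Boolean;
      function Initialized (Obj : T) return Boolean;
      procedure Op (Obj : in out T)
         with Pre'Class => In_Range (Obj),    -- "or"ed with inherited Pre'Class aspects
              Pre       => Initialized (Obj); -- not "or"ed with anything; always checked,
                                              -- like an Assert just after the "is"
   end Model;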

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 23, 2011  11:52 PM

I've only considered such a separation at compile-time (via Legality Rules). That is,
a given subprogram can only have Pre or Pre'Class, never both. But since that wasn't
necessary in order to define the dynamic semantics of calls and especially the fixes
to the bugs that triggered this AI, I just defined that and was continuing to explore
whether or not the Legality Rules were a good idea.

As far as just treating them independently at runtime, that is a whole different kettle
of fish. That effectively means that the Pre and Pre'Class aspects are "and"ed together
-- which is a big change in the model. I suspect that it might in fact be a *better*
model, but it means starting over completely in thinking about all of these examples of
inheritance and the like.

One thing that immediately comes to mind is how Pre'Class is treated vis-a-vis
access-to-subprogram. One of the advantages of the model that I proposed is that
access-to-subprogram is treated the same way as a similar dispatching call (one without
a Pre'Class aspect). I don't see how that could work in the model you are suggesting;
since Pre'Class would be unknown (and thus True), it would be ignored on any
access-to-subprogram call (at least unless we allowed Pre'Class on access-to-subprogram
types). Sounds like a nasty hole.

I'm presuming that in this model, the Pre'Class aspects are "or"ed together, then the
result is "and"ed with Pre. If there is no Pre'Class, it is treated as True.

I'd hope that we continue to have the Pre'Class model as I laid out in my wording proposal
(that we use the Pre'Class of the statically identified subprogram, which is necessarily
the same or stronger than the Pre'Class of the invoked body). And then Pre of the invoked
body would be evaluated.

Please tell me if I understand your model correctly.

I'll definitely have to think about this some more.

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, March 24, 2011  1:29 AM

I think this discussion focuses too much on technical aspects, before the
principles. So here is my view of what this all means.

Pre'Class and Post'Class are the enforcement of the LSP (Liskov Substitution
Principle): if B "is an" A,
then every input acceptable to an A must be accepted by a B, and every output
produced by a B must be something that could have been produced by an A. Hence
the "or" and "and" semantics.

Pre-conditions are a contract imposed on the caller. A failure of a
pre-condition is an error made by the caller. Therefore, it seems logical to
check it at the caller's site.

Post-Conditions are a promise made by the callee. Failure of a post-condition is
a flaw in the callee, therefore it seems logical to check it in the body - even
maybe at the place of the return statement, which would give the possibility to
have an exception handler in case of a failure. (AFAIR, we never discussed the
place of the check in the light of who could handle the exception).

Pre is totally different. It is a special check of a condition that must hold
for a particular subprogram, not transmitted to any one else. I agree with Tuck
that it is basically like an Assert just after the "is" of the body. It's like
explicitly checking for some special value (or combination of values) in the
body of the subprogram. I even think the latter is preferable, since you can
choose which exception is raised, so I see little value in Pre (except that the
test is performed before elaborating the declarative part of the callee).

In short, my model implies:
- Pre'class checked at the call site
- Pre, Post, and Post'class checked in the body.

(Note that the checking of Pre in the body would take care of the "anding" of
the preconditions that worried Randy).

As far as optimization is concerned, the compiler can assume that all Post and
Post'Class hold in determining if checks for Pre'Class can be eliminated.
Nothing can be done for Pre (as for an explicit check, or an Assert in the
body).
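
For example (a sketch, with invented names):

   package Figures is
      type Fig is tagged null record;
      function Fits (Obj : Fig) return Boolean;
      procedure Make_Fit (Obj : in out Fig)
         with Post'Class => Fits (Obj);
      procedure Draw (Obj : in Fig)
         with Pre'Class => Fits (Obj);
   end Figures;

   with Figures;
   procedure Client (X : in out Figures.Fig) is
   begin
      Figures.Make_Fit (X);
      Figures.Draw (X);
      --  knowing the Post'Class of Make_Fit, the compiler may omit the
      --  Pre'Class check for this call; it could not do so for a Pre on Draw
   end Client;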

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 24, 2011  2:06 AM

I'm glad to hear from someone else in this discussion.

I had thought we'd already done the first principles discussion that you lay out
above; I've just been trying to fix the bugs. Apparently all of the previous
discussion is junk (since that previous model has been rejected).

Anyway, I assume that you are talking about dispatching calls in most of the
above. Pre and Pre'Class are equivalent (except for the "and"!) for statically
bound calls. Specifically, you surely can generate Pre at the call point, and
optimize it. It would be pretty bad if you couldn't. Especially as you can't
give Pre'Class on anything other than primitives of tagged types (I think I want
call-site preconditions on other subprograms!).

The current rules proposal does not allow any Pre/Post exceptions to be handled
by the body, and I think that is exactly right. Your assertion that we never
discussed it is wrong; we discussed it ad nauseam at the Tallahassee meeting (I
think you missed that one, probably why you don't know that). 13.3.2(18/3) is
pretty clear that this is the intent. The reason is that for statically bound
cases, we ought to let the compiler have the freedom to generate the checks
wherever it wants (in the body or at the call site) - in order to maximize the
possibilities for optimization/proof. Dispatching calls and access-to-subprogram
are different, of course.

I can't tell from your discussion as to exactly which Pre'Class you expect to
enforce at the call site of a dispatching call. I've proposed that it is the
statically known one, which by definition is stronger than the Pre'Class for the
actually invoked body. Is that what you are intending?? (It definitely isn't
possible to generate the code at the call site unless we specify this.)

Also, it would be helpful if you could explain in your model what you expect to
happen for calls through access-to-subprogram values to subprograms that have
Pre'Class preconditions. This seems to have to work differently than dispatching
calls, which is uncomfortable.

Finally, none of this has any effect on the actual bug fixes; the rules that
Tucker found too "mechanistic". Those rules are needed to fix bugs that only
involve Pre'Class (and Post'Class and Type_Invariant'Class); nothing about
changing the handling of Pre has anything to do with that.

****************************************************************

From: Tucker Taft
Sent: Thursday, March 24, 2011  8:05 AM

> As far as just treating them independently at runtime, that is a whole
> different kettle of fish. That effectively means that the Pre and
> Pre'Class aspects are "and"ed together -- which is a big change in the
> model. I suspect that it might in fact be a *better* model, but it
> means starting over completely in thinking about all of these examples
> of inheritance and the like.

I don't think it is that big a deal, since it means you simply ignore the "Pre"
as far as inheritance is concerned, and only worry about Pre'Class. One reason I prefer this
is that it means "Pre" can be all about the code you are "protecting" with the
precondition, while Pre'Class can be about what the abstract interface requires.

Cyrille from AdaCore finds the whole Pre'Class inheritance "or"-ing thing
unpleasant, and I think there are others who might have the same view, and it
seems potentially a benefit if Pre and Pre'Class are treated independently.

Note that this doesn't come up for Post and Post'Class, since everything is
and-ed together.

> One thing that immediately comes to mind is how Pre'Class is treated
> vis-a-vis access-to-subprogram. One of the advantages of the model
> that I proposed is that access-to-subprogram is treated the same way
> as a similar dispatching call (one without a Pre'Class aspect). I
> don't see how that could work in the model you are suggesting; since
> Pre'Class would be unknown (and thus True), it would be ignored on any
> access-to-subprogram call (at least unless we allowed Pre'Class on
> access-to-subprogram types). Sounds like a nasty hole.

The model I thought we were talking about for access-to-subprogram was that you
effectively created a call at the point of the 'Access, and whatever Pre'Class
or Pre aspects that would be enforced at that point are the Pre'Class/Pre that
would be enforced at the point of any call through that access value.  That
seems very well defined, and yes, does imply a wrapper in some implementations.
So it is really the code calling 'Access that has to worry about whether the
preconditions will be satisfied at the point of any indirect call.  That seems
about right.
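
For instance (a rough sketch of that model, with invented names):

   procedure Demo is
      package P is
         type T is tagged null record;
         function OK (Obj : T) return Boolean is (True);
         procedure Op (Obj : in out T) is null
            with Pre'Class => OK (Obj);
      end P;
      type Proc_Acc is access procedure (Obj : in out P.T);
      A : Proc_Acc := P.Op'Access;
         --  the Pre'Class that applies here, OK (Obj), is the one that governs...
      X : P.T;
   begin
      A (X);  --  ...any call through A (via a wrapper in some implementations)
   end Demo;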

> I'm presuming that in this model, the Pre'Class aspects are "or"ed
> together, then the result is "and"ed with Pre. If there is no
> Pre'Class, it is treated as True.

Yes, that is effectively what I am proposing.

>
> I'd hope that we continue to have the Pre'Class model as I laid out in
> my wording proposal (that we use the Pre'Class of the statically
> identified subprogram, which is necessarily the same or stronger than
> the Pre'Class of the invoked body). And then Pre of the invoked body would be evaluated.

Right, though I would still allow the additional Pre'Class aspects to be
*evaluated*, even if the caller isn't aware of them.

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 24, 2011  1:43 PM

> Cyrille from AdaCore finds the whole Pre'Class inheritance "or"-ing
> thing unpleasant, and I think there are others who might have the same
> view, and it seems potentially a benefit if Pre and Pre'Class are
> treated independently.
>
> Note that this doesn't come up for Post and Post'Class, since
> everything is and-ed together.

OK, I'll think about this in wording terms.

> > One thing that immediately comes to mind is how Pre'Class is treated
> > vis-a-vis access-to-subprogram. One of the advantages of the model
> > that I proposed is that access-to-subprogram is treated the same way
> > as a similar dispatching call (one without a Pre'Class aspect). I
> > don't see how that could work in the model you are suggesting; since
> > Pre'Class would be unknown (and thus True), it would be ignored on
> > any access-to-subprogram call (at least unless we allowed Pre'Class
> > on access-to-subprogram types). Sounds like a nasty hole.
>
> The model I thought we were talking about for access-to-subprogram was
> that you effectively created a call at the point of the 'Access, and
> whatever Pre'Class or Pre aspects that would be enforced at that point
> are the Pre'Class/Pre that would be enforced at the point of any call
> through that access value.  That seems very well defined, and yes,
> does imply a wrapper in some implementations.  So it is really the
> code calling 'Access that has to worry about whether the preconditions
> will be satisfied at the point of any indirect call.  That seems about
> right.

OK, again I'll try to think about this in wording terms.

> > I'm presuming that in this model, the Pre'Class aspects are "or"ed
> > together, then the result is "and"ed with Pre. If there is no
> > Pre'Class, it is treated as True.
>
> Yes, that is effectively what I am proposing.
> >
> > I'd hope that we continue to have the Pre'Class model as I laid out
> > in my wording proposal (that we use the Pre'Class of the statically
> > identified subprogram, which is necessarily the same or stronger
> > than the Pre'Class of the invoked body). And then Pre of the
> invoked body would be evaluated.
>
> Right, though I would still allow the additional Pre'Class aspects to
> be *evaluated*, even if the caller isn't aware of them.

Do we need wording for that? I had thought that was irrelevant (if the compiler
wants to evaluate 50 digits of PI between each statement, it is allowed to do
that). Or is there some issue with side-effects?

Anyway, you need to think about the implicit body wording that you thought was
too "mechanistic". Since that only exists to fix various problems with
Pre'Class/Post'Class/Type_Invariant'Class combinations, and nothing said above
has any effect on those problems, we still need that wording or something
equivalent. I tried to explain how I arrived at that wording in a previous
message, hopefully so that you or someone else could come up with a better idea
of how to approach it wording-wise. Please help (or decide to live with my
proposal)...

****************************************************************

From: Tucker Taft
Sent: Thursday, March 24, 2011  2:57 PM

>> Right, though I would still allow the additional Pre'Class aspects to
>> be *evaluated*, even if the caller isn't aware of them.
>
> Do we need wording for that? I had thought that was irrelevant (if the
> compiler wants to evaluate 50 digits of PI between each statement, it
> is allowed to do that). Or is there some issue with side-effects?

I think the bigger issue is if one of them evaluates to True even though the one
visible to the caller evaluates to False, can we go ahead and execute the body.

> Anyway, you need to think about the implicit body wording that you
> thought was too "mechanistic". Since that only exists to fix various
> problems with Pre'Class/Post'Class/Type_Invariant'Class combinations,
> and nothing said above has any effect on those problems, we still need
> that wording or something equivalent. I tried to explain how I arrived
> at that wording in a previous message, hopefully so that you or
> someone else could come up with a better idea of how to approach it
> wording-wise. Please help (or decide to live with my proposal)...

I'll see if I can come up with something less mechanistic.

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 24, 2011  3:19 PM

> >> Right, though I would still allow the additional Pre'Class aspects
> >> to be *evaluated*, even if the caller isn't aware of them.
> > Do we need wording for that? I had thought that was irrelevant (if
> > the compiler wants to evaluate 50 digits of PI between each
> > statement, it is allowed to do that). Or is there some issue with side-effects?
>
> I think the bigger issue is if one of them evaluates to True even
> though the one visible to the caller evaluates to False, can we go
> ahead and execute the body.

That's not a "bigger issue"; it's the whole point of the rule. The answer is
"NO"! Because there is no way to allow that and still have a check at the call
site. (You can evaluate the preconditions you know about at the call site, but
if you can't raise an exception when they evaluate to False [because some
precondition you don't know about might be True], what is the point? The only
other way to implement that would be to pass in what you know as an additional
parameter, which seems way over the top.)

I've repeatedly said that we have to require the checking of the stronger
precondition, because it is consistent with the caller's view of the world, and
it is the only way to do the check at the call site. Moreover, I'm dubious that
"weakening" the class-wide precondition is useful in practice -- and since
"strenghtening" it violates LIS, in most programs it will be the same all the
way up. (Meaning this only will matter in corner cases.)

The only reason to worry about evaluation of the others is because you want to
generate them in the body (or wrapper) for some reason. But that does not appear
to be necessary (assuming we use my "required body" model for the messy cases).
In that case, why even allow it?

****************************************************************

From: Tucker Taft
Sent: Thursday, March 24, 2011  4:28 PM

I guess what I was considering, at least for a moment, was whether we wanted it
to be "OK" for it be *unspecified* whether an exception is raised in the case
where the weaker Pre'Class is satisfied but the caller-visible Pre'Class is
*not* satisfied.

I admit that sounds pretty weird...  And it sure sounds like bad news for
portability.

So I guess I am convinced that you *must* do Pre'Class checks at the call site,
and only at the call site.

I do think this begins to argue pretty strongly for separating out "Pre" from
Pre'Class, and effectively putting the "Pre" checks in the body (though *not*
handleable! -- they are like a pragma Assert in the outer declarative part of
the subprogram), along with the Post and Post'Class, while the Pre'Class checks
remain strictly at the caller, and are based on the particular "view" denoted by
the caller, not by the body actually reached.

I also think that for access-to-subprogram, if the designated subprogram has a
Pre'Class aspect, the access value generated by 'Access must designate a wrapper
that checks it (unless of course Assertions are being ignored).  Otherwise, a
simple way of bypassing a Pre'Class check would be to declare a local
access-to-subprogram type, and then call through that.  Hardly seems desirable.

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 24, 2011  3:32 PM

...
> The current rules proposal does not allow any Pre/Post exceptions to
> be handled by the body, and I think that is exactly right. Your
> assertion that we never discussed it is wrong; we discussed it
> ad nauseam at the Tallahassee meeting (I think you missed that one,
> probably why you don't know that).
> 13.3.2(18/3) is pretty clear that this is the intent. The reason is
> that for statically bound cases, we ought to let the compiler have the
> freedom to generate the checks wherever it wants (in the body or at
> the call site) - in order to maximize the possibilities for
> optimization/proof.
> Dispatching calls and access-to-subprogram are different, of course.

One additional point I thought of on the way home last night: the body of a
subprogram does not necessarily know about all of the postconditions that apply
to it. Additional ones can be added to inherited versions via progenitors. Those
also have to be evaluated, and I can't think of any sane way to do that so that
the exception could be handled within the body. (You could pass in a thunk with
such postconditions, but that would be a lot of overhead for something that is
going to happen very rarely.) The other possibility would be to make such
subprograms illegal in some way (one of the options I already considered for
this case); but that requires users to write wrappers by hand, with all of the
possibilities for error that entails.

****************************************************************

From: Tucker Taft
Sent: Thursday, March 24, 2011  4:14 PM

I agree we discussed the issue of handling pre/post-condition exceptions and
concluded they should *not* be handle-able in the body. Let's not re-open that
one.

****************************************************************

From: Robert Dewar
Sent: Thursday, March 24, 2011  6:03 PM

> One additional point I thought of on the way home last night: the body
> of a subprogram does not necessarily know about all of the
> postconditions that apply to it. Additional ones can be added to
> inherited versions via progenitors. Those also have to be evaluated,
> and I can't think of any sane way to do that so that the exception
> could be handled within the body. (You could pass in a thunk with such
> postconditions, but that would be a lot of overhead for something that
> is going to happen very rarely.) The other possibility would be to
> make such subprograms illegal in some way (one of the options I
> already considered for this case); but that requires users to write
> wrappers by hand, with all of the possibilities for error that entails.

If true, this is very worrisome, I don't see any easy way for us to accommodate
that in our implementation

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 24, 2011  7:11 PM

The model that Tucker and I have been discussing is that for such inherited
subprograms, the semantics is as if there was an explicit body for the routine
(complete with the expected postcondition checks, if you are doing them inside
of the body), which just calls the parent routine. This seems to be about as
hard to implement as the other cases where you have to generate a wrapper for a
tag or a use of 'Access, so I don't think there can be a major implementation
problem. If there is, however, we probably should use the "requires overriding"
solution instead (which would insist that the client write an explicit body for
the routine).
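
Roughly, as a sketch (all names invented): given

   package Parents is
      type Parent is tagged null record;
      procedure Op (Obj : in out Parent);
   end Parents;

   package Greens is
      type Green is interface;
      function Is_Green (Obj : Green) return Boolean is abstract;
      procedure Op (Obj : in out Green) is null
         with Post'Class => Is_Green (Obj);
   end Greens;

   with Parents, Greens;
   package Kids is
      type Kid is new Parents.Parent and Greens.Green with null record;
      overriding function Is_Green (Obj : Kid) return Boolean;
      -- Op is inherited from Parents.Op
   end Kids;

the inherited Op for Kid behaves as if Kids had provided

   overriding
   procedure Op (Obj : in out Kid) is
   begin
      Parents.Op (Parents.Parent (Obj));  -- just calls the parent routine
   end Op;

with the Post'Class inherited from Greens (Is_Green (Obj)) then checked for it
exactly as it would be for an explicitly declared overriding.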

****************************************************************

From: Tucker Taft
Sent: Thursday, March 24, 2011  7:44 PM

> If true, this is very worrisome, I don't see any easy way for us to
> accommodate that in our implementation

Randy was simply giving more reasons why exceptions arising from postconditions
shouldn't be handleable in the body.  But we decided a long time ago that these
exceptions are not handle-able inside the body, so this whole thread is merely
reinforcing that decision.

Actually adding the additional postcondition checks isn't that hard, it just
means creating a wrapper.  But Randy's point was that such a wrapper cannot
raise an exception that could be handled inside the body it is wrapping!  But
who cares, since we don't want the exception to be handle-able anyway.

****************************************************************

From: Tucker Taft
Sent: Thursday, March 24, 2011  5:24 PM

> I also think that for access-to-subprogram, if the designated
> subprogram has a Pre'Class aspect, the access value generated by
> 'Access must designate a wrapper that checks it (unless of course
> Assertions are being ignored). Otherwise, a simple way of bypassing a
> Pre'Class check would be to declare a local access-to-subprogram type,
> and then call through that. Hardly seems desirable.

Alternatively, we say that a subprogram to which a Pre'Class aspect applies has
effectively an "Intrinsic" convention, so you can't take 'Access of it.  That
would mean the compiler wouldn't have to create the wrapper.  The user could of
course always create an explicit wrapper if they needed one.
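
E.g., a sketch (invented names) of the explicit-wrapper workaround:

   procedure Wrapper_Demo is
      package P is
         type T is tagged null record;
         function OK (Obj : T) return Boolean is (True);
         procedure Op (Obj : in out T) is null
            with Pre'Class => OK (Obj);  -- so, under this idea, P.Op'Access is illegal
      end P;
      type Acc_Proc is access procedure (Obj : in out P.T);
      procedure Op_Wrapper (Obj : in out P.T) is
      begin
         P.Op (Obj);  -- the Pre'Class check happens at this statically bound call
      end Op_Wrapper;
      A : Acc_Proc := Op_Wrapper'Access;  -- fine; the wrapper is not Intrinsic
      X : P.T;
   begin
      A (X);
   end Wrapper_Demo;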

OK, this brings up another Baird-ish question:  What happens if you rename a
subprogram to which a Pre'Class applies. Does it still apply?  If you rename it
to be something that is not a dispatching operation, then it is only interesting
if it is non-abstract, and you can only call it in a non-dispatching way, so it
feels like it should still be subject to the same Pre'Class check.  But we have
a rule that you can't specify a Pre'Class attribute on anything but a
dispatching operation.  But we aren't really specifying it, we are just carrying
it over.  It would again seem weird if you could get rid of the effect of a
Pre'Class aspect simply by renaming the subprogram. Perhaps we are adopting the
renaming-as-body semantics, which is essentially a wrapper model.

OK, now here is the killer follow-up:  What happens if you rename it to be
another dispatching operation, and it overrides an inherited subprogram which
had its own Pre'Class attribute? This feels like trouble, and almost argues for
the same trick suggested above for access-to-subprogram, namely disallow the
rename, since effectively the "conventions" don't match, where "convention" has
now been augmented to include the Pre'Class aspect, since that certainly affects
what you have to do at the call site.
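
For instance (sketch, invented names):

   package Widgets is
      type Widget is tagged null record;
      function Is_Ready (Obj : Widget) return Boolean;
      procedure Op (Obj : in Widget)
         with Pre'Class => Is_Ready (Obj);
   end Widgets;

   with Widgets;
   package Gadgets is
      type Gadget is new Widgets.Widget with null record;
      function Is_Set (Obj : Gadget) return Boolean;
      procedure Helper (Obj : in Gadget)
         with Pre'Class => Is_Set (Obj);
      overriding procedure Op (Obj : in Gadget) renames Helper;
      --  the inherited Op carried Is_Ready; Helper carries Is_Set --
      --  which Pre'Class should a dispatching call now enforce?
   end Gadgets;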

The simplest solution to all of these problems might be to say that subtype
conformance (which is needed for 'Access, overriding, and renaming-as-body)
requires that the "same" Pre'Class aspects, if any, apply to both subprograms
(for overriding, clearly this has to allow a new Pre'Class aspect to be applied
*after* the overriding occurs -- which is sort of OK, since aspect specs don't
get elaborated until the freezing point).

This also implies that if some day we allow Pre'Class aspects on access-to-subp,
then that might be a way to support 'Access on these guys (though it doesn't
seem like an important capability at this point).

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 24, 2011  7:21 PM

Fascinating. I proposed something like this a while back, but rejected it as too
outlandish. I personally would go further and *not* allow further specification
of Pre'Class (or Post'Class or Type_Invariant'Class for that matter). Then there
is no combination cases, no one is confused by "or" or "and". We don't need
implicit bodies when there is multiple inheritance. If you need to add some
precondition, you still have Pre/Post/Type_Invariant.

This indeed is my model of how these should work -- modifying Pre'Class or
Post'Class on a dispatching routine just does not make sense to me -- we don't
allow that for any other property. And I cannot think of any significant reason
why you would want to do this in practice. (Maybe there is some reason relating
to multiple inheritance -- I don't think too hard about that.)

But I thought all of this would end up way too limiting. Especially because
there isn't any way for it to allow a subprogram with no precondition to match, making it much
harder to match interfaces. And I can't see any way to add Pre'Class to subtype
conformance and then somehow not have it apply to an overriding routine. That
would be a major change in model -- it would invalidate 3.9.2(10/2) which seems
to be the cornerstone of Ada OOP. (At least of the implementability of Ada OOP.)

****************************************************************

From: Tucker Taft
Sent: Thursday, March 24, 2011  7:59 PM

> Fascinating. I proposed something like this a while back, but rejected
> it as too outlandish. I personally would go further and *not* allow
> further specification of Pre'Class (or Post'Class or
> Type_Invariant'Class for that matter). Then there are no combination
> cases, and no one is confused by "or" or "and". We don't need implicit
> bodies when there is multiple inheritance. If you need to add some precondition, you still have Pre/Post/Type_Invariant.

I am *not* recommending that.  You can still override, but you can't do so by
renaming another dispatching operation that already has a Pre'Class. And you
*can* weaken the Pre'Class if you wish.

I think that is important because the Pre'Class on the root type might be too
strong for some important subtree of the type hierarchy. That subtree should be
able to have a weaker Pre'Class if it makes sense in that subtree.
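
Mechanically, that weakening is just (a sketch, with invented names):

   package Shapes is
      type Shape is tagged null record;
      function Fits_On_Page (S : Shape) return Boolean;
      procedure Draw (S : Shape)
         with Pre'Class => Fits_On_Page (S);
   end Shapes;

   with Shapes;
   package Scalables is
      type Scalable is new Shapes.Shape with null record;
      overriding procedure Draw (S : Scalable)
         with Pre'Class => True;
      --  effective class-wide precondition for Scalable:
      --  Fits_On_Page (S) or True, i.e. anything is acceptable in this subtree
   end Scalables;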

> This indeed is my model of how these should work -- modifying
> Pre'Class or Post'Class on a dispatching routines just does not make
> sense to me -- we don't allow that for any other property.

I don't follow you when you say "any other property."  To what are you
referring?  Clearly the 'Address, the names of the formal parameters, the
default expressions, whether the routine is inlined, and most importantly the
code itself, can change with the overriding.

> ... And I cannot think of any
> significant reason why you would want to do this in practice. (Maybe
> there is some reason relating to multiple inheritance -- I don't think
> too hard about that.)

I don't agree; weakening the Pre'Class can make sense as you go into a
particular subtree of the type hierarchy.

> But I thought all of this would end up way too limiting. Especially
> because there isn't any way for it to allow no precondition to match,
> making it much harder to match interfaces.

I believe you are going too far.  I was using the subtype conformance as a way to
eliminate various unpleasant cases that require wrappers. I am not suggesting we
break the whole model, just that we effectively outlaw things like 'Access and
weird renamings.

> ... And I can't see any way to add Pre'Class to subtype conformance
> and then somehow not have it apply to an overriding routine.

This just requires a wording tweak in my view.  Para 3.9.2(10/2) now says:

    ... If the dispatching operation overrides an inherited
    subprogram, it shall be subtype conformant with the inherited
    subprogram. ...

If we add Pre'Class matching to subtype conformance, then that is almost exactly
what we want.  But after the overriding takes place, we want to allow further
weakening of the Pre'Class, so if necessary we add something to the above
sentence to make it clear that that is OK.

> ... That would be a major change in model -- it would invalidate
> 3.9.2(10/2) which seems to be the cornerstone of Ada OOP. (At least of
> the implementability of Ada OOP.)

I'm not suggesting we drop the above sentence, but rather have it say something
about Pre'Class if necessary to allow weakening it, but not replacing the
Pre'Class via a renaming of some other dispatching operation which has a
completely different Pre'Class.

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 24, 2011  8:31 PM

> I don't follow you when you say "any other property."  To what are you
> referring?  Clearly the 'Address, the names of the formal parameters,
> the default expressions, whether the routine is inlined, and most
> importantly the code itself, can change with the overriding.

"Interesting" property visible in the contract: constraints, no-return, not
null, predicates, "Is_Synchronized", etc.

My personal feeling is that allowing the parameter names and defaults to change
was a mistake, but not a very important one.

> > ... And I cannot think of any
> > significant reason why you would want to do this in practice. (Maybe
> > there is some reason relating to multiple inheritance -- I don't
> > think too hard about that.)
>
> I don't agree; weakening the Pre'Class can make sense as you go into a
> particular subtree of the type hierarchy.

I would like to see an example of why you would want this in practice (not in
theory!!). There is a lot of OOP stuff that makes sense in theory but never
actually happens in practice. Needing to "weaken" the precondition seems to me
like there was a bug in the original definition of the precondition (most
likely, the boolean functions that make it up are not defined properly). But I
am willing to believe that I am wrong -- but to do that I'd need to see an
example for which the weaker precondition makes sense.

> > But I thought all of this would end up way too limiting. Especially
> > because there isn't any way for it to allow a subprogram with no
> > precondition to match, making it much harder to match interfaces.
>
> I believe you are going too far.  I was using the subtype conformance
> as a way to eliminate various unpleasant cases that require wrappers.
> I am not suggesting we break the whole model, just that we effectively
> outlaw things like 'Access and weird renamings.

Fine, but then you can't do it with subtype conformance, since overriding
routines are required to be subtype conformant.

> > ... And I can't see any way to add Pre'Class to subtype conformance
> > and then somehow not have it apply to an overriding routine.
>
> This just requires a wording tweak in my view.  Para
> 3.9.2(10/2) now says:
>
>     ... If the dispatching operation overrides an inherited
>     subprogram, it shall be subtype conformant with the inherited
>     subprogram. ...
>
> If we add Pre'Class matching to subtype conformance, then that is
> almost exactly what we want.  But after the overriding takes place, we
> want to allow further weakening of the Pre'Class, so if necessary we
> add something to the above sentence to make it clear that that is OK.

Then you're breaking the definition of subtype conformance. That seems really
bad to me; in large part because it seems likely that you'll want other such
exceptions in the future. It just turns into a morass.

> > ... That would be a major change in model -- it would invalidate
> > 3.9.2(10/2) which seems to be the cornerstone of Ada OOP. (At least
> > of the implementability of Ada OOP.)
>
> I'm not suggesting we drop the above sentence, but rather have it say
> something about Pre'Class if necessary to allow weakening it, but not
> replacing the Pre'Class via a renaming of some other dispatching
> operation which has a completely different Pre'Class.

That just does not make any sense. You're saying that it has to be subtype
conformant, but it is OK for it not to be subtype conformant sometimes. That way
lies madness (mine :-).

I do agree that requiring that always is too strong a restriction, so I think
you have to change something else. Perhaps we need a fifth kind of conformance
(subtype conformance minus Pre'Class) -- call it "overriding conformance", which
is the current definition of subtype conformance. Then add "subtype conformance",
which would be the current kind plus Pre'Class??

****************************************************************

From: Bob Duff
Sent: Thursday, March 24, 2011  7:24 PM

> If true, this is very worrisome, I don't see any easy way for us to
> accommodate that in our implementation

Is there any hope that I can get through this email thread without reading all
of it?  Randy and Tuck have been going at it for 3 days, and I've saved the
emails unread. Do I need to read all the emails?

I think it's important and I need to understand the issues, but I feel
overwhelmed.  Any possibility of a summary?

P.S. I think we should at least ALLOW preconditions to be checked at the call
site.

****************************************************************

From: Tucker Taft
Sent: Thursday, March 24, 2011  8:13 PM

Here is what we are debating:

    1) Must Pre'Class be checked at the call site,
       or could it be done inside the body?
        Answer: we seem to agree it only makes sense
          if it is done at the point of call, and only
          the Pre'Class aspects known at the point of call
          are checked.

    2) Should specifying a Pre'Class have any effect on
       the "Pre" aspects?
         Tuck's Answer: No, let's split them completely.
            This allows (and pretty much requires) "Pre" aspects
            to be checked in the body.
         Randy's Answer: Not sure.

    3) What are some of the implications of renaming and taking
        'Access of a subprogram to which Pre'Class applies?
           Tuck's Answer: Make hard stuff illegal, by incorporating
             Pre'Class matching into subtype conformance, but still
             allow Pre'Class weakening.
           Randy's Answer: Not sure.  Maybe disallow Pre'Class weakening
             completely.

One other issue that seems clear is that when you inherit code for an operation
from a parent type, and you also inherit the "same" operation from one or more
interfaces where the operation has a Post'Class aspect, you will probably have
to create a wrapper to properly incorporate the inherited Post'Class aspects
into the inherited code, since they all get and-ed together.

Randy can fill in more if I have forgotten something.

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 24, 2011  8:43 PM

>     2) Should specifying a Pre'Class have any effect on
>        the "Pre" aspects?
>          Tuck's Answer: No, let's split them completely.
>             This allows (and pretty much requires) "Pre" aspects
>             to be checked in the body.
>          Randy's Answer: Not sure.

I think you've convinced me on this one.

>     3) What are some of the implications of renaming and taking
>         'Access of a subprogram to which Pre'Class applies?
>            Tuck's Answer: Make hard stuff illegal, by incorporating
>              Pre'Class matching into subtype conformance, but still
>              allow Pre'Class weakening.
>            Randy's Answer: Not sure.  Maybe disallow Pre'Class weakening
>              completely.

I'm not against the idea, I'm against the execution. I'm surely in favor of
making hard stuff illegal!

> One other issue that seems clear is that when you inherit code for an
> operation from a parent type, and you also inherit the "same"
> operation from one or more interfaces where the operation has a
> Post'Class aspect, you will probably have to create a wrapper to
> properly incorporate the inherited Post'Class aspects into the
> inherited code, since they all get and-ed together.

This also applies to Type_Invariant'Class (it works like Post'Class in that
sense).

This latter issue is the "bug"; this has to be addressed or we will have
problems with missed Pre'Class/Post'Class in inherited operations (bodies that
get called without their preconditions being satisfied are really bad). The other
things are all about making the model better, we could live without them but
everyone will be happier if we don't.

> Randy can fill in more if I have forgotten something.

I think that is it. Lots of examples in my various e-mails; it will help to go
back and make sure those all make sense in the end.

****************************************************************

From: Randy Brukardt
Sent: Friday, March 25, 2011  10:53 PM

I've been trying to work on the wording for this AI, and...

...
> The simplest solution to all of these problems might be to say that
> subtype conformance (which is needed for 'Access, overriding, and
> renaming-as-body) requires that the "same"
> Pre'Class aspects, if any, apply to both subprograms (for overriding,
> clearly this has to allow a new Pre'Class aspect to be applied *after*
> the overriding occurs -- which is sort of OK, since aspect specs don't
> get elaborated until the freezing point).

I don't understand why you only are worrying about renames-as-body here. Doesn't
renames-as-declaration have the same problem? An overriding is not a completion,
after all, it is a new declaration of a homograph.

The problem in question is renaming a routine that has one or more class-wide
preconditions as an overriding of another routine that has one or more different
class-wide preconditions. We can only allow weakening, but this is not likely to
be that. I don't see any reason that you can't use a renames-as-declaration in
such a context.

("Class-wide preconditions" is the nice name I gave Pre'Class in my wording
proposal - trying to talk about Pre'Class separately from Pre is beyond what I
can do wording-wise.)

If that's the case, modifying subtype conformance does not do the trick, since
renames-as-declaration only needs mode-conformance. We probably need separate
legality rules in that case.

The basic problem (both for renames and for access-to-subprogram) is that we
don't allow specifying the class-wide precondition, so we have to assume "True"
and that starts to get weird.

===============

The aspects of a renames are the same as the renamed entity (unless otherwise
specified), so there does not seem to be a problem with any sort of renames that
isn't overriding. So I have to think we just want a hack to cover that case.

Indeed, all we really need to do is declare that a renames is just a shorthand
for an explicit body in this case; indeed this is exactly the same sort of
problem that occurs via multiple inheritance. We should solve these the same way
(either make both illegal with a specific rule, or just use the rules I've
previously laid out).

The access-to-subprogram also might as well get a specific Legality Rule.

****************************************************************

From: Randy Brukardt
Sent: Friday, March 25, 2011  11:59 PM

Attached find a new draft of the AI (version /03) covering these issues. I fear
that I didn't do a very good job of explaining the overall model, but I was
concentrating on the wording changes needed. Now back to pragmas => aspects.

****************************************************************

From: Robert Dewar
Sent: Saturday, March 26, 2011  5:20 AM

>> Cyrille from AdaCore finds the whole Pre'Class inheritance "or"-ing
>> thing unpleasant, and I think there are others who might have the
>> same view, and it seems potentially a benefit if Pre and Pre'Class
>> are treated independently.

I also dislike the Pre'Class oring!
And I definitely think they should be treated independently.
You really want to be able to specify Pre that means what it says, and not have
it interfered with by peculiar incomprehensible (to me and others) "or"ing.

****************************************************************

From: Robert Dewar
Sent: Saturday, March 26, 2011  5:21 AM

> The model that Tucker and I have been discussing is that for such
> inherited subprograms, the semantics is as if there was an explicit
> body for the routine (complete with the expected postcondition checks,
> if you are doing them inside of the body), which just calls the parent
> routine. This seems to be about as hard to implement as the other
> cases where you have to generate a wrapper for a tag or a use of
> 'Access, so I don't think there can be a major implementation problem.

At first reading, sounds reasonable, to be investigated

****************************************************************

From: Robert Dewar
Sent: Saturday, March 26, 2011  5:22 AM

> Actually adding the additional postcondition checks isn't that hard,
> it just means creating a wrapper.  But Randy's point was that such a
> wrapper cannot raise an exception that could be handled inside the
> body it is wrapping!  But who cares, since we don't want the exception
> to be handle-able anyway.

Yes, sounds ok

****************************************************************

From: Robert Dewar
Sent: Saturday, March 26, 2011  5:22 AM

> P.S. I think we should at least ALLOW preconditions to be checked at
> the call site.

of course ... no one suggested otherwise, we can't handle the exceptions within
the body, everyone agrees on that.

****************************************************************

From: Tucker Taft
Sent: Saturday, March 26, 2011  10:28 AM

> I don't understand why you only are worrying about renames-as-body here.
> Doesn't renames-as-declaration have the same problem? An overriding is
> not a completion, after all, it is a new declaration of a homograph.
>
> The problem in question is renaming a routine that has one or more
> class-wide preconditions as an overriding of another routine that has
> one or more different class-wide preconditions. We can only allow
> weakening, but this is not likely to be that. I don't see any reason
> that you can't use a renames-as-declaration in such a context.
>
> ("Class-wide preconditions" is the nice name I gave Pre'Class in my
> wording proposal - trying to talk about Pre'Class separately from Pre
> is beyond what I can do wording-wise.)
>
> If that's the case, modifying subtype conformance does not do the
> trick, since renames-as-declaration only needs mode-conformance. We
> probably need separate legality rules in that case. ...

Subtype conformance is required when overriding an inherited dispatching
operation, per 3.9.2(10/2).  This rule is independent of whether the overriding
is by a simple declaration or by a renaming.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011  11:52 PM

OK, but that's the case where we can't change subtype conformance, because we
want to be able to add an additional Pre'Class to weaken the precondition. So
that doesn't help any.

But no matter, I gave up completely on this conformance model in the draft AI I
created. Changing conformance has the potential of causing weird effects for
formal subprograms and it doesn't seem worth the hassle, especially as the
"implicit body" model fixes the renames problems just as cleanly as it fixes the
three other oddities.

So I just wrote an explicit legality rule for 'Access, and a bit of wording to
invoke the "implicit body" model for any overriding rename, and we're all set.

At least until you try to replace the "mechanistic" "implicit body" wording with
something prettier but fiendishly complex. :-)

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011  11:40 PM

[Editor's note: On Monday morning, Bob Duff sent a message that quoted the
entire 30K+ AI in order to make a number of typo corrections and wording
improvements. These have all been addressed in version /04, except for
a few below that are mentioned in the following mail: ">>" is the original
AI wording, ">" is Bob's comments, and unquoted stuff is this reply.]

> > !standard 13.3.2(19/3)                                 11-03-25
> > AI05-0247-1/03
>
> Thanks for writing this up.  I haven't (yet?) read the long discussion
> that led up to this, but I've got a bunch of (mostly editorial)
> comments below.

And thanks for reading this and making an attempt to understand it.

> I have a feeling we're re-treading the same ground covered by Meyer /
> Eiffel.

Probably, but Ada also has legacy issues that aren't necessarily present in
Eiffel. In particular, we have to do something useful for existing code that
doesn't have preconditions defined.

> I think this AI addresses some major concerns I had several months ago
> (but never got around to bringing up, because I didn't fully
> understand the issues).

Good to hear.

...
> >     with Pack1, Pack2;
> >     package Pack3 is
> >        type T3 is new Pack1.T1 and Pack2.I2 with private;
> >        overriding
> >        function Is_Green (Obj : T3) return Boolean;
> >        overriding
> >        procedure P2 (Obj : in T3);
> >        overriding
> >        function Something_Green return T3;
> >        -- P1 and P3 are inherited from Pack1.P1 and Pack1.P3,
> >        -- respectively.
>
> Not "respectively".  The Pack1 versions override in both cases.

Not sure what you mean here. I meant P1 is inherited from Pack1.P1, and P3 is
inherited from Pack1.P3. Pack2 is not mentioned, and I wasn't interested in the
details as to why (irrelevant for this AI). I suppose I could have written this
as two separate comments, but I fail to see anything wrong with the one I have.

> >     with Pack1, Pack3;
> >     procedure Main is
> >         procedure Do_It (Obj : in Pack3.T3'Class) is
> >         begin
> >            Obj.P1; -- (1)
>
> P1 takes an 'in out', but Obj is constant.

Oops. Fixed.

> I wonder if these sorts of examples would be clearer if we avoid
> prefix notation calls?!

Unless you want to confuse the heck out of yourself, you'll never use anything but
prefix calls for dispatching in Ada 2005 or later. The alternative is lots of use
clauses (which is not going to help make these examples more understandable), or
lengthy prefixes which are really hard to get right. (And I'm not quite sure
what the prefix needs to be here...)

...
> > It has been suggested that the contract aspects of an inherited
> > routine ought to be considered the same as those for an overriding
> > routine declared at the same point. (This would require nothing more
> > than the compiler generating a wrapper at this point.) But that brings
> > up a new problem. Consider the following:
> >
> >     with Pack1, Pack2, Pack3;
>
> Remove Pack1.  The whole point of the example is that we don't know
> about Pack1 here.

OK.

...
> > Note that if we are using Pre rather than Pre'Class, then the
> > preconditions of a dispatching call need have any real relationship.
> > In that case, the
>
> "need not"

Oops again.

> > Finally, the wording given in AI05-0145-2 makes it clear that Pre
> > and Post are not inherited. However, nothing is said about what
> > happens to these when a subprogram is inherited. With the
> > AI05-0145-2 wording, it appears. Clearly,
>
> What appears what?

"that there is no inheritance of Pre and Post".

> > we
> > want to evaluate the Pre and Post values associated with a
> > subprogram when that subprogram is inherited.

I added another phrase here:

   , otherwise the body of the subprogram could be called without the
   preconditions it is expecting.

...
> > Add at the end of 3.10.2(32/2):
> >
> > If P has one or more class-wide precondition expressions that apply
> > to it, at least one of those expressions shall be the enumeration
> > literal True.
>
> "shall statically denote True"?

A precondition aspect represents an expression, not a value. Thus, we have to
word this in terms of the expression, not the value that we might evaluate it to
have. In particular, my intent was to require that this expression is exactly
the Boolean enumeration literal True. I suppose it would be OK to have used
"statically denotes" instead (that allows renames of True):

shall statically denote the Boolean enumeration literal True.

I didn't want to call this "True" by itself (that invites confusion with the
value), and "literal True" doesn't make much sense, so I used the entire long
thing.

> > AARM Reason: An access to access-to-subprogram type does not have a
>
> Too many access-to's.

Access 3?? :-)

...
> > Modify 13.3.2(2-5/3):
> >
> > Pre
> > This aspect [specifies]{defines} a {specific} precondition for a
> > callable entity; it shall be specified by an expression, called a
> > {specific} precondition expression. {If not specified for an entity,
> > the specific precondition expression for the entity is the
> > enumeration literal True.}
>
> "enumeration literal True" --> "True" (and below)

See above. This is *not* a value, but an expression consisting of nothing but
the Boolean enumeration literal True. I figured I could leave out "Boolean"
since that is the type of the expression, but the other part is necessary to
emphasize the distinction.

...

> > Modify 13.3.2(6/3):
> >
> > The expected type for [a]{any} precondition or postcondition
> > expression is any boolean type.
>
> I think "a" is just as clear.

OK, but I disagree; there are two different preconditions, and we never defined
"precondition expression" to mean both kinds. That seemed like overkill, so I
used "any" consistently to mean all of the kinds.

...
> > * The specific precondition check begins with the evaluation of the specific
> >   precondition expression that applies to the subprogram or entry;
> >   if the expression evaluates to False, Ada.Assertions.Assertion_Error is
> >   raised.
>
> "Ada.Assertions.Assertion_Error" --> "Assertion_Error".
> We don't normally spell out full expanded names; it's just noise.  For
> example, in A.10.5:
>
> 11        The exception Mode_Error is propagated if the mode is not In_File.
>           The exception End_Error is propagated if an attempt is made to read
>           a file terminator.
>
> (I don't see any difference between "raised" and "propagated".)

We treated Assertion_Error differently for some reason (see 11.4.2(18/2)).
Probably because no one would have a clue where this exception is declared if we
didn't give the prefix (it is not anywhere near here, and it is not in the same
package as in the case given above). But the "Ada." prefix shouldn't be there
(we never write that).

And there is a difference between "raised" and "propagated". Subprograms
"propagate" exceptions; checks "raise" exceptions. This is a check, so the
exception is raised.

(Why you Ada 95 guys wrote the text that way I don't know, but reviewers beat on
me until all of the containers were consistent with the above.)

...
> > The order of performing the checks is not specified, and if any of
> > the class-wide precondition expressions evaluate to True, it is not
> > specified whether the other class-wide precondition expressions are
> > evaluated. It is not specified whether any check for elaboration of
> > the subprogram body is performed before or after the precondition
> > checks. It is not specified whether in a call on a protected
> > operation, the checks are performed before or after starting the
> > protected action. For a task or protected entry call, the checks are
>
> "task or protected entry call" --> "entry call"
>
> > performed
> > prior to checking whether the entry is open.

That's existing Tucker wording which I left unchanged. I assume he was
reinforcing some idea here (probably the intent that it applies to both) with
the redundancy.

...
> > If the Assertion_Policy in effect at the point of a subprogram or
> > entry declaration is Check, then upon successful return from a call of
> > the subprogram or entry, prior to copying back any by-copy in out or
> > out parameters, the postcondition checks is performed. This consists
> > of the evaluation of the
>
> "are"

"is". :-). More accurately, "postcondition check is" - there is only one
postcondition check (everything is lumped together because there is no need to
talk about them individually). For a while I had separated the checks, but I
obviously left some vestige of that model when I switched back.

...
> > AARM Ramification: This applies to access-to-subprogram calls, dispatching
> > calls, and to statically bound calls. We need this rule to cover
> > statically bound calls as well, as Pre preconditions and Post
> > postconditions are not inherited, but
>
> "Pre preconditions and Post postconditions" --> "specific
> pre- and postconditions"

Yes, that's better. Although I was trying to remind the reader of what that
means. But they probably should go look it up if they are confused.

...
> > Notwithstanding what this standard says elsewhere, an inherited subprogram S
> > (or multiple conformant inherited subprograms) that has one or
> > more class-wide precondition or postcondition expressions that apply
> > to S, or has a specified Type_Invariant'Class (see 13.3.3), that is
> > not explicitly overridden, and that is primitive for a type that has
> > one or more progenitors is equivalent to an overriding subprogram S whose
> > body consists of:
> >    * The null statement, if all of the inherited subprograms are
> >      null subprograms;
> >    * A call on the parent subprogram, with all of the parameters the same
> >      as the parameters to S (possibly with appropriate type conversions).
>
> Do we need, "The same rule applies to functions (possibly with
> appropriate type conversion of the result)"?

Why? This wording is about primitive *subprograms*, and those surely include
functions with controlling results. The type conversion wording can be read to
apply to everything. I could see expanding that a bit to say "(possibly with
appropriate type conversions on the parameters and result if any)", but that's
all that could be needed.

> > If a subprogram renaming overrides one or more inherited
> > subprograms, the
> > renaming is equivalent of an subprogram whose body calls the renamed
> > subprogram, with
>
> "an" --> "a"
>
> > all
> > of the parameters the same as the parameters to S (possibly with appropriate
> > type
> > conversions).
>
> Functions?

Same change as above (not that I think it is necessary, but it isn't worth the
argument).

> > AARM Ramification: We really mean equivalent here! The call on the parent
> > subprogram will evaluate preconditions as needed. And a dispatching
> > call will call this body, rather than that of one of the inherited
> > subprograms. Note that if the routines are not all null, then the
> > subprogram inherited for the parent type must be concrete (not
> > abstract or null), so that is the one we want to call.
>
> For some reason, the "really mean it" part rubs me the wrong way.
> Can we say, "This equivalence implies that..."?

We could, but this is a nod to the usual "we said 'equivalent', but we really
meant pretty similar" that always seems to happen. Here, we really did mean
"equivalent" in the literal sense of the word. So it is an ARG inside remark.

I made the change and added an "Editor's Note" to mention the difference. If
Tucker ever has his "better idea", this will all be different anyway.

> > AARM Discussion: Only one implicit overriding subprogram is created
> > for a
> > single set of homoegraphs.
>
> "homographs".

Yeah, I already saw that.

> But I think it might be better to say "single set of inherited
> subprograms".

Not sure. I used "homographs" because it is formally clear that the set includes
only routines with the proper profile. "Inherited subprograms" could mean that,
or all of the routines with the same name, or all of the routines in total. I
didn't want any confusion. Maybe "inherited homographs"? Or "inherited
subprograms that are homographs"? "inherited homographic subprograms"? :-)

> > AARM Reason: These rules eliminates four problems: (1) the issue that two
>
> "eliminate"
>
> "problems" --> "potential problems" (they're not problems anymore,
> once we adopt this AI!)

OK.

> > null subprograms (one of which is chosen for dispatching arbitrarily by
> > 3.9.2(20.3/3)) may not be equivalent at runtime; (2) "counterfeiting"
>
> "may not be equivalent at runtime" --> "might have different
> contracts"?

"contract" is not an RM technical term (at least in this sense), so I didn't
want to use that term here. "might have different Pre'Class, Post'Class, and
Type_Invariant'Class aspects" is better, but the latter part is possibly
confusing.

"might have different contract aspects that apply" would be better, but again
"contract aspects" is a term that isn't defined.

"might have different aspects that apply" seems vague. Besides, the fact that
there could be different behavior for two null subprograms is the crux of the
issue and needs to be mentioned somehow.

I used "might have different aspects that apply, causing different runtime
behavior", but there might be something better possible.

...
Following this are several suggestions that I used unmodified.

...
> > We fix these both by including Pre'Class in conformance checking, such that
>
> I don't see anything about conformance in this AI.  Maybe this
> discussion section is obsolete?

Just this paragraph. Tucker had suggested this model, but I discovered that the
"implicit body" model fixed the problems with renames -- and without making
anything illegal.

I almost used the same rule for 'Access, but didn't solely because it seemed a
bit more confusing there.

****************************************************************

From: Edmond Schonberg
Sent: Monday, March 28, 2011  12:59 PM

Like some others, I am overwhelmed by the details of the discussion on the exact
semantics of Pre'Class and Post'Class.  I am concerned about the difficulties of
presenting this to users in an intuitive fashion. I am also concerned that,
with different rules for Pre'Class and Pre, the danger is that once again (as
Erhard remarked in connection with aspects vs. attributes) the perception will
be that Ada always offers two ways of doing roughly the same thing.  This
perception may be misguided, but the complications are plenty real.

So here is a timid suggestion:  we agree that Pre and Post by themselves are
little more than assert statements, and the real meat is in the inheritable and
combinable aspects. So what about renaming Pre'Class and Post'Class as Pre and
Post, and ditching the non-inheritable versions?  For those timid souls that shy
away from complex inheritance patterns, things will work as they expect, and the
others will have to understand how the RM faithfully matches LSP and all is
well.

****************************************************************

From: Tucker Taft
Sent: Monday, March 28, 2011  1:25 PM

I think abandoning "simple" Pre would be a mistake.
Assert statements are not visible in the spec, and there is important
documentation and contract value associated with specifying a "simple"
precondition.

I also think it is quite important that Pre'Class remain named "Pre'Class" to
emphasize its inheritability.

I remember a long debate with Cyrille at AdaCore, and the conclusion was that it
was important to provide both inheritable and non-inheritable preconditions.
I don't believe we should back down on that (despite the legitimate concern
about two different ways to do the same thing).

The fact is Pre and Pre'Class are really two different things, in the same way
that T and T'Class are different.  Admittedly Ada is unusual among OOP languages
in making the distinction between T and T'Class, but I think it was one of the
better choices we made in our OOP model.  I feel Pre and Pre'Class go hand in
hand with the T and T'Class distinction.

Getting the right answer is not trivial, but we shouldn't be frightened by the
length of the road, but rather be focused on the end result.  If we don't like
the end result, that is fine (and I understand and appreciate your concern about
the "two ways" issue).

But if we mainly don't like the way the sausage was made, don't watch... ;-)

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011  1:41 PM

> So here is a timid suggestion:  we agree that Pre and Post by
> themselves are little more that assert statements, and the real meat
> is in the inheritable and combinable aspects. So what about renaming
> Pre'Class and Post'Class as Pre and Post, and ditch the non-
> inheritable versions?  For those timid souls that shy away from
> complex inheritance patterns things will work as they expect, and the
> others will have to understand how the RM faithfully matches LSP and
> all is well.

That doesn't work because Pre'Class is only allowed on primitives of tagged
types, while Pre can be used anywhere. You'd end up with two different semantics
with the same name.

The other reason it doesn't work is a practical one: the LSP semantics is way
too fierce for most real uses. I have been completely unable to come up with
anything even vaguely plausible as to when you would want to weaken a
precondition. As such, I think Pre'Class will typically be used on a root
subprogram and then never changed on any child subprograms. If you have
additional properties (not known to the root type) that need to be enforced on a
routine, the only way to do that is via a Pre expression. Unless you are willing
to completely give up on compile-time optimizability.

Keep in mind that this whole debate is solely about how dispatching calls and
inherited calls should work. In the normal statically bound case, this is all
very straightforward.

If we are going to drop anything, it should be Pre'Class etc. It may not be
worth it to allow dispatching calls to be analyzed/optimized. (I disagree with
that idea, but I can imagine why others would not.)

****************************************************************

From: Edmond Schonberg
Sent: Monday, March 28, 2011  1:56 PM

> Keep in mind that this whole debate is solely about how dispatching
> calls and inherited calls should work. In the normal statically bound
> case, this is all very straightforward.

Is it? Aren't the inherited Pre'Class aspects involved, even if this is a static call?

> If we are going to drop anything, it should be Pre'Class etc. It may
> not be worth it to allow dispatching calls to be analyzed/optimized.
> (I disagree with that idea, but I can imagine why others would not.)

From your previous paragraph, this does not seem like such a big loss. Either we
want inheritance all along, and then we should call it with the shorter name
anyway, or else inheritance is a minor convenience and we can drop it. Tuck
indicates that Pre'Class is indispensable, but you seem to say that in practice
it is only useful at the root of a hierarchy. ???

****************************************************************

From: Tucker Taft
Sent: Monday, March 28, 2011  2:22 PM

> ...
>> If we are going to drop anything, it should be Pre'Class etc. It may
>> not be worth it to allow dispatching calls to be analyzed/optimized.
>> (I disagree with that idea, but I can imagine why others would not.)
>
>  From your previous paragraph, this does not seem like such a big
> loss. Either we want inheritance all along, and then we should call it
> with the shorter name anyway, or else inheritance is a minor convenience and
> we can drop it. Tuck indicates that Pre'Class is indispensable, but you seem
> to say that in practice it is only useful at the root of a hierarchy. ???

Remember that with interfaces, there are many "roots" of a hierarchy.
Without Pre'Class, you really can't say anything interesting on an interface,
and that would be a shame.  In fact, one could argue that it is really
interfaces (or other abstract tagged types) that need Pre'Class, and without
Pre'Class, you essentially lose the ability to specify any kind of contract for
such types.

****************************************************************

From: Edmond Schonberg
Sent: Monday, March 28, 2011  2:50 PM

Could be, but I have trouble imagining different preconditions for different
primitives of the same interface. As you say, these are contracts for these
abstract types, but I suspect that in most cases these are going to be
(abstract) properties of the type, and not operation-specific predicates. Maybe
this lack of imagination disqualifies me from this discussion, but the
precondition Is_Green is not a compelling example!  How come no one has come up
with a good motivating example with independent Pre'Class predicates on, say, an
interface and a concrete ancestor type in this discussion?

****************************************************************

From: Tucker Taft
Sent: Monday, March 28, 2011  3:00 PM

A precondition is a set of requirements
on any input to the subprogram, which includes the controlling object(s), other
parameters, any global state read by the subprogram, etc.

You are definitely lacking in imagination if you think preconditions only depend
on the single "primary" controlling object.

Imagine a vector-like collection of some sort.
Typically there is an operation for indexing into the vector.  Clearly the
precondition would be something like "Pre'Class => Index in 1..Length(Vec)".

And then you could have extensible vectors with some upper bound, in which case
the precondition on indexed assignment might be weakened to be:

     Pre'Class => Index in 1..Max_Size(Vec)

with a postcondition of

     Post'Class => Length(Vec) = Integer'Max(Length(Vec'Old), Index)

presuming elsewhere we ensure that Length(Vec) <= Max_Size(Vec).

Or a vector that could grow one element at a time, which would allow the
precondition on indexed assignment to be:

    Pre'Class => Index in 1..Length(Vec)+1

etc., etc.
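
To make the shape of that concrete, here is a minimal Ada sketch of the kind of
hierarchy described above (the package, type, and operation names, and the fixed
capacity in the private part, are invented purely for illustration):

    package Vectors is
       type Vector is tagged private;

       function Length (Vec : Vector) return Natural;

       --  Indexed assignment: only existing elements may be replaced.
       procedure Set_Element
         (Vec : in out Vector; Index : Positive; Value : Integer)
         with Pre'Class => Index in 1 .. Length (Vec);
    private
       type Int_Array is array (1 .. 100) of Integer;
       type Vector is tagged record
          Data : Int_Array := (others => 0);
          Len  : Natural   := 0;
       end record;
    end Vectors;

    package Vectors.Extensible is
       type Extensible_Vector is new Vector with private;

       function Max_Size (Vec : Extensible_Vector) return Natural;

       --  The class-wide precondition is weakened relative to the parent:
       --  writing anywhere up to Max_Size is now allowed, and the vector
       --  grows, as promised by the class-wide postcondition.
       overriding
       procedure Set_Element
         (Vec : in out Extensible_Vector; Index : Positive; Value : Integer)
         with Pre'Class  => Index in 1 .. Max_Size (Vec),
              Post'Class => Length (Vec) =
                              Integer'Max (Length (Vec'Old), Index);
    private
       type Extensible_Vector is new Vector with null record;
    end Vectors.Extensible;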

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011   3:11 PM

> A precondition is a set of requirements on any input to the
> subprogram, which includes the controlling object(s), other
> parameters, any global state read by the subprogram, etc.
>
> You are definitely lacking in imagination if you think preconditions
> only depend on the single "primary" controlling object.

I *know* I qualify as "lacking in imagination" this way. But then again I don't
think multiple inheritance is useful enough to go to the absurd lengths it
requires in definition and implementation.

> Imagine a vector-like collection of some sort.
> Typically there is an operation for indexing into the vector.
>  Clearly the precondition would be something like "Pre'Class => Index
> in 1..Length(Vec)".

Looks good.

> And then you could have extensible vectors with some upper bound, in
> which case the precondition on indexed assignment might be weakened to
> be:
>
>      Pre'Class => Index in 1..Max_Size(Vec)

I'm glad that you are giving an example. But I disagree here. "Length" is a
dispatching operation in Pre'Class; the appropriate design is to change the
meaning of Length.

> with a postcondition of
>
>      Post'Class => Length(Vec) = Integer'Max(Length(Vec'Old), Index)
>
> presuming elsewhere we ensure that Length(Vec) <= Max_Size(Vec).

This doesn't make much sense to me.

****************************************************************

From: Bob Duff
Sent: Monday, March 28, 2011  3:16 PM

> I'm glad that you are giving an example. But I disagree here. "Length"
> is a dispatching operation in Pre'Class; the appropriate design is to
> change the meaning of Length.

I think you're misunderstanding Tucker's example.  In the "extensible"
vector, you can set the I'th element to X, even though I is bigger than the
_current_ length, and it automatically grows the vector.

Overriding Length doesn't accomplish that.

****************************************************************

From: Tucker Taft
Sent: Monday, March 28, 2011  3:32 PM

Correct.  And only on an indexed *assignment* operation is the Index allowed to
be larger than Length.  On an indexed *fetch* operation, the Index must be no
larger than the Length.

BTW, when I said lacking in imagination, I was making reference to Ed's comment.
He seemed to be saying that a single precondition would be adequate for an
entire set of operations (unless I misunderstood him, which is possible).  That
seemed like a lack of imagination (though probably more likely a
misunderstanding on my part).

In any case, this was meant to show both that preconditions vary from one
operation to another, and how preconditions can be usefully weakened as you add
bells and whistles to a data structure as you extend it.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011   3:38 PM

> I think you're misunderstanding Tucker's example.  In the "extensible"
> vector, you can set the I'th element to X, even though I is bigger
> than the _current_ length, and it automatically grows the vector.
>
> Overriding Length doesn't accomplish that.

OIC.

I was thinking of "extensible" in the sense of Ada.Containers.Vectors, and not
in the sense of some horrible auto-extending semantics. I don't think this is a
very good example simply because it is such a bad idea at the core that I'd be
happier if the language didn't permit it in the first place. :-)

The "overdraw" example is better. But I still don't think that these examples
are likely to occur often in practice. Maybe the people who think everything in
the world should be written by OOP would run into them, but those people are
nuts. :-)

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011   3:46 PM

> BTW, when I said lacking in imagination, I was making reference to
> Ed's comment.  He seemed to be saying that a single precondition would
> be adequate for an entire set of operations (unless I misunderstood
> him, which is possible).
> That seemed like a lack of imagination (though probably more likely a
> misunderstanding on my part).

I know, but I am thinking like Ed here. And I am perfectly willing to admit a
lack of imagination when it comes to the use of OOP. I find it useful in
specialized circumstances, but not something that you will want to use for
everything. I suspect that a lot of these examples come up because people are
defining way too many types with way too much inheritance. And that won't happen
in practice (most "hierarchies" are only one level deep).

> In any case, this was meant to show both that preconditions vary from
> one operation to another, and how preconditions can be usefully
> weakened as you add bells and whistles to a data structure as you
> extend it.

I have no problem believing that they might vary. I'm still dubious that
weakening is useful in practice (but at least you have finally shown an example
where it might be useful).

I'd be happy if we treated the contract as we do most other contract items in
Ada (must match, period). But I realize that is too limiting for interfaces
(virtually nothing would match, so almost all interfaces would be illegal). I
personally don't care that much about interfaces, but I can't seriously advocate
a rule that would make them useless, either. (If it wasn't for this issue, I
would be pushing harder for such a rule, especially as the additional Pre takes
care of most of the realistic cases.)

****************************************************************

From: Tucker Taft
Sent: Monday, March 28, 2011   3:51 PM

> ... The "overdraw" example is better. But I still don't think that
> these examples are likely to occur often in practice. Maybe the people
> who think everything in the world should be written by OOP would run
> into them, but those people are nuts. :-)

Why do people use inheritance at all?  One reason is because they have
hierarchies of abstractions of increasing complexity.

I think we should be very wary of trying to predict exactly how people will use
these capabilities.  Bob and I came up with a few examples, and could almost
certainly come up with more and better examples if we felt the need, but at some
point, you have to agree that OOP involves hierarchies where what changes as you
go down the hierarchy is very much application specific.  If we start saying
that this or that abstraction is a "bad idea," we are going out on a pretty
fragile limb in my view.

****************************************************************

From: Bob Duff
Sent: Monday, March 28, 2011  2:32 PM

> Here is what we are debating:

Thanks for the summary.  I actually went back and read the whole thread as well.

>     1) Must Pre'Class be checked at the call site,
>        or could it be done inside the body?
>         Answer: we seem to agree it only makes sense
>           if it is done at the point of call, and only
>           the Pre'Class aspects known at the point of call
>           are checked.

Question:  One way to implement "check at call site" is to create a wrapper at
the point of type derivation (in cases where it's needed -- multiple
inheritance). Correct?

If so, I agree.

>     2) Should specifying a Pre'Class have any affect on
>        the "Pre" aspects?
>          Tuck's Answer: No, let's split them completely.
>             This allows (and pretty much requires) "Pre" aspects
>             to be checked in the body.
>          Randy's Answer: Not sure.

OK, I guess.  If Pre is equivalent to a pragma Assert in the body, then it's not
much use, and (like Assert), a Pre that the caller can't see might fail.  I
suppose that's OK, but I'd be inclined to stick with 'Pre'Class when using
tagged types.

>     3) What are some of the implications of renaming and taking
>         'Access of a subprogram to which Pre'Class applies?
>            Tuck's Answer: Make hard stuff illegal, by incorporating
>              Pre'Class matching into subtype conformance, but still
>              allow Pre'Class weakening.
>            Randy's Answer: Not sure.  Maybe disallow Pre'Class weakening
>              completely.

These are corner cases.  I don't have strong opinions, but the general idea of
outlawing things that cause trouble seems OK.

Speaking of outlawing things, please remind me why we don't want to outlaw
pre/post/invariant on (ops of) interfaces. These features seem useful on
interfaces, so we shouldn't outlaw them, but I'm having trouble articulating
exactly why.

> One other issue that seems clear is that when you inherit code for an
> operation from a parent type, and you also inherit the "same"
> operation from one or more interfaces where the operation has a
> Post'Class aspect, you will probably have to create a wrapper to
> properly incorporate the inherited Post'Class aspects into the
> inherited code, since they all get and-ed together.

I must be missing something.  It seems like this is exactly
(1) above, hence my question above.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011   2:57 PM

...
> >     1) Must Pre'Class be checked at the call site,
> >        or could it be done inside the body?
> >         Answer: we seem to agree it only makes sense
> >           if it is done at the point of call, and only
> >           the Pre'Class aspects known at the point of call
> >           are checked.
>
> Question:  One way to implement "check at call site" is to create a
> wrapper at the point of type derivation (in cases where it's needed --
> multiple inheritance).
> Correct?

Not really, since the Pre'Class aspects known at the point of the call can
differ depending on precisely which routine is called. This is most obvious when
calling through an interface.

You *could* do it with wrappers, but you'd need one wrapper for every possible
set of Pre'Class aspects -- while this is usually a small set, it probably doesn't make
sense to generate a lot of wrappers.

> >     2) Should specifying a Pre'Class have any effect on
> >        the "Pre" aspects?
> >          Tuck's Answer: No, let's split them completely.
> >             This allows (and pretty much requires) "Pre" aspects
> >             to be checked in the body.
> >          Randy's Answer: Not sure.
>
> OK, I guess.  If Pre is equivalent to a pragma Assert in the body,
> then it's not much use, and (like Assert), a Pre that the caller can't
> see might fail.  I suppose that's OK, but I'd be inclined to stick
> with 'Pre'Class when using tagged types.

That's a decision for the user to make, not the language. I agree that users
probably should either use Pre'Class or Pre exclusively (not mixing them), but
that is the sort of "style" rule that we usually leave to separate tools to
enforce.

> >     3) What are some of the implications of renaming and taking
> >         'Access of a subprogram to which Pre'Class applies?
> >            Tuck's Answer: Make hard stuff illegal, by incorporating
> >              Pre'Class matching into subtype conformance, but still
> >              allow Pre'Class weakening.
> >            Randy's Answer: Not sure.  Maybe disallow Pre'Class weakening
> >              completely.
>
> These are corner cases.  I don't have strong opinions, but the general
> idea of outlawing things that cause trouble seems OK.
>
> Speaking of outlawing things, please remind me why we don't want to
> outlaw pre/post/invariant on (ops of) interfaces.
> These features seem useful on interfaces, so we shouldn't outlaw them,
> but I'm having trouble articulating exactly why.

Same here. I can't quite figure out how you could usefully use Pre on interfaces
(given the weakening semantics). Post and Invariants don't suffer from that.

> > One other issue that seems clear is that when you inherit code for an
> > operation from a parent type, and you also inherit the "same"
> > operation from one or more interfaces where the operation has a
> > Post'Class aspect, you will probably have to create a wrapper to
> > properly incorporate the inherited Post'Class aspects into the
> > inherited code, since they all get and-ed together.
>
> I must be missing something.  It seems like this is exactly
> (1) above, hence my question above.

(1) is only talking about Pre'Class. Post'Class and Post are handled identically
(we check everything based on the actual body). It doesn't work very well to
skip inherited Post'Class, and it doesn't help code generation or
understandability any.

****************************************************************

From: Bob Duff
Sent: Monday, March 28, 2011   2:32 PM

> > I don't agree; weakening the Pre'Class can make sense as you go into
> > a particular subtree of the type hierarchy.
>
> I would like to see an example of why you would want this in practice
> (not in theory!!).

There are lots of examples out there.  Read Meyer's book on OOP, for example.
(Yeah, I know that's not practical in the short term -- it's thick enough to
make a good doorstop on a windy day. ;-))

Anyway, I'm pretty sure this is an important OOP capability.
But we also want plain-old Pre on plain-old non-OO procedures.

>... There is a lot of OOP stuff that makes sense in theory but  never
>actually happens in practice. Needing to "weaken" the precondition
>seems to me like there was a bug in the original definition of the
>precondition (most likely, the boolean functions that make it up are
>not defined properly). But I am willing to believe that I am wrong --
>but to do that I'd need to see an example for which the weaker
>precondition makes sense.

How about:

    type Money is digits ... range 0.0 .. ...; -- nonnegative!

    type Bank_Account is ...
    procedure Withdraw
        (Account: in out Bank_Account;
         Amount: Money) with
    Pre'Class => Current_Balance(Account) >= Amount;

    type Bank_Account_With_Overdraw is ...
    -- You know, the kind where the bank gives you an automatic loan.
    overriding procedure Withdraw
        (Account: in out Bank_Account_With_Overdraw;
         Amount: Money) with
    Pre'Class => True; -- weaken, to allow overdraw

A dispatching call on Withdraw, passing a Bank_Account'Class operand, should ensure the
balance is enough.  But with Bank_Account_With_Overdraw'Class, the caller can
rely on the automatic loan.
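
As a minimal sketch of the two caller views (Withdraw, Money, Current_Balance,
and the account types are assumed to be declared as in the fragment above; the
caller names are invented for illustration):

    procedure Pay_Bill
      (Acct : in out Bank_Account'Class; Amount : Money) is
    begin
       --  The only Pre'Class visible here is the root one, so this caller
       --  has to ensure the balance is sufficient before the call.
       if Current_Balance (Acct) >= Amount then
          Withdraw (Acct, Amount);  -- dispatches
       end if;
    end Pay_Bill;

    procedure Pay_Bill_On_Credit
      (Acct : in out Bank_Account_With_Overdraw'Class; Amount : Money) is
    begin
       --  Here the Pre'Class known at the point of call is just True, so
       --  the caller can rely on the automatic loan.
       Withdraw (Acct, Amount);     -- dispatches
    end Pay_Bill_On_Credit;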

****************************************************************

From: Bob Duff
Sent: Monday, March 28, 2011   2:32 PM

> > By the way, Bob implied that postconditions were to be associated
> > with the body.  I don't agree.

Well, I don't really agree either.  ;-)

> > ...Postconditions are
> > both more time *and* space efficient if performed inside the body,
> > but the right semantic view is still from the caller's point of
> > view.  Preconditions and postconditions are promises made to the
> > *caller*.

No, that's not right, either.  I don't remember exactly what I said, but here's
what I think now:

A precondition is a promise made by the caller to the callee.
A postcondition is a promise made by the callee to the caller.

Or, equivalently:
A precondition is a requirement on the caller, which the body can rely on.
A postcondition is a requirement on the body, which the caller can rely on.

As for efficiency, it's more complicated than I implied.
Let's let implementers worry about efficiency.

> > In any case, I think we are in agreement that the caller view is the
> > key to understanding the semantics for preconditions and
> > postconditions.

I think it's necessary to take both caller and callee views.

> > ...Type invariants are more like
> > postconditions in my view,

Ideally, they are.  But Eiffel checks them going both in and out, and for good
reason.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011   3:23 PM

> > That doesn't work because Pre'Class is only allowed on primitives of
> > tagged types, while Pre can be used anywhere. You'd end up with two
> > different semantics with the same name.
>
> "is only allowed"  in the current version of the AI. Easy to change,
> in particular if it simplifies the text.

But it won't simplify the text. Pre is not inherited on other operations for good
reason - the allowed changes in profile would make a mess out of any existing
Pre and we definitely do not need or want any "weakening" for operations of
untagged types.

> > The other reason it doesn't work is a practical one: the LIS
> > semantics is way too fierce for most real uses. I have been
> > completely unable to come up with anything even vaguely plasible as
> > to when you would want to weaken a precondition. As such, I think
> > Pre'Class will typically be used on a root subprogram and then never
> > changed on any child subprograms. If you have additional properties
> > (not known to the root
> > type) that need to be enforced on a routine, the only way to do that
> > is via a Pre expression. Unless you are willing to completely give
> > up on compile-time optimizability.
> >
> > Keep in mind that this whole debate is solely about how dispatching
> > calls and inherited calls should work. In the normal statically
> > bound case, this is all very straightforward.
>
> Is it? Aren't the inherited Pre'class involved, even if this is a
> static call?

Sure, but the reason Pre'Class and Post'Class are inherited is so that
dispatching calls have well-defined preconditions and postconditions and thus
are analyzable. It doesn't provide any value to statically bound calls.

For a dispatching call to be analyzable, it has to have the "same" contract for
all overriding routines. For existing contract elements (constraints, not null,
no return, etc.) this is enforced by requiring subtype conformance and matching
for all descendants. For Preconditions, this requires that the precondition is
the same (or weaker) on all descendants. For postconditions, this requires that
the postcondition is the same (or stronger) on all descendants.

> > If we are going to drop anything, it should be Pre'Class etc. It may
> > not be worth it to allow dispatching calls to be analyzed/optimized.
> > (I disagree with that idea, but I can imagine why others would not.)
>
> From your previous paragraph, this does not seem like such a big loss.
> Either we want inheritance all along, and then we should call it with
> the shorter name anyway, or else inheritance is a minor convenience
> and we can drop it. Tuck indicates that Pre'Class is indispensable,
> but you seem to say that in practice it is only useful at the root of
> a hierarchy. ???

As previously noted, I don't think weakening will come up often in practice.
(Tucker seems to disagree.) In the absence of useful weakening, the Pre'Class is
the same all the way from the root. [That makes sense to me, this is a contract
and you do not change contracts for inherited routines -- but it apparently
doesn't make sense to OOP mavens.] But this is a property that has to be ensured
in order for the compiler (and static analysis tools) to be able to take
advantage of it. There cannot be any possibility of cheating -- thus it has to
be inherited automatically.

OTOH, in practice I think you are going to need to be able to add other
preconditions that violate the OOP model. You could use assertions for that, but
then statically bound callers will not be able to "see" and depend on those
added preconditions. And in Ada, the vast majority of calls are statically
bound. We really do not want to make the normal case worse just to support some
OOP orthodoxy.

So I do think we need both -- since I think it is horrible to eliminate the
possibility of analysis on dispatching calls.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011   3:33 PM

> > Remember that with interfaces, there are many "roots" of a hierarchy.
> > Without Pre'Class, you really can't say anything interesting on an
> > interface, and that would be a shame.  In fact, one could argue that
> > it is really interfaces (or other abstract tagged types) that need
> > Pre'Class, and without Pre'Class, you essentially lose the ability
> > to specify any kind of contract for such types.
>
> Could be, but I have trouble imagining different preconditions for
> different primitives of the same interface.
> As you say, these are contracts for these abstract types, but I
> suspect that in most cases these are going to be (abstract) properties
> of the type, and not operation-specific predicates. Maybe this lack of
> imagination disqualifies me from this discussion, but the precondition
> Is_Green is not a compelling example!  How come no one has come up
> with a good motivating example with independent Pre'Class predicates
> on, say an interface and a concrete ancestor type in this discussion?

I'm of two minds on this.

Mind one says that we have to have preconditions on interfaces because the
alternative is that interfaces are second-class citizens. A dispatching call
through one of its primitives would not have a known precondition, and thus we
either would have to adopt a special rule for them (using the preconditions of
the concrete operation) or require all such operations to have no precondition.
The first is the bad old "no analysis" case again; it should be obvious that the
second is really limiting. Neither seems acceptable to me.

Mind two says that the "weakening" that happens when you combine preconditions is
almost never going to be what you want. Using the lousy "Is_Green" example, the
precondition ends up as "Is_Valid or Is_Green", which is definitely not what you
want (unless "Valid" and "Green" end up being defined as the same thing).

The net effect is that I cannot imagine any rules that would make preconditions
on interfaces both useful and able to be analyzed. This is where I have the same
lack of imagination that you have -- except it is worse: I can't really imagine
enough uses for interfaces where they would actually gain anything in practice
to even bother having them, much less make them work.

So I don't care what the rules for interfaces are, so long as they don't screw
up the language. And screwing up the language would be treating dispatching
calls differently depending on what sort of routine is involved. Or preventing
any static analysis of dispatching calls forever (because if we adopt a run-time
only rule, there will never be any chance to fix it).

****************************************************************

From: Tucker Taft
Sent: Monday, March 28, 2011  3:34 PM

> Speaking of outlawing things, please remind me why we don't want to
> outlaw pre/post/invariant on (ops of) interfaces.
> These features seem useful on interfaces, so we shouldn't outlaw them,
> but I'm having trouble articulating exactly why...

I think we did outlaw them on abstract subprograms.  And as far as allowing them
on null procedures of interfaces, I think we at least talked about disallowing
them there as well.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011   3:50 PM

I thought Bob was talking about the classwide versions.

The specific versions are banned on abstract subprograms and null procedures
because the former can never be called, and the latter have problems with
equivalence - we don't require overriding null procedures, and if there are two
null procedures with different Pres, whose Pre gets used??

But the classwide versions need to exist so it is possible to reason about
dispatching calls for classwide interfaces. Otherwise, they all have a
precondition of True (and with weakening, that is likely to be trouble for the
inherited bodies!!).

****************************************************************

From: Bob Duff
Sent: Monday, March 28, 2011  5:19 PM

> I thought Bob was talking about the classwide versions.

Yes.  Sorry for being unclear.  My understanding is that Pre'Class and
Post'Class are allowed on operations of interfaces, both abstract and null. And
Invariant'Class is allowed on an interface. Is that correct?

And I guess my question is either "Why are they allowed?" or "Why are they
disallowed?", depending on the above.

I think some of us (like me, but probably also Randy and Ed) are/were forgetting
something: All this talk about inheriting a procedure from multiple ancestors
(e.g. from the parent and also from an interface) is the unusual case.  We have
to define what it means, but the more common case is where you inherit P1, P2,
P3 from your main parent, and you inherit totally unrelated Q1, Q2, Q3 from some
interface. You probably override the Q's, and it's good to be bound by their
Pre'Class, whether they were abstract or null.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011   5:35 PM

> Yes.  Sorry for being unclear.  My understanding is that Pre'Class and
> Post'Class are allowed on operations of interfaces, both abstract and
> null.
> And Invariant'Class is allowed on an interface.
> Is that correct?

Yes.

> And I guess my question is either "Why are they allowed?" or "Why are
> they disallowed?", depending on the above.

I answered that in one of my earlier messages. The short answer is that if we
don't, then dispatching calls through interface values have no preconditions at
all -- that would be bad.

> I think some of us (like me, but probably also Randy and Ed) are/were
> forgetting something: All this talk about inheriting a procedure from
> multiple ancestors (e.g. from the parent and also from an interface)
> is the unusual case.  We have to define what it means, but the more
> common case is where you inherit P1, P2, P3 from your main parent, and
> you inherit totally unrelated Q1, Q2, Q3 from some interface.
> You probably override the Q's, and it's good to be bound by their
> Pre'Class, whether they were abstract or null.

I agree (other than the "forgetting" part), except that I would go further and
say that the use of interfaces -- at all -- is unusual. So by definition
anything that happens with them is unusual. (Even if you disagree with me on the
value of interfaces, you can't deny that most Ada OOP code is Ada 95 code --
for the simple reason that most Ada compilers are only Ada 95 compilers.)

But even if they are unusual, we have to have a proper definition for them.

****************************************************************

From: Tucker Taft
Sent: Monday, March 28, 2011   5:55 PM

> I agree (other than the "forgetting" part), except that I would go
> further and say that the use of interfaces -- at all -- is unusual.

Having used languages with interfaces, my experience is that they eventually get
used a lot, so I think you are generalizing from inadequate experience.

> ... So by definition
> anything that happens with them is unusual. (Even if you disagree with
> me on the value of interfaces, you can't argue that most Ada OOP code
> is Ada 95 code -- for the simple reason that most Ada compilers only
> are Ada 95
> compilers.)

True, but we are designing Ada 2012, not Ada 96.

> But even if they are unusual, we have to have a proper definition for them.

Agreed.

****************************************************************

From: Robert Dewar
Sent: Monday, March 28, 2011  5:43 PM

> Like some others, I am overwhelmed by the details of the discussion on
> the exact semantics of Pre'Class and Post'Class.  I am concerned about
> the difficulties of presenting this to users in an intuitive fashion.
> I am also concerned that with different rules for Pre'Class and Pre
> the danger is that once again (as Erhard remarked in connection with
> aspects vs. attributes) the perception will be that Ada always offers
> two ways of doing roughly the same thing.  This perception may be
> misguided but the complications are plenty real. So here is a timid
> suggestion:  we agree that Pre and Post by themselves are little more
> that assert statements, and the real meat is in the inheritable and
> combinable aspects. So what about renaming Pre'Class and Post'Class as
> Pre and Post, and ditch the non- inheritable versions?  For those
> timid souls that shy away from complex inheritance patterns things
> will work as they expect, and the others will have to understand how
> the RM faithfully matches LSP and all is well.

I really do NOT like this suggestion. Pre and Post have a lot of advantages over
assertions (they appear in the spec, can reference 'Old and 'Result, etc.), and work
in a clear and intuitive fashion.

I don't like the whole business of Or'ing for Pre'Class, and having that as the
ONLY mechanism seems a huge step backwards to me.

If you want to kill anything, kill the Pre'Class and Post'Class.

But I think it is just fine to have both Pre and Pre'Class if you agree they are
separate, Pre has no inheritance stuff, and Pre'Class is the inheriting version
with weakening.

I would not mind a rule forbidding both Pre and Pre'Class to apply to the same
function, but I don't think this is necessary.

****************************************************************

From: Robert Dewar
Sent: Monday, March 28, 2011  5:53 PM

> I remember a long debate with Cyrille at AdaCore, and the conclusion
> was that it was important to provide both inheritable and a
> non-inheritable preconditions.  I don't believe we should back down on
> that (despite the legitimate concern about two different ways to do
> the same thing).

For me it's definitely not the same thing; I agree with the next paragraph
entirely.

> The fact is Pre and Pre'Class are really two different things, in the
> same way that T and T'Class are different.  Admittedly Ada is unusual
> among OOP languages in making the distinction between T and T'Class,
> but I think it was one of the better choices we made in our OOP model.
> I feel Pre and Pre'Class go hand in hand with the T and T'Class
> distinction.
>
> Getting the right answer is not trivial, but we shouldn't be
> frightened by the length of the road, but rather be focused on the end
> result.  If we don't like the end result, that is fine (and I
> understand and appreciate your concern about the "two ways" issue).

There are often good reasons for having two forms of the same thing; we don't
abandon if statements because they are redundant wrt

     case Bool is
       when True  => ...
       when False => ...
     end case;

****************************************************************

From: Robert Dewar
Sent: Monday, March 28, 2011   5:57 PM

> From your previous paragraph, this does not seem like such a big loss.
> Either we want inheritance all along, and then we should call it with
> the shorter name anyway, or else inheritance is a minor convenience
> and we can drop it. Tuck indicates that Pre'Class is indispensable,
> but you seem to say that in practice it is only useful at the root of
> a hierarchy. ???

There is a real divergence here between people who think the Eiffel-style
weakening is essential, and people who see it as useless.

I don't see any hope for a consensus for removing Pre in favor of Pre'Class or
Pre'Class in favor of Pre.

The only thing that will fly IMO is to have both and keep them separated.

****************************************************************

From: Robert Dewar
Sent: Monday, March 28, 2011   6:06 PM

> ... There is a lot of OOP stuff that makes sense in theory but never
> actually happens in practice. Needing to "weaken" the precondition
> seems to me like there was a bug in the original definition of the
> precondition (most likely, the boolean functions that make it up are
> not defined properly). But I am willing to believe that I am wrong --
> but to do that I'd need to see an example for which the weaker
> precondition makes sense.

I just don't get the weakening at all. If you have a type extension, then you
may want to add preconditions that hold only for the extension, which is
obviously impossible with weakening.

Anyway, I don't mind it being there, even if I don't understand it, but please
don't take away simple preconditions, which I

(a) understand
(b) am sure I need

****************************************************************

From: Steve Baird
Sent: Monday, March 28, 2011  6:18 PM

> If you have a type extension,
> then you may want preconditions that hold on the extension, which is
> obviously impossible with weakening.
>

I think the way this is usually done is with a dispatching call in your
precondition (e.g., a dispatching call to Is_Valid).
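
A minimal sketch of that pattern (all names are invented for illustration): the
root Pre'Class names a primitive function, so an extension tightens the
effective check by overriding that function rather than by changing the
precondition text.

    package Shapes is
       type Shape is tagged private;

       function Is_Valid (S : Shape) return Boolean;

       procedure Render (S : in Shape)
         with Pre'Class => Is_Valid (S);
    private
       type Shape is tagged null record;
    end Shapes;

    package Shapes.Circles is
       type Circle is new Shape with private;

       --  The overriding Is_Valid adds the extension's own requirements;
       --  Render's inherited Pre'Class is textually unchanged, but for
       --  Circle operands it now evaluates the stricter check.
       overriding
       function Is_Valid (S : Circle) return Boolean;
    private
       type Circle is new Shape with record
          Radius : Float := 0.0;
       end record;
    end Shapes.Circles;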

> I don't see any hope for a consensus for removing Pre in favor of
> Pre'Class or Pre'Class in favor of Pre.
>
> The only thing that will fly IMO is to have both and keep them
> separated.

I agree.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011  6:26 PM

> The only thing that will fly IMO is to have both and keep them
> separated.

I agree. Keeping them separated is new, in that the previously approved rules
did not do that. But I do think that is a clear improvement over the older
rules.

I've been trying to keep these changes to "minor tweaks" to the rules, and while
we've expanded the changes somewhat beyond that, the changes are almost totally
focused on improving the semantics of dispatching calls. Very little is changed
about the handling of non-dispatching calls.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011  6:21 PM

> > ... The "overdraw" example is better. But I still don't think that
> > these examples are likely to occur often in practice. Maybe the
> > people who think everything in the world should be written by OOP
> > would run into them, but those people are nuts. :-)
>
> Why do people use inheritance at all?  One reason is because they have
> hierarchies of abstractions of increasing complexity.

Those are the people that are nuts. ;-) [Please note the smiley faces here...]

I use inheritance in order to share implementation (and to a lesser extent
usage) of closely related abstractions. Any "hierarchy" beyond the first level
is often a complete accident and is rarely planned.

Aside: This emphasis on implementation sharing is one of the reasons that I find
interfaces to not be that valuable. Since they cannot have any implementation,
there is nothing that can be shared. Regular abstract types can implement the
parts of the abstraction that are common to all of the entities, and that is
much more valuable IMHO. I also note that all forms of abstract types interfere
with the agile-like development process that I've always used, as you have to
provide implementations of all of the routines before you can compile any. In
extreme cases such as the Claw Builder, it can take several days of work to get
even a framework compilable, which increases the possibility of programming
into dead ends all that much more. End aside.

I realize that there are all these fancy theories about hierarchies of
abstractions, but such things tend not to appear in practice. (At least not in
*my* practice!) Most of the examples show much smaller increments of
functionality than would make sense in practice. So I remain dubious about the
true value of these things.

> I think we should be very wary of trying to predict exactly how people
> will use these capabilities.

I agree with this. I don't buy the "weakening" arguments that much, but what I
do buy is the need to support interfaces in some consistent way (given the fact
that we have them). And that appears to require "weakening", because the other
choices don't make any sense.

> Bob and I came up
> with a few examples, and could almost certainly come up with more and
> better examples if we felt the need, but at some point, you have to
> agree that OOP involves hierarchies where what changes as you go down
> the hierarchy is very much application specific.  If we start saying
> that this or that abstraction is a "bad idea," we are going out on a
> pretty fragile limb in my view.

And if you take remarks made with smiley faces following them as serious
pronouncements, you are already at the end of that limb...

As far as your original "extensible array" example, I was specifically saying
that that example itself is a "bad idea". Specifically, the idea of a data
structure that automatically expands on writes is a *bad idea*, irrespective of
how that is accomplished. Writes to the wrong places need to be detected, not
covered up by a data structure. If you want to expand your data structure, it is
very important that you do that explicitly (as in Ada.Containers.Vectors).

Because the example itself is a lousy idea, the example did not do much for me
as an example of "weakening" of preconditions (or an example of anything else,
for that matter). That does not say anything about whether "weakening" of
preconditions is useful or not, or whether there are similar examples that are
not related to an obviously bad idea (there probably are).

I'm pretty much of the opinion that this is truly a case-by-case situation.
Let me show you the example that I have been thinking about where "weakening" is
exactly wrong. Probably there is something wrong with my OOP design, but I can't
quite imagine what it would be.

         package Root is
             type Root_Window is limited tagged private;

             function Exists (W : in Root_Window) return Boolean;

             procedure Draw (W : in Root_Window; What : Drawing)
                  with Pre'Class => Exists (W);
         end Root;

         with Root; use Root;
         package Visible is
             type Visible_Window is new Root_Window with private;

             function Is_Visible (W : in Visible_Window) return Boolean;

             procedure Show (W : in Visible_Window)
                 with Pre'Class => Exists (W),
                      Post'Class => Is_Visible (W);

             procedure Draw (W : in Visible_Window; What : Drawing)
                 with Pre'Class => Is_Visible (W);
         end Visible;

Assume that Visible.Draw is only supported for Visible windows.

The added precondition of Is_Visible (W) does not have the right effect: it
gives Draw a precondition of Exists (W) or Is_Visible (W), while we really want
Exists (W) and Is_Visible (W).

So we either have to use Pre instead of Pre'Class (meaning that the precondition
is not inherited by any later extensions; it might be forgotten or incorrectly
copied), or we have to flatten out the hierarchy. There is no way to fix the
precondition in Root, because it simply does not know about this property of a
window; there is nothing we can write there that would include the Visible
property. Nor can we change the meaning of Exists, as the precondition of Show
notes (it would be wrong if Exists included visibility).
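
For concreteness, a minimal sketch of the Pre workaround, reusing the
declarations above (the Drawing type and the private parts are elided, just as
in the original):

         with Root; use Root;
         package Visible is
             type Visible_Window is new Root_Window with private;

             function Is_Visible (W : in Visible_Window) return Boolean;

             procedure Show (W : in Visible_Window)
                 with Pre'Class => Exists (W),
                      Post'Class => Is_Visible (W);

             -- The inherited Pre'Class of Exists (W) still applies to Draw;
             -- the specific Pre below adds the new requirement, but it is
             -- not inherited by any later extensions.
             procedure Draw (W : in Visible_Window; What : Drawing)
                 with Pre => Is_Visible (W);
         end Visible;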

It seems to me that something like this can happen anytime an extension adds a
property to a type. And that seems like something that is common, especially
when interfaces are used. It surely is application-dependent as to how those new
properties affect the existing inherited routines, and it might even differ from
routine to routine.

The net effect is that I don't find "weakening" very compelling, but I do
realize that the logic of dispatching calls does not allow anything else. Thus I
think most of the time in practice, we will be requiring an unchanged class-wide
precondition. I think that is what book authors should concentrate on, not these
bizarre corner cases where "weakening" actually works. But I am not going to
argue too much about "weakening", either, because it will occasionally be useful
(especially with interfaces).

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011   6:29 PM

> I think the way this is usually done is with a dispatching call in
> your precondition (e.g., a dispatching call to Is_Valid).

Correct. I expect that to be common; but in that case the precondition is
unchanged on the new routine.

The next most common case is the one of additional new properties in the
extension that need to be reflected in the precondition (see the message that
crossed with yours), but that doesn't work with weakening.

The least likely case (IMHO) is the one in which weakening will in fact work.

****************************************************************

From: Steve Baird
Sent: Monday, March 28, 2011  6:33 PM

> Assume that Visible.Draw is only supported for Visible windows.
Well, there goes Liskov substitutability.

I'm not saying that automatically makes this a bad design, but it shouldn't be a
surprise if it doesn't work well with language constructs that were designed to
support LSP.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011  6:35 PM

...
> But I think it is just fine to have both Pre and Pre'Class if you
> agree they are separate, Pre has no inheritance stuff, and Pre'Class
> is the inheriting version with weakening.
>
> I would not mind a rule forbidding both Pre and Pre'Class to apply to
> the same function, but I don't think this is necessary.

I had once made a suggestion to that effect, but I am no longer in favor of it.
The reason that the rule was needed was to prevent "counterfeiting" from Pre
being added to a descendant. But since we've separated the meaning of them, that
is no longer a problem.

And it would seem to be a usability annoyance. If you have a newly added
property that you need to enforce on some descendant routine, you can use Pre to
do that. It is not ideal, but at least statically bound calls will get the
advantages of a proper precondition (dispatching calls will have the possibility
of an assertion failure). If we disallowed having both, you'd have to use an
assertion inside of the body. That would be a step backwards for a statically
bound call.
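
A minimal sketch of what I mean, with made-up names (Gadgets, Tuned_Gadgets,
and so on are hypothetical): the inherited Pre'Class still governs dispatching
calls, while the specific Pre enforces the new property on statically bound
calls.

    package Gadgets is
       type Gadget is tagged private;
       function Exists (G : Gadget) return Boolean;
       procedure Operate (G : in out Gadget)
          with Pre'Class => Exists (G);
    private
       type Gadget is tagged null record;
    end Gadgets;

    with Gadgets;
    package Tuned_Gadgets is
       type Tuned_Gadget is new Gadgets.Gadget with private;
       function Is_Tuned (G : Tuned_Gadget) return Boolean;
       overriding
       procedure Operate (G : in out Tuned_Gadget)
          with Pre => Is_Tuned (G);  -- the new property; not part of the
                                     -- class-wide contract, so dispatching
                                     -- calls can still fail this check
    private
       type Tuned_Gadget is new Gadgets.Gadget with null record;
    end Tuned_Gadgets;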

It doesn't seem to make much sense to have both types on a single subprogram,
but it's not clear that added Legality Rules help anything, either. I'm sure
AdaControl will have a reasonable set of policies for programmers to use.

****************************************************************

From: Robert Dewar
Sent: Monday, March 28, 2011  6:45 PM

> I agree. Keeping them separated is new, in that the previously
> approved rules did not do that. But I do think that is a clear
> improvement over the older rules.

I always assumed they should be separated, that's what we implemented in GNAT as
far as I remember :-)

****************************************************************

From: Randy Brukardt
Sent: Monday, March 28, 2011  9:01 PM

> > Assume that Visible.Draw is only supported for Visible windows.
> Well, there goes Liskov substitutability.

I suppose, but I don't much care about ivory-tower principles. If you have this
problem (an added property affecting existing operations), you need a mechanism
to solve it. That's especially true if you are extending an existing type whose
specifications you can't change - I don't think there is any way to preserve the
Liskov property in that case. (Raising some random exception rather than writing
a proper precondition doesn't change anything.)

> I'm not saying that automatically makes this a bad design, but it
> shouldn't be a surprise if it doesn't work well with language
> constructs that were designed to support LSP.

OK, but I'm more concerned that there be a way to model whatever design that you
need rather than worrying about whether we meet some abstract model which may or
may not be appropriate.

In any case, I'd rather depend on the rules that Ada currently uses to make
Liskov work (the contract shall not change) than some rules that allow the
contract to change a little bit, and in a way that is rarely of any use. That is
more confusing than helpful, at least for my pea brain. :-) But I suppose
education can help there, and I don't see much reason to diverge too far from
existing practice in this area (that is, Eiffel).

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 29, 2011  8:40 AM

>>> Assume that Visible.Draw is only supported for Visible windows.
>> Well, there goes Liskov substitutability.
>
> I suppose, but I don't much care about ivory-tower principles. If you
> have this problem (an added property affecting existing operations),
> you need a mechanism to solve it. That's especially true if you are
> extending an existing type whose specifications you can't change - I
> don't think there is any way to preserve the Liskov property in that
> case. (Raising some random exception rather than writing a proper
> precondition doesn't change
> anything.)

Hardly an ivory-tower principle given that DO-178C is likely to require the use
and verification of LSP in OO contexts.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 29, 2011  1:03 AM

>> ... There is a lot of OOP stuff that makes sense in theory but never
>> actually happens in practice. Needing to "weaken" the precondition
>> seems to me like there was a bug in the original definition of the
>> precondition (most likely, the boolean functions that make it up are
>> not defined properly). But I am willing to believe that I am wrong --
>> but to do that I'd need to see an example for which the weaker
>> precondition makes sense.
>
> I just don't get the weakening at all. If you have a type extension,
> then you may want preconditions that hold on the extension, which is
> obviously impossible with weakening.

Right. "Weakening" seems to be a consequence of strictly following the Liskov
Substitutability Principle (LSP). But I've seen people writing for years that
that "principle" has dubious effects. So I suspect that we're just having a
corner of a debate about the appropriateness of LSP (only) as compared to a more
general model. And Ada currently has a more general model, so cramming
preconditions into an LSP-only model is confusing and frustrating.

But there doesn't seem to be any alternative if we want some sort of
analyzability for dispatching calls (in the absence of the complete source
code). And not being able to analyze them means that people will have to make a
choice between compile-time checked code and OOP with dispatching -- not the
sort of choice that I think we want to force people to make.

> Anyway, I don't mind it being there, even if I don't understand it,
> but please don't take away simple preconditions, which I
>
> (a) understand
> (b) am sure I need

I agree. I think it is possible to make sense out of Pre'Class (Post'Class
works the way you'd expect, so nothing worrisome there) if you follow some
simple rules:
(1) Never mix Pre and Pre'Class on a single subprogram;
(2) For new primitive routines, only use Pre'Class if at all possible (it's a
    part of the contract of any overriding routine);
(3) For overriding routines, only use Pre'Class if you are certain the
    "weakening" works -- otherwise, use Pre;
(4) The combination of interfaces, preconditions, and existing routines that
    have preconditions is a disaster waiting to happen -- avoid any multiple
    inheritance cases for primitive routines of interfaces.

(4) is more a function of not having any idea of how to make these work usefully
than any well-thought-out reason; the problem is that "weakening" means that the
weaker precondition (often "True") is the one that will be used, and that is
hardly ever going to be what you want. It probably can only work if the
preconditions are effectively the same. (Well, discounting the sort of people
who somehow write everything to be compatible with LSP.)

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 30, 2011  9:43 PM

I would like to push back a bit on the question of whether to use subtype
conformance for controlling legality in these cases involving Pre'Class.

Historically speaking, we consolidated for Ada 95 the various levels of
conformance between subprogram profiles because we saw various places where
similar rules appeared in Ada 83, without any kind of encompassing concept.
Each matching rule seemed distinct, some pickier than others (e.g. overloading
was based on types matching, formal subprograms depended on modes and types
matching, our new access-to-subprogram required subtypes to match, etc.).  By
bringing them together in a single section, and establishing a hierarchy of
conformance levels (type, mode, subtype, full), we hoped to bring some structure
to the rules, and make sure no two matching rules were arbitrarily inconsistent.

I understand that changing the definition of something like "subtype
conformance" is scary, but I think it is worse to go back to sprinkling rules
about matching between subprogram profiles throughout the manual.
Furthermore, subtype conformance was invented precisely for "indirect-call-ish"
situations, such as calling through an access-to-subprogram value, a dispatching
call, and renaming as body.

Issues with Pre'Class seem very much related to indirect calls, where the caller
might not know statically what subprogram body is actually being invoked, and
the body has certain minimum expectations about what Pre'Class aspect has been
checked.  To me, this means subtype conformance should be adjusted to take
Pre'Class into account, rather than distributing special rules about Pre'Class
matching or weakening or whatever to all of the places where indirect calls
appear.

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 31, 2011  1:40 AM

> I would like to push back a bit on the question of whether to use
> subtype conformance for controlling legality in these cases involving
> Pre'Class.

Fair enough...

> Historically speaking, we consolidated for Ada 95 the various levels
> of conformance between subprogram profiles because we saw various
> places where similar rules appeared in Ada 83, without any kind of
> encompassing concept.  Each matching rule seemed distinct, some
> pickier than others (e.g. overloading was based on types matching,
> formal subprograms depended on modes and types matching, our new
> access-to-subprogram required subtypes to match, etc.).  By bringing
> them together in a single section, and establishing a hierarchy of
> conformance levels (type, mode, subtype, full), we hoped to bring some
> structure to the rules, and make sure no two matching rules were
> arbitrarily inconsistent.
>
> I understand that changing the definition of something like "subtype
> conformance" is scary,

It's not scary, it is wrong.

> but I think it is worse to go
> back to sprinkling rules about matching between subprogram profiles
> throughout the manual.

That's not what I'm proposing.

> Furthermore, subtype conformance was invented precisely for
> "indirect-call-ish" situations, such as calling through an
> access-to-subprogram value, a dispatching call, and renaming as body.

Right. And you are proposing breaking those.

> Issues with Pre'Class seem very much related to indirect calls, where
> the caller might not know statically what subprogram body is actually
> being invoked, and the body has certain minimum expectations about
> what Pre'Class aspect has been checked.  To me, this means subtype
> conformance should be adjusted to take Pre'Class into account, rather
> than distributing special rules about Pre'Class matching or weakening
> or whatever to all of the places where indirect calls appear.

I think you should actually go read my proposed AI, rather than talking about
alternatives that don't make any sense. And/or read my mail more carefully.

First of all, you are proposing changing subtype conformance in some way that
doesn't make sense. My understanding is that you want to change it to require
Pre'Class to match in some but not all cases (in particular, you want
overriding subprograms to be able to specify an additional Pre'Class). That can
only be accomplished by adding an entirely new kind of conformance, or some
truly bizarre rules for overriding. Neither of those will accomplish the goal of
simplifying the rules.

Second, (most) renames don't actually use subtype conformance. Your explanation
was that we only care about overriding renames. But as noted above, you want
overriding to *not* be subject to this rule - it *cannot* require Pre'Class
matching. So the entire proposal you are making makes no sense - you still need
a special case rule for renames.

Third, overriding also comes up in "implemented-by", so if you design a "wording
hack" to deal with overriding, it has to be spread around to these other places
as well.

Fourth, most of the uses of subtype conformance don't care about preconditions.
(For instance profile matching for access-to-subprograms in type conversions,
equality, and generic access actuals). You're adding a complication there.

Fifth, subtype conformance is used in full conformance. 8.3(12.3/2) uses full
conformance, so you would be requiring Pre'Class matching on inherited
subprograms -- that would require overriding in various interface cases. (And
that requirement would not be enough to eliminate the null subprogram problems,
so we'd still need other rules for that.)

Lastly, I don't buy the need to make the renames case illegal. Any "problem"
disappears as soon as you consider it equivalent to the matching explicit body.
Which is *exactly* the same rule that is being proposed to fix three other
problems in this AI! Unless we decide to make all of those other cases illegal
(technically, "requires overriding" - an explicit body has to be given), I
cannot even justify making the renames case illegal. Why should someone have to
write an explicit body rather than a renames (with the extra chances for error),
when in similar cases with inherited subprograms, we're writing the body
automatically?

The only case that has any reason to be made illegal is the 'Access one, and
that one is pretty dubious (the explicit body would work there, too, but for
'Access we might not be somewhere where the declaration of an arbitrary body
should be allowed).

Thus I used a *single* rule to make 'Access illegal if the subprogram has a
non-trivial Pre'Class. I did not "sprinkle rules around" because there are no
such rules! If we need any additional rules, I would push to get rid of them by
eliminating the illegal cases (it's not worth it).


P.S. If you can do the impossible and make this work, please show me with actual
wording proposals!! I think we've reasonably converged on what we want (at least
in broad terms) and we need to figure out how to accomplish that. I've spent the
better part of three days writing an AI on this topic, including telling you
repeatedly why this idea of yours won't work, and I've gotten hardly any
constructive feedback. (Only "isn't there another approach?" -- to which the
answer is, not really.) So show me something I haven't considered instead of
stuff I already know doesn't work!!!

And please note that I've been writing this AI to keep you from being
overburdened so that you can finish your other homework. (BTW, please do your
other homework. ;-) But it doesn't pay for me to spend more time on it if you
are going to virtually ignore it in favor of things that clearly don't work.
Constructive ideas are welcome - rehashing is not.

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, April  1, 2011  4:51 AM

Sorry for entering this discussion late, but I spent the week-end skiing (Hi,
Tuck), and changed my laptop, which means wasting hours fighting drivers that
are incompatible between WXP/32 and W7/64...

But OOP is a topic I'm interested in (unlike accessibility rules;-)), so here is
my point of view.

1) There is an important political issue here. As Robert pointed out, we are in
   the context of an emerging DO178C. The OO annex is pushed by people who want
   to introduce Java and C++ in avionics software, and they claim that the
   complexity of testing dynamic dispatching can be harnessed by strict usage of
   the LSP (whether this is true is still to be seen, and is irrelevant here).
   If we manage to /better/ support the LSP than C++ and Java, this will be one
   more reason to prefer Ada, and we'll have an opportunity to fire back at the
   Java-in-avionics people. If we invent some new concepts for assertions,
   different from what the LSP needs, we will be shooting ourselves in the foot,
   and providing new good arguments for avoiding Ada in avionics (hey, those
   guys missed the boat once again!).

Note that I am /not/ a big fan of OOP, and certainly not of OOP everywhere. The
title of my position paper for the Ada-Europe panel is "Object Orientation in
Critical Systems: Yes, in Moderation".

2) Interfaces are really a promise to provide some functionalities. To me, they
   have a value because they are /not/ inheritance (i.e., they don't imply an
   "is a" relationship), although they might be used that way. Typical use of
   interfaces is for things like the listener paradigm: register an On_Event
   subprogram that is called when the event happens, irrespective of the true
   type of the supporting object.

3) I fully agree with Bob here:
> A precondition is a promise made by the caller to the callee.
> A postcondition is a promise made by the callee to the caller.
>
> Or, equivalently:
> A precondition is a requirement on the caller, which the body can rely on.
> A postcondition is a requirement on the body, which the caller can rely on.

The important thing is that the precondition is necessary to the algorithm in
the /body/. A Divide might say: "sorry, I don't know how to perform this
service if the rhs is 0, you have to check". A derived Divide that handles
infinities may accept 0. If the caller doesn't know which kind of divide it is
using (due to dispatching), it has to check for 0.
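
A minimal sketch of that Divide example, with made-up types (Numbers,
Extended_Numbers); nothing here comes from an actual library:

    package Numbers is
       type Number is tagged record
          Value : Float := 0.0;
       end record;
       function Divide (Lhs, Rhs : Number) return Number
          with Pre'Class => Rhs.Value /= 0.0;  -- "you have to check"
    end Numbers;

    with Numbers;
    package Extended_Numbers is
       -- A derived Divide that handles infinities accepts a zero divisor,
       -- so its class-wide precondition is weaker (True). For a dispatching
       -- call, the effective precondition is the "or" of the two, i.e. True.
       type Extended_Number is new Numbers.Number with null record;
       overriding
       function Divide (Lhs, Rhs : Extended_Number) return Extended_Number
          with Pre'Class => True;
    end Extended_Numbers;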

But then, does it make sense to allow (class-wide) preconditions on interfaces?
Such things have no body! There is no reason to impose checks on the user if
there is no related algorithmic constraint that requires that check. Putting
preconditions on (a primitive operation of) an interface is like saying: "we
assume that future implementations will have a problem if this does not hold".

OTOH, preconditions on interfaces can be understood the other way round (think
of the listener call-back): a promise from the caller to the implementer that
some condition holds; whether the implementer needs that condition or not is the
implementer's problem.

In any case, if a SP is called through a dispatching call of an interface, only
the preconditions for the interface (if any!) should be checked at the caller
site. If we have a class-wide precondition on the actual body, raise
Assertion_Error from the body, just like for a specific precondition. The whole
point of interfaces is to mix completely unrelated types, therefore there is no
common requirement on the caller.

Does it defeat static analysis? Yes, but as in Randy's example, that's because
an implementation of an interface that has some extra preconditions does not
satisfy the LSP. Such an implementation would be forbidden by the current trend
of DO178C. Or if you prefer, this works perfectly well as long as you obey the
LSP. If you don't, the behavior is well specified, but don't expect to prove or
optimize anything.

Another interesting issue is what happens when you have a dispatching call to a
primitive operation (through its controlling tagged type) that happens to
implement an operation of an interface. I would say: don't check the
preconditions of the interface. If the implementation of the operation requires
the condition from the interface, it should repeat it in its own preconditions
(otherwise, it is lying about its real preconditions - something like the
clauses in tiny characters at the bottom of contracts). If the implementation
does /not/ require the condition from the interface, there is no point in
enforcing it.  Of course, this would be easier to explain if preconditions on
interfaces are simply forbidden.

In short, my position is:
- at the caller's site, check the preconditions that apply to the view of the
  controlling operand, nothing else. Other conditions that may apply (whether
  specific or class-wide) are checked in the body.
- I am still unsure whether preconditions on interfaces should be allowed at
  all. If it simplifies implementation to forbid them, I could live with it.

Of course, all this discussion does not apply to postconditions, as often noted
before.

4) I think we should specify whether the evaluation of preconditions on a PO is
   part of the protected action or not. Otherwise, if the evaluation involves
   calls to some other PO, we would have a situation where whether the
   evaluation of preconditions is potentially blocking would depend on the
   implementation.

5) Some nitpicking:
>         type I2 is interface;
>         function Is_Green (Obj : I2) return Boolean is abstract;
>         procedure P1 (Obj : in out I2) is null
>             with Post'Class =>  Is_Green (Obj);
>         procedure P2 (Obj : in I2) is null
>             when Pre'Class =>  Is_Green (Obj);
               ^^^^ => with
>         procedure P3 (Obj : in I2) is null
>             when Pre'Class =>  Is_Green (Obj);
               ^^^^ => with
>         function Something_Green return I2 is abstract
>             when Post'Class =>  Is_Green (Something_Green'Result);
               ^^^^ => with
>      end Pack2;

****************************************************************

From: Tucker Taft
Sent: Friday, April  1, 2011  10:56 AM

> 2) Interfaces are really a promise to provide some functionalities. To
> me, they have a value because they are /not/ inheritance (i.e., they
> don't imply an "is a" relationship), although they might be used that
> way. Typical use of interfaces is for things like the listener
> paradigm: register an On_Event subprogram that is called when the event
> happens, irrespectively of the true type of the supporting object.

> ...
> But then, does it make sense to allow (class-wide) preconditions on
> interfaces? Such things have no body! There is no reason to impose
> checks to the user if there is no related algorithmic constraint that
> requires that check. Putting preconditions on (a primitive operation
> of) an interface is like saying: "we assume that future implementations will
> have a problem if this does not hold".

> OTOH, preconditions on interfaces can be understood the other way
> round (think of the listener call-back): a promise from the caller to
> the implementer that some condition holds; whether the implementer needs that
> condition or not is the implementer's problem. ...

Well, you seem to be nicely arguing both sides here.  If you think about the
Listener interface, a class-wide precondition makes perfect sense if it refers
to the properties of the parameters that are required for a call to make any
sense. Suppose we have a stream-like interface, which has Open, Is_Open, Read,
and Close operations.  Clearly we would want a Pre'Class on Read that
Is_Open(Stream). Similarly for the Listener interface, if you have Subscribe,
Unsubscribe, and Is_Subscribed, you would want to specify Pre'Class of
Is_Subscribed(Listener) on the Unsubscribe operation.  I don't see anything
about interfaces that makes preconditions of less value.
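
A minimal sketch of that stream-like interface, with made-up names
(Byte_Streams, Byte_Stream), following the same convention as the Pack2
example for the calls in the contract aspects:

    package Byte_Streams is
       type Byte_Stream is interface;

       function Is_Open (S : Byte_Stream) return Boolean is abstract;

       procedure Open (S : in out Byte_Stream) is abstract
          with Post'Class => Is_Open (S);

       procedure Read (S : in out Byte_Stream; Data : out Character) is abstract
          with Pre'Class => Is_Open (S);

       procedure Close (S : in out Byte_Stream) is abstract
          with Pre'Class  => Is_Open (S),
               Post'Class => not Is_Open (S);
    end Byte_Streams;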

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, April  1, 2011 11:47 AM

I agree. Preconditions on interfaces make sense if you view them the other way
round (compared to tagged types): they are not a requirement from the
implementation, they are a promise to the implementer.

My point is that:
1) They should not be mixed with other class-wide preconditions. If you dispatch
   from the interface, checking other preconditions is the responsibility of the
   body. If you dispatch from the tagged type, ignore the preconditions on the
   interface.

2) If that seems too complicated, I could accept forbidding preconditions on
   interfaces altogether.

****************************************************************

From: Randy Brukardt
Sent: Friday, April  1, 2011  2:03 PM

...
> I agree. Preconditions on interfaces make sense if you  view them the
> other way round (compared to tagged types): they are not a requirement
> from the implementation, they are a promise to the implementer.

This is really the wrong way to look at preconditions (and postconditions) at
all.

These are both *contracts* between the caller and callee; and both are bound
equally.

Specifically, a class-wide precondition is a contract that puts requirements on
*any possible* caller when calling *any possible* callee.

If the call in question is statically bound, then the *any possible* doesn't
have any effect, and the effect is solely on the caller.

But in a dispatching call, the "any possible callee" remains, and that does have
the effect of a restriction on possible bodies. That has nothing to do with
interfaces -- it is true of any body of a routine that can be called via a
dispatching call.

And that makes sense: the class-wide precondition is part of the contract of a
routine, and as such it puts requirements on both sides.

> My point is that:
> 1) They should not be mixed with other class-wide preconditions. If
> you dispatch from the interface, checking other preconditions is the
> responsability of the body. If you dispatch from the tagged type,
> ignore the preconditions on the interface.

This is effectively "and"ing them together. But it doesn't make much sense.

Moreover, it demolishes LSP for interfaces.

You said earlier:

> If we invent some new concepts for assertions, different from what the
> LSP needs, we will be shooting ourself in the foot, and providing new
> good arguments for avoiding Ada in avionics (hey, those guys missed
> the boat once again!).

So I can't reconcile these two positions. You're specifically saying "let's get
it wrong" for interfaces.

Beyond that, I'm strongly opposed to any semantics that denies compilers and
tools without full source code the ability to do analysis.

> 2) If that seems too complicated, I could accept forbidding
> preconditions on interfaces altogether.

That doesn't help; in fact, it makes it worse (from the perspective of LSP).
If there is no precondition, that is equivalent to a precondition of True.
When you combine that with any other precondition, you end up with True. I don't
see how that helps anything at all.

I agree that these semantics will make interfaces very hard to use with
preconditions. But that problem exists in Eiffel and other languages and it
doesn't seem to bother Meyer or others. Why should we care? We're not going to
invent a new theory of multiple inheritance in the ARG, and even if we did, no
one would care.

As Bob pointed out, most use of interfaces is with "root" types. The interfaces
aren't added to existing trees (that's when the problems happen). And they're
not usually added to inherited operations, but rather new ones for the type. It
would be easy enough to avoid such mix-ins if they cause trouble (we could even
make them illegal, by requiring overriding in those cases).

Tucker has suggested (and I wrote up) a model where Pre'Class, Post'Class, and
Type_Invariant'Class preserve LSP everywhere, while Pre, Post, and
Type_Invariant are free-form. This has the distinct advantage that a simple tool
like AdaControl can enforce the use of LSP when that is desired (simply ban the
use of Pre and Post on primitives of a tagged type, and Type_Invariant on tagged
types). That seems like a major advantage for the avionics people.

If supporting LSP is important (and it sounds like it is), then we can't go
blowing up LSP for interfaces.

****************************************************************

From: Bob Duff
Sent: Friday, April  1, 2011  2:15 PM

> And thanks for reading this and making an attempt to understand it.

You're welcome.

The homework deadline approaches, so I'm about to overwhelm you with emails
about the homework.  Sorry about that!

Please don't feel too overwhelmed about the deadlines!
WG9 has no power to arrest you if we're late.
Ada 95 was supposed to be Ada 93.
Ada 2005 was standardized in 2007.
If Ada 2012 is a few months late, I promise you that the sky will not fall!

> Bob Duff writes:
> > I have a feeling we're re-treading the same ground covered by Meyer
> > / Eiffel.
>
> Probably, but Ada also has legacy issues that aren't necessarily
> present in Eiffel.

Right.  We also have to adjust "Eiffel think" to the fact that Ada splits T from
T'Class, and splits packages from classes/types.

>... In particular, we have to do something useful for existing code
>that  doesn't have preconditions defined.

Right.  Clearly the default Pre and Pre'Class have to be True for that reason.
And of course you can't weaken True.

> > I think this AI addresses some major concerns I had several months
> > ago (but never got around to bringing up, because I didn't fully
> > understand the issues).
>
> Good to hear.
>
> ...
> > >     with Pack1, Pack2;
> > >     package Pack3 is
> > >        type T3 is new Pack1.T1 and Pack2.I2 with private;
> > >        overriding
> > >        function Is_Green (Obj : T3) return Boolean;
> > >        overriding
> > >        procedure P2 (Obj : in T3);
> > >        overriding
> > >        function Something_Green return T3;
> > >        -- P1 and P3 are inherited from Pack1.P1 and Pack1.P3,
> > >        -- respectively.
> >
> > Not "respectively".  The Pack1 versions override in both cases.
>
> Not sure what you mean here. I meant P1 is inherited from Pack1.P1,
> and P3 is inherited from Pack1.P3.

But, looking back at the original example, that's not true.
P1 is inherited from both Pack1 and Pack2.
P3 is inherited from both Pack1 and Pack2.
The ones with bodies (both in Pack1) win.

>...Pack2 is not mentioned, and I wasn't interested  in the details as
>to why (irrelevant for this AI). I suppose I could have  written this
>as two separate comments, but I fail to see anything wrong with  the
>one I have.

Well, either it's wrong, or I don't understand it.

> > I wonder if these sorts of examples would be clearer if we avoid
> > prefix notation calls?!
>
> Unless you want to confuse the heck out of yourself, you'll never use
> prefix calls for dispatching in Ada 2005 or later.

I don't much like prefix calls, either.

> ...The alternative is lots of use
> clauses (which is not going to help make these examples more
> understandable), or lengthy prefixes which are really hard to get right.
> (And I'm not quite sure what the prefix needs to be here...)

Now I don't understand -- you don't like prefix notation, you don't like use
clauses, and you don't like expanded names. What else is there?

In these examples, I strongly prefer expanded names!
They're not "lengthy" (here).

Anyway, I'll send you updated examples in a separate message.

> > "shall statically denote True"?
>
> A precondition aspect represents an expression, not a value.

"True" is an expression!

>...Thus, we have
> to word this in terms of the expression, not the value that we might
>evaluate it to have. In particular, my intent was to require that this
>expression is exactly the Boolean enumeration literal True. I suppose
>it  would be OK to have used "statically denotes" instead (that allows
>renames  of True):
>
> shall statically denote the Boolean enumeration literal True.

It's still needlessly verbose.  Think:  If someone reads "shall statically
denote True", what might they think they can write, which you don't want to
allow?

> > "enumeration literal True" --> "True" (and below)
>
> See above. This is *not* a value, but an expression consisting of
> nothing but the Boolean enumeration literal True. I figured I could
> leave out "Boolean" since that is the type of the expression, but the
> other part is necessary to emphasize the distinction.

See MY comment above.

> ...
>
> > > Modify 13.3.2(6/3):
> > >
> > > The expected type for [a]{any} precondition or postcondition
> > > expression is any boolean type.
> >
> > I think "a" is just as clear.
>
> OK, but I disagree; there are two different preconditions, and we
> never defined "precondition expression" to mean both kinds. That
> seemed like overkill, so I used "any" consistently to mean all of the kinds.

I still think you're being overly pedantic, but I don't want to fight too much
over 2 letters.  ;-)

> > "task or protected entry call" --> "entry call"
> >
> > > performed
> > > prior to checking whether the entry is open.
>
> That's existing Tucker wording which I left unchanged. I assume he was
> reinforcing some idea here (probably the intent that it applies to
> both) with the redundancy.

Well, whoever wrote it, I think we can assume that readers know that "entry"
means "protected or task entry" in Ada!

> ...
> > > if the Assertion_Policy in effect at the point of a subprogram or
> > > entry declaration is Check, then upon successful return
> > from a call of
> > > the subprogram or entry, prior to copying back any by-copy
> > in out or
> > > out parameters, the postcondition checks is performed. This
                                               ^^
                                               ||
> > consists
> > > of the evaluation of the
> >
> > "are"
>
> "is". :-). More accurately, "postcondition check is" - there is only
> one postcondition check (everything is lumped together because there
> is no need to talk about them individually). I for a while had
> separated the checks, but I obviously left some vestige of that model when I
> switched it back.

No, I meant the above marked "checks is" should be "checks are".

> ...
> > > Notwithstanding what this standard says elsewhere, an inherited subprogram S
> > > (or multiple conformant inherited subprograms) that has one or
> > > more class-wide precondition or postcondition expressions that
> > > apply to S, or has a specified Type_Invariant'Class (see 13.3.3),
> > > that is not explicitly overridden, and that is primitive for a
> > > type that has
> > > one or more progenitors is equivalent to an overriding subprogram
> > > S whose
> > > body consists of:
> > >    * The null statement, if all of the inherited subprograms are
> > > null subprograms;
> > >    * A call on the parent subprogram, with all of the parameters
> > > the same
> > >      as the parameters to S (possibly with appropriate type conversions).
> >
> > Do we need, "The same rule applies to functions (possibly with
> > appropriate type conversion of the result)"?
>
> Why? This wording is about primitive *subprograms*, and those surely
> include functions with controlling results. The type conversion
> wording can be read to apply to everything. I could see expanding that
> a bit to say "(possibly with appropriate type conversions on the
> parameters and result if any)", but that's all that could be needed.

OK.
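
To illustrate the equivalence being described here, this is roughly what the
implicit wrapper for the inherited Pack3.P3 would look like if written by
hand; it is only a sketch of the model, not actual generated code:

    -- conceptually, inside package Pack3:
    overriding
    procedure P3 (Obj : in T3) is
    begin
       Pack1.P3 (Pack1.T1 (Obj));  -- call on the parent subprogram, with an
                                   -- appropriate type conversion
    end P3;
    -- The class-wide contract aspects that apply to Pack3.P3 (the inherited
    -- Pre'Class expressions from both progenitors) are then checked on this
    -- wrapper, just as for an explicit overriding.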

...
> > > null subprograms (one of which is chosen for dispatching arbitrarily by
> > > 3.9.2(20.3/3)) may not be equivalent at runtime; (2) "counterfeiting"
> >
> > "may not be equivalent at runtime" --> "might have different
> > contracts"?
>
> "contract" is not an RM technical term (at least in this sense), so I
> didn't want to use that term here. "might have different Pre'Class,
> Post'Class, and Type_Invariant'Class aspects" is better, but the
> latter part is possibly confusing.
>
> "might have different contract aspects that apply" would be better,
> but again "contract aspects" is a term that isn't defined.
>
> "might have different aspects that apply" seems vague. Besides, the
> fact that there could be different behavior for two null subprograms
> is the crux of the issue and needs to be mentioned somehow.
>
> I used "might have different aspects that apply, causing different
> runtime behavior", but there might be something better possible.

OK.

****************************************************************

From: Bob Duff
Sent: Friday, April  1, 2011  2:16 PM

>...[AI05-0247-1 Preconditions, Postconditions, multiple inheritance, and
>    dispatching calls]

Here are two copies of the examples from AI05-0247-1, from the !proposal
section.  The first just has some corrections, so it compiles under GNAT:

  I added private parts so it could compile.  (You can delete these.)
  I changed "when" to "with" in several places.
  In Main2, T2 should be I2.
  In Main2, P3.Something_Green should be Pack3.Something_Green
  In Main2, remove "with Pack1;".

  I added explicit conversions to I2'Class in 3 places,
  because GNAT apparently does not yet implement the rule
  that these conversions are implicit. These are marked "???NYI",
  and I suppose you should delete them.

The second one below is identical to the first one, except it uses expanded name
notation instead of prefix notation. For examples like this, I think that's much
clearer; I suggest you replace the example in the AI with the second one below.

    package Pack1 is
       type T1 is tagged private;
       function Is_Valid (Obj : T1) return Boolean;
       procedure P1 (Obj : in out T1);
       procedure P3 (Obj : in T1)
          with Pre'Class => Is_Valid (Obj);
    private
       type T1 is tagged null record;
    end Pack1;

    package Pack2 is
       type I2 is interface;
       function Is_Green (Obj : I2) return Boolean is abstract;
       procedure P1 (Obj : in out I2) is null
           with Post'Class => Is_Green (I2'Class(Obj)); -- ???NYI
       procedure P2 (Obj : in I2) is null
           with Pre'Class => Is_Green (I2'Class(Obj)); -- ???NYI
       procedure P3 (Obj : in I2) is null
           with Pre'Class => Is_Green (I2'Class(Obj)); -- ???NYI
       function Something_Green return I2 is abstract
           with Post'Class => Is_Green (Something_Green'Result);
    end Pack2;

    with Pack1, Pack2;
    package Pack3 is
       type T3 is new Pack1.T1 and Pack2.I2 with private;
       overriding
       function Is_Green (Obj : T3) return Boolean;
       overriding
       procedure P2 (Obj : in T3);
       overriding
       function Something_Green return T3;
       -- P1 and P3 are inherited from Pack1.P1 and Pack1.P3,
       -- respectively.
    private
       type T3 is new Pack1.T1 and Pack2.I2 with null record;
    end Pack3;

    with Pack1, Pack3;
    procedure Main is
        procedure Do_It (Obj : in out Pack3.T3'Class) is
        begin
           Obj.P1; -- (1)
           Obj.P2; -- (2)
        end Do_It;
        O3 : Pack3.T3;
    begin
        Do_It (O3);
    end Main;

    ----------------------------------------------------------------

    with Pack2, Pack3;
    procedure Main2 is
        procedure Do_It (Obj : in Pack2.I2'Class) is
        begin
           Obj.P3; -- (3)
        end Do_It;
    begin
        Do_It (Pack3.Something_Green);
    end Main2;

    ----------------------------------------------------------------

Here's the second one:

    package Pack1 is
       type T1 is tagged private;
       function Is_Valid (Obj : T1) return Boolean;
       procedure P1 (Obj : in out T1);
       procedure P3 (Obj : in T1)
          with Pre'Class => Is_Valid (Obj);
    private
       type T1 is tagged null record;
    end Pack1;

    package Pack2 is
       type I2 is interface;
       function Is_Green (Obj : I2) return Boolean is abstract;
       procedure P1 (Obj : in out I2) is null
           with Post'Class => Is_Green (I2'Class(Obj)); -- ???NYI
       procedure P2 (Obj : in I2) is null
           with Pre'Class => Is_Green (I2'Class(Obj)); -- ???NYI
       procedure P3 (Obj : in I2) is null
           with Pre'Class => Is_Green (I2'Class(Obj)); -- ???NYI
       function Something_Green return I2 is abstract
           with Post'Class => Is_Green (Something_Green'Result);
    end Pack2;

    with Pack1, Pack2;
    package Pack3 is
       type T3 is new Pack1.T1 and Pack2.I2 with private;
       overriding
       function Is_Green (Obj : T3) return Boolean;
       overriding
       procedure P2 (Obj : in T3);
       overriding
       function Something_Green return T3;
       -- P1 and P3 are inherited from Pack1.P1 and Pack1.P3,
       -- respectively.
    private
       type T3 is new Pack1.T1 and Pack2.I2 with null record;
    end Pack3;

    with Pack1, Pack3;
    procedure Main is
        procedure Do_It (Obj : in out Pack3.T3'Class) is
        begin
           Pack3.P1(Obj); -- (1)
           Pack3.P2(Obj); -- (2)
        end Do_It;
        O3 : Pack3.T3;
    begin
        Do_It (O3);
    end Main;

    ----------------------------------------------------------------

    with Pack2, Pack3;
    procedure Main2 is
        procedure Do_It (Obj : in Pack2.I2'Class) is
        begin
           Pack2.P3(Obj); -- (3)
        end Do_It;
    begin
        Do_It (Pack3.Something_Green);
    end Main2;

    ----------------------------------------------------------------

****************************************************************

From: Bob Duff
Sent: Friday, April  1, 2011  2:18 PM

> I'm pretty much of the opinion that this is truly a case-by-case situation.
> Let me show you the example that I have been thinking about where
> "weakening" is exactly wrong. Probably there is something wrong with
> my OOP design, but I can't quite imagine what it would be.

Here, you are deliberately violating LSP.  That's not necessarily bad design.
In fact, I don't think it's possible/feasible to obey LSP in all cases; LSP
definitely has its problems.

You are strengthening the Pre'Class on derivation.
I think you said you shouldn't weaken Pre'Class but should leave it the same.

So use Pre instead of Pre'Class.  I think you said this in a later email.

>          package Root is
>              type Root_Window is limited tagged private;
>
>              function Exists (W : in Root_Window) return Boolean;
>
>              procedure Draw (W : in Root_Window; What : Drawing)
>                  with Pre'Class => Exists (W);
>          end Root;
>
>          with Root; use Root;
>          package Visible is
>              type Visible_Window is new Root_Window with private;
>
>              function Is_Visible (W : in Visible_Window) return Boolean;
>
>              procedure Show (W : in Visible_Window)
>                  with Pre'Class => Exists (W),
>                       Post'Class => Is_Visible (W);
>
>              procedure Draw (W : in Visible_Window; What : Drawing)
>                  with Pre'Class => Is_Visible (W);
>          end Visible;

****************************************************************

From: Bob Duff
Sent: Friday, April  1, 2011  2:18 PM

> > > Assume that Visible.Draw is only supported for Visible windows.
> > Well, there goes Liskov substitutability.
>
> I suppose, but I don't much care about ivory-tower principles.

I don't think LSP is "ivory-tower".  It's got very practical benefits.  It also
has problems, and I agree that you don't always want to obey LSP.

>... If you have
> this problem (an added property affecting existing operations), you
>need a  mechanism to solve it.

We do -- use Pre instead of Pre'Class, and then you can strengthen your
preconditions.  You pay a price for that, though -- dispatching calls can fail
in ways that are not apparent given what's visible at the call site.

That's just fine in many cases -- e.g. if there are no such dispatching calls.

>...That's especially true if you are extending an  existing type whose
>specifications you can't change - I don't think there is  any way to
>preserve the Liskov property in that case. (Raising some random
>exception rather than writing a proper precondition doesn't change
> anything.)

That (raise some arbitrary exception) is another way to defeat LSP.

****************************************************************

From: Bob Duff
Sent: Friday, April  1, 2011  2:19 PM

> But then, does it make sense to allow (class-wide) preconditions on
> interfaces? Such things have no body!

It does make sense, for the same reason it makes sense for abstract subprograms
of abstract types.  It means all the bodies in the hierarchy below this have the
precondition.  Some such bodies might not take advantage of it, but that's OK.

****************************************************************

From: Bob Duff
Sent: Friday, April  1, 2011  2:51 PM

> As Bob pointed out, most use of interfaces is with "root" types.

Must've been my evil twin.  But never mind...

> Tucker has suggested (and I wrote up) a model where Pre'Class,
> Post'Class, and Type_Invariant'Class preserve LSP everywhere, while
> Pre, Post, and Type_Invariant are free-form.

That's a nice summary of the model, and I think that's best, right now.

****************************************************************

From: Randy Brukardt
Sent: Friday, April  1, 2011  3:03 PM

> > As Bob pointed out, most use of interfaces is with "root" types.
>
> Must've been my evil twin.  But never mind...

I think I slightly mischaracterized what you previously said, which was
something to the effect that most subprograms are inherited only from one parent
(either an interface or some other tagged type), not multiple ones. The point
was that the cases that cause problems are mostly corner cases. We probably
could just make the whole bunch illegal and hardly anyone would notice.

But that doesn't mean disallowing preconditions on interfaces or other broad
strokes -- just targeted illegal cases.

> > Tucker has suggested (and I wrote up) a model where Pre'Class,
> > Post'Class, and Type_Invariant'Class preserve LSP everywhere, while
> > Pre, Post, and Type_Invariant are free-form.
>
> That's a nice summary of the model, and I think that's best, right
> now.

Right. I probably ought to add a mention of this to the AI.

****************************************************************

From: Bob Duff
Sent: Friday, April  1, 2011  3:16 PM

> I think I slightly mischaracterized what you previously said, which
> was something to the effect that most subprograms are inherited only
> from one parent (either an interface or some other tagged type), not multiple
> ones.

Ah, yes.  I remember saying that, and I still believe it.
(And maybe my evil twin does, too.)

****************************************************************

From: Edmond Schonberg
Sent: Friday, April  1, 2011  3:23 PM

I am sympathetic to Jean-Pierre's suggestion that Pre'Class should be forbidden
for interfaces.  Saying that you inherit from an interface is like saying you
inherit a mortgage.  What you inherit is the obligation to implement operations
with the given signature. In a sense, that's the precondition for the use of the
interface: these operations must be provided.  Adding some predicate to these
abstract operations does not seem to convey anything useful.  Suppose you have
an interface Serializable that provides an operation Flatten; what precondition
should such an operation carry? The point of the interface is to provide a
classification mechanism for otherwise unrelated types that may land in some
heterogeneous data structure.  Not only is it rare to be able to provide some
common predicate for all implementations of the operation (when the point is
that they will apply to objects that have no other commonality), but as this
discussion shows, it seems hard to pin down what the useful semantics of
these predicates is.

****************************************************************

From: Bob Duff
Sent: Friday, April  1, 2011  3:33 PM

> I am sympathetic to Jean-Pierre's suggestion that Pre'Class should be
> forbidden for interfaces. ...

I don't agree.

I will be in New York on Monday, and I'll try to convince you with the help of a
white board.  And perhaps with help from Steve, who will also be in NY.

****************************************************************

From: Edmond Schonberg
Sent: Friday, April  1, 2011  3:44 PM

Looking forward....

****************************************************************

From: Tucker Taft
Sent: Friday, April  1, 2011  3:36 PM

I feel like we have given many examples of why these are useful.
They have little or nothing to do with the underlying implementation, and
everything to do with the abstract semantics of the operations associated with
the interface.  Open/Read/Close, Subscribe/Unsubscribe, Initialize/Add/Freeze,
etc.  I really think we will be defeating the point of having preconditions if
we don't allow preconditions of some sort on abstract operations.

****************************************************************

From: Randy Brukardt
Sent: Friday, April  1, 2011  3:41 PM

Keep in mind that not having a precondition on an interface subprogram is the
same as having a Pre'Class precondition of True on that subprogram. Since that
is the weakest possible predicate, you are then saying that you can never have
any class-wide predicate with an operation inherited from an interface
(presuming you are preserving any semblance of LSP).

If you are in an environment that requires LSP, that essentially means that you
cannot use predicates with any operation that is related to an interface (since
Pre is banned on tagged primitives in such an environment) -- or, alternatively,
you just can't use interfaces. Neither of these seem like a good idea.

While I'm surprised that anyone would require LSP, apparently that's an
important part of DO-178C, and so I think LSP has to be an important part of
whatever solution we use.

Keep in mind that there is always an easy solution to keeping class-wide
contract aspects LSP-compliant: ensure that they are the same all the way up.
That can be done mechanically by banning redefinitions of Pre'Class, etc. on
overriding routines. I suspect in 90% of real uses, this is what will happen (on
purpose or just because it is the only thing that makes sense). It's too fierce
to require that on all users, but keep in mind that these things we are
discussing are rare corner cases -- it doesn't matter that much how they are
resolved.

****************************************************************

From: Robert Dewar
Sent: Friday, April  1, 2011  4:03 PM

> If you are in an environment that requires LSP, that essentially means
> that you cannot use predicates with any operation that is related to
> an interface (since Pre is banned on tagged primitives in such an
> environment) -- or, alternatively, you just can't use interfaces.
> Neither of these seem like a good idea.

This is not necessarily correct, you can imagine a set of preconditions defined
by Pre aspects, with a proof that LSP is respected, in fact I expect that to be
a fairly common situation. So don't assume that LSP will always attempt to be
achieved by use of the automatic weakening.

> While I'm surprised that anyone would require LSP, apparently that's
> an important part of DO-178C, and so I think LSP has to be an
> important part of whatever solution we use.

I don't know why this should be surprising, enforcing LSP is a reasonable way of
reducing the test burden for dynamic dispatching. (think about what coverage
means in such cases).

> Keep in mind that there is always an easy solution to keeping
> class-wide contract aspects LSP-compliant: ensure that they are the
> same all the way up. That can be done mechanically by banning
> redefinitions of Pre'Class, etc. on overriding routines. I suspect in
> 90% of real uses, this is what will happen (on purpose or just because
> it is the only thing that makes sense). It's too fierce to require
> that on all users, but keep in mind that these things we are
> discussing are rare corner cases -- it doesn't matter that much how they are
> resolved.

I disagree with this prediction of what will happen in 90% of cases, it does not
match my experience with previous discussions of how to deal with coverage in
the presence of dispatching.

****************************************************************

From: Randy Brukardt
Sent: Friday, April  1, 2011  4:45 PM

> > If you are in an environment that requires LSP, that essentially
> > means that you cannot use predicates with any operation that is
> > related to an interface (since Pre is banned on tagged primitives in
> > such an
> > environment) -- or, alternatively, you just can't use interfaces.
> > Neither of these seem like a good idea.
>
> This is not necessarily correct, you can imagine a set of
> preconditions defined by Pre aspects, with a proof that LSP is
> respected, in fact I expect that to be a fairly common situation. So
> don't assume that LSP will always attempt to be achieved by use of the
> automatic weakening.

I can imagine that, but I don't see the point of working so hard. The "effective
precondition" (that is the precondition as expressed in all ways - Pre,
Pre'Class, subtype predicates, constraints, assertions in the body, explicit
code in the body) always has to be the same or weaker in order for LSP to be
preserved. If you use Pre (or assertions and other body code), you end up having
to prove that these are weaker than for the routine being overridden. That is
likely to be complex. OTOH, if you use Pre'Class (and ban Pre), you already know
that the precondition is the same or weaker, so your proof burden is mostly to
check that the body code doesn't introduce any "implicit preconditions".

So while I expect that some initial efforts might use Pre, ultimately I think
you'll see much more use of Pre'Class.

****************************************************************

From: Cyrille Comar
Sent: Sunday, April  3, 2011  8:50 PM

I'm jumping in this discussion without knowing exactly what is the origin of the
issue being discussed... Sorry about that.

On 01/04/2011 23:44, Randy Brukardt wrote:

> I can imagine that, but I don't see the point of working so hard. The
> "effective precondition" (that is the precondition as expressed in all
> ways - Pre, Pre'Class, subtype predicates, constraints, assertions in the
> body, explicit code in the body) always has to be the same or weaker
> in order for LSP to be preserved. If you use Pre (or assertions and
> other body code), you end up having to prove that these are weaker
> than for the routine being overridden. That is likely to be complex.
> OTOH, if you use Pre'Class (and ban Pre), you already know that the
> precondition is the same or weaker, so your proof burden is mostly to
> check that the body code doesn't introduce any "implicit preconditions".

In a formal environment, you have to achieve 2 kinds of proofs:

   - verify that the code of a primitive is compatible with its contract.
     Compatibility doesn't necessarily stop at proving that Pre ==> Post
     holds for the body of the routine. It can also be interesting to verify
     that all instructions participate in the proof of the postcondition and
     thus that the postcondition is not trivial or simply weaker than what it
     should be (or the code is doing more than what it says in the post, which
     are 2 ways of seeing the same situation)

   - verify LSP

by using Pre'Class you are getting the second for free (by construction) and
will have to work more on the first. The second part of it about competeness
becomes a real challenge.

by using Pre instead, you put yourself in a better position to do the first
analysis but you have to do the second one as well. Since the second one is
pretty simple to do, I think we are better off using only Pre and never
Pre'Class. That seems even more obvious for postconditions, if all the hierarchy
of primitives have the same post'class, you will need a lot of persuasion to
convince anyone (and even more a prover) that you need different code in the
different overridden bodies to achieve exactly the same thing...

****************************************************************

From: Randy Brukardt
Sent: Monday, April  4, 2011  8:50 PM

> by using Pre'Class you are getting the second for free (by
> construction) and will have to work more on the first. The second part
> of it about competeness becomes a real challenge.

You lost me here. The second part is the same either way. More below.

> by using Pre instead, you put yourself in a better position to do the
> first analysis but you have to do the second one as well. Since the
> second one is pretty simple to do, I think we are better off using
> only Pre and never Pre'Class. That seems even more obvious for
> postconditions, if all the hierarchy of primitives have the same
> post'class, you will need a lot of persuasion to convince anyone (and
> even more a prover) that you need different code in the different
> overridden bodies to achieve exactly the same thing...

Something is wrong with your thinking here; it seems like you are thinking that
Pre'Class and Post'Class are much more limited than they actually are. I would
expect Pre'Class to cover exactly the same ground as a set of Pre items would.
And thus the effort to prove using one or the other is identical. And of course,
doing one proof is easier than doing two, so I would expect Pre'Class to be
easier than using a set of Pre items.

Consider just Pre for the moment. If you cannot write the set of your Pre
items in terms of Pre'Class, then one of two things must be true:
   (1) The set of Pre items violates LSP. No more to be said about that.
   (2) The set of Pre items has insufficient abstraction to be able to be
       modeled as one or more Pre'Class items.
This latter case shows a design problem, because it means that the predicate
functions needed to write the Pre'Class don't exist. And that will cause
problems at the site of dispatching calls, because there will not be a way to
check the predicates before making calls. Users need to be able to write
conditionals involving the preconditions:
    if not Is_Valid (Window) then
        Create (Window, ...);
    end if;
    Draw (Window, ...);
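
For illustration, here is a minimal sketch of the sort of declarations such a
call-site check assumes (the Window type, its operations, and the package are
hypothetical; subprogram bodies are omitted). The point is that the function
named in the Pre'Class is an ordinary visible primitive, so callers can
evaluate it before dispatching:

    package Windows is
        type Window is tagged private;
        function Is_Valid (W : Window) return Boolean;
        procedure Create (W : in out Window);
        procedure Draw (W : in out Window)
          with Pre'Class => Is_Valid (W);   -- callers can test this directly
    private
        type Window is tagged record
            Valid : Boolean := False;
        end record;
    end Windows;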

Also keep in mind two additional points:
* It's OK to add additional Pre'Class and Post'Class items later. They are
  combined using "or" and "and" respectively to keep LSP intact. So surely there
  is no reason that additional postconditions can't be added.
* The function calls in Pre'Class and Post'Class will almost always be
  dispatching calls. As such, they automatically adjust for the changes in the
  child objects. And these don't cost extra analysis effort for analyzing
  dispatching bodies, as the type is always known in the Pre'Class and
  Post'Class items - so the appropriate body can be plugged in for analysis.

So it seems to me that the effort for analyzing the bodies is the same either
way, and the effort for determining whether the preconditions and postconditions
meet LSP drops from non-zero to zero.

Moreover, using Pre prevents any analysis of dispatching calls unless full
source code is available, while using Pre'Class allows partial designs to be
analyzed as well as complete programs. That ought to allow catching errors
sooner.

****************************************************************

From: Robert Dewar
Sent: Friday, April  1, 2011  4:52 PM

> The wonder and curse of OOP is that it "just works" and really isn't
> testable in a formal sense. Most of the time, you don't need to test
> it, either, because "it works".
>
> But obviously you can't use that strategy in a formal verification
> environment. I'm not surprised that OOP has been banned in such
> environments, but I am surprised that anyone would think that there is
> enough value to strict LSP OOP that it would be worth allowing in such
> environments.

There are people developing critical certified code using LSP for OO stuff who
find it quite usable, and they find it much preferable to banning OOP entirely.
So this is real experience from important users.

****************************************************************

From: Edmond Schonberg
Sent: Saturday, April  2, 2011  8:06 AM

Does DO178-C allow multiple inheritance of interfaces? This seems surprising,
given how cautiously OOP is being introduced in the field.  I very much doubt
that there is certified code out there that uses MI, so this discussion may be
way ahead of current practice.

****************************************************************

From: Robert Dewar
Sent: Saturday, April  2, 2011  8:11 AM

I don't think DO178-C goes anywhere near this level of complex detail. And I
agree with Ed that this discussion is way ahead of current practice. I think at
most we will see in certified applications very conservative use of single
inheritance with very limited use of dynamic dispatching. Cyrille, what do you
think?

****************************************************************

From: Jean-Pierre Rosen
Sent: Sunday, April  3, 2011  3:46 AM

> Does DO178-C allow multiple inheritance of interfaces? This seems
> surprising, given how cautiously OOP is being introduced
> in the field.  I very much doubt that there is certified code out
> there that uses MI, so this discussion may be way ahead of current
> practice.

It tends to allow interfaces, but is very cautious about full MI. (TBH, I should
recheck in the OO supplement, which I don't have at hand now, but that was the
approach of OOTiA).

Don't forget that DO178C is being pushed by Java people!

****************************************************************

From: Jean-Pierre Rosen
Sent: Sunday, April  3, 2011  3:51 AM

> I don't think DO178-C goes anywhere near this level of complex detail.
> And I agree with Ed that this discussion is way ahead of current
> practice. I think at most we will see in certified applications very
> conservative use of single inheritance with very limited use of
> dynamic dispatching. Cyrille, what do you think?

(I'm not Cyrille ;-) ).

My position for the OO panel is that I doubt that application of the LSP will be
acceptable in practice at levels A and B. That's why I'm proposing a model that
would make pessimistic testing acceptable, precisely by limiting dispatching to
a small set of easily recognizable places.

If that turns out to be more practical than the LSP approach, that would be a
very good point for Ada against Java and C++ !

****************************************************************

From: Cyrille Comar
Sent: Sunday, April  3, 2011  8:25 AM

DO-178C doesn't ban use of multiple inheritance. It just differentiates
inheritance of interfaces from inheritance of implementation and claims the
former is less troublesome, without imposing additional activities on the
latter. It is thought that the new LSP verification is a sufficiently increased
burden to limit its use.

Now, concerning the use of interfaces at the highest level of DO-178C, I don't
think it is ahead of practice by a long shot; actually I know that it is being
used today (albeit carefully) by a customer of ours who plans to go to
certification in the coming year. That is still DO-178B, but since this standard
is silent in that respect, DO-178C drafts will be used as a reference until it
becomes official. I also can't imagine how the Java crowd could live without
interfaces, and the intersection of people interested in level A and Java users
is far from empty, from what I have seen at the DO-178C committee.

****************************************************************

From: Stephen Michell
Sent: Saturday, April  2, 2011  3:59 PM

The reason that OOP is mistrusted in the HI field is mainly because of dynamic
dispatching. Ada's static dispatching model is supported in Spark and is being
accepted, but for most other languages, dynamic dispatching is all that they
have and you cannot do enough static analysis of the code to ensure that all
possible resolutions of a dispatching call will behave safely.

****************************************************************

From: Cyrille Comar
Sent: Sunday, April  3, 2011  8:37 AM

The static analysis tool that I am most familiar with (and I'm not the only one
on this list ;-)) does enough static analysis to ensure the above, depending on
how you define "will behave safely"; but it should be able to detect a case
where a dispatching call would land on a primitive that has too strong a
precondition, for instance. (Those who know better, please correct me.)

Note that what you call "Ada's static dispatching model" is considered by
some as pretty unsafe because it violates the "simple dispatching rule" that
guarantees that the right method is called at run-time for the right object.
This is a pretty controversial topic and I don't want to restart a thread on
this but wanted to mention that the opinion expressed above is not
representative of the whole HI community.

****************************************************************

From: Tucker Taft
Sent: Saturday, April  2, 2011  7:50 PM

This isn't just for DO-178C.

****************************************************************

From: Robert Dewar
Sent: Saturday, April  2, 2011  8:48 PM

Sure, but to me, the DO-178C standard is the most convincing argument that this
business of weakening of preconditions makes sense (to me it otherwise does NOT
make sense!)

****************************************************************

From: Tucker Taft
Sent: Saturday, April  2, 2011  9:11 PM

Apparently the folks who worked on DO-178C believe in LSP.  This isn't because it
is somehow unique to safety-critical software.  Rather it is because it is a
well established approach to making sense out of OOP.  The fact that they don't
want to adopt all of the complexities of OOP doesn't somehow lessen the
relevance of LSP to those who do want to use, for example, multiple inheritance.
The fact that the DO-178C folks believe in OO with LSP means that many of those
who are more adventuresome will feel the same way.  If Ada drops support for
LSP on interfaces just because DO-178C might not use interfaces, to me that is
just plain weird.  We have interfaces, whether or not DO-178C chooses to use
them. Many of those who use interfaces will be interested in the LSP approach,
and it would be weird to support it just for the subset that we *think* DO-178C
folks are interested in (in fact, I have not seen anything that indicates they
won't use interfaces).

****************************************************************

From: Robert Dewar
Sent: Saturday, April  2, 2011  9:40 PM

I think you are misunderstanding the motivations here somewhat.

The issue in DO-178C is coverage testing which conceptually involves testing
every possible flow control path to make sure it is taken.

For dynamic dispatching this would mean making sure that for every call and
every method that corresponds, the path is indeed taken.

This is impractical for two reasons.

a) too many paths

b) not all of them are possible anyway

LSP is seen as a means to satisfactorily allow more limited testing, the idea
being that if one path is taken, then since in some sense all possible paths are
the "same", this is good enough.

****************************************************************

From: Jean-Pierre Rosen
Sent: Sunday, April  3, 2011  3:44 AM

Let's be clear:

1) LSP is not automatic, and it is possible to give very convincing examples of
   violations of the LSP. If a program obeys the LSP, some interesting
   properties can be proven (like not requiring pessimistic testing for DO178C).
   Where the language can help is by a) making it easier to show that LSP is
   being obeyed, and b) raising an exception if it is assumed and not obeyed.
   The issue is NOT about forcing LSP on every program.

2) If the caller has a certain view of an object, he is bound by that view, and
   nothing else. If I have an object that belongs to A'Class, and the actual is
   a B (derived from A), my part of the contract is only in fulfilling the
   preconditions for A - hence the "weakening" semantics of preconditions.
   Everybody (I hope!) agrees with that. But see below for interfaces.

3) There are two ways of enforcing the contract: statically (automatic "oring"
   of preconditions), or dynamically (raise an exception in case of precondition
   failure).

Now, the whole issue is what to do with interfaces. Automatic "oring" simply
does not work, as pointed out repeatedly. Why? Because implementing an interface
is not the same thing as inheriting. Inheritance happens serially between
classes that belong to a common family, organized by increasing specialization.
Implementing an interface is done by classes that have no relationship
whatsoever, and all implementations of interfaces of a given class happen in
parallel. (Randy - and many people - view interfaces as a kind of MI; I don't.
That's part of the disagreement.)

What I'm proposing is automatic "oring" of preconditions for derivation, but not
between the various progenitors. If you call an operation through a view of one
progenitor, other preconditions (whether specific or class-wide) are checked at
run-time. In other words, this would have the effect that preconditions from a
serial branch are "ored" (statically), and preconditions from parallel branches
are "anded" (dynamically). Might make more sense if stated this way...

As far as the LSP is concerned: it is automatically enforced at compile time for
derivation (serial branches), and dynamically enforced between parallel
branches. IOW, if you use an operation that has its own preconditions to
implement an operation from an interface, it is your responsibility to make sure
that you don't violate LSP (possibly by "oring" the preconditions from the
interface into the implementing routine yourself).

Note that with this model, I'm now moving in favor of allowing preconditions on
interfaces; otherwise an interface without preconditions could be implemented
only by a procedure without preconditions (if you want the LSP, of course). In a
sense, an interface has to balance between very strong preconditions (making
implementation easier, since it leaves much space for weakening) and weak
preconditions (more user-friendly, more burden for the implementation).

****************************************************************

From: Tucker Taft
Sent: Sunday, April  3, 2011 10:07 AM

I don't think there is always such a stark contrast between interfaces and
non-interface types.  I suspect that as Ada 2005 becomes more widely used, we
will see some projects move what were abstract tagged types into being
interfaces, and some projects that initially use interfaces for everything move
some of the interfaces back to being "normal" tagged types, just so they can
provide more flexible "default" implementations.  Or we will see what happens in
Java, where many interfaces have "default" non-interface types, which implement
the interface, and can be used as a place to provide more complex defaults.

So I think we would be interfering with natural evolution if we set up
significantly different rules between "normal" tagged types and interfaces.
Also, we have in several places made certain that whether a particular interface
is considered the "parent" type or one of the "progenitor" types has no net
effect on semantics. It affects the vocabulary, but that is about all. We should
be careful to preserve that.

****************************************************************

From: Bob Duff
Sent: Sunday, April  3, 2011 10:31 AM

An interface is conceptually the same thing as an abstract type with
all-abstract ops.  Interfaces are a bit more flexible, that's all.

Therefore, it makes no sense to me to place arbitrary restrictions on
interfaces, different from abstract types.

It also makes no sense to make them behave differently in some subtle way.

****************************************************************

From: Dan Eilers
Sent: Sunday, April  3, 2011 11:19 AM

In this example, T1a is an abstract type, and T1b is a limited interface.
So you would not expect them to behave differently in some subtle way.
But they behave differently with regard to inheritance of limitedness, causing
T3a derived from T1a to be legal, and T3b derived from T1b to be illegal.


package P is

   type T1a is abstract tagged limited null record;
   type T1b is limited interface;

   type T2 is limited null record;

   type T3a is new T1a with record
      a1: T2;   -- OK
   end record;

   type T3b is new T1b with record
      a2: T2;   -- ERROR:
   end record;
end P;

****************************************************************

From: Tucker Taft
Sent: Sunday, April  3, 2011 12:03 PM

Good point, but the alternative (as explained in AARM 7.5(6.c)) was that order
mattered when there was one parent and one progenitor and both were interfaces.
We went with the rule where order didn't matter if both orders were legal, in
"type A is new B and C with ...".

But you are right: if components are being inherited, the rules on limitedness
inheritance are different. We could conceivably have said that from a visibly
limited null record type, limitedness was not inherited.

So this does give an incentive to change a null abstract tagged limited
non-interface type into being a limited interface type.  Which is probably
another reason not to have different rules in other regards, so you don't end up
in a nasty either/or -- such as, "either you can make it a non-interface type,
in which case limitedness is inherited, or you can make it an interface type, in
which case you have different limitations."

****************************************************************

From: Bob Duff
Sent: Sunday, April  3, 2011 11:29 AM

> In this example, T1a is an abstract type, and T1b is a limited interface.
> So you would not expect them to behave differently in some subtle way.
> But they behave differently with regard to inheritance of limitedness,
> causing T3a derived from T1a to be legal, and T3b derived from T1b to
> be illegal.

When I said "behave ... subtle", I was talking about run-time behavior.
Subtle compile-time rules are much less harmful.

You're right that T3b is illegal, and my previous paragraph ("Therefore, ...")
is about compile-time restrictions.  But there's no real restriction here --
just add "limited".  That's subtle, I suppose, but GNAT tells you exactly what
to do.
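
For instance (reusing Dan's declarations above), the extension becomes legal
once it is declared limited:

   type T3b is limited new T1b with record
      a2: T2;   -- OK: the extension is itself limited
   end record;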

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, April  4, 2011  7:35 AM

> I don't think there is always such a stark contrast between interfaces
> and non-interface types. I suspect that as Ada 2005 becomes more
> widely used, we will see some projects move what were abstract tagged
> types into being interfaces, and some projects that initially use
> interfaces for everything move some of the interfaces back to being
> "normal" tagged types, just so they can provide more flexible
> "default" implementations.
That's possible, but I don't think it changes anything in the discussion.

> Or we will see what happens in Java, where many interfaces have
> "default" non-interface types, which implement the interface, and can
> be used as a place to provide more complex defaults.
... IMHO, a horrible Java habit.

> So I think we would be interfering with natural evolution if we set up
> significantly different rules between "normal" tagged types and
> interfaces.
> Also, we have in several places made certain that whether a particular
> interface is considered the "parent" type or one of the "progenitor"
> types has no net effect on semantics.
> It affects the vocabulary, but that is about all.
> We should be careful to preserve that.

The different rules are not between tagged types and interfaces. They are
between derivation and multiple progenitors.

I may not have made myself clear, so here is what I'm suggesting:
- If B derives from A and both have (class-wide) preconditions, the effective
  precondition is the "or" of both (I think this is accepted)
- If a tagged type C has progenitors A and B with (class-wide) preconditions on
  operation O, the effective precondition for O is the "and" of both.

This comes from the following requirements:
1) If the user gets an object as an A'Class, the only contract on its part is in
   fulfilling the class-wide preconditions for A.
2) When the body is called (which in general is not an A), all applicable
   preconditions (whether class-wide or specific) are met, or Assertion_Error is
   raised.

Can we agree on these requirements?
Note that to obey LSP, a class needs to ensure that 1) implies 2).

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, April  4, 2011  7:37 AM

> An interface is conceptually the same thing as an abstract type with
> all-abstract ops.  Interfaces are a bit more flexible, that's all.

Yes, from an implementation point of view. The same way that all this stuff
about OOP is no more than a glorified table of pointers to subprograms. And ifs
are just gotos in disguise.

From a methodological point of view, there are differences, though.

****************************************************************

From: Edmond Schonberg
Sent: Monday, April  4, 2011  8:09 AM

>I may not have made myself clear, so here it is what I'm suggesting:
>- If B derives from A and both have (class-wide) preconditions, the effective
>  precondition is the "or" of both (I think this is accepted)
>- If a tagged type C has progenitors A and B with (class-wide) preconditions on
>  operation O, the effective precondition for O is the "and" of both.

>This comes from the following requirements:
>1) If the user gets an object as an A'Class, the only contract on its part is in
>   fulfilling the class-wide preconditions for A.
>2) When the body is called (which in general is not an A), all applicable
>   preconditions (wheter class-wide or specific) are met, or Assertion_Error is
>   raised.

That's a very intuitive description of the desired rule; I think it's the right
formulation.

****************************************************************

From: Edmond Schonberg
Sent: Monday, April  4, 2011 12:13 PM

After further discussion ... when you say:

- If a tagged type C has progenitors A and B with (class-wide) preconditions on
  operation O, the effective precondition for O is the "and" of both

I understand that you want the conjunction for INVARIANTS of the type (if it's
an A and a B, it had better have the characteristics of both). But for the
preconditions of a given operation this would mean that the caller on a
dispatching call doesn't see all the applicable requirements, and gets an
Assertion_Error in spite of having satisfied the precondition he knows about.
This can't possibly be what you want.

Leaving aside multiple progenitors, the common case is a type that inherits from
one interface and  from one concrete type that provides an implementation for
the operations of the interface. The precondition is a consistency relation
between abstract operations:  there is an abstract operation AO,  an abstract
predicate AP, and we state that AO can be called if AP is true.   Given the
corresponding operations of the concrete type, CO and CP,  we want the same
relationship between them:  if CP obtains, CO is callable.  There is no issue of
and'ing or or'ing here. A dispatching call verifies AP (which is itself a
dispatching call and dispatches to CP).  It's up to the programmer to verify
that the abstract and concrete operations are in the same relationship to each
other.

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, April  4, 2011  3:48 PM

> I understand that you want the conjunction for INVARIANTS of the type
> (f it's an A and a B it better have the characteristics of both). But
> for the preconditions of a given operation this would mean that the
> caller on a dispatching call doesn't see all the applicable
> requirements, and gets an Assertion_Error in spite of having satisfied
> the precondition he knows about. This can't possibly be what you want.
Yes it is.

If this happens, it means that the implementation of the interface does not meet the LSP. I don't think it is our business to *require* the LSP everywhere - there are too many examples where the LSP does not work.

I'm just saying that it's the implementor's business to make sure that the LSP is met (if desired) - possibly by manually "oring" the precondition from the interface into its own preconditions.

Note that this is exactly similar to what happens if a specific precondition is stronger than the class-wide precondition.

> Leaving aside multiple progenitors, the common case is a type that
> inherits from one interface and from one concrete type that provides
> an implementation for the operations of the interface. The
> precondition is a consistency relation between abstract operations:
> there is an abstract operation AO, an abstract predicate AP, and we
> state that AO can be called if AP is true. Given the corresponding
> operations of the concrete type, CO and CP, we want the same
> relationship between
> them: if CP obtains, CO is callable. There is no issue of and'ing or
> or'ing here. A dispatching call verifies AP (which is itself
> dispatching a dispatches to CP). It's up to the programmer to verify
> that the abstract and concrete operations are in the same relationship
> to each other.

I don't get you here: if you check AP, you check AP, not CP!

Once again, I think there is a fundamental difference between interfaces and
inheritance. IMHO, an interface is written by the user of the interface. Take a
"serializable" interface for example. It basically says: "if you give me a type
with a Put operation, I can save it to a disk". A precondition says: "I promise
never to call Put on a closed file" (and therefore you don't need to check it
yourself).
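
A minimal sketch of that idea, assuming the file parameter is an
Ada.Text_IO.File_Type (the interface and operation names are hypothetical): the
class-wide precondition records the caller's promise never to call Put on a
closed file.

   with Ada.Text_IO;
   package Serialization is
      type Serializable is interface;
      procedure Put (Item : Serializable;
                     File : Ada.Text_IO.File_Type) is abstract
        with Pre'Class => Ada.Text_IO.Is_Open (File);
      -- The caller promises the file is open; implementations need not
      -- re-check it themselves.
   end Serialization;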

If the implementer has extra conditions, and these are not met, it raises an
exception. That's what happens if the user pulls out the diskette - sorry the
USB key ;-) - while the Put is in progress.

If you want to use LSP proofs, you have to prove that no implementation has
stronger conditions than what is promised by the interface's preconditions.

****************************************************************

From: Edmond Schonberg
Sent: Monday, April  4, 2011  6:54 PM

You can't check AP, it's abstract!  The call to AP in the precondition is a
dispatching call, which executes CP of course.

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, April  5, 2011  3:13 AM

I'm completely confused here. A (class-wide) precondition is a boolean
expression that follows "with Pre'class =>". How can it be abstract?

****************************************************************

From: Tucker Taft
Sent: Tuesday, April  5, 2011  6:25 AM

The functions it calls can be abstract.

****************************************************************

From: Edmond Schonberg
Sent: Tuesday, April  5, 2011  7:36 AM

Because it can only be a call to a subprogram, you have no components with which
to write an expression. That subprogram is itself a primitive of the interface,
and the expression will be a dispatching call.

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, April  5, 2011  1:24 PM

Huh? Is there any special visibility rule that would prevent me from writing an
expression involving only global variables? Or functions from some other
package? Or a combination of functions (primitive or not)?

****************************************************************

From: Edmond Schonberg
Sent: Tuesday, April  5, 2011  1:34 PM

Of course not, but I suspect that this will be the unusual case. All the
examples used so far assume that some primitive operations are accessors and
modifiers, and others are predicates that control the legality of the former.  I
have not seen credible examples involving existing data structures or global
entities.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, April  5, 2011  1:36 AM

A "good" precondition will depend on the values of the parameters of the
operation, that necessarily involves other primitives of an interface type
(there cannot be anything else that takes the interface type).

There isn't a language rule against writing a junk precondition (it's too hard
to define), but there is no reason to care what happens for such things. It's
irrelevant.

I don't understand why you care anyway about a concept that necessarily violates
LSP. (See my other message.) Class-wide preconditions are intended solely for
uses where LSP is preserved. As I noted in my other message, I now think these
should be illegal (unless there is a brand-new body involved). Please explain
why we should ingrain LSP violations in the language (I think that would be
horrible for uses where LSP is required, especially as it would be implicit).

****************************************************************

From: Randy Brukardt
Sent: Monday, April  4, 2011  9:49 PM

...
> The different rules are not between tagged types and interfaces. They
> are between derivation and multiple progenitors.
>
> I may not have made myself clear, so here it is what I'm suggesting:
> - If B derives from A and both have (class-wide) preconditions, the
> effective precondition is the "or" of both (I think this is accepted)
> - If a tagged type C has progenitors A and B with
> (class-wide) preconditions on operation O, the effective precondition
> for O is the "and" of both.
>
> This comes from the following requirements:
> 1) If the user gets an object as an A'Class, the only contract on its
> part is in fulfilling the class-wide preconditions for A.
> 2) When the body is called (which in general is not an A), all
> applicable preconditions (wheter class-wide or specific) are met, or
> Assertion_Error is raised.
>
> Can we agree on these requirements?

No. I think class-wide preconditions are a different beast that never needs
checking in the body.

> Note that to obey LSP, a class needs to ensure that 1) implies 2).

Right, and that is impossible given the rules that you suggest. That's why it is
unacceptable.

As soon as you combine two preconditions A and B with "and", you've violated LSP
(unless one is "True" or A = B for all objects). Moreover, you can't fix it by
adding a new Pre condition: (A and B) and (A or B) = (A and B). So if we adopt
this as the rule for multiple inheritance, you've killed LSP for all time on
interfaces. That's a non-starter.

Besides, I think you are focusing on the wrong thing. When you say that
interfaces "are not about multiple inheritance", I tend to agree -- but the
troublesome cases are all multiple inheritance cases. I don't much care about
the cases that are not problems! (So long as we don't break them.) More detail
below.

===

The discussion this weekend has given me more indication that preserving LSP is
very important. Thus I think it is critical that the class-wide things
(Pre'Class, Post'Class, and Type_Invariant'Class) never violate LSP on their
own. (Obviously, the body of a routine can violate LSP, the language cannot do
anything to ensure that.) If someone wants to violate LSP, they have to use Pre,
Post, etc. (Or just randomly raise exceptions in the body!)

It should be noted that having both a Pre'Class and a Pre for a routine
automatically violates LSP. (You can use *only* Pre and still have LSP, but
never both [unless one is trivial].) Thus I'm not even going to consider Pre
when discussing this; it's separate from any discussion of preconditions and
LSP.
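
For concreteness, a minimal sketch of such a mixed routine (names are
hypothetical, bodies omitted):

   package Mixed is
      type T is tagged null record;
      function Is_Valid (X : T) return Boolean;
      function Is_Open  (X : T) return Boolean;
      procedure Op (X : in out T)
        with Pre'Class => Is_Valid (X),   -- what a T'Class caller is told
             Pre       => Is_Open (X);    -- extra requirement of this body
      -- A dispatching caller that establishes Is_Valid (X) can still fail the
      -- specific Pre, so the body's effective precondition is stronger than
      -- its class-wide contract: the LSP violation described above.
   end Mixed;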

Anyway, I think that we need to take a more serious look at the rules to ensure
that we are not doing something that violates LSP for Pre'Class. As such, I have
a new proposal below (well, it's old, but it never was written up).

But first to motivate my suggestion:

The first thing to remember is that having multiple direct ancestors doesn't
really matter here. It is the inheritance of the subprograms that matters, not
the parent and progenitor types. If only one of the ancestor types has a
particular operation, then the Pre'Class only comes from that ancestor. That is
the normal single inheritance case. This is by far the most common case in
practice (as Bob pointed out last week).

Now, assume we do have two operations, with class-wide preconditions A (for type
A_T [any tagged type]) and B (for type B_T [an interface]). (The natural
extension occurs when there are more than two, so we'll only talk about two). So
when we inherit these, we get a precondition of (A or B). That's the only
precondition that would not violate LSP, since dispatching calls on type A_T
will check only A, and dispatching calls on type B_T will check only B.

Thus, the body will see a precondition of A or B. If the body is provided by
overriding, LSP can be preserved by writing the body properly. That will
probably require avoiding calling the parent operation (if A_T is concrete), as
it will have a stronger precondition A. Thus, "writing the body properly" may
not be easy, but it is possible, and since we've already stipulated that the
language cannot do anything to help here, that's pretty much the end of the
discussion.

However, if the operation is inherited, then there is trouble. This is
effectively calling the parent operation with a precondition of A, but the
operation's precondition is A or B. This is especially bad if the operation is
called through type B_T'Class, as in that case the only precondition check will
be B. And checking A necessarily violates LSP.
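
To put the inherited case in code, a hedged sketch (A_T, B_T, and the
predicates A and B follow the prose above; the combining type C_T and the rest
are hypothetical, and subprogram bodies are omitted):

   package Q is
      type A_T is tagged null record;
      function A (X : A_T) return Boolean;
      procedure Op (X : A_T) with Pre'Class => A (X);   -- concrete body

      type B_T is interface;
      function B (X : B_T) return Boolean is abstract;
      procedure Op (X : B_T) is abstract with Pre'Class => B (X);

      type C_T is new A_T and B_T with null record;
      overriding function B (X : C_T) return Boolean;
      -- Op is not overridden here: C_T inherits the body from A_T.  A
      -- dispatching call through B_T'Class checks only B (X), yet the
      -- inherited body was written assuming A (X); and checking A (X) at that
      -- point would itself violate LSP, since B_T'Class callers know nothing
      -- about A.
   end Q;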

Indeed, inheritance of concrete routines always will violate LSP if there is
more than one direct ancestor operation. This appears to be fundamental to me;
it is not something that we can avoid by tweaking the rules. There is no way to
avoid the problem that there are more preconditions on such a dispatching call
than the caller knows about -- which is the definition of LSP.

Note that there is a similar (but much less serious) problem with
postconditions. In this case, there are multiple postconditions that need to be
checked; the inherited routine does not know about all of them, and given that
compilers are probably going to generate postconditions inside of the subprogram
bodies, the additional ones will have to be added somehow.

Also note that Tucker's problem with renaming overridings is essentially this
same problem: in that case, the inherited preconditions may have no relationship
to the preconditions of the renamed routine.

=================

Given that inheritance of subprograms with multiple direct ancestors always
violates LSP, I no longer think that the language should automatically construct
such subprograms (as proposed in the current draft of AI05-0247-1). If you're
going to do something that violates LSP, you ought to have it explicitly in the
program source code where tools can see it. It's nasty to make proof tools try
to figure out the Ada inheritance rules!

Therefore, I am now leaning toward requiring overriding for such inheritance. The
only exception would be for routines that have no Pre'Class, Post'Class, or
Type_Invariant'Class that apply (for the first two, the expressions would be
"True"). We could do better than that, but it would require a messy new concept
(two preconditions for different types can never be fully conformant, so we'd
need a new version of conformance that took type differences and the associated
function differences into account). This is not likely to be that common, as
multiple inheritance of an operation is not that common.

In this multiple inheritance case, if the user can write an LSP-compliant body,
they can do so, and there is no problem. If they have to violate LSP, they can
write a body doing that, too. But we're not going to do that automatically for
them.

[The renames case then has to be just illegal. Again, a full body would work in
place of the renames, so this is no significant loss.]

==================

I'd be happy to write up a new version of AI05-0247-1 to this end, but:
(1) I'd be violating the promise of no new wording I made on Friday;
(2) I'm not certain that enough people agree with this approach to make it
    worthwhile.

I do note that this point is critical to the success of Ada 2012.
Preconditions are central to the new features in Ada, and we cannot get them
wrong (or omitted). The only alternative IMHO is to abandon all hope and
completely drop Pre'Class, et al., especially as Cyrille says that he thinks it
would be easier not to use them. But I think that would permanently deny
dispatching call-site analysis to compilers or other tools without full source
capability -- not what I signed up for.

So I would like to see if there is any way to get a resolution of this on
Thursday; otherwise, Ada 2012 will be delayed for a year...

****************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, April  6, 2011  3:03 AM

> Of course not, but I suspect that this will be the unusual case. All
> the examples used so far assume that some primitive operations are
> accessors and modifiers, and others are predicates that control the
> legality of the former.  I have not seen credible examples involving
> existing data structures or global entities.

External protection against concurrent access: the caller must hold some
semaphore.

There are also often conditions involving the non-controlling parameters.

In any case, we can't assume that "it can only be a call to a subprogram [...
which] is itself a primitive of the interface".

****************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, April  6, 2011  3:35 AM

> I don't understand why you care anyway about a concept that
> necessarily violates LSP. (See my other message.) Class-wide
> preconditions are intended solely for uses where LSP is preserved. As
> I noted in my other message, I know think these should be illegal
> (unless there is a brand-new body involved). Please explain why we
> should ingrain LSP violations in the language (I think that would be
> horrible for uses where LSP is required, especially as it would be implicit).

About the LSP:
It is a Substitution Principle. An object obeys the LSP if it can be used in a
place where one of its ancestors is expected, without the caller noticing it.
That is, if B is an A, you can safely substitute a B for an A. Note that it is
about objects, not operations.

Of course, to obey the LSP, an object must provide operations whose
preconditions are weaker and postconditions stronger (won't restate it here).

Note that the LSP is somewhat of an illusion. There have to be some differences
between A and B, otherwise they would be identical! The whole point is to define
which differences are "relevant" and which are not. Note also that when it comes
to real-time, "relevant" properties include WCET, blocking status, etc. To meet
LSP, the root class must announce a WCET that is worse than any of its
descendants!

The LSP is useful for HI systems, because it is expected (I am not fully
convinced, but that is another story) to harness the combinatorial explosion
implied by pessimistic testing. The approach has two steps: 1) show that the
system behaves correctly using only root classes (or prove that preconditions
are sufficient to guarantee correct behaviour of the system), and 2) prove that
every class obeys the LSP.

Our problem is what kind of support the language should provide. I don't think
that we should force the LSP on everybody; we had some convincing examples -
including from you, Randy - where it does not apply. My position is that
applying the LSP or not depends on the project and is a methodological aspect,
not a language aspect.

From the language point of view, I think the best we can do is to ensure my two
previous requirements.

The difference between types derived from a tagged type, and types that
implement an interface, is that the latter do not form a consistent, homogeneous
set - they gather unrelated types. This observation leads me to a suggestion: do
not allow Pre'Class on interfaces, but allow only Pre. Every implementation of
an operation would simply have an additional Pre, coming (I do not dare say
inherited) from its interface. It would work the same way we (almost) agreed
already.

****************************************************************

From: Bob Duff
Sent: Wednesday, April  6, 2011  9:20 AM

> Note that the LSP is somewhat of an illusion. There has to be some
> differences between A and B, otherwise they would be identical!

Yes, that's quite true.  It's easy to forget that LSP is not some perfectly
general principle.  It can only work with respect to certain well-understood
properties.

> Our problem is what kind of support the language should provide. I
> don't think that we should force the LSP on everybody, we had some
> convincing examples - including by you Randy - where it does not apply.

Agreed.

I don't think anybody is forcing LSP on anyone.  The almost-agreed-upon idea is
that Pre'Class/Post'Class/Invariant'Class do something like LSP, and
Pre/Post/Invariant do not.

> My position is that applying or not the LSP depends on the project and
> is a methodological aspect, not a language aspect.

Agreed.  Programmers can choose the 'Class versions, or not.

> The difference between types derived from a tagged type, and types
> that implement an interface, is that the latter do not form a
> consistent, homogeneous set - they gather unrelated types. This
> observation leads me to a suggestion: do not allow Pre'class on
> interfaces, but allow only Pre.

I'm strongly opposed to this restriction.  It's doing exactly what you said you
didn't want above -- forcing methodological stuff at the language-design level.

It means you can't do anything LSP-like if you use interfaces.

I also strongly object to Ada going its own way, when this stuff has been worked
out by Eiffel (except for the important distinction between specific and
class-wide types).  Eiffel imposes no such restrictions, and has reasonable
semantics for class-wide preconditions.

I'd be happy to implement GNAT-specific Restrictions along the lines you
suggest.  Imposing arbitrary restrictions is the business of programmers/project
leaders, not language designers.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, April  6, 2011  2:06 PM

...
> Our problem is what kind of support the language should provide. I
> don't think that we should force the LSP on everybody, we had some
> convincing examples - including by you Randy - where it does not
> apply.
> My position is that applying or not the LSP depends on the project and
> is a methodological aspect, not a language aspect.

Right. And to that end, Ada provides both LSP-compliant aspects (Pre'Class,
etc.) and ones that don't require LSP (Pre, etc.).

>  From the language point of view, I think the best we can do is to
> ensure my two previous requirements.

But those requirements (the second one, specifically) deny the use of LSP to
interfaces. That's not "giving the project the choice to apply LSP"!

> The difference between types derived from a tagged type, and types
> that implement an interface, is that the latter do not form a
> consistent, homogeneous set - they gather unrelated types. This
> observation leads me to a suggestion: do not allow Pre'class on
> interfaces, but allow only Pre. Every implementation of an operation
> would simply have an additionnal Pre, comming (I do not dare say
> inherited) from its interface. It would work the same way we (almost)
> agreed already.

Pre is not inherited, so there is no possibility of "Pre" coming from its
interface. And if we were to add such a thing, we again have to decide how they
are combined. If they are combined with "and" (which is what you are
suggesting), you are denying the use of LSP to interfaces.

If anything, it is more important to have LSP for interfaces. LSP is all about
making it possible to reason about dispatching calls. For regular inheritance,
there is the possibility of statically bound calls, for which LSP just gets in
the way. But the only way to use an interface is with dispatching calls.

One can imagine other ways to allow more flexibility on how preconditions
combine, perhaps by specifying how they combine when the overriding routine is
defined. (I'm convinced body inheritance cannot be allowed in these cases.) But
specifying combining adds a lot of complication to the language and its use, so
I'm dubious that it is a good idea. ("Modern Eiffel" requires that additional
preconditions be flagged with "else"; this seems like a good idea to me simply
because remembering that other preconditions are inherited seems difficult. One
could imagine allowing flagging with other keywords as well. Such flagging could
be restricted by tools to ensure LSP in such environments.)

****************************************************************

From: Bob Duff
Sent: Wednesday, April  6, 2011  10:27 AM

> I'd be happy to write up a new version of AI05-0247-1 to this end, but:
> (1) I'd be violating the promise of no new wording I made on Friday;

I'm glad you kept that promise!  But I'd like to understand this particular rule
you propose.  You'll have to explain it either in email or during the meeting.

> (2) I'm not certain that enough people agree with this approach to
> make it worthwhile.

I don't understand the proposal, so I'm not sure.  [...pause] OK, Ed just
clarified for me:

    type T3 is [interface] new T1 and T2;

And suppose both T1 and T2 have operation P ("same" profile).

    If T1 and T2 are interfaces, there's no new restrictions.
    (We already require overriding P in this case if both
    P's are abstract, and T3 is concrete.)

    If T3 is abstract (interface or not), then there's no restriction.

The only restriction would be if the P for T1 is a concrete subprogram (so we're
inheriting a body, and your proposal is to require that body to be overridden).

Is that right?  If so, I think it's a reasonable restriction.
But I also think we ought to be learning what Eiffel does in similar cases.
Note that in Eiffel, you can actually be inheriting 2 bodies (not possible in
Ada, other than "null").

****************************************************************

From: Randy Brukardt
Sent: Wednesday, April  6, 2011  2:25 PM

> > I'd be happy to write up a new version of AI05-0247-1 to this end, but:
> > (1) I'd be violating the promise of no new wording I made on Friday;
>
> I'm glad you kept that promise!  But I'd like to understand this
> particular rule you propose.  You'll have to explain it either in
> email or during the meeting.

OK, it would be something like:

An inherited operation requires overriding if all of the below are true:

* There are two or more inherited homographs with the same defining name and
  profile; and

* At least one of the inherited homographs is not an abstract subprogram; and

* At least one of the inherited homographs has a class-wide precondition or
  postcondition that is not True, or is primitive for a type that has
  Type_Invariant'Class specified.
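
For instance, here is a hedged sketch of a declaration that would require
overriding under this rule (all names are hypothetical, subprogram bodies are
omitted):

   package R is
      type P_T is tagged null record;
      procedure Op (X : P_T);                    -- concrete, no Pre'Class

      type I_T is interface;
      function Is_Ready (X : I_T) return Boolean is abstract;
      procedure Op (X : I_T) is null
        with Pre'Class => Is_Ready (X);          -- nontrivial Pre'Class

      type D_T is new P_T and I_T with null record;
      overriding function Is_Ready (X : D_T) return Boolean;
      -- D_T inherits two homographs of Op; the one from P_T is not abstract,
      -- and the one from I_T has a class-wide precondition other than True,
      -- so under the rule above Op would require explicit overriding:
      overriding procedure Op (X : D_T);
   end R;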

> > (2) I'm not certain that enough people agree with this approach to
> > make it worthwhile.
>
> I don't understand the proposal, so I'm not sure.  [...pause] OK, Ed
> just clarified for me:
>
>     type T3 is [interface] new T1 and T2;
>
> And suppose both T1 and T2 have operation P ("same" profile).
>
>     If T1 and T2 are interfaces, there's no new restrictions.
>     (We already require overriding P in this case if both
>     P's are abstract, and T3 is concrete.)
>
>     If T3 is abstract (interface or not), then there's no restriction.
>
> The only restriction would be if the P for T1 is a concrete subprogram
> (so we're inheriting a body, and your proposal is to require that body
> to be overridden).
>
> Is that right?  If so, I think it's a reasonable restriction.

Yes, but note several things:

* This only applies if class-wide contracts are in use. That's both for
  compatibility and to put no restrictions on non-LSP bodies.

* This applies to any concrete body, including null procedures. Indeed, it was
  the problems with null procedures that brought up this problem in the first
  place. If the routines are all abstract, either they require overriding
  anyway, or there is no problem because they can't be called.

* I didn't put any restrictions on the type; specifically, I make the requirement
  any time that a concrete body is generated. It's possible that the "T3 is
  abstract case" above could be excluded, but I haven't tried to figure out if
  that actually works or not.

* The rule could be weaker, since the real requirement is that the class-wide
  contracts are the same. But defining "the same" is pretty hard, so I didn't
  try.

* I included the Post'Class and Type_Invariant'Class cases in the rules to fix
  the problems that they cause. Those problems are more minor and we could
  automatically make bodies in those cases without much weirdness. But since
  Tucker didn't like the automatic body rules much anyway, I just made them all
  "require overriding".

> But I also think we ought to be learning what Eiffel does in similar
> cases.  Note that in Eiffel, you can actually be inheriting 2 bodies
> (not possible in Ada, other than "null").

I tried figuring that out last month when we initially discussed this, and I
wasn't able to find anything that discussed the issue in the various references
that you and Tucker dug up. (I looked for an hour or so.) Everything I saw
always had a body; I wonder if Eiffel even has body inheritance. (If you never
have body inheritance, this problem does not come up. And it surely is possible
to have a shorthand for calling the parent operation which would look somewhat
like body inheritance but has very different semantics. That is, the bug in Ada
may be the way it describes dispatching rather than anything to do with
inheritance per se.)

Anyway, I don't think I'm going to spend more time trying to figure out what
Eiffel does; if someone else can find a reference, I'd surely be interested in
reading about it.

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, April  7, 2011  8:10 AM

>> The difference between types derived from a tagged type, and types
>> that implement an interface, is that the latter do not form a
>> consistent, homogeneous set - they gather unrelated types. This
>> observation leads me to a suggestion: do not allow Pre'class on
>> interfaces, but allow only Pre.
>
> I'm strongly opposed to this restriction.  It's doing exactly what you
> said you didn't want above -- forcing methodological stuff at the
> language-design level.

I don't see why.
But there is another, more important reason to keep Pre'Class: it has to apply
to interfaces derived from other interfaces. I withdraw the suggestion (pending
tonight's discussion).

> It means you can't do anything LSP-like if you use interfaces.

No. It means that if you want to support LSP, you have to repeat the
precondition from the interface in your implementing subprograms.

> I also strongly object to Ada going its own way,

I strongly agree with that.

> when this
> stuff has been worked out by Eiffel (except for the important
> distinction between specific and class-wide types).  Eiffel imposes no
> such restrictions, and has reasonable semantics for class-wide
> preconditions.

I could not find anything about what happens when several preconditions are
inherited in parallel. All I could find is the usual weakening/strengthening for
linear inheritance. I'd be glad if you had a reference.

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, April  7, 2011  8:30 AM

> But those requirements (the second one) denies the use of LSP to interfaces.
> That's not "giving the project the choice to apply LSP"!

I repeat it here:

> 2) When the body is called (which in general is not an A), all
> applicable preconditions (wheter class-wide or specific) are met, or
> Assertion_Error is raised.

If you don't agree with this, it means that you are happy if a body gets called
with some of its preconditions false. ?????

-----
In order to save time for the discussion, let me summarize what I think we all
agree on:

1) Pre is independent from Pre'Class, and has nothing to do with the LSP.

2) When a tagged type (or interface) is derived from a single type (or
   interface), and has preconditions, these preconditions are "ored" with those
   of the parent. (TBH: not sure when a tagged type is derived from only one
   interface)

-----
The issue is when a tagged type has multiple progenitors. Let's have:

A root tagged type RTT;
A root interface RI;
A derived interface DI (derived from RI);

each with a primitive operation OP, and preconditions are specified for OP by
each.

Now, have a derived type DT is new RTT and DI;

How are the various preconditions combined if
1) OP is overridden (with its own preconditions)
2) OP is not overridden

My suggested formula is:
(PRE'Class(RTT) or PRE'Class(DT)) and (PRE'Class(RI) or PRE'Class(DI))

i.e.: "or" for serial inheritance, "and" for parallel inheritance

Please provide your alternative formulas! (including what's provided by other
languages, if you know about it)

****************************************************************

From: Tucker Taft
Sent: Thursday, April  7, 2011  8:50 AM

> How are the various preconditions combined if
> 1) OP is overriden (with its own preconditions)
> 2) OP is not overriden
>
> My suggested formula is:
> (PRE'Class(RTT) or PRE'Class(DT)) and (PRE'Class(RI) or PRE'Class(DI))
>
> i.e.: "or" for serial inheritance, "and" for parallel inheritance
>
> Please provide your alternative formulas! (including what's provided
> by other languages if you know about)

Disjunction, aka "or", between all is the standard, I believe.
See the article I referenced (and quoted in my prior e-mail).
Also, in Eiffel, as explained in:
   http://docs.eiffel.com/book/method/et-inheritance#Join_and_uneffecting

Join and uneffecting

It is not an error to inherit two deferred features from different parents under
the same name, provided they have the same signature (number and types of
arguments and result). In that case a process of feature join takes place: the
features are merged into just one -- with their preconditions and
postconditions, if any, respectively or-ed and and-ed.

****************************************************************

From: Bob Duff
Sent: Thursday, April  7, 2011  9:12 AM

> > But those requirements (the second one) denies the use of LSP to interfaces.
> > That's not "giving the project the choice to apply LSP"!
>
> I repeat it here:
> > 2) When the body is called (which in general is not an A), all
> > applicable preconditions (wheter class-wide or specific) are met, or
> > Assertion_Error is raised.
> If you don't agree with this, it means that you are happy if a body
> gets called with some of its preconditions false. ?????

Yes, I am happy with that.  So are Tucker, Randy, B. Meyer, etc.  ;-)

That's what "or" means -- I am happy that "Foo or Bar" is True, even though Foo might be False, and Bar might be False.

> -----
> In order to save time for the discussion, let me summarize what I
> think we all agree on:
>
> 1) Pre is independent from Pre'Class, and has nothing to do with the LSP.
>
> 2) When a tagged type (or interface) is derived from a single type (or
> interface), and has preconditions, these preconditions are "ored" with
> those of the parent.

Yes, I think we all agree on the above.

>... (TBH: not sure when a tagged type is derived from  only one
>interface)

I'm pretty sure interfaces shouldn't be special here.

> -----
> The issue is when a tagged type has multiple progenitors. Let's have:
>
> A root tagged type RTT;
> A root interface RI;
> A derived interface DI (derived from RI);
>
> each with a primitive operation OP, and preconditions are specified
> for OP by each.
>
> Now, have a derived type DT is new RTT and DI;
>
> How are the various preconditions combined if
> 1) OP is overridden (with its own preconditions)
> 2) OP is not overridden
>
> My suggested formula is:
> (PRE'Class(RTT) or PRE'Class(DT)) and (PRE'Class(RI) or PRE'Class(DI))
>
> i.e.: "or" for serial inheritance, "and" for parallel inheritance

IMHO "and" should be "or".

> Please provide your alternative formulas! (including what's provided
> by other languages if you know about)

Yes, the above is a good description of the issue (although you don't need RI to
illustrate the issue).  Let me write it out in code:

    package P1 is
        type RTT is tagged ...;
        procedure OP(X: RTT) with Pre'Class => Is_Green(X);
    end P1;

    package P2 is
        type RI is interface;
        procedure OP(X: RI) is abstract with Pre'Class => Is_Blue(X);
    end P2;

    package P3 is
        type DI is interface and RI;
        overriding procedure OP(X: DI) is abstract with Pre'Class => Is_Red(X);
    end P3;

    package P4 is
        type DT is new RTT and DI with ...;
        overriding procedure OP(X: DT) with Pre'Class => Is_Purple(X);
    end P4;

    package body P4 is
        overriding procedure OP(X: DT) is
        begin
            -- Here, we can assume
            -- Is_Green(X) or Is_Blue(X) or Is_Red(X) or Is_Purple(X).
            ...
        end OP;
    end P4;

(with's, use's, and declarations of the Is_* predicate functions omitted above)

    with P2;
    procedure Client (X : P2.RI'Class) is
    begin
        if Is_Blue(X) then
            P2.OP(X); -- dispatch
        end if;
    end Client;

Now suppose Client is passed X'Tag = DT'Tag.
But Client has never heard of RTT, DI, or DT.
It correctly checks the only precondition it knows about (Is_Blue), which
ensures that when we get to the body of OP, the precondition "Is_Green(X) or
Is_Blue(X) or Is_Red(X) or Is_Purple(X)" is True.  If that body blows up when
Is_Green is True but Is_Red is False, that's bad, because there's no way the
Client could have known about that.

If you don't like that, then you can use Pre instead of Pre'Class, and use some
other means to prove things won't fail.  That other means will necessarily break
abstraction, which may or may not be OK.
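
To make the contrast concrete, here is a minimal compilable sketch of a
specific (non-class-wide) precondition; the type and predicate names are
invented for illustration and assume Ada 2012 syntax:

    package Specific_Pre_Demo is
        type T is tagged null record;
        function Is_Valid (X : T) return Boolean is (True);  -- stand-in predicate
        -- A specific Pre is never "or"ed with anything else; a call of Op
        -- simply checks this one expression.
        procedure Op (X : T) is null with Pre => Is_Valid (X);
    end Specific_Pre_Demo;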

****************************************************************

From: Randy Brukardt
Sent: Thursday, April  7, 2011  11:55 AM

...
> In order to save time for the discussion, let me summarize what I
> think we all agree on:
>
> 1) Pre is independent from Pre'Class, and has nothing to do with the
> LSP.
>
> 2) When a tagged type (or interface) is derived from a single type (or
> interface), and has preconditions, these preconditions are "ored" with
> those of the parent. (TBH: not sure when a tagged type is derived from
> only one interface)

Right.

> -----
> The issue is when a tagged type has multiple progenitors. Let's have:
>
> A root tagged type RTT;
> A root interface RI;
> A derived interface DI (derived from RI);
>
> each with a primitive operation OP, and preconditions are specified
> for OP by each.
>
> Now, have a derived type DT is new RTT and DI;
>
> How are the various preconditions combined if
> 1) OP is overridden (with its own preconditions)
> 2) OP is not overridden
>
> My suggested formula is:
> (PRE'Class(RTT) or PRE'Class(DT)) and (PRE'Class(RI) or PRE'Class(DI))
>
> i.e.: "or" for serial inheritance, "and" for parallel inheritance
>
> Please provide your alternative formulas! (including what's provided
> by other languages if you know about)

I've done this repeatedly, but you don't seem to read it.

They all have to "or"ed together. Anything else violates LSP a-priori (unless
the preconditions are exactly the same). I've explained why in a previous
message, I don't have time to repeat that before the call.

Implementation inheritance cannot work in this case (whether you use "and"
or "or"!!). Thus is it banned (you must provide an overriding). I think you are
trying to make implementation inheritance work, but any rule you choose will
fail at least one of the requirements (usually more than one).

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, April  7, 2011  10:28 AM

> To put it another way:  If type T inherits from interface I, it is
> promising that any dispatching call via I'Class will not blow up.  And
> that's true for each such I.  Therefore, the precondition of T must be
> the same as, or weaker than, the precondition of each such I.

I agree. That's what the programmer should ensure. But what if he doesn't?

We are not giving advice to programmers here; we are defining language rules.
Here is the problem:

I has precondition PI.

T has precondition PT, which, as stated by the programmer, is not weaker than
PI.

What happens? I see three possibilities:
1) Statically require PT to be weaker than PI => compile-time error.
Best for LSP, but I doubt we want to go that far.

2) Decide that the effective precondition for the call is PI or PT. This implies
that the subprogram can be called with its own, specified, explicit precondition
false. That's what I don't like - although proof making tools will consider the
effective precondition. But I'm afraid that manual inspection is very likely to
miss the implicit "oring". Eiffel solves this by requiring the "else", which
acts as a reminder to the reader.

3) When called from I'Class, check PT and raise Assertion_Error if it does not
hold. That's the dynamic version of 1), but is in effect "and".

As I write this, I think I could live with 2) provided:
- there is special syntax (as in Eiffel)
- if there is no inherited precondition, forbid a precondition on the
  implementing operations (that is the case where 1) is easily implementable).

****************************************************************

From: Randy Brukardt
Sent: Thursday, April  7, 2011  12:01 PM

This has to be statically illegal (technically, require overriding). The
compiler should not be in the business of automatically building bodies that
violate LSP (and this case is always an implicit body).

****************************************************************

From: John Barnes
Sent: Thursday, April  7, 2011  5:04 AM

Oh dear. When I was writing the intro to the rat and wrote "There are also
aspects Pre'Class and Post'Class for use with tagged types", I thought "I fear
there is trouble ahead".

What to do? Certainly don't mess up the plain Pre. Make the class ones distinct.
Possibilities are

1  Give it up until next time

2  Have two different attributes for preconditions

3  Could have two for post as well for symmetry but make them the same

4 Use a temporary name so that it can be made redundant next time when a better
  solution is found, e.g. Pre'Classtemp, Pre'Crass, Pre'Mix, Pre'Concrete,
  Pre'Monition...

Hmm. Better read the next one.

****************************************************************

From: Bob Duff
Sent: Wednesday, April  6, 2011  9:25 AM

General comment about "contract" stuff:

Anybody who wants to do things differently from the way Eiffel works should: (1)
understand the Eiffel rules, and (2) present concrete reasons why Ada should be
different.

****************************************************************

From: Tucker Taft
Sent: Wednesday, April  6, 2011  9:50 AM

Here is a good paper on this whole topic, as it relates to JML, Java Modeling
Language, which was inspired by Eiffel and Larch. It talks about "behavioral
subtyping" which is pretty much what we are arguing about:

    http://www.eecs.ucf.edu/~leavens/JML/fmco.pdf

****************************************************************

From: Bob Duff
Sent: Thursday, April  7, 2011  7:03 AM

Thanks, good paper, I read it on the train last night.

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, April  7, 2011  7:33 AM

I found it quite disappointing, as far as our problem is concerned. It does not
address the case when you inherit from a class and an interface (both with
preconditions).

***************************************************************

From: Tucker Taft
Sent: Thursday, April  7, 2011  8:28 AM

It covers that case in the following (from page 12), I believe:

The meaning of specification cases conjoined by also can be a bit subtle.
However, it is easiest to just keep in mind that all specification cases of all
inherited methods have to each be obeyed by a method. If, for a given method,
the subtype and supertypes all specify the same precondition and assignable
clause, then the conjoined specification will be equivalent to a single
specification case whose precondition and assignable clause are the same as in
the individual specification cases, and with a postcondition that is the
conjunction of the postconditions in the individual specification cases. If
different preconditions are given in a sub- and supertype the meaning of the
conjoined specification cases is more involved: the precondition of the
conjoined specification will effectively be the disjunction of the
preconditions from the individual specification cases, and the postcondition
of the conjoined specification will effectively be a conjunction of
implications, where each precondition (wrapped in \old()) implies the
corresponding postcondition. This effective postcondition is slightly weaker
than the conjunction of the postconditions, since each postcondition only has
to apply in case the corresponding precondition was satisfied [DL96].
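
Restated in the notation used earlier in this thread, with (P1, Q1) and
(P2, Q2) standing for the pre/postconditions of the individual specification
cases, the paper's rule amounts to:

    effective precondition  :  P1 or P2
    effective postcondition :  (old P1 implies Q1) and (old P2 implies Q2)

so each postcondition is only promised when its own precondition held on
entry, which is why this is weaker than simply "Q1 and Q2".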

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, April  7, 2011  8:51 AM

> It covers that case in the following (from page 12), I believe:

Not to my understanding (but I may have misunderstood)

> The meaning of specification cases conjoined by also can be a bit
> subtle. However, it is easiest to just keep in mind that all
> specification cases of all inherited methods have to each be obeyed
> by a method.

If we interpret that as multiple inherited preconditions, it would mean that
they are "anded".

> If, for a given method, the subtype and supertypes all specify the
> same precondition and assignable clause, then the conjoined
> specification will be equivalent to a single specification case whose
> precondition and assignable clause are the same as in the individual
> specification cases, and with a postcondition that is the
> conjunction of the postconditions in the individual specification cases.

That's trivially the case when there is no problem

> If
> different preconditions are given in a sub- and supertype the meaning
> of the conjoined specification cases is more involved: the
> precondition of the conjoined specification will effectively be the
> disjunction of the preconditions from the individual specification
> cases,

Note that here, supertype is now singular.
And it talks about disjunction of the "individual specification cases".
I don't get exactly what it means.

[...] stuff about postconditions stripped

****************************************************************

From: Bob Duff
Sent: Thursday, April  7, 2011  8:53 AM

> It covers that case in the following (from page 12), I believe:

Right.  And the important point is there is not even a hint that one might want
to treat interfaces differently from (abstract) classes.

There is an added complication in the paper, because their language supports the
"also" thing, which allows multiple independent preconditions with corresponding
postconditions, as in "you must obey precondition A or B, and if you obey A but
not B, you must ensure X, and if you obey B but not A, you must ensure Y".  We
don't have that feature in Ada 2012.

****************************************************************

From: Tucker Taft
Sent: Thursday, April  7, 2011  9:03 AM

>> If
>> different preconditions are given in a sub- and supertype the meaning
>> of the conjoined specification cases is more involved: the
>> precondition of the conjoined specification will effectively be the
>> disjunction of the preconditions from the individual specification
>> cases,
> Note that here, supertype is now singular.

It says "a ... supertype" not "the supertype".

> And it talks about disjunction of the "individual specification
> cases". I don't get exactly what it means.

"Disjunction" means "or" ("conjunction" means "and").

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, April  7, 2011  9:05 AM

> Right.  And the important point is there is not even a hint that one
> might want to treat interfaces differently from
> (abstract) classes.

That's not the point. It's about what happens if you inherit in parallel (but of
course, only interfaces can be inherited in parallel)

****************************************************************

From: Bob Duff
Sent: Thursday, April  7, 2011  9:20 AM

> That's not the point.

It's exactly the point.  In the following two cases:

    type A is interface with Pre => ...;
    type B is new A with Pre => ...;
    type C is new B with Pre => ...;

    type A is interface with Pre => ...;
    type B is interface with Pre => ...;
    type C is new A and B with Pre => ...;

C should end up with identical semantics in both cases.
In both cases, it gets preconditions from A and B, and those preconditions need
to be "or"ed in both cases.

>...It's about what happens if you inherit in parallel  (but of course,
>only interfaces can be inherited in parallel)

There's no "of course" about it.  Ada and Java have that restriction, but Eiffel
does not.
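
Written out in full -- the Pre'Class shorthand above really attaches to a
primitive operation, not to the type -- the two cases might look like the
following sketch; the names and the In_* predicate functions are invented for
illustration, with C1 playing the role of C in the serial case and C2 in the
parallel case (here C1 and C2 add no Pre'Class of their own, which does not
affect the point):

    package Join_Demo is
        -- Serial inheritance: A -> B -> C1.
        type A is interface;
        function In_A (X : A) return Boolean is abstract;
        procedure Op (X : A) is abstract with Pre'Class => In_A (X);

        type B is interface and A;
        function In_B (X : B) return Boolean is abstract;
        overriding procedure Op (X : B) is abstract with Pre'Class => In_B (X);

        type C1 is new B with null record;
        overriding function In_A (X : C1) return Boolean is (True);  -- stand-ins
        overriding function In_B (X : C1) return Boolean is (False);
        overriding procedure Op (X : C1) is null;
        -- Applicable class-wide precondition for a dispatching call that
        -- lands on C1's Op: In_A (X) or In_B (X).

        -- Parallel inheritance: A and B2 are unrelated interfaces.
        type B2 is interface;
        function In_B2 (X : B2) return Boolean is abstract;
        procedure Op (X : B2) is abstract with Pre'Class => In_B2 (X);

        type C2 is new A and B2 with null record;
        overriding function In_A  (X : C2) return Boolean is (True);
        overriding function In_B2 (X : C2) return Boolean is (False);
        overriding procedure Op (X : C2) is null;
        -- Under the "or" rule argued for here, again a disjunction:
        -- In_A (X) or In_B2 (X).
    end Join_Demo;

Either way, the body of Op can only assume the disjunction of the inherited
Pre'Class expressions, which is the point.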

****************************************************************

From: Tucker Taft
Sent: Thursday, April  7, 2011  9:28 AM

I trust you saw the quote from the Eiffel manual where they clearly say that,
when "joining" preconditions from multiple superclasses, you use "or".

The point is that the precondition is what is required of the caller, and if you
combine two interfaces, one of which has no preconditions, then a caller through
that interface should not be surprised by some precondition imposed after the
fact.  The bug is with the programmer who believes their type can "implement" an
interface, but in fact they can't live with the minimal precondition imposed by
that interface on its callers.  That is the Eiffel model, and that is what we
are proposing for "Pre'Class."

Implementing an interface means living with the Pre'Class preconditions
inherited from that interface as well as all of the others, which may mean
"living with" a "True" precondition over all.

****************************************************************

From: John Barnes
Sent: Thursday, April  7, 2011  9:13 AM

> There is an added complication in the paper, because their language
> supports the "also" thing, which allows multiple independent
> preconditions with corresponding postconditions, as in "you must obey
> precondition A or B, and if you obey A but not B, you must ensure X,
> and if you obey B but not A, you must ensure Y".  We don't have that
> feature in Ada 2012.

Presumably one could have two differently named subprograms inheriting different
conditions. Hmm maybe not.

****************************************************************

From: Bob Duff
Sent: Thursday, April  7, 2011  9:38 AM

Yes.  But the case we're arguing about is the (somewhat unusual) case where you
inherit the same procedure from two different parents. Never mind LSP (which is
semi-bogus, IMHO) -- the point is that the clients of one of those parents
don't know about the other one, and vice versa.  Therefore, the body of that
procedure cannot assume that BOTH preconditions are true -- it can only assume
that one or the other is true.  Because a dispatching call can come from 'Class
of either parent.

And this has nothing to do with interfaces -- if both parents are interfaces or
(if Ada allowed it) both parents are not, the argument would be the same.

To put it another way:  If type T inherits from interface I, it is promising
that any dispatching call via I'Class will not blow up.  And that's true for
each such I.  Therefore, the precondition of T must be the same as, or weaker
than, the precondition of each such I.  Hence, "or".

>... Hmm maybe not.

Definitely so.  ;-)

I really think that the burden of proof should be on those who want to violate
the rules of Eiffel or JML!

****************************************************************

From: Tucker Taft
Sent: Thursday, April  7, 2011  9:45 AM

For what it is worth, I am sympathetic with Randy's proposal that you must
override under certain circumstances.

Basically I would say that if you inherit a non-abstract, non-null body from the
parent type, and it has a Pre'Class that is not merely "True", then unless all
of the other Pre'Class's for that operation (inherited from progenitors) fully
conform to some Pre'Class already included in the Pre'Class "disjunction" for
the parent's subprogram, the subprogram must be overridden in the derived type.

This forces the programmer to reckon with the situation explicitly, rather than
just being surprised by the fact that the precondition has been weakened.  Of
course the implementation of the overriding operation could be a simple call of
the parent's operation, but at least we know then that the programmer explicitly
decided that was OK.

From the compiler's point of view, if the programmer isn't forced to do this,
the compiler probably will have to do it (that is, create a wrapper), since it
might be assuming that certain preconditions had been checked, and if they
haven't, it might have to generate the code for the operation again, and it
might give additional warnings about likely check failures, etc.
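
Using Bob's P1 .. P4 example from earlier in this thread, the required
overriding might look like the following sketch (here DT adds no Pre'Class of
its own, and the body simply calls through to the parent's operation; the
"..." elisions are carried over from that example):

    package P4 is
        type DT is new RTT and DI with ...;
        overriding procedure OP(X: DT);  -- overriding is now required
    end P4;

    package body P4 is
        overriding procedure OP(X: DT) is
        begin
            -- An explicit record that the programmer accepts the weakened
            -- class-wide precondition Is_Green(X) or Is_Blue(X) or Is_Red(X).
            OP(RTT(X));  -- statically bound call to the parent's OP
        end OP;
    end P4;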

****************************************************************

From: Bob Duff
Sent: Thursday, April  7, 2011  9:54 AM

I can go along with that.

****************************************************************

From: Bob Duff
Sent: Thursday, April  7, 2011  9:58 AM

> > I also strongly object to Ada going its own way,

> I strongly agree with that

Then you have to agree with "or" for preconditions.

****************************************************************

From: Randy Brukardt
Sent: Thursday, April  7, 2011  9:58 PM

During today's phone call, it was pointed out that the title of Section 13
("Representation Issues") no longer reflects the contents, and hasn't since Ada
95. Bob Duff suggested that we call it "Hodgepodge of stuff". John suggested
simply "Others". I think (hope??) that they both were joking.

Tucker suggested that we have a naming contest, to end at midnight EDT (-4 UTC)
next Thursday. He suggests that the prize be a bottle of scotch in Edinburgh
(although he never specified who was buying it :-).

Let me lay out a couple of rules, in case someone actually does put up a
prize:
   The first suggestion to post on the ARG list is considered the winner if
   there are multiple occurrences of the same suggestion. If a message gets
   stuck in moderation, it might be necessary to split the prize. If you are one
   of those people who make more work for me by getting all of your incoming
   messages stuck in my server's spam filter, tough if that delays it. :-) Send
   it from Gmail if you care.

   The ARG will hold an e-mail vote starting on tax day (April 15th) to choose a
   winner; "stay unchanged" will be a choice, and if it wins, we'll just have to
   share the bottle during the meeting. :-)

Enter early and often!!

Here is a summary of the contents of Section 13 today:

   * Definition of aspects
   * Definition of representation items and operational items
   * Representation aspect Pack
   * Attribute_Definition_Clauses
   * Representation attributes
   * Aspect_Specifications
   * Preconditions, Postconditions, Type_Invariants
   * Enumeration and Record rep clauses
   * Package System and various children
   * Machine code insertions
   * Unchecked_Conversion
   * Data Validity, 'Valid attribute
   * Unchecked access
   * Storage pools; subpools; Unchecked_Deallocation
   * Pragma Restrictions; various restrictions
   * Pragma Profile; No_Implementation_Extensions profile
   * Streams
   * Freezing rules(!)

Humm, Bob's "Hodgepodge of stuff" is looking pretty good; seems like the leader
in the clubhouse. ;-)

I'll suggest
 "Aspects of entities and representation issues"
as my entry in the contest.

To be fair, Tucker suggested
 "Operational and Representation Control"
during the call today. There might have been other suggestions, but I didn't
write them down.

Again, enter early and often!

****************************************************************

From: Randy Brukardt
Sent: Thursday, April  7, 2011  10:12 PM

> Again, enter early and often!

Entry 2 from me:
 "Aspects of entities and low-level constructs"

[This is the union "and", it really should "or", but that doesn't help
readability any.]

****************************************************************

From: Tucker Taft
Sent: Thursday, April  7, 2011  10:28 PM

Controlling Representation and Operational Aspects

Representation and Operational Aspects

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, April  8, 2011  12:34 AM

> I'll suggest
>   "Aspects of entities and representation issues"

Representation is an aspect. I'd rather say:

"Aspect of entities and low-level issues"

(what happens when you modify a previous entry? Do we share the bottle
50/50?)

****************************************************************

From: Randy Brukardt
Sent: Friday, April  8, 2011  12:57 AM

> "Aspect of entities and low-level issues"

I think you combined both of my two previous submissions, but that's OK.

> (what happens when you modify a previous entry? Do we share the bottle
> 50/50?)

Hard to take half a bottle home, though. I doubt I can take that through
customs. :-) And I suspect that drinking half a bottle might be a bit much...

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, April  8, 2011  2:49 AM

> I think you combined both of my two previous submissions, but that's OK.

I sent that message before I read your second ones. Seems we are already
converging...

****************************************************************

From: Bob Duff
Sent: Friday, April  8, 2011  5:36 AM

> Hard to take half a bottle home, though. I doubt I can take that
> through customs. :-)

The security actors don't like liquids.  And you guys are talking about
inflammable liquids!  (Or "flammable", if you prefer.)

(earlier):

> Tucker suggested that we have a naming contest, to end at midnight EDT
> (-4
> UTC) next Thursday. He suggests that the prize be a bottle of scotch
> in Edinburgh (although he never specified who was buying it :-).

Clearly, the winner should buy it.  ;-)

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, April  8, 2011  6:59 AM

> The security actors don't like liquids.  And you guys are talking
> about inflamable liquids!  (Or "flamable", if you prefer.)

Therefore, I'm afraid we'll have to drink it during the next ARG meeting...

****************************************************************

From: John Barnes
Sent: Monday, April 11, 2011  1:12 PM

> Here is a summary of the contents of Section 13 today:
>
>   * Definition of aspects
>   * Definition of representation items and operational items
>   * Representation aspect Pack
>   * Attribute_Definition_Clauses
>   * Representation attributes
>   * Aspect_Specifications
>   * Preconditions, Postconditions, Type_Invariants
>   * Enumeration and Record rep clauses
>   * Package System and various children
>   * Machine code insertions
>   * Unchecked_Conversion
>   * Data Validity, 'Valid attribute
>   * Unchecked access
>   * Storage pools; subpools; Unchecked_Deallocation
>   * Pragma Restrictions; various restrictions
>   * Pragma Profile; No_Implementation_Extensions profile
>   * Streams
>   * Freezing rules(!)
>
> Humm, Bob's "Hodgepodge of stuff" is looking pretty good; seems like
> the leader in the clubhouse. ;-)

The real thing that sticks out for me is preconditions, postconditions and type
invariants. They seem really out of place. We have discussed this before of
course.

****************************************************************


From: Robert Dewar
Sent: Monday, April 11, 2011  1:12 PM

I agree that the placement of preconditions/postconditions etc has nothing to do
with chapter 13. These belong in the chapter on subprograms IMO.

I don't see what's wrong with having the freezing rules in chapter 13, it's
where they are relevant after all.

****************************************************************

From: John Barnes
Sent: Monday, April 11, 2011  2:48 PM

> I agree that the placement of preconditions/postconditions etc has
> nothing to do with chapter 13. These belong in the chapter on
> subprograms IMO.

Well at least we agree on something! And clearly we cannot avoid forward
references; it's a reference manual not a textbook.

****************************************************************

From: Tucker Taft
Sent: Monday, April 11, 2011  3:51 PM

If chapter 13 were seen as defining the ability to define annotations of various
sorts, where the annotations might specify representation, streaming,
pre/post-conditions, etc., it doesn't seem so far off.

Perhaps:

   Chapter 13 -- Annotating the Program

Most of chapter 13 is not about programming per se, but rather about annotating
a program, and I think it is a reasonable gathering of topics when looked at
that way.

****************************************************************

From: John Barnes
Sent: Monday, April 11, 2011  4:49 PM

That's a good viewpoint. (It's all about views as we know.) So I have to
take back what I said about agreeing with Robert!  It's bedtime
anyway.

****************************************************************

From: Tucker Taft
Sent: Tuesday, April 12, 2011  1:46 PM

Another entry:

    Chapter 13 -- Program Annotations

This would probably need to be explained in an introductory sentence or two to
say that pragmas, rep clauses, aspect specifications, etc. are all ways of
annotating an Ada program with information to control representation, run-time
checking, streaming, storage management, etc.

****************************************************************

From: John Barnes
Sent: Wednesday, April 13, 2011  8:43 AM

> Another entry:
>
>    Chapter 13 -- Program Annotations

That's even better. John

****************************************************************

From: Randy Brukardt
Sent: Wednesday, April 13, 2011  12:59 PM

> Well at least we agree on something! And clearly we cannot avoid
> forward references; it's a reference manual not a textbook.

I've previously in the "don't care" camp on this one, but in thinking about this
further, having preconditions and invariants in Chapter 13 makes no sense. We
have loads of other aspects scattered thoughout the standard, and I can't think
of a single reason why these three are different. Specifically:

   Predicates: Chapter 3 (3.2.4)
   Default_Value: Chapter 3 (3.5)
   Inline: Chapter 6 (6.3.2) [remember, this is now an aspect]
   No_Return: Chapter 6 (6.5.1) [also an aspect]

Not to mention many in the various annexes (B, C, D).

So I now have changed to Bob's position: pre- and postconditions should be
chapter 6 (either 6.1.1 or 6.3.3, the former makes more sense since these are
specification-related, the latter is better because it doesn't make a single
subsection), and invariants should be in chapter 7 (7.3.2) [as these are only
allowed on private types, so they should be documented there].

****************************************************************

From: John Barnes
Sent: Thursday, April 14, 2011  3:49 AM

They also have affinity with exceptions because of the relationship with
Assertion_Policy and that is in section 11. So one is going to get forward
references again.

****************************************************************

From: Randy Brukardt
Sent: Thursday, April 28, 2011  9:39 PM

We've previously discussed (without conclusion) that putting preconditions,
postconditions, and type_invariants into Chapter 13 is weird.

This is especially true as other aspects for other entities are scattered about
the RM; Default_Value for types in 3.2.4, Inline for subprograms in 6.3.2,
No_Return for subprograms in 6.5.1 (not to mention various specialized needs
annexes). I fail to see what is different about preconditions and
type_invariants.

So I am asking for a quick straw poll on this issue. Note that all of the
clauses are new; it is just the location in the standard that would change
(along with a couple of forward references to 13.3.1). Type invariants are
related to private types, so I think they should be moved there (if moved).
Preconditions/postconditions are related to subprograms, but I'm not sure which
place makes more sense (part of specifications, or near conformance).

If a change passes, I'll put it into the forthcoming AI05-0247-1, which is
rewriting 13.3.2 anyway, so moving it wouldn't be much of a change.

Please rank the following solutions from 1 (preferred) to 3 (least
preferred):

_______ Leave preconditions/postconditions in 13.3.2 and type_invariants in 13.3.3.

_______ Move preconditions/postconditions to 6.1.1 and type_invariants to 7.3.2.

_______ Move preconditions/postconditions to 6.3.3 and type_invariants to 7.3.2.

_______ Write in: move preconditions/postconditions to ____ and type_invariants to _____.

****************************************************************

From: Edmond Schonberg
Sent: Thursday, April 28, 2011  9:51 PM

Please rank the following solutions from 1 (preferred) to 3 (least
preferred):

___3____ Leave preconditions/postconditions in 13.3.2 and
type_invariants in 13.3.3.

___1____ Move preconditions/postconditions to 6.1.1 and
type_invariants to 7.3.2.

____2___ Move preconditions/postconditions to 6.3.3 and
type_invariants to 7.3.2.

****************************************************************

From: Gary Dismukes
Sent: Friday, April 29, 2011  12:04 AM

Please rank the following solutions from 1 (preferred) to 3 (least
preferred):

___3___ Leave preconditions/postconditions in 13.3.2 and
type_invariants in 13.3.3.

___1___ Move preconditions/postconditions to 6.1.1 and type_invariants
to 7.3.2.

___2___ Move preconditions/postconditions to 6.3.3 and type_invariants
to 7.3.2.

****************************************************************

From: Brad Moore
Sent: Friday, April 29, 2011  12:09 AM

Please rank the following solutions from 1 (preferred) to 3 (least
preferred):

___3____ Leave preconditions/postconditions in 13.3.2 and
type_invariants in 13.3.3.

___1____ Move preconditions/postconditions to 6.1.1 and
type_invariants to 7.3.2.

___2____ Move preconditions/postconditions to 6.3.3 and
type_invariants to 7.3.2.


****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, April 29, 2011  12:35 AM

____3___ Leave preconditions/postconditions in 13.3.2 and
type_invariants in 13.3.3.

___1____ Move preconditions/postconditions to 6.1.1 and
type_invariants to 7.3.2.

___2____ Move preconditions/postconditions to 6.3.3 and
type_invariants to 7.3.2.

****************************************************************

From: Erhard Ploedereder
Sent: Friday, April 29, 2011  3:42 AM

My ballot:

Please rank the following solutions from 1 (preferred) to 3 (least preferred):

___3____ Leave preconditions/postconditions in 13.3.2 and type_invariants in 13.3.3.

___1____ Move preconditions/postconditions to 6.1.1 and type_invariants to 7.3.2.

___2____ Move preconditions/postconditions to 6.3.3 and type_invariants to 7.3.2.

****************************************************************

From: John Barnes
Sent: Friday, April 29, 2011  7:54 AM

Here is my vote

> ____3___ Leave preconditions/postconditions in 13.3.2 and
> type_invariants in 13.3.3.
>
> ____2___ Move preconditions/postconditions to 6.1.1 and
> type_invariants to 7.3.2.
>
> ____1___ Move preconditions/postconditions to 6.3.3 and
> type_invariants to 7.3.2.

****************************************************************

From: Robert Dewar
Sent: Friday, April 29, 2011  9:58 AM

> _HORRIBLE______ Leave preconditions/postconditions in 13.3.2 and
> type_invariants in 13.3.3.
>
> _OK______ Move preconditions/postconditions to 6.1.1 and
> type_invariants to 7.3.2.
>
> _OK______ Move preconditions/postconditions to 6.3.3 and
> type_invariants to 7.3.2.
>
> _DNA______ Write in: move preconditions/postconditions to ____ and
> type_invariants to _____.

****************************************************************

From: Bob Duff
Sent: Friday, April 29, 2011  8:01 AM

My votes are identical to Randy's above (and a lot of other folks).

****************************************************************

From: Steve Baird
Sent: Friday, April 29, 2011  11:28 AM

Ditto.

****************************************************************

From: Robert Dewar
Sent: Friday, April 29, 2011  1:35 PM

If you want a vote from me that is in more conventional form, I echo this vote
:-)

****************************************************************

From: Randy Brukardt
Sent: Friday, April 29, 2011  2:16 PM

Tucker didn't vote yet, but since we have a landslide, I'm declaring a winner,
putting it into the AI, and moving on to the letter ballot.

****************************************************************

Questions? Ask the ACAA Technical Agent