Version 1.11 of ai05s/ai05-0290-1.txt


!standard 2.8(3)          12-03-19 AI05-0290-1/07
!standard 2.8(4)
!standard 3.2.4(0)
!standard 3.8.1(21)
!standard 4.6(51/2)
!standard 4.6(57)
!standard 4.9.1(4)
!standard 6.1.1(0)
!standard 7.3.2(0)
!standard 11.4.2(6/2)
!standard 11.4.2(7/2)
!standard 11.4.2(9/2)
!standard 11.4.2(10/2)
!standard 11.4.2(16/2)
!standard 11.4.2(17/2)
!standard 11.4.2(18/2)
!standard 11.5(7.2/2)
!standard 11.5(25)
!class Amendment 12-02-14
!status Amendment 2012 12-02-14
!status ARG Approved (Letter Ballot) 11-0-2 12-03-12
!status work item 12-02-14
!status received 11-11-21
!priority Medium
!difficulty Medium
!subject Improved control over assertions
!summary
The pragma Assertion_Policy is expanded to allow control of assertion expressions in a way that is very similar to pragma Suppress, but without the possible consequence of erroneous execution.
A perceived problem with pragma Suppress for inlined calls is fixed.
Variant parts now handle not-covered-by-any-choice selector values in the same way as case statements and case expressions.
The definition of "statically compatible" for subtypes is improved so that a type with a disabled predicate check is not statically compatible with a type with an enabled predicate check.
!problem
When assertions (particularly predicates and preconditions) are used in third-party libraries of packages (including reusable code used within an organization, and implementation-defined packages), additional control over the assertions is needed.
In particular, assertions used to control inbound correctness checks should (almost) never be turned off, as these are needed to prevent failures in the implementation of the library. Clients should therefore be discouraged from turning these checks off.
However, assertions used to make internal correctness checks (such as postconditions and invariants) are less important - actions of the client should not be able to cause these to fail.
Hence, there needs to be fine control over the checks of the various kinds of assertions. Control for any assertion other than pragma Assert cannot sensibly be based on a policy in effect at the point of the check. For example, this would cause invariant checks to be performed only occasionally. The proposal is generally that the policy in effect at the time of the aspect specification controls all its checks. A key distinction between suppressing a check and ignoring an assertion (by specifying a policy of "Ignore") is that in the latter case, the assertion expressions that are not evaluated because of the policy are not assumed to be true.
!proposal
(See wording.)
!wording
Modify 2.8(3-4) as follows:
pragma_argument_association ::= [pragma_argument_identifier =>] name | [pragma_argument_identifier =>] expression
{ | pragma_argument_aspect_mark => name
| pragma_argument_aspect_mark => expression }
In a pragma, any pragma_argument_associations without a pragma_argument_identifier {or pragma_argument_aspect_mark} shall precede any associations with a pragma_argument_identifier {or pragma_argument_aspect_mark}.
-------------
Add after 3.2.4(6/3):
Predicate checks are defined to be enabled or disabled for a given subtype as follows:
* If a subtype is declared by a type_declaration or subtype_declaration that includes a predicate specification, then:
- if performing checks is required by the Static_Predicate assertion
policy (see 11.4.2) and the declaration includes a Static_Predicate specification, then predicate checks are enabled for the subtype;
- if performing checks is required by the Dynamic_Predicate assertion
policy (see 11.4.2) and the declaration includes a Dynamic_Predicate specification, then predicate checks are enabled for the subtype;
- otherwise, predicate checks are disabled for the subtype [redundant:
, regardless of whether predicate checking is enabled for any other subtypes mentioned in the declaration];
* If a subtype is defined by a derived type declaration that does not include a predicate specification, then predicate checks are enabled for the subtype if and only if predicate checks are enabled for at least one of the parent subtype and the progenitor subtypes;
* If a subtype is created by a subtype_indication other than in one of the previous cases, then predicate checks are enabled for the subtype if and only if predicate checks are enabled for the subtype denoted by the subtype_mark;
* Otherwise, predicate checks are disabled for the given subtype.
[AARM: In this case, no predicate specifications can apply to the subtype and so it doesn't typically matter whether predicate checks are enabled. This rule does make a difference, however, when determining whether predicate checks are enabled for another type when this type is one of multiple progenitors. See the "derived type declaration" wording above.]
Replace 3.2.4(22/3) with:
If predicate checks are enabled for a given subtype, then:
In 3.2.4(23/3), replace "check is made" by "check is performed" (three times).
-------------
Add after 3.8.1(21):
When an object of a discriminated type T is initialized by default, Constraint_Error is raised if no discrete_choice_list of any variant of a variant_part of T covers the value of the discriminant that governs the variant_part. When a variant_part appears in the component_list of another variant V, this test is only applied if the value of the discriminant governing V is covered by the discrete_choice_list of V.
AARM implementation note:
This is not a "check"; it cannot be suppressed. However, in most cases it is not necessary to generate any code to raise this exception. A test is needed (and can fail) in the case where the discriminant subtype has a Static_Predicate specified, it also has predicate checking disabled, and the discriminant governs a variant_part which lacks a "when others" choice.
The test could also fail for a static discriminant subtype with range checking suppressed, when the discriminant governs a variant_part that lacks a "when others" choice. But execution is erroneous if a range check that would have failed is suppressed (see 11.5), so an implementation does not have to generate code to check this case. (An unchecked failed predicate does not cause erroneous execution, so the test is required in that case.)
Like the checks associated with a per-object constraint, this test is not made during the elaboration of a subtype_indication.
End AARM Implementation Note.
-------------
Replace 4.6(51/3) with:
After conversion of the value to the target type, if the target subtype is constrained, a check is performed that the value satisfies this constraint. If the target subtype excludes null, then a check is performed that the value is not null. If predicate checks are enabled for the target subtype (see 3.2.4), a check is performed that the predicate of the target subtype is satisfied for the value.
-------------
Modify 4.6(57/3):
If an Accessibility_Check fails, Program_Error is raised. {If a predicate check fails, Assertions.Assertion_Error is raised.} Any other check associated with a conversion raises Constraint_Error if it fails.
-------------
Replace 4.9.1(10/3) with:
both subtypes are static, every value that satisfies the predicate of S1 also satisfies the predicate of S2, and it is not the case that both subtypes have at least one applicable predicate specification, predicate checks are enabled (see 11.4.2) for S2, and predicate checks are not enabled for S1.
-------------
Replace 6.1.1(19/3) with:
If performing checks is required by the Pre, Pre'Class, Post, or Post'Class assertion policies (see 11.4.2) in effect at the point of a corresponding aspect specification applicable to a given subprogram or entry, then the respective precondition or postcondition expressions are considered @i(enabled).
AARM Note: If a class-wide precondition or postcondition expression is enabled, it remains enabled when inherited by an overriding subprogram, even if the policy in effect is Ignore for the inheriting subprogram.
Replace 6.1.1(31/3) with:
Upon a call of the subprogram or entry, after evaluating any actual parameters, precondition checks are performed as follows:
Modify 6.1.1(32/3):
The specific precondition check begins with the evaluation of the specific precondition expression that applies to the subprogram or entry{, if it is enabled}; if the expression evaluates to False, Assertions.Assertion_Error is raised{; if the expression is not enabled, the check succeeds}.
Modify 6.1.1(33/3):
The class-wide precondition check begins with the evaluation of any {enabled} class-wide precondition expressions that apply to the subprogram or entry. If and only if all the class-wide precondition expressions evaluate to False, Assertions.Assertion_Error is raised.
AARM Ramification: Class-wide precondition checks are performed for all appropriate calls, but only enabled precondition expressions are evaluated. Thus, the check would be trivial if no precondition expressions are enabled.
Modify 6.1.1(35/3):
[If the assertion policy in effect at the point of a subprogram or entry declaration is Check, then upon]{Upon} successful return from a call of the subprogram or entry, prior to copying back any by-copy in out or out parameters, the postcondition check is performed. This consists of the evaluation of [the]{any enabled} specific and class-wide postcondition expressions that apply to the subprogram or entry. If any of the postcondition expressions evaluate to False, then Assertions.Assertion_Error is raised. The postcondition expressions are evaluated in an arbitrary order, and if any postcondition expression evaluates to False, it is not specified whether any other postcondition expressions are evaluated. The postcondition check, and any constraint {or predicate} checks associated with [copying back] in out or out parameters are performed in an arbitrary order.
Delete 6.1.1(40/3).
-------------
Modify 7.3.2(9/3):
If one or more invariant expressions apply to a type T[, and the assertion policy (see 11.4.2) at the point of the partial view declaration for T is Check,] then an invariant check is performed at the following places, on the specified object(s):
Add before 7.3.2(21/3):
If performing checks is required by the Invariant or Invariant'Class assertion policies (see 11.4.2) in effect at the point of the corresponding aspect specification applicable to a given type, then the respective invariant expression is considered @i(enabled).
AARM Note: If a class-wide invariant expression is enabled for a type, it remains enabled when inherited by descendants of that type, even if the policy in effect is Ignore for the inheriting type.
Modify 7.3.2(21/3):
The invariant check consists of the evaluation of each {enabled} invariant expression that applies to T, on each of the objects specified above. If any of these evaluate to False, Assertions.Assertion_Error is raised at the point of the object initialization, conversion, or call. If a given call requires more than one evaluation of an invariant expression, either for multiple objects of a single type or for multiple types with invariants, the evaluations are performed in an arbitrary order, and if one of them evaluates to False, it is not specified whether the others are evaluated. Any invariant check is performed prior to copying back any by-copy in out or out parameters. Invariant checks, any postcondition check, and any constraint or predicate checks associated with in out or out parameters are performed in an arbitrary order.
Delete 7.3.2(23/3).
-------------
Replace 11.4.2(5/2 - 7/2) with:
The form of a pragma Assertion_Policy is as follows:
pragma Assertion_Policy(policy_identifier);
pragma Assertion_Policy(@i(assertion)_aspect_mark => policy_identifier
{, @i(assertion)_aspect_mark => policy_identifier});
A pragma Assertion_Policy is allowed only immediately within a declarative_part, immediately within a package_specification, or as a configuration pragma.
-----
Replace 11.4.2(9/2) with:
The @i(assertion)_aspect_mark of a pragma Assertion_Policy shall be one of Assert, Static_Predicate, Dynamic_Predicate, Pre, Pre'Class, Post, Post'Class, Type_Invariant, Type_Invariant'Class, or some implementation-defined aspect_mark. The policy_identifier shall be either Check, Ignore, or some implementation-defined identifier.
[AARM: Implementation defined: Implementation-defined policy_identifiers and assertion_aspect_marks allowed in a pragma Assertion_Policy.]
------ Replace 11.4.2(10/2) with:
A pragma Assertion_Policy determines for each assertion aspect named in the pragma_argument_associations whether assertions of the given aspect are to be enforced by a run-time check. The policy_identifier Check requires that assertion expressions of the given aspect be checked that they evaluate to True at the points specified for the given aspect; the policy_identifier Ignore requires that the assertion expression not be evaluated at these points, and the run-time checks not be performed. @Redundant[Note that for subtype predicate aspects (see 3.2.4), even when the applicable Assertion_Policy is Ignore, the predicate will still be evaluated as part of membership tests and Valid attribute_references, and if static, will still have an effect on loop iteration over the subtype, and the selection of case_alternatives and variant_alternatives.]
If no assertion_aspect_marks are specified in the pragma, the specified policy applies to all assertion aspects.
A pragma Assertion_Policy applies to the named assertion aspects in a specific region, and applies to all assertion expressions specified in that region. A pragma Assertion_Policy given in a declarative_part or immediately within a package_specification applies from the place of the pragma to the end of the innermost enclosing declarative region. The region for a pragma Assertion_Policy given as a configuration pragma is the declarative region for the entire compilation unit (or units) to which it applies.
If a pragma Assertion_Policy applies to a generic_instantiation, then the pragma Assertion_Policy applies to the entire instance.
AARM note: This means that an Assertion_Policy pragma that occurs in a scope enclosing the declaration of a generic unit but not also enclosing the declaration of a given instance of that generic unit will not apply to assertion expressions occurring within the given instance.
If multiple Assertion_Policy pragmas apply to a given construct for a given assertion aspect, the assertion policy is determined by the one in the innermost enclosing region of a pragma Assertion_Policy specifying a policy for the assertion aspect. If no such Assertion_Policy pragma exists, the policy is implementation defined.
[AARM: Implementation defined: The default assertion policy.]
----------
Modify 11.4.2(16/2):
A compilation unit containing a {check for an assertion (including a }pragma Assert{)} has a semantic dependence on the Assertions library unit.
[Needed because all such checks raise Assertions.Assertion_Error.]
----------
Delete 11.4.2(17/2).
----------
Replace 11.4.2(18/2) with:
If performing checks is required by the Assert assertion policy in effect at the place of a pragma Assert, the elaboration of the pragma consists of evaluating the boolean expression, and if the result is False, evaluating the Message argument, if any, and raising the exception Assertions.Assertion_Error, with a message if the Message argument is provided.
----------
Replace 11.5(7.2/3) with:
If a checking pragma applies to a generic_instantiation, then the checking pragma also applies to the entire instance.
AARM note: This means that a Suppress pragma that occurs in a scope enclosing the declaration of a generic unit but not also enclosing the declaration of a given instance of that generic unit will not apply to constructs within the given instance.
[Note: The inline part of this rule was deleted as part of these discussions.]
------------
Replace 11.5(25) with:
All_Checks
Represents the union of all checks; suppressing All_Checks suppresses all checks other than those associated with assertions. In addition, an implementation is allowed (but not required) to behave as if a pragma Assertion_Policy(Ignore) applies to any region to which pragma Suppress(All_Checks) applies.
AARM Discussion: We don't want to say that assertions are suppressed, because we don't want the potential failure of an assertion to cause erroneous execution (see below). Thus they are excluded from the suppression part of the above rule and then handled with an implicit Ignore policy.
!discussion
In order to achieve the intended effect that library writers stay in control over the enforcement of pre- and postconditions as well as type invariants, the policy control needs to extend over all uses of the respective types and subprograms. The place of usage is irrelevant. This is enforced by the rules in the !wording.
For Assert pragmas, the policy in effect for the place of the pragma is the controlling policy.
For subtype predicates, the policy that is relevant is the one in effect at the point where the "nearest" applicable predicate specification is provided.
The Assertion_Policy pragma allows control over each assertion aspect individually, if so desired. The need was clearly identified, e.g., to ensure enforcement of preconditions, while postconditions might be left to the whims of the library user.
The regions to which Assertion_Policy pragmas apply can be nested (as in the library example). The simple rule is that "inner" pragmas take precedence over "outer" pragmas for any given assertion aspect.
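A minimal sketch of how nesting and per-aspect control combine (the package, subprogram, and parameter names here are invented for illustration):

   pragma Assertion_Policy (Ignore);           --  outer: all aspects ignored

   package Counters is
      pragma Assertion_Policy (Pre => Check);  --  inner: preconditions checked

      procedure Decrement (C : in out Natural)
        with Pre  => C > 0,          --  enabled: innermost policy for Pre is Check
             Post => C = C'Old - 1;  --  disabled: the outer Ignore still governs Post
   end Counters;

Note that the inner pragma names only the Pre aspect, so the outer Ignore continues to apply to every other aspect within the package.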
Implementation-defined policies and assertion aspects are allowed in order that compiler vendors can experiment with more elaborate schemes.
A clear separation was made between language checks that, if failing but suppressed, render the program erroneous, and assertions, for which suppression does not cause language issues that lead to erroneousness. This is one of the reasons why assertion control was not subsumed under the pragmas Suppress and Unsuppress with appropriate check names. For example, in verified code, the policy Ignore for assertions makes sense without impacting the program semantics. (Suppressing a range check, on the other hand, can lead to abnormal values.)
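The distinction can be illustrated with a small sketch (the subtype name is invented): under Ignore the predicate check vanishes at the points where a check would occur, but the value is not assumed to satisfy the predicate, and the predicate still participates in membership tests:

   pragma Assertion_Policy (Ignore);

   subtype Even is Integer with Dynamic_Predicate => Even mod 2 = 0;

   X : Even    := 3;           --  no predicate check is performed (Ignore), but
                               --  X is not assumed to satisfy the predicate
   B : Boolean := X in Even;   --  False: membership still evaluates the predicate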
The ARG felt the following capability to be very useful, but shied away from extending the model so late in the production of the 2012 standard. Ada 2020 should consider this capability (AI12-0022-1 has been opened to do this):
Example: Imagine the following routine in a GUI library:
   procedure Show_Window (Window : in out Root_Window);
   -- Shows the window.
   -- Raises Not_Valid_Error if Window is not valid.
We would like to be able to use a predicate to check the comment. With an "On_Failure" aspect we could do this without changing the semantics:
   subtype Valid_Root_Window is Root_Window with
      Dynamic_Predicate => Is_Valid (Valid_Root_Window),
      On_Failure => Not_Valid_Error;
procedure Show_Window (Window : in out Valid_Root_Window); -- Shows the window.
If we didn't have the "On_Failure" aspect here, using the predicate as a precondition in lieu of a condition explicitly checked by the code would change the exception raised on this failure to be Assertion_Error. This would obviously not be acceptable for existing packages and too limiting for future packages. For upward compatibility, similar considerations apply to preconditions, so the "On_Failure" aspect is needed for them as well. We could also imagine that after one On_Failure aspect has been specified, additional preconditions could be defined for the same subprogram with distinct On_Failure aspects specifying distinct exceptions to be raised.
-----
For pragma Inline (and for Assertion_Policy), we decided to eliminate the rule that said that the presence of a pragma Suppress or Unsuppress at the point of a call would affect the code inside the called body, if pragma Inline applies to the body. Given that inlining is merely advice, and is not supposed to have any significant semantic effect, having it affect the presence or absence of checks in the body, whether or not it is actually inlined, seemed unwise. Furthermore, in many Ada compilers, the decision to inline may be made very late, so if the rule is instead interpreted as only having an effect if the body is in fact inlined, it is still a problem, because the decision to inline may be made after the decision is made whether to insert or omit checks.
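A sketch of the situation the deleted rule used to cover (the package and names are invented for illustration):

   package P is
      procedure Inc (X : in out Integer) with Inline;
   end P;

   package body P is
      procedure Inc (X : in out Integer) is
      begin
         X := X + 1;   --  the overflow check here is governed by the checking
      end Inc;         --  pragmas that apply at this place, not at the call site
   end P;

   with P;
   procedure Client is
      pragma Suppress (Overflow_Check);   --  no longer reaches Inc's body,
      N : Integer := Integer'Last;        --  even if the call is inlined
   begin
      P.Inc (N);   --  the check inside Inc is still performed
   end Client;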
----
This example illustrates the motivation for the new 3.8.1 rule about variant parts:
declare
   pragma Assertion_Policy (Ignore);
   subtype Non_Zero is Integer with Static_Predicate => Non_Zero /= 0;
   type Rec (D : Non_Zero) is record
      case D is
         when Integer'First .. -1 => ...;
         when 1 .. Integer'Last => ...;
      end case;
   end record;
   Zero : Integer := Ident_Int (0);
   subtype Zero_Sub is Rec (D => Zero); -- no exception is raised here
   X : Rec (D => Zero);                 -- raises Constraint_Error
begin
   null;
end;
We could require that a subtype declaration such as Zero_Sub fail a runtime check, but this seemed similar to per-object constraint checking.
!corrigendum 2.8(3)
Replace the paragraph:
pragma_argument_association ::= [pragma_argument_identifier =>] name | [pragma_argument_identifier =>] expression
by:
pragma_argument_association ::= [pragma_argument_identifier =>] name | [pragma_argument_identifier =>] expression | pragma_argument_aspect_mark => name | pragma_argument_aspect_mark => expression
!corrigendum 2.8(4)
Replace the paragraph:
In a pragma, any pragma_argument_associations without a pragma_argument_identifier shall precede any associations with a pragma_argument_identifier.
by:
In a pragma, any pragma_argument_associations without a pragma_argument_identifier or pragma_argument_aspect_mark shall precede any associations with a pragma_argument_identifier or pragma_argument_aspect_mark.
!corrigendum 3.2.4(0)
Insert new clause:
[A placeholder to cause a conflict; the real wording is found in the conflict file.]
!corrigendum 3.8.1(21)
Insert after the paragraph:
A record value contains the values of the components of a particular variant only if the value of the discriminant governing the variant is covered by the discrete_choice_list of the variant. This rule applies in turn to any further variant that is, itself, included in the component_list of the given variant.
the new paragraph:
When an object of a discriminated type T is initialized by default, Constraint_Error is raised if no discrete_choice_list of any variant of a variant_part of T covers the value of the discriminant that governs the variant_part. When a variant_part appears in the component_list of another variant V, this test is only applied if the value of the discriminant governing V is covered by the discrete_choice_list of V.
!corrigendum 4.6(51/2)
Replace the paragraph:
After conversion of the value to the target type, if the target subtype is constrained, a check is performed that the value satisfies this constraint. If the target subtype excludes null, then a check is made that the value is not null.
by:
After conversion of the value to the target type, if the target subtype is constrained, a check is performed that the value satisfies this constraint. If the target subtype excludes null, then a check is made that the value is not null. If predicate checks are enabled for the target subtype (see 3.2.4), a check is performed that the predicate of the target subtype is satisfied for the value.
!corrigendum 4.6(57)
Replace the paragraph:
If an Accessibility_Check fails, Program_Error is raised. Any other check associated with a conversion raises Constraint_Error if it fails.
by:
If an Accessibility_Check fails, Program_Error is raised. If a predicate check fails, Assertions.Assertion_Error is raised. Any other check associated with a conversion raises Constraint_Error if it fails.
!corrigendum 4.9.1(4)
Replace the paragraph:
A constraint is statically compatible with a scalar subtype if it statically matches the constraint of the subtype, or if both are static and the constraint is compatible with the subtype. A constraint is statically compatible with an access or composite subtype if it statically matches the constraint of the subtype, or if the subtype is unconstrained. One subtype is statically compatible with a second subtype if the constraint of the first is statically compatible with the second subtype.
by:
[A placeholder to cause a conflict; the real wording is found in the conflict file.]
!corrigendum 6.1.1(0)
Insert new clause:
[A placeholder to cause a conflict; the real wording is found in the conflict file.]
!corrigendum 7.3.2(0)
Insert new clause:
[A placeholder to cause a conflict; the real wording is found in the conflict file.]
!corrigendum 11.4.2(6/2)
Replace the paragraph:
pragma Assertion_Policy(policy_identifier);
by:

pragma Assertion_Policy(policy_identifier);
pragma Assertion_Policy(assertion_aspect_mark => policy_identifier
                     {, assertion_aspect_mark => policy_identifier});
!corrigendum 11.4.2(7/2)
Replace the paragraph:
A pragma Assertion_Policy is a configuration pragma.
by:
A pragma Assertion_Policy is allowed only immediately within a declarative_part, immediately within a package_specification, or as a configuration pragma.
!corrigendum 11.4.2(9/2)
Replace the paragraph:
The policy_identifier of a pragma Assertion_Policy shall be either Check, Ignore, or an implementation-defined identifier.
by:
The assertion_aspect_mark of a pragma Assertion_Policy shall be one of Assert, Static_Predicate, Dynamic_Predicate, Pre, Pre'Class, Post, Post'Class, Type_Invariant, Type_Invariant'Class, or some implementation-defined aspect_mark. The policy_identifier shall be either Check, Ignore, or some implementation-defined identifier.
!corrigendum 11.4.2(10/2)
Replace the paragraph:
A pragma Assertion_Policy is a configuration pragma that specifies the assertion policy in effect for the compilation units to which it applies. Different policies may apply to different compilation units within the same partition. The default assertion policy is implementation-defined.
by:
A pragma Assertion_Policy determines for each assertion aspect named in the pragma_argument_associations whether assertions of the given aspect are to be enforced by a run-time check. The policy_identifier Check requires that assertion expressions of the given aspect be checked that they evaluate to True at the points specified for the given aspect; the policy_identifier Ignore requires that the assertion expression not be evaluated at these points, and the run-time checks not be performed. Note that for subtype predicate aspects (see 3.2.4), even when the applicable Assertion_Policy is Ignore, the predicate will still be evaluated as part of membership tests and Valid attribute_references, and if static, will still have an effect on loop iteration over the subtype, and the selection of case_statement_alternatives and variants.
If no assertion_aspect_marks are specified in the pragma, the specified policy applies to all assertion aspects.
A pragma Assertion_Policy applies to the named assertion aspects in a specific region, and applies to all assertion expressions specified in that region. A pragma Assertion_Policy given in a declarative_part or immediately within a package_specification applies from the place of the pragma to the end of the innermost enclosing declarative region. The region for a pragma Assertion_Policy given as a configuration pragma is the declarative region for the entire compilation unit (or units) to which it applies.
If a pragma Assertion_Policy applies to a generic_instantiation, then the pragma Assertion_Policy applies to the entire instance.
If multiple Assertion_Policy pragmas apply to a given construct for a given assertion aspect, the assertion policy is determined by the one in the innermost enclosing region of a pragma Assertion_Policy specifying a policy for the assertion aspect. If no such Assertion_Policy pragma exists, the policy is implementation defined.
!corrigendum 11.4.2(16/2)
Replace the paragraph:
A compilation unit containing a pragma Assert has a semantic dependence on the Assertions library unit.
by:
A compilation unit containing a check for an assertion (including a pragma Assert) has a semantic dependence on the Assertions library unit.
!corrigendum 11.4.2(17/2)
Delete the paragraph:
The assertion policy that applies to a generic unit also applies to all its instances.
!corrigendum 11.4.2(18/2)
Replace the paragraph:
An assertion policy specifies how a pragma Assert is interpreted by the implementation. If the assertion policy is Ignore at the point of a pragma Assert, the pragma is ignored. If the assertion policy is Check at the point of a pragma Assert, the elaboration of the pragma consists of evaluating the boolean expression, and if the result is False, evaluating the Message argument, if any, and raising the exception Assertions.Assertion_Error, with a message if the Message argument is provided.
by:
If performing checks is required by the Assert assertion policy in effect at the place of a pragma Assert, the elaboration of the pragma consists of evaluating the boolean expression, and if the result is False, evaluating the Message argument, if any, and raising the exception Assertions.Assertion_Error, with a message if the Message argument is provided.
!corrigendum 11.5(7.2/2)
Replace the paragraph:
If a checking pragma applies to a generic instantiation, then the checking pragma also applies to the instance. If a checking pragma applies to a call to a subprogram that has a pragma Inline applied to it, then the checking pragma also applies to the inlined subprogram body.
by:
If a checking pragma applies to a generic_instantiation, then the checking pragma also applies to the entire instance.
!corrigendum 11.5(25)
Replace the paragraph:
All_Checks
Represents the union of all checks; suppressing All_Checks suppresses all checks.
by:
All_Checks
Represents the union of all checks; suppressing All_Checks suppresses all checks other than those associated with assertions. In addition, an implementation is allowed (but not required) to behave as if a pragma Assertion_Policy(Ignore) applies to any region to which pragma Suppress(All_Checks) applies.
!ACATS Test
Create ACATS C-Tests to test these changes, specifically that Ignore really ignores the check and local Check policies override more global Ignore policies.
!ASIS
** TBD: Some change to pragma processing probably will be needed to account for the 'Class possibility in aspect_marks.
!appendix

From: Randy Brukardt
Sent: Monday, November 21, 2011  5:13 PM

Let me start by saying that it is awfully late for any significant change, my
default position is no change at this point, and the following issue isn't that
critical to get right. But I'd like to discuss it some and either feel better
about our current decision or possibly consider some alternatives.

----

John remarked in his latest ARG draft:

>> *** I really think that this should raise Constraint_Error and that
>> subtype predicates should not be controlled by Assertion_Policy but
>> by Suppress. Ah well.....

My initial response was:

> I agree with this. The problem is that you want to replace constraints with
> (static) predicates, but you can't really do that because people are used to
> removing assertions even though they would never remove a constraint
> check.

But upon thinking about it further, it's not that clear-cut. First, it would be
odd for static and dynamic predicates to raise different exceptions. Second,
there really isn't that much difference in use between predicates and
preconditions (entering calls) and invariants and postconditions (after calls).
So there is an argument that they all should be the same. (But of course you can
make the same arguments for constraints in both of the positions.)

The thing that really bothers me is that contracts somehow seem less important
than constraints, even though their purposes are the same. While assertions
(including the "body postconditions" we discussed a few weeks ago) really are in
a different category of importance. So it seems weird to me to include pragma
Assert and the precondition and predicate aspects in the same bucket as far as
suppression/ignoring is concerned.

Let me give an example from Claw to illustrate my point.

Most Claw routines have requirements on some or all of the parameters
passed. For instance, Show is defined as:

       procedure Show (Window : in Root_Window_Type;
                       How : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal);
        -- Show Window according to How.
        -- Raises:
        --      Not_Valid_Error if Window does not have a valid (Windows) window.
        --      Windows_Error if Windows returns an error.

The implementation of Show will start something like:

       procedure Show (Window : in Root_Window_Type;
                       How : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal) is
       begin
           if not Is_Valid (Window) then
               raise Not_Valid_Error;
           end if;
           -- Assume Window is valid in the following code.
           ...
       end Show;

This is of course Ada 95 code (assuming no implementation-defined extensions).
So we couldn't have used pragma Assert. But even if it had existed, I don't
think we would have used it because we consider the validity check as an
important part of the contract and would not want it turned off. In particular,
the body of Show will take no precautions against Window being invalid, and if
violated, almost anything could happen. (Especially if checks are also
suppressed; note that the Claw documentation says that compiling Claw with
checks suppressed is not supported.)

Essentially, this code could be incorrect without the check, and as such it is not
appropriate to use an assertion (which is supposed to be able to be ignored) to
make the check.

If I was writing this code in Ada 2012, I would want to use a predicate or
precondition to make this requirement more formal (and executable and visible to
tools and all of those other good things).

For instance, using a precondition: (I'm using a precondition mainly because it is
easier to write here; I'd probably use a predicate in practice because almost
all of the Claw routines need this contract and it would be a pain to repeat it
hundreds of times. But the same principles apply either way.)

       procedure Show (Window : in Root_Window_Type;
                       How : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal)
        with Pre => Is_Valid (Window);
        -- Show Window according to How.
        -- Raises:
        --      Windows_Error if Windows returns an error.

We'd prefer to remove the check from the body:

       procedure Show (Window : in Root_Window_Type;
                       How : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal) is
       begin
           -- Assume Window is valid in the following code.
           ...
       end Show;

But note now that the body is incorrect if the Assertion_Policy is Ignore and
the Window really is invalid. This is a bad thing: it means that code is at risk
of being used in a way that has not been tested and is not supported. (Obviously,
in this case the results aren't quite as bad in that the results aren't
safety-critical. But what about a subpool-supporting storage pool? These are
defined with a precondition:
        with Pre'Class => Pool_of_Subpool(Subpool) = Pool'Access;
and I'd expect a user-written pool to do very bad things if this is not true.
I'd surely not expect such a pool to repeat this check, but it would be
necessary to be defensive.)

To see this even more clearly, imagine that the containers library had
implemented most of its exceptional cases as predicates or preconditions. The
body of the containers may not even be implemented in Ada (GNAT for instance
suppresses checks as a matter of course in the predefined packages); if those
cases aren't checked, it is hard to predict what might happen.

Possibly important aside: I was wrong when I said that writing the condition as a
precondition or predicate didn't matter. For a precondition, the declaration of
the call determines what Assertion_Policy is in effect. Thus, extending the
existing Claw rule to include "compiling Claw with an Assertion_Policy other
than Check is not supported" would be sufficient to eliminate the problem
(at least formally - not so sure that is true practically).

For predicates, however, it is the Assertion_Policy in effect at the point of
the evaluation (subtype conversion, etc.) that determines whether the check is
made. That means that predicates can be ignored even if all of Claw is compiled
with Assertion_Policy = Check -- which makes the protections even weaker (since
I don't think a library should be dictating how a client compiles their code).

End possibly important aside.
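The aside above can be sketched as two compilations. This is a hedged illustration only: the unit and entity names (Claw_Like, Valid_Window, Client, and so on) are invented, and the comments mark which policy governs each check under the rules as described in this thread.

```ada
--  Compilation 1: the library, compiled with checking on.
pragma Assertion_Policy (Check);

package Claw_Like is
   type Window_Type is private;
   function Is_Valid (W : Window_Type) return Boolean;

   subtype Valid_Window is Window_Type
     with Dynamic_Predicate => Is_Valid (Valid_Window);

   --  Precondition: the policy at this declaration (Check) governs,
   --  so the check survives even for an Ignore-compiled client.
   procedure Show (Window : in Window_Type) is null
     with Pre => Is_Valid (Window);

   --  Predicate on the parameter subtype: checked at the subtype
   --  conversion in the caller, so the *caller's* policy governs.
   procedure Show_Checked (Window : in Valid_Window) is null;
private
   type Window_Type is record
      Valid : Boolean := False;
   end record;
   function Is_Valid (W : Window_Type) return Boolean is (W.Valid);
end Claw_Like;

--  Compilation 2: a client, compiled with assertions off.
pragma Assertion_Policy (Ignore);
with Claw_Like;
procedure Client is
   W : Claw_Like.Window_Type;
begin
   Claw_Like.Show (W);          --  Pre governed by the spec's Check policy
   Claw_Like.Show_Checked (W);  --  predicate check governed here: ignored
end Client;
```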

Fundamentally, contract aspects are more like constraints than they are like
assertions. So it is dubious to put contracts into the same bucket with assertions. I know
at one point Tucker argued that making contracts "suppressed" rather than
"ignored" was bad because it could mean that adding contracts could make a
program less-safe. And I agree with that, except that the problem exists anyway
because the body of a subprogram with contracts is going to assume that those
contracts are true -- and the failure of that assumption is going to make the
results unpredictable. (Maybe not to the level of erroneousness, but at least in
any formal sense. And if combined with Suppress as in GNAT, we really will
have erroneousness.)

I think it is just as bad to require a bullet-proof library to repeat all of the
predicate and precondition checks in the body "just-in-case" someone turns them
off. People hate redundant code for good reason (it gets out of sync; it makes
programs larger, it's often dead and thus causes problems with coverage
analysis, etc.)

---

If we accept the premise that assertions are different from contracts, what can
we do?

Going all the way to Suppression semantics is probably going too far, for a
number of reasons. Most importantly, if this is not combined with Suppress of
constraint checks, the code is likely to malfunction logically, but not in Ada
terms. (It's likely to fail raising a predefined exception in some unexpected
way; a manageable error.)

The most logical solution would be to have a separate Contract_Policy. Such a
policy would not include assertions (of any stripe) but would include all of the
contract aspects. (And Assertion_Policy would revert to only handling
assertions.) With such a separate policy, turning it off could be treated
similarly to suppressing checks (something only to be done in very restricted
circumstances for critical [and clearly demonstrated] needs). More importantly,
management can clearly see if it is being abused and set appropriate limits.

A less disruptive solution (from the user's perspective, it would require more
rewording in the Standard than the former) would be to have an additional
Assertion_Policy "Check_Contracts_and_Ignore_Others", which would do exactly
what it says. One would hope that if such a policy existed that the use of the
full "Ignore" could be discouraged just like the use of Suppress is discouraged.

---

I've written enough on this topic. I've been uncomfortable since thinking about
the implications of turning off these things on the storage pools and in
containers more than a year ago. But I've never been convinced in my own mind
that there is a significant problem here -- and I'm still not. But noting that
John has some similar concerns, I thought it would be good to air these out.

Now it's your turn to comment.

****************************************************************

From: Tucker Taft
Sent: Monday, November 21, 2011  5:48 PM

I could see adding another policy which distinguished assertions from other
contract-like things. On the other hand, some folks (e.g. us) currently use
assertions exactly like preconditions, postconditions, constraints, invariants,
etc., so those of us already using assertions heavily probably don't see the
point.  If an assertion is violated, then bad things can happen.  You don't
generally get erroneousness, because the built-in constraint checks prevent
that.  But you can certainly get complete garbage-out, given enough garbage-in.

As far as where the policy is relevant, I am surprised that for predicates it
depends on where the check is performed. I would expect all checks associated
with the predicate on a formal parameter would be on or off depending on the
policy at the point where the subprogram spec is compiled, just like
preconditions and postconditions. Certainly you want to know when you compile a
body whether you can rely on the predicates associated with the formal
parameters, otherwise you *will* get into the erroneousness world if the
compiler makes the wrong assumption.

****************************************************************

From: Randy Brukardt
Sent: Monday, November 21, 2011  7:17 PM

> I could see adding another policy which distinguished assertions from
> other contract-like things.
> On the other hand, some folks (e.g. us) currently use assertions
> exactly like preconditions, postconditions, constraints, invariants,
> etc., so those of us already using assertions heavily probably don't
> see the point.  If an assertion is violated, then bad things can
> happen.  You don't generally get erroneousness, because the built-in
> constraint checks prevent that.  But you can certainly get complete
> garbage-out, given enough garbage-in.

My view of assertions is a bit different: an assertion is something that ought
to be True at a given point, but it doesn't affect the correctness of the
program. IMHO, checks that do affect the correctness of the program should not
be given as an assertion (pragma Assert), but rather be directly part of the
program. For instance, if you have an if statement that handles two possible
cases, but other cases are not handled, there ought to be some else branch with
a bug box or exception raise in it -- not some assertion that can be ignored.
(I've yet to see a case where a pragma Assert could do something that you
couldn't do with regular code; that's not true of our old Condcomp facility,
which allows conditionally compiled declarations and pragmas.) Thus, pragma
Assert is purely a debugging aid; it never has an effect on any sort of
correctness and it can be ignored without any harmful effects.
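A minimal sketch of the distinction being drawn here (all names invented): a check that affects correctness is written as ordinary code with a raise in the otherwise-unhandled branch, while a pure debugging check uses pragma Assert and can be erased harmlessly.

```ada
procedure Demo is
   type Mode_Type is (Reading, Writing, Closed);
   Mode : constant Mode_Type := Closed;

   function Buffer_Consistent return Boolean is (True);
begin
   --  Debugging aid only: correctness does not depend on it, so it
   --  is safe to erase under Assertion_Policy (Ignore).
   pragma Assert (Buffer_Consistent);

   if Mode = Reading then
      null;  --  handle the reading case
   elsif Mode = Writing then
      null;  --  handle the writing case
   else
      --  Not an assertion: an unhandled case must never be silently
      --  ignored, so it is ordinary code that cannot be compiled away.
      raise Program_Error with "unhandled Mode in Demo";
   end if;
end Demo;
```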

I realize that pragma Assert can be used in other ways than I outlined above,
but I always thought that the above was the intent -- it would surely be the way
that I'd use it (if we had implemented it at all; we have not done so to date,
so I've never actually used it).

This difference is probably why I view the other contracty things differently,
because they surely aren't optional, are not just debugging aids, and shouldn't
be ignored without lots of careful consideration. Thus lumping them together
doesn't make much sense.

I'd be happier if there were an extra policy; but I won't scream too much since
impl-def policies are allowed (so I can always define something sensible).

> As far as where the policy is relevant, I am surprised that for
> predicates it depends on where the check is performed.
> I would expect all checks associated with the predicate on a formal
> parameter would be on or off depending on the policy at the point
> where the subprogram spec is compiled, just like preconditions and
> postconditions.
> Certainly you want to know when you compile a body whether you can
> rely on the predicates associated with the formal parameters,
> otherwise you *will* get into the erroneousness world if the compiler
> makes the wrong assumption.

I had that thought as well when I noticed that difference. The problem is that
predicates belong to the subtype, not the subprogram. I could imagine a similar
rule to subprograms where the policy that matters is that which applies to the
subtype -- but that wouldn't necessarily give you certainty over the calls
(especially when the subtype and subprogram are declared in different packages).

A rule that made subtype conversions to formal parameters work differently than
other subtype conversions is just too weird to contemplate.

In any case, it does seem like there is a bug in the handling of subtype
predicates compared to the other contracts. I suppose I shouldn't be surprised;
Bob wrote the wording of predicates without considering how the other contracts
worked, while the other three were pretty much developed together.

Aside: We made whether pre and post-conditions checks are made for a dispatching
call unspecified if the call and the invoked subprogram have different policies.
We didn't do anything similar for type invariants; I would think that we ought
to (the same problems apply). [I noticed this checking to see if the rules for
type invariants were the same as for pre- and post-conditions; they are except
for this difference.]

P.S. I should stop thinking about this stuff. All I seem to find is problems.
:-)

****************************************************************

From: Bob Duff
Sent: Monday, November 21, 2011  7:35 PM

> Let me start by saying that it is awfully late for any significant
> change, my default position is no change at this point, ...

Agreed.

I don't think this stuff is very important.  There are all sorts of levels of
checking ("policies") one might want (all-checks-on, checks-suppressed,
checks-ignored, preconditions checked but not postconditions, etc).
Implementations will provide those as needed or requested by customers.  The
only important one for the Standard to require is all-checks-on.

Terminology:  I use "assertion" to refer to any sort of checks, including
pre/post-conditions.  Not just pragma Assert.  I think that matches Eiffel
terminology.

And Constraint_Error vs. Assertion_Failure is a non-issue.
A bug is a bug.

****************************************************************

From: Tucker Taft
Sent: Monday, November 21, 2011  8:41 PM

> ... Aside: We made whether pre and post-conditions checks are made for
> a dispatching call unspecified if the call and the invoked subprogram
> have different policies....

Your description seems a bit off.  It is always based on where the declaration
of some subprogram appears; the unspecified part is whether it is determined by
the named subprogram or the invoked subprogram.  It has nothing to do with where
the call is.  That is good because when compiling the body of a subprogram, the
compiler knows which rule it is following.  If it depended on where the call
was, the compiler would have no idea whether to assume the precondition has been
checked.

> ... We didn't do anything similar for type invariants; I would think
> that we ought to (the same problems apply). [I noticed this checking
> to see if the rules for type invariants were the same as for pre- and
> post-conditions; they are except for this difference.]

Yes, I would agree we need to do roughly the same thing for invariants, or pick
one, e.g., always base it on which subprogram body is invoked.

I think predicate checks associated with parameter associations really need to
be determined based on where the subprogram is declared (either the named
subprogram or the invoked subprogram).  Otherwise it is hopeless for the
compiler to take any advantage of the predicate checks.

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, November 22, 2011  2:50 AM

> Let me start by saying that it is awfully late for any significant
> change, my default position is no change at this point, and the
> following issue isn't that critical to get right. But I'd like to
> discuss it some and either feel better about our current decision or
> possibly consider some alternatives.
>
> ----
>
> John remarked in his latest ARG draft:
>
>>> *** I really think that this should raise Constraint_Error and that
>>> subtype predicates should not be controlled by Assertion_Policy but
>>> by Suppress. Ah well.....
>
[...] discussion deleted

> Now it's your turn to comment.

You convinced me - maybe more than you would like ;-)

I think that if we want to do it right, we need a special suppress, and a
special exception: Contract_Error. That's really what it is about.

The really important question is: is it too late in the game? We the ARG are
very careful about meeting our deadlines. But the message I heard from NBs and
WG9 is "we are not so much in a hurry, we can accept a delay if it really
improves the language".  So it might be worth considering.

****************************************************************

From: Tucker Taft
Sent: Tuesday, November 22, 2011  7:11 AM

I think it could be a mistake to make such a strong distinction between pragma
Assert and these other kinds of assertions.  I realize that some people have not
been using assertions at all. But for those who have, they serve very much the
same purpose as these new contracts.  They are essentially higher-level
constraint checks. To imply that somehow they are fundamentally different seems
like an unjustifiable leap.

And I suppose if you don't use pragma Assert, then you really shouldn't care.

****************************************************************

From: Erhard Ploedereder
Sent: Tuesday, November 22, 2011  12:23 PM

Thinking more about future users...:

What I would like to be able to do is turn off Assertions and Postconditions in
MY code, because I have verified it to my heart's content. But what I would like
to continue checking is the Preconditions of services that my library offers.
After all, there is no limit to the s... of people (not reading contracts) and I
want to ensure that my code works as advertised.

Unfortunately, it looks like it is an all-or-nothing policy that is provided
today. If that issue is (re)opened, I would argue for the differentiation above.

If it is not re-opened then I want to note for 2020 that the upward
incompatibility of billions and billions of lines of code relying on turning off
precondition checks exists - and it would be a good thing to cause that
incompatibility ;-)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011  1:01 PM

> What I would like to be able to is to turn off Assertions and
> Postconditions in MY code, because I have verified it to my heart's
> content. But what I would like to continue checking is the
> Preconditions of services that my library offers.
> After all, there is no limit to the s... of people (not reading
> contracts) and I want to ensure that my code works as advertised.

I was thinking about that this morning. Propagating "Assertion_Error" from the
body of some routine indicates a bug in that routine, pure and simple. But
propagating it from a call indicates a bug in *your* code. It would be valuable
to have such a distinction -- not all code is written by the same user. (Note:
you need to include predicates with preconditions above.)

I'm thinking about something like Claw, where the failure of a postcondition or
invariant means that there is a bug in Claw (call tech support!) while the
failure of a precondition or predicate means that there is a bug in the client's
code (damn, better go fix it). There ought to be some differentiation between
these.

> Unfortunately, it looks like it is an all-or-nothing policy that is
> provided today. If that issue is (re)opened, I would argue for the
> differentiation above.

Well, what I think we see is that there are quite a few differentiations that
make sense (contracts vs. pure debugging code [assertions]); caller vs. callee
bugs. Which is probably why we punted.

Hopefully, compiler optimizations of contracts will be good enough that
management can treat Assertion_Policy(Ignore) the same as it treats
Suppress(All_Checks) -- it needs reams of justification. (Note that in "correct"
code, most of the preconditions and postconditions will match up, so turning off
one or the other will have almost no effect on the code performance, since the
compiler will only check one anyway.)

> If it is not re-opened then I want to note for 2020 that the upward
> incompatibility of billions and billions of lines of code relying on
> turning off precondition checks exists - and it would be a good thing
> to cause that incompatibility ;-)

Agreed. Any code that *relies* on turning off checks (of any kind) is broken.
There might in very rare cases be a reason to write intentionally broken code,
but it should not be considered portable in any sense.

****************************************************************

From: Tucker Taft
Sent: Tuesday, November 22, 2011  2:19 PM

Assertion_Policy is based on compilations (typically a source file), so perhaps
you could use Ignore in the body, while using Check on the spec.

I could imagine a version of Assertion_Policy which allowed you to specify that
a different exception is to be raised.  Now go talk to your favorite compiler
vendor, and don't forget to bring some cash... ;-)
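The suggestion above can be sketched as two compilations (names invented): Assertion_Policy is a configuration pragma applying per compilation, so the spec can keep Check while the body uses Ignore.

```ada
--  Compilation 1: the spec, with checking on.
pragma Assertion_Policy (Check);
package Lib is
   --  Inbound contract check: governed by the policy at this
   --  declaration, so it remains enabled for clients.
   procedure Op (X : Integer)
     with Pre => X > 0;
end Lib;

--  Compilation 2: the body, with internal assertions off.
pragma Assertion_Policy (Ignore);
package body Lib is
   procedure Op (X : Integer) is
   begin
      --  Internal self-check: ignored under this compilation's policy.
      pragma Assert (X /= Integer'Last);
      null;
   end Op;
end Lib;
```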

****************************************************************

From: Bob Duff
Sent: Tuesday, November 22, 2011  2:38 PM

> I could imagine a version of Assertion_Policy which allowed you to
> specify that a different exception is to be raised.  Now go talk to your
> favorite compiler vendor, and don't forget to bring some cash... ;-)

I agree.  I can think of all sorts of useful variations on the idea of "remove
some assertions/checks for efficiency". Given that we're not all in agreement
about what we want, the appropriate thing is to let vendors experiment, based on
customer feedback, and maybe standardize something for Ada 2020.

The Ada 2012 rules should be left alone at this point.

By the way, have any of the people participating in this discussion looked at
Meyer's Eiffel book?  He gives a set of options for turning on/off checks, with
rationale. I don't agree with everything he says, but I think we should all look
at it before trying to reinvent that wheel.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011  1:25 PM

...
> But for those who have, they serve very much the same purpose as these
> new contracts.  They are essentially higher-level constraint checks.

That's an abuse of the construct, IMHO.

> To imply that somehow they are fundamentally different seems like an
> unjustifiable leap.

They *are* fundamentally different:
(1) As pragmas, they should never have an effect on the dynamic correctness of
    the program. It should always be possible to erase all of the pragmas and
    get a functioning program. Anything else is an abuse of pragmas. We got rid
    of almost all of the offending pragmas in Ada 2012.
(2) I've always viewed its purpose as holding code that eases debugging without
    having any effect on correctness. My programs have a lot of such code, and
    it makes sense to remove it from the production code. It makes sense for the
    Ada language to have a feature aimed at supporting such uses. It
    does *not* make sense to remove checks that protect the server from clueless
    clients (that's especially true in the case where the writer of the "server"
    (library) is different from the writer of the client -- exactly the case
    where Ada has better support than other languages). As such, it doesn't make
    sense for the language to support such a feature. (In the rare case where a
    check has to be removed for performance reasons, the good old comment symbol
    will suffice -- such things should *never* be done globally).

I can see Erhard's point that invariants and postconditions are fundamentally
similar (they verify the promises to the client and internally to the server,
neither of which is of much interest to the client), but preconditions and
predicates are very different: they detect client mistakes -- something that is
totally out of the hands of the library creator.

> And I suppose if you don't use pragma Assert, then you really
> shouldn't care.

I would use it (as outlined above) if it had been defined in Ada 95. But I
wouldn't use it for inbound contract checks, because those should never be
turned off and I don't want to mix requirements on the client with internal
self-checks -- these are wildly different things. (Personally, I'd never want to
even use the same exception for such things, but that can't really be helped for
language-defined checks -- Constraint_Error has a similar problem of confusing
client contract checks vs. body errors.)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011  1:35 PM

...
> I think predicate checks associated with parameter associations really
> need to be determined based on where the subprogram is declared
> (either the named subprogram or the invoked subprogram).  Otherwise it
> is hopeless for the compiler to take any advantage of the predicate
> checks.

You'd have to propose a rule in order to convince me about this. A predicate is
a property of a subtype, and it can get checked in a lot of contexts that have
nothing to do with a subprogram call. Having different rules for subprogram
calls from other things (like type conversions, aggregate components, etc.)
would add a lot of implementation complexity, possibility for user confusion,
and wording complexity for little gain.

I could easily imagine making the rule that it is the state at the declaration
of the subtype that controls whether the check is made or not. That would mean
whether the check is made or not is not context dependent (and it would be more
like the other checks). And I think that would be enough for the compiler to be
able to make assumptions about whether the call is checked, as the subtype is
necessarily visible and previously compiled when the body is compiled (so the
compiler will know whether checks are on or off for the predicate at that
point).

Note that I think whether Type_Invariants are on or off should depend on the
type declaration, and not on the subprograms (they're necessarily in the same
package, so it would be very unusual for them to have a different state). That
would make the most sense since a Type_Invariant is a property of a (private)
type; it's odd for some unrelated declaration to be determining anything about
such a property. As noted above, the compiler would always know when the body is
compiled, and I think that is sufficient for it to make optimizations.

****************************************************************

From: Bob Duff
Sent: Tuesday, November 22, 2011  3:17 PM

> I would use it (as outlined above) if it had been defined in Ada 95.
> But I wouldn't use it for inbound contract checks, because those
> should never be
                                                       ^^^^^^^^^^^^^^^^^^
> turned off and I don't want to mix requirements on the client with
> internal self-checks -- these are wildly different things.

I agree with the distinction you make between internal self-checks in a library
versus checks on clients' proper use of that library.

But I think it's very wrong to say "should never be turned off"
about ANY checks.  If I have strong evidence (from testing, code reviews,
independent proof tools, ...) that calls to the library won't fail
preconditions, and I need to turn them off to meet my performance goals, then I
want to be able to do so. CLAW is not such a case, because it's not
performance-critical, because it's painting stuff on a screen.  But you can't
generalize from that to all libraries.

Commenting them out is not a good option, because then I can't automatically
turn them back on for debug/test builds.

Look at it another way:  The ability to turn off checks (of whatever
sort) gives me the freedom to put in useful checks without worrying about how
inefficient they are.  I've written some pretty complicated assertions, on
occasion.

Or yet another way:  The decision to turn checks on or off properly belongs to
the programmer, not to us language designers.

Erhard imagines a library carefully written by competent people, and a client
sloppily written by nincompoops.  That is indeed a common case, but the opposite
case, and everything in between, can also happen.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011  4:23 PM

> Assertion_Policy is based on compilations (typically a source file),
> so perhaps you could use Ignore in the body, while using Check on the
> spec.
>
> I could imagine a version of Assertion_Policy which allowed you to
> specify that a different exception is to be raised.  Now go talk to your
> favorite compiler vendor, and don't forget to bring some cash... ;-)

My favorite vendor needs more time to do the work as opposed to cash. :-)
[Although I suppose cash to live on would help provide that time...probably
ought to go buy some lottery tickets. :-)]

I agree that vendor extensions here will be important (some sort of function
classification to allow optimizations will be vital). I do worry that we'll end
up with too much fragmentation this way -- although I suppose we'll all end up
copying whatever AdaCore does by some sort of necessity.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011  4:56 PM

...
> But I think it's very wrong to say "should never be turned off"
> about ANY checks.  If I have strong evidence (from testing, code
> reviews, independent proof tools, ...) that calls to the library won't
> fail preconditions, and I need to turn them off to meet my performance
> goals, then I want to be able to do so.
> CLAW is not such a case, because it's not performance-critical,
> because it's painting stuff on a screen.  But you can't generalize
> from that to all libraries.

I don't disagree with this basic point, but supposed "performance goals" are
usually used to justify all sorts of iffy behavior. But in actual practice,
performance is impacted only a small amount by checks, and it is very rare for
the checks to be the difference in performance.

That is, most of the time, performance is way too slow/large/whatever, and
turning off checks doesn't make enough difference. You have to switch
algorithms. Or performance is really good enough as it is, and turning off
checks just makes the program more fragile and prone to malfunction without any
real benefit.

The area in between, where turning off checks helps, is pretty narrow. It's almost
non-existent for constraint checks (a large number of which compilers remove). I
like to think that the situation will be pretty similar for contract checks.

Specifically, my fantasy compiler (I say fantasy compiler not because I don't
know how to implement this, but more because I don't know if I'll ever have the
time to actually implement it to make it real -- and I don't want to make non-real
things sound like they exist) should be able to remove most invariants and
postconditions by proving them true. Similarly, it should be able to remove most
predicates and preconditions by proving them true (based in part on the
postconditions, known to either be checked or proved true). So the incremental
effect of contract checks will be quite small.

I should note that this depends on writing "good" contracts, using only provably
"pure" functions. My fantasy compiler will surely give lots of warnings for bad
contracts (and reject them altogether in some modes). Bad contracts always have
to be evaluated and are costly - but these aren't really "contracts" given that
they change based on factors other than the object values involved. One hopes
these things aren't common.

> Commenting them out is not a good option, because then I can't
> automatically turn them back on for debug/test builds.

True enough. But that's the role of pragma Assert: put the debugging stuff in
there, not into contracts.

> Look at it another way:  The ability to turn off checks (of whatever
> sort) gives me the freedom to put in useful checks without worrying
> about how inefficient they are.  I've written some pretty complicated
> assertions, on occasion.

Which is fine. But if you write complicated (and non-provable) contracts, I hope
your compiler cuts you off at the knees. (My fantasy compiler surely will
complain loudly about such things.) Keep contracts and debugging stuff separate!
(And anything that can't be part of the production system is by definition
debugging stuff!)

> Or yet another way:  The decision to turn checks on or off properly
> belongs to the programmer, not to us language designers.

True enough. But what worries me is lumping together things that are completely
different. Debugging stuff can and should be turned off, and no one should have
to think too much about doing so. Contract stuff, OTOH, should only be turned
off under careful consideration. It's annoying to put those both under the same
setting, encouraging the confusion of thinking these things are the same
somehow.

> Erhard imagines a library carefully written by competent people, and a
> client sloppily written by nincompoops.  That is indeed a common case,
> but the opposite case, and everything in between, can also happen.

Yes, of course. But as I note, well-written contracts can be (and should be)
almost completely eliminated by compilers. So the reason for turning those off
should be fairly minimal (especially as compilers get better at optimizing
these).

The danger is confusing debugging stuff (like expensive assertions that cannot
be left in the production program -- and yes, I've written some things like
that) with contract stuff that isn't costly in the first place and should be
left in all programs with the possible exception of the handful on the razor's
edge of performance. (And, yes, I recognize the need to be able to turn these
things off for that razor's edge. I've never argued that you shouldn't be able
to turn these off at all, only that they shouldn't be lumped with "expensive
assertions" that can only be for debugging purposes.)

BTW, my understanding is that GNAT already has much finer control over
assertions and contracts than that offered by the Standard. Do you have any
feeling for how often this control is used vs. the blunt instrument given in the
Standard? If it is widely used, that suggests that the Standard is already
deficient.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 22, 2011  10:07 PM

> I don't disagree with this basic point, but supposed "performance
> goals" are usually used to justify all sorts of iffy behavior. But in
> actual practice, performance is impacted only a small amount by
> checks, and it is very rare for the checks to be the difference in
> performance.

But checks can be a menace in terms of deactivated code in a certified
environment, so often in a 178B context people DO want to turn off all checks,
because they don't want to deal with deactivated code.

> That is, most of the time, performance is way too slow/large/whatever,
> and turning off checks doesn't make enough difference. You have to
> switch algorithms. Or performance is really good enough as it is, and
> turning off checks just makes the program more fragile and prone to
> malfunction without any real benefit.

Yes, most of the time, but there are VERY significant exceptions.

> The area between where turning off checks helps is pretty narrow. It's
> almost non-existent for constraint checks (a large number of which
> compilers remove). I like to think that the situation will be pretty
> similar for contract checks.

That's just wrong; there are cases in which constraint checks have to be turned
off to meet performance goals.

> Specifically, my fantasy compiler (I say fantasy compiler not because
> I don't know how to implement this, but more because I don't know if
> I'll ever have the time to actually implement it to make it real --
> and I don't want make non-real things sound like they exist) should be
> able to remove most invariants and postconditions by proving them
> true. Similarly, it should be able to remove most predicates and
> preconditions by proving them true (based in part of the postconditions, known to either be checked or proved true).
> So the incremental effect of contract checks will be quite small.

Well hardly worth discussing fantasies!

> Which is fine. But if you write complicated (and non-provable)
> contracts, I hope your compiler cuts you off at the knees. (My fantasy
> compiler surely will complain loudly about such things.) Keep
> contracts and debugging stuff separate! (And anything that can't be
> part of the production system is by definition debugging stuff!)

I think you definitely are in fantasy land here, and I don't see any point
trying to set you straight, since you proclaim this to be a fantasy.

> True enough. But what worries me is lumping together things that are
> completely different. Debugging stuff can and should be turned off,
> and no one should have to think too much about doing so. Contract
> stuff, OTOH, should only be turned off under careful consideration.
> It's annoying to put those both under the same setting, encouraging
> the confusion of thinking these things are the same somehow.

For many people they are the same somehow, because assertions were always about
contracts...

> Yes, of course. But as I note, well-written contracts can be (and
> should be) almost completely eliminated by compilers. So the reason
> for turning those off should be fairly minimal (especially as
> compilers get better at optimizing these).

This idea of complete elimination is plain nonsense, and the idea that it can
happen automatically even more nonsensical. Randy, you might want to follow the
Hi-Lite project to get a little more grounded here and also familiarize yourself
with SPARK, and what can and cannot be feasibly achieved in terms of proof of
partial correctness.

> The danger is confusing debugging stuff (like expensive assertions
> that cannot be left in the production program -- and yes, I've written
> some things like that) with contract stuff that isn't costly in the
> first place and should be left in all programs with the possible
> exception of the handful on the razor's edge of performance. (And,
> yes, I recognize the need to be able to turn these things off for that
> razor's edge. I've never argued that you shouldn't be able to turn
> these off at all, only that they shouldn't be lumped with "expensive
> assertions" that can only be for debugging purposes.)

I find your distinction between assertions and pre/postconditions etc to be
pretty bogus.

> BTW, my understanding is that GNAT already has much finer control over
> assertions and contracts than that offered by the Standard. Do you
> have any feeling for how often this control is used vs. the blunt
> instrument given in the Standard? If it is widely used, that suggests
> that the Standard is already deficient.

I have never seen any customer code using this fine level of control. It still
seems reasonable to provide it!

Actually in GNAT, we have a generalized assertion pragma:

    pragma Check (checkname, assertion [, string]);

where checkname can be referenced in a Check_Policy to turn it on or off, but I
don't know if anyone uses it.

Internally something like a precondition gets turned into

   pragma Check (Precondition, .....)
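
To sketch how the two fit together (the check name, subprogram, and the
Is_Consistent function here are all invented for illustration):

    pragma Check_Policy (Heavy_Debug, Off);  -- this category is disabled

    procedure Push (S : in out Stack; X : Element) is
    begin
       pragma Check (Heavy_Debug, Is_Consistent (S), "stack corrupted");
       ...
    end Push;

With the policy On, the check behaves like a pragma Assert with the given
message; with Off, it is ignored entirely.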

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 22, 2011  11:07 PM

...
> I'm thinking about something like Claw, where the failure of a
> postcondition or invariant means that there is a bug in Claw (call
> tech support!) while the failure of a precondition or predicate means
> that there is a bug in the client's code (damn, better go fix it).
> There ought to be some differentiation between these.

I would assume that appropriate messages indicate what is going on.

****************************************************************

From: Bob Duff
Sent: Wednesday, November 23, 2011  7:26 AM

> BTW, my understanding is that GNAT already has much finer control over
> assertions and contracts than that offered by the Standard. Do you
> have any feeling for how often this control is used vs. the blunt
> instrument given in the Standard? If it is widely used, that suggests
> that the Standard is already deficient.

My evidence is not very scientific, but my vague impression is that most people
turn checks on or off for production.  I.e. not fine grained.

I think "fine grained" would be a good idea in many cases, but I suppose most
people find it to be too much trouble.

Which reminds me of another reason to have a way to turn off ALL checks:  If
you're thinking of turning off some checks, you should turn off all checks, and
measure the speed -- that tells you whether turning off SOME checks can help,
and a best-case estimate of how much.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 23, 2011  1:58 PM

> > I don't disagree with this basic point, but supposed "performance
> > goals" are usually used to justify all sorts of iffy behavior. But
> > in actual practice, performance is impacted only a small amount by
> > checks, and it is very rare for the checks to be the difference in
> > performance.
>
> But checks can be a menace in terms of deactivated code in a certified
> environment, so often in a 178B context people DO want to turn off all
> checks, because they don't want to deal with deactivated code.

I don't understand. A check of the sort I'm talking about is by definition not
something "deactivated". Either the compiler can prove it to be OK, in which
case it doesn't appear in the code at all (it's essentially a comment), or it is
executed every time the program runs, in which case it is an integral part of
the code.

I'd expect problems in terms of 178B for the handlers of check failures (since
there wouldn't be an obvious way to execute or verify them), but not the checks
themselves. Perhaps you meant the handlers?

...
> > The area between where turning off checks helps is pretty narrow.
> > It's almost non-existent for constraint checks (a large number of
> > which compilers remove). I like to think that the situation will be
> > pretty similar for contract checks.
>
> That's just wrong, there are cases in which constraint checks have to
> be turned off to meet performance goals.

I said "almost non-existent". The only time it makes sense to turn off
constraint checks is in a loop that has been verified to be very "hot" in terms
of program performance, and for which no better algorithm is available. And even
then, you would be better off restructuring the loop to move the checks outside
of it rather than turning them off (that's usually possible with the addition of
subtypes).
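
For example, the kind of restructuring I mean (just a sketch, with invented
names; Vector is assumed to be "array (Integer range <>) of Integer"):

    procedure Sum (A : Vector; Lo, Hi : Integer; Total : out Integer) is
       subtype In_A is Integer range A'First .. A'Last;
       F : constant In_A := Lo;  -- any index check happens once, here
       L : constant In_A := Hi;
    begin
       Total := 0;
       for I in F .. L loop
          -- I is statically within A'Range, so a compiler can omit the
          -- index check on every iteration
          Total := Total + A (I);
       end loop;
    end Sum;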

...
> > Which is fine. But if you write complicated (and non-provable)
> > contracts, I hope your compiler cuts you off at the knees. (My
> > fantasy compiler surely will complain loudly about such things.)
> > Keep contracts and debugging stuff separate! (And anything that
> > can't be part of the production system is by definition debugging
> > stuff!)
>
> I think you definitely are in fantasy land here, and I don't see any
> point trying to set you straight, since you proclaim this to be a
> fantasy.

I'd like you to try. The only reason I proclaimed this a fantasy is because I have
no idea if I'll ever be able to spend the 2-3 months of time to implement
aspect_specifications and Pre/Post in the Janus/Ada *front-end*. That will be a
*lot* of work.

But the "proving" is just the stuff the Janus/Ada optimizer (and I would expect
every other compiler optimizer) already does. I'd add two very simple extensions
to what it already does: (1) add support for "facts" to the intermediate code --
that is expressions that the optimizer knows the value of without explicitly
evaluating them (so they would have no impact on generated code); (2) extending
common-subexpression elimination to function calls of functions that are
provably pure (using a new categorization).

The only real issue I see with this is that the Janus/Ada optimizer is not set
up to give detailed feedback to the user, which means that it can only report
that it was unable to "prove" a postcondition -- it can't give any indication as
to why. Obviously that is something that will need future enhancement.

> > True enough. But what worries me is lumping together things that are
> > completely different. Debugging stuff can and should be turned off,
> > and no one should have to think too much about doing so. Contract
> > stuff, OTOH, should only be turned off under careful consideration.
> > It's annoying to put those both under the same setting, encouraging
> > the confusion of thinking these things are the same somehow.
>
> For many people they are the same somehow, because assertions were
> always about contracts...

Hiding contracts in the body was always a bad idea. In any case, there is
clearly a grey area, and it will move as compiler (and other tools) technology
improves. But I would say that anything that a compiler cannot use in proving
should be an assertion (that is, a debugging aid) rather than part of a
contract. That means to me that such things should not change the correctness of
the program if turned off, and they should be suppressible independent of those
things that can be used automatically.

> > Yes, of course. But as I note, well-written contracts can be (and
> > should be) almost completely eliminated by compilers. So the reason
> > for turning those off should be fairly minimal (especially as
> > compilers get better at optimizing these).
>
> This idea of complete elimitation is plain nonsense, and the idea that
> it can happen automatically even more nonsensical.
> Randy, you might want to follow the Hi-Lite project to get a little
> more grounded here and also familiarize yourself with SPARK, and what
> can and cannot be feasibly achieved in terms of proof of partial
> correctness.

I think SPARK is doing more harm than good at this point. It was very useful as
a proof-of-concept, but it requires a mindset and leap that prevents the vast
majority of programmers from ever using any part of it. The effect is that they
are solving the wrong problem.

In order for something to be used by a majority of (Ada) programmers, it has to
have at least the following characteristics:
(1) Little additional work required;
(2) Little degrading of performance;
(3) Has to be usable in small amounts.

The last is the most important. The way I and probably many other Ada
programmers learned the benefit of Ada constraint checks was to write some code,
and then have a bug detected by such a check. It was noticed that (a) fixing the
bug was simple since the check pinpointed the location of the problem, and (b)
finding and fixing the bug would have been much harder without the check. That
caused me to add additional constraints to future programs, which found
additional bugs. The feedback loop increased the adoption of the checks to
the point that I will only consider turning them off for extreme performance
reasons.

I think we have to set up the same sort of feedback loop for contracts. That
means that simple contracts should have little impact on performance (and in
some cases, they might even help performance by giving additional information to
the optimizer and code generator). (I also note that this focus provides
obvious ways to leverage multicore host machines, which is good as a lot of
these techniques are not easily parallelizable.)

As such, I'm concentrating on what can be done with existing technology -- that
is, subprogram level understanding using well-known optimization techniques. I'm
sure we'll want to go beyond that eventually, but we need several iterations of
the feedback loop in order to build up customer demand for such things. (And
then we'll have to add exception contracts, side-effect contracts, and similar
things to Ada to support stronger proofs.)

> > The danger is confusing debugging stuff (like expensive assertions
> > that cannot be left in the production program -- and yes, I've
> > written some things like that) with contract stuff that isn't costly
> > in the first place and should be left in all programs with the
> > possible exception of the handful on the razor's edge of
> > performance. (And, yes, I recognize the need to be able to turn
> > these things off for that razor's edge. I've never argued that you
> > shouldn't be able to turn these off at all, only that they shouldn't
> > be lumped with "expensive assertions" that can only be for debugging
> > purposes.)
>
> I find your distinction between assertions and pre/postconditions etc
> to be pretty bogus.

Fair enough. But then there will never be any real progress here, because few
people will use expensive contracts that slow down their programs. And I
*really* don't want to go back to the bad old days of multiple versions of
programs to test (one with checks and one without). It's like the old saw says:
create and test a car with seatbelts, then remove them in the production
version.

So I want to separate the contracts (an integral part of program correctness,
and almost never turned off) from the assertions (expensive stuff that has
little impact on program correctness, can be turned off without impact). I
suspect that one probably could easily find several additional categories, which
you may want to control separately.

Note that one of the reasons why I supported the "assert_on_start" and
"assert_on_exit" ideas is that these provide a place to put those expensive
assertions that cannot be reasoned about with simple technology. I surely agree
that just because something is expensive is no good reason not to include it.
But simply because you have a few expensive assertions around is no good reason
to turn off the bulk of the contracts which are cheap and easily eliminated
completely.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011  2:19 PM

...
> My view of assertions is a bit different: an assertion is something
> that ought to be True at a given point, but it doesn't affect the
> correctness of the program. IMHO, checks that do affect the
> correctness of the program should not be given as an assertion (pragma
> Assert), but rather be directly part of the program. For instance, if
> you have an if statement that handles two possible cases, but other
> cases are not handled, there ought to be some else branch with a bug
> box or exception raise in it -- not some assertion that can be
> ignored. (I've yet to see a case where a pragma Assert could do
> something that you couldn't do with regular code; that's not true of
> our old Condcomp facility, which allows conditionally compiled
> declarations and
> pragmas.)

OK, for me, I am with Tuck, assertions are definitely about preconditions and
postconditions, and if false, something is just as wrong as if a Pre or Post
aspect was violated

> I realize that pragma Assert can be used in other ways than I outlined
> above, but I always thought that the above was the intent -- it would
> surely be the way that I'd use it (if we had implemented it at all; we
> have not done so to date, so I've never actually used it).

No, that's not at all the intent as I see it; in fact I find your
interpretation very odd.

>  Thus, pragma Assert is purely a debugging aid; it never has an affect
> on any sort of correctness and it can be ignored without any harmful
> effects.

I absolutely disagree with this view of assertions. The primary purpose of
assertions is to inform the reader of pre and post conditions for sections of
code. They are valuable even if turned off, and doing the same thing with
regular code is actively confusing. Assertions are NOT part of the program, they
are statements about the program (aka contracts) at least as I use them. Yes,
they can be turned on and just as Pre/Post are useful debugging aids when turned
on, so are our assertions.

Once again, I see no substantial differences between assertions and pre/post
conditions.

Have a look for example at the sources of einfo.ads/adb in the GNAT compiler.
The subprograms in the body are full of assertions like:

>    procedure Set_Non_Binary_Modulus (Id : E; V : B := True) is
>    begin
>       pragma Assert (Is_Type (Id) and then Is_Base_Type (Id));
>       Set_Flag58 (Id, V);
>    end Set_Non_Binary_Modulus;

Here the assert is to be read as, and behaves as, a precondition.
Any call that does not meet this precondition is wrong. Yes, it would be much
better if these were expressed as preconditions in the spec, but we didn't have
that capability fifteen years ago. We have an internal ticket to replace many of
our assertions with pre/post conditions, but that's not always feasible.
Consider this kind of code:

   if Ekind (E) = E_Signed_Integer then
      ...

   elsif Ekind (E) = E_Modular_Integer then
      ...

   else pragma Assert (Is_Floating_Point_Type (E));
      ...
   end if;

Here the pragma Assert is saying that the only possibility (in the absence of
someone screwing up preconditions somewhere) is a floating-point type. It does
not need to be tested, since it is the only possibility, but this is very useful
documentation, and of course we can turn on assertions, and then, like all
contracts, we enable useful debugging capabilities.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011  2:21 PM

Overall comment from Robert. I don't think anything needs to be changed here.
Let's let usage determine the need for finer grained control, and next time
around, we can consider whether to standardize some of this finer control.

In practice it won't make much difference; most implementations go out of their
way to accommodate pragmas/attributes/restrictions etc. from other
implementations -- we have dozens of pragmas that are there just because some
other implementation invented them.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011  2:23 PM

> Thinking more about future users...:
>
> What I would like to be able to is to turn off Assertions and
> Postconditions in MY code, because I have verified it to my heart's
> content. But what I would like to continue checking is the
> Preconditions of services that my library offers.
> After all, there is no limit to the s... of people (not reading
> contracts) and I want to ensure that my code works as advertised.
>
> Unfortunately, it looks like it is an all-or-nothing policy that is
> provided today. If that issue is (re)opened, I would argue for the
> differentiation above.

Don't worry, so far every implementation that provides pre and post conditions
provides the selective control you ask for. I will be VERY surprised if that
does not continue to be the case.

> If it is not re-opened then I want to note for 2020 that the upward
> incompatibility of billions and billions of lines of code relying on
> turning off precondition checks exists - and it would be a good thing
> to cause that incompatibility ;-)

I agree more control is needed, I just think it's premature to try to dictate to
implementations what it should be, let users and customers decide.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011  2:31 PM

> ...
>> But for those who have, they serve very much the same purpose as
>> these new contracts.  They are essentially higher-level constraint
>> checks.
>
> That's an abuse of the construct, IMHO.

Very peculiar viewpoint. In that case, most people using assertions are, in
your idiosyncratic view, abusing them; but I find this a bit silly, since we
know in practice that most users of assertions are in fact carrying out this
abuse, regarding it as the proper thing to do.

>> To imply that somehow they are fundamentally different seems like an
>> unjustifiable leap.
>
> They *are* fundamentally different:

No they aren't

> (1) As pragmas, they should never have an effect on the dynamic
> correctness of the program. It should always be possible to erase all
> of the pragmas and get a functioning program. Anything else is an
> abuse of pragmas. We got rid of almost all of the offending pragmas in Ada
> 2012.

That's a purist position that bears no relationship to reality. E.g.
if you take away a pragma that specifies the queuing policy, of course the
program may fail; I would say half the pragmas in existence, both language
defined and impl defined, are like that.

Pragmas that affect the dynamic behavior of the program (I really don't know
what you mean by "dynamic correctness"; to most programmers correct means the
program is doing what it is meant to do) include:

Detect_Blocking
Default_Storage_Pool
Discard_Names
Priority_Specific_Dispatching
Locking_Policy
Restrictions
Atomic
Atomic_Components
Attach_Handler
Convention
Elaborate_All
Export
Import
Interrupt_Priority
Linker_Options
Pack
Priority
Storage_Size
Volatile
Volatile_Components

In fact the number of pragmas that do NOT affect the dynamic behavior of a
program is very small.

So if your viewpoint is based on this fantasy that pragmas do not affect the
correctness of a program, it is built on a foundation of sand.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011  2:32 PM

Really one conclusion from this thread is that there is no consensus on whether
a late change is desirable, let alone important enough to do as a late change.

So it seems obvious to me that the proper action is to leave things alone. You
would need a VERY clear consensus to change something like this at this stage!

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011  2:56 PM

>> But checks can be a menace in terms of deactivated code in a
>> certified environment, so often in a 178B context people DO want to
>> turn off all checks, because they don't want to deal with deactivated
>> code.
>
> I don't understand. A check of the sort I'm talking about is by
> definition not something "deactivated". Either the compiler can prove
> it to be OK, in which case it doesn't appear in the code at all (it's
> essentially a comment), or it is executed every time the program runs. In
> which case it is an integral part of the code.

> I'd expect problems in terms of 178B for the handlers of check
> failures (since there wouldn't be an obvious way to execute or verify
> them), but not the checks themselves. Perhaps you meant the handlers?

I mean that if you have

    if A < 10 then
      raise Constraint_Error;
    end if;

then you will have deactivated code (the raise can never be executed).
Now if you are doing source level coverage (which is really what 178B requires),
you may be able to deal with this with a suitable traceability study. But in
practice many producers of safety critical code (and not a few DERs) prefer to
see every line of object code executed in tests. It is controversial whether
178B requires this, but let me say for sure Robert Dewar requires it :-)

If I was managing a SC project, I would want to see 100% coverage at the object
level, and having to deal with all these cases by injection tests would be too
painful, so I would probably go with turning checks off. After all, if you are
using a SPARK approach in which you prove that no constraint errors can occur,
what's the point in leaving in junk dead tests?

> I said "almost non-existent". The only time it makes sense to turn off
> constraint checks is in a loop, verified to be very "hot" in terms of
> program performance, and no better algorithm is available. And even
> then, you would be better off restructuring the loop to move the
> checks outside of it rather than turning them off (that's usually
> possible with the addition of subtypes).

This is just totally at odds with how many people use the language.
There is nothing necessarily wrong with making sure the language covers RB's
idiosyncratic views on how the language should be used, as well as everyone
else's idiosyncratic views, but we don't want to design the language so it is
ONLY suitable for this usage view.

> I'd like you try. The only reason I proclaimed this a fantasy is
> because I have no idea if I'll ever be able to spend the 2-3 months of
> time to implement aspect_specifications and Pre/Post in the Janus/Ada *front-end*.
> That will be a *lot* of work.

As I say, I recommend you take a close look both at the SPARK technology and at
the Hi-Lite project; these are both very real.

> But the "proving" is just the stuff the Janus/Ada optimizer (and I
> would expect every other compiler optimizer) already does. I'd add two
> very simple extensions to what it already does: (1) add support for
> "facts" to the intermediate code -- that is expressions that the
> optimizer knows the value of without explicitly evaluating them (so
> they would have no impact on generated code); (2) extending
> common-subexpression elimination to function calls of functions that are provably pure (using a new categorization).

It is MUCH MUCH harder than you think, and lots of people have spent lots of
time working on these problems, and the naive optimism you express is not shared
by those people.

> Hiding contracts in the body was always a bad idea. In any case, there
> is clearly a grey area, and it will move as compiler (and other tools)
> technology improves. But I would say that anything that a compiler
> cannot use in proving should be an assertion (that is, a debugging
> aid) rather than part of a contract. That means to me that such things
> should not change the correctness of the program if turned off, and
> they should be suppressable independent of those things that can be used automatically.

Well some preconditions and postconditions belong naturally in the body.
Consider a memo function. The memoizing should be totally invisible in the spec,
and generally the caller won't even be able to see or talk about the memo data.
But it makes perfect sense to write preconditions and postconditions in the body
saying what the memo data must look like on entry and exit.
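
A sketch of what I mean (everything here is invented for illustration; the
point is that the memo table lives only in the body):

    package body Memo_Example is
       Max_N : constant := 90;
       Cache : array (0 .. Max_N) of Long_Long_Integer := (others => -1);
       -- -1 marks "not yet computed"; callers never see any of this

       function Fib (N : Natural) return Long_Long_Integer is
       begin
          pragma Assert (N <= Max_N);
          -- body-level "precondition": the hidden table must cover N
          if Cache (N) < 0 then
             Cache (N) :=
               (if N < 2 then Long_Long_Integer (N)
                else Fib (N - 1) + Fib (N - 2));
          end if;
          pragma Assert (Cache (N) >= 0);
          -- body-level "postcondition": the memo entry is now filled in
          return Cache (N);
       end Fib;
    end Memo_Example;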

> I think SPARK is doing more harm than good at this point. It was very
> useful as a proof-of-concept, but it requires a mindset and leap that
> prevents the vast majority of programmers from every using any part of
> it. The effect is that they are solving the wrong problem.

I have no idea what you mean, but I am pretty sure you are not really familiar
with SPARK, or the associated proof technologies. Are you even familiar with the
annotation language?

> In order for something to be used by a majority of (Ada) programmers,
> it has to have at least the following characteristics:
> (1) Little additional work required;
> (2) Little degrading of performance;
> (3) Has to be usable in small amounts.

Again, how about looking at Hi-Lite, which is precisely about figuring out the
extent to which formal methods can be employed using roughly these criteria.
Randy, while you sit fantasizing over what can be done, other people are
devoting huge amounts of effort to thinking about it.

> The last is the most important. The way I and probably many other Ada
> programmers learned the benefit of Ada constraint checks was to write
> some code, and then have a bug detected by such a check. It was
> noticed that (a) fixing the bug was simple since the check pinpointed
> the location of the problem, and (b) finding and fixing the bug would
> have been much harder without the check. That caused me to add
> additional constraints to future programs, which found additional
> bugs. The feedback loop increased the adoption of the checks to
> the point that I will only consider turning them off for extreme performance reasons.

Yes, we know constraint checks are a relatively simple case. In the SPARK context
for example, it has proved practical to prove large applications like iFacts to
be free of any run-time errors, but it has NOT proved feasible to prove partial
correctness for the whole application.

> As such, I'm concentrating on what can be done with existing
> technology -- that is, subprogram level understanding using well-known
> optimization techniques. I'm sure we'll want to go beyond that
> eventually, but we need several iterations of the feedback loop in
> order to build up customer demand for such things. (And then we'll
> have to add exception contracts, side-effect contracts, and similar
> things to Ada to support stronger
> proofs.)

We are already WAY beyond this. And there is very real customer demand. Have a
look at http://www.open-do.org/projects/hi-lite/. There are many large scale
users of Ada in safety-critical contexts who are very interested, especially in
the environment of 178-C in the extent to which unit proof can replace unit
testing.

> Fair enough. But then there will never be any real progress here,
> because few people will use expensive contracts that slow down their
> programs. And I
> *really* don't want to go back to the bad old days of multiple
> versions of programs to test (one with checks and one without). It's
> like the old saw
> says: create and test a car with seatbelts, then remove them in the
> production version.

People DO use expensive contracts that slow down their programs, ALL THE TIME!
Remember that at AdaCore, we have years of experience here. The precondition and
postcondition pragmas of GNAT which have been around for over three years, were
developed in response to customer demand by large customers developing large
scale safety-critical applications. It really doesn't matter that the tests may
be expensive if enabled at run time; they won't be enabled in the final build.

> So I want to separate the contracts (an integral part of program
> correctness, and almost never turned off) from the assertions
> (expensive stuff that has little impact on program correctness, can be
> turned off without impact). I suspect that one probably could easily
> find several additional categories, which you may want to control separately.

The "almost never turned off" does not correspond with how we see our customers
using these features *at all*.

And indeed even after we replace many of the assertions in GNAT itself with pre
and post conditions, we will normally turn them all off as we do now for
production builds of the compiler, since turning assertions on does compromise
compiler performance significantly, and this matters to many people. We do
builds internally with assertions turned on, and these are indeed useful for
debugging purposes.

> Note that one of the reasons why I supported the "assert_on_start" and
> "assert_on_exit" ideas is that these provide a place to put those
> expensive assertions that cannot be reasoned about with simple
> technology. I surely agree that just because something is expensive is
> no good reason not to include it. But simply because you have a few
> expensive assertions around is no good reason to turn off the bulk of
> the contracts which are cheap and easily eliminated completely.

The idea that preconditions and postconditions will be restricted to simple
things that can "be reasoned about with simple technology":

a) bears no relation to the way people are actually using the feature

b) does not correspond with my expectations of how people will use
    the technology

c) does not correspond with my recommendations of how people *should*
    use the technology

I have no objection to making sure that what we define is reasonably usable for
your (to me very peculiar) view of how these features should be used, but I
would be appalled to think this was the official view of the language design, or
that the design would be compromised by this viewpoint (I find assert_on_start
and assert_on_exit deeply flawed if they are motivated by dealing with expensive
stuff).

I contest the idea that the bulk of contracts are cheap and easily eliminated.
Again look at Hi-Lite to get a better feel for the state of the art in this
respect.

Even just eliminating all constraint checks is far from trivial (read some of
the SPARK papers on this subject).

****************************************************************

From: Bob Duff
Sent: Wednesday, November 23, 2011  2:59 PM

>...But I would say that anything that a compiler cannot  use in proving
>should be an assertion (that is, a debugging aid) rather than  part of
>a contract.

Could we please settle on terminology?  Preferably somewhat standard
terminology?

The term "assertion" means pragma Assert, precondition, postcondition,
invariant, and predicate.  Bertrand Meyer uses it that way, and I think it makes
sense.  Probably constraints and null exclusions should also be called
assertions.

I really think equating "assertion" and "debugging aid" will cause nothing but
confusion.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 23, 2011  4:11 PM

> > I don't understand. A check of the sort I'm talking about is by
> > definition not something "deactivated". Either the compiler can
> > prove it to be OK, in which case it doesn't appear in the code at
> > all (it's essentially a comment), or it is executed every time the
> > program runs. In which case it is an integral part of the code.
>
> > I'd expect problems in terms of 178B for the handlers of check
> > failures (since there wouldn't be an obvious way to execute or
> > verify them), but not the checks themselves. Perhaps you meant the handlers?
>
> I mean that if you have
>
>     if A < 10 then
> >       raise Constraint_Error;
>     end if;
>
> then you will have deactivated code (the raise can never be executed).

I see. But this is not actually what will be generated in the object code (at
least for language-defined checks).

> Now if you are doing source level coverage (which is really what 178B
> requires), you may be able to deal with this with a suitable
> traceability study. But in practice many producers of safety critical
> code (and not a few DER's) prefer to see every line of object code
> executed in tests. It is controversial whether 178B requires this, but
> let me say for sure Robert Dewar requires it :-)
>
> If I was managing a SC project, I would want to see 100% coverage at
> the object level, and having to deal with all these cases by injection
> tests would be too painful, so I would probably go with turning
> checks off. After all if you are using a SPARK approach in which you
> prove that no constraint errors can occur, what's the point in leaving
> in junk dead tests.

I agree with this. But it seems to me that you don't need to test every possible
check, only one. That's because in practice the code will look something like:

    Mov EAX,[A]
    Cmp EAX, 10
    Jl  Check_Failed

And every instruction of the object code here will be executed whether or not
the check fails. (The branch might even be a jump to the handler, if it is
local.)

I suppose if your requirement is to exercise every possible *path*, then you
have to remove all of the checks, but otherwise, you only need remove complex
ones with dead code. (And there aren't many of those that are language-defined.)

> > I said "almost non-existent". The only time it makes sense to turn
> > off constraint checks is in a loop, verified to be very "hot" in
> > terms of program performance, and no better algorithm is available.
> > And even then, you would be better off restructuring the loop to
> > move the checks outside of it rather than turning them off (that's
> > usually possible with the addition of subtypes).
>
> This is just totally at odds with how many people use the language.
> There is nothing necessarily wrong with making sure the language
> covers RB's idiosyncratic views on how the language should be used, as
> well as everyone else's idiosyncratic views, but we don't want to
> design the language so it is ONLY suitable for this usage view.

I agree with you in general. I started out as one of those "many people". I
eventually learned I was wrong, and also got bigger machines to run my code on.
The language should discourage bad practices and misuse, but not to the extent
of preventing things that need to be done occasionally. I surely would not
suggest eliminating the ability to turn off checks, what bothers me of course is
the requirement to treat them all the same, when they are clearly not the same.

...
> > But the "proving" is just the stuff the Janus/Ada optimizer (and I
> > would expect every other compiler optimizer) already does. I'd add
> > two very simple extensions to what it already does: (1) add support
> > for "facts" to the intermediate code -- that is expressions that the
> > optimizer knows the value of without explicitly evaluating them (so
> > they would have no impact on generated code); (2) extending
> > common-subexpression elimination to function calls of functions that
> > are provably pure (using a new categorization).
>
> It is MUCH MUCH harder than you think, and lots of people have spent
> lots of time working on these problems, and the naive optimism you
> express is not shared by those people.

Trying to do it all surely is harder than I am expressing here. But I only am
interested in the 75% that is easy, because that has the potential to make
contracts as cheap as constraint checks (can't eliminate *all* of those either,
it just matters that you eliminate most).

> > Hiding contracts in the body was always a bad idea. In any case,
> > there is clearly a grey area, and it will move as compiler (and
> > other tools) technology improves. But I would say that anything that
> > a compiler cannot use in proving should be an assertion (that is, a
> > debugging
> > aid) rather than part of a contract. That means to me that such
> > things should not change the correctness of the program if turned
> > off, and they should be suppressable independent of those things
> that can be used automatically.
>
> Well some preconditions and postconditions belong naturally in the body.
> Consider a memo function. The memoizing should be totally invisible in
> the spec, and generally the caller won't even be able to see or talk
> about the memo data. But it makes perfect sense to write preconditions
> and postconditions in the body saying what the memo data must look
> like on entry and exit.

I agree that some things naturally belong in the body -- those clearly are not
part of the contract. They're some other kind of thing (I've been calling them
assertions, but Bob hates that, so I'll just say that they need a good name -
"non-contract assertions" is the best I've got, and it isn't great).

I'd argue that such things matter not to the client at all, and probably don't
matter much to the implementation of the body either. Which makes them a very
different sort of thing to a contract assertion -- even a (contract)
postcondition which can be used as an assumption at the call site.

> > I think SPARK is doing more harm than good at this point. It was
> > very useful as a proof-of-concept, but it requires a mindset and
> > leap that prevents the vast majority of programmers from ever using
> > any part of it. The effect is that they are solving the wrong problem.
>
> I have no idea what you mean, but I am pretty sure you are not really
> familiar with SPARK, or the associated proof technologies. Are you
> even familiar with the annotation language?

I'm familiar enough to know that there is essentially no circumstance where I
could use it, as it has no support for dynamic dispatching, exceptions, or
references (access types or some real equivalent, not Fortran 66-type tricks). I
know it progressed some from the days when I studied it extensively (about 10
years ago), but not in ways that would be useful to me or many other Ada
programmers.

Anyway, the big problem I see with SPARK is that you pretty much have to use it
exclusively in some significant chunk of code to get any benefit. Which prevents
the sort of incremental adoption that is really needed to change the attitudes
of the typical programmer.

...
> Again, how about looking at Hi-Lite which is precisely about figuring
> out the extent to which formal methods can be employed using roughly
> these criteria. Randy, while you sit fantasizing over what can be
> done, other people are devoting huge amounts of effort to thinking

The only reason I'm just thinking rather than doing is lack of time and $$$ to
implement something. Prior commitments (to Ada 2012)  have to be done first.

> > The last is the most important. The way I and probably many other
> > Ada programmers learned the benefit of Ada constraint checks was to
> > write some code, and then have a bug detected by such a check. It
> > was noticed that (a) fixing the bug was simple since the check
> > pinpointed the location of the problem, and (b) finding and fixing
> > the bug would have been much harder without the check. That caused
> > me to add additional constraints to future programs, which found
> additional bugs. The feedback loop increased the adoption of
> > the checks to the point that I will only consider turning them off
> > for extreme performance reasons.
>
> Yes, we know contraint checks are a relatively simple case.
> In the SPARK context for example, it has proved practical to prove
> large applications like iFacts to be free of any run-time errors, but
> it has NOT proved feasible to prove partial correctness for the whole
> application.

I'm not certain I understand what you mean by "partial correctness", but if I am
even close, that seems like an obvious statement. And even if it was possible,
I'd leave that to others to do. I'm not talking about anything like that; I
don't want to prove anything larger than a subprogram. You can chain those
together to prove something larger, but I doubt very much it would ever come
close to "correctness". I just want to prove the obvious stuff, both for
performance reasons (so contracts don't cost much in practice) and to provide an
aid as to where things are going wrong.

> > As such, I'm concentrating on what can be done with existing
> > technology -- that is, subprogram level understanding using
> > well-known optimization techniques. I'm sure we'll want to go beyond
> > that eventually, but we need several iterations of the feedback loop
> > in order to build up customer demand for such things. (And then
> > we'll have to add exception contracts, side-effect contracts, and
> > similar things to Ada to support stronger
> > proofs.)
>
> We are already WAY beyond this. And there is very real customer demand.
> Have a look at http://www.open-do.org/projects/hi-lite/. There are
> many large scale users of Ada in safety-critical contexts who are very
> interested, especially in the environment of 178-C in the extent to
> which unit proof can replace unit testing.

I looked at the site and didn't see anything concrete, just a lot of motherhood
statements. I realize that there had to be a lot of that in the grant proposals,
but I would like to see more than that to feel that anything is really being
accomplished. How is Hi-Lite going to reach its goals?

And clearly, the users requiring 178 and the like are a tiny minority. It's
perfectly good to support them (they surely need it), but their needs are far
from the mainstream. As such "Hi-Lite" is just a corner of what I want to
accomplish. (If you don't have big goals, there is no point; and if the goals
are attainable you probably will end up disappointed. :-) I would like to see an
environment where all programmers include this sort of information in their
programs, and all compilers and tools use that information to detect (some) bugs
immediately. That can only be approached incrementally, and it requires a
grass-roots effort -- it's not going to happen just from big projects where
management can mandate whatever they want/need.

...
> People DO use expensive contracts that slow down their programs, ALL
> THE TIME! Remember that at AdaCore, we have years of experience here.
> The precondition and postcondition pragmas of GNAT which have been
> around for over three years, were developed in response to customer
> demand by large customers developing large scale safety-critical
> applications.
> It really doesn't matter that the tests may be expensive if enabled at
> run time, they won't be enabled in the final build.

Which is exactly wrong: leaving the garage without seatbelts. I understand that
there are reasons for doing this, but it doesn't make it any more right.

And I'm personally not that interested in "safety-critical applications",
because these people will almost always do the right thing ultimately -- they
have almost no other choice (they'd be exposed to all sorts of liability
otherwise). I'm much more concerned about the mass of less critical applications
that still would benefit from fewer errors (that would be all of them!).

I've wasted too much of my (and your) time with this, so I'll stop here. No need
to convince each other -- time will tell.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 23, 2011  4:25 PM

> The term "assertion" means pragma Assert, precondition, postcondition,
> invariant, and predicate.  Bertrand Meyer uses it that way, and I
> think it makes sense.  Probably constraints and null exclusions should
> also be called assertions.
>
> I really think equating "assertion" and "debugging aid" will cause
> nothing but confusion.

OK, I'll stop trying to forward this discussion.

I could live with some better terminology for the separation, but the separation
is clearly real, and those who choose to ignore it force programmers into a
corner where they have to choose between a too-slow program with checks on, or a
decently performing program that is no better than a C program. We have the
technology to prevent the need of this Hobson's choice, and it's sad not to be
able to get any traction with it.

Beyond that, I find it incredibly sad that people want to drive Ada into a box
where it is only useful for large safety-critical applications. *All*
applications can benefit from this sort of technology, but not if it is turned
into a difficult formal language that only a few high priests can understand.
And that's what I'm getting out of this discussion.

Probably you are all right, and there is no real value to the beginner and
hobbyist anymore, since they can't be monetized. Probably I'd be best off
stopping thinking for myself and just becoming one of the sheep punching a
clock. Because the alternative isn't pretty.

****************************************************************

From: John Barnes
Sent: Wednesday, November 23, 2011  4:30 PM

Partial correctness simply means that a program is proved to be correct provided
it terminates. It's nothing to do with the proof being otherwise flaky. Mostly
if a program has been proved to be partially correct then it really is correct
but there are amusing examples where all the proof comes out OK yet the program
is flawed because it cannot be proved to terminate in some cases.

There is a simple example on page 289 of my Spark book. It's a factorial
function. It looks OK but it loops for ever in one case.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011  4:57 PM

> Partial correctness simply means that a program is proved to be
> correct provided it terminates. It's nothing to do with the proof
> being otherwise flaky. Mostly if a program has been proved to be
> partially correct then it really is correct but there are amusing
> examples where all the proof comes out OK yet the program is flawed
> because it cannot be proved to terminate in some cases.

In the SPARK class, one of the examples was an integer square root. You were not
asked to prove partial correctness, but I did anyway. I programmed a binary
search, and tested as the termination condition that I actually had the square
root. Obviously this only terminates if it correctly computes the square root.
However, the trick in binary searches is always to get the termination correct
and make sure you don't go into an infinite loop :-) Of course I did not prove
that.
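The shape of that exercise might look like this (a reconstruction of the idea, not the actual course code): the exit test *is* the correctness condition, so termination implies the result is right, while getting the narrowing step wrong would make the loop spin forever:

```ada
--  Integer square root by binary search: returns Floor (Sqrt (N)).
function Sqrt (N : Natural) return Natural is
   Lo  : Natural := 0;
   Hi  : Natural := N + 1;
   Mid : Natural := 0;
begin
   loop
      Mid := (Lo + Hi) / 2;
      --  Exit only when Mid really is the square root, so the
      --  postcondition holds whenever the loop terminates.
      exit when Mid * Mid <= N and then (Mid + 1) * (Mid + 1) > N;
      if Mid * Mid > N then
         Hi := Mid;
      else
         Lo := Mid + 1;  --  get this wrong and the search never converges
      end if;
   end loop;
   return Mid;
end Sqrt;
```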

> There is a simple example on page 289 of my Spark book. It's a
> factorial function. It looks OK but it loops for ever in one case.

Yes, a very nice example.

BTW Randy, it would be a mistake to think that Hi-Lite is ONLY relevant to
178B/C environments, the purpose is precisely to make proof of preconditions and
postconditions more generally accessible.

****************************************************************

From: John Barnes
Sent: Thursday, November 24, 2011  9:31 AM

I contrived it by accident when giving a Spark course in York. One thing it
taught me was the value of for loops which by their very nature always
terminate. And by contrast I always avoid while loops.

****************************************************************

From: Robert Dewar
Sent: Thursday, November 24, 2011  9:53 AM

Well *always* sounds a bit suspicious, not everything is in the primitive
recursive domain!

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011  4:51 PM

> I see. But this is not actually what will be generated in the object
> code (at least for language-defined checks).

Yes it is; what else are you thinking of? (Certainly you don't want to use the
built-in x86 operations like the range check; they are pipeline catastrophes.)

> I agree with this. But it seems to me that you don't need to test
> every possible check, only one. That's because in practice the code
> will look something like:
>
>      Mov EAX,[A]
>      Cmp EAX, 10
>      Jl  Check_Failed

Well, that's not what it will look like for us, since we maintain traceability on
what check failed.

> I suppose if your requirement is to exercise every possible *path*,
> then you have to remove all of the checks, but otherwise, you only
> need remove complex ones with dead code. (And there aren't many of
> those that are
> language-defined.)

Right, if you have a JL, it has to execute both ways even in simple coverage.

> I agree with you in general. I started out as one of those "many
> people". I eventually learned I was wrong, and also got bigger
> machines to run my code on. The language should discourage bad
> practices and misuse, but not to the extent of preventing things that
> need to be done occasionally. I surely would not suggest eliminating
> the ability to turn off checks, what bothers me of course is the
> requirement to treat them all the same, when they are clearly not the same.

But checks are seldom turned on.

> I'm familiar enough to know that there is essentially no circumstance
> where I could use it, as it has no support for dynamic dispatching,
> exceptions, or references (access types or some real equivalent, not
> Fortran 66-type tricks). I know it progressed some from the days when
> I studied it extensively (about 10 years ago), but not in ways that
> would be useful to me or many other Ada programmers.

Well, do you write safety-critical code at all? If not, you really don't have
the right context; people usually subset severely in the SC case. Certainly you
would be very surprised to see access types and dynamic storage allocation in an
SC app. So the SPARK restrictions in practice correspond with the subset that
people use anyway.

> Anyway, the big problem I see with SPARK is that you pretty much have
> to use it exclusively in some significant chunk of code to get any
> benefit. Which prevents the sort of incremental adoption that is
> really needed to change the attitudes of the typical programmer.

Well, SC apps are more likely to be generated from scratch anyway, and we are
not talking about the typical programmer when it comes to SC apps. So SPARK is
not for the environment you are thinking in terms of (general-purpose
programming using the whole language).

> ...
>> Again, how about looking at Hi-Lite which is precisely about figuring
>> out the extent to which formal methods can be employed using roughly
>> these criteria. Randy, while you sit fantasizing over what can be
>> done, other people are devoting huge amounts of effort to thinking
>
> The only reason I'm just thinking rather than doing is lack of time
> and $$$ to implement something. Prior commitments (to Ada 2012)  have
> to be done first.

Well, take a quick look at the Hi-Lite pages; how much effort is that? I think
you will find them interesting.

> I'm not certain I understand what you mean by "partial correctness",
> but if I am even close, that seems like an obvious statement. And even
> if it was possible, I'd leave that to others to do. I'm not talking
> about anything like that; I don't want to prove anything larger than a
> subprogram. You can chain those together to prove something larger,
> but I doubt very much it would ever come close to "correctness". I
> just want to prove the obvious stuff, both for performance reasons (so
> contracts don't cost much in
> practice) and to provide an aid as to where things are going wrong.

Gosh, I am quite surprised you don't know what partial correctness means. This
is a term of art in proof of program correctness that is very old, at least
decades (so old I have trouble tracking down its first use; everyone even
vaguely associated with proof techniques knows this term).

Briefly, proving partial correctness means that IF the program terminates THEN
your proof says something about the results. If a single subprogram is
involved, then it would mean for example proving that IF the preconditions hold,
and IF the subprogram terminates, then the postconditions hold.

Proving termination (total correctness) is far far harder.

> I looked at the site and didn't see anything concrete, just a lot of
> motherhood statements. I realize that there had to be a lot of that in
> the grant proposals, but I would like to see more than that to feel
> that anything is really being accomplished. How is Hi-Lite going to
> reach its goals?

Did you read the technical papers? I don't think you looked hard enough!

> And clearly, the users requiring 178 and the like are a tiny minority.
> It's perfectly good to support them (they surely need it), but their
> needs are far from the mainstream. As such "Hi-Lite" is just a corner
> of what I want to accomplish. (If you don't have big goals, there is
> no point; and if the goals are attainable you probably will end up
> disappointed. :-) I would like to see an environment where all
> programmers include this sort of information in their programs, and
> all compilers and tools use that information to detect (some) bugs
> immediately. That can only be approached incrementally, and it
> requires a grass-roots effort -- it's not going to happen just from big
> projects where management can mandate whatever they want/need.

Randy, I think you have a LOT to study before understanding the issues here :-)

> Which is exactly wrong: leaving the garage without seatbelts. I
> understand that there are reasons for doing this, but it doesn't make
> it any more right.

If you can prove the car won't crash, seatbelts are not needed. It is pretty
useless for a flight control system to get a constraint error :-)

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011  5:04 PM

>> I really think equating "assertion" and "debugging aid" will cause
>> nothing but confusion.
>
> OK, I'll stop trying to forward this discussion.
>
> I could live with some better terminology for the separation, but the
> separation is clearly real

Well you think it's real, that does not make it "clearly real", and I don't find
the separation meaningful at all. And I gather that Tuck agrees with this.

> and those that chose to ignore it force programmers into a corner
> where they have to choose between a too-slow program with checks on,
> or a decently performing program that is no better than a C program.
> We have the technology to prevent the need of this Hobson's choice,
> and it's sad not to be able to get any traction with it.

It's because we can't agree on how it should be done! In the GNAT situation you
could have

    pragma Check (Expensive, condition, string)

    pragma Check (Cheap, condition, string)

and then control activation of Cheap or Expensive with e.g.

    pragma Check_Policy (Expensive, Off);
    pragma Check_Policy (Cheap, On);

which seemed to me useful when I implemented it, but in fact we never saw a
customer use this feature, and we never used it ourselves internally, so perhaps
I was mistaken in thinking this a useful idea.

For sure, the artificial attempt to corral cheap stuff into the fold of pragma
Assert, and expensive stuff into the fold of preconditions/postconditions seems
seriously misguided to me!

> Beyond that, I find it incredibly sad that people want to drive Ada
> into a box where it is only useful for large safety-critical
> applications. *All* applications can benefit from this sort of
> technology, but not if it is turned into a difficult formal language
> that only a few high priests can understand. And that's what I'm getting out of this discussion.

Nobody is saying that, it is just that your suggestions here make no technical
sense to some of us. You will have to do a better job of convincing people, or
perhaps decide you are wrong; so far I have not seen any sympathetic response to
the claim that assertions are about cheap debugging stuff, and pre/post
conditions are about expensive contracts.

> Probably you are all right, and there is no real value to the beginner
> and hobbyist anymore, since they can't be monetized. Probably I'd be
> best off stopping thinking for myself and just becoming one of the
> sheep punching a clock. Because the alternative isn't pretty.

That's just not true; we have huge numbers of students in the GAP program
enthusiastically using Ada, and the precondition/postcondition feature is one we
get student questions about all the time (there are also several university
courses focused on programming by contract using the Ada 2012 features).

****************************************************************

From: Yannick Moy
Sent: Thursday, November 24, 2011  3:33 AM

>> We are already WAY beyond this. And there is very real customer
>> demand. Have a look at http://www.open-do.org/projects/hi-lite/. There
>> are many large scale users of Ada in safety-critical contexts who are
>> very interested, especially in the environment of 178-C in the extent
>> to which unit proof can replace unit testing.
>
> I looked at the site and didn't see anything concrete, just a lot of
> motherhood statements. I realize that there had to be a lot of that in
> the grant proposals, but I would like to see more than that to feel
> that anything is really being accomplished. How is Hi-Lite going to
> reach its goals?

For something concrete, look at this page, with a small example of Ada 2012 code
whose contracts and absence of run-time errors are proved with our tool (called
gnatprove):

http://www.open-do.org/projects/hi-lite/a-database-example/

How we do it is presented in various papers pointed-to by the main page:
- the high-level view:
http://www.open-do.org/wp-content/uploads/2011/02/DewarSSS2010.doc
- a short introduction:
http://www.open-do.org/wp-content/uploads/2011/06/Hi_Lite_Contract.pdf
- the proof tool-chain:
http://www.open-do.org/wp-content/uploads/2011/06/Why_Hi_Lite_Ada.pdf
- the modified "formal" containers:
http://www.open-do.org/wp-content/uploads/2011/06/Correct_Code_Containing_Containers.pdf

> And clearly, the users requiring 178 and the like are a tiny minority.
> It's perfectly good to support them (they surely need it), but their
> needs are far from the mainstream. As such "Hi-Lite" is just a corner
> of what I want to accomplish.

We are clearly targeting DO-178 users in Hi-Lite, but not only them. As an
example, a partner of the project is Astrium, which is in the space industry,
where DO-178 does not apply. And customers who have expressed interest in
Hi-Lite come from a variety of industries.

> (If you don't have big goals, there is no point; and if the
> goals are attainable you probably will end up disappointed. :-) I
> would like to see an environment where all programmers include this
> sort of information in their programs, and all compilers and tools use
> that information to detect (some) bugs immediately.

We agree then! The technology we develop has this potential, because:
* We are not restricting the user in his code. Instead, we detect automatically
  which subprograms we can analyze.
* We don't require any user work upfront. You can start with the default
  implicit contract of True for both precondition and postcondition! And the
  effects of subprograms (global variables read/written) are generated
  automatically.
* We don't require an expensive analysis. Each subprogram is analyzed
  separately, based on the contracts of the subprograms it calls.
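
To make the modular-analysis point concrete, here is a hedged sketch (the
package, subprogram, and contract are invented for illustration; they are not
taken from the Hi-Lite examples) of the kind of self-contained Ada 2012
contract that lets a tool analyze each subprogram separately:

```ada
package Sat is
   Max : constant := 1_000;
   subtype Value is Integer range 0 .. Max;

   --  The postcondition fully describes the result, so a proof tool
   --  can analyze the body of Saturating_Add on its own, and analyze
   --  each caller using only this contract.
   function Saturating_Add (A, B : Value) return Value
     with Post => Saturating_Add'Result = Integer'Min (A + B, Max);
end Sat;
```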

> That can only be approached incrementally,
> and it requires a grass-roots effort -- it's not going to happen just
> from big projects where management can mandate whatever they want/need.

I don't think Hi-Lite qualifies as a "big project", if you mean by that
something immobile and monolithic. We expect there will be many uses of the
technology, and we will try to accommodate as many as possible. BTW, Hi-Lite is
a very open project, so feel free to participate in technical discussions on
the hi-lite-discuss mailing list
(http://lists.forge.open-do.org/mailman/listinfo/hi-lite-discuss), and let us
know if you would like to be involved in any way.

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, November 24, 2011  8:06 AM

Let's summarize the positions (correct me if I'm wrong):

The D view:
A program is a whole. If there's a bug in it, it is a bug, irrespective of
whether it is to be blamed on the caller or the callee. If checks are disabled,
they are disabled as a whole. That's what the important (i.e., paying) customers
(most of them in safety-critical applications) want.

The B view:
There are library providers and library users. The library provider is
responsible for his own bugs, and wants to be protected from incorrect calls
from the user. The user must be aware of the right protocol to call the library.
It makes sense to disable checks internal to the library (algorithmic checks),
but not those that enforce the contract with the user. Users are the public at
large, by far the largest group; few of them write critical applications. Not
many are paying customers, but this is the public we should try to conquer.

Personally, I understand D, but tend to agree with B. For example, the whole
ASIS interface follows the B model: each query has an "expected element kind",
and I would feel terribly insecure if the checks on the element kind were
removed when I put a pragma to increase the speed of my own program. (Of course,
currently these checks are not implemented as preconditions, but they could be.)

Now, what is the issue? To have one or two pragmas to disable some checks,
either together or separately. Those who want to disable all checks together can
easily do so with two pragmas. Those who want to disconnect the two aspects are
unable to do so if there is only one pragma. Moreover, it's not like a big
change that could bring nasty consequences all over the language. So I think
there should really be two pragmas, even at this late stage (I even think that
contract violations should have their own, different, exception, but I don't
have much hope of getting support on this).
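
As it happens, the direction recorded in this AI's !summary (Assertion_Policy
expanded in a Suppress-like way) ended up giving this separation within a
single pragma, by accepting a policy per assertion kind. A sketch of that
form, keeping the checks that protect the library from its clients while
dropping the internal ones:

```ada
--  Configuration pragma: each assertion kind gets its own policy.
pragma Assertion_Policy (Pre               => Check,
                         Static_Predicate  => Check,
                         Dynamic_Predicate => Check,
                         Post              => Ignore,
                         Type_Invariant    => Ignore,
                         Assert            => Ignore);
```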

****************************************************************

From: Tullio Vardanega
Sent: Thursday, November 24, 2011  8:24 AM

For what my opinion is worth here, I find JPR's summary useful
for the layman to get to grips with the avalanche of views aired
in this (very interesting) thread.

****************************************************************

From: Robert Dewar
Sent: Thursday, November 24, 2011  8:36 AM

> Now, what is the issue? To have one or two pragmas to disable some
> checks, either together or separately. Those who want to disable all
> checks together can easily do so with two pragmas. Those who want to
> disconnect the two aspects are unable to do so if there is only one
> pragma. Moreover, it's not like a big change  that could bring nasty
> consequences all over the language. So I think there should be really
> two pragmas, even at this late stage (I even think that contract
> violations should have their own, different, exception, but I don't
> have much hope to get support on this).

I strongly object to changing things in an incompatible way from the way they
are now. Turning off assertions should turn off all assertions, including
preconditions and postconditions.

But I don't have any strong feeling about introducing new gizmos that provide
more control (GNAT already has a far greater level of control than anything that
is likely to be stuffed in at the last minute).

I still oppose any change at this stage overall; we just don't have a clear
consensus for a last-minute change. I would say let the marketplace decide what
level of control is needed.

After all, in real life such control is provided with a mixture of compiler
switches (about which we can say nothing in the RM) and pragmas, and the balance
between them has to be carefully considered.

****************************************************************

From: Robert Dewar
Sent: Thursday, November 24, 2011  8:54 AM

> For what my opinion is worth here, I find JPR's summary useful for the
> layman to get to grips with the avalanche of views aired in this (very
> interesting) thread.

Indeed!

Note that I definitely agree that finer control is needed, and we have
implemented this finer control for years in GNAT (remember that we have
precondition and postcondition pragmas that work in Ada 95 and have been around
for years). So we agree that this kind of control is needed.

My concern is trying to figure out and set in stone what this control should be
at this late stage, when we lack a consensus about what should be done.

In practice you need both compiler switches and appropriate pragmas to control
things. The former (switches) will always be the province of the implementation
anyway. To me it is not so terrible if we have to leave some of the latter up to
the implementor as well.

****************************************************************

From: Robert Dewar
Sent: Thursday, November 24, 2011  9:00 AM

Let me note that the summary captures the discussion with Erhard, but it does
not capture the very different view between Randy and Robert/Tuck, which can be
summarized as follows:

Randy thinks pragma Assert is about simple things that have negligible overhead,
and should normally not be suppressed since the overhead is small. But pre/post
conditions are about complex things that may need to be suppressed. That's his
basis for wanting separate control (QUITE different from Erhard's point of view
about standard libraries, which I think we all share). Note that Randy uses the
term "assertions" to apply only to pragma Assert, and does not think of pre/post
as assertions.

Robert and Tuck don't make this big distinction; for them, following Eiffel
thinking, these all come under the heading of assertions, and there is no basis
for separate control *on those grounds*.

The Erhard argument about separate control of Pre and Post (I think this would
include body asserts as well) is quite different and seems valid, though I can't
get too excited: we have this degree of control in GNAT, but we have never seen
it used, either by a customer or by us internally. But to be fair, we haven't
really seen people use pre/post for general library development (we certainly do
NOT use them yet in our GNAT run-time library).

****************************************************************

From: Erhard Ploedereder
Sent: Friday, November 25, 2011  6:14 AM

I agree 100% with J.P.

We ought to get it right in the standard, not just the implementation, which
later will not budge at all from whatever it implemented initially. Sorry,
Robert, but saying that GNAT already provides fine-grained control, and
therefore the standard need not, simply does not cut it for me.

Further on the subject:

Hopefully there is an ENABLE option as well as a SUPPRESS, because as a library
writer I have to choose between:
 a) make it a PRE, which is really good documentation
 b) check it explicitly in the body and raise an exception

If I cannot insist on the PRE-check, I have no choice but to go for b), and I'll
be damned if I then provide the same information as a PRE as well.  I really do
not want to check it twice.

So, having no fine control over PRE is truly bad software engineering, because
it forces users interested in robust software to do the wrong thing.
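
The two options can be sketched like this (the Stacks package is invented for
illustration; it is not from any library discussed here):

```ada
package Stacks is
   type Stack is private;
   function Is_Full (S : Stack) return Boolean;

   --  Option a): the requirement is a Pre aspect: visible in the
   --  spec, but gone if the client compiles with assertions ignored.
   procedure Push_A (S : in out Stack; X : Integer)
     with Pre => not Is_Full (S);

   --  Option b): no contract in the spec; the body checks and raises
   --  explicitly, so the check cannot be turned off by any policy.
   procedure Push_B (S : in out Stack; X : Integer);
private
   type Vec is array (1 .. 10) of Integer;
   type Stack is record
      Top  : Natural := 0;
      Data : Vec;
   end record;
end Stacks;

package body Stacks is
   function Is_Full (S : Stack) return Boolean is (S.Top = S.Data'Last);

   procedure Push_A (S : in out Stack; X : Integer) is
   begin
      S.Top := S.Top + 1;
      S.Data (S.Top) := X;
   end Push_A;

   procedure Push_B (S : in out Stack; X : Integer) is
   begin
      if Is_Full (S) then
         raise Constraint_Error with "Push on full stack";
      end if;
      S.Top := S.Top + 1;
      S.Data (S.Top) := X;
   end Push_B;
end Stacks;
```

Writing both (a PRE plus the same check in the body) is the double check
Erhard objects to.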

****************************************************************

From: Robert Dewar
Sent: Friday, November 25, 2011  8:47 AM

> I agree 100% with J.P.
>
> We ought to get it right in the standard, not just the implementation,
> which later will not budge at all from whatever it implemented
> initially. Sorry, Robert, but saying that GNAT already provides
> fine-grained control and therefore the standard need not simply does
> not cut it for me.

I just think it is too much of a rush to "get this right in the standard" when
there is a wide divergence of opinion on what getting it right means. No one is
disagreeing that a finer level of control would be desirable *in the standard*;
we are just disagreeing over whether there is time for this relatively
non-urgent last-minute change, given the lack of consensus.

Erhard, make a specific proposal. I think the only chance for a change at this
stage (when you have several people opposed to making a change) is to make a
proposal; then, if that proposal gets a consensus, we can make the change.

If not, we can continue the discussion process, and then when we come to an
agreement, we just make it recommended practice and existing implementations can
conform right away to the decision. It may well be possible to agree on this so
that a decision is made by the time the standard is official.

> So, having no fine control over PRE is truly bad software-engineering,
> because it forces users interested in robust software to do the wrong thing.

I don't think any users will be forced to do anything they don't like in
practice :-)

****************************************************************

From: Bob Duff
Sent: Friday, November 25, 2011  10:31 AM

> Erhard, make a specific proposal, ...

Right, I've lost track of what is being proposed -- it got buried underneath a
lot of philosophical rambling (much of which I agree with, BTW).  I certainly
agree with Erhard that for a library, one wants a way to turn off "internal"
assertions, but keep checks (such as preconditions) that protect the library
from its clients.

I'm not sure how to accomplish that (e.g., what about preconditions that are
"internal", such as on procedures in a private library package?  What about when
a library calls an externally-visible part of itself?).  And I don't think it's
that big of a problem, because vendors will give their customers what they need.

We can standardize existing practice later (which is really how standards are
supposed to be built, anyway!).

****************************************************************

From: Randy Brukardt
Sent: Monday, November 28, 2011  8:51 PM

> > Erhard, make a specific proposal, ...
>
> Right, I've lost track of what is being proposed -- it got buried
> underneath a lot of philosophical rambling (much of which I agree
> with, BTW).  I certainly agree with Erhard that for a library, one
> wants a way to turn off "internal"
> assertions, but keep checks (such as preconditions) that protect the
> library from its clients.
>
> I'm not sure how to accomplish that (e.g., what about preconditions
> that are "internal", such as on procedures in a private library
> package?  What about when a library calls an externally-visible part
> of itself?).  And I don't think it's that big of a problem, because
> vendors will give their customers what they need.

I don't think that there have really been too many concrete proposals in this
thread - I'll try to rectify that below. But as someone who has supported a
large third-party library that was intended to work on multiple compilers, I
have to disagree with the above statement. In order to create and support
something like Claw, one has to minimize compiler dependencies. It is not likely
that we could have depended on compiler-specific features unless those were
widely supported.

On top of that, while I agree that vendors will give their *customers* what they
need, that does not necessarily extend to third-party library vendors (which
includes various open source projects as well as paid vendors), as library
creators are not necessarily the customers of the compiler vendors. Library
creators/vendors have a somewhat different set of problems than the typical
customer (need for maximum portability, need to control as much as possible how
the library is compiled, etc.).

So I think it is important that the standard at least try to address these
issues. And it seems silly to claim that it is too hard to do now, given that
this stuff is only implemented in one compiler (that I know of) right now and
that compiler has finer-grained control over these things than anything imagined
as a solution in the Standard. So I can't imagine any significant implementation
or description problems now -- that will get harder as we go down the road and
these things get more widely implemented.

---

Anyway, enough philosophy. Let me turn to some details. Following are several
proposals (most previously discussed), provided in my estimated order of
importance (note that I try to provide counter-arguments when they have come
up):

Proposal #1: The assertion policy in effect at the point of the declaration of a
    subtype with a predicate is the one used to determine whether it is checked
    or not (rather than the assertion policy at the point of the check, as it is
    currently proposed in the draft Standard).

Reason #1A: It should be possible for the body of a subprogram to assume
    that checks are on for all calls to that subprogram. The rules for
    preconditions (and invariants and postconditions as well) already
    determine whether checks are made by the policy at the point of the
    subprogram declaration, which gives this property. It's weird that
    predicates are different.

Reason #1B: I'm suggesting using the subtype declaration as the determining
    point, as that is similar to that for preconditions. A predicate is a
    subtype property, so having it depend on some unrelated subprogram
    declaration (as Tucker suggested) seems bizarre. And that would require
    having a different determination for predicate checks in other contexts
    (aggregate components, object initializations, etc.) Finally, using the
    subtype declaration should be enough, as a subprogram body always knows the
    location of the subtype declarations used in its parameter profiles.

Rebuttal #1Z: One thing that strikes me about this change is that a body can
    know more about the assertions that apply to the parameters than it can
    about the constraint checks that were made on those same parameters.
    Specifically, if checks are suppressed at a call-site, then random junk can
    be passed into a subprogram -- and there is nothing reasonable that the
    subprogram can do to protect itself. (Rechecking all of constraints of the
    parameters explicitly in the subprogram body is not reasonable, IMHO.) The
    erroneousness caused by such suppression bails us out formally, but that
    doesn't provide any reassurance in practice. The concern addressed above for
    predicates is very similar. However, just because we got it wrong in 1983
    doesn't mean that we have to get it wrong now. So I don't find the suppress
    parallel to be a very interesting argument.
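
A hedged sketch of the difference Proposal #1 is after (all names are
invented; the comments describe the two readings of the rule, and the two
units would be separate compilations):

```ada
pragma Assertion_Policy (Check);
package Subtypes is
   subtype Even is Integer
     with Dynamic_Predicate => Even mod 2 = 0;
end Subtypes;

pragma Assertion_Policy (Ignore);  --  the client turns checks off
with Subtypes;
procedure Client (X : Integer) is
   --  Under the draft rule, the predicate check on this
   --  initialization is governed by the policy here (Ignore), so no
   --  check is made.  Under Proposal #1 it is governed by the policy
   --  at the subtype declaration (Check), so a body taking an Even
   --  parameter could rely on the check having been performed.
   E : Subtypes.Even := X;
begin
   null;
end Client;
```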

---

Proposal #2: If the invoked subprogram and denoted subprogram of a dispatching
    call have different assertion policies, it is unspecified which is used for
    a class-wide invariant.

Reason #2A: This is fixing a bug; invariants and postconditions should act the
    same and this is the rule used for postconditions (otherwise, the compiler
    would always be required to make checks in the body in case someone might
    turn off the checks elsewhere, not something we want to require).

Reason #2B: I wrote this as applying only to class-wide invariants, as the
    specific invariants only can be determined for the actually invoked
    subprogram (so there is no point in applying them at the call site rather
    than in the body). We could generalize this to cover all invariants, but the
    freedom doesn't seem necessary (and I know "unspecified" scares some users).

Corollary #2C: We need to define the terms "class-wide invariant" and "specific
    invariant", since we keep finding ourselves talking about them and the
    parallel with preconditions (where these are defined terms) makes it even
    more likely that everyone will use them. Best to have a definition.

---

One thing that strikes me is that we seem to have less control over assertions
(including contracts) than we have over constraint checks. One would at least
expect equivalence. Specifically, a subprogram (or entire library) can declare
that it requires constraint checks on (via pragma Unsuppress). It would make
sense to have something similar for libraries to use for assertions. Thus the
next proposal.

Proposal #3: There is a boolean aspect "Always_Check_Assertions" for packages
    and subprograms. If this aspect is True, the assertions that belong to and
    are contained in the indicated unit are checked, no matter what assertion
    policy is specified.

Reason #3A: There ought to be a way for a program unit to declare that it
    requires assertion checks on for proper operation. This ought to include
    individual subprograms. I've described this as an aspect since it applies to
    a program unit (package or subprogram), but a pragma would also work. This
    aspect (along with pragma Unsuppress) would allow a library like Claw to not
    only say in the documentation (that no one ever reads) that turning off
    checks and assertions is not supported, but also to declare that fact to the
    compiler and reader (so checks are in fact not turned off unless some
    non-standard mode is used).

Rebuttal #3Z: One could do this by applying pragma Assertion_Policy(Check) as
    part of a library. There are four problems with this: first, a
    configuration pragma only applies to the units with which it actually
    appears (not necessarily children or subunits). That means it has to be
    repeated with every unit of a set. Second, since the default assertion
    policy is implementation-defined, there would be no clear difference
    between requiring assertions for correct operation and just wanting
    assertions checked (for testing purposes). Third, this requires checking
    for the entire library (as a configuration pragma cannot be applied to an
    individual subprogram, as Suppress and Unsuppress can). That might be too
    much of a requirement for some uses. Finally, as a configuration pragma,
    the pragma cannot be part of the library [package] (it has to appear
    outside). That makes it more likely that it gets separated from the unit,
    and less likely that the requirement will be noted by the reader. None of
    these are compelling by themselves, but as a set they seem to suggest
    something stronger is needed.

Reason #3C: Having a special assertion policy for this purpose (say
    "Always_Check") doesn't seem sufficient because it still has the issues
    caused by the pragma being a configuration pragma (as noted in the
    Rebuttal).
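
To make Proposal #3 concrete, here is a hedged sketch using the proposed
aspect name (CAUTION: Always_Check_Assertions is hypothetical, not part of
the standard; the package and subprogram names are invented):

```ada
package Claw_Like
  with Always_Check_Assertions => True   --  proposed aspect (hypothetical)
is
   pragma Unsuppress (All_Checks);       --  constraint checks stay on too

   --  The intent: assertions belonging to this unit are checked no
   --  matter what Assertion_Policy is in effect at the call site,
   --  mirroring what Unsuppress gives for language-defined checks.
   procedure Operate (Handle : Integer)
     with Pre => Handle /= 0;
end Claw_Like;
```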

---

Proposal #4: Provide an additional assertion policy "Check_External_Only". In
    this policy, preconditions and predicates are checked, and the others
    (postconditions, type invariants, and pragma Asserts) are ignored.

Reason #4A: The idea here is that once unit testing is completed for a package,
    assertions that are (mostly) about checking the correctness of a body are
    not (necessarily) needed. OTOH, assertions that are (mostly) about checking
    the correctness of calls are needed so long as new calls are being written
    and tested (which is usually so long as the package is in use). So it makes
    sense to treat these separately.

Alternative #4B: I'm not sure I have the right name here, and the name is
    important in this case. My first suggestion was "Check_Pres_Only", which
    is a trick, as "Pres" here means PREconditions and PREdicates. But that
    could be confused with the Pre aspect only, which is not what we want.

Discussion #4C: In the note above, Bob suggests that calls that are fully
    internal ought to be exempted (or something like that). This sounds like FUD
    to me - a strawman set up to prove that since we can't get this separation
    perfect, we shouldn't do it at all.  But that makes little sense to me; at
    worst, there will be extra checks that can't fail, and the user will always
    have the option of turning off all assertion checks if some are left on that
    are intolerable. The proposed rule is simple, and errs on the side of
    leaving checks on that might in fact be internal. It seems good enough
    (don't let best be the enemy of better! :-)
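
A hedged sketch of what the proposed policy would mean in practice (CAUTION:
Check_External_Only is the name proposed above, not a standard policy
identifier; the package contents are invented):

```ada
pragma Assertion_Policy (Check_External_Only);  --  hypothetical policy

package P is
   subtype Pos is Integer
     with Dynamic_Predicate => Pos > 0;   --  predicate: checked

   function Result_Valid return Boolean;

   procedure Op (X : Pos)
     with Pre  => X < 1_000,              --  precondition: checked
          Post => Result_Valid;           --  postcondition: ignored
end P;
```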

---

My intent is to put all of these on the agenda for the February meeting, unless
of course we have a consensus here that one or more of these are bad ideas.

Finally, I'll mention a couple of other ideas that I won't put forth proposals
for, as I don't think that they will get traction.

Not a proposal #5: Provide an additional assertion policy
    "Check_Contracts_Only". In this policy, pragma Asserts are ignored, and all
    of the others are checked.

The way I view pragma Assert vs. the other contracts, this makes sense. But I
seem to be in the minority on this one, and I have to agree that Proposal #4
makes more sense (except for the lame name), so I am going to support that one
and not push for this.

Not a proposal #6: Have predicate and precondition failures raise a different
    exception, Contract_Error.

This also makes sense in my world view, as the failure of such an inbound
contract is a very different kind of error than the failure of a pragma Assert
or even a postcondition in a subprogram body. It would make it much clearer
whether the cause of the problem lies within the subprogram or with the call.
But this feels like a bigger change than any of the first four proposals; we
already have a similar problem with Constraint_Error, and I don't want to go too
far here (I'd rather the easy fixes get accomplished than get bogged down in the
harder ones). If someone else wants to take the lead on this idea, I'll happily
support it.

****************************************************************

From: Brad Moore
Sent: Tuesday, December  6, 2011  12:28 AM

...
> On top of that, while I agree that vendors will give their *customers*
> what they need, that does not necessarily extend to third-party
> library vendors (which includes various open source projects as well
> as paid vendors), as library creators are not necessarily the customers of the compiler vendors.
> Library creators/vendors have a somewhat different set of problems
> than the typical customer (need for maximum portability, need to
> control as much as possible how the library is compiled, etc.)
>
> So I think it is important that the standard at least try to address
> these issues. And it seems silly to claim that it is too hard to do
> now, given that this stuff is only implemented in one compiler (that I
> know of) right now and that compiler has finer-grained control over
> these things than anything imagined as a solution in the Standard. So
> I can't imagine any significant implementation or description problems
> now -- that will get harder as we go down the road and these things get more widely implemented.

I definitely agree with Randy, Erhard, et al., that one needs finer-grained
control to do things like enable preconditions while disabling postconditions.

I'm wondering though if we haven't already provided this control in the
language.

One can use statically unevaluated expressions to accomplish this. See RM
4.9(32.1/3-33/3):

package Config is
   Expensive_Postconditions_Enabled : constant Boolean := False;
end Config;

with Config; use Config;
package Pak1 is

    procedure P
      (Data : in out Array_Type)
       with Pre  => Data'First = 0 and then Is_A_Power_Of_Two (Data'Length),
            Post => (not Expensive_Postconditions_Enabled)
                      or else Correct_Results (Data'Old, Data);

end Pak1;

This already gives programmers tons of flexibility, as I see it, since they can
define as many different combinations of statically unevaluated constant
expressions as needed. One possibly perceived advantage of this approach is that
it might encourage one to leave the assertion policy enabled and let the
programmer provide the switches that can be configured for production vs.
debugging releases of the application, rather than applying a brute-force,
all-on-or-all-off approach, or some other such coarser-grained policy, which
might make developers/management uneasy without an exhaustive analysis of all
the source.

In spite of these current possibilities, though, I think Randy's proposals below
make sense and are worth pursuing, since it can be desirable to disable/enable
checks without having to modify source code or implement some sort of
configuration-dependent/implementation-independent project-file approach.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, February 15, 2012  10:50 PM

[Responding to old editorial review comments, the quotes are from Bob Duff.]

> I don't see any problem with 4.6(51/3) and 4.6(57).  "Checks"
> are the things controlled by pragma Suppress.  We decided that
> predicates are controlled by assertion policy, so they can't be
> checks.

That's a *big* problem, because preconditions and the like are most certainly
described as "checks" (see, for example, the Dynamic Semantics section of
6.1.1). And trying to describe them as something else is very difficult (in
terms of wording). The wording of 4.6(51/3) and 4.6(57) is an example of how
confusing this is.

I would prefer to call these things "checks" (because they are), and then exempt
checks controlled by Assertion_Policy (that is, checks caused by assertions)
from Suppress. That shouldn't take many words in 11.5. (Of course, it would be
even better to allow these to be controlled by Suppress as well as
Assertion_Policy. But that seems like more change than we'd like to make now. Of
course, that would provide a solution to the AI05-0290-1 problems of assertion
control, in particular the availability of Unsuppress. But I digress...)

> (I don't entirely agree with that decision, but it's not important.)
> So
> 4.6(51/3) correctly avoids "check" for the predicate.  And
> 4.6(57) correctly says that checks raise P_E or C_E.  An AARM note
> could mention that a predicate failure could raise A_E, but isn't
> mentioned because it isn't a check.

That's what I did for now, but essentially all of the other assertions are
described as checks. It's just too complicated to invent an entire new language
and semantics just for them. If we have to change that, I think we're adding a
month of work to the Standard, because every line of dynamic semantics for
assertions will have to be totally rewritten. And it took us dozens of
iterations to get them right as it is (IF they're right).

> But now I think 4.9(34/3) needs to say:
>
>     The expression is illegal if its evaluation raises an exception.
>     For the purposes of this evaluation, the assertion policy is assumed
>     to be Check.
>
> As I said above, if "evaluation" can happen at compile time, then so
> can "raising".

I'm still dubious, but I think it is irrelevant because assertions are surely
"checks".

Thoughts?

****************************************************************

From: Bob Duff
Sent: Thursday, February 16, 2012  5:18 AM

> That's a *big* problem, ...

Please don't panic!
The only big problem is that the RM is too big.  ;-)

>...because preconditions and the like are most  certainly described as
>"checks" (see, for example, the Dynamic Semantics  section of 6.1.1).

I see.  I didn't realize that, so I wrote the predicates section thinking that
assertions are not checks.  Nobody seemed to care at the time, so let's not get
too excited.  This is what happens when we make the RM too big for one person to
read cover to cover; we have to live with it now.

>...And trying to describe them as something else is very  difficult (in
>terms of wording). The wording of 4.6(51/3) and 4.6(57) is an  example
>of how confusing this is.
>
> I would prefer to call these things "checks" (because they are), and
> then exempt checks controlled by Assertion_Policy (that is, checks
> caused by assertions) from Suppress.

OK, if you think that's easier, I'm all for it.  Certainly assertions are
checks, intuitively speaking.

>...That shouldn't take many words
> in 11.5.

Good.

>...(Of course, it would be even better to allow these to be controlled
>by Suppress as well as Assertion_Policy. But that seems like more
>change  than we'd like to make now. Of course, that would provide a
>solution to the
> AI05-0290-1 problems of assertion control, in particular the
>availability of  Unsuppress. But I digress...)

Yeah, it's a bit of a mess that we have two completely different ways of
suppressing check-like things.  But I agree with not trying to fix that now.

Just yesterday I was discussing a compiler bug with Ed Schonberg, which involved
something like "expand this node with all checks suppressed", and the bug was
that assertions were NOT being suppressed, but they needed to be.

> > (I don't entirely agree with that decision, but it's not important.)
> > So
> > 4.6(51/3) correctly avoids "check" for the predicate.  And
> > 4.6(57) correctly says that checks raise P_E or C_E.  An AARM note
> > could mention that a predicate failure could raise A_E, but isn't
> > mentioned because it isn't a check.
>
> That's what I did for now, but essentially all of the other assertions
> are described as checks. It's just too complicated to invent an entire
> new language and semantics just for them. If we have to change that, I
> think we're adding a month of work to the Standard, because every line
> of dynamic semantics for assertions will have to be totally rewritten.
> And it took us dozens of iterations to get them right as it is (IF they're
> right).

Well, I suppose they're not right, but they're close enough.

> > But now I think 4.9(34/3) needs to say:
> >
> >     The expression is illegal if its evaluation raises an exception.
> >     For the purposes of this evaluation, the assertion policy is assumed
> >     to be Check.
> >
> > As I said above, if "evaluation" can happen at compile time, then so
> > can "raising".
>
> I'm still dubious, but I think it is irrelevant because assertions are
> surely "checks".

OK.

****************************************************************

From: Tucker Taft
Sent: Thursday, February 16, 2012  8:00 AM

I believe (pretty strongly) that Suppress(All_Checks) ought to suppress
assertion checks as well.

****************************************************************

From: Bob Duff
Sent: Thursday, February 16, 2012  8:10 AM

I could be convinced of that, but:

You don't give any reasons for your pretty-strong belief.

We've discussed this, and that's not what we decided.
If we had gone that way, then why would we have invented Assertion_Policy?  It
would make much more sense to have added some new check names.

If we went that way, and the program executes "pragma Assert(False);", and
assertion checks (or all checks) are suppressed, then is it erroneous?

****************************************************************

From: Tucker Taft
Sent: Thursday, February 16, 2012  8:28 AM

> I could be convinced of that, but:
>
> You don't give any reasons for your pretty-strong belief.

There are a lot of programs that use Suppress(All_Checks) to mean "turn off all
run-time checks" and as we seem to now agree, assertion checks are "checks."

> We've discussed this, and that's not what we decided.

I must have missed or slept through the discussion on All_Checks as applied to
assertion checks.

> If we had gone that way, then why would we have invented
> Assertion_Policy?  It would make much more sense to have added some
> new check names.

Assertion policy was invented because there was more than just "on" or "off" for
assertions.  We imagined "assume true," "ignore," "check", and "check fiercely,"
etc.

> If we went that way, and the program executes "pragma Assert(False);",
> and assertion checks (or all checks) are suppressed, then is it
> erroneous?

I was in the discussion where we decided that when assertions are ignored, they
are really ignored.  It is as though they weren't there at all.

****************************************************************

From: Tucker Taft
Sent: Thursday, February 16, 2012  8:33 AM

...
> I could be convinced of that, but:
>
> You don't give any reasons for your pretty-strong belief.

The other reason is the wording in 11.5 about All_Checks:

[The following check corresponds to all situations in which any predefined
exception is raised.]

25
All_Checks

Represents the union of all checks; [suppressing All_Checks suppresses all
checks.]

25.a
Ramification: All_Checks includes both language-defined and
implementation-defined checks.

25.b/3
To be honest: {AI05-0005-1} There are additional checks defined in various
Specialized Needs Annexes that are not listed here. Nevertheless, they are
included in All_Checks and named in a Suppress pragma on implementations that
support the relevant annex. Look up "check, language-defined" in the index to
find the complete list.

---

This certainly conveys to me the message that Suppress(All_Checks) suppresses
all checks, whether or not we have a "named" check associated with them.

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 16, 2012  1:28 PM

> I believe (pretty strongly) that Suppress(All_Checks) ought to
> suppress assertion checks as well.

This from the guy who insisted up and down that suppression was the wrong model
for assertions. :-)

I actually don't know how to reconcile that with Assertion_Policy(Ignore).
Suppress(All_Checks) makes the program erroneous if checks fail;
Assertion_Policy(Ignore) doesn't.

The cool thing is that if this is true, we only need to add a user Note to 11.5:
"Assertions are checks, so they're suppressed by Suppress(All_Checks). We don't
give them a check name, since using Assertion_Policy is preferred if it is
desired to turn off assertions only." (This is too important for just an AARM
note.)

And reword a few things so that predicates are described as checks.

****************************************************************

From: Bob Duff
Sent: Thursday, February 16, 2012  1:49 PM

> The other reason is the wording in 11.5 about All_Checks:
>
> [The following check corresponds to all situations in which any
> predefined exception is raised.]
>
> Represents the union of all checks; [suppressing All_Checks suppresses
> all checks.]

Assertion_Error is not a predefined exception.  So if you want Suppress to
suppress assertions, we need new wording somewhere.

Most of the above text is @Redundant.  If you want "the union of all checks" to
make sense for assertions, then I think the assertions need check name(s).

And from your other email, I assume you're proposing to change this:

  26  If a given check has been suppressed, and the corresponding error
  situation occurs, the execution of the program is erroneous.

although that wasn't 100% clear to me.

****************************************************************

From: Tucker Taft
Sent: Thursday, February 16, 2012  1:54 PM

> I actually don't know how to reconcile that with Assertion_Policy(Ignore).
> Suppress(All_Checks) makes the program erroneous if checks fail;
> Assertion_Policy(Ignore) doesn't.

Assertion checks are user-specified, and so there is no reason for execution to
become erroneous if they are suppressed, so long as it is interpreted as meaning
the same thing as ignored.  Other kinds of checks, such as array out of bounds,
null pointer, discriminant checks, etc., clearly make the execution erroneous if
they fail and execution proceeds.

Assertion policy was created to allow for more implementation flexibility
through the use of implementation-defined policies.

****************************************************************

From: Steve Baird
Sent: Thursday, February 16, 2012  1:51 PM

>> I believe (pretty strongly) that Suppress(All_Checks) ought to
>> suppress assertion checks as well.
>
> This from the guy who insisted up and down that suppression was the
> wrong model for assertions. :-)

I think this means that we want to make it clear that suppressing an assertion
check cannot lead to erroneousness in the same way that suppressing other kinds
of checks can.

It doesn't make sense to try to define the behavior of a program execution
which, absent suppression, would have failed an array indexing check. That's
why such an execution is, quite correctly, defined to be erroneous.

Assertion checks are different - type-safety and all the other invariants that
an implementation (as opposed to a user) might depend on are not compromised if
we ignore assertion checks. If we say that a suppressed assertion check simply
might or might not be performed, this doesn't lead to definitional problems.

Presumably suppressing an assertion check (when the assertion policy is Check)
means that an implementation is allowed, but not required, to behave as though
the assertion policy in effect is Ignore (which, incidentally, includes
evaluation of the Boolean expression).

****************************************************************

From: Tucker Taft
Sent: Thursday, February 16, 2012  2:04 PM

> Assertion_Error is not a predefined exception.  So if you want
> Suppress to suppress assertions, we need new wording somewhere.
>
> Most of the above text is @Redundant.  If you want "the union of all
> checks" to make sense for assertions, then I think the assertions need
> check name(s).

I'm not sure I would agree that if something is a "check" then it automatically
needs a "check name."

> And from your other email, I assume you're proposing to change this:
>
>    26  If a given check has been suppressed, and the corresponding error
>    situation occurs, the execution of the program is erroneous.
>
> although that wasn't 100% clear to me.

Yes, as mentioned in an earlier note, suppressing an assertion check would mean
ignoring it, not presuming it was true.

I gave my reasons why I think All_Checks should cover assertion checks.  I'd be
curious how others feel.  When someone says "suppress all" I have always assumed
they really meant it. The creation of Assertion_Policy was to give more
flexibility, but I never thought it meant making "suppress all" mean "suppress
some."  Maybe I am the only person who feels this way...

****************************************************************

From: Edmond Schonberg
Sent: Thursday, February 16, 2012  2:07 PM

> Assertion policy was created to allow for more implementation
> flexibility through the use of implementation-defined policies.

Indeed, but I'm afraid it's not flexible enough because it's a configuration
pragma, while Suppress gives you per-scope control.  Could we just make it
into a regular pragma with similar semantics?

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 16, 2012  2:45 PM

...
> > Most of the above text is @Redundant.  If you want "the union of all
> > checks" to make sense for assertions, then I think the assertions
> > need check name(s).
>
> I'm not sure I would agree that if something is a "check"
> then it automatically needs a "check name."

I would hope not, since there are no check names for checks defined in Annexes,
and there is an AARM note (which someone quoted earlier) that says they're
included in All_Checks.

> > And from your other email, I assume you're proposing to change this:
> >
> >    26  If a given check has been suppressed, and the corresponding error
> >    situation occurs, the execution of the program is erroneous.
> >
> > although that wasn't 100% clear to me.
>
> Yes, as mentioned in an earlier note, suppressing an assertion check
> would mean ignoring it, not presuming it was true.

Could you suggest a wording for this paragraph that would have the right effect?

> I gave my reasons why I think All_Checks should cover assertion checks. I'd be
> curious how others feel.  When someone says "suppress all" I have always
> assumed they really meant it. The creation of Assertion_Policy was to give
> more flexibility,
> but I never thought it meant making "suppress all" mean "suppress some."
> Maybe I am the only person who feels this way...

I've always thought that Suppress was a better model than Assertion_Policy for
assertions. I agree that not making them erroneous is probably a good idea. But
I would much prefer that the implementation be allowed to check any assertions
that it wants to, even when they are suppressed. (They're supposed to be true,
after all.) Presumably, an implementation would only check those that it could
then prove to be true or are very cheap. "Ignore" does not let the
implementation have any flexibility in this area.

OTOH, making Suppress(All_Checks) apply to assertions might be a (minor)
compatibility problem. pragma Assert would be included in that, and it probably
isn't included in Ada 2005 compilers. Thus it could be turned off where it is
now on. Not a big deal, but we ought to remain aware of it.

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 16, 2012  2:51 PM

>>	Assertion policy was created to allow for more implementation
>>	flexibility through the use of implementation-defined policies.
>>
>	Indeed, but I'm afraid it's not flexible enough because it's a
>configuration pragma, while Suppress gives you a per-scope  control.
> Could we just make it into a regular pragma with similar semantics?

I forgot about that; it had come up in the write-up of AI05-0290-1 I did
yesterday.

If we gave these check names, at least some of the control issues would go away,
because Unsuppress would do what Erhard and I have been asking for (give a way
for a library to force predicate and precondition checks always).

As noted in my previous note, Suppress is better than Ignore because it allows
the compiler to make the check and then depend on it later (presumably it would
only do that if the net code size was smaller). That eliminates some of the need
for other policies.

So perhaps much of what we really want could be accomplished by mostly
abandoning Assertion_Policy, and using Suppress, modulo that it is not erroneous
to fail an unchecked assertion.

****************************************************************

From: Tucker Taft
Sent: Thursday, February 16, 2012  2:58 PM

>>> And from your other email, I assume you're proposing to change this:
>>>
>>>     26  If a given check has been suppressed, and the corresponding error
>>>     situation occurs, the execution of the program is erroneous.
>>>
>>> although that wasn't 100% clear to me.
>>
>> Yes, as mentioned in an earlier note, suppressing an assertion check
>> would mean ignoring it, not presuming it was true.
>
> Could you suggest a wording for this paragraph that would have the
> right effect?

How about:

If a given check has been suppressed, then if it is an assertion check, the
corresponding assertion is simply ignored, while if it is some other check and
the corresponding error situation occurs, the execution of the program is
erroneous.

>> I gave my reasons why I think All_Checks should cover assertion checks. I'd be
>> curious how others feel.  When someone says "suppress all" I have
>> always assumed
>> they really meant it. The creation of Assertion_Policy was to give
>> more flexibility,
>> but I never thought it meant making "suppress all" mean "suppress some."
>> Maybe I am the only person who feels this way...
>
> I've always thought that Suppress was a better model than
> Assertion_Policy for assertions. I agree that not making them
> erroneous is probably a good idea. But I would much prefer that the
> implementation be allowed to check any assertions that it wants to,
> even when they are suppressed. (They're supposed to be true, after
> all.) Presumably, an implementation would only check those that it could then prove to be true or are very cheap. "Ignore"
> does not let the implementation have any flexibility in this area.
>
> OTOH, making Suppress(All_Checks) apply to assertions might be a
> (minor) compatibility problem. pragma Assert would be included in
> that, and it probably isn't included in Ada 2005 compilers. Thus it
> could be turned off where it is now on. Not a big deal, but we ought to remain aware of it.

Pragma Assert was in plenty of Ada 95 compilers as well.
I'd be curious how it was implemented there.  Certainly in Green Hills and Aonix
compilers I can assure you that Suppress(All_Checks) turned off assertion checks
as well.

****************************************************************

From: Steve Baird
Sent: Thursday, February 16, 2012  3:10 PM

> If a given check has been suppressed, then if it is an assertion
> check, the corresponding assertion is simply ignored, while if it is
> some other check and the corresponding error situation occurs, the
> execution of the program is erroneous.

Although I agree with the general direction you are suggesting, I see two minor
problems with this wording.

    1) For Assertion_Policy Ignore, we still evaluate the Boolean.
       I don't think we want something similar but slightly different
       here.

    2) I want to preserve the longstanding rule that suppression only
       gives an implementation additional permissions - it never imposes
       a requirement on an implementation. It sounds like you are
       requiring that the assertion must be ignored, as opposed to
       allowing it to be ignored.

I'll try to come up with wording that addresses these points.

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 16, 2012  3:25 PM

>     1) For Assertion_Policy Ignore, we still evaluate the Boolean.
>        I don't think we want something similar but slightly different
>        here.

Huh? When the policy is Ignore, nothing is evaluated. Nor can anything be
assumed. It wouldn't be very useful if it evaluated something.

>     2) I want to preserve the longstanding rule that suppression only
>        gives an implementation additional permissions - it never imposes
>        a requirement on an implementation. It sounds like you are
>        requiring that the assertion must be ignored, as opposed to
>        allowing it to be ignored.

Definitely. I consider that an advantage rather than a "difference".

****************************************************************

From: Tucker Taft
Sent: Thursday, February 16, 2012  3:32 PM

> Although I agree with the general direction you are suggesting, I see
> two minor problems with this wording.
>
> 1) For Assertion_Policy Ignore, we still evaluate the Boolean.
> I don't think we want something similar but slightly different here.

I had forgotten that, and I doubt if all Ada 95 compilers follow that rule, and
it would really defeat the purpose of ignoring the assertion from a performance
point of view. Where is that specified?

> 2) I want to preserve the longstanding rule that suppression only
> gives an implementation additional permissions - it never imposes a
> requirement on an implementation. It sounds like you are requiring
> that the assertion must be ignored, as opposed to allowing it to be
> ignored.

I suppose, but then it must either check or ignore.
It can't suppress the check and then assume it is true.
One of the fundamental principles that got us to the Assertion_Policy approach
was that adding a pragma Assert never made the program *less* safe because it
was asserting something that was in fact untrue.

>
> I'll try to come up with wording that addresses these points.

All power to you.

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 16, 2012  3:53 PM

...
> > 1) For Assertion_Policy Ignore, we still evaluate the Boolean.
> > I don't think we want something similar but slightly different here.
>
> I had forgotten that, and I doubt if all Ada 95 compilers follow that
> rule, and it would really defeat the purpose of ignoring the assertion
> from a performance point of view.
> Where is that specified?

I don't think you forgot anything, I think Steve is making this up.

> > 2) I want to preserve the longstanding rule that suppression only
> > gives an implementation additional permissions - it never imposes a
> > requirement on an implementation. It sounds like you are requiring
> > that the assertion must be ignored, as opposed to allowing it to be
> > ignored.
>
> I suppose, but then it must either check or ignore.
> It can't suppress the check and then assume it is true.
> One of the fundamental principles that got us to the Assertion_Policy
> approach was that adding a pragma Assert never made the program *less*
> safe because it was asserting something that was in fact untrue.

Right, this is a general principle in Ada. But of course "check" doesn't mean
that any code will actually be executed: such a check might be optimized out.
For instance:

      procedure P (A : access Something)
          with Pre => A /= null;

Ignoring the fact that the programmer should probably have used a null
exclusion, if a call looks like:

       if Ptr /= null then
            P (Ptr);

The check would also be Ptr /= null. One would expect that normal
common-subexpression and dead code elimination would completely remove the
check. But it still can be assumed true in the body in this case.

****************************************************************

From: Steve Baird
Sent: Thursday, February 16, 2012  3:54 PM

> I'll try to come up with wording that addresses these points.

1) Add Assertion_Check to the list of defined checks (details TBD)

2) Replace "check" with "check other than Assertion_Check" in
    the erroneous execution section of 11.5.

3) In the Implementation Permissions section, add

    At any point within a region for which Assertion_Check
    is suppressed, an implementation is allowed (but not required)
    to define the Assertion_Policy in effect at that point to
    be Ignore.

> Huh? When the policy is Ignore, nothing is evaluated. Nor can anything
> be assumed.

Oops. My mistake.
The RM does contain

    If the assertion policy is Ignore at the point of a pragma Assert,
    ...the elaboration of the pragma consists of evaluating the boolean
    expression ...

but the elided text is significant. My bad.

Nonetheless, I still think that defining the effect of suppression in terms of
the Ignore policy is a good idea.
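
[For illustration, a sketch of how the proposal would read in source;
Assertion_Check is the name proposed in this thread, not part of the current
standard - Editor.]

```ada
procedure Hot_Path (X : Integer) is
   pragma Suppress (Assertion_Check);  --  proposed check name (hypothetical)
begin
   --  Under the proposed Implementation Permission, the implementation may
   --  treat the assertion policy here as Ignore: the assertion below need
   --  not be evaluated, but a False assertion never makes execution
   --  erroneous (unlike a suppressed range or discriminant check).
   pragma Assert (X > 0);
   null;  --  real work would go here
end Hot_Path;
```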

****************************************************************

From: Bob Duff
Sent: Thursday, February 16, 2012  5:29 PM

> Assertion checks are user-specified, and so there is no reason for
> execution to become erroneous if they are suppressed,...

Not sure what "no reason" means.  Yeah, there is no reason why we MUST define a
suppressed False assertion to be erroneous. But there is a reason that there
ought to be a mode where it's erroneous: efficiency.  Currently, that's done via
an impl-def policy.  There are lots of cases where efficiency can be improved by
assuming (and not checking) that assertions are True.

>...so long as it is interpreted as meaning  the same thing as ignored.

That part is circular reasoning: yes, of course if we define it like "Ignore"
then it's not erroneous.

>...Other kinds of checks,
> such as array out of bounds, null pointer, discriminant  checks, etc.,
>clearly make the execution erroneous if  they fail and execution
>proceeds.

On the flip side, there are non-assertion checks that could sensibly have an
"ignore" option.  Range checks, for example: don't check the range, but don't
later assume it's in range. Overflow checks: don't check, but return an
implementation dependent value of the type.  Robert is fond of pointing out
that, given:

    pragma Suppress(...);
    if X = 0 then
        Put_Line(...);
    end if;
    Y := 1/X;

most programmers are surprised that the compiler can completely eliminate the if
statement.

> Assertion policy was created to allow for more implementation
> flexibility through the use of implementation-defined policies.

Right, but it's a bit of a mess.
It's not clear why more flexibility isn't desirable for non-assertion checks.

> > The cool thing is that if this is true, we only need to add a user
> > Note to
> > 11.5: "Assertions are checks, so they're suppressed by Suppress(All_Checks).
> > We don't give them a check name, since using Assertion_Policy is
> > preferred if it is desired to turn off assertions only." (This is
> > too important for just an AARM note.)
> >
> > And reword a few things so that predicates are described as checks.

Right, if assertions are checks, we really should be using the standard wording
for checks: "a check is made...".  Currently we say "If the policy is Check,
then ... Assertion_Error is raised", which is weird, because we don't say "if
Divide_Check is not suppressed, a check is made that ... is nonzero".

It would simplify if the assertion-check wording assumed that the policy is
Check.  Then when we define Assertion_Policy, we say, "Wording elsewhere assumes
... Check.  If the policy is Ignore, then instead ...".

I fear we don't have the time to do all this rewording.
Oh, well, it's not the end of the world if we're inconsistent.

> I'm not sure I would agree that if something is a "check" then it
> automatically needs a "check name."

I think the intent was that all checks have names.  But the ones in the annexes
are only named via a "To be honest".  That's a cheat, of course.

> Yes, as mentioned in an earlier note, suppressing an assertion check
> would mean ignoring it, not presuming it was true.

I'm a little uncomfortable with the idea that Suppress wouldn't mean
"erroneous".

I'm a little uncomfortable that:

    subtype S is Natural range 0..10;

has confusingly different semantics than:

    subtype S is Natural with
        Static_Predicate => S <= 10;

> I gave my reasons why I think All_Checks should cover assertion
> checks.  I'd be curious how others feel.

I have mixed feelings.

> ...When someone
> says "suppress all" I have always assumed they really meant it.

Yeah, but to me, "really mean it" means "cross my heart and hope to die, you may
strike me with erroneous lightning if I'm wrong".  In other words, Suppress is
the extreme, "prefer efficiency over safety".

> The creation of Assertion_Policy was to give more flexibility, but I
> never thought it meant making "suppress all" mean "suppress some."
> Maybe I am the only person who feels this way...

I see your point.  Mixed feelings.

> > 1) For Assertion_Policy Ignore, we still evaluate the Boolean.
> > I don't think we want something similar but slightly different here.
>
> I had forgotten that, ...

Please re-forget that.  ;-)

> One of the fundamental principles that got us to the Assertion_Policy
> approach was that adding a pragma Assert never made the program *less*
> safe because it was asserting
  ^^^^^
> something that was in fact untrue.

No, "never" is wrong.  The principle holds for Check and Ignore policies, but
implementations can have a policy where the principle is violated -- and such a
policy has some advantage.

> 1) Add Assertion_Check to the list of defined checks (details TBD)

I'd prefer to split out Precondition_Check, Postcondition_Check,
Predicate_Check, Invariant_Check, Assert_Check (Pragma_Assert_Check?).
Assertion_Check could be the union of these. Predicate_Check could be the union
of Static_Predicate_Check and Dynamic_Predicate_Check.
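
[In source form the proposed hierarchy would nest as below; none of these
check names is part of the current standard - Editor.]

```ada
--  All names below are proposed in this thread only (hypothetical):
pragma Suppress (Dynamic_Predicate_Check);  --  one specific kind
pragma Suppress (Predicate_Check);   --  union of Static_Predicate_Check
                                     --  and Dynamic_Predicate_Check
pragma Suppress (Assertion_Check);   --  union of Precondition_Check,
                                     --  Postcondition_Check, Predicate_Check,
                                     --  Invariant_Check, and Assert_Check
```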

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 16, 2012  6:17 PM

...
> Steve Baird wrote:
> > 1) Add Assertion_Check to the list of defined checks (details TBD)
>
> I'd prefer to split out Precondition_Check, Postcondition_Check,
> Predicate_Check, Invariant_Check, Assert_Check (Pragma_Assert_Check?).
> Assertion_Check could be the union of these.
> Predicate_Check could be the union of Static_Predicate_Check and
> Dynamic_Predicate_Check.

And something like Before_Call_Assertion_Check is the union of Predicate_Check
and Precondition_Check (I don't have the perfect name); and
After_Call_Assertion_Check is the union of Postcondition_Check and
Invariant_Check.

In any case, this is definitely getting into AI05-0290-1 territory ("Improved
control over assertions"), and there is not enough time to come to any
conclusions before the agenda is finalized (that's tomorrow), so I think I'll
probably just add all of this to that AI and we'll have to hash it out at the
meeting.

****************************************************************

From: Bob Duff
Sent: Thursday, February 16, 2012  6:33 PM

> I'll probably just add all of this to that AI and we'll have to hash
> it out at the meeting.

OK.  Or, we can hash it out between 2013 and 2020.

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 16, 2012  6:59 PM

> OK.  Or, we can hash it out between 2013 and 2020.

I don't think that works, at least for Suppress(All_Checks), because changing
that later would be a massive incompatibility. (It already might be an
incompatibility, but not with Tucker's compilers.)

Similarly, we've had several strong comments that we need mechanisms for
3rd-party packages. That shouldn't be ignored.

Once we've dealt with those two, it seems inconceivable that we couldn't agree
on the rest (which seems easy to me). In any case, I hope we don't spend the
whole meeting on things that no one will ever notice (this is definitely *not*
in that category).

****************************************************************

From: Erhard Ploedereder
Sent: Friday, February 17, 2012  3:49 AM

>> I believe (pretty strongly) that Suppress(All_Checks) ought to
>> suppress assertion checks as well.

> The cool thing is that if this is true, we only need to add a user
> Note to
> 11.5: "Assertions are checks, so they're suppressed by Suppress(All_Checks).
> We don't give them a check name, since using Assertion_Policy is
> preferred if it is desired to turn off assertions only." (This is too
> important for just an AARM note.)

And the semantics of
 pragma Unsuppress(All_Checks);
 pragma Assertion_Policy(Ignore);

is what? Obviously there needs to be a rule to resolve the apparent
contradiction.

Incidentally:
The point that "the old checks" prevent erroneousness within the framework of
language semantics while assertion checks are unrelated to erroneousness but
rather deal with application semantics is a very good one. We ought to keep that
in mind when deciding on assertion control.

****************************************************************

From: Jeff Cousins
Sent: Friday, February 17, 2012  4:12 AM

John's book says "pragma Suppress (All_Checks); which does the obvious thing".

So it's not so obvious.

What would people naturally reply if asked what it covers, without thinking
too much? As the Assertion_Policy is called "Check" and not something like
"Verify", my top-of-my-head answer would be that assertions are checks.

****************************************************************

From: Erhard Ploedereder
Sent: Friday, February 17, 2012  4:29 AM

>> OK.  Or, we can hash it out between 2013 and 2020.
> I don't think that works, at least for Suppress(All_Checks), because
> changing that later would be a massive incompatibility.

I agree with Randy. This is way too important not to resolve now. And I want
to expand the future incompatibility argument to assertion control in general.

****************************************************************

From: Erhard Ploedereder
Sent: Friday, February 17, 2012  5:06 AM

If pragma Assertion_Policy(Ignore) "guarantees" that the assertion is not
evaluated, then there is no check to be talked about, is there?
Consequently
 pragma Unsuppress(All_Checks);
 pragma Assertion_Policy(Ignore);
would imply that there is nothing there to be unsuppressed, hence Unsuppress
would not be an answer to the 3rd-party-software question.

In the end, I propose to make the Assertion Control pragmas semantically
analogous to the Suppress/Unsuppress pragmas (including the scoping), but
controlling only the assertion world. The syntactic differences are there only
to separate assertion checks from runtime checks that prevent erroneousness.
(Maybe it would be a good idea to have the assertion check names as the second
argument to assertion control pragmas only, with "All_Checks" as a special
comprehensive choice on the Suppress side.)
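
[A hypothetical sketch of such a scheme; neither the two-argument form of
Assertion_Policy nor the assertion check names are current Ada - Editor.]

```ada
--  Hypothetical: assertion check names appear only in the assertion
--  control pragmas, which scope like Suppress/Unsuppress:
pragma Assertion_Policy (Ignore, Postcondition_Check);  --  hypothetical form
pragma Assertion_Policy (Check,  Precondition_Check);   --  hypothetical form
pragma Suppress (All_Checks);  --  under this proposal, covers only the
                               --  erroneousness-preventing runtime checks
```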

****************************************************************

From: John Barnes
Sent: Friday, February 17, 2012  1:26 PM

> John's book says "pragma Suppress (All_Checks); which does the obvious thing".

> So it's not so obvious.

It was obvious when I wrote it!


****************************************************************

From: Tucker Taft
Sent: Saturday, February 25, 2012  10:43 PM

Here is a new version from Erhard and Tuck of an AI on pragma Assertion_Policy.
[This is version /04 of the AI - Editor.]

****************************************************************

From: Randy Brukardt
Sent: Saturday, February 25, 2012  11:08 PM

Comments:

(1) Shouldn't "assertion_kind" be "assertion_aspect_mark"? That is, why use words to
repeat here what is specified in 2.8 and 13.1.1? (Still will need the list of them,
of course.)

(2) Then the wording would be about "assertion aspects" rather than "assertion kinds".

****************************************************************

From: Erhard Ploedereder
Sent: Sunday, February 26, 2012  11:32 AM

Two more places to fix (fairly easily):
4.6(51/3)
6.1.1(19/3)

Apply the boilerplate...
If required by the <respective> assertion policies in effect at <  > ,

---------------

(I checked for all occurrences of assertion policy in the RM and AARM)

4.9(34/3) is another, but it is OK as is.

****************************************************************

From: Erhard Ploedereder
Sent: Sunday, February 26, 2012  11:48 AM

Discovery of 19/3 led to a simplification of 31/3 and 35/3 :

Modify 6.1.1(19/3) to read:

If required by the Pre, Pre'Class, Post, or Post'Class assertion policies (see
11.4.2) in effect at the point of a corresponding aspect specification
applicable to a given subprogram or entry, then the respective preconditions and
postconditions are considered to be enabled for that subprogram or entry.

Modify 6.1.1(31/3) to:

Upon a call of the subprogram or entry, after evaluating any actual parameters,
checks for enabled preconditions are performed as follows:

Modify 6.1.1(35/3), first sentence, to:

Upon successful return from a call of the subprogram or entry, prior to copying
back any by-copy in out or out parameters, the check of enabled postconditions
is performed.

****************************************************************

From: Robert Dewar
Sent: Sunday, February 26, 2012  3:48 PM

Query: for run time checks, the implementation can assume that no check is
violated if checks are suppressed.

This is certainly not true for ignoring assertions; a compiler should ignore
assertions if they are off. (GNAT, at least in one case, uses suppressed
assertions to sharpen warning messages; the case is an assert of

   pragma Assert (S'First = 1);

which suppresses warnings about S'First possibly not being 1 even if assertions
are suppressed. But warnings are insignificant semantically, so that's OK.)

What about preconditions and postconditions that are suppressed, are they also
required to be totally ignored? Same question with predicates. If predicates are
suppressed, I assume they still operate as expected in controlling loops etc??

Sorry if this is confused!

****************************************************************

From: Erhard Ploedereder
Sent: Monday, February 27, 2012  1:46 AM

...
> What about preconditions and postconditions that are suppressed, are
> they also required to be totally ignored? Same question with
> predicates. If predicates are suppressed, I assume they still operate
> as expected in controlling loops etc??
>
> Sorry if this is confused!

Present thinking is that assertions, pre- and postconditions and type invariants
are ignored in the above sense. Compiler is not allowed to assume truth. I think
that is a pretty solid position.

On subtype predicates the jury is still out. It was written up as "ignored" in
the above sense, but then we discovered some issues; that is the one issue
remaining on the assertion control.

****************************************************************

From: Robert Dewar
Sent: Monday, February 27, 2012  7:20 AM

> Present thinking is that assertions, pre- and postconditions and type
> invariants are ignored in the above sense. Compiler is not allowed to
> assume truth. I think that is a pretty solid position.

I think that makes sense. I am also thinking that perhaps in GNAT we will
implement another setting, which says, assume these are True, but don't generate
run-time code. That's clearly appropriate for things you have proved true.

****************************************************************

From: Tucker Taft
Sent: Monday, February 27, 2012  10:13 AM

> What about preconditions and postconditions that are suppressed, are
> they also required to be totally ignored? Same question with
> predicates. If predicates are suppressed, I assume they still operate
> as expected in controlling loops etc??

Pragma Suppress has no effect on pre/postconditions, except for pragma
Suppress(All_Checks), which *allows* the implementation to interpret it as
Assertion_Policy(Ignore). We intentionally do not want erroneousness to be
associated with assertion-ish things like pre/postconditions, etc. The exact
details of that are part of this AI, and for subtype predicates, are pretty
subtle.  We decided we didn't like the solution currently proposed by this AI
for subtype predicates, but we have agreed on a different approach which still
avoids erroneousness.  I will be writing that up in the next couple of days.

> Sorry if this is confused!

This is a confusing area.  We spent several hours on this topic, and visited
many different places in the design "space."  Personally, I feel pretty good
about where we "landed," but I haven't written it up yet, so you will have to
hang in there for a few more days before you will see the full story, at least
with respect to subtype predicates.  But we all felt it was quite important that
adding assertion-like things to a program and then specifying
Assertion_Policy(Ignore) should *not* introduce erroneousness, even if your
assertion-like things are incorrect.

****************************************************************

From: Steve Baird
Sent: Monday, February 27, 2012  10:42 PM

How does the design you have in mind handle the following example?

     declare
       pragma Assertion_Policy (Ignore);

       subtype Non_Zero is Integer with Static_Predicate
          => Non_Zero /= 0;

       type Rec (D : Non_Zero) is
         record
            case D is
               when Integer'First .. -1 => ...;
               when 1 .. Integer'Last => ....;
            end case;
         end record;

       Zero : Integer := Ident_Int (0);

       X : Rec (D => Zero);

I'm wondering about the general issue of how the Ignore assertion policy
interacts with the coverage rules for case-ish constructs (case statements, case
expressions, variant parts) when the nominal subtype is a static subtype with a
Static_Predicate.

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  6:42 AM

> I'm wondering about the general issue of how the Ignore assertion
> policy interacts with the coverage rules for case-ish constructs (case
> statements, case expressions, variant parts) when the nominal subtype
> is a static subtype with a Static_Predicate.

Ignore should only be about removing checks, not altering any of the other
semantics of static predicates, it would not be acceptable at all for ignore to
make a case statement illegal!

****************************************************************

From: Tucker Taft
Sent: Tuesday, February 28, 2012  9:03 AM

>> I'm wondering about the general issue of how the Ignore assertion
>> policy interacts with the coverage rules for case-ish constructs
>> (case statements, case expressions, variant parts) when the nominal
>> subtype is a static subtype with a Static_Predicate.
>
> Ignore should only be about removing checks, not altering any of the
> other semantics of static predicates, it would not be acceptable at
> all for ignore to make a case statement illegal!

Correct.  The intent is "Ignore" means "ignore for the purpose of range checks."
Any non-checking semantics remain, such as membership, full coverage, etc.  Case
statements luckily include an option to raise Constraint_Error at run-time if
the value is not covered for some reason (RM 5.4(13)).  The same is true for
case expressions.  Variant records don't really have such a fall back, but we
will have to be clear that even if Ignore applies where the discriminant subtype
is specified, the subtype's predicate still determines whether or not a
particular discriminant-dependent component exists.  It is as though there were
a "when others => null;" alternative in the variant record, where the "when
others" covers values that don't satisfy the predicate.

I suppose one way to think of it is that values that don't satisfy the predicate
are like values outside of the base range of the subtype.  Some objects of the
subtype can have them, and some objects can't, but the rules of the language
should be set up so that values that don't satisfy the predicate, analogous to
values outside the base range, don't cause erroneousness.
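
As an illustrative sketch (this is not proposed RM wording), it is as if
Steve's Rec had been declared with an explicit null alternative:

```ada
subtype Non_Zero is Integer with Static_Predicate => Non_Zero /= 0;

type Rec (D : Non_Zero) is
   record
      case D is
         when Integer'First .. -1 => null;  --  components for negative D
         when 1 .. Integer'Last   => null;  --  components for positive D
         --  Implicit alternative: covers values (here, 0) that do not
         --  satisfy the predicate, so an object with D = 0 simply has
         --  no discriminant-dependent components.
         when others              => null;
      end case;
   end record;
```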

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  9:06 AM

...
> I suppose one way to think of it is that values that don't satisfy the
> predicate are like values outside of the base range of the subtype.
> Some objects of the subtype can have them, and some objects can't, but
> the rules of the language should be set up so that values that don't
> satisfy the predicate, analogous to values outside the base range,
> don't cause erroneousness.

That seems exactly right, and for a case statement, it means that just as you do
a range check even if checks are suppressed, you will do a predicate check even
if checks are suppressed.

Interestingly, gnat has a switch -gnatV that forces extra validity checks around
the place. I think these should also trigger extra predicate checks for
predicated subtypes.

****************************************************************

From: Jeff Cousins
Sent: Tuesday, February 28, 2012  9:42 AM

Have we got a definite answer yet as to whether Steve's example

       subtype Non_Zero is Integer with Static_Predicate
          => Non_Zero /= 0;

       type Rec (D : Non_Zero) is
         record
            case D is
               when Integer'First .. -1 => ...;
               when 1 .. Integer'Last => ....;
            end case;
         end record;

(i.e. without coverage of the Zero case) is always legal, always illegal, or
dependent on the Assertion Policy (which is implementation defined if not
specified)?

I could live with either of the first two as long as it's spelt out which.  I
think that before the meeting we would have been evenly split about which we
were expecting.  It seems to have been a mistake to have sold predicates as
being similar to constraints.

****************************************************************

From: Steve Baird
Sent: Tuesday, February 28, 2012  10:02 AM

...
>> I'm wondering about the general issue of how the Ignore assertion
>> policy interacts with the coverage rules for case-ish constructs
>> (case statements, case expressions, variant parts) when the nominal
>> subtype is a static subtype with a Static_Predicate.

> Ignore should only be about removing checks, not altering any of the
> other semantics of static predicates, it would not be acceptable at
> all for ignore to make a case statement illegal!

Agreed.

My question was about dynamic semantics only.

...
> Variant records don't really have
> such a fall back, but we will have to be clear that even if Ignore
> applies where the discriminant subtype is specified, the subtype's
> predicate still determines whether or not a particular
> discriminant-dependent component exists.  It is as though there were a
> "when others => null;" alternative in the variant record, where the
> "when others" covers values that don't satisfy the predicate.

I think this is a bad idea. I think (as in the case of case statements)
Constraint_Error should be raised. Let's detect a problem like this ASAP instead
of sweeping it under the rug and then seeing mystifying behavior later.

****************************************************************

From: Tucker Taft
Sent: Tuesday, February 28, 2012  10:19 AM

> Have we got a definite answer yet as to whether Steve's example
...
> (i.e. without coverage of the Zero case) is always legal, always illegal, or
> dependent on the Assertion Policy (which is implementation defined if not
> specified)?

This is always legal.  Legality should not be affected by the Assertion_Policy.
I will be writing this up shortly.  For case statements, if the case
expression's value doesn't satisfy the static predicate, then Constraint_Error
would be raised, per the existing RM paragraph 5.4(13).  For variant records, I
think we have to assume an implicit "when others => null" meaning that if the
discriminant does not satisfy the predicate, the implicit "null" variant
alternative is chosen.
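
As a sketch of the case-statement side (the enclosing procedure and the I/O
are just for illustration):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Demo is
   pragma Assertion_Policy (Ignore);

   subtype Non_Zero is Integer with Static_Predicate => Non_Zero /= 0;

   --  The predicate check on this initialization is ignored, so V = 0.
   V : Non_Zero := Integer'Value ("0");
begin
   case V is
      --  These choices fully cover Non_Zero; no "when others" is required.
      when Integer'First .. -1 => Put_Line ("negative");
      when 1 .. Integer'Last   => Put_Line ("positive");
   end case;
   --  By 5.4(13), the case statement raises Constraint_Error for V = 0,
   --  rather than behaving erroneously.
end Demo;
```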

> I could live with either of the first two as long as it's spelt out which.  I
> think that before the meeting we would have been evenly split about which we
> were expecting.  It seems to have been a mistake to have sold predicates as
> being similar to constraints.

Stay tuned for my upcoming write-up.

****************************************************************

From: Bob Duff
Sent: Sunday, February 26, 2012  5:23 PM

> This is certainly not true for ignoring assertions, ...
...
> What about preconditions and postconditions that are suppressed,

Note on terminology: "assertion" is not synonymous with "pragma Assert",
according to the RM.  The term "assertion" includes pragma Assert, and also
predicates, pre, post, and invariants.  I believe this agrees with the usage in
Eiffel.

I think the intent is to treat all assertions, including pre/post, the same way
with respect to your question.  But it's still somewhat up in the air, and there
was some talk during the meeting of treating some kinds of assertions
differently.

****************************************************************

From: Robert Dewar
Sent: Sunday, February 26, 2012  7:01 PM

> Note on terminology: "assertion" is not synonymous with "pragma
> Assert",

Great, another case where the RM invents a term that NO Ada programmer will use.
I will bet you anything that to virtually 100% of Ada programmers, "assertion"
will mean whatever it is that pragma Assert does.

> according to the RM.  The term "assertion" includes pragma Assert, and
> also predicates, pre, post, and invariants.

Why invent inherently confusing terminology?

Come up with a new word. "Contract", for example, or some such.

****************************************************************

From: Bob Duff
Sent: Tuesday, February 28, 2012  7:02 AM

> > according to the RM.  The term "assertion" includes pragma Assert,
> > and also predicates, pre, post, and invariants.
>
> Why invent inherently confusing terminology

ARG didn't invent this terminology.  It's pretty standard to include
preconditions and the like in "assertion".

> Come up with a new word. Contracts for example or somesuch.

Ada has too many "new" words as it is.  For example, using "access"
for what everybody else calls "pointer" or "reference" just causes confusion.
We should go along with industry-wide terms, to the extent possible.

"Contract" is an informal term, which to me means a whole set of assertions,
such as all the assertions related to a given abstraction.  Using "contract" to
refer to a particular assertion would confuse things, I think.

****************************************************************

From: Bob Duff
Sent: Tuesday, February 28, 2012  7:18 AM

> ARG didn't invent this terminology.  It's pretty standard to include
> preconditions and the like in "assertion".

By the way, you snipped my reference to Eiffel, which agrees that "assertion"
includes preconditions and the like. Eiffel should be respected when talking
about contracts.

If you still don't believe me, google "assertion programming language".
(Note that the 10th hit is an AdaCore web page that correctly includes pre,
etc.)

In languages that only have assert statements (C and pre-2012 Ada[1], for
example), that's all you have.  But in languages that have preconditions and
invariants[2], those are included.

[1] Yeah, I know it's a macro in C, and a pragma in Ada, but conceptually, it's
    a statement.

[2] Called "predicates" in Ada.  ;-)

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  7:25 AM

>> Why invent inherently confusing terminology
>
> ARG didn't invent this terminology.  It's pretty standard to include
> preconditions and the like in "assertion".

I mean it's confusing in the Ada context. It is always a mistake to push
terminology that no one will use (e.g.

    a generic package is not a package
    a package spec is not what you think it is
    a type should be called a subtype
    a class type has to be called a tagged type

    and yes, I would add access/pointer to that list.
    etc.)

I am willing to bet that for nearly all Ada programmers "assertion" will continue
to refer to things done by pragma Assert, as it always has. And I often see ARG
members using this terminology, e.g. something like

"Well a precondition in a subprogram body is really just an assertion."

which is official nonsense, but everyone understands it!

>> Come up with a new word. Contracts for example or somesuch.
>
> Ada has too many "new" words as it is.  For example, using "access"
> for what everybody else calls "pointer" or "reference" just causes
> confusion.  We should go along with industry-wide terms, to the extent
> possible.
>
> "Contract" is an informal term, which to me means a whole set of
> assertions, such as all the assertions related to a given abstraction.
> Using "contract" to refer to a particular assertion would confuse
> things, I think.

Contract items?

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  7:30 AM

...
> By the way, you snipped my reference to Eiffel, which agrees that
> "assertion" includes preconditions and the like.
> Eiffel should be respected when talking about contracts.

Does Eiffel have assertions in the Ada 2005 sense?

The trouble here is that it is so natural to assume that assertion refers to
pragma assert because

a) it has always done so in Ada
b) the name just encourages this identification

That's why I think Ada programmers will continue to use the term assertion to
mean that which is specified with pragma Assert, and it is futile for the ARG to
try to push terminology that won't stick.

More Ada 2012 programmers will know previous versions of Ada than Eiffel!!

****************************************************************

From: Bob Duff
Sent: Tuesday, February 28, 2012  7:45 AM

> does Eiffel have assertions in the Ada 2005 sense?

Yes, but I think it's called "check", so the potential confusion you're worried
about ("assertion" /= "assert") doesn't arise.

By the way, I don't deny that there's a potential confusion.
I usually get around it by saying something like "assertions, such as
preconditions and the like...".

In general, programming language terminology is a mess!
"Procedure", "function", "method", "subroutine", "routine", "subprogram" -- all
mean basically the same thing. It really damages communication among people with
different backgrounds.

Note that the RM often uses the term "remote procedure call" and its
abbreviation "RPC", even though it could be calling a function.  It doesn't make
perfect sense, but RPC is so common in the non-Ada world, we decided to use it.

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  7:53 AM

>> does Eiffel have assertions in the Ada 2005 sense?
>
> Yes, but I think it's called "check", so the potential confusion
> you're worried about ("assertion" /= "assert") doesn't arise.

Right, that's what I remembered (no assertions as such).

> By the way, I don't deny that there's a potential confusion.
> I usually get around it by saying something like "assertions, such as
> preconditions and the like...".

Yes, but you can't expect people to do that all the time.
I suppose we should get in the habit of using "asserts"
to refer to the set of things done with pragma Assert,
as distinct from assertions in general (like preconditions).

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, February 28, 2012  8:04 AM

>> "Contract" is an informal term, which to me means a whole set of
>> assertions, such as all the assertions related to a given
>> abstraction.  Using "contract" to refer to a particular assertion
>> would confuse things, I think.
>
> Contract items?

I'd rather avoid "contract", because we often refer to generic formals as a
contract, we talk about the contract model for assume the best/assume the worst,
etc.

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  8:08 AM

yes, good point

****************************************************************

From: Erhard Ploedereder
Sent: Tuesday, February 28, 2012  10:01 AM

> That's why I think Ada programmers will continue to use the term
> assertion to mean that which is specified with pragma Assert, and it
> is futile for the ARG to try to push terminology that won't stick.

So what is your proposal? We need some word or words to describe the group of
pragma Assert, precondition, postcondition, type invariant, and maybe subtype
predicate. Repeating them all every time something is said about all of them is
editorial suicide. "Contract" isn't it, since Assert(ions) have no contract
idea.

(By the way, I don't believe that Ada programmers will be so literal in their
interpretations that they believe that only pragma Assert can create
assertions.)

****************************************************************

From: Tucker Taft
Sent: Tuesday, February 28, 2012  10:14 AM

>> That's why I think Ada programmers will continue to use the term
>> assertion to mean that which is specified with pragma Assert, and it
>> is futile for the ARG to try to push terminology that won't stick.
>
> So what is your proposal? We need some word or words to describe the
> group of pragma Assert, precondition, postcondition, type invariant,
> and maybe subtype predicate. Repeating them all every time something
> is said about all of them is editorial suicide.
> "Contract" isn't it, since Assert(ions) have no contract idea.

At this point I think we need to stick with "assertion expressions"
given that we are using "Assertion_Policy" to control them all, and they all
raise Assertion_Error on failure.

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, February 28, 2012  10:19 AM

> So what is your proposal?

Assumptions?

Checks? (I think it's too vague)

****************************************************************

From: Ben Brosgol
Sent: Tuesday, February 28, 2012  10:37 AM

...
> At this point I think we need to stick with "assertion expressions"
> given that we are using "Assertion_Policy" to control them all, and
> they all raise Assertion_Error on failure.

Maybe "assertion forms"?

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  2:38 PM

> (By the way, I don't believe that Ada programmers will be so literal
> in their interpretations that they believe that only pragma Assert can
> create assertions.)

The reason I think this is that this has been the case for years.

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, February 29, 2012  7:21 AM

> Assumptions?

Has exactly the opposite meaning in verification land, i.e. something that you
can always assume on faith.

> Checks? (I think it's too vague)

Already taken by the run-time checks and, as an axiom, not to be confused with
"the new stuff".

and from Ben B.:
> assertion forms

Does not solve Robert's concern that the term is completely occupied by pragma
Assert.

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  2:37 PM

> I could live with either of the first two as long as it's spelt out
> which.  I think that before the meeting we would have been evenly
> split about which we were expecting.  It seems to have been a mistake
> to have sold predicates as being similar to constraints.

To me, if predicates are not similar to constraints they are broken.
The following two should be very similar in effect

    subtype R is integer range 1 .. 10;

    subtype R is integer with
      Static_Predicate => R in 1 .. 10;

****************************************************************

From: Ed Schonberg
Sent: Tuesday, February 28, 2012  2:45 PM

At the meeting there was an agreement in that direction (not unanimous of
course, nothing is).  This is why Assertion_Policy has to have a different
effect on pre/postconditions than on predicate checks. Creating values that
violate the predicate is as bad as creating out-of-range values.

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  2:53 PM

The reason I think those two cases have to be very similar (I never like to use
the word equivalent :-)) is that I think in practice any Ada programmer would
expect them to be similar, they sure *look* similar :-)

****************************************************************

From: Tucker Taft
Sent: Tuesday, February 28, 2012  3:01 PM

They are very similar.  The one significant difference is that with the first
one, if you suppress the range check, your program can go erroneous if you
violate the constraint.  With the second one, if you set
Assertion_Policy(Ignore) then some checks will be eliminated, but there will be
no assumption that the checks would have passed if they had been performed.
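
To annotate Robert's two subtypes (renamed R1 and R2 here for contrast; the
object names are hypothetical):

```ada
subtype R1 is Integer range 1 .. 10;                           --  constraint
subtype R2 is Integer with Static_Predicate => R2 in 1 .. 10;  --  predicate

X : R1;
Y : R2;
--  With pragma Suppress (Range_Check), "X := 11;" makes execution
--  erroneous: the compiler may assume X in 1 .. 10 afterwards.
--  With pragma Assertion_Policy (Ignore), "Y := 11;" merely skips the
--  predicate check: the compiler may NOT assume Y in 1 .. 10, so later
--  uses of Y remain well-defined (though the predicate is violated).
```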

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  3:24 PM

> They are very similar.  The one significant difference is that the
> first one if you suppress the range check, your program can go
> erroneous if you violate the constraint.

Not easily though, since generally e.g. uninitialized values don't drive you
into erroneousness.

****************************************************************

From: Bob Duff
Sent: Tuesday, February 28, 2012  3:39 PM

> They are very similar.

Agreed.  If you don't turn off the checks, the only difference is which
exception gets raised.  That counts as "very similar" in my book, especially if
you're not going to handle these exceptions (which I expect is the usual case).

If you DO turn off the checks, then you should have made sure by other means
that the checks won't fail, so they're still quite similar.  Of course, you
might make a mistake, and the consequences of that mistake might be different,
but the rule for programmers is simple: Don't do that. That is, don't violate
constraints, and don't violate predicates, and especially don't turn the checks
off without good reason.

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 28, 2012  3:43 PM

> and especially don't turn the checks off without good reason.

Exactly, and deliberately wanting to violate the check without an exception is
NOT a good reason :-)

****************************************************************

From: Tucker Taft
Sent: Tuesday, February 28, 2012  5:07 PM

Here is the "final" version, incorporating the newer ideas about how
Assertion_Policy should apply to subtype predicate checks, and using the term
"enabled" more uniformly.
[Editor's note: This is version /05.]

****************************************************************

From: Steve Baird
Sent: Tuesday, February 28, 2012  6:42 PM

> Here is the "final" version, ..

That's optimistic.

Some questions and observations:

1) In general, I like the approach for defining how
    assertion policies interact with subtype predicates.

    What about cases where a subtype name is repeated with a
    conformance check?

    For example
         pragma Assertion_Policy (Static_Predicate => Check);

         subtype Foo is Integer with Static_Predicate => Foo /= 123;

         package Pkg is
           type T (D : Foo) is private;

           pragma Assertion_Policy (Static_Predicate => Ignore);
         private
           type T (D : Foo) is ....
         end Pkg;

     For a subtype that has a specified predicate aspect
     (either static or dynamic), perhaps we want the
     assertion policy corresponding to that aspect to
     participate in subtype conformance. In the above example,
     the two uses of Foo would fail the subtype conformance
     legality check even though they denote the same subtype.

     Or perhaps we punt and just say that this case is implementation
     dependent.

     Similar issues occur for subprogram declarations (spec, stub,
     and body), and deferred constants.

2) What about conformance for 'Access attributes and related checking?

     We have an access-to-foo type with one assertion policy and an
     aliased object of type foo declared with a different assertion
     policy in effect. Do we want to allow the Access attribute of
     the object to yield a value of the access type?

      Or two access types which would otherwise be convertible but they
      differ with respect to applicable assertion policy. Do we want
      to allow conversion between such access types?

     Maybe allowing these things is ok, but it makes it very hard
     to write assertions that you can count on.

     The same problems occur for access-to-subprogram types.

     Would we want this stricter conformance rule only if the subtype in
     question is affected by the assertion policy? Should two uses
of Standard.Integer fail a conformance check because of a difference
     in assertion policy? What about the case where we don't know
     (e.g., an access type declaration where the designated type is
      incomplete) or where getting this knowledge requires breaking
      privacy?

      What about overriding primitive subprograms of tagged types?
      Is it ok if overrider and overridden differ with respect to
      assertion policy?

3) For a call with a defaulted expression which includes a
    qualified expression, which assertion policy determines what
    predicate checking is performed for the qualified expression?

4) I think the rules for dealing with assertion policies and generics
    are fine, but I'd still like to see an AARM note stating explicitly
    that an assertion_policy pragma (and, for that matter, a suppress
    pragma) which applies to a region which includes the declaration or
    body of a generic has no effect on an instance of that
    generic (assuming the  instance is declared outside of the region
    affected by the pragma).

    I think the consequences of the rule

      If a pragma Assertion_Policy applies to a generic_instantiation,
      then the pragma Assertion_Policy applies to the entire instance.

    are a bit less obvious than some other rules and an AARM note is
    justified.

****************************************************************

From: Tucker Taft
Sent: Tuesday, February 28, 2012  8:19 PM

...
> For a subtype that has a specified predicate aspect (either static or
> dynamic), perhaps we want the assertion policy corresponding to that
> aspect to participate in subtype conformance. In the above example,
> the two uses of Foo would fail the subtype conformance legality check
> even though they denote the same subtype.
>
> Or perhaps we punt and just say that this case is implementation
> dependent.
>
> Similar issues occur for subprogram declarations (spec, stub, and
> body), and deferred constants.

I think the initial declaration should determine whether the check is enabled.
The state of the assertion policy at the point of the completion should not be
relevant.

> 2) What about conformance for 'Access attributes and related checking?

Yuck.  First we should be sure we agree on the rules without worrying about
Assertion_Policy.  We already decided that "static subtype matching" requires
that predicates come from the same declaration (in 4.9.1(2/3)).  So that seems
pretty clear.

We had hoped that legality didn't depend on Assertion_Policy, but I guess I
would be inclined to say that conflicting Assertion_Policies should cause errors
when using 'Access.

> We have an access-to-foo type with one assertion policy and an aliased
> object of type foo declared with a different assertion policy in
> effect. Do we want to allow the Access attribute of the object to
> yield a value of the access type?
>
> Or two access types which would otherwise be convertible but they
> differ with respect to applicable assertion policy. Do we want to
> allow conversion between such access types?
>
> Maybe allowing these things is ok, but it makes it very hard to write
> assertions that you can count on.
>
> The same problems occur for access-to-subprogram types.

I would go with it being illegal if the predicate-check policies conflict and
there are any subtypes with non-trivial predicates.

Alternatively, for access-to-subprogram, we could try to generalize the
assertion-policy rules for preconditions and postconditions (whatever those are)
for calling through an access-to-subprogram.

> Would we want this stricter conformance rule only if the subtype in
> question is affected by the assertion policy?

I would hope so.  That is, if the predicate is True, then conflicting assertion
policies are not a problem.  It is only if the predicate is not True that the
policies need to agree between the aliased object and the designated subtype of
the access type.  Or equivalently between the designated profile and the
subprogram that is the prefix to the 'Access.

> ... Should two uses
> of Standard.Integer fail a conformance check because of a difference
> in assertion policy? What about the case where we don't know (e.g., an
> access type declaration where the designated type is
> incomplete) or where getting this knowledge requires breaking privacy?

I would need to see some examples there.  I can't quite imagine how that would
happen.

> What about overriding primitive subprograms of tagged types?
> Is it ok if overrider and overridden differ with respect to assertion
> policy?

Here I think in the same way that an overriding subprogram inherits the
convention of the inherited subprogram, the overriding subprogram should
probably inherit the predicate-check assertion policy state.  It is not clear
how else it could work.

> 3) For a call with a defaulted expression which includes a qualified
> expression, which assertion policy determines what predicate checking
> is performed for the qualified expression?

I think the place where the default expression appears.
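A sketch of that case, with invented names: the default expression is declared where Ignore is in effect, but the call happens under Check.  Per the answer above, the policy at the place the default appears would govern, so no check would be made.

```ada
package Lib is
   pragma Assertion_Policy (Ignore);
   subtype Even is Integer with Dynamic_Predicate => Even mod 2 = 0;
   --  Qualified expression inside a default expression; 1 is not Even.
   procedure P (X : Integer := Integer (Even'(1)));
end Lib;

with Lib;
procedure Client is
   pragma Assertion_Policy (Check);
begin
   Lib.P;  --  default evaluated here: which policy checks Even'(1)?
end Client;
```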

> 4) I think the rules for dealing with assertion policies and generics
> are fine, but I'd still like to see an AARM note stating explicitly
> that an assertion_policy pragma (and, for that matter, a suppress
> pragma) which applies to a region which includes the declaration or
> body of a generic has no effect on an instance of that generic
> (assuming the instance is declared outside of the region affected by
> the pragma).
>
> I think the consequences of the rule
>
> If a pragma Assertion_Policy applies to a generic_instantiation, then
> the pragma Assertion_Policy applies to the entire instance.
>
> are a bit less obvious than some other rules and an AARM note is
> justified.

Agreed.

****************************************************************

From: Steve Baird
Sent: Wednesday, February 29, 2012  11:35 AM

> I think the initial declaration should determine whether the check is
> enabled.  The state of the assertion policy at the point of the
> completion should not be relevant.

That seems like a good principle.

>> 2) What about conformance for 'Access attributes and related checking?
>
> We had hoped that legality didn't depend on Assertion_Policy, but I
> guess I would be inclined to say that conflicting Assertion_Policies
> should cause errors when using 'Access.

Right. It is as if the same identifier can be used to denote two different
subtypes, one with predicate checking and one not.

I think we *could* allow those two subtypes to conform (i.e., it wouldn't
compromise type-safety), but I agree with you that we probably don't want to.

>> Would we want this stricter conformance rule only if the subtype in
>> question is affected by the assertion policy?
>
> I would hope so.

Me too, but I think we have to deal with the "maybe" case discussed below.

>> ... Should two uses
>> of Standard.Integer fail a conformance check because of a difference
>> in assertion policy? What about the case where we don't know (e.g.,
>> an access type declaration where the designated type is
>> incomplete) or where getting this knowledge requires breaking
>> privacy?
>
> I would need to see some examples there.  I can't quite imagine how
> that would happen.

For example, an access type where the designated type is incomplete (perhaps a
Taft type or a type from a limited view).

     pragma Assertion_Policy (Check);

     procedure P (X : access Some_Incomplete_Type);

     package Nested is
        pragma Assertion_Policy (Ignore);

        type Ref is access procedure (Xx : access Some_Incomplete_Type);

        Ptr : Ref := P'Access; -- legal?

Or, if we are concerned about privacy here (and perhaps we aren't; we could view
assertion_policy stuff as being like repspec stuff, but it would be nicer if we
didn't have to do that):

      package Pkg is
          type Has_A_Predicate is private;
          type Has_no_Predicate is private;
      private
           type Has_A_Predicate is new Integer
             with Static_Predicate => Has_A_Predicate /= 123;

           type Has_No_Predicate is new Integer;
      end Pkg;


      pragma Assertion_Policy (Check);
      use Pkg;
      Has : aliased Has_A_Predicate;
      Has_No : aliased Has_No_Predicate;

      package Nested is
         pragma Assertion_Policy (Ignore);
         type Has_Ref is access Has_A_Predicate;
         type Has_No_Ref is access Has_No_Predicate;

         Ptr1 : Has_Ref := Has'Access; -- we want this to be illegal ...
         Ptr2 : Has_No_Ref := Has_No'Access; -- but what about this?


>> What about overriding primitive subprograms of tagged types?
>> Is it ok if overrider and overridden differ with respect to assertion
>> policy?
>
> Here I think in the same way that an overriding subprogram inherits
> the convention of the inherited subprogram, the overriding subprogram
> should probably inherit the predicate-check assertion policy state.
> It is not clear how else it could work.

And if interface types are involved and one subprogram overrides more than one
inherited subp and the inherited guys don't all agree? I think there are also
much more obscure cases not involving interface types where one primitive op can
override two, but I would have to check to be sure.

>> 3) For a call with a defaulted expression which includes a qualified
>> expression, which assertion policy determines what predicate checking
>> is performed for the qualified expression?
>
> I think the place where the default expression appears.

Sounds good. Ditto for all the other forms of default expressions, presumably.

>> Variant records don't really have
>> such a fall back, but we will have to be clear that even if Ignore
>> applies where the discriminant subtype is specified, the subtype's
>> predicate still determines whether or not a particular
>> discriminant-dependent component exists.  It is as though there were
>> a "when others => null;" alternative in the variant record, where the
>> "when others" covers values that don't satisfy the predicate.
>
> I think this is a bad idea. I think (as in the case of case
> statements) Constraint_Error should be raised. Let's detect a problem
> like this ASAP instead of sweeping it under the rug and then seeing
> mystifying behavior later.

Did my "let's treat all case-ish constructs uniformly"
argument change your mind?
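The variant-part question above could be sketched like this (all names invented): without a "when others", coverage of the discriminant relies on the static predicate, so an ignored predicate can let an uncovered value in.

```ada
type Small is new Integer with Static_Predicate => Small in 1 .. 3;

type Rec (D : Small := 1) is record
   case D is
      when 1     => A : Integer;
      when 2 | 3 => B : Float;
      --  No "when others": full coverage relies on the predicate.
      --  With the predicate Ignored, an object whose D is, say, 7
      --  raises exactly the question the two messages debate.
   end case;
end record;
```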

****************************************************************

From: Steve Baird
Sent: Wednesday, February 29, 2012  12:05 PM

>> Here I think in the same way that an overriding subprogram inherits
>> the convention of the inherited subprogram, the overriding subprogram
>> should probably inherit the predicate-check assertion policy state.
>> It is not clear how else it could work.
>>
>
> And if interface types are involved ....

Never mind. I see that 3.9.2(10/2) already has

   .... if the operation overrides multiple inherited operations, then
   they shall all have the same convention.

So your proposal already covers this case.


****************************************************************

From: Tucker Taft
Sent: Wednesday, February 29, 2012  12:17 PM

> ... Or, if we are concerned about privacy here (and perhaps we aren't;
> we could view assertion_policy stuff as being like repspec stuff, but
> it would be nicer if we didn't have to do that):

I think conflicting assertion policies is like conflicting rep-clauses.  It has
to see through privacy.  When a programmer starts applying micro-control with
the assertion policy, they are now talking about the concrete representations of
things, not just their abstract interfaces.

...
>> I think this is a bad idea. I think (as in the case of case
>> statements) Constraint_Error should be raised. Let's detect a problem
>> like this ASAP instead of sweeping it under the rug and then seeing
>> mystifying behavior later.
>
> Did my "let's treat all case-ish constructs uniformly"
> argument change your mind?

Not sure, though probably some realistic examples might help decide.  Since
discriminants cannot be changed after an object is created, except by
whole-record assignment, the question is whether we check to be sure the
discriminant is covered by the variant alternatives on object creation or on
component selection, I suppose. Doing it on object creation is perhaps simpler,
so I guess making an unconditional check (even if predicate checks are disabled)
to be sure that some variant alternative covers the discriminant value seems
reasonable. If there is a "when others" then there won't be any check anyway, so
it would only be if the variant part did *not* have a "when others" and relied
on full coverage of exactly those elements that satisfied the static predicate.

So I guess I can go with your suggestion.

****************************************************************

From: Tucker Taft
Sent: Friday, March 2, 2012  1:44 PM

Steve,
    Would you be willing to do the next "rev" on this AI?

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012  2:20 PM

Sure, but only after I know what ideas we want to express.

In particular, I'm wondering about the interactions with "statically matching" and incomplete types that I mentioned earlier.

I'd like to avoid cases where we conservatively disallow some construct (e.g.,
an access type conversion) because of assertion policy differences when it turns
out that there are no predicate specifications anywhere in the neighborhood.
Maybe we want to be more permissive and assume when dealing with an incomplete
view of a type that the full type has no applicable predicate specifications?
That would make predicates for designated subtypes less trustworthy, but I think
users would be unlikely to accidentally violate a predicate via this mechanism.
Whether it would be used deliberately (the old "designated predicate bypass
trick") is a separate question.

It would be more of a problem that tools, developers, maintainers, etc. would
have to deal with the possibility that someone *might* use this trick, even
if nobody ever does.

****************************************************************

From: Tucker Taft
Sent: Friday, March 2, 2012  2:31 PM

We could disallow predicates on the first subtype of a type that is a deferred
incomplete type.  Putting predicates on a first subtype seems pretty weird
anyway.  The purpose of a predicate is normally to define an interesting subset,
not to specify something that is true for all values of the type (that would be
more appropriate for a type invariant).

****************************************************************

From: Bob Duff
Sent: Friday, March 2, 2012  2:43 PM

> I'd like to avoid cases where we conservatively disallow some
> construct (e.g., an access type conversion) because of assertion
> policy differences when it turns out that there are no predicate
> specifications anywhere in the neighborhood.

Well, I don't think we want any incompatibilities.

I haven't followed much of this discussion, but if you're suggesting that pragma
Assertion_Policy would affect the legality of a program, I look askance.

****************************************************************

From: Robert Dewar
Sent: Friday, March 2, 2012  2:43 PM

> We could disallow predicates on the first subtype of a type that is a
> deferred incomplete type.  Putting predicates on a first subtype seems
> pretty weird anyway.  The purpose of a predicate is normally to define
> an interesting subset, not to specify something that is true for all
> values of the type (that would be more appropriate for a type
> invariant).

I disagree, it's not weird to have a range on a first subtype

    type R is new Integer range 1 .. 10;

so why should it be weird to have a predicate on a first subtype

    type Non_Zero is new Integer with
      Predicate => Non_Zero /= 0;

to me predicates and constraints are really pretty much the same beast, it is
just that predicates have more power. In fact it seems a bit odd to me to have
constraints at all, given a more powerful mechanism that encompasses ranges, but
of course I understand the historical reasoning.

On the other hand to me deferred incomplete types are a bit weird anyway, so I
really don't mind restrictions on them.

****************************************************************

From: Tucker Taft
Sent: Friday, March 2, 2012  2:52 PM

>> We could disallow predicates on the first subtype of a type that is a
>> deferred incomplete type. Putting predicates on a first subtype seems
>> pretty weird anyway. The purpose of a predicate is normally to define
>> an interesting subset, not to specify something that is true for all
>> values of the type (that would be more appropriate for a type
>> invariant).
>
> I disagree, it's not weird to have a range on a first subtype

You are right, if the first subtype is derived from some preexisting type.  But
I believe it would be weird to put a predicate on the first subtype of a
non-private, non-numeric, non-derived type. And if the type is private, a type
invariant is more appropriate.

If the type is numeric or derived, then I can see the use of the predicate on
the first subtype, since you can convert to an un-predicated version using 'Base
if numeric, or by explicit conversion to the parent type if derived.

In any case, it sounds like we agree that imposing restrictions on the use of
predicates with deferred incomplete types would be OK.

****************************************************************

From: Bob Duff
Sent: Friday, March 2, 2012  2:54 PM

> We could disallow predicates on the first subtype of a type that is a
> deferred incomplete type.  Putting predicates on a first subtype seems
> pretty weird anyway.

I don't agree.  I think it makes perfect sense to say:

    type Nonzero is new Integer with Static_Predicate => Nonzero /= 0;

>...The purpose
> of a predicate is normally to define an interesting subset,  not to
>specify something that is true for all values of the  type...

Agreed, specifying a predicate on a scalar first subtype does not say anything
about all values of the type, because 'Base takes away the predicate, as well as
the constraint:

    X: Nonzero'Base := 0; -- no exception

    type Color is (Red, Yellow, Green, None)
        with Static_Predicate => Color /= None;

    subtype Optional_Color is Color'Base; -- allows None

For composites, there's no 'Base, so it applies to all objects of the type
(modulo some loopholes):

    type My_String is array (Positive range <>) of Character
        with Dynamic_Predicate => My_String'First = 1;

Perhaps we should allow 'Base on composites in Ada 2020.

>... (that would be more appropriate for a type invariant).

Well, type invariants only work for private-ish types.

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012  3:15 PM

> We could disallow predicates on the first subtype of a type that is a
> deferred incomplete type.

This might be ok for explicit incomplete type declarations.

How does this work with limited withs?

Any type declaration that can be seen via a limited with has an incomplete view.
Fortunately, limited withs don't make subtype declarations visible, so we are
only concerned with first subtypes.

> Putting predicates on
> a first subtype seems pretty weird anyway.  The purpose of a predicate
> is normally to define an interesting subset, not to specify something
> that is true for all values of the type (that would be more
> appropriate for a type invariant).
>

I disagree in the case of a derived type (upon reading Robert's message halfway
through composing this one, I see that he made this point too and you now
agree).

Even in the case of a non-derived type, I could imagine wanting something like

     type Foo (Max_Length : Natural) is
       record
         Data : Buffer(1 .. Max_Length);
         Current_Length : Natural := 0;
       end record
       with Dynamic_Predicate => Foo.Current_Length < Foo.Max_Length;

and I wouldn't want to ban this in order to support mixing of assertion
policies.

Another ugly option would be to have a configuration pragma (one which has to be
consistent across the entire partition) for specifying the assertion policy
associated with the designated subtype of an access type when the view of that
subtype is incomplete at the point of the access type definition. Just a
thought. I haven't thought about DSA interactions at all - maybe there aren't
any.

****************************************************************

From: Robert Dewar
Sent: Friday, March 2, 2012  3:30 PM

>      X: Nonzero'Base := 0; -- no exception
>
>      type Color is (Red, Yellow, Green, None)
>          with Static_Predicate =>  Color /= None;
>
>      subtype Optional_Color is Color'Base; -- allows None
>
> For composites, there's no 'Base, so it applies to all objects of the
> type (modulo some loopholes):

Sure, but 'Base for integer types is an odd beast anyway, I virtually never saw
it in user code.

And this is really no different from

   type R is range 1 .. 10;

   R'Base
   range disappeared

****************************************************************

From: Bob Duff
Sent: Friday, March 2, 2012  5:26 PM

> >      X: Nonzero'Base := 0; -- no exception
> >
> >      type Color is (Red, Yellow, Green, None)
> >          with Static_Predicate =>  Color /= None;

I'm not actually sure that's legal.  Is "None" visible here?
Never mind, it's not important to the issues being discussed.

> >      subtype Optional_Color is Color'Base; -- allows None
> >
> > For composites, there's no 'Base, so it applies to all objects of
> > the type (modulo some loopholes):
>
> Sure, but 'Base for integer types is an odd beast anyway, I virtually
> never saw it in user code.

I don't see why it's an "odd beast".  I agree it's not particularly common, but
I sometimes use it.  Like this, for example:

    type Index is range 1..Whatever;
    type My_Array is array(Index range <>) ...;

    subtype Length is Index'Base range 0..Index'Last;

And then use variables of subtype Length to point to the last-used component of
variables of type My_Array.

In Ada 83, you had to declare Length first, and then declare Index in terms of
Length, and that still works, but the above seems better -- the main thing is
the array index, then I add a subtype that "by the way" includes the extra zero
value, in case of empty arrays.
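The Ada 83-style ordering described above might look like this (a sketch; Whatever and the Character component are placeholders standing in for the real bounds and component type):

```ada
--  Declare the zero-inclusive type first, then the index in terms of it.
type Length is range 0 .. Whatever;
subtype Index is Length range 1 .. Length'Last;
type My_Array is array (Index range <>) of Character;
```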

> And this is really no different from
>
>    type R is range 1 .. 10;
>
>    R'Base
>    range disappeared

Not sure what you're trying to say here.  I agree, it's "no different".
The 'Base attribute takes away the predicate, and also takes away the
constraint.  I have no objection to that, and it's occasionally useful. I just
wish 'Base worked for composite types.

My main point was to agree with Tucker that predicates don't apply to all values
of a type, but only to a subset (at least for scalars).

By the way, the more I think about this stuff, the more I agree with those who
have said predicate violations should raise Constraint_Error rather than
Assert_Failure!

****************************************************************

From: Tucker Taft
Sent: Friday, March 2, 2012  4:52 PM

>> We could disallow predicates on the first subtype of a type that is a
>> deferred incomplete type.
>
> This might be ok for explicit incomplete type declarations.
>
> How does this work with limited withs?

I don't see a problem with limited withs.
Can you construct an example which has such a problem?

> Any type declaration that can be seen via a limited with has an
> incomplete view. Fortunately, limited withs don't make subtype
> declarations visible, so we are only concerned with first subtypes.
> ...

I don't think we need to restrict predicates on types declared in the visible
part of a package just because they might be referenced in a limited with.  I
couldn't, but perhaps you can construct a problem that involves such a type...

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012  5:09 PM

> I don't see a problem with limited withs.
> Can you construct an example which has such a problem?

I haven't tried to compile this, but this should be close.

    package Pkg1 is
      type Even is new Integer with
        Dynamic_Predicate => Even mod 2 = 0;
    end Pkg1;

    limited with Pkg1;
    package Checker is
       pragma Assertion_Policy (Check);
       type Safe_Ref is access all Pkg1.Even;
       Safe_Ptr : Safe_Ref;
    end Checker;

    limited with Pkg1;
    package Ignorer is
       pragma Assertion_Policy (Ignore);
       type Unsafe_Ref is access all Pkg1.Even;
       Unsafe_Ptr : Unsafe_Ref := new Pkg1.Even'(3);
         -- violates predicate, but that's ok
    end Ignorer;

    with Pkg1, Checker, Ignorer;
    use Checker, Ignorer;
    procedure Converter is
        Safe_Ptr_Has_Been_Corrupted : exception;
    begin
        Safe_Ptr := Safe_Ref (Unsafe_Ptr);
        if Safe_Ptr.all not in Pkg1.Even then
           raise Safe_Ptr_Has_Been_Corrupted;
        end if;
    end Converter;

****************************************************************

From: Tucker Taft
Sent: Friday, March 2, 2012  5:19 PM

>> I don't see a problem with limited withs.
>> Can you construct an example which has such a problem.
>
> I haven't tried to compile this, but this should be close.

I don't see a problem here, since the compiler can certainly see that there is a
predicate on the type "Pkg1.Even", even if it is using a limited "with" and the
type is officially incomplete.

But I suppose it would be a bigger problem if Even were defined as derived from
some other type declared in a different package, and "Even" didn't have a
predicate itself but it inherited one.  We could keep following the chain of
"with" clauses I suppose just to learn whether the subtypes have predicates, but
once you start doing name resolution you are starting to violate the spirit of
"limited" with.

At this point it looks simpler to disallow the conversion if different assertion
policies apply to the two designated subtypes, even if we don't know for sure
whether there is a predicate.

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012  5:24 PM

> But I suppose it would be a bigger problem if Even were defined as
> derived from some other type declared in a different package, and
> "Even" didn't have a predicate itself but it inherited one.  We could
> keep following the chain of "with" clauses I suppose just to learn
> whether the subtypes have predicates, but once you start doing name
> resolution you are starting to violate the spirit of "limited" with.

There would be bigtime implementation problems if we ever introduced a language
rule that required resolution of a name which occurs in a limited view of a
package.

The implementation model is that an implementation should be able to compile a
limited wither after only parsing the limited withee.

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012  5:30 PM

> At this point it looks simpler to disallow the conversion if different
> assertion policies apply to the two designated subtypes, even if we
> don't know for sure whether there is a predicate.

Now we are back to undesirable choice B:
   Conservatively disallow otherwise-legal constructs
   because of assertion policy incompatibilities even
   though there are no assertions of any kind in the
   entire program.

This is a point I would like to get consensus on before working on RM wording.

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012  5:31 PM

> There would be bigtime implementation problems if we ever introduced a
> language rule that required resolution of a name which occurs in a
> limited view of a package.

Tuck - I didn't mean to suggest that you were advocating such a language rule;
in your message, you had already pointed out that this approach doesn't work in
the general case.

****************************************************************

From: Tucker Taft
Sent: Friday, March 2, 2012  5:40 PM

>> At this point it looks simpler to disallow the conversion if
>> different assertion policies apply to the two designated subtypes,
>> even if we don't know for sure whether there is a predicate.
>
> Now we are back to undesirable choice B:
> Conservatively disallow otherwise-legal constructs because of
> assertion policy incompatibilities even though there are no assertions
> of any kind in the entire program.
>
> This is a point I would like to get consensus on before working on RM
> wording.

Using the term "incompatibility" is a bit misleading... ;-) I would say
"conflicts" so you don't scare anyone.

Anyone who is using conflicting assertion policies but has no assertions of any
kind in the program is asking for trouble, in my mind...

Are we really just worried about conversions between access-to-incomplete types,
or is there a bigger problem here?

****************************************************************

From: Bob Duff
Sent: Friday, March 2, 2012  5:47 PM

> At this point it looks simpler to disallow the conversion if different
> assertion policies apply to the two designated subtypes, even if we
> don't know for sure whether there is a predicate.

So I write a program and debug/test with all-checks-on, as is common practice.
Then I turn some checks off, and it turns illegal?  I don't know what the right
answer is here, but I'm pretty sure this isn't the right answer.

If one turns checks off, and one violates the checks, one should expect bad
things to happen.

> > package Pkg1 is
> > type Even is new Integer with
> > Dynamic_Predicate => Even mod 2 = 0; end Pkg1;
&c

Tuck, your mailer is deleting indentation.  Mailers that molest mail content are
a menace, and should be banished (grumble). But maybe you could just turn that
malfeature off?

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012  5:58 PM

> Using the term "incompatibility" is a bit misleading... ;-) I would
> say "conflicts" so you don't scare anyone.
>

Good point.

> Anyone who is using conflicting assertion policies but has no
> assertions of any kind in the program is asking for trouble, in my
> mind...

Yeah, but I was just demonstrating the point with an extreme example. Either
there is a need for varying assertion policies within a partition or there isn't
- if there is, then this problem can arise.

> Are we really just worried about conversions between
> access-to-incomplete types, or is there a bigger problem here?
>

There are variations; P.all'Access is effectively a
type conversion.

I think that what it really comes down to is the
definition of "statically matching" for subtypes;
to see where the problems are, look where that is
used. As your question suggests, I think we can
dismiss RM uses of "statically matching" that cannot
involve an incomplete type.

There was also the issue that determining whether a
private type has a specified predicate breaks privacy,
but we know how to implement that (and, if you think
of predicates as being similar to constraints, then
it seems reasonable to treat a predicate spec as being
just another kind of rep spec).

****************************************************************

From: Bob Duff
Sent: Friday, March 2, 2012  5:59 PM

> Using the term "incompatibility" is a bit misleading... ;-) I would
> say "conflicts" so you don't scare anyone.

Right.  But are you sure there are no real incompatibilities
(2005-2012) here?  Say a program contains pragmas Assert, no other assertions,
and contains pragmas Assertion_Policy. It seems like you're wanting to add an
assume-the-worst rule just-in-case there are predicates, which there aren't.
Isn't that incompatible?

I hope I have the sense to butt out of this conversation soon!

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012  6:09 PM

I agree with both of you, but mostly Tuck.

Tuck was referring to my use of the word "incompatibility" in talking about
a single partition where the entire partition did not share the same
assertion policy.

Bob is raising the point that there is an incompatibility here in the usual
sense (we reject a program that used to compile), but Tuck was correct that I
was not talking about that kind of incompatibility when I used the term.

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012  6:38 PM

> Now we are back to undesirable choice B:
>   Conservatively disallow otherwise-legal constructs
>   because of assertion policy incompatibilities even
>   though there are no assertions of any kind in the
>   entire program.
>
> This is a point I would like to get consensus on before working on RM
> wording.

I suppose that if I'm going to talk about choice B and request a consensus, then
I ought to also present a choice A.

We could simply ignore assertion policy in deciding whether two names denoting
the same subtype denote statically matching subtypes and then say that in cases
where a stricter conformance check along the lines of the stuff we have been
discussing today would have flagged a problem, users instead get
implementation-dependent assertion enforcement. Or perhaps we could even define
some portable rules for resolving these assertion-policy conflicts.

This would be a simpler model, but it would make predicates easier to violate
(and might introduce portability issues).

****************************************************************

From: Erhard Ploedereder
Sent: Saturday, March 3, 2012  12:44 PM

> Now we are back to undesirable choice B:
>   Conservatively disallow otherwise-legal constructs
>   because of assertion policy incompatibilities even
>   though there are no assertions of any kind in the
>   entire program.

I find this not just undesirable; it is close to unacceptable.

Why is there no further discussion on treating subtype predicates in the
constraints/checks/suppressable category rather than the assertion/ignorable
category?

If the main point of resistance is that predicate checks, unlike constraint
checks, are not to be assumed to be "true" when suppressed, then this rule can
be easily stated as such. It would certainly be a rule far preferable to the
rules that are being discussed currently.

It also would automatically allow for Steve's proposal phrased as "...users
instead get implementation-dependent assertion enforcement".

If "erroneousness by suppressing" is the issue, well (apart from my opinion that
it is indeed erroneous to ignore a failed subtype predicate), then one could
probably also describe it as a bounded error to create an invalid value by
ignoring/suppressing the failed check or limit the suppressing in Steve's sense.

****************************************************************

From: Tucker Taft
Sent: Saturday, March 3, 2012  1:05 PM

> ... If the main point of resistance is that predicate checks, unlike
> constraint checks, are not to be assumed to be "true" when suppressed,
> then this rule can be easily stated as such. It would certainly be a
> rule far preferable to the rules that are being discussed currently.

This would mean that the compiler could never trust a predicate on a subprogram
parameter, on a global variable, on a dereferenced object, etc.  That would seem
to imply a lot of overhead.

If the only issue is conversion between access-to-incomplete types, I am
reluctant to give up everything else just for those.

****************************************************************

From: Robert Dewar
Sent: Saturday, March 3, 2012  1:50 PM

> Why is there no further discussion on treating subtype predicates in
> the constraints/checks/suppressable category rather than the
> assertion/ignorable category?

I am in favor of this, I think trying to ignore predicates makes no sense.

> If the main point of resistance is that predicate checks, unlike
> constraint checks, are not to be assumed to be "true" when suppressed,
> then this rule can be easily stated as such. It would certainly be a
> rule far preferable to the rules that are being discussed currently.
>
> It also would automatically allow for Steve's proposal phrased as
> "...users instead get implementation-dependent assertion enforcement"
>
> If "erroneousness by suppressing" is the issue, well (apart from my
> opinion that it is indeed erroneous to ignore a failed subtype
> predicate), then one could probably also describe it as a bounded
> error to create an invalid value by ignoring/suppressing the failed
> check or limit the suppressing in Steve's sense.

I really don't mind failed predicates being just as bad as failed constraints
(any differentiation between the two seems hard to explain and plain odd).

****************************************************************

From: Robert Dewar
Sent: Saturday, March 3, 2012  1:51 PM

> This would mean that the compiler could never trust a predicate on a
> subprogram parameter, on a global variable, on a dereferenced object,
> etc.  That would seem to imply a lot of overhead.

I think the compiler should be allowed to trust predicates exactly to the same
extent it is allowed to trust constraints, no more, and no less.

****************************************************************

From: Tucker Taft
Sent: Saturday, March 3, 2012  2:16 PM

>> Why is there no further discussion on treating subtype predicates in
>> the constraints/checks/suppressable category rather than the
>> assertion/ignorable category?
>
> I am in favor of this, I think trying to ignore predicates makes no
> sense.
  ...

If we go this route, then I would be in favor of the suggestion that predicate
checks raise Constraint_Error on failure, and we treat them as similarly to
constraints in every way.  That is, assign them their own "_Check" identifier
and have them controlled by pragma Suppress/Unsuppress.

I agree that the more we talk about them, the more they resemble subtype
constraints, and trying to give them the same "pampering" we give the other
assertion-like things doesn't seem to be working.
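
As a sketch of how this proposal might read in source (the check name
Predicate_Check is the one being proposed here, not language-defined; Even and
Fast_Path are invented for illustration):

```ada
--  Illustrative sketch of the proposed Suppress-based model; the check
--  name Predicate_Check is hypothetical at this point.

subtype Even is Integer
  with Dynamic_Predicate => Even mod 2 = 0;

procedure Fast_Path (X : Integer) is
   pragma Suppress (Predicate_Check);  --  analogous to Range_Check
   E : Even;
begin
   E := X;
   --  Under the proposal, no predicate check is performed here, and
   --  the program is erroneous if X is odd, just as it would be for a
   --  failing but suppressed Range_Check.
end Fast_Path;
```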

****************************************************************

From: Bob Duff
Sent: Saturday, March 3, 2012  2:33 PM

> I think the compiler should be allowed to trust predicates exactly to
> the same extent it is allowed to trust constraints, no more, and no
> less.

I agree.  And they should raise Constraint_Error on failure, as for constraints
and "not null".

****************************************************************

From: Robert Dewar
Sent: Saturday, March 3, 2012  2:33 PM

>> I am in favor of this, I think trying to ignore predicates makes no
>> sense.
>    ...
>
> If we go this route, then I would be in favor of the suggestion that
> predicate checks raise Constraint_Error on failure, and we treat them
> as similarly to constraints in every way.  That is, assign them their
> own "_Check" identifier and have them controlled by pragma
> Suppress/Unsuppress.

I agree 100% with this proposal.

> I agree that the more we talk about them, the more they resemble
> subtype constraints, and trying to give them the same "pampering" we
> give the other assertion-like things doesn't seem to be working.

I think it is just confusing to try to distinguish them from subtype
constraints.

****************************************************************

From: Robert Dewar
Sent: Saturday, March 3, 2012  2:34 PM

> I agree that the more we talk about them, the more they resemble
> subtype constraints, and trying to give them the same "pampering" we
> give the other assertion-like things doesn't seem to be working.

Maybe they should be called Constraint instead of Predicate (half kidding :-))

****************************************************************

From: Erhard Ploedereder
Sent: Saturday, March 3, 2012  4:39 PM

> If we go this route, then I would be in favor of the suggestion that
> predicate checks raise Constraint_Error on failure, and we treat them
> as similarly to constraints in every way.  That is, assign them their
> own "_Check" identifier and have them controlled by pragma
> Suppress/Unsuppress.

That is exactly the notion that I would like to see explored more.
Intuitively, it seems "right". But I am waiting for Steve's "have you considered
... ?".

****************************************************************

From: Randy Brukardt
Sent: Sunday, March 4, 2012  6:53 PM

> That is exactly the notion that I would like to see explored more.
> Intuitively, it seems "right". But I am waiting for Steve's "have you
> considered ... ?".

I'm not Steve, but have you considered ;-) that Suppress eliminates checks based
on their location, while assertions (contract if you prefer) eliminate checks
based on the declaration?

This works for constraints only because any check that would fail (but is not
detected by a suppressed check) makes the program erroneous. In that case, the
fact that the body of the subprogram is "wrong" (assumes that the check is made)
is OK (the program makes no sense anyway). (Arguably, this is not the way
Suppress was really intended to work, as evidenced by the "On" parameter to the
Suppress pragma, but it never was properly defined.)

But there never has been any support for making predicates (or any other
assertions) cause erroneous execution in this way. (The idea being that adding a
predicate should never make a program less safe, in any language-defined mode.)

That means that if you have location-based Suppress, then you cannot use
predicates to move correctness checks into the contract rather than the body.
When the check state is ignore at some call site, you neither get a check nor
erroneous execution if that check fails. Thus you have to repeat the check in
the body (and moreover, the compiler can almost never optimize the duplicate
check away). OTOH, preconditions work "right", so the net effect is that you
always have to use a precondition rather than a predicate (which is usually
going to be more complicated and less readable). That would strip away a large
part of their value.

Erhard, you have been as concerned about this as I have, so I don't understand
why you're willing to abandon it now.

****************************************************************

From: Robert Dewar
Sent: Sunday, March 4, 2012  7:07 PM

...
> But there never has been any support for making predicates (or any other
> assertions) cause erroneous execution in this way. (The idea being
> that adding a predicate should never make a program less safe, in any
> language-defined mode.)

I understand the idea, but the fact of the matter is it just doesn't work to
somehow think of predicates as something that makes the program safer. They
really fundamentally change the meaning of the program, just as constraints do.
Any attempt at distinguishing them from constraints is IMO doomed to end up in a
mess.

> Erhard, you have been as concerned about this as I have, so I don't
> understand why you're willing to abandon it now.

I do!

****************************************************************

From: Randy Brukardt
Sent: Sunday, March 4, 2012  7:09 PM

...
> We could simply ignore assertion policy in deciding whether two names
> denoting the same subtype denote statically matching subtypes and then
> say that in cases where a stricter conformance check along the lines
> of the stuff we have been discussing today would have flagged a
> problem, users instead get implementation-dependent assertion
> enforcement. Or perhaps we could even define some portable rules for
> resolving these assertion-policy conflicts.

It's pretty clear that the "default" should be "Check"; that is, if there is any
"conflict", "Check" always wins. Everybody has to see "Ignore" before we ignore
anything.

But I have to admit, this doesn't seem to be working.

What I don't understand (or perhaps remember) is what was so bad about the
"simple" model that the policy that applies to the predicate is what applies to
the check? The way Tucker wrote that model up during the meeting didn't work,
but that was a terminology problem rather than any real definitional problem.
That model avoids problems with static matching (because statically matching
subtypes have identical policies applied), and it doesn't assign a meaning to
"renaming" subtypes (which bothered people about some other models).

It works properly for contracts (so long as you are careful), it doesn't cause
these definitional problems, and it doesn't introduce erroneousness. So what
have I missed??

****************************************************************

From: Randy Brukardt
Sent: Sunday, March 4, 2012  7:47 PM

> I understand the idea, but the fact of the matter is it just doesn't
> work to somehow think of predicates as something that makes the
> program safer. They really fundamentally change the meaning of the
> program, just as constraints do. Any attempt at distinguishing them
> from constraints is IMO doomed to end up in a mess.

I tried to make a point like this last fall, and was essentially told I was
crazy. Apparently, it is only crazy if I try to explain it...sigh.

Anyway, I don't disagree with your point, but it is important to realize that
anything that is true about predicates is also true about preconditions. That
is, some specifications have to be given in precondition form (because they
involve multiple parameters), but other specifications can be given in either
predicate or precondition (or constraint for that matter) forms. It is
uncomfortable for those to be handled differently.

And I don't want something a client does to introduce errors (or
erroneousness) into my code - no matter how I describe the contract.

(I can show examples of all of these cases using the Ada.Containers as a basis,
if you need examples of what I'm talking about.)

One of the appealing things about predicates is that they (appear) to offer a
way to avoid the bugs caused by client suppression of constraint checks. Using
an assertion model, I'd definitely consider eliminating almost all of the
constraints from my program. If we use a Suppression model for predicates but
not preconditions, then that will force everything into preconditions.

Let me make it clear that virtually any model would be preferable to the one
Tucker and Steve seem to be describing. So I could imagine following a purely
suppression model for predicates if we can't work out something better. But I'm
very dubious that there was any problem with the "policy at the point of
predicate declaration" model.

****************************************************************

From: Robert Dewar
Sent: Sunday, March 4, 2012  7:59 PM

> Anyway, I don't disagree with your point, but it is important to
> realize that anything that is true about predicates is also true about
> preconditions. That is, some specifications have to be given in
> precondition form (because they involve multiple parameters), but
> other specifications can be given in either predicate or precondition
> (or constraint for that
> matter) forms. It is uncomfortable for those to be handled differently.

I do not think of predicates this way at all, I think of them just as
constraints; trying to conflate them with preconditions is to me confusing
obfuscation.

****************************************************************

From: Randy Brukardt
Sent: Sunday, March 4, 2012  8:21 PM

That makes sense for static predicates and some other predicates. But keep in
mind that constraints themselves are a replacement for preconditions (for
instance, if you use a Positive parameter, you don't usually include that the
parameter is positive as part of the precondition - but it surely is).

And certainly some dynamic predicates would exist only for the purpose of
simplifying preconditions in this way. For instance, if you had:

     subtype Valid_Root_Window_Type is Root_Window_Type
         with Dynamic_Predicate => Is_Valid (Valid_Root_Window_Type);

in Claw, you could use it on most of the operations to eliminate having to
mention this in (almost) every precondition. It would be preferable because the
name of the subtype would serve as a reminder of the requirement (it wouldn't be
necessary to read the precondition).
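
For instance, with a hypothetical Claw-style operation (Move and its
parameters are invented for illustration), the two styles compare as:

```ada
--  Predicate form: the requirement rides along with the subtype name.
procedure Move (Window : in out Valid_Root_Window_Type;
                X, Y   : in     Integer);

--  Precondition form: Is_Valid must be repeated on (almost) every
--  operation in the package:
--    procedure Move (Window : in out Root_Window_Type;
--                    X, Y   : in     Integer)
--      with Pre => Is_Valid (Window);
```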

If predicates work differently than preconditions, then 3rd-party developers
(and the Standard itself) can really only use preconditions for this sort of
thing (or at least have a strong incentive to do so), and that will clutter and
lengthen the preconditions a lot. Bob has said that he uses predicates almost
exclusively for this sort of thing, so it is not just me thinking this way. (Not
to put thoughts into Bob's mouth though - I don't know if he cares about how
assertion policy works at all.)

****************************************************************

From: Bob Duff
Sent: Sunday, March 4, 2012  8:40 PM

>...Bob has said that he uses predicates almost exclusively for this
>sort of thing, so it is not just me thinking this way.

Yes.  In many cases, at first I think I need a precondition, and then I see that
the same precondition applies to many procedures, and it's a property of a
single parameter, so I switch to a predicate.  And then I can use it for local
variables, components, etc.  Good.

The only time I really need a precondition is when it refers to two or more
parameters.  Other times, predicates usually seem better.  For DRY if nothing
else.
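
A minimal sketch of that progression (the package and its operations are
invented here):

```ada
package Files is
   type File is private;
   function Is_Open (F : File) return Boolean;

   --  First instinct: the same single-parameter precondition on every
   --  operation:
   --    procedure Read  (F : in File) with Pre => Is_Open (F);
   --    procedure Write (F : in File) with Pre => Is_Open (F);

   --  Factored into a predicate instead (DRY), and now usable for
   --  local variables, components, etc. as well:
   subtype Open_File is File
     with Dynamic_Predicate => Is_Open (Open_File);

   procedure Read  (F : in Open_File);
   procedure Write (F : in Open_File);
private
   type File is record
      Open : Boolean := False;
   end record;
end Files;
```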

>...(Not to put thoughts into Bob's mouth though - I don't know if he
>cares about how assertion policy works at all.)

I care, but I've come to the conclusion that it doesn't much matter what the
Standard says.  Implementations will do what their customers want.  I think we
can safely let the market decide, in this case.

****************************************************************

From: Robert Dewar
Sent: Sunday, March 4, 2012  8:46 PM

> Yes.  In many cases, at first I think I need a precondition, and then
> I see that the same precondition applies to many procedures, and it's
> a property of a single parameter, so I switch to a predicate.  And
> then I can use it for local variables, components, etc.  Good.
>
> The only time I really need a precondition is when it refers to two or
> more parameters.  Other times, predicates usually seem better.  For
> DRY if nothing else.

Well, the above are incompatible observations to me.

You say in the first para that a predicate is appropriate if it applies to many
procedures. But if you have a precondition that applies to a single parameter
for a single procedure, it ends up being heavier to use a predicate, since it
requires you to declare a special subtype just for that procedure call.
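
That is, for the single-use case (Page_Size and Resize are invented for
illustration):

```ada
Page_Size : constant := 4096;

--  Precondition form: one aspect on the one subprogram that needs it.
procedure Resize (New_Size : in Natural)
  with Pre => New_Size mod Page_Size = 0;

--  Predicate form: a one-off subtype must be declared first.
--    subtype Page_Aligned is Natural
--      with Dynamic_Predicate => Page_Aligned mod Page_Size = 0;
--    procedure Resize (New_Size : in Page_Aligned);
```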

****************************************************************

From: Jeff Cousins
Sent: Monday, March 5, 2012  5:30 AM

Speaking as a poor old user I agree with Robert; predicates are a lot easier to
understand if they work like constraints and are suppressed like constraints.
Predicates have been "sold" as being a cleverer form of constraints, and it's
confusing if they sit in a fuzzy area half way between constraints and
assertions.

There is Randy's problem that you cannot use predicates to move correctness
checks into the contract rather than the body, but as this problem already
exists for constraints I think it should be solved in a common way, say a pragma
or aspect "suppress_determined_at_call_site_not_body" in the spec.

****************************************************************

From: Robert Dewar
Sent: Monday, March 5, 2012  5:48 AM

I think people are worrying too much about the detailed issue of what happens
when suppressed checks fail.

For preconditions and postconditions, I think it is just fine for these to be
totally ignored if suppressed, like assertions (sorry, can't help using this
term in its obvious sense, i.e. stuff done with pragma Assert).

But predicates really do read like constraints (and BTW I think
*should* raise constraint error if they fail).

"poor old users" have the following pretty simple view of constraints.

You leave checks on unless you are absolutely sure that the checks won't fail,
and you really need the extra performance. Then, if and only if these two
factors are true, you suppress checks.

The issue of exactly what happens if checks fail when they are suppressed is
something that only language lawyers know for constraints, and that will
continue to be true for predicates no matter what we decide, users simply won't
know the rules (and won't think that they have to, because they will only
suppress checks when they are sure they won't fail).

****************************************************************

From: Jeff Cousins
Sent: Monday, March 5, 2012  5:56 AM

Yes to all that.

****************************************************************

From: Erhard Ploedereder
Sent: Monday, March 5, 2012  7:22 AM

> That means that if you have location-based Suppress, then you cannot
> use predicates to move correctness checks into the contract rather
> than in the body. When the check state is ignore at some call site,
> you neither get a check nor erroneous execution if that check fails.
> Thus you have to repeat the check in the body (and moreover, the
> compiler can almost never optimize the duplicate check away). OTOH,
> preconditions work "right", so the net effect is that you always have
> to use a precondition rather than a predicate (which is usually going to be more complicated and less readable).
> Erhard, you have been as concerned about this as I have, so I don't
> understand why you're willing to abandon it now.

Correct, but arguably this is not such a bad move. To use subtype predicates for
that purpose, one needs to introduce a subtype, which either clutters up the
interface (if it is part of the interface) or the predicate is not visible as
part of a contract (if it is part of the body of the interface). I don't see why
PRE does not cover the necessary grounds for this.

> I'm not Steve, but have you considered that Suppress eliminates
> checks based on their location, while assertions (contract if you
> prefer) eliminate checks based on the declaration?
>
> This works for constraints only because any check that would fail (but
> is not detected by a suppressed check) makes the program erroneous.

The erroneousness I am willing to accept. The Suppress semantics for subtype
predicates could be made different from those for constraints, as far as I am
concerned, to maintain value-safeness; i.e., whatever applies at the aspect
specification applies to all checks of the aspect.

****************************************************************

From: Tucker Taft
Sent: Monday, March 5, 2012  9:43 AM

> It works properly for contracts (so long as you are careful), it
> doesn't cause these definitional problems, and it doesn't introduce erroneousness.
> So what have I missed??

You have to have two different predicates, one which is used for checking, and
one which is used for membership. Or I suppose you could say that what matters
is the Assertion_Policy at the point of the "last" explicit aspect
specification for a predicate.  That means if you want to change whether a
given predicate check is enabled, you have to write

   ... with Static_Predicate => True;

at a point where you have specified a different Assertion_Policy.

That could probably work, as you would be unlikely to do that by mistake.
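
Concretely, under that reading (Even and Quiet_Even are illustrative; whether
re-specifying a predicate this way is meaningful is exactly what is being
proposed, not settled language):

```ada
pragma Assertion_Policy (Check);

subtype Even is Integer
  with Dynamic_Predicate => Even mod 2 = 0;  --  last aspect spec: Check

pragma Assertion_Policy (Ignore);

subtype Quiet_Even is Even
  with Static_Predicate => True;  --  re-specified here: policy is Ignore
```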

****************************************************************

From: Geert Bosch
Sent: Monday, March 5, 2012  1:16 PM

> If we go this route, then I would be in favor of the suggestion that
> predicate checks raise Constraint_Error on failure, and we treat them
> as similarly to constraints in every way.  That is, assign them their
> own "_Check" identifier and have them controlled by pragma
> Suppress/Unsuppress.
>
> I agree that the more we talk about them, the more they resemble
> subtype constraints, and trying to give them the same "pampering" we
> give the other assertion-like things doesn't seem to be working.

Right, I agree with this. Anything else is just too complicated for users to
grasp and for us to properly define.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 5, 2012  6:25 PM

> > It works properly for contracts (so long as you are careful), it
> > doesn't cause these definitional problems, and it doesn't introduce
> > erroneousness. So what have I missed??
>
> You have to have two different predicates, one which is used for
> checking, and one which is used for membership.

Yes, but that seems necessary in any model (including Suppression). Since
predicates are always used in memberships, validity checks, and case/loop
statements (irrespective of the Assertion_Policy), and all subtypes have
predicates, there are always going to be two predicates associated with every
subtype (the "full" predicate and the one that depends on the assertion policy).
The only reason you can get away with a single constraint is that only the last
constraint is used (it's always a proper subset or fails a check); that model
does not work for predicates.

> Or I suppose you could say that what matters is the Assertion_Policy
> at the point of the "last" explicit aspect specification for a predicate.
> That means if you want to change whether a given predicate check is
> enabled, you have to write
>
>    ... with Static_Predicate => True;
>
> at a point where you have specified a different Assertion_Policy.
>
> That could probably work, as you would be unlikely to do that by
> mistake.

I'm not sure this helps much; you've still got two predicates for every subtype.
And it makes subtype predicates subtly different from preconditions (which
would defeat the point of using this model - to make them work the same).
Preconditions inherit the checking state from parent operations.

Anyway, we have to find a proper model here, and soon, as this is the last
remaining Ada 2012 issue and we've promised an almost-final standard for review
ASAP.

Also note that changing predicates to a suppression model would also imply no
longer classifying them as "assertions". (I cannot imagine how we could still
call them "assertions" but yet treat them differently in every important
[user-visible] way - not affected by assertion_policy, erroneous rather than
ignore, etc.) This would totally destroy our carefully-crafted wording, which
would have to be completely done over from scratch. Indeed, that also includes
the bounded error rules for protected actions among other things.

Predicates are more like assertions for most purposes (since arbitrary
expressions evaluated at arbitrary points have far more "issues" than the much
more restricted possibilities of constraints). We need (almost) all of the rules
that apply to assertions (including preconditions, etc.) to apply to them as
well.

Since assertions (general name) are the most important thing about Ada 2012, it
appears that we have two unappealing choices:
(1) Go forward with rules we *know* are wrong, assuming implementers will clean
    up the mess (with the complete loss of portability that entails); or
(2) Abandon Ada 2012 altogether, restarting it whenever we figure this stuff out
    to true agreement (essentially making it Ada 2014).

Blah.

****************************************************************

From: Robert Dewar
Sent: Monday, March 5, 2012  6:50 PM

> Also note that changing predicates to a suppression model would also
> imply no longer classifying them as "assertions". (I cannot imagine
> how we could still call them "assertions" but yet treat them
> differently in every important [user-visible] way - not affected by
> assertion_policy, erroneous rather than ignore, etc.) This would
> totally destroy our carefully-crafted wording, which would have to be
> completely done over from scratch. Indeed, that also includes the
> bounded error rules for protected actions among other things.

Right, indeed I think we should NOT call predicates assertions, it's confusing.

> Predicates are more like assertions for most purposes (since arbitrary
> expressions evaluated at arbitrary points have far more "issues" than
> the much more restricted possibilities of constraints). We need
> (almost) all of the rules that apply to assertions (including
> preconditions, etc.) to apply to them as well.

Well they may be more like assertions from a detailed legalistic semantic point
of view, but the fact remains that they are like constraints from the
programmer's point of view.

> Since assertions (general name) are the most important thing about Ada
> 2012, it appears that we have two unappealing choices:

> (1) Go forward with rules we *know* are wrong, assuming implementers
> will clean up the mess (with the complete loss of portability that
> entails)

No I don't see the problem as nearly so intractable, and in fact I think the
details that we are discussing now are pretty unimportant details. It is a
common phenomenon that language lawyers regard problems of no great importance
as significant :-)

> (2) Abandon Ada 2012 altogether, restarting it whenever we figure this
> stuff out to true agreement (essentially making it Ada 2014).

To me this is a strawman of no significance at this stage. If ISO delays the
standard it would not have much effect on the implementation scene, which is
what matters to users *other* than language lawyers. But the effect which it
*would* have would be wholly negative, and it would be unfortunate to delay over
essentially unimportant minor points.

Just to be clear, to me as a user, looking at all the suggestions that have been
made for predicates, I see no big significant difference between them, these are
VERY minor semantic points if you ask me.

Yes, I know they seem big, language design discussions always have that effect!

****************************************************************

From: Tucker Taft
Sent: Monday, March 5, 2012  6:53 PM

I think a rule that associates the assertion policy with the location of the
"final" explicit predicate expression that is applicable to the subtype could
work.  The only other alternative that seems to be simple enough to be
understandable at this point is the suppress model, where it is erroneous if you
suppress it and it would have evaluated to False.

I would recommend a straw vote at this point to see which way most folks are
leaning:

1) Assertion_Policy applies to subtype predicates, based on the last explicit
   predicate aspect clause, and the effect is that the predicates are not
   checked on conversion, etc., but are still relevant for membership tests,
   full case-statement coverage, etc.

   or

2) Suppress(Predicate_Check) applies to predicates, and it is erroneous if the
   check would have returned False at the point of a suppressed check.

Personally, I am on the fence between these two.

****************************************************************

From: Robert Dewar
Sent: Monday, March 5, 2012  7:15 PM

I strongly favor 2). I have given my reasons. I think that whatever you say in
the RM, ordinary programmers will think of predicates as being like constraints.
It won't matter very much if you choose 1) and their thinking is not quite
accurate, since as I said earlier, in the case where the predicate is not
violated, the two rules are equivalent, and I think most programmers will figure
if they suppress predicate checking they had better make sure the predicates are
not violated.

But all in all, given that programmers will inevitably think of predicates as
being like constraints, they may as well behave as much like constraints as
possible.

****************************************************************

From: Steve Baird
Sent: Monday, March 5, 2012  7:49 PM

> ...
> it is erroneous if the check would have returned False at the point of
> a suppressed check.

With suppressed checks, Ada05 gives us an unambiguous definition of when a
program crosses the line into erroneousness.

The line doesn't seem so clearly defined when we allow arbitrary boolean
expressions.

Consider, for example, a predicate which has side effects.

I know that such predicates are considered to be poorly behaved, but I still
want the language to be well-defined if someone does choose to write such a
thing.
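
For concreteness, a (deliberately ill-behaved) predicate of the sort in
question, with an invented global counter:

```ada
Predicate_Calls : Natural := 0;

function Check_And_Count (X : Integer) return Boolean is
begin
   Predicate_Calls := Predicate_Calls + 1;  --  the side effect
   return X > 0;
end Check_And_Count;

subtype Counted_Positive is Integer
  with Dynamic_Predicate => Check_And_Count (Counted_Positive);
--  If a check against Counted_Positive is suppressed, is
--  Predicate_Calls still incremented?  "Would have returned False"
--  leaves the state of Predicate_Calls unspecified.
```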

What is the state of a program which depends on the value of a global variable
that a predicate function "would have" updated?

Is an implementation allowed (but of course not required) to perform a
suppressed assertion check?

Maybe there are no serious problems here - I haven't had a chance to think
things through - but I feel nervous about this.

Can we go with 90% of the suppression model, but keep the rule that a suppressed
predicate check is simply not performed (or perhaps it is
implementation-dependent as to whether the check is performed, but there are
only two options: it is, or it isn't)?  I generally like the suppression model,
but I'd feel better if we could avoid using "would have in some alternative
universe" stuff to define erroneous execution.

****************************************************************

From: Bob Duff
Sent: Monday, March 5, 2012  7:51 PM

> Well they may be more like assertions from a detailed legalistic
> semantic point of view, but the fact remains that they are like
> constraints from the programmer's point of view

Well, I agree, but in fact all of the following are pretty-much alike from the
programmer's point of view:

    constraints (range, index, discrim)
    null exclusions ("not null")
    predicates
    invariants
    preconditions
    postconditions
    pragma Assert
    if ... then raise Blah
    when others => raise Program_Error;

Only the details are different.

Did I forget any?  I'd mention digits_constraint and delta_constraint, except
most programmers have probably never heard of those.

> No I don't see the problem as nearly so intractable, and in fact I
> think the details that we are discussing now are pretty unimportant
> details.

Exactly.

>...It is a common phenomenon that language lawyers regard problems  of
>no great importance as significant :-)

Right, sometimes it's hard for language lawyers to take off the "lawyer" hat and
put on the "plain old programmer" hat.  ;-)

****************************************************************

From: Bob Duff
Sent: Monday, March 5, 2012  8:00 PM

> Since assertions (general name) are the most important thing about Ada
> 2012, it appears that we have two unappealing choices:
> (1) Go forward with rules we *know* are wrong, assuming implementers
> will clean up the mess

That would be just fine.  I find it only mildly "unappealing".

>... (with the complete loss of portability that entails); or

There's no loss of portability when checks are turned ON.

And in the big scheme of things, it really doesn't matter that much if there's a
loss of portability when checks are turned OFF.

> (2) Abandon Ada 2012 altogether, restarting it whenever we figure this
> stuff out to true agreement (essentially making it Ada 2014).

I never understood the hurry for 2012, but now that it's almost done, it would
be pretty silly to delay just because the rules for turning off checks are
somewhat bogus.

Look at the big picture!

****************************************************************

From: Tucker Taft
Sent: Monday, March 5, 2012  8:04 PM

> Can we go with 90% of the suppression model, but keep the rule that a
> suppressed predicate check is simply not performed (or perhaps it is
> implementation-dependent as to whether the check is performed, but
> there are only two options: it is, or it isn't). I generally like the
> suppression model, but I'd feel better if we could avoid using "would
> have in some alternative universe" stuff to define erroneous
> execution.

I don't think that really works, unless you go for something based on the
location of a use in some declaration, and I think we agreed that was getting
pretty complicated.

On the other hand, we could perhaps distinguish static and dynamic predicates,
and say that the compiler can never assume a dynamic predicate evaluates to
true, since we know dynamic predicates are not always well behaved.  A static
predicate, in contrast, is pretty well defined, definitely doesn't have
any side effects, and is about as equivalent to a range constraint as you can
get, so it is well defined to say what it means to assume it is true even
without checking it.
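
As a concrete illustration of the distinction (my sketch only; the subtype and
function names are invented, and Looks_Valid's body is elided):

```ada
--  A static predicate: side-effect free and equivalent in spirit to a
--  constraint, so assuming it true without checking is well defined.
subtype Even_Digit is Integer
   with Static_Predicate => Even_Digit in 0 | 2 | 4 | 6 | 8;

--  A dynamic predicate: calls an arbitrary function, which might have
--  side effects, so the compiler cannot safely assume it holds.
function Looks_Valid (X : Integer) return Boolean;

subtype Checked is Integer
   with Dynamic_Predicate => Looks_Valid (Checked);
```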

****************************************************************

From: Robert Dewar
Sent: Monday, March 5, 2012  8:04 PM

> I know that such predicates are considered to be poorly behaved, but I
> still want the language to be well-defined if someone does choose to
> write such a thing.

I don't care; why not say it is a bounded error to have side effects, and the
possible effect of a suppressed check is that it might or might not update the
global, or may set it to some unexpected value.

> What is the state of a program which depends on the value of a global
> variable that a predicate function "would have"
> updated?
>
> Is an implementation allowed (but of course not required) to perform a
> suppressed assertion check?

I don't care one way or another

> Can we go with 90% of the suppression model, but keep the rule that a
> suppressed predicate check is simply not performed (or perhaps it is
> implementation-dependent as to whether the check is performed, but
> there are only two options: it is, or it isn't).  I generally like the
> suppression model, but I'd feel better if we could avoid using "would
> have in some alternative universe" stuff to define erroneous
> execution.

I think it is fine to insist on the predicate check not being performed if it is
suppressed. After all, the only reason we don't insist on the check being
suppressed in the constraint case is that it may be infeasible or too expensive
to suppress the check (e.g. if it is free in hardware), and that reasoning does
not apply to predicates.

****************************************************************

From: Robert Dewar
Sent: Monday, March 5, 2012  8:09 PM

> On the other hand, we could perhaps distinguish static and dynamic
> predicates, and say that the compiler can never assume a dynamic
> predicate evaluates to true, since we know dynamic predicates are not
> always well behaved.  On the other hand, a static predicate is pretty
> well defined, definitely doesn't have any side effects, and is about
> as equivalent to a range constraint as you can get, so it is well
> defined to say what it means to assume it is true even without
> checking it.

I could live with that certainly

****************************************************************

From: Randy Brukardt
Sent: Monday, March 5, 2012  9:47 PM

...
> Can we go with 90% of the suppression model, but keep the rule that a
> suppressed predicate check is simply not performed (or perhaps it is
> implementation-dependent as to whether the check is performed, but
> there are only two
> options: it is, or it isn't).  I generally like the suppression model,
> but I'd feel better if we could avoid using "would have in some
> alternative universe" stuff to define erroneous execution.

I don't think that works, because it would mean that both (1) you have no
control over whether clients make a check or not and (2) the program is not
erroneous if the check fails. Which means that no predicate could be relied
upon for correctness in a body.

To take a concrete example:

The vector container has the following routine:

function Element (Position  : Cursor) return Element_Type;
   -- If Position equals No_Element, then Constraint_Error is propagated.
   -- Otherwise, Element returns the element designated by Position.

Ignoring for the moment the exact exception being raised, this could (and
probably should) be written as:

     subtype Element_Cursor is Cursor
         with Dynamic_Predicate => Has_Element (Element_Cursor);

I'm using a predicate here because there are quite a few routines with such a
requirement, and repeating it over and over in preconditions is not appealing.
[I'm not thrilled by the subtype name here; Ada.Containers does not have a
concept of a "null" cursor by that name, and "valid" means something else. I
don't want to spend too long on this example, so I used the first name that
works.]

    function Element (Position : Element_Cursor) return Element_Type;
        -- Element returns the element designated by Position.

A precondition would work, of course:

    function Element (Position : Cursor) return Element_Type
        with Pre => Has_Element (Position);
        -- Element returns the element designated by Position.

In both cases, this is assuming that Assertion_Policy is Check, and this is
specified inside the package Ada.Containers.Vectors.

Now, the problem is that if the client Suppresses predicate checks, then the
predicate check is not made at the call site. If the resulting body is *not*
erroneous, then the only possibility for a correct implementation of the
original specification is to repeat the check within the body of Element.

OTOH, an Assertion_Policy of Ignore at the client site would have no effect on
whether the Precondition is checked, so it in fact could be used in this way.

The net effect is that Dynamic_Predicates could not be used to define checking
for parameters [without repeating the checks in the body]. And that is 90% of
the uses I envision for them (these are very similar to null exclusions in that
way, 90% of uses being associated with parameters).

At least if the effect is erroneousness, we don't have to care whether the body
works or not. And as Robert says, most users will understand that. (I'd prefer
to avoid erroneousness; indeed, I tend to recheck parameters of routines that
may be called from client code, where we don't control the checking,
specifically to avoid erroneous execution - but I realize I'm unusual here.)

Note that if we *don't* use the predicate, and just check the rules in the body,
we don't have any possible erroneousness or incorrectness. But then tools don't
get a chance to do verification or to remove the extra error path from the code.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 5, 2012  9:52 PM

...
> I would recommend a straw vote at this point to see which way most
> folks are leaning:
>
> 1) Assertion_Policy applies to subtype predicates, based on the last
> explicit predicate aspect clause, and the effect is that the
> predicates are not checked on conversion, etc., but are still relevant
> for membership tests, full case-statement coverage, etc.
>
>    or
>
> 2) Suppress(Predicate_Check) applies to predicates, and it is
> erroneous if the check would have returned False at the point of a
> suppressed check.
>
> Personally, I am on the fence between these two.

I prefer option (1), for the reasons outlined in my previous messages: (a) I'd
rather not add more erroneous execution to the language, and without it,
predicates can't be used to check "semantics"; and (b) I don't want to rewrite
all of the 11.4.1 stuff to exclude predicates from "assertions" but then apply
all of the side-effect and protected operation rules to them. [Note that static
predicates really don't have issue (b), but it would be even messier to separate
static and dynamic predicates here.]

****************************************************************

From: Randy Brukardt
Sent: Monday, March 5, 2012  10:06 PM

...
> > Since assertions (general name) are the most important thing about
> > Ada 2012, it appears that we have two unappealing choices:
>
> > (1) Go forward with rules we *know* are wrong, assuming implementers
> > will clean up the mess (with the complete loss of portability that
> > entails)
>
> No I don't see the problem as nearly so intractable, and in fact I
> think the details that we are discussing now are pretty unimportant
> details. It is a common phenomenon that language lawyers regard
> problems of no great importance as significant :-)

You might be right in terms of the language design. But these are very important
details - get them wrong, and I (and many other programmers) will not be able to
use Dynamic_Predicates for a substantial fraction of their intended purpose.
That's very bad.

Moreover, even if we agree we want Suppression rules, I don't know how to
(re)word the rules in 11.4.1 that assume that predicates are "assertions". We
had so much trouble getting them right as it is (and that's assuming that
they're right now, probably a dubious assumption), redoing them is a very scary
task.

And the Standard is promised for delivery ASAP after March 1st (which has
already passed). Most likely, that will be sometime next week. This wording all
has to be done and letter balloted by then. That has me *very* scared.

****************************************************************

From: John Barnes
Sent: Tuesday, March 6, 2012  2:07 AM

I fear that I have not been following this properly (very busy on lectures on
the fourth dimension), but

...
>> I agree that the more we talk about them, the more
>> they resemble subtype constraints, and trying to give
>> them the same "pampering" we give the other assertion-like
>> things doesn't seem to be working.
>

Yes, yes, I felt that when writing that bit of the rat (which will now need
revising heavily). And do I get Value'First and Value'Last as well?

****************************************************************

From: Bob Duff
Sent: Tuesday, March 6, 2012  5:14 AM

> I fear that I have not been following this properly (very busy on
> lectures on the fourth dimension), but

!!

> Yes, yes, I felt that when writing that bit of the rat (which will now
> need revising heavily). And do I get Value'First and Value'Last as well?

If there's no predicate, you get 'First.  If Value is static, you get
Value'First_Valid.  Otherwise (there's a dynamic predicate), you get neither.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  7:21 AM

> If there's no predicate, you get 'First.  If Value is static, you get
> Value'First_Valid.  Otherwise (there's a dynamic predicate), you get
> neither.

Note that Value'First_Valid can also be used if there is no predicate, providing
there is at least one value (not a null subtype!)
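
A sketch of the distinction (my illustration; the subtype names are invented,
and this assumes the 'First_Valid semantics being discussed here):

```ada
type Score is range 0 .. 100;

subtype Plain   is Score;                      --  no predicate
subtype Passing is Score
   with Static_Predicate => Passing in 60 .. 100;

--  Plain'First is 0; Plain'First_Valid is also legal and yields 0,
--  since Plain is static and has at least one value.
--  Passing'First is illegal (First/Last/Range are prohibited on
--  subtypes with predicates), but Passing'First_Valid yields 60,
--  the smallest value satisfying the static predicate.
--  Neither attribute is available for a dynamic-predicate subtype.
```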

****************************************************************

From: Bob Duff
Sent: Tuesday, March 6, 2012  6:00 AM

> You might be right in terms of the language design. But these are very
> important details - get them wrong, and I (and many other programmers)
> will not be able to use Dynamic_Predicates for a substantial fraction
> of their intended purpose. That's very bad.

Randy, this is just over-the-top pessimistic!

I'm quite sure we WILL get these rules wrong.  So what?
We'll work it out over the next couple of years.
Implementers will do what their customers need, and ARG will adjust the rules
accordingly.

> Moreover, even if we agree we want Suppression rules, I don't know how
> to (re)word the rules in 11.4.1 that assume that predicates are "assertions".
> We had so much trouble getting them right as it is (and that's
> assuming that they're right now, probably a dubious assumption),
> redoing them is a very scary task.

Don't panic.  Steve is going to produce some wording.
If he gets it wrong, the sky will not fall.

Compiler writers are going to understand, one way or another, whether predicates
go by "erroneous" rules or "ignore" rules. It really doesn't matter whether or
not the wording says they are "assertions".

> And the Standard is promised for delivery ASAP after March 1st (which
> has already passed). Most likely, that will be sometime next week.
> This wording all has to be done and letter balloted by then. That has me
> *very* scared.

Take some advice from Alfred E. Neuman.  ;-)

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  7:24 AM

> I don't think that works, because it would mean that both (1) you have
> no control over whether clients make a check or not

I really think it is a waste of time to worry about this. If people turn off
checks, they get what they get. I think that worrying too much about the status
of programs with checks that are turned off and fail is pointless.

I copy Yannick on this, because he is in charge of a major project
(Hi-Lite) that is concerned with mixing tests and proofs, and I think his
perspective may be useful. Yannick, can you review this thread?

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  7:30 AM

> You might be right in terms of the language design. But these are very
> important details - get them wrong, and I (and many other programmers)
> will not be able to use Dynamic_Predicates for a substantial fraction
> of their intended purpose. That's very bad.

You keep saying this, but frankly I don't see "many other programmers"
here, and I don't sympathize with your concerns, since I think they are
misplaced. You are trying to push the language into a "Randy Brukardt" mold here
when it just doesn't fit, and doesn't fit with normal Ada usage.

> Moreover, even if we agree we want Suppression rules, I don't know how
> to (re)word the rules in 11.4.1 that assume that predicates are "assertions".
> We had so much trouble getting them right as it is (and that's
> assuming that they're right now, probably a dubious assumption),
> redoing them is a very scary task.

It's such a minor point to me that I don't really care if the RM gets it 100%
right.

> And the Standard is promised for delivery ASAP after March 1st (which
> has already passed). Most likely, that will be sometime next week.
> This wording all has to be done and letter balloted by then. That has me *very* scared.

We can fix this with a binding interpretation if necessary. The Ada 2012 RM is
full of errors, I am sure of that; there is no reason to think we have succeeded
in producing an error-free document when we never succeeded in doing so before.

Just as there is nothing special about a day when you don't know any of the
remaining bugs in a piece of software, and in practice you release software with
known bugs, there is nothing special about a problem we know about in the RM
when we release the RM compared to the myriad problems in the RM that we don't
know about.

So stop being scared; the fear is based on the faulty perception that there is
some merit in passing the Ada 2012 RM on to the next stage on a day when we are
not aware of any errors!

****************************************************************

From: Brad Moore
Sent: Tuesday, March 6, 2012  8:16 AM

1) They are like constraints
or
2) They are like invariants,
       (or like preconditions/postconditions that are implicitly applied
        on all calls involving the subtype)

The initial problem was that our Ada 2005 assertion policy was too coarse. We
decided we needed more fine-grained control, which led to our current (almost)
proposal. We are still saying that we would ideally like to have the ability to
ensure that checks are enforced in certain situations. That tells me that our
current proposal isn't fine-grained enough.

I haven't completely thought this through, but...

What if we added another assertion policy identifier, in addition to Check and
Ignore, call it Force, say.

The idea here is that such an assertion policy cannot be overridden, and it
requires the checks to be performed (like Check).

You could think of this as setting a Checks_Forced aspect on all subtypes
covered by the scope of the assertion policy pragma.
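
A sketch of how that might look in a library package. This is purely
hypothetical: Force is the proposed (not adopted) policy identifier, and the
package, subtype, and subprogram names are all invented:

```ada
package Stacks is
   --  Hypothetical: a policy that clients could not override,
   --  guaranteeing the checks below are always performed.
   pragma Assertion_Policy (Force);

   type Stack is private;

   function Is_Full (S : Stack) return Boolean;

   procedure Push (S : in out Stack; X : Integer)
      with Pre => not Is_Full (S);   --  would always be checked

private
   type Stack is null record;        --  details elided for the sketch
end Stacks;
```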

Hopefully that would satisfy Randy and others who share his concerns (including
myself), although like Tucker, I'm still on the fence with regard to the straw
poll. If I had to say which way I am leaning, I would say option 1.

****************************************************************

From: Jeff Cousins
Sent: Tuesday, March 6, 2012  8:34 AM

Could this Force assertion policy also be used to force the checking of
constraints on subtypes? Or, if we're keeping the model of different but
parallel systems of control for constraints and assertions, should we invent an
Unsuppressable aspect to force the checking of constraints on subtypes?

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  8:38 AM

> What if we added another assertion policy identifier, in addition to
> Check and Ignore, call it Force, say.

I would not rush to add this to the RM. If such an assertion policy identifier
is useful, then implementations can add it.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  8:45 AM

A general thought here. I don't think we are going to be able to get all this
stuff 100% right the first time. I especially don't think that a bunch of
language designers with minimal experience in writing large Ada systems using a
contract approach are going to be able to resolve this in a week or two of
hurried email discussion.

Whatever we decide probably doesn't matter that much, since it will be subject
to refinement on use.

I definitely would like to get the input of users who DO have the experience of
writing large Ada systems using preconditions and postconditions (remember GNAT
has had these capabilities for a couple of years, and those pragmas were
added to support some large-scale customers writing critical systems, some of
them certified, who wanted to use a contract-based approach).

We definitely will get this input over time, and it may well lead us to modify
and refine what we have.

That's fine, Ada is an evolving language, and we improve, modify, and refine the
language all the time.

No need to get too crazy with the attempt to get the Ada 2012 RM perfect first
time around.

I think we will be nearer to the final result if we choose the constraint model
for predicates, but I also think it doesn't matter *that* much.

Let's do our best to get the details right, but let's not go crazy over it
(talking about delaying till 2014 is to me in the crazy category :-) :-))

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, March 6, 2012  9:48 AM

> No need to get too crazy with the attempt to get the Ada 2012 RM
> perfect first time around.

I agree with that, but we still must be careful not to put in anything that
would create incompatibilities to fix later.

For example, we do have to decide now whether we raise Constraint_Error or
Assertion_Error for failures of subtype predicates.

---
Now for the straw poll:
I tend to agree with the position that predicates are "extended constraints".
There is certainly no problem with static predicates. What worries me is the
whole notion of dynamic predicates.

I view a type as a set of values, and a subtype as a subset of that set.
It seems we went that way:
1) I want a subtype that excludes 0.
   subtype Not_Null is Integer with Static_Predicate => Not_Null /= 0;
   Fine, a static predicate.

2) Now, I want only even values. Let's do it the same way:
   subtype Even is Integer with Dynamic_Predicate => Even mod 2 = 0;
   Gosh, that's not static.  We have to allow dynamic predicates.

3) But now, we have that strange thing:
   N : Integer := Anything;
   subtype Strange is Integer with Dynamic_Predicate => Strange mod N = 0;

Here, the values are not moving, it is the boundaries of the subset that can
move at any time, putting a value in or out at random!


I really think we should concentrate on 1) and 2), making these work as
expected. Define what happens with 3) for the sake of portability, but don't
worry if it does strange things.

And if people remove checks, all bets are off. If you want something
guaranteed, write preconditions. I don't buy the argument "I'll make a subtype
because I need a precondition in various places"; that's just saving typing. If
the subtype makes sense from the problem domain, fine; otherwise use
preconditions.

After all, that may even make a clear cut between where to use predicates or
preconditions: use preconditions when you want guarantees even when checks are
suppressed.

Maybe add Implementation Advice that, /within preconditions/, the compiler
should not rely on predicates being true.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  10:11 AM

> I agree with that, but still must be careful to not put anything that
> would create incompatibilities to fix later.

Incompatibilities are an issue for implementations, not the standard per se. I
don't think that is too much of a concern. An implementation can always deal
with such problems if they are significant, by introducing switches etc.

> F.e., we do have to decide now if we raise Constraint_Error or
> Assertion_Error for failures of subtype predicates.

Well, no big deal; if you change your mind in six months, that will have zero
effect on the implementations out there. The new release of GNAT Pro that
would reflect any last-minute decisions in the RM won't be out for another 10
months.

In practice, the bigger compatibility problems are those caused by changes in
the last few months before the RM is issued. GNAT Pro version 7, which is what
people will be using for a year, was finalized a couple of months ago, and there
are some changes.

But not to worry; that's AdaCore's problem, and it's perfectly manageable. It's
not a problem for the standard.

> 2) Now, I want only even values. Let's do it the same:
>     subtype Even is Integer with Even mod 2 = 0; Gosh, that's not
> static.  We have to allow dynamic predicates
>
> 3) But now, we have that strange thing:
>     N : Integer := Anything;
>     subtype Strange is Integer with Strange mod N = 0;
>
> Here, the values are not moving, it is the boundaries of the subset
> that can move at any time, putting a value in or out at random!
>
>
> I really think we should concentrate on 1) and 2), making these work
> as expected. Define what happens with 3 for the sake of portability,
> but don't worry if it makes strange things.

I definitely agree with this

> And if people remove checks, all bets are off. If you  want something
> guaranteed, write preconditions. I don't buy the argument "I'll make a
> subtype because I need a precondition in various places", that's just
> saving typing. If the subtype makes sense from the  problem domain,
> fine, otherwise use preconditions.

I agree with this

****************************************************************

From: Bob Duff
Sent: Tuesday, March 6, 2012  10:20 AM

> No need to get too crazy with the attempt to get the Ada 2012 RM
> perfect first time around.

I agree with everything Robert said, including the parts I snipped.

I'd like to add:  We should avoid adding new stuff now, to maximize the
probability that we can do it right later, and minimize the amount of
ill-conceived baggage that might get orphaned when we do it right.  Also to
minimize the time we need to spend talking about such new features!

Adding Unsuppressable, Force, or the like seems unwise at this time, although
such things might well be a good idea for a binding interpretation in 2013-14.

Changing invariants to raise Constraint_Error is OK with me to do now.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  10:27 AM

I agree with all the above additions :-)

****************************************************************

From: Bob Duff
Sent: Tuesday, March 6, 2012  10:26 AM

> An implementation can always deal with such problems if they are
> significant, by introducing switches etc.

Right.

Also, note that we don't usually worry TOO much about incompatibilities
involving incorrect programs (which includes programs that turn off failing
checks, whether that be "ignore/safe" or "suppress/erroneous").

And note that even Constraint_Error vs. Assertion_Error isn't THAT big of a
deal, because assertions are primarily for detecting bugs, not for handling and
recovering from errors.

****************************************************************

From: Steve Baird
Sent: Tuesday, March 6, 2012  12:03 PM

Tucker Taft wrote:
> On the other hand, we could perhaps distinguish static and dynamic
> predicates, and say that the compiler can never assume a dynamic
> predicate evaluates to true, since we know dynamic predicates are not
> always well behaved.  On the other hand, a static predicate is pretty
> well defined, definitely doesn't have any side effects, and is about
> as equivalent to a range constraint as you can get, so it is well
> defined to say what it means to assume it is true even without
> checking it.

This sounds promising to me.

Robert Dewar wrote:
> I think it is fine to insist on the predicate check not being
> performed if it is suppressed. After all the only reason we don't
> insist on the check being suppressed in the constraint case is that it
> may be infeasible or too expensive to suppress the check (e.g. if it
> is free in hardware), and that reasoning does not apply to predicates.

No, the only reason we don't insist on the check being suppressed in the
constraint case is that we don't have well-defined semantics for an execution
which, for example, "successfully" writes to an out-of-bounds array element
because the index check was suppressed.

This case is, quite correctly, defined to be erroneous and erroneous means
anything can happen including performing the suppressed check and raising C_E
just as if the check had not been suppressed.

However, this supports your main point - "it is fine to insist on the predicate
check not being performed if it is suppressed" - because omitting an assertion
check doesn't introduce definitional problems.

Combining Tuck's and Robert's ideas, we get:

    - Suppressing any form of assertion except a static
      predicate check simply causes the assertion to be
      ignored. The implementation is not granted permission
      to make any extra assumptions - the hypothetical
      behavior of the execution if the check had been
      performed is completely irrelevant. A variation on
      this would be to have suppression grant a permission
      (as opposed to imposing a requirement) to ignore
      the assertion.

    - Suppressing a static predicate check follows the same
      model as suppressing a constraint check. The
      implementation is allowed to assume that the check
      would have passed, and execution is erroneous if this
      assumption turns out to be false.

I could live with this.

In earlier discussions, however, there was some support for the idea that we
don't want adding an assertion to make a program less reliable - the
scenario some folks were concerned about is:
     - user adds assertions for improved debugging, etc.
     - user disables them for the production version
     - some execution of the production version then becomes
       erroneous where it otherwise wouldn't have.
     - some catastrophic failure occurs which wouldn't have
       occurred if the assertion had never been written

If folks want to avoid this problem, I could also live with eliminating this
special treatment for static predicates and instead using the "suppression does
not allow assumptions" approach for all forms of assertions.

Robert Dewar wrote:
> I don't care, why not say it is a bounded error to have side effects,
> and the possible effects of a suppressed check is that it might or
> might not update the global or may set it to some unexpected value.

I agree with the spirit of this remark. As long as we have well-defined
semantics for suppressing an assertion check, I'm less concerned about the
details. I just want them to be well-defined.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  12:34 PM

> No, the only reason we don't insist on the check being suppressed in
> the constraint case is that we don't have well-defined semantics for
> an execution which, for example, "successfully" writes to an
> out-of-bounds array element because the index check was suppressed.

That's just not true historically. During the design of Ada 83, this was
specifically discussed, and it is absolutely the case that the main reason for
not insisting on the check being suppressed (*) is that on some hardware, e.g.
MIPS at the time, checking for integer overflow was free, and for quite a few
machines, checking for float overflow was free.

Since the idea of pragma Suppress was to speed up execution, it was silly to
require it to suppress no-cost checks!

> This case is, quite correctly, defined to be erroneous and erroneous
> means anything can happen including performing the suppressed check
> and raising C_E just as if the check had not been suppressed.

(*) Yes, I know this, but this is language lawyer stuff.
We know perfectly well what it would mean to require checks to be suppressed,
and we would know it when we saw it if a compiler did not suppress the checks.
The fact that you can't figure out how to say this formally in a language
definition is irrelevant.

That's why implementation advice is often much more powerful than requirements.
In IA, you can say all sorts of things that

a) make perfect sense
b) are perfectly easy for everyone to understand
c) can't be easily formalized

so if we had IA saying:

Implementation advice

The compiler should not generate any code corresponding to run time checks that
are suppressed.

We would understand that completely, and it might change implementations!

> However, this supports your main point - "it is fine to insist on the
> predicate check not being performed if it is suppressed" - because
> omitting an assertion check doesn't introduce definitional problems.

I really don't care much about definitional problems, I care about what the
intention of the language design is. If we adopt a suppression/constraint model
for predicates, I think it is just fine to have IA that says:

     The compiler should not generate any code corresponding
     to predicate checks that are suppressed.

And if some language lawyer wants to start mumbling about erroneous ..
anything-allowed .. including doing the predicate check, I won't pay much
attention. Worrying about what erroneous means is a big waste of time from a
practical point of view.

...
> In earlier discussions however, there was some support for the idea
> that we don't want adding an assertion to make a program less
> reliable - the scenario some folks were concerned about is
>       - user adds assertions for improved debugging, etc.
>       - user disables them for the production version
>       - some execution of the production version then becomes
>         erroneous where it otherwise wouldn't have.
>       - some catastrophic failure occurs which wouldn't have
>         occurred if the assertion had never been written

I think this scenario is 100% bogus. The rules of the language already allow
implementations to do unacceptable things (do I need to restate my
password-check/system-disk-deleted example?), or to implement multiplication by
recursive addition, but reasonable implementations don't do stupid things, just
because the RM gives them formal permission to do so.

> If folks want to avoid this problem, I could also live with
> eliminating this special treatment for static predicates and instead
> using the "suppression does not allow assumptions"
> approach for all forms of assertions.

I think that's a bad idea, you damage code quality for no good reason at all.

> Robert Dewar wrote:
>> I don't care, why not say it is a bounded error to have side effects,
>> and the possible effects of a suppressed check is that it might or
>> might not update the global or may set it to some unexpected value.
>
> I agree with the spirit of this remark. As long as we have
> well-defined semantics for suppressing an assertion check, I'm less
> concerned about the details. I just want them to be well-defined.

Worrying about well-definedness when checks are suppressed is a total waste of
time in my opinion. It won't change what implementations do, and it won't change
what programmers do.

****************************************************************

From: Steve Baird
Sent: Tuesday, March 6, 2012  1:02 PM

>> No, the only reason we don't insist on the check being suppressed in
>> the constraint case is that we don't have well-defined semantics for
>> an execution which, for example, "successfully" writes to an
>> out-of-bounds array element because the index check was suppressed.
>
> That's just not true historically.

You're right.
I probably should have said something like
    "The main reason this is still the definition today, regardless
     of how the language started out, is ..."

I conclude this from more recent ARG discussions.

> I really don't care much about definitional problems, I care about
> what the intention of the language design is.
> If we adopt a suppression/constraint model for predicates, I think it
> is just fine to have IA that says:
>
>     The compiler should not generate any code corresponding
>     to predicate checks that are suppressed.
>

Sounds good to me.

>
>> If folks want to avoid this problem, I could also live with
>> eliminating this special treatment for static predicates and instead
>> using the "suppression does not allow assumptions"
>> approach for all forms of assertions.
>
> I think that's a bad idea, you damage code quality for no good reason
> at all.

I agree. I was just listing another option for those who object to the
possibility of assertions introducing a new possibility for erroneous execution.

****************************************************************

From: Yannick Moy
Sent: Tuesday, March 6, 2012  9:00 AM

> I copy Yannick on this, because he is in charge of a major project
> (Hi-Lite) that is concerned with mixing tests and proofs, and I think
> his perspective may be useful. Yannick, can you review this thread?

The issue for mixing tests and proofs is the ability to check dynamically
(during testing) the assumptions that we are making statically (for proofs). So
it does not matter how checks are enabled by the customer, because in this
special testing mode, all RM checks will be enabled, and some additional checks
will be performed. For example, we assume for proofs that all subcomponents of
scalar type of an IN parameter of composite type are valid, and so we'll check
during testing that this is the case when calling the subprogram (the new
'Valid_Scalars attribute that you are going to implement!). This accounts for
the fact that the caller might be verified by testing and the callee by formal
verification.

So we must recompile all code to be able to insert the additional checks for
testing, therefore the choice between alternatives (1) and (2) should not have
any effect on our strategy above.
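The validity check described above can be sketched in Ada. Note that
'Valid_Scalars is a GNAT-specific attribute (at the time of this thread still
being implemented), and the package, record, and subprogram names below are
invented purely for illustration:

```ada
--  Hypothetical illustration: a composite IN parameter whose scalar
--  subcomponents are assumed valid for proof, and checked during testing.
package Readings is
   type Sensor_Reading is record
      Channel : Natural;
      Value   : Float;
   end record;
   procedure Process (R : in Sensor_Reading);
end Readings;

package body Readings is
   procedure Process (R : in Sensor_Reading) is
   begin
      --  In the testing mode described above, the effect is as if this
      --  check were inserted at the start of every such body
      --  ('Valid_Scalars is GNAT-specific, not standard Ada):
      pragma Assert (R'Valid_Scalars);
      null;  --  real processing would go here
   end Process;
end Readings;
```

With checks of this form inserted during a test build, a caller verified by
testing cannot silently pass an invalid scalar to a callee verified by proof.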

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  4:00 PM

> > You might be right in terms of the language design. But these are
> > very important details - get them wrong, and I (and many other
> > programmers) will not be able to use Dynamic_Predicates for a
> > substantial fraction of their intended purpose. That's very bad.
>
> Randy, this is just over-the-top pessimistic!
>
> I'm quite sure we WILL get these rules wrong.  So what?
> We'll work it out over the next couple of years.
> Implementers will do what their customers need, and ARG will adjust
> the rules accordingly.

THAT'S exactly what I'm afraid of. 3rd party library developers are NOT the
customers of any implementer. They *share* customers with implementers. At best,
they are related to an implementer (so internal needs *might* get fixes made,
but surely at a lower priority than those carrying the freight).

The whole reason for having a standard is to put all of the customers and all of
the implementers on as even a footing as possible. That includes those with less
money!

> > Moreover, even if we agree we want Suppression rules, I don't know
> > how to (re)word the rules in 11.4.1 that assume that predicates are
> > "assertions". We had so much trouble getting them right as it is
> > (and that's assuming that they're right now, probably a dubious assumption),
> > redoing them is a very scary task.
>
> Don't panic.  Steve is going to produce some wording.
> If he gets it wrong, the sky will not fall.

This wording is far more important than the "Ignore" rules. It essentially
defines what can and cannot be used in assertions, and it's not something that
we can change substantially after the fact.

So while the "sky won't fall", Ada 2012 will. Perhaps AdaCore is comfortable in
ignoring the standard when convenient, but in that case, why have it at all?

> Compiler writers are going to understand one way or another, whether
> predicates go by "erroneous" rules or "ignore" rules.
> It really doesn't matter whether or not the wording says they are
> "assertions".

This has *nothing* to do with compiler writers. It has *everything* to do with
what the creator of a subprogram can assume inside the subprogram based on the
subtypes of the parameters. If checks off are erroneous (or if the checks can be
"pinned" on, as with Preconditions), then the subprogram can assume the checks
are made. But if the subprogram cannot assume the checks are made and the
program is *not* erroneous if the checks fail, then the subprogram has to repeat
the checks in the body (in which case, it is silly to include them in the
specification in the first place).

Most importantly, it will be completely impossible to change any existing
package to use predicates rather than comments and body checks (especially
language-defined packages), because in a checks-off situation, the checks will
not be made as specified in the original interface. (I'll discuss this more in
my response to J-P.) But a language feature that ought to be but cannot be used
to describe Text_IO and containers and streams is seriously flawed and
definitely not ready for prime-time.

> > And the Standard is promised for delivery ASAP after March 1st
> > (which has already passed). Most likely, that will be sometime next week.
> > This wording all has to be done and letter balloted by then. That
> > has me
> > *very* scared.
>
> Take some advice from Alfred E. Newman.  ;-)

Joyce has just told me that she needs the completed Standard on Monday.
That's impossible even without this issue, and I'm unwilling to "finish" a
standard that doesn't at least try to address this issue in some way. It's
idiotic to standardize something with known bugs in important areas.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  4:02 PM

> > I don't think that works, because it would mean that both (1) you have
> > no control over whether clients make a check or not
>
> I really think it is a waste of time to worry about this. If people
> turn off checks, they get what they get, I think that worrying too
> much about the status of programs with checks that are turned off and
> fail is pointless.

It would be nice if that was true, but it surely is not true of assertions as it
stands. Which means that they cannot be used to describe language-defined
packages (or anything else existing), since there is no "exception" for someone
having turned checks off.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  4:14 PM

> > You might be right in terms of the language design. But these are
> > very important details - get them wrong, and I (and many other
> > programmers) will not be able to use Dynamic_Predicates for a
> > substantial fraction of their intended purpose. That's very bad.
>
> You keep saying this, but frankly I don't see "many other programmers"
> here, and I don't sympathize with your concerns, since I think they
> are misplaced. You are trying to push the language into a "Randy
> Brukardt"
> mold here when it just doesn't fit, and doesn't fit with normal Ada
> usage.

Both Erhard and Geert indicated that they have similar issues during the recent
ARG meeting, and J-P has indicated such in e-mail. That's far more than no one
else. I'm not going to try to guess whether they care about predicates at all
(J-P seems to have said that he'd rather the Preconditions are used anyway), but
I find it bizarre to define a feature that cannot safely be used for its
intended purpose.

...
> We can fix this with a binding interpretation if necessary.
> The Ada 2012 RM is full of errors, I am sure of that, no reason to
> think we have succeeded in producing an error free document where we
> never succeeded in doing this before.

It has nothing to do with an "error-free" document. It's about producing a
document that has the critical features "right". If dynamic predicates cannot be
used on parameters because someone might turn off the checking, then we have a
serious problem that needs to be addressed before going forward.

This is *not* some corner case of little importance. It is all about whether you
can even use predicates to describe the interface of a package (which was the
only reason I wanted them in the first place - I much prefer
constraints/predicates to preconditions).

> So stop being scared, it is based on this faulty perception that there
> is some merit in passing the Ada 2012 RM on to the next stage on a day
> when we are not aware of any errors!

I will not pass this standard forward if we are aware of any errors in critical
functionality. That would be idiotic. You can find someone else to do this if
you want to do that.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  4:33 PM

> It would be nice if that was true, but it surely is not true of
> assertions as it stands. Which means that they cannot be used to
> describe language-defined packages (or anything else existing), since
> there is no "exception" for someone having turned checks off.

So what? You seem to be fixated on this issue, but I just don't see it as
critical. If someone turns checks off, they get what they get if the checks
fail. If they are concerned about what happens when checks fail, they should be
sure not to turn them off unless they are sure they won't fail.

I just don't get it, and don't see what I am missing???

****************************************************************

From: Bob Duff
Sent: Tuesday, March 6, 2012  4:35 PM

> THAT'S exactly what I'm afraid of. 3rd party library developers are
> NOT the customers of any implementer. They *share* customers with
> implementers.

Randy, it's going to be several years, optimistically, before there are multiple
Ada 2012 implementations.  That's plenty of time for ARG to nail down some rules
about turning off assertions.

> This wording is far more important than the "Ignore" rules. It
> essentially defines what can and cannot be used in assertions, and
> it's not something that we can change substantially after the fact.

Heh?  All the rules about assertions are already nailed down, EXCEPT for the
part about turning off the checks.

> So while the "sky won't fall", Ada 2012 will. Perhaps AdaCore is
> comfortable in ignoring the standard when convinient, but in that
> case, why have it at all?

Hyperbole.  AdaCore doesn't ignore the standard.

> This has *nothing* to do with compiler writers. It has *everything* to
> do with what the creator of a subprogram can assume inside the
> subprogram based on the subtypes of the parameters. If checks off are
> erroneous (or if the checks can be "pinned" on, as with
> Preconditions), then the subprogram can assume the checks are made.
> But if the subprogram cannot assume the checks are made and the
> program is *not* erroneous if the checks fail, then the subprogram has
> to repeat the checks in the body (in which case, it is silly to include them
> in the specification in the first place).

AdaCore is using assertions internally already, and some customers are also.  If
you don't want to use them in your libraries, because of an (IMHO misguided)
fear of users who turn off checks, that's fine. Or you could just tell your
users not to turn off checks.  The checks-on semantics are fine today.

> Most importantly, it will be completely impossible to change any
> existing package to use predicates rather than comments and body
> checks (especially language-defined packages), because in a checks-off
> situation, the checks will not be made as specified in the original
> interface. (I'll discuss this more in my response to J-P.) But a
> language feature that ought to be but cannot be used to describe
> Text_IO and containers and streams is seriously flawed and definitely not
> ready for prime-time.

To use preconditions or predicates in Text_IO, for ex, one needs a way to
specify which exception gets raised (as you suggested at the Kemah meeting).  I
strongly agree that's a good idea, and I strongly believe it's too late to add
that feature to Ada 2012 now.

> Joyce has just told me that she needs the completed Standard on Monday.

So do the best you can in the next 2 weeks, and then give it to Joyce.

> That's impossible even without this issue, and I'm unwilling to
> "finish" a standard that doesn't at least try to address this issue in
> some way. It's idiotic to standardize something with known bugs in important
> areas.

Assertions are important, but the exact semantics of turning them off is not.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  4:40 PM

> F.e., we do have to decide now if we raise Constraint_Error or
> Assertion_Error for failures of subtype predicates.

The exception raised is close to irrelevant. No matter what is chosen, it will
be wrong for the vast majority of interfaces. We've already decided that we need
more control, just that it is too late now to define that control.

Just look at the language-defined packages, and look at things that ought to be
defined as predicates or preconditions, and see which exceptions they raise:

Containers.Vectors: Constraint_Error and Program_Error.
Text_IO: Status_Error and Mode_Error.
Generic_Elementary_Functions: Argument_Error.

I know the same is true in Claw, and I'd expect it to be true in AWS, GTKAda,
and all of the other existing libraries out there.

...
> And if people remove checks, all bets are off. If you  want something
> guaranteed, write preconditions. I don't buy the argument "I'll make a
> subtype because I need a precondition in various places", that's just
> saving typing. If the subtype makes sense from the  problem domain,
> fine, otherwise use preconditions.

I strongly disagree with this. This is not about "saving typing", it's about
readability by not cluttering the interface with mostly irrelevant junk. This
depends on the subtype having a descriptive name, of course, but it seems
similar to using named subtypes with constraints in the specification.

Consider Stream_IO.

I would probably want to define:

    subtype Open_File_Type is File_Type
       with Dynamic_Predicate => Is_Open (Open_File_Type),
            Predicate_Fail_Raises => Status_Error;

and then use it in the various routines:

    function Name (File : in Open_File_Type) return String;
    function Form (File : in Open_File_Type) return String;
    procedure Set_Index (File : in Open_File_Type; To : in Positive_Count);

Using preconditions would greatly clutter the specification (and obscure the
real differences between the routines, and finding the routines themselves):

    function Name (File : in File_Type) return String
       with Pre => Is_Open (File),
            Pre_Fail_Raises => Status_Error;

    function Form (File : in File_Type) return String
       with Pre => Is_Open (File),
            Pre_Fail_Raises => Status_Error;

    procedure Set_Index (File : in File_Type; To : in Positive_Count)
       with Pre => Is_Open (File),
            Pre_Fail_Raises => Status_Error;

One could make an argument that these preconditions are clearer documentation
than the predicates, but (1) clearly they are much larger; (2) the rule
requiring this check isn't even given explicitly in A.12.1 [actually, I have to
wonder if it is missing], repeating it everywhere when the authors of the
Standard didn't feel the need to do so seems wrong; (3) why do we have a
different rule about constraints?

Specifically, I doubt that there is anyone who would say that you have to
define Set_Index as:

    procedure Set_Index (File : in File_Type; To : in Count'Base)
       with Pre => Is_Open (File) and To > 0;

because this is "clearer documentation" than using a subtype with a constraint!
So why should "Open_File_Type" be different??

But this example highlights the difference between Preconditions and Predicates.
With the proposed semantics, it is OK to use a constraint to describe this
interface (because the execution is erroneous with checking off), and OK to use
a precondition (so long as Assertion_Policy (Check) is applied locally; then the
precondition will always be checked), but it is not OK to use a predicate (because
if the checks are off at the call site and the check would fail, the program is
not erroneous but it also would not follow the language-defined specification --
the exception raised probably would be the wrong one or none at all).

I find this unacceptable, precisely because I think most programmers will think
of constraints and predicates as interchangeable (especially with Suppression
semantics). If so, that had better be true, including when checks are suppressed
somewhere. Otherwise, lots of people are going to misuse predicates and get
burned.

There may be some other way to deal with this (perhaps some sort of ImplPerm to
void the checking semantics of predefined packages when called with checks off),
but I think it would be best to either make constraints/predicates *exactly*
equivalent (including erroneousness) or leave predicates completely as
assertions. The middle ground is just a mess and leaves predicates not usable
for their #1 intended purpose (at least in existing packages).

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  4:47 PM

> > ...
> > it is erroneous if the check would have returned False at the point
> > of a suppressed check.
>
> With suppressed checks, Ada05 gives us an unambiguous definition of
> when a program crosses the line into erroneousness.
>
> The line doesn't seem so clearly defined when we allow arbitrary
> boolean expressions.
>
> Consider, for example, a predicate which has side effects.
>
> I know that such predicates are considered to be poorly behaved, but I
> still want the language to be well-defined if someone does choose to
> write such a thing.

Why do we care? A compiler is allowed to reject such a program if it can
identify it (within limits). So by definition, such code is not portable. Making
it a bit more non-portable seems irrelevant.

> What is the state of a program which depends on the value of a global
> variable that a predicate function "would have"
> updated?
>
> Is an implementation allowed (but of course not required) to perform a
> suppressed assertion check?

There can't be such a thing as a "suppressed assertion check". If we go with
this model, these are simply checks, and they're definitely not assertions.
Probably we would want them to be covered by some assertion rules, but that
would require explicit inclusion.

In any case, a program which depends on such a state is non-portable (the check
may or may not be made), and thus I don't see why we care.

I'd have to see a realistic example of why this matters in some case where it
would not be possibly illegal by the 11.4.1 rules before I could even get 10%
concerned.

(This seems mainly to me to be an excuse to screw up the rules more than a real
concern.)

That is, either make predicates 100% like constraints (with the possible
addition of a few assertion rules on top), or make them 100% like assertions.
Some sort of 90% rule seems like madness to me.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  4:49 PM

> THAT'S exactly what I'm afraid of. 3rd party library developers are
> NOT the customers of any implementer. They *share* customers with
> implementers. At best, they are related to an implementer (so internal
> needs *might* get fixes made, but surely at a lower priority than those
> carrying the freight).

Well we have lots of partnership agreements with 3rd party library developers,
and work closely with them. After all you can't just blindly implement to your
interpretation of the RM and expect it to work out of the box. Randy, if you are
planning on developing libraries for Ada 2012, I would encourage you to open a
partnership with AdaCore (at the moment the only Ada 2012 implementor in town
:-)). This costs you nothing and we will be happy to work with you closely to
make sure that you can effectively market your products to our customers, which
is in everyone's interest.

> The whole reason for having a standard is to put all of the customers
> and all of the implementers on as even a footing as possible. That
> includes those with less money!

Well ultimately the issue is what implementors implement and not what the
standard says. Of course AdaCore tries to follow the RM development as closely
as possible, but with any formal release there will be differences. Right now,
GNAT is based on the RM as it was several months ago, it will be another year
before an updated version comes out. And of course we will try our hardest to
remove remaining discrepancies, but that means we have quite a bit of time to
iron out problems!

> This wording is far more important than the "Ignore" rules. It
> essentially defines what can and cannot be used in assertions, and
> it's not something that we can change substantially after the fact.

Of course we can change this substantially after the fact, we have always made
substantial changes after the fact. Yes, you have to worry about implementors,
but there is nothing

> So while the "sky won't fall", Ada 2012 will. Perhaps AdaCore is
> comfortable in ignoring the standard when convinient, but in that
> case, why have it at all?

Don't indulge in over-board rhetoric.
Talking about Ada 2012 failing just means you have lost perspective. AdaCore
will continue to follow the standard as closely as it can. If we find
significant problems in the standard we will point them out and the standard
will get fixed. If we find significant problems in GNAT, then we will fix those
problems. As I say we have a year on the implementation side to get things in
shape which should be enough time.

(though I promise that even after a year, significant bugs will still be present
both in the RM, and in GNAT!)

> This has *nothing* to do with compiler writers. It has *everything* to
> do with what the creator of a subprogram can assume inside the
> subprogram based on the subtypes of the parameters. If checks off are
> erroneous (or if the checks can be "pinned" on, as with
> Preconditions), then the subprogram can assume the checks are made.
> But if the subprogram cannot assume the checks are made and the
> program is *not* erroneous if the checks fail, then the subprogram has
> to repeat the checks in the body (in which case, it is silly to include them
> in the specification in the first place).

I disagree, it is perfectly reasonable for the spec of a 3rd party library to
say that if checks are turned off, and the checks fail, then all bets are off.
This is what any user of the library will assume anyway.

> Most importantly, it will be completely impossible to change any
> existing package to use predicates rather than comments and body
> checks (especially language-defined packages), because in a checks off
> siituation, the checks will not be made as specified in the original
> interface. (I'll discuss this more in my response to J-P.) But a
> language feature that ought to be but cannot be used to describe
> Text_IO and containers and streams is seriously flawed and definitely not
> ready for prime-time.

I think this is just seriously flawed thinking, and I don't really know where it
comes from. For example, if you turn off constraint checking and you pass
rubbish to standard library routines, you get rubbish behavior. That's always
been the case, what's the big deal???

> Joyce has just told me that she needs the completed Standard on Monday.
> That's impossible even without this issue, and I'm unwilling to
> "finish" a standard that doesn't at least try to address this issue in
> some way. It's idiotic to standardize something with known bugs in important areas.

And that is the attitude that you need to review. The standard will have bugs in
important areas. The fact that you know them does not make any difference from
not knowing them. There is nothing special about a day on which you don't know
about any of the serious bugs in important areas that remain!

To think otherwise is a recipe for uselessly delaying release indefinitely,
since there may never be such a day.

We certainly never have released a version of GNAT Pro that did not contain
known errors. That's because the release process takes months, and in those
months we will find errors with fixes that are too late to incorporate.

In this situation, we document the problems, and fix them, and customers who
need a particular fix can get an intermediate unofficial release that fixes
known problems.

I don't see the development of a language standard as being any different!

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  4:53 PM

> Both Erhard and Geert indicated that they have similar issues during
> the recent ARG meeting, and J-P has indicated such in e-mail. That's
> far more than no one else. I'm not going to try to guess whether they
> care about predicates at all (J-P seems to have said that he'd rather
> the Preconditions are used anyway), but I find it bizarre to define a
> feature that cannot safely be used for its intended purpose.

Let Erhard and Geert and JP speak for themselves. I know you are misrepresenting
Geert's viewpoint, because I just spent a couple of hours talking about this
issue with him today!

> It has nothing to do with an "error-free" document. It's about
> producing a document that has the critical features "right". If
> dynamic predicates cannot be used on parameters because someone might
> turn off the checking, then we have a serious problem that needs to be
> addressed before going forward.

You think it's a serious problem, it's not at ALL clear that this is a majority,
let alone a consensus position!

> This is *not* some corner case of little importance. It is all about
> whether you can even use predicates to describe the interface of a
> package (which was the only reason I wanted them in the first place -
> I much prefer constraints/predicates to preconditions).

Well it's not the only reason I wanted them, and I think you will find your
viewpoint is not a consensus position.

> I will not pass this standard forward if we are aware of any errors in
> critical functionality. That would be idiotic. You can find someone
> else to do this if you want to do that.

We may have to!
Obviously the issue of whether the standard is passed on is not the decision of
one person, it is a collective decision of the ARG. No one person can hold it
hostage to their own viewpoint.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  4:55 PM

...
> > This wording is far more important than the "Ignore" rules. It
> > essentially defines what can and cannot be used in assertions, and
> > it's not something that we can change substantially after the fact.
>
> Heh?  All the rules about assertions are already nailed down, EXCEPT
> for the part about turning off the checks.

I strongly believe that if we want predicates to act like constraints, that's
fine (and indeed the way I originally envisioned them), but in that case they
are *not* assertions (assertions raise Assertion_Error, are controlled by
assertion policy, etc.). We would want some of the assertion rules to apply to
them, but surely not all (probably not even a large number, maybe just the
"error if significant side-effects" rules).

That will require rewriting some or all of the assertion rules, at a very late
date.

I am strongly opposed to trying to make these half-assertions and
half-constraints -- that way lies madness (mine, at least).

>> Most importantly, it will be completely impossible to change any
>> existing package to use predicates rather than comments and body
>> checks (especially language-defined packages), because in a checks-off
>> situation, the checks will not be made as specified in the
>> original interface. (I'll discuss this more in my response to J-P.)
>> But a language feature that ought to be but cannot be used to
>> describe Text_IO and containers and streams is seriously flawed and
>> definitely not ready for prime-time.

> To use preconditions or predicates in Text_IO, for ex, one needs a way to
> specify which exception gets raised (as you suggested at the Kemah meeting).
> I strongly agree that's a good idea, and I strongly believe it's too late
> to add that feature to Ada 2012 now.

Right (I agree with all of the above), but so what? If the checking behavior is
broken such that the predicates aren't checked as required by the standard in
such an interface, you can't use them even if you have the right exceptions
getting raised. That's what we have to get right now; the rest of it will have
to wait (hopefully not very long).

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 6, 2012  4:56 PM

> ...
> To use preconditions or predicates in Text_IO, for ex, one needs a way
> to specify which exception gets raised (as you suggested at the Kemah
> meeting).  I strongly agree that's a good idea, and I strongly believe
> it's too late to add that feature to Ada 2012 now.

Geert had an interesting suggestion of writing a precondition as:

   procedure Read(File : File_Type; ...)
     with Pre => Is_Open(File) or else Raise_Status_Error;

This seems like a neat way to get the right exception raised, at least for a
precondition.

For what that's worth.
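
If it helps to visualize, the helper might be sketched like this (the
name and packaging are my guesses, not part of the suggestion):

   with Ada.IO_Exceptions;
   package File_Checks is
      function Raise_Status_Error return Boolean;
   end File_Checks;

   package body File_Checks is
      function Raise_Status_Error return Boolean is
      begin
         --  Never returns normally; raises the exception that the
         --  Text_IO spec requires for a closed file, rather than
         --  letting the precondition fail with Assertion_Error.
         raise Ada.IO_Exceptions.Status_Error;
         return False;  --  unreachable
      end Raise_Status_Error;
   end File_Checks;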

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012   4:57 PM

> Containers.Vectors: Constraint_Error and Program_Error.
> Text_IO: Status_Error and Mode_Error.
> Generic_Elementary_Functions: Argument_Error.

Geert has been looking at the GEF issues, and has some very nice solutions
that require no change in the language whatever we decide.

> I know the same is true in Claw, and I'd expect it to be true in AWS,
> GTKAda, and all of the other existing libraries out there.

I would not speak for others!

> I strongly disagree with this. This is not about "saving typing", it's
> about readability by not cluttering the interface with mostly irrelevant junk.
> This depends on the subtype having a descriptive name, of course, but
> it seems similar to using named subtypes with constraints in the specification.

Yes, similar indeed!

> There may be some other way to deal with this (perhaps some sort of
> ImplPerm to void the checking semantics of predefined packages when
> called with checks off), but I think it would be best to either make
> constraints/predicates *exactly* equivalent (including erroneousness)
> or leave predicates completely as assertions. The middle ground is
> just a mess and leaves predicates not usable for their #1 intended
> purpose (at least in existing packages)

I agree a middle ground is dubious. I would like constraints and predicates to
be as similar as possible (I never like the term equivalent, even without exact
in front of it). And indeed similar wrt erroneousness. I think that's the right
way to go.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012   5:00 PM

> To use preconditions or predicates in Text_IO, for ex, one needs a way
> to specify which exception gets raised (as you suggested at the Kemah
> meeting).  I strongly agree that's a good idea, and I strongly believe
> it's too late to add that feature to Ada 2012 now.

Geert is busy adding preconditions and postconditions to our numeric library
routines; it looks like things are working out VERY nicely without needing any
new features in this direction.

I will let him speak for himself on the details!

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012   5:02 PM

> > It would be nice if that was true, but it surely is not true of
> > assertions as it stands. Which means that they cannot be used to
> > describe language-defined packages (or anything else existing), since
> > there is no "exception" for someone having turned checks off.
>
> So what? You seem to be fixated on this issue, but I just don't see it
> as critical. If someone turns checks off, they get what they get if
> the checks fail. If they are concerned about what happens when checks
> fail. they should be sure not to turn them off unless they are sure
> they won't fail.
>
> I just don't get it, and don't see what I am missing???

That's fine *if the language says that*. It *does* say that for constraints, but
not for assertions. For assertions other than predicates, you can use a local
assertion policy to "pin" them turned on, so you can be covered. But there is
nothing that will allow predicates to be either forced to be checked or which
makes execution "incorrect" if they would have failed.

For instance, consider using predicates as I suggested in my reply to J-P, in
Stream_IO. The standard says "Status_Error is raised if File is not open." It does
*not* say "Status_Error is raised if File is not open and predicate checks are
not suppressed." So an implementation using a predicate would not be conformant
to the standard. This seems bad, especially as a similar constraint would be OK.
(If we want these to act like constraints, they ought to act 100% like
constraints, not 75%. I think you actually agree, you just don't know it. :-)

See my reply to J-P for more detail.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 6, 2012  5:02 PM

> I am strongly opposed to trying to make these half-assertions and
> half-constraints -- that way lies madness (mine, at least).

I tend to agree with this, although it is far from driving me mad. Indeed, I
don't think it's that important what we decide.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  5:08 PM

> > To use preconditions or predicates in Text_IO, for ex, one needs a
> > way to specify which exception gets raised (as you suggested at the
> > Kemah meeting).  I strongly agree that's a good idea, and I strongly
> > believe it's too late to add that feature to Ada 2012 now.
>
> Geert is busy adding preconditions and postconditions to our numeric
> library routine, looks like things are working out VERY nicely without
> needing any new features in this direction.

I believe Geert was going to use the proposed mechanism for specifying the
exception to be raised. I know he was *very* interested in it.

In any case, preconditions work "right" in this sense; you can force them to be
checked at all calls regardless of the caller's assertion policy. So you can
match the existing semantics exactly.

I just want to be able to use predicates in the same way.
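
(For concreteness, the sort of pinning I have in mind, using the per-aspect
form of Assertion_Policy from this AI; the package and names are invented:)

   package Stream_Lib is
      pragma Assertion_Policy (Pre => Check);  --  pinned in the spec
      type File_Type is limited private;
      function Is_Open (File : File_Type) return Boolean;
      procedure Read (File : in out File_Type)
        with Pre => Is_Open (File);  --  checked at every call,
                                     --  regardless of the caller's
                                     --  assertion policy
   private
      type File_Type is limited record
         Open : Boolean := False;
      end record;
   end Stream_Lib;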

****************************************************************

From: Steve Baird
Sent: Tuesday, March 6, 2012  5:16 PM

Randy Brukardt wrote:
> If checks off are erroneous (or if the checks can be "pinned" on, as
> with Preconditions), then the subprogram can assume the checks are
> made. But if the subprogram cannot assume the checks are made and the
> program is *not* erroneous if the checks fail, then the subprogram has
> to repeat the checks in the body (in which case, it is silly to
> include them in the specification in the first place).

Tucker Taft wrote:
> I think a rule that associates the assertion policy with the location
> of the "final" explicit predicate expression that is applicable to the
> subtype could work.

Randy would like a way to have a predicate he can count on, at least for
incoming parameters. Seems like a good idea to me.

Let's look at a variation on Tuck's approach (or perhaps this is Tuck's approach
exactly - I'm not sure).  If the assertion policy in effect at the point of a
predicate specification is Ignore, then that new contribution to the subtype's
predicate is ignored (with respect to runtime checking; not with respect to
static semantics or membership tests). However, any "inherited" predicates are
unaffected by the assertion policy in effect at the point of inheritance. For
example:

    declare
      pragma Assertion_Policy (Check);
      subtype Even is Integer with Dynamic_Predicate => Even mod 2 = 0;

      package Pkg is
        pragma Assertion_Policy (Ignore);

        subtype Multiple_Of_Six is Even
          with Dynamic_Predicate => Multiple_Of_Six mod 3 = 0;
      end Pkg;

      X : Pkg.Multiple_Of_Six := 4; -- succeeds; mod 3 check was ignored
      Y : Pkg.Multiple_Of_Six := 3; -- fails; mod 2 check was not ignored

This means that subtype Even can be trusted if you declare a parameter of
subtype Even.

This approach also eliminates all the conformance issues that I was bringing up
earlier; a given subtype has the same properties in all contexts.

Randy continued:
>> I know that such predicates are considered to be poorly
>> behaved, but I still want the language to be well-defined if
>> someone does choose to write such a thing.
>
> Why do we care? A compiler is allowed to reject such a program if it
> can identify it (within limits). So by definition, such code is not portable.
> Making it a bit more non-portable seems irrelevant.

I'm starting to think you are right. I can live with erroneousness in this case
(more specifically, I feel less nervous about this than I did before).

Randy also said:
> That is, either make predicates 100% like constraints (with the
> possible addition of a few assertion rules on top), or ...

Have you changed your position? I thought that in Houston you were saying that
you didn't want assertions to provide a new path to erroneousness (see the
"catastrophic failure" scenario in a previous message from me today).

I'm going to hit the send button before trying to digest the flurry of messages
on this thread that have arrived while I was composing this one. Hopefully this
won't be OBE at birth (well, actually, it would be great if there is a wonderful
solution that everyone is happy with waiting in my mailbox).

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  5:33 PM

...
> Randy also said:
> > That is, either make predicates 100% like constraints (with the
> > possible addition of a few assertion rules on top), or ...
>
> Have you changed your position? I thought that in Houston you were
> saying that you didn't want assertions to provide a new path to
> erronousness (see the "catastrophic failure" scenario in a previous
> message from me today).

Not really. I'd prefer there wasn't a new path to erroneousness. But I was
trying to explain what I could and could not live with. While I don't like the
"erroneous" solution all that much, it has the advantage of being consistent
with the current language (vis-a-vis constraints). As such, I think I'd be
sawing off the limb I'm sitting on if I tried to oppose that! (I could make a
case that the current definition of Suppress is harmful, but I doubt that anyone
would care, because we're not changing that.) OTOH, I would oppose a "solution"
that neither provides checking guarantees nor the "erroneous" escape hatch.

****************************************************************

From: Steve Baird
Sent: Tuesday, March 6, 2012  5:49 PM

> If the assertion policy
> in effect at the point of a predicate specification is Ignore, then
> that new contribution to the subtype's predicate is ignored (with
> respect to runtime checking; not with respect to static semantics or
> membership tests). However, any "inherited" predicates are unaffected
> by the assertion policy in effect at the point of inheritance.

If we don't want to require that implementations go through the bother of having
to maintain, in effect, two different predicates for one subtype (one for
checking, one for membership tests), we could also allow implementations to
reject a predicate spec if it applies to a subtype that already has a predicate
spec and the two specs differ with respect to applicable assertion policy.

The Multiple_Of_Six subtype in my previous example could then be rejected (we
could make the rejection either optional or mandatory).

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  6:04 PM

> If we don't want to require that implementations go through the bother
> of having to maintain, in effect, two different predicates for one
> subtype (one for checking, one for membership tests), we could also
> allow implementations to reject a predicate spec if it applies to a
> subtype that already has a predicate spec and the two specs differ
> with respect to applicable assertion policy.
>
> The Multiple_Of_Six subtype in my previous example could then be
> rejected (we could make the rejection either optional or mandatory).

Wouldn't that break privacy? If the predicate was on the full type of a private
type, you couldn't see it, and rejecting based on something you can't see is a
no-no. Also, there might be contract problems (subtype of a formal type adds a
predicate, actual subtype has a predicate).

My original idea was essentially as you've outlined (sans legality rules).

Tucker's idea was to have the *last* subtype that explicitly has a predicate
control whether the checking is on or off (for the entire predicate). That
somewhat avoids the "two predicate" problem, by making the "checking" one
trivial (or identical to the membership one).

That might have some problems with inheritance, but arguably that's in the noise
compared with the alternatives. (And arguably, if you are overriding routines
that had checking on with routines that have checking off, you know what you are
doing, or at least shouldn't be surprised.)
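
(To make the contrast with Steve's variation concrete; this is my reading of
the proposal, not settled wording:)

   pragma Assertion_Policy (Check);
   subtype Even is Integer with Dynamic_Predicate => Even mod 2 = 0;

   package Pkg is
      pragma Assertion_Policy (Ignore);
      --  The *last* explicit predicate is given here, under Ignore,
      --  so under Tucker's rule the entire predicate of
      --  Multiple_Of_Six (including the inherited mod-2 part) goes
      --  unchecked.
      subtype Multiple_Of_Six is Even
        with Dynamic_Predicate => Multiple_Of_Six mod 3 = 0;
   end Pkg;

   X : Pkg.Multiple_Of_Six := 3;  --  no run-time check at all
   Y : Even := 3;                 --  still fails: Even's policy is Check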

****************************************************************

From: Bob Duff
Sent: Tuesday, March 6, 2012  5:50 PM

> > Heh?  All the rules about assertions are already nailed down, EXCEPT
> > for the part about turning off the checks.
>
> I strongly believe that if we want predicates to act like constraints,
> that's fine (and indeed the way I originally envisioned them), but in
> that case they are *not* assertions (assertions raise Assertion_Error,
> are controlled by assertion policy, etc.).

That's fine with me, too.

> That will require rewriting some or all of the assertion rules, at a
> very late date.

OK, but Steve has volunteered to write up some wording.  Yes, it's late, and we
might get it (slightly) wrong -- no big deal.  Anyway, I suggest we quit all
this handwringing, and wait for Steve's wording.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  6:07 PM

...
> OK, but Steve has volunteered to write up some wording.  Yes, it's
> late, and we might get it (slightly) wrong -- no big deal.

I could live with "slightly", but I worry that it would be much worse.

> Anyway, I suggest we quit all this handwringing, and wait for Steve's
> wording.

No "handwringing" here. Gnashing of teeth and wild pacing, perhaps. :-)

In any case, if he'd get on with it, we could stop discussing this. But he
wanted guidance, and that requires considering all of the possibilities,
including FUD about some. :-)

****************************************************************

From: Steve Baird
Sent: Tuesday, March 6, 2012  6:17 PM

>> The Multiple_Of_Six subtype in my previous example could then be
>> rejected (we could make the rejection either optional or mandatory).
>
> Wouldn't that break privacy? If the predicate was on the full type of
> a private type, you couldn't see it, and rejecting based on something
> you can't see is a no-no. Also, there might be contract problems
> (subtype of a formal type adds a predicate, actual subtype has a predicate).

Yes, you're right. Scratch that dumb idea.

> My original idea was essentially as you've outlined (sans legality rules).
>
> Tucker's idea was to have the *last* subtype that explicitly has a
> predicate control whether the checking is on or off (for the entire
> predicate). That somewhat avoids the "two predicate" problem, by
> making the "checking" one trivial (or identical to the membership one).

Right. And what's wrong with that approach? It seems like it still buys you what
you want - a subtype whose predicate can be trusted. If you have a subtype
Speckled_Foo declared with assertion policy Check in force, and if you declare
your exported subprogram's parameter to be of subtype Speckled_Foo, then what's
the problem?

Somebody else can declare a subtype of Speckled_Foo with Checks_Ignored and
supply a new predicate of, say, True, but that won't help them break your
interface.

What is the objection to Tuck's proposal?

****************************************************************

From: Randy Brukardt
Sent: Tuesday, March 6, 2012  6:39 PM

I don't have one. I was concerned that there was a possible hole via inheritance
and overriding and dispatching, but I think I was wrong about that, as
overriding requires subtype conformance (for dispatching primitives), and as
such you can't change the name of the subtype involved. Thus you can't change
the state of the checking, either.

Thus I'm wholeheartedly in favor of working out the details of Tuck's latest
proposal to see if any problems pop up. Otherwise, it seems to be the least
change from the previous proposals and has the right semantics on the margin.

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 6, 2012  6:43 PM

...
> Let's look at a variation on Tuck's approach (or perhaps this is
> Tuck's approach exactly - I'm not sure). If the assertion policy in
> effect at the point of a predicate specification is Ignore, then that
> new contribution to the subtype's predicate is ignored (with respect
> to runtime checking; not with respect to static semantics or
> membership tests). However, any "inherited" predicates are unaffected
> by the assertion policy in effect at the point of inheritance. For
> example...

I started with that, but didn't like having "partial" predicates.  I believe it
should either be on or off.  Not halfway on.

****************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, March 7, 2012  4:04 AM

Hard to stay in sync with all these messages arriving while you sleep...
Some specific responses:

On 06/03/2012 23:40, Randy Brukardt wrote:
> Using preconditions would greatly clutter the specification (and
> obscure the real differences between the routines, and finding the routines
> themselves):
>
>     function Name (File : in File_Type) return String
>        with Pre => Is_Open (File),
>             Pre_Fail_Raises => Status_Error;
>
>     function Form (File : in File_Type) return String
>        with Pre => Is_Open (File),
>             Pre_Fail_Raises => Status_Error;
>
>     procedure Set_Index (File : in File_Type; To : in Positive_Count)
>        with Pre => Is_Open (File),
>             Pre_Fail_Raises => Status_Error;

As a matter of personal taste, I'm not worried by this, but YMMV of course. I'm
certainly willing to accept it if it allows for a simple rule like:
  "predicates are controlled by the caller, as a callee either you
   recheck or you accept erroneousness.
   preconditions are controlled by the callee, the caller can't bypass
   them"

> For instance, using predicates as I suggested in J-P's reply in Stream_IO.
> The standard says "Status_Error is raised if File is not  open." It
> does
> *not* say "Status_Error is raised if File is not open and predicate
> checks are not suppressed." So an implementation using a predicate
> would not be conformant to the standard.

But Set_Col expects a value of subtype Positive_Count. With check suppressed,
you could pass it a value of 0. Would you say that an implementation that does
not move the cursor left of the left edge of the screen is not conformant ;-)?
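
(Sketched in today's terms, with the Text_IO names from the standard:)

   with Ada.Text_IO; use Ada.Text_IO;
   procedure Demo is
      pragma Suppress (Range_Check);
      Zero : Count := 0;
   begin
      Set_Col (Zero);  --  range check on the Positive_Count parameter
                       --  is suppressed, so 0 reaches Set_Col and
                       --  execution is erroneous
   end Demo;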

> In any case, preconditions work "right" in this sense; you can force
> them to be checked at all calls regardless of the caller's assertion
> policy. So you can match the existing semantics exactly.
>
> I just want to be able to use predicates in the same way.

That's where we don't agree: I think there is more value in having predicates
AND preconditions if they DON'T behave the same way. The one you chose depends
on the kind of semantics you want, not on the amount of typing involved
(slightly tongue in cheek).

>  I'd prefer there wasn't a new path to erroneousness.

This is the original reason for all this discussion. I don't think it is a /new/
path for erroneousness. We extend the notion of constraint, therefore we extend
the corresponding /existing/ erroneousness to cover those new features. I'd say
that, presented this way, it would be no surprise to the user.

Anyway, a fundamental point of suppressing checks is that it should be done only
when you can prove (more or less) formally that the suppressed condition cannot
happen. I would go as far as saying that a compiler that sees checks suppressed
is allowed to assume that the condition has been formally proven by means
outside its own realm. The user who fails to ensure this will get what he
deserves (erroneousness, nasal daemons, ...)

-------------
At this point, I propose a motion:

Preconditions are not assertions, they are "extended constraints"
=> Failing a precondition raises Constraint_Error.
=> Checking preconditions is governed by pragma Suppress at the point
   where a check is required.
=> It is erroneous to force a false precondition by means of suppressing
   checks.

Any seconder? Straw poll?

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 7, 2012  7:45 AM

> As a matter of personal taste, I'm not worried by this, but YMMV of
> course. I'm certainly willing to accept it if it allows for a simple
> rule like:
>    "predicates are controlled by the caller, as a callee either you
>     recheck or you accept erroneousness.

That seems just right to me; I think most library writers will be happy to
accept erroneousness (if callers suppress checks and then pass garbage, it
seems fine to me to consider this erroneous).

An implementation can certainly provide some pragma, policy or whatever to
implement the rechecking automatically, and I think it's worth allowing some
time to figure out how to do that in the most useful way, rather than rushing to
design this now.

>     preconditions are controlled by the callee, the caller can't bypass
>     them"

Again, an implementation may want to have a mode, pragma, or policy that does
allow them to be ignored under control of the caller. After all, for example in
certified systems, you often really don't want runtime checks of ANY kind at
all, since they can generate a lot of extra work in coverage testing.

****************************************************************

From: Jeff Cousins
Sent: Wednesday, March 7, 2012  8:26 AM

What about when the library is delivered as a DLL - increasingly the case -
possibly compiled with a different version of the compiler (or even a
different vendor, or revision of Ada)?  Then the only checking will be
whatever minimum is enforced by the linker, which often doesn't amount to
much more than matching up symbol names.  I think it would be very difficult
for predicates in the library to be enforced on the caller.  The library
would either have to be paranoid and re-check, or any assumptions about its
use spelt out in black and white in its user guide.

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 7, 2012  8:34 AM

At least in Ada, you would still be providing the Ada package spec in source
code, and you could still have your Assertion_Policy pragma there.  Of course if
they edit the source code for the provided Ada package spec, then all bets are
off.  But I think the general idea that "normal" users won't cause trouble doing
their own "normal" thing is a reasonable goal.

****************************************************************

From: Jeff Cousins
Sent: Wednesday, March 7, 2012  8:37 AM

Some users I've seen would edit the specs just so in the hope of making their
code more efficient or avoiding irritating errors...  :-(

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 7, 2012  8:59 AM

Sure, and that is why it may be appropriate to have switches, pragmas, or
policies that cause contract stuff to be ignored. If that makes more sleepless
nights for Randy, sorry, but we need things that accommodate all users' needs :-)

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 7, 2012  12:07 PM

No, that's fine by me. So long as it is clearly a "non-standard mode" in the
sense of the Standard, it's not a problem. If some customer wants us to support
Claw in such a mode (one it was not designed or tested for), they can pay extra
$$$ for the service. It's the "normal" case that I worry about.

And I fully realize that some customers have requirements that are beyond the
norm. For instance, if you need 100% coverage, you can't use the sort of
"belt-and-suspenders" code that I typically write, simply because some of the
"suspenders" checks are necessarily redundant and thus dead (and untestable)
paths no matter what input you give. You (somewhat perversely) have to make your
code less robust to meet such a requirement. But if you have this requirement,
then you need a simpler version of some things than you would normally use, and
it's fine to support such customers.

I'm not particularly worried about Claw being used in a verification environment
(anyone using Windows to underlie flight-critical software has completely lost
their mind), so I don't think this will really come up in practice. But I could
imagine it happening in other contexts (perhaps using the socket subset of Claw,
extended Ada containers, etc.)

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 7, 2012  12:10 PM

> At least in Ada, you would still be providing the Ada package spec in
> source code, and you could still have your Assertion_Policy pragma
> there.  Of course if they edit the source code for the provided Ada
> package spec, then all bets are off.  But I think the general idea
> that "normal" users won't cause trouble doing their own "normal" thing
> is a reasonable goal.

Moreover, it is perfectly reasonable for the Ada compiler to make the checks in
the body in this case. It already has to be able to do that (at least for some
subset of assertions) in order to support dispatching, and it probably has to do
that to support Export, and this seems to be the same sort of thing.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 7, 2012  12:28 PM

...
> > For instance, using predicates as I suggested in J-P's reply in Stream_IO.
> > The standard says "Status_Error is raised if File is not  open." It does
> > *not* say "Status_Error is raised if File is not open and predicate
> > checks are not suppressed." So an implementation using a predicate
> > would not be conformant to the standard.
> But Set_Col expects a value of subtype Positive_Count. With check
> suppressed, you could pass it a value of 0. Would you say that an
> implementation that does not move the cursor left of the left edge of
> the screen is not conformant ;-)?

That's OK because execution is erroneous in that case. So anything is
"conformant". The proposal I was reacting to was that predicates would be
different and not erroneous.

> >  I'd prefer there wasn't a new path to erroneousness.

> This is the original reason for all this discussion. I don't think it
> is a /new/ path for erroneousness. We extend the notion of constraint,
> therefore we extend the corresponding /existing/ erroneousness to
> cover those new features. I'd say that, presented this way, it would
> be no surprise to the user.

Well, it's certainly a large expansion of erroneousness, especially as it now
involves arbitrary expressions (and possibly side-effects) rather than just
compares against precomputed values.

> Anyway, a fundamental point of suppressing checks is that it should be
> done only when you can prove (more or less) formally that the
> suppressed condition cannot happen. I would go as far as saying that a
> compiler that sees checks suppressed is allowed to assume that the
> condition has been formally proven by means outside its own realm. The
> user who fails to ensure this will get what he deserves
> (erroneousness, nasal daemons, ...)

I'm actually dubious that this is the right model for some existing Ada checks.
In particular, range checks and null exclusion checks both have well-defined
semantics as to what happens if the check is not made [you get an invalid
value]. (This is not true for index checks, rendezvous checks, etc.) Like
predicates, there is no good reason for these to cause erroneous execution when
not made.

Well, except that the model for suppress involves locations rather than
declarations, so you can't tell inside a subprogram whether or not the checks
were made. This IMHO is a completely bogus reason for introducing erroneousness
(13.9.1(12/3) is another case where Ada introduces erroneousness to make life
easier for the implementer without any semantic need). Especially as a different
model for suppression would have eliminated the problem entirely.

We're obviously not going to change any of this now, but I surely understand
people who do not want to expand such "unnecessary" erroneousness.

> -------------
> At this point, I  propose a motion:
>
> Preconditions are not assertions, they are "extended constraints"
> => Failing a precondition raises Constraint_Error => Checking
>    preconditions is governed by pragma Suppress at the point
>    where a check is required.
> => It is erroneous to force a false precondition by means of
>    suppressing checks.

I could live with this, but I'd prefer the model that Steve is working on (based
on the last subtype declaration with a predicate). It would not introduce
"unnecessary" erroneousness, and it's much closer to the model currently
described in the Standard.

As to the exception raised, that's a separate (and unrelated IMHO) question.
And I don't think it matters; we have to be able to specify the exception raised
as described in the not yet written AI12 split from AI05-0290-1. Geert indicated
that he would implement it himself in GNAT because he needed it that much, so
it's likely to be an available feature quite soon. (I surely will include that
feature from the start when I implement predicates in Janus/Ada, it's very
valuable.)

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, March 7, 2012  12:46 PM

I have always believed that Randy's specific examples are better expressed as
PRE-conditions rather than wrapped into subtype predicates. I appreciate Randy's
point, though, that in general there should not be a distinction between
parameter type "Positive" and a PRE that forces the parameter to be positive.

Still, if we cannot make it work any other way (in an easy to understand model),
I opt for the alternative (2) including the erroneousness.

P.S. And I do not give a hoot for users that have side-effects in their subtype
predicates (or assertions). They will just have to live with the fact that the
side-effects come and go as they like.

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 7, 2012  12:49 PM

> That's OK because execution is erroneous in that case. So anything is
> "conformant". The proposal I was reacting to was that predicates would
> be different and not erroneous.

I agree that this would be an undesirable situation. I am in favor of
erroneousity :-)

> Well, it's certainly a large expansion of erroneousness, especially as
> it now involves arbitrary expressions (and possibly side-effects)
> rather than just compares against precomputed values.

I don't see it as a large expansion.

> I'm actually dubious that this is the right model for some existing
> Ada checks. In particular, range checks and null exclusion checks both
> have well-defined semantics as to what happens if the check is not
> made [you get an invalid value]. (This is not true for index checks,
> rendezvous checks,
> etc.) Like predicates, there is no good reason for these to cause
> erroneous execution when not made.

I am confused, I thought the general rule was that any run-time check that is
suppressed and fails results in erroneousness.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 7, 2012  1:08 PM

...
> > I'm actually dubious that this is the right model for some existing
> > Ada checks. In particular, range checks and null exclusion checks
> > both have well-defined semantics as to what happens if the check is
> > not made [you get an invalid value]. (This is not true for index
> > checks, rendezvous checks, etc.) Like predicates, there is no good
> > reason for these to cause erroneous execution when not made.
>
> I am confused, I thought the general rule was that any run-time check
> that is suppressed and fails results in erroneousness.

That's the rule for checks that are *suppressed*. But Ada also goes to great
lengths to say that even checks that are formally required don't actually have
to be made, and describes what happens if they are not made (you get invalid
values). Since we have that model, there is no semantic need to go all the way
to erroneousness when such a check (like a range check) is not made because of
suppression - there is no semantic difference between suppression and check
optimization. The result of a suppressed range check is just potentially
invalid, and that by itself does not need to cause any erroneous execution.

For most other language-defined checks, there is no semantic model of what
happens if a check is not made (indexing out of range, for instance), and thus
erroneous execution is the only choice. Indeed, there is a requirement that an
invalid value be detected in such situations (you can defer range checks, but
you have to make index checks); so suppression is clearly a different beast
here.

In any case, I'm not proposing a change here; it's decades too late to make such
a fundamental model change for Suppression.

What I was saying is that predicates are much like range checks; there is no
semantic need for execution to be erroneous after they are not checked. There
are other reasons that that might need to be the case, but they are pragmatic
and not semantic. As such, the best situation is to arrange the rules so the
pragmatic problems don't arise either. Only if that isn't possible should we go
to erroneous execution. (Steve is working on such a solution, I believe.)
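Randy's distinction can be sketched in Ada. This is a hypothetical illustration,
not taken from the AI; note that the RM as written still makes a failed
suppressed check erroneous, which is exactly the rule being argued against here:

```ada
procedure Demo is
   pragma Suppress (Range_Check);
   subtype Pct is Integer range 0 .. 100;
   I : Integer := -5;
   P : Pct;
begin
   P := I;  --  range check suppressed: P may now hold an out-of-range value
   --  Semantically, nothing worse than an invalid value need result; 'Valid
   --  can still detect it, unlike, say, a failed suppressed index check.
   if not P'Valid then
      raise Constraint_Error;
   end if;
end Demo;
```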

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 7, 2012  1:16 PM

> That's the rule for checks that are *suppressed*. But Ada also goes to
> great lengths to say that even checks that are formally required don't
> actually have to be made, and describes what happens if they are not
> made (you get invalid values). Since we have that model, there is no
> semantic need to go all the way to erroneousness when such a check
> (like a range check) is not made because of suppression - there is no
> semantic difference between suppression and check optimization. The
> result of a suppressed range check is just potentially invalid, and
> that by itself does not need to cause any erroneous execution.

Sure, I understand all this. No problem; I was thinking of the Suppress case.

I prefer to just go with the erroneousness for suppressed predicates; frankly, I
don't think programmers understand the intermediate case. To them there is only
right code and wrong code, not really different degrees of wrongness :-)

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 7, 2012  12:52 PM

> I have always believed that Randy's specific examples are better
> expressed as PRE-conditions rather than wrapped into subtype predicates.
> I appreciate Randy's point, though, that in general there should not
> be a distinction between parameter type "Positive" and a PRE that
> forces the parameter to be positive.

But there already is such a distinction, if you pass a non-positive value to a
positive parameter and checks are suppressed, you are in erroneous territory
anyway.

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 7, 2012  1:00 PM

> And I fully realize that some customers have requirements that are
> beyond the norm. For instance, if you need 100% coverage, you can't
> use the sort of "belt-and-suspenders" code that I typically write,
> simply because some of the "suspenders" checks are necessarily
> redundant and thus dead (and
> untestable) paths no matter what input you give. You (somewhat
> perversely) have to make your code less robust to meet such a
> requirement. But if you have this requirement, then you need a simpler
> version of some things than you would normally use, and it's fine to support such customers.

It is not the case that adding run-time checks, exceptions, etc. necessarily
makes code more robust. Surely Ariane-5 taught us that much!

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 7, 2012  1:23 PM

That's certainly true, but I was also thinking of handling cases that are not
needed by callers. For instance, the query functions in the Janus/Ada compiler
try to handle every kind of type, but it is quite possible that some of them are
never called for certain kinds of types. That code can't be covered by basic
coverage analysis (it could be with a special test harness, but that's a lot of
work for dead code). That's always a problem when covering reusable code in a
system.

And I probably used the wrong word when I said "robust". In my projects, an
early detection of errors and prevention of wrong answers is the most important
criterion. I'd rather the programs didn't even try in that case, just report/log
the error and tell the user to call tech support (preferably without damaging
the ability to perform other operations). So that's what I think of as "robust".
[Aside: That's even true of the web server; returning *something* to a
bad/broken query is precisely how security holes are introduced. Doing nothing
is safer, and much more recoverable. Similarly, I don't want the compiler trying
to recover from internal errors; that's just asking for the generation of bad code.]
But it's obvious that there are many systems where completing the operation is
paramount -- asking the pilot to call tech support while landing the plane is
silly.

Moral: one project's "robust" is another project's "fragile". One size does not
fit all.

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 7, 2012  1:25 PM

> Moral: one project's "robust" is another project's "fragile". One size
> does not fit all.

yes, absolutely!

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 7, 2012  1:31 PM

Steve is going to take a shot at writing up a new version.  It might be worth
waiting to see whether what he produces is reasonable as a version that can go
into the 2012 standard that we won't regret later.

****************************************************************

From: Steve Baird
Sent: Wednesday, March 7, 2012  1:43 PM

In the interests of avoiding big changes at this late date, I am taking Tuck's
previous version as a starting point and trying to change it as little as
possible. The main change is that the assertion policy for a subtype no longer
varies with where the subtype is used.

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, March 7, 2012  5:33 PM

> Geert had an interesting suggestion of writing a precondition as:
>
>   procedure Read(File : File_Type; ...)
>     with Pre => Is_Open(File) or else Raise_Status_Error;

If the decision goes the way it seems to go, please make sure to put this in the
discussion in lieu of Randy's subtype predicate example with the same
functionality that is currently there. In Houston, I blindly copied it; I should
already have turned it into a PRE example.

I'd go as far as to ratify this change even at such a late date!
It is such an obvious and major improvement without hidden traps.

[Editor's note: Further discussion on this topic is included in AI12-0022-1,
which formally proposed this feature. - Editor.]
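Geert's trick relies on a Boolean function that raises an exception instead of
ever returning False. A minimal sketch of what such a function might look like
(the body shown is an assumption for illustration, not part of the AI):

```ada
with Ada.IO_Exceptions;

function Raise_Status_Error return Boolean is
begin
   raise Ada.IO_Exceptions.Status_Error;
   return True;  --  never reached; the function always raises
end Raise_Status_Error;

--  Used in a precondition, the "or else" short circuit means the function
--  is called (and Status_Error raised) only when Is_Open (File) is False:
--
--  procedure Read (File : File_Type; ...)
--    with Pre => Is_Open (File) or else Raise_Status_Error;
```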

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, March 7, 2012  5:35 PM

> Changing invariants to raise Constraint_Error is OK with me to do now.

Presumably you meant subtype predicates, not type invariants.
(This whole discussion is not about type invariants, which are fine the way they
are.)

I agree, too, that under choice (2), subtype predicate checks need to raise C_E.

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 7, 2012  5:39 PM

Yes, I agree with this--it's an incompatibility, but in practice not a big one
(it is rare for people to write handlers for either exception in a case like
this).

****************************************************************

From: Steve Baird
Sent: Thursday, March 8, 2012  12:24 PM

> In the interests of avoiding big changes at this late date, I am
> taking Tuck's previous version as a starting point and trying to
> change it as little as possible. The main change is that the assertion
> policy for a subtype no longer varies with where the subtype is used.

See attached. [This is version /06 of the AI. - Editor.]

Highlights include:
    New rules for deciding whether predicate checking is enabled for
    a subtype. This is generally determined by the assertion policy
    in effect at the point of the "nearest" predicate specification.

    Deal with variant_parts the same way we already handle case stmts
    and case exprs - C_E if no choice covers selector value.

    Fix tiny hole in definition of "statically compatible".

    Add AARM comments explaining how pragmas Assertion_Policy
    and Suppress interact with instantiations.

Thanks to Tuck and Bob for preliminary review and suggested improvements.

****************************************************************

From: Robert Dewar
Sent: Thursday, March 8, 2012  2:51 PM

> Highlights include:
>      New rules for deciding whether predicate checking is enabled for
>      a subtype. This is generally determined by the assertion policy
>      in effect at the point of the "nearest" predicate specification.
>
>      Deal with variant_parts the same way we already handle case stmts
>      and case exprs - C_E if no choice covers selector value.
>
>      Fix tiny hole in definition of "statically compatible".
>
>      Add AARM comments explaining how pragmas Assertion_Policy
>      and Suppress interact with instantiations.

I think we should have failed predicates raise CE

****************************************************************

From: Steve Baird
Sent: Thursday, March 8, 2012  3:02 PM

> I think we should have failed predicates raise CE

I was trying to minimize last-minute Ada 2012 changes so I didn't even consider
this.

However, the wording for this change would be trivial if we decide to do it;
just change the final sentence of 3.2.4(23)

    Assertions.Assertion_Error is raised if any of these checks fail.
to
    Constraint_Error is raised if any of these checks fail.

We just need to agree on what we want to do.

****************************************************************

From: Tucker Taft
Sent: Thursday, March 8, 2012  3:32 PM

I don't think I agree with Robert.

I do agree that at some point we should provide a way to control exactly which
exception it raises, but we should leave the default for now as Assertion_Error.
It is controlled by Assertion_Policy and it is called an "assertion expression."

If we had gone the other route, and made it controlled by Suppress and not
called an "assertion expression" then I would agree it should raise
Constraint_Error.

Raising Constraint_Error is not really what you want in the long run either.
You would like a user-specified exception.  But at this point, consistency
counts, and based on the proposed AI, that means the default should be
Assertion_Error, in my view.

****************************************************************

From: Robert Dewar
Sent: Thursday, March 8, 2012  3:39 PM

> If we had gone the other route, and made it controlled by Suppress and
> not called an "assertion expression" then I would agree it should
> raise Constraint_Error.

I think we should have gone the other route and have it controlled by suppress.

Actually, come to think of it, in GNAT, since we have something called
Predicate, we could devise our own rules for that. Interestingly, that means you
would have both possibilities.

Bob, what do you think?

****************************************************************

From: Bob Duff
Sent: Thursday, March 8, 2012  5:08 PM

> Bob, what do you think?

I think we should all stop distracting Randy from the job at hand, which is to
crank out an RM with minimal changes from what we've already got.  Steve has
done a good job of nailing down the last remaining item.

I suggest we defer all talk about 2020 features and impl-def features until WG9
approves the RM.

P.S. I'm going on vacation right about now...

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 8, 2012  6:12 PM

I take it you're in favor of starting a letter ballot on Steve's version of the
AI? I was thinking of doing that tonight, but was not sure if it was premature.

****************************************************************

From: Bob Duff
Sent: Thursday, March 8, 2012  6:25 PM

> I take it you're in favor of starting a letter ballot on Steve's
> version of the AI? I was thinking of doing that tonight, but was not
> sure if it was premature.

OK, if you think that's the best way forward.

I vote "yes" on Steve's proposal.

****************************************************************

From: Robert Dewar
Sent: Thursday, March 8, 2012  7:40 PM

I also vote yes

****************************************************************

From: Tucker Taft
Sent: Thursday, March 8, 2012  8:19 PM

> ... I take it you're in favor of starting a letter ballot on Steve's
> version of the AI? I was thinking of doing that tonight, but was not
> sure if it was premature.

I vote "yes" as well.

****************************************************************

From: Geert Bosch
Sent: Thursday, March 8, 2012  9:46 PM

After finally having read this in detail, and compared with the existing
languages in each section, I'd like to vote in favor of this version of the AI.

I spotted two minor typos in the proposal and discussion sections, and suggest
one more sentence for the discussion, but that is at the editor's discretion.

...
> Hence, there needs to be fine control over the checks of various kinds
> of assertions. Control for any but the Assert Pragma cannot be
> sensibly based on a policy in effect at the point of the check. For
> example, this would cause invariant checks to be performed only
> occasionally.  The proposal is generally that the policy in effect at
> the time of the aspect specification controls all its checks.
> A key distinction between supressing a check and ignoring an assertion
                           sup{p}ressing
> (by specifying a policy of "Ignore") is that in the latter case, the
> assertion expressions that are not evaluated because of the policy are
> *not* assumed to be true.
...
>!discussion
...
> A clear separation was made between language checks that, if failing
> but suppressed, render the program erroneous, and assertions, for
> which suppression does not cause language issues that lead to
> erroneousness. This is one of the reasons why assertion control was
> not subsumed under the pragmas Suppress and Unsuppress with
> appropriate check names. For example, in verified code, the policy
> Ignore for Assertions make sense without impacting the program semantics.
> (Supressing a range check on the other hand can lead to abnormal
> values.)
  Sup{p}ressing

...
> If we didn't have the "On_Failure" aspect here, using the predicate as
> a precondition in lieu of a condition explicitly checked by the code
> would change the exception raised on this failure to be
> Assertion_Error. This would obviously not be acceptable for existing
> packages and too limiting for future packages. For upward
> compatibility, similar considerations apply to preconditions, so the
> "On_Failure" aspect is needed for them as well. We could also imagine
> that after one On_Failure aspect has been specified, additional
> preconditions could be defined for the same subprogram with distinct
> On_Failure aspects specifying distinct expressions to be raised.

We might want to add the following (Geert's trick):
{With the language as defined by the 2012 standard, the same result can be
achieved by assertion conditions calling functions that raise the required
exceptions on failure.}

> -----
>
> For pragma Inline (and for Assertion_Policy), we decided to eliminate
> the rule that said that the presence of a pragma Suppress or
> Unsuppress at the point of a call would affect the code inside the
> called body, if pragma Inline applies to the body. Given that inlining
> is merely advice, and is not supposed to have any significant semantic
> effect, having it affect the presence or absence of checks in the
> body, whether or not it is actually inlined, seemed unwise.
> Furthermore, in many Ada compilers, the decision to inline may be made
> very late, so if the rule is instead interpreted as only having an
> effect if the body is in fact inlined, it is still a problem, because
> the decision to inline may be made after the decision is made whether to
> insert or omit checks.

(I'm so happy with the above language! Inlining should never have an effect on
semantics.)

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, March 9, 2012  12:36 AM

> If a pragma Assertion_Policy applies to a generic_instantiation, then
> the pragma Assertion_Policy applies to the entire instance.
>
> AARM note: This means that an Assertion_Policy pragma that occurs in a
> scope enclosing the declaration of a generic unit but not also
> enclosing the declaration of a given instance of that generic unit
> will not apply to assertion expressions occurring within the given instance.

Is this even possible with shared generics? I know they are now out of fashion,
but this seems to require full duplication.

-------------
I have a feeling that since Steve was asked to write this one, there was more
support for predicates-like-constraints (C_E, suppress, erroneous).

Are we sure we want predicates-as-assertions? This is not something we can
change later.

I vote yes for the rest of the AI.

****************************************************************

From: Tucker Taft
Sent: Friday, March 9, 2012  12:55 AM

> I have a feeling that since Steve was asked to write this one, there
> was more support for predicates-like-constraints (C_E, suppress, erroneous).

I don't quite understand what you are saying.  Steve wrote it up as
predicates-as-assertions.  Are you saying you wished he had written it as
"predicates-as-constraints," or are you saying you think *other* people might
have wished for that? I recommend you just vote for yourself at this point.

> Are we sure we want predicates-as-assertions? This is not something we
> can change later.

I think an implementor could certainly add a "Suppress(Predicate_Check)" with
the obvious erroneousness-oriented semantics, even if we define the
Assertion_Policy model as the standard one for predicates.

> I vote yes for the rest of the AI.

Not sure what you mean by this.

****************************************************************

From: Randy Brukardt
Sent: Friday, March 9, 2012  1:30 AM

> > If a pragma Assertion_Policy applies to a generic_instantiation, then
> > the pragma Assertion_Policy applies to the entire instance.
> >
> > AARM note: This means that an Assertion_Policy pragma that occurs in a
> > scope enclosing the declaration of a generic unit but not also
> > enclosing the declaration of a given instance of that generic unit
> > will not apply to assertion expressions occurring within the given instance.
>
> Is this even possible with shared generics? I know they are now out of
> fashion, but this seems to require full duplication.

Thanks for noticing this one, I missed it. The answer is NO; there is no
possible way to do any significant sharing with this model. I don't care that
much, in that I'll simply ignore this "rule" in Janus/Ada; it's impossible and
makes no sense at all, so why care about it?

I suppose one could pass in a bunch of Assertion_Policy flags to an instance and
turn them off that way, but that's a nasty distributed overhead for something
rarely used (most of the time, these will be all on or all off, or will have
policies inside the generic that will override the one applying to the instance
in any case).

Note that the similar rule works for Suppress, simply because that is a
permission that can be ignored. I'd still prefer the model for assertions (*all*
assertions) is that as well, but as I've never gotten any traction with that,
I'll shut up about that.

I actually don't see any model of Assertion_Policy that could work for both
shared generics and duplicated generics, short of making it
implementation-defined, which is unpleasant at best. Saying that the policy that
applies to the generic unit applies to the instance would work fine for shared
generics and surely could be implemented for duplicated generics, but it seems
like a pain in the latter case and it's not clear that it would make sense to
users. (One could imagine splitting the spec and body in this way, but that's
just too confusing.)

So I guess I'm saying I don't care enough to try to change it; but if we did, it
should be the exact opposite (the policy governing the generic rules; the one
governing the instance doesn't matter inside it). (I may regret this stance
someday, but whatever.)

> -------------
> I have a feeling that since Steve was asked to write this one, there
> was more support for predicates-like-constraints (C_E, suppress,
> erroneous).

I think at least some of the supporters of that position had not considered all
of the implications, and have changed their positions yet again.

> Are we sure we want predicates-as-assertions? This is not something we
> can change later.

They (at least dynamic predicates) have to be at least partially assertions
-- if not, then side-effects are allowed and have a defined meaning, protected
actions could get hosed by blocking, and so on. It's not possible for dynamic
predicates to be completely constraints.

It would seem to be too confusing to treat static predicates differently from
dynamic predicates in this area.

It's clear that the exception raised is close to irrelevant; if you care, you'll
specify it by whatever gizmo we end up designing (or by using Geert's trick).

I, at least, am quite opposed to "unnecessary" erroneousness (that is,
erroneousness for a construct which has well-defined semantics); there is far
too much of it in the standard already. [As I previously noted, I don't think
the suppression of range checks should be allowed to cause erroneous execution.]

Finally (and leastly), no one has ever volunteered to do the rewrites of 3.2.4,
4.6, 11.4.2, and 11.5 needed to implement a Suppress solution. It's at least
twice as much wording as Steve has (and that includes rewriting wording that
only exists in my meeting notes). Steve's solution is much closer to the
original intent, and I think that the one thing that's rather wordy can be
simplified.

> I vote yes for the rest of the AI.

Like Tuck, I'm rather mystified by this. At least half of the rest of the AI
would have to be redone if predicates changed status, along with AI05-0274-1.
So please vote Yes or No or Abstain. (Ballot coming in minutes.)

****************************************************************

From: Randy Brukardt
Sent: Friday, March 9, 2012  12:51 AM

I've got a concern and an editorial comment about Steve's AI. (Actually, I saw
more while writing this...)

First, the concern. 6.1.1(31/3) and 6.1.1(35/3) were changed to use "enable",
which is fine. And "enable" (6.1.1(19/3)) was changed to say "see 11.4.2". But I
don't see anything in the new 11.4.2 (or the old one, for that matter) that
ever discusses when a precondition or postcondition is in fact enabled. In
contrast, there is a massive discussion in 11.4.2 about subtype predicates and
when they're enabled.

Specifically, 6.1.1(19/3) says:

If required by the Pre, Pre'Class, Post, or Post'Class assertion policies (see
11.4.2) in effect at the point of a corresponding aspect specification
applicable to a given subprogram or entry, then the checks for the respective
preconditions or postconditions are @i(enabled) for that subprogram or entry.

But I don't see where "if required by" is defined. 11.4.2(10/3) might be
attempting to do this, but it seems too much of a leap from "Check means the
check is performed" (which is what 11.4.2 says) to "Check means the check is
enabled" (which is what I hope we're saying here). After all, we never say
*what's* required in the "if" part here. Maybe the "if" doesn't say quite
enough: "if {performing checks is }required by the Pre, ...". (If we do that, we
need a similar fix for 7.3.2(9/3)).

Then, two editorial comments about the following from 11.4.2:

> > Predicate checks are defined to be enabled or disabled for a given
> > subtype as follows:
> >   If a subtype is declared by a type_declaration or subtype_declaration
> >   that includes one or more predicate specifications, then
> >      - if the applicable assertion policy for at least one of the
> >        assertion expressions is Check, then predicate checks are
> >        enabled for the subtype;
> >      - otherwise, predicate checks are disabled for the subtype [redundant:
> >        , regardless of whether predicate checking is enabled for any
> >        other subtypes mentioned in the declaration].
> >
> >   Otherwise, if a subtype is defined by a derived type declaration
> >   then predicate checks are enabled for the subtype if and only if
> >   predicate checks are enabled for at least one of the parent subtype
> >   and the progenitor subtypes.
> >
> >   Otherwise, if a subtype is created by a subtype_indication then
> >   predicate checks are enabled for the subtype if and only if
> >   predicate checks are enabled for the subtype denoted by the
> >   subtype_mark.
> >
> >   Otherwise, predicates checks are disabled for the given subtype.
> >   [AARM: In this case, no predicate specifications can apply to the subtype
> >   and so it doesn't typically matter whether predicate checks are enabled.
> >   This rule does make a difference, however, when determining whether
> >   predicates checks are enabled for another type when this type is one of
> >   multiple progenitors. See the "at least one" wording above.]

(1) Why is this in 11.4.2? All of the other assertions have this defined in
    their own sections (6.1.1(19/3) and 7.3.2(9/3), modulo the previous
    discussion). Either all of these should be defined in 11.4.2, or this block
    should go in 3.2.4.

(2) Four paragraphs in a row, at two different levels, starting with
    "otherwise"! Please, let's not. :-) These are already indented, so they
    could be bullets, and then we would not need the middle two "otherwise,
    if"s.

(3) Humm. This last paragraph seems wrong. You could inherit a predicate from a
    subtype_mark (in a place where only a subtype_mark is allowed), and that
    would not be covered by the other bullets (in particular, that would not be
    a subtype_indication). Not certain that can happen (most things allow
    subtype_indications, but not everything). Are you absolutely certain that
    there is no possibility of inheriting a predicate here?? It seems risky to
    me. Maybe we need to say "subtype_indication or subtype_mark" earlier??

(4) This block is followed by the "Implementation Defined" AARM note for "other
    assertion policies". That either needs to be deleted or moved.

None of this is remotely show stopping; it's all about polishing the wording,
not changing the intent.
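As a concrete reading of the quoted rules (a sketch for illustration, not part
of the AI wording; the names are invented):

```ada
package Predicate_Demo is

   pragma Assertion_Policy (Dynamic_Predicate => Check);
   subtype Even is Integer
     with Dynamic_Predicate => Even mod 2 = 0;
   --  First rule: this declaration includes a predicate specification and the
   --  applicable policy is Check, so predicate checks are enabled for Even.

   pragma Assertion_Policy (Dynamic_Predicate => Ignore);
   subtype Small is Even range -100 .. 100;
   --  Subtype_indication rule: Small adds no predicate specification of its
   --  own, so checks are enabled for Small if and only if they are enabled
   --  for Even -- the Ignore policy in effect here does not disable them.

end Predicate_Demo;
```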

****************************************************************

From: Randy Brukardt
Sent: Friday, March 9, 2012  1:33 AM

Letter Ballot on AI05-0290-1, version 5. [This should have been "6", based on
the description in the next paragraph - Editor.]

This is the version of the AI as Steve posted it today (and possibly some
editorial improvements as suggested by me and Geert, no semantic changes).

___ Approve AI with editorial changes.

___ Reject AI with comments __________________________________.

___ Abstain.

Please vote early and often. Well, at least early. I need to tell Joyce when the
Standard will be ready tonight (that's Friday night).

****************************************************************

From: John Barnes
Sent: Friday, March 9, 2012  4:23 AM

Where is it?  Not in the database.

****************************************************************

From: Robert Dewar
Sent: Friday, March 9, 2012  4:30 AM

> Are we sure we want predicates-as-assertions? This is not something we
> can change later.

Well, we can make any changes we like later; for instance, add another aspect so
that Constraint_Error is raised, and add another checking policy to get
predicates-as-constraints in effect. These are minor issues anyway (applying
only in the case where a predicate fails).

****************************************************************

From: Robert Dewar
Sent: Friday, March 9, 2012  4:32 AM

> I think an implementor could certainly add a "Suppress(Predicate_Check)"

or the ARG can add this to the language later :-)

****************************************************************

From: Robert Dewar
Sent: Friday, March 9, 2012  4:41 AM

> I actually don't see any model of Assertion_Policy that could work for
> both shared generics and duplicated generics, short of making it
> implementation-defined, which is unpleasant at best

I think it's fine to make it impl-defined, the impact on user code is negligible
even theoretically, and in practice it won't make any difference at all.

****************************************************************

From: Robert Dewar
Sent: Friday, March 9, 2012  4:41 AM

> Letter Ballot on AI05-0290-1, version 5.
...
> _X__ Approve AI with editorial changes.

****************************************************************

From: Tucker Taft
Sent: Friday, March 9, 2012  8:01 AM

> Letter Ballot on AI05-0290-1, version 5.

Was there supposed to be an attachment?

I am voting "yes" presuming you made the edits you described.

****************************************************************

From: John Barnes
Sent: Friday, March 9, 2012  9:22 AM

I understand now that the attachment called v12 is actually v5. Why does 12
equal 5?

****************************************************************

From: Brad Moore
Sent: Friday, March 9, 2012  10:04 AM

Not sure, perhaps we have switched to using base 3 for AI versions? ;-)

****************************************************************

From: Tucker Taft
Sent: Friday, March 9, 2012  10:25 AM

This was due to the multiple versions Erhard and I bounced back and forth during
the ARG meeting.  Randy only counts revisions that actually make it into the
config-mgmt system, so with Erhard and my versioning, we are up to version 12+,
but for Randy, only version 5.

****************************************************************

From: John Barnes
Sent: Friday, March 9, 2012  11:18 AM

Thanks Tuck for that explanation of why 12 = 5. Armed with that knowledge, I can
now vote.

****************************************************************

From: Robert Dewar
Sent: Friday, March 9, 2012  11:40 AM

What fuss about 12 and 5? Was that anything substantive?

****************************************************************

From: Randy Brukardt
Sent: Friday, March 9, 2012  12:51 AM

> Where is it?  Not in the database.

Sorry, should have been clearer. This is the version that Steve posted to the
ARG list yesterday 12:25 PM, along with the following editorial comments from
Geert and me.

****************************************************************

From: Randy Brukardt
Sent: Friday, March 9, 2012  1:17 PM

> >> I understand now that the attachment called v12 is actually v5. Why
> >> does
> >> 12 equal 5?
> >
> > Not sure, perhaps we have switched to using base 3 for AI versions?
> > ;-)
>
> This was due to the multiple versions Erhard and I bounced back and
> forth during the ARG meeting.  Randy only counts revisions that
> actually make it into the config-mgmt system, so with Erhard and my
> versioning, we are up to version 12+, but for Randy, only version 5.

Right. I'll file at least one version for each discussion period during the
meeting, so that will be at least two versions. There since have been two more
drafts. The actual version number won't be known until I actually do this; I
only know that it will be around 5 or so.

(Joyce needs the Standard ASAP, I doubt she cares about the state of the version
control system for AIs! So for now, I'm ignoring all of the intermediate
versions and focusing only on the final result; I'll have to fix that after I
get the Standard done but other tasks are unimportant at the moment.)

****************************************************************

From: Geert Bosch
Sent: Friday, March 9, 2012  10:05 AM

> Is this even possible with shared generics? I know they are now out of
> fashion, but this seems to require full duplication.

I think it would work just fine to add a few boolean flags as part of the
instance data and use these as first boolean test:
   with Pre => Some_Precondition (...)

would be expanded to

  with Pre => _ignore_pre or else Some_Precondition (...)

where _ignore_pre is the boolean added to the generic during expansion in the
compiler. A single perfectly predictable test is very cheap, and conforming in
both semantic effect and spirit.
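To make the idea concrete, Geert's scheme might be hand-expanded roughly as
follows (a hypothetical sketch only; the Ignore_Pre flag and its placement are
invented for illustration and do not reflect any particular compiler's
instance-data layout):

```ada
generic
   type Item is private;
package Stacks is
   function Is_Full return Boolean;

   --  As written by the user:
   --     procedure Push (X : Item) with Pre => not Is_Full;
   --  A shared-code implementation could behave as if it read:
   procedure Push (X : Item)
     with Pre => Ignore_Pre or else not Is_Full;

private
   --  Conceptually part of the per-instance data; set once at
   --  instantiation from the assertion policy applicable there.
   Ignore_Pre : Boolean := False;
end Stacks;
```

The short-circuit "or else" means a disabled precondition costs one predictable
boolean test per call.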

****************************************************************

From: John Barnes
Sent: Friday, March 9, 2012  11:22 AM

Here is my vote. After all that fuss about 12 and 5, I fear I am abstaining.

> Letter Ballot on AI05-0290-1, version 5.
>
> This is the version of the AI as Steve posted it today (and possibly
> some editorial improvements as suggested by me and Geert, no semantic changes).
>
> ___ Approve AI with editorial changes.
>
> ___ Reject AI with comments __________________________________.
>
> __X_ Abstain.

John (retd)  I'm not sure whether that should be retarded rather than retired.

****************************************************************

From: Tullio Vardanega
Sent: Friday, March 9, 2012  11:45 AM

> Letter Ballot on AI05-0290-1, version 5.
>
> This is the version of the AI as Steve posted it today (and possibly
> some editorial improvements as suggested by me and Geert, no semantic changes).
>
> X Approve AI with editorial changes.

****************************************************************

From: Geert Bosch
Sent: Friday, March 9, 2012  12:07 PM

> Letter Ballot on AI05-0290-1, version 5.
>
> This is the version of the AI as Steve posted it today (and possibly
> some editorial improvements as suggested by me and Geert, no semantic changes).
>
> _X_ Approve AI with editorial changes.

****************************************************************

From: Jeff Cousins
Sent: Friday, March 9, 2012  12:08 PM

Letter Ballot on AI05-0290-1, version 5.

This is the version of the AI as Steve posted it today (and possibly some
editorial improvements as suggested by me and Geert, no semantic changes).

_X_ Approve AI with editorial changes.

___ Reject AI with comments __________________________________.

___ Abstain.

...

I was originally intending to abstain, as I thought being sometimes like an
assertion and sometimes like a constraint were going to be irreconcilable.
Having read the AI more slowly over lunch, though, it seems to be a sensible
compromise and spells out what happens in the problem cases, although it's
still not nice that violating a subtype predicate sometimes causes an
Assertion_Error and sometimes a Constraint_Error.  The ideas about specifying
what exception is raised are interesting but too late for Ada 2012.  As Randy
says, presumably having seen the bigger picture after applying the proposed
changes, some polishing is still needed; so yes, assuming that is done.

****************************************************************

From: Steve Baird
Sent: Friday, March 9, 2012  12:35 PM

_X__ Approve AI with editorial changes.

___ Reject AI with comments __________________________________.

___ Abstain.


I vote to approve with changes.

----

Now about those editorial changes:

Please feel free to clean up "(see 11.4.2)" references that I added. The idea
(which I may have failed to implement correctly) was to add such a reference
whenever we refer to predicate checks being enabled or disabled in RM sections
preceding the point where this notion is defined.

> (1) Why is this in 11.4.2? All of the other assertions have this
> defined in their own sections (6.1.1(19/3) and 7.3.2(9/3), modulo the
> previous discussion). Either all of these should be defined in 11.4.2,
> or this block should go in 3.2.4.

I think you are right - moving this to 3.2.4 would probably make sense (and
would eliminate some of the forward references). Just my two cents worth; I'm
leaving this to the editor's discretion.

[That sounds so much better than "I'm dumping this one in Randy's lap"]

> Four paragraphs in a row, at two different levels, starting with
> "otherwise"! Please, let's not.

I'm leaving this to the editor's discretion.

> This block is followed by the "Implementation Defined" AARM note for
> "other assertion policies". That either needs to be deleted or moved.

I'm leaving this to the editor's discretion.

> Humm. This last paragraph seems wrong. You could inherit a predicate
> from a subtype_mark (in a place where only a subtype_mark is allowed),
> and that would not be covered by the other bullets (in particular,
> that would not be a subtype_indication). Not certain that can happen
> (most things allow subtype_indications, but not everything). Are you
> absolutely certain that there is no possibility of inheriting a predicate here??

The decomposition into subcases is intended to mirror the structure of
3.2.4(3/3-5/3). Does that answer your question?

****************************************************************

From: Steve Baird
Sent: Friday, March 9, 2012  12:45 PM

>> AARM note: This means that an Assertion_Policy pragma that occurs in
>> a scope enclosing the declaration of a generic unit but not also
>> enclosing the declaration of a given instance of that generic unit
>> will not apply to assertion expressions occuring within the given instance.
> Is this even possible with shared generics? I know they are now out of
> fashion, but this seems to require full duplication.
>

Randy Brukardt wrote:
> The answer is NO; there is no
> possible way to do any significant sharing with this model.

Geert Bosch wrote:
> I think it would work just fine to add a few boolean flags as part of
> the instance data and use these as first boolean test:

I agree with Geert. This rule does introduce some implementation complexity for
a shared code generics implementation but nothing insurmountable.

The main point here is that we want to treat assertion policy consistently with
suppression in this respect (although Randy makes a good point that
suppression-related issues can be dealt with by simply ignoring Suppress pragmas
and this is not true for Assertion_Policy pragmas).

****************************************************************

From: Randy Brukardt
Sent: Friday, March 9, 2012  1:13 PM

> > Is this even possible with shared generics? I know they are now out
> > of fashion, but this seems to require full duplication.
>
> I think it would work just fine to add a few boolean flags as part of
> the instance data and use these as first boolean test:
>    with Pre => Some_Precondition (...)
>
> would be expanded to
>
>   with Pre => _ignore_pre or else Some_Precondition (...)
>
> where _ignore_pre is the boolean added to the generic during expansion
> in the compiler. A single perfectly predictable test is very cheap,
> and conforming in both semantic effect and spirit.

I mentioned this possibility yesterday, but dismissed it because of distributed
overhead.

I wouldn't call it "very cheap", for a number of reasons:
(1) Accessing instance data is doubly indirect, so it's never "very cheap"
to access;
(2) There are something like 12 of these flags, all of which would have to be
set (at runtime) for every instance;
(3) Since we don't know the contents of the body, we'd have to set these flags
for every instance whether there are any assertions in the body or not.
(4) You'd definitely *not* want to do this for assertions evaluated outside of
the generic unit (which would be most of them), so the compiler complexity is
doubled.
(5) This would not get rid of testing paths, just make even more of them
inaccessible and thus untestable, so it wouldn't work for some users. (I suppose
it is possible that the entire notion of shared generics would not work for
those users, although I have no evidence either way on this.)

Something I didn't think of yesterday is that the flags could be bit-packed, so
setting them could be fairly cheap (probably just a single assignment per
instance), which would mitigate (2) and (3). But bit-packing would complicate
(1) further, making the check even further from "very cheap".

In any case, I'm not asking for a rule change here; there's obviously a path to
implementing this "by-the-book", although it is unlikely that I would do that in
the absence of some customer need. (In the typical use cases, how this is done
would never be visible, so I very much doubt that anyone would care about
this.)

****************************************************************

From: Brad Moore
Sent: Friday, March 9, 2012  1:01 PM

> Letter Ballot on AI05-0290-1, version 5.
>
> This is the version of the AI as Steve posted it today (and possibly
> some editorial improvements as suggested by me and Geert, no semantic changes).
>
> _X__ Approve AI with editorial changes.
>
> ___ Reject AI with comments __________________________________.
>
> ___ Abstain.

****************************************************************

From: Randy Brukardt
Sent: Friday, March 9, 2012  1:19 PM

I seem to have forgotten to vote on the ballot I initiated.

> Letter Ballot on AI05-0290-1, version 5.
>
> This is the version of the AI as Steve posted it today (and possibly
> some editorial improvements as suggested by me and Geert, no semantic
> changes).
>
> __Yea hey!!__ Approve AI with editorial changes.
>
> ___ Reject AI with comments __________________________________.
>
> ___ Abstain.

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, March 9, 2012  3:38 PM

> ___ Approve AI with editorial changes.
>
> ___ Reject AI with comments __________________________________.
>
> _X__ Abstain.
>
Not yet fully convinced about predicates...

****************************************************************

From: Gary Dismukes
Sent: Friday, March 9, 2012  4:55 PM

> Letter Ballot on AI05-0290-1, version 5.
>
> This is the version of the AI as Steve posted it today (and possibly
> some editorial improvements as suggested by me and Geert, no semantic changes).
>
> _X_ Approve AI with editorial changes.
>
> ___ Reject AI with comments __________________________________.
>
> ___ Abstain.
>

I have a few additional (minor) editorial suggestions that I'll mention here
(but of course my vote is not contingent on these):

> append after 3.8.1(21):
>
> If the value of the discriminant governing the variant is not covered
> by the discrete_choice_list of the variant then Constraint_Error is
> raised. This rule applies in turn to any further variant that is,
> itself, included in the component_list of the given variant. This
> check is performed when an object of a discriminated type is initialized by default.

When describing checks, I think that the RM usually follows a structure like
"... a check is made that ...." followed by "If this check fails, then...". I
suggest that the above be rephrased along these lines:

  When an object of a discriminated type is initialized by default, a check is
  made that the discrete_choice_list of a variant governed by a discriminant
  covers the discriminant value. If this check fails, then Constraint_Error
  is raised. This rule applies in turn to...

> AARM note:
>   Like the checks associated with a per-object constraint, this check
>   is not performed during the elaboration of a subtype indication.
>   This check can fail if the discriminant subtype
>   has a Static_Predicate specified, it also has predicate checking
>   disabled, and the discriminant governs a variant part which
>   lacks a "when others" choice.

In the AARM note following that check, I suggest rephrasing the last sentence to
emphasize that this is a case where the check can fail even though predicate
checking is disabled:

  This check can fail even when the discriminant subtype has predicate checking
  disabled, if the discriminant subtype has a Static_Predicate specified and
  the discriminant governs a variant part which lacks a "when others" choice.

> ------
> Replace 11.4.2, 10/2 by:
>
> ...
>
> AARM note: This means that an Assertion_Policy pragma that occurs in a
> scope enclosing the declaration of a generic unit but not also
> enclosing the declaration of a given instance of that generic unit
> will not apply to assertion expressions occuring within the given instance.

occuring => occurring


> ------
> Replace 11.4.2, 10/2 by:
>
> ...
>
> Predicate checks are defined to be enabled or disabled for a given
> subtype as follows:
>    If a subtype is declared by a type_declaration or subtype_declaration
>    that includes one or more predicate specifications, then

It was a bit of a surprise to me that a subtype_declaration can have more than
one predicate specification.  Based on talking to Steve, this can only occur if
there are both a Static_Predicate and a Dynamic_Predicate (which still seems a
little weird to me).  I wonder if this wording could be revised to reflect that
possibility more clearly. The current wording makes it sound like there could be
even more than two predicate specifications present on a declaration (which is
not the case).  At least an AARM paragraph might be in order.
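For the record, the only way to get two predicate specifications on one
declaration is to give both aspects, as in this hypothetical example:

```ada
--  One Static_Predicate plus one Dynamic_Predicate is the maximum;
--  neither aspect can be specified twice on the same declaration.
subtype Even_Percent is Integer
  with Static_Predicate  => Even_Percent in 0 .. 100,
       Dynamic_Predicate => Even_Percent mod 2 = 0;
```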

>       - if the applicable assertion policy for at least one of the assertion
>         expressions is Check, then predicate checks are enabled for the
>         subtype;
>       - otherwise, predicate checks are disabled for the subtype [redundant:
>         , regardless of whether predicate checking is enabled for any
>         other subtypes mentioned in the declaration].
>
>    Otherwise, if a subtype is defined by a derived type declaration
>    then predicate checks are enabled for the subtype if and only if
>    predicate checks are enabled for at least one of the parent subtype
>    and the progenitor subtypes.
>
>    Otherwise, if a subtype is created by a subtype_indication then
>    predicate checks are enabled for the subtype if and only if
>    predicate checks are enabled for the subtype denoted by the
>    subtype_mark.
>
>    Otherwise, predicates checks are disabled for the given subtype.

I tend to agree with Randy that the string of "otherwise" paragraphs should be
put in a different form (such as bullets).

>    [AARM: In this case, no predicate specifications can apply to the subtype
>    and so it doesn't typically matter whether predicate checks are enabled.
>    This rule does make a difference, however, when determining whether
>    predicates checks are enabled for another type when this type is one of
>    multiple progenitors. See the "at least one" wording above.]

When I first read 'See the "at least one" wording above.' I initially thought it
might be referring to the first of the two bulleted paragraphs above ("at least
one of the assertion expressions"), then spotted the "at least one" in the other
paragraph.  Perhaps revise the AARM sentence to clarify this (such as 'See the
"at least one of the parent subtype" wording above.').

****************************************************************

From: Steve Baird
Sent: Friday, March 9, 2012  5:30 PM

Gary - Thanks for the careful reading - you caught some good ones.

I agree with all of Gary's comments and suggested editorial improvements except
for one:

>> AARM note:
>>   Like the checks associated with a per-object constraint, this check
>>   is not performed during the elaboration of a subtype indication.
>>   This check can fail if the discriminant subtype
>>   has a Static_Predicate specified, it also has predicate checking
>>   disabled, and the discriminant governs a variant part which
>>   lacks a "when others" choice.
>
> In the AARM note following that check, I suggest rephrasing the last
> sentence to emphasize that this is a case where the check can fail
> even though predicate checking is disabled:
>
>   This check can fail even when the discriminant subtype has predicate checking
>   disabled, if the discriminant subtype has a Static_Predicate specified and
>   the discriminant governs a variant part which lacks a "when others" choice.

I think the point of this AARM note is to demonstrate the motivation for the
check by describing a scenario in which the check can fail.

Gary's point is, I think, that users might be surprised to discover that a
static_predicate specification can have an effect on a program even if predicate
checking is disabled for the subtype in question.

I agree on both counts (a disabled predicate can have an effect and it is
somewhat surprising), but that's not the point of this note.

I think the original wording seems more like an explanation of why this check is
needed.

Gary - please correct me if I misinterpreted your comments.

****************************************************************

From: Gary Dismukes
Sent: Friday, March 9, 2012  6:27 PM

Steve Baird wrote, in response to one of my comments:

...
> I think the original wording seems more like an explanation of why
> this check is needed.
>
> Gary - please correct me if I misinterpreted your comments.

You're right, I got a little confused on that one.  I think that part of what
threw me off is that the wording makes it sound like this check would only fail
if predicate checking is disabled.  Strictly speaking that's true, because if
the predicate has checking enabled, then it would fail another check first (on
the conversion of the value to the discriminant's subtype) and never get to this
check.  But if the check *were* to be performed (;-), then it would fail
independently of predicate checking being enabled or disabled.  Maybe it would
help to say "This check is needed (and will fail) in the case where ...", but I
suppose it's OK as is.

****************************************************************

From: Gary Dismukes
Sent: Friday, March 9, 2012  6:48 PM

> predicate checking being enabled or disabled.  Maybe it would help to
> say "This check is needed (and will fail) in the case where ...", but
> I suppose it's OK as is.

Correction: that should be "and can fail", not "and will fail"!

****************************************************************

From: Bob Duff
Sent: Saturday, March 10, 2012  10:16 AM

> Letter Ballot on AI05-0290-1, version 5.
>
> This is the version of the AI as Steve posted it today (and possibly
> some editorial improvements as suggested by me and Geert, no semantic changes).
>
> ___ Approve AI with editorial changes.

_X_ Approve AI with editorial changes.

> ___ Reject AI with comments __________________________________.
>
> ___ Abstain.

****************************************************************

From: Erhard Ploedereder
Sent: Monday, March 12, 2012  8:50 AM

Letter Ballot on AI05-0290-1, version 5.

This is the version of the AI as Steve posted it today (and possibly some
editorial improvements as suggested by me and Geert, no semantic changes).

 __X_ Approve AI with editorial changes.

 ___ Reject AI with comments __________________________________.

 ___ Abstain.

A suggested editorial on the rewrite of 4.9.1(10/3):
(twice) make "checking is enabled" into "checks are enabled" to match the
diction of the remainder of the RM.

A "must" editorial on the same para: the reference is 11.4.2, not 11.2.

****************************************************************

From: Tucker Taft
Sent: Tuesday, March 13, 2012  8:22 PM

> [untracked - way off-topic for this TN]

I took the liberty of CC'ing the ARG on this one. [The original message was
a private one from Steve Baird - Editor.]

>> What about a situation where there is a Pre on a declaration, and
>> then the completion is a renaming as body of a subprogram that *also*
>> has a Pre on it?  Is this like inheritance, or is this illegal, or is
>> the semantics like a wrapper?  I guess in general renaming-as-body is
>> like a wrapper, so probably can leave it at that for the purposes of
>> Pre/Post as well.
>>
>
>
> I think you are right, but it made me wonder about a couple of
> spectacularly unimportant corner cases.

I think your brain consists only of corners... ;-)

> declare
>    package Pkg0 is
>       type T0 is tagged null record;
>       function Is_Dandy (X0 : T0'Class) return Boolean;
>    end Pkg0;
>
>    package Pkg1 is
>       type T1 is new Pkg0.T0 with null record;
>       procedure Op1 (X1 : T1)
>         with Pre'Class => Pkg0.Is_Dandy (X1);
>       procedure Op1_Ren (X1 : T1) renames Op1;
>
>       procedure Op2 (X1 : T1);
>    end Pkg1;
>
>    package Pkg2 is
>       type T2 is new Pkg1.T1 with null record;
>
>       overriding procedure Op1_Ren (X2 : T2);
>       -- inherits Pre'Class?

Yes, since a dispatching call through the Op1_Ren of T1 checks Is_Dandy.

>
>       overriding procedure Op2 (X2 : T2) renames Op1;
>       -- inherits Pre'Class ?

This is a new dispatching operation, so this question is harder to answer, but I
think it has the same properties as the implicitly declared Op1, and we know
that one is expecting that Is_Dandy has been checked before arriving at the
inherited body.

>    end Pkg2;
>
> Which of the ops in Pkg2 have Is_Dandy Pre'Class preconditions? What
> about Pkg2.Op1_Ren?

The implicit Op1, as well as the two explicit ops, I would say.

> ====
>
> package Outer is
>    type T1 is tagged null record;
>    package Inner is
>       type T2 is tagged null record;
>       procedure Mixed (X1 : T1; X2 : T2)
>         with Pre'Class => ...;
>    end Inner;
>    procedure Mixed_Ren (...) renames Inner.Mixed;
> end Outer;
>
> Does Outer.Mixed_Ren have a Pre'Class which is inherited by
> descendants of Outer.T1?

Yes, I would say so.  Again, the body of Inner.Mixed is expecting the given
precondition to have been checked.

****************************************************************

From: Randy Brukardt
Sent: Saturday, March 17, 2012  5:00 PM

...
> > append after 3.8.1(21):
> >
> > If the value of the discriminant governing the variant is not
> > covered by the discrete_choice_list of the variant then
> > Constraint_Error is raised. This rule applies in turn to any further
> > variant that is, itself, included in the component_list of the given
> > variant. This check is performed when an object of a discriminated
> > type is initialized by default.
>
> When describing checks, I think that the RM usually follows a
> structure like "... a check is made that ...." followed by "If this
> check fails, then...".
> I suggest that the above be rephrased along these lines:
>
>   When an object of a discriminated type is initialized by default, a check is
>   made that the discrete_choice_list of a variant governed by a discriminant
>   covers the discriminant value. If this check fails, then Constraint_Error
>   is raised. This rule applies in turn to...

That's quite correct, but it is *very* important that this not be described as a
"check". This is *not* suppressible/ignorable; it's always made (logically).
Note that the similar case statement rule is simply described as part of the
Dynamic Semantics of case statements; we need to do the same here. (I also
rearranged the words a bit to clarify the meaning.)

    When an object of a discriminated type is initialized by default,
    Constraint_Error is raised if no discrete_choice_list of a variant that is
    governed by a discriminant covers the value of the discriminant. This rule
    applies in turn to any further variant that is, itself, included in the
    component_list of the given variant.

I still find the variants and discriminants here to be confusing. And there is
no description of what variant we are talking about. So introduce some letters:

    When an object of a discriminated type T is initialized by default,
    Constraint_Error is raised if no discrete_choice_list of a variant V of T
    that is governed by a discriminant D covers the value of D. This rule
    applies in turn to any further variant that is, itself, included in the
    component_list of V.

Humm. The "covers" part hangs oddly; move it (and that allows us to get rid
of D):

    When an object of a discriminated type T is initialized by default,
    Constraint_Error is raised if no discrete_choice_list of a variant V of T
    covers the value of the discriminant that governs V. This rule applies in
    turn to any further variant that is, itself, included in the component_list
    of V.

> > AARM note:
> >   Like the checks associated with a per-object constraint, this check
> >   is not performed during the elaboration of a subtype indication.
> >   This check is needed (and can fail) in the case where the discriminant
> >   subtype has a Static_Predicate specified, it also has predicate checking
> >   disabled, and the discriminant governs a variant part which
> >   lacks a "when others" choice.

Gotta eliminate "check" here, as well. Probably also note that this is not a
check, and say something about the range check case:

AARM implementation note:
   This is not a "check"; it cannot be suppressed. However, in most cases it is
   not necessary to generate any code to raise this exception. A test is needed
   (and can fail) in the case where the discriminant subtype has a
   Static_Predicate specified, it also has predicate checking disabled, and the
   discriminant governs a variant part which lacks a "when others" choice.

   The test also could fail for a static discriminant subtype with range
   checking suppressed and the discriminant governs a variant part which lacks a
   "when others" choice. But execution is erroneous if a range check that would
   have failed is suppressed (see 11.5), so an implementation does not have to
   generate code to check this case. (An unchecked failed predicate does not
   cause erroneous execution, so the test is required in that case.)

   Like the checks associated with a per-object constraint, this test is not
   made during the elaboration of a subtype indication.
End AARM Implementation Note.

The reason for adding the second paragraph is that given the emphasis on this
not being a check, it's important to explain why this does not require any
change to an implementation for range checks. (It's also important to explain
that this test is not inconsistent (runtime incompatible), because the program
already was erroneous, so even if the implementation adds the test and raises
Constraint_Error, that was already allowed in the past.)

> > ------
> > Replace 11.4.2, 10/2 by:
> >
> > ...
> >
> > Predicate checks are defined to be enabled or disabled for a given
> > subtype as follows:
> >    If a subtype is declared by a type_declaration or subtype_declaration
> >    that includes one or more predicate specifications, then
>
> It was a bit of a surprise to me that a subtype_declaration can have
> more than one predicate specification.  Based on talking to Steve,
> this can only occur if there are both a Static_Predicate and a
> Dynamic_Predicate (which still seems a little weird to me).  I wonder
> if this wording could be revised to reflect that possibility more clearly.
> The current wording makes it sound like there could be even more than
> two predicate specifications present on a declaration (which is not
> the case).  At least an AARM paragraph might be in order.

I'm not sure why we care. If it contains two predicate specifications, it surely
contains one, and that's all we care about here (presence or absence). I think
this is just a case of Steve being overly pedantic. So just say:

    If a subtype is declared by a type_declaration or subtype_declaration
    that includes a predicate specification, then

Then, the "at least one" wording in the next item has to go. But I think that's
OK; you can't put an Assertion_Policy pragma in the middle of an aspect
specification (no semicolons there), so whatever policy applies to the
entire (sub)type declaration is all that matters. (And there is no inheritance
of policies for subtypes, unlike for pre- and postconditions.) So this wording
can be a lot simpler:

   - if the applicable assertion policy for the declaration
     is Check, then predicate checks are enabled for the subtype;

(remembering that this immediately follows the previous "lead-in", so the
declaration being talked about is obvious).
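That is, a single policy governs all the predicate specifications of a
declaration; for example (hypothetical):

```ada
package P is
   pragma Assertion_Policy (Dynamic_Predicate => Ignore);

   subtype Small is Integer
     with Dynamic_Predicate => Small in 1 .. 10;

   --  Predicate checks are disabled for Small, so this
   --  initialization raises no Assertion_Error:
   X : Small := 0;
end P;
```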

...
> >    [AARM: In this case, no predicate specifications can apply to the subtype
> >    and so it doesn't typically matter whether predicate checks are enabled.
> >    This rule does make a difference, however, when determining whether
> >    predicates checks are enabled for another type when this type is one
> >    of multiple progenitors. See the "at least one" wording above.]
>
> When I first read 'See the "at least one" wording above.' I initially
> thought it might be referring to the first of the two bulleted
> paragraphs above ("at least one of the assertion expressions"), then
> spotted the "at least one" in the other paragraph.  Perhaps revise the
> AARM sentence to clarify this (such as 'See the "at least one of the
> parent subtype"
> wording above.').

This is no longer an issue because the wording that confused you is gone.
Best possible solution. :-)

****************************************************************

From: Randy Brukardt
Sent: Sunday, March 18, 2012  10:33 PM

Well, as expected, I saw some issues when adding this wording to the
Standard:
[(3) is particularly sticky. Please review...]

(1) The proposed grammar change in 2.8(3) is:

  pragma_argument_association ::=
     [pragma_argument_identifier =>] name
   | [pragma_argument_identifier =>] expression
   | [pragma_argument_aspect_mark =>] name
   | [pragma_argument_aspect_mark =>] expression

This is ambiguous in the case of an unprefixed name or expression. (I realize
this grammar is ambiguous in a bunch of other ways as well, but they're not so
obvious. Erhard in particular complained about this elsewhere; his argument
there certainly applies here as well.) Moreover, we never actually use the
possibility of an optional aspect_mark anywhere, so this ambiguity isn't even
necessary.

So I replaced this grammar by:

  pragma_argument_association ::=
     [pragma_argument_identifier =>] name
   | [pragma_argument_identifier =>] expression
   | pragma_argument_aspect_mark =>  name
   | pragma_argument_aspect_mark =>  expression

Which doesn't have the obvious ambiguity. (I realize that we never use this
pragma grammar anywhere, as each pragma has its own grammar. But that's no
reason for this to be wrong.)

---

(2) The first bullet of the 3.2.4(6/3) change should end with a colon; the
penultimate bullet should end with a semicolon rather than a period.

---

(3) My rewording of 3.8.1(21.1/3) doesn't work because I confused "variant" and
"variant_part". (I think Steve did, too, because the wording he had made no
sense either; we're not interested in variants in this wording.) However, we
need to exclude this test on variant_parts that appear in variants that aren't
selected, and that is very hard to word. (Steve copied the old wording for when
components are included; besides being backwards, this wording isn't clear at
all and thus is a lousy example to copy.) (It would be nice if we had a term
like "selected" to describe a variant whose components are included in the
record value.)

This should read something like:

When an object of a discriminated type T is initialized by default,
Constraint_Error is raised if no discrete_choice_list of any variant of a
variant_part of T covers the value of the discriminant that governs the
variant_part. When a variant_part appears in the component_list of another
variant V, this test is only applied if the value of the discriminant governing
V is covered by the discrete_choice_list of V.

[If reviewing this, make sure you put it into the context of 3.8.1(21); that's
the mistake I made last night. - RLB]
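For concreteness, the situation this rule addresses can be sketched as follows
(hypothetical declarations, not wording for the Standard; the
predicate-constrained discriminant subtype is what makes a not-covered value
possible once predicate checks are disabled):

```ada
subtype Score is Integer
  with Static_Predicate => Score in 1 .. 10 | 90 .. 100;

Default : Integer := 50;  --  violates the predicate, but goes undetected
                          --  if predicate checks are disabled

type Rec (D : Score := Default) is record
   case D is
      when 1 .. 10   => Low  : Boolean;  --  choices need only cover values
      when 90 .. 100 => High : Boolean;  --  satisfying the predicate
   end case;
end record;

X : Rec;  --  default initialization: D = 50 is covered by no
          --  discrete_choice_list, so Constraint_Error is raised
          --  per the rule proposed above
```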

----

(4) 4.9.1(10/3) uses the wording "obeys the predicate", which is not defined (or
even obvious). (This is old wording, but garbage is still garbage with age. :-)
I replaced "obeys" with "satisfies" (the defined term).

[Aside: I don't quite see why the checking state ought to matter for "static
compatibility", as this affects Legality Rules and that seems wrong on the face
of it. Moreover, the semantics of the (static) predicate doesn't change whether
or not checking is enabled. But I presume that Steve has some reason that he
failed to explain, so I've made this change without knowing why.]

----

(5) Oh, crap, the wording for 6.1.1 doesn't come close to the model we intended.
I have no idea how to fix, but that will have to be done before the Standard
goes out. See the separate message.

****************************************************************

From: Erhard Ploedereder
Sent: Monday, March 19, 2012  6:38 AM

Good idea!

> (1) The proposed grammar change in 2.8(3) is:
>
>   pragma_argument_association ::=
>      [pragma_argument_identifier =>] name
>    | [pragma_argument_identifier =>] expression
>    | [pragma_argument_aspect_mark =>] name
>    | [pragma_argument_aspect_mark =>] expression
>
> This is ambiguous in the case of an unprefixed name or expression. (I
> realize this grammar is ambiguous in a bunch of other ways as well,
> but they're not so obvious. Erhard in particular complained about this
> elsewhere; his argument there certainly applies here as well.)
> Moreover, we never actually use the possibility of an optional
> aspect_mark anywhere, so this ambiguity isn't even necessary.
>
> So I replaced this grammar by:
>
>   pragma_argument_association ::=
>      [pragma_argument_identifier =>] name
>    | [pragma_argument_identifier =>] expression
>    | pragma_argument_aspect_mark =>  name
>    | pragma_argument_aspect_mark =>  expression

****************************************************************

From: Randy Brukardt
Sent: Sunday, March 18, 2012  11:06 PM

The last draft (and I think several others) have the following as the wording
changes for 6.1.1:

> Replace 6.1.1(19/3) with:
>
> If performing checks is required by the Pre, Pre'Class, Post, or
> Post'Class assertion policies (see 11.4.2) in effect at the point of a
> corresponding aspect specification applicable to a given subprogram or
> entry, then the checks for the respective preconditions or
> postconditions are @i(enabled) for that subprogram or entry.
>
> Replace 6.1.1(31/3) with:
>
> Upon a call of the subprogram or entry, after evaluating any actual
> parameters, enabled checks for preconditions are performed as follows:
>
> Replace the first sentence of 6.1.1(35/3) with:
>
> Upon successful return from a call of the subprogram or entry, prior
> to copying back any by-copy in out or out parameters, any enabled
> checks for postconditions are performed.
>
> Delete 6.1.1(40/3).

Unfortunately, this wording doesn't seem to come close to the model that we
discussed previously. I don't know if we really intended to abandon all of the
ideas of inheritance that we discussed, but I don't think so.

Let me enumerate the problems with this wording:

(1) 6.1.1(35/3) says "any enabled checks". But there is only *one* postcondition
    check per subprogram -- it combines all of class-wide expressions and the
    (single) specific expression in a *single* check. If we're talking about
    checks that are enabled, we can only turn that one check on or off; we can't
    change the parts of the check. (That's similar to the model for predicates,
    BTW.) But we have separate policies for Post and Post'Class; moreover we
    intended the policy of the *original* specification to apply forever. None
    of that is reflected in the wording or the model.

(2) 6.1.1(31/3) also says "enabled checks". Here, specific and class-wide
    preconditions are separate checks, so there is no issue with Pre. But the
    class-wide check also is a *single* check including all of the inherited
    routines; it evaluates all of the expressions that "apply". Again, the model
    that the policy of the *original* specification always is used does not
    appear anywhere.

(3) The original use of "enabled" in 6.1.1(26/3) talks about postcondition
    *expressions* that are enabled, but that was never clear in the wording of
    6.1.1(19/3), and is totally lost now. (The new 19/3 wording talks
    specifically about checks.)

-----------------

There seem to be two ways to fix this.

The first way is to keep to the current model of enabling "checks". That would
require splitting all of the checks into separate checks for each expression,
which then could be enabled or disabled separately. There are two problems with
this strategy:
A: there is no fix at all for (3), which is only interested in expressions;
B: pretty much all of the Dynamic Semantics section of 6.1.1 would
have to be redone. In particular, I recall Erhard and I having a lengthy
discussion about ordering of postcondition checks, which is not a problem since
there is only one (and the order of 1 thing isn't very interesting!). Changing
that would require changing quite a bit of wording.

The second way is to change 6.1.1(19/3) to talk about enabling or disabling
*expressions*, not *checks*. This brings up the problem of how to talk about
expressions that aren't enabled. One possibility is to say that for an
expression that is not "enabled", when it is evaluated, the result is True and
nothing else is evaluated. Then all of the checking wording can be left alone.
Alternatively, we could insert "enabled" strategically into the checking
wording. That's probably a better plan.

----

Attempting to follow this plan would require the following changes to Draft 15:

Replace 6.1.1(19/3) with:

If performing checks is required by the Pre, Pre'Class, Post, or Post'Class
assertion policies (see 11.4.2) in effect at the point of a corresponding aspect
specification applicable to a given subprogram or entry, then the respective
precondition or postcondition expressions are considered @i(enabled).

AARM Note: If a class-wide precondition or postcondition expression is enabled,
it remains enabled when inherited, even if the policy in effect is Ignore for
the inheriting callable entity.

[This is the model we wanted for these, I believe (see the minutes).]
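A hypothetical sketch of that model (illustration only, not proposed wording):
the checking state is fixed at the point of the original aspect specification
and survives inheritance.

```ada
pragma Assertion_Policy (Pre'Class => Check);
package P is
   type T is tagged null record;
   function Valid (X : T) return Boolean;
   procedure Op (X : T)
     with Pre'Class => Valid (X);  --  enabled: the policy here is Check
end P;

pragma Assertion_Policy (Pre'Class => Ignore);
with P;
package Q is
   type NT is new P.T with null record;
   --  The inherited Op's class-wide precondition expression remains
   --  enabled, even though the policy in effect here is Ignore.
end Q;
```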

Replace 6.1.1(31/3) with:

Upon a call of the subprogram or entry, after evaluating any actual parameters,
precondition checks are performed as follows:

[The model is that checks are always performed, but only enabled expressions are
evaluated, so the check might be trivial.]

Modify 6.1.1(32/3):

The specific precondition check begins with the evaluation of the specific
precondition expression that applies to the subprogram or entry{, if it is
enabled}; if the expression evaluates to False, Assertions.Assertion_Error is
raised{; if the expression is not enabled, the check succeeds}.

Modify 6.1.1(33/3):

The class-wide precondition check begins with the evaluation of any {enabled}
class-wide precondition expressions that apply to the subprogram or entry. If
and only if all the class-wide precondition expressions evaluate to False,
Assertions.Assertion_Error is raised.

Modify 6.1.1(35/3):

[If the assertion policy in effect at the point of a subprogram or entry
declaration is Check, then upon]{Upon} successful return from a call of the
subprogram or entry, prior to copying back any by-copy in out or out parameters,
the postcondition check is performed. This consists of the evaluation of
[the]{any enabled} specific and class-wide postcondition expressions that apply
to the subprogram or entry. If any of the postcondition expressions evaluate to
False, then Assertions.Assertion_Error is raised. The postcondition expressions
are evaluated in an arbitrary order, and if any postcondition expression
evaluates to False, it is not specified whether any other postcondition
expressions are evaluated. The postcondition check{,}[ and] constraint checks{,
and predicate checks} associated with copying back in out or out parameters are
performed in an arbitrary order.

[Note: The precondition checks are missing from the last sentence; I fixed that
in Invariants but not here.]

Delete 6.1.1(40/3).


Similar changes would need to be done for Invariants.

Is this OK? Is there a better idea out there?


****************************************************************

From: Erhard Ploedereder
Sent: Monday, March 19, 2012  6:33 AM

A fix that goes back to the diction of the reviewed RM would be:

"then the checks for the respective preconditions or postconditions are
@i(enabled)" -> "then the respective preconditions or postconditions are
@i(enabled)"

"enabled checks for preconditions" -> "checks for enabled preconditions"

"enabled checks for postconditions" -> "checks for enabled postconditions"

(The plural for checks in 31/3 and 40/3 was also in the reviewed RM.)

I would actually prefer the change that stopped calling the evaluation of
multiple enabled preconditions a single check and then stay with the "enabled
checks" diction, but the above is the minimal change to fix the inconsistency.

****************************************************************

From: Tucker Taft
Sent: Monday, March 19, 2012  9:06 AM

> Unfortunately, this wording doesn't seem to come close to the model
> that we discussed previously. I don't know if we really intended to
> abandon all of the ideas of inheritance that we discussed, but I don't think so.

I'm not sure I agree with you.  For the subtype predicate, it seemed important
for it to be all on or all off, because we certainly want a single complete
predicate used for things like membership, and it was too confusing in my view
to have two different definitions of the overall "predicate."

For pre/postconditions, I don't see the justification for that.  Each
pre/postcondition can be independently turned on or off without any real
definitional problem.  So long as the compiler knows which ones are on and which
ones are off, there isn't any danger of erroneousness.  And I would say it is
determined where the explicit aspect clause appears, not where it is inherited.
What really matters is that when you get to the subprogram body, you know which
ones have been enforced, and that shouldn't depend on whether you were called
through the original subprogram or the implicitly-declared inherited one.

I may be missing part of your point, but it seems that we can use "enabled" for
particular preconditions/postconditions, and then assemble the overall "check"
from the enabled ones.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 19, 2012  11:33 AM

You obviously missed my point, because you agreed completely with me (and the
last part suggests using wording like that I proposed at the end of the
message).

My point is simply that the definition of pre- and postcondition checks is that
there is only *one* check which is the combination of all of the pre- and
postcondition expressions. The wording above seems to imply that there are
multiple checks that can be controlled individually; moreover, the wording of
19/3 suggests that the checks are on or off based on the last declaration (or
something like that, I can't really tell).

We obviously never considered this wording in context (I know I never did), and
it simply doesn't make any sense in that context.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 19, 2012  11:39 AM

> A fix that goes back to the diction of the reviewed RM would be:
>
> "then the checks for the respective preconditions or postconditions
> are @i(enabled)" -> "then the respective preconditions or
> postconditions are @i(enabled)"
>
> "enabled checks for preconditions" -> "checks for enabled
> preconditions"
>
> "enabled checks for postconditions" -> "checks for enabled
> postconditions"
>
> (The plural for checks in 31/3 and 40/3 was also in the reviewed RM.)

Sure, that's fine. But not for 35/3, as you have above. It simply makes no
sense.

And please keep in mind that "precondition" is ambiguous: it could mean
"precondition expression" or "precondition check" or something else altogether.
I'd rather be clear.

> I would actually prefer the change that stopped calling the evaluation
> of multiple enabled preconditions a single check and then stay with
> the "enabled checks" diction, but the above is the minimal change to
> fix the inconsistency.

I understand that you'd prefer multiple checks, and I might, too, if today
wasn't the deadline for producing the Standard. But rewriting the entire Dynamic
Semantics section on the last afternoon is a non-starter.

The bigger problem is that your "minimal change" doesn't fix anything. I cannot
get from your suggestion above to the notion that the checks are always
performed, but the expressions themselves may be on or off. And I can't quite
see how the "always on if originally on" model falls out from this wording,
either.

Anyway, we're out of time; there's no time for discussion. We're going to have
to make a hard choice right now (see next message).

****************************************************************

From: Randy Brukardt
Sent: Monday, March 19, 2012  12:07 PM

Sorry about the ultra-short time to vote on this, but I have to provide the
finished Standard today. So please vote on the following ballot ASAP:

____ Approve moving forward with the standard using the wording proposed below.

____ Delay the standard for an unknown period to come up with acceptable
     wording.


Vote by 3:00 PM CDT (-6 UTC).

Note that there are no other choices; if you don't vote for moving the standard
forward, then you are implicitly voting against doing so (because the time will
run out before there is a solution).

Please note that we have a hard deadline from ISO for submitting the Standard on
April 1; that requires WG 9 approval and they need at least a week to vote.
Thus the deadline for finishing the Standard's wording is today (I have at least
1/2 day of editorial checking to do after final wording approval).

If this vote fails, we'll have to agree on wording and take another vote (with a
longer deadline); in that case, the completion of the Standard will be delayed
at least 3-4 days and more likely over a week. I have no idea what will happen
if we miss the deadline (and I don't want to find out).

I wouldn't call this vote if I wasn't confident that the following wording
reflects our intent accurately, but it is significantly different than the old
wording (which did not, in my view, reflect the intent at all).

[Editorial comments are welcome, otherwise, this is an up or down vote.]

                        Randy Brukardt.

----------------

Following are the significant wording changes from the v7 posted Saturday
(consider this vote an approval for those changes as well):

Add after 3.8(21):

When an object of a discriminated type T is initialized by default,
Constraint_Error is raised if no discrete_choice_list of any variant of a
variant_part of T covers the value of the discriminant that governs the
variant_part. When a variant_part appears in the component_list of another
variant V, this test is only applied if the value of the discriminant governing
V is covered by the discrete_choice_list of V.

[Reword so this is not a "check"; reword so the test is on "variant_part"s and
not "variant"s.]

---

Replace 6.1.1(19/3) with:

If performing checks is required by the Pre, Pre'Class, Post, or Post'Class
assertion policies (see 11.4.2) in effect at the point of a corresponding aspect
specification applicable to a given subprogram or entry, then the respective
precondition or postcondition expressions are considered @i(enabled).

AARM Note: If a class-wide precondition or postcondition expression is enabled,
it remains enabled when inherited, even if the policy in effect is Ignore for
the inheriting callable entity.

Replace 6.1.1(31/3) with:

Upon a call of the subprogram or entry, after evaluating any actual parameters,
precondition checks are performed as follows:

[The model is that checks are always performed, but only enabled expressions are
evaluated, so the check might be trivial.]

Modify 6.1.1(32/3):

The specific precondition check begins with the evaluation of the specific
precondition expression that applies to the subprogram or entry{, if it is
enabled}; if the expression evaluates to False, Assertions.Assertion_Error is
raised{; if the expression is not enabled, the check succeeds}.

Modify 6.1.1(33/3):

The class-wide precondition check begins with the evaluation of any {enabled}
class-wide precondition expressions that apply to the subprogram or entry. If
and only if all the class-wide precondition expressions evaluate to False,
Assertions.Assertion_Error is raised.

Modify 6.1.1(35/3):

[If the assertion policy in effect at the point of a subprogram or entry
declaration is Check, then upon]{Upon} successful return from a call of the
subprogram or entry, prior to copying back any by-copy in out or out parameters,
the postcondition check is performed. This consists of the evaluation of
[the]{any enabled} specific and class-wide postcondition expressions that apply
to the subprogram or entry. If any of the postcondition expressions evaluate to
False, then Assertions.Assertion_Error is raised. The postcondition expressions
are evaluated in an arbitrary order, and if any postcondition expression
evaluates to False, it is not specified whether any other postcondition
expressions are evaluated. The postcondition check{,}[ and] constraint checks{,
and predicate checks} associated with copying back in out or out parameters are
performed in an arbitrary order.

[Note: The precondition checks are missing from the last sentence; I fixed that
in Invariants but not here.]

Delete 6.1.1(40/3).

[Reword to talk about enabling "expressions" not "checks", so the intended
piece-meal enabling works without massive rewording.]

---

Modify 7.3.2(9/3):

If one or more invariant expressions apply to a type T[, and the assertion
policy (see 11.4.2) at the point of the partial view declaration for T is
Check,] then an invariant check is performed at the following places, on the
specified object(s):

Add before 7.3.2(21/3):

If performing checks is required by the Invariant or Invariant'Class assertion
policies (see 11.4.2) in effect at the point of a corresponding aspect
specification applicable to a given type, then the respective invariant
expressions are considered @i(enabled).

AARM Note: If a class-wide invariant expression is enabled, it remains enabled
when inherited, even if the policy in effect is Ignore for the inheriting type.

[This is really only important for 'Class expressions; they need to be checked
based on their original checking state (that is, always on or always off) -
otherwise dispatching provides a hole in the checking. They "apply" to all
derived types, and are included in those type's checks, so we can't "enable"
checks without significant rewording.]

Modify 7.3.2(21/3):

The invariant check consists of the evaluation of each {enabled} invariant
expression that applies to T, on each of the objects specified above. If any of
these evaluate to False, Assertions.Assertion_Error is raised at the point of
the object initialization, conversion, or call. If a given call requires more
than one evaluation of an invariant expression, either for multiple objects of a
single type or for multiple types with invariants, the evaluations are performed
in an arbitrary order, and if one of them evaluates to False, it is not
specified whether the others are evaluated. Any invariant check is performed
prior to copying back any by-copy in out or out parameters. Invariant checks,
any postcondition check, and any constraint or predicate checks associated with
in out or out parameters are performed in an arbitrary order.

[Note that the addition of "predicate checks" here was done in a previous AI,
approved in Kemah.]

---

[I've looked over the rest of the wording and did not see any problems yet, but
that might change when I actually go to insert it. I hope not. - RLB]

****************************************************************

From: Jeff Cousins
Sent: Monday, March 19, 2012  12:16 PM

> Sorry about the ultra-short time to vote on this, but I have to provide the
> finished Standard today. So please vote on the following ballot ASAP:
>
>_X__ Approve moving forward with the standard using the wording proposed below.
>
>____ Delay the standard for an unknown period to come up with acceptable
>     wording.

****************************************************************

From: Robert Dewar
Sent: Monday, March 19, 2012  12:17 PM

> Sorry about the ultra-short time to vote on this, but I have to
> provide the finished Standard today. So please vote on the following ballot ASAP:
>
> __x__ Approve moving forward with the standard using the wording
> proposed below.
>
> ____ Delay the standard for an unknown period to come up with
> acceptable wording.

(unlike most of our letter ballots, this one seems like one of those surveys
that political parties word carefully to ensure the desired result :-))

****************************************************************

From: Tucker Taft
Sent: Monday, March 19, 2012  12:25 PM

> Sorry about the ultra-short time to vote on this, but I have to
> provide the finished Standard today. So please vote on the following ballot ASAP:
>
> __X__ Approve moving forward with the standard using the wording
> proposed below.
>
> ____ Delay the standard for an unknown period to come up with
> acceptable wording.

****************************************************************

From: Edmond Schonberg
Sent: Monday, March 19, 2012  12:27 PM

> Sorry about the ultra-short time to vote on this, but I have to
> provide the finished Standard today. So please vote on the following ballot ASAP:
>
> __X__ Approve moving forward with the standard using the wording
> proposed below.
>
> ____ Delay the standard for an unknown period to come up with
> acceptable wording.

****************************************************************

From: Tullio Vardanega
Sent: Monday, March 19, 2012  12:39 PM

> [...] please vote on the following ballot ASAP:
>
> X Approve moving forward with the standard using the wording proposed
> below.
>
> ____ Delay the standard for an unknown period to come up with
> acceptable wording.

****************************************************************

From: Geert Bosch
Sent: Monday, March 19, 2012  12:55 PM

> Sorry about the ultra-short time to vote on this, but I have to
> provide the finished Standard today. So please vote on the following ballot ASAP:
>
> _X__ Approve moving forward with the standard using the wording
> proposed below.
>
> ____ Delay the standard for an unknown period to come up with
> acceptable wording.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 19, 2012  1:12 PM

> Sorry about the ultra-short time to vote on this, but I have to
> provide the finished Standard today. So please vote on the following ballot ASAP:
>
> __X__ Approve moving forward with the standard using the wording
> proposed below.
>
> ____ Delay the standard for an unknown period to come up with
> acceptable wording.

****************************************************************

From: Bob Duff
Sent: Monday, March 19, 2012  1:27 PM

_X__ Approve moving forward with the standard using the wording proposed below.

Note:  I didn't read the proposed wording.
If you had instead said "Approve moving forward with the previous
believed-to-be-wrong wording", I'd have voted for that.

****************************************************************

From: Brad Moore
Sent: Monday, March 19, 2012  2:31 PM

>Sorry about the ultra-short time to vote on this, but I have to provide
>the finished Standard today. So please vote on the following ballot
>ASAP:
>
>___X_ Approve moving forward with the standard using the wording
>proposed below.
>
>____ Delay the standard for an unknown period to come up with
>acceptable wording.

****************************************************************

From: Gary Dismukes
Sent: Monday, March 19, 2012  2:38 PM

> Sorry about the ultra-short time to vote on this, but I have to
> provide the finished Standard today. So please vote on the following ballot ASAP:
>
> _X__ Approve moving forward with the standard using the wording
> proposed below.
>
> ____ Delay the standard for an unknown period to come up with
> acceptable wording.


The wording changes look reasonable to me.

Just noting a couple of very minor typos that don't even affect wording:

> ----------------
>
> Following are the significant wording changes from the v7 posted
> Saturday (consider this vote an approval for those changes as well):
>
> Add after 3.8(21):

That should be 3.8.1.

> Add before 7.3.2(21/3):
>
> If performing checks is required by the Invariant or Invariant'Class
> assertion policies (see 11.4.2) in effect at the point of
> corresponding aspect specification applicable to a given type, then
> the respective invariant expression are considered @i(enabled).
>
> AARM Note: If a class-wide invariant expression is enabled, it remains
> enabled when inherited, even if the policy in effect is Ignore for the
> inheriting type.
>
> [This is really only important for 'Class expressions; they need to be
> checked based on their original checking state (that is, always on or
> always off) - otherwise dispatching provides a hole in the checking. They
> "apply" to all derived types, and are included in those type's checks, so we
> can't

"those type's" => "those types'"

(But I guess that paragraph in "[]" isn't even AARM text(?), it's just a side
note from you, so this doesn't need fixing.:-)

> "enable" checks without significant rewording.]

****************************************************************

From: Erhard Ploedereder
Sent: Monday, March 19, 2012  7:04 PM

> Sorry about the ultra-short time to vote on this, but I have to
> provide the finished Standard today. So please vote on the following ballot ASAP:
>
> __X__ Approve moving forward with the standard using the wording
> proposed below.
>
> ____ Delay the standard for an unknown period to come up with
> acceptable wording.
>
>
> Vote by 3:00 PM CDT (-6 UTC).

Sorry for being late, I was not reading E-mail until just now.

****************************************************************

From: Randy Brukardt
Sent: Monday, March 19, 2012  8:01 PM

Well, a couple more issues (not very important):

(1) The proposed change in 11.5(25) is:

All_Checks
Represents the union of all checks; suppressing All_Checks suppresses all
checks. In addition, an implementation is allowed (but not required) to behave
as if a pragma Assertion_Policy(Ignore) applies to any region to which pragma
Suppress(All_Checks) applies.

The problem with this is that assertion checks are, well, checks. So the above
says that they are both suppressed and ignored, which is certainly not what we
want (besides making no sense). I added a few words to make the above exclusive:

All_Checks
Represents the union of all checks; suppressing All_Checks suppresses all checks
other than those associated with assertions. In addition, an implementation is
allowed (but not required) to behave as if a pragma Assertion_Policy(Ignore)
applies to any region to which pragma Suppress(All_Checks) applies.

and added an AARM note to explain (suppressed means erroneous if they fail,
which is not what we want for assertions).

[Reminder: "assertion checks" here means checks for all of the assertions,
including preconditions, predicates, etc.]
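A sketch of how the two pragmas interact under this revised wording
(hypothetical usage):

```ada
pragma Suppress (All_Checks);
--  Suppresses all language-defined checks *other than* those associated
--  with assertions; a failing suppressed check makes execution erroneous.
--  An implementation is additionally permitted (but not required) to
--  behave as if the following also applies to the same region, which
--  disables assertion checks without any risk of erroneousness:
pragma Assertion_Policy (Ignore);
```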

----

(2) An unrelated bug that I happened to notice:

11.4.2(16/2):

A compilation unit containing a pragma Assert has a semantic dependence on the
Assertions library unit.

I replaced this by:

A compilation unit containing a {check for an assertion (including a }pragma
Assert{)} has a semantic dependence on the Assertions library unit.

because it clearly needs to be true of any unit that can raise
Assertions.Assertion_Error -- that is, any assertion check.

----

(3) The last sentence of 11.4.2(10/3) says:

Note that for subtype predicate aspects (see 3.2.4), even when the applicable
Assertion_Policy is Ignore, the predicate will still be evaluated as part of
membership tests and Valid attribute_references, and if static, will still have
an effect on loop iteration over the subtype, and the selection of
case_alternatives and variant_alternatives.

The problem here is that there is no such thing as a "case_alternative" or a
"variant_alternative". These are called "case_statement_alternative" and
"variant", respectively, and I changed the wording accordingly.

Similarly, the first sentence talks about "argument_association"s, but the
proper term is "pragma_argument_association"s.

If someone thinks this is overly picky, note that exact matching is needed in
order for the automatic linking to the appropriate syntax production to happen
(and also for the by-hand lookup by readers of print versions).

----

And that's it; I finally reached the end of this monster.

****************************************************************

From: Tucker Taft
Sent: Monday, March 19, 2012  8:12 PM

> ...
> And that's it; I finally reached the end of this monster.

Hurray!  You are the man!

****************************************************************

From: Bob Duff
Sent: Monday, March 19, 2012  8:17 PM

> Well, a couple more issues (not very important):

Stop fixing bugs in the RM.  Now.  Ship it to ISO and see what happens.

By and by, we'll fix the bugs.

****************************************************************

From: Bob Duff
Sent: Monday, March 19, 2012  8:19 PM

> Hurray!  You are the man!

Yes!  You are THE man!

****************************************************************

From: Edmond Schonberg
Sent: Monday, March 19, 2012  9:36 PM

> And that's it; I finally reached the end of this monster.

And you should feel like St. George!  The Ada community owes you an enormous
debt for this work and for the heroic last weeks' push!

****************************************************************

From: Jeff Cousins
Sent: Tuesday, March 20, 2012  6:05 AM

Thanks Randy for all your good work on this,

****************************************************************

From: John Barnes
Sent: Tuesday, March 20, 2012  6:18 AM

> Sorry about the ultra-short time to vote on this, but I have to
> provide the finished Standard today. So please vote on the following
> ballot ASAP:
>
> __X__ Approve moving forward with the standard using the wording
> proposed below.
>
> ____ Delay the standard for an unknown period to come up with
> acceptable wording.

****************************************************************

