!standard 2.8(3)                                    12-03-19  AI05-0290-1/07
!standard 2.8(4)
!standard 3.2.4(0)
!standard 3.8.1(21)
!standard 4.6(51/2)
!standard 4.6(57)
!standard 4.9.1(4)
!standard 6.1.1(0)
!standard 7.3.2(0)
!standard 11.4.2(6/2)
!standard 11.4.2(7/2)
!standard 11.4.2(9/2)
!standard 11.4.2(10/2)
!standard 11.4.2(16/2)
!standard 11.4.2(17/2)
!standard 11.4.2(18/2)
!standard 11.5(7.2/2)
!standard 11.5(25)
!class Amendment 12-02-14
!status Amendment 2012 12-02-14
!status ARG Approved (Letter Ballot) 11-0-2  12-03-12
!status work item 12-02-14
!status received 11-11-21
!priority Medium
!difficulty Medium
!subject Improved control over assertions

!summary

The pragma Assertion_Policy is expanded to allow control of assertion expressions in a way that is very similar to pragma Suppress, but without the possible consequence of erroneous execution. A perceived problem with pragma Suppress for inlined calls is fixed.

Variant parts now handle selector values that are not covered by any choice in the same way as case statements and case expressions do.

The definition of "statically compatible" for subtypes is improved so that a subtype with a disabled predicate check is not statically compatible with a subtype with an enabled predicate check.

!problem

When assertions (particularly predicates and preconditions) are used in third-party libraries of packages (a category that also covers reusable code used within an organization, as well as implementation-defined packages), additional control over the assertions is needed.

In particular, assertions used as inbound correctness checks should (almost) never be turned off, as these are needed to prevent failures in the implementation of the library; clients should be discouraged from turning these checks off. However, assertions used to make internal correctness checks (such as postconditions and invariants) are less important - actions of the client should not be able to cause these to fail. Hence, fine-grained control over the checks for the various kinds of assertions is needed.

Control for any assertion kind other than the Assert pragma cannot sensibly be based on the policy in effect at the point of the check. For example, such a rule would cause invariant checks to be performed only occasionally. The proposal is generally that the policy in effect at the time of the aspect specification controls all of its checks.

A key distinction between suppressing a check and ignoring an assertion (by specifying a policy of "Ignore") is that in the latter case, the assertion expressions that are not evaluated because of the policy are *not* assumed to be true.

!proposal

(See wording.)

!wording

Modify 2.8(3-4) as follows:

  pragma_argument_association ::=
      [pragma_argument_identifier =>] name
    | [pragma_argument_identifier =>] expression
    {| pragma_argument_aspect_mark => name
    | pragma_argument_aspect_mark => expression}

In a pragma, any pragma_argument_associations without a pragma_argument_identifier {or pragma_argument_aspect_mark} shall precede any associations with a pragma_argument_identifier {or pragma_argument_aspect_mark}.
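For example, the extended grammar permits argument associations named by an aspect_mark such as Post'Class, which is not a simple identifier (the policy choices shown here are arbitrary):

   pragma Assertion_Policy (Check);          --  one policy for all assertion aspects
   pragma Assertion_Policy (Pre        => Check,
                            Post'Class => Ignore);
   --  Per-aspect control; Post'Class is a pragma_argument_aspect_mark,
   --  which is why the pragma_argument_association grammar is extended.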
------------- Add after 3.2.4(6/3): Predicate checks are defined to be *enabled* or *disabled* for a given subtype as follows: * If a subtype is declared by a type_declaration or subtype_declaration that includes a predicate specification, then: - if performing checks is required by the Static_Predicate assertion policy (see 11.4.2) and the declaration includes a Static_Predicate specification, then predicate checks are enabled for the subtype; - if performing checks is required by the Dynamic_Predicate assertion policy (see 11.4.2) and the declaration includes a Dynamic_Predicate specification, then predicate checks are enabled for the subtype; - otherwise, predicate checks are disabled for the subtype [redundant: , regardless of whether predicate checking is enabled for any other subtypes mentioned in the declaration]; * If a subtype is defined by a derived type declaration that does not include a predicate specification, then predicate checks are enabled for the subtype if and only if predicate checks are enabled for at least one of the parent subtype and the progenitor subtypes; * If a subtype is created by a subtype_indication other than in one of the previous cases, then predicate checks are enabled for the subtype if and only if predicate checks are enabled for the subtype denoted by the subtype_mark; * Otherwise, predicate checks are disabled for the given subtype. [AARM: In this case, no predicate specifications can apply to the subtype and so it doesn't typically matter whether predicate checks are enabled. This rule does make a difference, however, when determining whether predicate checks are enabled for another type when this type is one of multiple progenitors. See the "derived type declaration" wording above.] Replace 3.2.4(22/3) with: If predicate checks are enabled for a given subtype, then: In 3.2.4(23/3), replace "check is made" by "check is performed" (three times). ------------- Add after 3.8.1(21): When an object of a discriminated type T is initialized by default, Constraint_Error is raised if no discrete_choice_list of any variant of a variant_part of T covers the value of the discriminant that governs the variant_part. When a variant_part appears in the component_list of another variant V, this test is only applied if the value of the discriminant governing V is covered by the discrete_choice_list of V. AARM implementation note: This is not a "check"; it cannot be suppressed. However, in most cases it is not necessary to generate any code to raise this exception. A test is needed (and can fail) in the case where the discriminant subtype has a Static_Predicate specified, it also has predicate checking disabled, and the discriminant governs a variant_part which lacks a "when others" choice. The test also could fail for a static discriminant subtype with range checking suppressed and the discriminant governs a variant_part which lacks a "when others" choice. But execution is erroneous if a range check that would have failed is suppressed (see 11.5), so an implementation does not have to generate code to check this case. (An unchecked failed predicate does not cause erroneous execution, so the test is required in that case.) Like the checks associated with a per-object constraint, this test is not made during the elaboration of a subtype_indication. End AARM Implementation Note. ------------- Replace 4.6(51/3) with: After conversion of the value to the target type, if the target subtype is constrained, a check is performed that the value satisfies this constraint. 
If the target subtype excludes null, then a check is performed that the value is not null. If predicate checks are enabled for the target subtype (see 3.2.4), a check is performed that the predicate of the target subtype is satisfied for the value. ------------- Modify 4.6(57/3): If an Accessibility_Check fails, Program_Error is raised. {If a predicate check fails, Assertions.Assertion_Error is raised.} Any other check associated with a conversion raises Constraint_Error if it fails. ------------- Replace 4.9.1(10/3) with: both subtypes are static, every value that satisfies the predicate of S1 also satisfies the predicate of S2, and it is not the case that both types each have at least one applicable predicate specification, predicate checks are enabled (see 11.4.2) for S2, and predicate checks are not enabled for S1. ------------- Replace 6.1.1(19/3) with: If performing checks is required by the Pre, Pre'Class, Post, or Post'Class assertion policies (see 11.4.2) in effect at the point of a corresponding aspect specification applicable to a given subprogram or entry, then the respective precondition or postcondition expressions are considered @i(enabled). AARM Note: If a class-wide precondition or postcondition expression is enabled, it remains enabled when inherited by an overriding subprogram, even if the policy in effect is Ignore for the inheriting subprogram. Replace 6.1.1(31/3) with: Upon a call of the subprogram or entry, after evaluating any actual parameters, precondition checks are performed as follows: Modify 6.1.1(32/3): The specific precondition check begins with the evaluation of the specific precondition expression that applies to the subprogram or entry{, if it is enabled}; if the expression evaluates to False, Assertions.Assertion_Error is raised{; if the expression is not enabled, the check succeeds}. Modify 6.1.1(33/3): The class-wide precondition check begins with the evaluation of any {enabled} class-wide precondition expressions that apply to the subprogram or entry. If and only if all the class-wide precondition expressions evaluate to False, Assertions.Assertion_Error is raised. AARM Ramification: Class-wide precondition checks are performed for all appropriate calls, but only enabled precondition expressions are evaluated. Thus, the check would be trivial if no precondition expressions are enabled. Modify 6.1.1(35/3): [If the assertion policy in effect at the point of a subprogram or entry declaration is Check, then upon]{Upon} successful return from a call of the subprogram or entry, prior to copying back any by-copy in out or out parameters, the postcondition check is performed. This consists of the evaluation of [the]{any enabled} specific and class-wide postcondition expressions that apply to the subprogram or entry. If any of the postcondition expressions evaluate to False, then Assertions.Assertion_Error is raised. The postcondition expressions are evaluated in an arbitrary order, and if any postcondition expression evaluates to False, it is not specified whether any other postcondition expressions are evaluated. The postcondition check, and any constraint {or predicate} checks associated with [copying back] in out or out parameters are performed in an arbitrary order. Delete 6.1.1(40/3). 
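For example (a sketch with invented names and bodies omitted), a class-wide precondition that is enabled at the point of its aspect specification stays enabled for an overriding subprogram declared under an Ignore policy:

   pragma Assertion_Policy (Pre'Class => Check);  -- configuration pragma for this compilation
   package Shapes is
      type Shape is tagged null record;
      procedure Resize (S : in out Shape; Factor : Float)
        with Pre'Class => Factor > 0.0;           -- enabled: the policy here is Check
   end Shapes;

   pragma Assertion_Policy (Ignore);              -- configuration pragma for a later compilation
   with Shapes;
   package Small_Shapes is
      type Disc is new Shapes.Shape with null record;
      overriding procedure Resize (D : in out Disc; Factor : Float);
      -- Per the AARM note above, the inherited class-wide precondition
      -- remains enabled for calls to this overriding, even though the
      -- assertion policy in effect here is Ignore.
   end Small_Shapes;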
-------------

Modify 7.3.2(9/3):

If one or more invariant expressions apply to a type T[, and the assertion policy (see 11.4.2) at the point of the partial view declaration for T is Check,] then an invariant check is performed at the following places, on the specified object(s):

Add before 7.3.2(21/3):

If performing checks is required by the Type_Invariant or Type_Invariant'Class assertion policies (see 11.4.2) in effect at the point of the corresponding aspect specification applicable to a given type, then the respective invariant expression is considered @i(enabled).

AARM Note: If a class-wide invariant expression is enabled for a type, it remains enabled when inherited by descendants of that type, even if the policy in effect is Ignore for the inheriting type.

Modify 7.3.2(21/3):

The invariant check consists of the evaluation of each {enabled} invariant expression that applies to T, on each of the objects specified above. If any of these evaluate to False, Assertions.Assertion_Error is raised at the point of the object initialization, conversion, or call. If a given call requires more than one evaluation of an invariant expression, either for multiple objects of a single type or for multiple types with invariants, the evaluations are performed in an arbitrary order, and if one of them evaluates to False, it is not specified whether the others are evaluated. Any invariant check is performed prior to copying back any by-copy in out or out parameters. Invariant checks, any postcondition check, and any constraint or predicate checks associated with in out or out parameters are performed in an arbitrary order.

Delete 7.3.2(23/3).

-------------

Replace 11.4.2(5/2 - 7/2) with:

The form of a pragma Assertion_Policy is as follows:

  pragma Assertion_Policy(policy_identifier);

  pragma Assertion_Policy(@i(assertion)_aspect_mark => policy_identifier
                      {, @i(assertion)_aspect_mark => policy_identifier});

A pragma Assertion_Policy is allowed only immediately within a declarative_part, immediately within a package_specification, or as a configuration pragma.

-----

Replace 11.4.2(9/2) with:

The @i(assertion)_aspect_mark of a pragma Assertion_Policy shall be one of Assert, Static_Predicate, Dynamic_Predicate, Pre, Pre'Class, Post, Post'Class, Type_Invariant, Type_Invariant'Class, or some implementation-defined aspect_mark. The policy_identifier shall be either Check, Ignore, or some implementation-defined identifier.

[AARM: Implementation defined: Implementation-defined policy_identifiers and assertion_aspect_marks allowed in a pragma Assertion_Policy.]

------

Replace 11.4.2(10/2) with:

A pragma Assertion_Policy determines for each assertion aspect named in the pragma_argument_associations whether assertions of the given aspect are to be enforced by a run-time check. The policy_identifier Check requires that assertion expressions of the given aspect be checked that they evaluate to True at the points specified for the given aspect; the policy_identifier Ignore requires that the assertion expression not be evaluated at these points, and the run-time checks not be performed. @Redundant[Note that for subtype predicate aspects (see 3.2.4), even when the applicable Assertion_Policy is Ignore, the predicate will still be evaluated as part of membership tests and Valid attribute_references, and if static, will still have an effect on loop iteration over the subtype, and the selection of case_alternatives and variant_alternatives.]
If no assertion_aspect_marks are specified in the pragma, the specified policy applies to all assertion aspects. A pragma Assertion_Policy applies to the named assertion aspects in a specific region, and applies to all assertion expressions specified in that region. A pragma Assertion_Policy given in a declarative_part or immediately within a package_specification applies from the place of the pragma to the end of the innermost enclosing declarative region. The region for a pragma Assertion_Policy given as a configuration pragma is the declarative region for the entire compilation unit (or units) to which it applies. If a pragma Assertion_Policy applies to a generic_instantiation, then the pragma Assertion_Policy applies to the entire instance. AARM note: This means that an Assertion_Policy pragma that occurs in a scope enclosing the declaration of a generic unit but not also enclosing the declaration of a given instance of that generic unit will not apply to assertion expressions occurring within the given instance. If multiple Assertion_Policy pragmas apply to a given construct for a given assertion aspect, the assertion policy is determined by the one in the innermost enclosing region of a pragma Assertion_Policy specifying a policy for the assertion aspect. If no such Assertion_Policy pragma exists, the policy is implementation defined. [AARM: Implementation defined: The default assertion policy.] ---------- Modify 11.4.2(16/2): A compilation unit containing a {check for an assertion (including a }pragma Assert{)} has a semantic dependence on the Assertions library unit. [Needed because all such checks raise Assertions.Assertion_Error.] ---------- Delete 11.4.2(17/2). ---------- Replace 11.4.2(18/2) with: If performing checks is required by the Assert assertion policy in effect at the place of a pragma Assert, the elaboration of the pragma consists of evaluating the boolean expression, and if the result is False, evaluating the Message argument, if any, and raising the exception Assertions.Assertion_Error, with a message if the Message argument is provided. ---------- Replace 11.5(7.2/3) with: If a checking pragma applies to a generic_instantiation, then the checking pragma also applies to the entire instance. AARM note: This means that a Suppress pragma that occurs in a scope enclosing the declaration of a generic unit but not also enclosing the declaration of a given instance of that generic unit will not apply to constructs within the given instance. [Note: The inline part of this rule was deleted as part of these discussions.] ------------ Replace 11.5(25) with: All_Checks Represents the union of all checks; suppressing All_Checks suppresses all checks other than those associated with assertions. In addition, an implementation is allowed (but not required) to behave as if a pragma Assertion_Policy(Ignore) applies to any region to which pragma Suppress(All_Checks) applies. AARM Discussion: We don't want to say that assertions are suppressed, because we don't want the potential failure of an assertion to cause erroneous execution (see below). Thus they are excluded from the suppression part of the above rule and then handled with an implicit Ignore policy. !discussion In order to achieve the intended effect that library writers stay in control over the enforcement of pre- and postconditions as well as type invariants, the policy control needs to extend over all uses of the respective types and subprograms. The place of usage is irrelevant. 
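For example (a sketch with invented names), a precondition given in a library compiled under a Check policy is checked on every call, even from a client compiled under Ignore:

   pragma Assertion_Policy (Pre => Check);        -- the library is compiled this way
   package Stacks is
      type Stack is private;
      function Is_Empty (S : Stack) return Boolean;
      procedure Pop (S : in out Stack)
        with Pre => not Is_Empty (S);             -- enabled at this aspect specification
   private
      type Stack is record
         Count : Natural := 0;
      end record;
   end Stacks;

   pragma Assertion_Policy (Ignore);              -- the client is compiled this way
   with Stacks;
   procedure Client (S : in out Stacks.Stack) is
   begin
      Stacks.Pop (S);
      -- The precondition check is still performed on this call, because the
      -- policy in effect at the point of Pop's aspect specification was Check.
   end Client;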
This policy control is enforced by the rules in the !wording. For Assert pragmas, the policy in effect at the place of the pragma is the controlling policy. For subtype predicates, the relevant policy is the one in effect at the point where the "nearest" applicable predicate specification is provided.

The Assertion_Policy pragma allows control over each assertion aspect individually, if so desired. The need was clearly identified, e.g., to ensure enforcement of preconditions, while postconditions might be left to the whims of the library user.

The regions to which Assertion_Policy pragmas apply can be nested (as in the library example). The simple rule is that "inner" pragmas take precedence over "outer" pragmas for any given assertion aspect.

Implementation-defined policies and assertion aspects are allowed so that compiler vendors can experiment with more elaborate schemes.

A clear separation was made between language checks that, if failing but suppressed, render the program erroneous, and assertions, for which suppression does not cause language issues that lead to erroneousness. This is one of the reasons why assertion control was not subsumed under the pragmas Suppress and Unsuppress with appropriate check names. For example, in verified code, the policy Ignore for assertions makes sense without impacting the program semantics. (Suppressing a range check, on the other hand, can lead to abnormal values.)

The ARG felt the following capability to be very useful, but shied away from extending the model so late in the production of the 2012 standard. Ada 2020 should consider this capability (AI12-0022-1 has been opened to do this):

Example: Imagine the following routine in a GUI library:

   procedure Show_Window (Window : in out Root_Window);
   -- Shows the window.
   -- Raises Not_Valid_Error if Window is not valid.

We would like to be able to use a predicate to express the check described in the comment. With an "On_Failure" aspect we could do this without changing the semantics:

   subtype Valid_Root_Window is Root_Window with
      Dynamic_Predicate => Is_Valid (Valid_Root_Window),
      On_Failure => Not_Valid_Error;

   procedure Show_Window (Window : in out Valid_Root_Window);
   -- Shows the window.

If we didn't have the "On_Failure" aspect here, using the predicate as a precondition in lieu of a condition explicitly checked by the code would change the exception raised on this failure to Assertion_Error. This would obviously not be acceptable for existing packages and would be too limiting for future packages. For upward compatibility, similar considerations apply to preconditions, so the "On_Failure" aspect is needed for them as well. We could also imagine that after one On_Failure aspect has been specified, additional preconditions could be defined for the same subprogram with distinct On_Failure aspects specifying distinct exceptions to be raised.

-----

For pragma Inline (and for Assertion_Policy), we decided to eliminate the rule that said that the presence of a pragma Suppress or Unsuppress at the point of a call would affect the code inside the called body, if pragma Inline applies to the body. Given that inlining is merely advice, and is not supposed to have any significant semantic effect, having it affect the presence or absence of checks in the body, whether or not it is actually inlined, seemed unwise.
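A sketch of the interaction that the removed sentence of 11.5(7.2) would have created (names invented):

   package Lib is
      procedure Op (X : Integer);
      pragma Inline (Op);
   end Lib;

   with Lib;
   procedure Client is
      pragma Suppress (Range_Check);
   begin
      Lib.Op (123);
      -- Old rule: because Range_Check is suppressed at this call and a pragma
      -- Inline applies to Op, the suppression also applied to the inlined body of Op.
      -- New rule: whether checks are suppressed inside Op's body depends only on
      -- the checking pragmas that apply to the body itself.
   end Client;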
Furthermore, in many Ada compilers, the decision to inline may be made very late, so if the rule is instead interpreted as only having an effect if the body is in fact inlined, it is still a problem, because the decision to inline may be made after the decision is made whether to insert or omit checks. ---- This example illustrates the motivation for the new 3.8.1 rule about variant parts: declare pragma Assertion_Policy (Ignore); subtype Non_Zero is Integer with Static_Predicate => Non_Zero /= 0; type Rec (D : Non_Zero) is record case D is when Integer'First .. -1 => ...; when 1 .. Integer'Last => ....; end case; end record; Zero : Integer := Ident_Int (0); subtype Zero_Sub is Rec (D => Zero); -- no exception is raised here X : Rec (D => Zero); -- raises Constraint_Error begin null; end; We could require that a subtype declaration such as Zero_Sub fail a runtime check, but this seemed similar to per-object constraint checking. !corrigendum 2.8(3) @drepl @xcode<@fa@i<@ft>@fa] name | [>@i<@ft>@fa] expression>> @dby @xcode<@fa@i<@ft>@fa] name | [>@i<@ft>@fa] expression | >@i<@ft>@fa name | >@i<@ft>@fa expression>> !corrigendum 2.8(4) @drepl @xindent, any @fas without a @i@fa shall precede any associations with a @i@fa.> @dby @xindent, any @fas without a @i@fa or @i@fa shall precede any associations with a @i@fa or @i@fa.> !corrigendum 3.2.4(0) @dinsc [A placeholder to cause a conflict; the real wording is found in the conflict file.] !corrigendum 3.8.1(21) @dinsa A record value contains the values of the components of a particular @fa only if the value of the discriminant governing the @fa is covered by the @fa of the @fa. This rule applies in turn to any further @fa that is, itself, included in the @fa of the given @fa. @dinst When an object of a discriminated type @i is initialized by default, Constraint_Error is raised if no @fa of any @fa of a @fa of @i covers the value of the discriminant that governs the @fa. When a @fa appears in the @fa of another @fa @i, this test is only applied if the value of the discriminant governing @i is covered by the @fa of @i. !corrigendum 4.6(51/2) @drepl After conversion of the value to the target type, if the target subtype is constrained, a check is performed that the value satisfies this constraint. If the target subtype excludes null, then a check is made that the value is not null. @dby After conversion of the value to the target type, if the target subtype is constrained, a check is performed that the value satisfies this constraint. If the target subtype excludes null, then a check is made that the value is not null. If predicate checks are enabled for the target subtype (see 3.2.4), a check is performed that the predicate of the target subtype is satisfied for the value. !corrigendum 4.6(57) @drepl If an Accessibility_Check fails, Program_Error is raised. Any other check associated with a conversion raises Constraint_Error if it fails. @dby If an Accessibility_Check fails, Program_Error is raised. If a predicate check fails, Assertions.Assertion_Error is raised. Any other check associated with a conversion raises Constraint_Error if it fails. !corrigendum 4.9.1(4) @drepl A constraint is @i with a scalar subtype if it statically matches the constraint of the subtype, or if both are static and the constraint is compatible with the subtype. A constraint is @i with an access or composite subtype if it statically matches the constraint of the subtype, or if the subtype is unconstrained. 
One subtype is @i with a second subtype if the constraint of the first is statically compatible with the second subtype. @dby [A placeholder to cause a conflict; the real wording is found in the conflict file.] !corrigendum 6.1.1(0) @dinsc [A placeholder to cause a conflict; the real wording is found in the conflict file.] !corrigendum 7.3.2(0) @dinsc [A placeholder to cause a conflict; the real wording is found in the conflict file.] !corrigendum 11.4.2(6/2) @drepl @xcode<@ft<@b Assertion_Policy(@i@fa);> @dby @xcode<@ft<@b Assertion_Policy(@i@fa); @b Assertion_Policy( @i@fa =@> @i@fa {, @i>assertion_>@fa =@> @i@fa});> !corrigendum 11.4.2(7/2) @drepl A @fa Assertion_Policy is a configuration pragma. @dby A @fa Assertion_Policy is allowed only immediately within a @fa, immediately within a @fa, or as a configuration pragma. !corrigendum 11.4.2(9/2) @drepl The @i@fa of a @fa Assertion_Policy shall be either Check, Ignore, or an implementation-defined identifier. @dby The @i@fa of a @fa Assertion_Policy shall be one of Assert, Static_Predicate, Dynamic_Predicate, Pre, Pre'Class, Post, Post'Class, Type_Invariant, Type_Invariant'Class, or some implementation defined @fa. The @i@fa shall be either Check, Ignore, or some implementation-defined @fa. !corrigendum 11.4.2(10/2) @drepl A @fa Assertion_Policy is a configuration pragma that specifies the assertion policy in effect for the compilation units to which it applies. Different policies may apply to different compilation units within the same partition. The default assertion policy is implementation-defined. @dby A @fa Assertion_Policy determines for each assertion aspect named in the @fas whether assertions of the given aspect are to be enforced by a run-time check. The @i@fa Check requires that assertion expressions of the given aspect be checked that they evaluate to True at the points specified for the given aspect; the @i@fa Ignore requires that the assertion expression not be evaluated at these points, and the run-time checks not be performed. Note that for subtype predicate aspects (see 3.2.4), even when the applicable Assertion_Policy is Ignore, the predicate will still be evaluated as part of membership tests and Valid @fas, and if static, will still have an effect on loop iteration over the subtype, and the selection of @fas and @fas. If no @i@fas are specified in the pragma, the specified policy applies to all assertion aspects. A @fa Assertion_Policy applies to the named assertion aspects in a specific region, and applies to all assertion expressions specified in that region. A @fa Assertion_Policy given in a @fa or immediately within a @fa applies from the place of the pragma to the end of the innermost enclosing declarative region. The region for a @fa Assertion_Policy given as a configuration pragma is the declarative region for the entire compilation unit (or units) to which it applies. If a @fa Assertion_Policy applies to a @fa, then the @fa Assertion_Policy applies to the entire instance. If multiple Assertion_Policy pragmas apply to a given construct for a given assertion aspect, the assertion policy is determined by the one in the innermost enclosing region of a @fa Assertion_Policy specifying a policy for the assertion aspect. If no such Assertion_Policy pragma exists, the policy is implementation defined. !corrigendum 11.4.2(16/2) @drepl A compilation unit containing a @fa Assert has a semantic dependence on the Assertions library unit. 
@dby

A compilation unit containing a check for an assertion (including a @fa Assert) has a semantic dependence on the Assertions library unit.

!corrigendum 11.4.2(17/2)

@ddel
The assertion policy that applies to a generic unit also applies to all its instances.

!corrigendum 11.4.2(18/2)

@drepl
An assertion policy specifies how a @fa Assert is interpreted by the implementation. If the assertion policy is Ignore at the point of a @fa Assert, the pragma is ignored. If the assertion policy is Check at the point of a @fa Assert, the elaboration of the pragma consists of evaluating the boolean expression, and if the result is False, evaluating the Message argument, if any, and raising the exception Assertions.Assertion_Error, with a message if the Message argument is provided.
@dby
If performing checks is required by the Assert assertion policy in effect at the place of a @fa Assert, the elaboration of the pragma consists of evaluating the boolean expression, and if the result is False, evaluating the Message argument, if any, and raising the exception Assertions.Assertion_Error, with a message if the Message argument is provided.

!corrigendum 11.5(7.2/2)

@drepl
If a checking pragma applies to a generic instantiation, then the checking pragma also applies to the instance. If a checking pragma applies to a call to a subprogram that has a @fa Inline applied to it, then the checking pragma also applies to the inlined subprogram body.
@dby
If a checking pragma applies to a @fa, then the checking pragma also applies to the entire instance.

!corrigendum 11.5(25)

@drepl
@xhang<@xterm Represents the union of all checks; suppressing All_Checks suppresses all checks.>
@dby
@xhang<@xterm Represents the union of all checks; suppressing All_Checks suppresses all checks other than those associated with assertions. In addition, an implementation is allowed (but not required) to behave as if a pragma Assertion_Policy(Ignore) applies to any region to which pragma Suppress(All_Checks) applies.>

!ACATS Test

Create ACATS C-Tests to test these changes, specifically that Ignore really ignores the check and local Check policies override more global Ignore policies.

!ASIS

** TBD: Some change to pragma processing probably will be needed to account for the 'Class possibility in aspect_marks.

!appendix

From: Randy Brukardt
Sent: Monday, November 21, 2011  5:13 PM

Let me start by saying that it is awfully late for any significant change, my default position is no change at this point, and the following issue isn't that critical to get right. But I'd like to discuss it some and either feel better about our current decision or possibly consider some alternatives.

----

John remarked in his latest ARG draft:

>> *** I really think that this should raise Constraint_Error and that
>> subtype predicates should not be controlled by Assertion_Policy but
>> by Suppress. Ah well.....

My initial response was:

> I agree with this. The problem is that you want to replace constraints with
> (static) predicates, but you can't really do that because people are used to
> removing assertions even though they would never remove a constraint
> check.

But upon thinking about it further, it's not that clear-cut. First, it would be odd for static and dynamic predicates to raise different exceptions. Second, there really isn't that much difference in use between predicates and preconditions (entering calls) and invariants and postconditions (after calls). So there is an argument that they all should be the same.
(But of course you can make the same arguments for constraints in both of the positions.) The thing that really bothers me is that contracts somehow seem less important than constraints, even though their purposes are the same. While assertions (including the "body postconditions" we discussed a few weeks ago) really are in a different category of importance. So it seems weird to me to include pragma Assert and the precondition and predicate aspects in the same bucket as far are suppression/ignoring is concerned. Let me give an example from Claw to illustrate my point. Most Claw routines have requirements on the some or all of the parameters passed. For instance, Show is defined as: procedure Show (Window : in Root_Window_Type; How : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal); -- Show Window according to How. -- Raises: -- Not_Valid_Error if Window does not have a valid (Windows) window. -- Windows_Error if Windows returns an error. The implementation of Show will start something like: procedure Show (Window : in Root_Window_Type; How : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal) is begin if not Is_Valid (Window) then raise Not_Valid_Error; end if; -- Assume Window is valid in the following code. ... end Show; This is of course Ada 95 code (assuming no implementation-defined extensions). So we couldn't have used pragma Assert. But even if it had existed, I don't think we would have used it because we consider the validity check as an important part of the contract and would not want it turned off. In particular, the body of Show will take no precautions against Window being invalid, and if violated, almost anything could happen. (Especially if checks are also suppressed; note that the Claw documentation says that compiling Claw with checks suppressed is not supported.) Essentially, this could be incorrect without the check, and as such it is not appropriate to use an assertion (which is supposed to be able to be ignored) to make the check. If I was writing this code in Ada 2012, I would want to use a predicate or precondition to make this requirement more formal (and executable and visible to tools and all of those other good things). For instance, using a predicate: (I'm using a precondition mainly because it is easier to write here; I'd probably use a predicate in practice because almost all of the Claw routines need this contract and it would be a pain to repeat it hundreds of times. But the same principles apply either way.) procedure Show (Window : in Root_Window_Type; How : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal) with Pre => Is_Valid (Window); -- Show Window according to How. -- Raises: -- Windows_Error if Windows returns an error. We'd prefer to remove the check from the body: procedure Show (Window : in Root_Window_Type; How : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal) is begin -- Assume Window is valid in the following code. ... end Show; But note now that the body is incorrect if the Assertion_Policy is Ignore and the Window really is invalid. This is a bad thing: it means that code is at risk of being used in a way that has not be tested and is not supported. (Obviously, in this case the results aren't quite as bad in that the results aren't safety-critical. But what about a subpool-supporting storage pool? These are defined with a precondition: with Pre'Class => Pool_of_Subpool(Subpool) = Pool'Access; and I'd expect a user-written pool to do very bad things if this is not true. 
I'd surely not expect such a pool to repeat this check, but it would be necessary to be defensive.) To see this even more clearly, imagine that the containers library had implemented most of its exceptional cases as predicates or preconditions. The body of the containers may not even be implemented in Ada (GNAT for instance suppresses checks as a matter of course in the predefined packages); if those cases aren't checked, it is hard to predict what might happen. Possibly important aside: I was wrong when I said the writing the condition as a precondition or predicate didn't matter. For a precondition, the declaration of the call determines what Assertion_Policy is in effect. Thus, extending the existing Claw rule to include "compiling Claw with the Assertion_Policy to be other than Check is not supported" would be sufficient to eliminate the problem (at least formally - not so sure that is true practically). For predicates, however, it is the Assertion_Policy in effect at the point of the evaluation (subtype conversion, etc.) that determines whether the check is made. That means that predicates can be ignored even if all of Claw is compiled with Assertion_Policy = Check -- which makes the protections even weaker (since I don't think a library should be dictating how a client compiles their code). End possibly important aside. Fundamentally, contract aspects are more like constraints than they are like assertions. So it is dubious to put them into the same bucket with them. I know at one point Tucker argued that making contracts "suppressed" rather than "ignored" was bad because it could mean that adding contracts could make a program less-safe. And I agree with that, except that the problem exists anyway because the body of a subprogram with contracts is going to assume that those contracts are true -- and the failure of that assumption is going to make the results unpredictable. (Maybe not to the level of erroneousness, but at least in any formal sense. And if combined with Suppress as in GNAT, we will really will have erroneousness.) I think it is just as bad to require a bullet-proof library to repeat all of the predicate and precondition checks in the body "just-in-case" someone turns them off. People hate redundant code for good reason (it gets out of sync; it makes programs larger, it's often dead and thus causes problems with coverage analysis, etc.) --- If we accept the premise that assertions are different than contracts, what can we do? Going all the way to Suppression semantics is probably going too far, for a number of reasons. Most importantly, if this is not combined with Suppress of constraint checks, the code is likely to malfunction logically, but not in Ada terms. (It's likely to fail raising a predefined exception in some unexpected way; a manageable error.) The most logical solution would be to have a separate Contract_Policy. Such a policy would not include assertions (of any stripe) but would include all of the contract aspects. (And Assertion_Policy would revert to only handling assertions.) With such a separate policy, turning it off could be treated similarly to suppressing checks (something only to be done in very restricted circumstances for critical [and clearly demonstrated] needs). More importantly, management can clearly see if it is being abused and set appropriate limits. 
A less disruptive solution (from the user's perspective, it would require more rewording in the Standard than the former) would be to have an additional Assertion_Policy "Check_Contracts_and_Ignore_Others", which would do exactly what it says. One would hope that if such a policy existed that the use of the full "Ignore" could be discouraged just like the use of Suppress is discouraged. --- I've written enough on this topic. I've been uncomfortable since thinking about the implications of turning off these things on the storage pools and in containers more than a year ago. But I've never been convinced in my own mind that there is significant problem here -- and I'm still not. But noting that John has some similar concerns, I thought it would be good to air these out. Now it's your turn to comment. **************************************************************** From: Tucker Taft Sent: Monday, November 21, 2011 5:48 PM I could see adding another policy which distinguished assertions from other contract-like things. On the other hand, some folks (e.g. us) currently use assertions exactly like preconditions, postconditions, constraints, invariants, etc., so those of us already using assertions heavily probably don't see the point. If an assertion is violated, then bad things can happen. You don't generally get erroneousness, because the built-in constraint checks prevent that. But you can certainly get complete garbage-out, given enough garbage-in. As far as where the policy is relevant, I am surprised that for predicates it depends on where the check is performed. I would expect all checks associated with the predicate on a formal parameter would be on or off depending on the policy at the point where the subprogram spec is compiled, just like preconditions and postconditions. Certainly you want to know when you compile a body whether you can rely on the predicates associated with the formal parameters, otherwise you *will* get into the erroneousness world if the compiler makes the wrong assumption. **************************************************************** From: Randy Brukardt Sent: Monday, November 21, 2011 7:17 PM > I could see adding another policy which distinguished assertions from > other contract-like things. > On the other hand, some folks (e.g. us) currently use assertions > exactly like preconditions, postconditions, constraints, invariants, > etc., so those of us already using assertions heavily probably don't > see the point. If an assertion is violated, then bad things can > happen. You don't generally get erroneousness, because the built-in > constraint checks prevent that. But you can certainly get complete > garbage-out, given enough garbage-in. My view of assertions is a bit different: an assertion is something that ought to be True at a given point, but it doesn't affect the correctness of the program. IMHO, checks that do affect the correctness of the program should not be given as an assertion (pragma Assert), but rather be directly part of the program. For instance, if you have an if statement that handles two possible cases, but other cases are not handled, there ought to be some else branch with a bug box or exception raise in it -- not some assertion that can be ignored. (I've yet to see a case where a pragma Assert could do something that you couldn't do with regular code; that's not true of our old Condcomp facility, which allows conditionally compiled declarations and pragmas.) 
Thus, pragma Assert is purely a debugging aid; it never has an affect on any sort of correctness and it can be ignored without any harmful effects. I realize that pragma Assert can be used in other ways than I outlined above, but I always thought that the above was the intent -- it would surely be the way that I'd use it (if we had implemented it at all; we have not done so to date, so I've never actually used it). This difference is probably why I view the other contracty things differently, because they surely aren't optional, are not just debugging aids, and shouldn't be ignored without lots of careful consideration. Thus lumping them together doesn't make much sense. I'd be happier if there was an extra policy; but I won't scream to much since impl-def policies are allowed (so I can always define something sensible). > As far as where the policy is relevant, I am surprised that for > predicates it depends on where the check is performed. > I would expect all checks associated with the predicate on a formal > parameter would be on or off depending on the policy at the point > where the subprogram spec is compiled, just like preconditions and > postconditions. > Certainly you want to know when you compile a body whether you can > rely on the predicates associated with the formal parameters, > otherwise you *will* get into the erroneousness world if the compiler > makes the wrong assumption. I had that thought as well when I noticed that difference. The problem is that predicates belong to the subtype, not the subprogram. I could imagine a similar rule to subprograms where the policy that matters is that which applies to the subtype -- but that wouldn't necessarily give you certainty over the calls (especially when the subtype and subprogram are declared in different packages). A rule that made subtype conversions to formal parameters work differently than other subtype conversions is just too weird to contemplate. In any case, it does seem like there is a bug in the handling of subtype predicates compared to the other contracts. I suppose I shouldn't be surprised; Bob wrote the wording of predicates without considering how the other contracts worked, while the other three were pretty much developed together. Aside: We made whether pre and post-conditions checks are made for a dispatching call unspecified if the call and the invoked subprogram have different policies. We didn't do anything similar for type invariants; I would think that we ought to (the same problems apply). [I noticed this checking to see if the rules for type invariants were the same as for pre- and post-conditions; they are except for this difference.] P.S. I should stop thinking about this stuff. All I seem to find is problems. :-) **************************************************************** From: Bob Duff Sent: Monday, November 21, 2011 7:35 PM > Let me start by saying that it is awfully late for any significant > change, my default position is no change at this point, ... Agreed. I don't think this stuff is very important. There are all sorts of levels of checking ("policies") one might want (all-checks-on, checks-suppressed, checks-ignored, preconditions checked but not postconditions, etc). Implementations will provide those as needed or requested by customers. The only important one for the Standard to require is all-checks-on. Terminology: I use "assertion" to refer to any sort of checks, including pre/post-conditions. Not just pragma Assert. I think that matches Eiffel terminology. And Constraint_Error vs. 
Assertion_Failure is a non-issue. A bug is a bug. **************************************************************** From: Tucker Taft Sent: Monday, November 21, 2011 8:41 PM > ... Aside: We made whether pre and post-conditions checks are made for > a dispatching call unspecified if the call and the invoked subprogram > have different policies.... Your description seems a bit off. It is always based on where the declaration of some subprogram appears; the unspecified part is whether it is determined by the named subprogram or the invoked subprogram. It has nothing to do with where the call is. That is good because when compiling the body of a subprogram, the compiler knows which rule it is following. If it depended on where the call was, the compiler would have no idea whether to assume the precondition has been checked. > ... We didn't do anything similar for type invariants; I would think > that we ought to (the same problems apply). [I noticed this checking > to see if the rules for type invariants were the same as for pre- and > post-conditions; they are except for this difference.] Yes, I would agree we need to do roughly the same thing for invariants, or pick one, e.g., always base it on which subprogram body is invoked. I think predicate checks associated with parameter associations really need to be determined based on where the subprogram is declared (either the named subprogram or the invoked subprogram). Otherwise it is hopeless for the compiler to take any advantage of the predicate checks. **************************************************************** From: Jean-Pierre Rosen Sent: Tuesday, November 22, 2011 2:50 AM > Let me start by saying that it is awfully late for any significant > change, my default position is no change at this point, and the > following issue isn't that critical to get right. But I'd like to > discuss it some and either feel better about our current decision or > possibly consider some alternatives. > > ---- > > John remarked in his latest ARG draft: > >>> *** I really think that this should raise Constraint_Error and that >>> subtype predicates should not be controlled by Assertion_Policy but >>> by Suppress. Ah well..... > [...] discussion deleted > Now it's your turn to comment. You convinced me - maybe more than you would like ;-) I think that if we want to do it right, we need a special suppress, and a special exception: Contract_Error. That's really what it is about. The really important question is: is it too late in the game? We the ARG are very carefull about meeting our deadlines. But the message I heard from NBs and WG9 is "we are not so much in a hurry, we can accept a delay if it really improves the language". So it might be worth considering. **************************************************************** From: Tucker Taft Sent: Tuesday, November 22, 2011 7:11 AM I think it could be a mistake to make such a strong distinction between pragma Assert and these other kinds of assertions. I realize that some people have not been using assertions at all. But for those who have, they serve very much the same purpose as these new contracts. They are essentially higher-level constraint checks. To imply that somehow they are fundamentally different seems like an unjustifiable leap. And I suppose if you don't use pragma Assert, then you really shouldn't care. 
**************************************************************** From: Erhard Ploedereder Sent: Tuesday, November 22, 2011 12:23 PM Thinking more about future users...: What I would like to be able to is to turn off Assertions and Postconditions in MY code, because I have verified it to my heart's content. But what I would like to continue checking is the Preconditions of services that my library offers. After all, there is no limit to the s... of people (not reading contracts) and I want to ensure that my code works as advertised. Unfortunately, it looks like it is an all-or-nothing policy that is provided today. If that issue is (re)opened, I would argue for the differentiation above. If it is not re-opened then I want to note for 2020 that the upward incompatibility of billions and billions of lines of code relying on turning off precondition checks exists - and it would be a good thing to cause that incompatibility ;-) **************************************************************** From: Randy Brukardt Sent: Tuesday, November 22, 2011 1:01 PM > What I would like to be able to is to turn off Assertions and > Postconditions in MY code, because I have verified it to my heart's > content. But what I would like to continue checking is the > Preconditions of services that my library offers. > After all, there is no limit to the s... of people (not reading > contracts) and I want to ensure that my code works as advertised. I was thinking about that this morning. Propagating "Assertion_Error" from the body of some routine indicates a bug in that routine, pure and simple. But propagating it from a call indicates a bug in *your* code. It would be valuable to have such a distinction -- not all code is written by the same user. (Note: you need to include predicates with preconditions above.) I'm thinking about something like Claw, where the failure of a postcondition or invariant means that there is a bug in Claw (call tech support!) while the failure of a precondition or predicate means that there is a bug in the client's code (damn, better go fix it). There ought to be some differentiation between these. > Unfortunately, it looks like it is an all-or-nothing policy that is > provided today. If that issue is (re)opened, I would argue for the > differentiation above. Well, what I think we see is that there are quite a few differentiations that makes sense (contracts vs. pure debugging code [assertions]); caller vs. callee bugs. Which is probably why we punted. Hopefully, compiler optimizations of contracts will be good enough that management can treat Assertion_Policy(Ignore) the same as it treats Suppress(All_Checks) -- it needs reams of justification. (Note that in "correct" code, most of the preconditions and postconditions will match up, so turning off one or the other will have almost no effect on the code performance, since the compiler will only check one anyway.) > If it is not re-opened then I want to note for 2020 that the upward > incompatibility of billions and billions of lines of code relying on > turning off precondition checks exists - and it would be a good thing > to cause that incompatibility ;-) Agreed. Any code that *relies* on turning off checks (of any kind) is broken. There might in very rare cases be a reason to write intentionally broken code, but it should not be considered portable in any sense. 
**************************************************************** From: Tucker Taft Sent: Tuesday, November 22, 2011 2:19 PM Assertion_Policy is based on compilations (typically a source file), so perhaps you could use Ignore in the body, while using Check on the spec. I could imagine a version of Assertion_Policy which allowed you to specify a different exception is to be raised. Now go talk to your favorite compiler vendor, and don't forget to bring some cash... ;-) **************************************************************** From: Bob Duff Sent: Tuesday, November 22, 2011 2:38 PM > I could imagine a version of Assertion_Policy which allowed you to > specify a different exception is to be raised. Now go talk to your > favorite compiler vendor, and don't forget to bring some cash... ;-) I agree. I can think of all sorts of useful variations on the idea of "remove some assertions/checks for efficiency". Given that we're not all in agreement about what we want, the appropriate thing is to let vendors experiment, based on customer feedback, and maybe standardize something for Ada 2020. The Ada 2012 rules should be left alone at this point. By the way, have any of the people participating in this discussion looked at Meyer's Eiffel book? He gives a set of options for turning on/off checks, with rationale. I don't agree with everything he says, but I think we should all look at it before trying to reinvent that wheel. **************************************************************** From: Randy Brukardt Sent: Tuesday, November 22, 2011 1:25 PM ... > But for those who have, they serve very much the same purpose as these > new contracts. They are essentially higher-level constraint checks. That's an abuse of the construct, IMHO. > To imply that somehow they are fundamentally different seems like an > unjustifiable leap. They *are* fundamentally different: (1) As pragmas, they should never have an effect on the dynamic correctness of the program. It should always be possible to erase all of the pragmas and get a functioning program. Anything else is an abuse of pragmas. We got rid of almost all of the offending pragmas in Ada 2012. (2) I've always viewed its purpose as holding code that eases debugging without having any effect on correctness. My programs have a lot of such code, and it makes sense to remove it from the production code. It makes sense for the Ada language to have a feature aimed at providing supporting such uses. It does *not* make sense to remove checks that protect the server from clueless clients (that's especially true in the case where the writer of the "server" (library) is different from the writer of the client -- exactly the case where Ada has better support than other languages). As such, it doesn't make sense for the language to support such a feature. (In the rare case where a check has to be removed for performance reasons, the good old comment symbol will suffice -- such things should *never* be done globally). I can see Erhard's point that invariants and postconditions are fundamentally similar (they verify the promises to the client and internally to the server, neither of which is much interest to the client), but preconditions and predicates are very different: they detect client mistakes -- something that is totally out of the hands of the library creator. > And I suppose if you don't use pragma Assert, then you really > shouldn't care. I would use it (as outlined above) if it had been defined in Ada 95. 
But I wouldn't use it for inbound contract checks, because those should never be turned off and I don't want to mix requirements on the client with internal self-checks -- these are wildly different things. (Personally, I'd never want to even use the same exception for such things, but that can't really be helped for language-defined checks -- Constraint_Error has a similar problem of confusing client contract checks vs. body errors.) **************************************************************** From: Randy Brukardt Sent: Tuesday, November 22, 2011 1:35 PM ... > I think predicate checks associated with parameter associations really > need to be determined based on where the subprogram is declared > (either the named subprogram or the invoked subprogram). Otherwise it > is hopeless for the compiler to take any advantage of the predicate > checks. You'd have to propose a rule in order to convince me about this. A predicate is a property of a subtype, and it can get checked in a lot of contexts that have nothing to do with a subprogram call. Having different rules for subprogram calls from other things (like type conversions, aggregate components, etc.) would add a lot of implementation complexity, possibility for user confusion, and wording complexity for little gain. I could easily imagine making the rule that it is the state at the declaration of the subtype that controls whether the check is made or not. That would mean whether the check is made or not is not context dependent (and it would be more like the other checks). And I think that would be enough for the compiler to be able to make assumptions about whether the call is checked, as the subtype is necessarily visible and previously compiled when the body is compiled (so the compiler will know whether checks are on or off for the predicate at that point). Note that I think whether Type_Invariants are on or off should depend on the type declaration, and not on the subprograms (they're necessarily in the same package, so it would be very unusual for them to have a different state). That would make the most sense since a Type_Invariant is a property of a (private) type; it's odd for some unrelated declaration to be determining anything about such a property. As noted above, the compiler would always know when the body is compiled, and I think that is sufficient for it to make optimizations. **************************************************************** From: Bob Duff Sent: Tuesday, November 22, 2011 3:17 PM > I would use it (as outlined above) if it had been defined in Ada 95. > But I wouldn't use it for inbound contract checks, because those > should never be ^^^^^^^^^^^^^^^^^^ > turned off and I don't want to mix requirements on the client with > internal self-checks -- these are wildly different things. I agree with the distinction you make between internal self-checks in a library versus checks on clients' proper use of that library. But I think it's very wrong to say "should never be turned off" about ANY checks. If I have strong evidence (from testing, code reviews, independent proof tools, ...) that calls to the library won't fail preconditions, and I need to turn them off to meet my performance goals, then I want to be able to do so. CLAW is not such a case, because it's not performance-critical, because it's painting stuff on a screen. But you can't generalize from that to all libraries. Commenting them out is not a good option, because then I can't automatically turn them back on for debug/test builds. 
Look at it another way: The ability to turn off checks (of whatever sort) gives me the freedom to put in useful checks without worrying about how inefficient they are. I've written some pretty complicated assertions, on occasion. Or yet another way: The decision to turn checks on or off properly belongs to the programmer, not to us language designers. Erhard imagines a library carefully written by competent people, and a client sloppily written by nincompoops. That is indeed a common case, but the opposite case, and everything in between, can also happen. **************************************************************** From: Randy Brukardt Sent: Tuesday, November 22, 2011 4:23 PM > Assertion_Policy is based on compilations (typically a source file), > so perhaps you could use Ignore in the body, while using Check on the > spec. > > I could imagine a version of Assertion_Policy which allowed you to > specify a different exception is to be raised. Now go talk to your > favorite compiler vendor, and don't forget to bring some cash... ;-) My favorite vendor needs more time to do the work as opposed to cash. :-) [Although I suppose cash to live on would help provide that time...probably ought to go buy some lottery tickets. :-)] I agree that vendor extensions here will be important (some sort of function classification to allow optimizations will be vital). I do worry that we'll end up with too much fragmentation this way -- although I suppose we'll all end up copying whatever AdaCore does by some sort of necessity. **************************************************************** From: Randy Brukardt Sent: Tuesday, November 22, 2011 4:56 PM ... > But I think it's very wrong to say "should never be turned off" > about ANY checks. If I have strong evidence (from testing, code > reviews, independent proof tools, ...) that calls to the library won't > fail preconditions, and I need to turn them off to meet my performance > goals, then I want to be able to do so. > CLAW is not such a case, because it's not performance-critical, > because it's painting stuff on a screen. But you can't generalize > from that to all libraries. I don't disagree with this basic point, but supposed "performance goals" are usually used to justify all sorts of iffy behavior. But in actual practice, performance is impacted only a small amount by checks, and it is very rare for the checks to be the difference in performance. That is, most of the time, performance is way too slow/large/whatever, and turning off checks doesn't make enough difference. You have to switch algorithms. Or performance is really good enough as it is, and turning off checks just makes the program more fragile and prone to malfunction without any real benefit. The area between where turning off checks helps is pretty narrow. It's almost non-existent for constraint checks (a large number of which compilers remove). I like to think that the situation will be pretty similar for contract checks. Specifically, my fantasy compiler (I say fantasy compiler not because I don't know how to implement this, but more because I don't know if I'll ever have the time to actually implement it to make it real -- and I don't want to make non-real things sound like they exist) should be able to remove most invariants and postconditions by proving them true. Similarly, it should be able to remove most predicates and preconditions by proving them true (based in part on the postconditions, known to either be checked or proved true).
So the incremental effect of contract checks will be quite small. I should note that this depends on writing "good" contracts, using only provably "pure" functions. My fantasy compiler will surely give lots of warnings for bad contracts (and reject them altogether in some modes). Bad contracts always have to be evaluated and are costly - but these aren't really "contracts" given that they change based on factors other than the object values involved. One hopes these things aren't common. > Commenting them out is not a good option, because then I can't > automatically turn them back on for debug/test builds. True enough. But that's the role of pragma Assert: put the debugging stuff it in, not into contracts. > Look at it another way: The ability to turn off checks (of whatever > sort) gives me the freedom to put in useful checks without worrying > about how inefficient they are. I've written some pretty complicated > assertions, on occasion. Which is fine. But if you write complicated (and non-provable) contracts, I hope your compiler cuts you off at the knees. (My fantasy compiler surely will complain loudly about such things.) Keep contracts and debugging stuff separate! (And anything that can't be part of the production system is by definition debugging stuff!) > Or yet another way: The decision to turn checks on or off properly > belongs to the programmer, not to us language designers. True enough. But what worries me is lumping together things that are completely different. Debugging stuff can and should be turned off, and no one should have to think too much about doing so. Contract stuff, OTOH, should only be turned off under careful consideration. It's annoying to put those both under the same setting, encouraging the confusion of thinking these things are the same somehow. > Erhard imagines a library carefully written by competent people, and a > client sloppily written by nincompoops. That is indeed a common case, > but the opposite case, and everything in between, can also happen. Yes, of course. But as I note, well-written contracts can be (and should be) almost completely eliminated by compilers. So the reason for turning those off should be fairly minimal (especially as compilers get better at optimizing these). The danger is confusing debugging stuff (like expensive assertions that cannot be left in the production program -- and yes, I've written some things like that) with contract stuff that isn't costly in the first place and should be left in all programs with the possible exception of the handful on the razor's edge of performance. (And, yes, I recognize the need to be able to turn these things off for that razor's edge. I've never argued that you shouldn't be able to turn these off at all, only that they shouldn't be lumped with "expensive assertions" that can only be for debugging purposes.) BTW, my understanding is that GNAT already has much finer control over assertions and contracts than that offered by the Standard. Do you have any feeling for how often this control is used vs. the blunt instrument given in the Standard? If it is widely used, that suggests that the Standard is already deficient. **************************************************************** From: Robert Dewar Sent: Tuesday, November 22, 2011 10:07 PM > I don't disagree with this basic point, but supposed "performance > goals" are usually used to justify all sorts of iffy behavior. 
But in > actual practice, performance is impacted only a small amount by > checks, and it is very rare for the checks to be the difference in > performance. But checks can be a menace in terms of deactivated code in a certified environment, so often in a 178B context people DO want to turn off all checks, because they don't want to deal with deactivated code. > That is, most of the time, performance is way too slow/large/whatever, > and turning off checks doesn't make enough difference. You have to > switch algorithms. Or performance is really good enough as it is, and > turning off checks just makes the program more fragile and prone to > malfunction without any real benefit. Yes, most of the time, but there are VERY significant exceptions. > The area between where turning off checks helps is pretty narrow. It's > almost non-existent for constraint checks (a large number of which > compilers remove). I like to think that the situation will be pretty > similar for contract checks. That's just wrong, there are cases in which constraint checks have to be turned off to meet performance goals. > Specifically, my fantasy compiler (I say fantasy compiler not because > I don't know how to implement this, but more because I don't know if > I'll ever have the time to actually implement it to make it real -- > and I don't want to make non-real things sound like they exist) should be > able to remove most invariants and postconditions by proving them > true. Similarly, it should be able to remove most predicates and > preconditions by proving them true (based in part on the postconditions, known to either be checked or proved true). > So the incremental effect of contract checks will be quite small. Well hardly worth discussing fantasies! > Which is fine. But if you write complicated (and non-provable) > contracts, I hope your compiler cuts you off at the knees. (My fantasy > compiler surely will complain loudly about such things.) Keep > contracts and debugging stuff separate! (And anything that can't be > part of the production system is by definition debugging stuff!) I think you definitely are in fantasy land here, and I don't see any point trying to set you straight, since you proclaim this to be a fantasy. > True enough. But what worries me is lumping together things that are > completely different. Debugging stuff can and should be turned off, > and no one should have to think too much about doing so. Contract > stuff, OTOH, should only be turned off under careful consideration. > It's annoying to put those both under the same setting, encouraging > the confusion of thinking these things are the same somehow. For many people they are the same somehow, because assertions were always about contracts... > Yes, of course. But as I note, well-written contracts can be (and > should be) almost completely eliminated by compilers. So the reason > for turning those off should be fairly minimal (especially as > compilers get better at optimizing these). This idea of complete elimination is plain nonsense, and the idea that it can happen automatically is even more nonsensical. Randy, you might want to follow the Hi-Lite project to get a little more grounded here and also familiarize yourself with SPARK, and what can and cannot be feasibly achieved in terms of proof of partial correctness.
> The danger is confusing debugging stuff (like expensive assertions > that cannot be left in the production program -- and yes, I've written > some things like that) with contract stuff that isn't costly in the > first place and should be left in all programs with the possible > exception of the handful on the razor's edge of performance. (And, > yes, I recognize the need to be able to turn these things off for that > razor's edge. I've never argued that you shouldn't be able to turn > these off at all, only that they shouldn't be lumped with "expensive > assertions" that can only be for debugging purposes.) I find your distinction between assertions and pre/postconditions etc to be pretty bogus. > BTW, my understanding is that GNAT already has much finer control over > assertions and contracts than that offered by the Standard. Do you > have any feeling for how often this control is used vs. the blunt > instrument given in the Standard? If it is widely used, that suggests > that the Standard is already deficient. I have never seen any customer code using this fine level of control. It still seems reasonable to provide it! Actually in GNAT, we have a generalized assertion pragma Check (checkname, assertion [, string]); where checkname can be referenced in a Check_Policy to turn it on or off, but I don't know if anyone uses it. Internally something like a precondition gets turned into pragma Check (Precondition, .....) **************************************************************** From: Robert Dewar Sent: Tuesday, November 22, 2011 11:07 PM ... > I'm thinking about something like Claw, where the failure of a > postcondition or invariant means that there is a bug in Claw (call > tech support!) while the failure of a precondition or predicate means > that there is a bug in the client's code (damn, better go fix it). > There ought to be some differentiation between these. I would assume that appropriate messages indicate what is going on. **************************************************************** From: Bob Duff Sent: Wednesday, November 23, 2011 7:26 AM > BTW, my understanding is that GNAT already has much finer control over > assertions and contracts than that offered by the Standard. Do you > have any feeling for how often this control is used vs. the blunt > instrument given in the Standard? If it is widely used, that suggests > that the Standard is already deficient. My evidence is not very scientific, but my vague impression is that most people turn checks on or off for production. I.e. not fine grained. I think "fine grained" would be a good idea in many cases, but I suppose most people find it to be too much trouble. Which reminds me of another reason to have a way to turn off ALL checks: If you're thinking of turning off some checks, you should turn off all checks, and measure the speed -- that tells you whether turning off SOME checks can help, and a best-case estimate of how much. **************************************************************** From: Randy Brukardt Sent: Wednesday, November 23, 2011 1:58 PM > > I don't disagree with this basic point, but supposed "performance > > goals" are usually used to justify all sorts of iffy behavior. But > > in actual practice, performance is impacted only a small amount by > > checks, and it is very rare for the checks to be the difference in > > performance. 
> > But checks can be a menace in terms of deactivated code in a certified > environment, so often in a 178B context people DO want to turn off all > checks, because they don't want to deal with deactivated code. I don't understand. A check of the sort I'm talking about is by definition not something "deactivated". Either the compiler can prove it to be OK, in which case it doesn't appear in the code at all (it's essentially a comment), or it is executed every time the program runs. In which case it is an integral part of the code. I'd expect problems in terms of 178B for the handlers of check failures (since there wouldn't be an obvious way to execute or verify them), but not the checks themselves. Perhaps you meant the handlers? ... > > The area between where turning off checks helps is pretty narrow. > > It's almost non-existent for constraint checks (a large number of > > which compilers remove). I like to think that the situation will be > > pretty similar for contract checks. > > That's just wrong, there are cases in which constraint checks have to > be turned off to meet performance goals. I said "almost non-existent". The only time it makes sense to turn off constraint checks is in a loop, verified to be very "hot" in terms of program performance, and no better algorithm is available. And even then, you would be better off restructuring the loop to move the checks outside of it rather than turning them off (that's usually possible with the addition of subtypes). ... > > Which is fine. But if you write complicated (and non-provable) > > contracts, I hope your compiler cuts you off at the knees. (My > > fantasy compiler surely will complain loudly about such things.) > > Keep contracts and debugging stuff separate! (And anything that > > can't be part of the production system is by definition debugging > > stuff!) > > I think you definitely are in fantasy land here, and I don't see any > point trying to set you straight, since you proclaim this to be a > fantasy. I'd like you to try. The only reason I proclaimed this a fantasy is because I have no idea if I'll ever be able to spend the 2-3 months of time to implement aspect_specifications and Pre/Post in the Janus/Ada *front-end*. That will be a *lot* of work. But the "proving" is just the stuff the Janus/Ada optimizer (and I would expect every other compiler optimizer) already does. I'd add two very simple extensions to what it already does: (1) add support for "facts" to the intermediate code -- that is, expressions that the optimizer knows the value of without explicitly evaluating them (so they would have no impact on generated code); (2) extending common-subexpression elimination to function calls of functions that are provably pure (using a new categorization). The only real issue I see with this is that the Janus/Ada optimizer is not set up to give detailed feedback to the user, which means that it can only report that it was unable to "prove" a postcondition -- it can't give any indication as to why. Obviously that is something that will need future enhancement. > > True enough. But what worries me is lumping together things that are > > completely different. Debugging stuff can and should be turned off, > > and no one should have to think too much about doing so. Contract > > stuff, OTOH, should only be turned off under careful consideration. > > It's annoying to put those both under the same setting, encouraging > > the confusion of thinking these things are the same somehow.
> > For many people they are the same somehow, because assertions were > always about contracts... Hiding contracts in the body was always a bad idea. In any case, there is clearly a grey area, and it will move as compiler (and other tools) technology improves. But I would say that anything that a compiler cannot use in proving should be an assertion (that is, a debugging aid) rather than part of a contract. That means to me that such things should not change the correctness of the program if turned off, and they should be suppressable independent of those things that can be used automatically. > > Yes, of course. But as I note, well-written contracts can be (and > > should be) almost completely eliminated by compilers. So the reason > > for turning those off should be fairly minimal (especially as > > compilers get better at optimizing these). > > This idea of complete elimitation is plain nonsense, and the idea that > it can happen automatically even more nonsensical. > Randy, you might want to follow the Hi-Lite project to get a little > more grounded here and also familiarize yourself with SPARK, and what > can and cannot be feasibly achieved in terms of proof of partial > correctness. I think SPARK is doing more harm than good at this point. It was very useful as a proof-of-concept, but it requires a mindset and leap that prevents the vast majority of programmers from every using any part of it. The effect is that they are solving the wrong problem. In order for something to be used by a majority of (Ada) programmers, it has to have at least the following characteristics: (1) Little additional work required; (2) Little degrading of performance; (3) Has to be usable in small amounts. The last is the most important. The way I and probably many other Ada programmers learned the benefit of Ada constraint checks was to write some code, and then have a bug detected by such a check. It was noticed that (a) fixing the bug was simple since the check pinpointed the location of the problem, and (b) finding and fixing the bug would have been much harder without the check. That caused me to add additional constraints to future programs, which found additional bugs. The feedback back loop increased the adoption of the checks to the point that I will only consider turning them off for extreme performance reasons. I think we have to set up the same sort of feedback loop for contracts. That means that simple contracts should have little impact on performance (and in some cases, they might even help performance by giving additional information to the optimizer and code generator). (I also note that this focus also provides obvious ways to leverage multicore host machines, which is good as a lot of these techniques are not easily parallelizable.) As such, I'm concentrating on what can be done with existing technology -- that is, subprogram level understanding using well-known optimization techniques. I'm sure we'll want to go beyond that eventually, but we need several iterations of the feedback loop in order to build up customer demand for such things. (And then we'll have to add exception contracts, side-effect contracts, and similar things to Ada to support stronger proofs.) 
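A sketch of the kind of cheap, provable contract being described here -- the Stacks package and its operations are illustrative names only, and the body is not shown:

   package Stacks is
      type Item is new Integer;
      type Item_Array is array (Positive range 1 .. 100) of Item;
      type Stack is record
         Top  : Natural := 0;
         Data : Item_Array;
      end record;

      function Length   (S : Stack) return Natural is (S.Top);
      function Capacity (S : Stack) return Natural is (S.Data'Length);

      procedure Push (S : in out Stack; X : Item)
        with Pre  => Length (S) < Capacity (S),
             Post => Length (S) = Length (S)'Old + 1;

      procedure Pop (S : in out Stack)
        with Pre  => Length (S) > 0,
             Post => Length (S) = Length (S)'Old - 1;
   end Stacks;

At a call site that does Push (S, X); followed by Pop (S);, the precondition of Pop follows from the postcondition of Push (Length (S) is at least 1 after the Push), which is exactly the sort of subprogram-local fact that ordinary optimizer machinery could use to drop the run-time test.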
> > The danger is confusing debugging stuff (like expensive assertions > > that cannot be left in the production program -- and yes, I've > > written some things like that) with contract stuff that isn't costly > > in the first place and should be left in all programs with the > > possible exception of the handful on the razor's edge of > > performance. (And, yes, I recognize the need to be able to turn > > these things off for that razor's edge. I've never argued that you > > shouldn't be able to turn these off at all, only that they shouldn't > > be lumped with "expensive assertions" that can only be for debugging > > purposes.) > > I find your distinction between assertions and pre/postconditions etc > to be pretty bogus. Fair enough. But then there will never be any real progress here, because few people will use expensive contracts that slow down their programs. And I *really* don't want to go back to the bad old days of multiple versions of programs to test (one with checks and one without). It's like the old saw says: create and test a car with seatbelts, then remove them in the production version. So I want to separate the contracts (an integral part of program correctness, and almost never turned off) from the assertions (expensive stuff that has little impact on program correctness, can be turned off without impact). I suspect that one probably could easily find several additional categories, which you may want to control separately. Note that one of the reasons why I supported the "assert_on_start" and "assert_on_exit" ideas is that these provide a place to put those expensive assertions that cannot be reasoned about with simple technology. I surely agree that just because something is expensive is no good reason not to include it. But simply because you have a few expensive assertions around is no good reason to turn off the bulk of the contracts which are cheap and easily eliminated completely. **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 2:19 PM ... > My view of assertions is a bit different: an assertion is something > that ought to be True at a given point, but it doesn't affect the > correctness of the program. IMHO, checks that do affect the > correctness of the program should not be given as an assertion (pragma > Assert), but rather be directly part of the program. For instance, if > you have an if statement that handles two possible cases, but other > cases are not handled, there ought to be some else branch with a bug > box or exception raise in it -- not some assertion that can be > ignored. (I've yet to see a case where a pragma Assert could do > something that you couldn't do with regular code; that's not true of > our old Condcomp facility, which allows conditionally compiled > declarations and > pragmas.) OK, for me, I am with Tuck: assertions are definitely about preconditions and postconditions, and if false, something is just as wrong as if a Pre or Post aspect was violated. > I realize that pragma Assert can be used in other ways than I outlined > above, but I always thought that the above was the intent -- it would > surely be the way that I'd use it (if we had implemented it at all; we > have not done so to date, so I've never actually used it). No, that's not at all the intent as I see it; in fact I find your interpretation very odd.
> Thus, pragma Assert is purely a debugging aid; it never has an effect > on any sort of correctness and it can be ignored without any harmful > effects. I absolutely disagree with this view of assertions. The primary purpose of assertions is to inform the reader of pre and post conditions for sections of code. They are valuable even if turned off, and doing the same thing with regular code is actively confusing. Assertions are NOT part of the program, they are statements about the program (aka contracts), at least as I use them. Yes, they can be turned on, and just as Pre/Post are useful debugging aids when turned on, so are our assertions. Once again, I see no substantial differences between assertions and pre/post conditions. Have a look for example at the sources of einfo.ads/adb in the GNAT compiler. The subprograms in the body are full of assertions like:

   > procedure Set_Non_Binary_Modulus (Id : E; V : B := True) is
   > begin
   >    pragma Assert (Is_Type (Id) and then Is_Base_Type (Id));
   >    Set_Flag58 (Id, V);
   > end Set_Non_Binary_Modulus;

Here the assert is to be read as, and behaves as, a precondition. Any call that does not meet this precondition is wrong. Yes, it would be much better if these were expressed as preconditions in the spec, but we didn't have that capability fifteen years ago. We have an internal ticket to replace many of our assertions with pre/post conditions, but that's not always feasible. Consider this kind of code:

   if Ekind (E) = E_Signed_Integer then
      ...
   elsif Ekind (E) = E_Modular_Integer then
      ...
   else
      pragma Assert (Is_Floating_Point_Type (E));
      ...

Here the pragma Assert is saying that the only possibility (in the absence of someone screwing up preconditions somewhere) is a floating-point type. It does not need to be tested, since it is the only possibility, but this is very useful documentation, and of course we can turn on assertions, and then, like all contracts, we enable useful debugging capabilities. **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 2:21 PM Overall comment from Robert. I don't think anything needs to be changed here. Let's let usage determine the need for finer grained control, and next time around, we can consider whether to standardize some of this finer control. In practice it won't make much difference, most implementations go out of their way to accommodate pragmas/attributes/restrictions etc from other implementations, we have dozens of pragmas that are there just because some other implementation invented them. **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 2:23 PM > Thinking more about future users...: > > What I would like to be able to do is to turn off Assertions and > Postconditions in MY code, because I have verified it to my heart's > content. But what I would like to continue checking is the > Preconditions of services that my library offers. > After all, there is no limit to the s... of people (not reading > contracts) and I want to ensure that my code works as advertised. > > Unfortunately, it looks like it is an all-or-nothing policy that is > provided today. If that issue is (re)opened, I would argue for the > differentiation above. Don't worry, so far every implementation that provides pre and post conditions provides the selective control you ask for. I will be VERY surprised if that does not continue to be the case.
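For what it's worth, the assertion-kind-specific form of pragma Assertion_Policy can state that split directly; a sketch of the sort of configuration being asked for (which policies to choose is of course up to the user):

   pragma Assertion_Policy
     (Pre    => Check,     --  keep checking what clients pass in
      Post   => Ignore,    --  internal promises, verified by other means
      Assert => Ignore);   --  plain pragma Assert debugging checks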
> If it is not re-opened then I want to note for 2020 that the upward > incompatibility of billions and billions of lines of code relying on > turning off precondition checks exists - and it would be a good thing > to cause that incompatibility ;-) I agree more control is needed, I just think it's premature to try to dictate to implementations what it should be; let users and customers decide. **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 2:31 PM > ... >> But for those who have, they serve very much the same purpose as >> these new contracts. They are essentially higher-level constraint >> checks. > > That's an abuse of the construct, IMHO. Very peculiar viewpoint; in that case, most people using assertions are, in your idiosyncratic view, abusing it. But I find this a bit silly, since we know in practice that most users of assertions are in fact carrying out this abuse, regarding it as the proper thing to do. >> To imply that somehow they are fundamentally different seems like an >> unjustifiable leap. > > They *are* fundamentally different: No, they aren't. > (1) As pragmas, they should never have an effect on the dynamic > correctness of the program. It should always be possible to erase all > of the pragmas and get a functioning program. Anything else is an > abuse of pragmas. We got rid of almost all of the offending pragmas in Ada > 2012. That's a purist position that bears no relationship to reality. E.g. if you take away a pragma that specifies the queuing policy, of course the program may fail. I would say half the pragmas in existence, both language defined and impl defined, are like that. Pragmas that affect dynamic behavior of the program (I really don't know what you mean by dynamic correctness; to most programmers, correct means the program is doing what it is meant to do):

   Detect_Blocking
   Default_Storage_Pool
   Discard_Names
   Priority_Specific_Dispatching
   Locking_Policy
   Restrictions
   Atomic
   Atomic_Components
   Attach_Handler
   Convention
   Elaborate_All
   Export
   Import
   Interrupt_Priority
   Linker_Options
   Pack
   Priority
   Storage_Size
   Volatile
   Volatile_Components

In fact the number of pragmas that do NOT affect the dynamic behavior of a program is very small. So if your viewpoint is based on this fantasy that pragmas do not affect the correctness of a program, they are built on a foundation of sand. **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 2:32 PM Really one conclusion from this thread is that there is no consensus on whether a late change is desirable, let alone important enough to do as a late change. So it seems obvious to me that the proper action is to leave things alone. You would need a VERY clear consensus to change something like this at this stage! **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 2:56 PM >> But checks can be a menace in terms of deactivated code in a >> certified environment, so often in a 178B context people DO want to >> turn off all checks, because they don't want to deal with deactivated >> code. > > I don't understand. A check of the sort I'm talking about is by > definition not something "deactivated". Either the compiler can prove > it to be OK, in which case it doesn't appear in the code at all (it's > essentially a comment), or it is executed every time the program runs. In > which case it is an integral part of the code.
> I'd expect problems in terms of 178B for the handlers of check > failures (since there wouldn't be an obvious way to execute or verify > them), but not the checks themselves. Perhaps you meant the handlers? I mean that if you have

   if A < 10 then
      raise Constraint_Error;
   end if;

then you will have deactivated code (the raise can never be executed). Now if you are doing source level coverage (which is really what 178B requires), you may be able to deal with this with a suitable traceability study. But in practice many producers of safety critical code (and not a few DER's) prefer to see every line of object code executed in tests. It is controversial whether 178B requires this, but let me say for sure Robert Dewar requires it :-) If I was managing a SC project, I would want to see 100% coverage at the object level, and having to deal with all these cases by injection tests would be too painful, so I would probably go with turning checks off. After all, if you are using a SPARK approach in which you prove that no constraint errors can occur, what's the point in leaving in junk dead tests? > I said "almost non-existent". The only time it makes sense to turn off > constraint checks is in a loop, verified to be very "hot" in terms of > program performance, and no better algorithm is available. And even > then, you would be better off restructuring the loop to move the > checks outside of it rather than turning them off (that's usually > possible with the addition of subtypes). This is just totally at odds with how many people use the language. There is nothing necessarily wrong with making sure the language covers RB's idiosyncratic views on how the language should be used, as well as everyone else's idiosyncratic views, but we don't want to design the language so it is ONLY suitable for this usage view. > I'd like you to try. The only reason I proclaimed this a fantasy is > because I have no idea if I'll ever be able to spend the 2-3 months of > time to implement aspect_specifications and Pre/Post in the Janus/Ada *front-end*. > That will be a *lot* of work. As I say, I recommend you take a close look both at the SPARK technology and at the Hi-Lite project; these are both very real. > But the "proving" is just the stuff the Janus/Ada optimizer (and I > would expect every other compiler optimizer) already does. I'd add two > very simple extensions to what it already does: (1) add support for > "facts" to the intermediate code -- that is expressions that the > optimizer knows the value of without explicitly evaluating them (so > they would have no impact on generated code); (2) extending > common-subexpression elimination to function calls of functions that are provably pure (using a new categorization). It is MUCH MUCH harder than you think, and lots of people have spent lots of time working on these problems, and the naive optimism you express is not shared by those people. > Hiding contracts in the body was always a bad idea. In any case, there > is clearly a grey area, and it will move as compiler (and other tools) > technology improves. But I would say that anything that a compiler > cannot use in proving should be an assertion (that is, a debugging > aid) rather than part of a contract. That means to me that such things > should not change the correctness of the program if turned off, and > they should be suppressable independent of those things that can be used automatically. Well some preconditions and postconditions belong naturally in the body. Consider a memo function.
The memoizing should be totally invisible in the spec, and generally the caller won't even be able to see or talk about the memo data. But it makes perfect sense to write preconditions and postconditions in the body saying what the memo data must look like on entry and exit. > I think SPARK is doing more harm than good at this point. It was very > useful as a proof-of-concept, but it requires a mindset and leap that > prevents the vast majority of programmers from every using any part of > it. The effect is that they are solving the wrong problem. I have no idea what you mean, but I am pretty sure you are not really familiar with SPARK, or the associated proof technologies. Are you even familiar with the annotation language? > In order for something to be used by a majority of (Ada) programmers, > it has to have at least the following characteristics: > (1) Little additional work required; > (2) Little degrading of performance; > (3) Has to be usable in small amounts. Again, how about looking at Hi-Lite which is precisely about figuring out the extent to which formal methods can be employed using roughly these criteria. Randy, while you sit fantasizing over what can be done, other people are devoting huge amounts of effort to thinking > The last is the most important. The way I and probably many other Ada > programmers learned the benefit of Ada constraint checks was to write > some code, and then have a bug detected by such a check. It was > noticed that (a) fixing the bug was simple since the check pinpointed > the location of the problem, and (b) finding and fixing the bug would > have been much harder without the check. That caused me to add > additional constraints to future programs, which found additional > bugs. The feedback back loop increased the adoption of the checks to > the point that I will only consider turning them off for extreme performance reasons. Yes, we know contraint checks are a relatively simple case. In the SPARK context for example, it has proved practical to prove large applications like iFacts to be free of any run-time errors, but it has NOT proved feasible to prove partial correctness for the whole application. > As such, I'm concentrating on what can be done with existing > technology -- that is, subprogram level understanding using well-known > optimization techniques. I'm sure we'll want to go beyond that > eventually, but we need several iterations of the feedback loop in > order to build up customer demand for such things. (And then we'll > have to add exception contracts, side-effect contracts, and similar > things to Ada to support stronger > proofs.) We are already WAY beyond this. And there is very real customer demand Have a look at http://www.open-do.org/projects/hi-lite/. There are many large scale users of Ada in safety-critical contexts who are very interested, especially in the environment of 178-C in the extent to which unit proof can replace unit testing. > Fair enough. But then there will never be any real progress here, > because few people will use expensive contracts that slow down their > programs. And I > *really* don't want to go back to the bad old days of multiple > versions of programs to test (one with checks and one without). It's > like the old saw > says: create and test a car with seatbelts, then remove them in the > production version. People DO use expensive contracts that slow down their programs, ALL THE TIME! Remember that at AdaCore, we have years of experience here. 
The precondition and postcondition pragmas of GNAT, which have been around for over three years, were developed in response to customer demand by large customers developing large scale safety-critical applications. It really doesn't matter that the tests may be expensive if enabled at run time; they won't be enabled in the final build. > So I want to separate the contracts (an integral part of program > correctness, and almost never turned off) from the assertions > (expensive stuff that has little impact on program correctness, can be > turned off without impact). I suspect that one probably could easily > find several additional categories, which you may want to control separately. The "almost never turned off" does not correspond with how we see our customers using these features *at all*. And indeed even after we replace many of the assertions in GNAT itself with pre and post conditions, we will normally turn them all off as we do now for production builds of the compiler, since turning assertions on does compromise compiler performance significantly, and this matters to many people. We do builds internally with assertions turned on, and these are indeed useful for debugging purposes. > Note that one of the reasons why I supported the "assert_on_start" and > "assert_on_exit" ideas is that these provide a place to put those > expensive assertions that cannot be reasoned about with simple > technology. I surely agree that just because something is expensive is > no good reason not to include it. But simply because you have a few > expensive assertions around is no good reason to turn off the bulk of > the contracts which are cheap and easily eliminated completely. The idea that preconditions and postconditions will be restricted to simple things that can "be reasoned about with simple technology":

   a) bears no relation to the way people are actually using the feature
   b) does not correspond with my expectations of how people will use the technology
   c) does not correspond with my recommendations of how people *should* use the technology

I have no objection to making sure that what we define is reasonably usable for your (to me very peculiar) view of how these features should be used, but I would be appalled to think this was the official view of the language design, or that the design would be compromised by this viewpoint (I find assert_on_start and assert_on_exit deeply flawed if they are motivated by dealing with expensive stuff). I contest the idea that the bulk of contracts are cheap and easily eliminated. Again, look at Hi-Lite to get a better feel for the state of the art in this respect. Even just eliminating all constraint checks is far from trivial (read some of the SPARK papers on this subject). **************************************************************** From: Bob Duff Sent: Wednesday, November 23, 2011 2:59 PM >...But I would say that anything that a compiler cannot use in proving >should be an assertion (that is, a debugging aid) rather than part of >a contract. Could we please settle on terminology? Preferably somewhat standard terminology? The term "assertion" means pragma Assert, precondition, postcondition, invariant, and predicate. Bertrand Meyer uses it that way, and I think it makes sense. Probably constraints and null exclusions should also be called assertions. I really think equating "assertion" and "debugging aid" will cause nothing but confusion.
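To put that terminology in code form (these declarations are illustrative fragments only), the term covers all of the following:

   pragma Assert (Count <= Capacity);                       --  pragma Assert

   subtype Even is Integer
     with Dynamic_Predicate => Even mod 2 = 0;              --  subtype predicate

   type Counter is private
     with Type_Invariant => Value (Counter) >= 0;           --  type invariant

   procedure Increment (C : in out Counter)
     with Pre  => Value (C) < Integer'Last,                 --  precondition
          Post => Value (C) = Value (C)'Old + 1;            --  postcondition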
**************************************************************** From: Randy Brukardt Sent: Wednesday, November 23, 2011 4:11 PM > > I don't understand. A check of the sort I'm talking about is by > > definition not something "deactivated". Either the compiler can > > prove it to be OK, in which case it doesn't appear in the code at > > all (it's essentially a comment), or it is executed every time the > > program runs. In which case it is an integral part of the code. > > > I'd expect problems in terms of 178B for the handlers of check > > failures (since there wouldn't be an obvious way to execute or > > verify them), but not the checks themselves. Perhaps you meant the handlers? > > I mean that if you have
>
>    if A < 10 then
>       raise Constraint_Error;
>    end if;
>
> then you will have deactivated code (the raise can never be executed). I see. But this is not actually what will be generated in the object code (at least for language-defined checks). > Now if you are doing source level coverage (which is really what 178B > requires), you may be able to deal with this with a suitable > traceability study. But in practice many producers of safety critical > code (and not a few DER's) prefer to see every line of object code > executed in tests. It is controversial whether 178B requires this, but > let me say for sure Robert Dewar requires it :-) > > If I was managing a SC project, I would want to see 100% coverage at > the object level, and having to deal with all these cases by injection > tests would be too painful, so I would probably go with turning > checks off. After all, if you are using a SPARK approach in which you > prove that no constraint errors can occur, what's the point in leaving > in junk dead tests? I agree with this. But it seems to me that you don't need to test every possible check, only one. That's because in practice the code will look something like:

   Mov EAX,[A]
   Cmp EAX, 10
   Jl  Check_Failed

And every instruction of the object code here will be executed whether or not the check fails. The branch might even be a jump to the handler, if it is local. I suppose if your requirement is to exercise every possible *path*, then you have to remove all of the checks, but otherwise, you only need to remove complex ones with dead code. (And there aren't many of those that are language-defined.) > > I said "almost non-existent". The only time it makes sense to turn > > off constraint checks is in a loop, verified to be very "hot" in > > terms of program performance, and no better algorithm is available. > > And even then, you would be better off restructuring the loop to > > move the checks outside of it rather than turning them off (that's > > usually possible with the addition of subtypes). > > This is just totally at odds with how many people use the language. > There is nothing necessarily wrong with making sure the language > covers RB's idiosyncratic views on how the language should be used, as > well as everyone else's idiosyncratic views, but we don't want to > design the language so it is ONLY suitable for this usage view. I agree with you in general. I started out as one of those "many people". I eventually learned I was wrong, and also got bigger machines to run my code on. The language should discourage bad practices and misuse, but not to the extent of preventing things that need to be done occasionally. I surely would not suggest eliminating the ability to turn off checks, what bothers me of course is the requirement to treat them all the same, when they are clearly not the same.
... > > But the "proving" is just the stuff the Janus/Ada optimizer (and I > > would expect every other compiler optimizer) already does. I'd add > > two very simple extensions to what it already does: (1) add support > > for "facts" to the intermediate code -- that is expressions that the > > optimizer knows the value of without explicitly evaluating them (so > > they would have no impact on generated code); (2) extending > > common-subexpression elimination to function calls of functions that > > are provably pure (using a new categorization). > > It is MUCH MUCH harder than you think, and lots of people have spent > lots of time working on these problems, and the naive optimism you > express is not shared by those people. Trying to do it all surely is harder than I am expressing here. But I only am interested in the 75% that is easy, because that has the potential to make contracts as cheap as constraint checks (can't eliminate *all* of those either, it just matters that you eliminate most). > > Hiding contracts in the body was always a bad idea. In any case, > > there is clearly a grey area, and it will move as compiler (and > > other tools) technology improves. But I would say that anything that > > a compiler cannot use in proving should be an assertion (that is, a > > debugging > > aid) rather than part of a contract. That means to me that such > > things should not change the correctness of the program if turned > > off, and they should be suppressable independent of those things > that can be used automatically. > > Well some preconditions and postconditions belong naturally in the body. > Consider a memo function. The memoizing should be totally invisible in > the spec, and generally the caller won't even be able to see or talk > about the memo data. But it makes perfect sense to write preconditions > and postconditions in the body saying what the memo data must look > like on entry and exit. I agree that some things naturally belong in the body -- those clearly are not part of the contract. They're some other kind of thing (I've been calling them assertions, but Bob hates that, so I'll just say that they need a good name - "non-contract assertions" is the best I've got, and it isn't great). I'd argue that such things matter not to the client at all, and probably don't matter much to the implementation of the body either. Which makes them a very different sort of thing to a contract assertion -- even a (contract) postcondition which can be used as an assumption at the call site. > > I think SPARK is doing more harm than good at this point. It was > > very useful as a proof-of-concept, but it requires a mindset and > > leap that prevents the vast majority of programmers from every using > > any part of it. The effect is that they are solving the wrong problem. > > I have no idea what you mean, but I am pretty sure you are not really > familiar with SPARK, or the associated proof technologies. Are you > even familiar with the annotation language? I'm familiar enough to know that there is essentially no circumstance where I could use it, as it has no support for dynamic dispatching, exceptions, or references (access types or some real equivalent, not Fortran 66-type tricks). I know it progressed some from the days when I studied it extensively (about 10 years ago), but not in ways that would be useful to me or many other Ada programmers. Anyway, the big problem I see with SPARK is that you pretty much have to use it exclusively in some significant chunk of code to get any benefit. 
Which prevents the sort of incremental adoption that is really needed to change the attitudes of the typical programmer. ... > Again, how about looking at Hi-Lite which is precisely about figuring > out the extent to which formal methods can be employed using roughly > these criteria. Randy, while you sit fantasizing over what can be > done, other people are devoting huge amounts of effort to thinking The only reason I'm just thinking rather than doing is lack of time and $$$ to implement something. Prior commitments (to Ada 2012) have to be done first. > > The last is the most important. The way I and probably many other > > Ada programmers learned the benefit of Ada constraint checks was to > > write some code, and then have a bug detected by such a check. It > > was noticed that (a) fixing the bug was simple since the check > > pinpointed the location of the problem, and (b) finding and fixing > > the bug would have been much harder without the check. That caused > > me to add additional constraints to future programs, which found > > additional bugs. The feedback back loop increased the adoption of > > the checks to the point that I will only consider turning them off > > for extreme performance reasons. > > Yes, we know contraint checks are a relatively simple case. > In the SPARK context for example, it has proved practical to prove > large applications like iFacts to be free of any run-time errors, but > it has NOT proved feasible to prove partial correctness for the whole > application. I'm not certain I understand what you mean by "partial correctness", but if I am even close, that seems like an obvious statement. And even if it was possible, I'd leave that to others to do. I'm not talking about anything like that; I don't want to prove anything larger than a subprogram. You can chain those together to prove something larger, but I doubt very much it would ever come close to "correctness". I just want to prove the obvious stuff, both for performance reasons (so contracts don't cost much in practice) and to provide an aid as to where things are going wrong. > > As such, I'm concentrating on what can be done with existing > > technology -- that is, subprogram level understanding using > > well-known optimization techniques. I'm sure we'll want to go beyond > > that eventually, but we need several iterations of the feedback loop > > in order to build up customer demand for such things. (And then > > we'll have to add exception contracts, side-effect contracts, and > > similar things to Ada to support stronger > > proofs.) > > We are already WAY beyond this. And there is very real customer demand > Have a look at http://www.open-do.org/projects/hi-lite/. There are > many large scale users of Ada in safety-critical contexts who are very > interested, especially in the environment of 178-C in the extent to > which unit proof can replace unit testing. I looked at the site and didn't see anything concrete, just a lot of motherhood statements. I realize that there had to be a lot of that in the grant proposals, but I would like to see more than that to feel that anything is really being accomplished. How is Hi-Lite going to reach its goals. And clearly, the users requiring 178 and the like are a tiny minority. It's perfectly good to support them (they surely need it), but their needs are far from the mainstream. As such "Hi-Lite" is just a corner of what I want to accomplish. 
(If you don't have big goals, there is no point; and if the goals are attainable you probably will end up disappointed. :-) I would like to see an environment where all programmers include this sort of information in their programs, and all compilers and tools use that information to detect (some) bugs immediately. That can only be approached incrementally, and it requires a grass-roots effort -- it's not going to happen just from big projects where management can mandate whatever they want/need. ... > People DO use expensive contracts that slow down their programs, ALL > THE TIME! Remember that at AdaCore, we have years of experience here. > The precondition and postcondition pragmas of GNAT which have been > around for over three years, were developed in response to customer > demand by large customers developing large scale safety-critical > applications. > It really doesn't matter than the tests may be expensive if enabled at > run time, they won't be enabled in the final build. Which is exactly wrong: leaving the garage without seatbelts. I understand that there are reasons for doing this, but it doesn't make it any more right. And I'm personally not that interested in "safety-critical applications", because these people will almost always do the right thing ultimately -- they have almost no other choice (they'd be exposed to all sorts of liability otherwise). I'm much more concerned about the mass of less critical applications that still would benefit from fewer errors (that would be all of them!) I've wasted too much of my (and your) time with this, so I'll stop here. No need to convince each other -- time will tell. **************************************************************** From: Randy Brukardt Sent: Wednesday, November 23, 2011 4:25 PM > The term "assertion" means pragma Assert, precondition, postcondition, > invariant, and predicate. Bertrand Meyer uses it that way, and I > think it makes sense. Probably constraints and null exclusions should > also be called assertions. > > I really think equating "assertion" and "debugging aid" will cause > nothing but confusion. OK, I'll stop trying to forward this discussion. I could live with some better terminology for the separation, but the separation is clearly real and those that chose to ignore it force programmers into a corner where they have to choose between a too-slow program with checks on, or a decently performing program that is no better than a C program. We have the technology to prevent the need of this Hobson's choice, and it's sad not to be able to get any traction with it. Beyond that, I find it incredibly sad that people want to drive Ada into a box where it is only useful for large safety-critical applications. *All* applications can benefit from this sort of technology, but not if it is turned into a difficult formal language that only a few high priests can understand. And that's what I'm getting out of this discussion. Probably you are all right, and there is no real value to the beginner and hobbyist anymore, since they can't be monetized. Probably I'd be best off stopping thinking for myself and just becoming one of the sheep punching a clock. Because the alternative isn't pretty. **************************************************************** From: John Barnes Sent: Wednesday, November 23, 2011 4:30 PM Partial correctness simply means that a program is proved to be correct provided it terminates. It's nothing to do with the proof being otherwisee flaky. 
Mostly if a program has been proved to be partially correct then it really is correct, but there are amusing examples where all the proof comes out OK yet the program is flawed because it cannot be proved to terminate in some cases. There is a simple example on page 289 of my Spark book. It's a factorial function. It looks OK but it loops for ever in one case. **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 4:57 PM > Partial correctness simply means that a program is proved to be > correct provided it terminates. It's nothing to do with the proof > being otherwise flaky. Mostly if a program has been proved to be > partially correct then it really is correct but there are amusing > examples where all the proof comes out OK yet the program is flawed > because it cannot be proved to terminate in some cases. In the SPARK class, one of the examples was an integer square root. You were not asked to prove partial correctness, but I did anyway. I programmed a binary search, and tested as the termination condition that I actually had the square root. Obviously this only terminates if it correctly computes the square root. However, the trick in binary searches is always to get the termination correct and make sure you don't go into an infinite loop :-) Of course I did not prove that. > There is a simple example on page 289 of my Spark book. It's a > factorial function. It looks OK but it loops for ever in one case. Yes, a very nice example. BTW Randy, it would be a mistake to think that Hi-Lite is ONLY relevant to 178B/C environments; the purpose is precisely to make proof of preconditions and postconditions more generally accessible. **************************************************************** From: John Barnes Sent: Thursday, November 24, 2011 9:31 AM I contrived it by accident when giving a Spark course in York. One thing it taught me was the value of for loops, which by their very nature always terminate. And by contrast I always avoid while loops. **************************************************************** From: Robert Dewar Sent: Thursday, November 24, 2011 9:53 AM Well *always* sounds a bit suspicious, not everything is in the primitive recursive domain! **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 4:51 PM > I see. But this is not actually what will be generated in the object > code (at least for language-defined checks). Yes it is -- what else are you thinking of? (Certainly you don't want to use the built-in x86 operations like the range check; they are pipeline catastrophes.) > I agree with this. But it seems to me that you don't need to test > every possible check, only one. That's because in practice the code > will look something like:
>
>    Mov EAX,[A]
>    Cmp EAX, 10
>    Jl  Check_Failed
>
Well, that's not what it will look like for us, since we maintain traceability on what check failed. > I suppose if your requirement is to exercise every possible *path*, > then you have to remove all of the checks, but otherwise, you only > need to remove complex ones with dead code. (And there aren't many of > those that are > language-defined.) Right, if you have a JL, it has to execute both ways even in simple coverage. > I agree with you in general. I started out as one of those "many > people". I eventually learned I was wrong, and also got bigger > machines to run my code on.
The language should discourage bad > practices and misuse, but not to the extent of preventing things that > need to be done occasionally. I surely would not suggest eliminating > the ability to turn off checks; what bothers me of course is the > requirement to treat them all the same, when they are clearly not the same. But checks are seldom turned on. > I'm familiar enough to know that there is essentially no circumstance > where I could use it, as it has no support for dynamic dispatching, > exceptions, or references (access types or some real equivalent, not > Fortran 66-type tricks). I know it progressed some from the days when > I studied it extensively (about 10 years ago), but not in ways that > would be useful to me or many other Ada programmers. Well, do you write safety-critical code at all? If not, you really don't have the right context; people usually severely subset the language in the SC case. Certainly you would be very surprised to see access types and dynamic storage allocation in an SC app. So the SPARK restrictions in practice correspond with the subset that people use anyway. > Anyway, the big problem I see with SPARK is that you pretty much have > to use it exclusively in some significant chunk of code to get any > benefit. Which prevents the sort of incremental adoption that is > really needed to change the attitudes of the typical programmer. Well, SC apps are more likely to be written from scratch anyway, and we are not talking about the typical programmer when it comes to SC apps. So SPARK is not for the environment you are thinking in terms of (general-purpose programming using the whole language). > ... >> Again, how about looking at Hi-Lite which is precisely about figuring >> out the extent to which formal methods can be employed using roughly >> these criteria. Randy, while you sit fantasizing over what can be >> done, other people are devoting huge amounts of effort to thinking > > The only reason I'm just thinking rather than doing is lack of time > and $$$ to implement something. Prior commitments (to Ada 2012) have > to be done first. Well, take a quick look at the Hi-Lite pages; how much effort is that? I think you will find them interesting. > I'm not certain I understand what you mean by "partial correctness", > but if I am even close, that seems like an obvious statement. And even > if it was possible, I'd leave that to others to do. I'm not talking > about anything like that; I don't want to prove anything larger than a > subprogram. You can chain those together to prove something larger, > but I doubt very much it would ever come close to "correctness". I > just want to prove the obvious stuff, both for performance reasons (so > contracts don't cost much in > practice) and to provide an aid as to where things are going wrong. Gosh, I am quite surprised you don't know what partial correctness means. This is a term of art in proof of program correctness that is very old, at least decades (so old that I have trouble tracking down its first use; everyone even vaguely associated with proof techniques knows this term). Briefly, proving partial correctness means that IF the program terminates THEN your proof says something about the results. If a single subprogram is involved, then it would mean, for example, proving that IF the preconditions hold, and IF the subprogram terminates, then the postconditions hold. Proving termination (total correctness) is far, far harder. > I looked at the site and didn't see anything concrete, just a lot of > motherhood statements.
I realize that there had to be a lot of that in > the grant proposals, but I would like to see more than that to feel > that anything is really being accomplished. How is Hi-Lite going to > reach its goals? Did you read the technical papers? I don't think you looked hard enough! > And clearly, the users requiring 178 and the like are a tiny minority. > It's perfectly good to support them (they surely need it), but their > needs are far from the mainstream. As such "Hi-Lite" is just a corner > of what I want to accomplish. (If you don't have big goals, there is > no point; and if the goals are attainable you probably will end up > disappointed. :-) I would like to see an environment where all > programmers include this sort of information in their programs, and > all compilers and tools use that information to detect (some) bugs > immediately. That can only be approached incrementally, and it > requires a grass-roots effort -- it's not going to happen just from big > projects where management can mandate whatever they want/need. Randy, I think you have a LOT to study before understanding the issues here :-) > Which is exactly wrong: leaving the garage without seatbelts. I > understand that there are reasons for doing this, but it doesn't make > it any more right. If you can prove the car won't crash, seatbelts are not needed. It is pretty useless for a flight control system to get a constraint error :-) **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 5:04 PM >> I really think equating "assertion" and "debugging aid" will cause >> nothing but confusion. > > OK, I'll stop trying to forward this discussion. > > I could live with some better terminology for the separation, but the > separation is clearly real Well, you think it's real; that does not make it "clearly real", and I don't find the separation meaningful at all. And I gather that Tuck agrees with this. > and those that choose to ignore it force programmers into a corner > where they have to choose between a too-slow program with checks on, > or a decently performing program that is no better than a C program. > We have the technology to prevent the need for this Hobson's choice, > and it's sad not to be able to get any traction with it. It's because we can't agree on how it should be done! In the GNAT situation you could have pragma Check (Expensive, condition, string); pragma Check (Cheap, condition, string); and then control activation of Cheap or Expensive with e.g. pragma Check_Policy (Expensive, Off); pragma Check_Policy (Cheap, On); which seemed to me useful when I implemented it, but in fact we never saw a customer use this feature, and we never used it ourselves internally, so perhaps I was mistaken in thinking this a useful idea. For sure, the artificial attempt to corral cheap stuff into the fold of pragma Assert, and expensive stuff into the fold of preconditions/postconditions, seems seriously misguided to me! > Beyond that, I find it incredibly sad that people want to drive Ada > into a box where it is only useful for large safety-critical > applications. *All* applications can benefit from this sort of > technology, but not if it is turned into a difficult formal language > that only a few high priests can understand. And that's what I'm getting out of this discussion. Nobody is saying that; it is just that your suggestions here make no technical sense to some of us.
You will have to do a better job of convincing people, or perhaps decide you are wrong; so far I have not seen any sympathetic response to the claim that assertions are about cheap debugging stuff, and pre/post conditions are about expensive contracts. > Probably you are all right, and there is no real value to the beginner > and hobbyist anymore, since they can't be monetized. Probably I'd be > best off stopping thinking for myself and just becoming one of the > sheep punching a clock. Because the alternative isn't pretty. That's just not true; we have huge numbers of students in the GAP program enthusiastically using Ada, and the precondition/postcondition feature is one we get student questions about all the time (there are also several university courses focused on programming by contract using the Ada 2012 features). **************************************************************** From: Yannick Moy Sent: Thursday, November 24, 2011 3:33 AM >> We are already WAY beyond this. And there is very real customer >> demand. Have a look at http://www.open-do.org/projects/hi-lite/. There >> are many large scale users of Ada in safety-critical contexts who are >> very interested, especially in the environment of 178-C in the extent >> to which unit proof can replace unit testing. > > I looked at the site and didn't see anything concrete, just a lot of > motherhood statements. I realize that there had to be a lot of that in > the grant proposals, but I would like to see more than that to feel > that anything is really being accomplished. How is Hi-Lite going to > reach its goals? For something concrete, look at this page, with a small example of Ada 2012 code whose contracts and absence of run-time errors are proved with our tool (called gnatprove): http://www.open-do.org/projects/hi-lite/a-database-example/ How we do it is presented in various papers pointed to by the main page: - the high-level view: http://www.open-do.org/wp-content/uploads/2011/02/DewarSSS2010.doc - a short introduction: http://www.open-do.org/wp-content/uploads/2011/06/Hi_Lite_Contract.pdf - the proof tool-chain: http://www.open-do.org/wp-content/uploads/2011/06/Why_Hi_Lite_Ada.pdf - the modified "formal" containers: http://www.open-do.org/wp-content/uploads/2011/06/Correct_Code_Containing_Containers.pdf > And clearly, the users requiring 178 and the like are a tiny minority. > It's perfectly good to support them (they surely need it), but their > needs are far from the mainstream. As such "Hi-Lite" is just a corner > of what I want to accomplish. We are clearly targeting DO-178 users in Hi-Lite, but not only them. As an example, a partner of the project is Astrium, which is in the space industry, where DO-178 does not apply. And customers who have expressed interest in Hi-Lite come from a variety of industries. > (If you don't have big goals, there is no point; and if the > goals are attainable you probably will end up disappointed. :-) I > would like to see an environment where all programmers include this > sort of information in their programs, and all compilers and tools use > that information to detect (some) bugs immediately. We agree then! The technology we develop has this potential, because: * We are not restricting the user in his code. Instead, we detect automatically which subprograms we can analyze. * We don't require any user work upfront. You can start with the default implicit contract for subprograms of True for both precondition and postcondition!
And the effects of subprograms (global variables read/written) are generated automatically. * We don't require an expensive analysis. Each subprogram is analyzed separately, based on the contracts of the subprograms it calls. > That can only be approached incrementally, > and it requires a grass-roots effort -- it's not going to happen just > from big projects where management can mandate whatever they want/need. I don't think Hi-Lite qualifies as a "big project", if you mean by that something immobile and monolithic. We expect there will be many uses of the technology, and we will try to accommodate as many as possible. BTW, Hi-Lite is a very open project, so feel free to participate in technical discussions on the hi-lite-discuss mailing list (http://lists.forge.open-do.org/mailman/listinfo/hi-lite-discuss), and let us know if you would like to be involved in any way. **************************************************************** From: Jean-Pierre Rosen Sent: Thursday, November 24, 2011 8:06 AM Let's summarize the positions (correct me if I'm wrong): The D view: A program is a whole. If there's a bug in it, it is a bug, irrespective of whether it is to be blamed on the caller or the callee. If checks are disabled, they are disabled as a whole. That's what the important (i.e., paying) customers (most of them in safety-critical applications) want. The B view: There are library providers and library users. The library provider is responsible for his own bugs, and wants to be protected from incorrect calls from the user. The user must be aware of the right protocol to call the library. It makes sense to disable checks internal to the library (algorithmic checks), but not those that enforce the contract with the user. Users are the public at large, by far the highest number of users; few of them write critical applications. Not many are paying customers, but this is the public we should try to conquer. Personally, I understand D, but tend to agree with B. For example, the whole ASIS interface follows the B model: each query has an "expected element kind", and I would feel terribly insecure if the checks on the element kind were removed when I put a pragma to increase the speed of my own program. (Of course, currently these checks are not implemented as pre-conditions, but they could be.) Now, what is the issue? To have one or two pragmas to disable some checks, either together or separately. Those who want to disable all checks together can easily do so with two pragmas. Those who want to disconnect the two aspects are unable to do so if there is only one pragma. Moreover, it's not like a big change that could bring nasty consequences all over the language. So I think there should really be two pragmas, even at this late stage (I even think that contract violations should have their own, different, exception, but I don't have much hope of getting support on this). **************************************************************** From: Tullio Vardanega Sent: Thursday, November 24, 2011 8:24 AM For what my opinion is worth here, I find JPR's summary useful for the layman to get to grips with the avalanche of views aired in this (very interesting) thread. **************************************************************** From: Robert Dewar Sent: Thursday, November 24, 2011 8:36 AM > Now, what is the issue? To have one or two pragmas to disable some > checks, either together or separately. Those who want to disable all > checks together can easily do so with two pragmas.
Those who want to > disconnect the two aspects are unable to do so if there is only one > pragma. Moreover, it's not like a big change that could bring nasty > consequences all over the language. So I think there should really be > two pragmas, even at this late stage (I even think that contract > violations should have their own, different, exception, but I don't > have much hope of getting support on this). I strongly object to changing things in an incompatible way from the way they are now. Turning off assertions should turn off all assertions, including preconditions and postconditions. But I don't have any strong feeling about introducing new gizmos that provide more control (GNAT already has a far greater level of control than anything that is likely to be stuffed in at the last minute). I still overall oppose any change at this stage; we just don't have a clear consensus for a last-minute change. I would say let the marketplace decide what level of control is needed. After all, in real life such control is provided with a mixture of compiler switches (about which we can say nothing in the RM) and pragmas, and the balance between them has to be carefully considered in real life. **************************************************************** From: Robert Dewar Sent: Thursday, November 24, 2011 8:54 AM > For what my opinion is worth here, I find JPR's summary useful for the > layman to get to grips with the avalanche of views aired in this (very > interesting) thread. Indeed! Note that I definitely agree that finer control is needed, and we have implemented this finer control for years in GNAT (remember that we have precondition and postcondition pragmas that work in Ada 95 and have been around for years). So we agree that this kind of control is needed. My concern is trying to figure out and set in stone what this control should be at this late stage, when we lack a consensus about what should be done. In practice you need both compiler switches and appropriate pragmas to control things. The former (switches) will always be the province of the implementation anyway. To me it is not so terrible if we have to leave some of the latter up to the implementor as well. **************************************************************** From: Robert Dewar Sent: Thursday, November 24, 2011 9:00 AM Let me note that Tullio captures the discussion with Erhard, but it does not capture the very different view between Randy and Robert/Tuck, which can be summarized as follows: Randy thinks pragma Assert is about simple things that have negligible overhead, and should normally not be suppressed since the overhead is small. But pre/post conditions are about complex things that may need to be suppressed. That's his basis for wanting separate control (QUITE different from Erhard's point of view about standard libraries, which I think we all share). Note that Randy uses the term "assertions" to apply only to pragma Assert, and does not think of pre/post as assertions. Robert and Tuck don't make this big distinction; for them, following Eiffel thinking, these all come under the heading of assertions, and there is no basis for separate control *on those grounds*. The Erhard argument about separate control of Pre and Post (I think this would include body asserts as well) is quite different and seems valid, though I can't get too excited; we have this degree of control in GNAT, but we never saw it used either by a customer or by us internally.
But to be fair, we haven't really seen people use pre/post for general library development (we certainly do NOT use them yet in our GNAT run-time library). **************************************************************** From: Erhard Ploedereder Sent: Friday, November 25, 2011 6:14 AM I agree 100% with J.P. We ought to get it right in the standard, not just the implementation, which later will not budge at all from whatever it implemented initially. Sorry, Robert, but saying that GNAT already provides fine-grained control and therefore the standard need not, simply does not cut it for me. Further on the subject: Hopefully there is an ENABLE option as well as a SUPPRESS, because as a library writer I have to choose between: a) make it a PRE, which is really good documentation, or b) check it explicitly in the body and raise an exception. If I cannot insist on the PRE-check, I have no choice but to go for b), and I'll be damned if I then provide the same information as a PRE as well. I really do not want to check it twice. So, having no fine control over PRE is truly bad software engineering, because it forces users interested in robust software to do the wrong thing. **************************************************************** From: Robert Dewar Sent: Friday, November 25, 2011 8:47 AM > I agree 100% with J.P. > > We ought to get it right in the standard, not just the implementation, > which later will not budge at all from whatever it implemented > initially. Sorry, Robert, but saying that GNAT already provides > fine-grained control and therefore the standard need not, simply does > not cut it for me. I just think it is too much of a rush to "get this right in the standard", when there is a wide divergence of opinion on what getting it right means. No one is disagreeing that a finer level of control would be desirable *in the standard*; we are just disagreeing over whether there is time for this relatively non-urgent last-minute change given the lack of consensus. Erhard, make a specific proposal. I think the only chance for a change at this stage (when you have several people opposed to making a change) is to make a proposal; then, if that proposal gets a consensus, we can make the change. If not, we can continue the discussion process, and then when we come to an agreement, we just make it recommended practice and existing implementations can conform right away to the decision. It may well be possible to agree on this so that a decision is made by the time the standard is official. > So, having no fine control over PRE is truly bad software engineering, > because it forces users interested in robust software to do the wrong thing. I don't think any users will be forced to do anything they don't like in practice :-) **************************************************************** From: Bob Duff Sent: Friday, November 25, 2011 10:31 AM > Erhard, make a specific proposal. ... Right, I've lost track of what is being proposed -- it got buried underneath a lot of philosophical rambling (much of which I agree with, BTW). I certainly agree with Erhard that for a library, one wants a way to turn off "internal" assertions, but keep checks (such as preconditions) that protect the library from its clients. I'm not sure how to accomplish that (e.g., what about preconditions that are "internal", such as on procedures in a private library package? What about when a library calls an externally-visible part of itself?).
And I don't think it's that big of a problem, because vendors will give their customers what they need. We can standardize existing practice later (which is really how standards are supposed to be built, anyway!). **************************************************************** From: Randy Brukardt Sent: Monday, November 28, 2011 8:51 PM > > Erhard, make a specific proposal, ... > > Right, I've lost track of what is being proposed -- it got buried > underneath a lot of philosophical rambling (much of which I agree > with, BTW). I certainly agree with Erhard that for a library, one > wants a way to turn off "internal" > assertions, but keep checks (such as preconditions) that protect the > library from its clients. > > I'm not sure how to accomplish that (e.g., what about preconditions > that are "internal", such as on procedures in a private library > package? What about when a library calls an externally-visible part > of itself?). And I don't think it's that big of a problem, because > vendors will give their customers what they need. I don't think that there have really been too many concrete proposals in this thread - I'll try to rectify that below. But as someone who has supported a large third-party library that was intended to work on multiple compilers, I have to disagree with the above statement. In order to create and support something like Claw, one has to minimize compiler dependencies. It is not likely that we could have depended on some compiler-specific features unless those are widely supported. On top of that, while I agree that vendors will give their *customers* what they need, that does not necessarily extend to third-party library vendors (which includes various open source projects as well as paid vendors), as library creators are not necessarily the customers of the compiler vendors. Library creators/vendors have a somewhat different set of problems than the typical customer (need for maximum portability, need to control as much as possible how the library is compiled, etc.) So I think it is important that the standard at least try to address these issues. And it seems silly to claim that it is too hard to do now, given that this stuff is only implemented in one compiler (that I know of) right now and that compiler has finer-grained control over these things than anything imagined as a solution in the Standard. So I can't imagine any significant implementation or description problems now -- that will get harder as we go down the road and these things get more widely implemented. --- Anyway, enough philosophy. Let me turn to some details. Following are several proposals (most previously discussed), provided in my estimated order of importance: (note that I try to provide counter-arguments when they have come up) Proposal #1: The assertion policy in effect at the point of the declaration of a subtype with a predicate is the one used to determine whether it is checked or not (rather than the assertion policy at the point of the check, as it is currently proposed in the draft Standard). Reason #1A: It should be possible for the body of a subprogram to be able to assume that checks are on for all calls to that subprogram. The rules for preconditions (and invariants and postconditions as well) already determine whether checks are made by the policy at the point of the subprogram declaration, which gives this property. It's weird that predicates are different. 
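[Editor's note: a minimal sketch of what Proposal #1 would mean in practice; the unit, subtype, and subprogram names are invented purely for illustration, and the package body is omitted.

   --  Library unit, compiled under Assertion_Policy (Check):
   pragma Assertion_Policy (Check);
   package Lib is
      subtype Even is Integer with Dynamic_Predicate => Even mod 2 = 0;
      procedure Process (X : in Even);
   end Lib;

   --  Client, compiled under Assertion_Policy (Ignore):
   pragma Assertion_Policy (Ignore);
   with Lib;
   procedure Client is
   begin
      Lib.Process (3);
      --  Under Proposal #1, the predicate check on this call is governed
      --  by the Check policy in force where Even was declared, so it is
      --  performed (and fails); under the draft rule (policy at the point
      --  of the check), no check is made.
   end Client;

End editor's note.]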
Reason #1B: I'm suggesting using the subtype declaration as the determining point, as that is similar to the rule for preconditions. A predicate is a subtype property, so having it depend on some unrelated subprogram declaration (as Tucker suggested) seems bizarre. And that would require having a different determination for predicate checks in other contexts (aggregate components, object initializations, etc.). Finally, using the subtype declaration should be enough, as a subprogram body always knows the location of the subtype declarations used in its parameter profiles. Rebuttal #1Z: One thing that strikes me about this change is that a body can know more about the assertions that apply to the parameters than it can about the constraint checks that were made on those same parameters. Specifically, if checks are suppressed at a call site, then random junk can be passed into a subprogram -- and there is nothing reasonable that the subprogram can do to protect itself. (Rechecking all of the constraints of the parameters explicitly in the subprogram body is not reasonable, IMHO.) The erroneousness caused by such suppression bails us out formally, but that doesn't provide any reassurance in practice. The concern addressed above for predicates is very similar. However, just because we got it wrong in 1983 doesn't mean that we have to get it wrong now. So I don't find the suppress parallel to be a very interesting argument. --- Proposal #2: If the invoked subprogram and denoted subprogram of a dispatching call have different assertion policies, it is unspecified which is used for a class-wide invariant. Reason #2A: This is fixing a bug; invariants and postconditions should act the same, and this is the rule used for postconditions (otherwise, the compiler would always be required to make checks in the body in case someone might turn off the checks elsewhere, not something we want to require). Reason #2B: I wrote this as applying only to class-wide invariants, as the specific invariants can only be determined for the actually invoked subprogram (so there is no point in applying them at the call site rather than in the body). We could generalize this to cover all invariants, but the freedom doesn't seem necessary (and I know "unspecified" scares some users). Corollary #2C: We need to define the terms "class-wide invariant" and "specific invariant", since we keep finding ourselves talking about them, and the parallel with preconditions (where these are defined terms) makes it even more likely that everyone will use them. Best to have a definition. --- One thing that strikes me is that we seem to have less control over assertions (including contracts) than we have over constraint checks. One would at least expect equivalence. Specifically, a subprogram (or entire library) can declare that it requires constraint checks on (via pragma Unsuppress). It would make sense to have something similar for libraries to use for assertions. Thus the next proposal. Proposal #3: There is a Boolean aspect "Always_Check_Assertions" for packages and subprograms. If this aspect is True, the assertions that belong to and are contained in the indicated unit are checked, no matter what assertion policy is specified. Reason #3A: There ought to be a way for a program unit to declare that it requires assertion checks on for proper operation. This ought to include individual subprograms. I've described this as an aspect since it applies to a program unit (package or subprogram), but a pragma would also work.
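[Editor's note: a rough sketch of what the proposed aspect might look like on a library package; the aspect does not exist in the draft Standard, and the package and subprogram names are invented:

   package Some_Library
      with Always_Check_Assertions => True   --  proposed aspect (hypothetical)
   is
      procedure Op (X : in Positive)
         with Pre => X < 1_000;
   end Some_Library;

With the aspect True, the precondition on Op would be checked regardless of the Assertion_Policy in effect where the library or its clients are compiled. End editor's note.]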
This aspect (along with pragma Unsuppress) would allow a library like Claw to not only say in the documentation (that no one ever reads) that turning off checks and assertions is not supported, but also to declare that fact to the compiler and reader (so checks are in fact not turned off unless some non-standard mode is used). Rebuttal #3Z: One could do this by applying pragma Assertion_Policy(Check) as part of a library. There are four problems with this: First, a configuration pragma only applies to the units with which it actually appears (not necessarily children or subunits). That means it has to be repeated with every unit of a set. Second, since the default assertion policy is implementation-defined, there would be no clear difference between requiring assertions for correct operation and just wanting assertions checked (for testing purposes). Third, this requires checking for the entire library (as a configuration pragma cannot be applied to an individual subprogram, as Suppress and Unsuppress can). That might be too much of a requirement for some uses. Finally, as a configuration pragma, the pragma cannot be part of the library [package] (it has to appear outside). That makes it more likely that it gets separated from the unit, and less likely that the requirement will be noted by the reader. None of these are compelling by themselves, but as a set they seem to suggest that something stronger is needed. Reason #3C: Having a special assertion policy for this purpose (say "Always_Check") doesn't seem sufficient because it still has the issues caused by the pragma being a configuration pragma (as noted in the Rebuttal). --- Proposal #4: Provide an additional assertion policy "Check_External_Only". In this policy, Preconditions and Predicates are checked, and the others (Postconditions, type invariants, and pragma Asserts) are ignored. Reason #4A: The idea here is that once unit testing is completed for a package, assertions that are (mostly) about checking the correctness of a body are not (necessarily) needed. OTOH, assertions that are (mostly) about checking the correctness of calls are needed so long as new calls are being written and tested (which is usually so long as the package is in use). So it makes sense to treat these separately. Alternative #4B: I'm not sure I have the right name here, and the name is important in this case. My first suggestion was "Check_Pres_Only", which is a trick, as "Pres" here means PREconditions and PREdicates. But that could be confused with the Pre aspect only, which is not what we want. Discussion #4C: In the note above, Bob suggests that calls that are fully internal ought to be exempted (or something like that). This sounds like FUD to me - a strawman set up to prove that since we can't get this separation perfect, we shouldn't do it at all. But that makes little sense to me; at worst, there will be extra checks that can't fail, and the user will always have the option of turning off all assertion checks if some are left on that are intolerable. The proposed rule is simple, and errs on the side of leaving checks on that might in fact be internal. It seems good enough (don't let best be the enemy of better! :-) --- My intent is to put all of these on the agenda for the February meeting, unless of course we have a consensus here that one or more of these are bad ideas. Finally, I'll mention a couple of other ideas that I won't put forth proposals for, as I don't think that they will get traction.
Not a proposal #5: Provide an additional assertion policy "Check_Contracts_Only". In this policy, pragma Asserts are ignored, and all of the others are checked. The way I view pragma Assert vs. the other contracts, this makes sense. But I seem to be in the minority on this one, and I have to agree that Proposal #4 makes more sense (except for the lame name), so I am going to support that one and not push for this. Not a proposal #6: Have predicate and precondition failures raise a different exception, Contract_Error. This also makes sense in my world view, as failures of such inbound contracts are a very different kind of error than the failure of a pragma Assert or even a postcondition in a subprogram body. It would make it much clearer as to whether the cause of the problem is one within the subprogram or one with the call. But this feels like a bigger change than any of the first four proposals; we already have a similar problem with Constraint_Error, and I don't want to go too far here (I'd rather the easy fixes get accomplished rather than getting bogged down in the harder ones). If someone else wants to take the lead on this idea, I'll happily support it. **************************************************************** From: Brad Moore Sent: Tuesday, December 6, 2011 12:28 AM ... > On top of that, while I agree that vendors will give their *customers* > what they need, that does not necessarily extend to third-party > library vendors (which includes various open source projects as well > as paid vendors), as library creators are not necessarily the customers of the compiler vendors. > Library creators/vendors have a somewhat different set of problems > than the typical customer (need for maximum portability, need to > control as much as possible how the library is compiled, etc.) > > So I think it is important that the standard at least try to address > these issues. And it seems silly to claim that it is too hard to do > now, given that this stuff is only implemented in one compiler (that I > know of) right now and that compiler has finer-grained control over > these things than anything imagined as a solution in the Standard. So > I can't imagine any significant implementation or description problems > now -- that will get harder as we go down the road and these things get more widely implemented. I definitely agree with Randy, Erhard, et al., that one needs finer-grained control to do things like enable preconditions while disabling postconditions. I'm wondering, though, if we haven't already provided this control in the language. One can use statically unevaluated expressions to accomplish this. RM 4.9 (32.1/3 - 33/3)

   package Config is
      Expensive_Postconditions_Enabled : constant Boolean := False;
   end Config;

   with Config; use Config;
   package Pak1 is
      procedure P (Data : in out Array_Type)
         with Pre  => Data'First = 0 and then Is_A_Power_Of_Two (Data'Length),
              Post => not Expensive_Postconditions_Enabled
                        or else Correct_Results (Data'Old, Data);
   end Pak1;

This already gives programmers tons of flexibility, as I see it, since they can define as many different combinations of statically unevaluated constant expressions as needed.
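[Editor's note: to illustrate how the technique above would be used (the names are the hypothetical ones from the example), the control point is simply which version of Config goes into the build; for a testing build one would compile against an alternate copy of the spec, with no change to Pak1 itself:

   package Config is
      --  Alternate version selected for debugging/testing builds:
      Expensive_Postconditions_Enabled : constant Boolean := True;
   end Config;

When the constant is False, the left operand of the postcondition decides its value, so the expensive Correct_Results call is never made at run time, and one would expect the compiler to drop the check entirely even though the assertion policy remains Check. End editor's note.]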
One possibly perceived advantage of this approach is that it might encourage one to leave the assertion policy enabled, and let the programmer provide the switches that can be configured for production vs. debugging releases of the application, rather than apply a brute-force, all-on-or-all-off approach, or some other such coarser-grained policy, which might make developers/management uneasy without an exhaustive analysis of all the source. In spite of these current possibilities, though, I think Randy's proposals below make sense and are worth pursuing, since it can be desirable to disable/enable checks without having to modify source code, or implement some sort of configuration-dependent/implementation-independent project file approach. **************************************************************** From: Randy Brukardt Sent: Wednesday, February 15, 2012 10:50 PM [Responding to old editorial review comments, the quotes are from Bob Duff.] > I don't see any problem with 4.6(51/3) and 4.6(57). "Checks" > are the things controlled by pragma Suppress. We decided that > predicates are controlled by assertion policy, so they can't be > checks. That's a *big* problem, because preconditions and the like are most certainly described as "checks" (see, for example, the Dynamic Semantics section of 6.1.1). And trying to describe them as something else is very difficult (in terms of wording). The wording of 4.6(51/3) and 4.6(57) is an example of how confusing this is. I would prefer to call these things "checks" (because they are), and then exempt checks controlled by Assertion_Policy (that is, checks caused by assertions) from Suppress. That shouldn't take many words in 11.5. (Of course, it would be even better to allow these to be controlled by Suppress as well as Assertion_Policy. But that seems like more change than we'd like to make now. Of course, that would provide a solution to the AI05-0290-1 problems of assertion control, in particular the availability of Unsuppress. But I digress...) > (I don't entirely agree with that decision, but it's not important.) > So > 4.6(51/3) correctly avoids "check" for the predicate. And > 4.6(57) correctly says that checks raise P_E or C_E. An AARM note > could mention that a predicate failure could raise A_E, but isn't > mentioned because it isn't a check. That's what I did for now, but essentially all of the other assertions are described as checks. It's just too complicated to invent an entire new language and semantics just for them. If we have to change that, I think we're adding a month of work to the Standard, because every line of dynamic semantics for assertions will have to be totally rewritten. And it took us dozens of iterations to get them right as it is (IF they're right). > But now I think 4.9(34/3) needs to say: > > The expression is illegal if its evaluation raises an exception. > For the purposes of this evaluation, the assertion policy is assumed > to be Check. > > As I said above, if "evaluation" can happen at compile time, then so > can "raising". I'm still dubious, but I think it is irrelevant because assertions are surely "checks". Thoughts? **************************************************************** From: Bob Duff Sent: Thursday, February 16, 2012 5:18 AM > That's a *big* problem, ... Please don't panic! The only big problem is that the RM is too big. ;-) >...because preconditions and the like are most certainly described as >"checks" (see, for example, the Dynamic Semantics section of 6.1.1). I see.
I didn't realize that, so I wrote the predicates section thinking that assertions are not checks. Nobody seemed to care at the time, so let's not get too excited. This is what happens when we make the RM too big for one person to read cover to cover; we have to live with it now. >...And trying to describe them as something else is very difficult (in >terms of wording). The wording of 4.6(51/3) and 4.6(57) is an example >of how confusing this is. > > I would prefer to call these things "checks" (because they are), and > then exempt checks controlled by Assertion_Policy (that is, checks > caused by assertions) from Suppress. OK, if you think that's easier, I'm all for it. Certainly assertions are checks, intuitively speaking. >...That shouldn't take many words > in 11.5. Good. >...(Of course, it would be even better to allow these to be controlled >by Suppress as well as Assertion_Policy. But that seems like more >change than we'd like to make now. Of course, that would provide a >solution to the > AI05-0290-1 problems of assertion control, in particular the >availability of Unsuppress. But I digress...) Yeah, it's a bit of a mess that we have two completely different ways of suppressing check-like things. But I agree with not trying to fix that now. Just yesterday I was discussing a compiler bug with Ed Schonberg, which involved something like "expand this node with all checks suppressed", and the bug was that assertions were NOT being suppressed, but they needed to be. > > (I don't entirely agree with that decision, but it's not important.) > > So > > 4.6(51/3) correctly avoids "check" for the predicate. And > > 4.6(57) correctly says that checks raise P_E or C_E. An AARM note > > could mention that a predicate failure could raise A_E, but isn't > > mentioned because it isn't a check. > > That's what I did for now, but essentially all of the other assertions > are described as checks. It's just too complicated to invent an entire > new language and semantics just for them. If we have to change that, I > think we're adding a month of work to the Standard, because every line > of dynamic semantics for assertions will have to be totally rewritten. > And it took us dozens of iterations to get them right as it is (IF they're > right). Well, I suppose they're not right, but they're close enough. > > But now I think 4.9(34/3) needs to say: > > > > The expression is illegal if its evaluation raises an exception. > > For the purposes of this evaluation, the assertion policy is assumed > > to be Check. > > > > As I said above, if "evaluation" can happen at compile time, then so > > can "raising". > > I'm still dubious, but I think it is irrelevant because assertions are > surely "checks". OK. **************************************************************** From: Tucker Taft Sent: Thursday, February 16, 2012 8:00 AM I believe (pretty strongly) that Suppress(All_Checks) ought to suppress assertion checks as well. **************************************************************** From: Bob Duff Sent: Thursday, February 16, 2012 8:10 AM I could be convinced of that, but: You don't give any reasons for your pretty-strong belief. We've discussed this, and that's not what we decided. If we had gone that way, then why would we have invented Assertion_Policy? It would make much more sense to have added some new check names. If we went that way, and the program executes "pragma Assert(False);", and assertion checks (or all checks) are suppressed, then is it erroneous?
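[Editor's note: a small, purely illustrative example of what is at stake in this question; Handle_Bad_Input is an invented procedure:

   procedure Demo (X : Integer) is
      pragma Suppress (All_Checks);
   begin
      pragma Assert (X > 0);
      if X <= 0 then
         Handle_Bad_Input;   --  If a suppressed failing assertion is erroneous,
      end if;                --  the compiler may assume X > 0 and delete this
                             --  branch; if suppression merely means "ignored",
                             --  it may not.
   end Demo;

End editor's note.]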
**************************************************************** From: Tucker Taft Sent: Thursday, February 16, 2012 8:28 AM > I could be convinced of that, but: > > You don't give any reasons for your pretty-strong belief. There are a lot of programs that use Suppress(All_Checks) to mean "turn off all run-time checks", and as we now seem to agree, assertion checks are "checks." > We've discussed this, and that's not what we decided. I must have missed or slept through the discussion on All_Checks as applied to assertion checks. > If we had gone that way, then why would we have invented > Assertion_Policy? It would make much more sense to have added some > new check names. Assertion policy was invented because there was more than just "on" or "off" for assertions. We imagined "assume true", "ignore", "check", "check fiercely", etc. > If we went that way, and the program executes "pragma Assert(False);", > and assertion checks (or all checks) are suppressed, then is it > erroneous? I was in the discussion where we decided that when assertions are ignored, they are really ignored. It is as though they weren't there at all. **************************************************************** From: Tucker Taft Sent: Thursday, February 16, 2012 8:33 AM ... > I could be convinced of that, but: > > You don't give any reasons for your pretty-strong belief. The other reason is the wording in 11.5 about All_Checks: [The following check corresponds to all situations in which any predefined exception is raised.] 25 All_Checks Represents the union of all checks; [suppressing All_Checks suppresses all checks.] 25.a Ramification: All_Checks includes both language-defined and implementation-defined checks. 25.b/3 To be honest: {AI05-0005-1} There are additional checks defined in various Specialized Needs Annexes that are not listed here. Nevertheless, they are included in All_Checks and named in a Suppress pragma on implementations that support the relevant annex. Look up "check, language-defined" in the index to find the complete list. --- This certainly conveys to me the message that Suppress(All_Checks) suppresses all checks, whether or not we have a "named" check associated with them. **************************************************************** From: Randy Brukardt Sent: Thursday, February 16, 2012 1:28 PM > I believe (pretty strongly) that Suppress(All_Checks) ought to > suppress assertion checks as well. This from the guy who insisted up and down that suppression was the wrong model for assertions. :-) I actually don't know how to reconcile that with Assertion_Policy(Ignore). Suppress(All_Checks) makes the program erroneous if checks fail; Assertion_Policy(Ignore) doesn't. The cool thing is that if this is true, we only need to add a user Note to 11.5: "Assertions are checks, so they're suppressed by Suppress(All_Checks). We don't give them a check name, since using Assertion_Policy is preferred if it is desired to turn off assertions only." (This is too important for just an AARM note.) And reword a few things so that predicates are described as checks. **************************************************************** From: Bob Duff Sent: Thursday, February 16, 2012 1:49 PM > The other reason is the wording in 11.5 about All_Checks: > > [The following check corresponds to all situations in which any > predefined exception is raised.] > > Represents the union of all checks; [suppressing All_Checks suppresses > all checks.] Assertion_Error is not a predefined exception.
So if you want Suppress to suppress assertions, we need new wording somewhere. Most of the above text is @Redundant. If you want "the union of all checks" to make sense for assertions, then I think the assertions need check name(s). And from your other email, I assume you're proposing to change this: 26 If a given check has been suppressed, and the corresponding error situation occurs, the execution of the program is erroneous. although that wasn't 100% clear to me. **************************************************************** From: Tucker Taft Sent: Thursday, February 16, 2012 1:54 PM > I actually don't know how to reconcile that with Assertion_Policy(Ignore). > Suppress(All_Checks) makes the program erroneous if checks fail; > Assertion_Policy(Ignore) doesn't. Assertion checks are user-specified, and so there is no reason for execution to become erroneous if they are suppressed, so long as it is interpreted as meaning the same thing as ignored. Other kinds of checks, such as array out of bounds, null pointer, discriminant checks, etc., clearly make the execution erroneous if they fail and execution proceeds. Assertion policy was created to allow for more implementation flexibility through the use of implementation-defined policies. **************************************************************** From: Steve Baird Sent: Thursday, February 16, 2012 1:51 PM >> I believe (pretty strongly) that Suppress(All_Checks) ought to >> suppress assertion checks as well. > > This from the guy who insisted up and down that suppression was the > wrong model for assertions. :-) I think this means that we want to make it clear that suppressing an assertion check cannot lead to erroneousness in the same way that suppressing other kinds of checks can. It doesn't make sense to try to define the behavior of a program execution which, absent suppression, would have failed an array indexing check. That's why such an execution is, quite correctly, defined to be erroneous. Assertion checks are different - type-safety and all the other invariants that an implementation (as opposed to a user) might depend on are not compromised if we ignore assertion checks. If we say that a suppressed assertion check simply might or might not be performed, this doesn't lead to definitional problems. Presumably suppressing an assertion check (when the assertion policy is Check) means that an implementation is allowed, but not required, to behave as though the assertion policy in effect is Ignore (which, incidentally, includes evaluation of the Boolean expression). **************************************************************** From: Tucker Taft Sent: Thursday, February 16, 2012 2:04 PM > Assertion_Error is not a predefined exception. So if you want > Suppress to suppress assertions, we need new wording somewhere. > > Most of the above text is @Redundant. If you want "the union of all > checks" to make sense for assertions, then I think the assertions need > check name(s). I'm not sure I would agree that if something is a "check" then it automatically needs a "check name." > And from your other email, I assume you're proposing to change this: > > 26 If a given check has been suppressed, and the corresponding error > situation occurs, the execution of the program is erroneous. > > although that wasn't 100% clear to me. Yes, as mentioned in an earlier note, suppressing an assertion check would mean ignoring it, not presuming it was true. I gave my reasons why I think All_Checks should cover assertion checks.
I'd be curious how others feel. When someone says "suppress all" I have always assumed they really meant it. The creation of Assertion_Policy was to give more flexibility, but I never thought it meant making "suppress all" mean "suppress some." Maybe I am the only person who feels this way... **************************************************************** From: Edmond Schonberg Sent: Thursday, February 16, 2012 2:07 PM > Assertion policy was created to allow for more implementation > flexibility through the use of implementation-defined policies. Indeed, but I'm afraid it's not flexible enough because it's a configuration pragma, while Suppress gives you a per-scope control. Could we just make it into a regular pragma with similar semantics? **************************************************************** From: Randy Brukardt Sent: Thursday, February 16, 2012 2:45 PM ... > > Most of the above text is @Redundant. If you want "the union of all > > checks" to make sense for assertions, then I think the assertions > > need check name(s). > > I'm not sure I would agree that if something is a "check" > then it automatically needs a "check name." I would hope not, since there are no check names for checks defined in Annexes, and there is an AARM note (which someone quoted earlier) that says they're included in All_Checks. > > And from your other email, I assume you're proposing to change this: > > > > 26 If a given check has been suppressed, and the corresponding error > > situation occurs, the execution of the program is erroneous. > > > > although that wasn't 100% clear to me. > > Yes, as mentioned in an earlier note, suppressing an assertion check > would mean ignoring it, not presuming it was true. Could you suggest a wording for this paragraph that would have the right effect? > I gave my reasons why I think All_Checks should cover assertion checks. I'd be > curious how others feel. When someone says "suppress all" I have always > assumed they really meant it. The creation of Assertion_Policy was to give > more flexibility, > but I never thought it meant making "suppress all" mean "suppress some." > Maybe I am the only person who feels this way... I've always thought that Suppress was a better model than Assertion_Policy for assertions. I agree that not making them erroneous is probably a good idea. But I would much prefer that the implementation be allowed to check any assertions that it wants to, even when they are suppressed. (They're supposed to be true, after all.) Presumably, an implementation would only check those that it could then prove to be true or are very cheap. "Ignore" does not let the implementation have any flexibility in this area. OTOH, making Suppress(All_Checks) apply to assertions might be a (minor) compatibility problem. pragma Assert would be included in that, and it probably isn't included in Ada 2005 compilers. Thus it could be turned off where it is now on. Not a big deal, but we ought to remain aware of it. **************************************************************** From: Randy Brukardt Sent: Thursday, February 16, 2012 2:51 PM >> Assertion policy was created to allow for more implementation >> flexibility through the use of implementation-defined policies. >> > Indeed, but I'm afraid it's not flexible enough because it's a >configuration pragma, while Suppress gives you a per-scope control. > Could we just make it into a regular pragma with similar semantics? I forgot about that; it had come up in the write-up of AI05-0290-1 I did yesterday. 
If we gave these check names, at least some of the control issues would go away, because Unsuppress would do what Erhard and I have been asking for (give a way for a library to force predicate and precondition checks always). As noted in my previous note, Suppress is better than Ignore because it allows the compiler to make the check and then depend on it later (presumably it would only do that if the net code size was smaller). That eliminates some of the need for other policies. So perhaps much of what we really want could be accomplished by mostly abandoning Assertion_Policy, and using Suppress, modulo that it is not erroneous to fail an unchecked assertion. **************************************************************** From: Tucker Taft Sent: Thursday, February 16, 2012 2:58 PM >>> And from your other email, I assume you're proposing to change this: >>> >>> 26 If a given check has been suppressed, and the corresponding error >>> situation occurs, the execution of the program is erroneous. >>> >>> although that wasn't 100% clear to me. >> >> Yes, as mentioned in an earlier note, suppressing an assertion check >> would mean ignoring it, not presuming it was true. > > Could you suggest a wording for this paragraph that would have the > right effect? How about: If a given check has been suppressed, then if it is an assertion check, the corresponding assertion is simply ignored, while if it is some other check and the corresponding error situation occurs, the execution of the program is erroneous. >> I gave my reasons why I think All_Checks should cover assertion checks. I'd be >> curious how others feel. When someone says "suppress all" I have >> always assumed >> they really meant it. The creation of Assertion_Policy was to give >> more flexibility, >> but I never thought it meant making "suppress all" mean "suppress some." >> Maybe I am the only person who feels this way... > > I've always thought that Suppress was a better model than > Assertion_Policy for assertions. I agree that not making them > erroneous is probably a good idea. But I would much prefer that the > implementation be allowed to check any assertions that it wants to, > even when they are suppressed. (They're supposed to be true, after > all.) Presumably, an implementation would only check those that it could then prove to be true or are very cheap. "Ignore" > does not let the implementation have any flexibility in this area. > > OTOH, making Suppress(All_Checks) apply to assertions might be a > (minor) compatibility problem. pragma Assert would be included in > that, and it probably isn't included in Ada 2005 compilers. Thus it > could be turned off where it is now on. Not a big deal, but we ought to remain aware of it. Pragma Assert was in plenty of Ada 95 compilers as well. I'd be curious how it was implemented there. Certainly in Green Hills and Aonix compilers I can assure you that Suppress(All_Checks) turned off assertion checks as well. **************************************************************** From: Steve Baird Sent: Thursday, February 16, 2012 3:10 PM > If a given check has been suppressed, then if it is an assertion > check, the corresponding assertion is simply ignored, while if it is > some other check and the corresponding error situation occurs, the > execution of the program is erroneous. Although I agree with the general direction you are suggesting, I see two minor problems with this wording. 1) For Assertion_Policy Ignore, we still evaluate the Boolean. 
I don't think we want something similar but slightly different here. 2) I want to preserve the longstanding rule that suppression only gives an implementation additional permissions - it never imposes a requirement on an implementation. It sounds like you are requiring that the assertion must be ignored, as opposed to allowing it to be ignored. I'll try to come up with wording that addresses these points. **************************************************************** From: Randy Brukardt Sent: Thursday, February 16, 2012 3:25 PM > 1) For Assertion_Policy Ignore, we still evaluate the Boolean. > I don't think we want something similar but slightly different > here. Huh? When the policy is Ignore, nothing is evaluated. Nor can anything be assumed. It wouldn't be very useful if it evaluated something. > 2) I want to preserve the longstanding rule that suppression only > gives an implementation additional permissions - it never imposes > a requirement on an implementation. It sounds like you are > requiring that the assertion must be ignored, as opposed to > allowing it to be ignored. Definitely. I consider that an advantage rather than a "difference". **************************************************************** From: Tucker Taft Sent: Thursday, February 16, 2012 3:32 PM > Although I agree with the general direction you are suggesting, I see > two minor problems with this wording. > > 1) For Assertion_Policy Ignore, we still evaluate the Boolean. > I don't think we want something similar but slightly different here. I had forgotten that, and I doubt if all Ada 95 compilers follow that rule, and it would really defeat the purpose of ignoring the assertion from a performance point of view. Where is that specified? > 2) I want to preserve the longstanding rule that suppression only > gives an implementation additional permissions - it never imposes a > requirement on an implementation. It sounds like you are requiring > that the assertion must be ignored, as opposed to allowing it to be > ignored. I suppose, but then it must either check or ignore. It can't suppress the check and then assume it is true. One of the fundamental principles that got us to the Assertion_Policy approach was that adding a pragma Assert never made the program *less* safe because it was asserting something that was in fact untrue. > > I'll try to come up with wording that addresses these points. All power to you. **************************************************************** From: Randy Brukardt Sent: Thursday, February 16, 2012 3:53 PM ... > > 1) For Assertion_Policy Ignore, we still evaluate the Boolean. > > I don't think we want something similar but slightly different here. > > I had forgotten that, and I doubt if all Ada 95 compilers follow that > rule, and it would really defeat the purpose of ignoring the assertion > from a performance point of view. > Where is that specified? I don't think you forgot anything, I think Steve is making this up. > > 2) I want to preserve the longstanding rule that suppression only > > gives an implementation additional permissions - it never imposes a > > requirement on an implementation. It sounds like you are requiring > > that the assertion must be ignored, as opposed to allowing it to be > > ignored. > > I suppose, but then it must either check or ignore. > It can't suppress the check and then assume it is true. 
> One of the fundamental principles that got us to the Assertion_Policy > approach was that adding a pragma Assert never made the program *less* > safe because it was asserting something that was in fact untrue. Right, this is a general principle in Ada. But of course "check" doesn't mean that any code will actually be executed: such a check might be optimized out. For instance: procedure P (A : access Something) with Pre => A /= null; Ignoring the fact that the programmer should probably have used a null exclusion, if a call looks like: if Ptr /= null then P (Ptr); The check would also be Ptr /= null. One would expect that normal common-subexpression and dead code elimination would completely remove the check. But it still can be assumed true in the body in this case. **************************************************************** From: Steve Baird Sent: Thursday, February 16, 2012 3:54 PM > I'll try to come up with wording that addresses these points. 1) Add Assertion_Check to the list of defined checks (details TBD) 2) Replace "check" with "check other than Assertion_Check" in the erroneous execution section of 11.5. 3) In the Implementation Permissions section, add At any point within a region for which Assertion_Check is suppressed, an implementation is allowed (but not required) to define the Assertion_Policy in effect at that point to be Ignore. > Huh? When the policy is Ignore, nothing is evaluated. Nor can anything > be assumed. Oops. My mistake. The RM does contain If the assertion policy is Ignore at the point of a pragma Assert, ...the elaboration of the pragma consists of evaluating the boolean expression ... but the elided text is significant. My bad. Nonetheless, I still think that defining the effect of suppression in terms of the Ignore policy is a good idea. **************************************************************** From: Bob Duff Sent: Thursday, February 16, 2012 5:29 PM > Assertion checks are user-specified, and so there is no reason for > execution to become erroneous if they are suppressed,... Not sure what "no reason" means. Yeah, there is no reason why we MUST define a suppressed False assertion to be erroneous. But there is a reason that there ought to be a mode where it's erroneous: efficiency. Currently, that's done via an impl-def policy. There are lots of cases where efficiency can be improved by assuming (and not checking) that assertions are True. >...so long as it is interpreted as meaning the same thing as ignored. That part is circular reasoning: yes, of course if we define it like "Ignore" then it's not erroneous. >...Other kinds of checks, > such as array out of bounds, null pointer, discriminant checks, etc., >clearly make the execution erroneous if they fail and execution >proceeds. On the flip side, there are non-assertion checks that could sensibly have an "ignore" option. Range checks, for example: don't check the range, but don't later assume it's in range. Overflow checks: don't check, but return an implementation dependent value of the type. Robert is fond of pointing out that: pragma Suppress(...); if X = 0 then Put_Line(...); end if; Y := 1/X; most programmers are surprised that the compiler can completely eliminate the if statement. > Assertion policy was created to allow for more implementation > flexibility through the use of implementation-defined policies. Right, but it's a bit of a mess. It's not clear why more flexibility isn't desirable for non-assertion checks. 
> > The cool thing is that if this is true, we only need to add a user > > Note to > > 11.5: "Assertions are checks, so they're suppressed by Suppress(All_Checks). > > We don't give them a check name, since using Assertion_Policy is > > preferred if it is desired to turn off assertions only." (This is > > too important for just an AARM note.) > > > > And reword a few things so that predicates are described as checks. Right, if assertions are checks, we really should be using the standard wording for checks: "a check is made...". Currently we say "If the policy is Check, then ... Assertion_Error is raised", which is weird, because we don't say "if Divide_Check is not suppressed, a check is made that ... is nonzero". It would simplify if the assertion-check wording assumed that the policy is Check. Then when we define Assertion_Policy, we say, "Wording elsewhere assumes ... Check. If the policy is Ignore, then instead ...". I fear we don't have the time to do all this rewording. Oh, well, it's not the end of the world if we're inconsistent. > I'm not sure I would agree that if something is a "check" then it > automatically needs a "check name." I think the intent was that all checks have names. But the ones in the annexes are only named via a "To be honest". That's a cheat, of course. > Yes, as mentioned in an earlier note, suppressing an assertion check > would mean ignoring it, not presuming it was true. I'm a little uncomfortable with the idea that Suppress wouldn't mean "erroneous". I'm a little uncomfortable that: subtype S is Natural range 0..10; has confusingly different semantics than: subtype S is Natural with Static_Predicate => S <= 10; > I gave my reasons why I think All_Checks should cover assertion > checks. I'd be curious how others feel. I have mixed feelings. > ...When someone > says "suppress all" I have always assumed they really meant it. Yeah, but to me, "really mean it" means "cross my heart and hope to die, you may strike me with erroneous lightning if I'm wrong". In other words, Suppress is the extreme, "prefer efficiency over safety". > The creation of Assertion_Policy was to give more flexibility, but I > never thought it meant making "suppress all" mean "suppress some." > Maybe I am the only person who feels this way... I see your point. Mixed feelings. > > 1) For Assertion_Policy Ignore, we still evaluate the Boolean. > > I don't think we want something similar but slightly different here. > > I had forgotten that, ... Please re-forget that. ;-) > One of the fundamental principles that got us to the Assertion_Policy > approach was that adding a pragma Assert never made the program *less* > safe because it was asserting ^^^^^ > something that was in fact untrue. No, "never" is wrong. The principle holds for Check and Ignore policies, but implementations can have a policy where the principle is violated -- and such a policy has some advantage. > 1) Add Assertion_Check to the list of defined checks (details TBD) I'd prefer to split out Precondition_Check, Postcondition_Check, Predicate_Check, Invariant_Check, Assert_Check (Pragma_Assert_Check?). Assertion_Check could be the union of these. Predicate_Check could be the union of Static_Predicate_Check and Dynamic_Predicate_Check. **************************************************************** From: Randy Brukardt Sent: Thursday, February 16, 2012 6:17 PM ... 
> Steve Baird wrote: > > 1) Add Assertion_Check to the list of defined checks (details TBD) > > I'd prefer to split out Precondition_Check, Postcondition_Check, > Predicate_Check, Invariant_Check, Assert_Check (Pragma_Assert_Check?). > Assertion_Check could be the union of these. > Predicate_Check could be the union of Static_Predicate_Check and > Dynamic_Predicate_Check. And something like Before_Call_Assertion_Check is the union of Predicate_Check and Precondition_Check (I don't have the perfect name); and After_Call_Assertion_Check is the union of Postcondition_Check and Invariant_Check. In any case, this is definitely getting into AI05-0290-1 territory ("Improved control for assertions"), and there is not enough time to come to any conclusions before the agenda is finalized (that's tomorrow), so I think I'll probably just add all of this to that AI and we'll have to hash it out at the meeting. **************************************************************** From: Bob Duff Sent: Thursday, February 16, 2012 6:33 PM > I'll probably just add all of this to that AI and we'll have to hash > it out at the meeting. OK. Or, we can hash it out between 2013 and 2020. **************************************************************** From: Randy Brukardt Sent: Thursday, February 16, 2012 6:59 PM > OK. Or, we can hash it out between 2013 and 2020. I don't think that works, at least for Suppress(All_Checks), because changing that later would be a massive incompatibility. (It already might be an incompatibility, but not with Tucker's compilers.) Similarly, we've had several strong comments that we need mechanisms for 3rd-party packages. That shouldn't be ignored. Once we've dealt with those two, it seems inconceivable that we couldn't agree on the rest (which seems easy to me). In any case, I hope we don't spend the whole meeting on things that no one will ever notice (this is definitely *not* in that category). **************************************************************** From: Erhard Ploedereder Sent: Friday, February 17, 2012 3:49 AM >> I believe (pretty strongly) that Suppress(All_Checks) ought to >> suppress assertion checks as well. > The cool thing is that if this is true, we only need to add a user > Note to > 11.5: "Assertions are checks, so they're suppressed by Suppress(All_Checks). > We don't give them a check name, since using Assertion_Policy is > preferred if it is desired to turn off assertions only." (This is too > important for just an AARM note.) And the semantics of pragma Unsuppress(All_Checks); pragma Assertion_Policy(Ignore); is what ? Obviously there needs to be a rule to resolve the apparent contradiction. Incidently: The point that "the old checks" prevent erroneousness within the framework of language semantics while assertion checks are unrelated to erroneousness but rather deal with application semantics is a very good one. We ought to keep that in mind when deciding on assertion control. **************************************************************** From: Jeff Cousins Sent: Friday, February 17, 2012 4:12 AM John's book says "pragma Suppress (All_Checks); which does the obvious thing". So it's not so obvious. What would people naturally reply if asked what it covers, without thinking too much? As the Assertion_Policy is called "Check" and not something like "Verify" my top-of-my-head answer would be that assertions are checks. **************************************************************** From: Erhard Ploedereder Sent: Friday, February 17, 2012 4:29 AM >> OK. 
Or, we can hash it out between 2013 and 2020. > I don't think that works, at least for Suppress(All_Checks), because > changing that later would be a massive incompatibility. I agree with Randy. This is way too important to not resolve now. And I want to expand the future incompatibility argument to the Assertion control in general. **************************************************************** From: Erhard Ploedereder Sent: Friday, February 17, 2012 5:06 AM If pragma Assertion_Policy(Ignore) "guarantees" that the assertion is not evaluated, then there is no check to be talked about, is there? Consequently

   pragma Unsuppress(All_Checks);
   pragma Assertion_Policy(Ignore);

would imply that there is nothing there to be unsuppressed, hence Unsuppress would not be an answer to the 3rd-party-SW question. In the end, I propose to make the Assertion Control pragmas semantically analogous to the Suppress/Unsuppress pragmas (including the scoping), but controlling only the assertion world. The syntactic differences are there only to separate assertion checks from runtime checks that prevent erroneousness. (Maybe it would be a good idea to have the assertion check names as a second argument to assertion control pragmas only, with "All_Checks" as a special comprehensive choice on the Suppress side.) **************************************************************** From: John Barnes Sent: Friday, February 17, 2012 1:26 PM > John's book says "pragma Suppress (All_Checks); which does the obvious thing". > So it's not so obvious. It was obvious when I wrote it! **************************************************************** From: Tucker Taft Sent: Saturday, February 25, 2012 10:43 PM Here is a new version from Erhard and Tuck of an AI on pragma Assertion_Policy. [This is version /04 of the AI - Editor.] **************************************************************** From: Randy Brukardt Sent: Saturday, February 25, 2012 11:08 PM Comments: (1) Shouldn't "assertion_kind" be "assertion_aspect_mark"? That is, why use words to repeat here what is specified in 2.8 and 13.1.1? (Still will need the list of them, of course.) (2) Then the wording would be about "assertion aspects" rather than "assertion kinds". **************************************************************** From: Erhard Ploedereder Sent: Sunday, February 26, 2012 11:32 AM Two more places to fix (fairly easily): 4.6(51/3) and 6.1.1(19/3). Apply the boilerplate... If required by the assertion policies in effect at < >, --------------- (I checked for all occurrences of assertion policy in the RM and AARM.) 4.9(34/3) is another, but it is ok as is. **************************************************************** From: Erhard Ploedereder Sent: Sunday, February 26, 2012 11:48 AM Discovery of 19/3 led to a simplification of 31/3 and 35/3: modify 6.1.1(19/3) to read If required by the Pre, Pre'Class, Post, or Post'Class assertion policies (see 11.4.2) in effect at the point of a corresponding aspect specification applicable to a given subprogram or entry, then the respective preconditions and postconditions are considered to be enabled for that subprogram or entry. Modify 6.1.1(31/3) to: Upon a call of the subprogram or entry, after evaluating any actual parameters, checks for enabled preconditions are performed as follows: Modify 6.1.1(35/3), first sentence, to: Upon successful return from a call of the subprogram or entry, prior to copying back any by-copy in out or out parameters, the check of enabled postconditions is performed.
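[Editor's note: a small sketch of the intent of the reworded 6.1.1(19/3) above. This is illustration only, not wording from the AI, and the package and subprogram names are invented. The policy in effect where the aspect specification occurs decides whether the precondition is enabled; the policy at the call site is irrelevant:

   package Stack_Util is
      pragma Assertion_Policy (Pre => Check);  -- policy at the aspect specification

      procedure Pop (Depth : in out Natural)
        with Pre => Depth > 0;                 -- enabled, because Check is in force here
   end Stack_Util;

   with Stack_Util;
   procedure Client is
      pragma Assertion_Policy (Pre => Ignore); -- has no effect on Pop's precondition
      D : Natural := 0;
   begin
      Stack_Util.Pop (D);                      -- precondition is still checked: Assertion_Error
   end Client;
]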
**************************************************************** From: Robert Dewar Sent: Sunday, February 26, 2012 3:48 PM Query: for run time checks, the implementation can assume that no check is violated if checks are suppressed. This is certainly not true for ignoring assertions, a compiler should ignore assertions if they are off (GNAT at least in one case uses suppressed assertions to sharpen warning messages, the case is an assert of pragma Assert (S'First = 1); which suppresses warnings about S'First possibly not being 1 even if assertions are suppressed. But warnings are insignificant semantically, so that's OK. What about preconditions and postconditions that are suppressed, are they also required to be totally ignored? Same question with predicates. If predicates are suppressed, I assume they still operate as expected in controlling loops etc?? Sorry if this is confused! **************************************************************** From: Erhard Ploedereder Sent: Monday, February 27, 2012 1:46 AM ... > What about preconditions and postconditions that are suppressed, are > they also required to be totally ignored? Same question with > predicates. If predicates are suppressed, I assume they still operate > as expected in controlling loops etc?? > > Sorry if this is confused! Present thinking is that assertions, pre- and postconditions and type invariants are ignored in the above sense. Compiler is not allowed to assume truth. I think that is a pretty solid position. On subtype predicates the jury is still out. It was written up as "ignored" in the above sense, but then we discovered some issues; that is the one issue remaining on the assertion control. **************************************************************** From: Robert Dewar Sent: Monday, February 27, 2012 7:20 AM > Present thinking is that assertions, pre- and postconditions and type > invariants are ignored in the above sense. Compiler is not allowed to > assume truth. I think that is a pretty solid position. I think that makes sense. I am also thinking that perhaps in GNAT we will implement another setting, which says, assume these are True, but don't generate run-time code. That's clearly appropriate for things you have proved true. **************************************************************** From: Tucker Taft Sent: Monday, February 27, 2012 10:13 AM > What about preconditions and postconditions that are suppressed, are > they also required to be totally ignored? Same question with > predicates. If predicates are suppressed, I assume they still operate > as expected in controlling loops etc?? Pragma Suppress has no effect on pre/postconditions, except for pragma Suppress(All_Checks), which *allows* the implementation to interpret it as Assertion_Policy(Ignore). We intentionally do not want erroneousness to be associated with assertion-ish things like pre/postconditions, etc. The exact details of that are part of this AI, and for subtype predicates, are pretty subtle. We decided we didn't like the solution currently proposed by this AI for subtype predicates, but we have agreed on a different approach which still avoids erroneousness. I will be writing that up in the next couple of days. > Sorry if this is confused! This is a confusing area. We spent several hours on this topic, and visited many different places in the design "space." 
Personally, I feel pretty good about where we "landed," but I haven't written it up yet, so you will have to hang in there for a few more days before you will see the full story, at least with respect to subtype predicates. But we all felt it was quite important that adding assertion-like things to a program and then specifying Assertion_Policy(Ignore) should *not* introduce erroneousness, even if your assertion-like things are incorrect. **************************************************************** From: Steve Baird Sent: Monday, February 27, 2012 10:42 PM How does the design you have in mind handle the following example?

   declare
      pragma Assertion_Policy (Ignore);

      subtype Non_Zero is Integer with Static_Predicate => Non_Zero /= 0;

      type Rec (D : Non_Zero) is record
         case D is
            when Integer'First .. -1 => ...;
            when 1 .. Integer'Last   => ....;
         end case;
      end record;

      Zero : Integer := Ident_Int (0);

      X : Rec (D => Zero);

I'm wondering about the general issue of how the Ignore assertion policy interacts with the coverage rules for case-ish constructs (case statements, case expressions, variant parts) when the nominal subtype is a static subtype with a Static_Predicate. **************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 6:42 AM > I'm wondering about the general issue of how the Ignore assertion > policy interacts with the coverage rules for case-ish constructs (case > statements, case expressions, variant parts) when the nominal subtype > is a static subtype with a Static_Predicate. Ignore should only be about removing checks, not altering any of the other semantics of static predicates, it would not be acceptable at all for ignore to make a case statement illegal! **************************************************************** From: Tucker Taft Sent: Tuesday, February 28, 2012 9:03 AM >> I'm wondering about the general issue of how the Ignore assertion >> policy interacts with the coverage rules for case-ish constructs >> (case statements, case expressions, variant parts) when the nominal >> subtype is a static subtype with a Static_Predicate. > > Ignore should only be about removing checks, not altering any of the > other semantics of static predicates, it would not be acceptable at > all for ignore to make a case statement illegal! Correct. The intent is "Ignore" means "ignore for the purpose of range checks." Any non-checking semantics remain, such as membership, full coverage, etc. Case statements luckily include an option to raise Constraint_Error at run-time if the value is not covered for some reason (RM 5.4(13)). The same is true for case expressions. Variant records don't really have such a fall back, but we will have to be clear that even if Ignore applies where the discriminant subtype is specified, the subtype's predicate still determines whether or not a particular discriminant-dependent component exists. It is as though there were a "when others => null;" alternative in the variant record, where the "when others" covers values that don't satisfy the predicate. I suppose one way to think of it is that values that don't satisfy the predicate are like values outside of the base range of the subtype. Some objects of the subtype can have them, and some objects can't, but the rules of the language should be set up so that values that don't satisfy the predicate, analogous to values outside the base range, don't cause erroneousness.
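[Editor's note: a quick sketch of the "non-checking semantics remain" point, reusing the Non_Zero subtype and Ident_Int from Steve's example above; illustration only, not wording from the AI. Even when Static_Predicate checks are ignored, a membership test still evaluates the predicate, so it can serve as an explicit guard:

   declare
      pragma Assertion_Policy (Static_Predicate => Ignore);

      subtype Non_Zero is Integer with Static_Predicate => Non_Zero /= 0;

      N : Integer  := Ident_Int (0);
      D : Non_Zero := N;       -- no predicate check (checks are disabled), and
                               -- no assumption later that D /= 0
   begin
      if N in Non_Zero then    -- False: membership still evaluates the predicate
         raise Program_Error;  -- not reached
      end if;
   end;
]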
**************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 9:06 AM ... > I suppose one way to think of it is that values that don't satisfy the > predicate are like values outside of the base range of the subtype. > Some objects of the subtype can have them, and some objects can't, but > the rules of the language should be set up so that values that don't > satisfy the predicate, analogous to values outside the base range, > don't cause erroneousness. That seems exactly right, and for a case statement, it means that just as you do a range check even if checks are suppressed, you will do a predicate check even if checks are suppressed. Interestingly, gnat has a switch -gnatV that forces extra validity checks around the place. I think these should also trigger extra predicate checks for predicated subtypes. **************************************************************** From: Jeff Cousins Sent: Tuesday, February 28, 2012 9:42 AM Have we got a definite answer yet as to whether Steve's example subtype Non_Zero is Integer with Static_Predicate => Non_Zero /= 0; type Rec (D : Non_Zero) is record case D is when Integer'First .. -1 => ...; when 1 .. Integer'Last => ....; end case; end record; (i.e. without coverage of the Zero case) is always legal, always illegal, or dependent on the Assertion Policy (which is implementation defined if not specified)? I could live with either of the first two as long as it's spelt out which. I think that before the meeting we would have been evenly split about which we were expecting. It seems to have been a mistake to have sold predicates as being similar to constraints. **************************************************************** From: Steve Baird Sent: Tuesday, February 28, 2012 10:02 AM ... >> I'm wondering about the general issue of how the Ignore assertion >> policy interacts with the coverage rules for case-ish constructs >> (case statements, case expressions, variant parts) when the nominal >> subtype is a static subtype with a Static_Predicate. > Ignore should only be about removing checks, not altering any of the > other semantics of static predicates, it would not be acceptable at > all for ignore to make a case statement illegal! Agreed. My question was about dynamic semantics only. ... > Variant records don't really have > such a fall back, but we will have to be clear that even if Ignore > applies where the discriminant subtype is specified, the subtype's > predicate still determines whether or not a particular > discriminant-dependent component exists. It is as though there were a > "when others => null;" alternative in the variant record, where the > "when others" covers values that don't satisfy the predicate. I think this is a bad idea. I think (as in the case of case statements) Constraint_Error should be raised. Let's detect a problem like this ASAP instead of sweeping it under the rug and then seeing mystifying behavior later. **************************************************************** From: Tucker Taft Sent: Tuesday, February 28, 2012 10:19 AM > Have we got a definite answer yet as to whether Steve's example ... > (i.e. without coverage of the Zero case) is always legal, always illegal, or > dependent on the Assertion Policy (which is implementation defined if not > specified)? This is always legal. Legality should not be affected by the Assertion_Policy. I will be writing this up shortly. 
For case statements, if the case expression's value doesn't satisfy the static predicate, then Constraint_Error would be raised, per the existing RM paragraph 5.4(13). For variant records, I think we have to assume an implicit "when others => null" meaning that if the discriminant does not satisfy the predicate, the implicit "null" variant alternative is chosen. > I could live with either of the first two as long as it's spelt out which. I > think that before the meeting we would have been evenly split about which we > were expecting. It seems to have been a mistake to have sold predicates as > being similar to constraints. Stay tuned for my upcoming write-up. **************************************************************** From: Bob Duff Sent: Sunday, February 26, 2012 5:23 PM > This is certainly not true for ignoring assertions, ... ... > What about preconditions and postconditions that are suppressed, Note on terminology: "assertion" is not synonymous with "pragma Assert", according to the RM. The term "assertion" includes pragma Assert, and also predicates, pre, post, and invariants. I believe this agrees with the usage in Eiffel. I think the intent is to treat all assertions, including pre/post, the same way with respect to your question. But it's still somewhat up in the air, and there was some talk during the meeting of treating some kinds of assertions differently.
In languages that only have assert statements (C and pre-2012 Ada[1], for example), that's all you have. But in languages that have preconditions and invariants[2], those are included. [1] Yeah, I know it's a macro in C, and a pragma in Ada, but conceptually, it's a statement. [2] Called "predicates" in Ada. ;-) **************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 7:25 AM >> Why invent inherently confusing terminology > > ARG didn't invent this terminology. It's pretty standard to include > preconditions and the like in "assertion". I mean it's confusing in the Ada context. It is always a mistake to push terminology that no one will use (e.g. a generic package is not a package a package spec is not what you think it is a type should be called a subtype a class type has to be called a tagged type and yes, I would add access/pointer to that list. etc.) I am willing to bet that for nearly all Ada programmers Assertion will continue to refer to things done by pragma Assert, as it always have. And I often see ARG members using this terminology, e.g. something like "Well a precondition in a subprogram body is really just an assertion." which is official nonsense, but everyone understands it! >> Come up with a new word. Contracts for example or somesuch. > > Ada has too many "new" words as it is. For example, using "access" > for what everybody else calls "pointer" or "reference" just causes > confusion. We should go along with industry-wide terms, to the extent > possible. > > "Contract" is an informal term, which to me means a whole set of > assertions, such as all the assertions related to a given abstraction. > Using "contract" to refer to a particular assertion would confuse > things, I think. Contract items? **************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 7:30 AM ... > By the way, you snipped my reference to Eiffel, which agrees that > "assertion" includes preconditions and the like. > Eiffel should be respected when talking about contracts. does Eiffel have assertions in the Ada 2005 sense? The trouble here is that it is so natural to assume that assertion refers to pragma assert because a) it has always done so in Ada b) the name just encourages this identification That's why I think Ada programmers will continue to use the term assertion to mean that which is specified with pragma Assert, and it is futile for the ARG to try to push terminology that won't stick. More Ada 2012 programmers will know previous versions of Ada than Eiffel!! **************************************************************** From: Bob Duff Sent: Tuesday, February 28, 2012 7:45 AM > does Eiffel have assertions in the Ada 2005 sense? Yes, but I think it's called "check", so the potential confusion you're worried about ("assertion" /= "assert") doesn't arise. By the way, I don't deny that there's a potential confusion. I usually get around it by saying something like "assertions, such as preconditions and the like...". In general, programming language terminology is a mess! "Procedure", "function", "method", "subroutine", "routine", "subprogram" -- all mean basically the same thing. It really damages communication among people with different backgrounds. Note that the RM often uses the term "remote procedure call" and its abbreviation "RPC", even though it could be calling a function. It doesn't make perfect sense, but RPC is so common in the non-Ada world, we decided to use it. 
**************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 7:53 AM >> does Eiffel have assertions in the Ada 2005 sense? > > Yes, but I think it's called "check", so the potential confusion > you're worried about ("assertion" /= "assert") doesn't arise. Right, thats what I remembered (no assertions as such). > By the way, I don't deny that there's a potential confusion. > I usually get around it by saying something like "assertions, such as > preconditions and the like...". Yes, but you can't expect people to do that all the time. I suppose we should get in the habit of using "asserts" to refer to the set of things done with pragma assert. (like preconditions). **************************************************************** From: Jean-Pierre Rosen Sent: Tuesday, February 28, 2012 8:04 AM >> "Contract" is an informal term, which to me means a whole set of >> assertions, such as all the assertions related to a given >> abstraction. Using "contract" to refer to a particular assertion >> would confuse things, I think. > > Contract items? I'd rather avoid "contract", because we often refer to generic formals as a contract, we talk about the contract model for assume the best/assume the worst, etc. **************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 8:08 AM yes, good point **************************************************************** From: Erhard Ploedereder Sent: Tuesday, February 28, 2012 10:01 AM > That's why I think Ada programmers will continue to use the term > assertion to mean that which is specified with pragma Assert, and it > is futile for the ARG to try to push terminology that won't stick. So what is your proposal? We need some word or words to describe the group of pragma Assert, prcondition, postcondition, type invariant, and maybe subtype predicate. Repeating them all every time something is said about all of them is editorial suicide. "Contract" isn't it, since Assert(ions) have no contract idea. (By the way, I don't believe that Ada programmers will be so literal in their interpretations that they believe that only pragma Assert can create assertions.) **************************************************************** From: Tucker Taft Sent: Tuesday, February 28, 2012 10:14 AM >> That's why I think Ada programmers will continue to use the term >> assertion to mean that which is specified with pragma Assert, and it >> is futile for the ARG to try to push terminology that won't stick. > > So what is your proposal? We need some word or words to describe the > group of pragma Assert, prcondition, postcondition, type invariant, > and maybe subtype predicate. Repeating them all every time something > is said about all of them is editorial suicide. > "Contract" isn't it, since Assert(ions) have no contract idea. At this point I think we need to stick with "assertion expressions" given that we are using "Assertion_Policy" to control them all, and they all raise Assertion_Error on failure. **************************************************************** From: Jean-Pierre Rosen Sent: Tuesday, February 28, 2012 10:19 AM > So what is your proposal? Assumptions? Checks? (I think it's too vague) **************************************************************** From: Ben Brosgol Sent: Tuesday, February 28, 2012 10:37 AM ... 
> At this point I think we need to stick with "assertion expressions" > given that we are using "Assertion_Policy" to control them all, and > they all raise Assertion_Error on failure. Maybe "assertion forms"? **************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 2:38 PM > (By the way, I don't believe that Ada programmers will be so literal > in their interpretations that they believe that only pragma Assert can > create assertions.) The reason I think this is that this has been the case for years. **************************************************************** From: Erhard Ploedereder Sent: Wednesday, February 29, 2012 7:21 AM > Assumptions? Has exactly the opposite meaning in verification land, i.e. something that you can always assume on faith. > Checks? (I think it's too vague) Already taken by the run-time checks and, as an axiom, not to be confused with "the new stuff". and from Ben B.: > assertion forms Does not solve Robert's concern that the term is completely occupied by pragma Assert. **************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 2:37 PM > I could live with either of the first two as long as it's spelt out > which. I think that before the meeting we would have been evenly > split about which we were expecting. It seems to have been a mistake > to have sold predicates as being similar to constraints. To me, if predicates are not similar to constraints they are broken. The following two should be very similar in effect subtype R is integer range 1 .. 10; subtype R is integer with Static_Predicate => R in 1 .. 10; **************************************************************** From: Ed Schonberg Sent: Tuesday, February 28, 2012 2:45 PM At the meeting there was an agreement in that direction (not unanimous of course, nothing is). This is why Assertion_Policy has to have a different effect on pre/postconditions than on predicate checks. Creating values that violate the predicate is as bad as creating out-of- range values. **************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 2:53 PM The reason I think those two cases have to be very similar (I never like to use the word equivalent :-)) is that I think in practice any Ada programmer would expect them to be similar, they sure *look* similar :-) **************************************************************** From: Tucker Taft Sent: Tuesday, February 28, 2012 3:01 PM They are very similar. The one significant difference is that the first one if you suppress the range check, your program can go erroneous if you violate the constraint. In the second one, if you set Assertion_Policy(Ignore) then some checks will be eliminated, but there will be no assumption that the checks would have passed if they had been performed. **************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 3:24 PM > They are very similar. The one significant difference is that the > first one if you suppress the range check, your program can go > erroneous if you violate the constraint. Not easily though, since generally e.g. uninitialized values don't drive you into erroneousness. **************************************************************** From: Bob Duff Sent: Tuesday, February 28, 2012 3:39 PM > They are very similar. Agreed. If you don't turn off the checks, the only difference is which exception gets raised. 
That counts as "very similar" in my book, especially if you're not going to handle these exceptions (which I expect is the usual case). If you DO turn off the checks, then you should have made sure by other means that the checks won't fail, so they're still quite similar. Of course, you might make a mistake, and the consequences of that mistake might be different, but the rule for programmers is simple: Don't do that. That is, don't violate constraints, and don't violate predicates, and especially don't turn the checks off without good reason. **************************************************************** From: Robert Dewar Sent: Tuesday, February 28, 2012 3:43 PM > and especially don't turn the checks off without good reason. Exactly, and deliberately wanting to violate the check without an exception is NOT a good reason :-) **************************************************************** From: Tucker Taft Sent: Tuesday, February 28, 2012 5:07 PM Here is the "final" version, incorporating the newer ideas about how Assertion_Policy should apply to subtype predicate checks, and using the term "enabled" more uniformly. [Editor's note: This is version /05.] **************************************************************** From: Steve Baird Sent: Tuesday, February 28, 2012 6:42 PM > Here is the "final" version, .. That's optimistic. Some questions and observations: 1) In general, I like the approach for defining how assertion policies interact with subtype predicates. What about cases where a subtype name is repeated with a conformance check? For example pragma Assertion_Policy (Static_Predicate => Check); subtype Foo is Integer with Static_Predicate => Foo /= 123; package Pkg is type T (D : Foo) is private; pragma Assertion_Policy (Static_Predicate => Ignore); private type T (D : Foo) is .... end Pkg; For a subtype that has a specified predicate aspect (either static or dynamic), perhaps we want the assertion policy corresponding to that aspect to participate in subtype conformance. In the above example, the two uses of Foo would fail the subtype conformance legality check even though they denote the same subtype. Or perhaps we punt and just say that this case is implementation dependent. Similar issues occur for subprogram declarations (spec, stub, and body), and deferred constants. 2) What about conformance for 'Access attributes and related checking? We have an access-to-foo type with one assertion policy and an aliased object of type foo declared with a different assertion policy in effect. Do we want to allow the Access attribute of the object to yield a value of the access type? Or two access types which would otherwise be convertible but they differ with respect to applicable assertion policy. Do we want to allow conversion between such access types? Maybe allowing these things is ok, but it makes it very hard to write assertions that you can count on. The same problems occur for access-to-subprogram types. Would we want this stricter conformance rule only if the subtype in question is affected by the assertion policy? Should two uses of Standard,Integer fail a conformance check because of a difference in assertion policy? What about the case where we don't know (e.g., an access type declaration where the designated type is incomplete) or where getting this knowledge requires breaking privacy? What about overriding primitive subprograms of tagged types? Is it ok if overrider and overridden differ with respect to assertion policy? 
3) For a call with a defaulted expression which includes a qualified expression, which assertion policy determines what predicate checking is performed for the qualified expression? 4) I think the rules for dealing with assertion policies and generics are fine, but I'd still like to see an AARM note stating explicitly that an assertion_policy pragma (and, for that matter, a suppress pragma) which applies to a region which includes the declaration or body of a generic has no effect on an instance of that generic (assuming the instance is declared outside of the region affected by the pragma). I think the consequences of the rule If a pragma Assertion_Policy applies to a generic_instantiation, then the pragma Assertion_Policy applies to the entire instance. are a bit less obvious than some other rules and an AARM note is justified. **************************************************************** From: Tucker Taft Sent: Tuesday, February 28, 2012 8:19 PM ... > For a subtype that has a specified predicate aspect (either static or > dynamic), perhaps we want the assertion policy corresponding to that > aspect to participate in subtype conformance. In the above example, > the two uses of Foo would fail the subtype conformance legality check > even though they denote the same subtype. > > Or perhaps we punt and just say that this case is implementation > dependent. > > Similar issues occur for subprogram declarations (spec, stub, and > body), and deferred constants. I think the initial declaration should determine whether the check is enabled. The state of the assertion policy at the point of the completion should not be relevant. > 2) What about conformance for 'Access attributes and related checking? Yuck. First we should be sure we agree on the rules without worrying about Assertion_Policy. We already decided that "static subtype matching" requires that predicates come from the same declaration (in 4.9.1(2/3)). So that seems pretty clear. We had hoped that legality didn't depend on Assertion_Policy, but I guess I would be inclined to say that conflicting Assertion_Policies should cause errors when using 'Access. > We have an access-to-foo type with one assertion policy and an aliased > object of type foo declared with a different assertion policy in > effect. Do we want to allow the Access attribute of the object to > yield a value of the access type? > > Or two access types which would otherwise be convertible but they > differ with respect to applicable assertion policy. Do we want to > allow conversion between such access types? > > Maybe allowing these things is ok, but it makes it very hard to write > assertions that you can count on. > > The same problems occur for access-to-subprogram types. I would go with it being illegal if the predicate-check policies conflict and there are any subtypes with non-trivial predicates. Alternatively, for access-to-subprogram, we could try to generalize the assertion-policy rules for preconditions and postconditions (whatever those are) for calling through an access-to-subprogram. > Would we want this stricter conformance rule only if the subtype in > question is affected by the assertion policy? I would hope so. That is, if the predicate is True, then conflicting assertion policies are not a problem. It is only if the predicate is not True that the policies need to agree between the aliased object and the designated subtype of the access type. Or equivalently between the designated profile and the subprogram that is the prefix to the 'Access. > ... 
Should two uses > of Standard.Integer fail a conformance check because of a difference > in assertion policy? What about the case where we don't know (e.g., an > access type declaration where the designated type is > incomplete) or where getting this knowledge requires breaking privacy? I would need to see some examples there. I can't quite imagine how that would happen. > What about overriding primitive subprograms of tagged types? > Is it ok if overrider and overridden differ with respect to assertion > policy? Here I think in the same way that an overriding subprogram inherits the convention of the inherited subprogram, the overriding subprogram should probably inherit the predicate-check assertion policy state. It is not clear how else it could work. > 3) For a call with a defaulted expression which includes a qualified > expression, which assertion policy determines what predicate checking > is performed for the qualified expression? I think the place where the default expression appears. > 4) I think the rules for dealing with assertion policies and generics > are fine, but I'd still like to see an AARM note stating explicitly > that an assertion_policy pragma (and, for that matter, a suppress > pragma) which applies to a region which includes the declaration or > body of a generic has no effect on an instance of that generic > (assuming the instance is declared outside of the region affected by > the pragma). > > I think the consequences of the rule > > If a pragma Assertion_Policy applies to a generic_instantiation, then > the pragma Assertion_Policy applies to the entire instance. > > are a bit less obvious than some other rules and an AARM note is > justified. Agreed. **************************************************************** From: Steve Baird Sent: Wednesday, February 29, 2012 11:35 AM > I think the initial declaration should determine whether the check is > enabled. The state of the assertion policy at the point of the > completion should not be relevant. That seems like a good principle. >> 2) What about conformance for 'Access attributes and related checking? > > We had hoped that legality didn't depend on Assertion_Policy, but I > guess I would be inclined to say that conflicting Assertion_Policies > should cause errors when using 'Access. Right. It is as if the same identifier can be used to denote two different subtypes, one with predicate checking and one not. I think we *could* allow those two subtypes to conform (i.e., it wouldn't compromise type-safety), but I agree with you that we probably don't want to. >> Would we want this stricter conformance rule only if the subtype in >> question is affected by the assertion policy? > > I would hope so. Me too, but I think we have to deal with the "maybe" case discussed below. >> ... Should two uses >> of Standard.Integer fail a conformance check because of a difference >> in assertion policy? What about the case where we don't know (e.g., >> an access type declaration where the designated type is >> incomplete) or where getting this knowledge requires breaking >> privacy? > > I would need to see some examples there. I can't quite imagine how > that would happen. For example, an access type where the designated type is incomplete (perhaps a Taft type or a type from a limited view).

   pragma Assertion_Policy (Check);

   procedure P (X : access Some_Incomplete_Type);

   package Nested is
      pragma Assertion_Policy (Ignore);

      type Ref is access procedure (Xx : access Some_Incomplete_Type);

      Ptr : Ref := P'Access; -- legal?
Or, if we are concerned about privacy here (and perhaps we aren't; we could view assertion_policy stuff as being like repspec stuff, but it would be nicer if we didn't have to do that):

   package Pkg is
      type Has_A_Predicate is private;
      type Has_No_Predicate is private;
   private
      type Has_A_Predicate is new Integer with Static_Predicate => Has_A_Predicate /= 123;
      type Has_No_Predicate is new Integer;
   end Pkg;

   pragma Assertion_Policy (Check);
   use Pkg;

   Has    : aliased Has_A_Predicate;
   Has_No : aliased Has_No_Predicate;

   package Nested is
      pragma Assertion_Policy (Ignore);

      type Has_Ref    is access Has_A_Predicate;
      type Has_No_Ref is access Has_No_Predicate;

      Ptr1 : Has_Ref    := Has'Access;    -- we want this to be illegal ...
      Ptr2 : Has_No_Ref := Has_No'Access; -- but what about this?

>> What about overriding primitive subprograms of tagged types? >> Is it ok if overrider and overridden differ with respect to assertion policy? > Here I think in the same way that an overriding subprogram inherits > the convention of the inherited subprogram, the overriding subprogram > should probably inherit the predicate-check assertion policy state. > It is not clear how else it could work. And if interface types are involved and one subprogram overrides more than one inherited subp and the inherited guys don't all agree? I think there are also much more obscure cases not involving interface types where one primitive op can override two, but I would have to check to be sure. >> 3) For a call with a defaulted expression which includes a qualified >> expression, which assertion policy determines what predicate checking >> is performed for the qualified expression? > I think the place where the default expression appears. Sounds good. Ditto for all the other forms of default expressions, presumably. >> Variant records don't really have >> such a fall back, but we will have to be clear that even if Ignore >> applies where the discriminant subtype is specified, the subtype's >> predicate still determines whether or not a particular >> discriminant-dependent component exists. It is as though there were >> a "when others => null;" alternative in the variant record, where the >> "when others" covers values that don't satisfy the predicate. > > I think this is a bad idea. I think (as in the case of case > statements) Constraint_Error should be raised. Let's detect a problem > like this ASAP instead of sweeping it under the rug and then seeing > mystifying behavior later. Did my "let's treat all case-ish constructs uniformly" argument change your mind? **************************************************************** From: Steve Baird Sent: Wednesday, February 29, 2012 12:05 PM >> Here I think in the same way that an overriding subprogram inherits >> the convention of the inherited subprogram, the overriding subprogram >> should probably inherit the predicate-check assertion policy state. >> It is not clear how else it could work. >> > > And if interface types are involved .... Never mind. I see that 3.9.2(10/2) already has .... if the operation overrides multiple inherited operations, then they shall all have the same convention. So your proposal already covers this case. **************************************************************** From: Tucker Taft Sent: Wednesday, February 29, 2012 12:17 PM > ...
Or, if we are concerned about privacy here (and perhaps we aren't; > we could view assertion_policy stuff as being like repspec stuff, but > it would be nicer if we didn't have to do that): I think conflicting assertion policies is like conflicting rep-clauses. It has to see through privacy. When a programmer starts applying micro-control with the assertion policy, they are now talking about the concrete representations of things, not just their abstract interfaces. ... >> I think this is a bad idea. I think (as in the case of case >> statements) Constraint_Error should be raised. Let's detect a problem >> like this ASAP instead of sweeping it under the rug and then seeing >> mystifying behavior later. > > Did my "let's treat all case-ish constructs uniformly" > argument change your mind? Not sure, though probably some realistic examples might help decide. Since discriminants cannot be changed after an object is created, except by whole-record assignment, the question is whether we check to be sure the discriminant is covered by the variant alternatives on object creation or on component selection, I suppose. Doing it on object creation is perhaps simpler, so I guess making an unconditional check (even if predicate checks are disabled) to be sure that some variant alternative covers the discriminant value seems reasonable. If there is a "when others" then there won't be any check anyway, so it would only be if the variant part did *not* have a "when others" and relied on full coverage of exactly those elements that satisfied the static predicate. So I guess I can go with your suggestion. **************************************************************** From: Tucker Taft Sent: Friday, March 2, 2012 1:44 PM Steve, Would you be willing to do the next "rev" on this AI? **************************************************************** From: Steve Baird Sent: Friday, March 2, 2012 2:20 PM Sure, but only after I know what ideas we want to express. In particular, I'm wondering about the interactions with "statically matching" and incomplete types that I mentioned earlier. I'd like to avoid cases where we conservatively disallow some construct (e.g., an access type conversion) because of assertion policy differences when it turns out that there are no predicate specifications anywhere in the neighborhood. Maybe we want to be more permissive and assume when dealing with an incomplete view of a type that the full type has no applicable predicate specifications? That would make predicates for designated subtypes less trustworthy, but I think users would be unlikely to accidentally violate a predicate via this mechanism. Whether it would be used deliberately (the old "designated predicate bypass trick") is a separate question. It would be more of a problem that tools, developers, maintainers, etc. would have to deal with the possibility that someone *might* use this trick, even nobody ever does. **************************************************************** From: Tucker Taft Sent: Friday, March 2, 2012 2:31 PM We could disallow predicates on the first subtype of a type that is a deferred incomplete type. Putting predicates on a first subtype seems pretty weird anyway. The purpose of a predicate is normally to define an interesting subset, not to specify something that is true for all values of the type (that would be more appropriate for a type invariant). 
**************************************************************** From: Bob Duff Sent: Friday, March 2, 2012 2:43 PM > I'd like to avoid cases where we conservatively disallow some > construct (e.g., an access type conversion) because of assertion > policy differences when it turns out that there are no predicate > specifications anywhere in the neighborhood. Well, I don't think we want any incompatibilities. I haven't followed much of this discussion, but if you're suggesting that pragma Assertion_Policy would affect the legality of a program, I look askance. **************************************************************** From: Robert Dewar Sent: Friday, March 2, 2012 2:43 PM > We could disallow predicates on the first subtype of a type that is a > deferred incomplete type. Putting predicates on a first subtype seems > pretty weird anyway. The purpose of a predicate is normally to define > an interesting subset, not to specify something that is true for all > values of the type (that would be more appropriate for a type > invariant). I disagree, it's not weird to have a range on a first subtype:

   type R is new Integer range 1 .. 10;

so why should it be weird to have a predicate on a first subtype?

   type Non_Zero is new Integer with Predicate => Non_Zero /= 0;

To me, predicates and constraints are really pretty much the same beast; it is just that predicates have more power. In fact it seems a bit odd to me to have constraints at all, given a more powerful mechanism that encompasses ranges, but of course I understand the historical reasoning. On the other hand, to me deferred incomplete types are a bit weird anyway, so I really don't mind restrictions on them. **************************************************************** From: Tucker Taft Sent: Friday, March 2, 2012 2:52 PM >> We could disallow predicates on the first subtype of a type that is a >> deferred incomplete type. Putting predicates on a first subtype seems >> pretty weird anyway. The purpose of a predicate is normally to define >> an interesting subset, not to specify something that is true for all >> values of the type (that would be more appropriate for a type >> invariant). > > I disagree, it's not weird to have a range on a first subtype You are right, if the first subtype is derived from some preexisting type. But I believe it would be weird to put a predicate on the first subtype of a non-private, non-numeric, non-derived type. And if the type is private, a type invariant is more appropriate. If the type is numeric or derived, then I can see the use of the predicate on the first subtype, since you can convert to an un-predicated version using 'Base if numeric, or by explicit conversion to the parent type if derived. In any case, it sounds like we agree that imposing restrictions on the use of predicates with deferred incomplete types would be OK. **************************************************************** From: Bob Duff Sent: Friday, March 2, 2012 2:54 PM > We could disallow predicates on the first subtype of a type that is a > deferred incomplete type. Putting predicates on a first subtype seems > pretty weird anyway. I don't agree. I think it makes perfect sense to say:

   type Nonzero is new Integer with Static_Predicate => Nonzero /= 0;

>...The purpose > of a predicate is normally to define an interesting subset, not to >specify something that is true for all values of the type...

Agreed, specifying a predicate on a scalar first subtype does not say
anything about all values of the type, because 'Base takes away the
predicate, as well as the constraint:

   X : Nonzero'Base := 0;  -- no exception

   type Color is (Red, Yellow, Green, None)
      with Static_Predicate => Color /= None;

   subtype Optional_Color is Color'Base;  -- allows None

For composites, there's no 'Base, so it applies to all objects of the
type (modulo some loopholes):

   type My_String is array (Positive range <>) of Character
      with Dynamic_Predicate => My_String'First = 1;

Perhaps we should allow 'Base on composites in Ada 2020.

> ... (that would be more appropriate for a type invariant).

Well, type invariants only work for private-ish types.

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012 3:15 PM

> We could disallow predicates on the first subtype of a type that is a
> deferred incomplete type.

This might be ok for explicit incomplete type declarations.

How does this work with limited withs? Any type declaration that can be
seen via a limited with has an incomplete view. Fortunately, limited
withs don't make subtype declarations visible, so we are only concerned
with first subtypes.

> Putting predicates on
> a first subtype seems pretty weird anyway. The purpose of a predicate
> is normally to define an interesting subset, not to specify something
> that is true for all values of the type (that would be more
> appropriate for a type invariant).

I disagree in the case of a derived type (upon reading Robert's message
halfway through composing this one, I see that he made this point too
and you now agree).

Even in the case of a non-derived type, I could imagine wanting
something like

   type Foo (Max_Length : Natural) is record
      Data : Buffer (1 .. Max_Length);
      Current_Length : Natural := 0;
   end record
      with Dynamic_Predicate => Foo.Current_Length <= Foo.Max_Length;

and I wouldn't want to ban this in order to support mixing of assertion
policies.

Another ugly option would be to have a configuration pragma (one which
has to be consistent across the entire partition) for specifying the
assertion policy associated with the designated subtype of an access
type when the view of that subtype is incomplete at the point of the
access type definition. Just a thought.

I haven't thought about DSA interactions at all - maybe there aren't
any.

****************************************************************

From: Robert Dewar
Sent: Friday, March 2, 2012 3:30 PM

>    X : Nonzero'Base := 0;  -- no exception
>
>    type Color is (Red, Yellow, Green, None)
>       with Static_Predicate => Color /= None;
>
>    subtype Optional_Color is Color'Base;  -- allows None
>
> For composites, there's no 'Base, so it applies to all objects of the
> type (modulo some loopholes):

Sure, but 'Base for integer types is an odd beast anyway; I virtually
never saw it in user code. And this is really no different from

   type R is range 1 .. 10;

where the range constraint disappears for R'Base.

****************************************************************

From: Tucker Taft
Sent: Friday, March 2, 2012 4:52 PM

>> We could disallow predicates on the first subtype of a type that is a
>> deferred incomplete type.
>
> This might be ok for explicit incomplete type declarations.
>
> How does this work with limited withs?

I don't see a problem with limited withs. Can you construct an example
which has such a problem?

> Any type declaration that can be seen via a limited with has an
> incomplete view.
> Fortunately, limited withs don't make subtype
> declarations visible, so we are only concerned with first subtypes.
> ...

I don't think we need to restrict predicates on types declared in the
visible part of a package just because they might be referenced in a
limited with.

I couldn't, but perhaps you can construct a problem that involves such
a type...

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012 5:09 PM

> I don't see a problem with limited withs.
> Can you construct an example which has such a problem?

I haven't tried to compile this, but this should be close:

   package Pkg1 is
      type Even is new Integer with Dynamic_Predicate => Even mod 2 = 0;
   end Pkg1;

   limited with Pkg1;
   package Checker is
      pragma Assertion_Policy (Check);
      type Safe_Ref is access all Pkg1.Even;
      Safe_Ptr : Safe_Ref;
   end Checker;

   limited with Pkg1;
   package Ignorer is
      pragma Assertion_Policy (Ignore);
      type Unsafe_Ref is access all Pkg1.Even;
      Unsafe_Ptr : Unsafe_Ref := new Pkg1.Even'(3);
      -- violates predicate, but that's ok
   end Ignorer;

   with Pkg1, Checker, Ignorer; use Checker, Ignorer;
   procedure Converter is
      Safe_Ptr_Has_Been_Corrupted : exception;
   begin
      Safe_Ptr := Safe_Ref (Unsafe_Ptr);
      if Safe_Ptr.all not in Pkg1.Even then
         raise Safe_Ptr_Has_Been_Corrupted;
      end if;
   end Converter;

****************************************************************

From: Tucker Taft
Sent: Friday, March 2, 2012 5:19 PM

>> I don't see a problem with limited withs.
>> Can you construct an example which has such a problem?
>
> I haven't tried to compile this, but this should be close.

I don't see a problem here, since the compiler can certainly see that
there is a predicate on the type "Pkg1.Even", even if it is using a
limited "with" and the type is officially incomplete.

But I suppose it would be a bigger problem if Even were defined as
derived from some other type declared in a different package, and "Even"
didn't have a predicate itself but it inherited one. We could keep
following the chain of "with" clauses, I suppose, just to learn whether
the subtypes have predicates, but once you start doing name resolution
you are starting to violate the spirit of "limited" with.

At this point it looks simpler to disallow the conversion if different
assertion policies apply to the two designated subtypes, even if we
don't know for sure whether there is a predicate.

****************************************************************

From: Steve Baird
Sent: Friday, March 2, 2012 5:24 PM

> But I suppose it would be a bigger problem if Even were defined as
> derived from some other type declared in a different package, and
> "Even" didn't have a predicate itself but it inherited one. We could
> keep following the chain of "with" clauses I suppose just to learn
> whether the subtypes have predicates, but once you start doing name
> resolution you are starting to violate the spirit of "limited" with.

There would be big-time implementation problems if we ever introduced a
language rule that required resolution of a name which occurs in a
limited view of a package.

The implementation model is that an implementation should be able to
compile a limited wither after only parsing the limited withee.

****************************************************************
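
To make the inherited-predicate concern above concrete, here is a
hypothetical variation on Steve's example in which Even has no predicate
specification of its own but inherits one from a parent type declared in
yet another package (Pkg0 and Nonzero are invented names; the rest
follows the example above):

   package Pkg0 is
      type Nonzero is new Integer
         with Dynamic_Predicate => Nonzero /= 0;
   end Pkg0;

   with Pkg0;
   package Pkg1 is
      type Even is new Pkg0.Nonzero;
      -- No predicate specification here, but Even inherits Nonzero's
      -- predicate from its parent type.
   end Pkg1;

   limited with Pkg1;
   package Ignorer is
      pragma Assertion_Policy (Ignore);
      type Unsafe_Ref is access all Pkg1.Even;
      -- A compiler that has only parsed the limited view of Pkg1 cannot
      -- tell that Pkg1.Even has an inherited predicate without resolving
      -- names from Pkg0, which the limited-with implementation model is
      -- meant to avoid.
   end Ignorer;

Disallowing the conversion whenever different assertion policies apply
to the two designated subtypes, as suggested above, sidesteps the
question of whether a predicate actually exists and so requires no name
resolution across the limited with.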