!standard 1.1.3(17/3)                                   16-11-10  AI12-0179-1/03
!standard 11.4.2(23.1/3)
!class binding interpretation 16-01-04
!status Amendment 1-2012 16-11-10
!status ARG Approved 10-0-0  16-10-08
!status work item 16-01-04
!status received 15-12-18
!priority Low
!difficulty Easy
!qualifier Omission
!subject Failure of postconditions of language-defined units

!summary

Postconditions and invariants of language-defined units are intended to describe the results of a call on the unit, and as such do not fail in a correct implementation.

!question

AI12-0144-1 proposes using a postcondition rather than wording to describe the result of a function. But postconditions are dynamic in Ada, so such a postcondition might fail and raise Assertion_Error rather than return a correct result. This can't be what we want; should we add a rule to prevent that? (Yes.)

!recommendation

(See Summary.)

!wording

Add after 1.1.3(17/3):

   The implementation of a language-defined unit shall abide by all postconditions and type invariants specified for the unit in this International Standard (see 11.4.2).

Add after 11.4.2(23.1/3):

   Implementation Requirements

   Any postcondition expression or type invariant expression occurring in the specification of a language-defined unit is enabled (see 6.1.1 and 7.3.2).

      AARM Ramification: The Assertion_Policy does not have an effect on such postconditions and invariants. This has no execution impact, since such assertions shouldn't fail anyway (see the next rule).

   The evaluation of any such postcondition or type invariant expression shall either yield True or propagate an exception from a raise_expression that appears within the assertion expression.

      AARM Ramification: In other words, evaluating such an assertion expression will not return a result of False, nor will it propagate an exception other than by evaluating a raise_expression which is syntactically all or part of the assertion expression.

      AARM To Be Honest: Evaluation of any expression might raise Storage_Error.

      AARM Reason: This allows the standard to express semantic requirements as postconditions or invariants (which are invariably clearer than English prose would be) while keeping it clear that failing the assertion check (or any other run-time check) is not conforming behavior.

!discussion

We allow a postcondition or invariant to explicitly raise some exception, but no other exceptions. If we don't want to do that, we could simplify the wording to just:

   The evaluation of any such postcondition or type invariant shall yield True.

(and delete the Ramification, adjusting the Reason to say "an exception" rather than Assertion_Error).

We include type invariants because, in normal usage, they act as implicit postconditions. The client of a package should not be able to cause an invariant to fail.

Preconditions and subtype predicate checks, however, typically fail because of something that the client did. (The file isn't open, the cursor is null, and so on.) Neither the language Standard nor the implementer has any control over what the client will write, so we do not want any such rule for those.

Some subtype predicate checks are made upon exit from a subprogram, and could be considered a sort of postcondition. But trying to include those in the rule above would complicate it considerably, and such predicates would be confusing to a reader of the Standard (it would not be immediately apparent whether they apply to the client or to the implementation), so we do not include them here and suggest that the Standard avoid using them.

!ASIS

No ASIS effect.
!ACATS test

No separate test is needed; any ACATS test that tests the result of a language-defined subprogram that has a postcondition will implicitly check the rule.

!appendix

[Editor's note: This was split from a thread that starts in AI12-0144-1.]

From: John Barnes
Sent: Friday, December 18, 2015  7:26 AM

I am fretting over that postcondition. [Specifically, the proposed A.5.2(20) in AI12-0144-1 - Editor.]

The text says that the value returned will be in the range. Fact. There is absolutely no hint that it might try to be outside the range and so raise some sort of exception.

But if we express this idea with a postcondition, then by the dynamic nature of postconditions in Ada (as opposed to good old Spark) there is an implication that it might fail and so raise Assertion_Error or whatever.

So for me the postcondition seems weaker. I don't like it.

****************************************************************

From: Tucker Taft
Sent: Friday, December 18, 2015  8:41 AM

> So for me the postcondition seems weaker. I don't like it.

Interesting point. It seems a shame to abandon the use of Pre/Post aspects due to this issue. I wonder if there is some way to strengthen the postcondition, e.g.:

   with Post => Random'Result in First .. Last or else Halt_And_Catch_Fire;

****************************************************************

From: Randy Brukardt
Sent: Friday, December 18, 2015  1:23 PM

> Interesting point. It seems a shame to abandon the use of Pre/Post aspects due to this issue. I wonder if there is some way to strengthen the postcondition, e.g.:
>
>    with Post => Random'Result in First .. Last or else Halt_And_Catch_Fire;

On every postcondition? Bah humbug. (Getting into the spirit of the season!) After all, we're going to add Postconditions to various container routines and conceivably new routines as well. I surely hope there's more than one in the standard.

How about we just say so in the beginning of the RM (probably in 1.1.3):

   The evaluation of a postcondition or type invariant defined in a language-defined unit shall not raise Assertion_Error.

   AARM Ramification: The evaluation could raise some exception explicitly, via a raise_expression, but it cannot evaluate to False.

   AARM Reason: This allows the standard to write semantic requirements as postconditions or invariants (which are invariably clearer than English prose would be) without opening the door to Assertion_Error being raised rather than correct behavior.

****************************************************************

From: Randy Brukardt
Sent: Friday, December 18, 2015  1:28 PM

> But if we express this idea with a postcondition, then by the dynamic nature of postconditions in Ada (as opposed to good old Spark) there is an implication that it might fail and so raise Assertion_Error or whatever.

"Whatever" would be hard to justify. ;-) The language says Assertion_Error is raised if the expression evaluates to False, no "whatever".

> So for me the postcondition seems weaker. I don't like it.

My understanding of Jeff's message is that he didn't want the text removed (essentially, he wanted both). That's what I was going to do, in the absence of other guidance. So your concern would not matter in this case. (Indeed, I'd argue that an exception-raising Random wouldn't meet the requirements of A.5.2(41-2), and surely not the spirit, so even without the wording we'd be OK here.)

But you do raise a general problem; I addressed that in my reply to Tucker.
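[Editor's note: A rough sketch of the effect of the rule eventually adopted in the !wording; the profile below is invented for illustration and is not actual RM text. Given a language-defined declaration such as

   function Random (Gen : Generator; First, Last : Result_Subtype)
      return Result_Subtype
      with Post => Random'Result in First .. Last;  -- must yield True

a conforming implementation has to return a value for which the postcondition evaluates to True; raising Assertion_Error (or failing some other run-time check while evaluating the expression) is not conforming behavior. The only exception a language-defined postcondition may propagate is one raised by an explicit raise_expression written as part of the assertion expression itself, as in

   with Post => Random'Result in First .. Last or else raise Program_Error;

A precondition such as

   Pre => Is_Open(File) or else raise Status_Error

remains a requirement on the caller and is deliberately not covered by the rule.]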
****************************************************************

From: Randy Brukardt
Sent: Friday, December 18, 2015  1:38 PM

...

> How about we just say so in the beginning of the RM (probably in 1.1.3):
>
>    The evaluation of a postcondition or type invariant defined in a language-defined unit shall not raise Assertion_Error.

I'm not sure if "shall not" or "does not" is the correct usage here, Tucker always seems to correct us on these sorts of rules. :-)

> AARM Ramification: The evaluation could raise some exception explicitly, via a raise_expression, but it cannot evaluate to False.

Maybe add "in a correct implementation." to the end, to make it clear that this is an implementation requirement.

> AARM Reason: This allows the standard to write semantic requirements as postconditions or invariants (which are invariably clearer than English prose would be) without opening the door to Assertion_Error being raised rather than correct behavior.

---

I also neglected to mention that we don't need a similar rule for preconditions or predicates, because those are requirements on the caller, not the implementation, and surely the RM and the implementer can't control what sort of calls are written.

I included invariants in the text above even though I doubt that we'll use them for anything, for completeness and as an aid to anyone adopting similar coding standards for their organization.

Finally, this is clearly a separate AI from AI12-0144-1, since it cross-cuts all of the language-defined packages. As such, AI12-0144-1 itself won't address this issue (and as I replied to John, it doesn't have to).

****************************************************************

From: Tucker Taft
Sent: Friday, December 18, 2015  1:53 PM

> ...
>> How about we just say so in the beginning of the RM (probably in 1.1.3):
>>
>>    The evaluation of a postcondition or type invariant defined in a language-defined unit shall not raise Assertion_Error.
>
> I'm not sure if "shall not" or "does not" is the correct usage here, Tucker always seems to correct us on these sorts of rules. :-)

It could be "shall not" if it is in an implementation requirements section. It would be "does not" if it was in a dynamic semantics section.

>> AARM Ramification: The evaluation could raise some exception explicitly, via a raise_expression, but it cannot evaluate to False.
>
> Maybe add "in a correct implementation." to the end, to make it clear that this is an implementation requirement.
>
>> AARM Reason: This allows the standard to write semantic requirements as postconditions or invariants (which are invariably clearer than English prose would be) without opening the door to Assertion_Error being raised rather than correct behavior.

I think your idea might be a nice way to deal with it. We added raise expressions in part to allow preconditions to indicate exactly what exceptions are raised. But we have used postconditions to explain the effect, and have presumed they will never fail.

****************************************************************

From: Randy Brukardt
Sent: Friday, December 18, 2015  2:06 PM

> > I'm not sure if "shall not" or "does not" is the correct usage here, Tucker always seems to correct us on these sorts of rules. :-)
>
> It could be "shall not" if it is in an implementation requirements section. It would be "does not" if it was in a dynamic semantics section.

OK, to make this concrete, I suggest that we add this after 1.1.3(17/3), which is Implementation Requirements.
So then it certainly is "shall".

...

> I think your idea might be a nice way to deal with it. We added raise expressions in part to allow preconditions to indicate exactly what exceptions are raised. But we have used postconditions to explain the effect, and have presumed they will never fail.

Right. That seems to be the only sensible approach, but if it bothers someone like John, it's best to make it explicit. We rarely want to depend on the Dewar rule to read the Standard.

****************************************************************

From: John Goodenough
Sent: Friday, December 18, 2015  5:07 PM

I can't help but observe that saying "in a correct implementation" is superfluous. Surely everything in the RM specifies what is expected of a correct implementation, but maybe I am missing some nuance.

Similarly, isn't "without [allowing X] rather than the correct behavior" unnecessary? Surely it is sufficient to say "without [allowing X]".

This must be a minor issue if I can make a contribution at this point :-) However, I would expect John Barnes to be making the same point, but surely he is asleep when I am sending this so I get to say it first.

****************************************************************

From: Randy Brukardt
Sent: Friday, December 18, 2015  5:42 PM

You are commenting on proposed AARM notes that go in a section devoted specifically to defining what a correct implementation is (1.1.3 - technically "a conforming implementation"). Elsewhere in the Standard I'd agree with you, but the entire point of this clause (and the new rule as well) is to define what is and is not a correct implementation. Ergo, these are essentially "meta rules" and ideas that we'd use in the rest of the Standard don't necessarily apply. In particular, we cannot assume anything about what is and is not a correct implementation in 1.1.3, lest we end up with circular rules.

As always, these were my off-the-cuff wording suggestions, and surely one can reword them better -- but we ought not lose the point, which is that we want to say that raising Assertion_Error from a language-defined postcondition is NOT correct behavior.

Probably the second note would be better written as:

   AARM Reason: This allows the standard to write semantic requirements as postconditions or invariants (which are invariably clearer than English prose would be) without opening the door to the raising of Assertion_Error being considered correct behavior.

(or, maybe "conforming behavior", to match the rest of the clause).

****************************************************************

From: John Barnes
Sent: Saturday, December 19, 2015  10:17 AM

John is now awake and is happy with the outcome (he thinks, but it is Saturday and not a key day for thinking).

****************************************************************

From: Jeff Cousins
Sent: Saturday, December 19, 2015  12:54 PM

Sorry I’ve been off - at a concert, then moving my mother-in-law's stuff out of her care home – her dementia is now so bad that she can’t feed herself and she needs to go into a nursing home.

Anyway, it’s nice to see that John has raised a valid concern and it seems to have come to a sensible conclusion.

****************************************************************

From: Bob Duff
Sent: Wednesday, December 30, 2015  12:44 PM

> On every postcondition? Bah humbug. (Getting into the spirit of the season!)

I agree with "Bah humbug". ;-)

If a procedure's postcondition is not True, that's a bug in that procedure. Always.
It doesn't matter what the assertion policy is -- if it's Ignore, it's still a bug if the postcondition would have failed.

So I agree with Randy's general approach below. All we need is a blanket statement that says that postconditions of predefined procedures do not fail.

A few nits, though...

> After all, we're going to add Postconditions to various container routines and conceivably new routines as well. I surely hope there's more than one in the standard.

I'm not so sure. I agree it's desirable, and we should have done it that way in the first place, but at this point it's probably not worth the effort. It sounds like an awful lot of work. If it's done in a compatible way, then it changes nothing in any implementation or user program -- a complete waste of time. OTOH, if it's incompatible, then that's Just Wrong.

I certainly have nothing against using assertions in the RM for new features.

> How about we just say so in the beginning of the RM (probably in 1.1.3):
>
>    The evaluation of a postcondition or type invariant defined in a language-defined unit shall not raise Assertion_Error.

I don't see why we would ever have "...or else raise Some_Error" in a postcondition. But if we did, that should still be considered a failure.

And if we ever had a postcondition that fails run-time checks, that should also be considered a failure. E.g.:

   Post => X.all.Count > Y.all.Count

can fail if X or Y is null, or if the ">" returns False. All 3 of those should be considered failures. Most of the designers of SPARK 2014 think the above is poor style: they want you to write:

   Post => X /= null and then Y /= null and then X.all.Count > Y.all.Count

But either we write all postconditions so they don't fail run-time checks, or we require the implementation to make sure they don't fail run-time checks. Therefore, I think we want to say predefined postconditions do not raise exceptions.

I'd say "does not raise" instead of "shall not raise".

I think predicates that are checked on the way out should be treated like postconditions. Not sure about invariants, but I think they should be treated like postconditions, both on the way in and out.

> AARM Ramification: The evaluation could raise some exception explicitly, via a raise_expression, but it cannot evaluate to False.

I disagree.

> AARM Reason: This allows the standard to write semantic requirements as postconditions or invariants (which are invariably clearer than English prose would be) without opening the door to Assertion_Error being raised rather than correct behavior.

I agree. Or predicates.

----------------

Now what about preconditions?

A precondition failure is not always a bug, IMHO. We can legitimately write:

   Pre => Is_Open(File) or else raise Status_Error

in a predefined package, and the user can legitimately handle the Status_Error.

AI05-0290-1.TXT says:

   In order to achieve the intended effect that library writers stay in control over the enforcement of pre- and postconditions as well as type invariants, the policy control needs to extend over all uses of the respective types and subprograms. The place of usage is irrelevant. This is enforced by the rules in the !wording.

I never agreed with that. I won't try to reargue the point, but I note that there's a giant loophole: It uses the opposite rule for generic instantiations (the policy used is the one at the instantiation). So library writers are NOT "in control", unless they avoid the use of generics.

You can't set the policy for the predefined library. Except for the above loophole.
So what are the semantics of:

   Pre => Is_Open(File) or else raise Status_Error

in some predefined package (or generic package), and what is the effect of policy, in the generic case? It seems to me we can't let users set precondition policy to Ignore in any implementation code, because we don't know what's in there -- it could be code that depends on those preconditions. Or we could say Ignore really means Suppress in the predefined case (i.e. precondition failure is erroneous) -- but that seems rather confusing.

OTOH, it seems odd that I can disable checks on (say) array indexing, but not on (say) Ada.Numerics.Some_Op.

Predicates checked on the way in should be treated like preconditions.

Note that GNAT allows "pragma Suppress(Container_Checks);" and "pragma Suppress(Tampering_Check);" to suppress checks in instances of Ada.Containers.*.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, December 30, 2015  2:09 PM

...

> > After all, we're going to add Postconditions to various container routines and conceivably new routines as well. I surely hope there's more than one in the standard.
>
> I'm not so sure. I agree it's desirable, and we should have done it that way in the first place, but at this point it's probably not worth the effort. It sounds like an awful lot of work. If it's done in a compatible way, then it changes nothing in any implementation or user program -- a complete waste of time. OTOH, if it's incompatible, then that's Just Wrong.

Well, we're definitely rewriting all of the containers to use preconditions rather than English text for initial checks. Yes, it's a lot of work -- I have it as a separate line item in my budget because of that. As part of that, we'll add some postconditions where those make sense. (Presumably, we'll also add Global and Nonblocking as needed, assuming those are in the next amendment/revision.) So yes it's a lot of work, but it's work we're already planning to do.

(Sadly, we can't use any predicates, as that would be an incompatibility, since the subtypes would change in subprogram profiles. Bah Humbug!)

Aside: There's a general problem with preconditions/postconditions as to how to declare properties that are unchanged by (most) routines. For instance, for the Vector container, most routines do not change the length. One could imagine adding Container.Length = Container.Length'Old to every postcondition, but that's nasty to read (it adds a lot of clutter). But one needs that information in order to reason about the property without peeking at the body (the compiler or prover has to know that the length is unchanged by calling Element [for instance] in order to propagate its knowledge to a later precondition). Someone had the idea of having a global property that gets added automatically to every postcondition unless the same property is explicitly tested in the postcondition. That would solve the problem, but how to define "same property being explicitly tested" is unclear. Should I try to put together a proposal on these lines??

...

> > How about we just say so in the beginning of the RM (probably in 1.1.3):
> >
> >    The evaluation of a postcondition or type invariant defined in a language-defined unit shall not raise Assertion_Error.
>
> I don't see why we would ever have "...or else raise Some_Error" in a postcondition.

Me either, but if we did write it, we'd have had some reason.

> But if we did, that should still be considered a failure.

I'm not sure.
The best alternative to exception contracts is to ensure that no routine ever (intentionally) raises a hidden exception. To accomplish that, one has to put all of the exception raises into the precondition (for prechecks) or postcondition (for other causes).

> And if we ever had a postcondition that fails run-time checks, that should also be considered a failure. E.g.:
>
>    Post => X.all.Count > Y.all.Count
>
> can fail if X or Y is null, or if the ">" returns False. All 3 of those should be considered failures. Most of the designers of SPARK 2014 think the above is poor style: they want you to write:
>
>    Post => X /= null and then Y /= null and then X.all.Count > Y.all.Count
>
> But either we write all postconditions so they don't fail run-time checks, or we require the implementation to make sure they don't fail run-time checks. Therefore, I think we want to say predefined postconditions do not raise exceptions.

As I said, I'm not sure. It's definitely limiting. I can agree that runtime checks shouldn't fail, but not necessarily about explicit raises.

> I'd say "does not raise" instead of "shall not raise".
>
> I think predicates that are checked on the way out should be treated like postconditions. Not sure about invariants, but I think they should be treated like postconditions, both on the way in and out.

Invariants are always treated like postconditions on subprograms. The other cases aren't worth talking about (we'd drive ourselves nuts if we worried about the unusual cases for invariants, there are many). And I doubt that there will be any invariants in Ada predefined packages (if there are, we can revisit if necessary).

Predicates are primarily checked on subtype conversion, so they work like a precondition. There are a few unusual cases where they get post-checked, but they're easily avoided. Besides, predicates are impossible on existing packages for compatibility reasons, so it's unlikely that we'll use many of them anyway. As with invariants, if that causes a problem, we can revisit.

> > AARM Ramification: The evaluation could raise some exception explicitly, via a raise_expression, but it cannot evaluate to False.
>
> I disagree.

I'm not sure. I certainly want to expose as many exception raises as possible in the specification, and that would require putting some in the postcondition.

> > AARM Reason: This allows the standard to write semantic requirements as postconditions or invariants (which are invariably clearer than English prose would be) without opening the door to Assertion_Error being raised rather than correct behavior.
>
> I agree. Or predicates.

As I said above, predicates are unlikely because of compatibility concerns. And it's even less likely that we'd use them for postcondition checks (I'd consider that a mistake, myself, just because it would be confusing).

> ----------------
>
> Now what about preconditions?
>
> A precondition failure is not always a bug, IMHO. We can legitimately write:
>
>    Pre => Is_Open(File) or else raise Status_Error
>
> in a predefined package, and the user can legitimately handle the Status_Error.

Correct.

> AI05-0290-1.TXT says:
>
> In order to achieve the intended effect that library writers stay in control over the enforcement of pre- and postconditions as well as type invariants, the policy control needs to extend over all uses of the respective types and subprograms. The place of usage is irrelevant. This is enforced by the rules in the !wording.
> I never agreed with that. I won't try to reargue the point, but I note that there's a giant loophole: It uses the opposite rule for generic instantiations (the policy used is the one at the instantiation). So library writers are NOT "in control", unless they avoid the use of generics.

There's no loophole. You've forgotten that the assertion policy can be set inside of a unit. The rule for generic instantiations only applies to assertion policies that apply to the entire generic unit. But the intent is that units that need to control assertions (those that *depend* on preconditions) would have a *local* assertion policy. That is:

   generic
      ...
   package Ada.Containers.Vectors is
      pragma Assertion_Policy (Check);
      ...
   end Ada.Containers.Vectors;

The policy given in the instance only applies up to the local Assertion_Policy pragma. This is the same model that is used for pragma Unsuppress (the local pragma overrides any global setting).

The point is that it's totally within the package designer's control as to whether to allow Ignoring of assertions (or individually, since you can set just the policy for preconditions).

> You can't set the policy for the predefined library.

Surely you can, with local assertion policy pragmas.

> Except for the above loophole. So what are the semantics of:
>
>    Pre => Is_Open(File) or else raise Status_Error
>
> in some predefined package (or generic package), and what is the effect of policy, in the generic case?

There is no effect of the policy, because we can add an Assertion_Policy pragma as above to the specification of any language-defined package that we want to ensure checking. (This is definitely how preconditions will be handled in Claw, should I ever add any to it.)

But we could consider something else:

> It seems to me we can't let users set precondition policy to Ignore in any implementation code, because we don't know what's in there -- it could be code that depends on those preconditions. Or we could say Ignore really means Suppress in the predefined case (i.e. precondition failure is erroneous) -- but that seems rather confusing.

But that's actually what I was going to suggest. Specifically, setting the policy to Ignore for calls into any predefined unit makes the program erroneous if the precondition or predicate would have failed. (I would state that explicitly.)

> OTOH, it seems odd that I can disable checks on (say) array indexing, but not on (say) Ada.Numerics.Some_Op.

Exactly. Of course, you can't disable Status_Error and Mode_Error checking in Ada today (or any of the containers, either, except for the GNAT hack), so why would anyone be surprised? But if we do allow it, it has to be erroneous to fail the check.

> Predicates checked on the way in should be treated like preconditions.

Yes, of course, and there shouldn't be any other predicates (too confusing) in the language-defined packages. (Do what you want in your own code, of course.) Probably irrelevant in any case, as there won't be many (if any) predicates in language-defined packages.

> Note that GNAT allows "pragma Suppress(Container_Checks);" and "pragma Suppress(Tampering_Check);" to suppress checks in instances of Ada.Containers.*.

We're not going to implement "Container_Checks", as that would be redundant with Ignoring preconditions. (It is silly to have two ways to do the same things. And why wouldn't we want similar capabilities for Elementary_Functions and Text_IO??? Why just containers??)
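[Editor's note: A sketch of the "unchanged property" clutter described in the Aside earlier in this message; the profile is loosely modelled on the containers and is not actual RM wording. Making the unchanged length explicit on a query would look something like

   function Element (Container : Vector; Index : Index_Type)
      return Element_Type
      with Post => Container.Length = Container.Length'Old;

and essentially the same conjunct would have to be repeated in the postcondition of nearly every operation that does not change the length, which is the clutter being objected to.]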
****************************************************************

From: Bob Duff
Sent: Wednesday, December 30, 2015  3:32 PM

> Well, we're definitely rewriting all of the containers to use preconditions rather than English text for initial checks. ...

Has ARG discussed and voted on that? Can you point me to the minutes? I'd like to see the reasoning, because on the face of it, it seems like a useless exercise.

>...(Presumably, we'll also add Global ...

OK, I can see how Global might change my opinion. Anyway, I'd still like to read the minutes. Even a hint as to the rough timeframe would be helpful.

>...(Sadly, we can't use any predicates, as that would be an incompatibility, since the subtypes would change in subprogram profiles.

I don't understand that. Yes, we'd have to add new subtypes like Open_File and Non_Empty_Vector. That's incompatible because of namespace pollution, but that seems minor. Did you mean something else? Example?

>... Bah Humbug!)

;-)

> Aside: There's a general problem with preconditions/postconditions as to how to declare properties that are unchanged by (most) routines. For instance, for the Vector container, most routines do not change the length. One could imagine adding Container.Length = Container.Length'Old to every postcondition, but that's nasty to read (it adds a lot of clutter). But one needs that information in order to reason about the property without peeking at the body (the compiler or prover has to know that the length is unchanged by calling Element [for instance] in order to propagate its knowledge to a later precondition). Someone had the idea of having a global property that gets added automatically to every postcondition unless the same property is explicitly tested in the postcondition. That would solve the problem, but how to define "same property being explicitly tested" is unclear. Should I try to put together a proposal on these lines??

Something along those lines seems like a good idea. I have spent some time thinking along those lines, but I don't know the right answer(s).

> > I don't see why we would ever have "...or else raise Some_Error" in a postcondition.
>
> Me either, but if we did write it, we'd have had some reason.

For preconditions, the reason is to raise a different exception on failure, so you can handle that particular failure in a fine-grained way. It's still considered a failure. But you (almost) never want to handle postcondition failure.

> > But if we did, that should still be considered a failure.
>
> I'm not sure.

OK, fair enough. To be discussed.

> There's no loophole. You've forgotten that the assertion policy can be set inside of a unit. ...

Good point. Indeed I missed that.

But I think there are various other loopholes. Should I give examples? As I said, I don't want to reargue the issue -- I just want to study its effect on assertions in the predefined library.

> > You can't set the policy for the predefined library.
>
> Surely you can, with local assertion policy pragmas.

How? That is, how does a client of (say) Text_IO set the policy for assertions in Text_IO?

> Exactly. Of course, you can't disable Status_Error and Mode_Error checking in Ada today (or any of the containers, either, except for the GNAT hack), so why would anyone be surprised?

They shouldn't be surprised, but they might be annoyed if they're writing efficiency-critical code.
> > Note that GNAT allows "pragma Suppress(Container_Checks);" and "pragma Suppress(Tampering_Check);" to suppress checks in instances of Ada.Containers.*.
>
> We're not going to implement "Container_Checks", as that would be redundant with Ignoring preconditions. (It is silly to have two ways to do the same things. And why wouldn't we want similar capabilities for Elementary_Functions and Text_IO??? Why just containers??)

I agree there's no reason to add Container_Checks or Tampering_Check to the RM. In GNAT, the reason they're in containers and not Text_IO is that containers generated a lot of complaints from customers about gross inefficiency.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, December 30, 2015  4:24 PM

> > Well, we're definitely rewriting all of the containers to use preconditions rather than English text for initial checks. ...
>
> Has ARG discussed and voted on that? Can you point me to the minutes? I'd like to see the reasoning, because on the face of it, it seems like a useless exercise.

Just preliminarily; I did a prototype in AI12-0112-1 (which we discussed and I believe voted intent, not sure when). It's actually quite easy to do and simplifies the wording a lot.

The simplified wording is the main benefit to Ada users, as it makes the purpose of each subprogram much clearer if the description doesn't have to start with several lines of "If blah then blech.". The other benefit is that it then allows compilers to optimize the checks just as can be done with other language-defined checks. Anything that reduces the need for users to turn off checks is a good thing, IMHO (and it makes the containers more like the built-in data structures). (It also opens the door to allowing users to turn off the checks, with some appropriate rules, as you noted later.)

> >...(Presumably, we'll also add Global ...
>
> OK, I can see how Global might change my opinion. Anyway, I'd still like to read the minutes. Even a hint as to the rough timeframe would be helpful.

Timeframe is "before the next Amendment". I've budgeted to do a version next fall (I wanted to wait to see if any further progress is made on Global and Nonblocking before then).

> >...(Sadly, we can't use any predicates, as that would be an incompatibility, since the subtypes would change in subprogram profiles.
>
> I don't understand that. Yes, we'd have to add new subtypes like Open_File and Non_Empty_Vector. That's incompatible because of namespace pollution, but that seems minor. Did you mean something else? Example?

Changing the subtypes of a profile would change the behavior when subtype conformance is required. For instance, if we changed Put_Line to

   procedure Put_Line (File : in Open_File_Type; Str : in String);

then existing code like:

   type Logger_Type is access procedure (File : in File_Type; Str : in String);

   My_Logger : Logger_Type := Ada.Text_IO.Put_Line'Access; -- Illegal with change above!

no longer works. (I've done the above to pass Text_IO logging mechanisms into Pure and Preelaborated packages.)

In the case of the containers, where things are tagged, we'd also have problems with overriding of primitive operations (which also requires subtype conformance). Neither of these things is very common, but given that the change isn't critical, taking any incompatibility is a non-starter.

...
> > Aside: There's a general problem with preconditions/postconditions as to how to declare properties that are unchanged by (most) routines. For instance, for the Vector container, most routines do not change the length. One could imagine adding Container.Length = Container.Length'Old to every postcondition, but that's nasty to read (it adds a lot of clutter). But one needs that information in order to reason about the property without peeking at the body (the compiler or prover has to know that the length is unchanged by calling Element [for instance] in order to propagate its knowledge to a later precondition). Someone had the idea of having a global property that gets added automatically to every postcondition unless the same property is explicitly tested in the postcondition. That would solve the problem, but how to define "same property being explicitly tested" is unclear. Should I try to put together a proposal on these lines??
>
> Something along those lines seems like a good idea. I have spent some time thinking along those lines, but I don't know the right answer(s).

Me either, but it sounds to me as if it would be a good idea to open an AI so the topic gets considered. (And I have an idea, but I don't know if it will work out.)

...

> > There's no loophole. You've forgotten that the assertion policy can be set inside of a unit. ...
>
> Good point. Indeed I missed that.
>
> But I think there are various other loopholes. Should I give examples? As I said, I don't want to reargue the issue -- I just want to study its effect on assertions in the predefined library.
>
> > > You can't set the policy for the predefined library.
> >
> > Surely you can, with local assertion policy pragmas.
>
> How? That is, how does a client of (say) Text_IO set the policy for assertions in Text_IO?

Clients? Who's talking about clients? The assertion (pun intended) that you were commenting on was "library writers stay in control over the enforcement of pre- and postconditions". There's nothing about clients there!

The reason that library authors need the possibility of control is so that they can use preconditions to define things like Status_Error without having to deal with the possibility of support calls from people who turned the checks off and now are having weird results. For instance, we could never use any preconditions in Claw if that meant that some potentially failing checks would get suppressed. It's not unusual that a missed check causes a later deadlock in Claw, and those are hell to debug -- to have any because of a client action (rather than our own mistake) would be unacceptable.

The intent was that clients can only turn off assertion checks if the library author allows them to do so. (This matches current practice - that is, without using preconditions.) (But, as you say, we aren't going to rediscuss this. :-)

The question of whether or not to allow checks to be suppressed in the language-defined library is a separate question. We haven't allowed that in earlier versions of Ada, but there could be an argument for it going forward (so long as we don't have to do significant work as either implementers or language-lawyers to get it).

> > Exactly. Of course, you can't disable Status_Error and Mode_Error checking in Ada today (or any of the containers, either, except for the GNAT hack), so why would anyone be surprised?
> They shouldn't be surprised, but they might be annoyed if they're writing efficiency-critical code.
>
> > > Note that GNAT allows "pragma Suppress(Container_Checks);" and "pragma Suppress(Tampering_Check);" to suppress checks in instances of Ada.Containers.*.
> >
> > We're not going to implement "Container_Checks", as that would be redundant with Ignoring preconditions. (It is silly to have two ways to do the same things. And why wouldn't we want similar capabilities for Elementary_Functions and Text_IO??? Why just containers??)
>
> I agree there's no reason to add Container_Checks or Tampering_Check to the RM. In GNAT, the reason they're in containers and not Text_IO is that containers generated a lot of complaints from customers about gross inefficiency.

We are going to consider adding Tampering_Check (as that is more fine-grained than just turning off all preconditions, and also it changes the way the instance works), but not Container_Checks as that is (most likely) redundant with pragma Assertion_Policy (Ignore, Preconditions); before an instance of Ada.Containers.Something.

At least that's the preliminary plan. Obviously subject to change.

****************************************************************

From: Bob Duff
Sent: Wednesday, December 30, 2015  5:57 PM

> Timeframe is "before the next Amendment".

No, no, I meant the timeframe in which it was discussed, so I can find the relevant minutes without looking at all the minutes of the past 10 years. You usually take pretty good notes of the meetings, and put them in the minutes. I just want to know which meeting minutes to look at.

****************************************************************

From: Randy Brukardt
Sent: Monday, January 4, 2016  6:46 PM

I don't know off-hand. The one and only version of the AI is dated May 15, 2014, so the discussion must have happened after that, and I don't think it happened last year, so there aren't many meetings to look at.

(Aside: it would be useful to have some sort of index as to at which meetings a particular AI is discussed. We have such an index in each meeting minutes, but as those are created by hand there isn't an obvious way to generate the inverse. Maybe there'd be some way to extract that from the HTML minutes and create an index -- something to put on my to-do list.)

The AI gives one of the reasons for doing the work as providing a way to suppress checks in the container packages without requiring some sort of magic. (It's always uncomfortable for the language to provide a capability that can't be used in other reusable libraries.)

P.S. Sorry about the delay in answering; I spent the last couple of working days backing up and replacing a sick hard disk on my computer.

****************************************************************

From: Randy Brukardt
Sent: Monday, January 4, 2016  11:16 PM

...

> > There's no loophole. You've forgotten that the assertion policy can be set inside of a unit. ...
>
> Good point. Indeed I missed that.
>
> But I think there are various other loopholes. Should I give examples? As I said, I don't want to reargue the issue -- I just want to study its effect on assertions in the predefined library.

I'd definitely be interested if you can find such examples. We spent quite a bit of effort on ensuring that the assertion policy at the point of a declaration (rather than the use) determines whether it is checked or ignored.
(That's necessary both so that a package author, rather than a client, can have the last say on this, and also so that assertions can be included [or not] in the body of a subprogram -- we surely don't want to have to generate two bodies to implement postconditions!).

(Wow! A fourth different topic from this thread. That's got to be getting close to a record... :-)

****************************************************************

From: Jeff Cousins
Sent: Tuesday, January 5, 2016  11:16 PM

> I don't know off-hand. The one and only version of the AI is dated May 15, 2014 ...

I had replied on this (accidentally only to Bob it seems) that Paris voted Keep Alive on AI-112, if that helps.

****************************************************************

From: Bob Duff
Sent: Tuesday, January 5, 2016  4:33 PM

> > But I think there are various other loopholes. Should I give examples? As I said, I don't want to reargue the issue -- I just want to study its effect on assertions in the predefined library.
>
> I'd definitely be interested if you can find such examples.

I fear if I give examples of loopholes, you'll want to fix them. ;-)

> ...a package author, rather than a client, can have the last say on this, ...

I never thought it was a worthwhile goal that the package author is "in control", as in preventing junk data from being passed in to that package. As I said, I won't try to change that now (I lost that battle, which is fine). But I don't particularly want to spend MORE time trying to achieve that goal.

OK, OK, here's one I discussed privately with Steve:

The rules about predicates seem to say you can cause the library's predicate to be Ignored by adding a new predicate in the client.

   package Library is

      subtype Nonzero is Integer with
        Static_Predicate => Nonzero /= 0;

      procedure P(X: Nonzero);

   end Library;

   with Text_IO; use Text_IO;
   package body Library is

      procedure P(X: Nonzero) is
      begin
         Put_Line(X'Img);
         -- We just printed a Nonzero value that is 0.
      end P;

   end Library;

   with Library; use Library;
   procedure Client is
      pragma Assertion_Policy(Ignore);
      -- Ignores the predicate on Nonzero as well as the one on
      -- My_Nonzero.
      subtype My_Nonzero is Nonzero with
        Static_Predicate => False;
      X : My_Nonzero := 0; -- Wrong!
   begin
      P(X);
   end Client;

I have decided that predicates are better than pre/post, in cases where we're talking about a property of a single parameter (as opposed to a relationship between two parameters), because you can reuse the predicate -- there will be other parameters that have the same property, and also local variables. That's why I would tend to write the above as a predicate.

It doesn't bother me too much that Client can disable Library's predicate, but I suspect it bothers you and others.

If someone thinks Library is "in control", they might reason that Suppress can be based on forced-on assertions. So add the following after "-- We just printed a Nonzero value that is 0.":

   declare
      pragma Suppress(Range_Check);
      -- Suppress should be safe, because X is not zero.

      Y : Integer range 1..1 := X; -- In fact, X = 0.
      -- Now we're in erroneous territory.
   begin
      Put_Line(Y'Img);
   end;

****************************************************************

From: Bob Duff
Sent: Tuesday, January 5, 2016  5:27 PM

> OK, OK, here's one I discussed privately with Steve:

Hmm. Maybe this isn't a loophole after all. After re-reading the RM, it seems that one can set a My_Nonzero variable to 0, but it will still be checked on entry to P, which takes a Nonzero parameter. Is that right?
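[Editor's note: A small sketch of the reuse argument made above; the names are invented for illustration. Once the property is captured in a predicate-bearing subtype, the same subtype can be reused for several parameters and for local variables, where a precondition would have to be repeated on each declaration:

   subtype Nonzero is Integer with
     Static_Predicate => Nonzero /= 0;

   procedure P (X : Nonzero);

   procedure Q (X, Y : Nonzero) is
      Temp : Nonzero := X;  -- a local with the same property
   begin
      P (Temp);
   end Q;

as opposed to writing Pre => X /= 0 (and the like) on each subprogram and checking the property by hand for each local.]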
****************************************************************

From: Randy Brukardt
Sent: Tuesday, January 5, 2016  5:37 PM

> I fear if I give examples of loopholes, you'll want to fix them. ;-)

It's possible. :-)

> > ...a package author, rather than a client, can have the last say on this, ...
>
> I never thought it was a worthwhile goal that the package author is "in control", as in preventing junk data from being passed in to that package. As I said, I won't try to change that now (I lost that battle, which is fine). But I don't particularly want to spend MORE time trying to achieve that goal.
>
> OK, OK, here's one I discussed privately with Steve:
>
> The rules about predicates seem to say you can cause the library's predicate to be Ignored by adding a new predicate in the client.
>
>    package Library is
>
>       subtype Nonzero is Integer with
>         Static_Predicate => Nonzero /= 0;
>
>       procedure P(X: Nonzero);
>
>    end Library;
>
>    with Text_IO; use Text_IO;
>    package body Library is
>
>       procedure P(X: Nonzero) is
>       begin
>          Put_Line(X'Img);
>          -- We just printed a Nonzero value that is 0.
>       end P;
>
>    end Library;
>
>    with Library; use Library;
>    procedure Client is
>       pragma Assertion_Policy(Ignore);
>       -- Ignores the predicate on Nonzero as well as the one on
>       -- My_Nonzero.
>       subtype My_Nonzero is Nonzero with
>         Static_Predicate => False;
>       X : My_Nonzero := 0; -- Wrong!
>    begin
>       P(X);
>    end Client;
>
> I have decided that predicates are better than pre/post, in cases where we're talking about a property of a single parameter (as opposed to a relationship between two parameters), because you can reuse the predicate -- there will be other parameters that have the same property, and also local variables. That's why I would tend to write the above as a predicate.
>
> It doesn't bother me too much that Client can disable Library's predicate, but I suspect it bothers you and others.

It does bother me, somewhat. But that's where we decided to draw the line, because it would be too weird (and would require a lot of chasing around) to have the rule be anything else. In particular, the meta-rule is that the policy is determined by the last declaration that contains the aspect in question. That does allow clients to "counterfeit" an aspect, but only if they *explicitly* redefine it. That's not going to be an accident, or likely to be surprising. (There are similar "loopholes" involving extensions for class-wide preconditions, postconditions, and type invariants.)

I would hope that anyone reading the above would find the "Static_Predicate => False" to be suspicious (well, *especially* that). Note that there is no loophole with the more likely:

   subtype My_Nonzero is Nonzero;

> If someone thinks Library is "in control", they might reason that Suppress can be based on forced-on assertions. So add the following after "-- We just printed a Nonzero value that is 0.":
>
>    declare
>       pragma Suppress(Range_Check);
>       -- Suppress should be safe, because X is not zero.
>
>       Y : Integer range 1..1 := X; -- In fact, X = 0.
>       -- Now we're in erroneous territory.
>    begin
>       Put_Line(Y'Img);
>    end;

I think this example might be a bit more compelling if it wasn't almost certain to be erroneous (it would be unless X = 1 in the above example, and I don't see any reasoning that could prove that, even assuming the predicates). Try

   Y : Positive := X; -- In fact, X = 0.

or something like that.
But the above doesn't bother me much; the use of Suppress should be so limited (and one should assume that any code in the scope of a Suppress is most likely erroneous -- at least, that's how it works for me :-) that the above situation should almost never come up.

Anyway, this particular loophole was a big fight in the ARG, and we settled on the above rule (essentially cutting the baby in half, but I digress). I certainly wouldn't try to reopen that, as we knew about the above cases at the time that we designed the rules.

In an ideal world, no one would ever feel a need to Suppress checks or Ignore assertions (as the majority of them would be optimized away), but we don't live in that world. (Even though a lot of people who do those things have no real reason to do so, given the gains are minimal.) One hopes that the current rules are good enough, as they prevent casual misuse, but not blatant attempts to avoid checking (it's probably not possible to prevent the evil programmer from achieving their goals without making it too hard for the honest programmer to get their work done).

****************************************************************

From: Randy Brukardt
Sent: Tuesday, January 5, 2016  5:49 PM

> Hmm. Maybe this isn't a loophole after all. After re-reading the RM, it seems that one can set a My_Nonzero variable to 0, but it will still be checked on entry to P, which takes a Nonzero parameter. Is that right?

Surely the subtype conversion to Nonzero would canonically be checked (that predicate is Checked, not Ignored). In the absence of any optimization, there is no loophole in this specific case.

The problem is that I don't know how the rules that allow optimization of predicates operate in this case. That's in large part because I can't find any such rules, even though 3.2.4(31.b/3) hints at their existence. (Effectively, I can't figure out the justification for 31.b from the RM wording, and that might matter in a case like this. Maybe I could figure it out by re-reading the associated AIs, but I'd like to do some useful work today... ;-)

So I can't give you a definitive answer.

****************************************************************