!standard 3.2.4(0)                                   12-02-15  AI05-0290-1/01
!standard 7.3.2(0)
!class Amendment 12-02-14
!status Amendment 2012 12-02-14
!status work item 12-02-14
!status received 11-11-21
!priority Medium
!difficulty Medium
!subject Improved control over assertions

!summary

** TBD **

!proposal

When assertions (particularly predicates and preconditions) are used in third-party libraries of packages (a category that also covers reusable code shared within an organization, and implementation-defined packages), additional control over the assertions is needed. In particular, assertions used to make inbound correctness checks should (almost) never be turned off, as these are needed to prevent failures in the implementation of the library; clients should be discouraged from turning these checks off. However, assertions used to make internal correctness checks (such as postconditions and invariants) are less critical - actions of the client should not be able to cause these to fail. As such, there should be some separation between these kinds of assertions.

We make the following proposals:

Proposal #1: The assertion policy in effect at the point of the declaration of a subtype with a predicate is the one used to determine whether the predicate is checked or not (rather than the assertion policy at the point of the check, as currently proposed in the draft Standard).

Reason: It should be possible for the body of a subprogram to assume that checks are on for all calls to that subprogram. The rules for preconditions (and invariants and postconditions as well) already determine whether checks are made by the policy at the point of the subprogram declaration, which gives this property. It would be inconsistent for predicates to be different.

Proposal #2: If the invoked subprogram and the denoted subprogram of a dispatching call have different assertion policies, it is unspecified which one is used for a class-wide invariant.

Reason: This fixes a bug; invariants and postconditions should act the same, and this is the rule used for postconditions (otherwise the compiler would always be required to make checks in the body in case someone turned off the checks elsewhere, which is not something we want to require).

Proposal #3: Provide an additional assertion policy "Check_External_Only". Under this policy, preconditions and predicates are checked, and the others (postconditions, type invariants, and pragma Asserts) are ignored.

Reason: Once unit testing is completed for a package, assertions that are (mostly) about checking the correctness of a body are not (necessarily) needed. However, assertions that are (mostly) about checking the correctness of calls are needed so long as new calls are being written and tested (which is usually as long as the package is in use). So it makes sense to treat these separately.

Proposal #4: There is a Boolean aspect "Always_Check_Assertions" for packages and subprograms. If this aspect is True, the assertions that belong to and are contained in the indicated unit are checked, no matter what assertion policy is specified.

Reason: There ought to be a way for a program unit to declare that it requires assertion checks to be on for proper operation. This ought to include individual subprograms. This is proposed as an aspect since it applies to a program unit (package or subprogram), but a pragma would also work.
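For illustration, a library package using the proposed aspect might look like this (the aspect name is the one proposed above and is not part of Ada 2012; the package and its contents are invented for the example):

   package Stack_Library
     with Always_Check_Assertions => True  -- proposed aspect, not existing Ada 2012
   is
      type Stack is private;

      function Is_Empty (S : Stack) return Boolean;

      procedure Pop (S : in out Stack)
        with Pre => not Is_Empty (S);
      -- Under the proposal, the assertions of this unit (such as this Pre)
      -- are checked no matter what assertion policy is specified elsewhere.
   private
      type Stack is record
         Top : Natural := 0;
      end record;
   end Stack_Library;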
This aspect (along with pragma Unsuppress) would allow a library like Claw to not only say in the documentation (that no one ever reads) that turning off checks and assertions is not supported, but also to declare that fact to the compiler and reader (so checks are in fact not turned off unless some non-standard mode is used). Note that we have a similar capability for constraint checks (pragma Unsuppress), and it seems unfortunate to have more control over those checks (which are cheaper and thus less necessary to turn off) than over assertions.

Alternative to #4: An alternative would be to push this to runtime. In particular, if we defined an attribute 'Assertions_Checked (allowed on subtypes and subprograms, returning True if the Assertion_Policy is Check for the declaration of the indicated entity), then the subprogram could defend itself without any extra runtime cost. If we in addition defined an attribute 'Pre_Check that explicitly evaluated the precondition and predicate checks on the parameters of a subprogram (yielding True if any of them fail), then bulletproofing a subprogram would just require writing:

   if not Sub'Assertions_Checked and then Sub'Pre_Check then
      raise Assertion_Error with "Failed preconditions with checking off";
   end if;

This would not generate any code if assertion checking is on (as the checking code would be known to be dead at compile time and would be removed by any barely competent compiler).

It's interesting to note that a subprogram so protected would probably run faster with precondition checking on, as in that case the compiler could optimize out unnecessary checks at compile time at the call site; with checks off, they're generated inside of the body and are immune to such optimization. That's why the original proposal is preferred (its cost doesn't change when checks are turned off).

!wording

** TBD **

We will need to define the terms "class-wide invariant" and "specific invariant", since we keep finding ourselves talking about them, and the parallel with preconditions (where these are defined terms) makes it even more likely that everyone will use them. Best to have a definition. This will simplify the wording for #2.

!discussion

For #1: The subtype declaration should be the determining point, as that is similar to the rule for preconditions. A predicate is a subtype property, so having it depend on some unrelated subprogram declaration seems bizarre. In addition, that would require a different determination for predicate checks in other contexts (aggregate components, object initializations, etc.). Finally, using the subtype declaration should be enough, as a subprogram body always knows the location of the subtype declarations used in its parameter profiles.
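As a small illustration of this rule (the package, subtype, and subprogram here are invented; only the point of policy determination is at issue):

   pragma Assertion_Policy (Check);  -- configuration pragma for this compilation

   package Example is
      subtype Even is Integer
        with Dynamic_Predicate => Even mod 2 = 0;

      procedure Halve (X : in Even);
      -- Under Proposal #1, the body of Halve can assume that X satisfies the
      -- predicate, because the policy in effect where Even is declared is
      -- Check -- even if a caller's subtype conversion is compiled under
      -- Assertion_Policy (Ignore).
   end Example;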
For #2: The proposal applies only to class-wide invariants, as the specific invariants can only be determined for the actually invoked subprogram (so there is no point in applying them at the call site rather than in the body). We could generalize this to cover all invariants, but the freedom doesn't seem necessary (and "unspecified" scares some users).

For #3: There might be a better name for this checking mode. One proposal was "Check_Pres_Only", which is a trick, as "Pres" here means both PREconditions and PREdicates. But that could be confused with the Pre aspect only, which is not what we want.

It has been suggested that calls that are fully internal to a package ought to be exempted from this check. This seems to the author to have been a strawman set up to prove that since we can't get this separation perfect, we shouldn't do it at all. But that makes little sense; at worst, there will be extra checks that can't fail, and the user always has the option of turning off all assertion checks if some checks that are left on prove intolerable. The proposed rule is simple, and errs on the side of leaving on checks that might in fact be internal. That seems good enough (don't let the best be the enemy of the better)!

For #4: It could be argued that applying pragma Assertion_Policy(Check) as part of a library would be sufficient. There are four problems with this: First, a configuration pragma only applies to the units with which it actually appears (not necessarily children or subunits); that means it has to be repeated with every unit of a set. Second, since the default assertion policy is implementation-defined, there would be no clear difference between requiring assertions for correct operation and just wanting assertions checked (for testing purposes). Third, this requires checking for the entire library (as a configuration pragma cannot be applied to an individual subprogram, as Suppress and Unsuppress can); that might be too much of a requirement for some uses. Finally, as a configuration pragma, the pragma cannot be part of the library [package] itself (it has to appear outside), which makes it more likely that it gets separated from the unit, and less likely that the requirement will be noted by the reader. None of these are compelling by themselves, but as a set they suggest that something stronger is needed.

Having a special assertion policy for this purpose (say "Always_Check") doesn't seem sufficient, because it still has the previously noted issues caused by the pragma being a configuration pragma.

In the absence of some mechanism to force these checks, paranoid programmers will simply refuse to use preconditions and predicates, because writing the checks as normal Ada code eliminates the possibility of some clueless client turning them off (and then asking for extensive and expensive support as to why their application failed). Since predicates and preconditions provide better and more formal documentation than comments can, it would be unfortunate to discourage their use by a segment of the programming population.

!ACATS Test

Create an ACATS C-Test to test these changes.

!ASIS

No change needed.

!appendix

From: Randy Brukardt
Sent: Monday, November 21, 2011 5:13 PM

Let me start by saying that it is awfully late for any significant change, my default position is no change at this point, and the following issue isn't that critical to get right. But I'd like to discuss it some and either feel better about our current decision or possibly consider some alternatives.

----

John remarked in his latest ARG draft:

>> *** I really think that this should raise Constraint_Error and that
>> subtype predicates should not be controlled by Assertion_Policy but
>> by Suppress. Ah well.....

My initial response was:

> I agree with this. The problem is that you want to replace constraints with
> (static) predicates, but you can't really do that because people are used to
> removing assertions even though they would never remove a constraint
> check.

But upon thinking about it further, it's not that clear-cut.

First, it would be odd for static and dynamic predicates to raise different exceptions. Second, there really isn't that much difference in use between predicates and preconditions (checked entering calls) and invariants and postconditions (checked after calls). So there is an argument that they all should be the same.
(But of course you can make the same arguments for constraints in both of those positions.)

The thing that really bothers me is that contracts somehow seem less important than constraints, even though their purposes are the same, while assertions (including the "body postconditions" we discussed a few weeks ago) really are in a different category of importance. So it seems weird to me to include pragma Assert and the precondition and predicate aspects in the same bucket as far as suppression/ignoring is concerned.

Let me give an example from Claw to illustrate my point. Most Claw routines have requirements on some or all of the parameters passed. For instance, Show is defined as:

   procedure Show (Window : in Root_Window_Type;
                   How    : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal);
      -- Show Window according to How.
      -- Raises:
      --    Not_Valid_Error if Window does not have a valid (Windows) window.
      --    Windows_Error if Windows returns an error.

The implementation of Show will start something like:

   procedure Show (Window : in Root_Window_Type;
                   How    : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal) is
   begin
      if not Is_Valid (Window) then
         raise Not_Valid_Error;
      end if;
      -- Assume Window is valid in the following code.
      ...
   end Show;

This is of course Ada 95 code (assuming no implementation-defined extensions), so we couldn't have used pragma Assert. But even if it had existed, I don't think we would have used it, because we consider the validity check an important part of the contract and would not want it turned off. In particular, the body of Show will take no precautions against Window being invalid, and if that assumption is violated, almost anything could happen. (Especially if checks are also suppressed; note that the Claw documentation says that compiling Claw with checks suppressed is not supported.) Essentially, this code could be incorrect without the check, and as such it is not appropriate to use an assertion (which is supposed to be able to be ignored) to make the check.

If I were writing this code in Ada 2012, I would want to use a predicate or precondition to make this requirement more formal (and executable and visible to tools and all of those other good things). For instance, using a precondition:

(I'm using a precondition mainly because it is easier to write here; I'd probably use a predicate in practice, because almost all of the Claw routines need this contract and it would be a pain to repeat it hundreds of times. But the same principles apply either way.)

   procedure Show (Window : in Root_Window_Type;
                   How    : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal)
      with Pre => Is_Valid (Window);
      -- Show Window according to How.
      -- Raises:
      --    Windows_Error if Windows returns an error.

We'd prefer to remove the check from the body:

   procedure Show (Window : in Root_Window_Type;
                   How    : in Claw.Codes.Show_Window_Type := Claw.Codes.Show_Normal) is
   begin
      -- Assume Window is valid in the following code.
      ...
   end Show;

But note now that the body is incorrect if the Assertion_Policy is Ignore and the Window really is invalid. This is a bad thing: it means that code is at risk of being used in a way that has not been tested and is not supported.

(Obviously, in this case the results aren't quite as bad, in that they aren't safety-critical. But what about a subpool-supporting storage pool? These are defined with a precondition:

   with Pre'Class => Pool_of_Subpool (Subpool) = Pool'Access;

and I'd expect a user-written pool to do very bad things if this is not true.
I'd surely not expect such a pool to repeat this check, but it would then be necessary to be defensive.)

To see this even more clearly, imagine that the containers library had implemented most of its exceptional cases as predicates or preconditions. The body of the containers may not even be implemented in Ada (GNAT, for instance, suppresses checks as a matter of course in the predefined packages); if those cases aren't checked, it is hard to predict what might happen.

Possibly important aside: I was wrong when I said that writing the condition as a precondition or a predicate didn't matter. For a precondition, the Assertion_Policy in effect at the declaration of the called subprogram determines whether the check is made. Thus, extending the existing Claw rule to read "compiling Claw with an Assertion_Policy other than Check is not supported" would be sufficient to eliminate the problem (at least formally - I'm not so sure that is true practically). For predicates, however, it is the Assertion_Policy in effect at the point of the evaluation (subtype conversion, etc.) that determines whether the check is made. That means that predicates can be ignored even if all of Claw is compiled with Assertion_Policy = Check -- which makes the protections even weaker (since I don't think a library should be dictating how a client compiles their code). End possibly important aside.

Fundamentally, contract aspects are more like constraints than they are like assertions, so it is dubious to put them into the same bucket as assertions.

I know at one point Tucker argued that making contracts "suppressed" rather than "ignored" was bad because it could mean that adding contracts could make a program less safe. And I agree with that, except that the problem exists anyway, because the body of a subprogram with contracts is going to assume that those contracts are true -- and the failure of that assumption is going to make the results unpredictable. (Maybe not to the level of erroneousness, but at least unpredictable in any formal sense. And if combined with Suppress, as in GNAT, we really will have erroneousness.)

I think it is just as bad to require a bullet-proof library to repeat all of the predicate and precondition checks in the body "just in case" someone turns them off. People hate redundant code for good reason (it gets out of sync; it makes programs larger; it's often dead and thus causes problems with coverage analysis; etc.)

---

If we accept the premise that assertions are different from contracts, what can we do?

Going all the way to Suppression semantics is probably going too far, for a number of reasons. Most importantly, if this is not combined with Suppress of constraint checks, the code is likely to malfunction logically, but not in Ada terms. (It's likely to fail by raising a predefined exception in some unexpected way; a manageable error.)

The most logical solution would be to have a separate Contract_Policy. Such a policy would not include assertions (of any stripe) but would include all of the contract aspects. (And Assertion_Policy would revert to only handling assertions.) With such a separate policy, turning it off could be treated similarly to suppressing checks (something only to be done in very restricted circumstances for critical [and clearly demonstrated] needs). More importantly, management can clearly see if it is being abused and set appropriate limits.
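[Editor's note: A minimal sketch of the shape this suggestion would take; Contract_Policy is hypothetical and is shown only to make the proposed split concrete:

   pragma Contract_Policy (Check);    -- hypothetical: would govern Pre, Post,
                                      -- Type_Invariant, and predicate aspects
   pragma Assertion_Policy (Ignore);  -- would then govern only pragma Assert
]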
A less disruptive solution (less disruptive from the user's perspective, that is; it would require more rewording in the Standard than the former) would be to have an additional Assertion_Policy "Check_Contracts_and_Ignore_Others", which would do exactly what it says. One would hope that if such a policy existed, the use of the full "Ignore" could be discouraged just as the use of Suppress is discouraged.

---

I've written enough on this topic. I've been uncomfortable ever since thinking about the implications of turning these things off for storage pools and the containers more than a year ago. But I've never been convinced in my own mind that there is a significant problem here -- and I'm still not. But noting that John has some similar concerns, I thought it would be good to air these out.

Now it's your turn to comment.

****************************************************************

From: Tucker Taft
Sent: Monday, November 21, 2011 5:48 PM

I could see adding another policy which distinguished assertions from other contract-like things. On the other hand, some folks (e.g. us) currently use assertions exactly like preconditions, postconditions, constraints, invariants, etc., so those of us already using assertions heavily probably don't see the point. If an assertion is violated, then bad things can happen. You don't generally get erroneousness, because the built-in constraint checks prevent that. But you can certainly get complete garbage-out, given enough garbage-in.

As far as where the policy is relevant, I am surprised that for predicates it depends on where the check is performed. I would expect all checks associated with the predicate on a formal parameter to be on or off depending on the policy at the point where the subprogram spec is compiled, just like preconditions and postconditions. Certainly you want to know when you compile a body whether you can rely on the predicates associated with the formal parameters; otherwise you *will* get into the erroneousness world if the compiler makes the wrong assumption.

****************************************************************

From: Randy Brukardt
Sent: Monday, November 21, 2011 7:17 PM

> I could see adding another policy which distinguished assertions from
> other contract-like things.
> On the other hand, some folks (e.g. us) currently use assertions
> exactly like preconditions, postconditions, constraints, invariants,
> etc., so those of us already using assertions heavily probably don't
> see the point. If an assertion is violated, then bad things can
> happen. You don't generally get erroneousness, because the built-in
> constraint checks prevent that. But you can certainly get complete
> garbage-out, given enough garbage-in.

My view of assertions is a bit different: an assertion is something that ought to be True at a given point, but it doesn't affect the correctness of the program. IMHO, checks that do affect the correctness of the program should not be given as an assertion (pragma Assert), but rather be directly part of the program. For instance, if you have an if statement that handles two possible cases, but other cases are not handled, there ought to be some else branch with a bug box or an exception raise in it -- not some assertion that can be ignored. (I've yet to see a case where a pragma Assert could do something that you couldn't do with regular code; that's not true of our old Condcomp facility, which allows conditionally compiled declarations and pragmas.)
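[Editor's note: A tiny sketch of the pattern being described, with invented names, for concreteness:

   if Mode = Read then
      Process_Read;
   elsif Mode = Write then
      Process_Write;
   else
      -- Real code that cannot be "ignored", rather than a pragma Assert:
      raise Program_Error with "unhandled Mode";
   end if;
]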
Thus, pragma Assert is purely a debugging aid; it never has an effect on any sort of correctness, and it can be ignored without any harmful effects. I realize that pragma Assert can be used in other ways than I outlined above, but I always thought that the above was the intent -- it would surely be the way that I'd use it (if we had implemented it at all; we have not done so to date, so I've never actually used it).

This difference is probably why I view the other contracty things differently, because they surely aren't optional, are not just debugging aids, and shouldn't be ignored without lots of careful consideration. Thus lumping them together doesn't make much sense. I'd be happier if there were an extra policy; but I won't scream too much, since impl-def policies are allowed (so I can always define something sensible).

> As far as where the policy is relevant, I am surprised that for
> predicates it depends on where the check is performed.
> I would expect all checks associated with the predicate on a formal
> parameter would be on or off depending on the policy at the point
> where the subprogram spec is compiled, just like preconditions and
> postconditions.
> Certainly you want to know when you compile a body whether you can
> rely on the predicates associated with the formal parameters,
> otherwise you *will* get into the erroneousness world if the compiler
> makes the wrong assumption.

I had that thought as well when I noticed that difference. The problem is that predicates belong to the subtype, not the subprogram. I could imagine a rule similar to the one for subprograms, where the policy that matters is the one that applies to the subtype -- but that wouldn't necessarily give you certainty over the calls (especially when the subtype and subprogram are declared in different packages). A rule that made subtype conversions to formal parameters work differently than other subtype conversions is just too weird to contemplate.

In any case, it does seem like there is a bug in the handling of subtype predicates compared to the other contracts. I suppose I shouldn't be surprised; Bob wrote the wording of predicates without considering how the other contracts worked, while the other three were pretty much developed together.

Aside: We made whether pre- and postcondition checks are made for a dispatching call unspecified if the call and the invoked subprogram have different policies. We didn't do anything similar for type invariants; I would think that we ought to (the same problems apply). [I noticed this while checking to see if the rules for type invariants were the same as for pre- and postconditions; they are, except for this difference.]

P.S. I should stop thinking about this stuff. All I seem to find is problems. :-)

****************************************************************

From: Bob Duff
Sent: Monday, November 21, 2011 7:35 PM

> Let me start by saying that it is awfully late for any significant
> change, my default position is no change at this point, ...

Agreed. I don't think this stuff is very important. There are all sorts of levels of checking ("policies") one might want (all-checks-on, checks-suppressed, checks-ignored, preconditions checked but not postconditions, etc). Implementations will provide those as needed or requested by customers. The only important one for the Standard to require is all-checks-on.

Terminology: I use "assertion" to refer to any sort of checks, including pre/post-conditions. Not just pragma Assert. I think that matches Eiffel terminology. And Constraint_Error vs.
Assertion_Failure is a non-issue. A bug is a bug.

****************************************************************

From: Tucker Taft
Sent: Monday, November 21, 2011 8:41 PM

> ... Aside: We made whether pre and post-conditions checks are made for
> a dispatching call unspecified if the call and the invoked subprogram
> have different policies....

Your description seems a bit off. It is always based on where the declaration of some subprogram appears; the unspecified part is whether it is determined by the named subprogram or the invoked subprogram. It has nothing to do with where the call is. That is good, because when compiling the body of a subprogram, the compiler knows which rule it is following. If it depended on where the call was, the compiler would have no idea whether to assume the precondition has been checked.

> ... We didn't do anything similar for type invariants; I would think
> that we ought to (the same problems apply). [I noticed this checking
> to see if the rules for type invariants were the same as for pre- and
> post-conditions; they are except for this difference.]

Yes, I would agree we need to do roughly the same thing for invariants, or pick one, e.g., always base it on which subprogram body is invoked.

I think predicate checks associated with parameter associations really need to be determined based on where the subprogram is declared (either the named subprogram or the invoked subprogram). Otherwise it is hopeless for the compiler to take any advantage of the predicate checks.

****************************************************************

From: Jean-Pierre Rosen
Sent: Tuesday, November 22, 2011 2:50 AM

> Let me start by saying that it is awfully late for any significant
> change, my default position is no change at this point, and the
> following issue isn't that critical to get right. But I'd like to
> discuss it some and either feel better about our current decision or
> possibly consider some alternatives.
>
> ----
>
> John remarked in his latest ARG draft:
>
>>> *** I really think that this should raise Constraint_Error and that
>>> subtype predicates should not be controlled by Assertion_Policy but
>>> by Suppress. Ah well.....
> [...] discussion deleted
> Now it's your turn to comment.

You convinced me - maybe more than you would like ;-)

I think that if we want to do it right, we need a special suppress, and a special exception: Contract_Error. That's really what it is about.

The really important question is: is it too late in the game? We the ARG are very careful about meeting our deadlines. But the message I heard from NBs and WG9 is "we are not so much in a hurry, we can accept a delay if it really improves the language". So it might be worth considering.

****************************************************************

From: Tucker Taft
Sent: Tuesday, November 22, 2011 7:11 AM

I think it could be a mistake to make such a strong distinction between pragma Assert and these other kinds of assertions. I realize that some people have not been using assertions at all. But for those who have, they serve very much the same purpose as these new contracts. They are essentially higher-level constraint checks. To imply that somehow they are fundamentally different seems like an unjustifiable leap.

And I suppose if you don't use pragma Assert, then you really shouldn't care.
****************************************************************

From: Erhard Ploedereder
Sent: Tuesday, November 22, 2011 12:23 PM

Thinking more about future users...:

What I would like to be able to do is to turn off Assertions and Postconditions in MY code, because I have verified it to my heart's content. But what I would like to continue checking is the Preconditions of services that my library offers. After all, there is no limit to the s... of people (not reading contracts) and I want to ensure that my code works as advertised.

Unfortunately, it looks like it is an all-or-nothing policy that is provided today. If that issue is (re)opened, I would argue for the differentiation above.

If it is not re-opened, then I want to note for 2020 that the upward incompatibility of billions and billions of lines of code relying on turning off precondition checks exists - and it would be a good thing to cause that incompatibility ;-)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011 1:01 PM

> What I would like to be able to do is to turn off Assertions and
> Postconditions in MY code, because I have verified it to my heart's
> content. But what I would like to continue checking is the
> Preconditions of services that my library offers.
> After all, there is no limit to the s... of people (not reading
> contracts) and I want to ensure that my code works as advertised.

I was thinking about that this morning. Propagating "Assertion_Error" from the body of some routine indicates a bug in that routine, pure and simple. But propagating it from a call indicates a bug in *your* code. It would be valuable to have such a distinction -- not all code is written by the same user. (Note: you need to include predicates with preconditions above.)

I'm thinking about something like Claw, where the failure of a postcondition or invariant means that there is a bug in Claw (call tech support!), while the failure of a precondition or predicate means that there is a bug in the client's code (damn, better go fix it). There ought to be some differentiation between these.

> Unfortunately, it looks like it is an all-or-nothing policy that is
> provided today. If that issue is (re)opened, I would argue for the
> differentiation above.

Well, what I think we see is that there are quite a few differentiations that make sense (contracts vs. pure debugging code [assertions]; caller vs. callee bugs). Which is probably why we punted. Hopefully, compiler optimizations of contracts will be good enough that management can treat Assertion_Policy(Ignore) the same as it treats Suppress(All_Checks) -- it needs reams of justification. (Note that in "correct" code, most of the preconditions and postconditions will match up, so turning off one or the other will have almost no effect on the code performance, since the compiler will only check one anyway.)

> If it is not re-opened then I want to note for 2020 that the upward
> incompatibility of billions and billions of lines of code relying on
> turning off precondition checks exists - and it would be a good thing
> to cause that incompatibility ;-)

Agreed. Any code that *relies* on turning off checks (of any kind) is broken. There might in very rare cases be a reason to write intentionally broken code, but it should not be considered portable in any sense.
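[Editor's note: The Check_External_Only policy in Proposal #3 above is aimed at exactly this split. A hedged sketch of its intended use (the policy name is the one proposed in this AI, not existing Ada 2012):

   pragma Assertion_Policy (Check_External_Only);
   -- As proposed: preconditions and predicates (checks on callers) would still
   -- be performed, while postconditions, type invariants, and pragma Asserts
   -- (internal self-checks) would be ignored.
]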
****************************************************************

From: Tucker Taft
Sent: Tuesday, November 22, 2011 2:19 PM

Assertion_Policy is based on compilations (typically a source file), so perhaps you could use Ignore in the body, while using Check on the spec.

I could imagine a version of Assertion_Policy which allowed you to specify that a different exception is to be raised. Now go talk to your favorite compiler vendor, and don't forget to bring some cash... ;-)

****************************************************************

From: Bob Duff
Sent: Tuesday, November 22, 2011 2:38 PM

> I could imagine a version of Assertion_Policy which allowed you to
> specify that a different exception is to be raised. Now go talk to your
> favorite compiler vendor, and don't forget to bring some cash... ;-)

I agree. I can think of all sorts of useful variations on the idea of "remove some assertions/checks for efficiency". Given that we're not all in agreement about what we want, the appropriate thing is to let vendors experiment, based on customer feedback, and maybe standardize something for Ada 2020. The Ada 2012 rules should be left alone at this point.

By the way, have any of the people participating in this discussion looked at Meyer's Eiffel book? He gives a set of options for turning on/off checks, with rationale. I don't agree with everything he says, but I think we should all look at it before trying to reinvent that wheel.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011 1:25 PM

...

> But for those who have, they serve very much the same purpose as these
> new contracts. They are essentially higher-level constraint checks.

That's an abuse of the construct, IMHO.

> To imply that somehow they are fundamentally different seems like an
> unjustifiable leap.

They *are* fundamentally different:

(1) As pragmas, they should never have an effect on the dynamic correctness of the program. It should always be possible to erase all of the pragmas and get a functioning program. Anything else is an abuse of pragmas. We got rid of almost all of the offending pragmas in Ada 2012.

(2) I've always viewed the purpose of pragma Assert as holding code that eases debugging without having any effect on correctness. My programs have a lot of such code, and it makes sense to remove it from the production code. It makes sense for the Ada language to have a feature aimed at supporting such uses.

It does *not* make sense to remove checks that protect the server from clueless clients (that's especially true in the case where the writer of the "server" (library) is different from the writer of the client -- exactly the case where Ada has better support than other languages). As such, it doesn't make sense for the language to support such a feature. (In the rare case where a check has to be removed for performance reasons, the good old comment symbol will suffice -- such things should *never* be done globally.)

I can see Erhard's point that invariants and postconditions are fundamentally similar (they verify the promises to the client and internally to the server, neither of which is of much interest to the client), but preconditions and predicates are very different: they detect client mistakes -- something that is totally out of the hands of the library creator.

> And I suppose if you don't use pragma Assert, then you really
> shouldn't care.

I would use it (as outlined above) if it had been defined in Ada 95.
But I wouldn't use it for inbound contract checks, because those should never be turned off and I don't want to mix requirements on the client with internal self-checks -- these are wildly different things. (Personally, I'd never want to even use the same exception for such things, but that can't really be helped for language-defined checks -- Constraint_Error has a similar problem of confusing client contract checks vs. body errors.)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011 1:35 PM

...

> I think predicate checks associated with parameter associations really
> need to be determined based on where the subprogram is declared
> (either the named subprogram or the invoked subprogram). Otherwise it
> is hopeless for the compiler to take any advantage of the predicate
> checks.

You'd have to propose a rule in order to convince me about this. A predicate is a property of a subtype, and it can get checked in a lot of contexts that have nothing to do with a subprogram call. Having different rules for subprogram calls from other things (like type conversions, aggregate components, etc.) would add a lot of implementation complexity, possibility for user confusion, and wording complexity for little gain.

I could easily imagine making the rule that it is the state at the declaration of the subtype that controls whether the check is made or not. That would mean whether the check is made or not is not context dependent (and it would be more like the other checks). And I think that would be enough for the compiler to be able to make assumptions about whether the call is checked, as the subtype is necessarily visible and previously compiled when the body is compiled (so the compiler will know whether checks are on or off for the predicate at that point).

Note that I think whether Type_Invariants are on or off should depend on the type declaration, and not on the subprograms (they're necessarily in the same package, so it would be very unusual for them to have a different state). That would make the most sense since a Type_Invariant is a property of a (private) type; it's odd for some unrelated declaration to be determining anything about such a property. As noted above, the compiler would always know when the body is compiled, and I think that is sufficient for it to make optimizations.

****************************************************************

From: Bob Duff
Sent: Tuesday, November 22, 2011 3:17 PM

> I would use it (as outlined above) if it had been defined in Ada 95.
> But I wouldn't use it for inbound contract checks, because those
> should never be
  ^^^^^^^^^^^^^^^^^^
> turned off and I don't want to mix requirements on the client with
> internal self-checks -- these are wildly different things.

I agree with the distinction you make between internal self-checks in a library versus checks on clients' proper use of that library.

But I think it's very wrong to say "should never be turned off" about ANY checks. If I have strong evidence (from testing, code reviews, independent proof tools, ...) that calls to the library won't fail preconditions, and I need to turn them off to meet my performance goals, then I want to be able to do so.

CLAW is not such a case, because it's not performance-critical, because it's painting stuff on a screen. But you can't generalize from that to all libraries.

Commenting them out is not a good option, because then I can't automatically turn them back on for debug/test builds.
Look at it another way: The ability to turn off checks (of whatever sort) gives me the freedom to put in useful checks without worrying about how inefficient they are. I've written some pretty complicated assertions, on occasion.

Or yet another way: The decision to turn checks on or off properly belongs to the programmer, not to us language designers.

Erhard imagines a library carefully written by competent people, and a client sloppily written by nincompoops. That is indeed a common case, but the opposite case, and everything in between, can also happen.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011 4:23 PM

> Assertion_Policy is based on compilations (typically a source file),
> so perhaps you could use Ignore in the body, while using Check on the
> spec.
>
> I could imagine a version of Assertion_Policy which allowed you to
> specify that a different exception is to be raised. Now go talk to your
> favorite compiler vendor, and don't forget to bring some cash... ;-)

My favorite vendor needs more time to do the work as opposed to cash. :-) [Although I suppose cash to live on would help provide that time... probably ought to go buy some lottery tickets. :-)]

I agree that vendor extensions here will be important (some sort of function classification to allow optimizations will be vital). I do worry that we'll end up with too much fragmentation this way -- although I suppose we'll all end up copying whatever AdaCore does by some sort of necessity.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 22, 2011 4:56 PM

...

> But I think it's very wrong to say "should never be turned off"
> about ANY checks. If I have strong evidence (from testing, code
> reviews, independent proof tools, ...) that calls to the library won't
> fail preconditions, and I need to turn them off to meet my performance
> goals, then I want to be able to do so.
> CLAW is not such a case, because it's not performance-critical,
> because it's painting stuff on a screen. But you can't generalize
> from that to all libraries.

I don't disagree with this basic point, but supposed "performance goals" are usually used to justify all sorts of iffy behavior. But in actual practice, performance is impacted only a small amount by checks, and it is very rare for the checks to be the difference in performance.

That is, most of the time, performance is way too slow/large/whatever, and turning off checks doesn't make enough difference. You have to switch algorithms. Or performance is really good enough as it is, and turning off checks just makes the program more fragile and prone to malfunction without any real benefit. The area between where turning off checks helps is pretty narrow. It's almost non-existent for constraint checks (a large number of which compilers remove). I like to think that the situation will be pretty similar for contract checks.

Specifically, my fantasy compiler (I say fantasy compiler not because I don't know how to implement this, but more because I don't know if I'll ever have the time to actually implement it to make it real -- and I don't want to make non-real things sound like they exist) should be able to remove most invariants and postconditions by proving them true. Similarly, it should be able to remove most predicates and preconditions by proving them true (based in part on the postconditions, known to either be checked or proved true).
So the incremental effect of contract checks will be quite small.

I should note that this depends on writing "good" contracts, using only provably "pure" functions. My fantasy compiler will surely give lots of warnings for bad contracts (and reject them altogether in some modes). Bad contracts always have to be evaluated and are costly - but these aren't really "contracts", given that they change based on factors other than the object values involved. One hopes these things aren't common.

> Commenting them out is not a good option, because then I can't
> automatically turn them back on for debug/test builds.

True enough. But that's the role of pragma Assert: put the debugging stuff in it, not into contracts.

> Look at it another way: The ability to turn off checks (of whatever
> sort) gives me the freedom to put in useful checks without worrying
> about how inefficient they are. I've written some pretty complicated
> assertions, on occasion.

Which is fine. But if you write complicated (and non-provable) contracts, I hope your compiler cuts you off at the knees. (My fantasy compiler surely will complain loudly about such things.) Keep contracts and debugging stuff separate! (And anything that can't be part of the production system is by definition debugging stuff!)

> Or yet another way: The decision to turn checks on or off properly
> belongs to the programmer, not to us language designers.

True enough. But what worries me is lumping together things that are completely different. Debugging stuff can and should be turned off, and no one should have to think too much about doing so. Contract stuff, OTOH, should only be turned off under careful consideration. It's annoying to put those both under the same setting, encouraging the confusion of thinking these things are the same somehow.

> Erhard imagines a library carefully written by competent people, and a
> client sloppily written by nincompoops. That is indeed a common case,
> but the opposite case, and everything in between, can also happen.

Yes, of course. But as I note, well-written contracts can be (and should be) almost completely eliminated by compilers. So the reason for turning those off should be fairly minimal (especially as compilers get better at optimizing these).

The danger is confusing debugging stuff (like expensive assertions that cannot be left in the production program -- and yes, I've written some things like that) with contract stuff that isn't costly in the first place and should be left in all programs, with the possible exception of the handful on the razor's edge of performance. (And, yes, I recognize the need to be able to turn these things off for that razor's edge. I've never argued that you shouldn't be able to turn these off at all, only that they shouldn't be lumped with "expensive assertions" that can only be for debugging purposes.)

BTW, my understanding is that GNAT already has much finer control over assertions and contracts than that offered by the Standard. Do you have any feeling for how often this control is used vs. the blunt instrument given in the Standard? If it is widely used, that suggests that the Standard is already deficient.

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 22, 2011 10:07 PM

> I don't disagree with this basic point, but supposed "performance
> goals" are usually used to justify all sorts of iffy behavior. But in
> actual practice, performance is impacted only a small amount by
> checks, and it is very rare for the checks to be the difference in
> performance.

But checks can be a menace in terms of deactivated code in a certified environment, so often in a 178B context people DO want to turn off all checks, because they don't want to deal with deactivated code.

> That is, most of the time, performance is way too slow/large/whatever,
> and turning off checks doesn't make enough difference. You have to
> switch algorithms. Or performance is really good enough as it is, and
> turning off checks just makes the program more fragile and prone to
> malfunction without any real benefit.

Yes, most of the time, but there are VERY significant exceptions.

> The area between where turning off checks helps is pretty narrow. It's
> almost non-existent for constraint checks (a large number of which
> compilers remove). I like to think that the situation will be pretty
> similar for contract checks.

That's just wrong, there are cases in which constraint checks have to be turned off to meet performance goals.

> Specifically, my fantasy compiler (I say fantasy compiler not because
> I don't know how to implement this, but more because I don't know if
> I'll ever have the time to actually implement it to make it real --
> and I don't want to make non-real things sound like they exist) should be
> able to remove most invariants and postconditions by proving them
> true. Similarly, it should be able to remove most predicates and
> preconditions by proving them true (based in part on the postconditions, known to either be checked or proved true).
> So the incremental effect of contract checks will be quite small.

Well, hardly worth discussing fantasies!

> Which is fine. But if you write complicated (and non-provable)
> contracts, I hope your compiler cuts you off at the knees. (My fantasy
> compiler surely will complain loudly about such things.) Keep
> contracts and debugging stuff separate! (And anything that can't be
> part of the production system is by definition debugging stuff!)

I think you definitely are in fantasy land here, and I don't see any point trying to set you straight, since you proclaim this to be a fantasy.

> True enough. But what worries me is lumping together things that are
> completely different. Debugging stuff can and should be turned off,
> and no one should have to think too much about doing so. Contract
> stuff, OTOH, should only be turned off under careful consideration.
> It's annoying to put those both under the same setting, encouraging
> the confusion of thinking these things are the same somehow.

For many people they are the same somehow, because assertions were always about contracts...

> Yes, of course. But as I note, well-written contracts can be (and
> should be) almost completely eliminated by compilers. So the reason
> for turning those off should be fairly minimal (especially as
> compilers get better at optimizing these).

This idea of complete elimination is plain nonsense, and the idea that it can happen automatically even more nonsensical.

Randy, you might want to follow the Hi-Lite project to get a little more grounded here and also familiarize yourself with SPARK, and what can and cannot be feasibly achieved in terms of proof of partial correctness.
> The danger is confusing debugging stuff (like expensive assertions
> that cannot be left in the production program -- and yes, I've written
> some things like that) with contract stuff that isn't costly in the
> first place and should be left in all programs with the possible
> exception of the handful on the razor's edge of performance. (And,
> yes, I recognize the need to be able to turn these things off for that
> razor's edge. I've never argued that you shouldn't be able to turn
> these off at all, only that they shouldn't be lumped with "expensive
> assertions" that can only be for debugging purposes.)

I find your distinction between assertions and pre/postconditions etc to be pretty bogus.

> BTW, my understanding is that GNAT already has much finer control over
> assertions and contracts than that offered by the Standard. Do you
> have any feeling for how often this control is used vs. the blunt
> instrument given in the Standard? If it is widely used, that suggests
> that the Standard is already deficient.

I have never seen any customer code using this fine level of control. It still seems reasonable to provide it!

Actually in GNAT, we have a generalized assertion pragma:

   pragma Check (checkname, assertion [, string]);

where checkname can be referenced in a Check_Policy to turn it on or off, but I don't know if anyone uses it. Internally something like a precondition gets turned into

   pragma Check (Precondition, .....)

****************************************************************

From: Robert Dewar
Sent: Tuesday, November 22, 2011 11:07 PM

...

> I'm thinking about something like Claw, where the failure of a
> postcondition or invariant means that there is a bug in Claw (call
> tech support!) while the failure of a precondition or predicate means
> that there is a bug in the client's code (damn, better go fix it).
> There ought to be some differentiation between these.

I would assume that appropriate messages indicate what is going on.

****************************************************************

From: Bob Duff
Sent: Wednesday, November 23, 2011 7:26 AM

> BTW, my understanding is that GNAT already has much finer control over
> assertions and contracts than that offered by the Standard. Do you
> have any feeling for how often this control is used vs. the blunt
> instrument given in the Standard? If it is widely used, that suggests
> that the Standard is already deficient.

My evidence is not very scientific, but my vague impression is that most people turn checks on or off for production. I.e. not fine grained. I think "fine grained" would be a good idea in many cases, but I suppose most people find it to be too much trouble.

Which reminds me of another reason to have a way to turn off ALL checks: If you're thinking of turning off some checks, you should turn off all checks, and measure the speed -- that tells you whether turning off SOME checks can help, and a best-case estimate of how much.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, November 23, 2011 1:58 PM

> > I don't disagree with this basic point, but supposed "performance
> > goals" are usually used to justify all sorts of iffy behavior. But
> > in actual practice, performance is impacted only a small amount by
> > checks, and it is very rare for the checks to be the difference in
> > performance.
>
> But checks can be a menace in terms of deactivated code in a certified
> environment, so often in a 178B context people DO want to turn off all
> checks, because they don't want to deal with deactivated code.

I don't understand. A check of the sort I'm talking about is by definition not something "deactivated". Either the compiler can prove it to be OK, in which case it doesn't appear in the code at all (it's essentially a comment), or it is executed every time the program runs, in which case it is an integral part of the code. I'd expect problems in terms of 178B for the handlers of check failures (since there wouldn't be an obvious way to execute or verify them), but not the checks themselves. Perhaps you meant the handlers?

...

> > The area between where turning off checks helps is pretty narrow.
> > It's almost non-existent for constraint checks (a large number of
> > which compilers remove). I like to think that the situation will be
> > pretty similar for contract checks.
>
> That's just wrong, there are cases in which constraint checks have to
> be turned off to meet performance goals.

I said "almost non-existent". The only time it makes sense to turn off constraint checks is in a loop, verified to be very "hot" in terms of program performance, and no better algorithm is available. And even then, you would be better off restructuring the loop to move the checks outside of it rather than turning them off (that's usually possible with the addition of subtypes).

...

> > Which is fine. But if you write complicated (and non-provable)
> > contracts, I hope your compiler cuts you off at the knees. (My
> > fantasy compiler surely will complain loudly about such things.)
> > Keep contracts and debugging stuff separate! (And anything that
> > can't be part of the production system is by definition debugging
> > stuff!)
>
> I think you definitely are in fantasy land here, and I don't see any
> point trying to set you straight, since you proclaim this to be a
> fantasy.

I'd like you to try. The only reason I proclaimed this a fantasy is because I have no idea if I'll ever be able to spend the 2-3 months of time to implement aspect_specifications and Pre/Post in the Janus/Ada *front-end*. That will be a *lot* of work. But the "proving" is just the stuff the Janus/Ada optimizer (and I would expect every other compiler optimizer) already does. I'd add two very simple extensions to what it already does: (1) add support for "facts" to the intermediate code -- that is, expressions that the optimizer knows the value of without explicitly evaluating them (so they would have no impact on generated code); (2) extend common-subexpression elimination to function calls of functions that are provably pure (using a new categorization).

The only real issue I see with this is that the Janus/Ada optimizer is not set up to give detailed feedback to the user, which means that it can only report that it was unable to "prove" a postcondition -- it can't give any indication as to why. Obviously that is something that will need future enhancement.

> > True enough. But what worries me is lumping together things that are
> > completely different. Debugging stuff can and should be turned off,
> > and no one should have to think too much about doing so. Contract
> > stuff, OTOH, should only be turned off under careful consideration.
> > It's annoying to put those both under the same setting, encouraging
> > the confusion of thinking these things are the same somehow.
>
> For many people they are the same somehow, because assertions were
> always about contracts...

Hiding contracts in the body was always a bad idea.

In any case, there is clearly a grey area, and it will move as compiler (and other tools) technology improves. But I would say that anything that a compiler cannot use in proving should be an assertion (that is, a debugging aid) rather than part of a contract. That means to me that such things should not change the correctness of the program if turned off, and they should be suppressible independently of those things that can be used automatically.

> > Yes, of course. But as I note, well-written contracts can be (and
> > should be) almost completely eliminated by compilers. So the reason
> > for turning those off should be fairly minimal (especially as
> > compilers get better at optimizing these).
>
> This idea of complete elimination is plain nonsense, and the idea that
> it can happen automatically even more nonsensical.
> Randy, you might want to follow the Hi-Lite project to get a little
> more grounded here and also familiarize yourself with SPARK, and what
> can and cannot be feasibly achieved in terms of proof of partial
> correctness.

I think SPARK is doing more harm than good at this point. It was very useful as a proof-of-concept, but it requires a mindset and leap that prevents the vast majority of programmers from ever using any part of it. The effect is that they are solving the wrong problem.

In order for something to be used by a majority of (Ada) programmers, it has to have at least the following characteristics:
(1) Little additional work required;
(2) Little degrading of performance;
(3) Has to be usable in small amounts.
The last is the most important.

The way I and probably many other Ada programmers learned the benefit of Ada constraint checks was to write some code, and then have a bug detected by such a check. It was noticed that (a) fixing the bug was simple, since the check pinpointed the location of the problem, and (b) finding and fixing the bug would have been much harder without the check. That caused me to add additional constraints to future programs, which found additional bugs. The feedback loop increased the adoption of the checks to the point that I will only consider turning them off for extreme performance reasons.

I think we have to set up the same sort of feedback loop for contracts. That means that simple contracts should have little impact on performance (and in some cases, they might even help performance by giving additional information to the optimizer and code generator). (I also note that this focus provides obvious ways to leverage multicore host machines, which is good, as a lot of these techniques are not easily parallelizable.)

As such, I'm concentrating on what can be done with existing technology -- that is, subprogram-level understanding using well-known optimization techniques. I'm sure we'll want to go beyond that eventually, but we need several iterations of the feedback loop in order to build up customer demand for such things. (And then we'll have to add exception contracts, side-effect contracts, and similar things to Ada to support stronger proofs.)
> > The danger is confusing debugging stuff (like expensive assertions that cannot be left in the production program -- and yes, I've written some things like that) with contract stuff that isn't costly in the first place and should be left in all programs with the possible exception of the handful on the razor's edge of performance. (And, yes, I recognize the need to be able to turn these things off for that razor's edge. I've never argued that you shouldn't be able to turn these off at all, only that they shouldn't be lumped with "expensive assertions" that can only be for debugging purposes.)
>
> I find your distinction between assertions and pre/postconditions etc to be pretty bogus.

Fair enough. But then there will never be any real progress here, because few people will use expensive contracts that slow down their programs. And I *really* don't want to go back to the bad old days of multiple versions of programs to test (one with checks and one without). It's like the old saw says: create and test a car with seatbelts, then remove them in the production version.

So I want to separate the contracts (an integral part of program correctness, and almost never turned off) from the assertions (expensive stuff that has little impact on program correctness, can be turned off without impact). I suspect that one probably could easily find several additional categories, which you may want to control separately.

Note that one of the reasons why I supported the "assert_on_start" and "assert_on_exit" ideas is that these provide a place to put those expensive assertions that cannot be reasoned about with simple technology. I surely agree that just because something is expensive is no good reason not to include it. But simply because you have a few expensive assertions around is no good reason to turn off the bulk of the contracts which are cheap and easily eliminated completely.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011 2:19 PM

...

> My view of assertions is a bit different: an assertion is something that ought to be True at a given point, but it doesn't affect the correctness of the program. IMHO, checks that do affect the correctness of the program should not be given as an assertion (pragma Assert), but rather be directly part of the program. For instance, if you have an if statement that handles two possible cases, but other cases are not handled, there ought to be some else branch with a bug box or exception raise in it -- not some assertion that can be ignored. (I've yet to see a case where a pragma Assert could do something that you couldn't do with regular code; that's not true of our old Condcomp facility, which allows conditionally compiled declarations and pragmas.)

OK, for me, I am with Tuck: assertions are definitely about preconditions and postconditions, and if false, something is just as wrong as if a Pre or Post aspect was violated.

> I realize that pragma Assert can be used in other ways than I outlined above, but I always thought that the above was the intent -- it would surely be the way that I'd use it (if we had implemented it at all; we have not done so to date, so I've never actually used it).

No, that's not at all the intent as I see it; in fact I find your interpretation very odd.
> Thus, pragma Assert is purely a debugging aid; it never has an effect on any sort of correctness and it can be ignored without any harmful effects.

I absolutely disagree with this view of assertions. The primary purpose of assertions is to inform the reader of pre and post conditions for sections of code. They are valuable even if turned off, and doing the same thing with regular code is actively confusing. Assertions are NOT part of the program, they are statements about the program (aka contracts), at least as I use them. Yes, they can be turned on, and just as Pre/Post are useful debugging aids when turned on, so are our assertions. Once again, I see no substantial differences between assertions and pre/post conditions.

Have a look for example at the sources of einfo.ads/adb in the GNAT compiler. The subprograms in the body are full of assertions like:

>    procedure Set_Non_Binary_Modulus (Id : E; V : B := True) is
>    begin
>       pragma Assert (Is_Type (Id) and then Is_Base_Type (Id));
>       Set_Flag58 (Id, V);
>    end Set_Non_Binary_Modulus;

Here the assert is to be read as, and behaves as, a precondition. Any call that does not meet this precondition is wrong. Yes, it would be much better if these were expressed as preconditions in the spec, but we didn't have that capability fifteen years ago. We have an internal ticket to replace many of our assertions with pre/post conditions, but that's not always feasible. Consider this kind of code:

   if Ekind (E) = E_Signed_Integer then
      ...
   elsif Ekind (E) = E_Modular_Integer then
      ...
   else
      pragma Assert (Is_Floating_Point_Type (E));
      ...

Here the pragma Assert is saying that the only possibility (in the absence of someone screwing up preconditions somewhere) is a floating-point type. It does not need to be tested, since it is the only possibility, but this is very useful documentation, and of course we can turn on assertions, and then, like all contracts, we enable useful debugging capabilities.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011 2:21 PM

Overall comment from Robert. I don't think anything needs to be changed here. Let's let usage determine the need for finer grained control, and next time around, we can consider whether to standardize some of this finer control.

In practice it won't make much difference; most implementations go out of their way to accommodate pragmas/attributes/restrictions etc. from other implementations. We have dozens of pragmas that are there just because some other implementation invented them.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011 2:23 PM

> Thinking more about future users...:
>
> What I would like to be able to do is to turn off Assertions and Postconditions in MY code, because I have verified it to my heart's content. But what I would like to continue checking is the Preconditions of services that my library offers. After all, there is no limit to the s... of people (not reading contracts) and I want to ensure that my code works as advertised.
>
> Unfortunately, it looks like it is an all-or-nothing policy that is provided today. If that issue is (re)opened, I would argue for the differentiation above.

Don't worry, so far every implementation that provides pre and post conditions provides the selective control you ask for. I will be VERY surprised if that does not continue to be the case.
> If it is not re-opened then I want to note for 2020 that the upward incompatibility of billions and billions of lines of code relying on turning off precondition checks exists - and it would be a good thing to cause that incompatibility ;-)

I agree more control is needed; I just think it's premature to try to dictate to implementations what it should be. Let users and customers decide.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011 2:31 PM

> ...
>> But for those who have, they serve very much the same purpose as these new contracts. They are essentially higher-level constraint checks.
>
> That's an abuse of the construct, IMHO.

Very peculiar viewpoint. In that case, most people using assertions are, in your idiosyncratic view, abusing it; but I find this a bit silly, since we know in practice that most users of assertions are in fact carrying out this abuse, regarding it as the proper thing to do.

>> To imply that somehow they are fundamentally different seems like an unjustifiable leap.
>
> They *are* fundamentally different:

No, they aren't.

> (1) As pragmas, they should never have an effect on the dynamic correctness of the program. It should always be possible to erase all of the pragmas and get a functioning program. Anything else is an abuse of pragmas. We got rid of almost all of the offending pragmas in Ada 2012.

That's a purist position that bears no relationship to reality. E.g. if you take away a pragma that specifies the queuing policy, of course the program may fail; I would say half the pragmas in existence, both language defined and impl defined, are like that. Here are some pragmas that affect the dynamic behavior of the program (I really don't know what you mean by dynamic correctness; to most programmers, correct means the program is doing what it is meant to do):

   Detect_Blocking, Default_Storage_Pool, Discard_Names,
   Priority_Specific_Dispatching, Locking_Policy, Restrictions,
   Atomic, Atomic_Components, Attach_Handler, Convention,
   Elaborate_All, Export, Import, Interrupt_Priority, Linker_Options,
   Pack, Priority, Storage_Size, Volatile, Volatile_Components

In fact the number of pragmas that do NOT affect the dynamic behavior of a program is very small. So if your viewpoint is based on this fantasy that pragmas do not affect the correctness of a program, it is built on a foundation of sand.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011 2:32 PM

Really one conclusion from this thread is that there is no consensus on whether a late change is desirable, let alone important enough to do as a late change. So it seems obvious to me that the proper action is to leave things alone. You would need a VERY clear consensus to change something like this at this stage!

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011 2:56 PM

>> But checks can be a menace in terms of deactivated code in a certified environment, so often in a 178B context people DO want to turn off all checks, because they don't want to deal with deactivated code.
>
> I don't understand. A check of the sort I'm talking about is by definition not something "deactivated". Either the compiler can prove it to be OK, in which case it doesn't appear in the code at all (it's essentially a comment), or it is executed every time the program runs. In which case it is an integral part of the code.
> I'd expect problems in terms of 178B for the handlers of check failures (since there wouldn't be an obvious way to execute or verify them), but not the checks themselves. Perhaps you meant the handlers?

I mean that if you have

   if A < 10 then
      raise Constraint_Error;
   end if;

then you will have deactivated code (the raise can never be executed).

Now if you are doing source level coverage (which is really what 178B requires), you may be able to deal with this with a suitable traceability study. But in practice many producers of safety critical code (and not a few DER's) prefer to see every line of object code executed in tests. It is controversial whether 178B requires this, but let me say for sure Robert Dewar requires it :-)

If I was managing a SC project, I would want to see 100% coverage at the object level, and having to deal with all these cases by injection tests would be too painful, so I would probably go with turning checks off. After all, if you are using a SPARK approach in which you prove that no constraint errors can occur, what's the point in leaving in junk dead tests?

> I said "almost non-existent". The only time it makes sense to turn off constraint checks is in a loop, verified to be very "hot" in terms of program performance, and no better algorithm is available. And even then, you would be better off restructuring the loop to move the checks outside of it rather than turning them off (that's usually possible with the addition of subtypes).

This is just totally at odds with how many people use the language. There is nothing necessarily wrong with making sure the language covers RB's idiosyncratic views on how the language should be used, as well as everyone else's idiosyncratic views, but we don't want to design the language so it is ONLY suitable for this usage view.

> I'd like you to try. The only reason I proclaimed this a fantasy is because I have no idea if I'll ever be able to spend the 2-3 months of time to implement aspect_specifications and Pre/Post in the Janus/Ada *front-end*. That will be a *lot* of work.

As I say, I recommend you take a close look both at the SPARK technology and at the Hi-Lite project; these are both very real.

> But the "proving" is just the stuff the Janus/Ada optimizer (and I would expect every other compiler optimizer) already does. I'd add two very simple extensions to what it already does: (1) add support for "facts" to the intermediate code -- that is, expressions that the optimizer knows the value of without explicitly evaluating them (so they would have no impact on generated code); (2) extending common-subexpression elimination to function calls of functions that are provably pure (using a new categorization).

It is MUCH MUCH harder than you think, and lots of people have spent lots of time working on these problems, and the naive optimism you express is not shared by those people.

> Hiding contracts in the body was always a bad idea. In any case, there is clearly a grey area, and it will move as compiler (and other tools) technology improves. But I would say that anything that a compiler cannot use in proving should be an assertion (that is, a debugging aid) rather than part of a contract. That means to me that such things should not change the correctness of the program if turned off, and they should be suppressable independent of those things that can be used automatically.

Well, some preconditions and postconditions belong naturally in the body. Consider a memo function.
The memoizing should be totally invisible in the spec, and generally the caller won't even be able to see or talk about the memo data. But it makes perfect sense to write preconditions and postconditions in the body saying what the memo data must look like on entry and exit. > I think SPARK is doing more harm than good at this point. It was very > useful as a proof-of-concept, but it requires a mindset and leap that > prevents the vast majority of programmers from every using any part of > it. The effect is that they are solving the wrong problem. I have no idea what you mean, but I am pretty sure you are not really familiar with SPARK, or the associated proof technologies. Are you even familiar with the annotation language? > In order for something to be used by a majority of (Ada) programmers, > it has to have at least the following characteristics: > (1) Little additional work required; > (2) Little degrading of performance; > (3) Has to be usable in small amounts. Again, how about looking at Hi-Lite which is precisely about figuring out the extent to which formal methods can be employed using roughly these criteria. Randy, while you sit fantasizing over what can be done, other people are devoting huge amounts of effort to thinking > The last is the most important. The way I and probably many other Ada > programmers learned the benefit of Ada constraint checks was to write > some code, and then have a bug detected by such a check. It was > noticed that (a) fixing the bug was simple since the check pinpointed > the location of the problem, and (b) finding and fixing the bug would > have been much harder without the check. That caused me to add > additional constraints to future programs, which found additional > bugs. The feedback back loop increased the adoption of the checks to > the point that I will only consider turning them off for extreme performance reasons. Yes, we know contraint checks are a relatively simple case. In the SPARK context for example, it has proved practical to prove large applications like iFacts to be free of any run-time errors, but it has NOT proved feasible to prove partial correctness for the whole application. > As such, I'm concentrating on what can be done with existing > technology -- that is, subprogram level understanding using well-known > optimization techniques. I'm sure we'll want to go beyond that > eventually, but we need several iterations of the feedback loop in > order to build up customer demand for such things. (And then we'll > have to add exception contracts, side-effect contracts, and similar > things to Ada to support stronger > proofs.) We are already WAY beyond this. And there is very real customer demand Have a look at http://www.open-do.org/projects/hi-lite/. There are many large scale users of Ada in safety-critical contexts who are very interested, especially in the environment of 178-C in the extent to which unit proof can replace unit testing. > Fair enough. But then there will never be any real progress here, > because few people will use expensive contracts that slow down their > programs. And I > *really* don't want to go back to the bad old days of multiple > versions of programs to test (one with checks and one without). It's > like the old saw > says: create and test a car with seatbelts, then remove them in the > production version. People DO use expensive contracts that slow down their programs, ALL THE TIME! Remember that at AdaCore, we have years of experience here. 
The precondition and postcondition pragmas of GNAT, which have been around for over three years, were developed in response to customer demand by large customers developing large scale safety-critical applications. It really doesn't matter that the tests may be expensive if enabled at run time; they won't be enabled in the final build.

> So I want to separate the contracts (an integral part of program correctness, and almost never turned off) from the assertions (expensive stuff that has little impact on program correctness, can be turned off without impact). I suspect that one probably could easily find several additional categories, which you may want to control separately.

The "almost never turned off" does not correspond with how we see our customers using these features *at all*. And indeed even after we replace many of the assertions in GNAT itself with pre and post conditions, we will normally turn them all off as we do now for production builds of the compiler, since turning assertions on does compromise compiler performance significantly, and this matters to many people. We do builds internally with assertions turned on, and these are indeed useful for debugging purposes.

> Note that one of the reasons why I supported the "assert_on_start" and "assert_on_exit" ideas is that these provide a place to put those expensive assertions that cannot be reasoned about with simple technology. I surely agree that just because something is expensive is no good reason not to include it. But simply because you have a few expensive assertions around is no good reason to turn off the bulk of the contracts which are cheap and easily eliminated completely.

The idea that preconditions and postconditions will be restricted to simple things that can "be reasoned about with simple technology":

a) bears no relation to the way people are actually using the feature;
b) does not correspond with my expectations of how people will use the technology;
c) does not correspond with my recommendations of how people *should* use the technology.

I have no objection to making sure that what we define is reasonably usable for your (to me very peculiar) view of how these features should be used, but I would be appalled to think this was the official view of the language design, or that the design would be compromised by this viewpoint (I find assert_on_start and assert_on_exit deeply flawed if they are motivated by dealing with expensive stuff).

I contest the idea that the bulk of contracts are cheap and easily eliminated. Again, look at Hi-Lite to get a better feel for the state of the art in this respect. Even just eliminating all constraint checks is far from trivial (read some of the SPARK papers on this subject).

****************************************************************

From: Bob Duff
Sent: Wednesday, November 23, 2011 2:59 PM

> ...But I would say that anything that a compiler cannot use in proving should be an assertion (that is, a debugging aid) rather than part of a contract.

Could we please settle on terminology? Preferably somewhat standard terminology?

The term "assertion" means pragma Assert, precondition, postcondition, invariant, and predicate. Bertrand Meyer uses it that way, and I think it makes sense. Probably constraints and null exclusions should also be called assertions.

I really think equating "assertion" and "debugging aid" will cause nothing but confusion.
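To make the cheap-versus-expensive split being argued over above concrete, here is a small sketch in Ada 2012 terms (the package, types, and operation names are invented for illustration; this is not taken from any existing library). The Pre on Push is the kind of inexpensive inbound check a library writer wants left on to protect the body from bad calls, while the Post on Sort re-walks the entire array on every call and is the kind of check one might well disable in a production build.

   package Stacks is

      type Int_Array is array (Positive range <>) of Integer;

      --  An O(N) helper used by the "expensive" contract below.
      function Is_Sorted (A : Int_Array) return Boolean is
        (for all I in A'First .. A'Last - 1 => A (I) <= A (I + 1));

      --  "Expensive" contract: checking the postcondition costs another
      --  full pass over the data on every call.
      procedure Sort (A : in out Int_Array)
        with Post => Is_Sorted (A);

      type Stack (Capacity : Natural) is private;

      function Is_Full (S : Stack) return Boolean;

      --  "Cheap" inbound check: a single comparison that protects the
      --  body from incorrect calls by clients.
      procedure Push (S : in out Stack; X : Integer)
        with Pre => not Is_Full (S);

   private

      type Stack (Capacity : Natural) is record
         Top  : Natural := 0;
         Data : Int_Array (1 .. Capacity);
      end record;

      function Is_Full (S : Stack) return Boolean is (S.Top = S.Capacity);

   end Stacks;

   package body Stacks is

      procedure Sort (A : in out Int_Array) is
         Tmp : Integer;
      begin
         --  Simple insertion sort by adjacent swaps; enough to make the
         --  postcondition hold.
         for I in A'First + 1 .. A'Last loop
            for J in reverse A'First + 1 .. I loop
               exit when A (J - 1) <= A (J);
               Tmp := A (J - 1);
               A (J - 1) := A (J);
               A (J) := Tmp;
            end loop;
         end loop;
      end Sort;

      procedure Push (S : in out Stack; X : Integer) is
      begin
         --  Relies on the precondition for the "stack not full" property.
         S.Top := S.Top + 1;
         S.Data (S.Top) := X;
      end Push;

   end Stacks;

Under an all-or-nothing assertion policy both checks are controlled together; the finer-grained proposals discussed in this thread would allow keeping the inbound check on Push while dropping the postcondition on Sort.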
**************************************************************** From: Randy Brukardt Sent: Wednesday, November 23, 2011 4:11 PM > > I don't understand. A check of the sort I'm talking about is by > > definition not something "deactivated". Either the compiler can > > prove it to be OK, in which case it doesn't appear in the code at > > all (it's essentially a comment), or it is executed everytime the > > runs. In which case it is an integral part of the code. > > > I'd expect problems in terms of 178B for the handlers of check > > failures (since there wouldn't be an obvious way to execute or > > verify them), but not the checks themselves. Perhaps you meant the handlers? > > I mean that if you have > > if A < 10 then > raise COnstraint_Error; > end if; > > then you will have deactivated code (the raise can never be executed). I see. But this is not actually what will be generated in the object code (at least for language-defined checks). > Now if you are doing source level coverage (which is really what 178B > requires), you may be able to deal with this with a suitable > traceability study. But in practice many producers of safety critical > code (and not a few DER's) prefer to see every line of object code > executed in tests. It is controversial whether 178B requires this, but > let me say for sure Robert Dewar requires it :-) > > If I was managing a SC project, I would want to see 100% coverage at > the object level, and having to deal with all these cases by injection > tests would be too painful, so I would probably go with turnning > checks off. After all if you are using a SPARK approach in which you > prove that no constraint errors can occur, what's the point in leaving > in junk dead tests. I agree with this. But it seems to me that you don't need to test every possible check, only one. That's because in practice the code will look something like: Mov EAX,[A] Cmp EAX, 10 Jl Check_Failed And every instruction of the object code here will be executed whether or not the check fails. The branch might even be a jump to the handler, if it is local). I suppose if your requirement is to exercise every possible *path*, then you have to remove all of the checks, but otherwise, you only need remove complex ones with dead code. (And there aren't many of those that are language-defined.) > > I said "almost non-existent". The only time it makes sense to turn > > off constraint checks is in a loop, verified to be very "hot" in > > terms of program performance, and no better algorithm is available. > > And even then, you would be better off restructuring the loop to > > move the checks outside of it rather than turning them off (that's > > usually possible with the addition of subtypes). > > This is just totally at odds with how many people use the language. > There is nothing necessarily wrong with making sure the language > covers RB's idiosyncratic views on how the language should be used, as > well as everyone else's idiosyncratic views, but we don't want to > designt the language so it is ONLY suitable for this usage view. I agree with you in general. I started out as one of those "many people". I eventually learned I was wrong, and also got bigger machines to run my code on. The language should discourage bad practices and misuse, but not to the extent of preventing things that need to be done occasionally. I surely would not suggest eliminating the ability to turn off checks, what bothers me of course is the requirement to treat them all the same, when they are clearly not the same. 
... > > But the "proving" is just the stuff the Janus/Ada optimizer (and I > > would expect every other compiler optimizer) already does. I'd add > > two very simple extensions to what it already does: (1) add support > > for "facts" to the intermediate code -- that is expressions that the > > optimizer knows the value of without explicitly evaluating them (so > > they would have no impact on generated code); (2) extending > > common-subexpression elimination to function calls of functions that > > are provably pure (using a new categorization). > > It is MUCH MUCH harder than you think, and lots of people have spent > lots of time working on these problems, and the naive optimism you > express is not shared by those people. Trying to do it all surely is harder than I am expressing here. But I only am interested in the 75% that is easy, because that has the potential to make contracts as cheap as constraint checks (can't eliminate *all* of those either, it just matters that you eliminate most). > > Hiding contracts in the body was always a bad idea. In any case, > > there is clearly a grey area, and it will move as compiler (and > > other tools) technology improves. But I would say that anything that > > a compiler cannot use in proving should be an assertion (that is, a > > debugging > > aid) rather than part of a contract. That means to me that such > > things should not change the correctness of the program if turned > > off, and they should be suppressable independent of those things > that can be used automatically. > > Well some preconditions and postconditions belong naturally in the body. > Consider a memo function. The memoizing should be totally invisible in > the spec, and generally the caller won't even be able to see or talk > about the memo data. But it makes perfect sense to write preconditions > and postconditions in the body saying what the memo data must look > like on entry and exit. I agree that some things naturally belong in the body -- those clearly are not part of the contract. They're some other kind of thing (I've been calling them assertions, but Bob hates that, so I'll just say that they need a good name - "non-contract assertions" is the best I've got, and it isn't great). I'd argue that such things matter not to the client at all, and probably don't matter much to the implementation of the body either. Which makes them a very different sort of thing to a contract assertion -- even a (contract) postcondition which can be used as an assumption at the call site. > > I think SPARK is doing more harm than good at this point. It was > > very useful as a proof-of-concept, but it requires a mindset and > > leap that prevents the vast majority of programmers from every using > > any part of it. The effect is that they are solving the wrong problem. > > I have no idea what you mean, but I am pretty sure you are not really > familiar with SPARK, or the associated proof technologies. Are you > even familiar with the annotation language? I'm familiar enough to know that there is essentially no circumstance where I could use it, as it has no support for dynamic dispatching, exceptions, or references (access types or some real equivalent, not Fortran 66-type tricks). I know it progressed some from the days when I studied it extensively (about 10 years ago), but not in ways that would be useful to me or many other Ada programmers. Anyway, the big problem I see with SPARK is that you pretty much have to use it exclusively in some significant chunk of code to get any benefit. 
Which prevents the sort of incremental adoption that is really needed to change the attitudes of the typical programmer. ... > Again, how about looking at Hi-Lite which is precisely about figuring > out the extent to which formal methods can be employed using roughly > these criteria. Randy, while you sit fantasizing over what can be > done, other people are devoting huge amounts of effort to thinking The only reason I'm just thinking rather than doing is lack of time and $$$ to implement something. Prior commitments (to Ada 2012) have to be done first. > > The last is the most important. The way I and probably many other > > Ada programmers learned the benefit of Ada constraint checks was to > > write some code, and then have a bug detected by such a check. It > > was noticed that (a) fixing the bug was simple since the check > > pinpointed the location of the problem, and (b) finding and fixing > > the bug would have been much harder without the check. That caused > > me to add additional constraints to future programs, which found > > additional bugs. The feedback back loop increased the adoption of > > the checks to the point that I will only consider turning them off > > for extreme performance reasons. > > Yes, we know contraint checks are a relatively simple case. > In the SPARK context for example, it has proved practical to prove > large applications like iFacts to be free of any run-time errors, but > it has NOT proved feasible to prove partial correctness for the whole > application. I'm not certain I understand what you mean by "partial correctness", but if I am even close, that seems like an obvious statement. And even if it was possible, I'd leave that to others to do. I'm not talking about anything like that; I don't want to prove anything larger than a subprogram. You can chain those together to prove something larger, but I doubt very much it would ever come close to "correctness". I just want to prove the obvious stuff, both for performance reasons (so contracts don't cost much in practice) and to provide an aid as to where things are going wrong. > > As such, I'm concentrating on what can be done with existing > > technology -- that is, subprogram level understanding using > > well-known optimization techniques. I'm sure we'll want to go beyond > > that eventually, but we need several iterations of the feedback loop > > in order to build up customer demand for such things. (And then > > we'll have to add exception contracts, side-effect contracts, and > > similar things to Ada to support stronger > > proofs.) > > We are already WAY beyond this. And there is very real customer demand > Have a look at http://www.open-do.org/projects/hi-lite/. There are > many large scale users of Ada in safety-critical contexts who are very > interested, especially in the environment of 178-C in the extent to > which unit proof can replace unit testing. I looked at the site and didn't see anything concrete, just a lot of motherhood statements. I realize that there had to be a lot of that in the grant proposals, but I would like to see more than that to feel that anything is really being accomplished. How is Hi-Lite going to reach its goals. And clearly, the users requiring 178 and the like are a tiny minority. It's perfectly good to support them (they surely need it), but their needs are far from the mainstream. As such "Hi-Lite" is just a corner of what I want to accomplish. 
(If you don't have big goals, there is no point; and if the goals are attainable you probably will end up disappointed. :-) I would like to see an environment where all programmers include this sort of information in their programs, and all compilers and tools use that information to detect (some) bugs immediately. That can only be approached incrementally, and it requires a grass-roots effort -- it's not going to happen just from big projects where management can mandate whatever they want/need. ... > People DO use expensive contracts that slow down their programs, ALL > THE TIME! Remember that at AdaCore, we have years of experience here. > The precondition and postcondition pragmas of GNAT which have been > around for over three years, were developed in response to customer > demand by large customers developing large scale safety-critical > applications. > It really doesn't matter than the tests may be expensive if enabled at > run time, they won't be enabled in the final build. Which is exactly wrong: leaving the garage without seatbelts. I understand that there are reasons for doing this, but it doesn't make it any more right. And I'm personally not that interested in "safety-critical applications", because these people will almost always do the right thing ultimately -- they have almost no other choice (they'd be exposed to all sorts of liability otherwise). I'm much more concerned about the mass of less critical applications that still would benefit from fewer errors (that would be all of them!) I've wasted too much of my (and your) time with this, so I'll stop here. No need to convince each other -- time will tell. **************************************************************** From: Randy Brukardt Sent: Wednesday, November 23, 2011 4:25 PM > The term "assertion" means pragma Assert, precondition, postcondition, > invariant, and predicate. Bertrand Meyer uses it that way, and I > think it makes sense. Probably constraints and null exclusions should > also be called assertions. > > I really think equating "assertion" and "debugging aid" will cause > nothing but confusion. OK, I'll stop trying to forward this discussion. I could live with some better terminology for the separation, but the separation is clearly real and those that chose to ignore it force programmers into a corner where they have to choose between a too-slow program with checks on, or a decently performing program that is no better than a C program. We have the technology to prevent the need of this Hobson's choice, and it's sad not to be able to get any traction with it. Beyond that, I find it incredibly sad that people want to drive Ada into a box where it is only useful for large safety-critical applications. *All* applications can benefit from this sort of technology, but not if it is turned into a difficult formal language that only a few high priests can understand. And that's what I'm getting out of this discussion. Probably you are all right, and there is no real value to the beginner and hobbyist anymore, since they can't be monetized. Probably I'd be best off stopping thinking for myself and just becoming one of the sheep punching a clock. Because the alternative isn't pretty. **************************************************************** From: John Barnes Sent: Wednesday, November 23, 2011 4:30 PM Partial correctness simply means that a program is proved to be correct provided it terminates. It's nothing to do with the proof being otherwisee flaky. 
Mostly if a program has been proved to be partially correct then it really is correct, but there are amusing examples where all the proof comes out OK yet the program is flawed because it cannot be proved to terminate in some cases.

There is a simple example on page 289 of my Spark book. It's a factorial function. It looks OK but it loops forever in one case.

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011 4:57 PM

> Partial correctness simply means that a program is proved to be correct provided it terminates. It's nothing to do with the proof being otherwise flaky. Mostly if a program has been proved to be partially correct then it really is correct, but there are amusing examples where all the proof comes out OK yet the program is flawed because it cannot be proved to terminate in some cases.

In the SPARK class, one of the examples was an integer square root. You were not asked to prove partial correctness, but I did anyway. I programmed a binary search, and tested as the termination condition that I actually had the square root. Obviously this only terminates if it correctly computes the square root. However, the trick in binary searches is always to get the termination correct and make sure you don't go into an infinite loop :-) Of course I did not prove that.

> There is a simple example on page 289 of my Spark book. It's a factorial function. It looks OK but it loops forever in one case.

Yes, a very nice example.

BTW Randy, it would be a mistake to think that Hi-Lite is ONLY relevant to 178B/C environments; the purpose is precisely to make proof of preconditions and postconditions more generally accessible.

****************************************************************

From: John Barnes
Sent: Thursday, November 24, 2011 9:31 AM

I contrived it by accident when giving a Spark course in York. One thing it taught me was the value of for loops, which by their very nature always terminate. And by contrast I always avoid while loops.

****************************************************************

From: Robert Dewar
Sent: Thursday, November 24, 2011 9:53 AM

Well, *always* sounds a bit suspicious; not everything is in the primitive recursive domain!

****************************************************************

From: Robert Dewar
Sent: Wednesday, November 23, 2011 4:51 PM

> I see. But this is not actually what will be generated in the object code (at least for language-defined checks).

Yes it is. What else are you thinking of? (Certainly you don't want to use the built-in x86 operations like the range check; they are pipeline catastrophes.)

> I agree with this. But it seems to me that you don't need to test every possible check, only one. That's because in practice the code will look something like:
>
>    Mov EAX,[A]
>    Cmp EAX, 10
>    Jl  Check_Failed

Well, that's not what it will look like for us, since we maintain traceability on what check failed.

> I suppose if your requirement is to exercise every possible *path*, then you have to remove all of the checks, but otherwise, you only need remove complex ones with dead code. (And there aren't many of those that are language-defined.)

Right, if you have a JL, it has to execute both ways even in simple coverage.

> I agree with you in general. I started out as one of those "many people". I eventually learned I was wrong, and also got bigger machines to run my code on.
> The language should discourage bad practices and misuse, but not to the extent of preventing things that need to be done occasionally. I surely would not suggest eliminating the ability to turn off checks; what bothers me of course is the requirement to treat them all the same, when they are clearly not the same.

But checks are seldom turned on.

> I'm familiar enough to know that there is essentially no circumstance where I could use it, as it has no support for dynamic dispatching, exceptions, or references (access types or some real equivalent, not Fortran 66-type tricks). I know it progressed some from the days when I studied it extensively (about 10 years ago), but not in ways that would be useful to me or many other Ada programmers.

Well, do you write safety-critical code at all? If not, you really don't have the right context; people usually severely subset in the SC case. Certainly you would be very surprised to see access types and dynamic storage allocation in an SC app. So the SPARK restrictions in practice correspond with the subset that people use anyway.

> Anyway, the big problem I see with SPARK is that you pretty much have to use it exclusively in some significant chunk of code to get any benefit. Which prevents the sort of incremental adoption that is really needed to change the attitudes of the typical programmer.

Well, SC apps are more likely to be generated from scratch anyway, and we are not talking typical programmer when it comes to SC apps. So SPARK is not for the environment you are thinking in terms of (general purpose programming using the whole language).

> ...
>> Again, how about looking at Hi-Lite which is precisely about figuring out the extent to which formal methods can be employed using roughly these criteria. Randy, while you sit fantasizing over what can be done, other people are devoting huge amounts of effort to thinking
>
> The only reason I'm just thinking rather than doing is lack of time and $$$ to implement something. Prior commitments (to Ada 2012) have to be done first.

Well, take a quick look at the Hi-Lite pages -- how much effort is that? I think you will find them interesting.

> I'm not certain I understand what you mean by "partial correctness", but if I am even close, that seems like an obvious statement. And even if it was possible, I'd leave that to others to do. I'm not talking about anything like that; I don't want to prove anything larger than a subprogram. You can chain those together to prove something larger, but I doubt very much it would ever come close to "correctness". I just want to prove the obvious stuff, both for performance reasons (so contracts don't cost much in practice) and to provide an aid as to where things are going wrong.

Gosh, I am quite surprised you don't know what partial correctness means. This is a term of art in proof of program correctness that is very old, at least decades (so old I have trouble tracking down first use; everyone even vaguely associated with proof techniques knows this term).

Briefly, proving partial correctness means that IF the program terminates THEN your proof says something about the results. If a single subprogram is involved, then it would mean for example proving that IF the preconditions hold, and IF the subprogram terminates, then the postconditions hold. Proving termination (total correctness) is far, far harder.

> I looked at the site and didn't see anything concrete, just a lot of motherhood statements.
I realize that there had to be a lot of that in > the grant proposals, but I would like to see more than that to feel > that anything is really being accomplished. How is Hi-Lite going to > reach its goals. Did you read the technical papers? I don't think you looked hard enough! > And clearly, the users requiring 178 and the like are a tiny minority. > It's perfectly good to support them (they surely need it), but their > needs are far from the mainstream. As such "Hi-Lite" is just a corner > of what I want to accomplish. (If you don't have big goals, there is > no point; and if the goals are attainable you probably will end up > disappointed. :-) I would like to see an environment where all > programmers include this sort of information in their programs, and > all compilers and tools use that information to detect (some) bugs > immediately. That can only be approached incrementally, and it > requires a grass-roots effort -- it's not going to happen just from big > projects where management can mandate whatever they want/need. Randy, I think you have a LOT to study before understanding the issues here :-) > Which is exactly wrong: leaving the garage without seatbelts. I > understand that there are reasons for doing this, but it doesn't make > it any more right. If you can prove the car won't crash, seatbelts are not needed. It is pretty useless for a flight control system to get a constraint error :-) **************************************************************** From: Robert Dewar Sent: Wednesday, November 23, 2011 5:04 PM >> I really think equating "assertion" and "debugging aid" will cause >> nothing but confusion. > > OK, I'll stop trying to forward this discussion. > > I could live with some better terminology for the separation, but the > separation is clearly real Well you think it's real, that does not make it "clearly real", and I don't find the separation meaningful at all. And I gather that Tuck agrees with this. > and those that chose to ignore it force programmers into a corner > where they have to choose between a too-slow program with checks on, > or a decently performing program that is no better than a C program. > We have the technology to prevent the need of this Hobson's choice, > and it's sad not to be able to get any traction with it. It's because we can't agree on how it should be done! In the GNAT situation you could have pragma Check (Expensive, condition, string) pragma Check (Cheap, condition, string) and then control activation of Cheap or Expensive with e.g. pragma Check_Policy (Expensive, Off); pragma Check_Policy (Cheap, On); which seemed to me useful when I implemented it, but in fact we never saw a customer use this feature, and we never used it ourselves internally, so perhaps I was mistaken in thinking this a useful idea. For sure, the artificial attempt to corral cheap stuff into the fold of pragma Assert, and expensive stuff into the fold of preconditions/postconditions seems seriously misguided to me! > Beyond that, I find it incredibly sad that people want to drive Ada > into a box where it is only useful for large safety-critical > applications. *All* applications can benefit from this sort of > technology, but not if it is turned into a difficult formal language > that only a few high priests can understand. And that's what I'm getting out of this discussion. Nobody is saying that, it is just that your suggestions here make no technical sense to some of us. 
You will have to do a better job of convincing people, or perhaps decide you are wrong; so far I have not seen any sympathetic response to the claim that assertions are about cheap debugging stuff, and pre/post conditions are about expensive contracts.

> Probably you are all right, and there is no real value to the beginner and hobbyist anymore, since they can't be monetized. Probably I'd be best off stopping thinking for myself and just becoming one of the sheep punching a clock. Because the alternative isn't pretty.

That's just not true; we have huge numbers of students in the GAP program enthusiastically using Ada, and the precondition/postcondition feature is one we get student questions about all the time (there are also several university courses focused on programming by contract using the Ada 2012 features).

****************************************************************

From: Yannick Moy
Sent: Thursday, November 24, 2011 3:33 AM

>> We are already WAY beyond this. And there is very real customer demand. Have a look at http://www.open-do.org/projects/hi-lite/. There are many large scale users of Ada in safety-critical contexts who are very interested, especially in the environment of 178-C in the extent to which unit proof can replace unit testing.
>
> I looked at the site and didn't see anything concrete, just a lot of motherhood statements. I realize that there had to be a lot of that in the grant proposals, but I would like to see more than that to feel that anything is really being accomplished. How is Hi-Lite going to reach its goals?

For something concrete, look at this page, with a small example of Ada 2012 code whose contracts and absence of run-time errors are proved with our tool (called gnatprove):
http://www.open-do.org/projects/hi-lite/a-database-example/

How we do it is presented in various papers pointed to by the main page:
- the high-level view: http://www.open-do.org/wp-content/uploads/2011/02/DewarSSS2010.doc
- a short introduction: http://www.open-do.org/wp-content/uploads/2011/06/Hi_Lite_Contract.pdf
- the proof tool-chain: http://www.open-do.org/wp-content/uploads/2011/06/Why_Hi_Lite_Ada.pdf
- the modified "formal" containers: http://www.open-do.org/wp-content/uploads/2011/06/Correct_Code_Containing_Containers.pdf

> And clearly, the users requiring 178 and the like are a tiny minority. It's perfectly good to support them (they surely need it), but their needs are far from the mainstream. As such "Hi-Lite" is just a corner of what I want to accomplish.

We are clearly targeting DO-178 users in Hi-Lite, but not only them. As an example, a partner of the project is Astrium, which is in the space industry, where DO-178 does not apply. And customers who have expressed interest in Hi-Lite come from a variety of industries.

> (If you don't have big goals, there is no point; and if the goals are attainable you probably will end up disappointed. :-) I would like to see an environment where all programmers include this sort of information in their programs, and all compilers and tools use that information to detect (some) bugs immediately.

We agree then! The technology we develop has this potential, because:

* We are not restricting the user in his code. Instead, we detect automatically which subprograms we can analyze.

* We don't require any user work upfront. You can start with the default implicit contract for subprograms of True for both precondition and postcondition!
And the effects of subprograms (global variables read/written) are generated automatically.

* We don't require an expensive analysis. Each subprogram is analyzed separately, based on the contracts of the subprograms it calls.

> That can only be approached incrementally, and it requires a grass-roots effort -- it's not going to happen just from big projects where management can mandate whatever they want/need.

I don't think Hi-Lite qualifies as a "big project", if you mean by that something immobile and monolithic. We expect there will be many uses of the technology, and we will try to accommodate as many as possible.

BTW, Hi-Lite is a very open project, so feel free to participate in technical discussions on the hi-lite-discuss mailing list (http://lists.forge.open-do.org/mailman/listinfo/hi-lite-discuss), and let us know if you would like to be involved in any way.

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, November 24, 2011 8:06 AM

Let's summarize the positions (correct me if I'm wrong):

The D view: A program is a whole. If there's a bug in it, it is a bug, irrespective of whether it is to be blamed on the caller or the callee. If checks are disabled, they are disabled as a whole. That's what the important (i.e. paying) customers -- most of them in safety-critical applications -- want.

The B view: There are library providers and library users. The library provider is responsible for his own bugs, and wants to be protected from incorrect calls from the user. The user must be aware of the right protocol to call the library. It makes sense to disable checks internal to the library (algorithmic checks), but not those that enforce the contract with the user. Users are the public at large, by far the highest number of users; few of them write critical applications. Not many are paying customers, but this is the public we should try to conquer.

Personally, I understand D, but tend to agree with B. For example, the whole ASIS interface follows the B model: each query has an "expected element kind", and I would feel terribly insecure if the checks on the element kind were removed when I put a pragma to increase the speed of my own program. (Of course, currently these checks are not implemented as preconditions, but they could be.)

Now, what is the issue? To have one or two pragmas to disable some checks, either together or separately. Those who want to disable all checks together can easily do so with two pragmas.

****************************************************************

From: Tullio Vardanega
Sent: Thursday, November 24, 2011 8:24 AM

For what my opinion is worth here, I find JPR's summary useful for the layman to get to grips with the avalanche of views aired in this (very interesting) thread.

****************************************************************

From: Robert Dewar
Sent: Thursday, November 24, 2011 8:36 AM

> Now, what is the issue? To have one or two pragmas to disable some checks, either together or separately.
> Those who want to disconnect the two aspects are unable to do so if there is only one pragma. Moreover, it's not like a big change that could bring nasty consequences all over the language. So I think there should really be two pragmas, even at this late stage (I even think that contract violations should have their own, different, exception, but I don't have much hope of getting support on this).

I strongly object to changing things in an incompatible way from the way they are now. Turning off assertions should turn off all assertions, including preconditions and postconditions.

But I don't have any strong feeling about introducing new gizmos that provide more control (GNAT already has a far greater level of control than anything that is likely to be stuffed in at the last minute).

I still overall oppose any change at this stage; we just don't have a clear consensus for a last minute change. I would say let the marketplace decide what level of control is needed. After all, in real life such control is provided with a mixture of compiler switches (about which we can say nothing in the RM) and pragmas, and the balance between them has to be carefully considered in real life.

****************************************************************

From: Robert Dewar
Sent: Thursday, November 24, 2011 8:54 AM

> For what my opinion is worth here, I find JPR's summary useful for the layman to get to grips with the avalanche of views aired in this (very interesting) thread.

Indeed! Note that I definitely agree that finer control is needed, and we have implemented this finer control for years in GNAT (remember that we have precondition and postcondition pragmas that work in Ada 95 and have been around for years). So we agree that this kind of control is needed. My concern is trying to figure out and set in stone what this control should be at this late stage, when we lack a consensus about what should be done.

In practice you need both compiler switches and appropriate pragmas to control things. The former (switches) will always be the province of the implementation anyway. To me it is not so terrible if we have to leave some of the latter up to the implementor as well.

****************************************************************

From: Robert Dewar
Sent: Thursday, November 24, 2011 9:00 AM

Let me note that Tullio captures the discussion with Erhard, but it does not capture the very different view between Randy and Robert/Tuck, which can be summarized as follows.

Randy thinks pragma Assert is about simple things that have negligible overhead, and should normally not be suppressed since the overhead is small. But pre/post conditions are about complex things that may need to be suppressed. That's his basis for wanting separate control (QUITE different from Erhard's point of view about standard libraries, which I think we all share). Note that Randy uses the term "assertions" to apply only to pragma Assert, and does not think of pre/post as assertions.

Robert and Tuck don't make this big distinction; for them, following Eiffel thinking, these all come under the heading of assertions, and there is no basis for separate control *on those grounds*.

The Erhard argument about separate control of Pre and Post (I think this would include body asserts as well) is quite different and seems valid, though I can't get too excited: we have this degree of control in GNAT, but we never saw it used either by a customer or by us internally.
But to be fair, we haven't really seen people use pre/post for general library development (we certainly do NOT use them yet in our GNAT run-time library).

****************************************************************

From: Erhard Ploedereder
Sent: Friday, November 25, 2011 6:14 AM

I agree 100% with J.P.

We ought to get it right in the standard, not just the implementation, which later will not budge at all from whatever it implemented initially. Sorry, Robert, but saying that GNAT already provides fine-grained control and therefore the standard need not, simply does not cut it for me.

Further on the subject: Hopefully there is an ENABLE option as well as a SUPPRESS, because as a library writer I have to choose between:

a) make it a PRE, which is really good documentation, or
b) check it explicitly in the body and raise an exception.

If I cannot insist on the PRE-check, I have no choice but to go for b), and I'll be damned if I then provide the same information as a PRE as well. I really do not want to check it twice.

So, having no fine control over PRE is truly bad software-engineering, because it forces users interested in robust software to do the wrong thing.

****************************************************************

From: Robert Dewar
Sent: Friday, November 25, 2011 8:47 AM

> I agree 100% with J.P.
>
> We ought to get it right in the standard, not just the implementation, which later will not budge at all from whatever it implemented initially. Sorry, Robert, but saying that GNAT already provides fine-grained control and therefore the standard need not, simply does not cut it for me.

I just think it is too much of a rush to "get this right in the standard", when there is a wide divergence of opinion on what getting it right means. No one is disagreeing that a finer level of control would be desirable *in the standard*; we are just disagreeing over whether there is time for this relatively non-urgent last minute change given the lack of consensus.

Erhard, make a specific proposal. I think the only chance for a change at this stage (when you have several people opposed to making a change) is to make a proposal; then, if that proposal gets a consensus, we can make the change. If not, we can continue the discussion process, and then when we come to an agreement, we just make it recommended practice and existing implementations can conform right away to the decision. It may well be possible to agree on this so that a decision is made by the time the standard is official.

> So, having no fine control over PRE is truly bad software-engineering, because it forces users interested in robust software to do the wrong thing.

I don't think any users will be forced to do anything they don't like in practice :-)

****************************************************************

From: Bob Duff
Sent: Friday, November 25, 2011 10:31 AM

> Erhard, make a specific proposal, ...

Right, I've lost track of what is being proposed -- it got buried underneath a lot of philosophical rambling (much of which I agree with, BTW). I certainly agree with Erhard that for a library, one wants a way to turn off "internal" assertions, but keep checks (such as preconditions) that protect the library from its clients.

I'm not sure how to accomplish that (e.g., what about preconditions that are "internal", such as on procedures in a private library package? What about when a library calls an externally-visible part of itself?).
And I don't think it's that big of a problem, because vendors will give
their customers what they need. We can standardize existing practice
later (which is really how standards are supposed to be built, anyway!).

****************************************************************

From: Randy Brukardt
Sent: Monday, November 28, 2011  8:51 PM

> > Erhard, make a specific proposal, ...
>
> Right, I've lost track of what is being proposed -- it got buried
> underneath a lot of philosophical rambling (much of which I agree
> with, BTW).
>
> I certainly agree with Erhard that for a library, one wants a way to
> turn off "internal" assertions, but keep checks (such as
> preconditions) that protect the library from its clients.
>
> I'm not sure how to accomplish that (e.g., what about preconditions
> that are "internal", such as on procedures in a private library
> package? What about when a library calls an externally-visible part
> of itself?). And I don't think it's that big of a problem, because
> vendors will give their customers what they need.

I don't think that there have really been too many concrete proposals in
this thread - I'll try to rectify that below.

But as someone who has supported a large third-party library that was
intended to work on multiple compilers, I have to disagree with the above
statement. In order to create and support something like Claw, one has to
minimize compiler dependencies. It is not likely that we could have
depended on compiler-specific features unless they were widely supported.

On top of that, while I agree that vendors will give their *customers*
what they need, that does not necessarily extend to third-party library
vendors (which includes various open source projects as well as paid
vendors), as library creators are not necessarily the customers of the
compiler vendors. Library creators/vendors have a somewhat different set
of problems than the typical customer (need for maximum portability, need
to control as much as possible how the library is compiled, etc.).

So I think it is important that the standard at least try to address
these issues. And it seems silly to claim that it is too hard to do now,
given that this stuff is only implemented in one compiler (that I know
of) right now, and that compiler has finer-grained control over these
things than anything imagined as a solution in the Standard. So I can't
imagine any significant implementation or description problems now --
that will get harder as we go down the road and these things get more
widely implemented.

---

Anyway, enough philosophy. Let me turn to some details. Following are
several proposals (most previously discussed), provided in my estimated
order of importance (note that I try to provide counter-arguments when
they have come up):

Proposal #1: The assertion policy in effect at the point of the
declaration of a subtype with a predicate is the one used to determine
whether it is checked or not (rather than the assertion policy at the
point of the check, as it is currently proposed in the draft Standard).

Reason #1A: It should be possible for the body of a subprogram to be able
to assume that checks are on for all calls to that subprogram. The rules
for preconditions (and invariants and postconditions as well) already
determine whether checks are made by the policy at the point of the
subprogram declaration, which gives this property. It's weird that
predicates are different.
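As a rough illustration of the property #1A asks for (all unit and entity
names below are invented, and the subprogram is stubbed as null so the
two units stand alone):

   --  Library unit, compiled under:
   pragma Assertion_Policy (Check);

   package Defs is
      subtype Even is Integer
        with Dynamic_Predicate => Even mod 2 = 0;
      procedure Process (N : in Even) is null;   --  real work elided
   end Defs;

   --  Client unit, compiled under:
   pragma Assertion_Policy (Ignore);

   with Defs;
   procedure Client (X : in Integer) is
   begin
      Defs.Process (X);
      --  Under Proposal #1 the predicate of Even is checked on this call
      --  anyway, because the policy at Even's declaration was Check, so
      --  the body of Process can rely on N being even.  Under the draft
      --  rule, the Ignore policy in effect at the point of the check
      --  applies and the check is omitted.
   end Client;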
Reason #1B: I'm suggesting using the subtype declaration as the
determining point, as that is similar to the rule for preconditions. A
predicate is a subtype property, so having it depend on some unrelated
subprogram declaration (as Tucker suggested) seems bizarre. And that
would require having a different determination for predicate checks in
other contexts (aggregate components, object initializations, etc.).
Finally, using the subtype declaration should be enough, as a subprogram
body always knows the location of the subtype declarations used in its
parameter profiles.

Rebuttal #1Z: One thing that strikes me about this change is that a body
can know more about the assertions that apply to the parameters than it
can about the constraint checks that were made on those same parameters.
Specifically, if checks are suppressed at a call site, then random junk
can be passed into a subprogram -- and there is nothing reasonable that
the subprogram can do to protect itself. (Rechecking all of the
constraints of the parameters explicitly in the subprogram body is not
reasonable, IMHO.) The erroneousness caused by such suppression bails us
out formally, but that doesn't provide any reassurance in practice. The
concern addressed above for predicates is very similar.

However, just because we got it wrong in 1983 doesn't mean that we have
to get it wrong now. So I don't find the Suppress parallel to be a very
interesting argument.

---

Proposal #2: If the invoked subprogram and denoted subprogram of a
dispatching call have different assertion policies, it is unspecified
which is used for a class-wide invariant.

Reason #2A: This is fixing a bug; invariants and postconditions should
act the same, and this is the rule used for postconditions (otherwise,
the compiler would always be required to make checks in the body in case
someone might turn off the checks elsewhere, not something we want to
require).

Reason #2B: I wrote this as applying only to class-wide invariants, as
the specific invariants can only be determined for the actually invoked
subprogram (so there is no point in applying them at the call site rather
than in the body). We could generalize this to cover all invariants, but
the freedom doesn't seem necessary (and I know "unspecified" scares some
users).

Corollary #2C: We need to define the terms "class-wide invariant" and
"specific invariant", since we keep finding ourselves talking about them,
and the parallel with preconditions (where these are defined terms) makes
it even more likely that everyone will use them. Best to have a
definition.

---

One thing that strikes me is that we seem to have less control over
assertions (including contracts) than we have over constraint checks.
One would at least expect equivalence. Specifically, a subprogram (or
entire library) can declare that it requires constraint checks on (via
pragma Unsuppress). It would make sense to have something similar for
libraries to use for assertions. Thus the next proposal.

Proposal #3: There is a Boolean aspect "Always_Check_Assertions" for
packages and subprograms. If this aspect is True, the assertions that
belong to and are contained in the indicated unit are checked, no matter
what assertion policy is specified.

Reason #3A: There ought to be a way for a program unit to declare that it
requires assertion checks on for proper operation. This ought to include
individual subprograms. I've described this as an aspect since it applies
to a program unit (package or subprogram), but a pragma would also work.
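To make the shape of the proposal concrete, here is a sketch of how the
aspect might be written. Always_Check_Assertions is only a proposed name
(no compiler accepts it today), the package and its operations are
invented, and the package body implementing Push is elided:

   package Stacks
     with Always_Check_Assertions => True   --  proposed aspect, not (yet) Ada
   is
      type Stack is private;
      function Is_Full (S : Stack) return Boolean;

      procedure Push (S : in out Stack; X : in Integer)
        with Pre => not Is_Full (S);
      --  With the aspect True, this precondition is checked even if a
      --  client is compiled with Assertion_Policy (Ignore), much as
      --  pragma Unsuppress keeps constraint checks on.
   private
      Capacity : constant := 100;
      type Stack is record
         Top : Natural := 0;
      end record;
      function Is_Full (S : Stack) return Boolean is (S.Top = Capacity);
   end Stacks;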
This aspect (along with pragma Unsuppress) would allow a library like
Claw to not only say in the documentation (that no one ever reads) that
turning off checks and assertions is not supported, but also to declare
that fact to the compiler and reader (so checks are in fact not turned
off unless some non-standard mode is used).

Rebuttal #3Z: One could do this by applying pragma Assertion_Policy
(Check) as part of a library. There are four problems with this: First, a
configuration pragma only applies to the units with which it actually
appears (not necessarily children or subunits). That means it has to be
repeated with every unit of a set. Second, since the default assertion
policy is implementation-defined, there would be no clear difference
between requiring assertions for correct operation and just wanting
assertions checked (for testing purposes). Third, this requires checking
for the entire library (as a configuration pragma cannot be applied to an
individual subprogram, as Suppress and Unsuppress can). That might be too
much of a requirement for some uses. Finally, as a configuration pragma,
the pragma cannot be part of the library [package] (it has to appear
outside). That makes it more likely that it gets separated from the unit,
and less likely that the requirement will be noted by the reader. None of
these problems is compelling by itself, but as a set they seem to suggest
that something stronger is needed.

Reason #3C: Having a special assertion policy for this purpose (say
"Always_Check") doesn't seem sufficient, because it still has the issues
caused by the pragma being a configuration pragma (as noted in the
Rebuttal).

---

Proposal #4: Provide an additional assertion policy
"Check_External_Only". In this policy, Preconditions and Predicates are
checked, and the others (Postconditions, type invariants, and pragma
Asserts) are ignored.

Reason #4A: The idea here is that once unit testing is completed for a
package, assertions that are (mostly) about checking the correctness of a
body are not (necessarily) needed. OTOH, assertions that are (mostly)
about checking the correctness of calls are needed so long as new calls
are being written and tested (which is usually so long as the package is
in use). So it makes sense to treat these separately.

Alternative #4B: I'm not sure I have the right name here, and the name is
important in this case. My first suggestion was "Check_Pres_Only", which
is a trick, as "Pres" here means PREconditions and PREdicates. But that
could be confused with the Pre aspect only, which is not what we want.

Discussion #4C: In the note above, Bob suggests that calls that are fully
internal ought to be exempted (or something like that). This sounds like
FUD to me - a strawman set up to prove that since we can't get this
separation perfect, we shouldn't do it at all. But that makes little
sense to me; at worst, there will be extra checks that can't fail, and
the user will always have the option of turning off all assertion checks
if some are left on that are intolerable. The proposed rule is simple,
and errs on the side of leaving checks on that might in fact be internal.
It seems good enough (don't let best be the enemy of better! :-)

---

My intent is to put all of these on the agenda for the February meeting,
unless of course we have a consensus here that one or more of these are
bad ideas.

Finally, I'll mention a couple of other ideas that I won't put forth
proposals for, as I don't think that they will get traction.
Not a proposal #5: Provide an additional assertion policy
"Check_Contracts_Only". In this policy, pragma Asserts are ignored, and
all of the others are checked. Given the way I view pragma Assert vs. the
other contracts, this makes sense. But I seem to be in the minority on
this one, and I have to agree that Proposal #4 makes more sense (except
for the lame name), so I am going to support that one and not push for
this.

Not a proposal #6: Have predicate and precondition failures raise a
different exception, Contract_Error. This also makes sense in my world
view, as failures of such inbound contracts are a very different kind of
error than the failure of a pragma Assert or even a postcondition in a
subprogram body. It would make it much clearer whether the cause of the
problem lies within the subprogram or with the call. But this feels like
a bigger change than any of the first four proposals, we already have a
similar problem with Constraint_Error, and I don't want to go too far
here (I'd rather the easy fixes get accomplished than get bogged down in
the harder ones). If someone else wants to take the lead on this idea,
I'll happily support it.

****************************************************************

From: Brad Moore
Sent: Tuesday, December 6, 2011  12:28 AM

...

> On top of that, while I agree that vendors will give their *customers*
> what they need, that does not necessarily extend to third-party
> library vendors (which includes various open source projects as well
> as paid vendors), as library creators are not necessarily the
> customers of the compiler vendors.
> Library creators/vendors have a somewhat different set of problems
> than the typical customer (need for maximum portability, need to
> control as much as possible how the library is compiled, etc.)
>
> So I think it is important that the standard at least try to address
> these issues. And it seems silly to claim that it is too hard to do
> now, given that this stuff is only implemented in one compiler (that I
> know of) right now and that compiler has finer-grained control over
> these things than anything imagined as a solution in the Standard. So
> I can't imagine any significant implementation or description problems
> now -- that will get harder as we go down the road and these things
> get more widely implemented.

I definitely agree with Randy, Erhard, et al. that one needs
finer-grained control to do things like enable preconditions while
disabling postconditions.

I'm wondering, though, if we haven't already provided this control in the
language. One can use statically unevaluated expressions (see RM
4.9(32.1/3-33/3)) to accomplish this:

   package Config is
      Expensive_Postconditions_Enabled : constant Boolean := False;
   end Config;

   with Config; use Config;
   package Pak1 is
      procedure P (Data : in out Array_Type)
        with Pre  => Data'First = 0
                       and then Is_A_Power_Of_Two (Data'Length),
             Post => not Expensive_Postconditions_Enabled
                       or else Correct_Results (Data'Old, Data);
   end Pak1;

This already gives programmers tons of flexibility, as I see it, since
they can define as many such configuration constants as needed, in
whatever combinations they like.
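For instance (all names below are invented, and the body implementing
Sort is elided), separate constants could gate different groups of checks
independently; a disabled branch is statically unevaluated, so a compiler
can fold it away:

   package Check_Config is
      Check_Expensive_Posts : constant Boolean := False;  --  off for releases
      Check_Cheap_Posts     : constant Boolean := True;   --  always on
   end Check_Config;

   with Check_Config; use Check_Config;
   package Sorting is
      type Int_Array is array (Positive range <>) of Integer;

      function Is_Sorted (A : Int_Array) return Boolean is
        (for all I in A'First .. A'Last - 1 => A (I) <= A (I + 1));

      procedure Sort (A : in out Int_Array)
        with Post =>
          (not Check_Cheap_Posts     or else A'Length = A'Old'Length)
            and then
          (not Check_Expensive_Posts or else Is_Sorted (A));
      --  Flipping one constant in Check_Config re-enables that group of
      --  checks without editing Sorting or any of its callers.
   end Sorting;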
One possible advantage of this approach is that it might encourage
leaving the assertion policy enabled and letting the programmer provide
the configuration switches for production versus debugging releases of
the application, rather than applying a brute-force, all-on-or-all-off
approach (or some other coarser-grained policy), which might make
developers and management uneasy without an exhaustive analysis of all
the source.

In spite of these current possibilities, though, I think Randy's
proposals below make sense and are worth pursuing, since it can be
desirable to enable or disable checks without having to modify source
code or implement some sort of configuration-dependent (but
implementation-independent) project-file approach.

****************************************************************