CVS difference for ai05s/ai05-0177-1.txt

Differences between 1.1 and version 1.2
Log of other versions for file ai05s/ai05-0177-1.txt

--- ai05s/ai05-0177-1.txt	2009/10/30 01:16:32	1.1
+++ ai05s/ai05-0177-1.txt	2010/06/14 01:07:27	1.2
@@ -1,13 +1,14 @@
-!standard  13.11.1(3/2)                             09-10-29    AI05-0177-1/01
+!standard  13.11.1(3/2)                             10-06-13    AI05-0177-1/02
 !class Amendment 09-10-29
 !status work item 09-10-29
 !status received 09-03-15
-!priority Low
+!priority Medium
 !difficulty Easy
-!subject Renaming of expressions as functions
+!subject Parameterized expressions
 !summary
 
-Expressions can be renamed as functions; they act like default expressions for
+A parameterized expression can define the body of a function, even in a package
+specification. It acts like a default expression for
 freezing and evaluation purposes.
 
 !problem
@@ -31,27 +32,32 @@
 
 !proposal
 
-   renamed_expression ::=
+   parameterized_expression ::=
       [overriding_indicator]
-      function_specification renames (<*renamed_*expression>);
+      function_specification is (<*parameterized_*expression>);
 
-[Note: The parens are needed to clearly (?) separate renaming an expression as a
-function from renaming some other function as a function. Perhaps it would be
+[Note: The parens are needed to clearly (?) separate a parameterized expression
+from the start of a normal function body. Perhaps it would be
 better to use a keyword somewhere to make it crystal-clear.]
 
-The expected type of the renamed_expression is that of the return subtype of the
-function_specification.
+The expected type of the parameterized_expression is that of the return subtype
+of the function_specification.
 
-Within the renamed_expression, the parameters of the function_specification are
-visible. (This would work the same way as the parameters are visible within a
-precondition.)
+Within the parameterized_expression, the names of the parameters of the
+function_specification are visible.
 
 The dynamic semantics are the same as a function whose body contains nothing but
-"return renamed_expression".
+"return parameterized_expression".
 
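+As a minimal sketch of the proposed syntax (the function name here is invented
+for illustration), a squaring function could be declared as:
+
+   function Square (X : Integer) return Integer is (X * X);
+
+Per the dynamic semantics above, this behaves like a function whose body is
+nothing but "return X * X;".
+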
-This renames could be used as a renames-as-body or as a renames-as-declaration.
-(A renames-as-body could appear in the private part to complete a subprogram
-declared in the visible part, as well as in the body.)
+A parameterized expression can be used as a declaration or as the completion of
+a function specification. If used as a completion, the completion can be given
+anywhere in the same package as the declaration (when the declaration occurs in
+a package). Specifically, parameterized expressions in a private part can
+complete function declarations from the visible part.
+
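+For example, a minimal sketch (the package and names are invented for
+illustration) of a visible function completed in the private part:
+
+   package P is
+      type T is private;
+      function Is_Empty (X : T) return Boolean;
+   private
+      type T is record
+         Count : Natural := 0;
+      end record;
+      function Is_Empty (X : T) return Boolean is (X.Count = 0);
+   end P;
+
+Clients see only the declaration of Is_Empty, while the compiler (and any tool
+that reads the private part) can also see its body.
+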
+For consistency of syntax, we also allow null procedures to complete declarations
+in the same way as parameterized expressions. [Editor's note: I think we should
+be able to use the same wording for that purpose.]
 
 Freezing: The renamed expression would freeze as does a default expression. That
 is, freezing would only be performed at the point of a call of the function,
@@ -61,25 +67,11 @@
 !wording
 
 ** TBD **
+[Lack of time prevented creating wording. The model is fairly well
+understood. - RLB]
 
 !discussion
 
-Elaboration: I believe that freezing is enough for this construct to ensure that
-the types of this construct are elaborated, so that no elaboration check would
-ever need to be made on an call of one of these renames. However, the
-renamed_expression could include a (separate) call that needed an elaboration
-check; in that case the check would have to be made (somewhere - in the body if
-not inlined, and at the point of the inlined call if inlined). It's not
-completely clear to me if there is enough benefit to defining these specially in
-terms of elaboration, although it always seems like a good idea to eliminate
-run-time checks. Perhaps it would be enough to point this fact out in an AARM
-note so that implementors are aware of this to take advantage of it. (Note that
-this is only true if the compiler *knows* that a construct is a renamed
-expression; if this completes a visible function in a package body, then the
-elaboration would be "normal".)
-
----
-
 One could imagine these functions (when visible) to be used in
 static expressions if the inlined expression would be static. This would
 definitely allow abstracting expressions that currently are written as a
@@ -88,17 +80,33 @@
 
 ---
 
+It has been suggested to allow parameterized expressions as generic formal
+subprogram defaults. This would increase the analogy with null procedures.
+
+I didn't include this for three reasons:
+(1) The syntax would be very close to the existing default_name syntax. Adding
+or removing parens would change the semantics substantially.
+(2) This would probably require explicitly creating a body at the point of the
+instantiation. So the implementation effort would be substantially increased.
+(3) We're not likely to make the analogy with null procedures complete (we would
+need the next item as well), so there is going to be an inconsistency somewhere.
+
+Therefore, I stuck with the simpler proposal.
+
+---
+
 An additional idea would be to allow this as a primitive subprogram of an
 interface. If we did that, we would get the benefit of the identity function
 idea (see AI05-0140-1) without adding anything special specifically for that case.
-(If we wanted to add the generic default for that case, then it gets a bit messier,
-but not a huge amount.) The expression would have to be included in the 8.3 homograph
+The expression would have to be included in the 8.3 homograph
 matching rules, probably by borrowing the rules for full conformance of expressions.
 That would be necessary to deal with the Steve Baird problem for diamond inheritance
 with interfaces (that was the primary reason that AI05-0140-1 was scuttled).
+
+Including both this and the formal subprogram default would make the analogy
+with null procedures complete.
+
+---
 
-This would add support for additional things you cannot do now, and actually would
-improve the case for this feature.
 
 !example
 
@@ -1053,3 +1061,1550 @@
 definitional problem).
 
 ****************************************************************
+
+From: Bob Duff
+Date: Thursday, April 29, 2010  7:32 PM
+
+AI05-0177-1.TXT -- Renaming of expressions as functions (Also called "expression
+functions".)
+
+At today's subgroup phone meeting, everybody except me supported this feature.
+I was asked to explain why, and I gave a rather incoherent explanation (I'm not
+good at thinking on my feet).  Plus everybody interrupted with objections before
+I was done.  So let me try to summarize my opinions, for the record.
+
+I think there is definitely some value in exposing the implementations (i.e.
+body) of small-ish subprograms to clients, to allow proof tools to see that
+information.  I don't think that applies JUST to functions used in assertions
+(e.g. preconditions).
+
+"Exposing to clients" should mean "in the visible part".  Private parts and
+bodies are supposed to be hidden from clients.  That's the Ada model.
+
+Expression functions are a poor tool for achieving the "expose to clients" goal,
+because they are restrictive.  Why can't they be procedures? (Tucker suggested a
+solution to that.)  Why can't they have locally-declared constants?  I hate
+these kinds of choices:  I can have the advantages of expression functions, or I
+can have the advantages of local constants, but I can't have both at the same
+time.  Why not?
+
+It's fine with me if some tools (possibly optionally) look at private parts. But
+if they do, they might as well look at bodies, too.  Conceptually, private part
+and body are both part of the "implementation"; only the visible part is
+"specification".  In fact, they might as well look at all type extensions of a
+given root type.  That's what's needed to "expose the implementation" in all
+cases.  So I don't see any advantage to expression functions, for tools that
+wish to break privacy.
+
+But a modular analysis of a client should look only at visible parts of whatever
+it calls.
+
+My final point here is that I think globals annotations are a more fruitful
+endeavor.  As proposed, they're not ready to be standardized, but I think
+implementations can experiment with them, and should give that experimentation
+higher priority than expression functions.
+
+Which leads to a more general point:  There are two extremes for standards:
+
+    (1) Never standardize anything until it's been implemented and used
+        successfully.
+
+    (2) Standardize first, then implement and use.
+
+Both have serious problems.  Ada tends toward (2).  I'm saying we should move
+more in the direction of (1), at least in this case.
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Thursday, April 29, 2010  10:33 PM
+
+> At today's subgroup phone meeting, everybody except me supported this
+> feature.  I was asked to explain why, and I gave a rather incoherent
+> explanation (I'm not good at thinking on my feet).  Plus everybody
+> interrupted with objections before I was done.  So let me try to
+> summarize my opinions, for the record.
+
+Sorry about the interruptions. I think you were pausing to gather your thoughts
+long enough that we thought you were done talking.
+
+...
+> Expression functions are a poor tool for achieving the "expose to clients"
+> goal, because they are restrictive.  Why can't they be procedures?
+> (Tucker suggested a solution to that.)  Why can't they have
+> locally-declared constants?  I hate these kinds of choices:
+> I can have the advantages of expression functions, or I can have the
+> advantages of local constants, but I can't have both at the same time.
+> Why not?
+
+I don't see any way to accomplish a significantly less restrictive vision.
+Honestly, I don't see much point (beyond the easier inlining) to doing this for
+anything other than predicates. For the sorts of functions that occur as
+predicates, either they are trivial combinations of component values, or they
+require some complex operations that have no advantage to being exposed anyway.
+
+For instance, if you need a local variable and a loop to implement a predicate,
+it's not very likely to be analyzable anyway. (At least not with the tools that
+Ada 2012 is going to be providing - we don't have loop invariants and the like.)
+So there is no huge loss to having to put it into a body.
+
+The big win is for the tiny one-liner accessor functions: these might as well be
+exposed to any tool that cares to look (and especially the compiler).
+
+> It's fine with me if some tools (possibly optionally) look at private
+> parts.
+> But if they do, they might as well look at bodies, too.
+> Conceptually, private part and body are both part of the
+> "implementation"; only the visible part is "specification".
+> In fact, they might as well look at all type extensions of a given
+> root type.  That's what's needed to "expose the implementation" in all
+> cases.  So I don't see any advantage to expression functions, for
+> tools that wish to break privacy.
+>
+> But a modular analysis of a client should look only at visible parts
+> of whatever it calls.
+
+Ada allows children to have visibility on the private part, so it's not correct
+to claim that no one should look there. Compilers look there all the time, and
+the runtime correctness of a program *depends* on what is found there. And what
+we're trying to analyze is the runtime correctness, isn't it??
+
+In any case, I think the most important thing is to allow *compilers* to be able
+to do these sorts of analyses, without having to depend on anything that they
+don't naturally have. A compiler always has access to the private part of a
+package, and rarely has access to a body other than the one being compiled.
+Compilers usually don't pay much attention to privacy once they've gotten the
+legality rules out of the way - I don't think our compiler does any analysis on
+the code whatsoever until after the stage where privacy and the like is
+respected. (And of course there are runtime considerations which require
+ignoring privacy.)
+
+Why do I think compilers are so important? It's not so much compilers as
+whatever is the first tool run on the source code to remove errors. It should be
+possible to find the maximum number of errors immediately after finishing
+writing the code for a single unit, because that's when they're easiest to fix.
+These days, the first tool run is almost always the compiler (at least to do
+deep analysis - some IDEs do some syntactic checking). For that to work, no
+other units other than direct dependencies (in particular, no bodies) need
+exist.
+
+The other, related issue for me is that compilers have to be able to eliminate
+most of these contracts in the same way that we eliminate most range checks.
+Like range checks, turning off contracts is like using seat belts until you
+leave the garage - it's something you shouldn't do. The effort to do that can
+naturally be used to also help the programmer see where their contracts are
+iffy. Doing that is necessarily going to require looking under the hood, as all
+optimization does to some extent or other.
+
+> My final point here is that I think globals annotations are a more
+> fruitful endeavor.  As proposed, they're not ready to be standardized,
+> but I think implementations can experiment with them, and should give
+> that experimentation higher priority than expression functions.
+
+You are for some reason equating things that have very different orders of
+magnitude. Expression functions probably will take a week or so of effort to
+implement. Globals annotations are at least an order of magnitude more effort
+(probably two orders). It doesn't make sense to say "don't spend a week
+implementing an easy, useful feature because there is another feature that will
+take 5 months to implement that might be more important". But the effort saved
+will have no impact whatsoever on the results of the globals annotations.
+
+I don't agree that the framework for globals annotations is not ready to be
+standardized. The specific annotations that work best probably will require some
+experimentation, but by not putting together a common framework populated with a
+few important annotations which are obvious (specific items, everything in a
+package, nothing at all), we're either going to end up with a tower-of-babble or
+just as likely nothing at all. Even so, I understand why people feel this way
+and am willing to go without. But what I will not stand for is having supposed
+contracts with no way whatsoever for the compiler to analyze them if they're
+written only in portable Ada. If we don't have one or the other, then I don't
+want ANY new contracts in Ada. And I am going to fight that battle as long as I
+can.
+
+> Which leads to a more general point:  There are two extremes for
+> standards:
+>
+>     (1) Never standardize anything until it's been implemented and used
+>         successfully.
+>
+>     (2) Standardize first, then implement and use.
+>
+> Both have serious problems.  Ada tends toward (2).  I'm saying we
+> should move more in the direction of (1), at least in this case.
+
+I can buy that for globals annotations. Expression functions are completely
+harmless even if you have them and follow your standard of visibility. They
+don't do any good in that case, but they can always be treated as a normal
+function. There's nothing really new here semantically (they work just like any
+other function renames in terms of visibility, primitiveness, and the like). For
+analysis purposes they are cost-free, since they will either be inlined or
+called normally -- both of which an analyzer of any kind can already deal with.
+Finally, since they cost almost nothing to implement (at least compared to any
+other alternative), and even allow writing a few things that aren't currently
+possible (because they defer freezing), it's hard to imagine how they are
+harmful.
+
+You are always telling us to not let "best" prevent us from adopting a solution
+that is "good enough". Here is a solution that is good enough for 80% of
+predicate functions, lets compilers at least have the information they need, has
+very little cost since there is hardly anything new here (just a new combination
+of existing pieces). You're saying that we can't have this solution because it
+isn't "best". And *then* you go on and say we can't have the "best" solution
+either because it needs more study. I can't imagine a more classic example of
+letting "best" kill off "good enough"!!
+
+****************************************************************
+
+From: Tucker Taft
+Date: Friday, April 30, 2010  7:01 AM
+
+I think I agree with Randy that this is a simple feature which will make a
+significant pragmatic difference in the ability for compilers to do additional
+compile-time analysis.  After all, in Ada 83, there was no real reason to have a
+private part, except for pragmatic reasons associated with separate compilation.
+The private part is exactly those things you need to provide the compiler when
+compiling a *client* of an abstraction, to allow the compiler to generate
+efficient code.  These expression functions seem to fit that purpose quite
+exactly.
+
+I don't particularly buy the notion that things in the private part shouldn't
+affect whether some client code succeeds or fails at run-time; predicates are
+clearly run-time semantics, not static semantics.  If a compiler chooses to emit
+warnings about potential run-time failures at compile-time, that certainly is
+its business, and it seems irrelevant whether it gleans that information from
+the visible part, the private part, or the body, since clearly all three of
+those places affect run-time semantics.
+
+As an example of existing private-part information that might affect
+compile-time warnings, consider whether a private type includes full default
+initialization.  If the private type is implemented with a scalar type, then no
+default initialization is provided.  If it is implemented with an access type,
+then it is default initialized to null. If it is implemented using a composite
+type, then how much of it is default initialized varies according to information
+that is specified in the private part.
+
+Clearly if you write:
+
+    X : Priv_Type;
+    Default_Value : constant Priv_Type := X;
+
+it would make sense for the compiler to provide a compile-time warning if
+Priv_Type is in fact implemented with a scalar type (or an array-of-scalar).
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  5:08 PM
+
+I completely  agree with Bob's position on this. I find restricting to an
+expression to be hostile to good coding practice, since it results in having to
+construct awkward complex expressions when simple sets of declarations and
+statements would do better.
+
+I also think these belong in the visible part, what on earth is the point of
+having them in the private part? The private part is simply part of the
+implementation separated off for convenience of the compiler (it should not even
+be in the same file as the spec, and we plan to implement that separation in
+GNAT some day).
+
+Why is it convenient for the compiler to be able to see the body when the
+programmer cannot? Well I guess for inlining, but we already have that
+mechanism.
+
+So in short I see no point whatever in renaming of expressions in the private
+part, and if we allow it in the public part, we should just allow bodies in the
+public part and be done with it.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  5:10 PM
+
+> You are for some reason equating things that have very different
+> orders of magnitude. Expression functions probably will take a week or
+> so of effort to implement. Globals annotations are at least an order
+> of magnitude more effort (probably two orders). It doesn't make sense
+> to say "don't spend a week implementing an easy, useful feature
+> because there is another feature that will take 5 months to implement
+> that might be more important". But the effort saved will have no
+> impact whatsoever on the results of the globals annotations.
+
+By the way, in GNAT it is 5 minutes' work to allow bodies in the spec, because
+we do so anyway; we just have an arbitrary check to forbid it at the source
+level, but we allow it in generated code (and all sorts of bodies are generated
+in the spec).
+
+****************************************************************
+
+From: Bob Duff
+Date: Friday, April 30, 2010  5:19 PM
+
+In fact all Ada compilers allow instance bodies in specs.
+
+But that's a bit different, because nothing in the instance can refer to
+anything in the outer spec (because the generic must have come from elsewhere).
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  5:34 PM
+
+The sort of thing we generate in GNAT is e.g. bodies of equality routines ...
+
+****************************************************************
+
+From: Tucker Taft
+Date: Friday, April 30, 2010  5:34 PM
+
+I personally don't want to see bodies showing up in specs.  I think that really
+confuses the model.
+
+The idea behind limiting the capabilities is that these are intended for simple,
+predicate-like things that need visibility on the full type, so clearly they
+can't be in the visible part. They are in the private part simply so the
+compiler can do a bit more compile-time analysis of preconditions,
+postconditions, and subtype predicates without breaking separate compilation.
+
+This seems to be a case where the "same" feature is of interest to two different
+groups for almost diametrically opposed reasons, which in fact means the two
+groups come up with opposite requirements for the feature, and hence it really
+should be discussed as two different features:
+
+   1) Arbitrary bodies anywhere in specs.
+
+   2) Simple expression-level predicates in the private part.
+
+I might note that we are limiting preconditions and postconditions and
+assertions to expressions, so there is some precedent for having to live without
+temporary variables, etc., when trying to express something relevant to static
+analysis.  That's what these "expression-functions" really are.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  6:03 PM
+
+OK, I see the point of view here, and am inclined to agree with what Tucker says
+below:
+
+[Previous message was quoted here - Editor.]
+
+****************************************************************
+
+From: Bob Duff
+Date: Friday, April 30, 2010  5:55 PM
+
+>    2) Simple expression-level predicates in the private part.
+
+What I don't get is why "simple" should mean exactly "one expression".
+I mean, declaring my one little Square constant (see other e-mail) doesn't imply
+any complexity.  OTOH, an expression can look at globals, so it's not
+necessarily "simple" from an analysis point of view.
+
+Shrug.  I guess I see small payoff for this feature -- but I agree with Randy
+and Tuck that the cost is also fairly small.  We're just disagreeing on the
+ratio.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  6:04 PM
+
+> Shrug.  I guess I see small payoff for this feature -- but I agree
+> with Randy and Tuck that the cost is also fairly small.  We're just
+> disagreeing on the ratio.
+
+Of course if you put in LET, then you are close to a full expression language,
+and if you put in LAMBDA, you are all the way there :-)
+
+****************************************************************
+
+From: Bob Duff
+Date: Friday, April 30, 2010  6:03 PM
+
+> Sorry about the interruptions. I think you were pausing to gather your
+> thoughts long enough that we thought you were done talking.
+
+Yeah, that's a fault of mine -- I can't think quickly when speaking to people,
+so I pause, so people jump in, and that causes me to get even more distracted...
+
+That's why I prefer e-mail for most technical communication.
+
+Of course, face-to-face meetings are better than phone meetings, because you can
+see visual cues about who wants to speak when.
+That's why I like to go to SofCheck for the ARG phone meetings
+-- at least I see ONE person face to face.
+
+> I don't see any way to accomplish a significantly less restrictive vision.
+> Honestly, I don't see much point (beyond the easier inlining) to doing
+> this for anything other than predicates. For the sorts of functions
+> that occur as predicates, either they are trivial combinations of
+> component values, or they require some complex operations that have no
+> advantage to being exposed anyway.
+>
+> For instance, if you need a local variable and a loop to implement a
+> predicate, it's not very likely to be analyzable anyway.
+
+How about a local constant, and no loop?
+
+function Is_Good (X, Y : Integer) return Boolean
+    renames X * X > Y and X * X + 1 <= Y + Blah;
+
+versus:
+
+function Is_Good (X, Y : Integer) return Boolean is
+    Square : constant Integer := X * X;
+begin
+    return Square > Y and Square + 1 <= Y + Blah;
+end Is_Good;
+
+or (to mix in some Lisp-y syntax):
+
+function Is_Good (X, Y : Integer) return Boolean
+     renames (let Square = X * X
+        in (Square > Y and Square + 1 <= Y + Blah));
+
+Anyway, part of my point is that "not very likely to be analyzable"
+is not something we are in a position to judge.  It depends on tools that have
+not yet been designed.
+
+...
+> > It's fine with me if some tools (possibly optionally) look at
+> > private parts.
+> > But if they do, they might as well look at bodies, too.
+> > Conceptually, private part and body are both part of the
+> > "implementation"; only the visible part is "specification".
+> > In fact, they might as well look at all type extensions of a given
+> > root type.  That's what's needed to "expose the implementation" in
+> > all cases.  So I don't see any advantage to expression functions,
+> > for tools that wish to break privacy.
+> >
+> > But a modular analysis of a client should look only at visible parts
+> > of whatever it calls.
+>
+> Ada allows children to have visibility on the private part, so it's
+> not correct to claim that no one should look there.
+
+I didn't say "no one".  I said "clients".  Children are not clients -- they are
+part of the abstraction.  (In fact, if I ran the circus, children would see
+parent bodies.)
+
+>...Compilers look there all the
+> time, ...
+
+Never for legality checks.  Only for code gen.  And for code gen, compilers look
+at bodies (for inlining, and for generic instantiation (except in the case of
+your generic-sharing implementation)).
+
+>...and the runtime correctness of a program *depends* on what is found
+>there. And what we're trying to analyze is the runtime correctness,
+>isn't  it??
+
+Well, yeah, sort of.  But a modular analysis that doesn't break privacy is more
+useful -- it means the analysis will still be correct if you change the
+implementation.
+
+In other words, a proof that says "this thing won't raise C_E in this program"
+is useful, but a proof that says, "this thing won't raise C_E, period, no matter
+what calls it or is called by it" is MORE useful.
+
+The main point of putting various assertions on specs (like preconditions) is to
+enable the latter more-useful kind of analysis.
+
+> In any case, I think the most important thing is to allow *compilers*
+> to be able to do these sorts of analyses, without having to depend on
+> anything that they don't naturally have. A compiler always has access
+> to the private part of a package, and rarely has access to a body
+> other than the one being compiled. Compilers usually don't pay much
+> attention to privacy once they've gotten the legality rules out of the
+> way - I don't think our compiler does any analysis on the code
+> whatsoever until after the stage where privacy and the like is
+> respected. (And of course there are runtime considerations which
+> require ignoring privacy.)
+>
+> Why do I think compilers are so important? It's not so much compilers
+> as whatever is the first tool run on the source code to remove errors.
+> It should be possible to find the maximum number of errors immediately
+> after finishing writing the code for a single unit, because that's
+> when they're easiest to fix. These days, the first tool run is almost
+> always the compiler (at least to do deep analysis - some IDEs do some
+> syntactic checking). For that to work, no other units other than
+> direct dependencies (in particular, no bodies) need exist.
+>
+> The other, related issue for me is that compilers have to be able to
+> eliminate most of these contracts in the same way that we eliminate
+> most range checks. Like range checks, turning off contracts is like
+> using seat belts until you leave the garage - it's something you shouldn't do.
+
+Bad analogy: seat belts cost near-zero to use.
+
+...
+> > My final point here is that I think globals annotations are a more
+> > fruitful endeavor.  As proposed, they're not ready to be
+> > standardized, but I think implementations can experiment with them,
+> > and should give that experimentation higher priority than expression
+> > functions.
+
+> You are for some reason equating things that have very different
+> orders of magnitude. Expression functions probably will take a week or
+> so of effort to implement. Globals annotations are at least an order
+> of magnitude more effort (probably two orders).
+
+Yes, I agree that expression functions are much simpler to implement (and to
+define the semantics of) than globals annotations.
+
+...
+> I don't agree that the framework for globals annotations is not ready
+> to be standardized. The specific annotations that work best probably
+> will require some experimentation, but by not putting together a
+> common framework populated with a few important annotations which are
+> obvious (specific items, everything in a package, nothing at all),
+> we're either going to end up with a tower-of-babble or just as likely
+> nothing at all. Even so, I understand why people feel this way and am
+> willing to go without. But what I will not stand for is having
+> supposed contracts with no way whatsoever for the compiler to analyze
+> them if they're written only in portable Ada. If we don't have one or
+> the other, then I don't want ANY new contracts in Ada. And I am going to fight
+> that battle as long as I can.
+
+I don't see why you want to pound your shoe on the table in this way.
+I mean, preconditions and predicates are useful even if they are just run-time
+checks, with no "proof tools" in sight.
+
+You have to look at the alternative:  Put comments, which are obviously not
+checked at all.
+
+...
+> You are always telling us to not let "best" prevent us from adopting a
+> solution that is "good enough". Here is a solution that is good enough
+> for 80% of predicate functions, lets compilers at least have the
+> information they need, has very little cost since there is hardly
+> anything new here (just a new combination of existing pieces). You're
+> saying that we can't have this solution because it isn't "best". And
+> *then* you go on and say we can't have the "best" solution either
+> because it needs more study. I can't imagine a more classic example of letting
+> "best" kill off "good enough"!!
+
+Your honor, I plead guilty as charged.
+
+****************************************************************
+
+From: Bob Duff
+Date: Friday, April 30, 2010  6:15 PM
+
+> I think I agree with Randy that this is a simple feature ...
+
+I agree it's a simple feature.
+
+>...which will make a significant pragmatic  difference in the ability
+>for compilers to do  additional compile-time analysis.
+
+That part, I doubt.
+
+I don't see how knowing that Is_Empty means "Blah.Count = 0" helps, unless the
+compiler knows all places where the private Count component is modified.  That
+requires looking at the body, not just the private part.
+
+And compilers already know how to look at bodies (for inlining), so I don't see
+any benefit from moving information from the body to the private part.
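+
+For reference, the situation being debated looks roughly like this (Stacks,
+Count, and Is_Empty are invented names; the expression-function syntax shown is
+the "is (expression)" form this AI proposes, used as the completion in the
+private part):
+
+```ada
+package Stacks is
+   type Stack is private;
+   function Is_Empty (S : Stack) return Boolean;
+private
+   type Stack is record
+      Count : Natural := 0;
+   end record;
+
+   --  Completion as an expression function: the meaning of Is_Empty
+   --  ("Count = 0") is now visible to anything that can see the
+   --  private part, without looking at a package body.
+   function Is_Empty (S : Stack) return Boolean is (S.Count = 0);
+end Stacks;
+```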
+
+>...  After all, in Ada 83,
+> there was no real reason to have a private part,
+
+You can stop right there.  ;-)
+
+> except for pragmatic reasons associated with separate compilation.
+> The private part is exactly those things you need to provide the
+> compiler when compiling a *client* of an abstraction, to allow the
+> compiler to generate efficient code.
+
+Efficient code has little to do with static analysis.
+
+And anyway, efficient code requires inlining bodies, sometimes.
+
+...
+> As an example of existing private-part information that might affect
+> compile-time warnings, consider whether a private type includes full
+> default initialization.  If the private type is implemented with a
+> scalar type, then no default initialization is provided.  If it is
+> implemented with an access type, then it is default initialized to
+> null.
+> If it is implemented using a composite type, then how much of it is
+> default initialized varies according to information that is specified
+> in the private part.
+>
+> Clearly if you write:
+>
+>     X : Priv_Type;
+>     Default_Value : constant Priv_Type := X;
+>
+> it would make sense for the compiler to provide a compile-time warning
+> if Priv_Type is in fact implemented with a scalar type (or an
+> array-of-scalar).
+
+Yes, I agree.  That's because there's a flaw in Ada: you can't know from the
+visible part whether it makes sense to declare a default-initialized object of
+type Priv_Type.
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Friday, April 30, 2010  6:42 PM
+
+...
+> > For instance, if you need a local variable and a loop to implement a
+> > predicate, it's not very likely to be analyzable anyway.
+>
+> How about a local constant, and no loop?
+>
+> function Is_Good (X, Y : Integer) return Boolean
+>     renames X * X > Y and X * X + 1 <= Y + Blah;
+>
+> versus:
+>
+> function Is_Good (X, Y : Integer) return Boolean is
+>     Square : constant Integer := X * X; begin
+>     return Square > Y and Square + 1 <= Y + Blah; end Is_Good;
+>
+> or (to mix in some Lisp-y syntax):
+>
+> function Is_Good (X, Y : Integer) return Boolean
+>      renames (let Square = X * X
+>         in (Square > Y and Square + 1 <= Y + Blah));
+
+I thought about this when I wrote my original reply, and it struck me that for
+*constants* you might as well just define a second expression function. One
+presumes the constant has some well-defined meaning (if it doesn't, how could
+you name it and why would you want it?); it's unlikely to hurt to be more
+generally available.
+
+That is, I'd say that you should write:
+
+function Square (X : Integer) return Integer renames (X * X);
+
+function Is_Good (X, Y : Integer) return Boolean is (Square(X) > Y and
+Square(X) + 1 <= Y + Blah);
+
+Presuming inlining, it will generate the same code either way; the amount of
+text is about the same either way.
+
+I didn't think this really answered your concern properly, so I didn't put it
+into my reply. But maybe I was right the first time...
+
+> Anyway, part of my point is that "not very likely to be analyzable"
+> is not something we are in a position to judge.  It depends on tools
+> that have not yet been designed.
+
+This is true, of course.
+
+...
+> >...Compilers look there all the
+> > time, ...
+>
+> Never for legality checks.  Only for code gen.  And for code gen,
+> compilers look at bodies (for inlining, and for generic instantiation
+> (except in the case of your generic-sharing implementation)).
+
+Well, all of these things are technically done at runtime. There aren't any
+legality checks involved. Yes, I realize this is Ada lawyer hairsplitting.
+
+> >...and the runtime correctness of a program *depends* on what is
+> >found there. And what we're trying to analyze is the runtime
+> >correctness, isn't  it??
+>
+> Well, yeah, sort of.  But a modular analysis that doesn't break
+> privacy is more useful -- it means the analysis will still be correct
+> if you change the implementation.
+>
+> In other words, a proof that says "this thing won't raise C_E in this
+> program"
+> is useful, but a proof that says, "this thing won't raise C_E, period,
+> no matter what calls it or is called by it" is MORE useful.
+
+I can't argue with this. But I don't think I want to have to wait until 2025 to
+be able to get anything, and as Tucker says, proofs based only on specs are
+beyond the state of the art today (at least for full-size programming languages
+like Ada).
+
+...
+> > The other, related issue for me is that compilers have to be able to
+> > eliminate most of these contracts in the same way that we eliminate
+> > most range checks. Like range checks, turning off contracts is like
+> > using seat belts until you leave the garage - it's
+> something you shouldn't do.
+>
+> Bad analogy: seat belts cost near-zero to use.
+
+This analogy is used for range checks all the time. The point is that with
+effort, the compiler can make the cost of range checks near-zero.
+
+I think this is an important property of contracts (as opposed to just runtime
+assertions, which definitely don't have this property): the compiler ought to be
+able to eliminate the majority of the checks. (And the ones that remain suggest
+places where the program might have problems.)
+
+If contracts are expensive, no one will use them in the first place. But they're
+an important safety feature, so they need to be reasonably cheap. And in any
+case turning off safety devices to go out in the world is silly. That's when you
+*need* safety devices - to deal with the unforeseen!
+
+(This expense argument is often used against Ada's range and other checks, but
+we Ada people know from experience that it is rare that a program has a
+performance problem specifically because of the presence of these checks.
+Usually the program is plenty fast whether or not they are on, or it is too slow
+either with them on or turned off. You can make the same argument about size of
+checks. Ada provides Suppress for the rare program in the middle, not for some
+sort of general need. All of this applies just as well to all contracts -- and
+in large part to assertions as well. [The vast majority of assertions in my code
+are just a normal part of the code; I don't make any effort to be able to turn
+them off. Why compile/serve web pages/analyze spam without seat belts??])
+
+...
+> > I don't agree that the framework for globals annotations is not
+> > ready to be standardized. The specific annotations that work best
+> > probably will require some experimentation, but by not putting
+> > together a common framework populated with a few important
+> > annotations which are obvious (specific items, everything in a
+> > package, nothing at all), we're either going to end up with a
+> > tower-of-babble or just as likely nothing at all. Even so, I
+> > understand why people feel this way and am willing to go without.
+> > But what I will not stand for is having supposed contracts with no
+> > way whatsoever for the compiler to analyze them if they're written
+> > only in portable Ada. If we don't have one or the other, then I
+> > don't want ANY new contracts in Ada. And
+> I am going to fight that battle as long as I can.
+>
+> I don't see why you want to pound your shoe on the table in this way.
+> I mean, preconditions and predicates are useful even if they are just
+> run-time checks, with no "proof tools" in sight.
+
+Sure, but we don't need new features for that; putting pragma Assert inside of a
+subprogram body will provide that just fine. (Or just using ordinary code, as we
+do in Claw.)
+
+The promise of these new features is for them to be at least partially
+compile-time checkable, with little or no runtime overhead. That's highly
+unlikely to happen for pragma Assert (or anything in the body, for that matter).
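+
+The contrast being drawn is roughly the following sketch (Stack, Push, Max,
+Data, and Is_Full are invented names; the aspect syntax shown is the form
+eventually standardized for Ada 2012):
+
+```ada
+--  Body-only check: enforced at run time, but invisible to callers
+--  and to tools that analyze only specifications.
+procedure Push (S : in out Stack; X : Integer) is
+begin
+   pragma Assert (S.Count < Max);
+   S.Count := S.Count + 1;
+   S.Data (S.Count) := X;
+end Push;
+
+--  The same condition as a spec-visible precondition: communicated
+--  to callers and available to compile-time analysis.
+procedure Push (S : in out Stack; X : Integer)
+   with Pre => not Is_Full (S);
+```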
+
+> You have to look at the alternative:  Put comments, which are
+> obviously not checked at all.
+
+No, the alternative is using pragma Assert and/or ordinary code in the
+subprogram body. Still checked, but not well communicated to the caller. We have
+a name for people who don't check boundary conditions: "C programmers" ;-)
+
+...
+> > You are always telling us to not let "best" prevent us from adopting
+> > a solution that is "good enough". Here is a solution that is good
+> > enough for 80% of predicate functions, lets compilers at least have
+> > the information they need, has very little cost since there is
+> > hardly anything new here (just a new combination of existing
+> > pieces). You're saying that we can't have this solution because it
+> > isn't "best". And
+> > *then* you go on and say we can't have the "best" solution either
+> > because it needs more study. I can't imagine a more classic
+> example of letting "best" kill off "good enough"!!
+>
+> Your honor, I plead guilty as charged.
+
+I sentence you to getting your homework done early. :-)
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  7:06 PM
+
+>> ...  After all, in Ada 83,
+>> there was no real reason to have a private part,
+>
+> You can stop right there.  ;-)
+>
+>> except for pragmatic reasons associated with separate compilation.
+>> The private part is exactly those things you need to provide the
+>> compiler when compiling a *client* of an abstraction, to allow the
+>> compiler to generate efficient code.
+
+Pretty fundamental "pragmatic reasons": the compiler HAS to know the layout of
+types, to allocate space for them if nothing else!
+
+> Efficient code has little to do with static analysis.
+
+True indeed
+
+****************************************************************
+
+From: Bob Duff
+Date: Friday, April 30, 2010  7:46 PM
+
+> Pretty fundamental "pragmatic reasons", the compiler HAS to know the
+> layout of types, to allocate space for them if nothing else!
+
+It doesn't HAVE to.  It HAS to only for efficiency reasons.
+
+And for maximal efficiency, it has to do inlining, which
+(currently) requires looking at bodies.  I don't think the sky would fall if
+compilers also had to take a peek at bodies in order to efficiently allocate
+space for objects of private types.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  8:20 PM
+
+I strongly disagree: avoiding dynamic allocation is not just an efficiency
+issue, it is a fundamental functional issue in certified environments that do
+not allow dynamic allocation.
+
+****************************************************************
+
+From: Bob Duff
+Date: Friday, April 30, 2010  7:32 PM
+
+> I thought about this when I wrote my original reply, and it struck me
+> that for *constants* you might as well just define a second expression function.
+> One presumes the constant has some well-defined meaning (if it
+> doesn't, how could you name it and why would you want it?); it's
+> unlikely to hurt to be more generally available.
+>
+> That is, I'd say that you should write:
+>
+> function Square (X : Integer) return Integer renames (X * X);
+>
+> function Is_Good (X, Y : Integer) return Boolean is (Square(X) > Y and
+> Square(X) + 1 <= Y + Blah);
+
+Good point.
+
+> Presuming inlining, it will generate the same code either way; the
+> amount of text is about the same either way.
+
+Agreed.
+
+> I didn't think this really answered your concern properly, so I didn't
+> put it into my reply. But maybe I was right the first time...
+
+Yes, you were right the first time.  ;-)
+
+I learned something.
+
+> I can't argue with this. But I don't think I want to have to wait
+> until 2025 to be able to get anything, and as Tucker says, proofs
+> based only on specs are beyond the state of the art today (at least
+> for full-size programming languages like Ada).
+
+Well, Tuck's CodePeer tool looks at bodies (transitively), but it does so by
+automatically generating the preconditions and the globals annotations that
+we're discussing.
+
+So I think "beyond state of the art" doesn't mean "the tools aren't smart
+enough", but more like "the tools don't have the information they need".  We're
+trying to give them that info.
+
+> ...
+> > > The other, related issue for me is that compilers have to be able
+> > > to eliminate most of these contracts in the same way that we
+> > > eliminate most range checks. Like range checks, turning off
+> > > contracts is like using seat belts until you leave the garage -
+> > > it's
+> > something you shouldn't do.
+> >
+> > Bad analogy: seat belts cost near-zero to use.
+>
+> This analogy is used for range checks all the time.
+
+Yeah, I know.  Or similar analogies involving floatation devices for sailors,
+and parachutes for airplane pilots. I'm not picking on you -- I think it's
+equally bogus when Tony Hoare does such reasoning-by-sound-bite.  ;-)
+
+>... The point is that with
+> effort, the compiler can make the cost of range checks near-zero.
+
+Sometimes, other times not.
+
+> I think this is an important property of contracts (as opposed to just
+> runtime assertions, which definitely don't have this property): the
+> compiler ought to be able to eliminate the majority of the checks.
+> (And the ones that remain suggest places where the program might have
+> problems.)
+
+I think you're using "contract" in a non-standard way.
+Eiffel's contracts are just run-time checks, just like ours are.
+
+> If contracts are expensive, no one will use them in the first place.
+
+Nonsense.  If something is too expensive for production use, it can still be
+useful for testing.
+
+>...But
+> they're an important safety feature, so they need to be reasonably cheap.
+
+No, they don't need to be.  We would like them to be.
+
+> And in any case turning off safety devices to go out in the world is silly.
+> That's when you *need* safety devices - to deal with the unforeseen!
+
+Nonsense again.  Turning off "safety devices" is an engineering decision based
+on costs/benefit.
+
+> (This expense argument is often used against Ada's range and other
+> checks,
+
+I agree that range checks are analogous.
+
+> ...
+> > I don't see why you want to pound your shoe on the table in this way.
+> > I mean, preconditions and predicates are useful even if they are
+> > just run-time checks, with no "proof tools" in sight.
+>
+> Sure, but we don't need new features for that; putting pragma Assert
+> inside of a subprogram body will provide that just fine. (Or just
+> using ordinary code, as we do in Claw.)
+
+No, it isn't "just fine".  Preconditions get scattered about all callers
+automatically.  That's useful, even if unsound.
+
+> The promise of these new features is for them to be at least partially
+> compile-time checkable, with little or no runtime overhead. That's
+> highly unlikely to happen for pragma Assert (or anything in the body,
+> for that matter).
+>
+> > You have to look at the alternative:  Put comments, which are
+> > obviously not checked at all.
+>
+> No, the alternative is using pragma Assert and/or ordinary code in the
+> subprogram body. Still checked, but not well communicated to the
+> caller. We have a name for people who don't check boundary conditions: "C
+> programmers" ;-)
+
+OK, but preconditions are still better than that (sprinkle checks in various
+useful places, compared to the programmer sprinkling Assert in some of those
+places).
+
+> ...
+> > > You are always telling us to not let "best" prevent us from
+> > > adopting a solution that is "good enough". Here is a solution that
+> > > is good enough for 80% of predicate functions, lets compilers at
+> > > least have the information they need, has very little cost since
+> > > there is hardly anything new here (just a new combination of
+> > > existing pieces). You're saying that we can't have this solution
+> > > because it isn't "best". And
+> > > *then* you go on and say we can't have the "best" solution either
+> > > because it needs more study. I can't imagine a more classic
+> > example of letting "best" kill off "good enough"!!
+> >
+> > Your honor, I plead guilty as charged.
+>
+> I sentence you to getting your homework done early. :-)
+
+Yes, sir, your honor, sir!  ;-)
+
+Seriously, I hope to do it tomorrow or Sunday.  Thank you very much for providing
+the relevant meeting notes!  (which I have not yet looked at)
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  8:18 PM
+
+>> If contracts are expensive, no one will use them in the first place.
+>
+> Nonsense.  If something is too expensive for production use, it can
+> still be useful for testing.
+
+Machines are so fast, that for many users, CPU time is simply not a limited
+resource.
+
+>> ...But
+>> they're an important safety feature, so they need to be reasonably cheap.
+>
+> No, they don't need to be.  We would like them to be.
+>
+>> And in any case turning off safety devices to go out in the world is silly.
+>> That's when you *need* safety devices - to deal with the unforeseen!
+>
+> Nonsense again.  Turning off "safety devices" is an engineering
+> decision based on costs/benefit.
+
+Furthermore, leaving in run-time checks for production builds can be MUCH more
+dangerous than omitting them, if you haven't carefully figured out what to do if
+a check fails. Ariane-5 is a really good example of this.
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Friday, April 30, 2010  8:34 PM
+
+...
+> > > I don't see why you want to pound your shoe on the table in this way.
+
+It's interesting that "pound your shoe" has become a synonym for "take an
+extreme position" within the Ada community. I generally keep my shoes on...
+
+> > > I mean, preconditions and predicates are useful even if they are
+> > > just run-time checks, with no "proof tools" in sight.
+> >
+> > Sure, but we don't need new features for that; putting pragma Assert
+> > inside of a subprogram body will provide that just fine. (Or just
+> > using ordinary code, as we do in Claw.)
+>
+> No, it isn't "just fine".  Preconditions get scattered about all
+> callers automatically.  That's useful, even if unsound.
+
+Sorry, I was talking about putting the Asserts/other code inside the called
+subprogram body. I agree that putting those at the call site is not "just fine".
+But there is no reason to put those at the call site for the purposes of runtime
+checks -- all that does is bloat up the code. The purpose of putting them at the
+call site is to enable optimization/static checking; if you're not doing that,
+you have no reason to not put them inside of the body.
+
+> > The promise of these new features is for them to be at least
+> > partially compile-time checkable, with little or no runtime
+> > overhead. That's highly unlikely to happen for pragma Assert (or
+> > anything in the body, for that matter).
+> >
+> > > You have to look at the alternative:  Put comments, which are
+> > > obviously not checked at all.
+> >
+> > No, the alternative is using pragma Assert and/or ordinary code in
+> > the subprogram body. Still checked, but not well communicated to the
+> > caller. We have a name for people who don't check boundary
+> conditions: "C programmers"
+> > ;-)
+>
+> OK, but preconditions are still better than that (sprinkle checks in
+> various useful places, compared to the programmer sprinkling Assert in
+> some of those places).
+
+No sprinkling going on; those checks are in the called body, not at the call
+site, and surely are going to occur in every call for that reason.
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Friday, April 30, 2010  8:45 PM
+
+Oops, missed one comment.
+
+...
+> > I think this is an important property of contracts (as opposed to
+> > just runtime assertions, which definitely don't have this property):
+> > the compiler ought to be able to eliminate the majority of the checks.
+> > (And the ones that remain suggest places where the program might
+> > have
+> > problems.)
+>
+> I think you're using "contract" in a non-standard way.
+> Eiffel's contracts are just run-time checks, just like ours are
+
+I probably am; see my paper on this topic. I needed a way to differentiate pure
+runtime checks (which I called "assertions") from checks that can usefully be
+analyzed statically (which I called "contracts"). Maybe there is a better term
+for the latter, but that is always what I meant by "contracts" (including in the
+ARG instructions). The big benefit of using Ada is the many ways that Ada
+detects problems and refuses to let you proceed until they're fixed; runtime
+checks are of much less benefit (which is not to say NO benefit!) as they
+require extensive testing to trigger problems and cause what you call a
+"tripping hazard" in fielded systems.
+
+Using this terminology: I'm not interested in any more "assertions" in Ada (that
+is, pure runtime checks); if we get some because of people writing bad
+contracts, that's OK, but they are nearly pointless. (I have more sympathy for
+predicate assertions, because "sprinkling" those is hard;
+precondition/postcondition assertions on the other hand are pretty simple to
+write in the body and I suspect most of us have been writing those since the
+early days of Ada 83).
+
+My preference would be for contracts with real legality rules (like we have for
+parameter passing, generic instantiation, and the like), but that's probably
+unrealistic for anything beyond "set constraints". So I'll live with ones that
+can be reduced/detected as warnings of one sort or another.
+
+My understanding of Eiffel is that it doesn't have any real contracts beyond
+subprogram profiles.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  8:54 PM
+
+>>>> I don't see why you want to pound your shoe on the table in this way.
+>
+> It's interesting that "pound your shoe" has become a synonym for "take
+> an extreme" position within the Ada community. I generally keep my shoes on...
+
+It means that in any context; it comes from Khrushchev and the U-2 affair!
+
+****************************************************************
+
+From: Bob Duff
+Date: Friday, April 30, 2010  9:04 PM
+
+> It's interesting that "pound your shoe" has become a synonym for "take
+> an extreme" position within the Ada community. I generally keep my shoes on...
+
+Sorry.  When I say "pound one's shoe on the table" I am accusing you of being
+somewhat over-the-top in your rhetoric, either for or against some technical
+position.  It's a reference to the famous case where Nikita Khrushchev of the
+Soviet Union actually pounded his shoe on the table to make a point (whatever
+point it was, I don't much care) in a United Nations debate.
+
+There's a description of it here:
+
+http://en.wikipedia.org/wiki/Shoe-banging_incident
+
+So when you say, for example, "that's a nonstarter" or "that is unacceptable", I
+respond that "you're pounding your shoe on the table".  Get it?
+
+When you say, "I don't like that" or "I think that causes maintenance problems",
+then I won't cry "shoe on table".  ;-)
+
+****************************************************************
+
+From: Robert Dewar
+Date: Friday, April 30, 2010  8:57 PM
+
+>> I think you're using "contract" in a non-standard way.
+>> Eiffel's contracts are just run-time checks, just like ours are
+>
+> I probably am; see my paper on this topic. I needed a way to
+> differentiate pure runtime checks (which I called "assertions") from
+> checks that can usefully be analyzed statically (which I called
+> "contracts"). Maybe there is a better term for the latter, but that is always what I meant by "contracts"
+
+That's REALLY confusing terminology. As Bob says, the word contract is strongly
+associated with Eiffel (as in the trademarked phrase programming by contract) and
+in this context it ALWAYS refers to run-time checks.
+
+> Using this terminology: I'm not interested in any more "assertions" in
+> Ada (that is, pure runtime checks); if we get some because of people
+> writing bad contracts, that's OK, but they are nearly pointless. (I
+> have more sympathy for predicate assertions, because "sprinkling"
+> those is hard; precondition/postcondition assertions on the other hand
+> are pretty simple to write in the body and I suspect most of us have
+> been writing those since the early days of Ada 83).
+
+To say that contracts as used in Eiffel are nearly pointless is provocative and
+is definitely an extreme minority position. And for sure we have lots of major
+Ada users who are VERY interested in contracts in the normal sense, as Eiffel
+defines it.
+
+> My preference would be for contracts with real legality rules (like we
+> have for parameter passing, generic instantiation, and the like), but
+> that's probably unrealistic for anything beyond "set constraints". So
+> I'll live with ones that can be reduced/detected as warnings of one sort of another.
+>
+> My understanding of Eiffel is that it doesn't have any real contracts
+> beyond subprogram profiles.
+
+Well you are using contracts in such a confusing manner that I don't know what
+to make of such a comment!
+
+****************************************************************
+
+From: Yannick Moy
+Date: Saturday, May 1, 2010  2:28 AM
+
+>> I probably am; see my paper on this topic. I needed a way to
+>> differentiate pure runtime checks (which I called "assertions") from
+>> checks that can usefully be analyzed statically (which I called
+>> "contracts"). Maybe there is a better term for the latter, but that
+>> is always what I meant by "contracts"
+>
+> That's REALLY confusing terminology. As Bob says, the word contract is
+> strongly associated with Ada (as in the trademarked phrase programming
+> by contract) and in this context it ALWAYS refers to run-time checks.
+
+I think the term "contract" means two different things to different groups of
+people. Historically, it means the run-time checking as performed in Eiffel, and
+it still means this for most people. In the research community of program
+proving, it means of course what Randy means: side-effect free, deterministic,
+terminating boolean expressions.
+
+JML and Spec# take the word of the programmer for it, as only methods marked as
+"pure" in the source can be called from contracts, where pure means essentially
+side-effect free. Here is the JML definition of pure:
+
+http://www.eecs.ucf.edu/~leavens/JML/jmlrefman/jmlrefman_7.html#SEC59
+
+SPARK allows calls to SPARK functions (which cannot have side-effects anyway)
+and logic functions (declared in stylized comments and defined in Prolog syntax
+for the prover).
+
+ACSL only allows calls to logic functions, either defined in stylized comments
+(which look a lot like expression functions), or only declared in stylized
+comments and defined by some axiomatic. Here is the ACSL definition of
+expressions used in contracts (section 2.2):
+
+http://frama-c.com/download/acsl_1.4.pdf
+
+I think it is less confusing to speak about contracts in the Eiffel way, while
+working towards making these contracts amenable to static analysis.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Saturday, May 1, 2010  5:38 AM
+
+> I think the term "contract" is meaning the two things for different
+> groups of people. Historically, it means the run-time checking as
+> performed in Eiffel, and it still means this for most people.
+
+Given the trademarked phrase, I think that's the only way it should be used. How
+about using the phrase "static contracts" when we mean that and "dynamic
+contracts" when we mean that, so we never get confused. Of course part of the
+goal is in some cases to have contracts that can be regarded as either static or
+dynamic (that would be true in Eiffel too, to the extent that Eiffel contracts
+can be treated statically, it would make sense to do so).
+
+> In the research community of program
+> proving, it means of course what Randy means: side-effect free
+> deterministic terminating boolean expressions.
+
+(side comment on side-effect free)
+
+I never know what side-effect free means. For instance, the only possible
+reasonable implementation of a function like Ackermann is to memoize. Does the
+memoization of such a function make it have a side effect, or is it just an
+efficiency detail of the implementation? (In some languages you get the
+memoization effect just by asking for it, without having to program it, and
+there you can most clearly see it is just a low-level implementation detail,
+whereas in this kind of context side-effect free is a high-level semantic
+concept.)
+
+Or if you add a counter to a square root function so that for tuning purposes
+you can tell how many times it is called, then it is still conceptually pure,
+and indeed you want the compiler still to hoist calls, eliminate duplicate calls
+etc.
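+
+The square-root-with-a-counter case can be sketched as follows (Tuned_Math and
+Call_Count are invented names; the point is that the function is conceptually
+pure even though it updates package state):
+
+```ada
+with Ada.Numerics.Elementary_Functions;
+
+package body Tuned_Math is
+
+   Call_Count : Natural := 0;  --  tuning instrumentation only
+
+   --  Conceptually pure: the same input always yields the same
+   --  result, and callers cannot observe any difference if calls
+   --  are hoisted or duplicates eliminated.  Yet each call updates
+   --  Call_Count, so at the memory level it is not side-effect free.
+   function Sqrt (X : Float) return Float is
+   begin
+      Call_Count := Call_Count + 1;
+      return Ada.Numerics.Elementary_Functions.Sqrt (X);
+   end Sqrt;
+
+end Tuned_Math;
+```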
+
+During the discussion of Ada 83, there was some considerable sentiment for
+distinguishing between side effect free functions and value returning procedures
+(a la the DEC pragma). But the discussion ALWAYS got bogged down in discussions
+of points like these, and never got anywhere.
+
+And of course all functions have side effects if you think in terms of the stack
+usage, needing to allocate stack space (and possibly raising SE), not to mention
+leaving traces of the call on the stack.
+
+So for my taste, one should avoid the phrase side-effect free and instead use a
+well defined term in its place that specifies the semantic level at which this
+property is defined.
+
+> SPARK allows calls to SPARK functions (which cannot have side-effects
+> anyway) and logic functions (declared in stylized comments and defined
+> in Prolog syntax for the prover).
+
+Of course at the low level of looking at memory usage, all SPARK functions *do*
+have side effects, and indeed when the claim is made that a program has been
+proved to be run-time exception free, this does not cover Storage_Error! That
+has to be verified at a totally different level.
+
+****************************************************************
+
+From: Tucker Taft
+Date: Saturday, May 1, 2010  8:02 AM
+
+>> ...
+>> Efficient code has little to do with static analysis.
+>
+> True indeed
+
+This one surprises me.  Efficient Ada code implies eliminating unnecessary
+run-time checks.  Good static analysis (e.g. range propagation) is normally the
+key to that.  Now that we have user-written assertions, preconditions,
+postconditions, etc., eliminating the unnecessary checks associated with these
+requires even more static analysis.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Saturday, May 1, 2010  8:20 AM
+
+Yes, of course, but what we (or at least I) meant by this is that issues of
+efficient code generation are not relevant to the *process* of static analysis,
+though indeed the *results* of static analysis most certainly can lead to better
+code generation (e.g. not generating code for dead code that is not obviously
+dead without subtle analysis).
+
+****************************************************************
+
+From: Bob Duff
+Date: Saturday, May 1, 2010  10:29 AM
+
+Right, me too.
+
+Also, there are some important differences between static analysis done to make
+sure the program works, versus making it fast.
+
+For the former, you prefer to be modular (clients don't depend on private parts
+and bodies), so the analysis of one thing can't be invalidated by changes in the
+other.  For the latter, it's just fine to depend on bodies -- that's what
+inlining is all about.
+
+The world desperately needs the former.  For the latter, well, shrug, lots of
+programs are plenty fast enough as it is, and besides, we already have a
+solution for slow run-time checks -- suppress them.  (Don't start shoe pounding
+-- I'm well aware of the costs and benefits of suppressing checks, and I don't
+buy analogies with seat belts and life jackets).
+
+If static analysis can make programs faster without resorting to Suppress,
+that's a nice side effect, but I have zero interest in discussing how to design
+our assertions to accomplish that.
+
+P.S. Note to Randy on terminology: "Assertions" includes pragma Assert,
+pre/post-conditions, invariants, null exclusions, constraints, predicates.  (I'm
+saying this because you asserted during the meeting that "assertion" means only
+pragma Assert.)  "Contract" means that subset of assertions that are visible to
+clients. This is pretty well established terminology (see e.g. Bertrand Meyer's
+writings), so I suggest we not try to change it.
+
+And both terms (assertion and contract) are broad -- they don't imply static
+versus dynamic checking (could be some of both).  This part is admittedly not
+universal (some static analysis zealots claim that an "assertion" must be
+statically checked, or it's not an assertion -- but they're wrong. ;-) )
+
+Even a comment can be called an assertion.  And if it's in the spec, it can be
+called part of the contract.  Those are checked only by "code review".
+
+****************************************************************
+
+From: Bob Duff
+Date: Saturday, May 1, 2010  11:23 AM
+
+> Given the trademarked phrase, I think that's the only way it should be
+> used. How about using the phrase "static contracts" when we mean that
+> and "dynamic contracts" when we mean that, so we never get confused.
+
+I think "contract" means an expression of some property that one piece of code
+agrees to ensure, so another piece of code can rely on it.
+
+"Static" and "dynamic" should not be viewed as properties of a contract, but as
+properties of various tools that automatically check them (or even human beings
+who check them "by hand").
+
+> Of course part of the goal is in some cases to have contracts that can
+> be regarded as either static or dynamic (that would be true in Eiffel
+> too, to the extent that Eiffel contracts can be treated statically, it
+> would make sense to do so).
+
+Right, exactly.
+
+****************************************************************
+
+From: Tucker Taft
+Date: Saturday, May 1, 2010  8:22 AM
+
+I think it might be useful to refer to these two different proposals as:
+
+   Parameterized Expressions
+
+and
+
+   Bodies in Specs
+
+I personally have an interest in Parameterized Expressions, but no interest in
+Bodies in Specs.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Saturday, May 1, 2010  8:28 AM
+
+> I personally have an interest in Parameterized Expressions, but no
+> interest in Bodies in Specs.
+
+I don't think these are so separate, bodies-in-specs is intended to address
+exactly the same issues as renamed expressions, just in a more general form, so
+I don't think it is helpful to try to separate two solutions for the same
+problem.
+
+I do have some sympathy for Tuck's objections to bodies in specs as confusing,
+and for sure the potential for abuse is huge:
+
+"what's all this crap about specs and bodies? let's just put everything in the
+specs and avoid this nonsense!"
+
+****************************************************************
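+
+[Editor's note: For readers following the two proposals, here is an
+illustrative (not normative) sketch of the distinction, using the syntax from
+the !proposal section.  A parameterized expression completes a function with a
+single parenthesized expression, even in a package specification:
+
+    package Stacks is
+       type Stack is private;
+       function Is_Empty (S : Stack) return Boolean;
+    private
+       type Stack is record
+          Top : Natural := 0;
+       end record;
+       function Is_Empty (S : Stack) return Boolean is (S.Top = 0);
+    end Stacks;
+
+whereas "bodies in specs" would additionally allow a full function body, with
+local declarations and statements, to appear in the same place.]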
+
+From: Bob Duff
+Date: Saturday, May 1, 2010  10:44 AM
+
+> I do have some sympathy for Tuck's objections to bodies in specs as
+> confusing, and for sure the potential for abuse is huge:
+
+I agree with you and Tuck that the potential for abuse is huge.
+
+But I have zero sympathy for Tuck's objections -- "potential abuse"
+is NEVER an appropriate language-design criterion.  "Potential for accidental
+misuse" is, but not mere "potential abuse".
+
+This example illustrates why.  Tuck wants to prevent "abuse", but he can't
+define that precisely, so he defines "abuse" as "putting anything besides an
+expression in the spec", which is just a wrong definition.  It prevents some
+(not all) misuse, but also prevents legitimate use (like my example of a local
+constant).  My definition is "putting large, complicated stuff in the spec" --
+but that's not precise enough for a language rule.
+
+> "what's all this crap about specs and bodies? let's just put
+> everything in the specs and avoid this nonsense!"
+
+Sure -- the same folks who think every type should have a non-portable rep
+clause, or unchecked conversions should be scattered all over the place, etc.
+It's not our job as language designers to rein in those folks.  It's your job as
+an NYU prof to educate them.  ;-)
+
+Having said all that, I must admit that my objections to parameterized
+expressions are weak.  I'm not going to quit using Ada if this feature is added.
+In fact, I would use this feature myself, if it existed.
+
+I also admit that it's easy to implement.
+
+P.S. I don't understand why the term "parameterized expressions"
+is preferable to "expression functions" -- I don't much care.
+
+P.P.S. Tucker suggested a similar thing for a procedure, which I imagine looks
+like this:
+
+    procedure Blah (X : T);
+  private
+    procedure Blah (X : T) renames
+      Mumble(X*X); -- A single procedure call goes here.
+
+So what's the term for that?
+
+****************************************************************
+
+From: Robert Dewar
+Date: Saturday, May 1, 2010  11:58 AM
+
+> But I have zero sympathy for Tuck's objections -- "potential abuse"
+> is NEVER an appropriate language-design criterion.  "Potential for
+> accidental misuse" is, but not mere "potential abuse".
+
+For me, I never say never! And in this case, I think the potential for confusion
+and abuse is too high compared to the utility.
+
+I do think that potential abuse is a legitimate consideration.
+For example, suppose we argued for introducing completely unsafe untyped pointers
+to solve some very specific problem with the understanding that they should only
+be used in this situation. I would object that it would simply be too easy for
+people to abuse this feature.
+
+Unchecked_Union is an interesting example for me of a feature on the edge. You
+can easily see it being abused, but I think we are safe given its very specific
+description as being for interfacing with C unions.
+
+> This example illustrates why.  Tuck wants to prevent "abuse", but he
+> can't define that precisely, so he defines "abuse"
+> as "putting anything besides an expression in the spec", which is just
+> a wrong definition.  It prevents some (not all) misuse, but also
+> prevents legitimate use (like my example of a local constant).  My
+> definition is "putting large, complicated stuff in the spec" -- but
+> that's not precise enough for a language rule.
+
+I would prefer to solve the local constant problem by introducing LET.
+
+>> "what's all this crap about specs and bodies? let's just put
+>> everything in the specs and avoid this nonsense!"
+>
+> Sure -- the same folks who think every type should have a non-portable
+> rep clause, or unchecked conversions should be scattered all over the
+> place, etc.
+
+No, I don't buy the analogy in this case, I don't think these cases are similar.
+
+****************************************************************
+
+From: Bob Duff
+Date: Saturday, May 1, 2010  12:43 PM
+
+> For me, I never say never!
+
+I rarely do.  ;-)
+
+> I would prefer to solve the local constant problem by introducing LET.
+
+If we can come up with a good syntax for it, then that seems like a reasonable
+solution.
+
+****************************************************************
+
+From: Tucker Taft
+Date: Saturday, May 1, 2010  1:20 PM
+
+> I agree with you and Tuck that the potential for abuse is huge.
+>
+> But I have zero sympathy for Tuck's objections -- "potential abuse"
+> is NEVER an appropriate language-design criterion.  "Potential for
+> accidental misuse" is, but not mere "potential abuse"...
+
+Well, here we differ.  I believe that part of good language design is
+"encouraging" the programmer to make wise choices, by making some things easier
+than others.  Just giving programmers free and uniform license to do whatever
+they want is a design philosophy, but it isn't mine.  Of course there is a
+spectrum here, but I think you (Bob) and I are on somewhat different places on
+this spectrum.
+
+So for me, if a feature has more "potential for abuse"
+than some other feature, I will favor the one with less such potential, and not
+surprisingly that will sometimes "crimp" someone's style.  If it doesn't do that
+now and then, it is probably too lax in my book.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Saturday, May 1, 2010  1:28 PM
+
+So I seem to be somewhere in between, on this issue I tend to side with Tuck,
+but when it comes to IN OUT parameters in functions, I would be happy to allow
+them without the complexification of the anti-side effect rule.
+
+****************************************************************
+
+From: Robert Dewar
+Date: Saturday, May 1, 2010  1:26 PM
+
+>> I would prefer to solve the local constant problem by introducing
+>> LET.
+>
+> If we can come up with a good syntax for it, then that seems like a
+> reasonable solution.
+
+Well, if you introduce a keyword, it's easy:
+
+LET declaration IN expression
+
+(let A : constant Integer := X * Y in A * A + (A - 1))
+
+If you don't like introducing a keyword (which I think is a reasonable
+reluctance), you could have an unreserved keyword (it would work to have let be
+unreserved), or use one of the following:
+
+     (A * A + (A - 1) with A : constant Integer := 23)
+
+     (do A : constant Integer := 23 in ....)
+
+     (use A : constant Integer := 23 in ....)
+
+or just
+
+     (A : constant Integer := 23 in ...)
+
+****************************************************************
+
+From: Tucker Taft
+Date: Saturday, May 1, 2010  1:42 PM
+
+> ...
+> or just
+>
+>     (A : constant Integer := 23 in ...)
+
+Unfortunately the membership test ("in") creates a possible ambiguity here.
+
+You could perhaps use "with...do" instead of "let ... in"
+
+   (with A = <expression>, B = <expression>, ... do ...)
+
+If these are all constants, I don't see the need for
+": constant <type> :=" when "=" will do.
+
+****************************************************************
+
+[Editor's note: There are another 120 messages in this thread; ran out of
+time to get them filed. They'll appear soon...]

Questions? Ask the ACAA Technical Agent