!standard 11.4.2          08-01-18 AC95-00156/01
!class Amendment 08-01-18
!status received no action 08-01-18
!status received 08-01-10
!subject Contracts in Ada
!summary
!appendix

From: Robert A. Duff
Date: Thursday, January 10, 2008  1:07 PM

I'm interested in discussing AI95-00288 (and perhaps AI95-00375).
These are about adding pre/postconditions and invariants to Ada.
They were voted "No Action" by the ARG.
AdaCore is currently thinking about implementing something
along these lines.

First, why the split between Pre_Assert and Classwide_Pre_Assert?
I think it makes sense, but I'm having trouble coming up with
realistic and compelling examples.

Also, there's a school of OOP thought that says all calls to dispatching ops
should be dispatching calls, except in the case where a method calls its own
parent method.  If you believe that, then Pre_Assert is useless except in that
one not-so-common case.

In Classwide_Pre_Assert, parameters are automatically converted to T'Class, so
we get dispatching by default.  Why?  Why not have nondispatching by default,
and let the programmer explicitly convert to T'Class if they want dispatching
-- that's the way the rest of the language works.

Has anybody thought about how this stuff interacts with interfaces?
In particular, the rules about inheritance and combination of
Classwide_Pre_Asserts.

****************************************************************

From: Tucker Taft
Date: Thursday, January 10, 2008  1:28 PM

> Also, there's a school of OOP thought that says all calls to dispatching ops
> should be dispatching calls, except in the case where a method calls its own
> parent method.  If you believe that, then Pre_Assert is useless except in that
> one not-so-common case.

In Ada, the static binding case is more common because calls
from one method to another are by default statically bound.
There are good reasons for that, as it means you
can treat the implementation of a given method as a
"black box" which is not mysteriously affected when you
inherit some methods and override others.  If the method
really wants to "redispatch" on calling another method,
it can do so, but that is pretty rare, and certainly needs
to be documented as part of its external effect, at least
for the purposes of inheritance.  Because of the
use one method makes of another, it seems quite reasonable
that the precondition for a statically bound call might
be looser, since it wants to serve both external calls
and inter-method calls, whereas the precondition for
dispatching calls might be tighter so you don't impose the
"lax" precondition on all of the overridings.
>
> In Classwide_Pre_Assert, parameters are automatically converted to T'Class, so
> we get dispatching by default.  Why?  Why not have nondispatching by default,
> and let the programmer explicitly convert to T'Class if they want dispatching
> -- that's the way the rest of the language works.

Because these assertions are intended to be applicable to
all overridings, and so a non-dispatching call wouldn't
give the right semantics when applied to an overriding.
It is vaguely analogous to the requirement that the default
expression for a controlling parameter needs to be
tag-indeterminate.  You have an expression that you want
to be meaningful when inherited, both from the caller's
perspective and from the callee's perspective.
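
As a hypothetical sketch (using the AI's pragma name; Is_Ready is
made up and assumed declared as a primitive):

    type T is tagged null record;
    function Is_Ready (X : T) return Boolean;
    procedure Op (X : T);
    pragma Classwide_Pre_Assert (Is_Ready (X));
    --  X is viewed as T'Class here, so Is_Ready dispatches

    type S is new T with null record;
    overriding function Is_Ready (X : S) return Boolean;
    --  A dispatching call to Op on an S checks S's Is_Ready, so the
    --  inherited assertion still means the right thing for S.
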
>
> Has anybody thought about how this stuff interacts with interfaces?
> In particular, the rules about inheritance and combination of
> Classwide_Pre_Asserts.

All of the Classwide_Pre_Asserts would apply to an operation
that is inherited from multiple ancestors.
And of course Pre_Assert is irrelevant for interfaces
because there is no code.  That would similarly be
true for all abstract primitives of non-interface
abstract types.

****************************************************************

From: Tucker Taft
Date: Thursday, January 10, 2008  1:48 PM

By saying that the Classwide_Pre_Asserts from all of the
ancestors apply, I mean that if any one of them
is True, the precondition check is satisfied.  So
it is more of an "or" than an "and" combination of
preconditions.  Because of this, a given caller can
feel comfortable so long as they satisfy the
Classwide_Pre_Assert associated with the "view"
through which they are calling the dispatching op.
Even if they *don't* satisfy that assert, the call
might still succeed, but I would expect that a
static analysis tool might want to warn about that.
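
Concretely, a hypothetical sketch (Ready and Armed are made-up
functions, assumed declared for the two interfaces):

    type I1 is interface;
    procedure Op (X : I1) is abstract;
    pragma Classwide_Pre_Assert (Ready (X));

    type I2 is interface;
    procedure Op (X : I2) is abstract;
    pragma Classwide_Pre_Assert (Armed (X));

    type T is new I1 and I2 with null record;
    overriding procedure Op (X : T);
    --  A dispatching call to T's Op passes the check if Ready (X) or
    --  Armed (X) is True; a caller dispatching through the I1 view
    --  need only establish Ready (X).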

****************************************************************

From: Tucker Taft
Date: Thursday, January 10, 2008  2:09 PM

I no longer accept the claim that these are
not useful for static analysis.  Plain old
assert statements are very valuable for
static analysis, even ones that are
"dynamic," if your static analysis tool is
smart enough (ours is ;-).  Having pre-
and post-conditions would be even better.

It is true that there are certain kinds of
things you can't express easily, and which
might involve significant processing if you
do try to express them, so turning
on full run-time assertion checking
might slow the program down by more than
an order of magnitude.  But that is already
true given the current Pragma Assert, yet
that doesn't lessen its usefulness.

****************************************************************

From: Randy Brukardt
Date: Thursday, January 10, 2008  2:36 PM

> I no longer accept the claim that these are
> not useful for static analysis.  Plain old
> assert statements are very valuable for
> static analysis, even ones that are
> "dynamic," if your static analysis tool is
> smart enough (ours is ;-).  Having pre-
> and post-conditions would be even better.

That's my opinion, too. I'm interested in "static analysis for the rest of
us", which surely is not going to be able to prove (or disprove) everything
about a program, but would be able to handle access types (at least if
partitioned into collections), OOP including dispatching, and propagation of
exceptions (because a failure of the target system should raise an
exception, and those are very hard to eliminate). But every piece of
information that is available adds to what can be proven, and much of that
can be very valuable.

> It is true that there are certain kinds of
> things you can't express easily, and which
> might involve significant processing if you
> do try to express them, so turning
> on full run-time assertion checking
> might slow the program down by more than
> an order of magnitude.  But that is already
> true given the current Pragma Assert, yet
> that doesn't lessen its usefulness.

I guess that an Assert that is expensive is going to "look" expensive (it's
going to contain a bunch of function calls with hopefully reasonable names),
but the postcondition stuff was not (necessarily) going to look so
expensive. Especially as the cost was going to happen on all calls with
certain kinds of parameters (assuming the postcondition processing is on).

I was particularly concerned about the "Initial" attribute (I think that is
what it was called) which was going to have a distributed overhead if any
code containing it could be executed. And because of the ability to write (I
hope!) helper functions for use in postconditions, it was going to have to be
allowed everywhere. (Or it has to be disallowed in a number of useful
places, including for composite types, greatly limiting its usefulness.)

Anyway, that's a gripe about a particular feature of a particular proposal,
and I don't think it necessarily is relevant to the entire idea.

****************************************************************

From: Robert A. Duff
Date: Thursday, January 10, 2008  8:31 PM

> Keep in mind that these two AIs never got anywhere near done, so it is hard
> to say how valuable/correct these models are.

Understood.  We are viewing these AIs as a good starting point, but we're
making no attempt to be compatible with every detail.  For one thing, we have
pretty much decided not to use the names Pre_Assert and Post_Assert.
We're using Precondition and Postcondition.  Today, I had some second
thoughts about that, and suggested using the Eiffel terms: pragmas
Require and Ensure.  I think perhaps those terms make it easier to
remember which one can be weakened on overriding, and which one can
be strengthened.

We'll see how users like this stuff, and maybe ARG will decide to include it
in Ada 2015.  :-)

> Indeed, one of the objections was that they aren't useful for static
> analysis.

Yes, I know some people have that view.  I don't agree with that view -- I
rather strongly disagree, in fact.  (Hence the following ranting!  ;-))
This is the same attitude that caused the ARG to take 10 years to standardize
pragma Assert, which users of GNAT had been using happily all that time.

First, as you and Tuck have pointed out in subsequent notes, it's not true
that these features are useless for static analysis.

Second, even if these features were useless for static analysis, that would
NOT imply that they are useless, nor that they are harmful.  I know that these
features are useful for (say) writing compilers.  If they are useless for (say)
avionics, then so be it, but don't tell me I can't use a feature because SOME
application areas find it useless.

Third, this is the most egregious case of "making 'best' the enemy of 'good
enough'".  Nobody has yet succeeded in creating a static analysis technique
that eliminates all bugs from all kinds of software, and they probably never
will.  Don't make me wait until the halting problem is solved before I'm
allowed to use useful features!

Fourth, this is the way Ada works.  Array indexing has a precondition that
requires the index to be in range.  It's a run-time check.  If you have
technology that can catch it statically in restricted cases, that's great.
SPARK, SofCheck Inspector, whatever.  For some applications, it might
make sense to insist that all preconditions be checked statically -- but that
applies to array bounds (etc) just as much as to pragma Precondition.
Rejecting pragma Precondition because it can't always be checked statically
requires (for consistency) rejecting Ada itself.

> Tucker writes:
>
> > I no longer accept the claim that these are
> > not useful for static analysis.  Plain old
> > assert statements are very valuable for
> > static analysis, even ones that are
> > "dynamic," if your static analysis tool is
> > smart enough (ours is ;-).  Having pre-
> > and post-conditions would be even better.
>
> That's my opinion, too. I'm interested in "static analysis for the rest of
> us",

Me, too!

>...which surely is not going to be able to prove (or disprove) everything
> about a program, but would be able to handle access types (at least if
> partitioned into collections), OOP including dispatching, and propagation of
> exceptions (because a failure of the target system should raise an
> exception, and those are very hard to eliminate). But every piece of
> information that is available adds to what can be proven, and much of that
> can be very valuable.
>
> > It is true that there are certain kinds of
> > things you can't express easily, and which
> > might involve significant processing if you
> > do try to express them, so turning
> > on full run-time assertion checking
> > might slow the program down by more than
> > an order of magnitude.  But that is already
> > true given the current Pragma Assert, yet
> > that doesn't lessen its usefulness.
>
> I guess that an Assert that is expensive is going to "look" expensive (it's
> going to contain a bunch of function calls with hopefully reasonable names),
> but the postcondition stuff was not (necessarily) going to look so
> expensive.

I don't buy the idea that expensive things should "look" expensive.
That's the exact opposite of what a high-level language is all about.
Fortunately, Ada does not adhere to this principle!

>... Especially as the cost was going to happen on all calls with
> certain kinds of parameters (assuming the postcondition processing is on).
>
> I was particularly concerned about the "Initial" attribute (I think that is
> what it was called)...

The AI calls it 'Incoming.  We're calling it 'Old.

That reminds me of another question: The AI restricts it to X'Incoming, where X
is an IN OUT parameter.  Why?  We plan to allow any name, including function
calls, e.g.:

    pragma Postcondition(Is_Evil(X) = not Is_Evil(X)'Old);

where X is anything visible.  The generated code will call Is_Evil(X) on
subprogram entry, and save the value for later use.

>... which was going to have a distributed overhead if any
> code containing it could be executed.

I don't see why you call this overhead "distributed".
If you don't use the 'Old feature, you don't have the overhead.

Anyway, I think it's always wrong to worry about overhead of any kind of
checks, so long as there's a way to turn them off.  The programmer
should decide which checks are "too slow" -- the language designer
has no business making this decision.

>...And because of the ability to write (I
> hope!) helper functions for use in postconditions, it was going to have to be
> allowed everywhere. (Or it has to be disallowed in a number of useful
> places, including for composite types, greatly limiting its usefulness.)

We plan to allow 'Old anywhere in the procedure.  But if you use it in a
"helper function", it refers to the value on entry to that function!

It could be mildly useful outside postconditions.  E.g., you could use it to
restore the value of an 'in out' parameter or global to its old value
in case of some error condition.
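
For example, a trivial sketch (Munge and Is_Valid are made up):

    procedure P (X : in out Integer) is
    begin
       Munge (X);
       if not Is_Valid (X) then
          X := X'Old;  -- restore the value X had on entry to P
       end if;
    end P;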

> Anyway, that's a gripe about a particular feature of a particular proposal,
> and I don't think it necessarily is relevant to the entire idea.

Seems relevant.  Thanks for your comments.  Thanks to Tuck, too.

****************************************************************

From: Randy Brukardt
Date: Thursday, January 10, 2008  9:28 PM

> That reminds me of another question: The AI restricts it to X'Incoming, where X
> is an IN OUT parameter.  Why?  We plan to allow any name, including function
> calls, e.g.:
>
>     pragma Postcondition(Is_Evil(X) = not Is_Evil(X)'Old);
>
> where X is anything visible.  The generated code will call Is_Evil(X) on
> subprogram entry, and save the value for later use.

Ummm, danger Will Robinson. If X is something that changes with each use,
you could have a real hard time figuring out that value, because it is not
the same one as on entry:

     pragma Postcondition(Is_Evil(Random) = not Is_Evil(Random)'Old);

I suspect you could allow this if you required all of the entities to be
constant views or pure functions, but it seems nasty.

Here's a more realistic example (this probably represents a mistake by the
user, but still it seems a legitimate construct):

     procedure Do_Something (X : in out Integer);
     pragma Postcondition (not Is_Valid(X)'Old);

     procedure Do_Something (X : in out Integer) is
     begin
         X := 10;
     end Do_Something;

     Obj : Integer := 0;
     Do_Something (Obj);

Obj has the value 0 when you call Is_Valid(X), but it has the value 10 when
you ask for the old value. How are you going to get it?

Maybe the user meant "pragma Postcondition (not Is_Valid(X'Old)'Old);", but
maybe they meant just what was written. (It makes more sense to test the
validity of the result rather than of the input parameter.)

There's also a possible tasking race condition if some globals are involved.
(Imagine X being a global rather than an object in the above, and imagine
another task changing it.)

So I think the restriction to "in out" parameters helps, because it avoids
these oddities. But I would probably just limit it to all parameters (for
"in" and "access", X'Old = X -- but not for composite types because of the
Rosen trick!).
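
(For reference, the Rosen trick is roughly the following sketch -- the
names are made up:)

     type Sneaky;
     type Writable (Self : access Sneaky) is limited null record;
     type Sneaky is limited record
        W     : Writable (Sneaky'Access);  -- current instance of Sneaky
        Count : Natural := 0;
     end record;

     procedure Bump (X : in Sneaky) is
     begin
        X.W.Self.Count := X.W.Self.Count + 1;
        --  modifies X through the access discriminant even though X
        --  has mode "in", so assuming X'Old = X would be wrong here
     end Bump;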

> >... which was going to have a distributed overhead if any
> > code containing it could be executed.
>
> I don't see why you call this overhead "distributed".
> If you don't use the 'Old feature, you don't have the overhead.

If you can use 'Old anywhere (as opposed to only textually inside of
Postconditions), then we don't have a practical way to find out if it is
used (I'm talking specifically about Janus/Ada). That's because our compiler
is close to a pure one-pass design. It would be way easier to simply copy
all of the parameters on entrance. We do use a few tricks in the syntax
pass, so we probably could find out if 'Old was used anywhere (much like
"limited with" works), but any resolution is out of the question.

Your fully general 'Old simply isn't implementable in Janus/Ada without
major handstands. (There is always the possibility of putting a thunk call
at the start of each subprogram, that would be generated at the end and
contains the needed stuff. But that also would be distributed overhead of an
empty subprogram call for many subprograms. I suppose the optimizer could
eliminate it in some cases.)

> Anyway, I think it's always wrong to worry about overhead of any kind of
> checks, so long as there's a way to turn them off.  The programmer
> should decide which checks are "too slow" -- the language designer
> has no business making this decision.

That's true, but that assumes that there is a way to turn 'Old off. If you
allow it outside of postconditions, then it cannot be turned off!

> >...And because of the ability to write (I
> > hope!) helper functions for use in postconditions, it was going to have to be
> > allowed everywhere. (Or it has to be disallowed in a number of useful
> > places, including for composite types, greatly limiting its usefulness.)
>
> We plan to allow 'Old anywhere in the procedure.  But if you use it in a
> "helper function", it refers to the value on entry to that function!
>
> It could be mildly useful outside postconditions.  E.g., you could use it to
> restore the value of an 'in out' parameter or global to its old value
> in case of some error condition.

Exactly my point.

I also worry about this extra copy. Does it call Adjust and Finalize? It
seems like it would have to. What about limited objects? (The initial value
could be quite different than the final one, but they obviously can't be
copied.)

> > Anyway, that's a gripe about a particular feature of a particular proposal,
> > and I don't think it necessarily is relevant to the entire idea.
>
> Seems relevant.  Thanks for your comments.  Thanks to Tuck, too.

Anyway, enjoy creating your design. And keep us posted on it -- if there are
stones to throw, this would be a good time to do it. I'd hate for the ARG to
come up with an almost-the-same-but-not-quite proposal in a few years.

(One of the objections to this in the past was that no compilers did anything
like it. If GNAT changes that, there would be a much smaller objection to
standardizing it, I think.)

****************************************************************

From: Tucker Taft
Date: Thursday, January 10, 2008  10:19 PM

> ...  For one thing, we have
> pretty much decided not to use the names Pre_Assert and Post_Assert.
> We're using Precondition and Postcondition.  Today, I had some second
> thoughts about that, and suggested using the Eiffel terms: pragmas
> Require and Ensure.  I think perhaps those terms make it easier to
> remember which one can be weakened on overriding, and which one can
> be strengthened.

Hmmm... I'm not sure why "Requires" implies "or" and "Ensures"
implies "and" on inheritance.  But perhaps a stronger argument
for using Requires and Ensures is that they were picked up
by the Java Modeling Language (JML) as well, which seems to be
developing a bit of a following:

    http://www.cs.ucf.edu/~leavens/JML/

I would encourage you to steal ideas from JML, as
it seems to be getting a fair amount of use and it
has some nice ideas.  I have never personally used
it, but I keep seeing references to it in random
places.

Personally, I still prefer [Classwide_]Precondition
and [Classwide_]Postcondition over Requires and
Ensures.  It's funny that in the documentation for JML
they are called preconditions and postconditions,
even though they are spelled "Requires" and "Ensures" ;-).

Another source of inspiration could be SPARK.  They
use simply "Pre" and "Post" I believe, which has a nice
economy to it.  It is a bit unclear how the distinction
between classwide and specific preconditions might fit
into this minimalist approach.  Perhaps "Class_Pre(...)" and
"Class_Post(...)".

> That reminds me of another question: The AI restricts it to X'Incoming, where X
> is an IN OUT parameter.  Why?

I think the original restriction came from a rather stupid-minded
model of saving all of the initial values of all of the
IN OUT parameters.  Once you shift over to only saving the
initial values when referenced with a 'Incoming, then
your approach of allowing any arbitrary expression makes
perfect sense.  I think you will still want to limit yourself
to expressions of a nonlimited type...

> ... We plan to allow any name, including function
> calls, e.g.:
>
>     pragma Postcondition(Is_Evil(X) = not Is_Evil(X)'Old);
>
> where X is anything visible.  The generated code will call Is_Evil(X) on
> subprogram entry, and save the value for later use.

I presume "anything visible" includes all the formal parameters.

> ...
> Seems relevant.  Thanks for your comments.  Thanks to Tuck, too.

Glad to see some work occurring in this area.
Hopefully it will start a nice ball rolling...

****************************************************************

From: Tucker Taft
Date: Thursday, January 10, 2008  10:37 PM

> We plan to allow 'Old anywhere in the procedure.  But if you use it in a
> "helper function", it refers to the value on entry to that function!

This seems a bit weird.  I could see allowing it inside
Asserts (or perhaps a GNAT-specific "Check" attribute),
but allowing it anywhere seems to be a gratuitous
nonportable extension to the language, in my view.  In general,
implementation-defined attributes should be used very
sparingly, perhaps only in code that is likely to be
implementation-specific anyway (e.g. machine code insertions).

Implementation-defined pragmas are much better behaved,
as they can simply be ignored and the program still compiles
and runs on other compilers.  Implementation-defined
attributes end up being a way to lock people into a compiler,
and split the market.  That isn't such a big deal when
most code is proprietary, but when supposedly "open source" code
starts to incorporate implementation-defined attributes in what
is supposed to be a standardized, very portable language,
I'll admit I start to get a bit hot under the collar.

>
> It could be mildly useful outside postconditions.  E.g., you could use it to
> restore the value of an 'in out' parameter or global to its old value
> in case of some error condition.

A locally declared constant does this quite nicely.  I can
see using 'Old inside a pragma that you might be instructing
the code generator to ignore anyway, so you don't want
to have an explicitly declared constant.  But to start allowing
'Old in code that is outside a pragma is beginning to give me very
bad vibes.

****************************************************************

From: Robert A. Duff
Date: Thursday, January 10, 2008  11:13 PM

> Hmmm... I'm not sure why "Requires" implies "or" and "Ensures"
> implies "and" on inheritance.

To me, "requires" reminds me that I could sensibly require less on overriding,
and "ensures" indicates I could ensure more.  Then it's a short step to
"less" = "or" and "more" = "and".

I'm not at all sure about this -- all I know is that various folks (including
me) get confused about which is which, and talking about covariance and
contravariance and pre and post doesn't help much.

>...But perhaps a stronger argument
> for using Requires and Ensures is that they were picked up
> by the Java Modeling Language (JML) as well, which seems to be
> developing a bit of a following:
>
>     http://www.cs.ucf.edu/~leavens/JML/
>
> I would encourage you to steal ideas from JML, as
> it seems to be getting a fair amount of use and it
> has some nice ideas.  I have never personally used
> it, but I keep seeing references to it in random
> places.

Thanks for the reference.  I'll check it out.  Although Eiffel is not used
much, it does seem that the "contract" ideas from Eiffel are getting wide play
these days.  Meyer didn't get everything right, but he got pretty close.

> Personally, I still prefer [Classwide_]Precondition
> and [Classwide_]Postcondition over Requires and
> Ensures.  It's funny that in the documentation for JML
> they are called preconditions and postconditions,
> even though they are spelled "Requires" and "Ensures" ;-).
>
> Another source of inspiration could be SPARK.  They
> use simply "Pre" and "Post" I believe, which has a nice
> economy to it.  It is a bit unclear how the distinction
> between classwide and specific preconditions might fit
> into this minimalist approach.  Perhaps "Class_Pre(...)" and
> "Class_Post(...)".

Yeah, I like the conciseness of that.  I'm still annoyed that I have to say
"pragma" before all my Asserts!

> > That reminds me of another question: The AI restricts it to X'Incoming, where X
> > is an IN OUT parameter.  Why?
>
> I think the original restriction came from a rather stupid-minded
> model of saving all of the initial values of all of the
> IN OUT parameters.  Once you shift over to only saving the
> initial values when referenced with a 'Incoming, then
> your approach of allowing any arbitrary expression makes
> perfect sense.  I think you will still want to limit yourself
> to expressions of a nonlimited type...

Right.  The idea is that "Foo(Bar(X), Baz(Y))'Old" evaluates that whole thing,
and saves the value for future reference.  It's not saving the values of X and
Y, and then later calling Foo, Bar, and Baz on them.  Robert Dewar came up with
a nice equivalence, something like "Temp : constant T := Foo(Bar(X), Baz(Y));"
and 'Old is legal if and only if this equivalence is legal.  That nicely takes
care of the limited case -- you can't do Blah'Old if Blah is of a limited type.

>  > ... We plan to allow any name, including function
> > calls, e.g.:
> >     pragma Postcondition(Is_Evil(X) = not Is_Evil(X)'Old);
> > where X is anything visible.  The generated code will call Is_Evil(X) on
> > subprogram entry, and save the value for later use.
>
> I presume "anything visible" includes all the formal parameters.

Yes, as in the AI, the Pre/Post pragmas have visibility of parameters -- as if
they appeared right after the last param in the procedure's spec.

But <expr>'Old can also appear anywhere in a procedure body, and refers to the
value of <expr> on entry to that procedure.
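
E.g., a trivial sketch of that rule:

    procedure P (X : in out Integer) is
    begin
       X := X + 1;
       pragma Assert (X = X'Old + 1);  -- X'Old = X's value on entry to P
    end P;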

> > ...
> > Seems relevant.  Thanks for your comments.  Thanks to Tuck, too.
>
> Glad to see some work occurring in this area.
> Hopefully it will start a nice ball rolling...

Yes, I hope so!  ;-)

****************************************************************

From: Robert A. Duff
Date: Thursday, January 10, 2008  11:43 PM

> Ummm, danger Will Robinson. If X is something that changes with each use,
> you could have a real hard time figuring out that value, because it is not
> the same one as on entry:
>
>      pragma Postcondition(Is_Evil(Random) = not Is_Evil(Random)'Old);
>
> I suspect you could allow this if you required all of the entities to be
> constant views or pure functions, but it seems nasty.

I think you misunderstand our meaning of 'Old.  The above means to save the
value of Is_Evil(Random) on entry to this subprogram.  It does not mean to save
the value of Random, nor to save all values of all variables Random looks at.

> Here's a more realistic example (this probably represents a mistake by the
> user, but still it seems a legitimate construct):
>
>      procedure Do_Something (X : in out Integer);
>      pragma Postcondition (not Is_Valid(X)'Old);
>
>      procedure Do_Something (X : in out Integer) is
>      begin
>          X := 10;
>      end Do_Something;
>
>      Obj : Integer := 0;
>      Do_Something (Obj);
>
> Obj has the value 0 when you call Is_Valid(X), but it has the value 10 when
> you ask for the old value. How are you going to get it?

The idea is to transform it into:

      procedure Do_Something (X : in out Integer);

      procedure Do_Something (X : in out Integer) is
        Old_Thing : constant Boolean := Is_Valid(X);
      begin
          X := 10;
          Assert (not Old_Thing);
      end Do_Something;

      Obj : Integer := 0;
      Do_Something (Obj);

...so of course we're using the old value of X (i.e. 0).  We don't save the old
value of X -- we save the old value of whatever expression 'Old applies to --
in this case, Is_Valid(X).

> Maybe the user meant "pragma Postcondition (not Is_Valid(X'Old)'Old);", but
> maybe they meant just what was written. (It makes more sense to test the
> validity of the result rather than of the input parameter.)

Is_Valid(X'Old)'Old means the same as Is_Valid(X)'Old.

> If you can use 'Old anywhere (as opposed to only textually inside of
> Postconditions), then we don't have a practical way to find out if it is
> used (I'm talking specifically about Janus/Ada). That's because our compiler
> is close to a pure one-pass design.

OK, I agree, with a one-pass compiler, 'Old causes difficulties.
Any use of <expr>'Old means you have to save a copy of <expr> at the start
of the procedure.

> I also worry about this extra copy. Does it call Adjust and Finalize? It
> seems like it would have to.

Yes.

>...What about limited objects?

<expr>'Old is illegal if the corresponding constant decl would be illegal --
e.g. "X'Old" is illegal if X is limited, because "Temp ... := X" would be.

> Anyway, enjoy creating your design. And keep us posted on it...

Will do.

****************************************************************

From: Robert A. Duff
Date: Thursday, January 10, 2008  11:57 PM

> > We plan to allow 'Old anywhere in the procedure.  But if you use it in a
> > "helper function", it refers to the value on entry to that function!
>
> This seems a bit weird.  I could see allowing it inside
> Asserts (or perhaps a GNAT-specific "Check" attribute),
> but allowing it anywhere seems to be a gratuitous
> nonportable extension to the language, in my view.  In general,
> implementation-defined attributes should be used very
> sparingly, perhaps only in code that is likely to be
> implementation-specific anyway (e.g. machine code insertions).
>
> Implementation-defined pragmas are much better behaved,
> as they can simply be ignored and the program still compiles
> and runs on other compilers.  Implementation-defined
> attributes end up being a way to lock people into a compiler,
> and split the market.  That isn't such a big deal when
> most code is proprietary, but when supposedly "open source" code
> starts to incorporate implementation-defined attributes in what
> is supposed to be a standardized, very portable language,
> I'll admit I start to get a bit hot under the collar.
>
> > It could be mildly useful outside postconditions.  E.g., you could use it to
> > restore the value of an 'in out' parameter or global to its old value
> > in case of some error condition.
>
> A locally declared constant does this quite nicely.  I can
> see using 'Old inside a pragma that you might be instructing
> the code generator to ignore anyway, so you don't want
> to have an explicitly declared constant.  But to start allowing
> 'Old in code that is outside a pragma is beginning to give me very
> bad vibes.

I agree with Tucker's points here, FWIW.

****************************************************************

From: Gary Dismukes
Date: Friday, January 11, 2008  12:26 AM

> >...What about limited objects?
>
> <expr>'Old is illegal if the corresponding constant decl would be illegal --
> e.g. "X'Old" is illegal if X is limited, because "Temp ... := X" would be.

Well, except that doesn't quite apply to all cases, such as the example
you gave in an earlier message where Temp was initialized by a function
call.  So I think there needs to be a specific rule to disallow limited.

****************************************************************

From: Gary Dismukes
Date: Friday, January 11, 2008  12:30 AM

Bob Duff replied to Tucker:

> > Implementation-defined pragmas are much better behaved,
> > as they can simply be ignored and the program still compiles
> > and runs on other compilers.  Implementation-defined
> > attributes end up being a way to lock people into a compiler,
> > and split the market.  That isn't such a big deal when
> > most code is proprietary, but when supposedly "open source" code
> > starts to incorporate implementation-defined attributes in what
> > is supposed to be a standardized, very portable language,
> > I'll admit I start to get a bit hot under the collar.
> >
> > > It could be mildly useful outside postconditions.  E.g., you could use it to
> > > restore the value of an 'in out' parameter or global to its old value
> > > in case of some error condition.
> >
> > A locally declared constant does this quite nicely.  I can
> > see using 'Old inside a pragma that you might be instructing
> > the code generator to ignore anyway, so you don't want
> > to have an explicitly declared constant.  But to start allowing
> > 'Old in code that is outside a pragma is beginning to give me very
> > bad vibes.
>
> I agree with Tucker's points here, FWIW.

I agree with those points as well (also FWIW).  Implementation-defined
attributes can really hurt portability (though they're great for lock-in;).

****************************************************************

From: Randy Brukardt
Date: Friday, January 11, 2008  1:12 AM

...
> >      pragma Postcondition(Is_Evil(Random) = not Is_Evil(Random)'Old);
> >
> > I suspect you could allow this if you required all of the entities to be
> > constant views or pure functions, but it seems nasty.
>
> I think you misunderstand our meaning of 'Old.  The above means to save the
> value of Is_Evil(Random) on entry to this subprogram.  It does not mean to save
> the value of Random, nor to save all values of all variables Random looks at.

I did, but I worry that others will too. My interpretation makes just as
much sense as yours, if you're just reading the source code.

...
> > Maybe the user meant "pragma Postcondition (not Is_Valid(X'Old)'Old);", but
> > maybe they meant just what was written. (It makes more sense to test the
> > validity of the result rather than of the input parameter.)
>
> Is_Valid(X'Old)'Old means the same as Is_Valid(X)'Old.

Right, given your definition. But if the user *meant* that, they could have
written it. Maybe they really meant what I wrote -- but that is essentially
impossible to evaluate. So I guess I see the possibility of confusion here.
Beware of unneeded generality.

****************************************************************

From: Cyrille Comar
Date: Friday, January 11, 2008  5:51 AM

> In Ada, the static binding case is more common because calls
> from one method to another are by default statically bound.
> There are good reasons for that, as it means you
> can treat the implementation of a given method as a
> "black box" which is not mysteriously affected when you
> inherit some methods and override others.  If the method
> really wants to "redispatch" on calling another method,
> it can do so, but that is pretty rare, and certainly needs
> to be documented as part of its external effect, at least
> for the purposes of inheritance.

This is an interesting, but not mainstream, view. The "Simple Dispatch
Rule" says exactly the opposite. (This rule is described in OOTiA among
other places but it is pretty standard, for instance Java's OO model
enforces this rule...). Your "black box" argument, which is usually used
in the other direction btw, is strange. You cannot mysteriously affect
the behavior of a method when it is applied to a member of the original
class. You can only affect it when it is applied to a member of a
derived class and that would be bizarre to call this "mysterious" since
this is the essence of OOD ;-)

It is good that the Ada programmer has the choice between static &
dynamic binding, but that should not distort too much the design for
pre/postconditions...

>  Because of the
> use one method makes of another, it seems quite reasonable
> that the precondition for a statically bound call might
> be looser, since it wants to both serve external calls
> and inter-method calls, whereas the precondition for
> dispatching calls might be tighter so you don't impose the
> "lax" precondition on all of the overridings.

I have a hard time with this. For me a precondition is really attached
to the notion of method in general. Attaching it to the way of binding
(whether it is a dispatching call or not) puts a huge burden on the
programmer. It is already difficult to find the appropriate precondition
for a subprogram, but if you have to take into account possibly different
needs between dispatching calls and non-dispatching ones, this becomes a
nightmare both for the writer of the routine and for the user.
This is the practical aspect. On the theoretical side, I just don't see
anything in Liskov and associated work that would justify such a
distinction. Did I miss something?

Eiffel's model is a nice starting point. There is one thing I don't like
about it, though: its dynamic model for weakening preconditions, which
has been mimicked in AI 288 (same for strengthening postconditions). The
notion of "require else" does indeed implement some kind of weakening
but it has 2 disadvantages:
   1. the complete precondition is not easily readable (so this is a
case of favoring writers over readers)
   2. the weakening is forced by the implementation model instead of
being a property binding the 2 preconditions.

2 is particularly problematic because it means that you cannot catch
certain design errors. Suppose your overridden method is not really
following LSP. Normally what should happen is that by looking at the
code, you can deduce the proper preconditions that will make the call
safe. Since you did not follow LSP properly, the precondition is *not*
weaker than the original one. If you put that condition in the "require
else", then the effective precondition is <"original" or "new"> and,
since "new" is not weaker than "original", it is equivalent to
"original"... and your precondition does not really protect you anymore,
in either static or dynamic binding...
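
Concretely, a made-up sketch (the require conditions shown as comments):

   type T is tagged record
      Len : Natural := 0;
   end record;
   procedure Op (X : T);              -- original: require X.Len > 0

   type S is new T with null record;
   overriding procedure Op (X : S);   -- overriding: require else X.Len > 10

   --  The effective precondition for S's Op becomes
   --  (X.Len > 0) or else (X.Len > 10), i.e. just X.Len > 0: the
   --  stronger condition the overriding really needs is silently
   --  absorbed, and nothing catches the LSP violation.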

I much prefer a model where the only preconditions checked are those
associated with a concrete method and a different (possibly static)
analysis verifies appropriate weakening/strengthening along the
inheritance tree.

****************************************************************

From: Jeffery R. Carter
Date: Friday, January 11, 2008  11:23 AM

>>      pragma Postcondition(Is_Evil(Random) = not Is_Evil(Random)'Old);
>>
> I think you misunderstand our meaning of 'Old.  The above means to save the
> value of Is_Evil(Random) on entry to this subprogram.  It does not mean to save
> the value of Random, nor to save all values of all variables Random looks at.
>
> Is_Valid(X'Old)'Old means the same as Is_Valid(X)'Old.

I don't want to complain that your idea is useless because it won't slice a
pineapple, but ...

As a potential user with some experience with SPARK, my understanding is that,
in a postcondition, X refers to the value of X after completion of the
operation, and X~ (equivalent to X'Initial, and I would expect to X'Old) means
the value before the beginning of the operation. So

F (X)'Old

would mean the value of F as evaluated at the beginning (presuming that F is not
pure and so its result varies with time) applied to the value of X after
completion (this would be expensive to determine). This would be different from

F (X'Old)'Old

So I'm concerned that your notation as you're defining it will be confusing to
those already experienced with postconditions.

> <expr>'Old is illegal if the corresponding constant decl would be illegal --
> e.g. "X'Old" is illegal if X is limited, because "Temp ... := X" would be.

Aren't there now cases where such a limited constant declaration is legal?

****************************************************************

From: Robert A. Duff
Date: Friday, January 11, 2008  1:08 PM

> As a potential user with some experience with SPARK, my understanding is that,
> in a postcondition, X refers to the value of X after completion of the
> operation, and X~ (equivalent to X'Initial, and I would expect to X'Old) means
> the value before the beginning of the operation. So
>
> F (X)'Old

How does SPARK deal with this?  Is it illegal?  Can you report the SPARK rules
for what's allowed in a ~ expression?

> would mean the value of F as evaluated at the beginning (presuming that F is
> not pure and so its result varies with time) applied to the value of X after
> completion (this would be expensive to determine).

How would this be determined (even expensively)?  I don't get it.

And what if F looks at global variables?  Would you expect to use the old or
new values of them?  (The compiler doesn't know about them!)

>...This would be different from
>
> F (X'Old)'Old
>
> So I'm concerned that your notation as you're defining it will be confusing to
> those already experienced with postconditions.

Perhaps.  Note that the design matches Eiffel, AFAIK.

> > <expr>'Old is illegal if the corresponding constant decl would be illegal --
> > e.g. "X'Old" is illegal if X is limited, because "Temp ... := X" would be.
>
> Aren't there now cases where such a limited constant declaration is legal?

Yes.  It's illegal if X is a variable name, but legal if it's a function call.

****************************************************************

From: Robert A. Duff
Date: Friday, January 11, 2008  1:21 PM

> ...
> > >      pragma Postcondition(Is_Evil(Random) = not Is_Evil(Random)'Old);
> > >
> > > I suspect you could allow this if you required all of the entities to be
> > > constant views or pure functions, but it seems nasty.
> >
> > I think you misunderstand our meaning of 'Old.  The above means to save the
> > value of Is_Evil(Random) on entry to this subprogram.  It does not mean to save
> > the value of Random, nor to save all values of all variables Random looks at.
>
> I did, but I worry that others will too. My interpretation makes just as
> much sense as yours, if you're just reading the source code.

Well, I must admit that when I first read about Eiffel, I was confused in a
similar way (don't remember the details).
I remember being surprised when I learned that any expression is allowed, and
that the whole expression is evaluated and saved.  As soon as I learned that,
it seemed "obviously" the only sensible way to do it.  ;-)

It seems essential to allow function calls inside an 'Old expression,
because it will normally appear on a spec:

    procedure Grind (X : in out T);
    pragma Postcondition (F(X) = G);

T is private, so we need F to access its components, and we need function G to
access some global variable updated by Grind.

> > Is_Valid(X'Old)'Old means the same as Is_Valid(X)'Old.
>
> Right, given your definition. But if the user *meant* that, they could have
> written it.

You can't sensibly write 'Old in the body of G, though.

>...Maybe they really meant what I wrote -- but that is essentially
> impossible to evaluate. So I guess I see the possibility of confusion here.
> Beware of unneeded generality.

Unfortunately, I think it's needed (so function calls can be allowed).

****************************************************************

From: Robert A. Duff
Date: Friday, January 11, 2008  1:23 PM

> > >...What about limited objects?
> >
> > <expr>'Old is illegal if the corresponding constant decl would be illegal --
> > e.g. "X'Old" is illegal if X is limited, because "Temp ... := X" would be.
>
> Well, except that doesn't quite apply to all cases, such as the example
> you gave in an earlier message where Temp was initialized by a function
> call.  So I think there needs to be a specific rule to disallow limited.

I don't see any particular need to forbid limited.  The above is illegal
because X is an object name.  (Yeah, I know you can't see that from the
syntax!)  F(X)'Old would be legal (but not useful) even if F returns
limited.

****************************************************************

From: Jeffrey R. Carter
Date: Friday, January 11, 2008  1:31 PM

> How does SPARK deal with this?  Is it illegal?  Can you report the SPARK rules
> for what's allowed in a ~ expression?

In SPARK, ~ can only apply to parameters of mode in out in postconditions. So
you can write

F (X~)

but not

F (X)~

> How would this be determined (even expensively)?  I don't get it.

That's the PC way of saying that it probably can't be done. I consider the case
where F accesses memory-mapped HW with a destructive read ...

****************************************************************

From: Gary Dismukes
Date: Friday, January 11, 2008  2:01 PM

> > > >...What about limited objects?
> > >
> > > <expr>'Old is illegal if the corresponding constant decl would be illegal --
> > > e.g. "X'Old" is illegal if X is limited, because "Temp ... := X" would be.
> >
> > Well, except that doesn't quite apply to all cases, such as the example
> > you gave in an earlier message where Temp was initialized by a function
> > call.  So I think there needs to be a specific rule to disallow limited.
>
> I don't see any particular need to forbid limited.  The above is illegal
> because X is an object name.  (Yeah, I know you can't see that from the
> syntax!)

Well, F(X) is also an object name ;-)

> F(X)'Old would be legal (but not useful) even if F returns
> limited.

Fair enough.  I realized later after posting that it would be fine to allow
that (though, as you say, not particularly useful).

****************************************************************

From: Robert A. Duff
Date: Friday, January 11, 2008  4:09 PM

> In SPARK, ~ can only apply to parameters of mode in out in postconditions. So
> you can write
>
> F (X~)
>
> but not
>
> F (X)~

Thanks for the info.

SPARK is using this only in static proofs, so efficiency doesn't matter, right?

The feature being proposed here is checked at run time.  Of course, you might
want to build static proofs on top of this feature, just like you might want to
prove array indices are in bounds.  But at least some useful assertions will
need to be evaluated at run time in some cases.

My point is that it's usually cheaper to save the result of calling F than the
value of X -- F might well be Boolean or Integer, whereas X is a giant record.
F(X~) also runs into trouble when X is limited.

> > How would this be determined (even expensively)?  I don't get it.
>
> That's the PC way of saying that it probably can't be done. I consider the case
> where F accesses memory-mapped HW with a destructive read ...

Indeed.

****************************************************************

From: Randy Brukardt
Date: Friday, January 11, 2008  7:25 PM

(Lots of good stuff about OOP pre and post conditions.)

> I much prefer a model where the only preconditions checked are those
> associated with a concrete method and a different (possibly static)
> analysis verifies appropriate weakening/strengthening along the
> inheritance tree.

I can say that I can hardly follow any of this weakening and strengthening
stuff. It is so confusing, and I have to wonder if ordinary users would be
able to figure out what is what. Pre and post conditions are hard enough to
figure out by themselves. Which makes me wonder if anyone is actually going
to try to use it.

It seems to me that the basic pre and post condition stuff is far more
useful, and I have to wonder if it pays to clutter this up.

****************************************************************

From: Tucker Taft
Date: Friday, January 11, 2008  8:34 PM

>> In Ada, the static binding case is more common because calls
>> from one method to another are by default statically bound.
>> There are good reasons for that, as it means you
>> can treat the implementation of a given method as a
>> "black box" which is not mysteriously affected when you
>> inherit some methods and override others.  If the method
>> really wants to "redispatch" on calling another method,
>> it can do so, but that is pretty rare, and certainly needs
>> to be documented as part of its external effect, at least
>> for the purposes of inheritance.
>
> This is an interesting, but not mainstream, view. The "Simple Dispatch
> Rule" says exactly the opposite. (This rule is described in OOTiA among
> other places but it is pretty standard, for instance Java's OO model
> enforces this rule...). Your "black box" argument, which is usually used
> in the other direction btw, is strange. You cannot mysteriously affect
> the behavior of a method when it is applied to a member of the original
> class. You can only affect it when it is applied to a member of a
> derived class and that would be bizarre to call this "mysterious" since
> this is the essence of OOD ;-)

I might not have described the situation very well.  But I
resist the notion that whether or not this is the "mainstream"
view is very relevant at this stage in the discussion.
Let's try to get it right for Ada, and then try to
use the same vocabulary for things that are the same as
other formulations of preconditions, and use different
vocabulary for things that are, for good reason, different.

I believe it is a *very* important part of the Ada OO model that
calls from one primitive subprogram to another are by default static.
It is true that most other OO languages don't work that way,
but I think that is one reason why other OO languages
can make it so painful to understand fully the inheritance
properties of a class library without having full
visibility on the source of the class library.  This problem
has been identified by many others who need to make
updates to a widely used class library, and find that
there is a large body of client code that has subtle
dependencies on the innards of the class library due
to the by-default dynamic dispatch associated with
calls from one method to another.

The fact that Java doesn't provide a separation between
spec and body is unfortunate, but my experience is that
you very frequently need to look at the source of Java
(virtual) methods to understand how to extend a class.
In Ada, extending a type is much more straightforward
because of the relative rarity of redispatching
in most primitive subprograms.


> It is good that the Ada programmer has the choice between static &
> dynamic binding, but that should not distort too much the design for
> pre/postconditions...

The distinction between specific and classwide types is
a fundamental part of the Ada model, while being a source
of confusion in most other OO models.   Any model
of pre and postconditions for Ada seems like it must
recognize this important distinction.

>
> Eiffel's model is a nice starting point. There is one thing I don't like
> about it, though: its dynamic model for weakening preconditions, which
> has been mimicked in AI 288 (same for strengthening postconditions)...

This same model is used in JML.

> ... The
> notion of "require else" does indeed implement some kind of weakening
> but it has 2 disadvantages:
>   1. the complete precondition is not easily readable (so this is a case
> of favoring writers over readers)
>   2. the weakening is forced by the implementation model instead of
> being a property binding the 2 preconditions.


> 2 is particularly problematic because it means that you cannot catch
> certain design errors. Suppose your overridden method is not really
> following LSP. Normally what should happen is that by looking at the
> code, you can deduce the proper preconditions that will make the call
> safe. Since you did not follow LSP properly, the precondition is *not*
> weaker than the original one. If you put that condition in the "require
> else", then the effective precondition is <"original" or "new"> and,
> since "new" is not weaker than "original", it is equivalent to
> "original"... and your precondition does not really protect you anymore,
> in either static or dynamic binding...
>
> I much prefer a model where the only preconditions checked are those
> associated with a concrete method and a different (possibly static)
> analysis verifies appropriate weakening/strengthening along the
> inheritance tree.

I wonder how this relates to the "mainstream" concern... ;-)

In any case, this doesn't work very well if you want to evaluate the
preconditions at the call site, since you don't know what
concrete method you are going to reach.  I could see a model
where you only check against the "Class_Pre" precondition on the
subprogram named in the dispatching call (as opposed to
the body that is executed).  This in turn
guarantees that you satisfy the "or" of all the Class_Pre
preconditions whenever you get into a
subprogram body.  The body itself can only rely on
the "or" of the Class_Pre preconditions from all of its
ancestors for the purposes of static analysis, since it
doesn't know which one the caller is going to check.

A "friendly" (and smart!) compiler could also
provide a warning when a Class_Pre on an
overriding is in fact a strengthening of the "or" of
the Class_Pre's of the ancestor's primitives.
I certainly wouldn't want your average compiler
to be required to do such an analysis.  I fear
it might require the solution to the halting problem.  ;-)

****************************************************************

From: Dmitry A. Kazakov
Date: Saturday, January 12, 2008  2:54 AM

On Fri, 11 Jan 2008 12:50:40 +0100, you wrote:

> Tucker Taft wrote:

>> In Ada, the static binding case is more common because calls
>> from one method to another are by default statically bound.
>> There are good reasons for that, as it means you
>> can treat the implementation of a given method as a
>> "black box" which is not mysteriously affected when you
>> inherit some methods and override others.  If the method
>> really wants to "redispatch" on calling another method,
>> it can do so, but that is pretty rare, and certainly needs
>> to be documented as part of its external effect, at least
>> for the purposes of inheritance.

I believe that all such cases are design problems. There should be no such
thing as re-dispatch.

> This is an interesting, but not mainstream, view. The "Simple Dispatch
> Rule" says exactly the opposite. (This rule is described in OOTiA among
> other places but it is pretty standard, for instance Java's OO model
> enforces this rule...). Your "black box" argument, which is usually used
> in the other direction btw, is strange. You cannot mysteriously affect
> the behavior of a method when it is applied to a member of the original
> class. You can only affect it when it is applied to a member of a
> derived class and that would be bizarre to call this "mysterious" since
> this is the essence of OOD ;-)

To me the argument is quite obvious. The contract of the inherited
subprogram is that it takes objects of the base type T. It does not take
ones of the derived type S. The effect is that semantically, upon dispatch,
the derived type object S <: T (from T'Class) is converted to the base type T
and then the subprogram is applied to the result. For this reason the
behavior shall be exactly as it was before, otherwise substitutability
might get lost.

>>  Because of the
>> use one method makes of another, it seems quite reasonable
>> that the precondition for a statically bound call might
>> be looser, since it wants to serve both external calls
>> and inter-method calls, whereas the precondition for
>> dispatching calls might be tighter so you don't impose the
>> "lax" precondition on all of the overridings.
>
> I have a hard time with this. For me a precondition is really attached
> to the notion of method in general.

"Method" may mean different things: there is a polymorphic subprogram
defined on the class T'Class, and there exist specific non-polymorphic
bodies defined on individual types S <: T from the class T. Clearly
they all have different preconditions. Actually, the precondition of a
non-polymorphic body is stricter, in the sense that it accepts only
objects of the specific type S. I would guess that the disjunction of
the preconditions of all specific bodies for types S <: T (from the
class T) should imply the precondition of the polymorphic body (defined
on T'Class).

****************************************************************

From: Jeffrey R. Carter
Date: Saturday, January 12, 2008  11:53 AM

> SPARK is using this only in static proofs, so efficiency doesn't matter, right?

Static proofs, yes, but I suspect they still want their tools to run in a
reasonable amount of time.

The SPARK pre- and postcondition annotation language is interesting, and
probably worth your investigation. In addition to standard Ada Boolean
expressions, it has notations for such useful things as implication, "for all",
and "there exists". In his book, Barnes presents an example of a sorting
procedure with a postcondition that not only says the result is in order
(sorted), but also that it is a permutation of the input array.
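
Roughly, such a postcondition looks like this (a sketch in SPARK-style
annotations, not Barnes's actual example; Perm is a user-declared proof
function, given meaning by separately supplied rules):

   subtype Index is Integer range 1 .. 10;
   type Vector is array (Index) of Integer;

   --# function Perm (X, Y : Vector) return Boolean;

   procedure Sort (A : in out Vector);
   --# derives A from A;
   --# post (for all I in Index range Index'First .. Index'Last - 1 =>
   --#          (A (I) <= A (I + 1)))
   --#      and Perm (A, A~);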

****************************************************************

From: Robert A. Duff
Date: Saturday, January 12, 2008  6:36 PM

> > SPARK is using this only in static proofs, so efficiency doesn't matter,
> > right?
>
> Static proofs, yes, but I suspect they still want their tools to run in a
> reasonable amount of time.

Sure, but I was specifically talking about when X is a giant record,
and F is a function returning (say) Boolean.  And F(X'Old) vs.
F(X)'Old.  Saving a copy of X is very expensive at run time,
whereas saving a copy of the result of F is not.
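
Concretely (a sketch -- the type, the function, and the Post_Assert
pragma spelling are all invented for illustration):

   type Big is array (1 .. 1_000_000) of Long_Float;
   function F (X : Big) return Boolean;

   procedure Update (X : in out Big);
   --  alternative 1: requires saving a copy of all of X at the call
   pragma Post_Assert (F (X'Old));
   --  alternative 2: requires saving only the Boolean F (X) computed
   --  at the call
   pragma Post_Assert (F (X)'Old);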

A static proof engine wouldn't care one way or 'tother about that.
AFAIK all the static proofs SPARK does are done in roughly linear
time in the size of the program under analysis.

And I still don't understand how F(X)'Old and F(X'Old)'Old could sensibly mean
something different from each other.

> The SPARK pre- and postcondition annotation language is interesting, and
> probably worth your investigation. In addition to standard Ada Boolean
> expressions, it has notations for such useful things as implication, "for all",
> and "there exists". In his book, Barnes presents an example of a sorting
> procedure with a postcondition that not only says the result is in order
> (sorted), but also that it is a permutation of the input array.

John asked me to review an earlier version of his SPARK book, pre-publication,
which I did.  I suppose I should read the new version.  It's sitting on my desk
right now, but I have bugs to fix before I sleep, bugs to fix before I sleep...

****************************************************************

From: Jeffrey R. Carter
Date: Saturday, January 12, 2008  9:30 PM

> Sure, but I was specifically talking about when X is a giant record,
> and F is a function returning (say) Boolean.  And F(X'Old) vs.
> F(X)'Old.  Saving a copy of X is very expensive at run time,
> whereas saving a copy of the result of F is not.
>
> A static proof engine wouldn't care one way or 'tother about that.
> AFAIK all the static proofs SPARK does are done in roughly linear
> time in the size of the program under analysis.

SPARK doesn't save anything, nor does it allow expressions it can't process in a
reasonable amount of time, which is probably why it doesn't allow ~ on functions.

> And I still don't understand how F(X)'Old and F(X'Old)'Old could sensibly mean
> something different from each other.

Since you're defining it, it can mean whatever you want it to. What I was
saying, and I think Brukardt was saying, too, was that those of us already
familiar with a postcondition notation such as ~ will expect your expressions to
work in a similar manner. And that means our initial understanding, at least, of

F (X)'Old

is that X, not having 'Old, refers to the value of X after the procedure
completes, as it should in

X = X'Old + 1

and the 'Old applied to a function evaluation indicates its evaluation in the
environment that existed before the procedure was called.
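
In SPARK notation that expectation reads (a minimal sketch; the
procedure is made up):

   procedure Inc (X : in out Integer);
   --# derives X from X;
   --# post X = X~ + 1;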

****************************************************************

From: Niklas Holsti
Date: Sunday, January 13, 2008  2:30 AM

>> And I still don't understand how F(X)'Old and F(X'Old)'Old could
>> sensibly mean something different from each other.
>
>
> Since you're defining it, it can mean whatever you want it to. What I
> was saying, and I think Brukardt was saying, too, was that those of us
> already familiar with a postcondition notation such as ~ will expect
> your expressions to work in a similar manner. And that means our initial
> understanding, at least, of
>
> F (X)'Old
>
> is that X, not having 'Old, refers to the value of X after the procedure
> completes, as it should in
>
> X = X'Old + 1
>
> and the 'Old applied to a function evaluation indicates its evaluation
> in the environment that existed before the procedure was called.

I have no experience in SPARK pre/post-conditions, but I would
intuitively understand (expression)'Old to mean evaluating the
whole expression in the Old environment, for any expression. This
means that F(X)'Old uses the Old (at call) environment for F and
the Old value for X, as Robert intends.

To evaluate F in the Old environment, but on the new (on
completion) value of X, the form F'Old(X) seems clearest to me.
Here F'Old would mean the function F in the Old environment. The
function F'Old is of course practically impossible to capture for
run-time checks in Ada, but not impossible to reason about.

In summary, my intuition is that F(X)'Old is equivalent to
F'Old(X'Old) -- the 'Old distributes into the expression.

****************************************************************

From: John Barnes
Date: Sunday, January 13, 2008  4:31 AM

> John asked me to review an earlier version of his SPARK book, pre-publication,
> which I did.  I suppose I should read the new version.  It's sitting on my desk
> right now, but I have bugs to fix before I sleep, bugs to fix before I sleep...

Read the book right now. It will help you to sleep. The new bits relevant to
this discussion are in section 11.9. One can write things like

--# for_all T in D => Ingredient(T'Last) = Vermouth.
--# for_some I in T => Ingredient(I) = Gin.

Hmm. Lousy example.

Sleep well

****************************************************************

From: Cyrille Comar
Date: Saturday, January 12, 2008  3:48 AM

> The distinction between specific and classwide types is
> a fundamental part of the Ada model, while being a source
> of confusion in most other OO models.

I agree with this part.

>   Any model
> of pre and postconditions for Ada seems like it must
> recognize this important distinction.

Depending what you call "must recognize" ;-)
There are 2 complementary ways of looking at preconditions:
    - the caller view: you are told what you must ensure to call safely
a routine
    - the callee view: you tell the rest of the world the protocol
needed to safely execute the specific body of a routine

Distinguishing classwide preconditions is understandable (although not
desirable, in my view) for the first view, but less so for the second.
Whether you reach the body of your method statically or dynamically,
once you are there you face the same constraints to execute the body
safely.

 > A "friendly" (and smart!) compiler could also
 > provide a warning when a Class_Pre on an
 > overriding is in fact a strengthening of the "or" of
 > the Class_Pre's of the ancestor's primitives.
 > I certainly wouldn't want your average compiler
 > to be required to do such an analysis.  I fear
 > it might require the solution to the halting problem.  ;-)

This would not be the role of the compiler. LSP checking is not mandated
by Ada's semantics.  An external static analyser would be better ;-)

****************************************************************

From: Cyrille Comar
Date: Saturday, January 12, 2008  8:55 AM

>    Any model
> of pre and postconditions for Ada seems like it must
> recognize this important distinction.

By this standard (which is a way of favoring the caller view of a
contract) Classwide_Pre_Assert is not enough; you would also need a
Subprogram_Access_Pre_Assert so that you know what the preconditions
for an indirect call are...

Note also that, in this multiple-preconditions-for-the-same-subprogram
model there must be some relationship between them: as you mentioned in
your earlier mail, Classwide_Pre_Assertions and
Subprogram_Access_Pre_Assert must be at least as strong as any of the
corresponding preconditions.

So by trying to *enforce* LSP using something equivalent to Eiffel's
"require else" (this is the AI-288 model), one ends up having to *verify*
something similar somewhere else (the relationship between Pre_Assert
and Classwide_Pre_Assert for a given subprogram).

Since there will be a need for independent verification anyway, one may
as well keep things simple and have a single method/precondition
association.

****************************************************************

From: Tucker Taft
Date: Monday, January 14, 2008  3:13 PM

> Depending what you call "must recognize" ;-)
> There are 2 complementary ways of looking at preconditions:
>    - the caller view: you are told what you must ensure to call safely a
> routine
>    - the callee view: you tell the rest of the world the protocol needed
> to safely execute the specific body of a routine

I think there is a third view, namely what sort of requirements
the definer of an interface wants to impose on all future
implementors of the interface (or equivalently, overridings
of a given primitive).  The precondition for a given
subprogram body is relatively well defined.  What you want
to impose on all overridings seems like a more subtle
question, and may have to be defined more abstractly.

Hence, for a precondition, you might use a relatively
loose one for a given subprogram body since you know
the exact minimal requirements to get its job done, but specify
a somewhat stronger precondition for overridings so they
can perhaps do more than the initial implementation.
Similarly the postcondition for a given subprogram
body can be very stringent, since you know exactly what
it does.  On the other hand, the postcondition to be imposed
on all overridings might want to be less stringent, so they
can do somewhat different things than the initial
implementation.
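
For instance (a sketch only -- the pragma names follow this thread,
everything else is invented):

   package Stacks is
      type Stack is tagged private;
      function Is_Full (S : Stack) return Boolean;
      function Length (S : Stack) return Natural;

      procedure Push (S : in out Stack; E : Integer);
      pragma Pre (True);
      --  this particular body grows its storage on demand, so a
      --  statically bound call needs no precondition at all
      pragma Class_Pre (not Is_Full (S));
      --  dispatching callers must be prepared for a bounded stack
      pragma Post (Length (S) = Length (S)'Old + 1);
      --  we know exactly what this body does
      pragma Class_Post (Length (S) >= Length (S)'Old);
      --  an overriding is free to behave differently (say, coalesce
      --  duplicates) as long as it never shrinks the stack
   private
      type Stack is tagged null record;  --  storage elided
   end Stacks;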

When trying to find bugs in a given subprogram, you would like
to prove the strongest possible postcondition given the weakest
possible precondition.  To define the expected behavior of
all overridings, you want to be more flexible since you
don't know exactly what the overridings will want to accomplish,
and you could help flexibility by tightening up the preconditions
and loosening the postconditions.

> Distinguishing classwide preconditions is understandable (although not
> desirable, in my view) for the first view, but less so for the second.
> Whether you reach the body of your method statically or dynamically,
> once you are there you face the same constraints to execute the body
> safely.

Suppose the subprogram is abstract.  Clearly the only
precondition/postconditions of interest in those cases
are the ones that you want to impose on overridings.
You don't have any existing body to look at, so you
need to think about the "abstract" pre/postconditions.
These seem quite distinguishable from the "concrete"
pre/postconditions specified for any given subprogram
body.

One thing that would almost certainly help this discussion
would be several examples where there is or is not
a value to having both the "Pre" and the "Class_Pre"
pragmas and/or the "Post" and the "Class_Post" pragmas.
Unfortunately, coming up with real examples is real
work, whereas it is easier for me to just opine
abstractly... ;-)

>  > A "friendly" (and smart!) compiler could also
>  > provide a warning when a Class_Pre on an
>  > overriding is in fact a strengthening of the "or" of
>  > the Class_Pre's of the ancestor's primitives.
>  > I certainly wouldn't want your average compiler
>  > to be required to do such an analysis.  I fear
>  > it might require the solution to the halting problem.  ;-)
>
> This would not be the role of the compiler. LSP checking is not mandated
> by Ada's semantics.  An external static analyser would be better ;-)

It sounded like your proposal presumed that the compiler
had to enforce "LSP," since you weren't automatically
ensuring it by using the "or" of the preconditions.

****************************************************************

From: Randy Brukardt
Date: Friday, January 18, 2008  10:11 PM

> >    Any model
> > of pre and postconditions for Ada seems like it must
> > recognize this important distinction.
>
> By this standard (which is a way of favoring the caller view of a
> contract) Classwide_Pre_Assert is not enough; you would also need a
> Subprogram_Access_Pre_Assert so that you know what the preconditions
> for an indirect call are...

It has always seemed to me that that was necessary, because in order to
write correct calls, you need to know the specification of the call. Indeed,
that seems to be the primary benefit of precondition/postcondition pragmas
over using Asserts inside the subprogram -- that these things are part of
the specification. (Which is why I think they should be part of the syntax
ultimately, but that's for the distant future.)

Indeed, that is another reason why I am interested in encapsulating as much
as possible of the preconditions into the parameter subtypes: because the
existing static matching rules will then automatically ensure that the
preconditions match appropriately. (Similarly, such preconditions would be
part of the profile and the meaning would be obvious.)
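
For instance (a sketch; all names invented):

   Max : constant := 100;
   subtype Index is Integer range 1 .. Max;
   type Table is array (Index) of Integer;

   --  the range constraint plays the role of "pragma Pre (I in 1 .. Max)"
   function Element (T : Table; I : Index) return Integer;

   --  and subtype conformance carries the same "precondition" through
   --  indirect calls for free
   type Reader is access function (T : Table; I : Index) return Integer;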

Anyway, that seems to be the reason that Classwide_Precondition is needed,
and surely the same reason applies to Access_Precondition. (After all,
access-to-subprogram types are just a less structured way to do the same thing -
make indirect calls.)

****************************************************************

From: Cyrille Comar
Date: Saturday, January 19, 2008  12:27 AM

>> By this standard (which is a way of favoring the caller view of a
>> contract) Classwide_Pre_Assert is not enough; you would also need a
>> Subprogram_Access_Pre_Assert so that you know what the preconditions
>> for an indirect call are...
>
> It has always seemed to me that that was necessary, because in order to
> write correct calls, you need to know the specification of the call.

I was using this argument to show that favoring the caller's view of
pre/postconditions was not necessarily the way to go, but that does not
seem sufficient, so let me push the reasoning one step further: what
about anonymous access-to-subprogram parameters? By your standard,
preconditions are necessary for them too, right? Where do you put them?
Isn't this becoming ugly?

>  Indeed,
> that seems to be the primary benefit of precondition/postcondition pragmas
> over using Asserts inside the subprogram -- that these things are part of
> the specification.

I certainly agree with this but I don't see that it is relevant to the
point I am making.

Pre/postconditions form the contract between the caller and the callee,
so theoretically both views are equally important. This contract is what
will help you catch integration errors (Data Coupling analysis in
DO-178B terminology).

In practice, you often write pre/postconditions when you design reusable
components. That is to say when you have no idea of who the caller will
be. So you have the callee view: the precondition is what makes it safe
to call your routine and the postcondition is what the user can expect
as a result. So, in this view, the pre/postconditions are really
properties of the body of your subprogram that you publish as part of
the spec to inform users. This is a bottom-up approach...

If you favor the caller view, as for access_precond and
classwide_precond,  you basically ask the programmer to write
pre/postconditions before he knows what will be called. You say: here is
how I plan to use that stuff, or in the classwide case, here is what
needs to be true in order to respect LSP, so make sure it is so
down the road. So, in a sense, this is more of a top-down approach...

In both cases, you have favored one side of the contract, so the errors
will more likely be on the other side.

I am not saying that one model is "better" than the other but I
personally prefer the first one because it is simpler and more adapted
to reusable components.

****************************************************************

From: Randy Brukardt
Date: Saturday, January 19, 2008  12:55 AM

> I was using this argument to show that favoring the caller's view of
> pre/postconditions was not necessarily the way to go, but that does not
> seem sufficient, so let me push the reasoning one step further: what
> about anonymous access-to-subprogram parameters? By your standard,
> preconditions are necessary for them too, right? Where do you put them?
> Isn't this becoming ugly?

I've previously said that I think these should be an integrated part of the
syntax of a subprogram spec, so *obviously* they go in the same place they
always do. Is that ugly in the case of anonymous access-to-subprogram? Sure,
but anonymous access-to-subprograms are always ugly. [These syntax contracts
probably also should include contracted exceptions, BTW.]

What's really ugly is using pragmas for this purpose, especially ones with
special visibility.

Indeed, I proposed user-defined constraints in large part to get some of
this information into the Ada subprogram specification where it belongs.

> >  Indeed,
> > that seems to be the primary benefit of precondition/postcondition pragmas
> > over using Asserts inside the subprogram -- that these things are part of
> > the specification.
>
> I certainly agree with this but I don't see that it is relevant to the
> point I am making.
>
> Pre/postconditions form the contract between the caller and the callee,
> so theoretically both views are equally important. This contract is what
> will help you catch integration errors (Data Coupling analysis in
> DO-178B terminology).

Yes, which is why they should be an integral part of the specification; then
they'll actually appear on both halves. Having them on just one side or the
other is not a help.

> In practice, you often write pre/postconditions when you design reusable
> components. That is to say when you have no idea of who the caller will
> be. So you have the callee view: the precondition is what makes it safe
> to call your routine and the postcondition is what the user can expect
> as a result. So, in this view, the pre/postconditions are really
> properties of the body of your subprogram that you publish as part of
> the spec to inform users. This is a bottom-up approach...
>
> If you favor the caller view, as for access_precond and
> classwide_precond,  you basically ask the programmer to write
> pre/postconditions before he knows what will be called. You say: here is
> how I plan to use that stuff, or in the classwide case, here is what
> needs to be true in order to respect LSP, so make sure it is so
> down the road. So, in a sense, this is more of a top-down approach...
>
> In both cases, you have favored one side of the contract, so the errors
> will more likely be on the other side.
>
> I am not saying that one model is "better" than the other but I
> personally prefer the first one because it is simpler and more adapted
> to reusable components.

We absolutely have to support both. The language shouldn't be favoring one
kind of development over another. Indeed, I've developed components both
ways.

Besides that, there isn't much value to a new language feature for the
bottom-up approach. Pragma Assert works just fine. It's the top-down
approach where the benefits really begin to show.

****************************************************************

From: Robert A. Duff
Date: Saturday, January 19, 2008  7:57 AM

> In practice, you often write pre/postconditions when you design reusable
> components. That is to say when you have no idea of who the caller will be.

In this case, the contract is not negotiated between equal parties.  It's more
like a license you get with shrink-wrapped software -- clients have to take it
or leave it (more or less).

;-)

****************************************************************

From: Robert A. Duff
Date: Saturday, January 19, 2008  8:02 AM

...
> Besides that, there isn't much value to a new language feature for the
> bottom-up approach. Pragma Assert works just fine. It's the top-down
> approach where the benefits really begin to show.

I don't agree that pragma Assert works just fine.  There are advantages to
putting the precondition on the spec -- more likely that static proofs can
work, for example.

****************************************************************

From: Arnaud Charlet
Date: Saturday, January 19, 2008  8:06 AM

> I don't agree that pragma Assert works just fine.  There are advantages to
> putting the precondition on the spec -- more likely that static proofs can
> work, for example.

Plus the fact that some users want to be able to enable/disable preconditions
without disabling/enabling all other assertions (or postconditions).

It also gives the compiler more freedom to decide whether it makes more
sense to generate the precondition check on the caller or the callee side.
And it ensures that the user doesn't put the precondition in the wrong
place.
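
Schematically (a sketch, using the Pre_Assert spelling from earlier in
this thread):

   procedure P (X : Integer);
   pragma Pre_Assert (X > 0);

   --  callee side: the check is generated once, at the top of P's body:
   --     pragma Assert (X > 0);
   --  caller side: each call P (A) expands instead to:
   --     pragma Assert (A > 0);
   --     P (A);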

****************************************************************

From: Randy Brukardt
Date: Saturday, January 19, 2008  9:09 PM

> > Besides that, there isn't much value to a new language feature for the
> > bottom-up approach. Pragma Assert works just fine. It's the top-down
> > approach where the benefits really begin to show.
>
> I don't agree that pragma Assert works just fine.  There are advantages to
> putting the precondition on the spec -- more likely that static proofs can
> work, for example.

You can't usefully do anything at the call site with his bottom-up approach,
because you don't know the preconditions in general (dispatching,
access-to-subprogram, etc.). Moreover, with his approach, the preconditions
are going to be in a constant state of flux, so they can't be depended on
and shouldn't appear to be part of the specification.

But this is a straw man: I don't agree with limiting preconditions to the
bottom-up case, so I do think they should be an integral part of the
specification of *all* subprogram profiles. (And that needs syntax, not
pragmas, because overloading and anonymous types cause serious problems for
pragmas).

****************************************************************

From: Robert Dewar
Date: Monday, January 21, 2008  10:24 PM

> I've previously said that I think these should be an integrated part of the
> syntax of a subprogram spec, so *obviously* they go in the same place they
> always do. Is that ugly in the case of anonymous access-to-subprogram? Sure,
> but anonymous access-to-subprograms are always ugly. [These syntax contracts
> probably also should include contracted exceptions, BTW.]

I agree that in the long run some syntax extension might be appropriate.

> What's really ugly is using pragmas for this purpose, especially ones with
> special visibility.

Doesn't seem so ugly in practice, though it might be nicer if the
pragmas could go inside the formal part (WHY OH WHY are pragmas
forbidden inside a formal part?). Anyway, we don't feel like
inventing new syntax, or waiting for the ARG to do so, just to get
this feature off the ground (it is now fully implemented in
our current build).

Certainly it's easy enough to implement, at least it was for us,
since we already have these expressions that have to be analyzed
once for visibility and again for code generation (e.g. default
expressions), and we already have the mechanism for making formals
visible in the body, so the code in GNAT to analyze these pragmas
looks like:

                --  Analyze the pragma appearing in spec.  Make the
                --  formals of the subprogram S visible and enter its
                --  scope, so the expression can refer to them.

                Install_Formals (S);
                Push_Scope (S);

                --  First argument: the condition, resolved as Boolean

                Analyze_Per_Use_Expression
                  (Get_Pragma_Arg (Arg1), Standard_Boolean);

                --  Second argument, if present, is resolved as String

                if Arg_Count = 2 then
                   Analyze_Per_Use_Expression
                     (Get_Pragma_Arg (Arg2), Standard_String);
                end if;

                End_Scope;  --  restore the previous scope

P.S. Per_Use_Expression is an ugly name; I am going to think of a better
one, now that this mechanism is used for

   default expressions
   per use stuff (e.g. pragmas in tasks)
   preconditions and postconditions

****************************************************************

Questions? Ask the ACAA Technical Agent