Version 1.3 of ai05s/ai05-0289-1.txt


!standard 7.3.2(0)          12-03-14 AI05-0289-1/02
!class Amendment 12-02-13
!status Amendment 2012 12-02-13
!status ARG Approved 7-0-3 12-02-24
!status work item 12-02-13
!status received 12-02-04
!priority Low
!difficulty Easy
!subject Invariants and in-mode parameters whose value is changed
!summary
Invariants are checked on all parameters of the type after a call, including "in" parameters.
!proposal
The Rosen technique is a method by which writable access can be obtained to a constant view of a limited object. As a consequence, an "in" parameter of a limited type can be modified during a call such that its invariant no longer holds after the subprogram call.
Similar methods can be used with controlled objects (via Initialize), and indeed for any object if part of the object is accessed via an embedded access value.
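As a sketch (using hypothetical names; this is an illustration, not wording), the technique embeds an access value designating the current instance, through which a constant view -- such as an "in" parameter -- can be written:

```ada
--  Hypothetical sketch of the Rosen technique.  Component Self is
--  initialized to designate the current instance; dereferencing it
--  yields a variable view, even when the enclosing object is seen
--  only through a constant view.
package Counters is
   type Counter is limited private;
   procedure Bump (C : in Counter);   --  modifies C despite mode "in"
   function Value (C : Counter) return Natural;
private
   type Counter is limited record
      Self  : access Counter := Counter'Unchecked_Access;
      Count : Natural := 0;
   end record;
   function Value (C : Counter) return Natural is (C.Count);
end Counters;

package body Counters is
   procedure Bump (C : in Counter) is
   begin
      C.Self.Count := C.Self.Count + 1;  --  writes through Self
   end Bump;
end Counters;
```

If Counter carried a Type_Invariant involving Count, a call to Bump could leave that invariant false on an "in" parameter -- which is exactly the case this AI addresses.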
Should "in" parameters be checked to see if their invariants still hold after calls? (Yes.)
!wording
Modify 7.3.2(19/3):
* has a result with a part of type T, or one or more [in out or out] parameters with a part of type T, or an access to variable parameter whose designated type has a part of type T.
Modify the last sentence of 7.3.2(21/3):
Invariant checks, any postcondition check, and any constraint {or predicate} checks associated with [by-copy] in out or out parameters are performed in an arbitrary order.
!discussion
This issue occurs whenever the logical view of an object that is declared constant can still be changed via some technique. "In" parameters are just one facet of this problem (although the most visible one).
Unlike many issues with Invariants, this issue is solely under the control of the programmer of a package. In particular, there is nothing that a client can do to invalidate an invariant unless the programmer of the package makes that possible.
OTOH, invariants are nowhere near bullet-proof if the programmer intends to shoot herself in the foot. For instance, exporting access-to-subprogram values can expose to clients routines that do not check the invariant.
This situation is similar: the programmer can prevent the problems in the first place:
(1) If the invariant dereferences an embedded pointer to a writable object, the invariant can become false even for constants. Clearly this can be avoided; when it can't be, invariants can only be assumed to hold at the moment they are checked.
(2) If a writable pointer to the object exists anywhere in the package, the invariant can become false even for constants. The pointer could exist because of the Rosen technique, because of squirrelling by Initialize, or because of squirrelling of an access to an "in out" tagged parameter.
Of course, the author of the package can avoid these things. If the writable pointer can be avoided, the issue disappears.
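For illustration, a hypothetical sketch of case (1), where the invariant reads through an embedded pointer:

```ada
--  Hypothetical sketch: the invariant of Buffer depends on the
--  designated object B.Data.all.  Any other access path to that
--  object (or any copy of B.Data) can falsify the invariant
--  without ever presenting a Buffer object as a variable.
package Buffers is
   type Buffer is private
     with Type_Invariant => Consistent (Buffer);
   function Consistent (B : Buffer) return Boolean;
private
   type String_Access is access String;
   type Buffer is record
      Length : Natural := 0;
      Data   : String_Access := new String'("");
   end record;
   function Consistent (B : Buffer) return Boolean is
     (B.Length = B.Data'Length);  --  reads through the pointer
end Buffers;
```

Here the invariant holds only as long as nothing else modifies or replaces the designated string; no operation of package Buffers need be involved in breaking it.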
The issue also can be avoided by declaring parameters that can be modified as "in out". Indeed, by using "in out" parameters when necessary, the Rosen technique (in particular) can be avoided completely. It was developed primarily because Ada did not allow "in out" parameters on functions. But Ada 2012 now allows such parameters, and thus the necessity to use the Rosen technique has been substantially lessened.
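A hypothetical sketch of that alternative: with Ada 2012, the function simply takes the object as "in out", so the modification is visible in the profile and the normal invariant checks of 7.3.2 apply on return:

```ada
--  Hypothetical sketch: no Rosen technique needed; the writable view
--  is declared honestly, and invariant checks on "in out" parameters
--  already apply when the call returns.
package Counters is
   type Counter is limited private;
   function Next (C : in out Counter) return Natural;
private
   type Counter is limited record
      Count : Natural := 0;
   end record;
end Counters;

package body Counters is
   function Next (C : in out Counter) return Natural is
   begin
      C.Count := C.Count + 1;
      return C.Count;
   end Next;
end Counters;
```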
However, using "in" parameters is a time-honored technique in Ada programs, and changing to "in out" isn't always easy. Ada.Text_IO is a widely used (and copied) example.
So, while this issue is under the control of the programmer, it's not clear-cut that it can always be avoided. Moreover, invariants (unlike predicates) are intended to be bullet-proof outside of unusual circumstances (that's why they're restricted to private types). Only a fool (or your editor - draw your own conclusions :-) would argue that Ada.Text_IO is unusual.
So we require that all parameters be checked after calls into the package.
!corrigendum 7.3.2(0)
Insert new clause:
[A placeholder to cause a conflict; the real wording is found in the conflict file.]
!ACATS Test
Create an ACATS C-Test to test that invariant checks are applied to "in" parameters.
!ASIS
No change needed.
!appendix

From: Randy Brukardt
Sent: Saturday, February 4, 2012  12:08 AM

> 7.3.2. 12/3 needs to be amended to account for in parameters that are
> modified thanks to the legalized Rosen-trick. (sorry JP.)
>
> Aside: somebody needs to check all the places, where in out and out
> parameter rules apply, for this issue. Any constraint and predicate
> checks will need to be extended to in parameters as well, as in the
> two cases that I have already found, since changes can happen after
> all with the language's blessing and hence the checks are needed.

First, the other case you "found" is just plain wrong, we don't want to check
for that in any case (see my response to your review).

Second, if 7.3.2(12/3) needs to be modified, 7.3.2(17/3) needs to be modified a
lot more (it being the mainline case).

Third, this is not a "trick", it's an "idiom" or (better to leave the initials
the same) "technique".

My initial reaction is that nothing really has changed here; we don't make any
checks during the return for in out or out parameters that are passed by
reference, so it seems strange (and incompatible) to say that simply because we
fixed the wording to clearly allow something that's obviously been allowed all
along we need to change the semantics of things. We surely aren't going to start
changing the way constraint checks are made (that would be seriously
incompatible - or have no effect), besides, Ada doesn't allow changing
constraints on parameters (*any* parameters) anyway, so I don't see how those
could fail.

But this gut reaction is really wrong, because invariants and predicates really
are new things, and thus we ought to get them right (even though changing the
semantics of the Ada 95 portion of such calls is a non-starter IMHO).

I also realize (after starting this message), that we in fact have a special
rule in 3.2.4(21/3) to make predicate checks on in out and out parameters that
are passed by-reference.

So I have some sympathy for Erhard's position that we need to recheck predicates
and invariants on in parameters that might be modified under the covers (the
so-called Rosen technique is one way to do so). In order to do that, we'd need
to add a rule that in parameters that have "parts that are of controlled or
immutably limited types" (to use the wording of 3.3(13/3)) also are checked on
exit.

But I'm torn on this one. Claw sometimes modifies "in" parameters (mainly
because Ada 95 didn't let us declare them "in out"), and these aren't always
controlled. 'Access can be taken of any tagged parameter, and
Unchecked_Conversion can be used to get a writable access to modify it easily if
needed. Does that mean that we need to include all tagged types, too? How about
explicitly aliased parameters? And using 'Address and address clauses, it's
possible to modify any parameter passed by reference (even if it is not
aliased). So do we need to do this on any parameter not passed by copy?

There's surely a slippery slope involved here; there are levels of likely
modification, and we need to draw the line somewhere. As such, the place where
the line is drawn currently is easiest to write and understand. It's certainly
easy enough for someone using the Rosen technique to put any needed checks into
the postcondition, and I doubt that many Ada programmers would be surprised about
the lack of such checks. Moreover, we already know that there are holes in the
protection allowed by invariants, and this one doesn't seem any worse than any
of the others.

So I lean to doing nothing here. I can see an argument for drawing the line at
chapter-13ish things (the Rosen technique is on the other side of that line, no
chapter 13 stuff is needed, and the same is true when using Initialize to
squirrel a writable access as Claw does internally). That would require changes
to 7.3.2(12/3), 7.3.2(17/3), and 3.2.4(21/3), but nothing else as we'd only want
to change rules for *new* constructs.

What do others think? Should we plug this "hole"?

****************************************************************

From: Bob Duff
Sent: Saturday, February 4, 2012  10:12 AM

I'm inclined to do nothing.  There are holes, and this isn't a particularly big
one.

I could live with saying that immutably limited 'in' params have their
invariants and predicates checked on the way out.

I don't see why you consider including controlled -- the only way to modify
those is in Finalize.

And I don't see any reason to include any chap-13 stuff -- if you use an address
clause or unch_conv of pointers to modify a constant (including an 'in'
parameter), you are being erroneous.

****************************************************************

From: Erhard Ploedereder
Sent: Saturday, February 4, 2012  11:00 AM

> But I'm torn on this one. Claw sometimes modifies "in" parameters
> (mainly because Ada 95 didn't let us declare them "in out"), and these
> aren't always controlled. 'Access can be taken of any tagged
> parameter, and Unchecked_Conversion can be used to get a writable
> access to modify it easily if needed. Does that mean that we need to
> include all tagged types, too? How about explicitly aliased
> parameters? And using 'Address and address clauses, it's possible to
> modify any parameter passed by reference (even if it is not aliased).
> So do we need to do this on any parameter not passed by copy?
>
> There's surely a slippery slope involved here; there are levels of
> likely modification, and we need to draw the line somewhere.

To me, this line belongs at the line that also separates erroneous/bounded
error/undefined/implementation-defined behavior from perfectly legitimate,
guaranteed behaviour, where the user has a right to expect that the general
notion tied to a given semantics is being obeyed. (Which is why legitimizing
the JPRT bothered me so much.) For a type invariant my general notion is that it
generally holds (with short well defined intervals of allowed inconsistency).
Not checking the type invariant predicate on things that might have changed,
merely because the parameter mechanism says "in" is completely at odds with the
term "type invariant". For the remainder of time, the type invariant might stay
corrupted. That is simply unacceptable for programs that took great pains to
stay out of anything of an unstandardized behaviour. (The only reason not to
check is because it is guaranteed that the invariant was already checked at the
precise point of the change. That is not the case for JPRT as far as I can
tell.)

> As such, the place
> where the line is drawn currently is easiest to write and understand.
> It's certainly easy enough for someone using the Rosen technique to
> put any needed checks into the postcondition, and I doubt that many Ada
> programmers would be surprised about the lack of such checks.
> Moreover, we already know that there are holes in the protection
> allowed by invariants, and this one doesn't seem any worse than any of the others.

If by holes you mean the periods of temporary inconsistency, then it is worse
because the resulting hole is not bounded at all. If you mean "chap 13" then it
is worse because JPRT is not called out at all as risking type safeness - I do
see type invariants as a safety feature, not as a debug feature that sometimes
catches the bug and sometimes does not.

> So I lean to doing nothing here. I can see an argument for drawing the
> line at chapter-13ish things (the Rosen technique is on the other side
> of that line, no chapter 13 stuff is needed, and the same is true when
> using Initialize to squirrel a writable access as Claw does
> internally). That would require changes to 7.3.2(12/3), 7.3.2(17/3),
> and 3.2.4(21/3), but nothing else as we'd only want to change rules for *new* constructs.
>
> What do others think? Should we plug this "hole"?

So, my position is, that this hole needs to be plugged, what else? :-)

****************************************************************

From: Tucker Taft
Sent: Saturday, February 4, 2012  11:01 AM

> ...
> What do others think? Should we plug this "hole"?

No, I don't see why.  Invariants and dynamic predicates can follow pointers, and
I see the Rosen trick as simply following a pointer.  And we don't worry about
the fact that a pointed-to thing might change, at any moment.

****************************************************************

From: Bob Duff
Sent: Saturday, February 4, 2012  11:39 AM

I find this argument very convincing.

****************************************************************

From: Erhard Ploedereder
Sent: Saturday, February 4, 2012  12:37 PM

With the type invariant being
   T.length = T.content'Last - T'Content'First + 1; (I hope I wrote it
						     the right way)

you just found it very convincing that an instance X with components

   X.length = 42 and X.content = (1,2)

is perfectly o.k. despite the above type invariant, if X happens to be of a
rosen-able type (and JPRT was invoked to legitimately change X.length to 42).

I really cannot follow why I should not worry.

Maybe we have a different interpretation of what a type invariant is supposed to
be. I always thought that they needed to hold between client calls.

****************************************************************

From: Tucker Taft
Sent: Saturday, February 4, 2012  12:57 PM

> With the type invariant being
>     T.length = T.content'Last - T'Content'First + 1; (I hope I wrote
> it the right way)
>
> you just found it very convincing that an instance X with components
>
>     X.length = 42 and X.content = (1,2)
>
> is perfectly o.k. despite the above type invariant, if X happens to be
> of a rosen-able type (and JPRT was invoked to legitimately change
> X.length to 42).

Remember that type invariants only apply to *private* types (and private
extensions).  This means that inside the type you can easily have a pointer, and
the invariant might depend on the pointed-to object, which can be a variable
even if the private-type object is an IN parameter.  So I really think we should
treat "Rosen"-able types equivalently to types with access-type components.

In my view, using the Rosen trick is merely an optimization, allowing the
elimination of a level of indirection via a hidden pointer. It doesn't add
capability that isn't already provided by pointers, and should not be singled
out for treatment that doesn't apply to objects containing pointers.

> I really cannot follow why I should not worry.

You should worry exactly as much as you worry about types with hidden pointers,
no more and no less.

> Maybe we have a different interpretation of what a type invariant is
> supposed to be. I always thought that they needed to hold between
> client calls.

But alas there are holes, and in my view the hidden pointer "hole" is much more
likely to occur in practice than the rosen-trick hole.  Think about all of the
file I/O operations that take File_Type as an in parameter, but clearly have
side-effects on the state of the "internal" file.  In most implementations these
are implemented using a hidden pointer. I suppose a few folks might use the
Rosen trick, but that is quite a bit more trouble.

So come up with a solution for the hidden-pointer hole, and I will happily let
you apply it to Rosen-trick types.

****************************************************************

From: Bob Duff
Sent: Saturday, February 4, 2012  1:01 PM

> With the type invariant being
>    T.length = T.content'Last - T'Content'First + 1; (I hope I wrote it
> the right way)
>
> you just found it very convincing that an instance X with components
>
>    X.length = 42 and X.content = (1,2)
>
> is perfectly o.k.

I didn't say it was OK.  In fact, it's quite likely a bug.

But Tucker's point was that we don't know how to catch bugs like that.  Imagine,
for example, we have an 'in' parameter of access-to-T type -- X.Length could
legitimately be modified, and the invariant isn't going to catch that.

Or imagine a Doubly_Linked_List type.  Nobody is going to stop you from
modifying parts of the list, and that might invalidate the invariant of the 'in'
parameter.

As soon as you have access values, invariants can have holes.
The JPR trick (which I agree is a "trick") is just one small case of access
values.

Look at it this way: if you have an 'in' parameter that contains access values,
you can modify anything they point at. And that can include objects that have
constant views lying around. In the JPR trick case, that includes the parameter
itself.

>...despite the above type invariant,
> if X happens to be of a rosen-able type (and JPRT was invoked to
>legitimately change X.length to 42).
>
> I really cannot follow why I should not worry.

You should worry.  Invariants aren't a panacea, so you have to worry about bugs
that cause invariants to be False.

> Maybe we have a different interpretation of what a type invariant is
> supposed to be. I always thought that they needed to hold between
> client calls.

That would be a good thing, but we don't know how to guarantee that property.
Or more precisely, the language can't guarantee that property, so it's partly
the programmer's responsibility to guarantee it.

****************************************************************

From: Randy Brukardt
Sent: Saturday, February 4, 2012  3:56 PM

Right. We don't know how to guarantee *any* properties that relate to
modification via access types, modification of individual components (relevant
mainly for predicates, when the components are visible), and the like. It could
be done, of course, by checking every predicate and invariant after *any*
operation that could have modified it. But as any compiler optimizer-writer
knows, there are a *lot* of operations that can only be assumed to "write the
world", because we just don't have enough information to know. (That's
especially true when Unchecked_Conversion is tossed into the mix, and given that
it is guaranteed to work in many interesting circumstances, it's dubious to
avoid it.)

I could imagine a compiler having a "check everything everywhere" mode --
indeed, Janus/Ada has such a mode for constraint checks to help users figure out
why erroneous programs are misbehaving -- but it doesn't make sense (especially
because of the significant runtime hit involved) to require any such thing.

Anyway, the only thing you can trust implicitly is invariants and predicates on
types that are both private and contain no access types. And even then, you
could get trouble from aliased references (see my other note).

****************************************************************

From: Erhard Ploedereder
Sent: Saturday, February 4, 2012  6:29 PM

> Right. We don't know how to guarantee *any* properties that relate to
> modification via access types, modification of individual components
> (relevant mainly for predicates, when the components are visible), and
> the like. It could be done, of course, by checking every predicate and
> invariant after *any* operation that could have modified it.

Well, as Tuck pointed out, type invariants apply only to private types.
And, as long as the type invariant stays within the bounds of this privateness,
all the changes would be under control of the operations of the private type and
thus checked against the type invariant upon return. I admit that the type
predicate that reaches to a reference to a non-private modifiable object will
not always be consistently enforced, but I really do not want to make reasonable
type invariants (i.e. the doubly-linked predicate in the previous msg) hostage
to pathological ones.

Let me turn the question around. What is so bad about checking type invariants
on all parameters on return, not just "in out" and "out" parameters? Aren't we
trying to catch as many errors as possible (as Randy2 pointed out to me in
another msg on aliased parameters)?

****************************************************************

From: Randy Brukardt
Sent: Saturday, February 4, 2012  6:47 PM

...
> Let me turn the question around. What is so bad about checking type
> invariants on all parameters on return, not just "in out" and "out"
> parameters? Aren't we trying to catch as many errors as possible (as
> Randy2 pointed out to me in another msg on aliased parameters)?

If these checks were free, absolutely nothing, and I would suggest making as
many as possible after every call and assignment.

But they're not free, especially if expensive functions are involved. So the
runtime cost of executing many checks that hardly ever could fail becomes
significant. (I've run programs where 90% of the execution time was checks in
hopes of tracking down non-reproducible bugs.) If you're in the camp that such
checks get turned off as soon as unit testing is finished (presuming we get to
separate in-bound checks from the rest, and these are out-bound checks), then
this is not so bad. If you're in the camp that checks only get turned off in
dire circumstances (like me), such a rule would probably push me to avoid
invariants altogether (I'd get much better control using postconditions, only
things that really matter would get checked).

****************************************************************

From: Randy Brukardt
Sent: Saturday, February 4, 2012  6:55 PM

...
> > As such, the place
> > where the line is drawn currently is easiest to write and understand.
> > It's certainly easy enough for someone using the Rosen technique to
> > put any needed checks into the postcondition, and doubt that many
> > Ada programmers would be surprised about the lack of such checks.
> > Moreover, we already know that there are holes in the protection
> > allowed by invariants, and this one doesn't seem any worse than any
> > of the others.
>
> If by holes you mean the periods of temporary inconsistency, then it
> is worse because the resulting hole is not bounded at all. If you mean
> "chap 13" then it is worse because JPRT is not called out at all as
> risking type saveness - I do see type invariants as a safety feature,
> not as a debug feature that sometimes catches the bug and sometimes
> does
> not.)

No, by "holes" I was referring to the fact that the protection can be totally
evaded by using access-to-subprogram types. You may recall that we had tried to
come up with some way to plug that hole, but eventually gave up because the
cures were worse than the disease. (We might have plugged some simple cases, but
I'm pretty sure that the issue remains.)

And there are similar problems with access-to-object, as discussed in other
messages.

These holes aren't bounded, either.

****************************************************************

From: Randy Brukardt
Sent: Saturday, February 4, 2012  7:06 PM

> I could live with saying that immutably limited 'in' params have their
> invariants and predicates checked on the way out.
>
> I don't see why you consider including controlled -- the only way to
> modify those is in Finalize.

Like immutably limited, controlled objects can never be truly constant; see
3.3(13/3). I figured that if we were going to do this, we should follow the same
rules as for constantness.

And in any case, controlled constants can also be modified in Initialize. And if
Initialize squirrels away a pointer, the effect is the same as the Rosen
technique. Moreover, this is at least as common a technique as Rosen's; for
instance, Claw works this way (to support cloning, among other things). It's a
major reason why it would never be safe or useful to write or depend on
predicates or invariants of Claw objects. (About the only invariant that would
make sense is that the self-pointer actually is a self-pointer. Anything further
can be modified by the "wetware" task [shorthand for referring to the biological
entity controlling the program with a mouse, keyboard, and other devices], and
that happens mostly asynchronously to the Ada code.)

> And I don't see any reason to include any chap-13 stuff -- if you use
> an address clause or unch_conv of pointers to modify a constant
> (including an 'in' parameter), you are being erroneous.

Not if it is controlled or immutably limited. That's Erhard's point: we added
wording to explicitly allow the Rosen technique (which is why I won't call it a
trick anymore), although it was always expected that such techniques would work.
Essentially, controlled/immutably limited things can never be truly constant. I
don't think it matters how exactly you make the modification, I don't see much
reason to treat an address clause differently than a current-instance
self-pointer here (despite my general disgust for the abuse of address clauses
to do anything other than access memory-mapped devices).

****************************************************************

From: Bob Duff
Sent: Saturday, February 4, 2012  7:30 PM

Please note that my current opinion is not what you quoted above.
Right now, I agree 100% with Tucker that no RM change is needed in this area.

Above I said, "I'm inclined to do nothing."  Now I say "Do nothing."
I think you agree with that, so you and I need not argue the details of your
message I snipped.  (Maybe Erhard still wants to, I don't know.)

****************************************************************

From: Randy Brukardt
Sent: Saturday, February 4, 2012  7:39 PM

Fair enough. I just wanted to make the point that it makes no sense at all to
recheck only immutably limited parameters; any change would have to include
controlled as well, given the current rules of the language.

And I tend to agree that that is too much change. But I'd be more inclined to
support that change than one that only covered immutably limited (which would
leave a similar hole of the same scope as the Rosen technique).

****************************************************************

From: Tucker Taft
Sent: Sunday, February 5, 2012  11:44 AM

> Let me turn the question around. What is so bad about checking type
> invariants on all parameters on return, not just "in out" and "out"
> parameters? Aren't we trying to catch as many errors as possible (as
> Randy2 pointed out to me in another msg on aliased parameters)?

Perhaps we should leave it as is for now, but recommend another Assertion_Policy
that checks more frequently.  But I agree with you that we should do the
Type_Invariant check on all IN parameters if we decide to do it on any IN
parameters, since a Type_Invariant that follows pointers seems quite possible,
and doesn't rely on any sort of a "trick" or "idiom."

Clearly checking all IN parameters will reduce the size of the hole, but at
considerable expense, given the normal much higher percentage of calls that pass
IN parameters.   Furthermore, the implementor of the abstraction could always
toss in some explicit "Assert" pragmas at places after they change the
underlying state of an IN parameter.

A Type_Invariant is really just a short-hand for a bunch of Assert pragmas, and
the only danger of trying to close a loop-hole like this is that you make them
too expensive for daily use.  We could even put in a NOTE that Type_Invariants
are *not* (normally) checked on IN parameters, and if the underlying state of an
IN parameter is changed, it is wise for the implementor of the abstraction to
insert additional Assert pragmas to ensure the Type_Invariant is preserved.

****************************************************************

From: Erhard Ploedereder
Sent: Sunday, February 5, 2012  1:48 PM

Without wanting to water down the need for trustable type invariants (which is
my real concern -- "mostly trustable type invariants" are no good as far as I'm
concerned) ...

I have been thinking about the capability of a compiler to determine that the
type invariant cannot be violated for an "in" parameter and thus can be safely
omitted. It doesn't look too demanding to determine the "no access can be
legitimately involved"-predicate (i.e., "no dereferencing in the type invariant
and no Rosen-access possibility to the flat components of the full type of the
parameter" is clearly statically decidable, simple and good enough for most
cases). If that predicate is true, the "in" mode guarantees constancy of the
type invariant across the call, except for chap 13 holes (I think. Any opposing
views?) Presumably, that's enough to allow elimination of the check as an
optimization, since anybody interested in verification will forbid features
anyway that lead to erroneous changes of "in" parameters.

****************************************************************

From: Randy Brukardt
Sent: Sunday, February 5, 2012  3:10 PM

> Without wanting to water down the need for trustable type invariants
> (which is my real concern -- "mostly trustable type invariants" are no
> good as far as I'm concerned) ...

Nothing else is possible, of course. It's where you draw the "mostly" line that
matters. And in any case, remember that we decided that checking
access-to-subprograms was too painful, so just checking "in" parameters doesn't
make these "trustable", even ignoring Chapter 13 issues. It's much like the
situation with dynamic predicates -- there is no reasonable way to make them
"trustable".

Besides, you're talking about "invariants", which aren't very useful to begin
with. There aren't many properties that hold on *all* members of a type, because
such properties reflect redundant (and thus strictly unnecessary) information.
So invariants are mostly useful for checking implementation artifacts (like the
self-pointers in Claw).

> I have been thinking about the capability of a compiler to determine
> that the type invariant cannot be violated for an "in" parameter and
> thus can be safely omitted. It doesn't look too demanding to determine
> the "no access can be legitimately involved"-predicate (i.e., "no
> dereferencing in the type invariant and no Rosen-access possibility to
> the flat components of the full type of the parameter" is clearly
> statically decidable, simple and good enough for most cases).
> If that predicate is true, the "in" mode guarantees constancy of the
> type invariant across the call, except for chap 13 holes (I think. Any
> opposing views?) Presumably, that's enough to allow elimination of the
> check as an optimization, since anybody interested in verification
> will forbid features anyway that lead to erroneous changes of "in"
> parameters.

Well, we'd need a permission to remove the check. (Perhaps we already have one,
but that's not clear to me; the only checks that I would remove are those that
are clearly redundant assuming the program is not erroneous. Most Chapter 13
effects aren't erroneous, after all.)

Anyway, I'd think that such a permission would have almost no effect in
practice. The number of invariants written on types for which the check could be
eliminated by such a permission should be very, very small. First of all, almost
all real private types include embedded pointers. Second, almost all (new)
private types should allow extension, meaning that they ought to be tagged and
controlled. (They need to be controlled in order that extensions can be
controlled, since that can't be added after the fact.) The few types that aren't
controlled probably are that way because of severe space constraints (for
instance, if billions of instances are needed) or because of outside issues
(such as the need to avoid "implicit" operations). In the former case, the types
are almost never private (consider "String" or "Complex" as examples), and in
the latter case, "implicit" invariant checks are worse than finalization would
be (they happen at a lot more places), so I can hardly imagine allowing them if
finalization and dispatching are banned.

The net is that compiler writers would not bother with removing such checks, as
it would make no difference in practice. Most likely, it would make more sense
to allow the normal elimination of checks (however that works) to do the job. So
why even bother to mention the possibility?

As I said before, I think it would make sense to have a compiler mode ("paranoid
mode") where all in parameters (and all assignments as well) are checked, and
most likely any attempt at check elimination is turned off. But I don't think
it is a good idea to try to make this feature into something it is not (it is
just a set of automatic postconditions that are applied, and it is just as
trustworthy as a postcondition -- which is not very; like all assertions, it's
only true at the instant it is checked. Only static predicates have any stronger
properties).

****************************************************************

From: Bob Duff
Sent: Sunday, February 5, 2012  3:13 PM

> Without wanting to water down the need for trustable type invariants
> (which is my real concern -- "mostly trustable type invariants" are no
> good as far as I'm concerned) ...

I strongly disagree with "no good".  Maybe "less good".

It bothers me that:

    X: Integer range 1..10;
    ...
    Put(Integer'Image(X));

can print 11, but I don't go so far as to say "constraints are no good".
That's a "mostly trustable constraint".
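One way that can happen (a hypothetical sketch, not from this thread; the
procedure name is invented): range checks occur on assignment, not on read, so
an uninitialized scalar can carry an out-of-range value past the constraint.

```ada
--  Hypothetical sketch: reading an uninitialized scalar is a bounded
--  error, so X may hold a value outside 1 .. 10.
with Ada.Text_IO;
procedure Invalid_Demo is
   X : Integer range 1 .. 10;  --  never assigned
begin
   --  May print an out-of-range value such as 11, or raise
   --  Program_Error; no Constraint_Error is required here.
   Ada.Text_IO.Put_Line (Integer'Image (X));
end Invalid_Demo;
```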

> I have been thinking about the capability of a compiler to determine
> that the type invariant cannot be violated for an "in" parameter and
> thus can be safely omitted. It doesn't look too demanding to determine
> the "no access can be legitimately involved"-predicate (i.e., "no
> dereferencing in the type invariant and no Rosen-access possibility to
> the flat components of the full type of the parameter" is clearly
> statically decidable, simple and good enough for most cases). If that
> predicate is true, the "in" mode guarantees constancy of the type
> invariant across the call, except for chap 13 holes (I think. Any
> opposing views?)

Invariants can refer to global variables, which can be modified.  These
references and modifications can be hidden inside function calls. This is
another way (in addition to references through pointers) whereby invariants can
be untrustworthy.

Without globals annotations, there's no way to solve this problem.

But all is not lost.  Maybe we'll have globals annotations in Ada 2020. Maybe
some implementer will create them sooner.  AdaCore is already working on
something along these lines (calculating globals information automatically), as
part of the Hi-Lite project. Hi-Lite probably restricts pointers enough to make
you happy.

All of this means that invariants can be useful even if they're not perfect.
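A hypothetical sketch of the globals hole described above (all names invented
for illustration): the invariant dereferences nothing, yet its truth depends on
a variable the compiler cannot track.

```ada
--  Hypothetical sketch: an invariant whose truth depends on a global.
package Counter_Pkg is
   Limit : Natural := 10;  --  global; any client can change it

   type Counter is private
     with Type_Invariant => Holds (Counter);
   function Holds (C : Counter) return Boolean;
private
   type Counter is record
      Value : Natural := 0;
   end record;
   --  No pointers are dereferenced here, yet the invariant is not
   --  stable: raising or lowering Limit can falsify it without
   --  touching any Counter object.
   function Holds (C : Counter) return Boolean is (C.Value <= Limit);
end Counter_Pkg;
```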

****************************************************************

From: Jean-Pierre Rosen
Sent: Sunday, February 5, 2012  4:06 PM

> As I said before, I think it would make sense to have a compiler mode
> ("paranoid mode") where all in parameters (and all assignments as
> well) are checked, and most likely any attempt at checking elimination is
> turned off.

What about an implementation permission, saying that in addition to the required
checks, an implementation is allowed to add checks at any place it sees fit.

And see what compiler writers do.

****************************************************************

From: Erhard Ploedereder
Sent: Sunday, February 5, 2012  4:22 PM

> The number of invariants written on types for which the check could be
> eliminated by such a permission should be very, very small. First of
> all, almost all real private types include embedded pointers.

That wasn't part of the criterion. The criterion was that the invariant did not
dereference pointers. Not every invariant does that even if the type includes
pointers.

****************************************************************

From: Randy Brukardt
Sent: Sunday, February 5, 2012  4:30 PM

I see. Doesn't matter, though, because my main point is that nearly all real
types will (or should) be controlled (or contain controlled parts), and that
means that the Rosen-technique or relatives are in play. And that has nothing to
do with the contents of the invariant.
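For readers who haven't seen it, a minimal sketch of the Rosen technique
(package and component names invented here): an access discriminant constrained
to the current instance gives every view of the object, including the constant
view of an "in" parameter, a writable path back to itself.

```ada
--  Hypothetical sketch of the Rosen technique (names invented here).
package Rosen_Demo is
   type T is limited private;
   procedure Bump (X : in T);  --  mode "in", yet Bump can modify X
private
   type T_Writer (Self : access T) is limited null record;
   type T is limited record
      --  Within this declaration, T denotes the current instance,
      --  which is aliased because the type is limited.
      Writer : T_Writer (T'Access);
      Count  : Natural := 0;
   end record;
end Rosen_Demo;

package body Rosen_Demo is
   procedure Bump (X : in T) is
   begin
      --  X.Writer.Self designates X itself, but as a variable view,
      --  so the constant "in" view is bypassed.
      X.Writer.Self.Count := X.Writer.Self.Count + 1;
   end Bump;
end Rosen_Demo;
```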

****************************************************************

From: Bob Duff
Sent: Sunday, February 5, 2012  4:35 PM

> That wasn't part of the criterion. The criterion was that the
> invariant did not dereference pointers. Not every invariant does that
> even if the type includes pointers.

But invariants can call functions, so the compiler can't tell what they do.

****************************************************************

From: Bob Duff
Sent: Sunday, February 5, 2012  4:47 PM

> What about an implementation permission, saying that in addition to
> the required checks, an implementation is allowed to add checks at any
> place it sees fit.

I tried something like that with predicates, but it won't work.
You can't say "any place", because (for example) that would give permission to
the compiler to introduce arbitrary race conditions into the code.  And any
attempt to define which "places" make sense and which don't is too complicated.

Anyway, compilers can do whatever they like in non-standard modes.
And to do so usefully, they don't have to "define which..." as in the last
sentence of the previous paragraph.  They just have to do something useful,
which is much easier than defining all useful things in this neighborhood.

I've come to believe that many/most Implementation Permissions are misguided.
They're somewhat based on the erroneous[*] belief that we standard writers have
the power to require someone to do something.  Compiler writers can do useful
things without ARG's permission!

[*] I'm using "erroneous" in the normal English sense, not the Ada-ese sense.
;-)

****************************************************************

From: Yannick Moy
Sent: Monday, February 6, 2012  2:17 AM

> Anyway, compilers can do whatever they like in non-standard modes.
> And to do so usefully, they don't have to "define which..." as in the
> last sentence of the previous paragraph.  They just have to do
> something useful, which is much easier than defining all useful things
> in this neighborhood.
>
> I've come to believe that many/most Implementation Permissions are
> misguided.  They're somewhat based on the erroneous[*] belief that we
> standard writers have the power to require someone to do something.
> Compiler writers can do useful things without ARG's permission!

Just to confirm what you're saying, we plan to have a special mode in GNAT that
performs additional checks at run time. The goal of this special mode is to be
able to mix tests and proofs in a way that preserves the confidence level one
gets with tests only (with "enough" tests, I should say, as mandated in
certification standards such as DO-178).

The key idea is that, when a subprogram is formally verified, the proof relies
on assumptions. These assumptions get proved at some point when you prove the
callers/callees. We want to check these assumptions at run time when the
callers/callees are tested only.

One such assumption is that, for parameters of public subprograms whose type has
an invariant (or a predicate), the invariant (or predicate) holds on subprogram
entry (for IN and IN OUT parameters) and on subprogram exit (for OUT and IN OUT
parameters). These will be assumed/proved in formal verification, and tested at
execution when the code is compiled with the special mode of GNAT we will
implement.

(for more details about our strategy for mixing tests and proofs, see the paper
we just published at ERTS2:
http://www.open-do.org/wp-content/uploads/2011/12/hi-lite-erts2012.pdf)

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 14, 2012  9:31 PM

The last bit of 7.3.2(21/3) says:

Invariant checks, any postcondition check, and any constraint checks associated
with by-copy in out or out parameters are performed in an arbitrary order.

Great, but what about predicate checks associated with by-copy in out and out
parameters? They're not constraint checks, or any of the other kinds of checks either.

Invariant checks, any postcondition check, and any constraint {or predicate} checks
associated with by-copy in out or out parameters are performed in an arbitrary order.

P.S. Happy PI day!

****************************************************************

From: Erhard Ploedereder
Sent: Thursday, March 15, 2012  5:26 AM

> Invariant checks, any postcondition check, and any constraint {or
> predicate} checks associated with by-copy in out or out parameters are
> performed in an arbitrary order.

Fix it, of course. (I am tempted to ask for "subtype predicate" in lieu of just
"predicate", but I bow to uniformity if the "subtype" is consistently missing
throughout the RM.)

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 14, 2012  10:20 PM

After sending the last message, I thought of a slightly more important bug:

3.2.4 also defines predicate checks on by-reference in out or out parameters.
Probably that sentence of 7.3.2(21/3) ought to include them as well:

   Invariant checks, any postcondition check, and any constraint {or predicate}
   checks associated with [by-copy] in out or out parameters are performed
   in an arbitrary order.

But that set me to wondering: we changed the rules for invariants so that
they're checked for all parameters on the way out, regardless of mode. Should we
have done something similar for predicates as well?

That is, the current rules for predicates check by-copy in out and out
parameters by the "normal" subtype conversion check, and by-reference in out and
out parameters by a special rule.

But we're now checking all parameters for invariants; perhaps we need to do the
same for predicates?

We surely don't need to do that for by-copy parameters; the original objects
will never be modified and thus any changes will be lost.

[Aside: Humm: I wonder if invariants also should except by-copy in parameters
somehow? These can't be changed by any means, and rechecking them is just wasted
time. I suppose it depends on the invariant; if the parameter is an index, it
might point into a global table, which could have changed. But then the by-copy
objects could have changed, too. Barf.]

But a by-reference object could have been changed (especially if it has embedded
pointers).

I personally would say that such checks should not be performed; a predicate
should be about the value of the object and not about the value of anything
else. But then again, I'd say the same about an invariant and I lost that
argument. So I don't think that anything *I* think on this one is likely to be
relevant.

That being the case, I think we need to make these checks on *all* parameters on
the way out, just like we decided to do for invariants. That's probably not a
real problem for static predicates (they can't depend on anything global, their
types are always by-copy, so there would never be any need to perform those
checks again). But it would be fairly rare that a dynamic predicate could be
removed (it could only be removed if it didn't contain any references outside of
the object, which means no dereferences, no globals, and (in most compilers,
anyway) no regular function calls). (This is just like the situation with
invariants, which almost never can be eliminated for "in" parameters; only in
similar cases as outlined above.)

So, I suggest that we modify the second sentence of 3.2.4(22/3) as follows:

   After normal completion and leaving of a subprogram, for each [in out or out]
   parameter [that is passed by reference]{other than an in out or out parameter
   passed by copy}, the predicate of the subtype of the actual is evaluated, and
   a check is made that the predicate is True.

And extend the existing AARM note:

   Static_Predicate checks can be removed even in the presence of potentially
   invalid values, just as constraint checks can be removed. {In particular, it
   is never necessary to perform a Static_Predicate check on an in parameter of
   a by-copy type after a call, as neither the parameter value nor the result of
   the predicate expression can change from when it was evaluated before the
   call. In contrast, a Dynamic_Predicate check of an in parameter of a by-copy
   type can only be eliminated if the compiler can prove that the value of the
   predicate expression cannot depend on any global values (or it can prove that
   those values haven't changed).}

This is a fairly big change (it will force more users to turn off predicate
checks, because there will be many more checks on parameters), so I think we'll
need to vote.

---

Just as I was about to hit "send" on this, I realized that the situation is not
quite as similar with invariants as I previously thought. An invariant evaluates
the *same* check in both directions. The predicate checks described above are on
the subtype of the *formal* on the way into a call, and on the subtype of the
*actual* on the way out.

But the point is the same - if an "in" parameter is modified "under-the-covers",
the check may need to be retried. OTOH, there are many holes in the
Dynamic_Predicate model in any case, and we're not going to try to plug those.
That's the point of 3.2.4(27/3). So the need to plug this hole is much less than
the need to plug the hole for invariants. (There are other holes in invariants,
which can only be exploited with the help of the abstraction creator; I would
have thought that "in" parameters were also in that category, but somehow I lost
that one. Again, I don't think that too much of what I think is relevant here.)

---

So now I'm just 100% confused. There seem to be three possibilities; please vote
for one and explain if you want:

         ________  Leave 3.2.4(22/3) unmodified. (We're not going to try to make
		   Dynamic_Predicates "work"; it's impossible.)

         ________  Recheck a by-reference parameter of any mode after a call;
		   don't check by-copy parameters. (The by-copy parameter is
		   unmodified, and if a Dynamic_Predicate changes value by other
		   reason, we're not trying to keep it working. OTOH, it's
		   possible to modify an "in" parameter via the Rosen technique,
		   so rechecking makes sense.)

         ________  Recheck everything after a call as described above. (The
		   fewer holes the better.)

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 14, 2012  10:27 PM

And my vote in this straw poll:

> So now I'm just 100% confused. There seem to be three possibilities;
> please vote for one and explain if you want:
>
>          __3/4th__ Leave 3.2.4(22/3) unmodified. (We're not going
>                    to try to make Dynamic_Predicates "work"; it's
>                    impossible.)
>
>          __1/4th__ Recheck a by-reference parameter of any mode
>                    after a call; don't check by-copy parameters.
>                    (The by-copy parameter is unmodified, and if a
>                    Dynamic_Predicate changes value by other reason, we're
>                    not trying to keep it working. OTOH, it's possible to
>                    modify an "in" parameter via the Rosen technique, so
>                    rechecking makes sense.)
>
>          ________  Recheck everything after a call as described above.
>                    (The fewer holes the better.)

I'm torn on this one. There is an argument that by-reference parameters should
recheck; these could have their values changed and thus a recheck is a good
idea. The constraints can't change by definition, but the predicate can. OTOH, a
by-copy parameter can never change value and we don't recheck constraints on
them unless they do change, so why treat predicates differently?

But that seems pretty weird, so I lean toward saying that we already have it
right. Thus the split vote above.

****************************************************************

From: Tucker Taft
Sent: Thursday, March 15, 2012   9:47 AM

> After sending the last message, I though of a slightly more important bug:
>
> 3.2.4 also defines predicate checks on by-reference in out or out
> parameters. Probably that sentence of 7.3.2(21/3) ought to include
> them as well:
>
>     Invariant checks, any postcondition check, and any constraint {or
>     predicate}
>     checks associated with [by-copy] in out or out parameters are performed
>     in an arbitrary order.
>
> But that set me to wondering: we changed the rules for invariants so
> that they're checked for all parameters on the way out, regardless of mode.
> Should we have done something similar for predicates as well?

I would say "yes" for dynamic predicates, and "no" for static predicates.
Dynamic predicates were specifically intended to handle cases involving
indirection, etc., so it seems to make sense to check them at the same points we
check type invariants.  Static predicates don't need to be rechecked since they
are specifically defined to behave more like range constraints.

And because of indirection, by-copy vs. by-reference is irrelevant to the issue
here.

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 15, 2012  5:39 PM

> I would say "yes" for dynamic predicates, and "no" for static
> predicates.  Dynamic predicates were specifically intended to handle
> cases involving indirection, etc., so it seems to make sense to check
> them at the same points we check type invariants.  Static predicates
> don't need to be rechecked since they are specifically defined to
> behave more like range constraints.
>
> And because of indirection, by-copy vs. by-reference is irrelevant to
> the issue here.

Fascinating. I gave three options, and you managed to invent a 4th. :-)

I had rejected that possibility because it seemed very weird, especially as we
allow static predicates on all types. So we still would need the current rule
for static predicates and then a new rule for dynamic predicates. Beyond that,
by-copy vs. by-reference does matter in the wording of the rule, at least,
because for by-copy parameters (in out or out), the copy back does the check
"naturally" (it falls out of the rules for subtype conversion). So we only need
an extra check in other cases; and listing those makes the rule rather messy.

Probably the best way to do that would be to add a separate sentence after the
existing one (but we have to be careful not to create two different predicates
for a subtype):

... After normal completion and leaving of a subprogram, for each in out or out
parameter that is passed by reference, the predicate of the subtype of the
actual is evaluated, and a check is made that the predicate is True. {Similarly,
for every parameter of mode in, if the subtype of the actual has any dynamic
predicates that apply, the predicate of the subtype of the actual is evaluated,
and a check is made that the predicate is True.} ...

AARM Ramification: For a parameter of mode in, if the subtype of the actual has
only static predicates, no check is made after returning.

Consider this choice #4.

I'm unconvinced that this is a good idea. Besides the general clunkiness, it's
not clear to me that this is the right split. A static predicate can be applied
to a by-reference type, after all (although it would be hard for it to do much),
so it's possible for it to occur on a type that uses the Rosen technique. OTOH,
since it doesn't allow any components, I can't quite imagine how it could change
(today). OT3H, it would make sense for these to allow access to bounds and
discriminants of the type (not in Ada 2012, of course), and doing so would make
this rule a bad choice. OT4H, we'd probably change it then.

In any case, at this point, we have 1.75 votes for option #1 (do nothing), 0.25
votes for option #2, and 1 vote for option #4. That doesn't give me enough
guidance even to do nothing, so I hope that others will weigh in on this topic.

****************************************************************

From: Tucker Taft
Sent: Thursday, March 15, 2012  6:05 PM

> ... After normal completion and leaving of a subprogram, for each in
> out or out parameter that is passed by reference, the predicate of the
> subtype of the actual is evaluated, and a check is made that the predicate is True.
> {Similarly, for every parameter of mode in, if the subtype of the
> actual has any dynamic predicates that apply, the predicate of the
> subtype of the actual is evaluated, and a check is made that the predicate is True.} ...
>
> AARM Ramification: For a parameter of mode in, if the subtype of the
> actual has only static predicates, no check is made after returning.
>
> Consider this choice #4.

Sounds fine to me.

> I'm unconvinced that this is a good idea. Besides the general
> clunkiness, it's not clear to me that this is the right split. A
> static predicate can be applied to a by-reference type, after all
> (although it would be hard for it to do much), so it's possible for it
> to occur on a type that uses the Rosen technique.

I am not following you here.  I thought an important property of a static
predicate was that it involves the use of the predefined comparison, equality,
or membership operations against static expressions (i.e. scalars or strings),
so there are no indirections possible, and the specified places for checks are
sufficient to ensure that the predicate remains true everywhere.

> ... OTOH, since it doesn't allow any components, I can't quite imagine
> how it could change (today). OT3H, it would make sense for these to
> allow access to bounds and discriminants of the type (not in Ada 2012,
> of course), and doing so would make this rule a bad choice. OT4H, we'd
> probably change it then.

It wouldn't be a "static predicate" if you started changing the rules in this
way.

> In any case, at this point, we have 1.75 votes for option #1 (do
> nothing),
> 0.25 votes for option #2, and 1 vote for option #4. That doesn't give
> me enough guidance even to do nothing, so I hope that others will
> weight in on this topic.

Vote for option #4! Vote for option #4!
The rest are junk.

Never mind, that was my Super PAC talking... ;-)

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 15, 2012  6:26 PM

...
> > I'm unconvinced that this is a good idea. Besides the general
> > clunkiness, it's not clear to me that this is the right split. A
> > static predicate can be applied to a by-reference type, after all
> > (although it would be hard for it to do much), so it's possible for
> > it to occur on a type that uses the Rosen technique.
>
> I am not following you here.  I thought an important property of a
> static predicate was that it involves the use of the predefined
> comparison, equality, or membership operations against static
> expressions (i.e. scalars or strings), so there are no indirections
> possible, and the specified places for checks are sufficient to ensure
> that the predicate remains true everywhere.

I think you are right, but it mostly follows from the general uselessness of
static predicates on composite types and not from any well-defined principle.
Indeed, I think I would have preferred that static predicates were only allowed
on scalar types, in which case this discussion would not have come up.

> > ... OTOH, since it doesn't allow any components, I can't quite
> > imagine how it could change (today). OT3H, it would make sense for
> > these to allow access to bounds and discriminants of the type (not
> > in Ada 2012, of course), and doing so would make this rule a bad
> > choice. OT4H, we'd probably change it then.
>
> It wouldn't be a "static predicate" if you started changing the rules
> in this way.

The bounds and non-mutable discriminants of an object can never change after
its creation. So it's hard to imagine how the predicate of the subtype of an
object could change if it depended on those things. If we're going
to allow static predicates on composite types, they might as well be useful.

I suspect that mutable discriminants act pretty similarly to the stand-alone
scalar objects for which static predicates are currently designed. So I think
they too would not cause any problems -- but that would have to be verified (and
surely not right now).

My main point is that there are two possible views of static predicates: (1) a
special gizmo to allow case statements/loops/etc.; and (2) a user-defined
constraint that has similar properties to the built-in constraints.

If (1), we should have restricted static predicates only to scalar (or maybe
discrete) types. The gizmo only works there, and allowing more is just
confusing. If (2), then why we don't allow bounds and discriminants (allowed in
real constraints, after all) is not clear. The current rules are just weird
because they don't really support either view quite right. (But this is an
insufficient problem to worry about for Ada 2012 -- although if we don't
restrict the use of static predicates now, then I presume we're leaning [sooner
or later] to view (2), and thus some expansion of static predicates in the
future should be expected.)

****************************************************************

From: Robert Dewar
Sent: Thursday, March 15, 2012  9:40 PM

One of the reasons I think that doing nothing is fine is that in practice I
suspect we will have to revisit details like this anyway, no matter what we
decide now. In particular, adding or subtracting predicate checks is something
that will have very little impact and can certainly be done later.

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, March 16, 2012  4:47 AM

> please
> vote for one and explain if you want:
>
>          ________  Leave 3.2.4(22/3) unmodified. (We're not going to
>                    try to make Dynamic_Predicates "work"; it's impossible.)
>
>          ________  Recheck a by-reference parameter of any mode after
>                    a call; don't check by-copy parameters. (The by-copy
>                    parameter is unmodified, and if a Dynamic_Predicate
>                    changes value by other reason, we're not trying to
>                    keep it working. OTOH, it's possible to modify an
>                    "in" parameter via the Rosen technique, so rechecking
>                    makes sense.)
>
>          ________  Recheck everything after a call as described above.
>                    (The fewer holes the better.)

Abstain (not helpful to Randy), due to insufficient understanding of the issues.

Some thoughts though:
IIUC, the problem arises when an in out parameter is passed by reference, the
actual has a dynamic predicate that the formal has not, and /something in the
called subprogram changes the dynamic predicate in such a way that it no longer
holds/. I tend to think "what the heck, the guy who does that gets what he
deserves" (which reinforces my abstention).

What happens in the case of null exclusion? i.e. if the actual excludes null,
but not the formal? Hmmph, it's passed by copy, so there is no issue... Saved by
the gong. I think that, had we had predicates in 2005, there would have been no
null exclusion. We can argue if predicates should behave like constraints, but
null exclusion should really behave the same as Static_Predicate => ptr /= null.

> My main point is that there are two possible views of static
> predicates: (1) a special gizmo to allow case statements/loops/etc.;
> and (2) a user-defined constraint that has similar properties to the built-in
> constraints.
>
> If (1), we should have restricted static predicates only to scalar (or maybe
> discrete) types. The gizmo only works there, and allowing more is just
> confusing. If (2), then why we don't allow bounds and discriminants
> (allowed in real constraints, after all) is not clear. The current
> rules are just weird because they don't really support either view
> quite right. (But this is an insufficient problem to worry about for
> Ada 2012 -- although if we don't restrict the use of static predicates
> now, then I presume we're leaning [sooner or later] to view (2), and
> thus some expansion of static predicates in the future should be
> expected.)

I support 2), with a growing feeling that dynamic predicates are really
different from static predicates. In practice, I think there will be 3 uses of
predicates:

1) Static predicates (behave like constraints)
2) Dynamic predicates that the user would have liked to be static, but that are
   just too complicated. Don't expect the predicate to change during execution.
   (behave like constraints)
3) Sophisticated dynamic predicates (behave like assertions)

The hard part of the discussion is due to the presence of 2).
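A hypothetical sketch of the three flavors (the subtype and type names are
invented for illustration):

```ada
--  Hypothetical sketch of the three uses (names invented here).
type Day is (Mon, Tue, Wed, Thu, Fri, Sat, Sun);

--  1) Static predicate: behaves like a constraint, and is usable in
--     case statement choices and for loops.
subtype Weekend is Day
  with Static_Predicate => Weekend in Sat | Sun;

--  2) Dynamic only because the expression is outside the static
--     subset ("mod" is not allowed in a static predicate); the user
--     still expects it never to change during execution.
subtype Even is Integer
  with Dynamic_Predicate => Even mod 2 = 0;

--  3) A "sophisticated" dynamic predicate relating components;
--     behaves more like an assertion.
type Span is record
   First, Last : Natural;
end record
  with Dynamic_Predicate => Span.First <= Span.Last;
```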

****************************************************************

From: Jeff Cousins
Sent: Friday, March 16, 2012  5:02 AM

> I think you are right, but it mostly follows from general uselessness
> of static predicates on composite types and not from any well-defined
> principle. Indeed, I think I would have preferred that static predicates were
> only allowed on scalar types, in which case this discussion would not have
> come up.

I don't think that many users are expecting predicates on composite types, and
the one example in the rationale (actually a dynamic predicate) shows that it
can have dodgy consequences.

****************************************************************

From: Erhard Ploedereder
Sent: Friday, March 16, 2012 11:47 AM

I am abstaining, even though I am leaning towards the "Recheck everything" position.  My reasons are

- it is indeed very late to start fixing a complicated area

- I feel deeply uncomfortable about a model that allows false
  predicates to go unchecked at interface boundaries (which is
  the case today even without "Ignore"). The term "as described
  above" unfortunately makes the "Recheck everything" a rather
  relative "everything", which I'd rather not discuss again at this
  late stage. If it meant: "Check all predicates for all types and
  parameter modes (and permit optimizing checks away even if the
  predicates have side-effects)" then my vote is for "Recheck
  Everything".

****************************************************************

From: Tullio Vardanega
Sent: Friday, March 16, 2012  12:12 PM

I abstain from the vote options as proposed.

Other than that, I am leaning toward Erhard's "recheck everything"
in the less extensive interpretation of "everything" that he suggests.

****************************************************************

From: Jeff Cousins
Sent: Friday, March 16, 2012  12:54 PM

I think I have to abstain too, though I'd support any of Randy's ideas to limit
predicates to scalars.

****************************************************************

From: Bob Duff
Sent: Friday, March 16, 2012  2:12 PM

> [...]

I vote for:

         ___X____  Leave 3.2.4(22/3) unmodified.

> ...(We're not going to try to
> make Dynamic_Predicates "work"; it's impossible.)

I take the above to be irrelevant editorializing.  ;-) My vote neither agrees
nor disagrees with it.

The rationale for my vote is:  (1) same as Robert, I don't see any reason for
last-minute changes.  (2) I'm concerned that the proposed wording changes make a
difference between whether the compiler chose by-ref vs. by-copy.  I don't know
if that's true, but I don't want to study the issue (at the last minute!). If it
is true, I'm definitely opposed -- opposed to having compiler choices affect
whether certain parameters are checked.

I will not carefully read the rest of this discussion (until we consider it
again as an Ada 2012 AI).

I'm puzzled by the two "abstain" votes I noticed. If you don't understand the
issues, or don't want to think about them at this late date, I'd think you'd
vote for "leave unmodified".

****************************************************************

From: Geert Bosch
Sent: Friday, March 16, 2012  2:47 PM

>         ____X___  Leave 3.2.4(22/3) unmodified. (We're not going to
> try to make Dynamic_Predicates "work"; it's impossible.)

It's too late to consider anything else.

****************************************************************

From: Brad Moore
Sent: Friday, March 16, 2012  3:20 PM

> There seem to be three possibilities; please
> vote for one and explain if you want:
>
>     _____X___  Leave 3.2.4(22/3) unmodified. (We're not going to try
> to make Dynamic_Predicates "work"; it's impossible.)
>

I agree with others that it's too late to perfect this now. Fixing it for Ada
2020 is a better choice, as I think we will need some time to come up with a
solution that will provide an acceptable level of confidence going forward.

****************************************************************

From: Randy Brukardt
Sent: Friday, March 16, 2012  6:34 PM

> I am abstaining, even though I am leaning towards the "Recheck
> everything" position.  My reasons are
>
> - it is indeed very late to start fixing a complicated area
>
> - I feel deeply uncomfortable about a model that allows false
>   predicates to go unchecked at interface boundaries (which is
>   the case today even without "Ignore"). The term "as described
>   above" unfortunately makes the "Recheck everything" a rather
>   relative "everything", which I'd rather not discuss again at this
>   late stage.

I'm not sure that you understood my point with "as described above"; there was a
specific wording proposal given in my original message, and I was referring to
that specifically when I said "as described above".

That wording explicitly excluded by-copy in out and out parameters from the
extra check; that's simply because they are checked by the subtype conversion
implicit in the copy-back. Having rules that mandate *two* checks for a single
parameter at a single point would be completely insane.
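
[A minimal hypothetical sketch of the copy-back check being described
(names are invented for illustration):

   package Example is
      subtype Even is Integer
        with Dynamic_Predicate => Even mod 2 = 0;

      procedure Double (X : in out Even);
      --  For a by-copy actual, the copy-back on return performs a subtype
      --  conversion to Even, which already checks the predicate (raising
      --  Assertion_Error if it fails); a mandated extra recheck would
      --  therefore test the same parameter twice at the same point.
   end Example;

- Editor]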

But the effect would be that every actual parameter would be rechecked after a
call.

(Note a problem with that wording: not all "in" parameters are even objects that
can be checked after the call -- if they are just values, they have no subtype
and trying to talk about it is dodgy.)

> If it meant: "Check all predicates for all types and
>   parameter modes (and permit optimizing checks away even if the
>   predicates have side-effects)" then my vote is for "Recheck
>   Everything".

Yes, that's the effect. The parenthetical remark is always true anyway, because
we already have a rule to that effect (11.4.2(26/3)). Of course, it will rarely
be the case that one could apply that rule to a dynamic predicate (just as for
invariants), because the compiler would have to be able to prove that the value
of the expression did not change on a re-evaluation. That's impossible in
general; only dynamic predicates that don't contain any globals (including any
dereferences to storage pools) and no functions whose implementation is unknown
could possibly work, and those will be fairly rare.
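
[A hypothetical sketch of the kind of predicate that defeats such
optimization, because it references a global (names are invented):

   package Limits is
      Current_Max : Natural := 100;  --  global state

      subtype Bounded is Natural
        with Dynamic_Predicate => Bounded <= Current_Max;
      --  Current_Max can change between two evaluations, so the compiler
      --  cannot prove that re-evaluating the predicate is redundant and
      --  11.4.2(26/3) cannot be applied to elide the recheck.
   end Limits;

- Editor]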

In any case, there is no model of dynamic predicates that will ever make them
"safe", for any definition of "safe". The only question is how far to go.

For static predicates, of course they can be optimized; and no permissions are
needed to be able to do it (they can't have any side-effects or globals).

Anyway, it looks like this should be an AI12, because it's pretty obvious that
there is neither consensus for a change nor any critical urgency. (Mandating
additional checks would only be incompatible in programs that were already
broken, so it is hard to get very concerned.)

****************************************************************

From: Erhard Ploedereder
Sent: Saturday, March 17, 2012  9:26 PM

> I'm puzzled by the two "abstain" votes I noticed.
> If you don't understand the issues, or don't want to think about them
> at this late date, I'd think you'd vote for "leave unmodified".

Very easy. Voting to leave it unmodified would say that I "sort of agree" with
the position, which I definitely do not (but the vote did not have a "none of
the above" option).

****************************************************************

From: Bob Duff
Sent: Sunday, March 18, 2012  8:08 AM

OK, fair enough.  My vote for "leave unmodified" means "leave unmodified for
now".  It doesn't mean I agree (not even "sort of") with what we've got.

****************************************************************

Questions? Ask the ACAA Technical Agent