Version 1.1 of ai05s/ai05-0085-1.txt


!standard 11.4.2(7/2)          08-01-21 AI05-0085-1/01
!class binding interpretation 08-01-21
!status work item 08-01-21
!status received 08-01-10
!priority Low
!difficulty Easy
!qualifier Clarification
!subject Changing Assertion_Policy in a smaller scope
!summary
** TBD **
!question
11.4.2(7/2) says that Assertion_Policy is a configuration pragma. Is that really the intent? It would be useful to change policies within a single subprogram or other scope, much like pragma Suppress.
!recommendation
(See Summary.)
!wording
** TBD **
!discussion
It really was the intent that Assertion_Policy apply to an entire unit. It was noted that a single Assert pragma can be removed with the comment symbol, so the need to change the policy in a restricted scope is less pressing than for pragma Suppress. Thus the simple solution of a configuration pragma was adopted.
However, this argument is somewhat simplistic. It would be difficult to turn off checking in half of a unit, for instance. (Although it should be noted that pragma Suppress doesn't support that, either.)
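To illustrate, if Assertion_Policy had the scoping rules of pragma Suppress, one
might write something like the following sketch (not legal Ada 2005, since the
pragma is a configuration pragma; the names are purely illustrative):

    procedure Hot_Loop is
       pragma Assertion_Policy (Ignore);  -- sketch only: not legal here today
    begin
       pragma Assert (Buffer_Valid);      -- would not be checked in this scope
       ...
    end Hot_Loop;

Today, the equivalent effect requires commenting out the individual Assert
pragma, or compiling the whole unit under a different policy.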
So (conclusion TBD).
--!corrigendum 13.9(7)
!ACATS Test
** TBD **
!appendix

From: Robert A. Duff
Sent: Thursday, January 10, 2007  2:01 PM

Assertion_Policy is a configuration pragma -- 11.4.2(7/2).
Is that really the intent?  It would be useful to
change policies within a single subprogram, or whatever.
Just like pragma Suppress.

****************************************************************

From: Tucker Taft
Sent: Thursday, January 10, 2007  2:40 PM

Yes, that was really the intent.  I suppose you
might turn them on and off in a single subprogram
like pragma Suppress, but I don't remember
ever considering that.  Doing it at less than
a whole-program level seems like it
will be rare, and doing it at a sub-file level
apparently didn't survive the cost/benefit cutoff.
Seems pretty straightforward to do it, but
unlike pragma Suppress, a comment indicator
can be placed in front of a pragma Assert to
achieve the effect of suppressing the check.

It would be admittedly more complex to suppress
preconditions some of the time, but the implementation
model seems to be that the checks are generally
performed inside the called subprogram, so it
is not clear what it would mean to have different
Assertion_Policies at different call sites of
the same subprogram.  That is a question that
needs to be settled even with Assertion_Policy
as a configuration pragma, if we add Pre_Assert
and friends.

****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2007  2:40 PM

> Assertion_Policy is a configuration pragma -- 11.4.2(7/2).
> Is that really the intent?

Of course it is, because that's what the standard (and associated AI) say.
;-)

> It would be useful to
> change policies within a single subprogram, or whatever.
> Just like pragma Suppress.

Maybe, but that seems like a rather dubious need.

For a single subprogram, "--" works well and requires no new language
features. It's not at all clear why you would want to skip assertions on one
subprogram without wanting to remove the assertions completely. (That's what
I do in most cases: insert debugging code; once the debugging is finished,
if the runtime cost of the code is too high to leave permanently in the
debugging image I just comment it out - leaving the code around in case a
similar problem pops up in the future.)

For single packages, configuration pragmas work fine - they don't have to be
partition-wide, they're "compilation"-wide. So you can turn off assertions
in one subsystem and leave them on in another.

I agree that you can't turn off assertions in half of a package and leave
them on in the other half, but is that an important need? Sounds a bit weird
to me.

****************************************************************

From: Robert A. Duff
Sent: Thursday, January 10, 2007  4:13 PM

> It would be admittedly more complex to suppress
> preconditions some of the time, but the implementation
> model seems to be that the checks are generally
> performed inside the called subprogram, so it
> is not clear what it would mean to have different
> Assertion_Policies at different call sites of
> the same subprogram.

Actually, AdaCore is leaning toward checking preconditions at each call site,
when possible.  There are advantages:

    - More likely to be able to optimize away the check.  E.g. if the call site
      looks like "if X > 0 then P(X);", and the precondition is
      "X in Positive", it is possible to know statically that the check can't
      fail.

    - More likely to be able to prove things statically.  Same example.  The
      compiler could warn if it said "if X >= 0".
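In Ada terms, the call-site model sketched above would look something like this
(P and X are illustrative; assume P's proposed precondition is "X in Positive"):

    if X > 0 then
       P (X);   -- the precondition check provably cannot fail here,
                -- so it can be optimized away entirely
    end if;

    if X >= 0 then
       P (X);   -- X = 0 would violate the precondition;
                -- the compiler could warn statically
    end if;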

> For single packages, configuration pragmas work fine - they don't have to be
> partition-wide, they're "compilation"-wide. So you can turn off assertions
> in one subsystem and leave them on in another.

Actually, the example that triggered the question was subsystems.  There's no
way to apply the pragma to a package and all of its children, without repeating
the pragma all over.  (Or putting the entire subsystem in one file.  Yuck!)

If the pragma were allowed in a package spec, it would presumably apply to
children.
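For instance, under that interpretation one might write the following sketch
(the pragma is not actually allowed in this position today):

    package Parent is
       pragma Assertion_Policy (Check);  -- sketch: would apply to this spec...
    end Parent;

    package Parent.Child is              -- ...and presumably to children, too
       ...
    end Parent.Child;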

AdaCore is inventing a new pragma that is something like Assertion_Policy.
We wanted to allow it to apply to whole subsystems, or to individual
subprograms.  But it would be confusing if it had subtle differences with
Assertion_Policy.  No big deal, I admit.

****************************************************************

From: Robert Dewar
Sent: Thursday, January 10, 2007  4:19 PM

> Actually, AdaCore is leaning toward checking preconditions at each call site,
> when possible.  There are advantages:
...

One more reason, actually the most important from a customer point of 
view. With a restricted run time with no exception propagation, it is
much easier for testing if the precondition failure can be raised and
caught locally (which does not need exception propagation).

Anyway, it seems clearly useful to be able to turn off assertions for
some subprograms and turn them on for other subprograms.

****************************************************************

From: Robert A. Duff
Sent: Thursday, January 10, 2007  7:40 PM

> One more reason, actually the most important from a customer point of
> view. With a restricted run time with no exception propagation, it is
> much easier for testing if the precondition failure can be raised and
> caught locally (which does not need exception propagation).

Right.  I meant to mention that.

A little more detail: GNAT has a way of saying that exceptions are handled
locally -- by a statically enclosing handler.  So you can say:

    begin
        ... raise Cain;
    exception
        when Cain =>
            ...  -- handled locally
    end;
but you can't propagate them out of a procedure.  I won't go into the details
about "what if you try?".  The point is, it's nice to be able to use this mode
with preconditions, which is only possible if preconditions are checked at the
call site.
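A sketch of the intended usage, assuming preconditions are checked at the call
site and fail by raising Assertion_Error (the names are illustrative):

    begin
       P (X);  -- precondition checked here, at the call site
    exception
       when Ada.Assertions.Assertion_Error =>
          ...  -- handled by a statically enclosing handler;
               -- no exception propagation out of a subprogram is needed
    end;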

****************************************************************

From: Robert Dewar
Sent: Thursday, January 10, 2007  4:48 PM

>> Assertion_Policy is a configuration pragma -- 11.4.2(7/2).
>> Is that really the intent?

Now another question, unless stated otherwise configuration
pragmas are required to be consistent across a partition, isn't
that right?

Surely we do not need to insist on partition consistency for
this pragma? That would be a huge pain.

****************************************************************

From: Tucker Taft
Sent: Thursday, January 10, 2007  5:04 PM

> Now another question, unless stated otherwise configuration
> pragmas are required to be consistent across a partition, isn't
> that right?

No, you are confusing the restrictions with configuration
pragmas.  And in any case, the description of the
Assertion_Policy configuration pragma explicitly mentions
the policy differing from one compilation unit to the next.
> 
> Surely we do not need to insist on partition consistency for
> this pragma? That would be a huge pain.

No one ever proposed that.

****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2007  5:13 PM

> Now another question, unless stated otherwise configuration
> pragmas are required to be consistent across a partition, isn't
> that right?

No, that's not right. A configuration pragma given on a "compilation"
applies only to that compilation, not the entire partition. An individual
pragma can require consistency, but it is not required for configuration
pragmas in general.

> Surely we do not need to insist on partition consistency for
> this pragma? That would be a huge pain.

No, there is no such requirement. Other pragmas (such as pragma
Restrictions) have this as a requirement, but that is by a separate rule. (A
misguided one, IMHO.)

****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2007  5:30 PM


>> Now another question, unless stated otherwise configuration
>> pragmas are required to be consistent across a partition, isn't
>> that right?
> 
> No, you are confusing the restrictions with configuration
> pragmas. 

Well it says "they are generally used to select a partition-wide
or system-wide option", and at least some, like Normalize_Scalars,
do insist on consistency.

>> Surely we do not need to insist on partition consistency for
>> this pragma? That would be a huge pain.
> 
> No one ever proposed that.

OK, sorry for noise
 
****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2007  4:31 PM

> Actually, the example that triggered the question was subsystems. There's no
> way to apply the pragma to a package and all of its children, without repeating
> the pragma all over.  (Or putting the entire subsystem in one file.  Yuck!)
>
> If the pragma were allowed in a package spec, it would presumably apply to
> children.

That's true. But I'm not sure it is really relevant. "Configuration pragma"
is Ada-standard code-words for "compiler-option" -- I don't think there is
any real intent that anyone would actually write these as pragmas. The
standard doesn't talk about how "options" are specified to the compiler, so
"configuration pragmas" were invented to fulfill that need; but the intent is
that those settings are handled by the project tools or compiler options or
something like that.

Thus, applying a configuration pragma to a set of units is simply something
that gets done via the project management tools. It's just changing one
checkbox in Janus/Ada (or, at least it would be if I ever got that GUI
project manager interface tool written).

> AdaCore is inventing a new pragma that is something like Assertion_Policy.
> We wanted to allow it to apply to whole subsystems, or to individual
> subprograms.  But it would be confusing if it had subtle differences with
> Assertion_Policy.  No big deal, I admit.

Why a new pragma? Assertion_Policy allows implementation-defined policies,
does it not? (Yes, it does, see 11.4.2(9/2)). The original intent was
(before the precondition stuff got dropped) that it would apply to them,
too. I could imagine wanting to be able to turn off preconditions and leave
assertions on or something like that -- and that is what the
implementation-defined policies are for.
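For example, an implementation could accept a policy identifier of its own
(the identifier here is purely hypothetical):

    pragma Assertion_Policy (Asserts_Only);
    -- hypothetical implementation-defined policy: check Assert pragmas
    -- but ignore precondition-style checks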

****************************************************************

From: Tucker Taft
Sent: Thursday, January 10, 2007  5:02 PM

> Actually, AdaCore is leaning toward checking preconditions at each call site,
> when possible.  There are advantages:
...
I went back and reread AI-288, and it talks about this
whole issue pretty extensively.  It ultimately suggests
generating two bodies, one that includes the precondition
checks and one that doesn't.  The one that includes the
precondition checks would at a minimum be used for
indirect calls.  It might also be used for dispatching
calls, though one could argue that if the call doesn't
satisfy the classwide_pre_assert given on the named
subprogram, then it should fail, and that can be
checked at the call site.

This makes me realize that if a dispatching
operation doesn't have a classwide_pre_assert,
then it should be illegal for any overriding to have
either a pre_assert or a classwide_pre_assert, since
essentially the overriding is already inheriting
"True" as its classwide_pre_assert, and you can't
further weaken "True."

By the way, another reason to automatically convert
parameters in a classwide_pre_assert to the classwide
type is that if the type is abstract, non-dispatching
calls would generally be illegal.
 

> Actually, the example that triggered the question was subsystems.  There's no
> way to apply the pragma to a package and all of its children, without repeating
> the pragma all over.  (Or putting the entire subsystem in one file.  Yuck!)
> 
> If the pragma were allowed in a package spec, it would presumably apply to
> children.

Alternatively, you could have an implementation-defined configuration
pragma or an implementation-defined assertion policy that caused
the policy to apply to all descendants of the current
compilation unit.  Keeping the pragmas at the file level
seems desirable for various reasons.

****************************************************************

From: Robert Dewar
Sent: Thursday, January 10, 2007  4:50 PM

> That's true. But I'm not sure it is really relevant. "Configuration pragma"
> is Ada-standard code-words for "compiler-option" -- I don't think there is
> any real intent that anyone would actually write these as pragmas. The
> standard doesn't talk about how "options" are specified to the compiler, so
> "configuration pragmas" were invented to fulfill that need; but the intent is
> that those settings are handled by the project tools or compiler options or
> something like that.

Of course people will write them as actual pragmas, that's a really odd
statement I think ...

> Why a new pragma? Assertion_Policy allows implementation-defined policies,
> does it not? (Yes, it does, see 11.4.2(9/2)). The original intent was
> (before the precondition stuff got dropped) that it would apply to them,
> too. I could imagine wanting to be able to turn off preconditions and leave
> assertions on or something like that -- and that is what the
> implementation-defined policies are for.

Our new version allows an identifier to specify a group of
assertions

    pragma Check (Critical_Error, x > 5);

    pragma Check_Policy (Critical_Error, On);

****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2007  5:21 PM

...
> Of course people will write them as actual pragmas, that's a really odd
> statement I think ...

You *could*, but I think *that* is an odd thing to do. That's because
putting your debugging settings (such as how pragma Asserts are handled)
into your source code substantially increases the number of changes to your
source code and thus complicates management of real changes to the code.

For instance, we have a number of development profiles for our compiler
("full debugging", "normally instrumented", and "release"), and it is not
unusual to build and test several of these in parallel. (The "normally
instrumented" version is used for most testing, but of course the "release"
version has much less debugging and that also has to be tested.) If the
source code included the settings for debugging (especially in the case of
configuration pragmas, which have to go into each individual unit), it would
be much harder to automate creating those versions, especially as
other development may be going on at the same time.

It probably makes sense to write other configuration pragmas explicitly, but
for those that change depending on your development (surely
Assertion_Policy, probably also Suppress used as a configuration pragma), it
doesn't make much sense to write them explicitly. (Suppress also can be used
as a regular pragma, and I am definitely not talking about that usage.)

****************************************************************

From: Robert Dewar
Sent: Thursday, January 10, 2007  5:35 PM

>> Of course people will write them as actual pragmas, that's a really odd
>> statement I think ...
> 
> You *could*, but I think *that* is an odd thing to do. That's because
> putting your debugging settings (such as how pragma Asserts are handled)
> into your source code substantially increases the number of changes to your
> source code and thus complicates management of real changes to the code.

These are not debugging settings necessarily, they may be essential
parts of the build, for instance some of our customers plan to deliver
safety-critical certified code with preconditions enabled, but
postconditions disabled.

> It probably makes sense to write other configuration pragmas explicitly, but
> for those that change depending on your development (surely
> Assertion_Policy, probably also Suppress used as a configuration pragma), it
> doesn't make much sense to write them explicitly. (Suppress also can be used
> as a regular pragma, and I am definitely not talking about that usage.)

It makes perfect sense, in our environment, the project file selects the
appropriate files of configuration pragmas for the desired build.

****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2007  5:48 PM

> These are not debugging settings necessarily, they may be essential
> parts of the build, for instance some of our customers plan to deliver
> safety-critical certified code with preconditions enabled, but
> postconditions disabled.

True, but that is what I meant by "release" profile; we don't turn all
checking and assertions off, just those which have a substantial performance
drag. And there also is a matching "normal" profile in which all of the
checks and assertions are on. These need different settings.

> > It probably makes sense to write other configuration pragmas explicitly, but
> > for those that change depending on your development (surely
> > Assertion_Policy, probably also Suppress used as a configuration pragma), it
> > doesn't make much sense to write them explicitly. (Suppress also can be used
> > as a regular pragma, and I am definitely not talking about that usage.)
>
> It makes perfect sense, in our environment, the project file selects the
> appropriate files of configuration pragmas for the desired build.

That works for partition-wide options, but it doesn't work very well for
settings that apply only to a single file or set of files. Such as
Assertion_Policy and Suppress (as a configuration pragma). You would need
two copies of every such file, and keeping them in sync would be a massive
pain. I think it is better to treat those as compiler options (presumably
managed by the project file).

****************************************************************

From: Robert Dewar
Sent: Thursday, January 10, 2007  5:52 PM

Well there is no such massive pain, or requirement for two copies of
every file, or anything like that in our project file environment,
so perhaps you are just projecting your implementation notions
to other compilers.

****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2007  6:11 PM

Huh? How do you write a "file of configuration pragmas" that only apply to
particular files (and not others) of a partition? The language-defined
configuration pragmas only apply to a particular file if they are compiled
as the start of that file. If they're compiled in a separate file, they
apply to the entire partition.

I suppose you could define a bunch of implementation-defined configuration
pragmas that specify which file that they apply to, or some such other
implementation-defined meaning. But such source code is not really Ada
source code at all, and it surely is not portable in any case. It is
effectively the same as what I was talking about (describing it as part of
the project file), as it is something whose definition depends on the
implementer's toolkit, not in any way on the standard language.

****************************************************************

From: Robert Dewar
Sent: Thursday, January 10, 2007  6:18 PM

> Huh? How do you write a "file of configuration pragmas" that only apply to
> particular files (and not others) of a partition? The language-defined
> configuration pragmas only apply to a particular file if they are compiled
> as the start of that file. If they're compiled in a separate file, they
> apply to the entire partition.

You are making the mistake of assuming that compilation unit = file.
The notion of file does not appear in the RM in conjunction with
source representation. The rule is not that configuration pragmas
only apply to a particular *file* if they are compiled at the
start of the *file*. That can be true, but if true is merely
an artifact of your implementation.

The actual rule is

> 8     Certain pragmas are defined to be configuration pragmas; they shall
> appear before the first compilation_unit of a compilation. They are generally
> used to select a partition-wide or system-wide option. The pragma applies to
> all compilation_units appearing in the compilation, unless there are none, in
> which case it applies to all future compilation_units compiled into the same
> environment.

You will not find the word file in this paragraph! How compilation units
are represented (as one or more files or whatever) is not addressed by
the RM.

****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2007  7:10 PM

> You are making the mistake of assuming that compilation unit = file.
> The notion of file does not appear in the RM in conjunction with
> source representation. The rule is not that configuration pragmas
> only apply to a particular *file* if they are compiled at the
> start of the *file*. That can be true, but if true is merely
> an artifact of your implementation.

No, I'm not. I've been using "file" as a shorthand for "compilation" (not
"compilation unit"!!). I suppose I shouldn't be so careless on this list
because of the potential for misunderstanding.

...
> You will not find the word file in this paragraph! How compilation units
> are represented (as one or more files or whatever) is not addressed by
> the RM.

"compilation" is in the syntax font. That means it is a syntactic entity.
We're not talking about the nebulous idea of "compilation" as to what is
given to the compiler, but rather the specific syntax of that entity. It
cannot be made out of air; it comes from a file somewhere. So I think what
you are saying is that Gnat has some implementation-defined way to make a
"compilation" out of pieces of multiple files. I guess that meets the letter
of the standard.

It surely is something implementation-defined, and in that sense, it still
is exactly what I was originally talking about. (The syntax used to
represent the project-specific instructions is irrelevant.) In the closest
thing we have to a standard source representation (the ACATS files), a
"compilation" = a "file". That's pretty much the only way to represent
*portable* code (every implementation has some way to process the ACATS). If
the configuration pragmas are in a separate file, they have to be
reconnected with the rest of the source to make a portable "compilation".
(One presumes other implementations have other ways to handle this issue.)
Thus, separating the configuration pragmas in this way isn't portable --
it's just another implementation-defined trick. (A very clever one at
that...)

****************************************************************

From: Robert Dewar
Sent: Thursday, January 10, 2007  9:13 PM

> No, I'm not. I've been using "file" as a shorthand for "compilation" (not
> "compilation unit"!!). I suppose I shouldn't be so careless on this list
> because of the potential for misunderstanding.

So of course the project mechanism can add configuration pragmas at
the start of designated files to form compilation units.

> "compilation" is in the syntax font. That means it is a syntactic entity.
> We're not talking about the nebulous idea of "compilation" as to what is
> given to the compiler, but rather the specific syntax of that entity. It
> cannot be made out of air; it comes from a file somewhere. So I think what
> you are saying is that Gnat has some implementation-defined way to make a
> "compilation" out of pieces of multiple files. I guess that meets the letter
> of the standard.

Yup!

****************************************************************

From: Robert Dewar
Sent: Thursday, January 10, 2007  9:17 PM

> So of course the project mechanism can add configuration pragmas at
> the start of designated files to form compilation units.

oops that should be compilations, not compilation units.

****************************************************************

From: Robert Dewar
Sent: Thursday, January 10, 2007  9:32 PM

> "compilation" is in the syntax font. That means it is a syntactic entity.
> We're not talking about the nebulous idea of "compilation" as to what is
> given to the compiler, but rather the specific syntax of that entity. It
> cannot be made out of air; it comes from a file somewhere.

No, it does not need to come from a file, for instance the input could
be a tree prepared by the editor and in no sense be a text file.

The business about it being a syntactic entity is
irrelevant; the standard has NOTHING to say about representation. It
might (see for example the Algol-68 representation sub-standard), but
it does not. In practice the ACATS tests provide some kind of 
commonality, but a compiler may have to do arbitrary preprocessing
on the ACATS tests as distributed before they can be fed to the
compiler.

> So I think what
> you are saying is that Gnat has some implementation-defined way to make a
> "compilation" out of pieces of multiple files. I guess that meets the letter
> of the standard.

Right, which is just fine, in fact one of our planned implementation
features is to allow the private part to be stored in a separate file,
the private with is quite useful for that, so it is something we will
do sometime, and the private withs will get stored with the file
containing the private part.

> It surely is something implementation-defined, and in that sense, it still
> is exactly what I was originally talking about. 

Every compiler has to do something entirely implementation-defined when
it comes to how to represent the source, since, once again, the standard
has nothing to say about concrete representation of the source.

You may think that somehow the ACATS tests are in some normative form,
but there is nothing in the RM that confirms that view, and things like
the use of UTF-8 or brackets notation for wide characters are completely
outside the standard, as is the use of CR/LF (or whatever) to physically
end lines (in a real compiler on VMS or OS/360, lines might be ended by
an entirely different means).

****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2007  11:39 PM

...
> > It surely is something implementation-defined, and in that sense, it still
> > is exactly what I was originally talking about.
>
> Every compiler has to do something entirely implementation-defined when
> it comes to how to represent the source, since, once again, the standard
> has nothing to say about concrete representation of the source.

Right, but that is exactly the same as saying that there is no requirement
that code can be transported from one implementation to another. That would
be great for vendor lock-in, but would pretty much eliminate the value of
the Ada standard.

As a practical matter, all compilers have to be able to process the ACATS,
so there is surely a portable input form. What worries me, when people
start doing clever source representations, is that there is no similar
pressure for the reverse conversion. (I know that the AARM claims that such
a reverse conversion is required, but I can find no practical reason to
assume that is the case.) Gnatchop, to take an example, is a one-way
conversion; it's not possible to get back to the original source code from
it. That's harmless in the case of Gnatchop, but it is very easy to imagine
a system where getting back is very hard.

> You may think that somehow the ACATS tests are in some normative form,
> but there is nothing in the RM that confirms that view, and things like
> the use of UTF-8 or brackets notation for wide characters are completely
> outside the standard, as is the use of CR/LF (or whatever) to physically
> end lines (in a real compiler on VMS or OS/360, lines might be ended by
> an entirely different means).

Yes, I've heard this said many times, and I understand the reasoning behind
it. But it also has the potential to be a great drag on the portability of
Ada code. The standard goes to great lengths to define exactly what every
character in an Ada source file means, and then essentially says that you
can ignore all of that. I find that silly; I think it would have been far
better to codify the actual practice (an implementation has to be able to
process the ACATS) and then also note that other source representations are
allowed. But I suppose it is far too late for such a discussion.

> Right, which is just fine, in fact one of our planned implementation
> features is to allow the private part to be stored in a separate file,
> the private with is quite useful for that, so it is something we will
> do sometime, and the private withs will get stored with the file
> containing the private part.

This worries me, not because it violates the standard (as you have pointed
out, it does not), but because the path back to "portable" Ada code (that
is, literally following the standard with the source representation) is not
obvious. With pieces of the Ada in a number of files, the "real" Ada code is
not trivial to reconstruct. (And your example is just the tip of the
iceberg; it may make sense to be able to do that on a subprogram basis in
a sufficiently smart system.) Hopefully, you'll provide a tool to
reconstruct the "real" Ada code, so that code portability is maintained.

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  6:39 AM

...
>> Every compiler has to do something entirely implementation-defined when
>> it comes to how to represent the source, since, once again, the standard
>> has nothing to say about concrete representation of the source.
> 
> Right, but that is exactly the same as saying that there is no requirement
> that code can be transported from one implementation to another. That would
> be great for vendor lock-in, but would pretty much eliminate the value of
> the Ada standard.

It is just fine to let the market place handle that concern, and if you
find it a concern still, you should do what Algol-68 did, and specify
a formal canonical form.

> As a practical matter, all compilers have to be able to process the ACATS,
> so there is surely a portable input form.

Indeed

> What worries me, when people
> start doing clever source representations, is that there is no similar
> pressure for the reverse conversion. (I know that the AARM claims that such
> a reverse conversion is required, but I can find no practical reason to
> assume that is the case.) 

I think it is fine to let the marketplace handle this; we have plenty
of customers maintaining common code bases between different vendors,
so we certainly could not stand in the way of that. It would be bad
for business.

> Gnatchop, to take an example, is a one-way
> conversion;

It's just a utility at this stage; it is non-essential in either
direction, just a convenience if you want to put sources in the
preferred one-compilation-unit-per-file form (GNAT does not require
one unit per file).

> Yes, I've heard this said many times, and I understand the reasoning behind
> it. But it also has the potential to be a great drag on the portability of
> Ada code. 

Only language lawyers would worry about that, and if they worry,
they should define a canonical form.

> The standard goes to great lengths to define exactly what every
> character in an Ada source file means, and then essentially says that you
> can ignore all of that. I find that silly; I think it would have been far
> better to codify the actual practice (an implementation has to be able to
> process the ACATS) and then also note that other source representations are
> allowed. But I suppose it is far too late for such a discussion.

Fine, but obviously the ARG and WG9 did not agree with this revisionist
position. For the record, I think it would have been a mistake, because

a) it wouldn't have any real effect

b) if it did have the effect of discouraging useful stuff, that would be 
bad.

> This worries me, not because it violates the standard (as you have pointed
> out, it does not), but because the path back to "portable" Ada code (that
> is, literally following the standard with the source representation) is not
> obvious. With pieces of the Ada in a number of files, the "real" Ada code is
> not trivial to reconstruct. (And your example is just the tip of the
> iceberg; it may make sense to be able to do that on a subprogram basis in
> a sufficiently smart system.) Hopefully, you'll provide a tool to
> reconstruct the "real" Ada code, so that code portability is maintained.

Whether we provide such a tool will depend on our customers and other
users, not on Randy's "worries". I see two scenarios.

People who use only GNAT and are not interested in portability (and
certainly not interested in portability to non-free non-open source
alternatives) use this feature and don't care about portability.

People use this who are interested in such portability, and we provide
the (perfectly trivial) tool that assembles sources into canonical form.

In the latter case, the tool gets provided.
Probably it gets provided anyway, since it is trivial.

There certainly have been cases in the past where back conversion
is not easy. There is at least one proprietary compiler that allows
assembling systems and subsystems in which compilation unit names
are duplicated, which is quite hard to deal with.

As for the GNAT way of handling configuration pragmas:

a) this has been very practical and useful

b) in over a decade of use, no one has ever had any difficulties
with this approach.

Believe me, our users feel very free to suggest enhancements;
we currently have 909 filed (and that's after I dealt with over
50 of them over the last holidays :-)), and no one ever bothered
about this.

After all a certain RB suggests that configuration pragmas don't
really exist in practice and are replaced by (presumably highly
non-portable) compiler switches, which I find MUCH worse.

****************************************************************

From: Randy Brukardt
Sent: Friday, January 11, 2007  12:56 PM

...
> Only language lawyers would worry about that, and if they worry,
> they should define a canonical form.

I do think we need a canonical form. If for no other reason than it would
eliminate a lot of these silly arguments...

...
> Fine, but obviously the ARG and WG9 did not agree with this revisionist
> position. For the record, I think it would have been a mistake, because
>
> a) it wouldn't have any real effect
>
> b) if it did have the effect of discouraging useful stuff, that would be
> bad.

I would hope the case is (a), in that the market already pushes vendors to
support portability.

...
> Whether we provide such a tool will depend on our customers and other
> users, not on Randy's "worries". I see two scenarios.

Of course.

> People who use only GNAT and are not interested in portability (and
> certainly not interested in portability to non-free non-open source
> alternatives) use this feature and don't care about portability.

If everyone is in this category, Ada is essentially dead. I surely hope we
aren't getting to that point.

> People use this who are interested in such portability, and we provide
> the (perfectly trivial) tool that assembles sources into canonical form.
>
> In the latter case, the tool gets provided.
> Probably it gets provided anyway, since it is trivial.

Well, I encourage you to provide the tool. Beyond that, you surely are going
to serve your customers - you'd be foolish to do anything else.

> There certainly have been cases in the past where back conversion
> is not easy. There is at least one proprietary compiler that allows
> assembling systems and subsystems in which compilation unit names
> are duplicated, which is quite hard to deal with.

That is even less in the spirit of Ada, and it surely is bad too. But just
because someone else committed a crime doesn't mean you should go out and do
so too. (Humm, "crime" in this analogy could be taken as a value statement
on this GNAT feature, which is not my intent. Can't think of another word
that makes the analogy work, though.)

...
> After all a certain RB suggests that configuration pragmas don't
> really exist in practice and are replaced by (presumably highly
> non-portable) compiler switches, which I find MUCH worse.

Fair enough. I think that what you are doing is essentially writing
implementation-defined compiler options in the syntax of configuration
pragmas. The resulting "code" won't work on another implementation anyway
(unless they adopt your project management files, which I would think is
highly unlikely). Thus, I find it pretty much irrelevant what the syntax of
those options are in your project files. We use Ada-like syntax for various
things, but I wouldn't call that Ada code.

At this point, I think we should just agree to disagree on this point,
because in the absence of a new insight, I don't think there is any further
value in discussing it.

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  1:58 PM

> I do think we need a canonical form. If for no other reason than it would
> eliminate a lot of these silly arguments...

I agree it's a silly argument, but it seems to be you that started it!

> I would hope the case is (a), in that the market already pushes vendors to
> support portability.

so why worry?

>> People who use only GNAT and are not interested in portability (and
>> certainly not interested in portability to non-free non-open source
>> alternatives) use this feature and don't care about portability.
> 
> If everyone is in this category, Ada is essentially dead. I surely hope we
> aren't getting to that point.

Not everyone is in that category, and that won't be the case for the
foreseeable future, but *some* have been in that category for over
a decade.

> Well, I encourage you to provide the tool. Beyond that, you surely are going
> to serve your customers - you'd be foolish to do anything else.

Er, Randy, we let our customers tell us what to do and what not to do;
I am afraid your encouragement has little effect!

> That is even less in the spirit of Ada, and it surely is bad too. But just
> because someone else committed a crime doesn't mean you should go out and do
> so too. (Humm, "crime" in this analogy could be taken as a value statement
> on this GNAT feature, which is not my intent. Can't think of another word
> that makes the analogy work, though.)

Indeed, if you think this GNAT feature is a crime, your opinion is coming
from so far out in left field that we can't take it seriously.

> Fair enough. I think that what you are doing is essentially writing
> implementation-defined compiler options in the syntax of configuration
> pragmas. The resulting "code" won't work on another implementation anyway
> (unless they adopt your project management files, which I would think is
> highly unlikely). Thus, I find it pretty much irrelevant what the syntax of
> those options are in your project files. We use Ada-like syntax for various
> things, but I wouldn't call that Ada code.
> 
> At this point, I think we should just agree to disagree on this point,
> because in the absence of a new insight, I don't think there is any further
> value in discussing it.

Sounds reasonable; it has nothing whatever to do with the issue at hand,
which is, in case you forgot, clearly explained in the subject line :-)

****************************************************************

From: Arnaud Charlet
Sent: Friday, January 11, 2007  2:19 AM

> Why a new pragma? Assertion_Policy allows implementation-defined policies,
> does it not? (Yes, it does, see 11.4.2(9/2)). The original intent was
> (before the precondition stuff got dropped) that it would apply to them,
> too. I could imagine wanting to be able to turn off preconditions and leave
> assertions on or something like that -- and that is what the
> implementation-defined policies are for.

Actually it does not allow that very easily: what if I want to enable
assertions and preconditions, but not postconditions? Do we need
to predefine all possible combinations of checks as an assertion policy,
rather than simply allow each of these "checks" to be enabled
separately?

Having pragma Assertion_Policy (I_Want_Assert_And_Preconditions_But_Not_Posconditions_And_Not_Mubo_Jumbo_And_Not_Mumble);
gets out of hand pretty rapidly, so the pragma Assertion_Policy does not fly.

Instead, having a pragma Check_Policy with two arguments allows for a
two-dimensional setting, without having to flatten the 2-D possibilities into
a single dimension by hand.

Also, we intend to add a pragma Check (Identifier, Condition) which allows
users to define their own categories, so again, Assertion_Policy cannot
handle this.
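For concreteness, a rough sketch of how such a pair of pragmas might be used together (Check and Check_Policy here follow the GNAT proposal described above; the exact names and argument forms are assumptions, not standard Ada):

```ada
--  Sketch only: the proposed GNAT-style pragmas, not standard Ada.
--  Each named category of checks is enabled or disabled independently.
pragma Check_Policy (Precondition, On);
pragma Check_Policy (Postcondition, Off);
pragma Check_Policy (Heavy_Debug, Off);   --  a user-defined category

procedure Process (X : Integer) is
begin
   --  A user-defined check: active only when its category is On,
   --  independently of pragma Assert and of the other categories.
   pragma Check (Heavy_Debug, X mod 2 = 0);
   null;  --  rest of the body
end Process;
```

The point is that the policy pragmas select along one dimension (the category name) and on/off along the other, so no flattened combination names are needed.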

****************************************************************

From: Randy Brukardt
Sent: Friday, January 11, 2007  12:40 PM

> Having pragma Assertion_Policy
> (I_Want_Assert_And_Preconditions_But_Not_Posconditions_And_Not_Mub
> o_Jumbo_And_Not_Mumble);
> gets out of hand pretty rapidly, so the pragma Assertion_Policy
> does not fly.

Well, I think it is just fine. I don't see any real need for very fine
control of these things; a couple of levels would be enough. Janus/Ada only
has the choice of "on" and "off" on a per-unit basis, and I think that the
number of times we wanted finer control can be counted on one hand. Nor can
I remember any customers asking for more control.

Maybe pre- and post- conditions are different somehow, but I rather doubt
it.

> Instead, having a pragma Check_Policy with two arguments allows for
> 2 dimensional setting, without having to flatten the 2d possibilities into
> a single dimension by hand.

I think this is overkill.

> Also, we intend to add a pragma Check (Identifier, Condition) which allows
> users to define their own categories, so again, Assertion_Policy cannot
> handle this.

That makes more sense, but again I think it is overkill. But you are welcome
to prove me wrong.

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  1:52 PM

> Maybe pre- and post- conditions are different somehow, but I rather doubt
> it.

Definitely different! For instance, we have customers who want to enable
pre- and postconditions during development, but for deployment they only
want to enable preconditions. In general, when distributing a library,
you might often want this combination (to check against junk calls,
but without checking for internal bugs in the library).
****************************************************************

From: Robert Dewar
Sent: Friday, January 10, 2007  1:54 PM

>> Also, we intend to add a pragma Check (Identifier, Condition) which allows
>> users to define their own categories, so again, Assertion_Policy cannot
>> handle this.
> 
> That makes more sense, but again I think it is overkill. But you are welcome
> to prove me wrong.

Well it is definitely useful as the underpinning for preconditions and
postconditions, and we have other cases of categories of assertions 
which we want to enable separately.

For instance, in general (never mind preconditions and postconditions)
you might definitely want to distinguish between assertions
you want as part of debugging and those you want to include in the
production code.

****************************************************************

From: Arnaud Charlet
Sent: Friday, January 11, 2007  1:54 PM

> Well, I think it is just fine. I don't see any real need for very fine
> control of these things; a couple of levels would be enough. Janus/Ada only
> has the choice of "on" and "off" on a per-unit basis, and I think that the
> number of times we wanted finer control can be counted on one hand. Nor can
> I remember any customers asking for more control.

We already have customers asking for such fine-grained control, so this is
not a theoretical or rhetorical discussion as far as we are concerned,
it's based on customer feedback and our own experience.

> I think this is overkill.

Again, trying to map all possible values of a two-dimensional matrix into a
one-dimensional set of values is both painful and unnatural, and does not
allow for any flexibility.

****************************************************************

From: Robert A. Duff
Sent: Friday, January 11, 2007  3:58 PM

> Again, trying to map all possible values of a two-dimensional matrix into a
> one-dimensional set of values is both painful and unnatural, and does not
> allow for any flexibility.

Enabling preconditions while disabling postconditions is definitely useful.
It corresponds to the case where a library is well tested, and you don't
want to pay the price of detecting bugs in it, but you do want to pay
the price of detecting bugs in clients.

The other way around is much less useful.  B. Meyer says it makes no sense at
all, but I don't think I'd go that far.

Anyway, this is a case where the more general solution is actually simpler.

Another scenario is where you want to disable the more expensive assertions,
which you determine either by guessing, or by doing profiling.

****************************************************************

From: Arnaud Charlet
Sent: Friday, January 11, 2007  4:07 PM

> Enabling preconditions while disabling postconditions is definitely useful.
> It corresponds to the case where a library is well tested, and you don't
> want to pay the price of detecting bugs in it, but you do want to pay
> the price of detecting bugs in clients.

Right, but then you already have all kinds of combinations:

none
assertions only
post only
pre only
assert+pre
assert+pre+post
pre+post

> Anyway, this is a case where the more general solution is actually simpler.

Right, there's indeed no technical complication here, and being able to
add user-defined kinds of assertions is also a nice capability, not supported
at all by the Assertion_Policy pragma by any means.

****************************************************************

From: Robert A. Duff
Sent: Friday, January 11, 2007  5:11 PM

> Right, but then you already have all kinds of combinations:
...

Meyer puts them all in order:

    no checks
    preconditions
    pre- and postconditions
    pre- and postconditions and invariants
    pre- and postconditions, invariants, and loop [in]variants (*)
    all of the above plus "check statements" (same as Ada's pragma Assert)

Meyer argues that no other combinations should be allowed.
He specifically says the "post only" case makes no sense.

I'm not advocating this!

Note that the above covers what Ada calls "language defined checks" --
e.g. "array" is just a generic class in Eiffel, and array indexing has a
precondition requiring the index to be in bounds.

(*) To me, loop invariants seem unnecessary -- they're just a pragma Assert that
happens to be in a loop.  A loop variant in Eiffel is an expression that must
decrease on each iteration, and must remain positive -- used to prove that
the loop will terminate.

****************************************************************

From: Randy Brukardt
Sent: Friday, January 11, 2007  4:16 PM

> Enabling preconditions while disabling postconditions is
> definitely useful.
> It corresponds to the case where a library is well tested, and you don't
> want to pay the price of detecting bugs in it, but you do want to pay
> the price of detecting bugs in clients.

Humm, that brings up an unrelated musing: if postconditions are mainly about
detecting bugs in the operation, why should they be in the specification? I
would think the spec should only include the information that is valuable to
the clients.

Just wondering.

****************************************************************

From: Robert A. Duff
Sent: Friday, January 11, 2007  5:03 PM

Because you want to do static proofs at the call site, and you want to be able
to assume that the postcondition is true after the call.  This is true whether
postconditions are checked statically or dynamically.  And the proofs at the
call site might be informal "in-the-programmer's-head" reasoning.

To put it another way, the postcondition characterizes "what does this thing
do?", and the caller needs to know about that.  E.g.:

    X : Integer := F(...);
    ... Blah / X ...

Suppose the postcondition for F is "F'Result /= 0".  That allows us to prove
that we're not violating the precondition for "/" (no divide by zero).
Or:

    Push (Stack, ...);
    Pop (Stack);

If the postcondition for Push promises "not Is_Empty(Stack)", and the
precondition for Pop requires "not Is_Empty(Stack)", we can know that the
precondition will not fail.  Interestingly, we don't need to know what
"Is_Empty" means to do this sort of symbolic reasoning.  (We might like
to know, or at least hope, that Is_Empty doesn't have side effects!)

Meyer's view (which not everyone shares) is that a failing precondition always
indicates a bug in the caller, whereas a failing postcondition always indicates
a bug in the callee.  Of course, it could indicate a bug in the
pre/postcondition itself!

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  5:35 PM

> Humm, that brings up an unrelated musing: if postconditions are mainly about
> detecting bugs in the operation, why should they be in the specification? I
> would think the spec should only include the information that is valuable to
> the clients.

Well, think of postconditions as a formalized description of the
function. Surely you don't think comments describing the function
of a subprogram don't belong in the specification? For example,
something like

     function Sqrt (Arg : Natural) return Natural;
     pragma Postcondition
       (Sqrt'Result ** 2 >= Arg
          and then
        (Arg = 0 or else (Sqrt'Result - 1) ** 2 < Arg));

This tells you EXACTLY what Sqrt does, but the issue of whether
you want to *check* it at run time is quite a different matter.

(hope I got the condition right!)

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, January 11, 2007  4:24 PM

> Enabling preconditions while disabling postconditions is definitely useful.
> It corresponds to the case where a library is well tested, and you don't
> want to pay the price of detecting bugs in it, but you do want to pay
> the price of detecting bugs in clients.
 
But then, if you think that checking the pre-condition is part of the 
normal behaviour and required in any case, why express it with a 
pre-condition rather than a good ol' explicit test?

You do need special machinery for post-conditions (because of 'Old), but 
not for pre-conditions.

****************************************************************

From: Robert A. Duff
Sent: Friday, January 11, 2007  5:13 PM

> But then, if you think that checking the pre-condition is part of the normal
> behaviour and required in any case, why express it with a pre-condition rather
> than a good ol' explicit test?

I think the key point of preconditions is that they are part of the spec,
and therefore visible to the caller.  Otherwise, they're no better than
a "pragma Assert" that happens to be at the start of the procedure.

****************************************************************

From: Robert Dewar
Sent: Friday, January 10, 2007  5:38 PM

> But then, if you think that checking the pre-condition is part of the 
> normal behaviour and required in any case, why express it with a 
> pre-condition rather than a good ol' explicit test?

Because it is very valuable to put this in the spec, and that requires
special visibility machinery (and special implementation machinery).

****************************************************************

From: Randy Brukardt
Sent: Friday, January 11, 2007  5:39 PM

> Because you want to do static proofs at the call site, and you want to be able
> to assume that the postcondition is true after the call.  This is true whether
> postconditions are checked statically or dynamically.  And the proofs at the
> call site might be informal "in-the-programmer's-head" reasoning.

OK.

> To put it another way, the postcondition characterizes "what does this thing
> do?", and the caller needs to know about that.  E.g.:
>
>     X : Integer := F(...);
>     ... Blah / X ...
>
> Suppose the postcondition for F is "F'Result /= 0".  That allows us to prove
> that we're not violating the precondition for "/" (no divide by zero).
> Or:
>
>     Push (Stack, ...);
>     Pop (Stack);
>
> If the postcondition for Push promises "not Is_Empty(Stack)", and the
> precondition for Pop requires "not Is_Empty(Stack)", we can know that the
> precondition will not fail.  Interestingly, we don't need to know what
> "Is_Empty" means to do this sort of symbolic reasoning.  (We might like
> to know, or at least hope, that Is_Empty doesn't have side effects!)

Assuming that Is_Empty does not have side effects is very dubious. (At least
if we're talking about Ada!) Even if Is_Empty is declared Pure, you can't
assume anything for some types of parameters (this was nailed down much
better for the Amendment). The net effect is that you can only do this
reasoning in some cases, and you had better not do it the rest of the time.

There would be value to requiring that any functions used in pre and post
conditions be declared pure (note that I'm assuming that we can agree on a
way to define them for standardization, but GNAT doesn't have that problem
as it already has the concept). Functions with real side effects are going
to make formal reasoning (by compilers and by other static analysis tools)
hard or impossible - it's not clear that there is much benefit to allowing
those. (Besides, this expression is already rather special given the funny
visibility that it must have, so an extra legality check doesn't seem to be
a problem.)

> Meyer's view (which not everyone shares) is that a failing precondition always
> indicates a bug in the caller, whereas a failing postcondition always indicates
> a bug in the callee.  Of course, it could indicate a bug in the
> pre/postcondition itself!

Makes sense. Especially the last part!

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  5:43 PM

> Meyer's view (which not everyone shares) is that a failing precondition always
> indicates a bug in the caller, whereas a failing postcondition always indicates
> a bug in the callee.  Of course, it could indicate a bug in the
> pre/postcondition itself!

Or it could indicate a situation where it is not possible to have
complete preconditions. E.g. a precondition might be that a database
is not corrupted, or that the file system itself is not corrupted,
or that no memory module is experiencing intermittent failure, or
that no external actor has messed with memory, or that there is
sufficient CPU power to compute the result by some deadline, or ...

The Meyer view is too simplistic for real life :-)

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  5:54 PM

> Meyer argues that no other combinations should be allowed.
> He specifically says the "post only" case makes no sense.

Another interesting division is proved vs not_proved when you are
using a proof system like SPARK. Yes, you could remove the proved
ones, but it might be nice to keep them, and be able to avoid
activating them in production use.

Really there are two issues here

a) the use of various assertions, PPC's etc to document the
code and provide input to proof tools (or input to humans as
formalized documentation).

b) generation and actual execution of run time checks

It is important to realize that a) is important even in
the absence of run time checks, and being able to activate
selected checks may be an issue for all sorts of reasons.
For instance one may not have enough memory in the target
board to activate all assertions, so you make a hierarchy
of importance and activate only some of them.

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  6:03 PM

> There would be value to requiring that any functions used in pre and post
> conditions be declared pure (note that I'm assuming that we can agree on a
> way to define them for standardization, but GNAT doesn't have that problem
> as it already has the concept). Functions with real side effects are going
> to make formal reasoning (by compilers and by other static analysis tools)
> hard or impossible - it's not clear that there is much benefit to allowing
> those. (Besides, this expression is already rather special given the funny
> visibility that it must have, so an extra legality check doesn't seem to be
> a problem.)

Please don't try to legislate coding rules. If you want to make this
rule for *your* code, feel free to make it so, and if you like, have a
tool that checks this (it would be an easy check to add to gnatcheck,
for example).

There are many many reasons why this would be a bad idea. For instance
a memo function cannot be declared pure, even though logically it is
pure in the useful sense, yet memo functions are very definitely useful
in this context.
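For illustration, here is a sketch of the kind of memo function being described: observationally pure (equal arguments always yield equal results), yet impossible to declare Pure in the RM sense because it updates a hidden cache. The Pure_Function pragma used here is the GNAT-specific pragma; as discussed below, it asserts purity for code-generation purposes without any checking. The package and names are hypothetical.

```ada
package Memo_Sqrt is
   function Sqrt (Arg : Natural) return Natural;
   pragma Pure_Function (Sqrt);  --  GNAT-specific: a promise, not a check
end Memo_Sqrt;

package body Memo_Sqrt is
   --  One-entry cache: hidden state, so this package cannot be Pure
   --  in the RM sense, yet callers cannot observe any difference.
   Cached_Arg    : Natural := 0;
   Cached_Result : Natural := 0;
   Cache_Valid   : Boolean := False;

   function Sqrt (Arg : Natural) return Natural is
      Result : Natural := 0;
   begin
      if Cache_Valid and then Cached_Arg = Arg then
         return Cached_Result;
      end if;

      --  Naive floor square root (may overflow for Arg near
      --  Integer'Last; good enough for a sketch).
      while (Result + 1) * (Result + 1) <= Arg loop
         Result := Result + 1;
      end loop;

      Cached_Arg    := Arg;
      Cached_Result := Result;
      Cache_Valid   := True;
      return Result;
   end Sqrt;
end Memo_Sqrt;
```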

Reminds me a bit of the move during Ada 83 development to require 
functions not to have side effects; the trouble is that the informal
notion of what this means cannot be formalized, e.g. a square root
function that internally keeps a count of how many times it is called
for performance reasons is sort of logically pure, but how can you
tell if that count is for

a) telling how many times it is called in the canonical sequence

b) telling how many times it is called after optimization

If a), then the function is not really pure, but if b), then it is
pure; by "pure" here I mean it is OK to optimize duplicated calls.
For a) the optimization is wrong; for b) it is right.

****************************************************************

From: Randy Brukardt
Sent: Friday, January 11, 2007  6:28 PM

> There are many many reasons why this would be a bad idea. For instance
> a memo function cannot be declared pure, even though logically it is
> pure in the useful sense, yet memo functions are very definitely useful
> in this context.

Umm, Robert, the Gnat Pure_Function pragma asserts but does not check
function purity. That's one option, and for this purpose it is good enough.
Gnat's Pure_Function pragma surely allows memo functions (it even allows
functions that aren't pure at all). The idea is mainly to make it explicit
in programmers' minds that there shouldn't be any side-effects in the
function.

To actually have the correct effect would require a lot more than just
purity, and I don't know whether trying to require that would be a good
idea. In order to do static proving, you have to assume that any functions
depend only on their parameters and always return the same result. If that's
not true, any proofs are pretty much worthless. Moreover, if you're using
the proofs to drive code generation, the generated code is likely to be
wrong.

I suppose one possibility would be for a compiler to ignore (other than any
runtime check) any pre and post conditions that don't meet the requirements
for static proving. That would allow more general functions to be used while
still allowing useful proofs.

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  6:41 PM

> Umm, Robert, the Gnat Pure_Function pragma asserts but does not check
> function purity. That's one option, and for this purpose it is good enough.

Well I thought that if you were saying functions should be pure, then
you meant it in the RM sense. The GNAT Pure_Function declaration has
nothing to do with side effects, which I think was your point (that's
certainly how you phrased your point).

Pure_Function in GNAT is about code generation:

> This pragma appears in the same declarative part as a function
> declaration (or a set of function declarations if more than one
> overloaded declaration exists, in which case the pragma applies
> to all entities).  It specifies that the function @code{Entity} is
> to be considered pure for the purposes of code generation.  This means
> that the compiler can assume that there are no side effects, and
> in particular that two calls with identical arguments produce the
> same result.  It also means that the function can be used in an
> address clause.
> 
> Note that, quite deliberately, there are no static checks to try
> to ensure that this promise is met, so @code{Pure_Function} can be used
> with functions that are conceptually pure, even if they do modify
> global variables.  For example, a square root function that is
> instrumented to count the number of times it is called is still
> conceptually pure, and can still be optimized, even though it
> modifies a global variable (the count).  Memo functions are another
> example (where a table of previous calls is kept and consulted to
> avoid re-computation).

I really think the optimization issue is not critical wrt PPC's.
Yes the intent is that Pure_Function indicate logical purity,
but that's only intent, and this has nothing to do with requirements.

But you talked about *requiring* purity (go look at your message).
Well it is pretty silly to require something similar to
Pure_Function, which is only about code generation and intent.
You can get the intent without some silly non-requirement
requirement like this.

> To actually have the correct effect would require a lot more than just
> purity, and I don't know whether trying to require that would be a good
> idea. In order to do static proving, you have to assume that any functions
> depend only on their parameters and always return the same result. If that's
> not true, any proofs are pretty much worthless. Moreover, if you're using
> the proofs to drive code generation, the generated code is likely to be
> wrong.

No, not necessarily
 
> I suppose one possibility would be for a compiler to ignore (other than any
> runtime check) any pre and post conditions that don't meet the requirements
> for static proving. That would allow more general functions to be used while
> still allowing useful proofs.

I have no idea what this means; compilers generate run time checks, that
is their job. They can then optimize based on these checks if they want,
just as they can on any conditionals. I just don't get your point here.
 
****************************************************************

From: Randy Brukardt
Sent: Friday, January 11, 2007  7:18 PM

> > Umm, Robert, the Gnat Pure_Function pragma asserts but does not check
> > function purity. That's one option, and for this purpose it is good enough.
>
> Well I thought that if you were saying functions should be pure, then
> you meant it in the RM sense.

No, because the RM sense is useless for this purpose. These functions are
likely to be primitive operations of an abstraction, and we surely couldn't
require all abstractions to be in packages declared Pure. I was specifically
thinking of something like Pure_Function that declares the intent of purity
for a single function.

> The GNAT Pure_Function declaration has
> nothing to do with side effects, which I think was your point (that's
> certainly how you phrased your point).
>
> Pure_Function in GNAT is about code generation:
...
> I really think the optimization issue is not critical wrt PPC's.
> Yes the intent is that Pure_Function indicate logical purity,
> but that's only intent, and this has nothing to do with requirements.
>
> But you talked about *requiring* purity (go look at your message).
> Well it is pretty silly to require something similar to
> Pure_Function, which is only about code generation and intent.
> You can get the intent without some silly non-requirement
> requirement like this.

Sorry if I misled you, but what I wanted to require was that any functions
used have a *declaration of intent* to be pure. That would reinforce that
side-effects are bad and that differing results are bad, but that doesn't
necessarily mean that there would be checks to ban them. The GNAT
Pure_Function seems to have the correct declaration of intent, but if you
think there is a better way to declare such an intent, I'd like to hear it.

...
> > I suppose one possibility would be for a compiler to ignore (other than any
> > runtime check) any pre and post conditions that don't meet the requirements
> > for static proving. That would allow more general functions to be used while
> > still allowing useful proofs.
>
> I have no idea what this means. Compilers generate run time checks; that
> is their job. They can then optimize based on these checks if they want,
> just as they can on any conditionals. I just don't get your point here.

Removing checks is harder than not generating them in the first place. I
know our compiler tries to avoid generating checks and other code based on
what it can prove. (It currently can't prove much, but that's a different
issue. ;-)

Consider the "not null" modifier. If the access subtype of an object has
such a modifier, the compiler is not going to generate null access value
checks (presuming it can prove that the object is valid, which is almost
always true for access objects). It makes perfect sense (to me, at least) to
use precondition expressions for similar check elimination. But that can
only be done if any functions in the expression are very well-behaved. If
the compiler assumes a function is well-behaved when it is not (that is, can
return a different answer on different calls with the same value, or has
significant side-effects), it is quite possible that checks and code would
be eliminated that could (or even should) actually be executed. That would
be bad.
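
As a self-contained sketch of the "not null" case (the names here are
made up for illustration, not taken from Claw):

     declare
        type Int_Ptr is access all Integer;
        X : aliased Integer := 0;
        P : not null Int_Ptr := X'Access;
     begin
        --  The subtype excludes null, so dereferences of P need no
        --  null-access check; the "not null" in the declaration plays
        --  the same check-eliminating role a proved precondition would.
        P.all := P.all + 1;
     end;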

To take an example from a hypothetical future version of Claw:

     procedure Move (Window : in out Root_Window; ...);
     pragma Precondition (Is_Valid(Window));

     procedure Move (Window : in out Root_Window; ...) is
     begin
         if not Is_Valid(Window) then
             raise Not_Valid_Error;
         end if;
         ...
     end Move;

The compiler can use the precondition to eliminate the explicit test of
Is_Valid (left-over in this case from the Ada 95 version of Claw). But that
can only be done safely if Is_Valid doesn't have a significant side effect
and always returns the same answer for a particular value.

Anyway, that's what I'm thinking about; I want to pull this sort of proof
earlier in the compilation where it can do more good. (It's hard even to
represent this precondition in the relatively low-level representation used
by our optimizer.)
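
In GNAT terms, the well-behavedness assumption would presumably be
declared something like this (a sketch only; Is_Valid's profile is
guessed from the Claw fragment above):

     function Is_Valid (Window : Root_Window) return Boolean;
     pragma Pure_Function (Is_Valid);
     --  Asserts (but does not check) that Is_Valid depends only on its
     --  argument and has no significant side effects, so a compiler may
     --  reuse the precondition's result and drop the explicit test in
     --  the body of Move.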

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  7:31 PM

> No, because the RM sense is useless for this purpose. These functions are
> likely to be primitive operations of an abstraction, and we surely couldn't
> require all abstractions to be in packages declared Pure. I was specifically
> thinking of something like Pure_Function that declares the intent of purity
> for a single function.

If all you are interested in is intent, it is meaningless to make this
a requirement. From the point of view you are interested in, the use
of

    pragma Pure_Function (X);

has no more force than

    --  X should be logically pure


> Sorry if I misled you, but what I wanted to require was that any functions
> used have a *declaration of intent* to be pure. That would reinforce that
> side-effects are bad and that differing results are bad, but that doesn't
> necessarily mean that there would be checks to ban them. The GNAT
> Pure_Function seems to have the correct declaration of intent, but if you
> think there is a better way to declare such an intent, I'd like to hear it.

This is the domain of programming style and style checking, not the 
domain of language requirements. I see no possible use in fiddling
around with weak "intent" requirements.

Pure_Function in GNAT is all about optimization and nothing else. The
associated intent is just a way of reminding that the optimization
may occur. The optimization is about making sure that the meaning of
the program is not affected by such optimization. Well in the case
of assertions, we get the same effect from the operational requirement
that the meaning of the program should not be affected by whether
assertions are enabled or not, but you don't want to try to formalize
that with any requirements at all (for instance you don't want to
even suggest that it is bad to mention Volatile variables in a
precondition, even though technically this is a change in behavior).

> Removing checks is harder than not generating them in the first place. I
> know our compiler tries to avoid generating checks and other code based on
> what it can prove. (It currently can't prove much, but that's a different
> issue. ;-)

Of course you don't generate checks if you don't need them, but what's
that got to do with your odd idea that compilers should ignore PPC's
that are not statically verifiable, except that they should still
generate the run time check? Did you really mean to say that? It really
seems a bit nonsensical to me.

> Anyway, that's what I'm thinking about; I want to pull this sort of proof
> earlier in the compilation where it can do more good. (It's hard even to
> represent this precondition in the relatively low-level representation used
> by our optimizer.)

Well preconditions are no different from any other conditional, and
of course optimizers draw conclusions from conditionals as a normal
part of their operation.

****************************************************************

From: Robert Dewar
Sent: Friday, January 11, 2007  7:34 PM

As interesting as this totally irrelevant chit chat about pre and
post conditions is, it would be nice to draw the discussion back
to the subject line. I still think it would be useful to allow
more general use of Assertion_Policy.

We could of course allow Check_Policy to have more general use,
and we are debating this.

On the one hand, it would definitely be useful, and is very
easy to implement.

On the other hand, it is too bad if Assertion_Policy and
Check_Policy differ in this respect.

Interestingly, as any of you who have access to GNAT will
find, this is a check we forgot: currently Assertion_Policy
can be used anywhere.

Still, you have to decide if this is just lexical, like
pragma Warnings (Off/On) (which is the way it is in GNAT
now), or whether it is unit based, like Suppress.
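
The lexical interpretation would behave roughly like this (a sketch of
the GNAT behavior described above; the two Boolean functions are
hypothetical):

     pragma Assertion_Policy (Ignore);
     pragma Assert (Expensive_Consistency_Check);  --  not evaluated
     pragma Assertion_Policy (Check);
     pragma Assert (Cheap_Check);                  --  evaluated

The unit-based interpretation would instead apply one policy to the
whole unit naming it, the way pragma Suppress does.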

****************************************************************

From: Randy Brukardt
Sent: Friday, January 11, 2007  9:05 PM

> As interesting as this totally irrelevant chit chat about pre and
> post conditions is, it would be nice to draw the discussion back
> to the subject line. I still think it would be useful to allow
> more general use of Assertion_Policy.

I know I made my opinion known - No.

For a single subprogram, that's what "--" (the comment symbol) is for. (And
it would be quicker, too.) Turning off assertions in just a small number
of subprograms seems pretty weird.

I also saw that Tucker seemed to concur with this position. I never saw
anyone disagree with it.

...

> Interestingly, as any of you who have access to GNAT will
> find, this is a check we forgot: currently Assertion_Policy
> can be used anywhere.
>
> Still, you have to decide if this is just lexical, like
> pragma Warnings (Off/On) (which is the way it is in GNAT
> now), or whether it is unit based, like Suppress.

That seems to be another good reason for not generalizing it. The only
language-defined lexical pragma that I can think of is List(On/Off);
everything else language-defined is unit based (Suppress, Optimize, etc.) or
a program unit or configuration pragma. But a unit-based one doesn't seem
useful at all, since the only need that has been seen is to turn assertions
off in a set of subprograms but not all of them in a unit -- which would not
be possible with a unit based pragma (at least without warping the
structure). So it still seems best to be a configuration pragma.

Whether there should have been a second dimension specifying which kinds of
assertions are being discussed is more interesting. While I doubt I'd use
such a facility, I can believe that others might (you've said that you have
customers that want to). But that seems like a dubious change without
introducing the actual pre and post conditions to the language.

So I think no change is appropriate for now, maybe something in Ada 2019.

****************************************************************

From: Robert Dewar
Sent: Saturday, January 12, 2007  1:31 AM

> I know I made my opinion known - No.
> 
> For a single subprogram, that's what "--" (the comment symbol) is for. (And
> it would be quicker, too.)
> Turning off just a small number of subprograms seems pretty weird.

Actually, for us a turned-off assertion is not completely ignored;
we still extract information from it for the purposes of warnings,
e.g.

      function X (S : String) return String is
         pragma Assert (S'First = 1);
      begin
         ... S(1)
      end;

Normally (if the option is on) S(1) generates a warning about assuming
a lower bound of 1, but the pragma Assert kills the warning (even if
assertions are being "ignored"). So commenting out the assertion would
not be benign in all cases.

****************************************************************


Questions? Ask the ACAA Technical Agent