Version 1.6 of ais/ai-00147.txt

!standard 07.06 (21)          99-03-25 AI95-00147/06
!class binding interpretation 96-06-06
!status ARG Approved 6-0-2 99-03-26
!status work item 98-04-01
!status ARG Approved (with changes) 10-1-2 97-11-16
!status work item 5-1-4 96-10-07
!status work item 96-06-06
!status received 96-06-06
!priority High
!difficulty Hard
!subject Optimization of Controlled Types
!summary
7.6(18-21) does not apply to limited controlled types. Thus, the Initialize and Finalize calls will happen as described in other paragraphs of the Standard.
For non-limited controlled types, the implementation permission of RM-7.6(18-21) is extended as follows:
An implementation is permitted to omit implicit Initialize, Adjust and Finalize calls and associated assignment operations on an object of non-limited controlled type if
a) any usage of the value of the object after the implicit Initialize or
Adjust call and before any subsequent Finalize call on the object does not affect the external effect of any program execution, and
b) after the omission of such calls and operations, any execution of
the program that executes an Initialize or Adjust call on an object or initializes an object by an aggregate will also later execute a Finalize call on the object and will always do so prior to assigning a new value to the object.
This permission applies even if the implicit calls have additional effects.
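As an illustration of conditions a) and b) (a sketch only; Controlled_Type and its operations are hypothetical user-defined controlled entities):

     X : Controlled_Type;   -- Initialize(X): may be omitted under a),
                            -- since the default value of X is never read
   begin
     X := Y;                -- the Finalize(X) here may be omitted along
                            -- with the Initialize, so that b) still holds
                            -- after the omission; the copy and Adjust(X)
                            -- must remain, because X.Comp is read below
     ... X.Comp ...
   end;                     -- Finalize(X): must remain, matching Adjust(X)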
The last sentence of 7.6(21) is amended to include the omission of the finalization of the anonymous object (along with the adjustment) and to exclude the optimization when there are any access values designating the anonymous object as a whole.
!question
RM-7.6(18-21) give implementations permission to eliminate certain calls to Adjust and Finalize. The purpose of these rules is to allow greater efficiency than is implied by the canonical semantics of controlled types. However, it appears that these rules may not go far enough.
Consider first a very simple example:
     X : Controlled_Type;   -- Initialize(X)
   begin
     null;
   end;                     -- Finalize(X)
Are the calls really needed for an obviously "dead" variable? [No]
Second example:
     X : Controlled_Type;    -- Initialize(X)
   begin
     X := something;         -- Finalize(X); Adjust(X);
     X := something_else;    -- Finalize(X); Adjust(X);
     ... X.Comp ...          -- so that it isn't dead
   end;                      -- Finalize(X);
The comments indicate the implicit calls, already taking advantage of the Implementation Permission to omit creation, adjustment and finalization of a temporary anonymous object. Is it really necessary to mandate the execution of the first assignment and its implicit calls, merely for the sake of potential side-effects of these calls? [No] Do we really have to Initialize X and immediately thereafter Finalize it? [No]
Third example:
     X1 : Controlled_Type;              -- Initialize(X1);
     ...                                -- no use of X1
     declare
        X : Controlled_Type := Init;    -- Adjust(X);
     begin
        X1 := X;                        -- Finalize(X1); Adjust(X1);
     end;                               -- Finalize(X);
     ...
Should the declare block not be optimizable to

     X1 := Init;                        -- Finalize(X1); Adjust(X1);  [Yes]

or the whole example to

     X1 : Controlled_Type := Init;      -- Adjust(X1);  [Yes]

avoiding so many unneeded calls?
Finally, a more complicated example:
     function F return Some_Controlled_Type is
        Result : Some_Controlled_Type := (...);   -- Adjust
     begin
        return Result;                            -- Adjust, Finalize
     end F;

     function G return Some_Controlled_Type is
        Result : Some_Controlled_Type := F;       -- Adjust
     begin
        return Result;                            -- Adjust, Finalize
     end G;

     X : Some_Controlled_Type := G;               -- Adjust
Is an implementation allowed to optimize the above by creating the aggregate directly in X, thus avoiding all calls to Adjust and Finalize? [Yes] (Such an optimization might well be feasible if F and G are inlined, or if the compiler is doing other kinds of inter-procedural analysis. It might even be feasible without inter-procedural analysis, if the sizes of the function results are known at the call site and there is no danger of source/target overlap.)
Note: Some of these issues came up during a discussion of ACVC test C760007.
Finally, what is the meaning of the last sentence of 7.6(21)? It seems to allow elimination of an Adjust operation without eliminating a matching Finalize operation. Is this intended? [No]
!recommendation
For limited controlled types, the canonical semantics apply.
For non-limited controlled types, matching pairs of Adjust/Finalize and Initialize/Finalize calls may be eliminated by the implementation as follows:
For an explicit initialization of an object or an assignment to it, where the assigned value is not read after the Adjust call (if any) and prior to the Finalize call for the next assignment to the object or for its destruction, the Adjust/Finalize pair of calls (and the value assignment) need not be performed.
For an implicit Initialize call, where the initialized value is not read after the Initialize call and prior to the Finalize call for the next assignment to the object or for its destruction, the Initialize/Finalize pair of calls need not be performed.
An intermediate recipient object in the flow of a value from a source to its final destination, where the value of the intermediate object is not used thereafter, can be eliminated along with all operations on it by assigning the value directly from its source to the final destination and adjusting it.
The last sentence of 7.6(21) omits an important additional constraint: In order for the adjustment to be omitted, it must also be guaranteed that there are no access values designating the anonymous object as a whole.
!wording
7.6(18-21) need to be carefully rephrased to encompass the permissions stated in the !summary.
The last sentence of 7.6(21) needs to be amended as stated in the !summary.
!discussion
The above examples show that the implicit calls can create a non-trivial overhead on the execution even when the existing Implementation Permission of RM-7.6 is applied. It is therefore worthwhile to consider extending the Implementation Permission even further.
It is interesting to note that the declare block of the third example above is nothing but an Ada encoding of the canonical semantics of assignments as stated in RM 7.6(17), with the anonymous temporary object being introduced as a named "temporary" object. In the case of the anonymous object, RM 7.6(21) allows the simplification asked for in the question. It seems strange that the explicit introduction of the object in the code should make a significant difference in semantics.
A slight rewriting of the last example would apparently fall under the Implementation Permission, since the "result object" assigned to by the return_statement is an anonymous object:
     function F return Some_Controlled_Type is
     begin
        return (<<aggregate>>);
     end F;

     function G return Some_Controlled_Type is
     begin
        return F;
     end G;

     X : Some_Controlled_Type := G;   -- Adjust
Again, it seems strange that this rewritten version should have semantics different from the original.
So, one question is whether "adjacent" Initialize/Finalize calls or Adjust/Finalize calls without intervening usage of the affected object can be eliminated. The other question is whether the "copy propagation" rule manifest in 7.6(21) for temporary anonymous objects can be extended to "temporary" named objects.
7.6(18-21) does not allow the elimination of Initialize/Finalize pairs -- just Adjust/Finalize pairs, and then only on anonymous objects.
For limited types, this is exactly what we want. Limited controlled types are used, for example, to ensure properly matched calls, such as Lock/Unlock or Open/Close. Another example is a limited controlled type that is used purely because Initialize is an abort-deferred region. That is, a limited controlled object might be used purely for the side-effects of its Initialize and Finalize operations. (It is better to use Initialize for this purpose, rather than Finalize, because then the implementation has a better chance of optimizing away any finalization-related overhead.) Therefore, 7.6(18-21) should certainly not be extended to apply to limited types in any way -- the Initialize and Finalize calls required by the canonical semantics should happen exactly as specified (unless of course the compiler can prove that their side effects are irrelevant). For limited controlled types, we have exactly one Initialize and exactly one Finalize call, making it possible to exploit the "once-only" semantics in the implementation of these operations.
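A typical sketch of the Lock/Unlock idiom (hypothetical names; Lock and Unlock are assumed to be operations of some semaphore abstraction, not part of this AI):

     with Ada.Finalization;
     package Critical is
        type Scope_Lock is new Ada.Finalization.Limited_Controlled
           with null record;
        procedure Initialize (L : in out Scope_Lock);  -- e.g. calls Lock
        procedure Finalize   (L : in out Scope_Lock);  -- e.g. calls Unlock
     end Critical;

     declare
        Guard : Critical.Scope_Lock;   -- Initialize: Lock
     begin
        ...                            -- protected actions
     end;                              -- Finalize: Unlock, even if an
                                       -- exception propagates

The object Guard exists purely for the side effects of these two calls; this is why no permission to omit the pair can be tolerated for limited types.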
However, for non-limited types, the situation is somewhat different, since assignments cause additional Adjust and Finalize calls on an object; moreover, many Adjust calls may be applied to a value as it flows from object to object. By virtue of the existing implementation permission, the number of such calls is indeterminate. It is highly advisable that users essentially make Finalize an inverse dual to both Initialize and Adjust, and allow for the possibility of an unknown number of Adjust and Finalize calls on the same object. Consequently, to preserve a canonical semantics that mandates the execution of assignments, even if their only external effect is through the implicit calls on the routines in question, makes little sense, given the already unknown number of call pairs involved.
Conversely, it does make sense to weaken the canonical semantics for the benefit of important optimizations. We can do so by extending the permissions of 7.6(18-21) to Initialize/Finalize pairs and to Adjust/Finalize pairs for non-limited types in such a way that the practical effect is merely more uncertainty over an already uncertain number of call pairs.
Note also that it is the assignment operation that is abort deferred, not the assignment_statement. Thus, it is possible under canonical semantics for "A := B;" to be aborted after adjusting B in an anonymous temporary object or after finalizing A, but before assigning the new value into A and adjusting it. (Only the latter part is abort deferred.) So, some indeterminacy in the number of Adjust calls is already part of the canonical semantics and not merely a result of existing implementation permissions.
All this argues for giving optimizations more leeway in eliminating these implicit calls in situations where "properly written" Adjust and Finalize procedures (i.e., procedures that rely on no side-effects other than changes to the value of the object itself) would have no discernible impact on the external effect of the program execution.
In any case, we want to preserve the duality of the calls and guarantee that any program execution that executes an Initialize or Adjust call on an object also executes a Finalize on the object.
Eliminating Initialize/Finalize pairs is obviously interesting when a variable is declared but never used, or, in the more likely case, when other optimizations have found all its uses to be unnecessary. Equally interesting are Adjust/Finalize pairs for an initialized variable that is never used.
Consider also, for example, if the user had written

     Result : Some_Controlled_Type;
     ...
     Result := F;

above, instead of "Result: Some_Controlled_Type := F;".
Canonical semantics would then cause a pair of Initialize/Finalize calls on Result, followed by an Adjust call, while the original program caused only the Adjust call. A semantics that makes these two alternatives significantly different is surely a surprise to many users. Permission to eliminate the Initialize/Finalize pair creates at least the possibility that the two alternatives might result in the same behavior (and acts as a minor additional incentive to the programmer to make Initialize and Finalize an "inverse" pair to get identical semantics from both alternatives).
The optimizations argued for are restricted to those cases where, at Ada source level, an initialization or assignment occurs, but the assigned value is never used, or a "copy propagation" eliminates an intermediate recipient object in the flow of a value from one source to its final destination, where the value of the intermediate object is not used thereafter.
While the above discussion is formulated in terms of pairs of calls for simplicity's sake, the real situation is slightly more complicated, since it needs to consider all execution paths.
Consider:
     X : Controlled_Type;
   begin
     X := something;            -- Adjust(X);
     if P then
        X := then_something;    -- Finalize(X); Adjust(X);
     else
        X := else_something;    -- Finalize(X); Adjust(X);
        ... X.Comp ...          -- so that it isn't dead
     end if;
     ...
We would like to eliminate the first Adjust call and BOTH Finalize calls. As a group, they constitute the matching pairs. We cannot always eliminate single pairs. It is fairly easy to do in a compiler, but hard to describe in these terms in a language definition.
The words in the !summary therefore capture the bounds of the optimization by its externally visible effects.
Finally, the !question raises an issue with the last sentence of 7.6(21), which indeed is seriously incomplete. The sentence needs to be interpreted to mean that, along with the omission of the re-adjustment, the temporary object is not finalized either. Also, this permission causes a major problem in the case where the object itself is referenced by one of its subcomponents or by other objects reachable via subcomponents and properly modified by the Adjust call to refer to the object in question (e.g., as a means to track all objects of a certain type). The sentence therefore needs to be amended to exclude the optimization in this case.
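A sketch of the problematic case (an entirely hypothetical type; the registration scheme is only one instance of the pattern):

     type Tracked;
     type Tracked_Ptr is access all Tracked;
     type Tracked is new Ada.Finalization.Controlled with record
        Self : Tracked_Ptr;   -- designates the enclosing object itself
     end record;

     procedure Adjust (Obj : in out Tracked) is
     begin
        Obj.Self := Obj'Unchecked_Access;   -- e.g. registers this copy
     end Adjust;

If the assigned value is built in an anonymous object and Adjust is applied there, Obj.Self designates the anonymous object as a whole; omitting the re-adjustment at the target (and the finalization of the anonymous object) would leave that access value referring to the wrong, possibly vanished, object. Hence the exclusion of the optimization in this case.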
!appendix

!section 7.6(21)
!subject Can implicit Initialize/Adjust/Finalize pairs be optimized away ?
!reference RM95 7.6(21)
!reference RM95 1.1.3(15)
!from Erhard Ploedereder 96-05-29
!reference 96-5579.a Erhard Ploedereder  96-5-28>>
!discussion

Is it a permissible optimization to omit calls to Initialize, Adjust and
Finalize in connection with declarations, assignments and completions of
masters, when the implementation can determine that these calls are the
only possible source of external effects ?

As a specific example, consider the case of a variable declared to be
of a controlled type with user-defined Initialize and Finalize operations.
This variable is never used. Is it legitimate to omit the Initialize and
Finalize calls for this variable ?

Consider further an assignment to such variable with no subsequent use of
the variable prior to completion of the master. Is it legitimate to omit the
Finalize/Adjust pair of calls along with the value transfer, which would
normally implement such an assignment ?

Put differently: Can dead code and dead variable elimination disregard
the effects of such implicit calls in determining whether an object or
assignment is needed at all ?

In the spirit of 7.6(21), which makes the exact number of such calls
implementation-defined, anyhow, one might assume that such liberty causes
little damage, while being quite important for further reducing the number of
such (unneeded) calls and enabling other optimizations.


****************************************************************

!section 7.6(21)
!subject Can implicit Initialize/Adjust/Finalize pairs be optimized away ?
!reference RM95 7.6(21)
!reference RM95 1.1.3(15)
!reference 96-5579.a Erhard Ploedereder  96-5-28
!from Norman Cohen  96-5-29
!reference 96-5580.a Norman H. Cohen 96-5-29>>
!discussion

Erhard writes:

 > In the spirit of 7.6(21), which makes the exact number of such calls
 > implementation-defined, anyhow, one might assume that such liberty causes
 > little damage, while being quite important for further reducing the number of
 > such (unneeded) calls and enabling other optimizations.

I agree.  I had thought that 7.6(21) already provided such permission,
but after reading it again, I see that it does not, and a binding
interpretation is needed.


****************************************************************

!section 7.6(21)
!subject Can implicit Initialize/Adjust/Finalize pairs be optimized away ?
!reference RM95 7.6(21)
!reference RM95 1.1.3(15)
!reference 96-5579.a Erhard Ploedereder  96-5-28
!reference 96-5582.a Robert A Duff 96-5-29>>
!discussion

> Is it a permissible optimization to omit calls to Initialize, Adjust and
> Finalize in connection with declarations, assignments and completions of
> masters, when the implementation can determine that these calls are the
> only possible source of external effects ?

I don't see any permission to *ever* omit an Initialize, nor the
corresponding Finalize.  7.6(18-21) are about omitting Adjust/Finalize
pairs.  We might want to liberalize the rules slightly there, but one
thing I feel strongly is that:

    For *limited* types, what you see is what you get.

That is, the Initialize and Finalize calls that 7.6 and 7.6.1 talks
about really need to happen, for limited types.  For example, a limited
controlled type might be used to lock/unlock a semaphore, and the
compiler should not be allowed to eliminate that code, even if the
limited controlled object is never otherwise used.

For non-limited types, I'm not so sure, but it's probably best to stick
to eliminating Adjust/Finalize pairs, and never eliminating
Initialize/Finalize pairs -- this would achieve the above principle for
limited types, and also for non-limited types that never happen to get
assigned here and there.

> As a specific example, consider the case of a variable declared to be
> of a controlled type with user-defined Initialize and Finalize operations.
> This variable is never used. Is it legitimate to omit the Initialize and
> Finalize calls for this variable ?

If limited, certainly not.  If non-limited, probably not.

> Consider further an assignment to such variable with no subsequent use of
> the variable prior to completion of the master. Is it legitimate to omit the
> Finalize/Adjust pair of calls along with the value transfer, which would
> normally implement such an assignment ?

Hmm.  Maybe.

> Put differently: Can dead code and dead variable elimination disregard
> the effects of such implicit calls in determining whether an object or
> assignment is needed at all ?

No.

> In the spirit of 7.6(21), which makes the exact number of such calls
> implementation-defined, anyhow, one might assume that such liberty causes
> little damage, while being quite important for further reducing the number of
> such (unneeded) calls and enabling other optimizations.

True, but certainly not for limited types.

- - Bob


****************************************************************

!section 7.6(21)
!subject Can implicit Initialize/Adjust/Finalize pairs be optimized away ?
!from Robert Dewar 96-05-30
!reference RM95 7.6(21)
!reference RM95 1.1.3(15)
!reference 96-5579.a Erhard Ploedereder  96-5-28
!reference 96-5582.a Robert A Duff 96-5-29
!reference 96-5586.a Robert Dewar 96-5-30>>
!discussion

Bob Duff said

  I don't see any permission to *ever* omit an Initialize, nor the
  corresponding Finalize.  7.6(18-21) are about omitting Adjust/Finalize
  pairs.  We might want to liberalize the rules slightly there, but one
  thing I feel strongly is that:

    For *limited* types, what you see is what you get.

I agree this is critical. Dummy objects with finalization routines are
an important feature of the language and must work.

The following is just one example:

A surprising thing about Ada 95 is that there
is no convenient way to defer abort, yet in an environment where one is
actually using asynchronous aborts this is crucial.

In GNAT, the pragma Abort_Defer is available, and at least in the case of
one user who tried to use ATC (but has since decided it was a bad idea all
round), they found their program growing full of instances of this pragma.

However, if you are trying to use standard features, you only have two
choices for deferring abort in practice, protected operations, and
finalization operations. For example, suppose I am writing a bubble sort,
and I want it to be asynch safe, in the sense that if it is interrupted
I don't want data loss.

This means that the exchange has to be abort protected. In GNAT one would
just write:

     procedure Exchange (I, J) is
     begin
        pragma Abort_Defer;
        ...
     end;

but if you want to avoid non-standard pragmas, then you either write a
dummy protected record, and pray that your compiler will optimize and
completely remove the junk locks (a very difficult optimization), or
you declare a dummy object to be finalized and make the exchange
procedure be the finalization routine. Both approaches are very kludgy,
but at least the finalization approach should generate semi-reasonable
code (at least avoiding the likely extra kernel calls in the protected
object case).

So, in short, it sure is VERY important that limited types do not mysteriously
disappear as far as initialize and finalize go. Indeed during the language
development, we deliberately decided we could do without block finalization
on the grounds that you can always declare a dummy object and use the
finalization of the dummy object for this purpose.

Actually it is a little obscure that this only works in the limited case,
I don't see the RM justification for this?



****************************************************************

!section 7.6(21)
!subject Can implicit Initialize/Adjust/Finalize pairs be optimized away ?
!reference RM95 7.6(21)
!reference RM95 1.1.3(15)
!reference 96-5579.a Erhard Ploedereder  96-5-28
!reference 96-5582.a Robert A Duff 96-5-29
!reference 96-5586.a Robert Dewar 96-5-30
!from Bob Duff
!reference 96-5594.a Robert A Duff 96-6-5>>
!discussion

> However, if you are trying to use standard features, you only have two
> choices for deferring abort in practice, protected operations, and
> finalization operations.
...
> but if you want to avoid non-standard pragmas, then you either write a
> dummy protected record, and pray that your compiler will optimize and
> completely remove the junk locks (a very difficult optimization), or
> you declare a dummy object to be finalized and make the exchange
> procedure be the finalization routine. Both approaches are very kludgy,
> but at least the finalization approach should generate semi-reasonable
> code (at least avoiding the likely extra kernel calls in the protected
> object case).

Well, this is a side issue, but: Why not put the abort-deferred code in
an Initialize routine, which is also abort-deferred?  The compiler could
special-case limited controlled types that do not override Finalize, and
generate code that is no less efficient than GNAT's pragma Abort_Defer.
The overhead of controlled types comes from Finalize, not from
Initialize.  That is, if Finalize does nothing, there's no need to hook
objects onto finalization lists and unhook them, and deal with
exceptions and so forth.  This seems like a pretty straightforward
optimization.

> So, in short, it sure is VERY important that limited types do not mysteriously
> disappear as far as initialize and finalize go. Indeed during the language
> development, we deliberately decided we could do without block finalization
> on the grounds that you can always declare a dummy object and use the
> finalization of the dummy object for this purpose.
>
> Actually it is a little obscure that this only works in the limited case,
> I don't see the RM justification for this?

It doesn't -- if you don't do any assignments, so Adjust never happens,
limited and non-limited are equivalent.  The question is, do we want to
change that fact.  I suspect not.  But I do see *some* point in
eliminating pseudo-dead non-limited controlled variables, whereas I see
*no* point in eliminating pseudo-dead *limited* controlled variables
(which may well exist purely for the side effects of their Initialize
and/or Finalize).  So, as far as the RM is concerned, until
"interpreted" otherwise by the ARG, the only permissions to play games
apply to Adjust/Finalize pairs, and not Initialize/Finalize pairs.

- - Bob

****************************************************************

!section 7.6(17)
!subject Can "A:=B" be two assignment operations ?
!reference RM95-7.6(17)
!reference RM95-9.8(11)
!from Erhard Ploedereder 96-06-30
!reference 96-5618.a Erhard Ploedereder  96-6-29>>
!discussion

For controlled types, RM95-7.6(17) describes the canonical semantics of
"A := B" as assigning the value of B to an anonymous object, adjusting the
value and then assigning the value of the anonymous object to A (plus doing
the finalizations involved).  So, there are really two assignments taking
place.

Is such an (implementation of) "A:=B" one or two assignment operations for
the purposes of 9.8(11) ? Put differently, is "A:=B" an abort-deferred
operation in its entirety or is it possible that an abortion may take place
after the value of the anonymous object is already adjusted, but "A" is as
yet untouched ?

If Adjust or Finalize have side-effects, the question above is semantically
significant to implementers and testers of 9.8(11) -- maybe not so much to
users.

[Note: ACVC-test C980003 wants to know to check abort-deferral of
assignments in general.]

****************************************************************

!section 7.6(17)
!subject Can "A:=B" be two assignment operations ?
!reference RM95-7.6(17)
!reference RM95-9.8(11)
!reference 96-5618.a Erhard Ploedereder 96-06-30
!from Tucker Taft 96-06-30
!reference 96-5619.a Tucker Taft 96-6-30>>
!discussion

> For controlled types, RM95-7.6(17) describes the canonical semantics of
> "A := B" as assigning the value of B to an anonymous object, adjusting the
> value and then assigning the value of the anonymous object to A (plus doing
> the finalizations involved).  So, there are really two assignments taking
> place.

Correct, though implementation permissions allow the two to be combined.

> Is such an (implementation of) "A:=B" one or two assignment operations for
> the purposes of 9.8(11) ? Put differently, is "A:=B" an abort-deferred
> operation in its entirety or is it possible that an abortion may take place
> after the value of the anonymous object is already adjusted, but "A" is as
> yet untouched ?

Paragraph 9.8(11) says "assignment operation" not "assignment statement"
so it seems that an abort between the two assignments is perfectly
acceptable.  Of course, the anonymous object needs to get cleaned up
properly.

> If Adjust or Finalize have side-effects, the question above is semantically
> significant to implementers and testers of 9.8(11) -- maybe not so much to
> users.

There are implementation permissions to eliminate Adjusts and Finalizes,
so the side effects had better be benign.

> [Note: ACVC-test C980003 wants to know to check abort-deferral of
> assignments in general.]

- -Tuck

****************************************************************

From: 	Tucker Taft
Sent: 	Tuesday, March 02, 1999 9:55 PM
Subject: 	Re: AI-00147 revised

> An implementation is permitted to omit implicit Initialize, Adjust and Finalize
> calls and associated assignment operations on an object of non-limited controlled
> types if
>   a) any change to the value of the object performed by these calls or
>      assignment operations do not affect the external effect of any
>      program execution, and

I don't find this wording very understandable.  You explain things
better later.  The presumption is that after a Finalize, there
is no reference to the value anyway.  So all we care about is that
between the return from Initialize/Adjust, and the call to Finalize,
the value of the object is not used.  Another way to put it is that
if the Initialize is omitted, then the value is undefined until the
next copy/Adjust (if any), and the Initialize can be omitted only
if the content of this undefined value does not affect the
external interactions of the program up to that next Adjust.  This is
similar to the permission in 11.6(5) to omit raising an exception upon a
check failure.  The raise may be omitted only if the content of the
undefined value wouldn't alter the external interactions.

The key point is that the compiler does *not* need to look "inside" the
Initialize, Adjust, or Finalize routine to make this determination.
All it needs to notice is whether the content of the object is referenced
(in an externally significant way) between the initial Initialize/Adjust
call and the matching Finalize.

>   b) any execution of the program that executes an Initialize or Adjust
>      call on an object or initializes the object by an aggregate will
>      also later execute a Finalize call on the object and will always do
>      so prior to assigning a new value to the object.

I also have trouble with this wording.  You are implying that an
initialize/finalize can be omitted *if* (a) and (b), but I think rather
you want to say that "if" (a) is true, and that *after* removing the pair,
(b) must still be true.  (b) is certainly true *before* you omit the pair,
which seems to be all that the word "if" is requiring.

> ...

-Tuck

****************************************************************

From: 	Tucker Taft
Sent: 	Friday, April 16, 1999 3:14 PM

Tom Moran wrote:
>
> I found AI-00147, which as far as I can see gives no significant
> arguments *for* eliminating in the non-limited case, except "doing
> Initialization/Finalization is expensive".  It seems to that either
> the programmer intended the object to be declared, even though he
> doesn't use it, or it was a mistake on his part and a warning should
> be generated (as for unused Integer, Boolean, and other non-controlled
> types).  So possibly such a warning in the non-limited case would be
> reasonable, but simply assuming the programmer did not really intend
> to declare the controlled object, is hubris.
>   Or is there another AI that I missed?

Here is a motivating example:

    X : My_Controlled_Type;
  begin
    ...
    X := Y;
    ...
  end;

Presuming My_Controlled_Type has a user-defined Initialize,
Finalize, and Adjust, the sequence of implicit calls currently
required is:
    Initialize(X);
     ...
    Finalize(X)
    <copy Y to X>
    Adjust(X)
    ...
    Finalize(X)

The goal is to eliminate the first Initialize/Finalize pair.
Doing this requires some care, because if an exception is raised
before the assignment of Y to X, you don't want X to be finalized
if it has never been initialized.

Perhaps one way to resolve the concern is to give the permission
to remove an Initialize/Finalize pair only if there is an assignment
(presumably with Adjust) to the object, essentially allowing
the creation of the object to be deferred until the first assignment,
thereby bypassing the need for any default initialization.

-Tuck

****************************************************************

From: 	Robert Dewar
Sent: 	Saturday, April 17, 1999 8:33 AM

My view is that this kind of optimization is unacceptable unless someone
shows a REAL program where it makes a REAL difference.

It is one thing to let optimizations affect exceptions, quite another to
let them introduce non-deterministic semantics.

I think this decision is a bad one, and it seems from the CLA discussion
that I am not alone ...

****************************************************************

From: 	Robert A Duff
Sent: 	Saturday, April 17, 1999 8:34 AM

Tucker wrote:

> Here is a motivating example:
>...

Another example: a function declares a local variable, does some stuff,
returns the local, and the caller uses the function result to initialize
an object (in an object declaration, say).  There's a lot of shuffling
of data going on, and it would be nice to let the compiler eliminate
some or all of it.  Eg, the compiler can often arrange for the local
inside the function to occupy the same storage as the new object, thus
avoiding lots of copying.  One can imagine a whole chain of function
calls like this.  It would be a shame to disable that sort of
optimization for controlled things.

Nondeterministic semantics makes *me* uneasy, too, but in this case it
seems worth it.

- Bob

****************************************************************

From: 	Pascal Leroy
Sent: 	Monday, April 19, 1999 5:19 AM

> My view is that this kind of optimization is unacceptable unless someone
> shows a REAL program where it makes a REAL difference.

I think that the argument goes both ways: it doesn't seem very good to prevent
this kind of optimization unless someone shows a REAL program where it makes a
REAL difference.

I have not followed the CLA discussion, but if someone came up with real-life
examples where the optimization (or its absence) made a difference, it would
be interesting to know.  Everybody can come up with 20-line examples to make a
point (and the ARG has done a lot of that on this AI) but the evidence is
inconclusive unless it's real code from a real project.

I must say that I am equally uncomfortable with the notion of making the
dynamic semantics non-deterministic and with the notion of insisting on a
pedantic and costly canonical model.

Pascal

****************************************************************

From: 	Robert A Duff
Sent: 	Monday, April 19, 1999 7:40 AM

> I must say that I am equally uncomfortable with the notion of making the
> dynamic semantics non-deterministic and with the notion of insisting on a
> pedantic and costly canonical model.

FWIW, making the semantics deterministic is a bigger change to the
language.

- Bob

****************************************************************

From: 	Robert Dewar
Sent: 	Monday, April 19, 1999 9:08 AM

<<I think that the argument goes both ways: it doesn't seem very good to prevent
this kind optimization unless someone shows a REAL program where it makes a
REAL difference.
>>

First, we definitely have some customer code which uses controlled types
of this kind to achieve local finalization. Yes, they could be limited,
but the RM does not require this, and this is a change they would have
to make now.

Second, optimizations are supposed to be improvements in your code that
are *TRANSPARENT* to the semantic behavior.

I think it is a VERY bad precedent to allow an optimization that has
a fundamental effect on the semantics of the program, NOT related to
raising of built-in exceptions.

This is not just an interpretation, it is a significant change in the
language, and for that we need a significant justification.

The justification that says "this change in the language may allow some
programs to run faster, but we have no data to support this claim" is
FAR too weak to justify an incompatible language change that *WILL*
cause trouble for existing programs.

You say we should not prevent an optimization. BUT THIS IS NOT AN
OPTIMIZATION. I don't have any quarrel with the optimization; my
quarrel is with the fact that this is a language change!

In GNAT, we have introduced a pragma

pragma Finalize_Storage_Only (controlled type);

which tells the compiler that the finalization routine is present ONLY
for storage reclamation purposes (i.e. that it has no semantics, and can
therefore be omitted on an optimization basis, e.g. at the outer level, or
in our Java implementation all the time).
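For illustration, a sketch of how the GNAT-specific pragma might be applied (the package, type, and component names are hypothetical):

```ada
with Ada.Finalization;
with Ada.Unchecked_Deallocation;

package Buffers is
   type Int_Ptr is access Integer;
   type Buffer is new Ada.Finalization.Controlled with record
      Data : Int_Ptr;
   end record;
   procedure Finalize (B : in out Buffer);
   --  Finalize only reclaims heap storage; it has no other semantic
   --  effect, so the compiler may omit the call where the storage is
   --  about to disappear anyway (e.g. at the outer level).
   pragma Finalize_Storage_Only (Buffer);
end Buffers;

package body Buffers is
   procedure Finalize (B : in out Buffer) is
      procedure Free is new Ada.Unchecked_Deallocation (Integer, Int_Ptr);
   begin
      Free (B.Data);  -- deallocation only; no visible side effects
   end Finalize;
end Buffers;
```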

I think the proper approach here is to use this or a similar pragma to give
the compiler permission to make the "optimization".

Using other languages, many programmers are quite familiar with the idea
that optimization changes the behavior of the program. They don't like it,
and often they decide they cannot risk doing ANY optimization.

In Ada, we have tried to be very careful to reassure people that, except
for the case of built-in language checks (something people do not care
too much about when talking about high performance, since such checks are
most often suppressed), OPTIMIZATION in Ada IS OPTIMIZATION.

It does NOT change the semantics.

This proposed decision by the ARG is a major step backwards. The only
possible justification would be if we knew it had a major performance
significance, but we don't know that.

I run across cases all the time where an optimization would be attractive
but is prevented by the Ada semantics. My reaction is to forget the
optimization, not go changing the language to permit implementation
dependent semantics!

Robert

****************************************************************

From: 	Tucker Taft
Sent: 	Monday, April 19, 1999 10:02 AM

But Robert, we already have a number of permissions relating
to removing controlled intermediaries...

In any case, I am getting more enamored of an intermediate position,
which restricts the permission to apply only to types for
which there is no user-defined Initialize procedure.  Without
a user-defined Initialize procedure, there is no real guarantee
of atomicity relative to abort anyway, since component default
initializations can occur pretty much in any order.  Also, all
the examples which seem to cause problems involve types with
user-defined Initialize procedures, be they limited or non-limited.
In general, programmers should (already?) be aware that defining their own
Initialize procedure increases the overhead associated with
controlled types, while it provides more atomicity guarantees.

-Tuck

****************************************************************

From: 	Robert Dewar
Sent: 	Monday, April 19, 1999 10:35 AM

I like this better, note that nearly 100% of the time, such cases will
correspond to the Finalize_Storage_Only case anyway, where it is entirely
reasonable to optimize.

****************************************************************

From: 	Gary Dismukes
Sent: 	Monday, April 19, 1999 12:13 PM

I think I can get behind this proposed revision. :-)

This sounds like a reasonable compromise and addresses my
concerns about affecting existing code and surprising users.

Time for another round of revision on the AI...

****************************************************************

From: 	Robert A Duff
Sent: 	Monday, April 19, 1999 12:04 PM

> I think it is a VERY bad precedent to allow an optimization that has
> a fundamental effect on the semantics of the program, NOT related to
> raising of built-in exceptions.

I prefer to say "change the behavior" rather than "change the semantics"
-- semantics being the whole set of possible behaviors a program might
have, so optimizations can't change semantics by definition.

Yes, the rules we're talking about allow an optimization to change the
behavior of a program (compared to without optimization, or compared to
some other optimizer, or whatever).  But there are lots of other places
in Ada that have the same property (in addition to 11.6, which you say
doesn't count).  For example, order of evaluation of actual parameters.
(I would have preferred a strict left-to-right order, for the same
reasons you're worried about this controlled-types stuff.  Presumably
the decision was made to allow arbitrary order because somebody thought
it would increase efficiency, and people find side effects distasteful
anyway, so the order shouldn't really matter too much.  Note that the
controlled types issue is similar -- the optimization mainly affects
programs that have "evil" side effects.)  Another example is
pass-by-copy vs pass-by-reference.  Thankfully, Ada has fewer such cases
than some other languages.

I usually argue against non-determinism.  But in this case, it seems
worthwhile -- I hardly think it's precedent-setting.

> Can you elucidate this comment ...
>
>> Making the semantics deterministic is a bigger change to the
>> language.
>>

I just meant that the RM clearly allows *some* of the behavior-changing
optimizations related to eliminating Finalize and friends.  So
forbidding any such optimizations for controlled types seems like a
radical change to the language; a change in philosophy, whereas merely
changing (and/or clarifying) exactly which such optimizations we're
going to allow seems like a minor tweak to the language (albeit not
necessarily an upward compatible one).

I guess I'm pretty ambivalent on this issue.  I could certainly live
with either the AI as currently written up, or with Tucker's latest
compromise solution.  I could even live with forbidding all such
optimizations, but as I said, that seems like a bigger language change
to me.

- Bob

****************************************************************

From: 	Randy Brukardt
Sent: 	Monday, April 19, 1999 1:52 PM

I assume this discussion is about AI-0147. Not all of the mail makes this
clear, perhaps some of the participants need to go back and re-read the AI...

What I find most important is that a decision is made on this AI, so it can
be enforced. If we don't get this done, users will not have any recourse if
compilers implement the rules as written in the RM. (This has caused
problems in the past.)

In any case, I don't understand Tucker's "intermediate position". I don't
know exactly what permission he is talking about, and I suspect that Tucker
and Robert are interpreting his position differently.

I can think of three possible interpretations of Tucker's "intermediate
position":

1) He means all of the permissions of this AI. That would mean that the
assignment of a type with a user-defined initialize routine gets the
canonical semantics - which means two Adjusts and two Finalizes.

I certainly hope this isn't what Tucker meant. Such a change would make
most compilers need to change, and would make Claw quite a bit slower
(since most Claw controlled types have user-defined initialize routines,
and some are assigned often). How significant this would be, I don't know,
but I don't believe this is what Tuck meant (although it might have been
what Robert thought Tuck meant.)

2) He means just the permission to remove Initialize calls, and he meant
what he said: user-defined Initialize calls cannot be removed.

Such a position is exactly the same as saying that Initialize calls can't
be removed - as the predefined Initialize does nothing. I don't see how
this can be an "intermediate" position - it seems to be a complex way of
saying one of the endpoint positions. Therefore, I think that Tucker must
have had something else in mind.

3) He means just the permission to remove Initialize calls, and he meant
that top-level user-defined Initialize calls cannot be removed, but
user-defined initialize calls in components can be removed.

I find this idea to be silly. This just makes a more complicated
permission, with no corresponding benefit. If the user wraps the controlled
(lock) object in a record, suddenly the Initialize call can be removed.


The primary argument for allowing the removal of Initialize is that the
permission to remove operations in the other, similar cases already exists.
If Control is a Controlled type:

     declare
          X : Control; -- Can we remove the object, Initialize, and Finalize?
          Y : Control := (aggregate); -- We can remove the object and the Finalize.
          Z : Control := X; -- We can remove the object, Adjust, and Finalize.
     begin
          null;
     end;

Why should X be treated differently than Y and Z?

Note that a non-limited Lock type is a very dubious construct. (Even if
Robert's customers have already written it). Assignments of the lock would
have to have clearly defined semantics (probably to clone the lock), and
would make it easy to hold the lock forever (by copying it into something
that never goes away).
Another possible semantics is for assignment to NOT copy the lock, but then
of course assignment is a vacuous operation, and it would have been better
for the user to have declared the type limited to avoid the operation in
the first place.

I suspect that most, if not all, people who have such locks have never
thought about the implications of making the item non-limited, especially
what it means to assign it.

In any case, this is an implementation permission, and Robert is free to
ignore it. Then GNAT wouldn't break the user's code, even if it is not
guaranteed to work on other compilers. RRS is likely to ignore most such
permissions, because they confuse users more than they save. But it is
clear we need a permission to eliminate "extra" operations on assignments,
and how far that is taken is only a matter of degree.

Therefore, I would prefer to leave the AI alone, as it is most consistent
the way it currently is. My second choice is to placate people by simply saying
that Initialize calls are never "optimized" away; but of course that
doesn't apply to Adjust or Finalize calls. More restrictive rules are more
likely to make Ada code perform poorly when compared to other languages,
and don't buy us much.

                                        Randy.

****************************************************************

From: 	Tucker Taft
Sent: 	Monday, April 19, 1999 6:00 PM

> I can think of three possible interpretations of Tucker's "intermediate
> position":
> 
> 1) He means all of the permissions of this AI. That would mean that the
> assignment of a type with a user-defined initialize routine gets the
> canonical semantics - which means two Adjusts and two Finalizes.
> 
> I certainly hope this isn't what Tucker meant. 

It isn't.

> 2) He means just the permission to remove Initialize calls, and he meant
> what he said: user-defined Initialize calls cannot be removed.
> 
> Such a position is exactly the same as saying that Initialize calls can't
> be removed - as the predefined Initialize does nothing. I don't see how
> this can be an "intermediate" position - it seems to be a complex way of
> saying one of the endpoint positions. Therefore, I think that Tucker must
> have had something else in mind.

This is in fact what I had in mind.  It is apparently more acceptable
to Robert and Gary, and it does, I believe, resolve most of the
complaints.  You are right that I could have expressed the
final rule more simply, but I was trying to relate it to
the existing proposal.  It means effectively that user-defined Initialize
calls cannot be optimized away (unless the compiler "looks inside" them
somehow).  In my experience, user-defined Initialize procedures are 
relatively rare for non-limited controlled types.  (Your experience
may of course differ.)  They are needed only if there is some
kind of default initialization that needs protection against
abort, which seems to be exactly the kind of thing which might be bad
news to remove.  There are a number of simplifications possible for
controlled types with no user-defined Initialize routines, so users
who are concerned about performance would presumably avoid creating
"trivial" Initialize routines, and only create one when they really
have something "important" to do, abort-deferred.

> 3) He means just the permission to remove Initialize calls, and he meant
> that top-level user-defined Initialize calls cannot be removed, but
> user-defined initialize calls in components can be removed.

Nope, I didn't mean that.

Basically I was convinced by the well-founded claim that many Ada 
programmers avoid "limitedness" like the plague.  However, to use
a controlled type to do something like seize a lock, you pretty
much have to have a user-defined Initialize routine, so that seems
like a safer way to identify a "hands-off" situation than requiring
all "hands off" controlled types to be limited.
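The lock idiom Tucker describes can be sketched as follows (a hedged illustration: the package name and the underlying protected object are invented here; the point is only that seizing the lock forces a user-defined Initialize):

```ada
with Ada.Finalization;

package Scoped_Locks is
   --  Deliberately non-limited, as in the discussion; note that
   --  assignment of a Lock_Holder is exactly the hazard raised above.
   type Lock_Holder is new Ada.Finalization.Controlled with null record;
   procedure Initialize (L : in out Lock_Holder);  -- seizes the lock
   procedure Finalize   (L : in out Lock_Holder);  -- releases the lock
end Scoped_Locks;

package body Scoped_Locks is
   protected Lock is
      entry Seize;
      procedure Release;
   private
      Busy : Boolean := False;
   end Lock;

   protected body Lock is
      entry Seize when not Busy is
      begin
         Busy := True;
      end Seize;
      procedure Release is
      begin
         Busy := False;
      end Release;
   end Lock;

   procedure Initialize (L : in out Lock_Holder) is
   begin
      Lock.Seize;
   end Initialize;

   procedure Finalize (L : in out Lock_Holder) is
   begin
      Lock.Release;
   end Finalize;
end Scoped_Locks;

--  Usage: declaring an (otherwise unreferenced) object holds the lock
--  for the enclosing scope:
--     declare
--        Guard : Scoped_Locks.Lock_Holder;
--     begin
--        ...  -- protected region
--     end;   -- Finalize releases the lock
```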


>                                         Randy.

-Tuck

****************************************************************

From: 	Robert Dewar
Sent: 	Monday, April 19, 1999 6:21 PM

<<Basically I was convinced by the well-founded claim that many Ada
programmers avoid "limitedness" like the plague.  However, to use
a controlled type to do something like seize a lock, you pretty
much have to have a user-defined Initialize routine, so that seems
like a safer way to identify a "hands-off" situation than requiring
all "hands off" controlled types to be limited.
>>

Right, that is EXACTLY my feeling, and is why I agree with this proposal.

****************************************************************

From: 	Randy Brukardt
Sent: 	Monday, April 19, 1999 7:45 PM

>> 2) He means just the permission to remove Initialize calls, and he meant
>> what he said: user-defined Initialize calls cannot be removed.
>>
>> Such a position is exactly the same as saying that Initialize calls
can't
>> be removed - as the predefined Initialize does nothing. I don't see how
>> this can be an "intermediate" position - it seems to be a complex way of
>> saying one of the endpoint positions. Therefore, I think that Tucker
must
>> have had something else in mind.

>This is in fact what I had in mind.  It is apparently more acceptable
>to Robert and Gary, and it does, I believe, resolve most of the
>complaints.  You are right that I could have expressed the
>final rule more simply, but I was trying to relate it to
>the existing proposal.  It means effectively that user-defined Initialize
>calls cannot be optimized away (unless the compiler "looks inside" them
>somehow).

OK, but I don't think this is new - it is just the original proposal that was
defeated at the Paris meeting and again in Erhard's E-Mail vote.

>In my experience, user-defined Initialize procedures are
>relatively rare for non-limited controlled types.  (Your experience
>may of course differ.)  They are needed only if there is some
>kind of default initialization that needs protection against
>abort, which seems to be exactly the kind of thing which might be bad
>news to remove.

Not exactly. If the initialization needs control structures (i.e. IF), it
is much more convenient to write that as an Initialize routine rather than
as a series of functions. If the initialization requires interaction
between multiple components, it also might need to be a routine. (Note that
the second hasn't happened to Claw).
Claw uses the Initialize routine to set up the clone pointers (which point
to self at initialization); I don't recall if this is legal in
initialization expressions for a non-limited type.

>There are a number of simplifications possible for
>controlled types with no user-defined Initialize routines, so users
>who are concerned about performance would presumably avoid creating
>"trivial" Initialize routines, and only create one when they really
>have something "important" to do, abort-deferred.

I suppose this is not much different than requiring users to look at making
locks limited, but it might force significant redesigns.

>Basically I was convinced by the well-founded claim that many Ada
>programmers avoid "limitedness" like the plague.

They need some re-education. :)

>However, to use
>a controlled type to do something like seize a lock, you pretty
>much have to have a user-defined Initialize routine, so that seems
>like a safer way to identify a "hands-off" situation than requiring
>all "hands off" controlled types to be limited.

Certainly, I would rather that the AI be reverted to this form (this was
the way it was written before the Paris meeting, I believe), than that it
not get approved at all. I've personally never cared that much about this
issue - my main concern is that bogus optimizations allowed by the RM get
deep-sixed, the sooner the better. Therefore, I would switch my vote here
if it would help get this AI done. I fear, however, that procedurally
we will put off the completion of the AI another year, and it certainly
will miss the Corrigendum (meaning that the test suite won't be able to
enforce the ruling for a long while).

                                Randy.

****************************************************************

From: 	Gary Dismukes
Sent: 	Monday, April 19, 1999 11:43 PM

> Certainly, I would rather that the AI be reverted to this form (this was
> the way it was written before the Paris meeting, I believe), than that it
> not get approved at all. I've personally never cared that much about this
> issue - my main concern is that bogus optimizations allowed by the RM get
> deep-sixed, the sooner the better. Therefore, I would switch my vote here
> if it would help get this AI done. I fear, however, that procedurally
> we will put off the completion of the AI another year, and it certainly
> will miss the Corrigendum (meaning that the test suite won't be able to
> enforce the ruling for a long while).

While I agree it would be nice to have this included in the Corrigendum
(though hardly earth-shattering if it isn't), I'm not sure why you're so
concerned about "enforcing the ruling".  It seems that the only thing
you can reasonably test is that Initialize doesn't get optimized away,
assuming that we go with the "new" revised rules (that should be tested
I suppose, but I'm not too worried that compilers are going to violate
it if that's what we decide).  If we stick with the current write-up
for AI-147, then what are you going to "enforce", that compilers don't
do too much optimization?  That doesn't seem worth expending the effort
to test, even if feasible.

****************************************************************

From: 	Erhard Ploedereder
Sent: 	Tuesday, April 20, 1999 4:24 AM

> What I find most important is that a decision is made on this AI, so it can
> be enforced. If we don't get this done, users will not have any recourse if

No way.  This being a permission, there is no test to enforce it. In fact,
some of the discussion reads as if this were seen as a requirement, which it
surely isn't. Nor does GNAT have to implement it, if the GNAT-suppliers so
decide. The ACATS impact is that some existing tests may need to be modified
to allow for the permission. This is equally true, if the AI is scaled back
to Adjust/Finalize (again) or recast as Adjust/user-defined Initialize/Finalize.

On the discussion per se:
It is somewhat illuminating to try to write examples with non-limited types
and unreferenced lock variables to implement locking semantics and then do the
experiment: "What if, in the course of maintenance, somebody decides to
assign a value that includes a lock." Try to write the "Adjust" such that the lock
protection remains (presuming for the moment that the maintainer is aware
of the lock at all; if he isn't, he's in deep trouble to begin with, as the
lock will go away silently with the first finalization that now happens on either
copy.) You will find yourself in deeeeeeep trouble (at least I do) trying to
write the Adjust, because of the indeterminate number of Adjust calls.

Somewhat pointedly, one can then ask the question: "Do I prefer to be caught
by an optimization destroying my envisaged lock protection, or would I
rather be screwed during maintenance?" Put differently: "When do I learn to
make things limited when they need to be limited? Sooner or later, I will.
All else is pretty risky." But then, programmers like to live in the fast
lane, don't they :-) Let's save the "limited" keystrokes and (let somebody
else) live dangerously.

Erhard

****************************************************************

From: 	Robert Dewar
Sent: 	Tuesday, April 20, 1999 9:04 AM

Well if you want to worry about this (I think it is a marginal case), then
just write an Adjust that raises Program_Error! That should help the
incompetent maintenance programmer :-)

Remember that one reason for not making it limited is that it means you
can initialize it with just an initialization expression, and I have
seen this done!

Let me say one thing about Erhard's point about maintenance. I would
accept this argument *during* the design, but not after the standard
is issued!

The point here is that we have working portable programs, written according
to the rules in the RM. And the ARG has just made a ruling that these
programs are no longer portable, but rather implementation dependent.

This means that the programs MAY but not necessarily WILL stop working
suddenly.

That is the worst kind of disruption that the ARG can cause. They had
better have a very good reason for it.

Let's ask the reason for creating this incompatible chaos:

"Well, we think it might speed up some programs."

"How significant is this speed up?"

"We haven't any idea, we never measured it."

Pretty weak ground if you ask me!

****************************************************************

From: 	Erhard Ploedereder
Sent: 	Tuesday, April 20, 1999 10:07 AM

> Remember that one reason for not making it limited is that it means you
> can initialize it with just an initialization expression, and I have
> seen this done!

That's the first technical argument that I have seen that really holds
water. We got the programmers between a rock and a hard place with this
prohibition of initialization for limited types. (Where did that one come
from, anyway ? Surely, an aggregate could be allowed, especially in view
of the "must build in place"-AI, but that's a different story.)

However, when you read AI-147, it does not extend its permission to
this case, i.e., in this case the explicit initialization and the
finalization will happen.

..unless the initialization is provably side-effect free, in which case the
general observability argument might be invoked (see below). But that won't
be the case for the locking example.

Relatedly, a question on Tuck's compromise: Does that include elimination of
a corresponding call on user-defined Finalize, if the Initialize is
defaulted ?

Erhard

****************************************************************

From: 	Tucker Taft
Sent: 	Tuesday, April 20, 1999 11:26 AM

Erhard Ploedereder wrote:
> ...
> Relatedly, a question on Tuck's compromise: Does that include elimination of
> a corresponding call on user-defined Finalize, if the Initialize is
> defaulted ?

Yes, the point of the permission is to allow removal of the
Finalize without having to "look inside" it.  We would not be
permitting the removal of user-defined Initialize.

Randy should speak for himself, but I suspect his interest in this
AI is that it talks about other issues where he wants to create
tests to be sure that compilers are not overstepping their bounds
in optimizing away stuff.  He just wants this Initialize/Finalize
part to be resolved, so he can write tests for the other parts,
where he has already been burned by over-aggressive compilers.

As far as your general point about initializing limited types,
I agree that we could get creative here some day.  What would
make sense to me is if you could initialize from an aggregate
or a function call, but not by copying an existing limited object.
The function would similarly be required to build its return
value "in place."  There are a lot of details to work out,
but it would make an interesting "amendment" proposal.  But
that is certainly for another day ;-).

> Erhard

****************************************************************

From: 	Randy Brukardt
Sent: 	Tuesday, April 20, 1999 12:32 PM

>While I agree it would be nice to have this included in the Corrigendum
>(though hardly earth-shattering if it isn't), I'm not sure why you're so
>concerned about "enforcing the ruling".  It seems that the only thing
>you can reasonably test is that Initialize doesn't get optimized away,
>assuming that we go with the "new" revised rules (that should be tested
>I suppose, but I'm not too worried that compilers are going to violate
>it if that's what we decide).  If we stick with the current write-up
>for AI-147, then what are you going to "enforce", that compilers don't
>do too much optimization?  That doesn't seem worth expending the effort
>to test, even if feasible.

Yes, we need to test that compilers don't take advantage of the repealed parts
of 7.6(21). (And if the AI is rewritten to prohibit optimization of Initialize,
that should be tested as well - although I think that may already be tested).
The permissions of 7.6(21) break real, existing programs (i.e. Claw), and they
presumably would break other people's programs as well. (This happens any time
that objects include "self" pointers). While vendors who have Windows NT
compilers have Claw to use as a test, those vendors without an NT compiler may
very well get this wrong. I would be happy to write an "extract of Claw" test
for this area, but I need the AI before I could consider issuing it.

I realize that you and Robert think that such a test would be of little value,
mainly because you don't expect anyone to take advantage of these permissions.
And I would agree with you (with my compiler hat on, I wouldn't use them,
because the tech support cost would probably outweigh any performance benefit),
except that our experience with Claw is that people do implement these
permissions - and sometimes go too far. And without a test, there is no possible
way for an implementor to find out that they have gone too far until a large
system breaks in an incredibly hard to find way.

					Randy.

****************************************************************

From: 	Randy Brukardt
Sent: 	Tuesday, April 20, 1999 2:32 PM

>>I suppose this is not much different than requiring users to look at making
>>locks limited, but it might force significant redesigns.

>How could it force significant redesigns? Do we really have figures on how
>important these optimizations are in any case?

What I was thinking was that IF the efficiency of a library was important, then
the implementor of that library would have to make sure that there were no
user-defined Initialize routines. In the case of Claw, we used those because
they were convenient, not because of any concern about abort or optimization. In
order to remove them, we would have to figure out a different way to properly
initialize objects. Converting existing Initialize routines into default
initializations is a non-trivial exercise in general.

The optimization we are discussing (whether a call to a user-defined initialize
can be removed) probably isn't very useful, as it could only apply to dead
variables. One could assume that in any truly critical code, someone would have
checked to ensure that any unused and unneeded variables have been removed, so
it would mainly be of benefit in reducing code size, not improving the speed.

OTOH, removing calls to Adjust/Finalize pairs in assignments is pretty
important, as the canonical semantics requires double the number of calls that
would be reasonably expected. Of course, if no one ever actually assigns the
objects (as often is the case with Claw objects), then it doesn't matter. But
for types like Unbounded_String, I would expect the optimizations to make a
significant difference.

I don't think anyone is arguing for the removal of all of these permissions (I
hope!), so the case in question is simply not that important.

At this point, I would prefer to simply remove the permission from the AI that
allows the removal of user-defined initializations. It seems that there will
always be a vocal minority against any such permission, and it simply is not
worth holding up the completion of the AI for such a minor benefit.

					Randy.

****************************************************************
