!standard 6.1.1(3/3)          15-06-03 AC95-00267/00
!standard 6.1.1(18/4)
!class confirmation 15-06-03
!status received no action 15-06-03
!status received 15-04-02
!subject Pre'Class and LSP when partial view is untagged
!summary
!appendix

From: Tucker Taft
Sent: Thursday, April  2, 2015  11:13 AM

Here is a tricky one.  We have started enforcing LSP in our SPARK-based
"GNATprove" tool.  It complains about the following example:

package No_Primitive is
    type Root is private;
    procedure Init (I : in out Root; C : Natural)
    with Pre => C > 0;
private
    type Pos_Access is access all Positive;
    type Root is tagged record
       F : Pos_Access;
    end record;
end No_Primitive;

GNATprove complains:

no_primitive.ads:5:16: medium: precondition is stronger than the default
class-wide precondition of True

Although we cannot specify a Pre'Class on Init (because where Init is declared,
Root is not visibly tagged).

---

Note that GNATprove requires that Pre'Class represent a "worst case" of (i.e. be
"stronger" than) the Pre of any reachable subprogram at the point of a
dispatching call. Ada makes no such requirement, though this was the ultimate
intent (at least for me) for the appropriate relationship between Pre'Class and
Pre.

---

It seems that we should consider changing the default for Pre'Class on the
primitives of a not-visibly-tagged type in a case like this, perhaps to be the
same as Pre, rather than True.  Actually, there are several alternatives (see
6.1.1(3/3)):

1) Do nothing, which means that you cannot do any useful checking at the call
   site for a dispatching call on the primitives of such a type, since there is
   no way for the Pre'Class to be a useful summary of the Pre's that might be
   encountered.  Also, you cannot usefully specify a Pre'Class on any tagged
   descendants of Root, since "True" trumps all.

2) Change the default for Pre'Class to match the Pre in a case like this.

3) Change the default for Pre'Class to be False in a case like this.  This would
   preclude making dispatching calls to primitives of Root, but it would not
   preclude specifying more relaxed Pre'Class for Init on descendants of Root.

I think I favor (2).

***************************************************************

From: Bob Duff
Sent: Thursday, April  2, 2015  1:38 PM

> 2) Change the default for Pre'Class to match the Pre in a case like this.

Why doesn't the Pre get and-ed into the Pre'Class in all cases?

***************************************************************

From: Randy Brukardt
Sent: Thursday, April  2, 2015  3:08 PM

(A) because Pre is for those cases where you don't want to be tied to the
    straitjacket of LSP.

(B) LSP requires "or" for combining, Pre uses "and" because it makes more sense.

(C) Practically, Pre'Class can only be specified on a root type. What would you
    do for a Pre on a derived operation? ("and"ing would strengthen the
    preconditions.) The whole reason we have both is so that non-LSP
    preconditions can be specified on derived operations.

***************************************************************

From: Randy Brukardt
Sent: Thursday, April  2, 2015  3:19 PM

My previous answer is confused (just like Tucker's question). The answer is that
you are correct. The precondition of any subprogram is
    (Pre'Class(self) or Pre'Class (parent) ... or Pre'Class (root)) and Pre(self)
with terms for ancestor Pre'Class omitted if there is no Pre'Class.
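
To make that concrete, here is a minimal, self-contained sketch (hypothetical
type and predicate names, trivial bodies so the spec compiles on its own) of
how those terms combine:

   package Shapes is
      type Shape is tagged null record;
      function Ready (S : Shape) return Boolean is (True);
      function Valid (S : Shape) return Boolean is (True);
      procedure Draw (S : Shape) is null
        with Pre'Class => Ready (S),  --  "or"ed with descendants' Pre'Class
             Pre       => Valid (S);  --  specific to this view; never inherited

      type Circle is new Shape with null record;
      function Round (C : Circle) return Boolean is (True);
      overriding
      procedure Draw (C : Circle) is null
        with Pre'Class => Round (C),
             Pre       => Valid (C);
      --  For a statically bound call Draw (My_Circle), My_Circle : Circle,
      --  the check is:
      --     (Ready (My_Circle) or Round (My_Circle)) and Valid (My_Circle)
   end Shapes;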

Tucker's tool needs to keep in mind what the precondition (not just the
Pre'Class or any other specific thing) actually is. After all, back when this
stuff was new, I was told by someone at AdaCore that they didn't use Pre'Class
but rather used Pre with a carefully controlled relationship. I would think any
practical proof tool would need to be able to handle such "virtual" LSP, and
only complain if *that* is violated *and* there are dispatching calls.

***************************************************************

From: Randy Brukardt
Sent: Thursday, April  2, 2015  3:35 PM

...
> GNATprove complains:
>
> no_primitive.ads:5:16: medium: precondition is stronger than the
> default class-wide precondition of True
>
> Although we cannot specify a Pre'Class on Init (because where Init is
> declared, Root is not visibly tagged).

Looks correct to me. LSP makes no sense on types without dispatching. And there
is little reason for the partial view to be untagged in such cases. (I think
that should never have been allowed in the first place, but of course we have to
live with that.)

> ---
>
> Note that GNATprove requires that Pre'Class represent a "worst case"
> of (i.e. be "stronger" than) the Pre of any reachable subprogram at
> the point of a dispatching call.
> Ada makes no such requirement, though this was the ultimate intent (at
> least for me) for the appropriate relationship between Pre'Class and
> Pre.

No, no, no!! If that was the case, we would not have allowed both at all.
The intent was for Pre to be used in circumstances where LSP did not work
(strengthening a precondition is much more likely to come up in practice than
weakening it). In that case, there is no relationship between Pre and Pre'Class.

Clearly, a proof tool will have to go beyond Ada (since proof of dispatching
calls is difficult-to-impossible without LSP), and how it does that is its
business. My rule of thumb is to keep Pre'Class and Pre completely separate, use
only one of them in a hierarchy and never mix them.

In this particular case, I'd suggest either (1) requiring the use of a tagged
partial view and Pre'Class, or (2) enforcing LSP on the set of Pre (completely
ignoring the Pre'Class, assuming none is specified). (As Bob pointed out, the
actual precondition is (Pre'Class and Pre), and the tool should require LSP on
*that*, not on Pre'Class alone; especially if Pre'Class is omitted.)

> ---
>
> It seems that we should consider changing the default for Pre'Class on
> the primitives of a not-visibly-tagged type in a case like this,
> perhaps to be the same as Pre, rather than True.

This seems to be an abuse of Pre'Class. It's not *that* special.

>  Actually, there are several alternatives (see 6.1.1(3/3)):
>
> 1) Do nothing, which means that you cannot do any useful checking at
> the call site for a dispatching call on the primitives of such a type,
> since there is no way for the Pre'Class to be a useful summary of the
> Pre's that might be encountered.

Not necessarily true, if you're doing full program analysis. If not, it seems OK
to me, since there doesn't seem to be any good reason to leave "tagged" off of a
partial view in the first place. (Why would you want to make it harder for the
clients by denying them prefix notation and extension?? The reason to use
untagged types at all is because of performance concerns [both speed and space],
otherwise tagged types should be preferred. If you have such concerns, you don't
want them ignored in the actual implementation.)

> Also, you cannot usefully specify a Pre'Class on any tagged
> descendants of Root, since "True" trumps all.

That's fine, one should only use Pre'Class on the root anyway. No one
understands "weakening" and it almost never comes up in practice, so 99% of the
time, the only place Pre'Class would be used is on the root. I prefer to just
make that a coding rule and then no one has to try to understand the 1%.

> 2) Change the default for Pre'Class to match the Pre in a case like
> this.

I'm dubious that a case like this is either valuable enough or common enough to
mess up the language that badly. (That's a *very* significant change in runtime
semantics; it means that a Pre written on a root suddenly and silently inherits
even though that doesn't happen in any other case.)

There are lots of semantic oddities about tagged types with untagged partial
views. I don't see much reason to worry about one more. If it hurts when you do
that (declare an untagged partial view of a tagged type), then don't do that!!

> 3) Change the default for Pre'Class to be False in a case like this.
> This would preclude making dispatching calls to primitives of Root,
> but it would not preclude specifying more relaxed Pre'Class for Init
> on descendants of Root.
>
> I think I favor (2).

I favor (1), because Pre and LSP are not intended to work together (and
certainly not automatically). Plus, as Bob pointed out (and I missed initially),
the actual runtime semantics can be used for LSP (in full program analysis). A
tool doing that should not have a problem. And a local proving tool probably
just shouldn't try; I don't see any reason to break the language just to allow a
tool to make an assumption that we specifically did not want to make, especially
in a case that shouldn't happen in the first place.

***************************************************************

From: Tucker Taft
Sent: Thursday, April  2, 2015  3:48 PM

>> 2) Change the default for Pre'Class to match the Pre in a case like this.
>
> Why doesn't the Pre get and-ed into the Pre'Class in all cases?

Effectively they do get "and"ed because both the Pre and the Pre'Class are
checked independently.

The question is what gets or'ed into the Pre'Class of descendants' operations.
Normally the "Pre'Class" of an ancestor has to be as restrictive as you will
ever see in any descendant, and we effectively require a non-True ancestor
Pre'Class for any descendant Pre'Class to be effective.  Leaving Pre'Class
unspecified on a root type, since it defaults to True, is sort of the "kiss of
death" for the use of Pre'Class.
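
A small sketch of the intended pattern (hypothetical names, trivial bodies so
the spec stands alone), and why an unspecified root Pre'Class defeats it:

   package Queues is
      type Queue is tagged null record;
      function Is_Full (Q : Queue) return Boolean is (False);
      procedure Enqueue (Q : in out Queue; X : Integer) is null
        with Pre'Class => not Is_Full (Q);  --  the "worst case" for the hierarchy

      type Flexible_Queue is new Queue with null record;
      overriding
      procedure Enqueue (Q : in out Flexible_Queue; X : Integer) is null
        with Pre'Class => True;             --  weakening: this one never refuses
      --  A caller that establishes the root's "not Is_Full" always satisfies
      --  the "or" of the applicable class-wide preconditions.  Had the root
      --  left Pre'Class unspecified, its class-wide precondition would default
      --  to True, so callers dispatching through Queue'Class would have
      --  nothing to establish, and nothing a descendant specifies could
      --  change that.
   end Queues;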

But you have pointed out at least two more alternatives:

4) If Pre is specified but Pre'Class is not specified on a root type, then
   Pre'Class on the root type defaults to the expression given for Pre.

5) If Pre is specified, then it is and'ed into the Pre'Class, even if it is
   explicitly specified on this operation or some ancestor's corresponding
   operation.

(2), (4), and (5) all have the advantage of making it more likely that
Pre'Class is not overly weak, since a weak Pre'Class at one level effectively
weakens Pre'Class for every descendant.

Of this augmented set, I think I like (4) the best, as it is simple to state, is
generally useful, and works whether or not we are in this funny situation of
having a non-tagged partial view.

***************************************************************

From: Tullio Vardanega
Sent: Thursday, April  2, 2015  3:51 PM

> strengthening a precondition is much more likely to come up in practice
> than weakening it

I have to say that I think exactly the same as he does in this regard. I have
yet to see a clear use model that shows the wisdom of the reverse.

***************************************************************

From: Randy Brukardt
Sent: Thursday, April  2, 2015  3:55 PM

...
> 2) Change the default for Pre'Class to match the Pre in a case like
> this.

More on this:
Depending upon the rule, this would be either a massive inconsistency (runtime
incompatibility), or it would introduce a maintenance hazard into the language
(or both!).

To me, the only sensible rule would be to do this always for tagged types.
That's inconsistent with the existing model of Pre.

   type Root is tagged ...
   function A (Obj : Root) return Boolean;
   procedure P (Obj : Root ...) with Pre => A(Obj);

   ...
   type Der is new Root ...
   function B (Obj : Der) return Boolean;
   procedure P (Obj : Der ...) with Pre => B(Obj);

   D : Der;

   P (D);

The precondition for this call of P as described in the current RM is B(D).

If (2) was adopted for all tagged types, then the precondition of this call of P
would become A(D) and B(D). That means that A would have to be satisfied, even
though it isn't involved at all with the body of P that's called. That
essentially forces LSP onto code that may have been designed without following
it. I don't see how that could be acceptable, especially as there will have been
3-5 years of use of these features by the time such a change was made to the
language.

If one tried to only apply (2) in more limited circumstances, then one has a
significant maintenance hazard. For instance, if the rule only applies to types
with an untagged partial view, then adding or removing "tagged" to a partial
view would silently change the effective preconditions of all of the routines
that inherit from the type with the untagged partial view. (And if there aren't
any, the rule has no effect anyway.)

If it only applies to tagged types with partial views, then we still have a
maintenance hazard, as adding a partial view (deciding to make the type private)
again silently changes the preconditions on routines that inherit from it.

Even the broad rule has a minor maintenance hazard, in that adding "tagged" to a
type would silently change the preconditions on inherited routines. (This is
minor because all of the child types would also need to have their type
declarations changed, so most people wouldn't be too surprised by a semantic
change.)

Anyway, we spent a lot of effort getting these rules so they work consistently
without maintenance hazards. I'd hate to see us introduce them now.

***************************************************************

From: Tucker Taft
Sent: Thursday, April  2, 2015  3:57 PM

>> ... Note that GNATprove requires that Pre'Class represent a "worst
>> case" of (i.e. be "stronger" than) the Pre of any reachable
>> subprogram at the point of a dispatching call.
>> Ada makes no such requirement, though this was the ultimate intent
>> (at least for me) for the appropriate relationship between Pre'Class
>> and Pre.
>
> No, no, no!! If that was the case, we would not have allowed both at all.
> The intent was for Pre to be used in circumstances where LSP did not
> work (strengthening a precondition is much more likely to come up in
> practice than weakening it). In that case, there is no relationship
> between Pre and Pre'Class.

We have adopted a different view for SPARK, but I understand yours as well.  So
let's not get caught up with that.  Let's return to the original question: what
is the Pre'Class on an operation of a type with a tagged full view and an
untagged partial view.

You are suggesting we stick with the existing semantics, which seems to be that
you end up with a Pre'Class of True, and you can't do anything about it.
Dispatching calls to an operation of such a type that has a Pre are not
recommended, and perhaps would be disallowed in SPARK.  I suspect we can live
with that...

***************************************************************

From: Randy Brukardt
Sent: Thursday, April  2, 2015  4:15 PM

> > No, no, no!! If that was the case, we would not have allowed both at all.
> > The intent was for Pre to be used in circumstances where LSP did not
> > work (strengthening a precondition is much more likely to come up in
> > practice than weakening it). In that case, there is no relationship
> > between Pre and Pre'Class.
>
> We have adopted a different view for SPARK, but I understand yours as
> well.

Right. It makes sense that a proof tool might have to enforce usage rules beyond
those that the language imposes. (That's the whole reason that SPARK is a
subset, after all.)

> So let's not get caught up with that.  Let's return to the original
> question: what is the Pre'Class on an operation of a type with a
> tagged full view and an untagged partial view.
>
> You are suggesting we stick with the existing semantics, which seems to
> be that you end up with a Pre'Class of True, and you can't do anything
> about it.  Dispatching calls to an operation of such a type that has a
> Pre are not recommended, and perhaps would be disallowed in SPARK.  I
> suspect we can live with that...

Yes. Most of the time, the problem can be fixed by simply making the partial
view tagged and changing to using Pre'Class. If one can't do that, it seems most
likely because the author wants to deny clients the ability to dispatch and
extend the type. (Wanting to deny prefix notation is pointless, IMHO). In such a
case, it shouldn't be surprising that you're denying yourself that ability as
well (in terms of proof). I doubt this sort of thing is common.
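
For instance, the original example reworked that way (only the partial view
and the aspect change), giving the tool a class-wide contract to check at
dispatching call sites:

package No_Primitive is
    type Root is tagged private;
    procedure Init (I : in out Root; C : Natural)
    with Pre'Class => C > 0;
private
    type Pos_Access is access all Positive;
    type Root is tagged record
       F : Pos_Access;
    end record;
end No_Primitive;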

I suppose that one somewhat common reason for a tagged completion to an
untagged type is that the completion has to be controlled. In that case, you
have no intent of doing any extension or dispatching anywhere, it's just about
getting Finalize, so it is no hardship if dispatching is not allowed for proof
purposes - you aren't using it anyway.

***************************************************************

From: Randy Brukardt
Sent: Thursday, April  2, 2015  4:05 PM

...
> 4) If Pre is specified but Pre'Class is not specified on a root type,
> then Pre'Class on the root type defaults to the expression given for
> Pre.

How does this differ from (2)? I thought this is what you originally meant by:

> 2) Change the default for Pre'Class to match the Pre in a case like this.

I tried all of the possibilities for "a case like this", and none of them make
any semantic sense; they all cause some sort of maintenance problem. The best is
what you're calling (4) here, but even that has problems. We worked very hard so
that the rules for all kinds of types are the same for preconditions; it would
be terrible to go away from that.

We did not intend to force LSP on anyone that did not want to have to conform to
it. Any of these proposals would do exactly that.

Secondly, all of these ideas would change (strengthen) the effective
preconditions on (some) existing inherited subprograms. That could cause
programs to fail at runtime. Please, let's not go there.

There is no way for a local proof tool to handle all of the possibilities
provided by Ada, since Ada allows preconditions to be tied to particular bodies
(this comes up both for dispatching calls and for calls through
access-to-subprogram values). Such a tool will have to enforce some
restrictions. That's not the language's problem.

***************************************************************

From: Tucker Taft
Sent: Thursday, April  2, 2015  4:03 PM

> ...
>> 2) Change the default for Pre'Class to match the Pre in a case like
>> this.
>
> More on this:
> Depending upon the rule, this would be either a massive inconsistency
> (runtime incompatibility), or it would introduce a maintenance hazard into
> the language (or both!).
>
> To me, the only sensible rule would be to do this always for tagged types.
> That's inconsistent with the existing model of Pre.
>
>     type Root is tagged ...
>     function A (Obj : Root) return Boolean;
>     procedure P (Obj : Root ...) with Pre => A(Obj);
>
>     ...
>     type Der is new Root ...
>     function B (Obj : Der) return Boolean;
>     procedure P (Obj : Der ...) with Pre => B(Obj);
>
>     D : Der;
>
>     P (D);
>
> The precondition for this call of P as described in the current RM is B(D).
>
> If (2) was adopted for all tagged types, then the precondition of this
> call of P would become A(D) and B(D).

Actually, if the default Pre'Class for P became the same as Pre for P, then this
would become:

   A(D) or B(D)

because ancestor Pre'Class are effectively "or"ed into descendants.

But this brings up the good point that we have restrictions on what you can
write for Pre'Class (based on the "notional formal derived type" model), and
there are no such restrictions for Pre, so just defaulting Pre'Class to what was
specified for Pre wouldn't work.  So I agree the idea is unwise.

***************************************************************

From: Randy Brukardt
Sent: Thursday, April  2, 2015  4:23 PM

...
> > If (2) was adopted for all tagged types, then the precondition of
> > this call of P would become A(D) and B(D).
>
> Actually, if the default Pre'Class for P became the same as Pre for P,
> then this would become:
>
>    A(D) or B(D)
>
> because ancestor Pre'Class are effectively "or"ed into descendants.

No, it's "or"ed into the Pre'Class of descendants. The explicit Pre (which is
not inherited) is always "and"ed.

See 6.1.1(31-33/3). The specific precondition is checked separately and has to
be True. The class-wide preconditions are checked as a group and only one needs
to be True.

That is, we never intended Pre to be used for LSP.

> But this brings up the good point that we have restrictions on what
> you can write for Pre'Class (based on the "notional formal derived
> type" model), and there are no such restrictions for Pre, so just
> defaulting Pre'Class to what was specified for Pre wouldn't work.  So
> I agree the idea is unwise.

Gee, I didn't think of that one. That's even better than the inconsistency --
the Bairdian cases that we just fixed would reappear with a vengeance and you
could end up calling non-existent routines or making calls with tag mismatches.

I guess the time I spent answering this was well-spent. :-)

***************************************************************

From: Tucker Taft
Sent: Thursday, April  2, 2015  6:04 PM

> ...
>>> If (2) was adopted for all tagged types, then the precondition of
>>> this call of P would become A(D) and B(D).
>>
>> Actually, if the default Pre'Class for P became the same as Pre for
>> P, then this would become:
>>
>>     A(D) or B(D)
>>
>> because ancestor Pre'Class are effectively "or"ed into descendants.
>
> No, it's "or"ed into the Pre'Class of descendants. The explicit Pre
> (which is not inherited) is always "and"ed.

Good point.  The "or"ing is only between Pre'Class expressions.

>> ... But this brings up the good point that we have restrictions on
>> what you can write for Pre'Class (based on the "notional formal
>> derived type" model), and there are no such restrictions for Pre, so
>> just defaulting Pre'Class to what was specified for Pre wouldn't
>> work.  So I agree the idea is unwise.
>
> Gee, I didn't think of that one. That's even better than the inconsistency
> -- the Bairdian cases that we just fixed would reappear with a
> vengeance and you could end up calling non-existent routines or making
> calls with tag mismatches.
>
> I guess the time I spent answering this was well-spent. :-)

Yes, you helped to kill it before it grew into a painful AI.

***************************************************************

From: Bob Duff
Sent: Thursday, April  2, 2015  4:30 PM

> But you have pointed out at least two more alternatives:
>
> 4) If Pre is specified but Pre'Class is not specified on a root type,
> then Pre'Class on the root type defaults to the expression given for Pre.
>
> 5) If Pre is specified, then it is and'ed into the Pre'Class, even if
> it is explicitly specified on this operation or some ancestor's corresponding
> operation.

Does the following make sense?

6) Allow Pre'Class on a subprogram if the type turns out to be tagged (untagged
   private, tagged full).  Presumably the Pre'Class would only apply to type
   extensions within the subsystem (i.e. package hierarchy).  This is just
   getting around a syntactic glitch that prevents specifying aspects "later".

BTW, I agree with much of what Randy said, but I don't agree that the "untagged
private, tagged full" case is a wrong way to code.  I think it makes perfect
sense to restrict type extension and dispatching to within a given subsystem.

***************************************************************

From: Randy Brukardt
Sent: Thursday, April  2, 2015  5:18 PM

...
> Does the following make sense?
>
> 6) Allow Pre'Class on a subprogram if the type turns out to be tagged
> (untagged private, tagged full).  Presumably the Pre'Class would only
> apply to type extensions within the subsystem (i.e. package
> hierarchy).  This is just getting around a syntactic glitch that
> prevents specifying aspects "later".

That would work. It would be much like the premature use of 'Class on an
incomplete type. Of course, we've been trying to get rid of such things...

> BTW, I agree with much of what Randy said, but I don't agree that the
> "untagged private, tagged full" case is a wrong way to code.  I think
> it makes perfect sense to restrict type extension and dispatching to
> within a given subsystem.

It makes "sense", but only to people that are afraid of those things. It
restricts what clients of your type can do, but it's not clear what, if any,
benefits follow from that restriction. The benefits of restricting access to
components or default initialization are clear, for instance, but since
dispatching and extension already respect privacy, it's not anywhere near as
clear in what circumstances restricting dispatching and (especially) extension
would be beneficial.

One would expect that a "good" ADT would be usable in any way that makes sense
to the client; the creator of an ADT shouldn't be in the business of deciding
that. (That's one reason why minimizing the visible use of access types is
important; the client should be deciding memory management, not the ADT.)

I guess I start from the idea that an ADT should provide maximum capability to
its clients -- which means that the starting point for a new type should be
visibly controlled (since you can't add that later), use few (preferably none)
access types, and non-limited private. Then one looks at the requirements
(performance, operations, etc.) and determines if some capabilities interfere
with those requirements. Then, and only then, do I consider additional
restrictions (fully private rather than controlled, limited, and so on). I doubt
that I would end up with an untagged visible type often, since that denies
capabilities to clients without much gain in the typical case.

Obviously, you don't get to start from scratch that often, so it's possible to
end up with something that's "accidentally" tagged after maintenance; typically
that means the type ended up controlled (think File_Type in Ada.Text_IO) but
didn't start that way. But in such cases, there isn't going to be any
dispatching anywhere: neither inside nor outside the subsystem.

But that case would work fine with the tools as postulated. No language changes
would be needed.

The case that wouldn't work would expose a series of untagged types to clients
yet would use extension and dispatching to implement them. That doesn't make a
lot of sense to me; if you're converted to the OOP way of thinking, why would
you want to deny that way of thinking to your clients? And if you're *not*
converted, you're not very likely to use extension and dispatching in the first
place. (Why would you take on that overhead if you're not convinced of the
benefits?)

Anyway, I'd like to have a better idea of what the benefit of preventing
extension and dispatching is before spending effort on a feature that is solely
intended to support programs that do that. (Especially one that would advertise
that such things are in use but not allowed to the clients.)

***************************************************************

From: Bob Duff
Sent: Thursday, April  2, 2015  6:04 PM

> That would work. It would be much like the premature use of 'Class on
> an incomplete type. Of course, we've been trying to get rid of such things...

Yes, "such things" are a bit kludgy.  But maybe that's OK in this case.
To be discussed.

> > BTW, I agree with much of what Randy said, but I don't agree that
> > the "untagged private, tagged full" case is a wrong way to code.  I
> > think it makes perfect sense to restrict type extension and
> > dispatching to within a given subsystem.
>
> It makes "sense", but only to people that are afraid of those things.

Not true:  I am a counterexample.

[snip opinions about style]

> Anyway, I'd like to have a better idea of what the benefit of
> preventing extension and dispatching is before spending effort on a
> feature that is solely intended to support programs that do that.
> (Especially one that would advertise that such things are in use but
> not allowed to the clients.)

I don't think we should be arguing about that.

My point is:  Some ARG member says "feature X is useless (or evil, dangerous,
hardly ever used, etc)".  If there's near-universal agreement on that opinion,
then it's fine to conclude, "Let's not worry about feature X when designing new
features".

But if others disagree with that opinion, then we shouldn't conclude that.  We
shouldn't base language design decisions on your or my personal stylistic
opinions (unless the opinion is near-universally held).

You mentioned prefix notation.  My opinion is that I don't much like it, and I
don't think it should have been added to the language.  But I won't waste time
trying to convince you not to use it, and you shouldn't waste time trying to
convince me to use it.  We should just accept that it's a part of the language
some/many people use, and don't make it a second-class citizen via future
language changes.

***************************************************************

From: Tucker Taft
Sent: Thursday, April  2, 2015  6:08 PM

> Does the following make sense?
>
> 6) Allow Pre'Class on a subprogram if the type turns out to be tagged
> (untagged private, tagged full).  Presumably the Pre'Class would only
> apply to type extensions within the subsystem (i.e. package
> hierarchy).  This is just getting around a syntactic glitch that
> prevents specifying aspects "later".

I am not sure this is worth the trouble.  Just have a private dispatching
operation with a Pre'Class and a Pre and make the visible operation call it, if
you really want a type that is only extendible within the subsystem, but which
has some visible operations.
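
A rough sketch of that workaround on the original example (Do_Init is a
hypothetical name; its body, and Init's body that redispatches to it, would go
in the package body):

package No_Primitive is
    type Root is private;
    procedure Init (I : in out Root; C : Natural)
    with Pre => C > 0;
private
    type Pos_Access is access all Positive;
    type Root is tagged record
       F : Pos_Access;
    end record;
    procedure Do_Init (I : in out Root; C : Natural)
    with Pre'Class => C > 0;
    --  Init's body just redispatches:  Do_Init (Root'Class (I), C);
    --  Extensions within the subsystem override Do_Init.
end No_Primitive;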

***************************************************************

From: Randy Brukardt
Sent: Thursday, April  2, 2015  6:52 PM

> > That would work. It would be much like the premature use of 'Class
> > on an incomplete type. Of course, we've been trying to get rid of
> > such things...
>
> Yes, "such things" are a bit kludgy.  But maybe that's OK in this case.
> To be discussed.

Sure. But...

> > > BTW, I agree with much of what Randy said, but I don't agree that
> > > the "untagged private, tagged full" case is a wrong way to code.
> > > I think it makes perfect sense to restrict type extension and
> > > dispatching to within a given subsystem.
> >
> > It makes "sense", but only to people that are afraid of those things.
>
> Not true:  I am a counterexample.

Please enlighten me as to what benefit you think that you get from it. I can't
see any, but perhaps I'm missing something fundamental.

> [snip opinions about style]
>
> > Anyway, I'd like to have a better idea of what the benefit of
> > preventing extension and dispatching is before spending effort on a
> > feature that is solely intended to support programs that do that.
> > (Especially one that would advertise that such things are in used
> > but not allowed to the clients.)
>
> I don't think we should be arguing about that.

We're not *arguing* about that. I want to *know* why you think that. My feeling
here is that I've missed something important, and I'd like to know what it is.

> My point is:  Some ARG member says "feature X is useless (or evil,
> dangerous, hardly ever used, etc)".  If there's near-universal
> agreement on that opinion, then it's fine to conclude, "Let's not
> worry about feature X when designing new features".
>
> But if others disagree with that opinion, then we shouldn't conclude
> that.  We shouldn't base language design decisions based on your or my
> personal stylistic opinions (unless the opinion is near-universally
> held).

Sure. But we're not designing a new feature here. You are advocating changing an
existing feature to allow it to be used in a way that it was never intended to
be used. The burden of proof is on those who think that something is broken to
show that the supposedly broken thing is useful enough to change the language
(how high that bar is depends on how messy the change is).

> You mentioned prefix notation.  My opinion is that I don't much like
> it, and I don't think it should have been added to the language.  But
> I won't waste time trying to convince you not to use it, and you
> shouldn't waste time trying to convince me to use it.  We should just
> accept that it's a part of the language some/many people use, and
> don't make it a second-class citizen via future language changes.

Again, we're not talking about some existing feature here. We're talking about
Pre'Class on untagged types, which was never anticipated in any way as LSP does
not make sense in the absence of dispatching.

Now, your proposal has a much lower bar than Tucker's previous idea, because it
doesn't involve incompatibility, so perhaps I'll go along with it even if it
seems useless to me. But it would be even better to explain why you find it
valuable to restrict types in that way.

On top of which, tagged completions of untagged partial views is broken in a
number of other ways semantically. We have a bunch of weird Legality Rules to
cover those things up, which makes using such a thing unusual (and most likely
not portable as implementations likely vary a lot in how well they deal with
those oddities - most of which aren't tested in the ACATS). So I have to wonder
if the typical user of that feature is actually getting what they want out of it
(or would if the rules were actually enforced). The only way to tell that is to
understand what the use case is. (Note: not "argue about" the use case, but
simply understand why someone thinks it's valuable.)

***************************************************************

From: Jean-Pierre Rosen
Sent: Friday, April  3, 2015  2:35 AM

> Please enlighten me as to what benefit you think that you get from it.
> I can't see any, but perhaps I'm missing something fundamental.

Slightly OT, but...

I am also in favour of limiting inheritance to where it is required. The trouble
is that inheritance creates transitive dependencies; therefore the overall
complexity of the dependency graph grows as N**2, while it grows as N with
composition.

Therefore, it makes perfect sense to have a type that uses inheritance for its
implementation, but keeping the visible type untagged limits the complexity. It
also means that the writer of the package is fully responsible for the behaviour
of the type, because no unknown user can redefine the behaviour of some
operations called from within the package body.

***************************************************************

From: Jeff Cousins
Sent: Tuesday, April  7, 2015  5:50 AM

> BTW, I agree with much of what Randy said, but I don't agree that the
> "untagged private, tagged full" case is a wrong way to code.  I think it makes
> perfect sense to restrict type extension and dispatching to within a given
> subsystem.

"untagged private, tagged full" seems a reasonable thing to me.  Though a quick
trawl through our code that happens to occur in bug reports shows the private
view to always be tagged when the full view is.  So it might not be common.

***************************************************************
