Version 1.3 of ai05s/ai05-0195-1.txt

!standard 13.9.1(10)          10-02-01 AI05-0195-1/01
!class binding interpretation 10-02-01
!status No Action (6-0-2) 10-06-20
!status work item 10-02-01
!status received 09-06-16
!priority Low
!difficulty Hard
!qualifier Clarification
!subject Uninitialized scalars
!summary
The semantics of operations on invalid values are implementation-defined.
!question
Consider the following example:
   type R is
     record
       F1, F2 : Natural;
     end record;

   procedure Proc (X : in out R) is
   begin
     X.F1 := X.F2;
   end Proc;
Assume a conventional representation in which the layout of the record type R is the same as if the subtype of the two fields were Integer. Assume further that the implementation performs no default initialization of scalar objects other than what is required by the language definition. Assume further that no knowledge about the callers of the subprogram is available. Must procedure Proc perform a constraint check because of the possibility that X.F2 was not initialized and its value is negative? (No.)
!recommendation
(See summary.)
!wording
Replace the last sentence of 13.9.1(9) by:
The semantics of operations on invalid representations are [as follows:] implementation-defined, but do not by themselves lead to erroneous or unpredictable execution, or to other objects becoming abnormal.
Delete 13.9.1(10-11)
Replace AARM Implementation Note 13.9.1(11.a/2) with:
The intended model is that invalid representations can be freely used until such a point that the value is used in a way that might cause erroneous execution. For instance, the exception mandated for case_statements must be raised. Array indexing must not cause memory outside of the array to be written (and usually, not read either). We say that these uses require a valid value.
One way to implement this is to assign a known-to-be-valid property to each object. The property can be assigned because of the way that the object is declared (objects declared with an initialization to a known-to-be-valid value are also known to be valid) or via flow analysis of the program. Assigning a value that is derived from an object that is not known to be valid can be handled in one of two ways:

* the value can be checked for validity at the point of the assignment; the
  target object will remain known to be valid; or

* the value can be assigned, and the target object will no longer be considered
  to be known to be valid.

The latter approach can only work if flow analysis is used; the former is more common.

When an object that is not known to be valid is used to calculate a value that is used in a context that requires a valid value, a check is required.
[Editor's note: Do we want the following??]
Add after 13.9.1(13):
Implementation Advice
If the representation of an object with an invalid representation represents a value of the object's type, the value of the type should be used. However, when assigning to another object of the same type, subtype checks need not be performed; the target object will also be invalid after the assignment.
AARM Ramification: But the implementation can also require that the target object always be valid; in that case the implementation will need to make a subtype check. The implementation cannot skip checks both on the assignment and on a use that requires a valid value.
!discussion
In the example above, if we assume that the object's representation is that of -1 for the type of Natural, then 13.9.1(10) states that "the value of the type is used". This means that the evaluation of X.F2 yields a negative integer. The execution of an assignment statement includes the conversion of the value of the RHS expression to the subtype of the target (RM 5.2(11)). In the case of a conversion to a constrained subtype, this conversion includes a check that the value satisfies the subtype's constraint (RM 4.6(51/2)). Thus, a check is required.
However, there is no requirement that the representation of values of a type be the same as for its subtypes. Thus, it is OK for an implementation to say that there exist no representations of values of the type outside of the range of a specific subtype, even though the values must be representable in the type.
For instance, -1 must be a value of the type of Natural (which is Integer); but there is no requirement that the representation of -1 for Integer also have the same meaning for Natural. Thus, 13.9.1(10) is never required to apply.
Effectively, the representation of objects is implementation-defined. Thus, it is implementation-defined whether or not 13.9.1(10) applies. That means the rule says nothing at all -- the behavior of invalid values is always implementation-defined. It is better to say that directly.
Note that both components in the example are not known to be valid, so there is no requirement for a check on the assignment. But there is a requirement for a check before the value is used in a location that requires a valid value; this is not a permission to omit the check forever.
--!corrigendum 13.13.2(1.2/2)
!ACATS Test
None needed; there is no intended change of semantics.
!appendix

From: Steve Baird
Sent: Tuesday, June 16, 2009  3:43 AM

An implementation needs to know under what circumstances it is permitted to
assume that a scalar object has a valid representation. When may the subtype of
an object be trusted?

RM 13.9.1(9-11) states

   If the representation of a scalar object does not represent a value
   of the object's subtype (perhaps because the object was not
   initialized), the object is said to have an invalid representation.
   It is a bounded error to evaluate the value of such an object.
   If the error is detected, either Constraint_Error or Program_Error
   is raised. Otherwise, execution continues using the invalid
   representation. The rules of the language outside this
   subclause assume that all objects have valid representations.
   The semantics of operations on invalid representations are as follows:

     * If the representation of the object represents a value of the
       object's type, the value of the type is used.

     * If the representation of the object does not represent a value of
       the object's type, the semantics of operations on such
       representations is implementation-defined, but does not by itself
       lead to erroneous or unpredictable execution, or to other objects
       becoming abnormal.

Consider the following example:

    type R is
      record
        F1, F2 : Natural;
      end record;

   procedure Proc (X : in out R) is
   begin
     X.F1 := X.F2;
   end Proc;

Assume a conventional representation in which the layout of the record type R is
the same as if the subtype of the two fields were Integer. Assume further that
the implementation performs no default initialization of scalar objects other
than what is required by the language definition. Assume further that no
knowledge about the callers of the subprogram is available. Must procedure Proc
perform a constraint check because of the possibility that X.F2 was not
initialized and its value is negative?

One argument suggests that such a check is required.

Consider the case where X.F2 is uninitialized and the representation of
X.F2 represents a negative Integer.

As mentioned above, paragraph 10 states that "the value of the type is used".
This means that the evaluation of X.F2 yields a negative integer. The execution of
an assignment statement includes the conversion of the value of the RHS
expression to the subtype of the target (RM 5.2(11)). In the case of a
conversion to a constrained subtype, this conversion includes a check that the
value satisfies the subtype's constraint (RM 4.6(51/2)). Thus, a check is
required.

Tuck has argued that no such check is required, and that scalar subtypes may be
trusted in cases such as the given example (note that we are not discussing
array indexing or case statements here).

In conversation at the Brest ARG meeting, Tuck asserted that
13.9.1(10) imposes no requirements on an implementation.
An implementation would be correct in completely ignoring this passage if it
chose to do so.

The idea (Tuck - please correct me if I am misstating your position) is that the
question of whether "the representation of the object represents a value of the
object's type" is entirely up to the implementation. Clearly the mapping from an
object's representation to its value may differ between two objects of the same
type (consider, for example, a biased representation of an object such as
    X : Integer range 2 ** 23 .. 2 ** 23 + 1; ).
If an object of subtype Natural has a representation which would represent a
negative value if the object's subtype were Integer, that does not imply that
the object's representation represents a negative value. An implementation has
the freedom to decide that an object that has an invalid representation *never*
has a representation that "represents a value of the object's type".

If this interpretation is correct, then it seems that this section of the manual
is misleading (both Gary Dismukes and I concluded that the argument that a
constraint check is required was correct) and should be rewritten. If that seems
too drastic, then at least an AARM note clarifying the situation would be a step
in the right direction.

If this interpretation is not correct, then the language design would seem to be
broken. If a constraint check is required when assigning a Natural to a Natural,
then something is seriously wrong. In the case where an implementation is unable
to prove that a scalar object has been initialized (as in the Proc example
above), an implementation should still be able to trust the subtype of the
object.

Is Tuck's interpretation correct?

****************************************************************

From: Tucker Taft
Sent: Tuesday, June 16, 2009  7:00 AM

The one thing I don't like about what you write is "the subtype can be trusted."
I never meant to imply that!

My point was that rather than doing a check, you can propagate to the
left-hand-side the possibility that the value was uninitialized, rather than
performing a check.  That is *very* different in my view from "trusting the
subtype."

****************************************************************

From: Steve Baird
Sent: Tuesday, June 16, 2009  7:28 AM

Thanks for the correction.

****************************************************************

From: Robert Dewar
Sent: Tuesday, June 16, 2009  4:01 AM

> Consider the following example:
>
>     type R is
>       record
>         F1, F2 : Natural;
>       end record;
>
>    procedure Proc (X : in out R) is
>    begin
>      X.F1 := X.F2;
>    end Proc;

I think the answer should be no, a check is not required. Ada 83 for sure did
not require a check, and the idea behind the move to a bounded view was to cut
down on erroneousness without causing significant extra checks. So in the above
case, you don't want to consider it erroneous, since what we know really happens
is that some bit pattern gets copied from F2 to F1, and the bounds of the
"error" are that the value in F1 becomes dubious (no more or less dubious than
F2).

When I say the answer should be no, I am saying this from two points of view:

a) what is desirable from a language design point of view

b) what was intended by the Ada 95 design as best I understand this intent.

It is NOT based on the RM.

> One argument suggests that such a check is required.
>
> Consider the case where X.F2 is uninitialized and the representation
> of
> X.F2 represents a negative Integer.

> Tuck has argued that no such check is required The idea (Tuck - please
> correct me if I am misstating your position) is that the question of
> whether "the representation of the object represents a value of the
> object's type" is entirely up to the implementation.
> Clearly the mapping from an object's representation to its value may
> differ between two objects of the same type (consider, for example, a
> biased representation of an object such as
>     X : Integer range 2 ** 23 .. 2 ** 23 + 1; ). If an object of
> subtype Natural has a representation which would represent a negative
> value if the object's subtype were Integer, that does not imply that
> the object's representation represents a negative value.
> An implementation has the freedom to decide that an object that has an
> invalid representation *never* has a representation that "represents a
> value of the object's type".

Right, I think this is a legitimate though somewhat peculiar reading.
We need some language lawyer reading that gives the desired result (no check
required), and the above argument meets the bill.

> If this interpretation is correct, then it seems that this section of
> the manual is misleading (both Gary Dismukes and I concluded that the
> argument that a constraint check is required was correct) and should
> be rewritten. If that seems too drastic, then at least an AARM note
> clarifying the situation would be a step in the right direction.

I think first we should decide what we want. Should there be a check or not, and
answer this without doing ferocious exegesis of the manual.

Then we address the question of whether clarification is required in the
reference manual.

> If this interpretation is not correct, then the language design would
> seem to be broken. If a constraint check is required when assigning a
> Natural to a Natural, then something is seriously wrong.
> In the case where an implementation is unable to prove that a scalar
> object has been initialized (as in the Proc example above), an
> implementation should still be able to trust the subtype of the
> object.
>
> Is Tuck's interpretation correct?

I think so.

It is interesting that within AdaCore we have one strong opinion arguing for
installing checks in this case because the RM requires it, so this issue is not
just theoretical :-)

****************************************************************

From: Randy Brukardt
Sent: Wednesday, June 17, 2009  1:05 PM

> Consider the following example:
>
>     type R is
>       record
>         F1, F2 : Natural;
>       end record;
>
>    procedure Proc (X : in out R) is
>    begin
>      X.F1 := X.F2;
>    end Proc;
>
> Assume a conventional representation in which the layout of the record
> type R is the same as if the subtype of the two fields were Integer.
> Assume further that the implementation performs no default
> initialization of scalar objects other than what is required by the
> language definition. Assume further that no knowledge about the
> callers of the subprogram is available. Must procedure Proc perform a
> constraint check because of the possibility that X.F2 was not
> initialized and its value is negative?

I would say no. All objects have a property I named "known to be valid"
inside of Janus/Ada (since a valid value is within the subtype, this is
essentially the same as "trust the subtype"). It is implementation-defined how
this property is determined (it could be true always for a particular object, or
determined via flow analysis). One common definition is that stand-alone objects
that are initialized are always "known to be valid".

You never need to make a check for a scalar operation (assuming no size or
subtype changes) unless you are converting from an expression which is not
"known to be valid" into an object that is "known to be valid" (always). (Array
indexing is, of course, treated as something that is "known to be valid"). It is
always OK to assign an invalid value, so long as future checks do not assume it
is valid.

In this case, the components cannot be "known to be valid" a priori: they are
not initialized. Moreover, flow analysis cannot prove anything in this case
(nothing is known about the parameter). So no check is needed. But the caller of
Proc would not need to make a check, either.

Since the assignment of a complete composite type allows assigning of invalid
values of components without raising an exception, a record or array component
can *never* be "known to be valid" all of the time. The compiler can calculate
the property for a particular component with flow analysis (by proving that no
potentially invalidating operations have occurred), but in the absence of that
you can't assume much. (You could assume more for discriminants, since they have
to be initialized.)

> Consider the case where X.F2 is uninitialized and the representation
> of
> X.F2 represents a negative Integer.

> As mentioned above, paragraph 10 states that "the value of the type is used".
> This means that the evaluation of X.F2 yields a negative integer. The execution
> of an assignment statement includes the conversion of the value of the
> RHS expression to the subtype of the target (RM 5.2(11)). In the case
> of a conversion to a constrained subtype, this conversion includes a
> check that the value satisfies the subtype's constraint (RM
> 4.6(51/2)). Thus, a check is required.

Clearly, this argument does not meet the intent (that invalid values can be
propagated). And I dislike Tucker's explanation, which essentially says that
13.9.1(10) never happens (because it never makes sense to make such checks, and
you have to ignore this bullet in order for that to happen). If that's the case,
the entire bullet should be dropped. So, I agree with Steve that this wording is
misleading, especially in this example where claiming that -1 is not a value of
the type of Natural is bogus (it *has* to be a value of the type).

I'm pretty sure that the intent of 13.9.1(10) was to specify how the value is
used (for instance in addition), not to require additional checks. I suspect
that we need an implementation permission to "propagate" invalid values (when an
invalid value is assigned to an object of the same subtype, the invalid value
can be assigned without a subtype check), since that is the intent of this
wording. Either that or just drop 13.9.1(10) completely, since it is actively
harmful, and just use 13.9.1(11) for all cases.

****************************************************************

From: Robert Dewar
Sent: Wednesday, June 17, 2009  1:49 PM

> Clearly, this argument does not meet the intent (that invalid values
> can be propagated). And I dislike Tucker's explanation, which
> essentially says that
> 13.9.1(10) never happens (because it never makes sense to make such
> checks, and you have to ignore this bullet in order for that to
> happen). If that's the case, the entire bullet should be dropped. So,
> I agree with Steve that this wording is misleading, especially in this
> example where claiming that
> -1 is not a value of the type of Natural is bogus (it *has* to be a
> value of the type).

No, that's simply not true, there is nothing in the RM that suggests that a
subtype has to have a representation equivalent to its parent. For example,
subtype Natural could use the first 31 bits in reverse order, followed by a 1
bit always. Peculiar indeed, but nothing in the RM says this is non-conforming.

So it is fine to say that your representation for Natural is

   First bit is always 0, if it is 1, value is invalid junk with no
   interpretation as a value of the subtype or type.

   Remaining bits are the Natural value if the first bit is 0

Yes, it may look like -1, but nothing in the RM says that it is -1, and nothing
in my implementation says that it has to be considered to have anything to do
with -1. After all a sequence of four characters that happens to look like -1
has nothing to do with minus one.

Yes, it's a bit of a forced explanation, but what we are doing here is a
careful reading to show that the answer can be no even if we do NOT change the
RM.

> I'm pretty sure that the intent of 13.9.1(10) was to specify how the
> value is used (for instance in addition), not to require additional
> checks. I suspect that we need an implementation permission to
> "propagate" invalid values (when an invalid value is assigned to an
> object of the same subtype, the invalid value can be assigned without
> a subtype check), since that is the intent of this wording. Either
> that or just drop 13.9.1(10) completely, since it is actively harmful, and just use 13.9.1(11) for all cases.

Either approach is fine ... as long as the intent of not requiring a check here
is preserved.

****************************************************************

From: Bob Duff
Sent: Wednesday, June 17, 2009  2:27 PM

> > ...So, I agree with Steve that
> > this wording is misleading, especially in this example where
> > claiming that
> > -1 is not a value of the type of Natural is bogus (it *has* to be a
> > value of the type).
>
> No, that's simply not true, there is nothing in the RM that suggests
> that a subtype has to have a representation equivalent to its parent.

With my language lawyer hat on, I agree with Robert.
With my language lawyer hat off, I agree with Randy.

;-)

Point is, when discussing chap 13, we need to explicitly state which hat we wear
at any given time.

Note that Randy said "misleading", not "formally wrong".

I think perhaps the best we can do in this area is to add a lot of verbiage
(with examples) to the AARM.

****************************************************************

From: Robert Dewar
Sent: Wednesday, June 17, 2009  2:38 PM

> With my language lawyer hat on, I agree with Robert.
> With my language lawyer hat off, I agree with Randy.

Yes, well of course :-) But this peculiar paragraph is in the RM, and thus is
allowed to be interpreted with hat on (personally I think this stuff would be
better as implementation advice, where you always have your hat off). I would
have gone for saying impl defined in the formal part, and then in IA saying what
you want .. no erroneous behavior for instance .. what ON EARTH is "erroneous
execution" or "unpredictable execution" formally?????

****************************************************************

From: Randy Brukardt
Sent: Wednesday, June 17, 2009  2:46 PM

> > Clearly, this argument does not meet the intent (that invalid values
> > can be propagated). And I dislike Tucker's explanation, which
essentially says that
> > 13.9.1(10) never happens (because it never makes sense to make such
> > checks, and you have to ignore this bullet in order for that to
> > happen). If that's the case, the entire bullet should be dropped.
> > So, I agree with Steve that this wording is misleading, especially
> > in this example where claiming that -1 is not a value of the type of
> > Natural is bogus (it *has* to be a value of the type).
>
> No, that's simply not true, there is nothing in the RM that suggests
> that a subtype has to have a representation equivalent to its parent.

I think this demonstrates how confusing this is! I said something about the
values of the type, and you immediately changed it to talk about the
*representation* of the type, which has nothing to do with the values. -1 is
surely a value of the type of Natural; whether or not it has a representation
does not change that fact.

I realize there is a hair-splitting explanation for not using the first bullet
(13.9.1(10)), but using that logic, the entire semantics of invalid values is
always implementation-defined (either because the representation is
implementation-defined or because the behavior is), in which case it is silly to
pretend that there are cases where it is not implementation-defined.

Mixing up representations and semantics is always suspicious outside of
representation clauses.

...
> > I'm pretty sure that the intent of 13.9.1(10) was to specify how the
> > value is used (for instance in addition), not to require additional
> > checks. I suspect that we need an implementation permission to
> > "propagate" invalid values (when an invalid value is assigned to an
> > object of the same subtype, the invalid value can be assigned
> > without a subtype check), since that is the intent of this wording.
> > Either that or just drop 13.9.1(10) completely, since it is actively
> > harmful, and just use 13.9.1(11) for all cases.
>
> Either approach is fine ... as long as the intent of not requiring a
> check here is preserved.

Right. I don't think anyone wants to change the intent here.

****************************************************************

From: Robert Dewar
Sent: Wednesday, June 17, 2009  3:08 PM

...
> I think this demonstrates how confusing this is! I said something
> about the values of the type, and you immediately changed it to talk
> about the
> *representation* of the type, which has nothing to do with the values.
> -1 is surely a value of the type of Natural; whether or not it has a
> representation does not change that fact.

Indeed confusing, but if the representation is junk, then you can't really talk
about the value at all, that's why the whole notion of a value that is outside
the subtype range but is somehow interpretable as a value of some other subtype
is indeed bogus.

> I realize there is a hair-splitting explanation for not using the
> first bullet (13.9.1(10)), but using that logic, the entire semantics
> of invalid values is always implementation-defined (either because the
> representation is implementation-defined or because the behavior is),
> in which case it is silly to pretend that there are cases where it is
> not implementation-defined.
>
> Mixing up representations and semantics is always suspicious outside
> of representation clauses.

Yes, well when it comes to invalid values, this is all about representations
that fall outside the permitted set of representations, so the mixing up is
inevitable.

****************************************************************

From: Bob Duff
Sent: Wednesday, June 17, 2009  2:12 PM

> Since the assignment of a complete composite type allows assigning of
> invalid values of components without raising an exception, a record or
> array component can *never* be "known to be valid" all of the time.
> The compiler can calculate the property for a particular component
> with flow analysis (by proving that no potentially invalidating operations have
> occurred), but in the absence of that you can't assume much.

That's true under Steve's scenario, but it is POSSIBLE to write a compiler where
"known valid" is true all the time.  Just default-initialize all scalars to some
in-range value.  I don't much like that idea.  It seems like the worst of both
worlds -- doesn't detect uninit var bugs, and is costly at run time. But it's
possible.

Well, there's one special case: for a subtype with a null range, there are no
valid values.  If it's known-null-range at compile time, you can replace every
read of any variable with "raise...".

>...(You could assume more for discriminants, since they have to be
>initialized.)

Right, assuming you don't let them be invalid.  E.g.:

    procedure P(X: R) is -- R from Steve's example.
        Y : T(Discrim => X.F2);

In order to ensure all discrims are known-valid, if X.F2 is not known-valid, a
run-time check is needed.

Same for record components with default expressions, right?  You can set things
up so they're always known valid.

> Clearly, this argument does not meet the intent (that invalid values
> can be propagated). And I dislike Tucker's explanation, which
> essentially says that
> 13.9.1(10) never happens (because it never makes sense to make such
> checks, and you have to ignore this bullet in order for that to
> happen). If that's the case, the entire bullet should be dropped. So,
> I agree with Steve that this wording is misleading, especially in this
> example where claiming that
> -1 is not a value of the type of Natural is bogus (it *has* to be a
> value of the type).
>
> I'm pretty sure that the intent of 13.9.1(10) was to specify how the
> value is used (for instance in addition), not to require additional checks.

I agree.  However, I fear it's nigh unto impossible to get this wording right.

For example, I think the intent was that:

    type T1 is range 0..2;
    type T2 is (Red, Green, Blue);

should behave the same way with respect to uninit vars.  But T1'Base has a value
corresponding to 3, whereas T2'Base does not.

****************************************************************

From: Randy Brukardt
Sent: Thursday, June 18, 2009  11:52 AM

> > Since the assignment of a complete composite type allows assigning
> > of invalid values of components without raising an exception, a
> > record or array component can *never* be "known to be valid" all of the time.
> > The compiler can calculate the property for a particular component
> > with flow analysis (by proving that no potentially invalidating operations
> > have occurred), but in the absence of that you can't assume much.
> > occurred), but in the absence of that you can't assume much.
>
> That's true under Steve's scenario, but it is POSSIBLE to write a
> compiler where "known valid" is true all the time.
> Just default-initialize all scalars to some in-range value.
> I don't much like that idea.  It seems like the worst of both worlds
> -- doesn't detect uninit var bugs, and is costly at run time.
> But it's possible.

You're right. I had let my world-view interfere a bit with my description of the
language. Technically, it is erroneous to read junk into scalar components, so
any assignment of not "known to be valid" components can be ignored if the
components are default initialized. But operations like 'Read and
Sequential_IO.Read are common in programs, and I don't like allowing anything
that comes from them to be erroneous. That's especially true as it doesn't
appear to make that much difference in checking. But of course that is my
opinion and has nothing to do with the rules of the language.

...
> >...(You could assume more for discriminants, since they have to be
> >initialized.)
>
> Right, assuming you don't let them be invalid.  E.g.:
>
>     procedure P(X: R) is -- R from Steve's example.
>         Y : T(Discrim => X.F2);
>
> In order to ensure all discrims are known-valid, if X.F2 is not
> known-valid, a run-time check is needed.

Right; as in all other cases, assigning an object that isn't known to be valid
into one that is required to be valid requires a subtype check.

We do support an optimistic check elimination mode in our compiler (mainly to be
performance compatible with previous versions of the compiler; I didn't add this
validity stuff until a few years ago) that follows the Ada 83 rules.

> Same for record components with default expressions, right?
> You can set things up so they're always known valid.

Not in my world view (see above). You can't check components (as opposed to
discriminants) when you stream them in, so I don't think you could keep the
invariant true. (You could lean on erroneousness in that case, but as I
previously noted I don't think that is a good idea.)

> > Clearly, this argument does not meet the intent (that
> invalid values
> > can be propagated). And I dislike Tucker's explanation, which
> > essentially says that
> > 13.9.1(10) never happens (because it never makes sense to make such
> > checks, and you have to ignore this bullet in order for that to
> > happen). If that's the case, the entire bullet should be
> dropped. So,
> > I agree with Steve that this wording is misleading,
> especially in this
> > example where claiming that
> > -1 is not a value of the type of Natural is bogus (it *has* to be a
> > value of the type).
> >
> > I'm pretty sure that the intent of 13.9.1(10) was to
> specify how the
> > value is used (for instance in addition), not to require
> additional checks.
>
> I agree.  However, I fear it's nigh unto impossible to get this
> wording right.

You are probably right. I suspect the best plan is to put the intent into AARM
notes and delete the bullet 13.9.1(10) because it doesn't say anything that you
can depend on in any case, so it is just misleading.

> For example, I think the intent was that:
>
>     type T1 is range 0..2;
>     type T2 is (Red, Green, Blue);
>
> should behave the same way with respect to uninit vars.  But T1'Base
> has a value corresponding to 3, whereas T2'Base does not.

Exactly. We don't have a good reason to keep this rule as a dynamic semantics
rule. Maybe as IA?

****************************************************************

From: Robert Dewar
Sent: Thursday, June 18, 2009  12:12 PM

> Exactly. We don't have a good reason to keep this rule as a dynamic
> semantics rule. Maybe as IA?

I think it would be stronger as IA, because we always have our language lawyer
hats off when we write IA :-)

One change we did make in GNAT in response to 13.9.1(10) is that invalid
integer-type values always behave properly in comparisons and membership tests,
and I think that has some real value in avoiding surprises.

If someone has

    subtype R is integer range 1 .. 10;
    RV : R;

and we see

    if RV in 1 .. 10 then

it is not very helpful to assume that RV is valid in this context :-)

****************************************************************

From: Tucker Taft
Sent: Thursday, June 18, 2009  1:01 PM

>> Same for record components with default expressions, right?
>> You can set things up so they're always known valid.
>
> Not in my world view (see above). You can't check components (as
> opposed to
> discriminants) when you stream them in, so I don't think you could
> keep the invariant true. (You could lean on erroneousness in that
> case, but as I previously noted I don't think that is a good idea.)

I think you are confused here.  You *can* (and must) check record components
that have default initial values when streaming using 'Read or 'Input, per RM
13.13.2(35/2):

    In the default implementation of Read and Input for a composite type,
    for each scalar component that is a discriminant or whose
    component_declaration includes a default_expression, a check is made
    that the value returned by Read for the component belongs to its
    subtype.  ...

> ...
> Exactly. We don't have a good reason to keep this rule as a dynamic
> semantics rule. Maybe as IA?

I'm not willing to flush 13.9.1(10) quite so fast.
Perhaps we can combine (10) and (11).  It might also be nice to say something,
perhaps in an AARM note, that explains this issue of "known to be valid" and the
possible choices implementations can make.

****************************************************************

From: Robert Dewar
Sent: Thursday, June 18, 2009  1:12 PM

>> ...
>> Exactly. We don't have a good reason to keep this rule as a dynamic
>> semantics rule. Maybe as IA?
>
> I'm not willing to flush 13.9.1(10) quite so fast.

Well, you have to remove junk nonsense like "erroneous execution", which by
definition is an undefined notion: when we say that something can lead to
erroneous execution, we mean that anything can happen, so how can that be given
a specific meaning? Here erroneous is being used in an informal sense (to mean
horrible execution behavior :-))

> Perhaps we can combine (10) and (11).  It might also be nice to say
> something, perhaps in an AARM note, that explains this issue of "known
> to be valid" and the possible choices implementations can make.

****************************************************************

