Version 1.7 of ai12s/ai12-0074-1.txt

!standard 4.6(56)          15-05-28 AI12-0074-1/04
!standard 6.4.1(12)
!standard 6.4.1(13.1/3)
!class binding interpretation 13-06-09
!status work item 13-07-08
!status ARG Approved 9-0-0 13-06-15
!status work item 13-06-09
!status received 13-04-30
!priority Medium
!difficulty Hard
!qualifier Omission
!subject View conversions and out parameters passed by copy
!summary
Scalar view conversion values are well defined if the Default_Value aspect of the target type is specified. Access type view conversions are not allowed for unrelated types.
!question
6.4.1(12-15) reads in part:
For an out parameter that is passed by copy, the formal parameter object is created, and:
... For a scalar type that has the Default_Value aspect specified, the formal parameter is initialized from the value of the actual, without checking that the value satisfies any constraint or any predicate
This presupposes that "the value of the actual" exists and is well defined.
How does this work in the case where the actual parameter is a view conversion? Consider:
   with text_io; use text_io;
   procedure bad_conv1 is
     type Defaulted_Integer is new Integer
       with Default_Value => 123;

     procedure foo (x : out integer) is begin null; end;

     y : long_long_float := long_long_float'last;
      -- or, for a variation, leave y uninitialized
   begin
    foo (defaulted_integer (y)); -- Now illegal.
    put_line (long_long_float'image (y));
   end;

   with text_io; use text_io;
   procedure bad_conv2 is

     type mod3 is mod 3 with default_value => 0;
     type mod5 is mod 5;

     procedure foo (xx : out mod3) is begin null; end;

     yy : mod5 := 4;
   begin
    foo (mod3 (yy)); -- Now illegal.
    put_line (mod5'image (yy));
   end;
In this case, the converted value might not (probably won't) fit into the parameter representation. What should happen? (The calls on Foo are illegal.)
!recommendation
(See !summary.)
!wording
Modify 4.6(56):
Reading the value of the view yields the result of converting the value of the operand object to the target subtype (which might raise Constraint_Error), except if the object is of an {elementary}[access] type and the view conversion is passed as an out parameter; in this latter case, the value of the operand object {may be}[is] used to initialize the formal parameter without checking against any constraint of the target subtype ({as described more precisely in}[see] 6.4.1).
[We have to use "may be used to initialize" here in order that this is not a lie in the case of scalar subtypes without Default_Values, which are uninitialized. This has the pleasant side-effect of making this vague enough to make it clear that one has to read 6.4.1 for the actual rules.]
Modify AARM 4.6(56.a):
This ensures that even an out parameter of an {elementary}[access] type is initialized reasonably.
Append to end of 6.4.1 Legality Rules section:
If the mode is out, the actual parameter is a view conversion, and the type of the formal parameter is a scalar type that has the Default_Value aspect specified, then
- the type of the operand of the conversion shall have the
Default_Value aspect specified; and
- there shall exist a type (other than a root numeric type) that is
an ancestor of both the target type and the operand type.
Similarly, if the mode is out, the actual parameter is a view conversion, and the type of the formal parameter is an access type, then there shall exist a type that is an ancestor of both the target type and the operand type.

In addition to the places where Legality Rules normally apply (see 12.3), these rules also apply in the private part of an instance of a generic unit.
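As a rough illustration of the proposed Legality Rules (the type and subprogram names below are invented for this sketch, not taken from the AI):

```ada
procedure Legality_Sketch is
   --  Hypothetical types, for illustration only.
   type Def_Int  is new Integer with Default_Value => 0;
   type Def_Int2 is new Def_Int;     --  inherits Default_Value; ancestor: Def_Int
   type Plain    is range 0 .. 100;  --  no Default_Value, unrelated to Def_Int

   procedure P (X : out Def_Int) is
   begin
      X := 1;
   end P;

   A : Def_Int2 := 5;
   B : Plain    := 5;
begin
   P (Def_Int (A));  --  Legal: Def_Int is a common ancestor, and the
                     --  operand type has Default_Value (inherited).
   P (Def_Int (B));  --  Illegal under the new rule: Plain has no
                     --  Default_Value, and it shares no ancestor with
                     --  Def_Int other than a root numeric type.
end Legality_Sketch;
```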
AARM note: These rules are needed to ensure that a well-defined parameter value is passed.
Replace 6.4.1(13.1/3)
For a scalar type that has the Default_Value aspect specified, the formal parameter is initialized from the value of the actual, without checking that the value satisfies any constraint or any predicate;
with
For a scalar type that has the Default_Value aspect specified, the formal parameter is initialized from the value of the actual, without checking that the value satisfies any constraint or any predicate, except in the following case: if the actual parameter is a view conversion and either
- there exists no type (other than a root numeric type) that
is an ancestor of both the target type and the type of the operand of the conversion; or
- the Default_Value aspect is unspecified for the type of the
operand of the conversion
then Program_Error is raised;
[Note: The Program_Error case can only arise in the body of an
instance of a generic. Implementations that macro-expand generics can always detect this case when the enclosing instance body is expanded.]
!discussion
The value of an elementary view conversion is that of its operand.
We want the value of an elementary view conversion to be a well-defined value of the target type of the conversion if the target type is either an access type or a scalar type whose Default_Value aspect has been specified.
4.6(56) is supposed to define this, but it isn't close to correct. First, it was never updated to include predicates and null exclusions as 6.4.1(13/3) was. Second, it doesn't include the rules for scalar types with Default_Values specified at all. Third, it doesn't explain what happens if the original value does not fit in the representation of the target type (as in the example in the question).
Note that the "does not fit" case already exists in Ada, all the way back to Ada 83. Specifically, if an implementation has multiple representations for access types, and the view conversion is going from the larger representation to the smaller one, then the problem could occur. [This happened historically; the segmented architecture of the 16-bit 8086 led to compilers supporting both long (with segments) and short (without segments) pointers. Similarly, the Unisys U2200 compiler that the author worked on supported Ada pointers (machine addresses) and C pointers (byte addresses, a pair of a machine address and a byte offset).]
Clearly, the "does not fit" problem was unusual for access types, so it's not surprising that it was never solved. However, it becomes much more important for scalar types with Default_Values, especially when the source object is of a type that does not have a Default_Value (and thus might be uninitialized).
To ensure that the value of a scalar view conversion is well-defined, we disallow such scalar view conversions in cases where either
- the set of values of the type of the operand and of the
target type need not be the same; or
- an object of the type of the operand might not have a well
defined value.
The rule that was chosen to accomplish this is that the two types must have a common ancestor type and, in the scalar case, the operand type's Default_Value aspect must be specified.
This is expressed both as a Legality Rule (which handles most cases statically) and as a runtime check (which handles certain obscure cases involving generic instance bodies which make it past the legality rule). For an implementation which macro-expands instance bodies, the outcome of this runtime check is always known statically.
The runtime check would be triggered in examples like the following:
declare
   type T is range 0 .. 100;
   type T_With_Default is new T with Default_Value => 0;

   generic
      type New_T is new T;
   package G is end G;

   package body G is
      X : T;

      procedure P (X : out New_T) is begin X := 0; end;
   begin
      P (New_T (X)); -- conversion raises Program_Error
   end G;

   package I is new G (T_With_Default);
begin
   null;
end;
This Legality Rule is compatible with previous versions of Ada as they do not have the Default_Value aspect. No calls on inherited subprograms will be affected, as 13.1(10) ensures that the Default_Value aspect is the same for the derived type and the parent type, and of course the types are related in this case. That means that all affected view conversions will be explicit in the program.
We use a similar Legality Rule to handle the more minor issue with access types. Note that this will be incompatible in rare cases (access type view conversions are rare in existing code, and ones that would trigger the rule should be much rarer). Also note that unlike scalar types, it is not that uncommon to read an "out" access parameter, in order to check bounds or discriminants of the designated object. This means that the problematic case is more likely to occur in practice.
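The access-type rule can be sketched in the same way (again, the names here are hypothetical, invented for illustration):

```ada
procedure Access_Sketch is
   --  Hypothetical types, for illustration only.
   type Int_Ref   is access all Integer;
   type Int_Ref_2 is new Int_Ref;        --  ancestor: Int_Ref
   type Other_Ref is access all Integer; --  unrelated to Int_Ref

   procedure P (X : out Int_Ref) is
   begin
      X := null;
   end P;

   Obj : aliased Integer := 0;
   A : Int_Ref_2 := Obj'Access;
   B : Other_Ref := Obj'Access;
begin
   P (Int_Ref (A));  --  Legal: Int_Ref is an ancestor of both the
                     --  target type and the operand type.
   P (Int_Ref (B));  --  Illegal under the new rule: Int_Ref and
                     --  Other_Ref have no common ancestor type.
end Access_Sketch;
```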
---
An alternative solution was to initialize the parameter with the Default_Value in this case. However, that would potentially "deinitialize" the actual object in the view conversion case:
type Small is range 0 .. 255 with Default_Value => 0;
procedure P (Do_It : in Boolean; Result : out Small) is
begin
   if Do_It then
      Result := 10;
   end if;
end P;
Obj : Positive := 1;
P (Do_It => False, Result => Small(Obj)); -- (1)
The call to P at (1) would raise Constraint_Error upon return, even though Obj was properly initialized and was never changed. This does not seem acceptable, especially as the similar rules for access types do not have this effect.
---
A different alternative solution would be to declare the parameter to be abnormal in cases where the legality rule would be triggered. This would make the program erroneous if (and only if) the parameter was actually read (which doesn't happen for many out parameters). The problem with this is that it introduces a new kind of erroneous execution.
A possibility to minimize both the incompatibility and the possibility of erroneousness is to use a suppressible error here (presuming that we define those, see AI12-0092-1). In that case, the programmer would have to take explicit action (suppress the error) to cause the possibility of erroneous execution. That might be the best solution for this case (because the incompatibility can be easily avoided by suppressing the error as well as by introducing a temporary), but it depends on us having decided to define suppressible errors.
!ACATS Test
An ACATS test will be needed to test whatever solution is adopted.
!ASIS
No ASIS effect.
!appendix

From: Steve Baird
Sent: Tuesday, April 30, 2013  6:30 PM

We've got

   For an out parameter that is passed by copy, the formal
   parameter object is created, and:
     ...
     For a scalar type that has the Default_Value aspect specified,
     the formal parameter is initialized from the value of the actual,
     without checking that the value satisfies any constraint or any
     predicate

This presupposes that "the value of the actual" exists and is well defined.

How does this work in the case where the actual parameter is a view conversion?
Perhaps the Default_Value should be copied in in cases like

   with text_io; use text_io;
   procedure bad_conv1 is
     type Defaulted_Integer is new Integer
       with Default_Value => 123;

     procedure foo (x : out integer) is begin null; end;

     y : long_long_float := long_long_float'last;
      -- or, for a variation, leave y uninitialized
   begin
    foo (defaulted_integer (y));
    put_line (long_long_float'image (y));
   end;

   with text_io; use text_io;
   procedure bad_conv2 is

     type mod3 is mod 3 with default_value => 0;
     type mod5 is mod 5;

     procedure foo (xx : out mod3) is begin null; end;

     yy : mod5 := 4;
   begin
    foo (mod3 (yy));
    put_line (mod5'image (yy));
   end;

If copy-in involves converting a (frequently uninitialized) value, that would
not be good - this feature was not supposed to make programs less reliable.

****************************************************************

From: Steve Baird
Sent: Wednesday, May 1, 2013  1:37 AM

>   For an out parameter that is passed by copy, the formal
>   parameter object is created, and:
>     ...
>     For a scalar type that has the Default_Value aspect specified,
>     the formal parameter is initialized from the value of the actual,
>     without checking that the value satisfies any constraint or any
>     predicate

I just discovered this and said "ouch". How comes the formal parameter is not
initialized from the default value, which is certainly what a casual user would
expect?

(didn't find anything about this in the AARM or the AIs)

****************************************************************

From: Robert Dewar
Sent: Wednesday, May 1, 2013  7:27 AM

> I just discovered this and said "ouch". How comes the formal parameter
> is not initialized from the default value, which is certainly what a
> casual user would expect?

I agree this makes more sense (btw, it's what GNAT happens to do now I believe).

****************************************************************

From: Bob Duff
Sent: Wednesday, May 1, 2013  7:34 AM

> >   For an out parameter that is passed by copy, the formal
> >   parameter object is created, and:
> >     ...
> >     For a scalar type that has the Default_Value aspect specified,
> >     the formal parameter is initialized from the value of the actual,
> >     without checking that the value satisfies any constraint or any
> >     predicate
> >
> I just discovered this and said "ouch". How comes the formal parameter
> is not initialized from the default value, which is certainly what a
> casual user would expect?

That would make more sense, I think, but that's not how it works for records.  A
record with defaulted components gets passed in for mode 'out', so I guess
Default_Value was designed by analogy.

****************************************************************

From: Robert Dewar
Sent: Wednesday, May 1, 2013  7:38 AM

> That would make more sense, I think, but that's not how it works for
> records.  A record with defaulted components gets passed in for mode
> 'out', so I guess Default_Value was designed by analogy.

The behavior for records is a bit strange, but it is more efficient, and easy to
implement.

It was a bad idea to extend the analogy

Especially given the nasty issues it raises as noted by Steve

****************************************************************

From: Tucker Taft
Sent: Wednesday, May 1, 2013  7:47 AM

All access values are copied in so that is another reason scalars use the same
rule.

Sent from my iPhone

****************************************************************

From: Bob Duff
Sent: Wednesday, May 1, 2013  8:07 AM

> All access values are copied in so that is another reason scalars use
> the same rule.

Good point.  I'm not sure that's the best rule either, but the analogy there is
even more apt.

> Sent from my iPhone

Sent from my Dell computer.

> Especially given the nasty issues it raises as noted by Steve

Well, I can't get too excited about interactions with view conversions of
scalars.  That's a weird feature anyway.  Is it even useful?

****************************************************************

From: Randy Brukardt
Sent: Wednesday, May 1, 2013  1:00 PM

> That would make more sense, I think, but that's not how it works for
> records.  A record with defaulted components gets passed in for mode
> 'out', so I guess Default_Value was designed by analogy.

I think the analogy was with parameters of access types (which are elementary)
rather than parameters of record types. We essentially copied the model of
access types (which of course always have a default value of Null) rather than
invent a new one.

The problem, of course, is that not all scalar types have a default value, while
that is true of access types. So these sorts of problems can appear.

I think the *best* solution would be to ban view conversions between types that
differ in whether they have default_values. That would mostly eliminate the
problem; probably we should ban conversions from other kinds of types in this
case as well. After all, the only reason that these conversions exist is to
support inheritance for type derivation, and that does not require a view
conversion of a float to an integer!

The problem with the *best* solution is that it would be a contract model
violation. Thus, I would suggest instead that any such conversion raise
Program_Error (that would eliminate Steve's concern about making things more
fragile). After all, all that would have to be done to eliminate such a
conversion is to introduce a temporary.

As far as initializing with the default value, I don't see anything particularly
logical about that. You're passing an object that presumably has a value (of
something), and that value suddenly disappears even when the parameter is unused
by the routine? That's going to cause more bugs than it fixes. This sort of
structure sometimes occurs in my code:

    procedure Get_Data (From : in Some_Container;
                        Key  : in Key_Type;
                        Data : out Data_Type;
                        Found: out Boolean) is
    begin
	  if Exists (From, Key) then
             Found := True;
             Data := From (Key);
        else
             Found := False;
             -- Data unchanged here.
        end if;
    end Get_Data;

    declare
        Found : Boolean;
        Data  : Data_Type := Initial_Value;
    begin
        Get_Data (A_Container, Some_Key, Data, Found);
        -- Use Data here and ignore Found.
    end;

Changing the untouched result of Data simply because the type has a
Default_Value specified (Get_Data probably is declared in a generic, after all)
seems bizarre.

****************************************************************

From: Gary Dismukes
Sent: Wednesday, May 1, 2013  1:22 PM

> > I just discovered this and said "ouch". How comes the formal
> > parameter is not initialized from the default value, which is
> > certainly what a casual user would expect?
>
> I agree this makes more sense (btw, it's what GNAT happens to do now I
> believe).

Actually what GNAT does is deinitialize the actual.

In any case, I agree with Randy that it's a bad idea to default initialize the
formal, as you don't want an initialized actual to be deinitialized in the case
where the formal is not assigned.  (Some other solution may be needed for the
weird view conversion case, though such cases are pretty uncommon.)

****************************************************************

From: Jean-Pierre Rosen
Sent: Wednesday, May 1, 2013  1:29 PM

> As far as initializing with the default value, I don't see anything
> particularly logical about that.

I always considered that the model of out parameters is one of an uninitialized
local variable. Simple, and easy to explain.

The benefit of that is that you are telling the caller that it is perfectly safe
to call your subprogram with an uninitialized actual, precisely because you want
to initialize it.

If for some reason you use the previous value of the actual (including because
you want to return the previous value unchanged, as in your example), use an
in-out parameter.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, May 1, 2013  6:02 PM

Which means you need two subprograms, one that allows passing uninitialized
values, and one that allows leaving the value untouched. If you pass an "in out"
parameter an uninitialized value, you run the risk of a stray Constraint_Error
(a problem I know well, since most of my code has been running afoul of this
problem since Janus/Ada started strictly following the Ada 95 checking rules).
To fix this in my code, I've changed a lot of routines to take "out" parameters.
But if that change also changes the behavior of calls that pass initialized
values, then it becomes almost impossible to fix this problem short of ensuring
that every value that can be passed to a routine is always initialized (which is
a lot of useless code).

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, May 2, 2013  3:42 AM

> Which means you need two subprograms, one that allows passing
> uninitialized values, and one that allows leaving the value untouched.
> If you pass an "in out" parameter an uninitialized value, you run the
> risk of a stray Constraint_Error

It's an interesting problem... The whole issue comes from "-- Data unchanged
here.", where you want the "unchanged" to include invalid values. The heart of
the problem is that you want to pass an invalid value in, without raising
Constraint_Error. An out parameter allows you to do that, precisely on the
assumption that the invalid value cannot be used.

Rather than relying on an out parameter being transmitted, I think I'd rather
write:
   Get_Data (Cont, Key, Temp, Found);
   if Found then
      My_Data := Temp;
   end if;

At least, the expected behaviour is much more explicit to the reader.

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, May 2, 2013  3:46 AM

>> All access values are copied in so that is another reason scalars use
>> the same rule. -Tuck
> Good point.  I'm not sure that's the best rule either, but the analogy
> there is even more apt.

Access values are special. The natural behavior would be to initialize out
parameters to null, but if null is excluded, it is necessary to find a non null,
non invalid value - and the previous value of the parameter is the only such
thing available. This does not apply to scalars, especially if an explicit
default value is specified.

****************************************************************

From: Bob Duff
Sent: Thursday, May 2, 2013  8:48 AM

> Access values are special. The natural behavior would be to initialize
> out parameters to null, but if null is excluded, it is necessary to
> find a non null, non invalid value - and the previous value of the
> parameter is the only such thing available.

No, if the 'out' formal subtype excludes null, but the actual is null, we pass
in null without raising C_E.  That's exactly analogous to Steve's example with
scalars.

Same if the formal has "Predicate => Subtype_Name /= null".

>...This does not apply to scalars,
> especially if an explicit default value is specified.

The only difference is that access types always have a default, whereas scalars
have a default only if Default_Value was specified.

> Rather than relying on an out parameter being transmitted, I think I'd
> rather write:
>    Get_Data (Cont, Key, Temp, Found);
>    if Found then
>       My_Data := Temp;
>    end if;

Yes, that's what I would write, too.

****************************************************************

From: Tucker Taft
Sent: Thursday, May 2, 2013  11:09 AM

If you look in 4.6, para 56, you will see special handling for view conversions
of OUT parameters of an access type. This same paragraph should have been
modified to say the same thing for scalar types with a Default_Value specified.

>> Access values are special. The natural behavior would be to
>> initialize out parameters to null, but if null is excluded, it is
>> necessary to find a non null, non invalid value - and the previous
>> value of the parameter is the only such thing available.
>
> No, if the 'out' formal subtype excludes null, but the actual is null,
> we pass in null without raising C_E.  That's exactly analogous to
> Steve's example with scalars. ...

Right.  Access values and Scalar types with Default_Value specified should be
treated identically, as far as I can tell.  We just missed 4.6(56) when we
defined Default_Value, as far as I can tell.

The only teensy problem I see here is if the formal parameter is represented
with fewer bits than that required for the Default_Value of the actual
parameter's type.  The view conversion will be difficult to accomplish into this
smaller space.  This generally doesn't come up with access values, though I
suppose it could if one were a "fat pointer" and the other was a "thin pointer."

This sounds like a fun topic to discuss in the Berlin ARG. ;-)

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 2, 2013  5:55 PM

> If you look in 4.6, para 56, you will see special handling for view
> conversions of OUT parameters of an access type.
> This same paragraph should have been modified to say the same thing
> for scalar types with a Default_Value specified.

Thanks Tuck, for pointing this out. This seems just like a restating of the
original 6.4.1(13) -- and it's wrong even for access types because it doesn't
mention predicates and null exclusions. I suppose we need something like
6.4.1(13.1/3) as well.

But this doesn't seem to address Steve's problem, which is what if the value
simply doesn't fit into T?

...
> The only teensy problem I see here is if the formal parameter is
> represented with fewer bits than that required for the Default_Value
> of the actual parameter's type.

We already have covered this for the normal case (AARM 6.4.1(13.c/3) says "out
parameters need to be large enough to support any possible value of the base
type of T"). I recall some Tucker person bringing this up at the second St. Pete
Beach meeting at the Sirata (I think that's where it happened).

I don't think that the Default_Value of the actual has any bearing on this,
though, it's just the type of the actual and whether the *formal* has a
Default_Value specified.

> The view conversion will be difficult to accomplish into this smaller
> space.  This generally doesn't come up with access values, though I
> suppose it could if one were a "fat pointer"
> and the other was a "thin pointer."

And that's the crux of Steve's problem. What happens if the value simply can't
be made to fit (as in his Long_Long_Float'First example)?

And you are right, that this is not a new problem and doesn't specifically have
much to do with the new features -- they just made the issue a lot more likely and
probably pushed it from a pathology to a likelihood. [Note that the Janus/Ada
U2200 compiler had two kinds of access values: C Pointers that included byte
addresses (not native on a 36-bit word machine) and Ada pointers that didn't (we
always ensured that allocated and aliased objects were aligned, thus we could
use the substantially cheaper pointers). We needed the C Pointers in order to
interface to the Posix subsystem, of course. I don't know what would have
happened if someone used an "out" parameter view conversion of a C Pointer to an
Ada Pointer -- perhaps the byte address part would have been dropped on the
floor -- but it probably wasn't 100% kosher.]

> This sounds like a fun topic to discuss in the Berlin ARG. ;-)

Surely funner than Adam's anonymous access allocated task issues, although I
think we can get away with a one-word change there if we wish. (And I'm sure
we're going to prefer that to duplicating all of the anonymous access rules into
the tasking section.)

Anyway, I think this case (the larger object into the smaller out parameter) has
to be a bounded error, so that compilers can detect it. The only other
alternative is to raise an exception if the actual value doesn't fit (we're not
going to change the semantics of out parameters at this late date, even if J-P
thinks it's a good idea). And that would mean that whether an exception is
raised depends on the value and for uninitialized things, would be
unpredictable. (This has to be a Bounded Error rather than a Legality Rule as
otherwise we'd have nasty generic contract issues. I suppose a pair of Legality
and Program_Error would work as it does for accessibility but that seems more
complex than necessary.)

More importantly, the only reason this feature exists is to provide a simple
explanation of how inherited derived subprogram calls work; in all other cases,
an explicit temporary makes more sense. And the Bounded Error I'm proposing
cannot happen for a derived type, as you can't change anything that would
trigger this Bounded Error for a derived type that has derived subprograms
because of the infamous 13.1(10/3). Any case where the programmer wrote this
explicitly would be better written with an explicit temporary (so they can leave
it uninitialized if that's OK, or initialize explicitly however they need).

****************************************************************

From: Steve Baird
Sent: Friday, November 1, 2013  6:57 PM

This AI was approved in Berlin but has been reopened largely because of the
discovery of related problems involving access types (the original AI was about
scalars).

We have essentially the same problem with access types that this AI already
addresses for scalars. As with a scalar type that has a default value, we want
to ensure that the value that is passed in for an out-mode parameter of an
access type will be a valid value of the type (but perhaps not the subtype) of
the formal parameter.

In the case where the actual parameter is a view conversion, several possible
violations of this rule have been identified, as illustrated by the following
examples:

    -- a discriminant problem
    declare
       type T (D : Boolean := False) is -- no partial view
          record
             case D is
                when False => null;
                when True  => Foo : Integer;
             end case;
          end record;

       type Unconstrained_Ref is access all T;
       type Constrained_Ref is access all T (True);

       X : aliased T;
       X_Ref : Unconstrained_Ref := X'Access;

       procedure P (X : out Constrained_Ref) is
       begin
          X.Foo := 123;
            --  We don't want to require a discriminant
            --  check here.
          X := null;
       end;
    begin
       P (Constrained_Ref (X_Ref));
    end;

    -- a tag problem
    declare
       type Root is tagged null record;
       type Ext is new Root with record F : Integer; end record;

       type Root_Ref is access all Root'Class;
       type Ext_Ref is access all Ext;

       procedure P (X : out Ext_Ref) is
       begin
          X.F := 123;
            -- No tag check is performed here and
            -- we don't want to add one.
          X := null;
       end;

       R : aliased Root;
       Ptr : Root_Ref := R'Access;
    begin
       P (Ext_Ref (Ptr));
    end;

    -- an accessibility problem
    declare
       type Int_Ref is access all Integer;
       Dangler : Int_Ref;
       procedure P (X : out Int_Ref) is
       begin
          Dangler := X;
            -- No accessibility checks are performed here.
            -- We rely here on the invariant that
            -- a value of type Int_Ref cannot designate
            -- an object with a shorter lifetime than Int_Ref.
          X := null;
       end;

       procedure Q is
          Local_Int : aliased Integer;
          Standalone : access Integer := Local_Int'Access;
       begin
          P (Int_Ref (Standalone));
       end;
    begin
       Q;
       Dangler.all := 123;
    end;

Randy points out that there may also be implementation-dependent cases where a
value of one general access type cannot be meaningfully converted to another
type (e.g., given some notion of memory segments for some target, an
implementation might choose to support a smaller type which can only point into
one fixed "segment" and a larger type which can point into any "segment"; a
value of the larger type which points into the wrong segment could not be
meaningfully converted to the smaller type).

What do we want to do with these cases?

Analogous scalar cases are disallowed statically (by this AI). Dynamic checks
are required in some corner cases involving expanded instance bodies, but the
outcome of these checks is always known statically if an implementation
macro-expands instances.

Do we want to take the same approach here?

Alternatively, we could allow these cases and, at runtime, pass in null (only in
these cases that we would otherwise have to disallow).

This would be inconsistent with our treatment of scalars, but it would also be
less incompatible (compatibility was less of a concern in the scalar case
because the Default_Value aspect was introduced in Ada 2012). One could argue
that access types have a null value, scalar types do not, and that this
difference justifies the inconsistency.

Opinions?

I'd like to also mention one possible change that would apply to scalars too.
In 4.6(56), should we add ", predicate, or null exclusion" in
    without checking against any constraint {, predicate, or null exclusion} ?

This is not essential since we get this right in 6.4.1 and the proposed wording
for 4.6(56) does say "as described more precisely in 6.4.1". Perhaps the present
wording (i.e., as given in version 1.5 of the AI) is fine. TBD.

****************************************************************

From: Robert Dewar
Sent: Friday, November 1, 2013  7:10 PM

> Randy points out that there may also be implementation-dependent cases
> where a value of one general access type cannot be meaningfully
> converted to another type (e.g., given some notion of memory segments
> for some target, an implementation might choose to support a smaller
> type which can only point into one fixed "segment" and a larger type
> which can point into any "segment"; a value of the larger type which
> points into the wrong segment could not be meaningfully converted to
> the smaller type).

I don't consider this a valid implementation of general access types.
Furthermore, it concerns architectures that are of interest only in museums. I
do not think this concern should influence our thinking in any way.

At the most, I would allow this as an RM 1.1.3(6) issue

****************************************************************

From: Randy Brukardt
Sent: Friday, November 1, 2013  7:38 PM

...
> > Randy points out that there may also be implementation-dependent
> > cases where a value of one general access type cannot be
> > meaningfully converted to another type (e.g., given some notion of
> > memory segments for some target, an implementation might choose to
> > support a smaller type which can only point into one fixed "segment"
> > and a larger type which can point into any "segment"; a value of the
> > larger type which points into the wrong segment could not be
> > meaningfully converted to the smaller type).
>
> I don't consider this a valid implementation of general access types.

Why not?

> Furthermore, it concerns architectures that are of interest only in
> museums.

The x86 machine on my desk has segments. I'm pretty sure it's not in a museum.
I've used code on this machine that used segments.

The original problem came from a word-addressed machine. The C compiler used
byte addresses for its pointers, and thus C convention general access types
needed to use that representation. OTOH, Ada had no need for byte addresses
(anything aliased could be word aligned) so it used much cheaper word pointers
for its (Ada convention) general access types. Ada allows these types to be
interconverted, so the problem comes up. Admittedly, the machine on which this
was a concern is (or should be) in a museum, but I thought there still were such
machines in use.

> I do not think this concern should influence our thinking in any way.

I don't think it matters, because the problem also exists for scalar types and
there are also the other issues that Steve identifies. I'd expect all of these
problems (there are at least 6 separate problem cases) to be solved the same
way, so whether or not *one* of them is critical isn't really the issue. It's
the weight of the entire group.

> At the most, I would allow this as an RM 1.1.3(6) issue

That goes without saying, IMHO. It would be silly to insist that Ada use slower
machine instructions if it doesn't have to do so. If the Standard did say that,
I would expect any competent implementer to ignore it.

****************************************************************

From: Robert Dewar
Sent: Friday, November 1, 2013  8:13 PM

>> Furthermore, it concerns architectures that are of interest only in
>> museums.
>
> The x86 machine on my desk has segments. I'm pretty sure it's not in a
> museum. I've used code on this machine that used segments.

No modern code ever uses segments any more; the whole segmentation thing was a
giant kludge to get more than 128K of memory on the original 8086. It was a
brilliant invention, given Intel's insistence on maintaining 16-bit
addressability ("no one could possibly need more than 64K memory, but just in
case, we will split data and code to go up to 128K"). But using segmentation
once you move to 32-bit addressing is IMO total nonsense.

> The original problem came from a word-addressed machine. The C
> compiler used byte addresses for its pointers, and thus C convention
> general access types needed to use that representation. OTOH, Ada had
> no need for byte addresses (anything aliased could be word aligned) so
> it used much cheaper word pointers for it's (Ada convention) general
> access types. Ada allows these types to be interconverted, so the
> problem comes up. Admittedly, the machine on which this was a concern
> is (or should be) in a museum, but I thought there still were such machines in
> use.

Not as far as I know ... and as I said, if you have sufficiently weird
architectures, you can always appeal to AI-327.

> I don't think it matters, because the problem also exists for scalar
> types and there are also the other issues that Steve identifies. I'd
> expect all of these problems (there are at least 6 separate problem
> cases) to be solved the same way, so whether or not *one* of them is
> critical isn't really the issue. It's the weight of the entire group.

Fair enough, and indeed it would be comfortable if all these problems had a
common solution, which is what things seem to be tending toward!

>> At the most, I would allow this as an RM 1.1.3(6) issue
>
> That goes without saying, IMHO. It would be silly to insist that Ada
> use slower machine instructions if it doesn't have to do so.

These days, smooth interoperation with C and C++ is an absolute requirement in
nearly all real world Ada applications, so in practice you use the C ABI for
everything (in GNAT we even use the C++ ABI for dispatching operations, etc., so
that we can interoperate with C++ classes).

****************************************************************

From: Randy Brukardt
Sent: Friday, November 1, 2013  7:49 PM

...
> What do we want to do with these cases?
>
> Analogous scalar cases are disallowed statically (by this AI).
> Dynamic checks are required in some corner cases involving expanded
> instance bodies, but the outcome of these checks is always known
> statically if an implementation macro-expands instances.
>
> Do we want to take the same approach here?
>
> Alternatively, we could allow these cases and, at runtime, pass in
> null (only in these cases that we would otherwise have to disallow).
>
> This would be inconsistent with our treatment of scalars, but it would
> also be less incompatible (compatibility was less of a concern in the
> scalar case because the Default_Value aspect was introduced in
> Ada2012). One could argue that access types have a null value, scalar
> types do not, and that this difference justifies the inconsistency.

I don't like the idea of "deinitializing" an object to null if it fails one of
these checks. This is the same problem that caused us to decide to go with a
static check in the scalar case.

Steve apparently does not like my original suggested solution, because he never
seems to mention it even though I bring it up in every private e-mail exchange
we have.

That is, I think that such cases should be a bounded error, with the choices
being raising Constraint_Error, raising Program_Error, or working without data
loss. This would not
necessarily depend on the value in question, so an implementation could always
unconditionally raise Program_Error in such a case (hopefully with a warning
that it is doing so), or the implementation could make the non-constraint checks
on the way in and raise Constraint_Error if they fail.

For the access case in particular, simply mandating that all of the
non-constraint, non-exclusion checks are made on the way in would also work. The
main reason we didn't want to do that for scalars was the possibility of invalid
values, but for access types those can only happen from Chapter 13 stuff or from
erroneous execution (access values are always initialized). This would require a
bit of rewording in 4.6 and 6.4.1, but I think that was always the intent and
the wording just doesn't reflect it.
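The "checks on the way in" idea can be sketched like this (a hedged,
illustrative example with hypothetical names; it reuses the shape of the
accessibility example earlier in this AI):

```ada
--  Under the proposal, the type-level accessibility check of the
--  view conversion would be performed during copy-in of the out
--  parameter, instead of a dangling Int_Ref value being created
--  silently.
procedure Way_In_Demo is
   type Int_Ref is access all Integer;

   procedure P (X : out Int_Ref) is
   begin
      X := null;
   end P;

   procedure Q is
      Local : aliased Integer := 0;
      SA    : access Integer := Local'Access;
   begin
      P (Int_Ref (SA));
      --  Under the proposal, the conversion's accessibility check
      --  runs at copy-in and presumably fails here (raising
      --  Program_Error, the usual exception for failed accessibility
      --  checks in conversions), because SA designates an object with
      --  a shorter lifetime than Int_Ref.
   end Q;
begin
   Q;
end Way_In_Demo;
```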

So there are at least 4 reasonable answers here; Steve is providing a false
choice. (There is even a fifth choice, combining Bob's soft errors and some sort
of runtime check if they are ignored.)

****************************************************************

From: Tucker Taft
Sent: Sunday, November 3, 2013  9:40 AM

> ... Analogous scalar cases are disallowed statically (by this AI).
> Dynamic checks are required in some corner cases involving expanded
> instance bodies, but the outcome of these checks is always known
> statically if an implementation macro-expands instances.
>
> Do we want to take the same approach here?

Yes.

> Alternatively, we could allow these cases and, at runtime, pass in
> null (only in these cases that we would otherwise have to disallow).
>
> This would be inconsistent with our treatment of scalars, but it would
> also be less incompatible (compatibility was less of a concern in the
> scalar case because the Default_Value aspect was introduced in
> Ada2012). One could argue that access types have a null value, scalar
> types do not, and that this difference justifies the inconsistency.
>
> Opinions?

Make the rule essentially identical to the scalar case, and not worry about the
extremely obscure compatibility issue.  If anything, substituting "null" is
scarier in my view, and not really "compatible" as far as run-time semantics.

>
> I'd like to also mention one possible change that would apply to scalars too.
> In 4.6(56), should we add ", predicate, or null exclusion" in
>     without checking against any constraint {, predicate, or null exclusion}
> ?

Seems fine.

****************************************************************

From: Tucker Taft
Sent: Sunday, November 3, 2013  9:44 AM

> For the access case in particular, simply mandating that all of the
> non-constraint, non-exclusion checks are made on the way in would also work. ...

I like this idea.  Basically, ensure that, after conversion, it will be a legal
value of the *type* but not necessarily of the subtype.

I am also fine with treating them identically to scalars.

****************************************************************

From: Robert Dewar
Sent: Sunday, November 3, 2013  10:15 AM

> Make the rule essentially identical to the scalar case, and not worry
> about the extremely obscure compatibility issue.  If anything,
> substituting "null" is scarier in my view, and not really "compatible" as far
> as run-time semantics.

Do we have data on the "extremely obscure" here: how many existing programs
would be affected?

****************************************************************

From: Steve Baird
Sent: Monday, November 4, 2013  11:53 AM

> I don't like the idea of "deinitializing" an object to null if it
> fails one of these checks. This is the same problem that caused us to
> decide to go with a static check in the scalar case.

I agree with you, but "deinitializing" is one of the fall-back options if we
decide that the  compatibility issues are different enough in the access case
that the reasoning we used in the scalar case does not apply.

I favor treating access values and scalars consistently (i.e., in the way that
we have already settled on for scalars), but other alternatives do seem worth
mentioning.

> Steve apparently does not like my original suggested solution, because
> he never seems to mention it even though I mention it in every private
> e-mail he has.
>
> That is, I think that such cases should be a bounded error, with the
> choices of Constraint_Error, Program_Error, or works without data loss.

You're right, I don't like this (for the obvious portability reasons).

> For the access case in particular, simply mandating that all of the
> non-constraint, non-exclusion checks are made on the way in would also work.

I agree that converting the incoming value to the base type of the parameter
(not to the first named subtype) would work. The point of the
basetype/first-named-subtype distinction is that subtype-specific properties
(e.g., predicates, null exclusions) would be ignored while type-specific
properties (e.g., designated subtype, accessibility level) would be checked,
which is just what we need.
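The distinction can be sketched as follows (a hedged, illustrative example
with hypothetical names; the comments describe the intended effect of the
basetype-conversion approach, not settled wording):

```ada
--  A null exclusion is a subtype-specific property and would be
--  ignored on copy-in, while the designated type and accessibility
--  level are properties of the type itself and would still be
--  checked by the conversion.
procedure Basetype_Demo is
   type A_Ref is access all Integer;
   type B_Ref is access all Integer;
   subtype Safe_B is not null B_Ref;

   procedure P (X : out Safe_B) is
   begin
      X := new Integer'(0);
   end P;

   V : A_Ref := null;
begin
   --  Copy-in converts V to B_Ref's base type: the type-level checks
   --  (same designated type, compatible accessibility) pass, while
   --  the subtype-specific null exclusion is skipped, so null would
   --  be passed in without raising an exception.
   P (B_Ref (V));
end Basetype_Demo;
```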

As a variation on this approach, we could have a dynamic semantics rule that
you get copy-in if this conversion succeeds and null (instead of an exception)
if the incoming value is not convertible to the basetype. Would silently passing
in null be preferable to raising an exception?

****************************************************************

From: Tucker Taft
Sent: Monday, November 4, 2013  12:09 PM

> As a variation on this approach, we could have a dynamic semantics
> rule that you get copy-in if this conversion succeeds and null
> (instead of an exception) if the incoming value is not convertible to
> the basetype. Would silently passing in null be preferable to raising
> an exception?

No, not as far as I am concerned.  Problems like this should be advertised, not
silently ignored in my view.

****************************************************************

From: Randy Brukardt
Sent: Monday, November 4, 2013  12:21 PM

> > As a variation on this approach, we could have a dynamic semantics
> > rule that you get copy-in if this conversion succeeds and null
> > (instead of an exception) if the incoming value is not convertible
> > to the basetype. Would silently passing in null be preferable to
> > raising an exception?
>
> No, not as far as I am concerned.  Problems like this should be
> advertised, not silently ignored in my view.

I agree with Tuck here. And this is likely to be a rare case (conversion of an
out parameter in a call to an access type with different properties - remember
this is only for "out" parameters, "in out" is unchanged).

The problem with the compile-time check is that it will reject programs which
are perfectly fine (where the object is null, for instance). And while this
should be rare, this problem has been in the language back to Ada 83, so there
is a vast universe of code that could be affected. It's hard to know what the
result is when you multiply a very small number (probability of occurrence)
times a very large number (all Ada code ever written).

****************************************************************

From: Steve Baird
Sent: Monday, November 4, 2013  1:13 PM

>> No, not as far as I am concerned.  Problems like this should be
>> advertised, not silently ignored in my view.
>
> I agree with Tuck here.

As noted previously, my first choice is to be consistent with the treatment of
scalars (I think the three of us agree on this point) and deal with the issue
statically.

But let's still take a look at pass-in-null vs. raise-an-exception.

> The problem with the compile-time check is that it will reject
> programs which are perfectly fine

This argues for pass-in-null over raise-an-exception.

99% of the time (and even more often among programmers who bathe and floss
regularly) the incoming value of an out-mode parameter of an elementary type is
dead (I'm talking about "real" code here, not compiler tests).

Raising an exception, as opposed to silently passing in null, penalizes the
overwhelmingly common case.

****************************************************************
