Version 1.6 of ais/ai-10291.txt


!standard 13.01 (7)          05-10-10 AI95-00291-02/06
!standard 13.01 (18.1/1)
!standard 13.01 (21)
!standard 13.01 (24)
!standard 13.02 (06)
!standard 13.03 (18)
!standard 13.03 (22/1)
!standard 13.03 (23)
!standard 13.03 (24)
!standard 13.03 (25)
!standard 13.03 (28)
!standard 13.03 (32)
!standard 13.03 (34)
!standard 13.03 (43)
!standard 13.03 (56)
!class binding interpretation 04-10-05
!status Amendment 200Y 05-02-13
!status WG9 Approved (Letter ballot, 05-01)
!status ARG Approved 4-0-6 05-02-13
!status work item 04-11-20
!status received 04-10-05
!qualifier Error
!priority Medium
!difficulty Hard
!subject By-reference types and the recommended level of support for representation items
!summary
The recommended level of support for representation items does not preclude maintaining the invariant that all objects of a given by-reference type and all aliased objects of any given type are consistently aligned and have a common representation (ignoring extra space at the end of a composite object).
!question
The recommended level of support for representation items should not preclude maintaining the invariant that all objects of a given by-reference type and all aliased objects of any given type are consistently aligned and have a common representation (ignoring extra space at the end of a composite object).
There exist representation items which violate this rule (and which most implementations would therefore reasonably reject) but which are not excluded from the recommended level of support for representation items. Was this intended? (No.)
!recommendation
(See summary.)
!wording
Replace 13.1(7) with the following:
The representation of an object consists of a certain number of bits (the size of the object). For an object of an elementary type, these are the bits that are normally read or updated by the machine code when loading, storing, or operating-on the value of the object. For an object of a composite type, these are the bits reserved for this object, and include bits occupied by subcomponents of the object. If the size of an object is greater than that of its subtype, the additional bits are padding bits. For an elementary object, these padding bits are normally read and updated along with the others. For a composite object, padding bits might not be read or updated in any given composite operation, depending on the implementation.
Insert after 13.1(18.1/1)
A representation item that specifies an aspect of representation that would have been chosen in the absence of the representation item is said to be confirming.
Insert after 13.1(21) (that is, as the first item in the bulleted list)
A confirming representation item should be supported.
Replace bulleted-list item 13.1(24) with the following (formatted as 4 bulleted-list items and then a paragraph following the bulleted list):
* An implementation need not support a nonconfirming representation item if it could cause an aliased object or an object of a by-reference type to be allocated at a nonaddressable location or, when the alignment attribute of the subtype of such an object is nonzero, at an address that is not an integral multiple of that alignment.
* An implementation need not support a nonconfirming representation item if it could cause an aliased object of an elementary type to have a size other than that which would have been chosen by default.
* An implementation need not support a nonconfirming representation item if it could cause an aliased object of a composite type, or an object whose type is by-reference, to have a size smaller than that which would have been chosen by default.
* An implementation need not support a nonconfirming subtype-specific representation item specifying an aspect of representation of an indefinite or abstract subtype.
For purposes of these rules, the determination of whether a representation item applied to a type "could cause" an object to have some property is based solely on the properties of the type itself, not on any available information about how the type is used. In particular, it presumes that minimally aligned objects of this type might be declared at some point.
Insert after 13.2(6):
If a packed type has a component that is not of a by-reference type and has no aliased part, then such a component need not be aligned according to the Alignment of its subtype; in particular it need not be allocated on a storage element boundary.
Delete 13.3(18) [text was moved to 13.1(24)]
Following formatting of 13.3(39/1 - 48), replace 13.3(22/1 - 26) with
For a prefix X that denotes an object:
X'Alignment
The value of this attribute is of type universal_integer, and nonnegative; zero means that the object is not necessarily aligned on a storage element boundary. If X'Alignment is not zero, then X is aligned on a storage unit boundary and X'Address is an integral multiple of X'Alignment (that is, the Address modulo the Alignment is zero).
Alignment may be specified for stand-alone objects via an attribute_definition_clause; the expression of such a clause shall be static, and its value nonnegative.
For every subtype S:
S'Alignment
The value of this attribute is of type universal_integer, and nonnegative.
For an object X of subtype S, if S'Alignment is not zero, then X'Alignment is a nonzero integral multiple of S'Alignment unless specified otherwise by a representation item.
Alignment may be specified for first subtypes via an attribute_definition_clause; the expression of such a clause shall be static, and its value nonnegative.
Replace 13.3(28) with
For an object that is not allocated under control of the implementation, execution is erroneous if the object is not aligned according to its Alignment.
Insert after 13.3(32)
* An implementation need not support an Alignment specified for a derived tagged type which is not a multiple of the Alignment of the parent type. An implementation need not support a nonconfirming Alignment specified for a derived untagged by-reference type.
Delete 13.3(34)
Insert after 13.3(35)
For other objects, an implementation should at least support the alignments supported for their subtype, subject to the following:
An implementation need not support Alignments specified for objects of a by-reference type or for objects of types containing aliased subcomponents if the specified Alignment is not a multiple of the Alignment of the subtype of the object.
In 13.3(43) replace
"A Size clause should be supported for an object if ..."
with
"A nonconfirming Size clause should be supported for an object (excepting aliased elementary objects) if ..."
Insert after 13.3(56)
A nonconfirming size clause for the first subtype of a derived untagged by-reference type need not be supported.
!discussion
We do not want to require that implementations reject problematic representation items such as
    type T is array (1..32) of aliased Boolean;
    for T'Size use 32;
We just want to allow implementations to reject such beasts without running afoul of C.2(2).
Note that the term "object" includes the case of a component. Thus, for example, the allowed non-support of a representation item which could result in a misaligned object would imply permission to reject something like
    type Rec1 is record
       F : aliased Integer;
    end record;
    for Rec1'Alignment use Integer'Alignment / 2;
or
    type Rec2 is record
       F : aliased Integer;
    end record;
    for Rec2 use record
       F at Integer'Alignment / 2 range 0 .. Integer'Size - 1;
    end record;
Given an object X of subtype S, X'Alignment must be consistent with S'Alignment "unless specified otherwise by a representation item". Note that the representation item need not be an alignment clause. It could be a Pack pragma, a Component_Size specification, a record representation clause, a size clause, etc.
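As a hypothetical sketch (the names are illustrative, not from the AI), a Pack pragma can be the representation item that overrides the subtype's Alignment:

```ada
type Flag_Array is array (1 .. 8) of Boolean;
pragma Pack (Flag_Array);
Flags : Flag_Array;
-- With a Component_Size of 1, Flags (3) need not lie on a storage
-- element boundary, so Flags (3)'Alignment is zero even though
-- Boolean'Alignment is typically nonzero: here a Pack pragma, not an
-- Alignment clause, is the representation item that "specified
-- otherwise".
```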
Note that we distinguish between elementary types and composite types. Allowing for extra space at the end of a composite object is not considered to impose a burden on implementations, even for aliased or by-reference types. This is particularly important for record types because there are no standard rules for choosing the size of a record subtype. For example, some implementations round up to a multiple of the alignment of the record subtype when determining the subtype size, while others do not, deferring the padding to the allocation of particular objects or arrays of such objects.
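A hypothetical illustration of the two implementation choices (the type and the layout numbers are illustrative):

```ada
type Mixed is record
   A : Integer;    -- say 4 bytes, alignment 4
   B : Character;  -- 1 byte
end record;
-- One implementation might report Mixed'Size as 40 bits (5 bytes),
-- deferring padding to the allocation of particular objects; another
-- might round up to 64 bits, a multiple of the record's alignment.
-- Either choice is consistent with the rules discussed here.
```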
We want to allow users to pad out the size of an external record object so as to match the expectations of a different compiler or a different language, or some hardware device or coprocessor, even if other objects of the type are not always given this padded size.
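For example, as a hypothetical sketch (names and sizes are illustrative): the Size of a single object can be padded to match a foreign layout while other objects of the type keep the default size:

```ada
with Interfaces;
package Device_Map is
   type Header is record
      Kind  : Interfaces.Unsigned_8;
      Count : Interfaces.Unsigned_8;
   end record;
   -- Suppose the default object size is 16 bits, but a memory-mapped
   -- device expects a 32-bit slot. Padding just this object is allowed;
   -- other objects of type Header are not required to be 32 bits.
   Device_Header : Header;
   for Device_Header'Size use 32;
end Device_Map;
```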
We don't reiterate the allowed restriction on alignment and size for aliased and by-reference objects in each RM paragraph where it might apply. Implementations can refer to the new overarching permission to justify the restriction in any given compiler error message.
!example
!corrigendum 13.1(7)
Replace the paragraph:
The representation of an object consists of a certain number of bits (the size of the object). These are the bits that are normally read or updated by the machine code when loading, storing, or operating-on the value of the object. This includes some padding bits, when the size of the object is greater than the size of its subtype. Such padding bits are considered to be part of the representation of the object, rather than being gaps between objects, if these bits are normally read and updated.
by:
The representation of an object consists of a certain number of bits (the size of the object). For an object of an elementary type, these are the bits that are normally read or updated by the machine code when loading, storing, or operating-on the value of the object. For an object of a composite type, these are the bits reserved for this object, and include bits occupied by subcomponents of the object. If the size of an object is greater than that of its subtype, the additional bits are padding bits. For an elementary object, these padding bits are normally read and updated along with the others. For a composite object, padding bits might not be read or updated in any given composite operation, depending on the implementation.
!corrigendum 13.1(18.1/1)
Insert after the paragraph:
If an operational aspect is specified for an entity (meaning that it is either directly specified or inherited), then that aspect of the entity is as specified. Otherwise, the aspect of the entity has the default value for that aspect.
the new paragraph:
A representation item that specifies an aspect of representation that would have been chosen in the absence of the representation item is said to be confirming.
!corrigendum 13.1(21)
Insert after the paragraph:
The recommended level of support for all representation items is qualified as follows:
the new paragraph:
!corrigendum 13.1(24)
Replace the paragraph:
by:
For purposes of these rules, the determination of whether a representation item applied to a type could cause an object to have some property is based solely on the properties of the type itself, not on any available information about how the type is used. In particular, it presumes that minimally aligned objects of this type might be declared at some point.
!corrigendum 13.2(6)
Insert after the paragraph:
If a type is packed, then the implementation should try to minimize storage allocated to objects of the type, possibly at the expense of speed of accessing components, subject to reasonable complexity in addressing calculations.
the new paragraph:
If a packed type has a component that is not of a by-reference type and has no aliased part, then such a component need not be aligned according to the Alignment of its subtype; in particular it need not be allocated on a storage element boundary.
!corrigendum 13.3(18)
Delete the paragraph:
!corrigendum 13.3(22/1)
Replace the paragraph:
For a prefix X that denotes a subtype or object:
by:
For a prefix X that denotes an object:
!corrigendum 13.3(23)
Replace the paragraph:
X'Alignment
The Address of an object that is allocated under control of the implementation is an integral multiple of the Alignment of the object (that is, the Address modulo the Alignment is zero). The offset of a record component is a multiple of the Alignment of the component. For an object that is not allocated under control of the implementation (that is, one that is imported, that is allocated by a user-defined allocator, whose Address has been specified, or is designated by an access value returned by an instance of Unchecked_Conversion), the implementation may assume that the Address is an integral multiple of its Alignment. The implementation shall not assume a stricter alignment.
by:
X'Alignment
The value of this attribute is of type universal_integer, and nonnegative; zero means that the object is not necessarily aligned on a storage element boundary. If X'Alignment is not zero, then X is aligned on a storage unit boundary and X'Address is an integral multiple of X'Alignment (that is, the Address modulo the Alignment is zero).
!corrigendum 13.3(24)
Delete the paragraph:
The value of this attribute is of type universal_integer, and nonnegative; zero means that the object is not necessarily aligned on a storage element boundary.
!corrigendum 13.3(25)
Replace the paragraph:
Alignment may be specified for first subtypes and stand-alone objects via an attribute_definition_clause; the expression of such a clause shall be static, and its value nonnegative. If the Alignment of a subtype is specified, then the Alignment of an object of the subtype is at least as strict, unless the object's Alignment is also specified. The Alignment of an object created by an allocator is that of the designated subtype.
by:
Alignment may be specified for stand-alone objects via an attribute_definition_clause; the expression of such a clause shall be static, and its value nonnegative.
For every subtype S:
S'Alignment
The value of this attribute is of type universal_integer, and nonnegative.
For an object X of subtype S, if S'Alignment is not zero, then X'Alignment is a nonzero integral multiple of S'Alignment unless specified otherwise by a representation item.
Alignment may be specified for first subtypes via an attribute_definition_clause; the expression of such a clause shall be static, and its value nonnegative.
!corrigendum 13.3(28)
Replace the paragraph:
If the Alignment is specified for an object that is not allocated under control of the implementation, execution is erroneous if the object is not aligned according to the Alignment.
by:
For an object that is not allocated under control of the implementation, execution is erroneous if the object is not aligned according to its Alignment.
!corrigendum 13.3(32)
Insert after the paragraph:
the new paragraph:
!corrigendum 13.3(34)
Delete the paragraph:
!corrigendum 13.3(35)
Insert after the paragraph:
the new paragraph:
!corrigendum 13.3(42)
!comment The change really is here, once we handle the conflict with AI-51-2.
!comment !corrigendum 13.3(43)
@dprepl A Size clause should be supported for an object @dby A nonconfirming Size clause should be supported for an object (excepting aliased elementary objects)
!corrigendum 13.3(56)
Insert after the paragraph:
the new paragraph:
!ACATS test
!appendix

From: Steve Baird
Sent: Tuesday, March 26, 2002  6:39 PM

The recommended level of support for representation items should not
preclude maintaining the invariant that all objects of a given by-reference
type and all aliased objects of any given type are consistently aligned and
have a common representation.

There exist representation items which violate this rule (and which most
implementations would therefore reasonably reject) but which are not excluded
from the recommended level of support for representation clauses.

These include:

1) Alignment clauses for derived tagged types specifying a value which is
   not a multiple of the parent type's alignment (13.1(10) allows this;
   an alignment clause is not a type-related representation item).

   For example:

       type T1 is tagged
           record
               F : Long_Float;
           end record;
       for T1'Alignment use 8; -- ok

       type T2 is new T1 with null record;
       for T2'Alignment use 4; -- bad

       X2 : T2;
       pragma Import (Ada, X2);

       -- Since T1 (X2) may be passed as a parameter to a subprogram,
       -- the subprogram may not assume that its argument is 8-byte
       -- aligned. This is not good.

2) Non-confirming size or alignment clauses for untagged derived
   by-reference types.

   For example:
       type My_Tagged_Type is tagged null record;
       for My_Tagged_Type'Alignment use 4; -- ok

       type Parent is array (1 .. 10) of My_Tagged_Type;
       for Parent'Alignment use 4; -- ok

       type Child is new Parent;
       for Child'Alignment use 8; -- bad

       -- A view conversion may result in a misaligned object of the
       -- more strictly aligned type.

3) Non-confirming size specifications for objects of a by-reference type.

4) Alignment specifications for objects of a by-reference
   type (or which have aliased subcomponents), where the specified
   alignment is not a non-zero multiple of the overridden alignment value.

5) Representation items (e.g. record representation clauses,
   component_size clauses, size clauses, and alignment clauses) which would
   result in misaligned components which are either aliased, contain aliased
   subcomponents, or are of a by-reference type.

   For example:

       type U32 is new Interfaces.Unsigned_32;
       for U32'Size use 32; -- ok
       for U32'Alignment use 4; -- ok

       type R1 is
          record
              F1 : aliased U32;
              F2 : Character;
          end record;
       for R1'Size use 40; -- ok

       type R1_Vec is array (1..10) of R1;
       for R1_Vec'Component_Size use 40; -- bad

   For purposes of this discussion, an object of type T may be "misaligned"
   in either of two ways:
       1) If T'Alignment > 0, then the object is misaligned if its address is
          not a multiple of T'Alignment.
       2) Any object which is not aligned on a storage element boundary
          is considered to be misaligned, regardless of the value of
          T'Alignment (i.e. even if T'Alignment = 0).

   For example:

       type Switch_Setting is (Off, On);
       for Switch_Setting'Alignment use 0; -- ok
       for Switch_Setting'Size use 1; -- ok

       type Setting_Set is array (1..32) of aliased Switch_Setting;
       for Setting_Set'Component_Size use 1; -- bad

6) Representation items which would result in aliased or by_reference
   components having different sizes than they otherwise would have had.

   For example:
       type I16 is new Interfaces.Integer_16;
       for I16'Size use 16; -- ok
       for I16'Alignment use 2; -- ok

       type R2 is
          record
              F : aliased I16;
          end record;

       for R2 use
          record
              F at 0 range 0 .. 31; -- bad
          end record;


Pragma Pack is also problematic. In this example:

  type Vec is array (1 .. 32) of aliased Boolean;
  pragma Pack (Vec);

the Pack pragma should be accepted, but choosing a Component_Size of 8,
rather than 1, for Vec should be consistent with the recommended
level of support for pragma Pack.

These representation items could all be supported (e.g. by representing
access-to-object values and by-reference parameter values as something
other than a simple storage-unit address), but the recommended level
of support for representation items should not require such contortions.

****************************************************************

From: Robert Dewar
Sent: Tuesday, March 26, 2002  9:36 PM

I agree with all Steve's points here, and consider them obvious, in the
sense that no other interpretation makes any sense.

****************************************************************

From: Randy Brukardt
Sent: Friday, April 19, 2002  7:58 PM

I believe each example you provided. But in order to proceed, we'll need to
have references to the particular recommended level of support paragraphs and
some suggestion of how to fix them. I've created an empty AI (AI-291) with just
a question for you to fill in.

Doing this will at least cover part of the ground of the never completed
AI-51/109. I'm sure Bob Duff will be happy that you have volunteered to write
the AI for this albatross^h^h^h^h^h^h^h^h^h issue.

****************************************************************

From: Tucker Taft
Sent: Monday, September 20, 2004  9:20 PM

I claimed that the Ada 9X team had a pretty clear
model in their heads relating to representation, and
I thought it might be useful to try to communicate it.
I realize that this model was not well communicated
in the RM, and that it may not be the "right" one
as far as some are concerned.  Nevertheless, it might
be useful to try to put it into words.

THE 9X MODEL

In the Ada 95 RM, we tried to make a clear distinction
between "type-related" representation aspects and
"subtype-specific" representation aspects.  The idea
was that all objects of a given type would share the
same type-related representation aspects, but that individual
objects might differ in subtype-specific representation
aspects.  For example, for a fixed-point type, the "small"
is type-related, whereas the range is subtype-specific
(and hence can vary from object to object for a given
fixed-point type).  Perhaps "subtype-specific" might
better have been called "object-specific."

Size and Alignment are subtype-specific in Ada 95,
whereas things like array component-size, record
component layout, fixed small, are all type-related.
Subtype-specific aspects need not be the same between
objects, and in particular, need not be the same between
actual and formal.

For by-copy parameter passing, it is clear that it is
easy for the actual and formal to have different
subtype-specific aspects.  For by-copy parameter passing,
it would be bad for type-related aspects to differ for two
reasons, one because precision might be lost (e.g. if
the smalls didn't agree), and the second because
the parameter passing would be too expensive (e.g.
having to change the layout of the record).

For by-reference parameter passing, it is even more clear
that type-related aspects need to match between actual
and formal.  For subtype-specific aspects to differ,
it must mean that the formal can reasonably provide a
"view" of the actual that will not "corrupt" the actual.
For example, if the actual is an unconstrained record,
but the formal is constrained, there is no problem
so long as the incoming discriminants satisfy the
constraints of the formal.  Similarly, it is fine if
the size of the actual differs from the size of the
formal, so long as the part that is actually touched
via the formal exists in the actual.  It seems clear
that it should be possible to "touch" from the 'Address
of the formal up through formal'Size bits thereafter.
This means that the size of the actual should be greater
than or equal to the size of the formal (again, we
are talking only about by-reference parameter passing
here).

Similarly, for by-reference parameter passing, there seems
no harm if the alignment of the actual is greater than
the alignment of the formal, so long as it is an integral
multiple of the alignment of the formal.

Given the above analysis, it seems like reasonable choices
were made in Ada 95 in terms of making certain aspects
type-related, and others (Size and Alignment), subtype-specific
(or perhaps better, "specifiable for individual objects").
Nevertheless, there are still some limitations on the specification
of the subtype-specific aspects.  For example,
for scalar types, the object Size is clearly required to be
large enough to represent all the distinct in-range values.
For objects that are going to be passed by reference, the size
(and alignment) must be no less than that of any formal with
which they are compatible (i.e., for which there is no Constraint_Error).
If it is an option to pass by copy, then the compiler could choose
to allow smaller size or alignment, but if the object is
required to be passed by reference, then smaller size or alignment
is not an option.

HOLES IN THE MODEL

One part of the model which was clearly missing had to do with
aliased elementary objects.  These are elementary objects which
are effectively accessed "by-reference."  This implies they must obey
rules similar to by-reference parameter passing, requiring that the
Size and Alignment be no smaller than that expected by the
access value through which they are referenced.  It would be
fine if their alignment were greater.  If their size were
greater, it would have to be purely padding occurring at the
"end" of the object.  If there were any "significant" bits
in an area not referenced by the access value, that would be
trouble.  Because of the traditional assumption for elementary
objects that the size means something about the kind of machine
instructions used to manipulate it, and we don't like to build
in dependence on the machine being "little-endian", it seems
like the Size should match exactly between the object and
the size referenced via the access value.  It is conceivable
that a compiler could support larger aliased discrete
or fixed objects if the machine were little-endian and the
range were non-negative (so it always had trailing zero bits), but
that seems of dubious value, and clearly non-portable.  Hence, only
confirming size specifications for aliased elementary objects
seem justifiable.   As mentioned above, increasing the alignment
is always safe for an aliased object.

A second part of the model which was incomplete had to do with
conversions.  If a type is a by-reference type, and the conversion
is used as an actual parameter, the conversion is required to
also be "by-reference" (see 6.2(10)).  Based on the discussion
above, this means that an object's size must be no smaller than
any subtype or type to which it can be converted, if its type
is by-reference.

For untagged types, where conversion can go between any two
types with a common ancestor, this means that all
by-reference types with a common ancestor must have the same
size for (nameable) subtypes with the same constraints.
All objects (with the same constraints) must have a size no
smaller than this size.  Similarly, alignment requirements
must be the same for all such subtypes, and all objects must
have an alignment at least this great.

For tagged types, this means that sizes (and alignments) must
not decrease as you move from ancestor to descendant.
Objects must have a size and alignment no smaller
than that of their (nameable) subtype.

Note that AI-246 already disallowed the problematic array
conversions between types without a common ancestor, if the
type is (or might be) by-reference or limited.

SUMMARY OF THE ABOVE:

    The 9X model distinguishes type-related from subtype-specific
    representation aspects, where subtype-specific might be better
    thought of as aspects that can vary from one object of the
    type to another.  Size and Alignment are the two subtype-specific
    representation aspects.

    Any object can be safely given a larger alignment than that
    required by its type, provided it is an integral multiple
    of the required alignment.  (Note that we don't require supporting
    arbitrarily large alignments, but rather choose to allow
    implementations to define a maximum alignment for each class
    of types.)

    Non-aliased objects of a non-by-reference type may safely have any size
    and alignment (subject to representing all allowed values).
    (Note that we don't require support for arbitrarily large elementary
    objects, but rather choose to allow implementations to limit
    sizes of elementary objects to the "natural" sizes on the machine.)

    Only confirming size specifications need be supported for aliased
    objects of an elementary type (and anything else seems unwise).

    Composite aliased objects need to have a size that is no
    smaller than that of any (constrained) designated subtype
    through which they can be viewed.

    By-reference objects need to have a size that is no smaller than
    that of any (constrained) formal subtype through which they can
    be viewed, either directly or via a type conversion.

    Subtype Size (and Alignment) clauses are somewhat more limiting:
    For an untagged by-ref type, all its descendants must have the same
    size and alignment for "corresponding" subtypes.  (There was already a
    rule that disallowed type-related rep-clauses for untagged derived
    by-reference types).  For tagged types, sizes/alignments of subtypes
    should not shrink as we go from ancestors to descendants.

    We rely on AI-246 to disallow array conversions between unrelated
    by-reference types.

I hope the above is of some help in understanding the model
the 9X team was trying to use for representation, and also the two
holes in the model relating to aliased elementary objects
and by-reference conversion.  Neither hole seems to imply
we need to disallow Size clauses on composite objects that specify an
overly generous size.  There does seem to be a need to
require confirming Size and Alignment clauses on derived (sub)types
of an untagged by-reference type, and confirming size specifications
on elementary aliased objects.

****************************************************************

From: Robert Dewar
Sent: Saturday, October  2, 2004  7:55 PM

Tucker Taft wrote:

> THE 9X MODEL
>
> In the Ada 95 RM, we tried to make a clear distinction
> between "type-related" representation aspects and
> "subtype-specific" representation aspects.  The idea
> was that all objects of a given type would share the
> same type-related representation aspects, but that individual
> objects might differ in subtype-specific representation
> aspects.  For example, for a fixed-point type, the "small"
> is type-related, whereas the range is subtype-specific
> (and hence can vary from object to object for a given
> fixed-point type).  Perhaps "subtype-specific" might
> better have been called "object-specific."

Yes indeed, especially since the Ada 95 terminology
invites you to think you can change the subtype
specific attributes for subtypes :-)
>
> Size and Alignment are subtype-specific in Ada 95,
> whereas things like array component-size, record
> component layout, fixed small, are all type-related.
> Subtype-specific aspects need not be the same between
> objects, and in particular, need not be the same between
> actual and formal.

It's really too bad they cannot be specified for subtypes
(the GNAT extensions are useful here!)

> For by-reference parameter passing, it is even more clear
> that type-related aspects need to match between actual
> and formal.  For subtype-specific aspects to differ,
> it must mean that the formal can reasonably provide a
> "view" of the actual that will not "corrupt" the actual.
> For example, if the actual is an unconstrained record,
> but the formal is constrained, there is no problem
> so long as the incoming discriminants satisfy the
> constraints of the formal.  Similarly, it is fine if
> the size of the actual differs from the size of the
> formal, so long as the part that is actually touched
> via the formal exists in the actual.  It seems clear
> that it should be possible to "touch" from the 'Address
> of the formal up through formal'Size bits thereafter.
> This means that the size of the actual should be greater
> than or equal to the size of the formal (again, we
> are talking only about by-reference parameter passing
> here).

Yes, that works for size, but not for alignment ...

> Given the above analysis, it seems like reasonable choices
> were made in Ada 95 in terms of making certain aspects
> type-related, and others (Size and Alignment), subtype-specific
> (or perhaps better, "specifiable for individual objects").

Note that this distinction was already implicit in Ada 83
from the rules about derived types and primitive subprograms.


> HOLES IN THE MODEL
>
...
> A second part of the model which was incomplete had to do with
> conversions.  If a type is a by-reference type, and the conversion
> is used as an actual parameter, the conversion is required to
> also be "by-reference" (see 6.2(10)).  Based on the discussion
> above, this means that an object's size must be no smaller than
> any subtype or type to which it can be converted, if its type
> is by-reference.

Should also say something about alignment here (alignment is
the twin sister of size, please always include both of them
for all social occasions on which either of them is
mentioned).

> I hope the above is of some help in understanding the model
> the 9X team was trying to use for representation, and also the two
> holes in the model relating to aliased elementary objects
> and by-reference conversion.  Neither hole seems to imply
> we need to disallow Size clauses on composite objects that specify an
> overly generous size.  There does seem to be a need to
> require confirming Size and Alignment clauses on derived (sub)types
> of an untagged by-reference type, and confirming size specifications
> on elementary aliased objects.

Thanks Tuck, a very helpful note, since I agree this
confuses many people.

I think this part of the model is in fact fine (though a bit
incomplete in not allowing specification of sizes of subtypes).

By the way, the mention I made of difficulties in Ada 95
from the handling of size relates to e.g. the decision
to make Natural'Size be 31. This differs from the choice of
nearly all Ada 83 compilers and is in effect a fairly
severe incompatibility, and because of the incompleteness
mentioned above, it cannot easily be fixed.
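The incompatibility mentioned here can be seen in a small sketch
(the object size shown is the typical choice, not a language requirement):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Size_Demo is
   X : Natural := 0;
begin
   --  Ada 95 defines Natural'Size as the minimum number of bits
   --  needed for the range 0 .. Integer'Last, i.e. 31 with a
   --  32-bit Integer, whereas most Ada 83 compilers reported 32.
   Put_Line (Natural'Image (Natural'Size));
   --  The stand-alone object X still typically occupies 32 bits.
   Put_Line (Natural'Image (X'Size));
end Size_Demo;
```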

****************************************************************

From: Tucker Taft
Sent: Sunday, February 13, 2005 11:03 AM

This contains Randy's alternative to 13.1(7).

---

In the interests of bringing this discussion to a higher level, let me
propose an alternative to Tucker's wording for 13.1(7):

    The *representation* of an object consists of a certain number of bits
    (the *size* of the object). For an object of an elementary type, these
    are the bits that are normally read or updated by the machine code
    when loading, storing, or operating-on the value of the object. For an
    object of a composite type, these are the bits that are reserved for
    this object, and that could be read or updated by the machine code
    when loading, storing, or operating-on the value of the object.
    If the size of an elementary object is greater than that of its subtype,
    the additional bits are normally read and updated along with the others.
    If the size of a composite object is greater than that of its subtype,
    the additional bits could be read and updated along with the others.

    AARM Notes: In the case of composite objects, we want the implementation
    to have the flexibility to either do operations component-by-component, or
    with a block operation covering all of the bits. We don't want to specify
    a preference in the wording.

    In the case of a properly aligned, contiguous object whose size is a
    multiple of the storage unit size, no other bits should be read or
    updated as part of operating on the object. We don't say this normatively
    because it would be difficult to define "properly aligned" or
    "contiguous".
    End AARM Notes.

I'd change 13.1(7.f) to read something like:

    A composite object may include *padding bits* that are not part of any
    subcomponent of the object. The value of any padding bits is not specified
    by the language.

This wording gets rid of the notion of padding bits and gaps altogether;
there is no need to talk about such things. They exist; what happens with
them we don't care and certainly don't specify. There is no other use of
them in the Standard, so we don't need to define them. The AARM note only
exists to acknowledge the possibility of bits that don't belong to any
component. (Note that elementary types don't have padding bits, as it is
expected that all bits will be read or written. That's not my idea of
padding!)
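To illustrate the padding-bit notion (a sketch assuming one typical
layout; nothing here is required by the RM):

```ada
type Pair is record
   C : Character;  --  8 bits
   B : Boolean;    --  1 bit
end record;
--  A typical implementation rounds Pair up to 16 bits, so the
--  7 bits following B belong to no component: padding bits.
--  Under the proposed wording, whether a whole-record operation
--  reads or updates those bits is the implementation's choice
--  (block copy versus component-by-component).
```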

This wording still seems to have problems with non-contiguous objects,
because one reading implies that no other bits could be involved with a
composite object. There is a "To Be Honest" in the existing AARM that says
that this isn't intended to cover them. That seems a bit clunky. I had
thought that there was a requirement that Size not include any discontiguous
pieces, or that it was implementation-defined in that case, but I can't find
any such rule (in either direction). Perhaps this should explicitly add that
Size is implementation-defined for non-contiguous objects. Or that this is
only talking about contiguous representations. Or something.

****************************************************************

From: Robert I. Eachus
Sent: Sunday, February 13, 2005  7:33 PM

>This wording still seems to have problems with non-contiguous objects,
>because one reading implies that no other bits could be involved with a
>composite object. There is a "To Be Honest" in the existing AARM that says
>that this isn't intended to cover them. That seems a bit clunky. I had
>thought that there was a requirement that Size not include any discontiguous
>pieces, or that it was implementation-defined in that case, but I can't find
>any such rule (in either direction). Perhaps this should explicitly add that
>Size is implementation-defined for non-contiguous objects. Or that this is
>only talking about contiguous representations. Or something.

Hmm...  I think that you also need to talk about implementation-defined
fields in composite objects.  This I think is orthogonal to the two
options presented.  (Is the tag an implementation-defined field or not?)

IMHO if the SPA is supported, a 'Size clause should need only to
include room for the components of a constrained object. The implication
is that you don't want the indexes or discriminants to be part of each
object.  For any subtype they must be somewhere, which makes them
non-contiguous as well.

However, for objects of a tagged type, the tag needs to be part of the
object. So the wording for both options should allow that.

****************************************************************

From: Robert Dewar
Sent: Sunday, February 13, 2005  9:07 PM

> IMHO if the SPA is supported, a 'Size clause should need only to include
> room for the components of a constrained object. The implication is that
> you don't want the indexes or discriminants to be part of each object.
> For any subtype they must be somewhere, which makes them non-contiguous
> as well.

I really don't think it is worth catering for the possibility of
constrained subtypes omitting discriminants, too much trouble for
too little gain.

****************************************************************

From: Robert I. Eachus
Sent: Monday, February 14, 2005  4:05 AM

> I really don't think it is worth catering for the possibility of
> constrained subtypes omitting discriminants, too much trouble for
> too little gain.

Good point.  What I really had in mind was constrained subtypes of types
without default discriminants.  We could probably spend a lot of time
discussing cases and solutions, etc.  But the point of my post was that
the option eventually selected should discuss the issue to clarify if
and when it is required.  More important though, is that there should be
an agreement on whether or not such fields (and implementation-defined
fields) are included in the 'Size.

The issue of reading varying length records into objects with
unconstrained types is usually best dealt with by overriding 'Read and
'Write.  The issue I am concerned about is where you are doing
memory-mapped I/O and the stream model is not appropriate.  For example,
devices where there are a number of memory-mapped registers, or where
you have a randomly accessible buffer.

Yes, it would be nice if an implementation supported constrained
subtypes with implicit bounds or discriminants in these cases.  It isn't
vital.  But if, whatever the final decision in this area is, the writeup
makes it clear that the specified size does or does not include
separately (or implicitly) stored discriminants, it will tend to reduce
variability between implementations, and help users.

Having said all that, I think that the best answer is that implicit or
non-contiguous fields should not be considered when a user specifies
'Size.  That way when porting code from an implementation that does
support non-contiguous or implicit fields to one that doesn't, the code
will be rejected at compile time.
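The question being debated can be made concrete with a sketch
(hypothetical type, illustrative only):

```ada
type Message (Length : Natural) is record
   Data : String (1 .. Length);
end record;

subtype Msg_80 is Message (Length => 80);
--  For a constrained object of subtype Msg_80, must a 'Size
--  clause (and the reported 'Size) count the Length
--  discriminant, or may an implementation store Length
--  separately (non-contiguously) and leave it out of 'Size?
```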

****************************************************************

From: Robert Dewar
Sent: Monday, February 14, 2005  4:13 PM

> Good point.  What I really had in mind was constrained subtypes of types
> without default discriminants.  We could probably spend a lot of time
> discussing cases and solutions, etc.  But the point of my post was that
> the option eventually selected should discuss the issue to clarify if
> and when it is required.  More important though, is that there should be
> an agreement on whether or not such fields (and implementation-defined
> fields) are included in the 'Size.

Most certainly the discriminants should be included in 'Size

As for other impl-defined fields, are you sure you want to mandate
the implementation here, could be onerous for an implementation
that depends on impl defined fields for holding offsets.

> Yes, it would be nice if an implementation supported constrained
> subtypes with implicit bounds or discriminants in these cases.  It isn't
> vital.  But if, whatever the final decision in this area is, the writeup
> makes it clear that the specified size does or does not include
> separately (or implicitly) stored discriminants, it will tend to reduce
> variability between implementations, and help users.

How does this reduce variability unless there really are implementations
that allow separately or implicitly stored discriminants? Are there such?
>
> Having said all that, I think that the best answer is that implicit or
> non-contiguous fields should not be considered when a user specifies
> 'Size.  That way when porting code from an implementation that does
> support non-contiguous or implicit fields to one that doesn't, the code
> will be rejected at compile time.

Yes, that sounds reasonable, though if there are no such implementations, this
could perfectly well be left moot.

****************************************************************

From: Robert I. Eachus
Sent: Tuesday, February 15, 2005  9:36 PM

> Yes, that sounds reasonable, though if there are no such implementations,
> this
> could perfectly well be left moot.

I felt I should respond to this with a real example.  Unfortunately, I
have to get a bit into how modern radar systems work to explain how the
mutant records feature in the Alsys compiler was used to significantly
improve ease of coding on at least one radar project.

In a modern multimode radar the data collected changes with the major
modes and with other settings.  Let me go off topic a bit here to
explain what goes on.  Think of a radar with a rotating antenna, in the
old days the data from the returns were displayed directly on a PPI,
plot position indicator.  A CRT was used with the beam sweeping out from
the center of the screen for each pulse emitted, and rotating around the
center of the screen with each pulse.   Modern radars don't work
anything like that, but it gives us some terminology.  One 360 degree
rotation (or side to side and back oscillation) of the antenna is a
sweep, and each line on the screen is a scan line.

In a modern radar, especially in pulse doppler mode, there may be many
transmitter pulses per scan line, and each successive scan line will
often use a different set of parameters for the pulse sequence.  This
allows the distance to targets to be disambiguated, and using another
set of parameters for the same scan line or set of scan lines,
eliminates blind spots.

Since the radar needs to be constantly changing the pulse settings for
pulse doppler mode, most PD radars also support other major modes, and
the sky can be divided into sectors with different radar modes or
parameters used in each sector.  (In fact in some radars, the number of
scans per degree of rotation can also change.)

Now to get back to Alsys mutant records.  On the particular radar
project I am thinking of, the computer controlling the radar transferred
instructions for the next sweep and then received the data in a
dual-ported memory called the frame buffer.  Obviously the data in the
frame buffer, although it had already been partially processed by
electronics hardware, was laid out in a way that is determined by the
hardware.  The software of course has to know exactly where to look for
the data, and if you treat the data for each scan as an Ada variant
record, the discriminants are not stored in the frame buffer.

Here is where the Alsys mutant records came to the rescue.  The
discriminants for any scan were pushed onto the stack, in practice for a
full sweep at a time, and all that had to be done was to use a special
memory manager to provide the right pointer for each scan's data.  (The
project was using an Ada 83 version of Alsys when I was involved, but
the specialized allocator maps perfectly to the concept of storage pools
in Ada 95.)

Doing this hugely simplified the software.  Rather than have to pass
lots of parameters around, passing a scan to a procedure required
only one parameter, independent of the complexity of the data structure
for that particular type of scan.  Well actually some of the procedures
for pulse doppler mode needed to process adjacent scans, so they had two
parameters.  I won't go into all of the complex processing going on; get
a good book on radar theory, and be sure you are familiar with the
Chinese Remainder Theorem, Z and Fourier transforms, convolution and so
on, so that all you need to learn is how it all applies to radar. ;-)

So there is a specific case where (implementation dependent) use of
Alsys mutant records probably saved 15% during the software development
effort.  If the data were misaligned, there was only one small section
of code where it could happen. Usually we then had to go talk to the
hardware engineers and find out what had changed, but it was possible to
reach that conclusion much quicker. Similarly when the hardware people
changed the details, and remembered to tell us about it, things had to
be changed in only one place.  Note also that this was definitely a case
where the random access nature of the frame buffer was relied on.  Some
of the data laid down by the hardware was silently discarded, such as
during the blanking intervals when the receiver gain was turned to zero
and the transmitter would be swamping any real data anyway.

More important was the ability to simultaneously look at adjacent pairs
of scans, or in some cases to look at data that was milliseconds to
seconds old when attempting to resolve confusion between targets seen
in the same scan.  The data from successive pulses is aggregated in the
first pass at detecting  targets in pulse doppler mode.  However, in
addition to the disambiguation using successive scans, you can reexamine
the original data and attempt to resolve the target distance directly.
Not all that precise, but if you are trying to combine two sets of
detections from adjacent scans, knowing that a detection was at least
50 miles away but not more than 100 can be a big help in the
disambiguation.

****************************************************************

From: Robert Dewar
Sent: Tuesday, February 15, 2005  10:21 PM

Not sure what to make of this

a) Alsys mutant records were VERY limited; they applied only to
stand-alone objects .. components were always handled by allocating
a fixed amount of space.

b) If you really need pointers, that's easily programmed, with
very little overhead in terms of programming effort.

c) No full Ada 95 implementation is using this approach anyway

d) What has this to do with separated discriminants (Alsys
mutant records did not store discriminants separately.)
****************************************************************

From: Randy Brukardt
Sent: Wednesday, February 16, 2005  12:34 PM

> Yes, that sounds reasonable, though if there are no such implementations, this
> could perfectly well be left moot.

I don't know how that could work for implicit components. Certainly we don't
want to have to lie about 'Size for objects; we want it to reflect the
actual memory allocated. (At least I hope it is supposed to reflect actual
memory; otherwise I can't imagine what value it would have.) Since implicit
components (like array descriptors) can be in the middle of a record (it
would essentially ban many implementation techniques if we were to say
otherwise - because you would have a situation where multiple things would
need to be allocated last), I can't imagine how we could not include them in
'Size.

OTOH, non-contiguous fields could be left out. It's almost a coin toss as to
which is the most useful. Leaving them out means that the 'Size is an
accurate representation of the footprint of an object (the address of the
following object would always be the address of the first object plus the
size). OTOH, including non-contiguous fields in 'Size means that 'Size is an
accurate representation of the memory used by the object. These are really
two different questions, and it's unfortunate that we don't have two
different attributes to answer these (that is another reason that we need a
standard 'Object_Size, which we won't get, sadly).

Ada 95 seems to lean toward including the non-contiguous fields in 'Size,
and (less clearly) leaving out implicit ones, which IMHO is the worst
possible decision. (Which is usually the case with Ada 95 and 'Size.)

> b) If you really need pointers, that's easily programmed, with
> very little overhead in terms of programming effort.

Give me a break! You could equally well say that about by-reference
parameters and unconstrained arrays (indeed, C says exactly this). But Ada's
model is to not make programmers use *explicit* pointers in most cases. I
think it is an abomination that implementers have been allowed to avoid
supporting discriminant dependent array components non-contiguously (at
least as an option), especially in the face of a clear demand for the
feature (users are told that compilers are allowed to allocate the max, and
go away please).

> c) No full Ada 95 implementation is using this approach anyway

I presume that this is another dig at Janus/Ada, which uses a non-contiguous
approach for allocating all objects (including components) with a dynamic
size. While it's fair to say that Janus/Ada isn't complete when it comes to
generics (a number of things aren't implemented there), it isn't for most of
the rest of the language. This area changed little from Ada 83 (other than
the introduction of storage pools), and we certainly implement everything in
this area. While it probably has some bugs (what doesn't), and a known
storage leak issue (which of course has nothing to do with the language
semantics), I don't see any valid reason to exclude it from the discussion.

****************************************************************

From: Robert Dewar
Sent: Wednesday, February 16, 2005  3:22 PM

> Ada 95 seems to lean toward including the non-contiguous fields in 'Size,
> and (less clearly) leaving out implicit ones, which IMHO is the worst
> possible decision. (Which is usually the case with Ada 95 and 'Size.)

I disagree; in both respects this would be an entirely bizarre interpretation.
Non-contiguous objects are a puzzle for Size in any case, and in practice
I just don't think this situation arises.

As for implicit fields, of course they have to be included in Size,
nothing else makes sense.

>>c) No full Ada 95 implementation is using this approach anyway
>
>
> I presume that this is another dig at Janus/Ada, which uses a non-contiguous
> approach for allocating all objects (including components) with a dynamic
> size. While it's fair to say that Janus/Ada isn't complete when it comes to
> generics (a number of things aren't implemented there), it isn't for most of
> the rest of the language. This area changed little from Ada 83 (other than
> the introduction of storage pools), and we certainly implement everything in
> this area. While it probably has some bugs (what doesn't), and a known
> storage leak issue (which of course has nothing to do with the language
> semantics), I don't see any valid reason to exclude it from the discussion.

To me the storage leak issue is the point. I don't see that it is practical
to adopt this implementation approach successfully, and I observe that no
implementation has done so. Alsys could not compose for components, and
Janus has a storage leak that is, as far as I can see, pretty fundamental.

I really think this implementation approach is impractical, and I certainly
don't think the standard need bother about it much. In practice the wording
in this kind of area is not going to prohibit such implementation approaches
anyway (leaving, by the way, one of the most serious non-portabilities in the
language, though in practice you can pretty much count on the allocate-max
strategy these days, the old Alsys compiler having long disappeared). If there
are large-scale users of Janus, then that would be an exception, but as long
as the standard does not completely rule out the Janus approach then it's
OK (though certainly the intention of the standard should be to rule out
the storage leak solution).

****************************************************************

From: Randy Brukardt
Sent: Wednesday, February 16, 2005  6:55 PM

...
> >While it probably has some bugs (what doesn't), and a known
> > storage leak issue (which of course has nothing to do with the language
> > semantics), I don't see any valid reason to exclude it from the discussion.
>
> To me the storage leak issue is the point. I don't see that it is practical
> to adopt this implementation approach successfully, and I observe that no
> implementation has done so. Alsys could not compose for components, and
> Janus has a storage leak that is, as far as I can see, pretty fundamental.

It's not fundamental at all. The original problem was that Ada 83 didn't
have the notion of storage pools, and because of that, we couldn't figure
out how to deallocate the memory properly. That's not a problem with the Ada
95 compiler (the storage pool passed into assignment is the correct one for
allocations and deallocations); the only issue is that fixing this has had a
lower priority than finishing the Ada 95 implementation and fixing bugs that
cause things to not work.

The overhead is limited only to mutable types (as you can tell at
compile-time whether this support is required), so it isn't very widespread.
The costs could be reduced further by using allocate-on-the-stack when
possible (certainly during initialization) and by using a never-shrink
strategy.

> I really think this implementation approach is impractical, and I certainly
> don't think the standard need bother about it much. In practice the wording
> in this kind of area is not going to prohibit such implementation approaches
> anyway (leaving, by the way, one of the most serious non-portabilities in the
> language, though in practice you can pretty much count on the allocate-max
> strategy these days, the old Alsys compiler having long disappeared). If there
> are large-scale users of Janus, then that would be an exception, but as long
> as the standard does not completely rule out the Janus approach then it's
> OK (though certainly the intention of the standard should be to rule out
> the storage leak solution).

Certainly, we don't intend to encourage implementations to have storage
leaks. It just hasn't been that important to fix it (it only happens in
relatively obscure cases). But I do think this approach is practical;
indeed, if I was starting Janus/Ada over from scratch, there are a lot of
things that I would do differently (generics and the way temporaries are
handled in our intermediate code are just two). But this implementation is
not one of them.

I agree about the non-portability, but I've never understood why it has been
allowed to persist. It's not especially difficult to support this without
the need to make users specify arbitrary limits which may have nothing to do
with their algorithms. I can understand supporting allocate-to-the-max,
especially in the context of restriction No_Implicit_Allocation, but there
is no real problem with supporting both  so that the majority of users who
do not have issues with heap use can use this feature as it was intended -
without arbitrary limitations.

****************************************************************

From: Robert Dewar
Sent: Wednesday, February 16, 2005  8:15 PM

> I agree about the non-portability, but I've never understood why it has been
> allowed to persist. It's not especially difficult to support this without
> the need to make users specify arbitrary limits which may have nothing to do
> with their algorithms. I can understand supporting allocate-to-the-max,
> especially in the context of restriction No_Implicit_Allocation, but there
> is no real problem with supporting both  so that the majority of users who
> do not have issues with heap use can use this feature as it was intended -
> without arbitrary limitations.

Well in our environment, we consider No_Implicit_Allocation to be a required
default, and we have never had any customers interested in the dynamic
allocation, so it doesn't have low priority for us, it has no priority.

To me, the whole business of non-contiguous objects is ugly and messy
and better avoided, so this is certainly not something I would consider
implementing (unless of course some customer came along with a big money
bag :-)

****************************************************************

From: Robert I. Eachus
Sent: Wednesday, February 16, 2005  11:33 PM

> To me, the whole business of non-contiguous objects is ugly and messy
> and better avoided, so this is certainly not something I would consider
> implementing (unless of course some customer came along with a big money
> bag :-)

Sigh!   There are two very separate issues here.  One is what
implementations actually do with respect to implicit and non-contiguous
fields. For the most part, I think that this should be left to the
implementations.  If GNAT and RR have different approaches in this area,
that is a good thing for potential customers.  If you need a particular
implementation, select a vendor who supports it.  If some vendor wants
to satisfy everyone, they can support multiple choices via compile-time
options or pragmas.

However, none of that affects the fact that the Reference Manual should
be clear on how 'Size and implicit and non-contiguous fields interact.
This is where the uniformity is needed.  If there is a need for a 'Size
like attribute that treats non-contiguous and implicit fields
differently, that is again a place for implementation specific
attributes, switches or pragmas.  If the RM stays silent on how these
are treated, then users have no alternative to experimentation in
determining what the implementation actually does.

Just a sentence saying that "It is implementation-defined whether
'Size includes implicit or non-contiguous fields." would at least
require that implementors document what they do.  Not the best choice,
but certainly better than nothing.

>OTOH, non-contiguous fields could be left out. It's almost a coin toss as to
>which is the most useful. Leaving them out means that the 'Size is an
>accurate representation of the footprint of an object (the address of the
>following object would always be the address of the first object plus the
>size). OTOH, including non-contiguous fields in 'Size means that 'Size is an
>accurate representation of the memory used by the object. These are really
>two different questions, and its unfortunate that we don't have two
>different attributes to answer these (that is another reason that we need a
>standard 'Object_Size, which we won't get, sadly).

I agree, including about the 'Object_Size.  The problem in the
non-contiguous case is that often multiple objects can share the same
non-contiguous descriptors.  For a (contrived) example, you could have a
type with 23 discriminants and no defaults that only needed four bytes
for the object-specific data.  If a 'Size representation clause
restricting the objects to 64 bits (eight bytes), or some
implementation-specific pragma, resulted in an implementation choice of
a pointer to the instance data plus the object-specific data, it would
be silly for the implementation to report the 'Size as 800.  If another
implementation accepted a 'Size of 32 that searched the current stack
for the instance data, that's okay too.  (I think that would be very
aggressive, but if an implementation wanted to support it, fine.  Or an
implementation might only support a 'Size of 32 if the object was not
nested in a subprogram, i.e., was in a library package or in a nested
sub-package.)

Again though, I am opposed, even in option 2, to requiring
implementations to support *all* of this, or any particular
implementation of it.  What I am saying is that the RM needs to specify
what an implementation should do with respect to 'Size if it does
support such things.

****************************************************************

From: Tucker Taft
Sent: Thursday, February 17, 2005  1:55 AM

>> Ada 95 seems to lean toward including the non-contiguous fields in 'Size,
>> and (less clearly) leaving out implicit ones, which IMHO is the worst
>> possible decision. (Which is usually the case with Ada 95 and 'Size.)
>
> I disagree, in both respects this would be an entirely bizarre
> interpretation...

I agree with Robert here.  If anything, RM95 is crystal clear
here (which, when it comes to 'Size, is admittedly quite rare ;-):

Paragraph 13.3(56) says:

     * For a subtype implemented with levels of indirection, the Size
       should include the size of the pointers, but not the size of
       what they point at.
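For example, one implementation strategy this rule addresses can be
sketched as follows (the indirect layout is an implementation choice,
not a requirement):

```ada
type Holder (D : Natural := 0) is record
   S : String (1 .. D);  --  discriminant-dependent component
end record;
--  An implementation may allocate S indirectly, storing only a
--  pointer (or offset) in the record itself.  Per 13.3(56),
--  Holder'Size should then count the pointer, but not the
--  separately allocated characters that it designates.
```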

****************************************************************

