Version 1.3 of ais/ai-00195.txt


!standard 13.13.1 (00)          98-04-04 AI95-00195/02
!class binding interpretation 98-03-27
!status received 98-03-27
!reference AI95-00108
!reference AI95-00145
!priority High
!difficulty Hard
!subject Streams
!summary 98-04-04
When the predefined Input attribute creates an object, this object undergoes initialization and finalization.
In determining whether a stream-oriented attribute has been specified for a type, the normal visibility rules are applied to the attribute_definition_clause.
For a derived type which is limited, the attributes Read and Write are inherited, and the attributes Input and Output revert to their predefined definition (i.e. they cannot be called). (This amends AI95-00108.)
In the profiles of the stream-oriented attributes, the notation "italicized T" refers to the base type for a scalar type, and to the first subtype otherwise.
For an untagged derived type with new discriminants that have defaults, the predefined stream-oriented attributes read or write the new discriminants, not the old ones. (This amends AI95-00108.)
The predefined Read attribute for composite types with defaulted discriminants must ensure that, if exceptions are raised by the Read attribute for some discriminant, the discriminants of the actual object passed to Read are not modified. This may require the creation of an anonymous object, which undergoes initialization and finalization.
The predefined Read attribute for composite types with defaulted discriminants must raise Constraint_Error if the discriminants found in the stream differ from those of the actual parameter to Read, and this parameter is constrained.
If S is a subtype of an abstract type, an attribute_reference for S'Input is illegal unless this attribute has been specified by an attribute_definition_clause.
The number of calls performed by the predefined implementation of the stream-oriented attributes to the Read and Write operations of the stream type is unspecified, but there must be at least one such call for each top-level call to a stream-oriented attribute.
The predefined stream-oriented attributes for a scalar type shall only read or write the minimum number of stream elements required by the first subtype of the type. Constraint_Error is raised if such an attribute is passed (or would return) a value outside the range of the first subtype.
In an attribute_definition_clause for a stream-oriented attribute, the name shall not denote an abstract subprogram.
!question 98-04-04
1 - RM95 13.13.2(27) states that S'Input "creates an object (with the bounds or discriminants, if any, taken from the stream), initializes it with S'Read, and returns the value of the object."
Does the verb "initialize" in this sentence refer to the entire initialization process mentioned in RM95 3.3.1(18) and 7.6(10)? (yes) In particular, if S is a controlled subtype, or if it contains controlled components, is the Initialize subprogram called? (yes) Is the Finalize subprogram called when the intermediate object is finalized? (yes) For a record type whose components have initial values, are these values evaluated? (yes)
2 - RM95 13.13.2(36) states that "an attribute_reference for one of these attributes is illegal if the type is limited, unless the attribute has been specified by an attribute definition clause."
If some stream-oriented attribute has been specified in a private part, and we are at a point that doesn't have visibility over that private part, is a reference to the attribute legal? (no)
3 - Let T be a limited type with an attribute_definition_clause for attribute Read, and D a type derived from T, and assume that there is no attribute_definition_clause for D'Read. Is a usage of D'Read legal? (yes)
4 - The definition of the profiles of S'Read, S'Write, S'Input and S'Output given in RM95 13.13.2 uses the notation "italicized T" for the type of the Item parameter. What is the meaning of "italicized T" in this context? (T'Base for scalars, first subtype otherwise.)
5 - AI95-00108 states (in the !discussion section) that "for untagged derived types, there is no problem for the derived type inheriting the stream attributes."
This doesn't seem clear if the derived type includes a known discriminant part. Consider:
        type Parent (D1, D2 : Integer := 1) is ...;
        type Child (D : Integer := 2) is new Parent (D1 => D, D2 => D);
Clearly Parent'Write writes two discriminant values. How many discriminants does Child'Write write? (one)
6 - RM95 13.13.2(9) states that for a record type, the predefined S'Read reads the components in positional aggregate order. However, the language doesn't seem to specify what happens when exceptions are raised by the calls to the Read attribute for the components. Consider for example the following type declarations:
        type T1 is range ...;
        type T2 is range ...;

        type R (D1 : T1 := ...; D2 : T2 := ...) is
                record
                        ...
                end record;
Say that attribute_definition_clauses have been given for T1'Read and T2'Read, and consider a call to R'Read. Assume that, during this call, an exception is raised by T2'Read. Is the discriminant X.D1 modified? (no)
7 - Consider a call to T'Read where T is a type with defaulted discriminants. If the discriminants found in the stream have values different from those of the discriminants of the object passed to T'Read for the Item parameter, and that object is constrained, is Constraint_Error raised? (yes)
8 - If T is an abstract type, is the function T'Input abstract? (no, but it cannot be called)
9 - RM95 13.13.1(1) states that "T'Read and T'Write make dispatching calls on the Read and Write procedures of the type Root_Stream_Type." Is the number of those calls specified? (no)
10 - For a scalar type T, the second parameter of the predefined stream attributes is of type T'Base. Given the same type declaration for T, different compilers may choose different base ranges, and therefore write different numbers of storage units to the stream. This compromises portability, and makes it difficult to use streams to match a file format defined externally to Ada.
11 - In an attribute_definition_clause for a stream attribute, is it legal to give a name that denotes an abstract subprogram? (no)
!recommendation 98-04-04
See summary.
!wording 98-04-04
See summary.
!discussion 98-04-04
1 - It seems logical to make Input follow as closely as possible the semantics of code that could be written by a user, so the implementation of S'Input should obtain the discriminants or bounds from the stream, and then declare an object with appropriate discriminants or bounds, as in:
        declare
                Anon : S (<discriminants or bounds taken from the stream>);
        begin
                S'Read (..., Anon);
        end;
Accordingly, the initialization described in RM95 3.3.1(8-9) and RM95 7.6(10) takes place when Anon is declared, and the finalization described in RM95 7.6.1 takes place when Anon disappears (and before returning from the call to S'Input).
Note that as part of initialization, compiler-specific fields are initialized as required by the implementation (and as permitted by AARM 3.3.1(14.a)).
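As an illustration, consider a hypothetical controlled type (the package and names below are invented for the example). A call to Ctrl'Input evaluates the default expressions of the components, calls Initialize on the anonymous object, and calls Finalize on it before returning:

```ada
with Ada.Finalization;
package P is
   type Ctrl is new Ada.Finalization.Controlled with record
      Data : Integer := 0;  -- default expression: evaluated when Ctrl'Input
   end record;              -- creates its anonymous object
   procedure Initialize (X : in out Ctrl);  -- called during that initialization
   procedure Finalize   (X : in out Ctrl);  -- called before Ctrl'Input returns
end P;
```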
2 - Consider for example:
        package P is
                type T is limited private;
        private
                procedure Read (Stream : ...; Item : out T);
                for T'Read use Read;
        end P;
        with P;
        procedure Q is
                X : P.T;
        begin
                P.T'Read (..., X);      -- Illegal?
        end Q;
The call to P.T'Read is illegal, because Q doesn't have visibility over the private part of P, which contains the attribute_definition_clause for attribute Read. On the other hand, at a place that has visibility over the private part of P (and that comes after the attribute_definition_clause) a reference to T'Read is legal. This rule is necessary to preserve the privacy of private types.
Note that it is the location of the attribute_definition_clause that counts, not that of the subprogram specified in that clause. Thus, if the procedure Read above were moved to the visible part of P, a reference to P.T'Read would still be illegal (but a reference to P.Read wouldn't).
3 - AI95-00108 states that "for a type extension, the predefined Read attribute is defined to call the Read of the parent type, followed by the Read of the non-inherited components, if any, in canonical order."
This rule doesn't work for limited (tagged) types, because the non-inherited components might include protected objects or tasks for which the predefined Read and Write attributes cannot be called.
For limited derived types (tagged or not), the only sensible rule is that Read and Write are inherited "as is". This is consistent with what happens with the operator "=". On the other hand the attributes Input and Output cannot be inherited, as explained in the discussion of AI95-00108. Therefore, these attributes must revert to their predefined definition, which means that they cannot be called, as stated in RM95 13.13.2(36).
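For example (a sketch in the style of the original question; My_Read is an invented name):

```ada
        type T is limited record ... end record;
        procedure My_Read
          (Stream : access Ada.Streams.Root_Stream_Type'Class;
           Item   : out T);
        for T'Read use My_Read;

        type D is new T;
        -- D'Read is inherited from T "as is" and may be called.
        -- D'Input and D'Output revert to their predefined definitions;
        -- since D is limited, a reference to either is illegal
        -- per RM95 13.13.2(36).
```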
4 - AI95-00145 specifies the meaning of the notation "italicized T" for operators as follows:
"The italicized T shown in the definitions of predefined operators means:

     - T'Base, for scalars
     - the first subtype, for tagged types
     - the type without any constraint, in other cases"
In the case of stream-oriented attributes the notation "italicized T" must be consistent with the parameter subtype required for attribute_definition_clauses. If we chose the same rule as for operators, we would have a discrepancy in the case of constrained untagged types, and this would unnecessarily complicate the static and dynamic semantics.
When one of the stream-oriented attributes is specified by an attribute definition clause, RM95 13.13.2(36) states that "the subtype of the Item parameter shall be the base subtype if scalar, and the first subtype otherwise."
Therefore, in the parameter profiles for the stream-oriented attributes in section 13.13, the notation "italicized T" means:
     - T'Base, for scalars
     - the first subtype, in other cases
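For instance, for a scalar type and a constrained untagged array type, the predefined profiles expand as follows (a sketch; the type names are invented):

```ada
        type Small is range 1 .. 10;          -- scalar
        type Line  is new String (1 .. 80);   -- constrained untagged array

        -- Resulting profiles of the predefined Write attributes:
        --
        --   procedure Small'Write
        --     (Stream : access Ada.Streams.Root_Stream_Type'Class;
        --      Item   : in Small'Base);      -- T'Base for a scalar
        --
        --   procedure Line'Write
        --     (Stream : access Ada.Streams.Root_Stream_Type'Class;
        --      Item   : in Line);            -- the first subtype otherwise
```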
5 - The inheritance rule given in AI95-00108 should only apply to those attributes that have been specified for the parent type.
If this rule were applied to the predefined stream-oriented attributes, it would require, in the example given, reading or writing two discriminants, because the predefined Read and Write attributes of type Parent do read or write two discriminants. But that would be inconsistent with the rule given in RM95 13.13.2(9): "the Read or Write attribute for each component is called in canonical order," since D1 and D2 are not components of type Child.
Furthermore, definiteness can be changed by type derivation, and the dynamic semantics of Read and Write depend on definiteness. Consider the following modification of the original example:
        type Parent (D1, D2 : Integer) is ...;
        type Child (D : Integer := 2) is new Parent (D1 => D, D2 => D);
In this case the predefined stream-oriented attributes for type Parent do not read or write the discriminants, so applying the inheritance rule of AI95-00108 would cause the stream-oriented attributes for Child to not read or write any discriminant, which doesn't make sense.
Therefore, RM95 13.13.2(9) must have precedence, and the predefined stream-oriented attributes for Child only read or write exactly one discriminant, D.
The underlying model is that the predefined stream-oriented attributes are created anew for each type declaration, based on the structure of the type, much like predefined operators.
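Applied to the original example, this model gives:

```ada
        type Parent (D1, D2 : Integer := 1) is record ... end record;
        type Child  (D : Integer := 2) is new Parent (D1 => D, D2 => D);

        -- Child'Write writes the single discriminant D, then the other
        -- components in canonical order (RM95 13.13.2(9)); it does not
        -- write D1 and D2, which are not components of Child.
```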
6 - The problem mentioned in the question only exists for a type R that is passed by-reference to R'Read: obviously if the type is passed by copy, an exception raised by R'Read cannot affect the original object.
In the case of a type which is passed by-reference, we must consider two cases:
- If the exception in T2'Read is due to the failure of some
language-defined check, RM95 11.6(6) explains that the object or its parts may become abnormal, so we don't have to specify what happens to the discriminants.
- If the exception in T2'Read is due to an explicit raise, the object
obviously doesn't become abnormal, and therefore we must preserve the integrity of its discriminants. In other words, either all discriminants are updated (if the calls to the Read attributes for the discriminants were successful) or none (if any call to a Read attribute for a discriminant raised an exception).
This model requires an implementation to read the discriminants, create an anonymous object using the given discriminants and assign that object to the formal parameter Item. The normal initialization and finalization take place for this anonymous object.
Strictly speaking, the use of an anonymous object is only required for a type with defaulted discriminants which is passed by-reference, if the actual parameter of Read is not constrained. However, an implementation is free to use anonymous objects in other cases.
Use of an anonymous object is only required for discriminants. An implementation is free to read the other components directly in the Item parameter. For example, if we change the original example as follows:
        type R is
                record
                        C1 : T1;
                        C2 : T2;
                end record;
then if T2'Read raises an exception when reading component C2, it is unspecified if C1 is modified or not.
7 - When the type has defaulted discriminants, the predefined Read attribute must read them from the stream. It can be the case that the actual object passed to Read is constrained. In this case, the discriminants found in the stream may or may not match those of the actual. If they don't match, Constraint_Error is raised, and this is a Discriminant_Check.
It is unspecified whether this effect is achieved by assigning a temporary object, as explained in #6 above, or by other means.
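The model described in #6 and #7 can be sketched as follows for the type R of the question (Predefined_Read is an invented name standing for the predefined R'Read; the component reads are elided):

```ada
        procedure Predefined_Read
          (Stream : access Ada.Streams.Root_Stream_Type'Class;
           Item   : out R)
        is
           V1 : T1;
           V2 : T2;
        begin
           T1'Read (Stream, V1);  -- read both discriminants first; if an
           T2'Read (Stream, V2);  -- exception is raised here, Item is untouched
           declare
              Anon : R (D1 => V1, D2 => V2);  -- normal initialization and
           begin                              -- finalization apply to Anon
              ...  -- read the remaining components into Anon
              Item := Anon;  -- whole-object assignment: Discriminant_Check
                             -- raises Constraint_Error if Item is constrained
                             -- and the discriminants don't match
           end;
        end Predefined_Read;
```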
8 - If T is an abstract type, calling the function T'Input would effectively create an object whose tag designates T, which is absurd. We could decide that T'Input is abstract, but it seems simpler to say that any attribute_reference for this attribute is illegal, by analogy with the rule stated in RM95 13.13.2(36) for limited types.
9 - Surely the user could count the calls to the Read and Write operations for the stream type by writing a perverse implementation of these operations, but it seems that performance should have precedence: for the predefined implementation of String'Write, requiring a call to Ada.Streams.Write for each character would be a big performance hit, with no real benefits.
Therefore, the number of calls to the Read and Write operations is unspecified, and implementations are free (and in fact advised) to do internal buffering. However, we don't want to allow an implementation to buffer all stream output and do only one call to Write when the program terminates: the user must be able to assume reasonable properties regarding the underlying buffering mechanism. That's why we require at least one call to Read or Write for each top-level call to a stream-oriented attribute. In other words, an implementation should not perform buffering across top-level calls.
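A user could observe the number of calls with a counting stream type along these lines (a sketch; the required Read override and the bodies are elided):

```ada
        type Counting_Stream is new Ada.Streams.Root_Stream_Type with record
           Write_Calls : Natural := 0;
        end record;
        procedure Write (Stream : in out Counting_Stream;
                         Item   : in Ada.Streams.Stream_Element_Array);
        -- (Read must be overridden as well; the body of Write
        -- increments Stream.Write_Calls.)

        -- After  String'Write (S'Access, "hello");  this ruling allows
        -- Write_Calls to be anywhere from 1 to 5, but not 0.
```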
10 - Consider the declaration:
type T is range 1 .. 10;
This declaration "defines an integer type whose base range includes at least the values [1 and 10] and is symmetric around zero, excepting possibly an extra negative value," as explained in RM95 3.5.4(9).
Based on this rule (and assuming typical hardware), an implementation might choose an 8-, 16-, 32- or 64-bit base type for T.
RM95 13.13.2(17) advises the implementation to "use the smallest number of stream elements needed to represent all values in the base range of the scalar type" when reading or writing values of type T.
Clearly this is a portability issue: if two implementations use (as is typical) 8-bit stream elements, but have different rules for selecting base types, the number of elements read from or written to a stream will differ. This makes it very hard to write stream operations that comply with an externally defined format.
In the above case, it would seem reasonable to read or write only the minimum number of stream elements necessary to represent the range 1 .. 10. This would remove the dependency on the base type selection, and make it easier to write portable stream operations. (There is still the possibility that different implementations would choose different sizes for stream elements, but that doesn't seem to happen in practice on typical hardware.)
The only issue with that approach is that the stream-oriented attributes for scalar types have a second parameter of type T'Base, e.g.:
procedure S'Write (Stream : access Ada.Streams.Root_Stream_Type'Class;
Item : in T'Base);
So one might call T'Write with the value 1000 for the Item parameter, and this might exceed the range representable in the stream. However, this usage is non-portable in the first place (because it depends on the choice of base range), so it doesn't seem important to preserve it. In fact any attempt at reading or writing a value outside the range of the first subtype is highly suspicious.
Based on this reasoning, the following rules are added. Note that these rules are Dynamic Semantics rules, not Implementation Advice:
- The predefined stream-oriented attributes for a scalar type T shall
only read or write the minimum number of stream elements necessary to represent the first subtype. If S is the first subtype, the number of stream elements read from or written to the stream is exactly:
(S'Size + Stream_Element'Size - 1) / Stream_Element'Size
- If Write or Output is called with a value of the Item parameter outside
the range of the first subtype, Constraint_Error is raised. This check is a Range_Check.
- If the value extracted from the stream by Read or Input is outside the
range of the first subtype, Constraint_Error is raised. This check is a Range_Check.
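Assuming 8-bit stream elements and the typical minimal 'Size for first subtypes, the formula gives (the declarations below are invented for the example):

```ada
type T1 is range 1 .. 10;              -- T1'Size = 4;  (4 + 7) / 8  = 1 element
type T2 is range -2**15 .. 2**15 - 1;  -- T2'Size = 16; (16 + 7) / 8 = 2 elements
type T3 is range -2**23 .. 2**23 - 1;  -- T3'Size = 24; (24 + 7) / 8 = 3 elements

V : T1'Base := 100;  -- legal if T1'Base is at least 8 bits wide
-- T1'Write (S, V) then raises Constraint_Error (a Range_Check):
-- 100 is outside the range of the first subtype T1, even though
-- it is a valid value of T1'Base.
```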
11 - Obviously it should not be possible to perform a non-dispatching call to an abstract subprogram (stream-oriented attributes are always called in a non-dispatching manner). Therefore, we have two options:
- Make the attribute_definition_clause illegal.
- Make the calls (explicit or implicit) illegal.
The second option is a significant implementation burden, and allowing the attribute_definition_clause only to then reject all calls doesn't seem to do any good. That's why the first option was preferred.
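For example (a sketch; the names are invented):

```ada
type Root is abstract tagged null record;
procedure Write
  (Stream : access Ada.Streams.Root_Stream_Type'Class;
   Item   : in Root) is abstract;
for Root'Write use Write;  -- illegal: Write denotes an abstract subprogram
```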
!appendix

!section 13.13.1
!subject Various issues with the stream-oriented attributes
!reference RM95 13.13.1
!reference RM95 13.13.2
!reference AI95-00108
!from Pascal Leroy 97-09-02
!reference 1997-15783.a Pascal Leroy 1997-9-2>>
!discussion

1 - RM95 13.13.2(27) states that S'Input "creates an object (with the bounds
or discriminants, if any, taken from the stream), initializes it with S'Read,
and returns the value of the object."

Does the verb "initialize" in this sentence refer to the entire initialization
process mentioned in RM95 3.3.1(18) and 7.6(10)? In particular, if S is a
controlled subtype, or if it contains controlled components, is the Initialize
subprogram called?  Is the Finalize subprogram called when the intermediate
object is finalized?  For a record type whose components have initial values,
are these values evaluated?

It seems that the simplest model is that S'Input declares a local object,
which is then passed to S'Read, as in:

                X : S (<discriminants taken from the stream>);
        begin
                S'Read (..., X);

If we don't do that, the object passed to S'Read might have uninitialized
access values, or uninitialized compiler-specific fields (dopes, offsets, tag,
etc.).

2 - RM95 13.13.2(36) states that "an attribute_reference for one of these
attributes is illegal if the type is limited, unless the attribute has been
specified by an attribute definition clause."

If some stream-oriented attribute has been specified in a private part, and we
are at a point that doesn't have visibility over that private part, is a
reference to the attribute legal? For example:

        package P is
                type T is limited private;
        private
                procedure Read (Stream : ...; Item : out T);
                for T'Read use Read;
        end P;

        with P;
        procedure Q is
                X : P.T;
        begin
                P.T'Read (..., X);      -- Legal?
        end Q;

One hopes that the call to T'Read is illegal, otherwise it would break the
privacy of private types.

3 - Let T be a limited type with an attribute_definition_clause for attribute
Read, and D a type derived from T, and assume that there is no attribute
definition clause for D'Read:

        type T is limited record ... end record;
        for T'Read use ...;

        type D is new T;

Is a usage of D'Read legal or not?  In other words, shall we consider that,
for the purpose of checking RM95 13.13.2(36), "an attribute has been
specified" for D?

If the answer is "yes", how do the inheritance rules stated in AI95-00108
apply in the case where T is tagged, and D has a component of a task type in
its record extension part?  How is an implementation expected to read that
component?

4 - The definition of the profiles of the predefined S'Read, S'Write, S'Input
and S'Output given in RM95 13.13.2 uses the notation "italicized T" for the
type of the Item parameter.  AI95-00145 explains that:

"The italicized T shown in the definitions of predefined operators means:

     - T'Base, for scalars
     - the first subtype, for tagged types
     - the type without any constraint, in other cases"

(Note that the above sentence says "operators", so strictly speaking it
doesn't apply to attributes; it seems that the wording of this AI should be
broadened.)

When one of these attributes is specified by an attribute definition clause,
RM95 13.13.2(36) states that "the subtype of the Item parameter shall be the
base subtype if scalar, and the first subtype otherwise."

There is a problem because these definitions don't coincide in the case of
constrained array types.  Consider:

        type T is new String (1 .. 10);

It appears that the type of the Item parameter of the predefined
stream-oriented attributes is "T without any constraint" (an anonymous
unconstrained array type), while the type of the Item parameter for a
user-defined stream-oriented subprogram is T.

In particular, does the presence or absence of an attribute definition clause
have any bearing on the legality of the calls?  For example, consider the
legality of the 'others' choice in an array aggregate; it depends on the
existence of an applicable index constraint, and therefore on whether the type
of the parameter is constrained or not:

        package P is
                type T is new String (1 .. 10);
                procedure Write (...; Item : in T);
        private
                for T'Write use Write;
        end P;

        with P;
        procedure Q is
        begin
                P.T'Write (..., (others => 'a')); -- Legal?
        end;

Is the above call legal? If yes, would it become illegal if the
attribute_definition_clause were removed?

A possible model is to say that an attribute_definition_clause changes the
body of the Read attribute, but not its profile.  It's as if the predefined
Read was just a wrapper calling the user-specified subprogram.

5 - AI95-00108 states (in the !discussion section) that "for untagged derived
types, there is no problem for the derived type inheriting the stream
attributes."

This doesn't seem clear if the derived type includes a known discriminant
part.  Consider:

        type Parent (D1, D2 : Integer := 1) is ...;
        type Child (D : Integer := 2) is new Parent (D1 => D, D2 => D);

Clearly Parent'Write writes two discriminant values.  It would seem that
Child'Write should only write one discriminant value, which contradicts the
simple inheritance rule given in the AI.

6 - RM95 13.13.2(9) states that for a record type, the predefined S'Read reads
the components in positional aggregate order.  However, the RM95 doesn't seem
to specify what happens when exceptions are raised by the calls to the Read
attribute for the components.  Consider for example the following type
declarations:

        type T1 is range ...;
        type T2 is range ...;

        type R (D1 : T1 := ...; D2 : T2 := ...) is
                record
                        ...
                end record;

Say that attribute_definition_clauses have been given for T1'Read and
T2'Read, and consider the call:

        X : R;
        ...
        R'Read (..., X);

Assume that an exception is raised by T2'Read.  Is the discriminant X.D1
modified?  That would be unpleasant if there were components depending on this
discriminant!

Consider a different example, where there are no discriminants:

        type T1 is ...;
        type T2 is ...;

        type R is
                record
                        C1 : T1;
                        C2 : T2;
                end record;

Assume that an exception is raised by T2'Read.  Is the component X.C1
modified?

It would seem that we should stick to the notion that discriminants are only
modified by an assignment of entire objects.  This probably requires an
intermediate object in the case of discriminated types.  However, it would
seem quite expensive to require the creation of such an intermediate object
for types that don't have discriminants.

Note that if we adopt the idea that an intermediate object is needed (at least
in some cases) then we must define if initialization and finalization of this
object take place.

7 - Consider a call to T'Read where T is a type with defaulted discriminants.
 If the discriminants found in the stream have values different from those of
the discriminants of the object passed to T'Read for the Item parameter, and
that object is constrained, is Constraint_Error raised?

Note that if we adopt the model that an intermediate object is needed in this
case, then the constraint check performed when assigning the intermediate
object to the actual parameter will raise Constraint_Error.

8 - If T is an abstract type, is the function T'Input abstract?  One hopes so,
otherwise it makes it possible to create objects of an abstract type.

9 - RM95 13.13.1(1) states that "T'Read and T'Write make dispatching calls on
the Read and Write procedures of the type Root_Stream_Type."

Is the number of those calls specified?  For example, for the predefined
implementation of String'Write, is an implementation required to call
Ada.Streams.Write for each character? or is it allowed to perform internal
buffering and to call Ada.Streams.Write only once for the entire string?
 Obviously the user could tell the difference by writing a perverse
implementation of the Write subprogram, but it would seem that performance
considerations should have the priority in this case.

_____________________________________________________________________
Pascal Leroy                                    +33.1.30.12.09.68
pleroy@rational.com                             +33.1.30.12.09.66 FAX

****************************************************************

!section 13.13.2(17)
!subject Stream size of 16 bit integers
!reference RM95 13.13.2 (17)
!reference 1998-15826.a Stephen Leake 98-03-14
!reference 1998-15827.a Tucker Taft 1998-3-17
!from Randy Brukardt 98-03-18
!keywords base type, size clause, stream representation
!reference 1998-15828.a Randy Brukardt  1998-3-19>>
!discussion

We just ran into this problem.  We were trying to use Streams to read
Microsoft Windows bitmap files.  Since the file format is defined external
to Ada, and is not very friendly to Ada, it is fairly difficult.  We tried
various stream solutions that worked on GNAT, but not on ObjectAda, because
of the 16-bit vs. 32-bit problem.  Since we can't control the stream
representation, we can't use Streams to portably read this externally
defined file format.

We ended up reading an array of Storage_Elements, then converting that with
Unchecked_Conversion.  Yuk!

This problem should be solved; otherwise, Ada 95 will not be able to
portably read files with a definition outside of Ada.

                        Randy.

****************************************************************

From: 	Tucker Taft[SMTP:stt@inmet.com]
Sent: 	Friday, April 10, 1998 4:28 PM
Subject: 	AI-195; Stream representation for scalar types

At the recent ARG meeting, we discussed resolving AI-195 by introducing
the notion of "stream base range" rather than simply "base range"
to determine the default representation for scalar types in a stream.
The stream base range would be determined strictly by the first subtype
'Size, rounded up to some multiple of the stream element size in bits.

This seems to me like a good solution to the problem that int-type'Base'Size
might always be 32-bits on modern RISC machines, even though users 
expect integer types with first subtype 'Size of 9..16 or 1..8 bits to
be represented in the stream with 16 or 8 bits per item, respectively.

I would like to make this change to our compiler sooner rather than later.
Are there any "morning after" concerns about this approach?

Also, it would be nice to settle soon on the details for cases like

   type Three_Bytes is range -2**23 .. +2**23-1;

Presuming that a stream element is 8 bits, should Three_Bytes'Write
write 3 bytes or 4?  I.e., when rounding up the first subtype 'Size
to a multiple of Stream_Element'Size, should we round up to some
power-of-2 times the Stream_Element'Size, or just the next
multiple?

I have used first-subtype'Size above intentionally, presuming that
if the user specifies a first-subtype'Size, then we would use that
specified 'Size to affect the number of stream elements per item.

-Tuck

****************************************************************

From: 	Robert Dewar[SMTP:dewar@gnat.com]
Sent: 	Friday, April 10, 1998 4:37 PM
Subject: 	Re: AI-195; Stream representation for scalar types

<<This seems to me like a good solution to the problem that int-type'Base'Size
might always be 32-bits on modern RISC machines, even though users
expect integer types with first subtype 'Size of 9..16 or 1..8 bits to
be represented in the stream with 16 or 8 bits per item, respectively.
>>

Very few machines are in this category by the way, so this is a marginal
concern.

What machine are you thinking of here?

I am a bit dubious about this change ...

****************************************************************

From: 	Tucker Taft[SMTP:stt@inmet.com]
Sent: 	Friday, April 10, 1998 4:47 PM
Subject: 	Re: AI-195; Stream representation for scalar types

> What machine are you thinking of here?

For essentially all modern RISC machines, all arithmetic is done in
32 bits, and the only overflow bit or signal available relates
to overflowing 32 bits.  Presuming 'Base'Size reflects the size of 
arithmetic intermediates, 'Base'Size might very well be 32 for all integer 
types, even for integer types declared with tiny ranges.

> I am a bit dubious about this change ...

Please elaborate.  

Note that this is a real problem now, because some Ada 95 compilers
are using 32 bits per item for all integer types in streams, even for
those whose first subtype'Size is less than or equal to 16 bits.
This is because 'Base'Size is 32 bits to reflect the register size,
as seems reasonable.  Even among "CISC" machines, there are few
these days that actually provide any kind of efficient 16- or 8-bit
arithmetic, so presumably if overflow is being detected, it is
associated with exceeding 32-bit values, not 16- or 8-bit values.

-Tuck

****************************************************************

From: 	Robert Dewar[SMTP:dewar@gnat.com]
Sent: 	Friday, April 10, 1998 5:46 PM
Subject: 	Re: AI-195; Stream representation for scalar types

<<For essentially all modern RISC machines, all arithmetic is done in
32 bits, and the only overflow bit or signal available relates
to overflowing 32 bits.  Presuming 'Base'Size reflects the size of
arithmetic intermediates, 'Base'Size might very well be 32 for all integer
types, even for integer types declared with tiny ranges.
>>

It seems a bad choice to me to make all the base types 32 bits. Because of
the permission for intermediate values to exceed the base range, there
really is no reason to do this.

<<Note that this is a real problem now, because some Ada 95 compilers
are using 32 bits per item for all integer types in streams, even for
those whose first subtype'Size is less than or equal to 16 bits.
This is because 'Base'Size is 32 bits to reflect the register size,
as seems reasonable.  Even among "CISC" machines, there are few
these days that actually provide any kind of efficient 16- or 8-bit
arithmetic, so presumably if overflow is being detected, it is
associated with exceeding 32-bit values, not 16- or 8-bit values.
>>

I agree that is a very bad choice, and is one of the reasons why I think
it is a bad choice to have a single 32-bit base type.

The statement about CISC machines is incorrect: the x86 provides the
same efficiency for 8-bit arithmetic with overflow detection as for
32-bit arithmetic with overflow detection, and 16-bit arithmetic
with overflow detection is only very slightly less efficient.

... so the presumably here is wrong.

I really don't see the issue here. If you have

  type x is range -128 .. +127;

it is appropriate to use a one-byte base type on nearly all machines (the
now-obsolete early Alphas and the now-obsolete AMD 29K are the only
exceptions among general-purpose RISC machines).

If you have

  y,z : x;

and you write

  y := z + 35;

then the code to check for constraint error is exactly the same for the
case of x being derived from a 32-bit base type as if it is derived from
an 8-bit base type.

We seem to be asked here to patch up the language to deal with what basically
sounds like just a wrong decision in choice of base types. That is why I
am dubious about the need for the extra complexity.

Tuck, can you be more clear on why it ever makes sense to follow this
approach of having a single 32-bit base type?

****************************************************************

From: 	Tucker Taft[SMTP:stt@inmet.com]
Sent: 	Friday, April 10, 1998 6:00 PM
Subject: 	Re: AI-195; Stream representation for scalar types

The ARG discussed this after you left, and concluded roughly what
I presented.  I was confirming their decision.  The basic point is
that base ranges are possibly based on what is efficient to store
for intermediates.  This is very different from what would be efficient
for a stream representation.  The Patriot is a 24-bit machine, but it
makes sense for stream elements to be 8 bits on that machine, and for
only 8 bits to be used for "8-bit" integer types.  Nevertheless,
'Base'Size is definitely 24 bits for such machines.

On many RISC machines, loading, storing, and passing objects of less
than 32 bits can be less efficient.  For such machines, it makes
sense for 'Base'Size to be 32 bits even for tiny ranges.

The point is to divorce 'Base'Size from the stream element representation.
Connecting the two seems like a mistake, particularly since there is
no guarantee that you *won't* get constraint error when writing a value
stored in a variable of the 'Base subtype, since it is allowed to be
outside of 'Base'Range in any case (since it is unconstrained).
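Note that Ada 95 already lets a program divorce the two itself, by
specifying the stream attributes and thereby pinning the representation
independently of 'Base'Size. A spec-only sketch (the names Small,
Write_Small, and Portable_Streams are invented):

```ada
with Ada.Streams;
package Portable_Streams is
   type Small is range 0 .. 200;

   procedure Write_Small
     (Stream : access Ada.Streams.Root_Stream_Type'Class;
      Item   : in Small);
   for Small'Write use Write_Small;
   --  A body for Write_Small could emit exactly one stream element
   --  per item, whatever the compiler picked for Small'Base'Size.
end Portable_Streams;
```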

-Tuck
****************************************************************

From: 	Robert Dewar[SMTP:dewar@gnat.com]
Sent: 	Friday, April 10, 1998 6:15 PM
Subject: 	Re: AI-195; Stream representation for scalar types

<<On many RISC machines, loading, storing, and passing objects of less
than 32 bits can be less efficient.  For such machines, it makes
sense for 'Base'Size to be 32 bits even for tiny ranges.
>>

Please cite details, I don't see this. It is certainly not true for
MIPS, SPARC, HP, Power, x86, Alpha (the new Alpha), ...

I would be interested in Tuck backing up this claim, because the need for
a new language feature really depends on the validity of this claim, which
seems dubious to me!

****************************************************************

From: 	Robert A Duff[SMTP:bobduff@world.std.com]
Sent: 	Saturday, April 11, 1998 8:06 AM
To: 	dewar@gnat.com
Subject: 	Re: AI-195; Stream representation for scalar types

I'm astonished that Robert thinks T'Base should be less than the 32-bit
range (assuming it's big enough), for most 32-bit architectures.  (OK,
on a 386, an 8-bit base range makes sense, but that's unusual.)  It
seems to me that T'Base should reflect the range of intermediate results
-- that was certainly the intent.

A useful idiom in Ada 95 is:

    type Dummy_Count is range 1..10_000;
    subtype Count is Dummy_Count'Base range 1..Dummy_Count'Base'Last;
    Number_Of_Widgets: Count := 1;

which means "I want Count to be a positive integer, at least up to
10_000, but there's no harm in giving me more than 10_000, so please use
the most efficient thing."

On most machines, I would expect Count'Last to be 2**31-1 (even though
in-memory variables could be stored in 16 bits).  Any other value for
Count'Last will result in an unnecessary (software) constraint check on:

    Number_Of_Widgets := Number_Of_Widgets + 1;

I see no reason for Dummy_Count'Base being -2**15..2**15-1.  Why, for
example, is that any more sensible than -10_000..10_000, which is the
smallest range allowed for the base range by the RM?

- Bob

****************************************************************

From: 	Robert Dewar[SMTP:dewar@gnat.com]
Sent: 	Saturday, April 11, 1998 8:27 AM
Subject: 	Re: AI-195; Stream representation for scalar types

<<I'm astonished that Robert thinks T'Base should be less than the 32-bit
range (assuming it's big enough), for most 32-bit architectures.  (OK,
on a 386, an 8-bit base range makes sense, but that's unusual.)  It
seems to me that T'Base should reflect the range of intermediate results
-- that was certainly the intent.
>>

It is not surprising that Bob Duff is surprised by this, because it is 
closely related to the whole business of the "wrong" size stuff in the RM.

It is a significant mistake for an implementation to have objects of a
subtype occupy, by default, less space than objects of the type. This is
because it is highly incompatible with nearly all Ada 83 implementations.

Consider:

   type x is record
     a : Natural;
     b : Natural range 1 .. 10;
     c : Character;
   end record;

Nearly all Ada 83 compilers will allocate 32 bits for b.

Now in Ada 95, we are forced to consider the size of the type of b to be
4 bits, and it would seem natural to allocate one byte, but this will
cause horrible compatibility difficulties.

We introduced Object_Size in GNAT to partially get around this (you see, the
difficulty is that if your compiler *does* feel like allocating 8 bits
for that Natural field, there is no way to stop it). Even if you do:

   subtype N10 is Natural range 1 .. 10;

there is no way in Standard Ada 95 to override the size of N10 anyway.
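In GNAT, the implementation-defined Object_Size attribute provides the
missing control, and it may be specified even for a subtype; a sketch:

```ada
subtype N10 is Natural range 1 .. 10;
for N10'Object_Size use 32;
--  GNAT-specific: stand-alone objects of subtype N10 occupy
--  32 bits by default, matching the typical Ada 83 layout.
```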

It was a big mistake to require the type sizes to be wrong with respect
to Ada 83 (one I complained loudly and repeatedly about, I might say :-)

Indeed we recently had to introduce in GNAT a new attribute VADS_Size
and a pragma Use_VADS_Size to provide additional tools for getting around
this annoying incompatibility (easily the biggest incompatibility between
Ada 83 and Ada 95 in practice).

But back to what a compiler should do.

To me, the base type in Ada 83 (and Ada 95) is most naturally considered
to be the size used to *store* things. Intermediate values are allowed
to be bigger, explicitly, in 11.6 for this very reason (this was the original
thought behind the permission in 11.6, and we explicitly discussed machines
where arithmetic is most naturally done in higher precision).

You definitely want objects declared as

  type m is range 1 .. 10;
  mm : m;

to occupy 8 bits on nearly all machines.

Bob Duff's model of this (and apparently Tuck's too) is to make sure that
the subtype size is small enough, and then condition the object size on
the subtype size. But as described above, that is simply too incompatible
with Ada 83 to be tolerable.

All the Ada 83 compilers I am familiar with provided 8- and 16-bit base
types for machines like the SPARC. We did discuss at Alsys the possibility
of having only one base type, and indeed we concluded that it would be
valid per the RM, but we quickly decided that there were all sorts of
problems with this approach.

The problem with stream attributes is typical of the kind of things that
go wrong if you think this way.

<<I'm astonished that Robert thinks T'Base should be less than the 32-bit
range (assuming it's big enough), for most 32-bit architectures.  (OK,
on a 386, an 8-bit base range makes sense, but that's unusual.)  It
seems to me that T'Base should reflect the range of intermediate results
-- that was certainly the intent.
>>

It may be the intent of the designers of Ada 95, but if so, it is a definite
distortion of the original intent.

Note that you *certainly* don't believe this for the floating-point case.
I cannot imagine anyone arguing that there should be only one floating-point
type (64-bit) on the Power architecture, because all intermediate results
in registers are 64 bits.

I similarly cannot imagine anyone arguing that there should be only one
80-bit floating-point base type on the x86, on the grounds that this is
the only type for intermediate results in registers.

Note that when Bob says "intermediate results" above, he does not really
mean that, he means "intermediate results stored in registers". We have
always recognized that intermediate results stored in registers may
exceed the base range, so why be astonished when this happens!

I will say it again, on a machine like the SPARC, I think it is a very
bad decision to have only one 32-bit base type. I think it leads to
all sorts of problems, and makes compatibility with existing Ada 83
legacy programs impossible.

We briefly went down the route of letting object sizes be dictated by
the strange Ada 95 small subtype sizes (e.g. Natural'Size being 31 instead
of 32), but we quickly retracted this (inventing Object_Size at the same
time) because it caused major chaos for our users (not to mention for some
of our own code).

The idea of subtypes by default having a different memory representation
from their parent types is fundamentally a bad approach.

The idea of letting compilers freely choose whether or not to follow this
approach in an implementation dependent manner is even worse.

The lack of a feature to control what the compiler chooses here makes the
problem still worse.

At least in GNAT, you can use Object_Size to tell the compiler what to do.
For example, if you absolutely need compatibility with an Ada compiler that
does behave in the manner that Bob envisions (object sizes for subtypes
by default being smaller than object sizes for the parent type), then you
can force this in GNAT with Object_Size attribute definition clauses (which
are allowed on subtypes). We have never had a customer that we know of who
had this requirement.

I have said it before, I will say it again. The Size stuff in Ada 95 is
badly messed up.

In Ada 83, it was insufficiently well defined, but it permitted compilers
to behave reasonably.

In Ada 95, it was specifically defined with respect to type sizes, in a
manner that was incompatible with most Ada 83 compilers and which causes
a lot of problems. 

Ada 95 copied Ada 83 in not providing a way to specify default object
sizes, but the problem is suddenly much worse in Ada 95. This is because
in Ada 83, the assumption was that object sizes correspond to type sizes
in general, which is reasonable. But you CANNOT do this in Ada 95, because
the type sizes are strange.

Very annoying!

****************************************************************

From: 	Tucker Taft[SMTP:stt@inmet.com]
Sent: 	Monday, April 13, 1998 1:22 PM
To: 	arg95@sw-eng.falls-church.va.us
Subject: 	Re: AI-195; Stream representation for scalar types

The goal of this AI is to minimize implementation dependence
in the area of scalar type stream representation.

I see two choices:

   1) Try to minimize implementation dependence in the area
      of base range selection, and retain the tie between
      base range and stream representation;

   2) Separate the selection of base range from the selection
      of stream representation, and tie stream representation
      to the properties of the first subtype.

Choosing (1) above means specifying the algorithm for choosing
the base range in terms of the first subtype size relative to the size 
of the storage element, the word size, etc.  

Choosing (2) means specifying the algorithm for choosing the
stream representation in terms of the first subtype size relative to
the size of the stream element (and perhaps taking the
size of the storage element and/or the size of the word into account).

If we were to choose (2), I would hope implementations that already
choose "minimal" size base ranges would have no need to change.
So choosing (2) would simply affect implementations which happen
to choose bigger base ranges today.

Choosing (1) would similarly only affect implementations that currently
have bigger base ranges, and would presumably have the identical
effect on users of streams, but would also affect existing users
of 'Base (which might be considered a bug or a feature).

Any comments on choice (1) vs. choice (2)?

-Tuck

****************************************************************

From: 	Robert A Duff[SMTP:bobduff@world.std.com]
Sent: 	Monday, April 13, 1998 5:39 PM
To: 	stt@inmet.com
Cc: 	arg95@sw-eng.falls-church.va.us
Subject: 	Re: AI-195; Stream representation for scalar types

> Any comments on choice (1) vs. choice (2)?

I favor choice 2.  One reason, which I stated in my previous message, is
that I would like the following idiom to be efficient:

    type Dummy is range 1..10_000;
    subtype Count is Dummy'Base range 1..Dummy'Base'Last;

I would like to hear from other ARG members (eg things like, "Yeah, we
agree", or "No, that's a bogus/useless idiom", or "the above should be
INefficient", or whatever).

- Bob

****************************************************************

From: 	Robert Dewar[SMTP:dewar@gnat.com]
Sent: 	Tuesday, April 14, 1998 5:22 AM
Subject: 	Re: AI-195; Stream representation for scalar types

<<>    1) Try to minimize implementation dependence in the area
>       of base range selection, and retain the tie between
>       base range and stream representation;
>
>    2) Separate the selection of base range from the selection
>       of stream representation, and tie stream representation
>       to the properties of the first subtype.
>>

Just so things are clear, I agree that 1) is impractical at this stage,
although it is a pity that there is so much implementation dependence
here. For interest what do other compilers do between the two following
choices on e.g. the SPARC:

  1) Multiple integer base types, 8,16,32 bits
  2) Single integer base type 32 bits

(just ignore the presence or absence of a 64-bit integer type for this
purpose).

I understand the answers to be:

  GNAT: 1)
  Intermetrics: 2)

But what do other compilers do?

****************************************************************

From: 	Robert Dewar[SMTP:dewar@gnat.com]
Sent: 	Tuesday, April 14, 1998 5:07 AM
Subject: 	Re: AI-195; Stream representation for scalar types

<<I favor choice 2.  One reason, which I stated in my previous message, is
that I would like the following idiom to be efficient:

    type Dummy is range 1..10_000;
    subtype Count is Dummy'Base range 1..Dummy'Base'Last;

I would like to hear from other ARG members (eg things like, "Yeah, we
agree", or "No, that's a bogus/useless idiom", or "the above should be
INefficient", or whatever).
>>

I see this particular form of the idiom as strange, since you still
have a range on type Count, so you will save nothing in a decent compiler.
Intermediate results for types Dummy and Count are the same, because
the base is the same, and you get range checks anyway, because Count does
not cover the full base range. Yes, OK, there are a few cases in which only
the upper bound needs to be checked, and that check can be eliminated.

A much more usual use of this idiom is when the range is entirely omitted
from the second declaration, and this is indeed useful.

Another common use of this idiom is:

    type Dummy is range - (2 ** 31 - 1) .. + (2 ** 31 - 1);
    type Efficient is new Dummy'Base;

where you are clearly aiming at roughly a 32-bit type (allowing for 1's
complement, and also asking for 36 bits on a 36-bit machine).

We used to do this in the GNAT sources, but these days it's of academic
interest anyway, since all interesting machines are 2's complement, and
all interesting machines have 32-bit integers.

I would expect Bob Duff's particular set of declarations to be aiming at
an efficient 16-bit type if anything, although after learning of the 
peculiar choice of some compilers to fail to provide a 16-bit base type
on machines where I would have expected it to be present (e.g. SPARC), it
is clear that this is a highly non-portable idiom.

As such, I don't think we should give this idiom (in the form Bob presents
it, which is clearly non-portable) too much weight, since any use of it
will be highly implementation dependent (and worse, you can't even clearly
tell what the programmer had in mind from looking at the text).

It is a pity that the RM gives no guidance here, but we are clearly stuck
with a situation where different compilers do VERY different things here.

Probably we have no choice but to kludge the Stream_IO stuff at this stage,
since it would probably be disruptive for the compiler that chooses large
base types to change.

What we have here in my mind is a situation where, especially given the
rules regarding stream types that are clear in the RM, a compiler has made
a bad choice of how to do things (Bob, I know that you think the choice is
good from other points of view, but you have to agree that *given the rules
in the RM for stream handling*, this choice is *not* the right one overall).

Given this unfortunate choice, the compiler comes along to the ARG and
petitions for a change in the language. OK, at this stage I think we have
to grant the petition.

But I hope that makes my uneasiness about the precedent we are setting clear.


We have a situation where there are two implementation choices that are
consistent with the RM. One leads to clearly unfortunate choices for the
behavior of streams, one does things just fine. Normally if there are two
implementation choices consistent with the RM, and one works and one does
not, we expect the RM to be dictating the choice. We don't expect the choice
to dictate changes to the RM.

Still, as I note above, I think the only reasonable choice here is to
make the change to the RM. It has no effect on compilers that choose
base types to match storage modes rather than register modes (i.e. that
treat integer the same as floating-point), and it allows compilers that
choose base types for the integer case to match register modes to avoid
a clearly undesirable effect for stream behavior.

****************************************************************

From: 	Pascal Leroy[SMTP:phl@Rational.Com]
Sent: 	Tuesday, April 14, 1998 5:45 AM
Subject: 	Re: AI-195; Stream representation for scalar types

>    1) Try to minimize implementation dependence in the area
>       of base range selection, and retain the tie between
>       base range and stream representation;
>
>    2) Separate the selection of base range from the selection
>       of stream representation, and tie stream representation
>       to the properties of the first subtype.

Option (1) would obviously impact programs which use 'Base and programs which
have 'large' intermediate results.  While we know that such programs are
essentially non-portable, requiring their behavior to change seems like a very
bad idea to me.

I vote for (2).

Pascal

****************************************************************
From: 	Jean-Pierre Rosen[SMTP:rosen.adalog@wanadoo.fr]
Sent: 	Tuesday, April 14, 1998 1:39 AM
Subject: 	Re: AI-195; Stream representation for scalar types

>The goal of this AI is to minimize implementation dependence
>in the area of scalar type stream representation.
>
>I see two choices:
>
>   1) Try to minimize implementation dependence in the area
>      of base range selection, and retain the tie between
>      base range and stream representation;
>
>   2) Separate the selection of base range from the selection
>      of stream representation, and tie stream representation
>      to the properties of the first subtype.

1) may potentially affect all programs, while 2) affects only those that
use streams.  I would therefore argue for 2) on the basis that it is much
less disruptive.
----------------------------------------------------------------------------
                  J-P. Rosen (Rosen.Adalog@wanadoo.fr)

****************************************************************

From: 	Tucker Taft[SMTP:stt@inmet.com]
Sent: 	Tuesday, April 14, 1998 9:06 AM
Subject: 	Re: AI-195; Stream representation for scalar types

> ...
> Just so things are clear, I agree that 1) is impractical at this stage,
> although it is a pity that there is so much implementation dependence
> here. For interest what do other compilers do between the two following
> choices on e.g. the SPARC:
> 
>   1) Multiple integer base types, 8,16,32 bits
>   2) Single integer base type 32 bits
> 
> (just ignore the presence or absence of a 64-bit integer type for this
> purpose).
> 
> I understand the answers to be:
> 
>   GNAT: 1)
>   Intermetrics: 2)
> 
> But what do other compilers do?

The Intermetrics front end is happy to support either choice for 
base types.  I believe Aonix and Green Hills both chose to have 
only 32-bit base types.

The Raytheon "Patriot" compiler has only one 24-bit base integer
type, for what that's worth ;-).

-Tuck

****************************************************************

From: 	Robert A Duff[SMTP:bobduff@world.std.com]
Sent: 	Tuesday, April 14, 1998 8:55 AM
Subject: 	Re: AI-195; Stream representation for scalar types

I suspect that what we *should* have done in Ada 9X is to make stream
I/O completely portable by default, by specifying some particular
representation, such as some preexisting standard, like XDR.  I don't
know anything about XDR, so I don't know if it would have been the
appropriate choice, but I think it is unfortunate that (1) we didn't
nail things down precisely enough, (2) we tied stream representations to
machine-dependent in-memory representations, and (3) we tried to "roll
our own" instead of relying on some preexisting standard.

I guess it's too late, now.

- Bob

****************************************************************

From: 	Pascal Leroy[SMTP:phl@Rational.Com]
Sent: 	Tuesday, April 14, 1998 10:40 AM
Subject: 	Re: AI-195; Stream representation for scalar types

> For interest what do other compilers do between the two following
> choices on e.g. the SPARC:
>
>   1) Multiple integer base types, 8,16,32 bits
>   2) Single integer base type 32 bits
>
> I understand the answers to be:
>
>   GNAT: 1)
>   Intermetrics: 2)

We do 2.

Pascal

_____________________________________________________________________
Pascal Leroy                                    +33.1.30.12.09.68
pleroy@rational.com                             +33.1.30.12.09.66 FAX

****************************************************************

From: 	Robert Dewar[SMTP:dewar@gnat.com]
Sent: 	Tuesday, April 14, 1998 7:31 PM
Subject: 	Re: AI-195; Stream representation for scalar types

<<We do 2.

Pascal
>>

Interesting. Note that the differences will not be that great in practice.
Certainly simple variables as in

  type x is range 1 .. 100;
  y,z : x;

  y := z + y;

will generate identical code and allocations in the two cases.

Incidentally, I assume this means that Rational has the same problem with
excessive size for stream elements ... so that means that two of the main
technologies really need the fix that is in this AI.

****************************************************************

From: 	Robert Dewar[SMTP:dewar@gnat.com]
Sent: 	Tuesday, April 14, 1998 7:33 PM
Subject: 	Re: AI-195; Stream representation for scalar types

<<I suspect that what we *should* have done in Ada 9X is to make stream
I/O completely portable by default, by specifying some particular
representation, such as some preexisting standard, like XDR.  I don't
know anything about XDR, so I don't know if it would have been the
appropriate choice, but I think it is unfortunate that (1) we didn't
nail things down precisely enough, (2) we tied stream representations to
machine-dependent in-memory representations, and (3) we tried to "roll
our own" instead of relying on some preexisting standard.

I guess it's too late, now.
>>

GNAT provides XDR as an optional choice, but you would not want it as the
required default -- too inefficient.

****************************************************************

From: 	David Emery[SMTP:emery@mitre.org]
Sent: 	Wednesday, April 15, 1998 6:34 AM
Subject: 	Re: AI-195; Stream representation for scalar types

I forget the name of the encoding scheme that DCE RPC uses, but it's much
better than XDR, particularly in homogeneous networks.  In fact, I argued to
Anthony Gargaro that the DS Annex should take a 'minimalist' approach and
consist of binding methods to RPC and associated data encoding schemes such
as XDR.   I'm still sorry that the DS Annex didn't provide this simplistic
functionality, since it's proven to be very useful.

				dave

****************************************************************

From: 	Robert Dewar[SMTP:dewar@gnat.com]
Sent: 	Wednesday, April 15, 1998 6:43 AM
Subject: 	Re: AI-195; Stream representation for scalar types

<<I forget the name of the encoding scheme that DCE RPC uses, but it's much
better than XDR, particularly in homogeneous networks.  In fact, I argued to
Anthony Gargaro that the DS Annex should take a 'minimalist' approach and
consist of binding methods to RPC and associated data encoding schemes such
as XDR.   I'm still sorry that the DS Annex didn't provide this simplistic
functionality,
since it's proven to be very useful.
>>

It's quite trivial in GNAT to provide a plugin replacement unit that 
specifies what protocol you want. I think this is a design that is
very attractive (see s-stratt.adb in the GNAT library).

Incidentally, we use XDR *precisely* to deal with heterogeneous networks.
The default encodings work perfectly well on homogeneous networks. I
suppose one could worry about different compilers being used on a
homogeneous network, but that seems somewhat theoretical so far.

I do hope that any other compilers that implement Annex E follow
the GNAT design of isolating the low-level encodings in a separate,
easily modifiable module. This really seems the right solution, rather
than trying to pin down any one method as appropriate (now or when the
standard was being designed).

****************************************************************

From: 	Anthony Gargaro[SMTP:abg@SEI.CMU.EDU]
Sent: 	Wednesday, April 15, 1998 7:43 AM
Subject: 	Re: AI-195; Stream representation for scalar types 

Dear David-

>I forget the name of the encoding scheme that DCE RPC uses, but it's much
>better than XDR, particularly in homogeneous networks.

It is called Network Data Representation (NDR) transfer syntax (refer
X/Open DCE: Remote Procedure Call, Chapter 14) and is the default transfer
syntax for heterogeneous networks.

>In fact, I argued to
>Anthony Gargaro that the DS Annex should take a 'minimalist' approach and
>consist of binding methods to RPC and associated data encoding schemes such
>as XDR.   I'm still sorry that the DS Annex didn't provide this simplistic
>functionality,
>since it's proven to be very useful.

I believe the approach taken in Annex E was (and is) correct. A binding
to DCE RPC would have been a significant undertaking and, from my recent
experience using DCE, a dubious investment by the design team, since it is
relatively straightforward to use the specified C APIs from within an
Ada partition.

It is interesting to note that the P1003.21 (Realtime Distributed Systems
Communication) group has been trying to reach consensus on a suitable binding
for heterogeneous data transfer for some considerable time.

Anthony.

****************************************************************


Questions? Ask the ACAA Technical Agent