CVS difference for ais/ai-00195.txt

Differences between 1.5 and version 1.6
Log of other versions for file ais/ai-00195.txt

--- ais/ai-00195.txt	1998/10/01 22:45:54	1.5
+++ ais/ai-00195.txt	1999/02/28 00:31:18	1.6
@@ -1,59 +1,61 @@
-!standard 13.13.1 (00)                                98-04-04  AI95-00195/02
+!standard 13.13.1 (00)                                99-02-23  AI95-00195/03
 !class binding interpretation 98-03-27
-!status work item 98-04-04
 !status received 98-03-27
 !reference AI95-00108
 !reference AI95-00145
 !priority High
 !difficulty Hard
 !subject Streams
+!summary 99-02-23
 
-!summary 98-04-04
+1 - When the predefined Input attribute creates an object, this object
+undergoes default initialization and finalization.
 
-When the predefined Input attribute creates an object, this object undergoes
-initialization and finalization.
-
-In determining whether a stream-oriented attribute has been specified for a
-type, the normal visibility rules are applied to the
+2 - In determining whether a stream-oriented attribute has been specified for
+a limited type, the normal visibility rules are applied to the
 attribute_definition_clause.
 
-For a derived type which is limited, the attributes Read and Write are
-inherited, and the attributes Input and Output revert to their predefined
-definition (i.e. they cannot be called).  (This amends AI95-00108.)
-
-In the profiles of the stream-oriented attributes, the notation "italicized T"
-refers to the base type for a scalar type, and to the first subtype otherwise.
-
-For an untagged derived type with new discriminants that have defaults, the
-predefined stream-oriented attributes read or write the new discriminants, not
-the old ones.  (This amends AI95-00108.)
-
-The predefined Read attribute for composite types with defaulted discriminants
-must ensure that, if exceptions are raised by the Read attribute for some
-discriminant, the discriminants of the actual object passed to Read are not
-modified.  This may require the creation of an anonymous object, which
-undergoes initialization and finalization.
-
-The predefined Read attribute for composite types with defaulted discriminants
-must raise Constraint_Error if the discriminants found in the stream differ
-from those of the actual parameter to Read, and this parameter is constrained.
+3 - For a derived type which is limited (tagged or not), the attributes Read
+and Write are inherited, and the attributes Input and Output revert to their
+predefined definition (i.e. they cannot be called).  (This amends
+AI95-00108.)
+
+4 - In the profiles of the stream-oriented attributes, the notation
+"italicized T" refers to the base type for a scalar type, and to the first
+subtype otherwise.
+
+5 - For an untagged derived type with new discriminants that have defaults,
+the predefined stream-oriented attributes read or write the new
+discriminants, not the old ones.  (This amends AI95-00108.)
+
+6 - The predefined Read attribute for composite types with defaulted
+discriminants must ensure that, if exceptions are raised by the Read
+attribute for some discriminant, the discriminants of the actual object
+passed to Read are not modified.  This may require the creation of an
+anonymous object, which undergoes initialization and finalization.
+
+7 - The predefined Read attribute for composite types with defaulted
+discriminants must raise Constraint_Error if the discriminants found in the
+stream differ from those of the actual parameter to Read, and this parameter
+is constrained.
 
-If S is a subtype of an abstract type, an attribute_reference for S'Input is
-illegal unless this attribute has been specified by an
+8 - If S is a subtype of an abstract type, an attribute_reference for S'Input
+is illegal unless this attribute has been specified by an
 attribute_definition_clause.
 
-The number of calls performed by the predefined implementation of the stream-
-oriented attributes to the Read and Write operations of the stream type is
-unspecified, but there must be at least one such call for each top-level call
-to a stream-oriented attribute.
-
-The predefined stream-oriented attributes for a scalar type shall only read or
-write the minimum number of stream elements required by the first subtype of
-the type.  Constraint_Error is raised if such an attribute is passed (or would
-return) a value outside the range of the first subtype.
+9 - The number of calls performed by the predefined implementation of the
+stream-oriented attributes to the Read and Write operations of the stream
+type is unspecified.  However, all the calls to Read and Write needed to
+implement a top-level invocation of a stream-oriented attribute must take
+place before this top-level invocation returns.
+
+10 - The predefined stream-oriented attributes for a scalar type shall only
+read or write the minimum number of stream elements required by the first
+subtype of the type.  Constraint_Error is raised if such an attribute is
+passed (or would return) a value outside the range of the first subtype.
 
-In an attribute_definition_clause for a stream-oriented attribute, the name
-shall not denote an abstract subprogram.
+11 - In an attribute_definition_clause for a stream-oriented attribute, the
+name shall not denote an abstract subprogram.
 
 !question 98-04-04
 
@@ -61,7 +63,8 @@
 or discriminants, if any, taken from the stream), initializes it with S'Read,
 and returns the value of the object."
 
-Does the verb "initialize" in this sentence refer to the entire initialization
+Does the verb "initialize" in this sentence refer to the entire
+initialization
 process mentioned in RM95 3.3.1(18) and 7.6(10)? (yes) In particular, if S is
 a controlled subtype, or if it contains controlled components, is the
 Initialize subprogram called?  (yes) Is the Finalize subprogram called when
@@ -72,7 +75,8 @@
 attributes is illegal if the type is limited, unless the attribute has been
 specified by an attribute definition clause."
 
-If some stream-oriented attribute has been specified in a private part, and we
+If some stream-oriented attribute has been specified in a private part, and
+we
 are at a point that doesn't have visibility over that private part, is a
 reference to the attribute legal? (no)
 
@@ -98,7 +102,8 @@
 Clearly Parent'Write writes two discriminant values.  How many discriminants
 does Child'Write write? (one)
 
-6 - RM95 13.13.2(9) states that for a record type, the predefined S'Read reads
+6 - RM95 13.13.2(9) states that for a record type, the predefined S'Read
+reads
 the components in positional aggregate order.  However, the language doesn't
 seem to specify what happens when exceptions are raised by the calls to the
 Read attribute for the components.  Consider for example the following type
@@ -125,7 +130,8 @@
 cannot be called)
 
 9 - RM95 13.13.1(1) states that "T'Read and T'Write make dispatching calls on
-the Read and Write procedures of the type Root_Stream_Type."  Is the number of
+the Read and Write procedures of the type Root_Stream_Type."  Is the number
+of
 those calls specified? (no)
 
 10 - For a scalar type T, the second parameter of the predefined stream
@@ -146,11 +152,13 @@
 
 See summary.
 
-!discussion 98-04-04
+!discussion 99-02-23
 
-1 - It seems logical to make Input follow as closely as possible the semantics
+1 - It seems logical to make Input follow as closely as possible the
+semantics
 of code that could be written by a user, so the implementation of S'Input
-should obtain the discriminants or bounds from the stream, and then declare an
+should obtain the discriminants or bounds from the stream, and then declare
+an
 object with appropriate discriminants or bounds, as in:
 
         declare
@@ -161,7 +169,7 @@
 
 Accordingly, the initialization described in RM95 3.3.1(8-9) and RM95 7.6(10)
 takes place when Anon is declared, and the finalization described in RM95
-7.6.1 takes when Anon disappears (and before returning from the call to
+7.6.1 takes place when Anon disappears (and before returning from the call to
 S'Input).
 
 Note that as part of initialization, compiler-specific fields are initialized
@@ -280,7 +288,8 @@
 
    - If the exception in T2'Read is due to an explicit raise, the object
      obviously doesn't become abnormal, and therefore we must preserve the
-     integrity of its discriminants.  In other words, either all discriminants
+     integrity of its discriminants.  In other words, either all
+discriminants
      are updated (if the calls to the Read attributes for the discriminants
      were successful) or none (if any call to a Read attribute for a
      discriminant raised an exception).
@@ -323,24 +332,26 @@
 attribute_reference for this attribute is illegal, by analogy with the rule
 stated in RM95 13.13.2(36) for limited types.
 
-9 - Surely the user could count the calls to the Read and Write operations for
-the stream type by writing a pervert implementation of these operations, but
-it seems that performance should have precedence: for the predefined
-implementation of String'Write, requiring a call to Ada.Streams.Write for each
-character would be a big performance hit, with no real benefits.
+9 - Surely the user could count the calls to the Read and Write operations
+for the stream type by writing a perverse implementation of these operations,
+but it seems that performance should have precedence: for the predefined
+implementation of String'Write, requiring a call to Ada.Streams.Write for
+each character would be a big performance hit, with no real benefits.
 
 Therefore, the number of calls to the Read and Write operations is
-unspecified, and implementations are free (and in fact advised) to do internal
-buffering.  However, we don't want to allow an implementation to buffer all
-stream output and do only one call to Write when the program terminates: the
-user must be able to assume reasonable properties regarding the underlying
-buffering mechanism.  That's why we require at least one call to Read or Write
-for each top-level call to a stream-oriented attribute.  In other words, an
-implementation should not perform buffering across top-level calls.
+unspecified, and implementations are free (and in fact advised) to do
+internal buffering.  However, we don't want to allow an implementation to
+buffer all stream output and do only one call to Write when the program
+terminates: the user must be able to assume reasonable properties regarding
+the underlying buffering mechanism.  That's why we require that all the calls
+to Read or Write take place before the top-level call to a stream-oriented
+attribute completes.  In other words, an implementation may combine several
+consecutive calls to Write into a single one, provided these calls all
+pertain to a single top-level call to the attribute Write (or Output).
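+
+(To make "counting the calls" concrete, here is a minimal sketch of such a
+counting stream type.  The package and its names are invented purely for
+illustration, and the bodies are omitted; they would simply bump the
+counters and then transfer the elements to or from the real medium.)
+
+        with Ada.Streams; use Ada.Streams;
+        package Counting_Streams is
+           type Counting_Stream is new Root_Stream_Type with record
+              Reads, Writes : Natural := 0;
+           end record;
+           --  Overridings of the operations of RM95 13.13.1:
+           procedure Read (Stream : in out Counting_Stream;
+                           Item   : out Stream_Element_Array;
+                           Last   : out Stream_Element_Offset);
+           procedure Write (Stream : in out Counting_Stream;
+                            Item   : in Stream_Element_Array);
+        end Counting_Streams;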
 
 10 - Consider the declaration:
 
-	type T is range 1 .. 10;
+        type T is range 1 .. 10;
 
 This declaration "defines an integer type whose base range includes at least
 the values [1 and 10] and is symmetric around zero, excepting possibly an
@@ -350,13 +361,16 @@
 choose an 8-, 16-, 32- or 64-bit base type for T.
 
 RM95 13.13.2(17) advises the implementation to "use the smallest number of
-stream elements needed to represent all values in the base range of the scalar
+stream elements needed to represent all values in the base range of the
+scalar
 type" when reading or writing values of type T.
 
-Clearly this is a portability issue: if two implementation use (as is typical)
+Clearly this is a portability issue: if two implementations use (as is
+typical)
 8-bit stream elements, but have different rules for selecting base types, the
 number of elements read from or written to a stream will differ.  This makes
-it very hard to write stream operations that comply with an externally defined
+it very hard to write stream operations that comply with an externally
+defined
 format.
 
 In the above case, it would seem reasonable to read or write only the minimum
@@ -369,17 +383,19 @@
 The only issue with that approach is that the stream-oriented attributes for
 scalar types have a second parameter of type T'Base, e.g.:
 
-	procedure S'Write (Stream : access Ada.Streams.Root_Stream_Type'Class;
+        procedure S'Write (Stream : access Ada.Streams.Root_Stream_Type'Class;
                          Item : in T'Base);
 
-So one might call T'Write with the value 1000 for the Item parameter, and this
+So one might call T'Write with the value 1000 for the Item parameter, and
+this
 might exceed the range representable in the stream.  However, this usage is
 non-portable in the first place (because it depends on the choice of base
 range), so it doesn't seem important to preserve it.  In fact any attempt at
 reading or writing a value outside the range of the first subtype is highly
 suspicious.
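 
 (As a minimal sketch of the usage being dismissed here, where S stands for
 some suitable stream access value:)
 
         type T is range 1 .. 10;
         X : T'Base := 1000;  -- fine today if T'Base is, say, 32 bits
         ...
         T'Write (S, X);      -- Constraint_Error under the rules below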
 
-Based on this reasoning, the following rules are added.  Note that these rules
+Based on this reasoning, the following rules are added.  Note that these
+rules
 are Dynamic Semantics rules, not Implementation Advices:
 
    - The predefined stream-oriented attributes for a scalar type T shall
@@ -398,7 +414,8 @@
      Range_Check.
 
 11 - Obviously it should not be possible to perform a non-dispatching call to
-an abstract subprogram (stream-oriented attributes are always called in a non-
+an abstract subprogram (stream-oriented attributes are always called in a
+non-
 dispatching manner).  Therefore, we have two options:
 
    - Make the attribute_definition_clause illegal.
@@ -423,9 +440,11 @@
 or discriminants, if any, taken from the stream), initializes it with S'Read,
 and returns the value of the object.
 
-Does the verb "initialize" in this sentence refer to the entire initialization
+Does the verb "initialize" in this sentence refer to the entire
+initialization
 process mentioned in RM95 3.3.1(18) and 7.6(10)? In particular, if S is a
-controlled subtype, or if it contains controlled components, is the Initialize
+controlled subtype, or if it contains controlled components, is the
+Initialize
 subprogram called?  Is the Finalize subprogram called when the intermediate
 object is finalized?  For a record type whose components have initial values,
 are these values evaluated?
@@ -438,14 +457,16 @@
                 S'Read (..., X);
 
 If we don't do that, the object passed to S'Read might have uninitialized
-access values, or uninitialized compiler-specific fields (dopes, offsets, tag,
+access values, or uninitialized compiler-specific fields (dopes, offsets,
+tag,
 etc.).
 
 2 - RM95 13.13.2(36) states that "an attribute_reference for one of these
 attributes is illegal if the type is limited, unless the attribute has been
 specified by an attribute definition clause."
 
-If some stream-oriented attribute has been specified in a private part, and we
+If some stream-oriented attribute has been specified in a private part, and
+we
 are at a point that doesn't have visibility over that private part, is a
 reference to the attribute legal? For example:
 
@@ -515,7 +536,8 @@
 In particular, does the presence or absence of an attribute definition clause
 have any bearing on the legality of the calls?  For example, consider the
 legality of the 'others' choice in an array aggregate; it depends on the
-existence on an applicable index constraint, and therefore on whether the type
+existence of an applicable index constraint, and therefore on whether the
+type
 of the parameter is constrained or not:
 
         package P is
@@ -552,7 +574,8 @@
 Child'Write should only write one discriminant value, which contradicts the
 simple inheritance rule given in the AI.
 
-6 - RM95 13.13.2(9) states that for a record type, the predefined S'Read reads
+6 - RM95 13.13.2(9) states that for a record type, the predefined S'Read
+reads
 the components in positional aggregate order.  However, the RM95 doesn't seem
 to specify what happens when exceptions are raised by the calls to the Read
 attribute for the components.  Consider for example the following type
@@ -574,14 +597,14 @@
         R'Read (..., X);
 
 Assume that an exception is raised by T2'Read.  Is the discriminant X.D1
-modified?  That would be unpleasant if there were components depending on this
+modified?  That would be unpleasant if there were components depending on
+this
 discriminant!
 
 Consider a different example, where there are no discriminants:
 
         type T1 is ...;
         type T2 is ...;
-
         type R is
                 record
                         C1 : T1;
@@ -597,7 +620,8 @@
 seem quite expensive to require the creation of such an intermediate object
 for types that don't have discriminants.
 
-Note that if we adopt the idea that an intermediate object is needed (at least
+Note that if we adopt the idea that an intermediate object is needed (at
+least
 in some cases) then we must define if initialization and finalization of this
 object take place.
 
@@ -610,7 +634,8 @@
 case, then the constraint check performed when assigning the intermediate
 object to the actual parameter will raise Constraint_Error.
 
-8 - If T is an abstract type, is the function T'Input abstract?  One hopes so,
+8 - If T is an abstract type, is the function T'Input abstract?  One hopes
+so,
 otherwise it makes it possible to create objects of an abstract type.
 
 9 - RM95 13.13.1(1) states that "T'Read and T'Write make dispatching calls on
@@ -630,6 +655,146 @@
 
 ****************************************************************
 
+From: 	Stephen Leake
+Sent: 	Tuesday, March 17, 1998 8:45 PM
+Subject: 	Ada Issue; 16 bit integers in streams
+
+!topic Stream size of 16 bit integers
+!reference 13.13.2 (17)
+!from Stephen Leake 98-03-14
+!keywords base type, size clause, stream representation
+!reference as: 1998-15826.a Stephen Leake  1998-3-17
+!discussion
+
+This is mainly a uniformity issue. There was a discussion of this
+issue on comp.lang.ada in September 1997, under the subject "16 bit
+integers in streams".
+
+The use of streams should be as portable as possible among compilers,
+at least when running on the same hardware. Currently, ObjectAda 7.1
+represents a 16 bit integer by 32 bits in streams under Windows 95/NT
+on Intel hardware, while GNAT represents a 16 bit integer by 16 bits
+in streams on the same system. Most C compilers on this system also
+use 16 bits in streams for a 16 bit integer.
+
+13.13.2 (9) says the stream representation is implementation-defined,
+as it must be. However, the implementation advice 13.13.2 (17) says:
+
+If a stream element is the same size as a storage element, then the
+normal in-memory representation should be used by Read and Write for
+scalar objects. Otherwise, Read and Write should use the smallest
+number of stream elements needed to represent all values in the base
+range of the scalar type.
+
+Given a type definition:
+
+type Int16 is range -32768 .. 32767;
+for Int16'size use 16;
+
+The "normal in-memory representation" is 16 bits. However, ObjectAda
+7.1 uses 32 bits for this type in streams. This is because of RM95
+13.13.2(36), which says in part:
+
+For an attribute_definition_clause specifying one of these attributes,
+the subtype of the Item parameter shall be the base subtype if scalar
+...
+
+The rationale for this rule is that the user may declare a variable of
+the base type, and write it to a stream.
+
+ObjectAda chooses a 32 bit base integer in this case on Wintel
+systems; apparently for "efficient arithmetic".
+
+Ada has a general philosophy of giving the user as much control as
+possible over representation issues. However, the user has no control
+in this case (other than by choice of compiler). In principle, the
+user should be able to sacrifice efficient arithmetic for control over
+stream size.
+
+Clearly we cannot have arbitrary control over stream size; a 6 bit
+integer will never fit exactly in a stream element. But that is one
+reason to use a 16 bit integer in the first place - we know it can
+reasonably be represented in a stream.
+
+Proposed solutions:
+
+1) Make a size clause influence the base type size. The base type size
+should be the minimum that can contain the first subtype, even if that
+is not the most "efficient". This gives the user control over the
+choice of speed or space. pragma Optimize (Space) could also influence
+this choice. It is not clear how to add this to the language in the
+reference manual; it would probably be just implementation advice.
+
+2) Add a new attribute T'Stream_Size. The value must be a multiple of
+Stream_Element'size, and must be at least T'Size. It would be illegal
+to apply 'Read, 'Write, 'Input or 'Output to an object declared to be
+of type T'Base for a type T that has T'Stream_Size specified (or maybe
+just illegal to declare the variable of type T'Base in the first
+place).
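+
+(For illustration only, proposal 2 might be used along the following lines,
+assuming the attribute is set with an ordinary attribute definition clause;
+Stream_Size is the attribute proposed above, not an existing Ada 95
+attribute.)
+
+type Int16 is range -32768 .. 32767;
+for Int16'Size use 16;
+for Int16'Stream_Size use 16;  -- two 8-bit stream elements per item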
+
+****************************************************************
+
+From: 	Tucker Taft
+Sent: 	Tuesday, March 17, 1998 9:22 PM
+Subject: 	Re: Ada Issue; 16 bit integers in streams
+
+!topic Stream size of 16 bit integers
+!reference RM95 13.13.2 (17)
+!reference 1998-15826.a Stephen Leake 98-03-14
+!from Tucker Taft 98-03-17
+!keywords base type, size clause, stream representation
+!reference as: 1998-15827.a Tucker Taft 1998-3-17
+!discussion
+
+: ...
+: Given a type definition:
+
+: type Int16 is range -32768 .. 32767;
+: for Int16'size use 16;
+
+: The "normal in-memory representation" is 16 bits. However, ObjectAda
+: 7.1 uses 32 bits for this type in streams. This is because of RM95
+: 13.13.2(36), which says in part:
+
+: For an attribute_definition_clause specifying one of these attributes,
+: the subtype of the Item parameter shall be the base subtype if scalar
+: ...
+
+: The rationale for this rule is that the user may declare a variable of
+: the base type, and write it to a stream.
+
+: ObjectAda chooses a 32 bit base integer in this case on Wintel
+: systems; apparently for "efficient arithmetic".
+
+This is certainly the crux of the problem.  The base range is
+chosen to be a 32-bit range because that is the kind of arithmetic
+that is efficient.  On many RISC architectures, that is the only
+kind of arithmetic, yet even on those architectures, one would presumably
+want types whose first subtype range fits easily in 16 bits to 
+occupy no more than 16 bits in the stream.
+
+So the real problem seems to be the connection between base range
+and stream representation.
+
+One possible solution is to "pretend" the base range is narrower
+for such types.  The base range is just a minimum guaranteed range
+for arithmetic intermediates, but overflow need *not* always happen
+if you go outside the base range.  For such a type it turns out that
+the base range would be pretty meaningless, except as it determines
+the number of stream elements per item written to the stream.
+Or equivalently, the "pretend" 16-bit base range would be checked 
+by T'Write on output, and result in a Constraint_Error if exceeded,
+but the base range would be irrelevant pretty much everywhere else.
+
+Having a T'Stream_Size attribute is also a possible solution, but
+that requires the user to do extra work just to get the "expected"
+result, which is kind of annoying.
+
+--
+-Tucker Taft
+
+****************************************************************
+
 !section 13.13.2(17)
 !subject Stream size of 16 bit integers
 !reference RM95 13.13.2 (17)
@@ -657,2106 +822,43 @@
                         Randy.
 
 ****************************************************************
-
-From: 	Tucker Taft[SMTP:stt@inmet.com]
-Sent: 	Friday, April 10, 1998 4:28 PM
-Subject: 	AI-195; Stream representation for scalar types
-
-At the recent ARG meeting, we discussed resolving AI-195 by introducing
-the notion of "stream base range" rather than simply "base range"
-to determine the default representation for scalar types in a stream.
-The stream base range would be determined strictly by the first subtype
-'Size, rounded up to some multiple of the stream element size in bits.
-
-This seems to me like a good solution to the problem that int-type'Base'Size
-might always be 32-bits on modern RISC machines, even though users 
-expect integer types with first subtype 'Size of 9..16 or 1..8 bits to
-be represented in the stream with 16 or 8 bits per item, respectively.
-
-I would like to make this change to our compiler sooner rather than later.
-Are there any "morning after" concerns about this approach?
-
-Also, it would be nice to settle soon on the details for cases like
-
-   type Three_Bytes is range -2**23 .. +2**23-1;
-
-Presuming that a stream element is 8 bits, should Three_Bytes'Write
-write 3 bytes or 4?  I.e., when rounding up the first subtype 'Size
-to a multiple of Stream_Element'Size, should we round up to some
-power-of-2 times the Stream_Element'Size, or just the next
-multiple?
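-
-(For concreteness, assuming 8-bit stream elements: Three_Bytes'Size is 24,
-so rounding to the next multiple of Stream_Element'Size gives 3 elements
-per item, while rounding to a power-of-2 multiple gives 4.)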
-
-I have used first-subtype'Size above intentionally, presuming that
-if the user specifies a first-subtype'Size, then we would use that
-specified 'Size to affect the number of stream elements per item.
 
--Tuck
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Friday, April 10, 1998 4:37 PM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-<<This seems to me like a good solution to the problem that int-type'Base'Size
-might always be 32-bits on modern RISC machines, even though users
-expect integer types with first subtype 'Size of 9..16 or 1..8 bits to
-be represented in the stream with 16 or 8 bits per item, respectively.
->>
-
-Very few machines are in this category by the way, so this is a marginal
-concern.
-
-What machine are you thinking of here?
-
-I am a bit dubious about this change ...
-
-****************************************************************
+From: 	Tucker Taft
+Sent: 	Friday, February 26, 1999 9:47 AM
+Subject: 	Re: Updated AIs
+
+AI 195 does not address an issue that we have confronted
+recently relating to stream attributes.  It is very difficult
+to specify stream attributes for a limited private type, if
+you require them to be specified in the visible part.  In general,
+you may not specify a "representation" attribute until after
+a type is fully defined.  But if you also have to specify stream
+attributes in the visible part to make them usable, then you
+have a contradiction.
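+
+(A minimal sketch of the contradiction, with invented names; the body for
+Read_Lim is omitted.)
+
+   with Ada.Streams;
+   package P is
+      type Lim is limited private;
+      --  To make Lim'Read usable by clients, the clause below would have
+      --  to appear here, but Lim is not yet fully defined at this point.
+   private
+      type Lim is limited record
+         Value : Integer;
+      end record;
+      procedure Read_Lim
+        (Stream : access Ada.Streams.Root_Stream_Type'Class;
+         Item   : out Lim);
+      for Lim'Read use Read_Lim;  -- legal here, but invisible to clients
+   end P;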
+
+There seem to be two solutions.  One
+is to allow stream attributes to be specified before the
+type is fully defined.  The other is to allow them to be specified
+in the private part.
+
+I suppose another option is to have some
+kind of "incomplete" stream attribute specification,
+such as "for Lim_Type'Read use <>;" in the visible part,
+and then complete the definition in the private part.  This is
+rampant invention, of course, but it solves another problem.
+Types like Exception_Occurrence are required to have stream attributes,
+but not have any other primitive operations declared in the
+visible part.  However stream attributes can only be defined
+in terms of some "normal" subprogram, which must necessarily also
+be visible at the point of the stream attribute definition.
+Another contradiction...
+
+> The update to AI
+> 195 is partial: I believe that Tucker owns the part of this AI that has to do
+> with the number of stream elements written/read for a scalar type.
 
-From: 	Tucker Taft[SMTP:stt@inmet.com]
-Sent: 	Friday, April 10, 1998 4:47 PM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-> What machine are you thinking of here?
-
-For essentially all modern RISC machines, all arithmetic is done in
-32 bits, and the only overflow bit or signal available relates
-to overflowing 32 bits.  Presuming 'Base'Size reflects the size of 
-arithmetic intermediates, 'Base'Size might very well be 32 for all integer 
-types, even for integer types declared with tiny ranges.
-
-> I am a bit dubious about this change ...
-
-Please elaborate.  
-
-Note that this is a real problem now, because some Ada 95 compilers
-are using 32 bits per item for all integer types in streams, even for
-those whose first subtype'Size is less than or equal to 16 bits.
-This is because 'Base'Size is 32 bits to reflect the register size,
-as seems reasonable.  Even among "CISC" machines, there are few
-these days that actually provide any kind of efficient 16- or 8-bit
-arithmetic, so presumably if overflow is being detected, it is
-associated with exceeding 32-bit values, not 16- or 8-bit values.
+Okay, I will handle that part.
 
 -Tuck
 
 ****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Friday, April 10, 1998 5:46 PM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-<<For essentially all modern RISC machines, all arithmetic is done in
-32 bits, and the only overflow bit or signal available relates
-to overflowing 32 bits.  Presuming 'Base'Size reflects the size of
-arithmetic intermediates, 'Base'Size might very well be 32 for all integer
-types, even for integer types declared with tiny ranges.
->>
-
-It seems a bad choice to me to make the base types 32-bits. Because of
-the permission for intermediate values to exceed the base range, there
-really is no reason to do this.
-
-<<Note that this is a real problem now, because some Ada 95 compilers
-are using 32 bits per item for all integer types in streams, even for
-those whose first subtype'Size is less than or equal to 16 bits.
-This is because 'Base'Size is 32 bits to reflect the register size,
-as seems reasonable.  Even among "CISC" machines, there are few
-these days that actually provide any kind of efficient 16- or 8-bit
-arithmetic, so presumably if overflow is being detected, it is
-associated with exceeding 32-bit values, not 16- or 8-bit values.
->>
-
-I agree that is a very bad choice, and is one of the reasons why I think
-it is a bad choice to have a single 32-bit base type.
-
-The statement about CISC machines is incorrect, the x86 provides the
-same efficiency for 8-bit arithmetic with overflow detection as for 
-32-bit arithmetic with overflow detection, and 16-bit arithmetic
-with overflow detection is only very slightly less efficient.
-
-... so the "presumably" here is wrong.
-
-I really don't see the issue here. If you have
-
-  type x is range -128 .. +127;
-
-it is appropriate to use a one byte base type on nearly all machines (the
-now obsolete old alphas and now obsolete AMD 29K are the only exceptions
-among general purpose RISC machines).
-
-If you have
-
-  y,z : x;
-
-and you write
-
-  y := z + 35;
-
-then the code to check for constraint error is exactly the same for the
-case of x being derived from a 32-bit base type as if it is derived from
-an 8-bit base type.
-
-We seem to be asked here to patch up the language to deal with what basically
-sounds like just a wrong decision in choice of base types. That is why I
-am dubious about the need for the extra complexity.
-
-Tuck, can you be more clear on why it ever makes sense to follow this 
-approach of having a single 32-bit base type?
-
-****************************************************************
-
-From: 	Tucker Taft[SMTP:stt@inmet.com]
-Sent: 	Friday, April 10, 1998 6:00 PM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-The ARG discussed this after you left, and concluded roughly what
-I presented.  I was confirming their decision.  The basic point is
-that base ranges are possibly based on what is efficient to store
-for intermediates.  This is very different from what would be efficient
-for a stream representation.  The Patriot is a 24-bit machine, but it
-makes sense for stream elements to be 8 bits on that machine, and for
-only 8 bits to be used for "8-bit" integer types.  Nevertheless,
-'base'size is definitely 24 bits for such machines.
-
-On many RISC machines, loading, storing, and passing objects of less 
-than 32-bits can be less efficient.  For such machines, it makes
-sense for 'Base'Size to be 32 bits even for tiny ranges.
-
-The point is to divorce the 'base'size from the stream element representation.
-Connecting the two seems like a mistake, particularly since there is
-no guarantee that you *won't* get constraint error when writing a value
-stored in a variable of the 'Base subtype, since it is allowed to be
-outside of 'Base'Range in any case (since it is unconstrained).
-
--Tuck
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Friday, April 10, 1998 6:15 PM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-<<On many RISC machines, loading, storing, and passing objects of less
-than 32-bits can be less efficient.  For such machines, it makes
-sense for 'Base'Size to be 32 bits even for tiny ranges.
->>
-
-Please cite details, I don't see this. It is certainly not true for
-MIPS, SPARC, HP, Power, x86, Alpha (new Alpha), ...
-
-I would be interested in Tuck backing up this claim, because the need for
-a new language feature really depends on the validity of this claim, which
-seems dubious to me!
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Saturday, April 11, 1998 8:06 AM
-To: 	dewar@gnat.com
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-I'm astonished that Robert thinks T'Base should be less than the 32-bit
-range (assuming it's big enough), for most 32-bit architectures.  (OK,
-on a 386, an 8-bit base range makes sense, but that's unusual.)  It
-seems to me that T'Base should reflect the range of intermediate results
--- ntent.
-
-A useful idiom in Ada 95 is:
-
-    type Dummy_Count is range 1..10_000;
-    subtype Count is Dummy_Count'Base range 1..Dummy_Count'Base'Last;
-    Number_Of_Widgets: Count := 1;
-
-which means "I want Count to be a positive integer, at least up to
-10_000, but there's no harm in giving me more than 10_000, so please use
-the most efficient thing."
-
-On most machines, I would expect Count'Last to be 2**31-1 (even though
-in-memory variables could be stored in 16 bits).  Any other value for
-Count'Last will result in an unnecessary (software) constraint check on:
-
-    Number_Of_Widgets := Number_Of_Widgets + 1;
-
-I see no reason for Dummy_Count'Base being -2**15..2**15-1.  Why, for
-example, is that any more sensible than -10_000..10_000, which is the
-smallest range allowed for the base range by the RM?
-
-- Bob
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Saturday, April 11, 1998 8:27 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-<<I'm astonished that Robert thinks T'Base should be less than the 32-bit
-range (assuming it's big enough), for most 32-bit architectures.  (OK,
-on a 386, an 8-bit base range makes sense, but that's unusual.)  It
-seems to me that T'Base should reflect the range of intermediate results
--- ntent.
->>
-
-It is not surprising that Bob Duff is surprised by this, because it is 
-closely related to the whole business of the "wrong" size stuff in the RM.
-
-It is a significant mistake in an implementation to by default have objects
-of a subtype occupy less space than the type will occupy. This is because
-it is highly incompatible with nearly all Ada 83 implementations.
-
-Consider:
-
-   type x is record
-     a : Natural;
-     b : Natural range 1 .. 10;
-     c : Character;
-   end record;
-
-Nearly all Ada 83 compilers will allocate 32 bits for b.
-
-Now in Ada 95, we are forced to consider the size of the type of b to be
-4 bits, and it would seem natural to allocate one byte, but this will
-cause horrible compatibility difficulties.
-
-We introduced Object_Size in GNAT to partially get around this (you see the
-difficulty is that if your compiler *does* feel like allocating 8 bits
-for that Natural field, there is no way to stop it. Even if you do:
-
-   subtype N10 is Natural range 1 .. 10;
-
-there is no way in Standard Ada 95 to override the size of N10 anyway.
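-
-(For illustration, the GNAT-specific clause referred to above is written
-roughly as follows; Object_Size is implementation-defined, not standard
-Ada 95:)
-
-   for N10'Object_Size use 32;  -- keep 32-bit objects for subtype N10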
-
-It was a big mistake to require the type sizes to be wrong with respect
-to Ada 83 (one I complained loudly and repeatedly about I might say :-)
-
-Indeed we recently had to introduce in GNAT a new attribute VADS_Size
-and a pragma Use_VADS_Size to provide additional tools for getting around
-this annoying incompatibility (easily the biggest incompatibility between
-Ada 83 and Ada 95 in practice).
-
-But back to what a compiler should do.
-
-To me, the base type in Ada 83 (and Ada 95) is most naturally considered
-to be the size used to *store* things. Intermediate values are allowed
-to be bigger explicitly in 11.6 for this very reason (this was the original
-thought behind the permission in 11.6, and we explicitly discussed machines
-where arithmetic is most naturally done in higher precision).
-
-You definitely want objects declared as
-
-  type m is range 1 .. 10;
-  mm : m;
-
-to occupy 8 bits on nearly all machines.
-
-Bob Duff's model of this (and apparently Tuck's too) is to make sure that
-the subtype size is small enough, and then condition the object size on
-the subtype size. But as described above, that is simply too incompatible
-with Ada 83 to be tolerable.
-
-All the Ada 83 compilers I am familiar with provided 8- and 16-bit base
-types for machines like the Sparc. We did discuss in Alsys the possibility
-of having only one base type, and indeed we concluded that it would be
-valid from the RM, but we quickly decided that there were all sorts of
-problems with this approach.
-
-The problem with stream attributes is typical of the kind of things that
-go wrong if you think this way.
-
-<<<<I'm astonished that Robert thinks T'Base should be less than the 32-bit
-range (assuming it's big enough), for most 32-bit architectures.  (OK,
-on a 386, an 8-bit base range makes sense, but that's unusual.)  It
-seems to me that T'Base should reflect the range of intermediate results
--- ntent.
->>
->>
-
-It may be the intent of the designers of Ada 95, but if so, it is a definite
-distortion of the original intent.
-
-Note that you *certainly* don't believe this for the floating-point case.
-I cannot imagine anyone arguing that there should only be one floating-point
-type (64-bit) on the power architecture, because all intermediate results
-in registers are 64-bits.
-
-I similarly cannot imagine anyone arguing that there should be only one
-80-bit floating-point base type on the x86, on the grounds that this is
-the only type for intermediate results in registers.
-
-Note that when Bob says "intermediate results" above, he does not really
-mean that, he means "intermediate results stored in registers". We have
-always recognized that intermediate results stored in registers may
-exceed the base range, so why be astonished when this happens!
-
-I will say it again, on a machine like the SPARC, I think it is a very
-bad decision to have only one 32-bit base type. I think it leads to
-all sorts of problems, and makes compatibility with existing Ada 83
-legacy programs impossible.
-
-We briefly went down the route of letting object sizes be dictated by
-the strange Ada 95 small subtype sizes (e.g. natural'size being 31 instead
-of 32), but we quickly retracted this (inventing Object_Size at the same
-time) because it caused major chaos for our users (not to mention for some
-of our own code).
-
-The idea of subtypes by default having a different memory representation
-from their parent types is fundamentally a bad approach.
-
-The idea of letting compilers freely choose whether or not to follow this
-approach in an implementation dependent manner is even worse.
-
-The lack of a feature to control what the compiler chooses here makes the
-problem still worse.
-
-At least in GNAT, you can use Object_Size to tell the compiler what to do.
-For example, if you absolutely need compatibility with an Ada compiler that
-does behave in the manner that Bob envisions (object sizes for subtypes
-by default being smaller than object sizes for the parent type), then you
-can force this in GNAT with Object_Size attribute definition clauses (which
-are allowed on subtypes). We have never had a customer that we know of who
-had this requirement.
-
-I have said it before, I will say it again. The Size stuff in Ada 95 is
-badly messed up.
-
-In Ada 83, it was insufficiently well defined, but it permitted compilers
-to behave reasonably.
-
-In Ada 95, it was specifically defined with respect to type sizes, in a
-manner that was incompatible with most Ada 83 compilers and which causes
-a lot of problems. 
-
-Ada 95 copied Ada 83 in not providing a way to specify default object
-sizes, but the problem is suddenly much worse in Ada 95. This is because
-in Ada 83, the assumption was that object sizes correspond to type sizes
-in general, which is reasonable. But you CANNOT do this in Ada 95, because
-the type sizes are strange.
-
-Very annoying!
-
-****************************************************************
-
-From: 	Tucker Taft[SMTP:stt@inmet.com]
-Sent: 	Monday, April 13, 1998 1:22 PM
-To: 	arg95@sw-eng.falls-church.va.us
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-The goal of this AI is to minimize implementation dependence
-in the area of scalar type stream representation.
-
-I see two choices:
-
-   1) Try to minimize implementation dependence in the area
-      of base range selection, and retain the tie between
-      base range and stream representation;
-
-   2) Separate the selection of base range from the selection
-      of stream representation, and tie stream representation
-      to the properties of the first subtype.
-
-Choosing (1) above means specifying the algorithm for choosing
-the base range in terms of the first subtype size relative to the size 
-of the storage element, the word size, etc.  
-
-Choosing (2) means specifying the algorithm for choosing the
-stream representation in terms of the first subtype size relative to
-the size of the stream element (and perhaps taking the
-size of the storage element and/or the size of the word into account).
-
-If we were to choose (2), I would hope implementations that already
-choose "minimal" size base ranges would have no need to change.
-So choosing (2) would simply affect implementations which happen
-to choose bigger base ranges today.
-
-Choosing (1) would similarly only affect implementations that currently
-have bigger base ranges, and would presumably have the identical
-effect on users of streams, but would also affect existing users
-of 'Base (which might be considered a bug or a feature).
-
-Any comments on choice (1) vs. choice (2)?
-
--Tuck
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Monday, April 13, 1998 5:39 PM
-To: 	stt@inmet.com
-Cc: 	arg95@sw-eng.falls-church.va.us
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-> Any comments on choice (1) vs. choice (2)?
-
-I favor choice 2.  One reason, which I stated in my previous message, is
-that I would like the following idiom to be efficient:
-
-    type Dummy is range 1..10_000;
-    subtype Count is Dummy'Base range 1..Dummy'Base'Last;
-
-I would like to hear from other ARG members (eg things like, "Yeah, we
-agree", or "No, that's a bogus/useless idiom", or "the above should be
-INefficient", or whatever).
-
-- Bob
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Tuesday, April 14, 1998 5:22 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-<<>    1) Try to minimize implementation dependence in the area
->       of base range selection, and retain the tie between
->       base range and stream representation;
->
->    2) Separate the selection of base range from the selection
->       of stream representation, and tie stream representation
->       to the properties of the first subtype.
->>
-
-Just so things are clear, I agree that 1) is impractical at this stage,
-although it is a pity that there is so much implementation dependence
-here. For interest what do other compilers do between the two following
-choices on e.g. the SPARC:
-
-  1) Multiple integer base types, 8,16,32 bits
-  2) Single integer base type 32 bits
-
-(just ignore the presence or absence of a 64-bit integer type for this
-purpose).
-
-I understand the answers to be:
-
-  GNAT: 1)
-  Intermetrics: 2)
-
-But what do other compilers do?
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Tuesday, April 14, 1998 5:07 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-<<I favor choice 2.  One reason, which I stated in my previous message, is
-that I would like the following idiom to be efficient:
-
-    type Dummy is range 1..10_000;
-    subtype Count is Dummy'Base range 1..Dummy'Base'Last;
-
-I would like to hear from other ARG members (eg things like, "Yeah, we
-agree", or "No, that's a bogus/useless idiom", or "the above should be
-INefficient", or whatever).
->>
-
-I see this particular choice of the idiom as strange, since you still
-have a range on type Count, so you will save nothing in a decent compiler.
-Intermediate results for type Dummy and Count are the same, because
-the base is the same, and you get range checks anyway, because it is
-not full range. Yes, OK, there are a few cases in which only the upper
-bound is checked which can be eliminated.
-
-A much more usual use of this idiom is when the range is entirely omitted
-from the second declaration, and this is indeed useful.
-
-However, a much more usual kind of use of this idiom is:
-
-    type Dummy is range - (2 ** 31 - 1) .. + (2 ** 31 - 1);
-    type Efficient is new Dummy'Base;
-
-where you are clearly aiming at about a 32-bit type (allowing for 1's comp
-and also asking for 36 bits on a 36 bit machine).
-
-We used to do this in the GNAT sources, but these days, it's of academic
-interest anyway, since all interesting machines are 2's complement, and
-all interesting machines have 32 bit integers.
-
-I would expect Bob Duff's particular set of declarations to be aiming at
-an efficient 16-bit type if anything, although after learning of the 
-peculiar choice of some compilers to fail to provide a 16-bit base type
-on machines where I would have expected it to be present (e.g. SPARC), it
-is clear that this is a highly non-portable idiom.
-
-As such, I don't think we want to take this idiom (in the form Bob presents
-it, which is clearly non-portable) too much into account, since any use of this
-will be highly implementation dependent (and worse, you can't even clearly
-tell what the programmer had in mind from looking at the text).
-
-It is a pity that the RM gives no guidance here, but we are clearly stuck
-with a situation where different compilers do VERY different things here.
-
-Probably we have no choice but to kludge the Stream_IO stuff at this stage,
-since it would probably be disruptive for the compiler that chooses large
-base types to change.
-
-What we have here in my mind is a situation where, especially given the
-rules regarding stream types that are clear in the RM, a compiler has made
-a bad choice of how to do things (Bob, I know that you think the choice is
-good from other points of view, but you have to agree that *given the rules
-in the RM for stream handling*, this choice is *not* the right one overall).
-
-Given this unfortunate choice, the compiler comes along to the ARG and
-petitions for a change in the language. OK, at this stage I think we have
-to grant the petition.
-
-But I hope that makes my uneasiness about the precedent we are setting clear.
-
-
-We have a situation where there are two implementation choices that are
-consistent with the RM. One leads to clearly unfortunate choices for the
-behavior of streams, one does things just fine. Normally if there are two
-implementation choices consistent with the RM, and one works and one does
-not, we expect the RM to be dictating the choice. We don't expect the choice
-to dictate changes to the RM.
-
-Still, as I note above, I think the only reasonable choice here is to
-make the change to the RM. It has no effect on compilers that choose
-base types to match storage modes rather than register modes (i.e. that
-treat integer the same as floating-point), and it allows compilers that
-choose base types for the integer case to match register modes to avoid
-a clearly undesirable effect for stream behavior.
-
-****************************************************************
-
-From: 	Pascal Leroy[SMTP:phl@Rational.Com]
-Sent: 	Tuesday, April 14, 1998 5:45 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
->    1) Try to minimize implementation dependence in the area
->       of base range selection, and retain the tie between
->       base range and stream representation;
->
->    2) Separate the selection of base range from the selection
->       of stream representation, and tie stream representation
->       to the properties of the first subtype.
-
-Option (1) would obviously impact programs which use 'Base and programs which
-have 'large' intermediate results.  While we know that such programs are
-essentially non-portable, requiring their behavior to change seems like a very
-bad idea to me.
-
-I vote for (2).
-
-Pascal
-
-****************************************************************
-From: 	Jean-Pierre Rosen[SMTP:rosen.adalog@wanadoo.fr]
-Sent: 	Tuesday, April 14, 1998 1:39 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
->The goal of this AI is to minimize implementation dependence
->in the area of scalar type stream representation.
->
->I see two choices:
->
->   1) Try to minimize implementation dependence in the area
->      of base range selection, and retain the tie between
->      base range and stream representation;
->
->   2) Separate the selection of base range from the selection
->      of stream representation, and tie stream representation
->      to the properties of the first subtype.
-
-1) may potentially affect all programs, while 2) affects only those that use streams.
-I would therefore argue for 2) on the basis that it is much less disruptive.
-----------------------------------------------------------------------------
-                  J-P. Rosen (Rosen.Adalog@wanadoo.fr)
-
-****************************************************************
-
-From: 	Tucker Taft[SMTP:stt@inmet.com]
-Sent: 	Tuesday, April 14, 1998 9:06 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-> ...
-> Just so things are clear, I agree that 1) is impractical at this stage,
-> although it is a pity that there is so much implementation dependence
-> here. For interest what do other compilers do between the two following
-> choices on e.g. the SPARC:
-> 
->   1) Multiple integer base types, 8,16,32 bits
->   2) Single integer base type 32 bits
-> 
-> (just ignore the presence or absence of a 64-bit integer type for this
-> purpose).
-> 
-> I understand the answers to be:
-> 
->   GNAT: 1)
->   Intermetrics: 2)
-> 
-> But what do other compilers do?
-
-The Intermetrics front end is happy to support either choice for 
-base types.  I believe Aonix and Green Hills both chose to have 
-only 32-bit base types.
-
-The Raytheon "Patriot" compiler has only one 24-bit base integer
-type, for what that's worth ;-).
-
--Tuck
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Tuesday, April 14, 1998 8:55 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-I suspect that what we *should* have done in Ada 9X is to make stream
-I/O completely portable by default, by specifying some particular
-representation, such as some preexisting standard, like XDR.  I don't
-know anything about XDR, so I don't know if it would have been the
-appropriate choice, but I think it is unfortunate that (1) we didn't
-nail things down precisely enough, (2) we tied stream representations to
-machine-dependent in-memory representations, and (3) we tried to "roll
-our own" instead of relying on some preexisting standard.
-
-I guess it's too late, now.
-
-- Bob
-
-****************************************************************
-
-From: 	Pascal Leroy[SMTP:phl@Rational.Com]
-Sent: 	Tuesday, April 14, 1998 10:40 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-> For interest what do other compilers do between the two following
-> choices on e.g. the SPARC:
->
->   1) Multiple integer base types, 8,16,32 bits
->   2) Single integer base type 32 bits
->
-> I understand the answers to be:
->
->   GNAT: 1)
->   Intermetrics: 2)
-
-We do 2.
-
-Pascal
-
-_____________________________________________________________________
-Pascal Leroy                                    +33.1.30.12.09.68
-pleroy@rational.com                             +33.1.30.12.09.66 FAX
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Tuesday, April 14, 1998 7:31 PM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-<<We do 2.
-
-Pascal
->>
-
-Interesting. Note that the differences will not be that great in practice.
-Certainly simple variables as in
-
-  type x is range 1 .. 100;
-  y,z : x;
-
-  y := z + y;
-
-will generate identical code and allocations in the two cases.
-
-Incidentally, I assume this means that Rational has the same problem with
-excessive size for stream elements ... so that means that two of the main
-technologies really need the fix that is in this AI.
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Tuesday, April 14, 1998 7:33 PM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-<<I suspect that what we *should* have done in Ada 9X is to make stream
-I/O completely portable by default, by specifying some particular
-representation, such as some preexisting standard, like XDR.  I don't
-know anything about XDR, so I don't know if it would have been the
-appropriate choice, but I think it is unfortunate that (1) we didn't
-nail things down precisely enough, (2) we tied stream representations to
-machine-dependent in-memory representations, and (3) we tried to "roll
-our own" instead of relying on some preexisting standard.
-
-I guess it's too late, now.
->>
-
-GNAT provides XDR as an optional choice, but you would not want it as the
-required default -- too inefficient.
-
-****************************************************************
-
-From: 	David Emery[SMTP:emery@mitre.org]
-Sent: 	Wednesday, April 15, 1998 6:34 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-I forget the name of the encoding scheme that DCE RPC uses, but it's much
-better than XDR, particularly in homogenous networks.  In fact, I argued to 
-Anthony Gargaro that the DS Annex should take a 'minimalist' approach and
-consist of binding methods to RPC and associated data encoding schemes such
-as XDR.   I'm still sorry that the DS Annex didn't provide this simplistic
-functionality, since it's proven to be very useful.
-
-				dave
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Wednesday, April 15, 1998 6:43 AM
-Subject: 	Re: AI-195; Stream representation for scalar types
-
-<<I forget the name of the encoding scheme that DCE RPC uses, but it's much
-better than XDR, particularly in homogenous networks.  In fact, I argued to
-Anthony Gargaro that the DS Annex should take a 'minimalist' approach and
-consist of binding methods to RPC and associated data encoding schemes such
-as XDR.   I'm still sorry that the DS Annex didn't provide this simplistic
-functionality,
-since it's proven to be very useful.
->>
-
-It's quite trivial in GNAT to provide a plugin replacement unit that 
-specifies what protocol you want. I think this is a design that is
-very attractive (see s-stratt.adb in the GNAT library).
-
-Incidentally we use XDR *precisely* to deal with heterogenous networks. 
-The default encodings work perfectly well on homogenous networks. I
-suppose one could worry about different compilers being used on a
-homogenous network, but that seems somewhat theoretical so far.
-
-I do hope that if any other compilers implement Annex E, they follow
-the GNAT design of isolating the low-level encodings in a separate,
-easily modifiable module. This really seems the right solution, rather
-than trying to pin down any one method as appropriate (now or when the
-standard was being designed).
-
-****************************************************************
-
-From: 	Anthony Gargaro[SMTP:abg@SEI.CMU.EDU]
-Sent: 	Wednesday, April 15, 1998 7:43 AM
-Subject: 	Re: AI-195; Stream representation for scalar types 
-
-Dear David-
-
->I forget the name of the encoding scheme that DCE RPC uses, but it's much
->better than XDR, particularly in homogeneous networks.  
-
-It is called Network Data Representation (NDR) transfer syntax (refer
-X/Open DCE: Remote Procedure Call, Chapter 14) and is the default transfer
-syntax for heterogeneous networks.
-
->In fact, I argued to
->Anthony Gargaro that the DS Annex should take a 'minimalist' approach and
->consist of binding methods to RPC and associated data encoding schemes such
->as XDR.   I'm still sorry that the DS Annex didn't provide this simplistic
->functionality,
->since it's proven to be very useful.
-
-I believe the approach taken in Annex E was (and is) correct. A binding
-to DCE RPC would have been a significant undertaking and, from my recent
-experience using DCE, a dubious investment by the design team, since it is
-relatively straightforward to use the specified C APIs from within an
-Ada partition.
-
-It is interesting to note that the P1003.21 (Realtime Distributed Systems
-Communication) group has been trying to reach consensus on a suitable binding
-for heterogeneous data transfer for some considerable time.
-
-Anthony.
-
-****************************************************************
-
-From: 	Tucker Taft[SMTP:stt@inmet.com]
-Sent: 	Friday, May 15, 1998 4:43 PM
-Subject: 	AI-195 redux; stream rep for scalars
-
-I've forgotten who was working on AI-195, but a news thread
-on comp.lang.ada illustrates that we need to address this sooner
-rather than later.    
-
-In the comp.lang.ada posting, someone was quite confused by the
-fact that a type declared via:
-
-   type Byte is range 0..255;
-
-ended up taking 16 bits per item when written out using 'Write,
-even though Byte'Size is clearly 8.
-
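-To make the effect concrete, here is a minimal sketch (the file name and
-the value written are arbitrary) that writes one value of such a type and
-reports how many stream elements ended up in the file; on the compilers
-discussed in this thread the answer is reportedly 2 or 4 rather than 1:
-
-   with Ada.Streams.Stream_IO;
-   with Ada.Text_IO;
-   procedure Show_Byte_Stream_Size is
-      use Ada.Streams.Stream_IO;
-      type Byte is range 0 .. 255;
-      F : File_Type;
-   begin
-      Create (F, Out_File, "byte.dat");
-      Byte'Write (Stream (F), 200);   -- one value of the first subtype
-      Ada.Text_IO.Put_Line
-        ("Stream elements written:" & Count'Image (Size (F)));
-      Close (F);
-   end Show_Byte_Stream_Size;
-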
-In my earlier note, I noted that 'Base'Size might be 32 in some 
-implementations for a type declared via:
-
-   type Signed_Byte is range -128 .. 127;
-
-resulting in 32 bits per stream item, while for other implementations,
-you might have just 8 bits per stream item.
-
-The "comp.lang.ada" case reflects a different problem, because we know 
-that for all Ada 95 compilers, Byte'Base'Size is at least 9,
-even though Byte'Size is clearly 8.
-
-Both cases, however, seem to reflect the confusing nature of
-the current RM rule, where the stream representation is based on
-'Base'Size rather than <first_subtype>'Size.
-
-As I have suggested, I think we should propose to change the rule so that
-the default stream representation for scalar types is based on the 
-'Size of the first subtype, rather than the base subtype 'Size.
-
-There is still the question of whether something such as:
-
-    type Three_Bytes is range 0..2**24-1;
-
-should occupy three (presumably 8-bit) stream elements or 4.
-
-I would tentatively suggest 4, based on a desire to limit representations
-to numbers of stream elements that are factors or multiples of the word
-size, so that implementations like GNAT, which already have 'Base'Size
-being the smallest number of storage elements that is a factor/multiple of
-the word size, would not have to change for most cases (though the "0..255"
-Byte case would still require a change in GNAT).
-
-So who is doing AI-195, and where does it stand now?
-
--Tuck
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Friday, May 15, 1998 5:10 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-> So who is doing AI-195, and where does it stand now?
-
-According to the minutes, one Tucker Taft has an action item for AI-195.
-;-)
-
-The current status is that the AI is not written at all -- it just
-contains the !appendix, containing 2 e-mails (from Pascal Leroy and
-Randy Brukardt).  Randy raises the question you're interested in here;
-Pascal raises a whole truckload of other stream-related questions.
-
-I agree with your technical comments: Use the first subtype's 'Size.  I
-guess that means 'Write raises an exception for values outside the base
-range?  And a 3-byte value should get 4 bytes on a "typical" machine.
-(I'm not completely convinced of the last, but I think it makes the most
-sense.)
-
-There is perhaps a danger that the AI as a whole will get stalled on one
-of the more obscure questions.
-
-- Bob
-
-****************************************************************
-
-From: 	Randy Brukardt
-Sent: 	Friday, May 15, 1998 5:33 PM
-Subject: 	RE: AI-195 redux; stream rep for scalars
-
->> So who is doing AI-195, and where does it stand now?
-
->According to the minutes, one Tucker Taft has an action item for AI-195. ;-)
-
-Pascal sent a rewrite of that AI to this list on 5/7/98; I remember reading it
-(and I found it on my saved mail).  His version was further along than what Bob
-remembers.
-
-Personally, I think Pascal's questions and the really important one that Tucker
-just reraised should be handled in separate AIs, since the priorities of the
-answers are quite different.  The 'Size question is going to be a binding
-interpretation which is going to change the behavior of some implementations
-(and require an ACVC test to check compliance), and that is the most important
-kind of AI to get settled quickly.
-
-				Randy.
-
-****************************************************************
-
-From: 	Gary Dismukes[SMTP:dismukes@gnat.com]
-Sent: 	Friday, May 15, 1998 5:23 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-> > So who is doing AI-195, and where does it stand now?
-> 
-> According to the minutes, one Tucker Taft has an action item for AI-195.
-> ;-)
-> 
-> The current status is that the AI is not written at all -- it just
-> contains the !appendix, containing 2 e-mails (from Pascal Leroy and
-> Randy Brukardt).  Randy raises the question you're interested in here;
-> Pascal raises a whole truckload of other stream-related questions.
-
-But Pascal sent out a revised version of AI-195 a week ago,
-didn't you receive it?  Below is the portion from the summary
-pertaining to the scalar size issue (further detail is in the
-discussion section):
-
-
-> From phl@sorbonne.rational.com Thu May  7 06:21 EDT 1998
-> Return-Path: <phl@sorbonne.rational.com>
-> From: "Pascal Leroy" <phl@Rational.COM>
-> Date: Thu, 7 May 1998 12:20:15 +0000
-> Subject: AI95-00195/02
-> 
-> !standard 13.13.1 (00)                                98-04-04  AI95-00195/02
-> !class binding interpretation 98-03-27
-> !status received 98-03-27
-> !reference AI95-00108
-> !reference AI95-00145
-> !priority High
-> !difficulty Hard
-> !subject Streams
-> !summary 98-04-04
->
-> ...
->
-> The predefined stream-oriented attributes for a scalar type shall only read or
-> write the minimum number of stream elements required by the first subtype of
-> the type.  Constraint_Error is raised if such an attribute is passed (or would
-> return) a value outside the range of the first subtype.
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Friday, May 15, 1998 7:17 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-> Pascal sent a rewrite of that AI to this list on 5/7/98; I remember reading 
-> it (and I found it on my saved mail).  His version was further along than 
-> what Bob remembers...
-
-"I have a good memory -- it's just short."  ;-)
-
-It's not a matter of my memory; all my e-mail is automatically saved,
-and I grepped it all for the message in question, and it's not there.
-I've had problems in the past getting e-mail from Pascal.  I don't know
-why, but I sure don't like it.
-
-The current AI on sw-eng is as I reported -- nothing there but the two
-e-mails in the !appendix.
-
-> Personally, I think Pascal's questions and the really important one that 
-> Tucker just reraised should be handled in separate AIs, since the priorities 
-> of the answers are quite different.  The 'Size question is going to be a 
-> binding interpretation which is going to change the behavior of some 
-> implementations (and require an ACVC test to check compliance) - that is 
-> the most important kind of AI to get settled quickly.
-
-I tend to agree.  If we can agree (quickly) on the whole AI as is, then
-let's do it, but if there's some doubt about some obscure point, let's
-split out the important point, and officially approve that as a separate
-AI.  Then we can chat at leisure about the obscure stuff.  This
-particular issue is in the 5% or less of AI material that real users
-actually care about!
-
-I must say, the streams stuff didn't get the attention it deserved
-during the 9X design -- neither from the MRT nor from the reviewers.
-It's kind of boring stuff, but it's terribly *important* stuff, and I
-think we all ought to have paid more attention to it.  This is an
-important area where Java wins strongly over Ada 95, and it didn't have
-to be that way.
-
-- Bob
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Friday, May 15, 1998 8:18 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<As I have suggested, I think we should propose to change the rule so that
-the default stream representation for scalar types is based on the
-'Size of the first subtype, rather than the base subtype 'Size.
->>
-
-This is a nasty incompatibility. I object to changes in the language
-which force us to introduce incompatibilities, so at best this should
-be a matter of *allowing* implementations to do this, not requiring
-them to do it.
-
-We would object STRONGLY to being forced to make incompatible changes which
-could negatively affect our customers.
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Friday, May 15, 1998 8:23 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<I must say, the streams stuff didn't get the attention it deserved
-during the 9X design -- neither from the MRT nor from the reviewers.
-It's kind of boring stuff, but it's terribly *important* stuff, and I
-think we all ought to have paid more attention to it.  This is an
-important area where Java wins strongly over Ada 95, and it didn't have
-to be that way.
->>
-
-Why do you feel Java wins strongly over Ada 95 here?
-
-****************************************************************
-
-From: 	Tucker Taft[SMTP:stt@inmet.com]
-Sent: 	Friday, May 15, 1998 9:27 PM
-Subject: 	Re: more on the streams AI
-
-Robert Dewar wrote:
-> 
-> (I have forgotten the proper ARG address, feel free to forward this if
-> it is useful to do so, and do not forward it if I am confused)
-
-I have "cc"ed the ARG.
-
-> First, saying that the number of bytes written will correspond to the
-> size of objects of the first subtype may INCREASE non-portability!
-> 
-> Consider
-> 
-> 	type x is new integer range 1 .. 10;
-> 
-> Currently both GNAT and Intermetrics read/write four bytes. Under the
-> proposed change, GNAT will still read/write 4 bytes, but Intermetrics
-> will read/write one byte.
-
-Perhaps types defined by a derived type definition should follow
-a different rule -- they should inherit 'Read/'Write from the parent type,
-at least in the absence of an explicit Size clause on the derived type.
-
-In general my hope is to minimize the number of cases where GNAT's
-current representation will differ from the AI's recommendation.
-Mostly I want to have some set of recommendations that are independent
-of implementation decisions about base type selection, stand-alone
-object representation, etc.
-
-> Second, if a Size clause is given, then surely the AI says that you
-> do indeed read/write the amount of space allocated for objects of
-> the first subtype:
-
-I don't think the size of stand-alone objects is relevant, since
-they might live in registers, etc.
-
-Perhaps the 'Component_Size of an unpacked array would be
-a good indication.
-
-But in some ways, agreeing on the size for stream representation
-is more important than agreeing about the default sizes chosen for 
-objects in memory, since interoperability is more dependent on 
-stream representation than on in-memory representation.
-
-> 17   If a stream element is the same size as a storage element, then the
-> normal in-memory representation should be used by Read and Write for scalar
-> objects.  Otherwise, Read and Write should use the smallest number of stream
-> elements needed to represent all values in the base range of the scalar type.
-> 
-> 
-> 
-> This AI only says to use the base range in a case which does not apply to any
-> of the implementations we are talking about. The first sentence SURELY
-> says that we should read/write one byte if objects of the type are one
-> byte.
-
-Remember that this is in the context of the definition of 'Read and
-'Write as operations of the *unconstrained* (base) subtype, as implied
-by the italicized T.  So this is saying that the in-memory representation
-for S'Base should be used, whereas it seems it would be more useful for
-the in-memory representation of the first subtype to be the relevant one.
-
-> But neither GNAT nor Intermetrics seem to do this, they implement only
-> the second sentence, which does not apply to normal implementations.
-
-Both worry about the base subtype, which is consistent with both
-sentences (1) and (2).
-
-> I am *very* confused!
-
-Something about section 13.13 confuses everyone...
-
--Tuck
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Saturday, May 16, 1998 5:12 AM
-Subject: 	Re: more on the streams AI
-
-<<Remember that this is in the context of the definition of 'Read and
-'Write as operations of the *unconstrained* (base) subtype, as implied
-by the italicized T.  So this is saying that the in-memory representation
-for S'Base should be used, whereas it seems it would be more useful for
-the in-memory representation of the first subtype to be the relevant one.
-
-> But neither GNAT nor Intermetrics seem to do this, they implement only
-> the second sentence, which does not apply to normal implementations.
-
-Both worry about the base subtype, which is consistent with both
-sentences (1) and (2).
-
-> I am *very* confused!
-
-Something about section 13.13 confuses everyone...
->>
-
-I disagree; to talk about the "in-memory" representation surely *is*
-talking about objects. If you look at the recent CLA confusion, the
-problem was precisely that the person expected the stream size to be
-related to the size of objects.
-
-In GNAT terms, I would expect the size of stream elements to be related
-to the Object_Size, not the Value_Size. This is consistent with the fact
-that we agree that they should, like objects, be padded to a natural
-boundary.
-
-I strongly object to the 24-bit case generating three bytes for an object
-with four-byte alignment. This potentially introduces a *third* concept
-of size and is very confusing.
-
-The wording in the RM very clearly expects stream element size to be related
-to object size. Let's look at the para again:
-
-                            Implementation Advice
-
-17   If a stream element is the same size as a storage element, then the
-normal in-memory representation should be used by Read and Write for scalar
-objects.  Otherwise, Read and Write should use the smallest number of stream
-elements needed to represent all values in the base range of the scalar type.
-
-
-
-First, I must say, I have NO idea what the thinking of the designers is
-behind the conditional, perhaps they would care to clarify ...
-
-Tuck seems to say that the first sentence assumes the base range. But I
-find that dubious, since why mention the base range in the second sentence
-and not the first?
-
-I think almost anyone would read the first sentence to mean what it says,
-namely if you do Stream_IO on "junks", then the representation used is
-the representation of junks in memory, period.
-
-Whether this is a good idea or not is certainly fair discussion. Ada 95
-left very open the representation and size of objects, much more open than
-in Ada 83, where the general understanding was that by default the size of
-objects was the same as the size of the type of objects. 
-
-Ada 95 decided (in my opinion, as you well know, a mistake) to pin down the
-definition of the default 'Size for types in such a manner that this
-correspondence is *forced* to be broken. As a result, the RM has pretty
-much nothing to say about object sizes, and for example, leaves quite
-open the question of whether:
-
-type x is record
-   a,b : integer range -128 .. +127;
-end record;
-
-occupies two bytes or eight bytes (this is even clearer in the simple case)
-
-   q : integer range -128 .. +127;
-
-for which GNAT will allocate four bytes and Intermetrics one byte. We have
-found that for compatibility with legacy Ada 83, it is essential that 
-subtypes like this by default have the same representation as the base type.
-
-We should remember that this paragraph is after all implementation advice.
-I hope the ARG is not considering trying to elevate it to the status of
-a requirement! 
-
-So the AI cannot in any case legislate interchangeability!
-
-I think that the best thing is to pin attention on the reasonable informal
-understanding of the important phrase in para 17:
-
-"the normal in-memory representation should be used by Read and Write for
- scalar objects [of the subtype in question]"
-
-I add the [...] phrase to make clear what I mean by a reasonable informal
-understanding.
-
-Clearly since different Ada 95 compilers have such very different 
-interpretations of what this means in the default case, there is not
-much that can be done in the default case.
-
-However, it certainly would be nice if
-
-	type bla is ...
-        for bla'size use ...
-
-had a consistent effect of making bla'size the size of the stream
-representation in the case where the specified size is the "reasonable"
-size for objects of the type.
-
-One thing of interest here is that in practice, many of our customers use
-the stream stuff extensively, and none has ever complained that they got
-unexpected results.
-
-Marcus Kuhn is not a customer, so he is not included in this paragraph,
-but I think there is some justification for his surprise that
-
-	type bla is range 0 .. 255;
- 	for bla'size use 8;
-
-results in 16-bit stream elements rather than 8-bit elements.
-
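-A small sketch, using only standard attributes, that shows the numbers
-behind that surprise (the GNAT-specific Object_Size and Value_Size
-attributes mentioned above could be printed the same way where they are
-supported):
-
-   with Ada.Text_IO; use Ada.Text_IO;
-   procedure Size_Sketch is
-      type Bla is range 0 .. 255;
-      for Bla'Size use 8;
-   begin
-      Put_Line ("Bla'Size      =" & Integer'Image (Bla'Size));       -- 8
-      Put_Line ("Bla'Base'Size =" & Integer'Image (Bla'Base'Size));  -- 16 with GNAT, per this thread
-      --  The default Bla'Write nevertheless works in terms of the base
-      --  type, which is why 16 bits (GNAT) or 32 bits (Intermetrics)
-      --  end up in the stream, as described above.
-   end Size_Sketch;
-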
-I am very worried about making an incompatible change here, and of course
-we would have to introduce it under a switch and leave an option forever,
-which is arduous, but I must say it makes sense to use subtype'Object_Size
-in the GNAT context.
-
-****************************************************************
-
-From: 	Anthony Gargaro[SMTP:abg@SEI.CMU.EDU]
-Sent: 	Saturday, May 16, 1998 7:45 AM
-Subject: 	Re: AI-195 redux; stream rep for scalars 
-
-Dear Bob-
-
->I must say, the streams stuff didn't get the attention it deserved
->during the 9X design -- neither from the MRT nor from the reviewers.
->It's kind of boring stuff, but it's terribly *important* stuff, and I
->think we all ought to have paid more attention to it. 
-
-I agree. In fact there was a strong sentiment prior to the Frankfurt
-workshop to remove it from the revision (along with Annex E).
-
-Anthony.
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Saturday, May 16, 1998 7:49 AM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<I agree. In fact there was a strong sentiment prior to the Frankfurt
-workshop to remove it from the revision (along with Annex E).
->>
-
-First: I remember no such suggestion; certainly people were unsure about
-annex E, but streams are critically useful in many other contexts.
-
-Second, although there have been problems with the Intermetrics implementation
-due to the clash in using only one base type and the probably wrong RM
-requirements in this area, it is important to note that generally it is
-working out VERY well; we have LOTS of customers using this capability,
-and we have essentially no reported problems with it.
-
-Note that I am not saying that the Intermetrics implementation is wrong here,
-although I think having only one 32-bit base type is definitely a mistake
-for many reasons (only one of which is that it screws up the stream stuff).
-
-As I have said before, I think of base types as corresponding to the types
-that can be loaded and stored, NOT to the types on which computations in
-registers are possible. The clear indication of this is that there are
-many machines, e.g. Power and Intel x86, where all register computations
-are at maximukm precision (80 bits in x86 and 64 bits on power) and this
-will be an increasinvgly commong approach (Merced is the same way). Yet
-no one proposes NOT having a 32-bit base type for floating-point.
-
-Note also that all the discussions about streams in this context are
-about implementation advice. 
-
-It is very clear from the way the implementation advice is written that
-it does not envision the problems from having base types be chosen in
-a manner not corresponding to the storage layout. I don't see why
-Intermetrics doesn't just decide that, given its decision to have only
-one 32-bit base type, it should ignore the implementation advice and
-do the right thing.
-
-Voila, problem solved.
-
-Implementation advice should not be taken too seriously, especially when
-it is plain wrong.
-
-I must say that I don't like giving IA as much force as we have in the
-past. It causes difficulties. Look at the trouble we are having trying to
-patch up the completely wrong advice in 
-
-   69  An Ada parameter of a record type T, of any mode, is passed as a
-       t* argument to a C function, where t is the C struct
-       corresponding to the Ada type T.
-
-In retrospect, I think C_Pass_By_Copy was a horrible kludge, and
-I regret GNAT deciding to go along with this approach. No wonder WG9
-is having heartburn accepting the kludgy AI that tries to justify this
-incorrect approach.
-
-I am afraid that the AI here is headed for similar messy times. I think
-we should just completely abandon this AI. It is unnecessary.
-
-****************************************************************
-
-From: 	Anthony Gargaro[SMTP:abg@SEI.CMU.EDU]
-Sent: 	Saturday, May 16, 1998 8:25 AM
-Subject: 	Re: AI-195 redux; stream rep for scalars 
-
-Dear Robert-
-
->First: I remember no such suggestion; certainly people were unsure about
->annex E, but streams are critically useful in many other contexts.
-
-I believe that if you refer to the Zero-based budget proposal that
-preceded the Frankfurt workshop you will find that streams had been
-removed from the core language.
-
-Anthony.
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Saturday, May 16, 1998 8:42 AM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<I believe that if you refer to the Zero-based budget proposal that
-preceded the Frankfurt workshop you will find that streams had been
-removed from the core language.
->>
-
-Yes, but many things were left out of the ZBB proposal. As in real life
-with ZBB in the political arena, this represents a negotiating position
-that was an extreme. After all, remember that child units also came under
-pretty strong attack at one point.
-
-My objection here is to linking Annex E and streams together in this
-discussion. The concerns about Annex E were of a very different nature,
-and there were, as you say, lots of people who worried that putting in
-Annex E stuff was premature in the sense that we were not sure what we wanted.
-
-In contrast I think the streams had strong support, and the reason for
-even suggesting taking them out was very different, namely a concern
-that their utility did not justify their complexity. In practice I
-think streams came out pretty well. It is particularly interesting that
-in the GNAT context, we have found it VERY easy to plug in an XDR
-implementation on an optional basis that in practice provides complete
-stream portability in a heterogeneous environment.
-
-I do understand the current problem with streams, but it is a very small
-glitch in the implementation advice, and can trivially be fixed for
-the compiler in which trouble arises by simply "re-interpreting" the
-IA appropriately.
-
-I continue to think that an AI is not needed here.
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Saturday, May 16, 1998 8:46 AM
-Subject: 	streams for scalars
-
-
-Note that the RM is definitely NOT in the business of trying to ensure
-that streams are compatible across compilers, or of in any way specifying
-the exact representation to be used in streams.
-
-Part of the reason that I find this AI suspicious is that it moves a tiny
-little bit in this direction. I find the movement either too small to be
-relevant, or far too little if you really decide that this kind of
-interoperation is important.
-
-I do think that a compiler that generates more than one byte for
-
-  type x is range -128 .. +127;
-
-in streams is broken. I think that any such compiler should be fixed.
-No AI is needed in order to do this fix!
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Sunday, May 17, 1998 11:59 AM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-> <<As I have suggested, I think we should propose to change the rule so that
-> the default stream representation for scalar types is based on the
-> 'Size of the first subtype, rather than the base subtype 'Size.
-> >>
-
-I was under the impression that the above means 'Size, but rounded up to
-some reasonable number (8,16,32,64 bits, on typical machines).  Given
-that, I don't understand:
-
-> This is a nasty incompatibility.
-
-Why is it incompatible?  I thought that's what GNAT was already doing.
-
-- Bob
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Sunday, May 17, 1998 12:02 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-> Why do you feel Java wins strongly over Ada 95 here?
-
-Because Java nails down representations so much (at least in theory)
-that one can pass data across Java implementations, and across machines,
-more portably than in Ada.  In Java, there's never any question about
-how many bits in a given integer type, whereas that's what we're arguing
-about, for Ada, in this discussion.
-
-- Bob
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 12:10 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<Because Java nails down representations so much (at least in theory)
-that one can pass data across Java implementations, and across machines,
-more portably than in Ada.  In Java, there's never any question about
-how many bits in a given integer type, whereas that's what we're arguing
-about, for Ada, in this discussion.
->>
-
-Sure, but for the types that Java defines, any reasonable Ada compiler should
-do the right thing, and if you want endianness independence (does Java
-guarantee this for stream stuff?), then you can of course implement this
-(we use XDR in GNAT to achieve a much greater degree of target 
-independence than Java, e.g. we do not need to depend on IEEE floating-point
-representation).
-
-
-I think you will find that ALL Ada implementations behave as follows:
-
-modular type with range 0..255  1 byte
-32-bit integer type  4 bytes
-character  1 byte
-wide-character 2 bytes
-
-Of course we have the capability in Ada, which does not exist in Java,
-of defining various types, and that is where the most problems exist.
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Sunday, May 17, 1998 12:07 PM
-Subject: 	Re: streams for scalars
-
-> Note that the RM is defintiely NOT in the business of trying to ensure
-> that streams are compatible across compilers, or of in any way specifying
-> the exact representation to be used in streams.
-
-Quite true.  I think that was probably a mistake -- we *should* have
-tried for that more ambitious goal.
-
-> Part of the reason that I find this AI suspicious is that it moves a tiny
-> little bit in this direction. I find the movement either too small to be
-> relevant, or far too little if you really decide that this kind of
-> interoperation is important.
-
-Good point.  But I don't find this particular issue so "tiny".
-
-- Bob
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 12:06 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<Why is it incompatible?  I thought that's what GNAT was already doing.
->>
-
-No, we use the Object_Size of the base type.
-
-However, we have 8, 16, and 32 bit base types
-
-(so we don't get peculiar behavior as often as Intermetrics does, who has
-only a 32-bit base type).
-
-However, GNAT for:
-
-   type x is range 0 .. 255;
-
-will read and write 16 bits regardless of a size clause on type x, since
-it will go to x'base, which is 16 bits for us.
-
-That was the CLA case that was discussed.
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 12:07 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<I was under the impression that the above means 'Size, but rounded up to
-some reasonable number (8,16,32,64 bits, on typical machines).  Given
-that, I don't understand:
->>
-
-I thought we were trying to eliminate target dependencies here. What does
-"reasonable" mean? Is 8 bits reasonable on the old Alphas and the AMD 29K,
-for instance? I can see one compiler saying yes, and another no.
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 12:21 PM
-Subject: 	Re: streams for scalars
-
-<<Quite true.  I think that was probably a mistake -- we *should* have
-tried for that more ambitious goal.
->>
-
-I disagree; the current spec allows compilers to do the reasonable thing.
-If target independent forms are desirable (like the GNAT XDR 
-implementation), they are consistent with the current RM, and can
-be agreed on separately.
-
-There is nothing in the RM that stops compilers from doing the right thing
-for streams. I note again that the entire problem has arisen here
-because of compilers simply making poor implementation choices.
-
-It seems obvious to me that
-
-  	type x is range - 128 .. +127;
-
-should result in one-byte stream elements.
-
-Since the AI we are discussing only affects implementation advice, it has
-no force at all. Compilers are as free before or after this AI to do
-the right or wrong thing.
-
-It also seems clear that
-
-	type x is range 0 .. 255;
-	for x'size use 8;
-
-should result in one-byte stream elements, but due to what in retrospect
-I consider an error in the GNAT implementation, it does not (and it also
-does not in IM).
-
-In fact right now, this type gives 2-byte elements for GNAT and 4-byte
-elements for IM.
-
-I am thinking of changing the GNAT implementation, but am a bit worried about
-incompatibilities with existing data. The decision of whether or not to make
-this change will not be influenced at all by an AI which does or does not
-change the implementation advice. We always ignore IA if we think it is wrong,
-which does not often happen, but there are cases.
-
-For example, the IA says not to provide pragmas that affect legality, but
-we have found LOTS of useful pragmas that disobey this, e.g. 
-Unchecked_Union!
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Sunday, May 17, 1998 12:49 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-> I thought we were trying to eliminate target dependencies here. What does
-> "reasonable" mean? Is 8 bits reasonable on the old Alphas and the AMD 29K,
-> for instance? I can see one compiler saying yes, and another no.
-
-"Reasonable" on a typical 32- or 64-bit machine should mean 8,16,32, and
-64 bits.  So yes, I think 8 bits is "reasonable" on an old alpha.  (I
-don't know anything about the AMD chip.)  All of this stuff is defined
-in terms of stream elements, which are impl def, but I presume all
-impl's on "typical" machines will choose 8 bit stream elements.
-
-- Bob
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Sunday, May 17, 1998 1:13 PM
-Subject: 	Re: streams for scalars
-
-> <<Quite true.  I think that was probably a mistake -- we *should* have
-> tried for that more ambitious goal.
-> >>
-> 
-> I disagree; the current spec allows compilers to do the reasonable thing.
-> If target independent forms are desirable (like the GNAT XDR 
-> implementation), they are consistent with the current RM, and can
-> be agreed on separately.
-
-But no user can portably count on having XDR support on all Ada
-implementations.
-
-> There is nothing in the RM that stops compilers from doing the right thing
-> for streams. I note again that the entire problem has arisen here
-> because of compilers simply making poor implementation choices.
-> 
-> It seems obvious to me that
-> 
->   	type x is range - 128 .. +127;
-> 
-> should result in one-byte stream elements.
-
-Agreed.  (On the other hand, you and I probably disagree on what should
-be the base range of x.  That would be fine, if streams weren't so
-closely tied to in-memory representations.)
-
-> Since the AI we are discussing only affects implementation advice, it has
-> no force at all. Compilers are as free before or after this AI to do
-> the right or wrong thing.
-> 
-> It also seems clear that
-> 
-> 	type x is range 0 .. 255;
-> 	for x'size use 8;
-> 
-> should result in one-byte stream elements, but due to what in retrospect
-> I consider an error in the GNAT implementation, it does not (and it also
-> does not in IM).
-
-I agree that the above should result in one-byte stream elements.  But I
-think that's currently forbidden by the RM.  Not just by the impl
-advice, but by the fact that we require the ability to read/write -1 of
-that type to a stream.  Even if you ignore the impl advice, you still
-need at least 9 bits.
-
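-A short sketch of why: the Item formal of the predefined 'Write is of the
-base type (the italicized T), so a call like the one below is legal, and
-the stream representation has to be able to hold the value (the stream is
-left as a parameter here because any stream would do):
-
-   with Ada.Streams;
-   procedure Base_Range_Sketch
-     (S : access Ada.Streams.Root_Stream_Type'Class)
-   is
-      type X is range 0 .. 255;
-      for X'Size use 8;
-   begin
-      --  Legal: the formal of X'Write is of X'Base, whose range must
-      --  include at least -255 .. 255, so -1 is an acceptable argument.
-      --  Writing it forces the representation to carry at least 9 bits
-      --  under the current rules.
-      X'Write (S, -1);
-   end Base_Range_Sketch;
-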
-> For example, the IA says not to provide pragmas that affect legality, ...
-
-That's not what it says, but I agree that the pragma advice is
-questionable.
-
-- Bob
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 3:23 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<"Reasonable" on a typical 32- or 64-bit machine should mean 8,16,32, and
-64 bits.  So yes, I think 8 bits is "reasonable" on an old alpha.  (I
-don't know anything about the AMD chip.)  All of this stuff is defined
-in terms of stream elements, which are impl def, but I presume all
-impl's on "typical" machines will choose 8 bit stream elements.
->>
-
-The reason that this is questionable on the alpha is that the object
-size is probably 32 bits, since this machine has no 8-bit load/store
-instructions. So the old Alpha is NOT a typical 32-bit or 64-bit machine
-(all the others, *and* the new alpha, do have byte load/store instructions).
-
-The AMD29K is exactly like the old Alpha, byte addressable, but no
-byte load/store instructions.
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 3:26 PM
-Subject: 	Re: streams for scalars
-
-<<I agree that the above should result in one-byte stream elements.  But I
-think that's currently forbidden by the RM.  Not just by the impl
-advice, but by the fact that we require the ability to read/write -1 of
-that type to a stream.  Even if you ignore the impl advice, you still
-need at least 9 bits.
->>
-
-Oh, of course, so yes, absolutely this does require two bytes and that
-is the end of it. In fact I now don't understand at all.
-
-If we are required to be able to write any element of the base type
-(why is this so?....) then of course you need to use the base range.
-
-Or is the intent to change the language here?
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 9:05 PM
-Subject: 	Re: streams for scalars
-
-I find it too big a change in the language to disallow values outside
-the declared range (but in the base range) if they are currently allowed.
-
-Note that if people write type x is mod 2**8, they will presumably get
-8 bits on all architectures, so there is a path to interoperability.
-
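-A tiny sketch of that path (the type name is arbitrary, and the stream is
-left as a parameter):
-
-   with Ada.Streams;
-   procedure Octet_Sketch
-     (S : access Ada.Streams.Root_Stream_Type'Class)
-   is
-      type Octet is mod 2**8;
-   begin
-      --  The expectation in this thread is that compilers write exactly
-      --  one 8-bit stream element for such a type, so data written this
-      --  way travels between the implementations discussed here unchanged
-      --  (a single octet has no byte order to worry about).
-      Octet'Write (S, 16#AB#);
-   end Octet_Sketch;
-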
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 9:03 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<Yes, I understand how the old alpha works, and that would definitely
-affect my decisions about how to represent in-memory variables, but my
-claim is that it shouldn't have such a strong effect on stream
-representations.  I realize this isn't what the RM says.  I'm not sure
-if I'm suggesting a language change, or merely wishing we had done
-differently in the past.
->>
-
-What about the Patriot, with its 24-bit words, does your attitude change
-here, or is the fact that 24 = 3 * 8 enough to convince you to use
-8-bit stream elements (this seems to be the reasoning for the old
-Alpha, right, this is a 32-bit word machine, but 32 = 4 * 8).
-Presumably on a 36-bit machine you would not expect 8-bit stream
-elements (this is a reminder that short of things like XDR, you
-cannot guarantee inter-architecture commonality). Also what is your
-position on endianness (both bit and byte)?
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Sunday, May 17, 1998 9:05 PM
-Subject: 	Re: streams for scalars
-
-> Oh, of course, so yes, absolutely this does require two bytes and that
-> is the end of it. In fact I now don't understand at all.
-> 
-> If we are required to be able to write any element of the base type
-> (why is this so?....) then of course you need to use the base range.
-   ^^^^^^^^^^^^^^^^^^^
-Mea Culpa, at least in part.
-
-> Or is the intent to change the language here?
-
-I think that's the intent of at least some folks in this discussion!
-
-I'm inclined to agree, but I'm not sure, and unfortunately my main
-feeling when faced with all these stream-related issues is to throw up
-my hands in despair.  Sigh.
-
-- Bob
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Sunday, May 17, 1998 9:02 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-> The reason that this is questionable on the alpha is that the object
-> size is probably 32 bits, since this machine has no 8-bit load/store
-> instructions. So the old Alpha is NOT a typical 32-bit or 64-bit machine
-> (all the others, *and* the new alpha, do have byte load/store instructions).
-
-Yes, I understand how the old alpha works, and that would definitely
-affect my decisions about how to represent in-memory variables, but my
-claim is that it shouldn't have such a strong effect on stream
-representations.  I realize this isn't what the RM says.  I'm not sure
-if I'm suggesting a language change, or merely wishing we had done
-differently in the past.
-
-- Bob
-
-****************************************************************
-
-From: 	Robert A Duff[SMTP:bobduff@world.std.com]
-Sent: 	Sunday, May 17, 1998 9:25 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-> What about the Patriot, with its 24-bit words, does your attitude change
-> here, ...
-
-Well, I'm afraid I can't comment on the Patriot, since I don't know the
-machine (despite my current employer!).  I know it's got 24-bit words,
-but I don't know if addresses point at 24-bit words, or 8-bit storage
-units, 3-per-word.  Anyway, who cares?  There are no Patriot machines on
-the networks that most folks care about.
-
->...or is the fact that 24 = 3 * 8 enough to convince you to use
-> 8-bit stream elements (this seems to be the reasoning for the old
-> Alpha, right, this is a 32-bit word machine, but 32 = 4 * 8).
-
-The old alpha can address 8-bit bytes (although it can't load and store
-them, directly).
-
-> Presumably on a 36-bit machine you would not expect 8-bit stream
-> elements (this is a reminder that short of things like XDR, you
-> cannot guarantee inter-architecture commonality).
-
-I don't know.  I would like to at least have the option of inter-arch
-commonality.  Short of that, I'd like commonality among 16/32/64-bit
-machines, which isn't as hard to achieve.  I suppose on a network of
-PDP-10's (that's the only 36-bit machine I know) the "natural" stream
-element would be 36 bits.  On the other hand, one might use 8 bits, just
-to "fit in".  On the third hand, I haven't seen any PDP-10's lately.
-
->...Also what is your
-> position on endianness (both bit and byte)?
-
-Good question.  ;-)
-
-My position is that bit- and byte-endianness need to be the same.
-(Isn't that what you say in your microprocessor book -- you gripe about
-the confusion of 68000, but it's really a documentation issue?)  And my
-other position is that it's a shame that we have both types of
-computers.  It's also a shame that some countries want you to drive on
-the right-hand side of the road, and others, on the left.  But there's
-not much one can do about it.  Sigh.
-
-What's *your* position?  It seems to be that commonality is hopeless, so
-let's not bother.  Is that a correct reading?
-
-- Bob
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 9:27 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<What's *your* position?  It seems to be that commonality is hopeless, so
-let's not bother.  Is that a correct reading?
->>
-
-You need different levels of commonality for different purposes:
-
-1. same compiler, different but similar machines
-2. different compilers, different but similar machines
-3. different compilers, same machines
-4. very different machines
-
-is an approximate categorization.
-
-I think we can come up with disciplines that make sense in each case, but
-no one protocol can solve all these problems, since there is a trade-off
-between efficiency and portability. 
-
-The XDR solution in GNAT is a solution to 4, but is too inefficient to use
-for some of the other cases.
-
-I would not be opposed to some secondary standards here.
-
-But I don't see how the RM can solve all these problems.
-
-I do really like the way GNAT does things, which is to have a single,
-easily replaceable library unit that describes the format used for all
-primitive types. Inlining from this unit avoids excessive inefficiency
-from using a library routine.
-
-This means that GNAT can trivially be adapted *by a user* to meet any
-variation in protocol that you want.
-
-(although the base type issue is of course above this level)
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Sunday, May 17, 1998 9:28 PM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<The old alpha can address 8-bit bytes
->>
-
-No, it can't.  When you address memory on the alpha, you are using the
-first 30 bits of the address as a word address, and for certain instructions,
-but not for loads and stores, the low-order 2 bits can select one of the
-four bytes in a register.
-
-But I cannot see by what stretch of the imagination you could talk about 
-the old Alpha having byte addressability.
-
-****************************************************************
-
-From: 	Robert I. Eachus[SMTP:eachus@mitre.org]
-Sent: 	Monday, May 18, 1998 6:06 AM
-Subject: 	Re: streams for scalars
-
-At 01:21 PM 5/17/98 EDT, Robert Dewar wrote:
->Since the AI we are discussing only affects implementation advice, it has
->no force at all. Compilers are as free before or after this AI to do
->the right or wrong thing.
-
-    I think that an AI is needed, but otherwise I agree with Robert Dewar.
-The AI is required to define which exceptions are raised by default 'Read
-and 'Write operations, and under what conditions.  I certainly think that
-given
-
-   type Foo is range 1..10;
-
-   Foo'Read should raise Constraint_Error if it reads a value of 11.
-Foo'Base'Read should not do constraint checks, but Foo'Base'Write should
-raise Constraint_Error when confronted with a value outside the base range
-of Foo, and Foo'Write should raise Constraint_Error for values outside the
-subtype.
-
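-   A rough sketch of the distinction being proposed, written as an explicit
-wrapper (the names are invented for illustration only, and this is just one
-way the checks could be realized):
-
-   with Ada.Streams;
-   procedure Checked_Read_Sketch
-     (S : access Ada.Streams.Root_Stream_Type'Class)
-   is
-      type Foo is range 1 .. 10;
-      V : Foo;
-
-      procedure Checked_Read
-        (Stream : access Ada.Streams.Root_Stream_Type'Class;
-         Item   : out Foo)
-      is
-         Raw : Foo'Base;
-      begin
-         Foo'Base'Read (Stream, Raw);   -- no check against 1 .. 10
-         if Raw not in Foo then
-            raise Constraint_Error;     -- e.g. a stored value of 11
-         end if;
-         Item := Raw;
-      end Checked_Read;
-
-   begin
-      Checked_Read (S, V);
-   end Checked_Read_Sketch;
-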
-   Note that even though these are all expressed as advice, since the only
-conditions where they can be checked border on pathological, the AI is
-needed to allow implementations to "do the right thing" for
-
-   type Byte is range 0..255;
-
-   even though I think that type should be defined as:
-
-   type Byte is mod 256;
-
-
-
-                                        Robert I. Eachus
-
-with Standard_Disclaimer;
-use  Standard_Disclaimer;
-function Message (Text: in Clever_Ideas) return Better_Ideas is...
-
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Monday, May 18, 1998 6:40 AM
-Subject: 	Re: streams for scalars
-
-Robert Eachus says
-
-<<   Foo'Read should raise Constraint_Error if it reads a value of 11.
-Foo'Base'Read should not do constraint checks, but Foo'Base'Write should
-raise Constraint_Error when confronted with a value outside the base range
-of Foo, and Foo'Write should raise Constraint_Error for values outside the
-subtype.
-
-   Note that even though these are all expressed as advice, since the only
-conditions where they can be checked border on pathological, the AI is
-needed to allow implementations to "do the right thing" for
->>
-
-Nothing pathological at all about the test programs in question; it is
-easy to do a write with an "out of range" value. Remember, we are talking
-*values* here. 
-
-This is a definite change in the language as far as I can tell.
-
-****************************************************************
-
-From: 	ncohen@us.ibm.com[SMTP:ncohen@us.ibm.com]
-Sent: 	Monday, May 18, 1998 9:34 AM
-Subject: 	Re: more on the streams AI
-
-Robert Dewar writes:
-
-<<I strongly object to the 24-bit case generating three bytes for an object
-with four-byte alignment. This potentially introduces a *third* concept of
-size and is very confusing.>>
-
-I strongly object to enforcing alignment within streams.  The most common
-streams correspond to communication channels (typically sockets) and files.
-There is certainly no hardware reason for enforcing alignment in these
-contexts.  However, in the case of communication channels, it is frequently
-important to conserve bandwidth, so stuffing gratuitous padding bytes into
-the stream is the last thing we want to do.  (I am thinking especially of
-distributed applications that involve mobile devices, connected by slow and
-wireless links with high per-byte costs.)
-
--- 
-
-****************************************************************
-
-From: 	Robert Dewar[SMTP:dewar@gnat.com]
-Sent: 	Monday, May 18, 1998 9:48 AM
-Subject: 	Re: more on the streams AI
-
-<<I strongly object to enforcing alignment within streams.  The most common
-streams correspond to communication channels (typically sockets) and files.
-There is certainly no hardware reason for enforcing alignment in these
-contexts.  However, in the case of communication channels, it is frequently
-important to conserve bandwidth, so stuffing gratuitous padding bytes into
-the stream is the last thing we want to do.  (I am thinking especially of
-distributed applications that involve mobile devices, connected by slow and
-wireless links with high per-byte costs.)
->>
-
-I am not talking about enforcing alignment for streams; I am talking about
-stream elements being related to the size of the object as stored.
-
-If we have type x is range 0 .. 2**21-1;
-
-then typically x'size is 22 and x'object_size will be 32.
-
-All current compilers will generate 32-bit stream entries for this
-(note that C does not even begin to have an option to generate anything
-other than 32 bits in a stream for such a type).
-
-But it appears that some are trying to create the concept of a stream size
-different from either the Value_Size or the Object_Size (i.e. 24 in this case).
-
-I find that confusing.
-
-To understand clearly, the proposal is that the stream size be the
-size of the type, rounded up to an integral multiple of stream elements,
-right?
-
-I much prefer the size of stream elements being based on the size of
-objects of the first known subtype (although we have not really solved
-how to do that).
-
-These, then, are the three proposals:
-
-1. Size = size of base type increased to natural boundary (8,16,32,64)
-
-2. Size = size of objects of first named subtype 
-
-3. Size of first named subtype rounded up to the nearest multiple of
-stream elements.
-
-I could tolerate 1 or 2 in GNAT (we currently do 1, but would consider
-changing to 2), but 3 is a radical change that we would oppose strongly.
-
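-For concreteness, here is the 21-bit example above annotated with what each
-proposal would give on a byte-oriented machine with a 32-bit word; the
-numbers assume the sizes quoted above and are only illustrative:
-
-   with Ada.Text_IO; use Ada.Text_IO;
-   procedure Proposal_Sizes_Sketch is
-      type X is range 0 .. 2**21 - 1;
-   begin
-      Put_Line ("X'Size      =" & Integer'Image (X'Size));       -- reported above as 22
-      Put_Line ("X'Base'Size =" & Integer'Image (X'Base'Size));  -- typically 32
-      --  Proposal 1: base type size rounded to a natural boundary
-      --              -> 32 bits (4 stream elements)
-      --  Proposal 2: size of objects of the first named subtype
-      --              -> typically 32 bits (4 stream elements)
-      --  Proposal 3: first named subtype's 'Size rounded up to a whole
-      --              number of stream elements: 22 -> 24 bits
-      --              (3 stream elements)
-   end Proposal_Sizes_Sketch;
-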
-****************************************************************
-
-From: 	ncohen@us.ibm.com[SMTP:ncohen@us.ibm.com]
-Sent: 	Monday, May 18, 1998 10:51 AM
-Subject: 	Re: AI-195 redux; stream rep for scalars
-
-<<Sure, but for the types that Java defines, any reasonable Ada compiler
-should do the right thing, and if you want endianness independence (does
-Java guarantee this for stream stuff?), then you can of course implement
-this (we use XDR in GNAT to achieve a much greater degree of target
-independence than Java, e.g. we do not need to depend on IEEE
-floating-point representation).>>
-
-Yes, the methods of java.io.DataInputStream/java.io.DataOutputStream are
-specified to read/write numeric data in big-endian format.  The default
-readObject and writeObject methods of java.io.Serializable use these
-methods to write primitive data.
-
-It is important to remember that Ada and Java have different philosophical
-bases.  The Ada philosophy is to support portability to the extent possible
-while providing efficient direct access to the underlying hardware.  It is
-recognized that a given type will have different representations on
-different machines.  The Java philosophy is to specify a virtual machine
-and to mandate how data is represented on that machine.
-
-The Java world view is that the "real data" in a distributed application
-consists not of bit patterns, but of Java objects; serialization of object
-graphs into bit patterns is a necessary evil to pump objects through
-communication channels and to interoperate with legacy non-Java code.
-
-The Ada view (or at least my view of the Ada view) is that if several
-programs are to use a common stream representation for data, that common
-representation must be specified (e.g. XDR), and each program must
-explicitly override the stream-oriented attributes to map its own
-target-dependent representation of the data to and from that common stream
-representation.  One cannot generally depend on the default 'Write and
-'Read to map to and from this common representation.  (Thus I believe that
-all this attention to the DEFAULT behavior of the stream attributes is a
-tempest in a teapot.  Any program that is writing to a stream with a
-required format will override the stream attributes to achieve that
-format.)
-
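-As a sketch of what such an override looks like in Ada 95 (the two-octet
-big-endian layout below is just one example of "a specified common
-representation", not XDR itself, and a matching Read_Counter declared the
-same way would do the reverse):
-
-   with Ada.Streams;
-   package Portable_Counter is
-
-      type Counter is range 0 .. 2**16 - 1;
-
-      --  The formal is of the base type, following the "italicized T"
-      --  profile of 'Write discussed in this AI.
-      procedure Write_Counter
-        (Stream : access Ada.Streams.Root_Stream_Type'Class;
-         Item   : in Counter'Base);
-
-      for Counter'Write use Write_Counter;
-
-   end Portable_Counter;
-
-   package body Portable_Counter is
-
-      procedure Write_Counter
-        (Stream : access Ada.Streams.Root_Stream_Type'Class;
-         Item   : in Counter'Base)
-      is
-         Buffer : Ada.Streams.Stream_Element_Array (1 .. 2);
-      begin
-         --  Fixed big-endian, two-octet layout, independent of the
-         --  compiler's default representation (assumes 8-bit stream
-         --  elements and a value in 0 .. 2**16 - 1; anything else
-         --  fails the conversion checks below).
-         Buffer (1) := Ada.Streams.Stream_Element (Item / 256);
-         Buffer (2) := Ada.Streams.Stream_Element (Item mod 256);
-         Ada.Streams.Write (Stream.all, Buffer);
-      end Write_Counter;
-
-   end Portable_Counter;
-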
-In Java, part of the contract of readObject is that it be able to read from
-object streams written by writeObject, and reconstitute the data.  This
-contract applies both to the default methods and to overriding versions
-written by application programmers.  The current (JDK 1.1.x) documentation
-does not specify the format of object representations in the stream because
-the output of writeObject is meant to be read only by readObject, not by
-other, non-Java applications.*  JavaSoft reluctantly recognized that it
-would be necessary to allow independent implementors to build their own
-JREs (JRE = Java Runtime Environment = JVM + class libraries) without
-licensing the Sun code, and thus it would be necessary to specify the
-default serialized stream representation exactly.  JDK 1.2 specifies a
-"serialization protocol" and an API providing primitives for reading and
-writing objects in accordance with this protocol.
-
--- 
-
-*--The default format includes a lot of self-describing data.  This allows
-an object of an unknown subclass to be read from the stream and
-reassembled, in the manner of T'Class'Input.  It also allows the code for a
-class's methods to be downloaded from a remote machine if that code was not
-previously available locally. The specification of object fields by name
-rather than offset also allows the reading of objects whose classes have
-evolved through the addition of new fields since the objects were written:
-The readObject method for the new version recognizes the old version and
-fills in the missing data in some appropriate way; the readObject method
-for the old version is oblivious to the presence of another field in the
-new version.  (Chapter 13 of the Java Language Specification is
-specifically devoted to the ways in which a package, interface, or class
-may evolve while preserving compatibility with previous versions.)
-
-****************************************************************
-

Questions? Ask the ACAA Technical Agent