Version 1.3 of ai05s/ai05-0161-1.txt

!standard 13.12.1(4/2)          09-07-25 AI05-0161-1/02
!standard 13.13.2(40/2)
!standard 13.13.2(52/2)
!class Amendment 09-07-08
!status work item 09-07-08
!status received 09-06-14
!priority Medium
!difficulty Medium
!subject Restrictions for default stream attributes of elementary types
!summary
The following restriction identifier exists:
No_Default_Stream_Attributes
!problem
Streams can be used to reliably interchange data between programs that need not be compiled with the same implementation, or with programs written in other programming languages.
However, the stream representation of elementary types is implementation defined. To ensure portability, it is necessary to make sure that the stream attributes of all elementary types involved are user-defined. This restriction ensures that it is not possible to use (directly or through composition) a stream attribute whose specification has been inadvertently omitted.
!proposal
(See wording.)
!wording
Add after 13.12.1(4/2):
No_Default_Stream_Attributes
The default implementation of stream-oriented attributes for elementary types is never available.
Replace 13.13.2(40/2) with:
T is nonlimited and no pragma Restriction (No_Default_Stream_Attributes) applies to the partition.
Add to 13.13.2(52/2), after "a limited type ... attributes":
An elementary type which is not an access type supports external streaming, unless a pragma Restriction (No_Default_Stream_Attributes) applies to the partition and its Read and Write attributes are not available.
!discussion
This proposal builds upon the notion of availability of stream-oriented attributes because it seemed the simplest way to make sure that the composition rules work properly. This is basically what happens with limited types. With this approach, we could even decide that the pragma need not be partition-wide, since the default implementation still exists (it is just not available where the pragma applies).
The restriction is deliberately limited to elementary types. There is no reason to restrict composite types, where proper behaviour is ensured by the composition rules as long as the attributes of all component types are available. However, this does not show in the restriction name; should it be No_Default_Elementary_Stream_Attributes, or even No_Default_Elementary_Type_Stream_Attributes? That looks like a mouthful.
The rule applies to all stream-oriented attributes, and therefore includes S'Stream_Size. It does not seem to make sense to access S'Stream_Size if the corresponding S'Read and S'Write are not available.
There is an issue with generic formal types. In the body of a generic (to which the restriction applies), the usual assume-the-worst rule would forbid the use of any stream-oriented attribute of a formal elementary type (including its implicit use in the default implementation of a stream-oriented attribute of a composite type with a subcomponent of such a type). This problem did not arise for limited types, since being limited is part of the contract. Some possibilities are:
- Accept the fact. For direct use of a stream attribute of a formal
elementary type, there is an escape, since the user can call the attribute through a renaming declared in the specification of the package (including the private part, with the usual boilerplate); a sketch of this appears after this list. For the subcomponent case, the composite type itself should be declared in the specification (which can drag a lot of other stuff into the specification), with similar renamings of its stream attributes. This trick would not be possible for generic subprograms (no big deal in my view), and it puts all the burden on the user.
- Introduce some strange rule that the attribute is available in the
generic body if there is something (like a renaming) that requires its availability in the specification.
- Introduce a pragma Require_Stream_Availability(S), to be given in the
formal part.
- Make the check a post-compilation rule. Isn't that what 13.12(8.2/1)
is about?
- Choose a different approach, and rather than making the default
attributes unavailable, change the default implementation to raise Program_Error. But a restriction is intended to check things at compile time...
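As an illustration of the first option, here is a rough sketch of the renaming escape (the generic and all names are purely illustrative, not proposed wording):

   with Ada.Streams;
   generic
      type Elem is range <>;   --  a formal elementary type
   package Portable_IO is
      --  The renaming is checked in the specification, so the availability
      --  of Elem'Write is rechecked at each instantiation, where the actual
      --  type's attributes are known.  The generic body then calls
      --  Write_Elem rather than naming Elem'Write directly.
      procedure Write_Elem
        (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
         Item   : in Elem)
        renames Elem'Write;
   end Portable_IO;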
!example
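The following sketch shows the intended use (the names are illustrative, and bodies are not shown):

   pragma Restrictions (No_Default_Stream_Attributes);  --  configuration pragma

   with Ada.Streams;
   package Sensor_Messages is

      type Reading is digits 6;

      --  User-specified attributes, so Reading can be streamed portably
      --  even though the restriction is in force:
      procedure Read_Reading
        (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
         Item   : out Reading);
      procedure Write_Reading
        (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
         Item   : in Reading);
      for Reading'Read  use Read_Reading;
      for Reading'Write use Write_Reading;

      type Count is range 0 .. 10_000;
      --  No Read or Write specified for Count: under the restriction its
      --  default attributes are not available, so Count'Write (S, C) is
      --  illegal, and a record type with a Count component does not support
      --  external streaming either.

   end Sensor_Messages;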
--!corrigendum H.4(23)
!ACATS test
B-Tests should be constructed to verify that this restriction is enforced.
!appendix

Editor's note:

The original proposal for this restriction did not suggest where it would be
added to the Standard. I rather arbitrarily picked H.4(23). [That was version /01
of this AI.]

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, July 23, 2009  3:41 PM

Attached is my homework from Brest. [This is version /02 of the AI - ED]

I thought it was simple until I considered generics (sigh).
There are some open issues in the !discussion section; opinions welcome.

While looking at this section, it made me think of a related question:
what is the value of S'Stream_Size if it is not user-specified, while S'Write
has been user-specified?  13.13.2(1.6/2) does not say that it corresponds to
the default implementation. It seems that S'Stream_Size is intended to be used
to specify a size while using the default implementation of S'Write, but that
the other way round is not defined.
Is it worth another AI?

****************************************************************

From: Randy Brukardt
Sent: Thursday, October 15, 2009  8:08 PM

I don't see any problem. 13.13.2(1.6/2) says nothing about Read or Write.
The "minimum number of stream elements required by the first subtype" is
well-defined no matter how those attributes are defined. Stream_Size is of
course used in the default Read and Write attributes, but obviously does not
have any effect on user-defined attributes. Moreover, Stream_Size could be
useful in contexts other than the attributes (such as determining the
appropriate size for an Unchecked_Conversion to stream elements), so I think
this is a good thing.

I suppose one could have a user note saying that "Stream_Size is meaningless
for user-defined Read and Write attributes", but that seems obvious and clearly
follows from the definition. It seems more like something to mention in textbooks
and tutorials to me. So I don't think we need to do anything (and surely not an AI).

****************************************************************

From: Steve Baird
Sent: Wednesday, July 21, 2010  3:20 PM

I see some problems with this one.

The restriction (deliberately) does not include the text
   "This restriction applies only to the current compilation or
    environment, not the entire partition."
that some of the other predefined restrictions in 13.12.1 have.

Unfortunately, this means that we have legality rules that depend on information
that is not known when legality checking is performed.

This comes up in two cases -
   13.13.2(49/2) says
     An attribute_reference for one of the stream-oriented attributes
     is illegal unless the attribute is available at the place of the
     attribute_reference.

but the AI changes the definition of "available" to depend on the restriction:

   Replace 13.13.2(40/2) with:

     T is nonlimited and no pragma Restriction
       (No_Default_Stream_Attributes) applies to the partition.

There is a similar situation with "supports external streaming"
and the legality checking for pragma Remote_Types.

This is not how we do things. The wording needs to define an explicit
post-compilation check that needs to be performed. If an implementation then
wanted to detect a restriction violation at compile-time in cases where this
makes sense, it could of course do that.

There is no notion in the language of any implicit post-compilation rechecking
of legality rules based on additional information that was not available
earlier. Post-compilation checks need to be explicitly defined.

The particular post-compilation check that is needed here is not trivial to
define. We want to prevent uses of Stream attributes for "bad" (i.e. no
user-defined stream attributes) elementary types, including implicit uses
associated with streaming attributes of types enclosing such elementary types.

Consider, for example, someone who instantiates a container generic with
Standard.Integer. This user has no plans to stream any Integers, nor to stream
any containers containing Integers. The Container generic, however, doesn't know
that and perhaps explicitly defines stream attributes for the container type
which, in the logically expanded instance, contain calls to stream operations
for Standard.Integer. If there are no calls to the streaming operations for the
container type, should this be treated as a restriction violation? Either answer
introduces problems. "Yes" means that the container generics (and similar
user-defined abstractions) are likely to be unusable if this restriction is in
force. "No" introduces a lot of definitional and implementation complexity.

And what about classwide streaming operations for tagged types?

We have a tagged type T1 for which streaming is well-defined.
An extension, T2, adds a component of type Standard.Integer.
What are the rules concerning T1'Class'Some_Stream_Attribute ?
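For concreteness, the declarations in question might look like this (all names here are just illustrative):

   with Ada.Streams;
   package Classwide_Case is

      type T1 is tagged record
         C : Character;
      end record;

      --  Null body only to keep the sketch self-contained; any
      --  user-defined implementation would do.
      procedure T1_Write
        (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
         Item   : in T1) is null;
      for T1'Write use T1_Write;    --  streaming of T1 is well-defined

      type T2 is new T1 with record
         N : Integer;               --  no user-defined attributes for Integer
      end record;

      --  T1'Class'Write on an object whose tag is T2'Tag will, by default,
      --  stream the extension component N using Integer'Write.  Is that a
      --  restriction violation, and where would it be detected?

   end Classwide_Case;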

P.S. If we were going to somehow go with the existing approach given in the AI,
the definition of "available" still seems wrong. Consider an array type whose
element type is Standard.Integer. As I read it, it has "available" stream
attributes even if the restriction is in force. No point in worrying about this
until after deciding whether we are going to have to rewrite (or abandon) the
whole AI.

****************************************************************

From: Tucker Taft
Sent: Wednesday, July 21, 2010  3:40 PM

I don't like the idea of restrictions cropping up in random parts of the manual.
The restriction should if at all possible be fully defined where the restriction
identifier is introduced.  Is there a special reason why this one deserves to be
different? I believe we should stick with post-compilation checks only, even if
that means slightly altering the semantics of the restriction.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, July 21, 2010  4:07 PM

> I don't like the idea of restrictions cropping up in random parts of
> the manual.

I agree, but note that there already are restrictions that are highly
distributed. The obvious example is "needs finalization", a term used only for a
restriction and defined all over the Standard.

> The restriction should if at all
> possible be fully defined where the restriction identifier is
> introduced.  Is there a special reason why this one deserves to be
> different?

My guess is that it is because it seemed reasonable to tie it to "availability".
If we can't do that, we have to duplicate all of the rules involved - and there
are many.

> I believe we should stick with post-compilation checks only, even if
> that means slightly altering the semantics of the restriction.

I tend to agree, but I've come to the conclusion that this restriction isn't
worth the massive effort needed to define it as a post-compilation rule.

The main reason is the example that Steve gives about instantiations of the
containers. But I think it is much worse than just cases with particular element
types.

The implementation of a container will typically include user-defined stream
attributes in order to meet the requirements for streaming of containers.
(Streaming could be handled outside of the Ada language, of course, but our
intent has always been that the containers can and usually will be implemented
in regular Ada code.) Those user-defined stream attributes probably will include
streaming entities like the container length; that is going to require streaming
using the default implementation of type Containers.Count (a type whose stream
attributes can't be overridden as it is defined in a language-defined package).
Unless we want to go to incredible lengths to allow this (remember we are
talking about a call in a random user-defined function or procedure), this
restriction will prevent the use of any containers.

Similarly, many language-defined types may be implemented with user-defined
stream attributes having such components. Even Text_IO could be implemented by
streaming characters.

The net effect is that the use of this restriction would ban the use of many of
the language-defined packages, and exactly which ones would be
implementation-defined. The only way to avoid this effect would be to prescribe
implementations for many of the packages (thou shalt not stream an elementary
type in Text_IO), which clearly would bulk up this restriction even further.

The effort to do all of this just cannot be worth the relatively limited use
case.

The current version which uses a legality rule does not suffer from that,
because what happens in withed package bodies is irrelevant. (I'm assuming that
the rules are altered so that the restriction applies only to a single unit,
since otherwise a legality rule is out of the question.) Unfortunately, that
doesn't seem to fit the need very well.

So I'm leaning toward killing this one. (The author - Jean-Pierre, I think - is
free to try to save it by proving me wrong in some important way. But that will
require detailed wording which addresses Steve's and Tuck's and my concerns.)

****************************************************************

From: Tucker Taft
Sent: Wednesday, July 21, 2010  5:15 PM

I would *much* rather address this issue by being able to provide user-defined
implementations of the default elementary streaming attributes.  I have designed
such a capability many times, but just never got around to proposing it.

The basic idea is to define an extension of Root_Stream_Type which has
additional visible primitive operations for streaming the various sizes of
integers and floats.  If a stream object were of a type derived from this
Extended_Root_Stream_Type, then the default implementations of elementary type
'Read and 'Write would call these primitives rather than coercing the values to
an array of stream elements and calling the stream-array Read and Write
operations. (Enum, fixed, and access types would call the Integer read/write
primitives.)

The underlying implementation would be straightforward, namely the
Root_Stream_Type would have these operations as private primitives, and the
Extended_Root_Stream_Type would simply make them visible so they could be
overridden.

You could then easily add a restriction to disallow creating an object of a
stream type which is *not* descended from Extended_Root_Stream_Type.

We could debate how Text_Streams would relate to this restriction.  Perhaps we
could provide a generic which would take a descendant of
Extended_Root_Stream_Type and export an extension of that type with the basic
stream_array Read and Write operations overridden to write to a {Wide_}*Text_IO
File_Type, but with the Integer and Float operations inherited, presuming they
are implemented in terms of a re-dispatch to the stream-array Read/Write.

To avoid implementation dependence in the spec for Extended_Root_Stream_Type on
the number of integer or float sizes supported by the implementation, we would
probably have four primitives for each, one for "normal" Integer/Float, one for
Longest_Integer/Float, one which takes an Integer/Float and a size in bits which
is used for types smaller than Integer/Float, and one which takes a
Longest_Integer/Float and a size in bits which is used for types larger than
Integer/Float but smaller than Longest_Integer/Float.  E.g.:

    procedure Write_Integer
     (S : in out Extended_Root_Stream_Type; I : Integer);
    procedure Write_Longest_Integer
     (S : in out Extended_Root_Stream_Type; I : Longest_Integer);
    procedure Write_Shorter_Integer
     (S : in out Extended_Root_Stream_Type;
      I : Integer;
      Size_In_Bits : Natural);
    procedure Write_Longer_Integer
     (S : in out Extended_Root_Stream_Type;
      I : Longest_Integer;
      Size_In_Bits : Natural);
    ...


Somewhat painful, but presumably these could be implemented in terms of a
re-dispatch to the stream-array Read/Write, and could be "inherited" into other
types in a generic, similar to what is proposed above for Text_Streams.
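To make the re-dispatch concrete, the default body could look something like the
following (purely illustrative; the free-standing procedure stands in for the
proposed primitive, and it assumes Integer'Size is a multiple of
Stream_Element'Size):

    with Ada.Streams;  use Ada.Streams;
    with Ada.Unchecked_Conversion;

    procedure Write_Integer_Example
      (S : in out Root_Stream_Type'Class; Item : Integer)
    is
       subtype Integer_Elements is Stream_Element_Array
         (1 .. Integer'Size / Stream_Element'Size);
       function To_Elements is
          new Ada.Unchecked_Conversion (Integer, Integer_Elements);
    begin
       --  The class-wide call dispatches to the concrete stream type's
       --  Write; this is the "re-dispatch to the stream-array Write".
       Write (S, To_Elements (Item));
    end Write_Integer_Example;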

There might be some more elegant way to accomplish this same thing using
interfaces, but I haven't bothered to think it through...

****************************************************************

From: Bob Duff
Sent: Wednesday, July 21, 2010  6:36 PM

> I would *much* rather address this issue by being able to provide
> user-defined implementations of the default elementary streaming
> attributes.  I have designed such a capability many times, but just
> never got around to proposing it.

I'm inlined to drop this AI, but if we keep it, the approach you outlined sounds
like a good one.

> You could then easily add a restriction to disallow creating an object
> of a stream type which is *not* descended from
> Extended_Root_Stream_Type.

Even without that restriction, the problem is basically solved by your idea.
The issue is that integers, enums, etc are many, and scattered all about.
Stream types, on the other hand, don't sprout up wildly like that.

> To avoid implementation dependence in the spec for
> Extended_Root_Stream_Type on the number of integer or float sizes
> supported by the implementation, ...

Is such avoidance desirable?

****************************************************************

From: Bob Duff
Sent: Wednesday, July 21, 2010  6:39 PM

> I'm inlined to drop this AI, but if we keep it, the approach you
> outlined sounds like a good one.

In case somebody thinks that typo means I am [getting] "in line to drop...", let
me say I meant "inclined to drop...".

****************************************************************

From: Randy Brukardt
Sent: Wednesday, July 21, 2010  6:54 PM

> > To avoid implementation dependence in the spec for
> > Extended_Root_Stream_Type on the number of integer or float sizes
> > supported by the implementation, ...
>
> Is such avoidance desirable?

Maybe not if you are a vendor that wants vendor lock-in, but otherwise I would
think that the interfaces need to be the same between compilers (so that Ada
code using a custom streaming scheme would be usable on multiple compilers).
Keep in mind that even compilers targeting the same target processor might
provide different numbers of integer and float types (one example: GNAT provides
64-bit integers on x86 targets and Janus/Ada does not [currently]). I don't see
any clear way to make such a requirement without a single portable
specification.

****************************************************************

From: Bob Duff
Sent: Wednesday, July 21, 2010  7:56 PM

> Maybe not if you are a vendor that wants vendor lock-in,

Not me.  I work for a vendor, as you know, but I try hard to avoid being biased
by that in my ARG role.

>...but otherwise I
> would think that the interfaces need to be the same between compilers
>(so  that Ada code using a custom streaming scheme would be usable on
>multiple  compilers).

I guess my question was, "Is this any worse than the fact that compilers can put
Long_Long_Ever_So_Long_Integer in package Standard (so-called)?"

>...Keep in mind that even compilers targeting the same target
>processor might provide different numbers of integer and float types (one
> example: GNAT provides 64-bit integers on x86 targets and Janus/Ada
>does not  [currently]). I don't see any clear way to make sure a
>requirement without a  single portable specification.

Well, we could require all compilers to support all of Short_Short_Integer,
Short_Integer, Integer, Long_Integer, and Long_Long_Integer, and allow some of
those to be the same size.  This is one case where C got it right and Ada got it
wrong.

By the way, I don't think there's any requirement that "is range A..B" match any
type in Standard.  That is, a compiler could support:

    type T is range 1.. 2**1000;

and have no type in Standard with that large a range.  I think.

Tucker's proposal mentions Longest_Integer, but it's not clear where it's
declared.  Putting it in Standard would be a no-no.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, July 21, 2010  8:26 PM

> >...but otherwise I
> > would think that the interfaces need to be the same between compilers
> >(so that Ada code using a custom streaming scheme would be usable on
> >multiple compilers).
>
> I guess my question was, "Is this any worse than the fact that
> compilers can put Long_Long_Ever_So_Long_Integer" in package Standard
> (so-called)?"

I would think it is, simply because there is no reason that you have to use
those types declared in package Standard. (Yes, you do need to avoid the names,
but that's easy.) Indeed, many style guides explicitly recommend avoiding using
the predefined types with "Short_" or "Long_" in their names because of
portability issues.

OTOH, if you implement this specification, you surely will have to implement all
of it for a particular compiler. There is no ignoring of
"Long_Long_Long_Long_Integer" simply because you don't use it.

> >...Keep in mind that even compilers targeting the same target
> >processor might provide different numbers of integer and float types
> >(one
> > example: GNAT provides 64-bit integers on x86 targets and Janus/Ada
> >does not  [currently]). I don't see any clear way to make sure a
> >requirement without a  single portable specification.
>
> Well, we could require all compilers to support all of
> Short_Short_Integer, Short_Integer, Integer, Long_Integer, and
> Long_Long_Integer, and allow some of those to be the same size.  This
> is one case where C got it right and Ada got it wrong.

We could, but that would be a fairly significant language change. And it still
wouldn't help for implementations that have Long_Long_Long_Integer (Janus/Ada
would have that should it ever support a 128-bit integer type). Surely we aren't
going to make that illegal.

> By the way, I don't think there's any requirement that "is range A..B"
> match any type in Standard.  That is, a compiler could support:
>
>     type T is range 1.. 2**1000;
>
> and have no type in Standard with that large a range.  I think.

I think you are right. Talking about Standard is a red herring. It's the
underlying types that the implementation supports that matter.

> Tucker's proposal mentions Longest_Integer, but it's not clear where
> it's declared.  Putting it in Standard would be a no-no.

I assumed he was talking about a conceptual type that doesn't actually exist. We
surely don't want *more* predefined types declared in Standard (or anywhere else
for that matter - declare your own if you need them). I presumed that the
conceptual type was for a type something like:
   type Longest_Integer is range System.Min_Int .. System.Max_Int;

But I'm still confused as to why the complex interface is needed at all for
integer types. Why not have a single

    procedure Write_Longer_Integer
     (S : in out Extended_Root_Stream_Type;
      I : Longest_Integer;
      Size_In_Bits : Natural);

since we need that to implement Stream_Size in any case. (Essentially, a routine
like this is the thing that underlies all of the discrete stream attributes in
Janus/Ada, although [for truth in advertising], [a] it is simplified by directly
writing to the stream when everything is static and "perfect"; [b] I never
actually got the stream attribute stuff to work right [a job I need to finish
ASAP]).

The problem seems to be most serious for floating point representations, where
the number supported is definitely implementation-defined and there is no
obvious relationship between the values.

My net takeaway is that this scheme isn't going to work unless we are willing to
put a maximum number on the floating point representations supported by an Ada
compiler -- and I don't think that will fly as an option.

****************************************************************

From: Tucker Taft
Sent: Wednesday, July 21, 2010  9:57 PM

> ...
>> Tucker's proposal mentions Longest_Integer, but it's not clear where
>> it's declared.  Putting it in Standard would be a no-no.
>
> I assumed he was talking about a conceptual type that doesn't actually
> exist. We surely don't want *more* predefined types declared in
> Standard (or anywhere else for that matter - declare your own if you
> need them). I presumed that the conceptual type was for a type something like:
>    type Longest_Integer is range System.Min_Int .. System.Max_Int;

Exactly right.

>
> But I'm still confused as to why the complex interface is needed at
> all for integer types. Why not have a single
>
>     procedure Write_Longer_Integer
>      (S : in out Extended_Root_Stream_Type;
>       I : Longest_Integer;
>       Size_In_Bits : Natural);
>
> since we need that to implement Stream_Size in any case.

You are right this is all that you really *need*.  But if you care about
efficiency at all, you would like to have some other primitives to choose from,
rather than having to convert everything to 64- or 128-bit integers just to
ultimately put out a simple 16- or 32-bit integer.  Perhaps I am prematurely
optimizing, but I think that some applications do a fair amount of streaming,
and clearly the distributed annex uses it for every remote call.

> (Essentially, a
> routine like this is the thing that underlies all of the discrete
> stream attributes in Janus/Ada, although [for truth in advertising],
> [a] it is simplified by directly writing to the stream when everything
> is static and "perfect"; [b] I never actually got the stream attribute
> stuff to work right [a job I need to finish ASAP]).
>
> The problem seems to be most serious for floating point
> representations, where the number supported is definitely
> implementation-defined and there is no obvious relationship between the values.
>
> My net takeaway is that this scheme isn't going to work unless we are
> willing to put a maximum number on the floating point representations
> supported by an Ada compiler -- and I don't think that will fly as an
> option.

I don't see the problem, given the fallbacks provided by Write_Shorter_Float
and Write_Longer_Float, plus the "optimized" Write_Float and
Write_Longest_Float.

****************************************************************

From: Robert Dewar
Sent: Wednesday, July 21, 2010  10:31 PM

> I would *much* rather address this issue by being able to provide
> user-defined implementations of the default elementary streaming
> attributes.  I have designed such a capability many times, but just
> never got around to proposing it.

I agree a way of doing this would be good. In GNAT we do this by allowing
redefinition of the library routine System.Stream_Attributes, but a more
official way of doing it would be nice.

****************************************************************

From: Robert Dewar
Sent: Wednesday, July 21, 2010  10:32 PM

>>> To avoid implementation dependence in the spec for
>>> Extended_Root_Stream_Type on the number of integer or float sizes
>>> supported by the implementation, ...
>> Is such avoidance desirable?
>
> Maybe not if you are a vendor that wants vendor lock-in, but otherwise
> I would think that the interfaces need to be the same between
> compilers (so that Ada code using a custom streaming scheme would be
> usable on multiple compilers). Keep in mind that even compilers
> targeting the same target processor might provide different numbers of
> integer and float types (one
> example: GNAT provides 64-bit integers on x86 targets and Janus/Ada
> does not [currently]). I don't see any clear way to make sure a
> requirement without a single portable specification.

which is out of the question at this stage.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, July 21, 2010  11:01 PM

...
> > The problem seems to be most serious for floating point
> > representations, where the number supported is definitely
> > implementation-defined and there is no obvious relationship between
> > the values.
> >
> > My net takeaway is that this scheme isn't going to work unless we
> > are willing to put a maximum number on the floating point
> > representations supported by an Ada compiler -- and I don't think
> > that will fly as an option.
>
> I don't see the problem, given the fall backs provided by
> Write_Shorter_Float and Write_Longer_Float, plus the "optimized"
> Write_Float and Write_Longest_Float.

I don't see how that would work. Float representations tend to be unrelated to
each other; it's not like the integer case where you can just use the lowest (or
upper) bits and ignore the rest.

I suppose you are thinking of something like converting all of the float
representations to the largest float and then convert them back based on the
number of bits to be written. But I don't think that would work for all
machines; if I remember correctly, the VAX had two different float
representations that were each 64 bits. How could a routine determine which of
those was meant? Similarly, you are assuming that the largest float can hold
proper values of all of the others, but that may not be true, particularly if
conversions change the oddball properties (such as IEEE NaNs and the like). None
of these things could reasonably happen for an integer type.

So I think we would either have to define a maximum number of float
representations (which would be allowed to map to the same float representation
if there aren't enough) or make the specification implementation-dependent
(which kills portability of programs using this facility).

P.S. I just looked this up in a 1981 VAX manual that I still have on the
shelf for some reason: the D and G formats are both 64 bits. There are also 32-
and 128-bit formats. So given your suggested specification, Longest_Float would
be H_Floating (128 bits), and the shortest float would be F_Floating (32 bits).
You'd have to use the routine which takes a bit size, which would be 64. How
would the routine determine whether to write a D or G float (presuming the Ada
compiler supported both)?

So I don't think this scheme would work for float representations, at least not
on a 1981 VAX.

****************************************************************

From: Robert Dewar
Sent: Wednesday, July 21, 2010  11:39 PM

> I suppose you are thinking of something like converting all of the
> float representations to the largest float and then convert them back
> based on the number of bits to be written. But I don't think that
> would work for all machines; if I remember correctly, the VAX had two
> different float representations that were each 64 bits. How could a
> routine determine which of those was meant? Similarly, you are
> assuming that the largest float can hold proper values of all of the
> others, but that may not be true, particularly if conversions change
> the oddball properties (such as IEEE NaNs and the like). None of these
> things could reasonably happen for an integer type.

On the VAX, there are two 64-bit fpt formats, one with a greater precision and one
with a greater range. Neither can be used as a unique largest float.

Of course in practice it would be fine to make this work for IEEE and be done
with that, but even IEEE has an implementation-defined number of types!

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, July 22, 2010  6:21 AM

> So I'm leaning toward killing this one. (The author - Jean-Pierre, I
> think - is free to try to save it by proving me wrong in some
> important way. But that will require detailed wording which addresses
> Steve's and Tuck's and my
> concerns.)

Well, this issue is not a surprise; it was foreseen in the !discussion section
(although not specifically for containers).

I don't see how this is different from any other restriction. If there is a
restriction (max_tasks=0), it may well prevent the use of a number of predefined
packages (including maybe Text_IO).

The goal of this restriction is to make sure that programs written using
different compilers may interoperate safely, by forcing all data exchanged to
have user-defined streaming attributes. If the use of containers implies the
exchange of an implementation-defined type, then it is a good thing if the
instantiation is rejected.

Note also that a possible alternative is to raise Program_Error in the default
streaming attributes. Although I think a compile-time check is preferable, I'd
certainly prefer an exception to dropping the AI completely. (Of course,
implementations are free to warn that Program_Error will be raised when they
detect the use of such an attribute).

****************************************************************

From: Tucker Taft
Sent: Thursday, July 22, 2010  8:34 AM

   Any comments on the proposed alternative approach allowing overriding of the
   default elementary streaming attribute implementations?

****************************************************************

From: Tucker Taft
Sent: Thursday, July 22, 2010  8:41 AM

> I don't see how that would work. Float representations tend to be
> unrelated with each other; it's not like the integer case where you
> can just use the lowest (or upper bits) and ignore the rest.
>
> I suppose you are thinking of something like converting all of the
> float representations to the largest float and then convert them back
> based on the number of bits to be written. But I don't think that
> would work for all machines; if I remember correctly, the VAX had two
> different float representations that were each 64 bits. How could a
> routine determine which of those was meant? Similarly, you are
> assuming that the largest float can hold proper values of all of the
> others, but that may not be true, particularly if conversions change
> the oddball properties (such as IEEE NaNs and the like). None of these
> things could reasonably happen for an integer type.

If you stick with IEEE, I don't see a problem.  The longer types are pure
supersets of the shorter types.

I'm not very worried about Vaxen at this point.  Some kind of configuration
pragma or equivalent could presumably be used to deal with this very special
case.  I suspect that most people who still use Vaxen (if there are any) would
adopt the 64-bit type that matched the IEEE model.

> So I think we would either have to define a maximum number of float
> representations (which would be allowed to map to the same float
> representation if there aren't enough) or make the specification
> implementation-dependent (which kills portability of programs using
> this facility).

I agree we want to avoid implementation dependence.
Simply adding more floating point versions seems pretty simple.  If you have 4
of them, I think you would be pretty safe!  We could then eliminate the ones
that take a "Size_In_Bits" as a parameter, meaning that we have a total of 4,
just as was proposed for integer types.  And of course for targets which have
fewer, some of them could simply be renames of the others.

****************************************************************

From: Robert Dewar
Sent: Thursday, July 22, 2010  8:54 AM

> If you stick with IEEE, I don't see a problem.  The longer types are
> pure supersets of the shorter types.

But it's not just VAXes, it's all VMS compilers (e.g. on Alpha) that fully
support these types; please don't assume these have gone away!

> I'm not very worried about Vaxen at this point.  Some kind of
> configuration pragma or equivalent could presumably be used to deal
> with this very special case.  I suspect that most people who still use
> Vaxen (if there are any) would adopt the 64-bit type that matched the
> IEEE model.

As I say, it is quite wrong to think this is specific to the Vax, and MANY of
our customers are using the legacy floating-point models in their legacy code,
and yes these types must be fully supported.

It would be a step backward for Ada to assume that all the world is IEEE when
this is not the case, and when we already have severe compromises in the fpt
model to accommodate the more general case.

> I agree we want to avoid implementation dependence.
> Simply adding more floating point versions seems pretty simple.  If
> you have 4 of them, I think you would be pretty safe!  We could then
> eliminate the ones that take a "Size_In_Bits" as a parameter, meaning
> that we have a total of 4, just as was proposed for integer types.
> And of course for targets which have fewer, some of them could simply
> be renames of the others.

I think it's fine to have more fpt versions; we do this all over the place
already (e.g. extra instantiations of generics).

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, July 22, 2010  10:09 AM

>     Any comments on the proposed alternative approach allowing
> overriding of the default elementary streaming attribute
> implementations?

There are two levels in this AI:
1) provide the restriction
2) since it seems to be too strong on generics, how to allow more instantiations
in the presence of the restriction.

Your proposal is about 2). Why not, if it is "reasonably" doable, but I fear it
would not solve the case of containers, since they are bound to stream local
types, that may even not be visible.

****************************************************************

From: Randy Brukardt
Sent: Thursday, July 22, 2010  12:32 PM

This isn't my understanding of Tucker's proposal. He is suggesting getting rid
of (1) altogether (or replacing it with a simple restriction that no descendants
of Root_Stream_Type that aren't descendants of Extended_Root_Stream_Type appear
in the partition). But that would require the user to rewrite all existing
user-defined stream attributes (to use Extended_Root_Stream_Type) and I don't
think that actually makes much sense. It seems more like something that
AdaControl ought to do.

And his proposal has nothing specific to do with (2); it replaces all elementary
stream attributes by user-defined versions implicitly. It does that everywhere,
not just in generics.

BTW, I don't quite understand the proposal in the sense that it seems to me that
if this package is defined, then virtually all elementary stream attributes
would have to be implemented with a dispatching call to the user-defined
routine. That's because the vast majority of such attributes occur within
ordinary subprograms that will be used to define other user-defined stream
attributes (using an elementary stream attribute in a stand-alone manner is
rare), and there is no way for the routine to know whether the passed in stream
is actually descended from Extended_Root_Stream_Type. This means that there is a
level of distributed overhead with this feature: everyone pays for it whether
they use it or not.

That extra overhead might matter when streaming somewhere fast (such as in
marshalling for the DS annex); it won't matter much on the Internet or when
writing to files. Not sure if it matters, but we ought to consider that as part
of this proposal.

****************************************************************

From: Randy Brukardt
Sent: Thursday, July 22, 2010  12:43 PM

> The goal of this restriction is to make sure that programs written
> using different compilers may interoperate safely by forcing every
> data exchanged to have user-defined streaming attributes. If the use
> of containers implies the exchange of an implementation defined type,
> then it is a good thing if the instantiation is rejected.

But this is the problem: the instantiation is rejected even if the program has
no intention of streaming containers. Moreover, it is rejected depending on how
the containers are implemented, so some containers might work for one
implementation and not for another. The only way to write a truly portable
program using this restriction would be to avoid all use of predefined packages
(whether or not any streaming is done with them), which seems nasty.

I agree that other restrictions can have this effect (a major flaw in the whole
idea, IMHO), but something like Max_Tasks=0 is much less likely to cause this
effect. (Who is putting tasks into the random number generator? But the language
requires streaming to work on type State in the random number generator, so
there very well may be a user-defined stream attribute as part of the
implementation.)

> Note also that a possible alternative is to raise Program_Error in the
> default streaming attributes. Although I think a compile-time check is
> preferable, I'd certainly prefer an exception to dropping the AI
> completely. (Of course, implementations are free to warn that Program_Error will
> be raised when they detect the use of such an attribute).

That would mostly eliminate my concern (and also make the Restriction much
easier to define), as you would only get Program_Error when you actually tried
to stream something language defined. The problem here is that the language
requires streaming to work in the predefined packages, and that means that
user-defined stream attributes might be defined that would use default stream
attributes. Rejecting the program simply because of the existence of those is
insane and a real cramp on portability; having them raise Program_Error in the
face of this restriction only has an effect if the streaming is actually used --
which is the real problem to be solved.

Note however that a runtime fix implies some distributed overhead; if we're
going to pay for that, Tucker's idea of providing a way to force user-defined
versions of all elementary stream attributes (and thus of all default stream
attributes indirectly) may be a better idea (presuming that the interface can be
worked out).

****************************************************************

From: Tucker Taft
Sent: Thursday, July 22, 2010  1:48 PM

> BTW, I don't quite understand the proposal in the sense that it seems
> to me that if this package is defined, then virtually all elementary
> stream attributes would have to be implemented with a dispatching call
> to the user-defined routine.

There is always at least one dispatching call involved with each call on a
stream attribute, since the stream parameter to a stream attribute is
access-to-class-wide. The question is whether there would need to be two
dispatching calls, one to get to the Write_Integer implementation, and then
another to get from Write_Integer to Write stream-array. It would seem to be
under the user's control whether that second call is a re-dispatch or a
statically bound call.

> ... That's because the vast majority of such attributes occur within
> ordinary subprograms that will be used to define other user-defined
> stream attributes (using an elementary stream attribute in a
> stand-alone manner is rare), and there is no way for the routine to
> know whether the passed in stream is actually descended from
> Extended_Root_Stream_Type. This means that there is a level of
> distributed overhead with this feature: everyone pays for it whether they use it or not.
>
> That extra overhead might matter when streaming somewhere fast (such
> as in marshalling for the DS annex); it won't matter much on the
> Internet or when writing to files. Not sure if it matters, but we
> ought to consider that as part of this proposal.

In some cases it might actually be faster, if the stream were optimized
appropriately.

****************************************************************

From: Randy Brukardt
Sent: Thursday, July 22, 2010  4:50 PM

> There is always at least one dispatching call involved with each call
> on a stream attribute, since the stream parameter to a stream
> attribute is access-to-class-wide.
> The question is whether there would need to be two dispatching calls,
> one to get to the Write_Integer implementation, and then another to
> get from Write_Integer to Write stream-array.
> It would seem to be under the user's control whether that second call
> is a re-dispatch or a statically bound call.

OIC!

I was thinking that you would call Write_Integer or whatever and then call the
stream Write (that's how it works in Janus/Ada). But I think you are arguing
that at the point of the default stream attribute, you would *only* dispatch to
Write_Integer, and what it did would be up to the user.

It strikes me that with this approach, there is no obvious reason not to put
these new routines directly into Root_Stream_Type, so long as they have
(implementation-defined) concrete implementations that are defined to redispatch
to the existing abstract routines. They'll be statically bound some of the time,
and the rest of the time no static binding would be possible anyway. That would
get rid of any extra stream types which are not really helpful (the actual
implementation of Root_Stream_Type would have to be as described here anyway).

The big question is whether the implementation disruption (which would be
considerable) is tolerable.

I guess there is another question as well: this would effectively require
repealing 13.13.2(56/2). For instance, if you have a string type with a
predefined stream attribute, currently it is OK to call Write only once writing
the entire stream data since the compiler knows the result is indistinguishable
from writing each of the characters individually. (Thus it still is OK to read
the string component-by-component if someone wants to do that.) But with a
possible user-defined elementary stream attribute for *every* stream, such an
optimization would never be legitimate -- there would be no way to tell when the
output would be the same. Moreover, doing anything other than the canonical set
of operations could very well generate something other than the canonical
stream, so I think we'd have to repeal 13.13.2(56/2) outright and replace it by
a requirement to make the exact number of calls (modulo as-if optimizations, of
course) specified by the standard. That's a much bigger performance hazard than
just an extra dispatch.

****************************************************************
