CVS difference for ai12s/ai12-0249-1.txt
--- ai12s/ai12-0249-1.txt 2018/01/23 06:15:43 1.1
+++ ai12s/ai12-0249-1.txt 2018/01/25 07:32:08 1.2
@@ -88,7 +88,7 @@
A one-dimensional array type whose component type is a character type
is called a /string/ type{, as is a type with a specified String_Literal
aspect (see 4.2.1)}.
-
+
Modify paragraph 4.2(3):
For a name that consists of a character_literal, either{:
@@ -158,28 +158,28 @@
to allow the use of one or more kinds of literals as values of the type.
Static Semantics
-
+
The following nonoverridable, type-related operational aspects may be
specified for a type T other than a numeric type:
Integer_Literal
This aspect is specified by a /function_/name that denotes a
primitive function of T with one parameter of type String and a
- result type of T.
-
+ result type of T.
+
Real_Literal
This aspect is specified by a /function_/name that denotes a
primitive function of T with one parameter of type String and a
- result type of T.
-
+ result type of T.
+
The following nonoverridable, type-related operational aspect may be
specified for a type T other than an access type:
-
+
Null_Literal
This aspect is specified by a name that denotes a constant object of
type T, or that denotes a primitive function of T with no parameters
- and a result type of T.
-
+ and a result type of T.
+
The following nonoverridable, type-related operational aspect may be
specified for a type T, unless T is a one-dimensional array type whose
component type is a character type:
@@ -187,11 +187,11 @@
String_Literal
This aspect is specified by a /function_/name that denotes a
primitive function of T with one parameter of type
- Wide_Wide_String and a result type of T.
-
- [Redundant: A type with a specified String_Literal aspect is considered a
+ Wide_Wide_String and a result type of T.
+
+ [Redundant: A type with a specified String_Literal aspect is considered a
/string/ type.]
-
+
The following nonoverridable, type-related operational aspect may be
specified for a type T other than an enumeration type:
@@ -199,14 +199,14 @@
This aspect is specified by a /function_/name that denotes a
primitive function of T with one parameter of type
Wide_Wide_Character and a result type of T.
-
+
[Redundant: A type with a specified Character_Literal aspect is
considered a
/character/ type.]
-
+
Dynamic Semantics
-
+
For the evaluation of an integer (or real) literal with expected type
having an Integer_Literal (or Real_Literal) aspect specified, the value
is the result of a call on the function specified by the aspect, with
@@ -229,7 +229,7 @@
Wide_Wide_String that corresponds to the literal.
Implementation Permissions
-
+
For a literal with an expected type having a corresponding _Literal
aspect specified, the implementation is permitted to evaluate the
literal at compile time, or at run time at any point after the freezing
@@ -239,7 +239,7 @@
one of the literals may be used as the value for both literals.
Bounded Errors
-
+
It is a bounded error if the evaluation of a literal with expected type
having a corresponding _Literal aspect specified, propagates an
exception. The possible effect is that an error is reported prior to
@@ -249,7 +249,7 @@
!discussion
-The !proposal section includes a relatively complete discussion of the
+The !proposal section includes a relatively complete discussion of the
issues. But here are a few other interesting questions or issues:
* Should numeric literals always be non-negative, with any
@@ -257,12 +257,12 @@
Probably; it would be somewhat weird if "normal" numeric literals
used the type's "-" operator, while a type supporting user-defined
literals bypassed the "-" operator.
-
+
* A literal may be usable on a partial view but not on the full view,
if the full view is a type that already has meaning for the same
sort of literal. That seems OK.
-
-* More generally, we chose to allow user-defined literals on almost
+
+* More generally, we chose to allow user-defined literals on almost
any sort of type, so long as that sort of type didn't already allow
that sort of literal. The thought was that you could imagine a
floating point type that would allow the use of string literals
@@ -270,61 +270,61 @@
that would allow character literals, which would be interpreted
as their 16-bit Unicode value, say -- the "signed_char" and
"unsigned_char" types in Interfaces.C come to mind.
-
+
For the purposes of this rule, we treated all numeric literals
as one "kind" of literal, and disallow any numeric type from having
user-defined numeric literals. Having the presence or absence of
a "." seemed too subtle to trigger a completely different effect
when the expected type was a numeric type.
-
+
* If we go with this proposal, we will have to decide whether to use
- this feature for existing language-defined packages
- (e.g. Unbounded_Strings), and if so, whether certain existing
+ this feature for existing language-defined packages
+ (e.g. Unbounded_Strings), and if so, whether certain existing
functions should be eliminated from the packages (e.g. overloadings
of "&" and the comparison operators that have String as one of
- the operand types). At this stage it might be safer to define a
+ the operand types). At this stage it might be safer to define a
new package and move the original Unbounded_Strings package to
- Annex J.
-
+ Annex J.
+
Alternatively, we could define some kind of "preference"
- for or against user-defined interpretation of literals, though we
- know that "Beaujolais" effects are lurking around the corner when
- you set up preference rules. We already have some preference rules
- dealing with "root" numeric types, and these could be seen as similar,
- but the root-numeric-type preference avoids Beaujolais effects because
+ for or against user-defined interpretation of literals, though we
+ know that "Beaujolais" effects are lurking around the corner when
+ you set up preference rules. We already have some preference rules
+ dealing with "root" numeric types, and these could be seen as similar,
+ but the root-numeric-type preference avoids Beaujolais effects because
the preference is for operations that are always visible.
-
+
So barring some clear proof of being Beaujolais-effect-free, we should
probably steer clear of a preference rule.
-
+
!examples
type Unbounded_String is private
with String_Literal => To_Unbounded_String;
-
+
function To_Unbounded_String (Source : Wide_Wide_String)
return Unbounded_String;
...
-
+
X : constant Unbounded_String := "This is a test";
-- Equivalent to:
-- X : constant Unbounded_String :=
- -- To_Unbounded_String
+ -- To_Unbounded_String
-- (Wide_Wide_String'("This is a test"));
-
+
type Big_Integer is private
with Integer_Literal => Big_Integer_Value;
-
+
function Big_Integer_Value (S : String)
return Big_Integer;
-
+
...
-
+
Y : Big_Integer := -3;
-- Equivalent to:
-- Y : Big_Integer := - Big_Integer_Value ("3");
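
Further illustrative sketches (not from the proposal; Big_Real,
Big_Real_Value, Maybe_Int, and No_Value are invented names) for the
Real_Literal and Null_Literal aspects:

type Big_Real is private
   with Real_Literal => Big_Real_Value;

function Big_Real_Value (S : String)
   return Big_Real;

type Maybe_Int is private
   with Null_Literal => No_Value;

No_Value : constant Maybe_Int;

...

Z : Big_Real := 2.5;
-- Equivalent to:
-- Z : Big_Real := Big_Real_Value ("2.5");

M : Maybe_Int := null;
-- Equivalent to:
-- M : Maybe_Int := No_Value;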
-
+
!ASIS
[Not sure. Might need new aspect names, but I didn't check - Editor.]
@@ -374,17 +374,17 @@
string literal maps to a function call.
> Implementation Permissions
-> For a literal with an expected type having a corresponding _Literal
-> aspect specified, the implementation is permitted to evaluate the
-> literal at compile time, or at run time at any point after the
-> freezing point of the expected type, and prior to the first use of the
-> value of the literal. Furthermore, if two such literals with the same
-> expected type have identical textual representations, the result of
+> For a literal with an expected type having a corresponding _Literal
+> aspect specified, the implementation is permitted to evaluate the
+> literal at compile time, or at run time at any point after the
+> freezing point of the expected type, and prior to the first use of the
+> value of the literal. Furthermore, if two such literals with the same
+> expected type have identical textual representations, the result of
> evaluating one of the literals may be used as the value for both literals.
I don't think we want to give permission to *introduce* an ABE failure.
If the canonical evaluation point would be after the elaboration of the
-function body, then clearly we don't want to allow evaluation to precede
+function body, then clearly we don't want to allow evaluation to precede
the elaboration of that body. But suppose the function calls other subprograms,
or reads "constant after elaboration" state?
@@ -397,11 +397,11 @@
with the general intent).
> Bounded Errors
-> It is a bounded error if the evaluation of a literal with expected
-> type having a corresponding _Literal aspect specified, propagates an
-> exception. The possible effect is that an error is reported prior to
-> run time, Program_Error is raised at some point prior to any use of
-> the value of the literal, or the exception propagated by the
+> It is a bounded error if the evaluation of a literal with expected
+> type having a corresponding _Literal aspect specified, propagates an
+> exception. The possible effect is that an error is reported prior to
+> run time, Program_Error is raised at some point prior to any use of
+> the value of the literal, or the exception propagated by the
> evaluation is raised at the point of use of the value of the literal.
Allowing an exception to be raised at some arbitrary point (e.g., during an
@@ -449,15 +449,15 @@
> In general, this looks good. I think it is the right approach.
I agree.
-
+
BTW, there are some additional issues not noted by Steve mentioned at the
bottom of this message.
-> We need to modify 4.9 to say that certain literals are not static
+> We need to modify 4.9 to say that certain literals are not static
> expressions, right?
Certainly.
-
+
> > Modify paragraph 4.2(11):
> > For the evaluation of a string_literal of type T, {if its expected type
> > is a one-dimensional array type with a component subtype that is a
@@ -467,10 +467,10 @@
> > made that its lower bound is greater than the lower bound of the base
> > range of the index type. The exception Constraint_Error is raised if
> > either of these checks fails.
->
-> Do the changes made in this paragraph apply properly in the null
-> string literal case? This seems to be saying that the usual rules for
-> determining the bounds of a null string literal apply even in the case
+>
+> Do the changes made in this paragraph apply properly in the null
+> string literal case? This seems to be saying that the usual rules for
+> determining the bounds of a null string literal apply even in the case
> where the null string literal maps to a function call.
Don't they have to? The function call takes a Wide_Wide_String parameter
@@ -481,28 +481,28 @@
than one value. Or at least it would had Tucker proposed any wording.)
> > Implementation Permissions
-> > For a literal with an expected type having a corresponding
-> > _Literal aspect specified, the implementation is permitted to
-> > evaluate the literal at compile time, or at run time at any point
-> > after the freezing point of the expected type, and prior to the
-> > first use of the value of the literal. Furthermore, if two such
-> > literals with the same expected type have identical textual
-> > representations, the result of evaluating one of the literals may be
+> > For a literal with an expected type having a corresponding
+> > _Literal aspect specified, the implementation is permitted to
+> > evaluate the literal at compile time, or at run time at any point
+> > after the freezing point of the expected type, and prior to the
+> > first use of the value of the literal. Furthermore, if two such
+> > literals with the same expected type have identical textual
+> > representations, the result of evaluating one of the literals may be
> > used as the value for both literals.
->
+>
> I don't think we want to give permission to *introduce* an ABE failure.
-> If the canonical evaluation point would be after the elaboration of
-> the function body, then clearly we don't want to allow evaluation to
+> If the canonical evaluation point would be after the elaboration of
+> the function body, then clearly we don't want to allow evaluation to
> precede the elaboration of that body.
-> But suppose the function calls other subprograms, or reads "constant
+> But suppose the function calls other subprograms, or reads "constant
> after elaboration" state?
->
-> I don't think we ever give permission to evaluate something at
-> compile-time; that follows from the usual as-if rules. Do we want to
-> somehow reference the existing rules for a pair of pure function calls
+>
+> I don't think we ever give permission to evaluate something at
+> compile-time; that follows from the usual as-if rules. Do we want to
+> somehow reference the existing rules for a pair of pure function calls
> with the same arguments (10.2.1(18/3))?
>
-> I think the Implementation Permissions section needs some work (but I
+> I think the Implementation Permissions section needs some work (but I
> agree with the general intent).
It seems to me that the Pure function rules (and especially the corresponding
@@ -520,22 +520,22 @@
stupid code is fast?
> > Bounded Errors
-> > It is a bounded error if the evaluation of a literal with
-> > expected type having a corresponding _Literal aspect specified,
-> > propagates an exception. The possible effect is that an error is
-> > reported prior to run time, Program_Error is raised at some point
-> > prior to any use of the value of the literal, or the exception
+> > It is a bounded error if the evaluation of a literal with
+> > expected type having a corresponding _Literal aspect specified,
+> > propagates an exception. The possible effect is that an error is
+> > reported prior to run time, Program_Error is raised at some point
+> > prior to any use of the value of the literal, or the exception
> > propagated by the evaluation is raised at the point of use of the value of
> > the literal.
->
-> Allowing an exception to be raised at some arbitrary point (e.g.,
+>
+> Allowing an exception to be raised at some arbitrary point (e.g.,
> during an abort-deferred operation) seems like a bad idea.
Agreed.
-
-> It reminds me of the state of affairs before per-object constraints
+
+> It reminds me of the state of affairs before per-object constraints
> were invented. Given something like
->
+>
> type T (D : Integer) is private;
> subtype S is T (-10);
> ...
@@ -544,10 +544,10 @@
> type T (D : Integer) is
> record F : String (D .. 10); ...; end record;
> ...
->
-> , there was a constraint check associated with S that could fail
-> anywhere in some large region of text. That was a bad idea then (which
-> is why per-object constraints were invented) and it remains a bad idea
+>
+> , there was a constraint check associated with S that could fail
+> anywhere in some large region of text. That was a bad idea then (which
+> is why per-object constraints were invented) and it remains a bad idea
> today.
If we had exception contracts (hint, hint), this would be a non-problem as a
@@ -571,9 +571,9 @@
> > * A literal may be usable on a partial view but not on the full view,
> > if the full view is a type that already has meaning for the same
> > sort of literal. That seems OK.
->
+>
> Where is that defined? In a case like
->
+>
> type T (<>) is private with String_Literal => Convert;
> function Convert (X : Wide_Wide_String) return T;
> procedure Foo (X : T := "abc");
@@ -581,8 +581,8 @@
> type T is new Wide_Wide_String;
> function Convert (X : Wide_Wide_String) return T is (">>>" & T (X));
> procedure Foo (X : T := "abc") is null;
->
-> I like the idea that the second "abc" is illegal . Otherwise we'd need
+>
+> I like the idea that the second "abc" is illegal . Otherwise we'd need
> to modify the conformance rules to know about cases like this.
It's defined by the way that he wrote the interpretation of literals; you
@@ -619,7 +619,7 @@
>Modify paragraph 3.6.3(1):
> A one-dimensional array type whose component type is a character type
> is called a /string/ type{, as is a type with a specified String_Literal
-> aspect (see 4.2.1)}.
+> aspect (see 4.2.1)}.
Umm, don't "string types" have a bunch of extra semantics beyond just literals?
In particular, string types can be static. Doesn't this cause some problems?
@@ -629,12 +629,12 @@
---
> Modify paragraph 4.2(9):
-> {If its expected type is a numeric type, t}[T]he evaluation of a
+> {If its expected type is a numeric type, t}[T]he evaluation of a
> numeric literal[, or the literal null,] yields the represented value.
-> {If its expected type is an access type, the evaluation of the
-> literal null yields the null value of the expected type.} In other
-> cases, the effect of evaluating a numeric or null literal is
-> determined by the Integer_Literal, Real_Literal, or Null_Literal
+> {If its expected type is an access type, the evaluation of the
+> literal null yields the null value of the expected type.} In other
+> cases, the effect of evaluating a numeric or null literal is
+> determined by the Integer_Literal, Real_Literal, or Null_Literal
> aspect that applies (see 4.2.1).
Something is wrong with the Insertion/Deletion marks here, as I'm pretty sure
@@ -650,7 +650,7 @@
> as defined in 2.6. The bounds of this array value are determined
> according to the rules for positional_array_aggregates (see 4.3.3 ),
> except that for a null string literal, the upper bound is the
-> predecessor of the lower bound.
+> predecessor of the lower bound.
Don't we need these same rules for the bounds of the object that gets passed
into the function specified by the String_Literal aspect? That object needs
@@ -679,7 +679,7 @@
string types. The proposal even used an imaginary "String_Literal" aspect!
If one had such a package, then you'd want to replace all of the existing
-string packages with class-wide ones that would allow storing any string
+string packages with class-wide ones that would allow storing any string
type. (That would make unbounded-ness orthogonal to the representation of the
actual string; you'd probably implement the storage as an array of stream
elements.) That would give a good excuse to introduce proper literals. (We'd
@@ -695,10 +695,624 @@
> but the root-numeric-type preference avoids Beaujolais effects because
> the preference is for operations that are always visible.
> So barring some clear proof of being Beaujolais-effect-free, we should
-> probably steer clear of a preference rule.
+> probably steer clear of a preference rule.
-Umm, no thanks. Root numeric preference is a message in our compiler, and I
+Umm, no thanks. Root numeric preference is a mess in our compiler, and I
have no interest in trying to figure out something like it for other things.
+
+****************************************************************
+
+From: Steve Baird
+Sent: Tuesday, January 23, 2018 12:45 PM
+
+>>> Modify paragraph 4.2(11):
+>>> For the evaluation of a string_literal of type T, {if its
+>>> expected
+> type
+>>> is a one-dimensional array type with a component subtype that is a
+>>> constrained subtype of a character type,} a check is made that the
+>>> value of each character of the string_literal belongs to the component
+>>> subtype of T. For the evaluation of a null string literal, a check is
+>>> made that its lower bound is greater than the lower bound of the base
+>>> range of the index type. The exception Constraint_Error is raised if
+>>> either of these checks fails.
+>>
+>> Do the changes made in this paragraph apply properly in the null
+>> string literal case? This seems to be saying that the usual rules for
+>> determining the bounds of a null string literal apply even in the
+>> case where the null string literal maps to a function call.
+>
+> Don't they have to? The function call takes a Wide_Wide_String
+> parameter and one needs to know the bounds of that. And if those
+> bounds are garbage, we still have to raise Constraint_Error. (Note,
+> however, that the AI on "(null array)" -- AI12-0248-1 since I assigned
+> it a number an hour ago
+> -- eliminates this check, replacing it by a check that the index type
+> has more than one value. Or at least it would had Tucker proposed any
+> wording.)
+
+Given a null string literal whose evaluation is going to involve passing a
+wide_wide_string to a function, what is the case you are talking about where the
+bounds of the wide_wide_string might be bad (or, for that matter, anything other
+than 1 .. 0)?
+
+There are two separate sets of bounds here - the bounds of the Wide_Wide_String
+that is passed to the function and (in the case where the type of the literal is
+an array type) the bounds of the function result. Only the former need to be
+discussed in this AI (the latter follow from the usual rules about function
+results). In the phrase "its lower bound", the "it" refers to the value of the
+literal, not to the intermediate Wide_Wide_String value. This makes no sense,
+but it also wouldn't make any sense if it did refer to the intermediate
+Wide_Wide_String value. [Incidentally, I argue later on in this message that we
+don't need to talk about the bounds of the Wide_Wide_String value at all because
+all the dynamic semantics stuff either does or should follow from an equivalence
+rule.]
+
+> Indeed, I think we should get rid of this permission altogether and
+> replace it with an AARM Implementation note that discusses as-if
+> optimizations and the fact that an implementation need only call a
+> pure/Global => null function only once per set of parameters. If the
+> function has side-effects (yikes!), that's madness and the code should
+> simply revert to canonical semantics. Who cares if stupid code is fast?
+
+I agree that we don't care much about functions that have side effects, or about
+functions which are called twice with the same arguments and return different
+results. Behavior in these cases needs to be defined, but these are silly cases.
+
+The case I'm more concerned about is a function that reads (but does not write)
+variable state. In particular, I'm thinking of variable state that is updated at
+some point and never subsequently modified. If all calls to the function occur
+after the state is updated, then the state is effectively (for purposes of our
+function) constant and our function then has the desired pure-ish properties (no
+side effects and two calls with same arguments yield same results). But if an
+implementation permission allows evaluation of the function call before the
+state update occurs, then problems result.
+
+ABE checks can be viewed as a special case of this if you choose to think of
+there being a (notional) Boolean flag associated with each subprogram indicating
+whether the body has been elaborated. The elaboration of the body then has the
+effect of modifying the flag and a call which depends on the value of this flag
+can be thought of as reading variable state.
+
+And of course a function which is subject to its own ABE check is hardly a
+pathological corner case.
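+
+As an illustrative sketch of that concern (the names P, Value, and
+Use_It are invented):
+
+   package P is
+      type Big_Integer is private
+         with Integer_Literal => Value;
+      function Value (S : String) return Big_Integer;
+      procedure Use_It;
+   private
+      ...
+   end P;
+
+   package body P is
+      function Value (S : String) return Big_Integer is ...;
+      procedure Use_It is
+         X : constant Big_Integer := 123;
+         -- Canonically, this calls Value ("123") on each call of Use_It.
+      begin
+         ...
+      end Use_It;
+   end P;
+
+If Use_It is only called after the body of P has been elaborated, the
+canonical calls to Value cannot fail an elaboration check. But the
+permission would allow evaluating the literal as early as the freezing
+point of Big_Integer, in the spec of P, before the body of Value has
+been elaborated.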
+
+> I don't see any way to reconcile the desire to "hoist" literal
+> functions with these side-cases. And I don't really see the point;
+> this is just a function call like any other function call.
+
+Agreed.
+
+> This only seems to improve the performance of weird (side-effects) or
+> buggy (unexpected exceptions) literal routines.
+
+I disagree, as described above ...
+
+> And that seems to be exactly the
+> cases we don't care about. So this smacks of premature optimization,
+> especially as it only helps one specific corner case (as an
+> implementer, I'm much more interested in things that will speed up a
+> substantial fraction of functions).
+>
+
+... but I still think your conclusion about hoisting is right.
+
+>>> * A literal may be usable on a partial view but not on the full view,
+>>> if the full view is a type that already has meaning for the same
+>>> sort of literal. That seems OK.
+>>
+>>
+>> Where is that defined? In a case like
+>>
+>> type T (<>) is private with String_Literal => Convert;
+>> function Convert (X : Wide_Wide_String) return T;
+>> procedure Foo (X : T := "abc");
+>> private
+>> type T is new Wide_Wide_String;
+>> function Convert (X : Wide_Wide_String) return T is (">>>" & T (X));
+>> procedure Foo (X : T := "abc") is null;
+>>
+>> I like the idea that the second "abc" is illegal . Otherwise we'd
+>> need to modify the conformance rules to know about cases like this.
+>
+> It's defined by the way that he wrote the interpretation of literals;
+> you check for the existing cases first, and only then (if the existing
+> cases don't work) do you use the aspect specification. I don't think
+> the wording is very clear about that.
+>
+
+There are 2 ways to go if both mechanisms are available (as in the private part
+of the above example): either a preference rule is used to choose one
+interpretation over the other or the literal is illegal.
+I interpret the statement
+ "A literal may be usable in a partial view but not on the
+ full view, if the full view is a type that already has a meaning
+ for the same sort of literal"
+to mean that the second option has been selected.
+I asked for an explanation of how this statement followed from the wording and
+you (implicitly) agreed with me that you couldn't justify it either (because the
+first option, not the second, was chosen).
+
+> ---
+>
+>> Modify paragraph 4.2(10):
+>> The evaluation of a string_literal that is a primary {and has an
+>> expected type that is a one-dimensional array type with a character
+>> type as its component type,} yields an array value containing the value
+>> of each character of the sequence of characters of the string_literal,
+>> as defined in 2.6. The bounds of this array value are determined
+>> according to the rules for positional_array_aggregates (see 4.3.3 ),
+>> except that for a null string literal, the upper bound is the
+>> predecessor of the lower bound.
+>
+> Don't we need these same rules for the bounds of the object that gets
+> passed into the function specified by the String_Literal aspect? That
+> object needs defined bounds just as much as the direct uses! The
+> function that is called is just a normal Ada function after all,
+> someone could reasonably query Val'First.
+>
+
+Do we need to mention this dynamic-semantics stuff at all? If this isn't already
+handled by an equivalence rule, then it should be.
+   T'("abc")
+is equivalent to
+   T'(T'String_Literal_Function (Wide_Wide_String_Param => "abc"))
+and we don't need to say any more than that about the bounds of the
+Wide_Wide_String literal.
+
+Note that the parameter subtype of the String_Literal function can be
+constrained.
+
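+For instance (an illustrative sketch; Exactly_3, Short_Name, and
+To_Short are invented names):
+
+   subtype Exactly_3 is Wide_Wide_String (1 .. 3);
+
+   type Short_Name is private
+      with String_Literal => To_Short;
+
+   function To_Short (S : Exactly_3) return Short_Name;
+
+Here S is of type Wide_Wide_String, as the aspect requires, but the
+parameter passing itself checks the constraint: Short_Name'("abcd")
+would raise Constraint_Error, while Short_Name'("abc") would not.
+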
+****************************************************************
+
+From: Randy Brukardt
+Sent: Tuesday, January 23, 2018 3:30 PM
+
+...
+> > Indeed, I think we should get rid of this permission altogether and
+> > replace it with an AARM Implementation note that discusses as-if
+> > optimizations and the fact that an implementation need only call a
+> > pure/Global => null function only once per set of parameters. If the
+> > function has side-effects (yikes!), that's madness and the code
+> > should simply revert to canonical semantics. Who cares if stupid code is fast?
+>
+> I agree that we don't care much about functions that have side
+> effects, or about functions which are called twice with the same
+> arguments and return different results. Behavior in these cases needs
+> to be defined, but these are silly cases.
+>
+> The case I'm more concerned about is a function that reads (but does
+> not write) variable state. In particular, I'm thinking of variable state
+> that is updated at some point and never subsequently modified.
+> If all calls to the function occur after the state is updated, then
+> the state is effectively (for purposes of our
+> function) constant and our function then has the desired pure-ish
+> properties (no side effects and two calls with same arguments yield
+> same results).
+> But if an implementation permission allows evaluation of the function
+> call before the state update occurs, then problems result.
+>
+> ABE checks can be viewed as a special case of this if you choose to
+> think of there being a (notional) Boolean flag associated with each
+> subprogram indicating whether the body has been elaborated. The
+> elaboration of the body then has the effect of modifying the flag and
+> a call which depends on the value of this flag can be thought of as
+> reading variable state.
+
+This is exactly how ABE is implemented in Janus/Ada; nothing "notional"
+about it. :-)
+
+> And of course a function which is subject to its own ABE check is
+> hardly a pathological corner case.
+
+Right, but such a function is still Global => null; nothing about the Pure
+permission allows ABE violations!
+
+My point here was that we already have the needed permission in useful cases and
+the ones that aren't that useful (or safely optimizable) shouldn't be optimized
+anyway. For unbounded BigNum, evaluating a literal will do a heap allocation.
+Optimizing that could very well end up with the wrong answer.
+
+> > I don't see any way to reconcile the desire to "hoist" literal
+> > functions with these side-cases. And I don't really see the point;
+> > this is just a function call like any other function call.
+>
+> Agreed.
+>
+> > This only seems to improve the performance of weird (side-effects)
+> > or buggy (unexpected exceptions) literal routines.
+>
+> I disagree, as described above ...
+>
+> > And that seems to be exactly the
+> > cases we don't care about. So this smacks of premature optimization,
+> > especially as it only helps one specific corner case (as an
+> > implementer, I'm much more interested in things that will speed up a
+> > substantial fraction of functions).
+>
+> ... but I still think your conclusion about hoisting is right.
+
+Good. I won't argue with you more if we agree on the conclusion.
+
+> >>> * A literal may be usable on a partial view but not on the full view,
+> >>> if the full view is a type that already has meaning for the same
+> >>> sort of literal. That seems OK.
+> >>
+> >> Where is that defined? In a case like
+> >>
+> >> type T (<>) is private with String_Literal => Convert;
+> >> function Convert (X : Wide_Wide_String) return T;
+> >> procedure Foo (X : T := "abc");
+> >> private
+> >> type T is new Wide_Wide_String;
+> >> function Convert (X : Wide_Wide_String) return T is (">>>" & T (X));
+> >> procedure Foo (X : T := "abc") is null;
+> >>
+> >> I like the idea that the second "abc" is illegal . Otherwise we'd
+> >> need to modify the conformance rules to know about cases like this.
+> >
+> > It's defined by the way that he wrote the interpretation of
+> > literals; you check for the existing cases first, and only then (if
+> > the existing cases don't work) do you use the aspect specification.
+> > I don't think the wording is very clear about that.
+>
+> There are 2 ways to go if both mechanisms are available (as in the
+> private part of the above example): either a preference rule is used
+> to choose one interpretation over the other or the literal is illegal.
+> I interpret the statement
+> "A literal may be usable in a partial view but not on the
+> full view, if the full view is a type that already has a meaning
+> for the same sort of literal"
+> to mean that the second option has been selected.
+
+I read it to mean that the _Literal aspect implementation is not available.
+
+The literal being illegal is nonsense; how would you write operations on the
+full type if it is Integer or Float without literals? That would be very
+limiting as to what the abstraction could be.
+
+If that was going to be the case, then I'd suggest instead making the full types
+themselves illegal if they have conflicting literals. (Almost every private type
+ends up getting completed with a record type anyway, so the other cases are
+corner cases anyway.) That way, the programmer would know to wrap the full type
+in a record before proceeding.
+
+> I asked for an explanation of how this statement followed from the
+> wording and you (implicitly) agreed with me that you couldn't justify
+> it either (because the first option, not the second, was chosen).
+
+The first option has to be chosen, and I read the statement as describing that
+choice (apparently, not very well). Tucker had better tell us what he meant...
+
+> > ---
+> >
+> >> Modify paragraph 4.2(10):
+> >> The evaluation of a string_literal that is a primary {and has an
+> >> expected type that is a one-dimensional array type with a character
+> >> type as its component type,} yields an array value containing the value
+> >> of each character of the sequence of characters of the string_literal,
+> >> as defined in 2.6. The bounds of this array value are determined
+> >> according to the rules for positional_array_aggregates (see 4.3.3 ),
+> >> except that for a null string literal, the upper bound is the
+> >> predecessor of the lower bound.
+> >
+> > Don't we need these same rules for the bounds of the object that
+> > gets passed into the function specified by the String_Literal
+> > aspect? That object needs defined bounds just as much as the direct
+> > uses! The function that is called is just a normal Ada function
+> > after all, someone could reasonably query Val'First.
+>
+> Do we need to mention this dynamic-semantics stuff at all? If this
+> isn't already handled by an equivalence rule, then it should be.
+> T'("abc")
+> is equivalent to
+> T'(T'String_Literal_Function (Wide_String_Param =>
+> "abc")) and we don't need to say any more than that about the bounds
+> of the Wide_Wide_String literal.
+
+There's no literal equivalence in the wording Tucker sent, but I suppose you
+could argue that "the value is the result of a call on the function specified
+by the aspect, with the parameter being the Wide_Wide_String that corresponds
+to the literal" requires some sort of equivalence.
+
+I would want that to be a lot more explicit if you expect to get bounds and the
+various rules on returns from this call.
+
+> Note that the parameter subtype of the String_Literal function can be
+> constrained.
+
+That shouldn't be allowed, as it would allow only one length of literal (and
+force Constraint_Error for most literals, something we want to avoid for
+optimization reasons if nothing else). Note that the same consideration applies
+to the numeric literal routines. The wording of the rules suggests Tucker meant
+that the parameter would literally be Wide_Wide_String and nothing else, but I
+think you are right that a constrained subtype would be allowed. It probably
+should say "an unconstrained subtype of type Wide_Wide_String" and similarly for
+the numeric ones.
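+
+For instance (hypothetical declarations), the problematic case would be
+something like:
+
+   subtype Short is Wide_Wide_String (1 .. 3);
+   type T is private with String_Literal => Convert;
+   function Convert (X : Short) return T;
+   -- with a constrained parameter subtype, only 3-character literals
+   -- would avoid raising Constraint_Error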
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Tuesday, January 23, 2018 4:49 PM
+
+...
+> Don't they have to? The function call takes a Wide_Wide_String
+> parameter and one needs to know the bounds of that. And if those
+> bounds are garbage, we still have to raise Constraint_Error. (Note,
+> however, that the AI on "(null array)" -- AI12-0248-1 since I assigned
+> it a number an hour ago
+> -- eliminates this check, replacing it by a check that the index type
+> has more than one value. Or at least it would have, had Tucker
+> proposed any wording.)
+
+Good point, we need bounds for the Wide_Wide_String parameter, so those might as
+well be determined the same way as a regular string literal.
+
+...
+> It seems to me that the Pure function rules (and especially the
+> corresponding ones for functions with Global => null -- I hope these
+> exist, they did in every version that I worked on years ago -- we
+> don't have or want Pure_Function as we want Global instead, but then
+> it had better work the same) are more appropriate here, along with
+> as-if optimizations, than this rather bizarre permission.
+>
+> Indeed, I think we should get rid of this permission altogether and
+> replace it with an AARM Implementation note that discusses as-if
+> optimizations and the fact that an implementation need call a
+> pure/Global => null function only once per set of parameters. If the
+> function has side-effects (yikes!), that's madness and the code should
+> simply revert to canonical semantics. Who cares if stupid code is fast?
+
+OK, I guess I am convinced. We might encourage implementations to report at
+compile-time any expected run-time failures for literals. And I suppose we
+might as well allow the exceptions propagated by the conversion to simply
+propagate, and remove the bounded error completely. Still seems a bit weird for
+a literal to propagate an exception, rather than being rejected at compile-time.
+
+Perhaps we could still grant a permission to give a compile-time error if the
+implementation can determine that the evaluation of a literal will always
+fail, which I suppose gets us back into treating it like a bounded error.
+
+> ...
+>
+> I don't see any way to reconcile the desire to "hoist" literal
+> functions with these side-cases. And I don't really see the point;
+> this is just a function call like any other function call. If it is
+> simple enough, it probably will get auto-inlined and you are saving
+> essentially nothing with this permission and bounded error. Or extra
+> calls can be eliminated using the Pure/Global permissions noted earlier.
+>
+> This only seems to improve the performance of weird (side-effects) or
+> buggy (unexpected exceptions) literal routines. And that seems to be
+> exactly the cases we don't care about. So this smacks of premature
+> optimization, especially as it only helps one specific corner case (as
+> an implementer, I'm much more interested in things that will speed up
+> a substantial fraction of functions).
+
+I am not so much concerned about performance as I am about not hearing about
+bad literals until run-time.
+
+In ParaSail, a precondition on the conversion function is the way that you
+indicate exactly what range of literals is supported for the type. It turns
+out that in ParaSail every (library-level) function is "pure," and all pure
+functions with static parameters are evaluated at compile-time, so you find
+out about bad literals when you would expect, namely at compile-time.
+
+So I guess I would argue for some kind of bounded error, to allow
+implementations to give a compile-time error. But I agree that raising an
+exception at any point other than the occurrence of the literal is not useful.
+
+>>> * A literal may be usable on a partial view but not on the full view,
+>>> if the full view is a type that already has meaning for the same
+>>> sort of literal. That seems OK.
+>>
+>> Where is that defined? In a case like
+>>
+>> type T (<>) is private with String_Literal => Convert;
+>> function Convert (X : Wide_Wide_String) return T;
+>> procedure Foo (X : T := "abc");
+>> private
+>> type T is new Wide_Wide_String;
+>> function Convert (X : Wide_Wide_String) return T is (">>>" & T (X));
+>> procedure Foo (X : T := "abc") is null;
+>>
+>> I like the idea that the second "abc" is illegal. Otherwise we'd
+>> need to modify the conformance rules to know about cases like this.
+>
+> It's defined by the way that he wrote the interpretation of literals;
+> you check for the existing cases first, and only then (if the existing
+> cases don't work) do you use the aspect specification. I don't think
+> the wording is very clear about that.
+
+Yes, some clarification is required. But it sounds like there is agreement that
+when you can see the full view, and it has a particular kind of literal, we want
+that literal interpreted in the "builtin" way.
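+
+For instance, in the private part of the earlier example (hypothetical code),
+the built-in interpretation would win:
+
+   private
+      type T is new Wide_Wide_String;
+      X : T := "abc";  -- interpreted as the built-in Wide_Wide_String
+                       -- literal, not as a call on Convert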
+
+> Note that this should be just hiding the specified literal with the
+> language-defined literal meaning. If you can give a literal for a
+> private type, it can always be given for the full type (but it might
+> not mean exactly the same thing).
+>
+> In any case, you are right that the conformance rules need to be modified.
+> If someone defines these using a goofy function with side-effects,
+> you'll need to reject conformance anyway:
+>
+>> type T (<>) is private with String_Literal => Convert;
+>> function Convert (X : Wide_Wide_String) return T;
+>> procedure Foo (X : T := "abc");
+>> private
+>> type T is new Float;
+>> function Convert (X : Wide_Wide_String) return T is (Random);
+>> procedure Foo (X : T := "abc") is null;
+>
+> Each evaluation of "abc" gives a different value of T, so arguably the
+> two Foos don't conform.
+>
+> And even if we decided to ignore this (perhaps we ought to -- here's a
+> place where a Bounded Error makes sense), we still need to detect the
+> case noted previously where the outer and inner literals use different rules.
+
+Yes, we should complain in that (Baird-ian ;-) case.
+
+> ---
+>
+>> Modify paragraph 3.6.3(1):
+>> A one-dimensional array type whose component type is a character type
+>> is called a /string/ type{, as is a type with a specified String_Literal
+>> aspect (see 4.2.1)}.
+>
+> Umm, don't "string types" have a bunch of extra semantics beyond just
+> literals? In particular, string types can be static. Doesn't this
+> cause some problems?
+
+Good point, we will have to look at that.
+
+>
+> There is a similar comment about "character types" (wording not shown).
+>
+> ---
+>
+>> Modify paragraph 4.2(9):
+>> {If its expected type is a numeric type, t}[T]he evaluation of a
+>> numeric literal[, or the literal null,] yields the represented value.
+>> {If its expected type is an access type, the evaluation of the
+>> literal null yields the null value of the expected type.} In other
+>> cases, the effect of evaluating a numeric or null literal is
+>> determined by the Integer_Literal, Real_Literal, or Null_Literal
+>> aspect that applies (see 4.2.1).
+>
+> Something is wrong with the Insertion/Deletion marks here, as I'm
+> pretty sure that text involving the Null_Literal aspect is not
+> existing text. :-)
+
+Good point.
+
+> ---
+>
+>> Modify paragraph 4.2(10):
+>> The evaluation of a string_literal that is a primary {and has an
+>> expected type that is a one-dimensional array type with a character
+>> type as its component type,} yields an array value containing the
+>> value of each character of the sequence of characters of the
+>> string_literal, as defined in 2.6. The bounds of this array value are
+>> determined according to the rules for positional_array_aggregates
+>> (see 4.3.3 ), except that for a null string literal, the upper bound
+>> is the predecessor of the lower bound.
+>
+> Don't we need these same rules for the bounds of the object that gets
+> passed into the function specified by the String_Literal aspect? That
+> object needs defined bounds just as much as the direct uses! The
+> function that is called is just a normal Ada function after all,
+> someone could reasonably query Val'First.
+
+Yes, as mentioned above, I agree we need to define the bounds of the
+Wide_Wide_String passed into the user's conversion function.
+
+> ---
+>
+>> * If we go with this proposal, we will have to decide whether to use
+>> this feature for existing language-defined packages (e.g.
+>> Unbounded_Strings), and if so, whether certain existing functions
+>> should be eliminated from the packages (e.g. overloadings of "&" and
+>> the comparison operators that have String as one of the operand
+>> types). At this stage it might be safer to define a new package and
+>> move the original Unbounded_Strings package to Annex J.
+>
+> The reason that we did not make such a proposal for literals in Ada
+> 2012 (we discussed it briefly) was that it couldn't be used in
+> Ada.Strings.Unbounded.
+>
+> I think the only way would be to replace the package. One reason to do
+> so is to come up with an overriding string abstraction; I made an
+> attempt at that in AI12-0021-1. I note that it is suspiciously similar
+> to this idea, as it uses Wide_Wide_String as an intermediary to support
+> conversions between arbitrary string types. The proposal even used an
+> imaginary "String_Literal" aspect!
+>
+> If one had such a package, then you'd want to replace all of the
+> existing string packages with class-wide ones that would allow storing
+> any string type. (That would make unbounded-ness orthogonal to the
+> representation of the actual string; you'd probably implement the
+> storage as an array of stream elements.) That would give a good excuse
+> to introduce proper literals. (We'd also be able to make UTF-8 and the
+> like into proper types, and allow any of those to represent file names
+> and other I/O strings -- it would mean the end of Wide_Wide_Wide_Wide_
+> Madness.)
+
+I would want that to be a separate AI in any case. I think this AI will not
+propose using this feature in the existing String packages. A user could derive
+from one of these types and add the string literals, presumably.
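+
+For instance, a hypothetical derivation (From_Literal is an invented name):
+
+   type My_String is new Ada.Strings.Unbounded.Unbounded_String
+      with String_Literal => From_Literal;
+   function From_Literal (S : Wide_Wide_String) return My_String;
+   -- the body would presumably encode S as a String and call
+   -- To_Unbounded_String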
+
+>> Alternatively, we could define some kind of "preference"
+>> for or against user-defined interpretation of literals, though we
+>> know that "Beaujolais" effects are lurking around the corner when
+>> you set up preference rules. We already have some preference rules
+>> dealing with "root" numeric types, and these could be seen as
+>> similar, but the root-numeric-type preference avoids Beaujolais
+>> effects because the preference is for operations that are always visible.
+>> So barring some clear proof of being Beaujolais-effect-free, we
+>> should probably steer clear of a preference rule.
+>
+> Umm, no thanks. Root numeric preference is a mess in our compiler,
+> and I have no interest in trying to figure out something like it for other
+> things.
+
+Agreed -- no preference rule.
+
+****************************************************************
+
+From: Jean-Pierre Rosen
+Sent: Tuesday, January 23, 2018 11:26 PM
+
+>>>> type T (<>) is private with String_Literal => Convert;
+>>>> function Convert (X : Wide_Wide_String) return T;
+>>>> procedure Foo (X : T := "abc");
+>>>> private
+>>>> type T is new Wide_Wide_String;
+>>>> function Convert (X : Wide_Wide_String) return T is (">>>" & T (X));
+>>>> procedure Foo (X : T := "abc") is null;
+
+Couldn't we make this illegal (as ambiguous), but require:
+ procedure Foo (X : T := Wide_Wide_String'("abc")) is null;
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, January 24, 2018 2:31 AM
+
+Here you are implying there is an implicit conversion between Wide_Wide_String
+and T, and that is not the anticipated semantics of this feature.
+
+****************************************************************
+
+From: Jean-Pierre Rosen
+Sent: Wednesday, January 24, 2018 8:59 AM
+
+No, I mean the qualification requires "abc" to be Wide_Wide_String, therefore
+the assignment triggers the call to the convert function. No ambiguity.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, January 24, 2018 9:10 AM
+
+It doesn't look like that to me. It is really weird for me to see
+"X : T := Y'(blah)". Everywhere else in Ada, the type of Y'(blah) is Y, not T.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, January 24, 2018 12:16 PM
+
+>> Everywhere else in Ada the type of Y'(blah) is Y, not T.
+> Yes - that's what implicit conversion is about....
+
+But we certainly are not saying that all Wide_Wide_String expressions are
+implicitly convertible to all types with a String_Literal aspect. We are
+only saying that string literals are overloaded on such types. That is quite
+different. Wide_Wide_String serves only as an intermediate step in the
+conversion; it should not be considered the type of the literal in the
+source code.
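+
+For instance (hypothetical declarations, assuming the aspect as proposed):
+
+   type T (<>) is private with String_Literal => Convert;
+   function Convert (X : Wide_Wide_String) return T;
+
+   X : T := "abc";                          -- legal: the literal is
+                                            -- overloaded on T
+   S : constant Wide_Wide_String := "abc";
+   Y : T := T (S);                          -- illegal: Wide_Wide_String is
+                                            -- not convertible to T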
****************************************************************
Questions? Ask the ACAA Technical Agent