Version 1.8 of ai12s/ai12-0366-1.txt

!standard A.5.6(0)          20-05-05 AI12-0366-1/04
!standard A.5.7(0)
!class Amendment 20-02-10
!status Amendment 1-2012 20-04-30
!status ARG Approved 13-1-0 20-04-30
!status work item 20-02-10
!status received 20-01-02
!priority Low
!difficulty Easy
!subject Changes to Big_Integer and Big_Real
!summary
!problem
After implementing and experimenting with Big_Integers and Big_Reals at AdaCore both from an Ada usage point of view and from a SPARK usage point of view, we came to the conclusion that the proposed API is unnecessarily complex and confusing for users. In addition, for SPARK there is a desire to map these to mathematical integers/reals, and having an explicit "invalid" representation prevents a natural mapping.
!proposal
We propose to eliminate the ability to create an explicitly invalid representation, and try to model Big_Integer more closely on Integer (and Big_Real more closely on Float), where it is potentially invalid if never initialized, but there is no way to generate an invalid value other than by omitting initialization. As a result we dispense with the word "Optional", though we preserve the way to check whether a Big_Integer/Real Is_Valid.
We originally intended to use Is_Valid directly in pre- and postconditions to clarify where the parameters and the results are expected to be valid. We ultimately declared a named subtype Valid_Big_Integer/Real with a Dynamic_Predicate => Is_Valid (Valid_Big_Integer/Real) and a Predicate_Failure => raise Program_Error, to avoid having to repeat the "or else raise Program_Error" on every appearance of Is_Valid. We were originally concerned that adding a named subtype would make the package more complex without providing sufficient benefit to the user, but having to repeat "or else raise Program_Error" in every precondition seems to tip the scale toward the use of a predicated subtype, and actually shortens and unclutters the final package specification.
We specify that the function Is_Valid has an Intrinsic calling convention to prevent a call on it from being "hidden" from the compiler by a level of indirection, so that it can be implemented by an attribute in some contexts (e.g. 'Initialized in SPARK).
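As a rough illustration of the intended user model (a non-normative sketch; the procedure name is made up, and it assumes predicate checks are enabled), an uninitialized Big_Integer behaves much like an uninitialized Integer, except that its use in an operation is detected:

   with Ada.Numerics.Big_Numbers.Big_Integers;
   use  Ada.Numerics.Big_Numbers.Big_Integers;

   procedure Example is
      A : Big_Integer;                        -- never initialized; Is_Valid (A) is False
      B : Big_Integer := To_Big_Integer (2);  -- valid
   begin
      if not Is_Valid (A) then
         A := To_Big_Integer (0);             -- the explicit check is still available
      end if;
      B := B + A;                             -- fine: both operands are now valid
      --  Had A been left uninitialized, this call would have failed the
      --  Valid_Big_Integer predicate of the formal parameter of "+", and the
      --  Predicate_Failure aspect would have raised Program_Error.
   end Example;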
!wording
[Author's note: This includes the effect of AI12-0340-1; it changed Put_Image to work on Text_Buffers.]
Replace A.5.6 (2/5-21/5) with the following:
with Ada.Strings.Text_Buffers;
package Ada.Numerics.Big_Numbers.Big_Integers
  with Preelaborate, Nonblocking, Global => null
is
   type Big_Integer is private
     with Integer_Literal => From_String,
          Put_Image => Put_Image;

   function Is_Valid (Arg : Big_Integer) return Boolean
     with Convention => Intrinsic;

   subtype Valid_Big_Integer is Big_Integer
     with Dynamic_Predicate => Is_Valid (Valid_Big_Integer),
          Predicate_Failure => raise Program_Error;

   function "="  (L, R : Valid_Big_Integer) return Boolean;
   function "<"  (L, R : Valid_Big_Integer) return Boolean;
   function "<=" (L, R : Valid_Big_Integer) return Boolean;
   function ">"  (L, R : Valid_Big_Integer) return Boolean;
   function ">=" (L, R : Valid_Big_Integer) return Boolean;

   function To_Big_Integer (Arg : Integer) return Valid_Big_Integer;

   subtype Big_Positive is Big_Integer
     with Dynamic_Predicate =>
            (if Is_Valid (Big_Positive) then Big_Positive > 0),
          Predicate_Failure => raise Constraint_Error;

   subtype Big_Natural is Big_Integer
     with Dynamic_Predicate =>
            (if Is_Valid (Big_Natural) then Big_Natural >= 0),
          Predicate_Failure => raise Constraint_Error;

   function In_Range (Arg, Low, High : Valid_Big_Integer) return Boolean is
     (Low <= Arg and Arg <= High);

   function To_Integer (Arg : Valid_Big_Integer) return Integer
     with Pre => In_Range (Arg,
                           Low  => To_Big_Integer (Integer'First),
                           High => To_Big_Integer (Integer'Last))
                  or else raise Constraint_Error;

   generic
      type Int is range <>;
   package Signed_Conversions is
      function To_Big_Integer (Arg : Int) return Valid_Big_Integer;
      function From_Big_Integer (Arg : Valid_Big_Integer) return Int
        with Pre => In_Range (Arg,
                              Low  => To_Big_Integer (Int'First),
                              High => To_Big_Integer (Int'Last))
                     or else raise Constraint_Error;
   end Signed_Conversions;

   generic
      type Int is mod <>;
   package Unsigned_Conversions is
      function To_Big_Integer (Arg : Int) return Valid_Big_Integer;
      function From_Big_Integer (Arg : Valid_Big_Integer) return Int
        with Pre => In_Range (Arg,
                              Low  => To_Big_Integer (Int'First),
                              High => To_Big_Integer (Int'Last))
                     or else raise Constraint_Error;
   end Unsigned_Conversions;

   function To_String (Arg   : Valid_Big_Integer;
                       Width : Field := 0;
                       Base  : Number_Base := 10) return String
     with Post => To_String'Result'First = 1;

   function From_String (Arg : String) return Valid_Big_Integer;

   procedure Put_Image
     (Buffer : in out Ada.Strings.Text_Buffers.Root_Buffer_Type'Class;
      Arg    : in Valid_Big_Integer);

   function "+"   (L : Valid_Big_Integer) return Valid_Big_Integer;
   function "-"   (L : Valid_Big_Integer) return Valid_Big_Integer;
   function "abs" (L : Valid_Big_Integer) return Valid_Big_Integer;
   function "+"   (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "-"   (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "*"   (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "/"   (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "mod" (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "rem" (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "**"  (L : Valid_Big_Integer; R : Natural)
      return Valid_Big_Integer;
   function Min (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function Max (L, R : Valid_Big_Integer) return Valid_Big_Integer;

   function Greatest_Common_Divisor
     (L, R : Valid_Big_Integer) return Big_Positive
     with Pre => (L /= 0 and R /= 0) or else raise Constraint_Error;

private
   ... -- not specified by the language
end Ada.Numerics.Big_Numbers.Big_Integers;
Modify A.5.6(22/5):
To_String and From_String behave analogously to the Put and Get procedures defined in Text_IO.Integer_IO (in particular, with respect to the interpretation of the Width and Base parameters) except that Constraint_Error, not Data_Error, is propagated in error cases and the result of a call to To_String with a Width parameter of 0 and a nonnegative Arg parameter does not include a leading blank. Put_Image calls To_String (passing in the default values for the Width and Base parameters), prepends a leading blank if the argument is nonnegative, [converts that String to a Wide_Wide_String using To_Wide_Wide_String,] and writes the resulting value to the [stream using Wide_Wide_String'Write]{buffer using Text_Buffers.Put}.
Modify A.5.6 (24/5-26/5) [reordering these paragraphs so the last paragraph is in the middle]:
The type [Optional_]Big_Integer needs finalization (see 7.6).
{Dynamic Semantics}
For purposes of determining whether predicate checks are performed as part of default initialization, the type [Optional_]Big_Integer [shall be]{is} considered to have a subcomponent that has a default_expression.
Implementation Requirements
No storage associated with {a}[an Optional_]Big_Integer object shall be lost upon assignment or scope exit.
Replace A.5.7 (2/5 to 19/5) with the following:
with Ada.Numerics.Big_Numbers.Big_Integers;
use all type Big_Integers.Big_Integer;
with Ada.Strings.Text_Buffers;
package Ada.Numerics.Big_Numbers.Big_Reals
  with Preelaborate, Nonblocking, Global => null
is
   type Big_Real is private
     with Real_Literal => From_String,
          Put_Image => Put_Image;

   function Is_Valid (Arg : Big_Real) return Boolean
     with Convention => Intrinsic;

   subtype Valid_Big_Real is Big_Real
     with Dynamic_Predicate => Is_Valid (Valid_Big_Real),
          Predicate_Failure => raise Program_Error;

   function "/" (Num, Den : Big_Integers.Valid_Big_Integer)
      return Valid_Big_Real
     with Pre => Den /= 0 or else raise Constraint_Error;

   function Numerator
     (Arg : Valid_Big_Real) return Big_Integers.Valid_Big_Integer
     with Post => (if Arg = 0.0 then Numerator'Result = 0);
AARM Reason: The postcondition of Numerator cannot be complete as it cannot mention Denominator. Since the postcondition of Denominator uses Numerator, we would get an infinite mutual recursion if both postconditions are enabled. The postcondition of Denominator serves as the postcondition for Numerator as well unless Arg = 0.0.
   function Denominator
     (Arg : Valid_Big_Real) return Big_Integers.Big_Positive
     with Post =>
       (if Arg = 0.0 then Denominator'Result = 1
        else Big_Integers.Greatest_Common_Divisor
               (Numerator (Arg), Denominator'Result) = 1);

   function To_Big_Real
     (Arg : Big_Integers.Valid_Big_Integer) return Valid_Big_Real is
       (Arg / 1);

   function To_Real (Arg : Integer) return Valid_Big_Real is
     (Big_Integers.To_Big_Integer (Arg) / 1);

   function "="  (L, R : Valid_Big_Real) return Boolean;
   function "<"  (L, R : Valid_Big_Real) return Boolean;
   function "<=" (L, R : Valid_Big_Real) return Boolean;
   function ">"  (L, R : Valid_Big_Real) return Boolean;
   function ">=" (L, R : Valid_Big_Real) return Boolean;

   function In_Range (Arg, Low, High : Valid_Big_Real) return Boolean is
     (Low <= Arg and Arg <= High);

   generic
      type Num is digits <>;
   package Float_Conversions is
      function To_Big_Real (Arg : Num) return Valid_Big_Real;
      function From_Big_Real (Arg : Valid_Big_Real) return Num
        with Pre => In_Range (Arg,
                              Low  => To_Big_Real (Num'First),
                              High => To_Big_Real (Num'Last))
                     or else raise Constraint_Error;
   end Float_Conversions;

   generic
      type Num is delta <>;
   package Fixed_Conversions is
      function To_Big_Real (Arg : Num) return Valid_Big_Real;
      function From_Big_Real (Arg : Valid_Big_Real) return Num
        with Pre => In_Range (Arg,
                              Low  => To_Big_Real (Num'First),
                              High => To_Big_Real (Num'Last))
                     or else raise Constraint_Error;
   end Fixed_Conversions;

   function To_String (Arg  : Valid_Big_Real;
                       Fore : Field := 2;
                       Aft  : Field := 3;
                       Exp  : Field := 0) return String
     with Post => To_String'Result'First = 1;

   function From_String (Arg : String) return Valid_Big_Real;

   function To_Quotient_String (Arg : Valid_Big_Real) return String is
     (Big_Integers.To_String (Numerator (Arg)) & " / " &
      Big_Integers.To_String (Denominator (Arg)));

   function From_Quotient_String (Arg : String) return Valid_Big_Real;

   procedure Put_Image
     (Buffer : in out Ada.Strings.Text_Buffers.Root_Buffer_Type'Class;
      Arg    : in Valid_Big_Real);

   function "+"   (L : Valid_Big_Real) return Valid_Big_Real;
   function "-"   (L : Valid_Big_Real) return Valid_Big_Real;
   function "abs" (L : Valid_Big_Real) return Valid_Big_Real;
   function "+"   (L, R : Valid_Big_Real) return Valid_Big_Real;
   function "-"   (L, R : Valid_Big_Real) return Valid_Big_Real;
   function "*"   (L, R : Valid_Big_Real) return Valid_Big_Real;
   function "/"   (L, R : Valid_Big_Real) return Valid_Big_Real;
   function "**"  (L : Valid_Big_Real; R : Integer) return Valid_Big_Real;
   function Min (L, R : Valid_Big_Real) return Valid_Big_Real;
   function Max (L, R : Valid_Big_Real) return Valid_Big_Real;

private
   ... -- not specified by the language
end Ada.Numerics.Big_Numbers.Big_Reals;
Modify A.5.7 (20/5):
To_String and From_String behave analogously to the Put and Get procedures defined in Text_IO.Float_IO (in particular, with respect to the interpretation of the Fore, Aft, and Exp parameters), except that Constraint_Error (not Data_Error) is propagated in error cases. From_Quotient_String implements the inverse function of To_Quotient_String; Constraint_Error is propagated in error cases. Put_Image calls To_String,[ converts that String to a Wide_Wide_String using To_Wide_Wide_String,] and {writes} the resulting value to the [stream] {buffer} using [Wide_Wide_String'Write]{Text_Buffers.Put}.
Modify A.5.7 (23/5 to 25/5) [reordering these paragraphs so the last paragraph is in the middle]:
The type [Optional_]Big_Real needs finalization (see 7.6).
{Dynamic Semantics}
For purposes of determining whether predicate checks are performed as part of default initialization, the type [Optional_]Big_Real [shall be] {is} considered to have a subcomponent that has a default_expression.
Implementation Requirements
No storage associated with {a}[an Optional_]Big_Real object shall be lost upon assignment or scope exit.
!discussion
See the !proposal for most of the rationale. We change the Global aspects to be simply "null", though the implementation is always permitted to add back the "synchronized in out Big_Integers/Reals" if needed. At least logically, there is no need for any side effects in any of the operations.
We add a "use all type Big_Integers.Big_Integer;" to the context clause for Big_Reals to make the specification easier to read. It seems unlikely that this will introduce any confusion or ambiguity.
After changing the "shall be" to "is", we move the indications that predicate checks are performed on default initialization to new Dynamic Semantics sections, as that is where they seem to best belong.
We changed the postcondition of Denominator to say something about the result when Arg = 0.0, and added a postcondition to Numerator to do the same.
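As an illustration of the intended net effect of those postconditions (a non-normative sketch; the procedure name is made up, and the particular results assume the usual reading, confirmed in the !appendix discussion, that Numerator (Arg) / Denominator (Arg) equals Arg and hence that results are reported in lowest terms):

   with Ada.Numerics.Big_Numbers.Big_Integers;
   with Ada.Numerics.Big_Numbers.Big_Reals;

   procedure Check is
      use Ada.Numerics.Big_Numbers.Big_Integers;
      use Ada.Numerics.Big_Numbers.Big_Reals;
      X : constant Valid_Big_Real := To_Big_Integer (2) / To_Big_Integer (12);
      Z : constant Valid_Big_Real := To_Big_Integer (0) / To_Big_Integer (7);
   begin
      --  For nonzero Arg, Denominator's postcondition requires coprime results:
      pragma Assert (Numerator (X) = To_Big_Integer (1));
      pragma Assert (Denominator (X) = To_Big_Integer (6));
      pragma Assert (To_Quotient_String (X) = "1 / 6");
      --  The zero case is now pinned down explicitly by both postconditions:
      pragma Assert (Numerator (Z) = To_Big_Integer (0));
      pragma Assert (Denominator (Z) = To_Big_Integer (1));
   end Check;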
!corrigendum A.5.6(0/5)
Insert new clause:
Dummy to force a conflict; the wording changes are in the conflict file.
!corrigendum A.5.7(0/5)
Insert new clause:
Dummy to force a conflict; the wording changes are in the conflict file.
!ASIS
Most likely, no ASIS support needed. (Unsure if the special semantics of Is_Valid requires any changes).
!ACATS test
ACATS B- and C-Tests are needed to check that the changes are supported (but mostly can be covered by tests for the original features).
!appendix

From: Tucker Taft
Sent: Monday, December 2, 2019  8:18 AM

Bob Duff noticed that we used a "shall" where we should simply be saying "is", 
in A.5.6(26/5):

(26/5) For purposes of determining whether predicate checks are performed as 
part of default initialization, the type Optional_Big_Integer [shall be] {is} 
considered to have a subcomponent that has a default_expression.

If we agree with this change, then the paragraph should be moved to "static 
semantics" from its current location in "implementation requirements."

****************************************************************

From: Randy Brukardt
Sent: Monday, December 2, 2019  9:38 PM

> Bob Duff noticed that we used a "shall" where we should simply be 
> saying "is", in A.5.6(26/5):
> 
> (26/5) For purposes of determining whether predicate checks are 
> performed as part of default initialization, the type 
> Optional_Big_Integer [shall be] {is} considered to have a subcomponent 
> that has a default_expression.

A.5.7(25.5) has similar wording.
 
> If we agree with this change, then the paragraph should be moved to 
> "static semantics" from its current location in "implementation 
> requirements."

I'm not certain I agree with this. I looked up all of the uses of "for the 
purpose of" in the RM, and it is in many sections but I didn't find any in 
"static semantics". (I saw "Legality Rules", "Metrics", and "Post-Compilation 
Rules" along with "Implementation Requirements"). The closest similar case is 
A(3.1/4), which is in Implementation Requirements.
(But it's not that similar; I do note that it uses "is".)

There are more uses of "for the purposes of", but again the majority of those 
are in Legality Rules or Dynamic Semantics. There are also some non-normative 
uses. There are a handful in "Static Semantics" sections:

There are some associated with language-defined packages, but for those the 
entire definition is in "Static Semantics".

3.9.3(12.3/3) is in "Static Semantics", but it starts "For the purposes of 
dynamic semantics...". Bizarre!

3.3(23.11/5) is in "Static Semantics", but it is part of the definition of 
"known to be constrained". A rather different case.

6.3.1(24.1/2) is in "Static Semantics", but it is part of the definition of 
"known to be constrained". A rather different case.

10.2.1(15.3/2) is in "Static Semantics", but it is part of the preceding rule 
rather than standing alone.

12.6(9.1/3) is in "Static Semantics", but it is connected by "then" to 
preceding text (it also does not stand alone).

13.10(3) is in "Static Semantics", but it is connected by "except" to 
preceding text (a very different meaning).

Conclusion: I can't find anything very similar in the Standard to this 
wording. There's an argument for leaving it in "Implementation Requirements"
and an argument for moving it to "Static Semantics". There's no argument for
using "shall", however.

****************************************************************

From: Tucker Taft
Sent: Tuesday, December 3, 2019  9:09 AM

It could be in dynamic semantics instead.  Either makes more sense than being 
an implementation requirement.

****************************************************************

From: Tucker Taft
Sent: Tuesday, December 10, 2019  1:08 PM

The GNAT folks have implemented a variant of the Big_Integers API, after a 
fair amount of internal discussion.  In this variant, there is no 
Optional_Big_Integer, but Big_Integer can be declared without an explicit 
initial value.  Ideally Bob or Steve will write up an AI to give the 
details.  The goal was to make Big_Integer more like "regular" Integer.

****************************************************************

From: Jeff Cousins
Sent: Thursday, December 12, 2019  2:38 AM

> The GNAT folks have implemented a variant of the Big_Integers API
 
That sounds like good news!

****************************************************************

From: Arnaud Charlet
Sent: Thursday, December 12, 2019  2:43 AM

Right and this confirms that prototyping and experimenting before 
standardizing is critically important and useful.

****************************************************************

From: Arnaud Charlet
Sent: Tuesday, January 14, 2020  2:33 AM

After implementing and experimenting with Big_Integers and Big_Reals at 
AdaCore both from an Ada usage point of view and from a SPARK usage point 
of view, we came to the conclusion that the proposed API is unnecessarily 
complex and confusing for users, and in addition not practical for use in 
SPARK.

For Ada users: having to deal with two different types (a type and a subtype) 
and having to choose which one is relevant is a real burden and can actually 
lead to the wrong choice, only caught late at runtime, and makes reading and 
understanding the API much harder.

For SPARK users: having two different types makes generating provable formulas 
much harder and not direct. Since we anticipate SPARK users to be "big users" 
of "big integers", not having a straightforward and optimal mapping is really 
not suitable.

Proposed change: use a single type for Big_Integer and Big_Real and remove 
all the Optional_Big_* types. Instead state that reading an uninitialized
Big_* variable is a bounded error, similar to all scalar types, and add a 
precondition to each subprogram in the Big_Integers/Big_Reals packages 
ensuring that all possible uses of these objects are properly checked, except 
for the Is_Valid function itself, which is marked Intrinsic so that its 
'Access cannot be taken.

See the below for the proposed API for Big_Integers. A similar change is 
proposed for Big_Reals which has been successfully implemented and used at 
AdaCore.

--
with Ada.Streams;

package Ada.Numerics.Big_Numbers.Big_Integers
  with Preelaborate, Nonblocking
is
   type Big_Integer is private
     with Integer_Literal => From_String,
          Put_Image => Put_Image;

   function Is_Valid (Arg : Big_Integer) return Boolean
     with Convention => Intrinsic;

   function "=" (L, R : Big_Integer) return Boolean
     with Pre => Is_Valid (L) and Is_Valid (R);

   function "<" (L, R : Big_Integer) return Boolean
     with Pre => Is_Valid (L) and Is_Valid (R);

   function "<=" (L, R : Big_Integer) return Boolean
     with Pre => Is_Valid (L) and Is_Valid (R);

   function ">" (L, R : Big_Integer) return Boolean
     with Pre => Is_Valid (L) and Is_Valid (R);

   function ">=" (L, R : Big_Integer) return Boolean
     with Pre => Is_Valid (L) and Is_Valid (R);

   function To_Big_Integer (Arg : Integer) return Big_Integer;

   subtype Big_Positive is Big_Integer
     with Dynamic_Predicate => Big_Positive > 0,
          Predicate_Failure => (raise Constraint_Error);

   subtype Big_Natural is Big_Integer
     with Dynamic_Predicate => Big_Natural >= 0,
          Predicate_Failure => (raise Constraint_Error);

   function In_Range (Arg, Low, High : Big_Integer) return Boolean is
     ((Low <= Arg) and (Arg <= High))
     with Pre => Is_Valid (Arg) and Is_Valid (Low) and Is_Valid (High);

   function To_Integer (Arg : Big_Integer) return Integer
     with Pre => In_Range (Arg,
                           Low  => To_Big_Integer (Integer'First),
                           High => To_Big_Integer (Integer'Last))
                  or else (raise Constraint_Error);

   generic
      type Int is range <>;
   package Signed_Conversions is

      function To_Big_Integer (Arg : Int) return Big_Integer;

      function From_Big_Integer (Arg : Big_Integer) return Int
        with Pre => In_Range (Arg,
                              Low  => To_Big_Integer (Int'First),
                              High => To_Big_Integer (Int'Last))
                     or else (raise Constraint_Error);

   end Signed_Conversions;

   generic
      type Int is mod <>;
   package Unsigned_Conversions is

      function To_Big_Integer (Arg : Int) return Big_Integer;

      function From_Big_Integer (Arg : Big_Integer) return Int
        with Pre => In_Range (Arg,
                              Low  => To_Big_Integer (Int'First),
                              High => To_Big_Integer (Int'Last))
                     or else (raise Constraint_Error);

   end Unsigned_Conversions;

   function To_String (Arg   : Big_Integer;
                       Width : Field := 0;
                       Base  : Number_Base := 10) return String
     with Pre  => Is_Valid (Arg),
          Post => To_String'Result'First = 1;

   function From_String (Arg : String) return Big_Integer;

   procedure Put_Image
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Arg    : Big_Integer)
     with Pre => Is_Valid (Arg);

   function "+" (L : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L);

   function "-" (L : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L);

   function "abs" (L : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L);

   function "+" (L, R : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L) and Is_Valid (R);

   function "-" (L, R : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L) and Is_Valid (R);

   function "*" (L, R : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L) and Is_Valid (R);

   function "/" (L, R : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L) and Is_Valid (R);

   function "mod" (L, R : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L) and Is_Valid (R);

   function "rem" (L, R : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L) and Is_Valid (R);

   function "**" (L : Big_Integer; R : Natural) return Big_Integer
     with Pre => Is_Valid (L) and Is_Valid (R);

   function Min (L, R : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L) and Is_Valid (R);

   function Max (L, R : Big_Integer) return Big_Integer
     with Pre => Is_Valid (L) and Is_Valid (R);

   function Greatest_Common_Divisor
     (L, R : Big_Integer) return Big_Positive
     with Pre => (L /= 0 and R /= 0) or else (raise Constraint_Error);

private

   ... -- not specified by the language

end Ada.Numerics.Big_Numbers.Big_Integers;

****************************************************************

From: Randy Brukardt
Sent: Tuesday, January 14, 2020  4:39 PM

I think that solution was a case of temporary mass insanity, as it doesn't 
make a lot of sense on review. In hindsight, I have no idea why we went 
with that - probably an attempt to split the baby.

Anyway, a procedural note on this: the deadline for submissions for the 
upcoming meeting was noon yesterday. We set these deadlines early by the 
request of various ARG members who wanted a reasonable amount of time to 
review AIs before a meeting. Before we set that rule, certain people would 
do their homework on the plane to a meeting, leaving us discussing something 
no one had seen.

I can sometimes take late submissions (as with yesterday, when I ended up with 
a lengthy afternoon dental appointment), but I need to be given a heads-up 
before the deadline that they are coming so I can avoid repeating work.

As such, this submission won't be processed until after tomorrow's meeting, 
and won't be discussed unless we somehow magically get through all of the AIs 
on the agenda (very unlikely in three hours).

Ideally, submissions that aren't ready by the deadline should be held until 
after the meeting so as to not distract people with things that aren't 
immediately relevant and/or having it get lost as it may disappear into the 
volume of one's inbox by the time the meeting is over.

****************************************************************

From: Steve Baird
Sent: Tuesday, January 14, 2020  4:59 PM

> As such, this submission won't be processed until after tomorrow's 
> meeting, and won't be discussed unless we somehow magically get 
> through all of the AIs on the agenda (very unlikely in three hours).

Right - this wasn't intended to be discussed this meeting (unless, as you 
described, we run out of other topics).

Arno and I discussed this at a meeting on Monday morning (the end of the 
meeting coincided with the noon deadline) and I encouraged him to send the 
message without making any heroic efforts to meet any deadline.

He was prompt enough that it looked like a near miss in an attempt to meet 
the deadline; it wasn't.

I know, you aren't used to dealing with messages coming in *earlier* than 
expected.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, January 14, 2020  5:15 PM

Ideally, nothing would be sent in the dead period between the deadline and the 
meeting, as we don't want to be putting effort into non-agenda items before 
the meeting (particularly items that cause lengthy discussions), and it is 
easy to forget about such items after the meeting. Since I have to process it, 
that isn't a problem for me, I guess, but it likely is for everyone else (based
on the typical result that if there isn't a response to a posting in 24 hours, 
there almost never is one).

Such a dead period isn't really enforceable, but it seems preferable for 
everyone to follow it.

****************************************************************

From: Randy Brukardt
Sent: Monday, February 10, 2020  11:35 PM

I'm trying to organize this for an AI, and it seems underspecified and 
underjustified...
 
> After implementing and experimenting with Big_Integers and Big_Reals 
> at AdaCore both from an Ada usage point of view and from a SPARK usage 
> point of view, we came to the conclusion that the proposed API is 
> unnecessarily complex and confusing for users, and in addition not 
> practical for use in SPARK.
>
> For Ada users: having to deal with two different type (a type and a 
> subtype) and having to choose which one is relevant is a real burden 
> and can actually lead to the wrong choice, only caught late at 
> runtime, and makes reading and understand the API much harder.

There's a big difference between "two types" and "a type and a subtype", in
that the possibilities of the first are not necessarily comparable, while the
second is just a set and a subset of values. IMHO, subtypes are necessary in 
any decent-sized API, at a minimum to meet the DRY (Don't Repeat Yourself) 
principle. Still, the naming of the current proposal is likely to be confusing
(the type should definitely be Big_Integer).

In my experience, one declares objects (including components) with the least 
restrictive type (typically the base type), and parameters with a more 
restrictive subtype. That pretty much requires multiple subtypes (note that this is 
pretty much the only way in Ada 95/2005 code to add any contract conditions to 
a subprogram specification).

> For SPARK users: having two different types makes generating provable 
> formulas much harder and not direct. Since we anticipate SPARK users 
> to be "big users" of "big integers", not having a straightforward and 
> optimal mapping is really not suitable.

This I don't understand, could you explain further? I would expect that a 
subtype would make analyzing a value of a type easier, since it necessarily 
restricts the set of values that are possible. And it has no other effect so 
it shouldn't make anything harder.

I tend to start introducing subtypes when something appears in my code more 
than 2 or 3 times; giving something a name rather that duplicating it all 
over seems to be classic DRY. I wouldn't expect introducing short-hands to 
make problems for SPARK or any other Ada-aware tool.

To take an example of what I typically find in my code, here's a tiny piece 
of the declarations from the web site log analyzer:

    type User_Counts is range 0 .. 1_000_000;
    NO_USER : constant User_Counts := 0;
    subtype User_Indices is User_Counts range 1 .. User_Counts'Last;

Objects typically are declared with type User_Counts (with the exception of a 
few local temporaries) and most everything else is declared with User_Indices 
(including most parameters).

Many types I declare end up with this or a similar structure, as there often 
is a need to separate an uninitialized/unknown value from the known values. 
I see that with access types (null exclusions mainly can be applied to 
parameters; components usually have to have the possibility of being null in 
order to deal with error cases and with piece-meal construction).

It's not unusual for the unknown/uninitialized value to be represented by the 
result of a function call like Is_Valid. (That's especially likely for 
private types.) With predicates, one would prefer to write a subtype with a 
predicate rather than to duplicate a check 30-some times in preconditions.

This is precisely what the original specification was doing. Pretty much 
every new specification (standard and in user code) has opportunities to take 
some advantage of this pattern (it's even recommended in the predicate 
examples in the Ada 2012 RM + Corrigendum 1, see 3.2.4(41-51/4)).

> Proposed change: use a single type for Big_Integer and Big_Real and 
> remove all the Optional_Big_* types. Instead state that reading an 
> uninitialized
> Big_* variable is a bounded error, similar to all scalar types, and 
> add a precondition to each subprogram in the Big_Integers/Big_Reals 
> packages ensuring that all possible uses of these objects is properly 
> checked, except for the Is_Valid function itself, which is marked 
> Intrinsic so that it's 'Access cannot be taken.
> 
> See the below for the proposed API for Big_Integers. A similar change 
> is proposed for Big_Reals which has been successfully implemented and 
> used at AdaCore.

The wording changes that go with this are missing. As such I don't know 
precisely what is intended. In particular, I do not understand how this 
Bounded Error is supposed to work or why we need it.

I'd also like some justification for marking Is_Valid Intrinsic, since one 
expects this package to be written in Ada. I don't see any reason that taking 
'Access of Is_Valid would be a problem (nor do I see any reason to do it, but 
that's a different issue). Is_Valid probably should be marked to have no 
side-effects (that is, with Global => null), but Intrinsic doesn't by itself 
have that effect.

>   type Big_Integer is private
>     with Integer_Literal => From_String,
>          Put_Image => Put_Image;

There should be some Default_Initial_Condition here; Ada package 
specifications are often used as patterns by Ada users and thus should 
represent best practices. Uninitialized objects, OTOH, are a necessary evil 
in the case of existing language-defined types, but are just evil otherwise.

I don't think that specifying
     Default_Initial_Condition => not Is_Valid (Big_Integer), 
has any significant performance or correctness impacts (and it should make 
it easier for SPARK by giving any such objects a known state). 

> with Ada.Streams;
...
>    procedure Put_Image
>      (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
>       Arg    : Big_Integer)
>      with Pre => Is_Valid (Arg);

Put_Image doesn't use Streams anymore, we changed it to using a text buffer 
(AI12-0315-1). This should be:

    procedure Put_Image
      (Buffer : in out Ada.Strings.Text_Buffers.Root_Buffer_Type'Class;
       Arg    : Big_Integer)
      with Pre => Is_Valid (Arg);


>    function "+" (L : Big_Integer) return Big_Integer
>      with Pre => Is_Valid (L);

This precondition changes the exception raised from Constraint_Error to 
Assertion_Error. The exception raised might not matter to SPARK, but for Ada 
it should be either Constraint_Error or Program_Error. (That is the case for 
the "regular" integers as well, if you're really trying to match that 
behavior.) I think it would be better to be Program_Error (use of an invalid 
value), but 13.9.1(9) allows either. In any case, the exception needs to be 
reflected in this precondition and all of the others as well:

    function "+" (L : Big_Integer) return Big_Integer
      with Pre => (if Is_Valid (L) then raise Program_Error);

****************************************************************

From: Jean-Pierre Rosen
Sent: Monday, February 10, 2020  11:58 PM

>     function "+" (L : Big_Integer) return Big_Integer
>       with Pre => (if Is_Valid (L) then raise Program_Error);

       with Pre => (if {not} Is_Valid (L) then raise Program_Error);
Presumably ;-)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, February 11, 2020  2:01 AM

>        with Pre => (if {not} Is_Valid (L) then raise Program_Error);
>  Presumably ;-)

Just checking if anyone is reading carefully. ;-)

****************************************************************

From: Tucker Taft
Sent: Tuesday, February 11, 2020  8:41 AM

> I'm trying to organize this for an AI, and it seems underspecified and 
> underjustified...

Steve and I can help fill in some of the missing rationale.  We had long 
discussions on this topic internally.  I can try to augment the !discussion 
section if you want to hand it off to me.

****************************************************************

From: Claire Dross
Sent: Tuesday, February 11, 2020  8:46 AM

>> For SPARK users: having two different types makes generating 
>> provable formulas much harder and not direct. Since we 
>> anticipate SPARK users to be "big users" of "big integers", 
>> not having a straightforward and optimal mapping is really 
>> not suitable.

> This I don't understand, could you explain further? I would expect that a
> subtype would make analyzing a value of a type easier, since it necessarily
> restricts the set of values that are possible. And it has no other effect so
> it shouldn't make anything harder.

The point here is that in SPARK, uninitialized values are not modelled. Said 
otherwise, we check that objects are initialized by other means without having
to represent the invalid "uninitialized" value. So here, having to care about 
this value which is nearly always incorrect makes things more complicated for 
no gain.

I hope it is clearer now,

****************************************************************

From: Tucker Taft
Sent: Tuesday, February 11, 2020  8:54 AM

> The point here is that in SPARK, uninitialized values are not modelled. 
> Said otherwise, we check that objects are initialized by other means without 
> having to represent the invalid "uninitialized" value. So here, having to 
> care about this value which is nearly always incorrect makes things more 
> complicated for no gain.

Is it fair to saying that SPARK doesn't model the uninitialized *value* (at 
run time), but it does model the uninitialized "state" (at compile time)?

> I hope it is clearer now,

This issue is pretty subtle! ;-)

****************************************************************

From: Claire Dross
Sent: Tuesday, February 11, 2020  9:07 AM

It is a good image. Currently, SPARK does a simple flow analysis to check if 
objects are initialized, something that could be done at compile time.

****************************************************************

From: Tucker Taft
Sent: Tuesday, February 11, 2020  9:49 AM

I was including SPARK's flow analysis and proof within the notion of "compile 
time" ;-)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, February 11, 2020  4:49 PM

>>> For SPARK users: having two different types makes generating 
>>> provable formulas much harder and not direct. Since we anticipate 
>>> SPARK users to be "big users" of "big integers", not having a 
>>> straightforward and optimal mapping is really not suitable.

>>This I don't understand, could you explain further? I would expect 
>>that a subtype would make analyzing a value of a type easier, since it 
>>necessarily restricts the set of values that are possible. And it has no 
>>other effect so it shouldn't make anything harder.

>The point here is that in SPARK, uninitialized values are not modelled. 
>Said otherwise, we check that objects are initialized by other means 
>without having to represent the invalid "uninitialized" value. So here, 
>having to care about this value which is nearly always incorrect makes 
>things more complicated for no gain.

OK, but there aren't any "uninitialized" values here, this is a private type 
so the values are default initialized and Ada has nothing to say as to whether
that is OK or not. For many private types (think File_Type, for example), a 
default initialized value is the normal case.

The only way to bring "uninitialized value" checking to bear on these (or
any) private types (Big_Integer and Big_Real) means applying some sort of 
non-Ada stuff ("magic" from an Ada perspective) to the types.

And that's not a good thing, since people can and often do use the packages in 
the Standard as patterns for their own packages. But those programmers don't 
have the same "magic" available, and they will get burned.

I suppose we could try to bring that "magic" to Ada in a defined way, but that 
has its own problems. (How much flow analysis is enough analysis?
There have been strong objections in the past to leaving this sort of thing
implementation-defined.)

It also occurs to me that not mandating the initial state implies that 
implementations that initialize everything to a known-good value (such as
zero) would avoid needing to do any validity checking at all. That clearly 
would be the best implementation that meets the spec (the less work the 
better :-). I think we considered requiring that but some people (Bob in 
particular, if my memory is correct) had strong objections to that sort of 
implementation.

What we can't do is make the SPARK implementation easier at the cost of making 
the Ada implementation unsafe or unportable. One expects plenty of non-SPARK 
Ada users to use these packages once they're generally available.

****************************************************************

From: Claire Dross
Sent: Wednesday, February 12, 2020  2:13 AM

> What we can't do is make the SPARK implementation easier at the cost 
> of making the Ada implementation unsafe or unportable. One expects 
> plenty of non-SPARK Ada users to use these packages once they're generally 
> available.

I don't agree that this makes the Ada implementation unsafe or unportable. 
These big scalars would really be handled like normal scalars, which I think 
is the easiest to understand for everybody. Most people don't have a 
Valid_Natural type with a predicate right? If they care about initialization, 
they simply initialize all Natural objects at declaration, and I think this 
will be the same for these big scalar types.

****************************************************************

From: Tucker Taft
Sent: Wednesday, February 12, 2020  4:36 PM

The difference is that we have a run-time representation for "uninitialized" 
for such big integers, but not necessarily for normal integers.  I agree with
the notion that we don't want to "reify" this representation by providing, 
for example, a declared "Invalid_Value" constant.  But we do want to allow Ada 
run-time checks to detect use of uninitialized big integers, even if a SPARK 
tool might have eliminated the possibility before run-time.  It is somewhat of 
a delicate balance, and I don't think we should presume either answer is 
"obviously" correct.  In any case, I have been handed the AI to add more 
rationale based on our internal AdaCore discussions.  I may need some help in 
doing so!

****************************************************************

From: Claire Dross
Sent: Thursday, February 13, 2020  3:25 AM

> The difference is that we have a run-time representation for "uninitialized" 
> for such big integers, but not necessarily for normal integers.  I agree 
> with the notion that we don't want to "reify" this representation by 
> providing, for example, a declared "Invalid_Value" constant.  But we do 
> want to allow Ada run-time checks to detect use of uninitialized big 
> integers, even if a SPARK tool might have eliminated the possibility before 
> run-time.  It is somewhat of a delicate balance, and I don't think we should
> presume either answer is "obviously" correct.

I agree with that, and I think it is nice that it is possible for Ada run-time 
to detect uses of uninitialized values. I wouldn't want to prevent that. But I 
don't think this is the case for the new proposal.

> In any case, I have been handed the AI to add more rationale based on our 
internal AdaCore discussions.  I may need some help in doing so!

Sure! I would be happy to help, and probably Arnaud too.

****************************************************************

From: Randy Brukardt
Sent: Thursday, February 13, 2020  5:53 PM

> I agree with that, and I think it is nice that it is possible for Ada 
> run-time to detect uses of uninitialized values. I wouldn't want to 
> prevent that. But I don't think this is the case for the new proposal.

It needs to be *required* to detect these sorts of abuses, because no Ada user
ought to be dependent on the charity of an implementer to get safe behavior. 
The fact that Ada does not do such detection for scalar types is already a 
black eye for Ada (one that we can't fix because of compatibility issues). 
Making it worse isn't helping anything (perhaps other than the sales of static 
analysis tools).

****************************************************************

From: Claire Dross
Sent: Friday, February 14, 2020  3:57 AM

>It needs to be *required* to detect these sorts of abuses, because no Ada
>user ought to be dependent on the charity of an implementer to get safe
>behavior. The fact that Ada does not do such detection for scalar types is
>already a black eye for Ada (one that we can't fix because of compatibility
>issues). Making it worse isn't helping anything (perhaps other than the
>sales of static analysis tools).

You mean if there is no default initial condition then it is unsafe? I find 
it a bit definitive; it must be possible to enforce that the default value 
of Big_Integer is invalid without using this construct. After all, everybody 
has done without this aspect for years.

I won't fight for this more; all this back and forth is tiring. But 
honestly, we should not be surprised to get compatibility issues with SPARK 
when Ada takes features of SPARK and adds them slightly modified in Ada... 
Here the correct annotation for SPARK is Default_Initial_Condition => null,
meaning that the type does not provide safe default initialization, but I 
know this is not Ada.

****************************************************************

From: Randy Brukardt
Sent: Friday, February 14, 2020  10:21 PM

...

>>It needs to be *required* to detect these sorts of abuses, because no 
>>Ada user ought to be dependent on the charity of an implementer to get 
>>safe behavior. The fact that Ada does not do such detection for scalar 
>>types is already a black eye for Ada (one that we can't fix because of 
>>compatibility issues). Making it worse isn't helping anything (perhaps other than 
>>the sales of static analysis tools).

>You mean if there is no default initial condition then it is unsafe? I find 
>it a bit definitive; it must be possible to enforce that the default value 
>of Big_Integer is invalid without using this construct. After all, everybody 
>has done without this aspect for years.

Every abstraction (containers, files, etc.) in Ada to date has a defined 
initial state (written in English text for older abstractions). It seems 
bizarre to eliminate that principle at the same time that we finally have a 
way to write that state within the language.

Whenever I've talked about Ada being designed to be safer than other languages 
(esp. that the defaults are generally safe), often the first objection comes 
from the fact that uninitialized objects aren't detected. And I don't have a 
very good rebuttal for that. In Ada 2012, we did add some mitigation with the 
Default_Value aspect, but it remains the case that the default is unsafe. Ada 
needs to be able to deal with uninitialized objects in order to deal safely 
with interfacing (both hardware and software) and with I/O, but there is no 
reason that needs to be the default for other code. We can't change that 
because of compatibility concerns. But I don't want to compound a problem by 
repeating the mistake for new features.

>I won't fight for this more, all this back and forth are tiring. 

Welcome to the ARG. :-) :-)

I've tried to let this discussion go, but I'm trying to understand your 
position here, since it doesn't seem to me to make any sense for Ada.

>But honestly, we should not be surprised to get compatibility issues 
>with SPARK when Ada takes features of SPARK and add them slightly 
>modified in Ada... Here the correct annotation for SPARK is 
>Default_Initial_Condition => null, meaning that the type does not 
>provide safe default initialization, but I know this is not Ada.

I agree that the slightly different goals of SPARK and Ada almost always are 
going to cause some issues if one tries to keep them completely "separate but 
equal".

But the real question here is why anyone would want to define a private type 
with "unsafe default initialization". If you want to *prevent* default 
initialization, you can use unknown discriminants on the partial view (in that 
case, a default-initialized object is illegal). And if you do want default 
initialization, then surely it needs to have a defined meaning which equates 
to a defined state. The only reason to do that in  pre-Ada 2012 was that it 
was impossible to set a default for a scalar type, so if the completion of the
private type is scalar, then there was no choice. But we fixed that in Ada 
2012, so there no longer is any reason that one cannot initialize something by 
default. I can see having a way to model existing code, but that doesn't apply 
in this case.

If Ada could detect this sort of error (especially at compile-time, like SPARK 
does), then I wouldn't care as much, but it can't (and as noted above, it's 
impractical to do so now for scalars).

****************************************************************

From: Christoph Grein
Sent: Saturday, April 4, 2020  1:53 PM

!topic Wrong Postcondition
!reference Ada 202x RM A.5.7(8/5) Draft 24
!from Christoph Grein 20-04-04
!discussion

function Denominator (Arg : Big_Real) return Big_Positive
with Post =>
  (Arg = 0.0) or else
  (Big_Integers.Greatest_Common_Divisor
     (Numerator (Arg), Denominator'Result) = 1);

The argument Arg in the postcondition looks strange. Perhaps this is meant:

(if Arg = 0.0 then Denominator'Result = 1)

****************************************************************

From: Yannick Moy
Sent: Wednesday, April 8, 2020  2:58 PM

I think the postcondition is correct. It says that in general Numerator and 
Denominator should be coprime. This does not apply to the case where Arg is 
0.0, hence the form of the postcondition.

****************************************************************

From: Christoph Grein
Sent: Thursday, April 9, 2020  10:48 AM

But this is exactly what my version says. Arg = 0.0 doesn't tell anything 
about the denominator. It could be any Big_Pos, e.g. 100 then. But I'm sure
1 will be returned in this case.

****************************************************************

From: Christoph Grein
Sent: Monday, April 20, 2020  12:19 PM

Since there is silence:

Am I the only one who thinks (Arg = 0.0) as a Post is utter nonsense? I try to
be polite...

Denominator (0/12) with this Post can return anything, for instance 
31415926535897, but I'm sure any sensible implementation will return 1.

If anyone can prove to me that this follows from Arg=0.0, I'll keep silent 
forever, I promise.

****************************************************************

From: Christoph Grein
Sent: Monday, April 20, 2020  12:30 PM

> Am I the only one who thinks (Arg = 0.0) as a Post is utter nonsense? I try 
> to be polite...

as written, the postcondition is an implication:

  if not (Arg = 0.0) then
  (Big_Integers.Greatest_Common_Divisor
     (Numerator (Arg), Denominator'Result) = 1)

as any implication, it does not say anything about the case when the guard is false

> Denominator (0/12) with this Post can return anything, for instance 
> 31415926535897, but I'm sure any sensible implementation will return 1.
> 
> If anyone can prove to me that this follows from Arg=0.0, I'll keep silent 
> forever, I promise.

It does not. I assume that the ARG did not mean here to write a complete 
postcondition for this operation, just one that they felt was useful. It also 
leaves to the implementation some flexibility in returning whatever for the 
Denominator of real "0".

****************************************************************

From: Christoph Grein
Sent: Monday, April 20, 2020  8:35 PM

...
>as any implication, it does not say anything about the case when the guard 
>is false

I admit that this is logically equivalent to what is written, but it is more 
intuitive in leaving the 0.0 case open. 

>It does not. I assume that the ARG did not mean here to write a complete 
>postcondition for this operation, just one that they felt was useful. It 
>also leaves to the implementation some flexibility in returning whatever 
>for the Denominator of real "0".

But why should ARG do so? 1 is the normal expected result. Anything else would 
be very surprising. Note the Dewar rule. (At least an AARM note would be 
appropriate in this case.)

I note that the RM does not say big reals are held in reduced form 
Big_Integers.Greatest_Common_Divisor (Numerator, Denominator) = 1. So 2/12 
and 1/6 would both be allowed internal representations, or I/(I*6) for any 
big integer. But I presume Numerator (no Post) would return 1 and Denominator
(because of Post) must return 6; and 2/12=1/6 would return True.

****************************************************************

From: Bob Duff
Sent: Tuesday, April 21, 2020  4:16 PM

> It does not. I assume that the ARG did not mean here to write a complete
> postcondition for this operation, just one that they felt was useful. It 
> also leaves to the implementation some flexibility in returning whatever 
> for the Denominator of real "0".

I agree with Christoph that this implementation freedom is undesirable.
The Post should specify both cases (Arg = zero and nonzero).

For zero, Numerator should be 0 and Denominator should be 1.

So the Post as written is not wrong, but incomplete.

> But why should ARG do so? 1 is the normal expected result. Anything 
> else would be very surprising. Note the Dewar rule. (At least an AARM 
> note would be appropriate in this case.)
> 
> I note that the RM does not say big reals are held in reduced form 
> Big_Integers.Greatest_Common_Divisor (Numerator, Denominator) = 1. So
> 2/12 and 1/6 would both be allowed internal representations, or 
> I/(I*6) for any big integer.

Yes, but having implemented these things before, I'm pretty sure the internal 
rep should use lowest terms.  Otherwise numbers grow bigger and bigger as you 
do computations.  (I mean "bigger" as in "number of bits", not the magnitude 
of the number. A small-magnitude number like 1/(10**1000) is "big" in terms 
of number of bits stored.)

And you have to be able to compute lowest terms anyway in order for "=" to get 
the right answer.

>...But I presume Numerator (no Post) would return 1  and Denominator 
>(because of Post) must return 6; and 2/12=1/6 would  return True.

Yes, and the easiest and most efficient way to accomplish that is to always 
store 1 and 6.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, April 21, 2020  5:16 PM

...
> Yes, but having implemented these things before, I'm pretty sure the 
> internal rep should use lowest terms.  Otherwise numbers grow bigger 
> and bigger as you do computations.  (I mean "bigger"
> as in "number of bits", not the magnitude of the number.
> A small-magnitude number like 1/(10**1000) is "big" in terms of number 
> of bits stored.)
> 
> And you have to be able to compute lowest terms anyway in order for 
> "=" to get the right answer.

I'm unconvinced. Normalization to lowest terms is *very* expensive in general 
(since it implies doing many trial divides, and divides are the most expensive
operations). Moreover, you never need it (you need to get the same denominator
for "=", not the lowest terms - that can be done by multiplying the numerator 
and denominator by the same value). If your operation mix includes a lot of 
math and just a few "=" operations, you may be better off only doing 
normalization when you need it.

(As I recall, the package used in Janus/Ada only normalizes away right-hand 
zero bits [as that is relatively cheap] rather than full normalization. I'd 
use that as the basis of any Bignum package for Janus/Ada, since I have no 
interest in writing/debugging a lot of assembler code again.)

> >...But I presume Numerator (no Post) would return 1  and Denominator 
> >(because of Post) must return 6; and 2/12=1/6 would  return True.
> 
> Yes, and the easiest and most efficient way to accomplish that is to 
> always store 1 and 6.

Sure, because in this case you're dealing with extra 2s and detecting those 
is cheap (test the bottom bit for zero). Detecting extra factors of 7 and 
13 and 19 is not cheap.

****************************************************************

From: Christoph Grein
Sent: Wednesday, April 22, 2020  1:13 PM

As I said, the RM does not require the reduced form (I supposed something 
like this as the reason: you need to get the same denominator for "=", not 
the lowest terms - that can be done by multiplying the numerator and 
denominator by the same value). Only for calculating Numerator and 
Denominator it must be computed. Thus I favor a complete Post. And 
implementing 0/10**10000 as 0/1 won't be costly.

****************************************************************

From: Randy Brukardt
Sent: Monday, May 4, 2020  9:21 PM

In working on putting this AI into the draft Standard (remember that we 
approved it during the meeting last Wednesday), I noticed a few additional
issues. None of them are particularly problematical; the first might need a
bit of input from some SPARK people (there being two possible fixes and I 
don't know if there is a preference for one of them vis-a-vis SPARK).

---

[1] We have:
   function In_Range (Arg, Low, High : Big_Integer) return Boolean is
     (Low <= Arg and Arg <= High);

The expression here will raise Program_Error if any of the arguments are 
invalid ("<=" requires valid arguments). However, this predicate function is
used in preconditions (specifically, in From_Big_Integer and To_Integer) where
it is expected to return False for values of Arg that are out of range, 
including those that are invalid.

It's also weird that we're not making explicit the implicit requirement that 
Low and High are valid. (Integer ranges are always valid in Ada, it seems 
appropriate to make the same requirement here.) This isn't *wrong* per se, as 
the expression itself would raise Program_Error in the same way, but it is 
likely to be confusing to a reader.

So I recommend using the following instead:

   function In_Range (Arg : Big_Integer;
                      Low, High : Valid_Big_Integer) return Boolean is
     (Is_Valid (Arg) and then Low <= Arg and then Arg <= High);

Note that I had to change "and" to "and then" so that we don't call the 
relational operators if Arg is not valid. I could have left the second "and" 
with some additional parens, but that just seemed to make it harder to read.

An alternative would be to make From_Big_Integer, To_Integer, and In_Range 
only allow valid values (that is, only take Valid_Big_Integer arguments), but 
that's a substantially bigger change from what was approved, and it makes 
In_Range slightly more fragile. It also would change the exception raised from 
Constraint_Error to Program_Error for To_Integer and From_Big_Integer. [I have 
no idea which makes more sense for SPARK, and I don't think the exception 
raised is important in these cases.]
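
For concreteness, that alternative would make In_Range (and its clients) read
roughly as follows -- a sketch of the direction, not the approved wording:

   function In_Range (Arg, Low, High : Valid_Big_Integer) return Boolean is
     (Low <= Arg and Arg <= High);

   function To_Integer (Arg : Valid_Big_Integer) return Integer
     with Pre => In_Range (Arg,
                           Low  => To_Big_Integer (Integer'First),
                           High => To_Big_Integer (Integer'Last))
                   or else raise Constraint_Error;

With Valid_Big_Integer on all three parameters, an uninitialized Arg fails the
predicate check at the call and raises Program_Error before the comparisons
are evaluated.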

[2] The following also requires the arguments to be valid; for Den that's 
hidden in the precondition and it isn't checked at all for Num:

   function "/" (Num, Den : Big_Integers.Big_Integer) return Valid_Big_Real;
     with Pre =>  Den /= 0
                  or else raise Constraint_Error;

So I changed this to:

   function "/" (Num, Den : Big_Integers.Valid_Big_Integer) return Valid_Big_Real;
     with Pre =>  Den /= 0
                  or else raise Constraint_Error;

[3] There were a lot of blank lines added and subtracted. The spacing should be 
similar to that of Standard (see A.1) and System.Storage_Elements (see 13.7.1).
I adjusted the spacing in order to stay consistent with the layout of numeric 
types in the Standard.

[4] Put_Image needs mode "in" for the second parameter; procedures always give 
the mode in the Standard (functions do not, since it almost always is "in").
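
For reference, with AI12-0340-1 applied the resulting declaration presumably
reads something like this (a sketch; names per the Text_Buffers version of
Put_Image):

   procedure Put_Image
     (Buffer : in out Ada.Strings.Text_Buffers.Root_Buffer_Type'Class;
      Arg    : in Valid_Big_Integer);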

****************************************************************

From: Tucker Taft
Sent: Tuesday, May 5, 2020  7:50 AM

> [1] We have:
>   function In_Range (Arg, Low, High : Big_Integer) return Boolean is
>     (Low <= Arg and Arg <= High);
> 
> The expression here will raise Program_Error if any of the arguments 
> are invalid ("<=" requires valid arguments). However, this predicate 
> function is used in preconditions (specifically, in From_Big_Integer 
> and To_Integer) where it is expected to return False for values of Arg 
> that are out of range, including those that are invalid.

Actually, it is fine if it raises the "right" exception, namely Program_Error.
This is not like something used in a predicate aspect, where we need to be 
sure it returns True or False, and never raises an exception.

Did you get all of Arnaud's last-minute fixes here?  I think we agreed to 
change almost everything to Valid_Big_Integer.

> It's also weird that we're not making explicit the implicit 
> requirement that Low and High are valid. (Integer ranges are always 
> valid in Ada, it seems appropriate to make the same requirement here.) 
> This isn't *wrong* per se, as the expression itself would raise 
> Program_Error in the same way, but it is likely to be confusing to a reader.

This was my presuming that because there were validity checks already on all 
of the operators, we didn't have to repeat them.  I could go either way.
 
> So I recommend using the following instead:
> 
>   function In_Range (Arg : Big_Integer;
>                      Low, High : Valid_Big_Integer) return Boolean is
>     (Is_Valid (Arg) and then Low <= Arg and then Arg <= High);

Or just use Arg: Valid_Big_Integer.  Having to use Is_Valid explicitly seems 
to defeat the purpose somewhat of having the predicate.  I don't see a need 
for In_Range to return False on invalid inputs.  In fact, I suspect that 
makes it harder for SPARK.
 
> Note that I had to change "and" to "and then" so that we don't call 
> the relational operators if Arg is not valid. I could have left the 
> second "and" with some additional parens, but that just seemed to make 
> it harder to read.
> 
> An alternative would be to make From_Big_Integer, To_Integer, and 
> In_Range only allow valid values (that is, only take Valid_Big_Integer 
> arguments), but that's a substantially bigger change from what was 
> approved, and it makes In_Range slightly more fragile.

I think that is preferable.

> It also would change the exception raised from Constraint_Error to 
> Program_Error for To_Integer and From_Big_Integer. [I have no idea 
> which makes more sense for SPARK, and I don't think the exception 
> raised is important in these cases.]

I presume you mean To_Big_Integer, and in that case we want Valid_Big_Integer 
as the result subtype.

I believe we want Program_Error for any case of an invalid Big_Integer, and 
Constraint_Error only if we have a valid big integer which is outside a pair
of valid big-integer bounds.

So I would sprinkle Valid_Big_Integer liberally, and not put any explicit 
"Is_Valid" into Is_Range.  My goal was that the *only* language-defined 
function that should be allowed to take an invalid big integer should be 
Is_Valid.  Everything else should raise Program_Error.

> [2] The following also requires the arguments to be valid; for Den 
> that's hidden in the precondition and it isn't checked at all for Num:
> 
>   function "/" (Num, Den : Big_Integers.Big_Integer) return Valid_Big_Real;
>     with Pre =>  Den /= 0
>                  or else raise Constraint_Error;
> 
> So I changed this to:
> 
>   function "/" (Num, Den : Big_Integers.Valid_Big_Integer) return 
>      Valid_Big_Real
>     with Pre =>  Den /= 0
>                  or else raise Constraint_Error;

Agreed.

> 
> [3] There was a lot of blank lines added and subtracted. The spacing 
> should be similar to that of Standard (see A.1) and 
> System.Storage_Elements (see 13.7.1).
> I adjusted the spacing in order to stay consistent with the layout of 
> numeric types in the Standard.

Ok.

> 
> [4] Put_Image needs mode "in" for the second parameter; procedures 
> always give the mode in the Standard (functions do not, since it 
> almost always is "in").

Ok.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, May 5, 2020  2:35 PM

>> [1] We have:
>>   function In_Range (Arg, Low, High : Big_Integer) return Boolean is
>>     (Low <= Arg and Arg <= High);
>> 
>> The expression here will raise Program_Error if any of the arguments are
>> invalid ("<=" requires valid arguments). However, this predicate function 
>> is used in preconditions (specifically, in From_Big_Integer and To_Integer)
>> where it is expected to return False for values of Arg that are out of 
>> range, including those that are invalid.

>Actually, it is fine if it raises the "right" exception, namely Program_Error.
>This is not like something used in a predicate aspect, where we need to be 
>sure it returns True or False, and never raises an exception.

In_Range is similar to a membership operation, and we had a lot of issues 
because we didn't want predicates to raise exceptions in memberships. It seems
to me that the same applies to this operation. You're probably right that it 
doesn't matter in a precondition, but having to protect it in a normal if 
statement seems like extra work:

      if In_Range (...) then -- Shouldn't raise any exception assuming the range is OK.

I don't feel very strongly about this, but my experience has been that any 
time you have a function returning a Boolean that fails somehow rather than 
returning True or False, you regret it. I just had a case like this in the 
Janus/Ada code: I changed Is_Ptr (Not_A_Type) to cause an internal error, but
that caused so many issues (especially with error handling and other 
preconditions) that I changed it back.
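
Concretely, under the "Valid_ everywhere" profile a caller that cannot assume
its operand is initialized has to guard the test itself (a hypothetical usage
sketch; Process, Lo, and Hi are made-up names):

   if Is_Valid (A) and then In_Range (A, Low => Lo, High => Hi) then
      Process (A);  --  A is known to be initialized and within Lo .. Hi
   end if;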

> Did you get all of Arnaud's last-minute fixes here?  I think we agreed to change 
> almost everything to Valid_Big_Integer.

I think so. Those mostly changed the result of To_Big_Integer; no parameters 
were involved.

>> It's also weird that we're not making explicit the implicit requirement that
>> Low and High are valid. (Integer ranges are always valid in Ada, it seems 
>> appropriate to make the same requirement here.) This isn't *wrong* per se,
>> as the expression itself would raise Program_Error in the same way, but it
>> is likely to be confusing to a reader.

>This was my presuming that because there were validity checks already on all 
>of the operators, we didn't have to repeat them.  I could go either way.

I think it's confusing to not include the extra 6 characters to say that the 
value needs to be valid. Otherwise, one wonders why this function is different
from the others.

> So I recommend using the following instead:
> 
>   function In_Range (Arg : Big_Integer;
>                      Low, High : Valid_Big_Integer) return Boolean is
>     (Is_Valid (Arg) and then Low <= Arg and then Arg <= High);

>Or just use Arg: Valid_Big_Integer.  Having to use Is_Valid explicitly seems 
>to defeat the purpose somewhat of having the predicate.  I don't see a need 
>for In_Range to return False on invalid inputs.  In fact, I suspect that 
>makes it harder for SPARK.

The last here was my question. As noted above, it's better if it returns False 
on invalid inputs, since this is like a membership test and those should never 
raise exceptions. But I don't want to make something unnecessarily hard for 
SPARK.

>> It also would change the exception raised from
>> Constraint_Error to Program_Error for To_Integer and From_Big_Integer. [I 
>> have no idea which makes more sense for SPARK, and I don't think the exception 
>> raised is important in these cases.]

>I presume you mean To_Big_Integer, and in that case we want Valid_Big_Integer 
>as the result subtype.

No, I mean From_Big_Integer, which takes a Big_Integer (not Valid_Big_Integer) 
parameter. 

>I believe we want Program_Error for any case of an invalid Big_Integer, and 
>Constraint_Error only if we have a valid big integer which is outside a pair 
>of valid big-integer bounds.

That's fine, we could change To_Integer and the two From_Big_Integers to take 
Valid_Big_Integer. That would be more consistent. 

>So I would sprinkle Valid_Big_Integer liberally, and not put any explicit 
>"Is_Valid" into Is_Range.  My goal was that the *only* language-defined 
>function that should be allowed to take an invalid big integer should be 
>Is_Valid.  Everything else should raise Program_Error.

Even a range membership test? That's unlike the work we did for predicates. 

Note that
    if A in Valid_Big_Integer then -- never raises an exception, even if A is invalid

while you're suggesting that
    if In_Range (A, 1, 10) then -- raises Program_Error if A is invalid

Doesn't seem right to me.

****************************************************************

From: Tucker Taft
Sent: Tuesday, May 5, 2020  2:51 PM

...
>In_Range is similar to a membership operation, and we had a lot of issues 
>because we didn't want predicates to raise exceptions in memberships. It 
>seems to me that the same applies to this operation. You're probably right
>that it doesn't matter in a precondition, but having to protect it in a 
>normal if statement seems like extra work:
>
>    if In_Range (...) then -- Shouldn't raise any exception assuming the range is OK.
>
>I don't feel very strongly about this, but my experience has been that any 
>time you have a function returning a Boolean that fails somehow rather than
>returning True or False, you regret it. I just had a case like this in the 
>Janus/Ada code: I changed Is_Ptr (Not_A_Type) to cause an internal error, 
>but that caused so many issues (especially with error handling and other 
>preconditions) that I changed it back.

But there are three inputs to In_Range, and having to put Is_Valid on all of 
them seems silly.  I would argue all of the inputs should simply be 
Valid_Big_Integer, and not worry about it.  Even a "normal" integer membership 
test can raise Program_Error in Ada, if the compiler can determine the operand 
is uninitialized.  The only thing that is safe on an uninitialized scalar is 
'Valid, and I would argue that Is_Valid should be the only function that 
should be allowed to take an uninitialized Big_Integer.

...
>>This was my presuming that because there were validity checks already on all 
>>of the operators, we didn't have to repeat them.  I could go either way.

>I think it's confusing to not include the extra 6 characters to say that the 
>value needs to be valid. Otherwise, one wonders why this function is 
>different from the others.

Agreed.  Slather "Valid_" in front of all of the Big_Integer operands and 
results.

...
>The last here was my question. As noted above, it's better if it returns 
>False on invalid inputs, since this is like a membership test and those 
>should never raise exceptions. But I don't want to make something 
>unnecessarily hard for SPARK.

Put "Valid_" everywhere, and don't allow any function to even be called if it 
has an uninitialized Big_Integer.

>>> It also would change the exception raised from
>>> Constraint_Error to Program_Error for To_Integer and From_Big_Integer. [I have 
>>> no idea which makes more sense for SPARK, and I don't think the exception 
>>> raised is important in these cases.]

>>I presume you mean To_Big_Integer, and in that case we want Valid_Big_Integer as 
>>the result subtype.

>No, I mean From_Big_Integer, which takes a Big_Integer (not Valid_Big_Integer) 
>parameter.

You wrote "To_Integer" and there is no such function.  I presume you meant 
"To_Big_Integer" when you wrote "To_Integer".

>>So I would sprinkle Valid_Big_Integer liberally, and not put any explicit 
>>"Is_Valid" into Is_Range.  My goal was that the *only* language-defined 
>>function that should be allowed to take an invalid big integer should be 
>>Is_Valid.  Everything else should raise Program_Error.

>Even a range membership test? That's unlike the work we did for predicates. 

Be that as it may, it is a function, and it should require all operands to be 
initialized before it is called.

>Note that
>    if A in Valid_Big_Integer then -- never raises an exception, even if A is invalid
>
>while you're suggesting that
>    if In_Range (A, 1, 10) then -- raises Program_Error if A is invalid

Yes, but there are three Big_Integer operands to In_Range.

>Doesn't seem right to me.

I don't see the problem.  As mentioned, a membership test like 
"if X in 1..10 then ..." can raise Program_Error if X is uninitialized.

****************************************************************

From: Tucker Taft
Sent: Tuesday, May 5, 2020  2:57 PM

The point is that it is not meaningful to ask if something is in a particular 
range if it isn't initialized.

On the other hand, asking first if something is initialized, and only *then* 
asking if it is in a particular range makes sense.  And that is exactly what 
putting Valid_Big_Integer on all operands to In_Range means.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, May 5, 2020  3:45 PM

...
>>I think it's confusing to not include the extra 6 characters to say that the
>>value needs to be valid. Otherwise, one wonders why this function is 
>>different from the others.

>Agreed.  Slather "Valid_" in front of all of the Big_Integer operands and 
>results.

OK by me. Will do.


...
>> No, I mean From_Big_Integer, which takes a Big_Integer (not 
>>Valid_Big_Integer) parameter.

>You wrote "To_Integer" and there is no such function.  I presume you meant 
>"To_Big_Integer" when you wrote "To_Integer"

Umm, there is in AI12-0366-1 -- it's directly below In_Range.

  function To_Integer (Arg : Big_Integer) return Integer ...
...
>>Note that
>>    if A in Valid_Big_Integer then -- never raises an exception, even if A is invalid
>>
>>while you're suggesting that
>>    if In_Range (A, 1, 10) then -- raises Program_Error if A is invalid

>Yes, but there are three Big_Integer operands to In_Range.

The range has to be valid (as with all ranges), but not the argument. 

>>Doesn't seem right to me.
>
>I don't see the problem.  As mentioned, a membership test like "if X in 1..10 
>then ..." can raise Program_Error if X is uninitialized.

It would be a rather unfriendly compiler (and expensive at runtime) to do that 
for an Integer, since every value of Integer'Base is valid. The permission 
exists for signalling NaNs and bad enumeration representations, neither of 
which is very likely to occur in practice.

Anyway, I don't care enough to argue; there are worse problems with this spec 
foisted on us by SPARK.

****************************************************************

From: Christoph Grein
Sent: Tuesday, May 19, 2020  9:10 AM

I'm being tenacious.

In light of how many messages are posted to this list on truly insignificant 
themes (e.g. end record [id]), this theme must be of utmost importance if I 
measure importance inversely proportionally to the number of posts. At least 
I feel a bit like this.

Is it really so absurd to request a complete post condition on Denominator? 
Why should the condition be left open for the value 0? As has been stated, the 
result of Denominator need not be what is stored internally, i.e. GCD (costly) 
is only called to compute the results of Numerator and Denominator.

function Denominator (Arg : Big_Real) return Big_Positive
with Post =>
  (if Arg = 0.0
   then Denominator'Result = 1
   else Big_Integers.Greatest_Common_Divisor
          (Numerator (Arg), Denominator'Result) = 1);

And BTW: Shouldn't there be a related post condition on Numerator?

****************************************************************

From: Jeff Cousins
Sent: Tuesday, May 19, 2020  10:11 AM

Aren't the latest proposals adequate? To quote the latest ARG minutes:

"
Tucker didn't change the postcondition of Denominator. Bob would like a 
complete postcondition.

(if Arg = 0.0 then Denominator'Result = 1 else 
  Greatest_Common_Divisor (Numerator (Arg), Denominator'Result) = 1);


Also, Post => (if Arg = 0.0 then Numerator'Result = 0) on Numerator. We can't 
mention Denominator here, as that would be recursive (since the Post of 
Denominator calls Numerator).
"

****************************************************************

From: Bob Duff
Sent: Tuesday, May 19, 2020  10:41 AM

> Is it really so absurd to request a complete post condition on 
> Denominator?

No.  Of course it should be complete.

> And BTW: Shouldn't there be a related post condition on Numerator?

Well, I think that would lead to infinite mutual recursion.

****************************************************************

From: Christoph Grein
Sent: Wednesday, May 20, 2020  5:23 AM

>> And BTW: Shouldn't there be a related post condition on Numerator?
> Well, I think that would lead to infinite mutual recursion.

Oh ah, of course. I guess a little remark in the AARM would then be 
appropriate, perhaps a "To be honest".

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 28, 2020  10:21 PM

Sigh. You should have waited for Draft 25. Under Numerator there is an AARM
note:

Reason: The postcondition of Numerator cannot be complete as it cannot 
mention Denominator. Since the postcondition of Denominator uses Numerator,
we would get an infinite mutual recursion if both postconditions are enabled.
The postcondition of Denominator serves as the postcondition for Numerator as
well unless Arg = 0.0. 
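
For completeness, the corresponding Numerator declaration (per the minutes
quoted earlier; a sketch, not the exact draft text) is presumably along the
lines of:

   function Numerator (Arg : Valid_Big_Real) return Valid_Big_Integer
     with Post => (if Arg = 0.0 then Numerator'Result = 0);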

****************************************************************

Questions? Ask the ACAA Technical Agent