Version 1.26 of ai12s/ai12-0208-1.txt
!standard A.5.5(0) 19-03-15 AI12-0208-1/11
!standard A.5.6(0)
!standard A.5.7(0)
!class Amendment 16-12-19
!status Amendment 1-2012 19-02-26
!status ARG Approved 8-1-2 19-02-26
!status work item 16-12-19
!status received 16-09-27
!priority Low
!difficulty Medium
!subject Predefined Big numbers support
!summary
Define Big Numbers packages to support arbitrary precision mathematics.
!problem
Some applications need larger numbers than Standard.Integer. All Ada compilers
have this capability in order to implement static expressions; shouldn't some
such package be available to Ada users as well? (Yes.)
!proposal
(See Summary.)
!wording
A.5.5 Big Numbers
Support is provided for integer arithmetic involving values larger than
those supported by the target machine, and for arbitrary-precision reals.
The library package Numerics.Big_Numbers has the following declaration:
package Ada.Numerics.Big_Numbers
with Pure, Nonblocking, Global => null
is
subtype Field is Integer range 0 .. implementation-defined;
subtype Number_Base is Integer range 2 .. 16;
end Ada.Numerics.Big_Numbers;
A.5.6 Big Integers
The library package Numerics.Big_Numbers.Big_Integers has the
following declaration:
with Ada.Streams;
package Ada.Numerics.Big_Numbers.Big_Integers
with Preelaborate, Nonblocking,
Global => in out synchronized Big_Integers
is
type Optional_Big_Integer is private with
Default_Initial_Condition => not Is_Valid (Optional_Big_Integer),
Integer_Literal => From_String,
Put_Image => Put_Image;
function Is_Valid (Arg : Optional_Big_Integer) return Boolean;
subtype Big_Integer is Optional_Big_Integer
with Dynamic_Predicate => Is_Valid (Big_Integer),
Predicate_Failure => (raise Constraint_Error);
function Invalid_Big_Integer return Optional_Big_Integer
with Post => not Is_Valid (Invalid_Big_Integer'Result);
function "=" (L, R : Big_Integer) return Boolean;
function "<" (L, R : Big_Integer) return Boolean;
function "<=" (L, R : Big_Integer) return Boolean;
function ">" (L, R : Big_Integer) return Boolean;
function ">=" (L, R : Big_Integer) return Boolean;
function To_Big_Integer (Arg : Integer) return Big_Integer;
subtype Optional_Big_Positive is Optional_Big_Integer
with Dynamic_Predicate => (not Is_Valid (Optional_Big_Positive))
or else (Optional_Big_Positive > 0),
Predicate_Failure => (raise Constraint_Error);
subtype Optional_Big_Natural is Optional_Big_Integer
with Dynamic_Predicate => (not Is_Valid (Optional_Big_Natural))
or else (Optional_Big_Natural >= 0),
Predicate_Failure => (raise Constraint_Error);
subtype Big_Positive is Big_Integer
with Dynamic_Predicate => Big_Positive > 0,
Predicate_Failure => (raise Constraint_Error);
subtype Big_Natural is Big_Integer
with Dynamic_Predicate => Big_Natural >= 0,
Predicate_Failure => (raise Constraint_Error);
function In_Range (Arg, Low, High : Big_Integer) return Boolean is
((Low <= Arg) and (Arg <= High));
function To_Integer (Arg : Big_Integer) return Integer
with Pre => In_Range (Arg,
Low => To_Big_Integer (Integer'First),
High => To_Big_Integer (Integer'Last))
or else (raise Constraint_Error);
generic
type Int is range <>;
package Signed_Conversions is
function To_Big_Integer (Arg : Int) return Big_Integer;
function From_Big_Integer (Arg : Big_Integer) return Int
with Pre => In_Range (Arg,
Low => To_Big_Integer (Int'First),
High => To_Big_Integer (Int'Last))
or else (raise Constraint_Error);
end Signed_Conversions;
generic
type Int is mod <>;
package Unsigned_Conversions is
function To_Big_Integer (Arg : Int) return Big_Integer;
function From_Big_Integer (Arg : Big_Integer) return Int
with Pre => In_Range (Arg,
Low => To_Big_Integer (Int'First),
High => To_Big_Integer (Int'Last))
or else (raise Constraint_Error);
end Unsigned_Conversions;
function To_String (Arg : Big_Integer;
Width : Field := 0;
Base : Number_Base := 10) return String
with Post => To_String'Result'First = 1;
function From_String (Arg : String) return Big_Integer;
procedure Put_Image
(Stream : not null access Ada.Streams.Root_Stream_Type'Class;
Arg : Big_Integer);
function "+" (L : Big_Integer) return Big_Integer;
function "-" (L : Big_Integer) return Big_Integer;
function "abs" (L : Big_Integer) return Big_Integer;
function "+" (L, R : Big_Integer) return Big_Integer;
function "-" (L, R : Big_Integer) return Big_Integer;
function "*" (L, R : Big_Integer) return Big_Integer;
function "/" (L, R : Big_Integer) return Big_Integer;
function "mod" (L, R : Big_Integer) return Big_Integer;
function "rem" (L, R : Big_Integer) return Big_Integer;
function "**" (L : Big_Integer; R : Natural)
return Big_Integer;
function Min (L, R : Big_Integer) return Big_Integer;
function Max (L, R : Big_Integer) return Big_Integer;
function Greatest_Common_Divisor
(L, R : Big_Integer) return Big_Positive with
Pre => (L /= 0 and R /= 0) or else (raise Constraint_Error);
private
... -- not specified by the language
end Ada.Numerics.Big_Numbers.Big_Integers;
To_String and From_String behave analogously to the Put and Get procedures
defined in Text_IO.Integer_IO (in particular, with respect to the
interpretation of the Width and Base parameters), except that
Constraint_Error, not Data_Error, is propagated in error cases
and that the result of a call on To_String with a Width parameter of 0 and
a nonnegative Arg parameter does not include a leading blank.
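AARM Note: As a non-normative illustration, assuming the Integer_IO-like
interpretation of Width and Base described above, one would expect, for
example:
   To_String (From_String ("42"))              --  "42" (no leading blank)
   To_String (From_String ("-42"))             --  "-42"
   To_String (From_String ("42"), Width => 5)  --  "   42" (padded to at least 5 characters)
   To_String (From_String ("255"), Base => 16) --  "16#FF#"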
Put_Image calls To_String (passing in the default values for the Width and Base
parameters), prepends a leading blank if the argument is
nonnegative, converts that String to a Wide_Wide_String using
To_Wide_Wide_String, and writes the resulting value to the stream using
Wide_Wide_String'Write.
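AARM Note: A non-normative sketch of the effect, where Stream denotes some
value of type access Ada.Streams.Root_Stream_Type'Class and the exact
characters follow from the To_String rules above:
   Put_Image (Stream, From_String ("42"));   --  writes " 42" (leading blank prepended)
   Put_Image (Stream, From_String ("-42"));  --  writes "-42"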
The other functions have their usual mathematical meanings.
The type Optional_Big_Integer needs finalization (see 7.6).
Implementation Requirements
No storage associated with an Optional_Big_Integer object shall be lost upon
assignment or scope exit.
AARM note: The "No storage ... shall be lost" requirement does not preclude
implementation techniques such as caching or unique number tables.
For purposes of determining whether predicate checks are performed
as part of default initialization, the type Optional_Big_Integer
shall be considered to have a subcomponent that has a default_expression.
AARM Note: This means that the elaboration of
Default_Initialized_Object : Big_Integer;
is required to propagate Assertion_Error.
A.5.7 Big Reals
The library package Numerics.Big_Numbers.Big_Reals has the
following declaration:
with Ada.Numerics.Big_Numbers.Big_Integers;
with Ada.Streams;
package Ada.Numerics.Big_Numbers.Big_Reals
with Preelaborate, Nonblocking,
Global => in out synchronized Big_Reals
is
type Optional_Big_Real is private with
Default_Initial_Condition => not Is_Valid (Optional_Big_Real),
Real_Literal => From_String,
Put_Image => Put_Image;
function Is_Valid (Arg : Optional_Big_Real) return Boolean;
function No_Big_Real return Optional_Big_Real
with Post => not Is_Valid (No_Big_Real'Result);
subtype Big_Real is Optional_Big_Real
with Dynamic_Predicate => Is_Valid (Big_Real),
Predicate_Failure => (raise Constraint_Error);
function "/" (Num, Den : Big_Integer) return Big_Real
with Pre => (if Big_Integers."=" (Den, 0)
then raise Constraint_Error);
function Numerator (Arg : Big_Real) return Big_Integer;
function Denominator (Arg : Big_Real) return Big_Positive
with Post =>
(Arg = 0.0) or else
(Big_Integers.Greatest_Common_Divisor
(Numerator (Arg), Denominator'Result) = 1);
function To_Big_Real (Arg : Big_Integer) return Big_Real is
(Arg / 1);
function To_Real (Arg : Integer) return Big_Real is
(Big_Integers.To_Big_Integer (Arg) / 1);
function "=" (L, R : Big_Real) return Boolean;
function "<" (L, R : Big_Real) return Boolean;
function "<=" (L, R : Big_Real) return Boolean;
function ">" (L, R : Big_Real) return Boolean;
function ">=" (L, R : Big_Real) return Boolean;
function In_Range (Arg, Low, High : Big_Real) return Boolean is
((Low <= Arg) and (Arg <= High));
generic
type Num is digits <>;
package Float_Conversions is
function To_Big_Real (Arg : Num) return Big_Real;
function From_Big_Real (Arg : Big_Real) return Num
with Pre => In_Range (Arg,
Low => To_Big_Real (Num'First),
High => To_Big_Real (Num'Last))
or else (raise Constraint_Error);
end Float_Conversions;
generic
type Num is delta <>;
package Fixed_Conversions is
function To_Big_Real (Arg : Num) return Big_Real;
function From_Big_Real (Arg : Big_Real) return Num
with Pre => In_Range (Arg,
Low => To_Big_Real (Num'First),
High => To_Big_Real (Num'Last))
or else (raise Constraint_Error);
end Fixed_Conversions;
function To_String (Arg : Big_Real;
Fore : Field := 2;
Aft : Field := 3;
Exp : Field := 0) return String
with Post => To_String'Result'First = 1;
function From_String (Arg : String) return Big_Real;
function To_Quotient_String (Arg : Big_Real) return String is
(To_String (Numerator (Arg)) & " / " & To_String (Denominator (Arg)));
function From_Quotient_String (Arg : String) return Big_Real;
procedure Put_Image
(Stream : not null access Ada.Streams.Root_Stream_Type'Class;
Arg : Big_Real);
function "+" (L : Big_Real) return Big_Real;
function "-" (L : Big_Real) return Big_Real;
function "abs" (L : Big_Real) return Big_Real;
function "+" (L, R : Big_Real) return Big_Real;
function "-" (L, R : Big_Real) return Big_Real;
function "*" (L, R : Big_Real) return Big_Real;
function "/" (L, R : Big_Real) return Big_Real;
function "**" (L : Big_Real; R : Integer)
return Big_Real;
function Min (L, R : Big_Real) return Big_Real;
function Max (L, R : Big_Real) return Big_Real;
private
... -- not specified by the language
end Ada.Numerics.Big_Numbers.Big_Reals;
To_String and From_String behave analogously to the Put and Get procedures
defined in Text_IO.Float_IO (in particular, with respect to the
interpretation of the Fore, Aft, and Exp parameters), except that
Constraint_Error (not Data_Error) is propagated in error cases.
From_Quotient_String implements the inverse function of To_Quotient_String;
Constraint_Error is propagated in error cases. Put_Image calls To_String,
converts that String to a Wide_Wide_String using To_Wide_Wide_String, and
writes the resulting value to the stream using Wide_Wide_String'Write.
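AARM Note: As a non-normative illustration, assuming the Float_IO-like
interpretation of Fore, Aft, and Exp described above (with their defaults
of 2, 3, and 0) and the quotient form produced by To_Quotient_String:
   To_Quotient_String (From_String ("0.75"))   --  "3 / 4"
   To_String (From_Quotient_String ("3 / 4"))  --  " 0.750"
   To_String (From_Quotient_String ("1 / 3"))  --  " 0.333" (rounded to Aft digits)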
For an instance of Float_Conversions or Fixed_Conversions, To_Big_Real
is exact (that is, the result represents exactly the same
mathematical value as the argument) and From_Big_Real is subject to
the same precision rules as a type conversion of a value of type T to
the target type Num, where T is a hypothetical floating point type whose
model numbers include all of the model numbers of Num as well as the
exact mathematical value of the argument.
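AARM Note: A non-normative sketch of a consequence of this rule, assuming an
instantiation of Float_Conversions with Standard.Float on a typical target
where Float'Machine_Radix is 2:
   declare
      package Float_Conv is new Float_Conversions (Num => Float);
      Tenth : constant Big_Real := Float_Conv.To_Big_Real (0.1);
   begin
      --  Tenth is exactly the Float machine number that the literal 0.1
      --  rounds to; since 1/10 has no finite binary representation, that
      --  value differs from the exact rational 1/10:
      pragma Assert (Tenth /= From_Quotient_String ("1 / 10"));
   end;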
The other functions have their usual mathematical meanings.
The type Optional_Big_Real needs finalization (see 7.6).
Implementation Requirements
No storage associated with an Optional_Big_Real object shall be lost
upon assignment or scope exit.
AARM note: The "No storage ... shall be lost" requirement does not preclude
implementation techniques such as caching or unique number tables.
For purposes of determining whether predicate checks are performed
as part of default initialization, the type Optional_Big_Real
shall be considered to have a subcomponent that has a default_expression.
AARM Note: This means that the elaboration of
Default_Initialized_Object : Big_Real;
is required to propagate Assertion_Error.
!discussion
We presume that AI12-0311-1 is approved, which will allow the preconditions
and predicates of these packages to be suppressed.
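To illustrate intended use, the following sketch (not part of the proposed
wording) computes values well outside the range of any predefined integer
type using only operations declared in A.5.6 above:
   with Ada.Numerics.Big_Numbers.Big_Integers;
   use  Ada.Numerics.Big_Numbers.Big_Integers;
   function Factorial (N : Natural) return Big_Integer is
      Result : Big_Integer := To_Big_Integer (1);
   begin
      for I in 2 .. N loop
         Result := Result * To_Big_Integer (I);
      end loop;
      return Result;
   end Factorial;
Factorial (52), for example, has 68 decimal digits; To_String can be used to
display it and From_String to read such values back.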
!corrigendum A.5.5(0)
Insert new clause:
Support is provided for integer arithmetic involving values larger than
those supported by the target machine, and for arbitrary-precision reals.
Static Semantics
The library package Numerics.Big_Numbers has the following declaration:
package Ada.Numerics.Big_Numbers
with Pure, Nonblocking, Global => null is
subtype Field is Integer range 0 .. implementation-defined;
subtype Number_Base is Integer range 2 .. 16;
end Ada.Numerics.Big_Numbers;
!corrigendum A.5.6(0)
Insert new clause:
Static Semantics
The library package Numerics.Big_Numbers.Big_Integers has the
following declaration:
with Ada.Streams;
package Ada.Numerics.Big_Numbers.Big_Integers
with Preelaborate, Nonblocking,
Global => in out synchronized Big_Integers
is
type Optional_Big_Integer is private
with Default_Initial_Condition => not Is_Valid (Optional_Big_Integer),
Integer_Literal => From_String,
Put_Image => Put_Image;
function Is_Valid (Arg : Optional_Big_Integer) return Boolean;
subtype Big_Integer is Optional_Big_Integer
with Dynamic_Predicate => Is_Valid (Big_Integer),
Predicate_Failure => (raise Constraint_Error);
function Invalid_Big_Integer return Optional_Big_Integer
with Post => not Is_Valid (Invalid_Big_Integer'Result);
function "=" (L, R : Big_Integer) return Boolean;
function "<" (L, R : Big_Integer) return Boolean;
function "<=" (L, R : Big_Integer) return Boolean;
function ">" (L, R : Big_Integer) return Boolean;
function ">=" (L, R : Big_Integer) return Boolean;
function To_Big_Integer (Arg : Integer) return Big_Integer;
subtype Optional_Big_Positive is Optional_Big_Integer
with Dynamic_Predicate => (not Is_Valid (Optional_Big_Positive))
or else (Optional_Big_Positive > 0),
Predicate_Failure => (raise Constraint_Error);
subtype Optional_Big_Natural is Optional_Big_Integer
with Dynamic_Predicate => (not Is_Valid (Optional_Big_Natural))
or else (Optional_Big_Natural >= 0),
Predicate_Failure => (raise Constraint_Error);
subtype Big_Positive is Big_Integer
with Dynamic_Predicate => Big_Positive > 0,
Predicate_Failure => (raise Constraint_Error);
subtype Big_Natural is Big_Integer
with Dynamic_Predicate => Big_Natural >= 0,
Predicate_Failure => (raise Constraint_Error);
function In_Range (Arg, Low, High : Big_Integer) return Boolean is
((Low <= Arg) and (Arg <= High));
function To_Integer (Arg : Big_Integer) return Integer
with Pre => In_Range (Arg,
Low => To_Big_Integer (Integer'First),
High => To_Big_Integer (Integer'Last))
or else (raise Constraint_Error);
generic
type Int is range <>;
package Signed_Conversions is
function To_Big_Integer (Arg : Int) return Big_Integer;
function From_Big_Integer (Arg : Big_Integer) return Int
with Pre => In_Range (Arg,
Low => To_Big_Integer (Int'First),
High => To_Big_Integer (Int'Last))
or else (raise Constraint_Error);
end Signed_Conversions;
generic
type Int is mod <>;
package Unsigned_Conversions is
function To_Big_Integer (Arg : Int) return Big_Integer;
function From_Big_Integer (Arg : Big_Integer) return Int
with Pre => In_Range (Arg,
Low => To_Big_Integer (Int'First),
High => To_Big_Integer (Int'Last))
or else (raise Constraint_Error);
end Unsigned_Conversions;
function To_String (Arg : Big_Integer;
Width : Field := 0;
Base : Number_Base := 10) return String
with Post => To_String'Result'First = 1;
function From_String (Arg : String) return Big_Integer;
procedure Put_Image
(Stream : not null access Ada.Streams.Root_Stream_Type'Class;
Arg : Big_Integer);
function "+" (L : Big_Integer) return Big_Integer;
function "-" (L : Big_Integer) return Big_Integer;
function "abs" (L : Big_Integer) return Big_Integer;
function "+" (L, R : Big_Integer) return Big_Integer;
function "-" (L, R : Big_Integer) return Big_Integer;
function "*" (L, R : Big_Integer) return Big_Integer;
function "/" (L, R : Big_Integer) return Big_Integer;
function "mod" (L, R : Big_Integer) return Big_Integer;
function "rem" (L, R : Big_Integer) return Big_Integer;
function "**" (L : Big_Integer; R : Natural)
return Big_Integer;
function Min (L, R : Big_Integer) return Big_Integer;
function Max (L, R : Big_Integer) return Big_Integer;
function Greatest_Common_Divisor
(L, R : Big_Integer) return Big_Positive
with Pre => (L /= 0 and R /= 0) or else (raise Constraint_Error);
private
... -- not specified by the language
end Ada.Numerics.Big_Numbers.Big_Integers;
To_String and From_String behave analogously to the Put and Get procedures
defined in Text_IO.Integer_IO (in particular, with respect to the
interpretation of the Width and Base parameters), except that
Constraint_Error, not Data_Error, is propagated in error cases
and that the result of a call on To_String with a Width parameter of 0 and
a nonnegative Arg parameter does not include a leading blank.
Put_Image calls To_String (passing in the default values for the Width and Base
parameters), prepends a leading blank if the argument is
nonnegative, converts that String to a Wide_Wide_String using
To_Wide_Wide_String, and writes the resulting value to the stream using
Wide_Wide_String'Write.
The other functions have their usual mathematical meanings.
The type Optional_Big_Integer needs finalization (see 7.6).
Implementation Requirements
No storage associated with an Optional_Big_Integer object shall be lost upon
assignment or scope exit.
For purposes of determining whether predicate checks are performed
as part of default initialization, the type Optional_Big_Integer
shall be considered to have a subcomponent that has a default_expression.
!corrigendum A.5.7(0)
Insert new clause:
Static Semantics
The library package Numerics.Big_Numbers.Big_Reals has the
following declaration:
with Ada.Numerics.Big_Numbers.Big_Integers;
with Ada.Streams;
package Ada.Numerics.Big_Numbers.Big_Reals
with Preelaborate, Nonblocking,
Global => in out synchronized Big_Reals
is
type Optional_Big_Real is private
with Default_Initial_Condition => not Is_Valid (Optional_Big_Real),
Real_Literal => From_String,
Put_Image => Put_Image;
function Is_Valid (Arg : Optional_Big_Real) return Boolean;
function No_Big_Real return Optional_Big_Real
with Post => not Is_Valid (No_Big_Real'Result);
subtype Big_Real is Optional_Big_Real
with Dynamic_Predicate => Is_Valid (Big_Real),
Predicate_Failure => (raise Constraint_Error);
function "/" (Num, Den : Big_Integer) return Big_Real
with Pre => (if Big_Integers."=" (Den, 0)
then raise Constraint_Error);
function Numerator (Arg : Big_Real) return Big_Integer;
function Denominator (Arg : Big_Real) return Big_Positive
with Post =>
(Arg = 0.0) or else
(Big_Integers.Greatest_Common_Divisor
(Numerator (Arg), Denominator'Result) = 1);
function To_Big_Real (Arg : Big_Integer) return Big_Real is
(Arg / 1);
function To_Real (Arg : Integer) return Big_Real is
(Big_Integers.To_Big_Integer (Arg) / 1);
function "=" (L, R : Big_Real) return Boolean;
function "<" (L, R : Big_Real) return Boolean;
function "<=" (L, R : Big_Real) return Boolean;
function ">" (L, R : Big_Real) return Boolean;
function ">=" (L, R : Big_Real) return Boolean;
function In_Range (Arg, Low, High : Big_Real) return Boolean is
((Low <= Arg) and (Arg <= High));
generic
type Num is digits <>;
package Float_Conversions is
function To_Big_Real (Arg : Num) return Big_Real;
function From_Big_Real (Arg : Big_Real) return Num
with Pre => In_Range (Arg,
Low => To_Big_Real (Num'First),
High => To_Big_Real (Num'Last))
or else (raise Constraint_Error);
end Float_Conversions;
generic
type Num is delta <>;
package Fixed_Conversions is
function To_Big_Real (Arg : Num) return Big_Real;
function From_Big_Real (Arg : Big_Real) return Num
with Pre => In_Range (Arg,
Low => To_Big_Real (Num'First),
High => To_Big_Real (Num'Last))
or else (raise Constraint_Error);
end Fixed_Conversions;
function To_String (Arg : Big_Real;
Fore : Field := 2;
Aft : Field := 3;
Exp : Field := 0) return String
with Post => To_String'Result'First = 1;
function From_String (Arg : String) return Big_Real;
function To_Quotient_String (Arg : Big_Real) return String is
(To_String (Numerator (Arg)) & " / " & To_String (Denominator (Arg)));
function From_Quotient_String (Arg : String) return Big_Real;
procedure Put_Image
(Stream : not null access Ada.Streams.Root_Stream_Type'Class;
Arg : Big_Real);
function "+" (L : Big_Real) return Big_Real;
function "-" (L : Big_Real) return Big_Real;
function "abs" (L : Big_Real) return Big_Real;
function "+" (L, R : Big_Real) return Big_Real;
function "-" (L, R : Big_Real) return Big_Real;
function "*" (L, R : Big_Real) return Big_Real;
function "/" (L, R : Big_Real) return Big_Real;
function "**" (L : Big_Real; R : Integer)
return Big_Real;
function Min (L, R : Big_Real) return Big_Real;
function Max (L, R : Big_Real) return Big_Real;
private
... -- not specified by the language
end Ada.Numerics.Big_Numbers.Big_Reals;
To_String and From_String behave analogously to the Put and Get procedures
defined in Text_IO.Float_IO (in particular, with respect to the
interpretation of the Fore, Aft, and Exp parameters), except that
Constraint_Error (not Data_Error) is propagated in error cases.
From_Quotient_String implements the inverse function of To_Quotient_String;
Constraint_Error is propagated in error cases. Put_Image calls To_String,
converts that String to a Wide_Wide_String using To_Wide_Wide_String, and
writes the resulting value to the stream using Wide_Wide_String'Write.
For an instance of Float_Conversions or Fixed_Conversions, To_Big_Real
is exact (that is, the result represents exactly the same
mathematical value as the argument) and From_Big_Real is subject to
the same precision rules as a type conversion of a value of type T to
the target type Num, where T is a hypothetical floating point type whose
model numbers include all of the model numbers of Num as well as the
exact mathematical value of the argument.
The other functions have their usual mathematical meanings.
The type Optional_Big_Real needs finalization (see 7.6).
Implementation Requirements
No storage associated with an Optional_Big_Real object shall be lost
upon assignment or scope exit.
For purposes of determining whether predicate checks are performed
as part of default initialization, the type Optional_Big_Real
shall be considered to have a subcomponent that has a default_expression.
!ASIS
No ASIS effect (assuming this is ONLY a library).
!ACATS test
An ACATS C-Test is needed to check that the new capabilities are supported.
!appendix
From: Steve Baird
Sent: Tuesday, September 27, 2016 4:09 PM
The following is from a blog post by John Regehr, a professor at U. of Utah:
blog.regehr.org/archives/1401
Regehr says:
In most programming languages, the default integer type should be a
bignum: an arbitrary-precision integer that allocates more space when
needed. Efficient bignum libraries exist and most integers never end
up needing more than one machine word anyway, except in domains like
crypto.
Nobody is suggesting changing how Standard.Integer works for Ada, but a
language-defined Bignum package (presumably supporting Rationals as well as
Integers) would be a step in the right direction.
It seems like the same arguments which were used (correctly, IMO) to justify
adding predefined container packages to the language also apply here. As Tuck
phrased it in a private message: portability and more capability "out of the
box."
Does some de facto standard already exist?
****************************************************************
From: Bob Duff
Sent: Tuesday, September 27, 2016 4:32 PM
> Nobody is suggesting changing how Standard.Integer works
But somebody might suggest that things like "type T is range 1..10**100;"
should be supported by all Ada compilers.
> It seems like the same arguments which were used (correctly, IMO) to
> justify adding predefined container packages to the language also
> apply here. As Tuck phrased it in a private message:
> portability and more capability "out of the box."
Plus the fact that all Ada compilers have to support that functionality at
compile time, but can't provide it to their users in a portable way at run time.
> Does some de facto standard already exist?
For C and C++, yes. For Ada, no.
For Common Lisp, Java, C#, and many others, a de jure standard exists.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, September 28, 2016 12:49 PM
> Does some de facto standard already exist?
No. I could be convinced to contribute RR's Univmath package as a starting point
for discussion.
****************************************************************
From: Jean-Pierre Rosen
Sent: Thursday, September 29, 2016 12:22 AM
There are several packages available, see http://bignumber.chez.com/index.html
****************************************************************
From: Randy Brukardt
Sent: Thursday, September 29, 2016 12:28 PM
Surely there are as many Bignum packages as there are Ada programmers (much
like containers - everybody has one). But is someone putting
them into RM format?? That's what it means to "contribute" a package here.
****************************************************************
From: John Barnes
Sent: Thursday, September 29, 2016 2:05 PM
I see there has been chatter on big number packages.
I wrote such a package many years ago. I was intending to write a book called
Fun with Ada using big examples of Ada 83 programs. But it got overtaken by
events such as having to write the book on Ada 95.
But I kept the package, used some child stuff from Ada 95 but otherwise left it
alone, I still use it for dabbling with large prime numbers and so on. I think
it is based on base 10,000 which will run on a 16 bit machine and is easy for
conversion for printing.
But I fear that agreeing on something might be tricky.
****************************************************************
From: Florian Schanda
Sent: Friday, September 30, 2016 2:35 AM
> But I kept the package, used some child stuff from Ada 95 but
> otherwise left it alone, I still use it for dabbling with large prime
> numbers and so on. I think it is based on base 10,000 which will run
> on a 16 bit machine and is easy for conversion for printing.
Generally, these days, you would probably want to stick to the largest power of
two, as printing these is not a massive concern but performance is. :)
Anyway, I think whatever we come up with, it should be possible to implement it
via a binding to GMP [https://gmplib.org] which is more or less the gold
standard for arbitrary precision arithmetic. Of course, some runtime may wish to
have a more verifiable implementation... So, I think there are two requirements
we should make sure to fulfil:
1. the api should be amenable to static analysis and formal verification
2. the api should make it easy to bind to gmp
(Not saying this list is exhaustive.)
I just want to avoid people starting from various in-house and private projects;
it's probably a good idea instead to start from established libraries.
****************************************************************
From: Steve Baird
Sent: Friday, September 30, 2016 12:27 PM
> So, I think there are two
> requirements we should make sure to fulfil:
>
> 1. the api should be amenable to static analysis and formal verification
> 2. the api should make it easy to bind to gmp
It is also at least possible that we'll want something similar to what we have
with the containers, where we have one version for use in situations where
controlled types and dynamic storage allocation are ok and another for use in
other situations.
****************************************************************
From: Jean-Pierre Rosen
Sent: Friday, September 30, 2016 2:44 PM
Hmmm... bounded and unbounded bignums?
****************************************************************
From: Tucker Taft
Sent: Friday, September 30, 2016 3:52 PM
Perhaps: "big enough nums, already..."
****************************************************************
From: Steve Baird
Sent: Tuesday, December 12, 2017 7:20 PM
I thought I'd take a look at how Java and C++ do bignums to see if there are any
ideas there worth incorporating.
My going-in idea is to have two packages with similar specs; one has "Capacity"
discriminants and the other is implemented using dynamic storage allocation of
some sort (e.g., controlled types and allocators). Like the bounded/unbounded
versions of the containers.
C++ doesn't really have a standard for bignums, but the GCC/GMP stuff
looks pretty similar to what I expected.
Java, however, surprised me (note that I am far from a Java expert so it could
be that I am just confused here).
The Java big-real spec doesn't have Numerator and Denominator functions which
yield big-ints.
The Java type seems to be named BigDecimal.
BigDecimal is implemented as a single big-int value accompanied by two ints
(Scale and Precision), at least according to
stackoverflow.com/questions/10655254/how-bigdecimal-is-implemented
Which leads to my question:
If Ada defined a spec where the intended implementation for bigreal
is clearly two bigints (one for numerator, one for denominator),
would this result in lots of "I coded up the same algorithm in Ada
and Java and performance was a lot worse in Ada" horror stories?
Apparently BigDecimal lets you have, in effect, a lot of decimal digits but the
value "one third" still cannot be represented exactly.
Why did the Java folks do it that way? It seems like you lose a lot of value if
you can't exactly represent, for example, one third.
But perhaps most folks don't care about that functionality and the
performance/functionality tradeoff chosen by Java is closer to what most folks
want.
Opinions? Opinions about Java are of some interest, but what I really want is
opinions about what we should do in Ada.
p.s. Note that the current plan for this AI is to add one or more new predefined
packages but no changes to language rules. In particular, numeric literals for a
non-numeric type is the topic of another AI.
****************************************************************
From: Tucker Taft
Sent: Wednesday, December 13, 2017 9:15 AM
We want rational, not decimal, computations, I believe. So I would ignore
Java's BigDecimal.
A different and interesting capability is true "real" arithmetic, which works
for transcendentals, etc. It is intriguing, but probably not what people really
want.
I'll send the PDF for an article by Hans Boehm about "real" arithmetic
separately, since it will probably not make it through the SPAM filter!
****************************************************************
From: Randy Brukardt
Sent: Wednesday, December 13, 2017 10:57 AM
> We want rational, not decimal, computations, I believe. So I would
> ignore Java's BigDecimal.
Isn't that Steve's question? Ada compiler vendors use rational computations
since that is required by the ACATS (it's necessary that 1/3 /=
0.33333333333333333333333333333). But is that the best choice for the Ada
community? I don't know.
> A different and interesting capability is true "real"
> arithmetic, which works for transcendentals, etc. It is intriguing,
> but probably not what people really want.
>
> I'll send the PDF for an article by Hans Boehm about "real"
> arithmetic separately, since it will probably not make it through the
> SPAM filter!
It might be too large for the list as well. If so, I can post it in the Grab Bag
if you send it directly to me.
****************************************************************
From: Edmond Schonberg
Sent: Wednesday, December 13, 2017 1:36 PM
> I'll send the PDF for an article by Hans Boehm about "real" arithmetic
> separately, since it will probably not make it through the SPAM filter!
Both the rational representation and Boehm’s approach require arbitrary
precision integer arithmetic, so the spec of that new package is
straightforward. The papers describing the implementation of Boehm’s approach
claim that it is much more efficient than working on rationals, where numerator
and denominator grow very rapidly, while the other method only computes required
bits. I have no idea whether numerical analysts use this method. From the
literature it seems to be of interest to number theorists.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, December 13, 2017 4:49 PM
> > It might be too large for the list as well. If so, I can post it in
> > the Grab Bag if you send it directly to me.
>
> It was too big.
By about 4 Megabytes. :-)
Since the article is copyrighted, I put it in the private part of the website.
Find it at:
http://www.ada-auth.org/standards/private/real_arithmetic-boehm.pdf
Ed suggested privately:
> Additional details on the underlying model and its implementation in:
> http://keithbriggs.info/documents/xr-paper2.pdf
****************************************************************
From: Randy Brukardt
Sent: Wednesday, December 13, 2017 5:10 PM
...
> Both the rational representation and Boehm's approach require
> arbitrary precision integer arithmetic, so the spec of that new
> package is straightforward.
> The papers describing the implementation of Boehm's approach claim
> that it is much more efficient than working on rationals, where
> numerator and denominator grow very rapidly, while the other method
> only computes required bits. I have no idea whether numerical analysts
> use this method. From the literature it seems to be of interest to
> number theorists.
The problem with the Boehm method is that it requires specifying those "required
bits", which seems problematic in a programming environment. One could do it
with a form of type declaration (Ada's "digits" seems to be the right general
idea), but that doesn't make much sense in a library form. Boehm's actual use
gets those on the fly (by interacting with the user), and they also use a
rational representation as a backup. So it seems that a rational representation
is going to show up somewhere.
I've repeatedly had the fever dream of a class-wide bignum base type, something
like:
package Root_Bignum is
type Root_Bignum_Type is abstract tagged null record;
function Bignum (Val : in Long_Float) return Root_Bignum_Type is abstract;
-- To get the effect of literals and conversions.
function "+" (Left, Right : in Root_Bignum_Type) return Root_Bignum_Type is abstract;
-- And all of the rest.
function Expected_Digits return Natural is abstract;
-- The number of digits supported by this type; 0 is returned
-- if the number is essentially infinite.
-- And probably some other queries.
end Root_Bignum;
And then there could be multiple specific implementations with different
performance characteristics, everything from Long_Float itself thru infinite
rational representations.
This would allow one to create most of the interesting algorithms as class-wide
operations (with implementations that could adjust to the characteristics of the
underlying type), for instance:
function Sqrt (Val : in Root_Bignum_Type'Class;
Required_Digits : Natural := 0) return Root_Bignum_Type'Class;
Here with "Required_Digits" specifies how many digits of result are needed.
If 0, the value would be retrieved from the underlying representation.
(Probably have to raise an exception if that gives "infinite".)
Such a layout would also allow easy changing of representations, which probably
would be needed for tuning purposes (most of these maths being slow, at least by
conventional standards).
This would have the clear advantage of avoiding being locked into a single form
of Bignum math, when clearly there are other choices out there useful for
particular purposes.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, December 13, 2017 5:25 PM
I said:
...
> The problem with the Boehm method is that it requires specifying those
> "required bits", which seems problematic in a programming environment.
but then also noted:
> function Sqrt (Val : in Root_Bignum_Type'Class;
> Required_Digits : Natural := 0) return
> Root_Bignum_Type'Class;
essentially, recognizing that many non-terminating algorithms have to have some
sort of termination criteria.
For ease of use purposes, one would prefer to only specify numbers of digits if
they're really needed (as in Sqrt or PI, etc.). But if there are going to be a
lot of such operations, one would want to be able to specify that once. Hope
that explains my thinking here.
Also, a Bignum library needs a corresponding Text_IO library. And probably a
custom version of GEF. (The Janus/Ada compiler library has most of these
features, and they are used extensively.)
****************************************************************
From: Steve Baird
Sent: Friday, January 19, 2018 2:36 PM
We have agreed that we want bignum support in the form of one or more predefined
packages with no other language extensions (e.g., no new rules for numeric
literals) as part of this AI.
The general approach seems fairly clear, although there are a lot of details to
decide (not the least of which are the choices for names). I think we want two
forms, "vanilla" and "bounded" (analogous to, for example,
Ada.Containers.Vectors and Ada.Containers.Bounded_Vectors). In one form, the two
"big" numeric types (tentatively named Big_Integer and Big_Rational) are defined
as undiscriminated types. In the second form, these types are discriminated with
some sort of a capacity discriminant. The idea is that the first form is allowed
to use dynamic storage allocation and controlled types in its implementation
while the second form is not; the discriminant somehow indicates the set of
representable values via some mapping (should this mapping be implementation
dependent?).
At a high level, we might have something like
package Ada.Big_Numbers is
-- empty spec like Ada.Containers package
end;
package Ada.Big_Numbers.Big_Integers is
type Big_Integer is private;
function GCD (Left, Right : Big_Integer) return Integer;
function "+" (Arg : Some_Concrete_Integer_Type_TBD)
return Big_Integer;
... ops for Big_Integer ...
end Ada.Big_Numbers.Big_Integers;
with Ada.Big_Numbers.Big_Integers;
package Ada.Big_Numbers.Big_Rationals is
use type Big_Integers.Big_Integer;
type Big_Rational is private with
Type_Invariant =>
Big_Rational = +0 or else
Big_Integers.GCD
(Big_Integers.Numerator (Big_Rational),
Big_Integers.Denominator (Big_Rational)) = +1;
function Numerator (Arg : Big_Rational) return Big_Integer;
function Denominator (Arg : Big_Rational) return Big_Integer;
function "/" (Num, Den : Big_Integer) return Big_Rational
with Pre => Den /= +0;
... other ops for Big_Rational ...
end Ada.Big_Numbers.Big_Rationals;
package Ada.Big_Numbers.Bounded_Big_Integers is ... end;
package Ada.Big_Numbers.Bounded_Big_Rationals is ... end;
Questions/observations include:
1) Do we declare deferred constants, parameterless functions, or neither
for things like Zero, One, and Two?
2) Which ops do we include? It seems obvious that we define at least
the arithmetic and relational ops that are defined for any
predefined integer (respectively float) type for Big_Integer
(respectively, Big_Rational).
What Pre/Postconditions are specified for these ops?
These might involve subtype predicates.
For example (suggested by Bob), do we want
subtype Nonzero_Integer is Big_Integer with
Predicate => Nonzero_Integer /= Zero;
function "/"
(X: Big_Integer; Y: Nonzero_Integer) return Big_Integer;
-- similar for "mod", "rem".
?
What other operations should be provided?
- Conversion between Big_Int and what concrete integer types?
I'd say define a type with range Min_Int .. Max_Int
and provide conversion functions for that type. Also provide
two generic conversion functions that take a generic formal
signed/modular type.
- Conversion between Big_Rational and what concrete integer or
float types? Same idea. Conversion between a maximal
floating point type and then a pair of conversion generics
with formal float/fixed parameters.
- What shortcuts do we provide (i.e., ops that can easily be
built out of other ops)? Assignment procedures like
Add (X, Y); -- X := X + Y
or mixed-type operators whose only purpose is to spare users
from having to write explicit conversion?
3) It seems clear that we don't want the bounded form of either
package to "with" the unbounded form but we do want conversion
functions for going between corresponding bounded and unbounded
types. Perhaps these go in child units of the two bounded packages
(those child units could then "with" the corresponding unbounded
packages). Should streaming of the two forms be compatible as with
vectors and bounded vectors?
4) We need an Assign procedure. In the unbounded case it can be just
a wrapper for predefined assignment, but in the bounded case it
has to deal with the case where the two arguments have different
capacities. It's fairly obvious what to do in most cases, but what
about assigning a Big_Rational value which cannot be represented
exactly given the capacity of the target. Raise an exception or
round? In either case, we probably want to provide a Round function
that deterministically finds an approximation to a given
value which can be represented as a value having a given
capacity. This can be useful in the unbounded case just to save
storage. Should this Round function be implementation-dependent?
If not, then we might end up talking about convergents and
semi-convergents in the Ada RM (or at least in the AARM),
which would be somewhat odd (see
shreevatsa.wordpress.com/2011/01/10/not-all-best-rational-approximations-are-the-convergents-of-the-continued-fraction
). I do not think we want to define Succ/Pred functions which take
a Big_Rational and a capacity value.
5) We want to be sure that a binding to GNU/GMP is straightforward in
the unbounded case. [Fortunately, that does not require using the
same identifiers used in GNU/GMP (mpz_t and mpq_t).]
See gmplib.org/manual for the GNU/GMP interfaces.
6) Do we want functions to describe the mapping between Capacity
discriminant values and the associated set of representable values?
For example, a function from a value (Big_Integer or Big_Rational)
to the smallest capacity value that could be used to represent it.
For Big_Integer there could presumably be Min and Max functions
that take a capacity argument. For Big_Rational, it's not so clear.
We could require, for example, that a given capacity value allows
representing a given Big_Rational value if it is >= the sum of
the capacity requirements of the Numerator and the Denominator.
7) Bob feels (and I agree) that the ARG should not formally approve any
changes until we have experience with an implementation. At this
point the ARG should be focused on providing informal guidance on
this topic.
Opinions?
****************************************************************
From: Randy Brukardt
Sent: Friday, January 19, 2018 10:18 PM
...
> Questions/observations include:
0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
If so, then we can use prefix notation on functions like Numerator and
Denominator. We could also consider deriving both versions (usual and bounded)
from an abstract ancestor.
> 1) Do we declare deferred constants, parameterless functions,
> or neither for things like Zero, One, and Two?
If tagged, I'll finally get an excuse to show why what I called "tag
propagation" is necessary to implement the dispatching rules in 3.9.2. :-) (One
has to consider a set of calls, not a single call, for determining the static or
dynamic tag for dispatching. That's demonstratably necessary to process tagged
expressions with constants or literals.)
Anyway, the answer to this depends on whether there is a sufficiently short
constructor -- and that really depends on whether Tucker invents a useful
"literals for private type" AI. So I don't think this can be answered until we
find out about that.
> 2) Which ops do we include? It seems obvious that we define at least
> the arithmetic and relational ops that are defined for any
> predefined integer (respectively float) type for Big_Integer
> (respectively, Big_Rational).
>
> What Pre/Postconditions are specified for these ops?
> These might involve subtype predicates.
> For example (suggested by Bob), do we want
>
> subtype Nonzero_Integer is Big_Integer with
> Predicate => Nonzero_Integer /= Zero;
> function "/"
> (X: Big_Integer; Y: Nonzero_Integer) return Big_Integer;
> -- similar for "mod", "rem".
>
> ?
Shouldn't this predicate raise Constraint_Error rather than defaulting to
Assertion_Error, to be more like the other numeric operations? Otherwise, I'm
all in favor of this formulation. Note, however, that since the underlying type
is likely to be controlled and thus tagged, this would require some changes to
other rules; there is already an AI about that (AI12-0243-1).
> What other operations should be provided?
> - Conversion between Big_Int and what concrete integer types?
> I'd say define a type with range Min_Int .. Max_Int
> and provide conversion functions for that type. Also provide
> two generic conversion functions that take a generic formal
> signed/modular type.
Sounds OK.
> - Conversion between Big_Rational and what concrete integer or
> float types? Same idea. Conversion between a maximal
> floating point type and then a pair of conversion generics
> with formal float/fixed parameters.
Sounds OK again.
> - What shortcuts do we provide (i.e., ops that can easily be
> built out of other ops)? Assignment procedures like
> Add (X, Y); -- X := X + Y
> or mixed-type operators whose only purpose is to spare users
> from having to write explicit conversion?
The only reason for mixed type operators is to make literals available. But if
one does those, then we can't add literals properly in the future
(Ada.Strings.Unbounded is damaged by this). So I say no.
I wouldn't bother with any other routines until at least such time as Bob
:-) has built some ACATS tests.
> 3) It seems clear that we don't want the bounded form of either
> package to "with" the unbounded form but we do want conversion
> functions for going between corresponding bounded and unbounded
> types. Perhaps these go in child units of the two bounded packages
> (those child units could then "with" the corresponding unbounded
> packages).
Alternatively, both could be derived from an abstract type, and a class-wide
conversion provided. That would get rid of the empty package in your proposal.
:-)
> Should streaming of the two forms be compatible as with
> vectors and bounded vectors?
Yes.
> 4) We need an Assign procedure. In the unbounded case it can be just
> a wrapper for predefined assignment, but in the bounded case it
> has to deal with the case where the two arguments have different
> capacities. It's fairly obvious what to do in most cases, but what
> about assigning a Big_Rational value which cannot be represented
> exactly given the capacity of the target. Raise an exception or
> round?
I think I'd raise Capacity_Error. (Isn't that what the containers do?) Having
exact math be silently non-exact seems like exactly (pun) the wrong thing to do.
> In either case, we probably want to provide a Round function
> that deterministically finds an approximation to a given
> value which can be represented as a value having a given
> capacity. This can be useful in the unbounded case just to save
> storage. Should this Round function be implementation-dependent?
> If not, then we might end up talking about convergents and
> semi-convergents in the Ada RM (or at least in the AARM),
> which would be somewhat odd (see
> shreevatsa.wordpress.com/2011/01/10/not-all-best-rational-appr
> oximations-are-the-convergents-of-the-continued-fraction
> ). I do not think we want to define Succ/Pred functions which take
> a Big_Rational and a capacity value.
???
I don't think Round (or any other operation) ought to be
implementation-dependent, so I think it would need a real definition. Hopefully
with "semi-convergents" or other terms that no one has heard of. ;-)
> 5) We want to be sure that a binding to GNU/GMP is straightforward in
> the unbounded case. [Fortunately, that does not require using the
> same identifiers used in GNU/GMP (mpz_t and mpq_t).]
> See gmplib.org/manual for the GNU/GMP interfaces.
Makes sense.
> 6) Do we want functions to describe the mapping between Capacity
> discriminant values and the associated set of representable values?
> For example, a function from a value (Big_Integer or Big_Rational)
> to the smallest capacity value that could be used to represent it.
> For Big_Integer there could presumably be Min and Max functions
> that take a capacity argument. For Big_Rational, it's not so clear.
> We could require, for example, that a given capacity value allows
> representing a given Big_Rational value if it is >= the sum of
> the capacity requirements of the Numerator and the Denominator.
It seems that the Capacity needs to mean something to the end user, not just the
compiler. So such functions seem necessary, but KISS for those!!
> 7) Bob feels (and I agree) that the ARG should not formally approve any
> changes until we have experience with an implementation. At this
> point the ARG should be focused on providing informal guidance on
> this topic.
I agree that Bob should prototype these packages, including writing ACATS-style
tests for them, so that we can put them into the Ada 2020 Standard. I'll put it
on his action item list. ;-)
Seriously, we already have an ARG rule that all Amendment AIs are supposed to
include (some) ACATS tests, and we really should have a similar rule that
proposed packages are prototyped as well. This is the assumed responsibility of
an AI author, so if you can't get Bob to help, you're pretty much stuck, and
need to do that before the AI could be assumed complete.
OTOH, we haven't required that from any other AI author, so why start now??
(We really ought to, as I don't have a very big budget to write Ada 2020 ACATS
tests. Topic to discuss during the call?)
****************************************************************
From: Jean-Pierre Rosen
Sent: Saturday, January 20, 2018 12:22 AM
> Questions/observations include:
> [...]
>
I'd add:
8) IOs
Should an IO package be associated to each of these bignums?
Note that the issue of IO may influence the representation of
bignums: I once knew an implementation where each super-digit
was limited to 1_000_000_000 (instead of the natural 2_147_483_647),
just to avoid terribly inefficient IOs.
****************************************************************
From: Tucker Taft
Sent: Saturday, January 20, 2018 11:08 AM
> ...
>
>> 1) Do we declare deferred constants, parameterless functions,
>> or neither for things like Zero, One, and Two?
>
> If tagged, I'll finally get an excuse to show why what I called "tag
> propagation" is necessary to implement the dispatching rules in 3.9.2.
> :-) (One has to consider a set of calls, not a single call, for
> determining the static or dynamic tag for dispatching. That's
> demonstrably necessary to process tagged expressions with constants
> or literals.)
I agree that you have to do "tag propagation" to properly handle tag
indeterminate calls. Has anyone claimed otherwise?
>
> Anyway, the answer to this depends on whether there is a sufficiently
> short constructor -- and that really depends on whether Tucker invents
> a useful "literals for private type" AI. So I don't think this can be
> answered until we find out about that.
I'm on it. ;-)
****************************************************************
From: Randy Brukardt
Sent: Saturday, January 20, 2018 7:29 PM
> I agree that you have to do "tag propagation" to properly handle tag
> indeterminate calls. Has anyone claimed otherwise?
Not that I know of, but based on my compiler surveys, no one implements it other
than Janus/Ada. Admittedly, I haven't checked this recently.
I've long had a tagged Bignum-like package on my ACATS test to-construct list
(because one needs usage-orientation for such tests) in order to test this rule.
So far as I can tell, the ACATS doesn't currently test cases like those that
arise in Bignum:
procedure Something (Val : in out Num'Class) is
begin
Val := + Zero; -- Zero gets the tag of Val, propagated through "+".
declare
Org : Num'Class := Val + (- One); -- Org and One get the tag of Val.
begin
...
end;
end Something;
I'll probably come up with more realistic-looking expressions for this test, but
the idea should be obvious. (I'll have to test both static and dynamic binding,
as well as tag indeterminate cases.)
****************************************************************
From: John Barnes
Sent: Monday, January 22, 2018 5:49 AM
I wrote a bignum package in Ada 83 some 30 years ago. I did make some updates to
use Ada 95, mainly child packages. I still use it for numerical stuff for
courses at Oxford.
Notable points perhaps.
I did use a power of 10 for the base to ease IO. It was originally on a 16 bit
machine. (386 perhaps). It still works on this horrid Windows 10. Not much
faster than on my old XP laptop. I don't know what Windows 10 is doing.
Obviously playing with itself - ridiculous.
I provided constants Zero and One. I didn't think any others were necessary.
Others were provided by eg
Two: Number := Make_Number(2);
I provided a package for subprograms Add, Sub, Mul, Div, Neg, Compare, Length,
To_Number, To_Text, To_Integer.
And a package for functions +, -, abs, *, /, rem, mod, <, <=, >, >=, =
And other packages for I/O.
Long time ago. Certainly very useful.
****************************************************************
From: Steve Baird
Sent: Monday, January 22, 2018 12:33 PM
> I'd add:
> 8) IOs
> Should an IO package be associated to each of these bignums?
Good question.
If we provide conversion functions to and from String then would any further IO
support be needed?
****************************************************************
From: Steve Baird
Sent: Monday, January 22, 2018 1:24 PM
> ...
>> Questions/observations include:
>
> 0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
>
> If so, then we can use prefix notation on functions like Numerator and
> Denominator. We could also consider deriving both versions (usual and
> bounded) from an abstract ancestor.
If we go this way, then should this common ancestor be an interface type? I'd
say yes.
Does it then get all the same ops, so that the non-abstract ops declared for the
Bounded and Unbounded types would all be overriding?
Would this make the AI12-0243-ish issues any worse (consider the proposed
Nonzero_Integer parameter subtype mentioned earlier)? I know these problems are
bad enough already, but my question is whether this would make matters any
worse.
>> 2) Which ops do we include? It seems obvious that we define at least
>> the arithmetic and relational ops that are defined for any
>> predefined integer (respectively float) type for Big_Integer
>> (respectively, Big_Rational).
>>
>> What Pre/Postconditions are specified for these ops?
>> These might involve subtype predicates.
>> For example (suggested by Bob), do we want
>>
>> subtype Nonzero_Integer is Big_Integer with
>> Predicate => Nonzero_Integer /= Zero;
>> function "/"
>> (X: Big_Integer; Y: Nonzero_Integer) return Big_Integer;
>> -- similar for "mod", "rem".
>>
>> ?
>
> Shouldn't this predicate raise Constraint_Error rather than defaulting
> to Assertion_Error, to be more like the other numeric operations?
Good point; I agree.
>> 3) It seems clear that we don't want the bounded form of either
>> package to "with" the unbounded form but we do want conversion
>> functions for going between corresponding bounded and unbounded
>> types. Perhaps these go in child units of the two bounded packages
>> (those child units could then "with" the corresponding unbounded
>> packages).
>
> Alternatively, both could be derived from an abstract type, and a
> class-wide conversion provided. That would get rid of the empty
> package in your proposal. :-)
Could you provide a more detailed spec? I don't see how this would work, but I
suspect that I'm misunderstanding your proposal.
>> 4) We need an Assign procedure. In the unbounded case it can be just
>> a wrapper for predefined assignment, but in the bounded case it
>> has to deal with the case where the two arguments have different
>> capacities. It's fairly obvious what to do in most cases, but what
>> about assigning a Big_Rational value which cannot be represented
>> exactly given the capacity of the target. Raise an exception or
>> round?
>
> I think I'd raise Capacity_Error. (Isn't that what the containers do?)
> Having exact math be silently non-exact seems like exactly (pun) the
> wrong thing to do.
Is it that simple? Suppose somebody wants large rationals (e.g., 2048-bit
numerators and denominators) with rounding. It's not that they require exact
arithmetic - they just want a lot more range/precision than what you get from
Ada's numeric types. It may be that this is an unimportant corner case and you
are right to dismiss it; I don't know.
>> 6) Do we want functions to describe the mapping between Capacity
>> discriminant values and the associated set of representable values?
>> For example, a function from a value (Big_Integer or Big_Rational)
>> to the smallest capacity value that could be used to represent it.
>> For Big_Integer there could presumably be Min and Max functions
>> that take a capacity argument. For Big_Rational, it's not so clear.
>> We could require, for example, that a given capacity value allows
>> representing a given Big_Rational value if it is >= the sum of
>> the capacity requirements of the Numerator and the Denominator.
>
> It seems that the Capacity needs to mean something to the end user,
> not just the compiler. So such functions seem necessary, but KISS for those!!
Am I right in guessing that you'd like these functions to be portable (as
opposed to being implementation-defined)?
****************************************************************
From: Randy Brukardt
Sent: Monday, January 22, 2018 3:41 PM
> > I'd add:
> > 8) IOs
> > Should an IO package be associated to each of these bignums?
>
> Good question.
>
> If we provide conversion functions to and from String then would any
> further IO support be needed?
We currently have Text_IO nested packages or children for pretty much any type
for which it makes sense to have text input-output, despite the fact that every
such type has an Image function or the equivalent (To_String for unbounded
strings).
So I'd rather expect an Ada.Text_IO.BigNum_IO package. If we don't define it now,
we will the next time around.
(The Janus/Ada UnivMath package has a complete set of Text_IO packages, and they
are heavily used. I believe they can output both rational and decimal
representation for the universal_real type.)
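[A hypothetical sketch of the kind of Text_IO child anticipated here; the
package name and subprogram profiles are illustrative only, not proposed
wording - Editor.]
   with Ada.Numerics.Big_Numbers.Big_Integers;
   use  Ada.Numerics.Big_Numbers.Big_Integers;
   package Ada.Text_IO.Big_Integer_IO is
      Default_Width : Field := 0;
      Default_Base  : Number_Base := 10;
      procedure Put (Item  : Big_Integer;
                     Width : Field := Default_Width;
                     Base  : Number_Base := Default_Base);
      procedure Get (Item : out Big_Integer; Width : Field := 0);
   end Ada.Text_IO.Big_Integer_IO;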
****************************************************************
From: Randy Brukardt
Sent: Monday, January 22, 2018 3:36 PM
> > Steve Baird writes:
> > ...
> >> Questions/observations include:
> >
> > 0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
> >
> > If so, then we can use prefix notation on functions like Numerator
> > and Denominator. We could also consider deriving both versions
> > (usual and
> > bounded) from an abstract ancestor.
>
> If we go this way, then should this common ancestor be an interface
> type? I'd say yes.
I suggested making it abstract so it could have some concrete operations if
those made sense. But perhaps they don't make sense.
> Does it then get all the same ops, so that the non-abstract ops
> declared for the Bounded and Unbounded types would all be overriding?
I would expect that the vast majority of operations are in the interface, so
dispatching can be used, and one can write class-wide algorithms that work with
any Bignum representation. Probably the capacity-specific operations would be
left out.
> Would this make the AI12-0243-ish issues any worse (consider the
> proposed Nonzero_Integer parameter subtype mentioned earlier)? I know
> these problems are bad enough already, but my question is whether this
> would make matters any worse.
It just makes a solution more urgent, but it doesn't change the issues any.
...
> >> 3) It seems clear that we don't want the bounded form of either
> >> package to "with" the unbounded form but we do want conversion
> >> functions for going between corresponding bounded and unbounded
> >> types. Perhaps these go in child units of the two bounded packages
> >> (those child units could then "with" the corresponding unbounded
> >> packages).
> >
> > Alternatively, both could be derived from an abstract type, and a
> > class-wide conversion provided. That would get rid of the empty
> > package in your proposal. :-)
>
> Could you provide a more detailed spec? I don't see how this would
> work, but I suspect that I'm misunderstanding your proposal.
I was thinking about including cross-cut operations in the spec, something
like:
type Bignum is abstract tagged private;
function Convert (Val : in Bignum'Class) return Bignum;
but thinking about it now, I can't figure out how one would implement one of
those.
You'd probably have to have a concrete universal representation to make that
work:
function Convert (Val : in Bignum) return Universal_Big;
function Convert (Val : in Universal_Big) return BigNum;
but of course that would bring in the memory allocation/finalization issues
that you are trying to avoid.
So at this moment I'm thinking that direct conversions would have to be left
out; you could generally do it through intermediary types like Max_Integer
using Numerator/Denominator.
> >> 4) We need an Assign procedure. In the unbounded case it can be just
> >> a wrapper for predefined assignment, but in the bounded case it
> >> has to deal with the case where the two arguments have different
> >> capacities. It's fairly obvious what to do in most cases, but what
> >> about assigning a Big_Rational value which cannot be represented
> >> exactly given the capacity of the target. Raise an exception or
> >> round?
> >
> > I think I'd raise Capacity_Error. (Isn't that what the containers
> > do?) Having exact math be silently non-exact seems like exactly
> > (pun) the wrong thing to do.
>
> Is it that simple? Suppose somebody wants large rationals (e.g.,
> 2048-bit numerators and denominators) with rounding.
> It's not that they require exact arithmetic - they just want a lot
> more range/precision than what you get from Ada's numeric types.
> It may be that this is an unimportant corner case and you are right to
> dismiss it; I don't know.
We're not trying to be all things to all people. I'd consider these "exact"
math packages and treat them accordingly. If there is an abstract root, one
can "easily" make a clone version that uses rounding if someone needs that.
(Defining the rounding is hard, as you noted elsewhere.)
> >> 6) Do we want functions to describe the mapping between Capacity
> >> discriminant values and the associated set of representable values?
> >> For example, a function from a value (Big_Integer or Big_Rational)
> >> to the smallest capacity value that could be used to represent it.
> >> For Big_Integer there could presumably be Min and Max functions
> >> that take a capacity argument. For Big_Rational, it's not so clear.
> >> We could require, for example, that a given capacity value allows
> >> representing a given Big_Rational value if it is >= the sum of
> >> the capacity requirements of the Numerator and the Denominator.
> >
> > It seems that the Capacity needs to mean something to the end user,
> > not just the compiler. So such functions seem necessary, but KISS
> > for those!!
>
> Am I right in guessing that you'd like these functions to be portable
> (as opposed to being implementation-defined)?
I think so; otherwise it rather defeats the purpose of language-defined packages
(to provide the ultimate in portability).
****************************************************************
From: Bob Duff
Sent: Sunday, January 28, 2018 11:29 AM
> Steve Baird writes:
> ...
> > Questions/observations include:
>
> 0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
Surely not. I think we want to be competitive (efficiency-wise) with all
sorts of other languages, and taggedness will destroy that.
Let's not have another "tampering" fiasco.
> If so, then we can use prefix notation on functions like Numerator and
> Denominator.
I'm not a big fan of that feature, but if we want it, we should figure out
how to do it for untagged types.
>... We could also consider deriving both versions (usual and
> bounded) from an abstract ancestor.
Consider, ..., and reject. ;-)
****************************************************************
From: Jeff Cousins
Sent: Sunday, January 28, 2018 12:21 PM
John Barnes wrote:
I wrote a bignum package in Ada 83 some 30 years ago
Would you be able to let us see the spec for this?
****************************************************************
From: Randy Brukardt
Sent: Sunday, January 28, 2018 9:13 PM
> > 0) Should Big_Integer and (especially) Big_Rational be
> visibly tagged?
>
> Surely not. I think we want to be competitive
> (efficiency-wise) with all sorts of other languages, and taggedness
> will destroy that.
??? Tags (as opposed to controlled types) add almost no overhead, especially
in a case like unbounded Bignum which probably will have to be controlled
anyway. (The only overhead of a tagged type is initializing the tag in the
object.) So long as one uses a single specific type, everything is statically
bound and the cost is essentially the same as an untagged type (again,
especially as the underlying specific type most likely will be tagged and
certainly will be large).
I wasn't suggesting that we define any class-wide operations other than
representation conversion (which should be rarely used in any case).
Class-wide operations are the only operations that add overhead.
> Let's not have another "tampering" fiasco.
I'm still waiting for an example program showing this supposed "fiasco". No
one has ever submitted one to the ARG. We've essentially been asked to believe
this issue by repeated assertion. (And most tampering checks can be done at
compile-time, with sufficient will.)
If there was a fiasco here, it was that the goals of the containers did not
include making them particularly fast. If they are then misused for
high-performance code, one is going to get the expected disappointment.
Perhaps we started with the wrong set of goals.
> > If so, then we can use prefix notation on functions like Numerator
> > and Denominator.
>
> I'm not a big fan of that feature, but if we want it, we should figure
> out how to do it for untagged types.
We've already discussed that in a different e-mail thread. It seems dangerous.
> >... We could also consider deriving both versions (usual and
> > bounded) from an abstract ancestor.
>
> Consider, ..., and reject. ;-)
Again, why? We have a request for a "universal" numeric type, and the only
sane way to provide that is with dispatching. Probably, we'll just forget
that request, but it seems worth spending a bit of time to see if it makes
sense.
****************************************************************
From: John Barnes
Sent: Tuesday, January 30, 2018 4:22 AM
I am feverishly giving lectures on numbers at Oxford at the moment but I am
trying to keep an eye on what the ARG is up to.
Did you know that a new Mersenne prime was discovered on Boxing Day (26
December) 2017? It is 2**77232917 - 1 and has only 23,249,425 digits. Will
the Bignum package cope with it?
****************************************************************
From: Tucker Taft
Sent: Tuesday, January 30, 2018 3:02 PM
Yes, I noticed that new Mersenne prime as well. And 23 mega-digits is nothing
for a modern iPhone. ;-) Just be sure to set aside a bit of extra time to
print it out using the Image function. Except in base 2, of course, which
I could do right now. Ready: 1111111111 [... 77,232,900 1's] 1111111!
****************************************************************
From: Jeff Cousins
Sent: Wednesday, January 31, 2018 6:31 AM
[This is John Barnes' Bignum package and some test programs - Editor.]
-- file books\fun\progs\numbers.ada
-- Restructured using children
-- Types and No_Of_Places in parent package
-- 20-10-06
package Numbers is
Max_Index: constant := 1000;
subtype Index is Integer range 0 .. Max_Index;
type Number(Max_Digits: Index := 1) is private;
Zero, One: constant Number;
Number_Error : exception;
private
Base_Exp: constant := 4;
Base: constant := 10 ** Base_Exp;
type Base_Digit is range -Base .. 2 * Base - 1;
type Base_Digit_Array is
array(Index range <>) of Base_Digit;
type Number(Max_Digits: Index := 1) is
record
Sign: Integer := +1;
Length: Index := 0;
D: Base_Digit_Array(1..Max_Digits);
end record;
Zero: constant Number := (0, +1, 0, (others => 0));
One: constant Number := (1, +1, 1, (1 => 1));
function No_Of_Places(N: Number) return Integer;
end Numbers;
package body Numbers is
function No_Of_Places(N: Number) return Integer is
begin
if N.Length = 0 then
return 1;
else
return N.Length * Base_Exp;
end if;
end No_Of_Places;
end Numbers;
--------------------------------------------------------------------
package Numbers.Proc is
subtype Index is Numbers.Index;
subtype Number is Numbers.Number;
Zero: Number renames Numbers.Zero;
One: Number renames Numbers.One;
Number_Error: exception renames Numbers.Number_Error;
procedure Add(X, Y: Number; Z: out Number);
procedure Sub(X, Y: Number; Z: out Number);
procedure Mul(X, Y: Number; Z: out Number);
procedure Div(X, Y: Number; Quotient,
Remainder: out Number);
procedure Neg(X: in out Number);
function Compare(X, Y: Number) return Integer;
function Length(N: Number) return Index;
procedure To_Number(S: String; N: out Number);
procedure To_Number(I: Integer; N: out Number);
procedure To_Text(N: Number; S: out String);
procedure To_Integer(N: Number; I: out Integer);
end Numbers.Proc;
--------------------------------------------------------------------
package Numbers.IO is
Default_Width: Natural := 0;
procedure Put(Item: Number;
Width: Natural := Default_Width);
procedure Get(Item: out Number);
end Numbers.IO;
--------------------------------------------------------------------
package Numbers.Func is
subtype Number is Numbers.Number;
Zero: Number renames Numbers.Zero;
One: Number renames Numbers.One;
Number_Error: exception renames Numbers.Number_Error;
function "+" (X: Number) return Number;
function "-" (X: Number) return Number;
function "abs" (X: Number) return Number;
function "+" (X, Y: Number) return Number;
function "-" (X, Y: Number) return Number;
function "*" (X, Y: Number) return Number;
function "/" (X, Y: Number) return Number;
function "rem" (X, Y: Number) return Number;
function "mod" (X, Y: Number) return Number;
function "**" (X: Number; N: Natural) return Number;
function "<" (X, Y: Number) return Boolean;
function "<=" (X, Y: Number) return Boolean;
function ">" (X, Y: Number) return Boolean;
function ">=" (X, Y: Number) return Boolean;
function "=" (X, Y: Number) return Boolean;
function Make_Number(S: String) return Number;
function Make_Number(I: Integer) return Number;
function String_Of(N: Number) return String;
function Integer_Of(N: Number) return Integer;
end Numbers.Func;
--------------------------------------------------------------------
package body Numbers.Proc is
Base_Squared: constant := Base**2;
subtype Single is Base_Digit;
type Double is range -Base_Squared .. 2*Base_Squared - 1;
function Unsigned_Compare(X, Y: Number) return Integer is
-- ignoring signs
-- returns +1, 0 or -1 according as X >, = or < Y
begin
if X.Length > Y.Length then return +1; end if;
if X.Length < Y.Length then return -1; end if;
for I in reverse 1 .. X.Length loop
if X.D(I) > Y.D(I) then return +1; end if;
if X.D(I) < Y.D(I) then return -1; end if;
end loop;
return 0; -- the numbers are equal
end Unsigned_Compare;
function Compare(X, Y: Number) return Integer is
-- returns +1, 0 or -1 according as X >, = or < Y
begin
if X.Sign /= Y.Sign then return X.Sign; end if;
return Unsigned_Compare(X, Y) * X.Sign;
end Compare;
procedure Raw_Add(X, Y: Number; Z: out Number) is
-- assumes X not smaller than Y
Carry: Single := 0;
Digit: Single;
ZL: Index := X.Length; -- length of answer
begin
if Z.Max_Digits < ZL then
raise Number_Error; -- Z not big enough to hold X
end if;
for I in 1 .. ZL loop
Digit := X.D(I) + Carry;
if I <= Y.Length then
Digit := Digit + Y.D(I);
end if;
if Digit >= Base then
Carry := 1; Digit := Digit - Base;
else
Carry := 0;
end if;
Z.D(I) := Digit;
end loop;
if Carry /= 0 then
if ZL = Z.Max_Digits then
raise Number_Error; -- too big to fit in Z
end if;
ZL := ZL + 1;
Z.D(ZL) := Carry;
end if;
Z.Length := ZL;
end Raw_Add;
procedure Raw_Sub(X, Y: Number; Z: out Number) is
-- assumes X not smaller than Y
Carry: Single := 0;
Digit: Single;
ZL: Index := X.Length; -- length of answer
begin
if Z.Max_Digits < ZL then
raise Number_Error; -- Z not big enough to hold X
end if;
for I in 1 .. ZL loop
Digit := X.D(I) - Carry;
if I <= Y.Length then
Digit := Digit - Y.D(I);
end if;
if Digit < 0 then
Carry := 1; Digit := Digit + Base;
else
Carry := 0;
end if;
Z.D(I) := Digit;
end loop;
while Z.D(ZL) = 0 loop -- SHOULD THIS NOT FAIL???
ZL := ZL - 1; -- remove leading zeroes
end loop;
Z.Length := ZL;
end Raw_Sub;
procedure Add(X, Y: Number; Z: out Number) is
UCMPXY: Integer := Unsigned_Compare(X, Y);
begin
if X.Sign = Y.Sign then
Z.Sign := X.Sign;
if UCMPXY >= 0 then
Raw_Add(X, Y, Z);
else
Raw_Add(Y, X, Z); -- reverse if Y larger
end if;
else
if UCMPXY > 0 then
Raw_Sub(X, Y, Z);
Z.Sign := X.Sign;
elsif UCMPXY < 0 then
Raw_Sub(Y, X, Z);
Z.Sign := -X.Sign;
else -- answer is zero
Z.Sign := +1; Z.Length := 0;
end if;
end if;
end Add;
procedure Sub(X, Y: Number; Z: out Number) is
UCMPXY: Integer := Unsigned_Compare(X, Y);
begin
if X.Sign /= Y.Sign then
Z.Sign := X.Sign;
if UCMPXY >= 0 then
Raw_Add(X, Y, Z);
else
Raw_Add(Y, X, Z); -- reverse if Y larger
end if;
else
if UCMPXY > 0 then
Raw_Sub(X, Y, Z);
Z.Sign := X.Sign;
elsif UCMPXY < 0 then
Raw_Sub(Y, X, Z);
Z.Sign := -X.Sign;
else -- answer is zero
Z.Sign := +1; Z.Length := 0;
end if;
end if;
end Sub;
procedure Neg(X: in out Number) is
begin
-- do nothing in zero case
if X.Length = 0 then return; end if;
X.Sign := -X.Sign;
end Neg;
function Length(N: Number) return Index is
begin
return N.Length;
end Length;
procedure Mul(X, Y: Number; Z: out Number) is
Carry: Double;
Digit: Double;
ZL: Index;
begin
if Z.Max_Digits < X.Length + Y.Length then
raise Number_Error;
end if;
if X.Length = 0 or Y.Length = 0 then -- zero case
Z.Sign := +1; Z.Length := 0;
return;
end if;
ZL := X.Length + Y.Length - 1;
-- lower possible length of answer
-- copy X to top of Z; so X and Z can be same array
for I in reverse 1 .. X.Length loop
Z.D(I + Y.Length) := X.D(I);
end loop;
declare -- initialise limits and length of cycle
Z_Index: Index;
Y_Index: Index;
Initial_Z_Index: Index := Y.Length + 1;
Initial_Y_Index: Index := 1;
Cycle_Length: Index := 1;
begin
Carry := 0;
for I in 1 .. ZL loop
Digit := Carry;
Carry := 0;
Z_Index := Initial_Z_Index;
Y_Index := Initial_Y_Index;
for J in 1 .. Cycle_Length loop
if Digit > Base_Squared then
Digit := Digit - Base_Squared;
Carry := Carry + Base;
end if;
Digit := Digit + Double(Z.D(Z_Index))
* Double(Y.D(Y_Index));
Z_Index := Z_Index + 1;
Y_Index := Y_Index - 1;
end loop;
-- now adjust limits and length of cycle
if I < Y.Length then
Cycle_Length := Cycle_Length + 1;
Initial_Y_Index := Initial_Y_Index + 1;
else
Initial_Z_Index := Initial_Z_Index + 1;
end if;
if I < X.Length then
Cycle_Length := Cycle_Length + 1;
end if;
Cycle_Length := Cycle_Length - 1;
Carry := Carry + Digit / Base;
Z.D(I) := Single(Digit mod Base);
end loop;
end;
if Carry /= 0 then -- one more digit in answer
ZL := ZL + 1;
Z.D(ZL) := Single(Carry);
end if;
Z.Length := ZL;
Z.Sign := X.Sign * Y.Sign;
end Mul;
procedure Div(X, Y: Number; Quotient,
Remainder: out Number) is
U: Number renames Quotient;
V: Number renames Remainder;
Digit, Scale, Carry: Double;
U0, U1, U2: Double;
V1, V2: Double;
QD: Double;
LOQ: constant Index := Y.Length;
HIQ: constant Index := X.Length;
QL: Index;
RL: Index;
QStart: Index;
J : Index;
begin
if Y.Length = 0 then
raise Number_Error;
end if;
if Quotient.Max_Digits < X.Length or
Remainder.Max_Digits < Y.Length then
raise Number_Error;
end if;
if X.Length < Y.Length then -- Quotient is definitely zero
Quotient.Sign := +1;
Quotient.Length := 0;
Remainder.Sign := X.Sign;
Remainder.Length := X.Length;
for I in 1 .. X.Length loop
Remainder.D(I) := X.D(I);
end loop;
return;
end if;
QL := X.Length - Y.Length + 1;
RL := Y.Length;
QStart := QL;
-- compute normalizing factor
Scale := Base/Double(Y.D(Y.Length)+1);
-- scale X and copy to U
Carry := 0;
for I in 1 .. X.Length loop
Digit := Double(X.D(I)) * Scale + Carry;
Carry := Digit / Base;
U.D(I) := Single(Digit mod Base);
end loop;
U0 := Carry; -- leading digit of dividend
-- scale Y and copy to V
Carry := 0;
for I in 1 .. Y.Length loop
Digit := Double(Y.D(I)) * Scale + Carry;
Carry := Digit / Base;
V.D(I) := Single(Digit mod Base);
end loop;
-- no further carry
-- set V1 and V2 to first two digits of divisor
V1 := Double(V.D(Y.Length));
if Y.Length > 1 then
V2 := Double(V.D(Y.Length-1));
else
V2 := 0;
end if;
-- now iterate over digits in answer
-- with U0, U1 and U2 being first three digits of dividend
for I in reverse LOQ .. HIQ loop
U1 := Double(U.D(I));
if Y.Length > 1 then
U2 := Double(U.D(I-1));
else
U2 := 0;
end if;
-- now set initial estimate of digit in quotient
if U0 = V1 then
QD := Base - 1;
else
QD := (U0 * Base + U1) / V1;
end if;
-- now refine estimate by considering U2 also
while V2*QD > (U0*Base+U1-QD*V1)*Base + U2 loop
QD := QD - 1;
end loop;
-- QD is now correct digit or possibly one too big
-- subtract QD times V from U
Carry := 0;
J := QStart;
for I in 1 .. Y.Length loop
Digit := Double(U.D(J)) - Carry
- QD * Double(V.D(I));
if Digit < 0 then
Carry := (-1-Digit) / Base + 1;
Digit := Digit + Carry * Base;
else
Carry := 0;
end if;
U.D(J) := Single(Digit);
J := J + 1;
end loop;
if Carry > U0 then -- estimate was too large
declare
Carry, Digit: Single;
begin
QD := QD - 1;
Carry := 0; J := QStart;
for I in 1 .. Y.Length loop
Digit := U.D(J) + Carry + V.D(I);
if Digit >= Base then
Carry := 1;
Digit := Digit - Base;
else
Carry := 0;
end if;
U.D(J) := Digit;
J := J + 1;
end loop;
end;
end if;
-- QD is now the required digit
U0 := Double(U.D(I)); U.D(I) := Single(QD);
QStart := QStart - 1;
end loop;
-- delete possible leading zero in quotient
if U.D(HIQ) = 0 then
QL := QL - 1;
end if;
-- copy remainder into place and scale
-- top digit is in U0 still
Digit := U0;
for I in reverse 2 .. RL loop
Remainder.D(I) := Single(Digit/Scale);
Carry := Digit mod Scale;
Digit := Double(U.D(I-1)) + Carry * Base;
end loop;
Remainder.D(1) := Single(Digit/Scale);
-- delete leading zeroes in remainder
while RL > 0 and then Remainder.D(RL) = 0 loop
RL := RL - 1;
end loop;
Remainder.Length := RL;
if Remainder.Length = 0 then
Remainder.Sign := +1;
else
Remainder.Sign := X.Sign;
end if;
-- slide quotient into place
-- Quotient.D(1 .. QL) := U.D(LOQ .. HIQ);
for I in 1 .. QL loop
Quotient.D(I) := U.D(I + LOQ - 1);
end loop;
Quotient.Length := QL;
if Quotient.Length = 0 then
Quotient.Sign := +1;
else
Quotient.Sign := X.Sign * Y.Sign;
end if;
end Div;
procedure To_Number(S: String; N: out Number) is
NL: Index := 0;
Place: Integer := 0;
Is_A_Number: Boolean := False;
Digit: Single := 0;
Ch: Character;
Last_I: Positive;
Dig_Of: constant array (Character range '0' .. '9') of
Single := (0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
begin
N.Sign := +1; -- set default sign
-- scan string from end
for I in reverse S'Range loop
Last_I := I; -- note how far we have got
Ch := S(I);
case Ch is
when '0' .. '9' =>
-- add digit to number so far
if Place = 0 then
NL := NL + 1;
if NL > N.Max_Digits then
raise Number_Error;
end if;
end if;
Digit := Digit + Dig_Of(Ch) * 10**Place;
Place := Place + 1;
if Place = Base_Exp then
N.D(NL) := Digit;
Digit := 0;
Place := 0;
end if;
Is_A_Number := True;
when '_' =>
-- underscore must be embedded in digits
if not Is_A_Number then
raise Number_Error;
end if;
Is_A_Number := False;
when '+' | '-' | ' ' =>
-- lump so far must be a valid number
if not Is_A_Number then
raise Number_Error;
end if;
if Ch ='-' then N.Sign := -1; end if;
exit; -- leave loop
when others =>
raise Number_Error;
end case;
end loop;
-- check we had a number
if not Is_A_Number then
raise Number_Error;
end if;
-- add the last digit if necessary
if Place /= 0 then
N.D(NL) := Digit;
end if;
-- check that any other characters are leading spaces
for I in S'First .. Last_I - 1 loop
if S(I) /= ' ' then
raise Number_Error;
end if;
end loop;
-- remove leading zeroes if any, beware zero case
while NL > 0 and then N.D(NL) = 0 loop
NL := NL - 1;
end loop;
N.Length := NL;
end To_Number;
procedure To_Number(I: Integer; N: out Number) is
NL: Index := 0;
II: Integer;
begin
if I = 0 then
N.Sign := +1; N.Length := 0;
return;
end if;
if I > 0 then
II := I; N.Sign := +1;
else
II := -I; N.Sign := -1;
end if;
while II /= 0 loop
NL := NL + 1;
if NL > N.Max_Digits then
raise Number_Error;
end if;
N.D(NL) := Single(II mod Base);
II := II / Base;
end loop;
N.Length := NL;
end To_Number;
procedure To_Text(N: Number; S: out String) is
SI: Natural := S'Last;
Digit: Single;
Char_Of: constant array (Single range 0 .. 9) of
Character := "0123456789";
begin
if N.Length = 0 then -- zero case
if SI < 2 then
raise Number_Error;
end if;
S(SI) := Char_Of(0);
S(SI-1) := '+';
for I in 1 .. SI-2 loop
S(I) := ' ';
end loop;
return;
end if;
if SI < Base_Exp * N.Length + 1 then
raise Number_Error;
end if;
for I in 1 .. N.Length loop
Digit := N.D(I);
for J in 1 .. Base_Exp loop
S(SI) := Char_Of(Digit mod 10);
Digit := Digit / 10;
SI := SI - 1;
end loop;
end loop;
while S(SI + 1) = '0' loop
SI := SI + 1; -- delete leading zeroes
end loop;
if N.Sign = +1 then
S(SI) := '+';
else
S(SI) := '-';
end if;
for I in 1 .. SI - 1 loop
S(I) := ' ';
end loop;
end To_Text;
procedure To_Integer(N: Number; I: out Integer) is
II: Integer := 0;
begin
for I in reverse 1 .. N.Length loop
II := II * Base + Integer(N.D(I));
end loop;
if N.Sign = -1 then II := -II; end if;
I := II;
end To_Integer;
end Numbers.Proc;
--------------------------------------------------------------------
with Ada.Text_IO; use Ada;
with Numbers.Proc;
package body Numbers.IO is
use Proc;
procedure Put(Item: Number;
Width: Natural := Default_Width) is
Block_Size: constant := 3;
Places: Integer := No_Of_Places(Item);
S: String(1 .. Places + 1);
SP: Positive := 1;
Before_Break: Integer;
begin
To_Text(Item, S);
-- allow for leading spaces in S
while S(SP) = ' ' loop
SP := SP + 1; Places := Places - 1;
end loop;
-- now output leading spaces for padding if any
for I in 1 .. Width -
(Places + 1 + (Places - 1) / Block_Size) loop
Text_IO.Put(' ');
end loop;
if S(SP) = '+' then S(SP) := ' '; end if;
Text_IO.Put(S(SP)); -- output minus or space
-- output digits with underscores every "Blocksize"
Before_Break := (Places - 1) rem Block_Size + 1;
for I in SP + 1 .. S'Last loop
if Before_Break = 0 then
Text_IO.Put('_');
Before_Break := Block_Size;
end if;
Text_IO.Put(S(I));
Before_Break := Before_Break - 1;
end loop;
end Put;
procedure Get(Item: out Number) is
-- declare string large enough to hold maximum value
-- allows every other character to be an underscore!
S: String(1 .. Base_Exp * Max_Index * 2);
SP: Positive := 1;
Places: Integer := 0;
Ch: Character;
EOL: Boolean; -- end of line
begin
-- loop for first digit or sign, skipping spaces
loop
Text_IO.Get(Ch);
case Ch is
when ' ' =>
null;
when '+'| '-' =>
S(SP) := Ch; SP := SP + 1;
exit;
when '0' .. '9' =>
S(SP) := Ch; SP := SP + 1; Places := 1;
exit;
when others =>
raise Number_Error;
end case;
end loop;
-- now accept only digits and underscores
-- count the digits in Places
-- stop on end of line or other character
loop
Text_IO.Look_Ahead(Ch, EOL);
exit when EOL;
Text_IO.Get(Ch);
case Ch is
when '0' .. '9' =>
S(SP) := Ch; SP := SP + 1;
Places := Places + 1;
when '_' =>
S(SP) := Ch; SP := SP + 1;
when others =>
exit;
end case;
end loop;
-- now declare a Number big enough
-- note Item assumed unconstrained
declare
Result: Number((Places - 1)/Base_Exp + 1);
begin
To_Number(S(1 .. SP - 1), Result);
Item := Result;
end;
end Get;
end Numbers.IO;
--------------------------------------------------------------------
with Numbers.Proc;
package body Numbers.Func is
use Proc;
function "+" (X: Number) return Number is
begin
return X;
end "+";
function "-" (X: Number) return Number is
N: Number(X.Max_Digits);
begin
N := X; Neg(N);
return N;
end "-";
function "abs" (X: Number) return Number is
begin
if X < Zero then return -X; else return X; end if;
end "abs";
function "+" (X, Y: Number) return Number is
Z: Number(Index'Max(Length(X), Length(Y)) + 1);
begin
Add(X, Y, Z);
return Z;
end "+";
function "-" (X, Y: Number) return Number is
Z: Number(Index'Max(Length(X), Length(Y)) + 1);
begin
Sub(X, Y, Z);
return Z;
end "-";
function "*" (X, Y: Number) return Number is
Z: Number(Length(X) + Length(Y));
begin
Mul(X, Y, Z);
return Z;
end "*";
function "/" (X, Y: Number) return Number is
Q: Number(Length(X));
R: Number(Length(Y));
begin
Div(X, Y, Q, R);
return Q;
end "/";
function "rem" (X, Y: Number) return Number is
Q: Number(Length(X));
R: Number(Length(Y));
begin
Div(X, Y, Q, R);
return R;
end "rem";
function "mod" (X, Y: Number) return Number is
Q: Number(Length(X));
R: Number(Length(Y));
begin
Div(X, Y, Q, R);
if (X < Zero and Y > Zero) or (X > Zero and Y < Zero) then
R := R + Y;
end if;
return R;
end "mod";
function "**" (X: Number; N: Natural) return Number is
Result: Number := One;
Term: Number := X;
M: Natural := N;
begin
loop
if M rem 2 /= 0 then
Result := Term * Result;
end if;
M := M / 2;
exit when M = 0;
Term := Term * Term;
end loop;
return Result;
end "**";
function "<" (X, Y: Number) return Boolean is
begin
return Compare(X, Y) < 0;
end "<";
function "<=" (X, Y: Number) return Boolean is
begin
return Compare(X, Y) <= 0;
end "<=";
function ">" (X, Y: Number) return Boolean is
begin
return Compare(X, Y) > 0;
end ">";
function ">=" (X, Y: Number) return Boolean is
begin
return Compare(X, Y) >= 0;
end ">=";
function "=" (X, Y: Number) return Boolean is
begin
return Compare(X, Y) = 0;
end "=";
function Make_Number(S: String) return Number is
Result: Number((S'Length - 1) / Base_Exp + 1);
begin
To_Number(S, Result);
return Result;
end Make_Number;
function Make_Number(I: Integer) return Number is
Base_Digits: Index := 0;
II: Integer := abs I;
Base: constant := 10 ** Base_Exp;
begin
-- loop to determine discriminant for result
while II /= 0 loop
Base_Digits := Base_Digits + 1;
II := II / Base;
end loop;
declare
Result: Number(Base_Digits);
begin
To_Number(I, Result);
return Result;
end;
end Make_Number;
function String_Of(N: Number) return String is
Places: Integer := No_Of_Places(N);
S: String(1 .. Places + 1);
SP: Positive := 1;
begin
To_Text(N, S);
-- allow for leading spaces in S
while S(SP) = ' ' loop
SP := SP + 1;
end loop;
return S(SP .. S'Last);
end String_Of;
function Integer_Of(N: Number) return Integer is
Result: Integer;
begin
To_Integer(N, Result);
return Result;
end Integer_Of;
end Numbers.Func;
--- Numbers calculator ---------------------------------------------
--with ID, Numbers.Func; use Numbers.Func;
--package Numbers_ID is new ID(Number, Zero, One);
--with ID.IO, Numbers.IO; use Numbers.IO;
--package Numbers_ID.The_IO is new Numbers_ID.IO;
--with Numbers_ID.The_IO;
--with Numbers.Func;
--with Calculator;
--procedure Numcalc is
-- new Calculator.Run(Numbers_ID,
-- Numbers_ID.The_IO,
-- Numbers.Func."+");
--- test 1 - powers of 11 and 99 -----------------------------------
with Numbers.Proc; use Numbers.Proc;
with Ada.Text_IO; use Ada;
procedure Test_11_99 is
U: Number(2);
procedure P(X: Number) is
S: String(1 .. 150);
begin
To_Text(X, S);
Text_IO.Put(S); Text_IO.New_Line;
end P;
procedure Power(U: Number; V: Integer) is
W: Number(50);
begin
To_Number(1, W);
for I in 1 .. V loop
Mul(W, U, W);
P(W);
end loop;
end Power;
begin
To_Number(11, U);
Power(U, 50);
To_Number(99, U);
Power(U, 50);
end;
-- test 2 - Mersenne using procedural forms ------------------------
with Ada.Calendar; use Ada.Calendar;
with Numbers.Proc; use Numbers.Proc;
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
procedure Test2 is
Nop: constant := 30;
Loop_Start, Loop_End: Integer;
T_Start, T_End: Time;
Is_Prime: Boolean;
MM: Number(50);
Primes: array(1 .. Nop) of Integer :=
(3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,
61,67,71,73,79,83,89,97,101,103,107,109,113,127);
package Duration_IO is new Fixed_IO(Duration);
use Duration_IO;
procedure Lucas_Lehmer (P: Integer;
Mersenne: out Number;
Is_Prime: out Boolean) is
Two: Number(1);
M: Number(Mersenne.Max_Digits);
L, W, Quotient: Number(M.Max_Digits*2);
begin
To_Number(2, Two);
To_Number(4, L);
To_Number(1, M);
for I in 1 .. P loop
Mul(M, Two, M);
end loop;
Sub(M, One, M);
for I in 1 .. P-2 loop
Mul(L, L, W);
Sub(W, Two, W);
Div(W, M, Quotient, L);
-- L := (L**2 - Two) mod M;
end loop;
Is_Prime := Compare(L, Zero) = 0;
Mersenne := M;
end Lucas_Lehmer;
procedure Put(X: Number) is
S: String(1 .. 45);
begin
To_Text(X, S);
Put(S);
end Put;
begin
Put_Line("Start loop? "); Get(Loop_Start);
Put_Line("End loop? "); Get(Loop_End);
Put_Line("Mersenne Primes");
Put_Line(" Time P 2**P-1");
for I in Loop_Start .. Loop_End loop
New_Line;
T_Start := Clock;
Lucas_Lehmer(Primes(I), MM, Is_Prime);
T_End := Clock;
New_Line; Put(T_End - T_Start, 2, 1);
Put(Primes(I), 4); Put(" : ");
Put(MM);
if Is_Prime then
Put(" is prime");
else
Put(" is not prime");
end if;
end loop;
end Test2;
--- test 5 - Mersenne using functional forms -----------------------
with Ada.Calendar; use Ada.Calendar;
with Numbers.Func; use Numbers.Func;
with Numbers.IO; use Numbers.IO;
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
procedure Test5 is
Nop: constant := 30;
Loop_Start, Loop_End: Integer;
T_Start, T_End: Time;
Is_Prime: Boolean;
LL, MM: Number;
Primes: array(1 .. Nop) of Integer :=
(3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,
61,67,71,73,79,83,89,97,101,103,107,109,113,127);
package Duration_IO is new Fixed_IO(Duration);
use Duration_IO;
procedure Lucas_Lehmer (Q: Integer;
Mersenne: out Number;
Lout: out Number;
Is_Prime: out Boolean) is
Two: constant Number := Make_Number(2);
M: constant Number := Two**Q - One;
L: Number := Make_Number(4);
begin
for I in 1 .. Q-2 loop
L := (L**2 - Two); -- mod M; -- mod M here is optional;
Put(L); New_line(2);
end loop;
Is_Prime := L mod M = Zero;
Lout := L;
Mersenne := M;
end Lucas_Lehmer;
begin
Put_Line("Start loop? "); Get(Loop_Start);
Put_Line("End loop? "); Get(Loop_End);
Put_Line("Mersenne Primes");
Put_Line(" Time P 2**P-1");
for I in Loop_Start .. Loop_End loop
New_Line;
T_Start := Clock;
Lucas_Lehmer(Primes(I), MM, LL, Is_Prime);
T_End := Clock;
New_Line; Put(T_End - T_Start, 2, 1);
Put(Primes(I), 4); Put(" : ");
Put(MM, 20);
if Is_Prime then
Put(" is prime");
New_line;
put((MM+One)/Make_number(2)*MM, 30); Put(" is perfect");
New_Line;
else
Put(" is not prime");
end if;
New_line;
-- comment out next four lines to avoid detail
Put("L is "); Put(LL); Put(" equals "); Put(MM); Put(" times "); Put(LL/MM);
if not Is_Prime then
New_Line; Put(" remainder = "); Put(LL mod MM);
end if;
-- end of comment
end loop;
skip_Line(2);
end Test5;
--- test 6 - GCD and Mersenne --------------------------------------
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
with Numbers.Proc; use Numbers.Proc;
procedure Test6 is
subtype CNumber is Number(50);
M1, M2, M3: CNumber;
P1, P2, P3: Integer;
G, H: CNumber;
XX, YY, QQ, ZZ: CNumber;
Two: CNumber;
N: CNumber;
Start, Stop: Integer;
procedure GCD(X, Y: CNumber; Z: out CNumber) is
begin
XX := X; YY := Y;
while Compare(YY, Zero) /= 0 loop
Div(XX, YY, QQ, ZZ);
XX := YY;
YY := ZZ;
end loop;
Z := XX;
end GCD;
procedure Mersenne(P: Integer; M: in out CNumber) is
begin
To_Number(2, Two);
To_Number(1, N);
for I in 1 .. P loop
Mul(N, Two, N);
end loop;
Sub(N, One, N);
M := N;
end Mersenne;
begin
To_Number(2, Two);
Put_Line("Start? "); Get(Start);
Put_Line("Stop? "); Get(Stop);
for I in Start .. Stop loop
P1 := 2*I + 1;
Mersenne(P1, M1);
for J in 1 .. I -1 loop
P2 := 2*J + 1;
Mersenne(P2, M2);
GCD(M1, M2, G);
for K in 1 .. I loop
P3 := 2*K + 1;
Mersenne(P3, M3);
if Compare(M3, G) > 0 then
exit;
end if;
if Compare(M3, G) = 0 then
New_Line;
Put(P1); Put(P2); Put(P3);
exit;
end if;
end loop;
end loop;
end loop;
end Test6;
------- test 7 multiplication
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
with Numbers.Proc; use Numbers.Proc;
with Numbers.IO; use Numbers.IO;
procedure Test7 is
X: Number;
Y: Number;
Z: Number(50);
begin
Put_line("Multiplier test");
Put("X = "); Get(X);
Put("Y = "); Get(Y);
Mul(X, Y, Z);
Put_Line("product is ");
Put(Z);
New_Line(2);
end Test7;
****************************************************************
From: Jeff Cousins
Sent: Wednesday, January 31, 2018 6:17 AM
[These are some additional test programs for John Barnes' Bignum package; the
Numbers package was duplicated at the front, which I removed. - Editor.]
--package Monitor is
-- C1, C2, C3, C4: Integer;
--end;
package Primes is
Max: constant := 20000;
Prime: array (1..Max) of Integer;
pragma Elaborate_Body;
end;
package body Primes is
N: Integer:= 2;
Index: Integer := Prime'First;
Found: Boolean := False;
begin
-- initialization part to build prime table
Prime(Prime'First) := 2;
loop
N := N+1; Found := True;
for I in Prime'First .. Index loop
if N rem Prime(I) = 0 then -- divides by existing prime
Found := False; exit;
end if;
end loop;
if Found then
-- found a new prime
Index := Index+1; Prime(Index) := N;
exit when Index = Max;
end if;
end loop;
end Primes;
package Squares is
Square_Digits: Integer := 4;
Square_Ten_Power: Integer := 10**Square_Digits;
Poss_Last_Digits: array(0..Square_Ten_Power-1) of Boolean := (others => False);
pragma Elaborate_Body;
end;
package body Squares is
begin
-- make Poss_Last_Digits array
for I in 1 .. Square_Ten_Power loop
Poss_Last_Digits(I*I mod Square_Ten_Power) := True;
end loop;
end Squares;
with Numbers.Func; use Numbers.Func;
-- with Monitor;
procedure Square_root(XX: in Number; Try: Number; X: out Number; R: out Number) is
-- X is the largest X such that X*X + R equals XX with R >= zero
-- Try is initial guess
K, KK: Number;
DK: Number;
Three: Number := Make_Number(3);
begin
K := Try;
loop
KK := K*K;
DK := (XX - KK)/(K+K);
-- iterate until nearly there
if abs DK < Three then -- nearly there
if XX < KK then
K := XX/K; -- ensure K*K is less than XX
end if;
exit;
end if;
-- do another iterate
K := K+DK;
-- Monitor.c2 := Monitor.C2 + 1;
end loop;
-- now loop from below
loop
KK := K*K;
-- Monitor.C2 := Monitor.C2 + 1;
if KK >= XX then
if KK = XX then
X := K; R := Zero; return;
end if;
X := K - One; R := XX - X*X; return;
end if;
K := K+One;
end loop;
end Square_Root;
with Square_Root;
with Numbers.Func; use Numbers.Func;
-- with Monitor;
with Squares; use Squares;
procedure Fermat_Long(N: in Number; Min_Prime: in Number; P, Q: out Number) is
-- we know that factors up to Min_Prime have been removed, so max square to try is
-- roughly (N/Min_Prime + Min_Prime)/2; we add 2 for luck
Two: Number := Make_Number(2);
Number_Square_Digits: Number := Make_Number(Square_Digits);
Number_Square_Ten_Power: Number := Make_Number(Square_Ten_Power);
X: Number;
Y: Number;
R: Number;
K: Number;
DK: Number;
N2, X2, K2: Integer;
Last_Digits: Integer;
Try: Number := One;
Max_square : Number := (N/Min_Prime + Min_Prime)/ Two + Two;
begin
Square_Root(N, One, X, R);
if R = Zero then
-- N was a perfect square
P := X; Q := X;
return;
end if;
-- Monitor.C1 := Monitor.C2;
-- Monitor.C2 := 0;
K := X*X-N;
DK := X+X+One;
N2 := Integer_Of(N rem Number_Square_Ten_Power);
X2 := Integer_Of(X rem Number_Square_Ten_Power);
loop
-- Monitor.C3 := Monitor.C3 + 1;
X := X + One;
if X > Max_Square then
-- must be prime
P := N; Q := One;
return;
end if;
X2 := (X2 + 1) rem Square_Ten_Power;
K := K + DK; -- omit if DK not used
DK := DK + Two;
-- K := X*X-N; -- omit if DK used
K2 := (X2*X2-N2) mod Square_Ten_Power;
-- Last_Digits := Integer_Of(K rem Number_Square_Ten_Power);
Last_Digits := K2;
if Poss_Last_Digits(Last_Digits) then
-- Monitor.C4 := Monitor.C4 + 1;
Square_Root(K, Try, Y, R);
if R = Zero then
-- X*X-N was a perfect square
P := X+Y; Q := X-Y;
return;
end if;
Try := Y;
end if;
end loop;
exception
when others => P := Zero; Q := Zero;
end Fermat_long;
with Numbers.IO; use Numbers.IO;
with Numbers.Func; use Numbers.Func;
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
-- with Monitor;
with Primes; use Primes;
with Fermat_Long;
with Ada.Calendar; use Ada.Calendar;
procedure Do_Fermat_Long is
NN, PP, QQ: Number;
T_Start, T_End: Time;
package Duration_IO is new Fixed_IO(Duration);
use Duration_IO;
begin
Put_Line("Welcome to Fermat's method (multilength)");
loop
<<Again>>
begin
New_Line;
Put("Insert number N = "); Get(NN);
exception
when others =>
Put_Line("Not a number or too big"); Skip_Line;
goto Again;
end;
exit when NN = Zero;
-- check to see if N divides by a known prime
for i in Prime'Range loop
loop
if NN rem make_Number(Prime(I)) = Zero then
Put("N divides by "); Put(Prime(I), 0); Put_Line(" so removing factor");
NN := NN / Make_Number(Prime(I));
else
exit;
end if;
end loop;
end loop;
if NN rem Make_Number(4) = Make_Number(2) then
Put_Line("Algorithm fails on odd multiples of 2, so halving N");
NN := NN/Make_Number(2);
Put("New N = "); Put(NN, 0); New_Line;
end if;
if NN = One then
Put_Line("all factors removed");
else
-- Monitor.C1 := 0;
-- Monitor.C2 := 0;
-- Monitor.C3 := 0;
-- Monitor.C4 := 0;
T_Start := Clock;
Fermat_Long(NN, Make_Number(Prime(Max)), PP, QQ);
T_End := Clock;
if PP = Zero and QQ = Zero then
Put_Line("Failed internally");
goto Again;
end if;
Put("Two factors are "); Put(PP, 0); Put(" "); Put(QQ, 0); New_Line;
Put(T_End-T_Start, 2, 1); New_Line;
-- Put(Monitor.C1, 9); Put(Monitor.C2, 9); Put(Monitor.C3, 9); Put(Monitor.C4, 9); New_Line;
end if;
end loop;
Put_line("Goodbye"); Skip_Line(2);
end Do_Fermat_Long;
****************************************************************
From: Randy Brukardt
Sent: Wednesday, January 31, 2018 5:03 PM
>Sending in three parts as the original message was too big for the mail
>server.
Just so the rest of you know, "Part 3" was an executable Windows program which
we've decided not to distribute at all (it's too big for the list, and
executables for specific computers seem out-of-bounds for this list anyway,
given that all of us know how to operate our favorite Ada compiler and probably
several non-favorite compilers as well).
So don't look for the non-existent part 3.
****************************************************************
From: Steve Baird
Sent: Wednesday, February 28, 2018 8:03 PM
I'm attaching some preliminary specs that reflect the feedback from the last
discussion of this set of predefined packages. [This is version /01 of the
AI - ED.]
There is still some polishing to do with respect to, for example, pre/post
conditions for the various operations, but this should give us a more concrete
framework for discussion.
I gave up on making Bounded_Integer a discriminated type and instead defined
the enclosing package Bounded_Integers as a generic package. That eliminates
questions such as "what is the discriminant of the result of adding together
two values whose discriminants differ?".
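[A rough sketch of the generic shape described here, for orientation only;
the actual attached spec may differ in detail - Editor.]
   generic
      Capacity : Natural;  -- e.g., maximum number of stored digits
   package Bounded_Big_Integers is
      type Bounded_Big_Integer is private;
      function To_Bounded_Big_Integer (Arg : Integer)
        return Bounded_Big_Integer;
      function "+" (L, R : Bounded_Big_Integer) return Bounded_Big_Integer;
      -- ... remaining operations mirror the unbounded package
   private
      type Digit_Array is array (1 .. Capacity) of Integer;
      type Bounded_Big_Integer is record  -- fixed size, no allocation
         Length : Natural := 0;
         Data   : Digit_Array;
      end record;
   end Bounded_Big_Integers;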
It is intended that the bounded and unbounded Big_Integer types should be
streaming-compatible, as with Vectors and Bounded_Vectors.
As discussed last meeting, there is no Bounded_Big_Rationals package (or
generic package).
The types are tagged now and descended from a common interface type.
This means that most parameter subtypes have to be first named subtypes.
This should not be considered an endorsement of this idea; we might (as Bob
suggests) want these types to be untagged. It's just easier to see what this
looks like and then have to imagine what the untagged version might be than
the other way around. I agree with Randy's comments that taggedness by itself
doesn't imply performance problems.
Currently there is no declaration of a GCD function. Do we want that? If so, then in
which package? If we declare it in Big_Reals then it is not a primitive and so
parameters can be Big_Positives instead of Big_Integers. If we really want it
in Big_Integers package(s) then it would either need Big_Integer parameter
subtypes or it would need to be declared in a nested package to avoid
primitive status.
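[An illustrative sketch of the nested-package option mentioned above, which
keeps Big_Positive parameter subtypes while avoiding primitive status; the
names are not proposed wording - Editor.]
   -- within package Big_Integers:
   package Extensions is
      function Greatest_Common_Divisor
        (Left, Right : Big_Positive) return Big_Positive;
      -- not a primitive of Big_Integer, so non-first subtypes are usable
   end Extensions;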
Comments?
****************************************************************
From: Randy Brukardt
Sent: Thursday, March 1, 2018 9:49 PM
> I'm attaching some preliminary specs that reflect the feedback from
> the last discussion of this set of predefined packages.
...
> Comments?
Shouldn't this be some set of children of Ada.Numerics? These kinda seem like
numbers. ;-)
You don't have the numeric literal definitions (see AI12-0249-1) -- that seems
necessary for usability. (One wonders if the literals should be defined on the
interface, which suggests that AI12-0249-1 needs a bit of extension.)
Otherwise, I didn't notice anything that I would change.
****************************************************************
From: Steve Baird
Sent: Sunday, March 4, 2018 1:00 AM
> Shouldn't this be some set of children of Ada.Numerics? These kinda
> seem like numbers.;-)
>
Good point.
So the root package for this stuff becomes Ada.Numerics.Big_Numbers.
> You don't have the numeric literal definitions (see AI12-0249-1) --
> that seems necessary for usability. (One wonders if the literals
> should be defined on the interface, which suggests that AI12-0249-1
> needs a bit of
> extension.)
We decided earlier that we didn't want that inter-AI dependency.
But I agree that if we are willing to introduce that dependency then of
course support for literals would make sense.
Should it be conditional, as in "if AI12-0249 is approved, then this AI also
includes blah, blah"?
****************************************************************
From: Randy Brukardt
Sent: Sunday, March 4, 2018 1:16 AM
Yes, I'd write it that way. I'd probably stay away from other AI dependencies,
but literals are pretty fundamental - supporting them makes the package way
more usable.
****************************************************************
From: Steve Baird
Sent: Wednesday, March 28, 2018 6:50 PM
Attached is proposed wording for this AI. [This is version /02 of this
AI - ED].
There are some TBDs interspersed.
****************************************************************
From: Randy Brukardt
Sent: Thursday, March 29, 2018 7:48 PM
> Attached is proposed wording for this AI.
>
> There are some TBDs interspersed.
Here's a few thoughts:
>[TBD: aspects specified for this package? Pure, Nonblocking, others?
>Same question applies to other packages declared in later sections.
>Would these aspects constrain implementations in undesirable ways?]
All of the packages should be nonblocking. I don't think any reasonable
implementation would need access to delay statements. ;-)
The interface package should be Pure (why not, it doesn't have any
implementation). The bounded package also should be pure (we do that for all
of the bounded forms elsewhere).
The others probably should be preelaborated (and the types having
preelaborable_initialization), lest we make it too hard to use the needed
dynamic allocation.
>[TBD: It would be nice to use subtypes in parameter profiles (e.g.,
>a Nonzero_Number subtype for second argument of "/", but this requires
>AI12-0243 and the future of that AI is very uncertain.]
You can always use a Pre'Class as an alternative to a subtype. It's not
quite as convenient, but it makes the same check, and presuming that
AI12-0112-1 stays as currently envisioned, that check would be suppressible
with "pragma Suppress (Numerics_Check);".
>[TBD: Remove Integer_Literal aspect spec if AI12-0249-1 not approved.
>If Default_Initial_Condition AI12-0265-1 is approved and Integer_Literal AI
is
>not then replace "0" with "+0" in the condition and as needed in
>subsequent conditions.]
I put the AI number in here for Default_Initial_Condition.
>[TBD: In_Range formal parameter names. "Lo & Hi" vs. "Low & High"?]
Ada usually doesn't use abbreviations, and saving one or two characters this
way isn't appealing. Use Low and High.
> A.5.5.1.1 Bounded Big Integers
Umm, please, no 5 level subclauses. Since there are currently no four level
items in the RM, we need to discuss that explicitly. I had to add a fourth
level for ASIS, but Ada only uses three levels. And the ACATS only uses two
levels in the annexes, which is already a problem for the containers (there
being only one set of sequence numbers for all of the containers tests).
>AARM Note: Roughly speaking, behavior is as if the type invariant for
> Bounded_Big_Integer is
> In_Range (Bounded_Big_Integer, First, Last) or else (raise
Constraint_Error)
>although that is not specified explicitly because that would
>require introducing some awkward code in order to avoid infinite
>recursion.
Awkward code? Please explain. Type invariants are explicitly not enforced on
'in' parameters of functions specifically to avoid infinite recursion in the
type invariant expression. You'd probably need a function for this purpose
(to avoid the functions First and Last -- is that what you meant??), say:
In_Base_Range (Bounded_Big_Integer) or else (raise Constraint_Error)
where In_Base_Range is equivalent to In_Range (Bounded_Big_Integer, First,
Last) with the expressions of First and Last substituted. One could also
make those expressions the defaults for Low and High.
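[A sketch of the shape described here; In_Base_Range is a hypothetical helper
name, not proposed wording - Editor.]
   type Bounded_Big_Integer is private
     with Type_Invariant =>
            In_Base_Range (Bounded_Big_Integer)
              or else (raise Constraint_Error);
   function In_Base_Range (Arg : Bounded_Big_Integer) return Boolean;
   -- equivalent to In_Range (Arg, First, Last) with the expressions
   -- of First and Last substituted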
>[TBD: This could be done differently by using a formal instance instead
>of declaring the Conversions package as a child of Bounded_Big_Integers.
>Would there be any advantage to this approach? The advantage of the
>proposed approach is visibility of the private part, but it does seem
>awkward to have a generic with no generic formals and no local state.]
Well, it would be easier to implement in Janus/Ada, where we never got
sprouting to work. But that's hardly a reason. I suspect it would be more
obvious what's going on than a generic child -- as a data point, all of the
extra operations of Ada.Strings.Bounded take formal packages rather than
being children -- but that may have been driven by other considerations.
One argument for making it a formal package is that this conversion package
really belongs to both big number packages -- it's somewhat artificial to
make it live in the hierarchy of one or the other.
>Any Big_Rational result R returned by any of these functions satisfies the
>condition
> (R = 0.0) or else
> (Greatest_Common_Denominator (Numerator (R), Denominator (R)) = 1).
Arguably, that should be a postcondition, since the implementation isn't
required to check it (it is required to *pass* it). Then a separate rule
isn't needed. You'd probably want to declare a function with this meaning,
'cause duplicating the above 2 dozen times would be annoying.
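[A sketch of the postcondition approach; Is_Canonical and Compose are
hypothetical names, and the literals assume the numeric-literal support
discussed earlier in this thread - Editor.]
   function Is_Canonical (Arg : Big_Rational) return Boolean is
     (Arg = 0.0 or else
        Greatest_Common_Denominator
          (Numerator (Arg), Denominator (Arg)) = 1);
   function Compose (Num, Den : Big_Integer) return Big_Rational
     with Post => Is_Canonical (Compose'Result);
   -- and similarly on the other functions returning Big_Rational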
>AARM Note: No Bounded_Big_Rationals generic package is provided.
We've discussed why, but there needs to be a version of that discussion
either in this note or in the (sadly empty) !discussion section. Future
readers will be puzzled otherwise (including, most likely, us).
****************************************************************
From: Steve Baird
Sent: Tuesday, June 5, 2018 3:08 PM
>> Attached is proposed wording for this AI.
>>
>> There are some TBDs interspersed.
>
> Here's a few thoughts:
>
>> [TBD: aspects specified for this package? Pure, Nonblocking, others?
>> Same question applies to other packages declared in later sections.
>> Would these aspects constrain implementations in undesirable ways?]
>
> All of the packages should be nonblocking. I don't think any reasonable
> implementation would need access to delay statements. ;-)
>
Sounds good.
> The interface package should be Pure (why not, it doesn't have any
> implementation). The bounded package also should be pure (we do that for all
> of the bounded forms elsewhere).
>
> The others probably should be preelaborated (and the types having
> preelaborable_initialization), lest we make it too hard to use the needed
> dynamic allocation.
>
Also sounds good.
>> [TBD: It would be nice to use subtypes in parameter profiles (e.g.,
>> a Nonzero_Number subtype for second argument of "/", but this requires
>> AI12-0243 and the future of that AI is very uncertain.]
>
> You can always use a Pre'Class as an alternative to a subtype. It's not
> quite as convenient, but it makes the same check, and presuming that
> AI12-0112-1 stays as currently envisioned, that check would be suppressible
> with "pragma Suppress (Numerics_Check);".
>
Let's leave things as I originally proposed for now, with a possible
revision if AI12-0243 is approved.
>> [TBD: Remove Integer_Literal aspect spec if AI12-0249-1 not approved.
>> If Default_Initial_Condition AI12-0265-1 is approved and Integer_Literal AI
> is
>> not then replace "0" with "+0" in the condition and as needed in
>> subsequent conditions.]
>
> I put the AI number in here for Default_Initial_Condition.
>
Thanks.
>> [TBD: In_Range formal parameter names. "Lo & Hi" vs. "Low & High"?]
>
> Ada usually doesn't use abbreviations, and saving one or two characters this
> way isn't appealing. Use Low and High.
>
You convinced me. Sounds good.
>> A.5.5.1.1 Bounded Big Integers
>
> Umm, please, no 5 level subclauses. Since there are currently no four level
> items in the RM, we need to discuss that explicitly. I had to add a fourth
> level for ASIS, but Ada only uses three levels. And the ACATS only uses two
> levels in the annexes, which is already a problem for the containers (there
> being only one set of sequence numbers for all of the containers tests).
>
What would you suggest instead?
>> AARM Note: Roughly speaking, behavior is as if the type invariant for
>> Bounded_Big_Integer is
>> In_Range (Bounded_Big_Integer, First, Last) or else (raise
> Constraint_Error)
>> although that is not specified explicitly because that would
>> require introducing some awkward code in order to avoid infinite
>> recursion.
>
> Awkward code? Please explain. Type invariants are explicitly not enforced on
> 'in' parameters of functions specifically to avoid infinite recursion in the
> type invariant expression. You'd probably need a function for this purpose
> (to avoid the functions First and Last -- is that what you meant??), say:
> In_Base_Range (Bounded_Big_Integer) or else (raise Constraint_Error)
> where In_Base_Range is equivalent to In_Range (Bounded_Big_Integer, First,
> Last) with the expressions of First and Last substituted. One could also
> make those expressions the defaults for Low and High.
Ok, we can make this type invariant explicit.
>
>> [TBD: This could be done differently by using a formal instance instead
>> of declaring the Conversions package as a child of Bounded_Big_Integers.
>> Would there be any advantage to this approach? The advantage of the
>> proposed approach is visibility of the private part, but it does seem
>> awkward to have a generic with no generic formals and no local state.]
>
> Well, it would be easier to implement in Janus/Ada, where we never got
> sprouting to work. But that's hardly a reason. I suspect it would be more
> obvious what's going on than a generic child -- as a data point, all of the
> extra operations of Ada.Strings.Bounded take formal packages rather than
> being children -- but that may have been driven by other considerations.
>
> One argument for making it a formal package is that this conversion package
> really belongs to both big number packages -- it's somewhat artificial to
> make it live in the hierarchy of one or the other.
>
I don't see any real strong arguments one way or the other here
(it would probably be ok to settle this one with a coin-flip)
but I agree that it does seem odd to put it in one hierarchy or
the other. So let's do it with a formal instance.
>> Any Big_Rational result R returned by any of these functions satisfies the
>> condition
>> (R = 0.0) or else
>> (Greatest_Common_Denominator (Numerator (R), Denominator (R)) = 1).
>
> Arguably, that should be a postcondition, since the implementation isn't
> required to check it (it it required to *pass* it). Then a separate rule
> isn't needed. You'd probably want to declare a function with this meaning,
> 'cause duplicating the above 2 dozen times would be annoying.
>
Ok, let's make the postconditions explicit.
>> AARM Note: No Bounded_Big_Rationals generic package is provided.
>
> We've discussed why, but there needs to be a version of that discussion
> either in this note or in the (sadly empty) !discussion section. Future
> readers will be puzzled otherwise (including, most likely, us).
Agreed, more should be said about this in the !discussion section.
Thanks, as always, for the feedback.
Do we want revised wording incorporating what we have
discussed here for Lisbon?
****************************************************************
From: Randy Brukardt
Sent: Tuesday, June 5, 2018 3:28 PM
...
> >> A.5.5.1.1 Bounded Big Integers
> >
> > Umm, please, no 5 level subclauses. Since there are currently no
> > four level items in the RM, we need to discuss that explicitly. I
> > had to add a fourth level for ASIS, but Ada only uses three levels.
> > And the ACATS only uses two levels in the annexes, which is already
> > a problem for the containers (there being only one set of sequence
> > numbers for all of the containers tests).
>
> What would you suggest instead?
There aren't any great answers here.
If you want to keep these in with the other numerics packages, then I think
you need a flat organization: A.5.5 Big Integer Interface, A.5.6 Unbounded
Big Integers A.5.7 Bounded Big Integers etc. That's how the queues are
organized, after all, they have the same issue.
Alternatively, given that this is a substantial subsystem unto itself, you
could just give them their own subclause in Annex A:
A.20 Big Numbers, A.20.1 Big Integer Interface, A.20.2 Unbounded Big Integers,
A.20.3 Bounded Big Integers, etc.
...
> Do we want revised wording incorporating what we have discussed here
> for Lisbon?
Always best to have the latest into the actual AI. Otherwise, we end up saying
stuff like "ignore that part of the AI, we decided to change it in e-mail" and
people get all confused. And end up asking questions about "that part of the
AI" anyway.
****************************************************************
From: Jeff Cousins
Sent: Tuesday, June 5, 2018 4:23 PM
> Do we want revised wording incorporating what we have discussed here
> for Lisbon?
Yes please, the more that can be tidied up beforehand the better.
****************************************************************
From: John Barnes
Sent: Monday, June 11, 2018 2:00 AM
A thought I had recently is that it would be handy to have a subprogram that
did square root, given an input parameter X and two out parameters. For positive
X, it would return the largest integer whose square is not greater than the
parameter, and a separate remainder.
It could be a procedure, or a function with the remainder as an out parameter.
It would be useful for some number theory stuff.
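[Illustrative sketch, not proposed wording: such an operation can be written
against the visible Big_Integer operations only, for example with Newton's
iteration; the name Sqrt and the parameter names are invented, and the usual
arithmetic operators, comparisons, and integer literals of the draft package
are assumed.

   procedure Sqrt (X : Big_Natural; Root, Remainder : out Big_Natural) is
      R : Big_Natural := X;
      N : Big_Natural := (X + 1) / 2;
   begin
      while N < R loop
         R := N;
         N := (R + X / R) / 2;  -- Newton step, using integer division
      end loop;
      Root      := R;                 -- largest value with Root * Root <= X
      Remainder := X - Root * Root;
   end Sqrt;
]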
****************************************************************
From: Randy Brukardt
Sent: Monday, June 11, 2018 9:23 PM
Is there any particular reason that ought to be in the BigNum library rather
than a package implemented using BigNum? (I recall we have a bunch of test
programs for the Janus/Ada UnivMath that do things like calculate square
roots and E [until you run out of memory!].) There doesn't appear to be any
implementation that would take special advantage of the implementation
details of a BigNum.
One could imagine having the entire set of GEF operations available, but
those surely would want to be in a separate package to keep things
manageable. And defining that might be just too much work for Ada 2020
(only 9 months to go!)
****************************************************************
From: John Barnes
Sent: Tuesday, June 12, 2018 3:52 AM
When I implemented sqrt for big integers I did indeed just use the normal
visible operations in the number package. However, it was slow and I just
wondered whether it could be speeded up by knowing the implementation details
and taking short cuts. However, I suppose it might apply to other maths ops as
well. Best solution might be to put such things in a child. We could leave
that to 2028 I suppose (I might not be around). I could ask the maths people
(NAG) at Oxford. One of the old Ada blokes from Alsys in Henley works for NAG.
See you in Lisbon.
****************************************************************
From: Steve Baird
Sent: Wednesday, June 13, 2018 6:37 PM
>> Do we want revised wording incorporating what we have discussed here
>> for Lisbon?
> Always best to have the latest into the actual AI.
I think the attached is just the result of incorporating what we discussed.
[This is version /03 of the AI - Editor.]
The one exception is that I used a type invariant to express the idea that the
GCD of the numerator and denominator of a non-zero big real is always one.
****************************************************************
From: Randy Brukardt
Sent: Thursday, June 14, 2018 7:13 PM
> I think the attached is just the result of incorporating what we
> discussed.
Looks good. I fixed a couple of typos (missing "-1" on AI numbers, missing
";" at the end of your new type invariant).
Do we still need the text:
Any Big_Rational result R returned by any of these functions satisifies the condition
(R = 0.0) or else
(Greatest_Common_Divisor (Numerator (R), Denominator (R)) = 1).
since it is essentially just restating the Type_Invariant. (The spelling of
"satisfies" is interesting, too. ;-)
> The one exception is that I used a type invariant to express the idea
> that the GCD of the numerator and denominator of a non-zero big real
> is always one.
You would use the least used of the contracts. ;-) I note that the new
ObjectAda doesn't include Type_Invariants (and I don't have any plans to
implement them in Janus/Ada, either). But that's hardly a good reason to not
use something when it is appropriate. (I'd probably turn it into a
postcondition for Janus/Ada.)
****************************************************************
From: Steve Baird
Sent: Thursday, June 14, 2018 7:29 PM
> Do we still need the text:
>
> Any Big_Rational result R returned by any of these functions
> satisifies the condition
> (R = 0.0) or else
> (Greatest_Common_Divisor (Numerator (R), Denominator (R)) = 1).
>
> since it is essentially just restating the Type_Invariant. (The
> spelling of "satisfies" is interesting, too.;-)
Good point. I agree that we no longer need that text.
> You would use the least used of the contracts.
You should be thanking me for not using the "stable properties" stuff.
****************************************************************
From: Randy Brukardt
Sent: Thursday, June 14, 2018 7:37 PM
...
> > You would use the least used of the contracts.
>
> You should be thanking me for not using the "stable properties" stuff.
I was trying to figure out how to implement your type invariant with a
stable property function, but that wouldn't work on the function results.
Anyway, stable properties is extensively used in the containers contracts
(AI12-0112-1), so I wouldn't mind in the least if you used it.
****************************************************************
From: Randy Brukardt
Sent: Thursday, June 14, 2018 7:39 PM
> Good point. I agree that we no longer need that text.
OK, I deleted it (making a new AI version).
****************************************************************
From: Steve Baird
Sent: Wednesday, November 21, 2018 7:23 PM
The attached is intended to incorporate the directions I received from the
group in Lisbon. [This is version /05 of the AI - Editor.]
The big numeric types are no longer tagged.
They default to an invalid value instead of to zero.
Conversions to and from String look a lot more like the corresponding Text_IO
operations (with Width, Base, Fore, Aft, and Exp parameters).
Each big number type gets a visible Put_Image aspect spec.
Some name changes (To_Unbounded => From_Bounded, etc.).
Also (not directed by group) the old To_String and From_String that
generated/consumed strings of the form
<numerator image> / <denominator image> are still around, but are now named
To_Quotient_String and From_Quotient_String. These names, like any names,
are subject to discussion.
Finally, a Big_Number_Check argument for pragma Suppress.
As usual, thanks to Randy for preliminary review.
Hopefully, all the right rules are in place for deciding when you get a
leading blank on the image of a positive integer.
****************************************************************
From: Jeff Cousins
Sent: Friday, November 23, 2018 1:06 PM
Thanks Steve. A few comments:
Big_Integers function To_String is missing the ; after the first parameter.
I think you need to define deferred constants for zero and unity.
Big_Integers spurious space after Wide_Wide_String’Write.
Big_Rationals – I think you need to define a Rationals version of In_Range for
use in Float_Conversions.
Big_Rationals From_String should have an Arg of String not Valid_Big_Rational,
and should return Valid_Big_Rational.
Text_Io -> Text_IO (two places).
****************************************************************
From: Randy Brukardt
Sent: Monday, November 26, 2018 9:19 PM
...
>I think you need to define deferred constants for zero and unity.
Why? We have numeric literals in these packages (thanks to that new
mechanism), so just using 0 and 1 are many times more likely. I would expect
very few people would even think to use such constants. If you had to write:
V := Ada.Numerics.Big_Numbers.Big_Integers.From_String ("0"); or
V := Ada.Numerics.Big_Numbers.Big_Integers.From_Integer (0);
I could see that. Even with use clauses, these would be annoying. But with
literals:
V := 0;
is obvious. I don't see anyone wanting to write:
V := Ada.Numerics.Big_Numbers.Big_Integers.Zero;
instead. (Especially for those of us in the "use type" only camp.)
****************************************************************
From: Jeff Cousins
Sent: Tuesday, November 27, 2018 6:51 AM
Ok, I suppose I’d never really digested what AI-0249 was for.
But without AI-0249, wouldn’t deferred constants have been needed to get
from universal type 0 and 1 to private type 0 and 1 for use in the contracts?
More justification for AI-0249 I suppose.
PS. (Chit chat) I became a grandad yesterday, Charlie John Cousins was born
****************************************************************
From: Tucker Taft
Sent: Tuesday, November 27, 2018 9:00 AM
Good point. In fact, there is a bit of a delicate dance going on here in the
contracts. We are using the literals 0 and 1, as you point out. Name
resolution on aspect specifications isn't performed until the word "private."
The question is whether things are done in the right order. In particular the
aspect specification on Big_Integer saying "Integer_Literal => From_String"
needs to be processed to some degree before we try to interpret the aspect
specification "Dynamic_Predicate => Big_Positive > 0" since the implicit
conversion from Universal-integer "0" to Big_Integer depends on it. Perhaps
the "Integer_Literal =>" part can have an immediate effect, while the name
resolution of the "From_String" part can be postponed until the word
"private."
Steve? Others?
>PS. (Chit chat) I became a grandad yesterday, Charlie John Cousins was born
Great news, Gramps!
****************************************************************
From: Randy Brukardt
Sent: Tuesday, November 27, 2018 6:01 PM
I've thought about this some this afternoon, and I don't think there is a real
problem, but there definitely is a dance.
It's known as soon as the aspect is used that (integer, in this case) literals
can be used. (That sort of property seems like it would be useful for many
aspects, for instance generalized indexing.) That's clear because the name of
the aspect is given in the type_declaration; that can't be altered by later
declarations. But we don't need to know the precise meaning of those literals
until an expression containing one is *evaluated*.
Remember that preconditions and default expressions and many other expressions
are *not* evaluated when they are seen. So the compiler can allow the use of
literals in those expressions without having to know how they are interpreted
in the future. Moreover, an expression that is evaluated (such as the
initializer of an object) freezes the type. And freezing the type evaluates
the aspects, so the details of evaluation are known before the evaluation
is actually done.
Combined with the rules that prevent a literal from having a different meaning
for the full type, this means to me that there is no problem allowing the use of
literals as soon as aspect Integer_Literal is seen, even though the exact
meaning of the aspect is not yet known. If the aspect turns out to have
been illegal for some reason, then the compilation unit will be rejected
before anyone tries to evaluate the literals -- and otherwise, we'll know in
time.
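[Illustrative sketch of the pattern under discussion, with invented names;
the only point is that the literal 0 in the predicate relies on the
Integer_Literal aspect given on the type, while the meaning of that aspect
(From_String) is not resolved until later, and the conversion of 0 is not
evaluated until the predicate is actually checked.

   package Demo is
      type Counter is private
        with Integer_Literal => From_String;
      function From_String (S : String) return Counter;
      function ">=" (L, R : Counter) return Boolean;

      subtype Nonnegative_Counter is Counter
        with Dynamic_Predicate => Nonnegative_Counter >= 0;
        --  0 is implicitly converted via From_String ("0")
   private
      type Counter is record
         Value : Long_Long_Integer := 0;
      end record;
   end Demo;
]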
Whether the wording in 13.1.1 and elsewhere actually imply this model, or
whether we need some extra wording to get it, I'll leave to the
hair-splitters of the ARG. (Gotta get back to finishing the minutes in any
case.)
>>PS. (Chit chat) I became a grandad yesterday, Charlie John Cousins was born ??
>
>Great news, Gramps!
Second that!
****************************************************************
From: Steve Baird
Sent: Sunday, December 2, 2018 1:08 PM
Thanks as always for the careful reading.
The attached [version /06 - Editor.] is intended to include the corrections
you suggested except for the stuff about deferred constants for zero and one,
which I think was determined to be ok as it is in subsequent discussions. We
may or may not (I haven't tried to determine this yet) need a separate AI to
support Randy's argument that it is ok to use user-defined literals in aspect
specifications as is done in this spec; I believe Randy's argument is correct
- it is just a question of whether it is supported by existing RM wording.
Incidentally, identifying that issue was a good catch!
Upon rereading, I wonder whether we should delete the Width parameters from
the From_String functions. Do those make sense for a function that is passed
a String (as opposed to reading characters from a file)?
If we decide to do this, then we should also delete the mention of Width in
the accompanying text for Big_Rationals (but not in the corresponding text
for Big_Integers) because with this change there will no longer be any
subprograms in the Big_Rationals spec that take a Width parameter.
****************************************************************
From: Randy Brukardt
Sent: Thursday, December 6, 2018 9:36 PM
There's one necessary correction that no one suggested: the 11.5(23) text calls
the package "Ada.Big_Numbers", but it obviously is called
"Ada.Numerics.Big_Numbers". There are three occurrences. I've fixed them all.
(There also was a stray blank in here.)
Also the AI number in the preceding note is missing the '2': AI12-0112-1, not
"AI12-011". Also fixed.
****************************************************************
From: Jeff Cousins
Sent: Monday, December 3, 2018 2:22 PM
Looks good to me.
****************************************************************
From: Tucker Taft
Sent: Sunday, December 2, 2018 1:32 PM
> ...
> Upon rereading, I wonder whether we should delete the Width parameters
> from the From_String functions. Do those make sense for a function
> that is passed a String (as opposed to reading characters from a
> file)?
If you look at the "Get" functions in Text_IO that get from a string, they
do omit the Width parameters, so it would seem we would be justified in
omitting them here as well. In general, the length of the string provides
the value of the Width parameter for operations that operate on strings.
However, if the operation *returns* a String rather than filling in a String
provided as an OUT parameter, there is no Length; for conversion *to* a string,
some sort of Width might be useful, but note that 'Image doesn't provide that.
> If we decide to do this, then we should also delete the mention of
> Width in the accompanying text for Big_Rationals (but not in the
> corresponding text for Big_Integers) because with this change there
> will no longer be any subprograms in the Big_Rationals spec that take
> a Width parameter.
Makes sense.
****************************************************************
From: John Barnes
Sent: Monday, December 3, 2018 2:09 PM
I should read this one carefully. But what should I read? The AI on the
database seems a bit old.
****************************************************************
From: Jeff Cousins
Sent: Monday, December 3, 2018 2:25 PM
Look at the v4 attached to Steve’s last e-mail.
Hopefully I’ve attached it.
****************************************************************
From: John Barnes
Sent: Tuesday, December 4, 2018 1:56 AM
OK, got it. I assumed the AI had been updated. This is the text for the
uncluttered AARM.
****************************************************************
From: Jean-Pierre Rosen
Sent: Monday, December 3, 2018 11:36 PM
> function "+" (Arg : Integer) return Valid_Big_Integer;
> function To_Big_Integer (Arg : Integer) return Valid_Big_Integer
> renames "+";
Wouldn't it be more logical to have these the other way round? i.e.:
function To_Big_Integer (Arg : Integer) return
Valid_Big_Integer;
function "+" (Arg : Integer) return Valid_Big_Integer
renames To_Big_Integer;
Better have "+" as a shorthand for To_Big_Integer rather than To_Big_Integer
as a "longhand" for "+"...
****************************************************************
From: Bob Duff
Sent: Tuesday, December 4, 2018 6:14 AM
I agree. It doesn't make any difference to the compiler, but it might make
a difference to tools such as debuggers, and it does seem more logical.
****************************************************************
From: Jeff Cousins
Sent: Tuesday, December 4, 2018 9:22 AM
That seems a good point to me, thanks JP.
****************************************************************
From: Tucker Taft
Sent: Tuesday, December 4, 2018 6:43 AM
Good point, JP.
****************************************************************
From: Steve Baird
Sent: Tuesday, December 4, 2018 11:11 AM
Sounds good to me.
****************************************************************
From: John Barnes
Sent: Tuesday, December 4, 2018 11:33 AM
And to me.
****************************************************************
From: John Barnes
Sent: Wednesday, December 5, 2018 4:30 AM
I just started to read this in detail. Gosh Ada has lots of things now that
I don't remember.
But a minor point first.
A5.5 says The package Ada.Numerics.Big_Numbers has the following declaration:
But A5.6 says The package Ada.Numerics.Big_Numbers.Big_Integers has the
following definition:
Why definition and not declaration?
Same in A.5.8 for Big_Integers.
****************************************************************
From: Randy Brukardt
Sent: Wednesday, December 5, 2018 1:33 PM
The RM is not consistent about the wording introducing language-defined
packages.
I found 58 "has the following declaration" and 33 "language-defined package
exists". There are also some that don't use any words at all or aren't
consistent.
OTOH, I didn't find any "has the following definition", except for the ones
I added last night. So that must have been a mis-read on my part. And clearly,
the ones in AI12-0208-1 are also wrong - should be "declaration".
****************************************************************
From: Bob Duff
Sent: Thursday, December 6, 2018 2:05 PM
> The attached is intended to include the corrections you suggested
> except for the stuff about deferred
Thanks, Steve.
Comments on the attached AI12-0208.v4.txt: [Editor's note: This attachment
is /06 of the AI, despite the name.]
> package Ada.Numerics.Big_Numbers.Big_Integers
> with Preelaborate, Nonblocking
> is
> type Big_Integer is private with
> Default_Initial_Condition => not Is_Valid (Big_Integer),
> Integer_Literal => From_String,
> Put_Image => Put_Image;
>
> function Is_Valid (Arg : Big_Integer) return Boolean;
>
> subtype Valid_Big_Integer is Big_Integer
> with Dynamic_Predicate => Is_Valid (Valid_Big_Integer),
> Predicate_Failure => (raise Constraint_Error);
I expect Valid_Big_Integer will be used far more commonly than Big_Integer.
(That's true in the code in the package spec, and for client code.)
Furthermore, an invalid Big_Integer is not an integer at all. Therefore
I suggest name changes:
Big_Integer --> Optional_Big_Integer
Valid_Big_Integer --> Big_Integer
We don't say "Valid_Big_Positive" etc, so Valid_Big_Integer seems inconsistent
and a bit weird.
Same for Big_Rats.
> Implementation Requirements
> No storage associated with a Big_Integer object shall be lost upon
> assignment or scope exit.
The CodePeer implementation of big ints doesn't do that
-- it leaks very slowly in practice, and that has proven to work well. I fear
it's the only efficient way to do it, unless you have garbage collection.
> - Bounded_Big_Integers is a generic package and takes a generic formal:
> Capacity : Natural;
I question making Capacity a generic formal. I think it should be a
discriminant. It seems to me the primary use of Bounded_Big_Integers will be
to implement Big_Integers, and that only works if you can have various-sized
Bounded_Big_Integers all of the same type.
Yes, I know that means assignment statements won't work "right". Too bad.
We can provide a Copy procedure.
We made this mistake with Bounded_Strings, and they're a huge pain to use
because of that.
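[Illustrative sketch, not proposed wording: the discriminated alternative,
by analogy with the bounded containers; the package name, the Limb
representation, and the Copy profile are all invented.

   package Bounded_Big_Integers_Sketch is
      type Bounded_Big_Integer (Capacity : Natural) is private;
      --  Objects of different Capacity belong to the same type, so
      --  various-sized values can coexist behind one interface.

      procedure Copy (Target : in out Bounded_Big_Integer;
                      Source : in     Bounded_Big_Integer);
      --  Explicit Copy instead of ":=", since Target and Source may
      --  have different capacities.
   private
      type Limb_Array is array (Positive range <>) of Natural;
      type Bounded_Big_Integer (Capacity : Natural) is record
         Length : Natural := 0;
         Limbs  : Limb_Array (1 .. Capacity);
      end record;
   end Bounded_Big_Integers_Sketch;
]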
> with Ada.Numerics.Big_Numbers.Big_Integers;
> with Ada.Numerics.Big_Numbers.Bounded_Big_Integers;
> generic
> with package Bounded is new Bounded_Big_Integers (<>);
This seems pretty heavy, syntactically. See above about discrims.
> package Ada.Numerics.Big_Numbers.Conversions
> function "+" (Arg : Integer) return Valid_Big_Rational is
> ((+Arg) / 1);
Seems like you want a conversion from Big_Int to Big_Rat.
But probably not called "+".
> generic
> type Num is digits <>;
> package Float_Conversions is
> function To_Big_Rational (Arg : Num) return Valid_Big_Rational;
>
> function From_Big_Rational (Arg : Valid_Big_Rational) return Num
> with Pre => In_Range (Arg,
> Low => To_Big_Rational (Num'First),
> High => To_Big_Rational (Num'Last))
> or else (raise Constraint_Error);
> end Float_Conversions;
Should we have conversions to/from fixed point?
> function To_String (Arg : Valid_Big_Rational;
> Fore : Field := 2;
> Aft : Field := 3;
> Exp : Field := 0) return String
> with Post => To_String'Result'First = 1;
> function From_String (Arg : String;
> Width : Field := 0) return Valid_Big_Rational;
>
> function To_Quotient_String (Arg : Valid_Big_Rational) return String is
> (To_String (Numerator (Arg)) & " /" & To_String (Denominator (Arg)));
Why is there an extra blank before "/" but not after?
It says above that To_String for Big_Int doesn't put the annoying blank, by default.
> function "**" (L : Valid_Big_Rational; R : Integer)
> return Valid_Big_Rational;
How about another "**" that takes a Big_Int?
> [TBD: Is Constraint_Error the exception we want on the
> Predicate_Failure aspect specs for Valid_Big_Integer and
> Valid_Big_Rational?]
OK with me. I don't much care.
> [TBD: do we want a Fixed_Conversions generic package analogous to
> Float_Conversions?]
Ah, I asked that above. I'd say probably yes.
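[Illustrative sketch, not proposed wording: a Fixed_Conversions generic for
ordinary fixed point, by analogy with the quoted Float_Conversions; it
assumes a Big_Rational version of In_Range, and later discussion below
questions whether such a package should be provided at all.

   generic
      type Num is delta <>;
   package Fixed_Conversions is
      function To_Big_Rational (Arg : Num) return Valid_Big_Rational;

      function From_Big_Rational (Arg : Valid_Big_Rational) return Num
        with Pre => In_Range (Arg,
                              Low  => To_Big_Rational (Num'First),
                              High => To_Big_Rational (Num'Last))
                    or else (raise Constraint_Error);
   end Fixed_Conversions;
]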
> [TBD: the range check on From_Big_Rational is slightly too tight.
> For example,
> X : IEEE_Float32 :=
> IEEE_Float32 (IEEE_Float64'Succ (IEEE_Float64
> (IEEE_Float32'Last))); does not overflow but the corresponding
> conversion using From_Big_Rational would fail the range check. Do we
> care?]
I don't know.
> This section, or at least the AARM note, is intended to follow the
> structure of the analogous wording for AI12-011 (contracts for
> containers).
>
> Add after 11.5(23):
>
> Big_Number_Check
>
> Perform the checks associated with Pre, Static_Predicate,
> Dynamic_Predicate, or Type_Invariant aspect specifications occurring in
> the visible part of package Ada.Big_Numbers or of any of its descendants.
>
> [TBD: Include Static_Predicate in this list just for completeness,
> even though it happens that there are no Static_Predicate
> specifications in these units?]
Either way is OK with me.
****************************************************************
From: Randy Brukardt
Sent: Thursday, December 6, 2018 9:21 PM
> We don't say "Valid_Big_Positive" etc, so Valid_Big_Integer seems
> inconsistent and a bit weird.
I note that we discussed these names in Lexington and decided on these
exactly:
Tucker suggests that the names would be Big_Integer and Valid_Big_Integer. This
seems consistent with existing languages. Bob says he can live with that (and
he will complain about it).
I suppose this comment qualifies. :-)
> Same for Big_Rats.
>
> > Implementation Requirements
> > No storage associated with a Big_Integer object shall be lost upon
> > assignment or scope exit.
>
> The CodePeer implementation of big ints doesn't do that
> -- it leaks very slowly in practice, and that has proven to work well.
> I fear it's the only efficient way to do it, unless you have garbage
> collection.
I don't believe that there is any efficient implementation of unbounded
Big_Integers for Ada. Any implementation like the one you described in
Lexington (and above) would have to use a global level of indirection to deal
with oversize objects (modern OSes scramble address spaces, so no assumptions
can be made about the location of anything), and that would be a problem for
multitasking (since you'd be accessing a global data structure).
You'd have to use some form of locking (at a minimum via atomic objects), and
that would also sap performance. Moreover, for a 32-bit implementation like
Janus/Ada, you're going to have a lot of memory used that way -- it's not a
*slow* storage leak.
If you need critical performance, you'll have to use the bounded form.
> > - Bounded_Big_Integers is a generic package and takes a generic formal:
> > Capacity : Natural;
>
> I question making Capacity a generic formal. I think it should be a
> discriminant. It seems to me the primary use of Bounded_Big_Integers
> will be to implement Big_Integers, and that only works if you can have
> various-sized Bounded_Big_Integers all of the same type.
Agreed. This is a discriminant for the bounded containers for this reason.
...
> > [TBD: do we want a Fixed_Conversions generic package analogous to
> > Float_Conversions?]
>
> Ah, I asked that above. I'd say probably yes.
I'd say no, because accuracy requirements seem to be major problem for such
conversions. I know that Steve has shown that one can always get the right
answer using essentially a binary search of model intervals, but that seems to
be more of a thought experiment than a real implementation technique (it would
use thousands of big rational operations).
****************************************************************
From: Tucker Taft
Sent: Thursday, December 6, 2018 9:33 PM
> I'd say no, because accuracy requirements seem to be major problem for such
> conversions. I know that Steve has shown that one can always get the right
> answer using essentially a binary search of model intervals, but that seems
> to be more of a thought experiment than a real implementation technique (it
> would use thousands of big rational operations).
I believe there is a well documented mechanism for doing this properly.
AdaMagic does the right thing here. I'd be happy to provide the algorithm.
It is based on the notion of "continued fractions" I believe. I recently
analyzed supporting fixed point in our code generator for Simulink, and wrote
up the algorithms in a short document.
****************************************************************
From: Randy Brukardt
Sent: Thursday, December 6, 2018 9:51 PM
>I believe there is a well documented mechanism for doing this properly.
Maybe, but if I don't know it, it might as well not exist. (I'd have no idea
how to Google for such a thing, as one has no idea of what it is called.)
>AdaMagic does the right thing here. I'd be happy to provide the algorithm.
>It is based on the notion of "continued fractions" I believe. I
>recently analyzed supporting fixed point in our code generator for
>Simulink, and wrote up the algorithms in a short document.
I have the additional problem of having to do it in a shared generic. It is
completely impossible to make completely accurate conversions to universal
fixed in that environment, so the algorithm has to use only integers and the
(base of the) target fixed point type. (In general, universal fixed
conversions are inaccurate, see Annex G, so any truly usable algorithm for Ada
has to avoid anything like that; it's not 100% a problem with shared generics.
That rules out fixed-fixed multiplies and divides. Not sure if that is
possible.)
****************************************************************
From: Bob Duff
Sent: Sunday, December 9, 2018 10:13 AM
> I note that we discussed these names in Lexington and decided on these
> exactly:
>
> Tucker suggests that the names would be Big_Integer and Valid_Big_Integer.
> This seems consistent with existing languages. Bob says he can live with
> that (and he will complain about it).
Are you sure that's what I meant? I think maybe I was advocating having no
default initial value, like regular integer types, as opposed to commenting on
the names. Not sure.
Anyway, I really do not like the name "Valid_Big_Integer", for reasons I
stated. Tucker says this is consistent with other languages. I skept.
Tucker, please name these existing languages.
> I suppose this comment qualifies. :-)
;-)
> I don't believe that there is any efficient implementation of
> unbounded Big_Integers for Ada.
"efficient" = "comparable to other languages that support bignums".
If that's not possible, then we have another embarrassment like the tampering
fiasco.
>... Any implementation like the one you described in Lexington (and
>above) would have to use a global level of indirection to deal with
>oversize objects (modern Oses scramble address spaces, so no
>assumptions can be made about the location of anything), and that would
>be a problem for multitasking (since you'd be accessing a global data structure).
> You'd have to use some form of locking (at a minimum via atomic
>objects), and that would also sap performance. Moreover, for a 32-bit
>implementation like Janus/Ada, you're going to have a lot of memory
>used that way -- it's not a *slow* storage leak.
Sorry, but the above is just not true. If you want to know why, look at the
codepeer sources. The codepeer implementation is 32 bit (even on 64-bit
machines), it has no problem with address space randomization, it leaks slowly,
it's task safe. There is an atomic write in there, but it's on the slow-anyway
path.
> Agreed. This is a discriminant for the bounded containers for this reason.
:-)
> ...
> > > [TBD: do we want a Fixed_Conversions generic package analogous to
> > > Float_Conversions?]
> >
> > Ah, I asked that above. I'd say probably yes.
>
> I'd say no, because accuracy requirements seem to be major problem for
> such conversions. I know that Steve has shown that one can always get
> the right answer using essentially a binary search of model intervals,
> but that seems to be more of a thought experiment than a real
> implementation technique (it would use thousands of big rational operations).
I don't know enough to have a strong opinion here.
****************************************************************
From: Bob Duff
Sent: Sunday, December 9, 2018 10:29 AM
> I believe there is a well documented mechanism for doing this
> properly. AdaMagic does the right thing here.
Is this needed in Ada anyway, for converting universal_integer to misc
fixed-point types?
>...I'd be happy to provide the algorithm.
Yes, please. That would be useful.
> ...It is based on the notion of "continued fractions" I believe. I
> recently analyzed supporting fixed point in our code generator for
> Simulink, and wrote up the algorithms in a short document.
****************************************************************
From: Tucker Taft
Sent: Sunday, December 9, 2018 12:04 PM
> Anyway, I really do not like the name "Valid_Big_Integer", for
> reasons I stated. Tucker says this is consistent with other
> languages. I skept.
>
> Tucker, please name these existing languages.
I really don't remember what I meant by this, if I really said it. What I
would say today (and perhaps what I meant then) is that it is consistent with
the existing *language* (i.e. "normal" integer types in Ada), which do not
guarantee they have a valid value unless you initialize them.
>> I suppose this comment qualifies. :-)
>
> ;-)
>
>> I don't believe that there is any efficient implementation of
>> unbounded Big_Integers for Ada.
>
> "efficient" = "comparable to other languages that support bignums".
> If that's not possible, then we have another embarrassment like the
> tampering fiasco.
I would suggest we say that the storage shall not be "lost," but allow it to be
used for caching or other purposes. Based on the fact that our commercial
CodePeer product has been using Bob's implementation of big nums for the past
fifteen years, I don't agree that this cannot be done efficiently. Yes, you do
need to worry about multi-tasking (and async-transfer-of-control, for that
matter), but I think we need to give implementations some freedom, especially
for the implementation of a new package like this. We can set limits, but we
don't want to preclude useful approaches out of the gate.
****************************************************************
From: Tucker Taft
Sent: Sunday, December 9, 2018 12:26 PM
>> ...It is based on the notion of "continued fractions" I believe. I
>> recently analyzed supporting fixed point in our code generator for
>> Simulink, and wrote up the algorithms in a short document.
I looked into it a bit further. I have extracted below two different documents
that might be relevant to this topic. The first is a general discussion of
converting and multiplying fixed-point values with different scale factors. The
second is a function that converts from "universal real" to another rational
number that approximates it, but with limits on the numerator and denominator.
I realize now that I am not sure whether either of these directly addresses the
issue that the Fixed_Conversion package might face, but it feels like they are
at least somewhat relevant! ;-)
----- Converting and multiplying fixed-point values with potentially different
scale factors ------
Here we give the actual proposed implementation for both C and Ada for Simulink
fixed-point operations. Many of these operations depend on the relationship
between the scale factors of the two (or three) types involved. In several
cases it is necessary to take the ratio between two scale factors, and then
approximate it by a ratio of relatively small integers. The “continued
fraction” (CF) method is used to do the approximation. This method generates
successively better approximations, until the numerator or the denominator exceeds
some bound (typically the maximum value that can be represented as an integer in
say 16 or 32 bits). We will use the function “CF(X, 16)” to be the rational
number that most closely approximates X while both the numerator and
denominator of the rational number remain within the representation of a 16-bit
signed integer. We will use “Size_In_Bits” rather than 16 or 32 below, to
represent the size of integer values that are being manipulated.
Below we talk about the “scale” factor. This is the number we multiply the
underlying integer representation to produce the “real-world” value.
Real_World_Value := Scale * Representation
In Ada, this is called the “small” (or less precisely, the “delta”).
We are ignoring the issue of a “bias” for now, which is used when the
representation for the “real-world” zero is not an integer 0. Bias should
probably be handled at the “edges” of the model (subtracted or added as
appropriate), and not appear in any intermediate values.
Below we use a routine “Fixed_Mul_Shift_Div(A, B, C, D)” which computes A * B *
2^C / D. It multiplies A * B into a double-width temporary. Then if C is
positive, it does a double-width shift left. If C is negative, it does a
double-width shift right. If C is zero, it performs no shift. It then does a
divide, taking the double-width dividend down to a single-width quotient.
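[Illustrative sketch: one way to write such a routine in Ada for 32-bit
operands, using a 64-bit temporary; the use of Interfaces.Integer_32/64,
the truncating behavior of the final division, and the handling of the
shift via multiplication/division by powers of two are assumptions made
only for this sketch.

   with Interfaces; use Interfaces;
   function Fixed_Mul_Shift_Div
     (A, B : Integer_32; C : Integer; D : Integer_32) return Integer_32
   is
      Temp : Integer_64 := Integer_64 (A) * Integer_64 (B);  -- double width
   begin
      if C > 0 then
         Temp := Temp * 2 ** C;      -- double-width shift left
      elsif C < 0 then
         Temp := Temp / 2 ** (-C);   -- double-width shift right (truncating)
      end if;
      return Integer_32 (Temp / Integer_64 (D));  -- narrow the quotient
   end Fixed_Mul_Shift_Div;
]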
----------
Conversion:
From Integer/Fixed to Integer/Fixed
Source scale = S, Target scale = T
(use scale = 1.0 for an Integer type)
Basic re-scaling formula is as follows:
Target_Rep := Source_Rep * S / T
But to be more efficient, we translate ratio into rational number:
Num/Den == CF(S/T, Size_In_Bits)
Now we do the scaling, using strictly integer arithmetic:
Target_Rep := Source_Rep * Num / Den
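[For example, with purely illustrative numbers: if S = 0.1 and T = 0.125,
then S/T = 0.8, CF gives Num/Den = 4/5, and the scaling becomes
Target_Rep := Source_Rep * 4 / 5.]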
For C (presuming no special rounding):
if Den = 1:
if Num is power of 2:
generate “Result = Source << Log2(Num);”
else:
generate “Result = Source * Num;”
elsif Num = 1:
if Den is power of 2:
generate “Result = Source >> Log2(Den);”
else:
generate “Result = Source / Den;”
else:
generate
“Result = Fixed_Mul_Shift_Div (Source, Num, 0, Den);”
For C, if special Rounding is required, the last three code snippets which
involve division or a right-shift will need to be adjusted. If truncate toward
zero, use division even for power-of-2 divisor when Source might be negative.
If round to nearest, add “Den/2 * Sign(Source)” to Source before dividing
(special rounding can be provided by having different versions of
Fixed_Mul_Shift_Div).
NOTE: We will reuse the above algorithm for re-scaling a value later, as
“Rescale (Value, Scale)”, meaning compute CF(Scale, Size_In_Bits) and then go
through the above set of cases.
For Ada:
generate “Result := Target_Type(Source);”
-- presuming Ada types have “small” matching Simulink scale,
-- and desired rounding matches Ada rounding.
For Ada, if special Rounding is required, add an adjustment to Source before
converting. Amount depends on exact rounding performed by Ada and expected by
Simulink. An explicit use of a Fixed_Mul_Div routine might be the simplest
solution.
Add/Subtract:
Addition and subtraction use the “native” arithmetic operations, presuming the
types are the same. If they are not the same, we would convert both to the
target type for the computation (see above), and then do a simple addition.
For C:
generate “Result = Operand1 + Operand2;”
For Ada:
generate “Result := Operand1 + Operand2;”
Multiply/Divide:
Multiplication and division generally need to consider three types, the two
operand types, and the result type. Assume the scale factors are O1, O2, and R
for Operand1 scale, Operand2 scale, and Result scale. Ideally, the scales match
in the sense that, for multiplication, O1 * O2 = R, and for division, O1 / O2 =
R. When this is the case, “native” multiplication or division can be used. If
not, we compute an overall re-scaling factor S = O1*O2/R for multiplication, or
S = O1/(O2*R) for division. The code we generate depends on the operation and
this re-scaling factor.
For C multiplication:
if S >= 1:
-- Simple case, just rescale a "native" multiply
generate “Result := Rescale (O1 * O2, S);”
else (S < 1):
-- More complex case, use Fixed_Mul_Shift_Div;
-- Compute Shift and Divisor for inverse of scale factor
Inverse_Scale := 1/S;
if Floor(Inverse_Scale) = Inverse_Scale and then
Inverse_Scale < 2^(Size_In_Bits-1):
-- 1/Scale is an exact, not-too-big integer
Divisor := Integer(Inverse_Scale);
Shift := 0;
else:
-- 1/Scale is not an integer, or is too big
-- Compute a Divisor and a Shift such that
-- S ~= 2^Shift/Divisor
Shift := Size_In_Bits - 2 - Floor(Log2(Inverse_Scale));
Divisor := Floor(2^Shift/S);
-- Reduce Divisor until fits in integer
while Divisor >= 2^(Size_In_Bits-1):
Shift := Shift - 1;
Divisor := Floor(2^Shift/S);
if Shift > 0:
-- Reduce 2^Shift/Divisor to lowest terms
G := GCD(2^Shift, Divisor);
Divisor := Divisor/G;
Shift := Shift - Log2(G);
generate
“Result := Fixed_Mul_Shift_Div
(O1, O2, Shift, Divisor);”
For C division:
if S <= 1:
-- Simple case; rescale dividend and then do a "normal" divide
generate "Result := Rescale(O1, S) / O2;"
else (S > 1):
-- More complex case; use Fixed_Mul_Shift_Div;
-- Compute a Multiplier and a Shift
if Floor(S) = S & S < 2^(Size_In_Bits-1):
-- S is an exact and not-too-big integer
Multiplier := S;
Shift := 0;
else:
-- S is not an integer, or is too big
Shift := Size_In_Bits - 2 - Ceiling(Log2(S))
Multiplier := Floor(2^Shift * S)
-- Reduce Multiplier until it is representable as an integer
while Multiplier >= 2^(Size_In_Bits -1):
Shift := Shift - 1;
Multiplier := Floor(2^Shift * S);
if Shift > 0:
-- Reduce ratio of Multiplier and 2^Shift to lowest terms
G := GCD(2^Shift, Multiplier);
Multiplier := Multiplier / G;
Shift := Shift - Log2(G);
generate
"Result := Fixed_Mul_Shift_Div(O1, Multiplier, Shift, O2);"
For Ada multiplication, with no special rounding:
generate “Result := Operand1 * Operand2;”
For Ada division, with no special rounding:
generate “Result := Operand1 / Operand2;”
---- ur_to_rational conversion function from AdaMagic ---
void
ur_to_rational(ur_ptr real, ui_ptr bound, ui_ptr *num, ui_ptr *den)
/*******************************************************************
* Overview: *
* Approximate real number by rational number using continued *
* fraction method. The resulting rational is num/den, where *
* both num and den are less than or equal to the specified *
* bound. *
* *
* Notes: *
* Since the 'real' number is also a fraction, we are just *
* approximating a rational number by another rational number *
* with potentially smaller numbers in the numerator and *
* denominator. *
*******************************************************************/
{
bool saved_here = use_scratch();
ui_ptr p2, q2; /* previous previous ratio */
ui_ptr p1, q1; /* previous ratio */
ui_ptr p, q; /* current ratio */
ui_ptr a; /* new denominator */
ur_ptr abs_real = ur_abs(real);
ur_ptr r; /* remaining fraction */
assert(ui_is_positive(bound));
/*
* continued fractions algorithm:
* p0/q0 = 1 / 0
* r1 = abs(real) a1 = floor(r1) p1/q1 = a1 / 1
* r2 = 1/(r1-a1) a2 = floor(r2) p2/q2 = a2*p1+p0 / a2*q1+q0
* rn = 1/(r[n-1]-a[n-1]) an = floor(rn) pn/qn = an*p[n-1]+p[n-2] /
* an*q[n-1]+q[n-2]
*
* we start at r0 = 1/real
*/
p2 = q1 = &ui_one; /* q[n-1] and p[n-2] */
q2 = p1 = &ui_zero; /* q[n-2] and p[n-1] */
r = abs_real;
while (!ur_is_zero(r) &&
ui_is_less_or_equal(p1, bound) && ui_is_less_or_equal(q1, bound)) {
r = ur_divide(&ur_one, r);
a = ur_trunc_to_ui(r);
r = ur_diff(r, ui_to_ur(a));
p = ui_sum(ui_mult(p1, a), p2);
q = ui_sum(ui_mult(q1, a), q2);
p2 = p1; q2 = q1;
p1 = p; q1 = q;
}
/* At this point p1/q1 and p2/q2 are approximations to real,
with p1/q1 being the more accurate but only p2/q2 guaranteed
to be within the bounds specified for p and q. */
if (ui_is_less_or_equal(p1, bound) && ui_is_less_or_equal(q1, bound)) {
*num = p1;
*den = q1;
} else {
/* p1 or q1 is too big */
/* Try subtracting from p1 and q1 to make them <= bound */
ui_ptr max_pq = ui_is_greater_or_equal(p1, q1)? p1: q1;
ui_ptr bound_diff = ui_diff(max_pq, bound);
ur_ptr reduced_ratio = ur_normalize(
ui_diff(p1, bound_diff), ui_diff(q1, bound_diff));
ur_ptr p2_over_q2 = ur_no_normalize(p2, q2);
ur_ptr reduced_diff = ur_abs(ur_diff(abs_real, reduced_ratio));
ur_ptr p2q2_diff = ur_abs(ur_diff(p2_over_q2, abs_real));
if (ur_is_less_than(reduced_diff, p2q2_diff)) {
/* (p1-bound_diff)/(q1-bound_diff) is better approx. */
*num = ur_numerator(reduced_ratio);
*den = ur_denominator(reduced_ratio);
} else {
/* p2/q2 is better */
*num = p2;
*den = q2;
}
}
if (ur_is_negative(real)) {
*num = ui_neg(*num);
}
if (copy_result()) {
*num = ui_new_ui(*num);
*den = ui_new_ui(*den);
}
end_scratch();
}
****************************************************************
From: Randy Brukardt
Sent: Sunday, December 9, 2018 9:52 PM
...
> I looked into it a bit further. I have extracted below two different
> documents that might be relevant to this topic.
> The first is a general discussion of converting and multiplying
> fixed-point values with different scale factors.
> The second is a function that converts from "universal real"
> to another rational number that approximates it, but with limits on
> the numerator and denominator. I realize now that I am not sure
> whether either of these directly addresses the issue that the
> Fixed_Conversion package might face, but it feels like they are at
> least somewhat relevant! ;-)
I didn't see anything new here; the "small integer" technique is well-known and
we use it when possible. But that's the key here: (1) "Small integer" techniques
aren't usable with every possible value of 'Small. Annex G recognizes this and
has weaker requirements on such smalls (see G.2.3). (Specifically, the "perfect
result set" and the "close result set".) The requirements here are not taking
this into account, they assume that every value can be perfectly converted. (2)
One cannot use any of these techniques when both smalls aren't known at
compile-time, as often happens in a generic body. The usual fix for such
problems is to use thunks, but that just changes the problem from not knowing
one small to not knowing the other (since thunks depend solely on the generic
contract and have no idea as to what appears in the body).
If we *do* support fixed conversions, we have to at a minimum take into account
the accuracy requirements as given in Annex G (it is insane to require more
accurate operations here than you would normally get), and for obvious reasons,
I would like to see an understanding of the impossibility of doing this
perfectly in a generic body (if either operand is of a generic formal type, as
would be the case here).
A third thought: accuracy requirements in general are given only in Annex G. If
we did that here, I would be less concerned (not everyone supports Annex G). It
seems bizarre to give accuracy requirements here when there are none at all on
anything else in the core, and I don't believe there are any on Text_IO or
'Value anywhere.
****************************************************************
From: Tucker Taft
Sent: Sunday, December 9, 2018 10:09 PM
> ...
> A third thought: accuracy requirements in general are given only in Annex G.
> If we did that here, I would be less concerned (not everyone supports
> Annex G). It seems bizarre to give accuracy requirements here when
> there are none at all on anything else in the core, and I don't
> believe there are any on Text_IO or 'Value anywhere.
Yes, that makes sense.
****************************************************************
From: Randy Brukardt
Sent: Sunday, December 9, 2018 10:18 PM
> I would suggest we say that the storage shall not be "lost,"
> but allow it to be used for caching or other purposes. Based on the
> fact that our commercial CodePeer product has been using Bob's
> implementation of big nums for the past fifteen years, I don't agree
> that this cannot be done efficiently.
> Yes, you do need to worry about multi-tasking (and
> async-transfer-of-control, for that matter), but I think we need to
> give implementations some freedom, especially for the implementation
> of a new package like this. We can set limits, but we don't want to
> preclude useful approaches out of the gate.
Bob insisted that any efficient implementation would have to "slowly leak
memory". That is not an acceptable implementation of anything language-defined
in Ada. Many Ada programs have to run a long time, and "slowly leaking memory"
means that it is guaranteed to run out of memory eventually. (I'm also dubious
of the "slowly" leaking part; that would depend very much on what is being
calculated, especially for rational numbers. But that's not important for this
discussion.)
I agree about not preventing useful approaches, but I think the existing wording
captures this idea just fine. If there is a scenario where the implementation
runs out of memory when there are few bignum values in existence, that
implementation is wrong. Otherwise, the "requirement" is untestable anyway, so
the exact implementation isn't relevant. That is, exactly where the memory is
recovered isn't detectable or relevant, but it has to be recovered within some
reasonable amount of time.
Note that this is *exactly* like tampering, where the simple implementation can
be slow, but there are a variety of better, more complicated implementations
that aren't slow at all. The "fiasco" there is simply that implementers didn't
put in the effort needed to do a good job. And users blamed a number of
self-inflicted problems on the implementation because they could. Are either of
those really the language definition's fault?
---
BTW, if we're worried about "not precluding useful approaches", then we
shouldn't be specifying the Big_Rational representation always has a
Greatest_Common_Divisor of 1. That requires an expensive reduction operation
after every individual math operation (even "+" in the case where the
denominators are different), and I don't really see the point of the
requirement. If we're going to leave it up to the implementation to manage
memory, then it should also be up to the implementation as to what approaches to
apply to reduce the use of memory (which is the only obvious value of the GCD
requirement).
Indeed, I don't remember any cheap normalization algorithm to meet that
requirement; only the binary normalization (which is discarding trailing zero
bits) is cheap. Otherwise, one has to do various trial divides, which definitely
aren't cheap (especially if large values are involved, as they often are after
multiplication).
****************************************************************
From: Randy Brukardt
Sent: Sunday, December 9, 2018 10:28 PM
> > I note that we discussed these names in Lexington and decided on
> > these
> > exactly:
> >
> > Tucker suggests that the names would be Big_Integer and Valid_Big_Integer.
> > This seems consistent with existing languages. Bob says he can live
> > with that (and he will complain about it).
>
> Are you sure that's what I meant? I think maybe I was advocating
> having no default initial value, like regular integer types, as
> opposed to commenting on the names. Not sure.
I of course have no idea what you meant as opposed to what you said. But I
recorded as exactly as I could what you said -- and I am quite sure of that
because of the parenthetical remark. There is no way in heck that I would have
invented that and I would have been very careful to attribute it correctly.
At this point, I have no remaining memory of the discussion of the names, other
(perhaps) that it was short. I think you were the only one to say anything about
them, so perhaps the rest of the group had dozed off by then. :-)
I personally don't care that much about the names, but I do care about churning
them all of the time. And I also worry that if you make the valid form
Big_Integer, then:
A : Big_Integer;
raises an exception. If that's not the case, then I have a major problem IMHO
(because the name is lying about the validity).
****************************************************************
From: Randy Brukardt
Sent: Sunday, December 9, 2018 10:32 AM
> > I believe there is a well documented mechanism for doing this
> > properly. AdaMagic does the right thing here.
>
> Is this needed in Ada anyway, for converting universal_integer to misc
> fixed-point types?
The Ada compiler can use the binary search algorithm, since the performance of
the conversion is nearly irrelevant (there aren't going to be programs with
millions of fixed literals). Such an algorithm would be very slow, since it
would need to do a bunch of bigrat operations for each bit of the result (in
general). This routine can't use that sort of algorithm, since there very well
might be millions of conversions to sort data or the like.
****************************************************************
From: Bob Duff
Sent: Thursday, December 13, 2018 4:49 PM
> > Tucker, please name these existing languages.
>
> I really don't remember what I meant by this, if I really said it.
Randy keeps pretty accurate notes, so I'm guessing you said it.
But I think Randy is misunderstanding what you and I meant.
It wasn't (I think) about the names of the subtypes.
My memory is:
- An earlier version had default-init to 0.
- I objected to that (and still do).
I think you agree.
- You suggested default-init to an invalid value,
and run-time checks that you don't pass it to
most ops.
- I said I can live with that, but I don't like it,
and will complain in the future. However, I have
changed my mind -- I now think the invalid value,
with run-time checks, is the right answer, especially
if we have the ability to suppress.
> What I would say today (and perhaps what I meant then) is that it is
> consistent with the existing *language* (i.e. "normal" integer types
> in Ada), which do not guarantee they have a valid value unless you
> initialize them.
OK, that's reasonable, but the current sub-sub-sub-discussion[*] is about the
names of the subtypes. I advocate changing the names:
Big_Integer --> Optional_Big_Integer
Valid_Big_Integer --> Big_Integer
Can you go along with that?
It really looks silly to me to have Valid_Big_Integer, but Big_Natural and
Big_Positive (which are also guaranteed valid).
And the guaranteed-valid[**] ones will be most commonly used, so one wants the
shorter name for that.
And, in fact, the invalid value is NOT an integer at all, big or small. So
Big_Integer is a wrong name for the one that allows invalid.
[*] Apologies to the anti-hyphenation[***] police, Gary.
[***] Har-har.
[**], [***] No apology necessary!
****************************************************************
From: Bob Duff
Sent: Thursday, December 13, 2018 4:50 PM
> I of course have no idea what you meant as opposed to what you said.
> But I recorded as exactly as I could what you said -- and I am quite
> sure of that because of the parenthetical remark. There is no way in
> heck that I would have invented that and I would have been very
> careful to attribute it correctly.
Yes, I have great respect for your ability to take accurate notes of what was
said. I just think that perhaps you misunderstood what I meant at the time, or
are taking it out of context. I don't think I was arguing about the subtype
names, but about the semantics (and have changed my mind on that). Now I'm just
arguing about the subtype names. ;-)
> I personally don't care that much about the names, but I do care about
> churning them all of the time.
Agreed. I don't see any churn here. Tucker came up with names (for a new
concept -- invalid big ints) on the fly during a meeting, and I'm now objecting
to those names. If Tucker and others are happy with my suggestions, then we're
done.
>...And I also worry that if you make the valid form Big_Integer, then:
> A : Big_Integer;
> raises an exception. If that's not the case, then I have a major
>problem IMHO (because the name is lying about the validity).
With my suggested naming, the above raises an exception.
With or without my suggested naming, this raises an exception:
B : Big_Positive;
That seems consistent and good.
****************************************************************
From: Tucker Taft
Sent: Monday, December 17, 2018 3:21 PM
> OK, that's reasonable, but the current sub-sub-sub-discussion[*]
> is about the names of the subtypes. I advocate changing the
> names:
>
> Big_Integer --> Optional_Big_Integer
> Valid_Big_Integer --> Big_Integer
>
> Can you go along with that?
>
> It really looks silly to me to have Valid_Big_Integer,
> but Big_Natural and Big_Positive (which are also guaranteed
> valid).
Interestingly for "small-nums," "X : Positive;" and "Y : Natural;" are
allowed (and not a bounded error), but make no guarantees about validity.
On the other hand, when used as a subprogram parameter or a function result,
it would clearly be an error to pass or return an uninitialized value of an
Integer type, or any subtype of Integer.
It is not possible to match this behavior with big nums, unless we leave them
completely uninitialized, since predicates are applied to stand-alone
composite objects even if they have no explicit initialization, if any
components of the composite type are default initialized.
So we should probably focus on usability. Clearly we want all parameters and
results to be valid by default, so that argues for the shorter names to ensure
validity. It will still be annoying that stand-alone variables have to be
initialized or you have to use some longer name. But I suppose with
if-expressions and case-expressions, you almost never need to declare an
uninitialized variable any more.
I suppose I could get behind having "Optional_Big_Integer,"
"Optional_Big_Positive," and "Optional_Big_Natural." These would have to have
predicates that allow "Invalid" plus whatever further value restriction they
have.
> And the guaranteed-valid[**] ones will be most commonly used,
> so one wants the shorter name for that.
>
> And, in fact, the invalid value is NOT an integer at all,
> big or small. So Big_Integer is a wrong name for the one
> that allows invalid.
I guess I am convinced. But I would think we would want
Optional_Big_{Positive,Natural} if we have Optional_Big_Integer.
****************************************************************
From: Bob Duff
Sent: Monday, December 17, 2018 3:41 PM
> I suppose I could get behind having "Optional_Big_Integer,"
> "Optional_Big_Positive," and "Optional_Big_Natural." These would have
> to have predicates that allow "Invalid" plus whatever further value
> restriction they have.
OK, that's even better than what I suggested.
...with Pre =>
(if Is_Valid (Optional_Big_Positive) then Optional_Big_Positive >= 1)
****************************************************************
From: John Barnes
Sent: Monday, December 10, 2018 4:31 PM
Reading some more BigNum. I hadn't expected to cover rationals as well as
integers but seems an interesting idea.
I see that the type invariant ensures that they are reduced so that the gcd
is always 1. In the type invariant it says (Big_Rational = 0.0) ...
How does that 0.0 work? I assume there is an AI that covers the use of
literals in that context that I have missed.
And generally where does Is_Valid come from?
****************************************************************
From: Bob Duff
Sent: Monday, December 10, 2018 5:30 PM
Do you have to send this base64 stuff? See here:
> DQpSZWFkaW5nIHNvbWUgbW9yZSBCaWdOdW0uIEkgaGFkbid0IGV4cGVjdGVkIHRvIGNvdmVyIHJh
> dGlvbmFscyBhcyB3ZWxsIGFzIGludGVnZXJzIGJ1dCBzZWVtcyBhbiBpbnRlcmVzdGluZyBpZGVh
> LiANCg0KSSBzZWUgdGhhdCB0aGUgdHlwZSBpbnZhcmlhbnQgZW5zdXJlcyB0aGF0IHRoZXkgYXJl
> IHJlZHVjZWQgc28gdGhhdCB0aGUgZ2NkIGlzIGFsd2F5cyAxLiBJbiB0aGUgdHlwZSBpbnZhcmlh
> bnQgaXQgc2F5cyAoQmlnX1JhdGlvbmFsID0gMC4wKSAuLi4NCg0KSG93IGRvZXMgdGhhdCAwLjAg
> d29yaz8gIEkgYXNzdW1lIHRoZXJlIGlzIGFuIEFJIHRoYXQgY292ZXJzIHRoZSB1c2Ugb2YgbGl0
> ZXJhbHMgaW4gdGhhdCBjb250ZXh0IHRoYXQgSSBoYXZlIG1pc3NlZC4NCg0KQW5kIGdlbm
> VyYWxs eSB3aGVyZSBkb2VzIElzX1ZhbGlkIGNvbWUgZnJvbT8NCg0KSm9obg0K
Anyway:
> How does that 0.0 work? I assume there is an AI that covers the use
> of literals in that context that I have missed.
That's AI12-0249-1, User-defined literals. And note the "Real_Literal => ..."
on the type decl.
Randy, I suppose that should be "User-defined numeric literals",
^^^^^^^ since we decided to split off:
AI12-0295-1.TXT : !subject User-defined string literals
AI12-0296-1.TXT : !subject User-defined character and null literals
> And generally where does Is_Valid come from?
That's a function declared in the package.
****************************************************************
From: Randy Brukardt
Sent: Monday, December 10, 2018 5:45 PM
> Do you have to send this base64 stuff? See here:
John will have to answer that, but I'll point out that when doing so, the
list footers get trashed, the messages get delayed (because of encoding
issues), and I've seen messages that show up blank in people's inboxes
(luckily, that hasn't happened yet with John's messages). So I hope he can
find a way to turn it off when sending to ARG.
...
> Randy, I suppose that should be "User-defined numeric literals",
It was that for a while, but it seems to have disappeared in the posted
version. Not sure why (hopefully nothing else important disappeared - this
sometimes happens when I manage to open two copies of a file and put some of
the mods in one copy and some in the other). I note that the clause title
remains "User-Defined Literals", as we did approve the string literals as well
(just in a separate AI).
****************************************************************
From: John Barnes
Sent: Tuesday, December 11, 2018 5:08 AM
>> How does that 0.0 work? I assume there is an AI that covers the use
>> of literals in that context that I have missed.
>That's AI12-0249-1, User-defined literals. And note the "Real_Literal
>=> ..." on the type decl.
I would expect the literal for a rational number to be something like 23/55.
And in this case 0/1, or just plain 0, rather than 0.0.
****************************************************************
From: John Barnes
Sent: Saturday, December 15, 2018 8:55 AM
Hmm. But Real_Literal aspect says From_String so I would have expected "0.0".
In any case it is surely asking for trouble to use a real_literal as a literal
for a rational number. Is there not a way of expressing that the literal
should be of the form integer_literal/integer_literal?
Sorry, I have got years behind in following this, largely because of 1) maths at
Oxford, 2) getting old, 3) Brexit.
****************************************************************
From: Bob Duff
Sent: Saturday, December 15, 2018 10:21 AM
> Hmm. But Real_Literal aspect says From_String so I would have expected
> "0.0".
Yes.
> In any case it is surely asking for trouble to use a real_literal as a
> literal for a rational number.
Why is that? Every real_literal can be represented exactly as a Big_Rational,
so I don't see the "trouble".
>... Is there not a way of expressing that the literal should be of the
>form integer_literal/integer_literal?
You can write 1/3, because of this:
function "/" (Num, Den : Valid_Big_Integer) return Valid_Big_Rational
with Pre => (Den /= 0) or else (raise Constraint_Error);
But 1/3 is not a literal, it's two Big_Integer literals and a division.
I think we'll train GNAT to evaluate that at compile time.
We also have this:
function "/" (L, R : Valid_Big_Rational) return Valid_Big_Rational;
which allows 1.0/3.0, or 123.456/7.89 .
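[Editor's note: for illustration, a hedged usage sketch of those two "/" forms;
the object declarations themselves are assumptions, not text from the AI:]
   One_Third : constant Valid_Big_Rational := 1 / 3;
   -- Big_Integer "/" Big_Integer; the operands use the Integer_Literal aspect.
   Ratio     : constant Valid_Big_Rational := 123.456 / 7.89;
   -- Big_Rational "/" Big_Rational; the operands use the Real_Literal aspect.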
****************************************************************
From: Bob Duff
Sent: Saturday, December 15, 2018 10:30 AM
Another suggestion:
Change Ada.Numerics.Big_Numbers to Ada.Big_Numbers.
I think "numerics" (short for "numerical analysis") generally implies
"floating point" in computer science.
https://en.wikipedia.org/wiki/Numerical_analysis
Numerical analysis is the study of algorithms that use numerical
approximation (as opposed to general symbolic manipulations) for the
problems of mathematical analysis (as distinguished from discrete
mathematics).
And a shorter name is better, given that Numerics doesn't add any readability
benefit.
****************************************************************
From: John Barnes
Sent: Saturday, December 15, 2018 10:49 AM
I don't see Numerics as being short for Numerical analysis at all. In any
event it is possibly rather narrow-minded to think that numerics implies
approximations. I see numerics as being about all sorts of numbers. And it
just depends upon the field one is working in. It could be a finite field say
integers mod 4.
No I prefer leaving the package name alone. That's OK.
****************************************************************
From: Bob Duff
Sent: Saturday, December 15, 2018 10:53 AM
Should Big_Rats have an operation for throwing away some precision? Like
Round X to the nearest multiple of Y. Or maybe the rounding should be
relative to the magnitude of X, or something like that.
The purpose would be to shrink the size (not in terms of magnitude, but in
terms of memory use).
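[Editor's note: for illustration only, one possible shape for such an
operation; the name and profile are assumptions (the same idea resurfaces
later in this thread as Round_To_Nearest_Multiple):]
   -- Return the multiple of Small nearest to Arg; Small is expected to be
   -- positive, and the result's representation can be much more compact
   -- than Arg's.
   function Round_To_Nearest_Multiple
     (Arg, Small : Valid_Big_Rational) return Valid_Big_Rational;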
****************************************************************
From: John Barnes
Sent: Saturday, December 15, 2018 11:03 AM
>> In any case it is surely asking for trouble to use a real_literal as
>> a literal for a rational number.
>Why is that? Every real_literal can be represented exactly as a Big_Rational,
>so I don't see the "trouble".
Yes but the real trouble is that every rational number cannot be represented
as a real_literal. Note that 5/7 can be represented as a real literal by
7#0.5#. But one cannot do it with, say, 15/17.
>>... Is there not a way of expressing that the literal should be of
>>the form integer_literal/integer_literal?
> You can write 1/3, because of this:
> function "/" (Num, Den : Valid_Big_Integer) return Valid_Big_Rational
> with Pre => (Den /= 0) or else (raise Constraint_Error);
> But 1/3 is not a literal, it's two Big_Integer literals and a division.
> I think we'll train GNAT to evaluate that at compile time.
I see 1/3 as the obvious literal form of a rational number.
> We also have this:
> function "/" (L, R : Valid_Big_Rational) return Valid_Big_Rational;
>which allows 1.0/3.0, or 123.456/7.89 .
And that is really disgusting. As Bad as Brexit!!
****************************************************************
From: Bob Duff
Sent: Saturday, December 15, 2018 11:56 AM
> Yes but the real trouble is that every rational number cannot be
> represented as a real_literal.
True.
But that causes trouble for To_String (solved with the Fore/Aft/Exp kludgery).
But when you write a literal in your program, you're going the other
direction, so it doesn't matter that it's impossible to write a literal for
every rational number.
It still makes sense to write 1.234_567 instead of 1_234_567/1_000_000.
Or did I count the zeros wrong? ;-)
> I see 1/3 as the obvious literal form of a rational number.
You can think of it that way if you like, but it is not any form of literal
allowed by Ada syntax.
We could allow quoted literals:
X := "1/3";
but I don't see the point of that.
It's analogous to the fact that -1 (meaning negative one) is not a literal in
Ada. It's a call to the unary minus op, passing the literal 1.
In any case, you're free to always use the "/" notation.
> > We also have this:
>
> > function "/" (L, R : Valid_Big_Rational) return
> Valid_Big_Rational;
>
> >which allows 1.0/3.0, or 123.456/7.89 .
>
> And that is really disgusting. As Bad as Brexit!!
Not THAT bad.
But anyway, it doesn't matter:
Patient: "Doctor, it hurts when I write disgusting things."
Doctor: "So don't do that."
Shirley, you don't object to being able to divide rational numbers in general:
P (X / Y);
****************************************************************
From: Ben Brosgol
Sent: Saturday, December 15, 2018 11:16 AM
As I recall the history, the child package name "Numerics" was chosen as a
compromise, to avoid an Anglo-American dust-up over "Maths" versus "Math" as
the name :-). It had nothing to do with an implication of numerical analysis
/ floating point.
In any event I agree with John.
****************************************************************
From: Bob Duff
Sent: Saturday, December 15, 2018 11:40 AM
> I don't see Numerics as being short for Numerical analysis at all.
This web site:
https://scicomp.stackexchange.com/tags/numerics/synonyms
says "numerics" is a synonym of "numerical analysis".
This:
https://scicomp.stackexchange.com/questions/tagged/numerics
has a bunch of questions tagged "numerics". I didn't look carefully, but
most/all seem to be about floating point.
These:
https://www5.in.tum.de/lehre/vorlesungen/parnum/WS17/lecture_1.pdf
https://math.temple.edu/~queisser/numerics-scientific-computing.html
seem to use "numerics" and "numerical methods" synonymously.
Are not "numerical analysis" and "numerical methods"
more or less the same thing?
> ... In any event it is possibly rather narrow-minded to think that
> numerics implies approximations. I see numerics as being about all
> sorts of numbers. And it just depends upon the field one is working
> in. It could be a finite field say integers mod 4.
>
> No I prefer leaving the package name alone. That's OK.
But anyway, my main objection is that the name is too long.
If we move Big_Numbers out of Numerics, that doesn't imply it's NOT numerics.
After all, Integer and Float are in Standard, not Numerics.
Another suggestion:
Ada.Numerics.Big_Integers
Ada.Numerics.Big_Rationals
And get rid of Big_Numbers, and move its content into Numerics.
P.S. The course "Combinatorial Analysis" at CMU was commonly referred to as
"Combinatorics". It was also called "Combinatorture", because it was hard.
How can the study of counting be hard? ;-)
****************************************************************
From: Bob Duff
Sent: Saturday, December 15, 2018 11:40 AM
> As I recall the history, the child package name "Numerics" was chosen
> as a compromise, to avoid an Anglo-American dust-up over "Maths"
> versus "Math" as the name :-).
I would have been happier with Maths. ;-)
>... It had nothing to do with an implication of numerical analysis /
>floating point.
>
> In any event I agree with John.
An argument in your favor is Ada.Numerics.Discrete_Random.
That's the ONLY one that is not all about floating point.
Ada has a well-deserved reputation for verbosity.
That's often a good thing, but here it's just noise.
What about my second suggestion for shortening the names?
P.S. I shall attempt to refrain from further bikeshedding.
Maybe others will weigh in, or we can have a straw poll.
****************************************************************
From: Ben Brosgol
Sent: Saturday, December 15, 2018 1:53 PM
>> As I recall the history, the child package name "Numerics" was chosen
>> as a compromise, to avoid an Anglo-American dust-up over "Maths"
>> versus "Math" as the name :-).
>
> I would have been happier with Maths. ;-)
That might have been a fair trade for the American culturally imperialistic
"Initialize" and "Finalize" spellings :-)
>> ... It had nothing to do with an implication of numerical analysis /
>> floating point.
>>
>> In any event I agree with John.
>
> An argument in your favor is Ada.Numerics.Discrete_Random.
> That's the ONLY one that is not all about floating point.
...and seems sufficient to debunk the claim that Numerics means
floating-point.
>
> Ada has a well-deserved reputation for verbosity.
> That's often a good thing, but here it's just noise.
It reflects the logical categorization.
>
> What about my second suggestion for shortening the names?
If there are future standard packages related to integer or discrete
arithmetic, should these also come in as immediate children of Ada? I think
that Ada.Numerics is the logical parent for bignums as well as any such future
packages.
> P.S. I shall attempt to refrain from further bikeshedding.
Right, no sense bikeshedding. Instead let's rearrange the deck chairs on the
Titanic(*) :-)
> Maybe others will weigh in, or we can have a straw poll.
(*)
http://www.newsbiscuit.com/2011/02/15/rearranging-titanics-deckchairs-would-have-saved-1500-lives-say-researchers/
[I assume that this is tongue-in-cheek :-)]
****************************************************************
From: Randy Brukardt
Sent: Sunday, December 16, 2018 12:14 AM
...
> Change Ada.Numerics.Big_Numbers to Ada.Big_Numbers.
> I think "numerics" (short for "numerical analysis") generally implies
> "floating point" in computer science.
...
> And a shorter name is better, given that Numerics doesn't add any
> readability benefit.
Ada has a tradition of having a small number of 2nd-level root packages,
with almost everything else built on top of that: Ada.Characters,
Ada.Strings, Ada.Containers.
(Aside: Ada 95 clearly messed up the IO packages here, there should have
been Ada.IO (contents of IO_Exceptions), with Ada.IO.Text,
Ada.IO.Sequential, Ada.IO.Direct, etc. The renames could have still used
the old names. I don't remember, perhaps this was proposed but rejected as
a bridge too far at the time? The whole idea of child packages was new then,
it's kinda taken over since.)
****************************************************************
From: Randy Brukardt
Sent: Sunday, December 16, 2018 12:19 AM
...
> Ada has a well-deserved reputation for verbosity.
> That's often a good thing, but here it's just noise.
Not necessarily. My model for suppressing language-defined preconditions is
that it is available on a per-subsystem basis (we don't want too many of these
check names). "Big_Numbers" isn't important enough to be a subsystem by
itself, I would say.
Besides, if there ever was a package that would be "used", it's Big_Integer.
Even I would "use" that package (and I won't even "use"
Ada.Strings.Unbounded). So the name is nearly irrelevant; I'd be more worried
about having to write Ada.Unchecked_Deallocation all over the place.
(Which we already do...)
****************************************************************
From: John Barnes
Sent: Saturday, December 15, 2018 11:16 AM
>Should Big_Rats have an operation for throwing away some precision?
I see the whole purpose of working with rationals to be precise. It wouldn't
make sense to me to throw accuracy away. Encryption/Decryption wouldn't work
if you chucked bits away.
****************************************************************
From: Bob Duff
Sent: Saturday, December 15, 2018 12:16 PM
> I see the whole purpose of working with rationals to be precise.
That's the main purpose.
But you might want much-higher precision than the biggest floating-point type,
but still not exact precision. If you can't throw away precision, repeated
calculations can use up memory without bound.
> ...It wouldn't make sense to me to throw accuracy away.
> Encryption/Decryption wouldn't work if you chucked bits away.
Encryption can use Big_Integers, not Big_Rationals.
Here I'm talking only about Big_Rationals.
Actually, encryption wants to use Bounded_Big_Integers, except it wants
modular arithmetic. Hmm. Maybe we should provide such (functions that take
a Modulus param).
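[Editor's note: for illustration only, a hedged sketch of what such
Modulus-taking operations might look like; these names and profiles are
assumptions, not part of the proposal:]
   function Add      (L, R, Modulus : Valid_Big_Integer) return Valid_Big_Integer;
   function Multiply (L, R, Modulus : Valid_Big_Integer) return Valid_Big_Integer;
   function Power    (Base, Exponent, Modulus : Valid_Big_Integer)
     return Valid_Big_Integer;
   -- Each result is reduced modulo Modulus, which is expected to be positive.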
...which leads me to question:
- Bounded_Big_Integers is a generic package and takes a generic formal:
Capacity : Natural;
I think we agreed Capacity should be a discrim.
- two additional visible expression functions are declared:
function Last return Valid_Big_Integer is ((+256) ** Capacity);
function First return Valid_Big_Integer is (-Last);
Why 256? You want it to have an array of word-sized super-digits, not
byte-sized ones. And this is the first time Ada is biased against (say)
36-bit machines.
- the partial view of Bounded_Big_Integers.Big_Integer includes
a type invariant specification,
Type_Invariant =>
(if Is_Valid (Bounded_Big_Integer) then
In_Range (Bounded_Big_Integer, First, Last)
or else (raise Constraint_Error))
How does that work? You can't compute numbers bigger than Last (because they
can't be represented) and then check that they're <= Last.
And there's confusion about the type name -- is it Big_Integer or
Bounded_Big_Integer?
And doesn't Last check the invariant, causing infinite recursion?
Lots of evidence here that we should standardize existing practice!
****************************************************************
From: Randy Brukardt
Sent: Sunday, December 16, 2018 12:36 AM
> > I see the whole purpose of working with rationals to be precise.
>
> That's the main purpose.
>
> But you might want much-higher precision than the biggest
> floating-point type, but still not exact precision.
> If you can't throw away precision, repeated calculations can use up
> memory without bound.
Yes, of course, that's the nature of the beast. Which of course is why memory
has to be reclaimed. (Big_Integer alone being almost never necessary these
days with 64-bit integers being available.)
And whatever you do to try to minimize the use of such memory also has a
substantial cost. There's no free lunch here.
...
> Lots of evidence here that we should standardize existing practice!
Is there any existing practice that would actually be suitable for the Ada
runtime? Most of it is in garbage collected interpreted languages, and hardly
any languages actually worry about tasking/parallel operation.
An Ada language-defined library:
* cannot leak memory (at least not in the long run);
* must be task safe (operations on different objects have to work concurrently);
* has to have a bounded option without allocated memory use;
* has to be strongly typed;
* has to be implementable in Ada (ideally, verified by SPARK).
I'd also add something about performance, but that can't be quantified anyway.
****************************************************************
From: John Barnes
Sent: Sunday, December 16, 2018 4:05 AM
Waking up in the night and fretting about Brexit I thought I agreed with Bob
that we should drop Numerics since the big numbers and rationals are exact
and all the Numerics stuff was about rough old floating point. Then Ben
pointed out that random numbers are in numerics. So stuff it.
But do something about that horrid literal 0.0. It has absolutely no place in
a big number or rational package. Maybe the user defined literals need to be
revisited. Maybe we don't need literals for Rationals anyway.
And big numbers are typically very big in my view. If you look in my book Nice
Numbers on pages 31 and 32 you will find some fairly big numbers. They have
about 400 digits. They were done by my little Ada 95 package that I wrote a
long long time ago.
****************************************************************
From: John Barnes
Sent: Sunday, December 16, 2018 4:13 AM
PS
And I really think we should have a procedure that does division and returns
both quotient and remainder in one go to save wasting time doing the job
twice.
Also, as I mentioned before I would like square root returning the largest
integer whose square is less than or equal to the integer operand and also
returns the remainder. It's easy to write crudely but there are probably short
cuts that would be useful.
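[Editor's note: for illustration only, a hedged sketch of what these two
requests might look like as specifications; the names and profiles are
assumptions:]
   procedure Divide (Dividend, Divisor   :     Valid_Big_Integer;
                     Quotient, Remainder : out Valid_Big_Integer);
   -- Quotient and Remainder are computed in a single pass.

   procedure Sqrt (Arg             :     Valid_Big_Integer;
                   Root, Remainder : out Valid_Big_Integer);
   -- Root is the largest integer whose square is <= Arg;
   -- Remainder is Arg - Root*Root.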
****************************************************************
From: Erhard Ploedereder
Sent: Sunday, December 16, 2018 11:41 AM
> And I really think we should have a procedure that does division and returns
> both quotient and remainder in one go to save wasting time doing the job
> twice.
Hear, Hear! This has been dearly missing since day 1.
****************************************************************
From: Bob Duff
Sent: Sunday, December 16, 2018 1:50 PM
> And I really think we should have a procedure that does division and
> returns both quotient and remainder in one go to save wasting time
> doing the job twice.
Seems reasonable.
> Also, as I mentioned before I would like square root returning the
> largest integer whose square is less than or equal to the integer
> operand and also returns the remainder. It's easy to write crudely but
> there are probably short cuts that would be useful.
But that one seems pretty specialized. What would one use it for?
By the way, "floor of square root" was an example exercise at the SPARK
training course I attended years ago. The parameter is 0..System.Max_Int,
and it's a little tricky to avoid overflow (and get SPARK to prove that you
avoided overflow). Fun example.
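[Editor's note: for illustration, a minimal floor-of-square-root sketch over
the proposed big integers, where the overflow concern of the bounded exercise
does not arise; it assumes the arithmetic operators and integer-literal
support of the draft package, and assumes N >= 0:]
   function Floor_Sqrt (N : Valid_Big_Integer) return Valid_Big_Integer is
      -- Binary search maintaining Low*Low <= N < High*High.
      Low  : Valid_Big_Integer := 0;
      High : Valid_Big_Integer := N + 1;
   begin
      while High - Low > 1 loop
         declare
            Mid : constant Valid_Big_Integer := (Low + High) / 2;
         begin
            if Mid * Mid <= N then
               Low := Mid;
            else
               High := Mid;
            end if;
         end;
      end loop;
      return Low;
   end Floor_Sqrt;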
****************************************************************
From: Bob Duff
Sent: Sunday, December 16, 2018 1:50 PM
> ... My model for suppressing language-defined preconditions is that it is
> available on a per-subsystem basis...
Reasonable argument.
****************************************************************
From: Bob Duff
Sent: Sunday, December 16, 2018 1:51 PM
> > Lots of evidence here that we should standardize existing practice!
>
> Is there any existing practice that would actually be suitable for the
> Ada runtime?
Not yet. ;-)
I meant we should have first created some existing practice.
And then maybe standardize it.
If AdaCore is willing to implement Big_Numbers, then great.
If they're not, then we're wasting our time designing its spec.
****************************************************************
From: Bob Duff
Sent: Sunday, December 16, 2018 1:50 PM
> (Aside: Ada 95 clearly messed up the IO packages here, there should
> have been Ada.IO (contents of IO_Exceptions), with Ada.IO.Text,
> Ada.IO.Sequential, Ada.IO.Direct, etc. The renames could have still
> used the old names. I don't remember, perhaps this was proposed but
> rejected as a bridge too far at the time?
I don't remember that being proposed.
If I were designing the I/O stuff from scratch, I would separate Output from
Input. They're not just flip sides of each other.
>... The whole idea of child packages was new then, it's kinda taken
>over since.)
****************************************************************
From: Edmond Schonberg
Sent: Sunday, December 16, 2018 4:27 PM
> If AdaCore is willing to implement Big_Numbers, then great.
> If they're not, then we're wasting our time designing its spec.
Why do you refer to adacore in the third person :-)?
I think the algorithms in uintp and urealp are an excellent starting point
anyway; Robert used every trick described in Knuth 2 to make them efficient.
I'm certainly eager to start the implementation of 2020.
****************************************************************
From: Bob Duff
Sent: Sunday, December 16, 2018 4:48 PM
> Why do you refer to adacore in the third person :-)?
Because I'm not AdaCore. ;-)
I'm an employee of AdaCore, and I'm not in a position to dictate which 2020
features AdaCore should implement when, and who should do the work. I can
make recommendations to AdaCore management, of course.
> I think the algorithms in uintp and urealp are an excellent starting
> point anyway; Robert used every trick described in Knuth 2 to make
> them efficient.
Whoever implements it should look at those for sure, plus the codepeer
implementation, and the libadalang implementation, and libgmp, and probably
others.
I'm happy to do that, and Steve has also expressed an interest, and you would
also be a good candidate. But I can't just go and do that without explicit
direction from AdaCore management.
> I'm certainly eager to start the implementation of 2020.
Indeed.
****************************************************************
From: Jeff Cousins
Sent: Sunday, December 16, 2018 5:01 PM
I thought that half the motivation for this AI was that it was thought that
compilers had to have something similar internally anyway so implementation
wouldn't be too expensive?
****************************************************************
From: Edmond Schonberg
Sent: Sunday, December 16, 2018 9:38 PM
Indeed, every conforming Ada compiler needs such a facility to handle
universal integers and universal reals. So they are built into the front-ends
of these compilers. Adapting them for run-time use will require some work, in
particular to meet the proposed requirements of storage management (no leaks?
very slow leaks?) but there is a good starting point. Incidentally, every
compiler is equipped to handle a conversion from standard floating-point
notation to rational internal representation, so I agree with John that this
should be provided in the new packages.
****************************************************************
From: John Barnes
Sent: Monday, December 17, 2018 2:05 AM
> Also, as I mentioned before I would like square root returning the
> largest integer whose square is less than or equal to the integer
> operand and also returns the remainder. It's easy to write crudely but
> there are probably short cuts that would be useful.
>But that one seems pretty specialized. What would one use it for?
For one thing, it's used in Fermat's method for finding prime factors. See
Nice Numbers page 166. I make the students do it on my numbers course at
Oxford.
>By the way, "floor of square root" was an example exercise at the SPARK
>training course I attended years ago. The parameter is 0..System.Max_Int,
>and it's a little tricky to avoid overflow (and get SPARK
> to prove that you avoided overflow). Fun example.
There you are, important stuff.
****************************************************************
From: Randy Brukardt
Sent: Monday, December 17, 2018 3:41 PM
>>I thought that half the motivation for this AI was that it was thought that
>>compilers had to have something similar internally anyway so implementation
>>wouldn't be too expensive?
>Indeed, every conforming Ada compiler needs such a facility to handle
>universal integers and universal reals. So they are built into the front-ends
>of these compilers. Adapting them for run-time use will require some work,
>in particular to meet the proposed requirements of storage management (no
>leaks? very slow leaks?) but there is a good starting point.
Right, but Bob's (unspoken) concern is that once we put these things into the
Standard library, some users are going to want to use them in ways that
require high performance. The version used in the compiler doesn't have any
significant performance concern: even if a multiply took 1/10 second, it
wouldn't noticeably change the compilation time for the vast majority of
units. There are just not that many static expressions to evaluate in most
units. Even if there are, any sane performance will do.
OTOH, if someone wants to do complex calculations to 400 digits in real-time,
the performance could be a big deal. That's likely to lead to pressure on
implementers to spend extra effort on these packages and on how the compiler
uses them (perhaps special conversions).
BTW, I don't see a problem with leaking (the Janus/Ada packages never leaked,
we couldn't afford to lose any memory). But the Janus/Ada packages (which were
designed for Ada 83) use limited types with an explicit deallocator call. And
in practice, most of the operations are done with procedure calls, to avoid
creating temporaries. Something like:
-- Find 'Base'Last for an integer type of Size:
declare
Temp : U_Int;
begin
Uconvert (Temp, 2);
Uexp (Temp, Integer(Size_in_Bits));
Uminus (Temp, U_ONE);
Uassign (Type_Rec.Last, Temp);
Udispose (Temp);
end;
Of course, no one would want a package like this. But the counterpart would
be a controlled type and operator functions. While this would look good:
Type_Rec.Last := (2 ** Integer(Size_in_Bits)) - 1;
it would create and destroy a number of controlled temporaries, which would
likely be more expensive than the original version.
One could mitigate that somewhat by using built-in compiler transformations
to the procedure form, but obviously that would put a lot of extra stress on
the implementer. (And there has been a lot of resistance to doing that sort
of transformation for Reference/Constant_Reference.)
So the best path forward isn't completely clear, even though compilers have
such implementations.
>Incidentally, every compiler is equipped to handle a conversion from
>standard floating-point notation to rational internal representation,
>so I agree with John that this should be provided in the new packages.
It *is* provided in the new packages, and John appeared to be arguing
*against* providing it. That didn't make a lot of sense to me. Janus/Ada
doesn't, however, provide the conversion from rational to real notation
string (we of course do provide a conversion to 32 and 64 bit float values,
but we didn't need the strings outside of the calculator test program, so we
didn't figure out how to do that).
****************************************************************
From: Steve Baird
Sent: Monday, January 7, 2019 2:51 PM
The attached version #5 of this AI [Regardless of what this message says,
this is version /07 of this AI - Editor] is intended to address
issues/questions raised since distribution of version #4 on 12/2/18.
Thanks, in particular, to Bob and Randy for their careful review and
thoughtful comments.
A summary of those issues/questions, along with their proposed resolutions,
follows below.
====
1) Delete Width parameter from From_String functions?
Yes; removed in version 5.
2) Should unary "+" rename To_Big_Integer instead of vice versa (and similarly
for To_Big_Rational)?
Yes; change is in version 5.
3) "has the following definition" => "has the following declaration"
wording change?
Yes; change is in version 5.
4) Change names?
Big_Integer => Optional_Big_Integer
Valid_Big_Integer => Big_Integer
Is_Valid => Is_Present
(and similarly for Big_Rational)
and add Optional_Big_Natural, Optional_Big_Positive subtypes.
Yes; change is in version 5. The Is_Valid -> Is_Present substitution wasn't
mentioned in the discussion, but it seems obvious now that we are no longer
talking about validity.
5) Add a Big_Int -> Big_Rational conversion function?
Similarly, add a Big_Rat exponentiation operator that takes
a Big_Integer instead of an Integer for the exponent argument?
Further discussion is needed here.
There are changes in version 5 for this, but they are not what was
requested.
The problem (which was not mentioned in the original discussion)
is to avoid introducing ambiguity in the case of a call
where the actual parameter is a numeric literal.
This non-overloading requirement makes the problem trickier.
We could introduce a nested package, but the obvious solution is
to avoid overloading by using different names for the function pairs.
In the case of exponentiation, choosing a name other than "**" would
defeat the whole point, so we just give up and don't provide a
version that takes a Standard.Integer exponent (we only provide
the Big_Integer exponent version - in version #4, only a
Standard.Integer exponent version was provided, but if we are
only going to provide one then shouldn't it be the Big_Integer version?).
In the case of To_Big_Rational, we've got
To_Big_Rational, going from Big_Integer to Big_Rational
and
To_Rational, going from Standard.Integer to Big_Rational
. This is a bit ugly - for example, it's not obvious why the names
shouldn't be reversed. Perhaps the name should be Integer_To_Big_Rational?
Other candidate names include "Make" or "Convert".
6) Missing blank in To_Quotient_String?
Yes; change is in version 5.
Indeed, the intent is to generate a string like "123 / 4567".
7) Do we want a Fixed_Conversions generic package analogous to
Float_Conversions?
Consensus seems to be yes and, in particular, that the implementability
issues that Randy raised have been addressed; change is in version 5.
The precision requirement wording is the same for both
Some_Instance_Of_Float_Conversions.To_Big_Rational and for
Some_Instance_Of_Fixed_Conversions.To_Big_Rational. An equivalence rule
is used to avoid any need to talk about support/non-support of Annex G.
8) Avoid long package names by eliminating Big_Numbers package or
moving Big_Numbers package out of Numerics (so that it is
Ada.Big_Numbers instead of Ada.Numerics.Big_Numbers).
In the opinion of the AI author, this is not a good idea. The originally
chosen names are consistent with Ada's naming conventions in other
cases, and are just fine as proposed.
No changes in version 5 related to this issue.
9) Do we want some sort of a "coarsen" operator whose informal idea is
"given a Big_Rational argument, return a value that is close to the
given value but which takes up less storage"? In some cases,
"1 / 1" might be preferable to something like
"999999999999999999999999999998 / 999999999999999999999999999999".
This sounds like a good idea, but it is not clear what the spec should
look like. One idea is a Round_To_Nearest_Multiple function, where the
user passes in a delta value (a positive big_rational) and the function
returns a result which is an exact integral multiple of that delta value
(with tiebreaker rules based on Ada's rules for converting to a fixed
point type). Another idea is a Best_Approximation function where
the user passes in a denominator value (a positive big_integer) and the
function computes the closest continued fraction approximation
having a denominator no greater than the specified denominator value.
It would be nice to simply require that the function returns the closest
approximation to the given argument subject to the denominator
constraint, but I don't know how to implement that. [It has to do with
not just computing convergents, but also semi-convergents; for more
details, see the site I mentioned in a message to the ARG list last
January,
shreevatsa.wordpress.com/2011/01/10/not-all-best-rational-approximations-are-the-convergents-of-the-continued-fraction
]. It does seem clear that the function, however it is defined,
should be portable. I'm leaning toward the Round_To_Nearest_Multiple
approach, but it seems like more discussion is needed.
10) Get rid of the GCD type invariant for big rationals
and replace it with a functionally equivalent
postcondition on Denominator? [this question came up in private
communication with Randy]
Yes; change is in version 5.
11) For each of the types defined in the AI, specify whether the
type needs finalization? [point brought up by Randy]
Yes; change is in version 5.
12) There are 3 relatively unimportant TBD's remaining in version 5,
left over from version 4. These have to do with whether we want
Constraint_Error (as opposed to some other exception) raised in certain
cases, whether we care if the range check on From_Big_Rational is
ever-so-slightly too tight, and whether we should mention Static_Predicate
in the Big_Number_Check rules. [There is also a fourth TBD having to do
with the proposed type invariant for Bounded big integers, but that is
discussed below in item #14.]
status: unchanged since version 4.
13) Is the standard "No storage .. shall be lost" implementation
requirement what we want here? For example, does this wording preclude a
"unique number table" type of implementation and, if it does, is
this a problem?
status: undecided
14) The thorniest set of problems has to do with Bounded_Big_Integers.
Bob and Randy argue that Bounded_Big_Integers should not be a generic
with a formal Capacity parameter and that instead it should be a
non-generic package and the type
Bounded_Big_Integers.Optional_Big_Integer should have a visible
Capacity discriminant (with no default value).
There are other questions too, but let's start with that one.
There are drawbacks to the discriminated approach:
- Equality: presumably we don't want the discriminant value to
participate in equality. So just have an explicitly-declared
"=" function that does the right thing; this isn't much of a
problem, although this definition of equality can lead to
unintuitive results in some cases.
- Assignment: assignment between objects with differing discriminant
values will fail a discriminant check, so users will have to
somehow avoid this. This gets messy.
One solution is to use procedures instead of functions, where we
pass in the result as an out-mode parameter. It's ugly,
and it is a very different way of doing things than what the
unbounded version provides. Another solution
is to use a Copy function which creates a copy of a given
number having a given discriminant value. With this approach, an
aggregate might look something like
S_Capacity : constant Capacity_Type := ... ;
subtype S is Bounded.Big_Integer (Capacity => S_Capacity);
type R is
record
... <other fields> ...
Big_Int : S;
end record;
X : R := ( ...,
Big_Int => Copy (Some_Big_Int + Another_Big_Int,
Capacity => S_Capacity));
In addition to being ugly, this approach has a performance
impact (those Copy calls won't be free).
- Function results:
There is the question of what the discriminant
values should be for the results of the various functions
defined in the spec. We certainly do not want those to be
implementation-dependent (why would we want to introduce a
substantial portability problem?), so they would need to be
defined. What should those definitions be? [Note that there
seem to be some holes in the RM in this area for functions
that return bounded containers. If you try to answer this
question by asking "Well, how was this issue handled for
unbounded vectors?", you will be disappointed.]
The discriminated approach would also mean that function
calls would have to use the presumably-less-efficient
unconstrained function result mechanism. If a function call
provides the value for a component of an aggregate, for example,
the address of the component of the aggregate's object can
be passed in for the callee to fill in (i.e. build in place)
in the undiscriminated implementation, but probably not
in the discriminated case.
Another question whose answer does not seem obvious has to do with
the mapping from Capacity values (whether expressed as a generic formal
or as a discriminant) to bounds. The current proposal defines this via
function Last return Big_Integer is ((+256) ** Capacity);
function First return Big_Integer is (-Last);
. I think we want this mapping to be portable (although there
may be performance-based arguments for making this mapping
implementation dependent), but even so it is not at all obvious that
the above mapping is the best one.
There is also a related TBD issue in version 5 regarding the question
of whether we want to use a type invariant, a subtype predicate, or
something else to express this range "constraint". In version 5, we have
Type_Invariant =>
(if Is_Present (Optional_Big_Integer) then
In_Range (Optional_Big_Integer, First, Last)
or else (raise Constraint_Error))
but perhaps this idea should be expressed as a dynamic predicate on
the subtype Big_Integer.
Bob suggests that we consider completely omitting the bounded version
from this AI on the grounds that we don't know enough yet to make
informed decisions about these issues. For clients who want to support
N-bit arithmetic where N is large but fixed (e.g., 1024-bit integers),
perhaps we want to provide "wrapping" operators (similar to the
operations of modular types).
Perhaps standardizing the bounded version would be premature.
****************************************************************
From: Tucker Taft
Sent: Monday, January 7, 2019 3:13 PM
> 5) ...
>
> In the case of exponentiation, choosing a name other than "**" would
> defeat the whole point, so we just give up and don't provide a version
> that takes a Standard.Integer exponent (we only provide the
> Big_Integer exponent version - in version #4, only a Standard.Integer
> exponent version was provided, but if we are only going to provide one
> then shouldn't it be the Big_Integer version?)....
I think exponentiation is really a special case, and Standard.Integer should
be the exponent for X ** <int>. Even for user-defined integer types, the
exponent is still Standard.Integer. It is inconceivable that we could
represent values with exponents larger than Integer'Last unless the base is
0, 1, or -1.
As a fall-back for those really interested in Aleph-1-sized values (and with
appropriate amount of memory to go with it ;-), presumably you could use
Big_Rat ** Big_Rat, and then convert the result to Big_Int.
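[Editor's note: for illustration, a hedged sketch of the exponentiation
profile being argued for here; the choice of Natural rather than Integer for
the exponent is an assumption:]
   function "**" (L : Big_Integer; R : Natural) return Big_Integer;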
****************************************************************
From: Steve Baird
Sent: Monday, January 7, 2019 5:02 PM
> I think exponentiation is really a special case, and Standard.Integer
> should be the exponent for X ** <int>.
...
> It is inconceivable that we could represent values with exponents
> larger than Integer'Last unless
Fine with me, as long as you promise not to complain the next time you want
to estimate the number of possible orderings of the atoms in the known
universe.
****************************************************************
From: Randy Brukardt
Sent: Monday, January 7, 2019 5:16 PM
Remember that Ada only requires Integer'Last >= 32767, and 2 ** 32768 isn't
completely impossible.
****************************************************************
From: Tucker Taft
Sent: Monday, January 7, 2019 5:28 PM
I'm not worried about this particular issue, personally. ;-)
****************************************************************
From: Tucker Taft
Sent: Monday, January 7, 2019 3:20 PM
> 4) Change names?
> Big_Integer => Optional_Big_Integer
> Valid_Big_Integer => Big_Integer
> Is_Valid => Is_Present
> (and similarly for Big_Rational)
> and add Optional_Big_Natural, Optional_Big_Positive subtypes.
> Yes; change is in version 5. The Is_Valid -> Is_Present substitution
> wasn't mentioned in the discussion, but it seems obvious now that we
> are no longer talking about validity.
I will say I am not in love with "Is_Present" as a way to check whether an
Optional_Integer has been initialized. I think the name of the constant
representing the default value of an Optional_integer (in this case, still
"Invalid") should determine the name of the query function. So I would stick
with "Is_Valid" if the named constant is "Invalid." If we change the constant
to be named something else, such as "Null_Big_Integer" then "Not_Null" and
"Is_Null" would be nice query functions.
In general, I don't think of an "Optional_Integer" as being optionally
"present," but rather being optionally "initialized." But in any case, I
think the name of the constant and the name of the query should be aligned.
We could name the constant "Absent_Integer" I suppose, but that just seems
weird. ;-)
****************************************************************
From: Steve Baird
Sent: Monday, January 7, 2019 4:36 PM
> So I would stick with "Is_Valid" if the named constant is "Invalid."
That was an oversight on my part. I intended to remove all references to
validity but forgot about Invalid (which, incidentally, is defined as a
parameterless function). I'd say those functions should be named
Null_Big_Integer and Null_Big_Rational.
And I certainly agree with you about the undesirable inconsistency of
Is_Present and Invalid.
****************************************************************
From: Tucker Taft
Sent: Monday, January 7, 2019 4:56 PM
>> So I would stick with "Is_Valid" if the named constant is "Invalid."
> That was an oversight on my part. I intended to remove all references
> to validity but forgot about Invalid (which, incidentally, is defined
> as a parameterless function). I'd say those functions should be named
> Null_Big_Integer and Null_Big_Rational.
Then I would argue for providing Not_Null and perhaps also Is_Null as
query functions, rather than Is_Present.
****************************************************************
From: Randy Brukardt
Sent: Tuesday, January 8, 2019 8:12 PM
> I'd say those functions should be named
> Null_Big_Integer and Null_Big_Rational.
Gee, if we hadn't deferred the Null_Literal aspect (now, AI12-0296-1), you
could have called them "null". (Ducking incoming brickbats... :-)
****************************************************************
From: Tucker Taft
Sent: Monday, January 7, 2019 3:32 PM
> 13) Is the standard "No storage .. shall be lost" implementation
> requirement what we want here? For example, does this wording preclude
> a "unique number table" type of implementation and, if it does, is
> this a problem?
> status: undecided
>
> 14) The thorniest set of problems has to do with Bounded_Big_Integers.
> Bob and Randy argue that Bounded_Big_Integers should not be a generic
> with a formal Capacity parameter and that instead it should be a
> non-generic package and the type
> Bounded_Big_Integers.Optional_Big_Integer should have a visible
> Capacity discriminant (with no default value).
> There are other questions too, but let's start with that one.
>
> There are drawbacks to the discriminated approach:...
I think we can leave the wording about "No storage shall be lost" as is. We
could add an AARM note clarifying that this is intended to permit caching or
unique-num tables. I don't see the need for an explicit "Implementation
Permission." Compiler customers will make their concerns known if they really
care.
I agree with keeping the bound as a generic formal. Bounded_String works this
way, so that is a precedent. And anything where each object has its own bound
seems like a nightmare to maintain as the rest of the code changes. And as you
point out, if it is a discriminant that will inevitably mean heavier use of
more expensive function returns, which runs counter to the purpose of bounding
the size.
I would also support carving bounded big nums off as a separate AI, and not
including it in this revision of the standard, but having it "out there" as
an AI for anyone to use as a model for a package they implement, should they
so choose.
***************************************************************
From: Randy Brukardt
Sent: Monday, January 7, 2019 4:01 PM
> I think we can leave the wording about "No storage shall be lost" as
> is. We could add an AARM note clarifying that this is intended to
> permit caching or unique-num tables. I don't see the need for an
> explicit "Implementation Permission."
> Compiler customers will make their concerns known if they really care.
I agree. Storage being lost would be a significant problem for long running
programs, so it seems wrong for a general-purpose library. The RM should be
clear that never recovering storage is incorrect. However, how often storage
is recovered and similar topics are between an implementer and their
customers.
> I agree with keeping the bound as a generic formal.
> Bounded_String works this way, so that is a precedent.
Yup, it's bad enough that there is almost no use to bounded strings. :-) I've
stopped trying to use them at all because of that. Surely, let's double down
on a mistake.
> And anything where each object has its own bound seems like a
> nightmare to maintain as the rest of the code changes. And as you
> point out, if it is a discriminant that will inevitably mean heavier
> use of more expensive function returns, which runs counter to the
> purpose of bounding the size.
I don't understand this last point. A discriminant is just a constant
component, and build-in-place works the same way for records regardless of
the component types (assuming the type is not mutable).
Moreover, for Janus/Ada, adding genericity makes things slower in general
(since it requires passing around descriptors), so it seems likely that the
discriminant approach would be a bit more efficient (fewer things to pass into
these functions).
Moral: I don't think we ought to make the decision here based on efficiency
concerns -- they vary wildly -- usability is much more important. Steve's
arguments about assignment are much more important than anything to do with
efficiency. (Aside: The bounded containers use a procedure Assign for
assignment; that wasn't an option that Steve considered, I'm not sure why.
Perhaps assignment is less likely for the containers than for BigNums.)
> I would also support carving bounded big nums off as a separate AI,
> and not including it in this revision of the standard, but having it
> "out there" as an AI for anyone to use as a model for a package they
> implement, should they so choose.
Yes, I fully concur with this. This doesn't seem sufficiently baked at the
moment, and I'm not sure there is enough baking time left. (We don't want to
be that chef that serves our clients raw bread pudding. ;-)
We need to figure out a reasonable implementation-independent way to specify
the capacity that makes sense for the user view (and not from some theoretical
implementation view). Perhaps that would answer the other questions anyway (if
the capacity is specified as a real value, then it can't be a discriminant).
One possibility is to simply specify the bounds (like with other integer
types), probably as real values, and let the implementation figure out how to
convert that into bytes or nibbles or whatever representation it is using.
(It would be tempting to specify the bounds as unbounded bignums, but that
of course eliminates the solution.)
****************************************************************
From: Tucker Taft
Sent: Monday, January 7, 2019 5:00 PM
>> ...
>
>> And anything where each object has its own bound seems like a
>> nightmare to maintain as the rest of the code changes. And as you
>> point out, if it is a discriminant that will inevitably mean heavier
>> use of more expensive function returns, which runs counter to the
>> purpose of bounding the size.
>
> I don't understand this last point. A discriminant is just a constant
> component, and build-in-place works the same way for records
> regardless of the component types (assuming the type is not mutable).
If the caller doesn't know the size of the object being returned, then you
inevitably end up doing some sort of secondary stack or heap allocation.
The point of having a bounded type is usually to reduce the amount of dynamic
allocation, so having object's whose size is unknown to the caller makes
function return more expensive, even if you are doing build in place.
In any case, if we agree to defer bounded big nums, then we don't have to
agree on the bounding mechanism.
****************************************************************
From: Randy Brukardt
Sent: Tuesday, January 8, 2019 7:56 PM
> The attached version #5 of this AI is intended to address
> issues/questions raised since distribution of version #4 on 12/2/18.
Typos that I fixed while posting:
Start of A.5.7:
> The declaration of the generic library package
> Big_Numbers.Bounded_Big_Integers
> has the same contents and semantics as
> Big_Numbers.Optional_Big_Integer
> except:
Cool! There is a package Optional_Big_Integer! I wonder if it is optional?
:-)
It looks like the original name was missing the "s" (Big_Integer{s}), and thus
the global replace went too far.
Also, when naming packages in the Standard, we usually drop "Ada" and leave
everything else. (That was the silliest decision ever; I recall someone wanted
the text of the Standard to be shorter, and that was the MRT response
- drop 4 characters in a handful of places.)
So this needs to include the "Numerics" part in this wording. (See how A.5.2
and A.5.3 present their packages.) And drop "Ada." wherever it appears in the
text. (Gotta save those 4 characters! :-)
> Each subprogram of an instance of
> Bounded_Big_Integers.Bounded_Optional_Big_Integer
> behaves like the corresponding Big_Numbers.Big_Integers subprogram
> except that type invariant checks are performed as described in 7.3.2.
??? What is Bounded_Optional_Big_Integer? It isn't mentioned anywhere else.
Is this supposed to be
Big_Numbers.Bounded_Big_Integers? I've assumed that's the intent.
> Big_Number_Check
>
> Perform the checks associated with Pre, Static_Predicate, Dynamic_Predicate,
> or Type_Invariant aspect specifications occurring in the visible part
> of package Ada.Big_Numbers or of any of its descendants.
No package "Ada.Big_Numbers", it's "Ada.Numerics.Big_Numbers" (and we always
drop the "Ada." in text).
============
Technical issues on this (I didn't make any of these changes):
My intent was that there was one such suppression check per subsystem, so this
would be:
Numeric_Check
Perform the checks associated with Pre, Static_Predicate, Dynamic_Predicate,
or Type_Invariant aspect specifications occurring in the visible part of
package Numerics or of any of its descendants.
The idea is that there would also be "String_Check" (for Ada.Strings),
"Character_Check" (for Ada.Characters), "IO_Check" (for the packages that
should have been in Ada.IO, Text_IO, Sequential_IO, etc.), "System_Check"
(for System), and "Interface_Check" (for Interfaces). Of course, those only
matter if we start defining preconditions and the like for them.
----
Type_Invariants are required to be followed by 1.1.3(17.1/5); there's no
reason to suppress them as they can't fail for a correct implementation (same
reason we don't include postconditions here). I could see a compiler
generating them for debugging purposes, but that wouldn't be the standard
mode (what compiler wants to generate unnecessary code??).
----
Aspect specifications are "associated" with a declaration, it would be clearer
to use that wording I think (it's certainly closer to the Containers wording).
----
We need to mention instances of generic units here as well. Adapting the
current wording of the Containers version with all of the above:
Perform the checks associated with the Pre, Static_Predicate, or
Dynamic_Predicate aspects associated with an entity declared in a descendant
unit of Numerics or in an instance of a generic unit that is declared in, or
is, a descendant unit of Numerics.
****************************************************************
From: Steve Baird
Sent: Tuesday, January 29, 2019 7:23 PM
The attached version #6 of this AI is intended to address issues/questions
raised since distribution of version #5 on 1/17/19. [This is version /08 of
the AI - Editor.]
A summary of those issues/questions, along with their proposed resolutions,
follows below. The big one is splitting the Bounded_Big_Integers package off
into a separate AI.
1) Fixed a "Numeerics" typo, as well as incorporating Randy's editorial
fixes that he made when posting version #5 (hopefully I didn't
miss any).
2) The second argument for the "**" operators for Big_Integer and
Big_Rational is of type Integer, not Big_Integer.
3) There is one TBD remaining in the AI text having to do with a
choice of names for the Null/Nil/Undefined/Missing/Whatever
Big_Integer and Big_Rational values and for the corresponding
predicate.
The AI as written uses the names
No_Big_[Integer,Rational] and Is_Present.
An earlier version I was discussing with Tuck had the names
Null_Big_[Integer,Rational] and Is_Present
and Tuck did not care for those. We discussed several alternatives,
which are also listed in the AI:
1) Is_Null and Null_Big_Integer
2) Augment #1 with a Not_Null function
(i.e. provide both polarities, like "=" and "/=").
3) Is_Valid and Invalid_Big_Integer.
4) Is_Defined and Undefined_Big_Integer.
5) Is_Present and Null_Big_Integer.
6) Is_Present and No_Big_Integer.
We may need a straw poll for this one. I'll note that
"No_Big_Xxx" is consistent with Ada.Tags.No_Tag. Opinions?
4) Remove the Bounded_Big_Integers proposal from this AI so that it
becomes a separate AI (which, obviously, depends on this AI).
[This is now AI12-0305-1 - Editor.]
****************************************************************
From: Randy Brukardt
Sent: Friday, February 2, 2019 7:23 PM
...
> 1) Fixed a "Numeerics" typo, as well as incorporating Randy's
> editorial
> fixes that he made when posting version #5 (hopefully I didn't
> miss any).
I didn't see any editorial fixes missed. Good job!
However, you neither addressed nor dismissed my technical concerns over the
11.5(23) wording. At a bare minimum, you need to adjust the wording to mention
instances of these packages and drop the mention of Type_Invariants (since
they're treated like Post -- must be True or else). The "Reason"
talks about the non-existent package "Ada.Big_Numbers".
I had suggested totally replacing the 11.5(23) change with the following:
Numeric_Check
Perform the checks associated with the Pre, Static_Predicate, or
Dynamic_Predicate aspects associated with an entity declared in a descendant
unit of Numerics or in an instance of a generic unit that is declared in, or
is, a descendant unit of Numerics.
AARM Reason: One could use the Assertion_Policy to eliminate such checks,
but that would require recompiling the Ada.Numerics packages (the assertion
policy that determines whether the checks are made is that used to compile
the unit). In addition, we do not want to specify the behavior of the
Ada.Numerics operations if a precondition or predicate fails; that is
different than the usual behavior of Assertion_Policy.
By using Suppress for this purpose, we make it clear that suppressing a
check that would have failed results in erroneous execution.
----
But this is way too big a change for me to make without concurrence from
others. In particular, I don't know if my "subsystem suppression" plan was
really adopted by the group or if I just invented it and really only the
containers were approved. (BTW, I think it is likely that I will change the
containers to use wording like the above, if we go forward with this; the
containers wording is more targeted at just Pre, as we can't add predicates
to existing packages. But I'd rather that each such check was described the
same way.)
Below is the relevant part of my original message on this topic:
[See the last part of Randy's message of January 8, 2019 7:56 PM
for this text - Editor.]
****************************************************************
From: John Barnes
Sent: Thursday, February 7, 2019 7:06 AM
I have been fretting about this topic for some time – with some restless
nights.
Please find attached some thoughts. I fear they are in Word rather than plain
text for typographical reasons.
[Editor's note: These thoughts are found in the Grab-Bag on Ada-Auth.org:
http://www.ada-auth.org/ai-files/grab_bag/literals.pdf ]
Jeff has helped me with getting my thoughts in order, for which I am most
grateful.
I also have a problem with being very busy with a course at Oxford this term.
****************************************************************
From: Randy Brukardt
Sent: Thursday, February 7, 2019 7:45 PM
>I have been fretting about this topic for some time - with some
>restless nights.
I'm the only one that's allowed to have sleepless nights over unimportant Ada
issues. ;-)
>Please find attached some thoughts.
[Some thoughts quoted below...]
> So what to do. Well maybe change AI-249 to provide only one literal
> and please call it just Literal and change the !subject of the AI to
> "User-defined literals".
The problem with this is that there are three other kinds of user-defined
literals defined in other AIs. We adopted the string literal AI as well (left
the character and null literals for the future, since uses of those are a
stretch).
Merging real and integer literals also has the very significant downside of
changing a compile-time error into an exception raised at runtime for
operations that only want integer literals (like Big_Integer). If
Int : Big_Integer := 3.14;
is legal but raises an exception at runtime, we've lost something significant.
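[Editor's note: A sketch of this point, not part of the AI. With separate
aspects, a type opts into only the literal forms that make sense for it, so
the error stays at compile time:

   type Big_Integer is private
     with Integer_Literal => From_String;   -- no Real_Literal specified

   X : Big_Integer := 314;    -- legal
   Y : Big_Integer := 3.14;   -- illegal: the type has no Real_Literal aspect

With a single merged Literal aspect, the second declaration would have to be
accepted and could only fail at runtime. - Editor.]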
>I think that if AI-249 defines aspects for Real_Literal and
>Integer_Literal
>then it ought to say exactly what they are.
They correspond to a use of the corresponding lexical construct. This is
described in the Dynamic Semantics of each aspect (see 4.2.1 in the draft
RM):
For the evaluation of an integer (or real) literal with expected type
having an Integer_Literal (or Real_Literal) aspect specified ...
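[Editor's note: Concretely, and assuming the names used in this AI, an
integer literal whose expected type specifies Integer_Literal => From_String
is evaluated as a call on that function with the text of the literal as the
actual parameter, roughly:

   X : Big_Integer := 123;
   --  behaves much like
   X : Big_Integer := From_String ("123");
 - Editor.]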
There might be a clearer way to state this in the wording, and especially in
the AI presentation, but that's the point of editorial review.
One thing that immediately stands out to me is that it isn't clear that this
is really two separate rules:
For the evaluation of an integer literal with expected type having an
Integer_Literal aspect specified ...
and
For the evaluation of a real literal with expected type having a Real_Literal
aspect specified ...
Tucker tried to save some wording here, but I don't think the result is
particularly clear (one could easily read these as a single rule; from your
various gripes it appears that you did just that), and two shorter sentences
(even if very repetitive) would be better than one longer one.
This seems like a clear editorial review correction, if you want to request it.
>Remember also that a key goal of Ada has always been that programs should
>be easy to maintain by someone other than the author. Perhaps we need some
>mark to indicate that the literal has come via an aspect.
No, that would destroy the purpose (to make the use of user-defined types the
same as the use of built-in types). Ideally, one would get rid of the built-in
types altogether and define everything in packages using these aspects
(literals, indexing, de-reference, iterators, and so on).
>We might write
> BI: Big_Integer := [-123456];
The thing is, square brackets are array/container aggregate syntax in Ada 2020.
Supporting this syntax for literals would destroy the critical reason for
introducing the square bracket syntax for aggregates in the first place: the
ability to write single-valued aggregates in a natural way.
So you need a different syntax.
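[Editor's note: For illustration only (Integer_Vector here is a hypothetical
instance of Ada.Containers.Vectors): in Ada 2020 the bracketed form is
already a container/array aggregate, so

   V : Integer_Vector := [-123456];   -- a one-component container aggregate
   W : Integer_Vector := [];          -- an empty aggregate

Reusing that notation for numeric literals would take away exactly this
single-valued (and empty) aggregate capability. - Editor.]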
...
> 1. Do nothing, leave it as it is. There is a risk that old John will
> resign from the ARG and not review anything else ever, ever! A bit
> unlikely since one supposes that he has to update his book. But he is
> very grumpy.
I don't think we will leave it *exactly* as it is, but I don't think there are
going to be major changes. I had some other thoughts, but they are best kept
to myself. :-)
> 3. Do something more exciting as outlined above. But at least get rid
> of the Real_Literal and Integer_Literal aspects. Make it just Literal.
> Or if you must keep them, define exactly what the literal form is and
> provide a third aspect Literal for the ambitious user. Or maybe give
> the user the ability to add aspects so that Rational_Literal can be added.
The reason for Real_Literal and Integer_Literal being separate was discussed
above. There is a third aspect, String_Literal (see AI12-0295-1, which you
haven't reviewed yet). It was split out because people wanted to vote
separately on the different aspects.
Since these things are tied to the lexical structure of the program, adding
something more is complicated. And it is likely that an ambitious user could
use String_Literal for the sorts of suggestions that you have.
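[Editor's note: A sketch of what an ambitious user could do with the
String_Literal aspect of AI12-0295-1; the package, type, and function names
here are invented for the example:

   package Roman_Numerals is
      type Roman_Number is private
        with String_Literal => From_Roman;
      function From_Roman (S : Wide_Wide_String) return Roman_Number;
   private
      type Roman_Number is range 1 .. 3_999;
   end Roman_Numerals;

   R : Roman_Numerals.Roman_Number := "MMXIX";

 - Editor.]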
> 4. Do not confuse rational numbers with a means of providing highly
> accurate real arithmetic. Define Big_Float or Big_Real. Preferably
> Big_Real.
Ah, here we have an actionable suggestion. One has to use rational numbers to
*implement* accurate real arithmetic, but one does not have to *present* it
that way.
Would you be happier if we renamed the current package and types to "Big_Real"
(no other changes)? This is a change I could get behind, as I don't think
anyone really wants rational numbers (certainly not enough people to justify a
standard package), but lots of people want more accurate real numbers. Our
compiler's package calls these things "Univ_Real" (short for universal_real);
there's no sign of rational until you look under the covers.
>To provide rationals is so easy that it hardly merits being in the language.
>But if it is, provide it as a separate item and do it properly.
It's not remotely easy (to do correctly and efficiently). One has to reduce
the rationals frequently in order to avoid unbounded memory usage -- but you
can't do that after every operation because it is too expensive. In any case,
rationals are a rare need that doesn't have to be in the language per se.
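[Editor's note: The reduction being described divides the numerator and
denominator by their greatest common divisor. A minimal sketch, assuming a
hypothetical representation held as two Big_Integer components and using
Greatest_Common_Divisor from Big_Integers:

   procedure Reduce (Num : in out Big_Integer;
                     Den : in out Big_Positive) is
   begin
      if Num /= 0 then
         declare
            G : constant Big_Positive :=
               Greatest_Common_Divisor (abs Num, Den);
         begin
            Num := Num / G;   -- exact: G divides both components
            Den := Den / G;
         end;
      end if;
   end Reduce;

Doing this after every operation keeps the values small but costs a GCD each
time; deferring it too long risks unbounded growth. - Editor.]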
Of course, someone could use the "Big_Real" package for rational math (the
operations all still being there), but it wouldn't be presented that way.
>Whatever is done, correct the phrase arbitrary-precision rationals from A.5.5.
>Rationals are always precise.
Should say "arbitrary-precision reals", of course. That's what people
want/understand.
****************************************************************
From: Tucker Taft
Sent: Thursday, February 7, 2019 9:05 PM
I understand some of your concerns, John, but I also have experience with
supporting the use of literals with user-defined types much as we have
proposed, and it is extremely valuable. I also think the use of the term "Big_Rational"
has set you on edge, and I would agree with Randy that we should change these
to "Big_Real" to avoid the confusion. If they are really viewed externally
as rationals, then you are right that the correct syntax for defining one
would be "N / M" where N and M are integer literals. Instead, they are
intended to be used for arbitrary-precision real values, as is needed, say,
for your average Ada compiler front end.
The point of AI-249 is not to support odd-ball syntaxes for literals, but
rather to allow the *existing* syntax for numeric literals to be used with
private types. If the AI made you think that users were being given the
opportunity to invent some new kind of syntax for literals, it definitely
needs to be clarified. The first thing to change is the !subject -- instead
of "user-defined numeric literals" it should be "numeric literals for
user-defined types." And we should use that terminology throughout, rather
than ever saying "user-defined literals." We really are not supporting that
in this AI.
****************************************************************
From: John Barnes
Sent: Tuesday, February 12, 2019 9:27 AM
Changing the title of 249 as you suggest would help a lot. I completely
misunderstood the purpose.
And making it Big_Real would be excellent. Don't call them rationals
externally. That was the big mistake which as you say set me on edge.
Make such changes and then I can go to my grave in peace. (Well, apart from
anonymous access types, of course.)
So, at bottom, the problem is that the rationale for these AIs was not
clearly stated.
****************************************************************
From: Bob Duff
Sent: Tuesday, February 12, 2019 11:08 AM
> And making it Big_Real would be excellent.
Why on earth would we call something a "real"
if it can't represent irrational numbers such as square root of 2?
As for literals, you want to be able to say things like:
X := 1/3;
where X is a Big_Rat. And you CAN say that. I don't see why it makes any
difference that the RM doesn't call that notation a "literal". You can
think of it as a "literal" notation if you like.
****************************************************************
From: Jean-Pierre Rosen
Sent: Tuesday, February 12, 2019 11:16 AM
> Why on earth would we call something a "real"
> if it can't represent irrational numbers such as square root of 2?
Because, ever since FORTRAN II, it has been a tradition to have the false
advertisement of calling "Real" a finite number of values from a bounded
segment of the rationals...
****************************************************************
From: Tucker Taft
Sent: Tuesday, February 12, 2019 11:25 AM
>> And making it Big_Real would be excellent.
>
> Why on earth would we call something a "real"
> if it can't represent irrational numbers such as square root of 2?
Well, Ada has "universal-real" and it has the same problem. In the context
of Ada, I think Big_Real would be a reasonable name. I agree that in a wider
context it could be misleading.
> As for literals, you want to be able to say things like:
>
> X := 1/3;
>
> where X is a Big_Rat. And you CAN say that. I don't see why it makes
> any difference that the RM doesn't call that notation a "literal".
> You can think of it as a "literal" notation if you like.
It is worth pointing out that we do include in the "big-num" AI a "/" operator
that takes two (big) integers and produces a Big_R... We also have a "/" that
takes 2 Big_Rs and produces another Big_R.
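[Editor's note: A short usage sketch, assuming the declarations in this AI
(and keeping the current Big_Rational name):

   One_Third  : Big_Rational := To_Big_Integer (1) / To_Big_Integer (3);
   Two_Thirds : Big_Rational := One_Third + One_Third;
   Ratio      : Big_Rational := Two_Thirds / One_Third;   -- Big_R / Big_R

 - Editor.]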
****************************************************************
From: Steve Baird
Sent: Tuesday, February 12, 2019 11:31 AM
> Why on earth would we call something a "real"
> if it can't represent irrational numbers such as square root of 2?
>
> As for literals, you want to be able to say things like:
>
> X := 1/3;
>
> where X is a Big_Rat. And you CAN say that. I don't see why it makes
> any difference that the RM doesn't call that notation a "literal".
> You can think of it as a "literal" notation if you like.
I agree with Bob.
And don't forget that we have
   function Numerator (Arg : Big_Rational) return Big_Integer;
   function Denominator (Arg : Big_Rational) return Big_Positive ... ;
which, to me, strongly suggests that we are talking about rationals here.
****************************************************************
From: Randy Brukardt
Sent: Tuesday, February 12, 2019 3:15 PM
You are confusing implementation with intended use. Conservatively, 95% of
users of this package are interested in getting exact real math, as in static
universal real calculations. They probably care little about the
implementation technique. I know I find the use of rationals in our universal
math package more annoying than useful. That large subset of users wants
"Big_Real" and couldn't care less about rational anything.
Note that Big_Real usage is the justification for supporting real literals
with the package. I agree with John in the sense that 0.0 makes no sense as a
rational number. But of course it is completely sensible for a Big_Real
number.
The number of users that actually would want a rational number package is
small enough that we couldn't justify spending any time on standardizing it.
It would make perfect sense to scrub this package of the operations which
expose a rational implementation, and literally make it a Big_Real package.
But the implementation still would have to be using rationals.
There's no need to expose the rational implementation for most users, but it's
also harmless to do so for that <5% that actually care for some reason (and it
adds a bit of utility to the package). It's very much like Claw in this
regard: Claw has a handle interface which lets one access the underlying
Windows handles. But hardly anyone should ever need that (I've never used it,
for instance); it just exists for that rare user that has a need.
One thing I would suggest is to place any of the operations that expose the
implementation dead last in the package (there shouldn't be many), so that the
typical user doesn't have to wade through irrelevant junk to get to the meat.
****************************************************************
From: Steve Baird
Sent: Tuesday, February 12, 2019 6:13 PM
> Well Ada has "universal-real" and it has the same problem. In the context
> of Ada, I think Big_Real would be a reasonable name.
I'd still prefer Big_Rational, but I agree that what you are saying makes
sense.
Ada, for example, defines a type Integer rather than Bounded_Integer even
though the notion of Integer'Last is mathematically somewhat dubious.
I can live with Big_Real.
****************************************************************
From: Jean-Pierre Rosen
Sent: Wednesday, February 13, 2019 2:47 AM
> One thing I would suggest is to place any of the operations that
> expose the implementation dead last in the package (there shouldn't be
> many), so that the typical user doesn't have to wade through
> irrelevant junk to get to the meat.
Why not put these operations in a child package (with an implementation
permission to not provide them if the implementation does not use a rational
implementation)?
****************************************************************
From: Randy Brukardt
Sent: Friday, March 15, 2019 8:31 PM
I'm working on putting this AI into the draft RM, and so of course I'm seeing
some glitches. Nothing very substantive, so this is essentially a
for-the-record note:
(1) The lead-ins for the packages read:
The package Ada.Numerics.Big_Numbers has the following declaration:
However, these usually say "*library* package" and they omit the "Ada.", giving:
The library package Numerics.Big_Numbers has the following declaration:
(2) Thou shall not use "i.e.":
For an instance of Float_Conversions or Fixed_Conversions, To_Big_Real is
exact (i.e., the result represents exactly the same ...
so I replaced that with "that is":
For an instance of Float_Conversions or Fixed_Conversions, To_Big_Real is
exact (that is, the result represents exactly the same ...
(3) The Post for Denominator reads:
with Post =>
(Arg = 0.0) or else
(Greatest_Common_Divisor
(Numerator (Arg), Denominator'Result) = 1);
but there is no Greatest_Common_Divisor in this package. It seems that when
someone (that would have been me) suggested removing a confusing and barely
used use clause, they missed one of the places it was (ahem) used. So this
should be:
with Post =>
(Arg = 0.0) or else
(Big_Integers.Greatest_Common_Divisor
(Numerator (Arg), Denominator'Result) = 1);
(4) We have:
The type Optional_Big_Integer requires finalization.
but the term is "needs finalization", and we usually put a cross-reference by
it to make it clearer that this is a term and formal definition. So this
should be:
The type Optional_Big_Integer needs finalization (see 7.6).
Also, this statement is missing for Optional_Big_Real, even though the
expected implementation is two components of type Optional_Big_Integer.
(5) Since we swapped the parameters of S'Put_Image, we'd better do that for the
routine intended to implement that aspect. (Once in each package.)
(6) To_Integer reads:
function To_Integer (Arg : Big_Integer) return Integer
with Pre => In_Range (Arg,
Low => +Integer'First,
High => +Integer'Last)
or else (raise Constraint_Error);
but we (ahem) deleted the overloaded "+" in the last revision. We probably
ought not use it here. ;-)
So this should read:
function To_Integer (Arg : Big_Integer) return Integer
with Pre => In_Range (Arg,
Low => To_Big_Integer (Integer'First),
High => To_Big_Integer (Integer'Last))
or else (raise Constraint_Error);
****************************************************************
From: Jeff Cousins
Sent: Saturday, March 16, 2019 2:36 PM
They all seem reasonable tidy-ups.
****************************************************************
Questions? Ask the ACAA Technical Agent