Version 1.3 of ai12s/ai12-0208-1.txt


!standard A.20(0)          16-12-19 AI12-0208-1/00
!class Amendment 16-12-19
!status work item 16-12-19
!status received 16-09-27
!priority Low
!difficulty Medium
!subject Predefined bignum support
!summary
Define a "bignum" package.
[Editor's note: I surely hope that we have a better name for it than "bignum", which is as non-Ada a name as possible (two words with no underscore and an unnecessary abbreviation).]
!problem
Some applications need larger numbers than Standard.Integer. All Ada compilers have this capability in order to implement static expressions; shouldn't some such package be available to Ada users as well? (Yes.)
!proposal
(See Summary.)
!wording
** TBD **.
!discussion
!ASIS
No ASIS effect (assuming this is ONLY a library).
!ACATS test
An ACATS C-Test is needed to check that the new capabilities are supported.
!appendix

From: Steve Baird
Sent: Tuesday, September 27, 2016  4:09 PM

John Regehr, professor at U. of Utah:
    blog.regehr.org/archives/1401

Regehr says:
   In most programming languages, the default integer type should be a
   bignum: an arbitrary-precision integer that allocates more space when
   needed. Efficient bignum libraries exist and most integers never end
   up needing more than one machine word anyway, except in domains like
   crypto.

Nobody is suggesting changing how Standard.Integer works for Ada, but a
language-defined Bignum package (presumably supporting Rationals as well as
Integers) would be a step in the right direction.

It seems like the same arguments which were used (correctly, IMO) to justify
adding predefined container packages to the language also apply here. As Tuck
phrased it in a private message: portability and more capability "out of the
box."

Does some de facto standard already exist?

****************************************************************

From: Bob Duff
Sent: Tuesday, September 27, 2016  4:32 PM

> Nobody is suggesting changing how Standard.Integer works

But somebody might suggest that things like "type T is range 1..10**100;"
should be supported by all Ada compilers.

> It seems like the same arguments which were used (correctly, IMO) to
> justify adding predefined container packages to the language also
> apply here. As Tuck phrased it in a private message:
>     portability and more capability "out of the box."

Plus the fact that all Ada compilers have to support that functionality at
compile time, but can't provide it to their users in a portable way at run time.

> Does some de facto standard already exist?

For C and C++, yes.  For Ada, no.

For Common Lisp, Java, C#, and many others, a de jure standard exists.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, September 28, 2016  12:49 PM

> Does some de facto standard already exist?

No. I could be convinced to contribute RR's Univmath package as a starting point
for discussion.

****************************************************************

From: Jean-Pierre Rosen
Sent: Thursday, September 29, 2016  12:22 AM

There are several packages available, see http://bignumber.chez.com/index.html

****************************************************************

From: Randy Brukardt
Sent: Thursday, September 29, 2016  12:28 PM

Surely there are as many Bignum packages as there are Ada programmers (much
like containers - everybody has one). But is someone putting them into RM
format? That's what it means to "contribute" a package here.

****************************************************************

From: John Barnes
Sent: Thursday, September 29, 2016  2:05 PM

I see there has been chatter on big number packages.

I wrote such a package many years ago. I was intending to write a book called
Fun with Ada using big examples of Ada 83 programs. But it got overtaken by
events such as having to write the book on Ada 95.

But I kept the package, used some child stuff from Ada 95, but otherwise left
it alone. I still use it for dabbling with large prime numbers and so on. I
think it is based on base 10,000, which will run on a 16-bit machine and is
easy for conversion for printing.
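[Editor's illustration: a sketch in Python (not John's actual package) of why a
power-of-ten "super-digit" base makes printing trivial; the limb layout below
is an assumption chosen for illustration only.]

```python
# Sketch: a non-negative bignum as little-endian "super-digits" in base 10_000.
BASE = 10_000

def add(a, b):
    """Add two little-endian limb lists, propagating carries."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % BASE)
        carry = s // BASE
    if carry:
        result.append(carry)
    return result

def to_decimal(limbs):
    """Printing needs no division: zero-pad every limb but the first to 4 digits."""
    head, *rest = reversed(limbs)
    return str(head) + "".join(f"{limb:04d}" for limb in rest)

x = [9_999, 9_999]                 # represents 99_999_999
print(to_decimal(add(x, [1])))     # 100000000
```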

But I fear that agreeing on something might be tricky.

****************************************************************

From: Florian Schanda
Sent: Friday, September 30, 2016  2:35 AM

> But I kept the package, used some child stuff from Ada 95 but
> otherwise left  it alone, I still use it for dabbling with large prime
> numbers and so on. I think it is based on base 10,000 which will run
> on a 16 bit machine and is easy for conversion for printing.

Generally, these days, you would probably want to stick to the largest
power-of-two base, as printing these is not a massive concern but performance
is. :)

Anyway, I think whatever we come up with, it should be possible to implement it
via a binding to GMP [https://gmplib.org] which is more or less the gold
standard for arbitrary precision arithmetic. Of course, some runtime may wish to
have a more verifiable implementation... So, I think there are two requirements
we should make sure to fulfil:

   1. the api should be amenable to static analysis and formal verification
   2. the api should make it easy to bind to gmp

(Not saying this list is exhaustive.)

I just want to avoid people starting from various in-house and private projects;
it's probably a good idea instead to start from established libraries.

****************************************************************

From: Steve Baird
Sent: Friday, September 30, 2016  12:27 PM

> So, I think there are two
> requirements we should make sure to fulfil:
>
>     1. the api should be amenable to static analysis and formal verification
>     2. the api should make it easy to bind to gmp

It is also at least possible that we'll want something similar to what we have
with the containers, where we have one version for use in situations where
controlled types and dynamic storage allocation are ok and another for use in
other situations.

****************************************************************

From: Jean-Pierre Rosen
Sent: Friday, September 30, 2016  2:44 PM

Hmmm... bounded and unbounded bignums?

****************************************************************

From: Tucker Taft
Sent: Friday, September 30, 2016  3:52 PM

Perhaps: "big enough nums, already..."

****************************************************************

From: Steve Baird
Sent: Tuesday, December 12, 2017  7:20 PM

I thought I'd take a look at how Java and C++ do bignums to see if there are any
ideas there worth incorporating.

My going-in idea is to have two packages with similar specs; one has "Capacity"
discriminants and the other is implemented using dynamic storage allocation of
some sort (e.g., controlled types and allocators). Like the bounded/unbounded
versions of the containers.

C++ doesn't really have a standard for bignums, but the GCC/GMP stuff
looks pretty similar to what I expected.

Java, however, surprised me (note that I am far from a Java expert so it could
be that I am just confused here).

The Java big-real spec doesn't have Numerator and Denominator functions which
yield big-ints.

The Java type seems to be named BigDecimal.

BigDecimal is implemented as a single big-int value accompanied by two ints
(Scale and Precision), at least according to
    stackoverflow.com/questions/10655254/how-bigdecimal-is-implemented

Which leads to my question:
    If Ada defined a spec where the intended implementation for bigreal
    is clearly two bigints (one for numerator, one for denominator),
    would this result in lots of "I coded up the same algorithm in Ada
    and Java and performance was a lot worse in Ada" horror stories?

Apparently BigDecimal lets you have, in effect, a lot of decimal digits but the
value "one third" still cannot be represented exactly.

Why did the Java folks do it that way? It seems like you lose a lot of value if
you can't exactly represent, for example, one third.
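[Editor's illustration: the tradeoff can be made concrete with Python stand-ins
(not the Java classes). A BigDecimal-style value is a single big integer plus a
scale, so it can only hold numbers of the form n / 10**scale, while a
numerator/denominator pair holds one third exactly.]

```python
from fractions import Fraction

# BigDecimal-style: unscaled big integer + scale; value = unscaled / 10**scale.
# One third has no finite representation of this form, only truncations of it.
unscaled, scale = 33333333333333333333, 20
approx_third = Fraction(unscaled, 10**scale)
assert approx_third != Fraction(1, 3)      # inexact at any finite scale

# Rational style (one big integer each for numerator and denominator): exact.
third = Fraction(1, 3)
assert third + third + third == 1
```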

But perhaps most folks don't care about that functionality and the
performance/functionality tradeoff chosen by Java is closer to what most folks
want.

Opinions? Opinions about Java are of some interest, but what I really want is
opinions about what we should do in Ada.

p.s. Note that the current plan for this AI is to add one or more new predefined
packages but no changes to language rules. In particular, numeric literals for a
non-numeric type is the topic of another AI.

****************************************************************

From: Tucker Taft
Sent: Wednesday, December 13, 2017  9:15 AM

We want rational, not decimal, computations, I believe.  So I would ignore
Java's BigDecimal.

A different and interesting capability is true "real" arithmetic, which works
for transcendentals, etc.  It is intriguing, but probably not what people really
want.

I'll send the PDF for an article by Hans Boehm about "real" arithmetic
separately, since it will probably not make it through the SPAM filter!

****************************************************************

From: Randy Brukardt
Sent: Wednesday, December 13, 2017  10:57 AM

> We want rational, not decimal, computations, I believe.  So I would
> ignore Java's BigDecimal.

Isn't that Steve's question? Ada compiler vendors use rational computations
since that is required by the ACATS (it's necessary that 1/3 /=
0.33333333333333333333333333333). But is that the best choice for the Ada
community? I don't know.

> A different and interesting capability is true "real"
> arithmetic, which works for transcendentals, etc.  It is intriguing,
> but probably not what people really want.
>
> I'll send the PDF for an article by Hans Boehm about "real"
> arithmetic separately, since it will probably not make it through the
> SPAM filter!

It might be too large for the list as well. If so, I can post it in the Grab Bag
if you send it directly to me.

****************************************************************

From: Edmond Schonberg
Sent: Wednesday, December 13, 2017  1:36 PM

> I'll send the PDF for an article by Hans Boehm about "real" arithmetic
> separately, since it will probably not make it through the SPAM filter!

Both the rational representation and Boehm’s approach require arbitrary
precision integer arithmetic, so the spec of that new package is
straightforward. The papers describing the implementation of Boehm’s approach
claim that it is much more efficient than working on rationals, where numerator
and denominator grow very rapidly, while the other method only computes required
bits. I have no idea whether numerical analysts use this method. From the
literature it seems to be of interest to number theorists.
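[Editor's illustration: the growth Ed describes, sketched in Python with
Fraction as a stand-in for an exact big-rational type. Newton's iteration for
the square root of 2 roughly doubles the number of correct digits per step, but
the exact denominator roughly doubles in length too.]

```python
from fractions import Fraction

x = Fraction(1)
for _ in range(7):
    x = (x + 2 / x) / 2            # one Newton step, computed exactly

# The approximation is excellent...
assert abs(x * x - 2) < Fraction(1, 10**50)
# ...but the exact denominator has ballooned to dozens of digits.
print(len(str(x.denominator)))
```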

****************************************************************

From: Randy Brukardt
Sent: Wednesday, December 13, 2017  4:49 PM

> > It might be too large for the list as well. If so, I can post it in
> > the Grab Bag if you send it directly to me.
>
> It was too big.

By about 4 Megabytes. :-)

Since the article is copyrighted, I put it in the private part of the website.
Find it at:

http://www.ada-auth.org/standards/private/real_arithmetic-boehm.pdf

Ed suggested privately:

> Additional details on the underlying model and its implementation in:
> http://keithbriggs.info/documents/xr-paper2.pdf

****************************************************************

From: Randy Brukardt
Sent: Wednesday, December 13, 2017  5:10 PM

...
> Both the rational representation and Boehm's approach require
> arbitrary precision integer arithmetic, so the spec of that new
> package is straightforward.
> The papers describing the implementation of Boehm's approach claim
> that it is much more efficient than working on rationals, where
> numerator and denominator grow very rapidly, while the other method
> only computes required bits. I have no idea whether numerical analysts
> use this method. From the literature it seems to be of interest to
> number theorists.

The problem with the Boehm method is that it requires specifying those "required
bits", which seems problematic in a programming environment. One could do it
with a form of type declaration (Ada's "digits" seems to be the right general
idea), but that doesn't make much sense in a library form. Boehm's actual use
gets those on the fly (by interacting with the user), and they also use a
rational representation as a backup. So it seems that a rational representation
is going to show up somewhere.

I've repeatedly had the fever dream of a class-wide bignum base type, something
like:

      package Root_Bignum is
          type Root_Bignum_Type is abstract tagged null record;

          function Bignum (Val : in Long_Float) return Root_Bignum_Type is abstract;
          -- To get the effect of literals and conversions.

          function "+" (Left, Right : in Root_Bignum_Type) return Root_Bignum_Type is abstract;
          -- And all of the rest.

          function Expected_Digits (Val : in Root_Bignum_Type) return Natural is abstract;
              -- The number of digits supported by this type; 0 is returned
              -- if the number is essentially infinite.

          -- And probably some other queries.
      end Root_Bignum;

And then there could be multiple specific implementations with different
performance characteristics, everything from Long_Float itself thru infinite
rational representations.

This would allow one to create most of the interesting algorithms as class-wide
operations (with implementations that could adjust to the characteristics of the
underlying type), for instance:

   function Sqrt (Val : in Root_Bignum_Type'Class;
                  Required_Digits : Natural := 0) return Root_Bignum_Type'Class;

Here "Required_Digits" specifies how many digits of result are needed.
If it is 0, the value would be retrieved from the underlying representation.
(Probably have to raise an exception if that gives "infinite".)
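[Editor's illustration: the "digits on demand" idea, sketched with plain big
integers in Python (purely illustrative, not the proposed interface): to get
Sqrt to N digits, scale by 10**(2*N) and take an integer square root, so no
precision has to be baked into the representation.]

```python
from math import isqrt

def sqrt_digits(n, required_digits):
    """First required_digits significant digits of sqrt(n), as a string.
    Scaling by 10**(2 * required_digits) makes the integer square root
    carry the requested precision; assumes 1 <= n < 10 for simplicity."""
    scaled = isqrt(n * 10 ** (2 * required_digits))
    s = str(scaled)
    return s[0] + "." + s[1:required_digits]

print(sqrt_digits(2, 20))   # 1.4142135623730950488
```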

Such a layout would also allow easy changing of representations, which probably
would be needed for tuning purposes (most of these maths being slow, at least by
conventional standards).

This would have the clear advantage of avoiding being locked into a single form
of Bignum math, when clearly there are other choices out there useful for
particular purposes.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, December 13, 2017  5:25 PM

I said:
...
> The problem with the Boehm method is that it requires specifying those
> "required bits", which seems problematic in a programming environment.

but then also noted:

>           function Sqrt (Val : in Root_Bignum_Type'Class;
>                          Required_Digits : Natural := 0) return
> Root_Bignum_Type'Class;

essentially, recognizing that many non-terminating algorithms have to have some
sort of termination criteria.

For ease of use purposes, one would prefer to only specify numbers of digits if
they're really needed (as in Sqrt or PI, etc.). But if there are going to be a
lot of such operations, one would want to be able to specify that once. Hope
that explains my thinking here.

Also, a Bignum library needs a corresponding Text_IO library. And probably a
custom version of GEF. (The Janus/Ada compiler library has most of these
features, and they are used extensively.)

****************************************************************

From: Steve Baird
Sent: Friday, January 19, 2018  2:36 PM

We have agreed that we want bignum support in the form of one or more predefined
packages with no other language extensions (e.g., no new rules for numeric
literals) as part of this AI.

The general approach seems fairly clear, although there are a lot of details to
decide (not the least of which are the choices for names). I think we want two
forms, "vanilla" and "bounded" (analogous to, for example,
Ada.Containers.Vectors and Ada.Containers.Bounded_Vectors). In one form, the two
"big" numeric types (tentatively named Big_Integer and Big_Rational) are defined
as undiscriminated types. In the second form, these types are discriminated with
some sort of a capacity discriminant. The idea is that the first form is allowed
to use dynamic storage allocation and controlled types in its implementation
while the second form is not; the discriminant somehow indicates the set of
representable values via some mapping (should this mapping be implementation
dependent?).

At a high level, we might have something like

    package Ada.Big_Numbers is
      -- empty spec like Ada.Containers package
    end;

    package Ada.Big_Numbers.Big_Integers is
       type Big_Integer is private;

       function GCD (Left, Right : Big_Integer) return Big_Integer;

       function "+" (Arg : Some_Concrete_Integer_Type_TBD)
         return Big_Integer;

       ... ops for Big_Integer ...
    end Ada.Big_Numbers.Big_Integers;

    with Ada.Big_Numbers.Big_Integers;
    package Ada.Big_Numbers.Big_Rationals is
      use type Big_Integers.Big_Integer;

      type Big_Rational is private with
        Type_Invariant =>
          Big_Rational = +0 or else
          Big_Integers.GCD
            (Numerator (Big_Rational),
             Denominator (Big_Rational)) = +1;

      function Numerator (Arg : Big_Rational) return Big_Integer;
      function Denominator (Arg : Big_Rational) return Big_Integer;

      function "/" (Num, Den : Big_Integer) return Big_Rational
        with Pre => Den /= +0;

      ... other ops for Big_Rational ...
    end Ada.Big_Numbers.Big_Rationals;

    package Ada.Big_Numbers.Bounded_Big_Integers is ... end;

    package Ada.Big_Numbers.Bounded_Big_Rationals is ... end;
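[Editor's illustration: the intent of that Type_Invariant (keep every value in
lowest terms, so equality is cheap and representations are canonical) is
exactly what Python's Fraction enforces, which makes it a convenient reference
model; a sketch, not the proposed Ada semantics.]

```python
from fractions import Fraction
from math import gcd

q = Fraction(6, -8)   # the constructor normalizes, like "/" in the sketch

# Invariant: positive denominator, numerator and denominator coprime.
assert q.denominator > 0
assert gcd(q.numerator, q.denominator) == 1
assert q == Fraction(-3, 4)
```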

Questions/observations include:

1) Do we declare deferred constants, parameterless functions, or neither
    for things like Zero, One, and Two?

2) Which ops do we include? It seems obvious that we define at least
    the arithmetic and relational ops that are defined for any
    predefined integer (respectively float) type for Big_Integer
    (respectively, Big_Rational).

    What Pre/Postconditions are specified for these ops?
    These might involve subtype predicates.
    For example (suggested by Bob), do we want

       subtype Nonzero_Integer is Big_Integer with
           Predicate => Nonzero_Integer /= Zero;
       function "/"
         (X: Big_Integer; Y: Nonzero_Integer) return Big_Integer;
       -- similar for "mod", "rem".

     ?

    What other operations should be provided?
      - Conversion between Big_Int and what concrete integer types?
        I'd say define a type with range Min_Int .. Max_Int
        and provide conversion functions for that type. Also provide
        two generic conversion functions that take a generic formal
        signed/modular type.

      - Conversion between Big_Rational and what concrete integer or
        float types? Same idea. Conversion between a maximal
        floating point type and then a pair of conversion generics
        with formal float/fixed parameters.

      - What shortcuts do we provide (i.e., ops that can easily be
        built out of other ops)? Assignment procedures like
          Add (X, Y); -- X := X + Y
        or mixed-type operators whose only purpose is to spare users
        from having to write explicit conversion?

3) It seems clear that we don't want the bounded form of either
    package to "with" the unbounded form but we do want conversion
    functions for going between corresponding bounded and unbounded
    types. Perhaps these go in child units of the two bounded packages
    (those child units could then "with" the corresponding unbounded
    packages). Should streaming of the two forms be compatible as with
    vectors and bounded vectors?

4) We need an Assign procedure. In the unbounded case it can be just
    a wrapper for predefined assignment, but in the bounded case it
    has to deal with the case where the two arguments have different
    capacities. It's fairly obvious what to do in most cases, but what
    about assigning a Big_Rational value which cannot be represented
    exactly given the capacity of the target. Raise an exception or
    round? In either case, we probably want to provide a Round function
    that deterministically finds an approximation to a given
    value which can be represented as a value having a given
    capacity. This can be useful in the unbounded case just to save
    storage. Should this Round function be implementation-dependent?
    If not, then we might end up talking about convergents and
    semi-convergents in the Ada RM (or at least in the AARM),
    which would be somewhat odd (see
shreevatsa.wordpress.com/2011/01/10/not-all-best-rational-approximations-are-the-convergents-of-the-continued-fraction
    ). I do not think we want to define Succ/Pred functions which take
    a Big_Rational and a capacity value.

5) We want to be sure that a binding to GNU/GMP is straightforward in
    the unbounded case. [Fortunately, that does not require using the
    same identifiers used in GNU/GMP (mpz_t and mpq_t).]
    See gmplib.org/manual for the GNU/GMP interfaces.

6) Do we want functions to describe the mapping between Capacity
    discriminant values and the associated set of representable values?
    For example, a function from a value (Big_Integer or Big_Rational)
    to the smallest capacity value that could be used to represent it.
    For Big_Integer there could presumably be Min and Max functions
    that take a capacity argument. For Big_Rational, it's not so clear.
    We could require, for example, that a given capacity value allows
    representing a given Big_Rational value if it is >= the sum of
    the capacity requirements of the Numerator and the Denominator.

7) Bob feels (and I agree) that the ARG should not formally approve any
    changes until we have experience with an implementation. At this
    point the ARG should be focused on providing informal guidance on
    this topic.

Opinions?
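[Editor's illustration, regarding the Round function in item 4: Python's
Fraction.limit_denominator is one existing, deterministic precedent; it is
defined in terms of the continued-fraction convergents and semi-convergents
mentioned there. A sketch only, not a proposal for the Ada name or signature.]

```python
from fractions import Fraction

# A 16-digit decimal approximation of pi, held as an exact rational.
pi = Fraction(3141592653589793, 10**15)

# Best rational approximations with a bounded denominator:
print(pi.limit_denominator(1000))   # 355/113, the classic convergent
print(pi.limit_denominator(10))     # 22/7
```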

****************************************************************

From: Randy Brukardt
Sent: Friday, January 19, 2018  10:18 PM

...
> Questions/observations include:

0) Should Big_Integer and (especially) Big_Rational be visibly tagged?

If so, then we can use prefix notation on functions like Numerator and
Denominator. We could also consider deriving both versions (usual and bounded)
from an abstract ancestor.

> 1) Do we declare deferred constants, parameterless functions,
>     or neither for things like Zero, One, and Two?

If tagged, I'll finally get an excuse to show why what I called "tag
propagation" is necessary to implement the dispatching rules in 3.9.2. :-) (One
has to consider a set of calls, not a single call, for determining the static or
dynamic tag for dispatching. That's demonstrably necessary to process tagged
expressions with constants or literals.)

Anyway, the answer to this depends on whether there is a sufficiently short
constructor -- and that really depends on whether Tucker invents a useful
"literals for private type" AI. So I don't think this can be answered until we
find out about that.

> 2) Which ops do we include? It seems obvious that we define at least
>     the arithmetic and relational ops that are defined for any
>     predefined integer (respectively float) type for Big_Integer
>     (respectively, Big_Rational).
>
>     What Pre/Postconditions are specified for these ops?
>     These might involve subtype predicates.
>     For example (suggested by Bob), do we want
>
>        subtype Nonzero_Integer is Big_Integer with
>            Predicate => Nonzero_Integer /= Zero;
>        function "/"
>          (X: Big_Integer; Y: Nonzero_Integer) return Big_Integer;
>        -- similar for "mod", "rem".
>
>      ?

Shouldn't this predicate raise Constraint_Error rather than defaulting to
Assertion_Error, to be more like the other numeric operations? Otherwise, I'm
all in favor of this formulation. Note, however, that since the underlying type
is likely to be controlled and thus tagged, this would require some changes to
other rules; there is already an AI about that (AI12-0243-1).

>     What other operations should be provided?
>       - Conversion between Big_Int and what concrete integer types?
>         I'd say define a type with range Min_Int .. Max_Int
>         and provide conversion functions for that type. Also provide
>         two generic conversion functions that take a generic formal
>         signed/modular type.

Sounds OK.

>       - Conversion between Big_Rational and what concrete integer or
>         float types? Same idea. Conversion between a maximal
>         floating point type and then a pair of conversion generics
>         with formal float/fixed parameters.

Sounds OK again.

>       - What shortcuts do we provide (i.e., ops that can easily be
>         built out of other ops)? Assignment procedures like
>           Add (X, Y); -- X := X + Y
>         or mixed-type operators whose only purpose is to spare users
>         from having to write explicit conversion?

The only reason for mixed type operators is to make literals available. But if
one does those, then we can't add literals properly in the future
(Ada.Strings.Unbounded is damaged by this). So I say no.

I wouldn't bother with any other routines until at least such time as Bob
:-) has built some ACATS tests.

> 3) It seems clear that we don't want the bounded form of either
>     package to "with" the unbounded form but we do want conversion
>     functions for going between corresponding bounded and unbounded
>     types. Perhaps these go in child units of the two bounded packages
>     (those child units could then "with" the corresponding unbounded
>     packages).

Alternatively, both could be derived from an abstract type, and a class-wide
conversion provided. That would get rid of the empty package in your proposal.
:-)

>     Should streaming of the two forms be compatible as with
>     vectors and bounded vectors?

Yes.

> 4) We need an Assign procedure. In the unbounded case it can be just
>     a wrapper for predefined assignment, but in the bounded case it
>     has to deal with the case where the two arguments have different
>     capacities. It's fairly obvious what to do in most cases, but what
>     about assigning a Big_Rational value which cannot be represented
>     exactly given the capacity of the target. Raise an exception or
>     round?

I think I'd raise Capacity_Error. (Isn't that what the containers do?) Having
exact math be silently non-exact seems like exactly (pun) the wrong thing to do.

>     In either case, we probably want to provide a Round function
>     that deterministically finds an approximation to a given
>     value which can be represented as a value having a given
>     capacity. This can be useful in the unbounded case just to save
>     storage. Should this Round function be implementation-dependent?
>     If not, then we might end up talking about convergents and
>     semi-convergents in the Ada RM (or at least in the AARM),
>     which would be somewhat odd (see
> shreevatsa.wordpress.com/2011/01/10/not-all-best-rational-approximations-are-the-convergents-of-the-continued-fraction
>     ). I do not think we want to define Succ/Pred functions which take
>     a Big_Rational and a capacity value.

???

I don't think Round (or any other operation) ought to be
implementation-dependent, so I think it would need a real definition. Hopefully
with "semi-convergents" or other terms that no one has heard of. ;-)

> 5) We want to be sure that a binding to GNU/GMP is straightforward in
>     the unbounded case. [Fortunately, that does not require using the
>     same identifiers used in GNU/GMP (mpz_t and mpq_t).]
>     See gmplib.org/manual for the GNU/GMP interfaces.

Makes sense.

> 6) Do we want functions to describe the mapping between Capacity
>     discriminant values and the associated set of representable values?
>     For example, a function from a value (Big_Integer or Big_Rational)
>     to the smallest capacity value that could be used to represent it.
>     For Big_Integer there could presumably be Min and Max functions
>     that take a capacity argument. For Big_Rational, it's not so clear.
>     We could require, for example, that a given capacity value allows
>     representing a given Big_Rational value if it is >= the sum of
>     the capacity requirements of the Numerator and the Denominator.

It seems that the Capacity needs to mean something to the end user, not just the
compiler. So such functions seem necessary, but KISS for those!!

> 7) Bob feels (and I agree) that the ARG should not formally approve any
>     changes until we have experience with an implementation. At this
>     point the ARG should be focused on providing informal guidance on
>     this topic.

I agree that Bob should prototype these packages, including writing ACATS-style
tests for them, so that we can put them into the Ada 2020 Standard. I'll put it
on his action item list. ;-)

Seriously, we already have an ARG rule that all Amendment AIs are supposed to
include (some) ACATS tests, and we really should have a similar rule that
proposed packages are prototyped as well. This is the assumed responsibility of
an AI author, so if you can't get Bob to help, you're pretty much stuck, and
need to do that before the AI could be assumed complete.

OTOH, we haven't required that from any other AI author, so why start now??
(We really ought to, but I don't have a very big budget to write Ada 2020 ACATS
tests. Topic to discuss during the call?)

****************************************************************

From: Jean-Pierre Rosen
Sent: Saturday, January 20, 2018  12:22 AM

> Questions/observations include:
> [...]
>
I'd add:
8) IOs
   Should an IO package be associated to each of these bignums?
   Note that the issue of IO may influence the representation
   of bignums: I once knew an implementation where each super-digit
   was limited to 1_000_000_000 (instead of the natural 2_147_483_647),
   just to avoid terribly inefficient IOs.
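[Editor's illustration: the IO cost described above, sketched in Python with
int standing in for the bignum; limb sizes are illustrative. With limbs in a
binary base, printing needs one big division per chunk of output; with limbs
already in base 10**9, it is just zero-padded concatenation.]

```python
# What a binary-limb implementation must do to print a value v:
def binary_limbs_to_decimal(v):
    chunks = []
    while v >= 10**9:
        v, low = divmod(v, 10**9)          # one big division per 9 digits
        chunks.append(f"{low:09d}")
    chunks.append(str(v))
    return "".join(reversed(chunks))

# With limbs kept in base 10**9 (little-endian), the divisions disappear:
def decimal_limbs_to_string(limbs):
    head, *rest = reversed(limbs)
    return str(head) + "".join(f"{limb:09d}" for limb in rest)

assert binary_limbs_to_decimal(12_345_678_901) == "12345678901"
assert decimal_limbs_to_string([345_678_901, 12]) == "12345678901"
```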

****************************************************************

From: Tucker Taft
Sent: Saturday, January 20, 2018  11:08 AM

> ...
>
>> 1) Do we declare deferred constants, parameterless functions,
>>    or neither for things like Zero, One, and Two?
>
> If tagged, I'll finally get an excuse to show why what I called "tag
> propagation" is necessary to implement the dispatching rules in 3.9.2.
> :-) (One has to consider a set of calls, not a single call, for
> determining the static or dynamic tag for dispatching. That's
> demonstrably necessary to process tagged expressions with constants
> or literals.)

I agree that you have to do "tag propagation" to properly handle tag
indeterminate calls.  Has anyone claimed otherwise?

>
> Anyway, the answer to this depends on whether there is a sufficiently
> short constructor -- and that really depends on whether Tucker invents
> a useful "literals for private type" AI. So I don't think this can be
> answered until we find out about that.

I'm on it. ;-)

****************************************************************

From: Randy Brukardt
Sent: Saturday, January 20, 2018  7:29 PM

> I agree that you have to do "tag propagation" to properly handle tag
> indeterminate calls.  Has anyone claimed otherwise?

Not that I know of, but based on my compiler surveys, no one implements it other
than Janus/Ada. Admittedly, I haven't checked this recently.

I've long had a tagged Bignum-like package on my ACATS test to-construct list
(because one needs usage-orientation for such tests) in order to test this rule.
So far as I can tell, the ACATS doesn't currently test cases like those that
arise in Bignum:

      procedure Something (Val : in out Num'Class) is
      begin
          Val := + Zero; -- Zero gets the tag of Val, propagated through "+".
          declare
              Org : Num'Class := Val + (- One); -- Org and One get the tag of Val.
          begin
              ...
          end;
      end Something;

I'll probably come up with more realistic-looking expressions for this test, but
the idea should be obvious. (I'll have to test both static and dynamic binding,
as well as tag indeterminate cases.)
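For reference, the example above assumes declarations roughly like the following (a hypothetical sketch; none of these names come from an actual proposal). Calls such as "+ Zero" are tag indeterminate, so the controlling tag must be propagated in from the context, here from Val, per 3.9.2.

```ada
--  Hypothetical sketch of the declarations the example assumes.
--  Zero, One, and the operators have no tagged actual parameter in
--  calls like "+ Zero", so those calls are tag indeterminate and the
--  controlling tag comes from the context (tag propagation).
package Nums is
   type Num is abstract tagged private;
   function Zero return Num is abstract;
   function One  return Num is abstract;
   function "+" (Right : Num) return Num is abstract;        --  unary
   function "-" (Right : Num) return Num is abstract;        --  unary
   function "+" (Left, Right : Num) return Num is abstract;  --  binary
private
   type Num is abstract tagged null record;
end Nums;
```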

****************************************************************

From: John Barnes
Sent: Monday, January 22, 2018  5:49 AM

I wrote a bignum package in Ada 83 some 30 years ago. I did make some updates to
use Ada 95, mainly child packages. I still use it for numerical stuff for
courses at Oxford.

Notable points perhaps.

I did use a power of 10 for the base to ease IO. It was originally on a 16-bit
machine (386, perhaps). It still works on this horrid Windows 10. Not much
faster than on my old XP laptop. I don't know what Windows 10 is doing.
Obviously playing with itself - ridiculous.

I provided constants Zero and One. I didn't think any others were necessary.
Others were provided by, e.g.:

Two : Number := Make_Number (2);

I provided a package for subprograms Add, Sub, Mul, Div, Neg, Compare, Length,
To_Number, To_Text, To_Integer.

And a package for functions +, -, abs, *, /, rem, mod, <, <=, >, >=, =.

And other packages for I/O.

Long time ago. Certainly very useful.
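A rough reconstruction of the kind of spec being described might look like this; only the type and subprogram names come from the message above, and all the profiles are guessed.

```ada
--  Guessed reconstruction of an Ada-83-style bignum spec; profiles are
--  hypothetical, only the names are taken from the description.
package Big_Numbers is
   type Number is private;

   Zero, One : constant Number;

   function Make_Number (N : Integer) return Number;

   procedure Add (X, Y : in Number; Result : out Number);
   procedure Sub (X, Y : in Number; Result : out Number);
   procedure Mul (X, Y : in Number; Result : out Number);
   procedure Div (X, Y : in Number; Result : out Number);
   procedure Neg (X : in Number;    Result : out Number);

   function Compare    (X, Y : Number) return Integer;  --  <0, 0, >0
   function Length     (X : Number)    return Natural;  --  digit count
   function To_Number  (S : String)    return Number;
   function To_Text    (X : Number)    return String;
   function To_Integer (X : Number)    return Integer;
private
   --  Placeholder representation; a real version would hold an array
   --  of base-10**n digits.
   type Number is new Integer;
   Zero : constant Number := 0;
   One  : constant Number := 1;
end Big_Numbers;
```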

****************************************************************

From: Steve Baird
Sent: Monday, January 22, 2018  12:33 PM

> I'd add:
> 8) IOs
>     Should an IO package be associated with each of these bignums?

Good question.

If we provide conversion functions to and from String then would any further IO
support be needed?

****************************************************************

From: Steve Baird
Sent: Monday, January 22, 2018  1:24 PM

> ...
>> Questions/observations include:
>
> 0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
>
> If so, then we can use prefix notation on functions like Numerator and
> Denominator. We could also consider deriving both versions (usual and
> bounded) from an abstract ancestor.

If we go this way, then should this common ancestor be an interface type? I'd
say yes.

Does it then get all the same ops, so that the non-abstract ops declared for the
Bounded and Unbounded types would all be overriding?

Would this make the AI12-0243-ish issues any worse (consider the proposed
Nonzero_Integer parameter subtype mentioned earlier)? I know these problems are
bad enough already, but my question is whether this would make matters any
worse.

>> 2) Which ops do we include? It seems obvious that we define at least
>>      the arithmetic and relational ops that are defined for any
>>      predefined integer (respectively float) type for Big_Integer
>>      (respectively, Big_Rational).
>>
>>      What Pre/Postconditions are specified for these ops?
>>      These might involve subtype predicates.
>>      For example (suggested by Bob), do we want
>>
>>         subtype Nonzero_Integer is Big_Integer with
>>             Predicate => Nonzero_Integer /= Zero;
>>         function "/"
>>           (X: Big_Integer; Y: Nonzero_Integer) return Big_Integer;
>>         -- similar for "mod", "rem".
>>
>>       ?
>
> Shouldn't this predicate raise Constraint_Error rather than defaulting
> to Assertion_Error, to be more like the other numeric operations?

Good point; I agree.
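One way to get that effect is the Predicate_Failure aspect (AI12-0054-2). A sketch, assuming a Big_Integer type with "=" and a Zero constant already visible:

```ada
--  Sketch: Predicate_Failure (AI12-0054-2) changes the exception raised
--  by a failed predicate check from Assertion_Error to Constraint_Error,
--  matching the behavior of the other numeric operations.  Big_Integer
--  and Zero are assumed to be declared elsewhere.
subtype Nonzero_Integer is Big_Integer with
   Dynamic_Predicate => Nonzero_Integer /= Zero,
   Predicate_Failure => raise Constraint_Error with "zero divisor";

function "/" (X : Big_Integer; Y : Nonzero_Integer) return Big_Integer;
--  Likewise for "mod" and "rem".
```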

>> 3) It seems clear that we don't want the bounded form of either
>>      package to "with" the unbounded form but we do want conversion
>>      functions for going between corresponding bounded and unbounded
>>      types. Perhaps these go in child units of the two bounded packages
>>      (those child units could then "with" the corresponding unbounded
>>      packages).
>
> Alternatively, both could be derived from an abstract type, and a
> class-wide conversion provided. That would get rid of the empty
> package in your proposal. :-)

Could you provide a more detailed spec? I don't see how this would work, but I
suspect that I'm misunderstanding your proposal.

>> 4) We need an Assign procedure. In the unbounded case it can be just
>>      a wrapper for predefined assignment, but in the bounded case it
>>      has to deal with the case where the two arguments have different
>>      capacities. It's fairly obvious what to do in most cases, but what
>>      about assigning a Big_Rational value which cannot be represented
>>      exactly given the capacity of the target. Raise an exception or
>>      round?
>
> I think I'd raise Capacity_Error. (Isn't that what the containers do?)
> Having exact math be silently non-exact seems like exactly (pun) the
> wrong thing to do.

Is it that simple? Suppose somebody wants large rationals (e.g., 2048-bit
numerators and denominators) with rounding. It's not that they require exact
arithmetic - they just want a lot more range/precision than what you get from
Ada's numeric types. It may be that this is an unimportant corner case and you
are right to dismiss it; I don't know.


>> 6) Do we want functions to describe the mapping between Capacity
>>      discriminant values and the associated set of representable values?
>>      For example, a function from a value (Big_Integer or Big_Rational)
>>      to the smallest capacity value that could be used to represent it.
>>      For Big_Integer there could presumably be Min and Max functions
>>      that take a capacity argument. For Big_Rational, it's not so clear.
>>      We could require, for example, that a given capacity value allows
>>      representing a given Big_Rational value if it is >= the sum of
>>      the capacity requirements of the Numerator and the Denominator.
>
> It seems that the Capacity needs to mean something to the end user,
> not just the compiler. So such functions seem necessary, but KISS for those!!

Am I right in guessing that you'd like these functions to be portable (as
opposed to being implementation-defined)?

****************************************************************

From: Randy Brukardt
Sent: Monday, January 22, 2018  3:41 PM

> > I'd add:
> > 8) IOs
> >     Should an IO package be associated with each of these bignums?
>
> Good question.
>
> If we provide conversion functions to and from String then would any
> further IO support be needed?

We currently have Text_IO nested packages or children for pretty much any type
for which it makes sense to have text input-output, despite the fact that every
such type has an Image function or the equivalent (To_String for unbounded
strings).

So I'd rather expect an Ada.Text_IO.BigNum_IO package. If we don't define it now,
we will the next time around.

(The Janus/Ada UnivMath package has a complete set of Text_IO packages, and they
are heavily used. I believe they can output both rational and decimal
representation for the universal_real type.)

****************************************************************

From: Randy Brukardt
Sent: Monday, January 22, 2018  3:36 PM

> > Steve Baird writes:
> > ...
> >> Questions/observations include:
> >
> > 0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
> >
> > If so, then we can use prefix notation on functions like Numerator
> > and Denominator. We could also consider deriving both versions
> > (usual and
> > bounded) from an abstract ancestor.
>
> If we go this way, then should this common ancestor be an interface
> type? I'd say yes.

I suggested making it abstract so it could have some concrete operations if
those made sense. But perhaps they don't make sense.

> Does it then get all the same ops, so that the non-abstract ops
> declared for the Bounded and Unbounded types would all be overriding?

I would expect that the vast majority of operations are in the interface, so
dispatching can be used, and one can write class-wide algorithms that work with
any Bignum representation. Probably the capacity-specific operations would be
left out.
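The interface-ancestor idea might be sketched as follows (hypothetical names throughout): the dispatching operations live in the interface, so class-wide algorithms work with any descendant, while capacity-specific operations would appear only on the bounded descendant.

```ada
--  Hypothetical sketch, not a proposed spec: most operations are
--  abstract in the interface, so both the bounded and unbounded types
--  override them, and class-wide algorithms work with either.
package Big_Num_Interfaces is
   type Big_Integer is interface;

   function "+" (L, R : Big_Integer) return Big_Integer is abstract;
   function "*" (L, R : Big_Integer) return Big_Integer is abstract;
   function "<" (L, R : Big_Integer) return Boolean     is abstract;
   function To_String (V : Big_Integer) return String   is abstract;

   --  A class-wide algorithm usable with any Bignum representation;
   --  the call to "*" dispatches on the tag of V.
   function Square (V : Big_Integer'Class) return Big_Integer'Class is
     (V * V);
end Big_Num_Interfaces;
```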

> Would this make the AI12-0243-ish issues any worse (consider the
> proposed Nonzero_Integer parameter subtype mentioned earlier)? I know
> these problems are bad enough already, but my question is whether this
> would make matters any worse.

It just makes a solution more urgent, but it doesn't change the issues any.

...
> >> 3) It seems clear that we don't want the bounded form of either
> >>      package to "with" the unbounded form but we do want conversion
> >>      functions for going between corresponding bounded and unbounded
> >>      types. Perhaps these go in child units of the two bounded packages
> >>      (those child units could then "with" the corresponding unbounded
> >>      packages).
> >
> > Alternatively, both could be derived from an abstract type, and a
> > class-wide conversion provided. That would get rid of the empty
> > package in your proposal. :-)
>
> Could you provide a more detailed spec? I don't see how this would
> work, but I suspect that I'm misunderstanding your proposal.

I was thinking about including cross-cut operations in the spec, something
like:

    type Bignum is abstract tagged private;

    function Convert (Val : in Bignum'Class) return Bignum;

but thinking about it now, I can't figure out how one would implement one of those.

You'd probably have to have a concrete universal representation to make that
work:

    function Convert (Val : in Bignum) return Universal_Big;

    function Convert (Val : in Universal_Big) return BigNum;

but of course that would bring in the memory allocation/finalization issues that you are trying to avoid.

So at this moment I'm thinking that direct conversions would have to be left
out; you could generally do it through intermediary types like Max_Integer
using Numerator/Denominator.

> >> 4) We need an Assign procedure. In the unbounded case it can be just
> >>      a wrapper for predefined assignment, but in the bounded case it
> >>      has to deal with the case where the two arguments have different
> >>      capacities. It's fairly obvious what to do in most cases, but what
> >>      about assigning a Big_Rational value which cannot be represented
> >>      exactly given the capacity of the target. Raise an exception or
> >>      round?
> >
> > I think I'd raise Capacity_Error. (Isn't that what the containers
> > do?) Having exact math be silently non-exact seems like exactly
> > (pun) the wrong thing to do.
>
> Is it that simple? Suppose somebody wants large rationals (e.g.,
> 2048-bit numerators and denominators) with rounding.
> It's not that they require exact arithmetic - they just want a lot
> more range/precision than what you get from Ada's numeric types.
> It may be that this is an unimportant corner case and you are right to
> dismiss it; I don't know.

We're not trying to be all things to all people. I'd consider these "exact" math
packages and treat them accordingly. If there is an abstract root, one can
"easily" make a clone version that uses rounding if someone needs that.
(Defining the rounding is hard, as you noted elsewhere.)

> >> 6) Do we want functions to describe the mapping between Capacity
> >>      discriminant values and the associated set of representable values?
> >>      For example, a function from a value (Big_Integer or Big_Rational)
> >>      to the smallest capacity value that could be used to represent it.
> >>      For Big_Integer there could presumably be Min and Max functions
> >>      that take a capacity argument. For Big_Rational, it's not so clear.
> >>      We could require, for example, that a given capacity value allows
> >>      representing a given Big_Rational value if it is >= the sum of
> >>      the capacity requirements of the Numerator and the Denominator.
> >
> > It seems that the Capacity needs to mean something to the end user,
> > not just the compiler. So such functions seem necessary, but KISS
> > for those!!
>
> Am I right in guessing that you'd like these functions to be portable
> (as opposed to being implementation-defined)?

I think so; otherwise it rather defeats the purpose of language-defined packages
(to provide the ultimate in portability).

****************************************************************


Questions? Ask the ACAA Technical Agent