CVS difference for ai12s/ai12-0208-1.txt

Differences between version 1.14 and version 1.15

--- ai12s/ai12-0208-1.txt	2018/12/07 07:01:28	1.14
+++ ai12s/ai12-0208-1.txt	2019/01/05 02:34:52	1.15
@@ -3789,23 +3789,23 @@
 Sent: Sunday, December  2, 2018  1:32 PM
 
 > ...
-> Upon rereading, I wonder whether we should delete the Width parameters 
-> from the From_String functions. Do those make sense for a function 
-> that is passed a String (as opposed to reading characters from a 
+> Upon rereading, I wonder whether we should delete the Width parameters
+> from the From_String functions. Do those make sense for a function
+> that is passed a String (as opposed to reading characters from a
 > file)?
 
-If you look at the "Get" functions in Text_IO that get from a string, they 
-do omit the Width parameters, so it would seem we would be justified in 
-omitting them here as well.  In general, the length of the string provides 
-the value of the Width parameter for operations that operate on strings.  
+If you look at the "Get" functions in Text_IO that get from a string, they
+do omit the Width parameters, so it would seem we would be justified in
+omitting them here as well.  In general, the length of the string provides
+the value of the Width parameter for operations that operate on strings.
 However, if the operation *returns* a String rather than fills in a String provided
-as an OUT parameter, there is no Length, for for conversion *to* a string, 
+as an OUT parameter, there is no Length, so for conversion *to* a string,
 some sort of Width might be useful, but note that 'Image doesn't provide that.
 
-> If we decide to do this, then we should also delete the mention of 
-> Width in the accompanying text for Big_Rationals (but not in the 
-> corresponding text for Big_Integers) because with this change there 
-> will no longer be any subprograms in the Big_Rationals spec that take 
+> If we decide to do this, then we should also delete the mention of
+> Width in the accompanying text for Big_Rationals (but not in the
+> corresponding text for Big_Integers) because with this change there
+> will no longer be any subprograms in the Big_Rationals spec that take
 > a Width parameter.
 
 Makes sense.
@@ -3815,7 +3815,7 @@
 From: John Barnes
 Sent: Monday, December  3, 2018  2:09 PM
 
-I should read this one carefully. But what should I read? The AI on the 
+I should read this one carefully. But what should I read? The AI on the
 database seems a bit old.
 
 ****************************************************************
@@ -3832,7 +3832,7 @@
 From: John Barnes
 Sent: Tuesday, December  4, 2018  1:56 AM
 
-OK Got it. I assumed the AI had been updated. This is the text for the 
+OK, got it. I assumed the AI had been updated. This is the text for the
 uncluttered AARM
 
 ****************************************************************
@@ -3850,7 +3850,7 @@
        function "+" (Arg : Integer) return Valid_Big_Integer
          renames To_Big_Integer;
 
-Better have "+" as a shorthand for To_Big_Integer rather than To_Big_Integer 
+Better have "+" as a shorthand for To_Big_Integer rather than To_Big_Integer
 as a "longhand" for "+"...
 
 ****************************************************************
@@ -3858,7 +3858,7 @@
 From: Bob Duff
 Sent: Tuesday, December  4, 2018  6:14 AM
 
-I agree.  It doesn't make any difference to the compiler, but it might make 
+I agree.  It doesn't make any difference to the compiler, but it might make
 a difference to tools such as debuggers, and it does seem more logical.
 
 ****************************************************************
@@ -3887,21 +3887,21 @@
 From: John Barnes
 Sent: Tuesday, December  4, 2018  11:33 AM
 
-And to me. 
+And to me.
 
 ****************************************************************
 
 From: John Barnes
 Sent: Wednesday, December  5, 2018  4:30 AM
 
-I just started to read this in detail. Gosh Ada has lots of things now that 
+I just started to read this in detail. Gosh Ada has lots of things now that
 I don't remember.
 
 But a minor point first.
 
 A5.5 says The package Ada.Numerics.Big_Numbers has the following declaration:
 
-But A5.6 says The package Ada.Numerics.Big_Numbers.Big_Integers has the 
+But A5.6 says The package Ada.Numerics.Big_Numbers.Big_Integers has the
 following definition:
 
 Why definition and not declaration?
@@ -3913,14 +3913,14 @@
 From: Randy Brukardt
 Sent: Wednesday, December  5, 2018  1:33 PM
 
-The RM is not consistent about the wording introducing language-defined 
+The RM is not consistent about the wording introducing language-defined
 packages.
 
-I found 58 "has the following declaration" and 33 "language-defined package 
-exits". There's also some that don't use any words at all or aren't 
+I found 58 "has the following declaration" and 33 "language-defined package
+exits". There's also some that don't use any words at all or aren't
 consistent.
 
-OTOH, I didn't find any "has the following definition", except for the ones 
+OTOH, I didn't find any "has the following definition", except for the ones
 I added last night. So that must have been a mis-read on my part. And clearly,
 the ones in AI12-0208-1 are also wrong - should be "declaration".
 
@@ -3929,12 +3929,12 @@
 From: Bob Duff
 Sent: Thursday, December  6, 2018  2:05 PM
 
-> The attached is intended to include the corrections you suggested 
+> The attached is intended to include the corrections you suggested
 > except for the stuff about deferred
 
 Thanks, Steve.
 
-Comments on the attached AI12-0208.v4.txt: [Editor's note: This attachment 
+Comments on the attached AI12-0208.v4.txt: [Editor's note: This attachment
 is /06 of the AI, despite the name.]
 
 >    package Ada.Numerics.Big_Numbers.Big_Integers
@@ -3951,15 +3951,15 @@
 >         with Dynamic_Predicate => Is_Valid (Valid_Big_Integer),
 >              Predicate_Failure => (raise Constraint_Error);
 
-I expect Valid_Big_Integer will be used far more commonly than Big_Integer. 
-(That's true in the code in the package spec, and for client code.)  
-Furthermore, an invalid Big_Integer is not an integer at all.  Therefore 
+I expect Valid_Big_Integer will be used far more commonly than Big_Integer.
+(That's true in the code in the package spec, and for client code.)
+Furthermore, an invalid Big_Integer is not an integer at all.  Therefore
 I suggest name changes:
 
     Big_Integer --> Optional_Big_Integer
     Valid_Big_Integer --> Big_Integer
 
-We don't say "Valid_Big_Positive" etc, so Valid_Big_Integer seems inconsistent 
+We don't say "Valid_Big_Positive" etc, so Valid_Big_Integer seems inconsistent
 and a bit weird.
 
 Same for Big_Rats.
@@ -3969,21 +3969,21 @@
 >   assignment or scope exit.
 
 The CodePeer implementation of big ints doesn't do that
--- practice, and that has proven to work well. I fear 
+-- it leaks very slowly in practice, and that has proven to work well. I fear
 it's the only efficient way to do it, unless you have garbage collection.
 
 >    - Bounded_Big_Integers is a generic package and takes a generic formal:
 >         Capacity : Natural;
 
-I question making Capacity a generic formal. I think it should be a 
-discriminant. It seems to me the primary use of Bounded_Big_Integers will be 
-to implement Big_Integers, and that only works if you can have various-sized 
+I question making Capacity a generic formal. I think it should be a
+discriminant. It seems to me the primary use of Bounded_Big_Integers will be
+to implement Big_Integers, and that only works if you can have various-sized
 Bounded_Big_Integers all of the same type.
 
-Yes, I know that means assignment statements won't work "right".  Too bad.  
+Yes, I know that means assignment statements won't work "right".  Too bad.
 We can provide a Copy procedure.
 
-We made this mistake with Bounded_Strings, and they're a huge pain to use 
+We made this mistake with Bounded_Strings, and they're a huge pain to use
 because of that.
 
 >    with Ada.Numerics.Big_Numbers.Big_Integers;
@@ -4034,13 +4034,13 @@
 
 How about another "**" that takes a Big_Int?
 
-> [TBD: Is Constraint_Error the exception we want on the 
-> Predicate_Failure aspect specs for Valid_Big_Integer and 
+> [TBD: Is Constraint_Error the exception we want on the
+> Predicate_Failure aspect specs for Valid_Big_Integer and
 > Valid_Big_Rational?]
 
 OK with me.  I don't much care.
 
-> [TBD: do we want a Fixed_Conversions generic package analogous to 
+> [TBD: do we want a Fixed_Conversions generic package analogous to
 > Float_Conversions?]
 
 Ah, I asked that above.  I'd say probably yes.
@@ -4048,27 +4048,27 @@
 > [TBD: the range check on From_Big_Rational is slightly too tight.
 > For example,
 >     X : IEEE_Float32 :=
->       IEEE_Float32 (IEEE_Float64'Succ (IEEE_Float64 
-> (IEEE_Float32'Last))); does not overflow but the corresponding 
-> conversion using From_Big_Rational would fail the range check. Do we 
+>       IEEE_Float32 (IEEE_Float64'Succ (IEEE_Float64
+> (IEEE_Float32'Last))); does not overflow but the corresponding
+> conversion using From_Big_Rational would fail the range check. Do we
 > care?]
 
 I don't know.
 
-> This section, or at least the AARM note, is intended to follow the 
-> structure of the analogous wording for  AI12-011 (contracts for 
+> This section, or at least the AARM note, is intended to follow the
+> structure of the analogous wording for  AI12-011 (contracts for
 > containers).
 >
 > Add after 11.5(23):
 >
 > Big_Number_Check
 >
-> Perform the checks associated with Pre, Static_Predicate, 
-> Dynamic_Predicate, or Type_Invariant aspect specifications occuring in 
+> Perform the checks associated with Pre, Static_Predicate,
+> Dynamic_Predicate, or Type_Invariant aspect specifications occurring in
 > the visible part of package Ada.Big_Numbers or of any of its descendants.
 >
-> [TBD: Include Static_Predicate in this list just for completeness, 
-> even though it happens that there are no Static_Predicate 
+> [TBD: Include Static_Predicate in this list just for completeness,
+> even though it happens that there are no Static_Predicate
 > specifications in these units?]
 
 Either way is OK with me.
@@ -4078,7 +4078,7 @@
 From: Randy Brukardt
 Sent: Thursday, December  6, 2018  9:21 PM
 
-> We don't say "Valid_Big_Positive" etc, so Valid_Big_Integer seems 
+> We don't say "Valid_Big_Positive" etc, so Valid_Big_Integer seems
 > inconsistent and a bit weird.
 
 I note that we discussed these names in Lexington and decided on these
@@ -4091,47 +4091,47 @@
 I suppose this comment qualifies. :-)
 
 > Same for Big_Rats.
-> 
+>
 > > Implementation Requirements
 > >   No storage associated with a Big_Integer object shall be lost upon
 > >   assignment or scope exit.
-> 
+>
 > The CodePeer implementation of big ints doesn't do that
-> -- it leaks very slowly in practice, and that has proven to work well. 
-> I fear it's the only efficient way to do it, unless you have garbage 
+> -- it leaks very slowly in practice, and that has proven to work well.
+> I fear it's the only efficient way to do it, unless you have garbage
 > collection.
 
-I don't believe that there is any efficient implementation of unbounded 
-Big_Integers for Ada. Any implementation like the one you described in 
-Lexington (and above) would have to use a global level of indirection to deal 
-with oversize objects (modern Oses scramble address spaces, so no assumptions 
-can be made about the location of anything), and that would be a problem for 
+I don't believe that there is any efficient implementation of unbounded
+Big_Integers for Ada. Any implementation like the one you described in
+Lexington (and above) would have to use a global level of indirection to deal
+with oversize objects (modern OSes scramble address spaces, so no assumptions
+can be made about the location of anything), and that would be a problem for
 multitasking (since you'd be accessing a global data structure).
-You'd have to use some form of locking (at a minimum via atomic objects), and 
-that would also sap performance. Moreover, for a 32-bit implementation like 
-Janus/Ada, you're going to have a lot of memory used that way -- it's not a 
+You'd have to use some form of locking (at a minimum via atomic objects), and
+that would also sap performance. Moreover, for a 32-bit implementation like
+Janus/Ada, you're going to have a lot of memory used that way -- it's not a
 *slow* storage leak.
 
 If you need critical performance, you'll have to use the bounded form.
 
 > >    - Bounded_Big_Integers is a generic package and takes a generic formal:
 > >         Capacity : Natural;
-> 
-> I question making Capacity a generic formal. I think it should be a 
-> discriminant. It seems to me the primary use of Bounded_Big_Integers 
-> will be to implement Big_Integers, and that only works if you can have 
+>
+> I question making Capacity a generic formal. I think it should be a
+> discriminant. It seems to me the primary use of Bounded_Big_Integers
+> will be to implement Big_Integers, and that only works if you can have
 > various-sized Bounded_Big_Integers all of the same type.
 
 Agreed. This is a discriminant for the bounded containers for this reason.
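 
 [Editor's sketch: an illustrative spec (all names invented) of the
 discriminated design. With Capacity as a discriminant, differently
 sized values share one type, so a Copy procedure can move values
 between capacities and an unbounded Big_Integers can be built on top
 of the bounded form.]
 
 with Interfaces;
 package Bounded_Big_Ints_Sketch is
    type Bounded_Big_Integer (Capacity : Natural) is private;
 private
    type Word_Array is
      array (Positive range <>) of Interfaces.Unsigned_32;
    type Bounded_Big_Integer (Capacity : Natural) is record
       Length : Natural := 0;                --  words currently in use
       Words  : Word_Array (1 .. Capacity);
    end record;
 end Bounded_Big_Ints_Sketch;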
 
 ...
-> > [TBD: do we want a Fixed_Conversions generic package analogous to 
+> > [TBD: do we want a Fixed_Conversions generic package analogous to
 > > Float_Conversions?]
-> 
+>
 > Ah, I asked that above.  I'd say probably yes.
 
-I'd say no, because accuracy requirements seem to be major problem for such 
-conversions. I know that Steve has shown that one can always get the right 
+I'd say no, because accuracy requirements seem to be a major problem for such
+conversions. I know that Steve has shown that one can always get the right
 answer using essentially a binary search of model intervals, but that seems to
 be more of a thought experiment than a real implementation technique (it would
 use thousands of big rational operations).
@@ -4147,10 +4147,10 @@
 > to be more of a thought experiment than a real implementation technique (it
 > would use thousands of big rational operations).
 
-I believe there is a well documented mechanism for doing this properly. 
+I believe there is a well documented mechanism for doing this properly.
 AdaMagic does the right thing here.  I'd be happy to provide the algorithm.
-It is based on the notion of "continued fractions" I believe.  I recently 
-analyzed supporting fixed point in our code generator for Simulink, and wrote 
+It is based on the notion of "continued fractions" I believe.  I recently
+analyzed supporting fixed point in our code generator for Simulink, and wrote
 up the algorithms in a short document.
 
 ****************************************************************
@@ -4164,17 +4164,764 @@
 how to Google for such a thing, as one has no idea of what it is called.)
 
 >AdaMagic does the right thing here.  I'd be happy to provide the algorithm.
->It is based on the notion of "continued fractions" I believe.  I 
->recently analyzed supporting fixed point in our code generator for 
+>It is based on the notion of "continued fractions" I believe.  I
+>recently analyzed supporting fixed point in our code generator for
 >Simulink, and wrote up the algorithms in a short document.
 
 I have the additional problem of having to do it in a shared generic. It is
-completely impossible to make completely accurate conversions to universal 
+completely impossible to make completely accurate conversions to universal
 fixed in that environment, so the algorithm has to use only integers and the
-(base of the) target fixed point type. (In general, universal fixed 
+(base of the) target fixed point type. (In general, universal fixed
 conversions are inaccurate, see Annex G, so any truly usable algorithm for Ada
 has to avoid anything like that; it's not 100% a problem with shared generics.
-That rules out fixed-fixed multiplies and divides. Not sure if that is 
+That rules out fixed-fixed multiplies and divides. Not sure if that is
 possible.)
 
 ****************************************************************
+
+From: Bob Duff
+Sent: Sunday, December  9, 2018  10:13 AM
+
+> I note that we discussed these names in Lexington and decided on these
+> exactly:
+>
+>   Tucker suggests that the names would be Big_Integer and Valid_Big_Integer.
+>   This seems consistent with existing languages. Bob says he can live with
+>   that (and he will complain about it).
+
+Are you sure that's what I meant?  I think maybe I was advocating having no
+default initial value, like regular integer types, as opposed to commenting on
+the names.  Not sure.
+
+Anyway,  I really do not like the name "Valid_Big_Integer", for reasons I
+stated.  Tucker says this is consistent with other languages.  I skept.
+
+Tucker, please name these existing languages.
+
+> I suppose this comment qualifies. :-)
+
+;-)
+
+> I don't believe that there is any efficient implementation of
+> unbounded Big_Integers for Ada.
+
+"efficient" = "comparable to other languages that support bignums".
+If that's not possible, then we have another embarrassment like the tampering
+fiasco.
+
+>... Any implementation like the one you described in  Lexington (and
+>above) would have to use a global level of indirection to  deal with
+>oversize objects (modern Oses scramble address spaces, so no
+>assumptions can be made about the location of anything), and that would
+>be a  problem for multitasking (since you'd be accessing a global data structure).
+> You'd have to use some form of locking (at a minimum via atomic
+>objects),  and that would also sap performance. Moreover, for a 32-bit
+>implementation  like Janus/Ada, you're going to have a lot of memory
+>used that way -- it's  not a *slow* storage leak.
+
+Sorry, but the above is just not true.  If you want to know why, look at the
+codepeer sources.  The codepeer implementation is 32 bit (even on 64-bit
+machines), it has no problem with address space randomization, it leaks slowly,
+it's task safe.  There is an atomic write in there, but it's on the slow-anyway
+path.
+
+> Agreed. This is a discriminant for the bounded containers for this reason.
+
+:-)
+
+> ...
+> > > [TBD: do we want a Fixed_Conversions generic package analogous to
+> > > Float_Conversions?]
+> >
+> > Ah, I asked that above.  I'd say probably yes.
+>
+> I'd say no, because accuracy requirements seem to be major problem for
+> such conversions. I know that Steve has shown that one can always get
+> the right answer using essentially a binary search of model intervals,
+> but that seems to be more of a thought experiment than a real
+> implementation technique (it would use thousands of big rational operations).
+
+I don't know enough to have a strong opinion here.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Sunday, December  9, 2018  10:29 AM
+
+> I believe there is a well documented mechanism for doing this
+> properly.  AdaMagic does the right thing here.
+
+Is this needed in Ada anyway, for converting universal_integer to misc
+fixed-point types?
+
+>...I'd be happy to provide the algorithm.
+
+Yes, please.  That would be useful.
+
+> ...It is based on the notion of "continued fractions" I believe.  I
+> recently analyzed supporting fixed point in our code generator for
+> Simulink, and wrote up the algorithms in a short document.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Sunday, December  9, 2018  12:04 PM
+
+> Anyway,  I really do not like the name "Valid_Big_Integer", for
+> reasons I stated.  Tucker says this is consistent with other
+> languages.  I skept.
+>
+> Tucker, please name these existing languages.
+
+I really don't remember what I meant by this, if I really said it.  What I
+would say today (and perhaps what I meant then) is that it is consistent with
+the existing *language* (i.e. "normal" integer types in Ada), which do not
+guarantee they have a valid value unless you initialize them.
+
+>> I suppose this comment qualifies. :-)
+>
+> ;-)
+>
+>> I don't believe that there is any efficient implementation of
+>> unbounded Big_Integers for Ada.
+>
+> "efficient" = "comparable to other languages that support bignums".
+> If that's not possible, then we have another embarrassment like the
+> tampering fiasco.
+
+I would suggest we say that the storage shall not be "lost," but allow it to be
+used for caching or other purposes.  Based on the fact that our commercial
+CodePeer product has been using Bob's implementation of big nums for the past
+fifteen years, I don't agree that this cannot be done efficiently.  Yes, you do
+need to worry about multi-tasking (and async-transfer-of-control, for that
+matter), but I think we need to give implementations some freedom, especially
+for the implementation of a new package like this.  We can set limits, but we
+don't want to preclude useful approaches out of the gate.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Sunday, December  9, 2018  12:26 PM
+
+>> ...It is based on the notion of "continued fractions" I believe.  I
+>> recently analyzed supporting fixed point in our code generator for
+>> Simulink, and wrote up the algorithms in a short document.
+
+I looked into it a bit further.  I have extracted below two different documents
+that might be relevant to this topic.  The first is a general discussion of
+converting and multiplying fixed-point values with different scale factors.  The
+second is a function that converts from "universal real" to another rational
+number that approximates it, but with limits on the numerator and denominator.
+I realize now that I am not sure whether either of these directly addresses the
+issue that the Fixed_Conversion package might face, but it feels like they are
+at least somewhat relevant! ;-)
+
+-----  Converting and multiplying fixed-point values with potentially different
+scale factors ------
+
+Here we give the actual proposed implementation for both C and Ada for Simulink
+fixed-point operations.  Many of these operations depend on the relationship
+between the scale factors of the two (or three) types involved.  In several
+cases it is necessary to take the ratio between two scale factors, and then
+approximate it by a ratio of relatively small integers.  The “continued
+fraction” (CF) method is used to do the approximation.  This method generates
+successively better approximations, until the numerator or the denominator
+exceeds some bound (typically the maximum value that can be represented as
+an integer in say 16 or 32 bits).  We will use the function “CF(X, 16)” to
+denote the rational number that most closely approximates X while both the
+numerator and denominator of the rational number remain within the
+representation of a 16-bit
+signed integer.  We will use “Size_In_Bits” rather than 16 or 32 below, to
+represent the size of integer values that are being manipulated.
+
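+[Editor's sketch: a minimal Ada rendition of such a CF function, using
+Long_Long_Integer in place of the big-number types; all names are
+invented.  Given X = 314159265/100000000 and Bound = 100 it yields
+22/7, and with Bound = 1000, 355/113.]
+
+--  Approximate X_Num/X_Den by Num/Den with Num, Den <= Bound.
+--  Assumes X_Num >= 0 and X_Den > 0; the real routine (see the C code
+--  later in this message) also handles negatives and has a smarter
+--  fallback when even the first convergent exceeds Bound.
+procedure CF (X_Num, X_Den, Bound : Long_Long_Integer;
+              Num, Den : out Long_Long_Integer)
+is
+   N  : Long_Long_Integer := X_Num;   --  remaining fraction is N/D
+   D  : Long_Long_Integer := X_Den;
+   P2 : Long_Long_Integer := 0;       --  P(n-2)
+   P1 : Long_Long_Integer := 1;       --  P(n-1)
+   Q2 : Long_Long_Integer := 1;       --  Q(n-2)
+   Q1 : Long_Long_Integer := 0;       --  Q(n-1)
+begin
+   Num := 0;
+   Den := 1;
+   loop
+      declare
+         A : constant Long_Long_Integer := N / D;   --  next CF term
+         P : constant Long_Long_Integer := A * P1 + P2;
+         Q : constant Long_Long_Integer := A * Q1 + Q2;
+         R : constant Long_Long_Integer := N mod D;
+      begin
+         exit when P > Bound or else Q > Bound;
+         Num := P;                    --  accept this convergent
+         Den := Q;
+         exit when R = 0;             --  X represented exactly
+         N := D;                      --  continue with 1/(N/D - A)
+         D := R;
+         P2 := P1;  P1 := P;
+         Q2 := Q1;  Q1 := Q;
+      end;
+   end loop;
+end CF;
+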
+Below we talk about the “scale” factor.  This is the number by which we
+multiply the underlying integer representation to produce the “real-world”
+value.
+
+   Real_World_Value := Scale * Representation
+
+In Ada, this is called the “small” (or less precisely, the “delta”).
+
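+[Editor's sketch: an illustrative Ada declaration.]
+
+package Scale_Example is
+   type Scaled is delta 0.25 range -100.0 .. 100.0;
+   --  Scaled'Small defaults to 0.25, so the underlying representation
+   --  8 denotes the real-world value 8 * 0.25 = 2.0
+end Scale_Example;
+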
+We are ignoring the issue of a “bias” for now, which is used when the
+representation for the “real-world” zero is not an integer 0.  Bias should
+probably be handled at the “edges” of the model (subtracted or added as
+appropriate), and not appear in any intermediate values.
+
+Below we use a routine “Fixed_Mul_Shift_Div(A, B, C, D)” which computes A * B *
+2^C / D.  It multiplies A * B into a double-width temporary.  Then if C is
+positive, it does a double-width shift left.  If C is negative, it does a
+double-width shift right.  If C is zero, it performs no shift.  It then does a
+divide, taking the double-width dividend down to a single-width quotient.
+
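+[Editor's sketch: a minimal Ada version of such a Fixed_Mul_Shift_Div
+for 32-bit operands, with a 64-bit temporary standing in for the
+double-width intermediate.  Rounding is plain truncation, and overflow
+of the final narrowing is not handled.]
+
+with Interfaces; use Interfaces;
+
+function Fixed_Mul_Shift_Div
+  (A, B : Integer_32;
+   C    : Integer;
+   D    : Integer_32) return Integer_32
+is
+   T : Integer_64 := Integer_64 (A) * Integer_64 (B);  --  double width
+begin
+   if C > 0 then
+      T := T * 2 ** C;          --  double-width shift left
+   elsif C < 0 then
+      T := T / 2 ** (-C);       --  shift right (truncates toward zero)
+   end if;
+   return Integer_32 (T / Integer_64 (D));  --  narrow the quotient
+end Fixed_Mul_Shift_Div;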
+
+----------
+
+Conversion:
+
+  From Integer/Fixed to Integer/Fixed
+    Source scale = S, Target scale = T
+    (use scale = 1.0 for an Integer type)
+
+  Basic re-scaling formula is as follows:
+    Target_Rep := Source_Rep * S / T
+
+  But to be more efficient, we translate ratio into rational number:
+    Num/Den == CF(S/T, Size_In_Bits)
+
+  Now we do the scaling, using strictly integer arithmetic:
+    Target_Rep := Source_Rep * Num / Den
+
+For C (presuming no special rounding):
+   if Den = 1:
+      if Num is power of 2:
+         generate “Result = Source << Log2(Num);”
+      else:
+         generate “Result = Source * Num;”
+   elsif Num = 1:
+      if Den is power of 2:
+         generate “Result = Source >> Log2(Den);”
+      else:
+         generate “Result = Source / Den;”
+   else:
+      generate
+        “Result = Fixed_Mul_Shift_Div (Source, Num, 0, Den);”
+
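+[Editor's example, with illustrative scales: converting from S = 0.1 to
+T = 0.125 gives S/T = 4/5; since neither Num = 4 nor Den = 5 is 1, the
+general case applies and the generated code is
+“Result = Fixed_Mul_Shift_Div (Source, 4, 0, 5);”.]
+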
+For C, if special Rounding is required, the last three code snippets which
+involve division or a right-shift will need to be adjusted.  If truncate toward
+zero, use division even for power-of-2 divisor when Source might be negative.
+If round to nearest, add “Den/2 * Sign(Source)” to Source before dividing
+(special rounding can be provided by having different versions of
+Fixed_Mul_Shift_Div).
+
+NOTE: The above algorithm for re-scaling a value we will reuse later, as
+“Rescale (Value, Scale)” meaning compute CF(Scale, Size_In_Bits) and then go
+through the above set of cases.
+
+For Ada:
+   generate “Result := Target_Type(Source);”
+     -- presuming Ada types have “small” matching Simulink scale,
+     -- and desired rounding matches Ada rounding.
+
+For Ada, if special Rounding is required, add an adjustment to Source before
+converting.  Amount depends on exact rounding performed by Ada and expected by
+Simulink.  An explicit use of a Fixed_Mul_Div routine might be the simplest
+solution.
+
+
+Add/Subtract:
+
+Addition and subtraction use the “native” arithmetic operations, presuming the
+types are the same.  If they are not the same, we would convert both to the
+target type for the computation (see above), and then do a simple addition.
+
+For C:
+   generate “Result = Operand1 + Operand2;”
+
+For Ada:
+   generate “Result := Operand1 + Operand2;”
+
+
+Multiply/Divide:
+
+Multiplication and division generally need to consider three types, the two
+operand types, and the result type.  Assume the scale factors are O1, O2, and R
+for Operand1 scale, Operand2 scale, and Result scale.  Ideally, the scales match
+in the sense that, for multiplication, O1 * O2 = R, and for division, O1 / O2 =
+R.  When this is the case, “native” multiplication or division can be used.  If
+not, we compute an overall re-scaling factor S = O1*O2/R for multiplication, or
+S = O1/(O2*R) for division.  The code we generate depends on the operation and
+this re-scaling factor.
+
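+[Editor's example, with illustrative scales: multiplying two operands
+with O1 = O2 = 2^-8 into a result with R = 2^-16 gives S = O1*O2/R = 1,
+so a native multiply suffices; with R = 2^-8 instead, S = 2^-8 < 1 and
+the Fixed_Mul_Shift_Div case below applies.]
+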
+For C multiplication:
+   if S >= 1:
+      --  Simple case, just rescale a "native" multiply
+      generate “Result := Rescale (O1 * O2, S);”
+
+   else (S < 1):
+      --  More complex case, use Fixed_Mul_Shift_Div;
+      --  Compute Shift and Divisor for inverse of scale factor
+
+      Inverse_Scale := 1/S;
+
+      if Floor(Inverse_Scale) = Inverse_Scale and then
+        Inverse_Scale < 2^(Size_In_Bits-1):
+         --  1/Scale is an exact, not-too-big integer
+         Divisor := Integer(Inverse_Scale);
+         Shift := 0;
+      else:
+         --  1/Scale is not an integer, or is too big
+         --  Compute a Divisor and a Shift such that
+         --  S ~= 2^Shift/Divisor
+         Shift := Size_In_Bits - 2 - Floor(Log2(Inverse_Scale));
+         Divisor := Floor(2^Shift/S);
+
+         --  Reduce Divisor until fits in integer
+         while Divisor >= 2^(Size_In_Bits-1):
+            Shift := Shift - 1;
+            Divisor := Floor(2^Shift/S);
+
+         if Shift > 0:
+            --  Reduce 2^Shift/Divisor to lowest terms
+            G := GCD(2^Shift, Divisor);
+            Divisor := Divisor/G;
+            Shift := Shift - Log2(G);
+
+      generate
+        “Result := Fixed_Mul_Shift_Div
+                    (O1, O2, Shift, Divisor);”
+
+
+For C division:
+   if S <= 1:
+      --  Simple case; rescale dividend and then do a "normal" divide
+      generate "Result := Rescale(O1, S) / O2;"
+
+   else (S > 1):
+      --  More complex case; use Fixed_Mul_Shift_Div;
+      --  Compute a Multiplier and a Shift
+      if Floor(S) = S and then S < 2^(Size_In_Bits-1):
+         --  S is an exact and not-too-big integer
+         Multiplier := S;
+         Shift := 0;
+      else:
+         --  S is not an integer, or is too big
+         Shift := Size_In_Bits - 2 - Ceiling(Log2(S));
+         Multiplier := Floor(2^Shift * S);
+         --  Reduce Multiplier until it is representable as an integer
+         while Multiplier >= 2^(Size_In_Bits -1):
+            Shift := Shift - 1;
+            Multiplier := Floor(2^Shift * S);
+         if Shift > 0:
+            --  Reduce ratio of Multiplier and 2^Shift to lowest terms
+            G := GCD(2^Shift, Multiplier);
+            Multiplier := Multiplier / G;
+            Shift := Shift - Log2(G);
+      generate
+        "Result := Fixed_Mul_Shift_Div(O1, Multiplier, Shift, O2);"
+
+
+For Ada multiplication, with no special rounding:
+
+   generate “Result := Operand1 * Operand2;”
+
+For Ada division, with no special rounding:
+
+   generate “Result := Operand1 / Operand2;”
+
+---- ur_to_rational conversion function from AdaMagic ---
+
+    void
+ur_to_rational(ur_ptr real, ui_ptr bound, ui_ptr *num, ui_ptr *den)
+/*******************************************************************
+ * Overview:                                                       *
+ *     Approximate real number by rational number using continued  *
+ *     fraction method.  The resulting rational is num/den, where  *
+ *     both num and den are less than or equal to the specified    *
+ *     bound.                                                      *
+ *                                                                 *
+ * Notes:                                                          *
+ *     Since the 'real' number is also a fraction, we are just     *
+ *     approximating a rational number by another rational number  *
+ *     with potentially smaller numbers in the numerator and       *
+ *     denominator.                                                *
+ *******************************************************************/
+{
+    bool   saved_here = use_scratch();
+
+    ui_ptr   p2, q2;  /* previous previous ratio */
+    ui_ptr   p1, q1;  /* previous ratio */
+    ui_ptr   p,  q;   /* current ratio */
+    ui_ptr   a;       /* new denominator */
+    ur_ptr   abs_real = ur_abs(real);
+    ur_ptr   r;       /* remaining fraction */
+
+    assert(ui_is_positive(bound));
+
+    /*
+     * continued fractions algorithm:
+     *                                        p0/q0 =        1 / 0
+     * r1 = abs(real)         a1 = floor(r1)  p1/q1 =       a1 / 1
+     * r2 = 1/(r1-a1)         a2 = floor(r2)  p2/q2 = a2*p1+p0 / a2*q1+q0
+     * rn = 1/(r[n-1]-a[n-1]) an = floor(rn)  pn/qn = an*p[n-1]+p[n-2] /
+     *                                                     an*q[n-1]+q[n-2]
+     *
+     * we start at r0 = 1/real
+     */
+    p2 = q1 = &ui_one;    /* q[n-1] and p[n-2] */
+    q2 = p1 = &ui_zero;   /* q[n-2] and p[n-1] */
+    r = abs_real;
+
+    while (!ur_is_zero(r) &&
+	   ui_is_less_or_equal(p1, bound) && ui_is_less_or_equal(q1, bound)) {
+	r = ur_divide(&ur_one, r);
+	a = ur_trunc_to_ui(r);
+	r = ur_diff(r, ui_to_ur(a));
+	p = ui_sum(ui_mult(p1, a), p2);
+	q = ui_sum(ui_mult(q1, a), q2);
+	p2 = p1;  q2 = q1;
+	p1 = p;   q1 = q;
+    }
+
+    /* At this point p1/q1 and p2/q2 are approximations to real,
+       with p1/q1 being the more accurate but only p2/q2 guaranteed
+       to be within the bounds specified for p and q. */
+
+    if (ui_is_less_or_equal(p1, bound) && ui_is_less_or_equal(q1, bound)) {
+	*num = p1;
+	*den = q1;
+    } else {
+	/* p1 or q1 is too big */
+	/* Try subtracting from p1 and q1 to make them <= bound */
+	ui_ptr max_pq = ui_is_greater_or_equal(p1, q1)? p1: q1;
+	ui_ptr bound_diff = ui_diff(max_pq, bound);
+	ur_ptr reduced_ratio = ur_normalize(
+	  ui_diff(p1, bound_diff), ui_diff(q1, bound_diff));
+	ur_ptr p2_over_q2 = ur_no_normalize(p2, q2);
+	ur_ptr reduced_diff = ur_abs(ur_diff(abs_real, reduced_ratio));
+	ur_ptr p2q2_diff = ur_abs(ur_diff(p2_over_q2, abs_real));
+
+	if (ur_is_less_than(reduced_diff, p2q2_diff)) {
+	    /* (p1-bound_diff)/(q1-bound_diff) is better approx. */
+	    *num = ur_numerator(reduced_ratio);
+	    *den = ur_denominator(reduced_ratio);
+	} else {
+	    /* p2/q2 is better */
+	    *num = p2;
+	    *den = q2;
+	}
+    }
+
+    if (ur_is_negative(real)) {
+	*num = ui_neg(*num);
+    }
+
+    if (copy_result()) {
+	*num = ui_new_ui(*num);
+	*den = ui_new_ui(*den);
+    }
+    end_scratch();
+}
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Sunday, December  9, 2018  9:52 PM
+
+...
+> I looked into it a bit further.  I have extracted below two different
+> documents that might be relevant to this topic.
+> The first is a general discussion of converting and multiplying
+> fixed-point values with different scale factors.
+> The second is a function that converts from "universal real"
+> to another rational number that approximates it, but with limits on
+> the numerator and denominator.  I realize now that I am not sure
+> whether either of these directly addresses the issue that the
+> Fixed_Conversion package might face, but it feels like they are at
+> least somewhat relevant! ;-)
+
+I didn't see anything new here; the "small integer" technique is well-known and
+we use it when possible. But that's the key here:
+
+(1) "Small integer" techniques aren't usable with every possible value of
+'Small. Annex G recognizes this and has weaker requirements on such smalls
+(see G.2.3). (Specifically, the "perfect result set" and the "close result
+set".) The requirements here are not taking this into account; they assume
+that every value can be perfectly converted.
+
+(2) One cannot use any of these techniques when both smalls aren't known at
+compile-time, as often happens in a generic body. The usual fix for such
+problems is to use thunks, but that just changes the problem from not knowing
+one small to not knowing the other (since thunks depend solely on the generic
+contract and have no idea as to what appears in the body).
+
+If we *do* support fixed conversions, we have to at a minimum take into account
+the accuracy requirements as given in Annex G (it is insane to require more
+accurate operations here than you would normally get), and for obvious reasons,
+I would like to see an understanding of the impossibility of doing this
+perfectly in a generic body (if either operand is of a generic formal type, as
+would be the case here).
+
+A third thought: accuracy requirements in general are given only in Annex G. If
+we did that here, I would be less concerned (not everyone supports Annex G). It
+seems bizarre to give accuracy requirements here when there are none at all on
+anything else in the core, and I don't believe there are any on Text_IO or
+'Value anywhere.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Sunday, December  9, 2018  10:09 PM
+
+> ...
+> A third thought: accuracy requirements in general are given only in Annex G.
+> If we did that here, I would be less concerned (not everyone supports
+> Annex G). It seems bizarre to give accuracy requirements here when
+> there are none at all on anything else in the core, and I don't
+> believe there are any on Text_IO or 'Value anywhere.
+
+Yes, that makes sense.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Sunday, December  9, 2018  10:18 PM
+
+> I would suggest we say that the storage shall not be "lost,"
+> but allow it to be used for caching or other purposes.  Based on the
+> fact that our commercial CodePeer product has been using Bob's
+> implementation of big nums for the past fifteen years, I don't agree
+> that this cannot be done efficiently.
+> Yes, you do need to worry about multi-tasking (and
+> async-transfer-of-control, for that matter), but I think we need to
+> give implementations some freedom, especially for the implementation
+> of a new package like this.  We can set limits, but we don't want to
+> preclude useful approaches out of the gate.
+
+Bob insisted that any efficient implementation would have to "slowly leak
+memory". That is not an acceptable implementation of anything language-defined
+in Ada. Many Ada programs have to run a long time, and "slowly leaking memory"
+means that it is guaranteed to run out of memory eventually. (I'm also dubious
+of the "slowly" leaking part; that would depend very much on what is being
+calculated, especially for rational numbers. But that's not important for this
+discussion.)
+
+I agree about not preventing useful approaches, but I think the existing wording
+captures this idea just fine. If there is a scenario where the implementation
+runs out of memory when there are few bignum values in existence, that
+implementation is wrong. Otherwise, the "requirement" is untestable anyway, so
+the exact implementation isn't relevant. That is, exactly where the memory is
+recovered isn't detectable or relevant, but it has to be recovered within some
+reasonable amount of time.
+
+Note that this is *exactly* like tampering, where the simple implementation can
+be slow, but there are a variety of better, more complicated implementations
+that aren't slow at all. The "fiasco" there is simply that implementers didn't
+put in the effort needed to do a good job. And users blamed a number of
+self-inflicted problems on the implementation because they could. Are either of
+those really the language definition's fault?
+
+---
+
+BTW, if we're worried about "not precluding useful approaches", then we
+shouldn't be specifying that the Big_Rational representation always has a
+Greatest_Common_Divisor of 1. That requires an expensive reduction operation
+after every individual math operation (even "+" in the case where the
+denominators are different), and I don't really see the point of the
+requirement. If we're going to leave it up to the implementation to manage
+memory, then it should also be up to the implementation as to what approaches to
+apply to reduce the use of memory (which is the only obvious value of the GCD
+requirement).
+
+Indeed, I don't remember any cheap normalization algorithm to meet that
+requirement; only the binary normalization (which is discarding trailing zero
+bits) is cheap. Otherwise, one has to do various trial divides, which definitely
+aren't cheap (especially if large values are involved, as they often are after
+multiplication).
+
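+[Editor's sketch: the reduction step that the GCD-of-1 requirement
+implies after each operation, written against the AI's Big_Integers
+package; whether that package's GCD function ends up with this exact
+name and profile was still settling at this point.]
+
+with Ada.Numerics.Big_Numbers.Big_Integers;
+use  Ada.Numerics.Big_Numbers.Big_Integers;
+
+--  Put Num/Den into lowest terms; assumes Num /= 0 and Den > 0.
+procedure Normalize (Num, Den : in out Big_Integer) is
+   G : constant Big_Integer := Greatest_Common_Divisor (Num, Den);
+begin
+   Num := Num / G;
+   Den := Den / G;
+end Normalize;
+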
+****************************************************************
+
+From: Randy Brukardt
+Sent: Sunday, December  9, 2018  10:28 PM
+
+> > I note that we discussed these names in Lexington and decided on
+> > these
+> > exactly:
+> >
+> >   Tucker suggests that the names would be Big_Integer and Valid_Big_Integer.
+> > This seems consistent with existing languages. Bob says he can live
+> > with that (and he will complain about it).
+>
+> Are you sure that's what I meant?  I think maybe I was advocating
+> having no default initial value, like regular integer types, as
+> opposed to commenting on the names.  Not sure.
+
+I of course have no idea what you meant as opposed to what you said. But I
+recorded as exactly as I could what you said -- and I am quite sure of that
+because of the parenthetical remark. There is no way in heck that I would have
+invented that and I would have been very careful to attribute it correctly.
+
+At this point, I have no remaining memory of the discussion of the names, other
+(perhaps) that it was short. I think you were the only one to say anything about
+them, so perhaps the rest of the group had dozed off by then. :-)
+
+I personally don't care that much about the names, but I do care about churning
+them all of the time. And I also worry that if you make the valid form
+Big_Integer, then:
+    A : Big_Integer;
+raises an exception. If that's not the case, then I have a major problem IMHO
+(because the name is lying about the validity).
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Sunday, December  9, 2018  10:32 PM
+
+> > I believe there is a well documented mechanism for doing this
+> > properly.  AdaMagic does the right thing here.
+>
+> Is this needed in Ada anyway, for converting universal_integer to misc
+> fixed-point types?
+
+The Ada compiler can use the binary search algorithm, since the performance of
+the conversion is nearly irrelevant (there aren't going to be programs with
+millions of fixed literals). Such an algorithm would be very slow, since it
+would need to do a bunch of bigrat operations for each bit of the result (in
+general). This routine can't use that sort of algorithm, since there very well
+might be millions of conversions to sort data or the like.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Thursday, December 13, 2018   4:49 PM
+
+> > Tucker, please name these existing languages.
+>
+> I really don't remember what I meant by this, if I really said it.
+
+Randy keeps pretty accurate notes, so I'm guessing you said it.
+But I think Randy is misunderstanding what you and I meant.
+It wasn't (I think) about the names of the subtypes.
+
+My memory is:
+
+    - An earlier version had default-init to 0.
+
+    - I objected to that (and still do).
+      I think you agree.
+
+    - You suggested default-init to an invalid value,
+      and run-time checks that you don't pass it to
+      most ops.
+
+    - I said I can live with that, but I don't like it,
+      and will complain in the future.  However, I have
+      changed my mind -- I now think the invalid value,
+      with run-time checks, is the right answer, especially
+      if we have the ability to suppress.
+
+> What I would say today (and perhaps what I meant then) is that it is
+> consistent with the existing *language* (i.e. "normal" integer types
+> in Ada), which do not guarantee they have a valid value unless you
+> initialize them.
+
+OK, that's reasonable, but the current sub-sub-sub-discussion[*] is about the
+names of the subtypes.  I advocate changing the names:
+
+    Big_Integer --> Optional_Big_Integer
+    Valid_Big_Integer --> Big_Integer
+
+Can you go along with that?
+
+It really looks silly to me to have Valid_Big_Integer, but Big_Natural and
+Big_Positive (which are also guaranteed valid).
+
+And the guaranteed-valid[**] ones will be most commonly used, so one wants the
+shorter name for that.
+
+And, in fact, the invalid value is NOT an integer at all, big or small.  So
+Big_Integer is a wrong name for the one that allows invalid.
+
+[*] Apologies to the anti-hyphenation[***] police, Gary.
+
+[***] Har-har.
+
+[**], [***] No apology necessary!
+
+****************************************************************
+
+From: Bob Duff
+Sent: Thursday, December 13, 2018   4:50 PM
+
+> I of course have no idea what you meant as opposed to what you said.
+> But I recorded as exactly as I could what you said -- and I am quite
+> sure of that because of the parenthetical remark. There is no way in
+> heck that I would have invented that and I would have been very
+> careful to attribute it correctly.
+
+Yes, I have great respect for your ability to take accurate notes of what was
+said.  I just think that perhaps you misunderstood what I meant at the time, or
+are taking it out of context. I don't think I was arguing about the subtype
+names, but about the semantics (and have changed my mind on that). Now I'm just
+arguing about the subtype names.  ;-)
+
+> I personally don't care that much about the names, but I do care about
+> churning them all of the time.
+
+Agreed.  I don't see any churn here.  Tucker came up with names (for a new
+concept -- invalid big ints) on the fly during a meeting, and I'm now objecting
+to those names.  If Tucker and others are happy with my suggestions, then we're
+done.
+
+>...And I also worry that if you make the valid  form Big_Integer, then:
+>     A : Big_Integer;
+> raises an exception. If that's not the case, then I have a major
+>problem  IMHO (because the name is lying about the validity).
+
+With my suggested naming, the above raises an exception.
+
+With or without my suggested naming, this raises an exception:
+
+    B : Big_Positive;
+
+That seems consistent and good.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, December 17, 2018   3:21 PM
+
+> OK, that's reasonable, but the current sub-sub-sub-discussion[*]
+> is about the names of the subtypes.  I advocate changing the
+> names:
+>
+>   Big_Integer --> Optional_Big_Integer
+>   Valid_Big_Integer --> Big_Integer
+>
+> Can you go along with that?
+>
+> It really looks silly to me to have Valid_Big_Integer,
+> but Big_Natural and Big_Positive (which are also guaranteed
+> valid).
+
+Interestingly for "small-nums,"  "X : Positive;" and "Y : Natural;" are
+allowed (and not a bounded error), but make no guarantees about validity.
+On the other hand, when used as a subprogram parameter or a function result,
+it would clearly be an error to pass or return an uninitialized value of an
+Integer type, or any subtype of Integer.
+
+It is not possible to match this behavior with big nums, unless we leave them
+completely uninitialized, since predicates are applied to stand-alone
+composite objects even if they have no explicit initialization, if any
+components of the composite type are default initialized.
+
+So we should probably focus on usability.  Clearly we want all parameters and
+results to be valid by default, so that argues for the shorter names to ensure
+validity.  It will still be annoying that stand-alone variables have to be
+initialized or you have to use some longer name.  But I suppose with
+if-expressions and case-expressions, you almost never need to declare an
+uninitialized variable any more.
+
+I suppose I could get behind having "Optional_Big_Integer,"
+"Optional_Big_Positive," and "Optional_Big_Natural." These would have to have
+predicates that allow "Invalid" plus whatever further value restriction they
+have.
+
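+[Editor's sketch: one way such a predicate might read; not the AI's
+final wording.]
+
+   subtype Optional_Big_Natural is Optional_Big_Integer
+     with Dynamic_Predicate =>
+       (if Is_Valid (Optional_Big_Natural)
+        then Optional_Big_Natural >= 0);
+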
+> And the guaranteed-valid[**] ones will be most commonly used,
+> so one wants the shorter name for that.
+>
+> And, in fact, the invalid value is NOT an integer at all,
+> big or small.  So Big_Integer is a wrong name for the one
+> that allows invalid.
+
+I guess I am convinced.  But I would think we would want
+Optional_Big_{Positive,Natural} if we have Optional_Big_Integer.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Monday, December 17, 2018   3:41 PM
+
+> I suppose I could get behind having "Optional_Big_Integer,"
+> "Optional_Big_Positive," and "Optional_Big_Natural." These would have
+> to have predicates that allow "Invalid" plus whatever further value
+> restriction they have.
+
+OK, that's even better than what I suggested.
+
+    ...with Pre =>
+        (if Is_Valid (Optional_Big_Positive) then Optional_Big_Positive >= 1)
+
+****************************************************************
+
