!standard 4.09 (29)                                01-02-23  AI95-00263/01
!class binding interpretation 01-02-22
!status work item 01-02-22
!status received 01-02-22
!qualifier Omission
!priority Medium
!difficulty Medium
!subject Scalar formal derived types are never static

!summary

A formal derived type whose ancestor type is scalar is never a static
subtype.

!question

If it is possible for a static expression in a generic to have a different
value than the corresponding expression in some instance of the generic,
then there is a language design problem. This principle is violated for
scalar generic formal derived types. Consider:

   procedure Foo is
      type E is (Aa, Bb, Cc);

      generic
         type D is new E'Base;
      package G is
         Last_D : constant := D'Pos (D'Last);
      end G;

      subtype S is E range Bb .. Bb;

      package I is new G (S);
      package J is new G (E);
   begin
      null;
   end Foo;

I.Last_D = 1, while J.Last_D = 2. The value of Last_D is static, but
depends on the instantiation.

Note that the principle can be violated even when the derived type is
constrained. For instance:

   procedure Fooey is
      generic
         type D is new Integer;
      package G is
         D_Alignment : constant := D'Alignment;
      end G;

      type Byte_Aligned_Integer is new Integer;
      for Byte_Aligned_Integer'Alignment use 1;

      package I is new G (Byte_Aligned_Integer);
      package J is new G (Integer);
   begin
      null;
   end Fooey;

I.D_Alignment = 1, while J.D_Alignment probably equals 4. Again, the value
of a constant is static, but depends on the instantiation.

Is this an oversight? (Yes.)

!recommendation

(See summary.)

!wording

(See corrigendum.)

!discussion

An important design principle of Ada is that a static expression has the
same value in every instance of a generic. 4.9(26) ensures this by
declaring that descendants of formal scalar types are not static. However,
that rule does not cover formal derived types that happen to be scalar.
This was a clear oversight.

!corrigendum 4.9(26)

@drepl
A @i<static subtype> is either a @i<static scalar subtype> or a
@i<static string subtype>. A static scalar subtype is an unconstrained
scalar subtype whose type is not a descendant of a formal scalar type, or
a constrained scalar subtype formed by imposing a compatible static
constraint on a static scalar subtype. A static string subtype is an
unconstrained string subtype whose index subtype and component subtype are
static (and whose type is not a descendant of a formal array type), or a
constrained string subtype formed by imposing a compatible static
constraint on a static string subtype. In any case, the subtype of a
generic formal object of mode @b<in out>, and the result subtype of a
generic formal function, are not static.
@dby
A @i<static subtype> is either a @i<static scalar subtype> or a
@i<static string subtype>. A static scalar subtype is an unconstrained
scalar subtype whose type is not a descendant of a scalar formal type, or
a constrained scalar subtype formed by imposing a compatible static
constraint on a static scalar subtype. A static string subtype is an
unconstrained string subtype whose index subtype and component subtype are
static (and whose type is not a descendant of a formal array type), or a
constrained string subtype formed by imposing a compatible static
constraint on a static string subtype. In any case, the subtype of a
generic formal object of mode @b<in out>, and the result subtype of a
generic formal function, are not static.

!ACATS test

A B-test for this should be created.

!appendix

From: Randy Brukardt
Sent: Monday, February 19, 2001 2:31 PM

Steve Baird said:

> Making all scalar formal types non-static, rather than just
> "formal scalar types", would also solve some similar problems that
> have nothing to do with enumeration type extension.
> Perhaps that warrants a separate AI.

I tend to agree (although I couldn't think of an example off-hand). Do you
have an example of a problem that occurs without enumeration type
extension? (If you do, then we probably do need an AI on this, as it would
imply a bug in the language.)

****************************************************************

From: Baird, Steve
Sent: Tuesday, February 20, 2001 6:18 PM

If it is possible for a static expression in a generic to have a different
value than the corresponding expression in some instance of the generic,
then there is a language design problem. Agreed?

This example violates this rule:

   procedure Foo is
      type E is (Aa, Bb, Cc);

      generic
         type D is new E'Base;
      package G is
         Last_D : constant := D'Pos (D'Last);
      end G;

      subtype S is E range Bb .. Bb;

      package I is new G (S);
   begin
      null;
   end Foo;

but it isn't clear whether this problem should be fixed by changing
4.9(26) or 12.5.1(7-10).

This example also violates the rule:

   procedure Foo is
      generic
         type D is new Integer;
      package G is
         D_Alignment : constant := D'Alignment;
      end G;

      type Byte_Aligned_Integer is new Integer;
      for Byte_Aligned_Integer'Alignment use 1;

      package I is new G (Byte_Aligned_Integer);
   begin
      null;
   end Foo;

but it seems clear that this is a 4.9(26) problem.

For 12.5.1(7-10), it seems like it boils down to a decision about whether
something like

   generic
      type D is new Integer'Base;
   package G is
   end G;

   package I is new G (Natural);

should be accepted.

****************************************************************

From: Gary Dismukes
Sent: Friday, March 23, 2001 7:23 PM

This is a question about an interaction between static expressions and
generic instances. It came up as a result of a user of GNAT running into a
somewhat obscure warning involving an expression within a generic (in the
context of an instantiation).

The basic question comes down to whether the use of a formal object within
a generic instance is to be treated as static if the formal object is
associated with a static actual. (This could be broadened to include
static properties of other formals, but we'll limit this to formal objects
to focus the discussion.)

Consider the following examples:

-- Example 1:

procedure Ex_1 is

   generic
      B1 : in Boolean;
      B2 : in out Boolean;
   package Gen_1 is
      C1 : constant Boolean := B1;
      C2 : constant Boolean := B2;
   end Gen_1;

   B_True : constant Boolean := True;

   package New_Gen_1 is new Gen_1 (False, B_True);

begin
   case B_True is
      when New_Gen_1.C1 =>  -- Legal?
         null;
      when New_Gen_1.C2 =>  -- Legal?
         null;
   end case;
end Ex_1;

-- Example 2:

procedure Ex_2 is

   generic
      type T is range <>;
      One  : T;
      Zero : T := 0;
   package Gen_2 is
      Bang : T := One / Zero;
   end Gen_2;

   package New_Gen_2 is new Gen_2 (Integer, 1);
   -- Is this instantiation illegal, because of division by zero?

begin
   null;
end Ex_2;

-- End of examples --

The AARM gives the following general rule about instances:

13  The instance is a copy of the text of the template. [Each use of a
formal parameter becomes (in the copy) a use of the actual, as explained
below.] ...

and elaborates with some annotations:

...
13.c  We describe the substitution of generic actual parameters by saying
(in most cases) that the copy of each generic formal parameter declares a
view of the actual. Suppose a name in a generic unit denotes a
generic_formal_parameter_declaration. The copy of that name in an instance
will denote the copy of that generic_formal_parameter_declaration in the
instance.
Since the generic_formal_ parameter_declaration in the instance declares a view of the actual, the name will denote a view of the actual. 13.d Other properties of the copy (for example, staticness, classes to which types belong) are recalculated for each instance; this is implied by the fact that it's a copy. Looking specifically at the rules for formal objects, 12.4(10) says: 10 {stand-alone constant (corresponding to a formal object of mode in)} In an instance, a formal_object_declaration of mode in declares a new stand-alone constant object whose initialization expression is the actual, whereas a formal_object_declaration of mode in out declares a view whose properties are identical to those of the actual. The above paragraphs suggest that in an instance where the actual is a static expression (or name denoting a static constant in the case of mode in out), the name of a formal object denotes a constant initialized by that static expression (for mode in), or declares a view of a static constant (for mode in out). Certainly in the case of the in out formal, it would seem that the intent is that within an instance the name of this formal denotes a static constant ("has properties identical to those of the actual"). This interpretation would seem to make Example 1 legal and Example 2 illegal. However, the definition of a static constant specifies (4.9(24)): 24 {static (constant)} A static constant is a constant view declared by a full constant declaration or an object_renaming_declaration with a static nominal subtype, having a value defined by a static scalar expression or by a static string expression whose value has a length not exceeding the maximum length of a string_literal in the implementation. Given this definition, it would seem that the constant or view declared for a formal object in an instance could never be a static constant, since it is does not satisfy the (syntax-based) requirements for being a static constant. This would seem to imply that Example 1 is illegal and Example 2 is legal. GNAT currently follows the first interpretation (i.e., it treats uses of formals as static within an instance when the actuals are static), but there's some debate about whether this is correct. Regardless of the answer, I think it would be useful to have an ACATS test that checks this, to ensure that compilers implement a consistent interpretation. **************************************************************** From: Randy Brukardt Sent: Friday, March 23, 2001 7:51 PM This seems to be a similar question to the one asked in AI-00263 (Scalar formal derived types are never static), and it seems that we would probably want the same answer. (Of course, the answer in that AI is the view of a small number of people in an E-Mail discussion, so I wouldn't pay too much attention to the conclusions, yet.) That question came up in the context of the enumeration extension discussion, but it became clear that the question has nothing to do with enumeration extensions, and everything to do with a language design issue. The contention of the person who brought up this question is that the value of a static expression should not be different in different instantiations. If you buy his contention (which I do), then it is clear that staticness as in Gary's examples needs to be prohibited. OTOH, if you DON'T buy this "design principle", then probably Gary's examples are correct. In any case, I would think that this question and that one should be handled together. 
I'll add this mail and succeeding discussion to that AI.

> Regardless of the answer, I think it would be useful to have an
> ACATS test that checks this, to ensure that compilers implement
> a consistent interpretation.

From a pure testing perspective, I'd agree with you. But, as your boss
likes to say, the mere presence of a potential portability problem isn't
alone a good enough reason for a test -- there also has to be a
probability that users will bump into the problem. Since this issue has
stayed under the rug for the entire life of Ada 95, I can't get too
excited about its importance. (Of course, once an AI is approved, a test
objective will get added to the next ARG ballot, and we'll go from there.)

****************************************************************

From: Robert Dewar
Sent: Friday, March 23, 2001 7:30 PM

In the event that the answer to this is that the formal in the spec *is* a
static expression (I believe it isn't, because of para 24, but it's
arguable), we have to ask if the formal would be a static expression in
the body. Surely the answer must be no, or we have horrible
contract/shared generic problems.

But where in the RM does it provide a hint that the spec/body are
different in this particular respect?

****************************************************************

From: Tucker Taft
Sent: Friday, March 23, 2001 7:56 PM

Our compiler certainly treats formal objects as potentially static,
determined by the staticness of the actual. I think there is general
agreement that this is the "right" way to do things, and we have discussed
other AIs, I believe, which presumed this was true (e.g. the one about the
rules for formal package matching).

I presumed there already were some ACATS tests that checked the potential
staticness of formal objects, because it was certainly a lot of work to
get it right.

****************************************************************

From: Robert Dewar
Sent: Friday, March 23, 2001 8:34 PM

Does it do this in bodies as well? Because if it does, then you get
significant contract violations, n'est-ce-pas?

That's the way GNAT behaves: the expression is static in the spec of the
instance, and in the body it is considered non-static. For example, this
program is illegal:

   procedure m is

      generic
         zero : integer := 0;
      package x is
         vvv : integer := 1 / zero;
      end;

      package body x is
      end x;

      package rr is new x (0);

   begin
      null;
   end;

   m.adb:12:04: instantiation error at line 6
   m.adb:12:04: division by zero
   m.adb:12:04: static expression raises "constraint_error"

But the following program is legal:

   procedure m is

      generic
         zero : integer := 0;
      package x is
      end;

      package body x is
         vvv : integer := 1 / zero;
      end x;

      package rr is new x (0);

   begin
      null;
   end;

And generates the warnings:

   m.adb:12:04: warning: in instantiation at line 9
   m.adb:12:04: warning: division by zero
   m.adb:12:04: warning: "constraint_error" will be raised at run time

----------

We actually noticed this in GNAT as a result of the rule that requires
biased rounding of floating-point in static expressions (and of course we
provide proper unbiased rounding for non-static expressions, even if they
are folded at compile time). So we were getting a warning on this biased
rounding.

****************************************************************

From: Gary Dismukes
Sent: Friday, March 23, 2001 9:02 PM

Robert responded to Tuck:

> <<Our compiler certainly treats formal objects as potentially
> static, determined by the staticness of the actual.
I think > there is general agreement that this is the "right" way to > do things, and we have discussed other AIs, I believe, which > presumed this was true (e.g. the one about the rules for > formal package matching). > >> > > does it do this in bodies as well? Because if it does, then you > get significant contract violations, n'est-ce-pas? There's a rule that legality rules aren't rechecked in instance bodies, which I presume Intermetrics/Averstar compiler follows (I think there are some tests for that). > That's the way GNAT behaves, the expression is static in the spec instance, > and in the body, it is considered non-static. From a legality point of view GNAT is indeed not checking the expression in the body, but the warning about biased rounding was happening because the compiler was still treating the expression evaluation as static in the instance body, n'est-ce pas? **************************************************************** From: Robert Dewar Sent: Friday, March 23, 2001 9:40 PM That's a completely independent problem, nothing to do with the basic question at hand here. **************************************************************** From: Tucker Taft Sent: Friday, March 23, 2001 10:26 PM We do it everywhere, I believe, but don't do any legality checks during body instantiation. **************************************************************** From: Robert Dewar Sent: Saturday, March 24, 2001 3:33 AM But then surely you get things wrong in the body. in particular, let's address the case that brought this up in the first place. There is a (to us very annoying) rule that requires that floating-point rounding be done wrong for static expressions (i.e. in a biased manner). We have the option of generating a warning for this, but obviously you do not want to do such not-the-same-as-run-time-rounding unless you have to, so we do not want to do it in bodies. Also, is it really the case that you do no legality checks in the body? That's surely wrong for the cases on which we have agreed that contract violations are OK, notably for certain cases of pragma restrictions, and also as I remember some rep clause cases. **************************************************************** From: Pascal Leroy Sent: Monday, March 26, 2001 1:44 AM > generic > B1 : in Boolean; > B2 : in out Boolean; > package Gen_1 is > C1 : constant Boolean := B1; > C2 : constant Boolean := B2; > end Gen_1; > > B_True : constant Boolean := True; > > package New_Gen_1 is new Gen_1 (False, B_True); > > begin > case B_True is > when New_Gen_1.C1 => -- Legal? > null; > when New_Gen_1.C2 => -- Legal? > null; > end case; I think this example is a red herring. The instantiation is illegal because the actual corresponding to B2 has to be a variable (RM95 12.4(7)). So, happily enough, we don't have to argue about the staticness of C2... > procedure Ex_2 is > > generic > type T is range <>; > One : T; > Zero : T := 0; > package Gen_2 is > Bang : T := One / Zero; > end Gen_2; > > package New_Gen_2 is new Gen_2 (Integer, 1); > > -- Is this instantiation illegal, because of division by zero? Yes, obviously the instantiation is illegal because of division by zero, and One/Zero is static in the spec (this follows from RM95 12.4(10)). As Robert pointed out, the same expression in the body would not be static, and it would raise C_E at execution, and a warning would be user-friendly. This follows from RM95 12.3(11). 
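
[Editor's note: With the "in out" formal removed (the actual for B2 must
be a variable, as noted above), Example 1 reduces to the sketch below,
which isolates the question that matters: is the instance constant usable
as a static choice outside the generic? The unit names here are invented
for illustration only.]

   procedure Ex_1_Reduced is

      generic
         B1 : in Boolean;
      package Gen_1 is
         C1 : constant Boolean := B1;
      end Gen_1;

      package New_Gen_1 is new Gen_1 (False);

      B_True : constant Boolean := True;

   begin
      case B_True is
         when New_Gen_1.C1 =>   -- Legal only if C1 is static here
            null;
         when others =>
            null;
      end case;
   end Ex_1_Reduced;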
**************************************************************** From: Robert Dewar Sent: Monday, March 26, 2001 8:10 AM <> I really don't understand the obviously here. I don't understand why the occurrence of One is static according to the 4.9 rules, since I don't see any constant declaration. I understand that One is a constant, I just don't see the declaratoin required by the 4.9 rules. **************************************************************** From: Pascal Leroy Sent: Monday, March 26, 2001 8:21 AM > I really don't understand the obviously here. I don't understand why the > occurrence of One is static according to the 4.9 rules, since I don't > see any constant declaration. RM95 12.4(10): "In an instance, a formal_object_declaration of mode in declares a new stand-alone constant object whose initialization expression is the actual..." Therefore, One is a constant in every instantiation. Moreover, in the instantiations where the actual expression is static, One is a static constant. The same applies to Zero. Thus, in the instantiation in question One/Zero is a static expression. (It is static both in the body and in the spec of the instantiation, but the legality rules don't matter in the body.) I am not sure what we are arguing about. This seems pretty straightforward to me (at least in Ada 95; Ada 83 was muddled in this area). **************************************************************** From: Robert Dewar Sent: Monday, March 26, 2001 8:55 AM <> This says that the constant object is declared, but my reading of 4.9(24) says that the constant object must be declared by a full constant declaration -- I do not see this here. Note that the reason this is significant is because we are not just talking about whether static expressions dividing by zero cause warnings or errors, which is not particularly critical, but rather whether the annoying "wrong" compile time rounding rule (biased, always away from 0.0) applies to floating-point expressions. THat's why we care :-) **************************************************************** From: Pascal Leroy Sent: Monday, March 26, 2001 9:03 AM > This says that the constant object is declared, but my reading of > 4.9(24) says that the constant object must be declared by a full > constant declaration -- I do not see this here. > > Note that the reason this is significant is ... whether the annoying > "wrong" compile time rounding rule (biased, always away from 0.0) applies > to floating-point expressions. I have some sympathy for the notion of fixing the rounding rule (I assume that you want round-to-nearest even, right?). However, if that's what we want to do, let's do it by changing 4.9(38), not by dubious theological interpretation of the staticness of constants declared in an instance. **************************************************************** From: Robert Dewar Sent: Monday, March 26, 2001 9:17 AM <> Well fixing the rounding rule would be nice, but the question right now is where it applies, and where it doesn't. This is not theology, it is about what results you get (and in our case whether we issue the warning about unexpected rounding). You still did not answer the "theological" questions about 4.9(24), whose reading seems pretty straightforward to me. **************************************************************** From: Gary Dismukes Sent: Monday, March 26, 2001 1:54 PM Pascal wrote: > > I think this example is a red herring. 
The instantiation is illegal because > the actual corresponding to B2 has to be a variable (RM95 12.4(7)). So, > happily enough, we don't have to argue about the staticness of C2... Yes, the mention of in out parameters was a mistake, so only the in formal case is relevant (so the example is only half a red herring :). > Yes, obviously the instantiation is illegal because of division by zero, and > One/Zero is static in the spec (this follows from RM95 12.4(10)). That was my interpretation as well, but Robert takes issue based on the definition of static constant in 4.9(24) as explained in my first posting. (It's always been my understanding that these things are static in instance specs, but we don't have agreement on that and there may be a wording issue to resolve.) > As Robert pointed out, the same expression in the body would not be static, > and it would raise C_E at execution, and a warning would be user-friendly. > This follows from RM95 12.3(11). It's interesting that you say that in the body it would not be static, because in a subsequent message you stated (see last sentence): Pascal(2) wrote: > Therefore, One is a constant in every instantiation. Moreover, in the > instantiations where the actual expression is static, One is a static constant. > The same applies to Zero. Thus, in the instantiation in question One/Zero is a > static expression. (It is static both in the body and in the spec of the > instantiation, but the legality rules don't matter in the body.) I agree that the legality rules don't matter in the body. The question that raised this whole issue is whether the expression is static in the body, because if it is, then presumably static arithmetic rules apply, which means that it would seem rounding needs to be done according to the rules of 4.9(38). But presumably that can't possibly be intended because it would be a major hardship for shared generics. It seems though, that if an expression involving a generic formal parameter can be static in an instance spec (as I believe), that by 12.3(13) (and as explained by 12.3(13.d)) it can also be static in the instance body, which leads to this problem. So something seems to be broken here if this interpretation is correct. **************************************************************** From: Robert Dewar Sent: Monday, March 26, 2001 2:14 PM <> Indeed, I agree here. *if* it is the case that the expression is static in the instance spec, then I see no rule which would differentiate this from the instance body. Note that we are not just talking about rounding here, if the expression is static in the instance body, that requires full run-time multiple-precision arithmetic for the shared generic case. **************************************************************** From: Tucker Taft Sent: Monday, March 26, 2001 5:35 PM There seems to be agreement that a formal IN object initialized with a static actual should be: 1) static in the spec 2) non-static in the body I agree that the RM does not say this, and that we need an AI to appropriately "interpret" it this way. **************************************************************** From: Robert Dewar Sent: Monday, March 26, 2001 9:24 PM I agree this is an AI that is definitely needed. **************************************************************** From: Pascal Leroy Sent: Tuesday, March 27, 2001 3:12 AM > > As Robert > > pointed out, the same expression in the body would not be static, and it would > > raise C_E at execution, and a warning would be user-friendly. 
This follows from RM95 12.3(11).

> It's interesting that you say that in the body it would not be static,
> because in a subsequent message you stated (see last sentence):
>
> Pascal(2) wrote:
> > Therefore, One is a constant in every instantiation. Moreover, in the
> > instantiations where the actual expression is static, One is a static
> > constant. The same applies to Zero. Thus, in the instantiation in
> > question One/Zero is a static expression. (It is static both in the
> > body and in the spec of the instantiation, but the legality rules
> > don't matter in the body.)

Ok, so much for consistency. I agree that the RM should be fixed, and that
the constant should not be static in the body.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, March 28, 2001 4:43 PM

Before I try to write this up, I want to make sure that it is clear
exactly what we are talking about.

There are really 4 places to consider for a generic:
   1) The generic specification;
   2) The generic body;
   3) The instance specification; and
   4) The instance body.

For an implementation using generic sharing, there really isn't any such
thing as an "instance body". The generic body is used for all instances.
Thus, we want the rules to have the same effect for (2) the generic body
and (4) the instance body. For this case, it's clear that we want the same
answer.

OTOH, that isn't so critical for (1) and (3). Indeed, in this case, we
really want different answers. Otherwise, we have a "hidden" contract, and
generic sharing can be prohibitively expensive.

Indeed, the current wording of 4.9(24) is that way for a reason (well,
several reasons, only one of which matters to this discussion). That is,
we didn't want formal objects to be treated as static in the generic
specification. However, for Ada 83 compatibility, we did want these things
to be static in the instance specification.

What does this mean? The items might be static when used outside of the
generic, but never inside of the generic. Let me give an example (based on
Gary's):

   generic
      I1 : in Integer;
      I2 : in Integer;
   package A_Generic is
      subtype Short_Int is Integer range I1 .. I2;    -- (1)
      type Another_Int is range 0 .. (2**I2)-1;       -- (2)
      for Another_Int'Size use I2;                    -- (3)
      type Some_Record is record
         Len   : Boolean;
         Comp1 : Another_Int;
      end record;
      for Some_Record use record
         Comp1 at 0 range 0 .. I2-1;                  -- (4)
         Len   at 0 range I2 .. I2+7;
      end record;
   end A_Generic;

All of (2), (3), and (4) are illegal by 4.9(24), and this is intended. (1)
doesn't require a static item, and thus is legal. Otherwise, the
representation of Another_Int and Some_Record would be different for
different instantiations. The only way to implement these would be as if
they were generic formal derived parameters, and this would have the
effect of a 'hidden' contract. Moreover, the implementation of Some_Record
would be prohibitively expensive (as the components would have to be
located at runtime). (Luckily, generic formal derived untagged record
types are rare, so they *can* be that expensive.)

OTOH, uses of the objects declared in the instance *outside* of the
generic can be static. That was required by Ada 83 (and tested by a
controversial ACVC test), and thus must be true in order to keep
compatibility. For instance:

   package An_Instantiation is new A_Generic (I1 => 1, I2 => 16);

   Var : An_Instantiation.Short_Int;
   ...
   case Var is
      when An_Instantiation.Short_Int'First => ...        -- OK
      when An_Instantiation.Short_Int'First+1 ..
           An_Instantiation.Short_Int'Last => ...         -- OK
   end case; -- OK, no others required.

It's clear that the RM language does not provide the latter at all; items
derived from a generic object are never static, because a generic formal
or actual object can never meet the requirements of 4.9(24).

What we want is to preserve the properties mentioned above. 12.3(11)
should be helpful, since it says that Legality Rules are enforced in the
generic_declaration given the properties of the formals. The AARM also
notes that 4.9(24) is intended to exclude deferred constants. Clearly, we
also want to continue to exclude generic formal objects other than in an
instance.

As far as I know, the only non-legality rules that apply to static
expressions are those in 4.9(38). (12.3(11) seems to cover all of the
Legality Rules sufficiently, because they are not checked in the instance
body, which is what we need.)

So, it appears to me that what we need to do is modify 4.9(24) to
explicitly include generic formal objects of mode 'in' in an instance, and
modify 4.9(38) to exclude these as we do with formal scalar types.

For 4.9(24), I'd suggest wording like:

   A static constant is a constant view declared by a full constant
   declaration, an object_renaming_declaration, or a
   formal_object_declaration of mode 'in' in an instance with a static
   nominal subtype, ....

['in' would be in bold in the actual RM wording.]

For 4.9(38), I'm not quite sure what to say. We don't want this rule to
apply in instance bodies to any expression to which it does not apply in
the generic_body. Perhaps just saying that:

   If the static real expression occurs in an instance body, and is not a
   static expression in the generic_body, then no special rounding or
   truncating is required -- normal accuracy rules apply.

If we applied the above to the entire instance, then the last sentence of
4.9(38) is unnecessary (because a formal scalar type can never be static
in the generic_declaration or generic_body, only in an instance). Perhaps
we replace the last sentence of 4.9(38) with the above, and make this rule
apply in any part of an instance:

   If the static real expression occurs in an instance, and is not a
   static expression in the generic_declaration or generic_body, then no
   special rounding or truncating is required -- normal accuracy rules
   apply.

Or we could just repeal 4.9(38) altogether -- but that seems like a
different AI.

Does the above make sense? If so, I will update the write-up of AI-00263
to include this.

****************************************************************

From: Robert Dewar
Sent: Wednesday, March 28, 2001 10:20 PM

I must say on further reflection, I find it mighty odd that an expression
like x+0.1 is of course the same in the body and the spec if x (a formal
object) has a dynamic value, but if the x is static, then the expression
might be required to be different. In other words, there are values of x
for which the instantiation of the following generic is required to print
FAILED. That seems mighty odd to me!

   generic
      x : float;
   package y is
      z1 : float := x + 0.1;
   end y;

   with Text_IO; use Text_IO;
   package body y is
      z2 : float := x + 0.1;
   begin
      if z1 /= z2 then
         Put_Line ("FAILED");
      end if;
   end y;

This can happen on a machine with unbiased rounding at run time (the
normal IEEE default) where in the spec, the RM requires biased rounding.

So unless this anomaly is otherwise fixed, I would prefer that the
relevant expressions both be considered non-static (which is what I think
the RM says now anyway).
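
[Editor's note: The same spec/body question arises for the
exact-evaluation rules of 4.9, not just for the 4.9(38) rounding rule. The
sketch below (invented names, assuming a 32-bit Integer and no wider
evaluation under the 11.6 permissions) shows why treating such an
expression as static in a shared instance body would be costly: static
evaluation must be exact, while ordinary run-time evaluation of the same
text simply overflows.]

   procedure Precision_Demo is

      generic
         N : Integer;
      package G is
      end G;

      package body G is
         -- If this were static, 4.9(33) and following would require the
         -- exact value N, which needs a wide (or arbitrary-precision)
         -- intermediate product in a shared body. Evaluated at run time,
         -- the intermediate product typically overflows and raises
         -- Constraint_Error at elaboration of the instance.
         C : constant Integer := (N * 1_000_000) / 1_000_000;
      end G;

      package I is new G (N => 10_000);

   begin
      null;
   end Precision_Demo;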
**************************************************************** From: Randy Brukardt Sent: Wednesday, March 28, 2001 10:37 PM > So unless this anomoly is otherwise fixed, I would prefer that the relevant > expressions both be considered non-static (which is what I think the RM > says now anyway). This was my second alternative to the wording fix for 4.9(38), and this is a good reason for adopting that. Note that under no interpretation are these expressions ever static in the generic_declaration or generic_body; the only case where they might be static is in any instance. And the rounding and "infinite precision" issues are the only ones that might matter. Since code generation for the generic may occur at the point of the generic_body, we can't have rules that depend on the instantiation. I would argue that we don't want rules that give different answers in the generic_declaration/body and in the instance. Thus it makes the most sense to suspend any non-legality staticness rules inside the instance when the item in question is not static in the generic_declaration/body. But, we probably do want the "infinite precision" rules to apply to uses of static instance values (that are non-static in the generic_declaration/body) when they are used outside of the instance. (We can't just declare these things to be non-static because of the compatibility problem, even though that would be easiest.) What a mess! **************************************************************** From: Randy Brukardt Sent: Wednesday, March 28, 2001 10:44 PM I neglected to say in my previous message that Z1 can't be considered non-static, (even though that is what the RM says, and certainly would be easiest to deal with), because Ada 83's ACVC insisted that such things are static, at least when used outside of the generic. I'm pretty sure that test is still there (although I don't know the number off-hand). So, I don't think we have the easy option of declaring it non-static -- I think Ada compiler customers would be surprised and unhappy. (And then Robert would be arguing the other position of the same issue -- let's not go there!!) **************************************************************** From: Robert Dewar Sent: Wednesday, March 28, 2001 10:46 PM One possibility would be to eliminate the obnoxious rounding rule. I must say I never noticed this during review, and it seems quite horrible to me3 to require that static expressions get the wrong result. As for infinite precision, well you are always free to use infinite precision in evaluating expressions anway, because of 11.6 stuff (I must say I can't understand the new 11.6, but it must allow out of range intermediate values :-) Of course to *require* the infinite precision in the body would be disastrously wrong. **************************************************************** From: Robert Dewar Sent: Wednesday, March 28, 2001 10:52 PM <> I agree that we can't change current practice, and remember that both the Intermetrics front end (I call it that because that's what it used to be, and we can't keep renaming it :-) and GNAT currently treat the stuff in the spec as static, and in fact both play the trick of treating stuff in the body as static but not giving error messages (which mean that both get the rounding wrong in the body). This whole thing came up because in GNAT we introduced a warning when incorrect rounding is forced by the RM rules, and then were surprised when this message unexpectedly occurred in a generic body. 
**************************************************************** From: Pascal Leroy Sent: Thursday, March 29, 2001 2:21 AM > So unless this anomoly is otherwise fixed, I would prefer that the relevant > expressions both be considered non-static (which is what I think the RM > says now anyway). As Randy mentioned, this is absolutely unacceptable for compatibility reasons, especially since it has been that way since Ada 83 (as I recall this was muddled in the Ada 83 RM but was clarified by one of the first Ada 83 AIs). > One possibility would be to eliminate the obnoxious rounding rule. I must > say I never noticed this during review, and it seems quite horrible to me > to require that static expressions get the wrong result. There are two issues here: do we want rounding? and if we do, do we want biased or unbiased rounding? I assume we have a consensus that biased rounding was the wrong choice, the RM should either require unbiased rounding or left mid-point rounding implementation-defined. But I would even argue that rounding is obnoxious regardless of the mid-point issue. A few years ago we had a complaint from a customer who had code like: X := Y / 3#0.1#; Well, the divisor was not the literal 3#0.1#, it was some complicated universal expression which turned out to be exactly 1/3. In Ada 83 we transformed this into 3.0 * Y, which was both fast and accurate. In Ada 95 we have to round 3#0.1# to some obscure machine number, and then generate a division. This is both way slower and significantly less accurate... **************************************************************** From: Robert Dewar Sent: Thursday, March 29, 2001 5:09 AM <> Can you point to this AI, it would help inform the current discussion, and in some sense would be decisive on the issue, since we intend to be back compatible with both the Ada 83 RM and relevant AI's. **************************************************************** From: Robert Dewar Sent: Thursday, March 29, 2001 5:06 AM <> This is a very annoying and incorrect transformation for a compiler to make. You *definitely* do not want a compiler fiddling with floating-point expressions. The expressions 3.0 * Y Y / 3#0.1# are simply different, and have different semantics. It is a huge advantage of Ada that it stops compilers from fiddling around like this. I consider that this was quite inappropriate in Ada 83 as well (it is arguable whether or not it was allowed, I would say no, but in any case it is a good thing that in Ada 95 it is made clear that this kind of transformation is not allowed). We definitely DO want rounding, and it is important that it be properly defined how the rounding should work. I think it is reasonable to argue that the rounding should be done in a manner consistent with the run-time rounding mode. Either this should be required, or it should be at least permitted. The current rules which require it to be done wrong are very irritating (irritating in exactly the same kind of manner that the transformation that Pascal mentions is irritating). Yes, of course we know that a floating point multiplication is faster than a division. So does any competent programmer. Indeed if a programmer writes the unusual expression Y / 3#0.1# the *only* legitimate explanation is that it is VERY deliberately different from a multiplication by 3.0. Compilers should NOT assume that the programmer intended something different than what they wrote. Floating-point is NOT an easy domain, it is after all not even valid to replace A+B by B+A in IEEE arithmetic. 
Incidentally I would consider it quite acceptable to distinguish strict mode from relaxed mode here, and allow this kind of transformation in relaxed mode. So I would not oppose some language that says that the rounding rule applies only in strict mode. **************************************************************** From: Pascal Leroy Sent: Thursday, March 29, 2001 6:23 AM > Floating-point is NOT an easy domain, it is after all not even valid to > replace A+B by B+A in IEEE arithmetic. That statement surprises me (but then I have been surprised before). Can you give an example, just to enlighten me? > Can you point to this AI, it would help inform the current discussion, and > in some sense would be decisive on the issue, since we intend to be back > compatible with both the Ada 83 RM and relevant AI's. AI83-00001 makes a subtle-but-significant distinction between "denotes" and "declares". RM83 12.3(6-14) then uses "denotes" all over the place. The conclusion is that, outside the instance, an expression is static if all the names in this expression "denote" static things. (I must admit that 15 years later the logic seems a bit tenuous, but it was quite convincing at the time.) This was deemed important for the usability of generics. **************************************************************** From: Robert Dewar Sent: Thursday, March 29, 2001 6:39 AM <> A = +0.0 B = -0.0 A+B = +0.0 B+A = -0.0 gcc is VERY careful to preserve proper IEEE semantics for negative zeroes You should not do commutative optimizations on floating-point addition unless you can prove that the result is not subject to this special negative zero case (which is in general very hard to do). I am talking here about an implementation in which Signed_Zeroes is true, so that it conforms to IEC 559. Note that the Ada RM is quite explicit in this regard, see for example, the subtle note in (RM G.1.1(57)) which covers a case that is very easy to get wrong, it is very easy to assume that real+complex should be done by first supplying a zero imaginary component to the real and then doing a complex addition, but as this paragraph points out, this gives the wrong result for the negative zero case. This stuff has been very carefully considered in Ada, especially in Ada 95 (it was one of my special goals for Ada 95, following from the PhD thesis work of my student Sam Figueroa on floating-point semantics in high level languages, that Ada 95 be properly consistent with IEEE floating-point, and this goal was indeed achieved :-) Robert Going back to the original topic of the forced biased rounding, we put an unconditional warning in version 3.14a of GNAT to note that this was happening, and our experience is that a lot of customers ran into this warning and were puzzled, we tried to make it clear, but nothing in floating-point is clear to most programmers :-) The warning says: x.adb:245:27: warning: static expression does not round to even (RM 4.9(38)) after explaining this in detail to several customers, we decided to have this particular warning separately controlled and off by default, but the fact that the warning appears quite often means that we are not talking about some theoretical case that never happens. On the contrary this annoying rule requires a significant number of literals to be rounded incorrectly, and we find this quite worrisome. 
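
[Editor's note: A concrete instance of the kind of value being discussed,
as a sketch. It assumes IEEE single precision for Float (Machine_Mantissa
= 24, Machine_Rounds true) and default round-to-nearest-even arithmetic at
run time; the names are invented. The named number below is exactly
halfway between two machine numbers, so 4.9(38) requires the static
constant to round away from zero, while the same addition performed at run
time rounds to even on typical hardware.]

   with Ada.Text_IO; use Ada.Text_IO;
   procedure Halfway is
      Small : constant := 2.0 ** (-24);  -- exactly representable in Float
      Mid   : constant := 1.0 + Small;   -- exactly halfway between
                                         -- two machine numbers

      -- Static: 4.9(38) rounds the halfway value away from zero,
      -- yielding 1.0 + 2.0**(-23).
      Static_Val : constant Float := Mid;

      Dynamic_Val : Float := 1.0;
   begin
      -- Run time: default IEEE arithmetic rounds ties to even,
      -- typically yielding 1.0.
      Dynamic_Val := Dynamic_Val + Small;
      if Static_Val /= Dynamic_Val then
         Put_Line ("static and run-time rounding differ");
      end if;
   end Halfway;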
**************************************************************** From: Tucker Taft Sent: Thursday, March 29, 2001 11:18 AM Pascal Leroy wrote: > > One possibility would be to eliminate the obnoxious rounding rule. I must > > say I never noticed this during review, and it seems quite horrible to me > > to require that static expressions get the wrong result. > > There are two issues here: do we want rounding? and if we do, do we want > biased or unbiased rounding? The original goal was portability. That would argue for either: 1) Specify unbiased rounding, per IEEE (portability between machines) 2) Specify rounding per the target machine behavior (portability between implementations on the same machine). Clearly (2) provides somewhat less portability (though given the enormous preponderance of IEEE, that is more theoretical than actual), and (2) minimizes the difference between static and non-static (again, given the IEEE preponderance, that is pretty minor). I guess I would vote for (2), since whether to round or not is based on the 'Machine_Rounds attribute, so we are clearly trying to match that aspect of the target. Might as well go all the way. On the other hand, (2) is clearly more work because it requires additional target-specific tables in the front end to keep track of what sort of rounding the target machine performs. It seems that in any case, either (1) or (2) is better than the current "biased" rounding approach. I still believe the "biased" rounding approach for real => integer was the right decision, but in retrospect it seems like it was an unwise generalization of that decision to the floating point => floating point rounding case. > > I assume we have a consensus that biased rounding was the wrong choice, > the RM should either require unbiased rounding or left mid-point rounding > implementation-defined. I don't think implementation-defined is the right choice. It ought to be determined by the target characteristics, not compiler implementor whim, or be specified as always unbiased, in my view. > [discussion of tranforming X / (1.0/3.0) into X * 3.0]... I don't think we should allow for infinite precision transformations. That just seems too big a can of worms. **************************************************************** From: Randy Brukardt Sent: Thursday, March 29, 2001 6:02 PM > It seems that in any case, either (1) or (2) is better than the > current "biased" rounding approach. I certainly agree with this. > I still believe the "biased" rounding > approach for real => integer was the right decision, but in retrospect > it seems like it was an unwise generalization of that decision to > the floating point => floating point rounding case. Well, I disagree with the "biased" rounding for integers, at least as the default option. The problem is that it is very expensive to implement on Pentiums (the best I can do is about 100 times slower and 20 times larger than an "unbiased" conversion). I agree that it is sometimes convenient to have it defined that way, but when you don't care specifically about the rounding, you are paying a huge price from which there is no way to escape (because there is no way to avoid the type conversion which carries along the expense). It would have been much better to leave the "default" rounding (in type conversions to integers) implementation-defined, letting the users who care use the various attributes defined in A.5.3 to get the specific rounding they need. 
Moreover, type conversions look "cheap", and it is a surprise when they are not cheap. **************************************************************** From: Randy Brukardt Sent: Thursday, March 29, 2001 6:16 PM > One possibility would be to eliminate the obnoxious rounding rule. I must > say I never noticed this during review, and it seems quite horrible to me > to require that static expressions get the wrong result. That's probably the best solution to that part of the problem, but it doesn't solve the compatibility problem (the RM says that expressions derived from formal objects are not static, but clearly this is different from both Ada 83 and current practice). > As for infinite precision, well you are always free to use infinite > precision in evaluating expressions anyway, because of 11.6 stuff (I > must say I can't understand the new 11.6, but it must allow out of > range intermediate values :-) > > Of course to *require* the infinite precision in the body would be > disastrously wrong. Right, and that's the second problem. It is quite easy to tell if you are using infinite precision in the generic (just use a variation of the 0.3333....33 /= 16#0.1# check). We certainly don't want to require it in the body, and I would argue that you don't really want to require it in the spec, either. Otherwise, you have expressions that (potentially) give different answers in the spec and in the body of a generic. To me, at least, it make the most sense that these things are treated as static only outside of the instance. But this is not a major issue. Whether or not it is static in the generic spec, we're going to have the weird situation of a static expression (in the instance, at a minimum the instance body) to which 4.9(33) and the following do not apply. THAT is weird, and seems like it will be tough to word meaningfully. **************************************************************** From: Pascal Leroy Sent: Friday, March 30, 2001 2:32 AM > Well, I disagree with the "biased" rounding for integers, at least as the > default option. The problem is that it is very expensive to implement on > Pentiums (the best I can do is about 100 times slower and 20 times larger > than an "unbiased" conversion). I agree that it is sometimes convenient to > have it defined that way, but when you don't care specifically about the > rounding, you are paying a huge price from which there is no way to escape > (because there is no way to avoid the type conversion which carries along > the expense). I agree with this assessment. We have run into the same problem on a variety of processors. At some point we considered having a configuration pragma to mean I-don't-give-a-damn-about-mid-point-rounding, but we never had time to implement it. One practical situation where this shows up is the implementation of the elementary functions. You perform some FP computation, based on which you extract pre-computed values from a table. The index in the table is obtained by float-to-integer conversion. You really don't care what happens in the mid-point case, because the two table entries to which you can round are equally good (or equally bad), but you do care if the silly language requires 12 extra instructions to do the conversion. **************************************************************** From: Pascal Leroy Sent: Friday, March 30, 2001 2:37 AM > It is quite easy to tell if you are > using infinite precision in the generic (just use a variation of the > 0.3333....33 /= 16#0.1# check). 
We certainly don't want to require it in the > body, and I would argue that you don't really want to require it in the > spec, either. But, Randy, there is nothing specific to generics here. You can have very strange problems if for instance you modify a subtype bound so that it becomes non-static during maintenance. Static computations will become non-static, and they may yield vastly different results. But these are merely the pitfalls of using floats. The result of the above comparison might change based on details of the generated code, optimization level, etc., so regardless of staticness you probably don't want to do things like that. In general, you are in a state of sin as soon as you use literals that are not machine numbers (note that 'Machine helps a lot in Ada 95). **************************************************************** From: Robert Dewar Sent: Friday, March 30, 2001 5:13 AM No, I think the biased rounding of integers is too peculiar. This was frequently reported as a bug by users of Alsys Ada. The arguments for unbiased rounding just do not apply in this case in my view. And Randy's figure of a factor of 100 hit is way way way off on the ia32. **************************************************************** From: Erhard Ploedereder Sent: Friday, March 30, 2001 11:05 AM As a bit of input from left-field on rounding: I had a discussion two days ago about FP rounding with someone who seems to know what he is talking about. He told me of significant problems across all programming languages in actually exploiting the four modes of rounding available on IEEE-conformant FPs. (He also told me that the (co)author of the standard is kind of fuming that no programming language let's you get at this capability.) The capability is needed for most accurate calculations involving fractions of floating-point values. There is a good chance that we might get an ada-comment to the effect that access to those 4 modes be provided via pragmas. A 5. variant may well be: I don't care. I know this is not to the point in question, but it might trigger some thoughts. **************************************************************** From: Randy Brukardt Sent: Friday, March 30, 2001 12:20 PM > No, I think the biased rounding of integers is too peculiar. This was > frequently reported as a bug by users of Alsys Ada. The arguments for > unbiased rounding just do not apply in this case in my view. The problem is simply that sometimes you want fast, and sometimes you want predicable. And since there is a substantial cost difference between the two, you really want to be able to chose. But Ada 95 has made the choice for you: slow and predicable. That's not good for some applications. > And Randy's figure of a factor of 100 hit is way way way off > on the ia32. Well, its the best *I* can do. None of the predefined rounding modes come close to what you want, so you have to: Grab and save the current status mode; fiddle the bits to a known mode (I use chop, but the code is about the same for the other modes). Grab the sign of the value to round and save it somewhere. Take the absolute value of the value. Add 0.5 to the value. Round the value. Put the sign back on the value. Convert it to integer by storing it to memory. Restore the previous status mode. This is about 20 instructions; and the mode changes and the rounding in place are relatively expensive. This compares to: Convert it to integer by storing it to memory. which is faster than rounding in place. (Gosh knows why.) 
On the original Pentium, the code sequence takes about 150 clocks, while the straightforward store takes 5 clocks, so the ratio is more like 30 times slower. On newer Pentiums, the timings are quite dependent on the surrounding code, so it is hard to draw any conclusions, but it appears that the change would be about proportional. This code sequence is such a mess that we don't even bother with it in-line, and simply use the old software routine. Because it uses simple instructions, it generally is faster on most processors (haven't tested the timings on Pentium IIIs and IVs), but it is so large I always think of it of as 100 times slower, but it really isn't. Sorry about the mild exaggeration. **************************************************************** From: Robert Dewar Sent: Tuesday, April 03, 2001 7:00 AM <> That person should fume less, and politic more :-) Also this is not true, Apple Pascal with SANE is fully compliant So is Ada 95 with the proposed IEEE package (see thesis of Sam Figueroa, NYU 1999) **************************************************************** From: Robert Dewar Sent: Tuesday, April 03, 2001 7:06 AM <> fast and unpredictable (and unexpected) is not an acceptable design goal in Ada. If you have a need for this particular operation, you can always get it with a machine code insertion or an interfaced unit. **************************************************************** From: Randy Brukardt Sent: Tuesday, April 03, 2001 2:59 PM > < predicable. And since there is a substantial cost difference between the > two, you really want to be able to chose. But Ada 95 has made the choice for > you: slow and predicable. That's not good for some applications.>> > > fast and unpredictable (and unexpected) is not an acceptable design goal > in Ada. If you have a need for this particular operation, you can always > get it with a machine code insertion or an interfaced unit. Sigh. So we have another case where you have to use another programming language or otherwise make your code not portable in order to get performance matching that of most other programming languages. If you do care about how the rounding is done, you can always write: Integer(Float'Rounding(X)) (or one of the other attributes) to get exactly the rounding you need. But if you don't care (which is often the case, even in numerical software as Pascal pointed out), you have pay the penalty of biased rounding. Even if you write "Integer(Float'Unbiased_Rounding(X))" (which a clever compiler can make fast on the Intel processors), you have overspecification. And that could be dreadfully slow if you ever ported it to a machine that doesn't use Unbiased_Rounding as a native operation. Admittedly, the Ada 83 definition had problems, but that was because there was no portable work-around. Now that we have one, we didn't need to make the worst possible choice for the default case. Unfortunately, it probably is too late to undo the damage here. Even making this implementation-defined when not in strict mode probably would break some programs. For the record, Janus/Ada ignores 4.6(33) unless the compiler is running in "validation mode". I'll reconsider this if a user ever complains. (Hasn't happened yet.) Randy. **************************************************************** From: Tucker Taft Sent: Tuesday, April 03, 2001 6:01 PM Every other language I know of insists on truncation when converting float to int. 
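
[Editor's note: For reference, a small sketch of the three behaviors being
compared, using the A.5.3 attributes mentioned above. The values in the
comments assume the conversions are evaluated with the normal run-time
semantics of 4.6(33); the unit name is invented.]

   with Ada.Text_IO; use Ada.Text_IO;
   procedure Conversion_Demo is
      X : Float := 2.5;
   begin
      -- Ada's ordinary conversion rounds, ties away from zero: 3
      Put_Line (Integer'Image (Integer (X)));
      -- What most other languages do (truncate toward zero): 2
      Put_Line (Integer'Image (Integer (Float'Truncation (X))));
      -- Unbiased (ties-to-even) rounding, then an exact conversion: 2
      Put_Line (Integer'Image (Integer (Float'Unbiased_Rounding (X))));
   end Conversion_Demo;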
**************************************************************** From: Randy Brukardt Sent: Tuesday, April 03, 2001 6:24 PM Probably because there is no argument about how to truncate, like there is for rounding. **************************************************************** From: Robert Dewar Sent: Wednesday, April 04, 2001 5:05 AM <> If you want sloppy undefined semantics, it is not surprising that you have trouble doing this in Ada! In practice I think there are quite quick ways of doing what you want. Do you *really* have a program where the overall performance of the program is affected by this, or are you doing the typical compiler writer thing of focussing on a particular inefficiency without this data? :-) **************************************************************** From: Randy Brukardt Sent: Wednesday, April 04, 2001 5:51 PM > If you want sloppy undefined semantics, it is not surprising > that you have trouble doing this in Ada! . Of course, I only care that it is fast, and that the semantics are some sort of rounding; I don't need precisely defined semantics. > In practice I think there are quite quick ways of doing what you want. Do > you *really* have a program where the overall performance of the program is > affected by this, or are you doing the typical compiler writer thing of > focussing on a particular inefficiency without this data? :-) Well, the software version of Generic_Elementary_Functions (GEF) in Janus/Ada uses Float => Integer conversion in Sin, Cos, Tan, and "**". Anyone who has an inner loop containing any of those functions would notice the slowdown. The default version of GEF in Janus/Ada uses the software version. (I recall that there was some inaccuracies in the hardware versions of some of these routines.) I don't have a particular program using these functions in mind, but I'd be pretty surprised if no user of Janus/Ada has ever written such a program. OTOH, anyone really needed high performance probably would use the hardware version of the functions, so I can't say for certain any real user would be impacted. I never wanted to take the chance. Rewriting the GEF software might help, but that is not something that I'd particularly want to undertake. I remember how painful it was to analyze every conversion in the original Fortran source to see whether unbiased rounding was going to be a problem, or whether it did not matter to the results. I don't particularly want to revisit that. When you have carefully crafted numeric software, it is generally best to leave it alone. **************************************************************** From: Robert Dewar Sent: Thursday, April 05, 2001 2:49 PM < Integer conversion in Sin, Cos, Tan, and "**". Anyone who has an inner loop containing any of those functions would notice the slowdown. The default version of GEF in Janus/Ada uses the software version. (I recall that there was some inaccuracies in the hardware versions of some of these routines.)>> a) this is still hypothetical, I would be surprised if it is significant in practice. b) there is no possible reason on the x86 to have such an operation for trig functions, you should be using the reduction instruction to reduce the argument, and then the hardware operations from then on. Have a look at the GNAT sources which have specialized code for the x86 and you can see how it should be done. Remember that the IEEE standard requires a scaling instruction, so it is very unlikely that you EVER have to do this rounding operation. 
****************************************************************

From: Randy Brukardt
Sent: Thursday, April 05, 2001 5:52 PM

We have a hardware implementation for the X86 (got it out of the Intel manuals, then fixed as needed). It's just not the default implementation for GEF. As with everything, that was decided back in the days when floating point on Intel processors was an option, not something that was included on every processor. We still have and use a few machines here that don't support any floating point. (They should be curbed soon, though.)

Probably those defaults ought to be changed, but doing so probably would break a few customers' programs (and more importantly, compilation scripts). There are a lot of defaults in Janus/Ada that come from 16-bit MS-DOS and aren't really correct anymore. But I've never wanted to confuse everyone by having different defaults on different targets.

> Remember that the IEEE standard requires a scaling instruction, so it is
> very unlikely that you EVER have to do this rounding operation.

Well, you're assuming floating point hardware, and the "generic" version of Janus/Ada does not -- it provides its own floating point emulation code. And that code is very much simplified to rather basic requirements, and not strictly IEEE compliant.

In any case, I don't honestly know *what* those conversions are there for, and I'm not interested in messing with working numeric software without a very good reason.

****************************************************************

From: Robert Dewar
Sent: Thursday, April 05, 2001 5:55 PM

Either performance is important or it is not. Inadequate performance is not "working" in my book, but then you still don't have real data to say that this is a practical problem.

But in any case, in Ada 95 you definitely do NOT need to be doing rounding of this kind; please look at S'Remainder. So really the example we have here is code that was appropriate to Ada 83, but that is inappropriate for Ada 95, and a complaint that says "this does not work well in Ada 95 but I do not want to rewrite it". Not very convincing!

****************************************************************

From: Pascal Leroy
Sent: Friday, April 06, 2001 4:14 AM

> a) This is still hypothetical; I would be surprised if it is significant
> in practice.

On RISC machines, the effect of rounding on the performance of our implementation of the GEF was of the order of 10-20% (your mileage may vary, in particular depending on the architecture). Not huge, but significant. The reason is that the drudgery you have to perform to get the rounding right requires tests, which tend to flush the pipeline. All the rest of the GEF code is pretty much devoid of tests, and leads to very efficient scheduling/pipelining. Anyway, we are not going to change the language at this stage...

> b) There is no possible reason on the x86 to have such an operation for
> trig functions; you should be using the reduction instruction to reduce
> the argument, and then the hardware operations from then on. Have a look
> at the GNAT sources, which have specialized code for the x86, and you can
> see how it should be done.

There is no way that such an implementation will meet the requirements of Annex G, so a very good reason for avoiding it is if you want to adhere to strict mode. We carefully considered the hardware support provided by the x86 and decided not to use it, because the accuracy of most of these operations is appalling.
Granted, they are a bit faster than a software implementation, but not that much, because you can do an awful lot of mults/adds in the time it takes to run a single fcos instruction.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 4:38 AM

Actually, it is quite unfair to say that the results of the hardware operations are appalling. What's your reference for that? I assume you have the relevant Intel documentation. It is true that in 80-bit mode there are some problems, but we know of no accuracy issues for the hardware operations when used in 32- and 64-bit modes.

In any case, I am still surprised that any implementation of the GEF would depend on this kind of rounding, rather than the use of 'Remainder (I assume we are talking about argument reduction here?). Of course, Ada 83 code casually ported may indeed show this problem.

****************************************************************

From: Pascal Leroy
Sent: Friday, April 06, 2001 5:00 AM

Here is a test program which only uses Long_Float, which I believe is 64-bit for GNAT, right?

   with Ada.Numerics.Long_Elementary_Functions;
   with Ada.Text_Io;
   with System.Machine_Code;
   procedure Reduction is
      -- A value chosen to be really hard for argument reduction.
      Angle : constant Long_Float := 16#1.921FB54442D18#;
      -- The machine number nearest to the exact mathematical value.
      Exact_Cos : constant Long_Float := 16#4.69898CC51701C#E-14;
      Actual_Cos : constant Long_Float :=
         Ada.Numerics.Long_Elementary_Functions.Cos (Angle);
   begin
      Ada.Text_Io.Put
         (Long_Float'Image
            ((Actual_Cos / Exact_Cos - 1.0) / Long_Float'Model_Epsilon));
      Ada.Text_Io.New_Line;
   end Reduction;

When I execute this with GNAT 3.12p (probably an oldish version, btw), it prints:

   -1.48736395356916E+11

which is, well, larger than the upper bound 2.0 required by RM95 G.2.4(6). The root of the problem is that the x86 gives you a Pi with a 64-bit mantissa (corresponding to the 80-bit format), but to get the proper reduction over the range specified by G.2.4(10) you need to have about 110-120 bits for Pi.

> In any case, I am still surprised that any implementation of the GEF would
> depend on this kind of rounding, rather than the use of 'Remainder (I assume
> we are talking about argument reduction here?).

After the reduction algorithm, we need to have an integer value to look up some stuff in precomputed tables, so 'Remainder would not help much; we would still pay the price of a float-to-integer-conversion-with-rounding at some point. Moreover, we implement 'Remainder in software, so we don't use it in the GEF for obvious performance reasons (it's about 180 lines of code, including comments, so it must be slower than Randy's version :-).

> Of course, Ada 83 code casually ported may indeed show this problem.

This was all implemented from scratch for Ada 95.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 5:22 AM

It seems a real pity if no one is really implementing things like 'Remainder properly. Perhaps they should just be removed from the language? If they are implemented in these horrible software routines, they are much worse than useless. And note that I think GNAT has the same nasty problem at the moment, so I am not making some competitive statement here.

I have no idea if GNAT still shows the problems you mention; as you say, 3.12p is way out of date, and in any case that version did not come from us.
(We take no responsibility for any public versions of GNAT; we have no way of knowing whether they are the same as what we distributed or not.)

****************************************************************

From: Pascal Leroy
Sent: Friday, April 06, 2001 5:29 AM

Yes, we didn't pay for support ;-) This was not intended as a criticism of GNAT, btw, just as a demonstration that it is really hard to use the hardware support on the x86 to implement strict mode.

I had a quick glance at the code of GNAT, and it looks perfectly reasonable for a relaxed mode implementation. But I believe that in order to meet the strict mode requirements you would have to add a lot of software "glue" around the x86 instructions, and you would quickly reach a point where the glue is so costly that you are better off doing it all in software. At least, that's the conclusion that I came to...

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 5:32 AM

That may well be a correct conclusion. Of course, one has to wonder in practice whether there is any real code that can benefit from the extra accuracy requirements provided in Ada here. There was never any real cost-benefit analysis performed :-)

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 5:24 AM

Anyway, I filed that program as an internal bug report :-) So we will look at it and see if this is still problematical.

****************************************************************

From: Randy Brukardt
Sent: Friday, April 06, 2001 4:50 PM

> When I execute this with GNAT 3.12p (probably an oldish
> version, btw), it prints:
>
>    -1.48736395356916E+11

For grins, I ran this program on several of the Ada compilers I have around:

   GNAT 3.14a (using the options from the most recent ACT ACATR):
      -1.48736395356916E+11
   Janus/Ada 3.12 (note that Janus/Ada doesn't support strict mode,
   so this is rather irrelevant):
       9.53995173944739E+09
   ObjectAda 7.2:
      -1.48736395356916E+11

(I didn't try Rational Apex.)

Seems that either the program is wrong, or all of the compilers are.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 6:50 PM

The program may well be right, but it is a diabolical case. Note that Rational took a huge hit from VERY slow GEF at first, because they were very fanatical about getting exact results.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 6:54 PM

Incidentally, I realize my comment on Rational could be read as negative, but please don't take it that way. Indeed, the problem was that Rational was really VERY careful to guarantee what the RM says -- that's why I am inclined to believe Pascal has the program right (plus it looks right to me, and indeed runs fine on the Solaris port of GNAT).

At some point we should perhaps revisit the whole issue of GEF accuracy. Super-accuracy is dubious if it has too great a penalty, and if everyone uses relaxed mode, then we lose all control. But (Pascal can enlighten us here) I believe that Rational was able to greatly speed up their GEF implementation without sacrificing accuracy.
Actually, for us an important request from several of our customers is a super-relaxed mode that would be as fast as possible and reasonably accurate, but would not worry about error conditions, weird cycles, etc. :-)

****************************************************************

From: Tucker Taft
Sent: Saturday, April 07, 2001 6:25 AM

I ran this on our SPARC compiler (which uses ANSI C as its intermediate language) using an all-software implementation of Cosine (based on the work done at Argonne National Laboratory) and got an error ratio of 0.000000. The program seems to be correct. The all-software implementation uses a 90-digit representation of Pi/2 for argument reduction.

****************************************************************

From: Stephen Michell
Sent: Saturday, April 07, 2001 11:13 AM

I ran it on Rational yesterday. The result was exact to 15 digits.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 5:29 AM

By the way, if you find the behavior of the round-to-integer to be such a problem, why not just introduce a new attribute called Unbiased_Round or whatever, and use it instead? Certainly we cannot base the reasonable semantics of Float to Integer rounding on low-level requirements for the implementation of the GEF!

As I noted, the round-to-even behavior is just too strange and unfamiliar to be a reasonable part of the language, and to leave it implementation defined is not at all in harmony with the design and goals of Ada here. Users always regarded it as a bug that 2.5 rounded to 2 and 3.5 rounded to 4, and I find this a reasonable reaction to Ada 83 implementations that did this.

I would not mind a proposal for a new semi-standard attribute in this area; that would seem the most constructive outcome of this discussion.

> Anyway, we are not going to change the language at this stage...

Well, we can change the language by adding the attribute. That would be well-defined, and clearly useful, since we have two implementations saying that the availability of this new attribute would significantly improve performance for some code, and the addition of an attribute like this is a relatively simple task, both from the point of view of implementation and definition.

Pascal, does that seem reasonable to you?

****************************************************************

From: Pascal Leroy
Sent: Friday, April 06, 2001 5:36 AM

> By the way, if you find the behavior of the round-to-integer to be such
> a problem, why not just introduce a new attribute called Unbiased_Round
> or whatever, and use it instead?

Funny that you mention this, because I just came to the same conclusion this morning. When I wrote the GEF I considered adding a pragma to control rounding (and didn't do it), but it didn't cross my mind that an attribute would be the right solution.

> Certainly we cannot base the reasonable semantics of Float to Integer
> rounding on low-level requirements for the implementation of the GEF!

Agreed. Note that the implementation of the GEF would really prefer an attribute called Random_Round :-) which would use whatever rounding is fastest on a given architecture. It's often round-to-even, but not always. I have a vague recollection that HP was an oddball in that respect (as in many others).

> I would not mind a proposal for a new semi-standard attribute in this
> area; that would seem the most constructive outcome of this discussion.

Makes sense.

> Well, we can change the language by adding the attribute.
> That would be well-defined, and clearly useful, since we have two implementations
> saying that the availability of this new attribute would significantly
> improve performance for some code, and the addition of an attribute
> like this is a relatively simple task, both from the point of view
> of implementation and definition.
>
> Pascal, does that seem reasonable to you?

Absolutely.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 5:58 AM

OK, so why not call it 'Fast_Round, since that is really the intention here, and the documentation for it is something like:

   function S'Fast_Round (X : T) return Integer;

where the result is obtained by rounding X; the case of exactly halfway between two integers is handled in an implementation-defined manner, consistent with providing the fastest possible implementation.

By the way, why can't you get the result you want on most machines by using S'Unbiased_Rounding, and then converting the result to an integer (well, I guess the answer is this would require more smarts in the code generator than any of us are likely to have around :-) :-)

****************************************************************

From: Randy Brukardt
Sent: Friday, April 06, 2001 6:41 PM

Sounds like an idea; I wonder why I never thought of it.

Hmm, do we really want this to return type "Integer"? That seems mildly restrictive, and we always recommend avoiding the predefined types -- I hate to add another reason that you have to use them.

> By the way, why can't you get the result you want on most machines by
> using S'Unbiased_Rounding, and then converting the result to an integer
> (well, I guess the answer is this would require more smarts in the code
> generator than any of us are likely to have around :-) :-)

Well, I actually suggested this (and I think that Janus/Ada actually does this optimization - yes, see below). But such code is not very portable; if it ever gets moved to a machine that doesn't use Unbiased_Rounding, the performance would be terrible. For GEF, that doesn't much matter (it's probably tweaked for every target anyway), but it might be a problem for some apps.

Appendix: For Janus/Ada, the intermediate code generated for:

   A := Integer(Float'Unbiased_Rounding(F));

is:

   PSHF [F]
   FUNRND   <-- Unbiased rounding ('Unbiased_Rounding)
   FRND     <-- Biased rounding ('Rounding, inserted by the type conversion)
   CTYPI    <-- Checked type conversion to integer (rounding not specified, a la Ada 83)
   POPI [A]

And the optimizer notes that the FRND can have no effect, so it is eliminated. But you still get two conversion operations, because the code generator doesn't look for FUNRND | CTYPxx and eliminate it. And I suspect that the folks who don't use their own code generator would have even more trouble eliminating it.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 7:11 PM

<>

I suggested Integer because I think it has a better chance of being implemented efficiently, and if you are worried about rounding you are almost certainly in Integer range. Certainly the examples you chose fit this rule!

****************************************************************

From: Randy Brukardt
Sent: Friday, April 06, 2001 7:25 PM

True. I would have thought that having it return Universal_Integer would be better, although a bit more complex to implement.
You almost always know a real type for the expression -- I assume that there is a fastest conversion to any possible integer type supported by a compiler, even though that might not be all that fast! The only problem with that would be the weird cases where the actual type of the expression is Universal_Integer, but I believe that the standard says such things are evaluated in the largest possible integer type. I'd prefer the more flexible definition, but either is fine by me.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 8:25 PM

Yes, and that is nasty, because it means that unless you are very clever you will end up using 64-bit integers when you really want 32-bit integers, and generate lousy code after all. And you really do NOT want to create an Ada 83-style disincentive to implementing decent-length integers here.

****************************************************************

From: Randy Brukardt
Sent: Friday, April 06, 2001 9:34 PM

Of course not. But I think you will have to be clever anyway to do this right. (At least the IA will make it visible for those who don't.) For instance, Janus/Ada supports 8-, 16-, and 32-bit integers, and you really want to do all of them the same (that is, without extra conversions). Either way, you'll have to do some optimizations:

If the attribute returns Integer, and you're converting to a 16-bit integer:

   S : My_Short_Int;
   S := My_Short_Int(Float'Fast_Round (F));

In Janus/Ada, this would generate the following intermediate code (assuming no range check is needed):

   PSHF [F]
   CTYPI
   CTYPSI
   POPSI [S]

You'd need some optimizations to generate just the Pentium instructions:

   FLD  Dword Ptr [F]
   FIST Word Ptr [S]

You'd need to do these optimizations for any use of the attribute on any type other than Integer, since there would have to be an explicit type conversion. (And most uses would need a conversion, since hopefully most users aren't using type Integer in portable code.)

If the attribute returns Universal_Integer, and you're converting to a 16-bit integer:

   S := Float'Fast_Round (F);

In Janus/Ada, this would generate the following intermediate code (assuming no range check is needed):

   PSHF [F]
   CTYPLI
   CTYPSI
   POPSI [S]

As you can see, the intermediate code is nearly identical, and so are the optimizations needed. Of course, you could fail to do the optimizations and generate a 64-bit intermediate, but that seems unlikely in a performance-centered application.

Of course, what needs to be done would vary from target to target, and code generator to code generator, but it seems to me that you would need to be clever about just about every use of this attribute no matter what type it returned. So, I don't think that this is much of an argument either way for the return type of the attribute.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 11:19 AM

Note that this discussion got hijacked by a really completely irrelevant discussion of rounding to integer. Can we get it back to where it was, and ask if we can reconsider the requirement for biased rounding of floating-point constants (well, static expressions in general, but it is constants where it is most offensive!)?

****************************************************************

From: Tucker Taft
Sent: Friday, April 06, 2001 3:36 PM

I agree that static rounding should be specified as being determined by the target, rather than always away from zero.
This implies there should be an attribute, something like 'Machine_Uses_Unbiased_Rounding or equivalent. Perhaps this should be a specifiable attribute?

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 3:43 PM

I am dubious about going that far without taking on the entire IEEE rounding semantics, and in particular, let's be sure not to do anything that interferes with those semantics.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 3:47 PM

Basically, the rule should be that the rounding mode is the same as the default rounding mode at run-time. But that can only be IA, I think; the formal rule should be that it is implementation defined.

****************************************************************

From: Randy Brukardt
Sent: Friday, April 06, 2001 6:59 PM

> Note that this discussion got hijacked by a really completely irrelevant
> discussion of rounding to integer. Can we get it back to where it was, and
> ask if we can reconsider the requirement for biased rounding of floating-point
> constants (well, static expressions in general, but it is constants where
> it is most offensive!)?

Well, really the rounding-of-floating-point-constants discussion had hijacked the discussion of staticness for generic formal objects, and then that was hijacked again. Indeed, there are three issues that we've discussed:

1) Generic formal objects aren't static in the instance by the RM, although all (?) compilers implement them as such. And that is necessary for Ada 83 compatibility (AI83-00001). Care must be taken when fixing this to ensure that such objects are not required to be static in the body; otherwise we would be requiring static rounding and exact evaluation in shared generic bodies. Which led to the second discussion:

2) Do we really want the biased rounding of floating-point static expressions? It provides different answers than occur when the same expression is evaluated at run-time. That had been required by analogy to the real => integer conversion rule, which then led to:

3) Real => Integer conversions can be pretty expensive. How do we write fast conversions in Ada 95?

At this point, all of these discussions are attached to an existing AI on generic staticness. I don't think (2) and especially (3) are related enough to (1) and the other topic in AI-0263, so I think they will need their own AI(s). Note that changing (2) does not eliminate the need to handle (1), and vice versa.

****************************************************************

From: Robert Dewar
Sent: Friday, April 06, 2001 7:16 PM

Quite so.
The relation between (2) and (1) is simply that the GNAT warning for (2) dug up (1) :-)

****************************************************************
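
To make the 'Fast_Round proposal discussed above concrete: no such attribute exists in the language, and the sketch below is only an approximation a user could write today, not a proposed wording. The function name is invented, and the tie-breaking choice (to even, via 'Unbiased_Rounding) is exactly the part the proposal would leave implementation defined; as Randy notes, this is fast on x86 but may be slow on a target whose native rounding differs.

   -- A user-level stand-in for the proposed attribute (illustrative only).
   -- Assumes X is within Integer'Range; otherwise Constraint_Error is raised.
   function Fast_Round (X : Float) return Integer is
   begin
      -- Round to nearest, ties to even; matches the default x86 FPU mode,
      -- so a good code generator can turn this into a single rounded store.
      return Integer (Float'Unbiased_Rounding (X));
   end Fast_Round;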