CVS difference for ai12s/ai12-0348-1.txt

Differences between 1.5 and version 1.6
Log of other versions for file ai12s/ai12-0348-1.txt

--- ai12s/ai12-0348-1.txt	2020/01/30 01:56:33	1.5
+++ ai12s/ai12-0348-1.txt	2020/02/20 05:16:11	1.6
@@ -1054,3 +1054,143 @@
 to allow a compiler to raise Program_Error in this case.
 
 ****************************************************************
+
+From: Brad Moore
+Sent: Friday, January 31, 2020  12:31 AM
+
+I see your point, but I think a program error would be welcomed if the 
+answer is wrong enough, i.e. once enough error has accumulated.
+
+For example, if the most significant digit or the exponent value differs from 
+what you'd get using higher precision calculation.
+
+While such errors can be difficult to detect, I don't think they are 
+impossible to detect. And the problem is not necessarily related to 
+parallelism: you can get answers just as wrong using sequential calculations.
+
+I can imagine there being an error tolerance threshold (possibly user 
+configurable) for the compiler and a floating point verification mode where 
+the same calculations are done (possibly in parallel) using two levels of 
+precision (the user specified precision, and some higher level of precision). 
+If the most significant digit differs between the two results, a program error 
+could be raised.
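+
For concreteness, the check such a verification mode might perform can be
sketched outside Ada. Everything below (the names, the use of Python/NumPy,
the float16/float64 pairing) is illustrative only, not proposed language
semantics:

```python
# Hypothetical "verification mode" check: run the same reduction at the
# user-specified precision and at a higher precision, and complain when the
# most significant digit or the decimal exponent of the results differ.
import math
import numpy as np

def msd_and_exponent(x: float) -> tuple[int, int]:
    """Most significant decimal digit and decimal exponent of x > 0."""
    e = math.floor(math.log10(x))
    return int(x / 10.0 ** e), e

def checked_sum(values, low=np.float32, high=np.float64):
    lo_acc, hi_acc = low(0.0), high(0.0)
    for v in values:
        lo_acc = low(lo_acc + low(v))    # user-specified precision
        hi_acc = high(hi_acc + high(v))  # shadow computation
    if msd_and_exponent(float(lo_acc)) != msd_and_exponent(float(hi_acc)):
        # Where an Ada compiler might raise Program_Error in this mode.
        raise ArithmeticError("insufficient precision: results disagree")
    return float(lo_acc)

# A float16 accumulator stalls at 2048 (its largest consecutive integer),
# so summing 1.0 a hundred thousand times trips the check:
try:
    checked_sum([1.0] * 100_000, low=np.float16, high=np.float64)
except ArithmeticError as err:
    print(err)
```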
+
+Perhaps another approach might be to keep track of the total digits of 
+precision thrown away, such as when adding many small floating point values 
+to a single large value.
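+
One existing technique in exactly this spirit is compensated (Kahan)
summation, whose correction term is precisely the low-order information a
plain accumulation discards. A sketch, with NumPy's float32 standing in for
a 32-bit Ada float type:

```python
# Kahan (compensated) summation: "c" carries the low-order bits that the
# addition s + y would otherwise throw away, so it both improves the result
# and measures the error a naive sum silently discards.
import numpy as np

def naive_sum(values, dtype=np.float32):
    s = dtype(0.0)
    for v in values:
        s = dtype(s + dtype(v))
    return float(s)

def kahan_sum(values, dtype=np.float32):
    s = dtype(0.0)
    c = dtype(0.0)                   # running compensation (lost low bits)
    for v in values:
        y = dtype(dtype(v) - c)      # re-inject the error withheld last round
        t = dtype(s + y)             # big + small: low bits of y are lost here
        c = dtype(dtype(t - s) - y)  # recover exactly what was lost
        s = t
    return float(s)

vals = [0.1] * 100_000               # exact decimal sum: 10_000
print(naive_sum(vals))               # drifts noticeably in float32
print(kahan_sum(vals))               # stays near 10_000
```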
+
+Other (normal) compilation modes or other compilers wouldn't have to detect 
+the error.
+
+For example, if I sequentially calculate the sum of integer values cast to 
+32 bit floating point from 1.0 to 100_000_000, I get a result of:
+    2_251_799_813_685_248.000
+
+If I didn't know there was an issue, I might assume that is the correct 
+answer, with possibly disastrous consequences depending on what the number 
+was being used for.
+
+But if I use 128 bit floating point, the result is:
+    5_000_000_050_000_000.000
+
+which is the correct answer, and more than double the first result.
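+
An editorial aside: the quoted 32-bit result is not random noise but exactly
2**51, and both numbers can be checked without running the hundred-million-
iteration loop:

```python
import numpy as np

# A float32 significand has 24 bits, so 2**24 is its last consecutive
# integer: adding 1.0 to it is absorbed entirely by rounding.
a = np.float32(2 ** 24)                      # 16_777_216.0
assert np.float32(a + np.float32(1.0)) == a

# The quoted 32-bit result is exactly 2**51. At that magnitude the float32
# spacing is 2**(51 - 23) = 2**28, about 2.7e8, so every remaining addend
# (at most 1e8, i.e. less than half a spacing) rounds away to nothing and
# the running sum can never grow past it.
assert 2 ** 51 == 2_251_799_813_685_248

# The correct answer follows from the closed form n*(n+1)/2:
n = 100_000_000
assert n * (n + 1) // 2 == 5_000_000_050_000_000
```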
+
+It'd be nice, while executing my 32 bit float program in a floating point 
+verification mode, to be told that there is a program error due to 
+insufficient precision and too much error accumulation when the answers I am 
+getting are wildly incorrect. Changing my code to use, say, a 64 bit float 
+type instead might be sufficient to address the program error.
+
+OpenMP might call it a numerical analysis problem, and that is OK for use in 
+languages such as C. But in Ada, with its greater focus on safety-critical 
+and high-integrity systems, couldn't treating this as a bounded error be 
+considered another notch of safety that Ada has over other languages?
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Friday, January 31, 2020  1:07 AM
+
+> I see your point, but I think a program error would be welcomed if the 
+> answer is wrong enough, i.e. once enough error has accumulated. ...
+
+I would be surprised that an occasional data-dependent Program_Error would 
+be "welcomed."  Some kind of compile-time warning might be reasonable, but 
+a run-time exception that is raised now and then sounds like a bit of a 
+nightmare.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Friday, January 31, 2020  1:57 PM
+
+> I see your point, but I think a program error would be welcomed if the 
+> answer is wrong enough, i.e. once enough error has accumulated.
+
+Sure, but this has exactly nothing to do with reductions (sequential or 
+parallel). It's a consequence of the model of floating point math, and it 
+surely doesn't make any sense to single out this particular case.
+
+The interesting point brought up by your messages is that it's possible that 
+an over-literal compiler implementer might interpret the "equality" as having
+something to do with the (relatively useless) "=" operator for floating point 
+types.
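+
A one-liner illustrates why a literal reading of floating point "equality" is
so treacherous (a Python sketch; the same effect occurs with the predefined
"=" on any binary floating point type):

```python
# Mathematically equal expressions need not compare equal after rounding,
# which is why bitwise equality on floating point results is nearly useless.
print(0.1 + 0.2 == 0.3)          # False
print(abs((0.1 + 0.2) - 0.3))    # about 5.6e-17, one part in ~5e15
```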
+
+...
+> While such errors can be difficult to detect, I don't think it is 
+> impossible to detect.
+> And the problem is not necessarily related to parallelism. 
+> You can get answers just as wrong using sequential calculations.
+
+Exactly. If you wanted to detect these errors, you could use interval math.
+If the final interval is too wide, the answer is nonsense. All you would need 
+is an interval math package to use instead of the basic floating point.
+I know the 8087 (the floating point instruction set still available on all 
+modern x86 processors) has modes to make implementing that easier (round-down 
+and round-up rounding modes), and I would guess those came from IEEE 754 
+(pretty much everything else did).
+
+Since Ada has operator overloading, it would be pretty easy to drop in such 
+a thing (of course, attributes would have to be replaced by function calls; 
+how bad that is depends on the number of such calls).
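+
A minimal sketch of such a drop-in, assuming nothing beyond operator
overloading. Here math.nextafter fakes the outward rounding; a real package
would set the round-down/round-up modes directly:

```python
# A tiny interval-arithmetic type via operator overloading. Each operation
# widens the interval outward by one ulp per bound, so the true real-valued
# result always lies inside, and a too-wide final interval flags that the
# answer has lost its significance.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    @property
    def width(self) -> float:
        return self.hi - self.lo

def exact(x: float) -> Interval:
    return Interval(x, x)

acc = exact(0.0)
for _ in range(1_000):
    acc = acc + exact(0.1)

# The final interval still brackets the true sum; its width bounds the
# accumulated rounding error and could be tested against a tolerance.
print(acc.lo <= 100.0 <= acc.hi, acc.width)
```

Multiplication, division, and the elementary functions follow the same
pattern, each computing outward-rounded bounds from the operand endpoints.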
+
+It doesn't seem necessary to standardize such a thing, certainly not if there 
+isn't any demand from actual Ada users.
+
+****************************************************************
+
+From: Jean-Pierre Rosen
+Sent: Friday, January 31, 2020  2:47 AM
+
+> Exactly. If you wanted to detect these errors, you could use interval math.
+
+CADNA (http://cadna.lip6.fr/) is the tool you need. It was initially 
+developed for Ada with operator overloading, but unfortunately it is now 
+available only for C/C++/Fortran...
+
+****************************************************************
+
+From: Tullio Vardanega
+Sent: Friday, January 31, 2020  3:41 AM
+
+...
+> I would be surprised that an occasional data-dependent Program_Error would 
+> be "welcomed."  Some kind of compile-time warning might be reasonable, but 
+> a run-time exception that is raised now and then sounds like a bit of a 
+> nightmare.
+
+I am firmly with Tuck on this view, viz. a compile-time warning instead of 
+a program error.
+
+****************************************************************
+
+From: Arnaud Charlet
+Sent: Friday, January 31, 2020  3:45 AM
+
+Agreed. And as Randy said, the issue is much broader: it concerns the 
+semantics and dangers of floating point, and would be better handled by a 
+general purpose library (with e.g. operator overloading) rather than via a 
+special case.
+
+****************************************************************

Questions? Ask the ACAA Technical Agent