CVS difference for ai12s/ai12-0119-1.txt

Differences between 1.7 and version 1.8
Log of other versions for file ai12s/ai12-0119-1.txt

--- ai12s/ai12-0119-1.txt	2017/08/31 04:10:03	1.7
+++ ai12s/ai12-0119-1.txt	2017/09/07 02:46:48	1.8
@@ -5038,3 +5038,92 @@
 How would you parallelize this loop??
 
 ****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, August 31, 2017  11:02 PM
+
+I do not believe we should eliminate parallel loop syntax.  The question that
+relates more directly to this example is whether we need special reduction
+syntax beyond the “map/reduce” construct.  This example argues that in cases
+where you have multiple accumulators, you might want some way of declaring
+them so that you get one per chunk.
+
+Alternatively, we could encourage the use of an accumulator “record” for this
+sort of computation.  E.g.:
+
+   type Accums is record
+       A_Sum, A_Sum_Sqr : Float;
+       B_Sum, B_Sum_Sqr : Float;
+       Prod_Sum, Prod_Sum_Sqr : Float;
+   end record;
+
+   function Update_Accums(Orig : Accums; A_Elem, B_Elem : Float) return Accums is
+   begin
+       return (A_Sum        => Orig.A_Sum + A_Elem,
+               A_Sum_Sqr    => Orig.A_Sum_Sqr + A_Elem * A_Elem,
+               B_Sum        => Orig.B_Sum + B_Elem,
+               B_Sum_Sqr    => Orig.B_Sum_Sqr + B_Elem * B_Elem,
+               Prod_Sum     => Orig.Prod_Sum + A_Elem * B_Elem,
+               Prod_Sum_Sqr => Orig.Prod_Sum_Sqr + (A_Elem * B_Elem) ** 2);
+   end Update_Accums;
+
+   Result : constant Accums :=
+      (for I in 1 .. MAX => Update_Accums(<(others => 0.0)>, A(I), B(I)));
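
The point of the accumulator record is that a parallel implementation can give
each chunk its own copy and merge the partials afterwards.  A sketch of that
evaluation strategy, in Python rather than Ada purely for illustration (all
names here are hypothetical, not proposed syntax):

```python
from dataclasses import dataclass
from functools import reduce

@dataclass
class Accums:
    a_sum: float = 0.0
    a_sum_sqr: float = 0.0
    b_sum: float = 0.0
    b_sum_sqr: float = 0.0
    prod_sum: float = 0.0
    prod_sum_sqr: float = 0.0

def update(acc: Accums, a: float, b: float) -> Accums:
    # analogue of Update_Accums: fold one element pair into the record
    return Accums(acc.a_sum + a, acc.a_sum_sqr + a * a,
                  acc.b_sum + b, acc.b_sum_sqr + b * b,
                  acc.prod_sum + a * b, acc.prod_sum_sqr + (a * b) ** 2)

def combine(x: Accums, y: Accums) -> Accums:
    # every field is a plain sum, so partial records merge field-wise
    return Accums(*(xv + yv
                    for xv, yv in zip(vars(x).values(), vars(y).values())))

def reduce_chunked(a, b, chunks):
    # one accumulator record per chunk, then a merge of the partials;
    # with real parallelism each chunk's inner loop would run on its own core
    n = len(a)
    partials = []
    for c in range(chunks):
        lo, hi = c * n // chunks, (c + 1) * n // chunks
        acc = Accums()
        for i in range(lo, hi):
            acc = update(acc, a[i], b[i])
        partials.append(acc)
    return reduce(combine, partials, Accums())
```

Because each field of the record is an independent sum, chunked evaluation
gives the same result as a purely sequential fold (up to floating-point
reassociation).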
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, August 31, 2017  3:19 PM
+
+>> You just need to prove two slices don't overlap, which is equivalent 
+>> to proving the high bound of one is less than the low bound of the 
+>> other.  This is the kind of thing that static analysis and more 
+>> advanced proof tools are pretty good at doing!  This is not 
+>> significantly harder than eliminating run-time checks for array 
+>> indexing when the bounds are not known at compile-time, which is 
+>> something that many Ada compilers can do based on other information 
+>> available at the point of usage.
+> 
+> OK, but again these are Legality Rules, not something that compilers 
+> are doing on their own. The entire set of rules has to be defined
+> formally and required of all Ada compilers. What "static analysis" and 
+> "advanced proof tools" can or can't do is irrelevant (unless of course 
+> we allowed what is legal or not to be implementation-defined -- when I 
+> previously proposed that for exception contracts, the idea seemed to 
+> be as effective as a lead balloon).
+
+This is a general problem with data race detection, I would say.  My suggestion
+is to define what we consider to be a "potential data race" in a way that
+run-time detection can be used, but make it clear that compilers are permitted
+to give compile-time errors if they can show that the "potential data race"
+exists in a given instance.  This is somewhat analogous to the "potentially
+blocking operation," but in this case, we require the equivalent of
+"detect-blocking."  And of course a compiler can omit the run-time check if it
+can prove there is no potential data race.  For anything in between, presumably
+many compilers would provide warnings.
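
The disjointness condition quoted earlier, and the run-time flavor of the
check suggested here, can be sketched concretely (a Python illustration of the
idea, not proposed RM wording; the function names are hypothetical): two
inclusive index ranges fail to overlap exactly when the high bound of one is
less than the low bound of the other, and a run-time check can raise an error
when any two written slices might alias.

```python
def ranges_overlap(lo1, hi1, lo2, hi2):
    # inclusive index ranges overlap unless one ends before the other begins
    return not (hi1 < lo2 or hi2 < lo1)

def check_no_data_race(writes):
    # writes: (lo, hi) index ranges written by different chunks; raise at
    # run time if any two might alias -- the analogue of "detect-blocking"
    for i in range(len(writes)):
        for j in range(i + 1, len(writes)):
            if ranges_overlap(*writes[i], *writes[j]):
                raise RuntimeError(
                    f"potential data race: {writes[i]} vs {writes[j]}")
```

A compiler that can prove the negation of `ranges_overlap` statically for
every pair of written slices may omit the check entirely, and one that can
prove it holds may reject (or warn about) the construct at compile time.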
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Thursday, August 31, 2017  3:38 PM
+
+That's effectively making the checks implementation-defined (whether a
+particular piece of code will compile is impl-def). Not sure how well that
+will fly. Ada usually requires a run-time check in such instances and does
+not allow a compile-time check (the latter because of the mess that comes
+up in conditional code). Anyway, food for thought.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, August 31, 2017  8:17 PM
+
+Perhaps you are right.  No big reason to treat this differently than any other
+run-time check.  Most compilers already produce a warning when a run-time check
+is certain to fail, and often provide a mode where certain classes of warnings
+are treated as errors, so that would have much the same net effect.  In
+generic bodies, we often define things to raise Program_Error under certain
+circumstances, but for compilers that do macro-expansion, almost all of these
+end up as compile-time warnings, which are often treated as errors by the user.
+
+****************************************************************

Questions? Ask the ACAA Technical Agent