CVS difference for ai12s/ai12-0242-1.txt

Differences between 1.4 and version 1.5
Log of other versions for file ai12s/ai12-0242-1.txt

--- ai12s/ai12-0242-1.txt	2018/01/27 04:54:51	1.4
+++ ai12s/ai12-0242-1.txt	2018/02/27 07:04:06	1.5
@@ -5479,3 +5479,1831 @@
 
 ****************************************************************
 
+From: Jean-Pierre Rosen
+Sent: Wednesday, February  7, 2018  10:33 AM
+
+Since my proposal seems to not have been very clear, here is my suggestion
+again from scratch.
+
+Currently, the proposal uses some kind of expression (aggregate?
+container? array?) as the prefix:
+   <expression>'Reduce (F, <value>)
+
+However, this would make 'Reduce very different from all other
+attributes:
+- Current syntax says that the prefix of an attribute is a name (or an
+  implicit dereference). This proposal would be a complete change of
+  syntax.
+
+- All other attributes denote values or subprograms. What does 'Reduce
+  denote?  And what is F? A name? Then 'Reduce can't be a function! It
+  is actually a whole new kind of expression.
+
+- I find having a prefix that extends over several lines before you
+  discover it IS the prefix of an attribute extremely unreadable and
+  confusing. Admittedly this is personal taste (I've never been a fan
+  of prefixed notations).
+
+My proposal is to change the syntax (NOT the semantics) by making F the
+prefix: F'Reduce (<expression>) (The issue of having or not a default value
+is independent and deserves another discussion).
+
+Here:
+- The prefix is a name - no change in syntax
+
+- 'Reduce denotes a function that performs the reduction of its
+  argument. The argument is syntactically an array whose components
+  are appropriate for F. Its behaviour can be described in terms of
+  a regular function
+
+- For parallelism, it would be easy to specify the reduction function
+  as an aspect Reduction => F'Reduce
+
+This does NOT imply really creating an array to be passed to a regular
+function.  We just need to specify that the components are evaluated in no
+particular order.  Not creating the array is simply like putting F'Reduce
+inline. It's 100% "as if".
+
+OTOH, a compiler would be /allowed/ to construct an array if it is more
+convenient or efficient in some cases, and/or to generate an actual function.
+
+We could rule that F'Reduce has convention => intrinsic if we don't want it
+to be passed to generics - or not. After all, all inlined subprograms need a
+regular version for generics.
+
+I think this model would be easier to understand and explain, and would fit
+better in the current syntax.
+
+****************************************************************
+
+From: Edmond Schonberg
+Sent: Wednesday, February  7, 2018  11:24 AM
+
+That's an attractive alternative, as long as the first argument can also be a
+container or other iterable object, not just an explicit iterator construct:
+"+"'Reduce (S, 0) (the adjacent double and single quotes are unfortunate, however).
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Wednesday, February  7, 2018  5:55 PM
+
+> Since my proposal seems to not have been very clear, here is my 
+> suggestion again from scratch.
+> 
+> Currently, the proposal uses some kind of expression (aggregate?
+> container? array?) as the prefix:
+>    <expression>'Reduce (F, <value>)
+
+It seems that you are responding to various wild ideas proposed in e-mail, and
+not the concrete proposal - AI12-0242-1 - that Brad (presumably) spent hours
+writing. Brad's proposal clearly explains what this is: the prefix is an array
+or container object. Brad does not propose any syntax change.
+
+<Rant>Please, can we at least talk about what is proposed and not old fever
+dreams?? We need to make progress on this standard, and we can't do that if we
+keep rehashing things already rejected.</Rant>
+
+There is no functional need for the prefix to be anything else; it might be
+easier to write with some other rule, but surely not easier to understand.
+The prefix can be an aggregate of course, and that provides the iterators given
+in the e-mail examples. But that is NOT fundamental to the operation -- at
+least not to me (it makes much more sense when explained as operating on an
+array object than when tangling the iterators into it). [Just because it is
+written as a separate object doesn't require it to be implemented that way, of
+course, and the idea was to allow the object evaluation and the reduction to be
+interleaved -- probably a "Notwithstanding" rule.]
+
+Aside: I don't see any real need for the prefix to be a container (one can
+always write an array aggregate out of the appropriate container elements/parts
+of elements), and Brad's proposal never explains what he means by "container",
+anyway. (The Ada.Containers types are just normal tagged private types.) I
+suspect that at a minimum an iterator needs to be present. I can see that it
+might be more usable to allow containers, but I think it would be best to just
+define the semantics in terms of arrays and figure out an extension to
+containers later (they'll be a lot more complicated because of the need to talk
+about iterators and to just define what is meant by a "container" in this
+context). End Aside.
+
+> However, this would make 'Reduce very different from all other
+> attributes:
+> - Current syntax says that the prefix of an attribute is a name (or an
+>   implicit dereference). This proposal would be a complete change of
+>   syntax.
+
+Brad proposes no syntax change at all. I strongly concur with Brad on this; if
+we later think that we need to improve the usability of the attribute, we can
+consider other extensions but I don't think any of the ones I've considered fit
+into Ada very well. (Nor does your proposal, so far as I can tell; more below.)
+
+To use an aggregate in this notation, it would have to be a qualified
+expression (something has to specify the type of the aggregate). That might
+make it clunky to use in some contexts (needing to define a type just for it),
+but the idea of an expression with no defined type is wildly out of character
+for Ada and is a nonstarter with me. (Your proposal does nothing for this.)
+ 
+> - All other attributes denote values or subprograms. What does 'Reduce
+>   denote?  And what is F? A name? Then 'Reduce can't be a function! It
+>   is actually a whole new kind of expression.
+
+It denotes a value, of course. Why would you expect it to denote something
+else? The whole idea of attributes denoting functions is a nasty hack that
+causes more trouble than it is worth (functions with parameters of types that
+you can't otherwise describe are hardly functions at all).
+
+And surely there is nothing about a "value" that insists that the value has
+to be constant; it could be the result of some calculation (indeed, even
+A'First is the result of a calculation if the prefix is a parameter).
+
+> - I find having a prefix that extends over several lines before you
+>   discover it IS the prefix of an attribute extremely unreadable and
+>   confusing. Admittedly this is personal taste (I've never been a fan
+>   of prefixed notations)
+
+A legitimate gripe, but one that applies equally to prefixed views and
+(recently) existing attributes like Obj'Image. Arguably, that bird has flown.
+Moreover, I expect that many of these expressions will be directly in array 
+objects and the prefixes will be no longer than other typical attributes (more
+below).
+
+> My proposal is to change the syntax (NOT the semantics) by making F 
+> the prefix: F'Reduce (<expression>) (The issue of having or not a 
+> default value is independent and deserves another discussion).
+> 
+> Here:
+> - The prefix is a name - no change in syntax
+
+Straw man argument, the current proposal includes no change in syntax.
+
+I've previously suggested allowing an aggregate as a prefix, but I now realize
+that would not work, as an attribute prefix has to be resolved without
+context, while an aggregate can only be resolved *with* context (else a
+qualified expression, already allowed, would have to be used). I'm
+uninterested in any Ada construct having no type.
+
+Also, your proposal doesn't work very well for operators, as a string literal
+is not syntactically a name (nor could it be, I think). One has to use an
+expanded name for the prefix.
+
+> - 'Reduce denotes a function that performs the reduction of its
+>   argument. The argument is syntactically an array whose components
+>   are appropriate for F. Its behaviour can be described in terms of
+>   a regular function
+
+Sorry, but Ada does not have a type which is "syntactically an array whose
+components are appropriate for F". One cannot write such a function. This 
+would be a function in name only, which is as useful as describing 'Pos as a
+function (not at all, leaning toward harmful).
+
+Moreover, it is hard to imagine how the resolution of such a thing would work.
+I'm very uninterested in describing a new kind of type resolution; at least
+the resolution of an attribute prefix is already context-free. And if you
+make this context-free, then (just as in the existing proposal) an aggregate
+would have to be qualified (because again the context would not give a type).
+So this is not helping anything for usability.
+
+> - For parallelism, it would be easy to specify the reduction function
+>   as an aspect Reduction => F'Reduce
+
+I don't see any existing situation where this would be of use. Either one has
+existing imperative code, where the whole idea of reduction is foreign (and
+requires lots of restructuring), or you are creating new code in a functional
+style (where the attribute would be applied directly). One can always wrap a
+reduce in an expression function if one really needs to treat it that way
+(another reason why few attributes need to be treated as subprograms).
+
+In any case, this is not a real function as the argument cannot be described
+in Ada terms. So one would never want to treat it as a real function (in terms
+of renaming and the like).
+
+> This does NOT imply really creating an array to be passed to a regular 
+> function.  We just need to specify that the components are evaluated 
+> in no particular order.  Not creating the array is simply like putting 
+> F'Reduce inline.
+> It's 100% "as if".
+> 
+> OTOH, a compiler would be /allowed/ to construct an array if it is 
+> more convenient or efficient in some cases, and/or to generate an 
+> actual function.
+
+This is surely the intent of Brad's proposal, and every proposal (even the
+crazy ones) I've seen.
+
+> We could rule that F'Reduce has convention => intrinsic if we don't 
+> want it to be passed to generics - or not. After all, all inlined 
+> subprograms need a regular version for generics.
+
+All of the attribute functions are already Intrinsic, but that doesn't prevent
+renaming or passing as a generic. Janus/Ada has a whole special block of code
+devoted solely to resolving this possibility (renaming attributes or passing
+them to generics) -- it's not practical to share that with anything else. I'd
+much rather not have Reduce falling into this case. (This sort of use is
+barely tested, as it never happens in practice and rarely in the ACATS.)
+
+Luckily, this doesn't matter for your proposal, as with Pos and Val and many
+others, you cannot describe the profile in Ada, so renaming/generic actuals 
+aren't possible anyway.
+
+> I think this model would be easier to understand and explain, and 
+> would fit better in the current syntax.
+
+I'm not going to argue with an opinion, but there is no difference in the
+syntax. If anything, Brad's proposal is slightly better syntactically (as
+there is no need to use expanded names for operators).
+
+Still, when you originally proposed this, the examples made more sense to me
+when written as Brad proposes. (Although no one ever wrote any of the examples
+in a form that is actually sensible Ada -- with proper type names
+-- so the jury is still out on that.) The most compelling example to me -- the
+example with Tucker's "manual chunking" proposal -- uses an array object as
+the prefix. (I'd guess using an aggregate would be rare for most users --
+Tucker might be an exception. :-) In Brad's syntax:
+      Sum := Partial_Sum'Reduce("+", 0);
+In your proposal, the obvious expression is illegal:
+      Sum := "+"'Reduce (Partial_Sum, 0); -- Illegal syntax as noted previously
+so one has to write:
+      Sum := Standard."+"'Reduce (Partial_Sum, 0);
+
+The original makes much more sense to me, as one is reducing the array object
+Partial_Sum with the operator "+". Reduction is not a function of "+", yet
+your syntax makes it look that way.
+
+Ed wrote:
+
+>... "+"'Reduce (S, 0) (adjacent double and single quote are unfortunate
+>however).
+
+Well, the above is not legal Ada syntax, and J-P is not proposing that it be
+changed (nor is Brad, for that matter). So one has to write this as an
+expanded name:
+
+    Standard."+"'Reduce (S, 0)
+
+Note that there is nothing new about the adjacent " and '. For user-defined
+operators, something like:
+    Some_Pkg."+"'Access
+is perfectly legal Ada 95 (not just Ada 2012).
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, February  8, 2018  9:37 AM
+
+From an admittedly personal perspective, it feels like the ongoing discussion
+of the "reduction expression" seems to have entered into the realm of "design
+by committee," which I think we have usually avoided in the past. The notion 
+of a "reduction expression" was suggested a year or so ago, in part because I
+have found it to be extraordinarily useful in ParaSail, and because it
+included the notion of "reduction" which keeps coming up in discussions of
+parallelism.  
+
+I have also seen the use of containers grow in Ada 2012 relative to Ada 2005
+largely because of the iterator "syntactic sugar" added in Ada 2012.  The
+reduction expression was new "syntactic sugar" that combined the power of
+iterators with the notion of Map/Reduce.
+
+The notion of "map/reduce" refers to performing an operation (the "map" part)
+on some or all elements of some large data set or large search space, and then
+combining the individual results (the "reduce" part) to produce a single
+"reduced" result.  After many back and forths on e-mail, we seem to have ended
+up with little more than something which applies an operator over the elements
+of an array, and makes no use of the power of "syntactic sugar" to raise the
+level of abstraction.  To me this is quite a disappointment.
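[Editorial note: the map/reduce notion Tucker describes can be sketched in
one line of Python -- the "map" part transforms each element of the data
set, the "reduce" part combines the individual results. Data values are
invented for illustration:]

```python
data = [3, 1, 4, 1, 5]

# "map" part: square each element of the data set;
# "reduce" part: combine the individual results with "+".
total = sum(x * x for x in data)
print(total)  # 52
```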
+
+I believe iterators are probably the most important new "programming" feature
+of Ada 2012, while pre/postconditions are the most important new "declarative"
+feature.  And for that matter, iterator-based constructs like quantified
+expressions are critical to writing concise pre/postconditions.
+
+For Ada 2020, building on the iterator "syntactic sugar" to me seems a key to
+growing the real power of programming in Ada.  Things like iterator "filters"
+are part of that -- they are not just "gewgaws."   Various combiners such as 
+concurrent iterators also add to the power.
+
+Below is a copy of the set of examples of use of map/reduce expressions from
+an earlier e-mail, with ParaSail's "|" replaced with Ada's "&" where
+appropriate and "when" used for filters.  One thing to notice is that many of
+the "reductions" involve concatenations of strings, which clearly would be a 
+real pain if you first had to create an array of strings, something which is
+very awkward in Ada.
+
+I realize this proposed syntax was unclear to some, but it also provides a lot
+of power, and keeps operators like "+" and "&" in their natural "infix"
+position, so the reduction computation is written in its natural form.  It
+also allows an arbitrary amount of processing of the elements generated by the
+iterator (the "map" part).  If we lose both of these advantages of this
+notation, by reverting to something like "+"'Reduce(...), as well as losing the
+ability to use any sort of iterator and filter, the proposal loses much of its
+power.  It is not much more than what a generic "Reduce" package could
+provide, so it really doesn't add any real power to programming in Ada.
+
+So I would recommend we return to a powerful syntax that is based on
+iterators, with operators in their natural "infix" position, etc.  Otherwise,
+I believe we are wasting the opportunity and not really adding anything
+significant.  At a minimum, we should look at how examples like those below
+would be supported in whatever proposal we come up with.
+
+------- Examples of map/reduce expressions, originally in ParaSail, but now with some Ada-ization ------
+
+--  Count the number of formal-params that are output parameters
+-- The "when bool" is a filter that controls whether any action is taken
+-- for a particular item in the container
+
+const Num_Outputs := 
+                     (for each P of Formal_Params
+                       when P.Is_Operation_Output => <0> + 1)
+
+
+--  Create a bracketed list of the elements of DF.
+--  Note that ParaSail allows two iterators that iterate together,
+--  stopping when either reaches its end.
+--  So in this case, we are iterating over the elements of "DF" while
+--  also doing a simple "X := Y then Z"
+--  iterator which sets X to Y initially and then sets it to Z on each
+--  subsequent iteration.
+const DF_Image :=
+ (for (F in DF; Sep := "" then ", ") => <"["> & Sep & F) & "]"
+
+
+--  This produces a string representation of a sequence of ranges
+--  separated by spaces
+return (for R in Rngs forward =>
+  <"Countable::["> & R.First & ".." & R.Last & " ") & "]"
+
+
+--  This prints out a bracketed, comma-separated string representation
+--  of the inputs
+--  It again uses two iterators being iterated together
+Println((for (each Input of Inputs forward;
+                   Sep := "" then ", ") =>
+                <"["> & Sep & "VN" & Input) & "]")
+
+
+--  This counts the number of parameters that are outputs or variables
+--  with the filter in "{...}" again
+(for each P of Params
+   when P.Is_Operation_Output or else P.Is_Var => <0> + 1)
+
+
+--  This determines the minimum index of a set of parameters that pass
+--  the filter (i.e. are either outputs or variables).
+--  Rather than using the most positive and most negative values as the
+--  "identity" for Min and Max, in Parasail, null is used, meaning both
+--  Min(null, X) = X and Max(null, X) = X 
+const First_Updated :=
+ (for each [I => P] of Params
+    when P.Is_Operation_Output or else P.Is_Var => Min(<null>, I))
+
+
+-- Count the number of non-inputs
+const Num_Inputs :=
+ (for each P of Params when not P.Is_Operation_Output => <0> + 1)
+
+
+--  Print the list of VNs in VNs_To_Prop
+Println(
+ (for VN in VNs_To_Prop forward => <"Recursive_Top_Down"> & " VN" & VN))
+
+
+--  Create a set of the parent value numbers (VNs) of the VNs in
+--  Changed_Operands
+const Parents : VN_Set :=
+  (for Opnd in Changed_Operands =>
+    <[]> & Op_Ctx.VN_To_Parent_Set_Map[Opnd])
+
+
+--  Print a list of [VNx => VNy] pairs representing the live-out set
+Println(
+ (for each [Loc => VN] of Live_Out_Set =>
+   <" Live_Out_Set ="> & " [VN" & Loc & "=>VN" & VN & "]"))
+
+
+--  Count the number of nodes that are found by following the 
+--  Parent chain. 
+--  This uses the general "X := Y then Z while W" iterator which 
+--  initializes X to Y, and then sets it to Z on each subsequent
+--  iteration, as long as "W" remains true.
+return (for N := Node_Id then Node_Tree.Parent[N]
+          while N in Node_Tree.Parent => <0> + 1)
+
+
+--  This asserts that the sum of the sizes in Bit_Field_Sizes should be <= 63
+assert ((for each Size of Bit_Field_Sizes => <0> + abs(Size)) <= 63);
+
+
+--  Build up an array mapping from Bit-Field Key to the 
+--  sum of the sizes of all bit-fields preceding this bitfield
+const Bit_Field_Offsets : Array<Univ_Integer, Indexed_By => Key_Type> :=
+ [for each [Key => Size] of Bit_Field_Sizes, Key =>
+    (for K in Key_Type'First .. Key-1 => <0> + Bit_Field_Sizes[K])]
+
+
+--  Dump the sizes of the bit fields
+Println(
+ (for (each [Key => Size] of Bit_Field_Sizes; Sep := "" then ", ")
+   forward => <"Bit_Field_Sizes:["> & Sep & Key & " => " & Size) & "]")
+
+
+--  Hash the characters of a string to produce an integer
+func Hash_Vec(Vec : Vector<Univ_Character>) return Univ_Integer is
+  return (for I in 1 .. Vec'Length reverse =>
+    (<0> * 127 + (Vec[I] - Char_First)) mod Hash_Modulus)
+end func Hash_Vec
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Thursday, February  8, 2018  5:52 PM
+
+I don't want to rehash all of the previous arguments, so I'm just going to
+say a few words (but that doesn't mean that the old arguments aren't still
+true).
+
+> The reduction
+> expression was new "syntactic sugar" that combined the power of 
+> iterators with the notion of Map/Reduce.
+
+I recall the lead designer of Ada 95 telling people far and wide that Ada was
+about providing building blocks, not finished structures. This was a classic
+example of a "finished" feature rather than a set of building blocks. That
+becomes clear when one just needs to reduce an array (likely to be a very
+common operation associated with normal parallel loops in Ada 2020). With your
+old proposed syntax, you have to write an iterator to do that, even though the
+iterator is irrelevant noise. 
+
+Ada has great support for creating maps (in the form of arrays), and Ada
+2020 promises to extend that even further. So what's missing is a Reduce
+primitive, and that's what's being proposed in AI12-0242-1.
+
+Building some sort of Frankenstein combination of iterators and reduce without
+having an actual reduce primitive just throws away all of the Ada history of
+building blocks.
+
+> The notion of "map/reduce" refers to performing an operation (the 
+> "map" part) on some or all elements of some large data set or large 
+> search space, and then combining the individual results (the "reduce" 
+> part) to produce a single "reduced"
+> result.  After many back and forths on e-mail, we seem to have ended 
+> up with little more than something which applies an operator over the 
+> elements of an array, and makes no use of the power of "syntactic 
+> sugar" to raise the level of abstraction.  To me this is quite a 
+> disappointment.
+
+Raising the "level of abstraction" beyond the level that most programmers can
+understand is not helpful. I'd rather the handful of super-advanced programmers
+be disappointed. And there is nothing natural or understandable about the
+combination of Map/Reduce. It only began to make sense when we started talking 
+about Reduce on its own. (I think it exists solely because of its potential to
+describe parallelism -- Ada is about describing the problem in programming
+language terms, not making the problem unrecognizable to fit some preconceived
+notion of abstraction.) 
+
+Adding a bunch of "syntactic broccoli" to describe a construct that few will
+understand or use is not an improvement.
+
+Let me add that I've been searching for ways to make the Reduce attribute
+easier to use; I'm not unwilling to consider improvements. But discarding the
+entire thing (and/or discarding strong typing) is not the way to go.
+
+In any case, you haven't explained what it is that you can't do with a
+combination of an aggregate and the reduce operation. There should be a permission to
+interleave the evaluation of the prefix (and its parts) with the actual Reduce
+operation (I think it is missing in Brad's proposal).
+
+> I believe iterators are probably the most important new "programming" 
+> feature of Ada 2012, while pre/postconditions are the most important 
+> new "declarative" feature.  And for that matter, iterator-based 
+> constructs like quantified expressions are critical to writing concise 
+> pre/postconditions.
+
+I definitely do not agree with this. For loop iterators for containers are a
+natural extension that I'm not surprised are widely used. But I don't see much
+value to quantified expressions; such parts of pre/post should be encapsulated
+in a function, whose body can then use a regular for loop
+if necessary. (We do need to be able to declare such functions "extra pure" ==
+global null, so that they can be optimized algebraically - Ada 2020 ought to
+fix that.)
+ 
+> For Ada 2020, building on the iterator "syntactic sugar" to me seems a 
+> key to growing the real power of programming in Ada.  Things like 
+> iterator "filters" are part of that -- they
+> are not just "gewgaws."   Various combiners such as 
+> concurrent iterators also add to the power.
+
+We surely need concurrent iterators, but more than that is just too much.
+Turning everything into an iterator is not "raising the abstraction level"
+-- that's done by providing good abstract operations for an ADT. It's exposing
+more implementation than necessary.
+
+... 
+> So I would recommend we return to a powerful syntax that is based on 
+> iterators, with operators in their natural "infix"
+> position, etc.  Otherwise, I believe we are wasting the opportunity 
+> and not really adding anything significant.
+
+I'm afraid that reraising this at this point is likely to kill Ada 2020, as
+there is little likelihood of consensus on this point.
+
+> At a minimum, we should look at how examples like those below would be 
+> supported in whatever proposal we come up with.
+
+I agree with this. I did the first few examples here. The annoyance (IMHO) is
+having to find/declare an appropriate array type. I was hoping for suggestions
+to deal with that, not to start over.
+
+====
+
+Before we look at Tuck's examples (many of which are weird uses for this
+construct), let's look at a pair of common cases written both ways:
+
+-- Reducing the result of a manually-chunked parallel loop:
+
+-- Parasail-like:
+    Sum := (for E of Partial_Sum => <0> + E);
+-- AI12-0242-1:
+    Sum := Partial_Sum'Reduce("+", 0);
+
+-- Calculating the sum of squares for a data array (part of statistics gathering):
+-- Parasail-like:
+    Sum_Sqr := (for I in Data'range => <0.0> + Data(I)*Data(I));
+-- AI12-0242-1:
+    Sum_Sqr := Data_Array'(for I in Data'range => Data(I)*Data(I))'Reduce("+", 0.0);
+-- Note: "Data_Array" is the type of Data here. No additional declaration is needed.
+
+
+...
+> const Num_Outputs := 
+>                      (for each P of Formal_Params
+>                        when P.Is_Operation_Output => <0> + 1)
+
+    type Mapper is array (Positive range <>) of Natural;
+
+    Num_Outputs : constant Natural :=
+        Mapper'(for each P of Formal_Params => Boolean'Pos(P.Is_Operation_Output)'Reduce("+",0);
+
+-- I'm assuming that we add all iterators to array aggregates as previously
+-- proposed (although no one has written that up).
+
+> --  This counts the number of parameters that are outputs or variables
+> --  with the filter in "{...}" again
+> (for each P of Params
+>    when P.Is_Operation_Output or else P.Is_Var => <0> + 1)
+
+-- This is essentially the same as the last one:
+
+    Num_Outputs_or_Vars : constant Natural :=
+        Mapper'(for each P of Formal_Params => 
+           Boolean'Pos(P.Is_Operation_Output or else P.Is_Var)'Reduce("+",0);
+
+> -- Count the number of non-inputs
+> const Num_Inputs :=
+>  (for each P of Params when not P.Is_Operation_Output => <0> + 1)
+
+-- This is identical to the first, not going to waste my time copying it.
+
+> --  This asserts that the sum of the sizes in Bit_Field_Sizes should 
+> be <= 63 assert ((for each Size of Bit_Field_Sizes => <0> + abs(Size)) 
+> <= 63);
+
+pragma assert (Mapper'(for each Size of Bit_Field_Sizes =>
+                        abs(Size))'Reduce("+",0) <= 63);
+
+> --  Hash the characters of a string to produce an integer func 
+> Hash_Vec(Vec : Vector<Univ_Character>) return Univ_Integer is
+>   return (for I in 1 .. Vec'Length reverse =>
+>     (<0> * 127 + (Vec[I] - Char_First)) mod Hash_Modulus) end func 
+> Hash_Vec
+
+-- I think this one needs a helper function (I used Ada types here):
+   function Hash_One (New_Char, Pending : Natural) return Natural is
+      ((Pending * 127 + (New_Char - Char_First)) mod Hash_Modulus);
+
+   function Hash_Vec(Vec : String) return Natural is
+     (Mapper'(for I in 1 .. Vec'Length => Character'Pos(Vec(Vec'Length-I+1)))'Reduce(Hash_One, 0));
+
+-- If 'Reduce allowed the first parameter of the combiner to be of another type,
+-- and you didn't insist on hashing in reverse (which seems unnecessary to me),
+-- this could be a lot simpler:
+
+   function Hash_One (New_Char : Character; Pending : Natural) return Natural is
+      ((Pending * 127 + (Character'Pos(New_Char) - Char_First)) mod Hash_Modulus);
+
+   function Hash_Vec(Vec : String) return Natural is
+     (Vec'Reduce(Hash_One, 0));
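[Editorial note: Python's functools.reduce already has the relaxed combiner
profile Randy suggests -- the accumulator and the element may be of
different types. A sketch of the simplified hash; the Hash_Modulus and
Char_First values are invented for illustration, not from the original
mail:]

```python
from functools import reduce

HASH_MODULUS = 32749  # hypothetical value
CHAR_FIRST = 32       # hypothetical lowest character code

def hash_one(pending, new_char):
    # The accumulator (an int) and the element (a one-character str)
    # have different types, as in the relaxed profile suggested above.
    # Note that reduce passes (accumulator, element) -- the reverse of
    # Randy's (New_Char, Pending) parameter order.
    return (pending * 127 + (ord(new_char) - CHAR_FIRST)) % HASH_MODULUS

def hash_vec(vec):
    return reduce(hash_one, vec, 0)

print(hash_vec("Ada"))
```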
+
+> --  Create a bracketed list of the elements of DF.
+> --  Note that ParaSail allows two iterators that iterate together,
+> --  stopping when either reaches its end.
+> --  So in this case, we are iterating over the elements of "DF" while
+> --  also doing a simple "X := Y then Z"
+> --  iterator which sets X to Y initially and then sets it to Z on each
+> --  subsequent iteration.
+> const DF_Image :=
+>  (for (F in DF; Sep := "" then ", ") => <"["> & Sep & F) & "]"
+
+-- I cannot for the life of me figure how this is supposed to work. Which
+-- demonstrates my original point, I think...
+
+-- In any case, for a language that supposedly is mostly used for uses
+-- where no heap use is allowed, this kind of example (which will use
+-- loads of heap doing all of these concats) seems silly. I'd rather use
+-- 'Image function overloading for this sort of thing anyway.
+
+I give up here for now. I think we may need to allow the first parameter of
+the combiner to be of a different type in order to avoid having clunky
+conversions (and unnecessary array types) in the uses of Reduce, but I'd like
+to see more examples that are actually reducing something...
+
+One thought: "'Reduce("+",0)" is so common that one could easily imagine
+having a "'Sum" operation with just that meaning; it would simplify the
+writing of a lot of these operations.
+
+****************************************************************
+
+From: Brad Moore
+Sent: Thursday, February  8, 2018  10:45 PM
+
+> Before we look at Tuck's example's (many of which are weird uses for 
+> this construct), let's look at a pair of common cases written both ways:
+> 
+> -- Reducing the result of a manually-chunked parallel loop:
+> 
+> -- Parasail-like:
+>    Sum := (for E of Partial_Sum => <0> + E);
+> -- AI12-0242-1:
+>    Sum := Partial_Sum'Reduce("+", 0);
+> 
+> -- Calculating the sum of squares for a data array (part of statistics gathering):
+> -- Parasail-like:
+>    Sum_Sqr := (for I in Data'range => <0.0> + Data(I)*Data(I));
+> -- AI12-0242-1:
+>    Sum_Sqr := Data_Array'(for I in Data'range => Data(I)*Data(I))'Reduce("+", 0.0);
+> -- Note: "Data_Array" is the type of Data here. No additional declaration is needed.
+
+With the 'Reduce attribute, I'm wondering if people would naturally gravitate
+towards using an expression function to express the array aggregate. This I
+think addresses Jean-Pierre's complaint about potentially long prefixes, by
+splitting into two parts. It also eliminates the use of a qualified
+expression, which Randy finds annoying. I think this also fits more in the
+idea of using building blocks to arrive at a solution.
+
+e.g., the previous example could be written as:
+
+function Squares return Data_Array is
+  (for I in Data'Range => Data(I)**2);
+
+Sum_Sqr := Squares'Reduce("+", 0.0);
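[Editorial note: the two-step, building-block style Brad shows -- a
separate function producing the aggregate, then a reduction over its
result -- has a direct analogue with generators. A Python sketch; the
data and names are illustrative only:]

```python
from functools import reduce
import operator

data = [1.0, 2.0, 3.0]  # illustrative stand-in for the Data array

def squares():
    # Plays the role of the expression function Squares: it yields the
    # "aggregate" whose components are then reduced.
    return (x ** 2 for x in data)

sum_sqr = reduce(operator.add, squares(), 0.0)
print(sum_sqr)  # 14.0
```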
+
+> ...
+>> const Num_Outputs :=
+>>                      (for each P of Formal_Params
+>>                        when P.Is_Operation_Output => <0> + 1)
+> 
+>    type Mapper is array (Positive range <>) of Natural;
+> 
+>    Num_Outputs : constant Natural :=
+>        Mapper'(for each P of Formal_Params => 
+> Boolean'Pos(P.Is_Operation_Output)'Reduce("+",0);
+
+I think a paren is missing, and the "each" keyword, which doesn't exist in
+Ada, should be dropped.
+
+ Num_Outputs : constant Natural :=
+        Mapper'(for P of Formal_Params =>
+             Boolean'Pos(P.Is_Operation_Output))'Reduce("+",0);
+
+or alternatively:
+
+function Operations return Mapper is
+   (for P of Formal_Params => Boolean'Pos(P.Is_Operation_Output));
+
+Num_Outputs : constant Natural := Operations'Reduce("+",0);
+
+If element filtering is commonly needed, such as the Boolean'Pos mechanism
+of this example, I think it would be worthwhile to consider adding "when" as
+syntactic sugar to replace the Boolean'Pos construct in this example. I see
+this being useful for array aggregates in general, not necessarily anything
+to do with 'Reduce attribute usage. This would also provide early elimination
+of junk values from the aggregate that aren't of interest.
+
+This example could then be written with 'Length instead of 'Reduce, for
+instance:
+
+ Num_Outputs : constant Natural :=
+     Mapper'(for P of Formal_Params when P.Is_Operation_Output => 0)'Length;
+
+I think the decision whether to add "when" syntax to aggregates should be
+separate from this AI. As Randy's examples show, the reductions can be
+written without it, but it seems to be a "nice to have" feature.
+We can decide separately whether it'd be "nice enough to have".
+
+For the remaining examples, I will assume that we don't have the "When" clause.
+
+> -- I'm assuming that we add all iterators to array aggregates as 
+> previously proposed (although no one has written that up).
+> 
+>> --  This counts the number of parameters that are outputs or 
+>> variables
+>> --  with the filter in "{...}" again
+>> (for each P of Params
+>>    when P.Is_Operation_Output or else P.Is_Var => <0> + 1)
+> 
+> -- This is essentially the same as the last one:
+> 
+>    Num_Outputs_or_Vars : constant Natural :=
+>        Mapper'(for each P of Formal_Params => 
+> Boolean'Pos(P.Is_Operation_Output or else P.Is_Var)'Reduce("+",0);
+
+The same typos apply here (missing paren and extra "each"):
+
+  Num_Outputs_or_Vars : constant Natural :=
+         Mapper'(for P of Formal_Params =>
+            Boolean'Pos(P.Is_Operation_Output or else P.Is_Var))'Reduce("+",0);
+
+or 
+   function Output_Or_Vars return Mapper is
+      (for P of Formal_Params => Boolean'Pos(P.Is_Operation_Output or else P.Is_Var));
+
+   Num_Outputs_Or_Vars : constant Natural := Output_Or_Vars'Reduce("+",0);
+
+
+...
+> -- If 'Reduce allowed the first parameter of the combiner to be of 
+> another type,
+> -- and you didn't insist on hashing in reverse (which seems 
+> unnecessary to me),
+> -- this could be a lot simpler:
+> 
+>   function Hash_One (New_Char : Character; Pending : Natural) return 
+> Natural is
+>      ((Pending * 127 + (Character'Pos(New_Char) - Char_First)) mod 
+> Hash_Modulus);
+> 
+>   function Hash_Vec(Vec : String) return Natural is
+>     (Vec'Reduce(Hash_One, 0));
+
+As the other examples show below, I think we will probably want to extend
+the AI to allow the parameters and result to be of different types.
+
+>> --  Create a bracketed list of the elements of DF.
+>> --  Note that ParaSail allows two iterators that iterate together,
+>> --  stopping when either reaches its end.
+>> --  So in the case, we are iterating over the elements of "DF" while
+>> --  also doing a simple "X := Y then Z"
+>> --  iterator which sets X to Y initially and then sets it to Z on 
+>> each
+>> --  subsequent iteration.
+>> const DF_Image :=
+>>  (for (F in DF; Sep := "" then ", ") => <"["> & Sep & F) & "]"
+> 
+> -- I cannot for the life of me figure how this is supposed to work. 
+> Which
+> -- demonstrates my original point, I think...
+> 
+> -- In any case, for a language that supposedly is mostly used for uses
+> -- where no heap use is allowed, this kind of example (which will use
+> -- loads of heap doing all of these concats) seems silly. I'd rather 
+> use
+> -- 'Image function overloading for this sort of thing anyway.
+> 
+> I give up here for now. I think we may need to allow the first 
+> parameter of the combiner to be of a different type in order to avoid 
+> having clunky conversions (and unnecessary array types) in the uses of 
+> Reduce, but I'd like to see more examples that are actually reducing something...
+> 
+> One thought: "'Reduce("+",0)" is so common that one could easily 
+> imagine having a "'Sum" operation with just that meaning; it would 
+> simplify the writing of a lot of these operations.
+> 
+
+Carrying on from where Randy left off....
+
+--  This produces a string representation of a sequence of ranges
+--  separated by spaces
+
+Note: This example is a case where we want the combiner result type to be
+different from the combiner parameter types. We should consider allowing
+that in the AI. For sequential reductions, this should work fine. If we want
+this to work for parallel reductions, then we probably need to bring back
+the Identity and Reducer aspects, where Identity indicates the value used to
+initialize the accumulator of each parallel chunk, and Reducer is an aspect
+applied to the combiner subprogram that identifies another subprogram whose
+parameters and result are all of the same type, so that the results from
+different parallel chunks can be combined.
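
To illustrate what the Identity and Reducer aspects would buy us, here is a
rough model, in Python rather than Ada purely for illustration (the name
chunked_reduce and its shape are hypothetical): each chunk folds its slice
starting from Identity with the combiner, and Reducer merges the per-chunk
results.

```python
# Hypothetical model of the Identity/Reducer scheme described above:
# each parallel chunk folds its slice starting from Identity using the
# combiner; the per-chunk partial results (all of the combiner's result
# type) are then merged with the Reducer.
def chunked_reduce(items, combiner, identity, reducer, n_chunks):
    chunk_size = (len(items) + n_chunks - 1) // n_chunks
    partials = []
    for start in range(0, len(items), chunk_size):
        acc = identity                      # each chunk starts from Identity
        for x in items[start:start + chunk_size]:
            acc = combiner(acc, x)
        partials.append(acc)
    result = partials[0]                    # merge chunk results with Reducer
    for p in partials[1:]:
        result = reducer(result, p)
    return result

# For "+" the Identity is 0 and the Reducer is "+" itself:
total = chunked_reduce(list(range(1, 101)), lambda a, b: a + b, 0,
                       lambda a, b: a + b, 4)   # -> 5050
```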
+
+OTOH, these examples seem to be all about fiddling with character strings,
+and reductions of string arrays into a string result may not be as common a
+use case as they make it seem here, looking at the broader usage. We might
+find that in 90% or more of cases, reductions involve parameters and
+results of the same type.
+
+-- Parasail
+return (for R in Rngs forward =>
+  <"Countable::["> & R.First & ".." & R.Last & " ") & "]"
+
+-- Ada
+function Ranges return String_Array is
+  (for R in Rngs => R.First & ".." & R.Last & " ");
+
+return ("Countable::[" & Ranges'Reduce("&", "") & "]");
+
+
+--  This prints out a bracketed, comma-separated string representation
+--  of the inputs
+--  It again uses two iterators being iterated together
+
+-- Parasail
+Println((for (each Input of Inputs forward;
+                   Sep := "" then ", ") =>
+                <"["> & Sep & "VN" & Input) & "]")
+
+NOTE, this example also needs support for the combiner result type to be
+different from the combiner parameter types.
+
+-- Ada
+function Inputs_CSV return String_Array is
+  (for I in Inputs'Range => (if I = Inputs'First then "" else ", ") & "VN" & Inputs(I));
+
+Put_Line("[" & Inputs_CSV'Reduce("&", "") & "]");
+
+
+--  This determines the minimum index of a set of parameters that pass
+--  the filter (i.e. are either outputs or variables).
+--  Rather than using the most positive and most negative values as the
+--  "identity" for Min and Max, in Parasail, null is used, meaning both
+--  Min(null, X) = X and Max(null, X) = X 
+
+-- Parasail
+const First_Updated :=
+ (for each [I => P] of Params
+    when P.Is_Operation_Output or else P.Is_Var => Min(<null>, I))
+
+-- Ada
+function Updates return Mapper is
+  (for I in Params'Range => (if Params(I).Is_Operation_Output
+                               or else Params(I).Is_Var
+                             then I
+                             else Integer'Last));
+
+First_Updated : constant Natural := Updates'Reduce(Integer'Min, Integer'Last);
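
For comparison, the ParaSail null-identity behaviour mentioned in the
comment above can be modeled roughly as follows (Python, purely
illustrative; the names are made up): with Min(null, X) = X there is no need
for an artificial Integer'Last seed, and an empty sequence yields null.

```python
# Sketch of the ParaSail "null identity" idea for Min: the accumulator
# starts as None, the first value replaces it, and later values are
# combined with min; an empty sequence reduces to None.
def reduce_seq(items, combiner, initial):
    acc = initial
    for x in items:
        acc = combiner(acc, x)
    return acc

def min_with_null(acc, x):
    return x if acc is None else min(acc, x)

first_updated = reduce_seq([7, 3, 9], min_with_null, None)   # -> 3
```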
+
+
+--  Print the list of VNs in VNs_To_Prop
+
+-- Parasail
+
+Println(
+ (for VN in VNs_To_Prop forward => <"Recursive_Top_Down"> & " VN" & VN))
+
+-- Ada
+
+-- NOTE another case where the combiner result type differs from the
+-- combiner parameter type
+
+function VNs return String_Array is
+  (for VN in VNs_To_Prop => " VN" & VN);
+
+Put_Line("Recursive_Top_Down" & VNs'Reduce("&", ""));
+
+
+Scanning down the rest of Tuck's examples, I don't see anything that couldn't
+be handled similarly to the examples already shown.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Friday, February  9, 2018  10:00 AM
+
+> I recall the lead designer of Ada 95 telling people far and wide that 
+> Ada was about providing building blocks, not finished structures. This 
+> was a classic example of a "finished" feature rather than a set of 
+> building blocks. That becomes clear when one just needs to reduce an 
+> array (likely to be a very common operation associated with normal 
+> parallel loops in Ada 2020). With your old proposed syntax, you have 
+> to write an iterator to do that, even though the iterator is irrelevant noise.  ...
+
+It seems our respective notions of "building blocks" don't agree.  I believe
+iterators (plus filters) are the best "building block" for generating values,
+and what we need is some "building block" for absorbing the values generated
+by an iterator, optionally performing a computation on them (using the
+existing Ada computational "building blocks"), and then combining the
+elements into a single value (this is the new building block using
+"<initial val>" or some other syntax).
+
+Building blocks always produce somewhat more verbose representations for any
+given special case (e.g. reducing an array), but ultimately provide much more
+power to the programmer when the programmer wants to go beyond a specially
+supported special case.  I can see that the 'Reduce attribute works nicely
+for the array special case, but fails when the elements you want to reduce
+represent some other sort of sequence of items, such as that which can be
+produced by a combination of an iterator, an optional filter, and an Ada
+expression.
+
+I think it is a big loss to only support reduction over an existing concrete
+object with a single binary function.  By using more flexible syntax, we can
+support reduction of a sequence of values produced in many different ways,
+without ever having to produce in memory a single concrete object representing
+the sequence.    
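
To illustrate, here is a rough model (Python, purely for illustration; the
names are made up) of reducing a sequence of values produced by an iterator
plus a filter plus a component expression, where the sequence itself is
never materialized:

```python
# Illustrative sketch of reducing a generated sequence without building
# it in memory: values come straight from a generator (iterator + filter
# + expression), and only the accumulator is ever stored.
def reduce_stream(stream, combiner, initial):
    acc = initial
    for v in stream:
        acc = combiner(acc, v)
    return acc

# Roughly: (for I in 1 .. 1000 when I mod 3 = 0 => I*I) reduced with "+":
total = reduce_stream((i * i for i in range(1, 1001) if i % 3 == 0),
                      lambda a, b: a + b, 0)
```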
+
+The original discussion of 'Reduce had it combined with a container aggregate,
+and at that point I presumed we were talking about a very special
+interpretation of the container aggregate, where it didn't have a type, it
+just represented a sequence of values that are to be reduced.  But that
+presumes we agree on the syntax of a container aggregate and the special
+significance of 'Reduce applied to such an aggregate.  But if we start
+insisting that this is a "normal" attribute with normal rules such as being
+able to resolve the prefix to a single, pre-existing type, then the idea falls
+apart in my view.  So if we have heartburn with giving the aggregate'Reduce
+attribute special overloading rules, etc., then we should go back to some sort
+of new syntax rather than lose all of the power of using an iterator, filter,
+etc., which the container aggregate would provide.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Friday, February  9, 2018  11:31 AM
+
+> I definitely do not agree with this. For loop iterators for containers are a
+> natural extension that I'm not surprised are widely used. But I don't see
+> much value to quantified expressions; their parts of pre/post should be
+> encapsulated in a function and as such the body of that function can use a
+> regular for loop if necessary. ...
+
+This claim of little value for quantified expressions may be based on your own
+coding style, or anticipation thereof, but we have a number of folks actually
+using Ada 2012, and SPARK 2014 which is based on it, and quantified expressions
+are used heavily.  As one example, below is the spec for an implementation of
+red-black trees.  The SPARK tools proved that the implementation matches the
+specified contracts, which define full functional correctness.  It uses the
+SPARK 2014 "Contract_Cases" aspect which is essentially a post-condition that
+is similar to a case statement in that it requires non-overlapping predicates
+as preconditions to each associated post-condition.  You could substitute a
+large Post => (if xxx then ... elsif ...) and have essentially the same
+semantics, losing only the requirement that exactly one of the conditions
+must evaluate to true.
+
+Even a quick perusal will show the heavy use of quantified expressions.
+Proving the functional correctness of this implementation of red-black trees
+was to some extent an academic exercise, but we have, as an example, a customer
+writing a secure kernel in Ada 2012/SPARK 2014, and theirs is definitely not an
+academic exercise.  You can say that you would not write Ada like this, but
+that is not the point.  Those who use formal methods in industrial settings
+do make heavy use of quantified expressions.
+
+In general we do have to rely on our own coding styles and experience to try to
+evaluate the potential value of some new language feature, but if we have access
+to code written by others, it is important to factor in their usage patterns as
+well.  We should try not to fall into the trap that only our own personal
+coding style is the "correct style" and any other usage pattern is inherently
+bad style.  Our customers for these future Ada extensions are the existing and
+potential professional Ada programmers making a living writing Ada.  The more
+we know about how such folks use existing Ada features (or features of other
+languages), the more we can serve their future needs, and/or make the Ada
+language a friendly place for them to move to achieve high productivity with
+high security.
+
+------  red-black trees (author is Claire Dross) -------
+
+
+with Tree_Model; use Tree_Model;
+use Tree_Model.D_Seq;
+use Tree_Model.V_Set;
+with Ada.Containers; use Ada.Containers;
+with Binary_Trees; use Binary_Trees;
+
+
+--  This package provides an implementation of binary search trees on top of the
+--  Forest type defined in Binary_Trees. A search tree is a forest with a root
+--  and a natural value per node. It has an invariant stating that values of the
+--  nodes of the tree are ordered. It provides primitives for inserting a value
+--  in the tree and for searching the tree for a given value. It also provides
+--  Rotate primitives that can be used to balance the search tree.
+
+
+package Search_Trees with SPARK_Mode is
+   pragma Unevaluated_Use_Of_Old (Allow);
+
+
+   type Search_Tree is private with Default_Initial_Condition =>
+     Size (Search_Tree) = 0;
+
+
+   function Size (T : Search_Tree) return Extended_Index_Type;
+
+
+   function Root (T : Search_Tree) return Index_Type with
+     Pre => Size (T) /= 0;
+
+
+   function Parent (T : Search_Tree; I : Index_Type) return Extended_Index_Type with
+     Post => (if Size (T) = 0 then Parent'Result = Empty);
+
+
+   function Position (T : Search_Tree; I : Index_Type) return Direction with
+     Pre => Parent (T, I) /= Empty;
+
+
+   function Model (T : Search_Tree) return Model_Type with
+     Ghost,
+     Pre => Size (T) /= 0;
+
+
+   function Peek (T : Search_Tree; I : Index_Type; D : Direction) return Extended_Index_Type with
+     Pre => Size (T) /= 0 and then Model (T) (I).K;
+
+
+   function Values (T : Search_Tree) return Value_Set with
+     Ghost,
+     Post => (if Size (T) = 0 then Is_Empty (Values'Result));
+
+
+   function Contains (T : Search_Tree; V : Natural) return Boolean with
+     Post => Contains'Result = Contains (Values (T), V);
+
+
+   procedure Insert
+     (T : in out Search_Tree; V : Natural; I : out Extended_Index_Type)
+   with
+     Pre  =>
+       --  There are free nodes to use
+       Size (T) < Max,
+     Contract_Cases =>
+
+
+       --  Case 1: V is already in the tree. Return the empty node in I. All
+       --  values in the tree, as well as the values of positions and parent
+       --  link, are preserved. We cannot state simply that T = T'Old as this
+       --  would be the component-wise equality of SPARK, not the logical
+       --  equality of T and T'Old. Instead, state explicitly which properties
+       --  are preserved.
+       (Contains (Values (T), V) =>
+          I = Empty
+          and then Values (T) = Values (T'Old)
+          and then (for all J in Index_Type => Parent (T, J) = Parent (T'Old, J))
+          and then (for all J in Index_Type =>
+                     (if Parent (T, J) /= Empty
+                      then Position (T, J) = Position (T'Old, J)))
+          and then (for all J in Index_Type =>
+                     (if Model (T) (J).K then Model (T'Old) (J).K))
+          and then (for all J in Index_Type =>
+                     (if Model (T'Old) (J).K then Model (T) (J).K))
+          and then (for all J in Index_Type =>
+                        (for all D in Direction =>
+                           (if Model (T) (J).K then
+                                Peek (T'Old, J, D) = Peek (T, J, D)))),
+
+
+        --  Case 2: the tree is empty. Return a singleton tree in T, whose root
+        --  is I. Value V is added to the previous (empty) set of values. The
+        --  value of the parent link is preserved for all nodes.
+        Size (T) = 0 =>
+          I /= Empty
+          and then Size (T) = 1
+          and then Root (T) = I
+          and then Is_Add (Values (T'Old), V, Values (T))
+          and then (for all I in Index_Type =>
+                     (if I /= Root (T) then not Model (T) (I).K))
+          and then (for all I in Index_Type => Parent (T, I) = Parent (T'Old, I)),
+
+
+        --  Case 3: the tree is not empty and V is not in it. Return in T a tree
+        --  with the same root, the same nodes expect a new node I, whose size
+        --  has been increased by 1. Value V is added to the previous set of
+        --  values. The value of position and parent link is preserved for all
+        --  nodes except I.
+        others =>
+          I /= Empty
+          and then Model (T) (I).K
+          and then Root (T) = Root (T'Old)
+          and then Size (T) = Size (T)'Old + 1
+          and then Is_Add (Values (T'Old), V, Values (T))
+          and then (for all J in Index_Type =>
+                     (if I /= J then Parent (T, J) = Parent (T'Old, J)))
+          and then (for all J in Index_Type =>
+                     (if I /= J and Parent (T, J) /= Empty
+                      then Position (T, J) = Position (T'Old, J)))
+          and then (for all J in Index_Type =>
+                     (if I /= J and Model (T) (J).K
+                      then Model (T'Old) (J).K))
+          and then (for all J in Index_Type =>
+                     (if Model (T'Old) (J).K
+                      then Model (T) (J).K and I /= J))
+          and then (for all D in Direction => Peek (T, I, D) = 0)
+          and then Peek (T'Old, Parent (T, I), Position (T, I)) = 0
+          and then (for all J in Index_Type =>
+                        (for all D in Direction =>
+                           (if Model (T) (J).K
+                            and I /= J
+                            and (J /= Parent (T, I) or D /= Position (T, I))
+                            then
+                                Peek (T'Old, J, D) = Peek (T, J, D)))));
+
+
+   procedure Left_Rotate (T : in out Search_Tree; I : Index_Type) with
+     Pre  =>
+       --  The tree is not empty
+       Size (T) > 0
+
+
+       --  I is in the tree
+       and then Model (T) (I).K
+
+
+       --  I has a right child
+       and then Peek (T, I, Right) /= Empty,
+     Post =>
+       --  The size of the tree is preserved
+       Size (T) = Size (T)'Old
+
+
+       --  When rotating on the root, the right child of the root becomes the
+       --  new root, otherwise the root is preserved.
+       and then (if Root (T)'Old /= I then Root (T) = Root (T)'Old
+                 else Root (T) = Peek (T'Old, I, Right))
+
+
+       --  The right child of I becomes it parent, of which I becomes the left
+       --  child.
+       and then Parent (T, I) = Peek (T'Old, I, Right)
+       and then Position (T, I) = Left
+
+
+       --  The parent of I becomes the parent of its previous right child, and
+       --  unless I was root, as the same child (left or right).
+       and then Parent (T, Peek (T'Old, I, Right)) = Parent (T'Old, I)
+       and then (if Root (T)'Old /= I
+                 then Position (T, Peek (T'Old, I, Right)) = Position (T'Old, I))
+
+
+       --  During the rotation, the subtree located at the left child of the
+       --  right child of I becomes the right child of I.
+       and then (if Peek (T'Old, Peek (T'Old, I, Right), Left) /= Empty
+                 then Parent (T, Peek (T'Old, Peek (T'Old, I, Right), Left)) = I
+                   and then Position (T, Peek (T'Old, Peek (T'Old, I, Right), Left)) = Right)
+
+
+       --  Except for I, its right child, and the left child of its right child,
+       --  the parent link is preserved.
+       and then (for all J in Index_Type =>
+                  (if J /= I
+                     and then (Parent (T'Old, J) /= I
+                               or else Position (T'Old, J) = Left)
+                     and then (Parent (T'Old, J) = Empty
+                               or else Parent (T'Old, Parent (T'Old, J)) /= I
+                               or else Position (T'Old, Parent (T'Old, J)) = Left
+                               or else Position (T'Old, J) = Right)
+                   then Parent (T, J) = Parent (T'Old, J)))
+
+
+       --  Except for I, its right child, and the left child of its right child,
+       --  the position is preserved.
+       and then (for all J in Index_Type =>
+                  (if J /= I
+                     and then Parent (T'Old, J) /= Empty
+                     and then (Parent (T'Old, J) /= I
+                               or else Position (T'Old, J) = Left)
+                     and then (Parent (T'Old, Parent (T'Old, J)) /= I
+                               or else Position (T'Old, Parent (T'Old, J)) = Left
+                               or else Position (T'Old, J) = Right)
+                   then Position (T, J) = Position (T'Old, J)))
+
+
+       --  Nodes in the tree are preserved. We are using two implications
+       --  instead of a Boolean equality, as automatic provers work better with
+       --  two implications, which they can instantiate just for the directions
+       --  of interest in a given proof.
+       and then (for all J in Index_Type =>
+                  (if Model (T) (J).K then Model (T'Old) (J).K))
+       and then (for all J in Index_Type =>
+                  (if Model (T'Old) (J).K then Model (T) (J).K))
+
+
+       --  Values are preserved
+       and then Values (T) = Values (T'Old)
+
+
+       --  The right child of I now is the former left child of I's former right
+       --  child.
+       and then Peek (T, I, Right) = Peek (T'Old, Peek (T'Old, I, Right), Left)
+
+
+       --  I's former right child has taken its place in the tree
+       and then (if Parent (T'Old, I) /= 0 then
+                      Peek (T, Parent (T'Old, I), Position (T'Old, I)) =
+                      Peek (T'Old, I, Right))
+
+
+       --  I is now the left child of its former right child
+       and then Peek (T, Peek (T'Old, I, Right), Left) = I
+
+
+       --  Other children are preserved
+       and then
+       (for all J in Index_Type =>
+           (for all D in Direction =>
+               (if (J /= I or D = Left)
+                 and (J /= Parent (T'Old, I) or else D /= Position (T'Old, I))
+                 and (J /= Peek (T'Old, I, Right) or else D = Right)
+                 and Model (T) (J).K
+                then Peek (T, J, D) = Peek (T'Old, J, D))));
+
+
+   procedure Right_Rotate (T : in out Search_Tree; I : Index_Type) with
+     Pre  =>
+       --  The tree is not empty
+       Size (T) > 0
+
+
+       --  I is in the tree
+       and then Model (T) (I).K
+
+
+       --  I has a left child
+       and then Peek (T, I, Left) /= Empty,
+     Post =>
+       --  The size of the tree is preserved
+       Size (T) = Size (T)'Old
+
+
+       --  When rotating on the root, the left child of the root becomes the
+       --  new root, otherwise the root is preserved.
+       and then (if Root (T)'Old /= I then Root (T) = Root (T)'Old
+                 else Root (T) = Peek (T'Old, I, Left))
+
+
+       --  The left child of I becomes it parent, of which I becomes the right
+       --  child.
+       and then Parent (T, I) = Peek (T'Old, I, Left)
+       and then Position (T, I) = Right
+
+
+       --  The parent of I becomes the parent of its previous left child, and
+       --  unless I was root, as the same child (left or right).
+       and then Parent (T, Peek (T'Old, I, Left)) = Parent (T'Old, I)
+       and then (if Root (T)'Old /= I
+                 then Position (T, Peek (T'Old, I, Left)) = Position (T'Old, I))
+
+
+       --  During the rotation, the subtree located at the right child of the
+       --  left child of I becomes the left child of I.
+       and then (if Peek (T'Old, Peek (T'Old, I, Left), Right) /= Empty
+                 then Parent (T, Peek (T'Old, Peek (T'Old, I, Left), Right)) = I
+                   and then Position (T, Peek (T'Old, Peek (T'Old, I, Left), Right)) = Left)
+
+
+       --  Except for I, its left child, and the right child of its left child,
+       --  the parent link is preserved.
+       and then (for all J in Index_Type =>
+                  (if J /= I
+                     and then (Parent (T'Old, J) /= I
+                               or else Position (T'Old, J) = Right)
+                     and then (Parent (T'Old, J) = Empty
+                               or else Parent (T'Old, Parent (T'Old, J)) /= I
+                               or else Position (T'Old, Parent (T'Old, J)) = Right
+                               or else Position (T'Old, J) = Left)
+                   then Parent (T, J) = Parent (T'Old, J)))
+
+
+       --  Except for I, its left child, and the right child of its left child,
+       --  the position is preserved.
+       and then (for all J in Index_Type =>
+                  (if J /= I
+                     and then Parent (T'Old, J) /= Empty
+                     and then (Parent (T'Old, J) /= I
+                               or else Position (T'Old, J) = Right)
+                     and then (Parent (T'Old, Parent (T'Old, J)) /= I
+                               or else Position (T'Old, Parent (T'Old, J)) = Right
+                               or else Position (T'Old, J) = Left)
+                   then Position (T, J) = Position (T'Old, J)))
+
+
+       --  Nodes in the tree are preserved. We are using two implications
+       --  instead of a Boolean equality, as automatic provers work better with
+       --  two implications, which they can instantiate just for the directions
+       --  of interest in a given proof.
+       and then (for all J in Index_Type =>
+                  (if Model (T) (J).K then Model (T'Old) (J).K))
+       and then (for all J in Index_Type =>
+                  (if Model (T'Old) (J).K then Model (T) (J).K))
+
+
+       --  Values are preserved
+       and then Values (T) = Values (T'Old)
+
+
+       --  The left child of I now is the former right child of I's former left
+       --  child.
+       and then Peek (T, I, Left) = Peek (T'Old, Peek (T'Old, I, Left), Right)
+
+
+       --  I's former left child has taken its place in the tree
+       and then (if Parent (T'Old, I) /= 0 then
+                      Peek (T, Parent (T'Old, I), Position (T'Old, I)) =
+                      Peek (T'Old, I, Left))
+
+
+       --  I is now the right child of its former left child
+       and then Peek (T, Peek (T'Old, I, Left), Right) = I
+
+
+       --  Other children are preserved
+       and then
+       (for all J in Index_Type =>
+           (for all D in Direction =>
+               (if (J /= I or D = Right)
+                 and (J /= Parent (T'Old, I) or else D /= Position (T'Old, I))
+                 and (J /= Peek (T'Old, I, Left) or else D = Left)
+                 and Model (T) (J).K
+                then Peek (T, J, D) = Peek (T'Old, J, D))));
+
+
+private
+
+
+   type Value_Array is array (Index_Type) of Natural;
+
+
+   type Search_Tree is record
+      Root   : Extended_Index_Type := Empty;
+      Struct : Forest;
+      Values : Value_Array := (others => 0);
+   end record
+     with Type_Invariant =>
+       (if Size (Struct) = 0 then Root = Empty
+        else Root /= Empty
+          and then Valid_Root (Struct, Root)
+          and then Ordered_Leafs (Struct, Root, Values));
+
+
+   function Ordered_Leafs (F : Forest; Root : Index_Type; Values : Value_Array)
+     return Boolean
+   --  Returns True iff the values in the tree rooted at Root are ordered in
+   --  strictly ascending chain in a left-to-right depth-first traversal of
+   --  the tree.
+
+
+   with
+     Ghost,
+     Pre => Valid_Root (F, Root);
+
+
+   function Model (T : Search_Tree) return Model_Type is
+     (Model (T.Struct, T.Root));
+
+
+   function Size (T : Search_Tree) return Extended_Index_Type is
+     (Size (T.Struct));
+
+
+   function Root (T : Search_Tree) return Index_Type is
+     (T.Root);
+
+
+   function Parent (T: Search_Tree; I : Index_Type) return Extended_Index_Type is
+     (Parent (T.Struct, I));
+
+
+   function Position (T: Search_Tree; I : Index_Type) return Direction is
+     (Position (T.Struct, I));
+
+
+   function Peek (T : Search_Tree; I : Index_Type; D : Direction) return Extended_Index_Type is
+     (Peek (T.Struct, I, D));
+
+
+end Search_Trees;
+
+****************************************************************
+
+From: Edmond Schonberg
+Sent: Friday, February  9, 2018  12:58 PM
+
+I hesitate to wade into this heated discussion, but I'd like to try to
+summarize my view on the possible approaches to Reduce operations:
+
+a) I agree with Tuck that iterators are an extremely useful addition to Ada,
+and that iterators over containers add  a lot of expressive power to the
+language. Iterators by themselves describe a stream of values that can be used
+to construct an array or a container object, or can be an argument to a
+reduction operation. As such they are a natural prefix for an attribute, if
+this is the choice syntax. That prefix may look like an aggregate but of course
+there is no need to build one, and therefore no type needs to be specified and 
+the resolution rules are the same as for any other iterator construct.
+
+b) The specification of the default of the reduction by means of a bracketed
+value looks unappealing to all but to Tuck. Making the nature of the construct
+depend on an <init> buried somewhere in an expression is error-prone, requires
+some arbitrary look-up (for the reader, parsing is trivial anyway) and does
+not advertise properly the purpose of the construct.  Attribute 'Reduce seems
+like a simpler and clearer alternative.
+
+c) A stream generated by an iterator can be fed into a reduction operation, but
+there will be reductions performed over already constructed composite objects,
+and thus the prefix of 'Reduce can also be an expression that yields a type
+with defined iterator aspects.  This makes the iterator itself implicit without
+obscuring the purpose of the construct.
+
+d) Filters over iterators are familiar from other languages, are indispensable
+in mathematical notation, and should be part of the language: they are
+intuitive, expressive, and trivial to implement.
+
+e) Container aggregates fall naturally out of these iterators. We do need to
+generalize named associations to handle map containers of course, and Tuck has
+already proposed such. Cross-product iterators, i.e. nested iterators to
+describe multi-dimensional constructions, are useful but I'm not sure the
+syntax is clear enough.
+
+My two pesetas ...
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Friday, February  9, 2018  3:28 PM
+
+Thanks, Ed, for wading in.  I agree with you.  In particular, I understand the
+opposition (by some ;-) to the "<initial_val>" syntax, and would be happy with
+some potentially more visible alternative such as "{initial_val}" or
+"<<initial_val>>".  Alternatively, the aggregate'Reduce syntax also works, so
+long as the overload resolution rules for aggregate'Reduce are appropriate for
+the purpose, and don't require the definition of a named type for the
+pseudo-aggregate just so you can use qualified-expression syntax.  It also
+presumes we define container aggregates.  
+
+Both syntactic approaches rely on having iterators with filters.  I have no 
+particular problem with allowing "when" clauses on more kinds of statements,
+but that seems to me (but not to everyone, I guess) rather orthogonal to their
+role as filters.  Their role as filters is closer to their role as entry-body
+guards, than to the existing use of "when" clauses on an "exit" statement.
+But if horse-trading implies we get raise...when as a side-effect of adding
+filters, so be it.
+
+JP's proposal, which puts the aggregate inside the parameter list (after
+'Reduce) potentially makes special overload resolution rules harder, as it
+only applies to the first parameter of potentially multiple parameters,
+whereas there is exactly one prefix to an attribute, and we already know that
+some attributes (such as 'Access) have special overloading rules for their
+prefix. I also happen to think it is more natural to have the iterator appear
+first, though that is clearly a personal preference.  That is, I like to see 
+the generator of the values first, and then the operation to be performed on
+the generated values.  Finally, putting the iterator inside the parameter list
+would result in two levels of parentheses in most cases, which adds to the
+unpleasantness.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Friday, February  9, 2018  4:44 PM
+
+> I hesitate to wade into this heated discussion, but I'd like to try to 
+> summarize my view on the possible approaches to Reduce operations:
+
+Thanks for doing so. Someone has to be a voice of moderation here. :-)
+
+I think it is interesting that I view your message as generally agreeing with
+me (particularly on the most critical issues -- the silly <0> syntax and the
+use of an attribute on actual array/container objects), while Tucker seems to
+view the same (probably with a vastly different view of what is most
+critical :-). Hopefully, that means that our differences are really not that
+significant.
+
+...
+> e) Container aggregates fall naturally out of these iterators. We do 
+> need to generalize named associations to handle map containers of 
+> course, and Tuck has already proposed such. Cross-product iterators, 
+> i.e. nested iterators to describe multi-dimensional constructions, are 
+> useful but I'm not sure the syntax is clear enough.
+
+This I (mildly) disagree with. Perhaps this is mainly terminology. I see
+container aggregates and "these iterators" as completely orthogonal issues.
+Probably they would end up having similar syntax, but clearly the resolution
+rules are completely different (the type coming from context for an aggregate,
+and the type coming from ??? for "these iterators").
+
+(I'm quoting "these iterators" as they have no obvious name, not to suggest
+disapproval.)
+
+I also note that anything done for "these iterators" and container aggregates
+also has to be done for array aggregates. (Arrays and containers -- especially
+vector containers -- being as equivalent as possible.) Assuming that is the
+case, "these iterators" are closer to an array aggregate than a container
+aggregate (a container needing a complex iterator, while for an array,
+iterating is simple indexing) -- but probably they'd be better being defined
+completely separately.
+
+In any case, I've said repeatedly that I'm open to looking at ease-of-use
+improvements (which is all this is IMO). My roadmap has always been:
+(1) Define 'Reduce using as few new concepts as possible and with no new syntax.
+(2) Create examples using that 'Reduce.
+(3) Determine what is problematical about those examples (specifically in
+    ease-of-use).
+(4) Propose an extension (or extensions) to address those problems.
+(5) Create revised examples.
+(6) Repeat as necessary.
+
+The advantage of such a roadmap is that one always has something usable (if
+not ideal), and with the deadline approaching, we need to take similar
+approaches to everything we do. (Incremental development is the only sane form
+of development. :-)
+
+Tucker seems to have been leaping to the end, or leaping to the conclusion
+that the first step is the only one, or something. He hasn't even commented
+on the examples that Brad and I built to demonstrate that his examples can
+easily be written using the proposed Reduce attribute (and that after
+complaining that we hadn't written any such examples).
+
+He doesn't seem to realize that I've spent a lot of time thinking about the
+bare aggregate case; the problem is that I can't make it work, because the 
+type of the prefix is needed -- it's what determines the type/profiles of the
+various other parts of the attribute. I don't want to invent new kinds of
+type-less things (that's not Ada, and besides resolution is complex enough).
+So I was hoping someone more invested (like, say, Mr. Taft) would make an
+actual proposal for such an extension, rather than just going back to square
+one and complaining.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Friday, February  9, 2018  4:53 PM
+
+>This claim of little value for quantified expressions may be based on 
+>your own coding style, or anticipation thereof, but we have a number of 
+>folks actually using Ada 2012, and SPARK 2014 which is based on it, and
+>quantified expressions are used heavily.
+
+Sigh. This example demonstrates all that is wrong with those existing
+mechanisms: little or no abstraction, giant expressions that are totally
+useless to a (human) client, loss of the #1 goal of Ada (emphasis on
+programming as a HUMAN activity), and more.
+
+Going further in a bad direction is not going to be helpful to Ada's
+underlying goals - the human element in programming.
+
+This is too off-topic to discuss further (and moreover, it is irrelevant, as
+we're stuck with these things and certainly they are going to appear in
+aggregates as well).
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Friday, February  9, 2018  5:57 PM
+
+...
+> Building blocks always produce somewhat more verbose representations 
+> for any given special case (e.g. reducing an array), but ultimately 
+> provide much more power to the programmer when the programmer wants to 
+> go beyond a specially supported special case.  I can see that the 
+> 'Reduce attribute works nicely for the array special case, but fails 
+> when the elements you want to reduce represent some other sort of 
+> sequence of items, such as that which can be produced by a combination 
+> of an iterator, an optional filter, and an Ada expression.
+
+We're going to have to agree to disagree on what is and is not a building
+block.
+
+But the real problem here is the claim that the 'Reduce attribute "fails"
+when the elements are "some other sequence of items". That's the definition
+of an array aggregate! Brad and I had no real difficulty writing your examples
+using the proposed simple features (and even without the filter).
+With the filter, many of them didn't even need the 'Reduce.
+
+You have not made any comment on the actual examples, which makes it
+impossible to answer any criticisms, propose any additional features or
+anything else. I've never expected that the current proposal would be the
+only thing we'd consider, but just saying that it "fails" without any attempt
+to explain why is not progressing the discussion at all.
+
+What *I* find from the examples is two-fold: one, the restriction that all of
+the operands of the reducer expression are the same type is too restrictive,
+and two, array aggregates need the addition of the other kinds of container
+iterators in order that they better represent sequences. I also find the need
+to declare an appropriate array type annoying, but I'm unable to find any sane
+semantics that avoids it.
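+
+(A hypothetical sketch of that annoyance -- the type name and the proposed
+syntax here are illustrative only: without special resolution rules, the
+prefix must be qualified with a named array type, as in
+
+    type Int_Seq is array (Positive range <>) of Integer;
+    Total : constant Integer :=
+       Int_Seq'(for I in 1 .. 4 => I * I)'Reduce ("+", 0);
+
+where today the same value, 30, would be computed by an explicit loop
+accumulating Total := Total + I * I.)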
+
+> I think it is a big loss to only support reduction over an existing 
+> concrete object with a single binary function.  By using more flexible 
+> syntax, we can support reduction of a sequence of values produced in 
+> many different ways, without ever having to produce in memory a single 
+> concrete object representing the sequence.    
+
+Again, you're totally ignoring what is actually proposed. The actual proposal
+includes (or is intended to, I think Brad may have left it out) a
+notwithstanding rule allowing interleaving of the evaluation of the prefix and
+'Reduce, including, if the prefix is an aggregate, interleaving of the
+evaluation of the individual associations. (Aside: gotta be careful that
+doesn't allow evaluation out-of-order for array delta aggregates.) And Brad
+already included a rule saying that the temporary object of such a prefix need
+not actually exist.
+
+There never, ever has been any intent that the prefix has to be materialized.
+On top of which, any compiler could apply an as-if optimization to not
+materialize an object that has no other lifetime. So this is simply not a real
+issue, and it is intensely frustrating to have you raise this bogeyman
+repeatedly.
+
+> The original discussion of 'Reduce had it combined with a container 
+> aggregate, and at that point I presumed we were talking about a very 
+> special interpretation of the container aggregate, where it didn't 
+> have a type, it just represented a sequence of values that are to be 
+> reduced.  But that presumes we agree on the syntax of a container 
+> aggregate and the special significance of 'Reduce applied to such an 
+> aggregate.
+>  But if we start insisting that this is a "normal" attribute with 
+> normal rules such as being able to resolve the prefix to a single, 
+> pre-existing type, then the idea falls apart in my view.  So if we 
+> have heartburn with giving the aggregate'Reduce attribute special 
+> overloading rules, etc., then we should go back to some sort of new 
+> syntax rather than lose all of the power of using an iterator, filter, 
+> etc., which the container aggregate would provide.
+
+The insight that I had is that this operation really doesn't have anything to
+do with a container at all; as you say, it just is a sequence of items.
+And one way to represent a sequence of items in Ada is as an array. Once one
+just looks at this as an array (only) operation, it becomes much simpler.
+
+Moreover, this has nothing whatsoever to do with container aggregates (whether
+we have them at all, etc.) Rather, it is really about *array* aggregates, and
+potential expansions of what they can represent.
+
+If we have to do special resolution, I view that as operating on something
+with the syntax of an array aggregate, but that is not really an array
+aggregate. But I think I need to see more realistic examples to be convinced
+of the need, especially as it requires a potentially problematical syntax
+change.
+
+Container aggregates are completely orthogonal, as their primary goal is to
+look as closely as possible to an array aggregate. (For vectors, that match
+should be almost exact). They have nothing that I can see to do with the
+prefix of this attribute, regardless of what rules we ultimately chose for it.
+
+****************************************************************
+
+From: John Barnes
+Sent: Saturday, February 10, 2018  8:53 AM
+
+I fear I have been too distracted by giving lectures at Oxford to concentrate
+on ARG.
+
+I must say however, that I am concerned about the overuse of functional
+notation. In small lumps it is OK. But readers need to "take a breath" as it
+were and that is difficult if one's brain is heavily nested in a giant
+functional lump.
+
+I am also being triggered in that a student just submitted a whole lump of
+Mathematica for me to look at. It's readable in small doses but indigestible
+when the brackets mount up.
+
+One of the design goals for Ada was to ensure that it was readable. And
+remember that a program is read many more times than it is written. Jean
+was right much of the time.
+
+(I am also worried that (deo volente) when I rewrite my book for Ada 2020 that
+it might be nearly 2000 pages.)
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Saturday, February 10, 2018  11:21 AM
+
+Luckily most people don't try to prove full functional correctness.  I agree
+you wouldn't want most of your specs to look like this.  Much more common is
+to just specify some simple properties that should be true before or after an
+operation, without trying to specify every detail of the effects.  But even
+simple properties are often best captured with a quantified expression.  The
+classic is the property of being ordered.  Most specs for sorting don't bother
+requiring that the sorted output be a permutation of the unsorted input, but
+they do often want to express succinctly that the result is ordered.
+
+****************************************************************
+
+From: Edmond Schonberg
+Sent: Sunday, February 11, 2018  2:27 PM
+
+> He [Tuck] doesn't seem to realize that I've spent a lot of time 
+> thinking about the bare aggregate case; the problem is that I can't 
+> make it work, because the type of the prefix is needed -- it's what 
+> determines the type/profiles of the various other parts of the 
+> attribute. I don't want to invent new kinds of type-less things 
+> (that's not Ada, and besides resolution is complex enough). So I was 
+> hoping someone more invested (like, say, Mr. Taft) would make an 
+> actual proposal for such an extension, rather than just going back to square
+> one and complaining.
+
+I agree on the need for simple resolution rules for this new construct. It may
+be inevitable to use a qualification for the prefix of 'Reduce, but let me
+propose another simplification (hope springs eternal...).  So far we have
+described the prefix either as an object of an iterable type (array or
+container implementing an iterator interface) or else as an aggregate.
+
+I just don't see an arbitrary aggregate as a useful construct here: if you
+need to apply a reduction to some random collection of values you will have
+built that collection previously as an object. Furthermore, an aggregate with
+a miscellaneous collection of values, specified positionally or by
+associations (named or iterated) will not lend itself to any kind of parallel
+processing. So I would propose to restrict the prefix to a parenthesized
+iterated component association, which is the best model we have for a
+generated sequence:
+
+  (for X of C when P (X) => F (X))'Reduce (Op, init)
+
+The needed resolution rule is now for the expression in the association, not
+for the aggregate, which suggests two simple resolution rules:
+
+a) the expected type of the expression is that of the default value (the second
+   argument of the attribute)
+
+b) The expression must have a single type independent of context. This can
+   be obtained by means of a qualified expression when needed.
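+
+(As a hypothetical illustration of the intended meaning, with invented
+names: for C an array of Integer, P (X) = X > 0, F (X) = X * X, Op = "+",
+and init = 0, the reduction above would compute the same value as
+
+    Total : Integer := 0;              --  init
+    ...
+    for X of C loop
+       if X > 0 then                   --  the filter "when P (X)"
+          Total := Total + X * X;      --  Op (Total, F (X))
+       end if;
+    end loop;
+
+without ever materializing an aggregate object.)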
+
+This way there is no misleading suggestion that the prefix is in any way a
+constructed object. I suggest that this covers all useful cases of explicit
+sequences. The other form:
+
+     CíReduce (Op, Init)
+
+can have a similar rule: the object C must have a unique type (and a
+qualification in that case is a small burden).
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Sunday, February 11, 2018  6:06 PM
+
+I have what I believe are pretty simple and implementable overload resolution
+rules for pseudo-aggregate'Reduce, without needing a qualified expression.  I
+am reviewing them with Randy before sending them to the ARG as a whole.
+
+****************************************************************
+
+From: Edward Fish
+Sent: Monday, February 12, 2018  12:41 PM
+
+Some of this discussion has reminded me of Guy Steele's talk "Four Solutions to
+a Trivial Problem" - especially around 40 min when he's talking about algebraic
+properties and then goes into parallel properties, eventually drawing the
+parallel between garbage-collection and parallelization (the former being
+assigning data to memory, the latter assigning work [code] to processors). 
+
+It also occurs to me that the application/promotion of ranges into a
+first-class Ada citizen might help; perhaps it might not be required, but as
+a first-class construct we could avoid the temptation to use "for" as a map. (He
+makes the point that C-style "for( ; ; )" is possibly the worst way to
+represent a map, but that also applies to what we're trying to do here;
+granted that there is a big distinction between syntax and semantics, but
+there's a reason that Java 8's streams have several tricks/libraries to work
+with indexed elements.)
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, February 12, 2018  2:35 PM
+
+Hearing no objection from Randy, here is the overload resolution I would
+recommend for pseudo-aggregate'Reduce(...):
+
+We have talked about allowing a syntax of:
+
+  aggregate'Reduce(...)
+
+for our reduction expression (in addition to obj'Reduce()), but have not
+specified the new syntax rules that will allow this, nor specified the
+overload resolution that will make it usable without introducing a bogus
+type qualifier.  The intent I believe is that we are just adopting the
+aggregate *syntax*, presuming a more generalized use of iterators and perhaps
+filters in the aggregate, but there is no expectation that an actual object
+will be constructed. The new syntax might better be described as:
+
+ (for iterator [filter] => elem_expression)'Reduce(combiner, initial-val)
+
+since we are concerned about doing overload resolution on the parts of the
+aggregate, not on the aggregate as a whole.
+
+The easiest thing to require is that a 'Reduce attribute reference have a
+single expected type.  Once we require that, then that provides the expected
+type for the initial-val.  We don't really need to require that combiner be
+symmetrical so long as we agree that the initial-val is the first operand.
+In any case, the overload resolution problem is now equivalent to:
+
+  X : expected_type := combiner(expected-type'(initial-val), elem_expression);
+
+This seems pretty straightforward to resolve.  The iterator/filter are somewhat
+irrelevant to this, and need to be resolved on their own without external
+context.
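+
+(A hypothetical instance, with invented names: for an expected type of
+Float, the expression
+
+    X : Float := (for I in 1 .. N => F (I))'Reduce (Combine, 0.0);
+
+would resolve as if checking
+
+    X : Float := Combine (Float'(0.0), F (I));
+
+so an overloaded Combine is disambiguated by the expected type, and no
+qualification is needed on the aggregate itself.)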
+
+Comments welcome...
+
+****************************************************************
+
+From: Edmond Schonberg
+Sent: Monday, February 12, 2018  2:50 PM
+
+We seem to be in full agreement!
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, February 12, 2018  3:00 PM
+
+Very glad to hear.
+
+****************************************************************
+

Questions? Ask the ACAA Technical Agent