CVS difference for ai12s/ai12-0119-1.txt

Differences between 1.10 and version 1.11
Log of other versions for file ai12s/ai12-0119-1.txt

--- ai12s/ai12-0119-1.txt	2017/11/30 04:18:01	1.10
+++ ai12s/ai12-0119-1.txt	2018/01/10 00:58:39	1.11
@@ -1,4 +1,4 @@
-!standard 5.5.2 (2/3)                              17-10-11    AI12-0119-1/04
+!standard 5.5.2 (2/3)                              17-12-27    AI12-0119-1/05
 !class Amendment 14-06-20
 !status work item 14-06-20
 !status received 14-06-17
@@ -7,8 +7,8 @@
 !subject Parallel operations
 !summary
 
-New syntax and semantics to facilitate parallelism via Parallel Loops,
-Concurrent Blocks, and Parallel Reduction Expressions.
+New syntax and semantics to facilitate parallelism via Parallel Loops and
+Parallel Blocks.
 
 !problem
 
@@ -47,8 +47,6 @@
 - To efficiently guide the compiler in the creation of effective
   parallelism (without oversubscription) with minimal input from the
   programmer;
-- To support the parallel reduction of results calculated by parallel
-  computation;
 - To avoid syntax that would make a POP erroneous or produce noticeably
   different results if executed sequentially vs in parallel.
 
@@ -59,7 +57,7 @@
 
 Note that in this model the compiler will identify any code where a
 potential data race occurs (following the rules for concurrent access
-to objects as specified in the Language Reference Manual RM 9.10(23)),
+to objects as specified in the Language Reference Manual RM 9.10([23]{11???})),
 and point out where objects cannot be guaranteed to be independently
 addressable. If not determinable at compile-time, the compiler may
 insert run-time checks to detect data overlap.
@@ -78,35 +76,28 @@
 presence of the asynchronous transfer of control capability
 (RM 9.7.4 (23)).
 
-There are three areas of this proposal; one that introduce capabilities
-for parallel/concurrent blocks, another that introduces capabilities for
-parallel loops, and another that introduces capabilities for parallel
-reduction.
+There are two areas of this proposal: one that introduces capabilities
+for parallel blocks, and another that introduces capabilities for
+parallel loops. A third proposal that introduces capabilities for parallel
+reduction is covered in a separate AI, AI-????????.
 
-Concurrent Blocks
+Parallel Blocks
 ---------------
 
-Concurrent blocks may be used to specify that two or more parts of an
-algorithm may be executed concurrently, and possibly in parallel with
-each other.
-
-Semantics: A concurrent block statement encloses two or more sequences of
-statements (two or more "concurrent sequences") separated by the reserved
-word "and".  Each concurrent sequence represents a separate tasklet, but
+Parallel blocks may be used to specify that two or more parts of an
+algorithm may be executed in parallel with each other.
+
+Semantics: A parallel block statement encloses two or more sequences of
+statements (two or more "parallel sequences") separated by the reserved
+word "and".  Each parallel sequence represents a separate tasklet, but
 all within a single Ada task. Task identity remains that of the
 enclosing Ada task, and a single set of task attributes is shared
 between the tasklets. Each sequence of statements is assigned a unique
-and independent executor to execute the tasklet. If the parallel keyword
-is present, then each executor can execute on any core or migrate
-independently to other cores and the tasklets can execute in parallel
-with each other. If the parallel keyword is absent, then each executor
-associated with the concurrent block is assigned to the core of the
-enclosing Ada task, unless the compiler can determine implicitly
-that the tasklets can safely be executed on different cores in parallel.
+and independent executor to execute the tasklet.
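+
+As a non-normative illustration (the names Load_Left_Half, Load_Right_Half,
+and Image are hypothetical), a parallel block using the proposed syntax
+might look like:
+
+   parallel
+   do
+      Load_Left_Half (Image);
+   and
+      Load_Right_Half (Image);
+   end do;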
 
 With respect to the rules for shared variables (see RM 9.10(13)), two
-actions occurring within two different concurrent sequences of the same
-concurrent block are not automatically sequential, so execution can be
+actions occurring within two different parallel sequences of the same
+parallel block are not automatically sequential, so execution can be
 erroneous if one such action assigns to an object, and the other reads
 or updates the same object or a neighboring object that is not
 independently addressable from the first object.  The appropriate use of
@@ -117,518 +108,110 @@
 specified to enable the static detection of such problems at compile
 time (see AI12-0079-1 and AI12-0064-1).
 
-Any transfer of control out of one concurrent sequence will initiate the
-aborting of the other concurrent sequences not yet completed.  Once all
-other concurrent sequences complete normally or abort, the transfer of
-control takes place.  If multiple concurrent sequences attempt a transfer
-of control before completing, one is chosen arbitrarily and the others
-are aborted.
+Any transfer of control out of one parallel sequence will initiate the
+aborting of the other parallel sequences not yet completed.  Once all
+other parallel sequences complete normally or abort, the transfer of
+control takes place.  If multiple parallel sequences attempt a transfer
+of control before completing, the first occurrence of a transfer of control is
+chosen and the others are aborted.
 
-If an exception is raised by any of the concurrent sequences, it is
+If an exception is raised by any of the parallel sequences, it is
 treated similarly to a transfer of control, with the exception being
 propagated only after all the other sequences complete normally or due
-to abortion.  If multiple concurrent sequences executing in parallel
-raise an exception before completing, one is chosen arbitrarily and the
-others are aborted. The concurrent block completes when all of the
-concurrent sequences complete, either normally or by being aborted.
+to abortion.  If multiple parallel sequences executing in parallel
+raise an exception before completing, the first occurrence is chosen and the
+others are aborted. The parallel block completes when all of the
+parallel sequences complete, either normally or by being aborted.
 
 Note that aborting a tasklet need not be preemptive, but should prevent
-the initiation of further nested concurrent blocks or parallel loops.
+the initiation of further nested parallel blocks or parallel loops.
 
 Parallel Loops
 --------------
 
-A common approach to parallelizing loops is to break the loop iterations
-into chunks where each chunk consists of a subset of the entire
-iteration set, and may be as small as a single iteration, up to as large
-as the entire number of iterations, or somewhere in-between. In Ada,
-each chunk becomes a separate tasklet.
-
-The implementation is expected to determine appropriate values for the
-number of chunks and the size of each chunk for each parallel loop, without
-these having to be specifed by the programmer.
+A parallel loop is a loop where any iteration of the loop can execute in
+parallel with any other. There is no implied execution order for the iterations.
+Iterations that execute in parallel with other iterations are associated with
+different tasklets.
 
 To indicate that a loop is a candidate for parallelization, the reserved
 word "parallel" may be inserted immediately before the word "for" in a
-"for" loop. Such a loop is broken into chunks of iterations, where each
-chunk is processed sequentially, but potentially in parallel with the
-other chunks of iterations.
+"for" loop.
 
-Note that the same rules presented for concurrent blocks above apply to
+Note that the same rules presented for parallel blocks above also apply to
 the update of shared variables and the transfer of control to a point
-outside of the loop, and for this purpose each iteration (or chunk) is
-treated as equivalent to a separate sequence of a concurrent block.
-
-Reduction Expressions
----------------------
+outside of the loop, and for this purpose each iteration can be considered
+as being equivalent to a separate sequence of a parallel block.
 
-A Reduction Expression is a syntactic construct, similar to a quantified
-expression that can be used to combine a set of values into a single result.
-This mechanism is called reduction.
-
-Indeed, a quantified expression can be viewed as being a special purpose
-Reduction Expression that applies an operation, the Predicate, to a set of
-values to reduce the predicate results to a single Boolean result. Similarly, an
-array aggregate in the form of an iterated_component_association can also be
-viewed as being a special purpose Reduction Expression that applies an
-operation, in place concatenation, to a set of values to reduce into an array
-result.
-
-A Reduction expression is a more general syntactic form where there is
-less constraint on the type of operation to be applied to the set of
-values, and the operation can produce result values of types other than
-Boolean values, or array creation.
-
-Semantics: A Reduction Expression looks like a quantified expression,
-except that the quantifier keyword ("all" or "some") is not present, and
-the predicate expression is replaced with an expression that evaluates
-to a function call returning an object of a nonlimited type, that is
-called the combiner_function_call.
-
-The combiner function call must have at least one parameter of the same
-type as the type of the Reduction expression. The combiner function call
-is called iteratively to accumulate a result, which ultimately becomes
-the result of the Reduction Expression. The accumulation of the result
-occurs as the result of each iteration is implicitly fed back as an
-input parameter to the combiner function call. The implicit parameter is
-identified syntactically based on the "<>" box notation syntax, except
-that the box notation can optionally enclose the initial value to be
-used for the first call to the function, since at that point there would
-otherwise be no existing value to pass into the function. The initial
-value also serves to be the result of the reduction expression for the
-case when there are no iterations performed (E.g. For the case when a
-when a loop_parameter_specification for a Reduction Expression specifies
-a null range).
-
-If the initial value is not specified, then the initial value is assumed
-to be a special value called the identity value that is associated with
-the combiner function call. The identity value is specified by either
-applying an Identity aspect to the declaration of the function denoted
-by the combiner function call, or indirectly to another function called
-a reducer function that is similarly associated with the combiner
-function call via the use of a Reducer aspect applied to the function
-denoted by the combiner function call. The Identity aspect of a function
-identifies a value of the same type as the combiner function call result.
-If the combiner_function_call box parameter does not specify an initial
-value and the combiner_function_call is not associated with an identity
-value, then the compilation is illegal.
-
-To indicate that a Reduction Expression is a candidate for
-parallelization, the reserved word "parallel" may be inserted
-immediately before the reserved word "for". Similar to the way Parallel
-Loops are broken into chunks, a Parallel Reduction expression will also
-have chunking applied, where each chunk is processed sequentially, but
-potentially in parallel with other iteration chunks of the expression.
-
-For each chunk, an accumulator result is generated that is local to the
-tasklet. The local accumulator result for each chunk is initialized to
-the identity value, except the first chunk is initialized to the initial
-value if specified, otherwise the first chunk is also initialized to the
-identity value. As the chunk results are calculated in parallel, the
-result of the Reduction Expression is generated by combining/reducing
-the final function results of each chunk by applying a Reducer function
-for the Reduction Expression. A parallel reduction expression is illegal
-if the combiner function call is not associated with an identity value,
-regardless whether an initial value is specified in the box parameter or
-not.
-
-The Reducer function for a parallel Reduction expression is a function
-that accepts two parameters where both parameters are of the same type,
-and returns a result of that same type, which is also the type of the
-Reduction expression.
-
-The Combiner function must have a Reducer aspect specified for
-its declaration if the function itself is not a Reducer function. The
-Reducer aspect identifies another function that is to be used to combine
-multiple chunk results into the final result of the Reduction expression.
-The multiple results are combined two at a time.
-
-The Reducer function is expected to generate an associative result based on the
-input parameters. A Reducer function does not need to be commutative (e.g.
-vector concatenation), as it is expected that the implementation will ensure
-that the results are combined in a manner that is consistent with sequentially
-applying the Reducer function to the chunk results in iteration order. If the
-parallel keyword is not present in the Reduction Expression, then sequential
-computation is assumed and the Reducer aspect does not need to be specified for
-the function declaration denoted by the combiner function call.
-
-For parallel Reductions Expressions, it is important that the value of the
-Identity aspect associated with a function does not affect the result of the
-computation. For example, if the reduction result type is Integer and the
-reducer is addition, then the Identity value should be zero, since adding zero
-to any value does not affect the value. Similarly, if the reducer is
-multiplication then the Identity value should be one, since multiplying any
-Integer value by 1 does not affect the value. If the result type is a vector and
-the reducer is concatenation, then the Identity value should be an empty vector,
-since concatenating an empty vector to another vector does not affect the value
-of the other vector.
-
-Note that the same rules presented for parallel blocks above apply to
-the update of shared variables, and for this purpose each iteration (or chunk)
-is treated as being a separate tasklet.
-is treated as equivalent to a separate sequence of a parallel block.
-
 !wording
 
 Append to Introduction (28)
-"A concurrent block statement requests that two or more sequences of
- statements should execute concurrently with each other, and in parallel
- with each other if the parallel keyword is specified."
+"A parallel block statement requests that two or more sequences of
+ statements should execute in parallel with each other."
 
 Modify 2.9(2/3)
 Add "parallel" to the list of reserved words.
-
-Modify 4.3.3 (5.1/5)
-
-iterated_component_association ::= [parallel] for defining_identifier in discrete_choice_list => expression
-
-Modify 4.4(1/3)
-
-"In this International Standard, the term "expression" refers
-to a construct of the syntactic category expression or of any of the following
-categories: choice_expression, choice_relation, relation, simple_expression,
-term, factor, primary, conditional_expression, quantified_expression
-{, reduction_expression}."
-
-Modify 4.4(7/3)
-
-"primary ::=
-   numeric_literal | null | string_literal | aggregate
- | name | allocator | (expression)
- | (conditional_expression) | (quantified_expression) {| (reduction_expression)}"
-
-Add 4.4(15.h)
-
-"Wording Changes from Ada 2012
-
-Added Reduction_expression to primary."
-
-Replace 4.5.1(2-3) with
-
-"The following logical operators are predefined for every boolean type T
-
- function "and"(Left, Right : T) return T with Identity => True
- function "or" (Left, Right : T) return T with Identity => False
- function "xor"(Left, Right : T) return T with Identity => False"
-
- The following logical operators are predefined for every modular type T
-
- function "and"(Left, Right : T) return T
- function "or" (Left, Right : T) return T with Identity => 0
- function "xor"(Left, Right : T) return T with Identity => 0"
-
- The following logical operators are predefined for every one-dimensional
- array type T whose component type is a boolean type:
-
- function "and"(Left, Right : T) return T
- function "or" (Left, Right : T) return T
- function "xor"(Left, Right : T) return T"
-
-Add AARM note
-
-"The Identity aspect for the "and" function for modular types is not specified
- because it is difficult to statically specify a value that would work for all
- types. eg. type X is mod 97; We could say that it is T'Last for types where the
- modulus is a power of 2, for example, but that is not general enough,
- similarly, we could say that the Identity for the "and" function for a
- one-dimensional array is T'(others => True), but that only works if T is a
- constrained array type, so we dont bother trying to specify the Identity aspect
- for these operations."
-
-Modify 4.5.3(2)
-
-"function "+"(Left, Right : T) return T{ with Identity => T(0)}
- function "-"(Left, Right : T) return T{ with Reducer => "+"}
-"
-
-Modify 4.5.5(2)
-
-"function "*"  (Left, Right : T) return T{ with Identity => 1}
- function "/"  (Left, Right : T) return T{ with Reducer => "*"}
- function "mod"(Left, Right : T) return T
- function "rem"(Left, Right : T) return T"
-
-Modify 4.5.5(12)
-"function "*"(Left, Right : T) return T{ with Identity => 1.0}
- function "/"(Left, Right : T) return T{ with Reducer => "*"}
-"
-
-Modify 4.5.5(14)
-"function "*"(Left : T; Right : Integer) return T{ with Reducer => "*"}
- function "*"(Left : Integer; Right : T) return T{ with Reducer => "*"}
- function "/"(Left : T; Right : Integer) return T{ with Reducer => "*"}
-"
-
-Modify 4.5.5(19)
-"function "*"(Left, Right : universal_fixed) return universal_fixed{ with Identity => 1.0}
- function "/"(Left, Right : universal_fixed) return universal_fixed{ with Reducer => "*"
-"
-
-Modify 4.5.8(1/3)
-
-quantified_expression ::= [parallel] for quantifier loop_parameter_specification => predicate
-  | [parallel] for quantifier iterator_specification => predicate
-
-Modify 4.5.8 (6/4)
-
-For the evaluation of a quantified_expression, the
-loop_parameter_specification or iterator_specification is first
-elaborated. The evaluation of a quantified_expression then evaluates the
-predicate for the values each value of the loop parameter. {If the
-parallel keyword is not specified, these}[These] values are examined in
-the order specified by the loop_parameter_specification (see 5.5) or
-iterator_specification (see 5.5.2). {Otherwise these values are examined
-in an arbitary order consistent with parallel execution.}
-
-Add Section 4.5.9
-
-"Reduction Expressions"
-
-Reduction expressions provide a way to write a reduction that combines a set of
-values into a single result.
-
-Syntax
-
- reducer_function ::= function_specification
-
- reduction_expression ::=
-    [parallel] for loop_parameter_specification => combiner_function_call
-  | [parallel] for iterator_specification => combiner_function_call
-
-combiner_function_call ::= function_call
-
-Wherever the Syntax Rules allow an expression, a reduction_expression may be
-used in place of the expression, so long as it is immediately surrounded by
-parentheses.
-
-Discussion: The syntactic category reduction_expression appears only as a
-primary that is parenthesized. The above rule allows it to additionally be used
-in other contexts where it would be directly surrounded by parentheses. This is
-the same rule that is used for conditional_expressions; see 4.5.7 for a detailed
-discussion of the meaning and effects of this rule.
-
-Name Resolution Rules
-
-The expected type of a reduction_expression is any nonlimited type. The
-combiner_function_call in a reduction_expression is expected to return a
-result of that same type.
-
-A reducer_function is a function that has exactly two parameters where
-both formal parameters and the result are of the same type. A
-reducer_function is called implicitly to combine results from
-multiple executions of the combiner_function_call of a reduction
-expression when the reduction expression has parallel execution. The
-function denoted by a combiner_function_call can be associated with a
-reducer_function. If such an association exists, either the function is
-itself a reducer_function or its declaration has the Reducer aspect
-specified which indicates the associated reducer_function.
-
-An identity_value is a value that can be used to initialize implicit
-declarations of variables that accumulate the results of a
-reduction_expression. These accumulator variables are passed as the
-actual parameter associated with the reduction_expression_parameter of a
-combiner_function_call. The identity_value for the function denoted by a
-combiner_function_call is determined from either; the Identity aspect
-specified on the declaration of the denoted function, or by the Identity
-aspect specified on the declaration of another function named by the the
-Reducer aspect of the denoted function.
-
-Legality Rules
-
-The combiner_function_call of a reduction_expression shall have exactly one
-explicit_actual_parameter that is a reduction_expression_parameter (See 6.4(6)).
-
-If the parallel keyword is specified on the reduction_expression then
-the combiner_function_call shall either denote a function that is a
-reducer_function, or the denoted function shall have the Reducer aspect
-specified on its declaration.
-
-If the parallel keyword is specified on the reduction_expression or the
-reduction_expression_parameter of the combiner_function_call does not
-specify an initial_reduction_value then the combiner_function_call shall
-either denote a function that has the Identity aspect specified on its
-declaration, or the denoted function shall have the Reducer aspect
-specified on its declaration and the function named by the Reducer
-aspect shall have the Identity aspect specified on its declaration.
-
-The type of an initial_reduction_value or associated identity_value
-specified for a reduction_expression_parameter shall be of the same type
-as the reduction_expression.
-
-The result type and the type of both formal parameters of an associated
-reducer_function shall be of the same type as the reduction_expression.
-
-Static Semantics
-
- For a function_specification, the following language-defined
- operational aspects may be specified with an aspect_specification
- (see 13.1.1):
-
-  Identity
-
-The aspect Identity denotes a value that is to specified as the
-identity_value associated with a Reduction_Expression. The aspect shall
-be specified by a static expression, and that expression shall be
-explicit, even if the aspect has a boolean type.
-
-Identity shall be specified only on a function_specification
-declaration.
-
-Reason: The part about requiring an explicit expression is to disallow
-omitting the value for this aspect, which would otherwise be allowed by
-the rules of 13.1.1.
-
-Aspect Description for Identity: Identity value for a function.
-
-The expected type for the expression specified for the Identity aspect
-is the result type of the function_specification declaration on which it
-appears.
-
-Only one Identity aspect may be applied to a single function declaration;
-
-
-  Reducer
-
-The aspect Reducer shall denote a reducer_function
-that is to be associated with a declared function.
-
-Reducer The aspect Reducer denotes a function with the following
-specification:
-
- function reducer(L, R : in result_type) return result_type
-
-where result_type statically matches the subtype of the result of the
-declared function.
-
-Only one Reducer aspect may be applied to a single function declaration;
-
-Aspect Description for Reducer: Reducer function associated with the
-combiner_function_call of a reduction expression.
-
-Dynamic Semantics
-
-For the evaluation of a reduction_expression, the loop_parameter_specification
-or iterator_specification is first elaborated, and accumulator variables of the
-type of the reduction_expression are implicitly declared. Each accumulator
-variable corresponds to a unique non-overlapping subset of the iterations to be
-performed where all accumulators together cover the full set of iterations to be
-performed. For the accumulator dedicated to the first iterations, the
-accumulator is initialized to the initial_reduction_value, if specified,
-otherwise it is initialized to the associated identity_value of the
-combining_function_call. Any other accumulator values are initialized to the
-associated identity value of the combining_function_call.
-
-The evaluation of a reduction_expression then evaluates the
-combiner_function_call for the values of the loop parameter in the order
-specified by the loop_parameter_specification (see 5.5) or
-iterator_specification (see 5.5.2) within each iteration subset.
-For each subsequent evaluation of the combiner_function_call within an
-iteration subset, the result from the previous evaluation is
-passed as an actual parameter to the reduction_expression_parameter of
-the current evaluation.
-
-The value of the reduction_expression is determined as follows: As accumulator
-results are determined they are reduced into a single result two at a time by
-implicit calls to the reducer_function associated with the
-combiner_function_call. The accumulator results passed to the reducer_function
-are always for two adjacent iteration subsets where the result for the lower
-iteration subset is passed as the first actual parameter to the reducer_function
-and the result for the higher iteration subset is passed as the second actual
-parameter to the reducer_function. If the Reduction_Expression does not evaluate
-any iterations, then the value of the Reduction_Expression is the
-initial_reduction_value is specified, otherwise it is the identity_value
-associated with the combiner_function_call.
-
-Notes: The accumulator results must represent adjacent iteration subsets
-as described above to ensure that non-commutative reductions will
-produce consistent results for parallel execution.
-
-Examples
-
-  A reduction expression to calculate the sum of elements of an array
-
-  (parallel for Element of Arr => <0> + Element)
-
-  A reduction expression to calculate the minimum value of an array
 
-  (parallel for X of Arr => Integer'Min(<Integer'Last>,  X))
+Modify 5.1(5/2)
+add "parallel_block_statement" to the list of compound statements
 
-  A reduction expression to create an unbounded string containing the alphabet
-
-  (for Letter in 'A' .. 'Z' => <Null_Unbounded_String> & Letter)
-
-  A reduction expression to create a string containing the alphabet
-
-  (for Letter in 'A' .. 'Z' => <""> & Letter)
-
-  A reduction expression to determine how many people in a database are 30 or older
-   ThirtySomething : contant Natural :=
-     (parallel for P of Personel => <0> + (if Age(P) > 30 then 1 else 0));
-
-  An expression function that returns its result as a Reduction Expression
-
-   function Factorial(N : Natural) return Natural is (for J in 1..N => <1> * J);
-
-  An expression function that computes the Sin of X using Taylor expansion
-   function Sin(X : Float; Num_Terms : Positive := 5) return Float is
-     (for I in 1..Num_Terms => <0.0> + (-1.0)**I * X**(2*I-1)/Float(Fact(2*I-1)));
-
-   A reduction expression that computes taxable income
-
-   Taxable_Income : constant Float := Total_Income +
-     (for Deduction of Deductions => <0.0> - Deduction);
-
-   A reduction expression that outputs the sum of squares
-
-   Put_Line ("Sum of Squares is" & Integer'Image(for I in 1 .. 10 => <0> + I**2));
-
-   A reduction expression to compute the value of Pi in parallel
-
-   Number_Of_Steps : constant := 100_000;
-   Step : constant := 1.0 / Number_Of_Steps;
-
-   Pi : constant Long_Float := Step *
-     (parallel for I in 1 .. Number_Of_Steps =>
-         <0.0> + (4.0 / (1.0 + ((Long_Float (I) - 0.5) * Step)**2)));
-
-Wording Changes from Ada 2012
-Reduction expressions are new.
-
 Modify 5.5(3/3)
 
 Iteration_scheme ::= while condition
    | [parallel] for loop_parameter_specification
    | [parallel] for iterator_specification
 
-Modify 5.5 (6/5)
+{Legality Rules}
+A loop_statement shall not have both of the reserved words reverse and parallel present.
 
-A loop_parameter_specification declares a {set of }loop parameter{s},
-{where eash loop parameter corresponds to a unique non-overlapphing
-subset of iterations that together cover the full set of iterations to
-be performed.}
-[which is an ]{These} object{'s} [whose ]subtype{s} (and nominal subtype{s})
-[is]{are} that defined by the discrete_subtype_definition.
+A loop_statement with the parallel reserved word shall not update variables global
+to the loop unless either the action is sequential (see 9.10(11)), or the
+expression for the update mentions the loop parameter.
+
+New paragraph after 5.5 (9/4)
+
+For a parallel loop, a check is made that updates to a component of a global
+array or to an element of a container via an expression that mentions the loop
+parameter are sequential. See 9.10(11).
+
+AARM Reason:
+The check ensures that different tasklets executing iterations of the same loop
+in parallel will not involve data races. These checks are not needed if it can
+statically be determined that each iteration of the loop accesses a unique set
+of array components or container elements via the loop parameter.
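+
+As a non-normative illustration (the names A and Index_Map are hypothetical),
+the first loop below needs no check because each iteration demonstrably
+updates a distinct component, while the second relies on the check described
+above because the distinctness of Index_Map (I) cannot in general be shown
+statically:
+
+   parallel
+   for I in A'Range loop
+      A (I) := A (I) + 1;          -- a unique component per iteration
+   end loop;
+
+   parallel
+   for I in A'Range loop
+      A (Index_Map (I)) := 0;      -- mentions I, but indices may collide
+   end loop;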
+
+Add New paragraph after 5.5 (7)
+
+When the reserved word parallel is present and a transfer of control out of the
+loop occurs, an attempt is made to cancel further parallel execution of the
+sequence_of_statements that has not yet started. The completion of a
+loop_statement for a transfer of control out of the loop is delayed until all
+parallel execution of the sequence_of_statements is complete. If a transfer of
+control out of the loop occurs in multiple parallel executions of the
+sequence_of_statements then only the run time action of the first encountered
+transfer of control occurs.
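+
+A non-normative sketch of a transfer of control out of a parallel loop (the
+names Data, Key, and Found are hypothetical; Found is assumed to be declared
+Atomic so that updating it is a sequential action per 9.10):
+
+   parallel
+   for I in Data'Range loop
+      if Data (I) = Key then
+         Found := I;
+         exit;   -- cancels iterations that have not yet started
+      end if;
+   end loop;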
 
 Modify to 5.5(9/4)
 
 "For the execution of a loop_statement with the iteration_scheme being for
  loop_parameter_specification, the loop_parameter_specification is first
- elaborated. This elaboration creates the loop parameter{s} and elaborates the
- discrete_subtype_definition. {Multiple loop parameters may be created where
- each loop parameter is associated with a unique non-overlapping range of the
- iterations if the keyword parallel is present, otherwise a single loop
- parameter is assumed. Each loop parameter can execute concurrently with other
- loop parameters of the same loop. Each thread of control proceeds independently
- and concurrently between the points where they interact with other tasks and
- with each other.} If the discrete_subtype_definition defines a subtype with a
- null range, the execution of the loop_statement is complete. Otherwise, the
- sequence_of_statements is executed once for each value of the discrete subtype
- defined by the discrete_subtype_definition that satisfies the predicates of the
- subtype (or until the loop is left as a consequence of a transfer of control).
+ elaborated. This elaboration creates {objects of} the loop parameter and
+ elaborates the discrete_subtype_definition. {If the keyword parallel is present,
+ multiple objects of the loop parameter are created where each iteration is
+ associated with a specific loop parameter object, otherwise a single loop
+ parameter object is created. Each loop parameter object is associated with a
+ thread of control where each thread proceeds independently and concurrently
+ between the points where they interact with other tasks and with each other.}
+ If the discrete_subtype_definition defines a subtype with a null range, the
+ execution of the loop_statement is complete. Otherwise, the sequence_of_statements
+ is executed once for each value of the discrete subtype defined by the
+ discrete_subtype_definition that satisfies the predicates of the subtype (or
+ until the loop is left as a consequence of a transfer of control).
  Prior to each such iteration, the corresponding value of the discrete subtype
- is assigned to the loop parameter. These values are assigned in increasing
- order unless the reserved word reverse is present, in which case the values are
- assigned in decreasing order. "
+ is assigned to [the]{a} loop parameter {object}. These values are assigned in
+ increasing order unless the reserved word reverse is present, in which case the
+ values are assigned in decreasing order {or unless the reserved word parallel
+ is present, in which case the order is arbitrary}. "
 
 AARM - An implementation should statically treat the
 sequence_of_statements as being executed by separate threads of control,
@@ -640,47 +223,288 @@
 
 Example of a parallel loop
 
+-- See 3.6(30/2)
 parallel
-for I in Buffer'Range loop
-   Buffer(I) := Arr1(I) + Arr2(I);
+for I in Grid'Range(1) loop
+   Grid (I, 1) := (for all J in Grid'Range(2) => Grid (I, J) = True);
 end loop;
 
+Add after 5.5.1 (1/3)
+
+with Ada.Containers;
 
-"5.6.1 Concurrent Block Statements
+Add after 5.5.1 (4/3)
+
+   use type Ada.Containers.Count_Type;
+   subtype Count_Type is Ada.Containers.Count_Type;
+
+   type Cursor_Offset is range -(Count_Type'Last - 1) .. Count_Type'Last - 1;
+
+   type Parallel_Iterator is limited interface and Forward_Iterator;
+
+   function Length
+     (Object   : Parallel_Iterator) return Count_Type is abstract;
+
+   function Jump
+     (Object   : Parallel_Iterator;
+      Position : Cursor;
+      Offset   : Cursor_Offset) return Cursor is abstract;
+
+   type Reversible_Parallel_Iterator is
+      limited interface and Parallel_Iterator and Reversible_Iterator;
+
+Modify 5.5.1 (6/3)
+
+An iterator type is a type descended from the Forward_Iterator
+interface from some instance of Ada.Iterator_Interfaces. A reversible iterator
+type is a type descended from the Reversible_Iterator interface from some
+instance of Ada.Iterator_Interfaces. {A parallel iterator type is a type
+descended from the Parallel_Iterator interface from some instance of
+Ada.Iterator_Interfaces. A parallel reversible iterator type is a type
+descended from both the Parallel_Iterator interface and the Reversible_Iterator
+interface from some instance of Ada.Iterator_Interfaces.} An iterator object is
+an object of an iterator type. A reversible iterator object is an object of a
+reversible iterator type{, a parallel iterator object is an object of a parallel
+iterator type, and a parallel reversible iterator object is an object of a
+parallel reversible iterator type.} The formal subtype Cursor from the associated
+instance of Ada.Iterator_Interfaces is the iteration cursor subtype for the iterator type.
+
+Modify 5.5.1 (11/3)
+
+An iterable container type is an indexable container type with specified
+Default_Iterator and Iterator_Element aspects. A reversible iterable container
+type is an iterable container type with the default iterator type being a
+reversible iterator type. {A parallel iterable container type is an iterable
+container type with the default iterator type being a parallel iterator type. A
+parallel reversible iterable container type is an iterable container type with the
+default iterator type being a parallel reversible iterator type.}
+An iterable container object is an object of an iterable container type. A
+reversible iterable container object is an object of a reversible iterable
+container type. {A parallel iterable container object is an object of a parallel
+iterable container type. A parallel reversible iterable container object is an
+object of a parallel reversible iterable container type.}
+
+Modify 5.5.2 (4/3)
+
+If the reserved word reverse appears, the iterator_specification is a reverse
+iterator[.;] {If the reserved word parallel appears, the iterator_specification
+is a parallel iterator;} otherwise it is a forward iterator. In a reverse
+generalized iterator, the iterator_name shall be of a reversible iterator type.
+{In a parallel generalized iterator, the iterator_name shall be of a parallel
+iterator type.} In a reverse container element iterator, the default
+iterator type for the type of the iterable_name shall be a reversible iterator
+type. {In a parallel container element iterator, the default iterator type for
+the type of the iterable_name shall be of a parallel iterator type.}
+
+Modify 5.5.2 (7/5)
+
+An iterator_specification declares a {set of} loop parameter {objects. If the
+ keyword parallel is present, multiple objects of the loop parameter are
+ created where each iteration is associated with a specific loop parameter
+ object; otherwise a single loop parameter object is created for the loop. Each
+ loop parameter object is associated with a thread of control where each thread
+ of control proceeds independently and concurrently between the points
+ where they interact with other tasks and with each other}. In a generalized
+ iterator, an array component iterator, or a container element iterator,
+ if a loop_parameter_subtype_indication is present, it determines the nominal
+ subtype of the loop parameter{s}. In a generalized iterator, if a
+ loop_parameter_subtype_indication is not present, the nominal subtype of the
+ loop parameter{s are} [is] the iteration cursor subtype. In an array component
+ iterator, if a loop_parameter_subtype_indication is not present, the nominal
+ subtype of the loop parameter{s are}[ is] the component subtype of the type of
+ the iterable_name. In a container element iterator, if a
+ loop_parameter_subtype_indication is not present, the nominal subtype of the
+ loop parameter{s are}[is] the default element subtype for the type of the iterable_name.
+
+Modify 5.5.2 (8/3)
+In a generalized iterator, the loop parameter{s are} [is a] constant. In an
+array component iterator, the loop parameter{s are} [is a] constant if the
+iterable_name denotes a constant; otherwise it denotes a variable. In a container
+element iterator, the loop parameter{s are}[is a] constant if the iterable_name
+denotes a constant, or if the Variable_Indexing aspect is not specified for the
+type of the iterable_name; otherwise it is a variable.
+
+Modify 5.5.2 (8.a/5)
+
+Ramification: The loop parameter{s} of a generalized iterator {have}[has] the
+same accessibility as the loop statement. This means that the loop parameter
+object{s} {are}[is] finalized when the loop statement is left. ([It]{They} also
+may be finalized as part of assigning a new value to the loop parameter{s}.) For array
+component iterators, the loop parameter{s each} directly denotes an element of
+the array and [has]{have} the accessibility of the associated array. For container
+element iterators, the loop parameter{s each} denotes the result of [the]{an}
+indexing function call (in the case of a constant indexing) or a generalized
+reference thereof (in the case of a variable indexing). Roughly speaking, the
+loop parameter{s have} [has] the accessibility level of a single iteration of
+the loop. More precisely, the function result (or the generalized reference
+thereof) is considered to be renamed in the declarative part of a notional block
+statement which immediately encloses the loop's sequence_of_statements; the
+accessibility of the loop parameter{s are} [is] that of the block statement.
+
+Modify 5.5.2 (10/3)
+
+For a generalized iterator, the loop parameter{s are} [is] created, the
+iterator_name is evaluated, and the denoted iterator object{s} becomes the loop
+iterator{s}. In a forward generalized iterator, the operation First of the
+iterator type is called on the loop iterator, to produce the initial value for
+the loop parameter. If the result of calling Has_Element on the initial value is
+False, then the execution of the loop_statement is complete. Otherwise, the
+sequence_of_statements is executed and then the Next operation of the iterator
+type is called with the loop iterator and the current value of the loop
+parameter to produce the next value to be assigned to the loop parameter. This
+repeats until the result of calling Has_Element on the loop parameter is False,
+or the loop is left as a consequence of a transfer of control. For a
+reverse generalized iterator, the operations Last and Previous are called rather
+than First and Next. {For a parallel generalized iterator, the operation Length
+of the iterator type is called first to determine the number of iterations that
+can execute in parallel. If the result of calling Length is 0, then the execution
+of the loop_statement is complete. Otherwise, the operation First of
+the iterator type is then called on a loop iterator, to produce the initial
+value for one of the associated loop parameters. The operation Jump of the
+iterator type is then called using the result of the First call as an input, and
+then repeatedly using the result from each previous call to produce the initial
+value of the remaining associated loop parameters. The threads of control
+associated with each loop parameter execute until all threads have completed
+execution of their associated iterations, or the loop is left as a consequence
+of a transfer of control.}
+
+AARM Note
+
+Jump is used rather than Next, to provide better support for implementations
+to apply loop chunking, where each thread of control might be responsible for multiple
+loop iterations. For instance, it is more efficient for a Vector to call Jump to determine
+the cursor for the next chunk, via a single call, rather than issue multiple calls to
+Next to arrive at the same cursor position.
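+
+A non-normative sketch of how an implementation might seed one starting cursor
+per chunk using Length, First, and Jump (the names Iter and Seeds and the use
+of System.Multiprocessors.Number_Of_CPUs are illustrative assumptions only):
+
+   declare
+      Total  : constant Count_Type := Iter.Length;
+      Chunks : constant Count_Type :=
+        Count_Type'Min (Total, Count_Type (System.Multiprocessors.Number_Of_CPUs));
+      Seeds  : array (1 .. Chunks) of Cursor;
+      Pos    : Cursor := Iter.First;
+   begin
+      for K in Seeds'Range loop
+         Seeds (K) := Pos;                        -- first cursor of chunk K
+         exit when K = Seeds'Last;
+         Pos := Iter.Jump (Pos, Cursor_Offset (Total / Chunks));
+      end loop;
+   end;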
+
+Modify 5.5.2 (10.a/4)
+
+Ramification: The loop parameter{s} of a generalized iterator [is a]{are} variable{s}
+of which the user only has a constant view. It follows the normal rules for a
+variable of its nominal subtype. In particular, if the nominal subtype is indefinite,
+the variable is constrained by its initial value. Similarly, if the nominal subtype
+is class-wide, the variable (like all variables) has the tag of the initial value.
+Constraint_Error may be raised by a subsequent iteration if Next{, }[ or] Previous{, or Jump}
+return an object with a different tag or constraint.
+
+Modify 5.5.2 (11/3)
+
+For an array component iterator, the iterable_name is evaluated and the denoted
+array object becomes the array for the loop. If the array for the loop is a
+null array, then the execution of the loop_statement is complete. Otherwise, {if
+the iterator is not a parallel array component iterator,} the sequence_of_statements
+is executed with the loop parameter denoting each component of the array for the loop,
+using a canonical order of components, which is last dimension varying fastest
+(unless the array has convention Fortran, in which case it is first dimension
+varying fastest). For a forward array component iterator, the iteration starts
+with the component whose index values are each the first in their index range,
+and continues in the canonical order. For a reverse array component iterator, the
+iteration starts with the component whose index values are each the last in their
+index range, and continues in the reverse of the canonical order. {For a parallel
+array component iterator, the order of iteration is arbitrary.} The loop iteration
+proceeds until the sequence_of_statements has been executed for each component of
+the array for the loop, or until the loop is left as a consequence of a transfer of
+control.
+
+Modify 5.5.2 (12/3)
+
+For a container element iterator, the iterable_name is evaluated and the denoted
+iterable container object becomes the iterable container object for the loop.
+The default iterator function for the type of the iterable container object for
+the loop is called on the iterable container object and the result is the loop
+iterator. [An object] {Objects} of the default cursor subtype [is]{are} created
+(the loop cursor{s}).
+
+Modify 5.5.2 (13/3)
+
+For a forward container element iterator, the operation First of the iterator
+type is called on the loop iterator, to produce the initial value for the loop
+cursor. If the result of calling Has_Element on the initial value is False, then
+the execution of the loop_statement is complete. Otherwise, the sequence_of_statements
+is executed with the loop parameter denoting an indexing (see 4.1.6) into the
+iterable container object for the loop, with the only parameter to the indexing
+being the current value of the loop cursor; then the Next operation of the iterator
+type is called with the loop iterator and the loop cursor to produce the next value
+to be assigned to the loop cursor. This repeats until the result of calling
+Has_Element on the loop cursor is False, or until the loop is left as a consequence
+of a transfer of control. For a reverse container element iterator, the operations
+Last and Previous are called rather than First and Next. {For a parallel container
+element iterator, the operation Length is first called to determine the number of
+iterations that can execute in parallel. If the result of calling Length is 0,
+then the execution of the loop_statement is complete. Otherwise, the operation
+First of the iterator type is then called on a loop iterator, to produce the initial
+value for one of the associated loop parameters. The operation Jump of the
+iterator type is then called with the loop iterator and the result of First, to produce
+the initial value for the next loop cursor. Repeated calls to Jump using the loop
+iterator and the previous loop cursor result from the Jump call are used to produce
+the initial value of the remaining loop cursors. The threads of control
+associated with each loop cursor execute until all threads have completed
+execution of their associated iterations, or the loop is left as a consequence
+of a transfer of control.} If the loop parameter is a constant (see above), then
+the indexing uses the default constant indexing function for the type of the
+iterable container object for the loop; otherwise it uses the default variable
+indexing function.
+
+Modify 5.5.2 (15/3)
+
+-- Array component iterator example:
+{parallel}
+for Element of Board loop  -- See 3.6.1.
+   Element := Element * 2.0; -- Double each element of Board, a two-dimensional array.
+end loop;
 
-[A concurrent_block_statement encloses two or more sequence_of_statements
-where all the sequence_of_statements can execute concurrently or possibly
-in parallel with each other.]
 
+"5.6.1 Parallel Block Statements
+
+[A parallel_block_statement encloses two or more handled_sequence_of_statements
+where all the handled_sequence_of_statements can execute in parallel with each
+other.]
+
 Syntax
 
-concurrent_block_statement ::=
-    [parallel] do
-      sequence_of_statements
+parallel_block_statement ::=
+    parallel
+    do
+      handled_sequence_of_statements
     and
-      sequence_of_statements
+      handled_sequence_of_statements
    {and
-      sequence_of_statements}
+      handled_sequence_of_statements}
     end do;
 
+Legality Rules
+
+A parallel_block_statement shall not update variables global to the parallel block
+unless the action is sequential (see 9.10(11)).
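+
+A non-normative illustration (the names Results, Data, Compute_Left, and
+Compute_Right are hypothetical): routing both updates through a protected
+object makes each update a sequential action under the rules of 9.10, so the
+block is legal:
+
+   parallel
+   do
+      Results.Add (Compute_Left (Data));
+   and
+      Results.Add (Compute_Right (Data));
+   end do;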
+
 Static Semantics
 
-Each sequence_of_statements represents a separate thread of control that
-proceeds independently and concurrently between the points where they
-interact with other tasks and with each other. If the parallel keyword
-is present, then each threads of control can execute on any available
-processor in parallel with other threads. Otherwise, the threads of
-control execute concurrently on the same processor as the enclosing task.
+Each handled_sequence_of_statements represents a separate thread of control that
+proceeds independently and in parallel between the points where they
+interact with other tasks and with each other.
+
+For the execution of a parallel_block_statement, each handled_sequence_of_statements
+is executed once, until the parallel_block_statement is complete. The
+parallel_block_statement is complete when all handled_sequence_of_statements have
+completed execution or when transfer of control occurs that transfers
+control out of the parallel block. When a transfer of control out of the
+parallel block occurs, an attempt is made to cancel further parallel execution of
+any handled_sequence_of_statements that has not yet started. The completion of a
+parallel_block_statement for a transfer of control out
+of the parallel block is delayed until all parallel execution of the handled_sequence_of_statements
+is complete. If a transfer of control out of the parallel block occurs in
+multiple parallel executions of handled_sequence_of_statements, then only the run
+time action of the first encountered transfer of control occurs.
 
+
 AARM - An implementation should statically treat each
-sequence_of_statements as a separate thread of control, but whether they
-actually execute concurrently or sequentially should be a determination
+handled_sequence_of_statements as a separate thread of control, but whether they
+actually execute in parallel or sequentially should be a determination
 that is made dynamically at run time, dependent on factors such as the
 available computing resources.
 
 Examples
 
-Example of a concurrent block statement:
+Example of a parallel block statement:
 
    do
      Foo(Z);
@@ -689,31 +513,17 @@
    and
      Put_Line ("Executing Foo and Bar in parallel with Other_Work");
      Other_Work;
+     exception
+        when Constraint_Error =>
+          Put_Line ("Constraint Error raised doing Other_Work");
    end do;
 
-Modify 6.4(6)
-"explicit_actual_parameter ::= expression | variable_name {| reduction_expression_parameter}"
-
-reduction_expression_parameter ::= <[initial_reduction_value]>
-
-initial_reduction_value ::= simple_expression
-
-
-Add 6.4(7.1) Legality Rules
-
-A reduction_expression_parameter shall only be supplied as an actual
-parameter to a combiner_function_call of a reduction expression
-
 Change 9.10 (13)
 
 "Both actions occur as part of the execution of the same task {unless
  either are part of a;
-   - different sequence_of_statements of a concurrent
-   - block statement,
-   - parallel loop statement,
-   - parallel quantified expression,
-   - parallel array aggregate, or
-   - parallel reduction expression.}"
+   - different handled_sequence_of_statements of a parallel block statement,
+   - parallel loop statement.}"
 
 New section 9.12 Executors and Tasklets
 
@@ -726,188 +536,314 @@
 its execution to a set of executors, it cannot proceed with its own
 execution until all the executors have completed their respective
 executions.
-
-A concurrent block statement, parallel loop statement, parallel
-quantified expression, parallel aggregate, or parallel reduction
-expression may assign a set of executors to execute the construct, if
-extra computing resources are available.
-
-Modify A.1 (9.1)
-
-   -- function "and" (Left, Right : Boolean'Base) return Boolean'Base {with Identity => True};
-   -- function "or"  (Left, Right : Boolean'Base) return Boolean'Base {with Identity => False};
-   -- function "xor" (Left, Right : Boolean'Base) return Boolean'Base {with Identity => False};
-
-Modify A.1 (17)
-
-   -- function "+"   (Left, Right : Integer'Base) return Integer'Base {with Identity => 0};
-   -- function "-"   (Left, Right : Integer'Base) return Integer'Base {with Reducer => "+"};
-   -- function "*"   (Left, Right : Integer'Base) return Integer'Base {with Identity => 1};
-   -- function "/"   (Left, Right : Integer'Base) return Integer'Base {with Reducer => "*"};
-
-Modify A.1 (25)
-
-   -- function "+"   (Left, Right : Float) return Float {with Identity => 0.0};
-   -- function "-"   (Left, Right : Float) return Float {with Reducer  => "+"};
-   -- function "*"   (Left, Right : Float) return Float {with Identity => 1.0};
-   -- function "/"   (Left, Right : Float) return Float {with Reducer  => "*"};
-
-Modify A.1 (29-34)
-
-function "*" (Left : root_integer; Right : root_real)
-     return root_real {with Identity => 1.0};
-
-   function "*" (Left : root_real;    Right : root_integer)
-     return root_real {with Identity => 1.0};
-
-   function "/" (Left : root_real;    Right : root_integer)
-     return root_real {with Reducer => "*"};
-
-   -- The type universal_fixed is predefined.
-   -- The only multiplying operators defined between
-   -- fixed point types are
-
-   function "*" (Left : universal_fixed; Right : universal_fixed)
-     return universal_fixed {with Identity => 1.0};
-
-   function "/" (Left : universal_fixed; Right : universal_fixed)
-     return universal_fixed {with Reducer => "*"};
-
-Modify A.4.4 (13-17)
-
- function Append (Left, Right : in Bounded_String;
-                       Drop        : in Truncation  := Error)
-         return Bounded_String{ with Reducer => "&"};
 
-      function Append (Left  : in Bounded_String;
-                       Right : in String;
-                       Drop  : in Truncation := Error)
-         return Bounded_String{ with Reducer => "&"};
+A parallel block statement or parallel loop statement may assign a set of
+executors to execute the construct, if extra computing resources are available.
 
-      function Append (Left  : in String;
-                       Right : in Bounded_String;
-                       Drop  : in Truncation := Error)
-         return Bounded_String{ with Reducer => "&"};
+Add new paragraph after 11.5 (20)
 
-      function Append (Left  : in Bounded_String;
-                       Right : in Character;
-                       Drop  : in Truncation := Error)
-         return Bounded_String{ with Reducer => "&"};
+Loop_Parameter_Check
+  For a parallel loop, check that updates to a component of an array or to an
+  element of a container via the loop parameter are sequential. See 9.10(11).
+
+Modify A.18.2(74.1/3)
+
+function Iterate (Container : in Vector)
+      return Vector_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.2 (74.2/3)
+
+function Iterate (Container : in Vector; Start : in Cursor)
+      return Vector_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.2 (230.1/3)
+
+function Iterate (Container : in Vector)
+   return Vector_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.2 (230.2/3)
+
+Iterate returns a {parallel} reversible iterator object (see 5.5.1) that will
+generate a value for [a] loop parameter{s} (see 5.5.2) designating each node in
+Container, starting with the first node and moving the cursor as per the Next
+function when used as a forward iterator, and starting with the last node and
+moving the cursor as per the Previous function when used as a reverse iterator
+{, and starting with all nodes simultaneously using the First and Jump functions
+to generate cursors for all the iterations of the loop when used as a parallel
+iterator}. Tampering with the cursors of Container is prohibited while the
+iterator object exists (in particular, in the sequence_of_statements of the
+loop_statement whose iterator_specification denotes this object). The iterator
+object needs finalization.
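+
+A non-normative usage sketch (My_Vector is assumed to be an object of an
+instance of Ada.Containers.Vectors with an integer element type):
+
+   parallel
+   for C in My_Vector.Iterate loop
+      My_Vector (C) := My_Vector (C) * 2;   -- distinct element per iteration
+   end loop;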
+
+Modify A.18.2 (230.3/3)
+
+function Iterate (Container : in Vector; Start : in Cursor)
+   return Vector_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.2 (230.4/3)
+
+If Start is not No_Element and does not designate an item in Container, then
+Program_Error is propagated. If Start is No_Element, then Constraint_Error is
+propagated. Otherwise, Iterate returns a {parallel }reversible iterator object
+(see 5.5.1) that will generate a value for [a] loop parameter{s} (see 5.5.2)
+designating each node in Container, starting with the node designated by Start
+and moving the cursor as per the Next function when used as a forward iterator,
+or moving the cursor as per the Previous function when used as a reverse iterator
+{, or all nodes simultaneously starting with the node designated by Start and obtaining
+the other cursors via calls to Jump when used as a parallel iterator}. Tampering
+with the cursors of Container is prohibited while the iterator object exists (in
+particular, in the sequence_of_statements of the loop_statement whose
+iterator_specification denotes this object). The iterator object needs finalization.
+
+Modify A.18.3 (46.1/3)
+
+function Iterate (Container : in List)
+      return List_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.3 (46.2/3)
+
+function Iterate (Container : in List; Start : in Cursor)
+      return List_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.3 (144.1/3)
+
+function Iterate (Container : in List)
+   return List_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.3 (144.2/3)
+
+Iterate returns a {parallel }reversible iterator object (see 5.5.1) that will
+generate [a] value{s} for [a] loop parameter{s} (see 5.5.2) designating each node in
+Container, starting with the first node and moving the cursor as per the Next
+function when used as a forward iterator, and starting with the last node and
+moving the cursor as per the Previous function when used as a reverse iterator
+{, and starting with all nodes simultaneously using the First and Jump functions
+to generate cursors for all the iterations of the loop when used as a parallel
+iterator}. Tampering with the cursors of Container is prohibited while the iterator
+object exists (in particular, in the sequence_of_statements of the loop_statement
+whose iterator_specification denotes this object). The iterator object needs finalization.
+
+Modify A.18.3 (144.3/3)
+
+function Iterate (Container : in List; Start : in Cursor)
+   return List_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.3 (144.4/3)
+
+If Start is not No_Element and does not designate an item in Container, then
+Program_Error is propagated. If Start is No_Element, then Constraint_Error is
+propagated. Otherwise, Iterate returns a {parallel} reversible iterator object
+(see 5.5.1) that will generate [a] value{s} for [a] loop parameter{s} (see 5.5.2)
+designating each node in Container, starting with the node designated by Start
+and moving the cursor as per the Next function when used as a forward iterator,
+or moving the cursor as per the Previous function when used as a reverse iterator
+{, or all nodes simultaneously starting with the node designated by Start and obtaining
+the other cursors via calls to Jump when used as a parallel iterator}. Tampering
+with the cursors of Container is prohibited while the iterator object exists (in
+particular, in the sequence_of_statements of the loop_statement whose
+iterator_specification denotes this object). The iterator object needs finalization.
+
+Modify A.18.5 (37.1/3)
+
+function Iterate (Container : in Map)
+      return Map_Iterator_Interfaces.{Parallel_}[Forward_]Iterator'Class;
+
+Modify A.18.5 (61.1/3)
+
+function Iterate (Container : in Map)
+   return Map_Iterator_Interfaces.{Parallel}[Forward]_Iterator'Class;
+
+Modify A.18.5 (61.2/3)
+
+Iterate returns a[n] {parallel} iterator object (see 5.5.1) that will generate [a]
+value{s} for [a] loop parameter{s} (see 5.5.2) designating each node in Container, starting
+with the first node and moving the cursor according to the successor relation
+{when used as a forward iterator, and starting with all nodes simultaneously using
+the First and Jump functions to generate cursors for all the iterations of the loop
+when used as a parallel iterator}. Tampering with the cursors of Container is
+prohibited while the iterator object exists (in particular, in the
+sequence_of_statements of the loop_statement whose iterator_specification denotes
+this object). The iterator object needs finalization.
+
+Modify A.18.6(51.1/3)
+
+function Iterate (Container : in Map)
+      return Map_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.6(51.2/3)
+
+function Iterate (Container : in Map; Start : in Cursor)
+      return Map_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.6(94.1/3)
+
+function Iterate (Container : in Map)
+   return Map_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.6(94.2/3)
+
+ Iterate returns a {parallel} reversible iterator object (see 5.5.1) that will
+ generate [a] value{s} for [a] loop parameter{s} (see 5.5.2) designating each
+ node in Container, starting with the first node and moving the cursor according
+ to the successor relation when used as a forward iterator, and starting with the
+ last node and moving the cursor according to the predecessor relation when used
+ as a reverse iterator {, and starting with all nodes simultaneously using the
+ First and Jump functions to generate cursors for all the iterations of the loop
+ when used as a parallel iterator}. Tampering with the cursors of Container is
+ prohibited while the iterator object exists (in particular, in the
+ sequence_of_statements of the loop_statement whose iterator_specification denotes
+ this object). The iterator object needs finalization.
+
+Modify A.18.6(94.3/3)
+
+function Iterate (Container : in Map; Start : in Cursor)
+   return Map_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.6(94.4/3)
+
+If Start is not No_Element and does not designate an item in Container, then
+Program_Error is propagated. If Start is No_Element, then Constraint_Error is
+propagated. Otherwise, Iterate returns a {parallel} reversible iterator object
+(see 5.5.1) that will generate [a] value{s} for [a] loop parameter{s} (see 5.5.2)
+designating each node in Container, starting with the node designated by Start
+and moving the cursor according to the successor relation when used as a forward
+iterator, or moving the cursor according to the predecessor relation when used as
+a reverse iterator {, or all nodes simultaneously starting with the node
+designated by Start and obtaining the other cursors via calls to Jump when used
+as a parallel iterator}. Tampering with the cursors of Container is prohibited
+while the iterator object exists (in particular, in the sequence_of_statements
+of the loop_statement whose iterator_specification denotes this object). The
+iterator object needs finalization.
+
+Modify A.18.8(49.1/3)
+
+function Iterate (Container : in Set)
+      return Set_Iterator_Interfaces.{Parallel}[Forward]_Iterator'Class;
+
+Modify A.18.8(85.1/3)
+
+function Iterate (Container : in Set)
+   return Set_Iterator_Interfaces.{Parallel}[Forward]_Iterator'Class;
+
+Modify A.18.8(85.2/3)
+
+Iterate returns a[n] {parallel} iterator object (see 5.5.1) that will generate
+[a] value{s} for [a] loop parameter{s} (see 5.5.2) designating each element in
+Container, starting with the first element and moving the cursor according to the
+successor relation {when used as a forward iterator, and starting with all elements
+simultaneously using the First and Jump functions to generate cursors for all the
+iterations of the loop when used as a parallel iterator}. Tampering with the
+cursors of Container is prohibited while the iterator object exists (in
+particular, in the sequence_of_statements of the loop_statement whose
+iterator_specification denotes this object). The iterator object needs finalization.
+
+Modify A.18.9(61.1/3)
+
+function Iterate (Container : in Set)
+      return Set_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.9(61.2/3)
+
+function Iterate (Container : in Set; Start : in Cursor)
+      return Set_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.9(113.1/3)
+
+function Iterate (Container : in Set)
+      return Set_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.9(113.2/3)
+
+Iterate returns a {parallel} reversible iterator object (see 5.5.1) that will
+generate [a] value{s} for [a] loop parameter{s} (see 5.5.2) designating each element
+in Container, starting with the first element and moving the cursor according to
+the successor relation when used as a forward iterator, and starting with the
+last element and moving the cursor according to the predecessor relation when
+used as a reverse iterator {, and starting with all elements simultaneously using the
+ First and Jump functions to generate cursors for all the iterations of the loop
+ when used as a parallel iterator}. Tampering with the cursors of Container is
+ prohibited while the iterator object exists (in particular, in the
+ sequence_of_statements of the loop_statement whose iterator_specification
+ denotes this object). The iterator object needs finalization.
+
+Modify A.18.9(113.3/3)
+
+function Iterate (Container : in Set; Start : in Cursor)
+   return Set_Iterator_Interfaces.{Parallel_}Reversible_Iterator'Class;
+
+Modify A.18.9(113.4/3)
+
+If Start is not No_Element and does not designate an item in Container, then
+Program_Error is propagated. If Start is No_Element, then Constraint_Error is
+propagated. Otherwise, Iterate returns a {parallel} reversible iterator object
+(see 5.5.1) that will generate [a] value{s} for [a] loop parameter{s} (see 5.5.2)
+designating each element in Container, starting with the element designated by
+Start and moving the cursor according to the successor relation when used as a
+forward iterator, or moving the cursor according to the predecessor relation when
+used as a reverse iterator {, or all elements simultaneously starting with the element
+designated by Start and obtaining the other cursors via calls to Jump when used
+as a parallel iterator}. Tampering with the cursors of Container is prohibited
+while the iterator object exists (in particular, in the sequence_of_statements
+of the loop_statement whose iterator_specification denotes this object). The
+iterator object needs finalization.
+
+Modify A.18.10(44/3)
+
+function Iterate (Container : in Tree)
+      return Tree_Iterator_Interfaces.{Parallel}[Forward]_Iterator'Class;
+
+Modify A.18.10(45/3)
+
+function Iterate_Subtree (Position : in Cursor)
+      return Tree_Iterator_Interfaces.{Parallel}[Forward]_Iterator'Class;
+
+Modify A.18.10(156/3)
+
+function Iterate (Container : in Tree)
+   return Tree_Iterator_Interfaces.{Parallel}[Forward]_Iterator'Class;
+
+Modify A.18.10(157/4)
+
+Iterate returns a[n] {parallel} iterator object (see 5.5.1) that will generate
+[a] value{s} for [a] loop parameter{s} (see 5.5.2) designating each element node
+in Container, starting with the root node and proceeding in a depth-first order
+{when used as a forward iterator, and starting with all nodes
+simultaneously using the First and Jump functions to generate cursors for all the
+iterations of the loop when used as a parallel iterator}. Tampering with the
+cursors of Container is prohibited while the iterator object exists (in particular,
+in the sequence_of_statements of the loop_statement whose iterator_specification
+denotes this object). The iterator object needs finalization.
+
+Modify A.18.10(158/3)
+
+function Iterate_Subtree (Position : in Cursor)
+   return Tree_Iterator_Interfaces.{Parallel}[Forward]_Iterator'Class;
+
+Modify A.18.10(159/3)
+
+If Position equals No_Element, then Constraint_Error is propagated. Otherwise,
+Iterate_Subtree returns a[n] {parallel} iterator object (see 5.5.1) that will
+generate [a] value{s} for [a] loop parameter{s} (see 5.5.2) designating each
+element in the subtree rooted by the node designated by Position, starting
+with the node designated by Position and proceeding in a depth-first order
+{when used as a forward iterator, or all nodes simultaneously starting from
+the node designated by Position and obtaining the other cursors via calls to
+Jump when used as a parallel iterator}. If Position equals No_Element, then
+Constraint_Error is propagated. Tampering with the cursors of the container
+that contains the node designated by Position is prohibited while the
+iterator object exists (in particular, in the sequence_of_statements of the
+loop_statement whose iterator_specification denotes this object). The iterator
+object needs finalization.
 
-      function Append (Left  : in Character;
-                       Right : in Bounded_String;
-                       Drop  : in Truncation := Error)
-         return Bounded_String{ with Reducer => "&"};
 
-Modify A.4.4 (21-26)
-
-      function "&" (Left, Right : in Bounded_String)
-         return Bounded_String{ with Identity => Null_Bounded_String};
-
-      function "&" (Left : in Bounded_String; Right : in String)
-         return Bounded_String{ with Reducer => "&"};
-
-      function "&" (Left : in String; Right : in Bounded_String)
-         return Bounded_String{ with Reducer => "&"};
-
-      function "&" (Left : in Bounded_String; Right : in Character)
-         return Bounded_String{ with Reducer => "&"};
-
-      function "&" (Left : in Character; Right : in Bounded_String)
-         return Bounded_String{ with Reducer => "&"};
-
-Modify A.4.5 (15-19)
-
-   function "&" (Left, Right : in Unbounded_String)
-      return Unbounded_String{ with Identity => Null_Unbounded_String};
-
-   function "&" (Left : in Unbounded_String; Right : in String)
-      return Unbounded_String{ with Reducer => "&"};
-
-   function "&" (Left : in String; Right : in Unbounded_String)
-      return Unbounded_String{ with Reducer => "&"};
-
-   function "&" (Left : in Unbounded_String; Right : in Character)
-      return Unbounded_String{ with Reducer => "&"};
-
-   function "&" (Left : in Character; Right : in Unbounded_String)
-      return Unbounded_String{ with Reducer => "&"};
-
-Modify A.18.2 (15/2-18/.2)
-
-
-   function "&" (Left, Right : Vector) return Vector
-     { with Identity => Empty_Vector};
-
-   function "&" (Left  : Vector;
-                 Right : Element_Type) return Vector
-     { with Reducer => "&"};
-
-   function "&" (Left  : Element_Type;
-                 Right : Vector) return Vector
-     { with Reducer => "&"};
-
-   function "&" (Left, Right  : Element_Type) return Vector
-     { with Reducer => "&"};
-
-Add after A.18.3 (32/2)
-
-
-   function "&" (Left, Right : List) return List
-     { with Identity => Empty_List};
-
-   function "&" (Left  : List;
-                 Right : Element_Type) return List
-     { with Reducer => "&"};
-
-   function "&" (Left  : Element_Type;
-                 Right : List) return List
-     { with Reducer => "&"};
-
-   function "&" (Left, Right  : Element_Type) return List
-     { with Reducer => "&"};
-
-Modify A.18.7(55/2)
-
-    function Union (Left, Right : Set) return Set
-     { with Identity => Empty_Set};
-
-
-Modify A.18.7(67/2)
-
-    function Symmetric_Difference (Left, Right : Set) return Set
-    {with Identity => Empty_Set};
-
-Modify A.18.8(27/2)
-
-   function Union (Left, Right : Set) return Set
-     { with Identity => Empty_Set};
-
-Modify A.18.8(35/2)
-    function Symmetric_Difference (Left, Right : Set) return Set
-    { with Identity => Empty_Set};
-
-Modify A.18.9(28/2)
-
-   function Union (Left, Right : Set) return Set
-     { with Identity => Empty_Set};
-
-Modify A.18.9(37/2)
-    function Symmetric_Difference (Left, Right : Set) return Set
-    { with Identity => Empty_Set};
-
-
 Add C.7.1 (5.1)
 
-The Task_Id value associated with each sequence_of_statements of a
-concurrent_block_statement, parallel quantified expression, parallel
-reduction expression, parallel array aggregate, or of a parallel loop
+The Task_Id value associated with each handled_sequence_of_statements of a
+parallel_block_statement, or of each sequence_of_statements of a parallel loop
 statement is the same as that of the enclosing statement.
 
-AARM - Each sequence_of_statements of a concurrent block, parallel
-quantified expression, parallel reduction expression, parallel array
-aggregate, or parallel loop are treated as though they are all executing
-as the task that encountered the parallel construct.
+AARM - Each handled_sequence_of_statements of a parallel block, and each
+sequence_of_statements of a parallel loop, is treated as though it is
+executing as the task that encountered the parallel construct.
 
 !discussion
 
@@ -940,7 +876,7 @@
 annotations identifying global variable usage on subprogram
 specifications (see AI12-0079-1).
 
-Concurrent Blocks
+Parallel Blocks
 ---------------
 
 example:
@@ -966,7 +902,7 @@
 complain if the parallel sequences might have conflicting global
 side-effects.
 
-The concurrent block construct is flexible enough to support recursive usage as
+The parallel block construct is flexible enough to support recursive usage as
 well, such as:
 
    function Fibonacci (N : Natural) return Natural is
@@ -989,34 +925,27 @@
          Log ("Unexpected Error");
    end Fibonacci;
 
-We considered allowing the concurrent block to be preceded with an
-optional declare part, and followed with optional exception handlers,
-but it was observed that it was more likely to be useful to have objects
-that are shared across multiple parallel sequences to outlive the
-parallel block, and that having exception handlers after the last
-parallel sequence could easily be misconstrued as applying only to the
-last sequence.  Therefore we reverted to the simpler syntax proposed
-above. This simpler syntax is also more congruous with the syntax for
-select statements. Because there are no local declarations, there was
+We considered allowing the parallel block construct to be preceded by an
+optional declare part, but it was observed that it would be more useful for
+objects that are shared across multiple parallel sequences to outlive
+the parallel block.  Therefore we reverted to the simpler syntax proposed
+above. Because there are no local declarations, there was
 also no point in having a statement_identifier (block label) for a
-concurrent block. This is actually not entirely true. Allowing an exit
-statement to replace a goto that targets the location following the
-end of the select statement seems useful. To allow such a statement,
-a block label might be needed to identify the concurrent block that is
-being exited. It was felt that the need for allowing an exit statement
-in a concurrent block could be the subject of a separate AI.
-
-We considered whether a concurrent block statement without the parallel
-keyword should be sequential instead of concurrent. A concurrent
-construct seems more useful as it allows for abstractions such as
-coroutines where producers and consumers produce and consume data that
-is shared between tasklets. Such usage would work whether the pararallel
-keyword is present or not in our model, but if instead the construct
-was sequential, then removing the parallel keyword from a working
-program could cause the application to deadlock, which is a safety concern.
-Also, there is not much point to a sequential block abstraction, since
-one could simply remove the construct and execute all the sequences
-sequentially.
+parallel block. Allowing an exit statement to replace a goto that targets the
+location following the end of the parallel block statement seems useful, and to
+allow such a statement, a block label might be needed to identify the parallel
+block that is being exited. However, allowing exit statements is problematic
+for the same reason that exit statements are not currently allowed in block
+statements. It could be confusing if an exit occurred within a loop enclosed
+by the parallel block. Some might see it as syntax to exit the loop, others
+might see it as syntax to exit the parallel block. Rather than deal with this
+now, it was felt that if there is a strong enough need for allowing an exit
+statement in a parallel block, it could be the subject of a separate AI.
+
+We considered what the semantics might be for a parallel block if the parallel
+keyword were absent. This might be a good syntactic construct to use for
+supporting coroutines, for example. Rather than deal with that question in this
+AI, we leave that for consideration in separate AIs.
 
 Parallel Loops
 --------------
@@ -1027,19 +956,6 @@
 solution space. To benefit from parallel hardware, the computation
 associated with a loop should be spread across the available processors.
 
-One approach, presuming the iterations of the loop have no data
-dependences between them, is to treat each iteration of the loop as a
-separate tasklet, and then have the processors work away on the set of
-tasklets in parallel. However, this introduces overhead from the queuing
-and de-queuing of work items, and the communication of results from each
-work item. Furthermore, there often are data dependences between
-iterations, and creating a separate work item for each iteration can
-introduce excessive synchronization overhead to deal safely with these
-interdependences. Therefore, it is common to break large arrays, and/or
-the loops that iterate over them, into chunks (or slices or tiles),
-where each chunk is processed sequentially, but multiple chunks can be
-processed in parallel with one another.
-
 Note: Allowing the parallel keyword on while loops was considered, but
 was discarded since while loops cannot be easily parallelized, because
 the control variables are inevitably global to the loop.
@@ -1065,127 +981,6 @@
 achieve that end, but a reduction expression seems better suited since it is
 designed to produce a result value.
 
-Reduction Expressions
----------------------
-
-A reduction expression provides a concise way to use iteration to combine a set
-of values into a single result.
-
-We already have some special purpose reduction expressions in Ada in the form of
-quantified expressions and array aggregates of the form using an
-iterated_component_association.
-
-Consider:
-
-   All_Graduated : constant Boolean :=
-     (parallel for all Student of Class => Passed_Exam (Student));
-
-   Someone_Graduated : constant Boolean :=
-     (parallel for some Student of Class => Passed_Exam (Student));
-
-
-Note that the keyword parallel would now be allowed in quantified
-expressions.
-
-Here we are in both cases effectively reducing a set of Boolean values into a
-single result using quantified expressions.
-
-A similar effect can be written using the more generalized Reduction expression
-syntax.
-
-   All_Graduated : constant Boolean :=
-     (parallel for Student of Class => <True> and Passed_Exam (Student));
-
-   Someone_Graduated : constant Boolean :=
-     (parallel for Student of Class => <False> or Passed_Exam (Student));
-
-Here we use "and" and "or" as the combining_function_call of the Reduction
-expression. The initial value plays an important role, since the Identity for
-"and" must be true, and false for "or".
-
-Another concern might be whether parallel loops can generally be written as
-expressions.
-
-Consider the calculation of Pi:
-
-   Number_Of_Steps : constant := 100_000;
-   Step : constant := 1.0 / Number_Of_Steps;
-
-   Pi : constant Long_Float := Step *
-     (parallel for I in 1 .. Number_Of_Steps =>
-         <0.0> + (4.0 / (1.0 + ((Long_Float (I) - 0.5) * Step)**2)));
-
-One might feel that the readability of this example might be improved if
-temporary variable declarations were to be added for intermediate results.
-For instanced, it might be nice to replace the sub-expression;
-   (1 + ((Long_Float (I) - 0.5) * Step)**2)
-with
-   (1.0 + X**2)
-
-One ordinarily cannot declare variables in an expression, but one can always write a
-function which can be called from an expression.
-
-We might instead decide to write:
-
-   function Pi_Slicer (R : Long_Float; Index : Natural) return Float
-     with Reducer => "+"
-   is
-      X : constant Long_Float := (Long_Float (Index) - 0.5) * Step;
-   begin
-      return R + 4.0 / (1.0 + X ** 2);
-   end Pi_Slicer;
-
-   Pi : constant Long_Float := Step *
-     (parallel for I in 1 .. Number_Of_Steps => Pi_Slicer (<0.0>, I));
-
-Which leaves a simpler looking Reduction Expression to do computation.
-
-Another concern might be about the need to perform multiple reductions at the
-same time.
-
-Consider:
-
-   Sum : Integer := (parallel for X of Arr => <0> + X);
-   Min : Integer := (parallel for X of Arr => Integer'Min(<Integer'Last>,  X));
-   Max : Integer := (parallel for X of Arr => Integer'Max(<Integer'First>, X));
-
-Here we have three calculations that occur in parallel, but sequentially with
-respect to each other. The performance benefits of parallelism should be
-noticeable for larger arrays, but one might want to calculate these three
-results iterating only once through the array.
-
-This can be accomplished by creating a composite result type, and writing a user
-defined reducer function.
-
-   type Summary is
-      record
-         Sum : Integer;
-         Min : Integer;
-         Max : Integer;
-      end record;
-
-   -- Identity value for the Reduce function
-   Identity : constant Summary := Summary'(Sum => 0,
-                                           Min => Integer'Last,
-                                           Max => Integer'First);
-
-   -- Reducer function for the Reduction expression
-   function Reduce (L, R : Summary) return Summary with Identity => Identity is
-     (Summary'(Sum => L + R,
-               Min => Integer'Min (L, R),
-               Max => Integer'Max (L, R)));
-
-   -- Combiner function for the Reduction expression
-   function Process (L : Summary; X : Integer) return Summary
-     with Reducer => Reduce is
-       (Summary'(Sum => L.Sum + X,
-                 Min => Integer'Min (L.Min, X),
-                 Max => Integer'Max (L.Max, X)));
-
-   -- Reduction expression to compute all 3 results at once
-   Result : constant Summary :=
-        (for parallel X of Arr => Process (<Identity>, X));
-
 !ASIS
 
 ** TBD.
@@ -5874,14 +5669,14 @@
 ...
 > I fixed some minor issues in this AI (without making a separate version):
 ...
-> There was a missing ) in the first paragraph of the Legality Rules for 
+> There was a missing ) in the first paragraph of the Legality Rules for
 > 4.5.9.
 
 Thanks for catching these, Randy.
 
 > There was a stray "Modify 5.5.2", which was removed.
 >
-> Some of the formatting was unusually narrow (while it was unusually 
+> Some of the formatting was unusually narrow (while it was unusually
 > wide last time - can't win, I guess).
 
 I dont know how wide is good. I had my text editor set at a 72 character
@@ -5890,14 +5685,14 @@
 
  ===
 >
-> Comment: Examples in the RM need to be complete, typically by 
-> depending on declarations from previous examples. None of the examples 
-> in 4.5.9 or 5.6.1 seem to do that. (A couple might be stand-alone, 
+> Comment: Examples in the RM need to be complete, typically by
+> depending on declarations from previous examples. None of the examples
+> in 4.5.9 or 5.6.1 seem to do that. (A couple might be stand-alone,
 > which is OK, but I didn't check carefully.) That needs to be fixed.
 >
 > Comment: You added "reducer" aspects as needed to Ada.Strings and the like.
-> But don't the containers need something similar? I could do that in 
-> AI12-0112-1, but I'd have to know what is needed (and I'm not the 
+> But don't the containers need something similar? I could do that in
+> AI12-0112-1, but I'd have to know what is needed (and I'm not the
 > right person to figure that out).
 
 I thought I had gone through the containers in this AI. It should be there.
@@ -5913,7 +5708,7 @@
 
 ...
 > I dont know how wide is good. I had my text editor set at a
-> 72 character margin by default and tried to stick to that. 
+> 72 character margin by default and tried to stick to that.
 > Would 80 characters be better?
 
 The usual is either 79 or 80 (depends on whether a program or I am doing it).
@@ -5921,3 +5716,767 @@
 Tough to compare.
 
 ****************************************************************
+
+From: Brad Moore
+Sent: Thursday, December 14, 2017  9:45 AM
+
+I have been thinking more about parallelization problems that need what I have
+been calling, manual chunking, where the user needs to have more control of
+how loops are broken down into chunks.
+
+Previously I had identified a number of such problems, including loops that
+involve synchronous barriers, such as gaussian elimination for solving
+matrices, or where multiple loops need to operate on an array using the same
+chunking, such as for computing a cumulative sum of an array of elements.
+
+Since then, I have identified more generally the need, which basically amounts
+to cases where user-provided initialization and/or finalization needs to be
+applied to the chunks.
+
+I think a good example for this need is the case of parallel file IO. Consider a
+large file of fixed-size records where all records in the file need to be
+processed, and where it is desired to do this in parallel.
+
+It would be too inefficient to open and close a file handle for each record, and
+then seek to the position of the record to be modified.
+
+What is needed here is for each chunk to open its own file handle, process the
+records of that chunk, then close the file handle. It is not reasonable to
+expect that the compiler will be able to automatically provide such chunk
+initialization and finalization, and I think trying to accomplish this with
+syntax using aspects would be too messy. A library approach seems like a better
+alternative.
+
+Another example is memory allocation, where when processing the elements of an
+array or container, a temporary data structure is needed for the processing.
+Again, rather than allocate and deallocate this data structure for every element
+of the array or container, it makes better sense to allocate the structure once
+for each chunk (whether it be allocated from the stack or the heap), and then
+free the data structure once the chunk processing is complete.
+
+It seems that there ought to be a standard library/facility that can be used to
+generate the chunking. This makes sense in particular because;
+  - Such a library can be non-trivial to write
+  - Such a library if provided by the vendor can be better tuned to how the
+    vendor implements parallelism for parallel loops, blocks, etc.
+
+I have been prototyping what such a library might look like, and have a
+working implementation that also ties in to how parallel iteration can be
+added to all the standard containers.
+
+The basic idea is to provide a special container library that can generate a
+chunk-vector. The chunk-vector is an abstraction that essentially is a vector
+of chunk boundaries, where each chunk boundary identifies the start and end
+cursors of a larger set of iteration.
+
+I have two generic libraries for this. One is intended for use when iterating
+over discrete subtypes, and the other is intended for use when iterating over
+containers.
+
+The basic skeleton of the discrete chunker is;
+
+generic
+   type Loop_Cursor is (<>);
+package Ada202x.Containers.Loop_Chunking.Discrete_Iteration is
+
+   type Chunk_Bounds is tagged private;
+
+   function Start  (Chunk : Chunk_Bounds) return Loop_Cursor;
+   function Finish (Chunk : Chunk_Bounds) return Loop_Cursor;
+
+   type Chunk_Vector (<>) is tagged limited private
+   with
+      Constant_Indexing => Constant_Reference,
+      Default_Iterator  => Iterate,
+      Iterator_Element  => Chunk_Bounds;
+
+   function Chunks
+     (From : Loop_Cursor := Loop_Cursor'First;
+      To   : Loop_Cursor := Loop_Cursor'Last) return Chunk_Vector;
+
+   type Cursor is private;
+
+   package Chunk_Vector_Iterator_Interfaces is new
+     Ada202x.Iterator_Interfaces (Cursor, Has_Element);
+
+   function Iterate (Container : Chunk_Vector)
+      return Chunk_Vector_Iterator_Interfaces.Parallel_Iterator'Class;
+
+private
+   ...
+end Ada202x.Containers.Loop_Chunking.Discrete_Iteration;
+
+The basic skeleton of the container chunker is very similar, differing only in
+the generic formal parameters;
+
+generic
+   type Loop_Cursor is private;
+
+   with package Container_Iterator_Interfaces is new
+     Ada202x.Iterator_Interfaces(Cursor       => Loop_Cursor,
+                                 Has_Element  => <>);
+
+package Ada202x.Containers.Loop_Chunking.Container_Iteration is
+
+   type Chunk_Bounds is tagged private;
+
+   function Start  (Chunk : Chunk_Bounds) return Loop_Cursor;
+   function Finish (Chunk : Chunk_Bounds) return Loop_Cursor;
+
+   type Chunk_Vector (<>) is tagged limited private
+   with
+      Constant_Indexing => Constant_Reference,
+      Default_Iterator  => Iterate,
+      Iterator_Element  => Chunk_Bounds;
+
+   function Chunks
+     (Iterator : Container_Iterator_Interfaces.Parallel_Iterator'Class)
+      return Chunk_Vector;
+
+   type Cursor is private;
+
+   package Chunk_Vector_Iterator_Interfaces is new
+     Ada202x.Iterator_Interfaces (Cursor, Has_Element);
+
+   function Iterate (Container : Chunk_Vector)
+      return Chunk_Vector_Iterator_Interfaces.Parallel_Iterator'Class;
+
+private
+ ...
+end Ada202x.Containers.Loop_Chunking.Container_Iteration;
+
+I've also added the following parallel iterator interface to
+Ada.Iterator_Interfaces;
+
+   use type Ada202x.Containers.Count_Type;
+   subtype Count_Type is Ada202x.Containers.Count_Type;
+
+   type Cursor_Offset is range -(Count_Type'Last - 1) .. Count_Type'Last - 1;
+
+   type Parallel_Iterator is limited interface and Forward_Iterator;
+
+   function Length
+     (Object   : Parallel_Iterator) return Count_Type is abstract;
+   -- A count of the number of elements in the container to be iterated over
+
+   function Jump
+     (Object   : Parallel_Iterator;
+      Position : Cursor;
+      Offset   : Cursor_Offset) return Cursor is abstract;
+   -- A means to generate a cursor based on an offset number of elements
+   -- from another cursor of the container.
+
+   type Reversible_Parallel_Iterator is
+      limited interface and Parallel_Iterator and Reversible_Iterator;
+
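+As a rough illustration of how these operations might be used (the names It,
+Chunk_Size, and the fixed four-way split below are placeholders, not part of
+the proposal), a loop expansion or a chunking library could derive per-chunk
+starting cursors like this:
+
+   --  Illustrative sketch only; assumes It is of a type in
+   --  Parallel_Iterator'Class and the container has at least four elements.
+   declare
+      Chunk_Size : constant Count_Type := It.Length / 4;  --  four chunks, say
+      C1 : constant Cursor := It.First;
+      C2 : constant Cursor := It.Jump (C1, Cursor_Offset (Chunk_Size));
+      C3 : constant Cursor := It.Jump (C2, Cursor_Offset (Chunk_Size));
+      C4 : constant Cursor := It.Jump (C3, Cursor_Offset (Chunk_Size));
+   begin
+      --  Each tasklet would then iterate Chunk_Size elements (the last
+      --  chunk possibly more) from its own starting cursor, using Next.
+      null;
+   end;
+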
+A chunk-vector is a container that can be iterated over in parallel using
+constructs such as parallel loops and reduction expressions.
+
+To see how this might be used, consider the parallel IO example I described
+above, where we have a file of employee records, and we want to update all the
+records to give everyone a 2% raise of their current salary.
+
+One could write;
+
+with Ada.Direct_IO;
+
+with Ada202x.Containers.Loop_Chunking.Discrete_Iteration;
+
+procedure Test_Chunking
+is
+   subtype Last_Name is String (1 .. 12);
+   subtype Address is String (1 .. 20);
+   type Annual_Salary is delta 0.01 digits 15;
+
+   type Employee is record
+      Employee_Name : Last_Name;
+      Home_Address  : Address;
+      Salary        : Annual_Salary;
+   end record;
+
+   package Employee_IO is new Ada.Direct_IO (Element_Type => Employee);
+
+   Data_Filename : constant String := "Employees.dat";
+
+   --  Get_Record_Count is an assumed helper that returns the number of
+   --  records in the file
+   Record_Count : constant Employee_IO.Positive_Count := Get_Record_Count;
+
+   subtype Loop_Index is Employee_IO.Positive_Count range 1 .. Record_Count;
+
+   package Manual_Chunking is new
+      Ada202x.Containers.Loop_Chunking.Discrete_Iteration (Loop_Index);
+
+   procedure Give_Raise
+     (From, To    : Loop_Index)
+   is
+      Employee_File    : Employee_IO.File_Type;
+      Current_Employee : Employee;
+   begin
+
+      Employee_IO.Open (File => Employee_File,
+                        Mode => Employee_IO.Inout_File,
+                        Name => Data_Filename,
+                        Form => "shared=no");
+
+      Employee_IO.Set_Index (File => Employee_File,
+                             To   => From);
+
+      for Position in From .. To loop
+
+         Employee_IO.Read (File => Employee_File,
+                           Item => Current_Employee);
+
+         --  2% Raise to everyone!
+         Current_Employee.Salary := @ + @ * 0.02;
+
+         --  Write back to the record just read (Read advanced the index)
+         Employee_IO.Write (File => Employee_File,
+                            Item => Current_Employee,
+                            To   => Position);
+      end loop;
+
+      Employee_IO.Close (Employee_File);
+
+   end Give_Raise;
+
+begin --  Test_Chunking
+
+   parallel
+   for Chunk of Manual_Chunking.Chunks loop
+      Give_Raise (From => Chunk.Start,
+                  To   => Chunk.Finish);
+   end loop;
+
+end Test_Chunking;
+
+One might further enhance this example to use the anonymous loop body syntax, if
+that AI is approved.
+
+A similar example could be concocted for manually chunking parallel iteration of
+a container.
+
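+For instance, using the container chunker and the Doubly_Linked_Lists changes
+sketched here, a purely illustrative version might look like the following
+(Test_Container_Chunking, Integer_Lists, Process_Chunk, and Local_Sum are
+made-up names, and Local_Sum merely stands in for whatever chunk-local state
+the application needs):
+
+with Ada202x.Containers.Doubly_Linked_Lists;
+with Ada202x.Containers.Loop_Chunking.Container_Iteration;
+
+procedure Test_Container_Chunking is
+
+   package Integer_Lists is new
+     Ada202x.Containers.Doubly_Linked_Lists (Element_Type => Integer);
+   use Integer_Lists;
+
+   package Manual_Chunking is new
+     Ada202x.Containers.Loop_Chunking.Container_Iteration
+       (Loop_Cursor                   => Cursor,
+        Container_Iterator_Interfaces => List_Iterator_Interfaces);
+
+   Data : List;  --  assumed to be filled in elsewhere
+
+   procedure Process_Chunk (From, To : Cursor) is
+      --  Chunk-local state, created once per chunk rather than once
+      --  per element
+      Local_Sum : Integer := 0;
+      Position  : Cursor  := From;
+   begin
+      loop
+         Local_Sum := Local_Sum + Element (Position);
+         Data.Replace_Element (Position, Local_Sum);
+         exit when Position = To;
+         Position := Next (Position);
+      end loop;
+   end Process_Chunk;
+
+begin
+
+   --  Each chunk covers a disjoint set of elements, so the updates of
+   --  different tasklets do not overlap.
+   parallel
+   for Chunk of Manual_Chunking.Chunks (Data.Iterate) loop
+      Process_Chunk (From => Chunk.Start, To => Chunk.Finish);
+   end loop;
+
+end Test_Container_Chunking;
+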
+Further to this, once we've defined the parallel iterator interface, we could
+update all the standard containers to use this interface.
+
+For example, I have prototyped the changes to
+Ada.Containers.Doubly_Linked_Lists;
+
+The relevant bits are;
+
+with Ada202x.Iterator_Interfaces;
+
+generic
+...
+package Ada202x.Containers.Doubly_Linked_Lists is
+
+   ...
+
+   package List_Iterator_Interfaces is new
+     Ada202x.Iterator_Interfaces (Cursor, Has_Element);
+
+   ...
+
+   function Iterate (Container : List)
+      return List_Iterator_Interfaces.Reversible_Parallel_Iterator'Class;
+
+   function Iterate (Container : List; Start : Cursor)
+      return List_Iterator_Interfaces.Reversible_Parallel_Iterator'Class;
+
+    ...
+private
+
+end Ada202x.Containers.Doubly_Linked_Lists;
+
+If a loop is written without the parallel keyword or reverse keyword, you get a
+forward iterator. If you have the reverse keyword, you get the reverse iterator,
+or if you have the parallel keyword, you get the parallel iterator semantics.
+
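+For example (illustrative only; My_List and Process are placeholder names):
+
+   for E of My_List loop            --  forward iteration
+      Process (E);
+   end loop;
+
+   for E of reverse My_List loop    --  reverse iteration
+      Process (E);
+   end loop;
+
+   parallel
+   for E of My_List loop            --  parallel iteration
+      Process (E);
+   end loop;
+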
+Note also that when user initialization/finalization of chunks is not needed,
+the implementation could fall back on its own chunking strategies, but a vendor
+may find that the chunking libraries I've described above are also useful for
+implementing its own parallelism for these cases.
+
+Does this seem like ideas worth pursuing? If so, should this be added to the
+parallel operations AI (AI12-0119-1), or should this be a separate AI?
+
+***************************************************************
+
+From: Tucker Taft
+Sent: Thursday, December 14, 2017  4:56 PM
+
+The paper about Generalized Parallel Iterators from Ada-Europe 2016 seems relevant:
+
+   http://www.ada-europe.org/archive/auj/auj-37-2.pdf  (scan to physical page
+   31, logical page 95)
+
+The accompanying presentation is here:
+
+    http://www.cister.isep.ipp.pt/ae2016/presentations/taft.pdf
+
+It proposes a syntax that would allow the following:
+
+declare
+   package String_Hyp_Objs is new
+     Indefinite_Hyper_Objects (String, Identity => "", Reducer => "&");
+   String_Accum : String_Hyp_Objs.Accumulator (Count => Num_Chunks);
+begin
+   for Elem of parallel(C in 1 .. String_Accum.Count) My_Str_Vector loop
+      String_Accum.Update (Index => C, Element => Elem);
+      -- Concatenate Elem onto end of growing accumulated result
+   end loop;
+
+   Put_Line (String_Accum.Reduce);
+   -- Put concatenation of all strings from My_Str_Vector
+end;
+
+Since we have moved the location of the "parallel" reserved word in newer
+proposals, we would have to move the part that declares the "chunk index" C,
+namely "C in 1..String_Accum.Count"  (using syntax reminiscent of an
+entry-family index).  But the basic idea of providing an explicit chunk index
+still seems viable.
+
+Providing a chunk index to the body of the loop might enable many of the
+capabilities you are suggesting.  Would such a chunk index be enough?
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Thursday, December 14, 2017  5:17 PM
+
+...
+> Does this seem like ideas worth pursuing? If so, should this be added
+> to the parallel operations AI (AI12-0119-1), or should this be a
+> separate AI?
+
+I can't really answer the first part (again, it makes me wonder if there is
+sufficient value to the "usual" parallel loop), but the second part is pretty
+clear: it has to be a separate AI. If we don't buckle down and finish some
+version of the parallel loop stuff ASAP, there is no chance that any of it
+will be ready the summer after next (when our work on Ada 2020 completes).
+
+If this really isn't sufficiently mature to standardize (and the fact that you
+have a radically different proposal every 2-3 months strongly suggests that),
+then we should completely forget it for this cycle and spend our time on other
+proposals that actually have a chance to converge. We've only got
+18 months to finish this!!!
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, December 14, 2017  5:21 PM
+
+> Does this seem like ideas worth pursuing? If so, should this be added
+> to the parallel operations AI (AI12-0119-1), or should this be a separate AI?
+
+I would recommend a separate AI.  I believe we have also recommended splitting
+AI12-0119 into two AIs, one about reduction expressions and one about parallel
+loop/bock.  Perhaps you could do that sooner rather than later, so we will have
+a new AI for reduction expressions that we can reference.  Ed Schonberg at
+AdaCore has already implemented a version of reduction expressions, and it is
+pretty nice.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Thursday, December 14, 2017  5:29 PM
+
+Yes, all of the above is true. And with a January 26th homework deadline, all
+homework needs to be done sooner rather than later. Hint, Hint!! :-)
+
+BTW, the "new idea" deadline for ARG members is June 15th, so keep that in
+mind. After June, we're supposed to ignore new feature ideas from everyone,
+including each other. (For the general public, it is January 15th.)
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, December 14, 2017  5:41 PM
+
+> If this really isn't sufficiently mature to standardize (and the fact
+> that you have a radically different proposal every 2-3 months strongly
+> suggests that), then we should completely forget it for this cycle and
+> spend our time on other proposals that actually have a chance to
+> converge. We've only got 18 months to finish this!!!
+
+This is not a good outcome from my perspective.  Our prime directive should
+be to get parallel blocks and parallel loops done.  So I agree with Randy that
+we should finish the basic capabilities that we started, before inventing
+anything new.  Splitting the AI into two parts would be a good start, and then
+refining the wording of each as needed.
+
+****************************************************************
+
+From: Jeff Cousins
+Sent: Tuesday, December 19, 2017  4:48 AM
+
+How active is the “Gang of Four” concept?  Probably naively, I’ve been
+assuming that they would have been having e-mail discussions, and possibly
+even a meeting, outside of the full ARG.  Or is the correspondence that we’ve
+been seeing on the ARG list everything?
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Tuesday, December 19, 2017  8:08 AM
+
+No regular "gang-of-four" meetings these days.  I think we are largely past
+that stage, and moved into the "AI" stage, where the ARG is the center of
+action.
+
+****************************************************************
+
+From: Brad Moore
+Sent: Wednesday, December 27, 2017  11:00 PM
+
+Here is a first chunk of my homework.
+
+Here we have what remains of AI12-0119-1. [This is version /05 - ED.]
+
+As you may recall, reduction expression syntax is being separated from this AI,
+to be placed in a new so-far unknown AI. Coroutine capabilities are also
+similarly deferred to a different AI.
+
+I have hopefully addressed the comments from the last meeting.
+Some notable new bits are;
+
+- I have added parallel iterator interfaces to Ada.Iterator_Interfaces to
+  facilitate container iteration, and updated all the containers to have
+  parallel iterators (in addition to the iterator schemes currently supported).
+
+- I have also attempted to add legality rules to eliminate data races
+  in parallel loops and parallel blocks, and a new suppressible check called
+  Loop_Parameter_Check, which dynamically checks that the loop parameters
+  are not used non-sequentially between tasklets to eliminate that as a source
+  of erroneousness.
+
+- I am hoping that the global AI means that most data races can be eliminated as
+  a legality check, rather than a dynamic runtime check, or static warning.
+
+- I think the only ones that need dynamic checking are those that update global
+  variables that also mention the loop parameter variable.
+
+Cheers, Merry Christmas, and Happy New Year!
+
+***************************************************************
+
+From: Jean-Pierre Rosen
+Sent: Thursday, December 28, 2017  12:07 AM
+
+> On the other hand, calls by different
+> tasklets of the same task into the same protected object are treated
+> as different calls resulting in distinct protected actions; therefore
+> synchronization between tasklets can be performed using protected
+> operations.
+
+What if a tasklet calls an entry of the enclosing task, and another one accepts
+it? It would work with real parallelism, but it would be a sure deadlock if
+executed sequentially.
+
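+For instance (sketch only, using the parallel block syntax proposed in this
+AI; T and Go are made-up names):
+
+   task body T is
+   begin
+      parallel
+         accept Go;   --  one tasklet accepts the entry...
+      and
+         T.Go;        --  ...while another tasklet calls it
+      end parallel;
+   end T;
+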
+Maybe forbid accepts in parallel statements?
+
+***************************************************************
+
+From: Jeff Cousins
+Sent: Thursday, December 28, 2017  2:25 AM
+
+Thanks for doing that Brad, and Happy Christmas and New Year everyone.
+
+***************************************************************
+
+From: Brad Moore
+Sent: Thursday, December 28, 2017  11:03 AM
+
+> Maybe forbid accepts in parallel statements?
+
+Good point, I agree that probably a legality rule should be added to forbid
+accept statements in parallel blocks and loops.
+
+Note, a similar problem can occur for protected objects, for example if one side
+of a parallel block is a producer and another is a consumer. If executing
+sequentially, the consumer might block waiting for the producer, which could
+deadlock, whereas this would work fine for parallel execution. I think what we'd
+want is to fall back to coroutine-like behaviour when executing sequentially,
+where a tasklet that blocks implicitly yields, allowing other tasklets to execute.
+
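+As a rough sketch (Buffer, Consume, and the bounds are made-up names; Buffer
+is a bounded protected buffer whose Put and Get entries block when it is full
+or empty):
+
+   parallel
+      for I in 1 .. 1_000 loop
+         Buffer.Put (I);        --  blocks when the buffer is full
+      end loop;
+   and
+      declare
+         Item : Integer;
+      begin
+         for I in 1 .. 1_000 loop
+            Buffer.Get (Item);  --  blocks when the buffer is empty
+            Consume (Item);
+         end loop;
+      end;
+   end parallel;
+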
+Just prior to last meeting, the AI dealt more with sequential behaviour for the
+case of a parallel block without the parallel keyword. It was considered to be a
+coroutine. We decided at last meeting to move the coroutine part of it to a
+separate AI, but these AIs are still closely intertwined because even if the
+parallel keyword is present we need to consider the case where the
+implementation decides to ignore that hint from the programmer (perhaps due to
+oversubscription of cores from other parallel statements) and execute the
+parallel block sequentially.
+
+I think I need to add some wording to this AI that talks about implicit yields
+when tasklets are blocked. The other AI can probably focus more on explicit
+yield statements.
+
+***************************************************************
+
+From: Randy Brukardt
+Sent: Friday, December 29, 2017  12:35 AM
+
+> > Maybe forbid accepts in parallel statements?
+>
+> Good point, I agree that probably a legality rule should be added to
+> forbid accept statements in parallel blocks and loops.
+
+I thought the intent was to not allow blocking in bodies of parallel blocks and
+loops. Otherwise, a tasklet could hardly be a lightweight entity, as it would
+require the entire mechanisms of an Ada task (including queuing, priority
+inheritance, and the like). While that certainly would be more capable, it also
+would be much more dangerous.
+
+We have a mechanism for compile-time enforcement of nonblocking (just finished),
+and I had thought part of the reason for defining that mechanism was to use it
+in parallel blocks and loops. Otherwise, it seems like it was a lot of work for
+minimal gain.
+
+> Note, a similar problem can occur for protected objects, for example
+> if one side of a parallel block is a producer and another is a
+> consumer. If executing sequentially, the consumer might block waiting
+> for the producer, which could deadlock, whereas this would work fine
+> for parallel execution. I think what we'd want is to fall back to
+> coroutine-like behaviour if executing sequentially where if a tasklet
+> blocks, it implicitly yields allowing other tasklets to execute.
+
+Again, I'd expect this sort of thing to be banned. Protected operations still
+could be used to get mutual exclusion, but entry calls would not be allowed.
+
+I don't see any reasonable way to reason about the behavior of tasklets if
+scenarios like the above are allowed, since neither the number of tasklets nor
+how they "chunk" iterations is known to the author of the parallel loop. The
+only portable way to write such a loop is to avoid blocking altogether (since
+there can be no reliable way to ensure that deadlock can't happen).
+Communication between tasklets has to be prevented (outside of reduction
+expressions, of course).
+
+> Just prior to last meeting, the AI dealt more with sequential
+> behaviour for the case of a parallel block without the parallel
+> keyword. It was considered to be a coroutine.
+> We decided at last meeting to move the coroutine part of it to a
+> separate AI, but these AIs are still closely intertwined because even
+> if the parallel keyword is present we need to consider the case where
+> the implementation decides to ignore that hint from the programmer
+> (perhaps due to oversubscription of cores from other parallel
+> statements) and execute the parallel block sequentially.
+>
+> I think I need to add some wording to this AI that talks about
+> implicit yields when tasklets are blocked. The other AI can probably
+> focus more on explicit yield statements.
+
+You can't be serious. This is way worse than my original fear; not only do the
+supposedly lightweight tasklets have to carry all of the machinery of Ada tasks,
+but they also have to act like coroutines, with an entire additional level of
+complication.
+
+Assuming that the coroutine proposal actually get fleshed out and into the
+Standard, a user could write such a mechanism if they wanted it. But paying for
+it all the time seems mad to me, and you still wouldn't be able to reason in any
+useful way about how tasklets execute (since you don't know the mapping).
+
+Communication between parallel loop iterations (and between parallel block
+"limbs") should be banned. Otherwise, it means that the supposedly "parallel"
+operations aren't the independent entities that can actually be run in parallel
+in the first place.
+
+I could imagine relaxing such restrictions in the case when the number of
+tasklets and their mapping is specified via some future mechanism. (In that
+case, one could actually portably reason about the behavior.) But it seems to me
+that we have Ada tasks for such needs; parallel blocks and loops are supposed to
+be easy-to-use, safe-by-construction facilities to that make it simple to
+introduce parallelism into a program. If things are complex enough, a library
+like your (Brad) Parafin makes more sense. It can't have safety-by-construction
+but can allow more complex structures.
+
+***************************************************************
+
+From: Brad Moore
+Sent: Friday, December 29, 2017  12:33 AM
+
+> We have a mechanism for compile-time enforcement of nonblocking (just
+> finished), and I had thought part of the reason for defining that
+> mechanism was to use it in parallel blocks and loops. Otherwise, it
+> seems like it was a lot of work for minimal gain.
+
+One of the later developments of the gang of four was that we determined that it
+would be OK to support protected actions in tasklets. That doesn't necessarily
+mean we should support that, but that it is an option, so we included it in our
+proposal. I'd have to look back, but it might have been that we were more keen
+on supporting protected function calls and protected procedure calls, but not so
+much blocking calls.
+
+I agree that disallowing blocking calls would simplify the proposal, and would
+be OK with doing that if that is the general consensus. We could always add that
+in later if we felt it was needed, but it is much harder to disallow something
+later once it gets in the standard, as you know.
+
+***************************************************************
+
+From: Randy Brukardt
+Sent: Friday, December 29, 2017  6:44 PM
+
+...
+> One of the later developments of the gang of four was that we
+> determined that it would be OK to support protected actions in
+> tasklets. That doesn't necessarily mean we should support that, but
+> that it is an option, so we included it in our proposal. I'd have to
+> look back, but it might have been that we were more keen on supporting
+> protected function calls and protected procedure calls, but not so
+> much blocking calls.
+
+Protected subprogram calls can be considered nonblocking assuming the protected
+type is declared that way (meaning that static nonblocking checking is applied
+to the operations). These are important for mutual exclusion, and so they would
+have to be allowed (and they are with suitable declarations, which applies to
+everything).
+
+> I agree that disallowing blocking calls would simplify the proposal,
+> and would be OK with doing that if that is the general consensus. We
+> could always add that in later if we felt it was needed, but it is
+> much harder to disallow something later once it gets in the standard,
+> as you know.
+
+Right, that's part of my thinking here. If we find some compelling examples
+where some limited blocking would work and make sense even if the mapping of
+iterations to tasklets is unknown, then we can revisit. But the goal is
+safety-by-construction: if the loop compiles, it will execute safely without
+deadlocks or races. That's a lot easier if we don't have to worry about blocking
+(it's not clear to me that it is even possible if blocking is involved).
+
+And I want tasklets to be as light-weight as possible. Indeed, I would like to
+view tasks as being constructed on top of tasklets by adding blocking
+capabilities and the like.
+
+***************************************************************
+
+From: Jeff Cousins
+Sent: Monday, January 1, 2018  6:15 PM
+
+Thanks Brad for all that.
+
+Some comments, mostly minor/typos:
+
+!proposal 2nd para
+
+notational -> notional ?
+
+!proposal 5th para
+
+9.10 ([23]{11???})), -> simply 9.10(11) ?
+
+!proposal 6th para
+
+I don’t think that you can say that tasklets are orthogonal to tasks,
+particularly as the very next sentence covers how they relate; tasklets are a
+sort of tasks--.
+
+Parallel Blocks 3rd para last sentence
+
+Following on from Jean-Pierre’s and Randy’s comments, this would reduce to
+something like “The appropriate use of atomic or protected subprogram calls can
+be used to avoid erroneous execution.”
+
+Parallel Blocks 4th para
+
+Potentially_Blocking has been superseded by Nonblocking.
+
+Parallel Blocks 5th para
+
+“contgrol” -> “control”
+
+!wording
+
+New paragraph after 5.5 (9/4)
+
+This is out of order as 5.5 (7) follows.
+
+“an global” -> “a global”
+
+Modify to 5.5 (9/4)
+
+“decreasing order {or unless” -> “decreasing order{, or unless”
+
+“Examples after 5.5(20)” -> “Examples after 5.5(21)”
+
+Modify 5.5.1 (6/3)
+
+“decended” -> “descended”
+
+Separate sentences for parallel iterator object and parallel reversible iterator
+object would be more consistent wording.
+
+Modify 5.5.2 (4/3)
+
+“iterator_specificaiton” -> “iterator_specification”
+
+Modify 5.5.2 (7/5)
+
+“the nominal subtype of the loop parameter{s are} [is]” -> “the nominal subtype
+of the loop parameter{s} is” – parameters may be plural but their subtype is
+singular.
+
+Modify 5.5.2 (10/3)
+
+“the denoted iterator object{s} becomes” -> “the denoted iterator object{s}
+become[s]”
+
+“5.6.1 Parallel Block Statements -> Insert new paragraph 5.6.1 Parallel Block Statements
+
+? Why the opening quotes, and isn’t all of this part an insertion?  Why the []
+around the first paragraph?
+
+Add new paragraph after 11.5 (20)
+
+“elememnt” -> “element”
+
+Modify A.18.2 (230.2/3)
+
+Modify A.18.3 (144.2/3)
+
+Modify A.18.6 (94.2/3)
+
+Modify A.18.9 (113.2/3)
+
+“notes” -> “nodes”
+
+Modify A.18.2 (230.4/3)
+
+Modify A.18.3 (144.4/3)
+
+Modify A.18.6 (94.4/3)
+
+Modify A.18.9 (113.4/3)
+
+Modify A.18.10 (159/3)
+
+“simulataneously” -> “simultaneously”
+
+Modify A.18.10 (159/3)
+
+“or or” -> “or”
+
+**************************************************************
+
+From: Brad Moore
+Sent: Monday, January 1, 2018  12:33 AM
+
+Thanks Jeff for your comments.
+
+Since there are now quite a lot of changes being discussed compared to the
+version of the AI that was sent out,
+
+I have attached a new version of the AI. [This is version /06 of the AI - ED.]
+
+This version includes the following changes;
+
+1) Addressing Jeff's Comments.
+2) Potentially Blocking operations are now disallowed in parallel loop and
+   parallel block statements
+3) Added Nonblocking aspect to the new routines where appropriate
+4) Modified the Parallel Iterator interface to have Tuck's Split function idea,
+   instead of the Jump function.
+5) Added an Advised_Split parameter to the Split function so that the
+   implementation can recommend the number of splits to be applied to the
+   iterator, allowing container implementors to override the recommendation if
+   desired.
+6) Added a function in System.Multiprocessors to query the implementation for a
+   recommended split count, that can be called by User code if desired.
+7) Added a package Ada.Discrete_Chunking, which basically contains a single
+   Split function that returns a Chunk_Array that can be used for parallel
+   iteration involving manual chunking of discrete types, similar to what is
+   also possible using containers.
+
+Happy New Year everyone!
+
+***************************************************************
