CVS difference for ai12s/ai12-0119-1.txt

Differences between 1.22 and version 1.23
Log of other versions for file ai12s/ai12-0119-1.txt

--- ai12s/ai12-0119-1.txt	2018/07/15 00:25:57	1.22
+++ ai12s/ai12-0119-1.txt	2018/09/05 21:28:45	1.23
@@ -1,5 +1,25 @@
-!standard 2.9(2/3)                                   18-07-13    AI12-0119-1/12
+!standard 0.3(28)                                    18-08-31    AI12-0119-1/13
+!standard 2.9(2/3) 
+!standard 5.1(1)
+!standard 5.1(5/2)
+!standard 5.1(15)
+!standard 5.5(1)
+!standard 5.5(3/3)
+!standard 5.5(5)
+!standard 5.5(7)
+!standard 5.5(9/4)
+!standard 5.5(21)
 !standard 5.6.1(0)
+!standard 9(1/3)
+!standard 9(10)
+!standard 9(11)
+!standard 9.5.1(7/4)
+!standard 9.8(17)
+!standard 9.10(1/3)
+!standard 9.10(2)
+!standard 9.10(13)
+!standard D.2.1(4/2)
+!standard D.16.1(33/3)
 !class Amendment 14-06-20
 !status Amendment 1-2012 18-07-12
 !status ARG Approved 7-0-2  18-06-24
@@ -16,11 +36,11 @@
 !problem
 
 The increased presence of parallel computing platforms brings concerns to the
-general purpose domain that were previously prevalent only in the specific niche
-of high-performance computing. As parallel programming technologies become more
-prevalent in the form of new emerging programming languages and extensions of
-existing languages, safety concerns need to consider the paradigm shift from
-sequential to parallel behavior.
+general purpose domain that were previously prevalent only in the specific 
+niche of high-performance computing. As parallel programming technologies 
+become more prevalent in the form of new emerging programming languages and 
+extensions of existing languages, safety concerns need to consider the paradigm 
+shift from sequential to parallel behavior.
 
 Ada needs mechanisms to allow the programmer to specify explicitly in the code
 where parallelism should be applied. In particular, loops are often a good
@@ -28,12 +48,14 @@
 parallelism more generally to allow potentially different groups of calls to be
 executed in parallel with other groups of calls. While Ada tasks can be used to
 generate parallel execution, they are better suited for separating concerns of
-independent processing where the need for parallelization is mostly a
-performance concern, where it should be easy to convert an existing sequential
-algorithm into a parallel one, or vice versa. Declaring tasks for this purpose
-is too error prone, requires significant rework, more difficult to maintain, and
-not as easily transferable to different hardware platforms. It is better to rely
-more on the compiler to map the parallelism to the target platform.
+independent processing whereas the need for parallelization is mostly a
+performance concern. Since parallelism can be considered a form of optimization
+that can be applied later in the development cycle, it should be easy to 
+convert an existing sequential algorithm into a parallel one, or vice 
+versa. Declaring tasks for this purpose is too error prone, requires 
+significant rework, is more difficult to maintain, and is not as easily 
+transferable to different hardware platforms. It is better to rely more on the 
+compiler to map the parallelism to the target platform.
 
 !proposal
 
@@ -41,7 +63,7 @@
 loops. A parallel block consists of a set of concurrent activities each
 specified by a handled sequence of statements, separated by the reserved
 word "and," analogous to the syntax for a select statement where the
-alternatives are separated by the reserved word "or."  A parallel loop
+alternatives are separated by the reserved word "or." A parallel loop
 defines a loop body which is designed such that the various iterations
 of the loop can run concurrently.  The implementation is expected to
 group the iterations into "chunks" to avoid creating an excessive number
@@ -65,16 +87,16 @@
 Modify 2.9(2/3):
   Add "parallel" to the list of reserved words.
 
-Add a sentence to the end of 5.1(1):
+Add to the end of 5.1(1):
 
   A /parallel construct/ is a construct that introduces additional
-  logical threads of control (see 9) without creating a new task.
+  logical threads of control (see clause 9) without creating a new task.
   Parallel loops (see 5.5) and parallel_block_statements (see
   5.6.1) are parallel constructs.
 
 Modify 5.1(5/2):
 
-  add "parallel_block_statement" to the list of compound statements.
+  Add "parallel_block_statement" to the list of compound statements.
 
 Add after 5.1(15):
 
@@ -83,7 +105,7 @@
   attempt is made to /cancel/ all other logical threads of control
   initiated by the parallel construct. Once all other logical threads of
   control of the construct either complete or are canceled, the transfer
-  of control occurs.  If two or more logical threads of control of the
+  of control occurs. If two or more logical threads of control of the
   same construct initiate such a transfer of control concurrently, one
   of them is chosen arbitrarily and the others are canceled.
 
@@ -117,7 +139,7 @@
    | [parallel] for loop_parameter_specification
    |            for iterator_specification
 
-Add after 5.5(6/5):
+Add after 5.5(5):
 
   [editor: this rule is still part of the syntax rules]
   An iteration_scheme that begins with the reserved word parallel shall
@@ -163,7 +185,7 @@
 
   Example of a parallel loop:
 
-  -- See 3.6(30/2)
+  -- see 3.6
   parallel for I in Grid'Range(1) loop
      Grid(I, 1) := (for all J in Grid'Range(2) => Grid(I,J) = True);
   end loop;
@@ -198,7 +220,7 @@
 by one of the handled_sequence_of_statements (see 5.1).
 
   AARM Implementation Note: Although each handled_sequence_of_statements
-  of a parallel block is a separate logical thread of control, the
+  of a parallel block represents a separate logical thread of control, the
   implementation may choose to combine two or more such logical threads
   of control into a single physical thread of control to reduce the cost
   of creating numerous physical threads of control.
@@ -251,7 +273,7 @@
             return False;
           end;
       end if;
-   end Contains;
+   end Search;
 
 Modify 9(1/3):
 
@@ -288,10 +310,10 @@
   the protected action. If there are multiple such elements initiated
   at the same point, they execute in an arbitrary order.
 
-  AARM Rationale: It would be feasible to allow multiple logical
+  AARM Reason: It would be feasible to allow multiple logical
   threads of control within a protected action, but it would significantly
   complicate the definition of "sequential" and "concurrent" actions,
-  since we generally presume that everthing occuring within protected
+  since we generally presume that everything occurring within protected
   actions of a given protected object is sequential. We could simply
   disallow any use of parallel constructs, but that seems unnecessary,
   particularly as a parallel construct might be buried within a
@@ -350,7 +372,7 @@
   logical threads of control, each of which can appear separately on a
   ready queue.}
 
-Add at end of D.16.1(33/3):
+Add after D.16.1(33/3):
   The implementation may defer the effect of a Set_CPU or an Assign_Task
   operation until the specified task leaves an ongoing parallel construct.
 
@@ -385,7 +407,7 @@
             Y := Bar(Z);
          end do;
 
-         Put_Line("X + Y=" & Integer'Image(X + Y));
+         Put_Line("X + Y =" & Integer'Image(X + Y));
       end;
 
 In this example, the calculation of Z and Y occur sequentially with
@@ -463,8 +485,8 @@
   end loop;
 
 Note: Allowing the parallel keyword on while loops was considered, but
-was discarded since while loops cannot be easily parallelized, because
-the control variables are inevitably global to the loop.
+was discarded since while loops cannot be easily parallelized. That's because
+the variables in the while condition are inevitably global to the loop.
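The Summation loop used as the example in 5.5(21) shows the difficulty
concretely; the following is only an illustrative sketch of the general
point, not proposed wording:

```ada
--  The while condition reads Next, and the loop body is what advances
--  Next, so iteration N+1 cannot begin until iteration N has finished.
--  No chunking strategy can break this carried dependence.
while Next /= Head loop
   Sum  := Sum + Next.Value;
   Next := Next.Succ;
end loop;
--  A parallel for loop avoids this: the iteration scheme itself
--  enumerates the values, and each iteration receives its own copy of
--  the loop parameter, so no loop-global control variable is needed.
```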
 
 We have introduced the term "logical thread of control" to describe the
 new kinds of parallelism introduced by these parallel constructs, rather
@@ -482,8 +504,461 @@
 AI12-0251-1/2).
 
 AI12-0267-1 provides rules to check for blocking operations
-and for race conditions within parallel constructs.
+and for race conditions within parallel constructs. AI12-0266-1 addresses
+user-defined parallel iterators, including those defined in the container
+packages.
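+
+As an illustrative sketch (not wording proposed by any of these AIs), the
+kind of race condition at issue arises as soon as iterations of a parallel
+loop update a shared variable without synchronization:

```ada
--  Hypothetical example: logical threads of control executing different
--  chunks of the loop all perform a read-modify-write on Sum, so
--  updates can be lost and the final value of Sum is unpredictable.
declare
   A   : array (1 .. 1_000) of Integer := (others => 1);
   Sum : Integer := 0;
begin
   parallel for I in A'Range loop
      Sum := Sum + A (I);   --  conflict: unsynchronized shared update
   end loop;
end;
```

The checks of AI12-0267-1 are aimed at detecting such conflicting uses of
shared variables within parallel constructs.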
+
+!corrigendum 0.3(28)
+
+@drepl
+Certain statements are associated with concurrent execution. A delay 
+statement delays the execution of a task for a specified duration or 
+until a specified time. An entry call statement is written as a procedure
+call statement; it requests an operation on a task or on a protected object,
+blocking the caller until the operation can be performed. A called task may 
+accept an entry call by executing a corresponding accept statement, which 
+specifies the actions then to be performed as part of the rendezvous with 
+the calling task. An entry call on a protected object is processed when the
+corresponding entry barrier evaluates to true, whereupon the body of the 
+entry is executed. The requeue statement permits the provision of a service
+as a number of related activities with preference control. One form of the 
+select statement allows a selective wait for one of several alternative 
+rendezvous. Other forms of the select statement allow conditional or timed
+entry calls and the asynchronous transfer of control in response to some 
+triggering event.
+@dby
+Certain statements are associated with concurrent execution. A delay 
+statement delays the execution of a task for a specified duration or 
+until a specified time. An entry call statement is written as a procedure
+call statement; it requests an operation on a task or on a protected object,
+blocking the caller until the operation can be performed. A called task may 
+accept an entry call by executing a corresponding accept statement, which 
+specifies the actions then to be performed as part of the rendezvous with 
+the calling task. An entry call on a protected object is processed when the
+corresponding entry barrier evaluates to true, whereupon the body of the 
+entry is executed. The requeue statement permits the provision of a service
+as a number of related activities with preference control. One form of the 
+select statement allows a selective wait for one of several alternative 
+rendezvous. Other forms of the select statement allow conditional or timed
+entry calls and the asynchronous transfer of control in response to some 
+triggering event. Various parallel constructs, including parallel loops and 
+parallel blocks, support the initiation of multiple logical threads of 
+control designed to execute in parallel when multiple processors are 
+available.
+
+
+!corrigendum 2.9(2/3)
+
+@dinsl
+@b<parallel>
+
+!corrigendum 5.1(1)
+
+@drepl
+A @fa<statement> is either simple or compound. A @fa<simple_statement> 
+encloses no other @fa<statement>. A @fa<compound_statement> can enclose 
+@fa<simple_statement>s and other @fa<compound_statement>s.
+@dby
+A @fa<statement> is either simple or compound. A @fa<simple_statement> 
+encloses no other @fa<statement>. A @fa<compound_statement> can enclose 
+@fa<simple_statement>s and other @fa<compound_statement>s.
+A @i<parallel construct> is a construct that introduces additional
+logical threads of control (see clause 9) without creating a new task.
+Parallel loops (see 5.5) and @fa<parallel_block_statement>s (see
+5.6.1) are parallel constructs.
+
+!corrigendum 5.1(5/2)
+
+@drepl
+@xcode<@fa<compound_statement>@fa< ::=>
+     @fa<if_statement> | @fa<case_statement>
+   | @fa<loop_statement> | @fa<block_statement>
+   | @fa<extended_return_statement>
+   | @fa<accept_statement> | @fa<select_statement>>
+@dby
+@xcode<@fa<compound_statement>@fa< ::=>
+     @fa<if_statement> | @fa<case_statement>
+   | @fa<loop_statement> | @fa<block_statement>
+   | @fa<parallel_block_statement>
+   | @fa<extended_return_statement>
+   | @fa<accept_statement> | @fa<select_statement>>
+
+!corrigendum 5.1(15)
+
+@dinsa
+The execution of a @fa<sequence_of_statements> consists of the execution of 
+the individual @fa<statement>s in succession until the @fa<sequence_> is 
+completed. 
+@dinss
+Within a parallel construct, if a transfer of control out of the
+construct is initiated by one of the logical threads of control, an
+attempt is made to @i<cancel> all other logical threads of control
+initiated by the parallel construct. Once all other logical threads of
+control of the construct either complete or are canceled, the transfer
+of control occurs. If two or more logical threads of control of the
+same construct initiate such a transfer of control concurrently, one
+of them is chosen arbitrarily and the others are canceled.
+
+When a logical thread of control is canceled, the cancellation causes
+it to complete as though it had performed a transfer of control to the
+point where it would have finished its execution. Such a cancellation
+is deferred while the logical thread of control is executing within an
+abort-deferred operation (see 9.8), and may be deferred further, but
+not past a point where the logical thread initiates a new nested
+parallel construct or reaches an exception handler that is outside
+such an abort-deferred operation.
+
+@s8<@i<Bounded (Run-Time) Errors>>
+
+During the execution of a parallel construct, it is a bounded error to
+invoke an operation that is potentially blocking (see 9.5).
+Program_Error is raised if the error is detected by the
+implementation; otherwise, the execution of the potentially blocking
+operation might proceed normally, or it might result in the indefinite
+blocking of some or all of the logical threads of control making up
+the current task.
+
+!corrigendum 5.5(1)
+
+@drepl
+A @fa<loop_statement> includes a @fa<sequence_of_statements> that is to be
+executed repeatedly, zero or more times.
+@dby
+A @fa<loop_statement> includes a @fa<sequence_of_statements> that is to be
+executed repeatedly, zero or more times, with the iterations
+running sequentially or concurrently with one another.
+
+!corrigendum 5.5(3/3)
+
+@drepl
+@xcode<@fa<iteration_scheme>@fa< ::= >@ft<@b<while>> @fa<condition>
+   | @ft<@b<for>> @fa<loop_parameter_specification>
+   | @ft<@b<for>> @fa<iterator_specification>>
+@dby
+@xcode<@fa<iteration_scheme>@fa< ::= >@ft<@b<while>> @fa<condition>
+   | [@ft<@b<parallel>>] @ft<@b<for>> @fa<loop_parameter_specification>
+   |            @ft<@b<for>> @fa<iterator_specification>>
+
+!corrigendum 5.5(5)
+
+@dinsa
+If a @fa<loop_statement> has a @i<loop_>@fa<statement_identifier>, then the 
+@fa<identifier> shall be repeated after the @b<end loop>; otherwise, there 
+shall not be an @fa<identifier> after the @b<end loop>.
+
+@dinst
+An @fa<iteration_scheme> that begins with the reserved word @b<parallel> shall
+not have the reserved word @b<reverse> in its @fa<loop_parameter_specification>.
+
+!corrigendum 5.5(7)
+
+@drepl
+For the execution of a @fa<loop_statement>, the @fa<sequence_of_statements> is
+executed repeatedly, zero or more times, until the @fa<loop_statement> is 
+complete. The @fa<loop_statement> is complete when a transfer of control 
+occurs that transfers control out of the loop, or, in the case of an 
+@fa<iteration_scheme>, as specified below.
+@dby
+For the execution of a @fa<loop_statement>, the @fa<sequence_of_statements> is
+executed zero or more times, until the @fa<loop_statement> is 
+complete. The @fa<loop_statement> is complete when a transfer of control 
+occurs that transfers control out of the loop, or, in the case of an 
+@fa<iteration_scheme>, as specified below.
+
+!corrigendum 5.5(9/4)
+
+@drepl
+For the execution of a @fa<loop_statement> with the @fa<iteration_scheme>
+being @b<for> @fa<loop_parameter_specification>,
+the @fa<loop_parameter_specification> is first elaborated. This
+elaboration creates the loop parameter and elaborates the
+@fa<discrete_subtype_definition>.
+If the @fa<discrete_subtype_definition> defines a subtype with a null range,
+the execution of the @fa<loop_statement> is complete. Otherwise, the
+@fa<sequence_of_statements> is executed once for each value of the
+discrete subtype defined by the @fa<discrete_subtype_definition> that
+satisfies the predicates of the subtype (or until
+the loop is left as a consequence of a transfer of control).
+Prior to each such iteration,
+the corresponding value of the discrete subtype is assigned to the
+loop parameter. These values are assigned in increasing order unless
+the reserved word @b<reverse> is present, in which case the values
+are assigned in decreasing order.
+@dby
+For the execution of a @fa<loop_statement> with the @fa<iteration_scheme>
+being @b<for> @fa<loop_parameter_specification>,
+the @fa<loop_parameter_specification> is first elaborated. This
+elaboration creates the loop parameter and elaborates the
+@fa<discrete_subtype_definition>.
+If the @fa<discrete_subtype_definition> defines a subtype with a null range,
+the execution of the @fa<loop_statement> is complete. Otherwise, the
+@fa<sequence_of_statements> is executed once for each value of the
+discrete subtype defined by the @fa<discrete_subtype_definition> that
+satisfies the predicates of the subtype (or until
+the loop is left as a consequence of a transfer of control).
+Prior to each such iteration,
+the corresponding value of the discrete subtype is assigned to the
+loop parameter. If the reserved word @b<parallel> is present (a @i<parallel
+loop>), each iteration is a separate logical thread of control (see 
+clause 9), with its own copy of the loop parameter; otherwise the values
+are assigned in increasing order unless the reserved word @b<reverse> is 
+present, in which case the values are assigned in decreasing order.
+
+!corrigendum 5.5(21)
+
+@dinsa
+@xcode<Summation:
+   @b<while> Next /= Head @b<loop>       -- @ft<@i<see 3.10.1>>
+      Sum  := Sum + Next.Value;
+      Next := Next.Succ;
+   @b<end loop> Summation;>
+@dinss
+@i<Example of a parallel loop:>
+
+@xcode<-- @ft<@i<see 3.6>>
+@b<parallel for> I @b<in> Grid'Range(1) @b<loop>
+   Grid(I, 1) := (@b<for all> J @b<in> Grid'Range(2) =@> Grid(I,J) = True);
+@b<end loop>;>
+
+!corrigendum 5.6.1(0)
+
+@dinsc
+
+A @fa<parallel_block_statement> comprises two or more
+@fa<handled_sequence_of_statements>, separated by @b<and>, where each represents
+an independent activity that is intended to proceed concurrently with
+the others.
+
+@s8<@i<Syntax>>
+
+@xcode<@fa<parallel_block_statement>@fa< ::= >
+  @ft<@b<parallel do>>
+     @fa<handled_sequence_of_statements>
+  @ft<@b<and>>
+     @fa<handled_sequence_of_statements>
+ {@ft<@b<and>>
+     @fa<handled_sequence_of_statements>}
+  @ft<@b<end do>> @fa<;>>
 
+@s8<@i<Dynamic Semantics>>
+
+Each @fa<handled_sequence_of_statements> represents a separate logical thread
+of control that proceeds independently and concurrently. The
+@fa<parallel_block_statement> is complete once every one of the
+@fa<handled_sequence_of_statements> has completed, either by reaching the end
+of its execution, or due to a transfer of control out of the construct
+by one of the @fa<handled_sequence_of_statements> (see 5.1).
+
+@s8<@i<Examples>>
+
+@xcode<@b<procedure> Traverse (T : Expr_Ptr) @b<is> --@FT<@I< see 3.9>>
+@b<begin>
+   @b<if> T /= @b<null> @b<and> @b<then>
+      T.@b<all> @b<in> Binary_Operation'Class --@FT<@I< see 3.9.1>>
+   @b<then> --@FT<@I< recurse down the binary tree>>
+      @b<parallel do>
+         Traverse (T.Left);
+      @b<and>
+         Traverse (T.Right);
+      @b<and>
+         Ada.Text_IO.Put_Line
+            ("Processing " & Ada.Tags.Expanded_Name (T'Tag));
+      @b<end do>;
+   @b<end if>;
+@b<end> Traverse;>
+
+@xcode<@b<function> Search (S : String; Char : Character) @b<return> Boolean @b<is>
+@b<begin>
+   @b<if> S'Length <= 1000 @b<then>
+      --@FT<@I< Sequential scan>>
+      @b<return> (@b<for some> C @b<of> S =@> C = Char);
+   @b<else>
+      --@FT<@I< Parallel divide and conquer>>
+      @b<declare>
+         Mid : @b<constant> Positive := S'First + S'Length/2 - 1;
+      @b<begin>
+         @b<parallel do>
+            @b<for> C @b<of> S(S'First .. Mid) @b<loop>
+               @b<if> C = Char @b<then>
+                  @b<return> True;  --@FT<@I< Terminates enclosing@b< do>>>
+               @b<end if>;
+            @b<end loop>;
+         @b<and>
+            @b<for> C @b<of> S(Mid + 1 .. S'Last) @b<loop>
+               @b<if> C = Char @b<then>
+                  @b<return> True;  --@FT<@I< Terminates enclosing@b< do>>>
+               @b<end if>;
+            @b<end loop>;
+         @b<end do>;
+         --@FT<@I< Not found>>
+         @b<return> False;
+      @b<end>;
+   @b<end if>;
+@b<end> Search;>
+
+!corrigendum 9(1/3)
+
+@drepl
+The execution of an Ada program consists of the execution of one or more 
+@i<tasks>. Each task represents a separate thread of control that proceeds 
+independently and concurrently between the points where it @i<interacts> with 
+other tasks. The various forms of task interaction are described in this clause, 
+and include: 
+@dby
+The execution of an Ada program consists of the execution of one or
+more @i<tasks>. Each task represents a separable activity that proceeds 
+independently and concurrently between the points where it @i<interacts> with 
+other tasks. A single task, when within the context of a parallel construct, 
+can represent multiple logical threads of control which can proceed in 
+parallel; in other contexts, each task represents one logical thread of 
+control.
+
+The various forms of task interaction are described in this clause,
+and include:
+
+!corrigendum 9(10)
+
+@drepl
+Over time, tasks proceed through various @i<states>. A task is initially
+@i<inactive>; upon activation, and prior to its @i<termination> it is either 
+@i<blocked> (as part of some task interaction) or @i<ready> to run. While 
+ready, a task competes for the available @i<execution resources> that it 
+requires to run. 
+@dby
+Over time, tasks proceed through various @i<states>. A task is initially
+@i<inactive>; upon activation, and prior to its @i<termination> it is either 
+@i<blocked> (as part of some task interaction) or @i<ready> to run. While 
+ready, a task competes for the available @i<execution resources> that it 
+requires to run. In the context of a parallel construct, a single task can 
+utilize multiple processing resources simultaneously.
+
+!corrigendum 9(11)
+
+@drepl
+@xindent<@s9<NOTES@hr
+1  Concurrent task execution may be implemented on multicomputers, 
+multiprocessors, or with interleaved execution on a single physical 
+processor. On the other hand, whenever an implementation can determine that
+the required semantic effects can be achieved when parts of the execution of
+a given task are performed by different physical processors acting in 
+parallel, it may choose to perform them in this way.>>
+@dby
+@xindent<@s9<NOTES@hr
+1  Concurrent task execution may be implemented on multicomputers, 
+multiprocessors, or with interleaved execution on a single physical 
+processor. On the other hand, whenever an implementation can determine that
+the required semantic effects can be achieved when parts of the execution of
+a single logical thread of control are performed by different physical 
+processors acting in parallel, it may choose to perform them in this way.>>
+
+!corrigendum 9.5.1(7/4)
+
+@dinsa
+After performing an exclusive protected operation on a protected object other 
+than a call on a protected function, but prior to completing the associated 
+protected action, the entry queues (if any) of the protected object are 
+serviced (see 9.5.3).
+@dinst
+If a parallel construct occurs within a protected action, no new
+logical threads of control are created. Instead, each element of the
+parallel construct that would have become a separate logical thread of
+control executes on the logical thread of control that is performing
+the protected action. If there are multiple such elements initiated
+at the same point, they execute in an arbitrary order.
+
+!corrigendum 9.8(17)
+
+@dinsa
+@xbullet<the end of the activation of a task;>
+@dinss
+@xbullet<a point within a parallel construct where a new logical thread
+of control is created;>
+@xbullet<the end of a parallel construct;>
+
+!corrigendum 9.10(1/3)
+
+@drepl
+If two different objects, including nonoverlapping parts of the same object, 
+are @i<independently addressable>, they can be manipulated concurrently by two 
+different tasks without synchronization. Any two nonoverlapping objects are 
+independently addressable if either object is specified as independently 
+addressable (see C.6). Otherwise, two nonoverlapping objects are independently
+addressable except when they are both parts of a composite object for which a
+nonconfirming value is specified for any of the following representation 
+aspects: (record) Layout, Component_Size, Pack, Atomic, or Convention; in this
+case it is unspecified whether the parts are independently addressable.
+@dby
+If two different objects, including nonoverlapping parts of the same object, 
+are @i<independently addressable>, they can be manipulated concurrently by two 
+different logical threads of control without synchronization. Any two 
+nonoverlapping objects are 
+independently addressable if either object is specified as independently 
+addressable (see C.6). Otherwise, two nonoverlapping objects are independently
+addressable except when they are both parts of a composite object for which a
+nonconfirming value is specified for any of the following representation 
+aspects: (record) Layout, Component_Size, Pack, Atomic, or Convention; in this
+case it is unspecified whether the parts are independently addressable.
+
+!corrigendum 9.10(2)
+
+@drepl
+Separate tasks normally proceed independently and concurrently with one 
+another. However, task interactions can be used to synchronize the actions of
+two or more tasks to allow, for example, meaningful communication by the 
+direct updating and reading of variables shared between the tasks. The actions
+of two different tasks are synchronized in this sense when an action of one 
+task @i<signals> an action of the other task; an action A1 is defined to signal 
+an action A2 under the following circumstances: 
+@dby
+Separate logical threads of control normally proceed independently and 
+concurrently with one another. However, task interactions can be used to 
+synchronize the actions of two or more logical threads of control to allow, 
+for example, meaningful communication by the 
+direct updating and reading of variables shared between them. The actions
+of two different logical threads of control are synchronized in this sense 
+when an action of one @i<signals> an action of the other; an action A1 is 
+defined to signal an action A2 under the following circumstances: 
+
+!corrigendum 9.10(13)
+
+@drepl
+@xbullet<Both actions occur as part of the execution of the same task;>
+@dby
+@xbullet<Both actions occur as part of the execution of the same logical
+thread of control;>
+
+!corrigendum D.2.1(4/2)
+
+@drepl
+@i<Task dispatching> is the process by which one ready task is selected 
+for execution on a processor. This selection is done at certain points during
+the execution of a task called @i<task dispatching points>. A task reaches a
+task dispatching point whenever it becomes blocked, and when it terminates. 
+Other task dispatching points are defined throughout this Annex for specific 
+policies. 
+@dby
+@i<Task dispatching> is the process by which a logical thread of control 
+associated with a ready task is selected for execution on a processor. This 
+selection is done during the execution of such a logical thread of control, at 
+certain points called @i<task dispatching points>. Such a logical thread of 
+control reaches a task dispatching point whenever it becomes blocked, and when
+its associated task terminates. Other task dispatching points are defined 
+throughout this Annex for specific policies. Below we talk in terms of tasks, 
+but in the context of a parallel construct, a single task can be represented 
+by multiple logical threads of control, each of which can appear separately on
+a ready queue.
+
+!corrigendum D.16.1(33/3)
+
+@dinsa
+An implementation may limit the number of dispatching domains that can be
+created and raise Dispatching_Domain_Error if an attempt is made to exceed
+this number.
+@dinst
+The implementation may defer the effect of a Set_CPU or an Assign_Task
+operation until the specified task leaves an ongoing parallel construct.
+
+
 !ASIS
 
 ** TBD.
@@ -7302,5 +7777,134 @@
 Sent: Saturday, June 23, 2018  6:31 PM
 
 Here is an update. [This is version /10 of the AI - Editor.]
+
+***************************************************************
+
+From: Randy Brukardt
+Sent: Tuesday, September 4, 2018  10:31 PM
+
+For the record, here's some (significant*) editorial corrections in
+AI12-0119-1:
+
+In wording, we have:
+
+Add a sentence to the end of 5.1(1):
+
+Which is then followed by *two* sentences. I suppose I'm supposed to pick one
+at random to add? :-) I deleted "a sentence" from this instruction.
+
+---
+
+Add after 5.5(6/5):
+
+  [editor: this rule is still part of the syntax rules]
+
+Unfortunately, the syntax rules end with 5.5(5). Otherwise, this would need to 
+be a Legality Rule, but there is no such part in the existing text, and if the 
+intent was to add one, then a Legality Rules header would have been needed. The
+lack of one and the editor's note imply that it's the reference that's wrong.
+Also, 5.5(5) is an English syntax rule; having a second one seems natural --
+and the new rule is based purely on syntax. I've presumed that and changed the
+instructions accordingly (to say "after 5.5(5)").
+
+---
+
+Add after 5.5(21):
+
+  -- see 3.6(30/2)
+
+Sorry, no paragraph references in the RM (or AARM), the former because of ISO 
+rules, the latter because of lack of tool support (in part because the ISO 
+rules mean that it wouldn't get much use).
+
+---
+
+Example in 5.6.1.
+
+For some reason, the subprogram header for the first example is in three 
+pieces, while the corresponding header for the second example is on one line.
+I changed them to match.
+
+---
+
+Add after 9.5.1(7/4):
+
+   AARM Rationale: It ...
+
+We have Ramification and Reason in AARM Notes, but no Rationale. I presume this 
+one should be "Reason".
+
+---
+
+Add to the end of D.16.1(33/3):
+
+This is a separate and completely unrelated implementation permission from the 
+existing one in D.16.1(33/3). As such, it makes more sense to make this a 
+separate paragraph rather than to glob two unrelated things together. So I
+changed this instruction to:
+
+Add after D.16.1(33/3):
+
+(*) There were also a couple of punctuation and spacing mistakes, not worth 
+chronicling here or anywhere.
+
+***************************************************************
+
+From: Randy Brukardt
+Sent: Tuesday, September 4, 2018  11:00 PM
+
Having just finished adding AI12-0119-1, I was checking AARM notes in 9.10 when
+I started wondering about all of the bullets 9.10(3-10). Specifically, the 
+introductory text to this list of bullets in 9.10(2) was all changed from 
+"task" to "logical thread of control".
+
+Most of these bullets contain "task" in the wording. Upon closer inspection, 
+most of them are about task-specific operations like activation, so no change
+is needed. However the first bullet seems like it might be a problem:
+
+  * If A1 and A2 are part of the execution of the same task, and the language 
+    rules require A1 to be performed before A2;
+
+It seems to me that we want this to apply to logical threads of control, and 
+not just tasks. Otherwise, we would have no reason to treat two actions within
+the same logical thread of control to be sequential (as 9.10(13) claims is 
+true). I was going to just change the text as an editorial review...but then I
+started wondering if we really need this rule about tasks as well.
+
+Specifically, this rule as currently written says that an action that occurs 
+inside of a parallel construct (possibly in different threads) is still 
+sequential with actions that follow the parallel construct. If we only talked
+about logical threads of control, then we'd have no reason to make such an 
+assumption (no signalling would happen).
+
+I'm at the limit of my knowledge here -- in particular, it's not clear to me 
+why we needed both 9.10(3) and 9.10(13) in previous Ada. (If we didn't, then 
+we don't need a new version of 9.10(3), either. But these rules were very 
+carefully constructed, and unlike other parts of the language have hardly been
+touched in the intervening 23+ years.)
+
+So, is there a problem here? Do we need to pull the AI to fix it (if a fix is
+required)?
+
+P.S. At a minimum, AARM notes 9.10(13.a) and (15.a) need to talk about 
+"logical threads of control" rather than "tasks". I'm just making those 
+corrections.
+
+P.P.S. In a case of what Steve calls "heat vision", I noted an unrelated issue
+in this clause. Specifically, I suspect that 9.10(14) is wrong, because it 
+doesn't take into account aspect Exclusive_Functions. If the action is of two
+protected functions with Exclusive_Functions set, the actions should be 
+exclusive and thus should be sequential. (Since functions don't usually write 
+anything, that usually isn't an issue -- but it could be an issue for something
+like the tampering indicator of a container -- which was the motivating case
+for aspect Exclusive_Functions.) The wording should talk about "exclusive 
+protected operations" (which is what 9.5.1 uses these days) rather than 
+"protected functions", something like:
+
+Both actions occur as part of protected actions on the same protected object, 
+and at {least}[most] one of the actions is part of a call on {an exclusive 
+protected operation}[a protected function] of the protected object. 
+
+Thoughts on this aside?
 
 ***************************************************************

Questions? Ask the ACAA Technical Agent