!standard 9.10(10)          19-04-17 AI12-0298-1/06
!standard 9.10(17/5)
!standard 9.10.1(3/5)
!standard 9.10.1(5/5)
!standard 9.10.1(8/5)
!standard 9.10.1(10/5)
!standard 9.10.1(11/5)
!standard 9.10.1(12/5)
!standard 9.10.1(13/5)
!standard 9.10.1(14/5)
!class Amendment 18-12-06
!status Amendment 1-2012 19-04-09
!status ARG Approved 7-0-4 19-04-09
!status work item 18-12-06
!status received 18-12-06
!priority Low
!difficulty Medium
!subject Revise the conflict check policies to ensure compatibility
!summary
AI12-0267-1 introduced the notion of Conflict Check policies. In this AI we separate Known_Conflict_Checks into Known_Parallel_Conflict_Checks and Known_Tasking_Conflict_Checks. Similarly, we separate All_Conflict_Checks into All_Parallel_Conflict_Checks and All_Tasking_Conflict_Checks. The original terms are preserved as a union of the two sorts of checks. The Conflict_Check_Policy pragma now permits two policies, one for parallel constructs and one for tasking constructs. We also provide No_Parallel_Conflict_Checks and No_Tasking_Conflict_Checks policies, with a combined No_Conflict_Checks (replacing "Unchecked"). The default is:
pragma Conflict_Check_Policy
(All_Parallel_Conflict_Checks, No_Tasking_Conflict_Checks);
!problem
As defined by AI12-0267-1, Parallel_Conflict_Checks includes Known_Conflict_Checks. Known_Conflict_Checks can mark existing Ada code illegal, even when that code is benign. It is important that the new constructs include checking by default, but the value of such checking for existing code using tasks is much less clear.
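
As a non-normative illustration (names invented here, patterned after the example in the !appendix), consider existing code of the following shape: a named task type used to run a single background computation, with exactly one instance ever created.

   procedure Example is
      Result       : Integer := 0;
      Result_Valid : Boolean := False;

      task type Long_Calc;

      task body Long_Calc is
      begin
         Result       := 42;   -- stands in for a long calculation
         Result_Valid := True;
      end Long_Calc;

      Worker : Long_Calc;      -- the only instance that ever exists
   begin
      null;                    -- Example awaits Worker at its end; Result and
   end Example;                -- Result_Valid are then safe to read

Because any named task type may be presumed to have multiple instances, Known_Conflict_Checks could reject the updates of Result and Result_Valid even though this single-instance usage is benign.
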
!proposal
(See Summary.)
!wording
Add after 9.10(10):
Action A1 is defined to /potentially signal/ action A2 if A1 signals A2, if actions A1 and A2 occur as part of the execution of the same logical thread of control, and the language rules permit action A1 to precede action A2, or if action A1 potentially signals some action that in turn potentially signals A2.
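
For illustration only (not part of the proposed wording; names are invented), the transitive part of this definition is what lets a rendezvous order actions in different logical threads of control:

   procedure Example is
      Flag : Boolean := False;

      task T is
         entry Done;
      end T;

      task body T is
      begin
         Flag := True;   -- A1: precedes the accept within T's thread
         accept Done;    -- executing the accept signals the return from T.Done
      end T;
   begin
      T.Done;            -- the return precedes the read below in this thread
      if Flag then       -- A2: so A1 potentially signals A2, and the two
         null;           -- actions on Flag need not be treated as concurrent
      end if;
   end Example;
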
Modify 9.10(17/5):
... Similarly two calls are considered to potentially conflict if they each have Global (or Global'Class in the case of a dispatching call) aspects such that conflicting actions would be possible during the execution of the calls. {Finally, two actions that conflict are also considered to potentially conflict.}
Replace 9.10.1(3/5):
pragma Conflict_Check_Policy (/policy_/identifier[, /policy_/identifier]);
Replace 9.10.1(5/5):
Each policy_identifier shall be one of No_Parallel_Conflict_Checks, Known_Parallel_Conflict_Checks, All_Parallel_Conflict_Checks, No_Tasking_Conflict_Checks, Known_Tasking_Conflict_Checks, All_Tasking_Conflict_Checks, No_Conflict_Checks, Known_Conflict_Checks, All_Conflict_Checks, or an implementation-defined conflict check policy. If two /policy_/identifiers are given, one shall include the word Parallel and one shall include the word Tasking. If only one /policy_/identifier is given, it shall not include the word Parallel or Tasking.
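
For illustration only (not proposed wording), some consequences of these rules:

   pragma Conflict_Check_Policy (Known_Parallel_Conflict_Checks,
                                 All_Tasking_Conflict_Checks);    -- legal
   pragma Conflict_Check_Policy (All_Conflict_Checks);            -- legal
   pragma Conflict_Check_Policy (All_Parallel_Conflict_Checks,
                                 Known_Parallel_Conflict_Checks); -- illegal:
                                    -- both identifiers include Parallel
   pragma Conflict_Check_Policy (No_Tasking_Conflict_Checks);     -- illegal:
                                    -- a single identifier shall not include
                                    -- Parallel or Tasking
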
Modify 9.10.1(8/5):
If multiple Conflict_Check_Policy pragmas apply to a given construct, the conflict check policy is determined by the one in the innermost enclosing region. If no Conflict_Check_Policy pragma applies to a construct, the policy is (All_Parallel_Conflict_Checks, No_Tasking_Conflict_Checks) (see below).
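
For illustration only (not proposed wording; names invented), a sketch of the scoping rule:

   package P is
      pragma Conflict_Check_Policy (All_Conflict_Checks);
      procedure Proc;
      --  Constructs elsewhere in P are governed by All_Conflict_Checks.
   end P;

   package body P is
      procedure Proc is
         pragma Conflict_Check_Policy (No_Conflict_Checks);
      begin
         null;   -- constructs within Proc are governed by the innermost
      end Proc;  -- policy, No_Conflict_Checks
   end P;
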
Replace 9.10.1(10/5-13/5):
No_Parallel_Conflict_Checks
This policy imposes no restrictions on concurrent actions arising from parallel constructs.
No_Tasking_Conflict_Checks
This policy imposes no restrictions on concurrent actions arising from tasking constructs.
Known_Parallel_Conflict_Checks
If this policy applies to two concurrent actions appearing within parallel constructs, they are disallowed if they are known to denote the same object (see 6.4.1) with uses that conflict. For the purposes of this check, any parallel loop may be presumed to involve multiple concurrent iterations. Also, for the purposes of deciding whether two actions on a shared object are concurrent, it is enough for the logical threads of control in which they occur to be concurrent at any point in their execution, unless all of the following are true:
* the shared object is volatile;
* the two logical threads of control are both known to also refer to a shared synchronized object; and
* each thread whose potentially conflicting action updates the shared volatile object, also updates this shared synchronized object.
AARM Reason: To properly synchronize two actions to prevent concurrency, a thread that does an update of a volatile object must update a synchronized object afterward to indicate it has completed its update, and the other thread needs to test the value of the synchronized object before it reads the updated volatile object. In a parallel construct, "signaling" cannot be used to prevent concurrency, since that generally requires some blocking, so testing the value of the synchronized object would probably need to use a busy-wait loop.
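
For illustration only (not part of the proposed wording; names invented, and using the parallel loop syntax proposed elsewhere for Ada 2020), the kind of construct this policy is intended to reject:

   procedure Sum_Example is
      A     : array (1 .. 100) of Integer := (others => 1);
      Total : Integer := 0;
   begin
      parallel
      for I in A'Range loop
         Total := Total + A (I);  -- disallowed: actions in two concurrent
      end loop;                   -- iterations are known to denote the same
   end Sum_Example;               -- object, Total, with conflicting uses
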
Known_Tasking_Conflict_Checks
If this policy applies to two concurrent actions appearing within the same compilation unit, at least one of which appears within a task body but not within a parallel construct, they are disallowed if they are known to denote the same object (see 6.4.1) with uses that conflict, and neither potentially signals the other (see 9.10). For the purposes of this check, any named task type may be presumed to have multiple instances. Also, for the purposes of deciding whether two actions on a shared object are concurrent, it is enough for the tasks in which they occur to be concurrent at any point in their execution, unless all of the following are true:
* the shared object is volatile;
* the two tasks are both known to also refer to a shared synchronized object; and
* each task whose potentially conflicting action updates the shared volatile object, also updates this shared synchronized object.
AARM Reason: To properly synchronize two actions to prevent concurrency, a task that does an update of a nonvolatile object must use signaling via a synchronized object to indicate it has completed its update, and the other task needs to be signaled by this action on the synchronized object before it reads the updated nonvolatile object. In other words, to synchronize communication via a nonvolatile object, signaling must be used. To synchronize communication via a volatile object, an update of a shared synchronized object followed by a read of the synchronized object in the other task can be sufficient.
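
For illustration only (not part of the proposed wording; names invented), a case this policy is intended to reject:

   procedure Tasking_Example is
      Shared : Integer := 0;     -- ordinary, unsynchronized global

      task type Worker;

      task body Worker is
      begin
         Shared := Shared + 1;   -- disallowed: the named task type is presumed
      end Worker;                -- to have multiple instances, the updates of
                                 -- Shared in two instances conflict, and
                                 -- neither potentially signals the other
      W1, W2 : Worker;
   begin
      null;
   end Tasking_Example;
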
All_Parallel_Conflict_Checks
This policy includes the restrictions imposed by the Known_Parallel_Conflict_Checks policy, and in addition disallows a parallel construct from reading or updating a variable that is global to the construct, unless it is a synchronized object, or unless the construct is a parallel loop, and the global variable is a part of a component of an array denoted by an indexed component with at least one index expression that statically denotes the loop parameter of the loop_parameter_specification or the chunk parameter of the parallel loop.
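
For illustration only (not part of the proposed wording; names invented, parallel loop syntax as proposed elsewhere for Ada 2020):

   procedure Fill_Example is
      A     : array (1 .. 100) of Integer := (others => 0);
      Count : Integer := 0;
   begin
      parallel
      for I in A'Range loop
         A (I) := I;           -- allowed: an indexed component of a global
                               -- array indexed by the loop parameter I
         Count := Count + 1;   -- disallowed: Count is a variable global to
      end loop;                -- the construct and is not a synchronized
   end Fill_Example;           -- object
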
All_Tasking_Conflict_Checks
This policy includes the restrictions imposed by the Known_Tasking_Conflict_Checks policy, and in addition disallows a task body from reading or updating a variable that is global to the task body, unless it is a synchronized object.
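
For illustration only (not part of the proposed wording; names invented):

   procedure Logging_Example is
      Config : Integer := 10;    -- ordinary global variable

      protected Counter is       -- a synchronized object
         procedure Increment;
      private
         Value : Integer := 0;
      end Counter;

      protected body Counter is
         procedure Increment is
         begin
            Value := Value + 1;
         end Increment;
      end Counter;

      task Worker;

      task body Worker is
      begin
         Counter.Increment;      -- allowed: Counter is a synchronized object
         Config := Config + 1;   -- disallowed: Config is global to the task
      end Worker;                -- body and is not a synchronized object
   begin
      null;
   end Logging_Example;
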
No_Conflict_Checks, Known_Conflict_Checks, All_Conflict_Checks
These are short hands for (No_Parallel_Conflict_Checks, No_Tasking_Conflict_Checks), (Known_Parallel_Conflict_Checks, Known_Tasking_Conflict_Checks), and (All_Parallel_Conflict_Checks, All_Tasking_Conflict_Checks), respectively.
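
For illustration only (not proposed wording), the shorthand form

   pragma Conflict_Check_Policy (Known_Conflict_Checks);

is thus equivalent to

   pragma Conflict_Check_Policy (Known_Parallel_Conflict_Checks,
                                 Known_Tasking_Conflict_Checks);
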
Replace 9.10.1(14/5): [Implementation Permissions]
When the conflict check policy Known_Parallel_Conflict_Checks applies, the implementation may disallow two concurrent actions appearing within parallel constructs if the implementation can prove they will at run-time denote the same object with uses that conflict. Similarly, when the conflict check policy Known_Tasking_Conflict_Checks applies, the implementation may disallow two concurrent actions, at least one of which appears within a task body but not within a parallel construct, if the implementation can prove they will at run-time denote the same object with uses that conflict.
[The AARM Ramification is unchanged - Editor.]
!discussion
Most code rejected by Known_Conflict_Checks is highly likely to cause erroneous execution. However, if the program only has one instance of a task type existing at a time (including the common case of only having a single instance declared), the check can reject code that is perfectly safe. Hence, there is a potential incompatibility if we include Known_Conflict_Checks in the default policy. Therefore, we choose to create separate policy_identifiers for parallel and tasking constructs, and provide short-hands when the same sort of policy is desired for both sorts of constructs. We make the default policy have no conflict checks for tasking constructs, and all conflict checks for parallel constructs.
In the Known_*_Conflict_Checks cases, we now try to define what it means for two actions to be concurrent, in a way that can reasonably be implemented. Our conclusion is that two actions are considered concurrent if the threads (or tasks) in which they occur are concurrent for any part of their execution, unless there has been some sort of synchronization within the associated threads/tasks. To really show whether two actions might be concurrent you would have to use flow analysis, and we don't believe in doing that in Ada compilers, so we propose that so long as there are some activities in the two threads/tasks that might properly synchronize things, we will presume the associated actions are not concurrent. The goal is for the Known_*_Conflict_Checks to be quite relaxed, only picking up the egregious unsynchronized uses of shared variables. The All_*_Conflict_Checks are the opposite end of the spectrum, being very strict, allowing almost no use of unsynchronized global variables.
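
As a non-normative sketch (names invented; patterned after the volatile/atomic example in the !appendix mail of January 7, 2019, and assuming an atomic object counts as a synchronized object), here is the kind of lock-free handshake that the Known_*_Conflict_Checks are intended to tolerate: both tasks refer to the shared synchronized object, and the task that updates the volatile object also updates it.

   procedure Handshake is
      Buffer : String (1 .. 5) := "xxxxx" with Volatile;
      Ready  : Boolean := False with Atomic;

      task Producer;
      task body Producer is
      begin
         Buffer := "hello";      -- update of the shared volatile object ...
         Ready  := True;         -- ... followed by an update of the shared
      end Producer;              --     synchronized (atomic) object

      task Consumer;
      task body Consumer is
         Copy : String (1 .. 5);
      begin
         while not Ready loop    -- busy-wait on the synchronized object
            null;
         end loop;
         Copy := Buffer;         -- read of the volatile object only after
      end Consumer;              -- Ready has been seen to be True
   begin
      null;
   end Handshake;

Whether such code is actually free of erroneous execution still depends on the rules of C.6; the point here is only that the relaxed checks are not intended to reject it.
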
!corrigendum 9.10(10)
Insert after the paragraph:
the new paragraph:
Action A1 is defined to potentially signal action A2 if A1 signals A2, if actions A1 and A2 occur as part of the execution of the same logical thread of control, and the language rules permit action A1 to precede action A2, or if action A1 potentially signals some action that in turn potentially signals A2.
!corrigendum 9.10(15)
Replace the paragraph:
Aspect Atomic or aspect Atomic_Components may also be specified to ensure that certain reads and updates are sequential — see C.6.
by:
Two actions that are not sequential are defined to be concurrent actions.
Two actions are defined to conflict if one action assigns to an object, and the other action reads or assigns to a part of the same object (or of a neighboring object if the two are not independently addressable). The action comprising a call on a subprogram or an entry is defined to potentially conflict with another action if the Global aspect (or Global'Class aspect in the case of a dispatching call) of the called subprogram or entry is such that a conflicting action would be possible during the execution of the call. Similarly, two calls are considered to potentially conflict if they each have Global (or Global'Class in the case of a dispatching call) aspects such that conflicting actions would be possible during the execution of the calls. Finally, two actions that conflict are also considered to potentially conflict.
!comment The following is mainly to force a conflict.
!corrigendum 9.10.1(0)
Replace the paragraph:
If multiple Conflict_Check_Policy pragmas apply to a given construct, the conflict check policy is determined by the one in the innermost enclosing region. If no such Conflict_Check_Policy pragma exists, the policy is Parallel_Conflict_Checks (see below).
by:
If multiple Conflict_Check_Policy pragmas apply to a given construct, the conflict check policy is determined by the one in the innermost enclosing region. If no Conflict_Check_Policy pragma applies to a construct, the policy is (All_Parallel_Conflict_Checks, No_Tasking_Conflict_Checks) (see below).
!ASIS
Probably needs to modify the handling of this pragma (see AI12-0267-1).
!ACATS test
ACATS B- and C-Tests are needed to check that the new capabilities are supported.
!appendix

From: Randy Brukardt
Sent: Thursday, December 6, 2018  4:43 PM

When I was rereading this new wording, I had a couple of questions:

[The first question is filed and addressed in AI12-0294-1.]

(2) At the recent meeting, we decided that Parallel_Conflict_Checks should
include Known_Conflict_Checks, so that we had "levels" of checking. What we
forgot, however, is that Known_Conflict_Checks is incompatible with existing
Ada code, and as such it is dubious to be part of the default. I consider it
very important that the Parallel_Conflict_Checks are on by default, but we
also consider it very important that we don't have unnecessary
incompatibilities.

Any task that writes global objects has a risk of being rejected by
Known_Conflict_Checks, but owing to the difficulty of determining
synchronization for tasks, I can't think of anything realistic that uses two
tasks. However, since one is required to consider that there are multiple
simultaneous instances of any named task type, it is easier to think of such
incompatibilities for task types. For instance:

      task type Long_Calc (Inputs : Param_Rec);

      task body Long_Calc is -- (Inputs : Param_Rec)
      begin
         Result_Valid := False;
         Result := <<Long calculation using Inputs>>;
         Result_Valid := True;
      end Long_Calc;

This task is implementing a function call to be executed in a separate logical
thread of control. (It would be easier to do this with a parallel block in
Ada 2020, but we're talking about existing code here.) So long as there is
only one instance of this type running at a time, this is safe (even without
using synchronization). However, Known_Conflict_Checks would reject it.

Let me take a digression here. Given that Known_Conflict_Checks is supposed to
take synchronization into account, I don't think it is possible to actually
determine such conflicts for tasks except in the trivial case of a task body
that is nonblocking. Certainly, going beyond that simply makes my head hurt,
and I don't see how the ACATS could really require much beyond it.

This digression suggests one possible solution to this incompatibility:
remove tasks from "Known_Conflict_Checks". I don't think there is much value
to trying to make the check on tasks, especially as, in the general case, it
requires comparing the contents of two task bodies, including synchronization
(that's not even a possibility in the Janus/Ada compiler, at least not until
the global optimizer/code generation phases, where there isn't enough
information to report problems accurately). Given the difficulty, low value,
and compatibility issues, it makes sense to not include tasks in the least
aggressive level of checking. (Note that parallel constructs cannot contain
blocking code, so synchronization is essentially irrelevant vis-a-vis
conflicts, which makes the checks easier - moreover, the checks are always
within a single construct.)

That would look something like:

If this policy applies to a parallel construct, two concurrent actions
occurring within that construct are disallowed if they are known to denote
the same object (see 6.4.1) with uses that potentially conflict. For the
purposes of this check, any parallel loop may be presumed to involve multiple
concurrent iterations.

Aside: This rewrite also mostly eliminates the weird case where different
policies apply to different actions; it can only happen with nested policy
pragmas in a parallel block. That would look like nonsense, so it's unlikely
to occur in practice.

An alternative fix would be to back out the change applied by the Lexington
meeting; and of course living with the incompatibility is also a possibility.

Thoughts?

****************************************************************

From: Tucker Taft
Sent: Thursday, December 6, 2018  8:34 PM

This needs more thought, and should probably be discussed at a meeting.

****************************************************************

From: Randy Brukardt
Sent: Monday, December 10, 2018  7:10 PM

This AI (about eliminating default incompatibility from the conflict checks)
has been assigned to Tucker.

I had some thoughts on the Known_Conflict_Checks. The policy states that two
concurrent actions "are disallowed if they are known to denote the same object
(see 6.4.1) with uses that potentially conflict."

All of the conflict checks are intended to be statically enforced. But the
notion of sequential actions and its inverse concurrent actions are Dynamic
Semantics concepts. Exactly how those get mapped into a static check is
unclear to me.

Consider something like:

    Glob : Natural := ...; -- Normal object
    Atom : Natural with Atomic := ...; -- Atomic object

    parallel ...
    loop
       if Glob > 5 then
           Glob := Atom;
       else
           Glob := 0;
       end if;
    end loop;

Here, the read and write of Glob are sequential actions if Glob > 5 (as the
read of Atom makes that so), and they are concurrent actions (and thus
conflicting) if Glob <= 5. This is intended to be a conservative check, so how
is this handled? The obvious conservative check would say that since there is
a path which is OK, then there is no error. But that seems complex to figure
out in general (especially when multiple code sequences are involved), and in
any case there's nothing in the wording to suggest that is the intent. And
certainly, whether actions are sequential or concurrent is not usually known
at compile-time. And how are the ends of possibly overlapping iterations
handled? My head spins. :-)

Note that the Parallel_Conflict_Checks policy (which is the one we're mainly
concerned about, as it is supposed to be the default) and the
All_Conflict_Checks policy don't have this problem; their rules are easily
enforced statically.

As the ACATS manager, I would like to have some idea of what needs to be
tested for the Known_Conflict_Checks policy. Being able to understand the
rules would help.

Aside: 9.10 defines actions that conflict, and actions that potentially
conflict. But nowhere does it say that potentially conflicting actions
include the ones that directly conflict. So it seems that those are omitted
from Known_Conflict_Checks -- which I'm certain wasn't intended. I suppose
that answers the question for my example above -- but not for a good reason
-- and in any case, one could use calls to generate the same effect as this
example.


****************************************************************

From: Tucker Taft
Sent: Tuesday, December 11, 2018  10:10 AM

> This AI (about eliminating default incompatibility from the conflict
> checks) has been assigned to Tucker.

For what it is worth, I am thinking we should have the following policies:

    Unchecked
    Known_Parallel_Conflict_Checks,
    All_Parallel_Conflict_Checks
    Known_Task_Conflict_Checks,
    All_Task_Conflict_Checks
    All_Conflict_Checks

and allow one or two policies in a single Conflict_Check_Policy pragma,
allowing two only when one has the word Parallel in it, and the other has
the word Task in it.

"All_Conflict_Checks" would be short-hand for "All_Parallel_Conflict_Checks,
All_Task_Conflict_Checks"

I suppose we could also define a "Known_Conflict_Checks" which would be
short-hand for "Known_Parallel_Conflict_Checks, Known_Task_Conflict_Checks."


> I had some thoughts on the Known_Conflict_Checks. The policy states
> that two concurrent actions "are disallowed if they are known to
> denote the same object (see 6.4.1) with uses that potentially conflict."
>
> All of the conflict checks are intended to be statically enforced. But
> the notion of sequential actions and its inverse concurrent actions
> are Dynamic Semantics concepts. Exactly how those get mapped into a
> static check is unclear to me.
>
> Consider something like:
>
>    Glob : Natural := ...; -- Normal object
>    Atom : Natural with Atomic := ...; -- Atomic object
>
>    parallel ...
>    loop
>       if Glob > 5 then
>           Glob := Atom;
>       else
>           Glob := 0;
>       end if;
>    end loop;
>
> Here, the read and write of Glob are sequential actions if Glob > 5
> (as the read of Atom makes that so), and they are concurrent actions
> (and thus
> conflicting) if Glob <= 5.

The fact that "Atom" is atomic only affects the reads and writes of Atom
itself, and makes them sequential w.r.t. one another.  It has no effect
beyond that as far as two actions being sequential.  Furthermore, just
because flow analysis might prove that two actions cannot occur at the
same time, that does not make them "sequential."  To be sequential,
according to RM 9.10, they must apply to an atomic object, signal each
other, be part of the same logical thread of control, or be exclusive
protected actions of the same protected object.  None of these apply
to the above actions, so they are *not* sequential, and hence are
considered concurrent.

> This is intended to be a conservative check, so how is this handled?
> The obvious conservative check would say that since there is a path
> which is OK, then there is no error. But that seems complex to figure
> out in general (especially when multiple code sequences are involved),
> and in any case there's nothing in the wording to suggest that is the
> intent. And certainly, whether actions are sequential or concurrent is
> not usually known at compile-time.

Actually, whether two actions are sequential or concurrent is intended to be
something that *is* generally known at compile time.  For task types and
parallel loops, we presume there are multiple instances, and multiple
iterations, so that eliminates many of the dynamic cases.  If we have two
externally-callable subprograms, then whether actions within the two
subprograms are sequential or concurrent with one another is determined by
the caller context.  Our use of the term "potentially conflict" in the rule
for Known_Conflict_Checks simplifies the checking, because we can use the
Global aspect to provide the caller with sufficient information to know what
globals a given call might read or update.

We have limited the scope to be that of a single compilation unit, so we need
to find constructs within the compilation unit where we have two (or more)
logical threads of control, which implies we need a parallel or tasking
construct that is apparent in the compilation unit.  If it is a parallel
loop, or a named task type, then our job is easier, since any up-level
reference from such a construct will be suspicious, and we can apply the
"known to refer to the same object" rules.  More generally, we need to look
at pieces of code that will run concurrently with one another (e.g. different
"arms" of a parallel block, or inside a task body and other code that runs
while the task is active), and look for references that are known to refer
to the same object.

> And how are the ends of possibly
> overlapping iterations handled? My head spins. :-)

You should be able to ignore control flow for the purposes of this check.
Whether two actions are sequential or concurrent does not depend on control
flow (at least that was the intent).  In other words, all of the code of a
task body can be treated equivalently as far as whether it runs concurrently
with code outside the task body (or with itself if it is a task body for a
named task type).  Similarly, all of the code inside a parallel loop body,
or an "arm" of a parallel block can be treated equivalently.

> Note that the Parallel_Conflict_Checks policy (which is the one we're
> mainly concerned about, as it is supposed to be the default) and the
> All_Conflict_Checks policy don't have this problem; their rules are
> easily enforced statically.
>
> As the ACATS manager, I would like to have some idea of what needs to
> be tested for the Known_Conflict_Checks policy. Being able to
> understand the rules would help.
>
> Aside: 9.10 defines actions that conflict, and actions that
> potentially conflict. But nowhere does it say that potentially
> conflicting actions include the ones that directly conflict.

We should certainly say that.  In English that is implicit, but I agree we
should be explicit here.

> So it seems that those are omitted
> from  Known_Conflict_Checks -- which I'm certain wasn't intended.

Agreed, not intended.

****************************************************************

From: Randy Brukardt
Sent: Monday, January 7, 2019  10:10 PM

...
> > Consider something like:
> >
> >    Glob : Natural := ...; -- Normal object
> >    Atom : Natural with Atomic := ...; -- Atomic object
> >
> >    parallel ...
> >    loop
> >       if Glob > 5 then
> >           Glob := Atom;
> >       else
> >           Glob := 0;
> >       end if;
> >    end loop;
> >
> > Here, the read and write of Glob are sequential actions if Glob > 5
> > (as the read of Atom makes that so), and they are concurrent actions
> > (and thus conflicting) if Glob <= 5.
>
> The fact that "Atom" is atomic only affects the reads and writes of
> Atom itself, and makes them sequential w.r.t. one another.  It has no
> effect beyond that as far as two actions being sequential.
> Furthermore, just because flow analysis might prove that two actions
> cannot occur at the same time, that does not make them "sequential."
> To be sequential, according to RM 9.10, they must apply to an atomic
> object, signal each other, be part of the same logical thread of
> control, or be exclusive protected actions of the same protected
> object.  None of these apply to the above actions, so they are *not*
> sequential, and hence are considered concurrent.

This is definitely different than my understanding of how "volatile" works.
One is supposed to be able to make volatile objects useful just by putting
atomic read/writes at the points where one wants to ensure that the volatile
objects are sequential. If that's not true, then volatile and non-volatile
objects are completely equivalent, meaning that there is no point to
volatile. And I know *that's* not true.

...
> > And how are the ends of possibly
> > overlapping iterations handled? My head spins. :-)
>
> You should be able to ignore control flow for the purposes of this
> check.  Whether two actions are sequential or concurrent does not
> depend on control flow (at least that was the intent).

Signaling certainly depends on control flow: the first rule of signaling is

* If A1 and A2 are part of the execution of the same task, and the language
  rules require A1 to be performed before A2;

[note that this says "task" and not "logical thread of control", so applies
even between threads. And this was something that you left this way on
purpose]. Dunno what else to make of this other than control flow, because
the language rules hardly require anything of expression evaluation.

I've previously said that I simply do not understand the concept of signaling,
since pretty much any two actions, regardless of how far apart in time or space
or threads, have a signaling relationship. I don't see how any concept like
that could mean anything statically, and since sequential depends on signaling,
it too necessarily depends on this rather dynamic concept.

In the task cases in particular, signaling actions can be conditional (an entry
call signals the accept body; the accept body signals the return from the entry
call) and the same task rule can be used to associate pretty much any other
action with the entry call. So replace the atomic item with an entry call in
a task, and use a task type instead of a parallel loop. The original question
still applies.

I can see that this scheme as you described it probably works for parallel
operations, because they don't allow blocking (and thus there is no signaling
to worry about). I don't see how to do that in tasks (not that I ever would -
it just not a capability of Janus/Ada to work with more than one expression at
a time).

****************************************************************

From: Tucker Taft
Sent: Thursday, December 20, 2018  5:53 PM

We decided to avoid any possible incompatibility in the default conflict-check
policy, which led us to separate parallel and tasking conflict checks
completely.  Here is an update to AI12-0298, which in turn was defining an
update to the conflict-check policy section. [This is version /02 of the AI
- Editor.]

****************************************************************

From: Randy Brukardt
Sent: Monday, January 7, 2019  10:58 PM

The AI you attached didn't fix the "conflict"/"possibly conflict" issue I
previously pointed out:

>> Aside: 9.10 defines actions that conflict, and actions that
>> potentially conflict. But nowhere does it say that potentially
>> conflicting actions include the ones that directly conflict.
>
>We should certainly say that. In English that is implicit, but I agree we
>should be explicit here.

Could you propose wording to fix that? (I can stick it in the AI).

****************************************************************

From: Tucker Taft
Sent: Monday, January 7, 2019  11:30 PM

> ...
>>> Consider something like:
>>>
>>>   Glob : Natural := ...; -- Normal object
>>>   Atom : Natural with Atomic := ...; -- Atomic object
>>>
>>>   parallel ...
>>>   loop
>>>      if Glob > 5 then
>>>          Glob := Atom;
>>>      else
>>>          Glob := 0;
>>>      end if;
>>>   end loop;
>>>
>>> Here, the read and write of Glob are sequential actions if Glob > 5
>>> (as the read of Atom makes that so), and they are concurrent actions
>>> (and thus conflicting) if Glob <= 5.
>>
>> The fact that "Atom" is atomic only affects the reads and
>> writes of Atom itself, and makes them sequential w.r.t. one
>> another.  It has no effect beyond that as far as two actions
>> being sequential.  Furthermore, just because flow analysis
>> might prove that two actions cannot occur at the same time,
>> that does not make them "sequential."  To be sequential,
>> according to RM 9.10, they must apply to an atomic object,
>> signal each other, be part of the same logical thread of
>> control, or be exclusive protected actions of the same
>> protected object.  None of these apply to the above actions,
>> so they are *not* sequential, and hence are considered concurrent.
>
> This is definitely different than my understanding of how "volatile" works.

There is no use of volatile above.  If you marked Glob as volatile that would
make the example more interesting.  In particular, according to C.6(16/3) all
tasks (should say "logical threads of control" -- my bad!) see updates to
volatile variables in the same order, so if you intersperse an update to an
tomic variable between two volatile variable updates, a read will not see the
update to the atomic variable until the preceding volatile update is done, and
a read will not see the beginning of the update of the following volatile until
after it sees the update to the atomic variable.  E.g., given:

   A, C : String(1..5) with Volatile := "xxxxx";
   B : Integer with Atomic := 7;


    A := "hello";
    B := 42;
    C := "seeya";

another thread could wait until B is 42, and then safely read the final value
of A (i.e. "hello").  Similarly, if we copy what is in C, and then check that
B is still 7, we will see "xxxxx" in our copy of what was in C.  But reading C
is asking for trouble in general, even though it is marked volatile, since it
could be overwritten at any moment.  There would have to be some other
spin-loop before C was overwritten with "seeya" guarded by another atomic
variable set by the intended reader of C if we wanted to avoid erroneousness.
E.g.:

   D : Integer with atomic := 0;

    A := "hello";
    B := 42;
    while D = 0 loop null; end loop;
    C := "seeya";

The reader of C would set D to a non-zero value only after it has read C's
value, if it wants to read the "xxxxx" without fear of it being overwritten.

> One is supposed to be able to make volatile objects useful just by putting
> atomic read/writes at the points where one wants to ensure that the volatile
> objects are sequential. If that's not true, then volatile and non-volatile
> objects are completely equivalent, meaning that there is no point to
> volatile. And I know *that's* not true.

See the above examples.  C.6(16/3) says that updates are seen in the same
order, but updates to a volatile variable are not expected to be indivisible,
so you need one thread setting an atomic variable, and another one spinning
on it, to avoid erroneous concurrency when reading the volatile variables.

> ...
>>> And how are the ends of possibly
>>> overlapping iterations handled? My head spins. :-)
>>
>> You should be able to ignore control flow for the purposes of
>> this check.  Whether two actions are sequential or concurrent
>> does not depend on control flow (at least that was the
>> intent).
>
> Signaling certainly depends on control flow: the first rule of signaling is
>
> * If A1 and A2 are part of the execution of the same task, and the language
> rules require A1 to be performed before A2;
>
> [note that this says "task" and not "logical thread of control", so applies
> even between threads. And this was something that you left this way on
> purpose].

I did leave "task" here on purpose.  But there is no ordering between
different iterations of the same parallel loop, so the language rules don't
provide any signaling between code executing in different iterations.  The
ordering provided by the language is that the code before the beginning of the
parallel loop precedes everything in the loop, and the code after the loop
follows it all.  And within a single iteration, signaling isn't very
interesting since there is normally no parallelism to worry about (unless
you have a nested parallel construct).

> Dunno what else to make of this other than control flow, because
> the language rules hardly require anything of expression evaluation.

Perhaps some clarification is required, but the within-thread control flow
is not very interesting as far as conflicts go.  It is only ordering between
actions occurring in different threads that is interesting, and there the
relevant control flow is typically trivial, and based on lexical scope.
Perhaps we just need to be more specific about the algorithm.  Maybe a long
AARM note would be appropriate here to indicate the intent.

> I've previously said that I simply do not understand the concept of
> signaling, since pretty much any two actions, regardless of how far apart in
> time or space or threads, have a signaling relationship. I don't see how any
> concept like that could mean anything statically, and since sequential
> depends on signaling, it too necessarily depends on this rather dynamic
> concept.

If you have two sibling tasks/threads, then generally an action in one
task/thread is not ordered with an action in the other task/thread, and so
no signaling relationship exists between them.  You can use entry calls to
impose some order, and hence signaling, but in the absence of that, an action
in one sibling is generally to be considered concurrent with any action in the
other sibling.  Fiddling with atomic objects does not "officially" introduce
signaling relationships, but as we have seen, C.6(16/3) does allow one to
reason that certain actions on shared volatile objects can be determined to
be non-sequential.  But that is essentially outside of the purview of these
rules, and we should make that clear.  We will not be smart enough to check
that the fiddling with atomic objects is making your volatile reads/writes
safe.  We can probably understand the signaling rules as part of the conflict
checks, but spinning on atomic objects is not something we will be able to
analyze.  So if people are using that kind of lock-free synchronization,
they are going to want to turn off the conflict checks, at least where they
are doing that.

Even for the explicit signaling via entry calls, we will probably need to be
conservative in the Known_Conflict_Checks case and assume the programmer did
the right thing, so I suppose we could also notice fiddling with atomic
objects and volatile variables.  Overall, it sounds like we will need to
define more completely how Known_Conflict_Checks works.  We want it to be
simple, and not too strict, in the sense that we want very few "false
positives."  The All_Conflict_Checks is the other end of the spectrum, and we
want it to be very strict, and essentially no "false negatives."  We also
explicitly allow the implementation to be smarter in the Known_Conflict_Check
case, if it can prove there is a real problem.

> In the task cases in particular, signaling actions can be conditional (an
> entry call signals the accept body; the accept body signals the return from
> the entry call) and the same task rule can be used to associate pretty much
> any other action with the entry call. So replace the atomic item with an
> entry call in a task, and use a task type instead of a parallel loop. The
> original question still applies.

Agreed, we need to define exactly how strict is the Known_Conflict_Check case.
And we want it to be simple to implement...

> I can see that this scheme as you described it probably works for parallel
> operations, because they don't allow blocking (and thus there is no
> signaling to worry about). I don't see how to do that in tasks (not that I
> ever would - it's just not a capability of Janus/Ada to work with more than
> one expression at a time).

Agreed.  The presence of an entry call or an accept statement should probably
be presumed to create a signaling relationship with other entry calls on the
same task/protected object.  That argues for the Known_Tasking_Conflict_Checks
to be very simple-minded, and just detect clearly  unsynchronized use of
shared variables.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, January 8, 2019  6:57 PM

...
> Even for the explicit signaling via entry calls, we will probably need
> to be conservative in the Known_Conflict_Checks case and assume the
> programmer did the right thing, so I suppose we could also notice
> fiddling with atomic objects and volatile variables.  Overall, it
> sounds like we will need to define more completely how
> Known_Conflict_Checks works.

Yea! This has been my point all along.

> We want it to be simple, and not too strict, in the sense that we want
> very few "false positives."  The All_Conflict_Checks is the other end
> of the spectrum, and we want it to be very strict, and essentially no
> "false negatives."  We also explicitly allow the implementation to be
> smarter in the Known_Conflict_Check case, if it can prove there is a
> real problem.

This all sounds great. Of course, the above isn't wording.

> > In the task cases in particular, signaling actions can be conditional (an
> > entry call signals the accept body; the accept body signals the return from
> > the entry call) and the same task rule can be used to associate pretty much
> > any other action with the entry call. So replace the atomic item
> > with an entry call in a task, and use a task type instead of a parallel
> > loop. The original question still applies.
>
> Agreed, we need to define exactly how strict is the
> Known_Conflict_Check case.  And we want it to be simple to
> implement...

Right. And it would be best if volatile objects properly protected with an
atomic object don't get rejected, 'cause that doesn't sound like a
conservative check.

I'll be looking forward to your proposal. (And you thought you were done with
this one! :-)

****************************************************************

From: Bob Duff
Sent: Tuesday, January 8, 2019  5:42 AM

Reminder: please put the AI number AND the AI title in email subjects.  If
somebody forgets to do that, then please try to remember to fix it when you
reply.

"Thoughts on" isn't very helpful!  ;-)

****************************************************************

From: Tucker Taft
Sent: Tuesday, January 8, 2019  7:04 AM

Randy started it! ;-)

Looks like a "4C" AI.

****************************************************************

From: Bob Duff
Sent: Tuesday, January 8, 2019  8:20 AM

> Randy started it! ;-)

Yeah, well Randy probably has all the AI numbers memorized.  ;-)

****************************************************************

From: Randy Brukardt
Sent: Tuesday, January 8, 2019  6:45 PM

I want you guys to read at least the start of every thread, so the less I say
in the title the better. Otherwise, most of you will skip half of the topics.
The only reason that I put the AI in the title is so that I can keep them
straight for filing. Otherwise, they'd all be "Cool!" or "Ugh!" :-) :-)

****************************************************************

From: Tucker Taft
Sent: Thursday, January 10, 2019  1:53 PM

> Could you propose wording to fix that? (I can stick it in the AI).

Augment the second paragraph added after 9.10(15):

                         ... Similarly
  two calls are considered to potentially conflict if they each have
  Global (or Global'Class in the case of a dispatching call) aspects
  such that conflicting actions would be possible during the execution
  of the calls. {Finally, two actions that conflict are also considered
  to potentially conflict.}

****************************************************************

From: Tucker Taft
Sent: Thursday, January 10, 2019  7:41 PM

OK, here is version 3 of this AI, to fix-up issues with AI12-0267-1 which
originally defined Conflict Check policies. [This is version /03 of the AI -
Editor.]

We now define more clearly what it means to be "concurrent" in the context of
these checks, while we allow for almost any sort of synchronization to be
sufficient to prevent two actions being considered concurrent.

And yes, this is past the deadline!

****************************************************************

From: Tucker Taft
Sent: Thursday, January 10, 2019  8:05 PM

Part of the thing that created the need for this AI was due to our fear that
Known_*_Conflict_Checks might not be a "pure" subset of All_*_Conflict_Checks,
and we wanted it to be.  The original solution to the subsetting issue was to
include Known_*_Conflict_Checks in All_*_Conflict_Checks.  But I think we might
want to go the other way, namely Known_*_Conflict_Checks should first see if the
simple rules of All_*_Conflict_Checks are violated, and only if they *are*
violated by some thread, would we delve into the details to look for actions in
the offending threads that might conflict with other actions in other threads.
Probably not a big deal, but it would potentially simplify and narrow the scope
of the Known_*_Conflict_Checks.

****************************************************************

From: Erhard Ploedereder
Sent: Sunday, January 13, 2019  10:04 AM

>    Also, for the purposes of
>    deciding whether two actions are concurrent, it is enough for the
>    logical threads of control in which they occur to be concurrent at
>    any point in their execution, unless the shared object is volatile,
>    the two logical threads of control are known to refer to the same
>    synchronized object, and each thread whose potentially conflicting
>    action updates the shared volatile object, itself updates the shared
>    synchronized object.

I understand the intent, but this wording is at best a real mouthful, and
at worst seriously ambiguous. A few questions:

- where did THE synchronized object come from - it is not mentioned in  
  the preceding text.

- does this sentence parse
    (x unless the shared object is volatile) and y and z
  or
    x unless (the shared object is volatile and y and z)


>  AARM Reason: To properly synchronize two actions to prevent 
>       concurrency, a task that does an update of a *non*volatile object
>       must use signaling via a synchronized object to indicate it has
>       completed its update, and the other task needs to be signaled by
>       this action on the synchronized object before it reads the updated
>       nonvolatile object.  In other words, to synchronize communication
>       via a nonvolatile object, signaling must be used.  To synchronize
>       communication via a volatile object, an update of a shared
>       synchronized object followed by a read of the synchronized object
>       in the other task can be sufficient.

This one I do not understand. Isn't an update of a synchronized object here 
followed by a read of the same object there a signaling between the here and
there?  So what's the difference between the two cases?

Moreover, the non-volatile case looks like a requirement on implementations 
to write ALL global objects cached in registers whenever there is any 
signaling going on. You really want to mandate this?

****************************************************************

From: Tucker Taft
Sent: Sunday, January 13, 2019  5:54 PM

>>   Also, for the purposes of
>>   deciding whether two actions are concurrent, it is enough for the
>>   logical threads of control in which they occur to be concurrent at
>>   any point in their execution, unless the shared object is volatile,
>>   the two logical threads of control are known to refer to the same
>>   synchronized object, and each thread whose potentially conflicting
>>   action updates the shared volatile object, itself updates the shared
>>   synchronized object.
> 
> I understand the intent, but this wording is at best a real mouthful, 
> and at worst seriously ambiguous. A few questions:
> 
> - where did THE synchronized object come from - it is not mentioned in 
> the preceding text.

The requirement is that there be a (shared) synchronized object to which 
they are both known to refer.  That is "the" synchronized object in this 
discussion.
 
> - does this sentence parse
>    (x unless the shared object is volatile) and y and z  or
>    x unless (the shared object is volatile and y and z)

The latter.  Needs some rephrasing to avoid that ambiguity, I guess!  Too bad 
parentheses cannot be used to do this sort of disambiguation in English... ;-)

>> AARM Reason: To properly synchronize two actions to prevent 
>>      concurrency, a task that does an update of a *non*volatile object
>>      must use signaling via a synchronized object to indicate it has
>>      completed its update, and the other task needs to be signaled by
>>      this action on the synchronized object before it reads the updated
>>      nonvolatile object.  In other words, to synchronize communication
>>      via a nonvolatile object, signaling must be used.  To synchronize
>>      communication via a volatile object, an update of a shared
>>      synchronized object followed by a read of the synchronized object
>>      in the other task can be sufficient.
> 
> This one I do not understand. Isn't an update of a synchronized object 
> here followed by a read of the same object there a signaling between 
> the here and there?  So what's the difference between the two cases?

No, an update of a synchronized object and a read of a synchronized object 
are known to be sequential, but they don't imply signaling.  For example, 
there is no signaling between a protected procedure call and a protected 
function call.  Similarly, there is no "signaling" between a write of an 
atomic object and a read of the atomic object.  In both cases these are 
"sequential" but not inherently ordered.  For volatile objects, updates to
these are seen in the same order in all threads.  For non-volatile objects,
updates can happen in different orders, even relative to updates to 
synchronized objects, and you have to use signaling to ensure that you get 
ordering for non-volatile objects being referenced in different tasks.
 
> Moreover, the non-volatile case looks like a requirement on 
> implementations to write ALL global objects cached in registers 
> whenever there is any signaling going on. You really want to mandate this?

I am not creating that requirement.  That is effectively what RM 
9.10(11/5-17/5) is saying.  The writing and reading of non-volatile variables 
is effectively "concurrent" unless one action signals the other (indirectly, 
presumably), since the other things that make two actions "sequential" 
requires them to be part of an operation on a synchronized object.  Volatile 
variables updates can avoid this problem because of the rule in C.6(16/3) 
that updates to volatile variables appear in the same order to all threads.

But if there is signaling between the two actions, then they are considered
sequential, and that means they are well ordered and not erroneous.

So yes, I suppose 9.10(11/5-17/5) does mean you have to dump the cache (or 
registers) holding any variables that might be shared with a thread using 
a different cache (or registers), before initiating some signaling that might 
reach that thread.  And similarly, you need to refresh any cache (or 
registers) for a variable after receiving signaling that might be from a 
thread which might share that variable but use a different cache (or 
registers).

****************************************************************

From: Randy Brukardt
Sent: Sunday, January 13, 2019  6:11 PM

Note that there is literally nothing new here, just some wording reordering.
Signalling has always implied dumping the cache, since Ada 95 and most likely 
before.

But remember that signalling is caused by task interactions (like task 
creation, task termination, task waiting, or 
entry calls-rendezvous/protected body-return). It doesn't happen from 
garden-variety actions like reading or writing an object.

Tucker and I had a lengthy discussion about this because I've never seen much 
value in that difference -- but here is the primary difference:
signalling implies that *all* objects are up-to-date, meaning the cache needs 
to be dumped, the code generator has write back any register copies, and so 
on. "Sequential" only requires that to happen for volatile objects.

****************************************************************

From: Tucker Taft
Sent: Sunday, January 13, 2019  7:11 PM

> ...
> Tucker and I had a lengthy discussion about this because I've never 
> seen much value in that difference -- but here is the primary difference:
> signalling implies that *all* objects are up-to-date, meaning the 
> cache needs to be dumped, the code generator has write back any 
> register copies, and so on. "Sequential" only requires that to happen for 
> volatile objects.

Yes, that is the key difference.  Too bad it is couched in such arcane and 
subtle language. ;-)

****************************************************************

From: Randy Brukardt
Sent: Thursday, January 10, 2019  11:12 PM

...
> We now define more clearly what it means to be "concurrent"
> in the context of these checks, while we allow for almost any sort of
> synchronization to be sufficient to prevent two actions being
> considered concurrent.

Thanks, this makes a lot more sense.
...
>   Known_Tasking_Conflict_Checks
>     If this policy applies to two concurrent actions appearing within
>     the same compilation unit, at least one of which appears within a
>     task body but not within a parallel construct, they are disallowed
>     if they are known to denote the same object (see 6.4.1) with uses
>     that potentially conflict. For the purposes of this check, any named
>     task type may be presumed to have multiple instances. Also, for the
>     purposes of deciding whether two actions are concurrent, it is
>     enough for the tasks in which they occur to be concurrent at any
>     point in their execution, unless either:
>       * the shared object is volatile, the two tasks are known to
>         refer to the same synchronized object, and each task whose
>         potentially conflicting action updates the shared volatile
>         object, itself updates the shared synchronized object; or
>       * the shared object is not volatile, the two tasks are known
>         to refer to the same synchronized object, and each task whose
>         potentially conflicting action updates the shared nonvolatile
>         object, itself signals an action on the shared synchronized
>         object that in turn signals an action in the other task.

This wording doesn't seem to contemplate signaling directly between two tasks
using entry calls and accept statements. It seems to require an intermediary
object that isn't necessary for a direct interaction. It would be bizarre
(especially) to ignore accept statements for this purpose, since that is the Ada
83 way of doing synchronization.

Didn't see anything else weird (well, not after some RM spelunking).

****************************************************************

From: Tucker Taft
Sent: Sunday, January 13, 2019  5:32 PM

> This wording doesn't seem to contemplate signaling directly between 
> two tasks using entry calls and accept statements. It seems to require 
> an intermediary object that isn't necessary for a direct interaction.

I was thinking of the task object as the shared synchronized object in 
this case, with the entry call and the accept statements as the operations 
on this shared object.

> It would
> be bizarre (especially) to ignore accept statements for this purpose, 
> since that is the Ada 83 way of doing synchronization.

Agreed.  The intent was that the task object was the intermediary in the 
rendezvous case.  We often merge the notion of a task object and the task as 
a thread (or threads) of control.  But in this case, I think it is important 
to treat the task object as the synchronized object.  A thread of control is 
most definitely not an "object" in my view.

Almost certainly needs an AARM note if that was unclear to you, or perhaps a 
slight tweak in the wording to get the notion of a "task object" into this 
discussion.

****************************************************************

From: Randy Brukardt
Sent: Sunday, January 13, 2019  6:04 PM

...
> > This wording doesn't seem to contemplate signaling directly between 
> > two tasks using entry calls and accept statements. It seems to
> > require an intermediary object that isn't necessary for a direct 
> > interaction.
> 
> I was thinking of the task object as the shared synchronized object in 
> this case, with the entry call and the accept statements as the 
> operations on this shared object.

I suppose, but signalling for entry calls isn't defined in terms of some 
intermediary object:

  If A1 is the action of issuing an entry call, and A2 is part of the
  corresponding execution of the appropriate entry_body or accept_statement;

It is tough to see where an intermediary object comes in here. That would
have to be between A1 and A2, and the transitive rule won't help to put
something in the middle of a signalling action.

> > It would
> > be bizarre (especially) to ignore accept statements for this purpose, 
> > since that is the Ada 83 way of doing synchronization.
> 
> Agreed.  The intent was that the task object was the 
> intermediary in the rendezvous case.  We often merge the 
> notion of a task object and the task as a thread (or threads) 
> of control.  But in this case, I think it is important to 
> treat the task object as the synchronized object.  A thread 
> of control is most definitely not an "object" in my view.
> 
> Almost certainly needs an AARM note if that was unclear to 
> you, or perhaps a slight tweak in the wording to get the 
> notion of a "task object" into this discussion.

I don't think an AARM note is enough, because signalling doesn't go through
an intermediary. Unless I've missed something...9.10 is tough to understand!
  
****************************************************************

From: Tucker Taft
Sent: Sunday, January 13, 2019  6:12 PM

...
> I suppose, but signalling for entry calls isn't defined in terms of 
> some intermediary object:
> 
>  If A1 is the action of issuing an entry call, and A2 is part of the  
> corresponding execution of the appropriate entry_body or 
> accept_statement;
> 
> 
> It is tough to see where an intermediary object comes in here. That 
> would have to be between A1 and A2, and the transitive rule won't help 
> to put something in the middle of a signalling action.

The simplest might be to mention the task/protected object in 9.10 as the 
intermediary.  E.g.:

  If A1 is the action of issuing an entry call on a task or protected object, 
  and A2 is part of the corresponding execution of the appropriate entry_body 
  or accept_statement for that task or protected object;

>>> It would
>>> be bizarre (especially) to ignore accept statements for this 
>>> purpose, since that is the Ada 83 way of doing synchronization.
>> 
>> Agreed.  The intent was that the task object was the intermediary in 
>> the rendezvous case.  We often merge the notion of a task object and 
>> the task as a thread (or threads) of control.  But in this case, I 
>> think it is important to treat the task object as the synchronized 
>> object.  A thread of control is most definitely not an "object" in my 
>> view.
>> 
>> Almost certainly needs an AARM note if that was unclear to you, or 
>> perhaps a slight tweak in the wording to get the notion of a "task 
>> object" into this discussion.
> 
> I don't think an AARM note is enough, because signalling doesn't go 
> through an intermediary. Unless I've missed something...9.10 is tough to 
> understand!

See above for a suggestion to clarify.

****************************************************************

From: Edward Fish
Sent: Monday, January 14, 2019  11:37 AM

Given:

>>- does this sentence parse
>>   (x unless the shared object is volatile) and y and z
>> or
>>   x unless (the shared object is volatile and y and z)

>The latter.  Needs some rephrasing to avoid that ambiguity, I guess!  
>Too bad parentheses cannot be used to do this sort of disambiguation in 
>English... ;-)

I think a better wording would reflect the above. Perhaps:

For the purpose of deciding whether two actions are concurrent, unless there 
is either a volatile shared object or a synchronized object which both logical 
threads of control refer to in order to update said object, it is enough for 
the two logical threads of control to be concurrent at any point in their 
execution.

****************************************************************

From: Tucker Taft
Sent: Monday, January 14, 2019  3:02 PM

Thanks, Ed. My sense is that we just need to switch to using bullets, which 
is our usual solution for these kinds of tortured sentences.  Here is the 
original with a few words added, made into a bulleted list:

  Also, for the purposes of deciding whether two actions on a shared object 
  are concurrent, it is enough for the logical threads of control in which 
  they occur to be concurrent at any point in their execution, unless all of 
  the following are true:
   * the shared object is volatile;
   * the two logical threads of control are both known to also refer to a 
     shared synchronized object; and 
   * each thread whose potentially conflicting action updates the shared 
     volatile object, also updates this shared synchronized object.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, January 15, 2019  3:44 PM

> >> I was thinking of the task object as the shared synchronized object 
> >> in this case, with the entry call and the accept statements as the 
> >> operations on this shared object.
> > 
> > I suppose, but signalling for entry calls isn't defined in terms of 
> > some intermediary object:
> > 
> >  If A1 is the action of issuing an entry call, and A2 is part of the 
> > corresponding execution of the appropriate entry_body or 
> > accept_statement;
> > 
> > 
> > It is tough to see where an intermediary object comes in here. That 
> > would have to be between A1 and A2, and the transitive rule won't help
> > to put something in the middle of a signalling action.
> 
> The simplest might be to mention the task/protected object in 9.10 as 
> the intermediary.  E.g.:
> 
>   If A1 is the action of issuing an entry call on a task or protected 
> object, and A2 is part of the corresponding execution of the 
> appropriate entry_body or accept_statement for that task or protected 
> object;

I suppose that would work, but you'd have to rewrite all of the bullets 
defining signalling for that to work (they all have this form). There are at 
least 7 bullets that would need a change like this, and that would serve to 
complicate the only simple part of this entire definition. :-)

So I don't view that as a very good option. I'd think it would be better to 
complicate the definition of Known_Conflict_Checks, since pretty much only 
implementers need to really understand what it means.

I probably would define a term "potentially signal" something like:

   A logical thread of control A1 potentially signals another thread A2 if
   A1 would signal A2 if all control flow is ignored.

Still too vague for my taste, and also probably goes too far.

Maybe a better idea would be to follow your idea, but only with a new term 
created for the purpose. Maybe something like:

A task or protected object X is considered a shared synchronized object of 
LToC T if T makes an entry call to an entry of the type of X. Similarly, a 
task or protected object X is considered a shared synchronized object of LToC
T if X makes an entry call to an entry of the type of T.

Another issue with "shared synchronized object" is that it includes protected 
objects, but protected function calls don't necessarily cause sequential 
behavior. I suppose it is OK here, as we are sort of assuming the best.

The rule for Known_Tasking_Conflict_Checks for non-volatile objects says:

      * the shared object is not volatile, the two tasks are known
        to refer to the same synchronized object, and each task whose
        potentially conflicting action updates the shared nonvolatile
        object, itself signals an action on the shared synchronized
        object that in turn signals an action in the other task.

but again, signalling relates actions, not objects, so this is nonsense. It should 
just talk about signalling the other task (since signalling is transitive, 
any intermediary can be ignored). Maybe something like:

      * the shared object is not volatile, the two tasks are known
        to refer to the same synchronized object (including each 
        other), and each task signals an action in the other task.

But signalling is dynamic and depends on control flow. So I think you have 
to talk about "potentially signalling" or something like that where control
flow is ignored. You probably also want to ignore signalling caused by the 
starts and stops of tasks (which doesn't seem interesting in this context, as 
that will always happen with nested tasks, eliminating all checks).

Maybe it would even be best to forget about signalling per se and write 
separate rules for this case.

Anyway, your improvements are better but you're not done yet (one of the 
reasons I suggested we skip this AI during the meeting yesterday - we can
wordsmith just as well via e-mail).

***************************************************************

From: Tucker Taft
Sent: Saturday, January 19, 2019  8:50 AM

I think your idea of "potentially signal" is the simplest.  Below is how I 
defined it and used it.  I also incorporated the bullet-based wording I 
proposed in response to Ed's email.

-----------
Add after 9.10(10):

  Action A1 is defined to /potentially signal/ action A2 if A1 signals A2,
  if action A1 and A2 occur as part of the execution of the same
  logical thread of control, and the language rules permit action A1 to
  precede action A2, or if action A1 potentially signals some action that
  in turn potentially signals A2.  

...

  Known_Parallel_Conflict_Checks
    If this policy applies to two concurrent actions appearing within
    parallel constructs, they are disallowed if they are known to denote
    the same object (see 6.4.1) with uses that potentially conflict. For
    the purposes of this check, any parallel loop may be presumed to
    involve multiple concurrent iterations. Also, for the purposes of
    deciding whether two actions are concurrent, it is enough for the
    logical threads of control in which they occur to be concurrent at
    any point in their execution, unless all of the following are true:
    
    * the shared object is volatile;

    * the two logical threads of control are both known to also refer to
      a shared synchronized object; and 
    
    * each thread whose potentially conflicting action updates the
      shared volatile object, also updates this shared synchronized object.

      AARM Reason: To properly synchronize two actions to prevent
      concurrency, a thread that does an update of a volatile object
      must update a synchronized object afterward to indicate it has
      completed its update, and the other thread needs to test the value
      of the synchronized object before it reads the updated volatile
      object. In a parallel construct, "signaling" cannot be used to
      prevent concurrency, since that generally requires some blocking,
      so testing the value of the synchronized object would probably
      need to use a busy-wait loop.

  Known_Tasking_Conflict_Checks
    If this policy applies to two concurrent actions appearing within
    the same compilation unit, at least one of which appears within a
    task body but not within a parallel construct, they are disallowed
    if they are known to denote the same object (see 6.4.1) with uses
    that potentially conflict, and neither potentially signals the 
    other (see 9.10). For the purposes of this check, any named
    task type may be presumed to have multiple instances. Also, for the
    purposes of deciding whether two actions are concurrent, it is
    enough for the tasks in which they occur to be concurrent at any
    point in their execution, unless all of the following are true:    
    
    * the shared object is volatile;

    * the two tasks are both known to also refer to a shared
      synchronized object; and 
    
    * each task whose potentially conflicting action updates the
      shared volatile object, also updates this shared synchronized object.

      AARM Reason: To properly synchronize two actions to prevent
      concurrency, a task that does an update of a *non*volatile object
      must use signaling via a synchronized object to indicate it has
      completed its update, and the other task needs to be signaled by
      this action on the synchronized object before it reads the updated
      nonvolatile object. In other words, to synchronize communication
      via a nonvolatile object, signaling must be used. To synchronize
      communication via a volatile object, an update of a shared
      synchronized object followed by a read of the synchronized object
      in the other task can be sufficient.
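
For illustration only (a sketch, not part of the proposed wording; 
Signaled_Handoff, Result, Gate, and Producer are made-up names), here is the 
tasking pattern the second AARM Reason describes: the task that updates the 
ordinary (nonvolatile) object Result then signals through the synchronized 
object Gate, and the other task is signaled by its action on Gate before it 
reads Result:

   with Ada.Text_IO;
   procedure Signaled_Handoff is

      Result : Integer := 0;    -- ordinary (nonvolatile) shared object

      protected Gate is         -- shared synchronized object used for signaling
         entry Wait;
         procedure Open;
      private
         Is_Open : Boolean := False;
      end Gate;

      protected body Gate is
         entry Wait when Is_Open is
         begin
            null;
         end Wait;

         procedure Open is
         begin
            Is_Open := True;
         end Open;
      end Gate;

      task Producer;
      task body Producer is
      begin
         Result := 42;   -- update the nonvolatile object ...
         Gate.Open;      -- ... then signal via the synchronized object
      end Producer;

   begin
      Gate.Wait;   -- signaled by Producer's action on Gate
      Ada.Text_IO.Put_Line (Integer'Image (Result));  -- read follows the signaling
   end Signaled_Handoff;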

***************************************************************

From: Randy Brukardt
Sent: Tuesday, February  5, 2019  8:54 PM

> I think your idea of "potentially signal" is the simplest.  
> Below is how I defined it and used it.  I also incorporated the 
> bullet-based wording I proposed in response to Ed's email.

Sounds good. But...

> -----------
> Add after 9.10(10):
> 
>   Action A1 is defined to /potentially signal/ action A2 if A1 signals A2,
>   if action A1 and A2 occur as part of the execution of the same
>   logical thread of control, and the language rules permit action A1 to
>   precede action A2, or if action A1 potentially signals some action that
>   in turn potentially signals A2.  

...
>   Known_Tasking_Conflict_Checks
>     If this policy applies to two concurrent actions appearing within
>     the same compilation unit, at least one of which appears within a
>     task body but not within a parallel construct, they are disallowed
>     if they are known to denote the same object (see 6.4.1) with uses
>     that potentially conflict, and neither potentially signals the 
>     other (see 9.10). 

I don't see how this wording eliminates control flow from consideration. We 
want (I believe) this to be a conservative check (unlike the other check), so
any possible signalling (whether or not it actually happens) should be 
sufficient.

Quick aside: It seems that since we're talking about actions that are known to 
denote the same object, we only need to worry about uses that conflict, rather 
than potentially conflict. (We're not looking in subprogram calls, here, 
right?? Doing so would not be very conservative, since we can't know if a 
particular global is actually written even if the contract says that it might 
be written.) So the word "potentially" can be dropped here, right?

Returning to the original question, let's look at a specific example:

    function Foo (...) return Boolean
       with Global => null;

    Glob : Natural := ...;
    task T1 is
        entry E;
    end T1;

    task body T1 is
    begin
        loop
            accept E do
               exit when Glob = 0;
            end E;
        end loop;
    end T1;

    task T2;
    task body T2 is
    begin
        loop
            if Foo(...) then
                T1.E;
            else
                exit when not Foo (...);
            end if;
            Glob := Bar(...);
        end loop;
    end T2;

I believe this is OK, even though Glob is an ordinary object, as the uses of 
Glob are always separated by signalling between T1 and T2 on any possible 
path. However, the only way to figure that out is to recognize that the write 
of Glob cannot be reached on the exit path, and that the exit must be taken
(since Foo is pure) if the entry call isn't made. I doubt that most compilers 
could do that.

So, how does this wording deal with the control flow here? We start with the 
question of whether the read of Glob in T1 potentially signals the write of 
Glob in T2. On the T1 side, we can see that the action of signalling T2 on the 
return from the rendezvous follows the read of Glob. So:

     A1: T1: Read of Glob 
     A2: T1: Signal T2 by return from rendezvous for E.
     A3: T2: Signal from T1 on return.
     A4: T2: Jump to "end if".
     A5: T2: Write Glob.

So, A2 signals A3, so A2 potentially signals A3. A1 potentially signals A2, 
because they're in the same thread and the read has to precede the return.
A3 potentially signals A4, for the same reason (the return has to precede 
executing the "end if"). A4 potentially signals A5, for the same reason. And 
by the transitive rule, repeatedly applied, we get the answer we want. Cool.
:-)

I see that the key here is "permit A1 to precede A2". We don't necessarily 
know for sure that the signalling action does precede the action of interest, but 
it *might*, and that is all we care about.

So I guess there is no problem here, at least not until someone tries to 
figure out how to implement a check like this. ;-) This case is easy, as there 
is only one signalling action to worry about, and there has to be one in order 
to get "potential signalling" between tasks. If there are a bunch, it looks 
much more painful, since this rule would seem to require tracing paths 
(including backwards). (Probably a simple version that simply checked if 
there is any signalling would get the right answer approximately 99% of the 
time.)

***************************************************************

From: Tucker Taft
Sent: Tuesday, February  5, 2019  9:34 PM

...
> I don't see how this wording eliminates control flow from 
> consideration. We want (I believe) this to be a conservative check 
> (unlike the other check), so any possible signalling (whether or not 
> it actually happens) should be sufficient.

I don't understand your concern.  "Potentially signal" is intended to 
not depend on control flow.  If you believe it still does, then we need to 
tweak its definition, not its use.

I see, by reading to the end, that you decided it is OK.
 
> Quick aside: It seems that since we're talking about actions that are 
> known to denote the same object, we only need to worry about uses that 
> conflict, rather than potentially conflict. (We're not looking in 
> subprogram calls, here, right?? Doing so would not be very 
> conservative, since we can't know if a particular global is actually 
> written even if the contract says that it might be written.) So the word 
> "potentially" can be dropped here, right?

If I understand you correctly, you are saying that we want this check to be 
"conservative" in the sense that it should *not* complain unless there is a 
clear problem.  So that means we should use "potentially signal" but not 
"potentially conflict."  But if we ignore Global annotations on subprograms, 
then that seems perhaps "too" conservative.  I wonder if we could take 
"specific" Global annotations into account (e.g. Global => in out Foo), but 
ignore less precise annotations (e.g. Global => in out all).  I suppose this 
is perhaps too complex.  Some sort of an implementation permission to 
complain if it really "knows" there is a problem might be adequate.
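
For concreteness (a hypothetical sketch; Globals_Demo, Counter, Bump, and 
Log_Anything are made-up names, using the Global syntax as written above), the 
distinction is between an annotation that names the exact object and one that 
only says some global might be written:

   package Globals_Demo is

      Counter : Natural := 0;

      procedure Bump                  -- "specific": names exactly which global is written
        with Global => in out Counter;

      procedure Log_Anything          -- less precise: some global might be written
        with Global => in out all;

   end Globals_Demo;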
 
...
> I see that the key here is "permit A1 to precede A2". We don't 
> necessarily know for sure that the signalling action does precede the 
> action of interest, but it *might*, and that is all we care about.
> 
> So I guess there is no problem here, at least not until someone tries 
> to figure out how to implement a check like this. ;-) This case is 
> easy, as there is only one signalling action to worry about, and there 
> has to be one in order to get "potential signalling" between tasks. If 
> there are a bunch, it looks much more painful, since this rule would 
> seem to require tracing paths (including backwards). (Probably a 
> simple version that simply checked if there is any signalling would 
> get the right answer approximately 99% of the time.)

OK, so I guess after all you are OK with the definition of "potentially 
signal".

***************************************************************

From: Randy Brukardt
Sent: Wednesday, February  6, 2019  1:44 AM

...
> > Quick aside: It seems that since we're talking about actions that 
> > are known to denote the same object, we only need to worry about 
> > uses that conflict, rather than potentially conflict. (We're not 
> > looking in subprogram calls, here, right?? Doing so would not be 
> > very conservative, since we can't know if a particular global is 
> > actually written even if the contract says that it might be
> > written.) So the word "potentially" can be dropped here, right?
> 
> If I understand you correctly, you are saying that we want this check 
> to be "conservative" in the sense that it should
> *not* complain unless there is a clear problem.  So that means we 
> should use "potentially signal" but not "potentially conflict."  But 
> if we ignore Global annotations on subprograms, then that seems 
> perhaps "too" conservative.  I wonder if we could take "specific" 
> Global annotations into account (e.g. Global => in out Foo), but 
> ignore less precise annotations (e.g. Global => in out all).  I 
> suppose this is perhaps too complex.  Some sort of an implementation 
> permission to complain if it really "knows" there is a problem might 
> be adequate.

A Global annotation only says that an object *might* be written, and my 
understanding is that we only want to detect mistakes that are certain.
(If this rule rejects much legitimate code, no one will ever want to use it,
and the time crafting it will be wasted.)

The original AI has a permission like the one you mention:

  When the applicable conflict check policy is Known_Conflict_Checks, the
  implementation may disallow two concurrent actions if the implementation
  can prove they will at run-time denote the same object with uses that
  potentially conflict.

...which seems broad enough to cover this case, too. That is, if the 
compiler can see that X is unconditionally written in procedure Foo, then 
an error surely can be produced. Same if the compiler does more flow 
analysis than is implicit in "potentially conflicting".

The only problem here is that you didn't rewrite this to use the new 
policy names, so this permission only seems to apply when 
(Known_Parallel_Conflict_Checks, Known_Tasking_Conflict_Checks), and not 
for the more usual (All_Parallel_Conflict_Checks, 
Known_Tasking_Conflict_Checks), or any other combination. That seems like 
a mistake.

...
> OK, so I guess after all you are OK with the definition of 
> "potentially signal.".

I'm not 100% convinced (since this definition is not symmetrical, I wonder 
if you could get different results from comparing action A to action B versus 
doing the reverse). Not sure how the compiler will decide the right order to 
look at things. But in the absence of a specific problem, I'll shut up. :-)

***************************************************************

From: Tucker Taft
Sent: Wednesday, February  6, 2019  7:50 AM

...
> The only problem here is that you didn't rewrite this to use the new 
> policy names, so this permission only seems to apply when 
> (Known_Parallel_Conflict_Checks, Known_Tasking_Conflict_Checks), and 
> not for the more usual (All_Parallel_Conflict_Checks, 
> Known_Tasking_Conflict_Checks), or any other combination. That seems 
> like a mistake.

Agreed, it should be associated separately with either of the 
"Known_XXX_Checks".

Let me know if you would like me to propose some wording.

> ...
>> OK, so I guess after all you are OK with the definition of 
>> "potentially signal.".
> 
> I'm not 100% convinced (since this definition is not symmetrical, I 
> wonder if you could get different results from comparing action A to 
> action B versus doing the reverse). Not sure how the compiler will 
> decide the right order to look at things. But in the absence of a 
> specific problem, I'll shut up. :-)

Well signaling is not symmetric.  We are trying to recognize that, without 
requiring control flow analysis.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, February  6, 2019  1:44 AM

...
> Let me know if you would like me to propose some wording.

Yea, verily! :-)

...
> Well signaling is not symmetric.  We are trying to recognize that, 
> without requiring control flow analysis.

[Ignoring my own intent to shut-up :-)] Right, and that's fine in the case of 
comparing an update to a read, since the order is defined by the rule itself 
(and that also makes sense, the update has to potentially signal the read for
everything to be OK). But when comparing an update to an update, how does the
implementation decide which one has to signal the other? Or is either order 
OK? Or do both orders have to be tested, and is it OK if *either* works? It 
*seems* possible (without trying to work out the details) that update A might
potentially signal B, while update B does not potentially signal A. So how 
does the compiler decide which is A and which is B??

****************************************************************

From: Tucker Taft
Sent: Wednesday, February  6, 2019  7:28 PM

...
> Yea, verily! :-)

Here you go:

    Implementation Permissions

  When the conflict check policy Known_Parallel_Conflict_Checks applies,
  the implementation may disallow two concurrent actions appearing
  within parallel constructs if the implementation can prove they will
  at run-time denote the same object with uses that conflict. Similarly,
  when the conflict check policy Known_Tasking_Conflict_Checks applies,
  the implementation may disallow two concurrent actions, at least one
  of which appears within a task body but not within a parallel
  construct, if the implementation can prove they will at run-time
  denote the same object with uses that conflict.

  AARM Ramification: This permission allows additional enforcement in instance
  bodies (where Legality Rules aren't usually enforced), in subunits and their
  parents, and across compilation units, if the implementation wishes.

I also attached the full AI, if that is useful.

...
> [Ignoring my own intent to shut-up :-)] Right, and that's fine in the 
> case of comparing an update to a read, since the order is defined by 
> the rule itself (and that also makes sense, the update has to 
> potentially signal the read for everything to be OK). But when 
> comparing an update to an update, how does the implementation decide 
> which one has to signal the other? Or is either order OK? Or do both 
> orders have to be tested, and is it OK if
> *either* works? It *seems* possible (without trying to work out the 
> details) that update A might potentially signal B, while update B does 
> not potentially signal A. So how does the compiler decide which is A 
> and which is B??

The rule "neither potentially signals the other," so I think that covers the
ground.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, February  6, 2019  8:18 PM

...
> Here you go:

Thanks for the quick service.

...
> I also attached the full AI, if that is useful.

Comparing it seems like more work than it is worth, so I didn't do it.
Everything that I botched copying would probably be wrong either way. :-)

...
> The rule "neither potentially signals the other," so I think that 
> covers the ground.

Sigh. Don't know how I missed that. Seems that does handle the case: you do 
have to make the test both ways.

****************************************************************
