Version 1.3 of ai12s/ai12-0267-1.txt


!standard 5.1(1)          18-06-15 AI12-0267-1/03
!standard 9.5(57/5)
!standard 9.10(11)
!standard 9.10(15)
!standard 11.5(19.2/2)
!standard H.5(0)
!standard H.5(1/2)
!standard H.5(5/5)
!standard H.5(6/2)
!class Amendment 18-03-29
!status work item 18-03-29
!status received 18-03-29
!priority Medium
!difficulty Hard
!subject Data race and non-blocking checks for parallel constructs
!summary
Data race and non-blocking checks for parallel constructs are defined. Mechanisms to enable and disable data race checks are based on Suppress/Unsuppress for Conflict_Check, plus an overall Detect_Conflicts configuration pragma. Note that some of the data-race checks also apply to concurrency introduced by Ada tasking.
!problem
Parallel execution can be a source of erroneous execution in the form of data races and deadlocking. As multicore computing becomes more prevalent, concerns for improved safety with regard to parallel execution are expected to become more commonplace. The proposals for the Global and Nonblocking aspects should provide the compiler with semantic information to facilitate detection of these errors, but additional checking may be needed to determine whether a parallelism construct is data-race free. The focus of this AI is to consider any additional checks that may be needed to support these features. Similarly, if a compiler cannot determine whether the parallelism is safe, there ought to be a way for the programmer to explicitly override the conservative approach of the compiler and insist that parallelism be applied, even if data races or deadlocking problems can potentially occur.
Ada needs mechanisms whereby the compiler is given the necessary semantic information to enable the implicit and explicit parallelization of code. After all, the current Ada language only allows parallel execution when it is semantically neutral (see 1.1.4(18) and 11.6(3/3)) or when it is expressed explicitly in the code as a task.
!proposal
This proposal depends on the facilities for aspect Global (AI12-0079-1) and for aspect Nonblocking (AI12-0064-2). Those proposals allow the compiler to statically determine where parallelism may be introduced without introducing data races or deadlocking.
An important part of this model is that the compiler will complain if it is not able to verify that the parallel computations are independent (see AI12-0079-1 and AI12-0064-2 for how this can happen).
Note that in this model the compiler will identify code where a potential data race (herein called a "potential conflict") occurs (following the rules for access to shared variables as specified in 9.10), and point out where objects cannot be guaranteed to be independently addressable. If this is not determinable at compile time, the compiler will insert run-time checks to detect data overlap if the Detect_Conflicts pragma is in force.
This model also disallows potentially blocking operations within parallel block statements, parallel loop statements, and parallel reduce attributes, to simplify the implementation of these constructs and to eliminate the possibility of deadlock.
We propose a Detect_Conflicts (configuration) pragma to complement the Detect_Blocking pragma to enforce data-race detection in cases where a compile-time check is not feasible. The checks associated with a Detect_Conflicts pragma can be suppressed by suppressing Conflict_Check in scopes where a true data-race is not believed to occur.
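As a non-normative illustration of the intended usage (the parallel loop syntax follows AI12-0119-1 and may change; Vector, Scale, and the other names here are invented for the example):

   pragma Detect_Conflicts;  --  configuration pragma: enable run-time
                             --  conflict detection for this compilation

   procedure Scale (A : in out Vector; Factor : Float) is
      pragma Suppress (Conflict_Check);
      --  The programmer asserts that no true data race occurs here,
      --  disabling both the compile-time legality check and the
      --  run-time check otherwise implied by Detect_Conflicts.
   begin
      for I in parallel A'Range loop
         A (I) := A (I) * Factor;
         --  Each iteration updates a distinct component, so no
         --  true conflict occurs.
      end loop;
   end Scale;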
!wording
Add a sentence to the end of 5.1(1):
A /parallel construct/ is a construct that introduces additional logical threads of control without creating a new task, and includes a parallel loop (see 5.5) and a parallel_block_statement (see 5.6.1). [ed note: this particular new sentence probably deserves to be specified in AI12-0119]
Modify 9.5(57/5):
A {parallel construct or a} nonblocking program unit shall not contain, other than within nested units with Nonblocking specified as statically False, a call on a callable entity for which the Nonblocking aspect is statically False, nor shall it contain any of the following:
Between 9.10(10) and 9.10(11):
Remove "Erroneous Execution" subtitle (it will re-appear later).
Modify 9.10(11):
[Given an action of assigning to an object, and an action of reading or updating a part of the same object (or of a neighboring object if the two are not independently addressable), then the execution of the actions is erroneous unless the actions are sequential.] Two actions are {defined to be} sequential if one of the following is true:
Add after 9.10(15):
Two actions that are not sequential are defined to be /concurrent/ actions.
Two actions are defined to /conflict/ if one action assigns to an object, and the other action reads or updates a part of the same object (or of a neighboring object if the two are not independently addressable). The action comprising a call on a subprogram or an entry is defined to /potentially conflict/ with another action if the Global aspect (or Global'Class aspect in the case of a dispatching call) of the called subprogram or entry is such that a conflicting action would be allowed during the execution of the call. Similarly, two calls are considered to potentially conflict if they each have Global (or Global'Class in the case of a dispatching call) aspects such that conflicting actions would be allowed during the execution of the calls.
Legality Rules
A parallel construct is illegal if two concurrent actions within the construct are known to refer to the same object (see 6.4.1) with uses that potentially conflict, and neither action is within the scope of a Suppress pragma for the Conflict_Check (see 11.5).
Erroneous Execution
The execution of two concurrent actions is erroneous if the actions make conflicting uses of a shared variable (or neighboring variables that are not independently addressable).
Add after 11.5(19.2/2):
Conflict_Check
Check that two or more concurrent actions do not make potentially conflicting uses of the same shared variable (or of neighboring variables that are not independently addressable), within the scope of a Detect_Conflicts pragma. See H.5.
Change H.5 title to: Pragmas Detect_Blocking and Detect_Conflicts
Modify H.5(1/2):
The following pragma [forces] {requires} an implementation to detect potentially blocking operations [within] {during the execution of} a protected operation{ or a parallel construct}.
Modify H.5(5/5):
An implementation is required to detect a potentially blocking operation {that occurs} during {the execution of} a protected operation{ or a parallel construct defined within a compilation unit to which the pragma applies}, and to raise Program_Error (see 9.5[.1]).
Modify H.5(6/2):
An implementation is allowed to reject a compilation_unit {to which a pragma Detect_Blocking applies} if a potentially blocking operation is present directly within an entry_body{,} [or] the body of a protected subprogram{, or a parallel construct occurring within the compilation unit}.
Add after H.5(6/2):
The following pragma requires an implementation to detect potentially conflicting uses of shared variables by concurrent actions (see 9.10).
Syntax
The form of a pragma Detect_Conflicts is as follows:
pragma Detect_Conflicts;
Post-Compilation Rules
A pragma Detect_Conflicts is a configuration pragma.
Dynamic Semantics
An implementation is required to detect that two concurrent actions make potentially conflicting uses of the same shared variable (or of neighboring variables that are not independently addressable) if the actions occur within a compilation unit to which the pragma applies (see 9.5). Program_Error is raised if such potentially conflicting concurrent actions are detected.
AARM Ramification: Such uses are already statically illegal by the rules in 9.10 proposed above if the concurrent actions are "known to refer to the same object" and the actions occur within a parallel construct. This dynamic check covers tasking-related concurrency, and cases in parallel constructs where the names used to refer to the objects are dynamic (e.g., involve variable array indices).
AARM Implementation Note: Implementing this can be expensive, both in terms of time and space, if the references that use such dynamic names are inside of loops. Hence, this is probably only appropriate during testing of an application.
!discussion
It is important for the programmer to receive an indication from the compiler whether the desired parallelism is safe.
We considered whether blocking calls should be allowed in a parallel block statement. We felt that allowing that could add significant complexity for the implementor, as well as introduce safety concerns about potential deadlocking.
While supporting blocking within a parallel construct is feasible, it was felt that it should be disallowed for now. If the demand arises in the future, the capability could be added then, but it is better not to standardize it until we know it is needed. In general, to support blocking, a logical thread of control must get its own physical thread of control, which implies overhead that is not normally desired for relatively light-weight parallel constructs.
We have largely chosen to simply piggy-back on the existing rules in 9.5 about potentially blocking to disallow blocking in parallel constructs. To maximize safety, we are considering all parallel constructs to be implicitly "nonblocking" constructs, thereby moving the non-blocking check to compile time.
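For example, the following parallel block (a hypothetical sketch; the parallel block syntax follows AI12-0119-1) would be rejected at compile time, since a delay_statement is potentially blocking and the construct is implicitly nonblocking:

   parallel do
      Compute_Left_Half;   --  hypothetical call with Nonblocking => True
   and
      delay 1.0;           --  illegal: a potentially blocking operation
                           --  within a parallel construct
   end do;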
As for data races, we have chosen to introduce the term "concurrent actions" in contrast to "sequential actions," to avoid the somewhat awkward term "non-sequential actions." We have defined the terms "conflicting" and "potentially conflicting" to simplify other wording that talks about data races. The distinction is that directly "conflicting" actions involve assignments and reads, while "potentially conflicting" actions involve calls where the Global (or Global'Class aspect for dispatching calls) implies that a conflicting action would be allowed during the execution of the call. Note that these definitions apply no matter how the concurrency is introduced, so they apply to concurrency associated with "normal" Ada tasking.
We are making data races illegal in parallel constructs when the concurrent actions are "known to refer to the same object" as defined in 6.4.1. For more dynamic cases, we have proposed a Detect_Conflicts pragma to turn on data-race detection at run time. This is a configuration pragma, so it is straightforward to apply it to an entire program library. Suppressing Conflict_Check is a way to indicate that a "true" conflict is not believed to occur. At run-time, more exact checks using address comparison can be performed to determine whether a "true" conflict occurs, when array indexing or pointer dereferencing are involved. Note that for parallel constructs, this checking is always "local" since it is about potential conflicts, and the compiler can rely on the Global (or Global'Class) aspect to determine whether a call "action" potentially conflicts with some other concurrent action.
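To illustrate the dynamic case (a hypothetical example; the parallel block syntax and the names Vector and Bump_Two are assumptions for this sketch):

   procedure Bump_Two (A : in out Vector; J, K : Positive) is
   begin
      parallel do
         A (J) := A (J) + 1.0;
      and
         A (K) := A (K) + 1.0;
         --  A (J) and A (K) are not "known to refer to the same
         --  object," so no legality check applies.  Within the scope
         --  of a Detect_Conflicts pragma, the run-time Conflict_Check
         --  compares addresses and raises Program_Error if J = K.
      end do;
   end Bump_Two;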
A compiler whose analysis can go beyond what "known to refer to the same object" requires at compile time could certainly produce a warning that a Conflict_Check was "certain to fail" at run time. But we felt that, as far as specified compile-time checks are concerned, we should limit ourselves to what "known to refer to the same object" requires in terms of compile-time analysis.
Note that Detect_Conflicts applies to data races associated with the concurrency introduced by "normal" Ada tasking as well as that associated with our newly proposed parallel constructs.
We have defined the Detect_Conflicts pragma in the same subclause (H.5) as that for the Detect_Blocking pragma, as it is a potentially quite expensive check, and probably only appropriate during testing. Alternative places for this definition might be 9.10 Shared Variables or C.6 Shared Variable Control. Since a "true" conflict is considered to be erroneous according to 9.10, even without H.5 the compiler can insert run-time checks to detect data races.
!ASIS
** TBD.
!ACATS test
ACATS B- and C-Tests are needed to check that the new capabilities are supported.
!appendix



From: Brad Moore
Sent: Wednesday, March 28, 2018  12:12 AM

This is a new AI extracted from AI12-0119-1 (Parallel Operations). [This is
version /01 of this AI - ED.]

This contains the bits that were related to detecting data races and
deadlocking issues relating to blocking calls for parallel constructs.

The wording hasn't changed on this, but it probably needs a bit of work to
better define what checks are performed, when they are performed, and how
those checks can be enabled/disabled.

****************************************************************

From: Tucker Taft
Sent: Wednesday, March 28, 2018  3:07 PM

We might want to have these rules be associated with a restriction or
restrictions, rather than as basic legality rules.  Erhard makes a strong
case for allowing programmers to bypass these rules when appropriate.  Randy
argues for introducing the notion of an "Allowance" and making these in effect
unless an Allowance overrules them.  In any case, to provide finer control, we
might want to have some way to turn them on and off, which might argue for
both a restriction and an allowance, with "No_Data_Races" and
"Allow_Data_Races" (though that name is a bit strange).  Perhaps we just need
a way to locally disable a restriction, e.g.:  Pragma
Bypass_Restrictions("No_Data_Races"), analogous to Unsuppress.  We already
have a way to specify a restriction partition wide, and implementations could
choose to provide certain restrictions by default, so that seems like an
alternative to having to introduce the distinct concept of allowances.  In
particular, support for Annex H might require that certain restrictions be on
by default, but then we clearly would need a way to turn them off.

Also, you seem to have left these rules in the basic AI on parallel
operations. They should probably be removed from there.

****************************************************************

From: Randy Brukardt
Sent: Thursday, March 29, 2018  3:07 PM

One thing that is clearly missing from this AI is a statement that it depends
on AI12-0119-1 (parallel blocks and loops) and probably on AI12-0242-1
(reduction). It does mention the contract AIs (64 & 79) so it's odd to not
mention the AIs that it is adding Legality Rules to.

****************************************************************

From: Tucker Taft
Sent: Tuesday, June 12, 2018  12:20 PM

Here is an AI started by Brad, but essentially re-written by Tuck, to capture
the checks for blocking and data races associated with parallel constructs.
[This is version /02 of the AI - Editor.] In fact, as you will see, the
data-race detection applies to any concurrency, whether it is introduced by
tasking or parallel constructs. Furthermore, the detection of potential
blocking is simply a refinement of what is already performed for protected
operations. We introduce a new pragma Detect_Conflicts and a new "check"
Conflict_Check, along with some new terminology such as "concurrent actions,"
"conflicting actions," and "potentially conflicting actions."  Hopefully
these terms are relatively intuitive.

Comments as usual welcome!

****************************************************************

From: Randy Brukardt
Sent: Tuesday, June 12, 2018  8:29 PM

> Here is an AI started by Brad, but essentially re-written by Tuck, to
> capture the checks for blocking and data races associated with
> parallel constructs.
...
> Comments as usual welcome!

Thanks for getting this done. The use of a dynamic check that can be
suppressed is clever and eliminates certain issues.

---

Sigh. I wanted to start with something positive, but sadly I don't see much
positive in this specific proposal.

I best start with a pair of philosophies:

(1) Ada is at its heart a safe language. We apply checking by default other
than in cases where we can't do it in order to keep compatibility with
existing code. (For instance, for these checks in tasks, and for Nonblocking
checks in protected types -- adding the checks would be incompatible with
existing code -- so we don't do it.) The parallel block and loop are new
constructs, so checks should be done by default. (If the checks are so
unreliable that doing so is a problem, then we shouldn't have the checks at
all.)

(2) Ada already is a fine language for writing parallel programs. Brad's
various Paraffin library efforts have proved that you can already write fine
parallel programs using the existing facilities of the language. Various
improvements to Ada 2020 that we've already finished or are close (in
particular, AI12-0189-1 for procedure body iterators) would give virtually the
same syntactic and semantic power to using libraries rather than a complex set
of new features. (Indeed, a library is more flexible, as it can more easily
provide tuning control.)

As such, the value of the parallel loop and block features boils down to two
things: (A) the ability to more safely construct programs using them; and
(B) the marketing advantages of having such features. (A) is important since
it allows average programmers to correctly use parallel programming.

If we're not going to make any serious effort to get (A) and default to be
safe, then the value of the parallel block and loop have (for me) declined to
nearly zero (just being (B), and I don't put a ton of value on marketing).
Without static, safe checks, we could get the benefits of parallel iterators
simply by defining them and not requiring any special syntax to use them. And
we could provide some libraries like Brad's for more free-form iteration. That
would be much less work than what we've been doing, and with no loss of
capability.

---

Given the above, here are some specific criticisms:

(1) > Change H.5 title to: Pragmas Detect_Blocking and Detect_Conflicts

Putting Detect_Conflicts into Annex H and not having it on by default ensures
that most people will never use it (like Detect_Blocking), ensures that many
users won't even have access to it (since many vendors don't support Annex H),
and worst, ensures that Ada newbies won't get the benefit of correctness
checking for parallel loops. It makes sense to let experts turn these checks
off (they're not intended for people like Alan!), but they need to be made
unless/until a user knows they don't need them.

(2)> An implementation is required to detect a potentially blocking
   > operation {that occurs} during {the execution of} a protected
   > operation{ or a parallel construct defined
   > within a compilation unit to which the pragma applies}, and to raise
   > Program_Error (see 9.5[.1]).

I don't understand this at all. AI12-0119-1 already includes the following:

  During the execution of one of the iterations of a parallel loop, it
  is a bounded error to invoke an operation that is potentially blocking
  (see 9.5).

I expect that most implementations of parallel loops will detect the
error, since that will allow a substantial simplification in the
implementation (a thread that doesn't block doesn't need data structures to
manage blocking or delays - just the ability to complete or busy-wait for a
protected action). For such implementations, this buys nothing. Moreover,
the presence of the bounded error ensures that no portable code can ever
depend on blocking in a parallel loop or block.

Moreover, this formulation completely ignores the substantial work we did
defining static checks for nonblocking. Since no portable code can depend on
blocking, I don't even see much reason to be able to turn the check off. We
could simply have a Legality Rule for each statement preventing the use of
any operation that allows blocking. If we really need a way to turn the check
off, we should declare a pragma for this purpose. (It doesn't have to be a
general feature.)

Aside: If we're in the business of defining configuration pragmas to get
additional checks, we should consider adding a pragma to force all protected
declarations to default to having Nonblocking => True. That would make the
nonblocking checks static in those declarations. We can't do that by default
for compatibility reasons, but we can surely make it easy to do that if the
user wants.

(3) A pragma Detect_Conflicts is a configuration pragma.

Detect_Conflicts should always be on for parallel loops and blocks (to get
(A), above). The suppression form ought to be sufficient for the hopefully
rare cases where it needs to be turned off.

For compatibility, we need a pragma like Detect_Conflicts if we want to detect
these uses for tasks.

(4)
> Implementation Permissions
>
>  An implementation is allowed to reject a compilation_unit to which
> the  pragma Detect_Conflicts applies, if concurrent actions within the
> compilation unit are known to refer to the same object (see 6.4.1)
> with uses that potentially conflict, and neither action is within  the
> scope of a Suppress pragma for the Conflict_Check (see 11.5).

This ought to be a requirement. What is the point of knowing this statically
(which we do) and then not reporting it to the user? I doubt that many vendors
would take advantage of a permission that simply adds work, especially early
on.

Perhaps we should limit any such requirement to obvious places (such as within
a parallel block or loop) as opposed to the rather general catch-all.
But then see below.

(5)
> An implementation is required to detect that two concurrent actions
> make potentially conflicting uses of the same shared variable (or of
> neighboring variables that are not independently addressable) if the
> actions occur within a compilation unit to which the pragma applies
> (see 9.5). Program_Error is raised if such potentially conflicting
> concurrent actions are detected.

I'm not sure how this could be implemented in Janus/Ada, since we don't
process anything larger than a simple statement at a time. Trying to keep
track of every possible concurrent access over an entire unit sounds like a
nightmare, given that nearly every object referenced in the compilation unit could
be used that way. The check for a single parallel block or loop seems
tractable since the scope of the check is very limited. But if any possible
concurrent access is involved, that can happen in any code anywhere. That
seems way too broad. It seems especially painful in the case of array
indexing, which already requires a very complex dynamic check even for a
parallel loop. I haven't a clue how one could implement such a check for all
tasks in a program unit.

It might be implementable if such checks were allowed in (and limited to)
task bodies as well as the parallel constructs. But even then the
implementation would be quite painful, not just the local check in a single
construct as previously envisioned.

=======================================================

So, let me outline an alternative proposal using a few of the features of
Tucker's proposal.

(1) The use of an operation that allows blocking (see 9.5) is not allowed in
a parallel block or loop unless pragma Allow_Blocking applies to the block or
loop.

Pragma Allow_Blocking gives permission to allow blocking in a parallel block
or loop. It applies to a scope similarly to pragma Suppress. (Detailed wording
TBD, of course).

(2) Pragma Statically_Check_Blocking is a configuration pragma. It causes the
value of Nonblocking for a protected declaration (that is, a protected type or
single protected object) to be True, unless it is directly specified. (That
is, the value is not inherited from the enclosing package.) It also causes
pragma Detect_Blocking to be invoked. The effect of this is to cause protected
types to be rejected if a potentially blocking operation can be executed (note
that we include Detect_Blocking so that the deadlock case, which can't be
statically detected, raises Program_Error). One can turn off the effect of
this pragma on an individual declaration by explicitly specifying Nonblocking
=> False.

(3) Conflict checks are defined essentially as Tucker has defined them, but
they only apply to parallel blocks and loops. And they're always on unless
specifically suppressed.

(4) The check Conflict_Check is defined essentially as Tucker has it.

(5) They have a Legality Rule to reject the parallel construct for the reasons
Tucker noted:

  A parallel construct is illegal if concurrent actions within the
  construct are known to refer to the same object (see 6.4.1)
  with uses that potentially conflict, and neither action is within
  the scope of a Suppress pragma for the Conflict_Check (see 11.5).

We could call this a "requirement" if it gives people heartburn to attach a
Legality Rule to pragma Suppress. I don't see much benefit to insisting that a
runtime check is made if it is known to be problematic at compile-time, but
whatever way we describe that doesn't matter much.

P.S. Hope I wasn't too negative in this; I'm trying to make progress on this topic.

****************************************************************

From: Tucker Taft
Sent: Tuesday, June 12, 2018  9:06 PM

I agree with much of what you say.  I tried to define something that was
internally consistent, as well as consistent with existing features of the
language.  However, I agree with you that we might want to have some of these
checks on by default.  I guess I should have made that clear.  I did indicate
in the discussion that placement of the Detect_Conflicts pragma in H.5 is one
approach, but 9.10 and C.6.1 are reasonable alternative places.

To some extent I was reacting to Erhard's big concern that we might be making
parallelism too hard to use, by enforcing all checks by default, when we know
that Global annotations will not be very precise in many situations.

As far as implementation of conflict checks, I tried to make it clear that the
checks can be implemented locally to individual subprograms, since the
Global/Global'Class aspect is used when analyzing a call on some other
subprogram.  So really all you need to worry about are places where a task is
initiated, or a parallel construct occurs.  You don't need to look across the
"entire" compilation unit for possible conflicts.  The checks can be performed
very locally.

Despite Brad's success with his Paraffin library, I believe it is totally
unrealistic to expect typical Ada programmers to build their own light-weight
threading that takes advantage of multicore hardware.  We clearly need to
provide a new standard way to create light-weight threads of control, with
appropriate work-stealing-based scheduling.  It could be in a "magic" generic
package of some sort, but I really don't think that would be very friendly.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, June 12, 2018  9:53 PM

> I agree with much of what you say.  I tried to define something that
> was internally consistent, as well as consistent with existing
> features of the language.  However, I agree with you that we might
> want to have some of these checks on by default.  I guess I should
> have made that clear.
> I did indicate in the discussion that placement of the
> Detect_Conflicts pragma in H.5 is one approach, but 9.10 and
> C.6.1 are reasonable alternative places.

Saw that, but I thought it was important to remind everyone that Annex H is
*optional*!

> To some extent I was reacting to Erhard's big concern that we might be
> making parallelism too hard to use, by enforcing all checks by
> default, when we know that Global annotations will not be very precise
> in many situations.

Parallel programming *is* hard, and papering that over doesn't really serve
anyone. Anyone below the "guru" level will appreciate having help getting it
right (even if that is restrictive). After all, parallelism not truly safe
unless everything is Global => null or in synchronized (with a few additional
cases for blocks, which can have different subprograms in each branch). You
don't need Ada to write unsafe parallel code!

I viewed Erhard's concern as being more oriented to the problem that no
existing code will have Nonblocking or Global specified. So to make some
existing code parallel, it might be necessary to add those all over the place.
That does bother me a bit, but I view that as a one-time pain, and since the
contracts have other benefits beyond just these checks, it's well worth the
effort.

> As far as implementation of conflict checks, I tried to make it clear
> that the checks can be implemented locally to individual subprograms,
> since the Global/Global'Class aspect is used when analyzing a call on
> some other subprogram.  So really all you need to worry about are
> places where a task is initiated, or a parallel construct occurs.  You
> don't need to look across the "entire" compilation unit for possible
> conflicts.  The checks can be performed very locally.

I don't quite understand this. The checks are about conflicts between logical
threads, and as such there have to be at least two threads involved. In the
case of task initiation, you can find out the global uses for the new task,
but you don't know that for the activating task -- you can activate a task
anywhere. You would know if there are any local conflicts in the scope where
 the task is going to end up -- how could you possibly do that for a task
initiated by an allocator or returned from a function?

> Despite Brad's success with his Parafin library, I believe it is
> totally unrealistic to expect typical Ada programmers to build their
> own light-weight threading that takes advantage of multicore hardware.

I agree, they should download Paraffin. :-) More seriously, we could provide
a light-weight version of the library in Standard and let implementers enhance
it.

> We clearly need to provide a new
> standard way to create light-weight threads of control, with
> appropriate work-stealing-based scheduling.  It could be in a "magic"
> generic package of some sort, but I really don't think that would be
> very friendly.

The friendliness comes from AI12-0189-1 and similar syntax sugar. We didn't
need much extra syntax to get generalized iterators, generalized indexing, or
generalized references. The only reason this case is different is that we (some
of us, at least) want to attach static checks to this case. Otherwise, the
general syntax is good enough.

P.S. Sorry, I think that I inherited the need to always have the last word
from Robert. :-)

****************************************************************

From: Randy Brukardt
Sent: Thursday, June 14, 2018  9:48 PM

> I don't quite understand this. The checks are about conflicts between
> logical threads, and as such there have to be at least two threads
> involved.
> In the case of task initiation, you can find out the global uses for
> the new task, but you don't know that for activating task -- you can
> activate a task anywhere. You would know if there are any local
> conflicts in the scope where the task is going to end up -- how could
> you possibly do that for a task initiated by an allocator or returned
> from a function?

Having thought about this some more, I think you are trying to use the Global
of the activator for this check. That works for the activation of local tasks,
but it doesn't seem to work in the most important case:
library-level tasks (or tasks that are allocated for a library-level access
type).

That's because you have to compare against the Global for the environment
task. What's that? Well, it has to be effectively Global => in out all,
because the environment task elaborates all of the global objects. So you have
to allow it to write them! But that means that there would be a conflict with
every task that uses any global data -- even if that task is the *only*
(regular) task to access that global data. That doesn't seem helpful in any
way.

As I noted yesterday, I also don't see how the check could be done in the case
where a task is returned by a function -- you don't know the activator inside
the function, and you don't know much about the task at the function call
site. You probably could rig up a runtime version of the static check, but
that would be very complex (the static check being complex, since the data
structures needed are very complex). Seems like a huge amount of runtime
overhead for not much benefit (as noted above, almost every task has a
conflict; the whole reason that programming Ada tasks is hard is managing that
conflict).

****************************************************************

From: Tucker Taft
Sent: Friday, June 15, 2018  3:12 AM

I am working on a new version.  Hopefully it will address some of these issues.

****************************************************************

From: Tucker Taft
Sent: Friday, June 15, 2018  4:44 AM

Here is a new version that attempts to address most of Randy's comments.  Both
non-blocking checks and compile-time conflict checks are on by default for
parallel constructs.

[This is version /03 of the AI - Editor.]

****************************************************************

From: Randy Brukardt
Sent: Friday, June 15, 2018  8:17 PM

> Here is a new version that attempts to address most of Randy's 
> comments.  Both non-blocking checks and compile-time conflict checks 
> are on by default for parallel constructs.

Thank you very much; I like this version much better.

For the record, I deleted a couple of extra spaces and added a missing period 
when posting.

There might be some value to describing the model of the data race checks for
tasks, since it didn't seem obvious to me.

Also, there might be value in describing the difference between a "data race" 
and a "race condition" (you did this once for me in e-mail, which I happened 
to re-read last night when working on AI12-0240-2; I doubt most people 
understand the difference). And in particular, that there is no detection of
race conditions (which isn't really possible, and as you pointed out, aren't
even erroneous in an Ada sense).

No rush on those though.

***************************************************************
