CVS difference for ai12s/ai12-0279-1.txt

Differences between version 1.8 and version 1.9

--- ai12s/ai12-0279-1.txt	2019/01/26 00:08:07	1.8
+++ ai12s/ai12-0279-1.txt	2020/02/04 06:30:34	1.9
@@ -1,6 +1,7 @@
-!standard D.2.1(1.5/2)                                19-01-25  AI12-0279-1/05
+!standard D.2.1(1.5/2)                                20-02-03  AI12-0279-1/06
 !standard D.2.1(7/5)
 !class binding interpretation 18-05-14
+!status ARG Approved 9-0-1 18-11-10
 !status Amendment 1-2012 18-11-26
 !status ARG Approved 9-0-1 18-10-22
 !status work item 18-05-14
@@ -43,6 +44,9 @@
 
 !wording
 
+[Editor's note: The changes of AI12-0294-1 have been folded into this revised
+AI.]
+
 Add after D.2.1(1.5/2)
 
 For a noninstance subprogram (including a generic formal subprogram),
@@ -57,19 +61,79 @@
 
    If a Yield aspect is specified True for a primitive subprogram S of a
    type T, then the aspect is inherited by the corresponding primitive
-   subprogram of each descendant of T. If the Yield aspect is specified
-   for a dispatching subprogram that inherits the aspect, the specified
-   value shall be confirming. If the Nonblocking aspect (see 9.5) of
-   the associated callable entity is statically True, the Yield aspect
-   shall not be specified as True.
+   subprogram of each descendant of T.
+
+Legality Rules
+
+If the Yield aspect is specified for a dispatching subprogram that inherits
+the aspect, the specified value shall be confirming.
+
+If the Nonblocking aspect (see 9.5) of the associated callable entity is 
+statically True, the Yield aspect shall not be specified as True. For a 
+callable entity that is declared within a generic body, this rule is checked 
+assuming that any nonstatic Nonblocking attributes in the expression of the 
+Nonblocking aspect of the entity are statically True.
+
+  AARM Reason: The second sentence here is an assume-the-worst rule. The
+  only Nonblocking attributes that are nonstatic are those that depend,
+  directly or indirectly, on the nonblocking aspect of a generic formal
+  parameter. We have to assume these might in fact have the value True if
+  given an appropriate actual entity.
 
+In addition to the places where Legality Rules normally apply (see 12.3), 
+these rules also apply in the private part of an instance of a generic unit.
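+
+[Editor's note: A possible illustration of these rules; the names below are
+invented for this example and are not part of the proposed wording:
+
+   procedure Step with Yield;    -- equivalent to Yield => True; legal
+
+   procedure Fast with Nonblocking, Yield => True;
+   -- Illegal: the Nonblocking aspect is statically True.
+
+   generic
+      with procedure Op;
+   package G is
+      procedure P;
+   end G;
+
+   package body G is
+      procedure Q with Nonblocking => Op'Nonblocking, Yield => True;
+      -- Illegal: within a generic body, the nonstatic Op'Nonblocking is
+      -- assumed to be statically True (the assume-the-worst rule above).
+      procedure Q is begin Op; end Q;
+      procedure P is begin Q; end P;
+   end G;]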
+
 Add after D.2.1(7/5)
 
-If the Yield aspect has the value True, then a call to procedure Yield is 
-included within the body of the associated callable entity, and invoked
-immediately prior to returning from the body if and only if no other
-task dispatching points were encountered during the execution of the
-body.
+If the Yield aspect has the value True for a callable entity C, then a call to 
+procedure Yield is included within the body of C, and invoked
+immediately prior to returning from the body in the following cases:
+   * if no other task dispatching points were encountered during the execution 
+     of the body;
+   * in other implementation-defined cases, unless otherwise specified.
+
+AARM Reason: For an Ada program using a preemptive task dispatching policy
+on a host operating system, it would be very expensive to count preemptions
+for this purpose, so we only require that Yield is called if no task
+dispatching points were executed. In other cases, we allow the
+implementation to determine whether Yield is called, except under the
+non-preemptive policy (see D.2.4), where the behavior is fully specified.
+
+Note that for an implementation that supports the non-preemptive task
+dispatching policy along with other policies, the implementation cannot know
+at compile time which task dispatching policy will be in effect, so it will
+need the counting implementation described below even if the actual task
+dispatching policy would allow Yield to be called unconditionally. For this
+reason, we don't specify exactly when Yield will be called - we want to let
+implementations share parts of their implementations of the different
+dispatching policies.
+End AARM Reason.
+[Editor's note: Perhaps this second paragraph should be in the !discussion 
+instead?]
+
+AARM Implementation Note: Yield can be implemented by the task supervisor
+maintaining a per-task count of task dispatching points using a modular type. 
+The count is saved at the start of a subprogram that has Yield specified, and
+Yield is called if the count is unchanged when returning from the subprogram.
+If the counter has enough bits, the wraparound case can be made extremely
+unlikely (having exactly 2**32 task dispatching points during the execution
+of a subprogram is highly unlikely), and thus does not need special
+protection.
+End AARM Implementation Note.
+[Editor's note: This addresses Tucker's concern about wraparound; a version
+of this was in the !discussion in previous versions of the AI.]
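+
+[Editor's note: A sketch of this implementation model in Ada-like form; the
+names Count and Dispatch_Count are invented here, and the real mechanism
+would live in the task supervisor:
+
+   type Count is mod 2**32;
+   function Dispatch_Count return Count;
+   --  Per-task count of task dispatching points, incremented by the task
+   --  supervisor at each task dispatching point.
+
+   --  Expansion of a procedure P that has Yield => True:
+   procedure P is
+      Saved : constant Count := Dispatch_Count;
+   begin
+      ...  --  the original body of P
+      if Dispatch_Count = Saved then
+         --  No dispatching point was encountered; equality (not ordering)
+         --  makes counter wraparound harmless except in the exact 2**32 case.
+         Ada.Dispatching.Yield;
+      end if;
+   end P;]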
+
+Add after D.2.4(9/3):
+
+When Non_Preemptive_FIFO_Within_Priorities is in effect and the Yield aspect 
+has the value True for a callable entity C, then the call to 
+procedure Yield that is included within the body of C is invoked
+immediately prior to returning from the body if and only if no other task 
+dispatching points were encountered during the execution of the body.
+
+AARM Reason: This overrides the rules of D.2.1, and requires that for this
+policy Yield is only called when no other task dispatching points are 
+encountered. This improves the analyzability of programs using this policy.
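+
+[Editor's note: For example (invented names; a sketch of the intended use):
+
+   pragma Task_Dispatching_Policy (Non_Preemptive_FIFO_Within_Priorities);
+
+   procedure Server_Step with Yield;
+   --  On return from Server_Step, Yield is called if and only if no other
+   --  task dispatching point was encountered during its execution.]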
 
 !discussion
 
@@ -88,12 +152,8 @@
 task dispatching points on any of the language defined subprograms could add
 unnecessary overhead where it was not needed.
 
-One possible implementation model would be to declare a local constant
-initialized by copying a per-task count of task dispatching points, and then
-immediately prior to returning from the subprogram, a call to the Yield
-operation is performed if the per-task count has not changed. This count would
-presumably be represented as a modular type to avoid overflow.
 
+
 !corrigendum D.2.1(1.5/2)
 
 @dinsa
@@ -128,6 +188,7 @@
 immediately prior to returning from the body if and only if no other
 task dispatching points were encountered during the execution of the
 body.
+** TBD: Need to redo this when this AI is redone.
 
 !ASIS
 
@@ -636,3 +697,498 @@
 
 ****************************************************************
 
+From: Arnaud Charlet
+Sent: Monday, October 21, 2019  5:54 AM
+
+In the process of prototyping AI12-0279 we discovered that the current
+wording isn't appropriate: it requires a new aspect Yield to be supported
+for all dispatching policies, which is simply not practical:
+
+- it cannot be implemented outside bare metal platforms since there is little
+  or no control there over when the processor yields.
+- even on bare metal, it will incur a distributed overhead for all executions
+  and implementations for a very rare case (the case where the user does use
+  the Yield aspect, which isn't a general-purpose aspect, unlike e.g.
+  Detect_Blocking, which could arguably be made a default).
+
+So the proposal to solve this is to move the wording associated with the
+Yield aspect to D.2.4 (out of D.2.1), since this is where the non-preemptive
+scheduling policy is defined and where this aspect makes sense.
+
+We can then define whether the Yield aspect should be completely ignored
+for other policies, or possibly implemented via a straight call to Yield at the
+end of the entity (without any "if and only if no other task dispatching points 
+were encountered during the execution of the body" requirement, as is the
+case for the non-preemptive policy).
+
+Quoting an internal discussion with Tuck on this subject:
+<<
+So I would agree we should move this aspect to be associated directly with the
+nonpreemptive scheduler (i.e. D.2.4), and define its semantics in terms of
+that, where the increment of such a counter would match exactly those points
+where the nonpreemptive scheduler actually looks for another task to switch to.
+The aspect need not be supported at all, or could merely be ignored, in
+environments where there is no support for the nonpreemptive scheduler.
+>>
+
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Tuesday, October 29, 2019  12:16 AM
+
+> In the process of prototyping AI12-0279 we discovered that the current 
+> wording isn't appropriate: it requires a new aspect Yield to be 
+> supported for all dispatching policies, which is simply not practical:
+> 
+> - it cannot be implemented outside bare metal platforms since there is
+>   little or no control there over when the processor yields.
+> - even on bare metal, it will incur a distributed overhead for all executions
+>   and implementations for a very rare case (the case where the user does
+>   use the Yield aspect, which isn't a general-purpose aspect, unlike e.g.
+>   Detect_Blocking, which could arguably be made a default).
+
+With the standard proviso that I know just enough about real-time to be
+dangerous, the above doesn't make a lot of sense to me.
+
+As I understand the intention, the semantics is no more and no less than
+inserting a Yield call (which is equivalent to "delay 0.0") in particular
+circumstances. There is no concern about whether the processor yields too
+much - this is orthogonal to any other yielding.
+
+And the overhead for the implementation is only to use a counter rather than
+the single Boolean used for Detect_Blocking. It's hard for me to imagine how
+that would cause a significant distributed overhead (is incrementing a word
+really that much more expensive than setting a byte??).
+
+A subprogram that uses aspect Yield of course would also have some overhead,
+but that's not "distributed" overhead, as it is avoidable by not using the
+aspect Yield. (The expected implementation is that the subprogram containing
+aspect Yield saves the Detect_Blocking count when it is entered and calls
+Yield on exit if the counter is unchanged.)
+
+So I don't see the problem, can you explain? It seems to me that if there is 
+a problem, it is likely with the definition of the operation. If it is too 
+expensive for a preemptive dispatching policy, it's just as likely to be too 
+expensive for a nonpreemptive policy (which ought to be cheaper than a 
+preemptive policy!). Shoving such a problem under the rug so-to-speak by 
+sticking the aspect into an obscure section isn't the way to fix the problem. 
+(I wouldn't want to add a significant distributed overhead to Janus/Ada just 
+for this aspect, for instance.)
+
+> So the proposal to solve this is to move the wording associated with 
+> the Yield aspect to D.2.4 (out of D.2.1) since this is where the non 
+> preemptive scheduling policy is defined and where this aspect makes 
+> sense.
+
+I'm a bit dubious that it makes sense anywhere (given that Janus/Ada has had 
+a nonpreemptive tasking system forever, I do have some experience in this area),
+but I'll defer to the real-time people on that.
+
+****************************************************************
+
+From: Tullio Vardanega
+Sent: Wednesday, October 30, 2019  11:52 AM
+
+Thanks for your prototyping effort and apologies for reacting this late.
+Yet, echoing Randy's points on your concerns, I fail to see where your two 
+objections come from.
+The implementation model outlined in the AI seems to be free of both
+risks: surely the equivalent semantics of delay 0.0 can be had outside of
+bare-metal implementations too, and the additional required logic seems
+equally tiny. Would you please elaborate further on your argument?
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, October 30, 2019  12:01 PM
+
+Is there any point in such an aspect for a preemptive system?  I agree with 
+Arnaud that this belongs with the non-preemptive scheduling policy, 
+independent of whether we think it is efficient to implement.  This will also 
+allow us to define its semantics in terms of the non-preemptive policy. 
+
+For a preemptive scheduler, it is unclear (to me at least) exactly what this 
+aspect requires and what it is supposed to accomplish.
+
+****************************************************************
+
+From: Tullio Vardanega
+Sent: Wednesday, October 30, 2019  12:18 PM
+
+The !question text says (and means) that the principal use case is 
+non-preemptive dispatching.
+Yet, the notion that by setting the Yield aspect to True one makes sure that
+a call to an open entry or to a suspension object set to True becomes a
+dispatching point is useful for preemptive dispatching too.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, October 30, 2019  12:31 PM
+
+I'll have to study that part of the wording.  That sounds somewhat harder to 
+accomplish in any case...
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, October 30, 2019  12:51 PM
+
+OK, there is nothing in the wording relating to this issue of calling open 
+entries, it is only in the !question.  I guess your point is that even when 
+you are using a preemptive scheduler, it is useful to insert dispatching 
+points if none come along "naturally."  Can you explain the value that has 
+in a preemptive scheduler?  Clearly you can't call an entry, whether open or 
+not, from inside a protected action.  Given that, wouldn't that mean such 
+dispatching points are sort of irrelevant?  Or are you thinking of round-robin 
+scheduling for tasks at the same priority level?  I presumed that was driven 
+primarily by timing events.
+
+****************************************************************
+
+From: Tullio Vardanega
+Sent: Thursday, October 31, 2019  3:29 AM
+
+The concern for preemptive dispatching is not dissimilar to that for
+non-preemptive, when you use (near-)full Ada, that is to say, task entries
+along with suspension objects - in other words, outside of the Ravenscar
+Profile. In those situations, having the ability to "force" a dispatching
+point where one would otherwise occur only in some circumstances (entry
+closed, suspension object set to False) does help the user reason about the
+program behaviour.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, October 31, 2019  7:34 AM
+
+So the point is for something like FIFO_Within_Priorities, when you might 
+have multiple ready tasks at the same priority, the Yield aspect will ensure 
+that at least once during a given subprogram execution the scheduler will 
+switch to the next same priority task, if any, on the ready queue.  Perhaps 
+if we provide an implementation permission to always insert an extra 
+dispatching point when Yield is specified, *except* in the non-preemptive 
+tasking policy, where we want tighter control.
+
+****************************************************************
+
+From: Tullio Vardanega
+Sent: Thursday, October 31, 2019  3:49 PM
+
+The case you describe might be a plausible scenario, but just one among many.
+Knowing where dispatching points occur is of value when concurrency is used. 
+Having them conditional on the status of entries (open/closed) or suspension
+objects (True/False) makes things more complicated. One may want that
+complication, because it is more expressive, but might as well do away with 
+it. Which is where the Yield aspect helps in general, outside of just 
+non-preemptive dispatching.
+
+You seem to be saying that the effect of Yield set to True for the
+non-preemptive case would be obtained by subtraction instead of by addition. 
+That sounds like you anticipate an intolerable runtime overhead in adding the
+dispatching point when none had occurred in the subprogram with that aspect 
+on. I still cannot see where that overhead would come from.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, October 31, 2019  4:01 PM
+
+It sounds like my response was confusing.  I would like a legal implementation 
+of Yield in a *preemptive* case to be an unconditional extra dispatching point 
+on return from the subprogram, rather than requiring the implementation to keep
+track of whether a dispatching point had already occurred.  In a non-preemptive
+case, it would be exactly as it is currently described, namely the 
+implementation would have to keep track of whether some dispatching point 
+already occurred.
+
+(To ease implementation we would still want to allow even in the 
+non-preemptive case the insertion of an extra dispatching point if the counter 
+of dispatching points happened to wrap around -- not sure exactly how to 
+phrase that, but it would only happen if there were an enormous number of 
+dispatching points already -- perhaps a documentation requirement for the
+cases when an extra dispatching point might be inserted.)
+
+****************************************************************
+
+From: Tullio Vardanega
+Sent: Friday, November  1, 2019  9:33 AM
+
+Your clarification helped: the possibility might be an agreeable landing point.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Friday, November  1, 2019  9:47 AM
+
+One way to do this is to split the rules into a main part which specifies that 
+at least one dispatching point will occur during the execution of a subprogram 
+with the Yield aspect True, and then implementation advice to only insert an 
+extra dispatching point if there are none that occur "naturally."  In the 
+section on non-preemptive scheduling, we could turn the advice into more of a 
+requirement (with perhaps still a small loop-hole to allow counter 
+wraparound).  There is already a general requirement to document whether and 
+how implementation advice is being followed, so that would ensure some sort 
+of documentation on what the implementation is actually doing.
+
+****************************************************************
+
+From: Arnaud Charlet
+Sent: Monday, November  4, 2019  3:52 AM
+
+The above seems acceptable to me.
+
+To recap my main concern: for full OSes with scheduling fully handled by the
+kernel, the Ada runtime has zero control over scheduling and over when the
+processor decides to yield, so you cannot implement the current wording
+("only yield if not done yet"). So the wording has to be changed, and this
+clearly shows that only very specific scheduling policies were taken into
+account (and actually mostly one, the non-preemptive policy).
+
+For other policies, it's not clear to me that we have an exact 1-to-1 mapping
+between the nonblocking detection and the yield detection; has this been
+discussed and confirmed?
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Monday, November  4, 2019  6:06 PM
+
+> ... In the section on non-preemptive scheduling, we could turn the 
+> advice into more of a requirement (with perhaps still a small 
+> loop-hole to allow counter wraparound).
+...
+
+Somewhat of an aside: I don't think there is any need for allowing counter
+wraparound.
+
+One simply uses [in]equality (*not* an ordering operator) to determine if
+the counter has changed, and thus whether a call to Yield is needed. In
+that case, wraparound doesn't matter(*); the counter is still different.
+
+(*) Yes, if *exactly* the right number of task dispatching points have
+occurred, there is an issue. But that is a very minor issue (an extra task
+dispatching point is inserted, which in most circumstances would not harm
+analysis). And that is easily rendered irrelevant by using a large enough
+counter. Using a 32-bit counter, for instance, means that exactly 2**32 task
+dispatching points would have to occur to cause an extra Yield to be
+inserted. But that probably would take a long time.
+
+For instance, if a task dispatching point takes 0.1 microsecond to execute,
+then it would take over 400 seconds to execute 2**32 task dispatching
+points. And over 21,000,000 days to execute 2**64 task dispatching points.
+So with a 64-bit counter the possibility can be completely ignored, and even
+with a 32-bit counter, it would take a long time.
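+
+[Editor's note - checking this arithmetic: 2**32 * 0.1 microsecond is about
+429.5 seconds (a bit over 7 minutes), and 2**64 * 0.1 microsecond is about
+1.845E12 seconds, which is roughly 21.35 million days (over 58,000 years).]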
+
+So I don't think we need a special rule for counter wraparound; any problem
+is avoidable. (And there are many things in Ada that get implemented with
+techniques that might fail in a boundary case. We generally don't worry
+about such things.)
+
+****************************************************************
+
+From: Steve Baird
+Sent: Monday, November  4, 2019  6:16 PM
+
+> (And there are many things in Ada that get implemented with techniques 
+> that might fail in a boundary case. We generally don't worry about 
+> such things.)
+
+I don't know what you're referring to; could you give some examples?
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Monday, November  4, 2019  6:27 PM
+
+> ...
+> To recap my main concern: for full OSes with scheduling fully handled
+> by the kernel, the Ada runtime has zero control over scheduling and over
+> when the processor decides to yield, so you cannot implement the current
+> wording ("only yield if not done yet"). So the wording has to be changed,
+> and this clearly shows that only very specific scheduling policies were
+> taken into account (and actually mostly one, the non-preemptive policy).
+
+The above confused me because you were talking informally. In particular, 
+when the "processor decides to yield" is irrelevant; the wording is in terms
+of task dispatching points. Specifically:
+
+   If the Yield aspect has the value True, then a call to procedure Yield 
+   is included within the body of the associated callable entity, and 
+   invoked immediately prior to returning from the body if and only if no 
+   other task dispatching points were encountered during the execution of 
+   the body. 
+
+Note that a call of Yield (and a delay statement) are always a task 
+dispatching point, for any task dispatching policy.
+
+Preemption is generally not a task dispatching point, as it just happens.
+Even for a non-preemptive policy, the entire Ada program might be preempted 
+if run on a target like Windows or Linux. This is irrelevant to the program 
+(only the wall clock would change), and certainly should not change the task
+dispatching count.
+
+OTOH, preemption in the preemptive policy *is* a task dispatching point, and I 
+agree that is not implementable on many targets.
+
+> For other policies, it's not clear to me that we have an exact 1-to-1
+> mapping between the nonblocking detection and the yield detection; has
+> this been discussed and confirmed?
+
+I agree with this - the rules need to be written so these things are 
+essentially the same, both to reduce overhead and for tractability.
+
+One case that I *know* is different is that an I/O call is considered 
+blocking, but certainly is not a task dispatching point. (I don't know of 
+any way to implement Detect_Blocking cheaply for I/O calls; the check I have 
+to use is too expensive to stick into every I/O call, since it requires access
+to the task supervisor context whether or not there is a problem.)
+
+And I'm pretty certain that we want preemption to not be counted for the
+Yield case (it is not counted for Detect_Blocking, since preemption is *not*
+blocking). Otherwise, we would have a difficult problem figuring out how to
+get the kernel to count preemptions.
+
+In general, "task dispatching points" that appear in the source code are the 
+ones that we should be caring about for this aspect. Ones that don't appear in 
+the source code (like preemption) aren't interesting from the POV of analysis, 
+and for independent tasks, such dispatching doesn't even matter since nothing 
+about the "interesting" task (except the wall-clock) is changed by such 
+preemption.
+
+I'm not certain how to define that, but I think that is where we ought to be 
+trying to get to.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, November  4, 2019  6:43 PM
+
+> Somewhat of an aside: I don't think there is any need for allowing 
+> counter wraparound.
+> 
+> One simply uses [in]equality (*not* an ordering operator) to determine if
+> the counter has changed, and thus whether a call to Yield is needed. In
+> that case, wraparound doesn't matter(*); the counter is still different.
+
+That was always the plan.  Use a modular type, and check for equality to the 
+saved value.
+ 
+> (*) Yes, if *exactly* the right number of task dispatching points have 
+> occurred, there is an issue. But that is a very minor issue (an extra 
+> task dispatching point is inserted, which in most circumstances would 
+> not harm analysis). And that is easily rendered irrelevant by using a 
+> large enough counter. Using a 32-bit counter, for instance, means that 
+> exactly 2**32 task dispatching points would have to occur to cause an 
+> extra Yield to be inserted. But that probably would take a long time.
+
+Right, this was the only loop-hole, which can be made vanishingly small.
+ 
+> For instance, if a task dispatching point takes 0.1 microsecond to 
+> execute, then it would take over 400 seconds to execute 2**32 task 
+> dispatching points. And over 21,000,000 days to execute 2**64 task 
+> dispatching points. So with a 64-bit counter the possibility can be 
+> completely ignored, and even with a 32-bit counter, it would take a 
+> long time.
+> 
+> So I don't think we need a special rule for counter wraparound; any 
+> problem is avoidable. (And there are many things in Ada that get 
+> implemented with techniques that might fail in a boundary case. We 
+> generally don't worry about such things.)
+
+I suppose. The loop-hole is extremely small.  On the other hand, one might 
+argue an 8-bit or 16-bit counter would be adequate, in which case the 
+loop-hole gets somewhat bigger, but it is still probably adequately small.  
+I just don't want some implementor to try to make the Herculean effort to
+handle the once-in-a-blue-moon case, so perhaps a "To-be-honest" would be
+in order.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Monday, November  4, 2019  7:00 PM
+
+A To Be Honest or perhaps an Implementation Note would be enough. Definitely 
+want to talk about the possibility and note that it is ignorable if the 
+counter has enough bits. And the reference implementation needs to be outlined 
+in the AARM so that no one wonders how to do it (that's currently missing 
+from D.2.1).
+
+I didn't think 16 bits was enough, and certainly "8 is not enough" (groan for
+fans of old TV shows :-). 256 task dispatching points doesn't seem unusual
+for a large enough job.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Monday, November  4, 2019  7:08 PM
+
+I probably should have used a different word than "many", like "some".
+
+The first one that came to mind was the way the Rational compiler deals with 
+versioning (with a hash, which could collide but probably won't). A second one 
+is the method Janus/Ada will use with the containers to detect dangling 
+cursors (if one inserts more than 2**32 elements in the containers of a single 
+container instance, it might fail to detect some dangling cursors, but this of 
+course is rather unlikely) [I need to get these implemented so I can stop 
+using future tense for this discussion point].
+
+Neither of these are required, however, as other options are available. Not 
+sure that there is any technique that would completely prevent a versioning 
+error (if two versions are sufficiently close in time or contents it still is 
+impossible to differentiate). The containers check is completely optional 
+anyway (using such cursors is technically erroneous).
+
+I'm pretty sure that there are others (I'm certain that there are several 
+others in Janus/Ada), but I couldn't tell you off-hand what they are.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Tuesday, February  4, 2020  12:30 AM
+
+Attached is an attempt to redo AI12-0279-1 to address the concerns raised by 
+Arno last October. [This is version /06 of the AI - Editor.]
+
+I settled on a model based on Tucker's suggestion. For a callable entity that 
+has the Yield aspect True, we guarantee that Yield (the procedure) will be 
+called before the callable entity returns if there were no task dispatching 
+points executed during that subprogram. However, if there *were* task 
+dispatching points, we leave it implementation-defined whether Yield will be 
+called (with the exception of the Non_Preemptive_FIFO_Within_Priorities 
+policy, where it is required to not be called if any task dispatching points
+occurred).
+
+This varies slightly from Tucker's model (which was to unconditionally call 
+Yield for preemptive policies). I did that because a compiler that supports
+multiple policies including a non-preemptive policy is going to have to 
+compile all subprograms with Yield => True as if the policy is non-preemptive 
+(it in general cannot know the policy at compile-time, since the policy can 
+depend on the priority and the priority can be set at runtime). That means 
+that the subprogram will need to save and check a per-task task dispatching 
+point count. Such an implementation probably will want to share some tasking 
+code between the different policies with minimal complication, so we let it 
+use that per-task counter when convenient in other policies (in particular,
+I'd expect it to get used if a delay or Yield occurs in the code).
+
+I debated whether we should mandate that such a counter get used for specific 
+task dispatching points (for instance, the ones that belong to potentially 
+blocking operations) in every policy, but that started to seem like a morass. 
+And it isn't clear if there would be sufficient benefit to that, so I'll 
+leave that to someone else if they think it is important.
+
+I suspect that the wording can be simplified some (the original wording seems
+unnecessarily complex -- why do we talk about a "call" being included, as 
+opposed to just saying that the procedure Yield is invoked just before 
+return?). But I wanted to check if this is the right semantics before bending 
+my brain too much on the wording details.
+
+P.S. I merged the correction of AI12-0294-1 into this AI, and reopened the 
+original AI as this was classified as a Binding Interpretation. It didn't 
+seem to make sense to correct a correction that hasn't been WG 9 Approved.
+
+****************************************************************
