!standard D.02 (01)                                  03-12-03  AI95-00321/05
!standard D.02.01 (01)
!standard D.02.01 (02)
!standard D.02.01 (04)
!standard D.02.01 (05)
!standard D.02.01 (06)
!standard D.02.01 (07)
!standard D.02.01 (08)
!standard D.02.01 (09)
!standard D.02.01 (10)
!standard D.02.02 (00)
!standard D.02.02 (03)
!standard D.02.02 (05)
!standard D.02.02 (07)
!standard D.02.02 (08)
!standard D.02.02 (09)
!standard D.02.02 (10)
!standard D.02.02 (11)
!standard D.02.02 (12)
!standard D.02.02 (13)
!standard D.02.02 (14)
!standard D.02.02 (15)
!standard D.02.02 (16)
!standard D.02.02 (17)
!standard D.02.02 (18)
!standard D.02.02 (19)
!standard D.02.02 (20)
!standard D.02.02 (21)
!standard D.02.03 (01)
!class amendment 03-01-07
!status Amendment 200Y 03-07-02
!status WG9 Approved 03-12-12
!status ARG Approved 13-0-0  03-06-23
!status work item 03-01-07
!status received 03-01-07
!priority Medium
!difficulty Medium
!subject Definition of dispatching policies

!summary

New wording is proposed for paragraphs within D.2.1 and D.2.2 to clarify the intended effect of dispatching points, and to allow more freedom to dispatching policies.

!problem

As it is currently worded the general model for dispatching (as defined in D.2.1) does not permit a truly nonpreemptive dispatching policy. There are a number of required dispatching points that amount to preemption points, i.e., points that do not logically correspond to a decision by the running task to give up the processor.

During the discussion of this issue, it also appeared that the wording of D.2.1 does not make it clear enough that whenever a running task reaches a dispatching point it is conceptually added to one or more of the conceptual dispatching queues, and so is always eligible for consideration to continue execution.

!proposal

The solution is to restructure clauses D.2.1 and D.2.2 into a number of clauses: the basic task dispatching model; the pragma Task_Dispatching_Policy; and the specific policy FIFO_Within_Priorities. In so doing, we separate rules which are common to all task dispatching policies from rules for a specific policy.

For the detailed changes, see the !wording section.

!wording

D.2 Priority Scheduling

This clause describes the rules that determine which task is selected for execution when more than one task is ready (see 9).

D.2.1 The Task Dispatching Model

The task dispatching model specifies task scheduling, based on conceptual priority-ordered ready queues.

Dynamic Semantics

A task can become a running task only if it is ready (see 9) and the execution resources required by that task are available. Processors are allocated to tasks based on each task's active priority.

It is implementation defined whether, on a multiprocessor, a task that is waiting for access to a protected object keeps its processor busy.

Task dispatching is the process by which one ready task is selected for execution on a processor. This selection is done at certain points during the execution of a task called task dispatching points. A task reaches a task dispatching point whenever it becomes blocked, and when it terminates. Other task dispatching points are defined throughout this Annex for specific policies.

Task dispatching policies are specified in terms of conceptual ready queues and task states. A ready queue is an ordered list of ready tasks. The first position in a queue is called the head of the queue, and the last position is called the tail of the queue. A task is ready if it is in a ready queue, or if it is running.
Each processor has one ready queue for each priority value. At any instant, each ready queue of a processor contains exactly the set of tasks of that priority that are ready for execution on that processor, but are not running on any processor; that is, those tasks that are ready, are not running on any processor, and can be executed using that processor and other available resources. A task can be on the ready queues of more than one processor.

Each processor also has one running task, which is the task currently being executed by that processor. Whenever a task running on a processor reaches a task dispatching point it goes back to one or more ready queues; a task (possibly the same task) is then selected to run on that processor. The task selected is the one at the head of the highest priority nonempty ready queue; this task is then removed from all ready queues to which it belongs.

Implementation Permissions

An implementation is allowed to define additional resources as execution resources, and to define the corresponding allocation policies for them. Such resources may have an implementation-defined effect on task dispatching.

An implementation may place implementation-defined restrictions on tasks whose active priority is in the Interrupt_Priority range.

For optimization purposes, an implementation may alter the points at which task dispatching occurs, in an implementation-defined manner. However, a delay_statement always corresponds to at least one task dispatching point.

NOTES

7  Section 9 specifies under which circumstances a task becomes ready. The ready state is affected by the rules for task activation and termination, delay statements, and entry calls. When a task is not ready, it is said to be blocked.

8  An example of a possible implementation-defined execution resource is a page of physical memory, which needs to be loaded with a particular page of virtual memory before a task can continue execution.

9  The ready queues are purely conceptual; there is no requirement that such lists physically exist in an implementation.

10  While a task is running, it is not on any ready queue. Any time the task that is running on a processor is added to a ready queue, a new running task is selected for that processor.

11  In a multiprocessor system, a task can be on the ready queues of more than one processor. At the extreme, if several processors share the same set of ready tasks, the contents of their ready queues is identical, and so they can be viewed as sharing one ready queue, and can be implemented that way. Thus, the dispatching model covers multiprocessors where dispatching is implemented using a single ready queue, as well as those with separate dispatching domains.

12  The priority of a task is determined by rules specified in this subclause, and under D.1, ``Task Priorities'', D.3, ``Priority Ceiling Locking'', and D.5, ``Dynamic Priorities''.

13  The setting of a task's base priority as a result of a call to Set_Priority does not always take effect immediately when Set_Priority is called. The effect of setting the task's base priority is deferred while the affected task performs a protected action.

D.2.2 Pragma Task_Dispatching_Policy

Syntax

The form of a pragma Task_Dispatching_Policy is as follows:

   pragma Task_Dispatching_Policy(policy_identifier);

Legality Rules

The policy_identifier shall either be one defined in this Annex or an implementation-defined identifier.

Post-Compilation Rules

A Task_Dispatching_Policy pragma is a configuration pragma.
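As an illustration only (this is not part of the proposed wording): a partition would select the standard policy with configuration pragmas placed before the first compilation unit of a compilation; the unit name Example below is arbitrary, and the accompanying Locking_Policy pragma reflects the D.2.3 requirement that Ceiling_Locking be specified with FIFO_Within_Priorities.

   pragma Task_Dispatching_Policy (FIFO_Within_Priorities);
   pragma Locking_Policy (Ceiling_Locking);
   --  Ceiling_Locking (see D.3) is required for a partition that
   --  specifies FIFO_Within_Priorities.

   procedure Example is
      --  All tasks in this partition are now dispatched according
      --  to FIFO_Within_Priorities.
   begin
      null;
   end Example;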
Dynamic Semantics

A task dispatching policy specifies the details of task dispatching that are not covered by the basic task dispatching model. These rules govern when tasks are inserted into and deleted from the ready queues, and whether a task is inserted at the head or the tail of the queue for its active priority. The task dispatching policy is specified by a Task_Dispatching_Policy configuration pragma. If no such pragma appears in any of the program units comprising a partition, the task dispatching policy for that partition is unspecified.

Implementation Permissions

Implementations are allowed to define other task dispatching policies, but need not support more than one task dispatching policy per partition.

D.2.3 The Standard Task Dispatching Policy

This clause defines the policy_identifier, FIFO_Within_Priorities.

Post-Compilation Rules

If the FIFO_Within_Priorities policy is specified for a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.

Dynamic Semantics

When FIFO_Within_Priorities is in effect, modifications to the ready queues occur only as follows:

 o When a blocked task becomes ready, it is added at the tail of the ready queue for its active priority.

 o When the active priority of a ready task that is not running changes, or the setting of its base priority takes effect, the task is removed from the ready queue for its old active priority and is added at the tail of the ready queue for its new active priority, except in the case where the active priority is lowered due to the loss of inherited priority, in which case the task is added at the head of the ready queue for its new active priority.

 o When the setting of the base priority of a running task takes effect, the task is added to the tail of the ready queue for its active priority.

 o When a task executes a delay_statement that does not result in blocking, it is added to the tail of the ready queue for its active priority.

Each of the events specified above is a task dispatching point (see D.2.1).

A task dispatching point occurs for the currently running task of a processor whenever there is a nonempty ready queue for that processor with a higher priority than the priority of the running task. The currently running task is said to be preempted and it is added at the head of the ready queue for its active priority.

Documentation Requirements

Priority inversion is the duration for which a task remains at the head of the highest priority nonempty ready queue while the processor executes a lower priority task. The implementation shall document:

 o The maximum priority inversion a user task can experience due to activity of the implementation (on behalf of lower priority tasks), and

 o whether execution of a task can be preempted by the implementation processing of delay expirations for lower priority tasks, and if so, for how long.

NOTES

14  If the active priority of a running task is lowered due to loss of inherited priority (as it is on completion of a protected operation) and there is a ready task of the same active priority that is not running, the running task continues to run (provided that there is no higher priority task).

15  Setting the base priority of a ready task causes the task to move to the tail of the queue for its active priority, regardless of whether the active priority of the task actually changes.

D.2.4 Non-Preemptive Dispatching -- See AI-298.

D.2.5 ...

!discussion

This AI does not intend to make any change to the semantics of Ada 95 features.
The inclusion of 'completion of an accept_statement (see 9.5.2),' in D.2.1(4) was redundant for preemptive policies, and ruled out nonpreemptive policies. It is not the only case in which a task changes priority and/or unblocks another task. It has not been moved to D.2.3, as the overriding rule that a higher priority task always preempts covers this and all similar cases.

Likewise, the inclusion of whenever a task 'becomes ready' was redundant. Dispatching points are defined only for running tasks. Becoming ready requires insertion into one or more ready queues, which invokes the preemption rule D.2.2(13) if the policy is preemptive and the newly ready task has higher priority than a running task for a processor to which the queue belongs.

!example

An example does not seem valuable for this proposal.

!corrigendum D.2(01)

@drepl
This clause describes the rules that determine which task is selected for execution when more than one task is ready (see 9.2). The rules have two parts: the task dispatching model (see D.2.1), and a specific task dispatching policy (see D.2.2).
@dby
This clause describes the rules that determine which task is selected for execution when more than one task is ready (see 9).

!corrigendum D.2.1(01)

@drepl
The task dispatching model specifies preemptive scheduling, based on conceptual priority-ordered ready queues.
@dby
The task dispatching model specifies task scheduling, based on conceptual priority-ordered ready queues.

!corrigendum D.2.1(02)

@drepl
A task runs (that is, it becomes a @i<running task>) only when it is ready (see 9.2) and the execution resources required by that task are available. Processors are allocated to tasks based on each task's active priority.
@dby
A task can become a @i<running task> only if it is ready (see 9) and the execution resources required by that task are available. Processors are allocated to tasks based on each task's active priority.

!corrigendum D.2.1(04)

@drepl
@i<Task dispatching> is the process by which one ready task is selected for execution on a processor. This selection is done at certain points during the execution of a task called @i<task dispatching points>. A task reaches a task dispatching point whenever it becomes blocked, and whenever it becomes ready. In addition, the completion of an @fa<accept_statement> (see 9.5.2), and task termination are task dispatching points for the executing task. Other task dispatching points are defined throughout this Annex.
@dby
@i<Task dispatching> is the process by which one ready task is selected for execution on a processor. This selection is done at certain points during the execution of a task called @i<task dispatching points>. A task reaches a task dispatching point whenever it becomes blocked, and when it terminates. Other task dispatching points are defined throughout this Annex for specific policies.

!corrigendum D.2.1(05)

@drepl
@i<Task dispatching policies> are specified in terms of conceptual @i<ready queues>, task states, and task preemption. A ready queue is an ordered list of ready tasks. The first position in a queue is called the @i<head of the queue>, and the last position is called the @i<tail of the queue>. A task is @i<ready> if it is in a ready queue, or if it is running. Each processor has one ready queue for each priority value. At any instant, each ready queue of a processor contains exactly the set of tasks of that priority that are ready for execution on that processor, but are not running on any processor; that is, those tasks that are ready, are not running on any processor, and can be executed using that processor and other available resources. A task can be on the ready queues of more than one processor.
@dby
@i<Task dispatching policies> are specified in terms of conceptual @i<ready queues> and task states. A ready queue is an ordered list of ready tasks. The first position in a queue is called the @i<head of the queue>, and the last position is called the @i<tail of the queue>. A task is @i<ready> if it is in a ready queue, or if it is running. Each processor has one ready queue for each priority value. At any instant, each ready queue of a processor contains exactly the set of tasks of that priority that are ready for execution on that processor, but are not running on any processor; that is, those tasks that are ready, are not running on any processor, and can be executed using that processor and other available resources. A task can be on the ready queues of more than one processor.

!corrigendum D.2.1(06)

@drepl
Each processor also has one @i<running task>, which is the task currently being executed by that processor. Whenever a task running on a processor reaches a task dispatching point, one task is selected to run on that processor. The task selected is the one at the head of the highest priority nonempty ready queue; this task is then removed from all ready queues to which it belongs.
@dby
Each processor also has one @i<running task>, which is the task currently being executed by that processor. Whenever a task running on a processor reaches a task dispatching point it goes back to one or more ready queues; a task (possibly the same task) is then selected to run on that processor. The task selected is the one at the head of the highest priority nonempty ready queue; this task is then removed from all ready queues to which it belongs.

!corrigendum D.2.1(07)

@ddel
A preemptible resource is a resource that while allocated to one task can be allocated (temporarily) to another instead. Processors are preemptible resources. Access to a protected object (see 9.5.1) is a nonpreemptible resource. When a higher-priority task is dispatched to the processor, and the previously running task is placed on the appropriate ready queue, the latter task is said to be @i<preempted>.

!corrigendum D.2.1(08)

@ddel
A new running task is also selected whenever there is a nonempty ready queue with a higher priority than the priority of the running task, or when the task dispatching policy requires a running task to go back to a ready queue. These are also task dispatching points.

!corrigendum D.2.1(09)

@drepl
An implementation is allowed to define additional resources as execution resources, and to define the corresponding allocation policies for them. Such resources may have an implementation defined effect on task dispatching (see D.2.2).
@dby
An implementation is allowed to define additional resources as execution resources, and to define the corresponding allocation policies for them. Such resources may have an implementation-defined effect on task dispatching.

!corrigendum D.2.1(10)

@dinsa
An implementation may place implementation-defined restrictions on tasks whose active priority is in the Interrupt_Priority range.
@dinst
For optimization purposes, an implementation may alter the points at which task dispatching occurs, in an implementation-defined manner. However, a @fa<delay_statement> always corresponds to at least one task dispatching point.
!corrigendum D.2.1(16)

@dinsa
@xindent<@s9<12 The priority of a task is determined by rules specified in this subclause, and under D.1, ``Task Priorities'', D.3, ``Priority Ceiling Locking'', and D.5, ``Dynamic Priorities''.>>
@dinst
@xindent<@s9<13 The setting of a task's base priority as a result of a call to Set_Priority does not always take effect immediately when Set_Priority is called. The effect of setting the task's base priority is deferred while the affected task performs a protected action.>>

!corrigendum D.02.02(00)

@drepl
The Standard Task Dispatching Policy
@dby
Pragma Task_Dispatching_Policy

!corrigendum D.02.02(03)

@drepl
The @i<policy>_@fa<identifier> shall either be FIFO_Within_Priorities or an implementation-defined @fa<identifier>.
@dby
The @i<policy>_@fa<identifier> shall either be one defined in this Annex or an implementation-defined @fa<identifier>.

!corrigendum D.02.02(05)

@ddel
If the FIFO_Within_Priorities policy is specified for a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.

!corrigendum D.02.02(07)

@ddel
The language defines only one task dispatching policy, FIFO_Within_Priorities; when this policy is in effect, modifications to the ready queues occur only as follows:

!corrigendum D.02.02(08)

@ddel
@xbullet<When a blocked task becomes ready, it is added at the tail of the ready queue for its active priority.>

!corrigendum D.02.02(09)

@ddel
@xbullet<When the active priority of a ready task that is not running changes, or the setting of its base priority takes effect, the task is removed from the ready queue for its old active priority and is added at the tail of the ready queue for its new active priority, except in the case where the active priority is lowered due to the loss of inherited priority, in which case the task is added at the head of the ready queue for its new active priority.>

!corrigendum D.02.02(10)

@ddel
@xbullet<When the setting of the base priority of a running task takes effect, the task is added to the tail of the ready queue for its active priority.>

!corrigendum D.02.02(11)

@ddel
@xbullet<When a task executes a @fa<delay_statement> that does not result in blocking, it is added to the tail of the ready queue for its active priority.>

!corrigendum D.02.02(12)

@ddel
Each of the events specified above is a task dispatching point (see D.2.1).

!corrigendum D.02.02(13)

@ddel
In addition, when a task is preempted, it is added at the head of the ready queue for its active priority.

!corrigendum D.02.02(14)

@ddel
@i<Priority inversion> is the duration for which a task remains at the head of the highest priority ready queue while the processor executes a lower priority task. The implementation shall document:

!corrigendum D.02.02(15)

@ddel
@xbullet<The maximum priority inversion a user task can experience due to activity of the implementation (on behalf of lower priority tasks), and>

!corrigendum D.02.02(16)

@ddel
@xbullet<whether execution of a task can be preempted by the implementation processing of delay expirations for lower priority tasks, and if so, for how long.>

!corrigendum D.02.02(17)

@drepl
Implementations are allowed to define other task dispatching policies, but need not support more than one such policy per partition.
@dby
Implementations are allowed to define other task dispatching policies, but need not support more than one task dispatching policy per partition.

!corrigendum D.02.02(18)

@ddel
For optimization purposes, an implementation may alter the points at which task dispatching occurs, in an implementation defined manner. However, a @i<delay_statement> always corresponds to at least one task dispatching point.

!corrigendum D.02.02(19)

@ddel
@xindent<@s9<13 If the active priority of a running task is lowered due to loss of inherited priority (as it is on completion of a protected operation) and there is a ready task of the same active priority that is not running, the running task continues to run (provided that there is no higher priority task).>>

!corrigendum D.02.02(20)

@ddel
@xindent<@s9<14 The setting of a task's base priority as a result of a call to Set_Priority does not always take effect immediately when Set_Priority is called. The effect of setting the task's base priority is deferred while the affected task performs a protected action.>>

!corrigendum D.02.02(21)

@ddel
@xindent<@s9<15 Setting the base priority of a ready task causes the task to move to the end of the queue for its active priority, regardless of whether the active priority of the task actually changes.>>

!corrigendum D.02.03(01)

@dinsc
This clause defines the policy_identifier, FIFO_Within_Priorities.
@i<@s8<Post-Compilation Rules>>

If the FIFO_Within_Priorities policy is specified for a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.

@i<@s8<Dynamic Semantics>>

When FIFO_Within_Priorities is in effect, modifications to the ready queues occur only as follows:

@xbullet<When a blocked task becomes ready, it is added at the tail of the ready queue for its active priority.>

@xbullet<When the active priority of a ready task that is not running changes, or the setting of its base priority takes effect, the task is removed from the ready queue for its old active priority and is added at the tail of the ready queue for its new active priority, except in the case where the active priority is lowered due to the loss of inherited priority, in which case the task is added at the head of the ready queue for its new active priority.>

@xbullet<When the setting of the base priority of a running task takes effect, the task is added to the tail of the ready queue for its active priority.>

@xbullet<When a task executes a @fa<delay_statement> that does not result in blocking, it is added to the tail of the ready queue for its active priority.>

Each of the events specified above is a task dispatching point (see D.2.1).

A task dispatching point occurs for the currently running task of a processor whenever there is a nonempty ready queue for that processor with a higher priority than the priority of the running task. The currently running task is said to be preempted and it is added at the head of the ready queue for its active priority.

@i<@s8<Documentation Requirements>>

@i<Priority inversion> is the duration for which a task remains at the head of the highest priority nonempty ready queue while the processor executes a lower priority task. The implementation shall document:

@xbullet<The maximum priority inversion a user task can experience due to activity of the implementation (on behalf of lower priority tasks), and>

@xbullet<whether execution of a task can be preempted by the implementation processing of delay expirations for lower priority tasks, and if so, for how long.>

@xindent<@s9<14 If the active priority of a running task is lowered due to loss of inherited priority (as it is on completion of a protected operation) and there is a ready task of the same active priority that is not running, the running task continues to run (provided that there is no higher priority task).>>

@xindent<@s9<15 Setting the base priority of a ready task causes the task to move to the tail of the queue for its active priority, regardless of whether the active priority of the task actually changes.>>

!ACATS test

Since this AI is rearranging the text of the standard, but not (intentionally) changing the semantics, no test is needed.

!appendix

From: Randy Brukardt
Sent: Wednesday, January 15, 2003  6:55 PM

Alan:

A comment on "AINew", which I've assigned number AI-321. The current D.2.1(6) says:

"Each processor also has one running task, which is the task currently being executed by that processor. Whenever a task running on a processor reaches a task dispatching point, one task is selected to run on that processor. The task selected is the one at the head of the highest priority nonempty ready queue; this task is then removed from all ready queues to which it belongs."

This text implies round-robin scheduling at task dispatching points. The new AI, however, is written assuming that run-until-blocked scheduling is used. For instance, the new D.2.1(6) says:

"If the current running task has a priority greater or equal to the task at the head of the highest priority non-empty ready queue it continues to be the running task; otherwise the task at the head of the highest priority non-empty ready queue is selected, this task is then removed from all ready queues to which it belongs."

Moreover, the justification for removing "the end of an accept statement" as an explicit task dispatching point is "It is not the only case in which a task changes priority and/or unblocks another task. It has not been moved to D.2.2 as the overriding rule of higher priority task always preempting covers this and all similar cases." Clearly, however, it does have an effect if the scheduling is round-robin within the current priority.

My own understanding has been that the dispatching policy within a priority is not specified. That's necessary if Ada is to run on standard operating systems, which typically use some sort of time slicing for threads of the same priority. Saying that compilers for such targets are not Annex D compliant (or forcing the use of impractical modes to make them compliant, such as the version of GNAT that had to run as root) is not helpful.

In any case, this wording is supposed to be inclusive, not exclusive -- it is defining the general task dispatching policy, not any specific one.
To write the wording such that a round-robin or time-slicing policy is wrong (as an implementation of FIFO_within_priorities) is just not going to fly -- it would be very incompatible with existing practice.

As a practical matter, we've decided that we have to allow validation for Annex D when running on operating systems like Windows and VxWorks which do not exactly meet the letter of rules for task dispatching and queuing. That's because Ada is not in a position of strength here; we cannot say that Ada is incompatible with Windows, since that simply will cause people to abandon Ada, not Windows. We should not add more rules that are impossible to implement on popular platforms.

****************************************************************

From: Alan Burns
Sent: Thursday, January 16, 2003  11:55 AM

Some points arising from Randy's note on new AI - AI-321.

My motivation is to remove rules from D.2.1 not add some; hence new D.2.2 has the strong rules for FIFO_Within_Priority, and other policies can be defined.

I did not realise that current D.2.1(6) implies round-robin scheduling. First, I don't think it does (see next point). But D.2.1 should not imply any scheme (and should not disallow either I agree). I have attempted to remove the non-essential dispatching points so that D.2.1 only has the rules that must apply for all Ada programs.

As the AI says, D.2.1(6) is broken; strictly it requires a lower priority task to take over a higher one at a task dispatching point if there is no other task at the higher level (as the current running task is not on a ready queue). No implementation (on whatever OS) should do this.

The need to allow round robin implementation is facilitated by:

>Replace D.2.1(8) with:
>A task reaches a task dispatching point whenever the task
>dispatching policy requires a running task to go back to a ready queue.

An implementation's `policy' can send a task to the back of its ready queue whenever it wishes. There will actually be a specific round_robin dispatching policy coming forward for consideration by ARG soon!

I have not assumed run-until-blocked scheduling, because of above, but "If the current running task has a priority greater or equal to the task at the head of the highest priority non-empty ready queue .." could have `or equal' removed if that would help (although other changes would have to be made).

****************************************************************

From: Ted Baker
Sent: Thursday, January 16, 2003  2:58 PM

You are misreading the current wording of D.2.1. The words you quoted below were taken, nearly verbatim, from the POSIX realtime scheduling standard. They do *not* imply round-robin scheduling.

Yes, the running task is nominally removed from the ready queue(s) while it runs, and then put back onto the ready queue whenever it is not running. However, when it is put back onto the ready queue there is no requirement that it be put back into the queue in round-robin fashion. Depending on the dispatching policy, the task may be placed into the ready queue at the head, at the tail, or any other place the policy dictates.

Please do not "fix" something that is not broken. If you do, you will actually break it.

All of the words in the real-time annex that address scheduling were written based on the above model, as is the POSIX model. If you start changing the model you will have many unintended consequences, and end up not being compatible with systems that use POSIX threads.
****************************************************************

From: Randy Brukardt
Sent: Thursday, January 16, 2003  3:19 PM

> ...
> Please do not "fix" something that is not broken. If you do,
> you will actually break it.

*I* don't want to "fix" this; I want to make sure that needed corrections don't accidentally make existing scheduling wrong.

Alan has proposed the changes in AI-321 for two reasons:

-- To allow non-preemptive scheduling policies (which isn't allowed by the current D.2.1);

-- To fix the wording glitch which does not allow the running task to continue at a task dispatching point. (And I certainly agree with Alan that that is what the current wording says). It's unlikely that many, if any, implementations actually do this.

All of which he explains in the AI that he sent last week.

****************************************************************

From: Robert Dewar
Sent: Friday, January 17, 2003  9:40 PM

> I did not realise that current D.2.1(6) implies round-robin scheduling.
> First, I don't think it does (see next point). But D.2.1 should
> not imply any scheme (and should not disallow either I agree). I have
> attempted to remove the non-essential dispatching points so that
> D.2.1 only has the rules that must apply for all Ada programs.

While we are at it, should we be so inflexible for the default dispatching priority? It is somewhat annoying that you have to have a switch to change the default behavior of the OS (e.g. in Windows) to ensure absolute priorities. Yes, you can take the viewpoint that Annex D is optional, but in fact all the priority stuff maps fine into the default Windows behavior. I am not sure we need to be so specific on the default dispatching method.

****************************************************************

From: Robert Dewar
Sent: Sunday, January 19, 2003  7:23 AM

> > As a practical matter, we've decided that we have to allow validation for
> > Annex D when running on operating systems like Windows and VxWorks which do
> > not exactly meet the letter of rules for task dispatching and queuing. That's
> > because Ada is not in a position of strength here; we cannot say that Ada is
> > incompatible with Windows, since that simply will cause people to abandon
> > Ada, not Windows. We should not add more rules that are impossible to
> > implement on popular platforms.

What does this mean exactly? That you are allowed to fail existing ACATS tests on such targets? If so, I object.

Certainly it is possible to pass all existing Annex D tests on Windows. On the other hand, it would certainly be possible to strengthen the tests so that this was not the case (e.g. really strenuously test that there are 32 independent priority levels).

****************************************************************

From: Randy Brukardt
Sent: Wednesday, January 22, 2003  6:33 PM

It means that the situation I inherited is unchanged: there are no "gold stars" for annex validation, you usually cannot fail a validation because of a failure in an annex, the grade can be "unsupported" rather than "failed" for annex tests (although you have to get permission). Of course, vendors can omit testing for any annex (as a whole), and most do.

That was decided before I took over conformity assessment, and certainly I haven't changed it (or wanted to re-argue it, for that matter). There certainly are implementations whose test reports show "unsupported" tests in the various annexes where the tests were run. These are test failures, in general.
I don't believe that we've ever denied a petition for an unsupported grade (I know we've discussed some of these on the FRT list).

Randy Brukardt
ACAA Technical Agent

****************************************************************

From: Dan Eilers
Sent: Wednesday, January 22, 2003  7:05 PM

I don't wish to re-argue this policy either, but I would point out that it seems inconsistent with ISO/IEC 18009 section 8.2.1.1, which states that Ada defines Specialized Needs Annexes to which a processor may conform individually, and Ada conformity testing shall assess conformity to these individual Specialized Needs Annexes.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, January 22, 2003  7:55 PM

Nothing in ISO/IEC 18009 defines how test grades are determined, or precisely what a pass or fail situation is. That's left to the ACAP and ACATS. And certainly, by running the tests, an ACAL is assessing conformity. So I don't see any inconsistency here.

P.S. This is a bit off topic here; it would be better discussed on the ACAA mailing list.

****************************************************************

From: Robert Dewar
Sent: Wednesday, January 22, 2003  8:46 PM

> That was decided before I took over conformity assessment, and certainly I
> haven't changed it (or wanted to re-argue it, for that matter). There
> certainly are implementations whose test reports show "unsupported" tests in
> the various annexes where the tests were run. These are test failures, in
> general. I don't believe that we've ever denied a petition for an
> unsupported grade (I know we've discussed some of these on the FRT list).

I really think it is bad to consider a failed test to be a matter of lack of support. We considered this during the language design, and agreed that there was definitely not a license to do arbitrary wrong things when trying to run annex tests.

****************************************************************

From: Randy Brukardt
Sent: Wednesday, January 22, 2003  8:56 PM

It's done on a case-by-case basis, of course, so the implementor has to have some rationale for it. But, as I said, I believe we've granted every request that has been made. (And by "we", I mean the FRT, not me alone.)

****************************************************************

From: Robert Dewar
Sent: Wednesday, January 22, 2003  9:02 PM

OK, that's fine, that means that implementors have been reasonable, which is what you would expect. I thought you were stating this as a policy, which would be something else entirely.

****************************************************************

From: Robert Dewar
Sent: Sunday, January 19, 2003  7:27 AM

> Alan has proposed the changes in AI-321 for two reasons:
> -- To allow non-preemptive scheduling policies (which isn't allowed by the
> current D.2.1);

I think this change is important, it really seems for example that we should be able to allow the default scheduling policy of Win XP as an *optional* scheduling policy without considering that we are violating Annex D.

Right now with GNAT, we have to run the whole test suite with a switch that sets FIFO_Within_Priorities, which works fine, but means that the validation is less useful, since it is forced to run with a rather peculiar non-standard switch. Of course we pass all the tests that have the explicit pragma Dispatching_Policy; that's not the issue. The issue is that there are tests without this pragma that still require strict adherence to the preemptive priority model of Annex D.
> -- To fix the wording glitch which does not allow the running task to
> continue at a task dispatching point. (And I certainly agree with Alan that
> that is what the current wording says). It's unlikely that many, if any,
> implementations actually do this.

Well, that of course is just a typographical error (it obviously is not intended, and a compiler that felt bound by this wording error would be just as incorrect as an original Ada 83 compiler that thought that all subtypes were non-static due to a wording error).

****************************************************************

From: Ted Baker
Sent: Friday, January 24, 2003  7:35 AM

Alan has been so good as to provide me with a copy of his proposed AI-321. While I appreciate and endorse the idea of adding new dispatching policies to the standard, including nonpreemptive ones, I am not happy with this proposal.

| As it is currently worded the general model for dispatching (as
| defined in D.2.1) does not define the required behaviour.
| Specifically the current running task is not considered as a
| candidate for the next running task as it is not on a ready queue.

The above certainly was not the intent when we wrote that section of the RM. Looking now at the words, I think I see why you are having problems with them. It comes from the effect of multiple authors and incremental changes, specifically the addition of task dispatching points without the corresponding specification of where the running task gets inserted into the ready queue at the dispatching point.

The core model was derived from the POSIX real-time scheduling model, and intended to allow an implementation of that model to also comply with the Ada standard. Therefore, in resolving the problem it may help to look at the POSIX standard. The June 2001 draft of the POSIX/ISO/Open Group ("Austin Group") combined standard (sorry, I don't have the final document on line) says:

| "A conforming implementation shall select the thread that is
| defined as being at the head of the highest priority non-empty
| thread list to become a running process, regardless of its
| associated policy. This thread is then removed from its thread
| list."

Compare the Ada 95 RM words:

| "Whenever a task running on a processor reaches a task
| dispatching point, one task is selected to run on that
| processor. The task selected is the one at the head of the
| highest priority nonempty ready queue; this task is then removed
| from all ready queues to which it belongs."

The intent in both cases was that whenever a scheduling decision is made (a dispatching point), the currently executing thread (task) is first returned to one of the conceptual thread (task) lists (queues). The scheduling (dispatching) policies determine when scheduling decisions are made, and the positions where threads (tasks) are inserted into the lists (queues). If you look at D.2.2 you will see that for each of the dispatching points there is a specification of where the running task is inserted into a dispatching queue.

The right solution is to clarify the model, i.e., that every time the running task reaches a dispatching point, it is inserted into one (or more) ready queues before the dispatching decision is made.

Besides the above issue, Alan's proposal wants to move some of the requirements for dispatching points, including the end of an accept statement, from the general requirements to specific policies. Since we only have one named policy at this point, this should have no impact on existing practice.
Therefore, I propose the following revision of the AI. I have left in the movement of dispatching point specifications, since I see no immediate reason to object to them. However, I hope we all think hard about whether these changes make sense from the point of view of a real-time application.

--Ted Baker

! summary

New wording is proposed for paragraphs within D.2.1 and D.2.2 to clarify the intended effect of dispatching points, and to allow more freedom to dispatching policies.

! problem

As it is currently worded the general model for dispatching (as defined in D.2.1) does not permit a truly nonpreemptive dispatching policy. There are a number of required dispatching points that amount to preemption points, i.e., points that do not logically correspond to a decision by the running task to give up the processor.

During the discussion of this issue, it also appeared that the wording of D.2.1 does not make it clear enough that whenever a running task reaches a dispatching point it is conceptually added to one or more of the conceptual dispatching queues, and so is always eligible for consideration to continue execution.

! proposal

Some wording changes are proposed for the annex. Those deal with the problem defined above. Wording changes are also proposed to facilitate the definition of other dispatching policies in the Annex.

! wording

Collapse the third and fourth sentences of D.2.1(4) by the following:

A task reaches a dispatching point whenever it becomes blocked, and when the task terminates.

Change the title of D.2.2 to "Task Dispatching Policies".

Replace D.2.1(8) with:

Whenever a task reaches a task dispatching point it goes back to a (possibly more than one) ready queue.

Replace D.2.2(3) with:

The policy-identifier shall be FIFO_Within_Priority, an alternative policy from D.14 or an implementation-defined identifier.

Replace D.2.2 (7):

When FIFO_Within_Priorities is in effect, modifications to the ready queues occur only as follows:

Add after D.2.2(13):

A task dispatching point occurs for the currently running task of a processor whenever there is a non-empty ready queue for that processor with a higher priority than the priority of the running task.

! discussion

The inclusion of 'completion of an accept_statement (see 9.5.2),' in D.2.1(4) was redundant for preemptive policies, and ruled out nonpreemptive policies. It is not the only case in which a task changes priority and/or unblocks another task. It has not been moved to D.2.2 as the overriding rule of higher priority task always preempting covers this and all similar cases.

Likewise, the inclusion of whenever a task 'becomes ready' was redundant. Dispatching points are defined only for running tasks. Becoming ready requires insertion into one or more ready queues, which invokes the preemption rule D.2.2(13) if the policy is preemptive and the newly ready task has higher priority than a running task for a processor to which the queue belongs.

| Ted,

| I was about to email you to ask you to make time
| to look at this. Some background. During discussion at ARG
| on dispatching policies, in particular, a new non-preemptive
| one (but also others) it became clear (to ARG) that it was
| not straightforward as some rules in D.2.1 (the general model)
| were too specific and should really be in D.2.2 (the particular
| dispatching policy). One example was requiring an immediate
| switch to high pri process when it is released. I was charged
| by ARG to do some moving between D.2.1 and D.2.2 to fix this.
| But when doing so I notice, and others agreed, that D.2.1 does
| not actually say what we all assumed it did. At a task dispatching
| point the highest priority task on a ready queue is chosen to
| run next, but the wording is clear that the current running task
| is not on a ready queue (it had previously been removed, when it
| is made runnable, and is only put back when another task is chosen
| to execute). Hence logically current task must give up processor
| even if it is the highest pri.

| So AI-321 aims to fix both of the above problems, without breaking
| the rest of the standard (I hope). Your observations would be
| useful. Note IRTAW is talking about a few new dispatching policies,
| including non-preemptive, round robin, EDF and combinations of
| these - so some wording changes are just to allow more than one
| policy to be defined in the standard.

| cheers - here is the AI, the changes are really quite small

[Editor's note: This was version /01 of the AI, not repeated here.]

****************************************************************

From: Alan Burns
Sent: Tuesday, January 28, 2003  3:55 AM

I am happy with Ted's alternative method of putting D.2.1 right. A couple of minor points:

1. The last sentence in D.2.1(7) is not quite accurate as it talks about a single ready queue. With Ted's words for D.2.1(8), which correctly note that the task may go back to more than one queue, it may be best to delete this last sentence of D.2.1(7) as a definition of preemption is not really needed.

2. I would not change the title of D.2.2; FIFO_Within_Priority will still be THE standard policy, others will be in a new D.14.

****************************************************************