!standard D.02.6 (01)          05-03-13 AI95-00357/09
!class amendment 03-09-27
!status Amendment 200Y 04-12-02
!status ARG Approved 10-0-0 04-11-20
!status work item 03-09-27
!status received 03-09-27
!priority Medium
!difficulty Medium
!subject Support for Deadlines and Earliest Deadline First Scheduling
!summary
Direct support for deadlines is defined and a new dispatching policy for Earliest Deadline First (EDF) Scheduling is supported.
!problem
Ada does not give any direct support for deadlines (although they are the most important notion in real-time systems). A standard way of representing deadlines via a predefined attribute of a task is needed.
Once deadlines are supported it is possible to define a dispatching policy for EDF scheduling. EDF is as common an approach as fixed priority scheduling for real-time systems, and it has the advantage that higher levels of resource utilization are possible.
For many scheduling schemes, including for example EDF, the most effective locking policy for protected objects is one known as the Stack Resource Policy (or preemption level locking). This was defined by Baker in 1991 [1] and has the advantage that it does not force task dispatching and PO locking to use the same policy, yet it still leads to an effective implementation of POs. Moreover, when priority dispatching is used, the SRP policy is the same as ceiling locking.
!proposal
To give support for EDF scheduling with the necessary protocol for sharing protected objects requires the following:
1. a means of representing a task's (absolute) deadline
2. a means of assigning a preemption level to a task
3. a means of assigning a preemption level to a PO
The basic rule for preemption is that a newly released task T will preempt the currently running task S if:
1) the deadline of T is before that of S, and
2) the preemption level of T is higher than the preemption level of any locked PO.
If preemption levels are assigned (by the programmer) according to the relative deadlines of the tasks (a shorter relative deadline giving a higher preemption level), then a tight upper bound on blocking, equivalent to that of priority ceiling inheritance, is obtained.
In this proposal
a) deadlines are represented by a new task 'attribute'
b) preemption levels for tasks are represented by base priorities
c) preemption levels for POs are represented by ceiling priorities
The usual behaviour of priorities, when tasks execute within a PO, is followed. However, the active priority of a task when executing outside a PO may be lower than its base priority (see details below).
A new pragma is provided to allow a task to set a (non-default) relative deadline to control its activation:
   pragma Relative_Deadline(expression);
where the expected type of expression is Real_Time.Time_Span.
The pragma can only occur in the specification part of a task.
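As an illustration only (the unit and task names are invented, and compiling this assumes an implementation of this amendment), a task would typically be given both a base priority, which acts as its preemption level, and a relative deadline:

   with Ada.Real_Time;
   procedure Filter_Demo is

      task Sensor_Filter is
         pragma Priority (3);  -- base priority, used as the preemption level
         pragma Relative_Deadline (Ada.Real_Time.Milliseconds (20));
      end Sensor_Filter;

      task body Sensor_Filter is
      begin
         null;  -- application work for each release would go here
      end Sensor_Filter;

   begin
      null;
   end Filter_Demo;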
To support EDF scheduling a new Priority_Policy identifier is defined: EDF_Across_Priorities.
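As a sketch only (the priority band 1 .. 10 is invented, and the second form relies on the Priority_Specific_Dispatching pragma of AI-355 referred to in the wording below), the policy would be selected by configuration pragmas, with Ceiling_Locking also in force:

   pragma Locking_Policy (Ceiling_Locking);
   --  Either EDF dispatching for the whole partition:
   pragma Task_Dispatching_Policy (EDF_Across_Priorities);
   --  or EDF dispatching within a single band of priorities only:
   --  pragma Priority_Specific_Dispatching (EDF_Across_Priorities, 1, 10);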
When the Priority policy EDF_Across_Priorities is in effect the following rules apply. Let Base(T) be the base priority of task T and Deadline(T) its absolute deadline.
Rule 1 All ready queues in the specified range of System.Any_Priority are ordered by deadline. For two tasks S and T on the same ready queue, Deadline(S) < Deadline(T) implies S is closer to the head of the queue than T.
Rule 2 When a task T becomes unblocked it is placed on the highest priority ready queue R such that:
o a protected object with ceiling priority equal to that of R is currently locked;
o Deadline(T) < the deadline of the task locking this protected object; and
o Base(T) > the priority level of R.
If no such R exists then T is added to the ready queue for Any_Priority'first.
Rule 3 When a task is chosen for execution it runs with the active priority determined by the ready queue from which the task was taken. If preempted it returns to the ready queue for its active priority. If it inherits a higher active priority it will return to its original active priority when it no longer inherits the higher level.
Rule 4 When a task executes a delay_statement that does not result in blocking it is added to the ready queue for its active priority.
Rule 5 Priority ceiling level Priority'first is not allowed.
!wording
D.2.6 Earliest Deadline First Dispatching
The deadline of a task is an indication of the urgency of the task; it represents a point on an ideal physical time line. Unless otherwise specified, whenever tasks compete for processors or other implementation-defined resources, the resources are allocated to the task with the earliest deadline.
This clause defines a package for representing a task's deadline and a dispatching policy that defines Earliest Deadline First (EDF) dispatching. A pragma is defined to assign an initial deadline to a task.
AARM Note: This pragma is the only way of assigning an initial deadline to a task so that its activation can be controlled by EDF scheduling. This is similar to the way pragma Priority is used to give an initial priority to a task.
Syntax
The form of a pragma Relative_Deadline is as follows:
pragma Relative_Deadline (relative_deadline_expression);
Name Resolution Rules
The expected type for relative_deadline_expression is Real_Time.Time_Span.
Legality Rules
A Relative_Deadline pragma is allowed only immediately within a task_definition or the declarative_part of a subprogram_body. At most one such pragma shall appear within a given construct.
Static Semantics
The policy_identifier EDF_Across_Priorities is a task dispatching policy.
The following language-defined library package exists:
with Ada.Real_Time;
with Ada.Task_Identification;
package Ada.Dispatching.EDF is
   subtype Deadline is Ada.Real_Time.Time;
   Default_Deadline : constant Deadline := Ada.Real_Time.Time_Last;
   procedure Set_Deadline
     (D : in Deadline;
      T : in Ada.Task_Identification.Task_ID :=
            Ada.Task_Identification.Current_Task);
   procedure Delay_Until_And_Set_Deadline
     (Delay_Until_Time : in Ada.Real_Time.Time;
      Deadline_Offset  : in Ada.Real_Time.Time_Span);
   function Get_Deadline
     (T : in Ada.Task_Identification.Task_ID :=
           Ada.Task_Identification.Current_Task)
      return Deadline;
end Ada.Dispatching.EDF;
Post-Compilation Rules
If the EDF_Across_Priorities policy is specified for a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.
If the EDF_Across_Priorities policy appears in a Priority_Specific_Dispatching pragma (see D.2.2) in a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.
[Note: The above assumes AI-355 is included in the Amendment.]
AARM Reason: The priority model of EDF assumes that ceiling locking is used; it wouldn't make sense otherwise. Other scheduling policies are not so tightly bound to the locking policy - they still can make sense for an alternative policy.
Dynamic Semantics
A Relative_Deadline pragma has no effect if it occurs in the declarative_part of the subprogram_body of a subprogram other than the main subprogram.
The initial absolute deadline of a task containing pragma Relative_Deadline is the value of Real_Time.Clock + relative_deadline_expression, where the call of Real_Time.Clock is made between task creation and the start of its activation. If there is no Relative_Deadline pragma then the initial absolute deadline of a task is the value of Default_Deadline. The environment task is also given an initial deadline by this rule.
The procedure Set_Deadline changes the absolute deadline of the task to D. The function Get_Deadline returns the absolute deadline of the task.
The procedure Delay_Until_And_Set_Deadline delays the calling task until time Delay_Until_Time. When the task becomes runnable again it will have deadline Delay_Until_Time + Deadline_Offset.
On a system with a single processor, the setting of a task's deadline to the new value occurs immediately at the first point that is outside the execution of an abort-deferred operation. If the task is currently on a ready queue it is removed and re-entered on to the ready queue determined by the rules defined below.
When EDF_Across_Priorities is specified for priority range Low..High all ready queues in this range are ordered by deadline. The task at the head of a queue is the one with the earliest deadline.
A task dispatching point occurs for the currently running task T to which policy EDF_Across_Priorities applies whenever:
o a change to the deadline of T occurs; or
o a decrease to the deadline of any task on a ready queue for
that processor occurs and the new deadline is earlier than that of the running task; or
o there is a non-empty ready queue for that processor
with a higher priority than the priority of the running task.
In these cases, the currently running task is said to be preempted and is returned to the ready queue for its active priority.
Whenever a task T to which policy EDF_Across_Priorities applies is added to a ready queue, other than when it is preempted, it is placed on the ready queue with the highest priority P, if one exists, such that:
o a task is executing within a protected object with ceiling
priority P; and
o task T has an earlier deadline than any task executing within
a protected object with ceiling priority P; and
o the base priority of task T is greater than P.
If no such ready queue exists the task is added to the ready queue for the lowest priority in the range specified as EDF_Across_Priorities.
When the setting of the base priority of a task takes effect and the new priority is in the range specified as EDF_Across_Priorities, the task is added to the ready queue.
When a task is chosen for execution it runs with the active priority of the ready queue from which the task was taken. If it inherits a higher active priority it will return to its original active priority when it no longer inherits the higher level.
For all the operations defined in this package, Tasking_Error is raised if the task identified by T has terminated. Program_Error is raised if the value of T is Null_Task_ID.
Bounded (Run-Time) Errors
If EDF_Across_Priorities is specified for priority range Low..High, it is a bounded error to declare a protected object with ceiling priority Low or to assign the value Low to attribute 'Priority. In either case either Program_Error is raised or the ceiling of the protected object is assigned the value Low+1.
Erroneous Execution
If a value of Task_ID is passed as a parameter to any of the subprograms of this package and the corresponding task object no longer exists, the execution of the program is erroneous.
Documentation Requirements
On a multiprocessor, the implementation shall document any conditions that cause the completion of the setting of a task's deadline to be delayed later than what is specified for a single processor.
Notes
If two adjacent priority ranges, A..B and B+1..C, are specified to have policy EDF_Across_Priorities, then this is not equivalent to this policy being specified for the single range A..C.
The above rules implement the preemption-level protocol (also called Stack Resource Policy protocol) for resource sharing under EDF dispatching. The preemption-level for a task is denoted by its base priority. The definition of a ceiling preemption-level for a protected object follows the existing rules for ceiling locking.
AARM Note: An implementation may support additional dispatching policies by replacing absolute deadline with an alternative measure of urgency.
!discussion
The addition of procedure Delay_Until_And_Set_Deadline reflects a common need to change the deadline and then delay; a periodic task, for example, would do this. As the change of deadline (extending it) is likely to cause a context switch (with a later switch to put the task on the delay queue), it is more efficient for the run-time to perform the delay and the deadline change as a single operation.
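A minimal sketch of this usage pattern (the unit, task and constant names are invented; the operations are those of Ada.Dispatching.EDF as defined in the wording above):

   with Ada.Real_Time;       use Ada.Real_Time;
   with Ada.Dispatching.EDF;
   procedure Periodic_Demo is

      Period : constant Time_Span := Milliseconds (50);

      task Poller is
         pragma Priority (2);                          -- preemption level
         pragma Relative_Deadline (Milliseconds (50)); -- first deadline
      end Poller;

      task body Poller is
         Next_Release : Time := Clock;
      begin
         loop
            --  ... do the work for this release ...
            Next_Release := Next_Release + Period;
            --  Suspend until the next release and, as a single operation,
            --  give that release the deadline Next_Release + Period.
            Ada.Dispatching.EDF.Delay_Until_And_Set_Deadline
              (Delay_Until_Time => Next_Release,
               Deadline_Offset  => Period);
         end loop;
      end Poller;

   begin
      null;
   end Periodic_Demo;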
Here we show that the above rules do give the required behaviour for EDF scheduling and preemption level locking (without the need for a new locking policy, i.e. just use of ceiling locking).
First, if no POs are used or locked, then all tasks are always placed on the ready queue for level Priority'first (which is of course ordered by deadline).
The preemption level rule is that a newly executing task (T) should preempt the currently running task (S) if it has a higher preemption level than any task that has locked a PO and a shorter deadline than S. There are a few cases to look at.
If S has locked the PO then T will be placed on the same ready queue as S; it has a shorter deadline and hence will execute first.
If some other task has locked the PO then S must have preempted it, and hence S will again be on the ready queue for this level; T is added to the same queue and, having a shorter deadline, will preempt S.
If there are a number of locked POs then T needs to be placed on the correct ready queue. On all ready queues, apart from that at Priority'first, the tail of the queue is the task that has the lock on the PO whose ceiling priority is that of the ready queue. Rule 2 then gets the right queue: the queue is appropriate as T has the correct attributes (shorter deadline than the task holding the lock, and higher preemption level). Moreover no higher priority queue is appropriate as one of the conditions will be false.
Note that in order to be placed on a ready queue, T must have a strictly higher preemption level (base priority) than the task on that queue that is holding the lock. Hence mutual exclusion is not broken by the preemption rules.
--
During discussions an alternative way of expressing this model was considered. This did not need the 'no POs at level Priority'first' rule. But it did require that tasks be placed at one above the level in the above model, and required the rule that a task preempted while executing in a PO must always stay at the front of its ready queue even if its deadline is later than the next task on the queue.
!corrigendum D.2.6(1)
Insert new clause:
The deadline of a task is an indication of the urgency of the task; it represents a point on an ideal physical time line. Unless otherwise specified, whenever tasks compete for processors or other implementation-defined resources, the resources are allocated to the task with the earliest deadline.
This clause defines a package for representing a task's deadline and a dispatching policy that defines Earliest Deadline First (EDF) dispatching. A pragma is defined to assign an initial deadline to a task.
Syntax
The form of a pragma Relative_Deadline is as follows:
pragma Relative_Deadline (relative_deadline_expression);
Name Resolution Rules
The expected type for relative_deadline_expression is Real_Time.Time_Span.
Legality Rules
A Relative_Deadline pragma is allowed only immediately within a task_definition or the declarative_part of a subprogram_body. At most one such pragma shall appear within a given construct.
Static Semantics
The policy_identifier EDF_Across_Priorities is a task dispatching policy.
The following language-defined library package exists:
with Ada.Real_Time;
with Ada.Task_Identification;
package Ada.Dispatching.EDF is
   subtype Deadline is Ada.Real_Time.Time;
   Default_Deadline : constant Deadline := Ada.Real_Time.Time_Last;
   procedure Set_Deadline
     (D : in Deadline;
      T : in Ada.Task_Identification.Task_ID :=
            Ada.Task_Identification.Current_Task);
   procedure Delay_Until_And_Set_Deadline
     (Delay_Until_Time : in Ada.Real_Time.Time;
      Deadline_Offset  : in Ada.Real_Time.Time_Span);
   function Get_Deadline
     (T : in Ada.Task_Identification.Task_ID :=
           Ada.Task_Identification.Current_Task)
      return Deadline;
end Ada.Dispatching.EDF;
Post-Compilation Rules
If the EDF_Across_Priorities policy is specified for a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.
If the EDF_Across_Priorities policy appears in a Priority_Specific_Dispatching pragma (see D.2.2) in a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.
Dynamic Semantics
A Relative_Deadline pragma has no effect if it occurs in the declarative_part of the subprogram_body of a subprogram other than the main subprogram.
The initial absolute deadline of a task containing pragma Relative_Deadline is the value of Real_Time.Clock + relative_deadline_expression, where the call of Real_Time.Clock is made between task creation and the start of its activation. If there is no Relative_Deadline pragma then the initial absolute deadline of a task is the value of Default_Deadline. The environment task is also given an initial deadline by this rule.
The procedure Set_Deadline changes the absolute deadline of the task to D. The function Get_Deadline returns the absolute deadline of the task.
The procedure Delay_Until_And_Set_Deadline delays the calling task until time Delay_Until_Time. When the task becomes runnable again it will have deadline Delay_Until_Time + Deadline_Offset.
On a system with a single processor, the setting of a task's deadline to the new value occurs immediately at the first point that is outside the execution of an abort-deferred operation. If the task is currently on a ready queue it is removed and re-entered on to the ready queue determined by the rules defined below.
When EDF_Across_Priorities is specified for priority range Low..High all ready queues in this range are ordered by deadline. The task at the head of a queue is the one with the earliest deadline.
A task dispatching point occurs for the currently running task T to which policy EDF_Across_Priorities applies whenever:
o a change to the deadline of T occurs; or
o a decrease to the deadline of any task on a ready queue for that processor occurs and the new deadline is earlier than that of the running task; or
o there is a non-empty ready queue for that processor with a higher priority than the priority of the running task.
In these cases, the currently running task is said to be preempted and is returned to the ready queue for its active priority.
Whenever a task T to which policy EDF_Across_Priorities applies is added to a ready queue, other than when it is preempted, it is placed on the ready queue with the highest priority P, if one exists, such that:
o a task is executing within a protected object with ceiling priority P; and
o task T has an earlier deadline than any task executing within a protected object with ceiling priority P; and
o the base priority of task T is greater than P.
If no such ready queue exists the task is added to the ready queue for the lowest priority in the range specified as EDF_Across_Priorities.
When the setting of the base priority of a task takes effect and the new priority is in the range specified as EDF_Across_Priorities, the task is added to the ready queue.
When a task is chosen for execution it runs with the active priority of the ready queue from which the task was taken. If it inherits a higher active priority it will return to its original active priority when it no longer inherits the higher level.
For all the operations defined in this package, Tasking_Error is raised if the task identified by T has terminated. Program_Error is raised if the value of T is Null_Task_ID.
Bounded (Run-Time) Errors
If EDF_Across_Priorities is specified for priority range Low..High, it is a bounded error to declare a protected object with ceiling priority Low or to assign the value Low to attribute 'Priority. In either case either Program_Error is raised or the ceiling of the protected object is assigned the value Low+1.
Erroneous Execution
If a value of Task_ID is passed as a parameter to any of the subprograms of this package and the corresponding task object no longer exists, the execution of the program is erroneous.
Documentation Requirements
On a multiprocessor, the implementation shall document any conditions that cause the completion of the setting of a task's deadline to be delayed later than what is specified for a single processor.
NOTES
16 If two adjacent priority ranges, A..B and B+1..C, are specified to have policy EDF_Across_Priorities, then this is not equivalent to this policy being specified for the single range A..C.
17 The above rules implement the preemption-level protocol (also called Stack Resource Policy protocol) for resource sharing under EDF dispatching. The preemption-level for a task is denoted by its base priority. The definition of a ceiling preemption-level for a protected object follows the existing rules for ceiling locking.
!ACATS test
Tests should be created to check on the implementation of this feature.
!appendix

Reference
[1] Baker, T.P., "Stack-Based Scheduling of Real-Time Processes",
    The Journal of Real-Time Systems, Vol. 3, No. 1, pp. 67-99, March 1991.

****************************************************************

From: Alan Burns
Sent: Monday, October 20, 2003  4:27 AM

Re EDF scheduling and preemption levels.

One suggestion from the last ARG meeting was that the need for preemption
levels could be catered for by using a block of priority levels for a single
scheme such as EDF - the different values in the block been used for
preemption levels (of tasks and POs).

Remember the problem we are trying to solve is the control of the
amount of interaction that occurs between a set of tasks and POs
at the same priority level.

My reading of the basic dispatching model (ie new D2.1) is that the use
of a block of priority levels will not work. It is clear for the basic model
that at any dispatching point the highest (non-empty) ready queue will
provide the task to execute next. It will not be easy to define the EDF
dispatching scheme (or any other implementation defined scheme) in
such a way that a task can be preempted by a newly released task
with a lower base priority. Using active priority will not work as this
would put the task's priority too high for its use of POs. One would
need to introduce a new priority notion (in addition to base and active,
say original) that seems too large a change.

The alternative seems more straightforward. Define a new pragma that
works with Ceiling_Locking to allow tasks and POs to define a preemption
level. This has minimal impact on existing words and notions.

I would like to get some feedback on this before the next meeting so that
I can work on the details and wording.

****************************************************************

From: Tucker Taft
Sent: Monday, October 20, 2003  9:14 AM

> My reading of the basic dispatching model (ie new D2.1) is that the use
> of a block of priority levels will not work. It is clear for the basic model
> that at any dispatching point the highest (non-empty) ready queue will
> provide the task to execute next. It will not be easy to define the EDF
> dispatching scheme (or any other implementation defined scheme) in
> such a way that a task can be preempted by a newly released task
> with a lower base priority. Using active priority will not work as this
> would put the task's priority too high for its use of POs. One would
> need to introduce a new priority notion (in addition to base and active,
> say original) that seems too large a change.

Suppose the "active priority" of a task in such a group is
always the lowest priority level of the group *except* when it
is inside a PO, in which case it takes on the ceiling level of the
PO.  In addition, the PO would check against the max of the
active priority and the base priority for ceiling violations.
I believe this could work.  Whether it is preferable to introducing
preemption levels is worthy of debate...

****************************************************************

From: Alan Burns
Sent: Monday, October 20, 2003  9:48 AM

> Suppose the "active priority" of a task in such a group is
> always the lowest priority level of the group *except* when it
> is inside a PO, in which case it takes on the ceiling level of the
> PO.  In addition, the PO would check against the max of the
> active priority and the base priority for ceiling violations.
> I believe this could work.  Whether it is preferable to introducing
> preemption levels is worthy of debate...

No, I'm not sure this works. Suppose the lowest pri is 3
T1 and PO have priority of 4
T2 has priority of 5

T1 executes (at level 3) and calls PO (now executing at level 4)
T2 is released, but its active priority is 3 so
does not preempt. But we want T2 to be able to
preempt T1 as preemption level ensures T2 does not use PO
(and deadline of T2 may be shorter than T1's, for example).

****************************************************************

From: Stephen Michell
Sent: Monday, October 20, 2003  10:06 AM

I have significant concerns about the mix-and-match approach that Alan
was suggesting for this AI (357) and those for round-robin, value-based,
etc.

Basically, the IRTAW was proposing that any given priority level could
take on a scheduling paradigm, and that tasks would inherit this
paradigm when they entered that priority level. So for example
priority 1 -> round robin
priority 2 -> FIFO
priority 3 -> round robin
priority 4 -> EDF
priority 5 -> round robin

My concerns:
Round Robin
Round-robin is a paradigm that is used for what are considered
non-realtime tasks. While I don't have much problem with the notion of a
set of low priority tasks working in a round-robin paradigm while higher
priority realtime tasks work with FIFO priorities, the notion that the
base priority of such non-realtime tasks could be changed and that they
would acquire realtime properties seems strange.

EDF
Earliest deadline first is a paradigm which really replaces priority as
a scheduling mechanism. As a task's scheduled completion time approaches
and the task has not been scheduled, its urgency relative to all other
tasks increases until eventually it might become the most urgent task to
run.

EDF analysis depends upon statically knowing the scheduling rate and
execution time of each task in the system. These are static properties
for each task which are worked out a priori, and assume that you are
analysing 100% of all tasks in the system. They don't support the notion
that there may be other tasks at different priority levels which are
consuming resources which are not accounted in the analysis.

In particular, the current Ada model is a dynamic priority model. If you
permit tasks to be added to or removed from this model dynamically, then
your analysis goes out the window. Similarly, if a lower priority task's
priority is increased beyond the priority used for EDF, its performance
characteristics will interfere with those in the EDF regime.
- One could theoretically have high priority tasks (higher than the EDF
regime) and account for their resources by subtracting out the total
consumed by higher priority activities (say 10%), but if the load is
unbalanced it could wreck the analysis.


So, how far should we go?
IMHO, I don't see a problem with round-robin mixed with FIFO, as long as
the round-robin tasks are the lowest priorities in the system. We could
possibly keep dynamic priorities if we create a rule that PROGRAM_ERROR
or TASKING_ERROR is raised if a task's (or a PO if we adopt that AI)
base priority is changed from a round-robin level <-> non-round-robin
level. We could also add a legality rule that the highest round robin
priority level be below the lowest FIFO priority level.

I also don't see a problem with EDF, as long as it is the only paradigm
in use - i.e. all tasks participate in EDF. This means that there would
be no round-robin or FIFO tasks in an EDF system.

Value-based scheduling - I'm not sure. I think that I'll stay quiet on
that for now.

Anyway, my thoughts for now. I'm interested to hear what others think.

****************************************************************

From: Alan Burns
Sent: Monday, October 20, 2003  11:17 AM

It was unfortunate that you had to leave the IRTAW before it
got started as most of the points you raise were dealt with
at length. I cannot possibly replay a day and a half of discussion
but I'll give some response. Remember the IRTAW was fully
supportive of the AIs I have produced.

...
> My concerns:
> Round Robin
> Round-robin is a paradigm that is used for what are considered
> non-realtime tasks. While I don't have much problem with the notion of a
> set of low priority tasks working in a round-robin paradigm while higher
> priority realtime tasks work with FIFO priorities, the notion that the
> base priority of such non-realtime tasks could be changed and that they
> would acquire realtime properties seems strange.

RR is not just used at low levels; I mentioned this at the
ARG meeting and it is discussed in my IRTAW paper that you have.


> EDF
> Earliest deadline first is a paradigm which really replaces priority as
> a scheduling mechanism. As a task's scheduled completion time approaches
> and the task has not been scheduled, its urgency relative to all other
> tasks increases until eventually it might become the most urgent task to
> run.

What is proposed allows EDF to live on its own (if all task priorities
are given the same EDF level) AND mixed systems to be programmed.

I can assure you that EDF analysis can accommodate some capacity
being stolen by higher priority tasks. Also one safe implementation
scheme for EDF is to allow hard (critical) tasks to jump to
fixed pri levels only when they would otherwise miss their deadlines.
This fault tolerance approach has been discussed in a number of papers,
I can give you the references.

Also analysis is not undertaken on all systems; people want to
use EDF as it is the most efficient in its use of CPU resources.
For example best-effort aperiodic scheduling under a fixed pri scheme.

...
> In particular, the current Ada model is a dynamic priority model. If you
> permit tasks to be added to or removed from this model dynamically, then
> your analysis goes out the window. Similarly, if a lower priority task's
> priority is increased beyond the priority used for EDF, it's performance
> characteristics will interfere with those in the EDF regime.
> - One could theoretically have high priority tasks (higher than the EDF
> regime) and account for their resources by subtracting out the total
> consumed by higher priority activities (say 10%), but if the load is
> unbalanced it could wreck the analysis.

I disagree with the pessimistic tone of the above, of course if
you have a very dynamic system it is hard (perhaps impossible)
to analyse - but that does not mean mixed systems cannot be
analysed.

> So, how far should we go?
> IMHO, I don't see a problem with round-robin mixed with FIFO, as long as
> the round-robin tasks are the lowest priorities in the system. We could
> possibly keep dynamic priorities if we create a rule that PROGRAM_ERROR
> or TASKING_ERROR is raised if a task's (or a PO if we adopt that AI)
> base priority is changed form a round-robin level <-> non-round-robin
> level. We could also add a legality rule that the highest round robin
> priority level be below the lowest FIFO priority level.
>
> I also don't see a problem with EDF, as long as it is the only paradigm
> in use - i.e. all tasks participate in EDF. This means that there would
> be no round-robin or FIFO tasks in an EDF system.
>

I disagree, and the workshop disagreed as well.

****************************************************************

From: Tucker Taft
Sent: Monday, October 20, 2003  1:19 PM

> T1 executes (at level 3) and calls PO (now execting at level 4)
> T2 is released, but its active priority is 3 so
> does not preempt. But we want T2 to be able to
> preempt T1 as preemption level ensures T2 does not use PO
> (and deadline of T2 may be shorter than T1's, for example).

Yes, I was confused.  How about the following:

Outside a PO, the active priority of a task in the group and
its location within the corresponding queue shifts according
to the scheduling policy (e.g. EDF), within a range
bounded below by the lowest priority of the group and above by the
specified priority of the task.  When entering a PO, it takes on
the ceiling priority of the PO as usual.

In other words, a task's active priority can be lowered below its specified
priority and/or moved later in a ready queue by the priority-specific
scheduling algorithm, unless it is inside a PO.

I think that would work...

I am pretty sure that this is not an uncommon way for such
scheduling algorithms to work.  That is, the priority is lowered
as appropriate to turn the fundamentally priority-based scheduling
into a more sophisticated algorithm.  However, when holding a lock,
the priority cannot be lowered.

Another way to think of it is that the specified priority is
the *ceiling* priority for the task when not inside any PO.

****************************************************************

From: Tucker Taft
Sent: Monday, October 20, 2003  1:57 PM

P.S. A legitimate question is why am I trying so hard
to eliminate preemption levels.  I really think they
add substantially to the conceptual complexity of the
scheduling model.  *If* we can come up with an equivalence
that requires only a single notion of "specified" priority for POs
and tasks, I think it will be a big advantage for users.

I believe that it is significantly more complex to juggle both
specified priority and specified preemption level, worry (eventually)
about dynamic setting of both as part of mode changes, figure out how
to pass both through task and protected object discriminants, etc.

So, that's why I hope you will continue this dialogue about
developing an equivalence.  Of course internally the scheduler
doesn't really have to raise and lower the active priority,
but can rather use whatever sort of data structure it wants
to implement EDF or whatever priority (range) specific scheduling
it wants.  But it is awfully nice to keep a single unified priority
scheme as the basic model at the language description level, and I
would hate to lose that.

****************************************************************

From: Tucker Taft
Sent: Monday, October 20, 2003  2:00 PM

P.P.S. And of course the *real* reason is that I can't bear
adding another field to our protected-object descriptor record
which has to be checked on every lock operation (conceptual
model be-damned) ;-).

****************************************************************

From: Alan Burns
Sent: Tuesday, October 21, 2003  5:08 AM

I am happy to keep trying to get this definition right.
I share you view that the simpler the model we can find the
better. If we can do without a new concept then great.

>
> Yes, I was confused.  How about the following:
>
> Outside a PO, the active priority of a task in the group and
> its location within the corresponding queue shifts according
> to the scheduling policy (e.g. EDF), within a range
> bounded below by the lowest priority of the group and above by the
> specified priority of the task.  When entering a PO, it takes on
> the ceiling priority of the PO as usual.
>
> In other words, a task's active priority can be lowered below its
> specified
> priority and/or moved later in a ready queue by the priority-specific
> scheduling algorithm, unless it is inside a PO.
>

I think this still has the same problem (if I understand your
model correctly). But I have an alternative (later). With
your model, again look at a set of tasks with base value 3
T1 base priority 4
PO has base priority 4 (as it is called by T1)
T2 has base priority 6 (indicating it does not call PO)
T3 has base priority 5

Lets say the ready queue (as base level 3 is ordered by
EDF)

If T2 is released while no POs are locked it takes its place
in the ready queue for level 3 and may run immediately if
it has shortest deadline. It will run with priority 3.

However if T1 has locked PO when T2 released then T2 needs to
run with priority 6 (but only if its deadline is less than T1s).
Now T3 could be released and it cannot preempt even if its
priority is less than T2s.

There are other examples that show problems - they all come
about because the tasks run with the ceiling value. Preemption
levels do not have this problem as they do not control dispatching.
They only give rules for when a task can execute: eg
T3 can preempt T2 if deadline is less and preemption level of
T2 is higher than that of any locked PO (ie 4).

Alternative
Take a band of priorities.
1) run all tasks at top pri (ie active priority always top)
2) round up all ceilings to top (as allowed by definition of
ceilings) - hence no program_errors
3) use base pri's of tasks and defined ceilings of POs to
enforce/define dispatching rules

So for EDF it would be a dispatching point if
head of ready queue have shorter deadline than current running
task AND base pri of head was greater than that of any locked
PO's (all within the range of the policy).

How does that sound?

****************************************************************

From: Tucker Taft
Sent: Tuesday, October 21, 2003  7:52 AM

> Lets say the ready queue (as base level 3 is ordered by
> EDF)

I presume that the EDF scheduler would have control over
all the ready queues in the range it manages (say 3-6).
It could raise and lower the active priority of the tasks
under its control as it sees fit, but with a restriction
that it cannot raise their active priority above their
specified priority, except of course that when a task
acquires a PO lock, its active priority is boosted to that
of the PO.

> If T2 is released while no POs are locked it take its place
> in the ready queue for level 3 and may run immediately if
> it has shortest deadline. It will run with priority 3.

I don't know why you are saying it will run with priority 3.
It could run with any priority from 3 to 6, as determined by
the EDF scheduler.  Its priority could change while running
as well.  At the implementation level, it would seem simplest
to have a single EDF queue for all the tasks, and generally
ignore the priority except when deciding whether to start
a given task.  When a task is at the head of the EDF queue,
it would be run if its deadline is earlier than the current
running task and, if the running task is inside a PO,
its priority is higher than that of the PO.

> However if T1 has locked PO when T2 released then T2 needs to
> run with priority 6 (but only if its deadline is less than T1s).

As suggested above, if T2's deadline is earlier than T1, then it
would run because its priority is higher than that of the PO
T1 has locked.

> Now T3 could be released and it cannot preempt even if its
> priority is less than T2s.
>
> There are other examples that show problems - they all come
> about because the tasks run with the ceiling value. Preemption
> levels do not have this problem as they do not control dispatching.
> They only give rules for when a task can execute: eg
> T3 can preempt T2 if deadline is less and preemption level of
> T2 is higher than that of any locked PO (ie 4).

I don't follow the above, because I suspect we are making different
assumptions.

> Alternative
> Take a band of priorities.
> 1) run all tasks at top pri (ie active priority always top)
> 2) round up all ceilings to top (as allowed by definition of
> ceilings) - hence no program_errors
> 3) use base pri's of tasks and defined ceilings of POs to
> enforce/define dispatching rules
>
> So for EDF it would be a dispatching point if
> head of ready queue have shorter deadline than current running
> task AND base pri of head was greater than that of any locked
> PO's (all within the range of the policy).

Yes, we have the same rule in mind.  I presumed the EDF scheduler
would essentially ignore the priority in its basic deadline
queue, except to check it at the moment of release to be sure
it is higher than any locked PO.

> How does that sound?

Very much what I had in mind, but you certainly expressed it better  ;-).

****************************************************************

From: Alan Burns
Sent: Tuesday, October 21, 2003  9:38 AM

We seem to be converging

My problem with 'your' model is being able to specify
easily the required behaviour

>> I think this still has the same problem (if I understand your
>> model correctly). But I have an alternative (later). With
>> your model, again look at a set of tasks with base value 3
>> T1 base priority 4
>> PO has base priority 4 (as it is called by T1)
>> T2 has base priority 6 (indicating it does not call PO)
>> T3 has base priority 5
>>
>> Lets say the ready queue (as base level 3 is ordered by
>> EDF)
>
> I presume that the EDF scheduler would have control over
> all the ready queues in the range it manages (say 3-6).
> It could raise and lower the active priority of the tasks
> under its control as it sees fit, but with a restriction
> that it cannot raise their active priority above their
> specified priority, except of course that when a task
> acquires a PO lock, its active priority is boosted to that
> of the PO.
>
>> If T2 is released while no POs are locked it take its place
>> in the ready queue for level 3 and may run immediately if
>> it has shortest deadline. It will run with priority 3.
>
> I don't know why you are saying it will run with priority 3.
> It could run with any priority from 3 to 6, as determined by
> the EDF scheduler.  Its priority could change while running
> as well.  At the implementation level, it would seem simplest
> to have a single EDF queue for all the tasks, and generally
> ignore the priority except when deciding whether to start
> a given task.  When a task is at the head of the EDF queue,
> it would be run if its deadline is earlier than the current
> running task and, if the running task is inside a PO,
> its priority is higher than that of the PO.

agreed required behaviour could be achieved but is difficult to
define when some fixed points (the PO priorities) are given
and have to be followed, for example with the following

>> However if T1 has locked PO when T2 released then T2 needs to
>> run with priority 6 (but only if its deadline is less than T1s).
>
> As suggested above, if T2's deadline is earlier than T1, then it
> would run because its priority is higher than that of the PO
> T1 has locked.
>
>> Now T3 could be released and it cannot preempt even if its
>> priority is less than T2s.
>
> I don't follow the above, because I suspect we are making different
> assumptions.

T2 should preempt T1 (which has priority 4), its base priority
is 6  - does it get 5 or 6 or 7 ...? (I agree at one level it does not matter)
Then T3 should preempt T2 - again a higher priority must be chosen.
In the end a range of priorities above the highest ceiling must
be reserved for the EDF range (perhaps doubling the range
of priorities needed).

I agree the model does not dictate implementation, but the above
seems hard to define. Each task must be on some ready queue when
it runs - the rules must say which.

>> Alternative
>> Take a band of priorities.
>> 1) run all tasks at top pri (ie active priority always top)
>> 2) round up all ceilings to top (as allowed by definition of
>> ceilings) - hence no program_errors
>> 3) use base pri's of tasks and defined ceilings of POs to
>> enforce/define dispatching rules
>>
>> So for EDF it would be a dispatching point if
>> head of ready queue have shorter deadline than current running
>> task AND base pri of head was greater than that of any locked
>> PO's (all within the range of the policy).
>
> Yes, we have the same rule in mind.  I presumed the EDF scheduler
> would essentially ignore the priority in its basic deadline
> queue, except to check it at the moment of release to be sure
> it is higher than any locked PO.
>
>> How does that sound?
>
> Very much what I had in mind, but you certainly expressed it better  ;-).

Perhaps just my way of thinking, but this seems easier to
express in ARG (LRM) speak.

have you a strong view either way?

****************************************************************

From: Tucker Taft
Sent: Tuesday, October 21, 2003 11:26 AM

> >> Alternative
> >> Take a band of priorities.
> >> 1) run all tasks at top pri (ie active priority always top)
> >> 2) round up all ceilings to top (as allowed by definition of
> >> ceilings) - hence no program_errors
> >> 3) use base pri's of tasks and defined ceilings of POs to
> >> enforce/define dispatching rules
> >>
> >> So for EDF it would be a dispatching point if
> >> head of ready queue have shorter deadline than current running
> >> task AND base pri of head was greater than that of any locked
> >> PO's (all within the range of the policy).

Wouldn't we want to allow a dispatching point if *any* task
on the ready queue satisfied these two requirements, even if
the "head" task didn't?  I presume you would just skip over
the head task(s) with priorities too low.

Another way to imagine it is that there are still queues at each
priority level (ordered by scheduler determined "urgency", e.g. deadline),
but a task is only "ready" if it has a greater urgency than all tasks on
lower priority ready queues, but ignoring queues at priorities
at the level of a locked PO or below.

In other words, when a task's "urgency" drops below a ready task of a lower,
eligible-to-run priority, it becomes non-ready.

> > Yes, we have the same rule in mind.  I presumed the EDF scheduler
> > would essentially ignore the priority in its basic deadline
> > queue, except to check it at the moment of release to be sure
> > it is higher than any locked PO.
> >
> >> How does that sound?
> >
> > Very much what I had in mind, but you certainly expressed it better  ;-).
> >
>
> Perhaps just my way of thinking, but this seems easier to
> express in ARG (LRM) speak.
>
> have you a strong view either way?

I am somewhat worried about rounding up all the ceilings, and also
it seems that if you want to skip over the "head" of the queue
(which I claim you will want to do), then you are violating the basic
scheduling model.  Or perhaps that is solved by stating that the queue
at the top priority is primarily ordered by deadline, but tasks whose
specified priority blocks them due to a locked PO are given an
artificially late deadline, or are simply treated as non-ready.

But at that point, perhaps the model I suggested above is just
as good, and doesn't require rounding up the PO ceilings.

I also presume we want to describe this model in a way that is not
EDF-specific (hence my introduction of the notion of
"scheduler-determined urgency").  Or is this model just for the EDF case?

****************************************************************

From: Alan Burns
Sent: Thursday, October 23, 2003  5:08 AM

> I am somewhat worried about rounding up all the ceilings, and also
> it seems that if you want to skip over the "head" of the queue
> (which I claim you will want to do), then you are violating the basic
> scheduling model.  Or perhaps that is solved by stating that the queue
> at the top priority is primarily ordered by deadline, but tasks whose
> specified priority blocks them due to a locked PO are given an
> artificially late deadline, or are simply treated as non-ready.

I agree that skipping over the head of a ready queue (even if
ordered by deadline) would be most unfortunate. I'll reread the
papers and see if you are right (certainly you seem to be).

> But at that point, perhaps the model I suggested above is just
> as good, and doesn't require rounding up the PO ceilings.

But my point about having to reserve lots of priority levels
above top ceiling is still a concern. If tasks are to execute
with priorities other than base or ceiling of locked objects
then perhaps it is best to just say only one task (in the
group) is ready to execute and it is the one with shortest
deadline and sufficiently high preemption level.

> I also presume we want to describe this model in a way that is not
> EDF-specific (hence my introduction of the notion of
> "scheduler-determined urgency").  Or is this model just for the
> EDF case?

Both! I want to define an EDF scheme but in a way that it
would be easy to use another parameter as the definition
of urgency

I'll get back to you next week

****************************************************************

From: Alan Burns
Sent: Tuesday, October 28, 2003  9:33 AM

Working with your model I think the following works.
It may seem a bit complicated but I think this is
needed to be able to ensure that only the head of
any ready queue needs to be looked at. It also
uses PO ceilings in the usual way (tasks in a PO
run at ceiling level), and it does not need priorities
above those used by the tasks and POs to be reserved.

Assume range of priorities for EDF (or any other
scheme using this approach) is B..T; also
restrict tasks and PO priorities to be in range
B+1 .. T. See why in a moment.

Let S denote the priority of the highest priority currently
locked PO (S is dynamic). If no PO is locked let S equal B.

Rule 1
All ready queues (in B..T range) are ordered by deadline.

Rule 2
A task, T, can only become ready if base pri of T > S.

Rule 3
When a task becomes ready it is added to the ready queue
for pri level S and this is a dispatching point. (So if new task
has shorter deadline than current running task it will
preempt it.)

Rule 4
When S drops from level P1 to P2, all tasks on the ready
queue for P1 are transferred to P2.

--
This last rule may sound strange, and would not necessarily be the way
to implement this, but is needed in the model (I think).

Note the only tasks that would move are those that were released
while S had value P1, had deadline greater then the task that locked
the object that made S equal P1, and had base pri > S. If S drops
to B (ie no objects locked) all tasks will be in the B ready queue
in deadline order.

To see why the rule is required, one needs a quite
complicated scenario involving two or more POs. For example

T1 - base priority 3
T2 - base priority 4
T3 - base priority 5
T4 - base priority 6

PO1 used by T1 and T2 - ceiling pri 4
PO2 used by T4 - ceiling 6

S is 1 initially
T1 released (3>1), calls PO1 (S now equal 4, active pri of T1
  is now 4)
T4 released (6>4) and is added to ready queue for priority
  level 4, assume it has shorter deadline, so it preempts T1,
  and executes with active pri 4; it now calls PO2 (it executes
  at active pri 6 and S now 6)
T3 arrives but cannot be made ready even though it has even
  shorter deadline (5 not > 6)
T4 returns from PO2 (S now equal 4). T4 is added to ready queue
  for 4, as is T3 that can now be made ready. The ready queue at level
  4 has the right structure (T3 head then T4 then T1) and allows T3
  to execute first (shortest deadline) then T4, then T1
When T1 exits PO1 (S falls to 1) any tasks that were made ready
  (because they had high enough base pri) but have a longer deadline
  than T1, will move from the ready queue at 4 to the ready queue at 1
--

Easy to see that a very simple scheme where it is acceptable
for all PO calls to be, in effect, non-preemptive can easily
be programmed by giving 2 priority levels for EDF (say 7 and
8), set all task priorities to 8 (hence all ceilings are 8)
then ready queue at 7 will be ordered by deadline. All
tasks can be made ready immediately unless a task calls a
PO; in which case it will execute at pri 8 (S will be 8)
and no preemption will occur.

If you see nothing immediately wrong with this I'll write a longer
note to the IRTAW group and get their assessment.

****************************************************************

From: Tucker Taft
Sent: Tuesday, October 28, 2003 11:04 AM

You didn't comment specifically on my last proposal.
It seems a little simpler than this new model:

stt> Another way to imagine it is that there are still queues at each
stt> priority level (ordered by scheduler determined "urgency",
stt> e.g. deadline),
stt> but a task is only "ready" if it has a greater urgency than
stt> all tasks on
stt> lower priority ready queues, but ignoring queues at priorities
stt> at the level of a locked PO or below.
stt>
stt> In other words, when a task's "urgency" drops below a
stt> ready task of a lower,
stt> eligible-to-run priority, it becomes non-ready.

I'll try to work through your example with this model
and see if it accomplishes the same thing (see below).
I'll call mine the "model-TT" in what follows ;-).

> Assume range of priorities for EDF (or any other
> scheme using this approach) is B..T; also
> restrict tasks and PO priorities to be in range
> B+1 .. T. See why in a moment.

I don't think model-TT requires such a restriction.

> Let S denote the priority of the highest priority currently
> locked PO (S is dynamic). If no PO is locked let S equal B.

In model-TT, I'll let S be B-1 if no PO is locked in range B..T.

>
> Rule 1
> All ready queues (in B..T range) are ordered by deadline.

Agreed.

>
> Rule 2
> A task, T, can only become ready if base pri of T > S.

The model-TT rule would be: a task T can become ready only if
T'prio <= S or its "urgency" is greater than the head of the
queues at levels S+1..T'Prio-1.

>
> Rule 3
> When a task becomes ready it is added to the ready queue
> for pri level S and this is a dispatching point. (So if new task
> has shorter deadline than current running task it will
> preempt it.)

When a task becomes ready (that is, it is released and its
urgency is greater than tasks at levels S+1..T'Prio-1)
it is added to the ready queue for its base priority level.
This is a dispatching point if
it is more "urgent" than the running task and its prio is > S.

> Rule 4
> When S drops from level P1 to P2, all tasks on the ready
> queue for P1 are transferred to P2.

When S drops from level P1 to P2, rule 2 and 3 are rechecked:
if a queued task whose prio is > S is now the most urgent, it preempts.
Any tasks that are less urgent than the heads of queues in
S+1..T'Prio-1 are made non-ready.

>
> --
> This last rule may sound strange, and would not necessarily be the way
> to implement this, but is needed in the model (I think).
>
> Note the only tasks that would move are those that were released
> while S had value P1, had deadline greater then the task that locked
> the object that made S equal P1, and had base pri > S. If S drops
> to B (ie no objects locked) all tasks will be in the B ready queue
> in deadline order.
>
> To see why the rule is required, one needs a quite
> complicated scenario involving two or more POs. For example
>
> T1 - base priority 3
> T2 - base priority 4
> T3 - base priority 5
> T4 - base priority 6
>
> PO1 used by T1 and T2 - ceiling pri 4
> PO2 used by T4 - ceiling 6
>
> S is 1 initially
> T1 released (3>1), calls PO1 (S now equal 4, active pri of T1
>   is now 4)
> T4 released (6>4) and is added to ready queue for priority
>   level 4, assume it has shorter deadline, so it preempts T1,
>   and executes with active pri 4; it now calls PO2 (it executes
>   at active pri 6 and S now 6)
> T3 arrives but cannot be made ready even though it has even
>   shorter deadline (5 not > 6)
> T4 returns from PO2 (S now equal 4). T4 is added to ready queue
>   for 4, as is T3 that can now be made ready. The ready q at level
>   4 has the right structure (T3 head then T4 then T1) and allows T3
>   to execute first (shortest deadline) then T4, then T1
> When T1 exists PO1 (S falls to 1) any tasks that were made ready
>   (because they had high enough base pri) but longer deadline
>   that T1, will move from ready q at 4 to ready q at 1

Model-TT equivalent scenario:

S is 1 initially
T1 released (3>1), calls PO1 (S now equal 4, active pri of T1
   is now 4)
T4 released (6>4) and is added to ready queue for priority
   level 6 (nothing at prio 5 to worry about),
   assume it has shorter deadline, so it preempts T1,
   and executes with active pri 6; it now calls PO2 (it continues
   to execute at active pri 6 and S now 6).
   T1 is placed on head of queue at level 4.
T3 arrives and is added to prio 5 queue (ordered by deadline),
   since all tasks with prio <= S are allowed on their queues,
   but are ignored because of the locked PO.
T4 returns from PO2 (S now equal 4). T3 preempts T4 since
   it is now the most urgent task at prio > S.
   (T1 is still in a preempted state).
   T4 is added to ready queue for level 6 presuming there is
   no other task at level 5, or it is more urgent than the head of
   the level 5 queue.
   When T3 completes, T4 resumes.  When T4 completes, T1 resumes.
When T1 exits PO1 (S falls to 1) rule 2 is rechecked, which may
   cause some of the tasks on queues 2..6 to be made non-ready
   if they are less urgent than lower priority tasks (which are now
   eligible to run since S is 1).


> --
>
> Easy to see that a very simple scheme where it is acceptable
> for all PO calls to be, in effect, non-preemptive can easily
> be programmed by giving 2 priority levels for EDF (say 7 and
> 8), set all tasks priorities to 8 (hence all ceiling are 8)
> then ready queue at 7 will be ordered by deadline. All
> tasks can be made ready immediately unless a task calls a
> PO; in which case it will execute at pri 8 (S will be 8)
> and no preemption will occur.
>
> If you see nothing immediately wrong with this I'll write a longer
> note to the IRTAW group and get their assessment.

Could you comment on model-TT?  It seems that by
making tasks of high priority but low urgency non-ready,
and by making the readying of a task a dispatching
point only if it is more urgent than the running task,
it satisfies the requirements of the priority model,
while primarily following the "urgency" (e.g. deadline)
ordering.

I don't think either of the models correspond to the
best way to implement things, but my sense is that model-TT
might be simpler to describe, and has the virtue that a
task is queued at its active priority, rather than bouncing
around to different queues that have nothing to do with
its priority.  The "trick" is we simply make a task non-ready
if it is eligible to run due to having a prio > S, but
its relative priority contradicts its relative urgency.

 From an implementation point of view, I think I would
probably have each priority level with its own urgency
queue, and then a queue of levels > S, ordered by the
urgency of the head of the corresponding queue.  That is,
I wouldn't actually remove the tasks from their priority
level's queue if their urgency was too low, but I would just
ignore the queue as a whole.

****************************************************************

From: Alan Burns
Sent: Thursday, October 30, 2003  8:48 AM

Let me first make sure I have model-TT correct

stt> Another way to imagine it is that there are still queues at each
stt> priority level (ordered by scheduler determined "urgency",
stt> e.g. deadline),
stt> but a task is only "ready" if it has a greater urgency than
stt> all tasks on
stt> lower priority ready queues, but ignoring queues at priorities
stt> at the level of a locked PO or below.
stt>
stt> In other words, when a task's "urgency" drops below a
stt> ready task of a lower,
stt> eligible-to-run priority, it becomes non-ready.

Tasks run at their usual base priorities, or with PO
priorities. We get EDF behaviour by making all 'ready
tasks', with priority higher than that of the task with
shortest deadline, non-ready (unless a PO is locked,
but let's ignore this situation for a moment).

So if we have the range 1..10 for EDF, and a task with pri
10 was running and then a task with pri 1 was released with
a shorter deadline then the running task and all tasks on
ready queues in range 2..10 would need to be made non-ready.
Correct?

Assuming I have this correct I have some concerns (but I agree
that model-TT gives the correct EDF behaviour).

1) Being 'ready' and on a ready queue are synonymous in the
dispatching model - hence becoming non-ready would mean being
removed from ready queues. Hence many tasks for most of the time
will be in limbo (not in any state currently defined for tasks).

2) I think the control we have over dispatching amounts to
defining the dispatching points, and some postponement of
when a task becomes ready (after being blocked, but before
it executes).

3) Model-TT needs (I think) a task that is running to become
non-ready (if lower pri task released with shorter deadline).
This is quite new and opens up many potentially tricky issues.
I do not think it is a natural way of defining the model
(but I guess that is a question of taste!).

4) Those who know the preemption model will find it confusing
that priority is (apparently) being used for dispatching and
PO locking - rather than just locking.


So what do we have.

I think we both agree (at this stage) that when a task executes
inside a PO it should execute with ceiling priority. Also that
we should only have queues that need the head task to be looked at.

I think the use of non-ready should be limited to the postponement
of when a task can become ready.

The model should look straightforward when no POs are locked and
extend naturally when POs are added to the model.

Let me give a variation of what I last said, to get
rid of the 'strange rule' at the cost of having more tasks
in the non-ready state prior to being released. I'll call
this model-AT (combined alan and tuck):


Assume range of priorities for EDF (or any other
scheme using this approach) is B..H; also
restrict tasks and PO priorities to be in range
B+1 .. H. Let T'P be base priority of task T, and
T'D its current deadline.

Let S denote the priority of the highest priority currently
locked PO. And let S'D be the deadline of
the task that has locked this PO. If no PO is locked let S equal B
and S'D be infinity (Time_Span_Last).

Rule 1
All ready queues (in B..H range) are ordered by deadline.

Rule 2
A task, T, can only become ready if T'P > S and T'D < S'D.

Rule 3
When a task becomes ready it is added to the ready queue
for pri level S and this is a dispatching point.


I think these rules are getting quite simple!

So if no locked POs then all tasks are immediately added to the
single ready queue at priority B (which is ordered by EDF) when
they become unblocked. Only when a PO is locked are some tasks
not immediately made ready.

Once a PO is locked (by task Q say) then S becomes the base until
it is unlocked. Only tasks that have higher Pri than S and shorter
deadline than Q can become ready.
If there are many of these they will be on the ready queue at level
S (and be in front of Q).
If any of these lock a higher level PO then this behaviour is
repeated at the higher level.
When Q runs again and unlocks the PO then its pri (and S) will drop
to B, all 'held tasks' will become ready and correct behaviour
follows.


> I don't think either of the models correspond to the
> best way to implement things, but my sense is that model-TT
> might be simpler to describe, and has the virtue that a
> task is queued at its active priority, rather than bouncing
> around to different queues that have nothing to do with
> its priority.  The "trick" is we simply make a task non-ready
> if it is eligible to run due to having a prio > S, but
> its relative priority contradicts its relative urgency.
>
>  From an implementation point of view, I think I would
> probably have each priority level with its own urgency
> queue, and then a queue of levels > S, ordered by the
> urgency of the head of the corresponding queue.  That is,
> I wouldn't actually remove the tasks from their priority
> level's queue if their urgency was too low, but I would just
> ignore the queue as a whole.
>

Sorry to keep arguing, but I feel model-AT is now quite simple
and easy to explain. For implementation, all that is needed
in addition to the usual stuff is a queue of non-ready tasks
that have become unblocked but are not yet ready. Probably best
to order this by priority. As level of S drops tasks on this
held queue with priority higher than new S will be added to
the usual ready queue (at new S).

I guess back to you for comment!

****************************************************************

From: Tucker Taft
Sent: Thursday, October 30, 2003 11:31 AM

> Let me first make sure I have model-TT correct
>
> stt> Another way to imagine it is that there are still queues at each
> stt> priority level (ordered by scheduler determined "urgency",
> stt> e.g. deadline),
> stt> but a task is only "ready" if it has a greater urgency than
> stt> all tasks on
> stt> lower priority ready queues, but ignoring queues at priorities
> stt> at the level of a locked PO or below.
> stt>
> stt> In other words, when a task's "urgency" drops below a
> stt> ready task of a lower,
> stt> eligible-to-run priority, it becomes non-ready.
>
> Tasks run at their usual base priorities, or with PO
> priorities. We get EDF behaviour by making all 'ready
> tasks', with priority higher than that of the task with
> shortest deadline, non-ready (unless a PO is locked,
> but let's ignore this situation for a moment).
>
> So if we have the range 1..10 for EDF, and a task with pri
> 10 was running and then a task with pri 1 was released with
> a shorter deadline then the running task and all tasks on
> ready queues in range 2..10 would need to be made non-ready.
> Correct?

Yes.  (BTW, I think we should use the term "highest urgency"
or "most urgent" instead of "shortest deadline" to keep it
relatively general.)

> Assuming I have this correct I have some concerns (but I agree
> that model-TT gives the correct EDF behaviour).
>
> 1) Being 'ready' and on a ready queue are synonymous in the
> dispatching model - hence becoming non-ready would mean being
> removed from ready queues. Hence many tasks for most of the time
> will be in limbo (not in any state currently defined for tasks).

This seems to be true also of the model you suggest below.  I guess
you are saying this "limbo" state is easier to handle if tasks
can enter it only immediately after becoming unblocked, but once out
of limbo and onto a ready queue, they don't reenter limbo without
first becoming blocked again.  I'm not sure I see the big
difference in terms of understandability.  I think we all
agree that we are trying to construct a model that uses part
but not all of the priority model.  We are not worried about
actual implementation mechanism.

> 2) I think the control we have over dispatching amounts to
> defining the dispatching points, and some postponement of
> when a task becomes ready (after being blocked, but before
> it executes).

Where does the "postponement" license come from?  Perhaps
if I understood that I would agree that restricting "limbo"
to immediately follow blocking would be preferable.

> 3) Model-TT needs (I think) a task that is running to become
> non-ready (if lower pri task released with shorter deadline).
> This is quite new and opens up many potentially tricky issues.
> I do not think it is a natural way of defining the model
> (but I guess that is a question of taste!).

Perhaps a better way to talk about it is that a task
is "held" (a la Asynchronous Task Control) when its urgency
is lower than some lower priority ready task.

I see you are using the term "held" below as well;
it is probably a preferable term to "limbo" ;-).

> 4) Those who know the preemption model will find it confusing
> that priority is (apparently) being used for dispatching and
> PO locking - rather than just locking.

I guess it seemed odd to me that we still have queues at various
priorities, but the task's base priority had nothing to do with
what queue it was on.  I guess we could be saying that the
algorithm for the "active" priority is really what matters,
and we are both saying that the active priority is influenced by
PO locking as usual, but also by urgency.

> So what do we have.
>
> I think we both agree (at this stage) that when a task executes
> inside a PO it should execute with ceiling priority. Also that
> we should only have queues that need the head task to be looked at.

Yes, that sounds right.

> I think the use of non-ready should be limited to the postponement
> of when a task can become ready.

It would help me to understand why.

> The model should look straightforward when no POs are locked and
> extend naturally when POs are added to the model.

Yes, that sounds good too.

> Let me give a variation of what I last said, to get
> rid of the 'strange rule' at the cost of having more tasks
> in the non-ready state prior to being released. I'll call
> this model-AT (combined alan and tuck):
>
>
> Assume range of priorities for EDF (or any other
> scheme using this approach) is B..H; also
> restrict tasks and PO priorities to be in range
> B+1 .. H. Let T'P be base priority of task T, and
> T'D its current deadline.

I don't love the rule that you can't have tasks at level B.
Is that really necessary?  I suppose disallowing POs at
level B seems more reasonable, though I am not convinced,
since it seems we could set "S" to B in the rule below
if no POs are locked, even if there are POs at level B.

> Let S denote the priority of the highest priority currently
> locked PO. And let S'D be the deadline of
> the task that has locked this PO. If no PO is locked let S equal B
> and S'D be infinity (Time_Span_Last).
>
> Rule 1
> All ready queues (in B..H range) are ordered by deadline.
>
> Rule 2
> A task, T, can only become ready if T'P > S and T'D < S'D.
>
> Rule 3
> When a task becomes ready it is added to the ready queue
> for pri level S and this is a dispatching point.
>
>
> I think these rules are getting quite simple!

Perhaps suspiciously simple.  Since you are saying that once
a task is on a ready queue, it stays on that queue until it
runs, I am surprised it can work.  How dynamic are the deadlines?
It would seem to be nice to describe the solution in terms of
the properties of the queues at any given time as a function
of priorities and urgencies, rather than the properties at the
moment a task is added to a queue.

If I follow your model correctly, then it has the following
property:  Presuming at some point there are tasks queued at various
priority levels, then there is a PO locked at each level where
there are tasks queued (except the lowest), and any task queued at a
level L (other than the one holding the lock) has base priority
greater than L, and urgency higher than the task holding the lock
(as well as any task queued at lower levels).

Does that sound right?  If so, let's try to simplify a bit,
and eliminate the "limbo" state.

First, there seems no harm in putting higher-urgency tasks in a queue at
the level one above that of the locked PO.  This will make it a bit more
likely that the base prio matches the queue priority, and might
eliminate the need for the restriction against using level "B" itself.

Second, rather than putting lower-urgency or lower-priority tasks into
limbo, can't we put them where they would have gone if they had arrived
before the PO was locked, but make sure that tasks with a PO lock
always stay on the front of their queue when preempted?
That is, look for the highest prio locked PO that the new
task would have preempted, and add it to the queue just above that.
If there is no such PO, then add it to the lowest level queue.

In this model, the only queues that have tasks on them are those
at the level of a locked PO, which contains the task holding
the lock while preempted, and those immediately above that, which
holds tasks of higher prio than this locked PO (but not higher than
some higher-level locked PO), and higher urgency than
the task holding the PO.  While preempted, a task holding a lock
is kept at the head of its queue (as though it has very high urgency).

If there are no POs locked, then all ready tasks are in the lowest level
queue.  When there are POs locked, the tasks are split out into
multiple queues, at levels one above that of each locked PO, plus any
preempted tasks with locks at the PO's level.  Tasks go into the
highest level where their urgency and priority exceeds that of
the task holding the lock at the immediately lower level.
If either their urgency or priority is not greater, they go in
a lower level.

> So if no locked POs then all tasks are immediately added to the
> single ready queue at priority B (which is ordered by EDF) when
> they become unblocked. Only when a PO is locked are some tasks
> not immediately made ready.
>
> Once a PO is locked (by task Q say) then S becomes the base until
> it is unlocked. Only tasks that have higher Pri than S and shorter
> deadline than Q can become ready.
> If there are many of these they will be on the ready queue at level
> S (and be in front of Q).
> If any of these lock a higher level PO then this behaviour is
> repeated at the higher level.
> When Q runs again and unlocks the PO then its pri (and S) will drop
> to B, all 'held tasks' will become ready and correct behaviour
> follows.
> ...
> Sorry to keep arguing, but I feel model-AT is now quite simple
> and easy to explain. For implementation, all that is needed
> in addition to the usual stuff is a queue of non-ready tasks
> that have become unblocked but are not yet ready. Probably best
> to order this by priority. As level of S drops tasks on this
> held queue with priority higher than new S will be added to
> the usual ready queue (at new S).
>
> I guess back to you for comment!

I don't think we need the "limbo" queue, as explained above.
Also, if we put tasks in the level one above that of the
locked PO, I think we can eliminate the funny restriction to
levels B+1..H, and can use all the levels.  This does require
that while preempted, a task holding a lock is treated as though
it is very urgent, so it is kept at the head of the queue at the
PO's level.

With those two "tweaks" I think we have a model that satisfies
both your goals and mine.

****************************************************************

From: Alan Burns
Sent: Sunday, November 2, 2003  5:27 AM

...
>> So what do we have.
>>
>> I think we both agree (at this stage) that when a task executes
>> inside a PO it should execute with ceiling priority. Also that
>> we should only have queues that need the head task to be looked at.
>
> Yes, that sounds right.
>
>> I think the use of non-ready should be limited to the postponement
>> of when a task can become ready.
>
> It would help me to understand why.

The way all ceiling type protocols work is for a task to
be delayed from really becoming ready; once it starts executing
all its resources are available. So with the standard ceiling
locking, a PO delays other tasks from executing (although they
can be put on the ready queue). With Preemption Level locking
the algorithm is described in terms of the rules for a task
becoming ready. Also I feel within Ada it will be easier to
describe a model (if it needs limbo states) in terms of delayed
entry to a ready queue rather than disappearing from a ready queue.

But let's hope we can work on a model that does not need limbo
states!

...
>> Let me give a variation of what I last said, to get
>> rid of the 'strange rule' at the cost of having more tasks
>> in the non-ready state prior to being released. I'll call
>> this model-AT (combined alan and tuck):
>>
>> Assume range of priorities for EDF (or any other
>> scheme using this approach) is B..H; also
>> restrict tasks and PO priorities to be in range
>> B+1 .. H. Let T'P be base priority of task T, and
>> T'D its current deadline.
>
> I don't love the rule that you can't have tasks at level B.
> Is that really necessary?  I suppose disallowing POs at
> level B seems more reasonable, though I am not convinced,
> since it seems we could set "S" to B in the rule below
> if no POs are locked, even if there are POs at level B.

Yes, only disallowing POs at B may work.

>> Let S denote the priority of the highest priority currently
>> locked PO. And let S'D be the deadline of
>> the task that has locked this PO. If no PO is locked let S equal B
>> and S'D be infinity (Time_Span_Last).
>>
>> Rule 1
>> All ready queues (in B..H range) are ordered by deadline.
>>
>> Rule 2
>> A task, T, can only become ready if T'P > S and T'D < S'D.
>>
>> Rule 3
>> When a task becomes ready it is added to the ready queue
>> for pri level S and this is a dispatching point.
>>
>>
>> I think these rules are getting quite simple!
>
> Perhaps suspiciously simple.  Since you are saying that once
> a task is on a ready queue, it stays on that queue until it
> runs, I am surprised it can work.  How dynamic are the deadlines?
> It would seem to be nice to describe the solution in terms of
> the properties of the queues at any given time as a function
> of priorities and urgencies, rather than the properties at the
> moment a task is added to a queue.

Remember that, for whatever model we come up with, whenever a task
changes its absolute deadline (or urgency level) this is a dispatching
point and the task is taken out of its ready queue and reinserted
using the new deadline. So an EDF periodic task will at the end of
its period calculate its new absolute deadline (for the next
period), change its deadline attribute (this will cause dispatching),
and then delay for start of next period.
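
To make the pattern concrete, here is a minimal sketch of such a
periodic task. It is purely illustrative: Set_Deadline below is just
a placeholder for whatever deadline-setting interface we end up
defining, the relative deadline is taken to be equal to the period,
and the period and loop bound are arbitrary.

with Ada.Real_Time; use Ada.Real_Time;

procedure Periodic_EDF_Sketch is

   --  Placeholder only: stands in for the real deadline-setting
   --  operation, whose call would be a dispatching point.
   procedure Set_Deadline (D : Time) is
   begin
      null;
   end Set_Deadline;

   task Periodic_Worker;

   task body Periodic_Worker is
      Period       : constant Time_Span := Milliseconds (10);
      Next_Release : Time := Clock;
   begin
      for Release in 1 .. 100 loop
         --  the work for this release would go here
         Next_Release := Next_Release + Period;
         Set_Deadline (Next_Release + Period);
         --  absolute deadline for the next period (dispatching point)
         delay until Next_Release;
         --  suspend until the start of the next period
      end loop;
   end Periodic_Worker;

begin
   null;
end Periodic_EDF_Sketch;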

Once the dispatching model is sorted out we need to return to the
representation of deadlines/urgency (as discussed in the AI).

> If I follow your model correctly, then it has the following
> property:  Presuming at some point there are tasks queued at various
> priority levels, then there is a PO locked at each level where
> there are tasks queued (except the lowest), and any task queued at a
> level L (other than the one holding the lock) has base priority
> greater than L, and urgency higher than the task holding the lock
> (as well as any task queued at lower levels).
>
> Does that sound right?  If so, let's try to simplify a bit,
> and eliminate the "limbo" state.

Yes, that's my thinking. It follows that at any level L the tail
of the queue is the task that has locked the PO.

> First, there seems no harm in putting higher-urgency tasks in a queue at
> the level one above that of the locked PO.  This will make it a bit more
> likely that the base prio matches the queue priority, and might
> eliminate the need for the restriction against using level "B" itself.

Does this not mean we have to reserve a priority above the top PO?
In other words H level cannot be used? Also is one level above
the PO a level that can be used by other tasks and POs or an
artificial level 'between' application levels? I feel a rule of
no POs at B (or use existing permission to round up any PO
at B to B+1) is more straightforward.

> Second, rather than putting lower-urgency or lower-priority tasks into
> limbo, can't we put them where they would have gone if they had arrived
> before the PO was locked, but make sure that tasks with a PO lock
> always stay on the front of their queue when preempted?
> That is, look for the highest prio locked PO that the new
> task would have preempted, and add it to the queue just above that.
> If there is no such PO, then add it to the lowest level queue.

I agree that the removal of the limbo state would be good. But
I cannot quite follow through your model here (sorry). If there are
a number of POs locked (at increasing priority levels) then would
you not have to go down all the POs to find the one that the
new task is allowed to preempt and then put it at the head of
that queue level (or +1 using your alternative)?

I know we are talking about a model not an implementation
scheme but this seems a bit excessive - but is perhaps better
than introducing a new idea such as limbo (or held).

With the protocol all locked POs have a strict order. If PO1
and PO2 are both locked then PO1 must have higher priority
and greater urgency than PO2 (or the other way round). So
it may be possible to define where a newly released task should
be placed.

Is this what you were thinking? If so I'll do some more work to
be sure it 'works'.

> In this model, the only queues that have tasks on them are those
> at the level of a locked PO, which contains the task holding
> the lock while preempted, and those immediately above that, which
> holds tasks of higher prio than this locked PO (but not higher than
> some higher-level locked PO), and higher urgency than
> the task holding the PO.  While preempted, a task holding a lock
> is kept at the head of its queue (as though it has very high urgency).
>
> If there are no POs locked, then all ready tasks are in the lowest level
> queue.  When there are POs locked, the tasks are split out into
> multiple queues, at levels one above that of each locked PO, plus any
> preempted tasks with locks at the PO's level.  Tasks go into the
> highest level where their urgency and priority exceeds that of
> the task holding the lock at the immediately lower level.
> If either their urgency or priority is not greater, they go in
> a lower level.
>
>> So if no locked POs then all tasks are immediately added to the
>> single ready queue at priority B (which is ordered by EDF) when
>> they become unblocked. Only when a PO is locked are some tasks
>> not immediately made ready.
>>
>> Once a PO is locked (by task Q say) then S becomes the base until
>> it is unlocked. Only tasks that have higher Pri than S and shorter
>> deadline than Q can become ready.
>> If there are many of these they will be on the ready queue at level
>> S (and be in front of Q).
>> If any of these lock a higher level PO then this behaviour is
>> repeated at the higher level.
>> When Q runs again and unlocks the PO then its pri (and S) will drop
>> to B, all 'held tasks' will become ready and correct behaviour
>> follows.
>> ...
>> Sorry to keep arguing, but I feel model-AT is now quite simple
>> and easy to explain. For implementation, all that is needed
>> in addition to the usual stuff is a queue of non-ready tasks
>> that have become unblocked but are not yet ready. Probably best
>> to order this by priority. As level of S drops tasks on this
>> held queue with priority higher than new S will be added to
>> the usual ready queue (at new S).
>>
>> I guess back to you for comment!
>
> I don't think we need the "limbo" queue, as explained above.
> Also, if we put tasks in the level one above that of the
> locked PO, I think we can eliminate the funny restriction to
> levels B+1..H, and can use all the levels.  This does require
> that while preempted, a task holding a lock is treated as though
> it is very urgent, so it is kept at the head of the queue at the
> PO's level.

Remember this is the usual rule for preemption (ie stay at head).

> With those two "tweaks" I think we have a model that satisfies
> both your goals and mine.

I think we may be closer if I understand correctly your latest thoughts.

****************************************************************

From: Tucker Taft
Sent: Sunday, November  2, 2003 10:39 AM

> ...
> > First, there seems no harm in putting higher-urgency tasks in a queue at
> > the level one above that of the locked PO.  This will make it a bit more
> > likely that the base prio matches the queue priority, and might
> > eliminate the need for the restriction against using level "B" itself.
>
> Does this not mean we have to reserve a priority above the top PO?

No.  Nothing can ever preempt a task with a locked PO at level H,
so there is no need to provide a queue above that level.

> In other words H level cannot be used?

H can be used.  H+1 would never be needed since there
are no tasks at priority > H.

> ... Also is one level above
> the PO a level that can be used by other tasks and POs or an
> artificial level 'between' application levels?

No, as I tried to explain below, so long as a task with a locked
PO is kept at the head of its queue, it is OK to have other tasks
on that queue as well.

> ... I feel a rule of
> no POs at B (or use existing permission to round up any PO
> at B to B+1) is more straightforward.

I don't think we need either rule.

> > Second, rather than putting lower-urgency or lower-priority tasks into
> > limbo, can't we put them where they would have gone if they had arrived
> > before the PO was locked, but make sure that tasks with a PO lock
> > always stay on the front of their queue when preempted?
> > That is, look for the highest prio locked PO that the new
> > task would have preempted, and add it to the queue just above that.
> > If there is no such PO, then add it to the lowest level queue.
>
> I agree that the removal of the limbo state would be good. But
> I cannot quite follow through your model here (sorry). If there are
> a number of POs locked (at increasing priority levels) then would
> you not have to go down all the POs to find the one that the
> new task is allowed to preempt and then put it at the head of
> that queue level (or +1 using your alternative)?

I suppose you could defer actually putting tasks on the queue
if it was more efficient, so long as it was equivalent to putting
them on the correct queue at the beginning.  I would argue we should not
worry about the efficiency of finding the right queue.  There are
so many clever things you can do once you know what the "right"
answer is, as far as queueing, that this seems like a very
minor problem.  We use a binary "heap" of priorities in our
run-time system simply to speed up the process of finding the
highest-priority queue with at least one task on it.  I could
see using binary "heaps" in this model as well to reduce the time
to find the right place to insert tasks of various urgencies/priorities.

> I know we are talking about a model not an implementation
> scheme but this seems a bit excessive - but is perhaps better
> than introducing a new idea such as limbo (or held).

My concern is that we should be able to describe the steady
state situation, rather than rely on tricks involving deferring
putting tasks on a queue, since if they had come in just
a millisecond earlier, we might have had no trouble placing them
on an appropriate queue.

> With the protocol all locked POs have a strict order. If PO1
> and PO2 are both locked then PO1 must have higher priority
> and greater urgency than PO2 (or the other way round). So
> it may be possible to define where a newly released task should
> be placed.

Yes, I think so, and it shouldn't really matter whether it is
newly released, or released a bit earlier.  Its priority, its urgency,
and the priority/urgency of the various tasks with locked POs, should fully
determine where each ready task should be queued.

To be specific, a ready task should be queued immediately above
the priority of the highest priority locked PO for which it exceeds
both the PO's priority and the lock-holding task's urgency.
The queues are ordered by urgency, except that a lock-holding task
is queued at the front of its queue while preempted.

> Is this what you were thinking? If so I'll do some more work to
> be sure it 'works'.

Yes, I suppose "this" is what I was thinking, presuming we
agree on what "this" is ;-).

> > In this model, the only queues that have tasks on them are those
> > at the level of a locked PO, which contains the task holding
> > the lock while preempted, and those immediately above that, which
> > holds tasks of higher prio than this locked PO (but not higher than
> > some higher-level locked PO), and higher urgency than
> > the task holding the PO.  While preempted, a task holding a lock
> > is kept at the head of its queue (as though it has very high urgency).
> >
> > If there are no POs locked, then all ready tasks are in the lowest level
> > queue.  When there are POs locked, the tasks are split out into
> > multiple queues, at levels one above that of each locked PO, plus any
> > preempted tasks with locks at the PO's level.  Tasks go into the
> > highest level where their urgency and priority exceeds that of
> > the task holding the lock at the immediately lower level.
> > If either their urgency or priority is not greater, they go in
> > a lower level.
> ...
> > I don't think we need the "limbo" queue, as explained above.
> > Also, if we put tasks in the level one above that of the
> > locked PO, I think we can eliminate the funny restriction to
> > levels B+1..H, and can use all the levels.  This does require
> > that while preempted, a task holding a lock is treated as though
> > it is very urgent, so it is kept at the head of the queue at the
> > PO's level.
>
> Remember this is the usual rule for preemption (ie stay at head).

Yes, that is why it seemed like a natural rule to me.

****************************************************************

From: Alan Burns
Sent: Sunday, November  9, 2003  8:43 AM

I think we now have a scheme that does not need a limbo state:
tasks can be put immediately onto the correct ready queue
when they become unblocked. One needs to identify the highest
locked PO that has a priority and urgency less than that of
the released task.

A relatively minor issue remains: whether to use the PO queue or the
PO+1 queue for the tasks.

>> ...
>> > First, there seems no harm in putting higher-urgency tasks in a queue at
>> > the level one above that of the locked PO.  This will make it a bit more
>> > likely that the base prio matches the queue priority, and might
>> > eliminate the need for the restriction against using level "B" itself.
>>
>> Does this not mean we have to reserve a priority above top PO.
>
> No.  Nothing can ever preempt a task with a locked PO at level H,
> so there is no need to provide a queue above that level.

Agreed

If PO+1 scheme is used then one needs the rule that a task with
a locked PO must always be at the front of the ready queue for the
PO's priority. So the queue is not a pure EDF (urgency) queue.

The PO-only scheme seems to need the rule that there be no PO
priorities at the base level, which I agree seems strange, so I'm happy
to go along with your model, which I think has the following form.
Let T'P be the base priority of task T, T'A its active priority
and T'U its urgency.

Rule 1
All ready queues in the specified range B..H are ordered by urgency
apart from when a task is preempted while executing within a PO,
in this case the task is always at the head of the ready queue.

Rule 2
When a task T becomes unblocked it is placed on the ready queue
identified as follows.
Let S1, S2, ..., Sn be the sequence tasks (ordered by active
priority, S1 the highest) that are executing within a PO and
have the property that Si'A < T'P and Si'U < T'U.
If there are no tasks in this sequence then T is placed on the
ready queue at priority B.
If the sequence is not empty then T is placed on the ready queue
at priority S1'A+1

Rule 3
When a task is chosen for execution it runs with the active priority
determined by the ready queue from which the task was taken. If
preemptied it returns to the ready queue for its active priority.
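
To make Rule 2 concrete, here is a rough sketch of the queue-selection
step. It is illustrative only: the types, the value of B and the way
the lock-holding tasks are passed in are all assumptions, and urgency
is abstracted to an integer where a larger value means more urgent
(for EDF it would be derived from the deadline, an earlier deadline
meaning higher urgency).

procedure Rule_2_Sketch is

   subtype Urgency is Integer;    --  larger value = more urgent

   B : constant := 1;             --  lowest priority of the band (assumed)

   type Lock_Holder is record
      Active_Prio : Integer;      --  Si'A (ceiling of the locked PO)
      U           : Urgency;      --  Si'U (urgency of the locking task)
   end record;

   type Holder_List is array (Positive range <>) of Lock_Holder;

   --  Returns the priority of the ready queue on which T is placed.
   --  Holders must be given in decreasing order of active priority,
   --  so the first element that qualifies is S1.
   function Target_Queue
     (T_Base  : Integer;          --  T'P
      T_U     : Urgency;          --  T'U
      Holders : Holder_List) return Integer is
   begin
      for I in Holders'Range loop
         if Holders (I).Active_Prio < T_Base
           and then Holders (I).U < T_U
         then
            return Holders (I).Active_Prio + 1;   --  queue at S1'A+1
         end if;
      end loop;
      return B;                   --  empty sequence: queue at priority B
   end Target_Queue;

begin
   null;
end Rule_2_Sketch;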

I guess there will be a better way of saying all this for the ARM.

We will need a statement that for multiprocessor implementations
other rules may apply. Also need to ensure that the model works
when tasks have inheritance priorities due to other reasons
(rendezvous etc) - but I do not think they give any problems.

A new issue - how to set the urgency of a task. In the original AI
I had a new attribute of a task (deadline) and a new pragma for setting
the deadline of a task on creation (like priority). It would be
good if this could be made more general. Any idea how this could be
done? Of course for some schemes (EDF for example) the smaller the
value of parameter the more urgent, for other schemes (value-based
dispatching) the larger the more urgent. Whatever the scheme we would
need to have the rule that any change to the urgency attribute would
need to be a dispatching point.

I think we are making progress!

****************************************************************

From: Tucker Taft
Sent: Sunday, November  9, 2003 11:20 AM

> I think we now have a scheme that does not need a limbo state:
> tasks can be put immediately onto the correct ready queue
> when they become unblocked. One needs to identify the highest
> locked PO that has a priority and urgency less than that of
> the released task.

We are on the same wavelength now...

> A relatively minor issue remains: whether to use the PO queue or the
> PO+1 queue for the tasks.

I like using PO+1 because, as I mentioned, it makes it more
likely a task will end up on its "true" priority queue.
In particular, if there are only two levels of tasks, the
higher level for ones that can preempt locked POs when
urgent, these will run at this higher level exactly when
they are preempting.

> ...
> If PO+1 scheme is used then one needs the rule that a task with
> a locked PO must always be at the front of the ready queue for the
> PO's priority. So the queue is not a pure EDF (urgency) queue.

Right.

> ...
> Let T'P be the base priority of task T, T'A its active priority
> and T'U its urgency.
>
> Rule 1
> All ready queues in the specified range B..H are ordered by urgency
> apart from when a task is preempted while executing within a PO,
> in this case the task is always at the head of the ready queue.
>
> Rule 2
> When a task T becomes unblocked it is placed on the ready queue
> identified as follows.
> Let S1, S2, ..., Sn be the sequence tasks (ordered by active
                                    ^^ "of"
> priority, S1 the highest) that are executing within a PO and
> have the property that Si'A < T'P and Si'U < T'U.
> If there are no tasks in this sequence then T is placed on the
> ready queue at priority B.
> If the sequence is not empty then T is placed on the ready queue
> at priority S1'A+1
>
> Rule 3
> When a task is chosen for execution it runs with the active priority
> determined by the ready queue from which the task was taken. If
> preemptied it returns to the ready queue for its active priority.
        ^^^^ "ted"
>
>
> I guess there will be a better way of saying all this for the ARM.
>
> We will need a statement that for multiprocessor implementations
> other rules may apply. Also need to ensure that the model works
> when tasks have inheritance priorities due to other reasons
> (rendezvous etc) - but I do not think they give any problems.

I agree.

> A new issue - how to set the urgency of a task. In the original AI
> I had a new attribute of a task (deadline) and a new pragma for setting
> the deadline of a task on creation (like priority). It would be
> good if this could be made more general. Any idea how this could be
> done? Of course for some schemes (EDF for example) the smaller the
> value of parameter the more urgent, for other schemes (value-based
> dispatching) the larger the more urgent. Whatever the scheme we would
> need to have the rule that any change to the urgency attribute would
> need to be a dispatching point.

Using pragmas and attributes is generally more work for the
compiler builder.  Could all this be done at run-time using
a package and task ids?  Or is having the urgency dynamic
not common enough to justify?

I am a little unclear whether we are trying to standardize
earliest-deadline first scheduling, or rather standardize
support for having multiple priority-specific scheduling
algorithms.  What do we really need to standardize at this point?
Perhaps just a framework in which priority-specific
algorithms can be inserted...

> I think we are making progress!

Ditto.

****************************************************************

From: Alan Burns
Sent: Monday, November 10, 2003  9:07 AM

OK, let me stay with the representation issue

>> A new issue - how to set the urgency of a task. In the original AI
>> I had a new attribute of a task (deadline) and a new pragma for setting
>> the deadline of a task on creation (like priority). It would be
>> good if this could be made more general. Any idea how this could be
>> done? Of course for some schemes (EDF for example) the smaller the
>> value of parameter the more urgent, for other schemes (value-based
>> dispatching) the larger the more urgent. Whatever the scheme we would
>> need to have the rule that any change to the urgency attribute would
>> need to be a dispatching point.
>
> Using pragmas and attributes is generally more work for the
> compiler builder.  Could all this be done at run-time using
> a package and task ids?  Or is having the urgency dynamic
> not common enough to justify?
>
> I am a little unclear whether we are trying to standardize
> earliest-deadline first scheduling, or rather standardize
> support for having multiple priority-specific scheduling
> algorithms.  What do we really need to standardize at this point?
> Perhaps just a framework in which priority-specific
> algorithms can be inserted...

The grand scheme is in 3 parts: first, to give support for
priority-specific scheduling (FIFO and Round Robin); second,
to use this framework to define EDF; and third (if possible),
to generalise the EDF solution for any urgency parameter.

The priority-specific scheduling is defined in AI-355 and seems
quite straightforward. Our work on preemption levels now implies
that a band of priorities may be needed for a scheme, but that
is easy to work into the priority-specific AI. To support
EDF we originally had

1. a predefined task attribute (ie a language defined instantiation
of the task attribute generic) called deadline

2. a pragma (called deadline) that appears in a task spec to control
dispatching during activation (similar to the use of priority pragma)

3. all changes to a task's deadline (ie calls to the Set_Value) are
dispatching points

I guess (1) and (3) could be achieved by a run-time package, but not sure
(2) can, although I guess we could live without this by giving the
default value of the attribute Time_First.

I cannot see an easy way to generalise this. Perhaps a generic in which
the 'type' for urgency is a parameter, a boolean to say which end
of the type is the most urgent, and two parameters that give the priority
band for the scheme. The 2 priority parameters will then have to match
those given in the priority-specific pragma (unless priority-specific
is defined to say all priority levels are initially FIFO but may
be changed via calls to some run-time package - but this sounds a bit
too dynamic).
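
For concreteness, the sort of generic I have in mind might look
roughly as follows. This is a spec sketch only (a body would of
course be needed) and every name in it is made up; it is not a
worked-out proposal.

with System;
with Ada.Task_Identification;

generic
   type Urgency_Type is private;
   with function "<" (Left, Right : Urgency_Type) return Boolean is <>;
   Smaller_Is_More_Urgent : Boolean;
   First_Priority : System.Any_Priority;
   Last_Priority  : System.Any_Priority;
package Generic_Urgency_Dispatching is

   procedure Set_Urgency
     (U : in Urgency_Type;
      T : in Ada.Task_Identification.Task_Id :=
             Ada.Task_Identification.Current_Task);

   procedure Get_Urgency
     (U : out Urgency_Type;
      T : in Ada.Task_Identification.Task_Id :=
             Ada.Task_Identification.Current_Task);

end Generic_Urgency_Dispatching;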

EDF is so well known it would be good for Ada to give direct
support, even if it can be derived from a general purpose generic.

Do you feel the use of a generic is a runner?

****************************************************************

From: Tucker Taft
Sent: Monday, November 10, 2003 10:48 AM

...
> The priority-specific scheduling is defined in AI-355 and seems
> quite straightforward. Our work on preemption levels now implies
> that a band of priorities may be needed for a scheme, but that
> is easy to work into the priority-specific AI. To support
> EDF we originally had
>
> 1. a predefined task attribute (ie a language defined instantiation
> of the task attribute generic) called deadline

Oh, I had forgotten you meant *that* kind of attribute.  I was thinking
of the language "'" kind of attribute.  Instantiations of this generic
do not create work for the compiler writer.

> 2. a pragma (called deadline) that appears in a task spec to control
> dispatching during activation (similar to the use of priority pragma)

Here you are creating compiler work.  Using a settable default might be
preferable.

> 3. all changes to a task's deadline (ie calls to the Set_Value) are
> dispatching points

Now you are putting special semantics into an instantiation of a
general-purpose generic.  That may be asking for trouble.  It might
be better to just define your own package, if you want it to have
a special effect when Set_Value is called.

> I guess (1) and (3) could be achieved by a run-time package, but not sure
> (2) can, although I guess we could live without this by giving the
> default value of the attribute Time_First.

That would certainly simplify the proposal from a compiler
implementor's point of view.

> I cannot see an easy way to generalise this. Perhaps a generic in which
> the 'type' for urgency is a parameter, a boolean to say which end
> of the type is the most urgent, and two parameters that give the priority
> band for the scheme. The 2 priority parameters will then have to match
> those given in the priority-specific pragma (unless priority-specific
> is defined to say all priority levels are initially FIFO but may
> be changed via calls to some run-time package - but this sounds a bit
> too dynamic).

I don't see the need to generalize it to this extent.  Each kind of scheduling
would probably need its own package, scheduler, etc.  It would still be
nice to have a general *model* indicating how these new kinds of algorithms
could be supported, without necessarily providing an amazing generic of
some sort that did it directly.

> EDF is so well known it would be good for Ada to give direct
> support, even if it can be derived from a general purpose generic.
>
> Do you feel the use of a generic is a runner?

No, I think EDF deserves its own package with an EDF-oriented interface.
But it might be nice to describe a general model which allows something
other than priority (i.e., the "urgency") to take precedence in task scheduling,
while priority still governs use of POs.  This is essentially the
"preemption" level concept, except it allows us to preserve the single
notion of priority for governing POs, while allowing other parameters
like deadline to control scheduling between tasks that are not involved
with a PO lock.

****************************************************************

From: Alan Burns
Sent: Tuesday, November 11, 2003  9:34 AM

So we have the following for support of EDF

1. A language defined package such as:

with Ada.Real_Time;
with Ada.Task_Identification; use Ada.Task_Identification;
package Deadline_Support is
  subtype Deadline is Ada.Real_Time.Time;
  Default_Deadline : constant Deadline := Ada.Real_Time.Time_First;
  procedure Set_Deadline (D : in Deadline; T : in Task_ID := Current_Task);
  procedure Get_Deadline (D : out Deadline; T : in Task_ID := Current_Task);
end Deadline_Support;

Perhaps with Implementation Advice to hold the deadline value
as a task attribute.

Is there any way Default_Deadline could be configurable?


2. Pragmas to set a band of priorities for EDF scheduling:

pragma Task_Dispatching_Policy (Priority_Specific);
pragma Priority_Policy (EDF_Within_Priorities,
                        first_priority, last_priority);


3. Rules for EDF_Within_Priorities that give the preemption level
dispatching (viz. our last 300 emails).


4. A pragma to set the deadline of a task for its activation:

pragma Deadline();

Although you feel this would be perhaps too much of a change
from the compiler's point of view.

--

We do not give direct support for any other form of urgency, but
the support for EDF will show how it can be done for any other
X_Within_Priorities.

If you agree, I'll get back to the AI and attempt to write it up.

****************************************************************

From: Tucker Taft
Sent: Tuesday, November 11, 2003 10:53 AM

Sounds good to me!

****************************************************************

From: Alan Burns
Sent: Wednesday, November 19, 2003  5:12 AM

I've discussed the EDF AI with people at York and it became
clear that the model, as we had it, was difficult to follow.

To recap: when a task T becomes ready, it is added to the appropriate
ready queue as follows

a) find the right level, S say
b) add T to S+1
c) order all queues by EDF, unless T has been preempted while
executing within a PO, in which case keep this task at head of Q

S is defined in terms of locked POs (the highest locked PO whose
ceiling is less than the priority of the new task and whose locking
task has a longer deadline).


In discussions the following variation was seen to be much more
intuitive. Basically tasks are put in queue S (hence always in
front of the task that locked the PO). This can be expressed
as follows:

a) all queues are EDF ordered
b) add T to the highest non-empty ready queue R such that
    T'Deadline < Deadline of task at tail of R, and
    T'Base > Priority level of R.
   if no such R then add T to L (lowest level)

This requires that there be no PO ceilings at level L, as I had before.
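
A rough sketch of the placement test in (b) follows. It is
illustrative only and parallels the earlier Rule 2 sketch, with the
test now made against the tail of each non-empty ready queue rather
than against the lock-holding tasks, and with deadlines abstracted
to integers (smaller value = earlier deadline).

procedure Placement_Sketch is

   subtype Deadline is Integer;   --  smaller value = earlier deadline

   No_Task : constant Deadline := Deadline'Last;   --  marks an empty queue

   --  Tail (P) is the deadline of the task at the tail of the ready
   --  queue at priority P, or No_Task if that queue is empty; the
   --  index range is the priority band L .. H.
   type Queue_Tails is array (Positive range <>) of Deadline;

   --  Returns the priority level of the queue on which T is placed.
   function Placement
     (T_Deadline : Deadline;      --  T'Deadline
      T_Base     : Integer;       --  T'Base
      Tail       : Queue_Tails) return Integer is
   begin
      for P in reverse Tail'Range loop   --  highest level first
         if Tail (P) /= No_Task
           and then T_Deadline < Tail (P)
           and then T_Base > P
         then
            return P;
         end if;
      end loop;
      return Tail'First;          --  no such R: L, the lowest level
   end Placement;

begin
   null;
end Placement_Sketch;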

This model was preferred by people seeing the models cold
as:
1) all queues are pure EDF with no special rule about the
task which has locked a PO
2) no +1 needed
3) only ready queues need to be mentioned (not locked POs)
4) the 'no POs at level L' rule was OK, as the implementation is allowed
to round up ceilings (ie from L to L+1)

Hence I'm going to write up the AI with this model but mention
the other model in the discussion section.

****************************************************************

From: Tucker Taft
Sent: Wednesday, November 19, 2003  7:05 AM

Ok.

****************************************************************

From: Robert Dewar
Sent: Monday, November 24, 2003  6:55 AM

what concerns me about this series of proposals for language additions from
Alan is that they have the feeling of "let's design some nice new features",
rather than "customer xxx who is using Ada is finding that it is a big problem
that Ada cannot support yyy". To me, this latter kind of input is always the
most critical kind of input for justification of any change.

It's not that any specific proposal lacks technical merit, just that in
aggregate the changes seem too large, and not well enough justified. I am
not convinced that any vendors will see value in implementing these features.

Alan, and others ... a better case needs to be made that these features are
critical for the future use and success of Ada.

****************************************************************

From: Tucker Taft
Sent: Monday, November 24, 2003  7:35 AM

I believe all of these grew out of the Int'l Real-Time Ada Workshop.
My sense is that many of the attendees are "real" users of Ada,
or consult to same.  Alan will have to elaborate.  In any case,
having more convincing rationale for need would seem useful, to
help WG9 with their consideration as well.

****************************************************************

From: Tullio Vardanega
Sent: Monday, November 24, 2003  7:53 AM

Yes, the whole range of real-time AIs has been conceived at
and pushed for by the IRTAW, prior to, during and subsequent
to the workshop itself.
At this year's workshop in particular we had a number of
working sessions devoted to smithering out AIs that we felt
had the support of the community that the IRTAW represents.
As usual, we strove to have as many stakeholders represented at
the workshop as possible: we had developers, educators, vendors
and end-users (such as the European Space Agency).
It was indeed appreciated that the IRTAW (via Alan) would
have to strongly justify their demand, which -- just seen from
the stream of AIs that Alan just posted -- seems to exact a major
toll from language implementors.
This argument was discussed at length (as it will appear from the
IRTAW-12 proceedings that will be in the December issue of
the Ada Letters) and should certainly be shared with the ARG.
However, my view as a long-time IRTAW-er and as the program
chair for this year is that the proposed AIs have the same pedigree
as the Ravenscar Profile, which is, to date, the best known product
of the IRTAW series and a very solid and welcome "approved AI".

****************************************************************

From: Robert Dewar
Sent: Monday, November 24, 2003  8:13 AM

Right, I understand this, but it is still very important to differentiate
between nice-to-have, and missing-essential-functionality (some things are
of course in between). If you gather a bunch of language/domain experts
at a table, and ask them "what new neat features would you like to see?"
you will get lots of nice new neat features :-)

****************************************************************

From: Robert Dewar
Sent: Monday, November 24, 2003  8:17 AM

> which -- just seen from
> the stream of AIs that Alan just posted -- seems to exact a major
> toll from language implementors.

Actually I don't think any implementors will move much in the direction
of implementing any of these extensions unless they get input from their
prime customers asking for this functionality.

Ravenscar has finally found that kind of input (in the intended safety
critical niche), and so vendors have indeed started to implement this
capability, and several products are available.

I am unconvinced that we will see the same input for many of the new
proposed features.

If my skepticism results in considerably more evidence that I am wrong,
that's certainly a constructive and productive result!

****************************************************************

From: Alan Burns
Sent: Monday, November 24, 2003  10:26 AM

Re the comments and concerns raised by Robert.

First let me say, following Tullio, that the IRTAW workshop
series has considered a number of proposals/issues but has
only passed on to the ARG (via myself) those that it feels
are really important and timely. There are many other
potential AIs!

Consider the ones that I put out new versions of today
(obviously I should not have put them all out at the same
time!). They fall into a small number of groups:

1. Budget control. Understanding the amount of CPU resource a
task is taking is fundamental to the safe execution of any real-time
system. OSs in the high integrity area all monitor (in some
form or other) CPU usage as an error recognition facility.
Also the use of servers to control allocation is increasingly
common (for example on at least one flying avionic system from
Honeywell). As such applications move away from building their
own run-times and OSs to the use of standard technology, languages
and OS standards need to support these approaches. POSIX does
to some extent and much is made in the Real-Time Java extension
of such support. AI307 and AI354 give optional but standard
means of giving required budget control.  I can point to
industrial users that would use Ravenscar but need budget
control as well.

2. Extending the Dispatching facilities (AI355 and AI357).
Round Robin dispatching and EDF are not 'nice to haves'.
They are techniques in real use. RR is very common and fits
well with current FIFO. EDF is as well known as Fixed Pri,
and is certainly the preferred scheduling mechanism for soft
real-time as you get much more from the CPU (perhaps 40%
more before deadlines are missed). Look at the literature
on embedded systems and what they want from scheduling.
Again real-time Java claims to give support here. Indeed it
goes much further with support for value-based scheduling,
admission control etc. It would be great to have these in
Ada, but they are beyond the state of practice (and we believe
they can be programmed with the facilities we now have/will have).

3. Dynamic ceilings (AI 327). This is not as important but it
finishes off the language, in the sense that dynamic priorities
for tasks but static priorities for ceilings was always unfortunate.
This AI gives an easy (optional) means of completing the
language in this area.

4. Support for recognition of task termination (AI10266).
I picked this up as it grew out of the Ravenscar AI. It is
not a high priority; but other groups have asked for this
(eg the workshop on exception handling a couple of years
ago).

The AIs in groups 1 and 2 are, I believe, timely for real applications
but also show that Ada is moving forward and remains relevant
to the real-time area.

****************************************************************

From: Randy Brukardt
Sent: Monday, November 24, 2003  7:30 PM

Alan said:

...
> Consider the ones that I put out new versions of today
> (obviously I should not have put them all out at the same
> time!).

Actually, I think that was a good thing, because it shows just how many of
these proposals there are. And keep in mind that there also are a number of
proposals already completed (Ravenscar, timers, etc.). I have a hard time
believing that there could be so many important pieces of missing
functionality.

> OSs in the high integrity area all monitor (in some
> form or other) CPU usage as an error recognition facility.
...

CPU usage is important to improving the performance of any Ada program, not
just real time ones. (If you can't measure it, you can't improve it!). It's
unfortunate that the package has evolved such that budgeting is given
priority over measurement. Clearly, both have value (and proposals having
value for multiple purposes should be preferred over those having value for
only one purpose).

> 4. Support for recognition of task termination (AI10266).
> I picked this up as it grew out of the Ravenscar AI. It is
> not a high priority; but other groups have asked for this
> (eg the workshop on exception handling a couple of years ago).

It's interesting that the capability that seems most important from a High
Integrity perspective (insuring that parts of the system don't become
inoperative without notice) is considered to have the lowest priority.
Certainly, this is the problem that I run into in practice (tasks going away
silently when the task's others exception handler either is absent, has a
bug, or exhausts the same resource).

In any case, I tend to share Robert's skepticism about the demand for some
of these proposals. I, like him, hope to be proven wrong.

****************************************************************

From: Alan Burns
Sent: Tuesday, November 25, 2003  2:53 AM

>...
>
>> OSs in the high integrity area all monitor (in some
>> form or other) CPU usage as an error recognition facility.
> ...
>
> CPU usage is important to improving the performance of any Ada program, not
> just real time ones. (If you can't measure it, you can't improve it!). It's
> unfortunate that the package has evolved such that budgeting is given
> priority over measurement. Clearly, both have value (and proposals having
> value for multiple purposes should be preferred over those having value for
> only one purpose).

The execution time clocks AI is all about measurement; indeed,
one of its main uses will be to measure task execution times.
It has this multiple purpose (measurement alone or measurement
and budgeting).
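
For the budgeting half, the same clock can drive an overrun handler. The
following is only a sketch, assuming the execution-time timers child package
takes roughly the shape proposed (a Timer discriminated by an access to the
task's Task_Id, Set_Handler taking a Time_Span budget and an access to a
protected handler, and a Min_Handler_Ceiling constant); the task, protected
object and subprogram names are invented for illustration.

   with Ada.Execution_Time.Timers;
   with Ada.Real_Time;
   with Ada.Task_Identification;
   with Ada.Text_IO;
   with System;

   procedure Budget_Demo is

      use type Ada.Real_Time.Time;

      task Worker;                       --  the task whose budget is policed

      task body Worker is
         Stop : constant Ada.Real_Time.Time :=
           Ada.Real_Time.Clock + Ada.Real_Time.Milliseconds (500);
         X    : Long_Float := 0.0;
      begin
         while Ada.Real_Time.Clock < Stop loop
            X := X + 1.0;                --  CPU-bound work
         end loop;
      end Worker;

      Worker_Id : aliased constant Ada.Task_Identification.Task_Id :=
        Worker'Identity;

      --  Handler called by the run-time if Worker exceeds its CPU budget
      protected Monitor is
         pragma Interrupt_Priority (System.Interrupt_Priority'Last);
         --  ceiling must be at least Timers.Min_Handler_Ceiling
         procedure Overrun (TM : in out Ada.Execution_Time.Timers.Timer);
         function Overran return Boolean;
      private
         Flag : Boolean := False;
      end Monitor;

      protected body Monitor is
         procedure Overrun (TM : in out Ada.Execution_Time.Timers.Timer) is
         begin
            Flag := True;                --  budget exhausted: record the error
         end Overrun;
         function Overran return Boolean is
         begin
            return Flag;
         end Overran;
      end Monitor;

      Budget : Ada.Execution_Time.Timers.Timer (Worker_Id'Access);

   begin
      --  Arm a 100 ms execution-time budget for Worker
      Ada.Execution_Time.Timers.Set_Handler
        (Budget, Ada.Real_Time.Milliseconds (100), Monitor.Overrun'Access);
      delay 1.0;
      Ada.Text_IO.Put_Line
        ("Overrun detected: " & Boolean'Image (Monitor.Overran));
   end Budget_Demo;

The timer-based form avoids having the application poll the clock itself; the
handler runs only if the budget is actually exhausted.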

>> 4. Support for recognition of task termination (AI 266).
>> I picked this up as it came out of the Ravenscar AI. It is
>> not a high priority, but other groups have asked for this
>> (eg the workshop on exception handling a couple of years ago).
>
> It's interesting that the capability that seems most important from a High
> Integrity perspective (ensuring that parts of the system don't become
> inoperative without notice) is considered to have the lowest priority.
> Certainly, this is the problem that I run into in practice (tasks going away
> silently when the task's others exception handler either is absent, has a
> bug, or exhausts the same resource).

Remember that a very simple scheme was deemed adequate for Ravenscar,
and there are other approaches available in full Ada. My point was that an
AI written by Alan does not mean it's an AI from IRTAW.

> In any case, I tend to share Robert's skepticism about the demand for some
> of these proposals. I, like him, hope to be proven wrong.

I will not repeat the points I made in a previous email. But note that the
most recent attempt to define/design language features for a real-time
language (real-time Java) attempts to include all these features and
more. I say 'attempt' because I do not think the features are as well
thought out as these Ada proposals.

Also, most of these features are low-overhead additions (eg extra run-time
packages) that, as Robert says, do not need to be supported until
customers need them. But if they are implemented, the RM defines a standard
way of doing so. The fact that Ada supports them makes a strong statement
about the language. Control over execution time and the introduction of
the notion of 'deadline' are not, I believe, excessive.

****************************************************************

From: Robert Dewar
Sent: Thursday, September  9, 2004  2:42 PM

My inclination would be to make this an optional feature. I don't
see a good argument for requiring vendors to implement this package.
If their customers need it, then they will ask for it. I prefer
development of the language in areas like this to be
customer/user driven.

****************************************************************

From: Randy Brukardt
Sent: Thursday, September  9, 2004  2:59 PM

My understanding was that this was an optional part of the Real-Time Annex
(which is of course optional itself). I certainly agree that this shouldn't
be a required part of the Real-Time Annex; the wording ought to reflect
that.

****************************************************************

From: Robert Dewar
Sent: Thursday, September  9, 2004  3:17 PM

OK, I quickly scanned and perhaps I missed the optional statement,
so if it is there, I am not so concerned.

****************************************************************

From: Randy Brukardt
Sent: Thursday, September  9, 2004  5:29 PM

I quickly scanned it too, and didn't find the statement, either, so I think
it is really missing. It should be added, IMHO.

****************************************************************

From: Pascal Leroy
Sent: Thursday, September  9, 2004  3:01 PM

Well, this feature is part of the real-time annex, so it's optional.  I
agree that this is a part of the annex that vendors probably won't
implement unless there is proven customer demand.

Surely you are not proposing a new notion of optionality, distinct from
the SNAs?

****************************************************************

From: Robert Dewar
Sent: Thursday, September  9, 2004  3:21 PM

Absolutely I am. I object to this unit if it is a required
part of the real-time annex. Furthermore, this is not a new
notion at all:

Implementation Permissions

10   An implementation need not support Asynchronous_Task_Control if it is
infeasible to support it in the target environment.

I would be happy to have a parallel statement for this new proposed
package. Of course in practice it is the vendor who decides whether
or not support is feasible, and that's just fine.

What this means, for instance, is that the corresponding ACATS tests
are optional, even if you intend to support the Annex.

****************************************************************

From: Pascal Leroy
Sent: Thursday, September  9, 2004  7:33 PM

Well, this rule is bogus anyway because the "infeasible" case is already
covered by the blanket statement about "impossible or impractical"
features in 1.1.3(6).

I disagree with the notion of adding optional features to the SNAs,
because then "supporting the real-time annex" become a meaningless
statement, or at least one that must be qualified by all sorts of
additional information about the optional features.

If we believe that AI 357 should be optional (and I am not necessarily
disagreeing with that) we should create a new annex about advanced
scheduling capabilities and put it there (together with the
priority-specific stuff and round-robin, I guess).

****************************************************************

From: Robert Dewar
Sent: Friday, September 10, 2004  3:09 AM

> Well, this rule is bogus anyway because the "infeasible" case is already
> covered by the blanket statement about "impossible or impractical"
> features in 1.1.3(6).

I disagree. The point of adding that explicit implementation
permission was precisely to signal that this particular feature
is one that may well not be practical to implement. The idea
behind 1.1.3(6) [AI 325 :-)] was to cover unusual cases. Something
is wrong if we find that particular features are routinely left
unimplemented by appeal to 1.1.3(6).

> I disagree with the notion of adding optional features to the SNAs,
> because then "supporting the real-time annex" becomes a meaningless
> statement, or at least one that must be qualified by all sorts of
> additional information about the optional features.

That's rhetorical overkill here. We are talking about a small
number of features (it appears that some people were not even
aware of the existing practice since its scope is small :-)

> If we believe that AI 357 should be optional (and I am not necessarily
> disagreeing with that) we should create a new annex about advanced
> scheduling capabilities and put it there (together with the
> priority-specific stuff and round-robin, I guess).

Seems overkill to me, but given these alternatives:

1. Junk the feature completely

2. Keep it as some kind of secondary standard, like the old
Ravenscar

3. Keep it in a separate annex

4. Put it in Annex D optional

5. Put it in Annex D non-optional

My preference would be for 2. I would put the new
priority-specific stuff and the round-robin stuff there too.
Why? Because this stuff has not been implemented or used,
and I think it would be better to have some experience
before it goes into the standard. The original Ravenscar
document would have been premature as an addition to the
standard. Instead it came out as a secondary standard
and we gained valuable implementation and usage experience.

I can also live with 1 or 4.

I dislike 3 (overkill, and I think you will get an annex
that will be in danger of being ignored).

I entirely reject 5.

****************************************************************

From: Alan Burns
Sent: Friday, September 10, 2004  2:37 AM

This general permission is in the current wording and is retained in the new
wording. I believe this makes it clear that implementations need only support
the new policies if they wish to, ie when a customer asks.


Implementation Permissions

Implementations are allowed to define other task dispatching policies, but
need not support more than one task dispatching policy per partition.

****************************************************************

From: Robert Dewar
Sent: Friday, September 10, 2004  3:11 AM

> I believe this makes it clear that implementations need only support the new
> policies if they wish to, ie when a customer asks.
                            ^^
You surely mean eg. You don't want to tell implementors to implement
this *only* if customers ask!

P.S. This confusion of ie for eg is so common that when we wrote
our book on microprocessors, our copy editor said "you are the
only authors I have worked with who use ie correctly. However,
so many readers in the US mistake it for 'for example' that
I recommend eliminating all uses ..." - which we did :-)

****************************************************************

From: Alan Burns
Sent: Friday, September 10, 2004  3:19 AM

I meant ie just to overstate the case - of course, eg really.

Does the permission wording eliminate your concerns?

****************************************************************

From: Robert Dewar
Sent: Friday, September 10, 2004  4:53 AM

Probably so; I have to read the entire statement in context.

****************************************************************

From: Randy Brukardt
Sent: Friday, September 10, 2004  1:15 PM

> Does the permission wording eliminate your concerns?

No. The permission allows a vendor to insist that an entire partition use the
same dispatching policy. One could argue that that permission means that an
implementation doesn't have to support Priority_Specific_Dispatching, but I
think that is stretching a rule to the breaking point. If we don't want to
require such support, we should say so, in clear English, not hide it in some
unrelated (and existing) permission. After all, we want users to know that such
a feature is optional.

And, in any case, I don't see how it applies to a partition that is completely
Round_Robin or EDF. Certainly, the permission does not say that an
implementation doesn't have to implement language-defined policies, only that
no more than one such policy need be supported per partition. How could an
implementation justify rejecting "pragma Task_Dispatching_Policy
(Round_Robin);"? It certainly doesn't do anything that would trigger the
permission.

I know that you've said that these features are optional. In that case, there
should be a clear statement to that effect in the wording, as there is with
D.11(10). We do not want to confuse either the users or the reviewers of the
Amendment.
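
To make the alternatives concrete, the policies under discussion would be
selected by configuration pragmas along these lines (a sketch only;
EDF_Across_Priorities is the identifier proposed in this AI,
Round_Robin_Within_Priorities is the identifier used by the related
round-robin proposal and is abbreviated to Round_Robin above, and the
priority band 1 .. 10 is purely illustrative):

   --  Partition-wide policies; at most one Task_Dispatching_Policy
   --  pragma applies to a given partition, so these are alternatives:

   pragma Task_Dispatching_Policy (EDF_Across_Priorities);
   pragma Task_Dispatching_Policy (Round_Robin_Within_Priorities);

   --  Mixed dispatching within one partition, as proposed by the
   --  priority-specific-dispatching AI: EDF for the band 1 .. 10 only.

   pragma Priority_Specific_Dispatching (EDF_Across_Priorities, 1, 10);

The question raised above is whether an implementation that claims Annex D
must accept all of these, or whether a D.11(10)-style permission should make
some of them explicitly optional.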

****************************************************************

From: Tucker Taft
Sent: Friday, September 10, 2004  4:34 PM

For what it is worth, I agree with Randy.

****************************************************************


Questions? Ask the ACAA Technical Agent