!standard D.02.6 (01)          05-12-02 AI95-00357/13
!standard D.01 (20)
!standard D.11 (04)
!standard D.11 (05)
!standard D.11 (06)
!class amendment 03-09-27
!status Amendment 200Y 04-12-02
!status WG9 Approved 06-06-09
!status ARG Approved 10-0-0 04-11-20
!status work item 03-09-27
!status received 03-09-27
!priority Medium
!difficulty Medium
!subject Support for Deadlines and Earliest Deadline First Scheduling
!summary
Direct support for deadlines is defined and a new dispatching policy for Earliest Deadline First (EDF) Scheduling is supported.
!problem
Ada does not give any direct support for deadlines (although they are the most important notion in real-time systems). A standard way of representing deadlines via a predefined attribute of a task is needed.
Once deadlines are supported it is possible to define a dispatching policy for EDF scheduling, which is as common an approach as fixed priority scheduling for real-time systems. It has the advantage that higher levels of resource utilization are possible.
For many scheduling schemes, including for example EDF, the most effective locking policy for protected objects is one known as the Stack Resource Policy (or preemption level locking). This was defined by Baker in 1991 [1] and has the advantage that it does not enforce the same policy on task dispatching and PO locking; but it does lead to effective implementation of POs. Moreover, when priority dispatching is used, the SRP policy is the same as ceiling locking.
!proposal
To give support for EDF scheduling with the necessary protocol for sharing protected objects requires the following:
1. a means of representing a task's (absolute) deadline
2. a means of assigning a preemption level to a task
3. a means of assigning a preemption level to a PO
The basic rule for preemption is that a new task T will preempt the currently running task S if:
1) Deadline of T is before that of S
2) Preemption level of T is higher than the preemption level of any locked PO
If preemption levels are assigned (by the programmer) using relative deadlines for each task then a tight upper bound on blocking equivalent to that of priority ceiling inheritance is obtained.
In this proposal
a) deadlines are represented by a new task 'attribute'
b) preemption levels for tasks are represented by base priorities
c) preemption levels for POs are represented by ceiling priorities
The usual behaviour of priorities, when tasks execute within a PO, is followed. But the active priority of a task when executing outside a PO may be lower than its base priority (see details below).
A new pragma is provided to allow a task to set a (non-default) relative deadline to control its activation:
pragma Relative_Deadline (expression);

where the expected type of expression is Real_Time.Time_Span.
The pragma can only occur in the specification part of a task.
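As a purely illustrative sketch (not part of the proposal; the task name, the priority value 5 and the 100 ms figure are invented, and Ada.Real_Time is assumed to be visible at the point of declaration), a task with preemption level 5 and a relative deadline of 100 milliseconds might be declared as:

   task Pressure_Monitor is
      pragma Priority (5);
      --  base priority, interpreted as the preemption level under EDF
      pragma Relative_Deadline (Ada.Real_Time.Milliseconds (100));
      --  initial absolute deadline is Real_Time.Clock + 100 ms, with
      --  Clock read between task creation and the start of activation
   end Pressure_Monitor;

Following the guideline above, a set of such tasks would typically be given base priorities in inverse order of their relative deadlines (shortest relative deadline, highest preemption level).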
To support EDF scheduling a new Priority_Policy identifier is defined: EDF_Across_Priorities.
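For illustration only (this is not proposal text; the priority bounds 1 and 10 are arbitrary, and the argument form of the second pragma assumes the Priority_Specific_Dispatching pragma of AI-355 with a (policy, first priority, last priority) profile), a partition might select the policy with configuration pragmas such as:

   --  EDF dispatching for the whole partition, plus the required locking policy:
   pragma Task_Dispatching_Policy (EDF_Across_Priorities);
   pragma Locking_Policy (Ceiling_Locking);

   --  or EDF for just one band of priorities (assumes AI-355):
   pragma Priority_Specific_Dispatching (EDF_Across_Priorities, 1, 10);
   pragma Locking_Policy (Ceiling_Locking);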
When the Priority policy EDF_Across_Priorities is in effect the following rules apply. Let Base(T) be the base priority of task T and Deadline(T) its absolute deadline.
Rule 1 All ready queues in the specified range of System.Any_Priority are ordered by deadline. For two tasks on the same ready queue, S and T: Deadline(S) < Deadline(T) implies S is closer to the head of the queue than T.
Rule 2 When a task T becomes unblocked it is placed on the highest priority ready queue R such that: a protected object with ceiling priority R is currently locked, Deadline(T) < Deadline of the task locking this protected object, and Base(T) > Priority level of R.
If no such R exists then T is added to the ready queue for Any_Priority'first.
Rule 3 When a task is chosen for execution it runs with the active priority determined by the ready queue from which the task was taken. If preempted it returns to the ready queue for its active priority. If it inherits a higher active priority it will return to its original active priority when it no longer inherits the higher level.
Rule 4 When a task executes a delay_statement that does not result in blocking it is added to the ready queue for its active priority.
Rule 5 Priority ceiling level Priority'first is not allowed.
!wording
D.2.6 Earliest Deadline First Dispatching
The deadline of a task is an indication of the urgency of the task; it represents a point on an ideal physical time line. The deadline might affect how resources are allocated to the task.
This clause defines a package for representing the deadline of a task and a dispatching policy that defines Earliest Deadline First (EDF) dispatching. A pragma is defined to assign an initial deadline to a task.
AARM Note: This pragma is the only way of assigning an initial deadline to a task so that its activation can be controlled by EDF scheduling. This is similar to the way pragma Priority is used to give an initial priority to a task.
Syntax
The form of a pragma Relative_Deadline is as follows:
pragma Relative_Deadline (relative_deadline_expression);
Name Resolution Rules
The expected type for relative_deadline_expression is Real_Time.Time_Span.
Legality Rules
A Relative_Deadline pragma is allowed only immediately within a task_definition or the declarative_part of a subprogram_body. At most one such pragma shall appear within a given construct.
Static Semantics
The policy_identifier EDF_Across_Priorities is a task dispatching policy.
The following language-defined library package exists:
with Ada.Real_Time;
with Ada.Task_Identification;
package Ada.Dispatching.EDF is
   subtype Deadline is Ada.Real_Time.Time;
   Default_Deadline : constant Deadline := Ada.Real_Time.Time_Last;
   procedure Set_Deadline
     (D : in Deadline;
      T : in Ada.Task_Identification.Task_Id :=
            Ada.Task_Identification.Current_Task);
   procedure Delay_Until_And_Set_Deadline
     (Delay_Until_Time : in Ada.Real_Time.Time;
      Deadline_Offset  : in Ada.Real_Time.Time_Span);
   function Get_Deadline
     (T : Ada.Task_Identification.Task_Id :=
            Ada.Task_Identification.Current_Task) return Deadline;
end Ada.Dispatching.EDF;
Post-Compilation Rules
If the EDF_Across_Priorities policy is specified for a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.
If the EDF_Across_Priorities policy appears in a Priority_Specific_Dispatching pragma (see D.2.2) in a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.
[Note: The above assumes AI-355 is included in the Amendment.]
AARM Reason: The priority model of EDF assumes that ceiling locking is used; it wouldn't make sense otherwise. Other scheduling policies are not so tightly bound to the locking policy - they still can make sense for an alternative policy.
Dynamic Semantics
A Relative_Deadline pragma has no effect if it occurs in the declarative_part of the subprogram_body of a subprogram other than the main subprogram.
The initial absolute deadline of a task containing pragma Relative_Deadline is the value of Real_Time.Clock + relative_deadline_expression, where the call of Real_Time.Clock is made between task creation and the start of its activation. If there is no Relative_Deadline pragma then the initial absolute deadline of a task is the value of Default_Deadline. The environment task is also given an initial deadline by this rule.
The procedure Set_Deadline changes the absolute deadline of the task to D. The function Get_Deadline returns the absolute deadline of the task.
The procedure Delay_Until_And_Set_Deadline delays the calling task until time Delay_Until_Time. When the task becomes runnable again it will have deadline Delay_Until_Time + Deadline_Offset.
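As an illustrative aside (not proposed wording; the task Worker and the one-second extension are invented, and the packages Ada.Real_Time, Ada.Task_Identification and Ada.Dispatching.EDF are assumed to be visible), a supervisory task could postpone another task's deadline as follows:

   declare
      use type Ada.Real_Time.Time;
      Worker_Id : constant Ada.Task_Identification.Task_Id := Worker'Identity;
      Current   : constant Ada.Dispatching.EDF.Deadline :=
        Ada.Dispatching.EDF.Get_Deadline (Worker_Id);
   begin
      --  Postpone Worker's absolute deadline by one second.
      Ada.Dispatching.EDF.Set_Deadline
        (Current + Ada.Real_Time.Milliseconds (1_000), Worker_Id);
   end;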
On a system with a single processor, the setting of the deadline of a task to the new value occurs immediately at the first point that is outside the execution of a protected action. If the task is currently on a ready queue it is removed and re-entered on to the ready queue determined by the rules defined below.
When EDF_Across_Priorities is specified for priority range Low..High all ready queues in this range are ordered by deadline. The task at the head of a queue is the one with the earliest deadline.
A task dispatching point occurs for the currently running task T to which policy EDF_Across_Priorities applies:
o when a change to the deadline of T occurs;
o there is a task on the ready queue for the active priority of T with
a deadline earlier than the deadline of T; or
o there is a non-empty ready queue for that processor
with a higher priority than the active priority of the running task.
In these cases, the currently running task is said to be preempted and is returned to the ready queue for its active priority.
For a task T to which policy EDF_Across_Priorities applies, the base priority is not a source of priority inheritance; the active priority when first activated or while it is blocked is defined as the maximum of the following:
o the lowest priority in the range specified as EDF_Across_Priorities
that includes the base priority of T;
o the priorities, if any, currently inherited by T;
o the highest priority P, if any, less than the base priority of T
such that one or more tasks are executing within a protected object with ceiling priority P and task T has an earlier deadline than all such tasks.
AARM Ramification: The active priority of T might be lower than its base priority.
When a task T is first activated or becomes unblocked, it is added to the ready queue corresponding to this active priority. Until it becomes blocked again, the active priority of T remains no less than this value; it will exceed this value only while it is inheriting a higher priority.
AARM Discussion: These rules ensure that a task executing in a protected object is preempted only by a task with a shorter deadline and a higher base priority. This matches the traditional preemption level description without the need to define a new kind of protected object locking.
When the setting of the base priority of a ready task takes effect and the new priority is in a range specified as EDF_Across_Priorities, the task is added to the ready queue corresponding to its new active priority, as determined above.
For all the operations defined in Dispatching.EDF, Tasking_Error is raised if the task identified by T has terminated. Program_Error is raised if the value of T is Null_Task_Id.
Bounded (Run-Time) Error
If EDF_Across_Priorities is specified for priority range Low..High, it is a bounded error to declare a protected object with ceiling priority Low or to assign the value Low to attribute 'Priority. In either case either Program_Error is raised or the ceiling of the protected object is assigned the value Low+1.
Erroneous Execution
If a value of Task_Id is passed as a parameter to any of the subprograms of this package and the corresponding task object no longer exists, the execution of the program is erroneous.
Documentation Requirements
On a multiprocessor, the implementation shall document any conditions that cause the completion of the setting of the deadline of a task to be delayed later than what is specified for a single processor.
Notes
If two adjacent priority ranges, A..B and B+1..C are specified to have policy EDF_Across_Priorities then this is not equivalent to this policy being specified for the single range, A..C.
The above rules implement the preemption-level protocol (also called Stack Resource Policy protocol) for resource sharing under EDF dispatching. The preemption-level for a task is denoted by its base priority. The definition of a ceiling preemption-level for a protected object follows the existing rules for ceiling locking.
AARM Note: An implementation may support additional dispatching policies by replacing absolute deadline with an alternative measure of urgency.
Replace D.1(20) by:
At any time, the active priority of a task is the maximum of all the priorities the task is inheriting at that instant. For a task that is not held (see D.11), its base priority is a source of priority inheritance unless otherwise specified for a particular task dispatching policy. Other sources of priority inheritance are specified under the following conditions:
Add after D.11(4):
For any priority below System.Any_Priority'First, the task dispatching policy is FIFO_Within_Priorities.
AARM to be honest: This applies even if a Task_Dispatching_Policy specifies the policy for all of the priorities of the partition.
AARM ramification: A task at the held priority never runs, so it is not necessary to implement FIFO_Within_Priorities for systems that have only one policy (such as EDF_Across_Priorities).
Replace D.11(5) by:
The Hold operation sets the state of T to held. For a held task, the active priority is reevaluated as if the base priority of the task were the held priority.
Replace D.11(6) by:
The Continue operation resets the state of T to not-held; its active priority is then reevaluated as determined by the task dispatching policy associated with its base priority.
!discussion
The addition of procedure Delay_Until_And_Set_Deadline reflects a common need to change the deadline and then delay. For example, a periodic task would do this at the end of each cycle. As the change of deadline (extending it) is likely to cause a context switch (with a later switch to put the task on the delay queue), it is more efficient for the run-time to do the delay and deadline change as a single operation.
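A sketch of this usage pattern (illustrative only; the task name, the 10 ms period and the choice of a relative deadline equal to the period are assumptions, the corresponding task declaration with pragmas Priority and Relative_Deadline is omitted, and Ada.Real_Time and Ada.Dispatching.EDF are assumed to be visible):

   task body Periodic_Worker is
      use type Ada.Real_Time.Time;
      Period       : constant Ada.Real_Time.Time_Span :=
        Ada.Real_Time.Milliseconds (10);
      Next_Release : Ada.Real_Time.Time := Ada.Real_Time.Clock;
   begin
      loop
         --  ... the work of one cycle ...
         Next_Release := Next_Release + Period;
         --  Sleep until the next release and, as a single operation,
         --  set the deadline for the new cycle (here deadline = period).
         Ada.Dispatching.EDF.Delay_Until_And_Set_Deadline
           (Delay_Until_Time => Next_Release,
            Deadline_Offset  => Period);
      end loop;
   end Periodic_Worker;

The call behaves like a delay until Next_Release followed, on wake-up, by Set_Deadline (Next_Release + Period), but without the extra context switch discussed above.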
Here we show that the above rules do give the required behaviour for EDF scheduling and preemption level locking (without the need for a new locking policy, i.e. just use of ceiling locking).
First, if no POs are used or locked, then all tasks are always placed on the ready queue for level Priority'first (which is of course ordered by deadline).
The preemption level rule is that a newly executing task (T) should preempt the currently running task (S) if it has a higher preemption level than any task that has locked a PO and a shorter deadline than S. There are a few cases to look at:
If S has locked the PO then T will be placed on the same ready queue as S; it has a shorter deadline and hence will execute first.
If some other task has locked the PO then S must have preempted it and hence S will again be on the ready queue for this level. T is added to the same queue and, having a shorter deadline, will preempt S.
If there are a number of locked POs then T needs to be placed on the correct ready queue. On all ready queues, apart from that at Priority'first, the tail of the queue is the task that has the lock on the PO whose ceiling priority is that of the ready queue. Rule 2 then gets the right queue: the queue is appropriate as T has the correct attributes (shorter deadline than the task holding the lock, and higher preemption level). Moreover no higher priority queue is appropriate as one of the conditions will be false.
Note that in order to be placed on a ready queue, T must have a strictly higher preemption level (base priority) than the task on that queue that is holding the lock. Hence mutual exclusion is not broken by the preemption rules.
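To make the protected object side concrete (illustrative only; the object name, the operation and the ceiling value 5 are invented): a PO shared by tasks whose preemption levels (base priorities) do not exceed 5 is simply given ceiling priority 5 in the usual way, with no new syntax required.

   protected Shared_Buffer is
      pragma Priority (5);  -- ceiling priority, i.e. ceiling preemption level
      procedure Put (X : in Integer);
   private
      Value : Integer := 0;
   end Shared_Buffer;

   protected body Shared_Buffer is
      procedure Put (X : in Integer) is
      begin
         Value := X;
      end Put;
   end Shared_Buffer;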
--
During discussions an alternative way of expressing this model was considered. This did not need the 'no POs at level Priority'first' rule. But it did require that tasks be placed one level above the level used in the above model, and required the rule that a task preempted while executing in a PO must always stay at the front of its ready queue even if its deadline is later than the next task on the queue.
--
We have to fix the rules for held and idle tasks, because the base priority does not participate in the determination of the active priority for EDF. The solution is to define that idle and held tasks always use FIFO dispatching, even in an all-EDF system. This doesn't cause any implementation overhead, as such tasks never actually run, and thus no extra support is needed in the runtime.
!corrigendum D.1(20)
Replace the paragraph:
At any time, the active priority of a task is the maximum of all the priorities the task is inheriting at that instant. For a task that is not held (see D.11), its base priority is always a source of priority inheritance. Other sources of priority inheritance are specified under the following conditions:
by:
At any time, the active priority of a task is the maximum of all the priorities the task is inheriting at that instant. For a task that is not held (see D.11), its base priority is a source of priority inheritance unless otherwise specified for a particular task dispatching policy. Other sources of priority inheritance are specified under the following conditions:
!corrigendum D.2.6(1)
Insert new clause:
The deadline of a task is an indication of the urgency of the task; it represents a point on an ideal physical time line. The deadline might affect how resources are allocated to the task.
This clause defines a package for representing the deadline of a task and a dispatching policy that defines Earliest Deadline First (EDF) dispatching. A pragma is defined to assign an initial deadline to a task.
Syntax
The form of a pragma Relative_Deadline is as follows:
pragma Relative_Deadline (relative_deadline_expression);
Name Resolution Rules
The expected type for relative_deadline_expression is Real_Time.Time_Span.
Legality Rules
A Relative_Deadline pragma is allowed only immediately within a task_definition or the declarative_part of a subprogram_body. At most one such pragma shall appear within a given construct.
Static Semantics
The policy_identifier EDF_Across_Priorities is a task dispatching policy.
The following language-defined library package exists:
with Ada.Real_Time;
with Ada.Task_Identification;
package Ada.Dispatching.EDF is
   subtype Deadline is Ada.Real_Time.Time;
   Default_Deadline : constant Deadline := Ada.Real_Time.Time_Last;
   procedure Set_Deadline
     (D : in Deadline;
      T : in Ada.Task_Identification.Task_Id :=
            Ada.Task_Identification.Current_Task);
   procedure Delay_Until_And_Set_Deadline
     (Delay_Until_Time : in Ada.Real_Time.Time;
      Deadline_Offset  : in Ada.Real_Time.Time_Span);
   function Get_Deadline
     (T : Ada.Task_Identification.Task_Id :=
            Ada.Task_Identification.Current_Task) return Deadline;
end Ada.Dispatching.EDF;
Post-Compilation Rules
If the EDF_Across_Priorities policy is specified for a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.
If the EDF_Across_Priorities policy appears in a Priority_Specific_Dispatching pragma (see D.2.2) in a partition, then the Ceiling_Locking policy (see D.3) shall also be specified for the partition.
Dynamic Semantics
A Relative_Deadline pragma has no effect if it occurs in the declarative_part of the subprogram_body of a subprogram other than the main subprogram.
The initial absolute deadline of a task containing pragma Relative_Deadline is the value of Real_Time.Clock + relative_deadline_expression, where the call of Real_Time.Clock is made between task creation and the start of its activation. If there is no Relative_Deadline pragma then the initial absolute deadline of a task is the value of Default_Deadline. The environment task is also given an initial deadline by this rule.
The procedure Set_Deadline changes the absolute deadline of the task to D. The function Get_Deadline returns the absolute deadline of the task.
The procedure Delay_Until_And_Set_Deadline delays the calling task until time Delay_Until_Time. When the task becomes runnable again it will have deadline Delay_Until_Time + Deadline_Offset.
On a system with a single processor, the setting of the deadline of a task to the new value occurs immediately at the first point that is outside the execution of a protected action. If the task is currently on a ready queue it is removed and re-entered on to the ready queue determined by the rules defined below.
When EDF_Across_Priorities is specified for priority range Low..High all ready queues in this range are ordered by deadline. The task at the head of a queue is the one with the earliest deadline.
A task dispatching point occurs for the currently running task T to which policy EDF_Across_Priorities applies:

o when a change to the deadline of T occurs;

o there is a task on the ready queue for the active priority of T with a deadline earlier than the deadline of T; or

o there is a non-empty ready queue for that processor with a higher priority than the active priority of the running task.
In these cases, the currently running task is said to be preempted and is returned to the ready queue for its active priority.
For a task T to which policy EDF_Across_Priorities applies, the base priority is not a source of priority inheritance; the active priority when first activated or while it is blocked is defined as the maximum of the following:

o the lowest priority in the range specified as EDF_Across_Priorities that includes the base priority of T;

o the priorities, if any, currently inherited by T;

o the highest priority P, if any, less than the base priority of T such that one or more tasks are executing within a protected object with ceiling priority P and task T has an earlier deadline than all such tasks.
When a task T is first activated or becomes unblocked, it is added to the ready queue corresponding to this active priority. Until it becomes blocked again, the active priority of T remains no less than this value; it will exceed this value only while it is inheriting a higher priority.
When the setting of the base priority of a ready task takes effect and the new priority is in a range specified as EDF_Across_Priorities, the task is added to the ready queue corresponding to its new active priority, as determined above.
For all the operations defined in Dispatching.EDF, Tasking_Error is raised if the task identified by T has terminated. Program_Error is raised if the value of T is Null_Task_Id.
Bounded (Run-Time) Errors
If EDF_Across_Priorities is specified for priority range Low..High, it is a bounded error to declare a protected object with ceiling priority Low or to assign the value Low to attribute 'Priority. In either case either Program_Error is raised or the ceiling of the protected object is assigned the value Low+1.
Erroneous Execution
If a value of Task_Id is passed as a parameter to any of the subprograms of this package and the corresponding task object no longer exists, the execution of the program is erroneous.
Documentation Requirements
On a multiprocessor, the implementation shall document any conditions that cause the completion of the setting of the deadline of a task to be delayed later than what is specified for a single processor.
NOTES
16 If two adjacent priority ranges, A..B and B+1..C are specified to have policy EDF_Across_Priorities then this is not equivalent to this policy being specified for the single range, A..C.
17 The above rules implement the preemption-level protocol (also called Stack Resource Policy protocol) for resource sharing under EDF dispatching. The preemption-level for a task is denoted by its base priority. The definition of a ceiling preemption-level for a protected object follows the existing rules for ceiling locking.
!corrigendum D.11(4)
Replace the paragraph:
After the Hold operation has been applied to a task, the task becomes held. For each processor there is a conceptual idle task, which is always ready. The base priority of the idle task is below System.Any_Priority'First. The held priority is a constant of the type integer whose value is below the base priority of the idle task.
by:
After the Hold operation has been applied to a task, the task becomes held. For each processor there is a conceptual idle task, which is always ready. The base priority of the idle task is below System.Any_Priority'First. The held priority is a constant of the type Integer whose value is below the base priority of the idle task.
For any priority below System.Any_Priority'First, the task dispatching policy is FIFO_Within_Priorities.
!corrigendum D.11(5)
Replace the paragraph:
The Hold operation sets the state of T to held. For a held task: the task's own base priority does not constitute an inheritance source (see D.1), and the value of the held priority is defined to be such a source instead.
by:
The Hold operation sets the state of T to held. For a held task, the active priority is reevaluated as if the base priority of the task were the held priority.
!corrigendum D.11(6)
Replace the paragraph:
The Continue operation resets the state of T to not-held; T's active priority is then reevaluated as described in D.1. This time, T's base priority is taken into account.
by:
The Continue operation resets the state of T to not-held; its active priority is then reevaluated as determined by the task dispatching policy associated with its base priority.
!ACATS test
Tests should be created to check on the implementation of this feature.
!appendix

Reference
[1] Baker, T.P., Stack-Based Scheduling of Real-Time Processes,
Real-Time Systems, Vol. 3, No. 1, pp. 67-99, March 1991.

****************************************************************

From: Alan Burns
Sent: Monday, October 20, 2003  4:27 AM

Re EDF scheduling and preemption levels.

One suggestion from the last ARG meeting was that the need for preemption
levels could be catered for by using a block of priority levels for a single
scheme such as EDF - the different values in the block being used for
preemption levels (of tasks and POs).

Remember the problem we are trying to solve is the control of the
amount of interaction that occurs between a set of tasks and POs
at the same priority level.

My reading of the basic dispatching model (ie new D2.1) is that the use
of a block of priority levels will not work. It is clear for the basic model
that at any dispatching point the highest (non-empty) ready queue will
provide the task to execute next. It will not be easy to define the EDF
dispatching scheme (or any other implementation defined scheme) in
such a way that a task can be preempted by a newly released task
with a lower base priority. Using active priority will not work as this
would put the task's priority too high for its use of POs. One would
need to introduce a new priority notion (in addition to base and active,
say original) that seems too large a change.

The alternative seems more straightforward. Define a new pragma that
works with Ceiling_Locking to allow tasks and POs to define a preemption
level. This has minimal impact on existing words and notions.

I would like to get some feedback on this before the next meeting so that
I can work on the details and wording.

****************************************************************

From: Tucker Taft
Sent: Monday, October 20, 2003  9:14 AM

> My reading of the basic dispatching model (ie new D2.1) is that the use
> of a block of priority levels will not work. It is clear for the basic model
> that at any dispatching point the highest (non-empty) ready queue will
> provide the task to execute next. It will not be easy to define the EDF
> dispatching scheme (or any other implementation defined scheme) in
> such a way that a task can be preempted by a newly released task
> with a lower base priority. Using active priority will not work as this
> would put the task's priority too high for its use of POs. One would
> need to introduce a new priority notion (in addition to base and active,
> say original) that seems too large a change.

Suppose the "active priority" of a task in such a group is
always the lowest priority level of the group *except* when it
is inside a PO, in which case it takes on the ceiling level of the
PO.  In addition, the PO would check against the max of the
active priority and the base priority for ceiling violations.
I believe this could work.  Whether it is preferable to introducing
preemption levels is worthy of debate...

****************************************************************

From: Alan Burns
Sent: Monday, October 20, 2003  9:48 AM

> Suppose the "active priority" of a task in such a group is
> always the lowest priority level of the group *except* when it
> is inside a PO, in which case it takes on the ceiling level of the
> PO.  In addition, the PO would check against the max of the
> active priority and the base priority for ceiling violations.
> I believe this could work.  Whether it is preferable to introducing
> preemption levels is worthy of debate...

No, I'm not sure this works. Suppose the lowest pri is 3
T1 and PO have priority of 4
T2 has priority of 5

T1 executes (at level 3) and calls PO (now executing at level 4)
T2 is released, but its active priority is 3 so
does not preempt. But we want T2 to be able to
preempt T1 as preemption level ensures T2 does not use PO
(and deadline of T2 may be shorter than T1's, for example).

****************************************************************

From: Stephen Michell
Sent: Monday, October 20, 2003  10:06 AM

I have significant concerns about the mix-and-match approach that Alan
was suggesting for this AI (357) and those for round-robin, value-based,
etc.

Basically, the IRTAW was proposing that any given priority level could
take on a scheduling paradigm, and that tasks would inherit this
paradigm when they entered that priority level. So for example
priority 1 -> round robin
priority 2 -> FIFO
priority 3 -> round robin
priority 4 -> EDF
priority 5 -> round robin

My concerns:
Round Robin
Round-robin is a paradigm that is used for what are considered
non-realtime tasks. While I don't have much problem with the notion of a
set of low priority tasks working in a round-robin paradigm while higher
priority realtime tasks work with FIFO priorities, the notion that the
base priority of such non-realtime tasks could be changed and that they
would acquire realtime properties seems strange.

EDF
Earliest deadline first is a paradigm which really replaces priority as
a scheduling mechanism. As a task's scheduled completion time approaches
and the task has not been scheduled, its urgency relative to all other
tasks increases until eventually it might become the most urgent task to
run.

EDF analysis depends upon statically knowing the scheduling rate and
execution time of each task in the system. These are static properties
for each task which are worked out a priori, and assume that you are
analysing 100% of all tasks in the system. They don't support the notion
that there may be other tasks at different priority levels which are
consuming resources which are not accounted in the analysis.

In particular, the current Ada model is a dynamic priority model. If you
permit tasks to be added to or removed from this model dynamically, then
your analysis goes out the window. Similarly, if a lower priority task's
priority is increased beyond the priority used for EDF, its performance
characteristics will interfere with those in the EDF regime.
- One could theoretically have high priority tasks (higher than the EDF
regime) and account for their resources by subtracting out the total
consumed by higher priority activities (say 10%), but if the load is
unbalanced it could wreck the analysis.


So, how far should we go?
IMHO, I don't see a problem with round-robin mixed with FIFO, as long as
the round-robin tasks are the lowest priorities in the system. We could
possibly keep dynamic priorities if we create a rule that PROGRAM_ERROR
or TASKING_ERROR is raised if a task's (or a PO if we adopt that AI)
base priority is changed from a round-robin level <-> non-round-robin
level. We could also add a legality rule that the highest round robin
priority level be below the lowest FIFO priority level.

I also don't see a problem with EDF, as long as it is the only paradigm
in use - i.e. all tasks participate in EDF. This means that there would
be no round-robin or FIFO tasks in an EDF system.

Value-based scheduling - I'm not sure. I think that I'll stay quiet on
that for now.

Anyway, my thoughts for now. I'm interested to hear what others think.

****************************************************************

From: Alan Burns
Sent: Monday, October 20, 2003  11:17 AM

It was unfortunate that you had to leave the IRTAW before it
got started as most of the points you raise were dealt with
at length. I cannot possibly replay a day and a half of discussion
but I'll give some response. Remember the IRTAW was fully
supportive of the AIs I have produced.

...
> My concerns:
> Round Robin
> Round-robin is a paradigm that is used for what are considered
> non-realtime tasks. While I don't have much problem with the notion of a
> set of low priority tasks working in a round-robin paradigm while higher
> priority realtime tasks work with FIFO priorities, the notion that the
> base priority of such non-realtime tasks could be changed and that they
> would acquire realtime properties seems strange.

RR is not just used at low levels, I mentioned this at the
ARG meeting and it is discussed in my IRTAW paper that you have.


> EDF
> Earliest deadline first is a paradigm which really replaces priority as
> a scheduling mechanism. As a task's scheduled completion time approaches
> and the task has not been scheduled, its urgency relative to all other
> tasks increases until eventually it might become the most urgent task to
> run.

what is proposed allows EDF to live on its own (if all tasks' priorities
are given the same EDF level) AND mixed systems to be programmed.

I can assure you that EDF analysis can accommodate some capacity
being stolen by higher priority tasks. Also one safe implementation
scheme for EDF is to allow hard (critical) tasks to jump to
fixed pri levels only when they would otherwise miss their deadlines.
This fault tolerance approach has been discussed in a number of papers,
I can give you the references.

Also analysis is not undertaken on all systems, people want to
use EDF as it is the most efficient in its use of CPU resources.
For example best-effort aperiodic scheduling under a fixed pri scheme.

...
> In particular, the current Ada model is a dynamic priority model. If you
> permit tasks to be added to or removed from this model dynamically, then
> your analysis goes out the window. Similarly, if a lower priority task's
> priority is increased beyond the priority used for EDF, it's performance
> characteristics will interfere with those in the EDF regime.
> - One could theoretically have high priority tasks (higher than the EDF
> regime) and account for their resources by subtracting out the total
> consumed by higher priority activities (say 10%), but if the load is
> unbalanced it could wreck the analysis.

I disagree with the pessimistic tone of the above, of course if
you have a very dynamic system it is hard (perhaps impossible)
to analyse - but that does not mean mixed systems cannot be
analysed.

> So, how far should we go?
> IMHO, I don't see a problem with round-robin mixed with FIFO, as long as
> the round-robin tasks are the lowest priorities in the system. We could
> possibly keep dynamic priorities if we create a rule that PROGRAM_ERROR
> or TASKING_ERROR is raised if a task's (or a PO if we adopt that AI)
> base priority is changed form a round-robin level <-> non-round-robin
> level. We could also add a legality rule that the highest round robin
> priority level be below the lowest FIFO priority level.
>
> I also don't see a problem with EDF, as long as it is the only paradigm
> in use - i.e. all tasks participate in EDF. This means that there would
> be no round-robin or FIFO tasks in an EDF system.
>

I disagree, and the workshop disagreed as well.

****************************************************************

From: Tucker Taft
Sent: Monday, October 20, 2003  1:19 PM

> T1 executes (at level 3) and calls PO (now execting at level 4)
> T2 is released, but its active priority is 3 so
> does not preempt. But we want T2 to be able to
> preempt T1 as preemption level ensures T2 does not use PO
> (and deadline of T2 may be shorter than T1's, for example).

Yes, I was confused.  How about the following:

Outside a PO, the active priority of a task in the group and
its location within the corresponding queue shifts according
to the scheduling policy (e.g. EDF), within a range
bounded below by the lowest priority of the group and above by the
specified priority of the task.  When entering a PO, it takes on
the ceiling priority of the PO as usual.

In other words, a task's active priority can be lowered below its specified
priority and/or moved later in a ready queue by the priority-specific
scheduling algorithm, unless it is inside a PO.

I think that would work...

I am pretty sure that this is not an uncommon way for such
scheduling algorithms to work.  That is, the priority is lowered
as appropriate to turn the fundamentally priority-based scheduling
into a more sophisticated algorithm.  However, when holding a lock,
the priority cannot be lowered.

Another way to think of it is that the specified priority is
the *ceiling* priority for the task when not inside any PO.

****************************************************************

From: Tucker Taft
Sent: Monday, October 20, 2003  1:57 PM

P.S. A legitimate question is why am I trying so hard
to eliminate preemption levels.  I really think they
add substantially to the conceptual complexity of the
scheduling model.  *If* we can come up with an equivalence
that requires only a single notion of "specified" priority for POs
and tasks, I think it will be a big advantage for users.

I believe that it is significantly more complex to juggle both
specified priority and specified preemption level, worry (eventually)
about dynamic setting of both as part of mode changes, figure out how
to pass both through task and protected object discriminants, etc.

So, that's why I hope you will continue this dialogue about
developing an equivalence.  Of course internally the scheduler
doesn't really have to raise and lower the active priority,
but can rather use whatever sort of data structure it wants
to implement EDF or whatever priority (range) specific scheduling
it wants.  But it is awfully nice to keep a single unified priority
scheme as the basic model at the language description level, and I
would hate to lose that.

****************************************************************

From: Tucker Taft
Sent: Monday, October 20, 2003  2:00 PM

P.P.S. And of course the *real* reason is that I can't bear
adding another field to our protected-object descriptor record
which has to be checked on every lock operation (conceptual
model be-damned) ;-).

****************************************************************

From: Alan Burns
Sent: Tuesday, October 21, 2003  5:08 AM

I am happy to keep trying to get this definition right.
I share your view that the simpler the model we can find the
better. If we can do without a new concept then great.

>
> Yes, I was confused.  How about the following:
>
> Outside a PO, the active priority of a task in the group and
> its location within the corresponding queue shifts according
> to the scheduling policy (e.g. EDF), within a range
> bounded below by the lowest priority of the group and above by the
> specified priority of the task.  When entering a PO, it takes on
> the ceiling priority of the PO as usual.
>
> In other words, a task's active priority can be lowered below its
> specified
> priority and/or moved later in a ready queue by the priority-specific
> scheduling algorithm, unless it is inside a PO.
>

I think this still has the same problem (if I understand your
model correctly). But I have an alternative (later). With
your model, again look at a set of tasks with base value 3
T1 base priority 4
PO has base priority 4 (as it is called by T1)
T2 has base priority 6 (indicating it does not call PO)
T3 has base priority 5

Lets say the ready queue (as base level 3 is ordered by
EDF)

If T2 is released while no POs are locked it take its place
in the ready queue for level 3 and may run immediately if
it has shortest deadline. It will run with priority 3.

However if T1 has locked PO when T2 released then T2 needs to
run with priority 6 (but only if its deadline is less than T1s).
Now T3 could be released and it cannot preempt even if its
priority is less than T2s.

There are other examples that show problems - they all come
about because the tasks run with the ceiling value. Preemption
levels do not have this problem as they do not control dispatching.
They only give rules for when a task can execute: eg
T3 can preempt T2 if deadline is less and preemption level of
T2 is higher than that of any locked PO (ie 4).

Alternative
Take a band of priorities.
1) run all tasks at top pri (ie active priority always top)
2) round up all ceilings to top (as allowed by definition of
ceilings) - hence no program_errors
3) use base pri's of tasks and defined ceilings of POs to
enforce/define dispatching rules

So for EDF it would be a dispatching point if
head of ready queue has shorter deadline than current running
task AND base pri of head was greater than that of any locked
PO's (all within the range of the policy).

How does that sound?

****************************************************************

From: Tucker Taft
Sent: Tuesday, October 21, 2003  7:52 AM

> Lets say the ready queue (as base level 3 is ordered by
> EDF)

I presume that the EDF scheduler would have control over
all the ready queues in the range it manages (say 3-6).
It could raise and lower the active priority of the tasks
under its control as it sees fit, but with a restriction
that it cannot raise their active priority above their
specified priority, except of course that when a task
acquires a PO lock, its active priority is boosted to that
of the PO.

> If T2 is released while no POs are locked it take its place
> in the ready queue for level 3 and may run immediately if
> it has shortest deadline. It will run with priority 3.

I don't know why you are saying it will run with priority 3.
It could run with any priority from 3 to 6, as determined by
the EDF scheduler.  Its priority could change while running
as well.  At the implementation level, it would seem simplest
to have a single EDF queue for all the tasks, and generally
ignore the priority except when deciding whether to start
a given task.  When a task is at the head of the EDF queue,
it would be run if its deadline is earlier than the current
running task and, if the running task is inside a PO,
its priority is higher than that of the PO.

> However if T1 has locked PO when T2 released then T2 needs to
> run with priority 6 (but only if its deadline is less than T1s).

As suggested above, if T2's deadline is earlier than T1, then it
would run because its priority is higher than that of the PO
T1 has locked.

> Now T3 could be released and it cannot preempt even if its
> priority is less than T2s.
>
> There are other examples that show problems - they all come
> about because the tasks run with the ceiling value. Preemption
> levels do not have this problem as they do not control dispatching.
> They only give rules for when a task can execute: eg
> T3 can preempt T2 if deadline is less and preemption level of
> T2 is higher than that of any locked PO (ie 4).

I don't follow the above, because I suspect we are making different
assumptions.

> Alternative
> Take a band of priorities.
> 1) run all tasks at top pri (ie active priority always top)
> 2) round up all ceilings to top (as allowed by definition of
> ceilings) - hence no program_errors
> 3) use base pri's of tasks and defined ceilings of POs to
> enforce/define dispatching rules
>
> So for EDF it would be a dispatching point if
> head of ready queue have shorter deadline than current running
> task AND base pri of head was greater than that of any locked
> PO's (all within the range of the policy).

Yes, we have the same rule in mind.  I presumed the EDF scheduler
would essentially ignore the priority in its basic deadline
queue, except to check it at the moment of release to be sure
it is higher than any locked PO.

> How does that sound?

Very much what I had in mind, but you certainly expressed it better  ;-).

****************************************************************

From: Alan Burns
Sent: Tuesday, October 21, 2003  9:38 AM

We seem to be converging

My problem with 'your' model is being able to specify
easily the required behaviour

>> I think this still has the same problem (if I understand your
>> model correctly). But I have an alternative (later). With
>> your model, again look at a set of tasks with base value 3
>> T1 base priority 4
>> PO has base priority 4 (as it is called by T1)
>> T2 has base priority 6 (indicating it does not call PO)
>> T3 has base priority 5
>>
>> Lets say the ready queue (as base level 3 is ordered by
>> EDF)
>
> I presume that the EDF scheduler would have control over
> all the ready queues in the range it manages (say 3-6).
> It could raise and lower the active priority of the tasks
> under its control as it sees fit, but with a restriction
> that it cannot raise their active priority above their
> specified priority, except of course that when a task
> acquires a PO lock, its active priority is boosted to that
> of the PO.
>
>> If T2 is released while no POs are locked it take its place
>> in the ready queue for level 3 and may run immediately if
>> it has shortest deadline. It will run with priority 3.
>
> I don't know why you are saying it will run with priority 3.
> It could run with any priority from 3 to 6, as determined by
> the EDF scheduler.  Its priority could change while running
> as well.  At the implementation level, it would seem simplest
> to have a single EDF queue for all the tasks, and generally
> ignore the priority except when deciding whether to start
> a given task.  When a task is at the head of the EDF queue,
> it would be run if its deadline is earlier than the current
> running task and, if the running task is inside a PO,
> its priority is higher than that of the PO.

Agreed, the required behaviour could be achieved but it is difficult to
define when some fixed points (the PO priorities) are given
and have to be followed, for example with the following

>> However if T1 has locked PO when T2 released then T2 needs to
>> run with priority 6 (but only if its deadline is less than T1s).
>
> As suggested above, if T2's deadline is earlier than T1, then it
> would run because its priority is higher than that of the PO
> T1 has locked.
>
>> Now T3 could be released and it cannot preempt even if its
>> priority is less than T2s.
>
> I don't follow the above, because I suspect we are making different
> assumptions.

T2 should preempt T1 (which has priority 4), its base priority
is 6  - does it get 5 or 6 or 7 ...? (I agree at one level it does not matter)
Then T3 should preempt T2 - again a higher priority must be chosen.
In the end a range of priorities above the highest ceiling must
be reserved for the EDF range (perhaps doubling the range
of priorities needed).

I agree the model does not dictate implementation, but the above
seems hard to define. Each task must be on some ready queue when
it runs - the rules must say which.

>> Alternative
>> Take a band of priorities.
>> 1) run all tasks at top pri (ie active priority always top)
>> 2) round up all ceilings to top (as allowed by definition of
>> ceilings) - hence no program_errors
>> 3) use base pri's of tasks and defined ceilings of POs to
>> enforce/define dispatching rules
>>
>> So for EDF it would be a dispatching point if
>> head of ready queue have shorter deadline than current running
>> task AND base pri of head was greater than that of any locked
>> PO's (all within the range of the policy).
>
> Yes, we have the same rule in mind.  I presumed the EDF scheduler
> would essentially ignore the priority in its basic deadline
> queue, except to check it at the moment of release to be sure
> it is higher than any locked PO.
>
>> How does that sound?
>
> Very much what I had in mind, but you certainly expressed it better  ;-).

Perhaps just my way of thinking, but this seems easier to
express in ARG (LRM) speak.

have you a strong view either way?

****************************************************************

From: Tucker Taft
Sent: Tuesday, October 21, 2003 11:26 AM

> >> Alternative
> >> Take a band of priorities.
> >> 1) run all tasks at top pri (ie active priority always top)
> >> 2) round up all ceilings to top (as allowed by definition of
> >> ceilings) - hence no program_errors
> >> 3) use base pri's of tasks and defined ceilings of POs to
> >> enforce/define dispatching rules
> >>
> >> So for EDF it would be a dispatching point if
> >> head of ready queue have shorter deadline than current running
> >> task AND base pri of head was greater than that of any locked
> >> PO's (all within the range of the policy).

Wouldn't we want to allow a dispatching point if *any* task
on the ready queue satisfied these two requirements, even if
the "head" task didn't?  I presume you would just skip over
the head task(s) with priorities too low.

Another way to imagine it is that there are still queues at each
priority level (ordered by scheduler determined "urgency", e.g. deadline),
but a task is only "ready" if it has a greater urgency than all tasks on
lower priority ready queues, but ignoring queues at priorities
at the level of a locked PO or below.

In other words, when a task's "urgency" drops below a ready task of a lower,
eligible-to-run priority, it becomes non-ready.

> > Yes, we have the same rule in mind.  I presumed the EDF scheduler
> > would essentially ignore the priority in its basic deadline
> > queue, except to check it at the moment of release to be sure
> > it is higher than any locked PO.
> >
> >> How does that sound?
> >
> > Very much what I had in mind, but you certainly expressed it better  ;-).
> >
>
> Perhaps just my way of thinking, but this seems easier to
> express in ARG (LRM) speak.
>
> have you a strong view either way?

I am somewhat worried about rounding up all the ceilings, and also
it seems that if you want to skip over the "head" of the queue
(which I claim you will want to do), then you are violating the basic
scheduling model.  Or perhaps that is solved by stating that the queue
at the top priority is primarily ordered by deadline, but tasks whose
specified priority blocks them due to a locked PO are given an
artificially late deadline, or are simply treated as non-ready.

But at that point, perhaps the model I suggested above is just
as good, and doesn't require rounding up the PO ceilings.

I also presume we want to describe this model in a way that is not
EDF-specific (hence my introduction of the notion of
"scheduler-determined urgency").  Or is this model just for the EDF case?

****************************************************************

From: Alan Burns
Sent: Thursday, October 23, 2003  5:08 AM

> I am somewhat worried about rounding up all the ceilings, and also
> it seems that if you want to skip over the "head" of the queue
> (which I claim you will want to do), then you are violating the basic
> scheduling model.  Or perhaps that is solved by stating that the queue
> at the top priority is primarily ordered by deadline, but tasks whose
> specified priority blocks them due to a locked PO are given an
> artificially late deadline, or are simply treated as non-ready.

I agree that skipping over the head of a ready queue (even if
ordered by deadline) would be most unfortunate. I'll reread the
papers and see if you are right (certainly you seem to be).

> But at that point, perhaps the model I suggested above is just
> as good, and doesn't require rounding up the PO ceilings.

But my point about having to reserve lots of priority levels
above top ceiling is still a concern. If tasks are to execute
with priorities other than base or ceiling of locked objects
then perhaps it is best to just say only one task (in the
group) is ready to execute and it is the one with shortest
deadline and sufficiently high preemption level.

> I also presume we want to describe this model in a way that is not
> EDF-specific (hence my introduction of the notion of
> "scheduler-determined urgency").  Or is this model just for the
> EDF case?

Both! I want to define an EDF scheme but in a way that it
would be easy to use another parameter as the definition
of urgency

I'll get back to you next week

****************************************************************

From: Alan Burns
Sent: Tuesday, October 28, 2003  9:33 AM

Working with your model I think the following works.
It may seem a bit complicated but I think this is
needed to be able to ensure that only the head of
any ready queue needs to be looked at. It also
uses PO ceilings in the usual way (tasks in a PO
run at ceiling level), and it does not need priorities
above those used by the tasks and POs to be reserved.

Assume range of priorities for EDF (or any other
scheme using this approach) is B..T; also
restrict tasks and PO priorities to be in range
B+1 .. T. See why in a moment.

Let S denote the priority of the highest priority currently
locked PO (S is dynamic). If no PO is locked let S equal B.

Rule 1
All ready queues (in B..T range) are ordered by deadline.

Rule 2
A task, T, can only become ready if base pri of T > S.

Rule 3
When a task becomes ready it is added to the ready queue
for pri level S and this is a dispatching point. (So if new task
has shorter deadline than current running task it will
preempt it.)

Rule 4
When S drops from level P1 to P2, all tasks on the ready
queue for P1 are transferred to P2.

--
This last rule may sound strange, and would not necessarily be the way
to implement this, but is needed in the model (I think).

Note the only tasks that would move are those that were released
while S had value P1, had deadline greater then the task that locked
the object that made S equal P1, and had base pri > S. If S drops
to B (ie no objects locked) all tasks will be in the B ready queue
in deadline order.

To see why the rule is required, one needs a quite
complicated scenario involving two or more POs. For example

T1 - base priority 3
T2 - base priority 4
T3 - base priority 5
T4 - base priority 6

PO1 used by T1 and T2 - ceiling pri 4
PO2 used by T4 - ceiling 6

S is 1 initially
T1 released (3>1), calls PO1 (S now equal 4, active pri of T1
  is now 4)
T4 released (6>4) and is added to ready queue for priority
  level 4, assume it has shorter deadline, so it preempts T1,
  and executes with active pri 4; it now calls PO2 (it executes
  at active pri 6 and S now 6)
T3 arrives but cannot be made ready even though it has even
  shorter deadline (5 not > 6)
T4 returns from PO2 (S now equal 4). T4 is added to ready queue
  for 4, as is T3 that can now be made ready. The ready queue at level
  4 has the right structure (T3 head then T4 then T1) and allows T3
  to execute first (shortest deadline) then T4, then T1
When T1 exits PO1 (S falls to 1) any tasks that were made ready
  (because they had high enough base pri) but longer deadline
  than T1, will move from ready queue at 4 to ready queue at 1
--

Easy to see that a very simple scheme where it is acceptable
for all PO calls to be, in effect, non-preemptive can easily
be programmed by giving 2 priority levels for EDF (say 7 and
8), set all task priorities to 8 (hence all ceilings are 8);
then the ready queue at 7 will be ordered by deadline. All
tasks can be made ready immediately unless a task calls a
PO; in which case it will execute at pri 8 (S will be 8)
and no preemption will occur.

If you see nothing immediately wrong with this I'll write a longer
note to the IRTAW group and get their assessment.

****************************************************************

From: Tucker Taft
Sent: Tuesday, October 28, 2003 11:04 AM

You didn't comment specifically on my last proposal.
It seems a little simpler than this new model:

stt> Another way to imagine it is that there are still queues at each
stt> priority level (ordered by scheduler determined "urgency",
stt> e.g. deadline),
stt> but a task is only "ready" if it has a greater urgency than
stt> all tasks on
stt> lower priority ready queues, but ignoring queues at priorities
stt> at the level of a locked PO or below.
stt>
stt> In other words, when a task's "urgency" drops below a
stt> ready task of a lower,
stt> eligible-to-run priority, it becomes non-ready.

I'll try to work through your example with this model
and see if it accomplishes the same thing (see below).
I'll call mine the "model-TT" in what follows ;-).

> Assume range of priorities for EDF (or any other
> scheme using this approach) is B..T; also
> restrict tasks and PO priorities to be in range
> B+1 .. T. See why in a moment.

I don't think model-TT requires such a restriction.

> Let S denote the priority of the highest priority currently
> locked PO (S is dynamic). If no PO is locked let S equal B.

In model-TT, I'll let S be B-1 if no PO is locked in range B..T.

>
> Rule 1
> All ready queues (in B..T range) are ordered by deadline.

Agreed.

>
> Rule 2
> A task, T, can only become ready if base pri of T > S.

The model-TT rule would be: a task T can become ready only if
T'prio <= S or its "urgency" is greater than the head of the
queues at levels S+1..T'Prio-1.

>
> Rule 3
> When a task becomes ready it is added to the ready queue
> for pri level S and this is a dispatching point. (So if new task
> has shorter deadline than current running task it will
> preempt it.)

When a task becomes ready (that is, it is released and its
urgency is greater than tasks at levels S+1..T'Prio-1)
it is added to the ready queue for its base priority level.
This is a dispatching point if
it is more "urgent" than the running task and its prio is > S.

> Rule 4
> When S drops from level P1 to P2, all tasks on the ready
> queue for P1 are transferred to P2.

When S drops from level P1 to P2, rule 2 and 3 are rechecked:
if a queued task whose prio is > S is now the most urgent, it preempts.
Any tasks that are less urgent than the heads of queues in
S+1..T'Prio-1 are made non-ready.

>
> --
> This last rule may sound strange, and would not necessarily be the way
> to implement this, but is needed in the model (I think).
>
> Note the only tasks that would move are those that were released
> while S had value P1, had deadline greater then the task that locked
> the object that made S equal P1, and had base pri > S. If S drops
> to B (ie no objects locked) all tasks will be in the B ready queue
> in deadline order.
>
> To see why the rule is required, one needs a quite
> complicated scenario involving two or more POs. For example
>
> T1 - base priority 3
> T2 - base priority 4
> T3 - base priority 5
> T4 - base priority 6
>
> PO1 used by T1 and T2 - ceiling pri 4
> PO2 used by T4 - ceiling 6
>
> S is 1 initially
> T1 released (3>1), calls PO1 (S now equal 4, active pri of T1
>   is now 4)
> T4 released (6>4) and is added to ready queue for priority
>   level 4, assume it has shorter deadline, so it preempts T1,
>   and executes with active pri 4; it now calls PO2 (it executes
>   at active pri 6 and S now 6)
> T3 arrives but cannot be made ready even though it has even
>   shorter deadline (5 not > 6)
> T4 returns from PO2 (S now equal 4). T4 is added to ready queue
>   for 4, as is T3 that can now be made ready. The ready q at level
>   4 has the right structure (T3 head then T4 then T1) and allows T3
>   to execute first (shortest deadline) then T4, then T1
> When T1 exists PO1 (S falls to 1) any tasks that were made ready
>   (because they had high enough base pri) but longer deadline
>   that T1, will move from ready q at 4 to ready q at 1

Model-TT equivalent scenario:

S is 1 initially
T1 released (3>1), calls PO1 (S now equal 4, active pri of T1
   is now 4)
T4 released (6>4) and is added to ready queue for priority
   level 6 (nothing at prio 5 to worry about),
   assume it has shorter deadline, so it preempts T1,
   and executes with active pri 6; it now calls PO2 (it continues
   to execute at active pri 6 and S now 6).
   T1 is placed on head of queue at level 4.
T3 arrives and is added to prio 5 queue (ordered by deadline),
   since all tasks with prio <= S are allowed on their queues,
   but are ignored because of the locked PO.
T4 returns from PO2 (S now equal 4). T3 preempts T4 since
   it is now the most urgent task at prio > S.
   (T1 is still in a preempted state).
   T4 is added to ready queue for level 6 presuming there is
   no other task at level 5, or it is more urgent than the head of
   the level 5 queue.
   When T3 completes, T4 resumes.  When T4 completes, T1 resumes.
When T1 exits PO1 (S falls to 1) rule 2 is rechecked, which may
   cause some of the tasks on queues 2..6 to be made non-ready
   if they are less urgent than lower priority tasks (which are now
   eligible to run since S is 1).


> --
>
> Easy to see that a very simple scheme where it is acceptable
> for all PO calls to be, in effect, non-preemptive can easily
> be programmed by giving 2 priority levels for EDF (say 7 and
> 8), set all tasks priorities to 8 (hence all ceiling are 8)
> then ready queue at 7 will be ordered by deadline. All
> tasks can be made ready immediately unless a task calls a
> PO; in which case it will execute at pri 8 (S will be 8)
> and no preemption will occur.
>
> If you see nothing immediately wrong with this I'll write a longer
> note to the IRTAW group and get their assessment.

Could you comment on model-TT?  It seems that by
making tasks of high priority but low urgency non-ready,
and by making the readying of a task a dispatching
point only if it is more urgent than the running task,
it satisfies the requirements of the priority model,
while primarily following the "urgency" (e.g. deadline)
ordering.

I don't think either of the models correspond to the
best way to implement things, but my sense is that model-TT
might be simpler to describe, and has the virtue that a
task is queued at its active priority, rather than bouncing
around to different queues that have nothing to do with
its priority.  The "trick" is we simply make a task non-ready
if it is eligible to run due to having a prio > S, but
its relative priority contradicts its relative urgency.

From an implementation point of view, I think I would
probably have each priority level with its own urgency
queue, and then a queue of levels > S, ordered by the
urgency of the head of the corresponding queue.  That is,
I wouldn't actually remove the tasks from their priority
level's queue if their urgency was too low, but I would just
ignore the queue as a whole.

****************************************************************

From: Alan Burns
Sent: Thursday, October 30, 2003  8:48 AM

Let me first make sure I have model-TT correct

stt> Another way to imagine it is that there are still queues at each
stt> priority level (ordered by scheduler determined "urgency",
stt> e.g. deadline),
stt> but a task is only "ready" if it has a greater urgency than
stt> all tasks on
stt> lower priority ready queues, but ignoring queues at priorities
stt> at the level of a locked PO or below.
stt>
stt> In other words, when a task's "urgency" drops below a
stt> ready task of a lower,
stt> eligible-to-run priority, it becomes non-ready.

Tasks run at their usual base priorities, or with PO
priorities. We get EDF behaviour by making all 'ready
tasks', with priority higher than that of the task with
shortest deadline, non-ready (unless a PO is locked,
but let's ignore this situation for a moment).

So if we have the range 1..10 for EDF, and a task with pri
10 was running and then a task with pri 1 was released with
a shorter deadline, then the running task and all tasks on
ready queues in range 2..10 would need to be made non-ready.
Correct?

Assuming I have this correct I have some concerns (but I agree
that model-TT gives the correct EDF behaviour).

1) Being 'ready' and on a ready queue are synonymous in the
dispatching model - hence becoming non-ready would mean being
removed from ready queues. Hence many tasks for most of the time
will be in limbo (not in any state currently defined for tasks).

2) I think the control we have over dispatching amounts to
defining the dispatching points, and some postponement of
when a task becomes ready (after being blocked, but before
it executes).

3) Model-TT needs (I think) a task that is running to become
non-ready (if lower pri task released with shorter deadline).
This is quite new and opens up many potentially tricky issues.
I do not think it is a natural way of defining the model
(but I guess that is a question of taste!).

4) Those who know the preemption model will find it confusing
that priority is (apparently) being used for dispatching and
PO locking - rather than just locking.


So what do we have.

I think we both agree (at this stage) that when a task executes
inside a PO it should execute with ceiling priority. Also that
we should only have queues that need the head task to be looked at.

I think the use of non-ready should be limited to the postponement
of when a task can become ready.

The model should look straightforward when no POs are locked and
extend naturally when POs are added to the model.

Let me give a variation of what I last said, to get
rid of the 'strange rule' at the cost of having more tasks
in the non-ready state prior to being released. I'll call
this model-AT (combined alan and tuck):


Assume range of priorities for EDF (or any other
scheme using this approach) is B..H; also
restrict tasks and PO priorities to be in range
B+1 .. H. Let T'P be base priority of task T, and
T'D its current deadline.

Let S denote the priority of the highest priority currently
locked PO. And let S'D be the deadline of
the task that has locked this PO. If no PO is locked let S equal B
and S'D be infinity (Time_Span_Last).

Rule 1
All ready queues (in B..H range) are ordered by deadline.

Rule 2
A task, T, can only become ready if T'P > S and T'D < S'D.

Rule 3
When a task becomes ready it is added to the ready queue
for pri level S and this is a dispatching point.


I think these rules are getting quite simple!

So if no locked POs then all tasks are immediately added to the
single ready queue at priority B (which is ordered by EDF) when
they become unblocked. Only when a PO is locked are some tasks
not immediately made ready.

Once a PO is locked (by task Q say) then S becomes the base until
it is unlocked. Only tasks that have higher Pri than S and shorter
deadline than Q can become ready.
If there are many of these they will be on the ready queue at level
S (and be in front of Q).
If any of these lock a higher level PO then this behaviour is
repeated at the higher level.
When Q runs again and unlocks the PO then its pri (and S) will drop
to B, all 'held tasks' will become ready and correct behaviour
follows.


> I don't think either of the models correspond to the
> best way to implement things, but my sense is that model-TT
> might be simpler to describe, and has the virtue that a
> task is queued at its active priority, rather than bouncing
> around to different queues that have nothing to do with
> its priority.  The "trick" is we simply make a task non-ready
> if it is eligible to run due to having a prio > S, but
> its relative priority contradicts its relative urgency.
>
>  From an implementation point of view, I think I would
> probably have each priority level with its own urgency
> queue, and then a queue of levels > S, ordered by the
> urgency of the head of the corresponding queue.  That is,
> I wouldn't actually remove the tasks from their priority
> level's queue if their urgency was too low, but I would just
> ignore the queue as a whole.
>

Sorry to keep arguing, but I feel model-AT is now quite simple
and easy to explain. For implementation, all that is needed
in addition to the usual stuff is a queue of non-ready tasks
that have become unblocked but are not yet ready. Probably best
to order this by priority. As level of S drops tasks on this
held queue with priority higher than new S will be added to
the usual ready queue (at new S).

I guess back to you for comment!

****************************************************************

From: Tucker Taft
Sent: Thursday, October 30, 2003 11:31 AM

> Let me first make sure I have model-TT correct
>
> stt> Another way to imagine it is that there are still queues at each
> stt> priority level (ordered by scheduler determined "urgency",
> stt> e.g. deadline),
> stt> but a task is only "ready" if it has a greater urgency than
> stt> all tasks on
> stt> lower priority ready queues, but ignoring queues at priorities
> stt> at the level of a locked PO or below.
> stt>
> stt> In other words, when a task's "urgency" drops below a
> stt> ready task of a lower,
> stt> eligible-to-run priority, it becomes non-ready.
>
> Tasks run at their usual base priorities, or with PO
> priorities. We get EDF behaviour by making all 'ready
> tasks', with priority higher than that of the task with
> shortest deadline, non-ready (unless a PO is locked,
> but let ignore this situation for a moment).
>
> So if we have the range 1.10 for EDF, and a task with pri
> 10 was running and then a task with pri 1 was released with
> a shorted deadline then the running task and all tasks on
> ready queues in range 2.10 would need to be made non-ready.
> Correct?

Yes.  (BTW, I think we should use the term "highest urgency"
or "most urgent" instead of "shortest deadline" to keep it
relatively general.)

> Assuming I have this correct I have some concerns (but I agree
> that model-TT gives the correct EDF behaviour).
>
> 1) Being 'ready' and on a ready queue are synonymous in the
> dispatching model - hence becoming non-ready would mean being
> removed from ready queues. Hence many tasks for most of the time
> will be in limbo (not in any state currently defined for tasks).

This seems to be true also of the model you suggest below.  I guess
you are saying this "limbo" state is easier to handle if tasks
can enter it only immediately after becoming unblocked, but once out
of limbo and onto a ready queue, they don't reenter limbo without
first becoming blocked again.  I'm not sure I see the big
difference in terms of understandability.  I think we all
agree that we are trying to construct a model that uses part
but not all of the priority model.  We are not worried about
actual implementation mechanism.

> 2) I think the control we have over dispatching amounts to
> defining the dispatching points, and some postponement of
> when a task becomes ready (after being blocked, but before
> it executes).

Where does the "postponement" license come from?  Perhaps
if I understood that I would agree that restricting "limbo"
to immediately follow unblocking would be preferable.

> 3) Model-TT needs (I think) a task that is running to become
> non-ready (if lower pri task released with shorter deadline).
> This is quite new and opens up many potentially tricky issues.
> I do not think it is a natural way of defining the model
> (but I guess that is a question of taste!).

Perhaps a better way to talk about it is that a task
is "held" (a la Asynchronous Task Control) when its urgency
is lower than some lower priority ready task.

I see you are using the term "held" below as well;
it is probably a preferable term to "limbo" ;-).

> 4) Those who know the preemption model will find it confusing
> that priority is (apparently) being used for dispatching and
> PO locking - rather than just locking.

I guess it seemed odd to me that we still have queues at various
priorities, but the task's base priority had nothing to do with
what queue it was on.  I guess we could be saying that the
algorithm for the "active" priority is really what matters,
and we are both saying that the active priority is influenced by
PO locking as usual, but also by urgency.

> So what do we have.
>
> I think we both agree (at this stage) that when a task executes
> inside a PO it should execute with ceiling priority. Also that
> we should only have queues that need the head task to be looked at.

Yes, that sounds right.

> I think the use of non-ready should be limited to the postponement
> of when a task can become ready.

It would help me to understand why.

> The model should look straightforward when no POs are locked and
> extend naturally when POs are added to the model.

Yes, that sounds good too.

> Let me give a variation of what I last said, to get
> rid of the 'strange rule' at the cost of having more tasks
> in the non-ready state prior to being released. I'll call
> this model-AT (combined alan and tuck):
>
>
> Assume range of priorities for EDF (or any other
> scheme using this approach) is B..H; also
> restrict tasks and PO priorities to be in range
> B+1 .. H. Let T'P be base priority of task T, and
> T'D its current deadline.

I don't love the rule that you can't have tasks at level B.
Is that really necessary?  I suppose disallowing POs at
level B seems more reasonable, though I am not convinced,
since it seems we could set "S" to B in the rule below
if no POs are locked, even if there are POs at level B.

> Let S denote the priority of the highest priority currently
> locked PO. And let S'D be the deadline of
> the task that has locked this PO. If no PO is locked let S equal B
> and S'D be infinity (Time_Span_Last).
>
> Rule 1
> All ready queues (in B..H range) are ordered by deadline.
>
> Rule 2
> A task, T, can only become ready if T'P > S and T'D < S'D.
>
> Rule 3
> When a task becomes ready it is added to the ready queue
> for pri level S and this is a dispatching point.
>
>
> I think these rules are getting quite simple!

Perhaps suspiciously simple.  Since you are saying that once
a task is on a ready queue, it stays on that queue until it
runs, I am surprised it can work.  How dynamic are the deadlines?
It would seem to be nice to describe the solution in terms of
the properties of the queues at any given time as a function
of priorities and urgencies, rather than the properties at the
moment a task is added to a queue.

If I follow your model correctly, then it has the following
property:  Presuming at some point there are tasks queued at various
priority levels, then there is a PO locked at each level where
there are tasks queued (except the lowest), and any task queued at a
level L (other than the one holding the lock) has base priority
greater than L, and urgency higher than the task holding the lock
(as well as any task queued at lower levels).

Does that sound right?  If so, let's try to simplify a bit,
and eliminate the "limbo" state.

First, there seems no harm in putting higher-urgency tasks in a queue at
the level one above that of the locked PO.  This will make it a bit more
likely that the base prio matches the queue priority, and might
eliminate the need for the restriction against using level "B" itself.

Second, rather than putting lower-urgency or lower-priority tasks into
limbo, can't we put them where they would have gone if they had arrived
before the PO was locked, but make sure that tasks with a PO lock
always stay on the front of their queue when preempted?
That is, look for the highest prio locked PO that the new
task would have preempted, and add it to the queue just above that.
If there is no such PO, then add it to the lowest level queue.

In this model, the only queues that have tasks on them are those
at the level of a locked PO, which contains the task holding
the lock while preempted, and those immediately above that, which
holds tasks of higher prio than this locked PO (but not higher than
some higher-level locked PO), and higher urgency than
the task holding the PO.  While preempted, a task holding a lock
is kept at the head of its queue (as though it has very high urgency).

If there are no POs locked, then all ready tasks are in the lowest level
queue.  When there are POs locked, the tasks are split out into
multiple queues, at levels one above that of each locked PO, plus any
preempted tasks with locks at the PO's level.  Tasks go into the
highest level where their urgency and priority exceeds that of
the task holding the lock at the immediately lower level.
If either their urgency or priority is not greater, they go in
a lower level.

> So if no locked POs then all tasks are immediately added to the
> single ready queue at priority B (which is ordered by EDF) when
> they become unblocked. Only when a PO is locked are some tasks
> not immediately made ready.
>
> Once a PO is locked (by task Q say) then S becomes the base until
> it is unlocked. Only tasks that have higher Pri than S and shorter
> deadline than Q can become ready.
> If there are many of these they will be on the ready queue at level
> S (and be in front of Q).
> If any of these lock a higher level PO then this behaviour is
> repeated at the higher level.
> When Q runs again and unlocks the PO then its pri (and S) will drop
> to B, all 'held tasks' will become ready and correct behaviour
> follows.
> ...
> Sorry to keep arguing, but I feel model-AT is now quite simple
> and easy to explain. For implementation, all that is needed
> in addition to the usual stuff is a queue of non-ready tasks
> that have become unblocked but are not yet ready. Probably best
> to order this by priority. As level of S drops tasks on this
> held queue with priority higher than new S will be added to
> the usual ready queue (at new S).
>
> I guess back to you for comment!

I don't think we need the "limbo" queue, as explained above.
Also, if we put tasks in the level one above that of the
locked PO, I think we can eliminate the funny restriction to
levels B+1..H, and can use all the levels.  This does require
that while preempted, a task holding a lock is treated as though
it is very urgent, so it is kept at the head of the queue at the
PO's level.

With those two "tweaks" I think we have a model that satisfies
both your goals and mine.

****************************************************************

From: Alan Burns
Sent: Sunday, November 2, 2003  5:27 AM

...
>> So what do we have.
>>
>> I think we both agree (at this stage) that when a task executes
>> inside a PO it should execute with ceiling priority. Also that
>> we should only have queues that need the head task to be looked at.
>
> Yes, that sounds right.
>
>> I think the use of non-ready should be limited to the postponement
>> of when a task can become ready.
>
> It would help me to understand why.

The way all ceiling type protocols work is for a task to
be delayed from really becoming ready; once it starts executing
all its resources are available. So with the standard ceiling
locking, a PO delays other tasks from executing (although they
can be put on the ready queue). With Preemption Level locking
the algorithm is described in terms of the rules for a task
becoming ready. Also I feel within Ada it will be easier to
describe a model (if it needs limbo states) in terms of delayed
entry to a ready queue rather than disappearing from a ready queue.

But let's hope we can work on a model that does not need limbo
states!

...
>> Let me give a variation of what I last said, to get
>> rid of the 'strange rule' at the cost of having more tasks
>> in the non-ready state prior to being released. I'll call
>> this model-AT (combined alan and tuck):
>>
>> Assume range of priorities for EDF (or any other
>> scheme using this approach) is B..H; also
>> restrict tasks and PO priorities to be in range
>> B+1 .. H. Let T'P be base priority of task T, and
>> T'D its current deadline.
>
> I don't love the rule that you can't have tasks at level B.
> Is that really necessary?  I suppose disallowing POs at
> level B seems more reasonable, though I am not convinced,
> since it seems we could set "S" to B in the rule below
> if no POs are locked, even if there are POs at level B.

Yes, only disallowing POs at B may work.

>> Let S denote the priority of the highest priority currently
>> locked PO. And let S'D be the deadline of
>> the task that has locked this PO. If no PO is locked let S equal B
>> and S'D be infinity (Time_Span_Last).
>>
>> Rule 1
>> All ready queues (in B..H range) are ordered by deadline.
>>
>> Rule 2
>> A task, T, can only become ready if T'P > S and T'D < S'D.
>>
>> Rule 3
>> When a task becomes ready it is added to the ready queue
>> for pri level S and this is a dispatching point.
>>
>>
>> I think these rules are getting quite simple!
>
> Perhaps suspiciously simple.  Since you are saying that once
> a task is on a ready queue, it stays on that queue until it
> runs, I am surprised it can work.  How dynamic are the deadlines?
> It would seem to be nice to describe the solution in terms of
> the properties of the queues at any given time as a function
> of priorities and urgencies, rather than the properties at the
> moment a task is added to a queue.

Remember, for whatever model we come up with, whenever a task
changes its absolute deadline (or urgency level) this is a dispatching
point and the task is taken out of its ready queue and requeued
using the new deadline. So an EDF periodic task will, at the end of
its period, calculate its new absolute deadline (for the next
period), change its deadline attribute (this will cause dispatching),
and then delay until the start of the next period.
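
Purely as a sketch of this pattern -- the package name Deadline_Support
and its Set_Deadline operation are placeholders for whatever interface
is finally defined, not part of the proposal as it stands -- such a
periodic task might look like:

with Ada.Real_Time;    use Ada.Real_Time;
with Deadline_Support; -- assumed: provides Set_Deadline for a task

procedure EDF_Periodic_Demo is

   task Periodic;

   task body Periodic is
      Period       : constant Time_Span := Milliseconds (10);
      Next_Release : Time := Clock;
   begin
      loop
         --  ... the work of one job goes here ...

         --  Compute the absolute deadline of the next job and set it;
         --  changing the deadline is a dispatching point, so the task
         --  is requeued by the new deadline.
         Next_Release := Next_Release + Period;
         Deadline_Support.Set_Deadline (Next_Release + Period);

         --  Wait for the start of the next period.
         delay until Next_Release;
      end loop;
   end Periodic;

begin
   null;
end EDF_Periodic_Demo;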

Once the dispatching model is sorted out we need to return to the
representation of deadlines/urgency (as discussed in the AI).

> If I follow your model correctly, then it has the following
> property:  Presuming at some point there are tasks queued at various
> priority levels, then there is a PO locked at each level where
> there are tasks queued (except the lowest), and any task queued at a
> level L (other than the one holding the lock) has base priority
> greater than L, and urgency higher than the task holding the lock
> (as well as any task queued at lower levels).
>
> Does that sound right?  If so, let's try to simplify a bit,
> and eliminate the "limbo" state.

Yes, that's my thinking. It follows that at any level L the tail
of the queue is the task that has locked the PO.

> First, there seems no harm in putting higher-urgency tasks in a queue at
> the level one above that of the locked PO.  This will make it a bit more
> likely that the base prio matches the queue priority, and might
> eliminate the need for the restriction against using level "B" itself.

Does this not mean we have to reserve a priority above the top PO?
In other words the H level cannot be used? Also, is one level above
the PO a level that can be used by other tasks and POs, or an
artificial level 'between' application levels? I feel a rule of
no POs at B (or use of the existing permission to round up any PO
at B to B+1) is more straightforward.

> Second, rather than putting lower-urgency or lower-priority tasks into
> limbo, can't we put them where they would have gone if they had arrived
> before the PO was locked, but make sure that tasks with a PO lock
> always stay on the front of their queue when preempted?
> That is, look for the highest prio locked PO that the new
> task would have preempted, and add it to the queue just above that.
> If there is no such PO, then add it to the lowest level queue.

I agree that the removal of the limbo state would be good. But
I cannot quite follow through your model here (sorry). If there are
a number of POs locked (at increasing priority levels) then would
you not have to go down all the POs to find the one that the
new task is allowed to preempt and then put it at the head of
that queue level (or +1 using your alternative)?

I know we are talking about a model not an implementation
scheme but this seems a bit excessive - but is perhaps better
than introducing a new idea such as limbo (or held).

With the protocol all locked POs have a strict order. If PO1
and PO2 are both locked then PO1 must have higher priority
and greater urgency than PO2 (or the other way round). So
it may be possible to define where a newly released task should
be placed.

Is this what you were thinking? If so I'll do some more work to
be sure it 'works'.

> In this model, the only queues that have tasks on them are those
> at the level of a locked PO, which contains the task holding
> the lock while preempted, and those immediately above that, which
> holds tasks of higher prio than this locked PO (but not higher than
> some higher-level locked PO), and higher urgency than
> the task holding the PO.  While preempted, a task holding a lock
> is kept at the head of its queue (as though it has very high urgency).
>
> If there are no POs locked, then all ready tasks are in the lowest level
> queue.  When there are POs locked, the tasks are split out into
> multiple queues, at levels one above that of each locked PO, plus any
> preempted tasks with locks at the PO's level.  Tasks go into the
> highest level where their urgency and priority exceeds that of
> the task holding the lock at the immediately lower level.
> If either their urgency or priority is not greater, they go in
> a lower level.
>
>> So if no locked POs then all tasks are immediately added to the
>> single ready queue at priority B (which is ordered by EDF) when
>> they become unblocked. Only when a PO is locked are some tasks
>> not immediately made ready.
>>
>> Once a PO is locked (by task Q say) then S becomes the base until
>> it is unlocked. Only tasks that have higher Pri than S and shorter
>> deadline than Q can become ready.
>> If there are many of these they will be on the ready queue at level
>> S (and be in front of Q).
>> If any of these lock a higher level PO then this behaviour is
>> repeated at the higher level.
>> When Q runs again and unlocks the PO then its pri (and S) will drop
>> to B, all 'held tasks' will become ready and correct behaviour
>> follows.
>> ...
>> Sorry to keep arguing, but I feel model-AT is now quite simple
>> and easy to explain. For implementation, all that is needed
>> in addition to the usual stuff is a queue of non-ready tasks
>> that have become unblocked but are not yet ready. Probably best
>> to order this by priority. As level of S drops tasks on this
>> held queue with priority higher than new S will be added to
>> the usual ready queue (at new S).
>>
>> I guess back to you for comment!
>
> I don't think we need the "limbo" queue, as explained above.
> Also, if we put tasks in the level one above that of the
> locked PO, I think we can eliminate the funny restriction to
> levels B+1..H, and can use all the levels.  This does require
> that while preempted, a task holding a lock is treated as though
> it is very urgent, so it is kept at the head of the queue at the
> PO's level.

Remember this is the usual rule for preemption (ie stay at head).

> With those two "tweaks" I think we have a model that satisfies
> both your goals and mine.

I think we may be closer if I understand correctly your latest thoughts.

****************************************************************

From: Tucker Taft
Sent: Sunday, November  2, 2003 10:39 AM

> ...
> > First, there seems no harm in putting higher-urgency tasks in a queue at
> > the level one above that of the locked PO.  This will make it a bit more
> > likely that the base prio matches the queue priority, and might
> > eliminate the need for the restriction against using level "B" itself.
>
> Does this not mean we have to reserve a priority above top PO.

No.  Nothing can ever preempt a task with a locked PO at level H,
so there is no need to provide a queue above that level.

> In other words H level cannot be used?

H can be used.  H+1 would never be needed since there
are no tasks at priority > H.

> ... Also is one level above
> the PO a level that can be used by other tasks and POs or an
> artificial level 'between' application levels?

No, as I tried to explain below, so long as a task with a locked
PO is kept at the head of its queue, it is OK to have other tasks
on that queue as well.

> ... I feel a rule of
> no POs at B (or use exisiting permission to round up any PO
> at B to B+1) is more straightforward.

I don't think we need either rule.

> > Second, rather than putting lower-urgency or lower-priority tasks into
> > limbo, can't we put them where they would have gone if they had arrived
> > before the PO was locked, but make sure that tasks with a PO lock
> > always stay on the front of their queue when preempted?
> > That is, look for the highest prio locked PO that the new
> > task would have preempted, and add it to the queue just above that.
> > If there is no such PO, then add it to the lowest level queue.
>
> I agree that the removal of the limbo state would be good. But
> I cannot quite follow through your model here (sorry). If there are
> a number of POs locked (at increasing priority levels) then would
> you not have to go down all the POs to find the one that the
> new task is allowed to preempt and then put it at the head of
> that queue level (or +1 using your alternative)?

I suppose you could defer actually putting tasks on the queue
if it was more efficient, so long as it was equivalent to putting
them on the correct queue at the beginning.  I would argue we should not
worry about the efficiency of finding the right queue.  There are
so many clever things you can do once you know what the "right"
answer is, as far as queueing, that this seems like a very
minor problem.  We use a binary "heap" of priorities in our
run-time system simply to speed up the process of finding the
highest-priority queue with at least one task on it.  I could
see using binary "heaps" in this model as well to reduce the time
to find the right place to insert tasks of various urgencies/priorities.

> I know we are talking about a model not an implementation
> scheme but this seems a bit excessive - but is perhaps better
> than introducing a new idea such as limbo (or held).

My concern is that we should be able to describe the steady
state situation, rather than rely on tricks involving deferring
putting tasks on a queue, since if they had come in just
a millisecond earlier, we might have had no trouble placing them
on an appropriate queue.

> With the protocol all locked PO have a strict order. If PO1
> and PO2 are both locked then PO1 must have higher priority
> and greater urgency than PO2 (or the other way round). So
> it may be possible to define where a newly released task should
> be placed.

Yes, I think so, and it shouldn't really matter whether it is
newly released, or released a bit earlier.  Its priority, its urgency,
and the priority/urgency of the various tasks with locked POs, should fully
determine where each ready task should be queued.

To be specific, a ready task should be queued immediately above
the priority of the highest priority locked PO for which it exceeds
both the PO's priority and the lock-holding task's urgency.
The queues are ordered by urgency, except that a lock-holding task
is queued at the front of its queue while preempted.

> Is this what you were thinking? If so I'll do some more work to
> be sure it 'works'.

Yes, I supposed "this" is what I was thinking, presuming we
agree on what "this" is ;-).

> > In this model, the only queues that have tasks on them are those
> > at the level of a locked PO, which contains the task holding
> > the lock while preempted, and those immediately above that, which
> > holds tasks of higher prio than this locked PO (but not higher than
> > some higher-level locked PO), and higher urgency than
> > the task holding the PO.  While preempted, a task holding a lock
> > is kept at the head of its queue (as though it has very high urgency).
> >
> > If there are no POs locked, then all ready tasks are in the lowest level
> > queue.  When there are POs locked, the tasks are split out into
> > multiple queues, at levels one above that of each locked PO, plus any
> > preempted tasks with locks at the PO's level.  Tasks go into the
> > highest level where their urgency and priority exceeds that of
> > the task holding the lock at the immediately lower level.
> > If either their urgency or priority is not greater, they go in
> > a lower level.
> ...
> > I don't think we need the "limbo" queue, as explained above.
> > Also, if we put tasks in the level one above that of the
> > locked PO, I think we can eliminate the funny restriction to
> > levels B+1..H, and can use all the levels.  This does require
> > that while preempted, a task holding a lock is treated as though
> > it is very urgent, so it is kept at the head of the queue at the
> > PO's level.
>
> Remember this is the usual rule for preemption (ie stay at head).

Yes, that is why it seemed like a natural rule to me.

****************************************************************

From: Alan Burns
Sent: Sunday, November  9, 2003  8:43 AM

I think we now have a scheme that does not need a limbo state:
tasks can be put immediately onto the correct ready queue
when they become unblocked. One needs to identify the highest
locked PO that has a priority and urgency less than that of
the released task.

A relatively minor issue remains: whether to use the PO queue, or the
PO+1 queue, for the tasks.

>> ...
>> > First, there seems no harm in putting higher-urgency tasks in a queue at
>> > the level one above that of the locked PO.  This will make it a bit more
>> > likely that the base prio matches the queue priority, and might
>> > eliminate the need for the restriction against using level "B" itself.
>>
>> Does this not mean we have to reserve a priority above top PO.
>
> No.  Nothing can ever preempt a task with a locked PO at level H,
> so there is no need to provide a queue above that level.

Agreed

If the PO+1 scheme is used then one needs the rule that a task with
a locked PO must always be at the front of the ready queue for the
PO's priority. So the queue is not a pure EDF (urgency) queue.

The PO-only scheme seems to need the rule that there are no PO priorities
at the base level, which I agree seems strange, so I'm happy to
go along with your model, which I think has the following form.
Let T'P be the base priority of task T, T'A its active priority
and T'U its urgency.

Rule 1
All ready queues in the specified range B..H are ordered by urgency
apart from when a task is preempted while executing within a PO,
in this case the task is always at the head of the ready queue.

Rule 2
When a task T becomes unblocked it is placed on the ready queue
identified as follows.
Let S1, S2, ..., Sn be the sequence tasks (ordered by active
priority, S1 the highest) that are executing within a PO and
have the property that Si'A < T'P and Si'U < T'U.
If there are no tasks in this sequence then T is placed on the
ready queue at priority B.
If the sequence is not empty then T is placed on the ready queue
at priority S1'A+1

Rule 3
When a task is chosen for execution it runs with the active priority
determined by the ready queue from which the task was taken. If
preemptied it returns to the ready queue for its active priority.
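
As a purely illustrative reading of Rule 2 (every name below is invented
for the example, and urgency is modelled as an absolute deadline, so an
earlier deadline means more urgent), the placement could be computed
along these lines:

with Ada.Real_Time; use Ada.Real_Time;

procedure Placement_Rule_Sketch is

   B : constant := 1;    -- lowest priority of the band
   H : constant := 10;   -- highest priority of the band
   subtype Band is Integer range B .. H;

   type Task_Info is record
      Prio     : Band;   -- T'P for the new task, or Si'A for a lock holder
      Deadline : Time;   -- the task's urgency, as an absolute deadline
   end record;

   type Holder_List is array (Positive range <>) of Task_Info;

   Now : constant Time := Clock;

   --  Rule 2: S1, S2, ... are the tasks executing within a PO whose
   --  active priority is below T'P and whose urgency is below T'U;
   --  T goes on the queue at S1'A + 1 (S1 the highest such task),
   --  or on the queue at B if there is no such task.
   function Queue_For (T : Task_Info; Holders : Holder_List) return Band is
      Result : Band := B;
   begin
      for I in Holders'Range loop
         if Holders (I).Prio < T.Prio                  -- Si'A < T'P
           and then T.Deadline < Holders (I).Deadline  -- T more urgent than Si
           and then Holders (I).Prio + 1 > Result
         then
            Result := Holders (I).Prio + 1;
         end if;
      end loop;
      return Result;
   end Queue_For;

begin
   --  Example: T (pri 6, tighter deadline) beats a lock holder at pri 4,
   --  so it is placed on the queue at level 5.
   if Queue_For ((Prio => 6, Deadline => Now + Milliseconds (5)),
                 (1 => (Prio => 4, Deadline => Now + Milliseconds (20)))) /= 5
   then
      raise Program_Error;
   end if;
end Placement_Rule_Sketch;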

I guess there will be a better way of saying all this for the ARM.

We will need a statement that for multiprocessor implementations
other rules may apply. We also need to ensure that the model works
when tasks have inherited priorities due to other reasons
(rendezvous etc) - but I do not think these give any problems.

A new issue - how to set the urgency of a task. In the original AI
I had a new attribute of a task (deadline) and a new pragma for setting
the deadline of a task on creation (like priority). It would be
good if this could be made more general. Any idea how this could be
done? Of course for some schemes (EDF for example) the smaller the
value of the parameter the more urgent; for other schemes (value-based
dispatching) the larger the more urgent. Whatever the scheme we would
need to have the rule that any change to the urgency attribute would
need to be a dispatching point.

I think we are making progress!

****************************************************************

From: Tucker Taft
Sent: Sunday, November  9, 2003 11:20 AM

> I think we now have a scheme that does not need a limbo state,
> tasks can be put immediately on to the correct ready queue
> when they become unblocked. One needs to identify the highest
> locked PO that has a priority and urgency less than that of
> the released task.

We are on the same wavelength now...

> A relatively minor issue remains. To use PO queue, or PO+1 queue
> for the tasks.

I like using PO+1 because, as I mentioned, it makes it more
likely a task will end up on its "true" priority queue.
In particular, if there are only two levels of tasks, with the
higher level for ones that can preempt locked POs when
urgent, these will run at the higher level exactly when
they are preempting.

> ...
> If PO+1 scheme is used then one needs the rule that a task with
> a locked PO must always be at the front of the ready queue for the
> PO's priority. So the queue is not a pure EDF (urgency) queue.

Right.

> ...
> Let T'P be the base priority of task T, T'A its active priority
> and T'U its urgency.
>
> Rule 1
> All ready queues in the specified range B..H are ordered by urgency
> apart from when a task is preempted while executing within a PO,
> in this case the task is always at the head of the ready queue.
>
> Rule 2
> When a task T becomes unblocked it is placed on the ready queue
> identified as follows.
> Let S1, S2, ..., Sn be the sequence tasks (ordered by active
                                    ^^ "of"
> priority, S1 the highest) that are executing within a PO and
> have the property that Si'A < T'P and Si'U < T'U.
> If there are no tasks in this sequence then T is placed on the
> ready queue at priority B.
> If the sequence is not empty then T is placed on the ready queue
> at priority S1'A+1
>
> Rule 3
> When a task is chosen for execution it runs with the active priority
> determined by the ready queue from which the task was taken. If
> preemptied it returns to the ready queue for its active priority.
        ^^^^ "ted"
>
>
> I guess there will be a better way of saying all this for the ARM.
>
> We will need a statement that for multiprocessor implementations
> other rules may apply. Also need to ensure that the model works
> when task have inheritance priorities due to other reasons
> (rendezvous etc) - but I do not think they give any problems.

I agree.

> A new issue - how to set the urgency of a task. In the original AI
> I had a new attribute of a task (deadline) and a new pragma for setting
> the deadline of a task on creation (like priority). It would be
> good if this could be made more general. Any idea how this could be
> done? Of course for some schemes (EDF for example) the smaller the
> value of parameter the more urgent, for other schemes (value-based
> dispatching) the larger the more urgent. Whatever the scheme we would
> need to have the rule that any change to the urgency attribute would
> need to be a dispatching point.

Using pragmas and attributes is generally more work for the
compiler builder.  Could all this be done at run-time using
a package and task ids?  Or is having the urgency dynamic
not common enough to justify?

I am a little unclear whether we are trying to standardize
earliest-deadline first scheduling, or rather standardize
support for having multiple priority-specific scheduling
algorithms.  What do we really need to standardize at this point?
Perhaps just a framework in which priority-specific
algorithms can be inserted...

> I think we are making progress!

Ditto.

****************************************************************

From: Alan Burns
Sent: Monday, November 10, 2003  9:07 AM

OK, let me stay with the representation issue

>> A new issue - how to set the urgency of a task. In the original AI
>> I had a new attribute of a task (deadline) and a new pragma for setting
>> the deadline of a task on creation (like priority). It would be
>> good if this could be made more general. Any idea how this could be
>> done? Of course for some schemes (EDF for example) the smaller the
>> value of parameter the more urgent, for other schemes (value-based
>> dispatching) the larger the more urgent. Whatever the scheme we would
>> need to have the rule that any change to the urgency attribute would
>> need to be a dispatching point.
>
> Using pragmas and attributes is generally more work for the
> compiler builder.  Could all this be done at run-time using
> a package and task ids?  Or is having the urgency dynamic
> not common enough to justify?
>
> I am a little unclear whether we are trying to standardize
> earliest-deadline first scheduling, or rather standardize
> support for having multiple priority-specific scheduling
> algorithms.  What do we really need to standardize at this point?
> Perhaps just a framework in which priority-specific
> algorithms can be inserted...

The grand scheme is in 3 parts: first, to give support for
priority-specific scheduling (FIFO and Round Robin); second,
to use this framework to define EDF; third (if possible),
to generalise the EDF solution for any urgency parameter.

The priority-specific scheduling is defined in AI-355 and seems
quite straightforward. Our work on preemption levels now implies
that a band of priorities may be needed for a scheme, but that
is easy to work into the priority-specific AI. To support
EDF we originally had

1. a predefined task attribute (ie a language defined instantiation
of the task attribute generic) called deadline

2. a pragma (called deadline) that appears in a task spec to control
dispatching during activation (similar to the use of priority pragma)

3. all changes to a task's deadline (ie calls to the Set_Value) are
dispatching points

I guess (1) and (3) could be achieved by a run-time package, but I am not
sure (2) can, although I guess we could live without this by giving the
attribute a default value of Time_First.

I cannot see an easy way to generalise this. Perhaps a generic in which
the 'type' for urgency is a parameter, a Boolean to say which end
of the type is the most urgent, and two parameters that give the priority
band for the scheme. The two priority parameters will then have to match
those given in the priority-specific pragma (unless priority-specific
is defined to say all priority levels are initially FIFO but may
be changed via calls to some run-time package - but this sounds a bit
too dynamic).
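
Just to illustrate the kind of shape such a generic might take (this is
only a specification sketch with invented names, not a concrete
proposal):

with System;
with Ada.Task_Identification; use Ada.Task_Identification;

generic
   type Urgency is private;
   with function "<" (Left, Right : Urgency) return Boolean is <>;
   Smaller_Is_More_Urgent : Boolean;   -- which end of the type is most urgent
   First_Priority : System.Priority;   -- lower bound of the priority band
   Last_Priority  : System.Priority;   -- upper bound of the priority band
package Generic_Urgency_Dispatching is

   --  Any change to a task's urgency would be a dispatching point.
   procedure Set_Urgency (U : Urgency; T : Task_Id := Current_Task);
   procedure Get_Urgency (U : out Urgency; T : Task_Id := Current_Task);

end Generic_Urgency_Dispatching;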

EDF is so well known it would be good for Ada to give direct
support, even if it can be derived from a general purpose generic.

Do you feel the use of a generic is a runner?

****************************************************************

From: Tucker Taft
Sent: Monday, November 10, 2003 10:48 AM

...
> The priority-specific scheduling is defined in AI-355 and seems
> quite straightforward. Our work on preemption levels now implies
> that a band of priorities may be needed for a scheme, but that
> is easy to work into the priority-specific AI. To support
> EDF we originally had
>
> 1. a predefined task attribute (ie a language defined instantiation
> of the task attribute generic) called deadline

Oh, I had forgotten you meant *that* kind of attribute.  I was thinking
of the language "'" kind of attribute.  Instantiations of this generic
do not create work for the compiler writer.

> 2. a pragma (called deadline) that appears in a task spec to control
> dispatching during activation (similar to the use of priority pragma)

Here you are creating compiler work.  Using a settable default might be
preferable.

> 3. all changes to a task's deadline (ie calls to the Set_Value) are
> dispatching points

Now you are putting special semantics into an instantiation of a
general-purpose generic.  That may be asking for trouble.  It might
be better to just define your own package, if you want it to have
a special effect when Set_Value is called.

> I guess (1) and (3) could be achieved by a run-time package, but not sure
> (2) can, although I guess we could live without this by giving the
> default value of the attribute Time_First.

That would certainly simplify the proposal from a compiler implementor's
point of view.

> I cannot see an easy way to generalise this. Perhaps a generic in which
> the 'type' for urgency is a parameter, a boolean to say which end
> of the type is the most urgent, and two parameters that give the priority
> band for the scheme. The 2 priority parameters will then have to match
> those given in the priority-specific pragma (unless priority-specific
> is defined to say all priority levels are initially FIFO but may
> be changed via calls to some run-time package - but this sounds a bit
> too dynamic).

I don't see the need to generalize it to this extent.  Each kind of scheduling
would probably need its own package, scheduler, etc.  It would still be
nice to have a general *model* indicating how these new kinds of algorithms
could be supported, without necessarily providing an amazing generic of
some sort that did it directly.

> EDF is so well known it would be good for Ada to give direct
> support, even if it can be derived from a general purpose generic.
>
> Do you feel the use of a generic is a runner?

No, I think EDF deserves its own package with an EDF-oriented interface.
But it might be nice to describe a general model which allows something
other than priority (i.e., the "urgency") to take precedence in task scheduling,
while priority still governs use of POs.  This is essentially the
"preemption" level concept, except it allows us to preserve the single
notion of priority for governing POs, while allowing other parameters
like deadline to control scheduling between tasks that are not involved
with a PO lock.

****************************************************************

From: Alan Burns
Sent: Tuesday, November 11, 2003  9:34 AM

So we have the following for support of EDF

1. A language defined package such as:

with Ada.Real_Time;
with Ada.Task_Identification; use Ada.Task_Identification;
package Deadline_Support is
  subtype Deadline is Ada.Real_Time.Time;
  Default_Deadline : constant Deadline := Ada.Real_Time.Time_First;
  procedure Set_Deadline(D : in Deadline; T : in Task_ID := Current_Task);
  procedure Get_Deadline(D : out Deadline; T : in Task_ID := Current_Task);
end Deadline_Support;

Perhaps with Implementation Advice to hold the deadline value
as a task attribute.

Is there any way Default_Deadline could be configurable?


2. Pragmas to set a band of priorities for EDF scheduling:

pragma Task_Dispatching_Policy (Priority_Specific);
pragma Priority_Policy (EDF_Within_Priorities,
                        first_priority, last_priority);


3. Rules for EDF_Within_Priorities that give the preemption level
dispatching (see our last 300 emails).


4. A pragma to set the deadline of a task for its activation:

pragma Deadline();

Although you feel this would perhaps be too much of a change
from the compiler's point of view.

--

We do not give direct support for any other form of urgency, but
the support for EDF will show how it can be done for any other
X_Within_Priorities.

If you agree, I'll get back to the AI and attempt to write it up.

****************************************************************

From: Tucker Taft
Sent: Tuesday, November 11, 2003 10:53 AM

Sounds good to me!

****************************************************************

From: Alan Burns
Sent: Wednesday, November 19, 2003  5:12 AM

I've discussed the EDF AI with people at York and it became
clear that the model, as we had it, was difficult to follow.

To recap: when a task T is ready, it is added to the appropriate
ready queue as follows

a) find the right level, S say
b) add T to S+1
c) order all queues by EDF, unless T has been preempted while
executing within a PO, in which case keep this task at head of Q

S is defined in terms of locked POs (the highest locked PO with a
ceiling less than the priority of the new task, and with the task that
has locked the PO having a longer deadline).


In discussions the following variation was seen to be much more
intuitive. Basically tasks are put in queue S (hence always in
front of the task that locked the PO). This can be expressed
as follows:

a) all queues are EDF ordered
b) add T to the highest non-empty ready queue R such that
    T'Deadline < Deadline of task at tail of R, and
    T'Base > Priority level of R.
   If no such R exists then add T to L (the lowest level).

This requires that there be no PO ceilings at level L, as I had before.
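
For contrast with the PO+1 formulation above, an equally illustrative
sketch of this variation (again, every name is invented; each ready
queue is summarised by whether it is empty and by the deadline of the
task at its tail, which under this model is the task that locked the
PO at that level):

with Ada.Real_Time; use Ada.Real_Time;

procedure Final_Rule_Sketch is

   L : constant := 1;    -- lowest level of the band
   H : constant := 10;   -- highest level of the band
   subtype Band is Integer range L .. H;

   type Queue_Summary is record
      Empty         : Boolean := True;
      Tail_Deadline : Time;   -- deadline of the task at the tail, if not empty
   end record;

   type Queue_Array is array (Band) of Queue_Summary;

   --  b): add T to the highest non-empty ready queue R such that
   --  T'Deadline < deadline of the task at the tail of R and T'Base > R;
   --  if there is no such R, add T at the lowest level L.
   function Queue_For
     (T_Base : Band; T_Deadline : Time; Queues : Queue_Array) return Band is
   begin
      for R in reverse Band loop
         if not Queues (R).Empty
           and then T_Deadline < Queues (R).Tail_Deadline
           and then T_Base > R
         then
            return R;
         end if;
      end loop;
      return L;
   end Queue_For;

begin
   null;
end Final_Rule_Sketch;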

This model was preferred by people seeing the models cold
as:
1) all queues are pure EDF with no special rule about the
task which has locked a PO
2) no +1 needed
3) only ready queues need to be mentioned (not locked POs)
4) the rule of no POs at level L was OK, as an implementation is allowed
to round up a ceiling (ie from L to L+1)

Hence I'm going to write up the AI with this model but mention the
other model in the discussion section.

****************************************************************

From: Tucker Taft
Sent: Wednesday, November 19, 2003  7:05 AM

Ok.

****************************************************************

From: Robert Dewar
Sent: Monday, November 24, 2003  6:55 AM

what concerns me about this series of proposals for language additions from
Alan is that they have the feeling of "let's design some nice new features",
rather than "customer xxx who is using Ada is finding that it is a big problem
that Ada cannot support yyy". To me, this latter kind of input is always the
most critical kind of input for justification of any change.

It's not that any specific proposal lacks technical merit, just that in
aggregate the changes seem too large, and not well enough justified. I am
not convinced that any vendors will see value in implementing these features.

Alan, and others ... a better case needs to be made that these features are
critical for the future use and success of Ada.

****************************************************************

From: Tucker Taft
Sent: Monday, November 24, 2003  7:35 AM

I believe all of these grew out of the Int'l Real-Time Ada Workshop.
My sense is that many of the attendees are "real" users of Ada,
or consult to same.  Alan will have to elaborate.  In any case,
having more convincing rationale for need would seem useful, to
help WG9 with their consideration as well.

****************************************************************

From: Tullio Vardanega
Sent: Monday, November 24, 2003  7:53 AM

Yes, the whole range of real-time AIs has been conceived at
and pushed for by the IRTAW, prior to, during, and subsequent
to the workshop itself.
At this year's workshop in particular we had a number of
working sessions devoted to hammering out AIs that we felt
had the support of the community that the IRTAW represents.
As usual, we strove to have as many stakeholders represented at
the workshop as possible: we had developers, educators, vendors
and end-users (such as the European Space Agency).
It was indeed appreciated that the IRTAW (via Alan) would
have to strongly justify their demand, which -- just seen from
the stream of AIs that Alan just posted -- seems to exact a major
toll from language implementors.
This argument was discussed at length (as it will appear from the
IRTAW-12 proceedings that will be in the December issue of
the Ada Letters) and should certainly be shared with the ARG.
However, my view as a long-time IRTAW-er and as the program
chair for this year is that the proposed AIs have the same pedigree
as the Ravenscar Profile, which is, to date, the best known product
of the IRTAW series and a very solid and welcome "approved AI".

****************************************************************

From: Robert Dewar
Sent: Monday, November 24, 2003  8:13 AM

Right, I understand this, but it is still very important to differentiate
between nice-to-have, and missing-essential-functionality (some things are
of course in between). If you gather a bunch of language/domain experts
at a table, and ask them "what new neat features would you like to see?"
you will get lots of nice new neat features :-)

****************************************************************

From: Robert Dewar
Sent: Monday, November 24, 2003  8:17 AM

> which -- just seen from
> the stream of AIs that Alan just posted -- seems to exact a major
> toll from language implementors.

Actually I don't think any implementors will move much in the direction
of implementing any of these extensions unless they get input from their
prime customers asking for this functionality.

Ravenscar has finally found that kind of input (in the intended safety
critical niche), and so vendors have indeed started to implement this
capability, and several products are available.

I am unconvinced that we will see the same input for many of the new
proposed features.

If my skepticism results in considerably more evidence that I am wrong,
that's certainly a constructive and productive result!

****************************************************************

From: Alan Burns
Sent: Monday, November 24, 2003  10:26 AM

Re the comments and concerns raised by Robert.

First let me say, following Tullio, that the IRTAW workshop
series has considered a number of proposals/issues but has
only passed on to the ARG (via myself) those that it feels
are really important and timely. There are many other
potential AIs!

Consider the ones that I put out new versions of today
(obviously I should not have put them all out at the same
time!). They fall into a small number of groups:

1. Budget control. Understanding the amount of CPU resource a
task is taking is fundamental to the safe execution of any real-time
system. OSs in the high integrity area all monitor (in some
form or other) CPU usage as an error recognition facility.
Also, servers are increasingly being used to control allocation
(for example on at least one flying avionic system from
Honeywell). As such applications move away from building their
own run-times and OSs to the use of standard technology, language
and OS standards need to support these approaches. POSIX does
to some extent, and much is made in the Real-Time Java extension
of such support. AI307 and AI354 give optional but standard
means of providing the required budget control.  I can point to
industrial users that would use Ravenscar but need budget
control as well.

2. Extending the Dispatching facilities (AI355 and AI357).
Round Robin dispatching and EDF are not 'nice to haves'.
They are techniques in actual use. RR is very common and fits
well with the current FIFO policy. EDF is as well known as fixed
priority, and is certainly the preferred scheduling mechanism for
soft real-time as you get much more from the CPU (perhaps 40%
more before deadlines are missed). Look at the literature
on embedded systems and what they want from scheduling.
Again, Real-Time Java claims to give support here. Indeed it
goes much further, with support for value-based scheduling,
admission control etc. It would be great to have these in
Ada, but they are beyond the state of practice (and we believe
they can be programmed with the facilities we now have/will have).
A rough sketch of EDF use follows at the end of this message.

3. Dynamic ceilings (AI 327). This is not as important, but it
finishes off the language: having dynamic priorities for tasks
but static priorities for ceilings was always unfortunate.
This AI gives an easy (optional) means of completing the
language in this area.

4. Support for recognition of task termination (AI10266).
I picked this up as it came out of the Ravenscar AI. It is
not a high priority, but other groups have asked for this
(eg the workshop on exception handling a couple of years
ago).

The AIs in groups 1 and 2 are, I believe, timely for real applications
and also show that Ada is moving forward and remains relevant
to the real-time area.
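
For concreteness, a rough sketch of a periodic EDF task (the names are
invented, and the exact forms of the configuration pragma and of the
Ada.Dispatching.EDF operations are those proposed here and may still
change):

pragma Task_Dispatching_Policy (EDF_Across_Priorities);
--  assumed form of the configuration pragma selecting EDF dispatching

with Ada.Real_Time;  use Ada.Real_Time;
with Ada.Dispatching.EDF;

package EDF_Example is
   Period : constant Time_Span := Milliseconds (10);

   task Sensor_Reader is
      pragma Relative_Deadline (Milliseconds (10)); -- deadline for activation
   end Sensor_Reader;
end EDF_Example;

package body EDF_Example is
   task body Sensor_Reader is
      Next : Time := Clock;
   begin
      loop
         --  the periodic work goes here
         Next := Next + Period;
         --  delay until the next release and, at the same point, set the
         --  new absolute deadline for that release
         Ada.Dispatching.EDF.Delay_Until_And_Set_Deadline
           (Delay_Until_Time => Next,
            Deadline_Offset  => Period);
      end loop;
   end Sensor_Reader;
end EDF_Example;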

****************************************************************

From: Randy Brukardt
Sent: Monday, November 24, 2003  7:30 PM

Alan said:

...
> Consider the ones that I put out new versions of today
> (obviously I should not have put them all out at the same
> time!).

Actually, I think that was a good thing, because it shows just how many of
these proposals there are. And keep in mind that there also are a number of
proposals already completed (Ravenscar, timers, etc.). I have a hard time
believing that there could be so many important pieces of missing
functionality.

> OSs in the high integrity area all monitor (in some
> form or other) CPU usage as an error recognition facility.
...

CPU usage is important to improving the performance of any Ada program, not
just real time ones. (If you can't measure it, you can't improve it!). It's
unfortunate that the package has evolved such that budgeting is given
priority over measurement. Clearly, both have value (and proposals having
value for multiple purposes should be preferred over those having value for
only one purpose).

> 4. Support for recognition of task termination (AI10266).
> I picked this up as it gave out of the Ravenscar AI. It is
> not a high priority; but other groups have asked for this
> (eg the workshop on exception handling a couple of years ago).

It's interesting that the capability that seems most important from a High
Integrity perspective (insuring that parts of the system don't become
inoperative without notice) is considered to have the lowest priority.
Certainly, this is the problem that I run into in practice (tasks going away
silently when the task's others exception handler either is absent, has a
bug, or exhausts the same resource).

In any case, I tend to share Robert's skepticism about the demand for some
of these proposals. I, like him, hope to be proven wrong.

****************************************************************

From: Alan Burns
Sent: Tuesday, November 25, 2003  2:53 AM

>...
>
>> OSs in the high integrity area all monitor (in some
>> form or other) CPU usage as an error recognition facility.
> ...
>
> CPU usage is important to improving the performance of any Ada program, not
> just real time ones. (If you can't measure it, you can't improve it!). It's
> unfortunate that the package has evolved such that budgeting is given
> priority over measurement. Clearly, both have value (and proposals having
> value for multiple purposes should be preferred over those having value for
> only one purpose).

The execution-time clocks AI is all about measurement; indeed,
one of its main uses will be to measure task execution times.
It has this multiple purpose (measurement alone, or measurement
and budgeting).
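
For concreteness, a measurement-only use might look like the rough
sketch below (the names and the workload are invented; the package
follows the execution time clocks proposal and details may still
change):

with Ada.Execution_Time;
with Ada.Real_Time;  use Ada.Real_Time;
with Ada.Text_IO;

procedure Measure_Work is
   use type Ada.Execution_Time.CPU_Time;
   Start : constant Ada.Execution_Time.CPU_Time := Ada.Execution_Time.Clock;
   X     : Long_Float := 1.0;
   Used  : Time_Span;
begin
   for I in 1 .. 1_000_000 loop            -- stand-in workload to be measured
      X := X + 1.0 / Long_Float (I);
   end loop;
   Used := Ada.Execution_Time.Clock - Start;  -- CPU time of this task only
   Ada.Text_IO.Put_Line
     ("CPU time used:" & Duration'Image (To_Duration (Used)) & " seconds");
end Measure_Work;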

>> 4. Support for recognition of task termination (AI10266).
>> I picked this up as it gave out of the Ravenscar AI. It is
>> not a high priority; but other groups have asked for this
>> (eg the workshop on exception handling a couple of years ago).
>
> It's interesting that the capability that seems most important from a High
> Integrity perspective (insuring that parts of the system don't become
> inoperative without notice) is considered to have the lowest priority.
> Certainly, this is the problem that I run into in practice (tasks going away
> silently when the task's others exception handler either is absent, has a
> bug, or exhausts the same resource).

Remember that a very simple scheme was deemed adequate for Ravenscar,
and there are other approaches with full Ada. My point was that an
Alan AI does not mean it's an AI from IRTAW.

> In any case, I tend to share Robert's skepticism about the demand for some
> of these proposals. I, like him, hope to be proven wrong.

I will not repeat the points I made in my previous email. But note that the
most recent attempt to define/design language features for a real-time
language (Real-Time Java) attempts to have all these features and
more. I say 'attempt' because I do not think the features are as well
thought out as these Ada proposals.

Also most of these features are low-overhead additions (eg additional run-time
packages) that, as Robert says, do not need to be supported until
customers need them. But if they are implemented, the RM defines a standard
way. The fact that Ada supports them makes a strong statement about
the language. Control over execution time and the introduction of
the notion of 'deadline' is not, I believe, excessive.

****************************************************************

From: Robert Dewar
Sent: Thursday, September  9, 2004  2:42 PM

My inclination would be to make this an optional feature. I don't
see a good argument for requiring vendors to implement this package.
If their customers need it, then they will ask for it. I prefer
development of the language in this kind of respect to be
customer/user driven.

****************************************************************

From: Randy Brukardt
Sent: Thursday, September  9, 2004  2:59 PM

My understanding was that this was an optional part of the Real-Time Annex
(which is of course optional itself). I certainly agree that this shouldn't
be a required part of the Real-Time Annex; the wording ought to reflect
that.

****************************************************************

From: Robert Dewar
Sent: Thursday, September  9, 2004  3:17 PM

OK, I quickly scanned and perhaps I missed the optional statement,
so if it is there, I am not so concerned.

****************************************************************

From: Randy Brukardt
Sent: Thursday, September  9, 2004  5:29 PM

I quickly scanned it too, and didn't find the statement, either, so I think
it is really missing. It should be added, IMHO.

****************************************************************

From: Pascal Leroy
Sent: Thursday, September  9, 2004  3:01 PM

Well, this feature is part of the real-time annex, so it's optional.  I
agree that this is a part of the annex that vendors probably won't
implement unless there is proven customer demand.

Surely you are not proposing a new notion of optionality, distinct from
the SNAs?

****************************************************************

From: Robert Dewar
Sent: Thursday, September  9, 2004  3:21 PM

Absolutely I am. I object to this unit if it is a required
part of the real time annex. Furthermore this is not a new
notion at all:

Implementation Permissions

10   An implementation need not support Asynchronous_Task_Control if it is
infeasible to support it in the target environment.

I would be happy to have a parallel statement for this new proposed
package. Of course in practice it is the vendor who decides whether
or not support is feasible, and that's just fine.

What this means for instance is that the corresponding ACATS tests
are optional, even if you intend to support the Annex.

****************************************************************

From: Pascal Leroy
Sent: Thursday, September  9, 2004  7:33 PM

Well, this rule is bogus anyway because the "infeasible" case is already
covered by the blanket statement about "impossible or impractical"
features in 1.1.3(6).

I disagree with the notion of adding optional features to the SNAs,
because then "supporting the real-time annex" becomes a meaningless
statement, or at least one that must be qualified by all sorts of
additional information about the optional features.

If we believe that AI 357 should be optional (and I am not necessarily
disagreeing with that) we should create a new annex about advanced
scheduling capabilities and put it there (together with the
priority-specific stuff and round-robin, I guess).

****************************************************************

From: Robert Dewar
Sent: Friday, September 10, 2004  3:09 AM

> Well, this rule is bogus anyway because the "infeasible" case is already
> covered by the blanket statement about "impossible or impractical"
> features in 1.1.3(6).

I disagree. The point of adding that explicit implementation
permission was precisely to signal that this particular feature
is one that may well not be practical to implement. The idea
behind 1.1.3(6) [AI 325 :-)] was to cover unusual cases. Something
is wrong if we find that particular features are routinely
not implemented, with implementors appealing to 1.1.3(6).

> I disagree with the notion of adding optional features to the SNAs,
> because then "supporting the real-time annex" become a meaningless
> statement, or at least one that must be qualified by all sorts of
> additional information about the optional features.

That's rhetorical overkill here. We are talking about a small
number of features (it appears that some people were not even
aware of the existing practice since its scope is small :-)

> If we believe that AI 357 should be optional (and I am not necessarily
> disagreeing with that) we should create a new annex about advanced
> scheduling capabilities and put it there (together with the
> priority-specific stuff and round-robin, I guess).

Seems overkill to me, but given these alternatives

1. Junk the feature completely

2. Keep it as some kind of secondary standard, like the old
Ravenscar

3. Keep it in a separate annex

4. Put it in Annex D optional

5. Put it in Annex D non-optional

My preference would be for 2. I would also put the new
priority specific stuff and round robin stuff there too.
Why? Because this stuff has not been implemented or used,
and I think it would be better to have some experience
before it goes into the standard. The original Ravenscar
document would have been premature as an addition to the
standard. Instead it came out as a secondary standard
and we gained valuable implementation and usage experience.

I can also live with 1 or 4

I dislike 3 (overkill, and I think you will get an annex
that will be in danger of being ignored).

I entirely reject 5.

****************************************************************

From: Alan Burns
Sent: Friday, September 10, 2004  2:37 AM

This general permission is current wording and is included in the new wording.
I believe this makes it clear that implementations need only support the new
policies if they wish to, ie when a customer asks.


Implementation Permissions

Implementations are allowed to define other task dispatching policies, but
need not support more than one task dispatching policy per partition.

****************************************************************

From: Robert Dewar
Sent: Friday, September 10, 2004  3:11 AM

> I believe this makes it clear that implementations need only support the new
> policies if they wish to, ie when a customer asks.
                            ^^
You surely mean eg. You don't want to tell implementors to implement
this *only* if customers ask!

P.S. this confusion of ie for eg is so common that when we wrote
our book on microprocessors, our copy editor said "you are the
only authors I have worked with who use ie correctly. However,
so many readers in the US mistake this for "for example" that
I recommend eliminating all uses ... which we did :-)

****************************************************************

From: Alan Burns
Sent: Friday, September 10, 2004  3:19 AM

I meant ie just to overstate the case - of course eg, really.

Does the permission wording eliminate your concerns?

****************************************************************

From: Robert Dewar
Sent: Friday, September 10, 2004  4:53 AM

Probably so, I have to read the entire statement in context

****************************************************************

From: Randy Brukardt
Sent: Friday, September 10, 2004  1:15 PM

> Does the permission wording eliminate your concerns?

No. The permission allows a vendor to insist that an entire partition use the
same dispatching policy. One could argue that that permission means that an
implementation doesn't have to support Priority_Specific_Dispatching, but I
think that is stretching a rule to the breaking point. If we don't want to
require such support, we should say so, in clear English, not hide it in some
unrelated (and existing) permission. After all, we want to make it clear to
users that such a feature is optional.

And, in any case, I don't see how it applies to a partition that is completely
Round_Robin or EDF. Certainly, the permission does not say that an
implementation doesn't have to implement language-defined policies, only that
one such policy need be supported per partition. How could an implementation
justify rejecting "pragma Task_Dispatching_Policy (Round_Robin);"? It certainly
doesn't do anything that would trigger the permission.

I know that you've said that these features are optional. In that case, there
should be a clear statement to that effect in the wording, as there is with
D.11(10). We do not want to confuse either the users or the reviewers of the
Amendment.

****************************************************************

From: Tucker Taft
Sent: Friday, September 10, 2004  4:34 PM

For what it is worth, I agree with Randy.

****************************************************************

From: Alan Burns
Sent: Tuesday, May 10, 2005  3:25 AM

>>EPD17: D.2.6 20/2 BIG BUG !?! This model does not work. Consider a
>>task in a protected operation, having its ceiling as active priority; it
>>calls a delay (stupid, but not forbidden); when it comes back, it can
>>end up in the lowest ready queue rather than a ready queue for the
>>ceiling priority it holds. Is this ok? I doubt it very much; it plays havoc
>>with the ceiling model.
>
>>	NOT A BIG BUG - a task cannot call a delay within a protected op
>>	ie it is forbidden (for all policies)!

>No it isn't. 9.5.1 (17) It's a bounded error with one of 3 possible outcomes:
>   - P_E raised
>   - deadlock
>   - nested protected action (which I read as "normal semantics")
>   - (none other)
>Havoc is not one of them  :-)
>It would be more robust to specify the semantics that puts the task back on
>the "right" queue.

My reading of 9.5.1 (17) is that there are 2 possible behaviors:
1) the bounded error is detected and Program_Error is raised;
2) the bounded error is not detected and the program continues; in this case a
number of things may happen, including the two noted but also (more
importantly, and not noted) that mutual exclusion can be broken. It is not
defined whether the delaying task holds the 'lock' on the PO - indeed it would
not if the recommended implementation model of the real-time annex is followed
(just use priority to get mutual exclusion).

With EDF the situation is no different. If the bounded error is not detected,
the task executing the delay within the PO will allow other tasks in - when it
next executes it will execute with the ceiling priority of the PO. This follows
from the use of Ceiling_Locking, which requires execution at ceiling level while
in the PO. Nothing in the definition of EDF undermines this.

So it's not havoc (well, it is, but the same havoc as Ada 95).

****************************************************************

From: Erhard Ploedereder
Sent: Tuesday, May 10, 2005  5:20 AM

> My reading of 9.5.1 (17) is that there are 2 possible behaviors
> 1)  bounded error is detected and Program_Error is raised
> 2) bounded error is not detected and the program continues; in this case
> a number of things may happen including the two noted but also (and more
> importantly and not noted) is that mutual exclusion can be broken.

Not so. The point of bounded error is that it is indeed bounded to the
alternatives specified. So, mutual exclusion must not be broken. (What you
are describing is erroneousness.) -- Aside: where you are going here, i.e.,
ceiling locking, you need an enforced Restriction "no voluntary release
of CPU" to make this a legal implementation.

I haven't checked present semantics, but my presumption is that a delayed
task, when resumed in Ada95, resumes at its active priority, i.e., is robust
against the nested case. Why you would want to introduce a less robust model
for EDF, I do not understand. Implementation-wise, one is as easy/hard as the
other. (Admittedly, wording-wise it is a bit harder - the EDF tasks need to
resume at their active preemption level rather than the bottom priority -- but
still doable.)

****************************************************************

From: Alan Burns
Sent: Tuesday, May 10, 2005  7:37 AM

Sorry, I strongly disagree. Yes, the bounded error is bounded: either the
condition is recognized and Program_Error is raised, or it is not recognized.
End of story. It makes no sense to say the condition is not recognized but the
following actions must be taken - how can they be taken if the condition is not
recognized!

To require a lock to deliver correct behavior following a bounded error is
completely against the point of the priority model. This is an issue for Ada
95, not the new EDF facility. The wording in 9.5.1 (17) clearly does not cover
all the cases listed in the paragraphs above. Deadlock or nesting is only an
issue if the blocking operation is a call on an entry. For a delay statement no
alternative is given.

Looking at the words of 9.5.1 (17) it is clear that 'deadlock' is not an
alternative - how can it be? You cannot force a program to deadlock - the
paragraph is just noting what could happen if the second alternative (no
action) is taken.

****************************************************************

From: Bob Duff
Sent: Tuesday, May 10, 2005  7:16 AM

> Not so. The point of bounded error is that it is indeed bounded to the
> alternatives specified.

True.  But the other point is that bounded errors try to avoid causing
difficulties for efficient implementation.  Is that the case here?

This discussion illustrates one reason why I dislike bounded errors:
they cause language designers to spend inordinate amounts of time
worrying about the detailed semantics of wrong programs.  The concept
"raise an exception" and the opposite concept "erroneous" are both much
simpler.  Sigh.

****************************************************************

From: Tucker Taft
Sent: Tuesday, May 10, 2005  8:52 AM

I agree with Alan that if the error is not detected,
mutual exclusion can be broken, just as is true for
Ada 95.  That is what we meant in Ada 95 by saying
that one possible outcome is that a "nested protected
action" might take place -- i.e. mutual exclusion might
be broken.

This will probably *not* directly lead to
erroneous execution due to concurrent access, as described
in 9.10(11), since we don't have two tasks simultaneously
manipulating the data inside the protected object;
one of them is blocked.  Of course it is clearly
a programming error, but the whole point here is that
we don't require that this particular error be detected
by the implementation.

****************************************************************

From: Erhard Ploedereder
Sent: Tuesday, May 10, 2005  12:43 PM

I didn't/don't like bounded errors either. However, in this specific case,
which includes the case of nested monitor calls, the language had better not go
for erroneousness or anything seemingly like it. Which is why I intend to
stick to my guns in this discussion.

Sometimes one does need nested monitor calls. (Not to be able to call
IO out of protected ops is a bloody nuisance.) Soft real-time guys won't be
deterred. If anything, this particular bounded error went overboard in my
opinion.  It should not have been an error at all.

True that hard real-time systems guys hate them and do so for good reason.
But to make them completely ill-defined for all would be really bad. And is
EDF only for hard real-time guys?  I sure hope not!

****************************************************************

From: Tucker Taft
Sent: Tuesday, May 10, 2005  2:44 PM

I don't know what you mean by "stick to your guns" on this one.
The Ada 95 rules clearly allow the mutual exclusion to be
broken as one of the outcomes.  That is not erroneous, because
it doesn't imply that you have concurrent access.  The "nested"
protected action occurs while the original locker is blocked.
Obviously not the programmer's intent, but not chaos either.
If you want this error detected, then that's what Annex H
provides.  I don't see any value in requiring that EDF detect
blocking but FIFO_Within_Priorities not detect blocking.

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, May 11, 2005  1:21 PM

That's not my issue. Yes, mutual exclusion can be broken in the sense that
you describe.  And I am not asking for any additional checking or suchlike
in the EDF rules.

My issue is that a nested protected call that does not access the same
object or an object locked by some other task should work. Period. I guess
we all agree that this is so by RM rules. They are not bounded errors.

In my reading of the rules of D.2.6 (20-24), this case is treated
incorrectly. Upon loss of active priority due to completion of a nested
protected call, the task can end up on the wrong ready queue; it needs to go
to the queue of its active priority (still due to the ongoing protected
call).


Moreover, I ask that a blocking call within a protected op (such as an
entry call) be allowed to work correctly, as it will in many, many cases
anyway, even though it is called a bounded error.

Specifically I am asking that blocked tasks are resumed at their active
preemption level aka active priority, not at the EDF base queue priority. My
rationale is that resumption at the active priority level preserves most of
the guarantees that ceiling priorities give you, while resumption at the
base priority "creates havoc" with all the ceiling protocol
guarantees. (Note that I am asking for the same thing in all cases, i.e.,
always enqueue in the queue of the active priority.)

Alan's argument is that this case, where the active priority is unequal to
the base priority upon resumption, should never arise, as all situations
in which this could be the case are classified as a bounded error and
hence it is o.k. to say "base priority".

I am replying that most of these cases of bounded error are perfectly fine,
practically speaking (assuming a locking implementation), and the rules
should not be written so that these cases are certain to be damaged. D.2.6
(24/2) is a punitive rule that intentionally screws up ceiling protocols
in the presence of nested blocking operations for no good reason.

Conceivably one can argue that, if the implementation chooses to not use
locks (legitimate only on uni-processors) and hence can cause mutual
exclusion to be truly violated (and not just in the sense of Tuck's
message), the punitive rule ought to discourage all users from issuing
nested blocking calls. I don't believe in such sophistry. The implementation
behaviour would not be sanctioned by 9.5.1 (17) anyway, because it would not
be a nested protected action on the same object, but rather two completely
independent protected actions, i.e., a true screw-up.

You want this efficient but risky implementation, you figure out some other
way, e.g., an implementation permission.


I don't think that anybody can argue that queuing into the active priority
queue rather than the base priority queue can create any kind of
implementation burden. Note that the active priority re-enqueuing must
be available anyhow upon completion of a nested protected op that is not a
bounded error, e.g., a nested call on some protected procedure of another
object.

----

Aside: 9.5.1(17) actually has a bug. It does not allow for "it just works
normally" semantics as one of the bounded error alternatives (i.e., the case
where it simply works to call a blocking entry from within a protected
op). But that interpretation is outlandish, since it would force
implementations into doing work to screw things up intentionally, so that I
read the "might" as also including the canonical semantics of the blocking
operation.

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, May 11, 2005  3:58 PM


P.S. In order to make discussion/understanding easier, here is a sample
program to illustrate my point(s) about the EDF scheduling problem:

(imagine the right pragmas inserted to make EDF happen over
prio range 1..5)

with Ada.Text_IO;
procedure Edftest is

   task T1 is  pragma Priority(1);   end T1;

   task T2 is  pragma Priority(2);   end T2;

   protected O1 is
      pragma Priority(4);
      procedure Change;
      entry Block;
   private
      Proceed: Boolean := True;
   end O1;

   protected O2 is
      pragma Priority(3);
      procedure Driver;
   end O2;

   protected body O1 is
    procedure Change is
    begin
      Proceed := not Proceed; -- at active priority 4
    end Change;

    entry Block when Proceed is begin null; end Block;
   end O1;

   protected body O2 is
     procedure Driver is
     begin
      O1.Change; -- perfectly o.k.
      -- should run here at (active) priority 3
      -- per D-11 rule, is in the EDF priority 1 ready queue !

      Ada.Text_Io.Put_Line("1");  -- this better be the very first output

      O1.Block;  -- bounded error since a blocking call
      -- none of the specified behaviours other than P-E make any sense.
      -- expect normal operation ... P_E would be o.k, too

      -- should run here at (active) priority 3
      -- per D-11 rule, is in the EDF priority 1 ready queue !

      Ada.Text_Io.Put_Line("3");  -- must be the 3rd output
     end Driver;
   end O2;

   task body T1 is
   begin
      Ada.Text_Io.Put_Line("2"); -- must be the 2nd output
      O1.Change;
   end T1;

   task body T2 is
   begin
      O2.Driver;
      Ada.Text_Io.Put_Line("4"); -- must be the last output
   end;

begin  -- T1 runs until blocked; then T2, which unblocks T1
null;
end Edftest;

****************************************************************

From: Alan Burns
Sent: Thursday, May 12, 2005  5:31 AM

Erhard made a number of points in his email and I want to make
some general points/replies in a moment - but his example highlights a
misunderstanding: either the current wording is wrong, or Erhard's
reading of the words, or both!

I'll edit his example down to the first key bit

>   protected body O2 is
>     procedure Driver is
>     begin
>      O1.Change; -- perfectly o.k.
>      -- should run here at (active) priority 3
>      -- per D-11 rule, is in the EDF priority 1 ready queue !

NO.  This task must run at priority 3 (as you require).
I am using the fact that the task is in a PO and Ceiling_Locking applies
to give me the rule that any task within a PO runs at ceiling level.

So it sounds like we agree on the required behavior - but we must make sure
the words say that.

>      Ada.Text_Io.Put_Line("1");  -- this better be the very first output

You don't mean this, do you?
This is the second output - the first comes from the task before it
calls any PO.

...
>   task body T1 is
>   begin
>      Ada.Text_Io.Put_Line("2"); -- must be the 2. output

As above - this is the first output - it's the first thing the program does.

...

To return to the general issue of 9.5.1(17) (ie not EDF specific).

First, the term 'nested protected actions' is a little confusing. 'Nested
monitor calls' is a well-used phrase in concurrency and concerns a task in one
monitor calling another monitor (whether it should keep the first lock was the
original question asked in these situations). But in 9.5.1(17) nesting means
concurrent execution within the same PO.

I feel Ada 95 made exactly the right decision(s) for controlling access to POs:

1. It does not require an actual lock, nor preclude one.
2. It thus allows for an efficient implementation on a single
processor (using priority itself to give mutual exclusion).
3. In the non-lock case there is a potential to break mutual exclusion
if a task blocks within a PO.
4. But a programmer can accept this when he/she knows that no actual
state (within the PO) is compromised (as in Erhard's example).
5. So 3 cases exist:
    a) no lock, and we want to ensure no blocking; here raising P_E is the
right thing to do (as required in Ravenscar; sketched below)
    b) no lock but blocking is allowed and it's up to the programmer to get it
right, so no exception is raised and the program continues its execution
    c) a lock is used and retained by a blocking task; this reduces concurrency
but does not undermine correctness, so again allowing the program to continue
is fine

None of these cases is erroneous (and none should be). Ada 95 allows all three.
It does this by saying it's a bounded error to block, but raising P_E or just
continuing is OK. Whether a bounded error should have been used in this
situation I'm not sure, but it was, so I guess we live with it in Ada 2005.
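
A rough sketch of case (a), using the Detect_Blocking configuration
pragma (the protected object and all names are invented for
illustration):

pragma Detect_Blocking;
--  potentially blocking operations within a protected action must then
--  raise Program_Error

with Ada.Text_IO;
procedure Blocking_Demo is
   protected Guard is
      pragma Priority (10);    -- ceiling priority
      procedure Sleepy;
   end Guard;

   protected body Guard is
      procedure Sleepy is
      begin
         delay 0.01;   -- potentially blocking: Program_Error is raised here
      end Sleepy;
   end Guard;
begin
   Guard.Sleepy;
exception
   when Program_Error =>
      Ada.Text_IO.Put_Line ("blocking within the protected action detected");
end Blocking_Demo;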

****************************************************************

From: Erhard Ploedereder
Sent: Thursday, May 12, 2005  7:07 AM

Minor clarification so we are on the same page... I used the priority pragmas
to indicate a priority ordering of the tasks that, under EDF, I would need to
achieve by means of deadlines causing this ordering.

In that case, I believe I can insist on the "1234"-Output on a
uni-processor. (And I actually ran the program to make sure :-)

****************************************************************

From: Alan Burns
Sent: Thursday, May 12, 2005  8:01 AM

Yes, agreed - I was reading task 1 rather than task 2 as calling this object.

Are you agreeing with the substantive points?

****************************************************************

From: Erhard Ploedereder
Sent: Thursday, May 12, 2005  10:01 AM

This time, give me time to not simply cross-read, but really think about the
corner cases as well. The quick read said, for the simple cases, that I
agree; I might not (immediately) agree on the ultimate treatment of the
nested case.

Meanwhile, since we agree on the example, I need to understand why you
believe that the EDF paragraphs that I am quoting do not apply or say
priority 3 -- which I definitely cannot read into those paragraphs.
///let me rephrase to be precise.../// or why a task with active priority 3
has any business being on the ready queue for priority 1, which is what I
think the paragraphs are saying.

****************************************************************

From: Alan Burns
Sent: Thursday, May 12, 2005 11:01 AM

I agree it has no business on the ready queue for priority 1.

I am assuming (probably wrongly!) that any task executing in a PO MUST run with
the ceiling priority (for locking policy C_L). The whole point of the model we
have is that no change is made to the C_L policy. So in your example, if a task
delays or blocks within a PO, when it next executes it does so with the ceiling
priority. I do not say anything in the new paragraphs as I am assuming C_L
covers these details. This may be wrong, in which case words need to be added.

****************************************************************

From: Erhard Ploedereder
Sent: Thursday, May 12, 2005  12:40 PM

> To return to the general issues 9.5.1(17) (ie not EDF specific).

> First the term 'nested protected actions' is a little confusing. 'Nested
> monitor calls' is a well used phrase in concurrency and concernes a task in
> one monitor calling another monitor (should it keep the first lock was the
> original question asked in these situations).

Certainly Agree.

> But in 9.5.1(17) nesting here means concurrent execution within the same
> PO.

Absolutely, but only within (17). (8-13) take a much more general view on
what causes a bounded error, i.e., not just nested monitor calls.

For a nested monitor call the bounded error situations would indeed be
complete, assuming that "it simply works" is also allowed: you block on a
semaphore or on a lock held by another process (possibly deadlocking), or
you run into a lock held by the current process, but that one may not lock
you out; hence atomicity (but not mutual exclusion of processes) gets
violated. The bounded error situation exists whether ceiling
protocols are implemented with or without actual locking
mechanisms. Rather, this bounded error reflects the implementation question
of who or what owns the lock; that question gives rise to the "don't know
whether deadlock or non-atomic access occurs by a nested monitor call on the
same object."  Without this question, the bounded error should be:
  -- works as canonically promised
  -- deadlocks (under the right circumstances)
  -- raises P_E

For all the other cases of (8-13), the scenario that causes problems is not
the nested monitor call. It is the fact that the monitor is logically held
by the task that is about to be blocked on some condition or
event. Consequently, a risk of deadlock arises (extremely hard to detect,
even at runtime) for subsequent monitor calls, hence one of the bounded
error situations. P_E is ok, if the implementation were to detect the
deadlock. The big question discussed further below is whether another
monitor call to the same object by another task/thread is locked out.  If not,
we lose both atomicity and mutual exclusion for that object. Put
differently, really bad things can start to happen.

----
> ....
Glad to hear that we agree that the "no-problem" cases of potentially
blocking operations should indeed work :-)

-----

Let me cut to the heart of the matter: Does "general Ada" allow for an
implementation of ceiling locking on a uni-processor without actual locks,
i.e., relying entirely on the priority protocol to ensure mutual exclusion ?
By "general Ada" I mean the canonical semantics in the absence of any
Profile.

It is reasonably well known that the mutual exclusion guarantee of an
immediate ceiling protocol without using actual locks holds only if there
is no possibility that, once a monitor is entered, the running task is
synchronously blocked.  9.5.1 (8-16) seems to try to enumerate all these
situations. So, if all these situations were illegal or guaranteed to
raise exceptions, indeed, Ada would support an implementation of ceiling
protocols without locks.

In Ada95, they are neither illegal nor guaranteed to raise P_E. They are
rendered a Bounded Error, with the really serious problem of loss of mutual
exclusion among different processes not mentioned as one of the allowed
outcomes.

Conclusion: No, Ada95 in canonical semantics does not allow for leaving
out the locks. It's o.k. to do so under Ravenscar, because the raising of
P_E is mandated!

Question: should Ada05 allow leaving out the locks? (by fixing it to what
seemed to have been the intent of Ada95 ??)

I have been mulling this over for the last week ever since this came up,
because the EDF-model seemed to imply this permission.

It's a close call, but I am presently on the side of safety: No, Ada does
not and should not allow a lock-free implementation of ceiling protocols in
its canonical semantics (since it has all these potentially blocking
operations), but it should allow it under a set of Restrictions that ensure
that monitor calls cannot be synchronously blocked. Mandating the P_E check
is one of them (but then the cost of the check is rather close to the cost
of locking); static restrictions are equally conceivable, although pretty
draconian.  At any time, non-standard modes can do it anyway in a
"user-beware" mode. I just do not want to see the language bent (as it
partially is right now in 9.5.1) towards an efficient but inherently unsafe
semantics.

Mutual Exclusion among threads should be guaranteed by the language.

So, Alan, in terms of ultimate conclusion, no, I guess we do disagree,
since I read your message to say that the canonical semantics should be
implementable without locks. I do not know how to make that safe.

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 12, 2005  1:50 PM

...
> Mutual Exclusion among threads should be guaranteed by the language.
>
> So, Alan, in terms of ultimate conclusion, no I guess we do disagree,
> since I read youz msg to say that the canonical semantics should be
> implementable without locks. I do not know how to make that safe.

I should keep my nose out of this, but I have to come down on Alan's side.

The bounded error in question (as has been said before) does not allow
havoc. It only allows deadlock, Program_Error, or working (with possible
loss of atomicity). Right?

So that is a constraint on the *implementation*, as well as the programmer.
The implementation has to insure that only results allowed by this bounded
error occur. Otherwise, the implementation is incorrect.

The implementation can always do that by raising Program_Error (as that is
one of the allowed options). So I can't imagine why a lock-free
implementation is a problem.

If you have a lock, you don't need to detect blocking, because one of the
prescribed conditions will occur. But if you don't have a lock, you will
have trouble if you don't detect blocking. OK, so the implementation has to
move the cost somewhere else. Big deal.

Your explanation makes it sound like the implementation may not detect the
blocking for general Ada (without Ravenscar). That's silly; an
implementation can always do the work to detect blocking. Indeed, Janus/Ada
actually detects the blocking *and* uses a lock.

The blocking check is fairly cheap; it isn't impossible like detecting
deadlock. All that is needed is a bit in the TCB and a test of that bit
before operations in the list. So avoiding it isn't that big of a deal, and
if you need it to be correct, you ought to do it.
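
To sketch what I mean at the user level (all names are invented; a real
run time would keep the flag in the TCB and would use a nesting count
rather than a plain Boolean, so that calls on other protected objects
from within a protected action are handled):

with Ada.Task_Attributes;

package Blocking_Check is
   --  per-task flag: is the task currently inside a protected action?
   package In_Protected_Action is
     new Ada.Task_Attributes (Attribute => Boolean, Initial_Value => False);

   --  called by the run time on entry to and exit from a protected action
   procedure Enter_Protected_Action;
   procedure Leave_Protected_Action;

   --  called at the start of every potentially blocking operation
   procedure Check_Not_In_Protected_Action;
end Blocking_Check;

package body Blocking_Check is
   procedure Enter_Protected_Action is
   begin
      In_Protected_Action.Set_Value (True);
   end Enter_Protected_Action;

   procedure Leave_Protected_Action is
   begin
      In_Protected_Action.Set_Value (False);
   end Leave_Protected_Action;

   procedure Check_Not_In_Protected_Action is
   begin
      if In_Protected_Action.Value then
         raise Program_Error;  -- blocking inside a protected action detected
      end if;
   end Check_Not_In_Protected_Action;
end Blocking_Check;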

Based on your description, an implementation that implemented general Ada
tasking with no lock *and* no blocking check would be an incorrect
implementation of Ada. I can buy that; there are a lot of incorrect
implementations of Ada possible. You can't use them! But it doesn't mean
that a no-lock implementation is impossible, only that a no-lock
implementation must detect blocking. (And thus it is already a leg up on
Ravenscar.) So the implementer is trading off a lock for a (cheap) check.
That seems like a net win to me, and certainly doesn't mean that no lock is
impossible or somehow less safe than a locking implementation. (Personally,
I think that the blocking check is cheap enough that it always should be
made [why allow deadlock?], but here I am getting out of my area of
expertise so I'll keep quiet.)

****************************************************************

From: Erhard Ploedereder
Sent: Thursday, May 12, 2005  4:04 PM

> The bounded error in question (as has been said before) does not allow
> havoc. It only allows deadlock, Program_Error, or working (with possible
> loss of atomicity). Right?
> So that is a constraint on the *implementation*, as well as the programmer.
> The implementation has to insure that only results allowed by this bounded
> error occur. Otherwise, the implementation is incorrect.

Exactly my interpretation, too.

> The implementation can always do that by raising Program_Error (as that is
> one of the allowed options). So I can't imagine why a lock-free
> implementation is a problem.

I did not mean to exclude this option. You are completely correct in saying
that.

I am only opposing a model that reads into 9.5.1(17) a permission ("it's an
error anyway") to not do locking and not do checking either, with the result
being that mutual exclusion is no longer guaranteed. With a grain of salt, I
could read this into the last sentence of (17) -- all I have to do is to
drop the "nested".
Certainly Alan did when he wrote:
>    b) no lock but allow blocking and its up to programmer to get it
>    right, so no exception raised and the program continues its execution


----
as to your statement that the P_E check is easy... that depends: if it is
merely a check that ensures that no potentially blocking operation gets
called, yes it is cheap. But that automatically also excludes lots of
useful programs, i.e., my sample program would raise P_E then, for no
good reason. (I am tempted to say that this would curtail the language
capabilities too much, but I don't want to start that argument.)

That's the corner where I will say that 9.5.1 went overboard in
summarily making any potentially blocking call a bounded error. It needs to
turn into a bounded error only if a cyclic resource dependency arises at
run-time. Without the cycle, everything should simply work as advertised.

Notice that the P_E check does NOT protect against all deadlocks, since the
nested monitor call on another object is not a Bounded Error, and that
is all it takes to create a deadlocking cyclic dependency in the general case.

In the special case of immediate ceiling protocols, the P_E check does
protect against loss of mutual exclusion for a non-locking implementation,
and it prevents the one situation that can give rise to a deadlock in an
implementation of the protocol with locks (so it is a deadlock preventer
under that scheduling regime, but not in general). For that purpose, it would
suffice if P_E were raised only if the blocking actually occurs, not just
if the call is potentially blocking (and that is as easily realized as
the check on potentially blocking).

****************************************************************

From: Tucker Taft
Sent: Thursday, May 12, 2005  4:28 PM

    I interpreted one of your earlier responses to say
you were *not* worried about the permission in 9.5.1(17).
Now it appears you are again.  It was perfectly clear
at the time we wrote 9.5.1(17) that we were allowing
mutual exclusion to be lost to some other thread if
the thread holding the lock blocks, and you are using
priorities rather than locks to provide ceiling "locking."
Let's not try to rewrite history here.

No matter how much someone may be concerned about this,
I don't think we should try to change it.  We have
10 years of use out there with existing RTSs, and no one
has complained (officially) about this rule in the interim, as
far as I know.  Implementors have presumably settled
on an appropriate compromise of safety and efficiency,
and I don't see the need to push them one way or the
other at this point.

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 12, 2005  4:49 PM

...
> I did not mean to exclude this option. You are completely correct
> in saying that.
>
> I am only opposing a model that reads into 9.5.1(17) a permission ("it's an
> error anyway") to not do locking and not do checking either, with the result
> being that mutual exclusion is no longer guaranteed. With a grain of salt, I
> could read this into the last sentence of (17) -- all I have to do is to
> drop the "nested".
> Certainly Alan did when he wrote:
> >    b) no lock but allow blocking and its up to programmer to get it
> >    right, so no exception raised and the program continues its execution

And that is absolutely correct; such an implementation is just plain wrong.
Bounded error is not the same as erroneous (which is where the discussion
started, I think).

...
> as to you statement that the P_E check is easy...that depends: if it is
> merely a check that ensures that no potentially blocking operation gets
> called, yes it is cheap. But that automatically also excludes lots of
> useful programs, i.e., my sample program would raise P_E then, for no
> good reason. (I am tempted to say that this would curtail the language
> capabilities too much, but I don't want to start that argument.)

But you did anyway. :-) :-)

Your sample program raises Program_Error "on line 43" when run under Janus/Ada;
that's the call "O1.Block". That seems exactly right, given the existing rules.

> That's the corner where I will say that 9.5.1 went overboard in
> summarily making any potentially blocking call a bounded error. It needs to
> turn into a bounded error only if a cyclic resource dependency arises at
> run-time. Without the cycle, everything should simply work as advertised.

That would have been an alternative. But it's too expensive to check. And I
don't think it would actually work with the Janus/Ada runtime's hard locks:
once a lock is locked, nothing can get it again, including the task that
locked it. That would probably deadlock in some cases that you would "expect
to work".

I probably would have said the bounded error happens if the task actually
blocks (period). And I'd probably require the check to be made at that point
(it isn't expensive enough to justify avoiding it).

But I can understand the current rule, too. Checking only for actual
blocking may mean that the error only shows up under weird timing conditions
(ones that Murphy says will never happen in testing). So now the rocket
reaches staging and the software raises Program_Error due to a latent
blocking problem. With the current rules and use of pragma Detect_Blocking,
that is much less likely to happen: the error will almost certainly appear
in testing.

> Notice that the P_E check does NOT protect against all deadlocks, since the
> nested monitor call on another object is not a Bounded Error, and that
> is all it takes to create a deadlocking cyclic dependency in the
> general case.

Yes, of course.

> In the special case of immediate ceiling protocols, the P_E check protects
> against loss of mutual exclusion, though, for a non-locking implementation
> and it prevents the one situation that can give rise to a deadlock in an
> implementation of the protocol with locks (so it is a deadlock preventer
> under that scheduling regime, but not in general). For that purpose, it
> would suffice if P_E were raised if the blocking does occur, not just
> if the call is potentially blocking (and is as easily realized as
> the check on potentially blocking).

Yes, I noted that above. That's probably the rule I would have used. But
there is at least an argument for the current rule (I made it above), and we
certainly aren't going to change this for the sake of changing it. (Why make
work for implementations?)

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 12, 2005  5:11 PM

>     I interpreted one of your earlier responses to say
> you were *not* worried about the permission in 9.5.1(17).
> Now it appears you are again.  It was perfectly clear
> at the time we wrote 9.5.1(17) that we were allowing
> mutual exclusion to be lost to some other thread if
> the thread holding the lock blocks, and you are using
> priorities rather than locks to provide ceiling "locking."
> Let's not try to rewrite history here.

Well, maybe that's clear to you, but I for one can't read that into the rule
as it exists or anything in the AARM. I can believe that you thought that
when the rule was crafted, but if so, it has been a well-kept secret.

> No matter how much someone may be concerned about this,
> I don't think we should try to change it.  We have
> 10 years of use out there with existing RTS', and noone
> has complained (officially) about this rule in the interim, as
> far as I know.  Implementors have presumably settled
> on an appropriate compromise of safety and efficiency,
> and I don't see the need to push them one way or the
> other at this point.

I agree with this, but for the opposite reason: I see nothing in the current
rules that allows the loss of mutual exclusion. And, practically, I don't
think that an implementation could pass the ACATS if it allowed such a loss.
(That's hard to prove, of course, since so much of this depends on timing.
But I don't think there are any special cases in the ACATS to allow loss of
mutual exclusion.) So I don't think that there are, in practice, any
implementations of Ada that allow such a loss.

Now, there are a lot of "almost Ada" things in real implementations (the
infamous "non-standard mode"), and they can do what they want. But we don't
care (here) about such things.

A "nested protected action" doesn't imply that the original action is
somehow null-and-void; it simply allows a task to start a second action on
an object it already holds. It certainly doesn't imply that anything goes
for other tasks. If you want some words that mean that, then *you* have to
try to change the words. Or try to put in an AARM note to that effect (if
you want to mis-interpret the words, which admittedly are quite vague).

Certainly, if an implementer tried to blame this paragraph for the failure
of an ACATS test because of loss of mutual exclusion, I would not give them
much sympathy.

****************************************************************

From: Tucker Taft
Sent: Thursday, May 12, 2005  5:36 PM

> A "nested protected action" doesn't imply that the original action is
> somehow null-and-void; it simply allows a task to start a second action on
> an object it already holds.

But this is all in the context that the task holding the lock
is *blocked*.

The only possible meaning of nested protected action here
is that some other task instigates it.  I also know that
that is what I meant and what was discussed with the
design and review teams.  We didn't create an AARM note
here because we didn't sense any confusion on anyone's part.
Apparently you and Erhard were absent during this discussion!
I hope at least someone else in the ARG remembers it...

Basically, if you use priorities for locking, then this
is what you get.  And we knew that.  It sounds like Alan
understood that as well.

> ... It certainly doesn't imply that anything goes
> for other tasks. If you want some words that mean that, then *you* have to
> try to change the words. Or try to put in an AARM note to that effect (if
> you want to mis-interpret the words, which admittedly are quite vague).

I can provide some AARM wording if others think it necessary.
But it seems pretty clear to me we must be talking about
what can happen on behalf of other tasks, not on behalf of the task
that is blocked.

> Certainly, if an implementer tried to blame this paragraph for the failure
> of an ACATS test because of loss of mutual exclusion, I would not give them
> much sympathy.

But you would be wrong in this case, in my view.

In any case, I would be surprised if there were an ACATS test
that would reveal this.  I am pretty sure we have
validated with an RTS that uses priorities for locking.

****************************************************************

From: Erhard Ploedereder
Sent: Thursday, May 12, 2005  5:55 PM

>    I interpreted one of your earlier responses to say
> you were *not* worried about the permission in 9.5.1(17).
> Now it appears you are again.

Weeell... truth be told, you did read it right. I wasn't too worried,
because I was hoping that no implementer would have the irresponsible
attitude to use the loophole. The more I am thinking about it, the more
I worry.

But the REAL concern that started all this was that somehow the "no locks
needed, no blocking calls possible or, if possible, ignorable" model that
started out with a permission flavor in 9.5.1 had turned into canonical
semantics in EDF, i.e., the semantics only made sense if the above was
always true, not just an option. I definitely oppose that route which
makes a sensible and safe implementation impossible.  Meanwhile, I guess
Alan and I are coming to a shared understanding of the Annex D semantics,
so that the REAL concern is lessened.

> It was perfectly clear at the time we wrote 9.5.1(17) that we were allowing
> mutual exclusion to be lost to some other thread if
> the thread holding the lock blocks, and you are using
> priorities rather than locks to provide ceiling "locking."
> ... Let's not try to rewrite history here.

If you say so. :-)
Should we then drop the "nested"?

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 12, 2005  6:18 PM

...
> The only possible meaning of nested protected action here
> is that some other task instigates it.  I also know that
> that is what I meant and what was discussed with the
> design and review teams.  We didn't create an AARM note
> here because we didn't sense any confusion on anyone's part.
> Apparently you and Erhard were absent during this discussion!
> I hope at least someone else in the ARG remembers it...

I'm sure that I slept through it. :-)

I didn't understand any of this until I tried to implement it, and that was
years after the discussions that you are mentioning.

> Basically, if you use priorities for locking, then this
> is what you get.  And we knew that.  It sounds like Alan
> understood that as well.

I see no reason for that. The check to maintain exclusion is dirt cheap, and
there is no reason whatsoever to allow implementations to ignore the most
fundamental invariant of multitasking just to skip two extra instructions
before operations that are already relatively rare (in the context of a
complete program) and quite expensive (blocking).

Certainly an implementation could follow Erhard's model and only raise the
exception if it actually was going to block. That would make the check even
cheaper.

In any case, this is the multitasking equivalent of assuming that
uninitialized objects are in their range. We certainly don't allow
implementations to do that, even though it has a substantial impact on
performance. Why in the world would we allow the same thing here, where the
check is *cheaper* and the damage is as great (and a lot less common, making
it much harder to track down).

> > ... It certainly doesn't imply that anything goes
> > for other tasks. If you want some words that mean that, then *you* have to
> > try to change the words. Or try to put in an AARM note to that effect (if
> > you want to mis-interpret the words, which admittedly are quite vague).
>
> I can provide some AARM wording if others think it necessary.
> But it seems pretty clear to me we must be talking about
> what can happen on behalf of other tasks, not on behalf of the task
> that is blocked.

I have no idea how you could reason in any useful way about a program for a
runtime that doesn't enforce exclusion. It's very difficult to reason about
a program that *does* enforce exclusion! So I can't imagine why this is a
"bounded error", given that the actual effects are nearly unbounded.

> > Certainly, if an implementer tried to blame this paragraph for the failure
> > of an ACATS test because of loss of mutual exclusion, I would not give them
> > much sympathy.
>
> But you would be wrong in this case, in my view.
>
> In any case, I would be surprised if there were an ACATS test
> that would reveal this.  I am pretty sure we have
> validated with an RTS that uses priorities for locking.

Priorities for locking is fine, as long as you don't allow blocking in a PO.
That requires a bit (not a lock), which is no big deal.

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 12, 2005  6:35 PM

...
> Weeell... truth be told, you did read it right. I wasn't too worried,
> because I was hoping that no implementer would have the irresponsible
> attitude to use the loophole. The more I am thinking about it, the more
> I worry.

The one thing I have taken away from this exercise is that implementers will
take any loophole given to them, no matter how irresponsible. That applies
to me, too. When I'm wearing my implementer hat, I'll try to do whatever it
takes to make my customers happy. Now, I don't care if implementers have
a -fast_but_more_dangerous_than_C option switch, but the Standard shouldn't
be in the business of allowing unnecessary dangerous constructs. (Especially
when having such a switch is just fine.) That goes here, that goes for
13.9.1(12), and probably other cases as well.

My strong preference is to eliminate such loopholes; make sure that the
checks involved have names so that users can suppress them when needed, but
make the default as safe as it can be without significant distributed
overhead. That seems to have been ignored at many turns for Ada 9X. How sad.
After all, all Ada really has going for it are safer semantics than C (the
syntax being better is hardly an argument that is going to make much
difference), and abandoning that at the first sign of trouble is awful.
Especially when so many other cases where it was *not* abandoned mean extra
overhead for Ada compared to C.

Oh well, better get off the soapbox.

...
> I definitely oppose that route which
> makes a sensible and safe implementation impossible.  Meanwhile, I guess
> Alan and I are coming to a shared understanding of the Annex D semantics,
> so that the REAL concern is lessened.

Absolutely. As in 13.9.1, I got just enough so that a safe implementation is
still possible. But whether that can be done in the marketplace is a good
question...

...
> If you say so. :-)
> Should we then drop the "nested"?

Absolutely. There is nothing "nested" about them. It is in parens, however,
so in theory it doesn't mean anything. I suppose that's Tucker's argument;
these are just "additional" protected operations, and the use of the word
"nested" is unfortunate, because only the task in question could make a
"nested" operation.

****************************************************************

From: Erhard Ploedereder
Sent: Thursday, May 12, 2005  6:55 PM


Alan, to come back to D.2.6 ....

I read history, the AI and everything,  and I am 100% convinced now that
rule 2 in the AI, resp. D.2.6 (24/2) is plain wrong.
> If no such ready queue exists the task is added to the ready queue for the
> lowest priority in the range specified as EDF_Across_Priorities.
needs to say:
If no such ready queue exists the task is added to the ready queue for the
active priority of the task.

<<< this rule works both for blocked tasks becoming ready again as well as
    for newly created tasks. It is also the rule for any other scheduling
    policy. Without this, the ceiling protocol can be violated. continue
    reading, before questioning...>>>>
--------------
There is a second problem that creates numerous inconsistencies; it was
discussed at length in the AI, but was never turned into wording.
There definitely needs to be a rule saying:

If the base priority of a task is in a priority range specified by the
pragma EDF_Across_Priorities, the active priority of a task is the maximum
of any priority it inherits and of the lowest priority in the priority
range. (The base priority has no direct influence on the active priority for
EDF dispatching.)

<<<< this rule is consistent with D.1 (15), D.3.(12), D.2.6(26/2), and
 the above rewrite of D.2.6(24/2) for newly created tasks.
 Among others, it avoids the present inconsistency that an EDF task can be
 in a queue with lesser priority than base or active priority. See D.2.1 (5/2)>>>

-----------
 The paras 20-23 need to be supplemented by the statement that P becomes the
 active priority of T (or else you break D.2.1 (5/2), since without an
 explicit statement, the active priority will be different; I can formulate
 this in terms of inheritance, too, if you like ...).

 There is a bullet 23.1 missing that says:
    - P is greater than the active priority of T
 <<< i.e., (20-23) can only increase the priority, not decrease.
     Again, this is needed so that the ceiling protocol stays intact >>>

----------
Looks to me as if these changes would do away with the most noticeable
consistency problems.

----------

On a more academic side: I am completely unconvinced that sensible RTA
predictions can come from this scheduling model: the semantics in (20-23)
resp.  rule 2 of the AI of enqueing a readied task T on the highest priority
queue less than the preemption level of T and currently occupied by at least
one task creates a significant race condition: arrival of such other task
just prior or just after task T makes the active priority of T
non-deterministic within the range Low to T'BasePriority. How RTA prediction
can survive that is a miracle to me -- don't I get the possibility of
starvation despite a very near deadline and a very high preemption level? I
certainly can draw timelines where a task happens to drop into the LOW
bucket and now needs to wait until all other tasks have completed all
protected operations, with additionally started protected operations receiving
preferential treatment even for tasks with remote deadlines.

****************************************************************

From: Tucker Taft
Sent: Thursday, May 12, 2005  7:59 PM

> If you say so. :-)
> Should we then drop the "nested"?

I think that could hurt more than help.  It is already parenthesized.
The point is that there is already a protected action underway,
and because the task blocks, it allows another task at the same
or lower priority to proceed, and it might then reenter
the protected object performing a *nested* protected action
on the same object.  Once it is done, the blocked task might
reawaken and regain control, in which case it would return
from the routine that evilly invoked the blocking construct,
and then finish the "outer" protected action.

This certainly seems pretty nested, and we make it clear
we are talking about a protected action on the
*same* target protected object.  So I really don't
see where there could be any confusion if you actually
read all the words in the sentence!

****************************************************************

From: Tucker Taft
Sent: Thursday, May 12, 2005  8:06 PM

> ...
> My strong preference is to eliminate such loopholes; make sure that the
> checks involved have names so that users can suppress them when needed, but
> make the default as safe as it can be without significant distributed
> overhead.
> ...

The history behind protected objects was *all*
about efficiency.  Being able to have a construct
that only required raising and lowering the
priority with no queuing whatsoever on a
monoprocessor was one of the "raison d'etre" of
the design.  I will object strongly if we try
to make the priority-based implementation inappropriate,
or require additional checks.

>>If you say so. :-)
>>Should we then drop the "nested"?
>
> Absolutely. There is nothing "nested" about them. ...

I explained what this means.  I think you and Erhard
must have some fundamental different world view, because I
don't understand your confusion.

Protected actions are not associated with a particular task --
they are associated with a particular protected object.
In a single protected action work can be done on behalf
of many tasks.  Nesting is with respect to the protected
object.  That is, while one protected action is underway,
another one starts on the *same* protected object, runs
to completion, and then at some point the outer one
continues and completes.

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 12, 2005  8:20 PM

> This certainly seems pretty nested, and we make it clear
> we are talking about a protected action on the
> *same* target protected object.  So I really don't
> see where there could be any confusion if you actually
> read all the words in the sentence!

I *did* read all of the words in the sentence! And while I agree with Erhard
("If you say so."), you've made it clear why *you* think this is what was
intended. But "nested" is what makes this totally mysterious, because there
is no nesting here!

Let's look at *all of the words* again:

"If not detected, the bounded error might result in deadlock or a (nested)
protected action on the same target object."

Without the parenthesized comment: "a protected action on the same object"
makes no sense, since there already is one by the current task. There would
have to be more than one to have this mean anything at all. And "nested"
makes it worse, because it implies that the current task is doing it - an
operation by some other task could never be "nested" - its a completely
separate thread of control.

It should be crystal clear that this destroys the entire tasking model.
At the very least it should say "...or an additional protected action on the
same target object.", which is not as open to misinterpretation. And an AARM
note similar to the one above would be a good idea. And I would also like to
see a documentation requirement for implementations that take advantage of
this permission to violate mutual exclusion, so users can avoid them if they
care.

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 12, 2005  8:38 PM

...
> Protected actions are not associated with a particular task --
> they are associated with a particular protected object.
> In a single protected action work can be done on behalf
> of many tasks.  Nesting is with respect to the protected
> object.  That is, while one protected action is underway,
> another one starts on the *same* protected object, runs
> to completion, and then at some point the outer one
> continues and completes.

Fine, that's what *you* think it means. But this is not a defined term, it
never appears normatively in the Standard or AARM, and the places where I
see it used are completely in passing. So you completely leave it open to
interpretation what the heck a "nested protected action" is. Nesting is
usually a static concept ("procedure P is nested in procedure S"), so it is
utterly confusing to use this in a totally dynamic case.

Essentially, this is a case of "knowing" what you mean by something, using
it in normative wording, and then justifying that everyone should know that
by explaining that it means precisely what you know it means.

I don't have the slightest problem believing that you think that nesting
means what you explain above. It could even make sense; but without any such
indication in the AARM, how in the heck is the reader supposed to know? The
only way to know what it means is for Tucker to come along and "explain" it.
I can't imagine that every Ada user has that resource available!

For me a protected object is something that enforces mutual exclusion
always. It's worthless if it doesn't do that. Clearly, some Ada
implementations are worthless (in this respect), and that is allowed by the
Standard. Can't we at least make this clear so that users can judge for
themselves if this is important? By making it a bounded error, and including
complete havoc as one of the choices, you are neither providing well-defined
semantics nor any documentation of the fact. I suppose that's why Ravenscar
felt it necessary to include what is now pragma Detect_Blocking, because
otherwise the language is unmentionably shoddy in this respect. It would be
worlds better if we had pragma Suppress(Blocking_Check), for people that
actually need this ultimate performance. But I suppose it is too late for
that.

****************************************************************

From: Tucker Taft
Sent: Thursday, May 12, 2005  9:34 PM

For what it's worth, 9.5.3(22) tries to make it clear
that protected actions are associated with protected
objects, not with particular threads of control, and
so a "(nested) protected action on the same target object"
was thought to be pretty clear as well.

But I can accept that the term "nested protected action"
is confusing to current readers of the manual.  When
we wrote it, I believe it was crystal clear to the
reviewers what it meant, and perhaps we didn't have the
foresight to put ourselves in the shoes of someone
10 years hence who hadn't spent 3 years arguing about
protected records vs. passive tasks, eggshell models,
etc., etc.

So by all means, let's make it clear to the group who
missed out, or have blessedly forgotten, all those
discussions.  But let's not try to change the semantics.
They were very carefully designed, and if you want to
hear all the gory details, we will have to get Ted
Baker and Offer Pazy on the line...  This was definitely
not just a case of being sloppy or hang loose or throwing
caution to the winds.  We very much knew exactly what
we wanted to accomplish, and we discussed the whole
area of how to handle (potentially) blocking operations
for months, if not years.

****************************************************************

From: Randy Brukardt
Sent: Thursday, May 12, 2005  10:08 PM

> But I can accept that the term "nested protected action"
> is confusing to current readers of the manual.  When
> we wrote it, I believe it was crystal clear to the
> reviewers what it meant, and perhaps we didn't have the
> foresight to put ourselves in the shoes of someone
> 10 years hence who hadn't spent 3 years arguing about
> protected records vs. passive tasks, eggshell models,
> etc., etc.

I was there, of course, but I slept through most of those discussions. I
recall thinking that the model was neat, but totally impractical. And I
still think that...

> So by all means, let's make it clear to the group who
> missed out, or have blessedly forgotten, all those
> discussions.  But let's not try to change the semantics.
> They were very carefully designed, and if you want to
> hear all the gory details, we will have to get Ted
> Baker and Offer Pazy on the line...  This was definitely
> not just a case of being sloppy or hang loose or throwing
> caution to the winds.  We very much knew exactly what
> we wanted to accomplish, and we discussed the whole
> area of how to handle (potentially) blocking operations
> for months, if not years.

Fair enough. But I continue to think (with the benefit of hindsight, of
course) that the result was misguided. It smacks very much of premature
optimization. After all, all we are really discussing is whether the check
should have been required rather than a bounded error.

It seems to me that either tasking operations are implemented by calling
some library, or by doing them in-line. (I remember Ted trying to tell me
that people actually thought they could successfully and profitably do those
in-line. I still don't believe it, but let's assume they can...) If they are
implemented in a library, the cost of setting a bit and the cost of testing
the bit to check is far less than the calling overhead into the library. So
there is no performance reason there for not making the check.

If the operations are implemented in-line, they are available to the
compiler for optimization. That means that a pragma Suppress on the check
would be effective at eliminating (the already tiny) overhead.
Alternatively, one could have had a pragma like Do_Not_Detect_Blocking to
eliminate the overhead. Either way, the default should have been safe.

Making the default safe and eliminating the overhead for the 10% of
applications where it matters (and probably for the 5% of POs in that 10%)
would make far more sense.

Anyway, it is water under the dam at this point. (And I chose that broken
metaphor carefully -- too much water under the dam causes it to be washed
away -- which is I fear the potential result if we give too many loopholes
to implementers, especially dubious ones.)

I don't suppose it makes sense to change the wording from "(nested)
protected action" to "additional protected action" (we could put it into the
presentation AI if we did do so), but an AARM note would be called for. Can
you craft one? You don't have any open action items for the first time in
my memory, so you must have plenty of spare time. :-) :-) :-)

****************************************************************

From: Erhard Ploedereder
Sent: Friday, May 13, 2005  6:15 AM

> Weeell... truth be told, you did read it right. I wasn't too worried,
> because I was hoping that no implementer would have the irresponsible
> attitude to use the loophole. The more I am thinking about it, the more
> I worry.

Despite a good night's sleep, the issue won't leave my mind. In reflecting
on it some more, I realized that 6 months ago I was confronted with a
system in which blocking operations inside improperly synchronized critical
regions caused unbelievable system behaviour. (Not the "blip once in a blue
moon" failures, which one knows from missing synchronizations, but the "every
second we lose our marbles in the most amazing ways" kind. These blocking
operations - which weren't recognizable as such, because the blocking
happened deep down a call chain inside a support library - were really bad
news.) It took me about 3 weeks of redesign to get them outside the now
synchronized critical regions and, boy, was it painful, since logically they
were supposed to be synchronous with the critical regions. (In case you
wonder, the impl. language was Java.)

Why this story? Well, the very same thing would happen, if I used an Ada
implementation that subscribed to the "user-beware, we rely on priority
to guarantee mutual exclusion" model that we have been discussing.

I want to take the position that we should, at best, let sleeping dogs lie, i.e., not
change anything. That is, I no longer suggest deleting the word "nested".

Which gives (Randy and) me the freedom to believe that the word "nested"
saves the day. I really don't want the language to acknowledge the
possibility of a fundamentally unsafe implementation, causing the problems
that I have seen.

Frankly, the argument that a check against actual blocking inside a
protected operation is too expensive is amazingly bogus, especially since
the assumption is already that no such call is ever issued or else you are
seriously dead. And yes, there is a two-instruction overhead on any
protected operation to set and unset the "inside" bit in the TCB, but that
is it. The rest is completely negligible. Two instructions vs. loss of
safety?
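
To make the cost argument concrete, here is a minimal, purely illustrative
sketch of the kind of check I mean (all names are invented; in a real runtime
the flag would live in the per-task TCB, and strictly it wants to be a count
rather than a Boolean, since a protected operation may legitimately call a
protected operation of another object):

   package Blocking_Check_Model is
      Inside_Protected_Action : Boolean := False;
      --  per-task state in a real runtime (kept in the TCB)
      procedure Enter_Protected_Action;     --  the one extra store on entry
      procedure Leave_Protected_Action;     --  the one extra store on exit
      procedure Note_Potentially_Blocking;
      --  called at the start of each potentially blocking runtime operation
   end Blocking_Check_Model;

   package body Blocking_Check_Model is
      procedure Enter_Protected_Action is
      begin
         Inside_Protected_Action := True;
      end Enter_Protected_Action;

      procedure Leave_Protected_Action is
      begin
         Inside_Protected_Action := False;
      end Leave_Protected_Action;

      procedure Note_Potentially_Blocking is
      begin
         if Inside_Protected_Action then
            raise Program_Error;  --  detected form of the 9.5.1 bounded error
         end if;
      end Note_Potentially_Blocking;
   end Blocking_Check_Model;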

Note that I am not taking away the lock-less implementation. It merely has
to come with a guarantee of absence of blocking inside protected ops by
Profile, Switch, Assertion, or pure Magic; excluded is only the option of
Silent Assumption that there won't be any blocking.

****************************************************************

From: Tucker Taft
Sent: Friday, May 13, 2005  7:06 AM

Presumably it is a count, not just a bit, indicating being "inside."
And I agree it is implementable, but I don't believe this
is fixing something for which there was any AI, nor any
ARG discussion during a meeting.  I agree with letting
sleeping dogs lie...

****************************************************************

From: Bob Duff
Sent: Friday, May 13, 2005  8:28 AM

I agree with Tucker on this point.  Namely, that the intent of the design was:

    Protected records can be implemented (on a uniprocessor) using the
    priority-based locking method.

    It is wrong to put a potentially-blocking operation inside.

    Implementations should not be required to have any overhead for
    checking that rule, whether or not they use the prio-based method.

If I had designed this part of the language, I would probably have done
what Randy suggests -- *require* an exception to be raised, but then
allow it to be suppressed with pragma Suppress.  [I know how to
implement check suppression inside the run-time system.  I also know how
to implement inlining of protected operations.  But these things are not
simple, and Offer and Ted probably did not trust implementations to
actually suppress such checks; hence the permission to suppress them
without any pragma Suppress.  Hence, bounded error.]

However, I think it should be considered out-of-bounds to change the
semantics at this point.

I understand what Tucker means by "nested", and that's clearly the only
sensible interpretation (since the task is going to sleep, it can't be
the one calling the PO).

But I also understand why Randy and Erhard are confused by the term
"nested".  It makes one think of the call stack -- a recursive
invocation of the same protected operation, which is nonsense.
Hence confusion.  How about the following re-wording:

    If not detected, the bounded error might result in deadlock, or
    another task might start an additional protected action on the same
    target object, despite the fact that the current task holds the
    lock.

Does that make it clearer?

[It's interesting that Ravenscar requires the check!  The people using
Ravenscar are the very ones Offer and Ted were trying to satisfy
efficiency-wise, and yet they say they don't want this little bit of
efficiency.]

P.S. I think it was *not* the intent that some implementations might
allow blocking inside protected records, and that this would be a useful
feature that people would actually use.  But this is in fact what has
happened.  People write code all the time that does I/O inside protected
records, and it works fine on some implementations.  It's unfortunate
to have the language split into two dialects like this.  But I don't
think we should try to fix that now.
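
For concreteness, here is a made-up example of the kind of code I mean (the
names are invented); with pragma Detect_Blocking in force the call has to
raise Program_Error, while without it, it happens to "work" on the
implementations people have been relying on:

   pragma Detect_Blocking;
   with Ada.Text_IO;

   procedure Blocking_Demo is
      protected Log is
         procedure Put (Msg : String);
      end Log;

      protected body Log is
         procedure Put (Msg : String) is
         begin
            --  potentially blocking operation inside a protected action:
            --  the 9.5.1 bounded error
            Ada.Text_IO.Put_Line (Msg);
         end Put;
      end Log;
   begin
      Log.Put ("hello");
   end Blocking_Demo;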

P.P.S. I still call them "protected records" when I'm not writing
RM-ese.  But then, I still say "rep spec".  ;-)

****************************************************************

From: Stephen Michell
Sent: Friday, May 13, 2005  9:46 AM

I agree with Tuck's statement that his interpretation of "nested" was as he
stated it. It was not the interpretation many of the rest of us had, however, and the
language chosen was left this way because of the underlying disagreement that
we had at the time.

It is clear that the execution of any protected operation by a task while some
other task holds the execution resource for that PO is erroneous, whether or
not the task which holds the resource suspends. As Erhard points out, the
object is inconsistent while a task holds the resource and any execution of
protected operations using that resource by another task can be disastrous.

Para 17 discusses bounded errors, and the situation that we are considering is
not a bounded error. I (and others) requested that that wording be removed but
the mapping team refused. We agreed to leave it as it was because the term
"nesting" here made the sentence essentially vacuous (precisely because nesting
is a static lexical concept and of course another task can't call a nested
protected operation without first obtaining the lock).

****************************************************************

From: Arnaud Charlet
Sent: Friday, May 13, 2005  2:25 AM

> The history behind protected objects was *all*
> about efficiency.  Being able to have a construct
> that only required raising and lowering the
> priority with no queuing whatsoever on a
> monoprocessor was one of the "raison d'etre" of
> the design.  I will object strongly if we try
> to make the priority-based implementation inappropriate,
> or require additional checks.

Strongly agreed.

As an implementor of such a model for some GNAT bare targets, I can only
confirm that the model works very nicely and very efficiently, as
intended.

I was not there during the design meetings; however, I clearly
recall discussing this model and these issues several times with both
Robert Dewar and Ted Baker, so at least the two
of them weren't sleeping during that meeting ;-)
Ted also implemented such an approach a few years ago,
e.g. for RT Linux and GNAT.

****************************************************************

From: Alan Burns
Sent: Sunday, May 15, 2005  3:16 AM

There has been a lot of email about the general issue of
9.5.1 (17) - the conclusion of which seems to be to stay as
we are (with perhaps an AARM note to be written by Tuck).
For what it's worth I agree fully with the line Tuck has
been running.

So as Erhard says

>
> Alan, to come back to D.2.6 ....
>

Although it is a bounded error to be blocked in a PO, Erhard
wants the model for EDF to be correct in this circumstance.
Fine, I think it is for the reason I'll give in a moment.
I would appreciate the views of Tuck, Pascal, Randy on this.

If you have EDF you must have Ceiling_Locking.
Under D.3 (12) a task always inherits the ceiling priority when executing
in a PO. So this rule applies additionally to the EDF rules and hence
if a ready queue is identified by D.2.6 (20/2-23/2) but you
are also inheriting a ceiling priority then the ceiling is clearly
higher and wins.
At worst I feel a note to make this clear is all that is needed.

If people disagree, then wording such as that proposed by Erhard
below seems OK.

****************************************************************

From: Alan Burns
Sent: Sunday, May 15, 2005  3:30 AM

> On a more academic side: I am completely unconvinced that sensible RTA
> predictions can come from this scheduling model: the semantics in (20-23)
> resp.  rule 2 of the AI of enqueing a readied task T on the highest priority
> queue less than the preemption level of T and currently occupied by at least
> one task creates a significant race condition: arrival of such other task
> just prior or just after task T makes the active priority of T
> non-deterministic within the range Low to T'BasePriority. How RTA prediction
> can survive that is a miracle to me -- don't I get the possibility of
> starvation despite a very near deadline and a very high preemption level? I
> certainly can draw timelines where a task happens to drop into the LOW
> bucket and now needs to wait until all other tasks have completed all
> protected operations, with additionally started protected operations
> receiving
> preferential treatment even for tasks with remote deadlines.

I agree that the active ready queue will change but the actual order
of executions is the same. Baker argues very convincingly for this
model; we have just fitted it into the Ada ready queue description.

One way to consider this is to see a dynamic set of levels
for the tasks. Initially the priority'first level is the one that
all tasks join - this remains the only level until an executing
task (T) calls a PO. Now a new level is introduced, and this
splits the world. Any newly arrived task should either run
before T (as it has a shorter deadline and a higher preemption level),
or after T.

If it should run before T (and hence before all other tasks,
as T was running before them) it joins the new level. If it must
wait until after T has finished it joins the initial lower level.

The new level remains until T actually executes and leaves the PO.
While the new level exists no task in the lower level should execute,
and all tasks that join the new level execute before T. This
is exactly the required behavior. Further levels can be introduced
if other POs are called.

I hope this helps convince you.

****************************************************************

From: Bob Duff
Sent: Sunday, May 15, 2005  6:44 AM

> There has been a lot of email about the general issue of
> 9.5.1 (17) - the conclusion of which seems to be to stay as
> we are (with perhaps an AARM note to be written by Tuck).

Alan, did you see my suggested rewording of that para?

It seems clear that the para is unclear as is, given that
Tuck and I were the only ones who admitted to understanding
what was meant by "nested".  ;-)  I think it deserves rewording
(but no change in (intended) meaning).

****************************************************************

From: Alan Burns
Sent: Monday, May 16, 2005  3:20 AM

>Alan, did you see my suggested rewording of that para?

Yes, that seemed fine - but it seemed that the tone of the emails was that there
is no AI on this and that a minimal AARM note was all that was required. I'm
sure the editors will decide.

By the way, for Ravenscar, which is high integrity as well as real time, we
went for detection rather than absolute efficiency.

****************************************************************

From: Erhard Ploedereder
Sent: Tuesday, May 17, 2005  7:49 AM

> I agree that the active ready queue will change but the actual order of
> executions is the same. Baker argues very convincingly for this model, we
> have just fitted it into the Ada ready queue description.

I am worried that Ted's model has (silent) premises that are not expressed
here.

I would love to be convinced that the first sentence about the actual
execution order were true. However, I can definitively disprove it
   - if blocking in protected actions occurs (probably a don't care :-); or
   - if nested protected actions are allowed (which are not a bounded
        error presently)

The second case is absolutely worrisome, isn't it?
And I can show an unbounded delay for the affected task despite its near
deadline.

Should I spend the time to write the proof? Will it make a difference to how
the EDF rules are specified?

****************************************************************

From: Erhard Ploedereder
Sent: Tuesday, May 17, 2005  8:26 AM

I figured it is better to present the example, not just claim its
existence :-)

So what's wrong in my time line? If nothing, then EDF+Ceilings has
a serious problem with nested monitor calls.

Erhard

------------------------

Task A  has a long deadline and preemption level H;
Task B has a short deadline and a preemption level b, H > b > L

Sequence of events:

t0: A starts
t1: A grabs PO-L (L is its ceiling, L > bottom Prio)
t2: A grabs PO-H, too

from here, two alternative time-lines:

-------------------
1. Timeline:

t3: B wakes up; it goes into the bottom queue, since its preemption
    level is not sufficient to preempt an A holding PO-H, and there
    are no tasks in the ready queues in between

t4: A gives up PO-H
t5: A gives up PO-L

with no other tasks involved, B is scheduled at this point

however...
between t4 and t5, an arbitrary number of tasks can arrive with rather long
deadlines and preemption level b. They go on queue L and are executed in
preference to B, although their deadlines are much later than B's.
They don't block, so they run to conclusion if no other tasks arrive.
It follows that B can be delayed unboundedly, despite its near deadline.

---------------------

2. Timeline:

t3: A gives up PO-H
t4: B wakes up; it goes onto the L queue and is immediately scheduled,
    since its deadline is shorter than A's; the subsequently arriving
    other tasks cannot influence B
    (the desired behaviour)
....
---------------------

Conclusion:
   -  Certainly the execution orders are different.
   -  A rather serious race condition (t3 - t4 = Eps) decides between
      EDF-ness of the scheduling and unbounded delays breaking deadlines.

****************************************************************

From: Alan Burns
Sent: Wednesday, May 18, 2005  3:11 AM

>Task A  has a long deadline and preemption level H;
>Task B has a short deadline and a preemption level b, H > b > L

You are missing a key part of the overall picture. To get the behaviour you
require you have to choose the right values for the preemption levels. This is
just the same as for fixed priority systems. If you assign priorities following
rate monotonic then you get optimal performance; if you choose the wrong order
then timeliness is reduced.

What Baker provides is a mechanism and a policy. Ada05 supports the
mechanism but the policy comes from the preemption levels chosen by the user.

The strong result from Baker is that if you assign preemption levels in
inverse relative deadline order (i.e. rate monotonic for deadline-equals-period
systems) then you get the single-block, no-deadlock result. In your example, B
with its short deadline would have a high preemption level - so the behaviour
you describe would not occur.

Just to repeat. In Fixed Priority Dispatching and EDF, priority is assigned in
exactly the same way (for optimal performance). This is one of the nice
properties we have. And if you assign priorities inappropriately you get
sub-optimal behaviour.
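
As a purely illustrative sketch of deadline-monotonic assignment (the task
names, priority values and deadlines are invented, and it is assumed that the
chosen priority range has been designated EDF_Across_Priorities by the
appropriate configuration pragma), the shorter-deadline task simply gets the
higher base priority, i.e. the higher preemption level:

   with Ada.Real_Time;       use Ada.Real_Time;
   with Ada.Dispatching.EDF; use Ada.Dispatching.EDF;

   procedure DMPO_Sketch is
      task A is
         pragma Priority (10);  --  longer relative deadline => lower level
      end A;
      task B is
         pragma Priority (11);  --  shorter relative deadline => higher level
      end B;

      task body A is
      begin
         Set_Deadline (Clock + Milliseconds (100));
         null;  --  work
      end A;

      task body B is
      begin
         Set_Deadline (Clock + Milliseconds (10));
         null;  --  work
      end B;
   begin
      null;
   end DMPO_Sketch;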

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, May 18, 2005  6:31 AM

> You are missing a key part of the overall picture. To get the behaviour
> you require you have to choose the right values for the preemption
> levels.    ....

Indeed, I did not use that. So, here is my time line fixed up in accordance
with this rule (which actually has no real bearing on the problem).

Still the same problem of unbounded blocking.

Next premise that I missed ?

[I am not arguing about the "block only once" claim, which is rather
 believable, given ceiling locking and only nested monitor calls.
 I am arguing about unacceptable unbounded delays, much akin to priority
 inversion, which clearly is a no-no in real-time scheduling.]

------------------------

Task A  has a very short deadline and preemption level H;
Task B has a short deadline and a preemption level H-1, H-1 > L

Sequence of events:

t0: A starts
t1: A grabs PO-L (L is its ceiling, L > bottom Prio)
t2: A grabs PO-H, too

from here, two alternative time-lines:

-------------------
1. Timeline:

t3: B wakes up; it goes into the bottom queue, since its preemption
    level is not sufficient to preempt an A holding PO-H, and there
    are no tasks in the ready queues in between

t4: A gives up PO-H
t5: A gives up PO-L

with no other tasks involved, B is scheduled at this point (o.k.)

however...
between t4 and t5, an arbitrary number of tasks can arrive with medium
deadlines and preemption level H-2, H-2 >= L. They go on queue L and are
executed in preference to B, although their deadlines are much later than
B's.  They don't block, so they run to conclusion if no other tasks arrive.
It follows that B can be delayed unboundedly, despite its near deadline.

---------------------

2. Timeline:

t3: A gives up PO-H
t4: B wakes up; it goes onto the L queue and, after A is finished, will
    be scheduled in preference to the subsequently arriving
    other tasks (o.k.)
---------------------

Conclusion:
   -  Certainly the execution orders are different.
   -  A rather serious race condition (t3 - t4 = Eps) decides between
      EDF-ness of the scheduling and unbounded delays breaking deadlines.

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, May 18, 2005  12:37 PM

Ignore my last message with the revised timeline. This was too fast a
response. I see that A cannot grab PO-L. I need to work on the example.

****************************************************************

From: Erhard Ploedereder
Sent: Friday, May 20, 2005  7:58 AM

Alan writes:
> I would appreciate the views of Tuck, Pascal, Randy on this.

So would I.

Alan continues:
> If you have EDF you must have Ceiling_Locking.
> Under D.3 (12) a task always inherits the ceiling priority
> when executing in a PO.
> So this rule applies additionally to the EDF rules and hence
> if a ready queue is identified by D.2.6 (20/2-23/2) but you
> are also inheriting a ceiling priority then the ceiling is clearly
> higher and wins.
> At worst I feel a note to make this clear is all that is needed.
> If people disagree, then wording such as that proposed by Erhard
> below seems OK.

My reaction... NO. We do not specify rules like "A = 5", "A = 6", and "if in
doubt, the first rule wins". We try to never have contradicting
rules (and the current rules are contradictory in their wording).
The wording does need to be fixed.

****************************************************************

From: Alan Burns
Sent: Tuesday, May 24, 2005  9:03 AM

There has been a series of emails concerning Annex D, which
has now gone quiet. I think there is one open issue, some
closed issues and a list of possible edits to the current
text.

The open issue concerns the rules for EDF. If a task becomes
unblocked in a protected operation the current wording seems
to be wrong. I feel that as Ceiling_Locking is required
then the unblocked task must run with ceiling priority. Erhard
feels that this is not enough and that rewording is needed.
I'll leave this issue for now.

The main point of this email is to put in one place what I
think are the edits needed to Annex D following Erhard and Tullio's
comments, my replies and subsequent emails.
Perhaps Randy has all this in place and has already
done the edits - but the following might help. I am assuming
that Erhard being quiet on an issue means he has agreed with me
(I'm sure he will shout if that is not the case).


Alan

--


EPD1: D.2.1, 1.3/2: "with dispatching" -> "with task dispatching".
	OK

EPD2: D.2.1, 9/2: implementation[-]defined manner
	OK

EPD3: D.2.2 2.2/2: may have a formatting problem (line split)
	OK

EPD4: D.2.2 3/2 and 3.1/2 could be easily combined to read "The
policy_identifier used in either pragma shall...

	OK, but current wording is very clear


EPD9: D.2.2, 6.3/2 Presumably not intended but right now this says that
setting a base priority unconditionally preempts the caller since the
affected task "is immediately dispatched". It should say "is
immediately subject to the new dispatching policy".

	Yes that is probably better


EPD11: D.2.5 6/2:  "for a single [priority] level"

	OK


EPD13: D.2.5 14/2:  WRONG!!!! APPLIES ONLY TO READY TASKS.
But I am not sure whether simply adding "ready" here is sufficient.
        OK, could add 'ready' or use existing text such as:
     When the setting of the base priority of a running task takes
     effect and the new priority is a Round Robin priority the
     task is moved to the tail of the ready queue for its new priority



EPD16: D.2.6 18/2 presumably should read : "...than the [active] priority"

	Yes



EPD17: D.2.6 20/2 BIG BUG !?! This model does not work.

        This is the remaining open issue


EPD18: D.2.6 22/2 BUG!! In a mixed mode, there is no guarantee that P is
an EDF-Priority and that the tasks in the P-queue have deadlines at
all. This looks like a workable model only if paras 20-24 are
subjected to the restriction of P being within the same EDF-priority
range. Note also that multiple ranges may be present.

        I agree this clarification is needed; suggest adding after 'highest
priority P' (in 20/2) the words 'within the priority range for the policy
that applies to T'


EPD20: D.2.6 24/2: Wording nit: "if no such ready queue exists" is always
false; the queue exists for sure. "if no such priority P
exists" is more like what is intended here.

        Agreed



EPD21: D.2.6 24/2: BUG. In the case where multiple priority ranges are
specified as EDF-priorities (which presently is allowed), the para is
ill-defined, since it doesn't say which EDF range.

        Yes some added words needed - would 'for task T' at the end be OK?



EPD23: D.2.6 24.a/2 1. BIG BUG!!! interesting... shorter deadlines by
themselves do not cause preemption ???  True by the above words in the RM,
but this is fundamentally wrong, isn't it? AI-357 agrees: the words
earlier in the RM are fundamentally wrong: a shorter deadline (just now
unblocked from some queue or delay) should preempt within the same
priority, the AI says so explicitly, but this preemption rule is not there
in the RM. Note that the rule is needed, since otherwise you may end up
with EDF for non-communicating tasks being equal to Nonpreemptive
scheduling!

        After much thought I agree. Add after 18/2 as a 4th bullet
point: 'there is a task on the ready queue for the active priority
of T with a deadline shorter than the deadline of T.'



EPD26: D.2.6 25/2: WRONG!!!! APPLIES ONLY TO READY TASKS. But I am not
sure whether simply adding "ready" here is sufficient.

        Yes, adding 'ready' is needed here, and is sufficient I think,
or use words similar to EPD13 above.



EPD28: D.2.6 26/2: what is an "active priority of the ready queue"?
Should read "the active priority of a task chosen for execution is the
priority of the ready queue from which it was taken."


        Yes this is better or 'active priority set to the priority
of the ready queue ...'



EPD29: D.2.5 27/2 "in this package" -> in package Ada.Dispatching.EDF

        OK



EPD31: D.5.1. 10/2 The new verbiage is definitely insufficient. Consider
that the call on Set_Priority is issued by some other task not presently
involved in a protected operation. Hence the setting of the priority needs
to happen immediately (since "the first point..." is now). This is not the
intent, is it? It certainly would contradict what 10.a/2 is saying and
what AI-188-2 wants to achieve. The words are plain wrong, since they do
not refer to the task whose priority is being changed.
However, just adding that task context to the words still seems wrong,
since the AI seems to try for an immediate change on all ready queues (on
which I can be, while preempted in a protected operation) and postponed
change on entry queues. (The stuff about abortion doesn't help me.)

        Yes this rule applies to the designated task, not the one calling
Set_Priority; this needs to be clear if Erhard read this the other way
round. But the wording used here is the same as in existing RM
(D.5.1(10)). So I'm not sure if re-wording is needed, happy to
leave that to Randy



EPD35: D.14 25/2 superfluous blank in "execution -time"

        Agreed


EPD37: D.14.1 21/2 "were a handler [of at least that ceiling priority] to
be executed." Actually, I would prefer less conditionality:
"The constant Min_Handler_Ceiling is the priority value that ensures that
no ceiling violation will occur if a handler of at least this priority is
executed."

        I leave that up to the Editor


EPD39: D.14.2 29/2 "were a handler [of at least that ceiling priority] to
be executed." Actually, I would prefer less conditionality:
"The constant Min_Handler_Ceiling is the priority value that ensures that
no ceiling violation will occur if a handler of at least this priority is
executed." see EPD37 for an ident. comment

        I leave that up to Editor



EPD41: D.15 15/2 Why not as the first step? I would be really worried if
an already mostly finalized Timing_Event fired.

        I cannot remember why it says final



D.2.1(9.a): typo: "The affect of" for "The effect of"

        OK


D.2.6(18/2): Again, we use the term "priority" without qualifying it with
"active", as we should, also given 19/2.

        Agree we could add active



D.2.6(30/2): typo: "procressor" for "processor".

        OK




D.4(12): typo: "if more than one such call has" --> "have" ?

        ?


D.6.1(3.b/2): typo: "processorshall" --> "processor shall"

        OK



D.14(25/2): typo: "execution -time" --> "execution-time"

        OK



D.14.2(6/2): typo: "range <})" --> "range <>)"

        OK

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, May 25, 2005  6:25 AM

I have updated the example so that it now assigns base priorities in
deadline order. The problem is still there. The scheduling is subject to
unbounded delays for "high priority" tasks, which is completely
unacceptable.  An analysis of why this is so (and a way to fix it)
is added at the bottom.

------------------------

Task A  has a long deadline and preemption level LOW
Task B has a short deadline and a preemption level b, H > b > L > LOW

Sequence of events:

t0: A starts
t1: A grabs PO-L (L is its ceiling)
t2: A grabs PO-H, too

from here, two alternative time-lines:

-------------------
1. Timeline:

t3: B wakes up; it goes into the bottom queue, since its preemption
    level is not sufficient to preempt an A holding PO-H, and there
    are no tasks in the ready queues in between

t4: A gives up PO-H
t5: A gives up PO-L

with no other tasks involved, B is scheduled at this point

however...

between t4 and t5, an arbitrary number of tasks can arrive with medium
deadlines and preemption level x, b > x > L. They go on queue L and are
executed in preference to B, although their deadlines are later than
B's.  If they don't block and no other tasks arrive, they run to completion,
which may be as far away as their own deadlines.

It follows that B can be delayed unboundedly, despite its near deadline.
Note that no task after t5 accesses a PO.

---------------------

2. Timeline:

t3: A gives up PO-H
t4: B wakes up; it goes onto the L queue and is immediately scheduled (see
    footnote *** below), since its deadline is shorter than A's;
    the subsequently arriving other tasks cannot influence B
    (the desired behaviour)
....
---------------------

Conclusion:
   -  Certainly the execution orders are different.
   -  A rather serious race condition (t3 - t4 = Eps) decides between
      EDF-ness of the scheduling and unbounded delays breaking deadlines.

*** actually, by current rules, it is not immediately scheduled. Present
rules 15/2-18/2 do not make this a task dispatching point, so B has to
wait until A is done with PO-L. Arguably, a break with the notion that
a higher-priority task preempts any task of lower active priority.
This is a side-issue, though (but it does play a role in the fix proposed
below).

---------------------

Analysis:
   The hole that I drove the truck through is the fact that the current
   rules allow for tasks to be queued at preemption levels and to retain
   the active priority thus acquired after the PO that caused it all has
   been released.

   For a while, I thought that the fix would be a loss of active priority
   for queue members once the causing PO is released. I could not believe
   that this was reasonable semantics, due to the distributed overhead on
   PO-operations.

   Then, for a while, I thought that 20/2-23/2 were just a very round-about
   way of saying that the new task preempts and that it is impossible to get
   any task to stay on a queue at levels higher than LOW other than a
   preempted task. I.e., 20/2-23/2 only put something on these queues for
   immediate selection by the scheduler.

   However, that's not the case. See also the *** footnote.
   (If someone does find the words that say so, please also check
   the case of equal base priorities, because in that case, 23/2 disallows
   EDF-scheduling.)

   If that were the case, the problem would go away and EDF would work (and
   replacing 20/2-23/2 by a "preempts" rule would make a lot more sense
   to the reader).

****************************************************************

From: Alan Burns
Sent: Wednesday, May 25, 2005  7:28 AM

>I have updated the example so that it now assigns base priorities in
>deadline order. The problem is still there. The scheduling is subject to
>unbounded delays for "high priority" tasks, which is completely
>unacceptable.  An analysis of why this is so (and a way to fix it)
>is added at the bottom.
>

Sorry to disagree ;-)

>Task A  has a long deadline and preemption level LOW
>Task B has a short deadline and a preemption level b, H > b > L > LOW
>
>Sequence of events:
>
>t0: A starts
>t1: A grabs PO-L (L is its ceiling)
>t2: A grabs PO-H, too
>
>from here, two alternative time-lines:
>
>-------------------
>1. Timeline:
>
>t3: B wakes up; it goes into the bottom queue, since its preemption
>    level is not sufficient to preempt an A holding PO-H, and there
>    are no tasks in the ready queues in between

No, I have not got the words in front of me, but L does have a task executing
within it (ie A - the fact that it is also in H does not stop it being in L as
well). So B joins (is the only member of) the ready queue at level L, and the right
behaviour follows.

Interestingly, it was when I did a formal proof of the protocol that this case
came out. Before that the situation was as you describe (we talked about
non-empty ready queues).

****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, May 25, 2005  10:13 AM

> No, I have not got the words in front of me, but L does have a task
> executing within it (ie A - the fact that it is also in H does not stop it
> being in L as well).

I stand corrected. I did not think to interpret 21/2 in the transitive way.
(And I am beginning to believe in the semantics, well, up to wording :-)

> So B joins (is the only member of) ready queue at level
> L; and the right behaviour follows.

Agreed, given DMPO.

Minor question, quite possibly only for my education:

why doesn't 23/2 say "the base priority of T is greater than or equal to P"?
I realize that a DMPO would not allow this for a t with an earlier
deadline, but it sure seems harmless and more in the spirit of EDF to let
the task with the earlier deadline take over at equal preemption level.

And, the related secondary question: Is "adding a task to a ready queue"
always a dispatching point? (For a ready queue, whose level is higher than
that of the running task, it is per 18/2.) -- For my suggestion above, this
would have to include equal as well upon changes to the queue.

****************************************************************

From: Tullio Vardanega
Sent: Wednesday, May 25, 2005  9:30 AM

I reckon that the right behaviour should be warranted anyway
if preemption levels were assigned inversely proportional
to the respective deadlines.
This does not seem to be the case in Erhard's example, does it?

In timeline 1. (the problematic one), at time t3, task B gets into
queue L because task A sits in there (but also in queue H, which
is however higher than b); B gets ahead of A in L because it has
an earlier deadline, so that when A releases PO-H, B preempts it
and the right behaviour follows.

I suppose that Alan did already point this out, but you'll excuse my
being slower ;-)

*****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, May 25, 2005  12:47 PM

On the semi-open issues....


> EPD13: D.2.5 14/2:  WRONG!!!! APPLIES ONLY TO READY TASKS.
         But I am not sure whether simply adding "ready" here is sufficient.

Alan:
>        OK, could add 'ready' or use existing text such as:
>     When the setting of the base priority of a running task takes
>     effect and the new priority is a Round Robin priority the
>     task is moved to the tail of the ready queue for its new priority

I agree with Alan's wording suggestion (and note that the "ready, not
running" case is covered by D.2.4 (6/2)). I knew "ready" was not enough :-)

-------

> EPD21: D.2.6 24/2: BUG. In the case where multiple priority ranges are
> specified as EDF-priorities (which presently is allowed), the para is
> ill-defined, since it doesn't say which EDF range.
>
>        Yes some added words needed - would 'for task T' at the end be OK?

If Alan agrees with changing the sentence anyway to refer to the active
priority (as discussed in a separate thread), the issue is moot. If not,
then this becomes a worm of a sentence, since "for task T" doesn't really
help - more like "the range in which the base priority of the task T is
included" , but since I would disagree on intent, anyway, I won't suggest
wording ;-)
Nota bene the obvious contradiction of this sentence with another sentence
in the RM that says that all tasks are in the queues of their active
priority, which Alan simply wants to "win" over this contradicting sentence.

-----------

> EPD26: D.2.6 25/2: WRONG!!!! APPLIES ONLY TO READY TASKS. But I am not
> sure whether simply adding "ready" here is sufficient.

Alan:
>        Yes, adding 'ready' is needed here, and is sufficient I think,
>or use words similar to EPD13 above.

On this one, I agree with the "ready". Note that the suggested EPD13 wording
would not work, since there is then no rule around for "ready, not running"
tasks, is there?

*****************************************************************

From: Erhard Ploedereder
Sent: Wednesday, May 25, 2005  7:02 PM

A postscript to :
> Minor question, quite possibly only for my education:

> why doesn't 23/2 say "the base priority of T is greater than or equal to P"?
> I realize that a DMPO would not allow this for a t with an earlier
> deadline, but it sure seems harmless and more in the spirit of EDF to let
> the task with the earlier deadline take over at equal preemption level.

> And, the related secondary question: Is "adding a task to a ready queue"
> always a dispatching point? (For a ready queue, whose level is higher than
> that of the running task, it is per 18/2.) -- For my suggestion above, this
> would have to include equal as well upon changes to the queue.

I just noticed that Alan's response to EPD23 addresses the second half of
this question already with the asked-for change.
Alan, can I convince you of the first one, too? It's only logical to do at
preemption levels what you do at base level.

Here is the EPD23 exchange for easier reference:

> EPD23: D.2.6 24.a/2 1. BIG BUG!!! interesting... shorter deadlines by
> themselves do not cause preemption ???  True by the above words in the RM,
> but this is fundamentally wrong, isn't it? AI-357 agrees: the words
> earlier in the RM are fundamentally wrong: a shorter deadline (just now
> unblocked from some queue or delay) should preempt within the same
> priority, the AI says so explicitly, but this preemption rule is not there
> in the RM. Note that the rule is needed, since otherwise you may end up
> with EDF for non-communicating tasks being equal to Nonpreemptive
> scheduling!

>         After much thought I agree. Add after 18/2 as a 4th bullet
> point: 'there is a task on the ready queue for the active priority
> of T with a deadline shorter than the deadline of T.'

*****************************************************************

From: Alan Burns
Sent: Thursday, May 26, 2005  2:48 AM

> I reckon that the right behaviour should be warranted anyway
> if preemption levels were assigned inversely proportional
> to the respective deadlines.
> This does not seem to be the case in Erhard's example, does it?


I think we do get the 'right' behaviour. Remember with fixed priority
scheduling if the priorities are assigned in the wrong order some tasks will
suffer unacceptable delays

*****************************************************************

From: Erhard Ploedereder
Sent: Thursday, May 26, 2005 12:33 AM

Mostly as a postscript to Randy, the Editor, in case he is keeping track of
rationales of changes:

> EPD23: D.2.6 24.a/2 1. BIG BUG!!! interesting... shorter deadlines by
> themselves do not cause preemption ???  True by the above words in the RM,
> but this is fundamentally wrong, isn't it? AI-357 agrees: the words
> earlier in the RM are fundamentally wrong: a shorter deadline (just now
> unblocked from some queue or delay) should preempt within the same
> priority, the AI says so explicitly, but this preemption rule is not there
> in the RM. Note that the rule is needed, since otherwise you may end up
> with EDF for non-communicating tasks being equal to Nonpreemptive
> scheduling!

>         After much thought I agree. Add after 18/2 as a 4th bullet
> point: 'there is a task on the ready queue for the active priority
> of T with a deadline shorter than the deadline of T.'

Here is the wooden nickel example that shows that much more would have been
broken without this added rule:

Task A  has a long deadline and preemption level LOW
Task B has a short deadline and a preemption level b, H > b > L > LOW

Sequence of events:

t0: A starts
t1: A grabs PO-L (L is its ceiling)
t2: A grabs PO-H, too
t3: B wakes up; it goes into L queue, since its preemption
    level is not sufficient to preempt an A holding PO-H

t4: A gives up PO-H
since A now goes back to level L, it MUST be that B with its lesser
deadline takes over, or else....

t5: A grabs PO-(L+1); b > L+1
now many tasks arrive, with medium deadlines and preemption level L+1
(and I can worsen the picture to all levels x, b > x > L, by A grabbing
 PO-H after t5 before arrival of these tasks)
t6: A gives up PO-(L+1)

Now all the medium deadline tasks at level L+1 are executed in preference to
B, although their deadlines are later than B's.  If they don't block and no
other tasks arrive, they run to completion, which may be as far away as
their own deadlines. Again, we would have unbounded starvation despite
deadline monotonic priorities/preemption levels.

t7: A gives up PO-L
t8: finally B's turn
--------------------

****************************************************************

From: Tullio Vardanega
Sent: Thursday, May 26, 2005  5:03 AM

I am commenting on:

(a):
Erhard's comment on Alan's response to EPD21
"[...] Nota bene the obvious contradiction of this sentence
[TV: i.e. D2.6(24/2)] with another sentence in the RM that
says that all tasks are in the queues of their active priority [...]"

(b):
Alan's recommendation to:
"[...] Add after 18/2 as a 4th bullet point: 'there is a task on
the ready queue for the active priority of T with a deadline
shorter than the deadline of T.' "

Regarding (a):
AI-357 makes it clear that tasks are initially placed in the ready
queue for the lowest priority level in their EDF range of reference.
This is what 20/2 says. And this, intentionally and legitimately,
overrides the previous RM rule.
The whole point of this overriding is that tasks be ordered by
earliest deadline, whereas priorities are used for ruling on
PO locking (Baker's SRP).
I suppose that an explicit note to this effect would do, in lieu
of the ramblings in 8.a/2.

Regarding (b):
If I understand how the EDF scheme works (which I hope
I do ;-), then I *cannot* see -- given (a) above and
[20/2-24/2] -- how 17/2 can ever occur.
Instead I very much see the point of the new (4th bullet) clause
that Alan suggests adding, and I suspect that it really should
replace 17/2.
In other words, 17/2 seems to imply priority-based dispatching
irrespective of deadlines, which does not seem right.

****************************************************************

From: Erhard Ploedereder
Sent: Friday, May 27, 2005  7:26 AM

> AI-357 makes it clear that tasks are initially placed in the ready
> queue for the lowest priority level in their EDF range of reference.
> This is what 20/2 says.  ....

No, that's not (all) what 20/2 says. It does say what you claim,
but it also says that this is true for any task which becomes ready for any
reason, and for all the latter tasks the sentence is wrong, wrong, wrong.

A second observation is that "active priority" for EDF scheduling is
presently not defined by the RM. We all think we know what it means, but
actually we're not being told by the RM (except that it is a function of
the base priority and any inherited priority; but which function, we're not
being told -- D.1(15)). Luckily for this case, this does allow us to
define "active priority" (which we have to do anyway, since D.1 forces us to
do so) to ignore the base priority.

Somehow, I seem to be unable to communicate my RM point (which has nothing
to do with which rule is right or wrong):
       "The RM shall not have contradicting statements".

The notion that one rule somehow "overrides" another rule is an impossible
notion, unless the rule says so. And we have precise terminology for that.
And we all hate it when we need to apply it.
It is spelled "Notwithstanding what this International Standard says
elsewhere", the backside of the moon is made of green cheese after all.
Luckily we don't need this here.


> If I understand how the EDF scheme works (which I hope
> I do ;-), then I *cannot* see -- in force of (a) above and of
> [20/2-24/2] -- how 17/2 can ever occur.

E.g., a call on Set_Deadline. (You did mean 17/2, right, since you mention
it thrice? But then, I don't get the last point, which seems to be more
about 18/2 than 17/2.)
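
As a purely illustrative sketch of that Set_Deadline case, using the
Ada.Dispatching.EDF operations proposed by this AI (the task name and the
5 ms value are invented):

   pragma Task_Dispatching_Policy (EDF_Across_Priorities);
   pragma Locking_Policy (Ceiling_Locking);

   with Ada.Real_Time; use Ada.Real_Time;
   with Ada.Dispatching.EDF;

   procedure Shorten_Other is
      task Worker;
      task body Worker is
      begin
         delay 1.0;   --  stands in for the task's real work
      end Worker;
   begin
      --  Give Worker a deadline 5 ms from now.  If that is earlier than the
      --  caller's own deadline, this call is a task dispatching point and
      --  the caller may be preempted here (the situation 17/2 is meant to
      --  cover).
      Ada.Dispatching.EDF.Set_Deadline
        (Clock + Milliseconds (5), Worker'Identity);
   end Shorten_Other;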

So, to answer your question above for 18/2 instead...
a) a task that blocks inside a protected op, upon wakeup (o.k., it's a
   bounded error, but still...)
b) on a multi-processor (by virtue of 20/2-23/2 on another processor)
c) on a uni-processor .... -- I can't think of one either ... Alan??

****************************************************************

From: Alan Burns
Sent: Tuesday, May 31, 2005  5:14 AM

>Minor question, quite possibly only for my education:
>
>why doesn't 23/2 say "the base priority of T is greater than or equal to P"?
>I realize that a DMPO would not allow this for a t with an earlier
>deadline, but it sure seems harmless and more in the spirit of EDF to let
>the task with the earlier deadline take over at equal preemption level.
>

The preemption level control protocols (including the ceiling protocol of Ada
95) always have the rule of strictly greater - this ensures mutual exclusion
when there is no blocking inside the object.

****************************************************************

From: Alan Burns
Sent: Tuesday, May 31, 2005  5:22 AM

>If Alan agrees with changing the sentence anyway to refer to the active
>priority (as discussed in a separate thread), the issue is moot. If not,
>then this becomes a worm of a sentence, since "for task T" doesn't really
>help - more like "the range in which the base priority of the task T is
>included" , but since I would disagree on intent, anyway, I won't suggest
>wording ;-)

Yes this wording is better.

>Nota bene the obvious contradiction of this sentence with another sentence
>in the RM that says that all tasks are in the queues of their active
>priority, which Alan simply wants to "win" over this contradicting sentence.


I do not see the contradiction - the ready queue still corresponds to the
active priority - but the rules for choosing which ready queue (and hence
the resulting active priority) are different for EDF.

****************************************************************

From: Alan Burns
Sent: Tuesday, May 31, 2005  5:26 AM

>I just noticed that Alan's response to EPD23 addresses the second half of
>this question already with the asked-for change.
>Alan, can I convince you of the first one, too? It's only logical to do at
>preemption levels what you do at base level.

No - in fixed priority scheduling (and EDF) you need to have a strictly
higher priority to preempt.
- see my previous email.

The following does not concern preemption levels, just a newly released
task with a shorter deadline than the running task - and hence preemption. My
'much thought' was not about the rule itself but about whether it was covered
elsewhere (which I convinced myself it was not).

****************************************************************

From: Alan Burns
Sent: Tuesday, May 31, 2005  5:31 AM

> In other words, 17/2 seems to imply priority-based dispatching
> irrespective of deadlines, which does not seem right.


17/2 is there to cover the case where the running task sets the deadline
of some other task, and the new deadline is shorter than that of the running
task. This is unlikely but has to be covered. Having said that, the new
bullet covers this case as well, and hence 17/2 could now be dropped (I think).

****************************************************************

From: Erhard Ploedereder
Sent: Saturday, November  5, 2005  2:19 PM


collected final comments....

-----
Editorials:

D.2.1, 9/2 superfluous blank "implementation- defined"
D.2.2, 2.2/2 formatting:  why are two lines needed ?

-----
BUG:

D.2.6. 25/2
The para reads:
When the setting of the base priority of a task takes effect and the
new priority is in a range specified as EDF_Across_Priorities, the
task is added to the ready queue corresponding to its new active
priority, as determined above.

This is wrong, as it would make blocked tasks ready. In earlier
discussion of this (see e-mail dated 25 May 05, 19:46:53), Alan agreed
to put "ready" in the sentence as in...

"When the setting of the base priority of a ready task takes effect and
the.."

Please do put it in.
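
To make the concern concrete, a small hypothetical sketch (names and values
invented): the target task is blocked in a delay when its base priority is
changed, so the wording must not imply that the change by itself puts the
task on a ready queue.

   pragma Task_Dispatching_Policy (EDF_Across_Priorities);
   pragma Locking_Policy (Ceiling_Locking);

   with Ada.Dynamic_Priorities;
   with System;

   procedure Retarget is
      task Sleeper;
      task body Sleeper is
      begin
         delay 60.0;   --  Sleeper is blocked here, not ready
      end Sleeper;
   begin
      --  Changing the base priority of the (blocked) Sleeper must not, by
      --  itself, add it to a ready queue; hence the word "ready" above.
      Ada.Dynamic_Priorities.Set_Priority
        (System.Priority'Last, Sleeper'Identity);
   end Retarget;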

-----
Editorial:
EPD29: D.2.5 27/2 "in this package" -> in package Ada.Dispatching.EDF

Alan wonders whether this is "too much of an edit".
But, just read it in context of the RM (not the AI). "this package" in
this RM place lacks any kind of textual binding.

------
BUG:

Tullio writes:
> D.14(17/2 and 19/2): these two clauses disagree on the effect of calling
> > Clock on tasks that are in between terminated and no longer existing.
(and similar complaints on paragraphs cited below.)

Erhard writes:
> EPD34: D.14 17/2: kill the half-sentence dealing with a terminated task
> T. Is covered by 19/2 (erroneous). ... As it is, there is a rather nasty
> race condition between getting an exception or being erroneous.

> EPD38: D.14.1 22/2 kill the half-sentence dealing with a terminated
> task T. It is covered by 24/2 (erroneous).  See EPD34 for more detail.

> EPD40: D.14.2 33/3 kill the half-sentence dealing with a terminated
> task T. It is covered by 35/2 (erroneous). See EPD34 for more detail.

Randy writes:
> As Alan points out, this makes no sense; these rules work the same as the
> ones for the operations in Task_Identification itself (such as Terminated).
> Surely we aren't going to have stricter requirements here than there??

Indeed, but we do right now because Alan is plain wrong...in
Task_Identification, there is no Tasking_Error being raised on
terminated tasks (just search for the word). In C.7.1., calling for a
terminated task is erroneous. Period. The same should hold here.

Hence our comments stand. The fix ought to be made in all these places
to avoid a nasty race condition.
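
For illustration only, a hypothetical fragment showing the window being
argued about; the function name is invented, and whether the call raises
Tasking_Error or is erroneous depends on which of the two readings applies
at the moment of the call:

   with Ada.Execution_Time;       use Ada.Execution_Time;
   with Ada.Task_Identification;  use Ada.Task_Identification;

   function Time_Used (T : Task_Id) return CPU_Time is
   begin
      --  If T has terminated, one clause says Tasking_Error is raised; if T
      --  no longer exists, another says execution is erroneous.  T can move
      --  from the first state to the second at any moment, which is the
      --  race condition referred to above.
      return Clock (T);
   end Time_Used;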
--------------------
BUG:

EPD42: D.15 26/2 The Note is in direct contradiction of 22/2. Delete it or
make it work by adding rules in the context of 22/2.  And even if it were
not contradictory, it could not be a Note, since it cannot be derived from
the RM.

Nobody seems to have picked up/responded on this issue (para. numbers
updated to the current version)

(Why is it contradictory? Imagine two tasks issuing such calls; how can you
possibly satisfy the atomicity required by 22/2 without risk of blocking?)

I ask for deletion, given the status of Notes vs. IRs. The Note is not
derivable at all, so a change would be necessary anyway. (But, if Alan
thinks that the semantics are needed, fine, make it an IR, but somehow
resolve the contradiction then.)

------
BUG/Disagreement

D.2.6(1/2) states:
> ... Unless otherwise specified, whenever tasks compete for processors or
> other implementation-defined resources, the resources are allocated to
> the task with the earliest deadline.

Erhard writes:
> EPD15: D.2.6 1/2: insert in second sentence: "if Earliest Deadline First
> (EDF) scheduling is in effect," because otherwise this surely doesn't
> apply. Or, more formally, "if the pragma EDF_Across_Priorities applies,".

Tullio writes:
> D.2.6(1/2): The last sentence of the clause is rather tricky, because
> clause D.1(15) states that resource allocation is decided on the highest
> priority level and D.1 should presumably be valid for all dispatching
> policies (as amended by local rules, of course). Hence, either this
> sentence is explicitly placed in the context of the EDF dispatching
> policy or we have to look back at D.1(15).

Randy writes:
> This introductory text was discussed extensively at an ARG meeting, and there
> is no reason to revisit that.

Sorry, but I am saying it is clearly wrong, since it mandates
deadline-ordered queues in all places that do not say otherwise. Are
you claiming it says something different?

****************************************************************

From: Alan Burns
Sent: Sunday, November 13, 2005  4:08 AM

-----
>BUG:
>
>D.2.6. 25/2
>The para reads:
>When the setting of the base priority of a task takes effect and the
>new priority is in a range specified as EDF_Across_Priorities, the
>task is added to the ready queue corresponding to its new active
>priority, as determined above.
>
>This is wrong, as it would make blocked tasks ready. In earlier
>discussion of this (see e-mail dated 25 May 05, 19:46:53), Alan agreed
>to put "ready" in the sentence as in...
>
>"When the setting of the base priority of a ready task takes effect and
>the.."
>
>Please do put it in.

OK

>-----
>Editorial:
>EPD29: D.2.5 27/2 "in this package" -> in package Ada.Dispatching.EDF
>
>Alan wonders whether this is "too much of an edit".
>But, just read it in context of the RM (not the AI). "this package" in
>this RM place lacks any kind of textual binding.
>

OK - obviously Randy's call

...
>>As Alan points out, this makes no sense; these rules work the same as the
>>ones for the operations in Task_Identification itself (such as Terminated).
>>Surely we aren't going to have stricter requirements here than there??
>>
>>
>
>Indeed, but we do right now because Alan is plain wrong...in
>Task_Identification, there is no Tasking_Error being raised on
>terminated tasks (just search for the word). In C.7.1., calling for a
>terminated task is erroneous. Period. The same should hold here.
>
>Hence our comments stand. The fix ought to be made in all these places
>to avoid a nasty race condition.

Perhaps I quoted the wrong 'example'. I refer you to D.11 in Ada 95. Here
Tasking_Error is raised if the task has terminated (D.11(8)) and the program is
erroneous if the task no longer exists (para 9). I think this is fine, as a task
moves from terminated to no longer existing in a clearly defined way. So I do not
see a race condition and think we can stay with these words. In fact I seem to
remember (and this is some years ago now) that Erhard was the person who drafted
these words for me!


...
>I ask for deletion, given the status of Notes. vs. IRs. The Note is not
>derivable at all, so a change would be necessary anyway. (But, if Alan
>thinks that the semantics are needed, fine, make it an IR, but somehow
>resolve the contradiction then.)

It is important that 26/2 is true, as a number of useful algorithms will
need this. Also note that the handler is executed at priority
Interrupt_Priority'Last, so on a single processor atomicity comes for free.

I assumed that a Note was correct, as we are making use of the lack of a
statement that a call of Set_Handler is potentially blocking.

Blocking is an Ada term. I do not feel it follows that to implement
atomicity the affected task needs to be blocked in the Ada sense of the term.
Moreover, to implement atomicity I would execute Set_Handler at priority
Interrupt_Priority'Last, so we get serialization without any form of blocking
(just preemption).

I conclude there is no need to change.
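
For reference, a small sketch of the D.15 usage being discussed, with all
names invented: a protected handler whose ceiling is Interrupt_Priority'Last,
attached to a timing event by Set_Handler from ordinary (non-blocking) code:

   with Ada.Real_Time;                use Ada.Real_Time;
   with Ada.Real_Time.Timing_Events;  use Ada.Real_Time.Timing_Events;
   with System;

   package Watchdog is

      protected Alarm is
         pragma Interrupt_Priority (System.Interrupt_Priority'Last);
         procedure Fire (Event : in out Timing_Event);   --  the handler
      end Alarm;

      Event : Timing_Event;

      procedure Arm;   --  (re)arm the alarm 5 ms from now

   end Watchdog;

   package body Watchdog is

      protected body Alarm is
         procedure Fire (Event : in out Timing_Event) is
         begin
            null;   --  react to the timeout here
         end Fire;
      end Alarm;

      procedure Arm is
      begin
         --  Replacing the handler; on the reading above this is not a
         --  potentially blocking operation, so it could equally be done
         --  from within a protected action.
         Set_Handler (Event, Clock + Milliseconds (5), Alarm.Fire'Access);
      end Arm;

   end Watchdog;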

...
>Randy writes:
>>This introductory text was discussed extensively at an ARG meeting, and
>>there is no reason to revisit that.
>
>Sorry, but I am saying it is clearly wrong, since it mandates
>deadline-ordered queues in all places that do not say otherwise. Are
>you claiming it says something different?

I assumed that the introductory paragraph was just that - introductory. It
gives the basic property of the algorithm. I was asked to reword this many
times, with final agreement on the current words - so I tend to agree with Randy.

PS: my apologies for not attending the forthcoming meeting.

****************************************************************

From: Randy Brukardt
Sent: Tuesday, November 15, 2005  11:30 PM

Erhard writes:

> collected final comments....
>
> -----
> Editorials:
>
> D.2.1, 9/2 superfluous blank "implementation- defined"

I don't see this blank.

> D.2.2, 2.2/2 formatting:  why are two lines needed ?

Because one line wouldn't fit on the page. (Remember, the RM is smaller than
the AARM - look at the RM-Final.PDF file.)

> -----
> BUG:
>
> D.2.6. 25/2
...
> "When the setting of the base priority of a ready task takes effect and
> the.."
>
> Please do put it in.

OK.

> -----
> Editorial:
> EPD29: D.2.5 27/2 "in this package" -> in package Ada.Dispatching.EDF
>
> Alan wonders whether this is "too much of an edit".
> But, just read it in context of the RM (not the AI). "this package" in
> this RM place lacks any kind of textual binding.

You mean D.2.6(26), right? This change was already made there.

> ------
> BUG:
>
> Tullio writes:
> > D.14(17/2 and 19/2): these two clauses disagree on the effect of calling
> > Clock on tasks that are in between terminated and no longer existing.
> (and similar complaints on paragraphs cited below.)
...
> Indeed, but we do right now because Alan is plain wrong...in
> Task_Identification, there is no Tasking_Error being raised on
> terminated tasks (just search for the word). In C.7.1., calling for a
> terminated task is erroneous. Period. The same should hold here.

Huh? Calling an operation of C.7.1 for a terminated (but still existing) task
is legal and had better work. Are you saying that an implementation of
Is_Terminated that always returns False is correct? I don't think so.

In any case, I should have pointed out C.7.2 (Task Attributes), which has
*exactly* these rules. (And always has, see C.7.2(13)). And Alan points out
that D.11 has these rules, too.

> Hence our comments stand. The fix ought to be made in all these places
> to avoid a nasty race condition.

As Alan says in his reply, these are transitions that happen in some random
way. A task goes from running to terminated and then eventually to not
existing. Tullio talks about a task that's "in between terminated and not
existing", but I don't know what that means: either the task still exists in
the standard form, or it doesn't. It's not like objects gradually fade
away!!

And I don't see that the "race condition" here matters in implementation
terms; we have a defined result for a terminated task that the
implementation knows about. It's certainly not testable for a "non-existent"
task, anyway, so I don't see any impact.

There's nothing wrong here, I've made no change here.

> --------------------
> BUG:
>
> EPD42: D.15 26/2 The Note is in direct contradiction of 22/2. Delete it or
> make it work by adding rules in the context of 22/2.  And even if it were
> not contradictory, it could not be a Note, since it cannot be derived from
> the RM.
>
> Nobody seems to have picked up/responded on this issue (para. numbers
> updated to the current version)
>
> (Why is it contractory? Imagine two tasks issuing such calls; how can you
> possibly satisfy atomicity required by 22/2 without risk of blocking ? )
>
> I ask for deletion, given the status of Notes. vs. IRs. The Note is not
> derivable at all, so a change would be necessary anyway. (But, if Alan
> thinks that the semantics are needed, fine, make it an IR, but somehow
> resolve the contradiction then.)

I'm following Alan's resolution here. Especially as I see no reason to think
that atomicity requires blocking: surely pragma Atomic doesn't require
blocking, and set handler is just a pointer copy anyway - it would be hard
for it to be *non-atomic*. And, as Alan says, predefined routines are not
potentially blocking unless we say so.

Note that Task Attributes require atomicity, but are not defined to be
potentially blocking. (Humm, that might be a bug. Well, whatever.)

> ------
> BUG/Disagreement
>
> D.2.6(1/2) states:
> > ... Unless otherwise specified, whenever tasks compete for processors or
> > other implementation-defined resources, the resources are allocated to
> > the task with the earliest deadline.
>
> Erhard writes:
> > EPD15: D.2.6 1/2: insert in second sentence: "if Earliest Deadline First
> > (EDF) scheduling is in effect," because otherwise this surely doesn't
> > apply. Or, more formally, "if the pragma EDF_Across_Priorities
> applies,".
>
> Tullio writes:
> > D.2.6(1/2): The last sentence of the clause is rather tricky, because
> > clause D.1(15) states that resource allocation is decided on the highest
> > priority level and D.1 should presumably be valid for all dispatching
> > policies (as amended by local rules, of course). Hence, either this
> > sentence is explicitly placed in the context of the EDF dispatching
> > policy or we have to look back at D.1(15).
>
> Randy writes:
> > This introductory text was discussed extensively at an ARG meeting, and
> > there is no reason to revisit that.
>
> Sorry, but I am saying it is clearly wrong, since it mandates
> deadline-ordered queues in all places that do not say otherwise. Are
> you claiming it says something different?

OIC. Yes, that is goofy, and something needs to be done. The deadline values
are meaningless unless a dispatching policy that uses them is in effect. The
"unless otherwise specified" is exactly backwards for that purpose.

I changed it to:

The deadline of a task is an indication of the urgency of the task; it
represents a point on an ideal physical time line. For policies that use
deadlines, whenever tasks compete for processors or other
implementation-defined resources, the resources are allocated to the task with
the earliest deadline.

Using the vague "policies" to cover imp-def task dispatching policies using
deadlines, and possibly even locking or queuing policies.

****************************************************************

From: Alan Burns
Sent: Wednesday, November 16, 2005  4:14 AM

As I will not be at the meeting, I just wanted to throw in my view that I
agree with all of Randy's thoughts above (especially when he agrees with me!).
The re-wording in the final part above seems to usefully cover this and any
imp-def related policy.

****************************************************************

From: Tucker Taft
Sent: Saturday, November 19, 2005  6:21 AM

We are trying to be sure there are sufficient rules
to determine what dispatching policy applies to a task,
and to cover how tasks move from one dispatching policy
to another.  In particular, we have debated whether
the base priority or the active priority of a task
determines which dispatching policy it obeys.

D.2.1(5) links dispatching policies to ready queues,
which seems to favor active priority as determining the
dispatching policy.

D.2.2(6.3) talks about changes to base priority as being
something that might affect what dispatching policy applies,
even though setting a base priority might not affect
active priority if there are priorities being inherited;
furthermore, if a task inherits a high-enough priority presumably
it could become subject to another dispatching policy.

D.2.6(20-23) imply that a task's base priority must
be within an EDF range for them to apply, but it seems
possible that through inheritance a task's active
priority could be in an EDF range even though its base
priority isn't.

What happens when an EDF task is held?  The effect of "Hold"
is described in terms of removing the base priority as a
source of inheritance, but base priority isn't a source
of inheritance in EDF, and instead acts as an upper limit.

Comments appreciated!  I'll try to send more info later.

****************************************************************

From: Alan Burns
Sent: Sunday, November 20, 2005  6:49 AM

Sorry for slow reply - broadband failed at home so I've had
to come to work to see if there were any ARG emails!

...
> D.2.1(5) links dispatching policies to ready queues,
> which seems to favor active priority as determining the
> dispatching policy.

That is not quite my reading of this paragraph. I feel it is saying that the
whole model of dispatching is described in terms of ready queues, and that
policies, when defined, will need to refer to their impact on ready queues.

> D.2.2(6.3) talks about changes to base priority as being
> something that might affect what dispatching policy applies,
> even though setting a base priority might not affect
> active priority if there are priorities being inherited;
> furthermore, if a task inherits a high-enough priority presumably
> it could become subject to another dispatching policy.

To stop inheritance being the cause of a permanent policy change, we link
policy to base priority. In a multi-policy system in which tasks interact
across the policies (for example by sharing access to a protected
object), a task will 'obey' the rules for the ready queue it is on (ordered
by deadline or by FIFO, as appropriate).

> D.2.6(20-23) imply that a task's base priority must
> be within an EDF range for them to apply, but it seems
> possible that through inheritance a task's active
> priority could be in an EDF range even though its base
> priority isn't.

Yes, this is intended. Assume a band of EDF on top of a low-priority round
robin band. An RR task shares an object with an EDF task. So when the RR
task enters the PO its active priority will rise to a priority in the EDF
range. It is the fundamental priority level that keeps it executing; if it
was eligible to run and then has its priority raised, it must still be the
most eligible task to run unless and until some other task is released - if
it is preempted it will be placed on the ready queue for this active
priority, in a position determined by its 'deadline'. Note that all tasks
have deadlines, even if their base priority is not in an EDF band. However,
the rules for preemptive dispatching imply that the ready queue at that time
will only have the RR task on it.
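
A sketch of the kind of mixed-policy configuration described here, with
invented band boundaries and names; the shared object's ceiling lies in the
EDF band, so the Round Robin caller acquires an EDF-range active priority
while inside it:

   pragma Locking_Policy (Ceiling_Locking);
   pragma Priority_Specific_Dispatching (Round_Robin_Within_Priorities,  1, 10);
   pragma Priority_Specific_Dispatching (EDF_Across_Priorities,         11, 20);

   with Ada.Real_Time; use Ada.Real_Time;

   procedure Mixed_Bands is

      protected Shared is
         pragma Priority (15);   --  ceiling (preemption level) in the EDF band
         procedure Update;
      end Shared;

      protected body Shared is
         procedure Update is
         begin
            null;
         end Update;
      end Shared;

      task RR_Task is
         pragma Priority (5);    --  base priority in the Round Robin band
      end RR_Task;

      task EDF_Task is
         pragma Priority (12);   --  base priority in the EDF band
         pragma Relative_Deadline (Milliseconds (20));
      end EDF_Task;

      task body RR_Task is
      begin
         Shared.Update;   --  active priority rises to 15 while inside Shared
      end RR_Task;

      task body EDF_Task is
      begin
         Shared.Update;
      end EDF_Task;

   begin
      null;
   end Mixed_Bands;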

> What happens when an EDF task is held?  The effect of "Hold"
> is described in terms of removing the base priority as a
> source of inheritance, but base priority isn't a source
> of inheritance in EDF, and instead acts as an upper limit.

When a task is held, its base priority is changed to be below that of
the idle task. So if it was an EDF task before being held, it is no longer
an EDF task while held. Later, when 'continued', its base priority is
changed back to the EDF level and it continues as an EDF task.
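
For completeness, a hypothetical fragment using the existing D.11 operations
on such a task; the names and delay values are invented:

   pragma Task_Dispatching_Policy (EDF_Across_Priorities);
   pragma Locking_Policy (Ceiling_Locking);

   with Ada.Asynchronous_Task_Control;  use Ada.Asynchronous_Task_Control;

   procedure Hold_Demo is
      task Worker;   --  an EDF task, given the partition-wide policy above
      task body Worker is
      begin
         for I in 1 .. 1_000 loop
            delay 0.001;   --  stands in for periodic work
         end loop;
      end Worker;
   begin
      Hold (Worker'Identity);       --  base priority drops below the idle
                                    --  task; Worker is no longer an EDF task
      delay 0.5;
      Continue (Worker'Identity);   --  base priority restored; the EDF rules
                                    --  apply again
   end Hold_Demo;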

> Comments appreciated!  I'll try to send more info later.

I do not see any contradictions in the paragraphs you quote:

1) The base priority of a task determines the basic policy under which
   it is dispatched.
2) A dispatching policy determines the rules for the behavior of the
   ready queues under its control.
3) If the active priority of a task is in the same priority (policy)
   band as its base priority, there is no problem at all.
4) If the active priority is higher (it cannot be lower), then if the task
   is preempted it will join an empty ready queue, and if that ready queue
   later has tasks added to it, the policy for that queue applies.
5) Note that all tasks have (default) deadlines, and the rules for Round
   Robin tasks make it clear that they cannot have their quantum exhausted
   while inheriting a priority.
6) All non-EDF queues are ordered by FIFO and hence defined for all tasks.

Seems alright to me!
I hope this helps

****************************************************************

From: Stephen Michell
Sent: Sunday, November 20, 2005  7:39 AM

Perhaps you will be available to join us by phone this morning. Joyce has
a net-phone on her laptop and will call you if you are available and give
you a number. We start the meeting in half an hour.

I have some comments.

At the end, I also have a few issues that need addressing.

...
>> D.2.2(6.3) talks about changes to base priority as being
>> something that might affect what dispatching policy applies,
>> even though setting a base priority might not affect
>> active priority if there are priorities being inherited;
>> furthermore, if a task inherits a high-enough priority presumably
>> it could become subject to another dispatching policy.
>
> To stop inheritance being the cause of a permanent policy change we link
> policy to base priority. In a multi-policy system in which tasks interact
> across the policies (by for example sharing access to a protected
> object) a task will 'obey' the rules for the ready queue it is on
> (ordered by EDF or Deadline).
>
Agreed. It took a while to come around to this, but we now are in agreement.

>> D.2.6(20-23) imply that a task's base priority must
>> be within an EDF range for them to apply, but it seems
>> possible that through inheritance a task's active
>> priority could be in an EDF range even though its base
>> priority isn't.
>
> Yes this is intended. Assume a band of EDF on top of a round
> robin low priority band. A RR task shares an object with an EDF
> task. So when the RR enters the PO its active priority will rise to a
> priority in the EDF range. It is the fundamental priority level that keeps it
> executing; if it was eligible to run and then has its priority raised it
> must still be the most eligible task to run unless and until some
> other task is released - if it is preempted it will be placed on the ready
> queue for this active priority in a position determined by its
> 'deadline'. Note all tasks have deadlines, even if their base priority is
> not in an EDF band. However the rules for preemptive dispatching imply that
> the ready queue at that time will only have the RR task on it.

It certainly is true on a single-CPU system that a task being raised to a
higher priority implies that the queue for that priority is currently empty,
but that does not imply that the queue will remain empty until the task
finishes executing at that priority. Since the task has an effectively
infinite deadline, it could be very adversely affected by a call to a PO
in an EDF priority range.

I think that we need to say that the priority of protected types must be
above the EDF range, unless EDF is the only policy on the system.

...

Here are the other issues that I identified.

D.1(20) [AI-357/11]
   This paragraph was updated as suggested at the Burlington meeting. The minutes of
   that meeting say "Alan will update the AARM note". Since that didn't happen
   (so far as I can tell - the Annex D comments got very confused), I tried to
   update it.              (LOOK AT THIS & NEXT FEW, & FOR POLICIES)

We need to make it clear what the connection is between active priority and
base priority. Here is possible wording.

     Unless otherwise specified for a task dispatching policy, the
     active priority is the maximum of the base priority and any
     priority inherited by T.


 D.2.6(17-18)
   Current wording says that a dispatching point happens when a task is
in a queue. The actual point is the event of the other task being added
to a queue. Here is suggested wording

15 A task Dispatching... (same)
16 a change to the deadline... (same)
17 a task is added to the ready queue with the same priority as the
active priority of the executing task but has an earlier deadline than
the executing task
18 a task is added to a ready queue of the same processor as the
executing task but with a higher priority than the active priority of
the executing task

****************************************************************

From: Tucker Taft
Sent: Sunday, November 20, 2005  8:08 AM

We went ahead and decided the following:

   The base priority determines the policy that computes
   the active priority.  The active priority determines
   what ready queue the task ends up in.

I think that essentially agrees with what you are saying.

As far as "held" tasks, it seems like their active
priority should be computed using the FIFO_Within_Priorities
policy, excluding the base priority from the calculation.

Does that make sense?

****************************************************************

From: Alan Burns
Sent: Monday, November 21, 2005  3:07 AM

> We went ahead and decided the following:
>
>   The base priority determines the policy that computes
>   the active priority.  The active priority determines
>   what ready queue the task ends up in.
>
> I think that essentially agrees with what you are saying.

Yes, that's fine.

> As far as "held" tasks, it seems like their active
> priority should be computed using the FIFO_Within_Priorities
> policy, excluding the base priority from the calculation.
>
> Does that make sense?

Yes - as I said, I feel the definition of held is 'change the base priority to
this minimum value'. In a multi-policy system we say that any priority level
that is not explicitly defined to have a policy has 'FIFO_Within_Priorities'
(D.2.2 3.5/2), so I assumed we had already covered this situation. However,
if extra words are needed to say this, fine.

*****************************************************************
