CVS difference for ais/ai-00357.txt

Differences between 1.2 and version 1.3

--- ais/ai-00357.txt	2003/10/29 22:54:14	1.2
+++ ais/ai-00357.txt	2003/11/25 02:23:38	1.3
@@ -1,4 +1,4 @@
-!standard D.03 (00)                                  03-09-27  AI95-00357/01
+!standard D.03 (00)                                  03-11-24  AI95-00357/02
 !class amendment 03-09-27
 !status work item 03-09-27
 !status received 03-09-27
@@ -26,62 +26,127 @@
 It is important that EDF works with priorities and hence a combined
 scheme via Priority_Specific scheduling is defined.
 
+For many scheduling schemes, including for example EDF, the most effective
+locking policy for protected objects is one known as the Stack Resource
+Policy (or preemption level locking). This was defined
+by Baker in 1991 [1] and has the advantage that it does not enforce the
+same policy on task dispatching and PO locking, yet it still leads to an
+effective implementation of POs. Moreover, when priority dispatching
+is used, the SRP policy is the same as ceiling locking.
+
 !proposal
+
+Supporting EDF scheduling, together with the necessary protocol
+for sharing protected objects, requires the following:
 
-First support for deadlines. The following package is provided:
+1. a means of representing a task's (absolute) deadline
+2. a means of assigning a preemption level to a task
+3. a means of assigning a preemption level to a PO
+
+The basic rule for preemption is that a new task T will preempt
+the currently running task S if:
+
+1) the deadline of T is before that of S, and
+2) the preemption level of T is higher than the preemption level of
+any locked PO.
+
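+Expressed as a simple test, the rule is just the conjunction of these two
+conditions. The following is a sketch only; the function and parameter
+names are invented for illustration:
+
+   --  T preempts the running task S when both conditions above hold.
+   --  Deadlines are values of Ada.Real_Time.Time (see the package below);
+   --  preemption levels are represented here as plain integers.
+   function Preempts (T_Deadline, S_Deadline  : Ada.Real_Time.Time;
+                      T_Level                 : Natural;
+                      Highest_Locked_PO_Level : Natural) return Boolean
+   is
+      use type Ada.Real_Time.Time;
+   begin
+      return T_Deadline < S_Deadline
+        and then T_Level > Highest_Locked_PO_Level;
+   end Preempts;
+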
+If preemption levels are assigned (by the programmer) using
+relative deadlines for each task then a tight upper bound on
+blocking equivalent to that of priority ceiling inheritance is
+obtained.
+
+In this proposal:
+
+a) deadlines are represented by a new task attribute
+b) preemption levels for tasks are represented by base priority
+c) preemption levels for POs are represented by ceiling priorities
+
+The usual behaviour of priorities, when tasks execute within a PO,
+is followed. But the active priority of a task executing outside
+a PO may be lower than its base priority (see details below).
+
+First, support for (absolute) deadlines. The following package is provided:
+
+with Ada.Real_Time;
+with Ada.Task_Identification; use Ada.Task_Identification;
+package Ada.Deadline_Support is
+  subtype Deadline is Ada.Real_Time.Time;
+  Default_Deadline : constant Deadline := Ada.Real_Time.Time_Last;
+  procedure Set_Deadline(D : in Deadline; T : in Task_ID := Current_Task);
+  procedure Get_Deadline(D : out Deadline; T : in Task_ID := Current_Task);
+end Ada.Deadline_Support;
 
-with Ada.Task_Attributes;
-package Ada.Real-Time.Deadline_Support is
-  package Deadline is new Ada.Task_Attributed(Time, Time_Last);
-end Ada.Real_Time.Deadline_Support;
+together with Implementation Advice that the deadline value
+be held as a task attribute.
 
-To enable a task to be assigned (a non-default) deadline to control
-its activation a new pragmas is provided;
+To enable a task to be assigned a (non-default) deadline to control
+its activation, a new pragma is provided:
 
 pragma Deadline(expression);
 where the expected type of expression is Ada.Real_Time.Time.
 
-This pragma can only occur in the specification part of a task.
+This pragma is the only real impact this AI has on the compiler;
+if the pragma is rejected then the Default_Deadline should instead be
+Ada.Real_Time.Time_First, so that all new tasks would be given a
+very early deadline.
 
-A task can change its current deadline by use of the attribute
-operations: Deadline.Set_Value and read the current deadline
-via Deadline.Value.
+The pragma can only occur in the specification part of a task.
 
+A task can change its current deadline by use of Set_Deadline
+and read the current deadline using Get_Deadline.
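+
+For illustration, a periodic task that combines the pragma and these
+operations might look as follows. This is a sketch only: the task name,
+priority, period and deadline values are all invented, and the proposed
+Ada.Deadline_Support package above is assumed.
+
+with Ada.Real_Time;        use Ada.Real_Time;
+with Ada.Deadline_Support; use Ada.Deadline_Support;
+procedure Deadline_Demo is
+
+   task Periodic_Worker is
+      pragma Priority (10);
+      --  Initial deadline: 5 ms after elaboration of the specification.
+      pragma Deadline (Clock + Milliseconds (5));
+   end Periodic_Worker;
+
+   task body Periodic_Worker is
+      Period       : constant Time_Span := Milliseconds (20);
+      Next_Release : Time               := Clock;
+   begin
+      for Release in 1 .. 100 loop
+         --  ... the work of one release goes here ...
+         Next_Release := Next_Release + Period;
+         --  Here the deadline of each release is its release time.
+         Set_Deadline (Next_Release);
+         delay until Next_Release;
+      end loop;
+   end Periodic_Worker;
+
+begin
+   null;
+end Deadline_Demo;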
+
 To support EDF scheduling a new Priority_Policy identifier is
 defined: EDF_Within_Priority. If this policy is in effect then
 the Task_Dispatching_Policy must be Priority_Specific and the
-Locking_Policy must be Preemption_Level_Within_Ceiling_Locking.
+Locking_Policy must be Ceiling_Locking:
+
+pragma Task_Dispatching_Policy (Priority_Specific);
+pragma Priority_Policy (EDF_Within_Priority, Lower_Pri, Upper_Pri);
 
-When the Priority policy EDF_Within_Priority is in effect for
-priority level P the ready queue for P is ordered by the deadline
-attribute, with the head of the queue being the task with the
-earliest deadline value.
-
-Modifications to the ready queues occur as follows:
-
-o when a blocked task becomes ready it is added to the ready queue
-for priority P at the position dictated by its deadline attribute
-
-o when a task executes a delay_statement that does not result in
-blocking it is added to the ready queue at the position dictated
-by its deadline attribute
-
-o when a task has its base priority changes and as a result
-joins the ready queue for priority P it is added to the ready queue
-at the position dictated by its deadline attribute
-
-Each of the events specified above is a task dispatching point (see
-D.2.1).
-
-A task dispatching point occurs for the currently running task with base
-priority P whenever there is a task on the ready queue for P with
-a shorter deadline and with a preemption level higher than the
-preemption level of any protected object in use by tasks of
-priority P. The currently running task is said to be preempted and
-is added to the ready queue at the position dictated by its deadline
-attribute.
+This defines (see AI-355) a band of priorities,
+Lower_Pri..Upper_Pri, that will be used for EDF scheduling.
 
+When the Priority_Policy EDF_Within_Priority is in effect, the
+following rules apply. Let T'Base be the base priority of task T
+and T'Deadline its absolute deadline.
+
+
+Rule 1
+All ready queues in the specified range Lower_Pri..Upper_Pri
+are ordered by deadline. For two tasks on the same ready queue,
+S and T: S'Deadline < T'Deadline implies S is closer to the
+head of the queue than T.
+
+Rule 2
+When a task T becomes unblocked it is placed on the highest
+non-empty ready queue R (in the range Lower_Pri+1..Upper_Pri) such that
+  T'Deadline < Deadline of the task at the tail of R, and
+  T'Base > Priority level of R.
+If no such R exists then T is added to the ready queue for Lower_Pri.
+(This selection is illustrated in the sketch after Rule 6.)
+
+Rule 3
+When a task is chosen for execution it runs with the active priority
+determined by the ready queue from which the task was taken. If
+preempted it returns to the ready queue for its active priority.
+If it inherits a higher active priority it will return to its
+original active priority when it no longer inherits the higher
+level. (NOT SURE IF THIS RULE IS ALREADY IMPLIED)
+
+Rule 4
+When a task executes a delay_statement that does not result in
+blocking it is added to the ready queue for its active priority.
+
+Rule 5
+When a task has its base priority changed to a value in the range
+Lower_Pri..Upper_Pri it is placed on the ready queue for priority
+Lower_Pri when the change to its base priority takes effect.
 
+Rule 6
+The implementation rounds all ceilings at level Lower_Pri up to Lower_Pri+1.
+
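+As an illustration of Rule 2 only, the selection of the target ready queue
+can be written roughly as below. This is a sketch, not part of the
+proposal: Queue_Is_Empty and Tail_Deadline stand in for the run-time's
+view of the ready queues, and Lower_Pri and Upper_Pri are assumed to be
+the bounds given in the Priority_Policy pragma.
+
+   --  Rule 2 sketch: choose the ready queue on which an unblocked task
+   --  with deadline T_Deadline and base priority T_Base is placed.
+   function Target_Queue (T_Deadline : Ada.Real_Time.Time;
+                          T_Base     : System.Priority)
+                          return System.Priority
+   is
+      use type Ada.Real_Time.Time;
+   begin
+      for P in reverse Lower_Pri + 1 .. Upper_Pri loop
+         if not Queue_Is_Empty (P)
+           and then T_Deadline < Tail_Deadline (P)
+           and then T_Base > P
+         then
+            return P;   --  highest queue satisfying both conditions
+         end if;
+      end loop;
+      return Lower_Pri;  --  no such queue exists: use the queue at Lower_Pri
+   end Target_Queue;
+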
+Wording such as the following is needed:
+"For multiprocessor implementations other rules may apply."
+
 !wording
 
 
@@ -90,6 +155,49 @@
 
 !discussion
 
+Here we show that the above rules do give the required behaviour for EDF
+scheduling and preemption level locking (without the need for a new
+locking policy, i.e. just use of ceiling locking).
+
+First, if no POs are used or locked, then all tasks are always placed on the
+ready queue for level Lower_Pri (which is of course ordered by deadline).
+
+The preemption level rule is that a newly released task (T) should
+preempt the currently running task (S) if it has a higher preemption
+level than any task that has locked a PO, and a shorter deadline than
+S. There are a few cases to look at:
+
+If S has locked the PO then T will be placed on the same ready queue
+as S; T has the shorter deadline and hence will execute first.
+
+If some other task has locked the PO then S must have preempted it
+and hence will again be on the ready queue for this level. T is added
+to the same queue and will preempt.
+
+If there are a number of locked POs then T needs to be placed on the
+correct ready queue. On all ready queues, apart from that at Lower_Pri,
+the tail of the queue is the task that holds the lock on the PO whose
+ceiling priority is that of the ready queue. Rule 2 then selects the
+right queue: the queue is appropriate as T has the correct attributes
+(a shorter deadline than the task holding the lock, and a higher
+preemption level). Moreover no higher priority queue is appropriate,
+as one of the conditions will be false.
+
+Note that to be placed on a ready queue, T must have a strictly higher
+preemption level (base priority) than the task on that queue that
+is holding the lock. Hence mutual exclusion is not broken by
+the preemption rules.
+
+--
+
+During discussions an alternative way of expressing this model
+was considered. This did not need the 'no POs at level Lower_Pri'
+rule. But it did require that tasks be placed one level above
+the level in the above model, and required the rule that a
+task preempted while executing in a PO must always stay
+at the front of its ready queue even if its deadline is later than
+that of the next task on the queue.
+
 
 !ACATS test
 
@@ -97,4 +205,2220 @@
 
 !appendix
 
+Reference
+[1] Baker, T.P., "Stack-Based Scheduling of Real-Time Processes",
+Real-Time Systems, Vol. 3, No. 1, pp. 67-99, March 1991.
+
+****************************************************************
+
+From: Alan Burns
+Sent: Monday, October 20, 2003  4:27 AM
+
+Re EDF scheduling and preemption levels.
+
+One suggestion from the last ARG meeting was that the need for preemption
+levels could be catered for by using a block of priority levels for a single
+scheme such as EDF - the different values in the block being used for
+preemption levels (of tasks and POs).
+
+Remember the problem we are trying to solve is the control of the
+amount of interaction that occurs between a set of tasks and POs
+at the same priority level.
+
+My reading of the basic dispatching model (ie new D2.1) is that the use
+of a block of priority levels will not work. It is clear from the basic model
+that at any dispatching point the highest (non-empty) ready queue will
+provide the task to execute next. It will not be easy to define the EDF
+dispatching scheme (or any other implementation defined scheme) in
+such a way that a task can be preempted by a newly released task
+with a lower base priority. Using active priority will not work as this
+would put the task's priority too high for its use of POs. One would
+need to introduce a new priority notion (in addition to base and active,
+say original) that seems too large a change.
+
+The alternative seems more straightforward. Define a new pragma that
+works with Ceiling_Locking to allow tasks and POs to define a preemption
+level. This has minimal impact on existing words and notions.
+
+I would like to get some feedback on this before the next meeting so that
+I can work on the details and wording.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, October 20, 2003  9:14 AM
+
+> My reading of the basic dispatching model (ie new D2.1) is that the use
+> of a block of priority levels will not work. It is clear for the basic model
+> that at any dispatching point the highest (non-empty) ready queue will
+> provide the task to execute next. It will not be easy to define the EDF
+> dispatching scheme (or any other implementation defined scheme) in
+> such a way that a task can be preempted by a newly released task
+> with a lower base priority. Using active priority will not work as this
+> would put the task's priority too high for its use of POs. One would
+> need to introduce a new priority notion (in addition to base and active,
+> say original) that seems too large a change.
+
+Suppose the "active priority" of a task in such a group is
+always the lowest priority level of the group *except* when it
+is inside a PO, in which case it takes on the ceiling level of the
+PO.  In addition, the PO would check against the max of the
+active priority and the base priority for ceiling violations.
+I believe this could work.  Whether it is preferable to introducing
+preemption levels is worthy of debate...
+
+****************************************************************
+
+From: Alan Burns
+Sent: Monday, October 20, 2003  9:48 AM
+
+> Suppose the "active priority" of a task in such a group is
+> always the lowest priority level of the group *except* when it
+> is inside a PO, in which case it takes on the ceiling level of the
+> PO.  In addition, the PO would check against the max of the
+> active priority and the base priority for ceiling violations.
+> I believe this could work.  Whether it is preferable to introducing
+> preemption levels is worthy of debate...
+
+No, I'm not sure this works; suppose lowest pri is 3
+T1 and PO have priority of 4
+T2 has priority of 5
+
+T1 executes (at level 3) and calls PO (now executing at level 4)
+T2 is released, but its active priority is 3 so
+does not preempt. But we want T2 to be able to
+preempt T1 as preemption level ensures T2 does not use PO
+(and deadline of T2 may be shorter than T1's, for example).
+
+****************************************************************
+
+From: Stephen Michell
+Sent: Monday, October 20, 2003  10:06 AM
+
+I have significant concerns about the mix-and-match approach that Alan
+was suggesting for this AI (357) and those for round-robin, value-based,
+etc.
+
+Basically, the IRTAW was proposing that any given priority level could
+take on a scheduling paradigm, and that tasks would inherit this
+paradigm when they entered that priority level. So for example
+priority 1 -> round robin
+priority 2 -> FIFO
+priority 3 -> round robin
+priority 4 -> EDF
+priority 5 -> round robin
+
+My concerns:
+Round Robin
+Round-robin is a paradigm that is used for what are considered
+non-realtime tasks. While I don't have much problem with the notion of a
+set of low priority tasks working in a round-robin paradigm while higher
+priority realtime tasks work with FIFO priorities, the notion that the
+base priority of such non-realtime tasks could be changed and that they
+would acquire realtime properties seems strange.
+
+EDF
+Earliest deadline first is a paradigm which really replaces priority as
+a scheduling mechanism. As a task's scheduled completion time approaches
+and the task has not been scheduled, its urgency relative to all other
+tasks increases until eventually it might become the most urgent task to
+run.
+
+EDF analysis depends upon statically knowing the scheduling rate and
+execution time of each task in the system. These are static properties
+for each task which are worked out a priori, and assume that you are
+analysing 100% of all tasks in the system. They don't support the notion
+that there may be other tasks at different priority levels which are
+consuming resources which are not accounted in the analysis.
+
+In particular, the current Ada model is a dynamic priority model. If you
+permit tasks to be added to or removed from this model dynamically, then
+your analysis goes out the window. Similarly, if a lower priority task's
+priority is increased beyond the priority used for EDF, its performance
+characteristics will interfere with those in the EDF regime.
+- One could theoretically have high priority tasks (higher than the EDF
+regime) and account for their resources by subtracting out the total
+consumed by higher priority activities (say 10%), but if the load is
+unbalanced it could wreck the analysis.
+
+
+So, how far should we go?
+IMHO, I don't see a problem with round-robin mixed with FIFO, as long as
+the round-robin tasks are the lowest priorities in the system. We could
+possibly keep dynamic priorities if we create a rule that PROGRAM_ERROR
+or TASKING_ERROR is raised if a task's (or a PO if we adopt that AI)
+base priority is changed from a round-robin level <-> non-round-robin
+level. We could also add a legality rule that the highest round robin
+priority level be below the lowest FIFO priority level.
+
+I also don't see a problem with EDF, as long as it is the only paradigm
+in use - i.e. all tasks participate in EDF. This means that there would
+be no round-robin or FIFO tasks in an EDF system.
+
+Value-based scheduling - I'm not sure. I think that I'll stay quiet on
+that for now.
+
+Anyway, my thoughts for now. I'm interested to hear what others think.
+
+****************************************************************
+
+From: Alan Burns
+Sent: Monday, October 20, 2003  11:17 AM
+
+It was unfortunate that you had to leave the IRTAW before it
+got started as most of the points you raise were dealt with
+at length. I cannot possible replay a day and a half of discussion
+but I'll give some response. Remember the IRTAW was fully
+supportive of the AIs I have produced.
+
+...
+> My concerns:
+> Round Robin
+> Round-robin is a paradigm that is used for what are considered
+> non-realtime tasks. While I don't have much problem with the notion of a
+> set of low priority tasks working in a round-robin paradigm while higher
+> priority realtime tasks work with FIFO priorities, the notion that the
+> base priority of such non-realtime tasks could be changed and that they
+> would acquire realtime properties seems strange.
+
+RR is not just used at low levels, I mentioned this at the
+ARG meeting and it is discussed in my IRTAW paper that you have.
+
+
+> EDF
+> Earliest deadline first is a paradigm which really replaces priority as
+> a scheduling mechanism. As a task's scheduled completion time approaches
+> and the task has not been scheduled, its urgency relative to all other
+> tasks increases until eventually it might become the most urgent task to
+> run.
+
+What is proposed allows EDF to live on its own (if all task priorities
+are given the same EDF level) AND mixed systems to be programmed.
+
+I can assure you that EDF analysis can accommodate some capacity
+being stolen by higher priority tasks. Also one safe implementation
+scheme for EDF is to allow hard (critical) tasks to jump to
+fixed pri levels only when they would otherwise miss their deadlines.
+This fault tolerance approach has been discussed in a number of papers,
+I can give you the references.
+
+Also, analysis is not undertaken on all systems; people want to
+use EDF as it is the most efficient in its use of CPU resources.
+For example, best-effort aperiodic scheduling under a fixed pri scheme.
+
+...
+> In particular, the current Ada model is a dynamic priority model. If you
+> permit tasks to be added to or removed from this model dynamically, then
+> your analysis goes out the window. Similarly, if a lower priority task's
+> priority is increased beyond the priority used for EDF, it's performance
+> characteristics will interfere with those in the EDF regime.
+> - One could theoretically have high priority tasks (higher than the EDF
+> regime) and account for their resources by subtracting out the total
+> consumed by higher priority activities (say 10%), but if the load is
+> unbalanced it could wreck the analysis.
+
+I disagree with the pessimistic tone of the above; of course if
+you have a very dynamic system it is hard (perhaps impossible)
+to analyse - but that does not mean mixed systems cannot be
+analysed.
+
+> So, how far should we go?
+> IMHO, I don't see a problem with round-robin mixed with FIFO, as long as
+> the round-robin tasks are the lowest priorities in the system. We could
+> possibly keep dynamic priorities if we create a rule that PROGRAM_ERROR
+> or TASKING_ERROR is raised if a task's (or a PO if we adopt that AI)
+> base priority is changed form a round-robin level <-> non-round-robin
+> level. We could also add a legality rule that the highest round robin
+> priority level be below the lowest FIFO priority level.
+>
+> I also don't see a problem with EDF, as long as it is the only paradigm
+> in use - i.e. all tasks participate in EDF. This means that there would
+> be no round-robin or FIFO tasks in an EDF system.
+>
+
+I disagree, and the workshop disagreed as well.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, October 20, 2003  1:19 PM
+
+> T1 executes (at level 3) and calls PO (now execting at level 4)
+> T2 is released, but its active priority is 3 so
+> does not preempt. But we want T2 to be able to
+> preempt T1 as preemption level ensures T2 does not use PO
+> (and deadline of T2 may be shorter than T1's, for example).
+
+Yes, I was confused.  How about the following:
+
+Outside a PO, the active priority of a task in the group and
+its location within the corresponding queue shifts according
+to the scheduling policy (e.g. EDF), within a range
+bounded below by the lowest priority of the group and above by the
+specified priority of the task.  When entering a PO, it takes on
+the ceiling priority of the PO as usual.
+
+In other words, a task's active priority can be lowered below its specified
+priority and/or moved later in a ready queue by the priority-specific
+scheduling algorithm, unless it is inside a PO.
+
+I think that would work...
+
+I am pretty sure that this is not an uncommon way for such
+scheduling algorithms to work.  That is, the priority is lowered
+as appropriate to turn the fundamentally priority-based scheduling
+into a more sophisticated algorithm.  However, when holding a lock,
+the priority cannot be lowered.
+
+Another way to think of it is that the specified priority is
+the *ceiling* priority for the task when not inside any PO.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, October 20, 2003  1:57 PM
+
+P.S. A legitimate question is why am I trying so hard
+to eliminate preemption levels.  I really think they
+add substantially to the conceptual complexity of the
+scheduling model.  *If* we can come up with an equivalence
+that requires only a single notion of "specified" priority for POs
+and tasks, I think it will be a big advantage for users.
+
+I believe that it is significantly more complex to juggle both
+specified priority and specified preemption level, worry (eventually)
+about dynamic setting of both as part of mode changes, figure out how
+to pass both through task and protected object discriminants, etc.
+
+So, that's why I hope you will continue this dialogue about
+developing an equivalence.  Of course internally the scheduler
+doesn't really have to raise and lower the active priority,
+but can rather use whatever sort of data structure it wants
+to implement EDF or whatever priority (range) specific scheduling
+it wants.  But it is awfully nice to keep a single unified priority
+scheme as the basic model at the language description level, and I
+would hate to lose that.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, October 20, 2003  2:00 PM
+
+P.P.S. And of course the *real* reason is that I can't bear
+adding another field to our protected-object descriptor record
+which has to be checked on every lock operation (conceptual
+model be-damned) ;-).
+
+****************************************************************
+
+From: Alan Burns
+Sent: Tuesday, October 21, 2003  5:08 AM
+
+I am happy to keep trying to get this definition right.
+I share your view that the simpler the model we can find the
+better. If we can do without a new concept then great.
+
+>
+> Yes, I was confused.  How about the following:
+>
+> Outside a PO, the active priority of a task in the group and
+> its location within the corresponding queue shifts according
+> to the scheduling policy (e.g. EDF), within a range
+> bounded below by the lowest priority of the group and above by the
+> specified priority of the task.  When entering a PO, it takes on
+> the ceiling priority of the PO as usual.
+>
+> In other words, a task's active priority can be lowered below its
+> specified
+> priority and/or moved later in a ready queue by the priority-specific
+> scheduling algorithm, unless it is inside a PO.
+>
+
+I think this still has the same problem (if I understand your
+model correctly). But I have an alternative (later). With
+your model, again look at a set of tasks with base value 3
+T1 base priority 4
+PO has base priority 4 (as it is called by T1)
+T2 has base priority 6 (indicating it does not call PO)
+T3 has base priority 5
+
+Let's say the ready queue (at base level 3) is ordered by
+EDF.
+
+If T2 is released while no POs are locked it takes its place
+in the ready queue for level 3 and may run immediately if
+it has the shortest deadline. It will run with priority 3.
+
+However if T1 has locked PO when T2 is released then T2 needs to
+run with priority 6 (but only if its deadline is less than T1's).
+Now T3 could be released and it cannot preempt even if its
+priority is less than T2's.
+
+There are other examples that show problems - they all come
+about because the tasks run with the ceiling value. Preemption
+levels do not have this problem as they do not control dispatching.
+They only give rules for when a task can execute: eg
+T3 can preempt T2 if deadline is less and preemption level of
+T2 is higher than that of any locked PO (ie 4).
+
+Alternative
+Take a band of priorities.
+1) run all tasks at top pri (ie active priority always top)
+2) round up all ceilings to top (as allowed by definition of
+ceilings) - hence no program_errors
+3) use base pri's of tasks and defined ceilings of POs to
+enforce/define dispatching rules
+
+So for EDF it would be a dispatching point if
+head of ready queue have shorter deadline than current running
+task AND base pri of head was greater than that of any locked
+PO's (all within the range of the policy).
+
+How does that sound?
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Tuesday, October 21, 2003  7:52 AM
+
+> Lets say the ready queue (as base level 3 is ordered by
+> EDF)
+
+I presume that the EDF scheduler would have control over
+all the ready queues in the range it manages (say 3-6).
+It could raise and lower the active priority of the tasks
+under its control as it sees fit, but with a restriction
+that it cannot raise their active priority above their
+specified priority, except of course that when a task
+acquires a PO lock, its active priority is boosted to that
+of the PO.
+
+> If T2 is released while no POs are locked it take its place
+> in the ready queue for level 3 and may run immediately if
+> it has shortest deadline. It will run with priority 3.
+
+I don't know why you are saying it will run with priority 3.
+It could run with any priority from 3 to 6, as determined by
+the EDF scheduler.  Its priority could change while running
+as well.  At the implementation level, it would seem simplest
+to have a single EDF queue for all the tasks, and generally
+ignore the priority except when deciding whether to start
+a given task.  When a task is at the head of the EDF queue,
+it would be run if its deadline is earlier than the current
+running task and, if the running task is inside a PO,
+its priority is higher than that of the PO.
+
+> However if T1 has locked PO when T2 released then T2 needs to
+> run with priority 6 (but only if its deadline is less than T1s).
+
+As suggested above, if T2's deadline is earlier than T1, then it
+would run because its priority is higher than that of the PO
+T1 has locked.
+
+> Now T3 could be released and it cannot preempt even if its
+> priority is less than T2s.
+>
+> There are other examples that show problems - they all come
+> about because the tasks run with the ceiling value. Preemption
+> levels do not have this problem as they do not control dispatching.
+> They only give rules for when a task can execute: eg
+> T3 can preempt T2 if deadline is less and preemption level of
+> T2 is higher than that of any locked PO (ie 4).
+
+I don't follow the above, because I suspect we are making different
+assumptions.
+
+> Alternative
+> Take a band of priorities.
+> 1) run all tasks at top pri (ie active priority always top)
+> 2) round up all ceilings to top (as allowed by definition of
+> ceilings) - hence no program_errors
+> 3) use base pri's of tasks and defined ceilings of POs to
+> enforce/define dispatching rules
+>
+> So for EDF it would be a dispatching point if
+> head of ready queue have shorter deadline than current running
+> task AND base pri of head was greater than that of any locked
+> PO's (all within the range of the policy).
+
+Yes, we have the same rule in mind.  I presumed the EDF scheduler
+would essentially ignore the priority in its basic deadline
+queue, except to check it at the moment of release to be sure
+it is higher than any locked PO.
+
+> How does that sound?
+
+Very much what I had in mind, but you certainly expressed it better  ;-).
+
+****************************************************************
+
+From: Alan Burns
+Sent: Tuesday, October 21, 2003  9:38 AM
+
+We seem to be converging
+
+My problem with 'your' model is being able to specify
+easily the required behaviour
+
+>> I think this still has the same problem (if I understand your
+>> model correctly). But I have an alternative (later). With
+>> your model, again look at a set of tasks with base value 3
+>> T1 base priority 4
+>> PO has base priority 4 (as it is called by T1)
+>> T2 has base priority 6 (indicating it does not call PO)
+>> T3 has base priority 5
+>>
+>> Lets say the ready queue (as base level 3 is ordered by
+>> EDF)
+>
+> I presume that the EDF scheduler would have control over
+> all the ready queues in the range it manages (say 3-6).
+> It could raise and lower the active priority of the tasks
+> under its control as it sees fit, but with a restriction
+> that it cannot raise their active priority above their
+> specified priority, except of course that when a task
+> acquires a PO lock, its active priority is boosted to that
+> of the PO.
+>
+>> If T2 is released while no POs are locked it take its place
+>> in the ready queue for level 3 and may run immediately if
+>> it has shortest deadline. It will run with priority 3.
+>
+> I don't know why you are saying it will run with priority 3.
+> It could run with any priority from 3 to 6, as determined by
+> the EDF scheduler.  Its priority could change while running
+> as well.  At the implementation level, it would seem simplest
+> to have a single EDF queue for all the tasks, and generally
+> ignore the priority except when deciding whether to start
+> a given task.  When a task is at the head of the EDF queue,
+> it would be run if its deadline is earlier than the current
+> running task and, if the running task is inside a PO,
+> its priority is higher than that of the PO.
+
+Agreed, the required behaviour could be achieved, but it is difficult to
+define when some fixed points (the PO priorities) are given
+and have to be followed, for example with the following:
+
+>> However if T1 has locked PO when T2 released then T2 needs to
+>> run with priority 6 (but only if its deadline is less than T1s).
+>
+> As suggested above, if T2's deadline is earlier than T1, then it
+> would run because its priority is higher than that of the PO
+> T1 has locked.
+>
+>> Now T3 could be released and it cannot preempt even if its
+>> priority is less than T2s.
+>
+> I don't follow the above, because I suspect we are making different
+> assumptions.
+
+T2 should preempt T1 (which has priority 4); its base priority
+is 6 - does it get 5 or 6 or 7 ...? (I agree at one level it does not matter)
+Then T3 should preempt T2 - again a higher priority must be chosen.
+In the end a range of priorities above the highest ceiling must
+be reserved for the EDF range (perhaps doubling the range
+of priorities needed).
+
+I agree the model does not dictate implementation, but the above
+seems hard to define. Each task must be on some ready queue when
+it runs - the rules must say which.
+
+>> Alternative
+>> Take a band of priorities.
+>> 1) run all tasks at top pri (ie active priority always top)
+>> 2) round up all ceilings to top (as allowed by definition of
+>> ceilings) - hence no program_errors
+>> 3) use base pri's of tasks and defined ceilings of POs to
+>> enforce/define dispatching rules
+>>
+>> So for EDF it would be a dispatching point if
+>> head of ready queue have shorter deadline than current running
+>> task AND base pri of head was greater than that of any locked
+>> PO's (all within the range of the policy).
+>
+> Yes, we have the same rule in mind.  I presumed the EDF scheduler
+> would essentially ignore the priority in its basic deadline
+> queue, except to check it at the moment of release to be sure
+> it is higher than any locked PO.
+>
+>> How does that sound?
+>
+> Very much what I had in mind, but you certainly expressed it better  ;-).
+
+Perhaps just my way of thinking, but this seems easier to
+express in ARG (LRM) speak.
+
+have you a strong view either way?
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Tuesday, October 21, 2003 11:26 AM
+
+> >> Alternative
+> >> Take a band of priorities.
+> >> 1) run all tasks at top pri (ie active priority always top)
+> >> 2) round up all ceilings to top (as allowed by definition of
+> >> ceilings) - hence no program_errors
+> >> 3) use base pri's of tasks and defined ceilings of POs to
+> >> enforce/define dispatching rules
+> >>
+> >> So for EDF it would be a dispatching point if
+> >> head of ready queue have shorter deadline than current running
+> >> task AND base pri of head was greater than that of any locked
+> >> PO's (all within the range of the policy).
+
+Wouldn't we want to allow a dispatching point if *any* task
+on the ready queue satisfied these two requirements, even if
+the "head" task didn't?  I presume you would just skip over
+the head task(s) with priorities too low.
+
+Another way to imagine it is that there are still queues at each
+priority level (ordered by scheduler determined "urgency", e.g. deadline),
+but a task is only "ready" if it has a greater urgency than all tasks on
+lower priority ready queues, but ignoring queues at priorities
+at the level of a locked PO or below.
+
+In other words, when a task's "urgency" drops below a ready task of a lower,
+eligible-to-run priority, it becomes non-ready.
+
+> > Yes, we have the same rule in mind.  I presumed the EDF scheduler
+> > would essentially ignore the priority in its basic deadline
+> > queue, except to check it at the moment of release to be sure
+> > it is higher than any locked PO.
+> >
+> >> How does that sound?
+> >
+> > Very much what I had in mind, but you certainly expressed it better  ;-).
+> >
+>
+> Perhaps just my way of thinking, but this seems easier to
+> express in ARG (LRM) speak.
+>
+> have you a strong view either way?
+
+I am somewhat worried about rounding up all the ceilings, and also
+it seems that if you want to skip over the "head" of the queue
+(which I claim you will want to do), then you are violating the basic
+scheduling model.  Or perhaps that is solved by stating that the queue
+at the top priority is primarily ordered by deadline, but tasks whose
+specified priority blocks them due to a locked PO are given an
+artificially late deadline, or are simply treated as non-ready.
+
+But at that point, perhaps the model I suggested above is just
+as good, and doesn't require rounding up the PO ceilings.
+
+I also presume we want to describe this model in a way that is not
+EDF-specific (hence my introduction of the notion of
+"scheduler-determined urgency").  Or is this model just for the EDF case?
+
+****************************************************************
+
+From: Alan Burns
+Sent: Thursday, October 23, 2003  5:08 AM
+
+> I am somewhat worried about rounding up all the ceilings, and also
+> it seems that if you want to skip over the "head" of the queue
+> (which I claim you will want to do), then you are violating the basic
+> scheduling model.  Or perhaps that is solved by stating that the queue
+> at the top priority is primarily ordered by deadline, but tasks whose
+> specified priority blocks them due to a locked PO are given an
+> artificially late deadline, or are simply treated as non-ready.
+
+I agree that skipping over the head of a ready queue (even if
+ordered by deadline) would be most unfortunate. I'll reread the
+papers and see if you are right (certainly you seem to be).
+
+> But at that point, perhaps the model I suggested above is just
+> as good, and doesn't require rounding up the PO ceilings.
+
+But my point about having to reserve lots of priority levels
+above top ceiling is still a concern. If tasks are to execute
+with priorities other than base or ceiling of locked objects
+then perhaps it is best to just say only one task (in the
+group) is ready to execute and it is the one with shortest
+deadline and sufficiently high preemption level.
+
+> I also presume we want to describe this model in a way that is not
+> EDF-specific (hence my introduction of the notion of
+> "scheduler-determined urgency").  Or is this model just for the
+> EDF case?
+
+Both! I want to define an EDF scheme, but in a way that it
+would be easy to use another parameter as the definition
+of urgency.
+
+I'll get back to you next week
+
+****************************************************************
+
+From: Alan Burns
+Sent: Tuesday, October 28, 2003  9:33 AM
+
+Working with your model I think the following works.
+It may seem a bit complicated but I think this is
+needed to be able to ensure that only the head of
+any ready queue needs to be looked at. It also
+uses PO ceilings in the usual way (tasks in a PO
+run at ceiling level), and it does not need priorities
+above those used by the tasks and POs to be reserved.
+
+Assume range of priorities for EDF (or any other
+scheme using this approach) is B..T; also
+restrict tasks and PO priorities to be in range
+B+1 .. T. See why in a moment.
+
+Let S denote the priority of the highest priority currently
+locked PO (S is dynamic). If no PO is locked let S equal B.
+
+Rule 1
+All ready queues (in B..T range) are ordered by deadline.
+
+Rule 2
+A task, T, can only become ready if base pri of T > S.
+
+Rule 3
+When a task becomes ready it is added to the ready queue
+for pri level S and this is a dispatching point. (So if new task
+has shorter deadline than current running task it will
+preempt it.)
+
+Rule 4
+When S drops from level P1 to P2, all tasks on the ready
+queue for P1 are transferred to P2.
+
+--
+This last rule may sound strange, and would not necessarily be the way
+to implement this, but is needed in the model (I think).
+
+Note the only tasks that would move are those that were released
+while S had value P1, had deadline greater then the task that locked
+the object that made S equal P1, and had base pri > S. If S drops
+to B (ie no objects locked) all tasks will be in the B ready queue
+in deadline order.
+
+To see why the rule is required, one needs a quite
+complicated scenario involving two or more POs. For example
+
+T1 - base priority 3
+T2 - base priority 4
+T3 - base priority 5
+T4 - base priority 6
+
+PO1 used by T1 and T2 - ceiling pri 4
+PO2 used by T4 - ceiling 6
+
+S is 1 initially
+T1 released (3>1), calls PO1 (S now equal 4, active pri of T1
+  is now 4)
+T4 released (6>4) and is added to ready queue for priority
+  level 4, assume it has shorter deadline, so it preempts T1,
+  and executes with active pri 4; it now calls PO2 (it executes
+  at active pri 6 and S now 6)
+T3 arrives but cannot be made ready even though it has even
+  shorter deadline (5 not > 6)
+T4 returns from PO2 (S now equal 4). T4 is added to ready queue
+  for 4, as is T3 that can now be made ready. The ready q at level
+  4 has the right structure (T3 head then T4 then T1) and allows T3
+  to execute first (shortest deadline) then T4, then T1
+When T1 exits PO1 (S falls to 1) any tasks that were made ready
+  (because they had high enough base pri) but have a longer deadline
+  than T1, will move from ready q at 4 to ready q at 1
+--
+
+It is easy to see that a very simple scheme, where it is acceptable
+for all PO calls to be, in effect, non-preemptive, can easily
+be programmed by giving 2 priority levels to EDF (say 7 and
+8): set all task priorities to 8 (hence all ceilings are 8);
+then the ready queue at 7 will be ordered by deadline. All
+tasks can be made ready immediately unless a task calls a
+PO, in which case it will execute at pri 8 (S will be 8)
+and no preemption will occur.
+
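+A configuration sketch of this special case, using the pragmas from the
+proposal above (the levels 7 and 8 are just those of this example, and
+the per-unit pragmas are shown as comments):
+
+pragma Task_Dispatching_Policy (Priority_Specific);
+pragma Locking_Policy (Ceiling_Locking);
+pragma Priority_Policy (EDF_Within_Priority, 7, 8);
+
+--  In every task specification:      pragma Priority (8);
+--  In every protected specification: pragma Priority (8);  -- ceilings at 8
+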
+If you see nothing immediately wrong with this I'll write a longer
+note to the IRTAW group and get their assessment.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Tuesday, October 28, 2003 11:04 AM
+
+You didn't comment specifically on my last proposal.
+It seems a little simpler than this new model:
+
+stt> Another way to imagine it is that there are still queues at each
+stt> priority level (ordered by scheduler determined "urgency",
+stt> e.g. deadline),
+stt> but a task is only "ready" if it has a greater urgency than
+stt> all tasks on
+stt> lower priority ready queues, but ignoring queues at priorities
+stt> at the level of a locked PO or below.
+stt>
+stt> In other words, when a task's "urgency" drops below a
+stt> ready task of a lower,
+stt> eligible-to-run priority, it becomes non-ready.
+
+I'll try to work through your example with this model
+and see if it accomplishes the same thing (see below).
+I'll call mine the "model-TT" in what follows ;-).
+
+> Assume range of priorities for EDF (or any other
+> scheme using this approach) is B..T; also
+> restrict tasks and PO priorities to be in range
+> B+1 .. T. See why in a moment.
+
+I don't think model-TT requires such a restriction.
+
+> Let S denote the priority of the highest priority currently
+> locked PO (S is dynamic). If no PO is locked let S equal B.
+
+In model-TT, I'll let S be B-1 if no PO is locked in range B..T.
+
+>
+> Rule 1
+> All ready queues (in B..T range) are ordered by deadline.
+
+Agreed.
+
+>
+> Rule 2
+> A task, T, can only become ready if base pri of T > S.
+
+The model-TT rule would be: a task T can become ready only if
+T'prio <= S or its "urgency" is greater than the head of the
+queues at levels S+1..T'Prio-1.
+
+>
+> Rule 3
+> When a task becomes ready it is added to the ready queue
+> for pri level S and this is a dispatching point. (So if new task
+> has shorter deadline than current running task it will
+> preempt it.)
+
+When a task becomes ready (that is, it is released and its
+urgency is greater than tasks at levels S+1..T'Prio-1)
+it is added to the ready queue for its base priority level.
+This is a dispatching point if
+it is more "urgent" than the running task and its prio is > S.
+
+> Rule 4
+> When S drops from level P1 to P2, all tasks on the ready
+> queue for P1 are transferred to P2.
+
+When S drops from level P1 to P2, rules 2 and 3 are rechecked:
+if a queued task whose prio is > S is now the most urgent, it preempts.
+Any tasks that are less urgent than the heads of queues in
+S+1..T'Prio-1 are made non-ready.
+
+>
+> --
+> This last rule may sound strange, and would not necessarily be the way
+> to implement this, but is needed in the model (I think).
+>
+> Note the only tasks that would move are those that were released
+> while S had value P1, had deadline greater then the task that locked
+> the object that made S equal P1, and had base pri > S. If S drops
+> to B (ie no objects locked) all tasks will be in the B ready queue
+> in deadline order.
+>
+> To see why the rule is required, one needs a quite
+> complicated scenario involving two or more POs. For example
+>
+> T1 - base priority 3
+> T2 - base priority 4
+> T3 - base priority 5
+> T4 - base priority 6
+>
+> PO1 used by T1 and T2 - ceiling pri 4
+> PO2 used by T4 - ceiling 6
+>
+> S is 1 initially
+> T1 released (3>1), calls PO1 (S now equal 4, active pri of T1
+>   is now 4)
+> T4 released (6>4) and is added to ready queue for priority
+>   level 4, assume it has shorter deadline, so it preempts T1,
+>   and executes with active pri 4; it now calls PO2 (it executes
+>   at active pri 6 and S now 6)
+> T3 arrives but cannot be made ready even though it has even
+>   shorter deadline (5 not > 6)
+> T4 returns from PO2 (S now equal 4). T4 is added to ready queue
+>   for 4, as is T3 that can now be made ready. The ready q at level
+>   4 has the right structure (T3 head then T4 then T1) and allows T3
+>   to execute first (shortest deadline) then T4, then T1
+> When T1 exists PO1 (S falls to 1) any tasks that were made ready
+>   (because they had high enough base pri) but longer deadline
+>   that T1, will move from ready q at 4 to ready q at 1
+
+Model-TT equivalent scenario:
+
+S is 1 initially
+T1 released (3>1), calls PO1 (S now equal 4, active pri of T1
+   is now 4)
+T4 released (6>4) and is added to ready queue for priority
+   level 6 (nothing at prio 5 to worry about),
+   assume it has shorter deadline, so it preempts T1,
+   and executes with active pri 6; it now calls PO2 (it continues
+   to execute at active pri 6 and S now 6).
+   T1 is placed on head of queue at level 4.
+T3 arrives and is added to prio 5 queue (ordered by deadline),
+   since all tasks with prio <= S are allowed on their queues,
+   but are ignored because of the locked PO.
+T4 returns from PO2 (S now equal 4). T3 preempts T4 since
+   it is now the most urgent task at prio > S.
+   (T1 is still in a preempted state).
+   T4 is added to ready queue for level 6 presuming there is
+   no other task at level 5, or it is more urgent than the head of
+   the level 5 queue.
+   When T3 completes, T4 resumes.  When T4 completes, T1 resumes.
+When T1 exits PO1 (S falls to 1) rule 2 is rechecked, which may
+   cause some of the tasks on queues 2..6 to be made non-ready
+   if they are less urgent than lower priority tasks (which are now
+   eligible to run since S is 1).
+
+
+> --
+>
+> Easy to see that a very simple scheme where it is acceptable
+> for all PO calls to be, in effect, non-preemptive can easily
+> be programmed by giving 2 priority levels for EDF (say 7 and
+> 8), set all tasks priorities to 8 (hence all ceiling are 8)
+> then ready queue at 7 will be ordered by deadline. All
+> tasks can be made ready immediately unless a task calls a
+> PO; in which case it will execute at pri 8 (S will be 8)
+> and no preemption will occur.
+>
+> If you see nothing immediately wrong with this I'll write a longer
+> note to the IRTAW group and get their assessment.
+
+Could you comment on model-TT?  It seems that by
+making tasks of high priority but low urgency non-ready,
+and by making the readying of a task a dispatching
+point only if it is more urgent than the running task,
+it satisfies the requirements of the priority model,
+while primarily following the "urgency" (e.g. deadline)
+ordering.
+
+I don't think either of the models corresponds to the
+best way to implement things, but my sense is that model-TT
+might be simpler to describe, and has the virtue that a
+task is queued at its active priority, rather than bouncing
+around to different queues that have nothing to do with
+its priority.  The "trick" is we simply make a task non-ready
+if it is eligible to run due to having a prio > S, but
+its relative priority contradicts its relative urgency.
+
+ From an implementation point of view, I think I would
+probably have each priority level with its own urgency
+queue, and then a queue of levels > S, ordered by the
+urgency of the head of the corresponding queue.  That is,
+I wouldn't actually remove the tasks from their priority
+level's queue if their urgency was too low, but I would just
+ignore the queue as a whole.
+
+****************************************************************
+
+From: Alan Burns
+Sent: Thursday, October 30, 2003  8:48 AM
+
+Let me first make sure I have model-TT correct
+
+stt> Another way to imagine it is that there are still queues at each
+stt> priority level (ordered by scheduler determined "urgency",
+stt> e.g. deadline),
+stt> but a task is only "ready" if it has a greater urgency than
+stt> all tasks on
+stt> lower priority ready queues, but ignoring queues at priorities
+stt> at the level of a locked PO or below.
+stt>
+stt> In other words, when a task's "urgency" drops below a
+stt> ready task of a lower,
+stt> eligible-to-run priority, it becomes non-ready.
+
+Tasks run at their usual base priorities, or with PO
+priorities. We get EDF behaviour by making all 'ready
+tasks', with priority higher than that of the task with
+shortest deadline, non-ready (unless a PO is locked,
+but let's ignore this situation for a moment).
+
+So if we have the range 1..10 for EDF, and a task with pri
+10 was running and then a task with pri 1 was released with
+a shorter deadline, then the running task and all tasks on
+ready queues in the range 2..10 would need to be made non-ready.
+Correct?
+
+Assuming I have this correct I have some concerns (but I agree
+that model-TT gives the correct EDF behaviour).
+
+1) Being 'ready' and on a ready queue are synonymous in the
+dispatching model - hence becoming non-ready would mean being
+removed from ready queues. Hence many tasks for most of the time
+will be in limbo (not in any state currently defined for tasks).
+
+2) I think the control we have over dispatching amounts to
+defining the dispatching points, and some postponement of
+when a task becomes ready (after being blocked, but before
+it executes).
+
+3) Model-TT needs (I think) a task that is running to become
+non-ready (if lower pri task released with shorter deadline).
+This is quite new and opens up many potentially tricky issues.
+I do not think it is a natural way of defining the model
+(but I guess that is a question of taste!).
+
+4) Those who know the preemption model will find it confusing
+that priority is (apparently) being used for dispatching and
+PO locking - rather than just locking.
+
+
+So what do we have.
+
+I think we both agree (at this stage) that when a task executes
+inside a PO it should execute with ceiling priority. Also that
+we should only have queues that need the head task to be looked at.
+
+I think the use of non-ready should be limited to the postponement
+of when a task can become ready.
+
+The model should look straightforward when no POs are locked and
+extend naturally when POs are added to the model.
+
+Let me give a variation of what I last said, to get
+rid of the 'strange rule' at the cost of having more tasks
+in the non-ready state prior to being released. I'll call
+this model-AT (combined alan and tuck):
+
+
+Assume range of priorities for EDF (or any other
+scheme using this approach) is B..H; also
+restrict tasks and PO priorities to be in range
+B+1 .. H. Let T'P be base priority of task T, and
+T'D its current deadline.
+
+Let S denote the priority of the highest priority currently
+locked PO. And let S'D be the deadline of
+the task that has locked this PO. If no PO is locked let S equal B
+and S'D be infinity (Time_Span_Last).
+
+Rule 1
+All ready queues (in B..H range) are ordered by deadline.
+
+Rule 2
+A task, T, can only become ready if T'P > S and T'D < S'D.
+
+Rule 3
+When a task becomes ready it is added to the ready queue
+for pri level S and this is a dispatching point.
+
+
+I think these rules are getting quite simple!
+
+So if no locked POs then all tasks are immediately added to the
+single ready queue at priority B (which is ordered by EDF) when
+they become unblocked. Only when a PO is locked are some tasks
+not immediately made ready.
+
+Once a PO is locked (by task Q say) then S becomes the base until
+it is unlocked. Only tasks that have higher Pri than S and shorter
+deadline than Q can become ready.
+If there are many of these they will be on the ready queue at level
+S (and be in front of Q).
+If any of these lock a higher level PO then this behaviour is
+repeated at the higher level.
+When Q runs again and unlocks the PO then its pri (and S) will drop
+to B, all 'held tasks' will become ready and correct behaviour
+follows.
+
+
+> I don't think either of the models correspond to the
+> best way to implement things, but my sense is that model-TT
+> might be simpler to describe, and has the virtue that a
+> task is queued at its active priority, rather than bouncing
+> around to different queues that have nothing to do with
+> its priority.  The "trick" is we simply make a task non-ready
+> if it is eligible to run due to having a prio > S, but
+> its relative priority contradicts its relative urgency.
+>
+>  From an implementation point of view, I think I would
+> probably have each priority level with its own urgency
+> queue, and then a queue of levels > S, ordered by the
+> urgency of the head of the corresponding queue.  That is,
+> I wouldn't actually remove the tasks from their priority
+> level's queue if their urgency was too low, but I would just
+> ignore the queue as a whole.
+>
+
+Sorry to keep arguing, but I feel model-AT is now quite simple
+and easy to explain. For implementation, all that is needed
+in addition to the usual stuff is a queue of non-ready tasks
+that have become unblocked but are not yet ready. Probably best
+to order this by priority. As level of S drops tasks on this
+held queue with priority higher than new S will be added to
+the usual ready queue (at new S).
+
+I guess back to you for comment!
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, October 30, 2003 11:31 AM
+
+> Let me first make sure I have model-TT correct
+>
+> stt> Another way to imagine it is that there are still queues at each
+> stt> priority level (ordered by scheduler determined "urgency",
+> stt> e.g. deadline),
+> stt> but a task is only "ready" if it has a greater urgency than
+> stt> all tasks on
+> stt> lower priority ready queues, but ignoring queues at priorities
+> stt> at the level of a locked PO or below.
+> stt>
+> stt> In other words, when a task's "urgency" drops below a
+> stt> ready task of a lower,
+> stt> eligible-to-run priority, it becomes non-ready.
+>
+> Tasks run at their usual base priorities, or with PO
+> priorities. We get EDF behaviour by making all 'ready
+> tasks', with priority higher than that of the task with
+> shortest deadline, non-ready (unless a PO is locked,
+> but let ignore this situation for a moment).
+>
+> So if we have the range 1.10 for EDF, and a task with pri
+> 10 was running and then a task with pri 1 was released with
+> a shorted deadline then the running task and all tasks on
+> ready queues in range 2.10 would need to be made non-ready.
+> Correct?
+
+Yes.  (BTW, I think we should use the term "highest urgency"
+or "most urgent" instead of "shortest deadline" to keep it
+relatively general.)
+
+> Assuming I have this correct I have some concerns (but I agree
+> that model-TT gives the correct EDF behaviour).
+>
+> 1) Being 'ready' and on a ready queue are synonymous in the
+> dispatching model - hence becoming non-ready would mean being
+> removed from ready queues. Hence many tasks for most of the time
+> will be in limbo (not in any state currently defined for tasks).
+
+This seems to be true also of the model you suggest below.  I guess
+you are saying this "limbo" state is easier to handle if tasks
+can enter it only immediately after becoming unblocked, but once out
+of limbo and onto a ready queue, they don't reenter limbo without
+first becoming blocked again.  I'm not sure I see the big
+difference in terms of understandability.  I think we all
+agree that we are trying to construct a model that uses part
+but not all of the priority model.  We are not worried about
+actual implementation mechanism.
+
+> 2) I think the control we have over dispatching amounts to
+> defining the dispatching points, and some postponement of
+> when a task becomes ready (after being blocked, but before
+> it executes).
+
+Where does the "postponement" license come from?  Perhaps
+if I understood that I would agree that restricting "limbo"
+to immediately follow blocking would be preferable.
+
+> 3) Model-TT needs (I think) a task that is running to become
+> non-ready (if lower pri task released with shorter deadline).
+> This is quite new and opens up many potentially tricky issues.
+> I do not think it is a natural way of defining the model
+> (but I guess that is a question of taste!).
+
+Perhaps a better way to talk about it is that a task
+is "held" (a la Asynchronous Task Control) when its urgency
+is lower than some lower priority ready task.
+
+I see you are using the term "held" below as well;
+it is probably a preferable term to "limbo" ;-).
+
+> 4) Those who know the preemption model will find it confusing
+> that priority is (apparently) being used for dispatching and
+> PO locking - rather than just locking.
+
+I guess it seemed odd to me that we still have queues at various
+priorities, but the task's base priority had nothing to do with
+what queue it was on.  I guess we could be saying that the
+algorithm for the "active" priority is really what matters,
+and we are both saying that the active priority is influenced by
+PO locking as usual, but also by urgency.
+
+> So what do we have.
+>
+> I think we both agree (at this stage) that when a task executes
+> inside a PO it should execute with ceiling priority. Also that
+> we should only have queues that need the head task to be looked at.
+
+Yes, that sounds right.
+
+> I think the use of non-ready should be limited to the postponement
+> of when a task can become ready.
+
+It would help me to understand why.
+
+> The model should look straightforward when no POs are locked and
+> extend naturally when POs are added to the model.
+
+Yes, that sounds good too.
+
+> Let me give a variation of what I last said, to get
+> rid of the 'strange rule' at the cost of having more tasks
+> in the non-ready state prior to being released. I'll call
+> this model-AT (combined alan and tuck):
+>
+>
+> Assume range of priorities for EDF (or any other
+> scheme using this approach) is B..H; also
+> restrict tasks and PO priorities to be in range
+> B+1 .. H. Let T'P be base priority of task T, and
+> T'D its current deadline.
+
+I don't love the rule that you can't have tasks at level B.
+Is that really necessary?  I suppose disallowing POs at
+level B seems more reasonable, though I am not convinced,
+since it seems we could set "S" to B in the rule below
+if no POs are locked, even if there are POs at level B.
+
+> Let S denote the priority of the highest priority currently
+> locked PO. And let S'D be the deadline of
+> the task that has locked this PO. If no PO is locked let S equal B
+> and S'D be infinity (Time_Last).
+>
+> Rule 1
+> All ready queues (in B..H range) are ordered by deadline.
+>
+> Rule 2
+> A task, T, can only become ready if T'P > S and T'D < S'D.
+>
+> Rule 3
+> When a task becomes ready it is added to the ready queue
+> for pri level S and this is a dispatching point.
+>
+>
+> I think these rules are getting quite simple!
+
+Perhaps suspiciously simple.  Since you are saying that once
+a task is on a ready queue, it stays on that queue until it
+runs, I am surprised it can work.  How dynamic are the deadlines?
+It would seem to be nice to describe the solution in terms of
+the properties of the queues at any given time as a function
+of priorities and urgencies, rather than the properties at the
+moment a task is added to a queue.
+
+If I follow your model correctly, then it has the following
+property:  Presuming at some point there are tasks queued at various
+priority levels, then there is a PO locked at each level where
+there are tasks queued (except the lowest), and any task queued at a
+level L (other than the one holding the lock) has base priority
+greater than L, and urgency higher than the task holding the lock
+(as well as any task queued at lower levels).
+
+Does that sound right?  If so, let's try to simplify a bit,
+and eliminate the "limbo" state.
+
+First, there seems no harm in putting higher-urgency tasks in a queue at
+the level one above that of the locked PO.  This will make it a bit more
+likely that the base prio matches the queue priority, and might
+eliminate the need for the restriction against using level "B" itself.
+
+Second, rather than putting lower-urgency or lower-priority tasks into
+limbo, can't we put them where they would have gone if they had arrived
+before the PO was locked, but make sure that tasks with a PO lock
+always stay on the front of their queue when preempted?
+That is, look for the highest prio locked PO that the new
+task would have preempted, and add it to the queue just above that.
+If there is no such PO, then add it to the lowest level queue.
+
+In this model, the only queues that have tasks on them are those
+at the level of a locked PO, which contains the task holding
+the lock while preempted, and those immediately above that, which
+holds tasks of higher prio than this locked PO (but not higher than
+some higher-level locked PO), and higher urgency than
+the task holding the PO.  While preempted, a task holding a lock
+is kept at the head of its queue (as though it has very high urgency).
+
+If there are no POs locked, then all ready tasks are in the lowest level
+queue.  When there are POs locked, the tasks are split out into
+multiple queues, at levels one above that of each locked PO, plus any
+preempted tasks with locks at the PO's level.  Tasks go into the
+highest level where their urgency and priority exceeds that of
+the task holding the lock at the immediately lower level.
+If either their urgency or priority is not greater, they go in
+a lower level.
+
+> So if no locked POs then all tasks are immediately added to the
+> single ready queue at priority B (which is ordered by EDF) when
+> they become unblocked. Only when a PO is locked are some tasks
+> not immediately made ready.
+>
+> Once a PO is locked (by task Q say) then S becomes the base until
+> it is unlocked. Only tasks that have higher Pri than S and shorter
+> deadline than Q can become ready.
+> If there are many of these they will be on the ready queue at level
+> S (and be in front of Q).
+> If any of these lock a higher level PO then this behaviour is
+> repeated at the higher level.
+> When Q runs again and unlocks the PO then its pri (and S) will drop
+> to B, all 'held tasks' will become ready and correct behaviour
+> follows.
+> ...
+> Sorry to keep arguing, but I feel model-AT is now quite simple
+> and easy to explain. For implementation, all that is needed
+> in addition to the usual stuff is a queue of non-ready tasks
+> that have become unblocked but are not yet ready. Probably best
+> to order this by priority. As level of S drops tasks on this
+> held queue with priority higher than new S will be added to
+> the usual ready queue (at new S).
+>
+> I guess back to you for comment!
+
+I don't think we need the "limbo" queue, as explained above.
+Also, if we put tasks in the level one above that of the
+locked PO, I think we can eliminate the funny restriction to
+levels B+1..H, and can use all the levels.  This does require
+that while preempted, a task holding a lock is treated as though
+it is very urgent, so it is kept at the head of the queue at the
+PO's level.
+
+With those two "tweaks" I think we have a model that satisfies
+both your goals and mine.
+
+****************************************************************
+
+From: Alan Burns
+Sent: Sunday, November 2, 2003  5:27 AM
+
+...
+>> So what do we have.
+>>
+>> I think we both agree (at this stage) that when a task executes
+>> inside a PO it should execute with ceiling priority. Also that
+>> we should only have queues that need the head task to be looked at.
+>
+> Yes, that sounds right.
+>
+>> I think the use of non-ready should be limited to the postponement
+>> of when a task can become ready.
+>
+> It would help me to understand why.
+
+The way all ceiling type protocols work is for a task to
+be delayed from really becoming ready; once it starts executing
+all its resources are available. So with the standard ceiling
+locking, a PO delays other tasks from executing (although they
+can be put on the ready queue). With Preemption Level locking
+the algorithm is described in terms of the rules for a task
+becoming ready. Also I feel within Ada it will be easier to
+describe a model (if it needs limbo states) in terms of delayed
+entry to a ready queue rather than disappearing from a ready queue.
+
+But let's hope we can work on a model that does not need limbo
+states!
+
+...
+>> Let me give a variation of what I last said, to get
+>> rid of the 'strange rule' at the cost of having more tasks
+>> in the non-ready state prior to being released. I'll call
+>> this model-AT (combined alan and tuck):
+>>
+>> Assume range of priorities for EDF (or any other
+>> scheme using this approach) is B..H; also
+>> restrict tasks and PO priorities to be in range
+>> B+1 .. H. Let T'P be base priority of task T, and
+>> T'D its current deadline.
+>
+> I don't love the rule that you can't have tasks at level B.
+> Is that really necessary?  I suppose disallowing POs at
+> level B seems more reasonable, though I am not convinced,
+> since it seems we could set "S" to B in the rule below
+> if no POs are locked, even if there are POs at level B.
+
+Yes, only disallowing POs at B may work.
+
+>> Let S denote the priority of the highest priority currently
+>> locked PO. And let S'D be the deadline of
+>> the task that has locked this PO. If no PO is locked let S equal B
+>> and S'D be infinity (Time_Last).
+>>
+>> Rule 1
+>> All ready queues (in B..H range) are ordered by deadline.
+>>
+>> Rule 2
+>> A task, T, can only become ready if T'P > S and T'D < S'D.
+>>
+>> Rule 3
+>> When a task becomes ready it is added to the ready queue
+>> for pri level S and this is a dispatching point.
+>>
+>>
+>> I think these rules are getting quite simple!
+>
+> Perhaps suspiciously simple.  Since you are saying that once
+> a task is on a ready queue, it stays on that queue until it
+> runs, I am surprised it can work.  How dynamic are the deadlines?
+> It would seem to be nice to describe the solution in terms of
+> the properties of the queues at any given time as a function
+> of priorities and urgencies, rather than the properties at the
+> moment a task is added to a queue.
+
+Remember, for whatever model we come up with, whenever a task
+changes its absolute deadline (or urgency level) this is a dispatching
+point and the task is taken out of its ready queue and re-inserted
+using the new deadline. So an EDF periodic task will, at the end of
+its period, calculate its new absolute deadline (for the next
+period), change its deadline attribute (this will cause dispatching),
+and then delay until the start of the next period.
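+
+A minimal sketch of such a periodic EDF task, assuming the kind of
+Set_Deadline operation discussed elsewhere in this thread is visible
+(Do_Work, Period and Relative_Deadline are illustrative only):
+
+   task body Periodic_EDF_Task is
+      use Ada.Real_Time;
+      Period            : constant Time_Span := Milliseconds (50);
+      Relative_Deadline : constant Time_Span := Milliseconds (50);
+      Next_Release      : Time := Clock;
+   begin
+      loop
+         Do_Work;                          -- the work for this period
+         Next_Release := Next_Release + Period;
+         --  Changing the deadline attribute is a dispatching point.
+         Set_Deadline (Next_Release + Relative_Deadline);
+         delay until Next_Release;         -- wait for the next release
+      end loop;
+   end Periodic_EDF_Task;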
+
+Once the dispatching model is sorted out we need to return to the
+representation of deadlines/urgency (as discussed in the AI).
+
+> If I follow your model correctly, then it has the following
+> property:  Presuming at some point there are tasks queued at various
+> priority levels, then there is a PO locked at each level where
+> there are tasks queued (except the lowest), and any task queued at a
+> level L (other than the one holding the lock) has base priority
+> greater than L, and urgency higher than the task holding the lock
+> (as well as any task queued at lower levels).
+>
+> Does that sound right?  If so, let's try to simplify a bit,
+> and eliminate the "limbo" state.
+
+Yes, that's my thinking. It follows that at any level L the tail
+of the queue is the task that has locked the PO.
+
+> First, there seems no harm in putting higher-urgency tasks in a queue at
+> the level one above that of the locked PO.  This will make it a bit more
+> likely that the base prio matches the queue priority, and might
+> eliminate the need for the restriction against using level "B" itself.
+
+Does this not mean we have to reserve a priority above top PO.
+In other words H level cannot be used? Also is one level above
+the PO a level that can be used by other tasks and POs or an
+artificial level 'between' application levels? I feel a rule of
+no POs at B (or use existing permission to round up any PO
+at B to B+1) is more straightforward.
+
+> Second, rather than putting lower-urgency or lower-priority tasks into
+> limbo, can't we put them where they would have gone if they had arrived
+> before the PO was locked, but make sure that tasks with a PO lock
+> always stay on the front of their queue when preempted?
+> That is, look for the highest prio locked PO that the new
+> task would have preempted, and add it to the queue just above that.
+> If there is no such PO, then add it to the lowest level queue.
+
+I agree that the removal of the limbo state would be good. But
+I cannot quite follow through your model here (sorry). If there are
+a number of POs locked (at increasing priority levels) then would
+you not have to go down all the POs to find the one that the
+new task is allowed to preempt and then put it at the head of
+that queue level (or +1 using your alternative)?
+
+I know we are talking about a model not an implementation
+scheme but this seems a bit excessive - but is perhaps better
+than introducing a new idea such as limbo (or held).
+
+With the protocol all locked POs have a strict order. If PO1
+and PO2 are both locked then PO1 must have higher priority
+and greater urgency than PO2 (or the other way round). So
+it may be possible to define where a newly released task should
+be placed.
+
+Is this what you were thinking? If so I'll do some more work to
+be sure it 'works'.
+
+> In this model, the only queues that have tasks on them are those
+> at the level of a locked PO, which contains the task holding
+> the lock while preempted, and those immediately above that, which
+> holds tasks of higher prio than this locked PO (but not higher than
+> some higher-level locked PO), and higher urgency than
+> the task holding the PO.  While preempted, a task holding a lock
+> is kept at the head of its queue (as though it has very high urgency).
+>
+> If there are no POs locked, then all ready tasks are in the lowest level
+> queue.  When there are POs locked, the tasks are split out into
+> multiple queues, at levels one above that of each locked PO, plus any
+> preempted tasks with locks at the PO's level.  Tasks go into the
+> highest level where their urgency and priority exceeds that of
+> the task holding the lock at the immediately lower level.
+> If either their urgency or priority is not greater, they go in
+> a lower level.
+>
+>> So if no locked POs then all tasks are immediately added to the
+>> single ready queue at priority B (which is ordered by EDF) when
+>> they become unblocked. Only when a PO is locked are some tasks
+>> not immediately made ready.
+>>
+>> Once a PO is locked (by task Q say) then S becomes the base until
+>> it is unlocked. Only tasks that have higher Pri than S and shorter
+>> deadline than Q can become ready.
+>> If there are many of these they will be on the ready queue at level
+>> S (and be in front of Q).
+>> If any of these lock a higher level PO then this behaviour is
+>> repeated at the higher level.
+>> When Q runs again and unlocks the PO then its pri (and S) will drop
+>> to B, all 'held tasks' will become ready and correct behaviour
+>> follows.
+>> ...
+>> Sorry to keep arguing, but I feel model-AT is now quite simple
+>> and easy to explain. For implementation, all that is needed
+>> in addition to the usual stuff is a queue of non-ready tasks
+>> that have become unblocked but are not yet ready. Probably best
+>> to order this by priority. As level of S drops tasks on this
+>> held queue with priority higher than new S will be added to
+>> the usual ready queue (at new S).
+>>
+>> I guess back to you for comment!
+>
+> I don't think we need the "limbo" queue, as explained above.
+> Also, if we put tasks in the level one above that of the
+> locked PO, I think we can eliminate the funny restriction to
+> levels B+1..H, and can use all the levels.  This does require
+> that while preempted, a task holding a lock is treated as though
+> it is very urgent, so it is kept at the head of the queue at the
+> PO's level.
+
+Remember this is the usual rule for preemption (ie stay at head).
+
+> With those two "tweaks" I think we have a model that satisfies
+> both your goals and mine.
+
+I think we may be closer, if I understand your latest thoughts correctly.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Sunday, November  2, 2003 10:39 AM
+
+> ...
+> > First, there seems no harm in putting higher-urgency tasks in a queue at
+> > the level one above that of the locked PO.  This will make it a bit more
+> > likely that the base prio matches the queue priority, and might
+> > eliminate the need for the restriction against using level "B" itself.
+>
+> Does this not mean we have to reserve a priority above top PO.
+
+No.  Nothing can ever preempt a task with a locked PO at level H,
+so there is no need to provide a queue above that level.
+
+> In other words H level cannot be used?
+
+H can be used.  H+1 would never be needed since there
+are no tasks at priority > H.
+
+> ... Also is one level above
+> the PO a level that can be used by other tasks and POs or an
+> artificial level 'between' application levels?
+
+No, as I tried to explain below, so long as a task with a locked
+PO is kept at the head of its queue, it is OK to have other tasks
+on that queue as well.
+
+> ... I feel a rule of
+> no POs at B (or use existing permission to round up any PO
+> at B to B+1) is more straightforward.
+
+I don't think we need either rule.
+
+> > Second, rather than putting lower-urgency or lower-priority tasks into
+> > limbo, can't we put them where they would have gone if they had arrived
+> > before the PO was locked, but make sure that tasks with a PO lock
+> > always stay on the front of their queue when preempted?
+> > That is, look for the highest prio locked PO that the new
+> > task would have preempted, and add it to the queue just above that.
+> > If there is no such PO, then add it to the lowest level queue.
+>
+> I agree that the removal of the limbo state would be good. But
+> I cannot quite follow through your model here (sorry). If there are
+> a number of POs locked (at increasing priority levels) then would
+> you not have to go down all the POs to find the one that the
+> new task is allowed to preempt and then put it at the head of
+> that queue level (or +1 using your alternative)?
+
+I suppose you could defer actually putting tasks on the queue
+if it was more efficient, so long as it was equivalent to putting
+them on the correct queue at the beginning.  I would argue we should not
+worry about the efficiency of finding the right queue.  There are
+so many clever things you can do once you know what the "right"
+answer is, as far as queueing, that this seems like a very
+minor problem.  We use a binary "heap" of priorities in our
+run-time system simply to speed up the process of finding the
+highest-priority queue with at least one task on it.  I could
+see using binary "heaps" in this model as well to reduce the time
+to find the right place to insert tasks of various urgencies/priorities.
+
+> I know we are talking about a model not an implementation
+> scheme but this seems a bit excessive - but is perhaps better
+> than introducing a new idea such as limbo (or held).
+
+My concern is that we should be able to describe the steady
+state situation, rather than rely on tricks involving deferring
+putting tasks on a queue, since if they had come in just
+a millisecond earlier, we might have had no trouble placing them
+on an appropriate queue.
+
+> With the protocol all locked POs have a strict order. If PO1
+> and PO2 are both locked then PO1 must have higher priority
+> and greater urgency than PO2 (or the other way round). So
+> it may be possible to define where a newly released task should
+> be placed.
+
+Yes, I think so, and it shouldn't really matter whether it is
+newly released, or released a bit earlier.  Its priority, its urgency,
+and the priority/urgency of the various tasks with locked POs, should fully
+determine where each ready task should be queued.
+
+To be specific, a ready task should be queued immediately above
+the priority of the highest priority locked PO for which it exceeds
+both the PO's priority and the lock-holding task's urgency.
+The queues are ordered by urgency, except that a lock-holding task
+is queued at the front of its queue while preempted.
+
+> Is this what you were thinking? If so I'll do some more work to
+> be sure it 'works'.
+
+Yes, I suppose "this" is what I was thinking, presuming we
+agree on what "this" is ;-).
+
+> > In this model, the only queues that have tasks on them are those
+> > at the level of a locked PO, which contains the task holding
+> > the lock while preempted, and those immediately above that, which
+> > holds tasks of higher prio than this locked PO (but not higher than
+> > some higher-level locked PO), and higher urgency than
+> > the task holding the PO.  While preempted, a task holding a lock
+> > is kept at the head of its queue (as though it has very high urgency).
+> >
+> > If there are no POs locked, then all ready tasks are in the lowest level
+> > queue.  When there are POs locked, the tasks are split out into
+> > multiple queues, at levels one above that of each locked PO, plus any
+> > preempted tasks with locks at the PO's level.  Tasks go into the
+> > highest level where their urgency and priority exceeds that of
+> > the task holding the lock at the immediately lower level.
+> > If either their urgency or priority is not greater, they go in
+> > a lower level.
+> ...
+> > I don't think we need the "limbo" queue, as explained above.
+> > Also, if we put tasks in the level one above that of the
+> > locked PO, I think we can eliminate the funny restriction to
+> > levels B+1..H, and can use all the levels.  This does require
+> > that while preempted, a task holding a lock is treated as though
+> > it is very urgent, so it is kept at the head of the queue at the
+> > PO's level.
+>
+> Remember this is the usual rule for preemption (ie stay at head).
+
+Yes, that is why it seemed like a natural rule to me.
+
+****************************************************************
+
+From: Alan Burns
+Sent: Sunday, November  9, 2003  8:43 AM
+
+I think we now have a scheme that does not need a limbo state:
+tasks can be put immediately onto the correct ready queue
+when they become unblocked. One needs to identify the highest
+locked PO that has a priority and urgency less than that of
+the released task.
+
+A relatively minor issue remains. To use PO queue, or PO+1 queue
+for the tasks.
+
+>> ...
+>> > First, there seems no harm in putting higher-urgency tasks in a queue at
+>> > the level one above that of the locked PO.  This will make it a bit more
+>> > likely that the base prio matches the queue priority, and might
+>> > eliminate the need for the restriction against using level "B" itself.
+>>
+>> Does this not mean we have to reserve a priority above top PO.
+>
+> No.  Nothing can ever preempt a task with a locked PO at level H,
+> so there is no need to provide a queue above that level.
+
+Agreed
+
+If PO+1 scheme is used then one needs the rule that a task with
+a locked PO must always be at the front of the ready queue for the
+PO's priority. So the queue is not a pure EDF (urgency) queue.
+
+The PO-only scheme seems to need the rule that there be no PO
+priorities at the base level, which I agree seems strange, so I'm happy
+to go along with your model, which I think has the following form.
+Let T'P be the base priority of task T, T'A its active priority
+and T'U its urgency.
+
+Rule 1
+All ready queues in the specified range B..H are ordered by urgency
+apart from when a task is preempted while executing within a PO,
+in this case the task is always at the head of the ready queue.
+
+Rule 2
+When a task T becomes unblocked it is placed on the ready queue
+identified as follows.
+Let S1, S2, ..., Sn be the sequence tasks (ordered by active
+priority, S1 the highest) that are executing within a PO and
+have the property that Si'A < T'P and Si'U < T'U.
+If there are no tasks in this sequence then T is placed on the
+ready queue at priority B.
+If the sequence is not empty then T is placed on the ready queue
+at priority S1'A+1
+
+Rule 3
+When a task is chosen for execution it runs with the active priority
+determined by the ready queue from which the task was taken. If
+preemptied it returns to the ready queue for its active priority.
+
+I guess there will be a better way of saying all this for the ARM.
+
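+A minimal sketch of rule 2 as a queue-selection function (the types
+and the Lock_Holders list are hypothetical, for illustration only;
+"urgency" here is an abstract ordered value where greater means more
+urgent, i.e. an earlier deadline for EDF):
+
+   function Ready_Queue_For (T : Task_Info) return Priority is
+   begin
+      --  Lock_Holders holds the tasks currently executing within a PO,
+      --  ordered by decreasing active priority (S1 first).
+      for I in Lock_Holders'Range loop
+         if Lock_Holders (I).Active_Priority < T.Base_Priority
+           and then Lock_Holders (I).Urgency < T.Urgency
+         then
+            return Lock_Holders (I).Active_Priority + 1;   -- S1'A+1
+         end if;
+      end loop;
+      return B;   -- empty sequence: the lowest level of the band
+   end Ready_Queue_For;
+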
+We will need a statement that for multiprocessor implementations
+other rules may apply. Also need to ensure that the model works
+when task have inheritance priorities due to other reasons
+(rendezvous etc) - but I do not think they give any problems.
+
+A new issue - how to set the urgency of a task. In the original AI
+I had a new attribute of a task (deadline) and a new pragma for setting
+the deadline of a task on creation (like priority). It would be
+good if this could be made more general. Any idea how this could be
+done? Of course for some schemes (EDF for example) the smaller the
+value of parameter the more urgent, for other schemes (value-based
+dispatching) the larger the more urgent. Whatever the scheme we would
+need to have the rule that any change to the urgency attribute would
+need to be a dispatching point.
+
+I think we are making progress!
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Sunday, November  9, 2003 11:20 AM
+
+> I think we now have a scheme that does not need a limbo state:
+> tasks can be put immediately onto the correct ready queue
+> when they become unblocked. One needs to identify the highest
+> locked PO that has a priority and urgency less than that of
+> the released task.
+
+We are on the same wavelength now...
+
+> A relatively minor issue remains. To use PO queue, or PO+1 queue
+> for the tasks.
+
+I like using PO+1 because, as I mentioned, it makes it more
+likely a task will end up on its "true" priority queue.
+In particular, if there are only two levels of tasks, with the
+higher level for those that can preempt locked POs when
+urgent, then those tasks will run at the higher level exactly when
+they are preempting.
+
+> ...
+> If PO+1 scheme is used then one needs the rule that a task with
+> a locked PO must always be at the front of the ready queue for the
+> PO's priority. So the queue is not a pure EDF (urgency) queue.
+
+Right.
+
+> ...
+> Let T'P be the base priority of task T, T'A its active priority
+> and T'U its urgency.
+>
+> Rule 1
+> All ready queues in the specified range B..H are ordered by urgency
+> apart from when a task is preempted while executing within a PO,
+> in this case the task is always at the head of the ready queue.
+>
+> Rule 2
+> When a task T becomes unblocked it is placed on the ready queue
+> identified as follows.
+> Let S1, S2, ..., Sn be the sequence tasks (ordered by active
+                                    ^^ "of"
+> priority, S1 the highest) that are executing within a PO and
+> have the property that Si'A < T'P and Si'U < T'U.
+> If there are no tasks in this sequence then T is placed on the
+> ready queue at priority B.
+> If the sequence is not empty then T is placed on the ready queue
+> at priority S1'A+1
+>
+> Rule 3
+> When a task is chosen for execution it runs with the active priority
+> determined by the ready queue from which the task was taken. If
+> preemptied it returns to the ready queue for its active priority.
+        ^^^^ "ted"
+>
+>
+> I guess there will be a better way of saying all this for the ARM.
+>
+> We will need a statement that for multiprocessor implementations
+> other rules may apply. Also need to ensure that the model works
+> when task have inheritance priorities due to other reasons
+> (rendezvous etc) - but I do not think they give any problems.
+
+I agree.
+
+> A new issue - how to set the urgency of a task. In the original AI
+> I had a new attribute of a task (deadline) and a new pragma for setting
+> the deadline of a task on creation (like priority). It would be
+> good if this could be made more general. Any idea how this could be
+> done? Of course for some schemes (EDF for example) the smaller the
+> value of parameter the more urgent, for other schemes (value-based
+> dispatching) the larger the more urgent. Whatever the scheme we would
+> need to have the rule that any change to the urgency attribute would
+> need to be a dispatching point.
+
+Using pragmas and attributes is generally more work for the
+compiler builder.  Could all this be done at run-time using
+a package and task ids?  Or is having the urgency dynamic
+not common enough to justify?
+
+I am a little unclear whether we are trying to standardize
+earliest-deadline first scheduling, or rather standardize
+support for having multiple priority-specific scheduling
+algorithms.  What do we really need to standardize at this point?
+Perhaps just a framework in which priority-specific
+algorithms can be inserted...
+
+> I think we are making progress!
+
+Ditto.
+
+****************************************************************
+
+From: Alan Burns
+Sent: Monday, November 10, 2003  9:07 AM
+
+OK, let me stay with the representation issue
+
+>> A new issue - how to set the urgency of a task. In the original AI
+>> I had a new attribute of a task (deadline) and a new pragma for setting
+>> the deadline of a task on creation (like priority). It would be
+>> good if this could be made more general. Any idea how this could be
+>> done? Of course for some schemes (EDF for example) the smaller the
+>> value of parameter the more urgent, for other schemes (value-based
+>> dispatching) the larger the more urgent. Whatever the scheme we would
+>> need to have the rule that any change to the urgency attribute would
+>> need to be a dispatching point.
+>
+> Using pragmas and attributes is generally more work for the
+> compiler builder.  Could all this be done at run-time using
+> a package and task ids?  Or is having the urgency dynamic
+> not common enough to justify?
+>
+> I am a little unclear whether we are trying to standardize
+> earliest-deadline first scheduling, or rather standardize
+> support for having multiple priority-specific scheduling
+> algorithms.  What do we really need to standardize at this point?
+> Perhaps just a framework in which priority-specific
+> algorithms can be inserted...
+
+The grand scheme is in three parts: first, to give support for
+priority-specific scheduling (FIFO and Round Robin); second, to
+use this framework to define EDF; and third (if possible), to
+generalise the EDF solution for any urgency parameter.
+
+The priority-specific scheduling is defined in AI-355 and seems
+quite straightforward. Our work on preemption levels now implies
+that a band of priorities may be needed for a scheme, but that
+is easy to work into the priority-specific AI. To support
+EDF we originally had
+
+1. a predefined task attribute (ie a language defined instantiation
+of the task attribute generic) called deadline
+
+2. a pragma (called deadline) that appears in a task spec to control
+dispatching during activation (similar to the use of priority pragma)
+
+3. all changes to a task's deadline (ie calls to the Set_Value) are
+dispatching points
+
+I guess (1) and (3) could be achieved by a run-time package, but not sure
+(2) can, although I guess we could live without this by giving the
+default value of the attribute Time_First.
+
+I cannot see an easy way to generalise this. Perhaps a generic in which
+the 'type' for urgency is a parameter, a boolean to say which end
+of the type is the most urgent, and two parameters that give the priority
+band for the scheme. The 2 priority parameters will then have to match
+those given in the priority-specific pragma (unless priority-specific
+is defined to say all priority levels are initially FIFO but may
+be changed via calls to some run-time package - but this sounds a bit
+too dynamic).
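+
+A rough sketch of what such a generic might look like (none of these
+names are being proposed; this is purely illustrative):
+
+with System;
+with Ada.Task_Identification;
+generic
+   type Urgency is private;
+   with function "<" (Left, Right : Urgency) return Boolean is <>;
+   Smaller_Is_More_Urgent : in Boolean;     -- which end of the type is most urgent
+   First_Priority : in System.Any_Priority; -- priority band used by the scheme
+   Last_Priority  : in System.Any_Priority;
+package Urgency_Dispatching is
+   procedure Set_Urgency
+     (U : in Urgency;
+      T : in Ada.Task_Identification.Task_Id :=
+            Ada.Task_Identification.Current_Task);
+   procedure Get_Urgency
+     (U : out Urgency;
+      T : in Ada.Task_Identification.Task_Id :=
+            Ada.Task_Identification.Current_Task);
+   --  Any call to Set_Urgency would be a dispatching point.
+end Urgency_Dispatching;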
+
+EDF is so well known it would be good for Ada to give direct
+support, even if it can be derived from a general purpose generic.
+
+Do you feel the use of a generic is a runner?
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, November 10, 2003 10:48 AM
+
+...
+> The priority-specific scheduling is defined in AI-355 and seems
+> quite straightforward. Our work on preemption levels now implies
+> that a band of priorities may be needed for a scheme, but that
+> is easy to work into the priority-specific AI. To support
+> EDF we originally had
+>
+> 1. a predefined task attribute (ie a language defined instantiation
+> of the task attribute generic) called deadline
+
+Oh, I had forgotten you meant *that* kind of attribute.  I was thinking
+of the language "'" kind of attribute.  Instantiations of this generic
+do not create work for the compiler writer.
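+
+For reference, that kind of attribute is just an instantiation such as
+(the unit name is illustrative):
+
+with Ada.Real_Time;
+with Ada.Task_Attributes;
+package Deadline_Attribute is new Ada.Task_Attributes
+  (Attribute     => Ada.Real_Time.Time,
+   Initial_Value => Ada.Real_Time.Time_First);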
+
+> 2. a pragma (called deadline) that appears in a task spec to control
+> dispatching during activation (similar to the use of priority pragma)
+
+Here you are creating compiler work.  Using a settable default might be
+preferable.
+
+> 3. all changes to a task's deadline (ie calls to the Set_Value) are
+> dispatching points
+
+Now you are putting special semantics into an instantiation of a
+general-purpose generic.  That may be asking for trouble.  It might
+be better to just define your own package, if you want it to have
+a special effect when Set_Value is called.
+
+> I guess (1) and (3) could be achieved by a run-time package, but not sure
+> (2) can, although I guess we could live without this by giving the
+> default value of the attribute Time_First.
+
+That would certainly simplify the proposal from a compiler
+implementor's point of view.
+
+> I cannot see an easy way to generalise this. Perhaps a generic in which
+> the 'type' for urgency is a parameter, a boolean to say which end
+> of the type is the most urgent, and two parameters that give the priority
+> band for the scheme. The 2 priority parameters will then have to match
+> those given in the priority-specific pragma (unless priority-specific
+> is defined to say all priority levels are initially FIFO but may
+> be changed via calls to some run-time package - but this sounds a bit
+> too dynamic).
+
+I don't see the need to generalize it to this extent.  Each kind of scheduling
+would probably need its own package, scheduler, etc.  It would still be
+nice to have a general *model* indicating how these new kinds of algorithms
+could be supported, without necessarily providing an amazing generic of
+some sort that did it directly.
+
+> EDF is so well known it would be good for Ada to give direct
+> support, even if it can be derived from a general purpose generic.
+>
+> Do you feel the use of a generic is a runner?
+
+No, I think EDF deserves its own package with an EDF-oriented interface.
+But it might be nice to describe a general model which allows something
+other than priority (i.e., the "urgency") to take precedence in task scheduling,
+while priority still governs use of POs.  This is essentially the
+"preemption" level concept, except it allows us to preserve the single
+notion of priority for governing POs, while allowing other parameters
+like deadline to control scheduling between tasks that are not involved
+with a PO lock.
+
+****************************************************************
+
+From: Alan Burns
+Sent: Tuesday, November 11, 2003  9:34 AM
+
+So we have the following for support of EDF
+
+1. A language defined package such as:
+
+with Ada.Real_Time;
+with Ada.Task_Identification; use Ada.Task_Identification;
+package Deadline_Support is
+  subtype Deadline is Ada.Real_Time.Time;
+  Default_Deadline : constant Deadline := Ada.Real_Time.Time_First;
+  procedure Set_Deadline(D : in Deadline; T : in Task_ID := Current_Task);
+  procedure Get_Deadline(D : out Deadline; T : in Task_ID := Current_Task);
+end Deadline_Support;
+
+Perhaps with Implementation Advice to hold the deadline value
+as a task attribute.
+
+Is there any way Default_Deadline could be configurable?
+
+
+2. Pragmas to set a band of priorities for EDF scheduling:
+
+pragma Task_Dispatching_Policy (Priority_Specific);
+pragma Priority_Policy (EDF_Within_Priorities,
+                        first_priority, last_priority);
+
+
+3. Rules for EDF_Within_Priorities that give the preemption level
+dispatching (viz. our last 300 emails).
+
+
+4. A pragma to set the deadline of a task for its activation:
+
+pragma Deadline(expression);
+
+Although you feel this would be perhaps too much of a change
+from the compiler's point of view.
+
+--
+
+We do not give direct support for any other form of urgency, but
+the support for EDF will show how it can be done for any other
+X_Within_Priorities.
+
+If you agree, I'll get back to the AI and attempt to write it up.
+
 ****************************************************************
+
+From: Tucker Taft
+Sent: Tuesday, November 11, 2003 10:53 AM
+
+Sounds good to me!
+
+****************************************************************
+
+From: Alan Burns
+Sent: Wednesday, November 19, 2003  5:12 AM
+
+I've discussed the EDF AI with people at York and it became
+clear that the model, as we had it, was difficult to follow.
+
+To recap: when a task T is ready, it is added to the appropriate
+ready queue as follows
+
+a) find the right level, S say
+b) add T to S+1
+c) order all queues by EDF, unless T has been preempted while
+executing within a PO, in which case keep this task at head of Q
+
+S is defined in terms of locked POs (the highest locked PO with
+ceiling less than that of the new task, and whose locking task has
+a longer deadline).
+
+
+In discussions the following variation was seen to be much more
+intuitive. Basically tasks are put in queue S (hence always in
+front of the task that locked the PO). This can be expressed
+as follows:
+
+a) all queues are EDF ordered
+b) add T to the highest non-empty ready queue R such that
+    T'Deadline < Deadline of task at tail of R, and
+    T'Base > Priority level of R.
+   if no such R then add T to L (lowest level)
+
+This requires that there be no PO ceilings at level L, as I had before.
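+
+A minimal sketch of rule (b), assuming hypothetical ready-queue and
+task-descriptor types (illustrative only, not a proposed interface):
+
+   procedure Add_To_Ready_Queue (T : in Task_Descriptor) is
+   begin
+      --  Scan from the highest level of the band down to L.
+      for P in reverse Lowest_Level .. Highest_Level loop
+         if not Is_Empty (Queue (P))
+           and then T.Deadline < Tail (Queue (P)).Deadline
+           and then T.Base_Priority > P
+         then
+            Insert_By_Deadline (Queue (P), T);   -- queues are EDF ordered
+            return;
+         end if;
+      end loop;
+      Insert_By_Deadline (Queue (Lowest_Level), T);   -- no such R: level L
+   end Add_To_Ready_Queue;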
+
+This model was preferred by people seeing the models cold
+as:
+1) all queues are pure EDF with no special rule about the
+task which has locked a PO
+2) no +1 needed
+3) only ready queues need to be mentioned (not locked POs)
+4) the 'no POs at level L' rule was OK, as an implementation is allowed
+to round up a ceiling (ie from L to L+1)
+
+Hence I'm going to write up the AI with this model but mention the
+other model in the discussion section.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, November 19, 2003  7:05 AM
+
+Ok.
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Wednesday, November 24, 2003  6:55 AM
+
+what concerns me about this series of proposals for language additions from
+Alan is that they have the feeling of "let's design some nice new features",
+rather than "customer xxx who is using Ada is finding that it is a big problem
+that Ada cannot support yyy". To me, the latter is always the
+most critical kind of input for justifying any change.
+
+It's not that any specific proposal lacks technical merit, just that in
+aggregate the changes seem too large, and not well enough justified. I am
+not convinced that any vendors will see value in implementing these features.
+
+Alan, and others ... a better case needs to be made that these features are
+critical for the future use and success of Ada.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, November 24, 2003  7:35 AM
+
+I believe all of these grew out of the Int'l Real-Time Ada Workshop.
+My sense is that many of the attendees are "real" users of Ada,
+or consult to same.  Alan will have to elaborate.  In any case,
+having more convincing rationale for need would seem useful, to
+help WG9 with their consideration as well.
+
+****************************************************************
+
+From: Tullio Vardanega
+Sent: Wednesday, November 24, 2003  7:53 AM
+
+Yes, the whole range of real-time AIs has been conceived at
+and pushed for by the IRTAW, prior to, during and subsequent
+to the workshop itself.
+At this year's workshop in particular we had a number of
+working sessions devoted to hammering out AIs that we felt
+had the support of the community that the IRTAW represents.
+As usual, we strove to have as many stakeholders represented at
+the workshop as possible: we had developers, educators, vendors
+and end-users (such as the European Space Agency).
+It was indeed appreciated that the IRTAW (via Alan) would
+have to strongly justify their demand, which -- judging just from
+the stream of AIs that Alan just posted -- seems to exact a major
+toll from language implementors.
+This argument was discussed at length (as will be apparent from the
+IRTAW-12 proceedings that will be in the December issue of
+the Ada Letters) and should certainly be shared with the ARG.
+However, my view as a long-time IRTAW-er and as the program
+chair for this year is that the proposed AIs have the same pedigree
+as the Ravenscar Profile, which is, to date, the best known product
+of the IRTAW series and a very solid and welcome "approved AI".
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Wednesday, November 24, 2003  8:13 AM
+
+Right, I understand this, but it is still very important to differentiate
+between nice-to-have, and missing-essential-functionality (some things are
+of course in between). If you gather a bunch of language/domain experts
+at a table, and ask them "what new neat features would you like to see?"
+you will get lots of nice new neat features :-)
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Wednesday, November 24, 2003  8:17 AM
+
+> which -- judging just from
+> the stream of AIs that Alan just posted -- seems to exact a major
+> toll from language implementors.
+
+Actually I don't think any implementors will move much in the direction
+of implementing any of these extensions unless they get input from their
+prime customers asking for this functionality.
+
+Ravenscar has finally found that kind of input (in the intended safety
+critical niche), and so vendors have indeed started to implement this
+capability, and several products are available.
+
+I am unconvinced that we will see the same input for many of the new
+proposed features.
+
+If my skepticism results in considerably more evidence that I am wrong,
+that's certainly a constructive and productive result!
+
+****************************************************************
+
+From: Alan Burns
+Sent: Wednesday, November 24, 2003  10:26 AM
+
+Re the comments and concerns raised by Robert.
+
+First let me say, following Tullio, that the IRTAW workshop
+series has considered a number of proposals/issues but has
+only passed on to the ARG (via myself) those that it feels
+are really important and timely. There are many other
+potential AIs!
+
+Consider the ones that I put out new versions of today
+(obviously I should not have put them all out at the same
+time!). They fall into a small number of groups:
+
+1. Budget control. Understanding the amount of CPU resource a
+task is taking is fundamental to the safe execution of any real-time
+system. OSs in the high integrity area all monitor (in some
+form or other) CPU usage as an error recognition facility.
+Also the use of servers to control allocation is increasingly
+used (for example on at least one flying avionic system from
+Honeywell). As such application move away from building their
+own run-times and OS to the use of standard technology, languages
+and OS standards need to support these approaches. POSIX does
+to some extent and much is made in the Real-Time Java extension
+of such support. AI307 and AI354 give optional but standard
+means of providing the required budget control.  I can point to
+industrial users that would use Ravenscar but need budget
+control as well.
+
+2. Extending the Dispatching facilities (AI355 and AI357).
+Round Robin dispatching and EDF are not 'nice to haves'.
+They represent techniques in real use. RR is very common and fits
+well with current FIFO. EDF is as well known as Fixed Pri;
+and is certainly the preferred scheduling mechanism for soft
+real-time as you get much more from the CPU (perhaps 40%
+more before deadlines are missed). Look at the literature
+on embedded systems and what they want from scheduling.
+Again, real-time Java claims to give support here. Indeed it
+goes much further, with support for value-based scheduling,
+admission control, etc. It would be great to have these in
+Ada, but they are beyond the state of practice (and we believe
+they can be programmed with the facilities we do have/will have).
+
+3. Dynamic ceilings (AI 327). This is not as important, but it
+finishes off the language, in the sense that having dynamic priorities
+for tasks but static priorities for ceilings was always unfortunate.
+This AI gives an easy (optional) means of completing the
+language in this area.
+
+4. Support for recognition of task termination (AI10266).
+I picked this up as it grew out of the Ravenscar AI. It is
+not a high priority; but other groups have asked for this
+(eg the workshop on exception handling a couple of years
+ago).
+
+The AIs in groups 1 and 2 are, I believe, timely for real applications,
+but also show that Ada is moving forward and remains relevant
+to the real-time area.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Wednesday, November 24, 2003  7:30 PM
+
+Alan said:
+
+...
+> Consider the ones that I put out new versions of today
+> (obviously I should not have put them all out at the same
+> time!).
+
+Actually, I think that was a good thing, because it shows just how many of
+these proposals there are. And keep in mind that there also are a number of
+proposals already completed (Ravenscar, timers, etc.). I have a hard time
+believing that there could be so many important pieces of missing
+functionality.
+
+> OSs in the high integrity area all monitor (in some
+> form or other) CPU usage as an error recognition facility.
+...
+
+CPU usage is important to improving the performance of any Ada program, not
+just real time ones. (If you can't measure it, you can't improve it!). It's
+unfortunate that the package has evolved such that budgeting is given
+priority over measurement. Clearly, both have value (and proposals having
+value for multiple purposes should be preferred over those having value for
+only one purpose).
+
+> 4. Support for recognition of task termination (AI10266).
+> I picked this up as it grew out of the Ravenscar AI. It is
+> not a high priority; but other groups have asked for this
+> (eg the workshop on exception handling a couple of years ago).
+
+It's interesting that the capability that seems most important from a High
+Integrity perspective (ensuring that parts of the system don't become
+inoperative without notice) is considered to have the lowest priority.
+Certainly, this is the problem that I run into in practice (tasks going away
+silently when the task's others exception handler either is absent, has a
+bug, or exhausts the same resource).
+
+In any case, I tend to share Robert's skepticism about the demand for some
+of these proposals. I, like him, hope to be proven wrong.
+
+****************************************************************
+
