Rationale for Ada 2012
5.2 Scheduling
Ada 83 was remarkably silent about the scheduling
of tasks. It muttered about tasks being implemented on multiprocessors
or using interleaved execution on a single processor. But it said nothing
about how such interleaving might be achieved. It also indicated that
a single Ada task might be implemented using several actual processors
if the effect would be the same.
Ada 83 introduced the pragma Priority and stated "if two tasks with different priorities are both
eligible for execution ... then it cannot be the case that the task with
the lower priority is executing while the task with the higher priority
is not".
The Rationale for Ada
83 says that this rule requires preemptive scheduling. But it says nothing
about what happens if several tasks have the same priority. It does however
have a dire warning: "Priorities are provided as a tool for indicating
relevant degrees of urgency and on no account should their manipulation
be used as a technique for attempting to obtain mutual exclusion".
So, apart from the existence of priorities, implementations
were free to use whatever scheduling algorithms they liked, such as round-robin
time slicing or simply running until blocked.
There was also a bit
of a mystery about the delay statement. On the one hand Ada 83 says it
"suspends execution of the task for at least the duration specified".
The words "at least" caused much confusion.
The intent was simply a reminder that a task might not get the processor
back at the end of the interval because another task might have become
eligible for execution meanwhile. It did not mean that the implementation
could willy-nilly delay execution for a longer time.
Another mystery surrounded the meaning of

   delay 0.0;
Ada 83 did state that delay with a negative value
is equivalent to a delay statement with a zero value. But it did not
say what a delay with a zero value meant. The Rationale remained mute
on the topic as well.
However, a general convention seemed to arise that
delay 0.0; indicated that the task
was willing to relinquish the processor and so force a scheduling point.
Ada 95 brought some clarity to the situation in the
new Real-Time Systems annex by introducing the pragma Task_Dispatching_Policy
and the standard argument FIFO_Within_Priorities.
But the core language did not clarify the effect of a delay of zero.
It does say that a delay causes a task to be blocked but if the expiration
time has already passed, the task is not blocked. So clearly a negative
delay does not block. However, it still has the note that a negative
delay is equivalent to delay zero so we could deduce that delay zero
does not block and so cannot force scheduling.
But help is at hand in the Real-Time Systems annex
where it clearly states that even if a delay does not result in blocking,
nevertheless the task goes to the end of the ready queue for its active
priority. But that is only for the standard policy of FIFO_Within_Priorities.
If a malevolent vendor introduces a curious policy called perhaps Dodgy_Scheduling
then it need not follow this rule.
Ada 2005 added further policies, namely

   Non_Preemptive_FIFO_Within_Priorities
   Round_Robin_Within_Priorities
   EDF_Across_Priorities

In the case of Non_Preemptive_FIFO_Within_Priorities
a non-blocking delay also sends the task to the end of the ready queue
for its active priority. However, a non-blocking delay has absolutely
no effect in the case of Round_Robin_Within_Priorities
and EDF_Across_Priorities.
The introduction of non-preemptive dispatching revealed
a shortcoming that is cured in Ada 2012. The problem is that in such
a system there is a need to be able to indicate that a task is willing
to be preempted by a task of a higher priority but not by one of the
same priority. So somehow we need to say
Yield_To_Higher.
Moreover, some felt
that it was time to get rid of this strange habit of writing delay 0.0; to indicate a scheduling point. Those restricted to the Ravenscar
profile had been forced to write something really gruesome such as

   delay until Ada.Real_Time.Time_First;
Accordingly, the procedure Yield is added to the package
Ada.Dispatching so that it becomes

   package Ada.Dispatching is
      pragma Preelaborate(Dispatching);
      procedure Yield;
      Dispatching_Policy_Error: exception;
   end Ada.Dispatching;
Calling Yield is exactly
equivalent to delay 0.0; and similarly
causes a bounded error if called from within a protected operation.
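As an illustration, a compute-bound task might now yield explicitly rather than writing delay 0.0; (this is only a sketch; Do_Some_Work stands for whatever unit of computation the application performs):

```ada
with Ada.Dispatching;

task body Worker is
begin
   loop
      Do_Some_Work;            -- hypothetical chunk of computation
      Ada.Dispatching.Yield;   -- same effect as delay 0.0; under
                               -- FIFO_Within_Priorities the task goes to
                               -- the end of the ready queue for its
                               -- active priority
   end loop;
end Worker;
```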
There is also a new child package thus

   package Ada.Dispatching.Non_Preemptive is
      pragma Preelaborate(Non_Preemptive);
      procedure Yield_To_Higher;
      procedure Yield_To_Same_Or_Higher renames Yield;
   end Ada.Dispatching.Non_Preemptive;
Calling Yield_To_Higher
provides the additional facility required for non-preemptive scheduling.
Note that, unlike Yield, it can be called
from within a protected operation and does not cause a bounded error.
The pedantic programmer can call the precisely named
Yield_To_Same_Or_Higher which simply renames
Yield in the parent package.
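Under non-preemptive dispatching a long-running task might thus insert preemption points that admit only more urgent work. A sketch, in which Work_List and Process_Item are illustrative names:

```ada
with Ada.Dispatching.Non_Preemptive;

task body Batch_Worker is
begin
   for I in Work_List'Range loop
      Process_Item (Work_List (I));   -- hypothetical item of work
      -- Allow tasks of higher active priority to run here; tasks of
      -- the same priority must wait, unlike with Yield.
      Ada.Dispatching.Non_Preemptive.Yield_To_Higher;
   end loop;
end Batch_Worker;
```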
Incidentally, note that since Yield
has a side effect, Ada.Dispatching has been
downgraded to preelaborable whereas it was pure in Ada 2005.
We now turn to consider an interaction between suspension
objects introduced in Ada 95 and EDF scheduling introduced in Ada 2005.
Remember that suspension objects are manipulated by the following package

   package Ada.Synchronous_Task_Control is
      type Suspension_Object is limited private;
      procedure Set_True(S: in out Suspension_Object);
      procedure Set_False(S: in out Suspension_Object);
      function Current_State(S: Suspension_Object) return Boolean;
      procedure Suspend_Until_True(S: in out Suspension_Object);
   private
      ...
   end Ada.Synchronous_Task_Control;
The state of a suspension object can be set by calls
of Set_True and Set_False.
The key feature is that the procedure Suspend_Until_True
enables a task to be suspended until the suspension object is set true
by some other task. Thus this provides a neat mechanism for signalling
between tasks.
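The signalling pattern might look like this (Handle_Event is a placeholder for the application's response; note that Suspend_Until_True resets the object to false on return, so each Set_True releases one wait):

```ada
with Ada.Synchronous_Task_Control; use Ada.Synchronous_Task_Control;

Event : Suspension_Object;   -- initially false

task body Waiter is
begin
   loop
      Suspend_Until_True (Event);   -- blocks until Signaller sets it true;
                                    -- the object reverts to false on return
      Handle_Event;                 -- hypothetical response to the signal
   end loop;
end Waiter;

task body Signaller is
begin
   ...
   Set_True (Event);   -- releases Waiter
   ...
end Signaller;
```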
Earliest Deadline First (EDF) scheduling is manipulated by the following child package of Ada.Dispatching
introduced in Ada 2005 (with use clauses added to save space)

   with Ada.Real_Time; use Ada.Real_Time;
   with Ada.Task_Identification; use Ada.Task_Identification;
   package Ada.Dispatching.EDF is
      subtype Deadline is Ada.Real_Time.Time;
      Default_Deadline: constant Deadline := Time_Last;
      procedure Set_Deadline(
         D: in Deadline;
         T: in Task_Id := Current_Task);
      procedure Delay_Until_And_Set_Deadline(
         Delay_Until_Time: in Time;
         Deadline_Offset: in Time_Span);
      function Get_Deadline(T: Task_Id := Current_Task) return Deadline;
   end Ada.Dispatching.EDF;
The procedure Delay_Until_And_Set_Deadline
is the key feature. It enables a task to be blocked until the time given
by the parameter Delay_Until_Time and sets
the deadline so that it is Deadline_Offset
after that.
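A periodic EDF task would typically use it at the end of each cycle; a sketch, assuming an illustrative 10 ms period and 8 ms relative deadline (Do_Cyclic_Work is hypothetical):

```ada
with Ada.Real_Time;       use Ada.Real_Time;
with Ada.Dispatching.EDF; use Ada.Dispatching.EDF;

task body Periodic is
   Period       : constant Time_Span := Milliseconds (10);
   Rel_Deadline : constant Time_Span := Milliseconds (8);
   Next         : Time := Clock;
begin
   loop
      Do_Cyclic_Work;   -- hypothetical work for one cycle
      Next := Next + Period;
      -- Block until Next; on release the deadline is Next + Rel_Deadline.
      Delay_Until_And_Set_Deadline (Next, Rel_Deadline);
   end loop;
end Periodic;
```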
But what is missing
in Ada 2005 is the ability for a sporadic task triggered by a suspension
object to have its deadline set in a similar manner. This is remedied
in Ada 2012 by the addition of the following child package

   with Ada.Real_Time;
   package Ada.Synchronous_Task_Control.EDF is
      procedure Suspend_Until_True_And_Set_Deadline(
         S: in out Suspension_Object;
         TS: in Ada.Real_Time.Time_Span);
   end Ada.Synchronous_Task_Control.EDF;
This enables a task to be blocked until the suspension
object S is set true; it then becomes ready
with a deadline of Ada.Real_Time.Clock + TS.
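A sporadic EDF task triggered by another task can therefore be written along these lines (Handle_Request and the 5 ms relative deadline are illustrative):

```ada
with Ada.Real_Time;                use Ada.Real_Time;
with Ada.Synchronous_Task_Control; use Ada.Synchronous_Task_Control;
with Ada.Synchronous_Task_Control.EDF;

Trigger : Suspension_Object;

task body Sporadic is
begin
   loop
      -- Block until some task calls Set_True (Trigger); on release the
      -- deadline becomes Clock + 5 ms.
      Ada.Synchronous_Task_Control.EDF.Suspend_Until_True_And_Set_Deadline
        (Trigger, Milliseconds (5));
      Handle_Request;   -- hypothetical response to the triggering event
   end loop;
end Sporadic;
```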
The other new feature concerning scheduling in Ada
2012 is the addition of a package Ada.Synchronous_Barriers.
This enables many tasks to be blocked and to be released together.
The rationale for needing this facility is explained
in the AI concerned. As general purpose computing is moving to parallel
architectures and eventually to massively parallel machines, there is
a need to efficiently schedule many tasks using barrier primitives. The
POSIX OS interface provides a barrier primitive where N tasks
wait on a barrier and are released simultaneously when all are ready
to execute.
There are many situations where the release of
N
tasks is required to execute an algorithm in parallel. Often the calculation
is relatively small for each task on each iteration but the number of
tasks is relatively high. As an example consider the solution of partial
differential equations where one task is allocated to each node of a
grid; there might easily be several thousand nodes. Such an example is
outlined in
[7]. The cost of linearly
scheduling and releasing them could remove almost all gains made through
parallelization in the first place.
The new package is

   package Ada.Synchronous_Barriers is
      pragma Preelaborate(Synchronous_Barriers);
      subtype Barrier_Limit is Positive range 1 .. implementation-defined;
      type Synchronous_Barrier(Release_Threshold: Barrier_Limit) is limited private;
      procedure Wait_For_Release(
         The_Barrier: in out Synchronous_Barrier;
         Notified: out Boolean);
   private
      ...
   end Ada.Synchronous_Barriers;
The type Synchronous_Barrier
has a discriminant whose value indicates the number of tasks to be waited
for. When an object of the type is declared its internal counter is set
to zero. Thus we might write
   SB: Synchronous_Barrier(Release_Threshold => 100);
When a task calls the
procedure Wait_For_Release thus
Wait_For_Release(SB, My_Flag);
then the task is blocked
and the internal counter in SB is incremented.
If the counter is then equal to the release threshold for that object
(100 in this example), then all the tasks are released. Just one task
will have the parameter Notified set to true
(the mechanism for selecting the chosen task is not defined). This specially
chosen task is then expected to do some work on behalf of all the others.
Typically all the tasks will be of the same task type so the code of
that type might have

   Wait_For_Release(SB, My_Flag);
   if My_Flag then
      -- Gosh, I am the chosen one
      ...   -- do stuff
   end if;
Once all the tasks are released, the counter in SB
is reset to zero so that the synchronous barrier can be used again.
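Putting the pieces together, an iterative parallel computation along the lines of the grid example might be sketched as follows; Compute_Node_Values and Merge_Global_Results are hypothetical names for the per-node work and the once-per-iteration work of the chosen task:

```ada
with Ada.Synchronous_Barriers; use Ada.Synchronous_Barriers;

SB : Synchronous_Barrier (Release_Threshold => 100);

task type Node_Worker;
Workers : array (1 .. 100) of Node_Worker;

task body Node_Worker is
   My_Flag : Boolean;
begin
   loop
      Compute_Node_Values;              -- hypothetical per-node work
      Wait_For_Release (SB, My_Flag);   -- all 100 released together
      if My_Flag then
         Merge_Global_Results;          -- done by exactly one task
      end if;
   end loop;
end Node_Worker;
```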
Care is needed regarding finalization, aborting tasks
and other awkward activities. For example, if a synchronous barrier is
finalized, then any tasks blocked on it are released and Program_Error
is raised at the point of the call of Wait_For_Release.
Many embedded real-time programs, such as those conforming
to the Ravenscar profile, run forever. However, there are soft multitasking
programs which are hosted on systems such as Windows or Linux and these
require closing down in an orderly manner. There are also programs that
have mode changes in which the set of tasks involved can be changed dramatically.
In such situations it is important that synchronous barriers are finalized
neatly.
© 2011, 2012, 2013 John Barnes Informatics.