Version 1.7 of ai05s/ai05-0159-1.txt
!standard A.18.24 10-04-12 AI05-0159-1/05
!class Amendment 05-10-24
!status ARG Approved 11-0-0 10-02-27
!status work item 06-03-15
!status received 05-10-02
!priority Medium
!difficulty Hard
!subject Queue containers
!summary
Add task-safe queues to the set of containers.
!problem
The Ada 2005 containers are not task-safe: at most one task at a time can
safely operate on a given container object, leaving users to manage
concurrent access themselves. There is a need for task-safe containers.
!proposal
Add a synchronized queue interface and four task-safe queue implementations:
regular (first-in, first-out) queues and priority queues, each in bounded and
unbounded versions.
!wording
A.18.23 Containers.Synchronized_Queue_Interfaces
The language-defined generic package Containers.Synchronized_Queue_Interfaces
provides interface type Queue, and a set of operations for that type.
Interface Queue specifies a first-in, first-out queue.
Static Semantics
The generic library package Containers.Synchronized_Queue_Interfaces has the
following declaration:
generic
   type Element_Type is private;
package Ada.Containers.Synchronized_Queue_Interfaces is
   pragma Pure;

   type Queue is synchronized interface;

   procedure Enqueue
     (Container : in out Queue;
      New_Item  : Element_Type) is abstract;
   pragma Implemented (Enqueue, By_Entry);

   procedure Dequeue
     (Container : in out Queue;
      Element   : out Element_Type) is abstract;
   pragma Implemented (Dequeue, By_Entry);

   function Current_Use (Container : Queue) return Count_Type is abstract;
   function Peak_Use (Container : Queue) return Count_Type is abstract;

end Ada.Containers.Synchronized_Queue_Interfaces;
procedure Enqueue
  (Container : in out Queue;
   New_Item  : Element_Type) is abstract;
A queue type that implements this interface may have a bounded
capacity. If the queue object has a bounded capacity,
and the number of existing elements equals the capacity, then Enqueue
blocks until storage becomes available; otherwise Enqueue does not block.
In any case, it then copies New_Item onto the tail of the queue.
procedure Dequeue
  (Container : in out Queue;
   Element   : out Element_Type) is abstract;
If the queue is empty, then Dequeue blocks until an item becomes
available. In any case, it then returns a copy of the element at the head of the
queue, and removes it from the container.
function Current_Use (Container : Queue) return Count_Type is abstract;
Returns the number of elements currently in the queue.
function Peak_Use (Container : Queue) return Count_Type is abstract;
Returns the maximum number of elements that have been in the queue at
any one time.
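As an informal illustration (not part of the proposed wording; the unit names
below are hypothetical), a client could instantiate this interface package and
then operate on any conforming queue through the class-wide type, letting
Enqueue and Dequeue dispatch to the entries of whichever concrete queue is
passed in:

with Ada.Containers.Synchronized_Queue_Interfaces;
package Integer_Queue_Interfaces is
   new Ada.Containers.Synchronized_Queue_Interfaces (Element_Type => Integer);

--  A second compilation unit that works with any queue implementing the
--  interface, regardless of which concrete queue package provided it:
with Integer_Queue_Interfaces;
procedure Transfer
  (From : in out Integer_Queue_Interfaces.Queue'Class;
   To   : in out Integer_Queue_Interfaces.Queue'Class)
is
   Value : Integer;
begin
   From.Dequeue (Value);  -- blocks until From holds at least one element
   To.Enqueue (Value);    -- may block if To is a bounded queue that is full
end Transfer;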
A.18.24 Containers.Unbounded_Synchronized_Queues
Static Semantics
The language-defined generic package
Containers.Unbounded_Synchronized_Queues provides type Queue, which
implements the interface type Containers.Synchronized_Queue_Interfaces.Queue.
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
   with package Queue_Interfaces is
      new Ada.Containers.Synchronized_Queue_Interfaces (<>);
package Ada.Containers.Unbounded_Synchronized_Queues is
   pragma Preelaborate;

   type Queue is synchronized new Queue_Interfaces.Queue with private;

private
   ... -- not specified by the language
end Ada.Containers.Unbounded_Synchronized_Queues;
The type Queue is used to represent task-safe queues. The type Queue needs
finalization (see 7.6).
The capacity for instances of type Queue is unbounded.
AARM Ramification: Enqueue never blocks; if more storage is
needed for a new element, it is allocated dynamically.
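For illustration (not proposed wording; Integer_Queue_Interfaces is the
hypothetical instance sketched under A.18.23 above), an unbounded queue
package is obtained by a further instantiation:

with Ada.Containers.Unbounded_Synchronized_Queues;
with Integer_Queue_Interfaces;  -- hypothetical instance from the A.18.23 sketch
package Integer_Queues is
   new Ada.Containers.Unbounded_Synchronized_Queues
     (Queue_Interfaces => Integer_Queue_Interfaces);

A shared object declared as, say, Work_Items : Integer_Queues.Queue can then
be filled by any number of producer tasks without those tasks ever blocking in
Enqueue, while consumers block in Dequeue until an item is available; the
!example section below shows a complete producer/consumer arrangement.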
A.18.25 Containers.Bounded_Synchronized_Queues
Static Semantics
The language-defined generic package
Containers.Bounded_Synchronized_Queues provides type Queue, which
implements the interface type Containers.Synchronized_Queue_Interfaces.Queue.
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
   with package Queue_Interfaces is
      new Ada.Containers.Synchronized_Queue_Interfaces (<>);
package Ada.Containers.Bounded_Synchronized_Queues is
   pragma Pure;

   type Queue (Capacity : Count_Type) is
      synchronized new Queue_Interfaces.Queue with private;

private
   ... -- not specified by the language
end Ada.Containers.Bounded_Synchronized_Queues;
The type Queue is used to represent task-safe queues. The type Queue needs
finalization if and only if type Element_Type needs finalization.
The capacity for instances of type Queue is bounded and specified
by the discriminant Capacity.
AARM Ramification: Since this type has a bounded capacity, Enqueue might block
if the queue is full.
Implementation Advice
Bounded queue objects should be implemented without implicit pointers or dynamic
allocation.
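For illustration (not proposed wording; Integer_Queue_Interfaces is the
hypothetical instance sketched under A.18.23 above), the bounded form is
instantiated the same way, with each object fixing its own capacity through
the discriminant:

with Ada.Containers.Bounded_Synchronized_Queues;
with Integer_Queue_Interfaces;  -- hypothetical instance from the A.18.23 sketch
package Integer_Bounded_Queues is
   new Ada.Containers.Bounded_Synchronized_Queues
     (Queue_Interfaces => Integer_Queue_Interfaces);

An object declared as Mailbox : Integer_Bounded_Queues.Queue (Capacity => 8)
holds at most eight elements; a producer calling Mailbox.Enqueue while the
queue is full blocks until a consumer removes an element.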
A.18.26 Containers.Unbounded_Priority_Queues
Static Semantics
The language-defined generic package
Containers.Unbounded_Priority_Queues provides type Queue, which
implements the interface type Containers.Synchronized_Queue_Interfaces.Queue.
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
   with package Queue_Interfaces is
      new Ada.Containers.Synchronized_Queue_Interfaces (<>);
   with function Before
     (Left, Right : Queue_Interfaces.Element_Type) return Boolean is <>;
package Ada.Containers.Unbounded_Priority_Queues is
   pragma Preelaborate;

   type Queue is synchronized new Queue_Interfaces.Queue with private;

private
   ... -- not specified by the language
end Ada.Containers.Unbounded_Priority_Queues;
The type Queue is used to represent task-safe priority queues. The type Queue
needs finalization (see 7.6).
The capacity for instances of type Queue is unbounded.
AARM Ramification: Enqueue never blocks; if more storage is
needed for a new element, it is allocated dynamically.
Two elements E1 and E2 are equivalent if both Before (E1, E2) and
Before (E2, E1) return False, using the generic formal function
Before.
Enqueue inserts an item according to the order specified by the generic
formal Before operation. If the queue already contains elements
equivalent to New_Item, then it is inserted after the existing
equivalent elements.
The actual function for the generic formal function Before of
Unbounded_Priority_Queues is expected to return the same value each
time it is called with a particular pair of element values. It should
define a strict weak ordering relationship (see A.18); it should not
modify the elements. If the actual for Before behaves in some other
manner, the behavior of Unbounded_Priority_Queues is unspecified. How
many times Enqueue calls Before is unspecified.
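For illustration (not proposed wording; all names are hypothetical, and an
Ada 2012 expression function is used for brevity), the following sketch shows
a Before actual that satisfies the strict-weak-ordering expectation: jobs with
higher urgency are dequeued first, and jobs of equal urgency are equivalent,
so they come out in FIFO order as specified above.

with Ada.Containers.Synchronized_Queue_Interfaces;
with Ada.Containers.Unbounded_Priority_Queues;
package Job_Queues is
   type Job is record
      Urgency : Natural;   -- larger value means more urgent
      Id      : Positive;
   end record;

   --  Strict weak ordering used by Enqueue: neither Before (X, Y) nor
   --  Before (Y, X) is True when the Urgency values are equal, so such
   --  jobs are equivalent and are queued in FIFO order.
   function Before (Left, Right : Job) return Boolean is
     (Left.Urgency > Right.Urgency);

   package Job_Interfaces is
      new Ada.Containers.Synchronized_Queue_Interfaces (Element_Type => Job);

   package Unbounded_Job_Queues is
      new Ada.Containers.Unbounded_Priority_Queues
        (Queue_Interfaces => Job_Interfaces,
         Before           => Before);
end Job_Queues;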
A.18.27 Containers.Bounded_Priority_Queues
Static Semantics
The language-defined generic package
Containers.Bounded_Priority_Queues provides type Queue, which
implements the interface type Containers.Synchronized_Queue_Interfaces.Queue.
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
   with package Queue_Interfaces is
      new Ada.Containers.Synchronized_Queue_Interfaces (<>);
   with function Before
     (Left, Right : Queue_Interfaces.Element_Type) return Boolean;
package Ada.Containers.Bounded_Priority_Queues is
   pragma Pure;

   type Queue (Capacity : Count_Type) is
      synchronized new Queue_Interfaces.Queue with private;

private
   ... -- not specified by the language
end Ada.Containers.Bounded_Priority_Queues;
The type Queue is used to represent task-safe priority queues. The type Queue
needs finalization if and only if type Element_Type needs finalization.
The capacity for instances of type Queue is bounded and specified
by the discriminant Capacity.
Two elements E1 and E2 are equivalent if both Before (E1, E2) and
Before (E2, E1) return False, using the generic formal function
Before.
Enqueue inserts an item according to the order specified by the generic
formal Before operation. If the queue already contains elements
equivalent to New_Item, then it is inserted after the existing
equivalent elements.
The actual function for the generic formal function Before of
Bounded_Priority_Queues is expected to return the same value each time
it is called with a particular pair of element values. It should
define a strict weak ordering relationship (see A.18); it should not
modify the elements. If the actual for Before behaves in some other
manner, the behavior of Bounded_Priority_Queues is unspecified. How
many times Enqueue calls Before is unspecified.
Implementation Advice
Bounded priority queue objects should be implemented without implicit
pointers or dynamic allocation.
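Continuing the hypothetical Job_Queues sketch from A.18.26 (not proposed
wording), the bounded form is instantiated the same way; because the Before
formal above is declared without a <> default, an explicit actual must be
named:

with Ada.Containers.Bounded_Priority_Queues;
with Job_Queues;  -- hypothetical package from the A.18.26 sketch
package Bounded_Job_Queues is
   new Ada.Containers.Bounded_Priority_Queues
     (Queue_Interfaces => Job_Queues.Job_Interfaces,
      Before           => Job_Queues.Before);

An object declared as Dispatch : Bounded_Job_Queues.Queue (Capacity => 16)
then holds at most 16 jobs, and Enqueue blocks while it is full.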
!discussion
We considered making protected forms of all of the existing containers, but
determined that there are various problems with such an approach:
(1) What kind of protection is needed (multiple readers with a single writer,
vs. fully serialized access)?
(2) Some of the operations (especially those with callbacks) could potentially
deadlock. How do we handle those cases (allow the task holding the lock to
perform any further operation vs. always locking)?
(3) Can we actually do better performance-wise than the user wrapping the
operations that they need (as sketched after this discussion)? Note that the
existing containers are not potentially blocking, so they can be called from
protected operations. A well-designed protected type will provide a much better
abstraction than anything we could provide, and probably would not have as
large a performance cost.
Thus, we decided to provide task-safe queue abstractions (which are commonly
needed) and let the users create any others that they need.
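To illustrate point (3), and the stack answer given in the !appendix, here is
a minimal sketch (hypothetical, not proposed wording) of the kind of wrapper a
user can write: a protected LIFO stack built on the existing doubly-linked
list container. Since the existing container operations are not potentially
blocking, they can be called directly from the protected body.

with Ada.Containers.Doubly_Linked_Lists;
generic
   type Element_Type is private;
package Protected_Stacks is
   package Lists is new Ada.Containers.Doubly_Linked_Lists (Element_Type);

   protected type Stack is
      procedure Push (Item : Element_Type);
      entry Pop (Item : out Element_Type);  -- blocks while the stack is empty
   private
      Data : Lists.List;
   end Stack;
end Protected_Stacks;

package body Protected_Stacks is
   protected body Stack is
      procedure Push (Item : Element_Type) is
      begin
         Data.Append (Item);
      end Push;

      entry Pop (Item : out Element_Type) when not Data.Is_Empty is
      begin
         Item := Data.Last_Element;  -- last in, first out
         Data.Delete_Last;
      end Pop;
   end Stack;
end Protected_Stacks;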
!example
declare
   --
   --  (Context clauses such as "with Ada.Text_IO; use Ada.Text_IO;" and the
   --  needed queue container packages are assumed on the enclosing unit.)
   package Widgets is
      type Widget is new Integer; --
      ...
   end Widgets;
   use Widgets;

   package Widget_Queue_Types is
      new Ada.Containers.Synchronized_Queue_Interfaces (Widgets.Widget);
   --
   --
   --
   --
   task type Assembly_Line_Task
     (Q : access Widget_Queue_Types.Queue'Class := null;
      N : Natural := 0) --
   is end;

   function New_Assembly_Line_Task
     (Q : not null access Widget_Queue_Types.Queue'Class;
      N : Natural) return Assembly_Line_Task
   is
   begin
      return T : Assembly_Line_Task (Q, N);
   end New_Assembly_Line_Task;

   type Assembly_Line_Task_Array is
      array (Positive range <>) of Assembly_Line_Task;

   task type Delivery_Service_Task
     (Q : not null access Widget_Queue_Types.Queue'Class;
      N : Natural) --
   is end;

   --
   --
   --
   --
   --
   task body Assembly_Line_Task is
   begin
      for I in 1 .. N loop
         delay 0.5;
         Q.Enqueue (Widget (I));
      end loop;
      Q.Enqueue (Widget'(0)); --
   end Assembly_Line_Task;

   --
   --
   --
   --
   --
   task body Delivery_Service_Task is
      W    : Widget;
      Done : Natural := 0;
   begin
      loop
         Q.Dequeue (W);
         if W > 0 then
            Put_Line ("Delivering widget" & Widget'Image (W));
         else
            Done := Done + 1;
            exit when Done >= N;
         end if;
      end loop;
   end Delivery_Service_Task;

   --
   --
   --
   package Widget_Queues is
      new Ada.Containers.Unbounded_Synchronized_Queues
        (Queue_Interfaces => Widget_Queue_Types);

   Q : aliased Widget_Queues.Queue;
   --
   --
   Assembly_Lines : Assembly_Line_Task_Array (1 .. 5) :=
     (others => New_Assembly_Line_Task (Q'Access, 10));
   Delivery_Service : Delivery_Service_Task (Q'Access, Assembly_Lines'Length);
begin --
   null;
end;
!ACATS test
ACATS Tests are needed for these new packages.
!appendix
From: Martin Dowie
Date: Tuesday, July 28, 2009 2:53 AM
1. For the priority queue, what is the expected behaviour of trying to add 2
identical elements to the queue? It would seem that the 2nd would not be
added (as "Before (x, x) = False"). I appreciate that to get both messages
added would probably require "multiset" support - are "Multi" variants of the
existing containers planned?
2. For the bounded variants, is the intent that they should not use dynamic
memory (same as the new bounded containers)? If so, should there be some
words added to state this?
3. The queues are all specified to be FIFO - could LIFO also be added relatively
cheaply? Not as common but still useful.
****************************************************************
From: Randy Brukardt
Date: Tuesday, August 11, 2009 1:42 PM
Sorry about the long delay on posting this one; it got blocked by a useless
filter in the mailing list software (that unfortunately can't be turned off) and
I was on vacation.
> 1. For the priority queue, what is the expected behaviour of trying to
> add 2 identical elements to the queue? It would seem that the 2nd
> would not be added (as "Before (x, x) = False"). I appreciate that to
> get both messages added would probably require "multiset" support -
> are "Multi"
> variants of the existing containers planned?
I would have expected the reverse: that two identical elements get queued
(probably in an unspecified order). That's how the queues in my mail filter work
(the only time I've used priority queues in practice). [They're actually more
like delay queues, as the "priority" is the retry time. Surely you don't want to
lose a message just because it happens to have the same retry time as another.]
In any case, this will need to be clarified.
> 2. For the bounded variants, is the intent that they should not use
> dynamic memory (same as the new bounded containers)?
> If so, should there be some words added to state this?
That is my understanding, and yes, of course, they will need all of the
appropriate wording that AI05-0001-1 uses for bounded forms.
> 3. The queues are all specified to be FIFO - could LIFO also be added
> relatively cheaply? Not as common but still useful.
Not sure it is worth it. We're really only trying to provide the most basic and
common of protected containers because generally the "protection" needed is
something that depends on the entire algorithm; naively dropping in protected
containers could easily generate race or deadlock conditions.
In any case, "LIFO" is a stack, which is generally thought of as a totally
different data structure, with different names for the operations. Trying to use
queue terminology for it would be pretty unnatural. So there wouldn't be much if
any reuse potential here.
****************************************************************
From: Simon Wright
Date: Tuesday, August 11, 2009 3:45 PM
>> 1. For the priority queue, what is the expected behaviour of trying
>> to add 2 identical elements to the queue? It would seem that the 2nd
>> would not be added (as "Before (x, x) = False"). I appreciate that to
>> get both messages added would probably require "multiset" support -
>> are "Multi"
>> variants of the existing containers planned?
>
> I would have expected the reverse: that two identical elements get
> queued (probably in an unspecified order).
As a data point here, the BC ordered queue queues 'identical' items in FIFO
order (preferring defined behaviour). Whether an unspecified order is
satisfactory must, I think, depend on the application.
****************************************************************
From: Jeffrey R. Carter
Date: Tuesday, August 11, 2009 4:12 PM
> I would have expected the reverse: that two identical elements get
> queued (probably in an unspecified order). That's how the queues in my
> mail filter work (the only time I've used priority queues in practice)
> [They're actually more like delay queues, as the "priority" is the
> retry time. Surely you don't want to lose a message just because it
> happens to have the same retry time as another.]
If they're specified as FIFO, the priority queue must really be
FIFO-within-priorities, so it must queue the 2nd after the 1st.
****************************************************************
From: Martin Dowie
Date: Monday, August 24, 2009 7:03 AM
Thanks for all that - making more sense now...
Would it be useful to add a "Peek" operation too? One that returned the highest
priority element in the queue without actually removing it from the queue?
****************************************************************
From: Matthew Heaney
Date: Thursday, October 8, 2009 12:28 AM
> 1. For the priority queue, what is the expected behaviour of trying to
> add 2 identical elements to the queue? It would seem that the 2nd
> would not be added (as "Before (x, x) = False"). I appreciate that to
> get both messages added would probably require "multiset" support -
> are "Multi"
> variants of the existing containers planned?
There are two separate issues here.
The first issue is the matter of the priority queue. Yes, you should be able to
add multiple items that are equivalent. And yes, new elements that are
equivalent to existing elements should probably go after existing elements.
(But this will have to be decided at the ARG meeting next month.)
The second issue is multiset support. This is orthogonal to priority queues
(because the language doesn't specify how the queues are implemented). On the
specific matter of multiset support, I lobbied last year for their inclusion in
the language standard, but wasn't able to make my case with sufficient cogency,
so they didn't make the cut.
However, an unbounded multiset is part of the GNAT distribution, so if you're
using GNAT then you have one already. I am in the process of integrating the
bounded forms into GNAT, and so GNAT will have a bounded multiset too.
For now I have implemented the bounded priority queues using the multisets, but
eventually I'll probably replace that implementation with a proper heap.
> 2. For the bounded variants, is the intent that they should not use
> dynamic memory (same as the new bounded containers)? If so, should
> there be some words added to state this?
Right, the bounded forms do not use dynamic memory allocation.
> 3. The queues are all specified to be FIFO - could LIFO also be added
> relatively cheaply? Not as common but still useful.
That's just a stack. Use a vector or a list container, wrapped in a protected
object.
****************************************************************