Version 1.13 of ai05s/ai05-0159-1.txt
!standard A.18.27 10-08-04 AI05-0159-1/09
!standard A.18.28
!standard A.18.29
!standard A.18.30
!standard A.18.31
!standard A.18.32
!class Amendment 05-10-24
!status Amendment 2012 10-08-04
!status work item 10-07-06
!status ARG Approved 10-0-0 10-06-18
!status work item 10-04-29
!status ARG Approved 11-0-0 10-02-27
!status work item 06-03-15
!status received 05-10-02
!priority Medium
!difficulty Hard
!subject Queue containers
!summary
Add task-safe queues to the set of containers.
!problem
The Ada 2005 containers only allow a single task to access them at a time,
leaving it to the users to manage access by multiple tasks. There is a
need for task-safe containers.
!proposal
Add a synchronized interface and four implementations of task-safe queues. We
have both regular queues and priority queues. There are bounded and unbounded
versions of both kinds of queues.
In addition, add another interface for indefinite queues, with two
implementations: unbounded regular queues and priority queues.
!wording
A.18.27 The Generic Package Containers.Synchronized_Queue_Interfaces
The language-defined generic package Containers.Synchronized_Queue_Interfaces
provides interface type Queue, and a set of operations for that type.
Interface Queue specifies a first-in, first-out queue.
Static Semantics
The generic library package Containers.Synchronized_Queue_Interfaces has the
following declaration:
generic
type Element_Type is private;
package Ada.Containers.Synchronized_Queue_Interfaces is
pragma Pure(Synchronized_Queue_Interfaces);
type Queue is synchronized interface;
procedure Enqueue
(Container : in out Queue;
New_Item : in Element_Type) is abstract;
pragma Implemented (Enqueue, By_Entry);
procedure Dequeue
(Container : in out Queue;
Element : out Element_Type) is abstract;
pragma Implemented (Dequeue, By_Entry);
function Current_Use (Container : Queue) return Count_Type is abstract;
function Peak_Use (Container : Queue) return Count_Type is abstract;
end Ada.Containers.Synchronized_Queue_Interfaces;
procedure Enqueue
(Container : in out Queue;
New_Item : in Element_Type) is abstract;
A queue type that implements this interface may have a bounded capacity. If
the queue object has a bounded capacity, and the number of existing elements
equals the capacity, then Enqueue blocks until storage becomes available;
otherwise Enqueue does not block. In any case, it then copies New_Item onto the
queue.
procedure Dequeue
(Container : in out Queue;
Element : out Element_Type) is abstract;
If the queue is empty, then Dequeue blocks until an item becomes
available. In any case, it then returns a copy of the element at the head of the
queue, and removes it from the container.
function Current_Use (Container : Queue) return Count_Type is abstract;
Returns the number of elements currently in the queue.
function Peak_Use (Container : Queue) return Count_Type is abstract;
Returns the maximum number of elements that have been in the queue at
any one time.
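As an illustration, an instantiation of this interface package for Integer
elements could look like the sketch below (the instance name is purely
illustrative):

   with Ada.Containers.Synchronized_Queue_Interfaces;

   package Integer_Queue_Interfaces is
      new Ada.Containers.Synchronized_Queue_Interfaces (Element_Type => Integer);

An instance such as this is what the concrete queue generics of A.18.28
through A.18.31 take as their Queue_Interfaces formal parameter.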
A.18.28 The Generic Package Containers.Unbounded_Synchronized_Queues
Static Semantics
The language-defined generic package Containers.Unbounded_Synchronized_Queues
provides type Queue, which implements the interface type
Containers.Synchronized_Queue_Interfaces.Queue.
with System;
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
with package Queue_Interfaces is new Ada.Containers.Synchronized_Queue_Interfaces (<>);
Default_Ceiling: System.Any_Priority := System.Priority'Last;
package Ada.Containers.Unbounded_Synchronized_Queues is
pragma Preelaborate(Unbounded_Synchronized_Queues);
protected type Queue (Ceiling: System.Any_Priority := Default_Ceiling) is
new Queue_Interfaces.Queue with
pragma Priority(Ceiling);
overriding
entry Enqueue (New_Item: in Queue_Interfaces.Element_Type);
overriding
entry Dequeue (Element: out Queue_Interfaces.Element_Type);
overriding
function Current_Use return Count_Type;
overriding
function Peak_Use return Count_Type;
private
... -- not specified by the language
end Queue;
private
... -- not specified by the language
end Ada.Containers.Unbounded_Synchronized_Queues;
The type Queue is used to represent task-safe queues.
The capacity for instances of type Queue is unbounded.
AARM Ramification: Enqueue never blocks; if more storage is needed for a new
element, it is allocated dynamically. We don't need to explicitly specify that
Queue needs finalization, because it is visibly protected.
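A minimal usage sketch (all names here are purely illustrative): an unbounded
task-safe queue of Integers, instantiated and used directly:

   with Ada.Containers.Synchronized_Queue_Interfaces;
   with Ada.Containers.Unbounded_Synchronized_Queues;

   procedure Unbounded_Queue_Demo is
      package Int_Interfaces is
         new Ada.Containers.Synchronized_Queue_Interfaces (Integer);
      package Int_Queues is
         new Ada.Containers.Unbounded_Synchronized_Queues
            (Queue_Interfaces => Int_Interfaces);

      Q : Int_Queues.Queue;  -- Ceiling defaults to Default_Ceiling
      X : Integer;
   begin
      Q.Enqueue (42);  -- never blocks; storage is allocated as needed
      Q.Dequeue (X);   -- blocks until an element is available (here, at once)
   end Unbounded_Queue_Demo;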
A.18.29 The Generic Package Containers.Bounded_Synchronized_Queues
Static Semantics
The language-defined generic package Containers.Bounded_Synchronized_Queues
provides type Queue, which implements the interface type
Containers.Synchronized_Queue_Interfaces.Queue.
with System;
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
with package Queue_Interfaces is new Ada.Containers.Synchronized_Queue_Interfaces (<>);
Default_Capacity : Count_Type;
Default_Ceiling: System.Any_Priority := System.Priority'Last;
package Ada.Containers.Bounded_Synchronized_Queues is
pragma Preelaborate(Bounded_Synchronized_Queues);
protected type Queue
(Capacity : Count_Type := Default_Capacity;
Ceiling: System.Any_Priority := Default_Ceiling) is
new Queue_Interfaces.Queue with
pragma Priority(Ceiling);
overriding
entry Enqueue (New_Item: in Queue_Interfaces.Element_Type);
overriding
entry Dequeue (Element: out Queue_Interfaces.Element_Type);
overriding
function Current_Use return Count_Type;
overriding
function Peak_Use return Count_Type;
private
... -- not specified by the language
end Queue;
private
... -- not specified by the language
end Ada.Containers.Bounded_Synchronized_Queues;
The semantics are the same as for Unbounded_Synchronized_Queues,
except:
- The capacity for instances of type Queue is bounded and specified by the
discriminant Capacity.
AARM Ramification: Since this type has a bounded capacity, Enqueue might block
if the queue is full.
Implementation Advice
Bounded queue objects should be implemented without implicit pointers or dynamic
allocation.
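A minimal bounded sketch (names purely illustrative); Default_Capacity
supplies the default for the Capacity discriminant, which individual queue
objects can override:

   with System;
   with Ada.Containers.Synchronized_Queue_Interfaces;
   with Ada.Containers.Bounded_Synchronized_Queues;

   procedure Bounded_Queue_Demo is
      package Int_Interfaces is
         new Ada.Containers.Synchronized_Queue_Interfaces (Integer);
      package Int_Queues is
         new Ada.Containers.Bounded_Synchronized_Queues
            (Queue_Interfaces => Int_Interfaces,
             Default_Capacity => 10);

      Small_Q : Int_Queues.Queue;  -- Capacity = 10, Ceiling = Default_Ceiling
      Big_Q   : Int_Queues.Queue
         (Capacity => 1_000, Ceiling => System.Default_Priority);
   begin
      Small_Q.Enqueue (1);  -- blocks only if the queue already holds 10 elements
   end Bounded_Queue_Demo;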
A.18.30 The Generic Package Containers.Unbounded_Priority_Queues
Static Semantics
The language-defined generic package Containers.Unbounded_Priority_Queues
provides type Queue, which implements the interface type
Containers.Synchronized_Queue_Interfaces.Queue.
with System;
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
with package Queue_Interfaces is new Ada.Containers.Synchronized_Queue_Interfaces (<>);
type Queue_Priority is private;
with function Get_Priority
(Element: Queue_Interfaces.Element_Type) return Queue_Priority is <>;
with function Before
(Left, Right : Queue_Priority) return Boolean is <>;
Default_Ceiling: System.Any_Priority := System.Priority'Last;
package Ada.Containers.Unbounded_Priority_Queues is
pragma Preelaborate(Unbounded_Priority_Queues);
protected type Queue
(Ceiling: System.Any_Priority := Default_Ceiling) is
new Queue_Interfaces.Queue with
pragma Priority(Ceiling);
overriding
entry Enqueue (New_Item: in Queue_Interfaces.Element_Type);
overriding
entry Dequeue (Element: out Queue_Interfaces.Element_Type);
not overriding
entry Dequeue_Only_High_Priority
(Low_Priority : in Queue_Priority;
Element : out Queue_Interfaces.Element_Type);
overriding
function Current_Use return Count_Type;
overriding
function Peak_Use return Count_Type;
private
... -- not specified by the language
end Queue;
private
... -- not specified by the language
end Ada.Containers.Unbounded_Priority_Queues;
The type Queue is used to represent task-safe priority queues.
The capacity for instances of type Queue is unbounded.
Two elements E1 and E2 are equivalent if Before(Get_Priority(E1),
Get_Priority(E2)) and Before(Get_Priority(E2), Get_Priority(E1)) both return
False.
Enqueue inserts an item according to the order specified by the Before function
on the result of Get_Priority on the elements. If the queue already contains
elements equivalent to New_Item, then New_Item is inserted after the existing
equivalent elements.
AARM Ramification: Enqueue never blocks; if more storage is needed for a new
element, it is allocated dynamically. We don't need to explicitly specify that
Queue needs finalization, because it is visibly protected.
Dequeue_Only_High_Priority is the same as Dequeue, except that it
blocks until the element E at the head of the queue satisfies
Before(Get_Priority(E), Low_Priority).
The actual functions for Get_Priority and Before are expected to return the
same value each time they are called with the same actuals, and should not
modify their actuals. Before should define a strict weak ordering relationship
(see A.18). If the actual functions behave in some other manner, the behavior
of Unbounded_Priority_Queues is unspecified.
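A minimal priority-queue sketch (the Job type and all other names are purely
illustrative):

   with Ada.Containers.Synchronized_Queue_Interfaces;
   with Ada.Containers.Unbounded_Priority_Queues;

   procedure Priority_Queue_Demo is
      type Urgency is (Low, Normal, High);
      type Job is record
         Id    : Natural;
         Level : Urgency;
      end record;

      package Job_Interfaces is
         new Ada.Containers.Synchronized_Queue_Interfaces (Job);

      function Get_Priority (J : Job) return Urgency is (J.Level);
      function Before (L, R : Urgency) return Boolean is (L > R);
      --  Higher urgency is dequeued first; equal urgencies stay in FIFO order.

      package Job_Queues is
         new Ada.Containers.Unbounded_Priority_Queues
            (Queue_Interfaces => Job_Interfaces,
             Queue_Priority   => Urgency,
             Get_Priority     => Get_Priority,
             Before           => Before);

      Q : Job_Queues.Queue;
      J : Job;
   begin
      Q.Enqueue ((Id => 1, Level => Low));
      Q.Enqueue ((Id => 2, Level => High));
      Q.Dequeue (J);  -- delivers Id = 2, since High is ordered before Low
      --  A call such as Q.Dequeue_Only_High_Priority (Normal, J) would now
      --  block: the remaining head element has urgency Low, which does not
      --  satisfy Before (Get_Priority (E), Normal).
   end Priority_Queue_Demo;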
A.18.31 The Generic Package Containers.Bounded_Priority_Queues
Static Semantics
The language-defined generic package Containers.Bounded_Priority_Queues provides
type Queue, which implements the interface type
Containers.Synchronized_Queue_Interfaces.Queue.
with System;
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
with package Queue_Interfaces is new Ada.Containers.Synchronized_Queue_Interfaces (<>);
type Queue_Priority is private;
with function Get_Priority
(Element: Queue_Interfaces.Element_Type) return Queue_Priority is <>;
with function Before
(Left, Right : Queue_Priority) return Boolean is <>;
Default_Capacity : Count_Type;
Default_Ceiling: System.Any_Priority := System.Priority'Last;
package Ada.Containers.Bounded_Priority_Queues is
pragma Preelaborate(Bounded_Priority_Queues);
protected type Queue
(Capacity : Count_Type := Default_Capacity;
Ceiling: System.Any_Priority := Default_Ceiling) is
new Queue_Interfaces.Queue with
pragma Priority(Ceiling);
overriding
entry Enqueue (New_Item: in Queue_Interfaces.Element_Type);
overriding
entry Dequeue (Element: out Queue_Interfaces.Element_Type);
not overriding
entry Dequeue_Only_High_Priority
(Low_Priority : in Queue_Priority;
Element : out Queue_Interfaces.Element_Type);
overriding
function Current_Use return Count_Type;
overriding
function Peak_Use return Count_Type;
private
... -- not specified by the language
end Queue;
private
... -- not specified by the language
end Ada.Containers.Bounded_Priority_Queues;
The semantics are the same as for Unbounded_Priority_Queues, except:
- The capacity for instances of type Queue is bounded and specified by the
discriminant Capacity.
AARM Ramification: Since this type has a bounded capacity, Enqueue might block
if the queue is full.
Implementation Advice
Bounded priority queue objects should be implemented without implicit
pointers or dynamic allocation.
A.18.32 Indefinite Synchronized Queues
There are three generic packages
Containers.Indefinite_Synchronized_Queue_Interfaces,
Containers.Indefinite_Unbounded_Synchronized_Queues, and
Containers.Indefinite_Unbounded_Priority_Queues. These are identical to
Containers.Synchronized_Queue_Interfaces,
Containers.Unbounded_Synchronized_Queues, and
Containers.Unbounded_Priority_Queues, respectively, except that the
generic formal Element_Type is indefinite.
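A minimal sketch using the indefinite forms (names purely illustrative); the
element type here is String, so queued values can have different lengths:

   with Ada.Containers.Indefinite_Synchronized_Queue_Interfaces;
   with Ada.Containers.Indefinite_Unbounded_Synchronized_Queues;

   procedure String_Queue_Demo is
      package String_Interfaces is
         new Ada.Containers.Indefinite_Synchronized_Queue_Interfaces (String);
      package String_Queues is
         new Ada.Containers.Indefinite_Unbounded_Synchronized_Queues
            (Queue_Interfaces => String_Interfaces);

      Q : String_Queues.Queue;
   begin
      Q.Enqueue ("hello");
      Q.Enqueue ("a longer message");
      declare
         S : String (1 .. 5);
      begin
         Q.Dequeue (S);  -- delivers "hello"; the actual passed to Dequeue must
                         --  be able to hold the dequeued value (here, length 5)
      end;
   end String_Queue_Demo;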
!discussion
We considered making protected forms of all of the existing containers, but
determined that there are various problems with such an approach:
(1) What kind of protection is needed (multi-read, single-write vs. multi-anything)?
(2) Some of the operations (especially those with callbacks) could potentially
deadlock. How do we handle those cases (allow a single task to do anything while
it holds the lock, vs. always locking)?
(3) Can we actually do better performance-wise than the user wrapping the
operations that they need? Note that the existing containers are not potentially
blocking, so they can be called from protected operations. A well-designed,
application-specific protected type will be a much better abstraction than
anything we could provide, and probably would not have as large a performance cost.
Thus, we decided to provide task-safe queue abstractions (which are commonly
needed) and let the users create any others that they need.
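To make point (3) above concrete, a minimal sketch (purely illustrative, and
only one of many possible designs) of the kind of wrapper a user can already
write by putting a protected type around the list container:

   with Ada.Containers.Doubly_Linked_Lists;

   generic
      type Element_Type is private;
   package User_Protected_Queues is
      package Lists is new Ada.Containers.Doubly_Linked_Lists (Element_Type);

      protected type Queue is
         procedure Enqueue (New_Item : in Element_Type);
         entry Dequeue (Element : out Element_Type);  -- blocks while empty
      private
         Data : Lists.List;
      end Queue;
   end User_Protected_Queues;

   package body User_Protected_Queues is
      protected body Queue is
         procedure Enqueue (New_Item : in Element_Type) is
         begin
            Data.Append (New_Item);
         end Enqueue;

         entry Dequeue (Element : out Element_Type)
            when not Data.Is_Empty is
         begin
            Element := Data.First_Element;
            Data.Delete_First;
         end Dequeue;
      end Queue;
   end User_Protected_Queues;

Such a wrapper gives the same basic blocking Enqueue/Dequeue behavior, while
leaving the user in control of the locking and of exactly which operations
are provided.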
!example
declare
--
package Widgets is
type Widget is new Integer; --
...
end Widgets;
package Widget_Queue_Types is
new Ada.Containers.Synchronized_Queue_Interfaces (Widgets.Widget);
--
--
--
--
task type Assembly_Line_Task
(Q : access Widget_Queue_Types.Queue'Class := null;
N : Natural := 0) --
is end;
function New_Assembly_Line_Task
(Q : not null access Widget_Queue_Types.Queue'Class;
N : Natural) return Assembly_Line_Task
is
begin
return T : Assembly_Line_Task (Q, N);
end New_Assembly_Line_Task;
type Assembly_Line_Task_Array is
array (Positive range <>) of Assembly_Line_Task;
task type Delivery_Service_Task
(Q : not null access Widget_Queue_Types.Queue'Class;
N : Natural) --
is end;
--
--
--
--
--
task body Assembly_Line_Task is
begin
for I in 1 .. N loop
delay 0.5;
Q.Enqueue (Widget (I));
end loop;
Q.Enqueue (Widget'(0)); --
end Assembly_Line_Task;
--
--
--
--
--
task body Delivery_Service_Task is
W : Widget;
Done : Natural := 0;
begin
loop
Q.Dequeue (W);
if W > 0 then
Put_Line ("Delivering widget" & Widget'Image (W));
else
Done := Done + 1;
exit when Done >= N;
end if;
end loop;
end Delivery_Service_Task;
--
--
--
package Widget_Queues is
new Ada.Containers.Unbounded_Synchronized_Queues
(Queue_Interfaces => Widget_Queue_Types);
Q : aliased Widget_Queues.Queue;
--
--
Assembly_Lines : Assembly_Line_Task_Array (1 .. 5) :=
(others => New_Assembly_Line_Task (Q'Access, 10));
Delivery_Service : Delivery_Service_Task (Q'Access, Assembly_Lines'Length);
begin --
null;
end;
!corrigendum A.18.27
The language-defined generic package Containers.Synchronized_Queue_Interfaces
provides interface type Queue, and a set of operations for that type.
Interface Queue specifies a first-in, first-out queue.
Static Semantics
The generic library package Containers.Synchronized_Queue_Interfaces has the
following declaration:
generic
type Element_Type is private;
package Ada.Containers.Synchronized_Queue_Interfaces is
pragma Pure(Synchronized_Queue_Interfaces);
type Queue is synchronized interface;
procedure Enqueue
(Container : in out Queue;
New_Item : in Element_Type) is abstract;
pragma Implemented (Enqueue, By_Entry);
procedure Dequeue
(Container : in out Queue;
Element : out Element_Type) is abstract;
pragma Implemented (Dequeue, By_Entry);
function Current_Use (Container : Queue) return Count_Type is abstract;
function Peak_Use (Container : Queue) return Count_Type is abstract;
end Ada.Containers.Synchronized_Queue_Interfaces;
procedure Enqueue
(Container : in out Queue;
New_Item : in Element_Type) is abstract;
A queue type that implements this interface may have a bounded capacity. If
the queue object has a bounded capacity, and the number of existing elements
equals the capacity, then Enqueue blocks until storage becomes available;
otherwise Enqueue does not block. In any case, it then copies New_Item onto the
queue.
procedure Dequeue
(Container : in out Queue;
Element : out Element_Type) is abstract;
If the queue is empty, then Dequeue blocks until an item becomes
available. In any case, it then returns a copy of the element at the head of the
queue, and removes it from the container.
function Current_Use (Container : Queue) return Count_Type is abstract;
Returns the number of elements currently in the queue.
function Peak_Use (Container : Queue) return Count_Type is abstract;
Returns the maximum number of elements that have been in the queue at
any one time.
!corrigendum A.18.28
Insert new clause:
Static Semantics
The language-defined generic package Containers.Unbounded_Synchronized_Queues
provides type Queue, which implements the interface type
Containers.Synchronized_Queue_Interfaces.Queue.
with System;
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
with package Queue_Interfaces is new Ada.Containers.Synchronized_Queue_Interfaces (<>);
Default_Ceiling: System.Any_Priority := System.Priority'Last;
package Ada.Containers.Unbounded_Synchronized_Queues is
pragma Preelaborate(Unbounded_Synchronized_Queues);
protected type Queue (Ceiling: System.Any_Priority := Default_Ceiling) is
new Queue_Interfaces.Queue with
pragma Priority(Ceiling);
overriding
entry Enqueue (New_Item: in Queue_Interfaces.Element_Type);
overriding
entry Dequeue (Element: out Queue_Interfaces.Element_Type);
overriding
function Current_Use return Count_Type;
overriding
function Peak_Use return Count_Type;
private
... -- not specified by the language
end Queue;
private
... -- not specified by the language
end Ada.Containers.Unbounded_Synchronized_Queues;
The type Queue is used to represent task-safe queues.
The capacity for instances of type Queue is unbounded.
!corrigendum A.18.29
Insert new clause:
Static Semantics
The language-defined generic package Containers.Bounded_Synchronized_Queues
provides type Queue, which implements the interface type
Containers.Synchronized_Queue_Interfaces.Queue.
with System;
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
with package Queue_Interfaces is new Ada.Containers.Synchronized_Queue_Interfaces (<>);
Default_Capacity : Count_Type;
Default_Ceiling: System.Any_Priority := System.Priority'Last;
package Ada.Containers.Bounded_Synchronized_Queues is
pragma Preelaborate(Bounded_Synchronized_Queues);
protected type Queue
(Capacity : Count_Type := Default_Capacity;
Ceiling: System.Any_Priority := Default_Ceiling) is
new Queue_Interfaces.Queue with
pragma Priority(Ceiling);
overriding
entry Enqueue (New_Item: in Queue_Interfaces.Element_Type);
overriding
entry Dequeue (Element: out Queue_Interfaces.Element_Type);
overriding
function Current_Use return Count_Type;
overriding
function Peak_Use return Count_Type;
private
... -- not specified by the language
end Queue;
private
... -- not specified by the language
end Ada.Containers.Bounded_Synchronized_Queues;
The semantics are the same as for Unbounded_Synchronized_Queues,
except:
- The capacity for instances of type Queue is bounded and specified by
the discriminant Capacity.
Implementation Advice
Bounded queue objects should be implemented without implicit pointers or dynamic
allocation.
!corrigendum A.18.30
Insert new clause:
Static Semantics
The language-defined generic package Containers.Unbounded_Priority_Queues
provides type Queue, which implements the interface type
Containers.Synchronized_Queue_Interfaces.Queue.
with System;
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
with package Queue_Interfaces is new Ada.Containers.Synchronized_Queue_Interfaces (<>);
type Queue_Priority is private;
with function Get_Priority
(Element: Queue_Interfaces.Element_Type) return Queue_Priority is <>;
with function Before
(Left, Right : Queue_Priority) return Boolean is <>;
Default_Ceiling: System.Any_Priority := System.Priority'Last;
package Ada.Containers.Unbounded_Priority_Queues is
pragma Preelaborate(Unbounded_Priority_Queues);
protected type Queue
(Ceiling: System.Any_Priority := Default_Ceiling) is
new Queue_Interfaces.Queue with
pragma Priority(Ceiling);
overriding
entry Enqueue (New_Item: in Queue_Interfaces.Element_Type);
overriding
entry Dequeue (Element: out Queue_Interfaces.Element_Type);
not overriding
entry Dequeue_Only_High_Priority
(Low_Priority : in Queue_Priority;
Element : out Queue_Interfaces.Element_Type);
overriding
function Current_Use return Count_Type;
overriding
function Peak_Use return Count_Type;
private
... -- not specified by the language
end Queue;
private
... -- not specified by the language
end Ada.Containers.Unbounded_Priority_Queues;
The type Queue is used to represent task-safe priority queues.
The capacity for instances of type Queue is unbounded.
Two elements E1 and E2 are equivalent if Before(Get_Priority(E1),
Get_Priority(E2)) and Before(Get_Priority(E2), Get_Priority(E1))
both return False.
Enqueue inserts an item according to the order specified by the Before function
on the result of Get_Priority on the elements. If the queue already contains
elements equivalent to New_Item, then New_Item is inserted after the existing
equivalent elements.
Dequeue_Only_High_Priority is the same as Dequeue, except that it
blocks until the element E at the head of the queue satisfies
Before(Get_Priority(E), Low_Priority).
The actual functions for Get_Priority and Before are expected to return the
same value each time they are called with the same actuals, and should not
modify their actuals. Before should define a strict weak ordering relationship
(see A.18). If the actual functions behave in some other manner, the behavior
of Unbounded_Priority_Queues is unspecified.
!corrigendum A.18.31
Insert new clause:
Static Semantics
The language-defined generic package Containers.Bounded_Priority_Queues provides
type Queue, which implements the interface type
Containers.Synchronized_Queue_Interfaces.Queue.
with System;
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
with package Queue_Interfaces is new Ada.Containers.Synchronized_Queue_Interfaces (<>);
type Queue_Priority is private;
with function Get_Priority
(Element: Queue_Interfaces.Element_Type) return Queue_Priority is <>;
with function Before
(Left, Right : Queue_Priority) return Boolean is <>;
Default_Capacity : Count_Type;
Default_Ceiling: System.Any_Priority := System.Priority'Last;
package Ada.Containers.Bounded_Priority_Queues is
pragma Preelaborate(Bounded_Priority_Queues);
protected type Queue
(Capacity : Count_Type := Default_Capacity;
Ceiling: System.Any_Priority := Default_Ceiling) is
new Queue_Interfaces.Queue with
pragma Priority(Ceiling);
overriding
entry Enqueue (New_Item: in Queue_Interfaces.Element_Type);
overriding
entry Dequeue (Element: out Queue_Interfaces.Element_Type);
not overriding
entry Dequeue_Only_High_Priority
(Low_Priority : in Queue_Priority;
Element : out Queue_Interfaces.Element_Type);
overriding
function Current_Use return Count_Type;
overriding
function Peak_Use return Count_Type;
private
... -- not specified by the language
end Queue;
private
... -- not specified by the language
end Ada.Containers.Bounded_Priority_Queues;
The semantics are the same as for Unbounded_Priority_Queues, except:
- The capacity for instances of type Queue is bounded and specified by the
discriminant Capacity.
Implementation Advice
Bounded priority queue objects should be implemented without implicit
pointers or dynamic allocation.
!corrigendum A.18.32
Insert new clause:
There are three generic packages
Containers.Indefinite_Synchronized_Queue_Interfaces,
Containers.Indefinite_Unbounded_Synchronized_Queues, and
Containers.Indefinite_Unbounded_Priority_Queues. These are identical to
Containers.Synchronized_Queue_Interfaces,
Containers.Unbounded_Synchronized_Queues, and
Containers.Unbounded_Priority_Queues, respectively, except that the
generic formal Element_Type is indefinite.
!ACATS test
ACATS Tests are needed for these new packages.
!appendix
From: Martin Dowie
Date: Tuesday, July 28, 2009 2:53 AM
1. For the priority queue, what is the expected behaviour of trying to add 2
identical elements to the queue? It would seem that the 2nd would not be
added (as "Before (x, x) = False"). I appreciate that to get both messages
added would probably require "multiset" support - are "Multi" variants of the
existing containers planned?
2. For the bounded variants, is the intent that they should not use dynamic
memory (same as the new bounded containers)? If so, should there be some
words added to state this?
3. The queues are all specified to be FIFO - could LIFO also be added relatively
cheaply? Not as common but still useful.
****************************************************************
From: Randy Brukardt
Date: Tuesday, August 11, 2009 1:42 PM
Sorry about the long delay on posting this one; it got blocked by a useless
filter in the mailing list software (that unfortunately can't be turned off) and
I was on vacation.
> 1. For the priority queue, what is the expected behaviour of trying to
> add 2 identical elements to the queue? It would seem that the 2nd
> would not be added (as "Before (x, x) = False"). I appreciate that to
> get both messages added would probably require "multiset" support -
> are "Multi"
> variants of the existing containers planned?
I would have expected the reverse: that two identical elements get queued
(probably in an unspecified order). That's how the queues in my mail filter work
(the only time I've used priority queues in practice) [They're actually more
like delay queues, as the "priority" is the retry time. Surely you don't want to
lose a message just because it happens to have the same retry time as another.]
In any case, this will need to be clarified.
> 2. For the bounded variants, is the intent that they should not use
> dynamic memory (same as the new bounded containers)?
> If so, should there be some words added to state this?
That is my understanding, and yes, of course, they will need all of the
appropriate wording that AI05-0001-1 uses for bounded forms.
> 3. The queues are all specified to be FIFO - could LIFO also be added
> relatively cheaply? Not as common but still useful.
Not sure it is worth it. We're really only trying to provide the most basic and
common of protected containers because generally the "protection" needed is
something that depends on the entire algorithm; naively dropping in protected
containers could easily generate race or deadlock conditions.
In any case, "LIFO" is a stack, which is generally thought of as a totally
different data structure, with different names for the operations. Trying to use
queue terminology for it would be pretty unnatural. So there wouldn't be much if
any reuse potential here.
****************************************************************
From: Simon Wright
Date: Tuesday, August 11, 2009 3:45 PM
>> 1. For the priority queue, what is the expected behaviour of trying
>> to add 2 identical elements to the queue? It would seem that the 2nd
>> would not be added (as "Before (x, x) = False"). I appreciate that to
>> get both messages added would probably require "multiset" support -
>> are "Multi"
>> variants of the existing containers planned?
>
> I would have expected the reverse: that two identical elements get
> queued (probably in an unspecified order).
As a data point here, the BC ordered queue queues 'identical' items in FIFO
order (preferring defined behaviour). Whether an unspecified order is
satisfactory must, I think, depend on the application.
****************************************************************
From: Jeffrey R. Carter
Date: Tuesday, August 11, 2009 4:12 PM
> I would have expected the reverse: that two identical elements get
> queued (probably in an unspecified order). That's how the queues in my
> mail filter work (the only time I've used priority queues in practice)
> [They're actually more like delay queues, as the "priority" is the
> retry time. Surely you don't want to loose a message just because it
> happens to have the same retry time as another.]
If they're specified as FIFO, the priority queue must really be
FIFO-within-priorities, so it must queue the 2nd after the 1st.
****************************************************************
From: Martin Dowie
Date: Monday, August 24, 2009 7:03 AM
Thanks for all that - making more sense now...
Would it be useful to add a "Peek" operation too? One that returned the highest
priority element in the queue without actually removing it from the queue?
****************************************************************
From: Matthew Heaney
Date: Thursday, October 8, 2009 12:28 AM
> 1. For the priority queue, what is the expected behaviour of trying to
> add 2 identical elements to the queue? It would seem that the 2nd
> would not be added (as "Before (x, x) = False"). I appreciate that to
> get both messages added would probably require "multiset" support -
> are "Multi"
> variants of the existing containers planned?
There are two separate issues here.
The first issue is the matter of the priority queue. Yes, you should be able to
add multiple items that are equivalent. And yes, new elements that are
equivalent to existing elements should probably go after existing elements.
(But this will have to be decided at the ARG meeting next month.)
The second issue is multiset support. This is orthogonal to priority queues
(because the language doesn't specify how the queues are implemented). On the
specific matter of multiset support, I lobbied last year for their inclusion in
the language standard, but wasn't able to make my case with sufficient cogency,
so they didn't make the cut.
However, an unbounded multiset is part of the GNAT distribution, so if you're
using GNAT then you have one already. I am in the process of integrating the
bounded forms into GNAT, and so GNAT will have a bounded multiset too.
For now I have implemented the bounded priority queues using the multisets, but
eventually I'll probably replace that implementation with a proper heap.
> 2. For the bounded variants, is the intent that they should not use
> dynamic memory (same as the new bounded containers)? If so, should
> there be some words added to state this?
Right, the bounded forms do not use dynamic memory allocation.
> 3. The queues are all specified to be FIFO - could LIFO also be added
> relatively cheaply? Not as common but still useful.
That's just a stack. Use a vector or a list container, wrapped in a protected
object.
****************************************************************
From: Randy Brukardt
Date: Monday, April 5, 2010 8:29 PM
I recently had a discussion with Martin Dowie. He was concerned that the lack of
a Peek operation in the priority queues makes it difficult to implement some
common situations. In particular, cases where the reader wants to defer
processing until an item of sufficient priority is available. (Obviously, this
case only applies to priority queues.)
Using Peek requires a busy-wait loop of some sort, which is obviously
suboptimal. In addition, Peek is very hard to use safely, as in general you can
assume nothing at all about the item retrieved by a Dequeue after executing a
Peek. (Another task may have put a higher-priority item on the queue, or another
task may have removed the item at the head of the queue, between the Peek and a
Dequeue.) As such, it's like leaving a loose razor blade in a toolchest.
Moreover, in the bounded queue case (without any dynamic memory management) it
is not any more efficient than doing a Dequeue (both require copying the
element). So it probably is a wash in terms of efficiency and safer to do a
Dequeue followed by a Enqueue if you don't want to process the element (the
element will end up in the same relative place on the queue since it is sorted
by priority). (Enqueue might be more expensive if the queue has a lot of
elements on it or the Before function is unusually expensive, but in normal
circumstances it costs about the same as a Peek would, one element copy.)
I had a similar problem in my spam filter, for which Peek would not have worked
(there are multiple readers). The "priority' is the time when an item ought to
be processed (most are ready to process immediately, but after a failed
operation you have to wait a random amount of time and try again). I used a
combination of timed entry calls and a high-priority-only dequeue operations.
For the AI05-0159-1 queues, this routine would look like:
procedure Dequeue_Only_High_Priority
(Container : in out Queue;
Minimum_High_Priority : Element_Type;
Element : out Element_Type);
This routine would block if there wasn't an element with the minimum priority
available.
This was used like (I've left out some parts):
type Queued_Item is tagged private;
type Queued_Item_Ptr is access all Queue_Node'Class;
type Queue_Node is record
Item : Queued_Item_Ptr;
Process_After : Ada.Calendar.Time; -- Use like "delay until".
end record;
function Before (Left, Right : Queue_Node) return Boolean is
use type Ada.Calendar.Time;
begin
return Left.Process_After < Right.Process_After;
end Before;
package Message_Queue is new Ada.Containers.Unbounded_Priority_Queues
(Queue_Node, Before => Before);
---
declare
Node : Queue_Node;
begin
loop
select
accept Send_Queue.Dequeue_Only_High_Priority (Element => Node,
Minimum_High_Priority => (Item => null, After => Ada.Calendar.Clock));
Send (Node);
or
accept Shutdown;
or
delay 10.0;
end select;
end loop;
end;
Note that I had to use the timeout because the "high-priority" value is based on
the clock, and thus has to be reevaluated occasionally. That wouldn't happen if
the priority didn't involve an exterior actor (and barriers that depend on
globals are bad practice for this reason). Since we don't need to do retries on
any particular schedule, this is a non-issue in this application. Also note that
I also used that code to handle shutdown requests, since blocking forever would
prevent shutdowns from being recognized.
---
Anyway, the lack of such an operation would make using the AI05-0159-1 priority
queues suboptimal for this use. Without the Dequeue_Only_High_Priority
operation, I would have to use a faster loop (so that newly queued operations
are checked reasonably quickly). There also would be a reordering of nodes that
have the same sending time (not a major problem in this use, but not optimal).
This would look something like:
declare
Node : Queue_Node;
begin
loop
select
accept Send_Queue.Dequeue (Element => Node);
if Node.Process_After < Ada.Calendar.Clock then
Send (Node);
else
Send_Queue.Enqueue (Element => Node);
end if;
or
accept Shutdown;
or
delay 1.0;
end select;
end loop;
end;
While this is not great, it might be good enough.
So, is this problem worth addressing? If so, do we want to add an operation like
this to the priority queues (only)? Or is there some other idea (not involving a
race condition)?
****************************************************************
From: Bob Duff
Date: Tuesday, April 6, 2010 12:43 PM
> I recently had a discussion with Martin Dowie. He was concerned that
> the lack of a Peek operation in the priority queues makes it difficult
> to implement some common situations. In particular, cases where the
> reader wants to defer processing until an item of sufficient priority is available.
> (Obviously, this case only applies to priority queues.)
I have mixed feelings about such race-condition-causing features.
If we have Peek, there should probably be a NOTE warning of the danger.
There is precedent: Function Current_Use in this package has the same property.
(By the way, there's no rationale for why we have Current_Use and Peak_Use --
seems like we ought to say something.) And Ada.Synchronous_Task_Control has
Current_State, with the mild AARM-only warning, "This state can change
immediately after the operation returns."
> procedure Dequeue_Only_High_Priority
> (Container : in out Queue;
> Minimum_High_Priority : Element_Type;
> Element : out Element_Type);
How about this:
procedure Conditional_Dequeue
(Container : in out Queue;
Element : out Element_Type;
Condition : function (Element_Type) return Boolean) is abstract;
pragma Implemented (Conditional_Dequeue, By_Entry); ?
where "function (Element_Type) return Boolean" of course stands for "not null
access function (Element: Element_Type) return Boolean".
It dequeues an element if and only if Condition returns True for the element to
be dequeued. Never blocks.
Hmm. I guess we'd need another Boolean 'out' param to indicate whether it
succeeded.
> select
> accept Send_Queue.Dequeue_Only_High_Priority (Element => Node,
> Minimum_High_Priority => (Item => null, After => Ada.Calendar.Clock));
That doesn't look right -- you can only "accept" your own entries.
****************************************************************
From: Martin Dowie
Date: Tuesday, April 6, 2010 1:12 PM
There are certainly loads of precedents, e.g. the attributes wrt tasking are
race conditions ('Callable, 'Count and 'Terminated).
Randy helpfully suggested alternatives without something akin to a 'peek' (e.g.
a 2 queue solution; 1 high priority, 1 low priority) but they wouldn't work when
using these in high-integrity systems - i.e. Ravenscar profile environments - as
they required 'select' statements.
I'm sure some version of 'peek' - either a blocking or a
non-blocking-with-success or blocking-for-period-with-success - would help in
such environments.
****************************************************************
From: Bob Duff
Date: Tuesday, April 6, 2010 2:36 PM
What do you think of my Conditional_Dequeue suggestion?
****************************************************************
From: Randy Brukardt
Date: Tuesday, April 6, 2010 3:05 PM
> What do you think of my Conditional_Dequeue suggestion?
It's better than Peek (much less trouble with race conditions), but I think it
has two negatives:
* The need to write a function when the condition is likely to be a simple
expression (Obj.After < Ada.Calendar.Clock in my example) is annoying;
* The lack of blocking semantics means almost all uses will need a busy-wait
loop.
Neither of these is critically important, but they do suggest that there might
be a better solution out there. For the first, the fact that we are talking
about priority queues means we already have an ordering, so it isn't clear that
there is any value to having a second one. For the second, I see the issue that
Ravenscar uses will necessarily have to use a busy-wait loop, but it is annoying
to force everyone to live by the limitations of Ravenscar. (Indeed, as with
SPARK, I would hope that we be able to move to larger subsets of the language
for verification purposes; we shouldn't be tied to these overly limiting subsets
forever.)
Peek itself is just too dangerous in normal use: it can only be used safely on a
priority queue if there is only a single peeking/dequeuing task AND the peeked
item is tested then immediately discarded (there can be no assumption that it
will be the one dequeued) AND getting a higher-priority item from Dequeue than
the one expected is OK. I'm sure there are applications that fit into that
profile, but there are many that don't (such as my "flock of senders" problem).
Moreover, there is no practical way to enforce safe use of Peek (since there is
no way to enforce the immediate discarding of the retrieved value). I don't
think it would even be possible for a static analysis tool to figure out Peek
misuse (other than the obvious: "ERROR: Use of Peek detected" :-).
So we need a better solution (or none, as we currently have).
****************************************************************
From: Bob Duff
Date: Tuesday, April 6, 2010 3:36 PM
> It's better than Peek (much less trouble with race conditions), but I
> think it has two negatives:
I just had a discussion with someone at AdaCore who pointed out another
negative:
Presumably the implementation of these queues is a protected type. You normally
want all operations of a protected type to have a short bounded time. Normally,
you can determine that by analyzing the protected body. But if the protected
body is calling my suggested Condition function, which is passed in by the
client, you need to do a more global analysis.
****************************************************************
From: Randy Brukardt
Date: Tuesday, April 6, 2010 3:48 PM
You could say that about any priority queue in general, however, as the ordering
function (Before in this case) is passed in and could take an unknown amount of
time. Moreover, the number of calls will depend (in some way) upon the number of
items in the queue. In practice, both of these functions would be short and
execute quickly, but there is a real concern if you need strong bounds. Of
course, Before is fundamental to the Queue, while your Condition function is
less so. Still, it seems that this is a problem with the entire idea of generic
priority queues, and not specific to your suggestion.
****************************************************************
From: Bob Duff
Date: Tuesday, April 6, 2010 3:54 PM
Good point.
****************************************************************
From: Martin Dowie
Date: Wednesday, April 7, 2010 3:24 AM
> What do you think of my Conditional_Dequeue suggestion?
It looks good to me!
A 'kitchen-sink' addition could be:
procedure Conditional_Dequeue
(Container : in out Queue;
Element : out Element_Type;
Success : out Boolean;
Condition : not null access function (Element : Element_Type) return Boolean;
Timeout : Duration) is abstract;
pragma Implemented (Conditional_Dequeue, By_Entry);
Where this blocks for up to a 'Timeout' period. That also mitigates Randy's 2nd
point ("The lack of blocking semantics means almost all uses will need a
busy-wait loop"). [Alternative: 'Timeout : Duration := 0.0' offers non-blocking
option.]
Randy's 1st point ("The need to write a function when the condition is likely to
be a simple expression (Obj.After < Ada.Calendar.Clock in my example) is
annoying") is true and a small 'lamba' expression would be great here! ;-) But
for (ordered) container use and, to a less extent, just using general ADTs with
primary keys, we end up writing those little functions _all_the_time_. Still
saves a heck of a time over rolling-your-own, so I know which I prefer.
I'm not sure I understand Randy's point "it is annoying to force everyone to
live by the limitations of Ravenscar" - this would be 'adding to' rather than
'replacing' any functionality, so it will be safely ignored by those who don't
need it (I ignore entire packages of the standard library quite happily ;-).
I appreciate there are concerns about how this could be used appropriately but
you could say the same for 'goto' or ATC or Unchecked_<anything> or overlaying
with 'Address or etc...the above seems a lot less (potentially) harmful to me...
****************************************************************
From: Randy Brukardt
Date: Wednesday, April 7, 2010 1:06 PM
...
> Where this blocks for up to a 'Timeout' period. That also mitigates
> Randy's 2nd point ("The lack of blocking semantics means almost all
> uses will need a busy-wait loop").
> [Alternative: 'Timeout : Duration := 0.0'
> offers non-blocking option.]
I thought of something like this, but I don't think it can be implemented with a
standard Ada protected object. You'd have to have a subprogram with a timed or
conditional entry call, but then it couldn't be an entry (so it couldn't be used
in a requeue or conditional entry call). And the internal conditional entry call
wouldn't be allowed by Ravenscar.
My original idea (not posted to Ada Comment, just in our e-mail discussion) had
the same problem.
...
> I'm not sure I understand Randy's point "it is annoying to force
> everyone to live by the limitations of Ravenscar" - this would be
> 'adding to' rather than 'replacing' any functionality, so it will be
> safely ignored by those who don't need it (I ignore entire packages of
> the standard library quite happily ;-).
I was referring to forcing everyone to use busy-wait loops because Ravenscar
programs have to be coded that way (no conditional entry calls). If we have a
non-blocking operation here (as Bob proposed), we're essentially forcing all
uses to use busy-wait loops. That's a lack of capability caused by Ravenscar
concerns, not *extra* capability (which I wouldn't have a concern about).
> I appreciate there are concerns about how this could be used
> appropriately but you could say the same for 'goto' or ATC or
> Unchecked_<anything> or overlaying with 'Address or etc...the above
> seems a lot less (potentially) harmful to me...
I agree, this is a lot less harmful than Peek.
It strikes me, though, that the separate Condition function introduces a new
race condition hazard that doesn't exist in the original ones. It's less serious
than the original one, but it still exists. Consider a numeric priority with a
Before function of Left.Priority > Right.Priority. Assume Condition is defined
as Element.Priority in 10..15. Consider the following two execution orders:
Task 2 inserts an item with priority 10.
Task 1 executes Conditional_Dequeue, receiving success and the item with priority 10.
Task 2 inserts an item with priority 20.
vs.
Task 2 inserts an item with priority 10.
Task 2 inserts an item with priority 20.
Task 1 executes Conditional_Dequeue, receiving failure.
There is a race between task 1 reading the queue and task 2's insertion of the
second item.
Of course, one could say that such a condition is bad (it IS bad), but I suspect
that in practice the problem could be much more subtle. It might be hard to
detect.
This implies to me that it is only safe to use the same comparison function for
Conditional_Dequeue. Which makes me think that my original proposal is
(slightly) better:
procedure Dequeue_Only_High_Priority
(Container : in out Queue;
Minimum_High_Priority : Element_Type;
Element : out Element_Type);
because this race condition cannot happen here (meaning that there doesn't need
to be any analysis to prove that).
If we wanted to insist on a non-blocking version, that would be:
procedure Dequeue_Only_High_Priority
(Container : in out Queue;
Minimum_High_Priority : Element_Type;
Element : out Element_Type;
Success : out Boolean);
It would be nice to have both. (That goes for regular Dequeue as well, a
non-blocking version would be usable in a Ravenscar context where a blocking one
might not be.) Of course, this all might be considered feeping creaturism. :-)
The need to construct a complete element here for just the priority portion is
indeed annoying, but there isn't any alternative since Before takes elements.
****************************************************************
From: Martin Dowie
Date: Wednesday, April 7, 2010 1:18 PM
Thankfully, we now have the 'others => <>' option to keep the construction of
the element to a minimum...
I don't mind the creeping...but I'm biased by having seen the need for this sort
of operation! ;-)
****************************************************************
From: Mark Lorenzen
Date: Wednesday, April 7, 2010 3:14 PM
> I thought of something like this, but I don't think it can be
> implemented with a standard Ada protected object. You'd have to have a
> subprogram with a timed or conditional entry call, but then it
> couldn't be an entry (so it couldn't be used in a requeue or
> conditional entry call). And the internal conditional entry call
> wouldn't be allowed by Ravenscar.
In my opinion, tasks and protected objects should not be a part of a re-usable
library.
I think it's better to provide functionality for registering a call-back
subprogram, which will be called in case an element (that fulfills a certain
condition) is put on the queue, and at the same time dequeue the element.
procedure Set_Conditional_Pass_Through_Handler
(Container : in out Queue;
Condition : in not null access function (Element : Element_Type) return Boolean;
Handler : in not null access procedure (Element : in Element_Type));
The idea is to enable elements to "pass through" the normal priority queue
mechanism. The element is never stored in the queue, but passed directly to the
user as parameter Element in subprogram Handler, if Condition is fulfilled. It
is up to the user to implement an optional time-out mechanism, using whatever
language constructs that are appropriate for the program in question.
****************************************************************
From: Randy Brukardt
Date: Wednesday, April 7, 2010 4:47 PM
> In my opinion, tasks and protected objects should not be a part of a
> re-usable library.
It has to be in this case. We've defined the queue as a synchronized interface,
in order to provide task-safe access. If it wasn't synchronized (or visibly
protected), it wouldn't be task-safe by the definitions in the Ada Standard. In
addition, the definition declares Enqueue and Dequeue to be blocking if items
cannot be added or removed respectively. Note that a synchronized interface can
only be implemented with a task or protected object.
Even if we decided to redefine everything without the visible synchronization
(and that by adding a bunch of wording to describe what operations have to be
supported -- which would be a major pain to do), we also have a principle that
the containers need to be implementable in standard Ada. That could only be done
by implementing the queues as a protected object of some sort (pragma Atomic is
only required to be supported on "small" types, so it could not be used on a
generic element type).
Thus, while I agree with your principle in general for the containers, the queue
containers exist *only* for the purpose of multitasking (it is trivial to create
a queue using the list container if task safety isn't needed, just rename
Append to be Enqueue, and define Dequeue to be First_Element followed by
Delete_First), and thus must be task-safe. And that implies that they have to be
implementable with a protected object.
> I think it's better to provide functionality for registering a
> call-back subprogram, which will be called in case an element (that
> fulfills a certain condition) is put on the queue, and at the same
> time dequeue the element.
>
> procedure Set_Conditional_Pass_Through_Handler
> (Container : in out Queue;
> Condition : in not null access function (Element : Element_Type) return Boolean;
> Handler : in not null access procedure (Element : in Element_Type));
>
> The idea is to enable elements to "pass through" the normal priority
> queue mechanism. The element is never stored in the queue, but passed
> directly to the user as parameter Element in subprogram Handler, if
> Condition is fulfilled. It is up to the user to implement an optional
> time-out mechanism, using whatever language constructs that are
> appropriate for the program in question.
This does not solve the use case in question, which is a delay queue (the
"priority" is the time to process the element). In that case, the element will
be on the queue for a while before it becomes high enough priority (the time is
reached) to remove it.
It also doesn't work when multiple high priority items are queued at about the
same time and there is a single consumer (this is similar to Martin's original
use case); the consumer will want to get one item and come back when it is
finished to get the other high priority item. With the scheme you are proposing,
you would have to have a second queue to store "extra" items in this case. If
that is going to be the case, you might as well have used two queues in the
first place. (Martin notes that such an extra queue would imply an extra task in
a Ravenscar environment, as conditional entry calls aren't allowed.)
****************************************************************
From: Mark Lorenzen
Date: Thursday, April 8, 2010 8:50 AM
> It has to be in this case. We've defined the queue as s synchronized
> interface, in order to provide task-safe access. If it wasn't
> synchronized (or visibly protected), it wouldn't be task-safe by the
> definitions in the Ada Standard. In addition, the definition declares
> Enqueue and Dequeue to be blocking if items cannot be added or removed
> respectively. Note that a synchronized interface can only be
> implemented with a task or protected object.
> ...
I recently wrote a re-useable module containing both POs and tasks. It worked as
expected in my test setup, until it had to be used in an actual application. The
problems were 1) the PO priorities were not configurable, 2) the task
priorities were not configurable, and 3) the task stack sizes were not
configurable.
So if the Synchronized_Queue makes use of either, it is important that the user
is able to define priorities and stack sizes e.g. by supplying them as formal
parameters. Unfortunately the formal generic parameters must then of course
cater for all kinds of implementations using PO(s), task(s) or PO(s) + task(s).
This is messy and may result in generic parameters not being used etc. A
solution is to demand that an implementation uses e.g. exactly one PO and
exactly one task if this is feasible and require that an implementation
documents the necessary timings and task behavior (cyclic, sporadic etc.) for
performing a schedulability analysis.
Another approach is to use suspension objects, but that is probably not an
acceptable solution to the Ravenscar users and suspension objects cannot
implement a synchronized interface.
I'm therefore still sceptical about the useability of the proposed generic
package.
****************************************************************
From: Randy Brukardt
Date: Thursday, April 8, 2010 12:19 PM
...
> So if the Synchronized_Queue makes use of either, it is important that
> the user is able to define priorities and stack sizes e.g. by
> supplying them as formal parameters.
> Unfortunately the formal generic parameters must then of course cater
> for all kinds of implementations using PO(s),
> task(s) or PO(s) + task(s). This is messy and may result in generic
> parameters not being used etc. A solution is to demand that an
> implementation uses e.g. exactly one PO and exactly one task if this
> is feasible and require that an implementation documents the necessary
> timings and task behavior (cyclic, sporadic etc.) for performing a
> schedulability analysis.
Humm, I hadn't thought of priorities, and I don't think anyone else has either.
(I personally have never used them in an Ada program, although I've used plenty
of tasks.) Thanks for bringing that up.
In any case, a queue (or other container) should use exactly zero tasks in
general, so I wouldn't worry about that. Implementers could do stupid things,
but I think their customers would set them straight in a hurry.
So I think what you are saying is that the generic that defines the interface
also needs a priority parameter (to set the ceiling priority of the protected
object(s)); the parameter would presumably be defaulted so that it doesn't have
to be given unless you care. [You also seem to be saying that the entire idea
of synchronized interfaces is flawed because they don't include the priorities
of the implementation, which makes it impossible to reason about the behavior.]
None of the predefined containers are intended to be used when timing is
critical (for that, you need visibility into the implementation, which is
exactly what we don't want for the containers -- we want to allow implementers
to make the best implementations possible and to be able to change them freely),
so actually documenting timings does not make sense. But there are a lot of
systems out there (like my web server and spam filter) that just need "good
enough" implementations -- where timings aren't critical.
> Another approach is to use suspension objects, but that is probably
> not an acceptable solution to the Ravenscar users and suspension
> objects cannot implement a synchronized interface.
>
> I'm therefore still sceptical about the useability of the proposed
> generic package.
Would you prefer no predefined queue at all? There is no value to a predefined
queue that isn't implemented with protected objects (it's trivial to write
yourself, as I showed yesterday), so we don't want to do that. I also think that
the blocking semantics is important (although that is less clear-cut).
****************************************************************
From: Jeffrey R. Carter
Date: Thursday, April 8, 2010 2:45 PM
> Humm, I hadn't thought of priorities, and I don't think anyone else
> has either. (I personally have never used them in an Ada program,
> although I've used plenty of tasks.) Thanks for bringing that up.
I agree that client control of priorities is important.
> In any case, a queue (or other container) should use exactly zero
> tasks in general, so I wouldn't worry about that. Implementers could
> do stupid things, but I think their customers would set them straight in a hurry.
Then perhaps it should be a protected interface rather than synchronized?
> Would you prefer no predefined queue at all? There is no value to a
> predefined queue that isn't implemented with protected objects (it's
> trivial to write yourself, as I showed yesterday), so we don't want to
> do that. I also think that the blocking semantics is important
> (although that is less clear-cut).
It's also trivial to wrap a protected type around a list (or ordered set) and
write your own protected (priority) queue, with blocking semantics if desired. I
don't think that's a reason for a library not to provide them.
FWIW, the protected data structures in the PragmAda Reusable Components (Ada 95)
define a protected type with a defaulted Ceiling_Priority discriminant that is
used to set the priority of objects of the type.
****************************************************************
From: Randy Brukardt
Date: Wednesday, April 7, 2010 9:36 PM
We didn't define an indefinite version of the queues, unlike all other
containers forms. This seems weird to me; I can't think of any good reason to
not have an indefinite unbounded queue (other than that it doesn't fit well with
the interface definition -- maybe another reason that the interface isn't that
good of an idea??), and it surely is inconsistent.
****************************************************************
From: Bob Duff
Date: Monday, May 10, 2010 10:50 AM
Here's a new version of AI05-0159-1, modified as per the April 29 telephone
meeting.
Everything is heavily modified, except the !example, which I didn't change (nor
even read).
One way in which I disobeyed my marching orders: I gave the Ceiling priority
discriminant a default even in the Bounded case. This requires giving a default
for Capacity. I chose 1. Most users will want to specify some other value, but
I don't see that as a problem. Objections?
There are two levels of default -- you pass in a Ceiling (and Capacity in the
bounded case) as generic formals with defaults. Then these are used as the
default for the discriminant(s).
Anyway, a single-element queue is not entirely nonsensical.
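[A spec-only sketch of how the two levels of default fit together in the bounded
case, using the formal names from the current draft; the body and most of the
representation are omitted. Note that, as comes up later in this thread, the
defaulted discriminants themselves raise a legality question, because a
protected type that implements an interface is tagged:]

with System;
with Ada.Containers.Synchronized_Queue_Interfaces;
generic
   with package Queue_Interfaces is new
     Ada.Containers.Synchronized_Queue_Interfaces (<>);
   Default_Capacity : Count_Type := 1;   -- first level of default (questioned just below)
   Default_Ceiling  : System.Any_Priority := System.Priority'Last;
package Ada.Containers.Bounded_Synchronized_Queues is
   pragma Preelaborate (Bounded_Synchronized_Queues);

   type Element_Array is
      array (Count_Type range <>) of Queue_Interfaces.Element_Type;

   protected type Queue
     (Capacity : Count_Type          := Default_Capacity;   -- second level of default
      Ceiling  : System.Any_Priority := Default_Ceiling)
   is new Queue_Interfaces.Queue with
      pragma Priority (Ceiling);

      overriding entry Enqueue (New_Item : in Queue_Interfaces.Element_Type);
      overriding entry Dequeue (Element : out Queue_Interfaces.Element_Type);
      overriding function Current_Use return Count_Type;
      overriding function Peak_Use return Count_Type;
   private
      --  Representation illustrative only:
      Elements   : Element_Array (1 .. Capacity);
      Length     : Count_Type := 0;
      Max_Length : Count_Type := 0;
   end Queue;

end Ada.Containers.Bounded_Synchronized_Queues;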
I also tried to remove some duplicated verbiage. For example, the "interfaces"
part says that Enqueue blocks if and only if the thing is bounded and is full.
We don't need to mention that again in AARM notes for the concrete types, which
are only a small distance away.
[This is version /06 - Editor.]
****************************************************************
From: Tucker Taft
Date: Monday, May 10, 2010 11:06 AM
I don't see why you need a default for the generic formal object
Default_Capacity. It seems reasonable to require that the user specify a
Default_Capacity when instantiating, rather than allowing it to unintentionally
default to one, which turns the "queue" into a mailbox. That seems error-prone.
Otherwise, looks good...
****************************************************************
From: Bob Duff
Date: Monday, May 10, 2010 11:38 AM
> I don't see why you need a default for the generic formal object
> Default_Capacity.
> It seems reasonable to require that the user specify a
> Default_Capacity when instantiating, rather than allowing it to
> unintentionally default to one, which turns the "queue" into a mailbox.
> That seems error-prone.
OK, I'm convinced.
> Otherwise, looks good...
Thanks for reviewing.
****************************************************************
From: Randy Brukardt
Date: Wednesday, May 19, 2010 10:40 PM
...
> I also tried to remove some duplicated verbiage. For example, the
> "interfaces" part says that Enqueue blocks if and only of the thing is
> bounded and is full. We don't need to mention that again in AARM
> notes for the concrete types, which are only a small distance away.
Yes, we do. The intent is that this is reference information, so that someone
can look up just the definition of the single package that they care about
(perhaps by clicking on the page) and get the important information. The
blocking/non-blocking seems important enough to put in each place (in the AARM
only, of course). Indeed, I would argue that a user note in each place might be
appropriate. Remember that in the HTML version, that "short distance" is on a
different HTML page that the user probably will not see (printed is different,
but there is less use of that version every year).
****************************************************************
From: Randy Brukardt
Date: Wednesday, May 19, 2010 10:53 PM
...
> The generic library package
> Containers.Synchronized_Queue_Interfaces has the following
> declaration:
>
> generic
> type Element_Type is private;
>
> package Ada.Containers.Synchronized_Queue_Interfaces is
> pragma Pure;
This wasn't your mistake, but the name of the package is always included in the
pragma Pure or Preelaborate.
...
> with System;
> with Ada.Containers.Synchronized_Queue_Interfaces;
> generic
> with package Queue_Interfaces is new
> Ada.Containers.Synchronized_Queue_Interfaces (<>);
> type Queue_Priority is private;
> with function Get_Priority
> (Element: Queues.Element_Type) return Queue_Priority is <>;
> with function "<"
> (Left, Right : Queues.Element_Type) return Boolean is <>;
It took me forever to understand this; I finally realized that the English
wording makes no sense because you actually meant:
with function "<"
(Left, Right : Queue_Priority) return Boolean is <>;
...
> Enqueue inserts an item according to the order specified by the "<"
> function.
> If the queue already contains elements equivalent to New_Item, then it
> is inserted after the existing equivalent elements.
I don't think this quite works. I think you need to mention Get_Priority
somewhere. Maybe:
Enqueue inserts an item according to the order specified by the "<" function on
the result of Get_Priority on the elements.
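[A hypothetical usage sketch of those formals, assuming the unbounded priority
form is named Ada.Containers.Unbounded_Priority_Queues; it shows Get_Priority
and "<" operating on Queue_Priority, not on the elements themselves:]

with Ada.Containers.Synchronized_Queue_Interfaces;
with Ada.Containers.Unbounded_Priority_Queues;
procedure Priority_Queue_Demo is

   type Job is record
      Urgency : Natural;
      Id      : Positive;
   end record;

   package Job_Interfaces is new
     Ada.Containers.Synchronized_Queue_Interfaces (Element_Type => Job);

   function Urgency_Of (J : Job) return Natural is
   begin
      return J.Urgency;
   end Urgency_Of;

   --  "<" compares priorities, not elements; passing ">" makes larger urgencies
   --  come out first.  Equal urgencies keep FIFO order, per the wording quoted above.
   package Job_Queues is new
     Ada.Containers.Unbounded_Priority_Queues
       (Queue_Interfaces => Job_Interfaces,
        Queue_Priority   => Natural,
        Get_Priority     => Urgency_Of,
        "<"              => ">");

   Pending : Job_Queues.Queue;
   Next    : Job;

begin
   Pending.Enqueue ((Urgency => 1, Id => 1));
   Pending.Enqueue ((Urgency => 9, Id => 2));
   Pending.Dequeue (Next);   -- the more urgent job (Id = 2) comes out first
end Priority_Queue_Demo;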
I've made these changes (and Tucker's suggestion) in a separate version (so we
can recover yours if necessary). [This is version /07 - Editor.]
****************************************************************
From: Bob Duff
Date: Thursday, May 20, 2010 7:15 AM
> It took me forever to understand this; I finally realized that the
> English wording makes no sense because you actually meant:
>
> with function "<"
> (Left, Right : Queue_Priority) return Boolean is <>;
Right, sorry about that.
> ...
> > Enqueue inserts an item according to the order specified by the "<"
> > function.
> > If the queue already contains elements equivalent to New_Item, then
> > it is inserted after the existing equivalent elements.
>
> I don't think this quite works. I think you need to mention
> Get_Priority somewhere. Maybe:
>
> Enqueue inserts an item according to the order specified by the "<"
> function on the result of Get_Priority on the elements.
Seems slightly less readable (albeit more correct), to me. Can we keep my
wording, but add "Get_Priority returns the priority (duh)."? Or "...is expected
to..."?
> I've made these changes (and Tucker's suggestion) in a separate
> version (so we can recover yours if necessary).
You always make a separate version, right? It's in CVS...
****************************************************************
From: Randy Brukardt
Date: Thursday, May 20, 2010 1:52 PM
> You always make a separate version, right? It's in CVS...
Not always; I make editorial corrections as I find them, which often means while
I'm loading the file. So the original submission is never saved anywhere. I made
a special point to do that this time, which is what I meant above.
****************************************************************
From: Martin Dowie
Date: Friday, May 21, 2010 2:55 AM
A couple of minor points / questions on revision 1.10
The example references "Ada.Containers.Synchronized_Queues" and not
"Ada.Containers.Synchronized_Queue_Interfaces".
The Enqueue / Dequeue entries all have a "Container: Queue" parameter - but
"Queue" is the name of the protected type containing these entries. I've had a
look through the AI's referencing section 3.09 but didn't see anything that
would make this allowable. I'm not sure it even makes sense - isn't the PT the
queue?...
In a number of the PTs, e.g.
"Ada.Containers.Unbounded_Synchronized_Queues", the discriminent takes a default
- is that allowed now?
Also, for "Element_Type", shouldn't it be "Queue_Interfaces.Element_Type"?
****************************************************************
From: Bob Duff
Date: Friday, May 21, 2010 1:29 PM
> A couple of minor points / questions on revision 1.10
>
> The example references "Ada.Containers.Synchronized_Queues" and not
> "Ada.Containers.Synchronized_Queue_Interfaces".
Right.
> The Enqueue / Dequeue entries all have a "Container: Queue" parameter
> - but "Queue" is the name of the protected type containing these entries.
> I've had a look through the AI's referencing section 3.09 but didn't
> see anything that would make this allowable. I'm not sure it even
> makes sense - isn't the PT the queue?...
That's a mistake I made while converting from the old version to the new
visibly-protected type. Those parameters should be removed.
We made it visible so the ceiling priority stuff would make sense.
> In a number of the PTs, e.g.
> "Ada.Containers.Unbounded_Synchronized_Queues", the discriminent takes
> a default - is that allowed now?
Good point! I had forgotten that this type is tagged.
I wonder if we should allow defaulted discriminants on limited types.
> Also, for "Element_Type", shouldn't it be
> "Queue_Interfaces.Element_Type"?
Yes.
Thanks for your comments.
****************************************************************
From: Bob Duff
Date: Friday, May 21, 2010 1:35 PM
Randy, are you going to fix the editorial issues pointed out recently by Martin
Dowie?
Here's another one:
There is one reference to Queues.Element_Type, and several to Element_Type,
which should all be Queue_Interfaces.Element_Type.
The defaulted discriminants issue is substantive -- we need to discuss that.
****************************************************************
From: Randy Brukardt
Date: Saturday, June 12, 2010 11:39 PM
I just did that after I realized that you hadn't in your version of June 7th. I
also noticed that the abstract functions Current_Use and Peak_Use were never
defined in any of the concrete versions. I'd expect any decent Ada compiler to
throw up on that, so I fixed that too.
****************************************************************
Questions? Ask the ACAA Technical Agent