CVS difference for ai05s/ai05-0159-1.txt

Differences between 1.7 and version 1.8

--- ai05s/ai05-0159-1.txt	2010/04/13 03:12:58	1.7
+++ ai05s/ai05-0159-1.txt	2010/04/30 02:23:22	1.8
@@ -1,5 +1,6 @@
 !standard  A.18.24                                 10-04-12    AI05-0159-1/05
 !class Amendment 05-10-24
+!status work item 10-04-29
 !status ARG Approved  11-0-0  10-02-27
 !status work item 06-03-15
 !status received 05-10-02
@@ -67,7 +68,7 @@
 *capacity*. If the queue object has a bounded capacity,
 and the number of existing elements equals the capacity, then Enqueue
 blocks until storage becomes available; otherwise Enqueue does not block.
-In any case, it then copies New_Item onto the tail of the queue.
+In any case, it then copies New_Item onto the queue.
 
    procedure Dequeue
      (Container : in out Queue;
@@ -534,3 +535,635 @@
 object.
 
 ****************************************************************
+
+From: Randy Brukardt
+Date: Monday, April 5, 2010  8:29 PM
+
+I recently had a discussion with Martin Dowie. He was concerned that the lack of
+a Peek operation in the priority queues makes it difficult to implement some
+common situations. In particular, cases where the reader wants to defer
+processing until an item of sufficient priority is available. (Obviously, this
+case only applies to priority queues.)
+
+Using Peek requires a busy-wait loop of some sort, which is obviously
+suboptimal. In addition, Peek is very hard to use safely, as in general you can
+assume nothing at all about the item retrieved by a Dequeue after executing a
+Peek. (Another task may have put a higher-priority item on the queue, or another
+task may have removed the item at the head of the queue, between the Peek and a
+Dequeue.) As such, it's like leaving a loose razor blade in a toolchest.
+Moreover, in the bounded queue case (without any dynamic memory management) it
+is not any more efficient than doing a Dequeue (both require copying the
+element). So it probably is a wash in terms of efficiency and safer to do a
+Dequeue followed by an Enqueue if you don't want to process the element (the
+element will end up in the same relative place on the queue since it is sorted
+by priority). (Enqueue might be more expensive if the queue has a lot of
+elements on it or the Before function is unusually expensive, but in normal
+circumstances it costs about the same as a Peek would, one element copy.)
+
+I had a similar problem in my spam filter, for which Peek would not have worked
+(there are multiple readers). The "priority" is the time when an item ought to
+be processed (most are ready to process immediately, but after a failed
+operation you have to wait a random amount of time and try again). I used a
+combination of timed entry calls and a high-priority-only dequeue operation.
+
+For the AI05-0159-1 queues, this routine would look like:
+
+   procedure Dequeue_Only_High_Priority
+                     (Container               : in out Queue;
+                      Minimum_High_Priority   :        Element_Type;
+                      Element                 :    out Element_Type);
+
+This routine would block if there wasn't an element with the minimum priority
+available.
+
+This was used like (I've left out some parts):
+
+    type Queued_Item is tagged private;
+    type Queued_Item_Ptr is access all Queued_Item'Class;
+    type Queue_Node is record
+        Item : Queued_Item_Ptr;
+        Process_After : Ada.Calendar.Time; -- Use like "delay until".
+    end record;
+    function Before (Left, Right : Queue_Node) return Boolean is
+        use type Ada.Calendar.Time;
+    begin
+        return Left.Process_After < Right.Process_After;
+    end Before;
+
+    package Message_Queue is new Ada.Containers.Unbounded_Priority_Queues
+       (Queue_Node, Before => Before);
+
+---
+
+    declare
+        Node : Queue_Node;
+    begin
+        loop
+            select
+                accept Send_Queue.Dequeue_Only_High_Priority (Element => Node,
+                    Minimum_High_Priority => (Item => null, After => Ada.Calendar.Clock));
+                Send (Node);
+            or
+                accept Shutdown;
+            or
+                delay 10.0;
+            end select;
+        end loop;
+    end;
+
+Note that I had to use the timeout because the "high-priority" value is based on
+the clock, and thus has to be reevaluated occasionally. That wouldn't happen if
+the priority didn't involve an external actor (and barriers that depend on
+globals are bad practice for this reason). Since we don't need to do retries on
+any particular schedule, this is a non-issue in this application. Note that I
+also used that code to handle shutdown requests, since blocking forever would
+prevent shutdowns from being recognized.
+
+---
+
+Anyway, the lack of such an operation would make using the AI05-0159-1 priority
+queues suboptimal for this use. Without the Dequeue_Only_High_Priority
+operation, I would have to use a faster loop (so that newly queued items
+are checked reasonably quickly). There also would be a reordering of nodes that
+have the same sending time (not a major problem in this use, but not optimal).
+This would look something like:
+
+    declare
+        Node : Queue_Node;
+    begin
+        loop
+            select
+                accept Send_Queue.Dequeue (Element => Node);
+                if Node.Process_After < Ada.Calendar.Clock then
+                    Send (Node);
+                else
+                    Send_Queue.Enqueue (Element => Node);
+                end if;
+            or
+                accept Shutdown;
+            or
+                delay 1.0;
+            end select;
+        end loop;
+    end;
+
+While this is not great, it might be good enough.
+
+So, is this problem worth addressing? If so, do we want to add an operation like
+this to the priority queues (only)? Or is there some other idea (not involving a
+race condition)?
+
+****************************************************************
+
+From: Bob Duff
+Date: Tuesday, April 6, 2010  12:43 PM
+
+> I recently had a discussion with Martin Dowie. He was concerned that
+> the lack of a Peek operation in the priority queues makes it difficult
+> to implement some common situations. In particular, cases where the
+> reader wants to defer processing until an item of sufficient priority is available.
+> (Obviously, this case only applies to priority queues.)
+
+I have mixed feelings about such race-condition-causing features.
+If we have Peek, there should probably be a NOTE warning of the danger.
+
+There is precedent:  Function Current_Use in this package has the same property.
+(By the way, there's no rationale for why we have Current_Use and Peak_Use --
+seems like we ought to say something.)  And Ada.Synchronous_Task_Control has
+Current_State, with the mild AARM-only warning, "This state can change
+immediately after the operation returns."
+
+>    procedure Dequeue_Only_High_Priority
+>                      (Container               : in out Queue;
+>                       Minimum_High_Priority   :        Element_Type;
+>                       Element                 :    out Element_Type);
+
+How about this:
+
+    procedure Conditional_Dequeue
+     (Container : in out Queue;
+      Element   : out Element_Type;
+      Condition : function (Element_Type) return Boolean) is abstract;
+   pragma Implemented (Conditional_Dequeue, By_Entry); ?
+
+where "function (Element_Type) return Boolean" of course stands for "not null
+access function (Element: Element_Type) return Boolean".
+
+It dequeues an element if and only if Condition returns True for the element to
+be dequeued.  Never blocks.
+
+Hmm.  I guess we'd need another Boolean 'out' param to indicate whether it
+succeeded.
+
+>             select
+>                 accept Send_Queue.Dequeue_Only_High_Priority (Element => Node,
+>                     Minimum_High_Priority => (Item => null, After => Ada.Calendar.Clock));
+
+That doesn't look right -- you can only "accept" your own entries.
+
+****************************************************************
+
+From: Martin Dowie
+Date: Tuesday, April 6, 2010  1:12 PM
+
+There are certainly loads of precedents, e.g. the tasking-related attributes
+('Callable, 'Count and 'Terminated) are all subject to race conditions.
+
+Randy helpfully suggested alternatives without something akin to a 'peek' (e.g.
+a two-queue solution: one high priority, one low priority) but they wouldn't
+work when using these in high-integrity systems - i.e. Ravenscar profile
+environments - as they require 'select' statements.
+
+I'm sure some version of 'peek' - either a blocking or a
+non-blocking-with-success or blocking-for-period-with-success - would help in
+such environments.
+
+****************************************************************
+
+From: Bob Duff
+Date: Tuesday, April 6, 2010  2:36 PM
+
+What do you think of my Conditional_Dequeue suggestion?
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Tuesday, April 6, 2010  3:05 PM
+
+> What do you think of my Conditional_Dequeue suggestion?
+
+It's better than Peek (much less trouble with race conditions), but I think it
+has two negatives:
+* The need to write a function when the condition is likely to be a simple
+  expression (Obj.After < Ada.Calendar.Clock in my example) is annoying;
+* The lack of blocking semantics means almost all uses will need a busy-wait
+  loop.
+
+Neither of these is critically important, but they do suggest that there might
+be a better solution out there. For the first, the fact that we are talking
+about priority queues means we already have an ordering, so it isn't clear that
+there is any value to having a second one. For the second, I see the issue that
+Ravenscar uses will necessarily have to use a busy-wait loop, but it is annoying
+to force everyone to live by the limitations of Ravenscar. (Indeed, as with
+SPARK, I would hope that we would be able to move to larger subsets of the
+for verification purposes; we shouldn't be tied to these overly limiting subsets
+forever.)
+
+Peek itself is just too dangerous in normal use: it can only be used safely on a
+priority queue if there is only a single peeking/dequeuing task AND the peeked
+item is tested then immediately discarded (there can be no assumption that it
+will be the one dequeued) AND getting a higher-priority item from Dequeue than
+the one expected is OK. I'm sure there are applications that fit into that
+profile, but there are many that don't (such as my "flock of senders" problem).
+Moreover, there is no practical way to enforce safe use of Peek (since there is
+no way to enforce the immediate discarding of the retrieved value). I don't
+think it would even be possible for a static analysis tool to figure out Peek
+misuse (other than the obvious: "ERROR: Use of Peek detected" :-).
+
+So we need a better solution (or none, as we currently have).
+
+****************************************************************
+
+From: Bob Duff
+Date: Tuesday, April 6, 2010  3:36 PM
+
+> It's better than Peek (much less trouble with race conditions), but I
+> think it has two negatives:
+
+I just had a discussion with someone at AdaCore who pointed out another
+negative:
+
+Presumably the implementation of these queues is a protected type.  You normally
+want all operations of a protected type to have a short bounded time. Normally,
+you can determine that by analyzing the protected body. But if the protected
+body is calling my suggested Condition function, which is passed in by the
+client, you need to do a more global analysis.
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Tuesday, April 6, 2010  3:48 PM
+
+You could say that about any priority queue in general, however, as the ordering
+function (Before in this case) is passed in and could take an unknown amount of
+time. Moreover, the number of calls will depend (in some way) upon the number of
+items in the queue. In practice, both of these functions would be short and
+execute quickly, but there is a real concern if you need strong bounds. Of
+course, Before is fundamental to the Queue, while your Condition function is
+less so. Still, it seems that this is a problem with the entire idea of generic
+priority queues, and not specific to your suggestion.
+
+****************************************************************
+
+From: Bob Duff
+Date: Tuesday, April 6, 2010  3:54 PM
+
+Good point.
+
+****************************************************************
+
+From: Martin Dowie
+Date: Wednesday, April 7, 2010  3:24 AM
+
+> What do you think of my Conditional_Dequeue suggestion?
+
+It looks good to me!
+
+A 'kitchen-sink' addition could be:
+
+    procedure Conditional_Dequeue
+     (Container : in out Queue;
+      Element   :    out Element_Type;
+      Success   :    out Boolean;
+      Condition :        not null access function (Element : Element_Type) return Boolean;
+      Timeout   :        Duration) is abstract;
+   pragma Implemented (Conditional_Dequeue, By_Entry);
+
+Where this blocks for up to a 'Timeout' period. That also mitigates Randy's 2nd
+point ("The lack of blocking semantics means almost all uses will need a
+busy-wait loop"). [Alternative: 'Timeout : Duration := 0.0' offers non-blocking
+option.]
+
+Randy's 1st point ("The need to write a function when the condition is likely to
+be a simple expression (Obj.After < Ada.Calendar.Clock in my example) is
+annoying") is true and a small 'lambda' expression would be great here! ;-) But
+for (ordered) container use and, to a lesser extent, just using general ADTs with
+primary keys, we end up writing those little functions _all_the_time_. Still
+saves a heck of a lot of time over rolling-your-own, so I know which I prefer.
+
+I'm not sure I understand Randy's point "it is annoying to force everyone to
+live by the limitations of Ravenscar" - this would be 'adding to' rather than
+'replacing' any functionality, so it will be safely ignored by those who don't
+need it (I ignore entire packages of the standard library quite happily ;-).
+
+I appreciate there are concerns about how this could be used appropriately but
+you could say the same for 'goto' or ATC or Unchecked_<anything> or overlaying
+with 'Address or etc...the above seems a lot less (potentially) harmful to me...
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Wednesday, April 7, 2010  1:06 PM
+
+...
+> Where this blocks for up to a 'Timeout' period. That also mitigates
+> Randy's 2nd point ("The lack of blocking semantics means almost all
+> uses will need a busy-wait loop").
+> [Alternative: 'Timeout : Duration := 0.0'
+> offers non-blocking option.]
+
+I thought of something like this, but I don't think it can be implemented with a
+standard Ada protected object. You'd have to have a subprogram with a timed or
+conditional entry call, but then it couldn't be an entry (so it couldn't be used
+in a requeue or conditional entry call). And the internal conditional entry call
+wouldn't be allowed by Ravenscar.
+
+My original idea (not posted to Ada Comment, just in our e-mail discussion) had
+the same problem.
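The implementation problem can be sketched in a few lines (a hypothetical sketch only; Queue_PO and Element_Type stand for an assumed protected queue type and its element type, neither taken from the AI text):

```ada
--  Why a Timeout parameter prevents the operation from being an entry:
--  the timeout has to be expressed as a timed entry call, and a timed
--  entry call can only appear in a normal subprogram body.
procedure Timed_Dequeue
  (Container : in out Queue_PO;
   Element   :    out Element_Type;
   Success   :    out Boolean;
   Timeout   : in     Duration) is
begin
   select
      Container.Dequeue (Element);   --  the underlying blocking entry
      Success := True;
   or
      delay Timeout;                 --  timed entry call: legal here in a
      Success := False;              --  procedure, but not inside an entry
   end select;                       --  body, and not under Ravenscar
end Timed_Dequeue;
```

Since Timed_Dequeue is a procedure rather than an entry, it cannot be the target of a requeue or of a conditional/timed entry call itself, which is the objection above.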
+
+...
+> I'm not sure I understand Randy's point "it is annoying to force
+> everyone to live by the limitations of Ravenscar" - this would be
+> 'adding to' rather than 'replacing' any functionality, so it will be
+> safely ignored by those who don't need it (I ignore entire packages of
+> the standard library quite happily ;-).
+
+I was referring to forcing everyone to use busy-wait loops because Ravenscar
+programs have to be coded that way (no conditional entry calls). If we have a
+non-blocking operation here (as Bob proposed), we're essentially forcing all
+uses to use busy-wait loops. That's a lack of capability caused by Ravenscar
+concerns, not *extra* capability (which I wouldn't have a concern about).
+
+> I appreciate there are concerns about how this could be used
+> appropriately but you could say the same for 'goto' or ATC or
+> Unchecked_<anything> or overlaying with 'Address or etc...the above
+> seems a lot less (potentially) harmful to me...
+
+I agree, this is a lot less harmful than Peek.
+
+It strikes me, though, that the separate Condition function introduces a new
+race condition hazard that doesn't exist in the original ones. It's less serious
+than the original one, but it still exists. Consider a numeric priority with a
+Before function of Left.Priority > Right.Priority. Assume Condition is defined
+as Element.Priority in 10..15. Consider the following two execution orders:
+
+Task 2 inserts an item with priority 10.
+Task 1 executes Conditional_Dequeue, receiving success and the item with priority 10.
+Task 2 inserts an item with priority 20.
+
+vs.
+
+Task 2 inserts an item with priority 10.
+Task 2 inserts an item with priority 20.
+Task 1 executes Conditional_Dequeue, receiving failure.
+
+There is a race between task 1 reading the queue and task 2's insertion of the
+second item.
+
+Of course, one could say that such a condition is bad (it IS bad), but I suspect
+that in practice the problem could be much more subtle. It might be hard to
+detect.
+
+This implies to me that it is only safe to use the same comparison function for
+Conditional_Dequeue. Which makes me think that my original proposal is
+(slightly) better:
+
+   procedure Dequeue_Only_High_Priority
+                     (Container               : in out Queue;
+                      Minimum_High_Priority   :        Element_Type;
+                      Element                 :    out Element_Type);
+
+because this race condition cannot happen here (meaning that there doesn't need
+to be any analysis to prove that).
+
+If we wanted to insist on a non-blocking version, that would be:
+
+   procedure Dequeue_Only_High_Priority
+                     (Container               : in out Queue;
+                      Minimum_High_Priority   :        Element_Type;
+                      Element                 :    out Element_Type;
+                      Success                 :    out Boolean);
+
+It would be nice to have both. (That goes for regular Dequeue as well, a
+non-blocking version would be usable in a Ravenscar context where a blocking one
+might not be.) Of course, this all might be considered feeping creaturism. :-)
+
+The need to construct a complete element here for just the priority portion is
+indeed annoying, but there isn't any alternative since Before takes elements.
+
+****************************************************************
+
+From: Martin Dowie
+Date: Wednesday, April 7, 2010  1:18 PM
+
+Thankfully, we now have the 'others => <>' option to keep the construction of
+the element to a minimum...
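For the delay-queue example earlier in the thread, such a call might look like this (hypothetical, reusing the Queue_Node type and Send_Queue object from Randy's sketch):

```ada
--  Build the "minimum priority" element by filling in only the
--  component that Before actually compares; box the rest with <>.
Send_Queue.Dequeue_Only_High_Priority
  (Minimum_High_Priority =>
     (Process_After => Ada.Calendar.Clock, others => <>),
   Element               => Node);
```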
+
+I don't mind the creeping...but I'm biased by having seen the need for this sort
+of operation! ;-)
+
+****************************************************************
+
+From: Mark Lorenzen
+Date: Wednesday, April 7, 2010  3:14 PM
+
+> I thought of something like this, but I don't think it can be
+> implemented with a standard Ada protected object. You'd have to have a
+> subprogram with a timed or conditional entry call, but then it
+> couldn't be an entry (so it couldn't be used in a requeue or
+> conditional entry call). And the internal conditional entry call
+> wouldn't be allowed by Ravenscar.
+
+In my opinion, tasks and protected objects should not be a part of a re-usable
+library.
+
+I think it's better to provide functionality for registering a call-back
+subprogram, which will be called when an element that fulfills a certain
+condition is put on the queue, dequeuing that element at the same time.
+
+procedure Set_Conditional_Pass_Through_Handler
+   (Container : in out Queue;
+    Condition : in     not null access function (Element : Element_Type) return Boolean;
+    Handler   : in     not null access procedure (Element : in Element_Type));
+
+The idea is to enable elements to "pass through" the normal priority queue
+mechanism. The element is never stored in the queue, but passed directly to the
+user as parameter Element in subprogram Handler, if Condition is fulfilled. It
+is up to the user to implement an optional time-out mechanism, using whatever
+language constructs that are appropriate for the program in question.
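A registration under this proposal might look like the following (Is_Urgent, Process_Now and My_Queue are assumed client-side names, not part of the proposal):

```ada
--  Hypothetical client code for the pass-through scheme: elements
--  satisfying Condition never touch the queue; they go straight to
--  the handler at Enqueue time.
Set_Conditional_Pass_Through_Handler
  (Container => My_Queue,
   Condition => Is_Urgent'Access,     --  e.g. tests Element.Priority
   Handler   => Process_Now'Access);  --  receives the element directly
```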
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Wednesday, April 7, 2010  4:47 PM
+
+> In my opinion, tasks and protected objects should not be a part of a
+> re-usable library.
+
+It has to be in this case. We've defined the queue as a synchronized interface,
+in order to provide task-safe access. If it wasn't synchronized (or visibly
+protected), it wouldn't be task-safe by the definitions in the Ada Standard. In
+addition, the definition declares Enqueue and Dequeue to be blocking if items
+cannot be added or removed respectively. Note that a synchronized interface can
+only be implemented with a task or protected object.
+
+Even if we decided to redefine everything without the visible synchronization
+(doing that by adding a bunch of wording to describe what operations have to be
+supported -- which would be a major pain to do), we also have a principle that
+the containers need to be implementable in standard Ada. That could only be done
+by implementing the queues as a protected object of some sort (pragma Atomic is
+only required to be supported on "small" types, so it could not be used on a
+generic element type).
+
+Thus, while I agree with your principle in general for the containers, the queue
+containers exist *only* for the purpose of multitasking (it is trivial to create
+a queue using of the list container if task safety isn't needed, just rename
+Append to be Enqueue, and define Dequeue to be First_Element followed by
+Delete_First), and thus must be task-safe. And that implies that they have to be
+implementable with a protected object.
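The "trivial" list-based queue described above can be sketched like this (a non-task-safe illustration only; the names are invented here, not taken from the AI):

```ada
with Ada.Containers.Doubly_Linked_Lists;
generic
   type Element_Type is private;
package Simple_Queues is
   --  A plain FIFO queue on top of the list container: Append plays
   --  Enqueue; Dequeue is First_Element followed by Delete_First.
   --  Deliberately NOT task-safe -- that is exactly the point above.
   type Queue is tagged limited private;
   procedure Enqueue (Container : in out Queue; New_Item : in Element_Type);
   procedure Dequeue (Container : in out Queue; Element : out Element_Type);
private
   package Lists is new Ada.Containers.Doubly_Linked_Lists (Element_Type);
   type Queue is tagged limited record
      Data : Lists.List;
   end record;
end Simple_Queues;

package body Simple_Queues is
   procedure Enqueue (Container : in out Queue; New_Item : in Element_Type) is
   begin
      Container.Data.Append (New_Item);        --  Append as Enqueue
   end Enqueue;
   procedure Dequeue (Container : in out Queue; Element : out Element_Type) is
   begin
      Element := Container.Data.First_Element; --  read the head ...
      Container.Data.Delete_First;             --  ... then remove it
   end Dequeue;
end Simple_Queues;
```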
+
+> I think it's better to provide functionality for registering a
+> call-back subprogram, which will be called in case an element (that
+> fulfills a certain condition) is put on the queue, and at the same
+> time dequeue the element.
+>
+> procedure Set_Conditional_Pass_Through_Handler
+>    (Container : in out Queue;
+>     Condition : in     not null access function (Element : Element_Type) return Boolean;
+>     Handler   : in     not null access procedure (Element : in Element_Type));
+>
+> The idea is to enable elements to "pass through" the normal priority
+> queue mechanism. The element is never stored in the queue, but passed
+> directly to the user as parameter Element in subprogram Handler, if
+> Condition is fulfilled. It is up to the user to implement an optional
+> time-out mechanism, using whatever language constructs that are
+> appropriate for the program in question.
+
+This does not solve the use case in question, which is a delay queue (the
+"priority" is the time to process the element). In that case, the element will
+be on the queue for a while before it becomes high enough priority (the time is
+reached) to remove it.
+
+It also doesn't work when multiple high priority items are queued at about the
+same time and there is a single consumer (this is similar to Martin's original
+use case); the consumer will want to get one item and come back when it is
+finished to get the other high priority item. With the scheme you are proposing,
+you would have to have a second queue to store "extra" items in this case. If
+that is going to be the case, you might as well have used two queues in the
+first place. (Martin notes that such an extra queue would imply an extra task in
+a Ravenscar environment, as conditional entry calls aren't allowed.)
+
+****************************************************************
+
+From: Mark Lorenzen
+Date: Thursday, April 8, 2010  8:50 AM
+
+> It has to be in this case. We've defined the queue as s synchronized
+> interface, in order to provide task-safe access. If it wasn't
+> synchronized (or visibly protected), it wouldn't be task-safe by the
+> definitions in the Ada Standard. In addition, the definition declares
+> Enqueue and Dequeue to be blocking if items cannot be added or removed
+> respectively. Note that a synchronized interface can only be
+> implemented with a task or protected object.
+> ...
+
+I recently wrote a re-usable module containing both POs and tasks. It worked as
+expected in my test setup, until it had to be used in an actual application. The
+problems were 1) the PO priorities were not configurable, 2) the task
+priorities were not configurable and 3) the task stack sizes were not
+configurable.
+
+So if the Synchronized_Queue makes use of either, it is important that the user
+is able to define priorities and stack sizes e.g. by supplying them as formal
+parameters. Unfortunately the formal generic parameters must then of course
+cater for all kinds of implementations using PO(s), task(s) or PO(s) + task(s).
+This is messy and may result in generic parameters not being used etc. A
+solution is to demand that an implementation uses e.g. exactly one PO and
+exactly one task if this is feasible and require that an implementation
+documents the necessary timings and task behavior (cyclic, sporadic etc.) for
+performing a schedulability analysis.
+
+Another approach is to use suspension objects, but that is probably not an
+acceptable solution to the Ravenscar users and suspension objects cannot
+implement a synchronized interface.
+
+I'm therefore still sceptical about the usability of the proposed generic
+package.
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Thursday, April 8, 2010  12:19 PM
+
+...
+> So if the Synchronized_Queue makes use of either, it is important that
+> the user is able to define priorities and stack sizes e.g. by
+> supplying them as formal parameters.
+> Unfortunately the formal generic parameters must then of course cater
+> for all kinds of implementations using PO(s),
+> task(s) or PO(s) + task(s). This is messy and may result in generic
+> parameters not being used etc. A solution is to demand that an
+> implementation uses e.g. exactly one PO and exactly one task if this
+> is feasible and require that an implementation documents the necessary
+> timings and task behavior (cyclic, sporadic etc.) for performing a
+> schedulability analysis.
+
+Humm, I hadn't thought of priorities, and I don't think anyone else has either.
+(I personally have never used them in an Ada program, although I've used plenty
+of tasks.) Thanks for bringing that up.
+
+In any case, a queue (or other container) should use exactly zero tasks in
+general, so I wouldn't worry about that. Implementers could do stupid things,
+but I think their customers would set them straight in a hurry.
+
+So I think what you are saying is that the generic that defines the interface
+also needs a priority parameter (to set the ceiling priority of the protected
+object(s)); the parameter would presumably be defaulted so that it doesn't have
+to be given unless you care.  [You also seem to be saying that the entire idea
+of synchronized interfaces is flawed because they don't include the priorities
+of the implementation, which makes it impossible to reason about the behavior.]
+
+None of the predefined containers are intended to be used when timing is
+critical (for that, you need visibility into the implementation, which is
+exactly what we don't want for the containers -- we want to allow implementers
+to make the best implementations possible and to be able to change them freely),
+so actually documenting timings does not make sense. But there are a lot of
+systems out there (like my web server and spam filter) that just need "good
+enough" implementations -- where timings aren't critical.
+
+> Another approach is to use suspension objects, but that is probably
+> not an acceptable solution to the Ravenscar users and suspension
+> objects cannot implement a synchronized interface.
+>
+> I'm therefore still sceptical about the useability of the proposed
+> generic package.
+
+Would you prefer no predefined queue at all? There is no value to a predefined
+queue that isn't implemented with protected objects (it's trivial to write
+yourself, as I showed yesterday), so we don't want to do that. I also think that
+the blocking semantics is important (although that is less clear-cut).
+
+****************************************************************
+
+From: Jeffrey R. Carter
+Date: Thursday, April 8, 2010  2:45 PM
+
+> Humm, I hadn't thought of priorities, and I don't think anyone else
+> has either. (I personally have never used them in an Ada program,
+> although I've used plenty of tasks.) Thanks for bringing that up.
+
+I agree that client control of priorities is important.
+
+> In any case, a queue (or other container) should use exactly zero
+> tasks in general, so I wouldn't worry about that. Implementers could
+> do stupid things, but I think their customers would set them straight in a hurry.
+
+Then perhaps it should be a protected interface rather than synchronized?
+
+> Would you prefer no predefined queue at all? There is no value to a
+> predefined queue that isn't implemented with protected objects (it's
+> trivial to write yourself, as I showed yesterday), so we don't want to
+> do that. I also think that the blocking semantics is important
+> (although that is less clear-cut).
+
+It's also trivial to wrap a protected type around a list (or ordered set) and
+write your own protected (priority) queue, with blocking semantics if desired. I
+don't think that's a reason for a library not to provide them.
+
+FWIW, the protected data structures in the PragmAda Reusable Components (Ada 95)
+define a protected type with a defaulted Ceiling_Priority discriminant that is
+used to set the priority of objects of the type.
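The pattern described here can be sketched as follows (Ada 95 style, with names invented for the sketch; the actual PragmAda components may differ):

```ada
with System;
generic
   type Element_Type is private;
package Guarded_Buffers is
   --  A protected type whose ceiling priority is a defaulted
   --  discriminant, so each object can be given its own ceiling.
   protected type Buffer
     (Ceiling : System.Priority := System.Default_Priority)
   is
      pragma Priority (Ceiling);
      procedure Put (New_Item : in Element_Type);
      entry Get (Element : out Element_Type);   --  blocks until full
   private
      Item : Element_Type;
      Full : Boolean := False;  --  one-slot buffer keeps the sketch small
   end Buffer;
end Guarded_Buffers;

package body Guarded_Buffers is
   protected body Buffer is
      procedure Put (New_Item : in Element_Type) is
      begin
         Item := New_Item;
         Full := True;
      end Put;
      entry Get (Element : out Element_Type) when Full is
      begin
         Element := Item;
         Full := False;
      end Get;
   end Buffer;
end Guarded_Buffers;
```

A client then writes `B : Buffer (Ceiling => 10);` to choose a ceiling, or just `B : Buffer;` to take the default.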
+
+****************************************************************
+
+From: Randy Brukardt
+Date: Wednesday, April 7, 2010  9:36 PM
+
+We didn't define an indefinite version of the queues, unlike all other
+containers forms. This seems weird to me; I can't think of any good reason to
+not have an indefinite unbounded queue (other than that it doesn't fit well with
+the interface definition -- maybe another reason that the interface isn't that
+good of an idea??), and it surely is inconsistent.
+
+****************************************************************
+
