Rationale for Ada 2012

John Barnes

8.6 Queue containers

When the goals of the revision to Ada 2005 were discussed, one of the expectations was that it would be possible to improve the containers, or perhaps to introduce variants, so that they would be task safe. However, further investigation revealed that this would not be practicable because there are so many ways in which several tasks might interact with a container such as a list or a map.
However, one data structure that is amenable to controlled access by several tasks is the queue. One or more tasks can place objects on a queue and one or more can remove them. Moreover, the existing container library did not include queues as such so we were not tied to any existing structures.
There are in fact four different queue containers in Ada 2012, all for elements of a definite type. Two are bounded and two are unbounded, and in each case there is a synchronized form and a priority form. The names are
A.C.Unbounded_Synchronized_Queues
A.C.Bounded_Synchronized_Queues
A.C.Unbounded_Priority_Queues
A.C.Bounded_Priority_Queues
At one stage it was also planned to have unbounded containers for elements of an indefinite type. This would then have been similar to the other containers, which have unbounded definite, unbounded indefinite and bounded definite forms. However, there were significant problems with the Dequeue operation for removing an indefinite object, related to the fact that Ada does not have entry functions. The absence of an indefinite form is easily overcome by making the elements of the queue a holder container as described in the previous section.
These four different queue containers are all derived from a single synchronized interface declared in a generic package whose specification is as follows
generic
   type Element_Type is private;    --definite
package A.C.Synchronized_Queue_Interfaces is
   pragma Pure(Synchronized_Queue_Interfaces);
   type Queue is synchronized interface;
   procedure Enqueue
               (Container: in out Queue;
                New_Item: in Element_Type) is abstract
            with Synchronization => By_Entry;
   procedure Dequeue
               (Container: in out Queue;
                Element: out Element_Type) is abstract
            with Synchronization => By_Entry;
   function Current_Use(Container: Queue) return Count_Type is abstract;
   function Peak_Use(Container: Queue) return Count_Type is abstract;
end A.C.Synchronized_Queue_Interfaces;
This generic package declares the synchronized interface Queue and four operations on queues. These are the procedures Enqueue and Dequeue to add items to a queue and remove items from a queue respectively; note the aspect Synchronization which ensures that all implementations of these abstract procedures must be by an entry. There are also functions Current_Use and Peak_Use which can be used to monitor the number of items on a queue.
The four queue containers are generic packages which themselves declare a type Queue derived in turn from the interface Queue declared in the package above. We will look first at the synchronized queues and then at the priority queues.
The package for unbounded synchronized queues is as follows
with System; use System;
with A.C.Synchronized_Queue_Interfaces;
generic
   with package Queue_Interfaces is new
            A.C.Synchronized_Queue_Interfaces(<>);
   Default_Ceiling: Any_Priority := Priority'Last;
package A.C.Unbounded_Synchronized_Queues is
   pragma Preelaborate(Unbounded_Synchronized_Queues);
   package Implementation is
       ...   -- not specified by the language
   end Implementation;
   protected type Queue(Ceiling: Any_Priority := Default_Ceiling)
            with Priority => Ceiling
               is new Queue_Interfaces.Queue with
      overriding
      entry Enqueue(New_Item: in Queue_Interfaces.Element_Type);
      overriding
      entry Dequeue(Element: out Queue_Interfaces.Element_Type);
      overriding
      function Current_Use return Count_Type;
      overriding
      function Peak_Use return Count_Type;
   private
      ...   -- not specified by the language
   end Queue;
private
   ...   -- not specified by the language
end A.C.Unbounded_Synchronized_Queues;
Note that there are two generic parameters. The first (Queue_Interfaces) has to be an instantiation of the interface generic Synchronized_Queue_Interfaces; remember that the parameter (<>) means that any instantiation will do. The second parameter concerns priority and has a default value so we can ignore it for the moment.
Inside this package there is a protected type Queue which controls access to the queues via its entries Enqueue and Dequeue. This protected type is derived from Queue_Interfaces.Queue and so promises to implement the operations Enqueue, Dequeue, Current_Use and Peak_Use of that interface. And indeed it does implement them and moreover implements Enqueue and Dequeue by entries as required by the aspect Synchronization.
As an example suppose we wish to create a queue of some records such as
type Rec is record ... end record;
First of all we instantiate the interface package (using named notation for clarity) thus
package Rec_Interface is
   new A.C.Synchronized_Queue_Interfaces (Element_Type => Rec);
This creates an interface from which we can create various queuing mechanisms for dealing with objects of the type Rec.
Thus we might write
package Unbounded_Rec_Package is
   new A.C.Unbounded_Synchronized_Queues (Queue_Interfaces => Rec_Interface);
Finally, we can declare a protected object, My_Rec_UQ, which is the actual queue, thus
My_Rec_UQ: Unbounded_Rec_Package.Queue;
To place an object on the queue we can write
Enqueue(My_Rec_UQ, Some_Rec);
or perhaps more neatly
My_Rec_UQ.Enqueue(Some_Rec);
And to remove an item from the queue we can write
My_Rec_UQ.Dequeue(The_Rec);
where The_Rec is some object of type Rec which thereby is given the value removed.
The statement
N := Current_Use(My_Rec_UQ);
assigns to N the number of items on the queue when Current_Use was called (it could be out of date by the time it gets into N) and similarly Peak_Use(My_Rec_UQ) gives the maximum number of items that have been on the queue since it was declared.
This is all task safe because of the protected type; several tasks can place items on the queue and several, perhaps the same, can remove items from the queue without interference.
It should also be noted that, because the queue is unbounded, a call of Enqueue never blocks; extra storage is allocated as required just as for the other unbounded containers (although I suppose we might get Storage_Error).
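As a rough sketch of how such a queue might be shared by tasks (the task names Producer and Consumer and the loop structure are purely illustrative and not part of the library), we could write
task Producer;
task Consumer;

task body Producer is
   R: Rec;
begin
   loop
      ...   -- construct the next value of R
      My_Rec_UQ.Enqueue(R);   -- never blocks since the queue is unbounded
   end loop;
end Producer;

task body Consumer is
   R: Rec;
begin
   loop
      My_Rec_UQ.Dequeue(R);   -- blocks until an item is available
      ...   -- process R
   end loop;
end Consumer;
Several such producers and consumers could share My_Rec_UQ without further ado.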
The observant reader will note the mysterious local package called Implementation. This enables the implementation to declare local types to be used by the protected type. It will be recalled that there is an old rule that one cannot declare a type within a type. These local types really ought to be within the private part of the protected type; maybe this is something for Ada 2020.
The package for bounded synchronized queues is very similar. The only differences (apart from its name) are that it has an additional generic parameter Default_Capacity and the protected type Queue has an additional discriminant Capacity. So its specification is
with System; use System;
with A.C.Synchronized_Queue_Interfaces;
generic
   with package Queue_Interfaces is new
            A.C.Synchronized_Queue_Interfaces(<>);
   Default_Capacity: Count_Type;
   Default_Ceiling: Any_Priority := Priority'Last;
package A.C.Bounded_Synchronized_Queues is
   pragma Preelaborate(Bounded_Synchronized_Queues);
   package Implementation is
       ...   -- not specified by the language
   end Implementation;
   protected type Queue
                (Capacity: Count_Type := Default_Capacity;
                Ceiling: Any_Priority := Default_Ceiling)
            with Priority => Ceiling
               is new Queue_Interfaces.Queue with
   ...   -- etc as for the unbounded one
end A.C.Bounded_Synchronized_Queues;
So, using the same example, we can reuse the interface package Rec_Interface. Now suppose we wish to declare a bounded queue with capacity 1000; we can write
package Bounded_Rec_Package is
   new A.C.Bounded_Synchronized_Queues
          (Queue_Interfaces => Rec_Interface,
           Default_Capacity => 1000);
Finally, we can declare a protected object, My_Rec_BQ, which is the actual queue, thus
My_Rec_BQ: Bounded_Rec_Package.Queue;
And then we can use the queue as before. To place an object on the queue we can write
My_Rec_BQ.Enqueue(Some_Rec);
And to remove an item from the queue we can write
My_Rec_BQ.Dequeue(The_Rec);
The major difference is that if the queue becomes full then calling Enqueue will block the calling task until some other task calls Dequeue. Thus, unlike the other containers, Capacity_Error is never raised.
Note that having given a value for Default_Capacity, it can be overridden when the queue is declared, perhaps
My_Rec_Giant_BQ: Bounded_Rec_Package.Queue(Capacity => 100000);
These packages also provide control over the ceiling priority of the protected type. By default it is Priority'Last. This default can be overridden by our own default when the queue package is instantiated and can be further specified as a discriminant when the actual queue object is declared. So we might write
My_Rec_Ceiling_BQ: Bounded_Rec_Package.Queue(Ceiling => 10);
In the case of the bounded queue, if we do not give an explicit capacity then the ceiling has to be given using named notation. This does not apply to the unbounded queue, which has only one discriminant, so to give that a ceiling priority we can just write
My_Rec_Ceiling_UQ: Unbounded_Rec_Package.Queue(10);
But clearly the use of the named notation is advisable.
Being able to give default discriminants is very convenient. In Ada 2005, this was not possible if the type was tagged. However, in Ada 2012, it is permitted in the case of limited tagged types, and a protected type is considered to be limited. This was explained in detail in Section 4.4 on discriminants.
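Note incidentally that, because the types of both My_Rec_UQ and My_Rec_BQ are derived from the same interface Rec_Interface.Queue, we can write class-wide operations that work with either. A minimal sketch (the procedure Transfer is just for illustration and is not part of the library) might be
procedure Transfer(From, To: in out Rec_Interface.Queue'Class) is
   Item: Rec;
begin
   From.Dequeue(Item);   -- dispatches to the entry of the actual queue; may block
   To.Enqueue(Item);
end Transfer;
and then a call such as Transfer(My_Rec_UQ, My_Rec_BQ) moves one item from the unbounded queue to the bounded one.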
If we wanted to make a queue of indefinite objects then, as mentioned above, there is no special container for this because Dequeue would be difficult to use since it is a procedure and not a function. The actual parameter would have to be constrained, which means knowing the discriminant, tag, or bounds of the object before the call, and that is unlikely. However, we can use the holder container to wrap the indefinite type so that it looks definite.
So to create a queue for strings, using the example of the previous section, we can write
package Strings is
   new Ada.Containers.Indefinite_Holders(String);
package Strings_Interface is
   new A.C.Synchronized_Queue_Interfaces (Element_Type => Strings.Holder);
package Unbounded_Strings_Package is
   new A.C.Unbounded_Synchronized_Queues (Queue_Interfaces => Strings_Interface);
and then finally declare the actual queue
My_Strings_UQ: Unbounded_Strings_Package.Queue;
To put some strings on this queue, we write
My_Strings_UQ.Enqueue(To_Holder("rabbit"));
My_Strings_UQ.Enqueue(To_Holder("horse"));
or even
My_Strings_UQ.Enqueue(Element(Kennel));
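To take a string off this queue we first remove the holder and then extract the string from it. As a small sketch (the object The_Dog is just for illustration and Put_Line from Ada.Text_IO is assumed to be visible), we might write
declare
   The_Dog: Strings.Holder;
begin
   My_Strings_UQ.Dequeue(The_Dog);   -- removes the first holder placed on the queue
   Put_Line(The_Dog.Element);        -- the string itself, here "rabbit"
end;
The call of Element unwraps the holder and returns the string within; assuming only the two calls of Enqueue above, the first string out is "rabbit" since this queue is first-in first-out.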
We now turn to considering the two other forms of queue which are the unbounded and bounded priority queues.
Here is the specification of the unbounded priority queue
with System; use System;
with A.C.Synchronized_Queue_Interfaces;
generic
   with package Queue_Interfaces is new
            A.C.Synchronized_Queue_Interfaces(<>);
   type Queue_Priority is private;
   with function Get_Priority
     (Element : Queue_Interfaces.Element_Type)
               return Queue_Priority is <>;
   with function Before(Left, Right : Queue_Priority)
               return Boolean is <>;
   Default_Ceiling: Any_Priority := Priority'Last;
package A.C.Unbounded_Priority_Queues is
   pragma Preelaborate(Unbounded_Priority_Queues);
   package Implementation is
       ...   -- not specified by the language
   end Implementation;
   protected type Queue(Ceiling: Any_Priority := Default_Ceiling)
            with Priority => Ceiling
               is new Queue_Interfaces.Queue with
      overriding
      entry Enqueue(New_Item: in Queue_Interfaces.Element_Type);
      overriding
      entry Dequeue(Element: out Queue_Interfaces.Element_Type);
      not overriding
      procedure Dequeue_Only_High_Priority
            (At_Least: in Queue_Priority;
             Element: in out Queue_Interfaces.Element_Type;
             Success: out Boolean);
      overriding
      function Current_Use return Count_Type;
      overriding
      function Peak_Use return Count_Type;
   private
      ...   -- not specified by the language
   end Queue;
private
   ...   -- not specified by the language
end A.C.Unbounded_Priority_Queues;
The differences from the unbounded synchronized queue are that there are three additional generic parameters, namely the private type Queue_Priority and the two functions Get_Priority and Before, which operate on objects of the type Queue_Priority. In addition, the protected type Queue has an extra operation, the protected procedure Dequeue_Only_High_Priority.
The general idea is that elements have an associated priority which can be ascertained by calling the function Get_Priority. The meaning of this priority is given by the function Before.
When we call Enqueue, the new item is placed in the queue taking due account of its priority with respect to other elements already on the queue. So it will go before all less important elements as defined by Before. If existing elements already have the same priority then it goes after them.
As expected Dequeue just returns the first item on the queue and will block if the queue is empty.
The new procedure Dequeue_Only_High_Priority (note that, unlike the other operations, it is marked as not overriding) is designed to enable us to process items only if they are important enough, as defined by the parameter At_Least. Suppose the first element on the queue is E and that its priority is P, as given by Get_Priority(E). If Before(At_Least, P) is false, then the item at the head of the queue is indeed important enough; it is removed from the queue and the Boolean parameter Success is set to true. On the other hand, if Before(At_Least, P) is true, then the item is not removed and Success is set to false. Note especially that Dequeue_Only_High_Priority never blocks. If the queue is empty, Success is just set to false; it never waits for an item to be put on the queue.
As an (unrealistic) example, suppose we decide to make the queue of strings into a priority queue and that the priority is given by their length, so that "rabbit" takes precedence over "horse". Remember that the type of the elements is Strings.Holder. Since the priority is given by the attribute Length, we might as well make the actual type corresponding to Queue_Priority simply Natural. Then we define
function S_Get_Priority(H: Strings.Holder) return Natural is
   (H.Element'Length);
function S_Before(L, R: Natural) return Boolean is
   (L > R);
Note the convenient use of expression functions for this sort of thing.
The instantiation now becomes
package Unbounded_Priority_Strings_Package is
   new A.C.Unbounded_Priority_Queues
             (Queue_Interfaces => Strings_Interface,
              Queue_Priority => Natural,
              Get_Priority => S_Get_Priority,
              Before => S_Before);
and we then declare a queue thus
My_Strings_UPQ: Unbounded_Priority_Strings_Package.Queue;
To put some strings on this queue, we write
My_Strings_UPQ.Enqueue(To_Holder("rabbit"));
My_Strings_UPQ.Enqueue(To_Holder("horse"));
My_Strings_UPQ.Enqueue(To_Holder("donkey"));
My_Strings_UPQ.Enqueue(To_Holder("gorilla"));
The result is that "gorilla" will have jumped to the head of the queue despite having been put on last. It is followed by "rabbit" and then "donkey" (items of equal priority keep their order of arrival), with "horse" last.
If we do
My_Strings_UPQ.Dequeue_Only_High_Priority(7, Kennel, OK);
then the "gorilla" will be taken from the queue and placed in the Kennel and OK will be true. But if we then do it again, nothing will happen because the resulting head of the queue (the "rabbit") is not long enough.
Finally, we need to consider bounded priority queues. They are exactly like the unbounded priority queues except that they have the same additional features regarding capacity as found in the synchronized queues. Thus the only differences (apart from the name) are that there is an additional generic parameter Default_Capacity and the protected type Queue has an additional discriminant Capacity.
As a final example we will do a bounded priority queue of records. Suppose the records concern requests for servicing a dishwasher. They might include usual information such as the model number, name and address of owner and so on. They might also have a component indicating degree of urgency, such as
Urgent — machine has vomited dirty water all over floor; housewife/husband having a tantrum,
Major — machine won't do anything; husband refuses to help with washing up,
Minor — machine leaves some dishes unclean, mother-in-law is coming next week,
Routine — machine needs annual service.
So we might have
type Degree is (Urgent, Major, Minor, Routine);
type Dish_Job is
   record
      Urgency: Degree;
      Name: ...
      ...
   end record;
First we declare the interface for this type
package Dish_Interface is
   new A.C.Synchronized_Queue_Interfaces (Element_Type => Dish_Job);
and then we declare the two slave functions for the priority mechanism thus
function W_Get_Priority(X: Dish_Job) return Degree is
   (X.Urgency);
function W_Before(L, R: Degree) return Boolean is
   (Degree'Pos(L) < Degree'Pos(R));
The instantiation is then
package Washer_Package is
   new A.C.Bounded_Priority_Queues
            (Queue_Interfaces => Dish_Interface,
             Queue_Priority => Degree,
             Get_Priority => W_Get_Priority,
             Before => W_Before,
             Default_Capacity => 100);
and we declare the queue of waiting calls thus
Dish_Queue: Washer_Package.Queue;
which gives a queue with the default capacity of 100.
The staff taking requests then place the calls on the queue by
Dish_Queue.Enqueue(New_Job);
To cope with the possibility that the queue is full, they can do a timed entry call; remember that this is possible because the procedure Enqueue in the interface package has Synchronization => By_Entry.
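A sketch of such a timed entry call (the one second timeout and the response are of course just for illustration) might be
select
   Dish_Queue.Enqueue(New_Job);
or
   delay 1.0;
   ...   -- queue still full after one second; ask the customer to ring back later
end select;
If the call of Enqueue cannot be accepted within one second because the queue remains full, it is cancelled and the statements after the delay are executed instead.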
And then general operatives checking in and taking the next job do
Dish_Queue.Dequeue(Next_Job);
However, at weekends we can suppose that just one operative is on call and deals with only Urgent and Major calls. He might check the queue from time to time by calling
Dish_Queue.Dequeue_Only_High_Priority(Major, My_Job, Got_Job);
and if Got_Job is false, he can relax and go back to digging the garden or playing golf.
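A sketch of such a weekend routine (the half hour interval is just for illustration) might be
loop
   Dish_Queue.Dequeue_Only_High_Priority(Major, My_Job, Got_Job);
   if Got_Job then
      ...   -- attend to the unfortunate machine
   else
      delay 1800.0;   -- nothing pressing; check again in half an hour
   end if;
end loop;
Since Dequeue_Only_High_Priority never blocks, the operative is never left waiting at an empty queue.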

© 2011, 2012, 2013 John Barnes Informatics.