CVS difference for ais/ai-30302.txt

Differences between 1.3 and version 1.4
Log of other versions for file ais/ai-30302.txt

--- ais/ai-30302.txt	2004/03/02 04:45:01	1.3
+++ ais/ai-30302.txt	2004/05/29 00:38:42	1.4
@@ -9339,53 +9339,2693 @@
 
 ****************************************************************
 
-From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+From: Randy Brukardt
+Sent: Thursday, April 29, 2004  9:59 PM
+
+I've just posted the updated Container library AI. [This is version /03.] This
+was updated to reflect the conclusions of the six hours of discussion (which
+was a record for a single AI) at the Phoenix meeting.
+
+I'm happy to say that most of the suggestions made here were implemented in
+some way. Indefinite element containers were added, as well as a list
+container. Set operations were added to the set package. Iteration was
+changed somewhat to be more familiar to Ada programmers. The operations and
+their semantics were made more regular.
+
+Comments are welcome. (But please remember that I have to read and file all
+of them for the permanent record, so try to take the long-winded discussions
+of philosophy to comp.lang.ada. :-)
 
 ****************************************************************
 
-From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+From: Pascal Obry
+Sent: Friday, April 30, 2004  1:26 AM
+
+That's great news! Congratulations to all for the hard work on this issue.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Friday, April 30, 2004  3:04 PM
+
+> I've just posted the updated Container library AI...
+
+Excellent!
+
+Just a tiny comment at this time: the names Indefinite_Vectors, etc. do not
+sound right to me, because the element type is indefinite, not the
+containers. Alternatives:
+
+1. Containers.Indefinite_Elements.Vectors
+2. Containers.Vectors_Of_Indefinite_Elements
+3. Containers_Of_Indefinite_Elements.Vectors
+
+("Indefinite_Elements" is not literally correct either because the type, not
+the elements, is indefinite. But it is a common idiom to say "things" in
+place of "thing type".)
+
+I think I like 3.
+
+****************************************************************
+
+From: Jean-Pierre Rosen
+Sent: Friday, April 30, 2004  8:20 AM
+
+Everybody talks about a real vector, or a complex matrix. Doesn't seem
+to hurt the mathematicians...
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Friday, April 30, 2004  11:21 AM
+
+Vector of real numbers... real vector.
+Vector of elements of indefinite type... vectors of indefinite elements...
+indefinite vector.
+OK, I think the ears will get accustomed.
+
+****************************************************************
+
+From: Jeffrey Carter
+Sent: Friday, April 30, 2004  5:27 PM
+
+> Comments are welcome. (But please remember that I have to read and file all
+> of them for the permanent record, so try to take the long-winded discussions
+> of philosophy to comp.lang.ada. :-)
+
+Perhaps I'm missing something, but I don't see why the vector component
+needs the assertion anymore. If it's not needed, it would be nice to
+eliminate it.
+
+****************************************************************
+
+From: Dan Eilers
+Sent: Friday, April 30, 2004  6:25 PM
+
+  Some typos:
 
+> All containers are non-limited, and hence allow ordinary assignment.  In
+> the unique case of a vector, there is a separate assignment procedure:
+>
+>    Assert (Target => V1, Source => V2);
+     ^^^^^^
+
+> The reason is that the model for a vector is that it's implemented using
+> an unconstrained array. During ordinary assignment, the internal array
+> is deallocated (during controlled finalization), and then a new internal
+> [array] is allocated (during controlled adjustment) to store a copy of the
+  ^^^^^^^
+
+"is may not"
+
+hat the average bucket
+
+caching *effects
+
+arbitary
+conbined
+evalution
+exmples
+Generic_Revserse_Find
+heirarchy
+Indefinited_Hashed_Maps
+insuffiently
+machinary
+simplied
+stratgies
+sucessful
+
+****************************************************************
+
+From: Christoph Grein
+Sent: Thursday, May 6, 2004  4:07 AM
+
+A few more typos:
+
+specify precisely where this will happen (it will happen no lat{t}er than the
+                                                               ^^^
+
+AARM Note: Replace_Element, Generic_Update, and Generic_Update_by_Index are
+[the] only ways that an element can change from empty to non-empty.
+^^^^^
+
+Any exceptions raising during element assignment
+               raised (as everywhere else)
+
+cursor designates with a[n] index value (or a cursor designating an element at
+                        ^^^
+
+declared in Containers.Vectors with a[n] ambiguous (but not invalid, see below)
+                                     ^^^
+
+but it is {is} a *different* element
+          ^^^^
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Thursday, May  6, 2004  3:07 PM
+
+What happened to the Lower_Bound, Upper_Bound and "insert with hint"
+operations for sets? They were very useful. Is there a way to make the same
+kind of searches/updates with the new spec?
+
+Furthermore, often the user already has a cursor value for an element that he
+knows is a bound for another search he wants to make. It should be possible
+to use this information to improve the search.
+
+Previous versions (e.g. 1.1) of the spec had an "insert with hint" operation
+providing something similar, albeit more restrictive (the known cursor had to
+be adjacent). The current version does not have even this.
+
+/* I found these requirements in real world situations, namely writing a
+database system that uses large sets to store some things. */
+
+At least implementation permissions/advice should exist allowing/encouraging
+implementations to provide these optimized search/update operations. Namely
+via operations with the standard profiles except having additional "hint"
+parameters. Better yet make a number of these optimized profiles standard,
+permitting the actual optimization to be null. To assure portability.
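+
+For example (a sketch only; neither this profile nor the Hint parameter is in
+any draft), such a hinted operation could look like:
+
+   function Find
+     (Container : Set;
+      Item      : Element_Type;
+      Hint      : Cursor) return Cursor;
+   --  Same result as the ordinary Find (Container, Item).  An implementation
+   --  may start searching near Hint, or ignore Hint entirely (the "null
+   --  optimization" mentioned above).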
+
 ****************************************************************
 
 From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+Sent: Monday, May 10, 2004  6:48 PM
+
+> What happened to the Lower_Bound, Upper_Bound and "insert with hint"
+> operations for sets? They were very useful. Is there a way to make the same
+> kind of searches/updates with the new spec?
+
+I tried to keep them, but I argued badly and hence lost that vote.
+
+I discussed restoring these operations (Lower_Bound and Upper_Bound)
+with Randy, and he said I'd have to post a message on ada-comment
+justifying why those operations are needed.  (You have to do it that way
+since the entire ARG voted at the last meeting, and you have to give them
+the opportunity to reconsider their decision during the next meeting.)
+
+It's good that you're asking about this, since that's evidence that
+there is interest in these operations from someone other than me.
+
+It would be helpful if you could post a follow-up message on ada-comment
+giving a specific example of why you need LB and UB.  The ARG can then
+put this on the agenda for the ARG meeting in Palma.
+
+
+> Furthermore, often the user already has a cursor value for an element that he
+> knows is a bound for another search he wants to make. It should be possible
+> to use this information to improve the search.
+
+That's similar to insert-with-hint.  However, the ARG members weren't
+persuaded by my defense of optimized insertion.
+
 
+> Previous versions (e.g. 1.1) of the spec had an "insert with hint" operation
+> providing something similar, albeit more restrictive (the known cursor had to
+> be adjacent). The current version does not have even this.
+
+Yes, that is correct.  Personally I can live without the
+insert-with-hint operations (because you have insert-sans-hint), but I
+think removing the Lower_Bound and Upper_Bound operations was a mistake,
+since that leaves no way to find the set element nearest some key.  All
+you have now is basically a membership test, which is too coarse a
+granularity.
+
+For example, someone on CLA had an ordered set of integers, and he
+wanted to iterate over the values in [0, 1000), then from [1000, 2000),
+etc.  Without Lower_Bound there's no way to do that.
+
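+A minimal sketch of that example, using the 1.1-style names that also appear
+in the Mneson code below (Set_Type, Cursor_Type, Back, Element, Increment);
+Integer_Sets is an assumed instantiation for Integer, and Process stands for
+whatever is done with each value:
+
+   procedure Process_Range
+     (S : Integer_Sets.Set_Type; From, To : Integer)  -- half-open [From, To)
+   is
+      use Integer_Sets;
+      I : Cursor_Type := Lower_Bound (S, From);  -- first element >= From
+   begin
+      while I /= Back (S) and then Element (I) < To loop
+         Process (Element (I));
+         Increment (I);
+      end loop;
+   end Process_Range;
+
+Without Lower_Bound, reaching the first element >= From requires a linear
+scan from the start of the set.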
+
+> /* I found these requirements in real world situations, namely writing a
+> database system that uses large sets to store some things. */
+
+Please post an example of what you're trying to do, and show how it
+can't be done without Lower_Bound and Upper_Bound.
+
+
+> At least implementation permissions/advice should exist allowing/encouraging
+> implementations to provide these optimized search/update operations. Namely
+> via operations with the standard profiles except having additional "hint"
+> parameters. Better yet make a number of these optimized profiles standard,
+> permitting the actual optimization to be null. To assure portability.
+
+Give an example of why you need Lower_Bound and Upper_Bound, and request
+that the ARG put it on the agenda for Palma.
+
+Some other possibilities are:
+
+procedure Find
+   (Container : in     Set;
+    Key       : in     Key_Type;
+    Position  :    out Cursor;
+    Success   :    out Boolean);
+
+If the Key matches, then Success=True and Position.Key = Key.
+Otherwise, Success=False and Key < Position.Key.
+
+Technically you don't need that, since you can test the result of
+Lower_Bound:
+
+    Position : Cursor := Lower_Bound (Set, Key);
+begin
+    if Key < Position then
+       null;  --Position denotes next (successor) neighbor
+    else
+       null;  --Position denotes node containing Key
+    end if;
+
+
+Another possibility is to name it something like "Ceiling" or whatever.
+
+An additional possibility is something (STL) like:
+
+procedure Equal_Range
+   (Container   : in     Set;
+    Key         : in     Key;
+    Lower_Bound :    out Cursor;
+    Upper_Bound :    out Cursor);
+
+Then you can test:
+
+    Lower_Bound = Upper_Bound => key not found
+    Lower_Bound /= Upper_Bound => found
+
+This latter operation has the benefit of working with multisets too.
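+
+A usage sketch (profile as above; Process stands for whatever is done with
+each matching element; in a multiset the two cursors can be more than one
+element apart, which is the point of the operation):
+
+   declare
+      LB, UB : Cursor;
+   begin
+      Equal_Range (Container, Key, LB, UB);
+      while LB /= UB loop          -- empty loop when the key is absent
+         Process (LB);             -- visit every element matching Key
+         Next (LB);
+      end loop;
+   end;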
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Tuesday, May 11, 2004  6:55 AM
+
+Upon Heaney's advice, I'll detail the case for optimized operations for
+sets.
+
+I use Lower_Bound in the implementation of Mneson, in at least four
+subprograms, excerpted below. For the entire code see
+www.liacc.up.pt/~maa/mneson. Mneson is a database system based on a directed
+graph implemented as a set of links.
+
+Link_Sets is an instantiation of AI302.Containers.Ordered_Sets for
+Link_Type, which is an array (1 .. 2) of vertices. Link_Set and Inv_Link_Set
+are Link_Sets.Set_Type objects. Links are ordered by the 1st component, then
+by the 2nd. Front_Vertex is an unlinked vertex value lower than any other.
+
+   procedure Delete_All_In_Range
+     (Link_Set : in out Link_Sets.Set_Type; From, To : Link_Type)
+   is
+      use Link_Sets;
+      I : Cursor_Type := Lower_Bound (Link_Set, From);
+   begin
+     while I /= Back (Link_Set) loop
+       exit when Element (I) > To;
+       Delete (Link_Set, I);
+     end loop;
+   end;
+
+   procedure For_Each_Link_In_Range
+     (Set : Link_Sets.Set_Type; From, To : Link_Type)
+   is
+      use Link_Sets;
+      I : Cursor_Type := Lower_Bound (Set, From);
+      E : Link_Type;
+   begin
+      I := Lower_Bound (Set, From);
+      while I /= Back (Set) loop
+         E := Element (I);
+         exit when E > To;
+         Process (E);
+         Increment (I);
+      end loop;
+   end;
+
+   function Connected (Source : Vertex) return Boolean is
+      use Link_Sets;
+   begin
+      return
+         Lower_Bound (Links, (Source, Front_Vertex)) /= Null_Cursor;
+   end;
+
+   function Inv_Connected (Target : Vertex) return Boolean is
+      use Link_Sets;
+   begin
+      return
+         Lower_Bound (Inv_Links, (Target, Front_Vertex)) /= Null_Cursor;
+   end;
+
+I'm also developing optimized algorithms for set intersection that require
+not only Lower_Bound but also search with hint (known bounds), and
+eventually Upper_Bound. These are still on the drawing board, but I already
+know at this point that they require those operations. Soon I'll have some
+code, but it's rather complicated, because Mneson sets can be of various
+kinds, extensional and intensional. The basic extensional kind is a designated
+vertex whose targets are the elements; the intensional kind is a dedicated
+"selection" structure, designed for lazy evaluation, with elements
+represented in several ways and materialized only upon certain operations
+like iteration and extraction.
+
+My interest is databases. At least here, ordered sets are an incredibly
+useful thing. Pretty much every interesting database function can be defined
+in terms of them. In a graph-based implementation like Mneson, set
+intersection is crucial.
+
+The spec now has the full set algebra (union, intersection, differences,
+etc.) That is good, and if their performance were ideal for all purposes,
+I'd be silent.
+
+But I know their performance cannot be ideal in many situations, because I
+know optimization techniques that require more than what the spec now
+offers. Namely they require search with hint and/or Lower_Bound.
+
+And anyway the spec does not specify performance for them (only for Insert,
+Find, Element).
+
+Also note that the Find operations for Vectors and Hashed_Maps are kind of
+hintful, so it's only fair that Ordered_Sets have these versions too.
+
+For databases, performance is paramount. Even apparently small gains matter.
+"Apparently" because many database functions scale worse than lineary, e.g.
+cross products. Optimization is these cases is a must. In many cases the
+optimization makes all the difference (between feasible and unfeasible).
+
+Optimization is invariably based on knowledge the system prepares about the
+sets in the expression queried for computation. The preparation time is
+usually negligible. In a system implemented with Ada.Containers, a great part
+of the prepared knowledge is ultimately expressed as cursor values for known
+element value bounds for the sought element ranges.
+
+Ordered_Sets implementations are likely to be able to take advantage of this
+knowledge for improving time performance (the previous AI302 "insert with
+hint" is an example).
+
+Therefore it is required that this knowledge can be passed to the basic
+operations.
+
+Immodestly assuming I've made a convincing case, I can report that Heaney
+and I have solid ideas on what the operations should look like, and we are
+ready to prepare a pretty open-and-shut proposal for Palma. I myself will be
+there from Sunday to Sunday, and happily available for discussion. To me the
+most promising format is *prescribing* hintful versions of Find et al. but
+with only *advised* performance, i.e. allowing null optimization.
+
 ****************************************************************
 
 From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+Sent: Tuesday, May 11, 2004  12:37 PM
+
+> Link_Sets is an instantiation of AI302.Containers.Ordered_Sets for
+> Link_Type, which is an array (1 .. 2) of vertices. Link_Set and Inv_Link_Set
+> are Link_Sets.Set_Type objects. Links are ordered by the 1st component, then
+> by the 2nd. Front_Vertex is an unlinked vertex value lower that any other.
+>
+>    procedure Delete_All_In_Range
+>      (Link_Set : in out Link_Sets.Set_Type; From, To : Link_Type)
+>    is
+>       use Link_Sets;
+>       I : Cursor_Type := Lower_Bound (Link_Set, From);
+>    begin
+>      while I /= Back (Link_Set) loop
+>        exit when Element (I) > To;
+>        Delete (Link_Set, I);
+>      end loop;
+>    end;
+
+You might want to vet From and To, to assert that they're in order.  It
+also looks like you mean to delete the node designated by To (this is
+apparently a closed range), which means you could use Upper_Bound to
+find the endpoint of the range:
+
+   procedure Delete_All_In_Range
+     (Link_Set : in out Link_Sets.Set_Type; From, To : Link_Type)
+   is
+      pragma Assert (From <= To);
+
+      use Link_Sets;
+      I : Cursor_Type := Lower_Bound (Link_Set, From);
+      J : constant Cursor_Type := Upper_Bound (Link_Set, To);
+   begin
+     while I /= J loop
+       Delete (Link_Set, I);
+     end loop;
+   end;
+
+
+>    procedure For_Each_Link_In_Range
+>      (Set : Link_Sets.Set_Type; From, To : Link_Type)
+>    is
+>       use Link_Sets;
+>       I : Cursor_Type := Lower_Bound (Set, From);
+>       E : Link_Type;
+>    begin
+>       I := Lower_Bound (Set, From);  --???
+>       while I /= Back (Set) loop
+>          E := Element (I);
+>          exit when E > To;
+>          Process (E);
+>          Increment (I);
+>       end loop;
+>    end;
+
+This again appears to be a closed range, so I recommend using
+Upper_Bound to find the endpoint:
+
+   procedure For_Each_Link_In_Range
+     (Set : Link_Sets.Set_Type; From, To : Link_Type)
+   is
+      pragma Assert (From <= To);
+
+      use Link_Sets;
+      I : Cursor_Type := Lower_Bound (Set, From);
+      J : constant Cursor_Type := Upper_Bound (Set, To);
+   begin
+      while I /= J loop
+         Process (Element (I));
+         Increment (I);
+      end loop;
+   end;
+
+Alternatively, you could use the new Generic_Update procedure:
+
+   procedure For_Each_Link_In_Range
+     (Set : Link_Sets.Set_Type; From, To : Link_Type)
+   is
+      pragma Assert (From <= To);
+
+      use Link_Sets;
+
+      procedure Process (E : in out Link_Type) is
+      begin
+         ...; --whatever
+      end;
+
+      procedure Update is new Generic_Update;
+
+      I : Cursor_Type := Lower_Bound (Set, From);
+      J : constant Cursor_Type := Upper_Bound (Set, To);
+   begin
+      while I /= J loop
+         Update (I);
+         Increment (I);
+      end loop;
+   end;
+
+(Note that I only have the vectors done in the reference implementation.)
+
+
+>    function Connected (Source : Vertex) return Boolean is
+>       use Link_Sets;
+>    begin
+>       return
+>          Lower_Bound (Links, (Source, Front_Vertex)) /= Null_Cursor;
+>    end;
+
+
+Lower_Bound will only return Null_Cursor if the value is greater than
+every element in the set.  So it looks like you're testing whether the
+value is less than or equal to an element in the set.  There are
+probably other ways to implement this predicate function, for example:
+
+   function Connected (Source : Vertex) return Boolean is
+      use Link_Sets;
+   begin
+      if Is_Empty (Source) then
+          return False;
+      end if;
+
+      return Link_Type'(Source, Front_Vector) <= Last_Element (Links);
+   end;
 
+
+>    function Inv_Connected (Target : Vertex) return Boolean is
+>       use Link_Sets;
+>    begin
+>       return
+>          Lower_Bound (Inv_Links, (Target, Front_Vertex)) /= Null_Cursor;
+>    end;
+
+Ditto for this function.
+
+The moral here is you don't need Lower_Bound if all you do is throw away
+its result.
+
+However, it looks like in the first two examples, you have a legitimate
+need for Lower_Bound (and arguably Upper_Bound, too).
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Tuesday, May 11, 2004  1:20 PM
+
+> ...
+> >    function Connected (Source : Vertex) return Boolean is
+> >       use Link_Sets;
+> >    begin
+> >       return
+> >          Lower_Bound (Links, (Source, Front_Vertex)) /= Null_Cursor;
+> >    end;
+>
+>
+> Lower_Bound will only return Null_Cursor if the value is greater than
+> every element in the set.
+
+Oops, this was a bug. Thanks a lot for catching it. What I must have meant
+is:
+
+   X := Lower_Bound (Links, (Source, Front_Vertex));
+   return X /= Null_Cursor and then Element (X) (1) = Source;
+
+Thanks a lot for the other suggestions too. I won't be applying them yet
+because if-it-works-dont-fix-it, but I've certainly queued them in the
+Mneson "to do" list.
+
+> ...it looks like in the first two examples, you have a legitimate
+> need for Lower_Bound (and arguably Upper_Bound, too).
+
+Yes. And these, unlike the specific version of Connect above, are used and
+tested.
+
+(It seems the specific version of Connected above had not been used yet,
+which accounts for its fault not being detected. It's there in the library
+because when I wrote the library it looked like it would be necessary. Thanks
+to you, if and when it is used, it will be flawless. Thanks again.)
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Tuesday, May 11, 2004  2:17 PM
+
+I think I missed the beginning of this discussion,
+but I would agree with the suggestion for using
+Floor and Ceiling rather than Lower_Bound and Upper_Bound,
+to find the nearest element of the set no greater
+(or no less, respectively) than a given value.
+And I agree they would be useful operations on an ordered set.
+
+Lower_Bound and Upper_Bound seem more likely to refer to the
+minimum and maximum elements of the entire set.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Tuesday, May 11, 2004  3:31 PM
+
+> ... I would agree with the suggestion for using
+> Floor and Ceiling...
+
+Good. One of the proposals I'm discussing with Matt has indeed
+
+   function Floor (Item : Element_Type) return Cursor;
+   function Ceiling (Item : Element_Type) return Cursor;
+
+where Ceiling = Lower_Bound, but Floor /= Upper_Bound, Floor =
+Reverse_Lower_Bound.
+
+(Here Lower_Bound and Upper_Bound are the functions defined in version 1.1
+of the spec, and that were dropped in the current. Reverse_Lower_Bound is a
+fictitious function like Lower_Bound but in reverse order.)
+
+The proposal also has
+
+   function Slice (Container : Set; Low, High : Cursor) return Set;
+   function Open_Bound (Position : Cursor) return Cursor;
+
+The four functions provide a complete search optimization framework. The
+main idea is that a slice can be used to convey range and/or optimization
+information to any operation.
+
+Slice returns the subset of Set consisting of the elements of Set that are
+in the specified interval.
+
+Open_Bound returns a cursor marked as an open bound when used in Slice. An
+unmarked cursor represents a closed bound.
+
+The integer set example, namely to iterate over the values in [0, 1000),
+then [1000, 2000), etc., becomes:
+
+   procedure Iterate is new Generic_Iteration;
+begin
+   Iterate (Slice (Integer_Set, Floor (0), Open_Bound (Ceiling (1000))));
+   Iterate (Slice (Integer_Set, Floor (1000), Open_Bound (Ceiling (2000))));
+
+Also, with this framework, Upper_Bound (Set, Item) can be realised
+functionally as:
+
+First
+  (Slice
+     (Set,
+      Open_Bound (Ceiling (Set, Item)),
+      Last (Set)))
+
+So no need for Upper_Bound.
+
+The only sensitive aspect of this framework is the use of a slice as an
+object of update operations (Insert, etc.) A slice is likely to be best
+represented as a 'virtual' set, i.e. only a 'view' to the corresponding
+subset of its 'ground' container. We are currently checking whether and how
+and which update operations can process this virtual object properly.
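+
+One way such a view might be represented internally (a sketch only; none of
+these names are part of the proposal):
+
+   type Slice_View is record
+      Ground    : Set_Access;   -- the underlying ("ground") container
+      Low, High : Cursor;       -- bounds within Ground
+      Low_Open  : Boolean;      -- True when the bound came from Open_Bound
+      High_Open : Boolean;
+   end record;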
+
 ****************************************************************
 
 From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+Sent: Tuesday, May 11, 2004  5:13 PM
+
+> Lower_Bound and Upper_Bound seem more likely to refer to the
+> minimum and maximum elements of the entire set.
+
+As Mario has pointed out, Ceiling is equivalent to Lower_Bound.
+
+There is no function that corresponds to a Floor function in the STL,
+Charles, or earlier releases of the AI-302 draft.  I did discuss how to
+implement a floor function in the examples section of earlier drafts, as
+follows:
+
+    Floor (S, K) = Previous (Upper_Bound (S, K))
+
+Here Floor is derived from Upper_Bound.  In the most recent draft, for
+subtle reasons you have to implement Floor as:
+
+    function Floor (S, K) return Cursor is
+       C : Cursor := Upper_Bound (S, K);
+    begin
+       if C = No_Element then
+          return Last (S);
+       else
+          return Previous (C);
+       end if;
+    end;
+
+To derive Upper_Bound from Floor, I think it would be:
+
+    function Upper_Bound (S, K) return Cursor is
+       C : Cursor := Floor (S, K);
+    begin
+       if C = No_Element then
+         return First (S);
+       else
+         return Next (C);
+       end if;
+    end;
+
+To iterate over the half-open range [K1, K2), where K1 <= K2, I think
+you would have to write:
+
+    declare
+       I : Cursor := Ceiling (S, K1);
+       J : Cursor := Floor (S, K2);
+    begin
+       if J = No_Element then
+          J := First (S);
+       end if;
+
+       while I /= J loop
+           ...
+           Next (I);
+       end loop;
+    end;
+
+However, this seems a little awkward.  (Assuming my analysis is correct.
+I have to think about whether [K1, K2) is a closed range or a
+half-open range.  Mario's example was a closed range.)
+
+What we really need is something to complement Ceiling, something like
+"Strict_Ceiling" or "Proper_Ceiling", e.g.
+
+    declare
+       I : Cursor := Ceiling (S, K1);         -- K1 <= I.Key
+       J : Cursor := Proper_Ceiling (S, K2);  -- K2 < J.Key
+    begin
+       while I /= J loop ...;
+    end;
+
+Is there a technical term for "proper ceiling"?  I want a function that,
+given a key, returns the smallest key greater than the key.  (That's
+what function Upper_Bound returns, but that name seems to be confusing
+to people unfamiliar with the STL.)
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Tuesday, May 11, 2004  5:19 PM
+
+Two corrections:
+
+
+Slice returns the subset of *Container* consisting of the elements of
+*Container* that are in the specified interval (not Set, that's the type).
+
+
+Upper_Bound (S, Item) =
+First
+  (Slice
+     (S,
+      Open_Bound (*Floor* (S, Item)),
+      Last (S)))
+
+(S instead of Set because that's the type name, and Floor, not Ceiling)
+
 
+Sorry.
+
+
+BTW, Reverse_Upper_Bound (S, Item) =
+
+Last
+  (Slice
+     (S,
+      First (S),
+      Open_Bound (Ceiling (S, Item)))).
+
+Also,
+
+   S = Slice (S, First (S), Last (S))
+
+should always hold.
+
+Currently thinking about the special cases, namely those with occurrences of
+No_Element.
+
+And about the slice-for-update problem: easily solved with a specification
+similar to the current one for invalid cursors, given that a slice is
+expressed as cursor values.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Wednesday, May 12, 2004  3:15 AM
+
+> Here Floor is derived from Upper_Bound.  In the most recent draft, for
+> subtle reasons you have to implement Floor as:
+>
+>     function Floor (S, K) return Cursor is
+>        C : Cursor := Upper_Bound (S, K);
+>     begin...
+
+But you don't have Upper_Bound in the most recent draft!
+
+> What we really need is something to complement Ceiling, something like
+> "Strict_Ceiling" or "Proper_Ceiling", e.g.
+
+I'm against strange things in the spec. Give the user only well known
+concepts. A complete set of primitive well known concepts. Ceiling, Floor,
+Slice, Open_Bound. Then he can derive whatever Strange_Ceiling he wants.
+
+> Is there a technical term for "proper ceiling"?
+
+Smallest_Greater_Than :-) But then to be complete you need also
+Greatest_Smaller_Than. But then again, don't give strange things.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Wednesday, May 12, 2004  3:44 AM
+
+>    procedure Delete_All_In_Range
+>      (Link_Set : in out Link_Sets.Set_Type; From, To : Link_Type)
+>    is
+>       pragma Assert (From <= To);
+>
+>       use Link_Sets;
+>       I : Cursor_Type := Lower_Bound (Link_Set, From);
+>       J : constant Cursor_Type := Upper_Bound (Link_Set, To);
+>    begin
+>      while I /= J loop
+>        Delete (Link_Set, I);
+>      end loop;
+>    end;
+
+My impression is that the original version is more efficient, because it
+only calls a search function once (Lower_Bound). Your version makes two
+calls (Lower_Bound, Upper_Bound). I assume these operations have O(log n)
+time performance, and the others (Back, Element, Delete) constant time. But
+my version calls these more times. So I'd have to check the absolute times.
+
+This also provides an example for optimized search with Slice. Because I
+know the upper bound must be above the lower, I could pass this information
+to Upper_Bound:
+
+   J : constant Cursor_Type :=
+     Upper_Bound (Slice (Link_Set, I, Last (Link_Set)), To);
+
 ****************************************************************
 
 From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+Sent: Wednesday, May 12, 2004  9:33 AM
+
+>>What we really need is something to complement Ceiling, something like
+>>"Strict_Ceiling" or "Proper_Ceiling", e.g.
+>
+> I'm against strange things in the spec. Give the user only well known
+> concepts. A complete set of primitive well known concepts. Ceiling, Floor,
+> Slice, Open_Bound. Then he can derive whatever Strange_Ceiling he wants.
+
+Does Upper_Bound qualify as a "well known concept"?  All I'm trying to
+do is come up with another name for Upper_Bound.
+
+>>Is there a technical term for "proper ceiling"?
+>
+> Smallest_Greater_Than :-) But then to be complete you need also
+> Greatest_Smaller_Than. But then again, don't give strange things.
+
+But Upper_Bound isn't a strange thing.
+
+I suspect Stepanov was motivated by the set-theoretic terms "upper
+bound," "least upper bound", etc.  But I think it's that conflation to
+which Tucker objects.
+
+Other names for Upper_Bound are: limit, supremum, supremum limit, etc.
+
+procedure Op (K1, K2 : Key_Type) is
+    I : Cursor := Ceiling (Set, K1);
+    J : constant Cursor := Limit (Set, K2);
+begin
+    while I /= J loop ...;
+end;
 
 ****************************************************************
 
 From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+Sent: Wednesday, May 12, 2004  10:16 AM
 
+> Lower_Bound and Upper_Bound seem more likely to refer to the
+> minimum and maximum elements of the entire set.
+
+One counter-argument is that both Lower_Bound and Upper_Bound accept a key.
+
+Maybe we could provide these:
+
+Lower_Limit
+Floor
+Ceiling   (AKA Lower_Bound)
+Upper_Limit  (AKA Upper_Bound)
+
+with the following semantics:
+
+Key (Lower_Limit (S, K)) < K
+
+Key (Floor (S, K)) <= K
+
+Key (Ceiling (S, K)) >= K
+
+Key (Upper_Limit (S, K)) > K
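+
+As a sketch, reading each of those as the nearest such element (the natural
+intent, given the Previous/Next equivalences discussed below), the profiles
+might be:
+
+   function Lower_Limit (Container : Set; Key : Key_Type) return Cursor;
+   --  last element whose key is < Key, or No_Element if there is none
+
+   function Floor (Container : Set; Key : Key_Type) return Cursor;
+   --  last element whose key is <= Key, or No_Element if there is none
+
+   function Ceiling (Container : Set; Key : Key_Type) return Cursor;
+   --  first element whose key is >= Key, or No_Element if there is none
+
+   function Upper_Limit (Container : Set; Key : Key_Type) return Cursor;
+   --  first element whose key is > Key, or No_Element if there is none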
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, May 12, 2004  11:41 AM
+
+I don't find the names Lower_Limit and Upper_Limit
+a whole lot better than Lower_Bound/Upper_Bound.
+
+I don't see why you need them.  It seems
+Lower_Limit(S,K) = Previous(Ceiling(S,K)) and
+Upper_Limit(S,K) = Next(Floor(S,K))
+
+Or am I confused?
+
 ****************************************************************
 
 From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+Sent: Wednesday, May 12, 2004  1:46 PM
+
+No, you got it right, except for the endpoints; see my last message.
+For example, if Ceiling(S,K) returns No_Element (because K is large),
+then Previous(Ceiling(S,K)) returns No_Element, whereas Lower_Limit
+returns Last(S).
+
+We can define the abstraction to have the semantics you describe above,
+but I think that requires that (1) the set has an internal sentinel and
+(2) type Cursor is privately tagged.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Wednesday, May 12, 2004  10:36 AM
+
+> Lower_Limit
+> Floor
+> Ceiling   (AKA Lower_Bound)
+> Upper_Limit  (AKA Upper_Bound)
+
+Better to keep a consistent metaphor: Ground, Floor, Ceiling, Roof. "Limit" is
+too abstract.
+
+Alternatives for Ground: Basement, Base... Underworld :-)
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Wednesday, May 12, 2004  11:47 AM
+
+> Lower_Limit
+> Floor
+> Ceiling   (AKA Lower_Bound)
+> Upper_Limit  (AKA Upper_Bound)
+
+In mathematics "lower limit" applies to a sequence of values (e.g. the values
+of sin (x) with x from zero to infinity), and means the least value of the
+sequence. So it's really more similar to First.
+
+[My main source for checking this stuff has been the Wikipedia
+(en.wikipedia.org)]
+
+Have you considered my Slice, Open_Bound proposal yet?
+
+Recapitulating:
+
+Ground, Floor, Ceiling, Roof, do not solve the problem of providing search
+optimization information to the other operations.
+
+Slice, Open_Bound do.
+
+And Ground, Roof can be derived from Ceiling, Floor, Slice, Open_Bound, First,
+Last.
+
+So my proposal is adding Ceiling, Floor, Slice, Open_Bound.
+
+And eventually Ground, Roof defined as "equivalent to..."
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Wednesday, May 12, 2004  8:22 AM
+
+Connected *is* a legitimate example of the need for Lower_Bound.
+
+The fixed Connected body is
+
+      X : Cursor_Type := Lower_Bound (Links, (Source, Front_Vertex));
+   begin
+      return X /= Null_Cursor and then Element (X) (1) = Source;
+
+Matt, your suggestion,
+
+>    function Connected (Source : Vertex) return Boolean is
+>       use Link_Sets;
+>    begin
+>       if Is_Empty (Source) then
+>           return False;
+>       end if;
+>
+>       return Link_Type'(Source, Front_Vector) <= Last_Element (Links);
+>    end;
+
+won't work. Apart from the obvious bugs Is_Empty (Source) which should be
+Is_Empty (Links), and Front_Vector which should be Front_Vertex, the return
+expression
+
+   Link_Type'(Source, Front_Vertex) <= Last_Element (Links)
+
+does not yield as desired. Front_Vertex is a value that is never connected
+and is lower than any other. Let's say Front_Vertex = 0 and Links = ((2, 3),
+(2, 4)). Then Connected (1) would (erroneously) return True, because (1, 0)
+<= (2, 4). You're not checking for actual membership in Links. Maybe you had
+something else in mind.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Wednesday, May 12, 2004  10:30 AM
+
+<<Does[n't] Upper_Bound qualify as a "well known concept"? >>
+
+Not terribly, no.
+
+<<All I'm trying to do is come up with another name for Upper_Bound....
+I suspect Stepanov was motivated by the set-theoretic terms "upper
+bound," "least upper bound", etc. ...
+Other names for Upper_Bound are: limit, supremum, supremum limit, etc.>>
+
+Actually the term "least upper bound" in mathematics is what we have been
+calling *Lower_Bound*, or Ceiling. And "greatest lower bound" is Floor. I don't
+know a mathematical term for what we have been calling Upper_Bound. Which to me
+indicates a bit of strangeness.
+
+Also, Upper_Bound (let's keep calling it that):
+- does not seem to be so useful as Ceiling, if at all
+- can be derived with First, Last, and Slice
+My previous examples demonstrate this.
+
+But the term you're looking for might be: Roof.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, May 12, 2004  12:40 PM
+
+> Have you considered my Slice, Open_Bound proposal yet?
+
+I don't see the need for "Slice" or "Open_Bound".
+These seem to be introducing a layer of "virtual"
+set on top, which you could do with a new abstraction.
+Is there a real efficiency need here, or just a desire
+for the additional abstraction level?
+
+For example, it seems using an Open_Bound as the high
+bound of an iteration is equivalent to iterating up to
+Previous(Ceiling()).  You can easily create a "real"
+slice by iterating from the low bound to the high
+bound and insert the result in a new set.  If you want
+a "virtual" slice, then to me that is an additional
+layer on top, and not something appropriate for the
+basic Ordered_Sets abstraction.
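+
+A "real" slice along those lines might look like this sketch (Copy_Range is a
+hypothetical helper; Ceiling is as proposed above, and Insert is assumed to
+have the four-parameter profile used elsewhere in this thread):
+
+   function Copy_Range (S : Set; Low, High : Element_Type) return Set is
+      Result   : Set;
+      C        : Cursor := Ceiling (S, Low);  -- first element >= Low
+      Position : Cursor;
+      Inserted : Boolean;
+   begin
+      while Has_Element (C) and then not (High < Element (C)) loop
+         Insert (Result, Element (C), Position, Inserted);
+         Next (C);
+      end loop;
+      return Result;   -- the closed range [Low, High], copied into a new set
+   end Copy_Range;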
+
+
+...
+> So my proposal is adding Ceiling, Floor, Slice, Open_Bound.
+>
+> And eventually Ground, Roof defined as "equivalent to..."
+
+I don't see the need to go beyond Floor and Ceiling.  They
+seem to provide all the primitives needed to enable the
+efficient construction of any operations you might want,
+and I believe their meaning is more intuitive than the others
+you have suggested.
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Wednesday, May 12, 2004  1:25 PM
+
+> For example, it seems using an Open_Bound as the high
+> bound of an iteration is equivalent to iterating up to
+> Previous(Ceiling()).
+
+This requires care, since Ceiling can return No_Element if the key is
+greater than every key in the set.  To make your algorithm fully general
+I think you'd have to say:
+
+declare
+    C : Cursor := Ceiling (S, K);
+begin
+    if Has_Element (C) then
+       Previous (C);
+    else
+       C := Last (S);
+    end if;
+    ...
+end;
+
+> I don't see the need to go beyond Floor and Ceiling.  They
+> seem to provide all the primitives needed to enable the
+> efficient construction of any operations you might want,
+> and I believe their meaning is more intuitive than the others
+> you have suggested.
+
+As above, the problem case is when Floor returns No_Element, because the
+key is less than every key in the set.  To implement an equivalent of
+Upper_Bound, it's not good enough to say Next (Floor (S, K)); you have
+to say instead:
+
+declare
+    C : Cursor := Floor (S, K);
+begin
+    if Has_Element (C) then
+       Next (C);
+    else
+       C := First (S);
+    end if;
+    ...
+end;
+
+I don't know whether this is really a problem, but I just wanted to
+bring it up.  Having to handle the endpoints as a special case is a
+consequence of the fact that we got rid of the internal sentinel node.
+
+Another possibility is to restore the sentinel, and then define rules
+for how it compares to the deferred constant No_Element.  Assuming type
+Cursor is defined as:
+
+   type Node_Type is record  -- red-black tree node
+      Color : Color_Type;
+      ...
+   end record;
+
+   type Cursor is record
+      Node : Node_Access;
+   end record;
+
+   No_Element : constant Cursor := (Node => null);
+
+function Has_Element (C : Cursor) return Boolean is
+begin
+    if C.Node = null then
+      return False;
+    end if;
+
+    if C.Node.Color = White then -- sentinel has special color
+       return False;
+    end if;
+
+    return True;
+end;
+
+function "=" (L, R : Cursor) return Boolean is
+begin
+    if L.Node = null
+      or else L.Node.Color = White
+    then
+       return R.Node = null or else R.Node.Color = White;
+    end if;
+
+    if R.Node = null
+      or else R.Node.Color = White
+    then
+       return False;
+    end if;
+
+    return L.Node = R.Node;
+end;
+
+The problem of course is that "=" for type Cursor overrides predefined
+"=", which means predefined "=" re-emerges when type Cursor is a record
+or array component, or when type Cursor is a generic actual type.
+
+I suppose we could privately tag type Cursor, to guarantee that
+predefined "=" never re-emerges.  I was trying to avoid that, however.
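+
+A small illustration of the re-emergence problem (this is just the standard
+Ada rule for untagged types, nothing specific to the draft):
+
+   type Cursor_Pair is record
+      First, Second : Cursor;
+   end record;
+
+   --  Predefined "=" on Cursor_Pair composes the *predefined* "=" of
+   --  Cursor (a component-wise pointer comparison), not the user-defined
+   --  "=" above, so No_Element and a sentinel cursor, which the
+   --  user-defined "=" treats as equal, make unequal Cursor_Pair values.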
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Wednesday, May 12, 2004  1:47 PM
+
+"Lower_Limit(S,K) = Previous(Ceiling(S,K))"
+
+You mean
+
+   Lower_Limit (S, K) = Previous (Floor (S, K)).
+
+But this fails when Floor (S, K) < K.
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Wednesday, May 12, 2004  1:56 PM
+
+No.  Tucker was correct.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, May 12, 2004  2:21 PM
+
+No, I meant what I wrote, based on Matt's specification
+that Key(Lower_Limit(S,K)) < K.  I'm not sure you
+and Matt have the same definition in mind for all
+these functions.  In particular I get the sense
+that your definition of Lower_Bound is the opposite
+of his.  I understand the notion of Greatest_Lower_Bound
+on a lattice, but I have never quite understood how
+that relates to Lower_Bound.
+
+In any case, I was focusing on the specifications
+that Matt gave for Lower_Limit and Upper_Limit,
+and based my equivalence on those.
+
+And I realize my equivalence fails at the end points,
+but I suspect that some special handling may be required
+for those in any case, and it is easy enough for the
+user to define a function that does what is desired
+(e.g. Previous_Or_Last() which returns Last when given
+No_Element).
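+
+A sketch of that helper (the name Previous_Or_Last comes from the sentence
+above; Previous, Last, and No_Element as in the draft):
+
+   function Previous_Or_Last (S : Set; C : Cursor) return Cursor is
+   begin
+      if C = No_Element then
+         return Last (S);
+      else
+         return Previous (C);
+      end if;
+   end Previous_Or_Last;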
+
+> But this fails when Floor (S, K) < K.
+
+That's why I wrote Previous(Ceiling(S,K)).
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Thursday, May 13, 2004  4:30 PM
+
+Oops, sorry. *I* was confused.
+
+By the way, I checked the names so far, and they are (aligned, but in no
+specific order):
+
+Version 1.1          Mathematics           My names  Matthew's    AKAs, other
+------------------------------------------------------------------------------
+Lower_Bound          least upper bound     Ceiling   Ceiling
+Upper_Bound                                Roof      Upper_Limit
+Reverse_Lower_Bound  greatest lower bound  Floor     Floor
+Reverse_Upper_Bound                        Ground    Lower_Limit
+                     lower limit                                  First
+                     upper limit                                  Last
+------------------------------------------------------------------------------
+****************************************************************
+
+From: Matthew Heaney
+Sent: Wednesday, May 12, 2004  2:18 PM
+
+> I don't see why you need them.  It seems
+> Lower_Limit(S,K) = Previous(Ceiling(S,K)) and
+> Upper_Limit(S,K) = Next(Floor(S,K))
+
+Thinking about this issue some more, there might be a way to create
+these semantics without a sentinel.  If a cursor is implemented this way:
+
+   type Cursor is record
+      Container : Set_Access;
+      Node      : Node_Access;
+   end record;
+
+In which case you could implement Previous as:
+
+function Previous (C : Cursor) return Cursor is
+begin
+    if C.Container = null then  --No_Element
+       return C;
+    end if;
+
+    if C.Node = null then  --pseudo-sentinel
+      return C;  --or: Last (C.Container)
+    end if;
+
+    if C = First (C.Container) then
+       return (C.Container, null);  --pseudo-sentinel
+    end if;
+
+    return Previous (C.Container.Tree, C.Node);
+end;
+
+Next would be implemented similarly.
+
+The only issue here is that Previous (First (S)) /= No_Element (the LHS
+has a non-null set pointer, the RHS has a null set pointer).  I don't
+know if this is an issue.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, May 12, 2004  2:30 PM
+
+I don't think we need to change
+"Previous" to make these equivalences work for
+endpoints.   Just let the user write a
+"Previous_Or_Last" if they really want to,
+which would need to take both a cursor and a set.
+Or more directly, write Lower_Limit or Upper_Limit
+if you want them, since these already have enough
+information with the set and the key.
+
+Providing Ceiling and Floor still seems adequate to me,
+as they provide the needed primitives for all other
+operations mentioned thus far.
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Wednesday, May 12, 2004  2:36 PM
+
+OK.  That seems reasonable.  I just wanted to make sure we were on the
+same page w.r.t the behavior at the endpoints.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Wednesday, May 12, 2004  1:28 PM
+
+<<I don't see the need for "Slice" or "Open_Bound".
+These seem to be introducing a layer of "virtual"
+set on top, which you could do with a new abstraction.
+Is there a real efficiency need here, or just a desire
+for the additional abstraction level?>>
+
+Efficiency. Slice is a simple way of passing known bounds to *any* operation.
+
+As an example consider the usual scenario from accounting where you have
+invoices, and each invoice has a variable number of items. The relational
+representation of this database includes a set Items of (Invoice_Id, Item_Id)
+pairs, ordered by (Invoice_Id, Item_Id). You want to insert a new invoice X
+with items A and B. Without Slice you do:
+
+   Insert (Items, (X, A), Point_XA, Ok);
+   Insert (Items, (X, B), Point_XB, Ok);
+
+Each time Insert will have to search for the insertion point from the start
+(e.g. from the root of a binary tree). But clearly Point_XA is close to
+Point_XB, so if there was a way of telling Insert that we are inserting (X, B)
+next to Point_XA, Insert could start looking from there to great advantage.
+Slice provides that way.
+
+   Insert
+     (Slice (Items, Point_XA, Last (Items)),
+      (X, B), Ok);
+
+You could even save some extra micro-seconds writing:
+
+   Insert
+     (Slice (Items, Open_Bound (Point_XA), Last (Items)),
+      (X, B), Ok);
+
+[Of course there are other ways, not relational, of representing the data. For
+example, Items could be a set of pairs (Invoice_Id, Item_Set), where Item_Set
+is a set of items. But there are a number of reasons why you might want the
+relational scheme. One is that with this scheme you can search Items by
+properties of Item_Id. For example you might want to know which invoices sold
+part number 12345. One more subtle reason--not applicable to this example, but
+occurring in other common situations--has to do with the unfortunate fact that
+it is not possible to have recursive containers without resorting to a pointer
+idiom. There are other reasons.]
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Wednesday, May 12, 2004  3:07 AM
+
+> Efficiency. Slice is a simple way of passing known
+> bounds to *any* operation....
+
+If I understand you, Slice is not a copy, but a by-reference
+subset of a set, created for the purpose of improving performance.
+I don't find this example sufficiently compelling to include it
+in a basic capability like Ordered_Sets.  It requires significant
+set up by the user, and it seems possible that in some implementations,
+it would be a waste of energy.
+
+I like "Ceiling" and "Floor" because they address the common
+notion of "nearest" element or approximate match, something
+which makes sense to ask in a set.  Slice and Open_Bound
+seem to only serve some more obscure performance concern,
+which I don't see as being of wide or general usefulness.
+
+All these things involve subtle tradeoffs, and I accept you
+might make different choices, but we are looking to provide
+the 20% of all possible set operations that together meet
+the needs of 80% of the typical users of sets.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Thursday, May 13, 2004  6:34 AM
+
+> > Efficiency. Slice is a simple way of passing known
+> > bounds to *any* operation....
+>
+> If I understand you, Slice is not a copy, but a by-reference
+> subset of a set, created for the purpose of improving performance.
+
+Exactly. It *must not* be a copy.
+
+> I don't find this example sufficiently compelling to include it
+> in a basic capability like Ordered_Sets.  It requires significant
+> set up by the user, and it seems possible that in some implementations,
+> it would be a waste of energy.
+
+The setup is not significant because the user can always ignore the slice
+idiom, and/or only use it when the known bounds have been acquired naturally
+from previous operations required by the application logic, as was the case
+in the invoices example.
+
+The implementation is easy, especially if null optimization is allowed, as I
+proposed (a slice obviously knows about its base container, so an
+unoptimized operation can just call itself with the base). But in most
+implementations, namely using trees or skip lists, the implementation of
+non-null optimization is also easy, because usually the internal search
+primitives are recursive operations accepting bounds expressed as a node or
+nodes, and the Cursor type is likely to have node information, as in Matt's
+earlier sketch. So not a waste of energy. The implementation is
+already there.
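+
+As a sketch of that internal shape (Node_Access and this particular tree
+search are implementation details, assumed here only for illustration):
+
+   function Search
+     (Root : Node_Access;   -- subtree to search; a hint can narrow this
+      Item : Element_Type) return Node_Access
+   is
+   begin
+      if Root = null then
+         return null;
+      elsif Item < Root.Element then
+         return Search (Root.Left, Item);
+      elsif Root.Element < Item then
+         return Search (Root.Right, Item);
+      else
+         return Root;        -- exact match
+      end if;
+   end Search;
+
+A slice or hint would let the container start such a search from a node other
+than the tree root, when the bounds justify it.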
+
+And here's one more real life example: website access analysis. You want to
+identify sessions from an HTTP request log file. A session is a sequence of
+requests from the same IP such that the time between consecutive requests
+does not exceed 30 minutes (a common criterion). You want to update each
+request with the corresponding (computed) session id. You key the access log
+by (IP, Time), and traverse the entire file to effect this logic. You will
+have naturally collected bounds for the fine search and update operations.
+Rather strict bounds, giving you (in a non-null optimization implementation)
+orders of magnitude gains in time. The usual application of website analysis
+is to huge log files, of tens of millions of accesses. The gains could mean
+the difference between hours and minutes or seconds. I've done this stuff
+using database systems (Postgres, MySQL), and the scripts ran four hours. I
+wasn't able to optimize further for the same reasons we're discussing here:
+lack of ways to pass known bounds to the core data engine. I've done this
+kind of work in several real life applications, including
+http://soleunet.ijs.si/website/other/final_report/html/WP5-s9.html
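+
+A compact sketch of the data shape being described (the types and names below
+are illustrative only, not from any spec):
+
+   type Access_Key is record
+      IP   : IP_Address;            -- hypothetical IP type with "<" and "="
+      Time : Ada.Calendar.Time;
+   end record;
+
+   function "<" (L, R : Access_Key) return Boolean is
+   begin
+      if L.IP /= R.IP then
+         return L.IP < R.IP;        -- group all requests from one IP together
+      else
+         return L.Time < R.Time;    -- then order them by time
+      end if;
+   end "<";
+
+While traversing such an ordered set, the cursor of the previous request from
+the same IP is exactly the kind of known bound that a hint or Slice parameter
+could pass to the next search or insertion.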
+
+Note that with the increasing availability of large RAM, the tendency is
+towards *prevalent* systems, where all data required for search and
+retrieval is held in RAM during work. An optimized Ada.Containers library
+could mean a great plus for Ada in this area. Databases have been identified
+as a promising area for Ada. Perhaps the DADAISM project would not have
+stalled if there had been an optimized Ada.Containers around then.
+
+Open_Bound is not strictly required for optimization, but together with
+Slice it provides a means to express any kind of interval.
+
+I understand the 20/80 rule. It's just that in my perception the addition of
+Slice makes it 20.5/90 or so. Say 21/95 if you further add Open_Bound.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Thursday, May 13, 2004  7:36 AM
+
+Note that Slice is also useful for non-optimization purposes. For example,
+currently to process a "range" you must use the "active" iterator idiom:
+
+      I : Cursor := From;
+   begin
+      while I <= To loop
+         Process (I);
+         Next (I);
+      end loop;
+
+With Slice you have access to the "passive" idiom right out of the box:
+
+      procedure Iterate is new Generic_Iteration;
+   begin
+      Iterate (Slice (S, From, To));
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Thursday, May 13, 2004  8:08 AM
+
+> ...you could use the new Generic_Update procedure:
+>
+>    procedure For_Each_Link_In_Range
+>      (Set : Link_Sets.Set_Type; From, To : Link_Type)
+>    is
+>       pragma Assert (From <= To);
+>
+>       use Link_Sets;
+>
+>       procedure Process (E : in out Link_Type) is
+>       begin
+>          ...; --whatever
+>       end;
+>
+>       procedure Update is new Generic_Update;
+>
+>       I : Cursor_Type := Lower_Bound (Set, From);
+>       J : constant Cursor_Type := Upper_Bound (Set, To);
+>    begin
+>       while I /= J loop
+>          Update (I);
+>          Increment (I);
+>       end loop;
+>    end;
+
+Generic_Update is excellent stuff. It does not apply in this particular case
+though, because Mneson links are immutable by design (can only be created or
+deleted, never changed). But there are a lot of element update situations in
+other applications, and so having Generic_Update is a great improvement from
+version 1.1 of the spec and corresponding reference implementation (that is
+the one currently used by Mneson, and that does not have
+Generic_Update--good thing that Mneson does not need it :-).
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Thursday, May 13, 2004  10:24 AM
+
+This will have to be written as
+
+    I : Cursor := Ceiling (Set, From);
+    J : Cursor := Floor (Set, To);
+begin
+    if J = No_Element then  --To is small key
+       pragma Assert (I = First (Set));
+       return;
+    end if;
+
+    Next (J);  --now has value of Upper_Bound
+
+    while I /= J loop
+       Update (I);
+       Next (I);
+    end loop;
+end;
+
+> Generic_Update is excellent stuff. It does not apply in this particular case
+> though, because Mneson links are immutable by design (can only be created or
+> deleted, never changed). But there are a lot of element update situations in
+> other applications, and so having Generic_Update is a great improvement from
+> version 1.1 of the spec and corresponding reference implementation (that is
+> the one currently used by Mneson, and that does not have
+> Generic_Update--good thing that Mneson does not need them :-).
+
+Generic_Update is equivalent to Generic_Element.  The only difference is
+that Generic_Update doesn't require the element to be aliased.  It
+provides no new functionality relative to the 1.1 spec.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Thursday, May 13, 2004  12:13 PM
+
+I see. I didn't need it for Mneson so it was there in the book but not in my
+mind.
+
+Anyway Generic_Update is better because it's pointerless :-)
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Thursday, May 13, 2004  10:33 AM
+
+I guess I am still not convinced.  If you use a binary
+tree, having a cursor pointing into the tree is not
+always terribly useful when you are trying to search
+for some subsequent element with a given key.  You will
+often have to go back "up" several levels before being
+able to go back down.  With "Slice" you are forcing
+every operation to support a "virtual" subset as well
+as a real set.  This is going to inevitably introduce
+some distributed overhead. I would be surprised if on
+balance, this is a net savings.  I'm sure you could
+construct a case where it would be a savings, but overall,
+I would expect the mix of uses would favor keeping the
+abstraction simpler.
+
+An alternative is to have additional versions of operations
+like "Find" and "Delete" which take a "Starting_With" cursor
+parameter.  (This may be something that was there to begin
+with, I have forgotten.)  Those might be useful, but still
+they seem like operations that might sometimes be slower
+than starting at the "top" of the binary tree, depending
+on exactly where in the tree the Starting_With cursor points.
+
+The added complexity to the interface just doesn't seem worth it.
+
+There is certainly nothing preventing someone defining a
+"Very_Ordered_Set" or whatever that has more of these operations,
+making it closer to a "Vector" in interface.  Or it could be
+a generic child of Ordered_Set.  I just don't
+think the justification is there for our initial attempt at
+a standard container library to include these additional
+capabilities.
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Thursday, May 13, 2004  10:59 AM
+
+> An alternative is to have additional versions of operations
+> like "Find" and "Delete" which take a "Starting_With" cursor
+> parameter.  (This may be something that was there to begin
+> with, I have forgotten.)  Those might be useful, but still
+> they seem like operations that might sometimes be slower
+> than starting at the "top" of the binary tree, depending
+> on exactly where in the tree the Starting_With cursor points.
+
+That's pretty much my feeling.  It's hard to know a priori whether it's
+faster to find an item by starting at the top and then searching the
+tree, or starting from some point in the tree and then searching linearly.
+
+Starting from the top does have the benefit that we can definitely say
+that the time complexity is O(log n) even in the worst case, which is
+why I re-wrote Mario's example to use a top-down search.
+
+I agree that Slice, Open_Bound, etc, aren't necessary.  However, you
+could make an argument for including an Upper_Bound style function,
+since it's more efficient than the expression Next(Floor(S,K)), and
+because it handles the endpoint issue automatically.
+
+In fact I think the issue of endpoint is the more compelling argument
+for including Upper_Bound (with some other name, of course), since even
+trying to write Mario's example sans Upper_Bound required a bit of
+mental effort.  Maybe call it Next_Ceiling or Upper_Ceiling or whatever.
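+
+To make the endpoint issue concrete, here is a sketch (illustrative only,
+with Upper_Bound as a placeholder name for the operation in question):
+
+   --  With Upper_Bound: one call, and the "no element <= K" case
+   --  needs no special handling.
+   C := Upper_Bound (S, K);   -- first element with key > K, or No_Element
+
+   --  Without it, the endpoint needs an explicit check:
+   C := Floor (S, K);         -- last element with key <= K, or No_Element
+   if C = No_Element then
+      C := First (S);         -- every element has key > K (or S is empty)
+   else
+      C := Next (C);          -- No_Element if every element has key <= K
+   end if;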
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Thursday, May 13, 2004  12:48 PM
+
+Slice does not complicate the abstraction; on the contrary, cf. my example
+about iterating a range.
+
+I agree tree implementations might have trouble optimizing certain cases.
+But in those cases they can just start at the root, as for a no-slice.
+But, yes, there is a slight overhead even then, namely for detecting the
+kind of case.
+
+Skiplist and hashtable implementations might do better though.
+
+But remember it's not just about optimization, it's also about expressing
+ranges declaratively.
+
+Just some final thoughts. By now I think I've made the case for Slice.
+Personally as a user I'd like it there. But I might be too biased a user
+(towards databases). I'm fully confident you'll make the right choice. And as
+you point out there is always space for an Ada.Containers.Optimized_Sets
+package (*), which can mean business for independent Ada tool developers :-)
+
+(*) Is there? I was under the impression that the RM ruled out extensions to
+package Ada. But the Ada.Containers spec talks about them as if they were
+legal. Sorry for the newbie question.
+
+****************************************************************
+
+From: Martin Dowie
+Sent: Thursday, May 13, 2004  2:08 PM
+
+> (*) Is there? I was under the impression that the RM ruled out extensions to
+> package Ada. But the Ada.Containers spec talks about them as if they were
+> legal. Sorry for the newbie question.
+
+I believe the rule is that you can't add child packages to package "Ada", but
+you can add grandchild packages and so extend existing child packages.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Thursday, May 13, 2004  1:25 PM
+
+(Damn I said I was done but you keep asking for it :-)
+
+> > An alternative is to have additional versions of operations
+> > like "Find" and "Delete" which take a "Starting_With" cursor
+> > parameter.
+
+I fail to see how duplicating Insert, Delete, Is_In, and Find complicates the
+interface less than simply adding Slice.
+
+> > (This may be something that was there to begin
+> > with, I have forgotten.)
+
+There was, but only for Insert, and the known position had to be adjacent to
+the new one.
+
+> > Those might be useful, but still
+> > they seem like operations that might sometimes be slower
+> > than starting at the "top" of the binary tree, depending
+> > on exactly where in the tree the Starting_With cursor points.
+>
+> That's pretty much my feeling.  It's hard to know a priori whether it's
+> faster to find an item by starting at the top and then searching the
+> tree, or starting from some point in the tree and then searching linearly.
+
+This happens for either Slice or Starting_With. Actually Slice has more
+information (the upper bound), which can help make a better decision.
+
+> Starting from the top does have the benefit that we can definitely say
+> that the time complexity is O(log n) even in the worst case, which is
+> why I re-wrote Mario's example to use a top-down search.
+
+Can you pinpoint please? (Not pressing.)
+
+> I agree that Slice, Open_Range, etc, aren't necessary.  However, you
+> could make an argument for including an Upper_Bound style function,
+> since it's more efficient than the expression Next(Floor(S,K)), and
+> because it handles the endpoint issue automatically.
+>
+> In fact I think the issue of endpoint is the more compelling argument
+> for including Upper_Bound (with some other name, of course), since even
+> trying to write Mario's example sans Upper_Bound required a bit of
+> mental effort.
+
+Again, can you pinpoint please? (Not pressing.)
+
+>  Maybe call it Next_Ceiling or Upper_Ceiling or whatever.
+
+I take it you don't like Roof :-(
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Thursday, May 13, 2004  11:45 AM
+
+> I guess I am still not convinced.  If you use a binary
+> tree, having a cursor pointing into the tree is not
+> always terribly useful when you are trying to search
+> for some subsequent element with a given key.  You will
+> often have to go back "up" several levels before being
+> able to go back down.  With "Slice" you are forcing
+> every operation to support a "virtual" subset as well
+> as a real set.  This is going to inevitably introduce
+> some distributed overhead. I would be surprised if on
+> balance, this is a net savings.  I'm sure you could
+> construct a case where it would be a savings, but overall,
+> I would expect the mix of uses would favor keeping the
+> abstraction simpler.
+
+I totally agree. Moreover, there is overhead from requiring every
+implementation of Sets to support by-reference (not copied)
+set objects (that is, the result of Slice). Furthermore, you're introducing
+even more erroneous cases into the library.
+
+Matt will be happy to tell you how hard I tried to eliminate *all*
+erroneousness from the containers library. He eventually convinced me that
+some cases of dangling cursors cannot be detected (that is, those that point
+into container objects that no longer exist). So some erroneousness is
+inevitable; but I'm very opposed to having it where it is not required.
+
+(Note that the erroneous cases come from the non-OOP design of the library.
+If the container object was a parameter to all operations [as it ought to
+be, IMHO], then there would be no need for erroneous cases. But that's water
+under the dam. :-)
+
+****************************************************************
+
+From: Nick Roberts
+Sent: Friday, May 14, 2004  2:43 PM
+
+I am generally delighted by this amendment, and I hope it goes in. I think
+it shows how the knocking together of many wise heads generally produces a
+good result (even if it is only after an awful lot of argument :-)
+
+It does seem clear to me that a comprehensive set of packages could easily
+have numbered in the hundreds, when one considers the combinations of
+different structures and the selection between bounded and unbounded,
+definite and indefinite, and so on. I haven't counted, but Booch is over a
+hundred, isn't it?
+
+I have a few queries. My profuse apologies if any of these have already been
+addressed (and I've missed them).
+
+[1] The vectors and maps are intended to automatically expand when required.
+This is fine, but the interface seems to provide no control over this
+expansion at all. Would it perhaps be a good idea to add a generic parameter
+such as below?
+
+   Expansion_Size: Size_Type := [implementation defined];
+
+The idea is that automatic expansion is done in multiples of Expansion_Size.
+It has a default value, so that it can be conveniently ignored by the user.
+A possible alternative is:
+
+   Expansion_Factor: Float := [implementation defined];
+
+The idea here is that automatic expansion of a map or vector X is by
+Size_Type(Expansion_Factor*Float(Size(X))). Again there is a convenient
+default.
+
+Alternatively, Expansion_Size/Factor could be made a visible discriminant of
+the container types, or an invisible attribute (with appropriate get and set
+operations).
+
+[2] What was the reason for not permitting Resize to make a container
+smaller, please?
+
+[3] I'd quite like the amendment to add a paragraph near the top clarifying
+the idea that every container has a set of 'slots', and that each slot can
+be either empty or contain the (valid?) value of one element. The following
+descriptions could, I think, be made slightly clearer and more succinct by
+referring to these slots. (Would you like specific wording?)
+
+[4] Regarding the optimisation of operations, I suggest it may be possible
+for an implementation  to keep enough extra internal information (in a Set
+object) to enable it to detect and optimise various scenarios (judged to be
+typical).
+
+For example, assuming a tree structure, a pointer to the node above the
+(terminal) node most recently inserted could be retained; the implementation
+could test each insertion to see if it falls under this node; if a sequence
+of insertions of (as it turns out) adjacent values occurs, this trick could
+yield a very good speed improvement.
+
+[5] Probably already mentioned, but in line 3364 'Assert (Target => V1,
+Source => V2);' should be 'Assign (Target => V1, Source => V2);'.
+
+Finally, is there a sample implementation of any of these packages yet?
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Thursday, May 13, 2004  3:05 PM
+
+Nick Roberts wrote:
+> I am generally delighted by this amendment, and I hope it goes in. I think
+> it shows how the knocking together of many wise heads generally produces a
+> good result (even if it is only after an awful lot of argument :-)
+
+Most of the argument you didn't even see...
+
+> It does seem clear to me that a comprehensive set of packages could easily
+> have numbered in the hundreds, when one considers the combinations of
+> different structures and the selection between bounded and unbounded,
+> definite and indefinite, and so on. I haven't counted, but Booch is over a
+> hundred isn't it?
+
+Booch is large.  But my original AI-302 proposal was large too: I think there
+were something like 25 containers (some of them had bounded and
+unbounded forms, etc.), and the proposal itself was about 150 pages.
+
+> I have a few queries. My profuse apologies if any of these have already been
+> addressed (and I've missed them).
+>
+> [1] The vectors and maps are intended to automatically expand when required.
+
+Yes.
+
+> This is fine, but the interface seems to provide no control over this
+> expansion at all.
+
+No.  That's what Resize is for.
+
+> Would it perhaps be a good idea to add a generic parameter
+> such as below?
+>
+>    Expansion_Size: Size_Type := [implementation defined];
+
+Use Resize.
+
+> The idea is that automatic expansion is done in multiples of Expansion_Size.
+> It has a default value, so that it can be conveniently ignored by the user.
+> A possible alternative is:
+>
+>    Expansion_Factor: Float := [implementation defined];
+
+Use Resize to supply a hint about intended maximum length.  The
+implementation then resizes the container according to the algorithm the
+vendor has chosen.
+
+> The idea here is that automatic expansion of a map or vector X is by
+> Size_Type(Expansion_Factor*Float(Size(X))). Again there is a convenient
+> default.
+
+In the AI-302 reference implementation, the array is automatically
+expanded to twice its current size.
+
+> Alternatively, ExpansionSize/Factor could be made a visible discriminant of
+> the container types, or an invisible attribute (with appropriate get and set
+> operations).
+
+The container types do not have discriminants.
+
+> [2] What was the reason for not permitting Resize to make a container
+> smaller, please?
+
+Make a copy of the container, Clear the original, and then Move the copy
+to the original.   (Wasn't this in the examples section?)
+
+> [4] Regarding the optimisation of operations, I suggest it may be possible
+> for an implementation  to keep enough extra internal information (in a Set
+> object) to enable it to detect and optimise various scenarios (judged to be
+> typical).
+>
+> For example, assuming a tree structure, a pointer to the node above the
+> (terminal) node most recently inserted could be retained; the implementation
+> could test each insertion to see if it falls under this node; if a sequence
+> of insertions of (as it turns out) adjacent values occurs, this trick could
+> yield a very good speed improvement.
+
+Earlier releases of the AI-302 draft had overloadings of Insert that took a
+hint parameter; if the hint was successfully used to perform the insertion,
+the time complexity was O(1) instead of O(log n).
+
+However, the insert-with-hint operations were removed from the API at
+the ARG meeting in Phoenix.
+
+> Finally, is there a sample implementation of any these packages yet?
+
+<http://charles.tigris.org/>
+
+See the ai302 subdirectory.
+
+The vector containers in the ai302 subdirectory conform to the most
+recent AI-302 draft (dated 2004/04/29).  Look for updates to the
+remaining containers this weekend.  (I recommend simply joining the
+charles project mailing lists, so you get notified automatically.)
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Friday, May 14, 2004  10:09 PM
+
+> [1] The vectors and maps are intended to automatically expand when required.
+> This is fine, but the interface seems to provide no control over this
+> expansion at all.
+
+That's intentional. The implementation is allowed to choose the expansion
+algorithm that makes the most sense for its architecture. Resize can be
+used to tell the implementation the ultimate size; there is an AARM note
+telling implementors that it is intended that Resize do the needed
+allocations. Matt claims that Resize can often be used in practice (I'm
+skeptical), but when it can't be used, you really don't have enough
+information to choose at all.
+
+> [2] What was the reason for not permitting Resize to make a container
+> smaller, please?
+
+The same reason that deleting an element doesn't necessarily destroy the
+element. We wanted to give the implementation flexibility in using blocking,
+caching, etc. The only operation that is guaranteed to recover space is the
+destruction of the container.
+
+Matt shows that it can be done by jumping through hoops, so there is a way
+to do it in the rare case that it is needed.
+
+> [3] I'd quite like the amendment to add a paragraph near the top clarifying
+> the idea that every container has a set of 'slots', and that each slot can
+> be either empty or contain the (valid?) value of one element. The following
+> descriptions could, I think, be made slightly clearer and more succinct by
+> referring to these slots. (Would you like specific wording?)
+
+It's not necessary, and makes things read more like a description of a
+specific implementation. We want as abstract a description as possible. We
+spent quite a bit of effort getting rid of such wording from the vector and
+maps containers (there should be no further reference to "nodes" in those
+containers). I would have done the same to the other containers if I had had
+more time and energy.
+
+> [5] Probably already mentioned, but in line 3364 'Assert (Target => V1,
+> Source => V2);' should be 'Assign (Target => V1, Source => V2);'.
+
+Yes, and I've fixed all of the typos noted by Dan and Christoph in the
+working version -- so the ARG won't need to consider them in Palma.
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Friday, May 14, 2004  11:25 PM
+
+> Matt shows that it can be done by jumping through hoops, so
+> there is a way to do it in the rare case that it is needed.
+
+Just to add to what Randy said: the point of Resize is to prevent
+automatic expansion that would otherwise occur as items are inserted
+into the container.  It's not influencing the size that's important per
+se; rather, it's disabling expansion.
+
+If you ever need to shrink a vector (say), then just do this:
+
+Shrink:
+declare
+   Temp : Vector := V;
+begin
+   Clear (V);
+   Move (Target => V, Source => Temp);
+end Shrink;
+
+Note that I've been an STL user for 4 years now, and I've never actually
+had a need to shrink a vector.  Most of the time I use a vector to store
+a large index or whatever, and usually I can determine prior to
+insertion how many items I'm going to insert, so I call Resize first.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Friday, May 14, 2004  11:24 PM
+
+I think you meant "Assign" rather than "Move", as Move just copies the existing
+internal contents (thus preserving the size). "Assign" would make the target
+only as large as necessary.
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Saturday, May 15, 2004  1:54 AM
+
+No, you've got it backwards.  Move does indeed preserve the size -- of
+the source.  Here, Temp has the minimum size necessary to store the
+Length (V) elements of V (although the API doesn't actually specify
+this).
+
+Note that Move doesn't copy any elements.  The copying happened during
+assignment of V to Temp.
+
+Assign copies the active elements of Source onto the existing internal
+array of Target, so it doesn't modify the size unless Length (Source) >
+Size (Target).
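+
+To sketch the difference (the sizes noted are what the reference
+implementation gives you, not something the API promises; W below is just
+another vector, for illustration):
+
+   declare
+      Temp : Vector := V;   -- copy: Temp gets a minimal-size internal array
+   begin
+      Clear (V);                           -- V's array keeps its current size
+      Move (Target => V, Source => Temp);  -- V takes over Temp's minimal
+                                           -- array; no elements are copied
+   end;
+
+   Assign (Target => V, Source => W);      -- copies W's elements onto V's
+                                           -- existing array; V's size changes
+                                           -- only if Length (W) > Size (V)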
+
+****************************************************************
+
+From: Nick Roberts
+Sent: Saturday, May 15, 2004  8:39 AM
+
+> > [2] What was the reason for not permitting Resize to make a container
+> > smaller, please?
+>
+> The same reason that deleting an element doesn't necessarily destroy the
+> element. We wanted to give the implementation flexibility in using blocking,
+> caching, etc. The only operation that is guaranteed to recover space is the
+> destruction of the container.
+
+Well, it may seem like nitpicking, but that seems to be a reason to /allow/
+the implementation /not/ to (actually) shrink a container. It doesn't seem
+like a reason to /disallow/ the implementation from shrinking it. Surely
+allowing an implementation to shrink if it wishes would be provide the
+greatest flexibility?
+
+I suspect, with respect, that you are being a bit hopeful if you expect
+implementations to use blocking, caching, or other optimisations. I doubt
+that many will, in practice. And with an implementation close to the model,
+there would be no difficulty in shrinking (by reallocation and copying, as
+for enlargement). Actually, I think shrinking would probably be feasible for
+most implementations, maybe all.
+
+Again, I guess that's arguing the case as strongly as it can be.
+
+> > [3] I'd quite like the amendment to add a paragraph near the top
+> > clarifying the idea that every container has a set of 'slots', and that
+> > each slot can be either empty or contain the (valid?) value of one
+> > element. The following descriptions could, I think, be made slightly
+> > clearer and more succinct by referring to these slots. (Would you
+> > like specific wording?)
+>
+> It's not necessary, and makes things read more like a description of a
+> specific implementation. We want as abstract a description as possible. We
+> spent quite a bit of effort getting rid of such wording from the vector and
+> maps containers (there should be no further reference to "nodes" in those
+> containers). I would have done the same to the other containers if I would
+> have had more time and energy.
+
+Hmm. Well, I intended the 'slot' to be an abstract (model) concept, and you
+could even say that in the description. I do really think it could
+significantly clarify the descriptions. I could do some actual wording, if
+you wish.
+
+****************************************************************
+
+From: Nick Roberts
+Sent: Saturday, May 15, 2004  8:39 AM
+
+> > [1] The vectors and maps are intended to automatically expand when required.
+> > This is fine, but the interface seems to provide no control over this
+> > expansion at all.
+> > Would it perhaps be a good idea to add a generic parameter
+> > such as below?
+> >
+> >    Expansion_Size: Size_Type := [implementation defined];
+> >    Expansion_Factor: Float := [implementation defined];
+>
+> Use Resize to supply a hint about intended maximum length.  The
+> implementation then resizes the container according to the algorithm the
+> vendor has chosen.
+> In the AI-302 reference implementation, the array is automatically
+> expanded to twice its current size.
+
+This seems to correspond with the idea of having something like:
+
+   Expansion_Factor: Float := 2.0;
+
+as a generic parameter.
+
+Such a parameter would not interfere with the use of Resize wherever the
+user could, or wanted to, use it (and Resize would certainly be superior
+where it could be used). However, it would provide a small extra measure of
+control for the user.
+
+An implementation could partially or entirely ignore the value of
+Expansion_Factor, if there were better criteria for it to base the decision
+on. Since it has a default value, it does not get in the way of the user who
+doesn't want to use it.
+
+I don't think its addition would add much complexity to the specifications,
+or much burden to implementations. It would actually simplify some
+implementations, wouldn't it?
+
+I seem to remember that, back in the days when computers (operating systems)
+had fixed-length files on their hard disks, you could usually specify an
+expansion size for a file. A file would be automatically reallocated,
+expanded by its expansion size, when necessary (just like a vector in the AI,
+curiously).
+
+Okay, I think I've argued the case for this feature as strongly as possible
+now :-)
+
+> > [2] What was the reason for not permitting Resize to make a container
+> > smaller, please?
+>
+> Make a copy of the container, Clear the original, and then Move the copy
+> to the original.   (Wasn't this in the examples section?)
+
+Yes, but that doesn't answer my question, Matt!
+
+> > Finally, is there a sample implementation of any these packages yet?
+>
+> <http://charles.tigris.org/>
+>
+> See the ai302 subdirectory.
+>
+> The vector containers in the ai302 subdirectory conform to the most
+> recent AI-302 draft (dated 2004/04/29).  Look for updates to the
+> remaining containers this weekend.  (I recommend simply joining the
+> charles project mailing lists, so you get notified automatically.)
+
+Great. Thanks.
+
+****************************************************************
+
+From: Ehud Lamm
+Sent: Sunday, May 16, 2004  4:59 AM
+
+> An implementation could partially or entirely ignore the value of
+> Expansion_Factor, if there were better criteria for it to base the decision
+> on. Since it has a default value, it does not get in the way of the user who
+> doesn't want to use it.
+
+This makes sense to me. That's the way I usually do it.
+
+****************************************************************
+
+From: Nick Roberts
+Sent: Saturday, May 15, 2004  8:48 AM
+
+> > Matt shows that it can be done by jumping through hoops, so
+> > there is a way to do it in the rare case that it is needed.
+>
+> Just to add to what Randy said: the point of Resize is to prevent
+> automatic expansion that would otherwise occur as items are inserted
+> into the container.  It's not influencing the size that's important per
+> se; rather, it's disabling expansion.
+>
+...
+
+Okay, but it would be way easier to be able to use one call to Resize
+instead!
+
+> Note that I've been an STL user for 4 years now, and I've never actually
+> had a need to shrink a vector.  Most of the time I use a vector to store
+> a large index or whatever, and usually I can determine prior to
+> insertion how many items I'm going to insert, so I call Resize first.
+
+Hmm. I think perhaps what you're missing is the case where: (a) you don't
+know in advance what size is going to be required; (b) you want to Resize
+the vector to something big, so as to minimise (eliminate) reallocations. I
+think this is a fairly common scenario. In this kind of case, the user knows
+the length of the vector after it has been populated, and would probably
+like to be able to issue a simple Resize afterwards to change the size of
+the vector to its length (eliminating wasted space). E.g.:
+
+   Open(File,...);
+   Resize(Vector,100_000);
+   while not End_of_File(File) loop
+      Read(File,X);
+      Append(Vector,X);
+   end loop;
+   Close(File);
+   Resize(Vector,Length(Vector));
+
+Does this not make sense?
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Saturday, May 15, 2004  11:40 AM
+
+> Okay, but it would be way easier to be able to use one call
+> to Resize instead!
+
+Right now Resize has the same semantics as reserve() does in the STL.
+You might want to post a note on comp.lang.c++ asking about reserve()
+(and its associated function capacity()).  You might want also want to
+send your question to Musser, Plauger, or Scott Meyers to get their
+opinion.
+
+> Hmm. I think perhaps what you're missing is the case where:
+> (a) you don't know in advance what size is going to be
+> required; (b) you want to Resize the vector to something big,
+> so as to minimise (eliminate) reallocations. I think this is
+> a fairly common scenario.
+
+In that case I would use a std::deque, not a std::vector, if the number
+of elements is large and I need population of the container to be as
+fast as possible.
+
+(I had included a deque container in my original proposal, but removed
+it after the ARG asked me to reduce its size.  We should revisit this if
+there's ever a secondary container library standard.)
+
+> In this kind of case, the user
+> knows the length of the vector after it has been populated,
+> and would probably like to be able to issue a simple Resize
+> afterwards to change the size of the vector to its length
+> (eliminating wasted space). E.g.:
+>
+>    Open(File,...);
+>    Resize(V,100_000);
+>    while not End_of_File(File) loop
+>       Read(File,X);
+>       Append(V,X);
+>    end loop;
+>    Close(File);
+>    Resize(V,Length(V));
+>
+> Does this not make sense?
+
+Read the file into a temporary vector (which has been resized as above),
+and then assign it to the real vector V.
+
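+A sketch of what I mean, reusing the names from your example (whether ":="
+leaves V with a minimal-size array is a property of the reference
+implementation, not something the API promises):
+
+   declare
+      Temp : Vector;
+   begin
+      Open (File, ...);
+      Resize (Temp, 100_000);        -- avoid reallocation while reading
+      while not End_Of_File (File) loop
+         Read (File, X);
+         Append (Temp, X);
+      end loop;
+      Close (File);
+      V := Temp;                     -- copies only the active elements, so V
+                                     -- ends up (close to) minimal; Temp's big
+                                     -- array goes away when Temp is finalized
+   end;
+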
+The moral of the story is that you can shrink a vector.  We're only
+disagreeing about the syntax.
+
+(Note that what I do mostly involves .avi files, which have a header
+describing how many frames are in the file.  So in my case I read in the
+avi stream header first, and then resize the vector based on the
+information in the header.)
+
+****************************************************************
+
+From: Nick Roberts
+Sent: Monday, May 17, 2004  2:47 PM
+
+"Matthew Heaney" <matthewjheaney@earthlink.net> wrote:
+
+> > Okay, but it would be way easier to be able to use one call
+> > to Resize instead!
+>
+> Right now Resize has the same semantics as reserve() does in the STL.
+> You might want to post a note on comp.lang.c++ asking about reserve()
+> (and its associated function capacity()).  You might want also want to
+> send your question to Musser, Plauger, or Scott Meyers to get their
+> opinion.
+
+I must say, that seems like a very evasive answer. Can you not give a direct
+answer to the question "Why not permit Resize to reduce the size of a
+vector?" Why does the Ada standard need to do what the STL does?
+
+> > Hmm. I think perhaps what you're missing is the case where:
+> > (a) you don't know in advance what size is going to be
+> > required; (b) you want to Resize the vector to something big,
+> > so as to minimise (eliminate) reallocations. I think this is
+> > a fairly common scenario.
+>
+> In that case I would use a std::deque, not a std::vector, if the number
+> of elements is large and I need population of the container to be as
+> fast as possible.
+>
+> (I had included a deque container in my original proposal, but removed
+> it after the ARG asked me to reduce its size.  We should revisit this if
+> there's ever a secondary container library standard.)
+
+In which case, I must ask what is the point of providing the vector
+abstraction at all? What does it provide that is not bettered, in practice,
+either by Ada's intrinsic arrays or by the list abstraction?
+
+> ...
+> The moral of the story is that you can shrink a vector.  We're only
+> disagreeing about the syntax.
+
+Yes, we are disagreeing about the syntax. I am suggesting that the syntax:
+
+   Resize(V,Length(V));
+
+is a big improvement upon:
+
+   declare
+      Temp : Vector := V;
+   begin
+      Clear (V);
+      Move (Target => V, Source => Temp);
+   end;
+
+and I do not see -- and I have not been given -- any reason why the former
+should not be permitted.
+
+> (Note that what I do mostly involves .avi files, which have a header
+> describing how many frames are in the file.  So in my case I read in the
+> avi stream header first, and then resize the vector based on the
+> information in the header.)
+
+In which case, why do you not simply use an array?
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Monday, May 17, 2004  5:07 PM
+
+> In which case, I must ask what is the point of providing the vector
+> abstraction at all? What does it provide that is not bettered, in practice,
+> either by Ada's intrinsic arrays or by the list abstraction?
+
+Because Matt is very tied (in his mind) to a particular implementation. The
+containers as described in AI-302-03 are much more abstract, and do not have
+a prescribed implementation. Janus/Ada will probably use a two-level
+implementation for vector (which is more like what Matt calls a "Deque"),
+because the extra cost of such an implementation is quite low in return for
+the benefits that will be available. (It also maps much better to the
+code-shared generics of Janus/Ada).
+
+...
+> > (Note that what I do mostly involves .avi files, which have a header
+> > describing how many frames are in the file.  So in my case I read in the
+> > avi stream header first, and then resize the vector based on the
+> > information in the header.)
+>
+> In which case, why do you not simply use an array?
+
+I've made this point many times, and you're never going to get a
+satisfactory answer. It's best to let it go. (Otherwise, we'll use up the
+entire budget for Ada 2005 discussing trivialities, and there will not be
+any money to build the RM...)
+
+Earlier, Nick wrote:
+
+> This seems to correspond with the idea of having something like:
+>   Expansion_Factor: Float := 2.0;
+> as a generic parameter.
+
+This is very specific to a particular implementation. We don't want that
+much specification of the implementation.
+
+...
+> An implementation could partially or entirely ignore the value of
+> Expansion_Factor, if there were better criteria for it to base the decision
+> on. Since it has a default value, it does not get in the way of the user who
+> doesn't want to use it.
+
+We don't want a parameter whose value can be ignored. Resize itself is bad
+enough.
+
+In any event, micro-managing memory use is not what containers are about.
+You use them when you want the system to manage memory for you. If you care
+deeply about memory use, you need to build your own abstractions. If you
+don't, a compiler update could completely destroy your system's performance.
+(You can't rely on predefined stuff for critical time/space performance.)
+
+...
+> I suspect, with respect, that you are being a bit hopeful if you expect
+> implementations to use blocking, caching, or other optimisations. I doubt
+> that many will, in practice. And with an implementation close to the model,
+> there would be no difficulty in shrinking (by reallocation and copying, as
+> for enlargement). Actually, I think shrinking would probably be feasible for
+> most implementations, maybe all.
+
+IBM Rational insisted on weakening some of the requirements so that they
+could use alternative implementations. Similarly, I've been very concerned
+about specifying an implementation, simply because Matt's implementations
+would be outrageously slow if compiled for Janus/Ada (due to generic code
+sharing). I fully intend to use a two-level scheme for vectors. All of the
+containers will use limited free lists to avoid excess allocation. I've
+considered allocation blocking for lists (but it wouldn't work for
+Janus/Ada, so we won't do that). Now, some vendors may simply use Matt's
+implementations, but it's pretty clear that at least some vendors are not
+planning to do so.
+
+> Hmm. Well, I intended the 'slot' to be an abstract (model) concept, and you
+> could even say that in the description. I do really think it could
+> significantly clarify the descriptions. I could do some actual wording, if
+> you wish.
+
+But we don't need it! Containers just hold a number of elements; all else is
+specific to particular implementations, and does not really belong in the
+standard. We made a number of specific exceptions to that to allow inserting
+of empty elements into vectors for performance reasons (similar to the
+reason that Resize exists). Those do not need any wrapper concept.
+
+As I said, Matt's original text had "nodes" in many places, and I took them
+out as much as possible. It generally shortened the wording; there were no
+cases where it helped anything. (It's more useful in the List container, but
+even there, it would be best to remove it. Just no more energy or budget.)
+
+And no thanks, I don't have any energy or budget to spend training someone
+how to write Standard wording. Especially when it isn't necessary. I don't
+doubt that there exist paragraphs that need wordsmithing, but I think the
+overall wording is about on the right level.
+
+****************************************************************
+
+From: Pascal Obry
+Sent: Tuesday, May 18, 2004  12:55 AM
+
+ > Because Matt is very tied (in his mind) to a particular implementation. The
+ > containers as described in AI-302-03 are much more abstract, and do not have
+ > a prescribed implementation.
+
+This is not true for the map. The name is Indefinite_Hashed_Maps. This states
+clearly that the implementation uses a hash table. I have found that while a
+hash table is very fast for small sets of data (< 10_000), it is quite a bit
+slower than an AVL tree for very large sets of data (> 100_000). Maybe this is
+specific to the current reference implementation, but that's what I have
+experienced. FYI, the AVL implementation I'm talking about is
+Table_Of_*_And_Dynamic_Data_G from the LGL.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Friday, May 19, 2004  7:14 PM
+
+The "Hashed_Maps" and "Ordered_Sets" cases are special. I think everyone would
+have preferred to avoid specifying an implementation there as well. But that's
+impossible, because of the vastly different generic parameters needed. That is,
+a "Hashed_Map" takes a hash function as a generic parameter, while an
+"Ordered_Set" (implemented as a tree) takes generic ordering operators as
+generic parameters. So, that exposes the basic implementation, as does any
+ordering requirements (hash tables aren't ordered by definition). Given these
+basic properties differ, a container where the hash vs. tree implementation
+isn't specified doesn't make sense.
+
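+To make that concrete, the two formal parts look roughly like this (a
+paraphrase only; the exact names and defaults in the draft may differ, and
+Hash_Type stands for whatever the hash result type is):
+
+   generic
+      type Key_Type is private;
+      type Element_Type is private;
+      with function Hash (Key : Key_Type) return Hash_Type;
+      with function "=" (Left, Right : Key_Type) return Boolean is <>;
+   package Ada.Containers.Hashed_Maps is ...   -- a hash table is implied
+
+   generic
+      type Element_Type is private;
+      with function "<" (Left, Right : Element_Type) return Boolean is <>;
+   package Ada.Containers.Ordered_Sets is ...  -- an ordering (tree) is implied
+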
+I do have to wonder about your results. Since an AVL tree is going to be log N
+access by key, it should be quite a bit slower in large collections. The only
+reason for a hash table to slow down is a bad hash function (which then could
+make long chains in a few buckets) - essentially turning lookups into brute
+force searches. Are you sure that your hash function is good enough for "large
+sets of data"? An ideal function would put one item into each bucket.
+
+****************************************************************
+
+From: Pascal Obry
+Sent: Saturday, May 22, 2004  2:04 AM
+
+The hash routine was not good at all. We have discussed this with Matthew;
+using a standard one (close to a hash routine used to implement associative
+arrays in Tcl or Gawk), the hash table is now 2 times faster.
+
+****************************************************************
+
+From: Michael F. Yoder
+Sent: Saturday, May 22, 2004  11:40 AM
+
+I've seen bad behavior with hashing many times, both in personal and
+professional contexts.  The basic reason is: if you use a fixed table
+size and linear chaining within a bucket, hashing is linear (albeit with
+a small constant) and large datasets can perform very badly even if the
+hash function is good.  I don't recall the problem ever being a bad hash
+function, though it could have occurred and I've forgotten.
+
+My own solution was to expand the table size when it becomes 3/4 full or
+so (using internal rather than external chaining); it might be better to
+make each bucket be a tree.  The latter solution has a security benefit:
+it mitigates DOS attacks based on causing collisions deliberately.  This
+consideration occurred at my last job, but admittedly isn't a common
+one.  For what it's worth, the use of an expanding table has always
+solved the problem.
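+
+The expansion rule I have in mind is nothing more elaborate than this (the
+names here are purely illustrative):
+
+   --  checked after each insertion
+   if Element_Count * 4 >= Bucket_Count * 3 then        -- table is ~3/4 full
+      Expand_And_Rehash (New_Bucket_Count => 2 * Bucket_Count);
+   end if;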
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Saturday, May 22, 2004  3:42 PM
+
+The Hashed_Maps are intended to be expandable hash tables.
+That's what Resize() is all about.  And yes, I expect the only
+reason AVLs might start to outperform a hash table is if the
+hash table has a fixed number of buckets.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Saturday, May 22, 2004  3:46 PM
+
+The containers library uses an expanding hash table. The only way the
+behavior can get bad is if the hash function isn't good enough to use most
+of the buckets.
+
+****************************************************************
+
+From: Ehud Lamm
+Sent: Sunday, May 16, 2004  4:54 AM
+
+> Tucker wrote:
+>
+> > I guess I am still not convinced.  If you use a binary
+> > tree, having a cursor pointing into the tree is not
+> > always terribly useful when you are trying to search
+> > for some subsequent element with a given key.  You will
+> > often have to go back "up" several levels before being
+> > able to go back down.  With "Slice" you are forcing
+> > every operation to support a "virtual" subset as well
+> > as a real set.  This is going to inevitably introduce
+> > some distributed overhead. I would be surprised if on
+> > balance, this is a net savings.  I'm sure you could
+> > construct a case where it would be a savings, but overall,
+> > I would expect the mix of uses would favor keeping the
+> > abstraction simpler.
+>
+> I totally agree. Moreover, there is overhead from requiring every
+> implementation of Sets to support by-reference (not copied)
+> set objects (that is, the result of Slice).
+
+This is also the way I see it.
+Perhaps I missed something, so let me put it bluntly: are we talking ADT
+interfaces here, or are we working solely for a specific implementation?
+As you know from our Ada-Europe workshop a couple of years ago, I am firmly
+in the ADT camp myself, so I prefer interfaces that don't impose too many
+implementation restrictions. They can then be extended at will -- much easier
+than removing operations that are hard or inefficient to support.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Monday, May 17, 2004  1:01 PM
+
+[Slice et al.]
+
+> Perhaps I missed something, so let me put it bluntly: are we talking ADT
+> interfaces here, or are we working solely for a specific implementation?
+
+Both. Slice provides a way to express ranges declaratively (interface) and a
+way to pass information to operations that can use it to optimize
+(implementation, but not specific).
+
+(Just clarifying. The cases have been made, the tendency of the ARG is to
+leave Slice out, so it's only academic now.)
+
+****************************************************************
+
+From: Ehud Lamm
+Sent: Sunday, May 16, 2004  5:03 AM
+
+> From: Matthew Heaney [mailto:mheaney@on2.com]
+>
+> Tucker Taft wrote:
+>
+> > I don't think we need to change
+> > "Previous" to make these equivalences work for
+> > endpoints.   Just let the user write a
+> > "Previous_Or_Last" if they really want to,
+> > which would need to take both a cursor and a set.
+> > Or more directly, write Lower_Limit or Upper_Limit
+> > if you want them, since these already have enough
+> > information with the set and the key.
+> >
+> > Providing Ceiling and Floor still seems adequate to me,
+> > as they provide the needed primitives for all other
+> > operations mentioned thus far.
+>
+> OK.  That seems reasonable.  I just wanted to make sure we
+> were on the
+> same page w.r.t the behavior at the endpoints.
+
+It does seem reasonable, and since I have never used this sort of operation, my
+opinion shouldn't count as much, so take this with a grain of salt...
+
+It looks like the equivalences help one understand what's going on. The special
+cases make code less readable and the logic a bit less clear. How important
+this is, is hard to judge.
+
+I wager many students will forget about the special case. Why not provide
+Lower_Limit or Upper_Limit? The cost seems tiny.
+
+****************************************************************
+
+From: Matthew Heaney
+Sent: Monday, May 17, 2004  1:15 PM
+
+I am in favor of providing the following four operations:
+
+Lower_Limit (S, K) < K    (AKA "Ground", "Previous_Floor")
+Floor (S, K) <= K
+Ceiling (S, K) >= K       (AKA Lower_Bound)
+Upper_Limit (S, K) > K    (AKA Upper_Bound, "Roof", "Next_Ceiling")
+
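+For a concrete feel, suppose S contains the keys 10, 20, and 30:
+
+   Lower_Limit (S, 20) => 10        Lower_Limit (S, 25) => 20
+   Floor       (S, 20) => 20        Floor       (S, 25) => 20
+   Ceiling     (S, 20) => 20        Ceiling     (S, 25) => 30
+   Upper_Limit (S, 20) => 30        Upper_Limit (S, 25) => 30
+
+(each result being a cursor designating that element; at the endpoints, e.g.
+Upper_Limit (S, 30) or Lower_Limit (S, 10), the result is No_Element).
+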
+I think Tucker only wants the middle two.
+
+If I had to pick only two, then I'd pick the last two (Ceiling and
+Upper_Limit).  (This is what the STL & Charles do, and what was in the
+API prior to the ARG meeting in Phoenix.)
+
+Note that there are really two separate issues:
+
+(1) What is the value of the expression:
+
+   Previous (Next (C))
+
+We got rid of the internal sentinel node in Phoenix, which means that once a
+cursor assumes the value No_Element, it keeps that value.
+
+This is what Tucker and I were discussing in the earlier message quoted
+above, about letting a user define a Previous_or_Last function if he
+needs to back up onto the actual sequence.
+
+(2) Restoring the functionality of the two operations formerly known as
+"Lower_Bound" and "Upper_Bound".
+
+There seems to be agreement that this functionality is useful.  One of
+the issues is that several of the ARG reviewers were confused by the
+names "Lower_Bound" and "Upper_Bound".
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Monday, May 17, 2004  1:49 PM
+
+Will this never end? ;-)
+
+My *major* complaint with Upper_Limit, Lower_Limit,
+Upper_Bound, Lower_Bound, etc. is that the names
+make no intuitive sense.
+
+If you could come up with some reasonable names,
+I might support the inclusion.  I do not find
+any of the ones that have been proposed thus far
+acceptable.
+
+Predecessor and Successor might make it, where they
+are allowed to take a key that might or might
+not appear in the set, and return the cursor for
+the item in the set next preceding or following the given key.
+
+****************************************************************
+
+From: Michael F. Yoder
+Sent: Wednesday, May 19, 2004  11:48 AM
+
+Whether 2 or 4 operations are included, it would be pleasant if the
+names came from a consistent scheme.  For example:
+
+    Lt_Item (S, K) < K
+    Le_Item (S, K) <= K
+    Gt_Item (S, K) > K
+    Ge_Item (S, K) >= K
+
+This is easier to do if the "Lt" and "Gt" operations are the only two
+provided.  For example, 'Predecessor' and 'Successor' would be fine.
+Floor for Le_Item and Ceiling for Ge_Item, together with Predecessor and
+Successor, would be acceptable.
+
+****************************************************************
+
+From: Christoph Grein
+Sent: Sunday, May 23, 2004  11:37 PM
+
+I do think the names at the right intuitively describe the meaning:
+
+     Gt_Item (S, K) >  K         Roof
+     Ge_Item (S, K) >= K         Ceiling
+     Le_Item (S, K) <= K         Floor
+     Lt_Item (S, K) <  K         Ground, Basement
+
+It's like a building: you're in a room, which has a floor and a ceiling; above
+is the roof (or the attic), and below is the basement or ground.
+
+****************************************************************
+
+From: Marius Amado Alves
+Sent: Monday, May 24, 2004  5:26 AM
+
+":=" for containers clones the source (as opposed to passing a reference
+to).
+
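+That is, I mean the following behaviour (a sketch; X and Y stand for
+arbitrary elements):
+
+   declare
+      V1, V2 : Vector;
+   begin
+      Append (V1, X);
+      V2 := V1;         -- V2 becomes an independent copy of V1
+      Append (V2, Y);   -- does not affect V1
+      --  now Length (V1) = 1 and Length (V2) = 2
+   end;
+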
+Do I understand correctly that this behaviour is specified solely by the
+fact that containers are non-limited?
+
+In that case, wouldn't a small clarifying Note be useful, especially for new
+users coming e.g. from... uh... Java...
+
+And shouldn't the behaviour of ":=" be documented for any controlled type
+anyway?
 
 ****************************************************************
 
 From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+Sent: Saturday, May 13, 2004  9:31 AM
 
 ****************************************************************
 
 From: Matthew Heaney
-Sent: Tuesday, February 24, 2004  9:31 AM
+Sent: Saturday, May 13, 2004  9:31 AM
 
 ****************************************************************
 

Questions? Ask the ACAA Technical Agent