CVS difference for ai05s/ai05-0144-1.txt

Differences between 1.6 and version 1.7
Log of other versions for file ai05s/ai05-0144-1.txt

--- ai05s/ai05-0144-1.txt	2009/11/04 06:26:38	1.6
+++ ai05s/ai05-0144-1.txt	2010/01/09 01:31:29	1.7
@@ -1640,3 +1640,1473 @@
 
 ****************************************************************
 
+[Note: The following thread gradually diverged from threads on conditional
+expressions; see the appendix of AI05-0147-1 for earlier mail. Since it
+is primarily on side-effects in functions, it seems more related to this
+AI than conditional expressions.]
+
+From: Bob Duff
+Sent: Saturday, February 28, 2009  2:14 PM
+
+> Bob says "never write code
+> like that", but the language allows it, and any time you have a
+> language feature that you feel like saying "never use it", something
+> is wrong.
+
+We can't forbid every evil thing.  Ada allows me to write an evil rat's nest of
+goto's.  But we can't forbid that, short of outlawing goto's, which cure is
+worse than the disease.
+
+Anyway, I think you're being inconsistent.  Last week, you and I agreed that it
+would have been nice if Ada 83 had specified a particular portable elaboration
+order, chosen by all compilers.  It is bad style to depend on the elab order
+(when not nailed down by with's and pragmas, etc), but people DO depend on it,
+and it costs thousands of dollars to port their code from one compiler to
+another.
+
+Note that Ada's rule is actually similar to Eiffel's: For "F(X) and G(Y)",
+Eiffel says either or both operands can be evaluated.  Ada says both are
+evaluated, but in either order.  So in Ada, if F and G have side effects, you're
+treading on thin ice -- if those side effects affect each other, the program is
+wrong, but might work by accident on some implementations.
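+
+[Editor's note: a minimal sketch, with invented names, of the kind of
+interference meant here. Both operands are evaluated, but the result
+depends on which one the compiler evaluates first:
+
+    Count : Integer := 0;
+
+    function F (X : Integer) return Boolean is
+    begin
+       Count := Count + 1;     -- side effect
+       return X > 0;
+    end F;
+
+    function G (Y : Integer) return Boolean is
+    begin
+       return Count > 0;       -- reads F's side effect
+    end G;
+
+    --  True if F(1) is evaluated first, False if G(1) is:
+    Result : constant Boolean := F (1) and G (1);
+]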
+
+Which reminds me of another case where it's hard to decide which side effects
+matter: if both F(X) and G(Y) raise an unhandled exception, I've got two bugs,
+but I probably don't care which one is detected.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Saturday, February 28, 2009  2:01 PM
+
+> Actually that's not a motivation, since we are talking primarily about
+> the coding style used internally for tools, so the choice is made on
+> aesthetic grounds (basically a viewpoint that non-short-circuiting
+> logical operators are fundamentally not a good idea, though I must say
+> the Eiffel convention makes good sense to me).
+
+I don't understand this attitude at all (re: the Eiffel "compiler-writer's whim"
+rule).  It goes against the entire design of Ada, which is based on the
+assumption that programmers make mistakes, and that the language should try to
+prevent such mistakes (preferably by static checks).  Not by threatening to
+introduce bugs 2 years hence.
+
+After all, it's quite clear from the C standard that you're not supposed to
+index arrays out-of-bounds.  But it happens all the time.
+
+>...I really dislike
+> the idea of being able to write code where full evaluation is
+>necessary for functional correctness.
+
+But you ARE able to write such code in Eiffel.  You depend (by accident) on the
+evaluations and order chosen by your current compiler.  There is no rule in
+Eiffel (or Ada) that prevents that.  Then 2 years later, the code mysteriously
+quits working -- that costs thousands of dollars, each time.
+
+I really think the following language design is better:
+
+    Rule: The operands of "and" are both evaluated, left to right.
+    NOTE: It is considered bad style to depend on the fact that both are
+    evaluated when one is True, or to depend on the order.  Please try not to
+    do that.
+
+>...Bob says "never write code
+> like that", but the language allows it, and any time you have a
+>language feature that you feel like saying "never use it", something
+>is wrong.
+
+Something is indeed wrong, but we don't know how to fix it.  That is, we cannot
+forbid (at compile time) the thing that you and I both agree is evil.  Two
+reasons:
+
+    - Ada doesn't provide enough information to the compiler for it to know
+      about side effects.  (SPARK is a bit different, here.)
+
+    - Even if the compiler had that info, it's not clear how to formally define
+      what is forbidden.  It's not clear which side effects matter.  Lots of
+      software has the side effect of heating up the CPU chip, but we don't
+      care about that.  We probably don't care about memo-izing functions,
+      which have the side effect of modifying some cache.  We probably don't
+      care in which order two "new" operations are done, even though it can
+      certainly affect the output of the program (e.g. convert to
+      Integer_Address and print it out, for debugging).
+
+> Probably the most important thing is consistency, and it is a bit
+> unfortunate that there are two entirely different consistent coding
+> styles possible here, and people can't agree on which they prefer.
+
+Yes it's unfortunate, but we don't really HAVE to agree.  For example, it's just
+fine that I am forced against my will by AdaCore to write "and then".
+
+> That means you end up with essentially two different languages in this
+> respect, and you have to learn both of these languages to be able to
+> read Ada code.
+
+I agree, that's bad.  But I can't get too excited about it, when I regularly
+have to read those two dialects of Ada, plus programs written in C, Python,
+make, bash, Perl, awk, autoconf junk, etc etc.  Maybe 100 years from now, people
+will agree on these things (I hope they agree autoconf is junk).
+
+> It's like the situation with variant records, where there are two
+> entirely different implementation strategies (allocate max, or
+> allocate actual size and fiddle with the heap implicitly), again you
+> get two languages, and unfortunately compilers only have to implement
+> one of those languages, causing significant portability problems (I
+> believe that all the mainstream compilers allocate the max now, so
+> that this is not as much of a problem as it might be).
+
+Yes, but not much the language design could do about that, IMHO.
+
+Another (trivial) example is "procedure P(X: Integer);"
+versus "procedure P(X: in Integer);".  Some folks like the "in", others think
+it's just noise.
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Sunday, March 1, 2009  3:19 AM
+
+> I don't understand this attitude at all (re: the Eiffel
+> "compiler-writer's whim" rule).  It goes against the entire design of
+> Ada, which is based on the assumption that programmers make mistakes,
+> and that the language should try to prevent such mistakes (preferably
+> by static checks).  Not by threatening to introduce bugs 2 years hence.
+
+Harrumph! .. and that's why Ada is FULL of cases in which the compiler is
+allowed to change semantics, e.g. evaluate right-to-left or left-to-right, or
+move or eliminate checks etc etc.
+
+Ada takes the attitude that it is OK for the compiler to do such things if no
+reasonable programs are affected. When we write
+
+     A + F(X)
+
+we do not expect F(X) to modify A in any reasonable code, so it is OK that the
+compiler may one day evaluate this in one order or another (note this is a very
+reasonable example, many compilers will evaluate F(X) first here on most
+machines, but on a stack machine it is natural to evaluate A first).
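+
+[Editor's note: a minimal sketch, with invented names, of this trap:
+
+    A : Integer := 1;
+
+    function F (X : Integer) return Integer is
+    begin
+       A := A + 10;            -- modifies the other operand
+       return X;
+    end F;
+
+    --  R is 2 if A is evaluated first, 12 if F(1) is:
+    R : constant Integer := A + F (1);
+]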
+
+So here we have a case where bugs can be introduced 2 years hence when switching
+architectures.
+
+You can easily write realistic examples where improved optimization changes
+order of elaboration, so the error can show up with new compiler versions.
+
+We have seen this happen plenty of times in Ada in other circumstances such as
+erroneous aliasing assumptions, or erroneous checked suppression cases.
+
+Now in this case we have
+
+     A or B
+
+in Eiffel and the concern is that if B is "bad" somehow, the compiler may cover
+this up by not evaluating B when A is True.
+
+Bob agrees that no one should ever write code where B must always be evaluated,
+but unlike the example above, is upset that if this rare event DOES happen, the
+compiler can affect the outcome.
+
+Who's being inconsistent here???
+
+> I really think the following language design is better:
+>
+>     Rule: The operands of "and" are both evaluated, left to right.
+>     NOTE: It is considered bad style to depend on the fact that both are
+>     evaluated when one is True, or to depend on the order.  Please try not to
+>     do that.
+
+Of course that's not the Ada rule, since as noted above the "left to right" is
+not True of Ada. I appreciate the language design approach (e.g. of Java) which
+tries to eliminate all compiler dependent behavior differences, but Ada is not a
+language in this category, so it is bogus to appeal to this principle as an
+absolute one in discussing new features.
+
+BTW, you don't need to go to obscure languages like Eiffel, Pascal allows A or B
+to be short circuited or not left-to-right, or BIG SURPRISE! right-to-left. I
+got bitten by this once porting a large Pascal program from one compiler to
+another, and the second compiler did systematic short circuiting right to left.
+
+
+>> It's like the situation with variant records, where there are two
+>> entirely different implementation strategies (allocate max, or
+>> allocate actual size and fiddle with the heap implicitly), again you
+>> get two languages, and unfortunately compilers only have to implement
+>> one of those languages, causing significant portability problems (I
+>> believe that all the mainstream compilers allocate the max now, so
+>> that this is not as much of a problem as it might be).
+>
+> Yes, but not much the language design could do about that, IMHO.
+
+Of *COURSE* it could, it should just have implementation advice saying you
+should allocate the maximum, and then you have tests that enforce this
+implementation advice.
+
+> Another (trivial) example is "procedure P(X: Integer);"
+> versus "procedure P(X: in Integer);".  Some folks like the "in",
+> others think it's just noise.
+
+Yes, that's a language design mistake. I must say COBOL is well designed in
+this respect. COBOL style in practice is much more free from this kind of
+stylistic difference. Actually, to be fair, this comes more from agreed-on
+coding style. You can say
+
+    COMPUTE A = A + B
+
+but no one but a C/Ada/Fortran programmer who has not yet learned proper COBOL
+would ever write such ghastly code, they would write
+
+    ADD B TO A
+
+:-)
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Sunday, March 1, 2009  3:23 AM
+
+> Anyway, I think you're being inconsistent.  Last week, you and I
+> agreed that it would have been nice if Ada 83 had specified a
+> particular portable elaboration order, chosen by all compilers.  It is
+> bad style to depend on the elab order (when not nailed down by with's
+> and pragmas, etc), but people DO depend on it, and it costs thousands
+> of dollars to port their code from one compiler to another.
+
+No, very consistent. The design view in Ada is the following
+
+In reasonable programs that people are actually liable to write in practice, the
+language should strive for guaranteed portable semantics.
+
+In unreasonable programs, that no one should write, it is OK if the language
+does not guarantee portable semantics.
+
+Elaboration order clearly comes in the first category.
+
+A or B where B is required to be evaluated for correctness clearly comes in
+the second category.
+
+So I would have been happy with a design in Ada that defined A or B in the
+canonical way, and then had
+
+Implementation Permission
+   If A is True, then the compiler may omit the evaluation of B
+
+which is basically the Eiffel approach
+
+After all, we do allow in Ada the compiler to skip the evaluation of B if all it
+does is raise a predefined exception (at least for sure we did in the original
+Ada 83, where I understood 11.6, I am not sure I understand 11.6 any more, so
+let's just say that in Ada 2005, I trust this is still the case :-))
+
+****************************************************************
+
+From: Bob Duff
+Sent: Sunday, March 1, 2009  2:06 PM
+
+...
+> Harrumph! .. and that's why Ada is FULL of cases in which the compiler
+> is allowed to change semantics, e.g. evaluate right-to-left or
+> left-to-right, or move or eliminate checks etc etc.
+
+There are actually very few such cases in Ada, compared to, say, C.
+I think there are a little too many in Ada, but still not a lot.
+
+...
+> You can easily write realistic examples where improved optimization
+> changes order of elaboration, so the error can show up with new
+> compiler versions.
+>
+> We have seen this happen plenty of times in Ada in other circumstances
+> such as erroneous aliasing assumptions, or erroneous checked
+> suppression cases.
+
+What do you mean by "erroneous aliasing"?
+
+The "check suppression" cases don't bother me -- we can't require code that
+suppresses failing checks to be deterministic, without defeating the whole
+purpose of check suppression, which is efficiency.
+
+> Now in this case we have
+>
+>      A or B
+>
+> in Eiffel and the concern is that if B is "bad" somehow, the compiler
+> may cover this up by not evaluating B when A is True.
+>
+> Bob agrees that no one should ever write code where B must always be
+> evaluated, but unlike the example above, is upset that if this rare
+> event DOES happen, the compiler can affect the outcome.
+>
+> Who's being inconsistent here???
+
+My position is 100% consistent.  I think nondeterminism is bad.
+I can tolerate it only in cases where it actually buys something (usually
+efficiency, as in the pragma Suppress case).
+
+If I ran the circus, arguments would be evaluated left to right, as in Java.  Or
+else I would forbid A or B from having side effects in "A or B", so the compiler
+can reorder without causing future bugs (which is hard -- it would require some
+SPARK-like annotations.)
+
+> > I really think the following language design is better:
+> >
+> >     Rule: The operands of "and" are both evaluated, left to right.
+> >     NOTE: It is considered bad style to depend on the fact that both are
+> >     evaluated when one is True, or to depend on the order.  Please try not to
+> >     do that.
+>
+> Of course that's not the Ada rule, since as noted above the "left to
+> right" is not True of Ada. I appreciate the language design approach
+> (e.g. of Java) which tries to eliminate all compiler dependent
+> behavior differences, but Ada is not a language in this category, so
+> it is bogus to appeal to this principle as an absolute one in
+> discussing new features.
+
+Ada is a lot closer to the Java end of the spectrum than to the C end of the
+spectrum.
+
+> BTW, you don't need to go to obscure languages like Eiffel, Pascal
+> allows A or B to be short circuited or not left-to-right, or BIG
+> SURPRISE! right-to-left.
+
+Which Pascal?  Jensen and Wirth's book doesn't say that.
+
+>...I got bitten by this once porting a
+> large Pascal program from one compiler to another, and the second
+>compiler did systematic short circuiting right to left.
+
+This is what I don't get.  You were bitten, yet you claim it can't happen by
+accident.  Heh?
+
+> >> It's like the situation with variant records, where there are two
+> >> entirely different implementation strategies (allocate max, or
+> >> allocate actual size and fiddle with the heap implicitly), again
+> >> you get two languages, and unfortunately compilers only have to
+> >> implement one of those languages, causing significant portability
+> >> problems (I believe that all the mainstream compilers allocate the
+> >> max now, so that this is not as much of a problem as it might be).
+> >
+> > Yes, but not much the language design could do about that, IMHO.
+>
+> Of *COURSE* it could, it should just have implementation advice saying
+> you should allocate the maximum, and then you have tests that enforce
+> this implementation advice.
+
+I don't see how you can test this.
+
+If we wanted to push in the opposite direction (require the heap-based
+alloc/reallocate implementation), then it's easy to test -- just create an
+object whose max size is 2**36 bytes or something.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Sunday, March 1, 2009  1:42 PM
+
+> No, very consistent. The design view in Ada is the following
+>
+> In reasonable programs that people are actually liable to write in
+> practice, the language should strive for guaranteed portable
+> semantics.
+>
+> In unreasonable programs, that no one should write, it is OK if the
+> language does not guarantee portable semantics.
+
+OK, if "no one should write" means "nobody would write by accident", then your
+position is consistent.
+
+But we had the bug of accidentally writing code that depends on argument eval
+order at SofCheck.  I think it happened 3 times (but I could be misremembering).
+And it was very hard to debug, because the order dependence showed up in the
+final output, long after the offending code.
+
+Anyway, I'd prefer to assume things are in the first category, unless I have
+strong evidence otherwise.
+
+> Elaboration order clearly comes in the first category
+
+Yes.
+
+> A or B where B is required to be evaluated for correctness clearly
+> comes in the second category.
+
+Not so clear to me, given the above-mentioned experience.
+(I don't think it involved "and" or "or", though.)
+
+Here's a related example with "or" that seems quite likely to me:
+
+    if X = null or X.all > 0 ...
+
+If X = null, you are saying one compiler could return True, and the
+2-years-later one suddenly starts raising Constraint_Error. That's the kind of
+non-portability I don't like.
+
+Or worse:
+
+    if X = null or F(X) ...
+
+where F does X.all.
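+
+[Editor's note: for contrast, the short-circuit forms make both tests
+portable, since the right operand is evaluated only when X /= null:
+
+    if X = null or else X.all > 0 then ...
+    if X = null or else F (X) then ...
+]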
+
+> So I would have been happy with a design in Ada that defined A or B in
+> the canonical way, and then had
+>
+> Implementation Permission
+>    If A is True, then the compiler may omit the evaluation of B
+
+I think you mean if either operand is True, it may omit the other one.
+
+> which is basically the Eiffel approach
+>
+> After all, we do allow in Ada the compiler to skip the evaluation of B
+> if all it does is raise a predefined exception (at least for sure we
+> did in the original Ada 83, where I understood 11.6, I am not sure I
+> understand 11.6 any more, so let's just say that in Ada 2005, I trust
+> this is still the case :-))
+
+Yes, I believe that's the case, although 11.6 continues to surprise me.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Monday, March 2, 2009  3:52 PM
+
+...
+> My position is 100% consistent.  I think nondeterminism is bad.
+> I can tolerate it only in cases where it actually buys something
+> (usually efficiency, as in the pragma Suppress case).
+>
+> If I ran the circus, arguments would be evaluated left to right, as in
+> Java.
+
+That would have been my choice, as well. But Tucker argues that that would
+"validate" the writing of intentionally tricky code (especially if we have "in
+out" parameters on function) - that seems like a valid concern.
+
+> Or else I would forbid A or B from having side effects in "A or B", so
+> the compiler can reorder without causing future bugs (which is hard --
+> it would require some SPARK-like annotations.)
+
+No, this is not hard, and only one annotation is required. See AI05-0144-1 for a
+lengthy discussion of this topic. (Note that this is a forward reference; I only
+wrote about half of that AI before leaving for the meeting, and the half of
+interest here hasn't been written yet, other than in my head.) The question is
+really whether it makes sense to live within the restrictions that would be
+required. For Ada, I suspect that the answer is no (mainly because of
+compatibility concerns - I think AI05-0144-1 will ultimately propose rules only
+on "in out" and access-to-variable parameters on functions for that reason - and
+that will be mainly to mitigate the main technical objection to "in out"
+parameters on functions). For a new language, however, the answer is certainly.
+
+The key is to have a class of functions, whose only side-effects are documented
+in their contract. (I call them "strict" functions, since "pure" means something
+slightly different and less pure. :-) Then it is possible for legality rules to
+check whether an expression has bad side-effects or not. (Strict functions have
+a number of other useful properties, including that they don't need to be
+elaborated - freezing is enough - and that they can be safely be executed in
+parallel without seeing the body. It was the latter property that originally led
+me to them, although I've given up on the "safe tasking" idea as being too much
+work to define in my spare time.)
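+
+[Editor's note: a hypothetical sketch of the "strict" idea; the pragma
+name and the rules are invented here, since the AI text mentioned below
+had not yet been written:
+
+    function Hash (S : String) return Natural;
+    pragma Strict (Hash);   -- hypothetical pragma, not actual Ada
+    --  Legality rules would reject a body for Hash that writes any
+    --  variable not named in its contract, so calls to Hash could be
+    --  reordered, skipped, or run in parallel without changing results.
+]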
+
+I was going to write a lengthy discussion of this topic here, but I realized
+that it would end up the same as the one I need to write for the AI, and no
+sense in writing it twice. So you'll have to wait for it.
+
+****************************************************************
+
+From: Jean-Pierre Rosen
+Sent: Tuesday, March 3, 2009  2:52 AM
+
+> The key is to have a class of functions, whose only side-effects are
+> documented in their contract. (I call them "strict" functions, since "pure"
+> means something slightly different and less pure. :-)
+
+So, you would define true functions as opposed to value-returning procedures...
+Makes me feel younger suddenly.
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Tuesday, March 3, 2009  4:17 AM
+
+> Not so clear to me, given the above-mentioned experience.
+> (I don't think it involved "and" or "or", though.)
+
+Indeed, order of evaluation for e.g. arithmetic is a very different thing than
+deliberately writing OR/AND where you depend on FULL evaluation; remember,
+that's what we were discussing, not ordering of AND/OR operands.
+
+I think it VERY unlikely that anyone would write, by accident, an AND/OR which
+depended on full evaluation. Though one can see someone doing this deliberately
+(for me it is the only justification for using AND/OR :-))
+
+> Here's a related example with "or" that seems quite likely to me:
+>
+>     if X = null or X.all > 0 ...
+>
+> If X = null, you are saying one compiler could return True, and the
+> 2-years-later one suddenly starts raising Constraint_Error.
+> That's the kind of non-portability I don't like.
+
+Well 11.6 already allows this behavior!
+
+...
+>> So I would have been happy with a design in Ada that defined A or B
+>> in the canonical way, and then had
+>>
+>> Implementation Permission
+>>    If A is True, then the compiler may omit the evaluation of B
+>
+> I think you mean if either operand is True, it may omit the other one.
+
+Not sure, I don't like right to left short circuiting, it's very unnatural
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Tuesday, March 3, 2009  4:30 AM
+
+>> We have seen this happen plenty of times in Ada in other
+>> circumstances such as erroneous aliasing assumptions, or erroneous
+>> checked suppression cases.
+>
+> What do you mean by "erroneous aliasing"?
+
+The use of unchecked conversion to create counterfeit pointers.
+Generally if we have
+
+    type A is new Integer;
+    type B is new Integer;
+    type AA is access all A;
+    type BB is access all B;
+
+the compiler can assume A/B are different alias sets, and gcc does so. Many
+people using GNAT find the need to use -fno-strict-aliasing to prevent this kind
+of assumption. In this respect Ada is very like C, though we have managed to
+deal automatically in GNAT with cases where the UC occurs in the same unit as
+the relevant access type declarations.
+
+This is a VERY common source of portability/optimization problems (personally I
+think -fno-strict-aliasing should be the default, but we have strong internal
+disagreement on this, since we have at least some examples of big apps where it
+has noticeable space/speed consequences).
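+
+[Editor's note: a sketch of the counterfeit-pointer pattern meant here,
+reusing the declarations above:
+
+    with Ada.Unchecked_Conversion;
+
+    function To_BB is new Ada.Unchecked_Conversion (AA, BB);
+
+    X : AA := new A'(1);
+    Y : BB := To_BB (X);    -- X and Y now designate the same cell
+
+    --  With strict aliasing, gcc may assume a write through Y cannot
+    --  change a read through X, and reorder the two accordingly.
+]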
+
+> My position is 100% consistent.  I think nondeterminism is bad.
+> I can tolerate it only in cases where it actually buys something
+> (usually efficiency, as in the pragma Suppress case).
+
+Obviously the non-determinism of AND/OR to allow short circuiting buys
+efficiency.
+
+> If I ran the circus, arguments would be evaluated left to right, as in
+> Java.  Or else I would forbid A or B from having side effects in "A or
+> B", so the compiler can reorder without causing future bugs (which is
+> hard -- it would require some SPARK-like annotations.)
+
+Well sometimes it is hard to tell if you are arguing about the way Ada should
+be, or the way it is wrt new additions :-)
+
+> Which Pascal?  Jensen and Wirth's book doesn't say that.
+
+At least one edition of the J&W book says that, I remember this well.
+
+> This is what I don't get.  You were bitten, yet you claim it can't
+> happen by accident.  Heh?
+
+You are getting confused, check the thread, what I said could not happen by
+accident was a case where And/Or *required* full evaluation of both sides. Using
+a non-short circuited form where a short-circuited form is required as in
+
+      A = NULL or A.all = 3
+
+is a bug that can certainly happen by accident, and in all of Eiffel, Pascal,
+and Ada, you can have the situation where some compilers will do what you expect
+and work if A = null and other compilers will bomb or raise an exception.
+
+>>> Yes, but not much the language design could do about that, IMHO.
+>> Of *COURSE* it could, it should just have implementation advice
+>> saying you should allocate the maximum, and then you have tests that
+>> enforce this implementation advice.
+>
+> I don't see how you can test this.
+
+Trivial!
+
+     type AA is array (Integer range <>) of something-big
+
+     type A (X : Long_Long_Integer := 0) is
+     record
+         AA (1 .. X);
+     end record;
+
+     AAA : A(3);
+
+and see if you can execute this. It should raise Storage_Error if the IA has
+been followed, perhaps not 100% formally sound, but in practice fine. There are
+several acats tests that work this way to ensure the compiler is doing the right
+thing (usually the other way round, to make sure SE is not raised for a case
+where you are not allowed to allocate the max).
+
+e.g. the above example with
+
+    subtype AAS is AA (10);
+    AAAS : AAS;
+
+> If we wanted to push in the opposite direction (require the heap-based
+> alloc/reallocate implementation), then it's easy to test -- just
+> create an object whose max size is 2**36 bytes or something.
+
+Well the difficulty with the heap based approach comes with storage leaks when
+these things are used as components, and effective tests for storage leaks are
+tricky.
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Tuesday, March 3, 2009  4:34 AM
+
+> The key is to have a class of functions, whose only side-effects are
+> documented in their contract. (I call them "strict" functions, since "pure"
+> means something slightly different and less pure. :-)
+
+How do you give the side-effects for a memo function without going to
+full-blown SPARK-style annotations? Or do you forbid them, along with, say,
+instrumented functions that count the number of times they are called?
+
+It is the existence of such functions which are clearly side-effect free
+conceptually but which appear to have side effects that led in Ada 83 to
+abandoning the notion of side-effect free functions. I do not see that things
+have changed here.
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Tuesday, March 3, 2009  4:38 AM
+
+> So, you would define true functions as opposed to value-returning
+> procedures... Makes me feel younger suddenly.
+
+As I say, I don't know if "true" function for you includes, say, an Ackermann
+function that memoizes internally. Or a Sqrt function that counts the number of
+times it is called for statistical performance evaluation purposes.
+
+Or a function that allocates stuff on the heap as it goes.
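+
+[Editor's note: a minimal sketch of the memo-function problem, with
+Fibonacci standing in for Ackermann; Fib is conceptually side-effect
+free, yet its body updates a cache, so any simple "no side effects"
+rule would reject it:
+
+    Cache : array (0 .. 90) of Long_Long_Integer := (others => -1);
+
+    function Fib (N : Natural) return Long_Long_Integer is
+    begin
+       if Cache (N) < 0 then            -- not yet computed
+          if N < 2 then
+             Cache (N) := Long_Long_Integer (N);
+          else
+             Cache (N) := Fib (N - 1) + Fib (N - 2);
+          end if;
+       end if;
+       return Cache (N);
+    end Fib;
+]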
+
+Even taking unbounded time might be a side effect affecting correctness, it all
+depends on your point of view.
+
+This was very extensively discussed in 1980, and I see nothing that has
+happened in between that would change things.
+
+I do think it is evil that IN OUT parameters are not allowed, it is sort of sad
+that the status of functions is that they are allowed to have side effects
+provided that they are NOT mentioned in the spec :-)
+
+It is interesting that something like the random number generator really needs
+to be able to update its argument. In GNAT we do this by taking
+'Unrestricted_Access of a parameter known to be passed by reference but that is
+really nasty!
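+
+[Editor's note: a simplified sketch, with invented names, of the GNAT
+trick described above. Generator is limited, hence passed by reference,
+so the body can update it through the GNAT-specific 'Unrestricted_Access
+even though the formal has mode "in":
+
+    type Generator is limited record
+       State : Integer := 42;
+    end record;
+    type Generator_Access is access all Generator;
+
+    function Random (Gen : Generator) return Integer is
+       P : constant Generator_Access := Gen'Unrestricted_Access;
+    begin
+       P.State := (P.State * 75 + 74) mod 65537;  -- advance the state
+       return P.State;
+    end Random;
+]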
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Tuesday, March 3, 2009  11:13 AM
+
+> > The key is to have a class of functions, whose only side-effects are
+> > documented in their contract. (I call them "strict" functions, since "pure"
+> > means something slightly different and less pure. :-)
+>
+> How do you give the side-effects for a memo function without going to
+> full-blown SPARK-style annotations? Or do you forbid them, along with,
+> say, instrumented functions that count the number of times they are
+> called?
+
+You don't allow such things, because they don't have the properties I was
+interested in (safe parallel execution, no elaboration needed, algebraic
+reduction is safe). No side-effects other than in the spec means exactly that.
+The use to prevent order dependencies is a happy accident, and I surely would
+not suggest adding such things to Ada for that reason alone.
+
+You might be able to come up with some SPARK-like annotations to allow safe
+parallel execution OR no elaboration OR algebraic reduction OR no order
+dependencies, but there is almost no overlap between the requirements, and the
+rules would be complex. I thought a bunch of simple rules to allow 50% of
+eligible functions to have this property would make more sense.
+
+> It is the existence of such functions which are clearly side-effect
+> free conceptually but which appear to have side effects that led in
+> Ada 83 to abandoning the notion of side-effect free functions.
+> I do not see that things have changed here.
+
+Perhaps they have not. I specifically said that the issue was whether people
+could live with the restrictions, not whether it is hard to write a set of
+restrictions. And I don't think that people could live with the restrictions in
+Ada (even though it could be for another language).
+
+That's why I'm not pursuing this idea other than as an additional possibility in
+another AI. And perhaps in our own compiler (as it would allow optimizations
+that we would not otherwise do).
+
+****************************************************************
+
+From: Bob Duff
+Sent: Tuesday, March 3, 2009  6:25 PM
+
+> As I say, I don't know if "true" function for you includes say an
+> Ackerman function that memoizes internally. Or a Sqrt function that
+> counts the number of times it is called for statistical performance
+> evaluation purposes.
+>
+> Or a function that allocates stuff on the heap as it goes.
+
+These are some of the reasons why I said it's "hard".  Randy disputed that,
+saying it's easy and then referred us to a document he has not yet written. (And
+then we all want to foolishly speculate on what it says.  ;-)).  It is indeed
+easy to forbid side effects, if we don't mind forbidding memoizing functions and
+the like.
+
+The "allocates stuff on the heap" case is interesting to me.  Pure functional
+languages forbid (interesting) side effects, but they do heap allocation all the
+time.  They also don't have "=" on pointers.  Contrast with Ada's pragma Pure,
+which disallows allocators.
+
+Note also that to allow "F(X) and G(Y)" to be evaluated in either order,
+invisibly, we don't need "no side effects".  We just need "the side effects of F
+and G don't affect each other in bad ways".  That's hard to define.
+
+> Even taking unbounded time might be a side effect affecting
+> correctness, it all depends on your point of view.
+
+Exactly.  Everything has side effects.  Evaluating an expression on a computer
+uses up electricity, and heats up the CPU chip.  ;-)  The hard part is deciding
+which side effects we care about.
+
+> This was very extensively discussed in 1980, and I see nothing that has
+> happened in between that would change things.
+>
+> I do think it is evil that IN OUT parameters are not allowed, it is
+> sort of sad that the status of functions is that they are allowed to
+> have side effects provided that they are NOT mentioned in the spec :-)
+>
+> It is interesting that something like the random number generator
+> really needs to be able to update its argument. In GNAT we do this by
+> taking 'Unrestricted_Access of a parameter known to be passed by
+> reference but that is really nasty!
+
+That case is a side effect we really DO care about.  A Random function (or
+procedure!) should return a different value next time.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Tuesday, March 3, 2009  6:25 PM
+
+> That would have been my choice, as well. But Tucker argues that that
+> would "validate" the writing of intentionally tricky code (especially
+> if we have "in out" parameters on functions) - that seems like a valid concern.
+
+Tuck and I have argued this before.  I still don't understand that point of
+view.
+
+If some particular kind of "tricky code" is evil, let's make it illegal.
+If we can't make it illegal, catch it at run time with an exception.
+But punishing people with flaky bugs 2 years hence is just too severe.
+
+We (language designers) cannot prevent people from writing tricky code,
+intentionally or unintentionally, by fiat.  People learn by experiment, not by
+reading some language definition.  So we should make the rules match their naive
+expectations, as best we can.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Tuesday, March 3, 2009  6:25 PM
+
+> > Here's a related example with "or" that seems quite likely to me:
+> >
+> >     if X = null or X.all > 0 ...
+> >
+> > If X = null, you are saying one compiler could return True, and the
+> > 2-years-later one suddenly starts raising Constraint_Error.
+> > That's the kind of non-portability I don't like.
+>
+> Well 11.6 already allows this behavior!
+
+I don't think so (although as I said, 11.6 continues to surprise me).
+I am presuming this 'if' protecting something important, i.e. affects the
+output.  For ex:
+
+    if X = null or X.all > 0 then
+        Put_Line(T'Image(X.all));
+
+cannot print a negative number.  And:
+
+    if X = null or X.all > 0 then
+        Put_Line ("Yes");
+    else
+        Put_Line ("No");
+    end if;
+
+cannot print Yes or No -- it must raise C_E.
+Am I wrong?
+
+> > Or worse:
+> >
+> >     if X = null or F(X) ...
+> >
+> > where F does X.all.
+> >
+> >> So I would have been happy with a design in Ada that defined A or B
+> >> in the canonical way, and then had
+> >>
+> >> Implementation Permission
+> >>    If A is True, then the compiler may omit the evaluation of B
+> >
+> > I think you mean if either operand is True, it may omit the other one.
+>
+> Not sure, I don't like right to left short circuiting, it's very
+> unnatural
+
+OK.  I don't quite understand your overall point of view...
+
+****************************************************************
+
+From: Bob Duff
+Sent: Tuesday, March 3, 2009  6:25 PM
+
+> This is a VERY common source of portability/optimization problems
+> (personally I think -fno-strict-aliasing should be the default, but we
+> have strong internal disagreement on this, since we have at least some
+> examples of big apps where it has noticeable space/speed
+> consequences).
+
+Yes, that's a real problem.  I don't know the right answer.
+
+> > My position is 100% consistent.  I think nondeterminism is bad.
+> > I can tolerate it only in cases where it actually buys something
+> > (usually efficiency, as in the pragma Suppress case).
+>
+> Obviously the non-determinism of AND/OR to allow short circuiting
+> buys efficiency.
+
+Yes.  Whether it's worth it, I'm not sure.  But you were arguing that the
+nondeterminism is desirable in and of itself (which I disagree with), rather
+than tolerable given efficiency gains (which I might agree with).
+
+> > If I ran the circus, arguments would be evaluated left to right, as
+> > in Java.  Or else I would forbid A or B from having side effects in
+> > "A or B", so the compiler can reorder without causing future bugs
+> > (which is hard -- it would require some SPARK-like annotations.)
+>
+> Well sometimes it is hard to tell if you are arguing about the way Ada
+> should be, or the way it is wrt new additions :-)
+
+Yeah, sorry about that.  I got carried away with the Eiffel-related argument,
+since you said you liked the Eiffel rule.  I'll try to be clearer in future!
+
+> > Which Pascal?  Jensen and Wirth's book doesn't say that.
+>
+> At least one edition of the J&W book says that, I remember this well.
+
+I can believe that.
+
+> > This is what I don't get.  You were bitten, yet you claim it can't
+> > happen by accident.  Heh?
+>
+> You are getting confused, check the thread, what I said could not
+> happen by accident was a case where And/Or *required* full evaluation
+> of both sides. Using a non-short circuited form where a
+> short-circuited form is required as in
+>
+>       A = NULL or A.all = 3
+>
+> is a bug that can certainly happen by accident, and in all of Eiffel,
+> Pascal, and Ada, you can have the situation where some compilers will
+> do what you expect and work if A = null and other compilers will bomb
+> or raise an exception.
+
+OK.
+
+> >>> Yes, but not much the language design could do about that, IMHO.
+> >> Of *COURSE* it could, it should just have implementation advice
+> >> saying you should allocate the maximum, and then you have tests
+> >> that enforce this implementation advice.
+> >
+> > I don't see how you can test this.
+>
+> Trivial!
+>
+>      type AA is array (Integer range <>) of something-big
+>
+>      type A (X : Long_Long_Integer := 0) is
+>      record
+>          AA (1 .. X);
+
+That's illegal.  I think you need Long_Long_Integer(X) here.
+
+>      end record;
+>
+>      AAA : A(3);
+>
+> and see if you can execute this. It should raise Storage_Error if the
+> IA has been followed, perhaps not 100% formally sound, but in practice
+> fine.
+
+Hmm...  I guess I object to tests that require Storage_Error.
+There's no requirement that the implementation have any particular finite amount
+of memory.  OTOH I'm happy with tests that require either Storage_Error or
+running out of time or getting the right answer.
+
+In other words, I'm agreeing with "not 100% formally sound", here.
+
+>... There are several acats tests that work this  way to ensure the
+>compiler is doing the right thing (usually the  other way round, to
+>make sure SE is not raised for a case where  you are not allowed to
+>allocate the max
+
+Sure, the other way round is testable, as I said below.
+
+> e.g. the above example with
+>
+>     subtype AAS is AA (10);
+>     AAAS : AAS;
+>
+> > If we wanted to push in the opposite direction (require the
+> > heap-based alloc/reallocate implementation), then it's easy to test
+> > -- just create an object whose max size is 2**36 bytes or something.
+>
+> Well the difficulty with the heap based approach comes with storage
+> leaks when these things are used as components, and effective tests
+> for storage leaks are tricky.
+
+Agreed.  Storage leaks are a menace, but cannot be formally tested for.
+
+****************************************************************
+
+From: Steve Baird
+Sent: Tuesday, March 3, 2009  7:38 PM
+
+>   And:
+>
+>     if X = null or X.all > 0 then
+>         Put_Line ("Yes");
+>     else
+>         Put_Line ("No");
+>     end if;
+>
+> cannot print Yes or No -- it must raise C_E.
+> Am I wrong?
+>
+
+Was there a cut-and-paste error in the above?
+If X is non-null, it must print Yes or No and must not raise C_E.
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Wednesday, March 4, 2009  1:32 AM
+
+> You might be able to come up with some SPARK-like annotations to allow
+> safe parallel execution OR no elaboration OR algebraic reduction OR no
+> order dependencies, but there is almost no overlap between the
+> requirements and the rules would be complex. I thought a bunch of
+> simple rules to allow 50% of eligible functions to have this property
+> would make more sense.
+
+Well, I find this approach useless, as I and many others did in 1980. One can
+always reopen old arguments, but usually it is useful to do so only if new
+circumstances exist; I don't see that here. I definitely suggest examining the
+old archives to look at previous very detailed discussions.
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Wednesday, March 4, 2009  1:36 AM
+
+> Tuck and I have argued this before.  I still don't understand that
+> point of view.
+
+I agree with Tuck on this ... It has indeed been discussed many times before.
+Actually, to me, Ada is much too restrictive: it insists on left-to-right or
+right-to-left evaluation, instead of allowing arbitrary interleaved evaluation
+of both sides.
+
+    f(g) + q(r)
+
+where f(g) modifies r and q(r) modifies g is well defined in Ada if the two
+orders give the same result; this is a bad thing (and most compilers will not
+bother to guarantee this in practice).
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Wednesday, March 4, 2009  1:40 AM
+
+>>      type AA is array (Integer range <>) of something-big
+>>
+>>      type A (X : Long_Long_Integer := 0) is
+>>      record
+>>          AA (1 .. X);
+>
+> That's illegal.  I think you need Long_Long_Integer(X) here.
+
+Just a thinko; change Integer to Long_Long_Integer in the decl of AA.
+
+>>      end record;
+>>
+>>      AAA : A(3);
+>>
+>> and see if you can execute this. It should raise Storage_Error if the
+>> IA has been followed, perhaps not 100% formally sound, but in
+>> practice fine.
+>
+> Hmm...  I guess I object to tests that require Storage_Error.
+> There's no requirement that the implementation have any particular
+> finite amount of memory.
+
+Please ... tests are about what works in practice; they are not formal objects
+to be attacked by language lawyers.
+
+> In other words, I'm agreeing with "not 100% formally sound", here.
+
+OK, we agree Ada is not 100% formally sound; no one even imagined for a moment
+that it was.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Wednesday, March 4, 2009  8:15 AM
+
+> Bob Duff wrote:
+> >   And:
+> >
+> >     if X = null or X.all > 0 then
+> >         Put_Line ("Yes");
+> >     else
+> >         Put_Line ("No");
+> >     end if;
+> >
+> > cannot print Yes or No -- it must raise C_E.
+> > Am I wrong?
+>
+> Was there a cut-and-paste error in the above?
+
+I don't think so.
+
+> If X is non-null, it must print Yes or No and must not raise C_E.
+
+I was talking about the case where X = null.
+Sorry if that wasn't clear...
+
+I understood Robert to say that 11.6 implies that when X = null, it can print
+"Yes", and I was disputing that.  But 11.6 is always tricky -- what's your take
+on it?
+
+****************************************************************
+
+From: Steve Baird
+Sent: Wednesday, March 4, 2009  1:24 PM
+
+> I was talking about the case where X = null.
+> Sorry if that wasn't clear...
+
+Upon rereading, it was clear. My mistake.
+
+> I understood Robert to say that 11.6 implies that when X = null, it
+> can print "Yes", and I was disputing that.  But 11.6 is always tricky
+> -- what's your take on it?
+
+I certainly agree that 11.6 is tricky.
+
+I think I disagree with you in this case.
+
+In the X=null case, the expressions "X.all" and "X.all > 0"
+may yield an "undefined result" (11.6(5)) unless the value of this expression
+would "have some effect on the external interactions of the program".
+
+Since the condition of the if statement will return the same result regardless
+of the value of "X.all > 0", it seems like the "would have no effect"
+requirement is met.
+
+One could argue that the result of or'ing True with <undefined result> is
+undefined, but if a particular implementation defines it to be "True", then I
+think the "has no effect" requirement is met.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Wednesday, March 4, 2009  1:52 PM
+
+> Since the condition of the if statement will return the same result
+> regardless of the value of "X.all > 0", it seems like the "would have
+> no effect" requirement is met.
+
+OK, I'll buy that.  What about a more realistic example:
+
+    if X = null or X.all > 0 then
+        Do_Something (X);
+
+where the compiler cannot prove that X does not affect external results via
+Do_Something?
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Wednesday, March 4, 2009  1:53 PM
+
+> One could argue that the result of or'ing True with <undefined result>
+> is undefined, but if a particular implementation defines it to be
+> "True", then I think the "has no effect" requirement is met.
+
+I agree with Steve and Robert, and not Bob. I think Ada almost always
+*allows* short circuiting of Boolean "and" and "or", as the "undefined result"
+cannot have any impact on the external interactions of the program. The
+exception would be when the second operand does something that *does* have an
+external effect. That means that in
+
+    if X = null or F(X) then
+
+a compiler does have to call F unless it knows the contents of F do not include
+an external effect (or an exception raised by something other than a
+language-defined check).
+
+Which brings up an interesting 11.6 question. Are the exceptions raised by the
+language-defined packages "language-defined checks" for this purpose? 11.5(2)
+implies that they are (although it is hard to say definitively, because what is
+a "check" in this context), but the people who built the index of the Standard
+think that they are not. (I briefly thought about fixing that until I discovered
+how much work it would be.)
+
+This matters because it impacts what optimizations a compiler can do if it has
+built-in or other knowledge about language-defined packages. For instance, could
+an implementation short-circuit the following?
+
+    if C = No_Element or Element(C) >= 10 then
+
+(You can construct similar examples with streams or other I/O, but the above is
+the most likely kind to appear in practice and actually matter.)
+
+****************************************************************
+
+From: Bob Duff
+Sent: Wednesday, March 4, 2009  2:04 PM
+
+> Which brings up an interesting 11.6 question. Are the exceptions
+> raised by the language-defined packages "language-defined checks" for this
+> purpose?
+
+I believe the answer is no.
+
+I'm not interested in trying to prove that from RM wording, right now.  ;-)
+
+> ...but the people who built the
+> index of the Standard think that they are not.
+
+Not-so-coincidentally, I am primarily responsible for the Ada 95 index.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Wednesday, March 4, 2009  2:54 PM
+
+> OK, I'll buy that.  What about a more realistic example:
+>
+>     if X = null or X.all > 0 then
+>         Do_Something (X);
+>
+> where the compiler cannot prove that X does not affect external
+> results via Do_Something?
+
+I would surely hope that future uses have no effect on whether the optimization
+is allowed or not, because if they did, the rule would be essentially pointless.
+If you had:
+
+     if X = null or X.all > 0 then
+         Do_It;
+
+how could a compiler ever prove that Do_It didn't depend on X?
+
+I would justify that opinion by noting that 11.6 says only that the optimization
+can be done if the "value of this undefined result would have some effect on the
+external interactions of the program". It says nothing about whether the absence
+of raising the exception would have an effect on the external interactions of
+the program, which would be much harder (often impossible) to prove. (And if you
+had to prove that, there would be no need for the rule, because then you would
+simply be talking about a normal as-if optimization anyway.)
+
+In this case, the result of "X.all > 0" would not have an effect on the external
+actions of the program no matter what value it had. What happens afterwards
+isn't taken into account. QED. :-)
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Wednesday, March 4, 2009  3:07 PM
+
+> I'm not interested in trying to prove that from RM wording, right now.
+> ;-)
+
+Well, I'm pretty sure you can't prove that (or the reverse, for that matter)
+from the RM wording, because that depends on what you think "requires a check to
+be made" means. You could argue that it only means rules that include the word
+"check" in their description (but I doubt that is completely consistent either
+way), or a variety of other meanings.
+
+Anyway, the better question is why you would want it that way. That means that
+anything that is defined as a language-defined package is a second-class citizen
+when it comes to optimization, even if the compiler otherwise "builds-in" the
+support. For instance, it is easy to imagine a compiler that directly
+implemented unbounded lists (much like most compilers do with
+Unchecked_Deallocation). But the compiler would not be allowed to optimize out
+checks according to your opinion.
+
+I suppose I shouldn't be surprised; you wanted a way to turn off 11.6
+altogether, so you probably also would just as soon limit it as much as
+possible.
+
+> > ...but the people who built the
+> > index of the Standard think that they are not.
+>
+> Not-so-coincidentally, I am primarily responsible for the Ada
+> 95 index.
+
+I know that, but I didn't want to throw stones publicly. Now that you've taken
+responsibility, however... ;-) [Even if they are not checks, it is still
+annoying to be unable to find out all of the places (for instance) Use_Error is
+raised.]
+
+****************************************************************
+
+From: Bob Duff
+Sent: Wednesday, March 4, 2009  3:32 PM
+
+> I suppose I shouldn't be surprised; you wanted a way to turn off 11.6
+> altogether, so you probably also would just as soon limit it as much
+> as possible.
+
+Well, much as I hate 11.6, I do understand its purpose.
+Also, I'm a big fan of consistency.  So I do buy your argument that exceptions
+in language-defined packages ought to behave like predefined exceptions.  For
+that matter, user-defined exceptions ought to behave like predefined ones (not
+speaking specifically about 11.6, here -- just that in general, I don't think
+language designers should reserve too many magical special privileges for
+themselves).
+
+As to what is a "check", I think that was driven more by pragma Suppress than by
+11.6.  Ada 83 was pretty vague about it.  I wanted to nail it down more in Ada
+95, but I also didn't want to grow the RM too much (and we were getting heat
+about that!), so I compromised by putting whatever information about checks as
+AARM annotations, including index entries.  So if you want to know what's a
+check, you can look in the AARM/index to see nonnormative info about what Bob
+Duff thought at the time.
+
+Also, your example of Vectors.Element is very similar to array indexing, but
+Vectors didn't exist in Ada 83 or 95, so most of the packages were things like
+Text_IO, where optimization is less of an issue.
+
+> > > ...but the people who built the
+> > > index of the Standard think that they are not.
+> >
+> > Not-so-coincidentally, I am primarily responsible for the Ada
+> > 95 index.
+>
+> I know that, but I didn't want to throw stones publicly. Now that
+> you've taken responsibility, however... ;-) [Even if they are not
+> checks, it is still annoying to be unable to find out all of the
+> places (for instance) Use_Error is raised.]
+
+I've no objection to improving the index, but I'm not sure you can call
+Use_Error raising a "check" without proper ARG approval.
+
+****************************************************************
+
+From: Steve Baird
+Sent: Wednesday, March 4, 2009  3:47 PM
+
+> What about a more realistic example:
+>
+>     if X = null or X.all > 0 then
+>         Do_Something (X);
+>
+> where the compiler cannot prove that X does not affect external
+> results via Do_Something?
+
+Randy replied:
+> In this case, the result of "X.all > 0" would not have an effect on
+> the external actions of the program no matter what value it had. What
+> happens afterwards isn't taken into account.
+
+I agree with Randy.
+The analysis of the previous example carries over here. The "value" of the
+expression "X.all > 0" does not "have some effect on the external interactions
+of the program" for all the same reasons that we just went through.
+
+It is not a question of whether X affects external results, but of whether the
+value of the expression "X.all > 0" does.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Wednesday, March 4, 2009  4:45 PM
+
+...
+> Well, much as I hate 11.6, I do understand its purpose.
+> Also, I'm a big fan of consistency.  So I do buy your argument that
+> exceptions in language-defined packages ought to behave like
+> predefined exceptions.
+
+OK.
+
+...
+> > I know that, but I didn't want to throw stones publicly. Now that's
+> > you've taken responsibility, however... ;-) [Even if they are not
+> > checks, it is still annoying to be unable to find out all of the
+> > places (for instance) Use_Error is raised.]
+>
+> I've no objection to improving the index, but I'm not sure you can
+> call Use_Error raising a "check" without proper ARG approval.
+
+Surely not. The pragma Suppress issue is an interesting one, too. Would need an
+AI to clarify all of this. But it is best to let these issues rest
+until/unless an implementer actually cares. These hypothetical discussions are
+great fun, but probably aren't the best use of ARG time (or my time, for that
+matter :-).
+
+****************************************************************
+
+From: Bob Duff
+Sent: Wednesday, March 4, 2009  4:54 PM
+
+Right.  This is in line with my philosophy: during language "design", one should
+strive for perfection, but during language "maintenance", one should let
+sleeping dogs lie; that is, one should only fix language bugs that cause trouble
+in practice, like causing two implementers to produce mutually-incompatible
+compilers.
+
+Nonetheless, I'm usually willing to preach about language design to anyone
+willing to listen to my sermons (sorry).  ;-)
+
+****************************************************************
+
+From: Robert Dewar
+Sent: Wednesday, March 4, 2009  6:11 PM
+
+> One could argue that the result of or'ing True with <undefined result>
+> is undefined, but if a particular implementation defines it to be
+> "True", then I think the "has no effect" requirement is met.
+
+I think that's the intent; the original 11.6, if I remember right, talked about
+not having to evaluate an expression if the only possible external effect was
+raising a predefined exception. I think that is still the intent.
+
+****************************************************************
+
+From: Tucker Taft
+Sent: Friday, March 6, 2009  11:46 PM
+
+...
+> Tuck and I have argued this before.  I still don't understand that
+> point of view.
+
+In part I was inspired by Dijkstra in my acceptance of "nondeterminism" as a
+reasonable semantics.  His book "A Discipline of Programming" (I think that is
+the title) introduces "nondeterministic" "if" and "while" constructs, and he
+argues that they are better than their deterministic equivalents. I bought his
+argument, and I think I still do.
+
+I don't like Eiffel and Java's approach of default-initializing numeric objects
+to zero, for example, as programmers sometimes rely on it, with no indication in
+the source code. Java made an attempt to fully specify floating point semantics,
+and then had to back off because it didn't match how some hardware worked.  I
+believe that there should be very little "implicit" semantics, and left-to-right
+evaluation order is an example of that.  I could imagine a mode that a compiler
+would support where it promised left-to-right evaluation order to help with
+debugging, but it would be nice if it also allowed specifying right-to-left,
+just to shake the tree a bit.  I would certainly welcome a compiler that checked
+for uninitialized variables, and a static analysis tool or type-checking rule
+that prevented them.  But I don't like something like left-to-right evaluation
+being the way that such problems are "solved."
+
+And I suspect this is just one of those things that rational people will differ
+about.
+
+****************************************************************
+
+From: Bob Duff
+Sent: Monday, March 9, 2009  7:13 AM
+
+> In part I was inspired by Dijkstra
+
+Is this an argument from authority?  I won't buy it...
+
+>...in my
+> acceptance of "nondeterminism" as a reasonable  semantics.  His book
+>"A Discipline of Programming"
+> (I think that is the title) introduces "nondeterministic"
+> "if" and "while" constructs, and he argues that  they are better than
+>their deterministic equivalents.
+> I bought his argument, and I think I still do.
+
+Sure.  E.g. the rather trivial "min" example.  But Dijkstra expends a lot of
+energy proving that "min" is deterministic, even though it is built from
+nondeterministic primitives.  Dijkstra never claimed that "min" should produce
+nondeterministic results!
+
+> I don't like Eiffel and Java's approach of default-initializing
+> numeric objects to zero, for example, as programmers sometimes rely on
+> it, with no indication in the source code.
+
+As I think you know, I agree with that.  Note that Ada follows the same approach
+as Eiffel and Java, for access types.
+
+> Java made an attempt to fully specify floating point semantics, and
+> then had to back off because it didn't match how some hardware worked.
+> I believe that there should be very little "implicit" semantics, and
+> left-to-right evaluation order is an example of that.  I could imagine
+> a mode that a compiler would support where it promised left-to-right
+> evaluation order to help with debugging, but it would be nice if it
+> also allowed specifying right-to-left, just to shake the tree a bit.
+> I would certainly welcome a compiler that checked for uninitialized
+> variables, and a static analysis tool or type-checking rule that
+> prevented them.  But I don't like something like left-to-right
+> evaluation being the way that such problems are "solved."
+>
+> And I suspect this is just one of those things that rational people
+> will differ about.
+
+OK, but I think you and I are actually closer to agreement than it might appear.
+We both agree, for example, that the best solution is to statically prove things
+(absence of uninit vars, absence of subtle order dependences, etc).  We merely
+disagree about the ranking of various second-best solutions, in some cases.
+
+****************************************************************
+
+From: Randy Brukardt
+Sent: Saturday, March 14, 2009  7:45 PM
+
+>The rule in Pascal, which I just looked up, is that the operands of
+>and/or are both evaluated, and are evaluated left-to-right -- same as
+>all other operators. I'm talking about Jensen and Wirth's "Pascal User
+>Manual and Report".
+>I'm ignoring the ISO Pascal standard, because I didn't read it until
+>after I had quit using Pascal, and I don't have a copy, and I'm too
+>lazy to Google for it.
+
+I have the IEEE Pascal standard (ANSI/IEEE 770X3.97-1983) on the shelf here
+(that's on paper, of course). There is nothing special about Boolean operators
+(there are only two sentences).
+
+The overall expression rule is:
+
+    The order of evaluation of the operands of a dyadic operator shall be
+    implementation-dependent.
+
+[Shall be implementation-dependent? That means "must be anything you want",
+which is not much of a requirement! Terrible use of "shall".]
+
+They then follow that up with a note to make sure that everyone knows that they
+didn't specify anything at all:
+
+NOTE: This means, for example, that the operands may be evaluated in textual
+order, or in reverse order, or in parallel or they may not both be evaluated.
+
+So Pascal doesn't specify anything at all. Which probably has nothing to do with
+anything. :-)
+
+****************************************************************

Questions? Ask the ACAA Technical Agent