Version 1.1 of acs/ac-00314.txt


!standard 5.2.1(07)          19-01-04 AC95-00314/00
!class Amendment 19-01-04
!status received no action 19-01-04
!status received 18-10-19
!subject '@' strikes again
!summary
!appendix

From: Richard Wai
Sent: Wednesday, December 5, 2018  11:22 PM

As a very green observer of the ARG, I am very self-conscious in making this
dramatic criticism. However, I need to be frank that this AI deeply troubles me.
I'm too deeply in love with Ada to hold my peace. I hope I have not been
overzealous - I have the greatest respect for the ARG's diligent work over these
many years. I've been reading over ARG email chains that were written when I was
a mere junior high-schooler, only barely capable of rational foresight. My
passion can get me into trouble now and again!

***

Reviewing all of the comments to the AI, I can't help but feel that everyone was
distracted by the problems with the solution, rather than problems with the
problem. If one proposal attracts so much attention and so much disagreement, I
feel it should be reserved for the most pressing problems. I cannot see how this
AI solves anything worthy of the controversy.

I honestly feel that implementing this AI would be a mistake, and furthermore
would set a dangerous precedent for Ada in the times ahead. I feel that the
motivation behind this AI prioritises programmer convenience at too great a
cost, and would probably not exist in the first place were it not for the
perceived inconvenience.

I understand that one target of this AI is the potential for "unintended
multiple evaluations of expressions in the name, or mismatching array indices."
In my experience, this situation is uncommon enough in practice that it does
not justify such a controversial and quite un-Ada-like change. I'd argue that
renaming declarations and/or subtypes should always be used whenever referring
to specific indexes of an array for which multiple references are necessary,
and for which a single variable (e.g. 'I') is not already denoting it.

Taking a bit more time to write things clearly should be a virtue, not a fault.
I really wish the software community would be a little more self-respecting, and
a little less self-serving. I know that is a blunt statement to make, but I make
it with the best of intentions, and I think it is a truly serious problem that
has actual deleterious effects on society. Software is more important than it
has ever been. Discipline and care are the hallmarks of professionalism and
quality. They should be encouraged. Ada has always been the one language that
stood up to lazy short-cuts and "write-only" designs. I'd be devastated to see
Ada infected by that worst habit of the software community.

I fear what may come next. Could "func" be proposed as a short-hand for
function? 'Pos, 'Val, 'Pred and 'Succ are hard enough to swallow; do we really
_need_ to have '@'? Is it really worth it?

I spend most of my time either writing Ada or training others to write it, and
in my experience this "problem" seems insignificant in the wild. I don't feel
this is a problem that deserves solving. I just do not see this AI doing
anything justifiably useful for the steep cost of irritating some of Ada's core
principles, setting an irrevocable precedent, and even corrupting Ada's beauty
and self-confidence.

If this kind of convenience-driven, un-Ada-like change becomes the norm, I can
see a future where Ada simply becomes an old, desperate Rust wannabe. It is one
thing to add more language-defined packages (e.g. containers), it is another to
add an off-putting symbol that is openly a short-hand for a moderately frequent
(at best) nuisance.

So, instead of throwing in another syntax proposal, I want to propose that the
actual need for, and long-term effects of this AI be reconsidered more
thoroughly as the ARG approaches the final vote.

***************************************************************

From: Jeff Cousins
Sent: Thursday, December 6, 2018  11:16 AM

Please don't feel shy about speaking out, Richard, but we spent half the October
2016 meeting in Pittsburgh re-visiting this (when we should have been getting on
with parallelism) and decided that it was still the best option.  I thought that
GNAT had already prototyped implementing it even by then?  I feel that the very
shortness of this proposal, a single character, serves to emphasise that it is
an abbreviation.

***************************************************************

From: Richard Wai
Sent: Thursday, December 6, 2018  12:01 PM

But that was not really my point - I saw all the discussion about how to make
the abbreviation, but almost none about whether it made sense to even have that
abbreviation. I really think suggesting that we should add that kind of
shorthand to Ada is dangerous, period. I'm not really commenting on the
selection of '@' vs '(..)', I'm saying that Ada does not need this abbreviation,
nor does it need anything which is there primarily to save keystrokes for the
programmer - Ada was never about that, and I don't think it ever should be.

I didn't really see this being discussed at all. I didn't really see any strong
rationale for introducing this kind of abbreviation except for "Incrementing,
decrementing, scaling, etc., are all somewhat painful in Ada", and that it might
improve readability and reduce errors in some largely uncommon cases. Yet, it is
clearly a controversial topic, and I think most agreed it doesn't look like Ada
at all.

***************************************************************

From: Tucker Taft
Sent: Thursday, December 6, 2018  12:54 PM

This was definitely a tough call.  At this point I think it should probably be
considered water under the dam, as I believe this has been approved both at the
ARG and at the WG9 level.  I can imagine a number of projects may choose to
disallow it, but for new users of Ada, the verbosity of simple increments has
always been a sticking point.  But I understand your point of view very well.

***************************************************************

From: Tucker Taft
Sent: Thursday, December 6, 2018  2:28 PM

> ... At this point I think it should
> probably be considered water under the dam, ...

Oops.  Let's make that "over the dam."

***************************************************************

From: Randy Brukardt
Sent: Thursday, December 6, 2018  6:22 PM

>I honestly feel that implementing this AI would be a mistake, and furthermore
>would set a dangerous precedent for Ada in the times ahead.

Paying any attention to this complaint would also set a dangerous precedent. One
rule around here that we've strictly enforced is that "I don't like xxx" is
NEVER grounds for reconsideration of an AI. You at least have to find some
technical reason that wasn't previously considered to try to get something
reconsidered. Otherwise, we would be locked into a perpetual cycle of arguing
the same points over and over and over. We do that enough as it is. (It took
*forever* to agree on syntax for reductions. And I don't like it much, but I'm
not going to try to reopen that discussion.)

As Tucker noted, this AI is ARG approved, WG 9 approved, and approved to be
included in WG 9 as part of their scope approval. There is only the yes or no
vote on the Standard as a whole remaining. We're only going back to finished
stuff to fix up technical issues and conflicts, and hopefully there won't be
many of those.

As such, this comment is so far out of bounds that it will take days of skiing
to just get in sight of the boundary. :-)

As far as your actual argument goes (and I shouldn't spend any time on this, but
since you've already robbed me of one good night's sleep - I spent several hours
awake last night thinking of how to respond):

> I spend most of my time either writing Ada or training others to write it,
> and in my experience this "problem" seems insignificant in the wild. I don't
> feel this is a problem that deserves solving.

It's clearly not earth-shaking, but it has been an issue in Ada since the very
beginning.

This probably was the most-requested feature from the Ada community. It has come
up repeatedly.

Moreover, this was the only thing that Ada was missing that was available in
Modula (*not* Modula 2!), which was the language of choice at the University of
Wisconsin in the early 1980s. (There wasn't a Unix Pascal compiler, so they had
repurposed a grad student's Modula compiler as the primary compiler on their
PDP-11s and VAXes.) It had Inc and Dec procedures, which originally seemed silly but
eventually turned out to be critical to writing understandable code. Indeed, I
originally thought that the 'Succ and 'Pred attributes were supposed to meet this
need, but was disappointed to find out that they don't (and are mostly useless
outside of the occasional generic unit). One makes do, but the need surely
hasn't disappeared.

And your contention that we didn't spend enough time on the problem is simply
wrong. We discussed exactly the question of whether the problem was worth
solving, and the answer was always yes -- assuming we could agree on a solution that
did not harm readability. I probably didn't record much about those discussions
because for me at least the answer is obvious. In addition, a lot of that
discussion occurred during the Ada 2005 work (when we never decided on a
solution and thus ultimately did nothing -- but it was clear to all that it was
the lack of a solution, and not the lack of a problem, that was the sticking
point).

> I'd argue that renaming declarations and/or subtypes should always be used
> whenever referring to specific indexes of an array for which multiple
> references are necessary, and for which a single variable (e.g. 'I') is not
> already denoting it.

A renaming (and anything else that introduces an extra name to the program)
adds to the cognitive load of reading an expression, and generally results in
having to
jump back and forth from the expression to the renames in order to understand
what is going on. It can be a useful tool, but it by itself is not enough.
Moreover, every extra declaration requires choosing a name, and that is hard --
and while you're doing that, your counterpart in other languages is moving
on to debugging. (Of course, they'll be doing that forever, but I digress. :-)
The beauty of @ is that it doesn't introduce another name to what you're
reading; it stands for something that is easily found and that most likely
you'd just finished reading anyway.

We explicitly talked about using renames as part of the solution here, and
explicitly rejected it for the reasons given above. Making the rename separately
declared surely doesn't help in any way there.
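
To make the contrast concrete, here is a rough sketch (Sensor_Data,
Current_Index, and Scale_Factor are invented names for the example):

    --  With a renaming: a new name to choose, declare, and look up
    declare
       Sample : Float renames Sensor_Data (Current_Index).Reading;
    begin
       Sample := Sample * Scale_Factor;
    end;

    --  With the target name: no new name, and the full name appears once
    Sensor_Data (Current_Index).Reading := @ * Scale_Factor;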

> If this kind of convenience-driven, un-Ada-like change becomes the norm
> ...

I'm sorry to disappoint you, but a whole lot of what we've been doing since the
beginning of Ada 2012 qualifies as "convenience-driven":

* user-defined references;
* user-defined indexing;
* user-defined iterators;
* conditional expressions;
* quantified expressions;
* expression functions;
* Obj'Image;

And Ada 2020 goes further:

* user-defined literals;
* container aggregates;
* iterator filters (I'm still trying to kill this, but I expect to fail);
* delta aggregates;
* declare expressions;
* as well as the target name symbol.

Every one of the effects of these things could be achieved in Ada 95; their
only reason to exist is to make it more convenient to use containers and contracts
(and to improve the readability of those things). You're at least ten years too
late to prevent that drift in the direction of convenience. One could even argue
that tagged types are another example of a convenience-driven feature. (You're
much better off in Ada using discriminants/variants/case statements rather than
dispatching for types that have multiple implementations, because of the
compile-time checking available with the former. Thus OOP itself is mostly about
convenience of not having to recompile something.)

> I really think suggesting that we should add that kind of shorthand to
> Ada is dangerous, period.

As noted above, I think you're in a rather small minority that would hold such
an extreme view. Almost everyone wants a solution to the increment problem, and
@ is worlds better than
     Expr :=+ 1;
which looks like C, has resolution problems, and doesn't work with regular
function calls or attributes.

I would have been happy with
    Obj'Succ
but almost no one else was.
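
To make that last point concrete: @ is simply a name denoting the target, so it
composes with calls and attribute references where a ":=+"-style operator could
not (Compute and Step are invented names here):

    Total  := Integer'Max (@, 0);   --  via an attribute function
    Result := Compute (@, Step);    --  via an ordinary function call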

> ... it need anything which is there primarily to save keystrokes for
> the programmer ...

That is NOT the purpose of @; at most it is a side effect. It is all about
making expressions easier to read by eliminating duplication and/or excess
identifiers. It also potentially makes assignments run faster (by avoiding
duplicate evaluations of functions with side-effects), but that's not a goal,
either.
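
For instance (Next_Slot is an invented function with side effects here):

    --  With @, the name on the left-hand side is evaluated exactly once:
    Table (Next_Slot).Count := @ + 1;

    --  Written out in full, Next_Slot runs twice, and the two calls might
    --  not even denote the same element:
    Table (Next_Slot).Count := Table (Next_Slot).Count + 1;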

Improving readability of assignments can be controversial simply because it is
so subjective. I know that I've been programming in Ada for 38 years, and I have
yet to figure out how to make this sort of expression readable. [And I use
them a lot; programs that gather data, like profilers and simulators, often
update result objects with simple values.] @ at least gives some hope there.
If you grant the notion that most of us think that @ improves the readability of
expressions, and you remember that the goals of Ada found at the start of the RM
highlight readability as one of the most important, then I think you can see why
many of us think that @ is very much in keeping with the Ada philosophy. You
don't have to agree, of course, but it doesn't help to harp on a non-purpose of
the feature as if it is the only benefit.

> Yet, it is clearly a controversial topic, and I think most agreed it
> doesn't look like Ada at all.

Ada 2020 is not going to "look like Ada". The use of square brackets for
array/container aggregates will be much more jarring than the occasional use of
@ (which in the worst case, will send one to their favorite Ada
textbook/website/RM to find out what it means). I fear that ship has sailed.
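
For instance, roughly as proposed (Int_Vector and Int_Array are invented names
here):

    V : constant Int_Vector := [1, 2, 3, 5, 8];   --  container aggregate
    Z : constant Int_Array  := [others => 0];     --  array aggregate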

Moreover, early experience (esp. with GNAT, which implemented it long ago)
suggests that Ada programmers find it useful as soon as they understand it. The
best example was Tucker, who in Pittsburgh argued against using @ and then put
up a parallelism example which used @ extensively. And the example seemed far
more understandable than it would have been without using @.

Ergo, I think @ is new and different, and people are often concerned about the
new and different. And then they often wonder how they lived without it (how did
we live without Google and smartphones, anyway??).

I do think that there is a chance for misuse of @, if programmers bury it
deeply in source expressions, use it as a prefix, or engage in other such
nonsense. As Tucker
noted, it will need style guides and style checkers (probably J-P's style tool
has a dozen rules for the use of @; if not, it soon will) to use it
intelligently. There are many Ada features in this category where
project-specific rules ought to be enforced (use clauses and anonymous access
types come to mind), so this is hardly unique to @.
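
To sketch the kind of use such a rule might flag (F, G, and Increment are
invented names here):

    Total := F (G (@) + @ * 2);   --  @ buried in nested calls: hard to follow
    Total := @ + Increment;       --  the intended, immediately visible use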

But here I lean on a paraphrase of Jean Ichbiah. "The possibility that a
programming language feature can be abused does not mean that the feature should
be excluded from a programming language." He was talking about the very
controversial (at the time) provision for operator symbols, as some people
worried that programmers would soon be implementing "*" that subtracted and
other such things. He certainly was proven right about that; they are rarely
abused and these days are considered a rather core feature of Ada (and one that
has been widely copied in other languages, starting with C++). I think the same
applies here, and I expect the results to be the same.

P.S. Darn, still spent a lot longer writing this than I hoped. Grumble.

BTW: Sorry about cherry-picking single sentences to respond to; let me assure
you I read your messages in full several times and understand all of your
points.

***************************************************************

From: Richard Wai
Sent: Thursday, December 6, 2018  7:07 PM

I'm sorry for robbing you of sleep, but I nevertheless appreciate the time you
took to respond with such depth, especially at this time in the cycle.

There was a lot of really valuable insight in there that I didn't have.

The only real nitpick I want to put on record is that I'm not against programmer
convenience when it comes to things that are overly contrived otherwise, and
which are something that needs to be done frequently. I just think having things
like Rust's 'fn' instead of 'function' is inexcusably lazy. In some sense, I
felt like this was getting a bit too close to that for comfort.

I can accept the argument that using renames would make it overly contrived
where '@' would be admittedly more legible. If there is one thing that I am
thankful for, it is the emphasis on readability.

I may never truly be convinced this is the right thing, but c'est la vie!

***************************************************************

From: Tucker Taft
Sent: Thursday, December 6, 2018  8:18 PM

Nothing like a little trial by fire to make you feel welcome! ;-)

I trust you got the invitation for the ARG video conference on Monday at 11AM.
If not, let me know ...

***************************************************************

From: Randy Brukardt
Sent: Thursday, December 6, 2018  8:07 PM

> I'm sorry for robbing you of sleep, but I nevertheless appreciate the
> time you took to respond with such depth, especially at this time in
> the cycle.

That happened because I was monitoring my e-mail to see who, if anyone, had sent
in homework. So I saw your message just as I was going to bed. Always a bad
thing (second time this week, in fact), but hard to avoid short of not checking
mail after leaving here. (It works better when I don't see the message until I
get up in the morning, then it just makes my shower longer than necessary. :-)

***************************************************************

From: Richard Wai
Sent: Thursday, December 6, 2018  8:30 PM

I've always been a trial by fire kind of guy :P

I got the invitation, and am very much looking forward to it!

***************************************************************

From: John Barnes
Sent: Sunday, December 9, 2018  4:41 AM

I have every sympathy with the view that @ is not perfect.
Maybe it is like Brexit: we have to think again.

What's against @

1  It looks like something to do with email

2  We might want to use @ for a better purpose at some time in the future

3  The trouble with any choice like this, between writing it out in full and
   using an abbreviation, comes in program maintenance

So here is my suggestion which came to me in the night - use

{LHS}

So we have

A(I).catastrophe := {LHS} + 1;

In favour

1  it is pitifully obvious what it means; the terms "LH value" etc. were
   widely used by Strachey and others in the 50s and 60s

2  it is sufficiently long that it won't get used for trivial things such as
   X := X + 1;  which some nuts might now write as  X := @ + 1;

3  It doesn't use @, which might be useful later.

4  It's not so long that it would be tedious.

***************************************************************

From: Jeff Cousins
Sent: Sunday, December 9, 2018  6:37 AM

John - I think it was you and I who originated the idea of having a shorthand
for the LHS, rather than using += or ++ or whatever like C, even if not the
specific symbol @, so please don't complain too much!

I think Richard is objecting to the principle of having a shorthand rather than
the specific symbol.  I quite like the way that a shorthand is a way of
explicitly saying “yes I really do mean the same as the LHS”.

I would also hope that it would be a big nudge to the compiler to produce more
efficient code.  In the days of single core processors we had to change much of
our code from the form:

My_Package.My_Array(I).Field :=
   My_Package.My_Array(I).Field + 1;

to

declare
   My_Field : Field_Type renames My_Package.My_Array(I).Field;
begin
   My_Field := My_Field + 1;
end;

in order to meet our performance requirements, as otherwise the compilers (at
least in those days) would evaluate the address twice, once each for the LHS
and the RHS, even with optimisation.
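
With the target name symbol, one would presumably write the update as a single
statement and hope the compiler evaluates the address only once:

   My_Package.My_Array(I).Field := @ + 1;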

PS. (chit chat) Became a grandad for the second time this morning, another
boy. A bit like London buses, wait for ages then two come at once. The
Americans may be amused (or not) that the mother, Susan, is a middling distant
cousin of Hillary Clinton, both being descended from Jonathan Rodham, coalminer
of County Durham.

***************************************************************

From: Tucker Taft
Sent: Sunday, December 9, 2018  9:39 AM

[No comment on the "@" thing.  I sent us back to the drawing board once on this
-- can't bear to do it again.]

>PS. (chit chat) Became a grandad for the second time this morning, another boy.  A bit like London buses, wait for ages then two come at once.

Mazel Tov!

>The Americans may be amused (or not) that the mother, Susan, is a middling
>distant cousin of Hillary Clinton, both being descended from Jonathan Rodham,
>coalminer of County Durham.

Interesting

***************************************************************

From: Bob Duff
Sent: Sunday, December 9, 2018  11:07 AM

> As a very green observer of the ARG, I am very self-conscious in
> making this dramatic criticism.

There's no need to be shy about criticizing ARG's technical decisions.

But I think you misunderstand the rationale for this feature, at least in part.
It's not a shorthand to save keystrokes.  It's about readability, in particular,
the DRY principle.  That's a good principle, and Ada forces you to violate it in
cases where C-family languages do not:

    Some_Array(Some_Index).Some_Component :=
      Some_Array(Some_Index).Some_Component + 1;

The renaming alternative is even worse:

    declare
        This_Component : ... renames
            Some_Array(Some_Index).Some_Component;
    begin
        This_Component := This_Component + 1;
    end;

5 lines of code to add 1 to a variable does not aid readability.  Nor does
introducing a new variable name to remember.  And we STILL have to violate DRY
(albeit just for the one identifier, but are you sure it's not "This_Component
:= That_Component + 1;"?).
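
With @ the whole thing collapses back to a single line, and the name is written
exactly once:

    Some_Array(Some_Index).Some_Component := @ + 1;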

***************************************************************

From: Bob Duff
Sent: Sunday, December 9, 2018  11:10 AM

> So here is my suggestion which came to me in the night - use
>
> {LHS}

I agree that "@" is ugly, and would prefer something with the letters "LHS" in
it.  But the important thing is to avoid violating the DRY principle.  Important
concrete principles (like DRY) should trump aesthetic concerns, especially when
everybody disagrees about the aesthetics.

Anyway, it's too late.  This feature has been implemented in GNAT for a couple
of years, and people are using it. It makes no sense to make an incompatible
change to satisfy our aesthetic opinions.  GNAT would likely end up supporting
BOTH (yuck!).

***************************************************************

From: Richard Wai
Sent: Sunday, December 9, 2018  11:21 AM

> But I think you misunderstand the rationale for this feature, at least in part.
> It's not a shorthand to save keystrokes.  It's about readability, in particular,
> the DRY principle.

Indeed, Randy made some good points about this also, and I concede that this
is a strong argument in favour of the change.

***************************************************************

From: Richard Wai
Sent: Sunday, December 9, 2018  11:45 AM

> I have every sympathy with the view that @ is not perfect.
> Maybe it is like Brexit, we have to think again.

This is what would get me in the most trouble! :P

> What's against @
>
> 1  It looks like something to do with email

I was also in the camp that, on the face of it, a reserved word would be the
obvious Ada-like approach. Yet putting a reserved word in an expression
alongside other names is obviously a fatal flaw, one even worse than '@'.

All this trouble is what led me to suggest that the entire proposal is not
worth the controversy.

But lo, the ship has sailed under the bridge and over the dam, as I'm told.

***************************************************************

From: Tucker Taft
Sent: Sunday, December 9, 2018  11:47 AM

For those unfamiliar with the TLA (three-letter acronym) DRY, it means:

   "Don't Repeat Yourself"

It is related to the idea of not requiring a programmer to "jump through hoops"
just to say something simple.

On the other hand, if the repeat is actually more of a "restatement" and the
"hoop" helps ensure the intent, then it can be useful.  Some form of redundancy
is really the only protection against unintentional errors ...

The key is to be able to distinguish "useful" redundancy (e.g. declaring your
variables before using them) vs. "useless" redundancy (e.g. specifying the type
multiple times when trying to rename a class-wide object to be of a more
specific type).  And of course it is a spectrum.
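
To sketch that last example (Shape, Circle, and Some_Shape are invented names
here; assume Some_Shape is of type Shape'Class):

    --  "Useless" redundancy: the specific type Circle must be written
    --  twice just to get a view of the class-wide object at that type:
    C : Circle renames Circle (Some_Shape);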

***************************************************************

From: Richard Wai
Sent: Sunday, December 9, 2018  11:56 AM

> I think Richard is objecting to the principle of having a shorthand rather
> than the specific symbol.

Yes! This is really my position, though it seems not quite technical enough...

> PS. (chit chat) Became a grandad for the second time this morning, another
> boy.

Congratulations! I'm still only hoping for my first legacy, so this is
aspirational :P

> A bit like London buses, wait for ages then two come at once.

As a resident of the old colonial town of York, Upper Canada, now Toronto, it is
interesting to trace this bit of heritage to its roots!

***************************************************************

From: Richard Wai
Sent: Sunday, December 9, 2018  12:14 PM

> For those unfamiliar with the TLA (three-letter-acronym) of DRY, it means:
>
>    "Don't Repeat Yourself"

I'm a little more familiar with COMEON: Contrived bacrOnym crammed into a
Marginally rElated Original Notion.

This is a great point, since it really addresses my fears of function turning
into fn. My primary fear was that '@' could be a gateway into that dark world.
But framing it as a potential reduction of "useless redundancy" adds
credibility to it. I also note the original AI suggesting that it could reduce
errors that the
compiler would be pretty hard-pressed to catch. Maybe I really wanted to say
A(1) := A(2) + 1!

***************************************************************

From: John Barnes
Sent: Sunday, December 9, 2018  4:01 PM

>PS. (chit chat) Became a grandad for the second time this morning, another boy.

Congrats on grandad 2.

***************************************************************

From: John Barnes
Sent: Sunday, December 9, 2018  4:27 PM

>Anyway, it's too late.  This feature has been implemented in GNAT for a
>couple of years, and people are using it.

I am sure it is too late.

What concerns me more is whether Brexit is too late.

***************************************************************
